# Topological multiple recurrence of weakly mixing minimal systems for
generalized polynomials
Ruifeng Zhang, School of Mathematics, Hefei University of Technology, Hefei,
Anhui, 230009, P.R. China <EMAIL_ADDRESS> and Jianjie Zhao, Wu Wen-Tsun
Key Laboratory of Mathematics, USTC, Chinese Academy of Sciences and School of
Mathematics, University of Science and Technology of China, Hefei, Anhui,
230026, P.R. China <EMAIL_ADDRESS>
###### Abstract.
Let $(X,T)$ be a weakly mixing minimal system, $p_{1},\cdots,p_{d}$ be
integer-valued generalized polynomials and $(p_{1},p_{2},\cdots,p_{d})$ be
non-degenerate. Then there exists a residual subset $X_{0}$ of $X$ such that
for all $x\in X_{0}$
$\\{(T^{p_{1}(n)}x,\cdots,T^{p_{d}(n)}x):n\in\mathbb{Z}\\}$
is dense in $X^{d}$.
###### Key words and phrases:
Generalized polynomials, weakly mixing minimal systems
###### 2010 Mathematics Subject Classification:
37B20, 37B05
## 1\. Introduction
By a topological dynamical system $(X,T)$, we mean a compact metric space $X$
together with a homeomorphism $T$ from $X$ to itself. By a measure preserving
system we mean a quadruple $(X,\mathcal{B},\mu,T)$, where
$(X,\mathcal{B},\mu)$ is a Lebesgue space and $T$ and $T^{-1}$ are measure
preserving transformations. In this paper, we study the topological multiple
recurrence of weakly mixing minimal systems.
For a measure preserving system, Furstenberg [6] proved the multiple
recurrence theorem, and gave a new proof of Szemerédi’s theorem.
Later, Glasner [7] considered the counterpart of [6] in topological dynamics
and proved that: for a weakly mixing minimal system $(X,T)$ and a positive
integer $d$, there is a dense $G_{\delta}$ subset $X_{0}$ of $X$ such that for
each $x\in X_{0}$, $\\{(T^{n}x,\cdots,T^{dn}x):n\in\mathbb{Z}\\}$ is dense in
$X^{d}$. Note that a different proof of this result can also be found in [11,
12].
For a weakly mixing measure preserving system, Bergelson [2] proved the
following result: let $(X,\mathcal{B},\mu,T)$ be a weakly mixing system, let
$k\in\mathbb{N}$ and let $p_{i}(n)$ be integer-valued polynomials such that no
$p_{i}$ and no $p_{i}-p_{j}$ is constant, $1\leq i\neq j\leq k$. Then for any
$f_{1},f_{2},\dots,f_{k}\in L^{\infty}(X)$,
$\lim_{N-M\rightarrow\infty}||\frac{1}{N-M}\sum_{n=M}^{N-1}T^{p_{1}(n)}f_{1}T^{p_{2}(n)}f_{2}\dots
T^{p_{k}(n)}f_{k}-\prod_{i=1}^{k}\int f_{i}\,d\mu||_{L^{2}}=0.$
Note that this is a special case of a polynomial extension of
Szemerédi’s theorem obtained in [3].
On the topological side, Huang, Shao and Ye [8] considered the counterpart
of [3], and they proved the following result: let $(X,T)$ be a weakly
mixing minimal system and $p_{1},\cdots,p_{d}$ be distinct polynomials with
$p_{i}(0)=0,i=1,\cdots,d$, then there is a dense $G_{\delta}$ subset $X_{0}$
of $X$ such that for each $x\in X_{0}$,
$\\{(T^{p_{1}(n)}x,\cdots,T^{p_{d}(n)}x):n\in\mathbb{Z}\\}$
is dense in $X^{d}$.
The multiple recurrence of a weakly mixing measure preserving system for
generalized polynomials was studied by Bergelson and McCutcheon [5] (for more
details concerning generalized polynomials, see [4]). In this paper, we
consider the problem on the topological side. Since generalized polynomials are
much more complicated than ordinary polynomials (for instance, $\left\lceil 2\pi
n-\left\lceil 2\pi n\right\rceil\right\rceil$ can only take the values $0$ and
$1$), we clearly must preclude such “bad” generalized polynomials. To this end
we introduce the notion of a non-degenerate tuple $(p_{1},p_{2},\cdots,p_{d})$
(see Definition 2.13). The main result of this paper is the following theorem.
###### Theorem 1.1.
Let $(X,T)$ be a weakly mixing minimal system, $p_{1},\cdots,p_{d}$ be
integer-valued generalized polynomials and $(p_{1},p_{2},\cdots,p_{d})$ be
non-degenerate. Then there is a dense $G_{\delta}$ subset $X_{0}$ of $X$ such
that for all $x\in X_{0}$,
$\\{(T^{p_{1}(n)}x,\cdots,T^{p_{d}(n)}x):n\in\mathbb{Z}\\}$
is dense in $X^{d}$.
Moreover, for any non-empty open subsets $U,V_{1},\cdots,V_{d}$ of $X$, for
any $\varepsilon>0$, for any $s,t\in\mathbb{N}$ and
$g_{1},\cdots,g_{t}\in\widehat{SGP_{s}}$, let
$C=C(\varepsilon,g_{1},\cdots,g_{t}):=\bigcap_{k=1}^{t}\\{n\in\mathbb{Z}:\\{g_{k}(n)\\}\in(-\varepsilon,\varepsilon)\\},$
$N=\\{n\in\mathbb{Z}:U\cap T^{-p_{1}(n)}V_{1}\cap\cdots\cap
T^{-p_{d}(n)}V_{d}\neq\emptyset\\}.$
Then $N\cap C$ is syndetic, where $\widehat{SGP_{s}}$ and $\\{g_{k}(n)\\}$ are
defined in Section 2.
The key ingredient in the proof of the main result is to view the integer-
valued generalized polynomials, in some sense, as the ordinary polynomials,
and thus we can use the method in [8]. Roughly speaking, the difficulty is in
calculating $p(n+m)-p(m)-p(n)$. For instance, generally $\left\lceil
a(n+m)^{2}\right\rceil$ is not equal to $\left\lceil
an^{2}\right\rceil+\left\lceil 2amn\right\rceil+\left\lceil
am^{2}\right\rceil$, while $a(n+m)^{2}=an^{2}+2anm+am^{2}$. To overcome this,
we need to restrict $n$ to some set $C$ where the fractional parts
$\\{an^{2}\\}$ and $\\{2amn\\}$ are small enough that for any $n\in C$,
$\left\lceil a(n+m)^{2}\right\rceil=\left\lceil an^{2}\right\rceil+\left\lceil
2amn\right\rceil+\left\lceil am^{2}\right\rceil$. In short, we will
restrict the integer-valued generalized polynomials to a Nil Bohr$_{0}$-set
rather than $\mathbb{Z}$.
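As a purely numerical illustration of this restriction (not part of the argument), one can search for such $n$ directly; here $\left\lceil\cdot\right\rceil$ is the nearest-integer function of Section 2.3, and the constants $a=\sqrt{2}$, $m=3$ and the tolerance $0.05$ are arbitrary choices:

```python
import math

def nearest_int(a):
    """Nearest integer to a, ties toward the smaller integer (the paper's ceiling)."""
    lo = math.floor(a)
    return lo if abs(a - lo) <= abs(a - lo - 1) else lo + 1

def frac(a):
    """Signed fractional part {a} = a - nearest_int(a), lying in (-1/2, 1/2]."""
    return a - nearest_int(a)

a, m = math.sqrt(2), 3  # arbitrary illustrative choices

# restrict n to a set C on which {a n^2} and {2 a m n} are both small
C = [n for n in range(1, 20000)
     if abs(frac(a * n * n)) < 0.05 and abs(frac(2 * a * m * n)) < 0.05]
assert C  # such n do occur

# on C the nearest-integer operation splits across the binomial expansion
for n in C:
    lhs = nearest_int(a * (n + m) ** 2)
    rhs = (nearest_int(a * n * n) + nearest_int(2 * a * m * n)
           + nearest_int(a * m * m))
    assert lhs == rhs
```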
The paper is organized as follows. In Section 2, we introduce some notions and
some properties that will be needed in the proof. In Section 3, we prove
Theorem 1.1 for integer-valued generalized polynomials of degree $1$. In the
final section, we recall the PET-induction and show the proof of Theorem 1.1.
Acknowledgments. The authors would like to thank Professor X. Ye for helpful
discussions. The first author was supported by NNSF of
China (11871188, 11671094); the second author was supported by NNSF of China
(11431012).
## 2\. Preliminaries
### 2.1. Some important subsets of integers and Furstenberg families
In this paper, the set of all integers and positive integers are denoted by
$\mathbb{Z}$ and $\mathbb{N}$ respectively.
A subset $S$ of $\mathbb{Z}$ is _syndetic_ if it has a bounded gap, i.e. there
is $L\in\mathbb{N}$ such that $\\{n,n+1,\cdots,n+L\\}\cap S\neq\emptyset$ for
every $n\in\mathbb{Z}$. $S$ is _thick_ if it contains arbitrarily long runs of
integers, i.e. for any $L\in\mathbb{N}$, there is $a_{L}\in\mathbb{Z}$ such
that $\\{a_{L},a_{L}+1,\cdots,a_{L}+L\\}\subset S$. $S$ is _thickly syndetic_
if for every $L\in\mathbb{N}$, there exists a syndetic set
$B_{L}\subset\mathbb{Z}$ such that $B_{L}+\\{0,1,\cdots,L\\}\subset S$, where
$B_{L}+\\{0,1,\cdots,L\\}=\cup_{b\in B_{L}}\\{b,b+1,\cdots,b+L\\}$.
The families of all syndetic sets, thick sets and thickly syndetic sets are
denoted by $\mathcal{F}_{s}$, $\mathcal{F}_{t}$ and $\mathcal{F}_{ts}$
respectively.
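On a finite window these notions can be checked directly; the sets below are standard toy examples chosen for illustration, not objects from the paper:

```python
# S = 3Z is syndetic: its gaps are bounded (by 3).
# T = union of the intervals [k^2, k^2 + k] is thick: it contains
# arbitrarily long runs of consecutive integers.
W = range(-1000, 1001)  # finite window of Z used for the check

S = [n for n in W if n % 3 == 0]
gaps = [b - a for a, b in zip(S, S[1:])]
assert max(gaps) <= 3  # bounded gaps on the window

T = set()
for k in range(1, 32):
    T.update(range(k * k, k * k + k + 1))
# the block [30^2, 30^2 + 30] is a run of 31 consecutive integers inside T
assert all(n in T for n in range(900, 931))
```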
Let $\mathcal{P}$ denote the collection of all subsets of $\mathbb{Z}$. A
subset $\mathcal{F}$ of $\mathcal{P}$ is called a _Furstenberg family_ (or
just a _family_), if it is hereditary upward, i.e.,
$F_{1}\subset F_{2}\ \ \text{and}\ \ F_{1}\in\mathcal{F}\ \ \text{imply}\ \
F_{2}\in\mathcal{F}.$
A family $\mathcal{F}$ is called _proper_ if it is a non-empty proper subset
of $\mathcal{P}$, i.e. it is neither empty nor all of $\mathcal{P}$. Any non-
empty collection $\mathcal{A}$ of subsets of $\mathbb{Z}$ naturally generates
a family
$\mathcal{F}(\mathcal{A})=\\{F\subset\mathbb{Z}:A\subset F\ \ \text{for some}\ \
A\in\mathcal{A}\\}.$
A proper family $\mathcal{F}$ is called a _filter_ if
$F_{1},F_{2}\in\mathcal{F}$ implies $F_{1}\cap F_{2}\in\mathcal{F}$.
Note that the family of all thickly syndetic sets is a filter, i.e. the
intersection of finitely many thickly syndetic sets is still a thickly
syndetic set.
### 2.2. Topological dynamics
Let $(X,T)$ be a dynamical system. For $x\in X$, we denote the orbit of $x$ by
$orb(x,T)=\\{T^{n}x:n\in\mathbb{Z}\\}$. A point $x\in X$ is called a
_transitive point_ if the orbit of $x$ is dense in $X$, i.e.,
$\overline{orb(x,T)}=X$. A dynamical system $(X,T)$ is called _minimal_ if
every point $x\in X$ is a transitive point.
Let $U,V\subset X$ be two non-empty open sets; the _hitting time set_ of $U$
and $V$ is defined by
$N(U,V)=\\{n\in\mathbb{Z}:U\cap T^{-n}V\neq\emptyset\\}.$
We say that $(X,T)$ is _(topologically) transitive_ if for any non-empty open
sets $U,V\subset X$, the hitting time set $N(U,V)$ is non-empty; _weakly mixing_
if the product system $(X\times X,T\times T)$ is transitive.
We say that $(X,T)$ is _thickly syndetic transitive_ if for any non-empty open
sets $U,V\subset X$, the hitting time $N(U,V)$ is thickly syndetic. Let
$p_{i}:\mathbb{Z}\rightarrow\mathbb{Z},i=1,2,\cdots,k$, we say that $(X,T)$ is
$\\{p_{1},p_{2},\cdots,p_{k}\\}$-thickly-syndetic transitive if for any non-
empty open sets $U_{i},V_{i}\subset X,i=1,2,\cdots,k$,
$N(\\{p_{1},p_{2},\cdots,p_{k}\\},U_{1}\times U_{2}\times\cdots\times
U_{k},V_{1}\times V_{2}\times\cdots\times
V_{k}):=\bigcap_{i=1}^{k}N(p_{i},U_{i},V_{i})$
is thickly syndetic, where $N(p_{i},U_{i},V_{i}):=\\{n\in\mathbb{Z}:U_{i}\cap
T^{-p_{i}(n)}V_{i}\neq\emptyset\\}$, $i=1,2,\cdots,k$.
The following lemma is an analogue of Lemma 2.6 in [8].
###### Lemma 2.1.
Let $(X,T)$ be a dynamical system and
$p_{1},\cdots,p_{d}:\mathbb{Z}\rightarrow\mathbb{Z}$ such that $(X,T)$ is
$\\{p_{1}(n),\cdots,p_{d}(n)\\}$-thickly-syndetic transitive. Let $C$ be a
syndetic set. Then for any non-empty open sets $V_{1},\cdots,V_{d}$ of $X$ and
any sequence $\\{r(n)\\}_{n=0}^{\infty}$ of natural numbers, there is a
sequence of integers $\\{k_{n}\\}_{n=0}^{\infty}\subset C$ such that
$|k_{0}|>r(0),|k_{n}|>|k_{n-1}|+r(|k_{n-1}|)$ for all $n\geq 1$, and for each
$i\in\\{1,2,\cdots,d\\}$, there is a descending sequence
$\\{V_{i}^{(n)}\\}_{n=0}^{\infty}$ of non-empty open subsets of $V_{i}$ such
that for each $n\geq 0$ one has that
$T^{p_{i}(k_{j})}T^{-j}V_{i}^{(n)}\subset V_{i},\ \text{for all \ }0\leq j\leq
n.$
###### Proof.
Let $V_{1},\cdots,V_{d}$ be non-empty open subsets of $X$. Then
$\bigcap_{i=1}^{d}N(p_{i},V_{i},V_{i})$ is thickly syndetic. Since $C$ is
syndetic, $\bigcap_{i=1}^{d}N(p_{i},V_{i},V_{i})\cap C$ is also syndetic.
Choose $k_{0}\in\bigcap_{i=1}^{d}N(p_{i},V_{i},V_{i})\cap C$ such that
$|k_{0}|>r(0)$. It implies $T^{-p_{i}(k_{0})}V_{i}\cap V_{i}\neq\emptyset$ for
all $i=1,\cdots,d$. Put $V_{i}^{(0)}=T^{-p_{i}(k_{0})}V_{i}\cap V_{i}$ for all
$i=1,\cdots,d$ to complete the base step.
Now assume that for $n\geq 1$ we have found numbers
$k_{0},k_{1},\cdots,k_{n-1}\in C$ and for each $i=1,\cdots,d$, we have non-
empty open subsets $V_{i}\supseteq V_{i}^{(0)}\supseteq
V_{i}^{(1)}\cdots\supseteq V_{i}^{(n-1)}$ such that $|k_{0}|>r(0)$, and for
each $m=1,\cdots,n-1$ one has $|k_{m}|>|k_{m-1}|+r(|k_{m-1}|)$ and
$T^{p_{i}(k_{j})}T^{-j}V_{i}^{(m)}\subset V_{i},\ \text{for \ all \ }0\leq
j\leq m.$
For $i=1,\cdots,d$, let $U_{i}=T^{-n}(V_{i}^{(n-1)})$. Since $(X,T)$ is
$\\{p_{1}(n),\cdots,p_{d}(n)\\}$-thickly-syndetic transitive,
$\bigcap_{i=1}^{d}N(p_{i},U_{i},V_{i})=\bigcap_{i=1}^{d}\\{n\in\mathbb{Z}:U_{i}\cap
T^{-p_{i}(n)}V_{i}\neq\emptyset\\}$
is thickly syndetic. Hence $C\cap(\bigcap_{i=1}^{d}N(p_{i},U_{i},V_{i}))$ is
syndetic. Then there exists $k_{n}\in
C\cap(\bigcap_{i=1}^{d}N(p_{i},U_{i},V_{i}))$ such that
$|k_{n}|>|k_{n-1}|+r(|k_{n-1}|)$. It implies
$T^{-p_{i}(k_{n})}V_{i}\cap U_{i}\neq\emptyset\ \ \text{for all}\
i=1,\cdots,d.$
Then for $i=1,\cdots,d$,
$T^{p_{i}(k_{n})}U_{i}\cap V_{i}=T^{p_{i}(k_{n})}T^{-n}(V_{i}^{(n-1)})\cap
V_{i}\neq\emptyset.$
Let
$V_{i}^{(n)}=V_{i}^{(n-1)}\cap(T^{p_{i}(k_{n})}T^{-n})^{-1}V_{i}.$
Then $V_{i}^{(n)}\subset V_{i}^{(n-1)}$ is a non-empty open set and
$T^{p_{i}(k_{n})}T^{-n}V_{i}^{(n)}\subset V_{i}.$
Since $V_{i}^{(n)}\subset V_{i}^{(n-1)}$, we have
$T^{p_{i}(k_{j})}T^{-j}V_{i}^{(n)}\subset V_{i},\ \text{for all}\ 0\leq j\leq
n.$
This completes the induction and hence the proof. ∎
The following lemma is an analogue of Proposition 1 in [12].
###### Lemma 2.2.
Let $(X,T)$ be a dynamical system, $d\in\mathbb{N}$, and let
$p_{1},\cdots,p_{d}$ be functions from $\mathbb{Z}$ to $\mathbb{Z}$. Then the
following are equivalent:
1. (1)
If $U,V_{1},\cdots,V_{d}\subset X$ are non-empty open sets, then there exists
$n\in\mathbb{Z}$, such that
$U\cap T^{-p_{1}(n)}V_{1}\cap\cdots\cap T^{-p_{d}(n)}V_{d}\neq\emptyset.$
2. (2)
There exists a dense $G_{\delta}$ subset $Y\subset X$ such that for every
$x\in Y$,
$\\{(T^{p_{1}(n)}x,T^{p_{2}(n)}x,\cdots,T^{p_{d}(n)}x):n\in\mathbb{Z}\\}$
is dense in $X^{d}$.
###### Proof.
The proof is similar to the proof in [12]. For completeness, we include a
proof.
$1\Rightarrow 2$: Consider a countable base of open balls
$\\{B_{k}:k\in\mathbb{N}\\}$ of $X$. Put
$Y=\bigcap_{(k_{1},\cdots,k_{d})\in\mathbb{N}^{d}}\bigcup_{n\in\mathbb{Z}}\bigcap_{i=1}^{d}T^{-p_{i}(n)}B_{k_{i}}.$
The set $\cup_{n\in\mathbb{Z}}\cap_{i=1}^{d}T^{-p_{i}(n)}B_{k_{i}}$ is open,
and is dense by (1). Thus by the Baire category theorem, $Y$ is a dense
$G_{\delta}$ subset of $X$. By construction, for every $x\in Y$,
$\\{(T^{p_{1}(n)}x,T^{p_{2}(n)}x,\cdots,T^{p_{d}(n)}x):n\in\mathbb{Z}\\}$
is dense in $X^{d}$.
$2\Rightarrow 1$: Choose $x\in Y\cap U$ and $n\in\mathbb{Z}$ such that
$(T^{p_{1}(n)}x,T^{p_{2}(n)}x,\cdots,T^{p_{d}(n)}x)\in V_{1}\times\cdots\times
V_{d},$
then $x\in U\cap T^{-p_{1}(n)}V_{1}\cap\cdots\cap T^{-p_{d}(n)}V_{d}$. ∎
### 2.3. Generalized polynomials
For a real number $a$, let $\left\|{a}\right\|=\inf\\{|a-n|:n\in\mathbb{Z}\\}$
and
$\left\lceil{a}\right\rceil=\min\\{m\in\mathbb{Z}:|a-m|=\left\|{a}\right\|\\}$.
We denote by $[a]$ the greatest integer not exceeding $a$; then $\left\lceil
a\right\rceil=[a+\frac{1}{2}]$. We put $\\{a\\}=a-\left\lceil{a}\right\rceil$,
so that $\\{a\\}\in(-\frac{1}{2},\frac{1}{2}]$.
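These conventions can be sketched in code (an illustrative check only, with ties in $\left\lceil\cdot\right\rceil$ resolved toward the smaller integer, as the minimum in the definition requires):

```python
import math

def nearest_int(a):
    """The paper's ceiling: the smallest integer m with |a - m| = ||a||."""
    lo = math.floor(a)
    # ties between lo and lo + 1 go to the smaller integer, matching the minimum
    return lo if abs(a - lo) <= abs(a - lo - 1) else lo + 1

def frac(a):
    """{a} = a - nearest_int(a)."""
    return a - nearest_int(a)

for a in [0.3, 0.7, -0.2, -0.8, 2.5, 10 * math.sqrt(2)]:
    # nearest_int(a) realizes the distance ||a|| to the nearest integer
    assert abs(a - nearest_int(a)) == min(abs(a - math.floor(a)),
                                          abs(a - math.ceil(a)))
    # the signed fractional part lies in (-1/2, 1/2]
    assert -0.5 < frac(a) <= 0.5
```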
In [9], Huang, Shao and Ye introduced the notions of $GP_{d}$ and
$\mathcal{F}_{GP_{d}}$.
###### Definition 2.3.
Let $d\in\mathbb{N}$. The collection $GP_{d}$ of _generalized polynomials_ of
degree $\leq d$ is defined as follows. For $d=1$, $GP_{1}$ is the
smallest collection of functions from $\mathbb{Z}$ to $\mathbb{R}$ containing
$\\{h_{a}:a\in\mathbb{R}\\}$ with $h_{a}(n)=an$ for each $n\in\mathbb{Z}$,
which is closed under taking $\left\lceil{\cdot}\right\rceil$, multiplying by
a constant and finite sums.
Assume that $GP_{i}$ is defined for $i<d$. Then $GP_{d}$ is the smallest
collection of functions from $\mathbb{Z}$ to $\mathbb{R}$ containing $GP_{i}$
with $i<d$, functions of the forms
$a_{0}n^{p_{0}}\left\lceil{f_{1}(n)}\right\rceil\cdots\left\lceil{f_{k}(n)}\right\rceil$
(with $a_{0}\in\mathbb{R},p_{0}\geq 0,k\geq 0,f_{l}\in GP_{p_{l}}$ and
$\sum_{l=0}^{k}p_{l}=d$), which is closed under taking
$\left\lceil{\cdot}\right\rceil$, multiplying by a constant and finite sums.
Let $GP=\bigcup_{i=1}^{\infty}GP_{i}$. Note that if $p\in GP$, then $p(0)=0$.
###### Definition 2.4.
Let $\mathcal{F}_{GP_{d}}$ be the family generated by the sets of forms
$\bigcap_{i=1}^{k}\\{n\in\mathbb{Z}:p_{i}(n)\ (\text{mod}\ \mathbb{Z})\
\in(-\varepsilon_{i},\varepsilon_{i})\\},$
where $k\in\mathbb{N}$, $p_{i}\in GP_{d}$, and $\varepsilon_{i}>0,1\leq i\leq
k$. Note that $p_{i}(n)\ (\text{mod}\
\mathbb{Z})\in(-\varepsilon_{i},\varepsilon_{i})$ if and only if
$\\{p_{i}(n)\\}\in(-\varepsilon_{i},\varepsilon_{i})$.
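A minimal computation of one generating set of this form, with $p(n)=\sqrt{2}\,n\in GP_{1}$ and $\varepsilon=0.1$ chosen only for illustration; on the finite window below the set is non-empty and its gaps stay bounded:

```python
import math

def nearest_int(a):
    lo = math.floor(a)
    return lo if abs(a - lo) <= abs(a - lo - 1) else lo + 1

def frac(a):
    # signed fractional part {a} = a - nearest_int(a)
    return a - nearest_int(a)

eps = 0.1
p = lambda n: math.sqrt(2) * n  # a degree-1 generalized polynomial

# one generating set of the family, intersected with a finite window
C = [n for n in range(1, 5001) if -eps < frac(p(n)) < eps]

assert C  # non-empty, e.g. n = 5, since 5*sqrt(2) is close to 7
gaps = [b - a for a, b in zip(C, C[1:])]
assert max(gaps) < 100  # on this window the gaps remain bounded
```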
###### Remark 2.5.
$\mathcal{F}_{GP_{d}}$ is a filter.
A subset $A\subset\mathbb{Z}$ is a _Nil$_{d}$ Bohr$_{0}$-set_ if there exist a
$d$-step nilsystem $(X,T)$, $x_{0}\in X$ and an open set $U\subset X$ containing
$x_{0}$ such that $N(x_{0},U):=\\{n\in\mathbb{Z}:T^{n}x_{0}\in U\\}$ is
contained in $A$. Denote by $\mathcal{F}_{d,0}$ the family consisting of all
Nil$_{d}$ Bohr$_{0}$-sets. A subset $A\subset\mathbb{Z}$ is called a Nil
Bohr$_{0}$-set if $A\in\mathcal{F}_{d,0}$ for some $d\in\mathbb{N}$. In [9],
the authors proved
the following theorem.
###### Theorem 2.6 (Theorem B in [9]).
Let $d\in\mathbb{N}$. Then $\mathcal{F}_{d,0}=\mathcal{F}_{GP_{d}}$.
###### Remark 2.7.
Since a nilsystem is distal, every Nil$_{d}$ Bohr$_{0}$-set is syndetic. Together with
Remark 2.5 we know $\mathcal{F}_{GP_{d}}$ is a filter and any
$A\in\mathcal{F}_{GP_{d}}$ is a syndetic set.
Now we introduce the notion of integer-valued generalized polynomials.
###### Definition 2.8.
For $d\in\mathbb{N}$, the integer-valued generalized polynomials of degree
$\leq d$ are defined by
$\widetilde{GP_{d}}=\\{\left\lceil p(n)\right\rceil:p(n)\in GP_{d}\\},$
and the integer-valued generalized polynomials are then defined by
$\mathcal{G}=\bigcup_{i=1}^{\infty}\widetilde{GP_{i}}.$
For $p(n)\in\mathcal{G}$, the least $d\in\mathbb{N}$ such that
$p\in\widetilde{GP_{d}}$ is called the _degree_ of $p$, denoted by
$deg(p)$.
Since the integer-valued generalized polynomials are very complicated, we will
also specify a subclass of the integer-valued generalized polynomials, i.e.
the special integer-valued generalized polynomials (denoted by
$\widetilde{SGP}$), which will be used in the proof of our main results.
We need to recall the definition of $L(a_{1},a_{2},\dots,a_{l})$ in Definition
4.2 of [9]. For $a\in\mathbb{R}$, we define $L(a)=a$. For
$a_{1},a_{2}\in\mathbb{R}$, we define $L(a_{1},a_{2})=a_{1}\left\lceil
L(a_{2})\right\rceil$. Inductively, for
$a_{1},a_{2},\dots,a_{l}\in\mathbb{R}$ ($l\geq 2$) we define
$L(a_{1},a_{2},\dots,a_{l})=a_{1}\left\lceil
L(a_{2},\dots,a_{l})\right\rceil.$
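The recursion for $L$ can be transcribed directly; in the sketch below `nearest_int` plays the role of $\left\lceil\cdot\right\rceil$ and the sample values are arbitrary:

```python
import math

def nearest_int(a):
    # the ceiling of Section 2.3, ties toward the smaller integer
    lo = math.floor(a)
    return lo if abs(a - lo) <= abs(a - lo - 1) else lo + 1

def L(*terms):
    """L(a1) = a1 and L(a1, ..., al) = a1 * nearest_int(L(a2, ..., al))."""
    if len(terms) == 1:
        return terms[0]
    return terms[0] * nearest_int(L(*terms[1:]))

assert L(2.3) == 2.3
assert L(2.0, 3.4) == 2.0 * 3       # nearest_int(3.4) = 3
assert L(0.5, 1.2, 2.6) == 0.5 * 4  # nearest_int(1.2 * 3) = 4
```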
Before introducing the definition of $\widetilde{SGP}$, we need to introduce
the notion of the simple generalized polynomials.
###### Definition 2.9.
For $d\in\mathbb{N}$, the simple generalized polynomials of degree $\leq d$
(denoted by $\widehat{SGP_{d}}$) are the generalized polynomials of the
form
$\prod_{i=1}^{k}L(a_{i,1}n^{j_{i,1}},\cdots,a_{i,l_{i}}n^{j_{i,l_{i}}}),$
where $k\geq 1$, $1\leq l_{i}\leq d$,
$a_{i,1},a_{i,2},\dots,a_{i,l_{i}}\in\mathbb{R}$,
$j_{i,1},j_{i,2},\dots,j_{i,l_{i}}\geq 0$ and
$\sum_{i=1}^{k}\sum_{t=1}^{l_{i}}j_{i,t}\leq d$.
With the help of the above definition, we can introduce the notion of special
integer-valued generalized polynomials.
###### Definition 2.10.
For $d\in\mathbb{N}$, the special integer-valued generalized polynomials of
degree $\leq d$ (denoted by $\widetilde{SGP_{d}}$) are defined as follows.
$\widetilde{SGP_{d}}=\\{\sum_{i=1}^{k}c_{i}\left\lceil
p_{i}(n)\right\rceil:p_{i}(n)\in\widehat{SGP_{d}}\text{ and
}c_{i}\in\mathbb{Z}\\}.$
The special integer-valued generalized polynomials are then defined by
$\widetilde{SGP}=\bigcup_{d=1}^{\infty}\widetilde{SGP_{d}}.$
Clearly $\widetilde{SGP}\subset\mathcal{G}$ and we have the following
observation.
###### Lemma 2.11.
Let $p_{1},\cdots,p_{d}\in\widehat{SGP_{s}}$ (for some $s\in\mathbb{N}$). Then
for any $n\in\mathbb{Z}$ with
$-\frac{1}{2}<\\{p_{1}(n)\\}+\cdots+\\{p_{d}(n)\\}<\frac{1}{2},$
we have $\left\lceil
p_{1}(n)+\cdots+p_{d}(n)\right\rceil=\sum\limits_{i=1}^{d}\left\lceil
p_{i}(n)\right\rceil$.
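Lemma 2.11 can be spot-checked numerically; the choices $p_{1}(n)=\sqrt{2}\,n$, $p_{2}(n)=\sqrt{3}\,n^{2}$ and the margin $0.499$ (which keeps floating-point noise away from the endpoints $\pm\frac{1}{2}$) are illustrative only:

```python
import math

def nearest_int(a):
    lo = math.floor(a)
    return lo if abs(a - lo) <= abs(a - lo - 1) else lo + 1

def frac(a):
    return a - nearest_int(a)

checked = 0
for n in range(1, 2000):
    xs = [math.sqrt(2) * n, math.sqrt(3) * n * n]  # two sample summands
    s = sum(frac(x) for x in xs)
    if -0.499 < s < 0.499:  # hypothesis of Lemma 2.11 (with a safety margin)
        # the ceiling of the sum equals the sum of the ceilings
        assert nearest_int(sum(xs)) == sum(nearest_int(x) for x in xs)
        checked += 1
assert checked > 0  # the hypothesis holds for many n
```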
The following lemma shows the relationship between $\widetilde{GP_{d}}$
and $\widetilde{SGP_{d}}$.
###### Lemma 2.12.
Let $d\in\mathbb{N}$ and $p(n)\in\widetilde{GP_{d}}$. Then there exist
$h(n)\in\widetilde{SGP_{d}}$ and a set
$C=C(\delta,q_{1},\cdots,q_{t})=\bigcap_{k=1}^{t}\\{n\in\mathbb{Z}:\\{q_{k}(n)\\}\in(-\delta,\delta)\\}$
such that
$p(n)=h(n),\forall n\in C,$
where $\delta>0$ is small enough and $q_{k}\in\widehat{SGP_{d}},k=1,2,\dots,t$
for some $t\in\mathbb{N}$.
###### Proof.
We will prove it by induction on $d$.
When $d=1$, we may assume that
$p(n)=\left\lceil\sum_{j=1}^{m}\alpha_{j}\left\lceil\beta_{j}n\right\rceil\right\rceil$.
Let
$q_{j}(n)=\alpha_{j}\left\lceil\beta_{j}n\right\rceil,j=1,\dots,m.$
Let $0<\delta<\frac{1}{2m}$, we set
$C=C(\delta,q_{1},\dots,q_{m})=\bigcap_{j=1}^{m}\\{n\in\mathbb{Z}:\\{q_{j}(n)\\}\in(-\delta,\delta)\\}.$
Since for each $n\in C$,
$\\{q_{j}(n)\\}=\\{\alpha_{j}\left\lceil\beta_{j}n\right\rceil\\}\in(-\delta,\delta)$,
$-\frac{1}{2}<-m\delta<\sum_{j=1}^{m}\\{\alpha_{j}\left\lceil\beta_{j}n\right\rceil\\}<m\delta<\frac{1}{2},\forall
n\in C.$
Let
$h(n)=\sum_{j=1}^{m}\left\lceil\alpha_{j}\left\lceil\beta_{j}n\right\rceil\right\rceil$,
then $h(n)\in\widetilde{SGP_{1}}$. Hence by Lemma 2.11, $p(n)=h(n),\forall
n\in C.$
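A numerical sanity check of this base case; the coefficients and the choice $\delta=0.2<\frac{1}{2m}$ (with $m=2$ summands) are illustrative only:

```python
import math

def nearest_int(a):
    lo = math.floor(a)
    return lo if abs(a - lo) <= abs(a - lo - 1) else lo + 1

def frac(a):
    return a - nearest_int(a)

# m = 2 summands of the form alpha_j * nearest_int(beta_j * n)
alphas = [math.sqrt(2), math.sqrt(5)]
betas = [math.sqrt(3), math.sqrt(7)]
q = [lambda n, a=a, b=b: a * nearest_int(b * n)
     for a, b in zip(alphas, betas)]
delta = 0.2  # any 0 < delta < 1/(2m) = 1/4 works

# the set C = C(delta, q_1, q_2), intersected with a finite window
C = [n for n in range(1, 5000)
     if all(abs(frac(qj(n))) < delta for qj in q)]
assert C

for n in C:
    p = nearest_int(sum(qj(n) for qj in q))  # p(n): outer ceiling of the sum
    h = sum(nearest_int(qj(n)) for qj in q)  # h(n): sum of the ceilings
    assert p == h
```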
Assume that the result holds for all degrees at most $d$ (for some $d\geq 1$).
Next we will show the result holds for $d+1$. We just need to show that the
result holds when $p(n)=\left\lceil r(n)\right\rceil$, where
$r(n)=a_{0}n^{p_{0}}\left\lceil{f_{1}(n)}\right\rceil\cdots\left\lceil{f_{k}(n)}\right\rceil$
(with $a_{0}\in\mathbb{R},p_{0}\geq 0,k\geq 0,f_{l}\in GP_{p_{l}}$ and
$\sum_{l=0}^{k}p_{l}=d+1$).
If $p_{0}={d+1}$, then $p(n)=\left\lceil
a_{0}n^{d+1}\right\rceil\in\widetilde{SGP_{d+1}}$. Next we assume that $0\leq
p_{0}<d+1$ and $0<p_{l}<d+1,l=1,2,\dots,k$. For each $1\leq l\leq k$, by
induction hypothesis, there exist $h_{l}(n)\in\widetilde{SGP_{p_{l}}}$ and
$C_{l}$ such that
$\left\lceil{f_{l}(n)}\right\rceil=h_{l}(n):=\sum_{i=1}^{b_{l}}c_{l,i}\left\lceil
r_{l,i}(n)\right\rceil,\forall n\in C_{l},$
where $c_{l,i}\in\mathbb{Z}$, $r_{l,i}(n)\in\widehat{SGP_{p_{l}}}$ and
$C_{l}=C_{l}(\delta_{l},q_{l,1},\dots,q_{l,t_{l}})$
(with $\delta_{l}>0$ small enough and
$q_{l,k}\in\widehat{SGP_{p_{l}}},k=1,\cdots,t_{l}$ for some
$t_{l}\in\mathbb{N}$).
For any $n\in\bigcap_{l=1}^{k}C_{l}$,
$\displaystyle r(n)$ $\displaystyle=$ $\displaystyle
a_{0}n^{p_{0}}\left\lceil{f_{1}(n)}\right\rceil\cdots\left\lceil{f_{k}(n)}\right\rceil$
$\displaystyle=$ $\displaystyle a_{0}n^{p_{0}}h_{1}(n)\cdots h_{k}(n)$
$\displaystyle=$ $\displaystyle
a_{0}n^{p_{0}}(\sum_{i=1}^{b_{1}}c_{1,i}\left\lceil
r_{1,i}(n)\right\rceil)\cdots(\sum_{i=1}^{b_{k}}c_{k,i}\left\lceil
r_{k,i}(n)\right\rceil).$
Note that $\left\lceil r_{1,i_{1}}(n)\right\rceil\cdots\left\lceil
r_{k,i_{k}}(n)\right\rceil\in\widehat{SGP_{d+1-p_{0}}}$ and these products are
integer-valued, so $r(n)$ can be written as
$r(n)=\sum_{j=1}^{m}\beta_{j}n^{p_{0}}\left\lceil d_{j}(n)\right\rceil,$
where $d_{j}(n)$ is of the form $\left\lceil
r_{1,j_{1}}(n)\right\rceil\cdots\left\lceil r_{k,j_{k}}(n)\right\rceil$.
Let $Q=\\{\beta_{1}n^{p_{0}}\left\lceil
d_{1}(n)\right\rceil,\dots,\beta_{m}n^{p_{0}}\left\lceil
d_{m}(n)\right\rceil\\}\cup(\bigcup_{l=1}^{k}\\{q_{l,1}(n),\dots,q_{l,t_{l}}(n)\\})$.
Let $0<\delta<\min\\{\frac{1}{2m},\delta_{1},\dots,\delta_{k}\\}$ and
$C=C(\delta,Q)=\bigcap_{q(n)\in
Q}\\{n\in\mathbb{Z}:\\{q(n)\\}\in(-\delta,\delta)\\}.$
Clearly $C\subset\bigcap_{l=1}^{k}C_{l}$. For each $n\in C$,
$\\{\beta_{j}n^{p_{0}}\left\lceil
d_{j}(n)\right\rceil\\}\in(-\delta,\delta),j=1,2,\dots,m.$ Hence
$-\frac{1}{2}<-m\delta<\sum_{j=1}^{m}\\{\beta_{j}n^{p_{0}}\left\lceil
d_{j}(n)\right\rceil\\}<m\delta<\frac{1}{2}.$
Let $h(n)=\sum_{j=1}^{m}\left\lceil\beta_{j}n^{p_{0}}\left\lceil
d_{j}(n)\right\rceil\right\rceil$; then $h(n)\in\widetilde{SGP_{d+1}}$. By Lemma
2.11, $p(n)=h(n),\forall n\in C$.
∎
By Lemma 2.12, every $p(n)\in\widetilde{GP_{d}}$ corresponds to an
$h(n)\in\widetilde{SGP_{d}}$; we call the maximal-degree components of $h(n)$
the maximal-degree components of $p(n)$. We emphasize that here we do not
combine or cancel terms. For instance, let $p(n)=n\left\lceil 2\pi
n^{2}-\left\lceil 2\pi n^{2}\right\rceil+\sqrt{2}n\right\rceil$; then we denote
$h(n):=n\left\lceil 2\pi n^{2}\right\rceil-n\left\lceil 2\pi
n^{2}\right\rceil+n\left\lceil\sqrt{2}n\right\rceil$, the maximal-degree
components of $p(n)$ (equivalently, of $h(n)$) are $n\left\lceil 2\pi
n^{2}\right\rceil$ and $-n\left\lceil 2\pi n^{2}\right\rceil$, and the
coefficients of these maximal-degree components are $2\pi$ and
$-2\pi$.
###### Definition 2.13.
Let $p(n)\in\mathcal{G}$; we denote by $A(p(n))$ the sum of the coefficients
of the maximal-degree components of $p(n)$. Let
$p_{1},p_{2},\cdots,p_{d}\in\mathcal{G}$; the tuple $(p_{1},p_{2},\cdots,p_{d})$
is called _non-degenerate_ if $A(p_{i})\neq 0$ and $A(p_{i}-p_{j})\neq
0$, $1\leq i\neq j\leq d$.
For instance, $A(\left\lceil an^{2}\left\lceil bn\right\rceil+\left\lceil
cn^{3}\right\rceil\right\rceil+dn^{3}+2n^{2})=ab+c+d$, $A(n+n\left\lceil 2\pi
n-\left\lceil 2\pi n\right\rceil\right\rceil)=0$.
$(n^{2}+n,n^{2}+\left\lceil\sqrt{3}n\right\rceil)$ is non-degenerate,
$(n\left\lceil 2\pi n\right\rceil+n,\left\lceil 2\pi n^{2}\right\rceil+2n)$ is
not non-degenerate.
The key ingredient in the proof of the main result is to view the integer-
valued generalized polynomials, in some sense, as the ordinary polynomials. To
do this, we need to introduce the following definition.
###### Definition 2.14.
Let $p(n)\in\widetilde{SGP}$, $m\in\mathbb{Z}$ and $C\subset\mathbb{Z}$. We
say that $p$ is _proper_ with respect to (w.r.t. for short) $m$ and $C$ if for
every $n\in C$,
* •
if $deg(p)=1$, $p(n+m)=p(n)+p(m)$.
* •
if $deg(p)>1$, $p(n+m)-p(n)-p(m)=q(n),$ where $q(n)\in\widetilde{SGP}$ and
$deg(q)<deg(p)$.
For example, let $p(n)=\left\lceil an^{2}\right\rceil$; if
$p(n+m)=\left\lceil a(n+m)^{2}\right\rceil=\left\lceil
an^{2}\right\rceil+\left\lceil am^{2}\right\rceil+\left\lceil
2amn\right\rceil,\forall n\in C,$
then we say $p(n)$ is proper w.r.t. $m$ and $C$.
Let $p(n)\in\widetilde{SGP}$, $m\in\mathbb{Z}$. To study whether there exists
$C$ such that $p(n)$ is proper w.r.t. $m$ and $C$, we need to introduce the
following notion.
###### Definition 2.15.
Let $p(n)\in\widetilde{SGP}$ and $m\in\mathbb{Z}$.
* •
If $p(n)=\left\lceil L(a_{1}n^{j_{1}},\dots,a_{l}n^{j_{l}})\right\rceil$, we
say $m$ is good w.r.t. $p(n)$ if for any $1\leq t\leq l$,
$\\{L(a_{t}m^{j_{t}},a_{t+1}m^{j_{t+1}},\cdots,a_{l}m^{j_{l}})\\}\neq\frac{1}{2}$.
* •
If $p(n)=\left\lceil\prod_{i=1}^{k}r_{i}(n)\right\rceil$ with
$r_{i}(n)=L(a_{i,1}n^{j_{i,1}},\dots,a_{i,l_{i}}n^{j_{i,l_{i}}})$, we say $m$
is good w.r.t. $p(n)$ if $\\{\prod_{i=1}^{k}r_{i}(m)\\}\neq\frac{1}{2}$ and
$m$ is good w.r.t. $\left\lceil r_{i}(n)\right\rceil$ for each $1\leq i\leq
k$.
* •
If $p(n)=\sum_{t=1}^{k}c_{t}\left\lceil q_{t}(n)\right\rceil$ with
$c_{i}\in\mathbb{Z}$ and each $q_{t}(n)$ is of the form
$\prod_{i=1}^{k}r_{i}(n)$ with
$r_{i}(n)=L(a_{i,1}n^{j_{i,1}},\dots,a_{i,l_{i}}n^{j_{i,l_{i}}})$, we say $m$
is good w.r.t. $p(n)$ if $m$ is good w.r.t. $\left\lceil q_{t}(n)\right\rceil$
for each $1\leq t\leq k$.
For example, if $\\{bm\left\lceil cm\right\rceil\\}\neq\frac{1}{2}$ and
$\\{cm\\}\neq\frac{1}{2}$, then $m$ is good w.r.t. $p(n)=\left\lceil
bn\left\lceil cn\right\rceil\right\rceil$.
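In code the two conditions of this example read as follows; $b=\sqrt{2}$, $c=\sqrt{3}$ are arbitrary irrational choices (for which the value $\frac{1}{2}$ is never attained), and a rational counterexample is included for contrast:

```python
import math

def nearest_int(a):
    lo = math.floor(a)
    return lo if abs(a - lo) <= abs(a - lo - 1) else lo + 1

def frac(a):
    return a - nearest_int(a)

def is_good(b, c, m):
    """m is good w.r.t. ceil(b n ceil(c n)) iff {c m} != 1/2 and {b m ceil(c m)} != 1/2."""
    return frac(c * m) != 0.5 and frac(b * m * nearest_int(c * m)) != 0.5

b, c = math.sqrt(2), math.sqrt(3)
assert all(is_good(b, c, m) for m in range(1, 100))

# a rational counterexample: with c = 1/2 and m = 1, {c m} = 1/2, so m is not good
assert not is_good(1.0, 0.5, 1)
```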
We have the following observation.
###### Lemma 2.16.
Let $p(n)\in\widetilde{SGP}$. Then there exist $\delta>0$,
$Q\subset\widehat{SGP_{s}}$ (for some $s\in\mathbb{N}$) and
$C(\delta,Q)=\bigcap_{q(n)\in
Q}\\{n\in\mathbb{Z}:\\{q(n)\\}\in(-\delta,\delta)\\}$
such that for each $m\in C(\delta,Q)$, $m$ is good w.r.t. $p(n)$.
###### Proof.
Choose $0<\delta<\frac{1}{4}$.
* •
If $p(n)=\left\lceil L(a_{1}n^{j_{1}},\dots,a_{l}n^{j_{l}})\right\rceil$, let
$Q=\\{L(a_{t}n^{j_{t}},a_{t+1}n^{j_{t+1}},\cdots,a_{l}n^{j_{l}}):1\leq t\leq
l\\}$. Then for each $m\in C(\delta,Q)$, $m$ is good w.r.t. $p(n)$.
* •
If $p(n)=\left\lceil\prod_{i=1}^{k}r_{i}(n)\right\rceil$ with
$r_{i}(n)=L(a_{i,1}n^{j_{i,1}},\dots,a_{i,l_{i}}n^{j_{i,l_{i}}})$, let
$Q=\\{\prod_{i=1}^{k}r_{i}(n)\\}\cup\bigcup_{i=1}^{k}\\{L(a_{i,t}n^{j_{i,t}},a_{i,t+1}n^{j_{i,t+1}},\cdots,a_{i,l_{i}}n^{j_{i,l_{i}}}):1\leq
t\leq l_{i}\\}.$
Then for each $m\in C(\delta,Q)$, $m$ is good w.r.t. $p(n)$.
* •
If $p(n)=\sum_{t=1}^{k}c_{t}\left\lceil q_{t}(n)\right\rceil$, then for each
$\left\lceil q_{t}(n)\right\rceil$, by the above argument there exists a
$Q_{t}$ such that for each $m\in C(\delta,Q_{t})$, $m$ is good w.r.t.
$\left\lceil q_{t}(n)\right\rceil$. Let $Q=\bigcup_{t=1}^{k}Q_{t}$; then for each
$m\in C(\delta,Q)$, $m$ is good w.r.t. $p(n)$.
∎
The following lemmas are very useful in our proof. We first prove the simple
case to illustrate our idea. The general case can be deduced directly.
###### Lemma 2.17.
Let $p(n)=\left\lceil r(n)\right\rceil,n\in\mathbb{Z}$, where
$r(n)\in\widehat{SGP_{d}}$ for some $d\in\mathbb{N}$. Let $l\in\mathbb{N}$,
$m_{i}\in\mathbb{Z}$ and $m_{i}$ is good w.r.t. $p(n)$ for each $1\leq i\leq
l$. Then for any $\varepsilon>0$, there exists
$C=C(\delta,q_{1},\cdots,q_{t})=\bigcap_{k=1}^{t}\\{n\in\mathbb{Z}:\\{q_{k}(n)\\}\in(-\delta,\delta)\\},$
where $\delta>0\ (\delta<\varepsilon)$ is a small enough number, and
$q_{k}\in\widehat{SGP_{d}},k=1,2,\dots,t$ for some $t\in\mathbb{N}$, such that
for all $i\in\\{1,\cdots,l\\},$
1. (1)
$p(n)$ is proper w.r.t. $m_{i}$ and $C$.
2. (2)
$\\{r(n+m_{i})\\}\in(\\{r(m_{i})\\}-\varepsilon,\\{r(m_{i})\\}+\varepsilon),\forall
n\in C$.
###### Proof.
We first show a special case $r(n)=bn\left\lceil cn\right\rceil$ to illustrate
our idea; the general cases are similar.
Let
$\delta_{1}=\frac{1}{2}-\mathop{\max}_{i=1,\dots,l}\\{|\\{bm_{i}\left\lceil
cm_{i}\right\rceil\\}|,|\\{cm_{i}\\}|\\}$. Since for each $1\leq i\leq l$,
$m_{i}$ is good w.r.t. $\left\lceil r(n)\right\rceil$, $\delta_{1}>0$. Choose
$0<\delta<\min\\{\frac{\delta_{1}}{4},\frac{\varepsilon}{3}\\}$ and let
$\displaystyle C(\delta)=\bigcap_{i=1}^{l}\\{n\in\mathbb{Z}:\\{bn\left\lceil
cn\right\rceil\\},\\{bn\left\lceil cm_{i}\right\rceil\\},\\{bm_{i}\left\lceil
cn\right\rceil\\},\\{cn\\}\in(-\delta,\delta)\\}.$
For all $i=1,\cdots,l$ and $n\in C(\delta)$, we have
$|\\{cm_{i}\\}|\leq\frac{1}{2}-\delta_{1},\\{cn\\}\in(-\delta,\delta),$
$|\\{bm_{i}\left\lceil
cm_{i}\right\rceil\\}|\leq\frac{1}{2}-\delta_{1},\\{bn\left\lceil
cm_{i}\right\rceil\\},\\{bn\left\lceil cn\right\rceil\\},\\{bm_{i}\left\lceil
cn\right\rceil\\}\in(-\delta,\delta).$
Then
(1) $-\frac{1}{2}<\\{cm_{i}\\}+\\{cn\\}<\frac{1}{2},$ (2)
$-\frac{1}{2}<\\{bn\left\lceil cn\right\rceil\\}+\\{bn\left\lceil
cm_{i}\right\rceil\\}+\\{bm_{i}\left\lceil
cn\right\rceil\\}+\\{bm_{i}\left\lceil cm_{i}\right\rceil\\}<\frac{1}{2}.$
By (1) and Lemma 2.11, $\left\lceil cm_{i}+cn\right\rceil=\left\lceil
cm_{i}\right\rceil+\left\lceil cn\right\rceil$. Then
$\displaystyle r(n+m_{i})$ $\displaystyle=$ $\displaystyle
b(n+m_{i})\left\lceil c(n+m_{i})\right\rceil$ $\displaystyle=$
$\displaystyle(bn+bm_{i})(\left\lceil cn\right\rceil+\left\lceil
cm_{i}\right\rceil)$ $\displaystyle=$ $\displaystyle bn\left\lceil
cn\right\rceil+bn\left\lceil cm_{i}\right\rceil+bm_{i}\left\lceil
cn\right\rceil+bm_{i}\left\lceil cm_{i}\right\rceil.$
By (2) and Lemma 2.11,
$\displaystyle p(n+m_{i})$ $\displaystyle=$ $\displaystyle\left\lceil
r(n+m_{i})\right\rceil$ $\displaystyle=$ $\displaystyle\left\lceil
bn\left\lceil cn\right\rceil\right\rceil+\left\lceil bn\left\lceil
cm_{i}\right\rceil\right\rceil+\left\lceil bm_{i}\left\lceil
cn\right\rceil\right\rceil+\left\lceil bm_{i}\left\lceil
cm_{i}\right\rceil\right\rceil$ $\displaystyle=$ $\displaystyle
p(n)+p(m_{i})+(\left\lceil bn\left\lceil
cm_{i}\right\rceil\right\rceil+\left\lceil bm_{i}\left\lceil
cn\right\rceil\right\rceil).$
This implies that $p(n)$ is proper w.r.t. $m_{i}$ and $C(\delta)$. It also implies that
$\displaystyle\\{r(n+m_{i})\\}$ $\displaystyle=$
$\displaystyle\\{r(m_{i})+bn\left\lceil cn\right\rceil+bn\left\lceil
cm_{i}\right\rceil+bm_{i}\left\lceil cn\right\rceil\\}$ $\displaystyle\in$
$\displaystyle(\\{r(m_{i})\\}-\varepsilon,\\{r(m_{i})\\}+\varepsilon).$
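The special case just computed can be verified numerically; $b=\sqrt{2}$, $c=\sqrt{3}$, the single good value $m=7$ and the tolerance $\delta=0.07<\frac{\delta_{1}}{4}$ are illustrative choices only:

```python
import math

def nearest_int(a):
    lo = math.floor(a)
    return lo if abs(a - lo) <= abs(a - lo - 1) else lo + 1

def frac(a):
    return a - nearest_int(a)

b, c, m = math.sqrt(2), math.sqrt(3), 7  # m is good w.r.t. p
p = lambda n: nearest_int(b * n * nearest_int(c * n))  # p(n) = ceil(b n ceil(c n))
delta = 0.07

# C(delta): the four fractional parts from the proof must all be small
C = [n for n in range(1, 200000)
     if all(abs(frac(x)) < delta for x in (
         b * n * nearest_int(c * n),
         b * n * nearest_int(c * m),
         b * m * nearest_int(c * n),
         c * n))]
assert C

for n in C:
    extra = (nearest_int(b * n * nearest_int(c * m))
             + nearest_int(b * m * nearest_int(c * n)))
    assert p(n + m) == p(n) + p(m) + extra  # p is proper w.r.t. m and C
```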
We will prove the general case by showing the result holds in the following
three cases.
Case 1: $r(n)$ is of the form $r(n)=L(an^{j})$.
For any $1\leq i\leq l$, let $\tilde{Q}_{i}$ be the set of all the components
of the expansion of $a(n+m_{i})^{j}$ and
$Q_{i}=\tilde{Q}_{i}\setminus\\{L(am_{i}^{j})\\}$
(e.g. $a(n+m_{i})^{2}=an^{2}+2anm_{i}+am_{i}^{2}$,
$\tilde{Q}_{i}=\\{an^{2},2anm_{i},am_{i}^{2}\\}$ and
$Q_{i}=\\{an^{2},2anm_{i}\\}$). It is clear that $\\#\tilde{Q}_{i}\leq 2^{j}$,
where $\\#Q$ is the number of elements of the set $Q$.
Let $\delta_{1}=\frac{1}{2}-\max_{i=1,2,\cdots,l}|\\{L(am_{i}^{j})\\}|$.
Since for each $i=1,\cdots,l$, $m_{i}$ is good w.r.t. $p(n)=\left\lceil
r(n)\right\rceil$, $\delta_{1}>0$. Choose
$0<\delta<\min\\{\frac{\delta_{1}}{2^{j}},\frac{\varepsilon}{2^{j}-1}\\}$ and
let
$C(\delta)=\bigcap_{i=1}^{l}\bigcap_{q(n)\in
Q_{i}}\\{n\in\mathbb{Z}:\\{q(n)\\}\in(-\delta,\delta)\\}.$
For any $1\leq i\leq l$ and any $n\in C(\delta)$,
$-\delta_{1}<-(2^{j}-1)\delta<\sum_{q(n)\in
Q_{i}}\\{q(n)\\}<(2^{j}-1)\delta<\delta_{1},$
$|\\{L(am_{i}^{j})\\}|\leq\max_{i=1,2,\cdots,l}\\{|\\{L(am_{i}^{j})\\}|\\}=\frac{1}{2}-\delta_{1}.$
Note that
$\displaystyle r(n+m_{i})=a(n+m_{i})^{j}=\sum_{q(n)\in
Q_{i}}q(n)+L(am_{i}^{j})=\sum_{q(n)\in Q_{i}}q(n)+r(m_{i})$
and
$\displaystyle-\frac{1}{2}=-\delta_{1}-(\frac{1}{2}-\delta_{1})<\sum_{q(n)\in
Q_{i}}\\{q(n)\\}+\\{L(am_{i}^{j})\\}<\delta_{1}+\frac{1}{2}-\delta_{1}=\frac{1}{2}.$
By Lemma 2.11,
$\displaystyle p(n+m_{i})$ $\displaystyle=$ $\displaystyle\left\lceil
r(n+m_{i})\right\rceil=\left\lceil\sum_{q(n)\in
Q_{i}}q(n)+L(am_{i}^{j})\right\rceil=\sum_{q(n)\in Q_{i}}\left\lceil
q(n)\right\rceil+\left\lceil L(am_{i}^{j})\right\rceil$ $\displaystyle=$
$\displaystyle p(n)+p(m_{i})+\sum_{q(n)\in
Q_{i}\setminus\\{r(n)\\}}\left\lceil q(n)\right\rceil.$
This implies that $p(n)$ is proper w.r.t. $m_{i}$ and $C(\delta)$. Since
$-\varepsilon<-(2^{j}-1)\delta<\sum_{q(n)\in
Q_{i}}\\{q(n)\\}<(2^{j}-1)\delta<\varepsilon,$
we have
$\\{r(n+m_{i})\\}=\\{r(m_{i})+\sum_{q(n)\in
Q_{i}}q(n)\\}=\\{r(m_{i})\\}+\\{\sum_{q(n)\in
Q_{i}}q(n)\\}\in(\\{r(m_{i})\\}-\varepsilon,\\{r(m_{i})\\}+\varepsilon).$
Case 2: $r(n)$ is of the form $r(n)=L(a_{1}n^{j_{1}},\dots,a_{t}n^{j_{t}})$.
For any $1\leq i\leq l$ and $1\leq k\leq t$, let $\tilde{Q}_{i,k}$ be the set
of all the components of the expansion of $a_{k}(n+m_{i})^{j_{k}}$. We denote
$\displaystyle Q_{i,t}=\tilde{Q}_{i,t}\setminus\\{L(a_{t}m_{i}^{j_{t}})\\},$
$\displaystyle
Q_{i,t-1}=\tilde{Q}_{i,t-1}\left\lceil\tilde{Q}_{i,t}\right\rceil\setminus\\{L(a_{t-1}m_{i}^{j_{t-1}},a_{t}m_{i}^{j_{t}})\\},$
$\displaystyle\cdots\cdots,$ $\displaystyle
Q_{i,1}=\tilde{Q}_{i,1}\left\lceil\tilde{Q}_{i,2}\left\lceil\cdots\left\lceil\tilde{Q}_{i,t}\right\rceil\cdots\right\rceil\right\rceil\setminus\\{L(a_{1}m_{i}^{j_{1}},\cdots,a_{t}m_{i}^{j_{t}})\\},$
and
$Q_{i}=Q_{i,t}\cup Q_{i,t-1}\cup\cdots\cup Q_{i,1},$
where $\left\lceil A\right\rceil:=\\{\left\lceil a\right\rceil:a\in A\\}$ and
$AB:=\\{ab:a\in A,b\in B\\}$ for $A,B\subset\mathcal{G}$.
Let
$\delta_{1}=\frac{1}{2}-\max_{i=1,2,\dots,l}\\{|\\{L(a_{t}m_{i}^{j_{t}})\\}|,|\\{L(a_{t-1}m_{i}^{j_{t-1}},a_{t}m_{i}^{j_{t}})\\}|,\cdots,|\\{L(a_{1}m_{i}^{j_{1}},\cdots,a_{t}m_{i}^{j_{t}})\\}|\\}.$
Since $m_{i}$ is good w.r.t. $p(n)=\left\lceil r(n)\right\rceil$,
$\delta_{1}>0$. Let
$L:=2^{j_{t}}+2^{j_{t}+j_{t-1}}+\dots+2^{j_{t}+j_{t-1}+\dots+j_{1}}>\\#Q_{i},$
we choose $0<\delta<\min\\{\frac{\delta_{1}}{L},\frac{\varepsilon}{L-1}\\}$.
Let
$C(\delta)=\bigcap_{i=1}^{l}\bigcap_{q(n)\in
Q_{i}}\\{n\in\mathbb{Z}:\\{q(n)\\}\in(-\delta,\delta)\\}.$
For any $1\leq i\leq l$ and any $n\in C(\delta)$,
$-\delta_{1}<-2^{j_{t}}\delta<\sum_{q(n)\in
Q_{i,t}}\\{q(n)\\}<2^{j_{t}}\delta<\delta_{1},$
$|\\{L(a_{t}m_{i}^{j_{t}})\\}|\leq\frac{1}{2}-\delta_{1},$
using the same argument as in case 1 and applying Lemma 2.11, we have
$\left\lceil L(a_{t}(n+m_{i})^{j_{t}})\right\rceil=\left\lceil
L(a_{t}m_{i}^{j_{t}})\right\rceil+\sum_{q\in Q_{i,t}}\left\lceil
q(n)\right\rceil.$
Then
$\displaystyle L(a_{t-1}(n+m_{i})^{j_{t-1}},a_{t}(n+m_{i})^{j_{t}})$
$\displaystyle=$ $\displaystyle(\sum_{q\in\tilde{Q}_{i,t-1}}q(n))(\left\lceil
L(a_{t}m_{i}^{j_{t}})\right\rceil+\sum_{q\in Q_{i,t}}\left\lceil
q(n)\right\rceil)$ $\displaystyle=$ $\displaystyle
L(a_{t-1}m_{i}^{j_{t-1}},a_{t}m_{i}^{j_{t}})+\sum_{q\in Q_{i,t-1}}q(n).$
Since
$-\delta_{1}<-2^{j_{t}+j_{t-1}}\delta<\sum_{q(n)\in
Q_{i,t-1}}\\{q(n)\\}<2^{j_{t}+j_{t-1}}\delta<\delta_{1},$
$|\\{L(a_{t-1}m_{i}^{j_{t-1}},a_{t}m_{i}^{j_{t}})\\}|\leq\frac{1}{2}-\delta_{1},$
using the same argument as in case 1 and applying Lemma 2.11, we have
$\displaystyle\left\lceil
L(a_{t-1}(n+m_{i})^{j_{t-1}},a_{t}(n+m_{i})^{j_{t}})\right\rceil=\left\lceil
L(a_{t-1}m_{i}^{j_{t-1}},a_{t}m_{i}^{j_{t}})\right\rceil+\sum_{q\in
Q_{i,t-1}}\left\lceil q(n)\right\rceil.$
Inductively, we have
$\displaystyle L(a_{1}(n+m_{i})^{j_{1}},\cdots,a_{t}(n+m_{i})^{j_{t}})$
$\displaystyle=$ $\displaystyle(\sum_{q\in\tilde{Q}_{i,1}}q(n))(\left\lceil
L(a_{2}m_{i}^{j_{2}},\cdots,a_{t}m_{i}^{j_{t}})\right\rceil+\sum_{q\in
Q_{i,2}}\left\lceil q(n)\right\rceil)$ $\displaystyle=$ $\displaystyle
L(a_{1}m_{i}^{j_{1}},\cdots,a_{t}m_{i}^{j_{t}})+\sum_{q(n)\in Q_{i,1}}q(n).$
Since
$-\delta_{1}<-2^{j_{t}+j_{t-1}+\cdots+j_{1}}\delta<\sum_{q(n)\in
Q_{i,1}}\\{q(n)\\}<2^{j_{t}+j_{t-1}+\cdots+j_{1}}\delta<\delta_{1},$
$|\\{L(a_{1}m_{i}^{j_{1}},\cdots,a_{t}m_{i}^{j_{t}})\\}|\leq\frac{1}{2}-\delta_{1},$
using the same argument as in case 1, we have
$-\frac{1}{2}<\\{L(a_{1}m_{i}^{j_{1}},\cdots,a_{t}m_{i}^{j_{t}})\\}+\sum_{q\in
Q_{i,1}}\\{q(n)\\}<\frac{1}{2}.$
Then applying Lemma 2.11,
$\displaystyle\left\lceil r(n+m_{i})\right\rceil$ $\displaystyle=$
$\displaystyle\left\lceil
L(a_{1}(n+m_{i})^{j_{1}},\cdots,a_{t}(n+m_{i})^{j_{t}})\right\rceil=\left\lceil
L(a_{1}m_{i}^{j_{1}},\cdots,a_{t}m_{i}^{j_{t}})\right\rceil+\sum_{q\in
Q_{i,1}}\left\lceil q(n)\right\rceil$ $\displaystyle=$
$\displaystyle\left\lceil r(m_{i})\right\rceil+\left\lceil
r(n)\right\rceil+\sum_{q\in Q_{i,1}\setminus\\{r(n)\\}}\left\lceil
q(n)\right\rceil.$
This implies that $p(n)=\left\lceil r(n)\right\rceil$ is proper w.r.t. $m_{i}$ and
$C(\delta)$. Since
$-\varepsilon<-(L-1)\delta<\sum_{q(n)\in
Q_{i,1}}\\{q(n)\\}<(L-1)\delta<\varepsilon,$
we have
$\displaystyle\\{r(n+m_{i})\\}$ $\displaystyle=$
$\displaystyle\\{L(a_{1}m_{i}^{j_{1}},\cdots,a_{t}m_{i}^{j_{t}})+\sum_{q\in
Q_{i,1}}q(n)\\}$ $\displaystyle\in$
$\displaystyle(\\{r(m_{i})\\}-\varepsilon,\\{r(m_{i})\\}+\varepsilon).$
Case 3: $r(n)$ is of the form $r(n)=\prod_{h=1}^{k}r_{h}(n)$ with
$r_{h}(n)=L(a_{h,1}n^{j_{h,1}},\dots,a_{h,t_{h}}n^{j_{h,t_{h}}})$.
Notice that
$L(an,bn)L(cn,dn)=(an\left\lceil bn\right\rceil)(cn\left\lceil
dn\right\rceil)=acn^{2}\left\lceil bn\right\rceil\left\lceil
dn\right\rceil=L(acn^{2},bn)\left\lceil L(dn)\right\rceil,$
we can assume $r(n)=r_{1}(n)\prod_{h=2}^{k}\left\lceil r_{h}(n)\right\rceil$
with $r_{h}(n)=L(a_{h,1}n^{j_{h,1}},\dots,a_{h,t_{h}}n^{j_{h,t_{h}}}),h\geq
1$.
For each $1\leq h\leq k$ and $1\leq i\leq l$, by case 2, there exist
$L_{h}\in\mathbb{Z}$, $\delta_{h}>0$,
$\tilde{Q}^{h}_{i,1},\cdots,\tilde{Q}^{h}_{i,t_{h}},Q^{h}_{i,1},\cdots,Q^{h}_{i,t_{h}}$
with
$C(\delta_{h})=\bigcap_{i=1}^{l}\bigcap_{q(n)\in
Q_{i}^{h}}\\{n\in\mathbb{Z}:\\{q(n)\\}\in(-\delta_{h},\delta_{h})\\},$
$Q_{i}^{h}=Q^{h}_{i,t_{h}}\cup Q^{h}_{i,t_{h}-1}\cup\cdots\cup Q^{h}_{i,1},$
corresponding to $r_{h}(n)$ and $m_{i}$, such that
$\left\lceil r_{h}(n+m_{i})\right\rceil=\left\lceil
r_{h}(m_{i})\right\rceil+\sum_{q(n)\in Q^{h}_{i,1}}\left\lceil
q(n)\right\rceil,\forall n\in C(\delta_{h}).$
Let
$Q_{i}=\tilde{Q}_{i,1}^{1}\prod_{h=2}^{k}\left\lceil\tilde{Q}_{i,1}^{h}\right\rceil\backslash\\{r(m_{i})\\}$,
$L=\prod_{h=1}^{k}L_{h}$ and $\tilde{\delta}=\min_{1\leq h\leq k}\delta_{h}$.
Let
$\displaystyle B=$
$\displaystyle\bigcup_{i=1}^{l}\bigcup_{h=1}^{k}\\{L(a_{h,t_{h}}m_{i}^{j_{h,t_{h}}}),L(a_{h,t_{h}-1}m_{i}^{j_{h,t_{h}-1}},a_{h,t_{h}}m_{i}^{j_{h,t_{h}}}),\cdots,L(a_{h,1}m_{i}^{j_{h,1}},\cdots,a_{h,t_{h}}m_{i}^{j_{h,t_{h}}})\\}$
$\displaystyle\bigcup\\{r_{1}(m_{i})\prod_{h=2}^{k}\left\lceil
r_{h}(m_{i})\right\rceil:i=1,2,\dots,l\\}$
and $\hat{\delta}=\frac{1}{2}-\max_{q\in B}\\{|\\{q\\}|\\}$. Since $m_{i}$ is
good w.r.t. $p(n)$, $\hat{\delta}>0$. We choose
$\delta<\min\\{\frac{\varepsilon}{L},\frac{\tilde{\delta}}{L},\frac{\hat{\delta}}{L}\\}$
and let
$C(\delta)=(\bigcap_{h=1}^{k}\bigcap_{i=1}^{l}\bigcap_{q(n)\in
Q_{i}^{h}}\\{n\in\mathbb{Z}:\\{q(n)\\}\in(-\delta,\delta)\\})\cap(\bigcap_{i=1}^{l}\bigcap_{q(n)\in
Q_{i}}\\{n\in\mathbb{Z}:\\{q(n)\\}\in(-\delta,\delta)\\}).$
Since $C(\delta)\subset\bigcap_{h=1}^{k}C(\delta_{h})$, for any $m_{i}$ and
$n\in C(\delta)$,
$\displaystyle r(n+m_{i})$ $\displaystyle=$ $\displaystyle
r_{1}(n+m_{i})\prod_{h=2}^{k}\left\lceil r_{h}(n+m_{i})\right\rceil$
$\displaystyle=$ $\displaystyle(r_{1}(m_{i})+\sum_{q(n)\in
Q_{i,1}^{1}}\left\lceil q(n)\right\rceil)\prod_{h=2}^{k}(\left\lceil
r_{h}(m_{i})\right\rceil+\sum_{q(n)\in Q_{i,1}^{h}}\left\lceil
q(n)\right\rceil)$ $\displaystyle=$ $\displaystyle
r_{1}(m_{i})\prod_{h=2}^{k}\left\lceil r_{h}(m_{i})\right\rceil+\sum_{q(n)\in
Q_{i}}q(n)$
and
$-\hat{\delta}<-L\delta<\sum_{q(n)\in Q_{i}}\\{q(n)\\}<L\delta<\hat{\delta},\
|\\{r_{1}(m_{i})\prod_{h=2}^{k}\left\lceil
r_{h}(m_{i})\right\rceil\\}|<\frac{1}{2}-\hat{\delta}.$
Hence
$-\frac{1}{2}<\\{r_{1}(m_{i})\prod_{h=2}^{k}\left\lceil
r_{h}(m_{i})\right\rceil\\}+\sum_{q(n)\in Q_{i}}\\{q(n)\\}<\frac{1}{2}.$
By Lemma 2.11,
$\displaystyle\left\lceil r(n+m_{i})\right\rceil$ $\displaystyle=$
$\displaystyle\left\lceil r_{1}(m_{i})\prod_{h=2}^{k}\left\lceil
r_{h}(m_{i})\right\rceil\right\rceil+\sum_{q(n)\in Q_{i}}\left\lceil
q(n)\right\rceil$ $\displaystyle=$ $\displaystyle\left\lceil
r(m_{i})\right\rceil+\left\lceil r(n)\right\rceil+\sum_{q(n)\in
Q_{i}\backslash\\{r(n)\\}}\left\lceil q(n)\right\rceil.$
This implies that $p(n)$ is proper w.r.t. $m_{i}$ and $C(\delta)$. Since
$-\varepsilon<-L\delta<\sum_{q(n)\in Q_{i}}\\{q(n)\\}<L\delta<\varepsilon,$
we have
$\displaystyle\\{r(n+m_{i})\\}$ $\displaystyle=$
$\displaystyle\\{r_{1}(m_{i})\prod_{h=2}^{k}\left\lceil
r_{h}(m_{i})\right\rceil+\sum_{q(n)\in Q_{i}}q(n)\\}$ $\displaystyle\in$
$\displaystyle(\\{r_{1}(m_{i})\prod_{h=2}^{k}\left\lceil
r_{h}(m_{i})\right\rceil\\}-\varepsilon,\\{r_{1}(m_{i})\prod_{h=2}^{k}\left\lceil
r_{h}(m_{i})\right\rceil\\}+\varepsilon)$ $\displaystyle=$
$\displaystyle(\\{r(m_{i})\\}-\varepsilon,\\{r(m_{i})\\}+\varepsilon).$
This finishes the proof. ∎
Since $\mathcal{F}_{d,0}$ is a filter, we have the following result.
###### Lemma 2.18.
Let $p_{1}(n)=\left\lceil r_{1}(n)\right\rceil,\cdots,p_{t}(n)=\left\lceil
r_{t}(n)\right\rceil,n\in\mathbb{Z}$, where
$r_{i}\in\widehat{SGP_{d}},i=1,\cdots,t$ for some $d\in\mathbb{N}$. Let
$l\in\mathbb{N}$ and $m_{j}\in\mathbb{Z}$ be such that $m_{j}$ is good w.r.t. $p_{i}(n)$ for
$1\leq j\leq l,1\leq i\leq t$. Then for any $\varepsilon>0$, there exists
$C=C(\delta)=\bigcap_{k=1}^{h}\\{n\in\mathbb{Z}:\\{q_{k}(n)\\}\in(-\delta,\delta)\\},$
where $\delta>0\ (\delta<\varepsilon)$ is a small enough number,
$s=\max_{1\leq i\leq t}\deg(p_{i})$ and
$q_{k}\in\widehat{SGP_{s}},k=1,2,\dots,h$ for some $h\in\mathbb{N}$, such that
for all $i\in\\{1,\cdots,t\\},j\in\\{1,\cdots,l\\}$,
1. (1)
$p_{i}(n)$ is proper w.r.t. $m_{j}$ and $C$.
2. (2)
$\\{r_{i}(n+m_{j})\\}\in(\\{r_{i}(m_{j})\\}-\varepsilon,\\{r_{i}(m_{j})\\}+\varepsilon),\forall
n\in C$.
The general case is given by the following lemma.
###### Lemma 2.19.
Let $p_{1},\cdots,p_{d}\in\widetilde{SGP}$. Let
$l\in\mathbb{N}$ and $m_{j}\in\mathbb{Z}$ be such that $m_{j}$ is good w.r.t. $p_{i}(n)$ for
$1\leq i\leq d,1\leq j\leq l$. Then there exists a Nils Bohr0-set $C$ with the
form
$C=\bigcap_{k=1}^{t}\\{n\in\mathbb{Z}:\\{q_{k}(n)\\}\in(-\delta,\delta)\\}$
such that for all $(i,j)\in\\{1,\cdots,d\\}\times\\{1,\cdots,l\\}$, $p_{i}(n)$
is proper w.r.t. $m_{j}$ and $C$, where $\delta>0$ is a small enough number,
$s=\mathop{\max}_{1\leq i\leq d}\deg(p_{i})$ and
$q_{k}\in\widehat{SGP_{s}},k=1,2,\dots,t$ for some $t\in\mathbb{N}$.
###### Remark 2.20.
We say that the Nils Bohr0-set $C$ above is associated to
$\\{p_{1},\cdots,p_{d}\\}$ and $\\{m_{1},\cdots,m_{l}\\}$.
## 3\. Proof of Theorem 1.1 for degree $1$ integer-valued generalized
polynomials
In this section, we will prove Theorem 1.1 for degree $1$ integer-valued
generalized polynomials. We need the following lemma.
###### Lemma 3.1.
Let $(X,T)$ be a weakly mixing minimal system and $p\in\widetilde{SGP_{1}}$
with $A(p(n))\neq 0$. Then for any non-empty open subsets $U,V$ of $X$,
$N(p,U,V):=\\{n\in\mathbb{Z}:U\cap T^{-p(n)}V\neq\emptyset\\}$
is thickly syndetic.
###### Proof.
We may assume $p(n)=\sum\limits_{i=1}^{t_{1}}\left\lceil
b_{i}\left\lceil\alpha_{i}n\right\rceil\right\rceil-\sum\limits_{j=1}^{t_{2}}\left\lceil
c_{j}\left\lceil\beta_{j}n\right\rceil\right\rceil,n\in\mathbb{Z}$ with
$t_{1},t_{2}\in\mathbb{N},\alpha_{i},b_{i}\in\mathbb{R},i=1,\cdots,t_{1}$ and
$\beta_{j},c_{j}\in\mathbb{R},j=1,\cdots,t_{2}$.
Moreover,
$A(p(n))=\sum_{i=1}^{t_{1}}b_{i}\alpha_{i}-\sum_{j=1}^{t_{2}}c_{j}\beta_{j}\neq
0.$
For given non-empty open subsets $U,V$ of $X$, since $(X,T)$ is weakly mixing,
$N(U,V):=\\{n\in\mathbb{Z}:U\cap T^{-n}V\neq\emptyset\\}$
is thickly syndetic (see Theorem 4.7 in [10]). Then for any $L\in\mathbb{N}$,
there exists a syndetic set $A\subset\mathbb{Z}$ such that
$A+\\{0,1,\cdots,L\\}\subset N(U,V).$
We denote $A=\\{a_{1}<a_{2}<\cdots\\}$ and let $K$ be a bound for the gaps of $A$. Note that for
every $n\in\mathbb{Z}$,
$\sum_{i=1}^{t_{1}}b_{i}\alpha_{i}n-\sum_{j=1}^{t_{2}}c_{j}\beta_{j}n-\Big(\sum_{i=1}^{t_{1}}|b_{i}|+\sum_{j=1}^{t_{2}}|c_{j}|+t_{1}+t_{2}\Big)<p(n)<\sum_{i=1}^{t_{1}}b_{i}\alpha_{i}n-\sum_{j=1}^{t_{2}}c_{j}\beta_{j}n+\Big(\sum_{i=1}^{t_{1}}|b_{i}|+\sum_{j=1}^{t_{2}}|c_{j}|+t_{1}+t_{2}\Big).$
We put
$M=\sum_{i=1}^{t_{1}}b_{i}\alpha_{i}-\sum_{j=1}^{t_{2}}c_{j}\beta_{j},M_{0}=\sum_{i=1}^{t_{1}}|b_{i}|+\sum_{j=1}^{t_{2}}|c_{j}|+t_{1}+t_{2}$,
then we have
$Mn-M_{0}<p(n)<Mn+M_{0}.$
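The two-sided bound $Mn-M_{0}<p(n)<Mn+M_{0}$ can be sanity-checked numerically. The coefficients below are illustrative (not from the text), $t_{1}=t_{2}=1$, and we again read $\left\lceil\cdot\right\rceil$ as nearest-integer rounding:

```python
import math

def ni(x):
    # nearest-integer rounding (our reading of the paper's ceiling brackets)
    return math.floor(x + 0.5)

# p(n) = ni(b1 * ni(a1*n)) - ni(c1 * ni(beta1*n)), hypothetical coefficients
b1, a1, c1, beta1 = math.sqrt(2), math.sqrt(3), math.sqrt(5), math.sqrt(7)

def p(n):
    return ni(b1 * ni(a1 * n)) - ni(c1 * ni(beta1 * n))

M = b1 * a1 - c1 * beta1            # the slope A(p(n))
M0 = abs(b1) + abs(c1) + 1 + 1      # t1 + t2 = 2
assert all(M * n - M0 < p(n) < M * n + M0 for n in range(-2000, 2001))
```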
We can choose $L\in\mathbb{N}$ large enough, such that $L\gg 2M_{0}+8M$.
For $n\in\mathbb{Z}$, if $p(n)\in\\{0,1,\cdots,L\\}+a_{i}$ for some
$i\in\mathbb{N}$, then $U\cap T^{-p(n)}V\neq\emptyset$.
We consider $n\in\mathbb{Z}$ such that
$a_{i}\leq Mn-M_{0}<p(n)<Mn+M_{0}\leq a_{i}+L$
for some $i\in\mathbb{N}$. Then we have
$\frac{a_{i}}{M}+\frac{L}{M}-\frac{M_{0}}{M}\geq
n\geq\frac{a_{i}}{M}+\frac{M_{0}}{M}\quad(\text{if }M>0),$
or
$\frac{a_{i}}{M}+\frac{L}{M}-\frac{M_{0}}{M}\leq
n\leq\frac{a_{i}}{M}+\frac{M_{0}}{M}\quad(\text{if }M<0).$
Without loss of generality, we may assume that $M$ is positive.
Since
$\frac{a_{i}}{M}+\frac{M_{0}}{M}\leq\left\lceil\frac{a_{i}}{M}\right\rceil+\left\lceil\frac{M_{0}}{M}\right\rceil+2$
and
$\frac{a_{i}}{M}+\frac{L}{M}-\frac{M_{0}}{M}\geq\left\lceil\frac{a_{i}}{M}\right\rceil+\left\lceil\frac{L}{M}\right\rceil-\left\lceil\frac{M_{0}}{M}\right\rceil-3,$
then when
$n\in\\{n\in\mathbb{Z}:\left\lceil\frac{a_{i}}{M}\right\rceil+\left\lceil\frac{M_{0}}{M}\right\rceil+2\leq
n\leq\left\lceil\frac{a_{i}}{M}\right\rceil+\left\lceil\frac{L}{M}\right\rceil-\left\lceil\frac{M_{0}}{M}\right\rceil-3\\},$
we have that $p(n)\in N(U,V)$.
Let
$B=\\{b_{i}\buildrel\Delta\over{=}\left\lceil\frac{a_{i}}{M}\right\rceil+\left\lceil\frac{M_{0}}{M}\right\rceil+2:a_{i}\in
A,i=1,2,\cdots\\},$
$L_{N}={\left\lceil\frac{L}{M}\right\rceil}-2\left\lceil\frac{M_{0}}{M}\right\rceil-5>0.$
Then
$b_{i+1}-b_{i}=\left\lceil\frac{a_{i+1}}{M}\right\rceil-\left\lceil\frac{a_{i}}{M}\right\rceil\leq\frac{a_{i+1}}{M}-\frac{a_{i}}{M}+2=\frac{a_{i+1}-a_{i}}{M}+2\leq\frac{K}{M}+2$
for all $i\in\mathbb{N}$, thus $B$ is syndetic. Since $L$ can be taken arbitrarily large,
so can $L_{N}.$ Thus $B+\\{0,1,\cdots,L_{N}\\}\subset N(p,U,V)$, i.e.,
$N(p,U,V)$ is thickly syndetic. ∎
First we prove an even more special case.
###### Theorem 3.2.
Let $(X,T)$ be a weakly mixing minimal system,
$p_{1},\cdots,p_{d}\in\widetilde{SGP_{1}}$ and $(p_{1},\cdots,p_{d})$ be non-
degenerate. Then there is a dense $G_{\delta}$ subset $X_{0}$ of $X$ such that
for all $x\in X_{0}$,
$\\{(T^{p_{1}(n)}x,\cdots,T^{p_{d}(n)}x):n\in\mathbb{Z}\\}$
is dense in $X^{d}$.
Moreover, for any non-empty open subsets $U,V_{1},\cdots,V_{d}$ of $X$, for
any $\varepsilon>0\ (\varepsilon<\frac{1}{4})$, for any $s,t\in\mathbb{N}$ and
$g_{1},\cdots,g_{t}\in\widehat{SGP_{s}}$, put
$C=C(\varepsilon,g_{1},\cdots,g_{t})=\bigcap_{j=1}^{t}\\{n\in\mathbb{Z}:\\{g_{i}(n)\\}\in(-\varepsilon,\varepsilon)\\},$
$N=\\{n\in\mathbb{Z}:U\cap T^{-p_{1}(n)}V_{1}\cap\cdots\cap
T^{-p_{d}(n)}V_{d}\neq\emptyset\\},$
we have $N\cap C$ is syndetic.
###### Proof.
By Lemma 2.2, it suffices to prove the moreover part of the theorem. We will
prove it by induction on $d$.
When $d=1$, by Lemma 3.1, $N=N(p_{1},U,V_{1})$ is thickly syndetic; note that
$C\in\mathcal{F}_{GP_{s}}=\mathcal{F}_{s,0}$ is a syndetic set, hence $N\cap
C$ is syndetic.
Assume that the result holds for some $d\geq 1$. Next we will show that the result
holds for $d+1$. Let $U,V_{1},\cdots,V_{d},V_{d+1}$ be non-empty open subsets
of $X$, $0<\varepsilon<\frac{1}{4}$ and
$g_{1},\cdots,g_{t}\in\widehat{SGP_{s}}$. We put
${C}=C(\varepsilon,g_{1},\dots,g_{t}),$ $N=\\{n\in\mathbb{Z}:U\cap
T^{-p_{1}(n)}V_{1}\cap\dots\cap T^{-p_{d+1}(n)}V_{d+1}\neq\emptyset\\}.$
We will show that $N\cap C$ is syndetic.
Let
$\tilde{C}_{1}=C(\frac{\varepsilon}{2},g_{1},\dots,g_{t}),$
then $\tilde{C}_{1}\in\mathcal{F}_{GP_{s}}=\mathcal{F}_{s,0}$ is a syndetic
set. By Lemma 2.16, there exist $Q\subset\widehat{SGP_{b}}$ (for some
$b\in\mathbb{N}$) and
$\tilde{C}_{2}=C(\frac{\varepsilon}{2},Q)=\bigcap_{q(n)\in
Q}\\{n\in\mathbb{Z}:\\{q(n)\\}\in(-\frac{\varepsilon}{2},\frac{\varepsilon}{2})\\}$
such that for each $m\in\tilde{C}_{2}$, $m$ is good w.r.t. every
$q(n)\in\\{p_{1}(n),p_{2}(n),\cdots,p_{d+1}(n)\\}\cup\\{\left\lceil
g_{1}\right\rceil,\cdots,\left\lceil g_{t}\right\rceil\\}$. Let
$\tilde{C}=\tilde{C}_{1}\cap\tilde{C}_{2}$; then $\tilde{C}$ is a syndetic set.
Since $(X,T)$ is minimal, there is some $l\in\mathbb{N}$ such that
$X=\cup_{j=0}^{l}T^{j}U$. By Lemma 2.1, there are non-empty open subsets
$V_{1}^{(l)},\cdots,V_{d+1}^{(l)}$ and integers
$k_{0},k_{1},\cdots,k_{l}\in\tilde{C}$ such that for each $i=1,2,\cdots,d+1$,
one has that
$T^{p_{i}(k_{j})}T^{-j}V_{i}^{(l)}\subset V_{i},\ \text{for \ all \ }0\leq
j\leq l.$
Notice that each $k_{i}\in\tilde{C}\subset\tilde{C}_{2}$ is good w.r.t. every
$q(n)\in\\{p_{1}(n),p_{2}(n),\cdots,p_{d+1}(n)\\}\cup\\{\left\lceil
g_{1}\right\rceil,\cdots,\left\lceil g_{t}\right\rceil\\}$, $0\leq i\leq l$.
By Lemma 2.19, there is a Nil1 Bohr0-set $C_{1}^{\prime}$ associated to
$\\{p_{1},\cdots,p_{d+1}\\}$ and $\\{k_{0},k_{1},\cdots,k_{l}\\}$, and by
Lemma 2.18, there is a Nils Bohr0-set $C_{1}^{\prime\prime}$ associated to
$\\{\left\lceil g_{1}\right\rceil,\cdots,\left\lceil g_{t}\right\rceil\\}$ and
$\\{k_{0},k_{1},\cdots,k_{l}\\}$.
Put $C_{1}=C_{1}^{\prime}\cap C_{1}^{\prime\prime}$, then
$C_{1}\in\mathcal{F}_{s,0}$ is a Nils Bohr0-set. We may assume that
$\frac{\varepsilon}{2}$ is as in Lemma 2.18.
Let $q_{i}=p_{i+1}-p_{1}\in\widetilde{SGP_{1}}$, $i=1,2,\cdots,d$. Then by
induction hypothesis,
$\\{n\in\mathbb{Z}:V_{1}^{(l)}\cap T^{-q_{1}(n)}V_{2}^{(l)}\cap\cdots\cap
T^{-q_{d}(n)}V_{d+1}^{(l)}\neq\emptyset\\}\cap(\tilde{C}\cap C_{1})$
is syndetic.
Put
$E=\\{n\in\mathbb{Z}:V_{1}^{(l)}\cap T^{-q_{1}(n)}V_{2}^{(l)}\cap\cdots\cap
T^{-q_{d}(n)}V_{d+1}^{(l)}\neq\emptyset\\}\cap(\tilde{C}\cap C_{1}).$
Since $E\subset C_{1}\subset C_{1}^{\prime}$, we have
$p_{i}(m+k_{j})=p_{i}(m)+p_{i}(k_{j}),\forall m\in E$
for all $i=1,2,\dots,d+1,j=0,1,\dots,l$.
Let $m\in E$. Then there is some $x_{m}\in V_{1}^{(l)}$ such that
$T^{q_{i}(m)}x_{m}\in V_{i+1}^{(l)}$ for $i=1,\cdots,d$. Let
$y_{m}=T^{p_{1}(m)}x_{m}$. Since $X=\cup_{j=0}^{l}T^{j}U$, there is some
$b_{m}\in\\{0,1,\cdots,l\\}$ such that $T^{b_{m}}z_{m}=y_{m}$ for some
$z_{m}\in U$. Thus for each $i=1,2,\cdots,d+1$,
$\displaystyle T^{p_{i}(m+k_{b_{m}})}z_{m}$
$\displaystyle=T^{p_{i}(m+k_{b_{m}})}T^{-b_{m}}y_{m}$
$\displaystyle=T^{p_{i}(m+k_{b_{m}})}T^{-b_{m}}T^{-p_{1}(m)}x_{m}$
$\displaystyle=T^{p_{i}(m)}T^{p_{i}(k_{b_{m}})}T^{-b_{m}}T^{-p_{1}(m)}x_{m}$
$\displaystyle=T^{p_{i}(k_{b_{m}})}T^{-b_{m}}T^{p_{i}(m)-p_{1}(m)}x_{m}$
$\displaystyle=T^{p_{i}(k_{b_{m}})}T^{-b_{m}}T^{q_{i-1}(m)}x_{m}$
$\displaystyle\in T^{p_{i}(k_{b_{m}})}T^{-b_{m}}V^{(l)}_{i}\subset V_{i}.$
That is,
$z_{m}\in U\cap T^{-p_{1}(n)}V_{1}\cap\cdots\cap T^{-p_{d}(n)}V_{d}\cap
T^{-p_{d+1}(n)}V_{d+1},$
where $n=m+k_{b_{m}}\in N$.
Note that $k_{b_{m}}\in\tilde{C}$ implies that
$\\{g_{j}(k_{b_{m}})\\}\in(-\frac{\varepsilon}{2},\frac{\varepsilon}{2}),$
and $m\in E\subset C_{1}^{\prime\prime}$ implies that
$\\{g_{j}(m+k_{b_{m}})\\}\in(\\{g_{j}(k_{b_{m}})\\}-\frac{\varepsilon}{2},\\{g_{j}(k_{b_{m}})\\}+\frac{\varepsilon}{2}),$
for all $j=1,\cdots,t$. Hence $m+k_{b_{m}}\in C$. Thus
$N\cap C\supset\\{m+k_{b_{m}}:m\in E\\}$
is a syndetic set. By induction, the proof is completed. ∎
Now we can prove our main result for degree $1$ integer-valued generalized
polynomials.
###### Theorem 3.3.
Let $(X,T)$ be a weakly mixing minimal system,
$p_{1},\cdots,p_{d}\in\widetilde{GP_{1}}$ and $(p_{1},p_{2},\cdots,p_{d})$ be
non-degenerate. Then there is a dense $G_{\delta}$ subset $X_{0}$ of $X$ such
that for all $x\in X_{0}$,
$\\{(T^{p_{1}(n)}x,\cdots,T^{p_{d}(n)}x):n\in\mathbb{Z}\\}$
is dense in $X^{d}$.
Moreover, for any non-empty open subsets $U,V_{1},\cdots,V_{d}$ of $X$, for
any $\varepsilon>0\ (\varepsilon<\frac{1}{4})$, for any $s,t\in\mathbb{N}$ and
$g_{1},\cdots,g_{t}\in\widehat{SGP_{s}}$, put
$C=C(\varepsilon,g_{1},\cdots,g_{t})=\bigcap_{j=1}^{t}\\{n\in\mathbb{Z}:\\{g_{i}(n)\\}\in(-\varepsilon,\varepsilon)\\},$
$N=\\{n\in\mathbb{Z}:U\cap T^{-p_{1}(n)}V_{1}\cap\cdots\cap
T^{-p_{d}(n)}V_{d}\neq\emptyset\\},$
we have $N\cap C$ is syndetic.
###### Proof.
By Lemma 2.2, it suffices to prove the moreover part of the theorem. Let
$p_{1},\cdots,p_{d}\in\widetilde{GP_{1}}$. Then by Lemma 2.12, there exist
$h_{i}(n)\in\widetilde{SGP_{1}}$, $i=1,2,\dots,d$, and
$C_{1}=C(\delta,q_{1},\cdots,q_{k})$ such that $p_{i}(n)=h_{i}(n),\forall n\in
C_{1},i=1,2,\dots,d$.
Set
$N_{1}=\\{n\in\mathbb{Z}:U\cap T^{-h_{1}(n)}V_{1}\cap\cdots\cap
T^{-h_{d}(n)}V_{d}\neq\emptyset\\}.$
By Theorem 3.2, $N_{1}\cap(C\cap C_{1})$ is syndetic. Since for any $n\in
N_{1}\cap(C\cap C_{1})\subset C_{1}$, $p_{i}(n)=h_{i}(n),i=1,2,\cdots,d$, we
have
$N_{1}\cap(C\cap C_{1})\subset N\cap C,$
hence $N\cap C$ is syndetic. ∎
## 4\. PET-induction and the proof of Theorem 1.1
### 4.1. The PET-induction
In this section, we will prove Theorem 1.1 using PET-induction, which was
introduced by Bergelson in [1]. Basically, we associate to any finite collection
of integer-valued generalized polynomials a “complexity”, and at each step
reduce the complexity, eventually reaching the simplest case, which serves as
the basis of the induction. We first introduce the precise definition of
the “complexity”; in a sense, it is an ordering relation.
Let $p(n),q(n)\in\widetilde{SGP}$, we denote $p\sim q$ if $\deg(p)=\deg(q)$
and $\deg(p-q)<\deg(p)$. One can easily check that $\sim$ is an equivalence
relation. A _system_ $P$ is a finite subset of $\widetilde{SGP}$. Given a
system $P$, we define its _weight vector_
$\Phi(P)=(\omega_{1},\omega_{2},\cdots)$, where $\omega_{i}$ is the number of
equivalent classes under $\sim$ of degree $i$ integer-valued generalized
polynomials represented in $P$. For distinct weights
$\Phi(P)=(\omega_{1},\omega_{2},\cdots)$ and
$\Phi(P^{\prime})=(\upsilon_{1},\upsilon_{2},\cdots)$, one writes
$\Phi(P)>\Phi(P^{\prime})$ if $\omega_{d}>\upsilon_{d}$, where $d$ is the
largest $j$ satisfying $\omega_{j}\neq\upsilon_{j}$; in this case we say that
$P^{\prime}$ _precedes_ $P$. This is a well-ordering of the set of weights and
the PET-induction is simply induction on this ordering.
For example, let $P=\\{\left\lceil an\right\rceil+2n,\left\lceil
bn^{3}\left\lceil cn\right\rceil\right\rceil+\left\lceil
en^{3}\right\rceil,4n^{4},4n^{4}+n^{3},\left\lceil fn\right\rceil\left\lceil
hn\right\rceil\\}$ (where $a,b,c,e,f,h$ are distinct numbers), then
$\Phi(P)=(1,1,0,2,0,\cdots)$.
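Under the simplifying assumption that each polynomial in the example is summarized by the pair $(\deg(p),A(p))$, so that $p\sim q$ exactly when the pairs agree, the weight vector can be computed mechanically. All names and the tuple encoding below are ours:

```python
import math
from collections import defaultdict

# Simplified model (assumption): a polynomial is represented by the pair
# (deg(p), A(p)); here p ~ q iff the pairs coincide, which mirrors
# deg(p) = deg(q) and deg(p - q) < deg(p) for the example system.
a, b, c, f, h = math.sqrt(2), math.sqrt(3), math.sqrt(5), math.sqrt(7), math.sqrt(11)
system = [(1, a + 2),   # the degree-1 polynomial with A = a + 2
          (4, b * c),   # degree 4, leading coefficient bc
          (4, 4.0),     # 4n^4
          (4, 4.0),     # 4n^4 + n^3, equivalent to 4n^4
          (2, f * h)]   # degree 2, leading coefficient fh

def weight_vector(system):
    classes = defaultdict(set)
    for deg, lead in system:
        classes[deg].add(lead)              # one class per distinct pair
    top = max(classes)
    return tuple(len(classes.get(d, ())) for d in range(1, top + 1))

assert weight_vector(system) == (1, 1, 0, 2)
```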
In order to prove Theorem 1.1 for a system $P$, we will start from
$\Phi(P)=(d,0,\cdots)$ (this is true by Theorem 3.2). After that, we assume
the result holds for any system $P^{\prime}$ with $\Phi(P^{\prime})<\Phi(P)$,
and then show the result holds for $P$, which completes the proof.
### 4.2. Some Lemmas
To simplify the argument, we need to introduce three symbols: $>>$, $\approx$
and $=_{C}$.
* •
Let $a>b>0$; we write $a>>b$ iff there exists a large enough $N>0$ such that
$a>N(b+1)$.
* •
$a\approx b$ iff $|a|>>|a-b|$ and $|b|>>|a-b|$.
* •
$p(n)=_{C}q(n)$ iff $p(n)=q(n)$ for any $n\in C$.
We have the following observation.
###### Lemma 4.1.
1. (1)
Let $|a|>>1$. Then $\left\lceil a\right\rceil\approx a$.
2. (2)
Let $|a|>>1$, $|b|>>1$, $a\approx a^{\prime}$ and $b\approx b^{\prime}$. Then
$ab\approx a^{\prime}b^{\prime}$. Moreover, if it still satisfies $|a+b|>>1$,
then $a^{\prime}+b^{\prime}\approx a+b$.
3. (3)
Let $|\sum_{i=1}^{k}a_{i}|>>1$, and for any $1\leq i\leq k$, $|a_{i}|>>1$,
$a_{i}\approx a_{i}^{\prime}$. Then $|\sum_{i=1}^{k}a_{i}^{\prime}|>>1$.
For instance, $10000\sqrt{2}>>1$, $5000\sqrt{3}>>1$,
$|10000\sqrt{2}-5000\sqrt{3}|>>1$, $\left\lceil
10000\sqrt{2}\right\rceil\approx 10000\sqrt{2},\left\lceil
5000\sqrt{3}\right\rceil\approx 5000\sqrt{3},$ $10000\sqrt{2}\times
5000\sqrt{3}\approx\left\lceil 10000\sqrt{2}\right\rceil\left\lceil
5000\sqrt{3}\right\rceil.$
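The relations $>>$ and $\approx$ in this numerical illustration can be checked directly. The threshold $N=100$ below is an arbitrary stand-in for "large enough", and the helper names are ours:

```python
import math

def ni(x):
    return math.floor(x + 0.5)     # nearest integer, the paper's ceiling brackets

N = 100                            # arbitrary stand-in for "large enough"

def gg(a, b):
    return a > N * (b + 1)         # a >> b, with this fixed N

def approx(a, b):
    return gg(abs(a), abs(a - b)) and gg(abs(b), abs(a - b))

a, b = 10000 * math.sqrt(2), 5000 * math.sqrt(3)
assert gg(a, 1) and gg(b, 1)                     # a >> 1 and b >> 1
assert approx(ni(a), a) and approx(ni(b), b)     # rounding preserves ~ when |x| >> 1
assert approx(a * b, ni(a) * ni(b))              # products stay ~
```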
Recall that $A(p(n))$ is the sum of the coefficients of the maximal-degree
components of the generalized polynomial $p(n)$ (see Definition 2.13). We have
the following lemmas.
###### Lemma 4.2.
If $h(n)=\left\lceil p(n)\right\rceil\in\widetilde{SGP_{d}}$ with $\deg(h)\geq
2$ and
$p(n)=\prod_{i=1}^{k}L(a_{i,1}n^{j_{i,1}},\cdots,a_{i,l_{i}}n^{j_{i,l_{i}}})$
where $|a_{i,1}|>>1$, $j_{i,1}\geq 0$, $a_{i,1}\in\mathbb{R}$ and
$|a_{i,t}|>>1$, $j_{i,t}\geq 1$, $a_{i,t}\in\mathbb{R}\setminus\mathbb{Q}$ for
$t=2,\cdots,l_{i}$, $1\leq i\leq k$. Let $0\neq m\in\mathbb{Z}$. Then there
exist a Nil Bohr0 set $C$ and $D(h(n),m)\in\widetilde{SGP_{d-1}}$ such that
$D(h(n),m)=_{C}h(n+m)-h(n)-h(m)$
and
$A(D(h(n),m))\approx\deg(h)mA(h(n)).$
We call $D(h(n),m)$ the derivative of $h(n)$ w.r.t. $m$.
###### Proof.
We first prove it for two special cases to illustrate the idea. Then we prove
it for the general case.
Note: For any $k\in\mathbb{N}$,
$k\mathbb{Z}=\\{n\in\mathbb{Z}:\\{\frac{n}{k}\\}\in(-\frac{1}{2k},\frac{1}{2k})\\}$
is a Nil1 Bohr0-set, and for any $a\in\mathbb{Q}$, there exists
$k_{0}\in\mathbb{N}$ such that $\left\lceil
an\right\rceil=_{k_{0}\mathbb{Z}}an$.
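The Note can be verified directly for a concrete rational $a$; here $a=3/7$ and $k_{0}=7$ are our illustrative choices, with exact arithmetic via `fractions`:

```python
import math
from fractions import Fraction

def ni(x):
    # nearest-integer rounding; Fraction(1, 2) keeps the arithmetic exact
    return math.floor(x + Fraction(1, 2))

a, k0 = Fraction(3, 7), 7                    # a in Q with denominator k0
for n in range(-10 * k0, 10 * k0 + 1, k0):   # n ranges over k0 * Z
    assert ni(a * n) == a * n                # rounding is the identity on k0 * Z
```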
Special case 1. Assume that $p(n)=L(an,bn^{2})$ with $a>>1$, $b>>1$ and
$a\in\mathbb{R},b\in\mathbb{R}\setminus\mathbb{Q}$. By Note, we may assume
that $a\in\mathbb{R}\setminus\mathbb{Q}$. Let $0\neq m\in\mathbb{Z}$; then $m$
is good w.r.t. $p(n)$. The expansion of $b(n+m)^{2}$ is
$b(n+m)^{2}=b\sum_{i=0}^{2}C_{2}^{i}m^{i}n^{2-i}=b(n^{2}+2mn+m^{2}).$
By Lemma 2.17, there exists a Nil Bohr0 set $C$ such that
$\left\lceil b(n+m)^{2}\right\rceil=_{C}\left\lceil
bn^{2}\right\rceil+\left\lceil 2bmn\right\rceil+\left\lceil
bm^{2}\right\rceil,$ $\displaystyle\left\lceil a(n+m)\left\lceil
b(n+m)^{2}\right\rceil\right\rceil$ $\displaystyle=_{C}$
$\displaystyle\left\lceil(an+am)(\left\lceil bn^{2}\right\rceil+\left\lceil
2bmn\right\rceil+\left\lceil bm^{2}\right\rceil)\right\rceil$
$\displaystyle=_{C}$ $\displaystyle\left\lceil an\left\lceil
bn^{2}\right\rceil\right\rceil+\left\lceil an\left\lceil
2bmn\right\rceil\right\rceil+\left\lceil an\left\lceil
bm^{2}\right\rceil\right\rceil$ $\displaystyle+\left\lceil am\left\lceil
bn^{2}\right\rceil\right\rceil+\left\lceil am\left\lceil
2bmn\right\rceil\right\rceil+\left\lceil am\left\lceil
bm^{2}\right\rceil\right\rceil.$
We denote
$\displaystyle D(h(n),m)$ $\displaystyle=$ $\displaystyle\left\lceil
an\left\lceil 2bmn\right\rceil\right\rceil+\left\lceil an\left\lceil
bm^{2}\right\rceil\right\rceil+\left\lceil am\left\lceil
bn^{2}\right\rceil\right\rceil+\left\lceil am\left\lceil
2bmn\right\rceil\right\rceil,$
then $D(h(n),m)=_{C}h(n+m)-h(n)-h(m)$. The maximal-degree components of
$D(h(n),m)$ are $\left\lceil an\left\lceil 2bmn\right\rceil\right\rceil$ and
$\left\lceil am\left\lceil bn^{2}\right\rceil\right\rceil$, hence
$A(D(h(n),m))=2abm+abm=3abm=\deg(h)mA(h(n)).$
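The conclusion $A(D(h(n),m))=3abm$ can be checked numerically: for large $n$ the quotient $(h(n+m)-h(n)-h(m))/n^{2}$ should approach $3abm$. A sketch with illustrative values of $a,b,m$ (our choices), again reading $\left\lceil\cdot\right\rceil$ as nearest-integer rounding:

```python
import math

def ni(x):
    return math.floor(x + 0.5)               # nearest-integer rounding

a, b, m = math.sqrt(2), math.sqrt(3), 5

def h(n):
    return ni(a * n * ni(b * n * n))         # h(n) = ni(a*n*ni(b*n^2))

n = 10**5
lead = (h(n + m) - h(n) - h(m)) / n**2       # leading coefficient of D(h(n), m)
assert abs(lead - 3 * a * b * m) < 0.01      # close to deg(h)*m*A(h) = 3abm
```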
Special case 2. Assume that $p(n)=\left\lceil an\right\rceil\left\lceil
bn\right\rceil$.
By Note, we may assume that $a,b\in\mathbb{R}\backslash\mathbb{Q}$. Let $0\neq
m\in\mathbb{Z}$; then $m$ is good w.r.t. $p(n)$. By Lemma 2.17, there exists a
Nil Bohr0 set $C$ such that
$\displaystyle\left\lceil a(n+m)\right\rceil\left\lceil b(n+m)\right\rceil$
$\displaystyle=_{C}$ $\displaystyle(\left\lceil an\right\rceil+\left\lceil
am\right\rceil)(\left\lceil bn\right\rceil+\left\lceil bm\right\rceil)$
$\displaystyle=$ $\displaystyle\left\lceil an\right\rceil\left\lceil
bn\right\rceil+\left\lceil am\right\rceil\left\lceil
bn\right\rceil+\left\lceil an\right\rceil\left\lceil
bm\right\rceil+\left\lceil am\right\rceil\left\lceil bm\right\rceil.$
Let $D(h(n),m)=\left\lceil am\right\rceil\left\lceil
bn\right\rceil+\left\lceil an\right\rceil\left\lceil bm\right\rceil$, then
$D(h(n),m)=_{C}h(n+m)-h(n)-h(m)$ and
$A(D(h(n),m))=a\left\lceil bm\right\rceil+\left\lceil am\right\rceil b\approx
2mab=\deg(h)mA(h(n)).$
The general case. Assume that
$p(n)=\prod_{i=1}^{k}L(a_{i,1}n^{j_{i,1}},\cdots,a_{i,l_{i}}n^{j_{i,l_{i}}})$.
By the arguments of Special cases 1 and 2, we may assume that for $0\neq
m\in\mathbb{Z}$, $m$ is good w.r.t. $p(n)$.
By Lemma 2.17, there exist a Nil Bohr0 set $C$ and $D(h(n),m)$ such that
$D(h(n),m)=_{C}h(n+m)-h(n)-h(m)$. The maximal-degree components of $D(h(n),m)$
are
$\left\lceil\prod_{i=1}^{k}L(a_{i,1}n^{j_{i,1}},\cdots,a_{i,t}C_{j_{i,t}}^{1}mn^{j_{i,t}-1},\cdots,a_{i,l_{i}}n^{j_{i,l_{i}}})\right\rceil,1\leq
t\leq l_{i},1\leq i\leq k.$
Hence
$\displaystyle A(D(h(n),m))$ $\displaystyle\approx$
$\displaystyle\sum_{i=1}^{k}\sum_{t=1}^{l_{i}}(a_{i,1}\cdots(C_{j_{i,t}}^{1}ma_{i,t})\cdots
a_{i,l_{i}})\prod_{s\neq i,s=1}^{k}(a_{s,1}\cdots a_{s,l_{s}})$
$\displaystyle=$ $\displaystyle\prod_{s=1}^{k}(a_{s,1}\cdots
a_{s,l_{s}})\sum_{i=1}^{k}\sum_{t=1}^{l_{i}}j_{i,t}m$ $\displaystyle=$
$\displaystyle A(h(n))\deg(h)m.$
∎
###### Lemma 4.3.
Let $h(n)=\sum_{k=1}^{l}c_{k}\left\lceil
p_{k}(n)\right\rceil\in\widetilde{SGP_{d}}$, where $c_{k}\in\mathbb{Z}$,
$p_{k}(n)\in\widehat{SGP_{d}}$ as in the above lemma, $|A(h(n))|>>1$ and
$\deg(h)\geq 2$. Let $0\neq m\in\mathbb{Z}$. Then there exist a Nil Bohr0 set
$C$ and $D(h(n),m)\in\widetilde{SGP_{d-1}}$ such that
$D(h(n),m)=_{C}h(n+m)-h(n)-h(m),$ $A(D(h(n),m))\approx\deg(h)mA(h(n)).$
###### Proof.
Notice that when calculating $A(h)$, we just need to consider the maximal-
degree components; hence we can assume that
$\deg(p_{k}(n))=\deg(h(n)),k=1,\cdots,l$. Then
$A(h(n))=\sum_{k=1}^{l}A(p_{k}(n))$. For any $k=1,\cdots,l$, by Lemma 4.2,
there exist Nil Bohr0 set $C_{k}$ and $D(\left\lceil p_{k}(n)\right\rceil,m)$
such that
$D(\left\lceil p_{k}(n)\right\rceil,m)=_{C_{k}}\left\lceil
p_{k}(n+m)\right\rceil-\left\lceil p_{k}(n)\right\rceil-\left\lceil
p_{k}(m)\right\rceil,$ $A(D(\left\lceil
p_{k}(n)\right\rceil,m))\approx\deg(p_{k})mA(p_{k}(n)).$
Let $C=\bigcap_{k=1}^{l}C_{k}$ and $D(h(n),m)=\sum_{k=1}^{l}c_{k}D(\left\lceil
p_{k}(n)\right\rceil,m)$. Then
$D(h(n),m)=_{C}h(n+m)-h(n)-h(m),$
$A(D(h(n),m))=\sum_{k=1}^{l}c_{k}A(D(\left\lceil
p_{k}(n)\right\rceil,m))\approx\sum_{k=1}^{l}c_{k}\deg(p_{k})mA(p_{k}(n))=\deg(h)mA(h(n)).$
∎
###### Lemma 4.4.
Let $h_{1},h_{2}\in\widetilde{SGP}$, $(h_{1},h_{2})$ be non-degenerate,
$h_{1}\sim h_{2}$, $\deg(h_{1})\geq 2$ and $h_{1},h_{2}$ satisfy the conditions
in the above lemmas. Then for any $0\neq m\in\mathbb{Z}$,
$D(h_{1}(n),m)-D(h_{2}(n),m)=D(h_{1}(n)-h_{2}(n),m).$
###### Proof.
Let $\deg(h_{1})=d$. Since $h_{1}\sim h_{2}$, we have $\deg(h_{1}-h_{2})=r<d$.
Write
$h_{1}(n)=\sum_{k=1}^{d}\sum_{j=1}^{l_{k,1}}\left\lceil
p_{k,j,1}(n)\right\rceil,\ \
h_{2}(n)=\sum_{k=1}^{d}\sum_{j=1}^{l_{k,2}}\left\lceil
p_{k,j,2}(n)\right\rceil$
with $p_{k,j,i}\in\widehat{SGP_{k}}$, $\deg(\left\lceil
p_{k,j,i}\right\rceil)=k,k=1,2,\dots,d,j=1,2,\cdots,l_{k,i},i=1,2.$ Then for
each $r+1\leq k\leq d$,
$\sum_{j=1}^{l_{k,1}}\left\lceil
p_{k,j,1}(n)\right\rceil=\sum_{j=1}^{l_{k,2}}\left\lceil
p_{k,j,2}(n)\right\rceil.$
Hence
$D(h_{1}(n),m)-D(h_{2}(n),m)=D(h_{1}(n)-h_{2}(n),m).$
∎
###### Lemma 4.5.
Let $p_{i}\in\widetilde{SGP}$, $(p_{1},p_{2},\cdots,p_{d})$ be non-degenerate
with $\deg(p_{i})\geq 2,1\leq i\leq d$ and $p_{i}$ satisfy the conditions in the
above lemmas. Then there exists a sequence $\\{r(n)\\}_{n=0}^{\infty}$ of
natural numbers, such that for any $l\in\mathbb{N}$ and
$k_{0},k_{1},\cdots,k_{l}\in\mathbb{Z}$ with $|k_{0}|>r(0)$ and
$|k_{i}|>|k_{i-1}|+r(|k_{i-1}|)$, there exist a Nil Bohr0 set $C$ and
$q_{i,j}\in\widetilde{SGP}$ with
$q_{i,j}(n):=D(p_{i}(n),k_{j})+p_{i}(n)-p_{1}(n)=_{C}p_{i}(n+k_{j})-p_{i}(k_{j})-p_{1}(n)$
and
(3) $|A(q_{i,j}(n))|>>1,|A(q_{i,j}(n)-q_{i^{\prime},j^{\prime}}(n))|>>1$
for all
$(i,j)\neq(i^{\prime},j^{\prime})\in\\{1,2,\cdots,d\\}\times\\{0,1,\cdots,l\\}$.
###### Proof.
Let
$M=\max_{1\leq i\neq i^{\prime}\leq
d}\\{\deg(p_{i}),|A(p_{i})|,|A(p_{i}-p_{i^{\prime}})|\\},\quad L=\min_{1\leq
i\neq i^{\prime}\leq d}\\{\deg(p_{i}),|A(p_{i})|,|A(p_{i}-p_{i^{\prime}})|\\}.$
Set $r(n)=10^{10}\frac{M^{2}}{L^{2}}(n+1)$, $n=0,1,\cdots$. We will show that if
$|k_{i}|>|k_{i-1}|+r(|k_{i-1}|)$, then for all
$(i,j)\neq(i^{\prime},j^{\prime})\in\\{1,2,\cdots,d\\}\times\\{0,1,\cdots,l\\}$,
(3) holds. To do so, for $k_{j},k_{j^{\prime}}\in\mathbb{Z}$, we need to
calculate $A(q_{i,j}(n))$ and $A(q_{i,j}(n)-q_{i^{\prime},j^{\prime}}(n))$.
Case 1: The value of $A(q_{i,j}(n))$. Notice that
$q_{i,j}(n)=D(p_{i}(n),k_{j})+p_{i}(n)-p_{1}(n)$.
* •
If $p_{i}(n)\nsim p_{1}(n)$, then the maximal-degree component of
$q_{i,j}(n)$ lies either in $p_{i}(n)$ or in $p_{1}(n)$; hence $A(q_{i,j}(n))$
is equal to $A(p_{i}(n))$ or $A(p_{1}(n))$, and so $|A(q_{i,j}(n))|>>1$.
* •
If $p_{i}(n)\sim p_{1}(n)$, there are two cases. If
$\deg(p_{i}-p_{1})<\deg(p_{i})-1$, then
(4) $A(q_{i,j}(n))=A(D(p_{i},k_{j}))\approx k_{j}\deg(p_{i})A(p_{i}(n)).$
If $\deg(p_{i}-p_{1})=\deg(p_{i})-1$ and
$|k_{j}\deg(p_{i})A(p_{i}(n))+A(p_{i}(n)-p_{1}(n))|>>1$, then by Lemma 4.1 we
have
(5) $A(q_{i,j}(n))\approx k_{j}\deg(p_{i})A(p_{i}(n))+A(p_{i}(n)-p_{1}(n)).$
Case 2: The value of $A(q_{i,j}(n)-q_{i^{\prime},j^{\prime}}(n))$.
$q_{i,j}(n)-q_{i^{\prime},j^{\prime}}(n)=D(p_{i},k_{j})-D(p_{i^{\prime}},k_{j^{\prime}})+p_{i}(n)-p_{i^{\prime}}(n).$
* •
If $p_{i}(n)\nsim p_{i^{\prime}}(n)$, then
$A(q_{i,j}(n)-q_{i^{\prime},j^{\prime}}(n))$ is equal to $A(p_{i}(n))$ or
$A(p_{i^{\prime}}(n))$, hence $|A(q_{i,j}(n))|>>1$ for all $j=0,\cdots,l$.
* •
If $p_{i}(n)\sim p_{i^{\prime}}(n)$, there are two cases. If $j=j^{\prime}$,
then by Lemma 4.4,
$D(p_{i}(n),k_{j})-D(p_{i^{\prime}}(n),k_{j})=D(p_{i}(n)-p_{i^{\prime}}(n),k_{j})$
and hence
$|A(q_{i,j}(n)-q_{i^{\prime},j}(n))|=|A(p_{i}(n)-p_{i^{\prime}}(n))|>>1.$ If
$j\neq j^{\prime}$, there are two cases. If
$\deg(p_{i}-p_{i^{\prime}})<\deg(p_{i})-1$ and
$|k_{j}\deg(p_{i})A(p_{i})-k_{j^{\prime}}\deg(p_{i^{\prime}})A(p_{i^{\prime}})|>>1$,
then by Lemma 4.1 one has
(6) $A(q_{i,j}(n)-q_{i^{\prime},j^{\prime}}(n))\approx
k_{j}\deg(p_{i})A(p_{i})-k_{j^{\prime}}\deg(p_{i^{\prime}})A(p_{i^{\prime}}).$
If $\deg(p_{i}-p_{i^{\prime}})=\deg(p_{i})-1$ and
$|k_{j}\deg(p_{i})A(p_{i})-k_{j^{\prime}}\deg(p_{i^{\prime}})A(p_{i^{\prime}})+A(p_{i}-p_{i^{\prime}})|>>1$,
then by Lemma 4.1 one has
(7) $A(q_{i,j}(n)-q_{i^{\prime},j^{\prime}}(n))\approx
k_{j}\deg(p_{i})A(p_{i})-k_{j^{\prime}}\deg(p_{i^{\prime}})A(p_{i^{\prime}})+A(p_{i}-p_{i^{\prime}}).$
Now we will show that (3) holds. First choose any $|k_{0}|>r(0)$, then by (4)
$|A(q_{i,0})|>>1,$
by (5)
$|A(q_{i,0})|\geq|k_{0}|L^{2}-M>10^{10}\frac{M^{2}}{L^{2}}L^{2}-M>>1.$
Thus (3) holds for $k_{0}$.
Next we choose $|k_{1}|>|k_{0}|+r(|k_{0}|)$, by (4) and (5),
$|A(q_{i,0})|>>1,|A(q_{i,1})|>>1.$
By (6)
$|A(q_{i,1}-q_{i^{\prime},0})|\geq|k_{1}|L^{2}-|k_{0}|M^{2}>(|k_{0}|+10^{10}\frac{M^{2}}{L^{2}}(|k_{0}|+1))L^{2}-|k_{0}|M^{2}>>1.$
By (7)
$|A(q_{i,1}-q_{i^{\prime},0})|>|k_{1}|L^{2}-|k_{0}|M^{2}-M>(|k_{0}|+10^{10}\frac{M^{2}}{L^{2}}(|k_{0}|+1))L^{2}-|k_{0}|M^{2}-M>>1.$
Hence (3) holds for $k_{0},k_{1}$.
Inductively, we can show that (3) holds for $k_{0},k_{1},\cdots,k_{l}$, which
completes the proof.
∎
###### Lemma 4.6.
Let $U,V_{1},\cdots,V_{d}$ be non-empty open sets of $X$. Let $k\in\mathbb{N}$
and denote
$N=\\{n\in\mathbb{Z}:U\cap T^{-p_{1}(n)}V_{1}\cap\cdots\cap
T^{-p_{d}(n)}V_{d}\neq\emptyset\\},$ $N_{1}=\\{m\in\mathbb{Z}:U\cap
T^{-p_{1}(km)}V_{1}\cap\cdots\cap T^{-p_{d}(km)}V_{d}\neq\emptyset\\}.$
If for any $\varepsilon>0$, for any $s,t\in\mathbb{N}$ and
$g_{1},\cdots,g_{t}\in\widehat{SGP_{s}}$,
$N_{1}\cap C(\varepsilon,g_{1},\cdots,g_{t})$
is syndetic, then $N\cap C(\varepsilon,g_{1},\cdots,g_{t})$ is syndetic.
###### Proof.
Let $\tilde{g}_{i}(n)=g_{i}(kn)\in\widehat{SGP_{s}}$ and
$C_{1}=C(\varepsilon,\tilde{g}_{1},\cdots,\tilde{g}_{t})\cap
C(\varepsilon,g_{1},\cdots,g_{t}).$
By assumption, $N_{1}\cap C_{1}$ is syndetic. For any $n\in N_{1}\cap C_{1}$,
we have $kn\in N\cap C(\varepsilon,g_{1},\cdots,g_{t})$. Since $N_{1}\cap
C_{1}$ is syndetic, $N\cap C(\varepsilon,g_{1},\cdots,g_{t})$ is syndetic. ∎
### 4.3. The proof of Theorem 1.1
Notice that for any $0\neq a\in\mathbb{R}$, if we choose
$C=\\{n\in\mathbb{Z}:\\{abn^{k}\\}\in(-\frac{1}{4},\frac{1}{4}),\\{bn^{k}\\}\in(-\frac{1}{4|a|},\frac{1}{4|a|})\\},$
then we have $\left\lceil a\left\lceil
bn^{k}\right\rceil\right\rceil=_{C}\left\lceil abn^{k}\right\rceil$. Combining
this fact with Lemma 4.6, from now on we always assume that all
$p(n)\in\widetilde{SGP}$ in the following theorem satisfy the conditions in
Lemma 4.2 and Lemma 4.3.
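The identity above can be verified in one line; the sketch below is our own,
under the reading (as in [9]) that $\left\lceil x\right\rceil$ denotes the
integer nearest to $x$ and $\\{x\\}=x-\left\lceil x\right\rceil$ the signed
fractional part:

```latex
% A sketch, assuming \lceil x\rceil is the nearest integer to x
% and {x} = x - \lceil x\rceil is the signed fractional part:
\begin{align*}
a\left\lceil bn^{k}\right\rceil
  &= a\bigl(bn^{k}-\{bn^{k}\}\bigr)
   = abn^{k}-a\{bn^{k}\}\\
  &= \left\lceil abn^{k}\right\rceil
     + \underbrace{\{abn^{k}\}-a\{bn^{k}\}}_{\in\left(-\frac{1}{2},\,\frac{1}{2}\right)\ \text{for}\ n\in C},
\end{align*}
```

so for every $n\in C$ the nearest integer to $a\left\lceil
bn^{k}\right\rceil$ is $\left\lceil abn^{k}\right\rceil$, i.e. the identity
holds on $C$.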
We first prove the following theorem.
###### Theorem 4.7.
Let $(X,T)$ be a weakly mixing minimal system,
$p_{1},\cdots,p_{d}\in\widetilde{SGP}$ and $(p_{1},p_{2},\cdots,p_{d})$ be
non-degenerate. Then there is a dense $G_{\delta}$ subset $X_{0}$ of $X$ such
that for all $x\in X_{0}$,
$\\{(T^{p_{1}(n)}x,\cdots,T^{p_{d}(n)}x):n\in\mathbb{Z}\\}$
is dense in $X^{d}$.
Moreover, for any non-empty open subsets $U,V_{1},\cdots,V_{d}$ of $X$, for
any $\varepsilon>0$, for any $s,t\in\mathbb{N}$ and
$g_{1},\cdots,g_{t}\in\widehat{SGP_{s}}$, let
$C=C(\varepsilon,g_{1},\cdots,g_{t}),$ $N=\\{n\in\mathbb{Z}:U\cap
T^{-p_{1}(n)}V_{1}\cap\cdots\cap T^{-p_{d}(n)}V_{d}\neq\emptyset\\},$
we have $N\cap C$ is syndetic.
###### Proof.
We will use PET-induction. Let $P=\\{p_{1},\cdots,p_{d}\\}$. As in the
argument above (by Lemma 4.6), we may assume that $|A(p_{i})|>>1$ and
$|A(p_{i}-p_{j})|>>1$ for any $1\leq i\neq j\leq d$.
We start from the system whose weight vector is $(d,0,\cdots)$. That is, the
degree of all the elements of $P$ is 1. By Lemma 3.1 and Theorem 3.2, we know
that
* $*_{1}$
$(X,T)$ is $P$-thickly-syndetic transitive.
* $*_{2}$
For any non-empty open subsets $U,V_{1},\cdots,V_{d}$ of $X$, for any
$\varepsilon>0$, for any $s,t\in\mathbb{N}$ and
$g_{1},g_{2},\cdots,g_{t}\in\widehat{SGP_{s}}$, put
$C=C(\varepsilon,g_{1},\cdots,g_{t}),$ $N=\\{n\in\mathbb{Z}:U\cap
T^{-p_{1}(n)}V_{1}\cap\cdots\cap T^{-p_{d}(n)}V_{d}\neq\emptyset\\},$
we have $N\cap C$ is syndetic.
Let $P\subset\widetilde{SGP}$ be a system whose weight vector is
$>(d,0,\cdots)$, and assume that all systems $P^{\prime}$ preceding $P$
satisfy $*_{1}$ and $*_{2}$.
Now we show that $P$ satisfies them as well. More precisely, in Claim 1 we
will show that if $*_{1}$ and $*_{2}$ hold for every preceding $P^{\prime}$,
then $*_{1}$ holds for $P$; in Claim 2 we will show that if $*_{1}$ holds for
$P$ and $*_{2}$ holds for every preceding $P^{\prime}$, then $*_{2}$ holds for
$P$.
Claim 1. $*_{1}$ holds for $P$, i.e. $(X,T)$ is $P$-thickly-syndetic
transitive.
Proof of Claim 1: Since the intersection of two thickly syndetic sets is still
a thickly syndetic set, it is sufficient to show that for any $p\in P$, and
for any given non-empty open subsets $U,V$ of $X$,
$N(p,U,V)=\\{n\in\mathbb{Z}:U\cap T^{-p(n)}V\neq\emptyset\\}$
is thickly syndetic.
If $\deg(p)=1$, by Lemma 3.1, $N(p,U,V)$ is thickly syndetic.
We assume $\deg(p)\geq 2$. As $(X,T)$ is minimal, there is some
$l\in\mathbb{N}$ such that $X=\cup_{i=0}^{l}T^{i}U$.
Let $L\in\mathbb{N}$ and $k_{i}=i(L+2)+1$, for all $i\in\\{0,1,\cdots,l\\}$.
Since $(X,T)$ is weakly mixing and minimal, for any
$(i,j)\in\left\\{{0,1,\cdots,l}\right\\}\times\\{{0,1,\cdots,L}\\}$,
$N(V,(T^{p(k_{i}+j)-i})^{-1}V)$ is thickly syndetic (see Theorem 4.7 in
[10]), hence
$C:=\bigcap\limits_{(i,j)\in\left\\{{0,1,\cdots,l}\right\\}\times\left\\{{0,1,\cdots,L}\right\\}}\\{k\in\mathbb{Z}:V\cap
T^{-k}(T^{p(k_{i}+j)-i})^{-1}V\neq\emptyset\\}$
is a thickly syndetic set. Choose $c\in C$. Then for any
$(i,j)\in\\{{0,1,\cdots,l}\\}\times\\{{0,1,\cdots,L}\\}$, one has
$V_{i,j}:=V\cap(T^{p(k_{i}+j)+c-i})^{-1}V$
is a non-empty open subset of $V$ and
$T^{p(k_{i}+j)+c-i}V_{i,j}\subset V.$
By Lemma 4.3 and Lemma 2.19, there is a Nil${}_{\deg(p)}$ Bohr0-set $C_{1}$ associated
to $p$ and $\\{k_{i}+j:0\leq i\leq l,0\leq j\leq L\\}$. This means for every
$(i,j)\in\\{{0,1,\cdots,l}\\}\times\\{{0,1,\cdots,L}\\}$, there exists
$D(p(n),k_{i}+j)=_{C_{1}}p(k_{i}+j+n)-p(k_{i}+j)-p(n)$ with
$\deg(D(p(n),k_{i}+j))<\deg(p)$. Let $q_{i,j}(n)=D(p(n),k_{i}+j)$ and
$P^{\prime}=\\{q_{i,j}:(i,j)\in\\{0,1,\cdots,l\\}\times\\{0,1,\cdots,L\\}\\}.$
Then $P^{\prime}\subset\widetilde{SGP}$. Since for any $q_{i,j}\in
P^{\prime}$, $\deg(q_{i,j})<\deg(p)$, we have
$\Phi(P^{\prime})<\Phi(\\{p\\})$.
For any
$(i,j)\neq(i^{\prime},j^{\prime})\in\\{{0,1,\cdots,l}\\}\times\\{{0,1,\cdots,L}\\}$,
recall that we chose $k_{i}=i(L+2)+1$, so $k_{i}+j\neq
k_{i^{\prime}}+j^{\prime}$. Hence by Lemma 4.3 and Lemma 4.1,
$A(q_{i,j})\approx\deg(p)(k_{i}+j)A(p(n)),\ |A(q_{i,j})|>>1,$
$A(q_{i^{\prime},j^{\prime}})\approx\deg(p)(k_{i^{\prime}}+j^{\prime})A(p(n)),\
|A(q_{i^{\prime},j^{\prime}})|>>1,$
$A(q_{i^{\prime},j^{\prime}}-q_{i,j})\approx\deg(p)(k_{i^{\prime}}+j^{\prime}-k_{i}-j)A(p(n)),\
|A(q_{i^{\prime},j^{\prime}}-q_{i,j})|>>1.$
By the inductive assumption that $*_{2}$ holds for $P^{\prime}$, we have
$E=\\{n\in\mathbb{Z}:V\cap\bigcap_{(i,j)\in\\{{0,1,\cdots,l}\\}\times\\{{0,1,\cdots,L}\\}}T^{-q_{i,j}(n)}V_{i,j}\neq\emptyset\\}\cap
C_{1}$
is syndetic.
For $m\in E$, we have $q_{i,j}(m)=p(k_{i}+j+m)-p(k_{i}+j)-p(m)$. And there
exists $x_{m}\in V$ such that $T^{q_{i,j}(m)}x_{m}\in V_{i,j}$ for all
$(i,j)\in\\{{0,1,\cdots,l}\\}\times\\{{0,1,\cdots,L}\\}$. Let
$y_{m}=T^{-p(m)}x_{m}$. Since $X=\cup_{i=0}^{l}T^{i}U$, there are $z_{m}\in U$
and $0\leq b_{m}\leq l$ such that $T^{c}y_{m}=T^{b_{m}}z_{m}$. Then
$z_{m}=T^{-p(m)+c-b_{m}}x_{m}$ and we have
$\displaystyle T^{p(m+k_{b_{m}}+j)}z_{m}$
$\displaystyle=T^{p(m+k_{b_{m}}+j)}T^{-p(m)+c-b_{m}}x_{m}$
$\displaystyle=T^{p(k_{b_{m}}+j)+c-b_{m}}(T^{p(m+k_{b_{m}}+j)-p(k_{b_{m}}+j)-p(m)}x_{m})$
$\displaystyle=T^{p(k_{b_{m}}+j)+c-b_{m}}(T^{q_{b_{m},j}(m)}x_{m})$
$\displaystyle\in T^{p(k_{b_{m}}+j)+c-b_{m}}V_{b_{m},j}\subset V$
for each $j\in\\{0,1,\cdots,L\\}$. Thus
$\\{m+k_{b_{m}}+j:0\leq j\leq L\\}\subset N(p,U,V).$
Hence the set $\\{n\in\mathbb{Z}:n+j\in N(p,U,V)\ \text{for}\
j=0,1,\cdots,L\\}$ contains the syndetic set $\\{m+k_{b_{m}}:m\in E\\}$. As
$L\in\mathbb{N}$ can be arbitrarily large, $N(p,U,V)$ is a thickly syndetic set.
Claim 2. $*_{2}$ holds for $P$. That is, for any non-empty open subsets
$U,V_{1},\cdots,V_{d}$ of $X$, for any $\varepsilon>0$, for any
$s,t\in\mathbb{N}$ and $g_{1},g_{2},\cdots,g_{t}\in\widehat{SGP_{s}}$, put
$C=C(\varepsilon,g_{1},\cdots,g_{t}),$ $N=\\{n\in\mathbb{Z}:U\cap
T^{-p_{1}(n)}V_{1}\cap\cdots\cap T^{-p_{d}(n)}V_{d}\neq\emptyset\\},$
we have $N\cap C$ is syndetic.
Proof of Claim 2: By permuting the indices, we may assume that $\deg(p_{i})$
does not decrease as $i$ increases. Assume that $\deg(p_{w})=1$ and
$\deg(p_{w+1})\geq 2$ for some $1\leq w<d$; if $\deg p\geq 2$ for every $p\in
P$, we put $w=0$. Let $\\{r(n)\\}_{n=0}^{\infty}$ be the sequence in Lemma 4.5 w.r.t.
$(p_{w+1},\cdots,p_{d})$.
Put
$\tilde{C}=C(\frac{\varepsilon}{2},g_{1},\cdots,g_{t}),$ $h_{1}=\max_{p\in
P}\deg p,\ h_{2}=\max_{1\leq j\leq t}\deg g_{j}.$
Since $(X,T)$ is minimal, there is some $l\in\mathbb{N}$ such that
$X=\cup_{i=0}^{l}T^{i}U$.
By Claim 1, $(X,T)$ is $P$-thickly-syndetic transitive. Then by Lemma 2.1, there
are integers $\\{k_{j}\\}_{j=0}^{l}\subset\tilde{C}$ and non-empty open sets
$V_{i}^{(l)}\subset V_{i},1\leq i\leq d$ such that
$|k_{j}|>|k_{j-1}|+r(|k_{j-1}|)$ for $j=0,\cdots,l\ (k_{-1}=0)$ and
$T^{p_{i}(k_{j})}T^{-j}V_{i}^{(l)}\subset V_{i},0\leq j\leq l,1\leq i\leq d.$
By Lemma 2.19, there is a Nil${}_{h_{1}}$ Bohr0-set $C_{1}^{\prime}$
associated to $\\{p_{1},\cdots,p_{d}\\}$ and $\\{k_{0},\cdots,k_{l}\\}$. By
Lemma 2.18, there is a Nil${}_{h_{2}}$ Bohr0-set $C_{1}^{\prime\prime}$
associated to $\\{g_{1},\cdots,g_{t}\\}$ and $\\{k_{0},\cdots,k_{l}\\}$. Put
$C_{1}=C_{1}^{\prime}\cap C_{1}^{\prime\prime}$, then
$C_{1}\in\mathcal{F}_{h,0}$, where $h=\max\\{h_{1},h_{2}\\}$. Without loss of
generality, we may assume that $\frac{\varepsilon}{2}$ is as in Lemma 2.18.
Fix $(i,j)\in\\{1,\cdots,d\\}\times\\{0,\cdots,l\\}$. For $w+1\leq i\leq d$,
by Lemma 4.3, there exists $D(p_{i}(n),k_{j})\in\widetilde{SGP}$ with
$\deg(D(p_{i}(n),k_{j}))<\deg(p_{i})$ such that
$D(p_{i}(n),k_{j})=p_{i}(k_{j}+n)-p_{i}(k_{j})-p_{i}(n),\forall n\in C_{1}.$
Let $p_{i,j}(n)=p_{i}(k_{j}+n)-p_{i}(k_{j})-p_{1}(n)$ and
$q_{i,j}(n)=D(p_{i}(n),k_{j})+p_{i}(n)-p_{1}(n)$, then
$q_{i,j}(n)\in\widetilde{SGP}$ and
$p_{i,j}(n)=q_{i,j}(n),\forall n\in C_{1}.$
For $w\geq 1,1\leq i\leq w$, since $\deg(p_{i})=1$, we have
$p_{i}(k_{j}+n)-p_{i}(k_{j})-p_{i}(n)=0,\forall n\in C_{1}$, and we let
$q_{i,j}(n)=p_{i}(n)-p_{1}(n),j=0,1,\cdots,l$.
Let
$P^{\prime}=\\{p_{2}(n)-p_{1}(n),\cdots,p_{w}(n)-p_{1}(n)\\}\cup\\{q_{i,j}(n):(i,j)\in\\{w+1,\cdots,d\\}\times\\{0,1,\cdots,l\\}\\}.$
Then $P^{\prime}\subset\widetilde{SGP}$ and $\Phi(P^{\prime})<\Phi(P)$ since
$q_{i,j}\sim p_{i},(i,j)\in\\{w+1,\cdots,d\\}\times\\{0,1,\cdots,l\\}$.
For $w=0$, one has $q_{1,j}(n)=D(p_{1}(n),k_{j})$ and $\deg q_{1,j}<\deg
p_{1}$. In this case
$P^{\prime}=\\{q_{i,j}(n):(i,j)\in\\{1,\cdots,d\\}\times\\{0,1,\cdots,l\\}\\}$.
We still have $P^{\prime}\subset\widetilde{SGP}$ and
$\Phi(P^{\prime})<\Phi(P)$.
Since $|k_{j}|>|k_{j-1}|+r(|k_{j-1}|)$ for $j=0,\cdots,l$, by Lemma 4.5,
$|A(q_{i,j})|>>1$ and $|A(q_{i,j}-q_{i^{\prime},j^{\prime}})|>>1$.
By the inductive assumption applied to $V_{1}^{(l)},\cdots,V_{d}^{(l)}$, we have
$E=\\{n\in\mathbb{Z}:V_{1}^{(l)}\cap\bigcap_{j=0}^{l}(T^{-q_{1,j}(n)}V_{1}^{(l)}\cap\cdots\cap
T^{-q_{d,j}(n)}V_{d}^{(l)})\neq\emptyset\\}\cap(\tilde{C}\cap C_{1})$
is syndetic.
Let $m\in E$. We have $p_{i,j}(m)=q_{i,j}(m)$ since $m\in C_{1}$. Then there
is some $x_{m}\in V_{1}^{(l)}$ such that
$T^{p_{i,j}(m)}x_{m}\in V_{i}^{(l)}\ \text{for all}\ 1\leq i\leq d\
\text{and}\ 0\leq j\leq l.$
Let $y_{m}=T^{-p_{1}(m)}x_{m}$.
Since $X=\cup_{i=0}^{l}T^{i}U$, there is some $b_{m}\in\\{0,1,\cdots,l\\}$
such that $T^{b_{m}}z_{m}=y_{m}$ for some $z_{m}\in U$. Thus for each
$i=1,\cdots,d$
$\displaystyle T^{p_{i}(m+k_{b_{m}})}z_{m}$
$\displaystyle=T^{p_{i}(m+k_{b_{m}})}T^{-b_{m}}T^{-p_{1}(m)}x_{m}$
$\displaystyle=T^{p_{i}(k_{b_{m}})}T^{-b_{m}}T^{p_{i}(m+k_{b_{m}})}T^{-p_{i}(k_{b_{m}})}T^{-p_{1}(m)}x_{m}$
$\displaystyle=T^{p_{i}(k_{b_{m}})}T^{-b_{m}}T^{p_{i,{b_{m}}}(m)}x_{m}$
$\displaystyle\in T^{p_{i}(k_{b_{m}})}T^{-b_{m}}V_{i}^{(l)}\subset V_{i}.$
That is,
$z_{m}\in U\cap T^{-p_{1}(n)}V_{1}\cap\cdots\cap T^{-p_{d}(n)}V_{d},$
where $n=m+k_{b_{m}}\in N$.
Note that $k_{b_{m}}\in\tilde{C}$ implies
$\\{g_{j}(k_{b_{m}})\\}\in(-\frac{\varepsilon}{2},\frac{\varepsilon}{2}),$
and $m\in C_{1}^{\prime\prime}$ implies
$\\{g_{j}(m+k_{b_{m}})\\}\in(\\{g_{j}(k_{b_{m}})\\}-\frac{\varepsilon}{2},\\{g_{j}(k_{b_{m}})\\}+\frac{\varepsilon}{2})\subset(-\varepsilon,\varepsilon),$
for all $j=1,\cdots,t$. That is, $m+k_{b_{m}}\in C$.
Thus
$N\cap C\supset\\{m+k_{b_{m}}:m\in E\\}$
is a syndetic set.
For every $P$, the induction stops after finitely many steps, completing the
proof.
∎
###### Proof of Theorem 1.1.
By Lemma 2.2, it suffices to prove the moreover part of the theorem. Let
$p_{1},\cdots,p_{d}\in\mathcal{G}$. Then by Lemma 2.12, there exist
$h_{i}(n)\in\widetilde{SGP}$, $i=1,2,\dots,d$, and a set
$C_{1}=C(\delta,q_{1},\cdots,q_{k})$ such that
$p_{i}(n)=_{C_{1}}h_{i}(n),i=1,2,\dots,d.$
Set
$N_{1}=\\{n\in\mathbb{N}:U\cap T^{-h_{1}(n)}V_{1}\cap\cdots\cap
T^{-h_{d}(n)}V_{d}\neq\emptyset\\},$
by Theorem 4.7, $N_{1}\cap(C\cap C_{1})$ is syndetic. Since
$p_{i}(n)=h_{i}(n)$, $i=1,2,\cdots,d$, for any $n\in N_{1}\cap(C\cap
C_{1})\subset C_{1}$, we have
$n\in N=\\{n\in\mathbb{N}:U\cap T^{-p_{1}(n)}V_{1}\cap\cdots\cap
T^{-p_{d}(n)}V_{d}\neq\emptyset\\}.$
This implies
$N_{1}\cap(C\cap C_{1})\subset N\cap C,$
hence $N\cap C$ is syndetic. ∎
## References
* [1] Bergelson, V.: Weakly mixing PET. _Ergodic Theory Dynam. Systems_ , 7, 337–349 (1987)
* [2] Bergelson, V.: Ergodic Ramsey theory-an update, in Ergodic Theory of $Z^{d}$-actions (M. Pollicot and K. Schmidt, eds.), _London Mathematical Society Lecture Note Series_ , 228, 1–61 (1996)
* [3] Bergelson, V., Leibman, A.: Polynomial extensions of van der Waerden’s and Szemeredi’s theorems. _J. Amer. Math. Soc._ , 9, 725–753 (1996)
* [4] Bergelson,V., Leibman, A.: Distribution of values of bounded generalized polynomials. _Acta Math._ , 198,155–230 (2007)
* [5] Bergelson, V., McCutcheon, R.: Idempotent ultrafilters, multiple weak mixing and Szemerédi’s theorem for generalized polynomials. _J. Anal. Math._ , 111, 77–130 (2010)
* [6] Furstenberg, H.: Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions. _J. Anal. Math._ , 31, 204–256 (1977)
* [7] Glasner, E.: Topological ergodic decompositions and applications to products of powers of a minimal transformation. _J. Anal. Math._ , 64, 241–262 (1994)
* [8] Huang, W., Shao, S., Ye, X.: Topological correspondence of multiple ergodic averages of nilpotent group actions. _J. Anal. Math._ , 138, no. 2, 687–715 (2019)
* [9] Huang, W., Shao, S., Ye, X.: Nil Bohr0-set, Poincaré recurrence and generalized polynomials . _Mem. Amer. Math. Soc._ , 241 , no. 1143, v+83 pp (2016)
* [10] Huang, W., Ye, X.: An explicit scattering, non-weakly mixing example and weak disjointness. _Nonlinearity_ 15 (2002), no. 3, 849–862.
* [11] Kwietniak, D., Oprocha, P.: On weak mixing, minimality and weak disjointness of all iterates. _Ergodic Theory Dynam. Systems_ , 32, 1661–1672 (2012)
* [12] Subrahmonian Moothathu,T. K.: Diagonal points having dense orbit. _Colloq. Math._ , 120, 127–138 (2010)
# Social cohesion vs. task cohesion: An evolutionary game theory study
Xinglong Qu${}^{1,^{\star}}$ , Shun Kurokawa2, The Anh Han3
1 The Research Center of Information Technology & Social and Economic
Development, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
2 Graduate School of Arts and Sciences, University of Tokyo, 3-8-1 Komaba
Meguro-ku, Tokyo 153-8902, Japan
3 School of Computing, Engineering and Digital Technologies, Teesside
University, Borough Road, Middlesbrough, UK TS1 3BA
⋆ Corresponding author: Xinglong Qu<EMAIL_ADDRESS>
## Abstract
Using methods from evolutionary game theory, this paper investigates the
difference between social cohesion and task cohesion in promoting the
evolution of cooperation in group interactions. Players engage in public goods
games and are allowed to leave their groups if too many defections occur. Both
social cohesion and task cohesion may prevent players from leaving. While a
higher level of social cohesion increases a player’s tolerance towards
defections, task cohesion is associated with her group performance in the
past. With a higher level of task cohesion, it is more likely that a
dissatisfied player will refer to the history and remain in her group if she
was satisfied in the past. Our results reveal that social cohesion is
detrimental to the evolution of cooperation while task cohesion facilitates
it. This is because social cohesion hinders the conditional dissociation
mechanism but task cohesion improves the robustness of cooperative groups
which are usually vulnerable to mistakes. We also discuss other potential
aspects of cohesion and how they can be investigated through our modelling.
Overall, our analysis provides novel insights into the relationship between
group cohesion and group performance through studying the group dynamics and
suggests further application of evolutionary game theory in this area.
Keywords: public goods game, cooperation, evolutionary game theory,
conditional dissociation, social cohesion, task cohesion
## 1 Introduction
Group cohesion or cohesiveness is one of the oldest and most widely studied
factors in the group dynamics literature (Casey-Campbell and Martens, 2009;
Mullen and Copper, 1994). Despite a large body of work showing that group
cohesion is positively related to group performance measures, such as job
satisfaction, psychological well-being, and group efficiency (Beal et al.,
2003; Carless and De Paola, 2000; Mullen and Copper, 1994), there are also
some controversies (Friedkin, 2004; Dyaram and Kamalanabhan, 2005; Rovio et
al., 2009; Khoshsoroor et al., 2019). That includes groupthink (Janis and
Janis, 1982), a conjecture that the effort to reach an agreement may lead to a
dysfunctional decision-making outcome. This inconsistency highlights the fact
that our understanding of how group cohesion affects group performance is
insufficient (Carron and Brawley, 2000; Casey-Campbell and Martens, 2009;
McLeod and Von Treuer, 2013). Notably, Drescher et al. (2012) even call it a
spectacular embarrassment in group theory and research.
While apparently the complex nature of cohesion is a major cause of such
conflicting results, there exist other crucial drawbacks in the current
literature.
The principal one lies in the inconsistency of the definition and measurement
of cohesion among researchers. To address this, Friedkin (2004) and Casey-
Campbell and Martens (2009) suggest that researchers focus on what is
concrete and on its interrelationship with other constructs. One milestone is the
distinction between social and task cohesion (Mikalachki and Administration,
1969; Dion and Evans, 1992; Casey-Campbell and Martens, 2009). The social
facet includes relationships within the group, while the task facet includes
collective performance, goals, and objectives (Carron et al., 1985; Carron and
Brawley, 2000). In general, it is found that task cohesion is more positively
related to group performance than social cohesion (Zaccaro, 1991; Bernthal and
Insko, 1993; Mullen and Copper, 1994). Unfortunately, explanations for this
finding are also in short supply.
Another shortcoming of conventional studies is the neglect of the dynamic
nature of groups (Mathieu et al., 2008; Casey-Campbell and Martens, 2009;
Drescher et al., 2012; Mathieu et al., 2015). In reality, individuals
continually join and leave different groups and meanwhile interact with each
other. However, most previous works fail to describe this dynamical process
and, more importantly, its influence on people’s decision-making. Mathieu et
al. (2015) contribute one of the most important works, which establishes
positive reciprocal relationships between cohesion and performance over time.
They first synthesise earlier studies on the reciprocal influence of group
cohesion and performance. Then they apply the so-called structural equation
modelling (SEM) method to analyse the results, and validate their model with
data collected from a series of business simulations lasting over
10 weeks. While SEM is inspiring, scholars are cautious about drawing strong
causal inferences from it (Mathieu et al., 2015), and serious scrutiny is
required before assuming the framework of the model. (Mathieu et al. (2015)
find only 17 papers that use SEM to study the reciprocal effect of cohesion
and performance, which is the central question in the study of cohesion, not
to mention other aspects of cohesion.)
To further reveal the mechanism by which cohesion impacts group performance,
we take players’ decision-making into consideration using an evolutionary
game theory model. In the latter half of the last century, game theory was
developed as a fundamental tool to analyze conflict and cooperation in human
society and among other organisms (Aumann, 2019). However, the traditional
theory of games, which centres on the calculation and analysis of equilibria,
suffers from multiple deficiencies, such as the complexity of the
calculation, equilibrium selection, hyperrational agents, and so on. These
problems motivate social scientists in a variety of areas to turn to
evolutionary game theory (Alexander, 2019). It assumes that decision makers
are short-sighted, their action patterns are heuristic, and only very limited
information is available. Despite the simple rules and assumptions, its
effectiveness in explaining the world is not diminished. Especially when
considering large-population behavior, simple action rules can generate
complex social phenomena which coincide with institutions and social facts
(Newton, 2018).
As aforementioned, while the complex nature of cohesion makes it hard to draw
an overall conclusion, it is meaningful to focus on the concrete aspects of
cohesion and apply different theoretical tools to study it. An important
characteristic of cohesion can be drawn from the definition of Festinger
(1950) that it is “the resultant forces which are acting on the members to
stay in a group”. That is, the more cohesive a group is, the more likely its
members are to stay within it. However, in reality people keep leaving and
joining new groups for different reasons. In game-theoretic studies, this
phenomenon is studied in the model of conditional dissociation, which allows
dissatisfied players to break off with their opponents. This mechanism is
shown to promote cooperation in the two-person prisoner’s dilemma game
(Aktipis, 2004; Izquierdo et al., 2010; Qu et al., 2016). Qu et al. (2019)
extend this model to the public goods game, which is a multi-person game, and
also incorporate cohesion into consideration. In the model, if cohesion
exists, group members who do not want to leave would be united together,
otherwise, all group members become single. Their main finding is that
cooperation is better promoted when cohesion exists as it results in a greater
chance for cooperators to play with each other, which is usually termed as
positive assortativity (Eshel and Cavalli-Sforza, 1982; Doebeli and Hauert,
2005; Fletcher and Doebeli, 2006, 2009).
In this paper, we examine other important aspects of cohesion in influencing
group cooperation. In particular, as motivated above and also emphasized by
Mathieu et al. (2015), we compare social and task cohesion. Our basic model is
in line with Qu et al. (2019), where a well-mixed population of players
interact with each other through one-shot public goods games under the
conditional dissociation mechanism. Players’ leaving decisions depend on both
their own tolerance towards defections and the levels and types of cohesion.
Social and task cohesion are inspected separately in the first two settings
and synthetically afterwards. When some players are dissatisfied, both types
of cohesion may prevent the group from dissolving. More concretely, with
social cohesion, players tend to become more tolerant toward defections,
while with
task cohesion, dissatisfied players become more patient only if their goal was
achieved in the past. To be more specific, while social cohesion is described
as forces irrelevant to the history of interactions, task cohesion consists in
the likelihood that a player might refer to the play history in the last
round.
Our model reveals that social cohesion and task cohesion, defined as above,
have opposite effects on the evolution of cooperation: the former harms it
while the latter benefits it. The main reason for the failure of social
cohesion and the success of task cohesion in our model is that social
cohesion counteracts, while task cohesion enhances, the positive
assortativity effect (Qu et al., 2019). With either type of cohesion, the
dissociation mechanism is hindered and players stay together longer. So the
higher the level of cohesion, the less likely dissociation happens, which
explains the negative effect of social cohesion. However, with task cohesion,
players leave their groups for sure once they are dissatisfied in two
consecutive rounds. Moreover, since the winning cooperative strategy is
intolerant towards defection, it is vulnerable to mistakes. Task cohesion
prevents groups from being dissolved by mistaken defections, which keeps
cooperative groups playing together longer.
This finding is partially in line with the vast empirical literature claiming
that task cohesion is more positively related to group performance than
social cohesion (Zaccaro and Lowe, 1988; Zaccaro, 1991; Mullen and Copper,
1994; Chang and Bordia, 2001; Warner et al., 2012; Spink et al., 2014). Our finding
supports the common wisdom that organizations and practitioners should
encourage team members to share successful experiences more often.
Intuitively, this would increase the attraction towards teams, group pride,
or morale, which, according to our analysis, also increases the task cohesion
that bonds group members through a shared commitment to achieving goals.
Despite the difference between our results and the classical conclusion about
the positive significance of social cohesion, we highlight other potential
aspects of cohesion that may be associated with group performance. When
players are less likely to update their strategies, cooperation is higher. In
reality, people update their strategies to improve their utilities. High
levels of cohesion are usually associated with higher levels of job
satisfaction, which thus reduce the probability that players update their
strategies. As the probability of making mistakes decreases, group
cooperation increases. In reality, it is reasonable to conjecture that groups
with higher levels of social cohesion would communicate more effectively and
accurately and thus make fewer mistakes, which improves group performance.
Another, more subtle, aspect may be that social cohesion is related to
selection intensity. In evolutionary game theory, selection intensity is an
important element of the evolutionary dynamics. Rand et al. (2013) find that
a moderate level of selection intensity is optimal for the emergence of
generous behavior. In their experiment, they come up with a sagacious
measurement of selection intensity by asking subjects, among those they
interact with in daily life, how clear it is which people are more or less
successful. The clearer their answers, the larger the implied selection
intensity. So cohesion might be important in maintaining a relatively optimal
level of selection intensity.
The rest of this paper is organized as follows. We detail our model in Section
2 and present the simulation results in Section 3. We then briefly examine and
discuss other potential aspects of cohesion in Section 4. The conclusions are
described in Section 5, with some further discussion being followed in Section
6.
## 2 Models
### 2.1 Public goods game
Consider a finite population of individuals who play the following public
goods game. $G$ persons are grouped together, each endowed with 1 unit of
personal token. They simultaneously decide whether or not to contribute their
personal token to the public wealth, which is multiplied by $r$ $(1<r<G)$ and
equally shared among all the group members. As the reserved personal token
does not increase, the payoff function for a player is defined as:
$u_{i}=\frac{r}{G}\cdot\sum\limits_{j=1}^{G}a_{j}+(1-a_{i}),$ (1)
where $a_{j}=1$ if player $j$ cooperates by contributing and $a_{j}=0$ if she
reserves her personal wealth, in which case we say she defects.
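As a concrete illustration, the payoff rule (1) can be sketched in a few
lines of Python (the function name and the example values are ours, not part
of the paper):

```python
def pgg_payoff(actions, r):
    """Payoffs in a one-shot public goods game, following Eq. (1).

    actions: list with a_j = 1 (contribute) or 0 (defect) per group member.
    r: multiplication factor with 1 < r < G.
    """
    G = len(actions)
    share = r * sum(actions) / G           # equal share of the magnified pot
    return [share + (1 - a) for a in actions]

# One defector among three cooperators, r = 3:
# each cooperator earns 3*3/4 = 2.25, the defector 2.25 + 1 = 3.25.
payoffs = pgg_payoff([1, 1, 1, 0], r=3)
```

Since $r<G$, the defector always earns more than a cooperator in the same
group, which is the social dilemma driving the model.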
### 2.2 Conditional dissociation and strategies
The conditional dissociation mechanism (Izquierdo et al., 2010; Qu et al.,
2016, 2019) has proven to be efficient in promoting the evolution of
cooperation. Initially, all players are single and are randomly grouped to
play the game. After each round, if a player feels unsatisfied with her
group, under the conditional dissociation mechanism she can decide whether or
not to stay with her current group opponents. By leaving, both she and her
group mates become single and enter the matching pool. All single players in
the pool are randomly regrouped to play the game in the next round.
So, for each player $i$, her strategy is denoted by a tuple $(a_{i},b_{i})$,
where $a_{i}\in\\{0,1\\}$ indicates whether she is a cooperator $(a_{i}=1)$ or
a defector $(a_{i}=0)$ when playing the game, and $b_{i}\in\\{0,1,\cdots,G-1\\}$
indicates her tolerance towards defections. Denote by
$\Sigma=\\{0,1\\}\times\\{0,1,\cdots,G-1\\}$ the strategy space for agents.
We assume mistakes may happen to each agent randomly with a fixed probability
of $\epsilon$, that is, a player who intended to cooperate may wrongly defect
and a defector may wrongly cooperate. Let $d_{i}$ denote the number of
defectors observed by player $i$. Players are satisfied with their groups and
choose to stay within the current group only if
$d_{i}\leq b_{i}.$ (2)
A group dissolves if any of its members are dissatisfied and choose to leave.
After a group dissolves, all its members, whether satisfied or not in the last
round, become single players and enter the matching pool. Players in the
matching pool regroup with each other randomly and play in the next round.
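The baseline stay-or-leave rule (Eq. 2) and the random regrouping of the matching pool can be sketched as follows; the helper names, and the assumption that the pool size is a multiple of $G$, are ours:

```python
import random

def group_dissolves(defections_seen, tolerances):
    """Baseline conditional dissociation: player i stays only if the
    number of defectors she observes satisfies d_i <= b_i (Eq. 2);
    the group dissolves as soon as any member is dissatisfied."""
    return any(d > b for d, b in zip(defections_seen, tolerances))

def regroup(pool, G):
    """Randomly partition the matching pool into groups of size G
    (assumes len(pool) is a multiple of G)."""
    random.shuffle(pool)
    return [pool[k:k + G] for k in range(0, len(pool), G)]
```

For example, a group where one member observes 2 defectors but tolerates only 1 dissolves, sending all members back to the pool.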
In the above model, whenever some players are dissatisfied, their groups
dissolve immediately. In this paper, different types of cohesion that prevent
the dissolution of groups are investigated.
### 2.3 Social cohesion and task cohesion
We compare two constructs of cohesion, i.e. social cohesion and task cohesion.
Social cohesion is the attractiveness that players feel about their groups,
which is independent of the history of games. To be more specific, for each
player $i$, her perception of the social cohesion, denoted by $\xi_{i}$, is a
random variable following a Binomial distribution $B(G,p)$, where $p$
indicates the level of social cohesion. A player is satisfied if the number
of defections is no more than the sum of her tolerance and her perceived
social cohesiveness, that is,
$d_{i}\leq\xi_{i}+b_{i}.$ (3)
Compared with (2), the social cohesion term $\xi_{i}$ is added to the
right-hand side. The expected cohesion is $E\xi_{i}=G\cdot p$, so the higher
the level of cohesion, the more likely a player stays within the group. We
assume $\xi_{i}$ to be random because it reflects a player's perception or
feelings, which are usually affected by external factors and change over
time. After each round of play, if a player is dissatisfied, in the sense
that she observes more defections than her tolerance and perceived social
cohesion allow, she chooses to leave and all her group members enter the
matching pool.
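A sketch of the social-cohesion satisfaction test (Eq. 3), assuming the Binomial draw is made fresh each round as $G$ Bernoulli($p$) trials; the function name is illustrative:

```python
import random

def satisfied_social(d_i, b_i, G, p, rng=random):
    """Social-cohesion satisfaction test (Eq. 3): draw the perceived
    cohesion xi_i from Binomial(G, p), built here from G Bernoulli(p)
    trials, and stay iff d_i <= xi_i + b_i."""
    xi = sum(1 for _ in range(G) if rng.random() < p)
    return d_i <= xi + b_i
```

With $p=0$ the draw is always zero and the rule collapses to Eq. (2); with $p=1$ we get $\xi_i = G$, so any number of defections is tolerated.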
Task cohesion depends on how well players achieve their goals while playing
together. If a group of players played together in the previous round, they
may refer to that round's history, and the level of task cohesion is defined
here as the probability that they do so. To be specific, with task cohesion
$q$, a dissatisfied player (i.e., one for whom the condition in Eq. (2) is
violated) leaves the current group with probability $1-q$ and looks back on
the last round with probability $q$. If she was satisfied in the last round,
she still chooses to stay within the current group. The higher the task
cohesion, the more likely a player refers to the history before making her
final decision about leaving or staying. When $q=0$, players do not look back
on the history and leave their current group immediately if they are
dissatisfied. When $q=1$, players always refer to the history and stay within
the current group unless they are dissatisfied in two consecutive rounds.
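The task-cohesion rule above can be sketched as follows (names are ours; `d_prev` is the defection count the player remembers from the last round, with `None` standing in for a group that has no shared history yet):

```python
import random

def leaves(d_now, d_prev, b_i, q, rng=random):
    """Task-cohesion leaving rule: a player dissatisfied now
    (d_now > b_i) leaves immediately with probability 1 - q; with
    probability q she consults the previous round and stays if she
    was satisfied then."""
    if d_now <= b_i:
        return False                      # satisfied now: always stay
    if d_prev is None or rng.random() >= q:
        return True                       # impulsive exit
    return d_prev > b_i                   # leave only if also unhappy before
```

Setting $q=0$ recovers the impulsive baseline, while $q=1$ means a player only leaves after two consecutive unsatisfactory rounds.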
In our work, we investigate the role of social cohesion and task cohesion
separately in the first two settings and then jointly in the third.
### 2.4 Evolution dynamics
After each round of the game, every player updates her strategy with
probability $\delta$. A player who updates learns a strategy from another
player in the population. The learning process is a Moran process
(Nowak et al., 2004; Taylor et al., 2004; Szabó and Fath, 2007), which means
the probability that player $i$'s strategy is learned by others is
$\frac{f_{i}}{\sum f_{i}}$, where $f_{i}$ is player $i$'s fitness, defined as:
$f_{i}=u_{i}^{s}.$
Here $u_{i}$ is the player's payoff (payoffs gained in each round are
consumed immediately and cannot be accumulated) and $s$ measures the
selection intensity: when $s=0$, the process is neutral drift, and as
$s\rightarrow\infty$, it approaches best-response dynamics. Meanwhile,
mutations may also happen with a fixed probability $\mu$: the learner then
chooses a strategy from the strategy set $\Sigma$ uniformly at random,
regardless of its performance. Strategy updating in each round is
simultaneous, which means that if a player has just updated her strategy and
is imitated by others, it is her former strategy rather than the new one that
is learned.
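One synchronous update step under these dynamics can be sketched as follows; the function signature is ours, and imitation weights are computed from the pre-update profile, as the text requires:

```python
import random

def moran_update(strategies, payoffs, s, delta, mu, strategy_set, rng=random):
    """Synchronous Moran-style update: each player revises with
    probability delta; a reviser imitates a role model drawn with
    probability proportional to fitness f_i = u_i ** s, taken from the
    pre-update strategy profile, or mutates to a uniformly random
    strategy with probability mu."""
    fitness = [u ** s for u in payoffs]
    old = list(strategies)                    # imitation targets old profile
    new = list(strategies)
    for i in range(len(strategies)):
        if rng.random() >= delta:
            continue                          # no revision this round
        if rng.random() < mu:
            new[i] = rng.choice(strategy_set)  # blind mutation
        else:
            new[i] = rng.choices(old, weights=fitness, k=1)[0]
    return new
```

With $s=0$ every fitness equals 1 and imitation reduces to neutral drift, matching the limiting cases described above.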
## 3 Results
Our main results are derived from computer simulations. We run three series of
simulations to compare how social cohesion and task cohesion affect the group
cooperation.
Figure 1: The evolution of the percentages of cooperation after $10^{6}$
rounds of games for different levels of social cohesion. Panels: (a)
evolution of cooperation; (b) distribution of strategies. Parameters: mistake
rate $=0.01$, group size $G=5$, population $N=200$, mutation rate
$\mu=0.05$, strategy updating rate $\delta=0.001$, $r=3$, selection intensity
$s=1$.
### 3.1 Only social cohesion exists
Firstly, we investigate how different levels of social cohesion influence the
emergence of cooperation. In this set-up, players’ decisions only depend on
the current status of play. They leave if their tolerance and the
attractiveness of their group are unable to make them satisfied.
As can be seen from Figure 1(a), the stronger social cohesion is, the less
cooperation is observed. When social cohesion is at its maximum, only about
10 percent of players are cooperators. This confirms, from another direction,
that conditional dissociation is beneficial to the evolution of cooperation,
but it contradicts the general hypothesis that social cohesion is good for
group performance: as social cohesion increases, the conditional dissociation
mechanism is weakened and thus cooperation declines.
We can see from Figure 1(b) that when cooperation flourishes, the most
successful strategy is $(1,1)$, while the least tolerant strategy always
performs badly; beyond $(1,1)$, however, more tolerant strategies perform
worse. The strategy distribution is quite different from that observed in
(Qu et al., 2019), where the only winning cooperative strategy is the most
intolerant one, i.e., $(1,0)$. One major difference is that in their model,
whenever some players update their strategies, their groups dissolve.
Compared with $(1,0)$ strategists, players using $(1,1)$ settle down more
quickly as they can tolerate one more defector. At this point, despite being
exploited by the single defector, their payoffs are higher than the average
of the whole population and they are more likely to be learned by other
agents, which means they are more successful in the evolution.
### 3.2 Only task cohesion exists
Figure 2: The evolution of the percentages of cooperation after $10^{6}$
rounds of games for different levels of task cohesion. Panels (a) and (b):
evolution of cooperation for different levels of task cohesion. Parameters:
mistake rate $=0.05$, group size $G=5$, population $N=200$, mutation rate
$\mu=0.05$, strategy updating rate $\delta=0.001$, $r=3$, selection intensity
$s=1$.
With task cohesion, individuals may make their leaving decisions based on both
the current and the previous rounds of play. If they are satisfied in either
round, they would stay.
As can be seen from Figure 2, the effect of task cohesion is indistinguishable
when cohesion is smaller than 0.5. However, as task cohesion increases,
cooperation increases substantially. The cooperation level is highest when
players always consider the play history in the last round.
The strategy distribution is similar to the situation where only social
cohesion exists. When task cohesion is 1, the winning cooperative strategy is
also $(1,1)$. A minor difference is that, except for $(1,1)$, which performs
better and better as task cohesion increases, all the remaining cooperative
strategies perform almost the same in different settings.
Since positive assortativity explains how the dissociation mechanism promotes
cooperation, it is natural to investigate how task cohesion enhances it. In
studies of conditional dissociation mechanisms, players are rather impulsive:
whenever they feel dissatisfied, they leave their opponents immediately.
Intuitively, when a group of cooperators meet, they continue playing together
until someone defects by mistake. In our model, with higher levels of task
cohesion, players become more cautious before deciding whether to stay within
the current group, and a pleasant history of play can prevent the dissolution
of a group. So with higher levels of task cohesion, cooperative groups are
less likely to dissolve, which means enhanced positive assortativity.
Figure 3: The percentage of cooperation after $10^{6}$ rounds of simulations
for different levels of social cohesion, task cohesion and mistake rate.
Colors and sizes represent the percentages of cooperation: the more
cooperation, the larger the circle and the hotter the color. Group size
$G=5$, population $N=200$, mutation rate $\mu=0.05$, strategy updating rate
$\delta=0.001$, $r=3$, selection intensity $s=1$.
In Figure 3, we present the results of evolution for different levels of
social cohesion, task cohesion and mistake rates. It can be easily seen that
cooperation rates always decrease as mistake rates increase. Both types of
cohesion reduce the difference in cooperation rates between low and high
mistake rates, but in different directions. When the mistake rate is low, the
effect of increasing task cohesion is rather limited, while increasing social
cohesion is clearly detrimental to cooperation. When the mistake rate is
higher, the advantage of a higher level of task cohesion becomes more
obvious, which suggests that task cohesion is effective in protecting
cooperation from mistakes.
### 3.3 Both social and task cohesion exist
We now combine both types of cohesion and examine whether there is any synergy
effect from the combination. In each round of the game, player $i$ is
satisfied if and only if the number of observed defections $d_{i}$, her
tolerance, and the perceived social cohesiveness satisfy equation (3). With
task cohesion, every player might also refer to the history to determine
whether or not to leave. If a dissatisfied player does not refer to the
history, or she was also dissatisfied in the last round, she leaves.
Otherwise, a player remains in her group if she is satisfied in the current
round, or if she refers to the history and finds she was satisfied in the
last round.
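A sketch of the combined rule; we assume the look-back check re-applies Eq. (3) with a fresh Binomial cohesion draw, a detail the text leaves implicit, and all names are illustrative:

```python
import random

def leaves_combined(d_now, d_prev, b_i, G, p, q, rng=random):
    """Combined rule: satisfaction in a round uses Eq. (3) with a
    Binomial(G, p) cohesion draw; a currently dissatisfied player
    consults the previous round with probability q and stays if she
    was satisfied then, otherwise she leaves."""
    def satisfied(d):
        xi = sum(1 for _ in range(G) if rng.random() < p)
        return d <= xi + b_i
    if satisfied(d_now):
        return False
    if d_prev is None or rng.random() >= q:
        return True
    return not satisfied(d_prev)
```

Setting $p=0$ recovers the pure task-cohesion rule, and $q=0$ recovers the pure social-cohesion rule, so the two earlier settings are special cases of this one.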
Our results suggest that the negative effect of social cohesion and the
positive effect of task cohesion persist when both types of cohesion exist.
In Figure 4, we present the outcomes for different levels of social and task
cohesion. For any level of task cohesion, the proportion of cooperation
decreases with increasing social cohesion, which indicates the negative
effect of social cohesion. On the other hand, higher levels of task cohesion
yield higher levels of cooperation, so task cohesion promotes cooperation
whatever the level of social cohesion. The different effects of the two types
of cohesion on the evolution of cooperation in group interactions further
support the idea of differentiating between them in real-world social and
economic contexts.
Figure 4: The percentages of cooperation after $10^{6}$ rounds of games for
different levels of social and task cohesion. Mistake rate $=0.01$, group
size $G=5$, population $N=200$, mutation rate $\mu=0.05$, strategy updating
rate $\delta=0.001$, $r=3$, selection intensity $s=1$.
## 4 Potential aspects of cohesion
While contradicting the common viewpoint that suggests a positive
relationship between social cohesion and group cooperation, our model sheds
light on other potential aspects of cohesion, which could improve group
performance but are not involved in the current model.
### 4.1 Strategy updating and mistake rate
Our model confirms the classical conclusion that cooperation is more abundant
under lower strategy updating rates (Figure 5) and mistake rates (Figure 3).
Cohesion may lead to lower strategy updating rates because it increases
individuals' satisfaction. In human society, people change their strategies
continually for different reasons, the primary one being to pursue better
outcomes by learning from more successful individuals. With higher levels of
group cohesion, players are more easily satisfied, so their incentive to
pursue better outcomes declines and they update their strategies less often.
Figure 5: The percentage of cooperation after $10^{6}$ rounds of simulations
for different levels of social cohesion, task cohesion and strategy updating
rate. Colors and sizes represent the percentages of cooperation: the more
cooperation, the larger the circle and the hotter the color. Group size
$G=5$, population $N=200$, mutation rate $\mu=0.05$, selection intensity
$s=1$, $r=3$. The mistake rate is $0.01$ in panel (a) for social cohesion and
$0.05$ in panel (b) for task cohesion.
In reality, everyone makes mistakes, but by being more careful, people are
less likely to do so. The sense of responsibility has been emphasized by some
other studies on cohesion (Chan et al., 2006; Dickes et al., 2010; Schiefer
and van der Noll, 2017). With higher cohesion, players feel more responsible
to act carefully, which could reduce the probability that individuals make
mistakes. As the winning cooperative strategy is also intolerant of
defections, fewer mistakes help cooperators play together longer and gain
more benefits.
### 4.2 Selection intensity
In evolutionary game theory, selection intensity has a decisive influence on
the outcomes. However, how it relates to human society, or even the natural
world, is not so clear. Rand et al. (2013) measure selection intensity by
asking subjects how easy it is to tell that someone is more successful. This
measurement is obviously related to the social and cultural aspects of our
society, which have non-trivial influences on human behavior. Inspired by
Rand et al. (2013), our model also suggests that social cohesion may forge a
medium level of selection intensity, which is better for group cooperation.
Figure 6: The percentage of cooperation after $10^{6}$ rounds of simulations
for different levels of social cohesion, task cohesion and selection
intensity. Colors and sizes represent the percentages of cooperation: the
more cooperation, the larger the circle and the hotter the color. Group size
$G=5$, population $N=200$, mutation rate $\mu=0.05$, strategy updating rate
$\delta=0.001$, $r=3$. The mistake rate is $0.01$ in panel (a) for social
cohesion and $0.05$ in panel (b) for task cohesion.
It is worth noting that with tiny selection intensity (say 0.05), the
distributions of strategies remain almost unchanged, so we see similar levels
of cooperation (near 50%) in all settings in Figure 6. When only a low level
of social cohesion exists, cooperation rates first increase and then decrease
as the selection intensity increases. When a high level of social cohesion
exists, cooperation always decreases as selection intensity increases. With
medium social cohesion ($=0.5$), if we increase the selection intensity from
0.05 to 0.5, cooperation decreases because it is not favored by evolution;
however, we see a larger proportion of cooperation when the selection
intensity is 1 or 2. But again, cooperation almost vanishes under too high
selection intensities.
Intuitively, when selection is too weak, the strategies make almost no
difference in the evolution, so we cannot expect high levels of cooperation.
When selection is too strong, the lucky defectors whose opponents are all
cooperators get the highest payoffs and the most success. So medium levels of
selection intensity are best for cooperation.
## 5 Conclusion
Cohesion is one of the most widely studied concepts in small-group
performance and intra- and intergroup relations (Evans and Dion, 1991; Carron
and Brawley, 2000; Chiocchio and Essiembre, 2009), as it is essential for
teams in which people of different talents and backgrounds come together to
collaborate on common tasks. However, due to its complicated and elusive
nature, researchers face difficulties in defining, measuring,
operationalizing, and experimentally manipulating cohesion (Carron and
Brawley, 2000), and the literature on cohesion often gives an impression of
inconsistency (Rosh et al., 2012). Obviously, it is hard to unify all the
previous definitions and measurements without giving new ones, which in turn
only adds to the incoherence within this field of study. Friedkin (2004) and
Dion (2000) call for emphasis on the solid aspects of cohesion rather than
proposing new definitions, which is more practical and heuristic.
One of the most significant and generally accepted findings in the study of
cohesion is the distinction between social and task cohesion, which Evans and
Dion (1991) refer to as a milestone. In general, various meta-analyses of
existing cohesion studies agree that group performance is more strongly
related to task cohesion than to social cohesion (Mullen and Copper, 1994;
Casey-Campbell and Martens, 2009; Chiocchio and Essiembre, 2009; Castano et
al., 2013; Grossman et al., 2015). In these studies, the definitions and
measurements of cohesion usually differ across the literature, making it hard
to compare which factors are more crucial in the cohesion-performance
relation. Salas et al. (2015) conduct a meta-analysis and highlight that
further study should give priority to social and task cohesion and
incorporate dynamical and temporal factors.
Inspired by these suggestions and empirical observations, we compare in this
work how social and task cohesion influence the emergence of cooperation by
resorting to evolutionary game theory, since this field has had significant
success in understanding the dynamic processes of group and agreement
formation and the interactive strategies among group members (Szabó and Fath,
2007; Han et al., 2015, 2017; Newton, 2018). In particular, conditional
dissociation mechanisms allow players to leave their groups once they are
dissatisfied with their opponents and have been proven beneficial for the
evolution of cooperation (Izquierdo et al., 2010; Qu et al., 2016, 2019;
Aktipis, 2004). We introduce group cohesion into this process to study how it
influences the performance of groups and the evolutionary outcomes of
cooperation.
Unlike psychological studies, where the definitions of cohesion depend
largely on the measurements researchers choose, our definitions of social and
task cohesion are provided according to the intrinsic nature of individual
choices and decision making. Social cohesion is defined as a strength that
prevents players from leaving their groups regardless of the history of play:
with higher levels of social cohesion, players are more tolerant towards
defections and thus more likely to refrain from leaving. Task cohesion is
modeled as the likelihood that dissatisfied players look back into the
history before they choose to leave: if they were satisfied with the outcome
of the last round of play, they choose to stay.
Our primary finding is that social cohesion has a negative effect on group
cooperation while task cohesion has a positive effect. The difference can be
illustrated from the perspective of positive assortativity, which has been
described as the common characteristic of almost all mechanisms that promote
cooperation, including conditional dissociation (Eshel and Cavalli-Sforza,
1982; Doebeli and Hauert, 2005; Fletcher and Doebeli, 2009; Izquierdo et al.,
2010; Qu et al., 2019). With either type of cohesion, dissatisfied
cooperators are less likely to leave their groups, which means cohesion
hinders the conditional dissociation mechanism. However, task cohesion
enhances positive assortativity by reinforcing cooperative groups. As
cooperation is vulnerable to defections and mistakes, task cohesion helps
players distinguish whether a defection is intentional. A higher level of
task cohesion enables players to be more patient and thus protects
cooperators from entering a more defective matching pool. So for
organizational practitioners monitoring and improving cohesion, regularly
recalling the team's successful history may make the team more productive.
We also discuss other parameters of the evolutionary dynamics and their
influence on cooperation, which is a standard approach in evolutionary game
studies. Our discussion centers on how these parameters relate to other
aspects of cohesion and their impact on group performance. Compared with the
complicated nature of cohesion, our model is rather simple, yet it is
powerful in that it reveals the underlying evolutionary dynamics related to
cohesion, shedding light on the mechanisms through which cohesion influences
group cooperation.
## 6 Further discussion
Cohesion is such an important and complex concept that many aspects of it
need to be further examined in the future.
The first is to study other categories of games, which would enable us to
explore cohesion from different angles. Both our paper and Qu et al. (2019)
apply the public goods game, the most studied multi-person game, but in
reality people engage in other types of interactions too. Sometimes people
need to coordinate on a common action or idea, in which case a coordination
game could be a better choice for study. Other well-known game models,
including the stag-hunt game, the battle of the sexes, the ultimatum game,
etc. (Szabó and Fath, 2007), could also be applied to analyze different kinds
of conflicts and interests among group members.
In our model, while both social and task cohesion are random and change over
time, the changes have no direction, so our model does not incorporate the
reciprocal effect between cohesion and performance. In previous works,
different or even contradictory patterns of cohesion have been found in
military units (Bartone and Adler, 1999; Siebold, 2006). Grossman et al.
(2015) suggest that social cohesion emerges first, then members shift
attention to task cohesion; after they achieve more resources by
accomplishing group goals, they in turn enhance social cohesion, and finally
both types of cohesion become stable over time. So a second direction for
further study would be to take this time inconsistency into consideration
and, in particular, the reciprocal effect between social and task cohesion.
Our model assumes a well-mixed population, and it is certainly necessary and
interesting to investigate the influence of cohesion in structured
populations or spatial networks where certain individuals are more likely to
interact than others. Aktipis (2011) and Ichinose et al. (2018) study the
conditional dissociation mechanism on spatial networks and complex networks,
respectively. There are multiple ways to introduce cohesion into their
models: cohesion may be reflected in how often players want to leave, how far
they can migrate to other parts of the network, and, when updating
strategies, how distant the neighbors they imitate can be. Since different
types of networks (e.g., homogeneous vs heterogeneous) exhibit different
tendencies in promoting cooperation, it would be even more exciting to
investigate how these topological properties interact with different aspects
of cohesion.
Evolutionary game theory is a powerful tool in analyzing group dynamics and
behaviors. Cohesion is an important concept in understanding team dynamics and
performances. Our work suggests it is promising to apply evolutionary game
theory to study cohesion in more diverse directions, which may provide other
interesting results and help us better understand how cohesion facilitates
group performances.
## Acknowledgements
Xinglong Qu acknowledges support from the National Natural Science Foundation
of China (NO. 71701058). T.A.H. acknowledges support from the Leverhulme
Research Fellowship (RF-2020-603/9) and Future of Life Institute (grant
RFP2-154).
## References
* Aktipis [2004] C Athena Aktipis. Know when to walk away: contingent movement and the evolution of cooperation. _Journal of Theoretical Biology_ , 231(2):249–60, 2004. ISSN 0022-5193. doi: 10.1016/j.jtbi.2004.06.020. URL http://www.ncbi.nlm.nih.gov/pubmed/15380389.
* Aktipis [2011] C. Athena Aktipis. Is cooperation viable in mobile organisms? simple walk away rule favors the evolution of cooperation in groups. _Evolution and Human Behavior_ , 32(4):263 – 276, 2011. ISSN 1090-5138. doi: https://doi.org/10.1016/j.evolhumbehav.2011.01.002. URL http://www.sciencedirect.com/science/article/pii/S1090513811000043.
* Alexander [2019] J. McKenzie Alexander. Evolutionary game theory. In Edward N. Zalta, editor, _The Stanford Encyclopedia of Philosophy_. Metaphysics Research Lab, Stanford University, summer 2019 edition, 2019.
* Aumann [2019] R.J. Aumann. _Lectures On Game Theory_. CRC Press, 2019. ISBN 9780429693335. URL https://books.google.co.uk/books?id=DdCkDwAAQBAJ.
* Bartone and Adler [1999] Paul T. Bartone and Amy B. Adler. Cohesion over time in a peacekeeping medical task force. _Military Psychology_ , 11(1):85–107, 1999. doi: 10.1207/s15327876mp1101_5. URL https://doi.org/10.1207/s15327876mp1101_5.
* Beal et al. [2003] Daniel J. Beal, Robin R. Cohen, Michael J. Burke, and Christy L. McLendon. Cohesion and performance in groups: A meta-analytic clarification of construct relations. _Journal of Applied Psychology_ , 88(6):989–1004, 2003. ISSN 1939-1854(Electronic),0021-9010(Print). doi: 10.1037/0021-9010.88.6.989.
* Bernthal and Insko [1993] Paul R. Bernthal and Chester A. Insko. Cohesiveness without groupthink. _Group & Organization Management_, 18(1):66–87, 1993. ISSN 1059-6011 1552-3993. doi: 10.1177/1059601193181005.
* Carless and De Paola [2000] Sally A. Carless and Caroline De Paola. The measurement of cohesion in work teams. _Small Group Research_ , 31(1):71–88, 2000. ISSN 1046-4964. doi: 10.1177/104649640003100104. URL https://doi.org/10.1177/104649640003100104.
* Carron et al. [1985] A. V. Carron, W. N. Widmeyer, and L. R. Brawley. The development of an instrument to assess cohesion in sport teams: The group environment questionnaire. _Journal of Sport Psychology_ , 7(3):244–266, 1985. ISSN 0163-433X(Print).
* Carron and Brawley [2000] Albert V. Carron and Lawrence R. Brawley. Cohesion: Conceptual and measurement issues. _Small Group Research_ , 31(1):89–106, 2000. doi: 10.1177/104649640003100105. URL https://doi.org/10.1177/104649640003100105.
* Casey-Campbell and Martens [2009] Milly Casey-Campbell and Martin L. Martens. Sticking it all together: A critical assessment of the group cohesion–performance literature. _International Journal of Management Reviews_ , 11(2):223–246, 2009. doi: 10.1111/j.1468-2370.2008.00239.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-2370.2008.00239.x.
* Castano et al. [2013] N. Castano, T. Watts, and A. G. Tekleab. A reexamination of the cohesion-performance relationship meta-analyses: A comprehensive approach. _Group Dynamics-Theory Research and Practice_ , 17(4):207–231, 2013. ISSN 1089-2699.
* Chan et al. [2006] Joseph Chan, Ho-Pong To, and Elaine Chan. Reconsidering social cohesion: Developing a definition and analytical framework for empirical research. _Social indicators research_ , 75(2):273–302, 2006. doi: https://doi.org/10.1007/s11205-005-2118-1.
* Chang and Bordia [2001] Artemis Chang and Prashant Bordia. A multidimensional approach to the group cohesion-group performance relationship. _Small Group Research_ , 32(4):379–405, 2001. ISSN 1046-4964. doi: 10.1177/104649640103200401. URL https://doi.org/10.1177/104649640103200401.
* Chiocchio and Essiembre [2009] François Chiocchio and Hélène Essiembre. Cohesion and performance: A meta-analytic review of disparities between project teams, production teams, and service teams. _Small group research_ , 40(4):382–420, 2009.
* Dickes et al. [2010] Paul Dickes, Marie Valentova, and Monique Borsenberger. Construct validation and application of a common measure of social cohesion in 33 european countries. _Social Indicators Research_ , 98(3):451–473, 2010. doi: https://doi.org/10.1007/s11205-009-9551-5.
* Dion [2000] Kenneth L. Dion. Group cohesion: From ”field of forces” to multidimensional construct. _Group Dynamics: Theory, Research, and Practice_ , 4(1):7–26, 2000. ISSN 1930-7802(Electronic),1089-2699(Print). doi: 10.1037/1089-2699.4.1.7.
* Dion and Evans [1992] Kenneth L. Dion and Charles R. Evans. On cohesiveness: Reply to Keyton and other critics of the construct. _Small Group Research_ , 23(2):242–250, 1992. doi: 10.1177/1046496492232007. URL https://doi.org/10.1177/1046496492232007.
* Doebeli and Hauert [2005] Michael Doebeli and Christoph Hauert. Models of cooperation based on the prisoner’s dilemma and the snowdrift game. _Ecology Letters_ , 8(7):748–766, 2005. doi: 10.1111/j.1461-0248.2005.00773.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1461-0248.2005.00773.x.
* Drescher et al. [2012] Stuart Drescher, Gary Burlingame, and Addie Fuhriman. Cohesion: An odyssey in empirical understanding. _Small Group Research_ , 43(6):662–689, 2012. doi: 10.1177/1046496412468073. URL https://doi.org/10.1177/1046496412468073.
* Dyaram and Kamalanabhan [2005] Lata Dyaram and T. J. Kamalanabhan. Unearthed: The other side of group cohesiveness. _Journal of Social Sciences_ , 10(3):185–190, 2005. ISSN 0971-8923. doi: 10.1080/09718923.2005.11892479. URL https://doi.org/10.1080/09718923.2005.11892479.
* Eshel and Cavalli-Sforza [1982] Ilan Eshel and L. L. Cavalli-Sforza. Assortment of encounters and evolution of cooperativeness. _Proceedings of the National Academy of Sciences_ , 79(4):1331–1335, 1982. ISSN 0027-8424. doi: 10.1073/pnas.79.4.1331. URL https://www.pnas.org/content/79/4/1331.
* Evans and Dion [1991] Charles R. Evans and Kenneth L. Dion. Group cohesion and performance: A meta-analysis. _Small Group Research_ , 22(2):175–186, 1991. doi: 10.1177/1046496491222002. URL https://doi.org/10.1177/1046496491222002.
* Festinger [1950] Leon Festinger. Informal social communications. _Psychological Review_ , 57(5):271–282, 1950. doi: 10.1037/h0056932. URL http://psycnet.apa.org/record/1951-04528-001.
* Fletcher and Doebeli [2006] Jeffrey A Fletcher and M. Doebeli. How altruism evolves: assortment and synergy. _Journal of Evolutionary Biology_ , 19(5):1389–1393, 2006. doi: 10.1111/j.1420-9101.2006.01146.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1420-9101.2006.01146.x.
* Fletcher and Doebeli [2009] Jeffrey A Fletcher and Michael Doebeli. A simple and general explanation for the evolution of altruism. _Proceedings of the Royal Society B: Biological Sciences_ , 276(1654):13–19, 2009. doi: 10.1098/rspb.2008.0829. URL https://royalsocietypublishing.org/doi/abs/10.1098/rspb.2008.0829.
* Friedkin [2004] Noah E. Friedkin. Social cohesion. _Annual Review of Sociology_ , 30(1):409–425, 2004. doi: 10.1146/annurev.soc.30.012703.110625. URL https://doi.org/10.1146/annurev.soc.30.012703.110625.
* Grossman et al. [2015] Rebecca Grossman, Zachary Rosch, David Mazer, and Eduardo Salas. _What Matters for Team Cohesion Measurement? A Synthesis_ , volume 17 of _Research on Managing Groups and Teams_ , pages 147–180. Emerald Group Publishing Limited, 2015. ISBN 978-1-78560-283-2, 978-1-78560-282-5/1534-0856. doi: 10.1108/S1534-085620150000017007. URL https://doi.org/10.1108/S1534-085620150000017007.
* Han et al. [2015] The Anh Han, Luís Moniz Pereira, and Tom Lenaerts. Avoiding or Restricting Defectors in Public Goods Games? _J. Royal Soc Interface_ , 12(103):20141203, 2015. doi: 10.1098/rsif.2014.1203.
* Han et al. [2017] The Anh Han, Luís Moniz Pereira, and Tom Lenaerts. Evolution of commitment and level of participation in public goods games. _Autonomous Agents and Multi-Agent Systems_ , 31(3):561–583, 2017.
* Ichinose et al. [2018] G. Ichinose, Y. Satotani, and T. Nagatani. Network flow of mobile agents enhances the evolution of cooperation. _EPL (Europhysics Letters)_ , 121(2):28001, jan 2018. doi: 10.1209/0295-5075/121/28001. URL https://doi.org/10.1209/0295-5075/121/28001.
* Izquierdo et al. [2010] Segismundo S Izquierdo, Luis R Izquierdo, and Fernando Vega-Redondo. The option to leave: conditional dissociation in the evolution of cooperation. _Journal of Theoretical Biology_ , 267(1):76–84, 2010. ISSN 1095-8541. doi: 10.1016/j.jtbi.2010.07.039. URL http://www.ncbi.nlm.nih.gov/pubmed/20688083.
* Janis and Janis [1982] Irving Lester Janis and Irving Lester Janis. _Groupthink: Psychological studies of policy decisions and fiascoes_ , volume 349. Houghton Mifflin Boston, 1982.
* Khoshsoroor et al. [2019] Somayeh Khoshsoroor, Dapeng Liang, Gholamheidar Ebrahimbay Salami, and Ehsan Chitsaz. Team cohesion: A double-edged sword in the relationship between rival estimation and escalation of commitment. _Social Behavior and Personality: an international journal_ , 47(9):1–12, 2019. ISSN 0301-2212. doi: 10.2224/sbp.8122.
* Mathieu et al. [2015] J. E. Mathieu, M. R. Kukenberger, L. D’Innocenzo, and G. Reilly. Modeling reciprocal team cohesion-performance relationships, as impacted by shared leadership and members’ competence. _Journal of Applied Psychology_ , 100(3):713–34, 2015. ISSN 1939-1854 (Electronic) 0021-9010 (Linking). doi: 10.1037/a0038898. URL https://www.ncbi.nlm.nih.gov/pubmed/25751749.
* Mathieu et al. [2008] John Mathieu, M. Travis Maynard, Tammy Rapp, and Lucy Gilson. Team effectiveness 1997-2007: A review of recent advancements and a glimpse into the future. _Journal of Management_ , 34(3):410–476, 2008\. doi: 10.1177/0149206308316061. URL https://doi.org/10.1177/0149206308316061.
* McLeod and Von Treuer [2013] Janet McLeod and Kathryn Von Treuer. Towards a cohesive theory of cohesion. _International Journal of Business and Social Research_ , 3(12):1–11, 2013. doi: http://dx.doi.org/10.18533/ijbsr.v3i12.338. URL http://hdl.handle.net/10536/DRO/DU:30059957.
* Mikalachki and Administration [1969] A. Mikalachki and University of Western Ontario. School of Business Administration. _Group Cohesion Reconsidered: A Study of Blue Collar Work Groups_. School of Business Administration, University of Western Ontario, 1969\. URL https://books.google.co.uk/books?id=J9NEAAAAIAAJ.
* Mullen and Copper [1994] Brian Mullen and Carolyn Copper. The relation between group cohesiveness and performance: An integration. _Psychological Bulletin_ , 115(2):210–227, 1994\. ISSN 1939-1455(Electronic),0033-2909(Print). doi: 10.1037/0033-2909.115.2.210.
* Newton [2018] Jonathan Newton. Evolutionary game theory: A renaissance. _Games_ , 9(2), 2018. ISSN 2073-4336. doi: 10.3390/g9020031. URL https://www.mdpi.com/2073-4336/9/2/31.
* Nowak et al. [2004] Martin A. Nowak, Akira Sasaki, Christine Taylor, and Drew Fudenberg. Emergence of cooperation and evolutionary stability in finite populations. _Nature_ , 428(6983):646–650, 2004. ISSN 1476-4687. doi: 10.1038/nature02414. URL https://doi.org/10.1038/nature02414.
* Qu et al. [2016] Xinglong Qu, Changli Zhou, Zhigang Cao, and Xiaoguang Yang. Conditional dissociation as a punishment mechanism in the evolution of cooperation. _Physica A: Statistical Mechanics and its Applications_ , 449:215 – 223, 2016. ISSN 0378-4371. doi: https://doi.org/10.1016/j.physa.2015.12.128. URL http://www.sciencedirect.com/science/article/pii/S0378437115011656.
* Qu et al. [2019] Xinglong Qu, Zhigang Cao, Xiaoguang Yang, and The Anh Han. How group cohesion promotes the emergence of cooperation in public goods game under conditional dissociation. _Journal of Artificial Societies and Social Simulation_ , 22(3):5, 2019. ISSN 1460-7425. doi: 10.18564/jasss.4070. URL http://jasss.soc.surrey.ac.uk/22/3/5.html.
* Rand et al. [2013] David G. Rand, Corina E. Tarnita, Hisashi Ohtsuki, and Martin A. Nowak. Evolution of fairness in the one-shot anonymous ultimatum game. _Proceedings of the National Academy of Sciences_ , 110(7):2581–2586, 2013. ISSN 0027-8424. doi: 10.1073/pnas.1214167110. URL https://www.pnas.org/content/110/7/2581.
* Rosh et al. [2012] Lisa Rosh, Lynn R. Offermann, and Rhonda Van Diest. Too close for comfort? distinguishing between team intimacy and team cohesion. _Human Resource Management Review_ , 22(2):116–127, 2012. ISSN 1053-4822. doi: https://doi.org/10.1016/j.hrmr.2011.11.004. URL http://www.sciencedirect.com/science/article/pii/S1053482211000519.
* Rovio et al. [2009] Esa Rovio, Jari Eskola, Stephen A. Kozub, Joan L. Duda, and Taru Lintunen. Can high group cohesion be harmful?: A case study of a junior ice-hockey team. _Small Group Research_ , 40(4):421–435, 2009\. doi: 10.1177/1046496409334359. URL https://doi.org/10.1177/1046496409334359.
* Salas et al. [2015] Eduardo Salas, Rebecca Grossman, Ashley M. Hughes, and Chris W. Coultas. Measuring team cohesion. _Human Factors: The Journal of the Human Factors and Ergonomics Society_ , 57(3):365–374, 2015. ISSN 0018-7208 1547-8181. doi: 10.1177/0018720815578267.
* Schiefer and van der Noll [2017] David Schiefer and Jolanda van der Noll. The essentials of social cohesion: A literature review. _Social Indicators Research_ , 132(2):579–603, 2017. ISSN 1573-0921. doi: 10.1007/s11205-016-1314-5. URL https://doi.org/10.1007/s11205-016-1314-5https://link.springer.com/content/pdf/10.1007/s11205-016-1314-5.pdf.
* Siebold [2006] Guy L Siebold. Military group cohesion. _Military life: The psychology of serving in peace and combat_ , 1:185–201, 2006.
* Spink et al. [2014] Kevin S. Spink, Jocelyn D. Ulvick, Alyson J. Crozier, and Kathleen S. Wilson. Group cohesion and adherence in unstructured exercise groups. _Psychology of Sport and Exercise_ , 15(3):293–298, 2014. ISSN 14690292. doi: 10.1016/j.psychsport.2013.11.008.
* Szabó and Fath [2007] György Szabó and Gabor Fath. Evolutionary games on graphs. _Physics Reports_ , 446(4):97 – 216, 2007. ISSN 0370-1573. doi: https://doi.org/10.1016/j.physrep.2007.04.004. URL http://www.sciencedirect.com/science/article/pii/S0370157307001810.
* Taylor et al. [2004] Christine Taylor, Drew Fudenberg, Akira Sasaki, and Martin A. Nowak. Evolutionary game dynamics in finite populations. _Bulletin of Mathematical Biology_ , 66(6):1621–1644, 2004. ISSN 1522-9602. doi: 10.1016/j.bulm.2004.03.004. URL https://doi.org/10.1016/j.bulm.2004.03.004.
* Warner et al. [2012] Stacy Warner, Matthew T. Bowers, and Marlene A. Dixon. Team dynamics: A social network perspective. _Journal of Sport Management_ , 26(1):53, 2012\. ISSN 0888-4773. doi: 10.1123/jsm.26.1.53. URL https://journals.humankinetics.com/view/journals/jsm/26/1/article-p53.xml.
* Zaccaro [1991] Stephen J. Zaccaro. Nonequivalent associations between forms of cohesiveness and group-related outcomes: Evidence for multidimensionality. _The Journal of Social Psychology_ , 131(3):387–399, 1991. ISSN 1940-1183(Electronic),0022-4545(Print). doi: 10.1080/00224545.1991.9713865.
* Zaccaro and Lowe [1988] Stephen J. Zaccaro and Charles A. Lowe. Cohesiveness and performance on an additive task: Evidence for multidimensionality. _The Journal of Social Psychology_ , 128(4):547–558, 1988. ISSN 0022-4545. doi: 10.1080/00224545.1988.9713774. URL https://doi.org/10.1080/00224545.1988.9713774.
|
11institutetext: LUTH, Observatoire de Paris, CNRS, PSL, Université de Paris;
5 Place Jules Janssen, 92190 Meudon, France
11email:<EMAIL_ADDRESS>
22institutetext: Santa Cruz Institute for Particle Physics and Department of
Physics, University of California at Santa Cruz, Santa Cruz, CA 95064, USA
22email:<EMAIL_ADDRESS>
# Flux variability from ejecta in structured relativistic jets with large-scale magnetic fields
G. Fichet de Clairfontaine 1, Z. Meliani 1, A. Zech 1, and O. Hervet 2
###### Abstract
Context. Standing and moving shocks in relativistic astrophysical jets are
very promising sites for particle acceleration to large Lorentz factors and
for the emission from the radio up to the $\gamma$-ray band. They are thought
to be responsible for at least part of the observed variability in radio-loud
active galactic nuclei.
Aims. We aim to simulate the interactions of moving shock waves with standing
recollimation shocks in structured and magnetized relativistic jets and to
characterize the profiles of connected flares in the radio light curve.
Methods. Using the relativistic magneto-hydrodynamic (MHD) code MPI-AMRVAC and
a radiative transfer code in post-processing, we explore the influence of the
magnetic-field configuration and transverse stratification of an over-
pressured jet on its morphology, on the moving shock dynamics, and on the
emitted radio light curve. First, we investigate different large-scale
magnetic fields with their effects on the standing shocks and on the
stratified jet morphology. Secondly, we study the interaction of a moving
shock wave with the standing shocks. We calculate synthetic synchrotron
maps and radio light curves and analyze the variability at two frequencies, 1
and 15.3 GHz, and for several observation angles. Finally, we compare the
characteristics of our simulated light curves with radio flares observed from
the blazar 3C 273 with the Owens Valley Radio Observatory (OVRO) and Very Long
Baseline Array (VLBA) in the MOJAVE survey between 2008 and 2019.
Results. We find that in a structured over-pressured relativistic jet, the
presence of the large-scale magnetic field structure changes the properties of
the standing shock waves and leads to an opening in the jet. The interaction
between waves from inner and outer jet components can produce strong standing
shocks. When crossing such standing shocks, moving shock waves accompanying
overdensities injected in the base of the jet cause very luminous radio
flares. The observation of the temporal structure of these flares under
different viewing angles probes the jet at different optical depths. At 1 GHz
and for small angles, the self-absorption caused by the moving shock wave
becomes more important and leads to a drop in the observed flux after it
interacts with the brightest standing knot. A weak asymmetry is seen in the
shape of the simulated flares, resulting from the remnant emission of the
shocked standing-shock regions. The characteristics of the simulated flares and the
correlation of peaks in the light curve with the crossing of moving and
standing shocks favor this scenario as an explanation of the observed radio
flares of 3C 273.
###### Key Words.:
magnetohydrodynamics (MHD) – ISM: jets and outflows – radiation mechanisms:
nonthermal – galaxies: active – quasars: individual (3C 273) – methods:
numerical
## 1 Introduction
Emission from relativistic jets in radio-loud active galactic nuclei (AGN) has
been detected from the radio band up to the teraelectronvolt range and shows
frequent episodes of high flux variability in many sources. The emission is
generally ascribed to a population of relativistic particles inside the jet
that interact with magnetic and photon fields to produce nonthermal radiation
over a large wavelength range. In blazar-type AGN, the emission from the jet
plasma is amplified in the observer frame by relativistic beaming effects as
the jet axis is aligned with the line of sight, while the observed emission is
weaker in radio galaxies, where the jet is seen under a larger angle. The
multiwavelength emission of AGN requires a mechanism able to reaccelerate
particles as they travel along the jet (Blandford & Koenigl 1979). The
processes that are most frequently proposed are acceleration from internal
shocks (Pelletier et al. 2019; Lemoine et al. 2019b, a), shear acceleration
(Rieger & Duffy 2019; Tavecchio 2020), or magnetic reconnection (Blandford et
al. 2017).
Many direct and indirect observations demonstrate the presence of a transverse
stratified profile of AGN jets, characterized by the presence of a fast inner
jet (spine) and a slower outer jet (sheath or layer), both inner and outer
jets being relativistic. The most compelling evidence for this structure comes
from limb-brightened jets observed with very-long-baseline interferometry
(VLBI), down to a few Schwarzschild radii for the nearest radio galaxies
(Nagai et al. 2014; Kim et al. 2018). Seen at large angles, the slower layer
is more Doppler boosted than the spine, leading to the appearance of a
distinctive radio structure of an "empty pipe". Observations of a different
magnetic structure of the inner and outer jets via polarization measurements
(Gabuzda et al. 2014; Avachat et al. 2016; Giroletti et al. 2004) support the
idea that the two jet components may be launched from different processes,
such as those proposed by Blandford & Znajek (1977) and Blandford & Payne
(1982), for launching from the vicinity of the rotating supermassive black
hole or from the accretion disk.
Theoretical studies in plasma physics also support this interpretation, in
which the fast inner jet is responsible for most of the radiative output
despite having a lower density and a population dominated by electrons and
positrons, whereas the outer jet is denser and less radiative, dominated by
cold protons (Sol et al. 1989). The gamma-ray detection of radio galaxies is challenging to
explain when considering a uniform jet. It might be better explained by a
stratified jet structure, where the particle and synchrotron fields of both
jet components interact to produce a strong high-energy inverse-Compton
emission (Tavecchio & Ghisellini 2008, 2014). The stratification of the
relativistic jet can explain the multiwavelength spectral shape of the
emission, from the radio band to the X-ray band (Siemiginowska et al.
2007), and up to the (very) high energy gamma-ray band (Ghisellini et al.
2005).
Large-scale magnetic fields seem to play an important role in extragalactic
jets, in particular for the collimation of the jet and its acceleration (Porth
et al. 2011; Fendt et al. 2012). Observations tend to show a correlation
between the large-scale magnetic structure and the resulting synchrotron
emission (Gabuzda et al. 2014; Walker et al. 2018).
The jet emission often shows the presence of bright spots ("knots") that can be
associated with standing and moving shocks. Such features have been detected
in relativistic jets with radio and optical polarimetry observations (Perlman
et al. 1999; Marshall et al. 2002), in the radio and millimeter band (Britzen
et al. 2010; Jorstad et al. 2013) and in the X-ray band (Wilson & Yang 2002).
One way to interpret these "knots" is by invoking recollimation shocks along
the jet caused by pressure mismatches with the medium surrounding the jet
(Marscher et al. 2008).
Flux variability is a characteristic feature of emission from radio-loud AGN
and it depends on the observed frequency range and the AGN class. In the
gigahertz radio bands, observed flare time scales range from a few
months in the case of BL Lac objects observed at a frequency of $\nu=43$ GHz
(Wehrle et al. 2012, 2016), to a few years in the sample of several tens of
radio-loud AGN observed at frequencies from
$\nu=\left[4.8;\leavevmode\nobreak\ 230\right]\leavevmode\nobreak\ \text{GHz}$
(Hovatta et al. 2008; Nieppola et al. 2009). In the latter, the median flare
duration was estimated to be 2.5 years, with flares occurring on average every
four to six years. While in many cases the light curves of the detected flares
exhibited a complex structure, sometimes including multiple peaks, in general,
the decay times were found to be typically 1.3 times longer than the rise
times. Their analysis concludes that the observed flare characteristics are in
agreement with a generalized shock model (Valtaoja et al. 1992). In the case
of the very nearby radio-galaxy M 87, VLBI observations (Walker et al. 2018)
can locate fast flux variability from the radio core. At high energies, in
X-rays and gamma-rays, very rapid flares are observed from blazars and radio-
galaxies with durations of days, down to time scales below an hour at
teraelectronvolt energies.
Since the first analytical model (Blandford & Königl 1979) that was able to
reproduce the flat radio spectra of jets with an isothermal jet associated
with shock waves traveling in the jet, models have been evolving in several
directions. Hydrodynamic and MHD models are developed for in-depth studies of
jet launching and propagation, while models of radiative transfer focus on the
description of multiwavelength emission processes due to relativistic particle
populations in the emission region, and particle in cell (PIC) models treat
the microphysics of particle acceleration at small scales.
Today, increasingly sophisticated simulations address at the same time the
macrophysics of the jet plasma and its radiative emission to try to improve
our understanding of the jet physics from multiwavelength observations.
Several hydrodynamical simulations of jets (Gómez et al. 1997; Agudo et al.
2001; Mimica et al. 2009) have shown that injection of perturbations at the
base of the jet succeeds in reproducing the observed radio structure of
superluminal and stationary components accounting for synchrotron radiation
from a randomly oriented magnetic field. These simulations have also shown
that perturbations traveling along an over-pressured jet can lead to the
appearance of recollimation shocks.
Including a large-scale magnetic field structure in simulations of
relativistic jets, Mizuno et al. (2015) studied the impact of the geometry of
the magnetic field on recollimation shocks and rarefaction structures. They
showed that the influence of the magnetic structure is not negligible and
that, for example, axial fields lead to stronger collimation and rarefaction
than toroidal fields. In studies by Martí (2015) and Martí et al. (2018), the
authors simulated models of relativistic magnetized, axisymmetric jets with
azimuthal velocity (i.e., rotation). For certain configurations, this
azimuthal velocity leads to a change in the stationary shock-wave structure. Thus,
they obtain a standing-shock structure and compute synthetic radio maps
compatible with observations of parsec-scale extragalactic jets. Fuentes et
al. (2018) are able to obtain polarization maps by computing the optically
thin and linearly polarized synchrotron emission. They find that the electric
vector polarization angles tend to be perpendicular to the axis of the jet
between the recollimation shocks. This characteristic polarization can be
compared with that obtained in VLBI observations of blazar jets.
Porth & Komissarov (2015) show that both unmagnetized and magnetized jets have
great stability due to interactions with the ambient medium. The difference in
pressure between the jet and the ambient medium allows the jet plasma to keep
a causal connection. They propose an explanation for the Fanaroff-Riley
dichotomy with different pressure values leading to the appearance of
different structures of recollimation shocks. Simulations of stratified
relativistic jets (Hervet et al. 2017) show that a two-component model of jets
in interaction with an interstellar ambient medium can reproduce the observed
knots through the generation of standing and moving shocks inside the jets.
Shock waves passing through a conical relativistic jet were first evoked by
Marscher & Gear (1985) to interpret a flare of the quasar 3C 273 observed in
1983 from the millimeter to the infrared band. In this scenario, superluminal
radio knots naturally arise as shocked regions in the jet. With today’s
increased computing capacity, Fromm et al. (2016, 2018) have been able to
reproduce typical flares observed at different wavelengths by simulating the
interaction between ejecta and a recollimation shock structure.
Several characteristics of pc-scale relativistic jets are well reproduced with
current models. Nevertheless, a better comprehension of the link between the
supposed recollimation shock structure and the magnetic configuration is
necessary to understand the multiwavelength observations, particularly during
flares. We aim to understand the impact of the magnetic configuration in the
jet on the dynamics of a perturbation (”ejecta”) at the base of the jet, which
we suppose to be the cause for the observed flares. We have studied in detail
its interaction with a two-component jet for different large-scale magnetic
configurations, as well as the synchrotron emission during its propagation.
The overall aim is to reproduce typical radio flare observations by the
injection of such perturbations at the base of the jet and to put constraints
on the magnetic field configuration and jet structure.
We carried out special relativistic magnetohydrodynamic (SRMHD) simulations of
pc-scale jets with the MPI-AMRVAC code (Keppens et al. 2012), using a two-
component model of relativistic jets with different magnetic configurations.
Following Hervet et al. (2017) we considered an external jet component that
carries more power than the internal one, while staying within the same order
of magnitude. This kind of configuration leads to the formation of a typical
standing-shock structure. We consider four different magnetic configurations
(defined in cylindrical coordinates): hydrodynamic (H) with a turbulent
magnetic field linked to the thermal pressure, toroidal (T) (with magnetic
lines along $\varphi$), poloidal (P) (with magnetic lines along $z$, the jet
propagation axis) and helical (HL) (with a magnetic pinch angle of
$45\degree$). Synchrotron radiation is computed in post-processing, assuming
the injection of relativistic electrons following Gómez et al. (1995) and
accounting for relativistic effects. In this way, radio light curves can be
computed for the passage of ejecta in the stationary shock structure.
The organization of this paper is as follows. In Section 2, we briefly present
the MPI-AMRVAC code and the numerical method used to solve the SRMHD
equations, followed by a description of the numerical implementation of the
two-component jet in Section 3. The structure of standing shocks that arises
in the steady-state solution of this model is discussed in Section 4 for the
four different magnetic-field configurations. The introduction of ejecta,
which perturbs the steady-state structure, is developed in Section 5.
The treatment of radiative transfer and generation of synchrotron maps and
light curves in post-processing is explained in Section 6 and results are
presented. To illustrate the relevance of our scenario to explain radio
flares, recent results from observations of the blazar 3C 273 with the VLBA
and OVRO are discussed in Section 7 and are qualitatively compared to our
simulations. Section 8 provides a general discussion of the implications of
our scenario.
Throughout this paper we use natural units where the speed of light $c=1$. The
distance unit is the jet radius $R_{\rm jet}$; the corresponding time unit is
$R_{\rm jet}$ in the co-moving frame or $R_{\rm jet}/\delta$ in the absolute
frame (where $\delta$ is the Doppler factor).
## 2 Governing SRMHD equations and numerical method
We perform the numerical simulation of the relativistic magnetized two-
component jet model using the 2D ideal special-relativistic
magnetohydrodynamics version of the finite volume code MPI-AMRVAC in
conservation form, using high-resolution shock-capturing methods (Meliani &
Keppens 2007; Keppens et al. 2012). It solves the governing conservation
equation as in (Martí & Müller 2015) with $\@vec{U}$ the state vector and
$\@vec{F}^{\rm i}$ the associated flux vectors:
$\partial_{\rm t}\,\@vec{U}+\partial_{\rm x^{i}}\,\@vec{F}^{i}(\@vec{U})=0\,,$
(1)
with:
$\@vec{U}=\left(D,\,S^{\rm j},\,\tau,\,B^{\rm k}\right)^{\rm T}\,,$ (2)
$\displaystyle\@vec{F}^{\rm i}{}$ $\displaystyle=\Big{(}Dv^{\rm i},\,S^{\rm
j}v^{\rm i}+p\delta^{\rm ij}-b^{\rm j}B^{\rm i}/\gamma,\,\tau v^{\rm
i}+pv^{\rm i}-b^{\rm 0}B^{\rm i}/\gamma,\,v^{\rm i}B^{\rm k}$ (3)
$\displaystyle-v^{\rm k}B^{\rm i}\Big{)}^{\rm T}\,,$
where the rest-mass density is $D$, the momentum density in the j-direction
$S^{\rm j}$ and the total energy density $\tau$, calculated in the absolute
frame. They are given by:
$\displaystyle D=\rho\gamma\,,$ (4) $\displaystyle S^{\rm j}=\rho
h^{*}\gamma^{2}v^{\rm j}-b^{\rm 0}b^{\rm j}\,,$ (5) $\displaystyle\tau=\rho
h^{*}\gamma^{2}-p_{\rm tot}-\left(b^{\rm 0}\right)^{2}\,,$ (6)
where $h^{*}\,=\,\left(1+\epsilon_{\rm the}+p_{\rm th}/\rho+b^{2}/\rho\right)$
with $\epsilon_{\rm the}$ the specific internal energy. We denote by $b^{\rm i}$
the magnetic field three-vector in the co-moving frame ($\@vec{B}$ and
$\@vec{v}$ being the magnetic field and velocity vectors in the absolute
frame):
$\displaystyle b^{\rm 0}=\gamma\@vec{B}.\@vec{v}\,,$ (7) $\displaystyle b^{\rm
i}=B^{\rm i}/\gamma+b^{\rm 0}v^{\rm i}\,.$ (8)
Finally, the Lorentz factor is given by $\gamma=1/\sqrt{1-v^{\rm i}v_{\rm i}}$
(with i running over the spatial indices $\left[1,2,3\right]$).
As a closure equation for the hydrodynamic part, we exploit the Synge equation
of state (Mathews 1971; Meliani et al. 2004),
$p=\frac{\left(\Gamma-1\right)\rho}{2}\left(\frac{\epsilon}{\rho}-\frac{\rho}{\epsilon}\right),$
(9)
and the corresponding effective polytropic index is given as (Meliani et al.
2008),
$\Gamma_{\rm
eff}=\Gamma-\dfrac{\Gamma-1}{2}\left(1-\dfrac{1}{\epsilon}\right)\,,$ (10)
where $\epsilon=\left(1+\epsilon_{\rm the}\right)$ is the specific internal
energy. We fix $\Gamma=5/3$; the effective index can vary in time and location
between its relativistic ($\Gamma=4/3$, when the thermal energy becomes larger
than the mass energy) and its classical value ($\Gamma=5/3$, when the thermal
energy becomes negligible compared to the mass energy).
The divergence free condition for the magnetic field is satisfied by using the
divergence cleaning method described by Dedner et al. (2002). We use the
Harten–Lax–van Leer–Contact (HLLC) Riemann solver (Mignone & Bodo 2006) with
third order reconstruction method cada3 (Cada & Torrilhon 2009). The
combination of cada3 reconstruction (third order accurate) and HLLC flux
computations is extremely robust and handles both sharp discontinuities and
shock development accurately.
In the study of recollimation shocks, it is important to detect the shocks and
distinguish them from compression waves in the numerical simulation. These
internal shocks are in some cases very weak, making their detection difficult.
For this purpose, in the hydrodynamics case, we use the shock detection
algorithm by Zanotti et al. (2010). For the magnetized cases, we use a jump
condition on the fast-magnetosonic number. We should note that in the
simulation of the magnetized jet with only a toroidal-field component, the two
methods converge in the vicinity of the jet axis.
## 3 Two-component model of a magnetized relativistic jet
In order to investigate the effect of the magnetic field configuration on the
shock structure in a transverse structured jet, we use the two-component jet
model proposed by Meliani & Keppens (2007, 2009); Hervet et al. (2017). We
adopt typical characteristics of a radio loud relativistic AGN jet, with a
total kinetic luminosity of $L_{\rm kin}=10^{46}\leavevmode\nobreak\
\text{erg.s}^{-1}$ (Ghisellini et al. 2014), and the jet radius taken to be
$R_{\rm jet}\,=\,0.1\,{\rm pc}$ at the parsec scale as observed in the jet of
M 87 (Biretta et al. 2002). Concerning the internal jet structure, we assume
an external jet radius $R_{\rm out}\,=\,R_{\rm jet}$ and an internal jet
radius $R_{\rm in}=R_{\rm out}/3$. We assume that the outer jet component
carries an important fraction $f_{\rm out}=0.75$ of the total kinetic
luminosity,
$L_{\rm out,\,kin}=f_{\rm out}\,L_{\rm kin}=\left(\gamma_{\rm out}h_{\rm
out}-1\right)\rho_{\rm out}\gamma_{\rm out}\pi\left(R^{2}_{\rm out}-R^{2}_{\rm
in}\right)v_{\rm z,out}\,,$ (11)
with the remaining $L_{\rm in,\,kin}=\left(1-f_{\rm out}\right)\,L_{\rm
kin}=0.25\,L_{\rm kin}$ carried by the inner jet component.
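Given the luminosity split, Eq. (11) can be inverted to fix the outer-jet rest-mass density from the jet parameters. The sketch below (our own helper, in code units with $c=1$) illustrates this; a round trip through Eq. (11) recovers the input luminosity.

```python
import math

def outer_density_from_luminosity(L_kin, f_out, gamma_out, h_out,
                                  v_z_out, R_out, R_in):
    """Invert Eq. (11) for the outer-jet rest-mass density (code units, c = 1)."""
    L_out = f_out * L_kin                        # kinetic power of the outer component
    area = math.pi * (R_out**2 - R_in**2)        # outer-jet cross section
    return L_out / ((gamma_out * h_out - 1.0) * gamma_out * area * v_z_out)
```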
As initial condition for the simulations performed in this paper, we set a
non-rotating, superfast magnetosonic cylindrical jet surrounded by a uniform,
unmagnetized ambient medium with high rest-mass density. The rest-mass density
and the total pressure ratio of the outer jet $\left(\rho_{\rm out},p_{\rm
out}\right)$ to the ambient medium $\left(\rho_{\rm
am}=10^{3}\leavevmode\nobreak\ \text{cm}^{-3},p_{\rm am}=1\leavevmode\nobreak\
\text{dyn.cm}^{-2}\right)$ is $\left(\eta_{\rm\rho}=\rho_{\rm out}/\rho_{\rm
am}\,=\,10^{-2},\eta_{\rm out,p}=p_{\rm jet,\,out}/p_{\rm am}\,=\,1.0\right)$.
We assume a more over-pressured inner jet, with a lower rest-mass density and
total pressure ratio of the inner jet $\left(\rho_{\rm in},p_{\rm in}\right)$
to the ambient medium $\left(\rho_{\rm am},p_{\rm am}\right)$ of
$\left(\eta_{\rm\rho}=\rho_{\rm in}/\rho_{\rm am}\,=\,10^{-3},\eta_{\rm
in,p}=p_{\rm in}/p_{\rm am}\,=\,1.5\right)$ (Gómez et al. 1995, 1997).
Moreover, the inner jet is assumed to be faster ($\gamma_{\rm in}=10$) than
the outer jet ($\gamma_{\rm out}=3$) (Giroletti et al. 2004).
To investigate the effects of the magnetic field on the recollimation shocks
and therefore on the evolution of the ejecta, we have considered different
topologies: hydrodynamic (H) (as reference case), toroidal (T), axial
(poloidal) (P) and helical (HL). The jet magnetization is set through the
local maximum magnetization parameters at the inner and the outer jet
component. The magnetization parameter is given by,
$\sigma_{\rm
M}=\frac{\@vec{B}^{2}+\left(\@vec{v}\cdot\@vec{B}\right)^{2}/2}{\rho\,h}\,.$
(12)
In all magnetized cases, the magnetization parameter is set $\sigma_{\rm
M,\,in}=10^{-3}$ for the inner jet component and $\sigma_{\rm
M,\,out}=10^{-4}$ for the outer jet component, sufficiently low to allow the
Fermi 1 acceleration mechanism to be efficient (Lemoine & Pelletier 2010;
Sironi et al. 2013; Plotnikov et al. 2018). In all cases the relativistic jet
is kinematically dominated.
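Implemented as written, the magnetization parameter of Eq. (12) is a one-liner; the sketch below is illustrative (our own function name) and takes $\@vec{B}$ and $\@vec{v}$ in the absolute frame.

```python
import numpy as np

def magnetization(B, v, rho, h):
    """Magnetization parameter sigma_M as defined in Eq. (12)."""
    return (np.dot(B, B) + 0.5 * np.dot(v, B)**2) / (rho * h)
```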
In the poloidal field case (P), the magnetic field is uniform and parallel to
the jet flow in each component, and $\left(\sigma_{\rm M,\,in},\sigma_{\rm
M,\,out}\right)$ are constants. In the toroidal and helical cases (T, HL), we
adopt the same profile as in Meliani & Keppens (2009),
$B_{\varphi}=\left\\{\begin{aligned}
&B_{\rm\varphi,in,0}\,\left(\frac{R}{R_{\rm\,in}}\right)^{a_{\rm
in}/2}&&;\text{if}\ R<R_{\rm\,in}\,,\\\
&B_{\rm\varphi,out,0}\,\left(\frac{R}{R_{\rm\,in}}\right)^{a_{\rm
out}/2}&&;\text{if}\ R\geq R_{\rm\,in}\,,\end{aligned}\right.$ (13)
where $\left(B_{\rm\varphi,in,0},\,B_{\rm\varphi,out,0}\right)$ are the
toroidal magnetic field strengths of the inner and outer jet components at
their interface; they are deduced from Eq. (12), and we fix the exponents
$(a_{\rm in},\,a_{\rm out})=(0.5,-2)$ as in Meliani & Keppens (2009). It
should be noted that the value of these exponents can have an influence on the
resulting pattern of the recollimation shocks and the moving shock.
Concerning the helical-field case (HL), we chose the same configuration as for
the toroidal-field case (Eq. 13) and a constant axial field strength with
constant magnetic pitch angle $\theta_{B}=45^{\degree}$.
Finally, the thermal pressure profile $p_{\rm th}(r)$ is deduced by assuming
for each component a transverse equilibrium among the pressure gradient and
Lorentz force following (Meliani & Keppens 2009),
$p_{\rm th}(r)=p_{\rm tot}-\left(\left(1-v_{\rm
z}^{2}\right)\cdot\left(\dfrac{1}{a_{\rm
in}+0.5}\right)\cdot\dfrac{B_{\varphi}^{2}}{2}+\dfrac{B_{\rm
z}^{2}}{2}\right)\,.$ (14)
The initial distributions of $B_{\varphi}$ and $p_{\rm mag}$ in the different
jet components can be seen in Fig. 14.
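The transverse-equilibrium pressure of Eq. (14) is likewise straightforward to evaluate pointwise; this sketch (our own helper, with the paper's $a_{\rm in}=0.5$ as default) returns the thermal pressure given the total pressure and the local field components.

```python
def thermal_pressure(p_tot, v_z, B_phi, B_z, a_in=0.5):
    """Thermal pressure from transverse equilibrium, Eq. (14)."""
    return p_tot - ((1.0 - v_z**2) * (1.0 / (a_in + 0.5)) * B_phi**2 / 2.0
                    + B_z**2 / 2.0)
```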
The smooth transition between the two components of the jet is imposed using a
hyperbolic tangent function with an extension of $R_{\rm\,in}/2$, applied to
the density and the magnetization parameter $\sigma_{\rm M}$. We note that all
physical parameters are calculated with respect to the relativistic jet
kinetic energy flux, radius, Lorentz factor and magnetization.
To make high-resolution simulations over a large spatial extent tractable, we assume axisymmetry. The simulations are carried out in a domain of size $\left[R,Z\right]=\left[16,200\right]\ R_{\rm jet}$; we take a base resolution of $64\times 1000$ and allow for five levels of refinement, achieving an effective resolution of $1024\times 16000$. For the initial state, the jet inlet is set over the full jet axis $Z$.
## 4 Results from steady-state jet simulations
In the following, we present simulation results for the four magnetic
configurations (H, T, P, HL). The simulations are run until they reach a
steady state with a stationary shock pattern appearing within the two
components of the jet over the full simulation box (Fig. 1).
Figure 1: Snapshot: different types of structured jets (H, T, P and HL)
without an injected perturbation. The propagation of the jet is going from
left to right along the increasing value of Z. For each case, the density
contour (in log-scale) is drawn on the bottom side and the thermal energy contour on the top side. The x- and y-axes are given in units of $R_{\rm jet}$.
Figure 2: Profile of the mean Lorentz factor along the propagation axis $Z$
and for a radius between $R=\left[0.1,0.23\right]\leavevmode\nobreak\ R_{\rm
jet}$ (in the inner jet). We show the profiles of the four cases studied, with (H) in blue, (T) in green, (P) in orange and (HL) in red, without ejecta.
### 4.1 Hydrodynamic case (H)
The over-pressured inner jet expands inducing the development of multiple
steady conical recollimation shocks and rarefaction waves along the inner and
the outer jet component (Fig. 1, top left). The high inertia ratio between the
inner and outer jet component $\left(\gamma^{2}_{\rm in}h_{\rm in}\rho_{\rm
in}\right)/\left(\gamma^{2}_{\rm out}h_{\rm out}\rho_{\rm out}\right)\simeq
42$ enhances the influence of the inner component on the outer component, even
if the inner component carries only $25\%$ of the total kinetic energy flux.
In the jet inlet at the inner/outer jet component interface, the lateral
expansion of the over-pressured inner jet is associated with the development
of conical rarefaction waves propagating in the inner jet component toward the
jet axis, and conical shock waves propagating toward the jet edge in the outer
jet. These propagating waves form an angle $\alpha=\arctan\left(1/{\cal
M}\right)$ with the jet axis, where ${\cal M}=\gamma\,v/(\gamma_{\rm s}c_{\rm
s})$ is the relativistic Mach number of the jet component in which the waves
propagate, $c_{\rm s}$ is the sound speed and the associated Lorentz factor
$\gamma_{\rm s}=1/\sqrt{1-c_{\rm s}^{2}}$ (for more details, see Hervet et al. 2017). These waves produce a pattern of successive and near-equidistant
standing recollimation shocks with separation distance of $\delta Z_{\rm
shock}=2R_{\rm jet}{\cal M}$.
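The wave geometry above follows directly from the relativistic Mach number; a sketch with $c=1$ and illustrative inputs:

```python
import math

def rel_mach(gamma, c_s):
    """Relativistic Mach number M = gamma*v / (gamma_s*c_s), with
    v = sqrt(1 - 1/gamma^2) and gamma_s = 1/sqrt(1 - c_s^2)."""
    v = math.sqrt(1.0 - 1.0 / gamma**2)
    gamma_s = 1.0 / math.sqrt(1.0 - c_s**2)
    return gamma * v / (gamma_s * c_s)

def wave_angle_deg(mach):
    """Angle alpha = arctan(1/M) between the conical waves and the jet axis."""
    return math.degrees(math.atan(1.0 / mach))

def shock_spacing(mach, R_jet=1.0):
    """Separation of the standing recollimation shocks, dZ = 2*R_jet*M."""
    return 2.0 * R_jet * mach
```

A faster or colder (lower $c_{\rm s}$) flow has a larger Mach number, hence a smaller wave angle and more widely spaced standing shocks.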
In the inner jet component, the rarefaction wave propagates inward and forms
an angle $\alpha_{\rm in}\sim 5.5\degree$ with the jet axis; when it reaches
the jet axis, it is reflected outward under the same angle. At the interface
between the inner and outer jet components, the wave is partially transmitted
as a rarefaction wave in the outer jet with an angle $\alpha_{\rm
out}=\arctan\left(1/{\cal M_{\rm out}}\right)\simeq 7.5\degree$ and it is
partially reflected as a shock wave toward the jet axis inside the inner jet.
Each time the wave is partially reflected at the inner or the outer jet border, it changes type from rarefaction to shock wave and vice versa. We
can clearly see this structure in the evolution of the inner-jet Lorentz
factor along the $Z$-direction (Fig. 2, blue curve). The partial transmission
from the inner toward the outer jet dampens the wave and its intensity
decreases with distance, whereas the wave intensity associated with the outer
jet component increases with distance, since it accumulates the successive
waves transmitted from the inner component (Fig. 1). Moreover, each time the
waves from inner and outer jet interact, a partial wave reflection is produced
toward the jet axis. A pattern of multiple waves arises with a wavelength
depending on the Mach number of the inner and outer jet and on the jet radius.
The transverse ram pressure produced by this reflected wave causes an
expansion of the jet. The expansion of the inner jet component with its higher
effective inertia is more pronounced. The radial expansion stabilizes fairly
quickly and the jet opening angle becomes $\theta_{\rm jet}\simeq 0.05\degree$
for the parameters chosen in our simulations (Fig. 3, blue line).
### 4.2 Toroidal case (T)
Figure 3: Radius of the external jet ($R_{\rm out}$) as a function of the
distance along the jet axis. We represent, in different colors, the
hydrodynamic (H, lowest curve), the toroidal (T), the poloidal (P) and the
helical case (HL) of jet. The opening angle is deduced from the slope of a
linear function fitted to these curves. Figure 4: Zoom on the base of the toroidal jet (T). A standing shock structure appears on the pressure map. The $R$-axis and the $Z$-axis are given in $R_{\rm jet}$ unit. The white lines represent the rarefaction and compression wave shock fronts.
In the toroidal case, a large-scale azimuthal magnetic field is set up in the
inner jet and in the outer jet with respectively $B_{\rm in,\varphi}=50$ mG
and $B_{\rm out,\varphi}=5$ mG. Due to the over-pressured inner jet, a
recollimation structure arises in the jet, with a “diamond” structure formed by compression and rarefaction waves in the two components (Fig. 1 (top right)
or Fig. 4). Since the magnetic field strength chosen for this case is weak,
the kinetic energy flux and the inertia ratio between the outer and the inner
jet component remain of the same order as for the hydrodynamic case (H).
However, there are a few differences between the toroidal and hydrodynamic
cases.
The high Lorentz factor of the inner jet component, at least $\gamma=10$, induces a strong radial electric field $E_{r}=-B_{\varphi}\cdot\sqrt{1-1/\gamma^{2}}$ that decreases the efficiency
of the radial magnetic tension to collimate the jet. This efficiency decreases
more in the rarefaction region where the Lorentz factor can reach a higher
value of $\gamma=20$ and the magnetic field strength decreases. As a result,
the jet expands radially in these zones and the rarefaction zones become more
elongated in the Z direction. The shock wave at the recollimation knots is
dampened and appears closer to the jet axis compared to the hydrodynamic case.
Therefore, the stationary shock wave decollimates the jet, which expands with an opening angle twice that of the hydrodynamic case, $\theta_{\rm jet}\simeq 0.10\degree$ (Fig. 3, green curve). Nevertheless, thanks to the
magnetic tension, we recover a higher value of magnetic energy ($\sim B^{2}$)
in the inner jet compared to an axial magnetic field. With distance from the
jet inlet ($Z>100\leavevmode\nobreak\ R_{\rm jet}$), the strength of the shock
wave and the associated rarefaction wave decreases. As a result, the ram-
pressure applied by these waves weakens and the radial expansion of the jet
slows down (Fig. 3, green curve). In this region, a radial instability grows
in the jet inducing oscillations in the density and Lorentz factor.
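The weakening of magnetic collimation described above can be quantified with a short sketch of $E_{r}=-B_{\varphi}\sqrt{1-1/\gamma^{2}}$ (in units with $c=1$; the values of $\gamma$ are the ones quoted in this section):

```python
import math

def radial_E(B_phi, gamma):
    """Radial electric field E_r = -B_phi*sqrt(1 - 1/gamma^2) (c = 1).
    As gamma grows, |E_r| approaches |B_phi| and the net collimating
    force (magnetic tension minus electric repulsion) weakens."""
    return -B_phi * math.sqrt(1.0 - 1.0 / gamma**2)
```

Between the compression zones ($\gamma=10$) and the rarefaction zones ($\gamma=20$), $|E_r|$ grows closer to $|B_\varphi|$, which is why collimation is least efficient precisely where the flow is fastest.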
### 4.3 Poloidal case (P)
Now we consider the case of an initial large-scale axial and uniform magnetic
field in the inner-jet $B_{\rm in,\leavevmode\nobreak\ p}\,=\,50\,{\rm mG}$
and in the outer-jet $B_{\rm out,\leavevmode\nobreak\ p}\,=\,5\,{\rm mG}$. As
in the hydrodynamic case, the inner-jet component is over-pressured, which leads to multiple steady conical recollimation shocks and rarefaction waves that emerge at the jet inlet and propagate along the jet (Fig. 1, bottom left).
At the jet inlet, the low magnetization induces only a weak difference with
the hydrodynamic case. The distance between two recollimation shocks remains
of the same order. The difference appears at a larger distance, where the
magnetized jet starts to decollimate.
When the shock waves interact with the inner and the outer jet interface, the
jet undergoes a weak transverse expansion. In this case, the jet expansion
induced by the shock wave is stronger than in the hydrodynamic case, with a jet opening angle three times larger, $\theta_{\rm jet}\simeq 0.17\degree$
(Fig. 3, orange curve). The successive shock waves push the magnetic field
toward the axis, increasing the magnetic pressure. As a result, the jet
decollimates. Moreover, the rarefaction regions exhibit a stronger radial
expansion, inducing more efficient acceleration.
The interaction between the shock waves and the poloidal magnetic field
induces radial instabilities that can be associated with the development of
thin structures. By perturbing the jet, these instabilities develop at the interface where the expansion is at a maximum and where the poloidal
magnetic pressure in $Z$-direction increases. These instabilities become more
pronounced with distance from the core and disturb the jet. As a result, the
intensity of the shock and rarefaction waves decreases with distance in
comparison with the hydrodynamic case. In addition, an expansion of the external jet toward the internal jet is clearly visible. This expansion, due to
the heating of the external jet and the magnetic pressure along the $Z$-axis,
tends to compress and recollimate the internal jet and thus modifies the
structure of stationary shocks. It is this compression that tends to stretch
the shock structure in the internal jet along the propagation axis.
### 4.4 Helical case (HL)
Finally, we consider a helical case with similar characteristics to the poloidal case, with a magnetic field strength of $B=50$ mG in the inner jet and $B=5$ mG in the outer jet. The only difference is the helical structure
with a pitch angle between the azimuthal and axial magnetic direction fixed to
$45\degree$.
Overall, the recollimation shock structure is similar to the poloidal one
(Fig. 1, bottom right). As for the poloidal case, the jet decollimation occurs
after the second recollimation knot at a distance $Z\simeq
50\leavevmode\nobreak\ R_{\rm jet}$ from the jet inlet, and the jet opening
angle tends to $\theta_{\rm jet}\simeq 0.17\degree$ (Fig. 3, red curve). A
small difference in the jet expansion appears between the poloidal and helical
case at a large distance that results from the toroidal magnetic tension that
tends to collimate the jet.
As in the poloidal case, radial instabilities develop due to the axial
magnetic pressure increasing in the rarefaction region and they explain the
strong variation of the Lorentz factor along the $Z$-direction in the inner
jet (Fig. 2, red curve).
We observe again an expansion of the external jet toward the internal jet. The
poloidal component of the magnetic field still involves compression of the
internal jet and an elongation of the stationary shock structure in the
direction of propagation. In addition and similar to the toroidal case, the
toroidal component of the magnetic field implies a higher value of magnetic
energy (compared to the case without toroidal component) in the inner jet.
## 5 Results from simulations of moving ejecta
A promising scenario to explain the observed flux variability in AGN jets is
to consider the propagation of shock waves within the jet. In our models, the
shock wave is caused by an initially over-dense ($\rho_{\rm e}=10^{3}\rho_{\rm
in}$) spherical ejecta with radius $R_{\rm e}=R_{\rm jet}/6$ (half of the
inner jet radius) and placed initially at the jet axis ($R=0$) at the distance
$Z=R_{\rm e}$ from the inner boundary. Its Lorentz factor is the same as the
one of the inner jet $\gamma_{\rm e}=\gamma_{\rm in}$. With this
configuration, the kinetic energy flux associated with the ejecta is
$10^{47}\ \text{erg/s}$. We should note that the thermal energy of the ejecta is negligible in comparison to its kinetic energy. All times in this section are given in the co-moving frame ($\delta=1$).
The ejecta reaches the edge of the simulation box at time
$t=200\leavevmode\nobreak\ R_{\rm jet}$, but the simulations run until
$t=230\leavevmode\nobreak\ R_{\rm jet}$ to cover the jet relaxation phase
after the passage of the shock wave induced by the ejecta. Fig. 5 shows each jet with the moving shock wave at the co-moving time $135\ R_{\rm jet}$.
### 5.1 Hydrodynamic case (H)
For the hydrodynamic case (H) (Fig. 5, top left), the ejecta remains well
confined within the inner jet during a short period $t<5\leavevmode\nobreak\
R_{\rm jet}$ until it starts interacting with the first standing shock. During
this phase, the ejecta and thus the moving shock wave undergoes an adiabatic
acceleration in the first rarefaction zone to reach a Lorentz factor
$\gamma_{\rm ms}\simeq 14$ before it collides with the first standing shock.
As a result, the thermal energy of the ejecta increases and as it evolves in
the rarefaction region, it is accelerated once more (Fig. 6, blue curve). In
interactions between the ejecta and internal shocks, all the gained energy of
the ejecta is transferred to shock waves that continue to propagate mainly
within the inner jet with low transverse energy loss. Globally, the velocity of the ejecta follows the Lorentz-factor profile of the internal jet, without dropping below the minimum value of $\gamma=11$. As the ejecta travels through the jet, it momentarily perturbs the stationary shock structure.
### 5.2 Toroidal case (T)
In the toroidal case (T) (Fig. 5, top right), the interaction of the moving
shock wave with the standing shocks has some similarities with the previous
case. In the first rarefaction zone, the ejecta accelerates adiabatically
before it starts interacting with the first shock. The resulting moving shock
wave is subject to a strong radial expansion after this first interaction and
slows down close to its initial value of $\gamma_{\rm ms}=10$ at $Z\sim
50\leavevmode\nobreak\ R_{\rm jet}$. Then the moving shock sees its velocity
increase between each recollimation shock (Fig. 6, green curve). Due to its
initial higher inertia compared to the surrounding jet, the ejecta undergoes a stronger interaction when it crosses the recollimation zones. Therefore, the thermal pressure of the ejecta increases more than that of the ambient flow. Afterwards, in the rarefaction zone, the shocked ejecta, with its higher pressure and inertia, starts to behave like a fireball within the surrounding jet. As a
result, the moving shock wave induced by the ejecta reaches a higher Lorentz
factor than the surrounding jet.
As mentioned before (Section 4.2), the rarefaction zones are larger than in
the other cases, especially after the fourth standing shock, where the shock
wave is strongly accelerated to a value of $\gamma_{\rm ms}\simeq 30$ at
$Z\sim 125\leavevmode\nobreak\ R_{\rm jet}$. Then the moving shock interacts
strongly with all the following standing shocks and causes them to oscillate
along the jet axis (with a typical oscillation time close to $\sim 13R_{\rm
jet}$).
Figure 5: Snapshot: different types of structured jets (H, T, P and HL) with
an injected perturbation. The generated moving shock wave is located at $\sim
135R_{\rm jet}$ from the base. The propagation of the jet is going from left
to right along the increasing value of Z. For each case, the density contour
(in log-scale) is drawn on the bottom side and the thermal energy contour on
the top side. The x- and y-axes are given in units of $R_{\rm jet}$.
Figure 6: Evolution of the Lorentz factor of the moving shock as a function of
time in the co-moving frame. We represent, in different colors, the
hydrodynamic (H), toroidal (T), poloidal (P) and the helical case (HL) of jet.
### 5.3 Poloidal case (P)
In the poloidal case (P) (Fig. 5, bottom left), the moving shock wave
undergoes a strong acceleration to reach $\gamma_{\rm ms}\simeq 21$ before it
interacts with the first stationary features and slows down to $\gamma_{\rm
ms}\simeq 13$ at $Z\sim 24\ R_{\rm jet}$ (Fig. 6, orange curve). In a second phase, the resulting moving shock propagates through the
successive rarefaction zones where it accelerates and the compression zones
where it decelerates. The mean Lorentz factor of the moving shock increases
with distance until it reaches a stationary shock at the distance $Z\sim
125\leavevmode\nobreak\ R_{\rm jet}$. This acceleration results from the
expansion of the inner jet component in this region. Beyond this point, the moving shock continues to propagate in the inner jet component, which is compressed by
the outer jet, and transverse instabilities grow along the stationary shock
wave. In this region, the rarefaction zones are smaller and they are subject
to multiple small scale stationary shocks. As a result, the moving shock wave
decelerates to reach a Lorentz factor of $14$. As in the toroidal case, beyond
the fourth stationary shock, the passage of the moving shock induces an
oscillation.
### 5.4 Helical case (HL)
In the helical case (HL) (Fig. 5, bottom right), the moving shock, after
crossing the first internal shock wave at $z=5\leavevmode\nobreak\ R_{\rm
jet}$, also undergoes a strong acceleration in the first rarefaction. Like in
the poloidal case, this strong acceleration is followed by a strong
interaction of the moving shock with stationary shocks, leading to the
deceleration of the moving shock to $\gamma_{\rm ms}=10$ (Fig. 6, red curve). Beyond the fourth standing shock, the moving shock accelerates again
to $\gamma_{\rm ms}\simeq 20$.
## 6 Modeling the radiative transfer
### 6.1 Radiative processes
In a post-processing step using a separate code, we evaluate the synchrotron
radiation of an electron population in the radio band and solve the radiative
transfer equation along a given line of sight. We construct synchrotron
emission maps to study the variation of the flux observed in different zones
along the jet over time, as well as light curves of the spectral flux density
integrated over the full simulated jet. In each cell, the relativistic electron population is set as a power law, as expected for shock acceleration,
$N_{\rm e}\left(\gamma\right)\,\text{d}\gamma=K\,\gamma^{-\text{p}}\,\text{d}\gamma\,,$ (15)
where $\gamma_{\rm min}<\gamma<\gamma_{\rm max}$. Following Gómez et al. (1995), we define the normalization coefficient as,
$K=\left[\dfrac{e_{\rm th,e}\left(\text{p}-2\right)}{1-C_{\rm E}^{2-\text{p}}}\right]^{\text{p}-1}\left[\dfrac{1-C_{\rm E}^{1-\text{p}}}{n_{\rm e}\left(\text{p}-1\right)}\right]^{\text{p}-2}\,,$ (16)
where $e_{\rm th,e}=\epsilon_{\rm e}e_{\rm th}$ is the fraction of thermal
energy carried by the electrons, $n_{\rm e}=\epsilon_{\rm e}\,n$ is the
fraction of the electron number density with $\epsilon_{\rm e}=0.1$,
$\text{p}=2.2$ is the index of the power law and the coefficient $C_{\rm
E}=\gamma_{\rm max}/\gamma_{\rm min}$ is set to $10^{3}$. As a simplification,
we do not take into account radiative losses of the relativistic electrons and
assume,
$\gamma_{\rm min}=\dfrac{e_{\rm th,e}}{n_{\rm
e}}\dfrac{\text{p}-2}{\text{p}-1}\dfrac{1-C_{\rm E}^{1-\text{p}}}{1-C_{\rm
E}^{2-\text{p}}}\,.$ (17)
In the present application, we are focusing only on the radio band, where the
effect of radiative cooling is the smallest.
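Eqs. (16) and (17) translate directly to code; a sketch using the parameter values quoted above ($p=2.2$, $C_{\rm E}=10^{3}$), with $e_{\rm th,e}$ and $n_{\rm e}$ treated as given inputs:

```python
def powerlaw_norm(e_th_e, n_e, p=2.2, C_E=1.0e3):
    """Normalization K of the electron power law (Eq. 16)."""
    return ((e_th_e * (p - 2.0) / (1.0 - C_E**(2.0 - p))) ** (p - 1.0)
            * ((1.0 - C_E**(1.0 - p)) / (n_e * (p - 1.0))) ** (p - 2.0))

def gamma_min(e_th_e, n_e, p=2.2, C_E=1.0e3):
    """Minimum electron Lorentz factor (Eq. 17), set by the mean
    thermal energy per electron e_th_e / n_e."""
    return ((e_th_e / n_e) * (p - 2.0) / (p - 1.0)
            * (1.0 - C_E**(1.0 - p)) / (1.0 - C_E**(2.0 - p)))
```

Since $\gamma_{\rm min}$ scales linearly with the energy per electron, the hottest cells (shocked regions) radiate from the most energetic electron populations.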
The specific intensity for each cell with index i is determined in the frame of a stationary observer at the location of the source (quantities in the co-moving frame are noted $x^{\prime}$),
$I_{\nu;\,\rm\texttt{i}}=I_{\nu;\,\rm\texttt{i}-1}\exp{\left(-\tau_{\nu}\right)}+S_{\nu;\,\rm\texttt{i}}\left(1-\exp{\left(-\tau_{\nu}\right)}\right)\,,$
(18)
where $\tau_{\nu}$ is the optical depth due to synchrotron self-absorption and
$S_{\nu}$ is the synchrotron source function.
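The recursion of Eq. (18) amounts to marching cell by cell along the line of sight, attenuating the upstream intensity and adding each cell's own source term; a minimal sketch:

```python
import math

def integrate_los(cells, I0=0.0):
    """Solve Eq. (18) cell by cell along the line of sight.
    `cells` is an ordered list of (tau_nu, S_nu) pairs: each cell
    attenuates the incoming intensity by exp(-tau_nu) and emits
    S_nu * (1 - exp(-tau_nu))."""
    I = I0
    for tau, S in cells:
        att = math.exp(-tau)
        I = I * att + S * (1.0 - att)
    return I
```

In the optically thick limit the intensity saturates at the source function of the last cell, while in the optically thin limit the upstream intensity passes through essentially unchanged.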
Fig. 7 shows schematically how the contribution of each cell along the line of
sight to the total specific intensity $I_{\nu}$ is estimated. It should be
noted that here we do not account for light crossing time effects, which can
lead to a superposition of the signal from different emission regions along
the jet due to the relativistic movement of the jet and the final speed of
signal propagation. In the radio band, this effect is expected to be of minor
importance (Chiaberge & Ghisellini 1999).
Figure 7: Schematic representation of the resolution of the radiative transfer
equation. We sum the different contributions along the line of sight.
The specific intensity $I_{\nu}$ (in $\text{erg}\ \text{s}^{-1}\ \text{cm}^{-2}\ \text{Hz}^{-1}\ \text{sterad}^{-1}$) depends on the specific emission coefficient $j_{\nu}$, the absorption coefficient $\alpha_{\nu}$ and the optical depth $\tau_{\nu}$. They are transformed to the observer frame at the location of the source for each cell,
$\displaystyle j_{\nu}$ $\displaystyle=\delta^{2}\leavevmode\nobreak\
j_{\nu^{\prime}}\,,$ (19) $\displaystyle\alpha_{\nu}$
$\displaystyle=\delta^{-1}\leavevmode\nobreak\ \alpha_{\nu^{\prime}}\,,$ (20)
$\displaystyle\tau_{\nu}$ $\displaystyle=\tau_{\nu^{\prime}}\,.$ (21)
These transformations (Rybicki & Lightman 1979) depend on the Doppler factor
$\delta=\left(\gamma\left(1-\beta\cos{\left(\theta_{\rm
obs}\right)}\right)\right)^{-1}$ with $\gamma=1/\sqrt{1-v^{2}}$ the bulk
Lorentz factor of the material in the cell and $\theta_{\rm obs}$ is the angle
between the direction of the jet axis and the line of sight. The different
quantities ($j_{\nu}$, $\alpha_{\nu}$ and $\tau_{\nu}$) are estimated in each
cell following the approximations given by (Katarzyński et al. 2001) which are
appropriate for the cases studied here. The synchrotron flux in the observer
frame on Earth is determined by,
$F_{\nu}=\frac{S_{\rm e}}{d_{\rm l}^{2}}\left(1+z\right)I_{\nu}\,,$ (22)
with $d_{\rm l}$ the luminosity distance (assuming $H_{0}=70\ \text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$) and $S_{\rm e}$ the typical emission surface.
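The Doppler factor and the frame transformations of Eqs. (19)-(21) can be sketched as follows; the low-redshift approximation $d_{\rm l}\approx cz/H_0$ is an assumption added for illustration (adequate at $z=0.00428$), not the cosmology used by the authors:

```python
import math

C_KM_S = 2.998e5  # speed of light in km/s

def doppler(gamma, theta_obs_rad):
    """Doppler factor delta = 1/(gamma*(1 - beta*cos(theta_obs)))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta_obs_rad)))

def to_observer_frame(j_com, alpha_com, tau_com, delta):
    """Eqs. (19)-(21): j scales as delta^2, alpha as 1/delta,
    tau is invariant."""
    return delta**2 * j_com, alpha_com / delta, tau_com

def lum_distance_mpc(z, H0=70.0):
    """Low-redshift luminosity distance d_l ~ c*z/H0 (in Mpc)."""
    return C_KM_S * z / H0
```

With $\gamma=10$ this sketch reproduces the values used in the text, $\delta(90\degree)=0.1$ and $\delta(2\degree)\simeq 18$.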
In our study, as an illustration, we choose the distance corresponding to M 87
($z=0.00428$). Finally, we obtain a 2D flux map of the synchrotron emission.
To provide images that can eventually be compared with real VLBI images, we smooth the simulated images with a typical beamwidth obtained in the radio domain for M 87, close to $1.6\ R_{\rm jet}$. To distinguish between the emission from the jet and from the ejecta, we fit an asymmetric 2D Gaussian distribution to the ejecta at each time step and extract the flux from the fitted ($2\sigma$) region (Fig. 9).
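The beam smoothing can be approximated by convolving each image axis with a normalized Gaussian kernel; a dependency-free sketch, where the kernel width in pixels is a hypothetical stand-in for the $1.6\ R_{\rm jet}$ beamwidth:

```python
import math

def gaussian_kernel(sigma, half_width=None):
    """1-D normalized Gaussian kernel used to mimic the radio beam."""
    if half_width is None:
        half_width = max(1, int(3 * sigma))
    k = [math.exp(-0.5 * (i / sigma) ** 2)
         for i in range(-half_width, half_width + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth_row(row, kernel):
    """Convolve one image row with the kernel (zero-padded edges).
    Applying this along R and then along Z approximates a 2-D beam."""
    h = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - h
            if 0 <= idx < len(row):
                acc += w * row[idx]
        out.append(acc)
    return out
```

Because the kernel is normalized, the total flux in the map is preserved away from the edges, which matters when light curves are built from the smoothed images.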
To add synchrotron emission in the hydrodynamic case (H), a passive turbulent magnetic field is added during the post-processing step. In this case, we assume that the magnetic energy density is a fraction $\epsilon_{\rm B}$ of the thermal energy density,
$B_{\text{turb}}=\sqrt{\epsilon_{\rm B}\,e_{\rm th}}\,,$ (23)
with $\epsilon_{\rm B}=0.1$.
### 6.2 Synthetic synchrotron images and light curves
Synthetic images are built by integrating the radiative transfer equation (Eq. 18) along a line of sight, for all four cases (H, T, P, HL) and for one viewing angle $\theta_{\text{obs}}=90\degree$ (Fig. 8). We show the light curves computed for the four cases at $\theta_{\rm obs}=90\degree$ (Fig. 9) and for the hydrodynamic case (H) at $\theta_{\rm obs}=2\degree,\ 15\degree$ (Fig. 10). The
light curve is computed by integrating the flux on the full simulation box for
the three observation angles. For an observation angle of
$\theta_{\text{obs}}=90\degree$, it is also computed by integrating only the
emission issued from the moving shock wave using the method described in
Section 6.1.
Figure 8: Snapshot: synchrotron emission map of the different types of jets (H, T, P, and HL) without ejecta, in the stationary state. Each map represents the flux intensity in mJy. The $R$- and $Z$-axes are given in $R_{\rm jet}$ unit. These maps show the solution of the radiative transfer equation with an angle between the jet propagation axis and the line of sight equal to $\theta_{\text{obs}}=90\degree$ and $\nu=1$ GHz.
The synthetic image of the stationary hydrodynamic jet (H) emission obtained
for the observation angle $\theta_{\rm obs}=90\degree$ is presented in Fig. 8
(top). The emission is dominated by the stationary knots within the inner jet
component. The intensity and the size of the emitting knots decrease with
distance as a result of the slow damping of shock strength with distance
(Section 3). The contribution from the external jet component to the emission
is negligible, since the stationary shock wave within this jet component is
weak.
The interaction between the moving shock wave, induced by the ejecta, with the
stationary shocks rapidly increases the injection of accelerated particles and
the associated synchrotron emission, causing a flare. The emission from the
moving shock wave increases each time it passes through a standing knot.
Moreover, after each collision, the stationary compression wave within the jet
cannot ensure the radial cohesion of the shocked knot, thus the heated knot
undergoes adiabatic expansion until it reaches the interface between the
inner-outer jet. Afterwards the knot cools down and contracts. A remnant emission is associated with this adiabatic expansion. This illumination is
visible on the jet light curve obtained with observation angle $\theta_{\rm
obs}=90\degree$ (Fig. 9, top left). It contributes to the emission during the
decay phase of the observed flares (Fig. 9, top left). Moreover, each time a
knot is shocked by the moving shock wave, it starts to stretch and to
oscillate along the jet axis. This behavior is associated with local changes
of jet characteristics, such as the Mach number $\cal{M}$ and the inner jet
radius $R_{\rm jet,in}$. As a result, the structure of the standing shock
waves evolves over a long period. This variability is associated with an
increase of the base emission of the jet compared to the stationary case.
Figure 9: Light curve obtained by integrating the total synchrotron flux
emitted from a simulation box with a size of
$\left[\text{R}=8,\leavevmode\nobreak\ \text{Z}=200\right]R_{\rm jet}$. The
computation of the light curve is realized from the four different cases of
jets (H, T, P and HL). The flux is integrated from $\text{t}=0$, the time of the injection of the ejecta, until $t\sim 2300\ R_{\rm jet}/\delta$ with $\delta(\theta_{\rm obs}=90\degree)=0.1$ in the observer frame and with $\nu=1$ GHz. We separate the total flux (orange) into two components: the jet (green) and the moving shock wave (blue).
The synthetic image of the stationary state for the toroidal case (T) is shown
in the second panel of Fig. 8. Like in the hydrodynamic case (H), the emission
comes mainly from the standing shocks in the inner jet, but the emission from
the region between the shocks is stronger. Moreover, the stationary shocks
appear more elongated along the jet and especially beyond the fourth knot.
This behavior results from the toroidal magnetic field (Section 4.2).
The moving shock within the jet increases the emission, and the interaction
with the stationary knots produces flares (Fig. 9, bottom left). In comparison
with the hydrodynamic case (H), the strength of the flares is variable with
time. The strongest flare occurs at a time $t\sim 1100R_{\rm jet}/\delta$ when
the moving shock crosses the fourth (and strongest) standing shock. This
interaction leads to a strong deformation and oscillation of the knots along
the jet axis. The following relaxation of the knots is associated with a slow
flux decline in the light curve and produces an asymmetric flare. This
asymmetry occurs in the strongest flares and is enhanced when a knot splits in
two after interacting strongly with the moving shock.
The synthetic image for the poloidal case (P) is shown in the third panel of Fig. 8, where the emission of the stationary shock structure is represented. The
emission comes mainly from the part of the stationary shocks very near to the
jet axis, where the shock is sufficiently strong. In comparison to cases H and
T, the steady shock wave is weaker in the poloidal case. A complex emission
structure is visible due to the magnetic pressure along the $Z$-axis and the
expansion of the outer toward the inner jet (Section 4.3). In this case, the
variations in flux observed between shock and rarefaction zones are weak. Due
to the expansion of the outer jet, the emission structure is more elongated
along the jet axis and therefore less pronounced. As in the previous cases, the moving shock induced by the ejecta triggers flares as it crosses the steady knots (Fig. 9, top right), but the efficiency of its interaction is weaker than
in the H and T cases. Indeed, the poloidal magnetic field tends to damp the
strength of the steady shock waves and it limits the transverse expansion of
the moving shock wave. The resulting light curve is characterized by weak
flares. Furthermore, when the moving shock interacts with a standing shock,
instabilities form along the steady shock wave. These instabilities will
propagate through the jet in the form of several moving shocks and thus
locally increase slightly the synchrotron emission.
Finally, the synthetic images of the helical case (HL) can be seen in the last panel of Fig. 8, where the emission of the stationary shock structure is represented. As in all the other cases, the emission comes mainly from the
shock zones within the inner jet. However, the observed synchrotron flux is
twice as high as in the other cases, due to the combination of the toroidal
field effects with strong emission emanating from the center of the knots and
the poloidal field effects with diffuse emission resulting from the
compression of the inner jet by the outer component.
As in the previous cases, the moving shock will increase locally the
synchrotron flux emitted and in particular when it crosses a stationary knot.
But despite the fact that a toroidal magnetic field component is present, the
shape of the light curve is very similar to the poloidal case (P) at
$\theta_{\rm obs}=90\degree$ (Fig. 9, bottom right). As the emission is
extended along the axis in the internal jet, variability is observed
essentially from the inner jet itself due to the propagation of the moving
shock. The presence of a poloidal field allows the development of transverse
instabilities along steady shock waves. These instabilities will propagate in
the inner jet in the form of moving shocks behind the principal moving shock
and increase the emission from the jet. We notice that the impact of a
magnetic tension due to the toroidal component is visible when the moving
shock interacts with the last shock zones. In fact, the moving shock interacts
more strongly with the last standing knot where the most pronounced flare
occurs. As before, the interaction of the moving shock with a standing knot induces a knot oscillation along the jet axis.
To evaluate the impact of a reduced viewing angle on the shape of the light
curve, we show the results for the hydrodynamic configuration (H) (Fig. 10)
for $\theta_{\rm obs}=15\degree$ and $\theta_{\rm obs}=2\degree$ for 1 GHz.
Compared to the one obtained for $\theta_{\rm obs}=90\degree$, they show the
expected increase in the overall flux due to stronger Doppler beaming. The
flares are shorter in duration due to the Doppler effect, but also due to self
absorption. In certain cases, the dense ejecta partially hides the shocked
knot behind it. For an observation angle $\theta_{\rm obs}=2\degree$, the self
absorption effect is very significant. The mean intensity decreases with
distance, since, as the ejecta propagates within the jet, it hides a large
number of stationary knots.
Figure 10: Light curve obtained by integrating the total synchrotron flux
emitted from a simulation box with a size of
$\left[\text{R}=8,\leavevmode\nobreak\ \text{Z}=200\right]R_{\rm jet}$. The
computation of the light curve is realized for the hydrodynamic case (H). The
flux is integrated from $\text{t}=0$, the time of injection of the
ejecta, until respectively $t\sim\left(90,\leavevmode\nobreak\
13\right)\leavevmode\nobreak\ R_{\rm jet}/\delta$ for $\delta(\theta_{\rm
obs}=15\degree)\simeq 2.57$ (top) and $\delta(\theta_{\rm obs}=2\degree)\simeq
18$ (bottom) in the absolute frame and $\nu=1$ GHz. The gray dotted line
represents the exit of the ejecta from the simulation box.
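The Doppler factors quoted in the caption follow from the standard relativistic beaming formula $\delta=1/[\Gamma(1-\beta\cos\theta_{\rm obs})]$. As a minimal sketch, a bulk Lorentz factor $\Gamma\simeq 10$ (an assumption inferred here from the quoted $\delta$ values, not stated in this passage) reproduces them:

```python
import math

def doppler_factor(gamma: float, theta_obs_deg: float) -> float:
    """Relativistic Doppler factor delta = 1 / (Gamma * (1 - beta*cos(theta)))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    theta = math.radians(theta_obs_deg)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

# Gamma ~ 10 reproduces the values quoted in the caption:
print(doppler_factor(10, 15))  # ~2.57
print(doppler_factor(10, 2))   # ~17.8, quoted as ~18
print(doppler_factor(10, 90))  # ~0.1: strong de-beaming at 90 degrees
```

The same assumption also gives $\delta(6\degree)\simeq 9.6$, close to the value $\simeq 10$ used later for the comparison with 3C 273.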
## 7 Comparison with radio observations, the case of 3C 273
Our study shows the complex link between ejecta propagating in magnetized
two-flow jets with stationary shocks and their observed radio variability. All
simulations in the present study consider a kinetic power of the outer jet
larger than the one of the inner jet with an initial ratio of 3:1. The ratio
of two-flow kinetic powers was proposed in Hervet et al. (2017) as a critical
criterion discriminating between types of VLBI radio jets, which are
themselves associated with spectral types of blazars (Hervet et al. 2016;
Piner & Edwards 2018; Lister et al. 2019). Two-flow jets with kinetic powers
within the same order of magnitude, such as the ones simulated in this study,
were found to be the most similar to FSRQ-type blazars (flat-spectrum radio
quasars).
In order to compare the results of our simulations with an astrophysical case,
we focus on the radio observations of one of the brightest and best monitored
FSRQs over decades, 3C 273 (B1226+023). Its redshift of $z=0.1583$ (Strauss et
al. 1992) translates into a scaling factor of 2.73 pc/mas considering
$H_{0}=70$ km s${}^{-1}$ Mpc${}^{-1}$. 3C 273 displays a peculiar mix of fast moving and
quasi-stationary radio knots. This hybrid radio kinematic behavior is most
often observed in intermediate blazars (Low or Intermediate frequency-peaked
BL Lacs) (Hervet et al. 2016). However, 3C 273 significantly differs from these
sources in that its quasi-stationary knots are visible only during low-activity
periods of the jet, not continuously.
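The 2.73 pc/mas scale quoted above can be reproduced from the angular diameter distance at $z=0.1583$. A minimal numerical sketch, assuming a flat $\Lambda$CDM cosmology with $\Omega_{\rm m}=0.3$ (the matter density is an assumption made here for illustration; the passage only states $H_{0}=70$ km s${}^{-1}$ Mpc${}^{-1}$):

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def parsec_per_mas(z: float, h0: float = 70.0, omega_m: float = 0.3) -> float:
    """Angular scale [pc/mas] from the angular diameter distance in flat LambdaCDM."""
    # Comoving distance D_C = (c/H0) * integral of dz'/E(z'), trapezoidal rule
    n = 10000
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(omega_m * (1.0 + zi) ** 3 + (1.0 - omega_m))
        weight = 0.5 if i in (0, n) else 1.0
        integral += weight / e
    d_c = (C_KM_S / h0) * integral * dz   # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)                 # angular diameter distance [Mpc]
    mas_in_rad = math.pi / (180 * 3600 * 1000)
    return d_a * 1e6 * mas_in_rad         # [pc] subtended by 1 mas

print(round(parsec_per_mas(0.1583), 2))  # ~2.73
```

The result is insensitive to the exact $\Omega_{\rm m}$ at this low redshift (varying it between 0.27 and 0.3 changes the scale by well under 1%).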
For this study, we used 15.3 GHz observations from the VLBA, analyzed by the
MOJAVE team up to August 2019 (Lister et al., in prep). Most of these data up
to December 2016 are publicly available from Lister et al. (2019). We combine
this dataset with observations, at the same frequency, from the Californian
40m single-dish OVRO telescope, which provides public light curves from a
monitoring program of Fermi-LAT blazars (Richards et al. 2011). Our goal is to
see, in a qualitative way, how the observed radio-VLBI ejecta influence the
overall jet light curve observed with OVRO, as well as the luminosity
evolution during their propagation.
We consider the period 2008-2019, overlapping both OVRO and MOJAVE
observations with a dense VLBA monitoring. We specifically focus on four fast
moving radio knots (k22, k31, k37, k36) and two quasi-stationary knots (k32,
k35) which were observed during this period. All these observations are
gathered in Figure 11. The moving knot k39 is not considered in our study as
it is only referenced by MOJAVE beyond the k32 zone, and we suspect several
wrong identifications between k39, k32 and k35 from their referenced positions
and luminosities.
Figure 11: 3C 273 observed at 15.3 GHz. Top panel: Distance to the core of
radio knots analyzed by MOJAVE. We focus on the firmly identified components
(in color). Straight lines are linear extrapolations of the moving knots based
on their first 4 observations. Horizontal dashed lines show the mean position
in the jet of the two observed quasi-stationary knots k32 and k35, with the
gray band displaying the 1 sigma dispersion around the mean. Middle panel:
radio jet light curve observed by OVRO. Bottom panel: Measured flux of the
radio core and moving knots. Dashed lines (extrapolation assuming smooth core
emission) indicate that the observed core variability is actually due to the
flux increase of the emerging moving knots when they are indiscernible from
the core due to the limits of the VLBA angular resolution. Vertical lines show
the most likely time when the moving knots pass through the stationary zone
defined by k35 and the purple dashed line, with its associated uncertainty in
gray.
In Figure 11, we see that quasi-stationary knots appear during a particularly
long, two-year quiescent state of the source. This low activity period is
marked by the long decrease of the radio flux seen by OVRO starting from 2011
up to 2013, and also corresponds to an absence of radio ejecta. This
observation suggests that the radio-jet of 3C 273 presents at least two quasi-
stationary knots (k32, k35) in its nominal (quiescent, non-perturbed) state.
In a few instances, the quasi-stationary knots k32 and k35 seem absent from the
observations where one would expect to see them (k35 in two measurements
between 2010 and 2012; k32 in one measurement before 2011). While these
disappearances could be due to observing conditions or instrumental
limitations, such as the jet being locally outshone by the moving component, they
could also highlight a relaxation time for the jet structure to return to its
non-perturbed state after the passage of a moving knot. Linear extrapolations
of the motions of the ejecta show that the jet radio luminosity starts to
increase when ejecta are emerging from the radio core. For each new ejecta,
the OVRO flux increase continues up to the first standing knot marked by k35
($0.36\pm 0.08$ mas to the core), and then it decreases while ejecta continue
their propagation. VLBA observations of the flux from individual ejecta show a
similar behavior as OVRO. The observed link between VLBI jet kinematics and
the radio flux variability in 3C 273 leads to a consistent picture that is in
agreement with the scenario we are proposing with our simulations.
Firstly, there is a systematic association between the passage of ejecta
through the first standing knot k35 and a large flux increase of the overall
jet radio luminosity. This crossing is the main phenomenon triggering the
radio variability of the source. In addition, the flux increase of the ejecta
up to k35 matches what is expected if k35 is a marker of a strong
recollimation shock, as the ejecta should undergo a strong acceleration that
enhances its Doppler factor by its passage through a large rarefaction zone
before this recollimation shock.
Secondly, the ejecta enter an uninterrupted cooling phase after their
passage through k35. This is expected when considering that a rarefaction zone
should follow the standing shock k35. The second stationary knot marked by k32
does not appear to play an important role for the variability. This suggests
that this second recollimation shock is much less powerful than the first one.
Finally, the absence of a flare can be linked with the passage of the ejecta
through the radio core. This may suggest that the radio core marks the jet
expanding funnel rather than a first stationary shock (Fig. 4a in Daly &
Marscher (1988)). However studying the variability at higher energy, outside
the synchrotron-self-absorbed frequencies, would be necessary to confirm this
assumption.
Figure 12: Light curve obtained by integrating the total synchrotron flux
emitted during the first three interactions between the moving shock and the
standing shocks. The computation of the light curve is realized for the four
different cases of jets (H, T, P and HL) for $\delta(\theta_{\rm
obs}=2\degree)\simeq 18$ and $\nu=15.3\leavevmode\nobreak\ \rm{GHz}$. The red
dots represent the estimated beginning, maximum and end of the flare. Figure
13: Light curve obtained as in Fig. 12. The computation of the light curve is
realized for the cases where significant flares were present (P and HL) for
$\delta(\theta_{\rm obs}=6\degree)\simeq 10$ and $\nu=15.3\leavevmode\nobreak\
\rm{GHz}$. The red dots represent the estimated beginning, maximum and end of
the flare.
## 8 Discussion
### 8.1 Effects of jet stratification and magnetic field structure
As seen in previous studies, in hydrodynamic (Gómez et al. 1997; Fromm et al.
2018) and magnetized jets (Mizuno et al. 2015; Porth & Komissarov 2015;
Fuentes et al. 2018), we find the well-known diamond structure of standing
shocks with a clear succession of shock and rarefaction zones in an over-
pressured jet. A regular standing-shock structure with a linear intensity
decrease over the distance from the base is observed. We found a combined
effect of the large-scale magnetic configuration and jet stratification on the
details of the shock structure. As observed in Hervet et al. (2017), the jet
stratification induces a larger variety of standing-shock structures due to
the interference between the characteristic waves from the two layers of the
jet. The toroidal magnetic field strengthens the standing shocks by inducing
intense rarefaction regions, where the plasma is strongly accelerated (Fig. 1
(top right)). Stratification leads to the development of Rayleigh-Taylor
instabilities at the outer-inner jet interface and along the standing shock
wave as observed in Toma et al. (2017) in the hydrodynamic case. In the
poloidal and helical cases, the magnetic pressure along the $Z$-axis amplifies
these instabilities along the jet. They grow and heat both jet components.
Within the inner jet, these instabilities interfere with the standing shocks
and lead to a smoother structure and the appearance of a turbulent region at
large distance (see Fig. 5 (bottom)).
In a transverse structured jet, the presence of a structured magnetic field
amplifies the jet opening angle compared to the hydrodynamic case (Fig. 3).
This is especially apparent in the poloidal and in the helical cases. Even if
an outer jet layer shields the inner part from a homogeneous ambient medium
(Porth & Komissarov 2015), the magnetic field modifies the topology of the
characteristic waves of the fluid. The presence of a toroidal magnetic field
component tends to limit this transverse expansion as observed in Mizuno et
al. (2015) and at large distances in our simulations. On the other hand, the
poloidal magnetic field component induces instabilities as we saw; these
instabilities lead to a jet decollimation at medium and large distances. It
was shown by Martí (2015) that introducing an azimuthal velocity component of
the jet flow leads to a centrifugal force that further increases the jet
opening. This effect is not currently treated in our models, but we expect
that it may have an impact on the standing shock structure, which we will
evaluate in a future study.
### 8.2 Ejecta and associated flares in the light curve
In all cases, the moving shock interacts with the successive standing
rarefaction zones, where it is accelerated adiabatically, and collides with
standing shocks, where it is heated and decelerated. These interactions lead
to the appearance of flares in the light curves, due to thermal energy
increase (cf. also Gómez et al. 1997; Fromm et al. 2016). The presence of a
toroidal component of the magnetic field ensures the cohesion of the moving
shock and of the standing shocks. The fact that the moving and standing shock
zones are very compact is reflected in the emission of intense and clearly
marked flares. We recover a similar behavior in the hydrodynamic case. By
contrast, where a poloidal field is present, the interaction between the
moving shock wave and the more diffuse steady shock zones is weaker and its
associated flares are less pronounced.
As we saw, the large variety of flares obtained is related to the intensity of
the different knots, stemming from the combination of the characteristic waves
of the plasma. The outer jet component allows interferences between the
stationary shock waves in the inner jet and those of the outer jet. For
certain conditions, the two stationary shock waves can combine and lead to a
particularly strong emission. In the toroidal case, this effect leads to a
strong standing shock, with an important rarefaction zone behind it (cf. the
fourth standing shock in our simulations). This standing shock region is
linked to the luminous flare emission in the light curve (Fig. 9). After this
interaction, the ejecta is strongly accelerated in the rarefaction zone and
will lose its cohesion due to fast adiabatic expansion.
### 8.3 Temporal flare structure
The simulated flares in H, T and P cases are characterized by a temporal
asymmetry with a fast rise and a slower decay phase, even though this is not
always clearly visible due to the varying strength of the standing shocks and
the limits in temporal resolution. When the moving shock interacts with the
standing shocks, it heats and compresses them. Afterwards, the knots
decompress. These interactions induce the formation of trailing recollimation
shocks, already observed by Agudo et al. (2001) and Mimica et al. (2009),
which will perturb standing knots along the jet axis and make them oscillate.
This is associated with remnant emission from the shocked knots during their
adiabatic cooling phase. This process is observed in all cases, but most
clearly in the cases with pronounced interaction between the moving shock and
the standing shock (cases H and T). Indeed, after the strong flare, we see in
the toroidal case a delay between the emission of the ejecta and the emission
of the jet (Fig. 9). This is the emission signature of the shocked knot, which
causes a slight asymmetry. This additional radiation counterpart will tend to
soften the slope after the ejecta has passed. This behavior is in accordance
with the observed flare structure described by Hovatta et al. (2008); Nieppola
et al. (2009) and obtained numerically from over-pressured jets by Gómez et
al. (1997); Fromm et al. (2016).
### 8.4 Light curve with small observation angle
As the angle decreases to $15\degree$ or $2\degree$ (Fig. 10), the effect of
the absorption of the moving shock along the line of sight becomes more
important at low frequencies. Thus, the observed flare intensity, duration and
asymmetry decrease as the occultation by the moving shock becomes more
important. Moreover, after the flare associated with the interaction of the
moving shock with the most intense steady knot, the flux intensity decreases.
This is also observed by Fromm et al. (2016) but, in our case, we have not
taken into account the effect of the "light travel delay" of photons emitted
in different parts of the jet, which would lead to smoother light curves with
a longer decay. This behavior is well distinguishable with viewing angle
$\theta_{\rm obs}=2\degree$.
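The shortening of observed flare durations at small angles can be sketched with the standard compression of observed timescales for emission co-moving with the shock, $\Delta t_{\rm obs}=(1-\beta\cos\theta_{\rm obs})\,\Delta t_{\rm emit}$. The bulk Lorentz factor $\Gamma=10$ used below is an illustrative assumption, not a value stated in this section:

```python
import math

def time_compression(gamma: float, theta_obs_deg: float) -> float:
    """Observed-to-emitted time ratio (1 - beta*cos(theta)) for a feature
    moving with bulk Lorentz factor gamma relative to the observer."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 - beta * math.cos(math.radians(theta_obs_deg))

# Smaller viewing angles compress observed timescales more strongly:
for theta in (90.0, 15.0, 2.0):
    print(theta, time_compression(10.0, theta))
```

At $\theta_{\rm obs}=2\degree$ the observed intervals are compressed by almost two orders of magnitude relative to $90\degree$, consistent with the shorter flares seen in Fig. 10.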
### 8.5 Comparison with observation of 3C 273
With our shock propagation model, we arrive at a consistent, for now
qualitative, interpretation of the flaring behavior observed in 3C 273.
Detailed modeling of the observed light curve is out of the scope of this work
and will be addressed in a future publication. The number of recollimation
shocks present in our simulated jets is much higher than the number of
standing knots observed in the case of 3C 273, mostly due to the purely
cylindrical shape of our jets. In real jets that are imperfectly aligned, the
superposition of knots along the line of sight and radio opacity might also
lead to an obscuration of standing knots for the observer. VLBI observations
of the most nearby radio-galaxy (M 87, (e.g., Asada et al. 2014)) and of other
AGN (Hervet et al. 2016) show the presence of multiple standing features that
may be interpreted as series of recollimation shocks. Before the moving shock
waves interact with the first standing shock, extrapolation of the emitted
flux seems to indicate that they are undergoing a strong acceleration. This
phenomenon appears clearly in our results, with the moving shock passing
through the first rarefaction zone before interacting with the first knot. The
acceleration seems especially pronounced in the magnetized cases as shown in
Fig. 6. After the passage of the moving shock waves, the disruption of the
standing knots can be explained by the dynamics and/or by the radiative
transfer within the jet. Concerning the dynamical aspect, the trailing
recollimation shocks can perturb momentarily the standing-shock structure, as
we saw in our simulations. On the other hand, the apparent "disappearance" of
standing knots, if confirmed, may suggest that they are simply obscured by the
brighter moving knots (k31 or k37), as we see in our synthetic light curve.
In our model, the light curve (Fig. 12) obtained at frequency 15.3 GHz
(OVRO/MOJAVE frequency) and viewing angle $2\degree$ (e.g., Hervet et al.
2016) shows the interaction of the moving shock with the first two stationary
shocks. The light curve integrates the flux emitted by the whole jet like a
single dish telescope. We can thus compare our results with the OVRO data, in
particular with the k37 event, which is isolated from other events.
We should note that the observations from Jorstad et al. (2017) found a
different viewing angle of $6.4\pm 2.4\degree$. To investigate the effects of
the viewing angle, which is not precisely known, we also compute the light
curves for $6\degree$ (Fig. 13).
In the simulations, as in the observations, we find a flare during each
interaction between the moving shock wave and a standing shock. However, the
typical flare duration is very different ($\sim 800$ days for k37 and $\sim
150$ days for our flares on average). This could be due to differences in the
morphology of the jet, the size of the ejecta and stationary knots, and the
uncertain value of the observation angle of 3C 273.
The shape of the observed flares seems to show a small asymmetry. To quantify
this effect, we used the method proposed by Roy et al. (2018); Nalewajko
(2013) to compare the doubling (or halving) time in the rise (or fall) phase
of the flare. Applying the method to the k37 flare, we found $\xi_{\rm
k37}=0.12\pm 0.03$, where the fall time is longer than the rise time.
Applying the same method to the simulated flares, we found respectively
$\xi_{\rm H}=0.14\pm 0.01$, $\xi_{\rm T}=0.13\pm 0.02$, $\xi_{\rm P}=0.12\pm
0.03$ and $\xi_{\rm HL}=-0.29\pm 0.06$ for the hydrodynamic, toroidal,
poloidal, and helical cases at $\theta_{\rm obs}=2\degree$ (Fig. 12).
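A common form of the asymmetry parameter, in our reading of the cited method (Nalewajko 2013; Roy et al. 2018; the exact convention, e.g. flux-doubling/halving times versus fitted rise and decay timescales, should be checked against those papers), is $\xi=(t_{\rm fall}-t_{\rm rise})/(t_{\rm fall}+t_{\rm rise})$, which is positive when the decay is slower than the rise:

```python
def flare_asymmetry(t_rise: float, t_fall: float) -> float:
    """Asymmetry parameter xi = (t_fall - t_rise) / (t_fall + t_rise).

    xi > 0 : decay slower than rise (fall time exceeds rise time)
    xi = 0 : symmetric flare
    xi < 0 : rise slower than decay
    """
    return (t_fall - t_rise) / (t_fall + t_rise)

# Hypothetical timescales (in days), chosen for illustration only:
print(flare_asymmetry(100.0, 127.0))  # ~0.12, sign and magnitude of xi_k37
print(flare_asymmetry(90.0, 50.0))    # ~-0.29, sign and magnitude of xi_HL
```

The timescales in the example are hypothetical; only the quoted $\xi$ values come from the measurements above.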
At $\theta_{\rm obs}=6\degree$, the shape of the light curve changes
significantly due to beaming effects, such that the partial superposition of
the first three flares does not allow us to clearly determine the presence or
absence of asymmetries. The peak of the second flare is only well visible in
the P and HL cases (Fig. 13). In the HL case, no asymmetry is found, while in
the P case it has switched signs compared to the simulation at $\theta_{\rm
obs}=2\degree$ (we found for this case $\xi_{\rm P}=-0.23\pm 0.03$). We should
also note that the current version of our model does not take into account
time delay effects, which may play an important role at small angles. A more
detailed study is beyond the scope of this work and will be applied in a
future study to dedicated simulations for a given data set.
The amplitude of the flares is on average much larger in 3C 273 where the
variability in radio can reach around twice the baseline value, compared to an
increase of $\sim 15\%$ in the same-frequency, small-angle simulations (Fig.
12). This difference depends on the same characteristics as the flare
duration.
Regarding the general picture, the main difference from 3C 273 is the presence
of only two visible recollimation shocks, of which only the first is linked to
strong flares, compared to a greater number of standing shocks and resulting
flares in our simulations. The number of knots observed is however strongly
linked to the angular resolution and sensitivity of the VLBA. Another
stationary knot very close to the core at $\sim 0.10-0.16$ mas, noted A1, ST1,
or S1, has been detected when observing the jet at 43.2 GHz (Jorstad et al.
2005; Savolainen et al. 2006; Lisakov et al. 2017; Kim et al. 2020).
As discussed in Section 8.2 for the simulations, this effect may be explained
in 3C 273 by a strong interaction between shock waves in the inner jet and
outer jet occurring at the position of k35, at 0.36 mas to the core. This zone
can lead to a major outburst when interacting with a moving shock. It also has
the specificity of disrupting the downstream shock structure, which would
explain the weak presence of the downstream stationary shock k32, as well as
the absence of a significant flare event associated with it, and the apparent
adiabatic cooling of ejecta moving outside of k35.
To directly model observed flares with our radiative SRMHD code, a wider jet
opening, reflecting changes in the density profile of the ambient medium, will
be required. This should lead to a shock structure
dominated by a few standing shocks close to the core. An implementation of the
"light travel delay" effect (cf. Chiaberge & Ghisellini 1999) and of
radiative cooling will also be needed for a more realistic description of the
radiative emission.
## 9 Conclusions
We have investigated the effect of the large-scale magnetic field on the
standing shocks and their interaction with ejecta within a two-component
relativistic jet. The associated light curves, which were computed at two
radio frequencies ($\nu=\left(1,\leavevmode\nobreak\
15.3\right)\leavevmode\nobreak\ {\rm GHz}$) and for several observation angles
($\theta_{\rm obs}=\left(2\degree,\leavevmode\nobreak\
15\degree,\leavevmode\nobreak\ 90\degree\right)$), show a variety of flares
with varying durations and amplitudes.
Two-component magnetized jets are characterized by a complex standing-shock
structure due to the interaction of characteristic waves propagating in the
two jet components. In this way, jet stratification leads to the appearance of
radio knots with a range of intensities along the jet. This is especially
apparent in the toroidal case, where we recover a strong standing shock,
giving rise to a pronounced flare in the interaction with the moving shock
wave. Temporal asymmetry associated with the relaxation phase of the shocked
standing knot is well visible for the strongest flares. The introduction of
large-scale magnetic fields is seen to cause an intrinsic opening of the jet,
with an opening angle up to three times larger than in the hydrodynamic case
for our jet configuration.
Our scenario of moving ejecta interacting with standing shocks inside a two-
component jet provides a good description of the kinematics and light curves
seen in the jet of the FSRQ type blazar 3C 273 with VLBI and single-dish radio
observations. In a preliminary study, at an observation angle of $2\degree$, an
asymmetry in the simulated flare profiles, with a fall time longer than the
rise time, was seen for the hydrodynamic, toroidal, and poloidal cases,
consistent with what is observed in the OVRO data for this source.
###### Acknowledgements.
The authors thank the anonymous reviewer for their peer review. The computations
of the SRMHD results were carried out on the OCCIGEN cluster at
CINES111https://www.cines.fr/ in Montpellier (project named lut6216,
allocation A0050406842) and on the MesoPSL cluster at PSL
University222http://www.mesopsl.fr/ at the Observatory of Paris. This research
has made use of data from the MOJAVE database that is maintained by the MOJAVE
team (Lister et al. 2018), and from the OVRO 40m monitoring program which was
supported in part by NASA grants NNX08AW31G, NNX11A043G and NNX14AQ89G, and
NSF grants AST-0808050 and AST-1109911, and private funding from Caltech and
the MPIfR. The authors wish to thank M. Lister for providing preliminary
MOJAVE data points for this study. We also wish to thank M. Lemoine for
insightful discussions during the study.
## References
* Agudo et al. (2001) Agudo, I., Gómez, J.-L., Martí, J.-M., et al. 2001, The Astrophysical Journal, 549, L183
* Asada et al. (2014) Asada, K., Nakamura, M., Doi, A., Nagai, H., & Inoue, M. 2014, ApJ, 781, L2
* Avachat et al. (2016) Avachat, S. S., Perlman, E. S., Adams, S. C., et al. 2016, The Astrophysical Journal, 832, 3
* Biretta et al. (2002) Biretta, J. A., Junor, W., & Livio, M. 2002, New A Rev., 46, 239
* Blandford et al. (2017) Blandford, R., Yuan, Y., Hoshino, M., & Sironi, L. 2017, Space Sci. Rev., 207, 291
* Blandford & Koenigl (1979) Blandford, R. D. & Koenigl, A. 1979, Astrophys. Lett., 20, 15
* Blandford & Königl (1979) Blandford, R. D. & Königl, A. 1979, The Astrophysical Journal, 232, 34
* Blandford & Payne (1982) Blandford, R. D. & Payne, D. G. 1982, MNRAS, 199, 883
* Blandford & Znajek (1977) Blandford, R. D. & Znajek, R. L. 1977, MNRAS, 179, 433
* Britzen et al. (2010) Britzen, S., Kudryavtseva, N. A., Witzel, A., et al. 2010, A&A, 511, A57
* Cada & Torrilhon (2009) Cada, M. & Torrilhon, M. 2009, J. Comput. Physics, 228, 4118
* Chiaberge & Ghisellini (1999) Chiaberge, M. & Ghisellini, G. 1999, MNRAS, 306, 551
* Daly & Marscher (1988) Daly, R. A. & Marscher, A. P. 1988, ApJ, 334, 539
* Dedner et al. (2002) Dedner, A., Kemm, F., Kröner, D., et al. 2002, Journal of Computational Physics, 175, 645
* Fendt et al. (2012) Fendt, C., Porth, O., & Vaidya, B. 2012, Journal of Physics: Conference Series, 372, 012011
* Fromm et al. (2016) Fromm, C. M., Perucho, M., Mimica, P., & Ros, E. 2016, A&A, 588, A101
* Fromm et al. (2018) Fromm, C. M., Perucho, M., Porth, O., et al. 2018, A&A, 609, A80
* Fuentes et al. (2018) Fuentes, A., Gómez, J. L., Martí, J. M., & Perucho, M. 2018, The Astrophysical Journal, 860, 121
* Gabuzda et al. (2014) Gabuzda, D. C., Reichstein, A. R., & O’Neill, E. L. 2014, MNRAS, 444, 172
* Ghisellini et al. (2005) Ghisellini, G., Tavecchio, F., & Chiaberge, M. 2005, A&A, 432, 401
* Ghisellini et al. (2014) Ghisellini, G., Tavecchio, F., Maraschi, L., Celotti, A., & Sbarrato, T. 2014, Nature, 515, 376
* Giroletti et al. (2004) Giroletti, M., Giovannini, G., Feretti, L., et al. 2004, The Astrophysical Journal, 600, 127
* Gómez et al. (1997) Gómez, J. L., Martí, J. M., Marscher, A. P., Ibáñez, J. M., & Alberdi, A. 1997, The Astrophysical Journall, 482, L33
* Gómez et al. (1995) Gómez, J. L., Martí, J. M., Marscher, A. P., Ibáñez, J. M., & Marcaide, J. M. 1995, The Astrophysical Journal, 449
* Hervet et al. (2016) Hervet, O., Boisson, C., & Sol, H. 2016, A&A, 592, A22
* Hervet et al. (2017) Hervet, O., Meliani, Z., Zech, A., et al. 2017, A&A, 606, A103
* Hovatta et al. (2008) Hovatta, T., Nieppola, E., Tornikoski, M., et al. 2008, A&A, 485, 51
* Jorstad et al. (2005) Jorstad, S. G., Marscher, A. P., Lister, M. L., et al. 2005, AJ, 130, 1418
* Jorstad et al. (2017) Jorstad, S. G., Marscher, A. P., Morozova, D. A., et al. 2017, ApJ, 846, 98
* Jorstad et al. (2013) Jorstad, S. G., Marscher, A. P., Smith, P. S., et al. 2013, ApJ, 773, 147
* Katarzyński et al. (2001) Katarzyński, K., Sol, H., & Kus, A. 2001, A&A, 367, 809
* Keppens et al. (2012) Keppens, R., Meliani, Z., van Marle, A., et al. 2012, Journal of Computational Physics, 231, 718 , special Issue: Computational Plasma Physics
* Kim et al. (2020) Kim, D.-W., Trippe, S., & Kravchenko, E. V. 2020, A&A, 636, A62
* Kim et al. (2018) Kim, J. Y., Krichbaum, T. P., Lu, R. S., et al. 2018, A&A, 616, A188
* Lemoine & Pelletier (2010) Lemoine, M. & Pelletier, G. 2010, MNRAS, 402, 321
* Lemoine et al. (2019a) Lemoine, M., Pelletier, G., Vanthieghem, A., & Gremillet, L. 2019a, Phys. Rev. E, 100, 033210
* Lemoine et al. (2019b) Lemoine, M., Vanthieghem, A., Pelletier, G., & Gremillet, L. 2019b, Phys. Rev. E, 100, 033209
* Lisakov et al. (2017) Lisakov, M. M., Kovalev, Y. Y., Savolainen, T., Hovatta, T., & Kutkin, A. M. 2017, MNRAS, 468, 4478
* Lister et al. (2018) Lister, M. L., Aller, M. F., Aller, H. D., et al. 2018, ApJS, 234, 12
* Lister et al. (2019) Lister, M. L., Homan, D. C., Hovatta, T., et al. 2019, ApJ, 874, 43
* Marscher & Gear (1985) Marscher, A. P. & Gear, W. K. 1985, The Astrophysical Journal, 298, 114
* Marscher et al. (2008) Marscher, A. P., Jorstad, S. G., D’Arcangelo, F. D., et al. 2008, Nature, 452, 966
* Marshall et al. (2002) Marshall, H. L., Miller, B. P., Davis, D. S., et al. 2002, ApJ, 564, 683
* Martí & Müller (2015) Martí, J. M. & Müller, E. 2015, Living Reviews in Computational Astrophysics, 1, 3
* Martí (2015) Martí, J.-M. 2015, MNRAS, 452, 3106
* Martí et al. (2018) Martí, J. M., Perucho, M., Gómez, J. L., & Fuentes, A. 2018, International Journal of Modern Physics D, 27, 1844011
* Mathews (1971) Mathews, W. G. 1971, ApJ, 165, 147
* Meliani & Keppens (2007) Meliani, Z. & Keppens, R. 2007, A&A, 475, 785
* Meliani & Keppens (2009) Meliani, Z. & Keppens, R. 2009, ApJ, 705, 1594
* Meliani et al. (2008) Meliani, Z., Keppens, R., & Giacomazzo, B. 2008, A&A, 491, 321
* Meliani et al. (2004) Meliani, Z., Sauty, C., Tsinganos, K., & Vlahakis, N. 2004, A&A, 425, 773
* Mignone & Bodo (2006) Mignone, A. & Bodo, G. 2006, MNRAS, 368, 1040
* Mimica et al. (2009) Mimica, P., Aloy, M. A., Agudo, I., et al. 2009, The Astrophysical Journal, 696, 1142
* Mizuno et al. (2015) Mizuno, Y., Gómez, J. L., Nishikawa, K.-I., et al. 2015, The Astrophysical Journal, 809, 38
* Nagai et al. (2014) Nagai, H., Haga, T., Giovannini, G., et al. 2014, ApJ, 785, 53
* Nalewajko (2013) Nalewajko, K. 2013, Monthly Notices of the Royal Astronomical Society, 430, 1324
## Appendix A Additional Figures
Figure 14: Left: Initial representation of the toroidal magnetic field strength in space following Eq. 13 and assuming $\left(B_{\rm\varphi,in,0},\,B_{\rm\varphi,out,0}\right)\equiv 1$. Right: Initial representation of the magnetic pressure following Eq. 14.
# Uncertainty-Aware Body Composition Analysis with Deep Regression Ensembles
on UK Biobank MRI
Taro Langner, Fredrik K. Gustafsson, Benny Avelin, Robin Strand, Håkan Ahlström, Joel Kullberg
Uppsala University, Department of Surgical Sciences, Uppsala, Sweden; Uppsala University, Department of Information Technology, Uppsala, Sweden; Uppsala University, Department of Mathematics, Uppsala, Sweden; Antaros Medical AB, BioVenture Hub, Mölndal, Sweden
###### Abstract
Along with rich health-related metadata, medical images have been acquired for
over 40,000 male and female UK Biobank participants, aged 44-82, since 2014.
Phenotypes derived from these images, such as measurements of body composition
from MRI, can reveal new links between genetics, cardiovascular disease, and
metabolic conditions. In this work, six measurements of body composition and
adipose tissues were automatically estimated by image-based, deep regression
with ResNet50 neural networks from neck-to-knee body MRI. Despite the
potential for high speed and accuracy, these networks produce no output
segmentations that could indicate the reliability of individual measurements.
The presented experiments therefore examine uncertainty quantification with
mean-variance regression and ensembling to estimate individual measurement
errors and thereby identify potential outliers, anomalies, and other failure
cases automatically. In 10-fold cross-validation on data of about 8,500
subjects, mean-variance regression and ensembling showed complementary
benefits, reducing the mean absolute error across all predictions by 12%. Both
improved the calibration of uncertainties and their ability to identify high
prediction errors. With intra-class correlation coefficients (ICC) above 0.97,
all targets except the liver fat content yielded relative measurement errors
below 5%. Testing on another 1,000 subjects showed consistent performance, and
the method was finally deployed for inference to 30,000 subjects with missing
reference values. The results indicate that deep regression ensembles could
ultimately provide automated, uncertainty-aware measurements of body
composition for more than 120,000 UK Biobank neck-to-knee body MRI that are to
be acquired within the coming years.
###### keywords:
MRI, UK Biobank, Neural Networks, Biomarkers
Journal: Computerized Medical Imaging and Graphics
## 1 Introduction
UK Biobank studies more than half a million volunteers by collecting data on
blood biochemistry, genetics, questionnaires on lifestyle, and medical records
[41].
For 100,000 participants, the ongoing examinations also include medical
imaging, such as dedicated MRI of the brain, heart, liver, pancreas, and the
entire body from neck to knee [31]. Ongoing repeat imaging for 70,000 subjects
will furthermore enable longitudinal studies over two or more years. Image-
derived phenotypes, such as measurements of body composition and organ
volumes, hold the potential for non-invasive studies of aging, cardiovascular,
and metabolic conditions at large scale within this cohort.
The relationship between obesity, type-2 diabetes, and nonalcoholic fatty
liver disease is of particular interest due to their high prevalence and
associated adverse health effects [45, 30]. Depending on genetic and
environmental factors, body fat can accumulate in organs, abdominal depots,
and muscle infiltrations, all of which have specific effects on health
outcomes. Ongoing work is therefore concerned with acquiring measurements of
liver fat content [45], muscle volumes, and adipose tissue depots [43, 30]
with manual and semi-automated techniques [5]. Recent works also proposed
fully-automated techniques with neural networks for segmentation, which have
been applied to the heart [3], kidney [25], pancreas [4, 2], and liver [18],
but also the iliopsoas muscles [9], spleen, adipose tissues, and more [32].
Similar to the latter, neural networks have also been proposed for
segmentation of adipose tissues in other studies involving computed tomography
(CT) [42, 44] and MRI [24, 8, 22].
Apart from semantic segmentation, neural networks can also be trained for
image-based regression, predicting numerical measurement values without any
need for explicit delineations. In medical imaging, deep regression has gained
attention for analyses of human age in MRI of the brain [7], volume
measurements of the heart [46], and blood pressure, sex, and age in retinal
fundus photographs [36]. On UK Biobank neck-to-knee body MRI, deep regression
can quantify human age and liver fat, but also various measurements of body
composition. For the latter, its accuracy can exceed the agreement between
established gold standard techniques [26].
This type of deep regression requires no ground truth segmentations and can
measure abstract properties by training on numerical reference values from
arbitrary sources. However, the lack of output segmentations poses a
limitation, as the predicted numerical values give no indication of confidence
or reliability. Previous work examined the underlying relevant image features
with saliency analysis, but only provided interpretations on cohort level
without attempting to estimate individual measurement errors.
Recent advances in the field of uncertainty quantification have the potential
to address some of these concerns by providing an error estimate for each
individual measurement [12]. High uncertainty could accordingly alert
researchers or clinical operators to anomalies, outliers, or other failure
cases of these systems [19]. Among various proposed methods, such as Bayesian
inference with Markov chain Monte-Carlo techniques [33] and more
computationally viable approximations that apply dropout at test time [11],
recent work reported superior behavior for deep ensembling strategies [15, 35,
1]. These approaches provide predictive uncertainty by training multiple
neural networks to each predict not only a point estimate but a probability
distribution, with multiple network instances forming an ensemble [23]. In
related work, a similar approach was recently applied for age estimation from
fetal brain MRI, reporting high accuracy and promising indications for
abnormality detection [39].
The aim of this work is to develop an automated strategy for body composition
analysis on UK Biobank neck-to-knee body MRI which provides not only
measurements [26] but also introduces individual uncertainty estimates that
can represent confidence intervals. As a key advantage, the deep regression
approach can be trained without access to reference segmentation masks and
instead learns to emulate the existing, numerical metadata. Six body
composition measurements relating to adipose tissues with high relevance for
cardiometabolic disease were predicted from two-dimensional representations of
the MRI data. ResNet50 neural network instances [16] for image-based
regression were trained to each predict the mean and variance of a Gaussian
probability distribution over a given measurement value. Combined into
ensembles they provided estimates of predictive uncertainty [23]. The main
contribution consists in extensive analysis of the independent effects of
mean-variance regression and ensembling on overall accuracy and speed, but
also on the calibration [13] of uncertainties and their ability to identify
the worst predictions in sparsification [17], both in cross-validation on
about 8,500 subjects and testing on another 1,000 subjects. The proposed
method was deployed for inference to obtain previously unavailable
measurements from more than 30,000 images, including 1,000 repeat scans.
## 2 Materials and methods
The neck-to-knee body MRI of each subject was formatted into a two-dimensional
image from which the proposed method estimates a numerical measurement value
in image-based regression. This work examines least squares regression, which
produces only the measurement value itself [26, 27], but also mean-variance regression [34], in which both the mean value and the variance of a Gaussian probability distribution over one measurement of one subject are modeled. In
ensembling, the predictions of several networks are furthermore aggregated
[23]. The thus obtained uncertainty estimates can help to identify outliers
and potential failure cases automatically [14].
Fig. 1: As input to the neural network, each MRI volume was represented as a color image of $(256\times 256\times 3)$ pixels by forming channels from the
projected water (red) and fat (green) signal and fat fraction slices (blue)
from two axes each.
### 2.1 UK Biobank image data
UK Biobank has recruited more than half a million men and women by letter from
the National Health Service in the United Kingdom, starting in 2006 [41].
Examinations involve several visits to UK Biobank assessment centers, with
imaging procedures launching in 2014 for a subgroup of 100,000 participants
[31]. At the time of writing, medical imaging data from three different
centers has been released for 40,264 men and women (52% female) aged 44-82
(mean 64) years with BMI 14-62 (mean 27) $kg/m^{2}$ and a majority of 94% with
self-reported White-British ethnicity.
For 1,209 of these, data from a repeat imaging visit with an offset of about
two years has been released. All participants provided informed consent and
both the UK Biobank examinations and the experiments in this work were
approved by the responsible British and Swedish ethics committees.
#### 2.1.1 MRI data
The MRI protocol examined in this work is listed as UK Biobank field 20201 and
covers the body from neck to knee in six separate imaging stations acquired in
a scan time below ten minutes [43, 31]. Volumetric, co-aligned images of water
and fat signal were acquired with a two-point Dixon technique with TR = 6.69,
TE = 2.39/4.77 ms and flip angle 10° on a Siemens Aera Magnetom 1.5 T device.
The image resolution varies between stations, with a typical grid of
$(224\times 174\times 44)$ voxels of $(2.232\times 2.232\times 4.5)$ mm (for more detail, see "Body MRI protocol parameters" in [31]).
#### 2.1.2 Image formatting
For this work, the six MRI stations of each subject were first fused into a
common voxel grid by trilinear interpolation to form a single volume of
$(224\times 174\times 370)$ voxels for each signal type. These volumes were
then converted to two-dimensional representations by summing all values along
two axes of view, yielding a coronal and sagittal mean intensity projection,
which were concatenated side by side. This was done separately for both the
water and fat signal, with the resulting images individually normalized and
downsampled to form two color channels of a single image of $(256\times
256\times 2)$ pixels [26]. As a third image channel, both a single coronal and
sagittal fat fraction slice were extracted based on a body mask [27]. These
fractions resulted from voxel-wise division of the fat signal by the sum of
water and fat signal. Fig. 1 shows the result, a dual mean intensity
projection with fat fraction slices, encoded in 8-bit for faster processing.
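The formatting steps above can be sketched in NumPy. The axis convention, the random stand-in volumes, and the omitted resizing to $(256\times 256)$ pixels are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np

def dual_projection(volume):
    # Mean intensity projections along two viewing axes, concatenated
    # side by side; the axis choice here is an assumption for the sketch.
    coronal = volume.mean(axis=1)    # shape (224, 370)
    sagittal = volume.mean(axis=0)   # shape (174, 370)
    return np.concatenate([coronal, sagittal], axis=0)

def normalize_to_uint8(image):
    # Individually normalize each projection image and encode it in 8 bit.
    image = image - image.min()
    image = image / max(image.max(), 1e-8)
    return (255 * image).astype(np.uint8)

# Fused water and fat volumes of (224 x 174 x 370) voxels (random stand-ins).
water = np.random.rand(224, 174, 370)
fat = np.random.rand(224, 174, 370)

red = normalize_to_uint8(dual_projection(water))
green = normalize_to_uint8(dual_projection(fat))
# The blue channel would hold the coronal and sagittal fat fraction slices;
# a placeholder of matching shape stands in for it here.
blue = np.zeros_like(red)

image = np.stack([red, green, blue], axis=-1)  # later resized to 256 x 256 x 3
print(image.shape, image.dtype)
```

The downsampling step is left out for brevity; any standard image resize would complete the format of Fig. 1.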
### 2.2 Ground truth
UK Biobank provides several body composition measurements from the same neck-
to-knee body MRI data as used in this work, based on volumetric multi-atlas
segmentations [43, 6]: Visceral Adipose Tissue (VAT), abdominal Subcutaneous
Adipose Tissue (SAT), Total Adipose Tissue (TAT), Total Lean Tissue (TLT), and
Total Thigh Muscle (TTM). Together with Liver Fat Fraction (LFF) values based
on dedicated multi-echo liver MRI [30], these reference measurements form the
ground truth data, or regression targets, for this work.
### 2.3 Data partitions
Among the 40,264 released images of the initial imaging visit, visual
inspection identified 1,376 subjects with artifacts such as water-fat signal
swaps, non-standard positioning and metal objects [26]. Three datasets were
formed from the initial imaging visit, using those subjects for whom any of the
six reference measurements were available.
Dataset $D_{cv}$ consists of 8,539 subjects without artifacts and was
subdivided into a 10-fold cross-validation split which was retained for all
experiments.
Dataset $D_{test}$ contains another 1,107 subjects without artifacts and
served as a test set, but notably lacks any values for two of the six
regression targets for which no reference values have been released yet.
Dataset $D_{art}$ was formed from those subjects with identified artifacts,
yielding 330 subjects, to examine behavior on abnormal data.
Two additional datasets were formed from those subjects with no available
reference measurements. Dataset $D_{infer}$ comprises all remaining 29,234
subjects without artifacts from the initial imaging visit, to whom the prediction model was applied for inference. Finally, dataset $D_{revisit}$
was formed for inference on the repeat imaging visit from 1,179 subjects with
no image artifacts.
### 2.4 Model
A ResNet50 architecture [16] was configured to receive the two-dimensional
image format as seen in Fig. 1 as input for a given subject and predict all
six regression targets at once. No explicit segmentation was performed at any
stage of this work. Each network was pre-trained on ImageNet and optimized
with Adam [20] at batch size 32 with online augmentation by random
translations. After 5,000 iterations, the base learning rate of 0.0001 was
reduced by factor 10 and training continued for another 1,000 iterations [26].
All experiments were conducted in PyTorch, using an Nvidia RTX 2080 Ti
graphics card with 11GB RAM.
Four distinct configurations were compared. As the first one, a least squares
regression network predicted only these six output values, each corresponding
to one measurement for a given subject, trained by optimizing the mean squared
error criterion of equation 1. In this formula, $\mu_{\theta}(\mathbf{x}_{n})$
represents the network prediction for the $n$-th input sample
$\mathbf{x}_{n}$, with $y_{n}$ as the corresponding ground truth value.
$MSE=\frac{1}{N}\sum_{n=1}^{N}(y_{n}-\mu_{\theta}(\mathbf{x}_{n}))^{2}$ (1)
As a second configuration, least squares ensembles were formed by combining
ten such networks. Their predictions were averaged and the spread, or
empirical variance, of their predictions used as uncertainty estimate [17].
As the third configuration, mean-variance regression was performed by
predicting two values, corresponding to the mean and variance of a Gaussian
probability distribution over one measurement value for a given subject,
optimized with a negative log-likelihood criterion [34] as shown in equation
2. Here, $p_{\theta}(y_{n}|\mathbf{x}_{n})$ is the probabilistic predictive
distribution over one measurement value, modeled by the network outputs
$\mu_{\theta}(\mathbf{x}_{n})$ and $\sigma^{2}_{\theta}(\mathbf{x}_{n})$,
which represent the predicted mean and corresponding predicted variance for
input sample $\mathbf{x}_{n}$, respectively. The last term, $c$, is a constant
that does not depend on $\theta$. This criterion expands the mean squared
error of eq. 1 by a sample-specific, heteroscedastic variance and can likewise
be averaged across multiple samples. This predicted variance directly serves
as an estimate of uncertainty, with high values describing a wide normal
distribution within which plausible values for the estimated measurement are
assumed.
$-\log p_{\theta}(y_{n}|\mathbf{x}_{n})=\frac{\log\sigma^{2}_{\theta}(\mathbf{x}_{n})}{2}+\frac{(y_{n}-\mu_{\theta}(\mathbf{x}_{n}))^{2}}{2\sigma^{2}_{\theta}(\mathbf{x}_{n})}+c$ (2)
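A minimal NumPy version of this criterion follows. Parameterizing the network output as a log-variance is an implementation choice assumed here for numerical stability; PyTorch also ships this criterion as `torch.nn.GaussianNLLLoss`:

```python
import numpy as np

def gaussian_nll(mean, log_var, y):
    # Eq. 2 up to the constant c; predicting the log-variance keeps the
    # variance strictly positive without an explicit constraint.
    return 0.5 * (log_var + (y - mean) ** 2 / np.exp(log_var))

# A correct prediction with small variance incurs low loss; the same
# prediction with needlessly large variance is penalized by the log term.
print(gaussian_nll(mean=2.0, log_var=-2.0, y=2.0))  # -1.0
print(gaussian_nll(mean=2.0, log_var=3.0, y=2.0))   # 1.5
```

The quadratic term conversely penalizes confident but wrong predictions, which is the loss attenuation effect discussed later in the paper.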
As the fourth and final configuration, mean-variance ensembles employ ten such
network instances. Their predictions can likewise be aggregated to obtain
estimates of predictive uncertainty [23].
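The aggregation details are deferred to [23]; a common choice, assumed here for illustration, treats the $M$ instances as a uniform Gaussian mixture, so the ensemble variance combines the mean predicted (aleatoric) variance with the spread (epistemic) of the predicted means:

```python
import numpy as np

def aggregate(means, variances):
    # Moments of a uniform mixture of M Gaussians (rows = instances):
    # var = E[sigma_m^2 + mu_m^2] - mu^2, by the law of total variance.
    mu = means.mean(axis=0)
    var = (variances + means ** 2).mean(axis=0) - mu ** 2
    return mu, var

means = np.array([[2.0], [2.2], [1.8]])      # three instances, one target
variances = np.array([[0.1], [0.2], [0.1]])
mu, var = aggregate(means, variances)
print(mu, var)  # mean 2.0; total variance ~ 0.16
```

For least squares ensembles, which predict no variances, only the spread of the means remains as the empirical-variance uncertainty estimate mentioned above.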
In all ensembles, model diversity was increased by withholding one of ten
evenly sized subsets of the training data from each instance, as if they had
been obtained from a preceding cross-validation experiment. The target values
were standardized [26]. When one or more of the six ground truth values for a
given training sample were missing, their contribution to the loss term was
dynamically set to zero, so that they would not affect the training process.
In this way, it was possible to utilize samples with missing values and
provide as much training data as possible. A PyTorch implementation for training and inference will be made publicly available at github.com/tarolangner/ukb_mimir.
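The dynamic masking of missing targets can be sketched as follows, as a forward-pass illustration with hypothetical values. In an autodiff framework, the NaN targets should additionally be replaced by a finite placeholder before the subtraction, so that NaNs cannot leak into the backward pass:

```python
import numpy as np

def masked_mse(pred, targets):
    # Missing reference values are encoded as NaN; their contribution to
    # the loss is set to zero so the observed targets still train.
    mask = ~np.isnan(targets)
    diff = np.where(mask, pred - targets, 0.0)
    return (diff ** 2).sum() / max(mask.sum(), 1)

pred = np.array([1.0, 2.0, 3.0])        # six targets in the paper;
targets = np.array([1.5, np.nan, 2.0])  # three here for brevity
print(masked_mse(pred, targets))  # (0.25 + 1.0) / 2 = 0.625
```

The same masking applies unchanged to the mean-variance criterion of Eq. 2.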
### 2.5 Evaluation
All configurations were evaluated in 10-fold cross-validation on dataset
$D_{cv}$ and also validated against artifact dataset $D_{art}$. The best
configuration was eventually applied to test dataset $D_{test}$ and deployed
for inference on datasets $D_{infer}$ and $D_{revisit}$.
The predicted measurements were compared to the reference values with the
intraclass correlation coefficient (ICC) with a two-way random, single
measures, absolute agreement definition [21] and the coefficient of
determination $R^2$. The mean absolute error (MAE) is also reported, together
with the mean absolute percentage error (MAPE) as a relative error
measurement. The latter is the absolute difference between prediction and
reference divided by the reference. Additionally, aggregated saliency maps
were generated to highlight relevant image areas [38].
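For reference, the two error metrics reduce to the following (with made-up illustrative values, not results from the paper):

```python
import numpy as np

def mae(pred, ref):
    # Mean absolute error, in the unit of the measurement.
    return np.mean(np.abs(pred - ref))

def mape(pred, ref):
    # Mean absolute percentage error: |prediction - reference| / reference,
    # averaged and expressed in percent.
    return 100.0 * np.mean(np.abs(pred - ref) / ref)

pred = np.array([4.2, 5.0])  # hypothetical predictions
ref = np.array([4.0, 5.5])   # hypothetical reference values
print(mae(pred, ref), mape(pred, ref))
```

Note that the MAPE diverges for reference values near zero, which is relevant for targets such as the liver fat fraction.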
The estimated uncertainties were evaluated regarding sparsification [17] and
calibration [13]. Sparsification examines whether the highest uncertainties
coincide with the highest prediction errors. Ranking all measurements by their
uncertainty and excluding one after another should accordingly yield
consistent improvements in performance metrics such as the MAE. Calibration
examines the magnitude of uncertainties and resulting under- or overconfidence
of predictions. The uncertainty obtained for any given sample corresponds to
the variance of a Gaussian probability distribution, modeling characteristic
confidence intervals around the predicted mean. Higher uncertainty scales
these intervals to be wider, enabling them to cover larger errors. Ideally
calibrated uncertainties define confidence intervals that cover, on a set of
samples, a percentage of errors that corresponds exactly to their specific
confidence level.
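A sparsification check of this kind can be sketched as follows, with synthetic errors and uncertainties (the paper's actual evaluation follows [17]):

```python
import numpy as np

def sparsification_curve(abs_errors, uncertainties, steps=10):
    # Rank samples by predicted uncertainty (most certain first) and
    # exclude growing shares of the most uncertain ones; informative
    # uncertainties should shrink the MAE of the retained samples.
    order = np.argsort(uncertainties)
    errs = np.asarray(abs_errors)[order]
    n = len(errs)
    fractions = np.linspace(0.0, 0.9, steps)
    return [(f, errs[: n - int(f * n)].mean()) for f in fractions]

rng = np.random.default_rng(0)
errors = rng.exponential(1.0, size=1000)
uncertainty = errors + rng.normal(0, 0.1, size=1000)  # correlates with error
curve = sparsification_curve(errors, uncertainty)
print(curve[0][1] > curve[-1][1])  # MAE drops as uncertain cases are removed
```

Calibration, in contrast, would compare the empirical coverage of the predicted confidence intervals against their nominal confidence levels on such data.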
## 3 Results
Both mean-variance regression and ensembling provided complementary benefits.
Combining both yielded the best predictive performance, shown in Table 1 and
Fig. 2, with additional detail provided in the supplementary material. On
average, the predictions can account for 98% ($R^2$) of the variability in
reference values, with absolute agreement (ICC) above 0.97 on all targets. The
metrics carry over to the test data largely unchanged. All targets are
predicted with a relative error below 5%, except the liver fat fraction. This
target also incurred the highest relative uncertainties and is examined
further in the supplementary material, together with additional evaluation
metrics, and a comparison to alternative reference methods. It also provides
additional detail on the saliency analysis, which is compiled into Fig. 3.
Fig. 2: Mean-variance ensemble predictions and reference values for Visceral Adipose Tissue (VAT) in cross-validation on $D_{cv}$, testing on $D_{test}$, and on subjects with artifacts of $D_{art}$, depicted with color-coded uncertainty. The listed percentiles refer to those samples with the highest uncertainty.

Table 1: Evaluation results for the mean-variance ensemble on cross-validation dataset $D_{cv}$ and testing on dataset $D_{test}$, with intraclass correlation coefficient (ICC) and MAPE (% error).

| Target name | | Cross-validation ICC | Cross-validation % error | Testing ICC | Testing % error |
|---|---|---|---|---|---|
| Visceral Adipose Tissue | (VAT) | 0.997 | 4.2 | 0.997 | 3.6 |
| Abdominal Subcutaneous Adipose Tissue | (SAT) | 0.996 | 2.8 | 0.996 | 2.7 |
| Total Adipose Tissue | (TAT) | 0.997 | 1.8 | / | / |
| Total Lean Tissue | (TLT) | 0.983 | 2.5 | / | / |
| Total Thigh Muscle | (TTM) | 0.996 | 1.6 | 0.995 | 1.6 |
| Liver Fat Fraction | (LFF) | 0.979 | 25.7 | 0.982 | 21.6 |
Fig. 4 shows that even without utilizing the uncertainties, the mean-variance
regression ensemble reduces the MAE by 12% when compared to the least-squares
regression baseline. The uncertainties enable sparsification, identifying some
of the worst predictions which can be excluded to reduce the prediction error
even further. The scatter plots of Fig. 2 show predictions for one target in
detail, together with color-coded uncertainty. Despite containing image
artifacts, not all subjects of dataset $D_{art}$ yield higher uncertainties
than the normal material. Indeed, many of these subjects result in highly
accurate predictions despite the artifacts, and high uncertainties tend to
occur only in those cases with high prediction errors. On test dataset
$D_{test}$, the uncertainty highlights an outlier case for VAT (see Fig. 2),
SAT, and TTM. This one subject causes consistently flawed predictions and was
found to suffer from an abnormal, atrophied right leg.
On datasets $D_{cv}$ and $D_{test}$ the predicted means exhibit a consistent,
linear correlation with the predicted log uncertainties. Accordingly, large
subjects with high volumes induce systematically higher uncertainty. Although
these cases also generally incur higher prediction errors, this bias can be shown to prevent optimal sparsification. On the normal material with
hardly any outliers, this tendency is so strong that sparsifying simply by
predicted mean is almost as effective as using the uncertainties. On dataset
$D_{art}$, this bias is less pronounced, as those cases with artifacts that
cause genuine prediction failures are correctly assigned much higher
uncertainty.
The best calibration was also achieved by the mean-variance ensemble, which
nonetheless often produced overconfident uncertainties. Post-processing with
target-wise scaling factors can achieve a near perfect fit to the validation
data, however, and also improves the overall calibration on the test set. The
supplementary material explores both sparsification and calibration in more
detail and also lists results for datasets $D_{infer}$ and $D_{revisit}$, on
which the proposed method inferred new measurements for over 30,000 images.
No difference in processing speed was observed between least squares and mean-
variance regression. Image formatting required the bulk of processing time,
but once cached, training one network only requires about 15 minutes, or 2.5
hours for an ensemble of ten instances. Ensemble predictions for about 60
subjects can be generated within one second, so that inference for all 30,000
required less than ten minutes.
Fig. 3: Co-registered, aggregated saliency information for about 3,000
subjects, showing the fat signal channel only (see supplementary material for
more).
Fig. 4: This sparsification plot [17] shows how the overall
performance can be improved by gradually excluding those subjects with the
highest predicted measurement uncertainty. Each position along the x-axis
represents a certain share of excluded, most uncertain measurements, whereas
the y-axis shows the change in mean absolute error relative to baseline,
averaged across all targets on dataset $D_{cv}$. Even without utilizing the
uncertainty to exclude any subjects, the mean-variance ensemble achieves a
reduction of the MAE by 12%. Further improvements in the MAE can be achieved by excluding increasingly large shares of those measurements with the highest uncertainty.
## 4 Discussion
With relative measurement errors below 5%, all targets except the liver fat
fraction can be predicted with higher accuracy than observed for the mutual
agreement between the reference and alternative established methods, both in
cross-validation and on the test data. For liver fat itself, the relative
error of 22-26% is worse than the 15% seen between the reference used here and
an alternative set of UK Biobank liver fat measurements. The two-point Dixon
images inherently limit the prediction accuracy for this target, as the
reference values were obtained from another imaging protocol that reconstructs
fat fractions more faithfully [45, 30]. The saliency analysis of Fig. 3
indicates that the networks nonetheless learned to correctly identify liver
tissue and other target-specific regions. The inference on 30,000 subjects
provides material for further medical study which is, however, beyond the
scope of this work.
The estimated uncertainties identified many of the worst prediction errors.
They correctly highlighted an outlier with abnormal physiology on the test
data and enabled consistent reductions in the mean prediction error by
excluding the least certain measurements. On the inference datasets, the
highest uncertainties were furthermore found in several cases to coincide with
previously undetected anomalies in positioning, but also with minor artifacts
and pathologies that may have negatively affected prediction accuracy and
should arguably have been excluded during the original quality controls. In
practice, the acquired measurements can accordingly be supplied together with
their uncertainty, which could serve both as an error estimate and as a means
to identify potential anomalies and failure cases. The affected cases could
then be manually examined and, if necessary, excluded from further analyses.
However, the results also show two noteworthy limitations of the proposed
approach which arise from imperfect calibration and the observed bias for high
measurement values to incur high uncertainties. The imperfect calibration is
linked to uncertainties that often underestimate the true measurement error.
This is a known effect related to overfitting on the training data [13, 29].
As shown in the supplementary material, it is possible to correct the
calibration by calculating target-wise scaling factors on the validation
results. Once obtained, these simple scaling factors also yield improved
overall calibration on the test data.
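One simple variant of such a scaling factor, assumed here for illustration rather than taken from the paper's supplementary material, rescales all predicted variances of one target by a single factor $s$. Minimizing the Gaussian NLL of Eq. 2 over $s$ on held-out validation data yields a closed form: the mean squared residual normalized by the predicted variance.

```python
import numpy as np

def variance_scaling(residuals, variances):
    # Closed-form s minimizing the Gaussian NLL when every predicted
    # variance is replaced by s * sigma^2; s > 1 widens overconfident
    # intervals, s < 1 tightens underconfident ones.
    return np.mean(residuals ** 2 / variances)

# If the true errors are twice as large as the predicted standard
# deviations, the variances must be scaled up by a factor of four.
residuals = np.array([2.0, -2.0, 2.0])
variances = np.array([1.0, 1.0, 1.0])
print(variance_scaling(residuals, variances))  # 4.0
```

Because the factor is fitted on validation data, it leaves the ranking of uncertainties, and hence sparsification, unchanged.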
The bias towards systematically higher uncertainty in higher measurement
values is a more concerning pattern. This effect can make it hard to
distinguish whether a measurement with high uncertainty should be excluded due
to being flawed or whether it merely resulted from a large subject, many of
whom may provide valuable insight in correlation studies. It is most
pronounced in the normal material where no genuine failure cases are
encountered. In contrast, the uncertainty for one abnormal subject in the test
set or the flawed predictions on images with artifacts of dataset $D_{art}$
are typically higher.
Conceptually, body weights above 150 kg and BMIs of up to 53 $kg/m^{2}$ as present
in the training data represent physiological extremes that could be considered
outliers in their own right. Arguably, the two-dimensional projections are
also inherently less suitable to represent more voluminous bodies and many of
the largest subjects furthermore show considerable variability in shape and
extend beyond the field of view. Even then, the effect is gradual and large
subjects incur higher uncertainty than warranted in terms of the prediction
errors alone. Previous work on age estimation from fetal brain MRI reported
similar effects [39], noting specifically that higher aleatoric uncertainty,
corresponding to the variances returned by the network instances, correlated
with higher gestational age of the fetal brain. In this work, the effect is
present in both the aleatoric and epistemic uncertainty component as modeled
by the empirical variance, even in least-squares regression ensembles.
On a technical level, the mean-variance configuration provided immediate
benefits over least squares regression despite merely changing the loss
function and requiring that both a mean and a variance be predicted. This
could be explained by loss attenuation [19, 17] weakening the impact of
outliers among the ground truth values. Several mismatches between the image
data and reference were identified where the predictions also incur high
errors in spite of low uncertainty. Images with artifacts, in contrast, did
not necessarily yield high uncertainties, as the method was in fact able to
provide accurate predictions for many of them. In turn, this also means that
subjects with artifacts will not generally be identified as out-of-
distribution samples. Ensembling yielded an inherent benefit in prediction
accuracy and also improved the calibration. The ten network instances were
conveniently obtained from a cross-validation split, but sufficient ensemble
diversity could potentially be induced by random weight initialization alone
and similar benefits can be achieved with fewer instances as seen in ablation
experiments of the supplementary material and related literature [10, 35].
Based on the results, even a single mean-variance instance would be viable in
practical settings if model size and runtime are of chief concern. The
calibration could be adjusted with scaling factors, although it would not
benefit from the 12% reduction in MAE achieved by ensembling.
Several additional limitations apply on a methodological level. No
independent, external test set was examined, so that no claim can be made
about generalization of the trained networks to other studies. The validation
and test cases used in this work are furthermore preselected for the intended
measurements by virtue of having passed the quality controls of the reference
methods. Similarly, certain phenotypes were systematically excluded from the
experiments in this paper, such as subjects with knee implants or other severe
pathologies. When applied to different imaging devices, protocols, or subject
demographics, new training data in the range of several hundred samples would
likely be required. In contrast, multi-atlas segmentations with manual
corrections have been based on just above 30 annotated subjects [43], whereas
neural networks for semantic segmentation typically report training data
ranging from 90 to 220 subjects [9, 2] on UK Biobank MRI.
When compared to neural networks for segmentation, the proposed approach
accordingly requires more training samples and produces no output segmentation
masks. In turn, it can be trained without access to reference segmentations in
an end-to-end fashion that does not require for the property of interest to be
manually encoded in the input data during training. Previous work showed that
it outperformed segmentation in estimating liver fat from the two-point Dixon
images, possibly by using additional image information that is not easily
accessible to human intuition [27], and also accurately estimated other, more
abstract properties [26]. Likewise, the uncertainty quantification as proposed
here can provide error bounds for the measurement that is ultimately of
interest for medical research, although approaches for voxel-wise uncertainty
from segmentation networks have also been proposed in the literature [37].
The concept of designing two-dimensional input formats resembles hand-crafted
feature selection and it would be preferable to apply a regression technique
directly to the volumetric MRI data. No claim is intended for the chosen
representation to be optimal as input to the neural network. The MRI volumes
could be sliced, projected, or aggregated in various ways, and any signal or
phase component may contain valuable information. Despite the empirical
success of the presented approach, further improvements may be possible, as
the chosen format compresses the MRI data to just $0.5\%$ of its original size
and almost certainly results in a loss of information. However, a fully
volumetric approach would likely require substantially increased processing
time and GPU memory. The proposed approach, in contrast, can run on consumer-
grade hardware and achieves relative errors as low as 1.6%, which may be hard
to improve much further. Future work may adapt the presented approach to the
dedicated liver MRI of UK Biobank, with potential for far more accurate liver
fat predictions.
Future work may also explore how the bias between high measurements and high
uncertainty can be corrected for, and could evaluate alternative strategies
that are known to produce substantially distinct estimates of uncertainty
[40]. However, it is unclear whether Monte-Carlo techniques that employ
dropout at test time [11] could reach sufficient predictive performance,
whereas more faithful approximations of Bayesian inference with Markov chain
Monte-Carlo [33] may not be computationally viable. Deep ensembles are often
reported as one of the most successful strategies [15, 35, 1] and a suitable
alternative will have to achieve better calibration and sparsification without
sacrificing predictive accuracy or exceeding the computational limitations in
order to be competitive.
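For reference, the way a deep ensemble of mean-variance networks is typically combined into a single predictive mean and variance [23] can be sketched as follows. This is an illustrative NumPy sketch under the law of total variance, not the exact implementation used in this work:

```python
import numpy as np

def combine_ensemble(means, variances):
    """Combine per-member Gaussian predictions into one mean and variance.

    means, variances: arrays of shape (n_members, n_samples), holding each
    ensemble member's predicted mean and variance per sample.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    mu = means.mean(axis=0)
    # Law of total variance: average predicted variance plus the
    # disagreement (variance) of the member means.
    var = variances.mean(axis=0) + (means ** 2).mean(axis=0) - mu ** 2
    return mu, var
```

The second term captures epistemic disagreement between members, which is why even small ensembles can improve calibration over a single network.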
In a large-scale study such as the UK Biobank, the main strengths of the
proposed approach can be exploited. Without any need for further guidance,
corrections, or intervention, the measurements can be inferred for the entire
imaged study population, both for existing and future imaging data. The
resulting measurements can be obtained for further study and quality control
months or years before full coverage has been achieved with the reference
techniques. In practice, researchers may apply this system to obtain automated
measurements for all upcoming 120,000 UK Biobank neck-to-knee body MRI scans
yet to be released, and will be alerted to potential prediction failures by
the predictive uncertainty. Future developments may also yield comparable
systems that could ultimately be integrated into scanner software to provide
fully automated analyses for specific imaging protocols.
## 5 Conclusion
In conclusion, both mean-variance regression and ensembling provided
complementary benefits for the presented task. Without extensive architectural
changes or prohibitive increases in computational cost, they enabled fast and
accurate measurements of body composition for the entire imaged UK Biobank
cohort. The predicted uncertainty can, despite the specified limitations, give
valuable insight into potential failure cases and will be made available
together with the inferred measurements for further medical studies.
## Acknowledgment
This work was supported by a research grant from the Swedish Heart-Lung
Foundation and the Swedish Research Council (2016-01040, 2019-04756,
2020-0500, 2021-70492) and used the UK Biobank Resource under application no.
14237.
## References
* Ashukha et al. [2020] Ashukha, A., Lyzhov, A., Molchanov, D., Vetrov, D., 2020. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning, in: International Conference on Learning Representations. URL: https://openreview.net/forum?id=BJxI5gHKDr.
* Bagur et al. [2020] Bagur, A.T., Ridgway, G., McGonigle, J., Brady, M., Bulte, D., 2020. Pancreas segmentation-derived biomarkers: Volume and shape metrics in the uk biobank imaging study, in: Annual Conference on Medical Image Understanding and Analysis, Springer. pp. 131–142.
* Bai et al. [2018] Bai, W., Sinclair, M., Tarroni, G., Oktay, O., Rajchl, M., Vaillant, G., Lee, A.M., Aung, N., Lukaschuk, E., Sanghvi, M.M., et al., 2018. Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. Journal of Cardiovascular Magnetic Resonance 20, 65.
* Basty et al. [2020] Basty, N., Liu, Y., Cule, M., Thomas, E.L., Bell, J.D., Whitcher, B., 2020. Automated measurement of pancreatic fat and iron concentration using multi-echo and t1-weighted mri data, in: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), IEEE. pp. 345–348.
* Borga [2018] Borga, M., 2018. Mri adipose tissue and muscle composition analysis—a review of automation techniques. The British journal of radiology 91, 20180252.
* Borga et al. [2015] Borga, M., Thomas, E.L., Romu, T., Rosander, J., Fitzpatrick, J., Dahlqvist Leinhard, O., Bell, J.D., 2015. Validation of a fast method for quantification of intra-abdominal and subcutaneous adipose tissue for large-scale human studies. NMR in biomedicine 28, 1747–1753.
* Cole et al. [2018] Cole, J.H., Ritchie, S.J., Bastin, M.E., Hernández, M.V., Maniega, S.M., Royle, N., Corley, J., Pattie, A., Harris, S.E., Zhang, Q., et al., 2018. Brain age predicts mortality. Molecular psychiatry 23, 1385–1392.
* Estrada et al. [2020] Estrada, S., Lu, R., Conjeti, S., Orozco-Ruiz, X., Panos-Willuhn, J., Breteler, M.M., Reuter, M., 2020. Fatsegnet: A fully automated deep learning pipeline for adipose tissue segmentation on abdominal dixon mri. Magnetic resonance in medicine 83, 1471–1483.
* Fitzpatrick et al. [2020] Fitzpatrick, J., Basty, N., Cule, M., Liu, Y., Bell, J.D., Thomas, E.L., Whitcher, B., 2020. Large-scale analysis of iliopsoas muscle volumes in the uk biobank. arXiv preprint arXiv:2008.05217.
* Fort et al. [2019] Fort, S., Hu, H., Lakshminarayanan, B., 2019. Deep ensembles: A loss landscape perspective. arXiv preprint arXiv:1912.02757.
* Gal and Ghahramani [2016] Gal, Y., Ghahramani, Z., 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning, in: international conference on machine learning, pp. 1050–1059.
* Ghahramani [2015] Ghahramani, Z., 2015. Probabilistic machine learning and artificial intelligence. Nature 521, 452–459.
* Guo et al. [2017] Guo, C., Pleiss, G., Sun, Y., Weinberger, K.Q., 2017. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599.
* Gustafsson et al. [2020a] Gustafsson, F.K., Danelljan, M., Bhat, G., Schön, T.B., 2020a. Energy-based models for deep probabilistic regression, in: European Conference on Computer Vision, Springer. pp. 325–343.
* Gustafsson et al. [2020b] Gustafsson, F.K., Danelljan, M., Schon, T.B., 2020b. Evaluating scalable bayesian deep learning methods for robust computer vision, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 318–319.
* He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep Residual Learning for Image Recognition, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. doi:10.1109/CVPR.2016.90.
* Ilg et al. [2018] Ilg, E., Cicek, O., Galesso, S., Klein, A., Makansi, O., Hutter, F., Brox, T., 2018. Uncertainty estimates and multi-hypotheses networks for optical flow, in: Proceedings of the European Conference on Computer Vision (ECCV), pp. 652–667.
* Irving et al. [2017] Irving, B., Hutton, C., Dennis, A., Vikal, S., Mavar, M., Kelly, M., Brady, J.M., 2017. Deep quantitative liver segmentation and vessel exclusion to assist in liver assessment, in: Annual Conference on Medical Image Understanding and Analysis, Springer. pp. 663–673.
* Kendall and Gal [2017] Kendall, A., Gal, Y., 2017. What uncertainties do we need in bayesian deep learning for computer vision?, in: Advances in neural information processing systems, pp. 5574–5584.
* Kingma and Ba [2014] Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
* Koo and Li [2016] Koo, T.K., Li, M.Y., 2016. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of chiropractic medicine 15, 155–163.
* Küstner et al. [2020] Küstner, T., Hepp, T., Fischer, M., Schwartz, M., Fritsche, A., Häring, H.U., Nikolaou, K., Bamberg, F., Yang, B., Schick, F., et al., 2020. Fully automated and standardized segmentation of adipose tissue compartments via deep learning in 3d whole-body mri of epidemiologic cohort studies. Radiology: Artificial Intelligence 2, e200010.
* Lakshminarayanan et al. [2017] Lakshminarayanan, B., Pritzel, A., Blundell, C., 2017. Simple and scalable predictive uncertainty estimation using deep ensembles, in: Advances in neural information processing systems, pp. 6402–6413.
* Langner et al. [2019a] Langner, T., Hedström, A., Mörwald, K., Weghuber, D., Forslund, A., Bergsten, P., Ahlström, H., Kullberg, J., 2019a. Fully convolutional networks for automated segmentation of abdominal adipose tissue depots in multicenter water–fat mri. Magnetic resonance in medicine 81, 2736–2745.
* Langner et al. [2020a] Langner, T., Östling, A., Maldonis, L., Karlsson, A., Olmo, D., Lindgren, D., Wallin, A., Lundin, L., Strand, R., Ahlström, H., et al., 2020a. Kidney segmentation in neck-to-knee body mri of 40,000 uk biobank participants. Scientific reports 10, 1–10.
* Langner et al. [2020b] Langner, T., Strand, R., Ahlström, H., Kullberg, J., 2020b. Large-scale biometry with interpretable neural network regression on uk biobank body mri. Scientific Reports 10, 1–9.
* Langner et al. [2020c] Langner, T., Strand, R., Ahlström, H., Kullberg, J., 2020c. Large-scale inference of liver fat with neural networks on uk biobank body mri, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 602–611.
* Langner et al. [2019b] Langner, T., Wikström, J., Bjerner, T., Ahlström, H., Kullberg, J., 2019b. Identifying morphological indicators of aging with neural networks on large-scale whole-body MRI. IEEE Transactions on Medical Imaging, 1–1. URL: https://doi.org/10.1109/tmi.2019.2950092, doi:10.1109/tmi.2019.2950092.
* Laves et al. [2020] Laves, M.H., Ihler, S., Fast, J.F., Kahrs, L.A., Ortmaier, T., 2020. Well-calibrated regression uncertainty in medical imaging with deep learning, in: Medical Imaging with Deep Learning.
* Linge et al. [2018] Linge, J., Borga, M., West, J., Tuthill, T., Miller, M.R., Dumitriu, A., Thomas, E.L., Romu, T., Tunón, P., Bell, J.D., et al., 2018. Body composition profiling in the uk biobank imaging study. Obesity 26, 1785–1795.
* Littlejohns et al. [2020] Littlejohns, T.J., Holliday, J., Gibson, L.M., Garratt, S., Oesingmann, N., Alfaro-Almagro, F., Bell, J.D., Boultwood, C., Collins, R., Conroy, M.C., et al., 2020. The uk biobank imaging enhancement of 100,000 participants: rationale, data collection, management and future directions. Nature Communications 11, 1–12.
* Liu et al. [2021] Liu, Y., Basty, N., Whitcher, B., Bell, J., Sorokin, E., van Bruggen, N., Thomas, E.L., Cule, M., 2021. Genetic architecture of 11 organ traits derived from abdominal mri using deep learning. ELife 10, e65554.
* Neal [2012] Neal, R.M., 2012. Bayesian learning for neural networks. volume 118. Springer Science & Business Media.
* Nix and Weigend [1994] Nix, D.A., Weigend, A.S., 1994. Estimating the mean and variance of the target probability distribution, in: Proceedings of 1994 ieee international conference on neural networks (ICNN’94), IEEE. pp. 55–60.
* Ovadia et al. [2019] Ovadia, Y., Fertig, E., Ren, J., Nado, Z., Sculley, D., Nowozin, S., Dillon, J., Lakshminarayanan, B., Snoek, J., 2019. Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift, in: Advances in Neural Information Processing Systems, pp. 13991–14002.
* Poplin et al. [2018] Poplin, R., Varadarajan, A.V., Blumer, K., Liu, Y., McConnell, M.V., Corrado, G.S., Peng, L., Webster, D.R., 2018. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering 2, 158.
* Roy et al. [2019] Roy, A.G., Conjeti, S., Navab, N., Wachinger, C., Initiative, A.D.N., et al., 2019. Bayesian quicknat: Model uncertainty in deep whole-brain segmentation for structure-wise quality control. NeuroImage 195, 11–22.
* Selvaraju et al. [2017] Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D., 2017. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, in: 2017 IEEE International Conference on Computer Vision (ICCV), IEEE, Venice. pp. 618–626. URL: http://ieeexplore.ieee.org/document/8237336/, doi:10.1109/ICCV.2017.74.
* Shi et al. [2020] Shi, W., Yan, G., Li, Y., Li, H., Liu, T., Sun, C., Wang, G., Zhang, Y., Zou, Y., Wu, D., 2020. Fetal brain age estimation and anomaly detection using attention-based deep ensembles with uncertainty. NeuroImage 223, 117316.
* Ståhl et al. [2020] Ståhl, N., Falkman, G., Karlsson, A., Mathiason, G., 2020. Evaluation of uncertainty quantification in deep learning, in: International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, Springer. pp. 556–568.
* Sudlow et al. [2015] Sudlow, C., Gallacher, J., Allen, N., Beral, V., Burton, P., Danesh, J., Downey, P., Elliott, P., Green, J., Landray, M., Liu, B., Matthews, P., Ong, G., Pell, J., Silman, A., Young, A., Sprosen, T., Peakman, T., Collins, R., 2015. UK Biobank: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age. PLOS Medicine 12, e1001779. URL: https://dx.plos.org/10.1371/journal.pmed.1001779, doi:10.1371/journal.pmed.1001779.
* Wang et al. [2017] Wang, Y., Qiu, Y., Thai, T., Moore, K., Liu, H., Zheng, B., 2017. A two-step convolutional neural network based computer-aided detection scheme for automatically segmenting adipose tissue volume depicting on ct images. Computer methods and programs in biomedicine 144, 97–104.
* West et al. [2016] West, J., Dahlqvist Leinhard, O., Romu, T., Collins, R., Garratt, S., Bell, J.D., Borga, M., Thomas, L., 2016. Feasibility of MR-Based Body Composition Analysis in Large Scale Population Studies. PLoS ONE 11. URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5035023/, doi:10.1371/journal.pone.0163332.
* Weston et al. [2019] Weston, A.D., Korfiatis, P., Kline, T.L., Philbrick, K.A., Kostandy, P., Sakinis, T., Sugimoto, M., Takahashi, N., Erickson, B.J., 2019. Automated abdominal segmentation of ct scans for body composition analysis using deep learning. Radiology 290, 669–679.
* Wilman et al. [2017] Wilman, H.R., Kelly, M., Garratt, S., Matthews, P.M., Milanesi, M., Herlihy, A., Gyngell, M., Neubauer, S., Bell, J.D., Banerjee, R., et al., 2017. Characterisation of liver fat in the uk biobank cohort. PloS one 12, e0172921.
* Xue et al. [2017] Xue, W., Islam, A., Bhaduri, M., Li, S., 2017. Direct multitype cardiac indices estimation via joint representation and regression learning. IEEE transactions on medical imaging 36, 2057–2067.
## Supplementary Material
The following pages provide additional detail on predictive performance,
sparsification, calibration, and inference with the proposed approach. Unless
otherwise specified, all listed results were acquired with the configuration
that combines both mean-variance regression and ensembling. The individual
targets are furthermore examined in detail and compared to alternative UK
Biobank reference values. A PyTorch implementation for preprocessing,
training, and inference with mean-variance regression on the given image data
is available online.
GitHub repository:
https://github.com/tarolangner/ukb_mimir
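As a minimal illustration of the mean-variance objective [Nix and Weigend, 1994] underlying that implementation, the per-sample loss can be sketched as follows. This is written in NumPy for brevity and simplified up to an additive constant; the published implementation uses PyTorch:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-sample Gaussian negative log-likelihood, up to a constant.

    The network outputs the mean mu and the log-variance log_var;
    predicting the log-variance keeps the variance strictly positive
    and the optimization numerically stable.
    """
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))
```

Minimizing this loss jointly fits the regression target and an input-dependent (heteroscedastic) variance, which is the uncertainty reported alongside each measurement.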
### 1.1 Datasets and Predictive Performance
The effective number of samples in the three datasets used for evaluation is
listed in Supplementary Table 1, taking into account missing reference values.
The inference dataset $D_{inf}$ furthermore contained 29,234 and the repeat
imaging dataset $D_{revisit}$ another 1,179 unique samples.
Supplementary Table 1: Number of Subjects per Dataset
Field ID | Target | | Cross-validation | Testing | Artifacts
---|---|---|---|---|---
22407 | Visceral Adipose Tissue | (VAT) | 8,534 | 1,096 | 327
22408 | Abdominal Subcutaneous Adipose Tissue | (SAT) | 8,534 | 1,097 | 326
22415 | Total Adipose Tissue | (TAT) | 8,270 | 0 | 242
22416 | Total Lean Tissue | (TLT) | 8,270 | 0 | 242
22409 | Total Thigh Muscle | (TTM) | 8,478 | 1,038 | 284
22436 | Liver Fat Fraction | (LFF) | 8,474 | 1,061 | 323
*UK Biobank Field IDs and number of available subjects with known reference values per target
in cross-validation on dataset $D_{cv}$, testing on dataset $D_{test}$, and
artifact dataset $D_{art}$.
Additional documentation for each target is publicly available in the UK
Biobank showcase, based on the listed Field IDs:
https://biobank.ndph.ox.ac.uk/showcase/search.cgi
Supplementary Table 2 lists additional evaluation metrics on all targets in
cross-validation and testing. The results of all four configurations in cross-
validation are listed in Supplementary Table 3.
Supplementary Table 2: Predictive Performance in Detail
 | | | Cross-validation | Testing
---|---|---|---|---
Target | | unit | N | ICC | R2 | MAE | MAPE | N | ICC | R2 | MAE | MAPE
Visceral Adipose Tissue | (VAT) | L | 8,534 | 0.997 | 0.994 | 0.122 | 4.2 | 1,096 | 0.997 | 0.995 | 0.119 | 3.6
Abdominal Subcutaneous Adipose Tissue | (SAT) | L | 8,534 | 0.996 | 0.993 | 0.191 | 2.8 | 1,097 | 0.996 | 0.992 | 0.192 | 2.7
Total Adipose Tissue | (TAT) | L | 8,270 | 0.997 | 0.995 | 0.358 | 1.8 | 0 | | | |
Total Lean Tissue | (TLT) | L | 8,270 | 0.983 | 0.966 | 0.579 | 2.5 | 0 | | | |
Total Thigh Muscle | (TTM) | L | 8,478 | 0.996 | 0.993 | 0.162 | 1.6 | 1,038 | 0.995 | 0.990 | 0.174 | 1.6
Liver Fat Fraction | (LFF) | % | 8,474 | 0.979 | 0.959 | 0.666 | 25.7 | 1,061 | 0.982 | 0.965 | 0.647 | 21.6
* Results for an ensemble of ten mean-variance networks in 10-fold cross-validation on dataset $D_{cv}$ and testing on dataset $D_{test}$.
N: Number of subjects, ICC: Intraclass correlation coefficient, R2:
Coefficient of determination, MAE: Mean absolute error,
MAPE: Mean absolute percentage error.
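For transparency, the agreement metrics listed in these tables (except for the ICC, whose exact variant follows Koo and Li [2016] and is not reproduced here) can be sketched as follows. This NumPy sketch is illustrative and may differ in detail from the evaluation code actually used:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R2, MAE, and MAPE as reported in the tables (MAPE in percent)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residual = y_true - y_pred
    ss_res = np.sum(residual ** 2)                     # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(residual))
    mape = 100.0 * np.mean(np.abs(residual) / np.abs(y_true))
    return r2, mae, mape
```

Note that MAPE is sensitive to small reference values, which partly explains the high MAPE for the liver fat fraction despite its high ICC and R2.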
Supplementary Table 3: Comparison of All Configurations in Cross-Validation
Configuration | ICC | R2 | MAE | MAPE
---|---|---|---|---
Visceral Adipose Tissue (VAT) in L | | | |
Least squares instance | 0.996 | 0.992 | 0.150 | 5.2
Mean-variance instance | 0.997 | 0.993 | 0.134 | 4.6
Least squares ensemble | 0.997 | 0.993 | 0.133 | 4.6
Mean-variance ensemble | 0.997 | 0.994 | 0.122 | 4.2
Abdominal Subcutaneous Adipose Tissue (SAT) in L | | | |
Least squares instance | 0.995 | 0.991 | 0.222 | 3.3
Mean-variance instance | 0.996 | 0.992 | 0.209 | 3.1
Least squares ensemble | 0.996 | 0.992 | 0.202 | 3.0
Mean-variance ensemble | 0.996 | 0.993 | 0.191 | 2.8
Total Adipose Tissue (TAT) in L | | | |
Least squares instance | 0.997 | 0.993 | 0.420 | 2.1
Mean-variance instance | 0.997 | 0.994 | 0.390 | 1.9
Least squares ensemble | 0.997 | 0.994 | 0.377 | 1.9
Mean-variance ensemble | 0.997 | 0.995 | 0.358 | 1.8
Total Lean Tissue (TLT) in L | | | |
Least squares instance | 0.981 | 0.963 | 0.650 | 2.8
Mean-variance instance | 0.981 | 0.962 | 0.632 | 2.7
Least squares ensemble | 0.983 | 0.966 | 0.594 | 2.6
Mean-variance ensemble | 0.983 | 0.966 | 0.579 | 2.5
Total Thigh Muscle (TTM) in L | | | |
Least squares instance | 0.996 | 0.991 | 0.182 | 1.8
Mean-variance instance | 0.996 | 0.992 | 0.176 | 1.8
Least squares ensemble | 0.996 | 0.993 | 0.163 | 1.6
Mean-variance ensemble | 0.996 | 0.993 | 0.162 | 1.6
Liver Fat Fraction (LFF) in % | | | |
Least squares instance | 0.977 | 0.956 | 0.706 | 28.1
Mean-variance instance | 0.977 | 0.954 | 0.702 | 26.6
Least squares ensemble | 0.979 | 0.960 | 0.671 | 27.0
Mean-variance ensemble | 0.979 | 0.959 | 0.666 | 25.7
* Results for all configurations in 10-fold cross-validation on dataset $D_{cv}$.
ICC: Intraclass correlation coefficient, R2: Coefficient of determination,
MAE: Mean absolute error, MAPE: Mean absolute percentage error.
### 1.2 Overall Calibration
All examined configurations are biased towards overconfidence, consistently
underestimating the true prediction errors. The predicted uncertainty should
accordingly be scaled up. Suitable target-wise scaling factors can be
determined to reach a better calibration on the validation data after training
[Guo et al., 2017, Laves et al., 2020]. In this work a simple grid search was
used, which resulted in the target-wise scaling factors and the areas under
calibration error curve (AUCE) [Gustafsson et al., 2020b] shown in
Supplementary Table 4, with calibration plots, or reliability diagrams, shown
in Supplementary Fig. 1. The same factors also achieve a considerable
improvement when applied to the test data, indicating that the calibration of
the proposed method could easily be corrected with this strategy for the
normal material of the entire cohort.
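A simplified sketch of this recalibration, assuming Gaussian predictive distributions, is given below. The AUCE follows the definition of Gustafsson et al. [2020b]; the exact grid and number of confidence levels used in this work may differ:

```python
import numpy as np
from statistics import NormalDist

def auce(y, mu, sigma, n_levels=99):
    """Area under the calibration error curve: for each confidence level p,
    compare the empirical coverage of the symmetric Gaussian prediction
    interval with p itself, and average the absolute differences."""
    levels = np.linspace(0.01, 0.99, n_levels)
    errors = []
    for p in levels:
        z = NormalDist().inv_cdf(0.5 * (1.0 + p))  # two-sided interval half-width
        coverage = np.mean(np.abs(y - mu) <= z * sigma)
        errors.append(abs(coverage - p))
    return float(np.mean(errors))

def fit_scaling_factor(y, mu, sigma, grid=np.linspace(0.5, 3.0, 126)):
    """Grid search for the factor that scales sigma to minimize AUCE,
    fitted on validation data and then applied to held-out data."""
    return float(min(grid, key=lambda s: auce(y, mu, s * sigma)))
```

An overconfident model (predicted sigma too small) yields a factor above one, matching the target-wise factors reported in Supplementary Table 4.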
Supplementary Figure 1: Calibration plots for the mean-variance regression ensemble on cross-validation dataset $D_{cv}$ and testing on dataset $D_{test}$. Ideally, each prediction interval as modeled by the underlying predicted Gaussian probability distribution should cover the corresponding share of reference values. This hypothetical optimum is represented by the gray dashed line.
Supplementary Table 4: Calibration
 | | Cross-validation | Testing |
---|---|---|---|---
Target | | AUCE | AUCEscaled | AUCE | AUCEscaled | Scaling factor
Visceral Adipose Tissue | (VAT) | 0.025 | 0.004 | 0.017 | 0.041 | 1.212
Abdominal Subcutaneous Adipose Tissue | (SAT) | 0.035 | 0.005 | 0.028 | 0.010 | 1.306
Total Adipose Tissue | (TAT) | 0.029 | 0.003 | | | 1.250
Total Lean Tissue | (TLT) | 0.017 | 0.011 | | | 1.109
Total Thigh Muscle | (TTM) | 0.012 | 0.003 | 0.030 | 0.022 | 0.941
Liver Fat Fraction | (LFF) | 0.083 | 0.003 | 0.042 | 0.038 | 1.828
* Calibration of the mean-variance regression ensemble in cross-validation on dataset $D_{cv}$ and testing on dataset $D_{test}$.
The area under calibration error curve (AUCE) can be substantially reduced
(to AUCEscaled) with target-wise scaling factors.
Note: Earlier versions of this manuscript reported slightly worse calibration
metrics due to flawed reversing of the standard scaling.
### 1.3 Ensemble Size
Supplementary Figure 2: Ablation experiments show how ensembling of mean-
variance network instances (blue), each trained on 90% of samples, compares to
a single instance trained on all samples (dotted gray). Averaged across all
targets, even ensembles of size two reach superior agreement and calibration.
### 1.4 Correlations
Supplementary Figure 3: The male- or female-specific target values are highly
correlated with body weight (UK Biobank field 21002), as indicated by
Pearson’s correlation coefficient $r$.
### 1.5 Detail on Individual Targets
The following pages list dedicated plots for the prediction, sparsification,
and calibration of each target. For the test data only the mean-variance
ensemble configuration is shown, which was determined to be the best
performing approach in cross-validation.
Each subsection also includes short discussions and comparisons to alternative
reference measurements which are primarily derived from two main sources. The
first source contains body composition measurements obtained by Dual-energy
X-ray absorptiometry (DXA) as conducted by UK Biobank [Littlejohns et al.,
2020]. The second source contains additional measurements based on an
independent machine learning analysis of the same neck-to-knee body MRI as
used in this work, conducted by UK Biobank Application 23889, which shared return dataset 981.
Similar comparisons have been previously reported for a comparable least
squares regression technique [Langner et al., 2020b]. Some measurements may be
highly correlated but yield low agreement due to a shift or scaling
difference. Where specified, these alternative measurements were therefore
mapped with linear regression to the target values as used in this work, so
that agreement values can be reported. Additionally, Pearson’s coefficient of
correlation r is reported. For a fair comparison, the methods are evaluated on
the same subjects.
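The mapping step can be sketched as an ordinary least-squares fit of the alternative measurement to the target, applied before computing agreement. This is an illustrative NumPy sketch; the exact fitting procedure is not specified in this work beyond linear regression:

```python
import numpy as np

def map_to_target(alt, target):
    """Least-squares linear mapping alt -> target, removing shift and
    scale differences before agreement values are reported.

    Returns the mapped values and the fitted (slope, intercept)."""
    slope, intercept = np.polyfit(alt, target, deg=1)
    return slope * np.asarray(alt, dtype=float) + intercept, (slope, intercept)
```

The fitted slope and intercept correspond to the target-wise linear transformation parameters listed below for each alternative reference.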
The sparsification plots also show oracle sparsification curves [Ilg et al.,
2018], which describe a hypothetical optimum that would result from
sparsifying with a ranking of uncertainties that corresponds exactly to a
ranking of absolute prediction errors. This optimum can typically not be
reached in practice, as it would require imitating not only the desired
measurements but also any inconsistencies and noise in the reference
techniques themselves. The sparsification for the three evaluation datasets is
shown separately, but it is worth noting that in most cases the samples with
artifacts incurred the highest uncertainty. When applied to a dataset that
included mixed normal material and artifacts, the latter would therefore
typically be excluded first in the sparsification. The outlier with largest
prediction error in testing for VAT, SAT, and TAT is the same subject, found
to suffer from an atrophied right leg.
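The sparsification and oracle curves described above can be sketched as follows; this NumPy sketch uses the mean absolute error as the sparsification metric and may differ in detail from the plotting code actually used:

```python
import numpy as np

def sparsification_curve(errors, uncertainties):
    """MAE among the samples remaining after removing the most uncertain
    samples first; the oracle instead removes the largest errors first."""
    errors = np.abs(np.asarray(errors, dtype=float))
    order = np.argsort(-np.asarray(uncertainties, dtype=float))
    oracle_order = np.argsort(-errors)
    n = len(errors)
    curve, oracle = [], []
    for k in range(n):  # k samples removed so far
        curve.append(errors[order[k:]].mean())
        oracle.append(errors[oracle_order[k:]].mean())
    return np.array(curve), np.array(oracle)
```

When the predicted uncertainty ranks samples exactly like the absolute errors, the two curves coincide; the gap between them measures how much useful error information the uncertainty misses.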
Aggregated saliency maps were obtained by generating guided gradient-weighted
class activation maps for 3,091 subjects and co-aligning them by image
registration [Langner et al., 2019b]. Each aggregated saliency map accordingly
highlights which anatomical structures were predominantly considered by the
network to make predictions for the specified target. For clarity, the
visualizations show the aggregated saliency as a heatmap for each of the three
input image channels side by side and are provided with and without the
template subject anatomy as an overlay. The network weights used for this
purpose are based on the mean-variance configuration with a single network
trained for cross-validation in this work, in each case using the instance
that did not contain the given image in its training set.
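Assuming the guided Grad-CAM maps [Selvaraju et al., 2017] have already been computed per subject and co-registered to template space, the aggregation step itself reduces to an average, sketched here in NumPy:

```python
import numpy as np

def aggregate_saliency(registered_maps):
    """Average co-registered per-subject saliency maps into one
    template-space heatmap, normalized to [0, 1] for display."""
    stack = np.stack([np.abs(np.asarray(m, dtype=float))
                      for m in registered_maps])
    mean_map = stack.mean(axis=0)
    peak = mean_map.max()
    return mean_map / peak if peak > 0 else mean_map
```

Averaging over thousands of aligned subjects suppresses individual noise so that only the anatomical structures consistently attended to by the network remain visible.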
#### 1.5.1 Visceral Adipose Tissue (VAT)
Supplementary Figure 4: Predictions in cross-validation on $D_{cv}$, testing
on $D_{test}$, and on subjects with artifacts of $D_{art}$, with color-coded
uncertainty. Supplementary Figure 5: Sparsification in cross-validation on
$D_{cv}$, testing on $D_{test}$, and on subjects with artifacts of $D_{art}$,
with oracle curves (dotted). Supplementary Figure 6: Calibration in cross-
validation on $D_{cv}$, testing on $D_{test}$, and on subjects with artifacts
of $D_{art}$.
Visceral Adipose Tissue (VAT), extended notes:
Supplementary Fig. 4 shows a close fit with few outliers in the normal
material. In testing, a single subject with an atrophied right leg incurs a
substantially overestimated measurement, which can be identified by high
uncertainty.
Alternative reference methods:
UK Biobank field 23289 contains measurements of VAT by DXA for 5,109 subjects.
These values were first converted from mL to L and then mapped to the target
with the following linear transformation parameters:
($2.27x+0.83L$).
UK Biobank return 981 by application 23889 also offers VAT measurements for
9,127 subjects. These values were converted from mL to L, but did not require
adjustment by linear regression.
Supplementary Table 5: Comparison of VAT references
Method | N | ICC | R2 | MAE | MAPE | r
---|---|---|---|---|---|---
Proposed | 4,491 | 0.997 | 0.994 | 0.131 | 4.3 | 0.997
Field 23289 | 4,491 | 0.970 | 0.942 | 0.401 | 14.9 | 0.971
Proposed | 7,871 | 0.997 | 0.994 | 0.121 | 4.1 | 0.997
Return 981 | 7,871 | 0.996 | 0.993 | 0.137 | 4.4 | 0.996
*Comparison to the target values, listing both the proposed predictions
and alternative UK Biobank reference values on the same subjects
Aggregated saliency (VAT):
Supplementary Figure 7: Aggregated saliency [Langner et al., 2019b] for
Visceral Adipose Tissue (VAT) for 3,091 subjects, generated by a single mean-
variance network. Each row shows the water, fat, and fat-fraction channels
side by side, with the top row showing an overlay on the image data and the
bottom row the saliency only.
#### 1.5.2 Abdominal Subcutaneous Adipose Tissue (SAT)
Supplementary Figure 8: Predictions in cross-validation on $D_{cv}$, testing
on $D_{test}$, and on subjects with artifacts of $D_{art}$, with color-coded
uncertainty. Supplementary Figure 9: Sparsification in cross-validation on
$D_{cv}$, testing on $D_{test}$, and on subjects with artifacts of $D_{art}$,
with oracle curves (dotted). Supplementary Figure 10: Calibration in cross-
validation on $D_{cv}$, testing on $D_{test}$, and on subjects with artifacts
of $D_{art}$.
Abdominal Subcutaneous Adipose Tissue (SAT), extended notes:
The scatter plot for the test data in Supplementary Fig. 8 shows a single
outlier with about 15 L of subcutaneous adipose tissue, for whom the prediction yields almost 20
L with high uncertainty. This subject was found to suffer from an abnormal,
atrophied right leg and also incurs high measurement errors in TTM and VAT.
Alternative reference methods:
UK Biobank return 981 by application 23889 also offers measurements of
subcutaneous adipose tissue volume for 9,379 subjects. These values were
converted from mL to L and then mapped to the target with the following linear
transformation parameters: ($0.98x+0.46L$).
Supplementary Table 6: Comparison of SAT references
Method | N | ICC | R2 | MAE | MAPE | r
---|---|---|---|---|---|---
Proposed | 8,085 | 0.996 | 0.993 | 0.187 | 2.8 | 0.996
Return 981 | 8,085 | 0.994 | 0.989 | 0.208 | 3.1 | 0.994
*Comparison to the target values, listing both the proposed predictions
and alternative UK Biobank reference values on the same subjects
Aggregated saliency (SAT):
Supplementary Figure 11: Aggregated saliency [Langner et al., 2019b] for
Subcutaneous Adipose Tissue (SAT) for 3,091 subjects, generated by a single
mean-variance network. Each row shows the water, fat, and fat-fraction
channels side by side, with the top row showing an overlay on the image data
and the bottom row the saliency only.
#### 1.5.3 Total Adipose Tissue (TAT)
Supplementary Figure 12: Predictions in cross-validation on $D_{cv}$, testing
on $D_{test}$, and on subjects with artifacts of $D_{art}$, with color-coded
uncertainty. Supplementary Figure 13: Sparsification in cross-validation on
$D_{cv}$, testing on $D_{test}$, and on subjects with artifacts of $D_{art}$,
with oracle curves (dotted). Supplementary Figure 14: Calibration in cross-
validation on $D_{cv}$, testing on $D_{test}$, and on subjects with artifacts
of $D_{art}$.
Total Adipose Tissue (TAT), extended notes:
No test data was available for this target.
Alternative reference methods:
UK Biobank field 23278 contains alternative measurements of total fat mass by
DXA for 5,170 subjects. These values were first converted from mL to L and
then mapped to the target with the following linear transformation parameters:
($0.80x+0.51L$).
Supplementary Table 7: Comparison of TAT references
Method | N | ICC | R2 | MAE | MAPE | r
---|---|---|---|---|---|---
Proposed | 4,323 | 0.997 | 0.995 | 0.353 | 1.8 | 0.997
Field 23278 | 4,323 | 0.991 | 0.982 | 0.689 | 3.4 | 0.991
*Comparison to the target values, listing both the proposed predictions
and alternative UK Biobank reference values on the same subjects
Aggregated saliency (TAT):
Supplementary Figure 15: Aggregated saliency [Langner et al., 2019b] for Total
Adipose Tissue (TAT) for 3,091 subjects, generated by a single mean-variance
network. Each row shows the water, fat, and fat-fraction channels side by
side, with the top row showing an overlay on the image data and the bottom row
the saliency only.
#### 1.5.4 Total Lean Tissue (TLT)
Supplementary Figure 16: Predictions in cross-validation on $D_{cv}$, testing
on $D_{test}$, and on subjects with artifacts of $D_{art}$, with color-coded
uncertainty. Supplementary Figure 17: Sparsification in cross-validation on
$D_{cv}$, testing on $D_{test}$, and on subjects with artifacts of $D_{art}$,
with oracle curves (dotted). Supplementary Figure 18: Calibration in cross-
validation on $D_{cv}$, testing on $D_{test}$, and on subjects with artifacts
of $D_{art}$.
Total Lean Tissue (TLT), extended notes:
No test data was available for this target. Supplementary Fig. 16 shows a
curious pattern for the cross-validation, where a subset of measurements is
consistently overestimated by about 2 L.
The reason for this mismatch is unclear. The affected subjects are not part of
the same cross-validation split set, were imaged in different imaging centers,
and share no other obvious confounding factors. However, alternative
measurements of total lean tissue by DXA (total lean mass, field 23280)
independently support these overestimations relative to the reference used in
this work. Supplementary Fig. 19 shows a comparison where the reference is
plotted against the DXA measurements. All those cases that were overestimated
by the proposed method by at least 2L are color-coded and form a similar
pattern as observed in cross-validation.
Supplementary Figure 19: In some subjects (red), the proposed method
overestimated total lean tissue (TLT) by at least 2L. As shown on the right,
the DXA scan shows a similar pattern and independently indicates higher values
for these subjects.
Alternative reference methods:
UK Biobank field 23280 contains additional measurements of total lean mass by
DXA for 5,170 subjects. These values were first converted from mL to L and
then mapped to the target with the following linear transformation parameters:
($0.50x+0.47L$).
On a side note, UK Biobank field 23285 also contains DXA measurements of trunk
lean mass, but these values reach lower agreement with the target than field
23280 and were not considered further.
Supplementary Table 8: Comparison of TLT references Method | N | ICC | R2 | MAE | MAPE | r
---|---|---|---|---|---|---
Proposed | 4,323 | 0.976 | 0.953 | 0.684 | 3.0 | 0.978
Field 23280 | 4,323 | 0.969 | 0.941 | 0.856 | 3.7 | 0.970
*Comparison to the target values, listing both the proposed predictions
and alternative UK Biobank reference values on the same subjects
Aggregated saliency (TLT):
Supplementary Figure 20: Aggregated saliency [Langner et al., 2019b] for Total
Lean Tissue (TLT) for 3,091 subjects, generated by a single mean-variance
network. Each row shows the water, fat, and fat-fraction channels side by
side, with the top row showing an overlay on the image data and the bottom row
the saliency only.
#### 1.5.5 Total Thigh Muscle (TTM)
Supplementary Figure 21: Predictions in cross-validation on $D_{cv}$, testing
on $D_{test}$, and on subjects with artifacts of $D_{art}$, with color-coded
uncertainty. Supplementary Figure 22: Sparsification in cross-validation on
$D_{cv}$, testing on $D_{test}$, and on subjects with artifacts of $D_{art}$,
with oracle curves (dotted). Supplementary Figure 23: Calibration in cross-
validation on $D_{cv}$, testing on $D_{test}$, and on subjects with artifacts
of $D_{art}$.
Total Thigh Muscle (TTM), extended notes:
Supplementary Fig. 21 shows a close fit with few outliers in the normal
material. In testing, a single subject with an atrophied right leg incurs high
uncertainty, together with a moderately overestimated measurement. Several
other high-valued testing cases are slightly underestimated.
Many of those cases with the highest uncertainty show severe fat infiltrations
of the thigh muscle.
Alternative reference methods:
UK Biobank field 23275 contains measurements of the lean mass of the legs by
DXA for 5,170 subjects. These values describe more than just muscle volume,
but may still be considered as a proxy. These values were first converted from
mL to L and then mapped to the target with the following linear transformation
parameters:
($0.69x+0.64L$).
UK Biobank return 981 by application 23889 also offers thigh muscle volume
measurements for 9,441 subjects. These values were first converted from mL to
L and then mapped to the target with the following linear transformation
parameters:
($1.06x+0.67L$).
Supplementary Table 9: Comparison of TTM references Method | N | ICC | R2 | MAE | MAPE | r
---|---|---|---|---|---|---
Proposed | 4,483 | 0.996 | 0.992 | 0.173 | 1.7 | 0.997
Field 23275 | 4,483 | 0.958 | 0.919 | 0.561 | 5.6 | 0.959
Proposed | 8,144 | 0.997 | 0.993 | 0.161 | 1.6 | 0.997
Return 981 | 8,144 | 0.989 | 0.978 | 0.284 | 2.8 | 0.989
*Comparison to the target values, listing both the proposed predictions
and alternative UK Biobank reference values on the same subjects
Aggregated saliency (TTM):
Supplementary Figure 24: Aggregated saliency [Langner et al., 2019b] for Total
Thigh Muscle (TTM) for 3,091 subjects, generated by a single mean-variance
network. Each row shows the water, fat, and fat-fraction channels side by
side, with the top row showing an overlay on the image data and the bottom row
the saliency only.
#### 1.5.6 Liver Fat Fraction (LFF)
Supplementary Figure 25: Predictions in cross-validation on $D_{cv}$, testing
on $D_{test}$, and on subjects with artifacts of $D_{art}$, with color-coded
uncertainty. Supplementary Figure 26: Sparsification in cross-validation on
$D_{cv}$, testing on $D_{test}$, and on subjects with artifacts of $D_{art}$,
with oracle curves (dotted). Supplementary Figure 27: Calibration in cross-
validation on $D_{cv}$, testing on $D_{test}$, and on subjects with artifacts
of $D_{art}$.
Liver Fat Fraction (LFF), extended notes:
The scatter plots of Supplementary Fig. 25 show that a small number of samples
in the range of zero to five fat fraction points are severely overestimated,
both in cross-validation and testing. Not all of these predictions incur high
uncertainty.
Visual control of the affected subjects showed that the predictions by the
proposed method often provided a better match to the neck-to-knee body MRI
than achieved by the reference values. No obvious confounding factors such as
artifacts or high liver iron content were observed. A similar effect was noted
in previous work [Langner et al., 2020c] where a least squares regression
technique was trained to emulate an alternative set of UK Biobank liver fat
measurements, field 22402. As both of these reference fields are based on the
dedicated liver MRI instead of the neck-to-knee body MRI used here, a possible
explanation could be an unusually severe mismatch of both protocols for these
subjects.
On average, LFF incurred by far the highest normalized uncertainties
(calculated by dividing the predicted uncertainty by the predicted means) of
all targets. Finally, it is worth noting that for this target superior results
may be possible when using an input format that only shows a fat fraction
slice of the upper body, as previously proposed [Langner et al., 2020c],
although no rigorous comparison was attempted in the scope of this work. The
technique could also be applied directly to the dedicated liver MRI.
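The normalization described above (predicted uncertainty divided by the predicted mean) can be sketched as follows; the function and argument names are illustrative only:

```python
import numpy as np

def normalized_uncertainty(pred_std, pred_mean, eps=1e-8):
    # Normalized uncertainty: predicted standard deviation divided by the
    # predicted mean. The eps guard avoids division by near-zero means,
    # which is relevant for targets like low liver fat fractions.
    return np.asarray(pred_std, dtype=float) / (np.asarray(pred_mean, dtype=float) + eps)
```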
Alternative reference methods:
UK Biobank field 22402 contains alternative liver fat fraction values for
4,616 subjects, obtained by mostly manual analysis of dedicated liver MRI
[Wilman et al., 2017]. Relative to the target used in this work, one outlier
subject is overestimated by 24 fat fraction points and no linear
transformation was applied.
Supplementary Table 10: Comparison of LFF references Method | N | ICC | R2 | MAE | MAPE | r
---|---|---|---|---|---|---
Proposed | 4,401 | 0.978 | 0.956 | 0.669 | 26.3 | 0.978
Field 22402 | 4,401 | 0.987 | 0.972 | 0.430 | 14.8 | 0.989
*Comparison to the target values, listing both the proposed predictions
and alternative UK Biobank reference values on the same subjects
Aggregated saliency (LFF):
Supplementary Figure 28: Aggregated saliency [Langner et al., 2019b] for Liver
Fat Fraction (LFF) for 3,091 subjects, generated by a single mean-variance
network. Each row shows the water, fat, and fat-fraction channels side by
side, with the top row showing an overlay on the image data and the bottom row
the saliency only.
### 1.6 Inference
The following histograms of Supplementary Figs. 29, 30, and 31 show the
reference values in comparison to those measurements predicted for inference
on the original imaging visit on dataset $D_{infer}$ and the later repeat
imaging visit $D_{revisit}$. All shown data passed the visual quality
controls, but no further attempt was made to exclude outliers based on the
predicted uncertainty for these plots.
Supplementary Figure 29: Reference and predicted Visceral Adipose Tissue (VAT)
(left column) and Subcutaneous Adipose Tissue (SAT) (right column).
Supplementary Figure 30: Reference and predicted Total Adipose Tissue (TAT)
(left column) and Total Lean Tissue (TLT) (right column).
Supplementary Figure 31: Reference and predicted Total Thigh Muscle (TTM)
(left column) and Liver Fat Fraction (LFF) (right column).
# Influence of water models on water movement through AQP1
Miguel A. Gonzalez<EMAIL_ADDRESS>Universidad Rey Juan Carlos,
ESCET, 28933, Mostoles-Madrid, Spain Alberto Zaragoza Department of
Chemistry, The University of Utah, Salt Lake City, Utah 84112-0850, USA
Charlotte I. Lynch University of Oxford, Department of Biochemistry, South
Parks Road, OX1 3QU, Oxford, UK Mark S.P. Sansom University of Oxford,
Department of Biochemistry, South Parks Road, OX1 3QU, Oxford, UK Chantal
Valeriani Universidad Complutense de Madrid, Facultad de Ciencias Físicas,
Departamento de Estructura de la Materia, Física Térmica y Electrónica, 28040,
Madrid, Spain
###### Abstract
Water diffusion through membrane proteins is a key aspect of cellular
function. Essential processes of cellular metabolism are driven by osmotic
pressure, which depends on water channels. Membrane proteins such as
aquaporins (AQPs) are responsible for enabling water permeation through the
cell membrane. AQPs are highly selective, allowing only water and relatively
small polar molecules to cross the membrane. Experimentally, estimation of
water flux through membrane proteins is still a challenge, and hence accurate
simulations of water permeation are of particular importance. We present a
numerical study of water diffusion through AQP1 comparing three water models:
TIP3P, OPC and TIP4P/2005. Bulk diffusion, diffusion permeability and osmotic
permeability are computed and compared among all models. The results show that
there are significant differences between TIP3P (a particularly widespread
model for simulations of biological systems), and the more recently developed
TIP4P/2005 and OPC models. We demonstrate that OPC and TIP4P/2005 reproduce
protein-water interactions and dynamics in very good agreement with
experimental data. From this study, we find that the choice of the water model
has a significant effect on the computed water dynamics as well as its
molecular behaviour within a biological nanopore.
## I Introduction
Water is the most abundant molecule in cells. Even though water is able to
diffuse slowly through a lipid bilayer, for essential physiological processes
a high water flux is required. This can be achieved by making use of membrane
proteins such as aquaporins (AQPs), which allow for highly selective
water permeation across biological membranes [1, 2]. AQPs are located in
several cells found in the brain, kidneys, skin, blood vessels, liver and
connective tissue. [3, 4] The lack of functionality of such cells might induce
several diseases, such as glaucoma, brain edema or congestive heart failure.
[3, 5, 6] Although evidence of the existence of water channels had been
reported several years earlier, the aquaporin structure was not described
until 1992 [7]. All members of the AQP family are small ($\sim$ 30 kDa)
intrinsic membrane proteins with a strongly hydrophobic character, and consist
of four monomers. Each monomer is a functional water channel with an
hourglass shape [8], an amphiphilic interior, and a selectivity filter. The
primary sequence of those proteins shows six trans-membrane helices (I-VI)
linked by five loops. The water flux inside the channel depends on an
asparagine-proline-alanine (NPA) motif, found in two loops located
approximately in the middle of the membrane protein channel. The interaction
between these motifs not only confers a particular shape on the channel but is
also responsible for the orientation of water molecules inside the channel.
In combination with the NPA motif, water permeation is also modulated by the
aromatic/arginine selectivity filter which is situated next to the
extracellular exit/entrance of the channel. [9, 10] The topology of the pore
is extremely important as it enables selected small polar molecules to cross
the membrane (e.g. H2O, NH3, glycerol) whilst preventing the passage of others
(e.g. ions and larger molecules). Based on their permeability, AQPs can be
classified into two subfamilies: classical aquaporins (permeable to water) and
aquaglyceroporins (permeable to both water and glycerol).
AQP1 is a classical aquaporin with a high water permeability as described in
detail in Ref. [11] and confirmed in several numerical studies [12, 9, 10, 13,
14, 15]. There are three different approaches presented in the literature to
study water through aquaporins: 1) a continuum hydrodynamic approach [16] that
performs surprisingly well for a nanopore of molecular dimensions, capturing
key steric effects; 2) a potential of mean force approach, to study the
energetics of water transport [17, 14]; 3) “a continuous-time random-walk
model” approach of water transport across the channel [18].
As suggested by De Groot et al. in Ref. [19], approaches 2) and 3) are in
quantitative agreement. Thus, we will follow approach 3) throughout our study
(extracting equations from Ref. [20]). Numerous molecular dynamics studies
have been conducted on the water permeation mechanism through aquaporins [12,
21, 22, 23, 24, 25, 26, 27, 11, 28, 29]. As discussed by Ozu et al. [30], many
AQP simulations have been carried out using different force fields (GROMOS or
CHARMM) for the protein and the standard older TIP3P or SPC water models. De
Groot and Grubmüller [21] demonstrated that proton translocation across AQPs
was prevented by an electrostatic barrier. However Jensen et al. [27], using a
different force field to simulate AQP and water (TIP3P), concluded that the
proton induced an interruption of the hydrogen bond chain network built into
the channel, preventing the proton from crossing the pore. Thus, it is
reasonable to suggest that models of water permeation might be affected by the
force field used to simulate water molecules.
When dealing with pure water, both Vega et al. [31] and Onufriev et al. [32]
carried out a detailed comparison between experimental and numerical values of
thermodynamic and dynamic properties, simulating water by means of several
interaction potentials. Even though they clearly demonstrated that TIP3P was
not one of the best performing potentials for pure water, TIP3P remains a
widely used water model in biomolecular simulations, in combination with
CHARMM (the most widely used protein force field). With this in mind, we
propose to study the molecular mechanism of water permeation of AQP1 (1H6I
code PDB) [12] simulated with CHARMM36m [33], in combination with the TIP3P,
OPC and TIP4P/2005 water models. Even though the latter two water models are
less frequently used in biomolecular simulations, they are known to give very
reliable thermodynamic and dynamic properties for bulk water. To our
knowledge, TIP4P/2005 has already been implemented with the AMBER ff03w [34]
force field as a default water model but has not been employed in simulations
of AQP, where the microscopic behaviour of water clearly plays a vital role.
It is important to note that we have chosen to focus our study on rigid non-
polarizable water models. There is a range of potentially more accurate
polarizable and flexible water models, however the use of these comes at
significant computational cost [35]. It is therefore beneficial to first
assess the consistency and accuracy of a range of rigid non-polarizable models
in order to establish whether the use of polarizable water models is needed
for the study of water behaviour in aquaporins. Similar comparisons of non-
polarizable water models have been conducted for the 5-HT3 receptor channel
[36] and Cucurbit[7]uril-guest systems [37]. In this article, we have studied
several properties of human AQP1 (water diffusion across the pore, diffusion
permeability and osmotic permeability) taking into account the molecular
mechanism characteristic of AQP. In this way we have chosen to simulate a
range of representative (non-polarizable) models of water, comparing two more
recent water potentials to what is still the “standard model”. This allows us
to capture the likely range of behaviours for this class of model.
## II Methods
### II.1 Simulation details
We have carried out molecular dynamics simulations for AQP1 embedded in a
palmitoyloleoylphosphatidylcholine (POPC) bilayer membrane using the GROMACS
2016.4 software package [38]. Human AQP1 (1H6I code PDB) [12] is simulated via
the CHARMM36m force field [33], whilst the water molecules are simulated using
TIP3P [39], OPC [40], and TIP4P/2005 [41] potentials. In order to maintain the
protein fold, the simulations were carried out with backbone restraints.
Figure 1 a) shows an image of the system under study, representing the
different regions where water molecules can be found: bulk water molecules (in
brown), interfacial water molecules (in red), lipids (in blue) and AQP1 (in
purple). Panel b) represents a rendered image of the AQP1 channel, showing
water molecules (in red/white) passing through, while changing their
orientation when crossing the center.
a) b)
Figure 1: a) Rendered image of the simulated system (from top/bottom towards
the middle): bulk water molecules (in brown), interfacial water molecules (in
red), lipids (in blue) and AQP1 (in purple). b) Rendered image of the AQP1
channel, showing water molecules passing through, while changing their dipolar
moment when crossing the center. The water molecules are plotted in red/white.
Considering the three models of water, we have set up three initial
configurations with the AQP1 tetramer embedded in a lipid bilayer of 209 POPC
molecules and solvated with approx. 20000 water molecules. In order to prepare
a neutral configuration, we have added 4 Cl- ions. Each AQP1-water system was
simulated for 100 ns, and simulations were repeated four times with different
random initial velocities. We set a time step of 1 fs keeping the temperature
constant at 298 K using a velocity rescale thermostat [42] with a relaxation
time set to 0.67 ps. The Parrinello-Rahman barostat [43] was applied to
maintain the pressure constant at 1 bar using a 2.57 ps relaxation time. When
dealing with water, the Lennard-Jones (LJ) potential was truncated at 1.2 nm,
adding standard long-range corrections to the LJ energy. Ewald sums [44] were
considered to compute long-range electrostatic forces with a real space cut-
off at 1.2 nm. In order to compare the behaviour of the three water models and
their interactions with AQP1, we have selected the most relevant properties
that allow us to establish the influence of water potentials on water dynamics
through protein channels.
### II.2 Diffusion, diffusion permeability, rate of water molecules and
average dipole moment across the channel
Water diffusion through a cell membrane is considered as a subdiffusive
process [45, 46, 47, 48, 49]. Also, Yamamoto et al.[50] demonstrated that the
water diffusion near a lipid membrane is lower than the bulk due to two
mechanisms. On the one hand, there is a divergent mean trapping time induced
by the lipid-water interaction which can be described by a continuous-time
random walk. On the other hand, the viscoelasticity of the lipid membrane
induces a long-time correlated noise described by fractional Brownian motion.
In order to compute water diffusion, we distinguished four regions depending
on the distance ($d$) between the lipid headgroup and the position of the
water molecule. As in Ref. 48, the criteria applied to create those regions
are the following: we identify 1) a bulk region whenever $d>$ 0.6 nm, 2) an
interface region whenever 0.3 nm $<d<$ 0.6 nm, 3) a contact region whenever
$d<$ 0.3 nm, and 4) an AQP region defined as the region inside the lipid
headgroups (corresponding to waters in the pore). The long-time diffusion
coefficient ($D$) is then computed using Einstein’s relation based on the
water mean square displacement:
$D=\frac{\left\langle\left|r(t_{0}+t)-r(t_{0})\right|^{2}\right\rangle}{2\cdot
dim\cdot t}$ (1)
where $r(t)$ is the position of the water molecule centre of mass, $t$ is the
time, and $dim$ the dimension that depends on the geometry of the system that
the molecules are moving through, with $dim=1$ for 1D geometries such as
carbon nanotubes (CNTs) or AQP1, and $dim=3$ for bulk [51].
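As a minimal sketch (the function name and the least-squares fit are our own illustration, not the authors' analysis code), the long-time $D$ of Eq. 1 can be extracted from an MSD curve by fitting its slope:

```python
import numpy as np

def diffusion_coefficient(times, msd, dim=3):
    """Estimate the long-time diffusion coefficient from a mean square
    displacement curve via Einstein's relation, MSD ~ 2 * dim * D * t.
    A linear least-squares fit of MSD against t gives slope = 2 * dim * D."""
    slope, _ = np.polyfit(times, msd, 1)
    return slope / (2 * dim)
```

For a 1D geometry such as the AQP1 pore, `dim=1` would be passed instead, matching Eq. 1.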
Another approach to calculate water diffusion inside the protein channel (AQP
region) is based on a single-water molecule column [18]. In this case, water
diffusion ($D_{z}$) is computed according to the relation [52]:
$D_{z}=\frac{k_{0}z^{2}}{2}=\frac{z^{2}p_{f}}{2\nu_{w}}$ (2)
where $k_{0}$, $z$, $p_{f}$ and $\nu_{w}$ are the rate of water molecules
across the channel, the average distance between two water molecules inside
the channel, the osmotic permeability (see below for more details) and the
volume of a single molecule, respectively.
Water permeability can be divided into osmotic permeability ($p_{f}$) and
diffusion permeability ($p_{d}$) depending on the system’s conditions. Osmotic
pressure drives the movement of water molecules through the membrane protein
when there is a solute gradient across the membrane. Hence in these conditions
water permeability would be given by the osmotic permeability of the system.
As employed in previous studies, [53, 19, 20] osmotic permeability is defined
as the proportionality constant between the net water flux ($j_{w}$) across
the protein channel and the solute concentration difference ($\Delta C_{s}$),
$j_{w}=p_{f}\Delta C_{s}.$ (3)
Taking into account the rate of water molecules across the membrane protein,
$k_{0}$, the osmotic permeability is also defined as
$p_{f}=\nu_{w}k_{0}.$ (4)
At equilibrium, in the absence of a solute gradient, water molecules still
move through the channel, which can be attributed to the thermal diffusion of
water molecules. Thus, the diffusion permeability, $p_{d}$,
allows us to study the water flux across the protein under these conditions:
$j_{tr}=p_{d}\Delta C_{tr}$ (5)
where $\Delta C_{tr}$ corresponds to the difference in the tracer’s
concentration between the two sides of the membrane and $j_{tr}$ is the net
tracer’s flux (where a tracer is defined as a labelled water molecule that can
be distinguished from similar molecules, i.e. a molecule of heavy water is a
tracer for water). At equilibrium, the rate of water molecules crossing the
channel is computed by $q_{0}$, which is related to $p_{d}$ via
$p_{d}=(V_{w}/N_{a})q_{0}=\nu_{w}q_{0}.$ (6)
Note that the centre used to decide whether a water molecule crossed the
channel is the average $z$ position of the nitrogen atoms of the Asn residues
inside the channel. A molecule is considered to have crossed the AQP when its
$z$ coordinate is 1.5 nm away from this point. This centre is updated at every
time step during the analysis.
Molecular simulations are an essential tool for computing $p_{d}$ as they
allow us to label individual water molecules and therefore distinguish them
from one another. Both properties, $p_{d}$ and $p_{f}$, are proportional to
the number of water molecules, $N$, taking part in the translocation following
the continuous-time random-walk process [18] based on Ref. [54]. For AQP1, Zhu
et al. [20] presented a value of $p_{f}/p_{d}=N+1=11.9$. This is in good
agreement with the experimental value of $p_{f}/p_{d}=13.2$ [55]. However, these
values are higher than the average number of molecules inside the protein
channel, since there are molecules of water outside the channel that
participate in the permeation across the membrane.
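Eqs. 4 and 6 and the ratio $p_{f}/p_{d}=N+1$ can be combined into a small sketch; the function name and the example value of $\nu_{w}$ are illustrative assumptions:

```python
def permeabilities(k0, q0, v_w):
    """Osmotic and diffusion permeability from Eqs. 4 and 6:
    p_f = v_w * k0  (k0: hopping rate of the single-file water column),
    p_d = v_w * q0  (q0: rate of complete channel crossings).
    Their ratio p_f / p_d = k0 / q0 equals N + 1, with N the effective
    number of water molecules taking part in the translocation."""
    p_f = v_w * k0
    p_d = v_w * q0
    return p_f, p_d, p_f / p_d

# Illustrative numbers only: with k0/q0 = 11.9 (Ref. [20]) and an assumed
# single-molecule volume of 3.0e-23 cm^3, the ratio recovers N + 1 = 11.9.
p_f, p_d, ratio = permeabilities(k0=11.9, q0=1.0, v_w=3.0e-23)
```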
In order to compute the flux in Eq. 5 we need to compute $p_{d}$ and $\Delta
C_{tr}$. The latter is computed by simple counting whereas the former is
computed via Eq. 6. To compute $q_{0}$ in Eq. 6 we count the number of water
molecules that cross the membrane through the AQP1. Thus, we define the centre
of the protein and a typical distance from which a molecule is considered to
be outside the channel. The channel centre is identified in the following way:
two nitrogen atoms inside the channel are bound to water via hydrogen bonds
and their orientation is essential to describe the AQP1 permeability. Thus, we
compute the channel centre as the average of these nitrogen atom positions.
The channel length is set at 1.5 nm from the centre.
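The crossing-counting scheme above can be sketched as follows. This is a simplified illustration: the actual analysis recomputes the channel centre from the Asn nitrogen positions at every time step, whereas here a fixed centre is assumed:

```python
def count_crossings(z_traj, z_centre, half_length=1.5):
    """Count full channel crossings from a single water molecule's
    z-trajectory (nm). A crossing is recorded when the molecule leaves
    the channel region (|z - centre| > half_length) on the opposite
    side from which it last entered."""
    crossings = 0
    side = None  # -1: below the channel region, +1: above it
    for z in z_traj:
        d = z - z_centre
        if abs(d) > half_length:
            current = 1 if d > 0 else -1
            if side is not None and current != side:
                crossings += 1
            side = current
    return crossings
```

Summing these counts over all water molecules and dividing by the simulation time gives an estimate of the crossing rate $q_{0}$ used in Eq. 6.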
Water molecules in bulk are characterised by an average dipole moment equal to
zero. However, in nano-confined structures such as a CNT [51] or membrane
channels [11], there is a particular orientation of the molecules due to the
hydrogen bond network. Thus, the dipole moment is an important parameter to
study the molecular orientations inside the channel, both artificial and
natural. In our work, we have computed the water dipole moment $\mu$ according
to $\vec{\mu}=\sum_{i}q_{i}\vec{r_{i}}$, where $q_{i}$ and $r_{i}$ are the
charge and the position vector for molecule $i$, respectively. We focus on the
$z$ component of $\vec{\mu}$, since in our simulations the orientation of the
water molecule inside the channels is mainly projected onto this axis. Hence,
we calculate the average dipole moment in slabs of thickness 0.025
nm along the $z$ axis, to compare the molecular orientation inside the
channel with that of the surrounding water molecules.
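The slab-averaging of $\mu_{z}$ can be sketched as follows (a minimal illustration; function and argument names are hypothetical):

```python
import numpy as np

def slab_dipole_profile(z_positions, mu_z, z_min, z_max, dz=0.025):
    """Average the z-component of the water dipole moment in slabs of
    thickness dz (nm) along the channel axis. Returns the slab centres
    and the mean mu_z per slab (NaN where a slab contains no molecules)."""
    n_bins = int(round((z_max - z_min) / dz))
    edges = np.linspace(z_min, z_max, n_bins + 1)
    idx = np.digitize(z_positions, edges) - 1  # slab index per molecule
    mu_z = np.asarray(mu_z, dtype=float)
    profile = np.full(n_bins, np.nan)
    for i in range(n_bins):
        mask = idx == i
        if mask.any():
            profile[i] = mu_z[mask].mean()
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, profile
```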
## III Results And Discussion
### III.1 Water molecule orientations inside AQP1
As suggested in Ref. [11], water molecules arrange inside the channel forming a
single-molecule chain with a specific orientation. In our work we demonstrate
that this mechanism is reproduced by the three water models.
As shown in Fig. 2, the water molecule orientation can be illustrated by
computing the average of the $z$ component of the dipole moment inside the
channel.
Figure 2: $z$ component of the average dipole moment $\mu_{z}$ for the three
water models. The red line is for TIP3P, the blue line for TIP4P/2005, and the
black line for OPC. Error bars have been added taking into account the
standard deviation of $\mu_{z}$. The vertical dashed lines indicate the
approximate locations of the Asn residue of the NPA motifs. Note that the
centre of the bilayer is at z = 0 nm, and the pore runs from approximately z =
-1 to +0.8 nm.
Although there is a dynamic permeation across the channel, it is possible to
distinguish the positions of water molecules, described by the slightly ragged
shape of the water dipole moment inside the protein (see Fig. 2 ). These
results demonstrate that there are two stable orientations inside the channel
as described by Ref. [11].
There are no major differences between the dipole moment profiles for the
three water models, although the degree of water dipole orientation is less
for the OPC model in the second (i.e. $z>0$) half of the pore than for the
other two water models. This implies that, even though TIP3P is the only water
model fully “tuned” with CHARMM36, both OPC and TIP4P/2005 are capable of
capturing a qualitatively similar behaviour inside the AQP. Therefore, one
could deduce that the similarity between the dipole moments computed with the
different water models is due to water interacting in a similar way with AQP1.
Water interacts with AQP via hydrogen bonds that can be only formed when water
is close enough to the asparagines located at the middle of the pore. Our
results on the dipolar moment suggest that water interacts with asparagine in
a similar way independently of the model used to mimic its behaviour.
### III.2 Water molecule diffusion through the protein channel
As in Ref. 48, we identify interfacial, pore and contact regions, depending
the position of the water molecule with respect to the bilayer and the AQP1
pore. To compute the water diffusion constant ($D$) we use Eq. 1 with $dim=3$
for the bulk ($D_{bulk}$), interface ($D_{inter}$), and contact
($D_{contact}$) regions, and $dim=1$ for the AQP ($D_{AQP}$) region (AQP1
being almost cylindrical).
(a)
(b)
(c)
Figure 3: Mean square displacement (nm2) for TIP3P (left), TIP4P/2005
(centre), and OPC (right). The coloured lines denote the following regions:
bulk (red), interface (blue), lipid-water contact region (green), and AQP
(black).
Moreover, we compare the value of the diffusion computed in the bulk region at
$d>0.6$ nm ($D_{bulk}$) to the one computed for each model in bulk
($D_{model}$), since the presence of the lipid membrane reduces the value of
$D$ in nearby regions (even at distances higher than 2 nm [50]). All mean
square displacements (MSD) for each region are reported in Fig. 3. Two groups
of data can be distinguished for each model (although in the case of TIP3P
this is more evident). The bulk, interface and contact regions have similar
slopes, while the slope for the AQP region is lower than the rest. The
diffusion coefficient (D) values for all water models in bulk and in the
various regions are presented in Table 1 (as calculated from the mean square
displacement in Fig. 3).
Table 1: Diffusion coefficient values computed using Eq. 1 with $dim=3$ in the bulk, interface and contact regions, and with $dim=1$ in the AQP region, for the TIP3P, TIP4P/2005 and OPC water models. The values in parentheses are the standard deviations. All diffusion coefficients are expressed in 10-5 cm2/s.
Water model | Dbulk | Dinter | Dcontact | DPORE
---|---|---|---|---
TIP3P | 4.60 (0.01) | 3.74 (0.02) | 2.93 (0.02) | 0.10 (0.01)
TIP4P/2005 | 1.87 (0.04) | 1.24 (0.1) | 0.77 (0.04) | 0.05 (0.02)
OPC | 1.94 (0.04) | 1.26 (0.08) | 0.72 (0.09) | 0.02 (0.02)
Exp. | 2.30 [31] | | | 0.04
When dealing with bulk water, the literature values of $D_{literature}$ at
298 K and 1 bar are $5.5$x$10^{-5}$ cm2/s [56] for TIP3P, $2.06$x$10^{-5}$
cm2/s for TIP4P/2005 [41], and $2.3$x$10^{-5}$ cm2/s [40] for OPC. These
values are higher than the $D_{bulk}$ computed from the mean square
displacement in our simulations (see Fig. 3), reflecting the phenomenon of
subdiffusion of water close to the lipid membrane explained in Ref. [50]; in
addition, the discrepancy might be due to differences in the simulation
conditions, such as the value of the cut-off and/or the thermostat applied
during the simulation. Nevertheless, the values of water diffusion given in
this article are reliable, and it is clear that TIP4P/2005 and OPC water
models reproduce the experimental bulk water diffusion better than TIP3P. Concerning
the hydrogen bonds formed in bulk water, a recent paper [57] has shown that the
TIP3P water model yields a higher rotational relaxation time and a lower
network complexity parameter, which results in a higher diffusion coefficient
for TIP3P water in bulk. In contrast, TIP4P/2005 (a water model very similar
to OPC) yields a lower rotational relaxation time and a higher network
complexity parameter, which results in a lower diffusion coefficient for
TIP4P/2005 (and by extension for OPC) water in bulk. This effect
is enhanced when water is confined within the single-file AQP pore, as already
suggested in Ref. [51].
On the one hand, although TIP3P water can be considered subdiffusive when $D$
is compared to $D_{literature}$, TIP3P overestimates bulk water
diffusion in comparison to the experimental value of $2.3$x$10^{-5}$ cm2/s in
all regions. On the other hand, TIP4P/2005 and OPC show a consistent
subdiffusion of water molecules close to the membrane, as expected [50] both
with respect to their $D_{literature}$ and in comparison to the experimental
value. Focusing on the AQP region, i.e. inside the membrane protein, the
experimental data of water diffusion is $0.04$x$10^{-5}$ cm2/s [52]. TIP3P
gives $0.10$x$10^{-5}$ cm2/s for $D_{PORE}$, which is dramatically higher than
the experimental value and also higher than the other water models,
$0.05$x$10^{-5}$ cm2/s and $0.02$x$10^{-5}$ cm2/s for TIP4P/2005 and OPC,
respectively. When comparing the diffusion coefficient of water inside the
pore with respect to its bulk counterpart, we get a comparable ratio
independent of the model. The reason is that TIP3P water diffuses more than
the other water models not only in bulk but also when confined in a nanopore.
In addition, note that the deviation of the water diffusion inside the
channel from the experimental value depends on the chosen model. In the case
of TIP3P the diffusion is 2.5 times higher than the experimental one. OPC
also shows a significant discrepancy, although in this case the model
presents a diffusion that is 2 times lower than the experimental one. The
diffusion of the TIP4P/2005 model, however, is in excellent agreement with
the experimental value.
In order to visualise the values of $D$ across the four regions, a diffusion
profile has been plotted for each water model (see Fig. 4), considering only
the molecules that crossed the membrane through the protein channel, as these
are the molecules that allow us to compute the water diffusion through the
channel.
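Selecting the molecules that fully crossed the membrane amounts to detecting, for each molecule's $z(t)$ trajectory, exits from the membrane slab on the side opposite to the last entry. A minimal sketch (the slab boundaries and toy trajectory below are illustrative, not taken from this work):

```python
def count_crossings(z_traj, z_lower, z_upper):
    """Count complete membrane crossings in a single-molecule z(t) series.

    A crossing is registered when the molecule leaves the slab
    [z_lower, z_upper] on the side opposite to the one from which
    it last entered. Returns the number of full crossings.
    """
    crossings = 0
    entry_side = None            # side from which the molecule last arrived
    for z in z_traj:
        if z < z_lower:
            side = -1
        elif z > z_upper:
            side = +1
        else:
            continue             # inside the slab; keep the last known side
        if entry_side is not None and side != entry_side:
            crossings += 1       # exited on the opposite side: one crossing
        entry_side = side
    return crossings

# Toy trajectory: starts below, crosses upward, then comes back down
z = [-2.5, -1.5, 0.0, 1.8, 2.5, 1.0, -0.5, -2.2]
print(count_crossings(z, -2.0, 2.0))   # -> 2
```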
Figure 4: Diffusion profiles for TIP3P (left), TIP4P/2005 (centre), and OPC
(right) water models. The profiles have been split into four regions: bulk
(red crosses), interface (blue crosses), lipid-water contact (green crosses),
and AQP region (black crosses).
It can be observed that $D$ decreases from the bulk region to the AQP region,
starting from a value of $D$ lower than $D_{literature}$. Each region displays
a distinct diffusion profile, although there is a slight overlap between
regions. This is due to the criterion used to define the regions, i.e. the
average position in $z$ of the phosphorus atoms. As a consequence, it is
possible to find a water molecule having a lower value of $D$ than the one
expected in that region. As shown in Fig. 4, the AQP region is not symmetric
and there are higher values of $D_{PORE}$ at negative distances, $d$, from the
membrane centre than at positive ones. This is because the membrane protein is
not symmetric and the diffusion around the protein is asymmetric as a
consequence.
### III.3 Diffusion permeability and osmotic permeability
To compute the water permeability, both the diffusion (pd) and osmotic
permeability (pf), we have calculated the rate at which molecules are able to
cross the membrane ($q_{0}$ and $k_{0}$) and the diffusion of those molecules
through the AQP ($D_{z}$). The results have been collected in Table 2, where
we compare numerical and experimental data.
Table 2: Number of molecules per second ($q_{0}$, in $10^{8}$ s-1); diffusion permeability ($p_{d}$, in $10^{-14}$ cm3/s); osmotic permeability ($p_{f}$, in $10^{-14}$ cm3/s; [*] via $p_{f}/p_{d}=11.9$); number of molecules per second for an osmotic process ($k_{0}$, in $10^{9}$ s-1); water diffusion inside the AQP ($D_{z}$, in $10^{-5}$ cm2/s; [**] via Eq. 2); and $D_{PORE}$ (in $10^{-5}$ cm2/s).
Water model | q0 | pd | pf ∗ | k0 ∗∗ | Dz ∗∗ | DPORE
---|---|---|---|---|---|---
TIP3P | 4.0 (0.3) | 1.20 (0.09) | 14.2 (1.1) | 4.85 | 0.190 | 0.10 (0.01)
TIP4P/2005 | 0.65 (0.2) | 0.194 (0.07) | 2.31 (0.8) | 0.77 | 0.03 | 0.05 (0.02)
OPC | 0.5 (0.1) | 0.149 (0.04) | 1.78 (0.4) | 0.59 | 0.02 | 0.02 (0.02)
Exp. | | | 5.43 [58] | | | 0.04
Diffusion permeability ($p_{d}$) has been computed using Eq. 4, which is based
on the number of water molecules that crossed the membrane through the
membrane protein ($q_{0}$) and the volume of a single water molecule
($\nu_{w}=2.989$x$10^{-23}$ cm3). The rate $q_{0}$ is easily accessible from
molecular dynamics simulations. For TIP3P $q_{0}=4.0$x$10^{8}$ s-1, whereas
for TIP4P/2005 ($q_{0}=0.65$x$10^{8}$ s-1) and OPC ($q_{0}=0.5$x$10^{8}$
s-1) the number of molecules per second is lower by almost an order of
magnitude. We therefore find, on the one hand, an overestimation of the TIP3P
diffusion permeability and, on the other hand, a good performance of both
TIP4P/2005 and OPC (see Table 2).
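Eq. 4 is not reproduced in this excerpt; assuming it has the standard form $p_{d}=\nu_{w}q_{0}$ (cf. Ref. [20]), the tabulated diffusion permeabilities follow directly from the crossing rates, as the quick numerical check below shows:

```python
# Diffusion permeability from the single-molecule crossing rate,
# assuming Eq. 4 takes the standard form p_d = v_w * q_0.
v_w = 2.989e-23          # volume of one water molecule, cm^3

q0 = {"TIP3P": 4.0e8, "TIP4P/2005": 0.65e8, "OPC": 0.5e8}   # s^-1, Table 2
for model, q in q0.items():
    pd = v_w * q         # cm^3/s
    print(f"{model}: p_d = {pd:.3e} cm^3/s")
# TIP3P gives ~1.20e-14 cm^3/s, matching Table 2.
```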
The osmotic permeability $p_{f}$ has been computed from the diffusion
permeability using the ratio $p_{f}/p_{d}=N+1=11.9$ proposed by Zhu et al.
[20] for AQP1, and the values are reported in Table 2. The experimental
value, $p_{f}=5.43$x$10^{-14}$ cm3/s [58], is in very good agreement with
the TIP4P/2005 ($2.31$x$10^{-14}$ cm3/s) and OPC ($1.78$x$10^{-14}$ cm3/s)
values. The TIP3P value, $14.2$x$10^{-14}$ cm3/s, is dramatically higher
than the experimental value.
Having calculated the osmotic permeability, the rate of water crossing the
AQP in an osmotic process ($k_{0}$) can be obtained by applying Eq. 2 with
$z=0.28$ nm [52]. From the same Eq. 2, we calculated $D_{z}$ for the
different water models, obtaining $19.0$x$10^{-7}$ cm2/s, $3.03$x$10^{-7}$
cm2/s, and $2.33$x$10^{-7}$ cm2/s for TIP3P, TIP4P/2005 and OPC,
respectively. TIP4P/2005 and OPC compare well with the experimental value,
$4.0$x$10^{-7}$ cm2/s, whereas TIP3P yields a value markedly higher than the
experimental one.
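Eq. 2 is not reproduced in this excerpt; assuming it combines the standard single-file relations $p_{f}=\nu_{w}k_{0}$ and $D_{z}=k_{0}z^{2}/2$ (cf. Refs. [18, 20] — an assumption on our part), the Table 2 entries can be checked numerically:

```python
# Internal-consistency check of Table 2, assuming the single-file relations
# p_f = v_w * k_0  and  D_z = k_0 * z^2 / 2  (our reading of Eq. 2).
v_w = 2.989e-23      # cm^3 per water molecule
z = 0.28e-7          # cm (0.28 nm, Ref. [52])

pf = {"TIP3P": 14.2e-14, "TIP4P/2005": 2.31e-14, "OPC": 1.78e-14}  # cm^3/s
for model, p in pf.items():
    k0 = p / v_w                 # osmotic crossing rate, s^-1
    Dz = 0.5 * k0 * z**2         # cm^2/s, single-file hopping estimate
    print(f"{model}: k0 = {k0:.2e} s^-1, Dz = {Dz:.2e} cm^2/s")
# Recovers values close to the k0 and Dz entries of Table 2.
```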
It is worth noting that $D_{z}$ can be compared to $D_{PORE}$ using Eq. 1
with $dim=1$. This allows us to test the consistency of our diffusion
results, since $D_{PORE}$ is calculated via Eq. 1 while $D_{z}$ is computed
via an independent method (Eq. 2) based on the number of water molecules that
crossed the membrane through the AQP ($q_{0}$) and its ratio with respect to
the water permeability. As can be observed, the results for water diffusion
inside the AQP are in excellent agreement, independently of the method used
to compute them, either by applying Eq. 1 to the AQP region or by using Eq. 2.
In summary, our results provide compelling evidence that the TIP3P water model
shows a higher diffusion rate than the other water models studied. Inside the
channel, the TIP4P/2005 model produces results in excellent agreement with
experiment. Focusing on the diffusion through the AQP protein using two
independent methods, we demonstrate that there is an overestimation of the
diffusion, resulting in overestimated values of water permeability for TIP3P.
However, for TIP4P/2005 and OPC, which have bulk diffusion values close to the
experimental one, their calculated values of water permeability are comparable
to the experimental value. This implies that TIP4P/2005 and OPC are capable of
producing reliable estimates of water diffusion through the AQP1 channel.
## IV Conclusion
In this work, water diffusion and permeability have been studied using
molecular dynamics for three systems, consisting of AQP1 embedded in a POPC
lipid membrane and three water models, TIP3P, TIP4P/2005 and OPC. TIP3P is the
standard water model used for CHARMM36m, whereas TIP4P/2005 and OPC are water
models not frequently used in biomolecular simulations but known to better
reproduce bulk water properties [31, 40].
Recently a number of studies have addressed the relevance of water models in
e.g. protein aggregation and host-guest recognition systems [37, 59]. Our
main purpose was to compare the behaviour of different water models and to
shed light on AQP1 water permeability. This enabled us to obtain reliable results
for the molecular mechanism of water diffusion as well as both diffusion and
permeability values, near the membrane and through the AQP1. First of all, the
well-known molecular mechanism of water permeation through AQP1 has been
tested for the three models, giving similar and excellent results. All of them
show a specific orientation of water molecules inside the AQP1 as demonstrated
in Ref. [11]. Hence, with any of these water models the water-protein
interaction is reliable and AQP1 retains its highly selective permeation of
water across the membrane.
The clearest differences among the water models appear in the transport
properties. Diffusion inside the protein has been computed using two methods,
one based on the mean square displacement considering the geometry of the
channel and another one from the water permeability, giving consistent results
for the three models. TIP3P has a diffusion constant roughly 2.5 times the
experimental value, whereas TIP4P/2005 and OPC produce values remarkably
close to experiment. The overestimation of TIP3P is consistent throughout the
whole process of water permeation across the membrane. The most relevant region is inside
the membrane protein, i.e., the AQP region, where the TIP3P diffusion is
higher than the experiments. On the other hand, OPC presents a lower diffusion
rate. When comparing to experiments, we conclude that TIP4P/2005 is the model
that best resembles water diffusivity inside the channel. It is worth noting
that the ratios of TIP3P/TIP4P/2005 and TIP3P/OPC increase when moving from
bulk to the interfacial region to the AQP region. Therefore, the difference
between the models seems to be more pronounced within the nanopore
environment.
Focusing on water permeability, we have computed the number of water molecules
able to cross the membrane through the AQP1, $q_{0}$. By means of this
parameter we calculated the diffusion permeability, which allowed us to obtain
the value of osmotic permeability. As in the diffusion case, TIP3P gives a
diffusion permeability about 6 and 8 times higher than that of TIP4P/2005
and OPC, respectively.
The overestimation of TIP3P is also observed in the calculation of the
osmotic permeability (almost 3 times higher than the experimental
counterpart) and in the water diffusion inside the AQP1 (5 times higher than
the experimental counterpart). However, the calculated values obtained with
the OPC and TIP4P/2005 water models are in much better agreement with the
experiments.
The osmotic permeability is difficult to measure experimentally, and several
values of $p_{f}$ have been reported. In this work, however, reliable values
of water diffusion and water permeability are reported, which demonstrate the clear
differences between the behaviour of TIP3P and the results of TIP4P/2005 and
OPC. We believe that this observation can be relevant not only for the AQP
channel but also for numerical studies of transport properties of water and
ions within nanopores and ion channels.
Simulating polarizable models requires a considerable increase in
computational effort, and to date their application has generally been
limited to somewhat simpler membrane [60] and channel systems [36, 61]. We
have therefore decided
to first understand the basic features of water transport through AQP1, using
only non-polarizable models. However, in the future it will be important to
consider systematically polarizable models for water, proteins and lipids [62,
61].
This study has focused on the dynamical properties of the water model in AQP
simulations. However, it should be noted that the protein and lipid parameters
have been optimized for use with e.g. TIP3P. The question therefore remains
whether in the future one should re-optimize these parameters for improved
water models or switch to polarizable models. The latter are being extended to
lipids as well as to water and proteins and are now feasible, if
computationally expensive, for membrane channel protein systems.[63] An
interim compromise may be the use of charge scaling (via the electronic
continuum correction approach) to mimic electronic polarization, which has
been used with some success for biological membranes.[64]
## Supplementary material
See the supplementary material for:
* •
Dipole moment for TIP4P/2005. (Figure S1)
## DATA AVAILABILITY
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## Acknowledgements
The work has been performed under the Project HPC-EUROPA3
(INFRAIA-2016-1-730897), with the support of the EC Research Innovation Action
under the H2020 Programme; in particular, the author gratefully acknowledges
the support of M.S.P. Sansom and the computer resources and technical support
provided by ARCHER. M.A.G. acknowledges support from Ayuda Juan de la
Cierva-Incorporación (IJCI-2016-27497) of the Ministerio de Ciencia,
Innovación y Universidades (Spain). M.S.P. Sansom acknowledges the following
funding: BBSRC BB/N000145/1, EPSRC EP/R004722/1 and EP/V010948/1. C.
Valeriani acknowledges funding from MINECO PID2019-105343GB-I00.
## References
* Beitz [2009] E. Beitz, ed., _Aquaporins_, Handbook of Experimental Pharmacology, Vol. 190 (Springer Berlin Heidelberg, Berlin, Heidelberg, 2009).
* Yang [2017] B. Yang, ed., _Aquaporins_, Advances in Experimental Medicine and Biology, Vol. 969 (Springer Netherlands, Dordrecht, 2017).
* Castle [2005] N. A. Castle, “Aquaporins as targets for drug discovery,” Drug Discov. Today 10, 485–493 (2005).
* Day _et al._ [2014] R. E. Day, P. Kitchen, D. S. Owen, C. Bland, L. Marshall, A. C. Conner, R. M. Bill, and M. T. Conner, “Human aquaporins: Regulators of transcellular water flow,” Biochim. Biophys. Acta - Gen. Subj. 1840, 1492–1506 (2014).
* Kruse, Uehlein, and Kaldenhoff [2006] E. Kruse, N. Uehlein, and R. Kaldenhoff, “The aquaporins.” Genome Biol. 7, 206 (2006).
* Agre and Kozono [2003] P. Agre and D. Kozono, “Aquaporin water channels: Molecular mechanisms for human diseases,” FEBS Lett. 555, 72–78 (2003).
* Preston _et al._ [1992] G. M. Preston, T. P. Carroll, W. B. Guggino, and P. Agre, “Appearance of water channels in Xenopus oocytes expressing red cell CHIP28 protein.” Science 256, 385–7 (1992).
* Walz _et al._ [1997] T. Walz, T. Hirai, K. Murata, J. B. Heymann, K. Mitsuoka, Y. Fujiyoshi, B. Smith, P. Agre, and A. Engel, “The Three-Dimensional Structure of Aquaporin-1,” Nature 387, 624–627 (1997).
* Chen, Wu, and Voth [2006] H. Chen, Y. Wu, and G. A. Voth, “Origins of Proton Transport Behavior from Selectivity Domain Mutations of the Aquaporin-1 Channel,” Biophys. J. 90, L73–L75 (2006).
* Kosinska Eriksson _et al._ [2013] U. Kosinska Eriksson, G. Fischer, R. Friemann, G. Enkavi, E. Tajkhorshid, and R. Neutze, “Subangstrom Resolution X-Ray Structure Details Aquaporin-Water Interactions,” Science 340, 1346–1349 (2013).
* Tajkhorshid _et al._ [2002] E. Tajkhorshid, P. Nollert, M. Ø. Jensen, L. J. W. Miercke, J. O’Connell, R. M. Stroud, and K. Schulten, “Control of the Selectivity of the Aquaporin Water Channel Family by Global Orientational Tuning,” Science 296, 525–530 (2002).
* de Groot, Engel, and Grubmüller [2001] B. L. de Groot, A. Engel, and H. Grubmüller, “A refined structure of human aquaporin-1,” FEBS Lett. 504, 206–211 (2001).
* Law and Sansom [2004] R. J. Law and M. S. P. Sansom, “Homology modelling and molecular dynamics simulations: comparative studies of human aquaporin-1,” Eur. Biophys. J. 33, 477–489 (2004).
* Ariz-Extreme and Hub [2017] I. Ariz-Extreme and J. S. Hub, “Potential of Mean Force Calculations of Solute Permeation Across UT-B and AQP1: A Comparison between Molecular Dynamics and 3D-RISM,” J. Phys. Chem. B 121, 1506–1519 (2017).
* Decker _et al._ [2017] K. Decker, M. Page, A. Boyd, I. MacAllister, M. Ginsberg, and A. Aksimentiev, “Selective Permeability of Truncated Aquaporin 1 in Silico,” ACS Biomater. Sci. Eng. 3, 342–348 (2017).
* Gravelle _et al._ [2013] S. Gravelle, L. Joly, F. Detcheverry, C. Ybert, C. Cottin-Bizonne, and L. Bocquet, “Optimizing water permeability through the hourglass shape of aquaporins,” Proc. Natl. Acad. Sci. 110, 16367–16372 (2013).
* Aponte-Santamaría _et al._ [2017] C. Aponte-Santamaría, G. Fischer, P. Båth, R. Neutze, and B. L. de Groot, “Temperature dependence of protein-water interactions in a gated yeast aquaporin,” Scientific Reports 7, 4016 (2017).
* Berezhkovskii and Hummer [2002] A. Berezhkovskii and G. Hummer, “Single-file transport of water molecules through a carbon nanotube,” Phys. Rev. Lett. 89, 064503/1–064503/4 (2002).
* de Groot and Grubmulle [2005] B. L. de Groot and H. Grubmulle, “The dynamics and energetics of water permeation and protonexclusion in aquaporins,” Curr. Opin. Struct. Biol. 15, 176–183 (2005).
* Zhu, Tajkhorshid, and Schulten [2004] F. Q. Zhu, E. Tajkhorshid, and K. Schulten, “Theory and simulation of water permeation in aquaporin-1,” Biophys. J. 86, 50–57 (2004).
* de Groot _et al._ [2001] B. L. de Groot, H. Grubmüller, W. B. Guggino, P. Agre, J. O’Connell, R. M. Stroud, K. Schulten, and H. Grubmuller, “Water permeation across biological membranes: Mechanism and dynamics of aquaporin-1 and GlpF,” Science 294, 2353–2357 (2001).
* de Groot _et al._ [2003] B. L. de Groot, T. Frigato, V. Helms, and H. Grubmuller, “The mechanism of proton exclusion in the aquaporin-1 water channel,” J. Mol. Biol. 333, 279–293 (2003).
* Hashido, Ikeguchi, and Kidera [2005] M. Hashido, M. Ikeguchi, and A. Kidera, “Comparative simulations of aquaporin family: AQP1, AQPZ, AQP0 and GlpF,” FEBS Lett. 579, 5549–5552 (2005).
* Hub and de Groot [2006] J. S. Hub and B. L. de Groot, “Does CO2 Permeate through Aquaporin-1?” Biophys. J. 91, 842–848 (2006).
* Hub _et al._ [2010] J. S. Hub, C. Aponte-Santamaría, H. Grubmüller, and B. L. de Groot, “Voltage-Regulated Water Flux through Aquaporin Channels In Silico,” Biophys. J. 99, L97–L99 (2010), arXiv:0608246v3 [arXiv:physics] .
* Mamonov _et al._ [2007] A. B. Mamonov, R. D. Coalson, M. L. Zeidel, and J. C. Mathai, “Water and deuterium oxide permeability through aquaporin 1: MD predictions and experimental verification,” J. Gen. Physiol. 130, 111–116 (2007).
* Jensen, Tajkhorshid, and Schulten [2003] M. Ø. Jensen, E. Tajkhorshid, and K. Schulten, “Electrostatic Tuning of Permeation and Selectivity in Aquaporin Water Channels,” Biophys. J. 85, 2884–2899 (2003).
* Zhu, Tajkhorshid, and Schulten [2002] F. Q. Zhu, E. Tajkhorshid, and K. Schulten, “Pressure-induced water transport in membrane channels studied by molecular dynamics,” Biophys. J. 83, 154–160 (2002).
* Lee _et al._ [2016] S. C. Lee, S. Khalid, N. L. Pollock, T. J. Knowles, K. Edler, A. J. Rothnie, O. R. T. Thomas, and T. R. Dafforn, “Encapsulated membrane proteins: A simplified system for molecular simulation,” Biochim. Biophys. Acta-Biomembranes 1858, 2549–2557 (2016).
* Ozu _et al._ [2013] M. Ozu, H. A. Alvarez, A. N. McCarthy, J. R. Grigera, and O. Chara, “Molecular dynamics of water in the neighborhood of aquaporins,” Eur. Biophys. J. 42, 223–239 (2013).
* Vega and Abascal [2011] C. Vega and J. L. F. Abascal, “Simulating water with rigid non-polarizable models: a general perspective,” Physical Chemistry Chemical Physics 13, 19663–19688 (2011).
* Onufriev and Izadi [2018] A. V. Onufriev and S. Izadi, “Water models for biomolecular simulations,” Wiley Interdisciplinary Reviews: Computational Molecular Science 8, e1347 (2018).
* Huang _et al._ [2017] J. Huang, S. Rauscher, G. Nawrocki, T. Ran, M. Feig, B. L. de Groot, H. Grubmüller, and A. D. MacKerell, “CHARMM36m: an improved force field for folded and intrinsically disordered proteins,” Nat. Methods 14, 71–73 (2017).
* Best and Mittal [2010] R. B. Best and J. Mittal, “Protein Simulations with an Optimized Water Model: Cooperative Helix Formation and Temperature-Induced Unfolded State Collapse,” J. Phys. Chem. B 114, 14916–14923 (2010).
* Lynch, Rao, and Sansom [2020] C. I. Lynch, S. Rao, and M. S. P. Sansom, “Water in nanopores and biological channels: A molecular simulation perspective,” Chemical Reviews 120, 10298–10335 (2020).
* Klesse, Tucker, and Sansom [2020] G. Klesse, S. J. Tucker, and M. S. P. Sansom, “Electric field induced wetting of a hydrophobic gate in a model nanopore based on the 5-ht3 receptor channel,” ACS Nano 14, 10480–10491 (2020).
* Çınaroğlu and Biggin [2021] S. S. Çınaroğlu and P. C. Biggin, “Evaluating the performance of water models with host–guest force fields in binding enthalpy calculations for cucurbit[7]uril–guest systems,” J. Phys. Chem. B 125, 1558–1567 (2021).
* Abraham _et al._ [2015] M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, and E. Lindahl, “GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers,” SoftwareX 1-2, 19–25 (2015).
* Jorgensen [1981] W. L. Jorgensen, “Quantum and statistical mechanical studies of liquids. 10. Transferable intermolecular potential functions for water, alcohols, and ethers. Application to liquid water,” J. Am. Chem. Soc 103, 335–340 (1981).
* Izadi, Anandakrishnan, and Onufriev [2014] S. Izadi, R. Anandakrishnan, and A. V. Onufriev, “Building Water Models: A Different Approach,” J. Phys. Chem. Lett. 5, 3863–3871 (2014).
* Abascal and Vega [2005] J. L. F. Abascal and C. Vega, “A general purpose model for the condensed phases of water: TIP4P/2005,” J. Chem. Phys. 123, 234505 (2005).
* Bussi, Donadio, and Parrinello [2007] G. Bussi, D. Donadio, and M. Parrinello, “Canonical sampling through velocity rescaling,” J. Chem. Phys. 126, 014101 (2007).
* Parrinello and Rahman [1981] M. Parrinello and A. Rahman, “Polymorphic Transitions in Single Crystals: A New Molecular Dynamics Method,” J. Appl. Phys. 52, 7182–7190 (1981).
* Essmann _et al._ [1995] U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee, and L. G. Pedersen, “A smooth particle mesh Ewald method,” J. Chem. Phys. 103, 8577–8593 (1995).
* Róg, Murzyn, and Pasenkiewicz-Gierula [2002] T. Róg, K. Murzyn, and M. Pasenkiewicz-Gierula, “The dynamics of water at the phospholipid bilayer surface: a molecular dynamics simulation study,” Chem. Phys. Lett. 352, 323–327 (2002).
* Bhide and Berkowitz [2005] S. Y. Bhide and M. L. Berkowitz, “Structure and dynamics of water at the interface with phospholipid bilayers,” J. Chem. Phys. 123, 224702 (2005).
* Yamamoto _et al._ [2013] E. Yamamoto, T. Akimoto, Y. Hirano, M. Yasui, and K. Yasuoka, “Supporting Information to “ Power-law trappings of water molecules on the lipid membrane surface induce the water retardation ”,” Phys. Rev. E - Stat. Nonlinear, Soft Matter Phys. 87, 1–6 (2013).
* Yang, Calero, and Martí [2014] J. Yang, C. Calero, and J. Martí, “Diffusion and spectroscopy of water and lipids in fully hydrated dimyristoylphosphatidylcholine bilayer membranes,” J. Chem. Phys. 140, 104901 (2014).
* Toppozini _et al._ [2015] L. Toppozini, F. Roosen-Runge, R. I. Bewley, R. M. Dalgliesh, T. Perring, T. Seydel, H. R. Glyde, V. García Sakai, and M. C. Rheinstädter, “Anomalous and anisotropic nanoscale diffusion of hydration water molecules in fluid lipid membranes,” Soft Matter 11, 8354–8371 (2015).
* Yamamoto _et al._ [2015] E. Yamamoto, T. Akimoto, M. Yasui, and K. Yasuoka, “Origin of subdiffusion of water molecules on cell membrane surfaces,” Sci. Rep. 4, 4720 (2015).
* Zaragoza _et al._ [2019] A. Zaragoza, M. A. Gonzalez, L. Joly, I. López-Montero, M. A. Canales, A. L. Benavides, and C. Valeriani, “Molecular dynamics study of nanoconfined TIP4P/2005 water: how confinement and temperature affect diffusion and viscosity,” Phys. Chem. Chem. Phys. 21, 13653–13667 (2019).
* Horner _et al._ [2015] A. Horner, F. Zocher, J. Preiner, N. Ollinger, C. Siligan, S. A. Akimov, and P. Pohl, “The mobility of single-file water molecules is governed by the number of H-bonds they may form with channel-lining residues,” Sci. Adv. 1, E1400083 (2015).
* Shahbabaei and Kim [2020] M. Shahbabaei and D. Kim, “Exploring fast water permeation through aquaporin-mimicking membranes,” Phys. Chem. Chem. Phys. 22, 1333–1348 (2020).
* Finkelstein [1987] A. Finkelstein, _Water movement through lipid bilayers, pores, and plasma membranes: theory and reality_ , Distinguished lecture series of the Society of General Physiologists (Wiley & Sons, New York, 1987) p. 228.
* Mathai _et al._ [1996] J. C. Mathai, S. Mori, B. L. Smith, G. M. Preston, N. Mohandas, M. Collins, P. C. M. van Zijl, M. L. Zeidel, and P. Agre, “Functional Analysis of Aquaporin-1 Deficient Red Cells,” J. Biol. Chem. 271, 1309–1313 (1996).
* MacKerell _et al._ [1998] A. D. MacKerell, D. Bashford, M. Bellott, R. L. Dunbrack, J. D. Evanseck, M. J. Field, S. Fischer, J. Gao, H. Guo, S. Ha, D. Joseph-McCarthy, L. Kuchnir, K. Kuczera, F. T. Lau, C. Mattos, S. Michnick, T. Ngo, D. T. Nguyen, B. Prodhom, W. E. Reiher, B. Roux, M. Schlenkrich, J. C. Smith, R. Stote, J. Straub, M. Watanabe, J. Wiórkiewicz-Kuczera, D. Yin, and M. Karplus, “All-atom empirical potential for molecular modeling and dynamics studies of proteins,” J. Phys. Chem. B 102, 3586–3616 (1998).
* Martelli [2021] F. Martelli, “Topology and complexity of the hydrogen bond network in classical models of water,” J. Mol. Liq. , 115530 (2021).
* Walz _et al._ [1994] T. Walz, B. L. Smith, M. L. Zeidel, A. Engel, and P. Agre, “Biologically active two-dimensional crystals of aquaporin CHIP,” J. Biol. Chem. 269, 1583–1586 (1994).
* Emperador, Crehuet, and Guàrdia [2021] A. Emperador, R. Crehuet, and E. Guàrdia, “Effect of the water model in simulations of protein–protein recognition and association,” Polymers 13, 176 (2021).
* Chen _et al._ [2021] P. Chen, I. Vorobyov, B. Roux, and T. W. Allen, “Molecular dynamics simulations based on polarizable models show that ion permeation interconverts between different mechanisms as a function of membrane thickness,” J. Phys. Chem. B 125, 1020–1035 (2021).
* Ngo _et al._ [2021] V. Ngo, H. Li, A. D. MacKerell, T. W. Allen, B. Roux, and S. Noskov, “Polarization effects in water-mediated selective cation transport across a narrow transmembrane channel,” J. Chem. Theory Comput. 17, 1726–1741 (2021), pMID: 33539082.
* Ren and Ponder [2003] P. Ren and J. W. Ponder, “Polarizable atomic multipole water model for molecular mechanics simulation,” J. Phys. Chem. B 107, 5933–5947 (2003).
* Jing _et al._ [2021] Z. Jing, J. A. Rackers, L. R. Pratt, C. Liu, S. B. Rempe, and P. Ren, “Thermodynamics of ion binding and occupancy in potassium channels,” Chem. Sci. 12, 8920–8930 (2021).
* Duboué-Dijon _et al._ [2020] E. Duboué-Dijon, M. Javanainen, P. Delcroix, P. Jungwirth, and H. Martinez-Seara, “A practical guide to biologically relevant molecular simulations with charge scaling for electronic polarization,” J. Chem. Phys. 153, 050901 (2020).
# On Data-Augmentation and Consistency-Based Semi-Supervised Learning
Atin Ghosh & Alexandre H. Thiery
Department of Statistics and Applied Probability
National University of Singapore
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods
such as the $\Pi$-model, temporal ensembling, the mean teacher, or the virtual
adversarial training, have advanced the state of the art in several SSL tasks.
These methods can typically reach performances that are comparable to their
fully supervised counterparts while using only a fraction of labelled
examples. Despite these methodological advances, the understanding of these
methods is still relatively limited. In this text, we analyse (variations of)
the $\Pi$-model in settings where analytically tractable results can be
obtained. We establish links with Manifold Tangent Classifiers and demonstrate
that the quality of the perturbations is key to obtaining reasonable SSL
performances. Importantly, we propose a simple extension of the Hidden
Manifold Model that naturally incorporates data-augmentation schemes and
offers a framework for understanding and experimenting with SSL methods.
## 1 Introduction
Consider a dataset $\mathcal{D}=\mathcal{D}_{L}\cup\mathcal{D}_{U}$ that is
comprised of labelled samples
$\mathcal{D}_{L}=\\{x_{i},y_{i}\\}_{i\in\mathbf{I}_{L}}$ as well as unlabelled
samples $\mathcal{D}_{U}=\\{x_{i}\\}_{i\in\mathbf{I}_{U}}$. Semi-Supervised
Learning (SSL) is concerned with the use of both the labelled and the
unlabelled data for training. In many scenarios, collecting labelled data is
difficult, time-consuming, or expensive, so that the amount of labelled data
can be relatively small compared to the amount of unlabelled data. The main
contained in the distribution of the unlabelled data (Zhu, 2005; Chapelle et
al., 2009).
In modern high-dimensional settings that are common to computer vision, signal
processing, Natural Language Processing (NLP) or genomics, standard
graph/distance based methods (Blum & Chawla, 2001; Zhu & Ghahramani, 2002; Zhu
et al., 2003; Belkin et al., 2006; Dunlop et al., 2019) that are successful in
low-dimensional scenarios are difficult to implement. Indeed, in high-
dimensional spaces, it is often difficult to design sensible notions of
distances that can be exploited within these methods. We refer the interested
reader to the book-length treatments (Zhu, 2005; Chapelle et al., 2009) for
discussion of other approaches.
The manifold assumption is the fundamental structural property that is
exploited in most modern approaches to SSL: high-dimensional data samples lie
in a small neighbourhood of a low-dimensional manifold (Turk & Pentland, 1991;
Basri & Jacobs, 2003; Peyré, 2009; Cayton, 2005; Rifai et al., 2011a). In
computer vision, the presence of this low-dimensional structure is
instrumental to the success of (variational) autoencoder and generative
adversarial networks: large datasets of images can often be parametrized by a
relatively small number of degrees of freedom. Exploiting the unlabelled data
to uncover this low-dimensional structure is crucial to the design of
efficient SSL methods. A recent and independent evaluation of several modern
methods for SSL can be found in (Oliver et al., 2018). It is found there that
consistency-based methods (Bachman et al., 2014; Sajjadi et al., 2016; Laine &
Aila, 2016; Tarvainen & Valpola, 2017; Miyato et al., 2018; Luo et al., 2018;
Grill et al., 2020), the topic of this paper, achieve state-of-the art
performances in many realistic scenarios.
#### Contributions:
consistency-based semi-supervised learning methods have recently been shown to
achieve state-of-the-art results. Despite these methodological advances, the
understanding of these methods is still relatively limited when compared to
the fully-supervised setting (Saxe et al., 2013; Advani & Saxe, 2017; Saxe et
al., 2018; Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017). In this
article, we do not propose a new SSL method. Instead, we analyse consistency-
based methods in settings where analytically tractable results can be
obtained, when the data-samples lie in the neighbourhood of well-defined and
tractable low-dimensional manifolds, and simple and controlled experiments can
be carried out. We establish links with Manifold Tangent Classifiers and
demonstrate that consistency-based SSL methods are in general more powerful
since they can better exploit the local geometry of the data-manifold if
efficient data-augmentation/perturbation schemes are used. Furthermore, in
section 4.1 we show that the popular Mean Teacher method and the conceptually
simpler $\Pi$-model approach share the same solutions in the regime where the
data-augmentations are small; this confirms the often-reported claim that the
data-augmentation schemes leveraged by recent SSL, as well as fully
unsupervised, algorithms are instrumental to their success. Finally, in
section 4.3 we propose an extension of the Hidden Manifold Model (Goldt et
al., 2019; Gerace et al., 2020). This generative model allows us to
investigate the properties of consistency-based SSL methods, taking into
account the data-augmentation process and the underlying low-dimensionality of
the data, in a simple and principled manner, and without relying on a specific
dataset. For gaining understanding of SSL, as well as self-supervised learning
methods, we believe it to be important to develop a framework that (i) can
take into account the geometry of the data (ii) allows the study of the
influence of the quality of the data-augmentation schemes (iii) does not rely
on any particular dataset. While the understanding of fully-supervised
methods has largely been driven by the analysis of simplified model
architectures (e.g. linear and two-layer models, or large-dimension
asymptotics such as the Neural Tangent Kernel), these analytical tools alone
are unlikely to be enough to explain the mechanisms responsible for the
success of SSL and self-supervised learning methods (Chen et al., 2020;
Grill et al., 2020), since they do not, and cannot easily be extended to,
account for the geometry of the data and data-augmentation schemes. Our
proposed framework offers a small step in that direction.
## 2 Consistency-Based Semi-Supervised Learning
For concreteness and clarity of exposition, we focus the discussion on
classification problems. The arguments described in the remainder of this
article can be adapted without any difficulty to other situations such as
regression or image segmentation. Assume that the samples
$x_{i}\in\mathcal{X}\subset\mathbb{R}^{D}$ can be represented as
$D$-dimensional vectors and that the labels belong to $C\geq 2$ possible
classes, $y_{i}\in\mathcal{Y}\equiv\\{1,\ldots,C\\}$. Consider a mapping
$\mathcal{F}_{\theta}:\mathbb{R}^{D}\to\mathbb{R}^{C}$ parametrized by
$\theta\in\Theta\subset\mathbb{R}^{|\Theta|}$. This can be a neural network,
although that is not necessary. For $x\in\mathcal{X}$, the quantity
$\mathcal{F}_{\theta}(x)$ can represent the probabilistic output of the
classifier or, for example, the pre-softmax activations. Empirical risk
minimization consists in minimizing the function
$\displaystyle\mathcal{L}_{L}(\theta)=\frac{1}{|\mathcal{D}_{L}|}\sum_{i\in\mathbf{I}_{L}}\,\ell{\left(\mathcal{F}_{\theta}(x_{i}),y_{i}\right)}$
for a loss function $\ell:\mathbb{R}^{C}\times\mathcal{Y}\mapsto\mathbb{R}$.
Maximum likelihood estimation corresponds to choosing the loss function as the
cross entropy. The optimal parameter $\theta\in\Theta$ is found by a variant
of stochastic gradient descent (Robbins & Monro, 1951) with estimated gradient
$\displaystyle\nabla_{\theta}\;{\left\\{\frac{1}{|\mathcal{B}_{L}|}\sum_{i\in\mathcal{B}_{L}}\,\ell{\left(\mathcal{F}_{\theta}(x_{i}),y_{i}\right)}\right\\}}$
for a mini-batch $\mathcal{B}_{L}$ of labelled samples. Consistency-based SSL
algorithms regularize the learning by enforcing that the learned function
$x\mapsto\mathcal{F}_{\theta}(x)$ respects local derivative and invariance
constraints. For simplicity, assume that the mapping
$x\mapsto\mathcal{F}_{\theta}(x)$ is deterministic, although the use of drop-
out (Srivastava et al., 2014) and other sources of stochasticity are popular
in practice. The $\Pi$-model (Laine & Aila, 2016; Sajjadi et al., 2016) makes
use of a stochastic mapping
$\mathscr{S}:\mathcal{X}\times\Omega\to\mathcal{X}$ that maps a sample
$x\in\mathcal{X}$ and a source of randomness
$\omega\in\Omega\subset\mathbb{R}^{d_{\Omega}}$ to another sample
$\mathscr{S}_{\omega}(x)\in\mathcal{X}$. The mapping $\mathscr{S}$ describes a
stochastic data augmentation process. In computer vision, popular data-
augmentation schemes include random translations, rotations, dilatations,
croppings, flippings, elastic deformations, color jittering, addition of
speckle noise, and many more domain-specific variants. In NLP, synonym
replacement, insertions and deletions, and back-translation are often used,
although such data-augmentation strategies are generally more difficult to
implement. In a purely supervised setting, data-augmentation can be used as a
regularizer. Instead of directly minimizing $\mathcal{L}_{L}$, one can
minimize instead
$\displaystyle\theta\mapsto\frac{1}{|\mathcal{D}_{L}|}\sum_{i\in\mathbf{I}_{L}}\,\mathbb{E}_{\omega}{\left[\ell{\left(\mathcal{F}_{\theta}[\mathscr{S}_{\omega}(x_{i})],y_{i}\right)}\right]}.$
In practice, data-augmentation regularization, although a simple strategy, is
often crucial to obtaining good generalization properties (Perez & Wang, 2017;
Cubuk et al., 2018; Lemley et al., 2017; Park et al., 2019). The idea of
regularizing by enforcing robustness to the injection of noise can be traced
back at least to (Bishop, 1995). In the $\Pi$-model, the data-augmentation
mapping $\mathscr{S}$ is used to define a consistency regularization term,
$\displaystyle\mathcal{R}(\theta)=\frac{1}{|\mathcal{D}|}\sum_{i\in\mathbf{I}_{L}\cup\mathbf{I}_{U}}\mathbb{E}_{\omega}{\left\\{\big{\|}\mathcal{F}_{\theta}[\mathscr{S}_{\omega}(x_{i})]-\mathcal{F}_{\theta_{\star}}(x_{i})\big{\|}^{2}\right\\}}.$
(1)
The notation $\theta_{\star}$ designates a copy of the parameter $\theta$,
i.e. $\theta_{\star}=\theta$, and emphasizes that when differentiating the
consistency regularization term $\theta\mapsto\mathcal{R}(\theta)$, one does
not differentiate through $\theta_{\star}$. In practice, a stochastic estimate
of $\nabla\mathcal{R}(\theta)$ is obtained as follows. For a mini-batch
$\mathcal{B}$ of samples $\\{x_{i}\\}_{i\in\mathcal{B}}$, the current value
$\theta_{\star}\in\Theta$ of the parameter and the current predictions
$f_{i}\equiv\mathcal{F}_{\theta_{\star}}(x_{i})$, the quantity
$\displaystyle\nabla{\left\\{\frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\big{\|}\mathcal{F}_{\theta}[\mathscr{S}_{\omega}(x_{i})]-f_{i}\big{\|}^{2}\right\\}}$
is an approximation of $\nabla\mathcal{R}(\theta)$. There are indeed many
variants (e.g. the use of different norms, different ways of injecting noise), but
the general idea is to force the learned function
$x\mapsto\mathcal{F}_{\theta}(x)$ to be locally invariant to the data-
augmentation scheme $\mathscr{S}$. Several extensions such as the Mean Teacher
(Tarvainen & Valpola, 2017) and the VAT (Miyato et al., 2018) schemes have
been recently proposed and have been shown to lead to good results in many SSL
tasks. The recently proposed, state-of-the-art BYOL approach (Grill et al.,
2020) relies on mechanisms that are very close to the consistency
regularization methods discussed in this text.
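As a toy sketch of the stop-gradient semantics of equation 1 (a scalar linear model with illustrative names, not the authors' implementation):

```python
import random

# Toy sketch of the Pi-model consistency term of equation 1 for a scalar
# linear model F_theta(x) = theta * x. theta_star is a frozen copy of theta:
# it contributes to the loss but is not differentiated through (stop-gradient).

def consistency_loss_and_grad(theta, xs, augment, n_draws=100, seed=0):
    rng = random.Random(seed)
    theta_star = theta            # frozen copy, treated as a constant below
    loss, grad, n = 0.0, 0.0, 0
    for x in xs:
        for _ in range(n_draws):
            sx = augment(x, rng)                 # S_omega(x)
            diff = theta * sx - theta_star * x   # F_theta(S(x)) - F_{theta*}(x)
            loss += diff ** 2
            grad += 2.0 * diff * sx              # gradient flows through theta only
            n += 1
    return loss / n, grad / n

def jitter(x, rng, eps=0.1):
    # additive-noise augmentation S_omega(x) = x + eps * omega
    return x + eps * rng.gauss(0.0, 1.0)
```

With the identity augmentation the term vanishes exactly, reflecting that the regularizer only penalizes sensitivity to the perturbation.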
If one recalls the manifold assumption, this approach is natural: since the
samples corresponding to different classes lie on separate manifolds, the
function $\mathcal{F}_{\theta}:\mathcal{X}\to\mathbb{R}^{C}$ should be
constant on each one of these manifolds. Since the correct value of
$\mathcal{F}_{\theta}$ is typically well approximated or known for labelled
samples $(x_{i},y_{i})\in\mathcal{D}_{L}$, the consistency regularization term
equation 1 helps propagate these known values across these manifolds. This
mechanism is indeed similar to standard SSL graph-based approaches such as
label propagation (Zhu & Ghahramani, 2002). Graph-based methods are difficult
to directly implement in computer vision, or NLP, when a meaningful notion of
distance is not available. This interpretation reveals that it is crucial to
include the labelled samples in the regularization term equation 1 in order to
help propagate the information contained in the labelled samples to the
unlabelled samples. Our numerical experiments suggest that, in the standard
setting when the number of labelled samples is much lower than the number of
unlabeled samples, i.e. $|\mathcal{D}_{L}|\ll|\mathcal{D}_{U}|$, the
formulation equation 1 of the consistency regularization leads to sub-optimal
results and convergence issues: the information contained in the labelled data
is swamped by the number of unlabelled samples. In all our experiments, we
have adopted instead the following regularization term
$\displaystyle\mathcal{R}(\theta)$
$\displaystyle=\frac{1}{|\mathcal{D}_{L}|}\sum_{i\in\mathbf{I}_{L}}\mathbb{E}_{\omega}{\left\\{\big{\|}\mathcal{F}_{\theta}[\mathscr{S}_{\omega}(x_{i})]-\mathcal{F}_{\theta_{\star}}(x_{i})\big{\|}^{2}\right\\}}$
(2)
$\displaystyle+\frac{1}{|\mathcal{D}_{U}|}\sum_{j\in\mathbf{I}_{U}}\mathbb{E}_{\omega}{\left\\{\big{\|}\mathcal{F}_{\theta}[\mathscr{S}_{\omega}(x_{j})]-\mathcal{F}_{\theta_{\star}}(x_{j})\big{\|}^{2}\right\\}}$
that balances the labelled and unlabelled data samples more efficiently.
Furthermore, it is clear that the quality and variety of the data-augmentation
scheme $\mathscr{S}:\mathcal{X}\times\Omega\to\mathcal{X}$ are pivotal to the
success of consistency-based SSL methods; we argue in this article that they
are the dominant factor contributing to the success of this class of methods.
Effort spent on building efficient local data-augmentation schemes will be
rewarded in terms of generalization performances. Designing good data-
augmentation schemes is an efficient manner of injecting expert/prior
knowledge into the learning process. It is done by leveraging the
understanding of the local geometry of the data manifold. As usual and not
surprisingly (Niyogi et al., 1998; Montavon et al., 2012), in data-scarce
settings, any type of domain-knowledge needs to be exploited and we argue that
consistency regularization approaches to SSL are instances of this general
principle.
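The separate averaging of equation 2, as opposed to the pooled average of equation 1, can be compared on a toy example (illustrative scalar functions, not the paper's code):

```python
# Sketch comparing the balanced regularizer of equation 2 with the pooled
# average of equation 1: with few labelled samples, the pooled average lets
# the unlabelled pool swamp the labelled contribution.

def balanced_consistency(f, f_star, augment, labelled, unlabelled):
    def term(samples):
        return sum((f(augment(x)) - f_star(x)) ** 2 for x in samples) / len(samples)
    return term(labelled) + term(unlabelled)      # equation 2 style

def pooled_consistency(f, f_star, augment, labelled, unlabelled):
    samples = labelled + unlabelled               # equation 1 style
    return sum((f(augment(x)) - f_star(x)) ** 2 for x in samples) / len(samples)
```

With one informative labelled point among 1000 uninformative unlabelled ones, the pooled term is roughly a thousand times smaller than the balanced one, so the labelled signal is swamped.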
## 3 Approximate Manifold Tangent Classifier
It has long been known (Simard et al., 1998) that exploiting the knowledge of
derivatives, or more generally enforcing local invariance properties, can
greatly enhance the performance of standard classifiers/regressors (Haasdonk &
Keysers, 2002; Chapelle & Schölkopf, 2002). In the context of deep-learning,
the Manifold Tangent Classifier (Rifai et al., 2011a) is yet another
illustration of this idea. Consider the data manifold
$\mathcal{M}\subset\mathcal{X}\subset\mathbb{R}^{D}$ and assume that the data
samples lie on a neighbourhood of it. For $x\in\mathcal{M}$, consider as well
the tangent plane $T_{x}$ to $\mathcal{M}$ at $x$. Assuming that the manifold
$\mathcal{M}$ is of dimension $1\leq d\leq D$, the tangent plane $T_{x}$ is
also of dimension $d$ with an orthonormal basis
$\mathbf{e}_{1}^{x},\ldots,\mathbf{e}_{d}^{x}\in\mathbb{R}^{D}$. This
informally means that, for suitably small coefficients
$\omega_{1},\ldots,\omega_{d}\in\mathbb{R}$, the transformed sample
$\overline{x}\in\mathcal{X}$ defined as
$\displaystyle\overline{x}\;=\;x+\sum_{j=1}^{d}\omega_{j}\,\mathbf{e}^{x}_{j}$
also lies on, or very close to, the data manifold $\mathcal{M}$. A possible
stochastic data-augmentation scheme can therefore be defined as
$\mathscr{S}_{\omega}(x)=x+V_{\omega}$ where
$V_{\omega}=\sum_{j=1}^{d}\omega_{j}\,\mathbf{e}^{x}_{j}$. If $\omega$ is a
multivariate $d$-dimensional centred Gaussian random vector with suitably
small covariance matrix, the perturbation vector $V_{\omega}$ is also centred
and normally distributed. To enforce that the function
$x\to\mathcal{F}_{\theta}(x)$ is locally approximately constant along the
manifold $\mathcal{M}$, one can thus penalize the derivatives of
$\mathcal{F}_{\theta}$ at $x$ in the directions $V_{\omega}$. Denoting by
$\mathbf{J}_{x}\in\mathbb{R}^{C,D}$ the Jacobian with respect to
$x\in\mathbb{R}^{D}$ of $\mathcal{F}_{\theta}$ at $x\in\mathcal{M}$, this can
be implemented by adding a penalization term of the type
$\mathbb{E}_{\omega}[\|\mathbf{J}_{x}\,V_{\omega}\|^{2}]=\mathbf{Tr}{\left(\Gamma\otimes\mathbf{J}^{T}_{x}\mathbf{J}_{x}\right)}$,
where $\Gamma\in\mathbb{R}^{D,D}$ is the covariance matrix of the random
vector $\omega\to V_{\omega}$. This type of regularization of the Jacobian
along the data-manifold is for example used in (Belkin et al., 2006). More
generally, if one assumes that for any $x,\omega\in\mathcal{X}\times\Omega$ we
have
$\mathscr{S}_{\varepsilon\,\omega}(x)=x+\varepsilon\,\mathbf{D}(x,\omega)+\mathcal{O}(\varepsilon^{2})$,
for some derivative mapping
$\mathbf{D}:\mathcal{X}\times\Omega\to\mathcal{X}$, it follows that
$\displaystyle\lim_{\varepsilon\to
0}\;\frac{1}{\varepsilon^{2}}\,\mathbb{E}_{\omega}{\left[\|\mathcal{F}_{\theta}[\mathscr{S}_{\varepsilon\,\omega}(x)]-\mathcal{F}_{\theta}(x)\|^{2}\right]}\;=\;\mathbb{E}_{\omega}{\left[\|\mathbf{J}_{x}\,\mathbf{D}(x,\omega)\|^{2}\right]}\;=\;\mathbf{Tr}{\left(\Gamma_{x,\mathscr{S}}\otimes\mathbf{J}_{x}^{T}\,\mathbf{J}_{x}\right)}$
where $\Gamma_{x,\mathscr{S}}$ is the covariance matrix of the
$\mathcal{X}$-valued random vector
$\omega\mapsto\mathbf{D}(x,\omega)\in\mathcal{X}$. This shows that
consistency-based methods can be understood as approximated Jacobian
regularization methods, as proposed in (Simard et al., 1998; Rifai et al.,
2011a).
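The trace formula above, read as $\mathbf{Tr}{\left(\Gamma\,\mathbf{J}_{x}^{T}\mathbf{J}_{x}\right)}$, can be checked numerically on toy matrices (stdlib-only sketch; the matrices $\mathbf{J}$ and $\mathbf{D}$ are hand-picked and purely illustrative):

```python
# For a linear map F(x) = J x and a linearized augmentation D, with
# omega ~ N(0, I), E ||J D omega||^2 equals ||J D||_F^2, which by the
# cyclic property of the trace is Tr(Gamma J^T J) with Gamma = D D^T.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def fro2(A):
    return sum(a * a for row in A for a in row)

J = [[1.0, 2.0], [0.0, -1.0], [3.0, 1.0]]   # C x D Jacobian (C = 3, D = 2)
D = [[0.5, 0.0], [1.0, 2.0]]                # derivative map (d = 2)

lhs = fro2(matmul(J, D))                          # E ||J D omega||^2
Gamma = matmul(D, transpose(D))                   # covariance of omega -> D omega
rhs = trace(matmul(Gamma, matmul(transpose(J), J)))
```

Both sides agree up to floating-point rounding.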
### 3.1 Limitations
In practice, even if many local dimension reduction techniques have been
proposed, it is still relatively difficult to obtain a good parametrization of
the data manifold. The Manifold Tangent Classifier (MTC) (Rifai et al., 2011a)
implements this idea by first extracting in an unsupervised manner a good
representation of the dataset $\mathcal{D}$ by using a Contractive-Auto-
Encoder (CAE) (Rifai et al., 2011b). This CAE can subsequently be leveraged to
obtain an approximate basis of each tangent plane $T_{x_{i}}$ for
$x_{i}\in\mathcal{D}$, which can then be used for penalizing the Jacobian of
the mapping $x\mapsto\mathcal{F}_{\theta}(x)$ in the direction of the tangent
plane to $\mathcal{M}$ at $x$.
Figure 1: Left: Jacobian (i.e. first-order) penalization methods are short-
sighted and do not fully exploit the data-manifold. Right: data-augmentation
respecting the geometry of the data-manifold.
The above discussion shows that the somewhat simplistic approach consisting in
adding an isotropic Gaussian noise to the data samples is unlikely to deliver
satisfying results. It is equivalent to penalizing the Frobenius norm
$\|\mathbf{J}_{x}\|_{\text{F}}^{2}$ of the Jacobian of the mapping
$x\mapsto\mathcal{F}_{\theta}(x)$; in a linear model, that is equivalent to
the standard ridge regularization. This mechanism does not take at all into
account the local-geometry of the data-manifold. Nevertheless, in medical
imaging applications where scans are often contaminated by speckle noise, this
class of approaches, which can be thought of as adding artificial speckle
noise, can help mitigate over-fitting (Devalla et al., 2018).
There are many situations where, because of data scarcity or the sheer
difficulty of unsupervised representation learning in general, domain-specific
data-augmentation schemes lead to much better regularization than Jacobian
penalization. Furthermore, as schematically illustrated in Figure 1, Jacobian
penalization techniques are not efficient at learning highly non-linear
manifolds that are common, for example, in computer vision. In
“pixel space”, a simple image translation is a highly non-linear
transformation, well approximated by a first-order expansion only for very
small translations. In other words, if $x\in\mathcal{X}$ represents an image
and $g(x,v)$ is its version translated by a vector $v$, the approximation
$g(x,v)\approx x+\nabla_{v}g(x)$, with
$\nabla_{v}g(x)\equiv\lim_{\varepsilon\to
0}\,(g(x,\varepsilon\,v)-g(x))/\varepsilon$, becomes poor as soon as the
translation vector $v$ is not extremely small.
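This growth of the linearization error can be seen on a toy 1-D "image" (a sampled sine profile; the profile and step sizes are purely illustrative):

```python
import math

# Toy illustration: a translation of a sampled 1-D "image" is well described
# by its first-order (derivative) expansion only for very small shifts.

def translate(f, v, n=64):
    return [f(i / n - v) for i in range(n)]           # exact translation g(x, v)

def first_order(f, v, n=64, h=1e-5):
    # g(x, v) ~ x + v * dg/dv at v = 0, i.e. f(t) - v * f'(t)
    return [f(i / n) - v * (f(i / n + h) - f(i / n - h)) / (2 * h)
            for i in range(n)]

def max_err(f, v):
    return max(abs(a - b) for a, b in zip(translate(f, v), first_order(f, v)))

profile = lambda t: math.sin(2 * math.pi * t)
```

For a shift of 1% of the image width the linearization is accurate; for a shift of 30% it breaks down completely.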
In computer vision, translations, rotations and dilatations are often used as
sole data-augmentation schemes: this leads to a poor local exploration of the
data-manifold since this type of transformation only generates a very
low-dimensional exploration manifold. More precisely, the exploration manifold
emanating from a sample $x_{0}\in\mathcal{X}$, i.e.
$\\{\mathscr{S}(x_{0},\omega)\;:\;\omega\in\Omega\\}$, is very low
dimensional: its dimension is much lower than the dimension $d$ of the data-
manifold $\mathcal{M}$. Enriching the set of data-augmentation degrees of
freedom with transformations such as elastic deformation or non-linear pixel
intensity shifts is crucial to obtaining a high-dimensional local exploration
manifold that can help propagate the information on the data-manifold
efficiently (Cubuk et al., 2019a; Park et al., 2019).
## 4 Asymptotic Properties
### 4.1 Fluid Limit
Consider the standard $\Pi$-model trained with a standard Stochastic Gradient
Descent (SGD). Denote by $\theta_{k}\in\Theta$ the current value of the
parameter and by $\eta>0$ the learning rate. We have
$\displaystyle\theta_{k+1}\;=\;\theta_{k}-\eta\,\nabla_{\theta}\bigg{\\{}\frac{1}{|\mathcal{B}_{L}|}\sum_{i\in\mathcal{B}_{L}}\ell{\left(\;\mathcal{F}_{\theta_{k}}(x_{i}),\,y_{i}\;\right)}$
$\displaystyle+\frac{\lambda}{|\mathcal{B}_{L}|}\sum_{j\in\mathcal{B}_{L}}\Big{\|}\mathcal{F}_{\theta_{k}}(\mathscr{S}_{\omega}[x_{j}])-f_{j}\Big{\|}^{2}$
(3)
$\displaystyle+\frac{\lambda}{|\mathcal{B}_{U}|}\sum_{u\in\mathcal{B}_{U}}\Big{\|}\mathcal{F}_{\theta_{k}}(\mathscr{S}_{\omega}[x_{u}])-f_{u}\Big{\|}^{2}\bigg{\\}}$
for a parameter $\lambda>0$ that controls the trade-off between supervised and
consistency losses, as well as subsets $\mathcal{B}_{L}$ and $\mathcal{B}_{U}$
of labelled and unlabelled data samples, and
$f_{j}\equiv\mathcal{F}_{\theta\star}(x_{j})$ for
$\theta_{\star}\equiv\theta_{k}$ as discussed in Section 2. The right-hand
side is an unbiased estimate of
$\eta\,\nabla_{\theta}\Big{[}\mathcal{L}_{L}({\theta_{k}})+\lambda\,\mathcal{R}({\theta_{k}})\Big{]}$
with variance of order $\mathcal{O}(\eta^{2})$, where the regularization term
$\mathcal{R}({\theta_{k}})$ is described in equation 2. It follows from
standard fluid limit approximations for Markov processes (Ethier & Kurtz,
2009, Section 4.8) that, under mild regularity and growth assumptions and as
$\eta\to 0$, the appropriately time-rescaled trajectory
$\\{\theta_{k}\\}_{k\geq 0}$ can be approximated by the trajectory of an
ordinary differential equation (ODE).
###### Proposition 4.1
Let $\mathbf{D}([0,T],\mathbb{R}^{|\Theta|})$ be the usual space of càdlàg
$\mathbb{R}^{|\Theta|}$-valued functions on a bounded time interval $[0,T]$
endowed with the standard Skorohod topology. Consider the update equation 3
with learning rate $\eta>0$ and define the continuous time process
$\overline{\theta}^{\eta}(t)=\theta_{[t/\eta]}$. The sequence of processes
$\overline{\theta}^{\eta}\in\mathbf{D}([0,T],\mathbb{R}^{|\Theta|})$ converges
weakly in $\mathbf{D}([0,T],\mathbb{R}^{|\Theta|})$ and as $\eta\to 0$ to the
solution of the ordinary differential equation
$\displaystyle\dot{\overline{\theta}}_{t}\;=\;-\nabla\Big{(}\mathcal{L}(\overline{\theta}_{t})+\lambda\,\mathcal{R}(\overline{\theta}_{t})\Big{)}.$
(4)
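The statement can be illustrated on a toy quadratic objective (a hypothetical setup chosen for tractability, not the paper's experiments):

```python
import math, random

# Toy illustration of Proposition 4.1: for L(theta) = theta^2 / 2 with a
# noisy gradient estimate, the time-rescaled SGD iterate theta_{[t/eta]}
# tracks the gradient-flow solution theta0 * exp(-t) as eta -> 0.

def sgd_at_time(theta0, eta, t, noise=0.1, seed=0):
    rng = random.Random(seed)
    theta = theta0
    for _ in range(int(t / eta)):
        grad = theta + noise * rng.gauss(0.0, 1.0)  # unbiased estimate of L'(theta)
        theta -= eta * grad
    return theta

theta0, t = 1.0, 2.0
ode_solution = theta0 * math.exp(-t)
err = lambda eta: abs(sgd_at_time(theta0, eta, t) - ode_solution)
```

As the learning rate shrinks, the discrepancy between the SGD iterate and the ODE solution goes to zero.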
The article (Tarvainen & Valpola, 2017) proposes the mean teacher model, an
averaging approach related to the standard Polyak-Ruppert averaging scheme
(Polyak, 1990; Polyak & Juditsky, 1992), which modifies the consistency
regularization term equation 2 by replacing the parameter $\theta_{\star}$ by
an exponential moving average (EMA). In practical terms, this simply means
that, instead of defining $f_{j}=\mathcal{F}_{\theta_{\star}}(x_{j})$, with
$\theta_{\star}=\theta_{k}$ in equation 3, one sets
$f_{j}=\mathcal{F}_{\theta_{\text{avg},k}}(x_{j})$ where the EMA process
$\\{\theta_{\text{avg},k}\\}_{k\geq 0}$ is defined through the recursion
$\theta_{\text{avg},k}=(1-\alpha\,\eta)\,\theta_{\text{avg},k-1}+\alpha\,\eta\,\theta_{k}$
where the coefficient $\alpha>0$ controls the time-scale of the averaging
process. The use of the EMA process $\\{\theta_{\text{avg},k}\\}_{k\geq 0}$
helps smooth out the stochasticity of the process $\theta_{k}$. Similarly
to Proposition 4.1, the joint process
$(\overline{\theta}^{\eta}_{t},\overline{\theta}^{\eta}_{\text{avg},t})\equiv(\theta^{\eta}_{[t/\eta]},\theta^{\eta}_{\text{avg},[t/\eta]})$
converges as $\eta\to 0$ to the solution of the following ordinary
differential equation
$\left\\{\begin{aligned}
\dot{\overline{\theta}}_{t}&=-\nabla\Big{(}\mathcal{L}(\overline{\theta}_{t})+\lambda\,\mathcal{R}(\overline{\theta}_{t},\overline{\theta}_{\text{avg},t})\Big{)}\\\
\dot{\overline{\theta}}_{\text{avg},t}&=-\alpha\,(\overline{\theta}_{\text{avg},t}-\overline{\theta}_{t})\end{aligned}\right.$
(5)
where the notation
$\mathcal{R}(\overline{\theta}_{t},\overline{\theta}_{\text{avg},t})$
designates the same quantity as the one described in equation 2, but with an
emphasis on the dependency on the EMA process. At convergence
$(\overline{\theta}_{t},\overline{\theta}_{\text{avg},t})\to(\overline{\theta}_{\infty},\overline{\theta}_{\text{avg},\infty})$,
one must necessarily have that
$\overline{\theta}_{\infty}=\overline{\theta}_{\text{avg},\infty}$, confirming
that, in the regime of small learning rate $\eta\to 0$, the Mean Teacher
method converges, albeit often more rapidly, towards the same solution as the
more standard $\Pi$-model. This indicates that the improved performances of
the Mean Teacher approach sometimes reported in the literature are either not
statistically meaningful, or due to poorly executed comparisons, or due to
mechanisms not captured by the $\eta\to 0$ asymptotic. Indeed, several
recently proposed consistency based SSL algorithms (Berthelot et al., 2019;
Sohn et al., 2020; Xie et al., 2019) achieve state-of-the-art performance
across diverse datasets without employing any exponential averaging processes.
These results are achieved by leveraging more sophisticated data augmentation
schemes such as RandAugment (Cubuk et al., 2019b), Back-Translation (Artetxe
et al., 2017) or Mixup (Zhang et al., 2017).
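The fixed-point argument above can be checked directly on the EMA recursion (scalar sketch, names illustrative):

```python
# Toy check of the Mean Teacher fixed point: once the student parameter has
# converged to a constant theta, the EMA recursion
#     theta_avg[k] = (1 - alpha * eta) * theta_avg[k - 1] + alpha * eta * theta
# drives theta_avg to that same constant, so in the small-learning-rate
# regime the Mean Teacher and Pi-model solutions coincide.

def ema_limit(theta_fixed, theta_avg0, alpha=1.0, eta=0.1, steps=500):
    theta_avg = theta_avg0
    for _ in range(steps):
        theta_avg = (1 - alpha * eta) * theta_avg + alpha * eta * theta_fixed
    return theta_avg
```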
### 4.2 Minimizers are harmonic functions
To better understand the properties of the solutions, we consider a simplified
setting further exploited in Section 4.3. Assume that
$\mathcal{F}:\mathcal{X}\equiv\mathbb{R}^{D}\to\mathbb{R}$ and
$\mathcal{Y}\equiv\mathbb{R}$ and that, for every
$y_{i}\in\mathcal{Y}\equiv\mathbb{R}$, the loss function
$f\mapsto\ell(f,y_{i})$ is uniquely minimized at $f=y_{i}$. We further assume
that the data-manifold $\mathcal{M}\subset\mathbb{R}^{D}$ can be globally
parametrized by a smooth and bijective mapping
$\Phi:\mathbb{R}^{d}\to\mathcal{M}\subset\mathbb{R}^{D}$. Similarly to
Section 2, we consider a data-augmentation scheme that can be described as
$\mathscr{S}_{\varepsilon\omega}(x)=\Phi(z+\varepsilon\omega)$ for
$z=\Phi^{-1}(x)$ and a sample $\omega$ from a $\mathbb{R}^{d}$-valued centred
and isotropic Gaussian distribution. We consider a finite set of labelled
samples $\\{x_{i},y_{i}\\}_{i\in\mathbf{I}_{L}}$, with $x_{i}=\Phi(z_{i})$ and
$z_{i}\in\mathbb{R}^{d}$ for $i\in\mathbf{I}_{L}$. We choose to model the
large number of unlabelled data samples as a continuum distributed on the data
manifold $\mathcal{M}$ as the push-forward measure $\Phi_{\sharp}\mu(dz)$ of a
probability distribution $\mu(dz)$ whose support is $\mathbb{R}^{d}$ through
the mapping $\Phi$. This means that an empirical average of the type
$(1/|\mathcal{D}_{U}|)\,\sum_{i\in\mathbf{I}_{U}}\varphi(x_{i})$ can be
replaced by $\int\varphi[\Phi(z)]\,\mu(dz)$. We investigate the regime
$\varepsilon\to 0$ and, similarly to Section 2, the minimization of the
consistency-regularized objective
$\displaystyle\mathcal{L}_{L}(\theta)$
$\displaystyle+\frac{\lambda}{\varepsilon^{2}}\int_{\mathbb{R}^{d}}\mathbb{E}_{\omega}{\left\\{\big{\|}\mathcal{F}_{\theta}[\mathscr{S}_{\varepsilon\omega}(\Phi(z))]-\mathcal{F}_{\theta}(\Phi(z))\big{\|}^{2}\right\\}}\,\mu(dz).$
(6)
For notational convenience, set
$f_{\theta}\equiv\mathcal{F}_{\theta}\circ\Phi$. Since
$\mathscr{S}_{\varepsilon\omega}[\Phi(z)]=\Phi(z+\varepsilon\,\omega)$, as
$\varepsilon\to 0$ the quantity
$\frac{1}{\varepsilon^{2}}\mathbb{E}_{\omega}{\left\\{\big{\|}\mathcal{F}_{\theta}[\mathscr{S}_{\varepsilon\omega}(\Phi(z))]-\mathcal{F}_{\theta}(\Phi(z))\big{\|}^{2}\right\\}}$
converges to $\|\nabla_{z}f_{\theta}\|^{2}$ and the objective function
equation 6 approaches the quantity
$\text{G}(f_{\theta})\;\equiv\;\frac{1}{|\mathcal{D}_{L}|}\sum_{i\in\mathbf{I}_{L}}\ell(f_{\theta}(z_{i}),y_{i})+\lambda\int_{\mathbb{R}^{d}}\,\|\nabla_{z}f_{\theta}(z)\|^{2}\,\mu(dz).$
(7)
A minimizer $f:\mathbb{R}^{d}\to\mathbb{R}$ of the functional G that is
consistent with the labelled data, i.e. $f(z_{i})=y_{i}$ for
$i\in\mathbf{I}_{L}$, is a minimizer of the energy functional
$f\mapsto\int_{\mathbb{R}^{d}}\,\|\nabla_{z}f(z)\|^{2}\,\mu(dz)$
subject to the constraints $f(z_{i})=y_{i}$. It is the variational formulation
of the Poisson equation
$\left\\{\begin{aligned} \Delta f(z)&=0\qquad\textrm{for}\quad
z\in\mathbb{R}^{d}\setminus\\{z_{i}\\}_{i\in\mathbf{I}_{L}}\\\
f(z_{i})&=y_{i}\qquad\textrm{for}\quad
i\in\mathbf{I}_{L}.\end{aligned}\right.$ (8)
Note that the solution does not depend on the regularization parameter
$\lambda$ in the regime of $\varepsilon\to 0$: this indicates, as will be
discussed in Section 4.3 in detail, that the generalization properties of
consistency-based SSL methods will typically be insensitive to this parameter,
in the regime of small data-augmentation at least. Furthermore, equation 8
shows that consistency-based SSL methods are indeed based on the same
principles as more standard graph-based approaches such as Label Propagation
(Zhu & Ghahramani, 2002): solutions are gradient/Laplacian penalized
interpolating functions. In Figure 2, we consider the case where $D=d=2$ with
trivial mapping $\Phi(x)=x$. We consider labelled data situated on the right
(resp. left) boundary of the unit square and corresponding to the label $y=0$
(resp. $y=1$). For simplicity, we choose the loss function
$\ell(f,y)=\frac{1}{2}\,(f-y)^{2}$ and parametrize $\mathcal{F}_{\theta}\equiv
f_{\theta}$ with a neural network with a single hidden layer with $N=100$
neurons. As expected, the $\Pi$-model converges to the solution of the Poisson
equation 8 in the unit square with boundary condition $f(u,v)=0$ for $u=0$ and
$f(u,v)=1$ for $u=1$.
Figure 2: Labelled data samples with class $y=0$ (green triangle) and $y=+1$
(red dot) are placed on the Left/Right boundary of the unit square. Unlabelled
data samples (blue stars) are uniformly placed within the unit square. We
consider a simple regression setting with loss function
$\ell(f,y)=\frac{1}{2}\,(f-y)^{2}$. Left: Randomly initialized neural network.
Middle: labelled/unlabelled data. Right: solution $f$ obtained by training a
standard $\Pi$-model. It is the harmonic function $f(u,v)=u$, as described by
equation 8.
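The harmonic limit of equation 8 can be reproduced with a few lines of Jacobi relaxation (a PDE-solver sketch, not the neural-network training used to produce Figure 2):

```python
# Jacobi relaxation on the unit square with Dirichlet data f = u on the
# boundary (so f = 0 on the left edge and f = 1 on the right edge) and
# Laplace's equation inside, as in equation 8.

def solve_laplace(n=21, iters=2000):
    u = lambda j: j / (n - 1)
    # boundary fixed to the linear profile, interior initialized at zero
    grid = [[u(j) if i in (0, n - 1) or j in (0, n - 1) else 0.0
             for j in range(n)] for i in range(n)]
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid
```

The relaxation converges to $f(u,v)=u$, the harmonic interpolant shown in Figure 2 (right).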
### 4.3 Generative model for Semi-Supervised Learning
As has been made clear throughout this text, SSL methods crucially rely on the
dependence structure of the data. The existence and exploitation of a much
lower-dimensional manifold $\mathcal{M}$ supporting the data-samples is
instrumental to this class of methods. Furthermore, the performance of
consistency-based SSL approaches is intimately related to the data-
augmentation schemes they are based upon. Consequently, in order to understand
the mechanisms that are at play when consistency-based SSL methods are used to
uncover the structures present in real datasets, it is important to build
simplified and tractable generative models of data that (1) respect these low-
dimensional structures and (2) allow the design of efficient data-augmentation
schemes. Several articles have investigated the influence of the dependence
structures that are present in the data on the learning algorithm (Bruna &
Mallat, 2013; Mossel, 2016). Here, we follow the Hidden Manifold Model (HMM)
framework proposed in (Goldt et al., 2019; Gerace et al., 2020) where the
authors describe a model of synthetic data concentrating near low-dimensional
structures and analyze the learning curve associated to a class of two-layered
neural networks.
Figure 3: Left: For a fixed data-augmentation scheme, generalization
properties for $\lambda$ spanning two orders of magnitude. Right: influence of
the amount of data-augmentation on the generalization properties.
Low-dimensional structure: Similarly to Section 4.2, assume that the
$D$-dimensional data-samples $x_{i}\in\mathcal{X}$ can be expressed as
$x_{i}=\Phi(z_{i})\in\mathbb{R}^{D}$ for a fixed smooth mapping
$\Phi:\mathbb{R}^{d}\to\mathbb{R}^{D}$. In other words, the data-manifold
$\mathcal{M}$ is $d$-dimensional and the mapping $\Phi$ can be used to
parametrize it. The mapping $\Phi$ is chosen to be a neural network with a
single hidden layer with $H$ neurons, although other choices are indeed
possible. For $z=(z^{1},\ldots,z^{d})\in\mathbb{R}^{d}$, set $\Phi(z)=A^{1\to
2}\,\varphi(A^{0\to 1}z+b^{1})$ for matrices $A^{0\to 1}\in\mathbb{R}^{H,d}$
and $A^{1\to 2}\in\mathbb{R}^{D,H}$, bias vector $b^{1}\in\mathbb{R}^{H}$ and
non-linearity $\varphi:\mathbb{R}\to\mathbb{R}$ applied element-wise. In all
our experiments, we use the ELU non-linearity. We adopt the standard
normalization $A^{0\to 1}_{i,j}=w^{(1)}_{i,j}/\sqrt{d}$ and $A^{1\to
2}_{i,j}=w^{(2)}_{i,j}/\sqrt{H}$ for weights $w^{(k)}_{i,j}$ drawn i.i.d from
a centred Gaussian distribution with unit variance; this ensures that, if the
coordinates of the input vector $z\in\mathbb{R}^{d}$ are all of order
$\mathcal{O}(1)$, so are the coordinates of $x=\Phi(z)$.
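A minimal sketch of this generator (stdlib-only; the seeding and helper names are our own, purely illustrative):

```python
import math, random

# Sketch of the Hidden Manifold generator Phi(z) = A2 @ elu(A1 @ z + b) with
# the 1/sqrt(d) and 1/sqrt(H) normalizations described above.

def elu(t):
    return t if t > 0 else math.exp(t) - 1.0

def make_phi(d=10, H=30, D=100, seed=0):
    rng = random.Random(seed)
    A1 = [[rng.gauss(0.0, 1.0) / math.sqrt(d) for _ in range(d)] for _ in range(H)]
    b = [rng.gauss(0.0, 1.0) for _ in range(H)]
    A2 = [[rng.gauss(0.0, 1.0) / math.sqrt(H) for _ in range(H)] for _ in range(D)]

    def phi(z):
        hidden = [elu(sum(a * zj for a, zj in zip(row, z)) + bi)
                  for row, bi in zip(A1, b)]
        return [sum(a * h for a, h in zip(row, hidden)) for row in A2]

    return phi
```

An on-manifold augmentation is then simply `phi` applied to a perturbed latent, $\Phi(z+\varepsilon\omega)$.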
Data-augmentation: consider a data sample $x_{i}\in\mathcal{M}$ on the data-
manifold. It can also be expressed as $x_{i}=\Phi(z_{i})$. We consider the
natural data-augmentation process which consists in setting
$\mathscr{S}_{\varepsilon\omega}(x_{i})=\Phi(z_{i}+\varepsilon\omega)$ for a
sample $\omega\in\mathbb{R}^{d}$ from an isotropic Gaussian distribution with
unit covariance and $\varepsilon>0$. Crucially, the data-augmentation scheme
respects the low-dimensional structure of the data: for any value of
$\varepsilon$, the perturbed sample
$\mathscr{S}_{\varepsilon\omega}(x_{i})$ lies exactly on the data-manifold
$\mathcal{M}$. The larger $\varepsilon$, the more efficient the
data-augmentation scheme; this property is important since it allows us to
study the influence of the amount of data-augmentation.
Classification: we consider a balanced binary classification problem with
$|\mathcal{D}_{L}|\geq 2$ labelled training examples
$\\{x_{i},y_{i}\\}_{i\in\mathbf{I}_{L}}$ where $x_{i}=\Phi(z_{i})$ and
$y_{i}\in\mathcal{Y}\equiv\\{-1,+1\\}$. The samples $z_{i}\in\mathbb{R}^{d}$
corresponding to the positive (resp. negative) class are assumed to have been
drawn i.i.d from a Gaussian distribution with identity covariance matrix and
mean $\mu_{+}\in\mathbb{R}^{d}$ (resp. mean $\mu_{-}\in\mathbb{R}^{d}$). The
distance $\|\mu_{+}-\mu_{-}\|$ quantifies the hardness of the classification
task.
Neural architecture and optimization: Consider fitting a two-layered neural
network $\mathcal{F}_{\theta}:\mathbb{R}^{D}\to\mathbb{R}$ by minimising the
negative log-likelihood
$\mathcal{L}_{L}(\theta)\equiv(1/|\mathcal{D}_{L}|)\sum_{i}\ell[\mathcal{F}_{\theta}(x_{i}),y_{i}]$
where $\ell(f,y)=\log(1+\exp[-y\,f])$. We assume that there are
$|\mathcal{D}_{L}|=10$ labelled data pairs
$\\{x_{i},y_{i}\\}_{i\in\mathbf{I}_{L}}$, as well as $|\mathcal{D}_{U}|=1000$
unlabelled data samples, that the ambient space has dimension $D=100$ and the
data manifold $\mathcal{M}$ has dimension $d=10$. The function $\Phi$ uses
$H=30$ neurons in its hidden layer. In all our experiments, we use a standard
Stochastic Gradient Descent (SGD) method with constant learning rate and
momentum $\beta=0.9$. For minimizing the consistency-based SSL objective
$\mathcal{L}_{L}(\theta)+\lambda\,\mathcal{R}(\theta)$, with regularization
$\mathcal{R}(\theta)$ given in equation 2, we use the standard strategy
(Tarvainen & Valpola, 2017) consisting in first minimizing the un-regularized
objective $\mathcal{L}_{L}$ alone for a few epochs in order for the function
$\mathcal{F}_{\theta}$ to be learned in the neighbourhood of the few labelled
data-samples before switching on the consistency-based regularization whose
role is to propagate the information contained in the labelled samples along
the data manifold $\mathcal{M}$.
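For concreteness, the loss $\ell(f,y)=\log(1+\exp[-y\,f])$ can be implemented in a numerically stable form (a sketch, not the authors' code):

```python
import math

# Binary logistic negative log-likelihood ell(f, y) = log(1 + exp(-y*f))
# with labels y in {-1, +1}, written stably to avoid overflow for large |f|.

def logistic_nll(f, y):
    z = -y * f
    # log(1 + exp(z)) = max(z, 0) + log(1 + exp(-|z|))
    return max(z, 0.0) + math.log1p(math.exp(-abs(z)))

def mean_nll(preds, labels):
    return sum(logistic_nll(f, y) for f, y in zip(preds, labels)) / len(preds)
```

A confidently correct prediction incurs near-zero loss, while a confidently wrong one is penalized roughly linearly in the margin.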
Figure 4: Learning curve test (NLL) of the $\Pi$-model with $\lambda=10$ for
different “quality” of data-augmentation. The data manifold is of dimension
$d=10$ in an ambient space of dimension $D=100$. For $x_{i}=\Phi(z_{i})$ and
$1\leq k\leq d$, the data-augmentation scheme is implemented as
$\mathscr{S}_{\varepsilon\omega[k]}(x_{i})=\Phi(z_{i}+\varepsilon\,\omega[k])$
where $\omega[k]$ is a sample from a Gaussian distribution whose last $(d-k)$
coordinates are zero. In other words, the data-augmentation scheme only
explores $k$ dimensions out of the $d$ dimensions of the data-manifold. We use
$\varepsilon=0.3$ in all the experiments. Left: Learning curves (Test NLL) for
data-augmentation dimension $k\in[5,10]$. Right: Test NLL at epoch $N=200$ (see
left plot) for data-augmentation dimension $k\in[5,10]$.
Insensitivity to $\lambda$: Figure 3 (Left) shows that this method is
relatively insensitive to the parameter $\lambda$, as long as it is within
reasonable bounds. This phenomenon can be read from equation 8, which does not
depend on $\lambda$. Much larger or smaller values (not shown in Figure 3) of
$\lambda$ do lead, unsurprisingly, to convergence and stability issues.
Amount of Data-Augmentation: As reported for many tasks (Cubuk et al., 2018;
Zoph et al., 2019; Kostrikov et al., 2020), tuning the amount of
data-augmentation in deep-learning applications is often a delicate exercise that
can greatly influence the resulting performances. Figure 3 (Right) reports the
generalization properties of the method for different amounts of
data-augmentation. Too low an amount of data-augmentation (i.e.
$\varepsilon=0.03$)
and the final performance is equivalent to the un-regularized method. Too
large an amount of data-augmentation (i.e. $\varepsilon=1.0$) also leads to
poor generalization properties. This is because the choice of
$\varepsilon=1.0$ corresponds to augmented samples that are very different
from the distribution of the training dataset (i.e. distributional shift),
although these samples are still supported by the data-manifold.
Figure 5: Mean-Teacher (MT) learning curves (Test NLL) for different values of
the exponential smoothing parameter $\beta_{\textrm{MT}}\in(0,1)$. For
$\beta_{\text{MT}}\in\\{0.9,0.95,0.99,0.995\\}$, the final test NLL obtained
through the MT approach is identical to the test NLL obtained through the
$\Pi$-model. In all the experiments, we used $\lambda=10$ and used SGD with
momentum $\beta=0.9$.
Quality of the Data-Augmentation: to study the influence of the quality of the
data-augmentation scheme, we consider a perturbation process implemented as
$\mathscr{S}_{\varepsilon\omega[k]}(x_{i})=\Phi(z_{i}+\omega[k])$ for
$x_{i}=\Phi(z_{i})$ where the noise term $\omega[k]$ is defined as follows.
For a data-augmentation dimension parameter $1\leq k\leq d$ we have
$\omega[k]=(\xi_{1},\ldots,\xi_{k},0,\ldots,0)$ for i.i.d standard Gaussian
samples $\xi_{1},\ldots,\xi_{k}\in\mathbb{R}$. This data-augmentation scheme
only explores the first $k$ dimensions of the $d$-dimensional data-manifold:
the lower $k$, the poorer the exploration of the data-manifold. As
demonstrated on Figure 4, lower quality data-augmentation schemes (i.e. lower
values of $k\in[0,d]$) hurt the generalization performance of the $\Pi$-model.
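In code, the $\omega[k]$ perturbation above amounts to sampling a Gaussian vector and zeroing its last $d-k$ coordinates before mapping back through $\Phi$; the sketch below operates directly in the latent space and is an illustration, not the authors' implementation.

```python
import numpy as np

def omega(k, d, rng):
    """Gaussian sample in R^d whose last (d - k) coordinates are zero."""
    xi = rng.normal(size=d)
    xi[k:] = 0.0
    return xi

rng = np.random.default_rng(0)
d, k, eps = 10, 5, 0.3
z = rng.normal(size=d)                 # latent coordinates of a sample
z_aug = z + eps * omega(k, d, rng)     # then x_aug = Phi(z_aug)
```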
Mean-Teacher versus $\Pi$-model: we implemented the Mean-Teacher (MT) approach
with an exponential moving average (EMA) process
$\theta_{\text{avg},k}=\beta_{\textrm{MT}}\,\theta_{\text{avg},k-1}+(1-\beta_{\textrm{MT}})\,\theta_{k}$
for the MT parameter $\theta_{\text{avg},k}$ with different scales
$\beta_{\text{MT}}\in\\{0.9,0.95,0.99,0.995\\}$, as well as a $\Pi$-model
approach, with $\lambda=10$ and $\varepsilon=0.3$. Figure 5 shows, in
accordance with Section 4.1, that the different EMA schemes lead to
generalization performances similar to a standard $\Pi$-model.
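The EMA update behind the Mean-Teacher parameters is a one-liner; the sketch below treats the parameters as a flat vector for simplicity.

```python
import numpy as np

def ema_update(theta_avg, theta, beta_mt):
    """Mean-Teacher EMA: theta_avg_k = beta*theta_avg_{k-1} + (1-beta)*theta_k."""
    return beta_mt * theta_avg + (1.0 - beta_mt) * theta
```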
## 5 Conclusion
Consistency-based SSL methods rely on a subtle trade-off between the
exploitation of the labelled samples and the discovery of the low-dimensional
data-manifold. The results presented in this article highlight the connections
with more standard methods such as Jacobian penalization and graph-based
approaches and emphasize the crucial role of the data-augmentation scheme. The
analysis of consistency-based SSL methods is still in its infancy and our
numerical simulations suggest that the variant of the Hidden Manifold Model
described in this text is a natural framework to make progress in this
direction.
## References
* Advani & Saxe (2017) Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in neural networks. _arXiv preprint arXiv:1710.03667_ , 2017.
* Artetxe et al. (2017) Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. Unsupervised neural machine translation. _arXiv preprint arXiv:1710.11041_ , 2017.
* Bachman et al. (2014) Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In _Advances in Neural Information Processing Systems_ , pp. 3365–3373, 2014.
* Basri & Jacobs (2003) Ronen Basri and David W Jacobs. Lambertian reflectance and linear subspaces. _IEEE Transactions on Pattern Analysis & Machine Intelligence_, (2):218–233, 2003.
* Belkin et al. (2006) Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. _Journal of machine learning research_ , 7(Nov):2399–2434, 2006.
* Berthelot et al. (2019) David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In _Advances in Neural Information Processing Systems_ , pp. 5050–5060, 2019.
* Bishop (1995) Chris M Bishop. Training with noise is equivalent to tikhonov regularization. _Neural computation_ , 7(1):108–116, 1995.
* Blum & Chawla (2001) Avrim Blum and Shuchi Chawla. Learning from labeled and unlabeled data using graph mincuts. In _Proceedings of the Eighteenth International Conference on Machine Learning_ , pp. 19–26. Morgan Kaufmann Publishers Inc., 2001.
* Bruna & Mallat (2013) Joan Bruna and Stéphane Mallat. Invariant scattering convolution networks. _IEEE transactions on pattern analysis and machine intelligence_ , 35(8):1872–1886, 2013.
* Cayton (2005) Lawrence Cayton. Algorithms for manifold learning. _Univ. of California at San Diego Tech. Rep_ , 12(1-17):1, 2005.
* Chapelle & Schölkopf (2002) Olivier Chapelle and Bernhard Schölkopf. Incorporating invariances in non-linear support vector machines. In _Advances in neural information processing systems_ , pp. 609–616, 2002.
* Chapelle et al. (2009) Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews]. _IEEE Transactions on Neural Networks_ , 20(3):542–542, 2009.
* Chen et al. (2020) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. _arXiv preprint arXiv:2002.05709_ , 2020.
* Cubuk et al. (2018) Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. _arXiv preprint arXiv:1805.09501_ , 2018.
* Cubuk et al. (2019a) Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation strategies from data. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 113–123, 2019a.
* Cubuk et al. (2019b) Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical data augmentation with no separate search. _arXiv preprint arXiv:1909.13719_ , 2019b.
* Devalla et al. (2018) Sripad Krishna Devalla, Prajwal K Renukanand, Bharathwaj K Sreedhar, Giridhar Subramanian, Liang Zhang, Shamira Perera, Jean-Martial Mari, Khai Sing Chin, Tin A Tun, Nicholas G Strouthidis, et al. Drunet: a dilated-residual U-net deep learning network to segment optic nerve head tissues in optical coherence tomography images. _Biomedical optics express_ , 9(7):3244–3265, 2018.
* Dunlop et al. (2019) Matthew M Dunlop, Dejan Slepčev, Andrew M Stuart, and Matthew Thorpe. Large data and zero noise limits of graph-based semi-supervised learning algorithms. _Applied and Computational Harmonic Analysis_ , 2019.
* Ethier & Kurtz (2009) Stewart N Ethier and Thomas G Kurtz. _Markov processes: characterization and convergence_ , volume 282\. John Wiley & Sons, 2009.
* Gerace et al. (2020) Federica Gerace, Bruno Loureiro, Florent Krzakala, Marc Mezard, and Lenka Zdeborová. Generalisation error in learning with random features and the hidden manifold model. _arXiv preprint arXiv:2002.09339_ , 2020.
* Goldt et al. (2019) Sebastian Goldt, Marc Mézard, Florent Krzakala, and Lenka Zdeborová. Modelling the influence of data structure on learning in neural networks. _arXiv preprint arXiv:1909.11500_ , 2019.
* Grill et al. (2020) Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. _arXiv preprint arXiv:2006.07733_ , 2020.
* Haasdonk & Keysers (2002) Bernard Haasdonk and Daniel Keysers. Tangent distance kernels for support vector machines. In _Object recognition supported by user interaction for service robots_ , volume 2, pp. 864–868. IEEE, 2002.
* Kostrikov et al. (2020) Ilya Kostrikov, Denis Yarats, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. _arXiv preprint arXiv:2004.13649_ , 2020.
* Laine & Aila (2016) Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. _arXiv preprint arXiv:1610.02242_ , 2016.
* Lemley et al. (2017) Joseph Lemley, Shabab Bazrafkan, and Peter Corcoran. Smart augmentation learning an optimal data augmentation strategy. _IEEE Access_ , 5:5858–5869, 2017.
* Luo et al. (2018) Yucen Luo, Jun Zhu, Mengxi Li, Yong Ren, and Bo Zhang. Smooth neighbors on teacher graphs for semi-supervised learning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 8896–8905, 2018.
* Miyato et al. (2018) Takeru Miyato, Shin-ichi Maeda, Shin Ishii, and Masanori Koyama. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. _IEEE transactions on pattern analysis and machine intelligence_ , 2018.
* Montavon et al. (2012) Grégoire Montavon, Katja Hansen, Siamac Fazli, Matthias Rupp, Franziska Biegler, Andreas Ziehe, Alexandre Tkatchenko, Anatole V Lilienfeld, and Klaus-Robert Müller. Learning invariant representations of molecules for atomization energy prediction. In _Advances in Neural Information Processing Systems_ , pp. 440–448, 2012.
* Mossel (2016) Elchanan Mossel. Deep learning and hierarchal generative models. _arXiv preprint arXiv:1612.09057_ , 2016.
* Niyogi et al. (1998) Partha Niyogi, Federico Girosi, and Tomaso Poggio. Incorporating prior information in machine learning by creating virtual examples. _Proceedings of the IEEE_ , 86(11):2196–2209, 1998.
* Oliver et al. (2018) Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In _Advances in Neural Information Processing Systems_ , pp. 3235–3246, 2018.
* Park et al. (2019) Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. Specaugment: A simple data augmentation method for automatic speech recognition. _arXiv preprint arXiv:1904.08779_ , 2019.
* Perez & Wang (2017) Luis Perez and Jason Wang. The effectiveness of data augmentation in image classification using deep learning. _arXiv preprint arXiv:1712.04621_ , 2017.
* Peyré (2009) Gabriel Peyré. Manifold models for signals and images. _Computer Vision and Image Understanding_ , 113(2):249–260, 2009.
* Polyak (1990) Boris T Polyak. New stochastic approximation type procedures. _Automat. i Telemekh_ , 7(98-107):2, 1990.
* Polyak & Juditsky (1992) Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging. _SIAM journal on control and optimization_ , 30(4):838–855, 1992.
* Rifai et al. (2011a) Salah Rifai, Yann N Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. The manifold tangent classifier. In _Advances in Neural Information Processing Systems_ , pp. 2294–2302, 2011a.
* Rifai et al. (2011b) Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In _Proceedings of the 28th International Conference on International Conference on Machine Learning_ , pp. 833–840. Omnipress, 2011b.
* Robbins & Monro (1951) Herbert Robbins and Sutton Monro. A stochastic approximation method. _The annals of mathematical statistics_ , pp. 400–407, 1951.
* Sajjadi et al. (2016) Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In _Advances in Neural Information Processing Systems_ , pp. 1163–1171, 2016.
* Saxe et al. (2013) Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. _arXiv preprint arXiv:1312.6120_ , 2013.
* Saxe et al. (2018) Andrew Michael Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan Daniel Tracey, and David Daniel Cox. On the information bottleneck theory of deep learning. 2018\.
* Shwartz-Ziv & Tishby (2017) Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information. _arXiv preprint arXiv:1703.00810_ , 2017.
* Simard et al. (1998) Patrice Y Simard, Yann A LeCun, John S Denker, and Bernard Victorri. Transformation invariance in pattern recognition—tangent distance and tangent propagation. In _Neural networks: tricks of the trade_ , pp. 239–274. Springer, 1998.
* Sohn et al. (2020) Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. _arXiv preprint arXiv:2001.07685_ , 2020.
* Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. _The Journal of Machine Learning Research_ , 15(1):1929–1958, 2014.
* Tarvainen & Valpola (2017) Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In _Advances in neural information processing systems_ , pp. 1195–1204, 2017.
* Tishby & Zaslavsky (2015) Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. In _2015 IEEE Information Theory Workshop (ITW)_ , pp. 1–5. IEEE, 2015.
* Turk & Pentland (1991) Matthew Turk and Alex Pentland. Eigenfaces for recognition. _Journal of cognitive neuroscience_ , 3(1):71–86, 1991.
* Xie et al. (2019) Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmentation for consistency training. 2019\.
* Zhang et al. (2017) Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. _arXiv preprint arXiv:1710.09412_ , 2017.
* Zhu & Ghahramani (2002) Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical report, Citeseer, 2002.
* Zhu et al. (2003) Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In _Proceedings of the 20th International conference on Machine learning (ICML-03)_ , pp. 912–919, 2003.
* Zhu (2005) Xiaojin Jerry Zhu. Semi-supervised learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2005.
* Zoph et al. (2019) Barret Zoph, Ekin D Cubuk, Golnaz Ghiasi, Tsung-Yi Lin, Jonathon Shlens, and Quoc V Le. Learning data augmentation strategies for object detection. _arXiv preprint arXiv:1906.11172_ , 2019.
# Motor-Imagery-Based Brain Computer Interface using Signal Derivation and
Aggregation Functions
Javier Fumanal-Idocin, Yu-Kai Wang, Chin-Teng Lin, Javier Fernández, Jose
Antonio Sanz, Humberto Bustince. Javier Fumanal-Idocin, Javier Fernandez, Jose
Antonio Sanz and Humberto Bustince are with the Departamento de Estadistica,
Informatica y Matematicas, Universidad Publica de Navarra, Campus de
Arrosadia, 31006, Pamplona, Spain. Emails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>. Javier Fernandez, Jose Antonio Sanz and Humberto Bustince
are with the Institute of Smart Cities, Universidad Publica de Navarra, Campus
de Arrosadia, 31006, Pamplona, Spain. Javier Fernandez and Humberto Bustince
are with the Laboratory Navarrabiomed, Hospital Complex of Navarre (CHN),
Universidad Publica de Navarra, IdiSNA, Irunlarrea 3, 31008, Pamplona,
Spain. Y.-K. Wang and C.-T. Lin are with the Australian AI Institute, Faculty
of Engineering and Information Technology, University of Technology Sydney,
Ultimo, NSW 2007, Australia (e-mail: <EMAIL_ADDRESS>,
chin-<EMAIL_ADDRESS>).
###### Abstract
Brain Computer Interface (BCI) technologies are popular methods of
communication between the human brain and external devices. One of the most
popular approaches to BCI is Motor Imagery (MI). In BCI applications, the
ElectroEncephaloGraphy (EEG) is a very popular measurement for brain dynamics
because of its non-invasive nature. Although there is a high interest in the
BCI topic, the performance of existing systems is still far from ideal, due to
the difficulty of performing pattern recognition tasks in EEG signals. This
difficulty lies in the selection of the correct EEG channels, the signal-to-
noise ratio of these signals and how to discern the redundant information
among them. BCI systems are composed of a wide range of components that
perform signal pre-processing, feature extraction and decision making. In this
paper, we define a new BCI Framework, named Enhanced Fusion Framework, where
we propose three different ideas to improve the existing MI-based BCI
frameworks. Firstly, we include an additional pre-processing step of the
signal: a differentiation of the EEG signal that makes it time-invariant.
Secondly, we add an additional frequency band as feature for the system: the
Sensory Motor Rhythm band, and we show its effect on the performance of the
system. Finally, we make a profound study of how to make the final decision in
the system. We propose the usage of up to six different types of
classifiers and a wide range of aggregation functions (including classical
aggregations, Choquet and Sugeno integrals and their extensions and overlap
functions) to fuse the information given by the considered classifiers. We
have tested this new system on a dataset of 20 volunteers performing motor
imagery-based brain-computer interface experiments. On this dataset, the new
system achieved an accuracy of $88.80\%$. We also propose an optimized version
of our system that is able to reach up to $90.76\%$. Furthermore, we find
that the pair Choquet/Sugeno integrals and overlap functions are the ones
providing the best results.
###### Index Terms:
Brain-Computer-Interface (BCI); Motor Imagery (MI); Classification;
Aggregation functions; Information Fusion; Signal Processing.
©2021 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works. DOI:
10.1109/TCYB.2021.3073210
## I Introduction
Brain-computer interfaces (BCIs) provide a new interface between the human
brain and the devices or systems to be controlled by the changes of brain
dynamics [1], [2]. One popular BCI is Motor Imagery (MI) in which a person
imagines a specific body movement (usually moving left or right hand). As
imagining the movement, the Event-Related Desynchronization (ERD) in mu rhythm
near motor areas has been widely reported in the previous studies [3], [4].
Therefore, correct ERD identification highly influences the performance of MI-
based BCI. Recently, a lot of state-of-the-art algorithms, such as Common
Spatial Pattern (CSP), support vector machines, or deep learning have been
extensively used to identify the ERD in MI-based BCI [4, 5, 2, 6].
BCIs use a wide range of techniques to extract features from the original raw
data. Due to the volume-conduction effect, it is very difficult to extract
information directly from the ElectroEncephaloGraphy (EEG) data [7], because
the measures taken are affected by the conductance of the biological tissues
that transmit the electrical signal. To cope with this, most algorithms use a
procedure to extract features from the EEG data before feeding them to a
classifier. Some of the most common procedures include using the Fast Fourier
transform (FFt) to transform the EEG signals to the frequency domain [8, 9,
10] and the Meyer wavelet transformation [11, 12]. There have also been many
different fuzzy approaches to the BCI problem [13, 14, 15].
A BCI framework is composed of signal pre-processing, feature extraction and
control commands. The interactions among all of these elements take a crucial
role in the final performance of the system. However, possible correlations
and synergies among the different features are ignored in the command control
phase in the classical BCI framework. In [16] the authors proposed a channel
selection procedure to minimize the number of correlated components in the
system. In [17] the authors use a time-window selection algorithm to choose
the best time to collect the MI features, and in [18] the authors use the
spatio-temporal information of the EEG signals to detect the optimal
channels to discriminate between the MI tasks. In [3] the authors propose a
new BCI framework, the Multimodal Fuzzy Fusion BCI Framework (MFF), that uses
the fuzzy integrals [19, 20] to model these interactions, and a two-step
decision making process that combines the outputs of three
classical classifiers.
Two of the most important fuzzy integrals are the Sugeno and the Choquet
integrals. Both aggregate values using a measure that indicates how important
the different correlations among the data are. Therefore, they are especially
suited to applications where there are significant interactions among the
features to aggregate. Fuzzy integrals have been widely used in decision
making [21], image processing [20] and deep learning [22]. As we have
mentioned, Fuzzy integrals have already been used in a BCI framework in [3],
obtaining better results than the classical aggregations. Many different
generalizations of the Choquet integral have been proposed [23, 24, 25]. The
CF, [26], and CF1,F2 [27] generalizations of the Choquet integral have proven
to be very successful in classification systems. Ordered-Weighted-Averaging
operators [28, 29] (OWAs) are a specific case of the fuzzy integrals that have
also been used in multicriteria decision [30], and finance [31].
A concept closely related to aggregation functions is that of overlap functions, which
were introduced in [32] in the fuzzy community, as a way to represent the
overlapping between two concepts. Since these functions were only defined for
two elements, a generalized version of overlap functions for n-valued
vectors was proposed in [33]. Some common examples of these generalized
overlap functions are the geometric and the harmonic means. Generalized
overlap functions have been successfully used in Big Data [34] and in fuzzy
rule-based classifiers [35].
The most successful MI-based BCI framework using aggregation functions is [3].
However, in the decision making process, it does not study:
1. 1.
The effects of new types of classifiers with the new integrals.
2. 2.
The possibility of using different aggregation functions in each step of the
process.
3. 3.
Possible improvements to other areas of the BCI framework besides the decision
making phase.
In this paper, we present a new BCI framework, named Enhanced Fusion BCI
Framework (EMF). It includes a new differentiation signal phase, an additional
wave band: the SensoriMotor Rhythm, and we add two additional types of
classifiers to the ensemble of classifiers: Gaussian Process and Support
Vector Machines. We also consider a wider set of aggregation functions to be
used in the decision making phase that includes not only the Choquet and
Sugeno integrals and their generalizations, but also Ordered Weighted
Averaging operators and generalized overlaps. Finally, we also propose an
Optimized version of the EMF (OEMF) in terms of accuracy by checking the most
proper combinations of wave bands and classifiers.
The rest of our paper is organized as follows. In section II we recall the
concept of aggregation function and several types of them, and we also
describe the traditional BCI framework [36] and the MFF BCI framework [3]. In
section III we explain the new Enhanced Multimodal Fusion BCI Framework. In
In section IV we show our experimental results for our own BCI dataset, and in
section V we discuss our results for the BCI IV competition dataset [37].
Finally, in section VI we give our final conclusions and remarks for this
work.
## II Preliminaries
In this section we recall some basic notions about aggregation functions
(Section II-A), the traditional BCI framework (Section II-B) and the MFF BCI
framework (Section II-C).
### II-A Aggregation Functions
Aggregation functions are used to fuse information from n sources into one
single output. A function $A:[0,1]^{n}\rightarrow[0,1]$ is said to be an n-ary
aggregation function if the following conditions hold:
* •
A is increasing in each argument: $\forall
i\in\\{1,\dots,n\\}$, if $x_{i}\leq y$ then $A(x_{1},\dots,x_{i},\dots,x_{n})\leq
A(x_{1},\dots,y,\dots,x_{n})$
* •
$A(0,\dots,0)=0$
* •
$A(1,\dots,1)=1$
Some examples of classical n-ary aggregation functions are:
* •
Arithmetic mean: $A(x)=\frac{1}{n}\sum_{i=1}^{n}x_{i}$.
* •
Median: the middle value of the increasingly ordered inputs, i.e.
$A(x)=x_{\sigma(\lceil n/2\rceil)}$ where $x_{\sigma(1)}\leq\dots\leq
x_{\sigma(n)}$ (averaging the two middle values when $n$ is even).
* •
Max: $A(x)=max(x_{1},\dots,x_{n})$.
* •
Min: $A(x)=min(x_{1},\dots,x_{n})$.
Other types of aggregation functions are the following ones:
#### II-A1 T-norm [38]
A T-norm is an aggregation function $[0,1]^{2}\rightarrow[0,1]$ that satisfies
the following properties for $a,b,c\in[0,1]$:
* •
$T(a,b)=T(b,a)$
* •
$T(a,T(b,c))=T(T(a,b),c)$
* •
$T(a,1)=a$
Some examples of T-norms are the product or the minimum.
#### II-A2 Choquet integral [20]
Having $N=\\{1,\dots,n\\}$, a function $m:2^{N}\rightarrow[0,1]$ is a fuzzy
measure if, for all $X,Y\subseteq N$, it satisfies the following properties:
1. ($m$1)
Increasingness: if $X\subseteq Y$, then $m(X)\leq m(Y)$.
2. ($m$2)
Boundary conditions: $m(\emptyset)=0$ and $m(N)=1$.
The discrete Choquet integral of $\textbf{x}=(x_{1},\dots,x_{n})\in[0,1]^{n}$
with respect to $m$ is defined as $C_{m}:[0,1]^{n}\rightarrow[0,1]$ given by
$C_{m}(x)=\sum_{i=1}^{n}(x_{\sigma(i)}-x_{\sigma(i-1)})\cdot m(A_{i})$ (1)
where $\textbf{x}_{\sigma}$ is an increasing permutation of x such that $0\leq
x_{\sigma(1)}\leq\dots\leq x_{\sigma(n)}$, with the conventions that
$x_{\sigma(0)}=0$ and $A_{i}=\\{\sigma(i),\dots,\sigma(n)\\}$.
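A minimal numpy sketch of the discrete Choquet integral of Eq. (1); passing the fuzzy measure as a function on index sets is an illustrative interface choice, not part of the definition.

```python
import numpy as np

def choquet(x, m):
    """Discrete Choquet integral of x with respect to a fuzzy measure m,
    where m maps frozensets of indices to [0, 1] (illustrative interface)."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)                    # sigma: increasing permutation
    xs = np.concatenate(([0.0], x[order]))   # prepend x_sigma(0) = 0
    total = 0.0
    for i in range(1, len(x) + 1):
        A_i = frozenset(order[i - 1:].tolist())  # {sigma(i), ..., sigma(n)}
        total += (xs[i] - xs[i - 1]) * m(A_i)
    return total

# With the cardinality measure m(S) = |S| / n the Choquet integral
# reduces to the arithmetic mean.
cardinality_measure = lambda S: len(S) / 3
```

For instance, `choquet([0.2, 0.5, 0.8], cardinality_measure)` returns the arithmetic mean $0.5$.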
#### II-A3 CF [26]
It is a generalization of the Choquet integral that replaces the product used
in Eq. 1 for a more general function F. In [39] the authors detail the
required properties for F so that the CF is an aggregation function, and
conclude that the best F in their experimental results is the Hamacher T-norm.
For this reason, we have chosen it for our experimentation, as detailed in the
following expressions:
$T_{H}(x,y)=\begin{cases}0,&\mbox{if }x=y=0\\\
\frac{xy}{x+y-xy},&\mbox{otherwise}\end{cases}$
$CF(x)=\sum_{i=1}^{n}T_{H}(x_{\sigma(i)}-x_{\sigma(i-1)},m(A_{i}))$
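The same skeleton gives the CF generalisation: the product is replaced by the Hamacher T-norm. A minimal sketch, with the fuzzy measure again passed as a function on index sets (an illustrative interface):

```python
import numpy as np

def t_hamacher(a, b):
    """Hamacher T-norm, with T(0, 0) = 0 by convention."""
    return 0.0 if a == b == 0 else a * b / (a + b - a * b)

def cf_hamacher(x, m):
    """CF generalisation of the Choquet integral using the Hamacher T-norm."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    xs = np.concatenate(([0.0], x[order]))
    return sum(t_hamacher(xs[i] - xs[i - 1],
                          m(frozenset(order[i - 1:].tolist())))
               for i in range(1, len(x) + 1))
```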
#### II-A4 CF1,F2 [27]
The original product of the Choquet Integral can be decomposed on two product
functions using the distributive property of the product. Therefore, the
Choquet integral can be written as:
$C(x)=\sum_{i=1}^{n}\left(x_{\sigma(i)}\,m(A_{i})-x_{\sigma(i-1)}\,m(A_{i})\right)$
Then, the product functions are substituted for two more generic functions: F1
and F2. In [27] the authors explain the properties that must hold F1 and F2 so
that the CF1,F2 is an aggregation function. Consequently, the expression for
the CF1,F2 is the following:
$C_{F1,F2}(x)=\sum_{i=1}^{n}\left(F_{1}(x_{\sigma(i)},m(A_{i}))-F_{2}(x_{\sigma(i-1)},m(A_{i}))\right)$
#### II-A5 Sugeno integral [19]
Let $m:2^{N}\rightarrow[0,1]$ be a fuzzy measure. The discrete Sugeno integral
of $\textbf{x}=(x_{1},\dots,x_{n})\in[0,1]^{n}$ with respect to $m$ is defined
as a function $S_{m}:[0,1]^{n}\rightarrow[0,1]$, given by:
$S_{m}(x)=\max\\{\min(x_{\sigma(i)},m(A_{i}))\mid i=1,\dots,n\\}$ (2)
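A minimal sketch of Eq. (2), reusing the same index-set interface for the fuzzy measure as an illustrative assumption:

```python
def sugeno(x, m):
    """Discrete Sugeno integral: max over i of min(x_sigma(i), m(A_i))."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # increasing permutation
    return max(min(x[order[i]], m(frozenset(order[i:]))) for i in range(n))
```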
#### II-A6 Sugeno Hamacher [3]
If we consider using the Hamacher T-norm instead of the minimum in Eq. 2, we
obtain the following expression:
$S(x)=\max\\{T_{H}(x_{\sigma(i)},m(A_{i}))\mid i=1,\dots,n\\}$
#### II-A7 Ordered Weighted Averaging operators (OWA) [28]
$\overrightarrow{w}=(w_{1},...,w_{n})\in[0,1]^{n}$ is called a weighting
vector if $\sum_{i=1}^{n}w_{i}=1$. The OWA operator associated to
$\overrightarrow{w}$ is the mapping
OWA${}_{\overrightarrow{w}}:[0,1]^{n}\rightarrow[0,1]$ defined for every
$(x_{1},...,x_{n})\in[0,1]^{n}$ by:
$OWA(x_{1},...,x_{n})=w_{1}x_{\gamma(1)}+...+w_{n}x_{\gamma(n)}$
where $\gamma:\\{1,...,n\\}\rightarrow\\{1,..,n\\}$ is a permutation such
that: $x_{\gamma(1)}\geq x_{\gamma(2)}\geq...\geq x_{\gamma(n)}$.
The weight vector can be computed using a quantifier function, $Q$. For this
study, we have used the following one:
$w_{i}=Q(\frac{i}{n})-Q(\frac{i-1}{n})$ $Q_{a,b}(i)=\begin{cases}0,&\mbox{if
}i<a\\\ 1,&\mbox{if }i>b\\\ \frac{i-a}{b-a},&\mbox{otherwise}\end{cases}$
where $a,b\in[0,1]$. Depending on the value of the parameters $a$ and $b$,
different weight vectors can be obtained. We have used three different ones:
* •
OWA1: $a=0.1,b=0.5$
* •
OWA2: $a=0.5,b=1$
* •
OWA3: $a=0.3,b=0.8$
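The quantifier-based weights and the OWA operator above can be sketched as follows:

```python
import numpy as np

def quantifier(t, a, b):
    """Q_{a,b} from the definition above."""
    if t < a:
        return 0.0
    if t > b:
        return 1.0
    return (t - a) / (b - a)

def owa(x, a, b):
    """OWA operator with weights w_i = Q(i/n) - Q((i-1)/n)."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]   # decreasing order
    n = len(x)
    w = np.array([quantifier(i / n, a, b) - quantifier((i - 1) / n, a, b)
                  for i in range(1, n + 1)])
    return float(w @ x)
```

Since the weights sum to one, the OWA of a constant vector returns that constant; e.g. `owa([1.0]*4, 0.1, 0.5)` (the OWA1 setting) is `1.0`.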
#### II-A8 Overlap functions [33]
A n-dimensional overlap, $G$, is a $[0,1]^{n}\rightarrow[0,1]$ function that
holds:
* •
Is commutative.
* •
If $\prod_{i=1}^{n}x_{i}=0$, then $G(x)=0$.
* •
If $\prod_{i=1}^{n}x_{i}=1$, then $G(x)=1$.
* •
$G$ is increasing.
* •
$G$ is continuous.
The minimum function, for example, is an overlap function. We have also
considered three more:
* •
Harmonic Mean (HM): $\frac{n}{\sum_{i=1}^{n}\frac{1}{x_{i}}}$
* •
Sinus Overlap (SO): $\sin\left(\frac{\pi}{2}\prod_{i=1}^{n}x_{i}\right)$
* •
Geometrical Mean (GM): $\sqrt[n]{\prod_{i=1}^{n}x_{i}}$
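The three generalized overlaps above are simple to implement; a sketch, where the zero-input case of the harmonic mean is handled by its limit value 0:

```python
import numpy as np

def harmonic_mean(x):
    x = np.asarray(x, dtype=float)
    return 0.0 if np.any(x == 0.0) else float(len(x) / np.sum(1.0 / x))

def sinus_overlap(x):
    return float(np.sin(np.pi / 2.0 * np.prod(x)))

def geometric_mean(x):
    x = np.asarray(x, dtype=float)
    return float(np.prod(x) ** (1.0 / len(x)))
```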
### II-B Traditional BCI Framework
The traditional BCI system structure includes four parts:
1. 1.
The first step is acquiring the EEG data from the commercial EEG device and
performing band-pass filtering and artefact removal on the collected EEG
signals.
2. 2.
The second step is EEG feature transformation and feature extraction. Usually,
the FFt is used to transform the EEG signals into their different frequency
components [40]. The FFt analysis transforms the time-series EEG signals in
each channel into its constituent frequencies. Following the procedure in [36,
18], we cover the frequency range 1-30 Hz. We study the 1-3 Hz frequencies for
the delta ($\delta$) wave band, the 4-7 Hz frequencies for the theta
($\theta$) wave band, the 8-13 Hz frequencies for the alpha ($\alpha$) wave
band, the 14-30 Hz frequencies for the beta ($\beta$) wave band, and the full
1-30 Hz range for the All band [41], using a 50-point moving window segment
overlapping 45 data points.
3. 3.
Subsequently, the CSP was used for feature extraction to extract the maximum
spatial separability from the different EEG signals corresponding to the
control commands. The CSP is a well-known supervised mathematical procedure
commonly used in EEG signal processing. The CSP is used to transform
multivariate EEG signals into well-separated subcomponents with maximum
spatial variation using the labels for each example [42], [43], [44].
4. 4.
Last, pattern classification is performed on the CSP features signals using an
ensemble of classifiers to differentiate the commands. Each base classifier is
trained using a different wave band (for instance, if the base classifier is
the LDA, the ensemble would be composed of: $\delta-LDA$, $\theta-LDA$,
$\alpha-LDA$, $\beta-LDA$, and $All-LDA$) and the final decision is taken
combining all of them. The most common way of obtaining the final decision is
to compute the arithmetic mean of the outputs of all the base classifiers
(each one provides a probability for each class) and take the class with the
highest aggregated value. The most common classifiers used for this task are
Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA)
and the k-nearest neighbours classifier (KNN) [45].
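The band extraction of step 2 can be sketched as below. The sampling rate and the use of a plain FFT mask (rather than the exact 50-point moving-window procedure of the paper) are assumptions for illustration:

```python
import numpy as np

FS = 250  # Hz; hypothetical sampling rate, for illustration only
BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "all": (1, 30)}

def band_signal(x, band, fs=FS):
    """Zero out the FFT components outside `band` and transform back."""
    lo, hi = BANDS[band]
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))
```

For instance, a pure 10 Hz oscillation survives the alpha (8-13 Hz) mask unchanged but is removed entirely by the beta (14-30 Hz) mask.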
### II-C Multimodal Fuzzy Framework
The Multi-modal Fuzzy Framework (MFF) is proposed in [3]. It follows a similar
structure to that of the traditional BCI framework: it starts with the EEG
measurements, computes the FFT transformation to the frequency domain and
uses the CSP transform to obtain a set of features with which to train the
classifiers.
However, in the MFF it is necessary to train not one, but three classifiers
for each wave band: an LDA, a QDA and a KNN. We name the classifiers according
to their classifier type and the wave band used to train them. For instance,
for the $\delta$ band we would have $\delta-LDA$, $\delta-QDA$ and
$\delta-KNN$.
Then, the decision making is performed in two phases:
1. 1.
Frequency phase: since we have an LDA, a QDA and a KNN for each wave band, the
first step is to fuse the outputs of these classifiers across the wave bands.
For example, in the case of the LDA classifiers, we have a $\delta-LDA$,
$\theta-LDA$, $\alpha-LDA$, $\beta-LDA$ and $All-LDA$ that will be fused using
an aggregation function to obtain a vector, $FF-LDA$. That is, the same
process explained for the traditional framework is applied, but without making
the final decision. We do the same with the QDA and KNN classifiers. The
result of this phase is a list of collective vectors (one per type of
classifier).
2. 2.
Classifier phase: in this phase, the input is the list of collective vectors
given by each different kind of classifier ($FF-LDA$, $FF-QDA$, $FF-KNN$)
computed in the frequency phase. We fuse the three vectors class by class, and
the result is a vector containing the score of each class for
the given sample. As in the traditional framework, the decision is made in
favour of the class associated with the largest value.
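The two phases above can be sketched as follows. The dictionary layout and band names are assumptions, and the arithmetic mean stands in for whichever aggregation function is chosen:

```python
import numpy as np

def two_phase_fusion(scores, freq_agg=np.mean, clf_agg=np.mean):
    """scores[clf][band] is the per-class score vector of one classifier.

    Frequency phase: fuse the wave bands of each classifier type into a
    collective vector. Classifier phase: fuse the collective vectors and
    pick the class with the largest aggregated score.
    """
    collective = {clf: freq_agg(np.stack([np.asarray(v, dtype=float)
                                          for v in bands.values()]), axis=0)
                  for clf, bands in scores.items()}
    fused = clf_agg(np.stack(list(collective.values())), axis=0)
    return int(np.argmax(fused))
```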
We must point out that the same aggregation is used for both the frequency and
the classifier phases.
The aggregation functions tested in the MFF are the Choquet integral, the CF
integral using the Hamacher T-norm, the $CF_{min,min}$ generalizations, the
Sugeno integral and the Hamacher Sugeno integral. We used the cardinal fuzzy
measure for all of them [46].
## III Enhanced Fusion Framework
Figure 1: Architecture of the proposed Enhanced Fusion Framework.
Our aim with the Enhanced Multimodal Fusion Framework (EMF) is to build upon
the foundations of the MFF in order to improve its experimental results.
Starting from the MFF, we add a new band as well as a new signal
pre-processing phase known as differentiation. Furthermore, we consider more
classifiers and a wider set of aggregation functions for the decision making
process. Finally, we give more flexibility to the decision making process
by allowing the aggregation function to differ between the two stages.
A schematic view of the whole EMF classification process is in Fig. 1. The new
components compared to the MFF in [3] can be seen in the figure: the SMR in
the FFt phase, the differentiation phase, and the two new classifiers (SVM and
GP). In the following sections we present in detail each component added in
the EMF framework.
### III-A Wave bands
For the EMF we have considered all the wave bands used in the traditional BCI
framework: $\delta$, $\theta$, $\alpha$, $\beta$ and $All$. However, we have
also added the SensoriMotor Rhythm (SMR), which covers the $13-15$Hz
frequencies [47].
Regarding the nature of ERD/ERS, movement or preparation for movement is
typically accompanied by a decrease in mu and beta rhythms, particularly
contralateral to the movement [48]. This phenomenon has
been named event-related desynchronization (ERD) [49]. Its opposite, a rhythm
increase, or event-related synchronization (ERS), occurs after movement and
with relaxation [50]. We therefore expect the activity present in this band
to be closely related to the studied task.
### III-B Signal pre-processing and feature extraction
The data acquisition process is similar to that of the other BCI frameworks.
First, we obtain the EEG data; second, we apply the FFT. Then, we add a new
component, the differentiation of the signals. Finally, we compute a CSP of
25 components on the differenced signals to extract the features from which we
will train the classifiers.
The idea of applying the differentiation comes from a related area,
neuroscience. In [51] the authors measure the activations of a moving
C.Elegans using the luminescence of each neuron during a series of trials.
Alongside the paper, the corresponding dataset was released. This dataset was
composed of real numbers quantifying the luminescence of each neuron rather
than a binary activated/deactivated classification, since no straightforward
method to decide whether a neuron was activated was found. The authors in
[52], based on this same C.Elegans dataset, stated that the real activations
of the neurons should be computed not from the absolute value of the
luminescence, but only from the spikes of the signal. They attributed the
large changes in the trend of the time series to artefacts in the measures.
So, they applied a high-pass filter to the signal (Fig. 2).
Figure 2: Neuronal luminescence for a single neuron in a C.Elegans during free
movement in one trial, [51].
EEG data and the neuron luminescence, although different in nature, are both
time series data [53]. A time series is composed of three components: the
trend, the seasonal component and the random component [54], which can be
observed for the average of all of our wave bands in Fig. 3. The high-pass
Butterworth filter used in [52] is a way to remove the trend from the time
series while keeping the spikes. We have decided to do something similar with
the EEG signal in order to extract the spikes in the wave bands, which are
similar to the random component of the time series. However, instead of
using high-pass Butterworth filtering, we have used a simpler procedure,
differentiation, to avoid having to tune additional parameters for the
filtering. In Fig. 4 we show the resulting signal of the differentiation
process.
Figure 3: Signal comparison using the average over all the waves for Subject
16. Original time series and its decomposed components.
Figure 4: Signal comparison using the average over all the waves for Subject
16 after being differentiated. Original differentiated time series and its
decomposed components.
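The differentiation itself reduces to first-order differencing; a minimal sketch:

```python
import numpy as np

def differentiate(x):
    """First-order differencing: removes the slow trend of the band signal
    while preserving its spikes, a parameter-free alternative to a
    high-pass filter."""
    return np.diff(np.asarray(x, dtype=float))
```

A pure linear trend differences to a constant, while a spike superimposed on the trend survives the operation.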
### III-C Classifiers
In the MFF the classifier types are LDA, QDA and KNN. They were chosen because
they are “weak” classifiers; it is the aggregation process that produces
the “strong” decision.
In the EMF we have decided to test two more types of classifiers: Support
Vector Machines (SVM) [55] and Gaussian Processes (GP) [56]. These classifiers
are by themselves generally more accurate than the three classifiers used in
the MFF, which may increase the final accuracy of the system. However, it
makes the decision making process less diverse: the higher the
accuracy of each individual classifier, the less novel information each of
them contributes to the final decision.
### III-D Decision Making
In the original MFF, the authors studied the effects of the arithmetic mean
operator and the following aggregation functions:
* •
Choquet Integral and two generalizations: CFmin,min and CF using the Hamacher
T-norm.
* •
Sugeno Integral and Sugeno using the Hamacher T-norm.
The same aggregation function was used in both the frequency-phase and the
classifier-phase fusion steps.
For the EMF we have considered a wider set of aggregation functions, more
precisely, all the aggregation functions presented in section II. That
includes the classical ones, Choquet and Sugeno Integrals alongside their
generalizations, OWA operators, and n-ary overlap functions.
We have also added an extra degree of freedom to this process: the frequency
fusion phase and classifier fusion phase can use a different aggregation
function. We allow it because the aim in each phase is different. In the case
of the frequency fusion phase, we are fusing outputs of classifiers from the
same type, so their predictions are of the same nature and we are building a
new collective vector. In the case of the classifier fusion phase, we are
fusing different types of classifiers (even if their outputs are normalized
to the $[0,1]$ scale) and we want to make the final decision, not only
build another collective vector.
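Among the aggregation functions considered, the OWA operators deserve a quick illustration, since they weight the inputs after reordering them; a sketch under the standard Yager definition (the exact weight vectors behind OWA1-OWA3 in the experiments are not reproduced here):

```python
import numpy as np

def owa(x, w):
    """Ordered Weighted Averaging: the weights are applied to the inputs
    sorted in descending order, not to the inputs themselves."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    w = np.asarray(w, dtype=float)
    return float(np.dot(x, w / w.sum()))
```

With weight vector $(1,0,\dots,0)$ OWA recovers the maximum, with $(0,\dots,0,1)$ the minimum, and with uniform weights the arithmetic mean.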
## IV Experimental results for the Enhanced Multimodal Fusion Framework on
the UTS MI dataset
We have evaluated the EMF using the MI dataset collected by the University of
Technology Sydney (UTS), following the same procedure as in [3]. This dataset
consists of twenty participants. Each of them performed a total of forty
trials in which they were asked to imagine moving the left or the right hand,
half of them corresponding to right and the other half to left, for
a total of 800 trials. EEG data was taken from the channels C3, C4, CP3 and
CP4. We have used a CSP with 3, 4, 6, 15, 3 and 25 components, respectively,
for the $\delta$, $\theta$, $\alpha$, $\beta$, SMR and All ($1-30Hz$) wave
bands. These values have been chosen empirically (Fig. 5).
We have applied a five-fold cross validation scheme to evaluate our results:
we have taken the 800 available trials and divided them into 5 different 80/20
train-test splits, reporting the final accuracy as the mean of the accuracies
over the test splits. Although the results shown here cover the totality of
the dataset, results for each individual subject are
available online at: https://github.com/Fuminides/BCI_Results.
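The split procedure can be sketched as follows (the random seed is an assumption; the original splits are not published):

```python
import numpy as np

def five_fold_indices(n_trials=800, seed=0):
    """Yield 5 disjoint 80/20 train-test splits of the trial indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_trials)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test
```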
Figure 5: Accuracy for the Traditional SVM BCI Framework using signal differentiation, according to a different maximum number of CSP filters for each band.
Framework | Accuracy
---|---
Trad. SVM | $67.07$
Trad. LDA | $72.24$
Trad. QDA | $73.82$
Trad. KNN | $68.87$
Trad. GP | $72.47$
MFF | $76.96$
EMF | $88.86$
TABLE I: Performance for each BCI framework in the UTS dataset.
Framework | Mean agg. | Best Frequency Fusion
---|---|---
Diff-traditional SVM | $79.98$ | $80.67$
Diff-traditional LDA | $67.30$ | $74.38$
Diff-traditional QDA | $72.13$ | $83.25$
Diff-traditional KNN | $86.06$ | $86.06$
Diff-traditional GP | $85.95$ | $86.51$
TABLE II: Performance for the traditional BCI frameworks using the
differentiation. We compare the usage of the base aggregation (the arithmetic
mean) against the best possible one.
In Table I we compare the results for the traditional framework, the MFF and
our new proposal, the EMF. For the traditional framework we have used the 5
classifiers considered in this work (LDA, QDA, KNN, SVM and GP), and in the
case of both the MFF and the EMF we have reported the result of the best
aggregation (we will show their influence later). Looking at these results, we
can observe a remarkable improvement with the EMF
compared to any of the other frameworks, since it improves on the MFF by
almost $12\%$ and on the best traditional framework (Trad. QDA) by more than
$15\%$.
The EMF offered two main differences compared to the MFF and the traditional
approaches: the differentiation phase and the enhanced frequency and
classifier fusion phases. To test how much percentage of improvement comes
from each one, we have computed the traditional BCI framework using the
differentiation signal phase. We named this configuration the
“Diff-traditional” framework. In the traditional framework (and hence also in
the Diff-traditional one), the final decision is taken by fusing the
information from each classifier using the arithmetic mean. Then, to see the
improvement that can be achieved in the diff-traditional framework using
different aggregation functions, we have tried all the aggregation functions
considered in this paper and selected the best one in terms of accuracy.
The results of these tests are in Table II. In the first column we specify the
classifier used for each diff-traditional framework configuration. In the
second column we specify the accuracy when using the arithmetic mean
aggregation (Mean agg.), and in the third, the accuracy of
the best possible aggregation function (Best Frequency Fusion). We have found
that differentiation alone is capable of boosting the performance of the
SVM, KNN and GP by more than $10\%$. In the cases where differentiation
was not very successful (LDA and QDA), the aggregation phase obtained gains in
accuracy similar to those that differentiation produced in the other cases.
In Table III we show the results for each pair of aggregations used in the
frequency (rows) and in the classifier (columns) fusion phases using the EMF.
Depending on the aggregations used, the accuracy can vary from $\sim 85\%$ to
$\sim 88\%$, although there are some combinations which result in a really
poor interaction between the frequency and the classifier aggregation
functions. Therefore, we can conclude that in general they provide competitive
results, since most of the combinations improve on the results of the MFF
($76.96\%$).
Then, we also test the best possible performance of each individual wave band,
which is detailed in Table IV. The process is the same as in the standard EMF,
but using only one wave band, so there is no classifier fusion-phase. In the
first column we show the different wave bands used, in the second column the
combination of classifiers that works best for this wave band, the third
column is the aggregation used to fuse the information from the classifiers,
and the last one shows the accuracy obtained. For example, in the case of the
$\delta$ wave band, the best result is obtained using a SVM-$\delta$ and a
KNN-$\delta$ classifier and fusing their results using any aggregation
function, as all of them result in the same accuracy in this case. We must
stress that the $\beta$ channels alone can lead up to $\sim 86\%$ accuracy
using Gaussian Process and KNN classifiers, and the All wave band can achieve
$89.77\%$ using only a Gaussian Process classifier (so no aggregation process
is made). The good performance of the $\beta$ band is in line with the results
in [57] and [58]. The SMR band did not perform better than the alpha or beta
wave bands, in spite of the results regarding MI reported in [47].
Finally, in Table V we show the resulting Information Transfer Rate (ITR) for
the MFF and the EMF. The ITR measures the efficiency of the system and is
measured in bits per trial. It is computed using the following formulas [48]:
$B=\log_{2}N+P\times\log_{2}P+(1-P)\times\log_{2}\left(\frac{1-P}{N-1}\right)$
$Q\left(\frac{\text{trials}}{\text{min}}\right)=\frac{S}{T}$ $ITR=B\times Q,$
where $N$ is the number of target classes (2 in this case), $S$ is the number
of observations, $T$ is the elapsed time in minutes and $P$ is the accuracy
rate. So, the higher the accuracy and the lower the computing time, the better
this index will be. From these results we can confirm again that the EMF,
using the best combination of aggregations in terms of accuracy, surpasses the
MFF.
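As a worked check of the formulas above (a sketch; the function and argument names are ours):

```python
import numpy as np

def itr_bits_per_min(p, n_classes, trials_per_min):
    """Wolpaw ITR: bits per trial B times trials per minute Q."""
    if not 0 < p < 1:
        raise ValueError("accuracy p must lie strictly between 0 and 1")
    b = (np.log2(n_classes) + p * np.log2(p)
         + (1 - p) * np.log2((1 - p) / (n_classes - 1)))
    return float(b * trials_per_min)
```

Note that chance-level accuracy on a binary task ($P=0.5$, $N=2$) yields $B=0$, i.e. the system transfers no information.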
| Mean | Median | Choquet | CFm,m | Sugeno | H. Sugeno | F-Sugeno | Min | Max | CF1,F2 | OWA1 | OWA2 | OWA3 | CF | GM | SO | HM
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Mean | 87.08 | 85.73 | 87.08 | 86.85 | 86.85 | 86.97 | 87.42 | 87.87 | 87.87 | 87.98 | 87.08 | 87.08 | 86.52 | 87.08 | 87.98 | 88.31 | 88.31
Median | 86.85 | 83.03 | 86.52 | 84.04 | 83.93 | 86.63 | 86.52 | 86.07 | 86.07 | 86.29 | 86.18 | 86.29 | 84.83 | 85.84 | 86.40 | 86.40 | 86.29
Choquet | 87.08 | 85.73 | 87.75 | 86.74 | 86.74 | 87.19 | 88.09 | 88.76 | 86.97 | 88.09 | 86.63 | 87.98 | 86.63 | 87.75 | 88.31 | 88.31 | 88.31
CFm,m | 87.30 | 83.37 | 87.19 | 84.72 | 84.72 | 86.52 | 87.30 | 86.18 | 83.82 | 69.10 | 85.84 | 86.97 | 84.94 | 85.73 | 88.09 | 87.30 | 87.64
Sugeno | 87.30 | 83.37 | 87.19 | 84.72 | 84.72 | 86.52 | 87.30 | 86.18 | 83.82 | 69.10 | 85.84 | 86.97 | 84.94 | 85.73 | 88.09 | 87.30 | 87.64
H. Sugeno | 87.64 | 85.84 | 87.30 | 86.97 | 86.97 | 86.52 | 87.98 | 87.19 | 83.82 | 87.98 | 86.52 | 87.98 | 86.85 | 87.98 | 87.98 | 88.20 | 87.87
F-Sugeno | 87.30 | 86.18 | 87.98 | 86.97 | 86.97 | 87.30 | 88.09 | 88.76 | 86.29 | 88.09 | 86.74 | 87.98 | 87.08 | 87.75 | 88.20 | 88.09 | 87.87
Min | 87.53 | 86.74 | 87.98 | 87.08 | 87.08 | 87.75 | 88.09 | 87.53 | 83.93 | 87.98 | 86.52 | 87.87 | 87.75 | 87.98 | 87.42 | 87.42 | 87.75
Max | 87.53 | 86.74 | 86.97 | 84.94 | 84.94 | 85.39 | 86.63 | 83.93 | 87.53 | 86.40 | 87.75 | 85.96 | 86.85 | 86.97 | 86.97 | 86.97 | 86.63
CF1,F2 | 87.30 | 85.84 | 87.64 | 86.52 | 86.52 | 87.19 | 13.26 | 88.54 | 64.04 | 12.13 | 86.74 | 87.75 | 86.40 | 86.18 | 88.54 | 74.16 | 88.31
OWA1 | 87.08 | 85.62 | 86.40 | 85.28 | 85.28 | 86.29 | 86.40 | 85.96 | 88.31 | 86.85 | 87.30 | 86.40 | 85.96 | 86.63 | 86.85 | 86.63 | 86.63
OWA2 | 87.08 | 85.73 | 87.98 | 87.08 | 87.08 | 87.08 | 88.09 | 88.31 | 85.73 | 88.31 | 86.52 | 88.20 | 86.97 | 87.53 | 88.20 | 88.20 | 87.87
OWA3 | 86.52 | 83.37 | 86.85 | 84.38 | 84.38 | 85.84 | 87.75 | 88.09 | 87.30 | 87.87 | 86.18 | 87.08 | 84.72 | 85.62 | 88.09 | 88.31 | 87.64
CF | 86.97 | 85.17 | 86.74 | 85.84 | 85.84 | 86.40 | 87.42 | 87.75 | 87.98 | 87.75 | 86.63 | 87.08 | 85.62 | 86.29 | 87.64 | 87.87 | 87.75
GM | 87.30 | 85.96 | 87.42 | 86.74 | 86.74 | 87.08 | 87.87 | 87.64 | 87.98 | 87.87 | 86.97 | 87.64 | 86.52 | 87.30 | 87.53 | 87.64 | 87.53
SO | 87.30 | 85.96 | 87.42 | 86.74 | 86.74 | 87.08 | 86.29 | 87.64 | 87.98 | 13.71 | 86.97 | 87.64 | 86.52 | 87.30 | 87.53 | 87.64 | 87.53
HM | 87.30 | 86.07 | 87.53 | 86.74 | 86.74 | 87.19 | 87.75 | 88.09 | 87.87 | 87.87 | 86.74 | 87.64 | 86.52 | 87.42 | 87.64 | 87.64 | 87.87
TABLE III: Results for the EMF in the UTS dataset. Each row represents the aggregation function used in the frequency fusion phase, and each column the one used in the classifier fusion phase.
Wave Band | Classifiers | Aggregation | Accuracy
---|---|---|---
$\delta$ | SVM, KNN | Any | $70.44$
$\theta$ | SVM, KNN, GP | CFmin,min | $70.11$
$\alpha$ | KNN, GP | Any | $81.65$
$\beta$ | KNN, GP | Any | $86.17$
SMR | KNN | None | $73.03$
All | GP | None | $89.77$
TABLE IV: Single wave performance, using the best classifier combination for each individual channel.
Framework | Aggregation(s) | ITR (bit/min)
---|---|---
MFF | Choquet Integral | 901.40
EMF | Choquet, OWA3 | 1710.74
TABLE V: ITR comparison table.
Wave sets | Classifiers | Freq. Agg. | Class. Agg. | Accuracy
---|---|---|---|---
$\alpha$, $\beta$, All | QDA, KNN, GP | H. Sugeno | GM | 90.78
$\beta$, All | GP | Any* | Any* | 90.44
$\theta$, $\alpha$, $\beta$ | LDA, QDA, KNN, GP | H. Sugeno | Mean | 90.44
$\theta$, $\alpha$, $\beta$, All | QDA, KNN, GP | F-Sugeno | GM | 90.44
$\alpha$, $\beta$, All | QDA, KNN, GP | OWA1 | GM | 90.33
$\alpha$, $\beta$, All | QDA, KNN, GP | Mean | SO | 90.22
$\theta$, $\alpha$, $\beta$ | LDA, QDA, KNN, GP | CF1,F2 | Mean | 90.22
$\beta$, All | QDA, KNN | CF1,F2 | Min | 90.11
$\theta$, $\beta$, All | KNN, GP | CF | Choquet | 90.11
$\delta$, $\alpha$, $\beta$, All | KNN, GP | Min | Mean | 90.11
TABLE VI: Top 10 configurations in terms of accuracy for the UTS dataset,
including the aggregation functions and channels used. * Any aggregation gave
the same result.
### IV-A Optimal Enhanced Motor Fusion: Classifier and wave band selection in
the Enhanced Motor Fusion Framework
Diversity is an important part of an ensemble of classifiers [59]: if a set
of classifiers always gives the same output, its combination
will not give us new information. We are using a total of thirty classifiers
in the EMF: five for each band (LDA, QDA, KNN, SVM and GP) and a total of six
bands ($\delta$, $\theta$, $\alpha$, $\beta$, SMR, All). To measure the
diversity of this system we have used the Q-statistic [60], whose result is
0.99, meaning that the diversity in the output of these classifiers is scarce,
which might be impacting the performance of the system.
One simple way to improve the system diversity, and reduce the Q-statistic
value, is to remove components from the system, which is likely to improve the
accuracy. Since there are only 30 classifiers, we can compute all the possible
configurations of the system to see which combinations of classifiers and
wave bands are the most suitable in terms of accuracy. After
computing all the possible 1860 combinations of wave bands and classifiers, we
have determined the top configurations in terms of accuracy, which
are shown in Table VI. As suspected, reducing the number of components has a
positive impact on the performance: the best configuration
improves on the EMF by almost $2\%$. It also improves the ITR compared
to the previous tests, reaching a value of $2007.47$ bit/min. We have named
this optimal configuration of the EMF the “Optimal Enhanced Motor Fusion”
(OEMF).
## V Comparison on the BCI competition IV 2a dataset
In this section we discuss the behaviour of our new approaches in the BCI
competition IV dataset 2a, which is detailed in [61]. This dataset consists of
four classes of imaginary tasks: tongue, foot, left-hand and right-hand
performed by 9 volunteers. For each task, 22 EEG channels were collected.
There is a total of 288 trials for each participant, equally distributed
among the 4 classes.
For our experimental setup, we have used 4 out of the 22 channels available
(8, 12, 14, 18). As features, we have used the FFT to obtain the $\delta$,
$\theta$, $\alpha$, $\beta$, SMR and All bands, and the CSP filters are the
same as in Section IV. From each subject, we have generated twenty partitions
of the 288 trials consisting of 50% train (144 trials) and 50% test (144
trials) chosen randomly. Since we have 9 subjects, this produces a total of
180 different datasets. We do this in order to compute a population of
accuracies for each classifier, which allows us to calculate the statistical
significances among them.
We have studied this dataset from two different perspectives:
1. 1.
Taking the four classes to perform the classification task.
2. 2.
Taking only the left and right hand classes, so that the classification task
is the same as in the previous section.
We have not reported the accuracy for each individual class, as the
differences among them were not significant.
We show the results for the four-classes case in Fig. 6. We compare the same
set of traditional frameworks as in the UTS dataset, alongside the MFF, the
EMF and the OEMF. We can observe that the performance of EMF is $83.15\%$,
using the CF1,F2 and the harmonic mean functions in the frequency and
classifier based fusion phases, respectively, which is better than that of
both the traditional framework with all the classifiers and the MFF. The
performances of all the pairs of combinations of the aggregation functions for
the EMF can be seen in Table VII. When applying the OEMF, we obtain a
$85.62\%$ of accuracy using the LDA, QDA and KNN classifiers, and all the wave
bands available. This implies a notable enhancement over the results of the
EMF, which confirms the suitability of this procedure for tackling this task.
We have also tested the statistical significance of the results for all the
BCI frameworks using the multi-population Friedman test, comparing the
population of 180 accuracies for each framework. The multi-population
comparison gave us a P-value $<0.001$. Then, we applied the Nemenyi post-hoc
test to confirm that both the EMF and the OEMF are statistically better than
the rest of the methods, although they are not statistically different from
one another.
In Fig. 7 we show the results for the two classes problem, using the same
frameworks as in the four classes case. As expected, the accuracy of all
classifiers improves. The EMF improves by almost $3\%$ with respect to the
four classes problem. In this scenario, the best pair of aggregations for the
EMF is the Hamacher Sugeno and the Sin overlap. The rest of the results for
each pair of aggregations can be seen in Table VIII.
The best configuration obtained by the OEMF uses all the possible wave
bands and the QDA and KNN classifiers. This configuration presents a huge
improvement of more than $12\%$ over the EMF, reaching a total of $97.60\%$.
We have also studied the statistical differences among the frameworks, using
an analogue procedure to the one performed in the four-classes problem. We
have found that the EMF and the OEMF are statistically better than the rest of
the frameworks. In this case, the OEMF is also statistically better than the
EMF.
We have also studied the influence of the pair of aggregations used in our
new framework, since it clearly affects the obtained results. There
are some combinations which tend to perform always near the best possible
performance, and others that do the opposite. Our results show that, usually,
the best combination is using a Sugeno/Choquet integral in the frequency
fusion phase and an overlap function in the classifier fusion phase, which can
be appreciated in Tables III, VII, VIII.
Finally, we have compared our results in the four classes task with three
other kinds of BCI frameworks in Table IX. In [62] the authors combine
Riemannian geometry with a Linear SVM. In [63] the authors optimize the time
interval for each subject to obtain the optimal set of features to
discriminate between tasks, and then apply a bio-inspired algorithm to
optimize the CSP features and SVM parameters. And the authors in [64] use the
Dempster–Shafer theory to fuse the output of different LDA classifiers. We
found that the results obtained by the EMF are higher than the other three BCI
frameworks.
Framework | Accuracy
---|---
Trad. SVM | $60.67\pm 4.23$
Trad. LDA | $69.94\pm 4.05$
Trad. QDA | $47.93\pm 2.82$
Trad. KNN | $60.56\pm 4.20$
Trad. GP | $53.04\pm 4.60$
MFF | $67.96\pm 2.19$
EMF | $83.15\pm 3.02$
OEMF | $85.40\pm 3.03$
P-values | SVM | LDA | QDA | KNN | GP | MFF | EMF | OEMF
---|---|---|---|---|---|---|---|---
SVM | $-$ |  |  |  |  |  |  |
LDA | $*$ | $-$ |  |  |  |  |  |
QDA | $+$ | $+$ | $-$ |  |  |  |  |
KNN | $.9$ | $+$ | $*$ | $-$ |  |  |  |
GP | $+$ | $+$ | $.28$ | $+$ | $-$ |  |  |
MFF | $*$ | $.9$ | $*$ | $*$ | $*$ | $-$ |  |
EMF | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $-$ |
OEMF | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $.84$ | $-$
Figure 6: Accuracy comparison for the BCI Competition IV dataset 2a using the
four classes. $*$ represents a p-value less than 0.001, favouring the
framework in the row. $+$ represents a p-value less than 0.001, favouring the
framework in the column.
Framework | Accuracy
---|---
Trad. SVM | $73.97\pm 4.79$
Trad. LDA | $80.75\pm 4.42$
Trad. QDA | $67.15\pm 4.49$
Trad. KNN | $71.41\pm 4.90$
Trad. GP | $66.84\pm 4.32$
MFF | $81.15\pm 1.32$
EMF | $85.83\pm 1.68$
OEMF | $97.60\pm 1.03$
P-values | SVM | LDA | QDA | KNN | GP | MFF | EMF | OEMF
---|---|---|---|---|---|---|---|---
SVM | $-$ |  |  |  |  |  |  |
LDA | $*$ | $-$ |  |  |  |  |  |
QDA | $+$ | $+$ | $-$ |  |  |  |  |
KNN | $+$ | $+$ | $.04$ | $-$ |  |  |  |
GP | $+$ | $+$ | $*$ | $+$ | $-$ |  |  |
MFF | $*$ | $.9$ | $*$ | $*$ | $*$ | $-$ |  |
EMF | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $-$ |
OEMF | $*$ | $*$ | $*$ | $*$ | $*$ | $*$ | $.01$ | $-$
Figure 7: Accuracy comparison for the BCI Competition IV dataset 2a using only the left-hand and right-hand classes. $*$ represents a p-value less than 0.001, favouring the framework in the row. $+$ represents a p-value less than 0.001, favouring the framework in the column.
| Mean | Median | Choquet | CFm,m | Sugeno | H. Sugeno | F-Sugeno | Min | Max | CF1,F2 | OWA1 | OWA2 | OWA3 | CF | GM | SO | HM
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Mean | 78.38 | 70.69 | 77.48 | 69.10 | 69.10 | 70.70 | 77.46 | 79.36 | 78.52 | 79.40 | 77.13 | 76.21 | 73.74 | 75.59 | 81.55 | 82.49 | 81.20
Median | 78.04 | 70.12 | 76.71 | 67.61 | 67.60 | 69.41 | 76.56 | 76.88 | 76.57 | 77.61 | 76.49 | 75.15 | 73.58 | 75.39 | 78.58 | 78.58 | 77.80
Choquet | 77.46 | 70.09 | 77.98 | 68.43 | 68.43 | 70.69 | 78.60 | 81.41 | 76.44 | 79.92 | 75.90 | 77.46 | 73.76 | 74.93 | 82.57 | 82.79 | 82.40
CFm,m | 78.01 | 70.12 | 76.71 | 67.61 | 67.60 | 69.41 | 77.46 | 73.50 | 72.93 | 64.30 | 75.30 | 74.96 | 73.58 | 75.28 | 78.58 | 80.34 | 77.80
Sugeno | 78.01 | 70.12 | 76.71 | 67.61 | 67.60 | 69.41 | 77.46 | 73.52 | 72.93 | 64.30 | 75.30 | 74.96 | 73.58 | 75.28 | 78.58 | 80.34 | 77.80
H. Sugeno | 76.08 | 68.73 | 77.47 | 65.92 | 65.85 | 70.23 | 78.00 | 77.03 | 70.55 | 79.58 | 71.93 | 76.26 | 73.12 | 73.25 | 80.22 | 82.50 | 79.33
F-Sugeno | 76.88 | 69.70 | 77.93 | 68.01 | 68.01 | 70.48 | 78.80 | 82.09 | 75.24 | 80.10 | 74.92 | 77.69 | 73.55 | 74.47 | 82.84 | 82.71 | 82.72
Min | 74.08 | 67.19 | 76.70 | 65.13 | 65.06 | 68.80 | 78.02 | 81.58 | 66.08 | 79.56 | 69.92 | 77.21 | 71.58 | 71.60 | 81.28 | 81.28 | 81.54
Max | 77.74 | 67.61 | 74.15 | 65.70 | 65.70 | 67.01 | 71.96 | 64.58 | 78.12 | 73.81 | 75.68 | 69.27 | 70.32 | 73.67 | 76.02 | 76.01 | 73.16
CF1,F2 | 77.43 | 70.16 | 77.93 | 68.55 | 68.55 | 70.63 | 78.08 | 82.71 | 77.11 | 2.69 | 75.95 | 77.42 | 73.74 | 74.96 | 82.64 | 71.93 | 83.15
OWA1 | 79.11 | 70.65 | 76.33 | 68.66 | 68.66 | 69.97 | 75.55 | 73.78 | 79.22 | 76.87 | 77.65 | 73.79 | 73.23 | 75.90 | 78.18 | 78.28 | 76.85
OWA2 | 76.83 | 69.53 | 77.87 | 67.70 | 67.70 | 70.48 | 78.80 | 82.00 | 74.33 | 80.09 | 74.40 | 77.77 | 73.58 | 74.52 | 82.52 | 82.63 | 82.34
OWA3 | 78.29 | 70.28 | 77.26 | 68.36 | 68.36 | 70.13 | 77.40 | 79.18 | 77.23 | 78.60 | 76.81 | 75.84 | 73.91 | 75.67 | 79.87 | 79.75 | 79.91
CF | 78.78 | 70.52 | 76.54 | 68.82 | 68.82 | 69.75 | 77.18 | 76.80 | 77.87 | 79.53 | 77.35 | 74.20 | 73.36 | 75.98 | 80.41 | 82.26 | 79.93
GM | 78.05 | 70.51 | 77.72 | 69.05 | 69.05 | 70.69 | 78.25 | 81.52 | 77.81 | 79.60 | 76.73 | 77.10 | 73.78 | 75.47 | 80.95 | 81.06 | 81.53
SO | 78.05 | 70.51 | 77.72 | 69.05 | 69.05 | 70.69 | 78.25 | 81.52 | 77.81 | 3.34 | 76.73 | 77.10 | 73.78 | 75.47 | 80.95 | 81.06 | 81.53
HM | 77.73 | 70.41 | 77.85 | 68.88 | 68.88 | 70.62 | 78.47 | 81.62 | 76.55 | 79.68 | 76.33 | 77.46 | 73.76 | 75.29 | 81.31 | 81.27 | 81.66
TABLE VII: Performance in the EMF for each pair of aggregations, for the BCI competition IV 2a dataset using the four classes. Each row represents the aggregation function used in the frequency fusion phase, and each column the one used in the classifier fusion phase.
| Mean | Median | Choquet | CFm,m | Sugeno | H. Sugeno | F-Sugeno | Min | Max | CF1,F2 | OWA1 | OWA2 | OWA3 | CF | GM | SO | HM
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Mean | 81.24 | 77.81 | 81.24 | 77.87 | 77.87 | 79.13 | 81.27 | 82.28 | 82.28 | 83.17 | 80.31 | 81.24 | 79.37 | 80.41 | 83.54 | 84.83 | 83.46
Median | 78.89 | 75.43 | 79.42 | 76.31 | 76.31 | 79.00 | 79.51 | 79.92 | 79.92 | 78.87 | 78.52 | 79.20 | 77.19 | 78.36 | 79.15 | 79.15 | 79.39
Choquet | 81.24 | 77.81 | 82.17 | 78.37 | 78.37 | 80.14 | 82.57 | 83.67 | 79.48 | 83.92 | 78.74 | 82.53 | 79.78 | 80.56 | 84.98 | 85.21 | 84.88
CFm,m | 80.37 | 77.18 | 81.54 | 77.01 | 76.99 | 79.14 | 83.48 | 79.95 | 75.96 | 56.37 | 76.59 | 80.85 | 79.12 | 79.54 | 81.67 | 85.67 | 81.64
Sugeno | 80.37 | 77.18 | 81.54 | 77.01 | 76.99 | 79.14 | 83.48 | 79.94 | 75.96 | 56.37 | 76.59 | 80.85 | 79.12 | 79.54 | 81.67 | 85.67 | 81.62
H. Sugeno | 81.39 | 78.11 | 82.15 | 78.11 | 78.11 | 80.28 | 83.26 | 81.34 | 73.98 | 84.83 | 76.24 | 82.04 | 80.37 | 81.35 | 83.33 | 85.83 | 82.67
F-Sugeno | 81.45 | 78.05 | 82.63 | 78.58 | 78.58 | 80.41 | 83.11 | 84.00 | 78.49 | 84.33 | 78.17 | 82.88 | 80.12 | 80.81 | 85.24 | 85.39 | 85.19
Min | 81.69 | 78.06 | 83.58 | 77.99 | 77.99 | 80.37 | 83.98 | 85.40 | 73.98 | 84.83 | 76.22 | 83.56 | 80.68 | 81.48 | 85.83 | 85.83 | 85.73
Max | 81.69 | 78.06 | 78.13 | 75.49 | 75.50 | 74.77 | 76.78 | 73.98 | 85.40 | 79.74 | 82.97 | 76.08 | 78.78 | 80.98 | 80.17 | 80.17 | 78.69
CF1,F2 | 80.42 | 77.16 | 82.29 | 77.89 | 77.89 | 80.12 | 18.90 | 85.05 | 68.37 | 16.80 | 80.27 | 82.75 | 79.01 | 79.33 | 85.04 | 44.28 | 85.49
OWA1 | 80.79 | 77.70 | 78.79 | 75.88 | 75.88 | 77.02 | 78.24 | 78.13 | 83.17 | 79.38 | 81.73 | 77.70 | 78.25 | 80.09 | 79.74 | 79.73 | 78.92
OWA2 | 81.24 | 77.81 | 82.58 | 78.47 | 78.47 | 80.49 | 82.96 | 83.74 | 77.78 | 84.20 | 77.49 | 82.92 | 80.06 | 80.55 | 85.08 | 85.33 | 85.03
OWA3 | 79.93 | 76.58 | 80.37 | 77.33 | 77.33 | 79.36 | 80.73 | 81.32 | 80.29 | 80.94 | 78.98 | 80.49 | 78.36 | 79.47 | 80.64 | 80.84 | 81.04
CF | 80.89 | 77.74 | 79.67 | 76.83 | 76.83 | 77.72 | 81.67 | 81.37 | 81.63 | 83.79 | 80.55 | 79.70 | 78.64 | 80.27 | 83.52 | 84.75 | 83.54
GM | 81.28 | 77.89 | 81.63 | 77.95 | 77.95 | 79.44 | 82.16 | 85.46 | 81.44 | 83.49 | 80.01 | 82.09 | 79.45 | 80.37 | 84.80 | 85.02 | 85.42
SO | 81.28 | 77.89 | 81.63 | 77.95 | 77.95 | 79.44 | 78.49 | 85.46 | 81.44 | 16.51 | 80.01 | 82.09 | 79.45 | 80.37 | 84.80 | 85.02 | 85.42
HM | 81.35 | 78.00 | 82.23 | 78.02 | 78.02 | 79.78 | 82.81 | 85.75 | 80.56 | 83.70 | 79.77 | 82.80 | 79.63 | 80.52 | 85.07 | 85.03 | 85.45
TABLE VIII: Performance in the EMF for each pair of aggregations, for the BCI competition IV 2a dataset using only the left-right classes. Each row represents the aggregation function used in the frequency fusion phase, and each column the one used in the classifier fusion phase.
Framework | Accuracy
---|---
EMF | $83.15\%$
KLRRM + LSVM [62] | $74.43\%$
CSP/AM-BA-SVM [63] | $78.55\%$
Dempster-Shafer [64] | $81.93\%$
TABLE IX: Comparison of different BCI frameworks in the four classes task.
## VI Conclusions
In this paper, we proposed a new BCI framework, named the Enhanced Motor
Fusion framework (EMF). The EMF introduces three new ideas to improve the
performance of the Multimodal Fuzzy Fusion (MFF) BCI system:
1. A signal differentiation step.
2. Both a new wave band (SMR) and new types of classifiers: SVM and GP.
3. The use of different aggregation functions in each phase of the decision-making process.
Furthermore, we have also enlarged the set of aggregation functions used in
the decision-making process, including CF1,F2, overlap functions and the OWA
operators. We observe that the best results are obtained using a
Sugeno/Choquet integral in the first phase and an overlap function in the
second phase. Moreover, we have also proposed an optimized version of the EMF
(OEMF), which selects an optimal combination of the different wave bands and
the different types of classifiers. The performance of our new approaches is
tested on two BCI datasets, where the suitability of our new methods is
demonstrated. Specifically, the EMF improves the results of the traditional
framework as well as those of the MFF. The OEMF improves the results even
further, achieving statistically significant differences in one of the
scenarios, which makes it a good option for tackling this kind of problem.
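To make the two-phase aggregation concrete, the following sketch (our illustration, not the authors' code; the function names and the simple cardinality-based fuzzy measure are assumptions) shows the discrete Choquet and Sugeno integrals, an OWA operator, and a product overlap function of the kinds combined in the fusion phases:

```python
def card_measure(k, n, q=1.0):
    """Illustrative symmetric fuzzy measure: mu(A) = (|A|/n)**q."""
    return (k / n) ** q

def choquet(x, q=1.0):
    """Discrete Choquet integral of x w.r.t. the cardinality-based measure.

    With q = 1 this reduces to the arithmetic mean.
    """
    xs, n = sorted(x), len(x)
    total, prev = 0.0, 0.0
    for i, xi in enumerate(xs):
        # measure of the set containing the (n - i) largest inputs
        total += (xi - prev) * card_measure(n - i, n, q)
        prev = xi
    return total

def sugeno(x, q=1.0):
    """Discrete Sugeno integral: max over min(x_(i), mu(A_(i)))."""
    xs, n = sorted(x), len(x)
    return max(min(xi, card_measure(n - i, n, q)) for i, xi in enumerate(xs))

def owa(x, w):
    """Ordered weighted averaging: weights applied to inputs sorted descending."""
    return sum(wi * xi for wi, xi in zip(w, sorted(x, reverse=True)))

def overlap_product(a, b):
    """A simple overlap function, O(a, b) = a * b (commutative, O(a, 0) = 0)."""
    return a * b
```

In a two-phase scheme of this kind, the per-band scores of each classifier would first be fused with, e.g., `choquet` or `sugeno`, and the resulting per-classifier scores would then be combined with an overlap function before selecting the class with the highest final score.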
## Acknowledgement
Javier Fumanal Idocin’s, Jose Antonio Sanz’s, Javier Fernandez’s, and Humberto
Bustince’s research has been supported by the project PID2019-108392GB I00
(AEI/10.13039/501100011033).
## References
* [1] Chin-Teng Lin, Ching-Yu Chiu, Avinash Kumar Singh, Jung-Tai King, Li-Wei Ko, Yun-Chen Lu, and Yu-Kai Wang. A wireless multifunctional ssvep-based brain–computer interface assistive system. IEEE Transactions on Cognitive and Developmental Systems, 11(3):375–383, 2018.
* [2] Yu-Kai Wang, Tzyy-Ping Jung, and Chin-Teng Lin. Eeg-based attention tracking during distracted driving. IEEE transactions on neural systems and rehabilitation engineering, 23(6):1085–1094, 2015.
* [3] Li-Wei Ko, Yi-Chen Lu, Humberto Bustince, Yu-Cheng Chang, Yang Chang, Javier Ferandez, Yu-Kai Wang, Jose Antonio Sanz, Gracaliz Pereira Dimuro, and Chin-Teng Lin. Multimodal fuzzy fusion for enhancing the motor-imagery-based brain computer interface. IEEE Computational Intelligence Magazine, 14(1):96–106, 2019.
* [4] Brent J Lance, Jon Touryan, Yu-Kai Wang, Shao-Wei Lu, Chun-Hsiang Chuang, Peter Khooshabeh, Paul Sajda, Amar Marathe, Tzyy-Ping Jung, Chin-Teng Lin, et al. Towards serious games for improved bci. Handbook of Digital Games and Entertainment Technologies. Singapore: Springer, pages 1–28, 2015.
* [5] Yousef Rezaei Tabar and Ugur Halici. A novel deep learning approach for classification of eeg motor imagery signals. Journal of neural engineering, 14(1):016003, 2016.
* [6] Y. Zhang, C. S. Nam, G. Zhou, J. Jin, X. Wang, and A. Cichocki. Temporally constrained sparse group spatial patterns for motor imagery bci. IEEE Transactions on Cybernetics, 49(9):3322–3332, Sep. 2019.
* [7] Benjamin Blankertz, Steven Lemm, Matthias Treder, Stefan Haufe, and Klaus-Robert Müller. Single-trial analysis and classification of erp components—a tutorial. NeuroImage, 56(2):814–825, 2011.
* [8] M. Mironovova and J. Bíla. Fast fourier transform for feature extraction and neural network for classification of electrocardiogram signals. In 2015 Fourth International Conference on Future Generation Communication Technology (FGCT), pages 1–6, 2015.
* [9] W. Zheng, W. Liu, Y. Lu, B. Lu, and A. Cichocki. Emotionmeter: A multimodal framework for recognizing human emotions. IEEE Transactions on Cybernetics, 49(3):1110–1122, March 2019.
* [10] M Murugappan, Subbulakshmi Murugappan, Celestin Gerard, et al. Wireless eeg signals based neuromarketing system using fast fourier transform (fft). In 2014 IEEE 10th International Colloquium on Signal Processing and its Applications, pages 25–30. IEEE, 2014.
* [11] D. Iacoviello, A. Petracca, M. Spezialetti, and G. Placidi. A classification algorithm for electroencephalography signals by self-induced emotional stimuli. IEEE Transactions on Cybernetics, 46(12):3171–3180, Dec 2016.
* [12] L. Xie, Z. Deng, P. Xu, K. Choi, and S. Wang. Generalized hidden-mapping transductive transfer learning for recognition of epileptic electroencephalogram signals. IEEE Transactions on Cybernetics, 49(6):2200–2214, June 2019.
* [13] A. Jafarifarmand, M. A. Badamchizadeh, S. Khanmohammadi, M. A. Nazari, and B. M. Tazehkand. A new self-regulated neuro-fuzzy framework for classification of eeg signals in motor imagery bci. IEEE Transactions on Fuzzy Systems, 26(3):1485–1497, June 2018.
* [14] P. A. Herman, G. Prasad, and T. M. McGinnity. Designing an interval type-2 fuzzy logic system for handling uncertainty effects in brain–computer interface classification of motor imagery induced eeg patterns. IEEE Transactions on Fuzzy Systems, 25(1):29–42, Feb 2017.
* [15] T. K. Reddy, V. Arora, L. Behera, Y. Wang, and C. Lin. Multiclass fuzzy time-delay common spatio-spectral patterns with fuzzy information theoretic optimization for eeg-based regression problems in brain–computer interface (bci). IEEE Transactions on Fuzzy Systems, 27(10):1943–1951, Oct 2019.
* [16] Jing Jin, Yangyang Miao, Ian Daly, Cili Zuo, Dewen Hu, and Andrzej Cichocki. Correlation-based channel selection and regularized feature optimization for mi-based bci. Neural Networks, 118:262 – 270, 2019.
* [17] Jiankui Feng, Erwei Yin, Jing Jin, Rami Saab, Ian Daly, Xingyu Wang, Dewen Hu, and Andrzej Cichocki. Towards correlation-based time window selection method for motor imagery bcis. Neural Networks, 102:87–95, 2018.
* [18] Feifei Qi, Wei Wu, Zhu Liang Yu, Zhenghui Gu, Zhenfu Wen, Tianyou Yu, and Yuanqing Li. Spatiotemporal-filtering-based channel selection for single-trial eeg classification. IEEE Transactions on Cybernetics, 2020.
* [19] Michio Sugeno. Theory of fuzzy integrals and its applications. Doct. Thesis, Tokyo Institute of technology, 1974.
* [20] Gleb Beliakov, Humberto Bustince, and Tomasa Calvo Sánchez. A practical guide to averaging functions, volume 329. Springer, 2016.
* [21] Michel Grabisch and Marc Roubens. Application of the choquet integral in multicriteria decision making. Fuzzy Measures and Integrals-Theory and Applications, pages 348–374, 2000.
* [22] Camila Alves Dias, Jéssica CS Bueno, Eduardo N Borges, Silvia SC Botelho, Graçaliz Pereira Dimuro, Giancarlo Lucca, Javier Fernandéz, Humberto Bustince, and Paulo Lilles Jorge Drews Junior. Using the choquet integral in the pooling layer in deep learning networks. In North American Fuzzy Information Processing Society Annual Conference, pages 144–154. Springer, 2018.
* [23] Sansanee Auephanwiriyakul, James M Keller, and Paul D Gader. Generalized choquet fuzzy integral fusion. Information fusion, 3(1):69–85, 2002.
* [24] Toshiaki Murofushi and Michio Sugeno. Fuzzy t-conorm integral with respect to fuzzy measures: generalization of sugeno integral and choquet integral. Fuzzy Sets and Systems, 42(1):57–71, 1991.
* [25] Graçaliz Pereira Dimuro, Javier Fernández, Benjamín Bedregal, Radko Mesiar, José Antonio Sanz, Giancarlo Lucca, and Humberto Bustince. The state-of-art of the generalizations of the choquet integral: From aggregation and pre-aggregation to ordered directionally monotone functions. Information Fusion, 57:27–43, 2020.
* [26] Giancarlo Lucca, José Antonio Sanz, Graçaliz Pereira Dimuro, Benjamín Bedregal, Humberto Bustince, and Radko Mesiar. Cf-integrals: A new family of pre-aggregation functions with application to fuzzy rule-based classification systems. Information Sciences, 435:94 – 110, 2018.
* [27] Graçaliz Pereira Dimuro, Giancarlo Lucca, Benjamín Bedregal, Radko Mesiar, José Antonio Sanz, Chin-Teng Lin, and Humberto Bustince. Generalized cf1f2-integrals: From choquet-like aggregation to ordered directionally monotone functions. Fuzzy Sets and Systems, 378:44–67, 2020.
* [28] Ronald R Yager and Janusz Kacprzyk. The ordered weighted averaging operators: theory and applications. Springer Science & Business Media, 2012.
* [29] Ronald R Yager. Generalized owa aggregation operators. Fuzzy Optimization and Decision Making, 3(1):93–107, 2004.
* [30] Laura De Miguel, Mikel Sesma-Sara, Mikel Elkano, M Asiain, and Humberto Bustince. An algorithm for group decision making using n-dimensional fuzzy sets, admissible orders and owa operators. Information Fusion, 37:126–131, 2017.
* [31] Jose M Merigo and Montserrat Casanovas. The uncertain generalized owa operator and its application to financial decision making. International Journal of Information Technology & Decision Making, 10(02):211–230, 2011.
* [32] H Bustince, J Fernandez, R Mesiar, Javier Montero, and R Orduna. Overlap functions. Nonlinear Analysis: Theory, Methods & Applications, 72(3-4):1488–1499, 2010.
* [33] Laura De Miguel, Daniel Gómez, J. Tinguaro Rodríguez, Javier Montero, Humberto Bustince, Graçaliz P. Dimuro, and José Antonio Sanz. General overlap functions. Fuzzy Sets and Systems, 372:81 – 96, 2019. Theme: Aggregation Operations.
* [34] Mikel Elkano, Mikel Galar, Jose Sanz, and Humberto Bustince. Fuzzy rule-based classification systems for multi-class problems using binary decomposition strategies: on the influence of n-dimensional overlap functions in the fuzzy reasoning method. Information Sciences, 332:94–114, 2016.
* [35] Mikel Elkano, Mikel Galar, Jose Antonio Sanz, Alberto Fernández, Edurne Barrenechea, Francisco Herrera, and Humberto Bustince. Enhancing multiclass classification in farc-hd fuzzy classifier: On the synergy between $n$-dimensional overlap functions and decomposition strategies. IEEE Transactions on Fuzzy Systems, 23(5):1562–1580, 2014.
* [36] Shang-Lin Wu, Yu-Ting Liu, Tsung-Yu Hsieh, Yang-Yin Lin, Chih-Yu Chen, Chun-Hsiang Chuang, and Chin-Teng Lin. Fuzzy integral with particle swarm optimization for a motor-imagery-based brain–computer interface. IEEE Transactions on Fuzzy Systems, 25(1):21–28, 2016.
* [37] Michael Tangermann, Klaus-Robert Müller, Ad Aertsen, Niels Birbaumer, Christoph Braun, Clemens Brunner, Robert Leeb, Carsten Mehring, Kai J Miller, Gernot Mueller-Putz, et al. Review of the bci competition iv. Frontiers in neuroscience, 6:55, 2012.
* [38] M.M. Gupta and J. Qi. Theory of t-norms and fuzzy inference methods. Fuzzy Sets and Systems, 40(3):431 – 450, 1991.
* [39] Giancarlo Lucca, Jose Antonio Sanz, Gracaliz Pereira Dimuro, Benjamin Bedregal, Radko Mesiar, Anna Kolesarova, and Humberto Bustince. Preaggregation Functions: Construction and an Application. IEEE Transactions on Fuzzy Systems, 24(2):260–272, APR 2016.
* [40] Mehmet Akin. Comparison of wavelet transform and fft methods in the analysis of eeg signals. Journal of medical systems, 26(3):241–247, 2002.
* [41] Michal Teplan et al. Fundamentals of eeg measurement. Measurement science review, 2(2):1–11, 2002.
* [42] Benjamin Blankertz, Ryota Tomioka, Steven Lemm, Motoaki Kawanabe, and Klaus-Robert Muller. Optimizing spatial filters for robust eeg single-trial analysis. IEEE Signal processing magazine, 25(1):41–56, 2007.
* [43] Christoph Guger, Herbert Ramoser, and Gert Pfurtscheller. Real-time eeg analysis with subject-specific spatial patterns for a brain-computer interface (bci). IEEE transactions on rehabilitation engineering, 8(4):447–456, 2000.
* [44] Alexandre Gramfort, Martin Luessi, Eric Larson, Denis A Engemann, Daniel Strohmeier, Christian Brodbeck, Roman Goj, Mainak Jas, Teon Brooks, Lauri Parkkonen, et al. Meg and eeg data analysis with mne-python. Frontiers in neuroscience, 7:267, 2013.
* [45] S. Bhattacharyya, A. Khasnobish, S. Chatterjee, A. Konar, and D. N. Tibarewala. Performance analysis of lda, qda and knn algorithms in left-right limb movement classification from eeg data. In 2010 International Conference on Systems in Medicine and Biology, pages 126–131, Dec 2010.
* [46] G. Lucca, J. A. Sanz, G. P. Dimuro, E. N. Borges, H. Santos, and H. Bustince. Analyzing the performance of different fuzzy measures with generalizations of the choquet integral in classification problems. In 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pages 1–6, 2019.
* [47] Santiago Arroyo, Ronald P Lesser, Barry Gordon, Sumio Uematsu, Darryl Jackson, and Robert Webber. Functional significance of the mu rhythm of human cortex: an electrophysiologic study with subdural electrodes. Electroencephalography and clinical Neurophysiology, 87(3):76–87, 1993.
* [48] Jonathan R Wolpaw, Niels Birbaumer, Dennis J McFarland, Gert Pfurtscheller, and Theresa M Vaughan. Brain–computer interfaces for communication and control. Clinical neurophysiology, 113(6):767–791, 2002.
* [49] Claudio Babiloni, Filippo Carducci, Febo Cincotti, Paolo M Rossini, Christa Neuper, Gert Pfurtscheller, and Fabio Babiloni. Human movement-related potentials vs desynchronization of eeg alpha rhythm: a high-resolution eeg study. Neuroimage, 10(6):658–665, 1999.
* [50] Gert Pfurtscheller and FH Lopes Da Silva. Event-related eeg/meg synchronization and desynchronization: basic principles. Clinical neurophysiology, 110(11):1842–1857, 1999.
* [51] Jeffrey Nguyen, Frederick Shipley, Ashley Linder, George Plummer, Mochi Liu, Sagar Setru, Joshua Shaevitz, and Andrew Leifer. Whole-brain calcium imaging with cellular resolution in freely behaving caenorhabditis elegans. Bulletin of the American Physical Society, 2016.
* [52] Miguel Aguilera, Carlos Alquézar, and Eduardo J. Izquierdo. Signatures of criticality in a maximum entropy model of the c. elegans brain during free behaviour. In Proceedings of the 14th European Conference on Artificial Life ECAL 2017, pages 29–35, 2017.
* [53] Galka Andreas. Topics in nonlinear time series analysis, with implications for EEG analysis, volume 14. World Scientific, 2000.
* [54] James D Hamilton. Time series analysis, volume 2. Princeton New Jersey, 1994.
* [55] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine learning, 20(3):273–297, 1995.
* [56] David JC MacKay. Introduction to gaussian processes. NATO ASI Series F Computer and Systems Sciences, 168:133–166, 1998.
* [57] Gunther Krausz, Reinhold Scherer, G Korisek, and Gert Pfurtscheller. Critical decision-speed and information transfer in the “graz brain–computer interface”. Applied psychophysiology and biofeedback, 28(3):233–240, 2003.
* [58] Carmen Vidaurre, Reinhold Scherer, Rafael Cabeza, Alois Schlögl, and Gert Pfurtscheller. Study of discriminant analysis applied to motor imagery bipolar data. Medical & biological engineering & computing, 45(1):61, 2007.
* [59] Ludmila I Kuncheva and Christopher J Whitaker. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine learning, 51(2):181–207, 2003.
* [60] Ludmila I Kuncheva, Christopher J Whitaker, Catherine A Shipp, and Robert PW Duin. Is independence good for combining classifiers? In Proceedings 15th International Conference on Pattern Recognition. ICPR-2000, volume 2, pages 168–171. IEEE, 2000.
* [61] Clemens Brunner, Muhammad Naeem, Robert Leeb, Bernhard Graimann, and Gert Pfurtscheller. Spatial filtering and selection of optimized components in four class motor imagery eeg data using independent components analysis. Pattern recognition letters, 28(8):957–964, 2007.
* [62] Pradeep Kumar Mishra, B Jagadish, MPRS Kiran, Pachamuthu Rajalakshmi, and D Santhosh Reddy. A novel classification for eeg based four class motor imagery using kullback-leibler regularized riemannian manifold. In 2018 IEEE 20th International Conference on e-Health Networking, Applications and Services (Healthcom), pages 1–5. IEEE, 2018.
* [63] S. Selim, M. M. Tantawi, H. A. Shedeed, and A. Badr. A CSP\AM-BA-SVM approach for motor imagery bci system. IEEE Access, 6:49192–49208, 2018.
* [64] Sara Razi, Mohammad Reza Karami Mollaei, and Jamal Ghasemi. A novel method for classification of bci multi-class motor imagery task based on dempster–shafer theory. Information Sciences, 484:14 – 26, 2019.
Javier Fumanal Idocin holds a B.Sc. in Computer Science from the University of
Zaragoza, Spain, and an M.Sc. in Data Science and Computer Engineering from the
University of Granada, Spain. He is currently a Ph.D. student at the Public
University of Navarre, Spain, in the Department of Statistics, Informatics and
Mathematics. His research interests include machine intelligence, fuzzy logic,
social networks and Brain-Computer Interfaces.
Javier Fernandez received the M.Sc. and Ph.D. degrees in Mathematics from the
University of Zaragoza, Saragossa, and the University of the Basque Country,
Spain, in 1999 and 2003, respectively. He is currently an Associate Lecturer
with the Department of Statistics, Computer Science and Mathematics, Public
University of Navarre, Pamplona, Spain. He is the author or coauthor of
approximately 45 original articles and is involved with teaching artificial
intelligence and computational mathematics for students of the computer
sciences and data science. His research interests include fuzzy techniques for
image processing, fuzzy sets theory, interval-valued fuzzy sets theory,
aggregation functions, fuzzy measures, deep learning, stability, evolution
equation, and unique continuation.
Yu-Kai Wang (M’13) received the M.S. degree in biomedical engineering and
the Ph.D. degree in computer science in 2009 and 2015, respectively, both from
the National Chiao Tung University, Taiwan. He is currently a Senior Lecturer
of Faculty of Engineering and Information Technology and Co-Director of Motion
Platform and Mixed Reality Lab at University of Technology Sydney, Australia.
He is the author of 32 published original articles in international journals
and more than 40 contributions to international conferences. His current
research interests include computational neuroscience, human performance
modelling, Brain-Computer Interface and novel human-agent interaction.
Chin-Teng Lin received the B.S. degree from National Chiao-Tung University
(NCTU), Taiwan in 1986, and the Master and Ph.D. degree in electrical
engineering from Purdue University, USA in 1989 and 1992, respectively. He is
currently the Distinguished Professor of Faculty of Engineering and
Information Technology, and Co-Director of Center for Artificial Intelligence,
University of Technology Sydney, Australia. He is also invited as Honorary
Chair Professor of Electrical and Computer Engineering, NCTU, International
Faculty of University of California at San-Diego (UCSD), and Honorary
Professorship of University of Nottingham. Dr. Lin was elevated to be an IEEE
Fellow for his contributions to biologically inspired information systems in
2005, and was elevated International Fuzzy Systems Association (IFSA) Fellow
in 2012. Dr. Lin received the IEEE Fuzzy Systems Pioneer Awards in 2017. He
served as the Editor-in-chief of IEEE Transactions on Fuzzy Systems from 2011
to 2016. He also served on the Board of Governors at IEEE Circuits and Systems
(CAS) Society in 2005-2008, IEEE Systems, Man, Cybernetics (SMC) Society in
2003-2005, IEEE Computational Intelligence Society in 2008-2010, and Chair of
IEEE Taipei Section in 2009-2010. Dr. Lin was the Distinguished Lecturer of
IEEE CAS Society from 2003 to 2005 and CIS Society from 2015-2017. He serves
as the Chair of IEEE CIS Distinguished Lecturer Program Committee in
2018-2019. He served as the Deputy Editor-in-Chief of IEEE Transactions on Circuits
and Systems-II in 2006-2008. Dr. Lin was the Program Chair of IEEE
International Conference on Systems, Man, and Cybernetics in 2005 and General
Chair of 2011 IEEE International Conference on Fuzzy Systems. Dr. Lin is the
coauthor of Neural Fuzzy Systems (Prentice-Hall), and the author of Neural
Fuzzy Control Systems with Structure and Parameter Learning (World
Scientific). He has published more than 360 journal papers including over 160
IEEE journal papers in the areas of neural networks, fuzzy systems, brain-
computer interface, multimedia information processing, cognitive neuro-
engineering, and human-machine teaming, that have been cited more than 25,800
times. Currently, his h-index is 72, and his i10-index is 328.
José Antonio Sanz received the M.Sc. and Ph.D. degrees in
computer sciences in 2008 and 2011 respectively, both from the Public
University of Navarra, Spain. He is currently an assistant professor at the
Department of Statistics, Computer Science and Mathematics, Public University
of Navarra. He is the author of 36 published original articles in
international journals and more than 60 contributions to conferences. He is a
member of the European Society for Fuzzy Logic and Technology (EUSFLAT) and
the Spanish Association of Artificial Intelligence (AEPIA). He received the
best paper award in the FLINS 2012 international conference and the Pepe Millá
award in 2014. His research interests include fuzzy techniques for
classification problems, interval-valued fuzzy sets, genetic fuzzy systems and
medical applications using soft computing techniques.
Humberto Bustince received the Graduate degree in physics from the
University of Salamanca in 1983 and Ph.D. in mathematics from the Public
University of Navarra, Pamplona, Spain, in 1994. He is a Full Professor of
Computer Science and Artificial Intelligence in the Public University of
Navarra, Pamplona, Spain where he is the main researcher of the Artificial
Intelligence and Approximate Reasoning group, whose main research lines are
both theoretical (aggregation functions, information and comparison measures,
fuzzy sets, and extensions) and applied (image processing, classification,
machine learning, data mining, and big data). He has led 11 I+D public-funded
research projects, at a national and at a regional level. He is currently the
main researcher of a project in the Spanish Science Program and of a
scientific network about fuzzy logic and soft computing. He has been in charge
of research projects collaborating with private companies. He has taken part
in two international research projects. He has authored more than 210 works,
according to Web of Science, in conferences and international journals, with
around 110 of them in journals of the first quartile of JCR. Moreover, five of
these works are also among the highly cited papers of the last ten years,
according to Science Essential Indicators of Web of Science. Dr. Bustince is
the Editor-in-Chief of the online magazine Mathware & Soft Computing of the
European Society for Fuzzy Logic and technologies and of the Axioms journal.
He is an Associated Editor of the IEEE Transactions on Fuzzy Systems Journal
and a member of the editorial board of the Journals Fuzzy Sets and Systems,
Information Fusion, International Journal of Computational Intelligence
Systems and Journal of Intelligent & Fuzzy Systems. He is the coauthor of a
monography about averaging functions and coeditor of several books. He has
organized renowned international conferences such as EUROFUSE 2009 and AGOP.
He is an Honorary Professor at the University of Nottingham, and received the
National Spanish Computer Science Award in 2019 and the EUSFLAT Excellence
Research Award in 2019.
Correspondence should be addressed to <EMAIL_ADDRESS>.
# Birth, life, and death of a dipolar supersolid
Maximilian Sohmen Institut für Quantenoptik und Quanteninformation,
Österreichische Akademie der Wissenschaften, Innsbruck, Austria Institut für
Experimentalphysik, Universität Innsbruck, Austria Claudia Politi Institut
für Quantenoptik und Quanteninformation, Österreichische Akademie der
Wissenschaften, Innsbruck, Austria Institut für Experimentalphysik,
Universität Innsbruck, Austria Lauritz Klaus Institut für Quantenoptik und
Quanteninformation, Österreichische Akademie der Wissenschaften, Innsbruck,
Austria Institut für Experimentalphysik, Universität Innsbruck, Austria
Lauriane Chomaz Institut für Experimentalphysik, Universität Innsbruck,
Austria Manfred J. Mark Institut für Quantenoptik und Quanteninformation,
Österreichische Akademie der Wissenschaften, Innsbruck, Austria Institut für
Experimentalphysik, Universität Innsbruck, Austria Matthew A. Norcia
Institut für Quantenoptik und Quanteninformation, Österreichische Akademie der
Wissenschaften, Innsbruck, Austria Francesca Ferlaino Institut für
Quantenoptik und Quanteninformation, Österreichische Akademie der
Wissenschaften, Innsbruck, Austria Institut für Experimentalphysik,
Universität Innsbruck, Austria
###### Abstract
In the short time since the first observation of supersolid states of
ultracold dipolar atoms, substantial progress has been made in understanding
the zero-temperature phase diagram and low-energy excitations of these
systems. Less is known, however, about their finite-temperature properties,
particularly relevant for supersolids formed by cooling through direct
evaporation. Here, we explore this realm by characterizing the evaporative
formation and subsequent decay of a dipolar supersolid by combining high-
resolution in-trap imaging with time-of-flight observables. As our atomic
system cools towards quantum degeneracy, it first undergoes a transition from
thermal gas to a crystalline state with the appearance of periodic density
modulation. This is followed by a transition to a supersolid state with the
emergence of long-range phase coherence. Further, we explore the role of
temperature in the development of the modulated state.
Supersolid states, which exhibit both global phase coherence and periodic
spatial modulation [1, 2, 3, 4, 5, 6, 7], have recently been demonstrated and
studied in ultracold gases of dipolar atoms [8, 9, 10]. These states are
typically accessed by starting with an unmodulated Bose–Einstein condensate
(BEC), and then quenching the strength of interatomic interactions to a value
that favors a density-modulated state. In this production scheme, the
superfluidity (or global phase coherence) of the supersolid is inherited from
the pre-existing condensate. However, a dipolar supersolid state can also be
reached by direct evaporation from a thermal gas with fixed interactions, as
demonstrated in Ref. [10].
A thermal gas at temperatures well above condensation has neither phase
coherence nor modulation, so both must emerge during the evaporative formation
process. This leads one to question whether these two features appear
simultaneously, or if not, which comes first. Further, because this transition
explicitly takes place at finite temperature $T$, thermal excitations may play
an important role in the formation of the supersolid, presenting a challenging
situation for theory. Moreover, in the case of a dipolar supersolid, the non-
monotonic dispersion relation and the spontaneous formation of periodic
density modulation lead to important new length- and energy-scales not present
in contact-interacting systems, which dramatically modify the evaporative
formation process.
While the ground state and dynamics of a zero-temperature dipolar quantum gas
can be computed by solving an extended Gross–Pitaevskii equation [11, 12, 13,
14, 15, 8, 16, 17] (see also Fig. 1a), similar treatments are currently
lacking for finite temperatures in the supersolid regime. In principle,
effects of finite temperature can be taken into account by perturbatively
including the thermal population of excited modes. This can be done either
coherently, by adding them to a single classical field which obeys the
Gross–Pitaevskii equation, as in Refs. [18, 19, 20], or incoherently, by
iteratively computing mode populations via a set of coupled
Hartree–Fock–Bogoliubov equations [21, 22, 9]. In order to accurately describe
dynamical processes occurring at temperatures approaching the critical
temperature, both coherent excitations and incoherent interactions with the
background thermal gas must be accounted for, requiring either more advanced
c-field [18] or quantum Monte Carlo [23, 24, 25, 26, 27] techniques. So far,
theories with realistic experimental parameters have not been developed to
unveil the finite-temperature dipolar phase diagram and to determine the
properties of the thermal-to-supersolid phase transitions.
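For reference, the zero-temperature description mentioned above corresponds to an extended Gross–Pitaevskii equation of the following schematic form (notation ours; see Refs. [11, 12, 13, 14, 15, 8, 16, 17] for the precise formulations used in the literature):

```latex
i\hbar\,\partial_t \psi(\mathbf r,t) =
\Big[ -\frac{\hbar^2\nabla^2}{2m} + V_{\mathrm{trap}}(\mathbf r)
+ g\,|\psi|^2
+ \int \mathrm d\mathbf r'\, U_{\mathrm{dd}}(\mathbf r-\mathbf r')\,|\psi(\mathbf r',t)|^2
+ \gamma_{\mathrm{QF}}\,|\psi|^3 \Big]\,\psi(\mathbf r,t),
\qquad
U_{\mathrm{dd}}(\mathbf r) = \frac{\mu_0\mu^2}{4\pi}\,\frac{1-3\cos^2\theta}{r^3},
```

where $g = 4\pi\hbar^2 a_s/m$ is the contact coupling set by the s-wave scattering length $a_s$, $\theta$ is the angle between $\mathbf r$ and the dipole orientation, and the $\gamma_{\mathrm{QF}}|\psi|^3$ term is the Lee–Huang–Yang quantum-fluctuation correction that stabilizes droplet and supersolid states.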
In this work, we experimentally study the evaporative transition into and out
of a supersolid state in a dilute gas of dysprosium atoms. As the atoms cool
down to quantum degeneracy, the number of condensed atoms increases, giving
birth to the supersolid state. Continued evaporation and collisional loss lead
to a reduction of atom number, and eventually the death of the supersolid.
Such an evaporation trajectory, as illustrated in Fig. 1a, passes through the
little-understood finite-temperature portion of the supersolid phase diagram.
During the evaporative birth of the supersolid, we discover that the system
first establishes strong periodic density modulation of locally coherent
atoms, and only later acquires long-range phase coherence. When comparing the
birth and death of the supersolid, which occur at different temperatures, we
observe higher levels of modulation during the birth, suggesting that thermal
fluctuations may play an important role in the formation of density
modulation.
Figure 1: Evaporation trajectory through the finite-temperature phase
diagram. a. At $T=0$ (bottom plane), the phase diagram for a gas of dipolar
atoms is spanned by the s-wave scattering length $a_{s}$ and the condensate
atom number $N_{c}$. In an elongated trap it features BEC (white) and
independent droplet (ID, black) phases, separated in places by a supersolid
state (SSS, gray-scale). The plotted lightness in the $T=0$ phase diagram
represents the droplet link strength across the system (cf. [16]). Away from
$T=0$, the phase diagram is not known. We explore this region through
evaporation into (red, near i) and out of (blue, near ii) the SSS, along a
trajectory represented schematically by the colored arrow. b. Single-shot
image of the optical density (OD) of the sample in trap. Here, a system of
four “droplets” within the SSS region is shown, together with its projected
density profile. c. Single-shot matter-wave interference pattern after 35 ms
TOF expansion (OD), and the corresponding projected profile. The color-scale
is truncated for visual clarity. The background clouds of thermal atoms
present are not visible in the color scales of subfigures b, c; for $35$ ms
TOF and around $50\text{\,}\mathrm{nK}$ (as in c) the thermal atoms show an
approximately isotropic 2D Gaussian distribution of width
$\bar{\sigma}\sim 55\,\mathrm{\mu m}$.
For our experiments, we first prepare an optically trapped gas of
approximately $10^{5}$ dysprosium atoms (isotope $^{164}$Dy), precooled via forced
evaporation to temperatures of several hundred nanokelvin, at which point the
gas remains thermal. From here, we can apply further evaporation either by a
nearly-adiabatic ramp-down of the trap depth (“slow ramp”), or by a rapid
reduction of the trap depth followed by a hold time at fixed depth (“fast
ramp”) to further lower the temperature and induce condensation into the
supersolid state. The “slow ramp” protocol yields a higher number of condensed
atoms ($N_{c}\sim 2{\times}10^{4}$; see next paragraph for definition), and
lower shot-to-shot atom number fluctuations, whereas the “fast ramp” protocol
($N_{c}\sim 10^{4}$) allows us to follow the evolution of the system in a
constant trap, disentangling the system dynamics from varying trap parameters.
In contrast to protocols based on quenching the interactions in a BEC [9, 8,
10], we hold the magnetic field (and hence the contact interaction strength)
fixed during the entire evaporation process at 17.92 G, where the system
ground state at our $N_{c}$ is a supersolid (scattering length $\sim 85(5)$
$a_{0}$).
For the present work, we have implemented in-situ Faraday phase contrast
imaging [28, 29], which allows us to probe the in-trap density of our quantum
gas at micron-scale resolution. During the formation of the density-modulated
state, the translation symmetry is broken along the long (axial) direction of
our cigar-shaped trap (at the end of evaporation, we typically use trap
frequencies around $\omega_{x,y,z}=2\pi{\times}(36,88,141)\,\text{s}^{-1}$,
with the tightest trap direction oriented parallel to gravity and to an
applied uniform magnetic field, cf. [10]), typically giving rise to a chain of
three to six density peaks, which we call “droplets.” These droplets have a
spacing of roughly three microns, clearly visible in our in-situ images (Fig.
1b). As in our previous works [10, 16], we also image the sample after a time-
of-flight (TOF) expansion using standard absorption imaging. These TOF images
include a spatially broad contribution which we attribute to thermal atoms,
whose number $N_{th}$ and temperature $T$ we estimate by 2D-fitting of a Bose-
enhanced Gaussian function [31], excluding the cloud centre. Surplus atoms at
the cloud centre (compared to the broad Gaussian) are at least locally
coherent, or “(quasi-)condensed” in the sense of Refs. [32, 33, 34]. With the
total number of atoms $N$ measured by pixel count, we define $N_{c}=N-N_{th}$
to be the number of these (at least locally) coherent atoms. During TOF,
matter-wave interference between the expanding droplets gives rise to a
characteristic interference pattern (Fig. 1c). The high contrast of the
interference pattern is visible in single TOF images and indicates that each
individual droplet is by itself a phase coherent many-body object. The
stability of the interference fringes within the envelope over multiple
experimental realisations encodes the degree of phase coherence between
droplets (cf. Refs. [10, 16] and discussion below). The combination of in-situ
and TOF diagnostics provides complementary information allowing us to measure
both density modulation and its spatial extent (number of droplets), as well
as phase coherence.
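The decomposition $N_{c}=N-N_{th}$ described above can be sketched in one dimension: fit a Gaussian to the thermal wings only (the actual analysis uses a 2D Bose-enhanced Gaussian with the cloud centre excluded) and subtract its integral from the total signal. All function names and numbers below are illustrative, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    """Broad Gaussian model for the thermal background."""
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

def coherent_atom_number(x, profile, mask_halfwidth):
    """Estimate N_c = N - N_th by fitting the thermal wings only.

    The central region |x - x0| < mask_halfwidth is excluded from the fit,
    mimicking the masked Bose-enhanced Gaussian fit described in the text
    (simplified here to a 1D plain Gaussian for illustration).
    """
    x0_guess = x[np.argmax(profile)]
    wings = np.abs(x - x0_guess) > mask_halfwidth
    p0 = [profile[wings].max(), x0_guess, (x[-1] - x[0]) / 4, 0.0]
    popt, _ = curve_fit(gaussian, x[wings], profile[wings], p0=p0)
    n_total = np.trapz(profile, x)               # "pixel count" analogue
    n_thermal = np.trapz(gaussian(x, *popt), x)  # fitted thermal atoms
    return n_total - n_thermal

# Synthetic TOF profile: broad thermal cloud + narrow coherent peak
x = np.linspace(-200, 200, 2001)
thermal = gaussian(x, 1.0, 0.0, 55.0, 0.0)   # sigma ~ 55 um, as in Fig. 1c
coherent = gaussian(x, 5.0, 0.0, 8.0, 0.0)   # narrow (quasi-)condensed part
n_c = coherent_atom_number(x, thermal + coherent, mask_halfwidth=30.0)
```

Since the coherent peak contributes an integrated signal of $5\times 8\times\sqrt{2\pi}\approx 100$ here, the recovered `n_c` should land close to that value.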
Figure 2: Growth and spread of density modulation during evaporation. a.
Averaged density profiles (no recentering, approximately 20 shots per
timestep) along the long trap axis as a function of hold time $t_{h}$ after
the “fast ramp” reduction of trap depth (see main text). b. The density
correlator $C^{\prime}(d)$ (solid black line) is fitted by a cosine-modulated
Gaussian function (dashed red line) to extract the correlation length $L$.
Gray regions are strongly influenced by imaging noise and excluded from fits.
Correlators are displayed for $t_{h}=50$ ms (upper) and $t_{h}=300$ ms
(lower). c. Density-density correlation length $L$ versus $N_{c}$, for the
same timesteps shown in a. Horizontal error bars are the standard deviation
over repetitive shots, vertical error bars reflect the correlator fit
uncertainty, the red points correspond to the correlators of subfigure b. The
dashed line indicates the simple atom-number scaling of the Thomas–Fermi
radius of a harmonically trapped BEC, $\propto N_{c}^{1/5}.$
Figure 2 shows the birth of the supersolid. Starting from a thermal sample, we
apply the “fast ramp” (225 ms) evaporation protocol to the desired final trap
depth, too fast for the cloud to follow adiabatically and intermediately
resulting in a non-thermalized, non-condensed sample. As the sample is simply
held at constant trap depth for a time $t_{h}$, collisions and plain
evaporation lead to thermalization and cooling. In Figure 2a we plot the
average axial in-situ density profile (cf. Fig. 1b) versus $t_{h}$, for about
20 images per time step without any image recentering. At early $t_{h}$ the
atoms are primarily thermal, and show up as a broad, low-density background in
our images. For $t_{h}\lesssim 150$ ms, inspection of single-shot images
reveals an increasing, though substantially fluctuating number of droplets
appearing out of the thermal cloud. After this time, the droplet number
stabilizes to its final value. We observe that the droplet formation happens
on the same time scale as the equilibration of $N_{c}$ and $T$ [35], which we
expect to be set predominantly by the elastic collision rate. Once the
droplets have formed, other time scales might be relevant in determining the
equilibration rate of their relative positions and phases; the details of this
possibility remain an open question [16].
To better quantify the growth of the modulated state we consider the density-
density correlator $C^{\prime}(d)$ for the in-situ density profiles over
distances $d$ [35]. We find that $C^{\prime}(d)$ is well described by a
cosine-modulated Gaussian, and define the density correlation length $L$ (Fig.
2b) as its fitted width. This method provides a way to determine the extent
over which density modulation has formed. Figure 2c shows $L$ for the data set
of Fig. 2a versus the number of coherent atoms $N_{c}$, which we extract from
TOF absorption images in separate experimental trials with identical
parameters. Interestingly, despite the strongly modulated structure of the
supersolid state, the density correlation length $L$ closely follows a scaling
$\propto N_{c}^{1/5}$, just as the Thomas–Fermi radius of a harmonically
trapped BEC, suggesting a dominant role of interactions over kinetic energy.
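The correlator analysis can be illustrated with a minimal 1D sketch. The exact definition of $C^{\prime}(d)$ is given in the supplementary material; here we assume a simple envelope-subtracted autocorrelation fitted by a cosine-modulated Gaussian, with all parameters illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def density_correlator(profile, dx, smooth_um=2.0):
    """Autocorrelation of the modulated part of an axial density profile.

    The slowly varying envelope is removed by subtracting a Gaussian-smoothed
    copy; this is a simplified stand-in for the correlator C'(d) of the
    supplement, not its exact definition."""
    f = profile - gaussian_filter1d(profile, smooth_um / dx)
    corr = np.correlate(f, f, mode="full")[f.size - 1:]
    d = np.arange(corr.size) * dx
    return d, corr / corr[0]

def modulated_gaussian(d, amp, wavelength, L):
    """Cosine-modulated Gaussian fit model; L is the correlation length."""
    return amp * np.cos(2 * np.pi * d / wavelength) * np.exp(-d**2 / (2 * L**2))

def correlation_length(profile, dx, lam_guess, L_guess):
    d, c = density_correlator(profile, dx)
    popt, _ = curve_fit(modulated_gaussian, d, c, p0=[1.0, lam_guess, L_guess])
    return abs(popt[2])

# Synthetic droplet chain: ~3 um droplet spacing under a sigma = 6 um envelope
dx = 0.2  # um per pixel
z = np.arange(-20, 20, dx)
profile = (1 + np.cos(2 * np.pi * z / 3.0)) * np.exp(-z**2 / (2 * 6.0**2))
L = correlation_length(profile, dx, lam_guess=3.0, L_guess=5.0)
```

For this synthetic profile the autocorrelation envelope is $\exp(-d^{2}/4\sigma^{2})$, so the fitted $L$ should come out near $\sigma\sqrt{2}\approx 8.5$ um.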
Figure 3: Development of modulation and coherence while evaporating into the
supersolid state. a, Sample temperature $T$ (left ordinate, bullets), total
($N$, right ordinate, dashed red line) and coherent atom number ($N_{c}$,
solid red line) as a function of the ramp crop time $t_{c}$. The shadings
reflect the respective confidence intervals. b, The phasors $P_{i}$ (black
dots), representing the magnitude and phase coherence of modulation for
selected $t_{c}$ (dotted lines; same radial scale for all polar plots). The
red shading reflects mean and variance of the distribution. c, Evolution of
the Fourier amplitude means $A_{\text{M}}$ (filled markers) and $A_{\Phi}$
(open markers).
While in-situ images provide information about density modulation (diagonal
long-range order), they do not carry direct information about phase coherence
(off-diagonal long-range order), either within, or between droplets. For this,
we use TOF imaging and address the question of whether the formation of
density modulation precedes global (i. e. interdroplet) phase coherence during
evaporative formation of the supersolid, or the other way round.
For this study, we perform a “slow” ($500$ ms) final forced evaporation ramp
of constant slope that is nearly adiabatic with respect to $N_{c}$ and $T$
(though not necessarily with respect to excitations of droplet positions and
phase), and terminate the ramp at selected crop times $t_{c}$ (while for
this protocol the trap parameters do change slightly versus $t_{c}$, it
proved more robust to shot-to-shot fluctuations in atom number than the fast
ramp of Fig. 2, which is important for this measurement). After
$t_{c}$, we immediately release the atoms and perform TOF imaging. Figure 3a
shows the observed evolution of the total ($N$) and (quasi-)condensed
($N_{c}$) atom number as well as the sample temperature ($T$) versus $t_{c}$.
We expand on the observed evolution by measuring coherence properties.
Following Refs. [10, 16], for each measurement $i$ we extract a rescaled
complex phasor $P_{i}=\rho_{i}\exp{(-\text{i}\Phi_{i})}$, i. e. the Fourier
component corresponding to the modulation wavelength in the TOF interference
profile [35]. For systems with a small number of droplets (but at least two),
the magnitude of the phasor $\rho_{i}$ encodes the modulation strength and
also the (local) degree of coherence within each of the individual droplets.
Meanwhile, the phase $\Phi_{i}$ depends primarily on the relative phase
between the droplets (cf. [37]).
We plot the phasors for different evaporation times on the polar plane in Fig.
3b, where two effects become apparent. First, the modulus of the phasors grows
during the evaporation, indicating that the degree of modulation increases.
Second, the distribution of phases $\Phi_{i}$ is initially uniform, and then
narrows down over $t_{c}$. To determine the time sequence of these two
effects, we calculate the incoherent and coherent amplitude means,
$A_{\text{M}}=\langle|P_{i}|\rangle_{i}$, encoding modulation strength and
local phase coherence, and $A_{\Phi}=|\langle P_{i}\rangle_{i}|$, encoding the
degree of global phase coherence across the system [10, 16]. Plotting
$A_{\text{M}}$ and $A_{\Phi}$ against $t_{c}$ (Fig. 3c), we notice a time lag
of around $40$ ms between the increase of $A_{\text{M}}$ and $A_{\Phi}$,
indicating that during evaporation into a supersolid the translational and the
phase symmetry are not broken simultaneously (we find that observing this
time lag does not require fine-tuning the experimental parameters, i.e.
starting atom number, $B$-field, trap frequencies, and ramp speed, to a
higher degree than other experiments with supersolids, indicating that this
time lag between $A_{\text{M}}$ and $A_{\Phi}$ is a rather general feature of
the evaporation into the supersolid). Rather, density modulation
and local phase coherence appear before global phase coherence, consistent
with predictions from Monte Carlo simulations, cf. e. g. [27]. A similar
effect is observed in the fast-ramp protocol [35].
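The two amplitude means and their contrasting behavior can be sketched with synthetic interference profiles. The phasor normalization below is a simplified stand-in for the definition in the supplement, and all names and numbers are illustrative.

```python
import numpy as np

def modulation_phasor(profile, z, wavelength):
    """Complex Fourier component of one TOF profile at the fringe wavelength,
    normalized by the total signal (a simplified stand-in for P_i)."""
    k = 2 * np.pi / wavelength
    return np.sum(profile * np.exp(-1j * k * z)) / np.sum(profile)

def amplitude_means(phasors):
    """Incoherent mean A_M = <|P_i|> and coherent mean A_Phi = |<P_i>|."""
    phasors = np.asarray(phasors)
    return np.abs(phasors).mean(), np.abs(phasors.mean())

rng = np.random.default_rng(1)
z = np.linspace(-30, 30, 601)
lam = 10.0  # fringe wavelength, arbitrary units

def shot(phase):
    """One synthetic interference pattern with fringe phase `phase`."""
    return (1 + 0.8 * np.cos(2 * np.pi * z / lam + phase)) \
        * np.exp(-z**2 / (2 * 12.0**2))

# Random fringe phases: modulated shots without global phase coherence
random_ph = [modulation_phasor(shot(rng.uniform(0, 2 * np.pi)), z, lam)
             for _ in range(200)]
# Nearly locked phases: globally phase-coherent supersolid
locked_ph = [modulation_phasor(shot(0.05 * rng.standard_normal()), z, lam)
             for _ in range(200)]

A_M_rand, A_Phi_rand = amplitude_means(random_ph)
A_M_lock, A_Phi_lock = amplitude_means(locked_ph)
```

Both ensembles have the same modulation strength ($A_{\text{M}}$ is comparable), but only the phase-locked ensemble has $A_{\Phi}\approx A_{\text{M}}$; for random phases $A_{\Phi}$ averages toward zero, which is exactly the distinction used in the text.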
This observation suggests the transient formation of a quasi-condensate
crystal – a state with local but not long-range coherence [32, 33, 34], whose
increased compressibility relative to a thermal gas allows for the formation
of density modulation (N. Prokof’ev and B. Svistunov, University of
Massachusetts, Amherst, private discussion) – prior to the formation of a
supersolid with phase coherence between droplets. The lack of global phase
coherence could be attributed to a Kibble–Zurek-type mechanism [40], in which
different regions of the sample condense independently, to excitation of modes
involving the motion or phase of the droplets during the evaporation process,
or to the thermal population of collective modes (which reduce long-range
coherence) at finite temperature. As the evaporation process does not allow
independent control of temperature and condensation rate without also changing
density or trap geometry, we cannot reliably determine the relative importance
of these effects (or others) from the experiment. Dedicated theoretical
studies at finite temperature will thus be needed to elucidate the impact of
these types of processes and to understand the exact formation process.
Figure 4: Lifecycle of a supersolid state. Density modulation $M$ (from in-
situ images) during the evaporation process (left ordinate, bullets; the
vertical error bars reflect the propagated uncertainty returned by the fitting
routine). The sample temperature decreases during the hold time $t_{h}$ and is
encoded by the color filling. $N_{c}$ (from TOF images) is the number of
coherent atoms over $t_{h}$ (right ordinate, red line; the light-red shading
reflects the measurement standard deviation). At two times where $N_{c}\sim
1.1{\times}10^{4}$ (vertical dashed lines), but at which the atoms have
different temperatures, $M$ differs substantially. The corresponding averaged
in-situ images below confirm a higher level of modulation at earlier $t_{h}$.
Inset: The observed modulation $M$ plotted versus $N_{c}$.
After the birth of the supersolid state, both density modulation and global
phase coherence persist for remarkably long times, exceeding one second.
Figure 4 shows the evolution of the coherent atom number $N_{c}$ and
temperature $T$ at long hold times under conditions similar to Fig. 2 – the
same fast ramp followed immediately by hold time $t_{h}$. Evaporative cooling
first increases the coherent atom number until, at long
$t_{h}\geq 1\,\mathrm{s}$, atom losses become dominant and lead to a
continuous decrease of $N_{c}$, eventually leading to the disappearance of the
modulated state. However, this death of the supersolid is not a mere time-
reversal of the birth. $N_{c}$ decreases, i. e. evolves in the opposite
direction, but more slowly and at lower temperature than for the birth.
Furthermore, phase coherence appears to outlive modulation and to be
maintained until the very end [35]. Thus, a comparison between the birth and
death process provides us with important clues to the impact of temperature on
the supersolid.
We contrast the birth and death of the supersolid in Fig. 4 by also plotting
the observed in-situ density modulation $M$, which is calculated by Fourier
transforming the in-situ density profiles and normalizing the Fourier
component corresponding to the modulation wavelength to the zero-frequency
Fourier component. By comparing $M$ between times that have similar $N_{c}$
during the birth and the death of the supersolid, respectively, we find that
the degree of modulation is higher during the birth of the supersolid than
during the death. Because the sample is hotter at shorter hold times, this
suggests that the observed modulation is increased at higher temperature,
perhaps due to thermal population of collective modes, or due to finite-
temperature modifications to the dispersion relation (Th. Pohl, University
of Aarhus, private communication), as predicted in Ref. [22]. Again, further
development of finite-temperature theory will be needed to conclusively
determine the importance of such effects.
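The in-situ modulation $M$ described above reduces to a ratio of two Fourier components. The sketch below is a simplified version of that quantity; the paper's exact windowing and normalization may differ.

```python
import numpy as np

def modulation_depth(profile, z, wavelength):
    """In-situ modulation M: magnitude of the Fourier component at the
    modulation wavelength, normalized to the zero-frequency component
    (a simplified version of the quantity described in the text)."""
    k = 2 * np.pi / wavelength
    component = np.abs(np.sum(profile * np.exp(-1j * k * z)))
    dc = np.abs(np.sum(profile))  # zero-frequency Fourier component
    return component / dc

z = np.linspace(-15, 15, 601)
envelope = np.exp(-z**2 / (2 * 5.0**2))
unmodulated = envelope                                           # smooth BEC-like cloud
modulated = (1 + 0.6 * np.cos(2 * np.pi * z / 3.0)) * envelope   # droplet chain

M_flat = modulation_depth(unmodulated, z, 3.0)
M_mod = modulation_depth(modulated, z, 3.0)
```

For a modulation $1+c\cos(kz)$ under a broad envelope, this ratio evaluates to $c/2$; the smooth cloud gives essentially zero, so $M$ cleanly separates modulated from unmodulated samples.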
The role of finite temperature in the formation of modulation, as well as the
mechanism by which phase variations across the modulated state arise and then
ultimately disappear, represent important future directions for theoretical
investigations of dipolar supersolids away from the relatively well understood
$T=0$ limit. Experimentally, it would be of great interest to study the
evaporative formation process in a larger and more uniform system, where
distinct domains may be observed to form, and a broader separation of length-
scales may be explored in correlation measurements. Such measurements, along
with improved finite-temperature theory, could enable more precise statements
as to the nature of the supersolid phase transition away from zero
temperature.
###### Acknowledgements.
We thank Russell Bisset, the Innsbruck Erbium team, Massimo Boninsegni,
Philippe Chomaz, Thomas Pohl, Nikolay Prokof’ev and Boris Svistunov for
insightful discussions, and Gianmaria Durastante, Philipp Ilzhöfer and Arno
Trautmann for early contributions. This work is financially supported through
an ERC Consolidator Grant (RARE, No. 681432), an NFRI grant (MIRARE, No.
ÖAW0600) of the Austrian Academy of Science, the QuantERA grant MAQS by the
Austrian Science Fund FWF No. I4391-N, and the DFG/FWF via FOR 2247/I4317-N36.
M.S. acknowledges support by the Austrian Science Fund FWF within the DK-ALM
(No. W1259-N27). M.A.N. has received funding as an ESQ Postdoctoral Fellow
from the European Union’s Horizon 2020 research and innovation programme under
the Marie Skłodowska‐Curie grant agreement No. 801110 and the Austrian Federal
Ministry of Education, Science and Research (BMBWF). M.J.M. acknowledges
support through an ESQ Discovery Grant, by the Austrian Academy of Sciences.
L.C. acknowledges support through the FWF Elise Richter Fellowship No. V792.
We also acknowledge the Innsbruck Laser Core Facility, financed by the
Austrian Federal Ministry of Science, Research and Economy.
## References
* Gross [1957] E. P. Gross, Unified theory of interacting bosons, Phys. Rev. 106, 161 (1957).
* Gross [1958] E. P. Gross, Classical theory of boson wave fields, Annals of Physics 4, 57 (1958).
* Andreev and Lifshitz [1969] A. F. Andreev and I. M. Lifshitz, Quantum theory of defects in crystals, Sov. Phys. JETP 29, 1107 (1969).
* Chester [1970] G. V. Chester, Speculations on Bose–Einstein condensation and quantum crystals, Phys. Rev. A 2, 256 (1970).
* Leggett [1970] A. J. Leggett, Can a solid be “Superfluid”?, Phys. Rev. Lett. 25, 1543 (1970).
* Li _et al._ [2017] J.-R. Li, J. Lee, W. Huang, S. Burchesky, B. Shteynas, F. Ç. Top, A. O. Jamison, and W. Ketterle, A stripe phase with supersolid properties in spin–orbit-coupled Bose–Einstein condensates, Nature 543, 91 (2017).
* Léonard _et al._ [2017] J. Léonard, A. Morales, P. Zupancic, T. Esslinger, and T. Donner, Supersolid formation in a quantum gas breaking a continuous translational symmetry, Nature 543, 87 (2017).
* Böttcher _et al._ [2019] F. Böttcher, J.-N. Schmidt, M. Wenzel, J. Hertkorn, M. Guo, T. Langen, and T. Pfau, Transient supersolid properties in an array of dipolar quantum droplets, Phys. Rev. X 9, 011051 (2019).
* Tanzi _et al._ [2019] L. Tanzi, E. Lucioni, F. Famà, J. Catani, A. Fioretti, C. Gabbanini, R. N. Bisset, L. Santos, and G. Modugno, Observation of a dipolar quantum gas with metastable supersolid properties, Phys. Rev. Lett. 122, 130405 (2019).
* Chomaz _et al._ [2019] L. Chomaz, D. Petter, P. Ilzhöfer, G. Natale, A. Trautmann, C. Politi, G. Durastante, R. M. W. van Bijnen, A. Patscheider, M. Sohmen, M. J. Mark, and F. Ferlaino, Long-lived and transient supersolid behaviors in dipolar quantum gases, Phys. Rev. X 9, 021012 (2019).
* Wächtler and Santos [2016a] F. Wächtler and L. Santos, Quantum filaments in dipolar Bose–Einstein condensates, Phys. Rev. A 93, 061603 (2016a).
* Chomaz _et al._ [2016] L. Chomaz, S. Baier, D. Petter, M. J. Mark, F. Wächtler, L. Santos, and F. Ferlaino, Quantum-fluctuation-driven crossover from a dilute Bose–Einstein condensate to a macrodroplet in a dipolar quantum fluid, Phys. Rev. X 6, 041039 (2016).
* Bisset _et al._ [2016] R. N. Bisset, R. M. Wilson, D. Baillie, and P. B. Blakie, Ground-state phase diagram of a dipolar condensate with quantum fluctuations, Phys. Rev. A 94, 033619 (2016).
* Wächtler and Santos [2016b] F. Wächtler and L. Santos, Ground-state properties and elementary excitations of quantum droplets in dipolar Bose–Einstein condensates, Phys. Rev. A 94, 043618 (2016b).
* Baillie and Blakie [2018] D. Baillie and P. B. Blakie, Droplet Crystal Ground States of a Dipolar Bose Gas, Phys. Rev. Lett. 121, 195301 (2018).
* Ilzhöfer _et al._ [2021] P. Ilzhöfer, M. Sohmen, G. Durastante, C. Politi, A. Trautmann, G. Natale, G. Morpurgo, T. Giamarchi, L. Chomaz, M. J. Mark, and F. Ferlaino, Phase coherence in out-of-equilibrium supersolid states of ultracold dipolar atoms, Nature Physics 10.1038/s41567-020-01100-3 (2021).
* Roccuzzo and Ancilotto [2019] S. M. Roccuzzo and F. Ancilotto, Supersolid behavior of a dipolar Bose-Einstein condensate confined in a tube, Phys. Rev. A 99, 041601 (2019).
* Blakie _et al._ [2008] P. Blakie, A. Bradley, M. Davis, R. Ballagh, and C. Gardiner, Dynamics and statistical mechanics of ultra-cold bose gases using c-field techniques, Advances in Physics 57, 363 (2008).
* Petter _et al._ [2020] D. Petter, A. Patscheider, G. Natale, M. J. Mark, M. A. Baranov, R. v. Bijnen, S. M. Roccuzzo, A. Recati, B. Blakie, D. Baillie, L. Chomaz, and F. Ferlaino, High-energy Bragg scattering measurements of a dipolar supersolid (2020), arXiv:2005.02213 [cond-mat.quant-gas] .
* Hertkorn _et al._ [2020] J. Hertkorn, J.-N. Schmidt, F. Böttcher, M. Guo, M. Schmidt, K. S. H. Ng, S. D. Graham, H. P. Büchler, T. Langen, M. Zwierlein, and T. Pfau, Density fluctuations across the superfluid-supersolid phase transition in a dipolar quantum gas (2020), arXiv:2009.08910 [cond-mat.quant-gas] .
* Ronen and Bohn [2007] S. Ronen and J. L. Bohn, Dipolar Bose–Einstein condensates at finite temperature, Phys. Rev. A 76, 043607 (2007).
* Aybar and Oktel [2019] E. Aybar and M. O. Oktel, Temperature-dependent density profiles of dipolar droplets, Phys. Rev. A 99, 013620 (2019).
* Cinti _et al._ [2010] F. Cinti, P. Jain, M. Boninsegni, A. Micheli, P. Zoller, and G. Pupillo, Supersolid droplet crystal in a dipole-blockaded gas, Phys. Rev. Lett. 105, 135301 (2010).
* Saito [2016] H. Saito, Path-integral Monte Carlo study on a droplet of a dipolar Bose–Einstein condensate stabilized by quantum fluctuation, J. Phys. Soc. Jpn. 85, 053001 (2016).
* Macia _et al._ [2016] A. Macia, J. Sánchez-Baena, J. Boronat, and F. Mazzanti, Droplets of trapped quantum dipolar bosons, Phys. Rev. Lett. 117, 205301 (2016).
* Cinti and Boninsegni [2017] F. Cinti and M. Boninsegni, Classical and quantum filaments in the ground state of trapped dipolar Bose gases, Phys. Rev. A 96, 013627 (2017).
* Kora and Boninsegni [2019] Y. Kora and M. Boninsegni, Patterned supersolids in dipolar Bose systems, J. Low Temp. Phys. 197, 337 (2019).
* Bradley _et al._ [1997] C. C. Bradley, C. A. Sackett, and R. G. Hulet, Bose-Einstein condensation of lithium: Observation of limited condensate number, Phys. Rev. Lett. 78, 985 (1997).
* Kadau _et al._ [2016] H. Kadau, M. Schmitt, M. Wenzel, C. Wink, T. Maier, I. Ferrier-Barbut, and T. Pfau, Observing the Rosensweig instability of a quantum ferrofluid, Nature 530, 194 (2016).
* Ketterle _et al._ [1999] W. Ketterle, D. S. Durfee, and D. M. Stamper-Kurn, Making, probing and understanding Bose–Einstein condensates, Proceedings of the International School of Physics “Enrico Fermi” (1999).
* Petrov _et al._ [2001] D. S. Petrov, G. V. Shlyapnikov, and J. T. M. Walraven, Phase-fluctuating 3D Bose–Einstein condensates in elongated traps, Phys. Rev. Lett. 87, 050404 (2001).
* Dettmer _et al._ [2001] S. Dettmer, D. Hellweg, P. Ryytty, J. J. Arlt, W. Ertmer, K. Sengstock, D. S. Petrov, G. V. Shlyapnikov, H. Kreutzmann, L. Santos, and M. Lewenstein, Observation of phase fluctuations in elongated Bose–Einstein condensates, Phys. Rev. Lett. 87, 160406 (2001).
* Prokof’ev _et al._ [2004] N. Prokof’ev, O. Ruebenacker, and B. Svistunov, Weakly interacting Bose gas in the vicinity of the normal-fluid–superfluid transition, Phys. Rev. A 69, 053625 (2004).
* [35] _See supplementary material_.
* Hadzibabic _et al._ [2004] Z. Hadzibabic, S. Stock, B. Battelier, V. Bretin, and J. Dalibard, Interference of an array of independent Bose–Einstein condensates, Phys. Rev. Lett. 93, 180403 (2004).
* Damski and Zurek [2010] B. Damski and W. H. Zurek, Soliton creation during a Bose–Einstein condensation, Phys. Rev. Lett. 104, 160404 (2010).
# Comparing Deep Learning strategies for paired but unregistered multimodal
segmentation of the liver in T1 and T2-weighted MRI
###### Abstract
We address the problem of multimodal liver segmentation in paired but
unregistered T1 and T2-weighted MR images. We compare several strategies
described in the literature, with or without multi-task training, with or
without pre-registration. We also compare different loss functions (cross-
entropy, Dice loss, and three adversarial losses). All methods achieved
comparable performances with the exception of a multi-task setting that
performs both segmentations at once, which performed poorly.
Index Terms— Segmentation, U-net, Multimodal imaging, MRI, Liver
## 1 Introduction
Automatic liver segmentation tools are used in a clinical context to ease the
interpretation of medical images, as well as for quantitative measurements.
For instance, they are used for volumetry or as a preliminary step for
automated assessment of hepatic fat fraction or liver tumor burden. Such
analyses can be done with multiple modalities, among which T1 and T2-weighted
MRIs.
T1-weighted MRIs are anatomical images, and thus show a well-contrasted and
easy-to-segment liver. Conversely, T2-weighted images have a lower inter-slice
resolution. They are more prone to acquisition artifacts, and the liver is
harder to distinguish from its surroundings (see Figure 1). Therefore, the
accurate manual segmentation of the liver is more tedious in T2 images. T1 and
T2-weighted images are acquired a few minutes apart, which induces
misalignments between the two images (mainly due to breathing).
In this work, we compare different strategies described in the literature to
perform accurate automatic segmentation of the liver in pairs of T1 and
T2-weighted MR images.
In the recent literature, most proposed segmentation methods are built upon
the U-Net architecture [1], either by integrating shape priors [2], modifying
the architecture [3], or adapting to a particular problem, such as joint
lesion segmentation [4], unsupervised domain adaptation [5], or integration of
manual corrections [6]. We therefore chose this setting as a basis for our
comparison.
Fig. 1: Multisequence MRI pair example. Left: T1-weighted image. Right:
T2-weighted image. Fig. 2: Inference of a pair of images for different
input/output strategies. Left to right: single input; double-input, single-
output; double-input, double-output; double-input, single-output
preregistered. For the specialized setting, $I_{1}$ is the T1 image of the
pair and $I_{2}$ the T2 image. For the unspecialized setting, the T1 and T2
images are randomly swapped during training, so that the networks do not
expect a particular modality in each channel.
Our experiments revolve around two axes. The first axis is the multi-input and
multi-output strategy: it is shown in [7] that it is possible to train
efficient unspecialized segmentation networks when no paired data are
available, and many works showed the usefulness of dual-input segmentation for
registered paired images [3, 8]. We investigate to what extent these
strategies are applicable to our particular problem and data set, and how they
compare. The goal is to answer the following questions: Does providing both
images of the pair as input to the network increase performance? If so, does
segmenting both images in one forward pass, in a multi-task way, yield better
results? Can we improve the results by applying a non-linear registration
algorithm to the images before forwarding them to the network?
Our hypothesis is that, by providing anatomical information through the T1
image, a segmentation network may succeed better in segmenting the T2 image,
even with misalignments.
The second axis is the objective function we optimize during training. The
importance of this parameter in segmentation has been demonstrated in [9, 10],
and the benefit of learned segmentation losses by adversarial training for
certain applications in [11, 12, 13]. We compare three standard voxel-to-voxel
loss functions, and three other functions taking advantage of adversarial
training. For this axis the hypothesis is that adversarial losses, by learning
the distribution of liver masks, may lead to more robust predictions and an
increased performance overall.
## 2 Data
We compare the different approaches on a database of 88 pairs of T1-weighted
and T2-weighted MRIs centered on the liver, coming from 51 patients who all
have hepatic lesions. The T1 images are acquired at a portal enhancement time.
We resize every image to a resolution of 3 mm along the $z$ axis, and 1.5 mm
along the $x$ and $y$ axes. Images in each pair are aligned with their DICOM
metadata, but breathing can induce large movements of the liver (up to
$15$cm).
Reference segmentation masks are obtained through manual annotations by a
radiologist, using interactive 3D tools. Note that due to low contrast of the
liver in T2 images, as well as lower resolution along the $z$ axis, manual
annotation of the liver in T2 images is difficult and less accurate than in T1
images. We split the dataset by keeping 12 pairs for testing, and 6 pairs for
validation.
## 3 Methods
### 3.1 Architecture and training
In order to focus on the strategies and the objective functions, and for a
fair comparison, we keep the following architecture and optimization parameters
fixed. We use the 3D U-Net architecture [1, 14], pre-trained with weights
provided in [15], as this architecture is now standard for medical image
segmentation [16]. We use the Adam optimizer at a $1\times 10^{-4}$ learning rate
with early-stopping, by keeping the best network on the validation dataset,
for 900 epochs of 150 steps. For memory reasons, we use batches of size 1, and
crop inputs into cuboids of random sizes and ratios. We use a random intensity
shift as a data augmentation strategy.
### 3.2 Multi-channel and multi-output
We evaluate the benefit of multi-modal inputs by comparing different
input/output settings (see Figure 2). Each setting has two versions, which we
refer to as specialized or unspecialized and that we describe below.
Single input:
The network has only one channel for input and output. If specialized, two
networks are trained, one for each modality. If unspecialized, only one
network is trained, taking as input indifferently a T1 or T2 image.
Double input, single output:
The network has two channels as input, and one as output. We train it to
segment the image in the first channel, the second channel receiving the
auxiliary modality. If specialized, two networks are needed, one segmenting T1
images with T2 as auxiliary modality, and vice versa. If unspecialized, only
one network is needed, segmenting indifferently T1 or T2 images.
Double input, double output:
A single network is trained, where both the input and output layers have two
channels - one for each image of the pair - so that both predictions are
multi-tasked. When specialized, the first channel is always the T1 image and
the second channel always T2, whereas we randomly swap the channels during
training when unspecialized, so that the network does not expect a particular
modality in each channel.
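The random channel swap used for the unspecialized version can be sketched as follows. The helper name and the 0.5 swap probability are our own assumptions.

```python
import numpy as np

def maybe_swap_channels(image_pair, mask_pair, rng=None):
    # With probability 0.5 (assumed), swap the (T1, T2) order of both the
    # input channels and the corresponding target channels, so the network
    # cannot rely on a fixed modality per channel.
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        return image_pair[::-1], mask_pair[::-1]
    return image_pair, mask_pair
```

The key point is that images and masks are swapped together, so each output channel always matches its input channel.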
Pre-registered:
To study how registering the images before the segmentation may be beneficial,
we performed a non-linear registration of the T1 image on the T2 image of each
pair and trained a double input, single output network segmenting T2 images
with T1 as auxiliary modality. As the T2 image is harder to segment, we can
expect that by also providing the aligned T1 image to the network, it can use
all relevant information to make a more accurate segmentation.
### 3.3 Influence of the loss function
In this section we consider a dataset of images $\{x_{n}\}\subset X$ with
annotations $\{y_{n}\}\subset Y$, and a segmentor $S:X\rightarrow Y$.
To evaluate the influence of the loss function on the performances, we test 6
of them on the single-input, unspecialized strategy.
Three loss functions are standard voxel-to-voxel functions:
Binary cross-entropy:
$L_{bce}(x,y)=-\sum y\log(S(x))$, where the sum is taken over all image
voxels, as proposed in [1].
Dice:
$L_{Dice}(x,y)=-2\sum yS(x)/(\sum y+\sum S(x))$, as proposed in [10]. The
normalization makes it better suited than $L_{bce}$ to strong class imbalance
between foreground and background (i.e., when the objects to segment are
small).
Cross-entropy+Dice:
$L_{sum}=L_{bce}+L_{Dice}$
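The three voxel-wise losses can be written as follows. This is a NumPy sketch of the formulas above; we use the common $1-\mathrm{Dice}$ form (equivalent to the negative ratio up to a constant) and clip predictions for numerical stability, both of which are implementation choices of ours.

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # Binary cross-entropy summed over voxels, including the background
    # term as usually implemented.
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.sum(target * np.log(pred)
                         + (1 - target) * np.log(1 - pred)))

def dice_loss(pred, target, eps=1e-7):
    # Soft Dice loss: 1 - 2*sum(y*S(x)) / (sum y + sum S(x)).
    inter = np.sum(target * pred)
    return float(1.0 - 2.0 * inter / (np.sum(target) + np.sum(pred) + eps))

def sum_loss(pred, target):
    # Cross-entropy + Dice: L_sum = L_bce + L_Dice.
    return bce_loss(pred, target) + dice_loss(pred, target)
```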
Three loss functions are adversarial losses: we simultaneously train a
discriminator network $D:Y\rightarrow[0,1]$ to recognize reference masks. The
idea is that by learning the distribution of liver masks, we can get models
that are robust to improbable predictions (with holes or odd shape for
instance). This technology has gained popularity in segmentation [17, 18], and
we refer to the related works section in [11] for a good review of adversarial
learning in segmentation.
The vanilla GAN loss,
as in [12]: $L_{GAN}(x,y)=L_{bce}(x,y)+L_{bce}(D(S(x)),1)$
The embedding loss,
proposed in [13]: $L_{el}(x,y)=L_{bce}(x,y)+||D_{k}(S(x))-D_{k}(y)||^{2}$,
where $D_{k}$ represents the $k$-th layer of network $D$. The goal is to gain
in stability during training.
The gambling loss,
described in [11]: Instead of the discriminator $D$, we use a gambler
$G:X\times Y\rightarrow Y$, which takes as input an image and a segmentation,
and outputs a normalized betting map. It is trained to minimize
$L_{G}(x,y)=-\sum G(x,S(x))y\log(S(x))$, while the segmentor minimizes
$L_{S}=L_{bce}-L_{G}$. The goal is to train a gambler network able to
recognize the hard parts of the image to segment, so that the segmentor
focuses on them.
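A sketch of the two gambling-loss terms as written above (NumPy; `betting_map` is assumed already normalized over voxels, and the clipping constant is our own choice):

```python
import numpy as np

def gambling_losses(pred, target, betting_map, eps=1e-7):
    # Returns (L_G, L_S) for one image:
    #   L_G = -sum(G * y * log S(x))   (gambler's objective)
    #   L_S = L_bce - L_G              (segmentor's objective)
    pred = np.clip(pred, eps, 1 - eps)
    l_bce = float(-np.sum(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))
    l_g = float(-np.sum(betting_map * target * np.log(pred)))
    return l_g, l_bce - l_g
```

In training, the gambler and segmentor would be updated alternately on their respective terms.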
We start the training with a pre-trained segmentor, with the single-input
unspecialized strategy (see Section 3.2), as we found that it performed as
well as any other (see Section 4).
## 4 Results, discussion and conclusions
We report the performance of all approaches using the Dice coefficient; the
Hausdorff distance and its 95th percentile yielded a similar ranking
of the approaches. To help interpret the results, we perform
a 4-fold cross-validation on the entire database using the single-input
unspecialized strategy, that we repeat 3 times. On average, the Dice scores of
the different runs on each fold differ by $0.005$. Any Dice score difference
below that will thus not be considered meaningful for comparing strategies.
Table 1 compares the performance of the training strategies on the test
dataset. We note a slightly worse performance for T2
images in specialized strategies compared to their unspecialized counterpart,
while the difference of Dice scores of T1 images is limited. Multi-output
networks performed significantly worse than multi-input ones. As shown in
[19], multi-task learning is not trivial to work with, and the naive approach
we tried showed its limits; it may be especially tricky in 3D segmentation,
where increasing the capacity of a network is costly in terms of memory.
Let us now assess how much a network relies on the auxiliary modality for
double-input settings. To this end, we evaluate the performance of a network
when segmenting an image with its matching auxiliary image, and compare it to
its performance when given mismatched pairs. We measure this performance gap
with the difference of Dice scores, and compile the results in Table 3. We
record a more important gap for T2 images, which is consistent with the idea
that, as this modality is harder to segment, the network learns to fetch
information in the T1 modality. This gap is even more important when images
are pre-registered, which is also expected as the information is easier to
retrieve when images are aligned. Multi-output settings showed the most
important need for the auxiliary modality. Contrary to our hypothesis that
adding T1 information should improve the T2 predictions, comparing Table 1
with Table 3 seems to link increased usage of the auxiliary channel with poor
performance. One explanation may be that the network relies on irrelevant
pieces of additional information. Overall, we
found that T2 images contain enough information to accurately segment the
liver, and the gap in performance between T1 and T2 images can be explained by
a difference in annotation quality.
Table 4 shows the performance of networks trained with different loss
functions. We found no clear effect of this parameter on performance.
Adversarial training did not outperform the other loss functions either,
despite an expected behavior during the adversarial training and good
discriminator performance. This experiment corroborates the idea expressed in
[16, 20] that data variety and annotation quality are prominent, and that the
gains induced by those innovations are sometimes hard to replicate on custom
datasets. Our cross-validation experiment showed high score differences across
splits, and showed that the split we chose for comparing strategies was
particularly difficult (one cross-validation split averages a Dice of $0.983$
on both T1 and T2). This also corroborates the idea of prominence of data on
performance.
Figure 3 shows some examples of predictions of the single-input, unspecialized
method. We can see that the predictions remain accurate even in the presence
of large tumor burden (leftmost column) or big lesions near the edge of the
liver (middle right column). The rightmost column shows a case where the
patient has undergone a hepatectomy. As it is the only case with such
atypical anatomy on our database, it is particularly challenging. Despite this
difficulty the network managed to make accurate predictions, especially on the
T1 image.
Strategy | T1 (Unspec.) | T2 (Unspec.) | T1 (Spec.) | T2 (Spec.)
---|---|---|---|---
1 input | 0.961 | 0.938 | 0.959 | 0.929
2 in, 1 out | 0.955 | 0.933 | 0.954 | 0.929
2 in, 2 out | 0.938 | 0.907 | 0.942 | 0.897
1 out, prereg. | - | - | - | 0.925
Table 1: Mean Dice scores vs. training strategy.
Table 2: Dice loss when pairs are mismatched vs. strategy.
Strategy | T1 (Unspec.) | T2 (Unspec.) | T1 (Spec.) | T2 (Spec.)
---|---|---|---|---
2 in, 1 out | $<$0.001 | 0.016 | 0.001 | 0.030
2 in, 2 out | 0.042 | 0.083 | 0.023 | 0.103
1 out, prereg. | - | - | - | 0.054
Table 3: Dice reduction when pairs are mismatched vs. strategy.
Loss | T1 | T2
---|---|---
Binary cross-entropy | 0.961 | 0.938
Dice loss | 0.959 | 0.931
Dice + BCE | 0.959 | 0.932
Vanilla GAN | 0.950 | 0.930
Embedding | 0.960 | 0.935
Gambling | 0.959 | 0.930
Table 4: Mean Dice scores vs. loss function.
Fig. 3: A few examples from the test database. Top: T1 image, bottom: T2
image. Red: manual annotation, green: prediction from the single-input,
unspecialized network.
## References
* [1] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
* [2] Qi Zeng, Davood Karimi, Emily HT Pang, Shahed Mohammed, Caitlin Schneider, Mohammad Honarvar, and Septimiu E Salcudean, “Liver segmentation in magnetic resonance imaging via mean shape fitting with fully convolutional neural networks,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019, pp. 246–254.
* [3] Xiaomeng Li, Hao Chen, Xiaojuan Qi, Qi Dou, Chi-Wing Fu, and Pheng-Ann Heng, “H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes,” IEEE Transactions on Medical Imaging, vol. 37, no. 12, pp. 2663–2674, 2018.
* [4] Eugene Vorontsov, An Tang, Chris Pal, and Samuel Kadoury, “Liver lesion segmentation informed by joint liver segmentation,” in IEEE 15th International Symposium on Biomedical Imaging (ISBI). IEEE, 2018, pp. 1332–1335.
* [5] Junlin Yang, Nicha C Dvornek, Fan Zhang, Julius Chapiro, MingDe Lin, and James S Duncan, “Unsupervised domain adaptation via disentangled representations: Application to cross-modality liver segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019, pp. 255–263.
* [6] Grzegorz Chlebus, Hans Meine, Smita Thoduka, Nasreddin Abolmaali, Bram van Ginneken, Horst Karl Hahn, and Andrea Schenk, “Reducing inter-observer variability and interaction time of MR liver volumetry by combining automatic CNN-based liver segmentation and manual corrections,” PloS one, vol. 14, no. 5, pp. e0217228, 2019.
* [7] Kang Wang, Adrija Mamidipalli, Tara Retson, Naeim Bahrami, Kyle Hasenstab, Kevin Blansit, Emily Bass, Timoteo Delgado, Guilherme Cunha, Michael S Middleton, et al., “Automated CT and MRI liver segmentation and biometry using a generalized convolutional neural network,” Radiology: Artificial Intelligence, vol. 1, no. 2, pp. 180022, 2019.
* [8] Tongxue Zhou, Su Ruan, and Stéphane Canu, “A review: Deep learning for medical image segmentation using multi-modality fusion,” Array, vol. 3-4, pp. 100004, 2019.
* [9] Shruti Jadon, “A survey of loss functions for semantic segmentation,” ArXiv:2006.14822, 2020.
* [10] Carole H Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, and M Jorge Cardoso, “Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations,” in Deep learning in medical image analysis and multimodal learning for clinical decision support, pp. 240–248. Springer, 2017.
* [11] Laurens Samson, Nanne van Noord, Olaf Booij, Michael Hofmann, Efstratios Gavves, and Mohsen Ghafoorian, “I bet you are wrong: Gambling adversarial networks for structured semantic segmentation,” in IEEE International Conference on Computer Vision Workshops, 2019.
* [12] Pauline Luc, Camille Couprie, Soumith Chintala, and Jakob Verbeek, “Semantic segmentation using adversarial networks,” ArXiv:1611.08408, 2016.
* [13] Mohsen Ghafoorian, Cedric Nugteren, Nóra Baka, Olaf Booij, and Michael Hofmann, “El-gan: Embedding loss driven generative adversarial networks for lane detection,” in European Conference on Computer Vision (ECCV). 2018, pp. 256–272, Springer.
* [14] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi, “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in Fourth International Conference on 3D Vision (3DV). IEEE, 2016, pp. 565–571.
* [15] Zongwei Zhou, Vatsal Sodha, Md Mahfuzur Rahman Siddiquee, Ruibin Feng, Nima Tajbakhsh, Michael B Gotway, and Jianming Liang, “Models genesis: Generic autodidactic models for 3D medical image analysis,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019, pp. 384–393.
* [16] Fabian Isensee, Jens Petersen, André Klein, David Zimmerer, Paul F. Jaeger, Simon Kohl, Jakob Wasserthal, Gregor Koehler, Tobias Norajitra, Sebastian J. Wirkert, and Klaus Maier-Hein, “nnU-Net: Self-adapting framework for U-Net-based medical image segmentation,” ArXiv:1809.10486, 2018.
* [17] Wei-Chih Hung, Yi-Hsuan Tsai, Yan-Ting Liou, Yen-Yu Lin, and Ming-Hsuan Yang, “Adversarial learning for semi-supervised semantic segmentation,” in BMVC, 2018.
* [18] Yizhe Zhang, Lin Yang, Jianxu Chen, Maridel Fredericksen, David P Hughes, and Danny Z Chen, “Deep adversarial networks for biomedical image segmentation utilizing unannotated images,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2017, pp. 408–416.
* [19] Sen Wu, Hongyang R. Zhang, and Christopher Ré, “Understanding and improving information transfer in multi-task learning,” in International Conference on Learning Representations, 2020.
* [20] Johannes Hofmanninger, Florian Prayer, Jeanny Pan, Sebastian Röhrich, Helmut Prosch, and Georg Langs, “Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem,” European Radiology Experimental, vol. 4, no. 1, pp. 1–13, 2020.
## 5 Acknowledgments
This work has been partially funded by a grant from Association Nationale de
la Recherche et de la Technologie (#2018/0199).
## 6 Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
## 7 Compliance with Ethical Standards
This study was performed in line with the principles of the Declaration of
Helsinki and conforms to the standard reference methodology MR-004 of the CNIL
(France). Approval was granted by the CNIL (Study number 19-188).
# Studying Catastrophic Forgetting in Neural Ranking Models
Jesús Lovón-Melgarejo1 Laure Soulier2 Karen Pinel-Sauvagnat1 Lynda Tamine1
1Université Paul Sabatier, IRIT, Toulouse, France
2Sorbonne Université, CNRS, LIP6, F-75005 Paris, France
{jesus.lovon, sauvagnat<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Several deep neural ranking models have been proposed in the recent IR
literature. While their transferability to a target domain represented by a
dataset has been widely addressed using traditional domain adaptation
strategies, the
question of their cross-domain transferability is still under-studied. We
study here to what extent neural ranking models catastrophically forget old
knowledge acquired from previously observed domains after acquiring new
knowledge, leading to a performance decrease on those domains. Our experiments
show that the effectiveness of neural IR ranking models is achieved at the
cost of catastrophic forgetting and that a lifelong learning strategy using a
cross-domain regularizer successfully mitigates the problem. Using an
explanatory approach built on a regression model, we also show the effect of
domain characteristics on the rise of catastrophic forgetting. We believe that
the obtained results can be useful for both theoretical and practical future
work in neural IR.
###### Keywords:
Neural ranking Catastrophic forgetting Lifelong learning
## 1 Introduction
Neural ranking models have been increasingly adopted in the information
retrieval (IR) and natural language processing (NLP) communities for a wide
range of data and tasks [35, 40]. One common underlying issue is that they
learn relationships that may hold only in the domain from which the training
data is sampled, and generalize poorly to unobserved domains [6, 40].
(According to Pan and Yang [41], a domain consists of at most two components:
a feature space over a dataset and a marginal probability distribution within
a task.) To enhance the transferability of neural ranking models from a source
domain to a target domain, transfer learning strategies such as fine-tuning
[53], multi-tasking [29], domain adaptation [41], and more recently
adversarial learning [7], have been widely used222We consider the definition
of transfer learning in [41] (Figure 2). Please note that several other
definitions exist [13].. However, these strategies have by essence two
critical limitations reported in the machine learning literature [6, 22]. The
first one, as can be acknowledged in the NLP and IR communities [7, 29], is
that they require all the domains to be available simultaneously at the
learning stage (except the fine-tuning). The second limitation, under-studied
in both communities, is that the model leans to catastrophically forget
existing knowledge (source domain) when the learning is transferred to new
knowledge (target domain) leading to a significant drop of performance on the
source domain. These limitations are particularly thorny when considering
open-domain IR tasks including, but not limited to, conversational search. In
the underlying settings (e.g., QA systems and chatbots [15, 25, 33, 43]),
neural ranking models are expected to continually learn features from online
information streams, sampled from either observed or unobserved domains, and
to scale across different domains but without forgetting previously learned
knowledge.
Catastrophic forgetting is a long-standing problem addressed in machine
learning using lifelong learning approaches [6, 42]. It has been particularly
studied in neural-network based classification tasks in computer vision [22,
26] and more recently in NLP [32, 37, 46, 49]. However, while previous work
showed that the level of catastrophic forgetting is significantly impacted by
dataset features and network architectures, we are not aware of any existing
research in IR providing clear lessons about the transferability of neural
ranking models across domains, nor basically showing if state-of-the-art
neural ranking models are actually faced with the catastrophic forgetting
problem and how to overcome it if any. Understanding the conditions under
which these models forget accumulated knowledge and whether a lifelong
learning strategy is a feasible way for improving their effectiveness, would
bring important lessons for both practical and theoretical work in IR. This
work contributes to fill this gap identified in the neural IR literature, by
studying the transferability of ranking models. We put the focus on
catastrophic forgetting which is the bottleneck of lifelong learning.
The main contributions of this paper are as follows. 1) We show the occurrence
of catastrophic forgetting in neural ranking models. We investigate the
transfer learning of five representative neural ranking models (DRMM [14],
PACRR [17], KNRM [50], V-BERT [31] and CEDR [31]) over streams of datasets
from different domains (in our work, different domains refer to different
datasets characterized by different data distributions w.r.t. their source and
content, as defined in [41]) (MS MARCO [3], TREC Microblog [45] and TREC
COVID19 [47]); 2) We identify domain characteristics such as relevance density
as signals of catastrophic forgetting; 3) We show the effectiveness of
constraining the objective function of the neural IR models with a forget cost
term, to mitigate the catastrophic forgetting.
## 2 Background and Related Work
#### 2.0.1 From Domain Adaptation to Lifelong Learning of Neural Networks.
Neural networks are learning systems that must commonly, on the one hand,
exhibit the ability to acquire new knowledge and, on the other hand, exhibit
robustness by refining knowledge while maintaining stable performance on
existing knowledge. While the acquisition of new knowledge gives rise to the
well-known domain shift problem [18], maintaining model performance on
existing knowledge is faced with the catastrophic forgetting problem. Those
problems have been respectively tackled using domain adaptation [41] and
lifelong learning strategies [6, 42]. Domain adaptation, commonly known as a
specific setting of transfer learning [41], includes machine learning methods
(e.g., fine-tuning [49] and multi-tasking [29]) that assume that the source
and target domains, from which the training and testing data are respectively
sampled, might have different distributions. By applying a transfer
learning method, a neural model should acquire new specialized knowledge from
the target domain leading to optimal performance on it.
One of the main issues behind common transfer learning approaches is
catastrophic forgetting [11, 12]: the newly acquired knowledge interferes
with, or in the worst case overwrites, the existing knowledge, leading to a
performance decrease on information sampled from the latter. Lifelong
learning [6, 42]
tackles this issue by enhancing the models with the ability to continuously
learn over time and accumulate knowledge from streams of information sampled
across domains, either previously observed or not. The three common lifelong
learning approaches are [42]: 1) regularization that constrains the objective
function with a forget cost term [22, 26, 49]; 2) network expansion that
adapts the network architecture to new tasks by adding neurons and layers [5,
44]; and 3) memory models that retrain the network using instances selected
from a memory drawn from different data distributions [2, 32].
#### 2.0.2 On the Transferability of Neural Networks in NLP and IR.
Transferability of neural networks has been particularly studied in
classification tasks, first in computer vision [4, 54] and then only recently
in NLP [19, 38, 39]. For instance, Mou et al. [39] investigated the
transferability of neural networks in sentence classification and sentence-
pair classification tasks. One of their main findings is that transferability
across domains depends on the level of similarity between the considered
tasks. In contrast, previous work in IR, which mainly involves ranking tasks,
has only casually applied transfer learning methods (e.g., fine-tuning [53],
multi-tasking [29] and adversarial learning [7]) without bringing
generalizable lessons about the transferability of neural ranking models. One
consensual result reported across previous research in the area, is that
traditional retrieval models (e.g., learning-to-rank models [28]) that make
fewer distributional assumptions, exhibit more robust cross-domain
performances [7, 40]. Besides, it has been shown that the ability of neural
ranking models to learn new features may be achieved at the cost of poor
performances on domains not observed during training [35]. Another consensual
result is that although embeddings are trained using large scale corpora, they
are generally sub-optimal for domain-specific ranking tasks [40].
Beyond domain adaptation, there is a recent research trend in NLP toward
lifelong learning of neural networks, particularly in machine translation
[46], and language understanding tasks [37, 49, 51]. For instance, Xu et al.
[51] recently revisited the domain transferability of traditional word
embeddings [34] and proposed lifelong domain embeddings using a meta-learning
approach. The proposed meta-learner is fine-tuned to identify similar contexts
of the same word in both past domains and the new observed domain. Thus, its
inference model is able to compute the similarity scores on pairs of feature
vectors representing the same word across domains. These embeddings have been
successfully applied to a topic-classification task. In contrast, catastrophic
forgetting and lifelong learning are still under-studied in IR. We believe
that a thorough analysis of the transferability of neural ranking models from
a lifelong learning perspective would be desirable for a wide range of
emerging open-domain IR applications including but not limited to
conversational search [15, 33, 25, 43].
## 3 Study Design
Our study mainly addresses the following research questions:
RQ1: Does catastrophic forgetting occur in neural ranking models?
RQ2: What are the dataset characteristics that predict catastrophic
forgetting?
RQ3: Is a regularization-based lifelong learning method effective to mitigate
catastrophic forgetting in neural ranking models?
### 3.1 Experimental Set Up
Given a neural model $M$ designed for an ad-hoc ranking task, the primary
objectives of our experiments are twofold: O1) measuring the catastrophic
forgetting of model $M$ while applying a domain adaptation method
$\mathcal{D}$, in line with RQ1 and RQ2; and O2) evaluating the effect of a
lifelong learning method $\mathcal{L}$ in mitigating catastrophic forgetting
in model $M$, in line with RQ3. We assume that model $M$ learns a ranking task
across a stream of $n$ domain datasets $\\{D_{1},\dots,D_{n}\\}$ coming in a
sequential manner one by one. At a high level, our experimental set up is:
1. Set up an ordered dataset stream setting $D_{1}\rightarrow\dots D_{n-1}\rightarrow D_{n}$.
2. Learn oracle models $M_{i}^{*},i=1\dots n$, with parameters $\hat{\theta}^{i*}$, by training the neural ranking model $M$ on the training instances of each dataset $D_{i}$.
3. Measure the retrieval performance $R_{i,i}^{*}$ of each oracle model $M_{i}^{*}$ on the testing instances of the same dataset $D_{i}$.
4. Launch a domain adaptation method $\mathcal{D}$ w.r.t. objective O1 (resp. a lifelong learning method $\mathcal{L}$ w.r.t. objective O2) along the considered setting as follows:
   * Initialize ($k=1$) model $M_{k}$ with $\hat{\theta}^{1*}$, the parameters of model $M_{1}^{*}$ (trained on the base dataset $D_{1}$).
   * Repeat:
     * Apply to model $M_{k}$ the method $\mathcal{D}$ (resp. $\mathcal{L}$) to transfer knowledge to the right dataset $D_{k+1}$ (forward transfer). The resulting model is noted $M_{k+1}$, with parameters $\hat{\theta}^{k+1}$; its performance on dataset $D_{k+1}$ is noted $R_{k+1,k+1}$.
     * Measure the retrieval performance $R_{k+1,k}$ of model $M_{k+1}$ on the testing instances of the left dataset $D_{k}$ (backward transfer).
     * Move to the next right dataset: $k=k+1$.
   * Until the end of the dataset stream setting ($k=n$).
5. Measure catastrophic forgetting in model $M$.
This experimental pipeline, illustrated in Figure 1, follows general
guidelines adopted in previous work [2, 20, 26]. We detail below the main
underlying components highlighted in bold.
Figure 1: Experimental pipeline using a 3-dataset stream setting for a given
model M
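The sequential loop of step 4 can be sketched in pure Python. The `train` and `evaluate` callables stand in for the actual OpenNIR training and testing routines; all names here are our own.

```python
def run_stream(model_init, datasets, train, evaluate):
    # Sequentially adapt a model along a dataset stream and record the
    # performance matrix R, where R[i][j] is the score of the model after
    # learning dataset i, tested on dataset j (for j <= i).
    n = len(datasets)
    R = [[None] * n for _ in range(n)]
    model = model_init
    for k in range(n):
        model = train(model, datasets[k])   # forward transfer to D_{k+1}
        for j in range(k + 1):              # backward transfer on left datasets
            R[k][j] = evaluate(model, datasets[j])
    return R
```

The lower-triangular matrix `R`, together with the oracle scores, is all that is needed to compute the forgetting measures defined below.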
##### Neural ranking models
We evaluate catastrophic forgetting in five (5) state-of-the-art models
selected from a list of models critically evaluated in Yang et al. [52]: 1)
interaction-based models: DRMM [14], PACRR [17] and KNRM [50]; 2) BERT-
based models: Vanilla BERT [31] and CEDR-KNRM [31]. We use the OpenNIR
framework [30] that provides a complete neural ad-hoc document ranking
pipeline. Note that in this framework, the neural models are trained by
linearly combining their own score ($S_{NN}$) with a BM25 score ($S_{BM25}$).
##### Datasets and settings.
We use the three following datasets: 1) MS MARCO (ms) [3] a passage ranking
dataset which includes more than 864 K question-alike queries sampled from the
Bing search log and a large-scale web document set including 8,841,823
documents; 2) TREC Microblog (mb) [27], a real-time ad-hoc search dataset from
TREC Microblog 2013 and 2014, which contains a public Twitter sample stream
between February 1 and March 31, 2013, including 124,969,835 tweets and 115
queries submitted at a specific point in time; 3) TREC CORD19 (c19) [47], an
ad-hoc document search dataset which contains 50 question-alike queries and a
corpus of 191,175 published research articles dealing with SARS-CoV-2 or
COVID-19 topics. It is worth mentioning that these datasets fit with the
requirement of cross-domain adaptation [41] since they have significant
differences in both their content and sources. Besides, we consider four
settings (see Table 1, column ”Setting”), among which
three 2-dataset ($n=2$) settings and one 3-dataset ($n=3$) setting. As done in
previous work [2, 26], these settings follow the patterns ($D_{1}\rightarrow
D_{2}$) or ($D_{1}\rightarrow D_{2}\rightarrow D_{3}$) where dataset orders
are based on the decreasing sizes of the training sets assuming that larger
datasets allow starting with well-trained networks.
##### Domain adaptation and lifelong learning methods.
We adopt fine-tuning (training on one domain and fine-tuning on the other) as
the representative domain adaptation method $\mathcal{D}$, since it suffers
from the catastrophic forgetting problem [2, 22]. Additionally, we adopt the
Elastic Weight Consolidation (EWC) [22] as the lifelong learning method
$\mathcal{L}$. The EWC constrains the loss function with an additional forget
cost term that we add to the objective function of each of the five neural
models studied in this work. Basically speaking, EWC constrains the neural
network-based model to remember knowledge acquired on left datasets by
reducing the overwriting of its most important parameters as:
$\mathcal{L}_{EWC}(\hat{\theta}^{k})=\mathcal{L}(\hat{\theta}^{k})+\sum_{1\leq
i<k}\frac{\lambda}{2}\mathcal{F}_{i}(\hat{\theta}^{k}-\hat{\theta}^{i})^{2}$
(1)
where $\mathcal{L}(\hat{\theta}^{k})$ is the loss of the neural ranking model
with parameters $\hat{\theta}^{k}$ obtained right after learning on $D_{k}$,
$\lambda$ weighs the importance of the parameters trained on the left datasets
($D_{i},i<k$) relative to the current dataset ($D_{k}$), and $\mathcal{F}_{i}$
is the Fisher information matrix.
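Equation (1) can be sketched numerically as follows (a NumPy sketch; passing the previous parameters and Fisher diagonals explicitly is our own framing, and `lam=0.5` matches the value used later in Section 3.2):

```python
import numpy as np

def ewc_loss(base_loss, theta, prev_thetas, fishers, lam=0.5):
    # EWC-regularized objective: the task loss plus a quadratic penalty
    # pulling the current parameters toward those learned on each previous
    # dataset, weighted per-parameter by the Fisher information.
    penalty = 0.0
    for theta_i, f_i in zip(prev_thetas, fishers):
        penalty += (lam / 2.0) * float(np.sum(f_i * (theta - theta_i) ** 2))
    return base_loss + penalty
```

Parameters with high Fisher information (important for previous datasets) are thus strongly anchored, while unimportant ones remain free to adapt.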
##### Measures.
Given the setting ($D_{1}\rightarrow\dots\rightarrow D_{n}$), we use the
remembering measure (REM) derived from the backward transfer measure (BWT)
proposed by Rodriguez et al. [10] as follows:
$\bullet$ BWT: measures the intrinsic effect (either positive or negative)
that learning a model $M$ on a new dataset (right in the setting) has on the
model performance obtained on an old dataset (left in the setting), referred
to as backward transfer. Practically, in line with a lifelong learning
perspective, this measure averages, along the setting, the differences between
the performance of the model on a left dataset after learning on a new (right)
dataset and the performance of the oracle model trained and tested on that
left dataset. Thus, positive values indicate positive backward transfer, while
negative values indicate catastrophic forgetting. Formally, the BWT measure is
computed as:
$BWT=\frac{\sum_{i=2}^{n}\sum_{j=1}^{i-1}(R_{i,j}-R_{j,j}^{*})}{\frac{n(n-1)}{2}}$
(2)
$R_{i,j}$ is the performance of model $M_{i}$, obtained right after learning
on dataset $D_{i}$, tested on dataset $D_{j}$. $R_{j,j}^{*}$ is the
performance of the oracle model $M_{j}^{*}$ trained on dataset $D_{j}$ and
tested on the same dataset. To make fair comparisons between the different
neural models studied, we normalize the performance differences
($R_{i,j}-R_{j,j}^{*}$) by the model-agnostic performance obtained with the
$BM25$ model on each left dataset $D_{j}$. In our work, we use the standard IR
performance measures MAP, NDCG@20 and P@20 to compute $R_{i,j}$, but we only
report the REM values computed using the MAP measure, as they all follow the
same general trends.
$\bullet$ REM: since the BWT measure takes positive values for positive
backward transfer and negative values for catastrophic forgetting, it can be
mapped to a remembering value in the range $[0,1]$ as follows:
$REM=1-\lvert min(BWT,0)\rvert$ (3)
A REM value equal to 1 means that the model does not catastrophically forget.
To better measure the intrinsic ability of the neural ranking models to
remember previously acquired knowledge, we deploy in the OpenNIR framework two
runs for each neural model based on the score combination
($score_{G}=\alpha\times S_{NN}+(1-\alpha)\times S_{BM25}$). The first one by
considering the neural model after a re-ranking setup ($0<\alpha<1$) leading
to compute an overall $REM$ measure on the ranking model. The second one by
only considering the neural ranking based on the $S_{NN}$ score by totally
disregarding the BM25 scores ($\alpha=1$). $REMN$ denotes the remembering
measure computed in this second run.
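The two measures and the score combination above reduce to a few lines (a pure-Python sketch; `R` is the lower-triangular performance matrix produced by the pipeline of Section 3.1, and the function names are ours):

```python
def bwt(R, R_star):
    # Backward transfer (Eq. 2): average of R[i][j] - R_star[j] over all
    # pairs j < i (performance on left dataset j after learning up to
    # dataset i, minus the oracle performance on j).
    n = len(R_star)
    diffs = [R[i][j] - R_star[j] for i in range(1, n) for j in range(i)]
    return sum(diffs) / (n * (n - 1) / 2)

def rem(bwt_value):
    # Remembering (Eq. 3): 1 - |min(BWT, 0)|; 1 means no forgetting.
    return 1.0 - abs(min(bwt_value, 0.0))

def combined_score(s_nn, s_bm25, alpha):
    # Re-ranking score: alpha * S_NN + (1 - alpha) * S_BM25;
    # alpha = 1 disregards BM25 entirely (the REMN run).
    return alpha * s_nn + (1 - alpha) * s_bm25
```

Note that any positive BWT is clipped to REM = 1, so REM only penalizes forgetting, never rewards positive transfer.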
### 3.2 Implementation details
We use the OpenNIR framework with default parameters and the pairwise hinge
loss function [8]. To feed the neural ranking models, we use GloVe pre-trained
embeddings (42B tokens, 300-dimensional vectors). The datasets are split into
training and testing instance sets. For MS MARCO, we use the default splits
provided in the dataset. For TREC CORD19 and TREC Microblog, where no training
instances are provided, we adopt the splits by proportions leading to 27/18
and 92/23 training/testing queries respectively. In practice, we pre-rank
documents using the BM25 model. For each relevant document-query pair
(positive pair), we randomly sample a document for the same query with a lower
relevance score to build the negative pair. We re-rank the top-100 BM25
results and use $P@20$ to select the best-performing model. For each dataset,
we use the optimal BM25 hyperparameters selected using grid-search. In the
training phase, we consider a maximum of 100 epochs or early-stopping if no
further improvement is found. Each epoch consists of 32 batches of 16 training
pairs. All the models are optimized using Adam [21] with a learning rate of
$0.001$. BERT layers are trained at a rate of $2\mathrm{e}{-5}$ following
previous work [31]. For the EWC, we fixed $\lambda=0.5$. The code is available
at https://github.com/jeslev/OpenNIR-Lifelong.
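The pairwise hinge loss used above can be sketched as follows. This is a minimal scalar version with a unit margin (the margin value is an assumption; OpenNIR's implementation operates on batched tensors):

```python
def pairwise_hinge_loss(score_pos, score_neg, margin=1.0):
    """Pairwise hinge loss on a (relevant, sampled non-relevant) document
    pair: zero once the positive document outscores the negative one by
    at least `margin`, linear in the violation otherwise."""
    return max(0.0, margin - (score_pos - score_neg))
```

For example, `pairwise_hinge_loss(0.3, 0.1)` returns `0.8`, while a well-separated pair such as `pairwise_hinge_loss(2.0, 0.5)` incurs no loss.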
## 4 Results
### 4.1 Empirical Analysis Of Catastrophic Forgetting in Neural Ranking
Models
Table 1: Per model-setting results in our fine-tuning and EWC-based lifelong
learning experiments. All the measures are based on the MAP@100 metric. The
improvements $\Delta_{MAP(MAPN)}$ and $\Delta_{REM(REMN)}$ are reported in
percent (%).
Model | Setting | Fine-tuning: REM(REMN), $\Delta_{MAP(MAPN)}$, PR | EWC-based lifelong learning: REM(REMN), $\Delta_{REM(REMN)}$, PR
DRMM | $ms\rightarrow c19$ | 1.000(1.000), +2.2(-73.6), 1.008 | 1.000(1.000), 0(0), 1.005
DRMM | $ms\rightarrow mb$ | 0.962(0.943), +2.2(-73.6), 1.021 | 0.971(0.974), +0.9(+3.3), 1.011
DRMM | $mb\rightarrow c19$ | 1.000(0.965), -1.7(-7.7), 0.993 | 1.000(0.662), 0(-31.4), 0.995
DRMM | $ms\rightarrow mb\rightarrow c19$ | 0.976(0.938), +2(-73.6), 1.011 | 0.979(1.000), +0.3(+6.6), 1.004
PACRR | $ms\rightarrow c19$ | 1.000(0.760), +2.5(-30.1), 1.000 | 1.000(0.756), 0(-0.5), 1.000
PACRR | $ms\rightarrow mb$ | 1.000(1.000), +2.5(-30.1), 0.999 | 1.000(1.000), 0(0), 1.014
PACRR | $mb\rightarrow c19$ | 1.000(0.523), 0(+10), 1.000 | 1.000(0.940), 0(+79.7), 1.002
PACRR | $ms\rightarrow mb\rightarrow c19$ | 1.000(0.759), +2.5(-30), 1.000 | 1.000(0.874), 0(+15.2), 1.015
KNRM | $ms\rightarrow c19$ | 1.000(1.000), -12.1(-89), 1.069 | 1.000(1.000), 0(0), 1.058
KNRM | $ms\rightarrow mb$ | 1.000(1.000), -12.1(-89), 0.991 | 1.000(1.000), 0(0), 0.991
KNRM | $mb\rightarrow c19$ | 1.000(0.810), -2(-13.8), 1.135 | 1.000(0.902), 0(+11.4), 1.141
KNRM | $ms\rightarrow mb\rightarrow c19$ | 1.000(1.000), -12.1(-89), 1.086 | 1.000(0.963), 0(-3.7), 1.087
VBERT | $ms\rightarrow c19$ | 0.930(1.000), -10.6(0), 1.028 | 1.000(1.000), +7.5(0), 0.990
VBERT | $ms\rightarrow mb$ | 1.000(0.883), -10.6(0), 1.030 | 1.000(1.000), 0(+13.3), 0.992
VBERT | $mb\rightarrow c19$ | 0.913(1.000), +17.4(+25.8), 0.963 | 1.000(1.000), +9.5(0), 1.010
VBERT | $ms\rightarrow mb\rightarrow c19$ | 0.989(0.922), -10.6(0), 1.011 | 1.000(1.000), +1.1(+8.5), 0.987
CEDR | $ms\rightarrow c19$ | 0.826(1.000), +2.6(+14.2), 1.013 | 1.000(1.000), +21.1(0), 1.008
CEDR | $ms\rightarrow mb$ | 0.510(0.920), +2.6(+14.2), 1.003 | 1.000(1.000), +96.1(+8.7), 0.976
CEDR | $mb\rightarrow c19$ | 0.940(1.000), +19.6(+29.2), 1.011 | 1.000(1.000), +6.4(0), 0.984
CEDR | $ms\rightarrow mb\rightarrow c19$ | 0.771(0.946), +2.6(+14.2), 0.996 | 0.891(1.000), +15.6(+5.7), 0.961
#### 4.1.1 Within- and Across-Model Analysis (RQ1).
Our objective here is to investigate whether each of the studied neural models
suffers from catastrophic forgetting while it is fine-tuned over a setting
($D_{1}\rightarrow D_{2}$ or $D_{1}\rightarrow D_{2}\rightarrow D_{3}$). To
carry out a thorough analysis of each model-setting pair, we compute the
following measures in addition to the REM/REMN measures: 1) the MAP@100
performance ratio
($PR=\frac{1}{(n-1)}\sum_{i=2}^{n}\frac{R_{i,i}}{R_{i,i}^{*}}$) of the model
learned on the right dataset, normalized by the oracle model performance;
2) the relative improvement in MAP@100, $\Delta_{MAP}$ (resp. $\Delta_{MAPN}$),
achieved by the ranking based on the global relevance score $Score_{G}$
(resp. $Score_{NN}$) trained and tested on the left dataset, over the
performance of the BM25 ranking obtained on the same testing dataset. Table 1
reports all the metric values for each model/setting pair. In line with this
experiment’s objective, we focus on the ”Fine-tuning” columns.
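The $PR$ measure can be sketched analogously to the REM computation (an illustrative sketch with 0-based indices, where `R[i][i]` is $R_{i,i}$ and `R_star[i]` the oracle score $R_{i,i}^{*}$):

```python
def performance_ratio(R, R_star):
    """MAP@100 performance ratio (PR): average, over the right datasets,
    of the fine-tuned model's score R[i][i] normalized by the score of the
    oracle model trained directly on that dataset."""
    n = len(R)
    return sum(R[i][i] / R_star[i] for i in range(1, n)) / (n - 1)
```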
Looking first at the $PR$ measure reported in Table 1, we notice that it is
greater than $0.96$ in $100\%$ of the settings, showing that the fine-tuned
models are successful on the right dataset, and thus allow a reliable
investigation of catastrophic forgetting, as outlined in previous work [38].
It is worth recalling that the general evaluation framework is based on a
pre-ranking (using the BM25 model), which is expected to provide positive
training instances from the left dataset to the neural ranking model being
fine-tuned on the right dataset. The joint comparison of the $REM$
(resp. $REMN$) and $\Delta_{MAP}$ (resp. $\Delta_{MAPN}$) measures leads us to
highlight the following statements:
$\bullet$ We observe that only the CEDR and VBERT models achieve positive
improvements w.r.t. both the global ranking ($\Delta_{MAP}$: $+19.6\%$,
$+17.4\%$ resp.) and the neural ranking ($\Delta_{MAPN}$: $+29.2\%$, $+25.8\%$
resp.), particularly under the setting where $mb$ is the left dataset
($mb\rightarrow c19$). Both models are able to bring effectiveness gains
additively to those brought by the exact-based matching signals in BM25. These
effectiveness gains can be viewed as new knowledge in the form of semantic
matching signals which are successfully transferred to the right dataset
($c19$) while maintaining stable performances on the left dataset ($mb$)
(REM=0.940 and 0.913 for resp. CEDR and VBERT). This result is consistent
with previous work suggesting that the regularization used in transformer-
based models has an effect of alleviating catastrophic forgetting [23].
$\bullet$ We notice that the CEDR model achieves positive improvements w.r.t.
the neural ranking score ($\Delta_{MAPN}$: $+14.2\%$) in all the settings
(3/4) where $ms$ is the left dataset, while very low improvements are achieved
w.r.t. the global score ($\Delta_{MAP}$: $+2.6\%$). We make the same
observation for the PACRR model, but only for 1/4 model-setting pairs
($\Delta_{MAPN}$: $+10\%$ vs. $\Delta_{MAP}$: $0\%$), with $mb$ as the left
dataset. Under these settings, we can see that even though the exact-matching
signals brought by the BM25 model are very moderate (leading to very few
positive training instances), the CEDR and, to a lower extent, the PACRR
models are able to inherently bring significant new knowledge in terms of
semantic matching signals, however at the cost of significant forgetting on
the global ranking for CEDR (REM is in the range $[0.510;0.826]$) and on the
neural ranking for PACRR (REMN=0.523).
$\bullet$ All the models (DRMM, PACRR, KNRM, and VBERT for 3/4 settings) that
do not significantly beat the BM25 baseline, either using the global score
($\Delta_{MAP}$ in the range $[-12.1\%;+2.2\%]$) or using the neural score
($\Delta_{MAPN}$ in the range $[-89\%;+0\%]$), achieve a near upper bound of
remembering (both REM and REMN are in the range $[0.94;1]$). Paradoxically,
this result does not allow us to argue about the ability of these models to
retain old knowledge. Indeed, the lack of, or even the low, improvements over
both the exact matching (using the BM25 model) and the semantic matching
(using the neural model) indicate that a moderate amount of new knowledge, or
even no knowledge, about effective relevance ranking has been acquired from
the left dataset. Thus, the ranking performance of the fine-tuned model on the
left dataset only depends on the level of mismatch between the data available
in the right dataset for training and the test data in the left dataset.
Interestingly, we can see that the upper bound of remembering performance
($REM=1$) is particularly achieved when $ms$ is the left dataset (settings
$ms\rightarrow c19$, $ms\rightarrow mb$, $ms\rightarrow mb\rightarrow c19$).
This could be explained by the fact that the relevance matching signals
learned by the neural model on in-domain data do not degrade its performance
on general-domain knowledge.
Given the well-established practice in neural IR which consists in linearly
interpolating the neural scores with the exact-based matching scores (e.g.,
BM25 scores), these observations give rise to three main findings: 1) the more
a neural ranking model is inherently effective in learning additional semantic
matching signals, the more likely it catastrophically forgets. In other words,
the intrinsic effectiveness of neural ranking models comes at the cost of
forgetting; 2) transformer-based language models such as CEDR and VBERT
exhibit a good balance between effectiveness and forgetting, as reported in
previous work in NLP [38]; 3) given the variation observed in REM and REMN,
there is no clear trend about which ranking (BM25-based ranking vs. neural
ranking) impacts more importantly the level of overall catastrophic forgetting
of the neural models.
#### 4.1.2 Across Dataset Analysis (RQ2).
Our objective here is to identify catastrophic forgetting signals from the
perspective of the left dataset. As argued in [1], understanding the
relationships between data characteristics and catastrophic forgetting allows
anticipating the choice of datasets in lifelong learning settings regardless
of the neural ranking models. We fit a regression model to explain the REM
metric (dependent variable) using nine dataset characteristics (independent
variables). The latter are presented in Table 2 and include dataset-based
measures inspired from [1, 9] and effectiveness-based measures using the BM25
model.
Table 2: Linear regression explaining catastrophic forgetting (REM metric) at
the left dataset level. Significance: $***:p\leq 0.001$; $**:0.001<p\leq 0.01$; $*:0.01<p\leq 0.05$.
$R^{2}$ = 0.544; Constant = 0.7014$***$
Independent variables | Description | Coeff
Dataset: RS | Retrieval space size: $log_{10}(D\times Q)$ | -0.1883
Dataset: RD | Relevance density: $log_{10}\frac{Qrels}{D\times Q}$ | -0.3997$*$
Dataset: SD | Score relevance divergence: $KL(RSV_{D+},RSV_{D-})$ | 0.0009
Dataset: Vocab | Size of the vocabulary | -0.0932$*$
Dataset: DL | Average length of documents | -0.0349
Dataset: QL | Average length of queries | 0.1803$*$
Dataset: QD | Average query difficulty: $avg_{q}(\frac{1}{|q|}\sum_{w\in q}idf_{w})$ | 0.0044
Eff.: MAP | Effectiveness of the BM25: MAP | -0.0220$*$
Eff.: std-AP | Variation of BM25 effectiveness (AP metric): $\sigma_{q}(AP_{q})$ | 0.0543$*$
Residual variables | Level | Coeff
$Dataset_{i}$ | MSmarco | 0.1803
$Dataset_{i}$ | Microblog | 0.5211$**$
$M_{j}$ | DRMM | 0.1798$***$
$M_{j}$ | PACRR | 0.1965$***$
$M_{j}$ | KNRM | 0.1924$***$
$M_{j}$ | VBERT | 0.1313$***$
$M_{j}$ | CEDR | 0.0014
To artificially generate datasets with varying data characteristics, we follow
the procedure detailed in [1]: we sample queries within each left dataset in
the settings presented in Table 1 (15 for $mb$ and 50 for $ms$) to create sub-
datasets composed of those selected queries and the 100 top corresponding
documents retrieved by the BM25 model.
Then, we replace in each setting the left dataset by the corresponding sub-
dataset. We estimate for each model-setting pair the REM value as well as the
characteristic values of the left sub-dataset. We repeat this procedure 300
times to obtain 300 new settings per model, based on the 300 sampled sub-
datasets. This leads to 300 (sub-setting-model) pairs with a variation for
both the dependent and the independent variables. Finally, we build the
following explanatory regression model, referring to the ”across dataset
analysis” in [1]:
$REM_{ij}=\sum_{k}C_{k}f_{ik}+Dataset_{i}+M_{j}+\epsilon_{i}$ (4)
where $i$ denotes the $i^{th}$ sub-setting and $j$ refers to the neural
ranking model $M_{j}$. $C_{k}$ and $f_{ik}$ denote respectively the weight and
the value of the $k^{th}$ characteristic of the left dataset in the $i^{th}$
sub-setting. Note that the dataset feature values are independent of the
model $M_{j}$. $Dataset_{i}$ and $M_{j}$ are the residual variables of resp.
the left dataset and the model. The characteristic values $f_{ik}$ are
centered before the regression as suggested in Adomavicius and Zhang [1].
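The regression of Eq. (4) can be sketched with ordinary least squares as follows. The names, sizes, and random placeholder data below are illustrative assumptions, not the actual experimental data; the point is the shape of the design matrix (constant, centered characteristics, one-hot residual variables).

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_feat = 300, 9                       # 300 sub-setting/model pairs, 9 characteristics
F = rng.normal(size=(n_obs, n_feat))         # characteristic values f_ik
F = F - F.mean(axis=0)                       # center the characteristics, as in [1]
dataset = rng.integers(0, 2, size=n_obs)     # residual left-dataset variable
model = rng.integers(0, 5, size=n_obs)       # residual model variable
rem_obs = rng.uniform(0.5, 1.0, size=n_obs)  # observed REM values (placeholder)

# Design matrix: constant + centered features + one-hot residual variables
# (one level of each factor dropped to avoid collinearity with the constant).
X = np.column_stack([np.ones(n_obs), F,
                     np.eye(2)[dataset][:, 1:],
                     np.eye(5)[model][:, 1:]])
coef, *_ = np.linalg.lstsq(X, rem_obs, rcond=None)
residuals = rem_obs - X @ coef
r2 = 1 - (residuals ** 2).sum() / ((rem_obs - rem_obs.mean()) ** 2).sum()
```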
Table 2 presents the result of the regression model. From $R^{2}$ and
$Constant$, we can see that our regression model can explain $54.4\%$ of the
variation of the REM metric, highlighting an overall good performance in
explaining the remembering metric, with a good level of prediction ($0.7014$).
From the independent variables, we can infer that the difficulty of the
dataset positively impacts the remembering (namely, decreasing the
catastrophic forgetting). More precisely, the lower the relevance density (RD)
and the BM25 effectiveness (MAP), and the higher the variation of BM25
performance over queries (std-AP), the higher the REM metric. This suggests
that relevance-matching difficulty provides positive feedback signals to the
neural model to face diverse learning instances, and therefore to better
generalize over different application domains. This however holds under the
constraint that the vocabulary of the dataset ($Vocab$) is not too large, so
as to boost neural ranking performance, as outlined in [16, 36]. Looking at
the residual variables ($Dataset_{i}$ and $M_{j}$), we can corroborate the
observations made at first glance in RQ1 regarding the model families, clearly
opposing (DRMM-PACRR-KNRM-VBERT) to CEDR, since the former statistically
exhibit higher REM metric values than CEDR.
### 4.2 Mitigating Catastrophic Forgetting (RQ3)
From RQ1, we observed that some models are more prone to the catastrophic
forgetting problem than others. Our objective here is to examine whether an
EWC-based lifelong strategy can mitigate the problem. It is worth mentioning
that this objective has been targeted in previous research in computer vision
but without establishing a consensus [24, 46, 48]. While some studies reveal
that EWC outperforms domain adaptation strategies in their settings [24, 46],
others found that it is less effective [48]. To achieve the experiment’s
objective, we particularly report the following measures in addition to the
$REM/REMN$ measures: 1) $\Delta_{REM(REMN)}$, which reveals the improvement
(positive or negative) of the $REM/REMN$ measures achieved using an EWC-based
lifelong learning strategy over the $REM/REMN$ measures achieved using a
fine-tuning strategy; 2) the PR measure introduced in Section 4.1. Unlike in
Section 4.1, our aim through this measure here is to highlight the performance
stability of the learned model on the right dataset while avoiding
catastrophic forgetting on the left dataset.
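Schematically, the EWC regularizer [22] augments the task loss with a quadratic penalty anchoring each parameter to its value after the previous task, weighted by its diagonal Fisher information. A plain-Python sketch over flat parameter lists (the actual implementation operates on model tensors; the $\lambda/2$ scaling follows the usual EWC formulation):

```python
def ewc_penalty(params, anchor_params, fisher, lam=0.5):
    """EWC penalty: (lam / 2) * sum_k F_k * (theta_k - theta*_k)^2, where
    theta*_k are the parameters learned on the previous (left) dataset and
    F_k their diagonal Fisher information."""
    return (lam / 2) * sum(f * (p - p0) ** 2
                           for p, p0, f in zip(params, anchor_params, fisher))

# total_loss = task_loss_on_right_dataset + ewc_penalty(params, params_left, fisher)
```

With $\lambda=0.5$ as in our experiments, parameters with high Fisher information are strongly pulled back toward their left-dataset values, while unimportant ones remain free to adapt.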
We now turn our attention to the ”EWC-based lifelong learning” columns in
Table 1. Our experiment results show that among the $9$ (resp. $11$) settings
that exhibit catastrophic forgetting in the combined model (resp. neural
model), the EWC strategy improves $9/9$, i.e., $100\%$ (resp. $9/11$, i.e.,
$88\%$) of them, in the range $[+0.3\%,+96.1\%]$ (resp. $[+3.3\%,+79.7\%]$).
Interestingly, this improvement in performance on the left dataset does not
come at the cost of a significant decrease in performance on the right
dataset, since $100\%$ of the models achieve a $PR$ ratio greater than $0.96$.
Given, on the one hand, the high variability of the settings derived from the
samples, and, on the other hand, the very low number of settings ($10\%$,
i.e., $2/20$) where a performance decrease is observed on the left dataset, we
can argue that EWC-based lifelong learning is not inherently impacted by
dataset order, leading to a general effectiveness trend over the models. We
emphasize this general trend by particularly looking at the CEDR model, which,
we recall (see Section 4.1, RQ1), clearly exhibits the catastrophic forgetting
problem. As can be seen from Table 1, model performances on the left datasets
are significantly improved ($+6.4\%\leq\Delta_{REM}\leq+96.1\%$;
$0\%\leq\Delta_{REMN}\leq+8.7\%$) while keeping model performances on the
right dataset stable ($0.961\leq PR\leq 1.008$). This property is referred to
as the stability-plasticity dilemma [42].
To get a better overview of the effect of the EWC strategy, we compare in
Figure 2 the behavior of the CEDR and KNRM models which exhibit respectively
low level ($REM=0.510$) and high level of remembering ($REM=1$) particularly
in the setting $ms\rightarrow mb$. The loss curves in Figure 2(a) highlight a
peak after the $20^{th}$ epoch for both CEDR and KNRM. This peak denotes the
beginning of the fine-tuning on the $mb$ dataset. After this peak, we can
observe that the curve representing the EWC-based CEDR loss (in purple) is
slightly above the CEDR loss (in orange), while both curves for the KNRM model
(green and blue, resp. with and without EWC) are overlaid. Combined with
accumulate knowledge, this suggests that EWC is able to discriminate models
prone to catastrophic forgetting and, when necessary, to relax the constraint
of good ranking prediction on the dataset used for the fine-tuning to avoid
over-fitting. This small degradation of knowledge acquisition during the fine-
tuning on the $ms$ dataset is carried out at the benefit of the previous
knowledge retention to maintain retrieval performance on the $mb$ dataset
(Figure 2(b)). Thus, we can infer that the EWC strategy applied to neural
ranking models fully plays its role in mitigating the trade-off between
stability and plasticity.
Figure 2: Impact of the EWC strategy on loss and performance for the
$ms\rightarrow mb$ setting. (a) Loss function; (b) performance on the $mb$
dataset.
## 5 Conclusion
We investigated the problem of catastrophic forgetting in neural-network-based
ranking models. We carried out experiments using 5 SOTA models and 3 datasets,
showing that neural ranking effectiveness comes at the cost of forgetting, and
that transformer-based models achieve a good balance between effectiveness and
remembering. We also showed that the EWC-based strategy mitigates the
catastrophic forgetting problem while ensuring a good trade-off between
transferability and plasticity. Besides, datasets providing weak and varying
relevance signals are more likely to be remembered. While previous work in the IR
community mainly criticized neural models regarding effectiveness [35, 40,
52], we provide complementary insights on the relationship between
effectiveness and transferability in a lifelong learning setting that involves
cross-domain adaptation. We believe that our study, even under limited setups,
provides fair and generalizable results that could serve future research and
system-design in neural IR.
## 6 Acknowledgement
We would like to thank projects ANR COST (ANR-18-CE23-0016) and ANR JCJC
SESAMS (ANR-18- CE23-0001) for supporting this work.
## References
* [1] Adomavicius, G., Zhang, J.: Impact of data characteristics on recommender systems performance. ACM Trans. Manage. Inf. Syst. 3(1) (2012)
* [2] Asghar, N., Mou, L., Selby, K.A., Pantasdo, K.D., Poupart, P., Jiang, X.: Progressive memory banks for incremental domain adaptation. In: ICLR. vol. abs/1811.00239 (2020)
* [3] Bajaj, P., Campos, D., Craswell, N., Deng, L., Gao, J., Liu, X., Majumder, R., McNamara, A., Mitra, B., Nguyen, T., et al.: Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268 (2016)
* [4] Bengio, Y.: Deep learning of representations for unsupervised and transfer learning. In: UTLW’11. p. 17–37 (2011)
* [5] Cai, H., Chen, H., Zhang, C., Song, Y., Zhao, X., Yin, D.: Adaptive parameterization for neural dialogue generation. In: EMNLP-IJCNLP. pp. 1793–1802 (Nov 2019)
* [6] Chen, Z., Liu, B.: Lifelong Machine Learning, Second Edition. Synthesis Lectures on Artificial Intelligence and Machine Learning (2018)
* [7] Cohen, D., Mitra, B., Hofmann, K., Croft, W.B.: Cross domain regularization for neural ranking models using adversarial learning. In: ACM SIGIR (May 2018)
* [8] Dehghani, M., Zamani, H., Severyn, A., Kamps, J., Croft, W.B.: Neural ranking models with weak supervision. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 65–74 (2017)
* [9] Deldjoo, Y., Di Noia, T., Di Sciascio, E., Merra, F.A.: How dataset characteristics affect the robustness of collaborative recommendation models. In: ACM SIGIR. p. 951–960 (2020)
* [10] Díaz-Rodríguez, N., Lomonaco, V., Filliat, D., Maltoni, D.: Don’t forget, there is more than forgetting: new metrics for Continual Learning. ArXiv e-prints (Oct 2018)
* [11] French, R.M.: Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences 3(4), 128 – 135 (1999)
* [12] Goodfellow, I.J., Mirza, M., Da, X., Courville, A.C., Bengio, Y.: An empirical investigation of catastrophic forgeting in gradient-based neural networks. CoRR abs/1312.6211 (2014)
* [13] Gulrajani, I., Lopez-Paz, D.: In search of lost domain generalization. In: International Conference on Learning Representations (2021), https://openreview.net/forum?id=lQdXeXDoWtI
* [14] Guo, J., Fan, Y., Ai, Q., Croft, W.B.: A deep relevance matching model for ad-hoc retrieval. In: CIKM ’16. p. 55–64. Association for Computing Machinery (2016)
* [15] Hancock, B., Bordes, A., Mazare, P.E., Weston, J.: Learning from dialogue after deployment: Feed yourself, chatbot! In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. pp. 3667–3684 (Jul 2019)
* [16] Hofstätter, S., Rekabsaz, N., Eickhoff, C., Hanbury, A.: On the effect of low-frequency terms on neural-ir models. In: SIGIR. p. 1137–1140 (2019)
* [17] Hui, K., Yates, A., Berberich, K., de Melo, G.: Pacrr: A position-aware neural ir model for relevance matching. arXiv preprint arXiv:1704.03940 (2017)
* [18] Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., Lawrence, N.D.: Dataset Shift in Machine Learning (2009)
* [19] Jha, R., Lovering, C., Pavlick, E.: When does data augmentation help generalization in nlp? (2020)
* [20] Kemker, R., McClure, M., Abitino, A., Hayes, T.L., Kanan, C.: Measuring catastrophic forgetting in neural networks. In: AAAI-18. pp. 3390–3398 (2018)
* [21] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
* [22] Kirkpatrick, J., Pascanu, R., Rabinowitz, N.C., Veness, J., Desjardins, G., Rusu, A.A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., Hadsell, R.: Overcoming catastrophic forgetting in neural networks. CoRR abs/1612.00796 (2016)
* [23] Lee, C., Cho, K., Kang, W.: Mixout: Effective regularization to finetune large-scale pretrained language models. In: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020\. OpenReview.net (2020), https://openreview.net/forum?id=HkgaETNtDB
* [24] Lee, S.W., Kim, J.H., Jun, J., Ha, J.W., Zhang, B.T.: Overcoming catastrophic forgetting by incremental moment matching. p. 4655–4665. NIPS’17, Curran Associates Inc., Red Hook, NY, USA (2017)
* [25] Li, J., Miller, A.H., Chopra, S., Ranzato, M., Weston, J.: Learning through dialogue interactions by asking questions. In: ICLR 2017 (2017)
* [26] Li, Z., Hoiem, D.: Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence (12), 2935–2947 (2018)
* [27] Lin, J., Efron, M.: Overview of the trec-2013 microblog track. In: Text REtrieval Conference (TREC), Gaithersburg, Maryland, USA (2013)
* [28] Liu, T.Y.: Learning to rank for information retrieval. Found. Trends Inf. Retr. 3(3), 225–331 (Mar 2009)
* [29] Liu, X., Gao, J., He, X., Deng, L., Duh, K., Wang, Y.: Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In: NAACL HLT 2015. pp. 912–921 (2015)
* [30] MacAvaney, S.: OpenNIR: A complete neural ad-hoc ranking pipeline. In: WSDM 2020 (2020)
* [31] MacAvaney, S., Yates, A., Cohan, A., Goharian, N.: Cedr: Contextualized embeddings for document ranking. In: ACM SIGIR. pp. 1101–1104 (2019)
* [32] de Masson d’Autume, C., Ruder, S., Kong, L., Yogatama, D.: Episodic memory in lifelong language learning. CoRR abs/1906.01076 (2019)
* [33] Mazumder, S., Ma, N., Liu, B.: Towards a continuous knowledge learning engine for chatbots. CoRR abs/1802.06024 (2018)
* [34] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. In: ICLR 2013 (2013)
* [35] Mitra, B., Craswell, N.: An introduction to neural information retrieval. Foundations and Trends in Information Retrieval 13(1), 1–126 (December 2018)
* [36] Mitra, B., Craswell, N.: An updated duet model for passage re-ranking. CoRR abs/1903.07666 (2019), http://arxiv.org/abs/1903.07666
* [37] Mosbach, M., Andriushchenko, M., Klakow, D.: On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines. arXiv e-prints p. arXiv:2006.04884 (Jun 2020)
* [38] Mosbach, M., Andriushchenko, M., Klakow, D.: On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines (2020)
* [39] Mou, L., Meng, Z., Yan, R., Li, G., Xu, Y., Zhang, L., Jin, Z.: How transferable are neural networks in NLP applications? In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pp. 479–489 (Nov 2016)
* [40] Onal, K.D., Zhang, Y., Altingövde, I.S., Rahman, M.M., Senkul, P., Braylan, A., Dang, B., Chang, H.L., Kim, H., McNamara, Q., Angert, A., Banner, E., Khetan, V., McDonnell, T., Nguyen, A.H., Xu, D., Wallace, B.C., de Rijke, M., Lease, M.: Neural information retrieval: at the end of the early years. Information Retrieval Journal 21, 111–182 (2017)
* [41] Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. on Knowl. and Data Eng. 22(10), 1345–1359 (Oct 2010)
* [42] Parisi, G.I., Kemker, R., Part, J.L., Kanan, C., Wermter, S.: Continual lifelong learning with neural networks: A review. Neural Networks 113, 54–71 (2019)
* [43] Roller, S., Boureau, Y.L., Weston, J., Bordes, A., Dinan, E., Fan, A., Gunning, D., Ju, D., Li, M., Poff, S., et al.: Open-domain conversational agents: Current progress, open problems, and future directions. arXiv preprint arXiv:2006.12442 (2020)
* [44] Rusu, A.A., Rabinowitz, N.C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., Hadsell, R.: Progressive neural networks. CoRR (2016)
* [45] Soboroff, I., Ounis, I., Macdonald, C., Lin, J.J.: Overview of the TREC-2012 microblog track. In: Proceedings of The Twenty-First Text REtrieval Conference, TREC 2012. NIST Special Publication, vol. 500-298 (2012)
* [46] Thompson, B., Gwinnup, J., Khayrallah, H., Duh, K., Koehn, P.: Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In: NAACL. pp. 2062–2068 (2019)
* [47] Wang, L.L., Lo, K., Chandrasekhar, Y., Reas, R., Yang, J., Eide, D., Funk, K., Kinney, R., Liu, Z., Merrill, W., et al.: Cord-19: The covid-19 open research dataset. ArXiv (2020)
* [48] Wen, S., Itti, L.: Overcoming catastrophic forgetting problem by weight consolidation and long-term memory. CoRR abs/1805.07441 (2018), http://arxiv.org/abs/1805.07441
* [49] Wiese, G., Weissenborn, D., Neves, M.: Neural domain adaptation for biomedical question answering. In: CoNLL 2017. pp. 281–289 (Aug 2017)
* [50] Xiong, C., Dai, Z., Callan, J., Liu, Z., Power, R.: End-to-end neural ad-hoc ranking with kernel pooling. In: ACM SIGIR. pp. 55–64 (2017)
* [51] Xu, H., Liu, B., Shu, L., Yu, P.S.: Lifelong domain word embedding via meta-learning. In: IJCAI-18. pp. 4510–4516 (7 2018)
* [52] Yang, W., Lu, K., Yang, P., Lin, J.: Critically examining the “neural hype”: weak baselines and the additivity of effectiveness gains from neural ranking models. In: ACM SIGIR. pp. 1129–1132 (2019)
* [53] Yang, W., Xie, Y., Tan, L., Xiong, K., Li, M., Lin, J.: Data augmentation for BERT fine-tuning in open-domain question answering. CoRR abs/1904.06652 (2019)
* [54] Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? In: NIPS’14. p. 3320–3328 (2014)
# Asymptotic nodal length and log-integrability of toral eigenfunctions
Andrea Sartori Departement of Mathematics, Tel Aviv University, Tel Aviv,
Israel, IL<EMAIL_ADDRESS>
###### Abstract.
We study the nodal set, that is the zero set, of “flat” Laplace eigenfunctions
on the standard two dimensional flat torus
$\mathbb{T}^{2}=\mathbb{R}^{2}/\mathbb{Z}^{2}$. We find the asymptotic law for
the nodal length of these eigenfunctions in any ball of radius larger than the
Planck-scale. In particular, we prove that the nodal set equidistributes above
Planck-scale. At Planck-scale, we show that the nodal set equidistributes
around almost every point for some eigenfunctions, but we also construct
eigenfunctions whose nodal set fails to equidistribute. The proof is based on
Bourgain’s de-randomisation and the main new ingredient, which might be of
independent interest, is the integrability of arbitrarily large powers of the
logarithm of “generic ”eigenfunctions, based on the work of Nazarov [29, 30],
and some arithmetic ingredients called semi-correlations.
## 1\. Introduction
### 1.1. Nodal length of Laplace eigenfunctions and the Random Wave Model
Given a compact $C^{\infty}$-smooth Riemannian surface $(M,g)$ without
boundary, let $0=\lambda_{1}<\lambda_{2}\leq...$ be the spectrum, repeated
according to multiplicity, of the Laplace-Beltrami operator $\Delta_{g}$
on $M$. We are interested in the nodal set (i.e. the zero set) of Laplace
eigenfunctions on $M$, that is, solutions $f_{\lambda}$ to the eigenvalue
problem
$\displaystyle\Delta_{g}f_{\lambda}+\lambda f_{\lambda}=0,$
where $\lambda\in\\{\lambda_{i}\\}_{i\geq 1}$. In particular, we study the
nodal length
$\displaystyle\mathcal{L}(f_{\lambda})=\mathcal{H}\\{x\in
M;f_{\lambda}(x)=0\\}$
where $\mathcal{H}$ is the $1$-dimensional Hausdorff measure.
Yau [39], and independently Brüning [9], showed that
$\mathcal{L}(f_{\lambda})\geq c(M)\lambda^{1/2}$ for some constant $c>0$, and
Yau [39] conjectured the matching upper bound
$\displaystyle c(M)\sqrt{\lambda}\leq\mathcal{L}(f_{\lambda})\leq
C(M)\sqrt{\lambda},$
for some constant $C(M)>0$. Donnelly and Fefferman [12] showed that Yau’s
conjecture holds for manifolds of any dimension, provided that the metric is
real-analytic. Recently, Logunov and Malinnikova [22, 23, 24] made further
progress on Yau’s conjecture, showing the lower bound for $C^{\infty}$
manifolds and giving a polynomial upper bound.
More precise information about the nodal set can heuristically be deduced from
a conjecture of Berry [3, 4] known as the Random Wave Model (RWM). The RWM
states that “generic” Laplace eigenfunctions, in balls of radius slightly
larger than the Planck scale $O(\lambda^{-1/2})$, should behave as an
isotropic Gaussian field $F$ with covariance function
$\mathbb{E}[F(x)\overline{F(y)}]=J_{0}\left(|x-y|\right),$
where $J_{0}(\cdot)$ is the $0$-th Bessel function. One plausible consequence
of the RWM is that the nodal length of “generic” eigenfunctions satisfies
$\displaystyle\mathcal{L}(f_{\lambda})=c\sqrt{\lambda}(1+o_{\lambda\rightarrow\infty}(1)),$
(1.1)
for some absolute constant $c>0$. Going even further, the nodal set should
asymptotically equidistribute on every ball of radius slightly larger than
Planck-scale, that is
$\displaystyle\mathcal{L}(f_{\lambda},B)=\mathcal{H}\\{x\in
B:f_{\lambda}(x)=0\\}=c\operatorname{Vol}(B)\sqrt{\lambda}(1+o_{\lambda\rightarrow\infty}(1)),$
(1.2)
for any ball $B=B(r)$ of radius $r=r(\lambda)>0$ satisfying
$r\cdot\lambda^{1/2}\rightarrow\infty$. In particular, we expect the nodal set
to equidistribute on $M$, see also [40, Chapter 13].
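For reference, the covariance function above can be evaluated numerically from the integral representation $J_{0}(x)=\frac{1}{\pi}\int_{0}^{\pi}\cos(x\sin t)\,dt$. The following is a small illustrative sketch; the midpoint quadrature and the node count are arbitrary choices, not part of the argument.

```python
import math

def bessel_j0(x, n=2000):
    """J_0 via its integral representation, midpoint rule with n nodes."""
    h = math.pi / n
    return (h / math.pi) * sum(math.cos(x * math.sin((k + 0.5) * h))
                               for k in range(n))

def rwm_covariance(x, y):
    """Covariance E[F(x) conj(F(y))] = J_0(|x - y|) of the isotropic RWM field."""
    return bessel_j0(math.dist(x, y))
```

In particular `bessel_j0(0.0)` returns `1.0` (up to rounding), and the covariance decays and oscillates with the distance $|x-y|$, vanishing near the zeros of $J_{0}$.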
We study a class of deterministic Laplace eigenfunctions on the standard two
dimensional torus $\mathbb{T}^{2}=\mathbb{R}^{2}/\mathbb{Z}^{2}$ known as
“flat”eigenfunctions, see section 1.2 below. The main result is that (1.2)
holds for the said eigenfunctions in every ball of radius
$r>\lambda^{-1/2+\varepsilon}$. While the large and small scale behavior of
the nodal length of random Laplace eigenfunctions has been intensively studied
[2, 19, 27, 34, 37], to the best of the author's knowledge, no other, non-random or non-trivial (e.g. $f_{\lambda}(x)=\cos(a\cdot x)$ with $|a|^{2}=\lambda$), examples of (1.2) or even (1.1) are known. Thus the results of this paper seem to be the first to address the asymptotic behavior of the nodal length of deterministic Laplace eigenfunctions.
The proof is based on the de-randomisation technique pioneered by Bourgain [8]
and its subsequent development by Buckley-Wigman [10]. Bourgain's de-randomisation asserts that a flat eigenfunction behaves in accordance with the RWM in most Planck-scaled balls in $\mathbb{T}^{2}$, see Proposition 2.2 below.
Therefore, a crucial step in applying this technique is to control the
contribution of the few Planck-scaled balls where the RWM-type behavior may
not be present. In [10, 8], the authors found the asymptotic count of nodal domains, the number of connected components of $\mathbb{T}^{2}\backslash f_{\lambda}^{-1}(0)$, for flat eigenfunctions, using the Faber-Krahn inequality [26] to control the contribution of the “bad” balls.
In our work, the existence of a few Planck-scaled balls which contribute significantly to (1.2) led us to the question, of possible independent interest,
of finding anti-concentration inequalities for the nodal length. That is, to
find some $p>1$ such that
$\displaystyle\int_{\mathbb{T}^{2}}\mathcal{L}(f_{\lambda},B(x,\lambda^{-1/2}))^{p}dx\leq
C(p)\lambda^{-p/2},$ (1.3)
for some $C(p)>0$. Using the work of Nazarov [29, 30] and some arithmetic
considerations, we show that arbitrarily large powers of the logarithm of
Laplace eigenfunctions are integrable and deduce that (1.3) holds for all
$p\geq 1$.
Finally, as a consequence of our method, we are also able to study the
distribution, at Planck-scale, of the nodal length for flat eigenfunctions. We
show that, if $r\cdot\lambda^{1/2}\rightarrow\infty$ arbitrarily slowly then
(1.2) is true for almost all balls $B$, that is
$\displaystyle\operatorname{Var}(r^{-1}\mathcal{L}(f,B(x,r)))=\int_{\mathbb{T}^{2}}\left(r^{-1}\mathcal{L}(f,B(x,r))-cr\lambda^{1/2}\right)^{2}dx\rightarrow
0$ $\displaystyle r\cdot\lambda^{1/2}\rightarrow\infty,$ (1.4)
for certain eigenfunctions. However, we also give an example where (1.4) fails, so there exist balls where (1.2) does not hold at Planck-scale. These
results appear to be the first to address the distribution of the nodal length
of deterministic Laplace eigenfunctions at small scale and they complement
similar findings for the Planck-scale mass-distribution of toral
eigenfunctions by Lester-Rudnick [21], Granville-Wigman [15] and Wigman-Yesha
[38], see also Humphries [17] for similar work on the modular surface.
### 1.2. Toral eigenfunctions
Laplace eigenfunctions on $\mathbb{T}^{2}$ with eigenvalue $-4\pi^{2}\lambda$ (we will simply say eigenvalue $\lambda$ from now on) can be written explicitly as
a Fourier sum
$\displaystyle f_{\lambda}(x)=f(x)=\sum_{\begin{subarray}{c}\xi\in\mathbb{Z}^{2}\\\ |\xi|^{2}=\lambda\end{subarray}}a_{\xi}e(\langle\xi,x\rangle),$ (1.5)
where $e(\cdot)=\exp(2\pi i\cdot)$ and the $a_{\xi}$’s are complex numbers
satisfying $\overline{a_{\xi}}=a_{-\xi}$ so that $f$ is real valued. The
eigenvalues are integers representable as the sum of two squares $\lambda\in
S:=\\{\lambda\in\mathbb{Z}:\lambda=\square+\square\\}$ and have multiplicity
$N=N(\lambda):=|\\{\xi\in\mathbb{Z}^{2}:|\xi|^{2}=\lambda\\}|$ given by the
number of lattice points on the circle of radius $\lambda^{1/2}$. We normalize
$f$ so that
$\displaystyle||f||^{2}_{L^{2}(\mathbb{T}^{2})}=\sum_{\xi}|a_{\xi}|^{2}=1,$
(1.6)
and say that $f$ is flat if, for every $\varepsilon>0$, we have
$\displaystyle|a_{\xi}|^{2}\leq 100N^{-1+\varepsilon},$ (1.7)
for all $|\xi|^{2}=\lambda$. Flatness makes precise the term “generic” eigenfunction on $\mathbb{T}^{2}$, see also the discussion in [10].
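For illustration (a computational sketch, not used anywhere in the paper), $N(\lambda)$ can be found by direct enumeration, and the equal-weight choice $|a_{\xi}|^{2}=1/N$, used for Bourgain's eigenfunctions in Example 1.6 below, satisfies both the normalization (1.6) and the flatness bound (1.7); the eigenvalues tested are arbitrary small examples.

```python
import math

def lattice_points(lam):
    # All xi in Z^2 with |xi|^2 = lam
    r = int(math.isqrt(lam))
    return [(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)
            if a * a + b * b == lam]

for lam, expected in [(1, 4), (2, 4), (5, 8), (25, 12)]:
    pts = lattice_points(lam)
    N = len(pts)
    assert N == expected
    # Bourgain-type coefficients |a_xi|^2 = 1/N satisfy (1.6) and (1.7)
    a2 = [1.0 / N] * N
    assert abs(sum(a2) - 1.0) < 1e-12
    eps = 0.1
    assert all(c <= 100 * N ** (-1 + eps) for c in a2)
```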
Finally, in order to describe the asymptotic behaviour of
$\mathcal{L}(f_{\lambda})$, we need to introduce one more piece of notation:
the pseudo-spectral measure associated with $f$ and its Fourier coefficients
$\displaystyle\mu_{f}=\sum_{\xi}|a_{\xi}|^{2}\delta_{\xi/\sqrt{\lambda}}$
$\displaystyle\widehat{\mu_{f}}(k)=\int_{\mathbb{S}^{1}}z^{k}d\mu_{f}(z),$
(1.8)
where $\delta_{\xi}$ is the Dirac distribution at the point $\xi$ and
$k\in\mathbb{Z}$. Observe that $\mu_{f}$ is a probability measure supported on the unit circle $\mathbb{S}^{1}\subset\mathbb{R}^{2}$. Thus, since the set of probability measures on $\mathbb{S}^{1}$ equipped with the weak⋆ topology is compact, upon passing to a subsequence, from now on we always assume that
$\mu_{f}$ weak⋆ converges to some probability measure $\mu$, as
$N\rightarrow\infty$. Moreover, to avoid degeneracies, we assume the support
of $\mu$ is not contained in a line. For a study of the weak⋆ limits of $\mu_{f}$ in the case $|a_{\xi}|^{2}=1$, see [20, 35].
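As a concrete illustration of (1.8) (a sketch with the equal-weight choice $|a_{\xi}|^{2}=1/N$, which is an assumption of this example rather than the general setting), the moments $\widehat{\mu_{f}}(k)$ can be computed directly from the lattice points; the symmetries of the lattice points under sign changes and coordinate swaps force $\widehat{\mu_{f}}(2)=0$ in this case.

```python
import math

def lattice_points(lam):
    # All xi in Z^2 with |xi|^2 = lam
    r = int(math.isqrt(lam))
    return [(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)
            if a * a + b * b == lam]

def mu_hat(lam, k):
    # k-th moment of the pseudo-spectral measure for equal weights
    # |a_xi|^2 = 1/N: mu_hat(k) = (1/N) * sum_xi (xi / sqrt(lam))^k
    zs = [complex(a, b) / math.sqrt(lam) for (a, b) in lattice_points(lam)]
    return sum(z ** k for z in zs) / len(zs)

assert abs(mu_hat(25, 0) - 1.0) < 1e-12  # mu_f is a probability measure
assert abs(mu_hat(25, 2)) < 1e-9         # second moment vanishes by symmetry
```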
### 1.3. Statement of the main results
We are now ready to state our main results:
###### Theorem 1.1.
Let $\varepsilon>0$, $\mu$ and $f$ be as in section 1.2. There exists a
density one subsequence 111A subsequence $S^{\prime}\subset S$ is of density one if $\lim\limits_{X\rightarrow\infty}|\\{\lambda\in S^{\prime}:\lambda\leq X\\}|/|\\{\lambda\in S:\lambda\leq X\\}|=1$. Importantly, every subsequence $S^{\prime}\subset S$ discussed in this paper can be explicitly described by some arithmetic conditions, see Section 2.2 below. of
$\lambda\in S$ and some explicit constant $c_{1}=c_{1}(\widehat{\mu}(2))>0$
such that
$\mathcal{L}(f,B)=c_{1}\operatorname{Vol}(B)\lambda^{1/2}(1+o_{\lambda\rightarrow\infty}(1)),$
uniformly for all flat $f$, satisfying $\mu_{f}\rightarrow\mu$, and all balls
$B\subset\mathbb{T}^{2}$ of radius $r\geq\lambda^{-1/2+\varepsilon}$, where
the limit is taken along the said density one subsequence. Moreover, if
$\operatorname{Im}(\widehat{\mu}(2))=0$, then
$c_{1}(\mu)=\frac{(1-\widehat{\mu}(2)^{2})}{2^{5/2}\pi}\int_{0}^{2\pi}\frac{1}{(1-\widehat{\mu}(2)\cos(2\theta))^{3/2}}d\theta.$
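As a sanity check on this formula (a numerical sketch, not part of the proof), evaluating it at $\widehat{\mu}(2)=0$, the value corresponding to the Lebesgue measure on $\mathbb{S}^{1}$, recovers the constant $1/(2\sqrt{2})$ appearing in Corollary 1.5 and Example 1.6 below.

```python
import math
import numpy as np

def c1(mu2, m=200000):
    # c1 = (1 - mu2^2) / (2^{5/2} pi) * int_0^{2pi} (1 - mu2 cos(2t))^{-3/2} dt
    t = (np.arange(m) + 0.5) * 2.0 * np.pi / m  # midpoint rule
    integral = 2.0 * np.pi * float(np.mean((1.0 - mu2 * np.cos(2.0 * t)) ** (-1.5)))
    return (1.0 - mu2 ** 2) / (2.0 ** 2.5 * np.pi) * integral

# mu_hat(2) = 0 (the Lebesgue measure on S^1) recovers Berry's constant
assert abs(c1(0.0) - 1.0 / (2.0 * math.sqrt(2.0))) < 1e-12
```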
###### Remark 1.2.
The proof does not give any quantitative information on the error term in
Theorem 1.1. However, thanks to the work of Benatar, Marinucci and Wigman [2],
we expect the said error term to have size $O(N^{-2})$, for a generic sequence
of eigenvalues.
To state the next theorem, given $R\geq 1$, we write
$\displaystyle F_{x,R}(y)=F_{x}(y)=f\left(x+\frac{Ry}{\sqrt{\lambda}}\right)$
(1.9)
for the restriction of $f$ to the box $B(x,R/\sqrt{\lambda})$ and
$y\in[-1/2,1/2]^{2}$. We also write
$\mathcal{L}(F_{x})=\mathcal{L}(F_{x},B(1))=\mathcal{H}\\{y\in
B(1)=[-1/2,1/2]^{2}:F_{x}(y)=0\\}$. Our second main result is the following:
###### Theorem 1.3.
Let $F_{x}$ be as in (1.9) and $\mu$ be as in section 1.2. There exists a
density one subsequence of $\lambda\in S$ such that the following holds:
suppose that $\mu$ does not have any atoms then there exists some constant
$c_{2}=c_{2}(\widehat{\mu}(2))$ such that
$\operatorname{Var}\left(\mathcal{L}(F_{x})\right):=\int_{\mathbb{T}^{2}}\left(\mathcal{L}(F_{x})-c_{1}R\right)^{2}dx=c_{2}R^{2}\left(1+o_{R\rightarrow\infty}(1)\right)(1+o_{\lambda\rightarrow\infty}(1)),$
uniformly for all flat $f$ satisfying $\mu_{f}\rightarrow\mu$, where $c_{1}$ is as in Theorem 1.1 and the order of limits is $\lambda\rightarrow\infty$ followed by $R\rightarrow\infty$. Moreover, if $\widehat{\mu}(2)=0$ then $c_{2}=0$.
###### Remark 1.4.
We use the assumption that $\mu$ does not have atoms to simplify some
calculations in the study of the nodal length variance of the (centred)
Gaussian field with spectral measure $\mu$. It is very plausible that this
assumption can be removed at the cost of a more technical computation.
In particular, we have the following corollary about the equidistribution of
the length at Planck-scale around almost every point:
###### Corollary 1.5.
Let $\epsilon>0$. Then, under the assumptions of Theorem 1.3 and $\widehat{\mu}(2)=0$, we have
$\displaystyle\lim_{R\rightarrow\infty}\lim\limits_{\lambda\rightarrow\infty}\operatorname{Vol}\left(\left\\{x\in\mathbb{T}^{2}:\left|\frac{\mathcal{L}(F_{x})}{R}-\frac{1}{2\sqrt{2}}\right|>\epsilon\right\\}\right)=0.$
###### Example 1.6.
An important class of examples is given by “Bourgain’s eigenfunctions”:
$f(x)=\frac{1}{\sqrt{N}}\sum_{|\xi|^{2}=\lambda}e(\langle x,\xi\rangle),$
note that these eigenfunctions are flat. Moreover, thanks to the
equidistribution of lattice points on $\mathbb{S}^{1}$ for a density one
subset of $\lambda\in S$ [13, 18], we may assume that the limiting measure $\mu$ is the Lebesgue measure on the unit circle. Thus, Theorem 1.1 gives
$\displaystyle\mathcal{L}(f)=\frac{1}{2\sqrt{2}}\sqrt{\lambda}(1+o(1)).$
Furthermore, inserting in the proof of Theorem 1.3 the result of Berry [5,
Equation 28] evaluating the asymptotic variance of the nodal length of a
Gaussian field with spectral measure $\mu$, we obtain the precise asymptotic
behaviour
$\displaystyle\lim\limits_{\lambda\rightarrow\infty}\operatorname{Var}(\mathcal{L}(F_{x}))=\frac{\log
R}{512\pi}(1+o_{R\rightarrow\infty}(1)),$
where $F_{x}(\cdot)=f(x+Ry/\sqrt{\lambda})$ and $f$ is Bourgain's eigenfunction.
On the other hand, consider the following slightly modified Bourgain’s
eigenfunctions: divide the circle into $8$ equal arcs starting at the point
$(1,0)$ and take all the coefficients supported on points in the first and
fifth arc, chosen anticlockwise, to be $2/\sqrt{N}$ and all the others to be
zero. Let us call such eigenfunctions $\tilde{f}$ and observe that $\tilde{f}$
is also flat. Then, for a density one subsequence of eigenvalues, the limiting measure
$\tilde{\mu}$ is the Lebesgue measure supported on the first and fifth arc,
thus $\widehat{\tilde{\mu}}(2)=1$ and $\tilde{\mu}$ has no atoms. Computing
the explicit value of $c_{2}(\tilde{\mu})$ in Theorem 1.3 via equation (A.35)
below, we have
$cR^{2}\leq\operatorname{Var}(\mathcal{L}(\tilde{F_{x}})),$
for some $c>0$. Hence, thanks to (1.3), Proposition 3.1 below, we may deduce that there exist some $\epsilon,\delta>0$ such that
$\lim_{R\rightarrow\infty}\lim\limits_{\lambda\rightarrow\infty}\operatorname{Vol}\left(\left\\{x\in\mathbb{T}^{2}:\left|\frac{\mathcal{L}(\tilde{F}_{x})}{R}-c_{1}({\tilde{\mu}})\right|>\epsilon\right\\}\right)>\delta,$
that is, the nodal length does not equidistribute at Planck-scale.
Finally, the main new ingredient in the proof, which allows us to show (1.3), is the log-integrability of $f$:
###### Proposition 1.7.
Let $f$ as in (1.5) be flat and let $p\geq 1$ be fixed. Then there exists a
density one subsequence of $\lambda\in S$ such that
$\int_{\mathbb{T}^{2}}|\log|f(x)||^{p}dx\leq C,$
for some absolute constant $C=C(p)>0$.
### 1.4. Notation
To simplify the exposition we adopt the following standard notation: we write
$A\lesssim B$ and $A\gtrsim B$ to designate the existence of an absolute constant $C>0$ such that $A\leq CB$ and $A\geq CB$, respectively. Moreover, for some parameter
$R>0$, we write $A=O_{R}(B)$ to mean that there exists some constant
$C=C(R)>0$ such that $|A|\leq CB$, if no parameter is specified in the
notation, then the constant is absolute. Finally, we write
$o_{R\rightarrow\infty}(1)$ for any function that tends to zero as the
parameter $R\rightarrow\infty$.
### 1.5. Outline of the proof
In this outline, we assume that $f$ as in (1.5) is flat, let $F_{x}$ be as in
(1.9), and, for $\mu$ as in Theorem 1.1, we denote by $F_{\mu}$ the centered,
stationary, continuous Gaussian field with spectral measure $\mu$, see section
2.1 below for the relevant background. For simplicity, we only sketch the case
$B=\mathbb{T}^{2}$ of Theorem 1.1. By Bourgain’s de-randomisation, there exists a
coupling, see Proposition 2.2 below for the precise definition and statement,
such that
$\displaystyle||F_{x}-F_{\mu}||_{C^{2}[-1/2,1/2]^{2}}\overset{d}{\longrightarrow}0$
$\displaystyle\lambda\rightarrow\infty$ (1.10)
where the limit is taken along a density one subsequence of $\lambda\in S$,
$F_{x}$ is considered as a random variable on $\mathbb{T}^{2}$ and the
convergence is in distribution. From (1.10) and the stability of the nodal
set, Lemma 4.2 below, we deduce that
$\displaystyle\mathcal{L}(F_{x})\overset{d}{\rightarrow}\mathcal{L}(F_{\mu})$
$\displaystyle\lambda\rightarrow\infty.$ (1.11)
Now, we wish to extend the convergence in (1.11) to convergence of moments. To
this end, the estimates on the nodal length of Laplace eigenfunctions (on
analytic manifolds) by Donnelly and Fefferman, Lemma 3.2 below, give
$\mathcal{L}(F_{x})\lesssim\log\frac{\sup_{B(x,4R/\sqrt{\lambda})}|f|}{\sup_{B(x,2R/\sqrt{\lambda})}|f|}+O_{R}(1),$
where $B(x,r)$ is the ball centred at $x\in\mathbb{T}^{2}$ of radius $r>0$.
Therefore, given $p\geq 1$, Proposition 1.7 implies that
$\displaystyle\int_{x\in\mathbb{T}^{2}}\mathcal{L}(F_{x})^{p}dx\leq C$ (1.12)
for some absolute constant $C=C(p)>0$. Thus, from (1.11) and (1.12), we have
$\displaystyle\int_{\mathbb{T}^{2}}\mathcal{L}(F_{x})^{p}dx\longrightarrow\mathbb{E}[\mathcal{L}(F_{\mu})^{p}]$
$\displaystyle\lambda\rightarrow\infty$ (1.13)
along a density one subsequence of $\lambda\in S$. Finally, the moments of
$\mathcal{L}(F_{\mu})$ can be computed using the Kac-Rice formula,
Propositions 5.1 and 5.2 below, concluding the proof.
The proof of Proposition 1.7 relies on a Theorem of Nazarov [29, 30] about
log-integrability of Fourier series with spectrum supported on
$\Lambda(p)$-systems, see section 2.3 for the relevant results and
definitions. The recent work of Cammarota, Klurman and Wigman, Lemma 2.4
below, shows that the $\xi=(\xi_{1},\xi_{2})$’s as in (1.5), or, more
precisely, their projections $\xi_{i}$, form a $\Lambda(p)$ system for a
density one subsequence of $\lambda\in S$. Hence, Proposition 1.7 follows from
Theorem 2.6 and Lemma 2.4.
## 2\. Preliminaries
### 2.1. Probability background
Gaussian fields. We briefly collect some definitions about Gaussian fields (on
$\mathbb{R}^{2}$). A (real-valued) Gaussian field $F$ is a continuous map
$F:\mathbb{R}^{2}\times\Omega\rightarrow\mathbb{R}$ for some probability space
$\Omega$, equipped with a probability measure $\mathbb{P}(\cdot)$, such that
all finite dimensional distributions $(F(x_{1},\cdot),\ldots,F(x_{n},\cdot))$ are
multivariate Gaussian. $F$ is centred if $\mathbb{E}[F]\equiv 0$ and
stationary if its law is invariant under translations $x\rightarrow x+\tau$
for $\tau\in\mathbb{R}^{2}$. The covariance function of $F$ is $\mathbb{E}[F(x)\cdot F(y)]$, which, by stationarity, equals
$\displaystyle\mathbb{E}[F(x)\cdot F(y)]=\mathbb{E}[F(x-y)\cdot F(0)].$
Since the covariance is positive definite, by Bochner’s theorem, it is the
Fourier transform of some measure $\mu$ on $\mathbb{R}^{2}$. So we have
$\displaystyle\mathbb{E}[F(x)F(y)]=\int_{\mathbb{R}^{2}}e\left(\langle
x-y,\lambda\rangle\right)d\mu(\lambda).$
The measure $\mu$ is called the spectral measure of $F$ and, since $F$ is
real-valued, satisfies ${\mu(-I)}=\mu(I)$ for any (measurable) subset
$I\subset\mathbb{R}^{2}$, i.e., $\mu$ is a symmetric measure. By Kolmogorov's theorem, $\mu$ fully determines $F$, so we may simply write $F=F_{\mu}$.
Moreover, to simplify notation, given a parameter $R>1$, we write
$F_{\mu}(\cdot)=F_{\mu,R}(\cdot):=F_{\mu}(R\cdot)$, that is every Gaussian
field is scaled by a factor of $R$, compare with the definition of $F_{x}$ in
(1.9).
The Lévy–Prokhorov metric. Here we define the Lévy–Prokhorov metric and state
some standard results which will be useful later. Let $C^{s}(V)$ be the space
of $s$-times, $s\geq 0$ integer, continuously differentiable functions on $V$,
a compact set of $\mathbb{R}^{2}$. Since $C^{s}(V)$ is a separable metric
space, Prokhorov’s Theorem, see [6, Chapters 5 and 6], implies that
$\mathcal{P}(C^{s}(V))$, the space of probability measures on $C^{s}(V)$, is
metrizable via the Lévy–Prokhorov metric. This is defined as follows: for a
subset $B\subset C^{s}(V)$, denote by $B_{{+\varepsilon}}$ the $\varepsilon$-neighbourhood of $B$, that is
$B_{{+\varepsilon}}:=\\{p\in C^{s}(V)~{}|~{}\exists~{}q\in B,\ d(p,q)<\varepsilon\\}=\bigcup_{{p\in B}}B(p,\varepsilon).$
The Lévy–Prokhorov metric
$d_{P}:{\mathcal{P}(C^{s}(V))\times\mathcal{P}}(C^{s}(V))\to[0,+\infty)$ is
defined for two probability measures $\mu$ and $\nu$ as:
$\displaystyle d_{P}(\mu,\nu):=\inf\left\\{\varepsilon>0:\mu(B)\leq\nu(B_{{+\varepsilon}})+\varepsilon,\ \nu(B)\leq\mu(B_{{+\varepsilon}})+\varepsilon\ \forall~{}B\in{\mathcal{B}}(C^{s}(V))\right\\}.$ (2.1)
We will also need the following result on uniform integrability [6, Theorem 3.5].
###### Lemma 2.1.
Let $X_{n}$ be a sequence of random variables such that $X_{n}\overset{d}{\rightarrow}X$, that is, converging in distribution. Suppose that there
exists some $\alpha>0$ such that $\mathbb{E}[|X_{n}|^{1+\alpha}]\leq C<\infty$
for some $C>0$, uniformly for all $n\geq 1$. Then,
$\displaystyle\mathbb{E}[X_{n}]\to\mathbb{E}[X]$ $\displaystyle
n\rightarrow\infty.$
The push-forward measure. Given a ball $B\subset\mathbb{T}^{2}$ of radius $r>0$
and $F_{x}$ as in (1.9), we denote by $\operatorname{Vol}_{B}$ the uniform
measure on $B$, that is for a test-function $g:B\rightarrow\mathbb{R}$
$\int g(x)d\operatorname{Vol}_{B}(x):=\frac{1}{\pi r^{2}}\int_{B}g(x)dx.$
Then, for an integer $s\geq 1$, $F_{x}$ induces a probability measure in $\mathcal{P}(C^{s}([-1/2,1/2]^{2}))$, via the push-forward
$(F_{x})_{\star}\operatorname{Vol}_{B}(A)=\operatorname{Vol}_{B}(\\{x\in B:F_{x}\in A\\})=:F^{B}_{x}(A),$
where $A\subset C^{s}([-1/2,1/2]^{2})$ is a measurable subset. Similarly,
given a (symmetric) probability measure $\mu$ on $\mathbb{S}^{1}$, the push-
forward of $F_{\mu}$ defines a probability measure on
$\mathcal{P}(C^{s}([-1/2,1/2]^{2}))$ which we denote by
$(F_{\mu})_{\star}\mathbb{P}$. To shorten notation, we simply write
$d_{P}(F^{B}_{x},F_{\mu}):=d_{P}((F_{x})_{\star}\operatorname{Vol}_{B},(F_{\mu})_{\star}\mathbb{P}).$
### 2.2. Bourgain’s de-randomisation and correlations
With the notation introduced in section 2.1, we have the following result, see
[8, 10] and [36, Proposition 4.5]:
###### Proposition 2.2.
Let $R>1$, $\varepsilon>0$, $f$, $F_{x}$ be as in (1.5) and (1.9) respectively and $\mu$ be as in section 1.2. Then there exists a density one subsequence of
$\lambda\in S$ such that
$\displaystyle d_{P}(F^{B}_{x},F_{\mu})\rightarrow 0$
$\displaystyle\lambda\rightarrow\infty$
uniformly for all $f$ flat, satisfying $\mu_{f}\rightarrow\mu$, and all balls
$B\subset\mathbb{T}^{2}$ of radius $r>\lambda^{-1/2+\varepsilon}$, where the
convergence is with respect to the $C^{2}([-1/2,1/2]^{2})$ metric.
###### Proof.
Let $\Omega$ be the abstract probability space where $F_{\mu}$ is defined and
let $\delta>0$ be given. Under the assumptions of Proposition 2.2, [36, Lemma
4.4] and [36, Proposition 4.5] say that, uniformly for all $f$ flat,
satisfying $\mu_{f}\rightarrow\mu$, and all balls $B\subset\mathbb{T}^{2}$ of
radius $r>\lambda^{-1/2+\varepsilon}$, there exist a map
$\tau:\Omega\rightarrow B$ and a subset $\Omega^{\prime}\subset\Omega$ such
that
1. (1)
For any measurable $A\subset\Omega$, $\operatorname{Vol}(\tau(A))=\pi
r^{2}\mathbb{P}(A)$,
2. (2)
$\mathbb{P}(\Omega^{\prime})\leq\delta$,
3. (3)
For all $\omega\not\in\Omega^{\prime}$,
$||F_{\mu}(\tau(\omega),y)-F_{x}(y)||_{C^{2}[-1/2,1/2]^{2}}\leq\delta$.
Therefore, given a measurable set $A\subset C^{2}([-1/2,1/2]^{2})$, we have
$\displaystyle(F_{\mu})_{\star}\mathbb{P}(A)$
$\displaystyle=\mathbb{P}(F_{\mu}\in A)=\mathbb{P}(F_{\mu}\in
A,\omega\in\Omega^{\prime})+\mathbb{P}(F_{\mu}\in
A,\omega\not\in\Omega^{\prime})$
$\displaystyle\stackrel{{\scriptstyle(2)}}{{\leq}}\mathbb{P}(F_{\mu}\in A,\omega\not\in\Omega^{\prime})+\delta\stackrel{{\scriptstyle(1)-(3)}}{{\leq}}(F_{x})_{\star}\operatorname{Vol}_{B}(A_{+\delta})+\delta.$
Similarly, we have
$(F_{x})_{\star}\operatorname{Vol}_{B}(A)\leq(F_{\mu})_{\star}\mathbb{P}(A_{+\delta})+\delta,$
thus, $d_{P}(F^{B}_{x},F_{\mu})\leq\delta$, as required. ∎
The proof of Proposition 2.2 relies on the following result by Bombieri and
Bourgain [7, Theorem 14]:
###### Lemma 2.3 (Bombieri-Bourgain).
Let $\ell$ be a positive integer. Then, for a density one subsequence of
$\lambda\in S$, there exists no non-trivial solution to the linear equation
$\displaystyle\xi_{1}+...+\xi_{2\ell}=0$ $\displaystyle|\xi_{i}|^{2}=\lambda.$
That is, all solutions have the form $\xi_{1}=-\xi_{2}$,…, $\xi_{2\ell-1}=-\xi_{2\ell}$, up to permutations.
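For $\ell=2$, the conclusion in fact holds for every eigenvalue, not just a density one subsequence: two lattice points on the circle are determined by their sum (the circle $|\xi|^{2}=\lambda$ meets any of its translates in at most two points), so every solution of $\xi_{1}+\xi_{2}+\xi_{3}+\xi_{4}=0$ cancels in pairs. The brute-force check below, over a few small eigenvalues, is only an illustration of this classical fact.

```python
import math
from itertools import product

def lattice_points(lam):
    # All xi in Z^2 with |xi|^2 = lam
    r = int(math.isqrt(lam))
    return [(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)
            if a * a + b * b == lam]

def is_trivial(sol):
    # A solution (x1, x2, x3, x4) is trivial if it splits into
    # two pairs summing to zero, in some order
    x1, x2, x3, x4 = sol
    neg = lambda u, v: u[0] + v[0] == 0 and u[1] + v[1] == 0
    return (neg(x1, x2) and neg(x3, x4)) or \
           (neg(x1, x3) and neg(x2, x4)) or \
           (neg(x1, x4) and neg(x2, x3))

for lam in (5, 25, 65):
    pts = lattice_points(lam)
    for sol in product(pts, repeat=4):
        if sum(p[0] for p in sol) == 0 and sum(p[1] for p in sol) == 0:
            assert is_trivial(sol)
```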
We will need the following version of Lemma 2.3 for projections of the $\xi$
onto the first and second coordinate [11, Theorem 1.3]:
###### Lemma 2.4.
Let $\xi=(\xi^{1},\xi^{2})\in\mathbb{Z}^{2}$ and $\ell\geq 2$. Then, for a
density one subsequence of $\lambda\in S$, there exists no non-trivial
solution to the linear equation
$\displaystyle\xi^{i}_{1}+...+\xi^{i}_{2\ell}=0$ $\displaystyle|\xi_{j}|^{2}=\lambda\hskip 14.22636pti=1,2.$
### 2.3. Log-integrability and $\Lambda(p)$-systems
We need to first introduce some definitions: given some $g\in
L^{2}(\mathbb{T})$, the spectrum of $g$ is
$\operatorname{Spec}(g):=\left\\{n\in\mathbb{Z}:\widehat{g}(n)=\int e(n\cdot
x)g(x)dx\neq 0\right\\}.
We say that a (possibly finite) set $V=\\{n_{i}\\}_{i}\subset\mathbb{Z}$ is a
$\Lambda(p)$ system for some $p\geq 2$ if, for every $g\in L^{2}(\mathbb{T})$
with $\operatorname{Spec}(g)\subset V$, there exists some $\tilde{C}>0$,
independent of $g$, such that
$\displaystyle||g||_{L^{p}}\leq\tilde{C}||g||_{L^{2}}.$ (2.2)
###### Claim 2.5.
Let $V=\\{n_{i}\\}_{i}\subset\mathbb{Z}$ such that, for some even $p\geq 2$,
the only solutions to
$n_{i_{1}}+n_{i_{2}}+...+n_{i_{p}}=0$
are trivial, that is $n_{i_{1}}=-n_{i_{2}}$…, up to permutations. Then $V$ is a $\Lambda(p)$-system with constant $\tilde{C}=\tilde{C}(p)$ independent of $V$.
###### Proof.
Let $g\in L^{2}(\mathbb{T})$ with $\operatorname{Spec}(g)\subset V$, then we
may write $g$ as
$\displaystyle g(x)=\sum_{i}a_{i}e(n_{i}\cdot x),$
for some $a_{i}\in\mathbb{C}$ and, normalising $g$, we may also assume that $||g||^{2}_{L^{2}}=\sum_{i}|a_{i}|^{2}=1$. Now, expanding the $p$-th power of $g$, we have
$||g||_{L^{p}}^{p}=\sum_{i_{1},...,i_{p}}a_{i_{1}}\overline{a_{i_{2}}}a_{i_{3}}...\overline{a_{i_{p}}}\int e((n_{i_{1}}-n_{i_{2}}+...-n_{i_{p}})x)dx.$
Using the orthogonality of the exponentials and the assumption we deduce
$||g||_{L^{p}}^{p}=\sum_{\begin{subarray}{c}i_{1},...,i_{p}\\\
n_{i_{1}}+n_{i_{2}}+...+n_{i_{p}}=0\end{subarray}}a_{i_{1}}\overline{a_{i_{2}}}a_{i_{3}}...\overline{a_{i_{p}}}\leq
c(p)\left(\sum_{i}|a_{i}|^{2}\right)^{p/2}\leq c(p)$
where $c(p)$ counts the number of permutations of $n_{i_{1}}=-n_{i_{2}}$,…, and is therefore independent of $g$ and of $V$, as required. ∎
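The orthogonality count in the proof can be verified exactly in Fourier space for $p=4$: writing $c_{k}$ for the Fourier coefficients of $|g|^{2}$, Parseval gives $||g||_{L^{4}}^{4}=\sum_{k}|c_{k}|^{2}$, and when the only solutions of $n_{1}-n_{2}+n_{3}-n_{4}=0$ in $V$ are trivial this collapses to $2(\sum_{i}|a_{i}|^{2})^{2}-\sum_{i}|a_{i}|^{4}$. A small numerical sketch (the Sidon set $\{1,2,4,8,16\}$ and the random coefficients are arbitrary illustrative choices):

```python
import random

# A Sidon set: all pairwise differences of distinct elements are distinct,
# so the only solutions of n1 - n2 + n3 - n4 = 0 are the trivial ones
V = [1, 2, 4, 8, 16]

random.seed(0)
a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in V]

# c_k = sum_{n - m = k} a_n * conj(a_m): the k-th Fourier coefficient of |g|^2
corr = {}
for i, n in enumerate(V):
    for j, m in enumerate(V):
        corr[n - m] = corr.get(n - m, 0j) + a[i] * a[j].conjugate()

# ||g||_{L^4}^4 = sum_k |c_k|^2 by Parseval applied to |g|^2
lhs = sum(abs(c) ** 2 for c in corr.values())
s2 = sum(abs(x) ** 2 for x in a)
s4 = sum(abs(x) ** 4 for x in a)
rhs = 2.0 * s2 ** 2 - s4
assert abs(lhs - rhs) < 1e-8 * rhs
```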
Finally, given some $V=\\{n_{i}\\}_{i}\subset\mathbb{Z}$, we let
$\displaystyle R(V):=\sup_{\begin{subarray}{c}r\in\mathbb{Z}\\\ r\neq 0\end{subarray}}|\\{(n_{i},n_{j})\in V^{2}:n_{i}-n_{j}=r\\}|,$ $\displaystyle D(V):=\\{n_{i}-n_{j}\in\mathbb{Z}:i\neq j\\}.$
With the above notation, we have the following theorem [30]:
###### Theorem 2.6 (Nazarov).
Let $\varepsilon>0$, $V\subset\mathbb{Z}$ such that $R(V)<\infty$ and $D(V)$
is a $\Lambda(p^{\prime})$ system for some $p^{\prime}>2$, moreover let
$\tilde{C}=\tilde{C}(p^{\prime})$ be as in (2.2). Then there exists some
constant $C=C(\tilde{C},\varepsilon,R(V))>0$ such that, uniformly for all
$g\in L^{2}(\mathbb{T})$ with spectrum contained in $V$ and normalised so that
$||g||_{L^{2}(\mathbb{T})}=1$, we have
$\int_{\mathbb{T}}|\log|g(x)||^{\frac{p^{\prime}}{4}-\varepsilon}dx\leq C.$
For the sake of completeness, we provide a proof of Theorem 2.6 in Appendix B.
## 3\. Log-integrability and moments of $\mathcal{L}(F_{x})$
In this section, we will prove Proposition 1.7 and deduce from it the
following:
###### Proposition 3.1.
Let $R>1$, $F_{x}$ be as in (1.9) and $p\geq 1$ be a fixed integer. Then there
exists a density one subsequence of $\lambda\in S$ and some constant
$C=C(p,R)$ such that
$\int_{\mathbb{T}^{2}}\mathcal{L}(F_{x})^{p}dx\leq C.$
### 3.1. Log-integrability of toral eigenfunctions
In this section, we prove Proposition 1.7; the proof follows directly by combining Lemma 2.4 and Theorem 2.6.
###### Proof of Proposition 1.7.
Let $p\geq 1$ be given, first we observe
$\displaystyle\int_{\mathbb{T}^{2}}|\log|f(x)||^{p}dx=\int_{0}^{\infty}\operatorname{Vol}\left(|f(x)|\geq\exp(t^{1/p})\right)+\operatorname{Vol}\left(|f(x)|\leq\exp(-t^{1/p})\right)dt.$
(3.1)
Since $||f||_{L^{2}}=1$, Chebyshev’s inequality gives
$\operatorname{Vol}\left(|f(x)|\geq\exp(t^{1/p})\right)\leq\exp(-2t^{1/p}),$
thus the first term on the RHS of (3.1) is bounded by some constant depending
on $p$ only.
To bound the second term on the RHS of (3.1), we write $f^{i}(x_{j},\cdot)=f^{i}(\cdot)$ for $f$ seen as a function of $x_{i}\in\mathbb{T}$ only (that is, with $x_{j}$ fixed) and observe that
$\displaystyle\operatorname{Vol}\left(|f(x)|\leq\exp(-t^{1/p})\right)\leq\operatorname{Vol}\left(|f(x)|\leq\exp(-t^{1/p})\hskip
5.69054pt\text{and}\hskip
5.69054pt||f^{1}||_{L^{2}}\geq\exp(-t^{1/p}/2)\right)$
$\displaystyle+\operatorname{Vol}\left(|f(x)|\leq\exp(-t^{1/p})\hskip
5.69054pt\text{and}\hskip
5.69054pt||f^{1}||_{L^{2}}\leq\exp(-t^{1/p}/2)\right)$
$\displaystyle\leq\operatorname{Vol}\left(\frac{|f(x)|}{||f^{1}||}\leq\exp(-t^{1/p}/2)\right)+\operatorname{Vol}\left(||f^{1}||_{L^{2}}\leq\exp(-t^{1/p}/2)\right).$
(3.2)
Therefore, inserting (3.2) into (3.1) and bearing in mind that the first term on the RHS of (3.1) is bounded, we obtain
$\displaystyle\int_{\mathbb{T}^{2}}|\log|f_{\lambda}(x)||^{p}dx\lesssim_{p}\int_{\mathbb{T}}\int_{\mathbb{T}}\left|\log\left|\frac{f^{1}}{||f^{1}||_{L^{2}}}\right|\right|^{p}dx_{1}dx_{2}+\int_{\mathbb{T}}\left|\log||f^{1}||_{L^{2}}(x_{2})\right|^{p}dx_{2}+O(1).$ (3.3)
Now, thanks to Lemma 2.4 and Claim 2.5 we may assume that the sets
$\mathcal{E}^{i}_{\lambda}=\\{\xi^{i}:\xi=(\xi^{1},\xi^{2}),\hskip
5.69054pt|\xi|^{2}=\lambda\\}$ are both $\Lambda(\ell)$-systems, for some
$\ell>0$ to be chosen later. Therefore, since
$\xi^{i}\in\mathcal{E}^{i}_{\lambda}$ if and only if
$-\xi^{i}\in\mathcal{E}^{i}_{\lambda}$, $D(\mathcal{E}^{1}_{\lambda})$ is a
$\Lambda(\ell/2)$-system and $R(\mathcal{E}^{1}_{\lambda})=3$. Thus, we may
apply Theorem 2.6 to
$\displaystyle g=f^{1}/||f^{1}||_{L^{2}},\hskip 8.53581ptV=\mathcal{E}^{1}_{\lambda},\hskip 8.53581pt\epsilon=1,\hskip 8.53581ptp^{\prime}=\ell\geq 8p+4$
to see that the first term on the RHS of (3.3) is bounded. To bound the second
term on the RHS of (3.3), we observe that
$||f^{1}||^{2}_{L^{2}}(x_{2})=\sum_{\begin{subarray}{c}\xi,\eta\\\ \xi^{1}=\eta^{1}\end{subarray}}a_{\xi}\overline{a_{\eta}}e((\xi^{2}-\eta^{2})x_{2})=1+\sum_{\xi^{2}}a_{(\xi^{1},\xi^{2})}\overline{a_{(\xi^{1},-\xi^{2})}}e(2\xi^{2}x_{2}).$
Thus, the function $||f^{1}||^{2}_{L^{2}}(x_{2})$ has spectrum contained in $V^{\prime}=2\mathcal{E}^{2}_{\lambda}\cup\\{0\\}$ and $L^{2}$ norm equal to
$\int_{\mathbb{T}}\left|1+\sum_{\xi^{2}}a_{(\xi^{1},\xi^{2})}\overline{a_{(\xi^{1},-\xi^{2})}}e(2\xi^{2}x_{2})\right|^{2}dx_{2}=1+\sum_{\xi^{2}}\left|a_{(\xi^{1},\xi^{2})}\overline{a_{(\xi^{1},-\xi^{2})}}\right|^{2}=1+o_{N\rightarrow\infty}(1),$
where we have used the fact that $|a_{\xi}|^{2}\leq N^{-1+\varepsilon}$ for
all $\xi$. Since $D(V^{\prime})$ is a $\Lambda(\ell/2)$-system and
$R(V^{\prime})=3$, Theorem 2.6 applied to
$\displaystyle g=||f^{1}||^{2}_{L^{2}},\hskip 8.53581ptV=V^{\prime},\hskip 8.53581pt\epsilon=1,\hskip 8.53581ptp^{\prime}=\ell\geq 8p+4$
shows that the second term on the RHS of (3.3) is also bounded. Hence, taking
$\ell=8p+4$ we conclude the proof of Proposition 1.7. ∎
### 3.2. Proof of Proposition 3.1
In this section, given a box $B\subset\mathbb{R}^{n}$ and $r>0$, we write $rB$ for the concentric box of $r$ times the side length. To conclude the proof of
Proposition 3.1, we need the following lemma, see [25, Lemma 2.6.1] and [12,
Proposition 6.7]:
###### Lemma 3.2.
Let $B\subset\mathbb{R}^{3}$ be the unit box, suppose that
$h:3B\rightarrow\mathbb{R}$ is a harmonic function, then
$\displaystyle\mathcal{V}\left(h,\frac{1}{2}B\right)\lesssim\log\frac{\sup_{2B}|h|}{\sup_{B}|h|},$
where $\mathcal{V}\left(h,\frac{1}{2}B\right)=\mathcal{H}^{2}(\\{x\in
2^{-1}B;h(x)=0\\})$.
We are finally ready to prove Proposition 3.1:
###### Proof of Proposition 3.1.
Let $B=[-1,1]^{2}$ and $\tilde{B}=[-1,1]^{2}\times[-1,1]$. First, we consider the auxiliary function
$h(y,s):=F_{x}(y)e^{2\pi Rs}:3\tilde{B}\rightarrow\mathbb{R}$
and observe that $h$ is harmonic. Therefore, Lemma 3.2 gives
$\displaystyle\mathcal{V}\left(h,\frac{1}{2}\tilde{B}\right)\lesssim\log\frac{\sup_{2\tilde{B}}|h|}{\sup_{\tilde{B}}|h|}.$
(3.4)
Moreover, since the factor $e^{2\pi Rs}$ has no zeros, $\sup_{2\tilde{B}}|h|=e^{4\pi R}\sup_{2B}|F_{x}|$ and $\sup_{\tilde{B}}|h|=e^{2\pi R}\sup_{B}|F_{x}|$, we have
$\mathcal{L}(F_{x})\lesssim\mathcal{V}\left(h,\frac{1}{2}\tilde{B}\right)\lesssim
R+\log\frac{\sup_{2B}|F_{x}|}{\sup_{B}|F_{x}|}.$
Thus, given $p\geq 1$, the inequality $(X+Y)^{p}\lesssim_{p}X^{p}+Y^{p}$ gives
$\displaystyle\int_{\mathbb{T}^{2}}\mathcal{L}(F_{x})^{p}dx\lesssim_{p}R^{p}+\int_{\mathbb{T}^{2}}\left(\log\frac{\sup_{2B}|F_{x}|}{\sup_{B}|F_{x}|}\right)^{p}dx.$ (3.5)
Therefore, we have reduced the proof of Proposition 3.1 to showing that the second term on the RHS of (3.5) is bounded.
Bearing in mind that $\sup_{2B}|F_{x}|/(\sup_{B}|F_{x}|)\geq 1$ and using the
estimate $\sup_{B}|F_{x}|>|f(x)|$, the second term on the RHS of (3.5) can be
bounded by
$\displaystyle\int_{\mathbb{T}^{2}}\left(\log\frac{\sup_{2B}|F_{x}|}{\sup_{B}|F_{x}|}\right)^{p}dx=\int_{0}^{\infty}\operatorname{Vol}\left(\left(\log\frac{\sup_{2B}|F_{x}|}{\sup_{B}|F_{x}|}\right)^{p}>t\right)dt$
$\displaystyle\leq\int_{0}^{\infty}\operatorname{Vol}\left(\sup_{2B}|F_{x}|>\exp(t^{1/p})|f(x)|\right)dt.$
(3.6)
Splitting into the two cases $|f(x)|\geq\exp(-t^{1/p}/2)$ and $|f(x)|<\exp(-t^{1/p}/2)$, we bound the integrand in (3.6) as follows:
$\displaystyle\operatorname{Vol}\left(\sup_{2B}|F_{x}|>e^{t^{1/p}}|f(x)|\right)\leq\operatorname{Vol}\left(\sup_{2B}|F_{x}|>e^{t^{1/p}/2}\right)+\operatorname{Vol}\left(|f(x)|\leq
e^{-t^{1/p}/2}\right).$ (3.7)
Thanks to Proposition 1.7, we have
$\displaystyle\int_{0}^{\infty}\operatorname{Vol}\left(|f(x)|\leq
e^{-t^{1/p}/2}\right)dt\lesssim_{p}\int_{\mathbb{T}^{2}}|\log|f(x)||^{p}dx=O_{p}(1).$
(3.8)
By elliptic estimates [14, Page 332], we have
$\sup_{2B}|F_{x}|^{2}\leq\sup_{2\tilde{B}}|h|^{2}\lesssim||h||^{2}_{L^{2}(3\tilde{B})}\lesssim e^{24\pi R}\int_{3B}|F_{x}(y)|^{2}dy.$
Since
$\int_{\mathbb{T}^{2}}||F_{x}||^{2}_{L^{2}(3B)}dx=\sum_{\xi,\eta}a_{\xi}\overline{a_{\eta}}\int_{\mathbb{T}^{2}}e((\xi-\eta)x)dx\int_{3B}e(R(\xi-\eta)\lambda^{-1/2}y)dy=O(1),$
Chebyshev’s inequality gives
$\operatorname{Vol}\left(\sup_{2B}|F_{x}|>e^{t^{1/p}/2}\right)\lesssim
e^{48\pi R}e^{-t^{1/p}},$ therefore
$\displaystyle\int_{0}^{\infty}\operatorname{Vol}\left(\sup_{2B}|F_{x}|>e^{t^{1/p}/2}\right)dt=O_{R,p}(1).$ (3.9)
Hence Proposition 3.1 follows inserting (3.8) and (3.9), via (3.7), into (3.6). ∎
## 4\. Convergence of moments
The aim of this section is to prove the following Proposition:
###### Proposition 4.1.
Let $R>1$, $\varepsilon>0$, $f$, $F_{x}$ be as in (1.5) and (1.9) respectively and $\mu$ be as in section 1.2. Then for every $p\geq 1$ there exists a density one
subsequence of $\lambda\in S$ such that
$\displaystyle\frac{1}{\operatorname{Vol}(B)}\int_{B}\mathcal{L}(F_{x})^{p}dx\rightarrow\mathbb{E}[\mathcal{L}(F_{\mu})^{p}]$
$\displaystyle\lambda\rightarrow\infty,$
uniformly for all flat $f$, satisfying $\mu_{f}\rightarrow\mu$, and all balls
$B\subset\mathbb{T}^{2}$ of radius $r>\lambda^{-1/2+\varepsilon}$.
To prove Proposition 4.1, we first prove convergence in distribution and then
conclude using Proposition 3.1.
### 4.1. Convergence in distribution
Here we prove the following Lemma:
###### Lemma 4.2.
Let $R>1$, $\varepsilon>0$, $f$, $F_{x}$ be as in (1.5) and (1.9) respectively, $\mu$ be as in section 1.2 and $F^{B}_{x}$ be as in section 2.1. Then there exists a
density one subsequence of $\lambda\in S$ such that
$\displaystyle\mathcal{L}(F^{B}_{x})\overset{d}{\rightarrow}\mathcal{L}(F_{\mu})$
as $\lambda\rightarrow\infty$,
where the convergence is in distribution, uniformly for all flat $f$
satisfying $\mu_{f}\rightarrow\mu$ and all balls $B\subset\mathbb{T}^{2}$ of
radius $r>\lambda^{-1/2+\varepsilon}$.
The proof of the Lemma rests upon the following two results. The first is a
consequence of the stability of the zero set under $C^{2}$-perturbations as in
[28], see also [33, Lemma 6.1]:
###### Lemma 4.3.
Let $B\subset\mathbb{R}^{2}$ be a ball and let $C^{2}_{*}(B):=\\{g\in
C^{2}(B):|g|+|\nabla g|>0\\}$. Then the nodal length $\mathcal{L}(g,B)$ of the
set $\\{x\in B:g(x)=0\\}$ is a continuous functional on $C^{2}_{*}(2B)$.
The other result is the following well-known lemma of Bulinskaya, see [28,
Lemma 6].
###### Lemma 4.4 (Bulinskaya’s lemma).
Let $F=F_{\mu}$, with $\mu$ an Hermitian measure supported on
$\mathbb{S}^{1}$. If $\mu$ is not supported on a line, that is $(F,\nabla F)$
is non-degenerate, then $F\in C^{2}_{*}(B(2))$ almost surely, where
$C^{2}_{*}(B(2))$ is as in Lemma 4.3.
We are finally ready to prove Lemma 4.2
###### Proof of Lemma 4.2.
Since the support of $\mu$ is not contained in a line, Lemma 4.4 implies that
$F_{\mu}\in C^{2}_{*}(B(2))$ almost surely. Moreover, up to changing the value
of $R$, Theorem 2.2 implies that
$d_{p}(F^{B}_{x},F_{\mu})\rightarrow 0,$
with respect to the $C^{2}(B(2))$ topology. Hence, Lemma 4.2 follows from the
Continuous Mapping Theorem [6, Theorem 2.7], together with Lemma 4.3. ∎
### 4.2. Proof of Proposition 4.1
Before proving Proposition 4.1, we need the following lemma, whose proof,
being similar to the proof of Proposition 3.1, is postponed to Appendix C.
###### Lemma 4.5.
Let $R>1$, $\varepsilon>0$, $F_{x}$ be as in (1.9) and $p\geq 1$ be a fixed
integer. Then there exists a density one subsequence of $\lambda\in S$ and
some constant $C=C(p,R)$ such that
$\frac{1}{\operatorname{Vol}(B)}\int_{B}\mathcal{L}(F_{x})^{p}dx\leq C,$
uniformly for all balls $B\subset\mathbb{T}^{2}$ of radius
$r>\lambda^{-1/2+\varepsilon}$.
###### Proof of Proposition 4.1.
Proposition 4.1 follows from Lemma 4.2 via Lemma 2.1 and Lemma 4.5. ∎
## 5\. Concluding the proofs of Theorems 1.1 and 1.3
Before concluding the proofs of Theorems 1.1 and 1.3, we need a few
preliminary results.
### 5.1. Nodal length of Gaussian random fields
The main results of this section are the following two Propositions the proofs
of which are postponed to Appendix A.
###### Proposition 5.1.
Let $\mu$ be a probability measure supported on $\mathbb{S}^{1}$, but not
supported on a line. Then there exists an explicit constant
$c_{1}=c_{1}(\operatorname{Re}(\widehat{\mu}(2)),\operatorname{Im}(\widehat{\mu}(2)))>0$
such that
$\displaystyle\mathbb{E}[\mathcal{L}(F_{\mu})]=c_{1}\cdot R.$
Moreover, if $\operatorname{Im}(\widehat{\mu}(2))=0$, then
$\displaystyle
c_{1}=\frac{1-\widehat{\mu}(2)^{2}}{2^{5/2}\pi}\int_{0}^{2\pi}\frac{1}{(1-\widehat{\mu}(2)\cos(2\theta))^{3/2}}d\theta.$
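As a quick numerical sanity check (our own illustration, not part of the proof), the constant $c_{1}$ above can be evaluated by quadrature when $\widehat{\mu}(2)$ is real; at $\widehat{\mu}(2)=0$ the $\theta$-integral equals $2\pi$, so $c_{1}=2\pi/(2^{5/2}\pi)=1/(2\sqrt{2})$:

```python
import math

def c1(alpha: float, n: int = 20000) -> float:
    """c1 from Proposition 5.1 with real mu_hat(2) = alpha, via Simpson's rule."""
    h = 2.0 * math.pi / n
    s = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)  # Simpson weights
        s += w * (1.0 - alpha * math.cos(2.0 * i * h)) ** (-1.5)
    integral = s * h / 3.0
    return (1.0 - alpha ** 2) / (2.0 ** 2.5 * math.pi) * integral

# At alpha = 0 the integral is 2*pi, so c1 = 1/(2*sqrt(2)).
print(c1(0.0), 1.0 / (2.0 * math.sqrt(2.0)))
print(c1(0.5))
```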
###### Proposition 5.2.
Let $R>1$, $\mu$ be a probability measure on $\mathbb{S}^{1}$, with no atoms.
Then there exists an explicit constant
$c_{2}=c_{2}(\operatorname{Re}(\widehat{\mu}(2)),\operatorname{Im}(\widehat{\mu}(2)))>0$
such that
$\displaystyle\operatorname{Var}(\mathcal{L}(F_{\mu}))=c_{2}R^{2}+o(R^{2}).$
Moreover, if $\widehat{\mu}(2)=0$, then $c_{2}=0$.
### 5.2. Locality of the nodal length
Here we prove the following lemma:
###### Lemma 5.3.
Let $B\subset\mathbb{T}^{2}$ be a ball, $R\geq 1$, $f$ and $F_{x}$ be as in
(1.5) and (1.9) respectively. Then, for a density one subsequence of
$\lambda\in S$, we have
$\mathcal{L}(f,B)=\frac{\lambda^{1/2}}{R}\int_{B}\mathcal{L}(F_{x})dx+O_{R}\left(\frac{r}{\lambda^{1/2}}\right)$
###### Proof.
Let us write $B=B(z,r)=B(r)$ for some $z\in\mathbb{T}^{2}$ and $r>0$, then we
observe that
$\displaystyle\frac{\lambda^{1/2}}{R}\int_{B(r-R/\lambda^{1/2})}\mathcal{L}(F_{x})dx\leq\mathcal{L}(f,B)\leq\frac{\lambda^{1/2}}{R}\int_{B(r+R/\lambda^{1/2})}\mathcal{L}(F_{x})dx.$
(5.1)
Indeed, writing $r^{\prime}=R/\lambda^{1/2}$, by definition of
$\mathcal{L}(\cdot)$ and Fubini, we have
$\displaystyle\int_{B(r)}{\mathcal{L}(f,B(x,r^{\prime}))}dx$
$\displaystyle=\int_{B(r)}\int\mathds{1}_{B(x,r^{\prime})}(y)\mathds{1}_{f^{-1}(0)}(y)d\mathcal{H}(y)dx$
$\displaystyle=\int\mathds{1}_{f^{-1}(0)}(y)\operatorname{Vol}\left(B(y,r^{\prime})\cap
B(r)\right)d\mathcal{H}(y),$
Thus (5.1) follows from rescaling
$\mathcal{L}(f,B(x,r^{\prime}))=R\lambda^{-1/2}\mathcal{L}(F_{x})$, upon
noticing
$\mathds{1}_{B(r-r^{\prime})}\leq\dfrac{\operatorname{Vol}\left(B(\cdot,r^{\prime})\cap
B(r)\right)}{\operatorname{Vol}{B(r^{\prime})}}\leq\mathds{1}_{B(r+r^{\prime})}.$
Finally, by Proposition 3.1 with $p=2$ and the Cauchy-Schwarz inequality, for
a density one subsequence of $\lambda\in S$, we have
$\displaystyle\left(\int_{B(r)}-\int_{B(r\pm
R/\lambda^{1/2})}\right)\mathcal{L}(F_{x})dx\lesssim_{R}\frac{r}{\lambda^{1/2}}.$
(5.2)
Hence the Lemma follows from (5.1) and (5.2), bearing in mind that $r\leq 1$.
∎
### 5.3. Proof of Theorem 1.1
We are finally ready to prove Theorem 1.1:
###### Proof of Theorem 1.1.
Let $B$ be given then by Lemma 5.3 and Proposition 4.1 with $p=1$, for a
density one subsequence of $\lambda\in S$, we have
$\mathcal{L}(f,B)=\frac{\lambda^{1/2}\operatorname{Vol}(B)}{R}\cdot\frac{1}{\operatorname{Vol}(B)}\int_{B}\mathcal{L}(F_{x})dx+O_{R}(\lambda^{\varepsilon})=\lambda^{1/2}\frac{\operatorname{Vol}(B)}{R}\mathbb{E}[\mathcal{L}(F_{\mu})](1+o_{\lambda\rightarrow\infty}(1)).$
Hence, Theorem 1.1 follows from Proposition 5.1.
∎
### 5.4. Proof of Theorem 1.3
We are finally ready to prove Theorem 1.3:
###### Proof of Theorem 1.3.
The proof is similar to that of Theorem 1.1. Proposition 4.1, for a density
one subsequence of $\lambda\in S$, gives
$\operatorname{Var}(\mathcal{L}(F_{x}))\rightarrow\operatorname{Var}[\mathcal{L}(F_{\mu})].$
Hence, the Theorem follows from Proposition 5.2. ∎
## Acknowledgements
The author would like to thank Peter Sarnak for asking the question about
finding the asymptotic nodal length which gave rise to this work. We also
thank Igor Wigman for the many discussions and Alon Nishry for pointing out
the work of Nazarov and reading the first draft of the article. We also thank
the anonymous referees for their comments that improved the readability of the
paper and for finding a gap in the proof of Proposition 4.1. This work was
supported by the Engineering and Physical Sciences Research Council
[EP/L015234/1]. The EPSRC Centre for Doctoral Training in Geometry and Number
Theory (The London School of Geometry and Number Theory), University College
London.
## Appendix A Kac-Rice
### A.1. Proof of Proposition 5.1
In this section we prove Proposition 5.1 as a standard application of the Kac-
Rice formula [1, Theorem 6.2].
###### Proof of Proposition 5.1.
We write $F=F_{\mu}$. Since $\mu$ is not supported on a line, $(F,\nabla F)$
is non-degenerate, thus we apply the Kac-Rice formula [1, Theorem 6.2] to see
that
$\displaystyle\mathbb{E}[\mathcal{L}(F_{\mu},B)]=\int_{B}\mathbb{E}\left[|\nabla
F(y)||F(y)=0\right]\phi_{F(y)}(0)dy,$ (A.1)
where $\phi_{F(y)}$ is the density of the random variable $F(y)$. Since $F(y)$
is Gaussian with mean zero and variance $1$, $\phi_{F(y)}(0)=1/\sqrt{2\pi}$;
since $F$ and $\nabla F$ are independent (this can be seen directly
differentiating $\mathbb{E}[F(x)^{2}]=1$), $\mathbb{E}\left[|\nabla
F(y)||F(y)=0\right]=\mathbb{E}[|\nabla F(y)|]$; since $F$ is stationary, we
also have $\mathbb{E}[|\nabla F(y)|]=\mathbb{E}[|\nabla F(0)|]$. Thus, (A.1)
simplifies to
$\displaystyle\mathbb{E}[\mathcal{L}(F_{\mu})]=\frac{1}{\sqrt{2\pi}}\cdot\mathbb{E}[|\nabla
F(0)|]=\frac{R}{\sqrt{2\pi}}\mathbb{E}[R^{-1}|\nabla F(0)|].$ (A.2)
Now, we compute the covariance function of $\nabla F$. Writing
$\widehat{\mu}(2)=\alpha+i\beta=\int\cos(2\theta)d\mu(e(\theta))+i\int\sin(2\theta)d\mu(e(\theta))$
and using the relations
$\cos(2\theta)=2\cos^{2}(\theta)-1=1-2\sin^{2}(\theta)$ and
$\sin(2\theta)=2\sin(\theta)\cos(\theta)$, we have
$\displaystyle
R^{-2}\mathbb{E}[\partial_{x_{1}}F(x)\partial_{y_{1}}F(y)]\arrowvert_{x=y}=\int_{\mathbb{R}^{2}}\lambda_{1}^{2}d\mu(\lambda)=\int_{0}^{2\pi}\cos^{2}(\theta)d\mu(e(\theta))=\frac{1}{2}+\frac{\alpha}{2}$
$\displaystyle
R^{-2}\mathbb{E}[\partial_{x_{2}}F(x)\partial_{y_{2}}F(y)]\arrowvert_{x=y}=\int_{\mathbb{R}^{2}}\lambda_{2}^{2}d\mu(\lambda)=\int_{0}^{2\pi}\sin^{2}(\theta)d\mu(e(\theta))=\frac{1}{2}-\frac{\alpha}{2}$
$\displaystyle
R^{-2}\mathbb{E}[\partial_{x_{1}}F(x)\partial_{y_{2}}F(y)]\arrowvert_{x=y}=\int_{\mathbb{R}^{2}}\lambda_{1}\lambda_{2}d\mu(\lambda)=\int_{0}^{2\pi}\cos(\theta)\sin(\theta)d\mu(e(\theta))=\frac{\beta}{2}.$
(A.3)
Therefore, the covariance matrix of $R^{-1}\nabla F$ is
$\displaystyle
L=\begin{bmatrix}\frac{1}{2}+\frac{\alpha}{2}&\frac{\beta}{2}\\\
\frac{\beta}{2}&\frac{1}{2}-\frac{\alpha}{2}\end{bmatrix}$
$\displaystyle\det(L)=\frac{1}{4}\left(1-\alpha^{2}-\beta^{2}\right).$ (A.4)
Since $R^{-1}\nabla F(0)$ is a bi-variate Gaussian with mean $0$ and
covariance $L$, given in (A.4),
$\displaystyle\mathbb{E}[|R^{-1}\nabla
F(0)|]=\frac{1}{\pi{(1-\alpha^{2}-\beta^{2})^{1/2}}}\int_{\mathbb{R}^{2}}\sqrt{x^{2}+y^{2}}\exp\left(-\frac{x^{2}(1-\alpha)+y^{2}(1+\alpha)-2\beta
xy}{(1-\alpha^{2}-\beta^{2})}\right)dxdy.$
Letting $C_{1}:=(2\pi)^{-1/2}\mathbb{E}[|R^{-1}\nabla F|]$ proves the first
part of the proposition. Finally, assuming that $\beta=0$, we may simplify
the integral by passing to polar coordinates:
$\displaystyle\mathbb{E}[|R^{-1}\nabla
F|]=\frac{1}{\pi(1-\alpha^{2})^{1/2}}\int_{0}^{2\pi}d\theta\int_{0}^{\infty}r^{2}\exp\left(-\frac{r^{2}}{(1-\alpha^{2})}\left(1-\alpha\cos(2\theta)\right)\right)dr.$
Substituting $r=(\eta y)^{1/2}$, where
$\eta=(1-\alpha\cos(2\theta))^{-1}(1-\alpha^{2})$, we have
$\displaystyle\mathbb{E}[|R^{-1}\nabla F|]$
$\displaystyle=\frac{1}{2\pi(1-\alpha^{2})^{1/2}}\int_{0}^{2\pi}\eta^{3/2}d\theta\int_{0}^{\infty}y^{1-1/2}e^{-y}dy$
$\displaystyle=\frac{1}{2\pi}\Gamma(1+1/2)(1-\alpha^{2})\int_{0}^{2\pi}\frac{1}{(1-\alpha\cos(2\theta))^{3/2}}d\theta.$
(A.5)
Since $\Gamma(3/2)=\sqrt{\pi}/2$, the second part of the Proposition follows from
(A.2) and (A.5).
∎
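The closed form (A.5) can be cross-checked against a direct Monte Carlo simulation of $R^{-1}\nabla F(0)\sim N(0,L)$, with $L$ as in (A.4) and $\beta=0$. The sketch below (function names are ours; not part of the proof) uses only the Python standard library:

```python
import math
import random

def closed_form(alpha: float, n: int = 20000) -> float:
    """Right-hand side of (A.5) with beta = 0:
    (1/(2*pi)) * Gamma(3/2) * (1 - alpha^2) * int_0^{2pi} (1 - alpha*cos(2t))^{-3/2} dt."""
    h = 2.0 * math.pi / n
    s = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)  # Simpson weights
        s += w * (1.0 - alpha * math.cos(2.0 * i * h)) ** (-1.5)
    return (s * h / 3.0) * math.gamma(1.5) * (1.0 - alpha ** 2) / (2.0 * math.pi)

def monte_carlo(alpha: float, samples: int = 400000, seed: int = 1) -> float:
    """E[|Z|] for Z ~ N(0, L), L = diag((1+alpha)/2, (1-alpha)/2), i.e. (A.4) with beta = 0."""
    rng = random.Random(seed)
    sx, sy = math.sqrt((1 + alpha) / 2), math.sqrt((1 - alpha) / 2)
    acc = 0.0
    for _ in range(samples):
        acc += math.hypot(rng.gauss(0.0, sx), rng.gauss(0.0, sy))
    return acc / samples

print(closed_form(0.6), monte_carlo(0.6))
```

At $\alpha=0$ the field is isotropic, $|Z|$ is Rayleigh, and both sides reduce to $\sqrt{\pi}/2$.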
### A.2. Proof of Proposition 5.2
In this section we prove Proposition 5.2, essentially following [19]. To ease
the exposition, we divide the proof into a series of steps and write $F=F_{\mu}$
and $\mathcal{L}(F)=\mathcal{L}(F,B(1))$.
#### Step 1. The Kac-Rice formula for the second moment
To apply the Kac-Rice formula for the second moment [1, Theorem 6.3], we need
to check that the distributions $(F(y),\nabla F(y))$ and $(F(x),F(y))$ are
non-degenerate for all $x,y\in[-1/2,1/2]^{2}$. Since $\mu$ does not have atoms
it is not supported on a line, thus $(F(y),\nabla F(y))$ is non-degenerate.
Since $F$ is stationary, the covariance of $(F(x),F(y))$ is equal to the
covariance of $(F(0),F(w))$ which is
$\displaystyle A(w)=\begin{bmatrix}1&r(w)\\\ r(w)&1\end{bmatrix}$
$\displaystyle\det A(w)=1-r(w)^{2}.$ (A.6)
Therefore, $(F(0),F(w))$ is degenerate if and only if $\mu$ is supported on a
line, which is not the case by assumption on $\mu$.
Applying [1, Theorem 6.3] and bearing in mind that $F$ is stationary, we have
$\displaystyle\mathbb{E}[\mathcal{L}(F)^{2}]=\int_{[-1/2,1/2]^{2}\times[-1/2,1/2]^{2}}\phi_{(F(x),F(y))}(0,0)\mathbb{E}[|\nabla
F(x)||\nabla F(y)||F(x)=0,F(y)=0]dxdy$
$\displaystyle=\int_{[-1/2,1/2]^{2}}\phi_{(F(0),F(w))}(0,0)\mathbb{E}[|\nabla
F(0)||\nabla F(w)||F(0)=0,F(w)=0]dw$ (A.7)
#### Step 1.1. Preliminary simplifications
The covariance of the $6$ dimensional vector $(F(0),F(w),\allowbreak\nabla
F(0),\nabla F(w))$ is
$\Sigma(w):=\begin{bmatrix}A(w)&B(w)\\\ B(w)^{T}&C(w)\end{bmatrix}$
where $A(w)$ is given by (A.6) and
$\displaystyle B(w):=\begin{bmatrix}\mathbb{E}[F(0)\nabla
F(0)]&\mathbb{E}[F(0)\nabla F(w)]\\\ \mathbb{E}[F(w)\nabla
F(0)]&\mathbb{E}[F(w)\nabla F(w)]\end{bmatrix}$ $\displaystyle
C(w):=\begin{bmatrix}\mathbb{E}[\nabla F(0)\cdot\nabla F(0)]&\mathbb{E}[\nabla
F(0)\cdot\nabla F(w)]\\\ \mathbb{E}[\nabla F(w)\cdot\nabla
F(0)]&\mathbb{E}[\nabla F(w)\cdot\nabla F(w)]\end{bmatrix}.$
From (A.6) it also follows that
$\displaystyle\phi_{(F(0),F(w))}(0,0)=\frac{1}{2\pi\sqrt{1-r(w)^{2}}}.$ (A.8)
Bearing in mind that $F(\cdot)$ and $\nabla F(\cdot)$ are independent and
using (A.3), we may simplify $B$ and $C$ as
$\displaystyle B(w)=\begin{bmatrix}0&\nabla r(w)\\\ -\nabla
r(w)&0\end{bmatrix}$ $\displaystyle C(w)=\begin{bmatrix}R^{2}\cdot L&-H(w)\\\
-H(w)&R^{2}\cdot L\end{bmatrix}$
where $H(w)$ is the Hessian of $r(w)$ and $L$ is as in (A.4). The covariance
of the Gaussian vector $(\nabla F(0),\nabla F(w))$ conditioned on $F(0)=0$ and
$F(w)=0$ is then
$\displaystyle\Upsilon(w)=C-B^{T}A^{-1}B=\begin{bmatrix}R^{2}\cdot L&0\\\
0&R^{2}\cdot L\end{bmatrix}-\frac{1}{1-r^{2}}\begin{bmatrix}\nabla
r\cdot\nabla r^{T}&r\nabla r^{T}\cdot\nabla r\\\ r\nabla r\cdot\nabla
r^{T}&\nabla r^{T}\cdot\nabla r\end{bmatrix}.$ (A.9)
To simplify notation, we define
$\displaystyle X=-\frac{1}{1-r^{2}}\nabla r\cdot\nabla r^{T}$ $\displaystyle
Y=-H-\frac{1}{1-r^{2}}r\nabla r^{T}\cdot\nabla r.$ (A.10)
###### Remark A.1.
Observe that $X$ and $Y$ are uniformly bounded. Indeed, the off-diagonal
entries are dominated by the diagonal ones by the Cauchy-Schwarz inequality,
and the latter are bounded since $X$ has negative diagonal entries while
$\Upsilon$ has positive diagonal entries.
To compute (A.7) we will study $\phi_{(F(0),F(w))}(0,0)$ and
$\mathbb{E}[|\nabla F(0)|\linebreak|\nabla F(w)||F(0)=0,F(w)=0]$ separately.
We begin with $\phi_{(F(0),F(w))}(0,0)$.
#### Step 2. The singular set
We say that a point $x\in\mathbb{T}^{2}$ is singular if there exists a subset
$\Lambda_{x}\subset\mathbb{S}^{1}$ of measure $\mu(\Lambda_{x})>7/8$ for which
$\cos(\langle R\lambda,x\rangle)>3/4$ or $\cos(\langle
R\lambda,x\rangle)<-3/4$ for all $\lambda\in\Lambda_{x}$. From now on, we only
consider the case $\cos(\langle R\lambda,x\rangle)>3/4$ as the other case can
be treated identically. We divide $[-1/2,1/2]^{2}$ into $O(R)$ squares of side
$O(1/\sqrt{R})$, and we call a square singular if it contains a singular point;
we let $S$ be the union of all singular squares. Observe that, if $y$ is a
point in a singular square $I_{x}$, then $|\cos(\langle
R\lambda,x\rangle)-\cos(\langle R\lambda,y\rangle)|\leq 1/4$ so $\cos(\langle
R\lambda,y\rangle)\geq 1/2$. Thus,
$\displaystyle r(y)=\int_{\Lambda_{x}}\cos(\langle
R\lambda,y\rangle)d\mu(\lambda)+\int_{\mathbb{S}^{1}\backslash\Lambda_{x}}\cos(\langle
R\lambda,y\rangle)d\mu(\lambda)\geq\frac{1}{2}\cdot\frac{7}{8}-\frac{1}{8}\geq\frac{5}{16},$
and so
$\displaystyle\int_{[-1/2,1/2]^{2}}r(w)^{4}dw\geq\left(\frac{5}{16}\right)^{4}\operatorname{Vol}(S).$
(A.11)
Moreover, we can bound the contribution of a singular square, see [32, section
6.3], as
$\displaystyle\int_{I_{x}}\frac{1}{\sqrt{1-r(w)^{2}}}dw\lesssim R^{-1}.$
(A.12)
Since there are at most $O(R)$ squares, using (A.11) and (A.12), we obtain
$\displaystyle\int_{S}\frac{1}{\sqrt{1-r(w)^{2}}}dw\lesssim
R\operatorname{Vol}(S)\cdot R^{-1}\lesssim\int_{[-1/2,1/2]^{2}}r(w)^{4}dw.$
(A.13)
#### Step 3. Asymptotic behaviour of $\mathbb{E}[|\nabla F(0)||\nabla
F(w)||F(0)=0,F(w)=0]$
Following Berry [5], we have
$\displaystyle\sqrt{\alpha}=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}(1-e^{-\alpha
t/2})\frac{dt}{t^{3/2}}.$
So we can write
$\displaystyle\mathbb{E}[|\nabla F(0)||\nabla F(w)||F(0)=0,F(w)=0]$
$\displaystyle=\frac{1}{2\pi}\int_{0}^{\infty}\int_{0}^{\infty}[v(0,0)-v(t,0)-v(0,s)+v(t,s)]\frac{dtds}{(ts)^{3/2}}$
(A.14)
where, letting $W_{1}=\nabla F(0)$ and $W_{2}=\nabla F(w)$,
$\displaystyle v(t,s)$
$\displaystyle=\mathbb{E}\left[\exp\left(-\frac{1}{2}||W_{1}||^{2}t-\frac{1}{2}||W_{2}||^{2}s\right)\right]=\frac{1}{\sqrt{(2\pi)^{4}\det\Upsilon}}\int\int$
$\displaystyle\exp\left(-\frac{1}{2}||w_{1}||^{2}t-\frac{1}{2}||w_{2}||^{2}s\right)\exp\left(-\frac{1}{2}(w_{1},w_{2})\Upsilon^{-1}(w_{1},w_{2})^{T}\right)dw_{1}dw_{2}$
$\displaystyle=\left(\frac{\det\left(\begin{bmatrix}tI&0\\\
0&sI\end{bmatrix}+\Upsilon^{-1}\right)^{-1}}{\det\Upsilon}\right)^{1/2}=\det(I+M)^{-1/2}$
where $\Upsilon$ is as in (A.9) and
$\displaystyle M=\begin{bmatrix}\sqrt{t}I&0\\\
0&\sqrt{s}I\end{bmatrix}\Upsilon\begin{bmatrix}\sqrt{t}I&0\\\
0&\sqrt{s}I\end{bmatrix}.$ (A.15)
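Berry's integral identity for the square root, used to set up (A.14), is easy to verify numerically; after the substitution $t=u^{2}$ the integrand becomes $2(1-e^{-\alpha u^{2}/2})/u^{2}$. A stand-alone check (our illustration, with a trapezoid rule and an explicit tail correction):

```python
import math

def berry_sqrt(alpha: float, u_max: float = 200.0, n: int = 200000) -> float:
    """(1/sqrt(2*pi)) * int_0^inf (1 - exp(-alpha*t/2)) * t^(-3/2) dt,
    computed after the substitution t = u^2."""
    h = u_max / n

    def f(u: float) -> float:
        if u == 0.0:
            return alpha  # limit of 2*(1 - exp(-alpha*u^2/2))/u^2 as u -> 0
        return 2.0 * (1.0 - math.exp(-alpha * u * u / 2.0)) / (u * u)

    total = 0.5 * (f(0.0) + f(u_max)) + sum(f(i * h) for i in range(1, n))
    total = total * h + 2.0 / u_max  # tail int_{u_max}^inf 2/u^2 du
    return total / math.sqrt(2.0 * math.pi)

for a in (0.25, 1.0, 4.0):
    print(a, berry_sqrt(a), math.sqrt(a))
```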
#### Step 3.1. Computing $v(t,s)$
Expanding $M$ in (A.15), we write
$\displaystyle I+M=\begin{bmatrix}I+tL_{R}&0\\\
0&I+sL_{R}\end{bmatrix}+\begin{bmatrix}tX&\sqrt{ts}Y\\\
\sqrt{ts}Y&sX\end{bmatrix}$
where $X$, $Y$ are as in (A.10) and $L_{R}=R^{2}L$, moreover
$\displaystyle\det(I+M)^{-1/2}=\det\left(\begin{bmatrix}I+tL_{R}&0\\\
0&I+sL_{R}\end{bmatrix}\right)^{-1/2}\det\left(I+\tilde{M}\right)^{-1/2}$
where
$\displaystyle\tilde{M}:=\left(\begin{bmatrix}I+tL_{R}&0\\\
0&I+sL_{R}\end{bmatrix}\right)^{-1}\begin{bmatrix}tX&\sqrt{ts}Y\\\
\sqrt{ts}Y&sX\end{bmatrix}.$
Since $\det(I+tL_{R})=1+R^{2}t+(R^{2}t)^{2}(1-\alpha^{2}-\beta^{2})/4$, we
have
$\displaystyle\det\left(\begin{bmatrix}I+tL_{R}&0\\\
0&I+sL_{R}\end{bmatrix}\right)=[(1+R^{2}t+(R^{2}t)^{2}(1-\alpha^{2}-\beta^{2})/4)\times$
$\displaystyle\times(1+sR^{2}+(R^{2}s)^{2}(1-\alpha^{2}-\beta^{2})/4)]=:h(R^{2}t,R^{2}s)^{2}$
and
$\displaystyle(I+tL_{R})^{-1}=\frac{1}{h(R^{2}t,0)^{2}}\begin{bmatrix}1+\frac{R^{2}t}{2}(1-\alpha)&-\frac{R^{2}t}{2}\beta\\\
-\frac{R^{2}t}{2}\beta&1+\frac{R^{2}t}{2}(1+\alpha)\end{bmatrix}.$ (A.16)
Letting $\tilde{X}=X(I+tL_{R})^{-1}$ and $\tilde{Y}=(I+tL_{R})^{-1}Y$, and
using the fact that, for a block matrix with $A$ invertible, $\det\begin{bmatrix}A&B\\\
C&D\end{bmatrix}=\det(A)\det(D-CA^{-1}B)$, we obtain
$\displaystyle\det\left(I+M\right)^{-1/2}=\frac{1}{h(R^{2}t,R^{2}s)}\det\left(I+\tilde{M}\right)^{-1/2}$
$\displaystyle=\frac{1}{h(R^{2}t,R^{2}s)}\det(I+t\tilde{X})^{-1/2}\det(I+s\tilde{X}-\sqrt{st}\tilde{Y}(I+t\tilde{X})^{-1}\sqrt{ts}\tilde{Y}).$
(A.17)
Moreover,
$\displaystyle\det(I+t\tilde{X})^{-1/2}=1+O(t\tilde{X})$ (A.18)
and, using $(I+t\tilde{X})^{-1}=I-t\tilde{X}+O(\tilde{X}^{2})$, we also have
$\displaystyle\det(I+s\tilde{X}-\sqrt{st}\tilde{Y}(I+t\tilde{X})^{-1}\sqrt{ts}\tilde{Y})$
$\displaystyle=\det(I+s\tilde{X}-st\tilde{Y}^{2}-\sqrt{ts}\tilde{Y}t\tilde{X}\sqrt{ts}\tilde{Y}+O(\tilde{X}^{2}\tilde{Y}^{2}))$
$\displaystyle=1+O(st\tilde{X},st\tilde{Y}^{2}).$ (A.19)
Thus, putting (A.17), (A.18) and (A.19) together, we obtain
$\displaystyle v(t,s)=\frac{1}{h(R^{2}t,R^{2}s)}+O(st\tilde{X},st\tilde{Y}).$
(A.20)
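The determinant identity $\det(I+tL_{R})=1+R^{2}t+(R^{2}t)^{2}(1-\alpha^{2}-\beta^{2})/4$ entering the definition of $h$ can be checked directly; the following sketch (our illustration, with $L$ taken from (A.4)) compares the $2\times 2$ determinant with the closed form:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def dets(alpha: float, beta: float, t: float, R: float):
    """det(I + t*L_R) versus 1 + R^2 t + (R^2 t)^2 (1 - alpha^2 - beta^2)/4."""
    c = t * R * R  # c = t * R^2, since L_R = R^2 * L
    L = [[(1 + alpha) / 2, beta / 2], [beta / 2, (1 - alpha) / 2]]  # (A.4)
    M = [[1 + c * L[0][0], c * L[0][1]], [c * L[1][0], 1 + c * L[1][1]]]
    closed = 1 + c + c * c * (1 - alpha ** 2 - beta ** 2) / 4
    return det2(M), closed

print(dets(0.3, 0.2, 0.7, 2.0))
```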
#### Step 3.2. The integrals over $s$ and $t$
By (A.16), for $t$ small enough, $t\tilde{X}=tX+O(t^{2}X)$ and
$t\tilde{Y}=tY+O(t^{2}Y)$ so
$v(t,s)-v(t,0)-v(0,s)+v(0,0)=tsX+tsY+O(t^{2}sX,st^{2}Y)$ for $s,t$ small
enough; for $t$ large enough, $t\tilde{X}=O(X)$, $t\tilde{Y}=O(Y)$. Thus,
inserting (A.20) into (A.14), and transforming the variables, we have
$\displaystyle\mathbb{E}[|\nabla F(0)||\nabla
F(w)||F(0)=0,F(w)=0]=\frac{R^{2}}{2\pi}\left(\int_{0}^{\infty}\left(1-\frac{1}{h(t,0)}\right)\frac{dt}{t^{3/2}}\right)^{2}+O(X,Y).$
(A.21)
#### Step 4. Asymptotic behaviour of the Variance
Recalling the notation in Step 2 for the singular set $S$ and (A.7), we write
$\displaystyle\mathbb{E}[\mathcal{L}(F)^{2}]=\left(\int_{[-1/2,1/2]^{2}\backslash
S}+\int_{S}\right)\phi_{(F(0),F(w))}(0,0)\mathbb{E}[|\nabla F(0)||\nabla
F(w)||F(0)=0,F(w)=0]dw.$
Thanks to Remark A.1, we have $\mathbb{E}[|\nabla F(0)||\nabla
F(w)||F(0)=0,F(w)=0]=O(1)$; thus, using the expansion
$\phi_{(F(0),F(w))}(0,0)=\frac{1}{2\pi}(1+O(r(w)^{2}))$ on
$[-1/2,1/2]^{2}\backslash S$, we get
$\displaystyle\mathbb{E}[\mathcal{L}(F)^{2}]=\frac{1}{2\pi}\int_{[-1/2,1/2]^{2}\backslash
S}\mathbb{E}[|\nabla F(0)||\nabla F(w)||F(0)=0,F(w)=0]dw$
$\displaystyle+O\left(\int_{S}\frac{1}{\sqrt{1-r(w)^{2}}}dw\right)+O\left(\int_{[-1/2,1/2]^{2}\backslash
S}r(w)^{2}dw\right).$ (A.22)
Using again Remark A.1 and (A.11) to extend the first integral in (A.22) to
$[-1/2,1/2]^{2}$, using (A.13) to bound the second integral and the
Cauchy-Schwarz inequality to bound the third one, we obtain
$\displaystyle\mathbb{E}[\mathcal{L}(F)^{2}]$
$\displaystyle=\frac{1}{\pi}\int_{[-1/2,1/2]^{2}}\mathbb{E}[|\nabla
F(0)||\nabla F(w)||F(0)=0,F(w)=0]dw$
$\displaystyle+O\left(\int_{[-1/2,1/2]^{2}}r(w)^{4}dw\right).$ (A.23)
Thus, inserting (A.21) into (A.23), transforming the variables, and taking
$C_{1}$ as in Proposition 5.1, we have
$\displaystyle\operatorname{Var}[\mathcal{L}(F)]=\frac{R^{2}}{4\pi^{2}}\left(\int_{0}^{\infty}\left(1-\frac{1}{h(t,0)}\right)\frac{dt}{t^{3/2}}\right)^{2}-C^{2}_{1}R^{2}$
$\displaystyle+O\left(\int_{[-1/2,1/2]^{2}}X(w)dw\right)+O\left(\int_{[-1/2,1/2]^{2}}Y(w)dw\right)+O\left(\int_{[-1/2,1/2]^{2}}r(w)^{4}dw\right).$
(A.24)
#### Step 5. Bounding moments and derivatives of the covariance
For $k\geq 2$ we claim that
$\displaystyle\int_{[-1/2,1/2]^{2}}r(w)^{k}dw=o_{R\rightarrow\infty}(1)$
(A.25)
Indeed, we have
$\displaystyle\int_{[-1/2,1/2]^{2}}r(w)^{k}dw=\int_{\mathbb{S}^{1}}...\int_{\mathbb{S}^{1}}\int_{[-1/2,1/2]^{2}}e(wR\left(\lambda_{1}+...+\lambda_{k}\right))d\mu(\lambda_{1})...d\mu(\lambda_{k})dw,$
(A.26)
and the inner integral in (A.26) is
$\int_{[-1/2,1/2]^{2}}e(wR\left(\lambda_{1}+...+\lambda_{k}\right))dw\lesssim\frac{J_{1}(||R(\lambda_{1}+...+\lambda_{k})||)}{||R(\lambda_{1}+...+\lambda_{k})||}$
where $J_{1}(\cdot)$ is the Bessel function of the first kind. Since
$J_{1}(T)\lesssim T^{-1/2}$ for $T$ large enough and $J_{1}(T)/T=O(1)$ for $T$
small, the integral in (A.26) can be bounded as
$\displaystyle\int_{[-1/2,1/2]^{2}}r(w)^{k}dw$
$\displaystyle\lesssim\int_{||\lambda_{1}+...+\lambda_{k}||\geq
1/\sqrt{R}}\frac{J_{1}(||R(\lambda_{1}+...+\lambda_{k})||)}{||R(\lambda_{1}+...+\lambda_{k})||}d\mu(\lambda_{1})...d\mu(\lambda_{k})$
$\displaystyle+\int_{||\lambda_{1}+...+\lambda_{k}||\leq
1/\sqrt{R}}\frac{J_{1}(||R(\lambda_{1}+...+\lambda_{k})||)}{||R(\lambda_{1}+...+\lambda_{k})||}d\mu(\lambda_{1})...d\mu(\lambda_{k})$
$\displaystyle\lesssim
R^{-3/4}+O\left(\int_{||\lambda_{1}+...+\lambda_{k}||\leq
1/\sqrt{R}}d\mu(\lambda_{1})...d\mu(\lambda_{k})\right).$ (A.27)
Finally, since $\mu$ has no atoms, fixing $\lambda_{1},...,\lambda_{k-1}$, we
have
$\displaystyle\underset{||\lambda_{1}+...+\lambda_{k}||\leq
1/\sqrt{R}}{\int...\int}d\mu(\lambda_{1})...d\mu(\lambda_{k})=o_{R\rightarrow\infty}(1).$
(A.28)
Thus, (A.25) follows from (A.27) and (A.28).
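The Bessel-function facts used here ($|J_{1}(T)|\lesssim T^{-1/2}$ for large $T$, with envelope $\sqrt{2/(\pi T)}$, and $J_{1}(T)/T\rightarrow 1/2$ as $T\rightarrow 0$) can be checked from the integral representation $J_{1}(T)=\frac{1}{\pi}\int_{0}^{\pi}\cos(\theta-T\sin\theta)d\theta$; a stand-alone sketch (ours, purely illustrative):

```python
import math

def J1(T: float, n: int = 20000) -> float:
    """J_1(T) = (1/pi) * int_0^pi cos(theta - T*sin(theta)) d(theta), trapezoid rule."""
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(math.pi))  # endpoint terms (sin vanishes there)
    for i in range(1, n):
        th = i * h
        s += math.cos(th - T * math.sin(th))
    return s * h / math.pi

for T in (10.0, 100.0, 1000.0):
    print(T, abs(J1(T)) * math.sqrt(T))  # stays bounded by sqrt(2/pi) < 1
print(J1(1e-4) / 1e-4)                   # close to 1/2
```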
Taking $k=4$ and $k=5$ in (A.25), we have
$\displaystyle\int_{[-1/2,1/2]^{2}}r(w)^{4}dw=o(1)$
$\displaystyle\int_{[-1/2,1/2]^{2}}r(w)^{3}(\partial_{w_{i}}r(w))^{2}dw=o(R^{2})$
(A.29)
for $i=1,2$. Taking $k=2$ we have
$\displaystyle\int_{[-1/2,1/2]^{2}}(\partial_{w_{i}}r(w))^{2}dw\lesssim
R^{2}\int_{[-1/2,1/2]^{2}}r(w)^{2}dw=o(R^{2})$ (A.30)
and for $i,j=1,2$ we also have
$\displaystyle\int_{[-1/2,1/2]^{2}}(\partial_{w_{i}}\partial_{w_{j}}r(w))^{2}dw\lesssim
R^{2}\int_{[-1/2,1/2]^{2}}r(w)^{2}dw=o(R^{2})$ (A.31)
#### Step 6. Concluding the proof
Using the same argument as in Step 4 to control the singular squares, and
writing $(1-r^{2})^{-1}=1+O(r^{2})$, we have
$\displaystyle\int_{[-1/2,1/2]^{2}}X(w)dw\lesssim\max_{i}\int_{[-1/2,1/2]^{2}}(\partial_{w_{i}}r(w))^{2}dw+$
$\displaystyle+O\left(\max_{i}\int_{[-1/2,1/2]^{2}}r(w)^{2}(\partial_{w_{i}}r(w))^{2}dw\right)+O\left(\int_{[-1/2,1/2]^{2}}r(w)^{4}dw\right)$
(A.32)
and
$\displaystyle\int_{[-1/2,1/2]^{2}}Y(w)dw\lesssim\max_{i,j}\int_{[-1/2,1/2]^{2}}(\partial_{w_{i}}\partial_{w_{j}}r(w))^{2}dw$
$\displaystyle+O\left(\max_{i,j}\int_{[-1/2,1/2]^{2}}r(w)^{3}(\partial_{w_{i}}r(w))^{2}dw\right)+O\left(\int_{[-1/2,1/2]^{2}}r(w)^{4}dw\right).$
(A.33)
where $i,j=1,2$. Inserting (A.29), (A.30) and (A.31) in (A.32) and (A.33), we
have
$\displaystyle\int_{[-1/2,1/2]^{2}}X(w)dw=o(1)$
$\displaystyle\int_{[-1/2,1/2]^{2}}Y(w)dw=o(1).$ (A.34)
Finally, inserting (A.34) into (A.24), we obtain
$\displaystyle\operatorname{Var}[\mathcal{L}(F)]=\frac{R^{2}}{4\pi^{2}}\left(\int_{0}^{\infty}\left(1-\frac{1}{h(t,0)}\right)\frac{dt}{t^{3/2}}\right)^{2}-C^{2}_{1}R^{2}+o(R^{2}).$
(A.35)
This proves the first part of the Proposition with
$C_{2}=(1/4\pi^{2})\left(\int_{0}^{\infty}(1-1/h(t,0))dt/t^{3/2}\right)^{2}-C_{1}^{2}$. If
$\widehat{\mu}(2)=0$, then $h(t,0)=1+t/2$ and we can use the formulas
$\displaystyle\frac{1}{2\pi}\int_{0}^{\infty}\left(1-\frac{1}{h(t,0)}\right)\frac{dt}{t^{3/2}}=\frac{1}{2\pi}\int_{0}^{\infty}\frac{dt}{t^{1/2}(2+t)}=\frac{1}{2\sqrt{2}}$
$\displaystyle C_{1}=\frac{1}{2\sqrt{2}}$
in (A.35) to conclude the proof of the Proposition.
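When $\alpha=\beta=0$ one has $h(t,0)^{2}=1+t+t^{2}/4=(1+t/2)^{2}$, so $h(t,0)=1+t/2$, and the constant $C_{1}=(2\pi)^{-1/2}\mathbb{E}[|R^{-1}\nabla F|]$ reduces to $\frac{1}{2\pi}\int_{0}^{\infty}(1-1/h(t,0))t^{-3/2}dt=1/(2\sqrt{2})$. A quick numerical confirmation (our own sketch; the substitution $t=u^{2}$ turns the integrand into $2/(2+u^{2})$):

```python
import math

def c1_zero_case(u_max: float = 1000.0, n: int = 1000000) -> float:
    """(1/(2*pi)) * int_0^inf (1 - 1/h(t,0)) * t^(-3/2) dt with h(t,0) = 1 + t/2;
    after t = u^2 the integrand is 2/(2 + u^2)."""
    h = u_max / n
    total = 0.5 * (2.0 / 2.0 + 2.0 / (2.0 + u_max * u_max))
    for i in range(1, n):
        u = i * h
        total += 2.0 / (2.0 + u * u)
    total = total * h + 2.0 / u_max  # tail int_{u_max}^inf 2/u^2 du
    return total / (2.0 * math.pi)

print(c1_zero_case(), 1.0 / (2.0 * math.sqrt(2.0)))
```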
## Appendix B Log-integrability
The content of this section follows [29, Chapter 3] and [30]; we claim no
originality and encourage the reader to consult [29, 30] directly.
### B.1. Small values of Fourier series and $\Lambda(p)$-systems
First, with the notation of section 2.3, we observe that Theorem 1.7 follows
directly from the following result:
###### Theorem B.1 (Nazarov).
Let $V\subset\mathbb{Z}$ be such that $R(V)<\infty$ and $D(V)$ is a
$\Lambda(p^{\prime})$-system for some $p^{\prime}>2$; moreover, let
$\tilde{C}=\tilde{C}(p^{\prime})$ be as in (2.2). Then there exists some
constant $C=C(\tilde{C},\varepsilon,R(V))>0$ such that uniformly for all $g\in
L^{2}(\mathbb{T})$ with spectrum contained in $V$, we have
$||g||^{2}_{L^{2}}\leq\exp\left(\frac{C}{\rho(U)^{\frac{4}{p^{\prime}}+\varepsilon}}\right)\int_{U}|g(x)|^{2}dx,$
for any positive measure set $U\subset\mathbb{T}$, where $\rho(\cdot)$ is the
Lebesgue measure on $\mathbb{T}$.
We are now going to briefly show how Theorem B.1 implies Theorem 2.6:
###### Proof.
Let $\varepsilon>0$ be given and observe that
$\displaystyle\int_{\mathbb{T}}|\log|g(x)||^{\frac{p^{\prime}}{4}-\varepsilon}dx=\int_{0}^{\infty}\rho\left(|\log|g(x)||^{\frac{p^{\prime}}{4}-\varepsilon}>t\right)dt$
$\displaystyle\leq\int_{0}^{\infty}\rho\left(\log|g(x)|>t^{\frac{4}{p^{\prime}}+\frac{\varepsilon}{2}}\right)dt+\int_{1}^{\infty}\rho\left(\log|g(x)|\leq-t^{\frac{4}{p^{\prime}}+\frac{\varepsilon}{2}}\right)dt.$
(B.1)
Since $\int|g|^{2}dx=1$, Chebyshev’s inequality gives that the first term on
the right hand side of (B.1) is bounded by some constant
$C(p^{\prime},\varepsilon)>0$. Thus, it is enough to show that
$\int_{1}^{\infty}\rho\left(\log|g(x)|\leq-t^{\frac{4}{p^{\prime}}+\frac{\varepsilon}{2}}\right)dt\leq
C(\tilde{C}(p^{\prime}),\varepsilon,R(V)).$
To this end, let $0<\delta<1$ and define the set
$U_{\delta}=\\{x\in\mathbb{T}:|g(x)|\leq\delta\\}$, then, for some
$C=C(\tilde{C}(p^{\prime}),\varepsilon,R(V))>0$, Theorem B.1 gives
$1\leq\exp\left(\frac{C}{\rho(U_{\delta})^{\frac{4}{p^{\prime}}+\varepsilon_{1}}}\right)\rho(U_{\delta})\delta^{2}\leq\exp\left(\frac{C}{\rho(U_{\delta})^{\frac{4}{p^{\prime}}+\varepsilon_{1}}}\right)\delta^{2}$
as $\rho(U_{\delta})\leq 1$, for some $\varepsilon_{1}>0$ to be chosen later.
Thus, for some $C,c>0$, we have
$\displaystyle\rho(U_{\delta})\leq
C(-\log\delta)^{-\frac{p^{\prime}}{4}+c\varepsilon_{1}}$
Taking $\delta=\exp(-t^{\frac{4}{p^{\prime}}+\frac{\varepsilon}{2}})$ and
choosing $\varepsilon_{1}$ appropriately in terms of $\varepsilon$, we deduce
that
$\rho\left(\log|g(x)|\leq-t^{\frac{4}{p^{\prime}}+\frac{\varepsilon}{2}}\right)\leq
C(\tilde{C}(p^{\prime}),\varepsilon,R(V))t^{-1-\varepsilon/3},$
which, once inserted in (B.1), implies the Theorem. ∎
Therefore, it will be enough to prove Theorem B.1.
### B.2. Proof of Theorem B.1
The main ingredient in the proof of Theorem B.1 is the following Lemma, which
we will prove in section B.3 below, see [29, Corollary 3.5] and [30].
###### Lemma B.2 (Spreading Lemma).
Let $V=\\{n_{i}\\}_{i}\subset\mathbb{Z}$ be such that $R(V)<\infty$ and let
$U\subset\mathbb{T}$ with $\rho(U)\leq 4R(V)/(4R(V)+1)$. Suppose that there
exists some integer $m\geq 1$ such that
$\frac{4}{\rho(U^{\prime})^{2}}\sum_{n_{i}\neq
n_{j}}\left|\widehat{\mathds{1}}_{U^{\prime}}(n_{i}-n_{j})\right|^{2}\leq
m+1,$
for all subsets $U^{\prime}\subset U$ of measure
$\rho(U^{\prime})\geq\rho(U)/2$. Then, there exists a set $U_{1}\supset U$
such that
1. (1)
The measure of $U_{1}$ satisfies
$\rho(U_{1}\backslash U)\geq\frac{\rho(U)}{4m}.$
2. (2)
For all $g\in L^{2}(\mathbb{T})$ with $\operatorname{Spec}(g)\subset V$, we
have
$\int_{U_{1}}|g(x)|^{2}dx\leq\left(C\frac{m^{5}}{\rho(U)^{2}}\right)^{3m}\int_{U}|g(x)|^{2}dx,$
for some absolute constant $C>0$.
We will also need the following two claims:
###### Claim B.3.
Under the assumptions of Theorem B.1, the integer $m>0$ in Lemma B.2 can be
taken to be
$\displaystyle
m=\left[\frac{\tilde{C}^{2}R(V)}{\rho(U^{\prime})^{\frac{2}{p^{\prime}}}}\right]=:\left[\frac{B}{\rho(U^{\prime})^{\frac{2}{p^{\prime}}}}\right]$
where $[\cdot]$ is the integer part.
###### Proof.
By the definition of $R(V)$ and duality in $L^{2}(\mathbb{T})$, we have
$\displaystyle\left(\sum_{n_{i}\neq
n_{j}}\left|\widehat{\mathds{1}}_{U^{\prime}}(n_{i}-n_{j})\right|^{2}\right)^{1/2}\leq
R(V)\left(\sum_{r\in
D(V)}\left|\widehat{\mathds{1}}_{U^{\prime}}(r)\right|^{2}\right)^{1/2}$
$\displaystyle=R(V)\sup\left\\{\left|\int_{U^{\prime}}\overline{h}(x)dx\right|:||h||_{L^{2}(\mathbb{T})}\leq
1,\hskip 8.53581pt\operatorname{Spec}(h)\subset D(V)\right\\}$ (B.2)
Now, since $D(V)$ is a $\Lambda(p^{\prime})$-system, we can bound the right
hand side of (B.2) using Hölder’s inequality as follows:
$\displaystyle\text{RHS}\eqref{B.2}\leq\rho(U^{\prime})^{1-\frac{1}{p^{\prime}}}\sup\left\\{||h||_{L^{p^{\prime}}(\mathbb{T})}:||h||_{L^{2}(\mathbb{T})}\leq
1,\hskip 8.53581pt\operatorname{Spec}(h)\subset
D(V)\right\\}\leq\tilde{C}\rho(U^{\prime})^{1-\frac{1}{p^{\prime}}},$
for some constant $\tilde{C}=\tilde{C}(V,p^{\prime})>0$ as in (2.2).
Therefore, bearing in mind (B.2), we obtain
$\frac{4}{\rho(U^{\prime})^{2}}\sum_{n_{i}\neq
n_{j}}\left|\widehat{\mathds{1}}_{U^{\prime}}(n_{i}-n_{j})\right|^{2}\leq\tilde{C}^{2}R(V)\rho(U^{\prime})^{-\frac{2}{p^{\prime}}},$
as required. ∎
###### Claim B.4.
Let $g\in L^{2}(\mathbb{T})$ with $\operatorname{Spec}(g)\subset
V=\\{n_{i}\\}_{i}$, and $U\subset\mathbb{T}$ be a measurable subset. If
$\rho(U)\geq 4R(V)/(4R(V)+1)$, then
$||g||^{2}_{L^{2}(\mathbb{T})}\leq\frac{2}{\rho(U)}\int_{U}|g(x)|^{2}dx.$
###### Proof.
First, we can write
$g=\sum_{i}\widehat{g}(n_{i})z^{n_{i}},$
so that, separating the diagonal terms from the others, we have
$\displaystyle\int_{U}|g(x)|^{2}dx$
$\displaystyle=\rho(U)\sum_{i}|\widehat{g}(n_{i})|^{2}+\sum_{i\neq
j}\widehat{\mathds{1}}_{U}(n_{i}-n_{j})\widehat{g}(n_{i})\overline{\widehat{g}(n_{j})}$
$\displaystyle=\rho(U)||g||^{2}_{L^{2}(\mathbb{T})}+\langle Q_{U}g,g\rangle,$
(B.3)
where $Q_{U}=(q_{ij})$ is an operator on $L^{2}(\mathbb{T})$ whose matrix
representation, in the basis $\\{z^{n_{i}}\\}$, is given by
$\displaystyle
q_{ij}=\begin{cases}\widehat{\mathds{1}}_{U}(n_{i}-n_{j})&n_{i}\neq n_{j}\\\
0&\text{otherwise}\end{cases}.$ (B.4)
Since $\mathds{1}_{U}(\cdot)$ is real-valued,
$\widehat{\mathds{1}}_{U}(-n)=\overline{\widehat{\mathds{1}}_{U}(n)}$, thus
$Q_{U}$ is a self-adjoint operator whose Hilbert-Schmidt norm is bounded by
$\displaystyle||Q_{U}||\leq R(V)^{1/2}\left(\sum_{n\neq
0}\left|\widehat{\mathds{1}}_{U}(n)\right|^{2}\right)^{1/2}=(R(V)\rho(U)(1-\rho(U)))^{1/2}.$
(B.5)
In particular, if $\rho(U)\geq 4R(V)/(4R(V)+1)$, we have
$(R(V)\rho(U)(1-\rho(U)))^{1/2}\leq\rho(U)/2$, thus (B.5) together with (B.3)
give Claim B.4. ∎
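The decomposition in the proof above is just an expansion of $\int_{U}|g|^{2}$ over the spectrum; it can be verified numerically for a small trigonometric polynomial. In the sketch below (our illustration; the frequencies, coefficients, and interval are arbitrary choices), we use the convention $\widehat{\mathds{1}}_{U}(n)=\int_{U}e(nx)dx$ with $e(x)=e^{2\pi ix}$:

```python
import cmath

freqs = [0, 1, 3]            # hypothetical spectrum V = {n_i}
coeffs = [0.5, 1.0, -0.7j]   # hypothetical coefficients g_hat(n_i)
u = 0.3                      # U = [0, u], so rho(U) = u

def g(x: float) -> complex:
    return sum(c * cmath.exp(2j * cmath.pi * n * x) for n, c in zip(freqs, coeffs))

def ind_hat(n: int) -> complex:
    """Fourier coefficient int_U e(nx) dx of the indicator of U = [0, u]."""
    if n == 0:
        return u
    return (cmath.exp(2j * cmath.pi * n * u) - 1.0) / (2j * cmath.pi * n)

# Left-hand side: int_U |g|^2 dx, midpoint rule.
N = 100000
lhs = sum(abs(g((k + 0.5) * u / N)) ** 2 for k in range(N)) * u / N

# Right-hand side: diagonal term rho(U) * sum |g_hat|^2 plus off-diagonal term.
diag = u * sum(abs(c) ** 2 for c in coeffs)
off = sum(ind_hat(ni - nj) * ci * cj.conjugate()
          for ni, ci in zip(freqs, coeffs)
          for nj, cj in zip(freqs, coeffs) if ni != nj)

print(lhs, (diag + off).real, abs((diag + off).imag))
```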
We are finally ready to prove Theorem B.1:
###### Proof of Theorem B.1.
Let $0<\nu\leq 4R(V)/(4R(V)+1)$ be some parameter and denote by $A(\nu)$ the
smallest constant such that
$||g||^{2}_{L^{2}(\mathbb{T})}\leq A(\nu)\int_{U}|g(x)|^{2}dx,$
for all $g\in L^{2}(\mathbb{T})$ with $\operatorname{Spec}(g)\subset V$ and
any set $U\subset\mathbb{T}$ with $\rho(U)\geq\nu$. Moreover, let
$\varphi(\nu)=\log A(\nu)$,
$\Delta(\nu)=\nu^{1+\frac{2}{p^{\prime}}}(4B)^{-1}$ and $m$ be given by Claim
B.3. Applying Lemma B.2, bearing in mind that $m\leq B\nu^{-\frac{2}{p^{\prime}}}$,
we obtain a set $U_{1}\subset\mathbb{T}$ of measure
$\rho(U_{1})\geq\nu+\Delta(\nu)$ such that
$\int_{U_{1}}|g(x)|^{2}dx\leq\left(\frac{CB^{5}}{\nu^{2+\frac{10}{p^{\prime}}}}\right)^{3B\nu^{-\frac{2}{p^{\prime}}}}\int_{U}|g(x)|^{2}dx.$
Since, by definition of $A(\cdot)$,
$||g||^{2}_{L^{2}(\mathbb{T})}\leq
A(\nu+\Delta(\nu))\int_{U_{1}}|g(x)|^{2}dx,$
we have
$A(\nu)\leq
A(\nu+\Delta(\nu))\left(\frac{CB^{5}}{\nu^{2+\frac{10}{p^{\prime}}}}\right)^{3B\nu^{-\frac{2}{p^{\prime}}}},$
and taking the logarithm of both sides, we finally deduce
$\displaystyle\frac{\varphi(\nu)-\varphi(\nu+\Delta(\nu))}{\Delta(\nu)}\leq\frac{12B^{2}}{\nu^{1+\frac{4}{p^{\prime}}}}\log\frac{CB^{5}}{\nu^{2+\frac{10}{p^{\prime}}}}\leq\frac{C(\varepsilon,B)}{\nu^{1+\frac{4}{p^{\prime}}+\varepsilon}}.$
(B.6)
Comparing (B.6) with the differential inequality $d\varphi(\nu)/d\nu\leq
C(\varepsilon)B\nu^{-1-\frac{4}{p^{\prime}}-\varepsilon}$ and bearing in mind
(B.3), we deduce that
$\varphi(\nu)\leq
C(\varepsilon)\tilde{C}^{2}R(\Lambda)\nu^{-1-\frac{4}{p^{\prime}}-\varepsilon},$
as required. If $\rho(U)\geq 4R(V)/(4R(V)+1)$, then Claim B.4 shows that the
conclusion of Theorem B.1 is still satisfied. ∎
### B.3. Proof of Lemma B.2.
In this section we prove Lemma B.2. The proof follows closely the arguments in
[29, Section 3.4]; again, we claim no originality. We will need the following
definition:
###### Definition B.5.
Let $m$ be a positive integer and let $\tau,\varkappa>0$ be some parameters.
Given $g\in L^{2}(\mathbb{T})$, we say that $g\in
EP_{\text{loc}}^{m}(\tau,\varkappa)$ if for every $t\in(0,\tau)$ there exist
constants $a_{0}(t),...,a_{m}(t)\in\mathbb{C}$ such that
$\sum_{k}|a_{k}|^{2}=1$ and
$\left|\left|\sum_{k=0}^{m}a_{k}(t)g_{kt}\right|\right|_{L^{2}(\mathbb{T})}\leq\varkappa,$
where $g_{kt}(\cdot):=g(e(kt)\cdot).$
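For intuition, here is a simple observation of our own (not taken from [29]): a genuine exponential polynomial with at most $m$ frequencies belongs to $EP_{\text{loc}}^{m}(\tau,0)$ for every $\tau>0$. Indeed,

```latex
g(x)=\sum_{j=1}^{m}b_{j}\,e(n_{j}x)
\quad\Longrightarrow\quad
g_{kt}(x)=\sum_{j=1}^{m}b_{j}\,e(n_{j}kt)\,e(n_{j}x)
\in\operatorname{span}\{e(n_{1}x),\dots,e(n_{m}x)\},
```

so the $m+1$ shifts $g_{0t},\dots,g_{mt}$ lie in an $m$-dimensional space and are linearly dependent: for every $t$ there exist coefficients with $\sum_{k}|a_{k}|^{2}=1$ and $\sum_{k=0}^{m}a_{k}(t)g_{kt}=0$, so one may take $\varkappa=0$.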
We refer the reader to [29, Section 3.1-3.4] for an accurate description of
the class $EP_{\text{loc}}^{m}(\tau,\varkappa)$. Intuitively, functions in
$EP_{\text{loc}}^{m}(\tau,\varkappa)$ “behave like” trigonometric polynomials
of degree $m$. The key estimate that we will need is the following [29,
Corollary 3.5’]:
###### Lemma B.6.
Let $g\in EP_{\text{loc}}^{m}(\tau,\varkappa)$ for some integer $m>0$ and some
$\tau,\varkappa>0$. Moreover, let $U\subset\mathbb{T}$ be a set of positive
measure and $\nu:=\rho(e(m\tau)U\backslash U)$. There exists a set
$U_{1}\supset U$ of measure $\rho(U_{1}\backslash U)\geq\frac{\nu}{2}$ such
that
$\int_{U_{1}}|g(x)|^{2}dx\leq\left(\frac{Cm^{3}}{\nu^{2}}\right)^{2m}\left(\int_{U}|g(x)|^{2}dx+\varkappa^{2}\right).$
We will also need the following two claims:
###### Claim B.7.
Let $U\subset\mathbb{T}$ be a measurable subset and let $m$ be as in Lemma
B.2. Then, there exists a subspace $V_{m}$ of $L^{2}(\mathbb{T})$ of dimension
at most $m$ such that for all $g\in L^{2}(\mathbb{T})$ orthogonal to $V_{m}$,
we have
$||g||_{L^{2}(\mathbb{T})}\leq\frac{2}{\rho(U^{\prime})}\int_{U^{\prime}}|g(x)|^{2}dx$
for all subsets $U^{\prime}\subset U$ with $\rho(U^{\prime})\geq\rho(U)/2$.
###### Proof.
Indeed, let $|\sigma_{1}|\geq|\sigma_{2}|\geq...$ be the eigenvalues of the
operator $Q_{U^{\prime}}$ defined in (B.4) with $U^{\prime}$ instead of $U$.
Then we take $V_{m}$ to be the subspace generated by the eigenvectors with
eigenvalues $\sigma_{1},...,\sigma_{m}$. We are now going to show that $V_{m}$
has the claimed property. By definition of $m$, we have
$\sum_{i}|\sigma_{i}|^{2}=||Q_{U^{\prime}}||_{HS}^{2}=\sum_{i\neq
j}|q_{ij}|^{2}=\sum_{n_{i}\neq
n_{j}}\left|\widehat{\mathds{1}}_{U^{\prime}}(n_{i}-n_{j})\right|^{2}\leq\frac{\rho(U^{\prime})^{2}(m+1)}{4}.$
Thus,
$|\sigma_{m+1}|^{2}\leq\frac{1}{m+1}\cdot\frac{\rho(U^{\prime})^{2}(m+1)}{4}\leq\frac{\rho(U^{\prime})^{2}}{4}.$
Therefore Claim B.7 follows from the fact that the norm of $Q_{U^{\prime}}$
restricted to the orthogonal complement of $V_{m}$ is at most
$|\sigma_{m+1}|\leq\rho(U^{\prime})/2$, together with an argument analogous to that of Claim B.4.
∎
Now, if $V$ is finite we let $N=|V|$; if $V$ is infinite, we can ignore the
dependence on $N$ in the rest of the argument. With this notation, we claim
the following:
###### Claim B.8.
Let $U\subset\mathbb{T}$ be a measurable set, $m$ be as in Lemma B.2 and, if
$V$ is finite, suppose that $m<N$; moreover, let $g\in L^{2}(\mathbb{T})$ with
$\operatorname{Spec}(g)\subset V$. Then there exists some $\sigma\in[0,1]$
such that $g\in EP^{m}_{\text{loc}}(\tau,\varkappa)$ where
$\varkappa^{2}=\frac{4}{\rho(U)}(m+1)\int_{U}|g(x)|^{2}dx$, $\tau=\sigma/2m$
and, moreover $\nu:=\rho(e(m\tau)U\backslash U)\geq\rho(U)/2m$.
###### Proof.
Let $t\in[0,1)$ be given. Since exponentials with different frequencies are
linearly independent in $L^{2}(\mathbb{T})$ (suppose that $n_{i}\neq n_{j}$
for $i\neq j$ and $\sum_{i}a_{n_{i}}e(n_{i}x)=0$; multiplying both sides by
$e(-n_{1}x)$ and integrating for $x\in\mathbb{T}$, we see that $a_{n_{1}}=0$,
and repeating the argument gives $a_{n_{i}}=0$ for all $i$), we can choose
coefficients $a_{k}(t)$, so that $\sum_{k}|a_{k}|^{2}=1$ and the function
$h(\cdot)=\sum_{k=0}^{m}a_{k}(t)g_{kt}(\cdot),$
where $g_{kt}(x)=\sum a_{n_{i}}e(n_{i}kt)e(n_{i}x)$, is orthogonal to $V_{m}$,
given in Claim B.7, provided that $m<N$. Therefore, Claim B.7 gives
$\displaystyle||h||_{L^{2}(\mathbb{T})}\leq\frac{2}{\rho(U^{\prime})}\int_{U^{\prime}}|h(x)|^{2}dx,$
(B.7)
for all $U^{\prime}\subset U$ with $\rho(U^{\prime})\geq\rho(U)/2$.
We are now going to choose an appropriate set $U^{\prime}$ in order to
estimate the RHS of (B.7). Let $t\geq 0$ and take
$U^{\prime}=U_{t}:=\cap_{k=0}^{m}e(-kt)U$. Since the function
$t\rightarrow\rho(U\backslash U_{t})$ is continuous and takes value $0$ at
$t=0$, we can find some sufficiently small $\tau>0$ so that, for all
$t\in(0,\tau)$, the set $U_{t}$ has measure at least
$\rho(U)/2$. To estimate the RHS of (B.7), we observe that, for every
$k=0,...,m$, we have
$\int_{U_{t}}|g_{kt}(x)|^{2}dx\leq\int_{e(-kt)U}|g_{kt}(x)|^{2}dx=\int_{U}|g(x)|^{2}dx.$
Thus, the Cauchy-Schwarz inequality gives
$\displaystyle\int_{U_{t}}|h(x)|^{2}dx\leq\left(\sum_{k=0}^{m}\int_{U_{t}}|g_{kt}(x)|^{2}dx\right)\leq(m+1)\int_{U}|g(x)|^{2}dx.$
(B.8)
Hence, (B.7) together with (B.8), bearing in mind that
$\rho(U_{t})\geq\rho(U)/2$, give that for all $t\in(0,\tau)$ there exist
coefficients $a_{0}(t),...,a_{m}(t)$ such that $\sum_{k}|a_{k}|^{2}=1$ and
$\left|\left|\sum_{k=0}^{m}a_{k}(t)g_{kt}\right|\right|_{L^{2}(\mathbb{T})}\leq\frac{4(m+1)}{\rho(U)}\int_{U}|g(x)|^{2}dx.$
We are now left with proving the claimed estimates on $\tau$ and $\nu$. Let
$\psi(s)=\rho(e(s)U\backslash U)$, bearing in mind that $\rho(U)\leq
4R(V)/(4R(V)+1)$ so that, by (B.2), $\rho(\mathbb{T}\backslash
U)\geq(4R(V)+1)^{-1}\geq(2m)^{-1}$, we have
$\int_{0}^{1}\psi(s)ds=\rho(U)\rho(\mathbb{T}\backslash
U)\geq\frac{\rho(U)}{2m}.$
Thus, since $\psi(s)$ is non-negative and continuous with $\psi(0)=0$, there
exists some $\sigma\in(0,1]$ such that $\psi(\sigma)=\rho(U)/2m$ and
$\psi(s)\leq\rho(U)/2m$ for all $s\leq\sigma$. We now verify that $\tau=\sigma/m$ satisfies
$\rho(U_{t})\geq\rho(U)/2$ for all $t\in(0,\tau)$. Indeed, bearing in mind
that $kt\in(0,m\tau)$, we have
$\displaystyle\rho(U_{t})=\rho\left(\cap_{k=0}^{m}e(-kt)U\right)\geq\rho(U)-\sum_{k=1}^{m}\rho(e(kt)U\backslash
U)\geq\rho(U)-m\frac{\rho(U)}{2m}\geq\rho(U)/2,$ (B.9)
concluding the proof of Claim B.8. ∎
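The averaging identity $\int_{0}^{1}\psi(s)ds=\rho(U)\rho(\mathbb{T}\backslash U)$ used above can also be checked numerically on a discretized circle (an illustration with our own choice of $U$, not part of the proof):

```python
# Discretize T = [0,1) into N points; U is an interval of measure rho.
N = 500
rho = 0.3
U = [1 if i / N < rho else 0 for i in range(N)]

def psi(shift):
    # measure of e(s)U \ U: points in the rotated set that are not in U
    return sum(1 for i in range(N) if U[(i - shift) % N] and not U[i]) / N

integral = sum(psi(k) for k in range(N)) / N
print(abs(integral - rho * (1 - rho)))  # zero up to floating-point error
```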
We are finally ready to present the proof of Lemma B.2:
###### Proof of Lemma B.2.
Suppose that $m<N$, then, applying Lemma B.6 with the choice of parameters
given by Claim B.8, we obtain part (1) of Lemma B.2. For part (2), Lemma B.6
gives
$\displaystyle\int_{U_{1}}|g(x)|^{2}dx$
$\displaystyle\leq\left(\frac{Cm^{5}}{\rho(U)^{2}}\right)^{2m}\left(\frac{4(m+1)}{\rho(U)}+1\right)\int_{U}|g(x)|^{2}dx$
$\displaystyle\leq\left(\frac{Cm^{5}}{\rho(U)^{2}}\right)^{3m}\int_{U}|g(x)|^{2}dx,$
(B.10)
as required.
Let us now suppose that $m\geq N$. Then the Nazarov-Turán Lemma [29, Theorem
1] gives, for any set $U_{1}\subset\mathbb{T}$ of measure
$\rho(U_{1})=\rho(U)+\rho(U)/4m$,
$\displaystyle\int_{U_{1}}|g(x)|^{2}dx$
$\displaystyle\leq\left(\frac{C\rho(U_{1})}{\rho(U)}\right)^{N-1}\int_{U}|g(x)|^{2}dx$
$\displaystyle\leq\left(C+\frac{C}{4m}\right)^{N-1}\int_{U}|g(x)|^{2}dx$
and (B.10) follows. ∎
## Appendix C Log-integrability at small scales
The aim of this section is to prove Lemma 4.5. First, via a series of steps
already taken in proving Proposition 3.1, we reduce Lemma 4.5 to Lemma C.3
below. As in the proof of Proposition 3.1, Lemma 4.5 follows from the
following:
###### Lemma C.1.
Let $\varepsilon>0$, $f_{\lambda}$ be as in (1.5) and $p\geq 1$ be a fixed
integer. Then there exists a density one subsequence of $\lambda\in S$ and
some constant $C=C(p)$ such that
$\frac{1}{\operatorname{Vol}(B)}\int_{B}|\log|f_{\lambda}(x)||^{p}dx\leq C,$
uniformly for all balls $B\subset\mathbb{T}^{2}$ of radius
$r>\lambda^{-1/2+\varepsilon}$.
In order to prove Lemma C.1, we introduce some notation: let $f=f_{\lambda}$
be as in (1.5); we write $f_{r}$, with a slight abuse of notation, for $f$
restricted to the box of radius $0<r<1$ centred at some point
$z\in\mathbb{T}^{2}$, that is
$f_{\lambda,r,z}(y)=f_{r}(y):=f(z+ry),$
and $y\in[-1/2,1/2]^{2}$. Moreover, we write $f^{i}_{r}(y_{i})$ for $f_{r}$
considered as a function of the variable $y_{i}$ only, that is
$\displaystyle f^{i}_{r}(y_{i})=\sum_{\xi^{1}}\tilde{a_{\xi}}e(r\xi^{1}y_{i})$
$\displaystyle\tilde{a_{\xi}}=e(z\xi)(e(r\xi^{2}y_{j})+e(-r\xi^{2}y_{j})),$
(C.1)
where $\xi=(\xi^{1},\xi^{2})$. As in the proof of Proposition 1.7, in order to
prove Lemma C.1 it is enough to consider the projection of $f$ along one
coordinate:
###### Lemma C.2.
Let $\varepsilon>0$, $0<r<1$, $f^{i}_{r}$ be as in (C.1) and $p>2$ be a fixed
integer. Then there exists a density one subsequence of $\lambda\in S$ and
some constant $C=C(p)$ such that
$\int_{[-1/2,1/2]}\left|\log\left|\frac{f^{i}_{r}(y_{i})}{||f^{i}_{r}||_{L^{2}}}\right|\right|^{p}dy_{i}\leq
C,$
uniformly for all $r>\lambda^{-1/2+\varepsilon}$ and $i=1,2$.
Since $f_{r}^{i}$ belongs to $L^{2}([-1/2,1/2])=L^{2}(\mathbb{T})$, as in the
proof of Theorem 2.6, in order to prove Lemma C.2 it is enough to show the
following:
###### Lemma C.3.
Let $\varepsilon>0,\delta>0$, $0<r<1$, $f^{i}_{r}$ be as in (C.1) and $p>2$ be
a fixed integer. Then there exists a density one subsequence of $\lambda\in S$
and some constant $C=C(p)$ such that
$||f^{i}_{r}||_{L^{2}(\mathbb{T})}\leq\exp\left(\frac{C}{\rho(U)^{\frac{2}{p}+\delta}}\right)\int_{U}|f^{i}_{r}(x)|^{2}dx,$
for any positive measure set $U\subset\mathbb{T}$, where $\rho(\cdot)$ is the
Lebesgue measure on $\mathbb{T}$, and uniformly for all
$r>\lambda^{-1/2+\varepsilon}$.
Hence, we have reduced the proof of Lemma 4.5 to proving Lemma C.3.
### C.1. Proof of Lemma C.3
In order to prove Lemma C.3, we will use the following result whose proof is
postponed to section C.2:
###### Lemma C.4 (Spreading Lemma at small scales).
Let $V=\\{n_{i}\\}_{i}\subset\mathbb{Z}$ such that $R(V)<\infty$ and let
$U\subset\mathbb{T}$ be a set of positive measure. Let $0<r<1$ be some
parameter, assume the following:
1. (1)
There exists some constant $0<A(V,2)=A(V)<(4R(V)+1)/4R(V)$, such that for all
$g\in L^{2}(\mathbb{T})$ with $\operatorname{Spec}(g)\subset V$ and
$||g||_{L^{2}(\mathbb{T})}=1$, we have
$\frac{1}{2}\leq||g_{r}||_{L^{2}(\mathbb{T})}\leq A(V),$
where $g_{r}(\cdot)=g(r\cdot)$.
2. (2)
There exists some integer $m\geq 1$ such that
$\frac{4}{\rho(U^{\prime})^{2}}\sum_{n_{i}\neq
n_{j}}\left|\widehat{\mathds{1}}_{U^{\prime}}(r(n_{i}-n_{j}))\right|^{2}\leq
m+1,$
for all subsets $U^{\prime}\subset U$ of measure
$\rho(U^{\prime})\geq\rho(U)/2$ and $\rho(U)\leq A(V)4R(V)/(4R(V)+1).$
Then, there exists a set $U_{1}\supset U$ such that
1. (1)
The measure of $U_{1}$ satisfies
$\rho(U_{1}\backslash U)\geq\frac{\rho(U)}{4m}.$
2. (2)
For all $g\in L^{2}(\mathbb{T})$ with $\operatorname{Spec}(g)\subset V$, we
have
$\int_{U_{1}}|g_{r}(x)|^{2}dx\leq\left(C\frac{m^{5}}{\rho(U)^{2}}\right)^{3m}\int_{U}|g_{r}(x)|^{2}dx,$
for some absolute constant $C>0$.
In order to check the assumptions of Lemma C.4, we will need the following
results from [2] and [31] respectively:
###### Lemma C.5.
Let $\ell$ be a positive integer and $\delta>0$. Then, for a density one
subsequence of $\lambda\in S$, we have
$\displaystyle|\xi_{1}+...+\xi_{2\ell}|\geq\lambda^{-1/2+\delta}$
$\displaystyle|\xi_{1}|^{2}=...=|\xi_{2\ell}|^{2}=\lambda$
uniformly for all $\xi_{1},...,\xi_{2\ell}$ such that
$\xi_{1}+...+\xi_{\ell}\neq 0$.
###### Lemma C.6.
Let $\xi=(\xi^{1},\xi^{2})\in\mathbb{Z}^{2}$, $\delta>0$ and $\ell\geq 2$.
Then, for a density one subsequence of $\lambda\in S$, we have
$\displaystyle|\xi^{i}_{1}+...+\xi^{i}_{2\ell}|\geq\lambda^{-1/2+\delta}$
$\displaystyle|\xi_{1}|^{2}=...=|\xi_{2\ell}|^{2}=\lambda,\hskip 8.53581pti=1,2,$
uniformly for all $\xi_{1},...,\xi_{2\ell}$ such that
$\xi^{i}_{1}+...+\xi^{i}_{\ell}\neq 0$.
We are now ready to prove Lemma C.3:
###### Proof of Lemma C.3.
We prove Lemma C.3 for $f^{1}_{r}=f_{r}$, the proof for $f^{2}_{r}$ being
identical. We begin by checking the assumptions of Lemma C.4, starting with
$(1)$: let
$g(x)=\sum_{i}a_{n_{i}}e(n_{i}\cdot x),$
for $x\in\mathbb{T}$, $n_{i}\in V=\\{\xi^{1}:\xi=(\xi^{1},\xi^{2}),\hskip
5.69054pt|\xi|^{2}=\lambda\\}$ and $\sum|a_{n_{i}}|^{2}=1$. Then, separating
the diagonal terms from the others, we obtain
$\int_{\mathbb{T}}|g(rx)|^{2}dx=\sum|a_{n}|^{2}+\sum_{n_{i}\neq
n_{j}}a_{n_{i}}\overline{a_{n_{j}}}\int_{\mathbb{T}}e(r(n_{i}-n_{j})x)dx=1+O\left(\sum_{n_{i}\neq
n_{j}}\frac{a_{n_{i}}\overline{a_{n_{j}}}}{r|n_{i}-n_{j}|}\right).
Using Lemma C.6 with $\delta=\varepsilon/2$, bearing in mind that
$r>\lambda^{-1/2+\varepsilon}$ and observing that, by normalization,
$\sup_{n}|a_{n}|\leq 1$, we have
$\int_{\mathbb{T}}|g(rx)|^{2}dx=1+O\left(\lambda^{-\varepsilon/2}N^{2}\right)=1+o_{\lambda\rightarrow\infty}(1),$
where we have used the fact that $N\lesssim\lambda^{o(1)}$ [16, Page 342].
Therefore $(1)$ of Lemma C.4 holds for all sufficiently large $\lambda\in S$
along a density one subsequence.
We now show $(2)$ with $m=\lfloor B\rho(U^{\prime})^{-1/p}\rfloor$ for some
sufficiently large constant $B$, provided that $m<N$. First, we write
$\displaystyle\left(\sum_{n_{i}\neq
n_{j}}|\widehat{\mathds{1}}_{U^{\prime}}(r(n_{i}-n_{j}))|^{2}\right)^{1/2}\leq R(V)^{1/2}\left(\sum_{d\in
D(V)}|\widehat{\mathds{1}}_{U^{\prime}}(rd)|^{2}\right)^{1/2}$ (C.2)
$\displaystyle=R(V)^{1/2}\sup\left\\{\sum_{d}\widehat{\mathds{1}}_{U^{\prime}}(rd)\overline{c_{d}}:\left(\sum_{d}|c_{d}|^{2}\right)^{1/2}\leq
1\right\\}.$
We observe that
$\displaystyle\widehat{\mathds{1}}_{U^{\prime}}(rd)$
$\displaystyle=\int_{\mathbb{T}}\mathds{1}_{U^{\prime}}(x)e(rdx)dx\overset{y=rx}{=}r^{-1}\int_{r\mathbb{T}}\mathds{1}_{U^{\prime}}(y/r)e(dy)dy=r^{-1}\widehat{\mathds{1}}_{rU^{\prime}}(d)$
(C.3)
Thus, using Parseval’s identity, the RHS of (C.2) can be re-written as
$\displaystyle\left(\sum_{d\in
D(V)}|\widehat{\mathds{1}}_{U^{\prime}}(rd)|^{2}\right)^{1/2}$
$\displaystyle=\sup\left\\{r^{-1}\int_{rU^{\prime}}\overline{g}(x)dx:||g||_{L^{2}}\leq
1,\operatorname{Spec}(g)\subset D(V)\right\\}$
$\displaystyle\overset{x=ry}{=}\sup\left\\{\int_{U^{\prime}}\overline{g}(ry)dy:||g||_{L^{2}}\leq
1,\operatorname{Spec}(g)\subset D(V)\right\\}$
Now, writing
$g(ry)=\sum_{d\in D(V)}c_{d}e(dry),$
with $\sum|c_{d}|^{2}\leq 1$, Hölder’s inequality gives
$\displaystyle\int_{U^{\prime}}\overline{g}(ry)dy\leq\rho(U^{\prime})^{1-1/(2p)}\left(\int_{\mathbb{T}}|g(ry)|^{2p}dy\right)^{1/(2p)}.$
(C.4)
Expanding the integrand, splitting the resulting sum in diagonal and off-
diagonal terms and using the fact that $D(V)$ is a $\Lambda(2p)$ system, by
Lemma 2.3, to estimate the diagonal contribution, we obtain that the integral
on the RHS of (C.4) can be bounded by
$\displaystyle\int_{\mathbb{T}}|g(ry)|^{2p}dy\leq
C(p)+\sum_{\begin{subarray}{c}d_{1},...,d_{2p}\\\ \text{off-
diagonal}\end{subarray}}c_{d_{1}}\cdots\overline{c_{d_{2p}}}\int_{\mathbb{T}}e(r(d_{1}+\cdots-d_{2p})y)dy$
Using Lemma C.6, we see that
$\int_{\mathbb{T}}e(r(d_{1}+\cdots-d_{2p})y)dy\lesssim\lambda^{-\varepsilon/2},$
thus, in light of the fact that $N\lesssim\lambda^{o(1)}$, we obtain
$\displaystyle\int_{\mathbb{T}}|g(ry)|^{2p}dy\leq
C(p)+N^{2p}\lambda^{-\varepsilon}\leq 2C(p),$ (C.5)
for all sufficiently large $\lambda>0$. Combining (C.2), (C.4) and (C.5), we
obtain $\sum_{n_{i}\neq
n_{j}}|\widehat{\mathds{1}}_{U^{\prime}}(r(n_{i}-n_{j}))|^{2}\lesssim
R(V)\rho(U^{\prime})^{2-1/p}$, so that assumption $(2)$ of Lemma C.4 holds
with $m=\lfloor B\rho(U^{\prime})^{-1/p}\rfloor$, provided that $B$ is chosen
sufficiently large (depending on $R(V)$ and $p$).
Finally, we observe that if $m\geq N$ the conclusion of Lemma C.4 still holds
thanks to the Nazarov-Turán Lemma [29, Theorem 1], as in the proof of Lemma
B.2. Thus, the conclusion of Lemma C.4 holds for $m=\lfloor
B\rho(U^{\prime})^{-1/p}\rfloor$ for all $U^{\prime}\subset\mathbb{T}$ with
$\rho(U^{\prime})\leq 4A(V)R(V)/(4R(V)+1)$. Applying Lemma C.4 as in the proof
of Theorem B.1, we conclude the proof of Lemma C.3. ∎
### C.2. Proof of Lemma C.4
The proof of Lemma C.4 is similar to the proof of Lemma B.2 and it will
proceed through a series of claims.
###### Claim C.7.
Let $g\in L^{2}(\mathbb{T})$ with $\operatorname{Spec}(g)\subset
V=\\{n_{i}\\}_{i}$, and $U\subset\mathbb{T}$ be a measurable subset. Suppose
that $V$ satisfies assumption $(1)$ of Lemma C.4 with some constant
$A(V)<(4R(V)+1)/4R(V)$ and $\rho(U)\geq A(V)4R(V)/(4R(V)+1)$, then
$2^{-1}||g_{r}||_{L^{2}(\mathbb{T})}\leq||g||_{L^{2}(\mathbb{T})}\leq\frac{2}{\rho(U)}\int_{U}|g_{r}(x)|^{2}dx.$
###### Proof.
First, we can write
$g(x)=\sum_{i}\widehat{g}(n_{i})e(n_{i}\cdot x),$
so that, separating the diagonal terms from the others, we have
$\displaystyle\int_{U}|g_{r}(x)|^{2}dx$
$\displaystyle=\rho(U)\sum_{i}|\widehat{g}(n_{i})|^{2}+\sum_{i\neq
j}\widehat{\mathds{1}}_{U}(r(n_{i}-n_{j}))\widehat{g}(n_{i})\overline{\widehat{g}(n_{j})}$
$\displaystyle=\rho(U)||g||^{2}_{L^{2}(\mathbb{T})}+\langle Q^{r}_{U}g,g\rangle,$
(C.6)
where $Q^{r}_{U}=(q_{ij})$ is an operator on $L^{2}(\mathbb{T})$ with matrix
representation, in the basis $\\{e(n\cdot x)\\}_{n\in\mathbb{Z}}$, given by
$\displaystyle
q_{ij}=\begin{cases}\widehat{\mathds{1}}_{U}(r(n_{i}-n_{j}))&n_{i}\neq
n_{j}\\\ 0&\text{otherwise}\end{cases}.$ (C.7)
Since $\mathds{1}_{U}(\cdot)$ is real-valued,
$\widehat{\mathds{1}}_{U}(-n)=\overline{\widehat{\mathds{1}}_{U}(n)}$, thus
$Q^{r}_{U}$ is a self-adjoint operator whose Hilbert-Schmidt norm is bounded,
using the assumptions, by
$\displaystyle||Q^{r}_{U}||_{HS}$
$\displaystyle=\left(\sum_{i,j}|\widehat{\mathds{1}}_{U}(r(n_{i}-n_{j}))|^{2}\right)^{1/2}\leq
R(V)^{1/2}\left(\sum_{d\neq
0}\left|\widehat{\mathds{1}}_{U}(rd)\right|^{2}\right)^{1/2}$
$\displaystyle\leq
R(V)^{1/2}\left(\sup\left\\{\int_{U}h_{r}(x)dx:\operatorname{Spec}(h)\subset
V,||h||_{L^{2}(\mathbb{T})}=1\right\\}-\rho(U)^{2}\right)^{1/2}$ $\displaystyle\leq
R(V)^{1/2}(A(V)\rho(U)-\rho(U)^{2})^{1/2}$ (C.8)
In particular, if $\rho(U)\geq A(V)4R(V)/(4R(V)+1)$, we have
$R(V)^{1/2}(A(V)\rho(U)-\rho(U)^{2})^{1/2}\leq\rho(U)/2$, thus (C.8) together
with (C.6) give Claim C.7. ∎
The proof of the following claim, being identical to the proof of Claim B.7,
is omitted.
###### Claim C.8.
Let $U\subset\mathbb{T}$ be a measurable subset and let $m$ be as in Lemma
C.4. Suppose that $V$ satisfies assumptions $(1)-(2)$ of Lemma C.4. Then there
exists a subspace $V_{m}$ of $L^{2}(\mathbb{T})$ of dimension at most $m$ such
that for all $g\in L^{2}(\mathbb{T})$ orthogonal to $V_{m}$ with
$\operatorname{Spec}(g)\subset V$, we have
$||g_{r}||_{L^{2}(\mathbb{T})}\leq\frac{4}{\rho(U^{\prime})}\int_{U^{\prime}}|g_{r}(x)|^{2}dx$
for all subsets $U^{\prime}\subset U$ with $\rho(U^{\prime})\geq\rho(U)/2$.
###### Claim C.9.
Let $U\subset\mathbb{T}$ be a measurable set, $m$ be as in Lemma C.4 and let
$g\in L^{2}(\mathbb{T})$ with $\operatorname{Spec}(g)\subset V$. Suppose that
$V$ satisfies assumptions $(1)-(2)$ of Lemma C.4. Then there exists some
$\sigma\in[0,1]$ such that $g_{r}\in EP^{m}_{\text{loc}}(\tau,\varkappa)$
where $\varkappa^{2}=\frac{8}{\rho(U)}(m+1)\int_{U}|g_{r}(x)|^{2}dx$,
$\tau=\sigma/2m$ and, moreover $\nu:=\rho(e(m\tau)U\backslash
U)\geq\rho(U)/2m$.
###### Proof.
Let $t\in[0,1)$ be given; we observe that we can choose coefficients
$a_{k}(t)$, so that $\sum_{k}|a_{k}|^{2}=1$ and the function
$h(\cdot)=\sum_{k=0}^{m}a_{k}(t)g^{kt}(\cdot),$
where $g^{kt}(x)=\sum a_{n_{i}}e(n_{i}kt)e(n_{i}x)$, is orthogonal to $V_{m}$,
given in Claim C.8. Therefore, Claim C.8 gives
$\displaystyle||\sum_{k=0}^{m}a_{k}g^{kt}_{r}||_{L^{2}(\mathbb{T})}=||h_{r}||_{L^{2}(\mathbb{T})}\leq\frac{4}{\rho(U^{\prime})}\int_{U^{\prime}}|h_{r}(x)|^{2}dx,$
(C.9)
for all $U^{\prime}\subset U$ with $\rho(U^{\prime})\geq\rho(U)/2$.
We are now going to choose an appropriate set $U^{\prime}$ in order to
estimate the RHS of (C.9). Let $t\geq 0$ and take
$U^{\prime}=U_{t}:=\cap_{k=0}^{m}e(-kt)U$. Since the function
$t\rightarrow\rho(U\backslash U_{t})$ is continuous and takes value $0$ at
$t=0$, we can find some sufficiently small $\tau>0$ so that, for all
$t\in(0,\tau)$, the set $U_{t}$ has measure at least
$\rho(U)/2$. To estimate the RHS of (C.9), we observe that, for every
$k=0,...,m$, we have
$\int_{U_{t}}|g^{kt}_{r}(x)|^{2}dx\leq\int_{e(-kt)U}|g^{kt}_{r}(x)|^{2}dx=\int_{U}|g_{r}(x)|^{2}dx.$
Thus, the Cauchy-Schwarz inequality gives
$\displaystyle\int_{U_{t}}|h_{r}(x)|^{2}dx\leq\left(\sum_{k=0}^{m}\int_{U_{t}}|g^{kt}_{r}(x)|^{2}dx\right)\leq(m+1)\int_{U}|g_{r}(x)|^{2}dx.$
(C.10)
Hence, (C.9) together with (C.10), bearing in mind that
$\rho(U_{t})\geq\rho(U)/2$, give that for all $t\in(0,\tau)$ there exist
coefficients $a_{0}(t),...,a_{m}(t)$ such that $\sum_{k}|a_{k}|^{2}=1$ and
$\left|\left|\sum_{k=0}^{m}a_{k}(t)g^{kt}_{r}\right|\right|_{L^{2}(\mathbb{T})}\leq\frac{8(m+1)}{\rho(U)}\int_{U}|g_{r}(x)|^{2}dx.$
The claimed estimates on $\tau$ and $\nu$ follow an identical argument as in
the proof of Claim B.8, which is therefore omitted. ∎
We are finally ready to prove Lemma C.4:
###### Proof of Lemma C.4.
Given Claims C.7, C.8 and C.9, Lemma C.4 follows from Lemma B.6 with the
choice of parameters given in Claim C.9, as in the proof of Lemma B.2. ∎
## References
* [1] J.-M. Azaïs and M. Wschebor, Level sets and extrema of random processes and fields, John Wiley & Sons, Inc., Hoboken, NJ, 2009.
* [2] J. Benatar, D. Marinucci, and I. Wigman, Planck-scale distribution of nodal length of arithmetic random waves, J. Anal. Math., 141 (2020), pp. 707–749.
* [3] M. V. Berry, Regular and irregular semiclassical wavefunctions, Journal of Physics A: Mathematical and General, 10 (1977), p. 2083.
* [4] , Semiclassical mechanics of regular and irregular motion, Les Houches lecture series, 36 (1983), pp. 171–271.
* [5] M. V. Berry, Statistics of nodal lines and points in chaotic quantum billiards: perimeter corrections, fluctuations, curvature, J. Phys. A, 35 (2002), pp. 3025–3038.
* [6] P. Billingsley, Convergence of probability measures, John Wiley & Sons, 2013.
* [7] E. Bombieri and J. Bourgain, A problem on sums of two squares, Int. Math. Res. Not. IMRN, (2015), pp. 3343–3407.
* [8] J. Bourgain, On toral eigenfunctions and the random wave model, Israel Journal of Mathematics, 201 (2014), pp. 611–630.
* [9] J. Brüning, Über Knoten von Eigenfunktionen des Laplace-Beltrami-Operators, Math. Z., 158 (1978), pp. 15–21.
* [10] J. Buckley and I. Wigman, On the number of nodal domains of toral eigenfunctions, in Annales Henri Poincaré, vol. 17, Springer, 2016, pp. 3027–3062.
* [11] V. Cammarota, O. Klurman, and I. Wigman, Boundary effect on the nodal length for arithmetic random waves, and spectral semi-correlations, Comm. Math. Phys., 376 (2020), pp. 1261–1310.
* [12] H. Donnelly and C. Fefferman, Nodal sets of eigenfunctions on reimannian manifolds, Inventiones mathematicae, 93 (1988), pp. 161–183.
* [13] P. Erdős and R. R. Hall, On the angular distribution of Gaussian integers with fixed norm, vol. 200, 1999, pp. 87–94. Paul Erdős memorial collection.
* [14] L. C. Evans, Partial differential equations, vol. 19 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 1998.
* [15] A. Granville and I. Wigman, Planck-scale mass equidistribution of toral Laplace eigenfunctions, Comm. Math. Phys., 355 (2017), pp. 767–802.
* [16] G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers, Oxford, fourth ed., 1975.
* [17] P. Humphries, Equidistribution in shrinking sets and $L^{4}$-norm bounds for automorphic forms, Math. Ann., 371 (2018), pp. 1497–1543.
* [18] I. Kátai and I. Környei, On the distribution of lattice points on circles, Ann. Univ. Sci. Budapest. Eötvös Sect. Math., 19 (1976), pp. 87–91 (1977).
* [19] M. Krishnapur, P. Kurlberg, and I. Wigman, Nodal length fluctuations for arithmetic random waves, Ann. of Math. (2), 177 (2013), pp. 699–737.
* [20] P. Kurlberg and I. Wigman, On probability measures arising from lattice points on circles, Math. Ann., 367 (2017), pp. 1057–1098.
* [21] S. Lester and Z. Rudnick, Small scale equidistribution of eigenfunctions on the torus, Comm. Math. Phys., 350 (2017), pp. 279–300.
* [22] A. Logunov, Nodal sets of Laplace eigenfunctions: polynomial upper estimates of the Hausdorff measure, Ann. of Math. (2), 187 (2018), pp. 221–239.
* [23] , Nodal sets of Laplace eigenfunctions: proof of Nadirashvili’s conjecture and of the lower bound in Yau’s conjecture, Ann. of Math. (2), 187 (2018), pp. 241–262.
* [24] A. Logunov and E. Malinnikova, Nodal sets of Laplace eigenfunctions: estimates of the Hausdorff measure in dimensions two and three, in 50 years with Hardy spaces, vol. 261 of Oper. Theory Adv. Appl., Birkhäuser/Springer, Cham, 2018, pp. 333–344.
* [25] A. Logunov and E. Malinnikova, Lecture notes on quantitative unique continuation for solutions of second order elliptic equations, arXiv: Analysis of PDEs, (2019).
* [26] D. Mangoubi, Local asymmetry and the inner radius of nodal domains, Comm. Partial Differential Equations, 33 (2008), pp. 1611–1621.
* [27] D. Marinucci, G. Peccati, M. Rossi, and I. Wigman, Non-universality of nodal length distribution for arithmetic random waves, Geom. Funct. Anal., 26 (2016), pp. 926–960.
* [28] F. Nazarov and M. Sodin, Asymptotic laws for the spatial distribution and the number of connected components of zero sets of gaussian random functions, J. Math. Phys. Anal. Geom., 12 (2016), pp. 205–278.
* [29] F. L. Nazarov, Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type, Algebra i Analiz, 5 (1993), pp. 3–66.
* [30] , Summability of large powers of logarithm of classic lacunary series and its simplest consequences, Unpublished, Preprint available at https://users.math.msu.edu/users/fedja/prepr.html, (1995).
* [31] O. Klurman and A. Sartori, On the behavior of nodal lines near the boundary for Laplace eigenfunctions on the square, arXiv preprint: arxiv.org/2104.13038, (2021).
* [32] F. Oravecz, Z. Rudnick, and I. Wigman, The Leray measure of nodal sets for random eigenfunctions on the torus, Ann. Inst. Fourier (Grenoble), 58 (2008), pp. 299–335.
* [33] A. Romaniega and A. Sartori, Nodal set of monochromatic waves satisfying the random wave model, Arxiv preprint: https://arxiv.org/abs/2011.03467, (2020).
* [34] Z. Rudnick and I. Wigman, On the volume of nodal sets for eigenfunctions of the Laplacian on the torus, Ann. Henri Poincaré, 9 (2008), pp. 109–130.
* [35] A. Sartori, On the fractal structure of attainable probability measures, Bull. Pol. Acad. Sci. Math., 66 (2018), pp. 123–133.
* [36] , Planck-scale number of nodal domains for toral eigenfunctions, J. Funct. Anal., 279 (2020), pp. 108663, 22.
* [37] I. Wigman, Fluctuations of the nodal length of random spherical harmonics, Comm. Math. Phys., 298 (2010), pp. 787–831.
* [38] I. Wigman and N. Yesha, Central limit theorem for Planck-scale mass distribution of toral Laplace eigenfunctions, Mathematika, 65 (2019), pp. 643–676.
* [39] S. T. Yau, Survey on partial differential equations in differential geometry, in Seminar on Differential Geometry, vol. 102 of Ann. of Math. Stud., Princeton Univ. Press, Princeton, N.J., 1982, pp. 3–71.
* [40] S. Zelditch, Eigenfunctions of the Laplacian on a Riemannian manifold, vol. 125 of CBMS Regional Conference Series in Mathematics, Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 2017.
# Interactive slice visualization for exploring machine learning models
Catherine B. Hurley, Mark O’Connell, Katarina Domijan
Department of Mathematics and Statistics, Maynooth University
###### Abstract
Machine learning models fit complex algorithms to arbitrarily large datasets.
These algorithms are well-known to be high on performance and low on
interpretability. We use interactive visualization of slices of predictor
space to address the interpretability deficit; in effect opening up the black-
box of machine learning algorithms, for the purpose of interrogating,
explaining, validating and comparing model fits. Slices are specified directly
through interaction, or using various touring algorithms designed to visit
high-occupancy sections, or regions where the model fits have interesting
properties. The methods presented here are implemented in the R package
condvis2.
Keywords: Black-Box Models; Supervised and Unsupervised learning; Model
explanation; XAI; Sectioning; Conditioning
## 1 Introduction
Machine learning models fit complex algorithms to extract predictions from
datasets. Numerical model summaries such as mean squared residuals and feature
importance measures are commonly used for assessing model performance, feature
importance and for comparing various fits. Visualization is a powerful way of
drilling down, going beyond numerical summaries to explore how predictors
impact on the fit, assess goodness of fit and compare multiple fits in
different regions of predictor space, and perhaps ultimately developing
improved fits. Coupled with interaction, visualization becomes an even more
powerful model exploratory tool.
Currently, explainable artificial intelligence (XAI) is a very active research
topic, with the goal of making models understandable to humans. There have
been many efforts to use visualization to understand machine learning fits in
a model-agnostic way. Many of these show how features locally explain a fit
(Ribeiro et al., 2016; Lundberg and Lee, 2017). Staniak and Biecek (2018) give
an overview of R packages for local explanations and present some nice
visualizations. Other visualizations such as partial dependence plots
(Friedman, 2001) show how a predictor affects the fit on average. Drilling
down, more detail is obtained by exploring the effect of a designated
predictor on the fit, conditioning on fixed values of other predictors, for
example using the individual conditional expectation (ICE) curves of Goldstein
et al. (2015). Interactive visualizations are perhaps under-utilized in this
context. Baniecki and Biecek (2020) offer a recent discussion. Britton (2019)
uses small multiple displays of clustered ICE curves in an interactive
framework to visualize interaction effects.
Visualizing data via conditioning or slicing was popularised by the “small
multiples” of Tufte (Tufte, 1986) and the trellis displays of Becker et al.
(1996). Nowadays, the concept is widely known as faceting, courtesy of Wickham
(2016). Wilkinson (2005) (chapter 11) gives a comprehensive description. In
the context of machine learning models, the conditioning concept is used in
ICE plots, which show a family of curves giving the fitted response for one
predictor, fixing other predictors at observed values. These ICE plots
simultaneously show all observations and overlaid fitted curves, one for each
observation in the dataset. Partial dependence plots, which show the average of
the ICE curves, are more popular, but these are known to suffer from bias in the
presence of correlated predictors. A recent paper (Hurley, 2021) gives a
comparison of these and other model visualization techniques based on
conditioning.
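As a concrete sketch of the ICE construction just described (in Python for illustration; the toy model and data are our own, not from the paper or from condvis2):

```python
import random

# ICE curves (Goldstein et al., 2015): for each observation, vary one predictor
# over a grid while holding the other predictors at their observed values.
def model(x0, x1):
    return x0 ** 2 + x0 * x1  # hypothetical fitted surface with an interaction

random.seed(0)
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]
grid = [-1 + k * 0.1 for k in range(21)]  # evaluation grid for predictor x0

# One ICE curve per observation: x0 varies over the grid, x1 stays fixed.
ice = [[model(g, x1) for g in grid] for (_, x1) in data]
# The partial dependence curve is the pointwise average of the ICE curves.
pdp = [sum(col) / len(ice) for col in zip(*ice)]
print(len(ice), len(grid), len(pdp))  # 5 21 21
```

Overlaying the individual curves, rather than only their average, is what lets ICE plots reveal interactions that the partial dependence plot averages away.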
Visualization along with interactivity is a natural and powerful way of
exploring data; so-called brushing (Stuetzle, 1987) is probably the best-known
example. Other data visualization applications have used interaction in
creative ways: for high-dimensional data, ggobi (see for example Cook and
Swayne (2007)) offers various kinds of low-dimensional dynamic projection
tours, while the recent R package loon (Waddell and Oldford, 2020) has a graph-
based interface for moving through series of scatterplots. The interactive
display paradigm has also been applied to exploratory modelling analysis, for
example Urbanek (2002) describes an application for exploratory analysis of
trees. With interactive displays, the data analyst has the ability to sift
through many plots quickly and easily, discovering interesting and perhaps
unexpected patterns.
In this paper, we present model visualization techniques based on slicing
high-dimensional space, where interaction is used to navigate the slices
through the space. The idea of using interactive visualization in this way was
introduced in O’Connell et al. (2017). The basic concept is to fix the values
of all but one or two predictors, and to display the conditional fitted curve
or surface. Observations from a slice close to the fixed predictors are
overlaid on the curve or surface. The resulting visualizations will show how
predictors affect the fitted model and the model goodness of fit, and how this
varies as the slice is navigated through predictor space. We also describe
touring algorithms for exploring predictor space. These algorithms make
conditional visualization a practical and valuable tool for model exploration
as dimensions increase. Our techniques are model agnostic and are appropriate
for any regression or classification problem. The concepts of conditional
visualization are also relevant for “fits” provided by clustering and density
estimation algorithms. Our model visualization techniques are implemented in
our R package condvis2 (Hurley et al., 2020), which provides a highly-
interactive application for model exploration.
The outline of the paper is as follows. In Section 2 we describe the basic
ideas of conditional visualization for model fits, and follow that with our
tour constructions for visiting interesting and relevant slices of data space.
Section 3 focuses on our implementation, and describes the embedding of
conditional model visualizations in an interactive application. In Section 4
we present examples, illustrating how our methods are used to understand
predictor effects, explore lack of fit, and compare multiple fits. We
conclude with a discussion.
## 2 Slice visualization and construction
In this section we describe the construction of slice visualizations for
exploring machine learning models. We begin with notation and terminology.
Then we explain how observations near a slice are identified, and then
visualized using a color gradient. We present new touring algorithms designed
to visit high-occupancy slices and slices where model fits have interesting
properties. In practical applications, these touring algorithms mean our model
exploration techniques are useful for exploring fits with up to 30 predictors.
Consider data $\\{\boldsymbol{x}_{i},y_{i}\\}_{i=1}^{n}$, where
$\boldsymbol{x}_{i}=(x_{i1},...,x_{ip})$ is a vector of predictors and $y_{i}$
is the response. Let $f$ denote a fitted model that maps the predictors
$\boldsymbol{x}$ to fitted responses $f(\boldsymbol{x})$. (In many
applications we will have two or more fits which we wish to compare, but we
use just one here for ease of explanation.) Suppose there are just a few
predictors of primary interest. We call these the _section_ predictors and
index them by $S$. The remaining predictors are called _conditioning_
predictors, indexed by $C$. Corresponding to $S$ and $C$, partition the
feature coordinates $\boldsymbol{x}$ into $\boldsymbol{x}_{S}$ and
$\boldsymbol{x}_{C}$. Similarly, let $\boldsymbol{x}_{iS}$ and
$\boldsymbol{x}_{iC}$ denote the coordinates of observation $i$ for the
predictors in $S$ and $C$ respectively. We are interested in the relationship
between the response $y$, the fit $f$, and $\boldsymbol{x}_{S}$,
conditional on $\boldsymbol{x}_{C}$. For our purposes, a section or slice is
constructed as a region around a single point in the space of $C$, i.e.
$\boldsymbol{x}_{C}=\boldsymbol{u}_{C}$, where $\boldsymbol{u}_{C}$ is called
the _section point_.
### 2.1 Visualizations
Two related visualizations show the fit and the data. The first display is
the so-called _section plot_ which shows how the fit $f$ varies over the
predictors in $\boldsymbol{x}_{S}$. The second display shows plots of the
predictors in $\boldsymbol{x}_{C}$ and the current setting of the section
point $\boldsymbol{u}_{C}$. We call these the _condition selector plots_ , as
the section point $\boldsymbol{u}_{C}$ is under interactive control.
More specifically, the section plot consists of
$f(\boldsymbol{x}_{S},\boldsymbol{x}_{C}=\boldsymbol{u}_{C})$ versus
${\boldsymbol{x}_{S}}$, shown on a grid covering $\boldsymbol{x}_{S}$,
overlaid on a subset of observations $(\boldsymbol{x}_{iS},y_{i})$, where
$\boldsymbol{x}_{iC}$ is near the designated section point
$\boldsymbol{u}_{C}$. For displaying model fits, we use $|S|=1,2$, though
having more variables in $S$ would be possible with faceted displays.
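The construction of a section plot can be sketched in a few lines. The following is an illustrative Python analogue, not the package's R code: `fit` is a stand-in for any model's prediction function (in condvis2 this would be a `CVpredict` call), and the variable names `x_S` and `z` are hypothetical.

```python
# Sketch: the data behind a 1-D section plot. The fit is evaluated over a
# grid of the section predictor x_S, with the conditioning predictors held
# fixed at the section point u_C.

def section_curve(fit, s_grid, u_C):
    """Evaluate fit(x_S, x_C = u_C) over a grid of the section predictor."""
    return [fit({"x_S": s, **u_C}) for s in s_grid]

# Toy fit: the response rises with x_S and shifts with a conditioning value z.
toy_fit = lambda x: 2.0 * x["x_S"] + 0.5 * x["z"]

grid = [0.0, 1.0, 2.0]
curve = section_curve(toy_fit, grid, {"z": 10.0})
print(curve)  # [5.0, 7.0, 9.0]
```

Changing `u_C` and re-evaluating is exactly what happens as the user navigates the section point interactively.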
#### 2.1.1 Similarity scores and color
A key feature of the section plot is that only observations local to the
section point $\boldsymbol{u}_{C}$ are included. To determine these local
observations, we start with a distance measure $d$, and for each observation,
$i=1,2,\ldots,n$, we compute how far it is from the section point
$\boldsymbol{u}_{C}$ as
$\displaystyle d_{i}=d(\boldsymbol{u}_{C},\boldsymbol{x}_{iC}).$ (1)
This distance is converted to a similarity score as
$\displaystyle s_{i}=\max\left(0,1-\dfrac{d_{i}}{\sigma}\right)$ (2)
where $\sigma>0$ is a threshold parameter. Distances exceeding the threshold
$\sigma$ are accorded a similarity score of zero. Points on the section, that
is, identical to the section point $\boldsymbol{u}_{C}$, receive the maximum
similarity of 1. Plotting colors for points are then faded to the background
white color using these similarity scores. Points with a similarity score of
zero become white, that is, are not shown. Non-zero similarities are binned
into equal-width intervals. The colors of observations whose similarity
belongs to the right-most interval are left unchanged. Other observations are
faded to white, with the amount of fade decreasing from the first interval to
the last.
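The similarity score of Equation (2) and the binned fading can be sketched as follows. This is an illustration, not the package's code; the number of bins and the exact fade amounts are assumptions.

```python
# Sketch of Equation (2) and a possible binned fading scheme.

def similarity(d, sigma=1.0):
    """Equation (2): s = max(0, 1 - d/sigma); distances beyond sigma score 0."""
    return max(0.0, 1.0 - d / sigma)

def fade_fraction(s, n_bins=5):
    """Bin a similarity into n_bins equal-width intervals and return the
    fraction of fading toward white (0 = color unchanged, 1 = invisible).
    Scores in the right-most bin are left unfaded."""
    if s <= 0:
        return 1.0  # similarity zero: the point fades entirely to white
    b = min(n_bins - 1, int(s * n_bins))  # bin index 0 .. n_bins-1
    return (n_bins - 1 - b) / n_bins

print(similarity(0.0))  # 1.0: a point exactly on the section
print(similarity(1.5))  # 0.0: beyond the threshold, not shown
```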
#### 2.1.2 Distances for similarity scores
We use two different notions of “distance” in calculating similarity scores.
The first is a Minkowski distance between numeric coordinates (Equation 5).
For two vectors $\boldsymbol{u}$ and $\boldsymbol{v}$, where $C_{num}$ indexes
numeric predictors and its complement $C_{cat}$ indexes the categorical
predictors in the conditioning set $C$,
$\displaystyle d_{M}(\boldsymbol{u},\boldsymbol{v})=\left\\{\begin{array}[]{ll}\left(\sum_{j\in C_{num}}|\boldsymbol{u}_{j}-\boldsymbol{v}_{j}|^{q}\right)^{1/q}&\text{ if }\boldsymbol{u}_{k}=\boldsymbol{v}_{k}\;\forall k\in C_{cat}\\\ \infty&\text{otherwise.}\end{array}\right.$ (5)
In practice we use Euclidean distance given by $q=2$ and the maxnorm distance
which is the limit as $q\rightarrow\infty$ (equivalently
$\max_{j}|\boldsymbol{u}_{j}-\boldsymbol{v}_{j}|$). With the Minkowski
distance, points whose categorical coordinates do not match those of the
section $\boldsymbol{u}_{C}$ exactly will receive a similarity of zero and
will not be visible in the section plots. Using Euclidean distance, visible
observations in the section plot will be in the hypersphere of radius $\sigma$
centered at $\boldsymbol{u}_{C}$. Switching to the maxnorm distance means that
visible observations will be in the hypercube with sides of length $2\sigma$
centered at $\boldsymbol{u}_{C}$.
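Equation (5) can be sketched directly. This Python illustration (an analogue, not the package's R code) treats a categorical mismatch as infinite distance, so such points get similarity zero:

```python
import math

def d_M(u, v, cat_idx, q=2):
    """Minkowski distance of Equation (5): numeric coordinates combine with
    exponent q; any categorical mismatch gives infinite distance.
    q=2 is Euclidean; q=math.inf gives the maxnorm."""
    if any(u[k] != v[k] for k in cat_idx):
        return math.inf
    diffs = [abs(a - b) for j, (a, b) in enumerate(zip(u, v)) if j not in cat_idx]
    if q == math.inf:
        return max(diffs, default=0.0)
    return sum(d ** q for d in diffs) ** (1 / q)

u = (0.0, 0.0, "red"); v = (3.0, 4.0, "red")
print(d_M(u, v, cat_idx={2}))               # 5.0 (Euclidean)
print(d_M(u, v, cat_idx={2}, q=math.inf))   # 4.0 (maxnorm)
print(d_M(u, (3.0, 4.0, "blue"), cat_idx={2}))  # inf: categorical mismatch
```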
If there are many categorical conditioning predictors, requiring an exact
match on categorical predictors could mean that there are no visible
observations. For this situation, we include a Gower distance (Gower, 1971)
given in Equation 6 which combines absolute differences in numeric coordinates
and mismatch counts in categorical coordinates,
$\displaystyle d_{G}(\boldsymbol{u},\boldsymbol{v})=\sum_{k\in
C_{num}}\frac{|u_{k}-v_{k}|}{R_{k}}+\sum_{k\in C_{cat}}1\left[u_{k}\neq
v_{k}\right]$ (6)
where $R_{k}$ is the range of the $k$th predictor in $C_{num}$.
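A minimal sketch of Equation (6), again as an illustrative Python analogue rather than the package's code:

```python
def d_G(u, v, num_idx, cat_idx, ranges):
    """Gower distance of Equation (6): range-scaled absolute differences for
    numeric coordinates plus a mismatch count for categorical ones."""
    num = sum(abs(u[k] - v[k]) / ranges[k] for k in num_idx)
    cat = sum(u[k] != v[k] for k in cat_idx)
    return num + cat

u = (1.0, "a"); v = (3.0, "b")
# Numeric part: |1-3|/4 = 0.5; categorical part: one mismatch = 1.
print(d_G(u, v, num_idx=[0], cat_idx=[1], ranges={0: 4.0}))  # 1.5
```

Unlike the Minkowski distance above, a categorical mismatch here only adds 1 rather than excluding the observation entirely, which is why Gower is preferred when there are many categorical conditioning predictors.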
#### 2.1.3 A toy example
To demonstrate the ideas of the previous subsections, we use an illustration
in the simple setting with just two predictors. Figure 1(a)
(a) 3d-surface | (b) 2d-slice at Solar.R = 250 | (c) Fading point color
Figure 1: Illustration of relationship between distance and color in section
plot, using the three variable ozone data: (a) data and loess fit as a
surface, (b) a section plot showing fit versus Wind conditioning on
Solar.R=250, (c) distance to Solar.R=250 represented by color.
shows a loess surface relating Ozone to Solar.R and Wind in the air quality
data (Chambers et al., 1983). Consider $S$ = Wind and $C$ = Solar.R, and fix
the value of Solar.R as $\boldsymbol{u}_{C}=250$. Figure 1(b) is the resulting
section plot, where we see how the surface varies with Wind, with Solar.R
fixed at 250. Only observations with Solar.R near 250 (here about 250 $\pm$
50) are shown. Observations in this window receive a similarity score related
to their distance from 250, which is used to fade the color by distance.
Observations outside the window receive a similarity score of zero and are not
displayed in Figure 1(b). In Figure 1(c), the similarity scores assigned to
observations using their distance to Solar.R=250 are plotted; non-zero
similarity scores are faded by decreasing similarity.
From the section plot in Figure 1(b) it is apparent that there is just one
observation at Wind $\approx$ 20, so the fit in this region may not be too
reliable. By decreasing the Solar.R value to $\boldsymbol{u}_{C}=150$ and then
to 50 we learn that the dependence of Ozone on Wind also decreases.
### 2.2 Choosing section points
The simplest way of specifying $\boldsymbol{u}_{C}$ is to choose a particular
observation, or to supply a value of each predictor in $C$. As an alternative
to this, we can find areas where the data lives and visualize these. This is
particularly important as the number of predictors increases: the well-known
_curse of dimensionality_ (Bellman, 1961) implies that as the dimension of the
conditioning space increases, conditioning on arbitrary predictor settings
will yield mostly empty sections. Or, we can look for interesting sections
exhibiting features such as lack of fit, curvature or interaction. In the case
of multiple fits, we can chase differences between them.
We describe algorithms for the construction of _tours_ , which for our
purposes are a series of section points
$\\{\boldsymbol{u}^{k}_{C},k=1,2,\ldots,l\\}$. The tours are visualized by
section plots
$f(\boldsymbol{x}_{S}={\boldsymbol{x}_{S}}^{g},\boldsymbol{x}_{C}=\boldsymbol{u}^{k}_{C})$,
showing slices formed around the series of section points. We note that the
tours presented here are quite different to grand tours (Asimov, 1985) and
guided tours (Cook et al., 1995), which are formed as sequences of projection
planes and do not involve slicing.
#### 2.2.1 Tour construction: visiting regions with data
The simplest strategy to find where the data lives is to pick random
observations and use their coordinates for the conditioning predictors as
section points. We call this the randomPath tour. Other touring options
cluster the data using the variables in $C$, and use the cluster centers as
section points. It is important to note that we are not trying to identify
actual clusters in the data, rather to visit the parts of $C$-predictor space
where observations are located. We consider two tours based on clustering
algorithms: (i) kmeansPath which uses centroids of k-means clusters as
sections and (ii) kmedPath which uses medoids of k-medoid clustering,
available from the pam algorithm of package cluster (Maechler et al., 2019).
Recall that medoids are observations in the dataset, so slices around them are
guaranteed to have at least one observation.
Both kmeansPath and kmedPath work for categorical as well as numerical
variables. kmeansPath standardizes numeric variables and one-hot encodes
categorical variables. kmedPath uses a distance matrix based on standardized
Euclidean distances for numeric variables and the Gower distance (Gower, 1971)
for variables of mixed type, as provided by daisy from package cluster. For
our application we are not concerned with optimal clustering or the choice of
the number of clusters; our goal is simply to visit regions where the data live.
To evaluate our tour algorithms, we calculate randomPath, kmeansPath and
kmedPath tours of length $l=30$ on datasets of 2,000 rows and 15 numeric
variables obtained from the Ames (De Cock, 2011) and Decathlon (Unwin, 2015)
datasets. For comparison, we also use simulated independent Normal and Uniform
datasets of the same dimension. The results are summarized in Table 1.
Table 1: Average number of visible observations in ($\sigma$=1) maxnorm slices at 30 section points, and in parentheses their total similarity, selected with randomPath, kmeansPath and kmedPath from the Decathlon and Ames datasets and simulated Normal and Uniform datasets. Our calculations show both clustering algorithms find higher-occupancy slices than randomly selected slices, and slices of real datasets have higher occupancy than those from simulated datasets.

| | random | kmeans | kmed |
|---|---|---|---|
| Decathlon | 8.2 (1.8) | 23.3 (3) | 22.4 (3.7) |
| Ames | 16.0 (4.3) | 33.6 (8.4) | 25.4 (7.6) |
| Normal | 1.0 (1.0) | 1.8 (0.2) | 1.8 (1.1) |
| Uniform | 1.1 (1.0) | 0.9 (0.1) | 1.3 (1.0) |
In general, the number of observations visible in sections from real data far
exceeds that from the simulated datasets, as real data tends to be clumpy. Not
surprisingly, paths based on both the clustering methods k-means and k-medoids
find sections with many more observations than simply picking random
observations.
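The occupancy evaluation behind Table 1 can be sketched as follows. This is a simplified Python illustration of a randomPath-style tour: random observations serve as section points, and we count how many observations fall in the maxnorm slice around each. (The actual evaluation standardizes predictors first; that step is omitted here for brevity.)

```python
import random

def maxnorm(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

def random_path_occupancy(X, n_points=30, sigma=1.0, seed=1):
    """Pick random observations as section points and return the average
    number of observations inside the maxnorm slice of half-width sigma."""
    rng = random.Random(seed)
    sections = rng.sample(X, n_points)
    counts = [sum(maxnorm(u, x) <= sigma for x in X) for u in sections]
    return sum(counts) / len(counts)

# Clumpy toy data: two tight clusters, so random slices are well occupied,
# mimicking the behaviour of real datasets in Table 1.
X = [(0.1 * i, 0.0) for i in range(50)] + \
    [(100.0 + 0.1 * i, 0.0) for i in range(50)]
print(random_path_occupancy(X, n_points=10))
```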
We also investigate in Figure 2 the distribution of the maximum similarity per
observation over the 30 section points for the three path algorithms and four
datasets.

Figure 2: Distribution of the maximum similarity per observation across
($\sigma$=1) maxnorm slices at 30 section points selected with randomPath,
kmeansPath and kmedPath from the Decathlon and Ames datasets and simulated
Normal and Uniform datasets. We see that clustering tours of length 30 visit
25% of observations for the real datasets, though not for the simulated
datasets.

Here, paths based on
clustering algorithms from both real datasets visit over 25% of the
observations, again demonstrating that our algorithms perform much better on
real data than on simulated data.
#### 2.2.2 Tour construction: visiting regions exhibiting lack of fit
Other goals of touring algorithms might be to find regions where the model
fits the data poorly, or where two or more fits give differing results. For
numeric responses, the tour lofPath (for lack of fit) finds observations $i$
whose value of
$\displaystyle\max_{f\in\rm{fits}}|y_{i}-\hat{y}^{f}_{i}|$
is among the $l$ (path length) largest, where $\hat{y}^{f}_{i}$ is the
prediction for observation $i$ from fit $f$. For categorical responses, it
finds observations where the predicted class does not match the observed
class.
Another tour called diffitsPath (for difference of fit) finds observations $i$
whose value of
$\displaystyle\max_{f\neq
f^{{}^{\prime}}\in\rm{fits}}|\hat{y}^{f^{{}^{\prime}}}_{i}-\hat{y}^{f}_{i}|$
is among the $l$ (path length) largest for numeric fits. For fits to
categorical responses, diffitsPath currently finds observations where the fits
disagree most, that is, with the largest number of distinct predicted
categories or the largest differences in prediction probabilities. Other paths
could be constructed to identify sections with a high amount of fit curvature
or the presence of interaction.
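For numeric responses, the two selection criteria above reduce to simple top-$l$ computations. A sketch (an illustrative Python analogue, not the package's R code):

```python
def lof_path(y, preds_by_fit, l=5):
    """lofPath sketch: indices of the l observations whose worst absolute
    residual, maximised over the fits, is largest."""
    worst = [max(abs(y[i] - p[i]) for p in preds_by_fit)
             for i in range(len(y))]
    return sorted(range(len(y)), key=lambda i: -worst[i])[:l]

def diffits_path(preds_by_fit, l=5):
    """diffitsPath sketch: indices of the l observations where the fits'
    predictions are furthest apart."""
    n = len(preds_by_fit[0])
    spread = [max(p[i] for p in preds_by_fit) - min(p[i] for p in preds_by_fit)
              for i in range(n)]
    return sorted(range(n), key=lambda i: -spread[i])[:l]

y  = [1.0, 2.0, 3.0, 4.0]
f1 = [1.1, 2.0, 2.0, 4.0]
f2 = [1.0, 3.5, 3.0, 4.0]   # residual |2.0 - 3.5| = 1.5 at index 1 is worst
print(lof_path(y, [f1, f2], l=1))     # [1]
print(diffits_path([f1, f2], l=1))    # [1]: the fits also disagree most there
```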
There are a few other simple tours that we have found useful in practice:
tours that visit observations with high and low response values and tours that
move along a selected condition variable, keeping other condition variables
fixed.
#### 2.2.3 A smoother tour
For each of the path algorithms, the section points are ordered using a
seriation algorithm to form a short path through the section points
—dendrogram seriation (Earle and Hurley, 2015) is used here. If a smoother
tour is desired, the section points
$\\{\boldsymbol{u}^{k}_{C},k=1,2,\ldots,l\\}$ may be supplemented with
intermediate points formed by interpolation between $\boldsymbol{u}^{k}_{C}$
and $\boldsymbol{u}^{k+1}_{C}$. Interpolation constructs a sequence of evenly
spaced points between each pair of ordered section points. For quantitative
predictors, this means linear interpolation, and for categorical predictors,
we simply transition from one category to the next at the midpoints on the
linear scale.
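The interpolation rule just described can be sketched as follows (a Python illustration under the assumption of three intermediate points; the package's own interpolation may differ in detail):

```python
def interpolate(u, v, n_steps=4):
    """Interpolate between consecutive section points u and v: linear for
    numeric coordinates; categorical coordinates switch from u's category
    to v's at the midpoint of the linear scale."""
    out = []
    for s in range(1, n_steps):
        t = s / n_steps
        point = []
        for a, b in zip(u, v):
            if isinstance(a, (int, float)):
                point.append(a + t * (b - a))   # linear interpolation
            else:
                point.append(a if t < 0.5 else b)  # midpoint category switch
        out.append(tuple(point))
    return out

print(interpolate((0.0, "spring"), (4.0, "fall")))
# [(1.0, 'spring'), (2.0, 'fall'), (3.0, 'fall')]
```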
## 3 An interactive implementation
The model visualizations on sections and associated touring algorithms
described in Section 2 are implemented in our highly-interactive R package
condvis2. In the R environment, there are a number of platforms for building
interactive applications. The most primitive of these is base R with its
function getGraphicsEvent which offers control of mouse and keyboard clicks,
used by our previous package condvis (O’Connell et al., 2016; O’Connell,
2017), but the lack of support for other input mechanisms such as menus and
sliders limits the range of interactivity. Tcltk is another option, which is
used by the package loon. We have chosen to use the Shiny platform (Chang et
al., 2020), which is relatively easy to use, provides a browser-based interface
and supports web sharing.
First we describe the section plot and condition selector plot panel and the
connections between them. These two displays are combined together with
interactive controls into an arrangement that Unwin and Valero-Mora (2018)
refer to as an ensemble layout.
### 3.1 Section plots
As described in Section 2.1, the section plot shows how a fit (or fits) varies
with one or two section predictors, for fixed values of the conditioning
predictors. Observations near the fixed values are displayed on the section
plot. A suitable choice of section plot display depends on the prediction
(numerical, factor, or probability matrix) and predictor type (numerical or
factor). Figure 3
| | n/f | p |
|---|---|---|
| n/f | (a) | (b) |
| (n/f,f) | (c) | (d) |
| (n,n) | (e) | (f) |
Figure 3: Types of section plots. Column and row labels represent the
prediction and section variable type; n/f: numerical or factor, p=probability
of factor level. For n/f types, the factor is treated as numeric. Plots in the
first row have the n/f section variable on the x axis, y axis has prediction
of type n/f in (a), probability in (b). In the second row, there are two
section variables, one n/f and the other f, (c) is for a n/f prediction, (d)
for multi-class predictions. In the third row there are two numeric section
variables on the axes, (e) is for a n/f prediction shown in color, (f) for
multi-class predictions shown as bars.
shows different section plots. For two numeric section variables, we also use
perspective displays. When section predictors are factors these are converted
to numeric, so the displays are similar to those shown in the rows and columns
labelled n/f. When the prediction is the probability of a factor level, the
display uses a curve for one of the two levels as in Figure 3(b) and barplot
arrays otherwise, see Figure 3(d) and (f). Here the bars show the predicted
class probabilities, for levels of a categorical section variable, and bins of
a numeric section variable. For section plot displays such as Figure 3(e)
where fit $f$ is shown as an image, faded points would be hard to see, so
instead we shrink the overlaid observations in proportion to the similarity
score. We do not add a layer of observations to the barplot arrays in Figure
3(d) and (f), as this would likely overload the plots.
### 3.2 Condition selector plots
The condition selector plots display predictors in the conditioning set $C$.
Predictors are plotted singly or in pairs using scatterplots, histograms,
boxplots or barplots as appropriate. They show the distributions of
conditioning predictors and also serve as an input vehicle for new settings of
these predictors. We use the strategy presented in O’Connell et al. (2017) for
ordering conditioning predictors to avoid unwitting extrapolation. A pink
cross overlaid on the condition selector plots shows the current settings of
the section point $\boldsymbol{u}_{C}$. See the panel on the right of Figure
4.
Alternatively, predictors may be plotted using a parallel coordinates display.
It is more natural in this setting to restrict conditioning values to
observations. In this case, the current settings of the section point
$\boldsymbol{u}_{C}$ are shown as a highlighted observation. In principle a
scatterplot matrix could be used, but we do not provide for this option as it
uses too much screen real estate.
### 3.3 The condvis2 layout
We introduce a dataset here which we will visit again in Section 4.1. The bike
sharing dataset (Fanaee-T and Gama, 2013) available from the UCI machine
learning repository has a response which is the count of rental bikes
(nrentals) and the goal is to relate this to weather and seasonal information,
through features which are season, hol (holiday or not), wday (working day or
not), yr (year 2011 or 2012), weather (good, misty, bad), temp (degrees
Celsius), hum (relative humidity in percent) and wind (speed in km per hour).
We build a random forest (Breiman, 2001) fit relating nrentals to the other features
for all 750 observations. Setting up an interactive model exploration requires
a call to the function condvis specifying the data, fit, response, and one or
two section variables (here temp). Other dataset variables become the
condition variables. The resulting ensemble graphic (see Figure 4)
Figure 4: Condvis2 screenshot for a random forest fit to the bike rentals
data. The nrentals versus temp display is a section plot showing the fit. The
panel on the right shows the condition variables, with the current setting
marked with a pink cross. Menus at the top are for selecting section
variables and point colors. There is a slider for controlling the similarity
threshold, and radio buttons for the distance measure. The bottom left panel
is for tour controls.
has a section plot of nrentals versus temp with superimposed random forest fit
on the left, the panel on the right has the condition selector plots and the
remaining items on the display are interactive controls.
The pink crosses on the condition selector plots show the current setting of
the conditioning predictors $\boldsymbol{u}_{C}$. If the initial value of the
conditioning predictors is not specified in the call to condvis, this is set
to the medoid of all predictors, calculated using standardized Euclidean
distance, or Gower for predictors of mixed type. Here $\boldsymbol{u}_{C}$
values are also listed underneath the condition selector plots. The distance
measure used defaults to maxnorm, so the observations appearing on the section
plot all have season=sum, weather=good, wday=y, hol=n, yr=2011, and have wind
and hum values within one (the default value of $\sigma$ in Equation 2)
standard deviation of hum=58.0, wind=11.8. The point colors are faded as the
maxnorm distance from (hum=58.0, wind=11.8) increases. These observations also
appear with a black outline on the (hum, wind) condition selector plot.
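The medoid used for the default initial section point is simply the observation minimising the total distance to all others. A sketch (plain Euclidean distance here; the package standardizes variables first, and uses Gower for mixed types):

```python
def medoid(X, dist):
    """The observation minimising the sum of distances to all observations,
    used (per the text) as the default initial section point."""
    sums = [sum(dist(x, y) for y in X) for x in X]
    return X[sums.index(min(sums))]

euclid = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

X = [(0.0, 0.0), (1.0, 0.0), (0.9, 0.1), (5.0, 5.0)]
print(medoid(X, euclid))  # (0.9, 0.1): central among the three clustered points
```

Unlike a mean, the medoid is guaranteed to be an actual observation, so the initial slice around it always contains at least one data point.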
### 3.4 Interaction with condvis2
The choice of the section point $\boldsymbol{u}_{C}$ is under interactive
control. The most direct way of selecting $\boldsymbol{u}_{C}$ is by
interacting with the condition selector plots. For example, clicking on the
(hum, wind) plot in Figure 4 at location (hum=90, wind=10) moves the
coordinates of $\boldsymbol{u}_{C}$ for these two variables to the new
location, while the values for other predictors in $C$ are left unchanged.
Immediately the section plot of nrentals versus temp shows the random forest
fit at the newly specified location, but now there is only one observation
barely visible in the section plot, telling us that the current combination of
the conditioning predictors is in a near-empty slice. Double-clicking on the
(hum, wind) plot sets the section point to the closest observation on this
plot. If there is more than one such observation, then the section point
becomes the medoid of these closest observations. It is also possible to click
on an observation in the section plot, and this has the effect of moving the
section point $\boldsymbol{u}_{C}$ to the coordinates of the selected
observation for the conditioning predictors.
The light grey panel on the lower left has the tour options (described in
Section 2.2) which offer another way of navigating slices of predictor space.
The “Choose tour” menu offers a choice of tour algorithm, and “Tour length”
controls the length of the computed path. The “Tour Step” slider controls the
position along the current path; by clicking the arrow on the right the tour
progresses automatically through the tour section points. An interpolation
option is available for smoothly changing paths.
Clicking on the similarity threshold slider increases or decreases the value
of $\sigma$, including more or fewer observations in the nrentals versus temp
plot. The distance used for calculating similarities may be changed from
maxnorm to Euclidean or Gower (see Equations 5 and 6) via the radio buttons.
When the threshold slider is moved to the right-most position, all
observations are included in the section plot display.
One or two section variables may be selected from the “Choose a sectionvar”
and “Second sectionvar” menus. If the second section variable is hum, say,
this variable is removed from the condition selector plots. With two numeric
section variables, the section plot appears as an image as in Figure 3(e).
Another checkbox “Show 3d surface” appears, and clicking this shows how the
fit relates to (temp, hum) as a rotatable 3d plot. Furthermore, a variable
used to color observations may be chosen from the “Choose a colorvar” menu.
Clicking the “One plot” checkbox on the lower right changes the condition
selector plots to a single parallel coordinate plot. Deselecting the “Show
sim” box causes the black outline on the observations in the current slice to
be removed, which is a useful option if the dataset is large and display speed
is an issue. Clicking on the “Return conditions” button causes the app to
exit, returning all section points visited as a data frame.
### 3.5 Which fits?
Visualizations in condvis2 are constructed in a model-agnostic way. In
principle all that is required is that a fit produces predictions. Readers
familiar with R will know that algorithms from random forest to logistic
regression to support vector machines all have some form of predict method,
but they have different arguments and interfaces.
We have solved this by writing a predict wrapper called CVpredict (for condvis
predict) that operates in a consistent way for a wide range of fits. We
provide over 30 CVpredict methods, for fits ranging from neural nets to trees
to bartMachine. And it should be relatively straightforward for others to
write their own CVpredict method, using the template we provide.
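In R, CVpredict is implemented as an S3 generic with a method per fit class. The same idea can be sketched in Python with `functools.singledispatch`; the fit classes below are hypothetical stand-ins, not real modelling libraries:

```python
# Sketch of a CVpredict-style wrapper: one generic prediction entry point
# that dispatches on the class of the fitted model.
from functools import singledispatch

class LinearFit:
    def __init__(self, coef): self.coef = coef

class StumpFit:  # a one-split decision stump
    def __init__(self, split, left, right):
        self.split, self.left, self.right = split, left, right

@singledispatch
def cv_predict(fit, x):
    raise NotImplementedError(f"no cv_predict method for {type(fit).__name__}")

@cv_predict.register
def _(fit: LinearFit, x):
    return sum(c * xi for c, xi in zip(fit.coef, x))

@cv_predict.register
def _(fit: StumpFit, x):
    return fit.left if x[0] < fit.split else fit.right

print(cv_predict(LinearFit([2.0, 1.0]), (3.0, 4.0)))    # 10.0
print(cv_predict(StumpFit(0.5, "low", "high"), (0.7,))) # high
```

Adding support for a new model type then amounts to registering one more method, which mirrors the template-based extension mechanism described above.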
Others have tackled the problem of providing a standard interface to the model
fitting and prediction tasks. The parsnip package (Kuhn and Vaughan, 2021),
part of the tidyverse, streamlines the process and currently
includes drivers for about 40 supervised learners including those offered by
spark and stan. The packages caret (Kuhn, 2019), mlr (Bischl et al., 2016),
and its most recent incarnation mlr3 (Lang et al., 2019), interface with
hundreds of learners and also support parameter tuning. As part of condvis2,
we have written CVpredict methods for the model fit classes from parsnip, mlr,
mlr3 and caret. Therefore our visualizations are accessible from fits produced
by most of R’s machine learning algorithms.
### 3.6 Dataset size
Visualization of large datasets is challenging, particularly so in interactive
settings where a user expects near-instant response. We have used our
application in settings with $n=100,000$ and $p=30$ and the computational
burden is manageable.
For section displays, the number of points displayed is controlled by the
similarity threshold $\sigma$ and is usually far below the dataset size $n$.
For reasons of efficiency, condition selector displays by default show at most
1,000 observations, randomly selected in the case where $n>1000$. Calculation
of the medoid for the initial section point and the kmedPath requires
calculation of a distance matrix which has complexity $O(n^{2}p)$. For
interactive use speed is more important than accuracy so we base these
calculations on a maximum of 4,000 rows by default.
The conditioning displays show $\lceil p/2\rceil$ panels of one or two
predictors, or one parallel coordinate display. Up to $p=30$ predictors fit on
screen using the parallel coordinate display, and perhaps 10-15 otherwise. Of
course many data sets have much larger feature sets. In this situation, we
recommend selecting a subset of features which are _important_ for prediction,
to be used as the section and conditioning predictors $S$ and $C$. The
remaining set of predictors, say $F$, are hidden from view in the condition
selector plots and are fixed at some initial value which does not change
throughout the slice exploration.
Note that though the predictors $F$ are ignored in the calculation of
distances in Equations 5 and 6 and thus in the similarity scores of Equation
2, the initial values of these predictors
$\boldsymbol{x}_{F}=\boldsymbol{u}_{F}$ are used throughout in constructing
predictions; thus the section plot shows
$f(\boldsymbol{x}_{S}={\boldsymbol{x}_{S}}^{g},\boldsymbol{x}_{C}=\boldsymbol{u}_{C},\boldsymbol{x}_{F}=\boldsymbol{u}_{F})$.
If the set of important predictors is not carefully selected, the fit
displayed will not be representative of the fit for all observations visible
in the section plot.
In the situation where some predictors designated as unimportant are relegated
to $F$ thus not appearing in the condvis display, the settings for predictors
in $F$ remain at their initial values throughout all tours. This means that
section points for the tours based on selected observations (randomPath,
kmedPath, lofPath and diffitsPath) will not in fact correspond exactly to
dataset observations. An alternative strategy would be to let the settings for
the predictors in $F$ vary, but then there is a danger of being “lost in
space”.
## 4 Applications
In our first example, we compare a linear fit with a random forest for a
regression problem. Interactive exploration leads us to discard the linear fit
as not capturing feature effects in the data, but patterns in the random
forest fit suggest a particular generalized additive model that overall fits
the data well.
Our second example concerns a classification problem where we compare random
forest and tree fits. We learn that both fits have generally similar
classification surfaces. In some boundary regions the random forest overfits
the training data, avoiding the misclassifications which occur for the tree
fit.
Finally, we review briefly how interactive slice visualization techniques can
be used in unsupervised learning problems, namely to explore density functions
and estimates, and clustering results. Furthermore, we demonstrate that
interactive slice visualization is insightful even in situations where there
is no fit curve or surface to be plotted.
### 4.1 Regression: Bike sharing data
Here we investigate predictor effects and goodness of fit for models fit to
the bike sharing dataset, introduced in Section 3.3. To start with, we divide
the data into training and testing sets using a 60/40 split. For the training
data, we fit a linear model with no interaction terms, and a random forest
which halves the RMSE by comparison with the linear fit. Comparing the two
fits we see that the more flexible fit is much better supported by the data,
see for example Figure 5.
Figure 5: The random forest and linear model fit for the bike rentals training
data. The section variables are temp and year. The linear model fits poorly.
The random forest has a decreasing trend for both years for temperatures above
15C, which is supported by nearby observations.
In the fall, bike rentals are affected negatively by temperature according to
the observed data. The linear fit does not pick up this trend, and even the
random forest seems to underestimate the effect of temperature. Year is an
important predictor: people used the bikes more in 2012 than in 2011. At the
current setting of the condition variables, there is no data below a
temperature of 15C, so we would not trust the predictions in this region.
Focusing on the random forest only, we explore the combined effect on nrentals
of the two predictors temperature and humidity (Figure 6).
(a) Spring 2011 | (b) Spring 2012 | (c) Fall 2012
Figure 6: Section plots with predictors temperature and humidity of random
forest fit to bike training data. Image color shows the predicted number of
rentals. Conditioning variables other than year and season are set to good
weather/weekend/no holiday. Comparing the three plots, we see that the joint
effect of humidity and temperature changes through time; that is, a three-way
interaction.
The three plots have different settings of the time condition variables,
selected interactively; the other conditioning variables were set to good
weather, weekend and no holiday. In spring 2011, temperature is the main
driver of bike rentals, and humidity has negligible impact. In spring 2012
the number of bike
rentals is higher than the previous year, especially at higher temperatures.
In fall 2012, bike rentals are higher than in spring, and high humidity
reduces bike rentals. With further interactive exploration, we see that this
three-way interaction effect is consistent at other levels of weather, weekend
and holiday.
In the absence of an interactive exploratory tool such as ours, one might
summarize the joint effect of temperature and humidity through a partial
dependence plot (Figure 7).
Figure 7: Partial dependence plot for random forest fit to bike training data,
showing effect of temperature and humidity on the predicted number of rentals.
The plot shows an interaction effect: prediction is higher for temperature
above 12C, but drops off for humidity above 80.
The plot combines the main effect of the features and their interaction
effect, and shows that people cycle more when temperature is above 12C, and
this effect depends on humidity. The partial dependence plot is a summary of
plots such as those in Figure 6, averaging over all observations in the
training data for the conditioning variables, and so it cannot uncover a
three-way interaction. A further issue is that the partial dependence curve or
surface is averaging over fits which are extrapolations, leading to
conclusions which may not be reliable.
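As a sketch of what a partial dependence surface computes, the following averages model predictions over all training rows while the two section features are forced to each grid value (Friedman, 2001). The `predict` callable, column indices and grids are placeholders, not the interface of any particular package; because every other predictor is averaged over, a three-way interaction involving a conditioning variable is necessarily washed out.

```python
import numpy as np

def partial_dependence_2d(predict, X, s_cols, grid1, grid2):
    """Average predictions over all training rows, varying only the two
    section features s_cols over a grid of values."""
    pd_surface = np.empty((len(grid1), len(grid2)))
    for i, v1 in enumerate(grid1):
        for j, v2 in enumerate(grid2):
            Xg = X.copy()
            Xg[:, s_cols[0]] = v1   # overwrite the section features...
            Xg[:, s_cols[1]] = v2   # ...keeping all other columns as observed
            pd_surface[i, j] = predict(Xg).mean()  # average over observations
    return pd_surface
```

Note that forcing every row to each grid value is also how the averaging ends up evaluating the fit at extrapolated points, the reliability issue raised above.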
Based on the information we have gleaned from our interactive exploration, an
alternative parametric fit to the random forest is suggested. We build a
generalized additive model (gam), with a smooth joint term for temperature,
humidity, an interaction between temperature and season, a smooth term for
wind, and a linear term for the remaining predictors. A gam fit is parametric
and will be easier to understand and explain than a random forest, and has the
additional advantage of providing confidence intervals, which may be added to
the condvis2 display. Though the training RMSE for the random forest is
considerably lower than that for the gam, on the test data the gam is a clear
winner, see Table 2.
Table 2: Training and test RMSE for the random forest and gam fits to the bike data. The gam has better test set performance than the random forest.
 | train | test
---|---|---
RF | 438.1 | 748.1
GAM | 573.1 | 670.2
For a deep-dive comparison of the two fits, we use the tours of Section 2.2 to
move through various slices, here using the combined training and testing
datasets.
Figure 8: Tours of the bike data, nrentals versus temperature. Random forest
fit in blue and gam in red; train and test observations in light blue and pink
respectively. K-medoid tour in the first row, lack of fit tour in the second;
stars in rows 3 and 4 specify the corresponding slices visited. The k-medoid
tour shows the gam fits better; the lack of fit tour stars show that lack of
fit occurs in 2012.
Figure 8 shows a k-medoid tour in the first row and a lack of fit tour in the
second row,
with temp as the section variable and the remaining features forming the
condition variables. (Here for purposes of illustration both tours are
constructed to be of length 5). The last two rows of Figure 8 show the
condition variable settings for each of the ten tour points as stars, where a
long (short) radial line-segment indicates a high (low) value for a condition
variable. To the naked eye the gam fit looks to give better results for most
of the locations visited by the k-medoid tour. Switching to the lack of fit
tour, we see that the poorly-fit observation in each of the second row panels
in Figure 8 has a large residual for both the random forest and the gam fits.
Furthermore, the poorly-fit observations identified were all recorded in 2012,
as is evident from the stars in the last row.
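One way such a lack-of-fit tour could be constructed is sketched below: rank observations by absolute residual and visit the condition-variable settings of the worst-fit cases. This is an illustrative reconstruction under that assumption, not necessarily how condvis2's lofPath is implemented.

```python
import numpy as np

def lof_path(residuals, X_cond, length=5):
    """Order candidate section points by absolute residual and return the
    condition-variable settings of the worst-fit cases as tour stops."""
    worst = np.argsort(-np.abs(residuals))[:length]  # largest |residual| first
    return X_cond[worst]
```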
### 4.2 Classification: Glaucoma data
Glaucoma is an eye disease caused by damage to the optic nerve, which can lead
to blindness if left untreated. In Kim et al. (2017), the authors explored
various machine learning fits relating the occurrence of glaucoma to age and
various other features measured on the eye. The provided dataset comes pre-
split into a training set of size 399 and a test set of size 100. Here we
focus on a random forest and a C5.0 classification tree (Salzberg, 1994) fit
to the training data. The random forest classified all training observations
perfectly, mis-classifying just two test set observations, whereas the tree
misclassified 20 and 6 cases for the training and test data respectively. In a
clinical setting however, as the authors in Kim et al. (2017) pointed out, the
results from a classification tree are easier to understand and implement.
We will use interactive explorations to reduce the interpretability deficit
for the random forest, and to check if the simpler tree provides an adequate
fit by comparison with the random forest, despite its inferior test set
performance.
Figure 9: Section plots of random forest and tree fits for the glaucoma
training data. Cases drawn in purple have glaucoma. In the region of these
section plots with nearby observations, the fitted surfaces are the same.
Figure 9 shows the training data with both classifiers. Here the section
variables are
PSD and RNFL.mean (the two most important features according to random forest
importance), and conditioning variables are set to values from the first case,
who is glaucoma free. Both classifiers give similar results for this
condition, ignoring section plot regions with no data nearby. Points whose
color in the section plots disagrees with the background color of the
classification surface are not necessarily mis-classified; they are just near
(according to the similarity score) a region whose classification differs.
Reducing the similarity threshold $\sigma$ to zero would show points whose
values on the conditioning predictors are identical to those of the first
case, here just the first case itself, which is correctly classified by both
classifiers. Clicking around on the condition selector plots and moving
through the random, k-means and k-medoid tour paths shows that both
classifiers give similar classification surfaces for section predictors PSD
and RNFL.mean, in areas where observations live.
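A plausible sketch of the visibility rule just described: each observation receives a similarity score that is 1 when its conditioning values equal the section point and decays to 0 beyond distance $\sigma$. The linear decay and maxnorm distance here are assumptions for illustration; the exact score in Equation 2 may differ.

```python
import numpy as np

def similarity(X_cond, u, sigma):
    """Similarity of each observation's condition-variable values to the
    section point u: 1 at distance 0, falling linearly to 0 at distance
    sigma, using the maxnorm distance."""
    d = np.abs(X_cond - u).max(axis=1)               # maxnorm distance to u
    return np.clip(1.0 - d / max(sigma, 1e-12), 0.0, 1.0)
```

Setting `sigma` to zero leaves nonzero similarity only for observations whose conditioning values match the section point exactly, matching the behavior described above.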
We use the lack of fit tour to explore where the C5 tree gives incorrect
predictions.
(a) C5 false negative | (b) C5 false positive
Figure 10: Glaucoma training data, random forest and tree fits; the surface
shows the probability of glaucoma. Cases drawn in purple have glaucoma. Panels
show cases wrongly classified by the tree: a false negative in (a) and a false
positive in (b).
In Figure 10 the section plots show the probability of glaucoma, on a green
(for no glaucoma)
to purple (for glaucoma) scale. Here the similarity threshold $\sigma$ is set
to zero, so only the mis-classified observations are visible. In the left hand
side panel figure 10(a) the C5 tree fit shows a false negative, which is quite
close to the decision boundary. Though the random forest fit correctly
classifies the observation, it does not do so with high probability. Figure
10(b) shows a situation where the tree gives a false positive, which is well-
removed from the decision boundary. The random forest correctly predicts this
observation as a negative, but the fitted surface is rough. Generally training
mis-classifications from the tree fit occur for PSD $\approx$ 2.5 and
RNFL.mean $\approx$ 90, where the random forest probability surface is jumpy.
So glaucoma prediction in this region is difficult based on this training
dataset.
### 4.3 Other application areas
Typical ways to display clustering results include assigning colors to
observations reflecting cluster membership, and visualizing the colored
observations in a scatterplot matrix, parallel coordinate plot or in a plot of
the first two principal components. Some clustering algorithms such as k-means
and model-based clustering algorithms offer predictions for arbitrary points.
The results of such algorithms can be visualized with our methodology. Section
plots show the cluster assignment for various slices in the conditioning
predictors. As in the classification example, we can compare clustering
results, and check the cluster boundaries where there is likely to be
uncertainty in the cluster assignment. Suitable tours in this setting visit
the centroid or medoid of the data clusters. See the vignette
https://cran.r-project.org/web/packages/condvis2/vignettes/mclust.html for an
example.
One can also think of density estimation algorithms as providing a “fit”. For
such fits, the CVpredict function gives the density value, which is
renormalized over the section plot to integrate to 1. In this way, section
plots show the density conditional on the settings of the conditioning
variables.
With our condvis visualizations, we can compare two or more density functions
or estimates by their conditional densities for one or two section variables,
assessing goodness of fit, and features such as number of modes and
smoothness. See the vignette
https://cran.r-project.org/web/packages/condvis2/vignettes/mclust.html for an
example.
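The renormalization step can be sketched as follows, assuming a joint density callable `density(x, u_cond)` (a placeholder name, not CVpredict's signature): evaluate the density along a grid of the section variable with the conditioning variables fixed, then divide by the numerical integral so the displayed curve is a proper conditional density.

```python
import numpy as np

def conditional_density(density, grid, u_cond):
    """Evaluate a joint density along a grid in one section variable with
    conditioning variables fixed at u_cond, then renormalize so the curve
    integrates to 1 over the grid (i.e. a conditional density)."""
    vals = np.array([density(x, u_cond) for x in grid])
    # trapezoidal integral over the slice
    area = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))
    return vals / area
```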
The ideas of conditional visualization may also be applied to situations where
there is no fit function to be plotted. In this case, the section plot shows
the observations determined to be near the designated section point, plotted
on the section variables and colored by similarity score. This is a situation where
we provide section plots with $|S|>2$. One application of this is to compare
predictions or residuals for an ensemble of model fits. For the bike example
of Section 4.1, consider the dataset augmented with predictions from the gam
and random forest fits.
Figure 11: Parallel coordinate plot showing the response and predictions from
the gam and random forest for summer, weekday, good weather days in 2012 from
the bike test data. For these condition variables, the random forest
underestimates nrentals by comparison with the gam.
Figure 11 shows a parallel coordinate plot of three section variables,
$y=$nrentals,
$\hat{y}_{gam}$ and $\hat{y}_{rf}$ with the similarity threshold set so that
the plot shows all summer, weekday, good weather days in 2012. The gam
predictions are similar to the response, as indicated by the mostly parallel
line segments in the first panel, but the random forest underestimates
observed number of bike rentals. This pattern does not hold for 2011 data.
## 5 Discussion
We have described a new, highly interactive application for deep-dive
exploration of supervised and unsupervised learning model fits. This casts
light on the black-box of machine learning algorithms, going far beyond simple
numerical summaries such as mean squared error, accuracy and predictor
importance measures. With interaction, the analyst can interrogate predictor
effects and pickup higher-order interactions in a way not possible with
partial dependence and ICE plots, explore goodness of fit to training or test
datasets, and compare multiple fits. Our new methodology will help machine
learning practitioners, educators and students seeking to interpret, understand
and explain model results. The application is currently useful for moderate
sized datasets, up to 100,000 cases and 30 predictors in our experience.
Beyond that, we recommend using case and predictor subsets to avoid lags in
response time which make interactive use intolerable.
A previous paper (O’Connell et al., 2017) described an early version of this
project. Since then, in condvis2 we have developed the project much further,
and moved the implementation to a Shiny platform which supports a far superior
level of interactivity. The choices of section plots and distance measures
have been expanded. As an alternative to direct navigation through conditioning
space, we provide various algorithms for constructing tours, designed to visit
non-empty slices (randomPath, kmeansPath and kmedPath) or slices showing lack
of fit (lofPath) or fit disparities (diffitsPath). We now offer an interface
to a wide and extensible range of machine learning fits, through CVpredict
methods, including clustering algorithms and density fits. By providing an
interface to the popular caret, parsnip, mlr and mlr3 model-building platforms
our new interactive visualizations are widely accessible.
We recommend using variable importance measures to choose relevant section
predictors, as in the case study of Section 4.2. For pairs of variables,
feature interaction measures such as the H-statistic (Friedman and Popescu,
2008) and its visualization available in vivid (Inglis et al., 2020) could be
used to identify interesting pairs of section variables for interactive
exploration. New section touring methods could be developed to uncover other
plot patterns, but this needs to be done in a computationally efficient way.
As mentioned previously, the tours presented here are quite different to grand
tours, as it is the slice that changes, not the projection. In a recent paper
(Laa et al., 2020), following on ideas from Furnas and Buja (1994), grand
tours are combined with slicing, where slices are formed in the space
orthogonal to the current projection, but these techniques are not as yet
designed for the model fit setting.
There are some limitations in the specification of the section points through
interaction with the condition selector plots, beyond the fact that large
numbers of predictors will not fit in the space allocated to these plots (see
Section 3.6). If a factor has a large number of levels, then space becomes an
issue. One possibility is to display only the most frequent categories in the
condition selector plots, gathering other categories into an “other” category,
which of course is not selectable. Also, we have not as yet addressed the
situation where predictors are nested.
Currently we offer a choice of three distance measures (Euclidean, maxnorm and
Gower) driving the similarity weights used in section plot displays. Distances
are calculated over predictors in $C$, other than the hidden predictors $F$.
Predictors are scaled to unit standard deviation before distance is calculated
which may not be appropriate for highly skewed predictors, where a robust
scaling is likely more suitable. We could also consider an option to
interactively exclude some predictors from the distance calculation.
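The three distance options can be sketched as below for numeric predictors, with unit-standard-deviation scaling as described. Gower's coefficient also handles categorical predictors; only its numeric, range-scaled part is shown here, and the function name is illustrative.

```python
import numpy as np

def condition_distance(X, u, kind="euclidean"):
    """Distance from each row of the numeric predictor matrix X to the
    section point u.  Euclidean and maxnorm scale columns to unit standard
    deviation; the Gower distance (numeric special case) is the mean
    absolute range-scaled difference."""
    if kind == "gower":
        rng = X.max(axis=0) - X.min(axis=0)     # Gower scales by range
        return np.abs((X - u) / rng).mean(axis=1)
    Z = (X - u) / X.std(axis=0)                 # unit-sd scaling
    if kind == "maxnorm":
        return np.abs(Z).max(axis=1)
    return np.sqrt((Z ** 2).sum(axis=1))        # Euclidean
```

A robust variant for skewed predictors would simply swap `X.std(axis=0)` for a robust scale estimate such as the median absolute deviation.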
Other approaches could also be investigated for our section plot displays.
Currently, the section plot shows the fit
$f(\boldsymbol{x}_{S}={\boldsymbol{x}_{S}}^{g},\boldsymbol{x}_{C}=\boldsymbol{u}_{C})$
versus ${\boldsymbol{x}_{S}}^{g}$, overlaid on a subset of observations
$(\boldsymbol{x}_{iS},y_{i})$, where $\boldsymbol{x}_{iC}$ belongs to the
section around $\boldsymbol{u}_{C}$ (assuming $F=\emptyset$). An alternative
might be to display the average fit for observations in the section, that is
$\mathrm{ave}_{\boldsymbol{x}_{iC}\in\,\mathrm{sect}(\boldsymbol{u}_{C})}\{f(\boldsymbol{x}_{S}={\boldsymbol{x}_{S}}^{g},\boldsymbol{x}_{C}=\boldsymbol{x}_{iC})\},$
or, form a weighted average using the similarity weights. Such a version of a
section plot is analogous to a local version of a partial dependence plot.
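The similarity-weighted version of this average can be sketched as below, assuming a generic `predict` callable whose first column is the section variable; all names here are placeholders for illustration.

```python
import numpy as np

def weighted_section_curve(predict, grid, X_cond_obs, weights):
    """For each grid value of the section variable, average the fit over
    the observed condition-variable settings in the section, weighted by
    their similarity scores -- a local partial dependence curve."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    curve = np.empty(len(grid))
    for i, g in enumerate(grid):
        # evaluate f at (x_S = g, x_C = x_iC) for every observation i
        Xg = np.column_stack([np.full(len(X_cond_obs), g), X_cond_obs])
        curve[i] = float(w @ predict(Xg))
    return curve
```

With uniform weights over all training observations this reduces to an ordinary partial dependence curve; with similarity weights it stays local to the section.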
We note that the popular lime algorithm of Ribeiro et al. (2016) also uses the
concept of conditioning to derive explanations for fits from machine learning
models. In their setup, all predictors are designated as conditioning
predictors, so $S=\emptyset$. Lime explanations use a local ridge regression
to approximate $f$ at $\boldsymbol{x}_{C}=\boldsymbol{u}$ using nearby sampled
data, and the result is visualized in a barplot-type display of the local
predictor contributions. For the purposes of the local approximation, the
sampled data is weighted by a similarity score. This contrasts with the
approach presented here, where the similarity scores of Equation 2 are purely
for visualization purposes. In Hurley (2021), we discussed how lime
explanations could be generalized to the setting with one or two designated
section variables, and this could usefully be embedded in an interactive
application like ours.
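A lime-style local surrogate can be sketched with numpy alone: sample around the query point, weight samples by a Gaussian similarity kernel, and fit a weighted ridge regression to the black-box predictions. The function and parameter names are illustrative, not the lime package's API, and the kernel form is an assumption.

```python
import numpy as np

def lime_explanation(predict, u, scale, n=500, alpha=1.0, seed=0):
    """Lime-style local surrogate (illustrative sketch): sample near u,
    weight samples by a Gaussian similarity kernel, and fit a weighted
    ridge regression to the black-box predictions.  Returns one local
    coefficient per predictor."""
    rng = np.random.default_rng(seed)
    Z = u + rng.normal(scale=scale, size=(n, len(u)))        # perturbed samples
    y = predict(Z)
    w = np.exp(-(((Z - u) / scale) ** 2).sum(axis=1) / 2.0)  # similarity weights
    Xc = Z - u                                               # center at query point
    A = Xc.T @ (w[:, None] * Xc) + alpha * np.eye(len(u))
    b = Xc.T @ (w * (y - np.average(y, weights=w)))
    return np.linalg.solve(A, b)                             # local coefficients
```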
## References
* Asimov (1985) Asimov, D., 1985. The grand tour: a tool for viewing multidimensional data. Siam Journal on Scientific and Statistical Computing 6, 128–143.
* Baniecki and Biecek (2020) Baniecki, H., Biecek, P., 2020\. The grammar of interactive explanatory model analysis. CoRR abs/2005.00497. URL: https://arxiv.org/abs/2005.00497, arXiv:2005.00497.
* Becker et al. (1996) Becker, R.A., Cleveland, W.S., Shyu, M.J., 1996. The visual design and control of trellis display. Journal of Computational and Graphical Statistics 5, 123–155. URL: https://amstat.tandfonline.com/doi/abs/10.1080/10618600.1996.10474701, doi:10.1080/10618600.1996.10474701, arXiv:https://amstat.tandfonline.com/doi/pdf/10.1080/10618600.1996.10474701.
* Bellman (1961) Bellman, R., 1961. Adaptive Control Processes: A Guided Tour. Rand Corporation Research Studies, Princeton University Press. URL: https://books.google.ie/books?id=POAmAAAAMAAJ.
* Bischl et al. (2016) Bischl, B., Lang, M., Kotthoff, L., Schiffner, J., Richter, J., Studerus, E., Casalicchio, G., Jones, Z.M., 2016\. mlr: Machine learning in r. Journal of Machine Learning Research 17, 1–5. URL: http://jmlr.org/papers/v17/15-066.html.
* Breiman (2001) Breiman, L., 2001. Random forests. Machine Learning 45, 5–32. URL: https://doi.org/10.1023/A:1010933404324, doi:10.1023/A:1010933404324.
* Britton (2019) Britton, M., 2019. Vine: Visualizing statistical interactions in black box models. arXiv 1904.00561 .
* Chambers et al. (1983) Chambers, J., Cleveland, W., Kleiner, B., Tukey, P., 1983\. Graphical Methods for Data Analysis. Wadsworth, Belmont, CA.
* Chang et al. (2020) Chang, W., Cheng, J., Allaire, J., Xie, Y., McPherson, J., 2020. Shiny: Web Application Framework for R. URL: https://CRAN.R-project.org/package=shiny. R package version 1.5.0.
* Cook et al. (1995) Cook, D., Buja, A., Cabrera, J., Hurley, C., 1995\. Grand tour and projection pursuit. Journal of Computational and Graphical Statistics 4, 155–172. URL: http://www.jstor.org/stable/1390844.
* Cook and Swayne (2007) Cook, D., Swayne, D.F., 2007\. Interactive and Dynamic Graphics for Data Analysis With R and GGobi. 1st ed., Springer Publishing Company, Incorporated.
* De Cock (2011) De Cock, D., 2011. Ames, iowa: Alternative to the boston housing data as an end of semester regression project. Journal of Statistics Education 19\. URL: https://doi.org/10.1080/10691898.2011.11889627, doi:10.1080/10691898.2011.11889627, arXiv:https://doi.org/10.1080/10691898.2011.11889627.
* Earle and Hurley (2015) Earle, D., Hurley, C.B., 2015\. Advances in dendrogram seriation for application to visualization. Journal of Computational and Graphical Statistics 24, 1–25. URL: http://dx.doi.org/10.1080/10618600.2013.874295, doi:10.1080/10618600.2013.874295, arXiv:http://dx.doi.org/10.1080/10618600.2013.874295.
* Fanaee-T and Gama (2013) Fanaee-T, H., Gama, J., 2013\. Event labeling combining ensemble detectors and background knowledge. Progress in Artificial Intelligence , 1–15URL: [WebLink], doi:10.1007/s13748-013-0040-3.
* Friedman (2001) Friedman, J.H., 2001. Greedy function approximation: A gradient boosting machine. The Annals of Statistics 29, 1189–1232. URL: http://www.jstor.org/stable/2699986.
* Friedman and Popescu (2008) Friedman, J.H., Popescu, B.E., 2008\. Predictive learning via rule ensembles. Ann. Appl. Stat. 2, 916–954. URL: https://doi.org/10.1214/07-AOAS148, doi:10.1214/07-AOAS148.
* Furnas and Buja (1994) Furnas, G.W., Buja, A., 1994\. Prosection views: Dimensional inference through sections and projections. Journal of Computational and Graphical Statistics 3, 323–353. URL: https://www.tandfonline.com/doi/abs/10.1080/10618600.1994.10474649, doi:10.1080/10618600.1994.10474649, arXiv:https://www.tandfonline.com/doi/pdf/10.1080/10618600.1994.10474649.
* Goldstein et al. (2015) Goldstein, A., Kapelner, A., Bleich, J., Pitkin, E., 2015\. Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. Journal of Computational and Graphical Statistics 24, 44–65. doi:10.1080/10618600.2014.907095.
* Gower (1971) Gower, J.C., 1971. A general coefficient of similarity and some of its properties. Biometrics 27, 857–871. URL: http://www.jstor.org/stable/2528823.
* Hurley et al. (2020) Hurley, C., O’Connell, M., Domijan, K., 2020. Condvis2: Conditional Visualization for supervised and unsupervised models in Shiny. URL: https://CRAN.R-project.org/package=condvis. R package version 0.1.1.
* Hurley (2021) Hurley, C.B., 2021. Model exploration using conditional visualization. WIREs Computational Statistics 13, e1503. URL: https://onlinelibrary.wiley.com/doi/abs/10.1002/wics.1503, doi:https://doi.org/10.1002/wics.1503, arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/wics.1503.
* Inglis et al. (2020) Inglis, A., Parnell, A., Hurley, C., 2020. Vivid: Variable Importance and Variable Interaction Displays. URL: https://CRAN.R-project.org/package=vivid. R package version 0.1.0.
* Kim et al. (2017) Kim, S., J., C.K., Oh, S., 2017. Development of machine learning models for diagnosis of glaucoma. PLoS ONE 5.
* Kuhn (2019) Kuhn, M., 2019. caret: Classification and Regression Training. URL: https://CRAN.R-project.org/package=caret. R package version 6.0-84.
* Kuhn and Vaughan (2021) Kuhn, M., Vaughan, D., 2021\. parsnip: A Common API to Modeling and Analysis Functions. URL: https://CRAN.R-project.org/package=parsnip. R package version 0.1.25.
* Laa et al. (2020) Laa, U., Cook, D., Valencia, G., 2020. A slice tour for finding hollowness in high-dimensional data. Journal of Computational and Graphical Statistics 29, 681–687. URL: https://doi.org/10.1080/10618600.2020.1777140, doi:10.1080/10618600.2020.1777140, arXiv:https://doi.org/10.1080/10618600.2020.1777140.
* Lang et al. (2019) Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., Bischl, B., 2019\. mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software URL: https://joss.theoj.org/papers/10.21105/joss.01903, doi:10.21105/joss.01903.
* Lundberg and Lee (2017) Lundberg, S.M., Lee, S.I., 2017\. A unified approach to interpreting model predictions, in: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (Eds.), Advances in Neural Information Processing Systems 30. Curran Associates, Inc., pp. 4765–4774. URL: http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf.
* Maechler et al. (2019) Maechler, M., Rousseeuw, P., Struyf, A., Hubert, M., Hornik, K., 2019. cluster: Cluster Analysis Basics and Extensions. URL: https://CRAN.R-project.org/package=cluster. R package version 2.1.0.
* O’Connell (2017) O’Connell, M., 2017. Conditional Visualisation for Statistical Models. Ph.D. thesis. National University of Ireland Maynooth. URL: http://mural.maynoothuniversity.ie/8141/.
* O’Connell et al. (2016) O’Connell, M., Hurley, C., Domijan, K., 2016. Condvis: Conditional Visualization for Statistical Models. URL: https://CRAN.R-project.org/package=condvis. R package version 0.1.1.
* O’Connell et al. (2017) O’Connell, M., Hurley, C., Domijan, K., 2017. Conditional visualization for statistical models: An introduction to the condvis package in R. Journal of Statistical Software, Articles 81, 1–20. URL: https://www.jstatsoft.org/v081/i05, doi:10.18637/jss.v081.i05.
* Ribeiro et al. (2016) Ribeiro, M.T., Singh, S., Guestrin, C., 2016. “Why should I trust you?”: Explaining the predictions of any classifier, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pp. 1135–1144.
* Salzberg (1994) Salzberg, S.L., 1994. C4.5: Programs for machine learning by j. ross quinlan. morgan kaufmann publishers, inc., 1993. Machine Learning 16, 235–240. URL: https://doi.org/10.1007/BF00993309, doi:10.1007/BF00993309.
* Staniak and Biecek (2018) Staniak, M., Biecek, P., 2018\. Explanations of Model Predictions with live and breakDown Packages. The R Journal 10, 395–409. URL: https://doi.org/10.32614/RJ-2018-072, doi:10.32614/RJ-2018-072.
* Stuetzle (1987) Stuetzle, W., 1987. Plot windows. Journal of the American Statistical Association 82, 466–475. URL: http://www.jstor.org/stable/2289448.
* Tufte (1986) Tufte, E.R., 1986. The Visual Display of Quantitative Information. Graphics Press, Cheshire, CT, USA.
* Unwin (2015) Unwin, A., 2015. GDAdata: Datasets for the Book Graphical Data Analysis with R. URL: https://CRAN.R-project.org/package=GDAdata. R package version 0.93.
* Unwin and Valero-Mora (2018) Unwin, A., Valero-Mora, P., 2018\. Ensemble graphics. Journal of Computational and Graphical Statistics 27, 157–165. URL: https://doi.org/10.1080/10618600.2017.1383264, doi:10.1080/10618600.2017.1383264, arXiv:https://doi.org/10.1080/10618600.2017.1383264.
* Urbanek (2002) Urbanek, S., 2002. Different ways to see a tree - klimt, in: Härdle, W., Rönz, B. (Eds.), Compstat, Physica-Verlag HD, Heidelberg. pp. 303–308.
* Waddell and Oldford (2020) Waddell, A., Oldford, R.W., 2020\. loon: Interactive Statistical Data Visualization. URL: https://CRAN.R-project.org/package=loon. r package version 1.3.1.
* Wickham (2016) Wickham, H., 2016. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. URL: https://ggplot2.tidyverse.org.
* Wilkinson (2005) Wilkinson, L., 2005. The Grammar of Graphics (Statistics and Computing). Springer-Verlag New York, Inc., Secaucus, NJ, USA.
# Geometric control of algebraic systems
Benoît Legat and Raphaël M. Jungers
ICTEAM institute, UCLouvain, Louvain-la-Neuve, Belgium.
###### Abstract
In this paper, we present a geometric approach for computing the controlled
invariant set of a continuous-time control system. While the problem is well
studied in the ellipsoidal case, this family of sets is quite conservative for
constrained or switched linear systems. We reformulate the invariance of a set
as an inequality for its support function that is valid for any convex set.
This produces novel algebraic conditions for the invariance of sets with
polynomial or piecewise quadratic support functions. We compare our approach
with the common algebraic approach for polynomial sublevel sets and show that
the latter is significantly more conservative.
###### keywords:
Lyapunov methods; Control of constrained systems; Control of switched systems;
Convex optimization.
††thanks: RJ is a FNRS honorary Research Associate. This project has received
funding from the European Research Council (ERC) under the European Union’s
Horizon 2020 research and innovation programme under grant agreement No 864017
- L2C. RJ is also supported by the Walloon Region, the Innoviris Foundation,
and the FNRS (Chist-Era Druid-net).
## 1 Introduction
Computing a controlled invariant set is paramount in many applications
(Blanchini and Miani (2015)). (A preliminary version of this work appears in
Legat (2020).) Indeed, the existence of a controlled invariant set is
equivalent to the stabilizability of a control system (Sontag (1983)) and a
(possibly nonlinear) stabilizable state feedback can be deduced from the
controlled invariant set (Barmish (1985)).
The stabilizability of a linear time-invariant (LTI) control system is
equivalent to the stability of its uncontrollable subspace (which is readily
accessible in its Controllability Form) (Wonham, 1985, Section 2.4). Indeed,
the eigenvalues of its controllable subspace can be fixed to any value by a
proper choice of linear state feedback. The resulting controlled system is
stable, hence an invariant ellipsoid can be determined by solving a system of
linear equations (Liapounoff (1907)). This set is also controlled invariant
for the control system. When a control system admits an ellipsoidal controlled
invariant set, it is said to be _quadratically stabilizable_. When there
exists a linear state feedback such that the resulting autonomous system
admits an ellipsoidal invariant set, it is said to be _quadratically
stabilizable via linear control_.
While the stabilizability of LTI control systems is equivalent to their
quadratic stabilizability via linear control, it is no longer the case for
_uncertain_ or _switched_ systems (Petersen (1985)). Furthermore, it is often
desirable for _constrained_ systems to find a controlled invariant set of
maximal volume (or which is maximal in some direction (Ahmadi and Gunluk
(2018))). For such problems, the method detailed above is not suitable as it
does not take volume into consideration; more importantly, the maximal-volume
invariant set may not be an ellipsoid and may not be rendered stable
via a linear control. For this reason, the Linear Matrix Inequality (LMI) (4)
was devised to encapsulate the controlled invariance of an ellipsoid via
linear control (Boyd et al., 1994, Section 7.2.2) and the conservatism of the
choice of linear control was analysed (Sontag (1983)). As the linearity of the
control was found to be conservative for uncertain systems (Petersen (1985)),
the LMI (13) was found to encapsulate controlled invariance of an ellipsoid
via _any_ state-feedback (Barmish (1985)).
These LMIs have had a tremendous impact on control, but unfortunately the
approach is limited to ellipsoids due to its algebraic nature. We reinterpret
it in a geometric/behavioural framework, based on convex analysis, which
allows us to formulate a general condition for the controlled invariance of
arbitrary convex sets via any state-feedback in Theorem 7. While this
condition reduces to (13) for the special case of ellipsoids, it provides new
methods for computing controlled invariant convex sets with convex
polynomial and piecewise quadratic support functions.
In Section 2, we review the classical LMIs for the invariance and controlled
invariance of ellipsoids and discuss the challenges of generalizing them
to sublevel sets of polynomials. In Section 3, we develop a generic condition
of control invariance for continuous-time systems using our geometric
approach. We particularize it for ellipsoids (resp. sets with polynomial and
piecewise quadratic support functions) in Section 3.1 (resp. Section 3.2 and
Section 3.3). We illustrate these new results with a numerical example in
Section 4.
#### Reproducibility
The code used to obtain the results is published on codeocean (Legat and
Jungers (2021)). The set programs are reformulated by SetProg (Legat et al.
(2019)) into a Sum-of-Squares program which is reformulated into a
semidefinite program by SumOfSquares (Weisser et al. (2019)) which is solved
by Mosek v8 (ApS (2017)) through MathOptInterface (Legat et al. (2020a)).
## 2 Algebraic approach
Computing an ellipsoidal invariant set for an autonomous system $\dot{x}=Ax$
where $A\in\mathbb{R}^{n\times n}$ consists in searching for an ellipsoidal
set
$\mathcal{E}_{P}=\\{\,x\in\mathbb{R}^{n}\mid x^{\top}Px\leq 1\,\\}$
that satisfies the _Nagumo condition_ (Blanchini and Miani, 2015, Theorem
4.7): $x^{\top}PAx\leq 0$ for any $x\in\mathbb{R}^{n}$. The Nagumo condition
for ellipsoids is equivalent to the LMI:
$A^{\top}P+PA\preceq 0.$ (1)
which allows one to search for ellipsoidal invariant sets using semidefinite
programming.
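As an illustration, the following pure-Python sketch checks (1) for an example system chosen here (the matrices $A$ and $P$ are ours, not from the text), using the fact that a symmetric $2\times 2$ matrix is negative semidefinite iff its trace is nonpositive and its determinant is nonnegative:

```python
# Check the LMI A'P + PA <= 0 for a 2x2 example (matrices as nested lists).
# A symmetric 2x2 matrix M is negative semidefinite iff trace(M) <= 0 and det(M) >= 0.

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lmi_residual(A, P):
    """Return A^T P + P A."""
    At = [[A[j][i] for j in range(2)] for i in range(2)]
    S1, S2 = matmul2(At, P), matmul2(P, A)
    return [[S1[i][j] + S2[i][j] for j in range(2)] for i in range(2)]

def is_neg_semidef(M, tol=1e-9):
    trace = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return trace <= tol and det >= -tol

A = [[0.0, 1.0], [-1.0, -1.0]]  # an example Hurwitz matrix
P = [[1.0, 0.0], [0.0, 1.0]]    # candidate ellipsoid: the unit disk
print(is_neg_semidef(lmi_residual(A, P)))  # True: the unit disk is invariant
```

In a full workflow $P$ would be a decision variable of a semidefinite program rather than a fixed guess; the check above only verifies a given candidate.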
Consider the continuous-time control linear system
$\dot{x}=Ax+Bu$ (2)
where $A\in\mathbb{R}^{n_{x}\times n_{x}}$ and $B\in\mathbb{R}^{n_{x}\times
n_{u}}$ with the following definition of invariance.
###### Definition 1 (Controlled invariant set)
A set $S$ is _controlled invariant_ for system (2) if for any state $x_{0}\in
S$, there exists a control $u(t)$ such that the trajectory with initial state
$x_{0}$ and control $u$ remains in $S$.
With the presence of the control $u$ in the system, the Nagumo condition
becomes:
$\forall x\in\mathbb{R}^{n_{x}},\exists
u\in\mathbb{R}^{n_{u}},x^{\top}P(Ax+Bu)\leq 0.$ (3)
The control term $u$, or more precisely the existential quantifier $\exists$,
prevents us from transforming this condition into an LMI directly.
Fixing the control to a linear state feedback $u(x)=Kx$ for some matrix $K$
brings us back to the case of the autonomous system $\dot{x}=(A+BK)x$. Then,
the invariance condition can be formulated as the _Bilinear Matrix Inequality_
(BMI):
$A^{\top}P+PA+K^{\top}B^{\top}P+PBK\preceq 0$
While the matrix inequality is bilinear in $K$ and $P$, and BMIs are NP-hard
to solve in general (Toker and Ozbay (1995)), a clever algebraic manipulation
allows one to reformulate it as a Linear Matrix Inequality (LMI) in
$Q:=P^{-1}$ and $Y:=KQ$, where the sought controlled invariant ellipsoid is
given by $\mathcal{E}_{P}$; see e.g. (Boyd et al., 1994, Section 7.2.1,
Section 7.2.2) and (Blanchini and Miani, 2015, Section 4.4.1) for more
details. The linear
matrix inequality is
$QA^{\top}+AQ+Y^{\top}B^{\top}+BY\preceq 0.$ (4)
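As a hedged sketch (the double-integrator system and the candidate pair $Q$, $Y$ below are illustrative choices of ours, not taken from the paper), one can verify (4) numerically and recover the feedback gain $K=YQ^{-1}$:

```python
# Verify the LMI (4): Q A' + A Q + Y' B' + B Y <= 0 for the double integrator,
# then recover the linear feedback K = Y Q^{-1} (here Q = I, so K = Y).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def transpose(X):
    return [[X[j][i] for j in range(len(X))] for i in range(len(X[0]))]

def add(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(len(Ms[0][0]))]
            for i in range(len(Ms[0]))]

A = [[0.0, 1.0], [0.0, 0.0]]  # double integrator
B = [[0.0], [1.0]]
Q = [[1.0, 0.0], [0.0, 1.0]]  # candidate Q = P^{-1}
Y = [[-1.0, -1.0]]            # candidate Y = K Q

BY = matmul(B, Y)
M = add(matmul(Q, transpose(A)), matmul(A, Q), transpose(BY), BY)
# M should be negative semidefinite; for this 2x2 case check trace and det.
trace = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(trace <= 0 and det >= 0)  # True: this (Q, Y) is feasible for (4)
K = Y  # since Q is the identity, K = Y Q^{-1} = Y
```

The resulting $M$ equals $\operatorname{diag}(0,-2)$, so the candidate pair is feasible and the recovered gain stabilizes the closed loop of this example.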
Because the algebraic manipulation that reformulates the BMI into an LMI is
done at the level of matrices, it is not clear how this approach can be
generalized to other families of sets such as the ones considered in Section
3.2 and Section 3.3. Moreover, searching for ellipsoidal controlled invariant
sets may be rather restrictive, and this conservativeness is amplified for the
class of hybrid systems.
One attempt to generalize it to controlled invariant sublevel sets of
polynomials of degree $d$ is developed in Prajna et al. (2004). While the
method allows one to consider systems defined by polynomial equations, we show
below that for linear systems it has significant restrictions for $d>2$ and
the case $d=2$ reduces to the ellipsoidal case given by (4). This suggests
that the method is essentially restricted to computing invariant sublevel sets
of polynomials of at most twice the degree of the polynomial equations. On the
other hand, the condition we find in Corollary 10 is necessary and sufficient.
The remainder of this section particularizes the approach for linear systems;
more details can be found in Prajna et al. (2004).
Let $x_{[d]}$ denote the vector of all monomials of degree $d$ with the
variables $x_{i}$. Consider the set $\\{\,x\mid x_{[d]}^{\top}Px_{[d]}\leq
1\,\\}$ for a symmetric positive definite matrix $P$ and a state feedback of
the form $u(x)=K_{1}(x)x+K_{2}(x)x_{[d]}$ where $K_{1}(x),K_{2}(x)$ are
matrices of the appropriate dimensions whose entries are polynomials in $x$.
The invariance of the set for the autonomous system
$\dot{x}=Ax+BK_{1}(x)x+BK_{2}(x)x_{[d]}$ is equivalent to the nonpositivity of
the polynomial
$x_{[d]}^{\top}PM(x)BK_{2}(x)x_{[d]}+x_{[d]}^{\top}PF(x)x$
where $F(x)=M(x)BK_{1}(x)+M(x)A$ and $M(x)$ is the Jacobian of the
transformation $x\mapsto x_{[d]}$. This polynomial can be rewritten in the
matrix form:
$\begin{bmatrix}x_{[d]}\\\
x\end{bmatrix}^{\top}\begin{bmatrix}PM(x)BK_{2}(x)&PF(x)\\\
0&0\end{bmatrix}\begin{bmatrix}x_{[d]}\\\ x\end{bmatrix}.$ (5)
The following is therefore a sufficient condition for the invariance:
$\forall x\in\mathbb{R}^{n},\quad\begin{bmatrix}2PM(x)BK_{2}(x)&PF(x)\\\
F^{\top}(x)P&0\end{bmatrix}\preceq 0.$ (6)
After a similarity transformation with the block diagonal matrix
$\operatorname{BlockDiag}(P^{-1},I_{n_{x}})$ where
$I_{n_{x}}\in\mathbb{R}^{n_{x}\times n_{x}}$ is the identity matrix, the
condition is rewritten as:
$\forall x\in\mathbb{R}^{n},\quad\begin{bmatrix}2M(x)BY(x)&F(x)\\\
F^{\top}(x)&0\end{bmatrix}\preceq 0$ (7)
where $Y(x)=K_{2}(x)P^{-1}$.
By Proposition 14, for (7) to hold we need all entries of $F(x)$ to be zero
polynomials, which is quite restrictive. The example below discusses the
conservativeness of (6) compared to (5).
###### Example 2
Consider the autonomous system $\dot{x}=-x$ and the invariant set $\\{\,x\mid
x^{4}\leq 1\,\\}$. The condition (5) is satisfied:
$\begin{bmatrix}x^{2}\\\ x\end{bmatrix}^{\top}\begin{bmatrix}0&-2x\\\
0&0\end{bmatrix}\begin{bmatrix}x^{2}\\\ x\end{bmatrix}=-x^{4}$
while the matrix
$\begin{bmatrix}0&-2x\\\ -2x&0\end{bmatrix}$
of conditions (6) and (7) is indefinite for any nonzero $x$.
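The claims of this example are easy to verify numerically; the sketch below (our own check, sampling a few values of $x$ and testing nonpositivity rather than re-deriving the exact constant) confirms that the form in (5) is nonpositive while the symmetrized matrix in (6) is indefinite:

```python
# Numerically check Example 2: the quadratic-form expression of (5) is
# nonpositive, while the symmetric matrix [[0,-2x],[-2x,0]] of (6) has
# eigenvalues +2|x| and -2|x|, hence is indefinite for every x != 0.

for x in [-1.5, -0.3, 0.7, 2.0]:
    v = [x**2, x]                            # the vector (x_[d], x) with d = 2
    Mv = [0.0 * v[0] + (-2.0 * x) * v[1], 0.0]  # [[0,-2x],[0,0]] @ v
    form = v[0] * Mv[0] + v[1] * Mv[1]
    assert form <= 0                         # condition (5) holds
    eig_max, eig_min = 2 * abs(x), -2 * abs(x)  # eigenvalues of [[0,-2x],[-2x,0]]
    assert eig_max > 0 > eig_min             # indefinite: (6) fails
print("Example 2 verified on sample points")
```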
## 3 Geometric approach
In this section we derive a characterization of the controlled invariance of a
closed convex set under the form of an inequality for its support function. We
start by showing the equivalence of the notion of invariance with another
class of systems that directly models the geometric behaviours of the
trajectories of control systems with unconstrained input.
Consider the continuous-time _algebraic_ linear system:
$E\dot{x}=Cx$ (8)
with the following definition of invariance.
###### Definition 3 (Invariant set for an algebraic system)
A set $S$ is _invariant_ for system (8) if for any state $x_{0}\in S$, there
exists a trajectory of the system that remains in $S$.
Note the use of “there exists” instead of “for all” in the definition of
invariance, as both versions exist; see (Legat et al., 2020b, Remark 4) for
more details.
###### Proposition 4
Given a subset $\mathcal{S}\subseteq\mathbb{R}^{n}$ and matrices
$A\in\mathbb{R}^{r\times n},B\in\mathbb{R}^{r\times m}$, the following holds:
$A\mathcal{S}+B\mathbb{R}^{m}=\pi_{\operatorname{Im}(B)^{\perp}}{}^{-1}\pi_{\operatorname{Im}(B)^{\perp}}{}A\mathcal{S}$
where $\pi_{\operatorname{Im}(B)^{\perp}}{}$ is any projection matrix onto the
orthogonal subspace of $\operatorname{Im}(B)$.
###### Proof 3.1
Given $x\in\mathcal{S}$ and $y\in\mathbb{R}^{r}$, we have $y\in
A\\{x\\}+B\mathbb{R}^{m}$ if and only if $y-Ax\in\operatorname{Im}(B)$ or
equivalently,
$\pi_{\operatorname{Im}(B)^{\perp}}y=\pi_{\operatorname{Im}(B)^{\perp}}Ax$.
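As a concrete instance of this proposition (the matrices and samples below are illustrative choices of ours), for $B=(0,1)^{\top}$ in $\mathbb{R}^{2}$ a projection onto $\operatorname{Im}(B)^{\perp}$ is $\pi=\operatorname{diag}(1,0)$, and the projected dynamics are indeed independent of the input:

```python
# For B = (0, 1)', Im(B) = span(e2) and a projection onto Im(B)^perp is
# pi = [[1,0],[0,0]].  Then pi*(A x + B u) = pi*A*x for every input u,
# which is the content of Proposition 4 in this special case.

A = [[0.0, 1.0], [0.0, 0.0]]
B = [0.0, 1.0]
pi = [[1.0, 0.0], [0.0, 0.0]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

x = [0.3, -1.2]
Ax = apply(A, x)
base = apply(pi, Ax)
for u in [-2.0, 0.0, 5.0]:
    xdot = [Ax[i] + B[i] * u for i in range(2)]
    assert apply(pi, xdot) == base   # the input u is invisible after projection
print("ok")
```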
###### Proposition 5
Consider a control system (2) and an arbitrary projection matrix
$\pi_{\operatorname{Im}(B)^{\perp}}{}$ onto the orthogonal subspace of
$\operatorname{Im}(B)$. A set $S$ is controlled invariant for the control
system (2) with $\mathcal{U}=\mathbb{R}^{n_{u}}$, as defined in Definition 1,
if and only if it is controlled invariant for the algebraic system
$\pi_{\operatorname{Im}(B)^{\perp}}{}\dot{x}=\pi_{\operatorname{Im}(B)^{\perp}}{}Ax,$
as defined in Definition 3.
###### Proof 3.2
By Proposition 4, there exists $u\in\mathbb{R}^{n_{u}}$ such that
$\dot{x}=Ax+Bu$ if and only if
$\pi_{\operatorname{Im}(B)^{\perp}}{}\dot{x}=\pi_{\operatorname{Im}(B)^{\perp}}{}Ax$.
As the input $u$ is unconstrained, the result follows.
The Nagumo condition for algebraic systems has the following form.
###### Proposition 6
A closed set $\mathcal{S}$ is invariant for system (8), as defined in
Definition 3, if and only if
$\forall x\in\partial\mathcal{S},\exists y\in T_{\mathcal{S}}(x),Ey=Cx.$ (9)
See Section B for a brief review of the concepts of convex geometry needed
for the remainder of this section. The invariance condition (9) can be
rewritten in terms of _exposed faces_.
###### Theorem 7 (Controlled invariance of convex set)
A convex set $\mathcal{C}$ is invariant for system (8) with matrices
$C,E\in\mathbb{R}^{r\times n}$ if and only if
$\forall z\in\mathbb{R}^{r},\forall x\in F_{\mathcal{C}}(E^{\top}z),\langle
z,Cx\rangle\leq 0.$ (10)
###### Proof 3.3
As $\mathcal{C}$ is convex, $T_{\mathcal{C}}(x)$ is a convex cone. By
definition of the polar of a cone, $Cx\in ET_{\mathcal{C}}(x)$ if and only if
$\langle y,Cx\rangle\leq 0$ for all $y\in[ET_{\mathcal{C}}(x)]^{\circ}$. By
Proposition 20, $[ET_{\mathcal{C}}(x)]^{\circ}=E^{-\top}N_{\mathcal{C}}(x)$.
Therefore, the set $\mathcal{C}$ is invariant if and only if
$\forall x\in\partial\mathcal{C},\forall z\in
E^{-\top}N_{\mathcal{C}}(x),\langle z,Cx\rangle\leq 0.$ (11)
By Proposition 19, we have
$\\{\,(x,z)\in\partial\mathcal{C}\times\mathbb{R}^{r}\mid E^{\top}z\in
N_{\mathcal{C}}(x)\,\\}=\\\
\\{\,(x,z)\in\partial\mathcal{C}\times\mathbb{R}^{r}\mid x\in
F_{\mathcal{C}}(E^{\top}z)\,\\}.$
As we show in the remainder of this section, Theorem 7 allows us to
reformulate the invariance as an inequality in terms of the support function.
This makes it possible to combine the invariance constraint with other set
constraints that can be formulated in terms of support functions. Moreover,
for an appropriate family of sets, also called a _template_, the set program
can be automatically rewritten into a convex program combining all constraints
using _set programming_ (Legat et al. (2019); Legat (2020)). For this reason,
we only focus on the invariance constraint and do not detail how to formulate
the complete convex programs with the objective and all the constraints needed
to obtain the results of Section 4, as these problems are decoupled.
Theorem 8 below formulates the invariance of a convex set in terms of its
support function when the latter is differentiable. We generalize this result
to a relaxed notion of differentiability in Theorem 12.
###### Theorem 8
Consider a nonempty closed convex set $\mathcal{C}$ such that
$\delta^{*}(\cdot|\mathcal{C})$ is differentiable. Then $\mathcal{C}$ is
invariant for system (8) with matrices $C,E\in\mathbb{R}^{r\times n}$ if and
only if
$\forall z\in\mathbb{R}^{r},\langle
z,C\nabla\delta^{*}(E^{\top}z|\mathcal{C})\rangle\leq 0.$ (12)
###### Proof 3.4
By Proposition 21,
$F_{\mathcal{C}}(E^{\top}z)=\\{\nabla\delta^{*}(E^{\top}z|\mathcal{C})\\}$
hence (10) is equivalent to (12).
### 3.1 Ellipsoidal controlled invariant set
In this section, we particularize Theorem 8 to the case of ellipsoids. Since
the support function of an ellipsoid $\mathcal{E}_{P}$ is
$\delta^{*}(y|\mathcal{E}_{P})=\sqrt{y^{\top}P^{-1}y}$, we have the following
corollary of Theorem 8.
###### Corollary 9 (Barmish (1985))
Given a positive definite matrix $P$, the ellipsoid $\mathcal{E}_{P}$ is
controlled invariant for system (8) if and only if
$CP^{-1}E^{\top}+EP^{-1}C^{\top}\preceq 0.$ (13)
Observe that for the trivial case $\operatorname{Im}(B)=\mathbb{R}^{n}$ for
system (2), Proposition 5 produces a system (8) with $r=0$ hence the LMI (13)
will be trivially satisfied for any $P^{-1}$, which is expected.
In comparison to (4), for a system (8) with matrices
$C,E\in\mathbb{R}^{r\times n}$, the LMI (4) has size $n\times n$ while the LMI
(13) has only size $r\times r$. The characterization of controlled invariance
of ellipsoids using (13) can also be obtained by applying an elimination
procedure to reduce (4); see (Boyd et al., 1994, Equation (7.11)). However,
uncertain or switched systems may need a nonlinear state feedback to be
quadratically stabilized (Petersen (1985)). For such systems, (4) is
conservative since it assumes a linear feedback while (13) does not assume
anything about the feedback. It was shown in Barmish (1985) that if (13) is
satisfied then a stabilizing nonlinear continuous state feedback can be
deduced from the solution $P$. There is even a closed form for the feedback in
the case of a single input (Barmish, 1985, Eq. (15)).
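For a concrete sanity check (the system is our own choice): reducing the planar double integrator $\dot{x}=\begin{pmatrix}0&1\\0&0\end{pmatrix}x+\begin{pmatrix}0\\1\end{pmatrix}u$ via Proposition 5 gives $E=[1\ 0]$ and $C=[0\ 1]$, so (13) is the $1\times 1$ inequality $2q_{12}\leq 0$ on the off-diagonal entry of $Q=P^{-1}$:

```python
# LMI (13): C P^{-1} E' + E P^{-1} C' <= 0 with Q = P^{-1}.
# With E = [1, 0] and C = [0, 1] (double integrator after projection),
# C Q E' = q12, so (13) reads 2*q12 <= 0: any positive definite Q with a
# nonpositive off-diagonal entry certifies controlled invariance of E_{Q^{-1}}.

def lmi13(Q, E, C):
    # scalar case r = 1: returns C Q E' + E Q C'
    CQ = [sum(C[k] * Q[k][j] for k in range(2)) for j in range(2)]
    EQ = [sum(E[k] * Q[k][j] for k in range(2)) for j in range(2)]
    return sum(CQ[j] * E[j] for j in range(2)) + sum(EQ[j] * C[j] for j in range(2))

E = [1.0, 0.0]
C = [0.0, 1.0]
Q = [[1.0, -0.5], [-0.5, 1.0]]  # positive definite, q12 <= 0
print(lmi13(Q, E, C))  # -1.0 <= 0: the ellipsoid x' Q^{-1} x <= 1 is controlled invariant
```

Note that the LMI here is a single scalar inequality ($r=1$), whereas (4) for the same system is a $2\times 2$ matrix inequality, illustrating the size reduction discussed above.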
### 3.2 Polynomial controlled invariant set
In this section, we derive the algebraic condition for the controlled
invariance of a set with polynomial support function. This template is
referred to as _polyset_ ; see (Legat, 2020, Section 1.5.3).
###### Corollary 10
Given a homogeneous222A polynomial is _homogeneous_ if all its monomials have
the same total degree. nonnegative polynomial $p(x)$ of degree $2d$, the set
$\mathcal{C}$ defined by the support function
$\delta^{*}(y|\mathcal{C})=p(y)^{\frac{1}{2d}}$ is invariant for system (8)
with matrices $C,E\in\mathbb{R}^{r\times n}$ if and only if the polynomial
$z^{\top}C\nabla p(E^{\top}z)$ (14)
is nonpositive for all $z\in\mathbb{R}^{r}$.
###### Proof 3.5
We have
$\nabla\delta^{*}(y|\mathcal{C})=\frac{1}{p(y)^{1-\frac{1}{2d}}}\nabla p(y).$
If $p(y)$ is identically zero, the condition is trivially satisfied.
Otherwise, $p(y)^{1-\frac{1}{2d}}$ is nonnegative and vanishes only on an
algebraic variety of dimension at most $n-1$. Therefore, (12) is equivalent to
the nonpositivity of (14).
While verifying the nonnegativity of a polynomial is co-NP-hard, a sufficient
condition can be obtained via the standard Sum-of-Squares programming
framework; see Blekherman et al. (2012).
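A minimal instance (the scalar system and polynomial below are our own choices): for $\dot{x}=-x$, i.e. $E=1$ and $C=-1$, with $p(y)=y^{4}$ (so $\delta^{*}(y|\mathcal{C})=|y|$ and the set is $[-1,1]$), the polynomial (14) equals $-4z^{4}$. The sampling below is only a heuristic check; a Sum-of-Squares certificate, as discussed above, would prove nonpositivity:

```python
# Condition (14) for the scalar system E*xdot = C*x with E = 1, C = -1
# and p(y) = y^4: the polynomial z*C*p'(E*z) = -4*z^4 must be nonpositive.

def condition14(z, E=1.0, C=-1.0):
    y = E * z
    grad_p = 4.0 * y**3        # p(y) = y^4  =>  p'(y) = 4 y^3
    return z * C * grad_p      # = -4 z^4

for z in [-2.0, -0.1, 0.0, 0.5, 3.0]:
    assert condition14(z) <= 0
print("ok")
```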
### 3.3 Piecewise semi-ellipsoidal controlled invariant set
Johansson and Rantzer (1998) study the computation of piecewise quadratic
Lyapunov functions for continuous-time autonomous piecewise affine systems.
Legat et al. (2021) present a convex programming approach to compute
_piecewise semi-ellipsoidal_ controlled invariant sets of discrete-time
control systems. In this section, we show that Theorem 8 provides the
corresponding condition for continuous-time.
We recall (Legat et al., 2021, Definition 2) below.
###### Definition 11
A _polyhedral conic partition_ of $\mathbb{R}^{n}$ is a set of $m$ polyhedral
cones $(\mathcal{P}_{i})_{i=1}^{m}$ with nonempty interior such that for all
$i\neq j$, $\dim(\mathcal{P}_{i}\cap\mathcal{P}_{j})<n$ and
$\cup_{i=1}^{m}\mathcal{P}_{i}=\mathbb{R}^{n}$.
Piecewise semi-ellipsoids have a support function of the form
$\delta^{*}(y|\mathcal{C})=\sqrt{y^{\top}Q_{i}y}\qquad y\in\mathcal{P}_{i}$
(15)
where $(\mathcal{P}_{i})_{i=1}^{m}$ is a polyhedral conic partition. The
support function additionally has to satisfy (Legat et al., 2021, (2) and (3))
to ensure its continuity and convexity.
###### Theorem 12
Consider a polyhedral conic partition $(\mathcal{P}_{i})_{i=1}^{m}$ and a
nonempty closed convex set $\mathcal{C}$ such that
$\delta^{*}(y|\mathcal{C})=f_{i}(y)\qquad y\in\mathcal{P}_{i}$
for differentiable functions $f_{i}:\mathcal{P}_{i}\to\mathbb{R}$. The set
$\mathcal{C}$ is invariant for system (8) with matrices
$C,E\in\mathbb{R}^{r\times n}$ if and only if, for all $i=1,\ldots,m$ and
$z\in\mathbb{R}^{r}$ such that $E^{\top}z\in\mathcal{P}_{i}$, we have
$\langle z,C\nabla f_{i}(E^{\top}z)\rangle\leq 0.$ (16)
###### Proof 3.6
Given $z\in\mathbb{R}^{r}$ such that $E^{\top}z$ is in the intersection of the
boundary of $\mathcal{C}$ and the interior of $\mathcal{P}_{i}$, the support
function is differentiable at $E^{\top}z$ hence, by Proposition 21,
$F_{\mathcal{C}}(E^{\top}z)=\\{\nabla f_{i}(E^{\top}z)\\}$. The condition (10)
is therefore reformulated as (16).
Given a subset $I$ of $\\{1,\ldots,m\\}$ and $z\in\mathbb{R}^{r}$ such that
$E^{\top}z$ is in the intersection of the boundary of $\mathcal{C}$ and
$\cap_{i\in I}\mathcal{P}_{i}$, $F_{\mathcal{C}}(E^{\top}z)$ is the convex
hull of $\nabla f_{i}(E^{\top}z)$ for $i\in I$. For any
convex combination (i.e., nonnegative numbers summing to 1)
$(\lambda_{i})_{i\in I}$, (16) implies that
$\langle z,C\sum_{i\in I}\lambda_{i}\nabla f_{i}(E^{\top}z)\rangle=\sum_{i\in
I}\lambda_{i}\langle z,C\nabla f_{i}(E^{\top}z)\rangle\leq 0.$
###### Corollary 13
A piecewise semi-ellipsoid $\mathcal{C}$ is invariant for system (8) with
matrices $C,E\in\mathbb{R}^{r\times n}$ if and only if the quadratic form
$z^{\top}CP_{i}^{-1}E^{\top}z+z^{\top}EP_{i}^{-1}C^{\top}z$ (17)
is nonpositive for all $i=1,\ldots,m$ and $z\in\mathbb{R}^{r}$ such that
$E^{\top}z\in\mathcal{P}_{i}$.
The condition (17) amounts to verifying the nonpositivity of a quadratic form
restricted to a polyhedral cone, that is, the positive semidefiniteness of its
negation on that cone. When this cone is the positive orthant, the latter
property is called _copositivity_, which is co-NP-complete to
decide (Murty and Kabadi (1987)). However, a sufficient LMI condition is given
in (Legat
et al., 2021, Proposition 2) and a necessary and sufficient condition is given
by a hierarchy of Sum-of-Squares programs (Parrilo, 2000, Chapter 5).
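To illustrate the notion (an example of ours, not from the text): the matrix $M=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ is indefinite on $\mathbb{R}^{2}$ yet copositive, since $x^{\top}Mx=2x_{1}x_{2}\geq 0$ on the positive orthant. The sampling below is a heuristic illustration only; certifying copositivity requires, e.g., the LMI or Sum-of-Squares conditions cited above:

```python
# M = [[0,1],[1,0]] has eigenvalues +1 and -1 (indefinite on R^2),
# but x'Mx = 2*x1*x2 >= 0 on the positive orthant: M is copositive.

def quad(M, x):
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

M = [[0.0, 1.0], [1.0, 0.0]]
assert quad(M, [1.0, -1.0]) < 0          # negative outside the cone
for x in [[0.0, 1.0], [1.0, 0.0], [0.5, 2.0], [3.0, 0.1]]:
    assert quad(M, x) >= 0               # nonnegative on sampled cone points
print("ok")
```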
## 4 Numerical example
In this section, we study a simple numerical example to illustrate our new
approach. Suppose we want to compute a controlled invariant set for the
control system
$\dot{x}=\begin{pmatrix}0&1\\\ 0&0\\\ \end{pmatrix}x+\begin{pmatrix}0\\\
1\end{pmatrix}u$
with the state constraint $x\in[-1,1]^{2}$ and input constraint $u\in[-1,1]$.
We represent the state set $[-1,1]^{2}$ and its polar in green in Figure 1 and
Figure 2.
The union of controlled invariant sets is controlled invariant. Moreover, by
linearity, the convex hull of a union of controlled invariant sets is
controlled invariant. Therefore, for any family that is closed under union
(resp. convex hull), there exists a _maximal_ controlled invariant set, i.e.,
a controlled invariant set in which all controlled invariant sets of the
family are included; it is the union (resp. convex hull) of all controlled
invariant sets included in $[-1,1]^{2}$.
For this simple planar system, the maximal controlled invariant set can be
obtained by hand; it is
$\\{\,(x_{1},x_{2})\in[-1,1]^{2}\mid x_{1}x_{2}\leq 0\text{ or }|x_{1}|\leq
1-x_{2}^{2}/2\,\\}$
Its polar is given by
$\displaystyle\,\\{\,x\mid x_{1}x_{2}\leq 0\text{ and }|x_{1}-x_{2}|\leq
1\,\\}$ $\displaystyle\cup$ $\displaystyle\,\\{\,x\mid x_{1}(x_{1}-x_{2})\leq
0\text{ and }|x_{1}/2+x_{2}|\leq 1\,\\}$ $\displaystyle\cup$
$\displaystyle\,\\{\,x\mid x_{2}(x_{2}-x_{1})\leq 0\text{ and
}(2x_{1}-\operatorname*{sign}(x_{1}))^{2}+2x_{2}^{2}\leq 1\,\\}$
We represent it in yellow in Figure 1 and Figure 2.
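The parabolic piece of this boundary can be verified directly (a sketch of ours): with constant input $u=-1$ the double integrator has the closed-form solution $x_{2}(t)=x_{2}(0)-t$, $x_{1}(t)=x_{1}(0)+x_{2}(0)t-t^{2}/2$, along which the quantity $x_{1}+x_{2}^{2}/2$ is exactly conserved, so the curve $x_{1}=1-x_{2}^{2}/2$ is itself a trajectory:

```python
# With constant input u = -1, the double integrator xdot1 = x2, xdot2 = u
# has the closed-form solution below, and x1 + x2**2/2 is conserved:
# d/dt (x1 + x2**2/2) = x2 + x2*u = x2*(1 + u) = 0.

def flow(x1, x2, t, u=-1.0):
    return (x1 + x2 * t + u * t**2 / 2.0, x2 + u * t)

x1, x2 = 0.5, 1.0                 # a point on the curve x1 = 1 - x2^2/2
c0 = x1 + x2**2 / 2.0             # = 1.0
for t in [0.1, 0.5, 1.0, 2.0]:
    y1, y2 = flow(x1, x2, t)
    assert abs((y1 + y2**2 / 2.0) - c0) < 1e-12
print("ok")
```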
As Proposition 5 requires the input to be unconstrained, we will consider
projections onto the first two dimensions of controlled invariant sets of the
following control system:
$\dot{x}=\begin{pmatrix}0&1&0\\\ 0&0&1\\\
0&0&0\end{pmatrix}x+\begin{pmatrix}0\\\ 0\\\ 1\end{pmatrix}u.$
with the state constraint $x\in[-1,1]^{3}$ and no input constraint.
Following Proposition 5, we consider the algebraic system
$\begin{pmatrix}1&0&0\\\ 0&1&0\\\ \end{pmatrix}\dot{x}=\begin{pmatrix}0&1&0\\\
0&0&1\\\ \end{pmatrix}x$
with the state constraint $x\in[-1,1]^{3}$.
While the _maximal_ invariant set is well defined, this is no longer the case
when we restrict the set to belong to the family of ellipsoids, polysets or
piecewise semi-ellipsoids for a fixed polyhedral conic partition, as these
families are closed under neither union nor convex hull. The objective used to
determine which invariant set is selected depends on the particular
application. Let $\mathcal{D}$ be the convex hull of
$\\{(-1+\sqrt{3},-1+\sqrt{3}),(-1,1),(1-\sqrt{3},1-\sqrt{3}),(1,-1)\\}$. For
this example, we maximize $\gamma$ such that $\gamma\mathcal{D}$ is included
in the projection of the invariant set onto the first two dimensions. We
represent $\gamma\mathcal{D}$ in red in Figure 1 and Figure 2.
For the ellipsoidal template considered in Section 3.1, the optimal solution
is shown in Figure 1, as ellipsoids correspond to polysets of degree 2. The
optimal objective value is $\gamma\approx 0.81$.
For the polyset template considered in Section 3.2, the optimal solutions are
represented in Figure 1. The optimal objective value for degree 4 (resp. 6, 10
and 20) is $\gamma\approx 0.91$ (resp. $\gamma\approx 0.93$, $\gamma\approx
0.96$ and $\gamma\approx 0.98$).
For the piecewise semi-ellipsoidal template, we consider polyhedral conic
partitions made of the conic hull of each facet of the polytope with extreme
points
$(\cos(\alpha)\cos(\beta),\sin(\alpha)\cos(\beta),\sin(\beta))$ (18)
where $\alpha=0,2\pi/m_{1},4\pi/m_{1},\ldots,2(m_{1}-1)\pi/m_{1}$ and
$\beta=-\pi/2,\ldots,-2\pi/(m_{2}-1),-\pi/(m_{2}-1),0,\pi/(m_{2}-1),2\pi/(m_{2}-1),\ldots,\pi/2$.
The optimal objective value for $m=(4,3)$ (resp. $(8,5)$) is $\gamma\approx
0.89$ (resp. $\gamma\approx 0.92$). The corresponding optimal solution is
shown in Figure 2.
Figure 1: In blue are the solutions for polysets of different degrees. The
degrees from top to bottom are respectively 2, 4, 6, 10 and 20. The green set
is the safe set $[-1,1]^{2}$, the yellow set is the maximal controlled
invariant set and the red set is $\gamma\mathcal{D}$. The sets are represented
in the primal space in the left figures and in the polar space in the right
figures.
Figure 2: In blue are the solutions for piecewise semi-ellipsoids for two
different polyhedral conic partitions. The partitions from top to bottom are
as described in (18) with $m=(4,3)$ and $m=(8,5)$ respectively. The green set
is the safe set $[-1,1]^{2}$, the yellow set is the maximal controlled
invariant set and the red set is $\gamma\mathcal{D}$. The sets are represented
in the primal space in the left figures and in the polar space in the right
figures.
## 5 Conclusion
We proved a condition for continuous-time controlled invariance of a convex
set based on its support function. We particularized the condition for three
templates: ellipsoids, polysets and piecewise semi-ellipsoids. In the
ellipsoidal case, it reduces to a known LMI, in the polyset case, it provides
a condition significantly less conservative than the existing one333Indeed, it
is equivalent to invariance by Corollary 10 and we showed in Section 2 that
the existing approach is quite conservative. and in the piecewise semi-
ellipsoidal case, it provides the first convex programming approach for
continuous-time controlled invariance to the best of our knowledge.
As future work, we aim to apply this framework to other families such as the
_piecewise polysets_ defined in Legat (2020). Moreover, instead of considering
a uniform discretization of the hypersphere as in (18), a more adaptive
method could be considered. The sensitivity information provided by the dual
solution of the optimization program could for instance determine which pieces
of the partition should be refined.
Finally, as the discrete-time version of this work developed in Legat et al.
(2020b, 2021) also requires the set to be represented by its support function
for the optimization program to be convex, both these methods and the method
of this paper could be combined to compute controlled invariant sets for
hybrid automata using the condition of this paper for the invariance subject
to the dynamics of each mode and the condition of Legat et al. (2020b, 2021)
for the invariance subject to each reset map.
## References
* Ahmadi and Gunluk (2018) Ahmadi, A.A. and Gunluk, O. (2018). Robust-to-Dynamics Optimization. _arXiv e-prints_ , arXiv:1805.03682.
* ApS (2017) ApS, M. (2017). MOSEK Optimization Suite Release 8.1.0.43. URL: http://docs.mosek.com/8.1/intro.pdf.
* Barmish (1985) Barmish, B.R. (1985). Necessary and sufficient conditions for quadratic stabilizability of an uncertain system. _Journal of Optimization theory and applications_ , 46(4), 399–408.
* Blanchini and Miani (2015) Blanchini, F. and Miani, S. (2015). _Set-theoretic methods in control_. Springer, second edition.
* Blekherman et al. (2012) Blekherman, G., Parrilo, P.A., and Thomas, R.R. (2012). _Semidefinite Optimization and Convex Algebraic Geometry_. Society for Industrial and Applied Mathematics, Philadelphia, PA.
* Boyd et al. (1994) Boyd, S.P., El Ghaoui, L., Feron, E., and Balakrishnan, V. (1994). _Linear matrix inequalities in system and control theory_ , volume 15. SIAM.
* Hiriart-Urruty and Lemaréchal (2012) Hiriart-Urruty, J.B. and Lemaréchal, C. (2012). _Fundamentals of convex analysis_. Springer Science & Business Media.
* Johansson and Rantzer (1998) Johansson, M. and Rantzer, A. (1998). Computation of piecewise quadratic lyapunov functions for hybrid systems. _IEEE Transactions on Automatic Control_ , 43, 555–559.
* Legat (2020) Legat, B. (2020). _Set programming : theory and computation_. Ph.D. thesis, UCLouvain.
* Legat et al. (2020a) Legat, B., Dowson, O., Garcia, J.D., and Lubin, M. (2020a). Mathoptinterface: a data structure for mathematical optimization problems. Second round of reviews.
* Legat et al. (2019) Legat, B., Jungers, R.M., Parrilo, P.A., and Tabuada, P. (2019). Set Programming with JuMP. In _The Third Annual JuMP-dev Workshop_.
* Legat et al. (2021) Legat, B., Raković, S.V., and Jungers, R.M. (2021). Piecewise semi-ellipsoidal control invariant sets. _IEEE Control Systems Letters_ , 5(3), 755–760. 10.1109/LCSYS.2020.3005326.
* Legat et al. (2020b) Legat, B., Tabuada, P., and Jungers, R.M. (2020b). Sum-of-squares methods for controlled invariant sets with applications to model-predictive control. _Nonlinear Analysis: Hybrid Systems_ , 36, 100858. 10.1016/j.nahs.2020.100858.
* Legat and Jungers (2021) Legat, B. and Jungers, R.M. (2021). Continuous-time controlled invariant sets, a geometric approach. https://www.codeocean.com/.
* Liapounoff (1907) Liapounoff, A. (1907). Problème général de la stabilité du mouvement. _Annales de la Faculté des sciences de Toulouse : Mathématiques_ , 9, 203–474. URL http://eudml.org/doc/72801.
* Murty and Kabadi (1987) Murty, K.G. and Kabadi, S.N. (1987). Some np-complete problems in quadratic and nonlinear programming. _Mathematical Programming: Series A and B_ , 39(2), 117–129.
* Parrilo (2000) Parrilo, P.A. (2000). _Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization_. Ph.D. thesis, Citeseer.
* Petersen (1985) Petersen, I. (1985). Quadratic stabilizability of uncertain linear systems: existence of a nonlinear stabilizing control does not imply existence of a linear stabilizing control. _IEEE Transactions on Automatic Control_ , 30(3), 291–293.
* Prajna et al. (2004) Prajna, S., Papachristodoulou, A., and Wu, F. (2004). Nonlinear control synthesis by sum of squares optimization: A lyapunov-based approach. In _5th Asian Control Conference_ , volume 1, 157–165. IEEE.
* Rockafellar (2015) Rockafellar, R.T. (2015). _Convex analysis_. Princeton university press.
* Schneider (2013) Schneider, R. (2013). _Convex bodies: the Brunn–Minkowski theory_. 151\. Cambridge University Press.
* Sontag (1983) Sontag, E.D. (1983). A lyapunov-like characterization of asymptotic controllability. _SIAM Journal on Control and Optimization_ , 21(3), 462–471.
* Toker and Ozbay (1995) Toker, O. and Ozbay, H. (1995). On the np-hardness of solving bilinear matrix inequalities and simultaneous stabilization with static output feedback. In _Proceedings of the American Control Conference_ , volume 4, 2525–2526. IEEE.
* Weisser et al. (2019) Weisser, T., Legat, B., Coey, C., Kapelevich, L., and Vielma, J.P. (2019). Polynomial and moment optimization in julia and jump. In _JuliaCon_. URL https://pretalx.com/juliacon2019/talk/QZBKAU/.
* Wonham (1985) Wonham, W.M. (1985). Linear multivariable control: A geometric approach. In _Applications of Mathematics_ , volume 10. Springer, third edition. 10.1007/978-1-4612-1082-5.
## Appendix A Block matrices
We have the following result for block matrices.
###### Proposition 14
Consider a symmetric matrix $A\in\mathbb{R}^{n\times n}$ and a matrix
$B\in\mathbb{R}^{n\times m}$. If the matrix
$C=\begin{bmatrix}A&B\\\ B^{\top}&0\end{bmatrix}$
is positive semidefinite then $B$ is zero.
###### Proof A.1
If $C$ is positive semidefinite, then there exists an integer $r$ and matrices
$X\in\mathbb{R}^{r\times n},Y\in\mathbb{R}^{r\times m}$ such that
$C=\begin{bmatrix}X^{\top}\\\
Y^{\top}\end{bmatrix}\begin{bmatrix}X&Y\end{bmatrix}.$
From $Y^{\top}Y=0$, we deduce that $Y=0$ hence $B=X^{\top}Y=0$.
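A quick numerical illustration with scalar blocks (our own example): taking $A=[1]$ and $B=[1]$, the block matrix has negative determinant and an explicit witness vector, hence is not positive semidefinite, consistent with the proposition:

```python
# Block matrix [[A, B], [B', 0]] with A = [1], B = [1] (scalar blocks):
# C = [[1, 1], [1, 0]] has det(C) = -1 < 0, so C is not PSD.
# The witness v = (1, -2) gives v'Cv = 1 - 2 - 2 = -3 < 0.

C = [[1.0, 1.0], [1.0, 0.0]]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
v = [1.0, -2.0]
vCv = sum(v[i] * C[i][j] * v[j] for i in range(2) for j in range(2))
print(det, vCv)   # -1.0 -3.0: any nonzero B breaks positive semidefiniteness
```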
## Appendix B Convex geometry
###### Definition 15 (Support function (Rockafellar, 2015, p. 28))
Consider a convex set $\mathcal{C}$. The _support function_ of $\mathcal{C}$
is defined as
$\delta^{*}(y|\mathcal{C})=\sup_{x\in\mathcal{C}}\langle y,x\rangle.$
###### Definition 16 (Polar set)
For any convex set $\mathcal{C}$ the polar of $\mathcal{C}$, denoted
$\mathcal{C}^{\circ}$, is defined as
$\mathcal{C}^{\circ}=\\{\,y\mid\delta^{*}(y|\mathcal{C})\leq 1\,\\}.$
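As a small example (ours): the support function of the square $[-1,1]^{2}$ is $\delta^{*}(y)=|y_{1}|+|y_{2}|$, since the supremum of the linear function $\langle y,\cdot\rangle$ over a polytope is attained at a vertex; membership in the polar then amounts to checking $\delta^{*}(y)\leq 1$:

```python
# Support function of the square [-1,1]^2, computed by enumerating vertices
# (the supremum of a linear function over a polytope is attained at a vertex).

def support_square(y):
    vertices = [(sx, sy) for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
    return max(y[0] * vx + y[1] * vy for vx, vy in vertices)

assert support_square([2.0, -3.0]) == abs(2.0) + abs(-3.0)   # = 5.0
# Polar membership test: y is in the polar iff the support value is <= 1.
assert support_square([0.5, 0.4]) <= 1.0    # inside the cross-polytope
assert support_square([0.8, 0.8]) > 1.0     # outside the cross-polytope
print("ok")
```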
We define the _tangent cone_ as follows (Blanchini and Miani, 2015, Definition
4.6).
###### Definition 17 (Tangent cone)
Given a closed convex set $\mathcal{S}$ and a distance function
$d(\mathcal{S},x)$, the _tangent cone_ to $\mathcal{S}$ at $x$ is defined as
follows:
$T_{\mathcal{S}}(x)=\left\\{\,y\mid\lim_{\tau\to 0}\frac{d(\mathcal{S},x+\tau
y)}{\tau}=0\,\right\\}$
where the distance is defined as
$d(\mathcal{S},x)=\inf_{y\in\mathcal{S}}\|x-y\|$
where $\|\cdot\|$ is a norm. The tangent cone is a convex cone and is
independent of the norm used.
For a convex set $\mathcal{C}$, the _normal cone_ is the polar of the tangent
cone $N_{\mathcal{C}}(x)=T_{\mathcal{C}}^{\circ}(x)$.
The _exposed face_ (also called the _support set_ , e.g., in (Schneider, 2013,
Section 1.7.1)) is defined as follows (Hiriart-Urruty and Lemaréchal, 2012,
Definition 3.1.3).
###### Definition 18 (Exposed face)
Consider a nonempty closed convex set $\mathcal{C}$. Given a vector $y\neq 0$,
the _exposed face_ of $\mathcal{C}$ associated to $y$ is
$F_{\mathcal{C}}(y)=\\{\,x\in\mathcal{C}\mid\langle
x,y\rangle=\delta^{*}(y|\mathcal{C})\,\\}.$
The exposed faces and normal cones are related by the following property
(Hiriart-Urruty and Lemaréchal, 2012, Proposition C.3.1.4).
###### Proposition 19
Consider a nonempty closed convex set $\mathcal{C}$. For any $x\in\mathcal{C}$
and nonzero vector $y$, $x\in F_{\mathcal{C}}(y)$ if and only if $y\in
N_{\mathcal{C}}(x)$.
Given a set $\mathcal{S}$ and a matrix $A$, let $A^{-\top}\mathcal{S}$ denote
the preimage $\\{\,x\mid A^{\top}x\in\mathcal{S}\,\\}$.
###### Proposition 20 ((Rockafellar, 2015, Corollary 16.3.2))
For any convex set $\mathcal{C}$ and linear map $A$,
$\displaystyle(A\mathcal{C})^{\circ}$
$\displaystyle=A^{-\top}\mathcal{C}^{\circ}.$
where $\mathcal{C}^{\circ}$ denotes the polar of the set $\mathcal{C}$.
When the support function is differentiable at a given point,
$F_{\mathcal{C}}$ is a singleton and may be directly obtained using the
following result:
###### Proposition 21 ((Rockafellar, 2015, Corollary 25.1.2))
Given a nonempty closed convex set $\mathcal{C}$, if
$\delta^{*}(y|\mathcal{C})$ is differentiable at $y$ then
$F_{\mathcal{C}}(y)=\\{\nabla\delta^{*}(y|\mathcal{C})\\}$.
In fact, for nonempty compact convex sets, differentiability at $y$ is even a
necessary and sufficient condition for $F_{\mathcal{C}}(y)$ to be a
singleton (Schneider, 2013, Corollary 1.7.3).
Department of Physics, ETH Zurich - Lugano, Switzerland Institute of
Computational Science, Università Svizzera Italiana - Lugano, Switzerland
Italian Institute of Technology - Genova, Italy
# OPES: On-the-fly Probability Enhanced Sampling Method
Michele Invernizzi
###### Abstract
Molecular simulations are playing an ever-increasing role, finding
applications in fields as varied as physics, chemistry, biology and materials
science. However, many phenomena of interest take place on time scales that
are out of reach of standard molecular simulations. This is known as the
sampling problem and over the years several enhanced sampling methods have
been developed to mitigate this issue. We propose a unified approach that puts
on the same footing the two most popular families of enhanced sampling
methods, and paves the way for novel combined approaches. The on-the-fly
probability enhanced sampling method provides an efficient implementation of
such a generalized approach, while also focusing on simplicity and robustness.
## 1 Introduction
Despite the remarkable improvements over the last decades, both in
computational power and in algorithmic efficiency, molecular simulations are
still limited in their scope by the sampling problem. Many phenomena of
interest, such as protein-ligand binding or crystal nucleation, are rare
events that take place on macroscopic time scales and thus would require an
impractical amount of computation to be simulated using standard molecular
dynamics or Markov-chain Monte Carlo. To circumvent this problem, a plethora
of enhanced sampling methods have been developed that aim at allowing a
simulation to visit all the relevant metastable states, without being hindered
by kinetic bottlenecks. Apart from some remarkable exceptions [1, 2], the
most popular enhanced sampling techniques can be roughly grouped into two main
families, which we will refer to as collective variables methods and expanded
ensembles methods. We propose a unified perspective on enhanced sampling that
allows us to develop a general method to perform both kinds of sampling in a
robust and efficient way. This new perspective makes enhanced sampling more
accessible and simpler to use, and can also open the way to novel sampling
strategies. The method we developed, called on-the-fly probability enhanced
sampling (OPES), is described in detail in Refs. [3] and [4]. Here we present
its main features in a synthetic fashion.
## 2 Unified approach
The goal of enhanced sampling is to increase the probability of observing in a
simulation certain rare events, and to do it in such a way that it is still
possible to retrieve statistics about the original system. We call target
distribution the modified probability distribution that is sampled instead of
the Boltzmann one. Rather than focusing on the various computational
techniques used by the different enhanced sampling methods, we propose to
group them according to how they define the target distribution they aim to
sample. Following this criterion, we can identify two main families.
A first family is the one of methods such as umbrella sampling [5] and
metadynamics [6]. These methods make use of collective variables (CVs) or
order parameters that are smooth functions of the atomistic coordinates and
encode the slow modes of the system. In this family, the target distribution
is defined by requiring that its marginal probability distribution over such
CVs has a given functional form. Typically the marginal is chosen to be a
constant flat distribution, as in adaptive umbrella sampling [7], but other
choices are possible, such as the well-tempered distribution often used in
metadynamics [8].
A second family includes tempering methods, such as simulated tempering [9]
and replica exchange [10]. These methods define their target distribution as
the combination of slightly different versions of the original system, for
example the same system but at higher temperatures. These target distributions
are also known as generalized ensembles or expanded ensembles [11].
The OPES method can be used to sample either kind of target distribution. It
does so by adding to the potential energy of the system $U(\mathbf{x})$ a bias
potential $V(\mathbf{x})$ such that the sampled distribution is not the
equilibrium Boltzmann distribution, $P(\mathbf{x})\propto e^{-\beta
U(\mathbf{x})}$, but the target one, $p^{\text{tg}}(\mathbf{x})$. This bias
potential is defined as
$V(\mathbf{x})=-\frac{1}{\beta}\log\frac{p^{\text{tg}}(\mathbf{x})}{P(\mathbf{x})}\,,$
(1)
where $\beta$ is the inverse Boltzmann temperature. The bias potential is not
known a priori, but it is self-consistently learned during the simulation via
an on-the-fly estimate of the probability distributions. Statistics of the
unbiased system can be retrieved via a reweighting procedure, by assuming that
the bias is updated in an adiabatic way. Given any observable $O(\mathbf{x})$,
its ensemble average $\langle O\rangle$ over the unbiased system can be
estimated via ensemble averages over the sampled biased system:
$\langle O\rangle=\frac{\langle Oe^{\beta V}\rangle_{V}}{\langle e^{\beta
V}\rangle_{V}}\,.$ (2)
In this way also free energy differences and free energy surfaces can be
estimated [3, 4].
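As a minimal illustration of the reweighting formula, Eq. (2), the following Python sketch computes an unbiased ensemble average from biased samples; the per-frame observable and bias values here are synthetic placeholders, not output of an actual simulation.

```python
import numpy as np

# Reweighting sketch for Eq. (2): <O> = <O e^{beta V}>_V / <e^{beta V}>_V.
# The arrays below stand in for per-frame data from a biased trajectory.
rng = np.random.default_rng(0)
beta = 1.0                        # inverse temperature
obs = rng.normal(size=1000)       # O(x) evaluated along the biased run
bias = rng.normal(size=1000)      # V(x) evaluated along the biased run

# Subtracting max(bias) before exponentiating avoids overflow; the shift
# cancels between numerator and denominator.
w = np.exp(beta * (bias - bias.max()))
O_unbiased = np.sum(obs * w) / np.sum(w)
```

The same weights `w` can be histogrammed over a collective variable to recover free energy surfaces, as mentioned above.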
## 3 OPES for collective variables sampling
Given a set of collective variables $\mathbf{s}=\mathbf{s}(\mathbf{x})$, one
can define the marginal probability
$P(\mathbf{s})=\int
P(\mathbf{x})\,\delta[\mathbf{s}(\mathbf{x})-\mathbf{s}]\,d\mathbf{x}\,.$ (3)
The well-tempered ensemble with respect to these CVs is obtained by requiring
that the marginal of the target distribution is
$p^{\text{WT}}(\mathbf{s})\propto[P(\mathbf{s})]^{1/\gamma}$, where $\gamma>1$
is known as the bias factor. Notice that the exact target distribution
$p^{\text{tg}}(\mathbf{x})$ is not known but this does not constitute a
problem. In fact, the core requirements are that the corresponding bias
potential can be written explicitly and that the target distribution is easy
to sample, so that the kinetic bottlenecks between metastable states are
removed. This is
indeed guaranteed for the well-tempered distribution, given that the CVs are
chosen properly and the bias factor is large enough. The case of uniform
marginal target distribution can be seen as a special case of the well-
tempered one, where $\gamma=\infty$.
When using OPES for CVs sampling we need to estimate $P(\mathbf{s})$. To do
so, we use a weighted kernel density estimation with an automatic kernel
merging algorithm, that is explained in detail in Ref. [3]. We also introduce
a regularization term $\epsilon$ and a normalization $Z$ over the explored
CV-space. At step $n$ the bias, Eq. (1), can be written as a function of the CVs:
$V_{n}(\mathbf{s})=(1-1/\gamma)\frac{1}{\beta}\log\left(\frac{P_{n}(\mathbf{s})}{Z_{n}}+\epsilon\right)\,,$
(4)
where $P_{n}(\mathbf{s})$ is the estimate of $P(\mathbf{s})$ obtained via
reweighting. Reference [3] presents the full derivation of this expression.
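As a toy sketch of Eq. (4), not the adaptive kernel-merging scheme of Ref. [3]: a fixed-bandwidth Gaussian kernel density estimate plays the role of $P_{n}(\mathbf{s})$, with illustrative values assumed for the visited CV points, the bandwidth, $\gamma$, and $\epsilon$.

```python
import numpy as np

# Toy version of the OPES bias for CV sampling, Eq. (4).  All numbers are
# illustrative assumptions.
beta, gamma, eps = 1.0, 10.0, 1e-3
s_visited = np.array([-1.2, -1.0, -0.9, 1.1])   # CV values seen so far
sigma = 0.3                                      # fixed kernel bandwidth

def P_n(s):
    """Unnormalized KDE estimate of the marginal P(s)."""
    return np.mean(np.exp(-0.5 * ((s - s_visited) / sigma) ** 2))

# Normalization Z_n over the explored CV-space (simple Riemann sum).
grid = np.linspace(-3.0, 3.0, 601)
Z_n = sum(P_n(s) for s in grid) * (grid[1] - grid[0])

def V_n(s):
    """Eq. (4): V_n(s) = (1 - 1/gamma) (1/beta) log(P_n(s)/Z_n + eps)."""
    return (1.0 - 1.0 / gamma) / beta * np.log(P_n(s) / Z_n + eps)
```

The bias is highest where $P_{n}(\mathbf{s})$ is largest, so the force $-\nabla V_{n}$ pushes the system away from already-visited regions, while $\epsilon$ caps the penalty in unexplored regions.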
## 4 OPES for expanded ensembles sampling
To define the expanded ensemble target distribution, we first define a class
of probability distributions $P_{\lambda}(\mathbf{x})\propto e^{-\beta
U_{\lambda}(\mathbf{x})}$, where $\lambda$ can be a parameter (e.g. the
temperature) or a set of parameters, and $U_{0}$ is the original system
potential. For simplicity we only consider nonweighted expanded ensembles, as
done in Ref. [4]. The expanded ensemble contains a discrete set
$\\{\lambda\\}$ of $N_{\\{\lambda\\}}$ parameters such that the corresponding
$P_{\lambda}(\mathbf{x})$ have an overlap in the configuration space. We can
write the expanded target distribution as:
$p_{\\{\lambda\\}}(\mathbf{x})=\frac{1}{N_{\\{\lambda\\}}}\sum_{\lambda}P_{\lambda}(\mathbf{x})\,.$
(5)
One can then define the expansion collective variables as $\Delta
u_{\lambda}(\mathbf{x})=\beta U_{\lambda}(\mathbf{x})-\beta U_{0}(\mathbf{x})$
and use them to write the bias potential at step $n$:
$V_{n}(\mathbf{x})=-\frac{1}{\beta}\log\left(\frac{1}{N_{\\{\lambda\\}}}\sum_{\lambda}e^{-\Delta
u_{\lambda}(\mathbf{x})+\beta\Delta F_{n}(\lambda)}\right)\,,$ (6)
where $\Delta F_{n}(\lambda)$ are the estimates of the free energy differences
between the unbiased system $U_{0}$ and the one at a given $\lambda$. These
are obtained via on-the-fly reweighting, similarly to $P_{n}(\mathbf{s})$ in
Sec. 3, but this time without the need for kernel density estimation as
$\\{\lambda\\}$ is a discrete set. The details of the derivation are explained
in Ref. [4].
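For concreteness, here is a sketch of the multithermal case of Eq. (6), where the expansion CVs reduce to $\Delta u_{\lambda}(\mathbf{x})=(\beta_{\lambda}-\beta)U(\mathbf{x})$; the temperature ladder and the free-energy estimates $\Delta F_{n}(\lambda)$ below are made-up placeholder numbers (in OPES they are learned on the fly, as described above).

```python
import numpy as np

# Sketch of the expanded-ensemble bias, Eq. (6), for a multithermal target.
# Temperatures and free-energy estimates are illustrative placeholders.
kB = 1.0
T0 = 1.0
beta0 = 1.0 / (kB * T0)                    # beta of the simulated system
temps = np.array([1.0, 1.5, 2.0, 2.5])     # the discrete set {lambda}
betas = 1.0 / (kB * temps)
dF = np.array([0.0, -2.0, -3.5, -4.5])     # stand-ins for Delta F_n(lambda)

def bias(U):
    # Expansion CVs, multithermal case: Delta u_l = (beta_l - beta0) * U
    du = (betas - beta0) * U
    # V_n(x) = -(1/beta0) log( (1/N) sum_l exp(-Delta u_l + beta0 dF_l) )
    return -np.log(np.mean(np.exp(-du + beta0 * dF))) / beta0
```

Note that, as in the single-lambda limit, if the set contains only $\lambda=0$ with $\Delta F=0$ the bias vanishes identically.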
Finally, we notice that often it is possible to rewrite Eq. (6) so that,
similarly to Eq. (4), the bias is a function of only a small number of CVs.
For example, in case of a multithermal expanded target distribution the bias
can be expressed as a function of the potential energy only [4].
## 5 Example: alanine dipeptide
Figure 1: Marginal probabilities over the $\phi$ angle and the potential
energy for OPES simulations of alanine dipeptide with different target
distributions. The Boltzmann distribution as obtained via reweighting is also
shown. Top row: well-tempered distribution over $\phi$, $\gamma=50$. Bottom
row: temperature-expanded ensemble, from 300 to 1000 K.
As an example we consider alanine dipeptide in vacuum at 300 K, which is a
prototypical system for enhanced sampling. It presents two main metastable
basins, which can be characterized using the torsion angle $\phi$ as CV. The
most stable one contains two minima and lies in the region where $\phi$ is
negative, while the second basin has one minimum at $\phi\simeq 1$. A standard
unbiased simulation would suffer the sampling problem and almost never make
transitions between the two basins. In Fig. 1 we show different target
distributions that can be used to study alanine, by plotting their marginal
along $\phi$ and the potential energy. On the top, Fig. 1a, is the well-
tempered ensemble over $\phi$ ($\gamma=50$), which allows the system to easily
visit both basins, and greatly increases the probability of sampling
intermediate configurations around $\phi=0$. On the bottom, Fig. 1b, is an
expanded ensemble that combines four different temperatures, starting from 300
K up to 1000 K. At higher temperatures alanine explores configurations with
higher energy where the barrier between the two basins is smaller. Also in
this case the probability of visiting configurations around $\phi=0$ is
increased, though less markedly. On the other hand, from this second simulation it
is possible to retrieve statistics about a whole range of temperatures,
instead of 300 K only. The two target distributions presented are clearly
different but we can see that, once reweighting is performed, one retrieves
the same underlying Boltzmann probability (dashed lines).
Figure 2: Marginal probabilities over the $\phi$ angle and the potential
energy for OPES simulations of alanine dipeptide with different expanded
target distributions. The Boltzmann distribution as obtained via reweighting
is also shown. Top row: multiumbrella distribution over $\phi$, with 43 evenly
spaced umbrellas. Notice that it looks quite different from Fig. 1a, even
though in the limit of infinite umbrellas and $\gamma=\infty$ they would both
sample a uniform distribution over $\phi$. Bottom row: combined multithermal
and multiumbrella distribution, with 43 umbrellas and temperature range from
300 to 1000 K.
It is well known that it is possible to enhance the sampling along a CV also
using an expanded ensemble as target, e.g. by combining multiple umbrella
sampling distributions [12]. In Fig. 2a one can see that a multiumbrella
target over $\phi$ can look very different from a well-tempered target over
the same CV, Fig. 1a. However, in both cases the system easily transitions
from one metastable basin to the other and the probability of being in the
transition region (around $\phi=0$) is greatly increased compared to the
unbiased case. By increasing the number of umbrellas in the target we can
eventually reach a uniform marginal, similar to the well-tempered one with
$\gamma=\infty$. Finally, it is also possible to combine different expansions
in the same target, as shown in Fig. 2b, where the used bias is a function
both of the angle $\phi$ and of the potential energy $U$.
All the simulation details, together with the inputs and the trajectories are
available online on the PLUMED-NEST repository (www.plumed-nest.org,
plumID:21.006) [13].
## 6 Conclusion
We briefly presented a target distribution perspective on enhanced sampling
and the on-the-fly probability enhanced sampling method (OPES), that have been
developed in Refs. [3, 4]. OPES is a general and flexible method that can be
used to sample different types of target distributions. It is also easy to use
and robust with respect to suboptimal collective variables [14]. It has been
implemented as a contributed module in the open source library PLUMED [15,
16], and has been already used for various applications [17, 18, 19, 20, 21].
We believe OPES can be a handy tool for anyone interested in enhanced sampling
and it also has the capability of supporting novel types of target
distributions.
###### Acknowledgements.
The author thanks Luigi Bonati for carefully reading the manuscript. This
research was supported by the NCCR MARVEL, funded by the Swiss National
Science Foundation, and European Union Grant No. ERC-2014-AdG-670227/VARMET.
## References
* [1] Bolhuis P. G. et al., Annu. Rev. Phys. Chem. 53 (2002) 1
* [2] Skilling J., Tech. Rep. 1 (2006) 4
* [3] Invernizzi M. and Parrinello M., J. Phys. Chem. Lett. 11 (2020) 7
* [4] Invernizzi M., Piaggi P. M. and Parrinello M., Phys. Rev. X 10 (2020) 4
* [5] Torrie G. M. and Valleau J. P., Chem. Phys. Lett. 28 (1974) 4
* [6] Laio A. and Parrinello M., P. Natl. Acad. Sci. 99 (2002) 20
* [7] Mezei M., J. Comput. Phys. 68 (1987) 1
* [8] Barducci A., Bussi G. and Parrinello M., Phys. Rev. Lett. 100 (2008) 2
* [9] Marinari E. and Parisi G., Europhys. Lett. 19 (1992) 6
* [10] Sugita Y. and Okamoto Y., Chem. Phys. Lett. 314 (1999) 1-2
* [11] Lyubartsev A. P. et al., J. Chem. Phys. 96 (1992) 3
* [12] Sugita Y., Kitao A. and Okamoto Y., J. Chem. Phys. 113 (2000) 15
* [13] The PLUMED consortium, Nat. Methods 16 (2019) 8
* [14] Invernizzi M. and Parrinello M., J. Chem. Theory Comput. 15 (2019) 4
* [15] Tribello G. A. et al., Comput. Phys. Commun. 185 (2014) 2
* [16] www.plumed.org/doc-master/user-doc/html/_o_p_e_s.html
* [17] Mandelli D., Hirshberg B. and Parrinello M., Phys. Rev. Lett. 125 (2020) 2
* [18] Bonati L., Rizzi V. and Parrinello M., J. Phys. Chem. Lett. 11 (2020) 8
* [19] Rizzi V. et al., Nat. Commun. 12 (2021) 1
* [20] Piaggi P. M. et al., arXiv:2101.04806 (2021)
* [21] Karmakar T. et al., arXiv:2101.03150 (2021)
|
# Deep Compression of Neural Networks for Fault Detection on Tennessee
Eastman Chemical Processes
Mingxuan Li
Department of Computer Science
University of North Carolina at Chapel Hill
Chapel Hill, NC, USA
<EMAIL_ADDRESS>
Yuanxun Shao
Nuro Inc.
Mountain View, CA, USA
<EMAIL_ADDRESS>
###### Abstract
Artificial neural networks have achieved state-of-the-art performance in fault
detection on the Tennessee Eastman process, but they often require enormous
memory to store their massive numbers of parameters. In order to implement
online real-time fault detection, three deep compression techniques (pruning,
clustering, and quantization) are applied to reduce the computational burden.
We have extensively studied 7 different combinations of compression
techniques; all methods achieve high model compression rates over 64% while
maintaining high fault detection accuracy. The best result is obtained by
applying all three techniques, which reduces the model sizes by 91.5% while
retaining an accuracy above 94%. This result leads to a smaller storage
requirement in production environments, and makes deployment in the real world
smoother.
###### Index Terms:
Deep Compression, Neural Networks, Fault Detection
## I Introduction
Artificial Neural Networks (ANNs) are a powerful technique that can classify
input data into predefined categories. ANNs have been successfully deployed in
an enormous number of real-life applications with superior results, including
computer vision [1], natural language processing [2], and autonomous driving
[3]. For a classification problem, input data is processed through a series of
weight matrices, bias vectors, and nonlinear activation functions. Followed by
a classifier, the ANN eventually calculates a likelihood score for each
category. During training, the ANN maximizes the likelihood of the true
category (or, equivalently, minimizes the negative log-likelihood) over the
training data sets via variants of stochastic gradient descent methods. For
inference, we run the forward path and classify testing data into the category
with the highest probability score. Even though ANNs have shown great success
in many fields, the enormous model size caused by a large number of weights
can result in an overwhelming computational burden (e.g. taking huge memory
and being very power demanding). In application scenarios that have memory or
energy consumption limitations, such as mobile platforms or browser-based
systems, a smaller ANN with high accuracy is not only necessary but also in
high demand.
Deep compression is a powerful technique for reducing the size of an ANN while
retaining high accuracy. Three techniques can be used to achieve this goal:
pruning, clustering, and quantization. Pruning is the process of removing
relatively insignificant weights from weight matrices. Since the matrices can
be stored in a sparse matrix data type, the removal of any item in a matrix
directly reduces its size. Clustering, also known as weight sharing, is the
process of using the same value for multiple weights. Since one reference can
be shared by several weights, this technique reduces the storage space needed
for the weights. Lastly, quantization is the technique of storing weights in
data types with lower precision in exchange for a smaller storage footprint.
By combining these three techniques, we are able to reduce the size of an ANN
while maintaining a similar accuracy.
Fault detection, as an essential step of industrial production to ensure
efficiency and safety, has been an active field of research in recent years.
As the complexity of production increases dramatically due to more advanced
technologies, deep learning has become an increasingly popular candidate for
fault detection applications due to its potential in handling complex systems
with large numbers of measurements and many failure types. In this paper, we
focus on the Tennessee Eastman process, a representative case for fault
detection and diagnostics. The Tennessee Eastman process is a typical
industrial process that consists of five main process units. It has 52
measurements and 21 fault types as defined by Downs and Vogel in [4].
Moreover, for this widely used fault detection case, the use of ANNs to
diagnose faults has been shown to give satisfying results. To achieve the best
accuracy, instead of using a single ANN to classify the fault type, the method
in [5] uses different ANNs for different fault types and achieves 97.73%
average accuracy. However, this method requires one deep learning model for
each fault type, which requires large storage space and computational effort
in total. The Tennessee Eastman process requires online detection ability with
low latency. Thus, deep compression of ANNs is an ideal candidate for reducing
the computational burden, speeding up the online inference time, and
maintaining the classification accuracy.
In this paper, deep compression of artificial neural networks (ANNs) is used
for fault detection on the Tennessee Eastman chemical process to enable faster
online inference with high accuracy. Sections II and III introduce the basics
of deep learning and deep compression. Section IV discusses the details of the
deep learning architecture and the deep compression results for fault
detection on the Tennessee Eastman chemical process. Finally, Section V
summarizes this paper.
## II Deep Learning
An ANN usually has 4 different types of layers: input layer, hidden layers,
softmax layer, and output layer. The input layer is a weight matrix designed
to pass the input data into the ANN through a linear mapping. In the hidden
layers, the input information is transformed through a matrix multiplication
and an activation function: $h_{1}=\sigma(W_{1}x+b_{1})$,
$h_{i}=\sigma(W_{i}h_{i-1}+b_{i}),i=\\{2,3,4,...,k\\}$, where
$x\in\mathbb{R}^{n_{x}}$ is the vector of input data,
$h_{i}\in\mathbb{R}^{n_{h_{i}}}$ is the $i$-th hidden layer representation,
$W_{i}\in\mathbb{R}^{n_{h_{i}}\times{n_{h_{i-1}}}}$ and
$b_{i}\in\mathbb{R}^{n_{h_{i}}}$ are the weights and bias connecting the
(i-1)-th and i-th hidden representations, $k$ is the number of hidden layers,
and $\sigma$ is a nonlinear activation function that introduces nonlinearity
into the neural network. Specifically, the ReLU activation function,
$\sigma(x)=max(0,x)$, is used in this paper. The last fully-connected hidden
layer does not impose the activation function. Then, a softmax layer
calculates the score/probability of each class via
$\frac{e^{f_{y_{i}}}}{\sum_{j}e^{f_{j}}}$, where $f_{y_{i}}$ is the score of
the correct category, and $f_{j}$ is the score of the $j$-th category.
Finally, the loss is calculated based on the true label and regularization.
The objective of the network is to minimize the loss and maximize the accuracy
on both the training and testing datasets.
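The forward pass described above can be sketched in a few lines of numpy; the layer sizes and random weights below are illustrative, not the trained fault-detection models of Section IV.

```python
import numpy as np

# Toy forward pass: ReLU hidden layers, a final linear layer, then softmax.
rng = np.random.default_rng(0)
sizes = [52, 64, 32, 2]                    # illustrative architecture
Ws = [rng.normal(scale=0.1, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.maximum(0.0, W @ h + b)     # hidden layers: ReLU(Wx + b)
    f = Ws[-1] @ h + bs[-1]                # last layer: no activation
    e = np.exp(f - f.max())                # numerically stable softmax
    return e / e.sum()                     # class probabilities, sum to 1

p = forward(rng.normal(size=52))
```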
Figure 1: ANN Structure
## III Deep Compression
Neural networks usually require enormous memory to store their massive
numbers of parameters. In this section, three deep compression techniques are
discussed that reduce the model size while maintaining high accuracy.
### III-A Pruning
Network pruning utilizes the connectivity between neurons. Small weights
under a predefined threshold are removed to reduce the model size. As shown in
Figure 2, the left figure is the original weight matrix. After pruning with a
threshold of 0.1, the pruned weights are illustrated in the right figure. The
sparsity of the matrix reduces the model size, and can speed up the
calculation with hardware optimizations.
Figure 2: The left figure is the original weight matrix, and the right figure
is the matrix after pruning with a threshold 0.1. Highlighted weights are
removed.
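A minimal sketch of the pruning step of Figure 2, using the same threshold of 0.1 on a made-up weight matrix; after thresholding, the matrix can be stored in a sparse format.

```python
import numpy as np

# Magnitude pruning: zero out weights whose absolute value is below the
# threshold.  The matrix values are illustrative.
W = np.array([[ 0.31, -0.05,  0.02],
              [ 0.08,  0.47, -0.22],
              [-0.01,  0.09,  0.60]])
threshold = 0.1
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
# Only 4 of the 9 weights survive; the zeros need not be stored explicitly
# once the matrix is kept in a sparse data type.
```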
### III-B Clustering
Clustering is a technique that groups weights with close values. First,
weights are divided into a fixed number of groups. Second, values in the same
group will be reset as the same value and thus all weights in the same group
can be referenced to the same address to reduce model size. In Figure 3, the
left top figure is a weight matrix before clustering. Weights in this original
matrix are grouped into 6 groups, and all weights in the same group are each
assigned with a same value. The gradient of each weight is calculated through
back-propagation and the gradient of each group is the sum over all elements.
Finally, weights are updated through the cluster groups. Clustering
essentially minimizes the amount of references needed for a model, and thus
reduces the storage requirement of this model.
Figure 3: An illustration of the gradient update for clustered weights: the
top left figure is the original weight matrix colored according to clustered
groups; the top middle figure is the corresponding group indices; the bottom
left figure is the gradient of the weight matrix; the bottom middle figure is
the grouped gradients according to the centroids; the bottom right figure
shows the sum of gradients of each group; and the top right figure represent
the final gradient update of clusters.
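The grouping step can be sketched with a simple 1-D k-means over the weights; the matrix size and the number of groups are illustrative choices, not the exact procedure of the figure.

```python
import numpy as np

# Weight clustering (weight sharing): 1-D k-means groups the weights, then
# each weight is replaced by its group centroid, so only the centroids plus
# small group indices need to be stored.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
k = 6                                           # number of shared values
centroids = np.linspace(W.min(), W.max(), k)    # simple initialization
flat = W.reshape(-1)
for _ in range(20):                             # a few k-means iterations
    groups = np.argmin(np.abs(flat[:, None] - centroids), axis=1)
    for j in range(k):
        if np.any(groups == j):                 # skip empty clusters
            centroids[j] = flat[groups == j].mean()
W_clustered = centroids[groups].reshape(W.shape)
```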
### III-C Quantization
In an ANN, weights are usually stored in a high-precision format such as
float64 or float32, which consumes a relatively large space. In the
quantization process, the precision used to store the weights is degraded in
exchange for lower space consumption. In Figure 4, weights in the left figure
are stored with one decimal, while the right figure reduces the storage
requirement by rounding off the decimal value.
Figure 4: The left figure is the weight matrix before quantization; the right
figure is the weight matrix after quantization.
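A minimal quantization sketch: casting float64 weights to float16 cuts per-weight storage from 8 to 2 bytes, and a coarser scheme (as in Figure 4) simply rounds off the decimals; the matrix values are a made-up example.

```python
import numpy as np

# Quantization: trade precision for storage.
W = np.array([[0.31, -0.05],
              [0.47, -0.22]], dtype=np.float64)
W_f16 = W.astype(np.float16)          # 8 bytes -> 2 bytes per weight
W_coarse = np.round(W)                # round off the decimal, as in Figure 4
size_ratio = W_f16.nbytes / W.nbytes  # 0.25
```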
## IV Fault Detection for the Tennessee Eastman Chemical Process
The Tennessee Eastman process is a benchmark for fault detection [5]. Based on
a baseline ANN structure, various combinations of deep compression techniques
are applied to reduce the model size and achieve a high accuracy.
### IV-A Tennessee Eastman Process Description
As summarized in Table I, the Tennessee Eastman process provides 52
measurements and 21 different fault types (including ”no fault” as faultNumber
0) in its dataset. Moreover, Chiang [6] and Zhang [7] pointed out that, due to
the absence of observable changes in the mean, the variance and the higher
order variances of the data, there is not enough information to detect Faults
3, 9, and 15. Therefore, this paper follows the common practice of excluding
these three faults from consideration, resulting in 18 classification types
(including ”no fault” as faultNumber 0).
Fault Number | Process Variable | Type
---|---|---
faultNumber 1 | A/C feed ratio, B composition constant (stream 4) | Step
faultNumber 2 | B composition, A/C ratio constant (stream 4) | Step
faultNumber 3 | D feed temperature (stream 2) | Step
faultNumber 4 | Reactor cooling water inlet temperature | Step
faultNumber 5 | Condenser cooling water inlet temperature | Step
faultNumber 6 | A feed loss (stream 1) | Step
faultNumber 7 | C header pressure loss - reduced availability (stream 4) | Step
faultNumber 8 | A, B, C feed composition (stream 4) | Random variation
faultNumber 9 | D feed temperature (stream 2) | Random variation
faultNumber 10 | C feed temperature (stream 4) | Random variation
faultNumber 11 | Reactor cooling water inlet temperature | Random variation
faultNumber 12 | Condenser cooling water inlet temperature | Random variation
faultNumber 13 | Reaction kinetics | Slow drift
faultNumber 14 | Reactor cooling water valve | Sticking
faultNumber 15 | Condenser cooling water valve | Sticking
faultNumber 16 | Unknown | Unknown
faultNumber 17 | Unknown | Unknown
faultNumber 18 | Unknown | Unknown
faultNumber 19 | Unknown | Unknown
faultNumber 20 | Unknown | Unknown
TABLE I: Fault Types in the TE Process
### IV-B Baseline Artificial Neural Network Models
In the Tennessee Eastman process measurements, one entry in the dataset might
correspond to multiple fault types. Following the state-of-the-art method in
[5], we construct and train 18 different ANNs to detect each of the 18 fault
types. In the baseline, each ANN has 6 hidden layers, in the structure
$52-64-256-128-256-128-64-2$. The input layer dimension is 52, which
corresponds to the 52 measurements. The output layer has a dimension of 2,
which corresponds to the 2 possible outcomes (fault or no fault). The
resulting 18 models serve as the baseline for comparison with the compressed
models.
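As a back-of-the-envelope sanity check (our illustration, not a calculation from the paper): counting one weight matrix plus one bias vector per layer of the $52-64-256-128-256-128-64-2$ architecture gives about 127 k parameters, i.e. roughly 509 kB at 4 bytes per float32 parameter, the same order as the ~487 kB model sizes reported in the tables below.

```python
# Parameter count for the baseline architecture 52-64-256-128-256-128-64-2:
# each layer contributes an (n_out x n_in) weight matrix and an n_out bias.
sizes = [52, 64, 256, 128, 256, 128, 64, 2]
n_params = sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))
bytes_fp32 = 4 * n_params   # raw float32 storage, before file-format overhead
```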
### IV-C Deep Compression Results
In this section, all 7 different combinations of pruning, clustering, and
quantization are applied to the 18 ANNs. First, each of the three compression
techniques is applied individually to examine its performance. Then, each pair
of the compression techniques is applied to the original 18 ANNs. Lastly, all
three techniques are applied together to the original ANNs to compare the
compression rate and accuracy change.
The compression results of the ANNs obtained by applying each of the three
techniques individually are shown in Tables II, III, and IV. As shown in Table
II, when only pruning is applied, the average compressed rate is 64.0%, and
the average accuracy change is -0.5% over all 18 different fault labels. This
result is promising considering that the average accuracy only changed
slightly, while the average model size is significantly reduced by 64.0%.
Table III shows the compression result after clustering only, which achieves a
76.0% average compressed rate and only a -1.2% average accuracy change. This
method is the best among combinations that only apply a single compression
technique. Table IV presents the result after quantization only, which
achieves an average compression rate of 72.0% and an average accuracy change
of -7.8%.
Fault | ANN Size (bytes) | ANN Acc (%) | Compressed Size (bytes) | Compressed Acc (%) | Compressed Rate (%) | Acc Change (%)
---|---|---|---|---|---|---
0 | 486924 | 94.4 | 175207 | 94.4 | 64.0 | 0.0
1 | 487120 | 99.4 | 175235 | 95.3 | 64.0 | -4.1
2 | 486853 | 99.6 | 174988 | 99.6 | 64.1 | 0.0
3 | 486931 | 96.1 | 175159 | 94.4 | 64.0 | -1.7
4 | 486901 | 95.2 | 175354 | 95.3 | 64.0 | 0.1
5 | 486753 | 95.6 | 174983 | 99.8 | 64.1 | 4.2
6 | 486914 | 99.8 | 174765 | 99.8 | 64.1 | 0.0
7 | 486992 | 97.9 | 175556 | 98.5 | 64.0 | 0.6
8 | 486789 | 95.3 | 175412 | 95.3 | 64.0 | 0.0
9 | 486782 | 94.4 | 175086 | 94.4 | 64.0 | 0.0
10 | 486737 | 95.3 | 175564 | 95.3 | 64.0 | 0.0
11 | 487124 | 97.6 | 175382 | 97.3 | 64.0 | -0.3
12 | 486838 | 96.4 | 175283 | 99.1 | 64.0 | 2.7
13 | 486819 | 97.1 | 175111 | 95.3 | 64.0 | -1.8
14 | 486844 | 96.8 | 175266 | 96.8 | 64.0 | 0.0
15 | 486899 | 96.4 | 175400 | 92.0 | 64.0 | -4.4
16 | 486816 | 95.3 | 175605 | 94.4 | 64.0 | -0.9
17 | 486844 | 97.6 | 175296 | 94.4 | 64.0 | -3.2
Average | | | | | 64.0 | -0.5
TABLE II: Compression results with pruning
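For clarity, the ”Compressed Rate” column is the relative size reduction between the original and compressed model files; e.g., for the fault-0 row of Table II:

```python
# Compressed rate = 100 * (1 - compressed_size / original_size),
# here for the fault-0 row of Table II (486924 -> 175207 bytes).
original_size, compressed_size = 486924, 175207
rate = 100.0 * (1.0 - compressed_size / original_size)   # about 64.0
```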
Fault | ANN Size (bytes) | ANN Acc (%) | Compressed Size (bytes) | Compressed Acc (%) | Compressed Rate (%) | Acc Change (%)
---|---|---|---|---|---|---
0 | 486924 | 94.4 | 117313 | 94.4 | 75.9 | 0.0
1 | 487120 | 99.4 | 117725 | 99.1 | 75.8 | -0.3
2 | 486853 | 99.6 | 116627 | 99.4 | 76.0 | -0.2
3 | 486931 | 96.1 | 118057 | 94.4 | 75.8 | -1.7
4 | 486901 | 95.2 | 116922 | 94.4 | 76.0 | -0.8
5 | 486753 | 95.6 | 115861 | 99.6 | 76.2 | 4.0
6 | 486914 | 99.8 | 114344 | 94.4 | 76.5 | -5.4
7 | 486992 | 97.9 | 115685 | 94.9 | 76.2 | -3.0
8 | 486789 | 95.3 | 116442 | 94.4 | 76.1 | -0.9
9 | 486782 | 94.4 | 117599 | 94.4 | 75.8 | 0.0
10 | 486737 | 95.3 | 117798 | 94.4 | 75.8 | -0.9
11 | 487124 | 97.6 | 115934 | 94.7 | 76.2 | -2.9
12 | 486838 | 96.4 | 117672 | 94.4 | 75.8 | -2.0
13 | 486819 | 97.1 | 116671 | 94.4 | 76.0 | -2.7
14 | 486844 | 96.8 | 118284 | 96.3 | 75.7 | -0.5
15 | 486899 | 96.4 | 116166 | 94.7 | 76.1 | -1.7
16 | 486816 | 95.3 | 117600 | 94.4 | 75.8 | -0.9
17 | 486844 | 97.6 | 117014 | 94.5 | 76.0 | -3.1
Average | | | | | 76.0 | -1.2
TABLE III: Compression results with clustering
Fault | ANN Size (bytes) | ANN Acc (%) | Compressed Size (bytes) | Compressed Acc (%) | Compressed Rate (%) | Acc Change (%)
---|---|---|---|---|---|---
0 | 486924 | 94.4 | 137620 | 94.4 | 71.7 | 0.0
1 | 487120 | 99.4 | 137030 | 95.9 | 71.9 | -3.5
2 | 486853 | 99.6 | 137482 | 98.8 | 71.8 | -0.8
3 | 486931 | 96.1 | 136678 | 94.4 | 71.9 | -1.7
4 | 486901 | 95.2 | 137766 | 94.4 | 71.7 | -0.8
5 | 486753 | 95.6 | 137140 | 99.5 | 71.8 | 3.9
6 | 486914 | 99.8 | 136784 | 13.7 | 71.9 | -86.1
7 | 486992 | 97.9 | 137008 | 94.8 | 71.9 | -3.1
8 | 486789 | 95.3 | 137479 | 94.4 | 71.8 | -0.9
9 | 486782 | 94.4 | 137911 | 94.4 | 71.7 | 0.0
10 | 486737 | 95.3 | 137582 | 94.4 | 71.7 | -0.9
11 | 487124 | 97.6 | 137075 | 95.5 | 71.9 | -2.1
12 | 486838 | 96.4 | 137428 | 71.8 | 75.8 | -25.6
13 | 486819 | 97.1 | 137064 | 82.9 | 71.8 | -14.2
14 | 486844 | 96.8 | 137421 | 95.8 | 71.8 | -1.0
15 | 486899 | 96.4 | 137109 | 96.1 | 71.8 | -0.3
16 | 486816 | 95.3 | 137560 | 94.4 | 71.7 | -0.9
17 | 486844 | 97.6 | 137238 | 95.0 | 71.8 | -2.6
Average | | | | | 72.0 | -7.8
TABLE IV: Compression results with Quantization
Next, a comprehensive study of applying two compression techniques at a time
is conducted. Table V shows that when utilizing both clustering and pruning,
the average compressed rate increases to 87.3%, while the average accuracy
change is only -1.9%. Moreover, the compression result is consistent among all
fault types. The variances of the compressed rate and the accuracy change are
0.16 and 1.72, respectively, which shows that there are few fluctuations in
compression rate and accuracy change; thus the average compression rate and
average accuracy change are reliably representative. Table VI shows the
compression result with both pruning and quantization. The average compression
rate is 88.1%, which is about the same as the result in Table V. But the
average accuracy change drops to -5.3%, which is the largest drop among
methods that apply two compression techniques. Table VII presents the
compression result with both clustering and quantization. The average accuracy
change is -1.8%, which is close to the average accuracy change in Table V. The
compression rate is 82.2%, which is 5.1% smaller than the result in Table V.
In general, applying two compression techniques achieves better results than
applying only a single technique.
Fault | ANN Size (bytes) | ANN Acc (%) | Compressed Size (bytes) | Compressed Acc (%) | Compressed Rate (%) | Acc Change (%)
---|---|---|---|---|---|---
0 | 486924 | 94.4 | 62014 | 94.4 | 87.3 | 0.0
1 | 487120 | 99.4 | 61788 | 95.3 | 87.3 | -4.1
2 | 486853 | 99.6 | 63234 | 95.3 | 87.0 | -4.3
3 | 486931 | 96.1 | 61694 | 94.4 | 87.3 | -1.7
4 | 486901 | 95.2 | 62078 | 95.3 | 87.3 | 0.1
5 | 486753 | 95.6 | 62995 | 95.3 | 87.0 | -0.3
6 | 486914 | 99.8 | 62065 | 95.3 | 87.3 | -4.5
7 | 486992 | 97.9 | 60908 | 95.3 | 87.5 | -2.6
8 | 486789 | 95.3 | 60772 | 95.3 | 87.5 | 0.0
9 | 486782 | 94.4 | 61664 | 94.4 | 87.3 | 0.0
10 | 486737 | 95.3 | 62178 | 95.3 | 87.2 | 0.0
11 | 487124 | 97.6 | 61671 | 95.3 | 87.3 | -2.3
12 | 486838 | 96.4 | 62292 | 95.3 | 87.2 | 0.0
13 | 486819 | 97.1 | 61002 | 95.3 | 87.5 | -1.8
14 | 486844 | 96.8 | 62443 | 92.1 | 87.2 | -4.7
15 | 486899 | 96.4 | 60988 | 94.4 | 87.5 | -2.0
16 | 486816 | 95.3 | 60709 | 94.4 | 87.5 | -0.9
17 | 486844 | 97.6 | 61086 | 94.4 | 87.5 | -3.2
Average | | | | | 87.3 | -1.9
TABLE V: Compression results with Pruning and Clustering
Fault | ANN Size (bytes) | ANN Acc (%) | Compressed Size (bytes) | Compressed Acc (%) | Compressed Rate (%) | Acc Change (%)
---|---|---|---|---|---|---
0 | 486924 | 94.4 | 57823 | 94.4 | 88.1 | 0.0
1 | 487120 | 99.4 | 58337 | 94.4 | 88.0 | -5.0
2 | 486853 | 99.6 | 57901 | 94.2 | 88.1 | -5.4
3 | 486931 | 96.1 | 58162 | 94.4 | 88.1 | -1.7
4 | 486901 | 95.2 | 58014 | 94.4 | 88.1 | -0.8
5 | 486753 | 95.6 | 57901 | 94.4 | 88.1 | -1.2
6 | 486914 | 99.8 | 58132 | 94.4 | 88.1 | -5.4
7 | 486992 | 97.9 | 57978 | 94.4 | 88.1 | -3.5
8 | 486789 | 95.3 | 57330 | 94.4 | 88.2 | -0.9
9 | 486782 | 94.4 | 58013 | 94.4 | 88.1 | 0.0
10 | 486737 | 95.3 | 58098 | 94.4 | 88.1 | -0.9
11 | 487124 | 97.6 | 57775 | 94.4 | 88.1 | -3.2
12 | 486838 | 96.4 | 58371 | 94.4 | 88.0 | -2.0
13 | 486819 | 97.1 | 56579 | 94.4 | 88.4 | -2.7
14 | 486844 | 96.8 | 58048 | 46.9 | 88.1 | -49.9
15 | 486899 | 96.4 | 57806 | 87.1 | 88.1 | -9.3
16 | 486816 | 95.3 | 57919 | 94.4 | 88.1 | -0.9
17 | 486844 | 97.6 | 57648 | 94.4 | 88.2 | -3.2
Average | | | | | 88.1 | -5.3
TABLE VI: Compression results with Pruning and Quantization
Fault | ANN Size (bytes) | ANN Acc (%) | Compressed Size (bytes) | Compressed Acc (%) | Compressed Rate (%) | Acc Change (%)
---|---|---|---|---|---|---
0 | 486924 | 94.4 | 86997 | 94.4 | 82.1 | 0.0
1 | 487120 | 99.4 | 86233 | 96.6 | 82.3 | -2.8
2 | 486853 | 99.6 | 86674 | 95.7 | 82.3 | -3.9
3 | 486931 | 96.1 | 85853 | 94.4 | 82.4 | -1.7
4 | 486901 | 95.2 | 86820 | 94.4 | 82.2 | -0.8
5 | 486753 | 95.6 | 86477 | 99.2 | 82.2 | 3.6
6 | 486914 | 99.8 | 85627 | 94.4 | 82.4 | -5.4
7 | 486992 | 97.9 | 86582 | 94.5 | 82.2 | -3.4
8 | 486789 | 95.3 | 86966 | 94.4 | 82.1 | -0.9
9 | 486782 | 94.4 | 86958 | 94.4 | 82.1 | 0.0
10 | 486737 | 95.3 | 86962 | 94.3 | 82.1 | -1.0
11 | 487124 | 97.6 | 86509 | 93.7 | 82.2 | -3.9
12 | 486838 | 96.4 | 86261 | 94.4 | 82.3 | -2.0
13 | 486819 | 97.1 | 86494 | 94.4 | 82.2 | -2.7
14 | 486844 | 96.8 | 86644 | 94.7 | 82.2 | -2.1
15 | 486899 | 96.4 | 86561 | 94.3 | 82.2 | -2.1
16 | 486816 | 95.3 | 86990 | 94.4 | 82.1 | -0.9
17 | 486844 | 97.6 | 86580 | 94.4 | 82.2 | -3.2
Average | | | | | 82.2 | -1.8
TABLE VII: Compression results with Clustering and Quantization
Finally, we examine the performance of applying all three compression
techniques. Table VIII shows that the average compression rate increases to
91.5% while the average accuracy change remains relatively small at -1.8%.
The variances of the compression rates and accuracy changes are only 0.14 and
1.83, respectively, which shows the consistency of the performance among all
fault types. Compared to the results of applying two techniques in Tables V,
VI, and VII, this method achieves the best compression rate with no extra
loss in accuracy, which indicates that applying all three techniques is
better than applying only two for fault diagnosis in the TE process. Compared
to the results of applying only one compression technique in Tables II, III,
and IV, although the average accuracy change decreases slightly, applying all
three techniques still achieves accuracies higher than 94% for all fault
types. At the same time, the compression rate of 91.5% is significantly
higher than 76%, the best result obtained by applying only one technique.
The results of all 7 different combinations of compression techniques are
summarized in Figure 5. It is clear that applying all three techniques
achieves the highest compression rate while maintaining a high average
accuracy above 94%. Thus, it is ideal to utilize all three compression
techniques for fault diagnosis in the TE process.
Fault | ANN Size (bytes) | ANN Acc (%) | Compressed Size (bytes) | Compressed Acc (%) | Compressed Rate (%) | Acc Change (%)
---|---|---|---|---|---|---
0 | 486924 | 94.4 | 41391 | 94.4 | 91.5 | 0.0
1 | 487120 | 99.4 | 41264 | 94.4 | 91.5 | -5.0
2 | 486853 | 99.6 | 42833 | 94.4 | 91.2 | -5.2
3 | 486931 | 96.1 | 41320 | 94.4 | 91.5 | -1.7
4 | 486901 | 95.2 | 42055 | 95.3 | 91.4 | 0.1
5 | 486753 | 95.6 | 42466 | 95.5 | 91.3 | -0.1
6 | 486914 | 99.8 | 41165 | 95.3 | 91.5 | -4.5
7 | 486992 | 97.9 | 40674 | 95.3 | 91.6 | -2.6
8 | 486789 | 95.3 | 40523 | 95.3 | 91.7 | 0.0
9 | 486782 | 94.4 | 41471 | 94.4 | 91.5 | 0.0
10 | 486737 | 95.3 | 41642 | 95.3 | 91.4 | 0.0
11 | 487124 | 97.6 | 41741 | 95.3 | 91.4 | -2.3
12 | 486838 | 96.4 | 42006 | 95.3 | 91.4 | -1.1
13 | 486819 | 97.1 | 40762 | 95.3 | 91.6 | -1.8
14 | 486844 | 96.8 | 42055 | 95.3 | 91.4 | -1.5
15 | 486899 | 96.4 | 40559 | 94.4 | 91.7 | -2.0
16 | 486816 | 95.3 | 40741 | 94.4 | 91.6 | -0.9
17 | 486844 | 97.6 | 40543 | 94.4 | 91.7 | -3.2
Average | | | | | 91.5 | -1.8
TABLE VIII: Compression Statistics with Pruning, Clustering, and Quantization
Figure 5: The left plot shows compressed rates of all fault types over 7
different combinations of compression techniques, where P = pruning, C =
clustering, and Q = quantization. The right plot shows the average accuracy
over all fault types with 7 different combinations of compression techniques.
Results of different compression methods are colored consistently in two
plots.
## V Conclusion
This paper studies deep compression techniques for fault diagnosis on the
Tennessee Eastman process. In response to the demand for fast online
detection in industrial processes, three compression techniques (pruning,
clustering, and quantization) are applied to reduce model size and
computational complexity. We have examined a comprehensive list of 7
different combinations of compression techniques. All methods achieve high
model compression rates over 64% while maintaining high fault detection
accuracy. The best candidate for fault detection on the Tennessee Eastman
chemical process is applying all three techniques, which reduces model sizes
by 91.5% while maintaining an accuracy above 94%. This result leads to
smaller storage requirements in production environments and makes real-world
deployment smoother.
(1) IMCCE, Observatoire de Paris, PSL Research University, CNRS, Sorbonne
Université, Université de Lille, 75014 Paris, France. Email:
<EMAIL_ADDRESS>
(2) Department of Mathematics, University of Pisa, Largo Bruno Pontecorvo 5,
56127 Pisa, Italy
# The future large obliquity of Jupiter
Melaine Saillenfest (1), Giacomo Lari (2), Ariane Courtot (1)
(Received 18 May 2020 / Accepted 3 June 2020)
###### Abstract
Aims. We aim to determine whether Jupiter’s obliquity is bound to remain
exceptionally small in the Solar System, or if it could grow in the future and
reach values comparable to those of the other giant planets.
Methods. The spin axis of Jupiter is subject to the gravitational torques from
its regular satellites and from the Sun. These torques evolve over time due to
the long-term variations of its orbit and to the migration of its satellites.
With numerical simulations, we explore the future evolution of Jupiter’s spin
axis for different values of its moment of inertia and for different migration
rates of its satellites. Analytical formulas show the location and properties
of all relevant resonances.
Results. Because of the migration of the Galilean satellites, Jupiter’s
obliquity is currently increasing, as it adiabatically follows the drift of a
secular spin-orbit resonance with the nodal precession mode of Uranus. Using
the current estimates of the migration rate of the satellites, the obliquity
of Jupiter can reach values ranging from $6^{\circ}$ to $37^{\circ}$ within
the next $5$ Gyr, depending on the precise value of its polar moment of inertia.
A faster migration for the satellites would produce a larger increase in
obliquity, as long as the drift remains adiabatic.
Conclusions. Despite its peculiarly small current value, the obliquity of
Jupiter is no different from other obliquities in the Solar System: It is
equally sensitive to secular spin-orbit resonances and it will probably reach
comparable values in the future.
###### Key Words.:
celestial mechanics, Jupiter, secular dynamics, spin axis, obliquity
## 1 Introduction
The obliquity of a planet is the angle between its spin axis and the normal to
its orbit. A non-zero obliquity results in seasonal climate changes along the
planet’s orbit, as occurs on Earth. In the protoplanetary disc, giant planets
are expected to form with near-zero obliquities, while terrestrial planets
should exhibit more random values (see e.g. Ward & Hamilton, 2004; Rogoszinski
& Hamilton, 2020a). Yet, the planets of the Solar System all feature a large
variety of obliquities. The case of Mercury is special because the strong
tidal dissipation due to the proximity of the Sun now tightly maintains
Mercury’s obliquity to a near-zero value (see e.g. Correia & Laskar, 2010).
Excluding Mercury, Jupiter is by far the planet of the Solar System that has
the smallest obliquity (see Table 1). This small value seems to put Jupiter in
a different category, and it appears unclear why Jupiter should be the only
giant planet to indefinitely preserve its primordial obliquity.
Large obliquity changes can be produced by strong impacts. An impact with a
planetary-sized body is thought to have created the Moon and affected the spin
axis of the Earth, which has remained unchanged ever since (Canup & Asphaug,
2001; Li & Batygin, 2014b). Large-scale collisions have also probably
participated in increasing the obliquity of Uranus (Boué & Laskar, 2010;
Morbidelli et al., 2012; Rogoszinski & Hamilton, 2020a).
Table 1: Current obliquities of the planets of the Solar System.
$\begin{array}[]{rrcrr}\hline\cr\hline\cr&\text{obliquity}&&&\text{obliquity}\\\
\hline\cr\text{Mercury}&0.03^{\circ}&&\text{Jupiter}&3.12^{\circ}\\\
\text{Venus}&177.36^{\circ}&&\text{Saturn}&26.73^{\circ}\\\
\text{Earth}&23.45^{\circ}&&\text{Uranus}&97.86^{\circ}\\\
\text{Mars}&25.19^{\circ}&&\text{Neptune}&29.56^{\circ}\\\
\hline\cr\end{array}$
Notes. Mercury's obliquity is taken from Konopliv et al. (2020). Other values
are taken from Murray & Dermott (1999), who cite the compilation made by
Yoder (1995).
Apart from collisions, a well-known mechanism that can modify the obliquity of
a planet is a so-called “secular spin-orbit resonance”, that is, a near
commensurability between the frequency of precession of the spin axis and the
frequency of one (or several) harmonics appearing in the precession of the
orbit. This mechanism happens to be extremely common in planetary systems. The
overlap of such resonances produces a large chaotic region for the spin axis
of the terrestrial planets of the Solar System (see Laskar & Robutel, 1993).
This chaos probably had a strong influence on the early obliquity of Venus,
which was then driven to its current value by the solar tides combined with
its thick atmosphere (Correia & Laskar, 2001; Correia et al., 2003; Correia &
Laskar, 2003). The Moon currently protects the Earth from large chaotic
variations in its obliquity (Laskar et al., 1993; Li & Batygin, 2014a), but
due to tidal dissipation within the Earth-Moon system, the Earth will
eventually reach the chaotic region in a few gigayears from now (see Néron de
Surgy & Laskar, 1997). This chaotic zone also strongly affects the obliquity
of Mars, which still currently wanders between $0^{\circ}$ and more than
$60^{\circ}$ (Laskar et al., 2004a; Brasser & Walsh, 2011). As shown by
Millholland & Batygin (2019), secular spin-orbit resonances can also take
place very early in the history of a planet, that is, within the
protoplanetary disc itself. More generally, secular spin-orbit resonances are
thought to strongly affect the obliquity of exoplanets (see e.g. Atobe et al.,
2004; Brasser et al., 2014; Deitrick et al., 2018b, a; Shan & Li, 2018;
Millholland & Laughlin, 2018, 2019; Quarles et al., 2019; Saillenfest et al.,
2019; Kreyche et al., 2020).
For the giant planets of the Solar System, the secular spin-orbit resonances
are relatively thin today and well separated from each other. This is why it
is so difficult to explain the large obliquity of Uranus by a spin-orbit
coupling, now that the precession of Uranus’ spin axis is far from any first-
order resonances (see e.g. Boué & Laskar, 2010, Rogoszinski & Hamilton, 2020a,
b). Jupiter and Saturn, on the contrary, are located very close to strong
resonances: Jupiter is close to resonance with the nodal precession mode of
Uranus (Ward & Canup, 2006), and Saturn is close to resonance with the nodal
precession mode of Neptune (Ward & Hamilton, 2004; Hamilton & Ward, 2004; Boué
et al., 2009). Therefore, the dynamics of Jupiter’s spin axis seems to be
equally affected by secular spin-orbit resonances as other planets in the
Solar System. This was confirmed by Brasser & Lee (2015) and Vokrouhlický &
Nesvorný (2015), who show that models of the late planetary migration have to
be finely tuned to avoid overexciting Jupiter’s obliquity by spin-orbit
coupling, while tilting Saturn to its current orientation. In this regard, the
spin-axis dynamics of Jupiter does not appear to be special at all, in
contrast to its small obliquity value.
In this article, we aim to investigate the future long-term spin-axis dynamics
of Jupiter. In particular, we want to determine whether Jupiter’s obliquity is
bound to remain exceptionally small in the Solar System, or if it could grow
in the future and reach values comparable to those of the other planets.
The precession motion of a planet’s spin axis depends on the physical
properties of the planet (mass repartition and spin velocity), but also on
external torques applied to its equatorial bulge. These torques come from the
combined gravitational attraction of the Sun and of satellites (if it has
any). Since the orbit of Jupiter is stable over billions of years (Laskar,
1990), the direct torque from the Sun will not noticeably change in the
future. However, Jupiter’s satellites are known to migrate over time because
of tidal dissipation. The future long-term orbital evolution of the Galilean
satellites has been recently explored by Lari et al. (2020). The solutions
that they describe can therefore be used as a guide to study the future spin-
axis dynamics of Jupiter. Due to their much smaller masses, the other
satellites of Jupiter do not contribute noticeably to its spin-axis dynamics.
In Sect. 2, we describe our dynamical model and discuss the range of
acceptable values for the physical parameters of Jupiter, in particular its
polar moment of inertia. In Sect. 3, we present our results about the future
spin-axis dynamics of Jupiter: We explore the outcomes given by different
values of the poorly known physical parameters of Jupiter and by different
migration rates for its satellites. Our conclusions are summarised in Sect. 4.
## 2 Secular dynamics of the spin axis
### 2.1 Equations of motion
The spin-axis dynamics of an oblate planet subject to the lowest-order term of
the torque from the Sun is given for instance by Laskar & Robutel (1993) or
Néron de Surgy & Laskar (1997). Far from spin-orbit resonances, and due to the
weakness of the torque, the long-term evolution of the spin axis is accurately
described by the secular Hamiltonian function (i.e. averaged over rotational
and orbital motions). This Hamiltonian can be written
$\displaystyle\mathcal{H}(X,-\psi,t)$
$\displaystyle=-\frac{\alpha}{2}\frac{X^{2}}{\big{(}1-e(t)^{2}\big{)}^{3/2}}$
(1)
$\displaystyle-\sqrt{1-X^{2}}\big{(}\mathcal{A}(t)\sin\psi+\mathcal{B}(t)\cos\psi\big{)}$
$\displaystyle+2X\mathcal{C}(t),$
where the conjugate coordinates are $X$ (cosine of obliquity) and $-\psi$
(minus the precession angle). The Hamiltonian in Eq. (1) depends explicitly on
time $t$ through the orbital eccentricity $e$ and through the functions
$\left\\{\begin{aligned}
\mathcal{A}(t)&=\frac{2\big{(}\dot{q}+p\,\mathcal{C}(t)\big{)}}{\sqrt{1-p^{2}-q^{2}}}\,,\\\
\mathcal{B}(t)&=\frac{2\big{(}\dot{p}-q\,\mathcal{C}(t)\big{)}}{\sqrt{1-p^{2}-q^{2}}}\,,\\\
\end{aligned}\right.\quad\text{and}\quad\mathcal{C}(t)=q\dot{p}-p\dot{q}\,.$
(2)
In these expressions, $q=\eta\cos\Omega$ and $p=\eta\sin\Omega$, where
$\eta\equiv\sin(I/2)$, and $I$ and $\Omega$ are the orbital inclination and
the longitude of ascending node of the planet, respectively. If the orbit of
the planet is fixed in time, its obliquity is constant and its precession
angle $\psi$ circulates with constant angular velocity $\alpha
X/(1-e^{2})^{3/2}$. The quantity $\alpha$ is called the precession constant.
It depends on the spin rate of the planet and of its mass distribution,
through the formula:
$\alpha=\frac{3}{2}\frac{\mathcal{G}m_{\odot}}{\omega
a^{3}}\frac{J_{2}}{\lambda}\,,$ (3)
where $\mathcal{G}$ is the gravitational constant, $m_{\odot}$ is the mass of
the Sun, $\omega$ is the spin rate of the planet, $a$ is its semi-major axis,
$J_{2}$ is its second zonal gravity coefficient, and $\lambda$ is its
normalised polar moment of inertia. We retrieve the expression given for
instance by Néron de Surgy & Laskar (1997) by noting that
$J_{2}=\frac{2C-A-B}{2MR_{\mathrm{eq}}^{2}}\quad\text{and}\quad\lambda=\frac{C}{MR_{\mathrm{eq}}^{2}}\,,$
(4)
where $A$, $B$, and $C$ are the equatorial and polar moments of inertia of the
planet, $M$ is its mass, and $R_{\mathrm{eq}}$ is its equatorial radius.
The precession rate of the planet is increased if it possesses massive
satellites. If the satellites are far away from the planet, their equilibrium
orbital plane (called Laplace plane, see Tremaine et al., 2009) is close to
the orbital plane of the planet; therefore, far-away satellites increase the
torque exerted by the Sun on the equatorial bulge of the planet. If the
satellites are close to the planet, on the contrary, their equilibrium orbital
plane coincides with the equator of the planet and precesses with it as a
whole (Goldreich, 1965); therefore, close-in satellites artificially increase
the oblateness and the rotational angular momentum of the planet. In the
close-in satellite regime, an expression for the effective precession constant
has been derived by Ward (1975). As detailed by French et al. (1993), it
consists in replacing $J_{2}$ and $\lambda$ in Eq. (3) by the effective
values:
$J_{2}^{\prime}=J_{2}+\frac{1}{2}\sum_{k}\frac{m_{k}}{M}\frac{a_{k}^{2}}{R_{\mathrm{eq}}^{2}}\quad\text{and}\quad\lambda^{\prime}=\lambda+\sum_{k}\frac{m_{k}}{M}\frac{a_{k}^{2}}{R_{\mathrm{eq}}^{2}}\frac{n_{k}}{\omega}\,,$
(5)
where $m_{k}$, $a_{k}$, and $n_{k}$ are the mass, the semi-major axis, and the
mean motion of the $k$th satellite. In these expressions, the eccentricities
and inclinations of the satellites are neglected. This approximation has been
widely used in the literature. In the case of a single satellite, Boué &
Laskar (2006) have obtained a general expression for the precession rate of a
planet with an eccentric and inclined satellite, encompassing both the close-
in and far-away regimes. Using their article, we can verify that the Galilean
satellites are in the close-in regime. The Laplace plane of Callisto is
inclined today by less than $1^{\circ}$ with respect to Jupiter’s equator. The
small eccentricities and inclinations of the Galilean satellites would
contribute to $J_{2}^{\prime}$ and $\lambda^{\prime}$ with terms of order
$e_{k}^{2}$ and $\eta_{k}^{2}$, so even if $e_{k}$ increases up to $0.1$ (a
value found by Lari et al., 2020 in some cases) or if $I_{k}$ increases up to
$10^{\circ}$, the additional contribution to $J_{2}^{\prime}$ and
$\lambda^{\prime}$ would only be of order $10^{-4}$ and $10^{-6}$,
respectively. As we see below, this contribution is much smaller than our
uncertainty on the value of $\lambda$, allowing us to stick to the
approximation given by Eq. (5).
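To fix orders of magnitude, Eqs. (3) and (5) can be evaluated numerically.
The following sketch is our own illustration: the physical inputs are rounded
textbook values for Jupiter and the Galilean satellites (not the exact
parameters of Lari et al. 2020), and $\lambda=0.25$ is an assumed value
within the range discussed in Sect. 2.3.

```python
import math

# Illustrative evaluation of Eqs. (3) and (5); all inputs are approximate.
GM_SUN = 1.32712e20                       # m^3 s^-2, gravitational parameter of the Sun
A_JUP = 7.7834e11                         # m, semi-major axis of Jupiter
OMEGA = 2 * math.pi / (9.925 * 3600)      # rad/s, spin rate (9.925 h rotation period)
J2 = 0.014697                             # second zonal gravity coefficient
R_EQ = 7.1492e7                           # m, equatorial radius
M_JUP = 1.898e27                          # kg, mass of Jupiter
LAM = 0.25                                # assumed normalised polar moment of inertia

# Galilean satellites: (mass kg, semi-major axis m, orbital period days)
sats = [(8.93e22, 4.217e8, 1.769),        # Io
        (4.80e22, 6.709e8, 3.551),        # Europa
        (1.482e23, 1.0704e9, 7.155),      # Ganymede
        (1.076e23, 1.8827e9, 16.689)]     # Callisto

# Effective J2' and lambda' from Eq. (5), close-in satellite regime
j2_eff = J2 + 0.5 * sum(m / M_JUP * (a / R_EQ) ** 2 for m, a, _ in sats)
lam_eff = LAM + sum(m / M_JUP * (a / R_EQ) ** 2
                    * (2 * math.pi / (P * 86400)) / OMEGA
                    for m, a, P in sats)

# Precession constant, Eq. (3), converted from rad/s to arcsec/yr
alpha = 1.5 * GM_SUN / (OMEGA * A_JUP ** 3) * j2_eff / lam_eff
alpha_arcsec_yr = alpha * math.degrees(1) * 3600 * 3.15576e7
```

With these inputs one finds $\alpha\approx 2.8^{\prime\prime}\cdot$yr-1,
inside the range $[2.64,3.17]^{\prime\prime}\cdot$yr-1 quoted in Sect. 2.3,
and the satellites roughly triple the effective $J_{2}$.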
### 2.2 Orbital solution
The Hamiltonian given in Eq. (1) depends on the orbit of the planet and on its
temporal variations. In order to explore the long-term dynamics of Jupiter’s
spin axis, we need an orbital solution that is valid over billions of years.
This is well beyond the timespan covered by ephemerides. Luckily, the orbital
dynamics of the giant planets of the Solar System are almost integrable and
excellent solutions have been developed. We use the secular solution of Laskar
(1990) expanded in quasi-periodic series:
$\displaystyle z=e\exp(i\varpi)$
$\displaystyle=\sum_{k}E_{k}\exp(i\theta_{k})\,,$ (6)
$\displaystyle\zeta=\eta\exp(i\Omega)$
$\displaystyle=\sum_{k}S_{k}\exp(i\phi_{k})\,,$
where $\varpi$ is Jupiter’s longitude of perihelion. The amplitudes $E_{k}$
and $S_{k}$ are real constants, and the angles $\theta_{k}$ and $\phi_{k}$
evolve linearly over time $t$, with frequencies $\mu_{k}$ and $\nu_{k}$:
$\theta_{k}(t)=\mu_{k}\,t+\theta_{k}^{(0)}\hskip 14.22636pt\text{and}\hskip
14.22636pt\phi_{k}(t)=\nu_{k}\,t+\phi_{k}^{(0)}\,.$ (7)
The complete orbital solution of Laskar (1990) can be found in Appendix A for
amplitudes down to $10^{-8}$.
The series in Eq. (6) contain contributions from all the planets of the Solar
System. In the integrable approximation, the frequency of each term
corresponds to a unique combination of the fundamental frequencies of the
system, usually noted $g_{j}$ and $s_{j}$. In the limit of small masses, small
eccentricities and small inclinations (Lagrange-Laplace secular system), the
$z$ series only contains the frequencies $g_{j}$, while the $\zeta$ series
only contains the frequencies $s_{j}$ (see e.g. Murray & Dermott, 1999 or
Laskar et al., 2012). This is not the case in more realistic situations, as
recalled for instance by Kreyche et al. (2020) in the context of obliquity
dynamics. In planetary systems featuring mean-motion resonances, the spin axis
of a planet can be affected by shifted orbital precession frequencies
(Millholland & Laughlin, 2019) or by secondary resonances (Quillen et al.,
2017, 2018). However, this does not apply in the Solar System as it is today,
even when the existing near commensurabilities (like the “great Jupiter–Saturn
inequality”) are taken into account. Table 2 shows the combinations of
fundamental frequencies identified for the largest terms of Jupiter’s $\zeta$
series obtained by Laskar (1990).
Table 2: First twenty terms of Jupiter’s inclination and longitude of
ascending node in the J2000 equator and equinox reference frame.
$\begin{array}[]{rcrrr}\hline\cr\hline\cr k&\text{identification}&\nu_{k}\
(^{\prime\prime}\cdot\text{yr}^{-1})&S_{k}\times 10^{8}&\phi_{k}^{(0)}\
(^{\text{o}})\\\ \hline\cr 1&s_{5}&0.00000&1377467&107.59\\\
2&s_{6}&-26.33023&315119&307.29\\\ 3&s_{8}&-0.69189&58088&23.96\\\
4&s_{7}&-3.00557&48134&140.33\\\ 5&g_{5}-g_{6}+s_{7}&-26.97744&2308&222.98\\\
6&-g_{5}+g_{6}+s_{6}&-2.35835&1611&44.74\\\
7&2g_{6}-s_{6}&82.77163&1372&308.95\\\
8&g_{5}-g_{7}+s_{7}&-1.84625&1130&36.64\\\ 9&s_{1}&-5.61755&1075&168.70\\\
10&-g_{5}+g_{7}+s_{7}&-4.16482&946&51.54\\\
11&g_{5}+g_{6}-s_{6}&58.80017&804&32.90\\\
12&g_{5}-g_{6}+s_{6}&-50.30212&691&29.84\\\
13&2g_{5}-s_{6}&34.82788&636&114.12\\\
14&g_{7}-g_{8}+s_{7}&-0.58033&565&17.32\\\ 15&s_{2}&-7.07963&454&273.79\\\
16&-g_{5}+g_{7}+s_{6}&-27.48935&407&38.53\\\
17&g_{5}-g_{7}+s_{6}&-25.17116&385&35.94\\\
18&s_{1}+\gamma&-5.50098&383&162.89\\\
19&-g_{7}+g_{8}+s_{8}&-3.11725&321&326.97\\\
20&s_{2}+2\gamma&-6.84091&267&106.20\\\ \hline\cr\end{array}$
Notes. Due to the secular resonance $(g_{1}-g_{5})-(s_{1}-s_{2})$, an
additional fundamental frequency $\gamma$ appears in terms 18 and 20 (see
Laskar, 1990).
As explained by Saillenfest et al. (2019), at first order in the amplitudes
$S_{k}$ and $E_{k}$, secular spin-orbit resonant angles can only be of the
form $\sigma_{p}=\psi+\phi_{p}$, where $p$ is a given index in the $\zeta$
series. Resonances featuring terms of the $z$ series only appear at third
order and beyond. For the terrestrial planets of the Solar System, the $z$ and
$\zeta$ series converge very slowly, which implies that large resonances are
very numerous. These resonances overlap massively and produce wide chaotic
zones in the obliquity dynamics (see Laskar & Robutel, 1993; Néron de Surgy &
Laskar, 1997; Correia et al., 2003; Laskar et al., 2004a). The situation is
very different for the giant planets of the Solar System, for which the $z$
and $\zeta$ series converge quickly owing to the quasi-integrable nature of
their dynamics. Therefore, the secular spin-orbit resonances are small and
isolated from each other, and only first-order resonances play a substantial
role.
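At first order, the centre of a resonance $\sigma_{p}=\psi+\phi_{p}$ sits
where the precession rate $\alpha X/(1-e^{2})^{3/2}$ cancels the frequency
$\nu_{p}$. A minimal sketch of this estimate follows (our own illustration;
the eccentricity and the $\alpha$ values are representative numbers, not the
paper's exact solution):

```python
import math

def resonance_obliquity_deg(alpha, nu_p, e=0.0484):
    """First-order resonance-centre obliquity, from alpha*cos(eps)/(1-e^2)^1.5 = -nu_p.

    alpha and nu_p are in arcsec/yr; returns None when no centre exists
    for this value of the precession constant.
    """
    c = -nu_p * (1.0 - e * e) ** 1.5 / alpha
    if abs(c) > 1.0:
        return None
    return math.degrees(math.acos(c))

NU4 = -3.00557  # arcsec/yr, term k = 4 of Table 2 (nodal precession mode of Uranus)
for alpha in (2.64, 3.00, 3.17):  # arcsec/yr, the range considered in Sect. 2.3
    print(alpha, resonance_obliquity_deg(alpha, NU4))
```

With these illustrative numbers, $\alpha\approx
3.0^{\prime\prime}\cdot$yr-1 places the resonance centre near $3^{\circ}$, of
the order of Jupiter's current obliquity, whereas at
$2.64^{\prime\prime}\cdot$yr-1 no centre exists and the spin axis lies below
the resonance.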
Figure 1 shows the location and width of every first-order resonance for the
spin-axis of Jupiter in an interval of precession constant $\alpha$ ranging
from $0^{\prime\prime}\cdot$yr-1 to $5^{\prime\prime}\cdot$yr-1. Because of
the chaotic dynamics of the Solar System (Laskar, 1989), the fundamental
frequencies related to the terrestrial planets (e.g. $s_{1}$, $s_{2}$, and
$\gamma$ appearing in Table 2) could vary substantially over billions of years
(Laskar, 1990). However, they only marginally contribute to Jupiter’s orbital
solution and none of them takes part in the resonances shown in Fig. 1. Our
secular orbital solution of Jupiter can therefore be considered valid over a
billion-year timescale.
Figure 1: Location and width of every first-order secular spin-orbit resonance
for Jupiter. Each resonant angle is of the form $\sigma_{p}=\psi+\phi_{p}$
where $\phi_{p}$ has frequency $\nu_{p}$ labelled on the graph according to
its index in the orbital series (see Table 2 and Appendix A). For a given
value of the precession constant $\alpha$, the interval of obliquity enclosed
by the separatrix is shown in pink, as computed using the exact formulas given
by Saillenfest et al. (2019). The green bar on the left shows Jupiter’s
current obliquity and the range for its precession constant considered in this
article, as detailed in Sects. 2.3 and 2.4.
### 2.3 Precession constant
As shown by the Hamiltonian function in Eq. (1), the precession constant
$\alpha$ is a key parameter of the spin-axis dynamics of a planet. Among the
physical parameters of Jupiter that enter into its expression (see Eq. 3), all
are very well constrained from observations except the normalised polar moment
of inertia $\lambda$.
While comparing the values of $\lambda$ given in the literature, one must be
careful about the normalisation used. Equation (4) explicitly requires a
normalisation using the equatorial radius $R_{\text{eq}}$, since it is linked
to the value of $J_{2}$. However, published values of the polar moment of
inertia are often normalised using the mean radius of Jupiter, which differs
from $R_{\text{eq}}$ by a factor of about $0.978$. This distinction seems to
have been missed by Ward & Canup (2006), who quote the nominal value given by
D. R. Williams in the _NASA Jupiter fact sheet_
(https://nssdc.gsfc.nasa.gov/planetary/factsheet/jupiterfact.html) as
$0.254$, whereas it actually translates into
$\lambda=0.243$ when it is normalised using $R_{\text{eq}}$. Ward & Canup
(2006) also mention that “theoretical values [of $\lambda$] range from $0.255$
for the extreme of a constant-density core and massless envelope to $0.221$
for a constant-density envelope and point-mass core”. Unfortunately, these
numbers are taken from a conference talk given by W. B. Hubbard in 2005 so we
cannot check how they have been obtained. Since Eq. (3) is used, however, we
can assume that they have been properly normalised using $R_{\text{eq}}$.
As is shown in Fig. 1, the spin-axis of Jupiter is located very close to a
strong secular spin-orbit resonance. The corresponding term of the orbital
series is related to the precession mode of Uranus (term $k=4$ in Table 2),
and the resonant angle is $\sigma_{4}=\psi+\phi_{4}$. As noted by Ward & Canup
(2006), dissipative processes during the early planetary evolution are
expected to have forced Jupiter’s spin axis to spiral down towards the centre
of the resonance, called Cassini state 2. And indeed, the current value of
$\sigma_{4}$ is very close to zero, which has a low probability to happen if
Jupiter is far from Cassini state 2 because $\sigma_{4}$ would then circulate
between $0^{\circ}$ and $360^{\circ}$. In order to match Cassini state 2,
however, Jupiter’s normalised moment of inertia should be $\lambda\approx
0.2365$ (see Fig. 2). Since this value is not far from what is proposed in the
literature, this prompted Ward & Canup (2006) to consider this value as likely
for Jupiter.
Figure 2: Trajectory of Jupiter’s spin axis in the vicinity of resonance with
the fourth harmonics of $\zeta$ (see Table 2). Being farther away from
Jupiter’s precession frequency, the contribution of other harmonics can be
averaged; their mean contribution is included here up to third order in the
amplitudes (as in Eq. 17 of Saillenfest et al., 2019). Each trajectory
corresponds to a level curve of the Hamiltonian, which has only one degree of
freedom. The red dot shows the current location of Jupiter, and the black dot
shows Cassini state 2. The red curve is the current trajectory of Jupiter’s
spin axis for $\lambda=0.250$ (top) or $\lambda=0.237$ (bottom).
As noted by Le Maistre et al. (2016), the value of $\lambda\approx 0.2365$
corresponds to a massive core for Jupiter, and estimates obtained from models
of Jupiter’s interior structure are generally higher. Helled et al. (2011)
obtain values of $\lambda$ ranging from $0.251$ to $0.253$, that were
confirmed by Nettelmann et al. (2012). These values are consistent with the
range of $\lambda\in[0.221,0.255]$ quoted above. Other studies seem to agree
on even higher values: Wahl et al. (2017) and Ni (2018) present values of
$\lambda$ ranging between $0.2629$ and $0.2644$, compatible with the findings
of Hubbard & Marley (1989), Nettelmann et al. (2012), and Hubbard & Militzer
(2016). Finally, both low and high values are obtained by Vazan et al. (2016),
who give either $\lambda=0.247$ or $\lambda=0.262$ for three different models.
As explained by Le Maistre et al. (2016), however, all these values are
model-dependent and still a matter of debate. The _Juno_ mission will
hopefully soon provide direct observational constraints that will help
determine which models of Jupiter’s interior structure are the most relevant.
Here, instead of relying on one particular value of $\lambda$, we turn to the
exploration of the whole range of values given in the literature, namely
$\lambda\in[0.220,0.265]$. The rotation velocity of Jupiter is taken from
Archinal et al. (2018) and the other physical parameters are fixed to those
used by Lari et al. (2020) for consistency with the satellites’ orbital
evolution (see below). The corresponding value for the current precession
constant of Jupiter, computed from Eqs. (3) and (5), ranges from
$2.64^{\prime\prime}\cdot$yr-1 to $3.17^{\prime\prime}\cdot$yr-1. Given this
large range, using updated physical parameters (see e.g. Folkner et al., 2017;
Iess et al., 2018; Serra et al., 2019) would only slightly shift the value of
$\alpha$ within our exploration interval.
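As a quick consistency check on this range: at fixed spin rate and oblateness, Eq. (3) makes $\alpha$ scale inversely with $\lambda$, so the two endpoints of the quoted interval should be related by the inverse ratio of the $\lambda$ bounds. A minimal sketch (assuming exact $\alpha\propto 1/\lambda$ scaling, i.e. neglecting any residual parameter dependence):

```python
# Assumes alpha ∝ 1/lambda (Eq. (3) at fixed spin rate and oblateness);
# the endpoints of the quoted alpha range should then be related by the
# inverse ratio of the lambda bounds.
lam_min, lam_max = 0.220, 0.265      # exploration interval for lambda
alpha_min = 2.64                     # arcsec/yr, obtained for lam_max
alpha_max_expected = alpha_min * lam_max / lam_min
print(f"{alpha_max_expected:.2f} arcsec/yr")  # ~3.18, close to the quoted 3.17
```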
Because of tidal dissipation, satellites slowly migrate over time. This
produces a drift of the precession constant $\alpha$ on a timescale that is
much larger than the precession motion (i.e. the circulation of $\psi$). The
long-term spin-axis dynamics of a planet with migrating satellites is
described by the Hamiltonian in Eq. (1), but where $\alpha$ is a slowly-
varying function of time. In the Earth-Moon system, the outward migration of
the Moon produces a decrease of $\alpha$ that pushes the Earth towards a wide
chaotic region (see Néron de Surgy & Laskar, 1997). This decrease of $\alpha$
is due to the fact that the Moon is in the far-satellite regime (see Boué &
Laskar, 2006). The Galilean satellites, on the contrary, are in the close-
satellite regime, and their outward migration produces an increase of
$\alpha$, as shown by Eq. (5). This increase can be quantified using the long-
term orbital solution of Lari et al. (2020) depicted in Fig. 3 and
interpolating between data points. The result is presented in Fig. 4 for the
two extreme values of $\lambda$ considered in this article, as well as for the
value of $\lambda\approx 0.2365$ proposed by Ward & Canup (2006). Despite the
various outcomes of the dynamics described by Lari et al. (2020), the result
on the evolution of $\alpha$ is almost indistinguishable from one of their
simulations to another, even when the eccentricities of the satellites are taken
into account in Eq. (5). Indeed, the variation of $\alpha$ mostly depends on
the drift of the satellites’ semi-major axes, which is almost identical in
every simulation of Lari et al. (2020).
Since the rate of energy dissipation between Jupiter and its satellites is not
well known today, the timescale of the drift shown in Figs. 3 and 4 could
somewhat contract or expand. This point is further discussed in Sect. 3.
Moreover, other parameters in Eq. (3) probably slightly vary over billions of
years, such as the spin velocity of Jupiter or its oblateness. We consider
that the impact of their variations on the value of $\alpha$ is small and
contained within our exploration range.
Figure 3: Typical evolution of the semi-major axes of the Galilean satellites
obtained by Lari et al. (2020). The values are expressed in unit of Jupiter’s
equatorial radius. The bump at about $1.8$ Gyrs is due to the capture of
Callisto into resonance. Figure 4: Evolution of the effective precession
constant of Jupiter due to the migration of its satellites. The top and bottom
curves correspond to the two extreme values of the normalised polar moment of
inertia $\lambda$ considered in this article. They enter $\alpha$
through Eq. (3). The central curve corresponds to the value of $\lambda$ that
places Jupiter just near Cassini state 2 with the precession mode of Uranus
(Ward & Canup, 2006).
### 2.4 Initial conditions
The initial orientation of the spin axis is taken from the solution of
Archinal et al. (2018) averaged over short-period terms. At the level of
precision required by our exploratory study, the refined orientation obtained
by Durante et al. (2020) is indistinguishable from this nominal orientation.
With respect to Jupiter’s secular orbital solution (see Sect. 2.2), this gives
an obliquity $\varepsilon=3.120^{\circ}$ and a precession angle
$\psi=-137.223^{\circ}$ at time J2000. The uncertainty on these values is
extremely small compared to the range of $\alpha$ considered (see Sect. 2.3).
Since the uncertainty is smaller than the curve width of our figures, we do
not consider any error bar on the initial value of $\varepsilon$ and $\psi$.
## 3 Obliquity evolution with migrating satellites
For values of $\lambda$ finely sampled in our exploration interval, the spin
axis of Jupiter is numerically propagated forwards in time for $5$ Gyrs. By
virtue of trigonometric identities, moving Jupiter’s orbit one step forwards
in time using the quasi-periodic decomposition in Eq. (6) only amounts to
computing a few sums and products. The trajectories obtained are shown in Fig.
5 for a few values of $\lambda$. They are projected in the plane of the
obliquity and the precession constant of Jupiter, where we localise also the
centres and widths of all first-order secular spin-orbit resonances. See
Appendix B for further details about the geometry of the resonances.
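As an illustration of how cheap this propagation step is, the sketch below evaluates a quasi-periodic series of the form used for $\zeta$ in Eq. (7), restricted to the three largest terms of Table 4 (Appendix A); the full solution simply uses all 60 terms:

```python
import numpy as np

def zeta(t, nu, S, phi0):
    """Quasi-periodic series zeta(t) = sum_k S_k exp[i(nu_k t + phi_k^(0))],
    with frequencies nu_k in rad/yr and phases phi_k^(0) in rad."""
    return np.sum(S * np.exp(1j * (nu * t + phi0)))

# Three largest terms of Table 4 (inclination/node variable zeta):
arcsec = np.pi / (180.0 * 3600.0)                         # arcsec -> rad
nu   = np.array([0.00000, -26.33023, -0.69189]) * arcsec  # rad/yr
S    = np.array([1377467, 315119, 58088]) * 1e-8          # amplitudes
phi0 = np.deg2rad(np.array([107.59, 307.29, 23.96]))      # phases at J2000

# One "step" of the orbital motion is just this evaluation:
z0 = zeta(0.0, nu, S, phi0)      # at J2000
z1 = zeta(1.0e3, nu, S, phi0)    # 1000 yr later
```

Each evaluation is a handful of multiplications and one sum, which is why integrating the spin axis over $5$ Gyrs remains inexpensive.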
Figure 5: Future evolution of Jupiter’s spin axis projected in the plane of
the obliquity and the precession constant $\alpha$. Each panel corresponds to
a value of the normalised polar moment of inertia of Jupiter
$\lambda=C/(MR_{\text{eq}}^{2})$ given in title. The green bar shows the
initial location of Jupiter’s spin axis according to our exploration interval
of $\lambda$; the central mark is the value proposed by Ward & Canup (2006).
The red curves show the centre of all first-order secular spin-orbit
resonances (Cassini state 2) and the coloured areas represent their widths
(same as Fig. 1). From bottom to top, the resonances are with $\phi_{6}$, with
$\phi_{4}$, and with $\phi_{19}$ (see Table 2). The black dots show the
numerical solutions obtained over a timespan of $5$ Gyrs from now; they evolve
from bottom to top. Depending on the exact migration rate of the Galilean
satellites, the timeline could somewhat contract or expand (see text).
For values of $\lambda$ smaller than about $0.228$, Jupiter starts outside of
the large resonance with $\phi_{4}$, and the increase of its precession
constant $\alpha$ pushes it even farther away over time. As shown by the
trajectory computed for $\lambda=0.227$, the crossing of the very thin
resonance with $\phi_{19}$ twists the trajectory a little, but this cannot
produce any large change of obliquity. Indeed, the resonance with $\phi_{19}$
is not strong enough to capture Jupiter’s spin axis: It is crossed quickly as
$\alpha$ increases, and Fig. 6 shows that the libration period of
$\sigma_{19}=\psi+\phi_{19}$ is very long. This results in a non-adiabatic
crossing (see Appendix C for details). Consequently, no major obliquity
variation for Jupiter can be expected in the future if $\lambda<0.228$.
However, such small values of $\lambda$ seem to be ruled out by most models of
Jupiter’s interior (see Sect. 2.3).
Figure 6: Period of small oscillations about the resonance centre for a
resonance with $\phi_{4}$ or $\phi_{19}$. Even though complete closed-form
solutions exist (see Haponiak et al., 2020), the small-oscillation limit leads
to handier formulas, suitable for order-of-magnitude estimates. The resonant
angles are $\sigma_{4}=\psi+\phi_{4}$ and $\sigma_{19}=\psi+\phi_{19}$,
respectively. Dashed curves are used for oscillations about Cassini state 2
before the separatrix appears. The appearance of the separatrix is marked by a
blue dot.
For values of $\lambda$ larger than $0.228$, on the contrary, Jupiter is
currently located inside or below the large resonance with $\phi_{4}$. As
predicted, the value $\lambda=0.2365$ results in very small oscillations
around Cassini state 2. As its precession constant $\alpha$ slowly increases
with time, Jupiter is captured into the resonance and follows the drift of its
centre towards large obliquities. Indeed, the resonance with $\phi_{4}$ is
large, and the libration period of $\sigma_{4}=\psi+\phi_{4}$ is short
compared to the variation timescale of $\alpha$ (see Figs. 5 and 6). This
results in an adiabatic capture. The various possible outcomes of adiabatic
and non-adiabatic crossings of secular spin-orbit resonances have recently
been studied by Su & Lai (2020). However, the orbital motion is here not
limited to a single harmonic, and Appendix C shows that the separatrix of the
resonance is replaced by a chaotic “moat”. Properly speaking, the resonance
with $\phi_{4}$ becomes a “true resonance” only once the separatrix
appears, that is, for $\alpha$ larger than about
$3.04^{\prime\prime}\cdot$yr-1 (see Appendix B). In the whole range of values
of $\lambda>0.228$ considered in this article, the spin axis of Jupiter is
initially located close enough to Cassini state 2 to invariably end up inside
the separatrix of the resonance when it appears. The capture probability is
therefore $100\%$. None of our simulations shows a release out of resonance or
a turn-off towards Cassini state 1, which could have been a possible outcome
if Jupiter’s spin axis had initially been located farther away from Cassini
state 2 or if the drift of $\alpha$ were not adiabatic. (As a side note, there
is a typographical error in Saillenfest et al. (2019): the list of the Cassini
states given before Eq. (22) should read (4,2,3,1) instead of (1,2,3,4) in
order to match the denomination introduced by Peale (1969).) Since in
canonical coordinates the
resonance width increases for $\alpha$ growing up to
$4.244^{\prime\prime}\cdot$yr-1, no separatrix crossing can happen, even for a
large libration amplitude inside the resonance (e.g. for $\lambda=0.228$ in
Fig. 5). The maximum obliquity reached by Jupiter is therefore only limited by
the finite amount of time considered.
If the Galilean satellites migrate faster than shown in Fig. 3, the obliquity
reached in $5$ Gyrs would be larger than that presented in Fig. 5. The
migration rate of the satellites is not well known. According to Lari et al.
(2020), the long-term migration rate of the satellites varies by $\pm 15\%$
over the uncertainty range of the parameter $(k_{2}/Q)_{0,1}$ measured by
Lainey et al. (2009). This parameter quantifies the dissipation within Jupiter
at Io’s frequency. Figure 7 shows the maximum obliquity reached in $5$ Gyrs
for $\lambda$ sampled in our exploration interval and $(k_{2}/Q)_{0,1}$
sampled in its uncertainty range. We recover the discontinuity at
$\lambda\approx 0.228$ discussed before, below which only small obliquity
variations are possible. For $\lambda>0.228$, as expected, we see that a fast
migration and a small moment of inertia produce a fast increase of obliquity,
which reaches $37^{\circ}$ in $5$ Gyrs in the most favourable case of Fig. 7.
On the contrary, a slow migration and a large moment of inertia produce a slow
increase of obliquity, which barely reaches $6^{\circ}$ in $5$ Gyrs in the
most unfavourable case of Fig. 7.
Figure 7: Maximum obliquity reached by Jupiter after $5$ Gyrs from now as a
function of its normalised polar moment of inertia
$\lambda=C/(MR_{\text{eq}}^{2})$ (right vertical axis) and the dissipation
parameter of Jupiter at Io’s frequency (horizontal axis). The left vertical
axis shows the current precession constant $\alpha$ of Jupiter. Some level
curves are shown in red.
## 4 Discussion and conclusion
Prompted by the peculiarly small value of the current obliquity of Jupiter, we
studied the future long-term evolution of its spin axis under the influence of
its slowly migrating satellites.
Jupiter is located today near a strong secular spin-orbit resonance with the
nodal precession mode of Uranus (Ward & Canup, 2006). Because of this
resonance, the obliquity of Jupiter is found to be currently increasing,
provided that its normalised polar moment of inertia
$\lambda=C/(MR_{\text{eq}}^{2})$ is larger than about $0.228$; values smaller
than this seem to be ruled out by models of Jupiter’s interior (see e.g. Helled
et al., 2011; Hubbard & Militzer, 2016; Wahl et al., 2017). For larger values
of $\lambda$, the migration of the Galilean satellites induces an adiabatic
drift of the precession constant $\alpha$ of Jupiter that pushes its spin axis
inside the resonance and forces it to follow the resonance centre towards high
obliquities. For the value $\lambda\approx 0.2365$ proposed by Ward & Canup
(2006), the obliquity can reach values as large as $30^{\circ}$ in the next
$5$ Gyrs. For the value $\lambda\approx 0.252$ obtained by Helled et al.
(2011), the obliquity reaches values ranging from about $17^{\circ}$ to
$23^{\circ}$. The increase is more modest for values close to $\lambda\approx
0.264$ found by other authors, for which the maximum value of the obliquity
ranges from about $6^{\circ}$ to $17^{\circ}$. Hence, our main conclusion is
that, contrary to Saturn, Jupiter has not yet had time to tilt much from its
primordial orientation, but it will in the future, possibly by a large amount.
The model of tidal dissipation applied by Lari et al. (2020) to the Galilean
satellites and used here to compute the drift of $\alpha$ is simplified. The
current migration rates of satellites in the Solar System have been shown to
be higher than previously thought (see Lainey et al., 2009, 2017). As
discussed by Lari et al. (2020), the migration of the Galilean satellites
could be even faster than considered here if ever one of the outer satellites
was pushed by a resonance with the frequency of an internal oscillation of
Jupiter (Fuller et al., 2016). This would result in a faster increase of
Jupiter’s obliquity. This increase would be halted, however, if the satellites
ever migrate so fast as to break the adiabaticity of the capture into secular
spin-orbit resonance. In this case, Jupiter would cross the resonance and exit
without following the drift of its centre (see e.g. Ward & Hamilton, 2004; Su
& Lai, 2020). Numerical experiments show that adiabaticity would be broken for
a migration more than $110$ times faster than currently estimated. Such an
extremely fast migration seems unlikely. Moreover, with such a fast migration,
Callisto and then Ganymede would soon go beyond the close-satellite regime
(Boué & Laskar, 2006): This would slow down the increase of $\alpha$ and
possibly restore the adiabaticity of its drift. Therefore, the future increase
of Jupiter’s obliquity appears to be a robust result.
The maximum obliquity that Jupiter will reach could be very large, but it
depends on the precise value of Jupiter’s polar moment of inertia and on the
precise migration rate of the Galilean satellites. We hope to soon obtain new
estimates for these two crucial parameters, in particular from the results of
the _Juno_ and JUICE missions.
###### Acknowledgements.
We thank Marco Fenucci for his help and his suggestions during the redaction
of our manuscript. We also thank the anonymous referee for her/his valuable
comments. G. L. acknowledges financial support from the Italian Space Agency
(ASI) through agreement 2017-40-H.0 in the context of the NASA _Juno_ mission.
## References
* Archinal et al. (2018) Archinal, B. A., Acton, C. H., A’Hearn, M. F., et al. 2018, Celestial Mechanics and Dynamical Astronomy, 130, 22
* Atobe et al. (2004) Atobe, K., Ida, S., & Ito, T. 2004, Icarus, 168, 223
* Boué & Laskar (2006) Boué, G. & Laskar, J. 2006, Icarus, 185, 312
* Boué & Laskar (2010) Boué, G. & Laskar, J. 2010, ApJ, 712, L44
* Boué et al. (2009) Boué, G., Laskar, J., & Kuchynka, P. 2009, ApJ, 702, L19
* Brasser et al. (2014) Brasser, R., Ida, S., & Kokubo, E. 2014, MNRAS, 440, 3685
* Brasser & Lee (2015) Brasser, R. & Lee, M. H. 2015, AJ, 150, 157
* Brasser & Walsh (2011) Brasser, R. & Walsh, K. J. 2011, Icarus, 213, 423
* Canup & Asphaug (2001) Canup, R. M. & Asphaug, E. 2001, Nature, 412, 708
* Correia & Laskar (2001) Correia, A. C. M. & Laskar, J. 2001, Nature, 411, 767
* Correia & Laskar (2003) Correia, A. C. M. & Laskar, J. 2003, Icarus, 163, 24
* Correia & Laskar (2010) Correia, A. C. M. & Laskar, J. 2010, Icarus, 205, 338
* Correia et al. (2003) Correia, A. C. M., Laskar, J., & de Surgy, O. N. 2003, Icarus, 163, 1
* Deitrick et al. (2018a) Deitrick, R., Barnes, R., Bitz, C., et al. 2018a, AJ, 155, 266
* Deitrick et al. (2018b) Deitrick, R., Barnes, R., Quinn, T. R., et al. 2018b, AJ, 155, 60
* Durante et al. (2020) Durante, D., Parisi, M., Serra, D., et al. 2020, Geophys. Res. Lett., 47, e86572
* Folkner et al. (2017) Folkner, W. M., Iess, L., Anderson, J. D., et al. 2017, Geophys. Res. Lett., 44, 4694
* French et al. (1993) French, R. G., Nicholson, P. D., Cooke, M. L., et al. 1993, Icarus, 103, 163
* Fuller et al. (2016) Fuller, J., Luan, J., & Quataert, E. 2016, MNRAS, 458, 3867
* Goldreich (1965) Goldreich, P. 1965, AJ, 70, 5
* Hamilton & Ward (2004) Hamilton, D. P. & Ward, W. R. 2004, AJ, 128, 2510
* Haponiak et al. (2020) Haponiak, J., Breiter, S., & Vokrouhlický, D. 2020, Celestial Mechanics and Dynamical Astronomy, 132, 24
* Helled et al. (2011) Helled, R., Anderson, J. D., Schubert, G., & Stevenson, D. J. 2011, Icarus, 216, 440
* Hubbard & Marley (1989) Hubbard, W. B. & Marley, M. S. 1989, Icarus, 78, 102
* Hubbard & Militzer (2016) Hubbard, W. B. & Militzer, B. 2016, ApJ, 820, 80
* Iess et al. (2018) Iess, L., Folkner, W. M., Durante, D., et al. 2018, Nature, 555, 220
* Konopliv et al. (2020) Konopliv, A. S., Park, R. S., & Ermakov, A. I. 2020, Icarus, 335, 113386
* Kreyche et al. (2020) Kreyche, S. M., Barnes, J. W., Quarles, B. L., et al. 2020, The Planetary Science Journal, 1, 8
* Lainey et al. (2009) Lainey, V., Arlot, J.-E., Karatekin, Ö., & van Hoolst, T. 2009, Nature, 459, 957
* Lainey et al. (2017) Lainey, V., Jacobson, R. A., Tajeddine, R., et al. 2017, Icarus, 281, 286
* Lari et al. (2020) Lari, G., Saillenfest, M., & Fenucci, M. 2020, A&A, 639, A40
* Laskar (1989) Laskar, J. 1989, Nature, 338, 237
* Laskar (1990) Laskar, J. 1990, Icarus, 88, 266
* Laskar et al. (2012) Laskar, J., Boué, G., & Correia, A. C. M. 2012, A&A, 538, A105
* Laskar et al. (2004a) Laskar, J., Correia, A. C. M., Gastineau, M., et al. 2004a, Icarus, 170, 343
* Laskar et al. (1993) Laskar, J., Joutel, F., & Robutel, P. 1993, Nature, 361, 615
* Laskar & Robutel (1993) Laskar, J. & Robutel, P. 1993, Nature, 361, 608
* Laskar et al. (2004b) Laskar, J., Robutel, P., Joutel, F., et al. 2004b, A&A, 428, 261
* Le Maistre et al. (2016) Le Maistre, S., Folkner, W. M., Jacobson, R. A., & Serra, D. 2016, Planet. Space Sci., 126, 78
* Li & Batygin (2014a) Li, G. & Batygin, K. 2014a, ApJ, 790, 69
* Li & Batygin (2014b) Li, G. & Batygin, K. 2014b, ApJ, 795, 67
* Millholland & Batygin (2019) Millholland, S. & Batygin, K. 2019, ApJ, 876, 119
* Millholland & Laughlin (2018) Millholland, S. & Laughlin, G. 2018, ApJ, 869, L15
* Millholland & Laughlin (2019) Millholland, S. & Laughlin, G. 2019, Nature Astronomy, 3, 424
* Morbidelli et al. (2012) Morbidelli, A., Tsiganis, K., Batygin, K., Crida, A., & Gomes, R. 2012, Icarus, 219, 737
* Murray & Dermott (1999) Murray, C. D. & Dermott, S. F. 1999, Solar System Dynamics (Cambridge University Press)
* Néron de Surgy & Laskar (1997) Néron de Surgy, O. & Laskar, J. 1997, A&A, 318, 975
* Nettelmann et al. (2012) Nettelmann, N., Becker, A., Holst, B., & Redmer, R. 2012, ApJ, 750, 52
* Ni (2018) Ni, D. 2018, A&A, 613, A32
* Peale (1969) Peale, S. J. 1969, AJ, 74, 483
* Quarles et al. (2019) Quarles, B., Li, G., & Lissauer, J. J. 2019, ApJ, 886, 56
* Quillen et al. (2018) Quillen, A. C., Chen, Y.-Y., Noyelles, B., & Loane, S. 2018, Celestial Mechanics and Dynamical Astronomy, 130, 11
* Quillen et al. (2017) Quillen, A. C., Nichols-Fleming, F., Chen, Y.-Y., & Noyelles, B. 2017, Icarus, 293, 94
* Rogoszinski & Hamilton (2020a) Rogoszinski, Z. & Hamilton, D. P. 2020a, ApJ, 888, 60
* Rogoszinski & Hamilton (2020b) Rogoszinski, Z. & Hamilton, D. P. 2020b, Submitted to AAS journal, arXiv:2004.14913
* Saillenfest et al. (2019) Saillenfest, M., Laskar, J., & Boué, G. 2019, A&A, 623, A4
* Serra et al. (2019) Serra, D., Lari, G., Tommei, G., et al. 2019, MNRAS, 490
* Shan & Li (2018) Shan, Y. & Li, G. 2018, AJ, 155, 237
* Su & Lai (2020) Su, Y. & Lai, D. 2020, Submitted to ApJ, arXiv:2004.14380
* Tremaine et al. (2009) Tremaine, S., Touma, J., & Namouni, F. 2009, AJ, 137, 3706
* Vazan et al. (2016) Vazan, A., Helled, R., Podolak, M., & Kovetz, A. 2016, ApJ, 829, 118
* Vokrouhlický & Nesvorný (2015) Vokrouhlický, D. & Nesvorný, D. 2015, ApJ, 806, 143
* Wahl et al. (2017) Wahl, S. M., Hubbard, W. B., Militzer, B., et al. 2017, Geophys. Res. Lett., 44, 4649
* Ward (1975) Ward, W. R. 1975, AJ, 80, 64
* Ward & Canup (2006) Ward, W. R. & Canup, R. M. 2006, ApJ, 640, L91
* Ward et al. (1976) Ward, W. R., Colombo, G., & Franklin, F. A. 1976, Icarus, 28, 441
* Ward & Hamilton (2004) Ward, W. R. & Hamilton, D. P. 2004, AJ, 128, 2501
* Yoder (1995) Yoder, C. F. 1995, in Global Earth Physics: A Handbook of Physical Constants, ed. T. J. Ahrens, 1
## Appendix A Orbital solution for Jupiter
The secular orbital solution of Laskar (1990) is obtained by multiplying the
normalised proper modes $z_{i}^{\bullet}$ and $\zeta_{i}^{\bullet}$ (Tables VI
and VII of Laskar 1990) by the matrix $\tilde{S}$ corresponding to the linear
part of the solution (Table V of Laskar 1990). In the series obtained, the
terms with the same combination of frequencies are then merged together,
resulting in 56 terms in eccentricity and 60 terms in inclination. This forms
the secular part of the orbital solution of Jupiter, which is what is required
by our averaged model.
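The merging step can be sketched as follows (a schematic helper with hypothetical input terms, not the actual solution): terms sharing the same frequency are combined by summing their complex amplitudes $A_k e^{i p_k}$.

```python
import numpy as np
from collections import defaultdict

def merge_terms(freqs, amps, phases_deg, tol=1e-9):
    """Merge quasi-periodic terms A_k exp[i(f_k t + p_k)] that share
    (numerically) equal frequencies by summing their complex amplitudes.
    Returns a list of (frequency, amplitude, phase in degrees)."""
    buckets = defaultdict(complex)
    for f, A, p in zip(freqs, amps, phases_deg):
        buckets[round(f / tol)] += A * np.exp(1j * np.deg2rad(p))
    return [(k * tol, abs(c), np.rad2deg(np.angle(c)) % 360.0)
            for k, c in sorted(buckets.items())]

# Hypothetical example: the first two terms share a frequency and have
# opposite phases, so they cancel after merging.
merged = merge_terms([1.0, 1.0, 2.0], [1.0, 1.0, 0.5], [0.0, 180.0, 90.0])
```

In the real solution this merging reduces the combined series to the 56 eccentricity terms and 60 inclination terms quoted above.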
The orbital solution is expressed in the variables $z$ and $\zeta$ as
described in Eqs. (6) and (7). In Tables 3 and 4, we give the terms of the
solution in the J2000 ecliptic and equinox reference frame for amplitudes down
to $10^{-8}$.
Table 3: Quasi-periodic decomposition of Jupiter’s eccentricity and longitude
of perihelion (variable $z$).
$\begin{array}[]{rrrr}\hline\cr\hline\cr k&\mu_{k}\
(^{\prime\prime}\cdot\text{yr}^{-1})&E_{k}\times 10^{8}&\theta_{k}^{(0)}\
(^{\text{o}})\\\ \hline\cr 1&4.24882&4411915&30.67\\\
2&28.22069&1574994&308.11\\\ 3&3.08952&180018&121.36\\\
4&52.19257&51596&45.55\\\ 5&27.06140&18405&218.71\\\
6&29.37998&17762&217.54\\\ 7&28.86795&10743&32.64\\\ 8&27.57346&9436&43.74\\\
9&5.40817&6135&120.31\\\ 10&0.66708&5755&73.98\\\ 11&53.35188&4415&314.90\\\
12&76.16447&2441&143.03\\\ 13&51.03334&1354&316.29\\\ 14&7.45592&1354&20.24\\\
15&-19.72306&1083&293.24\\\ 16&4.89647&982&291.61\\\ 17&5.59644&941&290.35\\\
18&1.93168&767&198.10\\\ 19&3.60029&543&121.39\\\ 20&-56.90922&485&44.11\\\
21&2.97706&470&306.81\\\ 22&5.47449&354&95.01\\\ 23&17.91550&295&155.35\\\
24&5.71670&269&300.52\\\ 25&-20.88236&222&203.93\\\ 26&6.93423&173&349.25\\\
27&1.82121&161&150.50\\\ 28&5.35823&145&274.88\\\ 29&7.05595&139&178.82\\\
30&7.34103&114&27.85\\\ 31&17.36469&101&123.95\\\ 32&0.77840&99&65.10\\\
33&7.57299&80&191.47\\\ 34&5.99227&53&293.56\\\ 35&5.65485&51&219.22\\\
36&4.36906&49&40.82\\\ 37&5.23841&43&92.97\\\ 38&6.82468&37&14.53\\\
39&-0.49216&29&164.74\\\ 40&17.08266&28&179.38\\\ 41&16.81285&27&273.37\\\
42&7.20563&23&323.91\\\ 43&7.71663&15&273.52\\\ 44&19.01870&10&219.75\\\
45&17.15752&10&325.02\\\ 46&16.52731&6&131.91\\\ 47&17.63081&6&183.87\\\
48&17.81084&6&58.56\\\ 49&18.18553&5&57.27\\\ 50&17.47683&5&260.26\\\
51&17.72293&4&48.46\\\ 52&17.55234&4&197.65\\\ 53&18.01611&4&44.83\\\
54&16.26122&2&58.89\\\ 55&18.08627&2&356.17\\\ 56&18.46794&1&209.01\\\
\hline\cr\end{array}$
Notes. This solution has been directly obtained from Laskar (1990) as explained in
the text. The phases $\theta_{k}^{(0)}$ are given at time J2000.
Table 4: Quasi-periodic decomposition of Jupiter’s inclination and longitude
of ascending node (variable $\zeta$).
$\begin{array}[]{rrrr}\hline\cr\hline\cr k&\nu_{k}\
(^{\prime\prime}\cdot\text{yr}^{-1})&S_{k}\times 10^{8}&\phi_{k}^{(0)}\
(^{\text{o}})\\\ \hline\cr 1&0.00000&1377467&107.59\\\
2&-26.33023&315119&307.29\\\ 3&-0.69189&58088&23.96\\\
4&-3.00557&48134&140.33\\\ 5&-26.97744&2308&222.98\\\ 6&-2.35835&1611&44.74\\\
7&82.77163&1372&308.95\\\ 8&-1.84625&1130&36.64\\\ 9&-5.61755&1075&168.70\\\
10&-4.16482&946&51.54\\\ 11&58.80017&804&32.90\\\ 12&-50.30212&691&29.84\\\
13&34.82788&636&114.12\\\ 14&-0.58033&565&17.32\\\ 15&-7.07963&454&273.79\\\
16&-27.48935&407&38.53\\\ 17&-25.17116&385&35.94\\\ 18&-5.50098&383&162.89\\\
19&-3.11725&321&326.97\\\ 20&-6.84091&267&106.20\\\ 21&-28.13656&256&134.07\\\
22&-7.19493&226&105.14\\\ 23&-6.96094&215&97.96\\\ 24&0.46547&162&286.88\\\
25&-17.74818&149&123.28\\\ 26&-7.33264&144&196.75\\\ 27&-5.85017&130&345.47\\\
28&11.50319&103&281.01\\\ 29&-5.21610&97&198.91\\\ 30&-5.37178&97&215.48\\\
31&-5.10025&94&15.38\\\ 32&0.57829&56&103.72\\\ 33&-5.96899&55&170.64\\\
34&-1.19906&53&133.26\\\ 35&-6.73842&47&44.50\\\ 36&-7.40536&44&233.35\\\
37&-7.48780&40&47.95\\\ 38&-6.15490&40&269.77\\\ 39&20.96631&40&57.78\\\
40&-6.56016&38&303.47\\\ 41&9.18847&32&1.15\\\ 42&-8.42342&32&211.21\\\
43&10.34389&32&190.85\\\ 44&-18.85115&23&240.06\\\ 45&-17.19656&17&334.19\\\
46&18.14984&15&291.19\\\ 47&-19.40256&14&207.96\\\ 48&-18.01114&11&242.09\\\
49&-17.66094&11&138.93\\\ 50&-17.83857&9&289.13\\\ 51&-17.54636&8&246.71\\\
52&-18.30007&6&267.05\\\ 53&-17.94404&5&212.26\\\ 54&-18.59563&5&98.11\\\
55&-19.13075&1&305.90\\\ \hline\cr\end{array}$
Notes. This solution has been directly obtained from Laskar (1990) as explained in
the text. The phases $\phi_{k}^{(0)}$ are given at time J2000.
## Appendix B Crossing the resonance with $\phi_{4}$
Figures 1 and 5 show the location and width of all first-order secular spin-
orbit resonances produced by Jupiter’s orbital solution (Appendix A). In
particular, Jupiter is located very close to the large resonance with
$\phi_{4}$, whose frequency is the nodal precession mode of Uranus. Figures 8
and 9 show the geometry of the resonance with $\phi_{4}$ for different values
of the precession constant $\alpha$ of Jupiter. These graphs can be understood
as horizontal sections of Fig. 5, where we can locate the centre of the
resonance (i.e. Cassini state 2) and the separatrix width. For easier
comparison, Figs. 5 and 8 share the same horizontal axis.
Figure 8: Level curves of the Hamiltonian function in the vicinity of
resonance with $\phi_{4}$. The resonant angle is $\sigma_{4}=\psi+\phi_{4}$.
Other terms are averaged and included up to the third order of their
amplitudes (see Saillenfest et al. 2019). Each panel corresponds to a
different value of the precession constant $\alpha$. Equilibrium points
(called Cassini states) are shown by black spots. The interior of the
resonance is coloured red and the separatrix is shown with a thick red curve.
The location and width of the resonance for continuous values of $\alpha$ can
be seen in Fig. 5. In order to avoid being misled by coordinate singularities,
Fig. 9 shows the same level curves in a different set of coordinates.
Figure 9: Same as Fig. 8, but using polar coordinates that are not singular
for an obliquity $\varepsilon=0^{\circ}$. The outer black circle corresponds
to an obliquity $\varepsilon=40^{\circ}$.
## Appendix C Crossing the resonance with $\phi_{19}$
As a matter of fact, Jupiter’s orbital motion is not restricted to the
$\phi_{4}$ term. However, secular spin-orbit resonances with all other terms
(apart from $\phi_{19}$) are located very far from the location of Jupiter, so
that their effects average over time. The case of $\phi_{19}$ is special: even
though it is very thin, this resonance is not far from Jupiter’s location (see
the upper red curve in Fig. 5), which means that $\psi+\phi_{19}$ is a slow
angle that cannot be averaged out.
Instead of considering only $\phi_{4}$, as in Appendix B, a more rigorous
model of the long-term spin-axis dynamics of Jupiter consists in averaging the
Hamiltonian function over all angles except the resonant combinations with $\phi_{4}$
and $\phi_{19}$. For a constant value of $\alpha$, this results in a two-
degree-of-freedom Hamiltonian system, in which the two angle coordinates are
$\sigma_{4}=\psi+\phi_{4}$ and $\sigma_{19}=\psi+\phi_{19}$. The dynamics can
then be studied using Poincaré surfaces of section. Figure 10 shows two
examples of sections. The lower island centred at $\sigma_{19}=0$ corresponds
to the thin resonance with $\phi_{19}$: As expected, it is completely
distorted as compared to the unperturbed separatrix (blue curve) due to the
proximity of the large resonance with $\phi_{4}$. It still persists, however,
as a set of periodic orbits. In contrast, the large resonance with $\phi_{4}$
is hardly affected at all by the $\phi_{19}$ term, which only transforms its
separatrix into a thin chaotic belt. In the left panel of Fig. 10, we can also
recognise Cassini state 1 with $\phi_{4}$ (for $\sigma_{4}=\pi$ and a small
obliquity), which is also visible in Fig. 9.
Figure 10: Poincaré surfaces of section showing the dynamics in the vicinity
of resonances with $\phi_{4}$ and $\phi_{19}$. Each graph corresponds to a
different value of $\alpha$ (see titles). The separatrices of the two
resonances taken separately are shown with coloured curves: red for $\phi_{4}$
and blue for $\phi_{19}$.
We investigated whether Jupiter could be trapped into the thin resonance with
$\phi_{19}$ and follow its resonance centre, but we found out that this can
never happen. On the one hand, the current phase of $\sigma_{19}$ is close to
$\pi$, so that even if $\lambda$ is finely tuned to place Jupiter right inside
the resonance, it ends up near the separatrix, leading to an unstable resonant
motion. On the other hand, as shown in Fig. 6, the libration period of
$\sigma_{19}$ is extremely large (the width and oscillation frequency both
scale as the square root of the amplitude $S_{19}$). This means that, as
$\alpha$ increases, the crossing of this resonance is not adiabatic. The
libration periods shown in Fig. 6 should be compared to the time needed for
$\alpha$ to go through the resonant region. According to Fig. 4, the mean
increase rate of $\alpha$ is $0.086^{\prime\prime}\cdot$yr${}^{-1}\cdot$Gyr-1
and according to Fig. 5, the resonances have a vertical width
$\Delta\alpha\approx 0.21^{\prime\prime}\cdot$yr-1 for the $\phi_{4}$
resonance and $\Delta\alpha\approx 0.01^{\prime\prime}\cdot$yr-1 for the
$\phi_{19}$ resonance (computed at the right separatrix when it appears).
Therefore, the time that would be needed for $\alpha$ to cross the $\phi_{4}$
resonance is $\Delta t\approx 2.5$ Gyrs, which corresponds to many oscillation
periods of $\sigma_{4}$ (about $100$): this is the adiabatic regime. On the
contrary, the time needed for $\alpha$ to cross the $\phi_{19}$ resonance is
$\Delta t\approx 0.1$ Gyrs, which corresponds to less than one oscillation
period of $\sigma_{19}$ (about $0.2$): this is the non-adiabatic regime. As a
result, a resonance capture with $\phi_{19}$ is extremely unlikely, even if
the orbital motion of Jupiter was restricted to its $19$th harmonic: Jupiter’s
spin axis enters the resonance and exits before $\sigma_{19}$ has time to
oscillate. With suitable initial conditions, Jupiter roughly follows the
resonance centre during the crossing, producing a bump in the obliquity
evolution (see Fig. 5 for $\lambda=0.227$), but nothing more can possibly
happen. This kind of non-adiabatic resonance crossing is described by Ward et
al. (1976), Laskar et al. (2004b), and Ward & Hamilton (2004) using Fresnel
integrals.
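The adiabatic criterion above reduces to simple arithmetic: the time for $\alpha$ to cross a resonance is its width divided by the mean drift rate. A minimal Python check using only the numbers quoted above (the text rounds the results to $\approx 2.5$ and $\approx 0.1$ Gyr):

```python
# Resonance-crossing times from the drift rate of alpha; all numbers are the
# ones quoted in the text (Figs. 4-5), not new assumptions.
ALPHA_RATE = 0.086  # mean increase rate of alpha, arcsec yr^-1 per Gyr

def crossing_time_gyr(width_arcsec_per_yr, rate=ALPHA_RATE):
    """Time (Gyr) needed for alpha to sweep through a resonance of the given width."""
    return width_arcsec_per_yr / rate

dt_phi4 = crossing_time_gyr(0.21)   # phi_4: ~2.4 Gyr, many sigma_4 periods (adiabatic)
dt_phi19 = crossing_time_gyr(0.01)  # phi_19: ~0.12 Gyr, < 1 sigma_19 period (non-adiabatic)
```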
# The CaFe Project: Optical Fe II and Near-Infrared Ca II triplet emission in
active galaxies.
II. The driver(s) of the Ca II and Fe II and its potential use as a chemical
clock
Mary Loli Martínez-Aldama Center for Theoretical Physics, Polish Academy of
Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland Swayamtrupta Panda Center
for Theoretical Physics, Polish Academy of Sciences, Al. Lotników 32/46,
02-668 Warsaw, Poland Nicolaus Copernicus Astronomical Center, Polish Academy
of Sciences, ul. Bartycka 18, 00-716 Warsaw, Poland Bożena Czerny Center for
Theoretical Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668
Warsaw, Poland Murilo Marinello Laboratório Nacional de Astrofísica, R. dos
Estados Unidos, 154 - Nações, Itajubá - MG, 37504-364, Brazil Paola Marziani
INAF-Astronomical Observatory of Padova, Vicolo dell’Osservatorio, 5, 35122
Padova PD, Italy Deborah Dultzin Universidad Nacional Autonóma de México
Instituto de Astronomía: Ciudad de Mexico, Distrito Federal, MX 04510, Mexico
###### Abstract
In this second paper in the series, we carefully analyze the observational
properties of the optical Fe ii and NIR Ca ii triplet in Active Galactic
Nuclei, as well as the luminosity, black hole mass, and Eddington ratio in
order to define the driving mechanism behind the properties of our sample. The
Ca ii shows an inverse Baldwin effect, bringing out the particular behavior of
this ion with respect to the other low–ionization lines such as H$\beta$. We
performed a Principal Component Analysis, where $81.2\%$ of the variance can
be explained by the first three principal components drawn from the FWHMs,
luminosity, and equivalent widths. The first principal component (PC1) is
primarily driven by the combination of black hole mass and luminosity with a
significance over $99.9\%$, which in turn is reflected in the strong
correlation of the PC1 with the Eddington ratio. The observational
correlations are better represented by the Eddington ratio, thus it could be
the primary mechanism behind the strong correlations observed in the Ca ii-Fe
ii sample. Since calcium belongs to the $\alpha$-elements, the Fe ii/Ca ii
flux ratio can be used as a chemical clock for determining the metal content
in AGN and tracing the evolution of the host galaxies. We confirm the
de-enhancement of the ratio Fe ii/Ca ii with the Eddington ratio, suggesting a
metal enrichment of the BLR in intermediate-$z$ objects with respect to low-$z$
ones. A larger sample, particularly at $z>2$, is needed to confirm the
present results.
galaxies: active, quasars: emission lines; quasars: supermassive black holes;
galaxies: abundances
††journal: ApJ††software: CLOUDY v17.01 (Ferland et al. 2017); MATPLOTLIB
(Hunter 2007); NUMPY (Oliphant 2015); SKLEARN (Pedregosa et al. 2011);
STATSMODELS (Seabold & Perktold 2010); TOPCAT (Taylor 2005)
## 1 Introduction
The large diversity of the emission lines observed in the spectrum of the
Active Galactic Nuclei (AGN) reveals a complex structure of the broad line
region (BLR). The physical conditions of the BLR such as density, ionization
parameter and metallicity can be estimated from the flux ratios of the emission
lines, while their profiles supply information about the dynamics of the BLR clouds
(Wandel, 1999; Negrete et al., 2014; Schnorr-Müller et al., 2016; Devereux,
2018). Emission lines can be divided considering their ionization potential
(IP). Typically, high-ionization lines (HIL) show IP$>$40 eV, while low-
ionization lines (LIL) have IP$<$20 eV (Collin-Souffrin et al., 1988; Marziani
et al., 2019). Reverberation mapping studies have confirmed the stratification
of the BLR (e.g. Horne et al., 2020), where HIL such as C iv$\lambda 1549$ or
He ii$\lambda 1640$ are emitted closer to the central continuum source, and
LIL such as H$\beta$ or Mg ii$\lambda 2800$ are emitted at least three times
further out. The presence of emission lines with very low-ionization potentials
(IP$\sim$10 eV) such as the multiple permitted Fe ii transitions or the Ca ii
triplet at $\lambda 8498,\lambda 8542,\lambda 8662$ (hereafter CaT) suggests
the existence of a zone shielded from the high energy photons emanated by the
central source and likely located in the outermost portion of the BLR (Joly,
1987; Dultzin-Hacyan et al., 1999; Rodríguez-Ardila et al., 2002; Rodriguez-
Ardila et al., 2012; Garcia-Rissmann et al., 2012; Marinello et al., 2016).
The physical conditions of the Fe ii have been widely explored in a broad
wavelength range since it provides useful information about the energy-budget
of the BLR (Osterbrock & Ferland, 2006; Vestergaard & Wilkes, 2001). However,
its complex electronic structure and the various ionization and
excitation mechanisms complicate the modeling of the Fe ii emission (Collin & Joly, 2000;
Baldwin et al., 2004). This ionic species manifests as a pseudo-continuum due
to the numerous blended multiplets ranging from the UV to the NIR. In our
studies (see e.g. Panda et al., 2020a, hereafter Paper-1), we incorporate
the Fe ii dataset from Verner et al. (1999), which includes 371 energy levels with
ionization potentials up to $\sim$11.6 eV, available in CLOUDY (Ferland et al.,
2017). Newer Fe ii models are now available that have calculated more energy
levels for this species, reaching up to 26.4 eV (see Sarkar et al., 2020, for a
recent compilation). This model reproduces well the UV and optical Fe ii
contribution observed in I Zw 1, constraining in a better way the physical
conditions of the Fe ii emitting clouds. For more details on the progress in
understanding the Fe ii emission in AGNs and its modelling, we refer the
readers to Paper-1.
The singly-ionized calcium emission can be approximately modeled by a
five-level atom: (1) the optical H and K lines ($\lambda 3933$, $\lambda 3968$ Å)
are emitted from the 4p level to the 4s ground level, (2) the infrared
multiplet ($\lambda 8498$, $\lambda 8542$ and $\lambda 8662$ Å, CaT) arises
from the 4p level to the 3d metastable level, and (3) the forbidden multiplet
($\lambda 7291$, $\lambda 7324$ Å) arises from the 3d metastable level to the
ground level (Ferland & Persson, 1989; Marziani et al., 2014). Due to the
similarity between the energy of Ly$\alpha$ photons (10.2 eV) and the
ionization potential of singly-ionized Ca ii (11.8 eV), the 3d metastable level
is highly populated and the collisional excitation process leading to the infrared Ca ii triplet
emission is efficient. Thus, the near-infrared CaT offers the possibility to
study the properties of the very low-ionization lines in the BLR. CaT is
prominent in Narrow-Line Seyfert 1 (NLS1) galaxies (Persson, 1988; Marinello
et al., 2016) and quasars (Martínez-Aldama et al., 2015a). However, when the
stellar continuum has a significant contribution, the emission profile shows a
central dip or, in extreme cases, only an absorption profile is observed.
Therefore, a correct subtraction of the stellar component is needed,
particularly in low-luminosity sources. The CaT absorption is mainly observed
in Seyfert 1 and Seyfert 2 galaxies, where it may be enhanced by a population
of red supergiant stars associated with a starburst (Terlevich et al., 1990).
The velocity dispersion provided by the stellar CaT has been used to infer the
stellar populations and determine the black hole mass through the
$M_{\rm BH}-\sigma_{\star}$ relation (Garcia-Rissmann et al., 2005).
Some theoretical and observational studies have been devoted to look for the
connections between the optical Fe ii and CaT. Both ions show a strong linear
relation and similar widths, narrower than H$\beta$ or Pa$\beta$ (Persson,
1988; Martínez-Aldama et al., 2015a, b; Marinello et al., 2016; Panda et al.,
2020a), suggesting that both emission lines are emitted in the outer parts of
the BLR. According to the photoionization models, both emission lines share
almost identical physical conditions: large clouds (column densities
$\sim 10^{24}$ cm${}^{-2}$) with high mean densities ($\sim 10^{12}$ cm${}^{-3}$) and relatively low
temperatures ($\lesssim$ 8000 K) (Joly, 1987, 1989; Ferland & Persson, 1989;
Panda et al., 2020a; Panda, 2020).
In the first paper of the presented analysis (Paper-1), we updated the
observational correlation between the strengths of the two species (i.e. the
flux ratios Fe ii/H$\beta$ and CaT/H$\beta$, hereafter RFeII and RCaT,
respectively) given by:
${\log{\rm R}_{\rm CaT}\approx(0.974\pm 0.119)\log{\rm R}_{\rm FeII}-(0.657\pm
0.041).}$ (1)
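As a quick numerical illustration of Eq. (1): taking the central values of the fit, a source at the highly-accreting limit RFeII = 1 (see Sec. 4.2) is predicted to have $\log{\rm R}_{\rm CaT}\approx-0.657$, i.e. ${\rm R}_{\rm CaT}\approx 0.22$. A one-function sketch:

```python
import math

def rcat_from_rfeii(r_feii, slope=0.974, intercept=-0.657):
    """Central-value prediction of Eq. (1): log R_CaT = slope * log R_FeII + intercept."""
    return 10 ** (slope * math.log10(r_feii) + intercept)

print(round(rcat_from_rfeii(1.0), 2))  # 0.22
```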
We also looked extensively at the optical Fe ii and CaT emission from a
theoretical standpoint, using the photoionization models, which are compared
with an up-to-date sample of Fe ii and CaT. We tested various photoionization
models in terms of ionization parameter, cloud density, metallicity, and
column density, and found an overlapping range of physical conditions that are
required to efficiently excite these two species. We also found that, in
order to be well modeled, the strong Fe ii emitters require metallicities ranging from
solar to super-solar (Martínez-Aldama et al., 2018; Śniegowska et al., 2020).
This result is obtained by comparing the observed UV flux ratios of emission
lines such as C iv$\lambda 1549$, Al iii$\lambda$1860,
Si iv$\lambda$1397+O iv]$\lambda$1402 or N v$\lambda 1240$ over He ii$\lambda
1640$ with the ones predicted by CLOUDY simulations. The correlation between
the stronger Fe ii emitters, metallicity, and Eddington ratio has been
confirmed by several independent studies (e.g. Hamann & Ferland, 1992; Shin et
al., 2013; Panda et al., 2019).
In a subsequent paper, Panda (2020), we extended the photoionization
modelling to recover the EWs of the lines emitted in the low-ionization region
of the BLR, accounting for the anisotropy of the accretion disk emission and
thereby reaching a better understanding of the photoionization of these
regions.
In this part of the series, we look at the observational properties and
correlations from the up-to-date optical and near-infrared measurements
centered around Fe ii and CaT emission, respectively. Usually, the stronger Fe
ii and CaT emitters are associated with NLS1 AGN,
but AGN with higher luminosities and broader profiles also show strong
emission in these two species (Martínez-Aldama et al., 2015a). Since the Fe ii
strength (or RFeII) is apparently driven by the Eddington ratio (Boroson &
Green, 1992; Marziani et al., 2003; Dong et al., 2011; Zamfir et al., 2010;
Panda et al., 2018, 2019), it motivates us to explore the role of the
Eddington ratio, black hole mass and luminosity in the CaT and Fe ii
properties to decipher the primary driver leading to this observed correlation
between the two species.
Additionally, since calcium belongs to the $\alpha$-elements and iron is
mainly produced by Type Ia supernovae on relatively longer timescales, the flux
ratio Fe ii/Ca ii can be used as a proxy for estimating the chemical
enrichment (Martínez-Aldama et al., 2015a), as has already been tested with
the UV Fe ii and Mg ii$\lambda 2800$ (Verner et al., 2009; Dong et al., 2011;
Shin et al., 2019; Onoue et al., 2020, and references therein). Therefore, a
deeper observational analysis is required.
The paper is organized as follows: in Section 2, we include a short review of
the sample. Section 3 describes the methods employed to estimate the black
hole mass and Eddington ratio. In Section 4, we report the observational
correlations of our sample, including the Eigenvector 1 sequence and the
Baldwin effect. In order to confirm the correlations found, we performed a
Principal Component Analysis (PCA), the results of which are shown in Section
5. In Section 6 we discuss the potential drivers of the CaT–Fe ii properties,
the Baldwin effect, as well as the Fe ii/CaT ratio as a possible metal
indicator. Conclusions are summarized in Section 7. Throughout this work, we
assume a standard cosmological model with $\Omega_{\Lambda}=0.7$,
$\Omega_{m}=0.3$, and $H_{0}=70$ km s${}^{-1}$ Mpc${}^{-1}$.
## 2 Observational Data
Our analysis is based on the observational properties of H$\beta$, optical
Fe ii (4434–4684 Å) and the NIR Ca ii triplet collected from Persson (1988),
Martínez-Aldama et al. (2015a, b), Marinello et al. (2016) and Marinello et
al. (2020). A detailed description of the full sample is discussed in Paper-1.
The full sample includes 58 objects with $42.5<\mathrm{log}\,L_{\rm
opt}\,{(5100\AA)}<47.7$ at $0.01<z<1.68$. Due to the different selection
criteria of the subsamples, the full sample shows a bimodal distribution in
redshift and luminosity, where $58\%$ of the sample shows $z<0.1$ and log
$L_{\rm opt}\sim 44$, while the rest of the objects are located at $z\sim 1.6$
with log $L_{\rm opt}\sim 47.4$ (see Figure 1). Therefore, our sample is
affected by such biases, which could influence our results. These aspects are
discussed in Sec. 4 and Sec. 6.
The optical measurements from Persson (1988) are originally reported by
Osterbrock (1976), Osterbrock & Phillips (1977), Koski (1978), Oke & Shields
(1976), Kunth & Sargent (1979) and de Bruyn & Sargent (1978). However, the
quality of those data is not as high as that of more recent observations;
therefore, this sample should be treated with caution. There are five sources
in common between Marinello’s and Persson’s samples. The variations in the
different observational parameters are significant in three of them (Mrk 335,
Mrk 493, and I Zw 1, see Table LABEL:tab:table1). This could be an indication
of the quality of the measurements. However, Marinello’s and Persson’s samples
include typical NLS1 objects, thus a similar behavior is expected, as Figure 2
shows. In order to settle this point, new observations of the Persson
sample are needed.
Table LABEL:tab:table1 reports the properties of each source in the sample,
such as redshift, optical (at 5100Å) and NIR (at 8542Å) luminosity, the flux
ratios RFeII and RCaT, as well as the equivalent width (EW) and Full-Width at
Half Maximum (FWHM) of H$\beta$, CaT and O i $\lambda 8446$. All the
measurements were taken from the original papers (Persson, 1988; Martínez-
Aldama et al., 2015a, b; Marinello et al., 2016, 2020). Since Persson (1988)
does not report the luminosities at 5100Å, we have estimated them from the
apparent V magnitudes listed in the Veron-Cetty & Veron (2010) catalog. We have
considered a zero point flux density of $3.55\times 10^{-9}$ erg
s${}^{-1}\,$cm${}^{-2}\,\AA^{-1}$ (Bessell, 1990) to estimate the flux at
5500Å in the observed-frame. After correcting for the redshift, we assumed a
slope of $\alpha_{\lambda}=-1.67$ (Vanden Berk et al., 2001) to estimate the
flux at 5100Å. Finally, the distance to the source was obtained through
classical integration, assuming the cosmological parameters specified at the
end of Sec. 1.
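The chain of steps just described can be sketched in Python. The zero point, continuum slope, and cosmology are the values quoted above; since the exact K-correction applied in the original work is not spelled out, the power-law extrapolation below is one plausible reading, not the authors' exact procedure:

```python
import numpy as np

F0 = 3.55e-9    # V-band zero-point flux density, erg s^-1 cm^-2 A^-1 (Bessell 1990)
ALPHA = -1.67   # continuum slope f_lambda ~ lambda^alpha (Vanden Berk et al. 2001)
H0 = 70.0       # km s^-1 Mpc^-1
OM, OL = 0.3, 0.7
C_KMS = 2.99792458e5
MPC_CM = 3.0857e24

def lum_distance_cm(z, n=10000):
    """Luminosity distance by direct ('classical') integration of 1/E(z)."""
    zz = np.linspace(0.0, z, n)
    ez = np.sqrt(OM * (1.0 + zz) ** 3 + OL)
    # trapezoidal rule for the comoving-distance integral, in Mpc
    dc = (C_KMS / H0) * np.sum((1.0 / ez[:-1] + 1.0 / ez[1:]) / 2.0 * np.diff(zz))
    return (1.0 + z) * dc * MPC_CM

def l_opt_5100(v_mag, z):
    """lambda*L_lambda at rest-frame 5100 A (erg s^-1) from an apparent V magnitude."""
    f5500 = F0 * 10 ** (-0.4 * v_mag)            # observed f_lambda at 5500 A
    lam_obs = 5100.0 * (1.0 + z)                 # observed wavelength of rest 5100 A
    f_lam = f5500 * (lam_obs / 5500.0) ** ALPHA  # power-law extrapolation
    dl = lum_distance_cm(z)
    return 4.0 * np.pi * dl ** 2 * lam_obs * f_lam  # lambda*f_lambda is frame-invariant
```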
Figure 1: Redshift distribution of the sample as a function of optical
luminosity at 5100Å in units of erg s${}^{-1}$. Black, red, green and magenta symbols
correspond to Persson (1988), Martínez-Aldama et al. (2015a, b), Marinello et
al. (2016) and Marinello et al. (2020) samples, respectively.
## 3 Parameter Estimations
### 3.1 Black hole mass
The black hole mass ($M\mathrm{{}_{BH}}$) is estimated using the classical
relation given by:
$M\mathrm{{}_{BH}}=f\mathrm{{}_{BLR}}\frac{R\mathrm{{}_{BLR}}\,v^{2}}{G}$ (2)
where $G$ is the gravitational constant, $f\mathrm{{}_{BLR}}$ is the virial
factor, $R\mathrm{{}_{BLR}}$ is the broad line region size and $v$ is the
velocity field in the BLR, which is represented by the FWHM of H$\beta$. The
virial factor includes information of geometry, kinematics, and inclination
angle of the BLR. Typically, it is assumed to be constant ($\sim 1$); however,
some results indicate that this factor should vary across the AGN population
(e.g. Collin et al., 2006; Yu et al., 2019). In this work, we assume the
virial factor proposed by Mejía-Restrepo et al. (2018), which is anti-
correlated with the FWHM of the emission line:
$f\mathrm{{}_{BLR}}=\left({{\mathrm{FWHM_{H\beta}}}}\,/\,{4550{\pm
1000}}\right)^{-1.17}$.
For single–epoch spectra, the $R\mathrm{{}_{BLR}}$ is usually estimated
through the Radius-Luminosity (RL) relation (Bentz et al., 2013), given by:
$\displaystyle\mathrm{log}\left(\frac{R\mathrm{{}_{BLR}}}{\mathrm{1lt-
day}}\right)=$ $\displaystyle(1.527\,\pm\,0.31)\,+$
$\displaystyle\,0.533{{}^{+0.035}_{-0.033}}\,\mathrm{log}\left(\frac{L_{\rm
opt}}{10^{44}\,\mathrm{erg\,s^{-1}}}\right).$ (3)
where $L_{\rm opt}$ corresponds to the luminosity at 5100Å. Black hole mass
estimations are reported in Table LABEL:tab:table2. The sample shows a clear
distinction between low and high black hole masses
($\log M\mathrm{{}_{BH}}/{\rm M}_{\odot}\sim 7$–$10$).
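Equations (2)–(3) combine into a single-epoch mass estimate. A minimal sketch using the central values of the fitted coefficients and the FWHM-dependent virial factor quoted above (error propagation omitted):

```python
import math

G_CGS = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
MSUN_G = 1.989e33     # solar mass in g
LT_DAY_CM = 2.59e15   # one light-day in cm

def r_blr_lt_days(l_opt):
    """RL relation (Eq. 3), central values: R_BLR in light-days, L_opt in erg/s."""
    return 10 ** (1.527 + 0.533 * math.log10(l_opt / 1e44))

def f_blr(fwhm_kms):
    """FWHM-dependent virial factor of Mejia-Restrepo et al. (2018)."""
    return (fwhm_kms / 4550.0) ** -1.17

def m_bh_msun(l_opt, fwhm_kms):
    """Virial black hole mass (Eq. 2) in solar masses."""
    r_cm = r_blr_lt_days(l_opt) * LT_DAY_CM
    v_cms = fwhm_kms * 1e5
    return f_blr(fwhm_kms) * r_cm * v_cms ** 2 / G_CGS / MSUN_G

# e.g. L_opt = 1e44 erg/s and FWHM = 4550 km/s (so f_BLR = 1) give ~1.4e8 Msun
```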
### 3.2 Eddington ratio
The accretion rate is estimated by the classical Eddington ratio defined by
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$, where $L\mathrm{{}_{bol}}$ is the
bolometric luminosity and $L\mathrm{{}_{Edd}}$ is the Eddington luminosity
defined by $L\mathrm{{}_{Edd}}=1.5\times
10^{38}\left(\frac{M_{BH}}{M_{\odot}}\right)$ erg s${}^{-1}$. The bolometric luminosity
formally can be estimated integrating the area under the broadband spectral
energy distribution (SED) (e.g. Richards et al., 2006, and references
therein). However, since this process requires multi-wavelength data to
constrain the SED fitting process, it is hard to get an estimation for
individual sources. Mean SEDs have been used to estimate average values
called bolometric correction factors ($k\mathrm{{}_{bol}}$), which scale the
monochromatic luminosity ($\lambda L_{\lambda}$) to give a rough estimation of
$L\mathrm{{}_{bol}}$$=k_{\mathrm{bol}}\cdot\lambda L_{\lambda}$. Usually,
$k\mathrm{{}_{bol}}$ is taken as a constant for a monochromatic luminosity;
however, results like the well-known non-linear relationship between the UV
and X-ray luminosities (e.g. Lusso & Risaliti, 2016, and references therein)
indicate that $k\mathrm{{}_{bol}}$ should be a function of luminosity (Marconi
et al., 2004; Krawczyk et al., 2013). Along the same line, Netzer (2019)
proposed new bolometric correction factors as a function of the luminosity
assuming an optically thick and geometrically thin accretion disk, over a
large range of black hole mass ($10^{7}$–$10^{10}$ M⊙), Eddington ratios
(0.007–0.5), spin ($-1$ to $0.998$) and a disk inclination angle of $56^{\circ}$. For
the optical range, the bolometric correction factor is given by:
$\displaystyle k_{\mathrm{bol}}=40\left(\frac{L_{\rm
opt}}{10^{42}}\right)^{-0.2},$ (4)
where $L_{\rm opt}$ corresponds to the luminosity at 5100Å. The wide range of
parameters considered in the modeling provides a better approximation,
corroborating previous results (Nemmen & Brotherton, 2010; Runnoe et al.,
2012a, b). In addition, it provides better accuracy than a constant
bolometric correction factor, which can lead to errors as large as $50\%$ for
individual measurements. Therefore, we explore the use of $k\mathrm{{}_{bol}}$
for estimating the Eddington ratio. Table LABEL:tab:table2 reports the
Eddington ratios utilizing the BH masses obtained using the classical RL
relation (Eq. 3).
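Putting the pieces of this subsection together, a sketch of the Eddington-ratio estimate (central values only; the black hole mass comes from the virial estimate of Sec. 3.1):

```python
def eddington_ratio(l_opt, m_bh_msun):
    """L_bol/L_Edd with the luminosity-dependent bolometric correction of Eq. (4).

    l_opt: monochromatic luminosity at 5100 A in erg/s; m_bh_msun: M_BH in Msun.
    """
    k_bol = 40.0 * (l_opt / 1e42) ** -0.2   # Netzer (2019) optical correction
    l_bol = k_bol * l_opt                   # bolometric luminosity, erg/s
    l_edd = 1.5e38 * m_bh_msun              # Eddington luminosity, erg/s (Sec. 3.2)
    return l_bol / l_edd

# e.g. L_opt = 1e44 erg/s and M_BH = 1.4e8 Msun give L_bol/L_Edd ~ 0.08
```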
## 4 The correlation analysis
### 4.1 Observational Pairwise Correlations
Figure 2 shows the correlation matrix of the observational parameters: optical
($L_{\mathrm{opt}}$ at 5100Å) and NIR ($L_{\rm NIR}$ at 8542Å) continuum
luminosities, the flux ratios RFeII and RCaT and the emission line properties
such as FWHM and the equivalent width (EW) of H$\beta$, O i $\lambda 8446$ and
CaT, plus the EW of Fe ii. In order to stress the difference in luminosity and
FWHM values in the subsamples, they are identified by different colors. Each
panel also includes the Spearman’s rank correlation coefficient ($\rho$) and
the $p$-value, where significant correlations ($p<0.001$) are colored in red
(otherwise shown in black). Optical and NIR luminosities follow a linear
relation (Fig. 2), therefore both luminosities show the same behavior with the
rest of the observational properties.
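The matrix of Fig. 2 can be reproduced with a few lines of Python; the arrays below are illustrative stand-ins for the measured quantities, and the significance threshold is the $p<0.001$ used in the figure:

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def spearman_matrix(params, p_limit=1e-3):
    """Pairwise Spearman rho and p-value for a dict of equal-length arrays,
    flagging the correlations significant at the p < 0.001 level of Fig. 2."""
    result = {}
    for a, b in combinations(params, 2):
        rho, p = spearmanr(params[a], params[b])
        result[(a, b)] = (rho, p, p < p_limit)
    return result

# illustrative synthetic data, not the measured sample
rng = np.random.default_rng(1)
r_feii = rng.uniform(0.1, 2.0, 58)
demo = {
    "RFeII": r_feii,
    "RCaT": 0.22 * r_feii ** 0.974 * 10 ** rng.normal(0.0, 0.05, 58),
    "EWHb": rng.uniform(30.0, 120.0, 58),
}
corr = spearman_matrix(demo)  # ("RFeII", "RCaT") comes out strongly significant
```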
Top panel of Figure 2 shows the strong correlation between RFeII and RCaT,
which is described by the Eq. 1 (dashed gray line; see also inset panel). The
anti-correlation between RFeII (or RCaT) and EWHβ is expected since the
strength of H$\beta$ decreases as RFeII (or RCaT) increases. The linear
correlation between RFeII and EWFeII is due to the fact that both parameters
reflect the strength of the Fe ii emission: the first one is weighted by the
H$\beta$ flux and the second by the luminosity. The same holds for the
correlation EWCaT-RCaT. Since RCaT is correlated with RFeII, we expect a
positive linear relation between EWCaT-RFeII and EWFeII-RCaT. On the other
hand, RFeII and RCaT show non-linear trends with the FWHM of the emission
lines, which are further discussed in Section 4.2. The correlations between the
EW and the continuum luminosity are extensively described in Sec. 4.3 and 6.2.
The correlations between the FWHM of H$\beta$, O i $\lambda 8446$ and CaT are
strongest according to their Spearman coefficients and their associated
$p-$values (Fig. 2). In these panels, the 1:1 line is shown for reference.
H$\beta$ shows broader profiles than the O i $\lambda 8446$, particularly for
the sources with FWHM $>4000$ km s${}^{-1}$. We obtained the trend lines by ordinary least
squares (OLS) fitting as implemented in the Python packages sklearn and statsmodels.
The relation has a slope of 0.894$\pm$0.05 and a scatter of $\sigma_{\rm
rms}\sim$0.115 dex. The deviation at 4000 km s${}^{-1}$ could be associated with the
presence of a red-ward asymmetry in the broadest H$\beta$ profiles, i.e. with
an emitting region closer to the continuum source (Marziani et al., 2013;
Punsly et al., 2020). The presence of this feature is hard to observe in the O
i $\lambda 8446$ profile, since it is blended with the CaT and the NIR Fe ii.
On the other hand, CaT is also narrower than H$\beta$, although the scatter is
larger ($\sigma_{\rm rms}\sim$0.152 dex) and the relation is slightly
shallower than the one given by O i $\lambda 8446$ with a slope of
0.827$\pm$0.08. O i $\lambda 8446$ and Ca ii show similar widths; the
fitted relation gives a slope of 0.944$\pm$0.05 and the scatter is smaller
($\sigma_{\rm rms}\sim$ 0.103 dex) than in the previous cases. This general
behavior corroborates that H$\beta$ is emitted closer to the continuum source
than O i $\lambda 8446$ and CaT (Persson, 1988; Martínez-Aldama et al., 2015a;
Marinello et al., 2016).
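The slope and rms scatter quoted for these width-width relations can be obtained with a plain OLS fit in log space; a sketch using numpy's polyfit rather than the sklearn/statsmodels calls of the original analysis:

```python
import numpy as np

def width_width_fit(fwhm_x, fwhm_y):
    """OLS fit log FWHM_y = a * log FWHM_x + b; returns the slope, intercept,
    and the rms scatter of the residuals in dex, as quoted in Sec. 4.1."""
    lx, ly = np.log10(fwhm_x), np.log10(fwhm_y)
    a, b = np.polyfit(lx, ly, 1)
    resid = ly - (a * lx + b)
    return a, b, float(np.sqrt(np.mean(resid ** 2)))
```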
### 4.2 Eigenvector 1 sequence
The correlation between RFeII and the FWHM of H$\beta$ is known as the
Eigenvector 1 (EV1) sequence (Boroson & Green, 1992), which is also known as
the quasar main sequence (Sulentic et al., 2000). According to the EV1 scheme,
the observational and physical properties of type 1 AGN change along the
sequence (Marziani et al., 2018; Panda et al., 2018, 2019). Based on the RFeII
strength the accretion rate can be inferred, where the sources with RFeII$>$1
are typically associated with the highest Eddington ratios
($L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$$>$0.2, Marziani et al., 2003; Panda
et al., 2019, 2020b). The relation between these parameters is not linear
(Wildy et al., 2019), where orientation and luminosity are also involved (Shen
& Ho, 2014; Negrete et al., 2018).
The EV1 sequence (FWHMHβ-RFeII relation) of our sample is shown in Figure 2. A
displacement between the low– and high–luminosity objects (HE sample) can be
appreciated; however, both kinds of sources follow the same trend. This
displacement is only a luminosity effect, where the HE sample is shifted to
the larger FWHM values of the panel. An EV1-like sequence is also appreciated
in the relations RFeII-FWHMCaT and RFeII-FWHMOI, which is expected due to the
linear relation between the widths of the emission lines (Sec. 4.1).
The relation FWHMHβ-RCaT shows an EV1-like sequence for the low-luminosity
subsample, but it is not appreciated in the high-luminosity objects (the HE
sample). It seems that in some objects the CaT strength increases with increasing
FWHMHβ. The same effect is observed for the relations FWHMOI-RCaT and FWHMCaT-
RCaT. Surprisingly, the break occurs at RCaT$\sim 0.2$ which corresponds to
RFeII=1 (following the Eq. 1), the limit for the highly accreting sources
according to Marziani et al. (2003). A similar decoupling is also observed in
the relation between the EWCaT and the FWHM of the emission lines, but the
scatter is quite large. Martínez-Aldama et al. (2015a) found a rough
enhancement of EWCaT for the HE sample at intermediate–$z$ with respect to the
other objects at low-$z$, attributing this behavior to a burst of star
formation and an enrichment in the intermediate-redshift sources. The new HE
objects (Martínez-Aldama et al., 2015b) added to the presented analysis seem
to corroborate these results, however, some selection effect could also be
involved. We discuss this result in Sec. 6.3.
Figure 2: Correlation matrix for emission lines and continuum properties.
Black, red, green and magenta symbols correspond to Persson (1988), Martínez-
Aldama et al. (2015a, b), Marinello et al. (2016) and Marinello et al. (2020)
samples, respectively. Each panel specifies the Spearman’s rank correlation
coefficient and the $p-$value, where the significant correlations are colored
in red. Equivalent width (EW), FWHM, optical and NIR luminosities are given in
units of Å, km s${}^{-1}$ and erg s${}^{-1}$, respectively. Gray vertical lines in the first
column mark the limit for super-Eddington sources at RFeII=1.0. In the
diagrams where the FWHMs and luminosities are correlated, the gray dashed line
marks the relation 1:1. In the correlation RCaT-RFeII gray dashed line
corresponds to Eq. (1). The inset right panel shows the relation RFeII-RCaT in
log–scale with the results of the bootstrap analysis, see Sec. 4.5. Black
dotted lines mark the confidence intervals at 95% for the 1000 realizations
(dark gray lines) of the bootstrap analysis. The light-gray patch marks the
corresponding prediction-interval band for the sample.
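The bootstrap bands referred to in the caption can be produced as follows; a sketch with 1000 resamplings and 95% percentile intervals, matching the numbers quoted in the caption (the resampling scheme of the original Sec. 4.5 analysis may differ in detail):

```python
import numpy as np

def bootstrap_band(x, y, n_boot=1000, seed=0):
    """95% bootstrap band for a linear fit: refit n_boot resampled datasets
    and take the 2.5/97.5 percentiles of the fitted lines on a grid."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(x.min(), x.max(), 50)
    lines = np.empty((n_boot, grid.size))
    for i in range(n_boot):
        idx = rng.integers(0, x.size, x.size)   # resample with replacement
        a, b = np.polyfit(x[idx], y[idx], 1)
        lines[i] = a * grid + b
    lo, hi = np.percentile(lines, [2.5, 97.5], axis=0)
    return grid, lo, hi
```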
### 4.3 Correlations with the equivalent width: the Baldwin effect
The anti-correlation between the equivalent width and the luminosity is known
as the global Baldwin effect (BEff) (Osmer & Shields, 1999; Green et al.,
2001; Baskin & Laor, 2004; Bachev et al., 2004; Dong et al., 2009; Zamfir et
al., 2010), which was first observed between the equivalent width of C
iv$\lambda 1549$ and the continuum luminosity at 1450Å (Baldwin, 1977). The
BEff is clearly appreciated in the high–ionization lines, except NV $\lambda
1240$ due to a second enrichment (Osmer et al., 1994). However, as the
ionization potential decreases the slope of this anti-correlation gets
shallower and it is hard to distinguish a strong correlation for low-
ionization lines (Sulentic et al., 2000; Dietrich et al., 2002) An intrinsic
Baldwin effect (Pogge & Peterson, 1992) has been also detected in the multi-
epoch observations for single variable AGNs. The BEff provides information
about the ionizing continuum shape (Wandel, 1999), structure, and metallicity
(Korista et al., 1998) of the BLR. Also, it has been used for calibrating the
luminosity in cosmological studies (Baldwin et al., 1978; Korista et al.,
1998).
#### 4.3.1 Luminosity
Figure 3 shows the equivalent widths of H$\beta$, O i $\lambda 8446$, optical
Fe ii and CaT as a function of the optical and NIR luminosities. Spearman rank
correlation coefficients, $p-$values and the scatter of the correlations are
reported in Table LABEL:tab:params_corr. None of the trends between the EW and
the optical and NIR luminosity satisfies the criteria for a significant
relation, and all of them show shallow slopes with $|\alpha|<0.1$.
The shallow slope confirms the weak relation between the luminosity and the EW
for low ionization lines. This result is in agreement with larger samples at
high redshift (Dietrich et al., 2002). The negative correlation between EWHβ
and $L_{\rm opt}$ ($\alpha=-0.064\pm 0.032,\,\rho=-0.412$, $p=0.001$) is
expected due to the behavior of individual variable sources (e.g. Rakić et
al., 2017). This behavior is different from the one of the relation
EWCaT-$L_{\mathrm{opt}}$ ($\alpha=0.068\pm 0.053$, $\rho=0.417$, $p=0.001$),
which suggests the presence of an inverse Baldwin effect. A positive
correlation has also been observed between the continuum at 5100Å and the
optical Fe ii in the monitoring of the variable NLSy1 NGC 4051 (Wang et al.,
2005). The strong correlation between the Fe ii and CaT explains this behavior.
However, in our sample the relation EWFeII-$L_{\mathrm{opt}}$ is negative
($\alpha=-0.091\pm 0.039$, $\rho=-0.409,\,p=0.001$) and is just below the
criteria assumed to consider a significant correlation. Other studies
reported a BEff neither for the optical nor for the UV Fe ii (Dong et al., 2011). Finally, the
trend observed for EWOI-$L_{\mathrm{opt}}$ is not significant and shows a slope
consistent with zero ($\alpha=-0.007\pm 0.034$), also confirmed by previous
studies (Dietrich et al., 2002).
#### 4.3.2 Black hole mass and Eddington ratio
Since the black hole mass and the Eddington ratio have been considered as the
main drivers of the BEff (Wandel, 1999; Dong et al., 2011), we also present
the correlations EW-$M\mathrm{{}_{BH}}$ and
EW-$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ in Fig. 3. The parameters of the
correlations are reported in Table LABEL:tab:params_corr. The only significant
relation involving the black hole mass is EWFeII-$M\mathrm{{}_{BH}}$
($\rho=-0.493$, $\alpha=-0.151\pm 0.060$, $\sigma_{\rm rms}\sim 0.234$ dex).
In the correlations between the equivalent width and the Eddington ratio, the
significant relations are the ones involving EWHβ and EWCaT. In both cases the
correlations are steeper ($\alpha_{\rm H\beta}=-0.332\pm 0.149$, $\alpha_{\rm
CaT}=0.428\pm 0.237$) and stronger ($\rho_{\rm H\beta}=0.531$, $\rho_{\rm
CaT}=0.482$) than the luminosity case. Although the correlations for Fe ii and
O i $\lambda 8446$ are below the significance level, their slopes are steeper
than the correlations with respect to the luminosities and the black hole
mass. Hence, the Eddington ratio highlights the correlations with the
equivalent width, as Baskin & Laor (2004) and Dong et al. (2011) previously
reported.
Figure 3: Correlation matrix for the optical and NIR luminosities, black hole
mass, Eddington ratio, and the equivalent width of H$\beta$, optical Fe ii, O i $\lambda 8446$ and
CaT. Green and blue symbols indicate the low– and high–luminosity subsamples,
respectively (Sec 4.3.3). Black dashed line represents the best OLS fitting
for the full sample. The Spearman’s rank correlation coefficient, $p-$values,
slope ($\alpha$), and scatter ($\sigma_{\rm rms}$) are also shown, where the
significant correlations are colored in red. Black dotted lines mark the
confidence intervals at 95% for the 1000 realizations (dark gray lines) of the
bootstrap analysis. The light-gray patch marks the corresponding
prediction-interval band for the sample.
#### 4.3.3 Division of the sample
According to Dietrich et al. (2002), to avoid selection effects in the global
BEff a sample with a wide luminosity range is needed, $42<{\rm log}\,L<48$.
Our sample covers this range; however, at high redshift only high-luminosity
sources are available (${\rm log}\,L_{\rm opt}>47.5$ erg s${}^{-1}$). In order to
clarify the results of Sec. 4.3 and the presence of possible bias, we divided
the sample into two subsamples considering the median luminosity, log $L_{\rm
opt}=44.49$ erg s${}^{-1}$. In Fig. 3 the low– and high–luminosity subsamples are
represented by green and blue symbols, respectively. The division of the
sample directly affects the relations EW$-L_{\rm opt}$, EW$-L_{\rm NIR}$ and
EW$-{\rm M}_{\rm BH}$ where no significant correlations are observed, which
also reflects the bias involved in this consideration. For example, the
relation EWHβ$-L_{\rm opt}$ is positive for the low-luminosity subsample
($\alpha=0.1\pm 0.073$, $\rho=0.235$, $p=0.281$), while for the high-
luminosity case the relation has the opposite direction ($\alpha=-0.13\pm
0.031$, $\rho=-0.53$, $p=0.003$), similar to the behavior of the full sample.
The difference in the subsamples is also pointed out by the PCA (Appendix D).
Therefore, the correlations have some relevance only when the full sample is
considered.
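The subsample construction above (a median-luminosity split followed by per-subsample OLS slopes and Spearman statistics) can be sketched in python. The arrays below are hypothetical stand-ins for the measured log EW and log $L_{\rm opt}$ values, not the paper's data; only numpy and scipy are assumed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins for log L_opt and log EW(Hbeta); the real values
# come from the observational sample, which is not reproduced here.
log_L = rng.uniform(42.5, 47.5, 58)
log_EW = 2.0 - 0.1 * log_L + rng.normal(0, 0.2, 58)

# Split at the median luminosity, as done for Fig. 3 (log L_opt = 44.49 there).
median_L = np.median(log_L)
low, high = log_L <= median_L, log_L > median_L

for name, mask in [("low", low), ("high", high), ("full", np.ones(58, bool))]:
    slope, intercept, r, p, err = stats.linregress(log_L[mask], log_EW[mask])
    rho, p_s = stats.spearmanr(log_L[mask], log_EW[mask])
    print(f"{name}: alpha={slope:.3f}, rho={rho:.3f}, p={p_s:.3g}")
```

Comparing the per-subsample slope and $\rho$ against the full-sample values is exactly the check that exposes the bias discussed above.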
However, the correlations EW–$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ are less
affected by the division of the sample, at least, in the significant
correlations provided by the full sample analysis. In the relation
EWHβ–$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$, the direction of the best fits
in the subsamples is still negative ($\alpha_{\rm low}=-0.215\pm
0.162,\alpha_{\rm high}=-0.472\pm 0.144$), although none of the relations are
significant ($\rho_{\rm low}=-0.26,\,p_{\rm low}=0.181,\,\rho_{\rm
high}=-0.35,\,p_{\rm high}=0.07$). While in the
EWCaT–$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ relation, the slope for the
subsamples is positive ($\alpha_{\rm low}=0.240\pm 0.158,\,\alpha_{\rm
high}=0.694\pm 0.242$) such as in the full sample, but without any
significance ($\rho_{\rm low}=0.222,\,p_{\rm low}=0.245,\,\rho_{\rm
high}=0.313,\,p_{\rm high}=0.098$). This result suggests that
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ is less influenced by bias and thus
regulates the correlation between the equivalent width and the
luminosity, as originally suggested by Baskin & Laor (2004) and Bachev et al.
(2004).
### 4.4 The behavior of RFeII, RCaT and the ratio Fe ii/CaT
Fig. 4 shows the behavior of RFeII, RCaT and the ratio Fe ii/CaT as a function
of optical and NIR luminosity, black hole mass and Eddington ratio. RFeII and
RCaT do not show any significant correlation with the luminosity and black
hole mass for the full sample. Only the Fe ii/CaT shows a significant anti-
correlation with the optical ($\rho=-0.441,\,p=5.3\times 10^{-4}$) and NIR
luminosity ($\rho=-0.456,\,p=3.2\times 10^{-4}$). If the subsamples are
considered, all the best fits are below the statistical significance limit.
In the panels where $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ is involved, the
strongest correlation is the one corresponding to the ratio Fe ii/CaT
($\rho=0.554$, $p=6.4\times 10^{-6}$), followed by the one with RCaT
($\rho=$0.425, $p=8.9\times 10^{-4}$). In both cases the trend lines for the
full sample and subsamples have the same direction as in Fig. 3,
although the significant correlation arises only for the full sample.
The positive correlation between RCaT and
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ confirms that the strength of CaT is
driven by the accretion rate, and it remains even after the division of the
sample. Although the same behavior is expected for RFeII, we cannot confirm
it in our sample, even though the positive correlation between the optical and UV
RFeII and $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ is robust in the literature
(e.g., Zamfir et al., 2010; Dong et al., 2011; Martínez-Aldama et al., 2020).
Besides, RFeII has been used as a proxy for $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$
to correct the time delay for the accretion effect and decrease the scatter in
the optical and UV Radius-Luminosity relation (Du & Wang, 2019; Martínez-
Aldama et al., 2020). This indicates that the Fe ii emission (and hence RFeII)
in our sample is affected by several factors: the sample size, the quality of the
observations and the Fe ii templates employed, which could decrease the
accuracy of its estimation. For instance, 10 objects from the Persson (1988)
sample have only upper limits, such as Mrk 335, one of the five sources in
common with Marinello et al. (2016), where the RFeII value is $\sim
50\%$ higher than the value estimated by Persson (1988). This confirms that the
objects with only upper limits are highly inaccurate, which could be reflected
in the loss of the correlation with other parameters. A homogeneous fitting
process, applying the same spectral analysis procedure to the full sample,
could help to decrease the scatter and clarify the trends, which we aim to
pursue in a future work.
### 4.5 Bootstrap analysis
#### 4.5.1 Random distributions
What is the probability that an uncorrelated data set gives a correlation
with a Spearman rank coefficient as high as the one we observe? In order to
answer this question, we modeled the distributions of each of the parameters
in Fig. 3 and 4 considering a random sample of 1000 elements and the
probability distributions implemented in the stats module in python. To
determine the goodness of each fit, we used the Kolmogorov-Smirnov test,
which compares a sample with a reference probability distribution,
and we chose the distribution with the highest $p-$value ($p_{\rm ran}$). Since the
luminosity and the black hole mass show bimodal distributions, we used two
distributions to reproduce the observational one. In the rest of the cases a
good fit was obtained with only one distribution. The probability
distributions considered and the $p-$value are reported in column (11) and
(12) of Table LABEL:tab:params_corr. The distribution fitting of the
correlation $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$-Fe ii/CaT is shown in Fig.
B.1 as an example of this analysis.
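The distribution-selection step can be sketched as follows, assuming scipy.stats is the "stats module" referred to above. The observed array and the candidate list are illustrative placeholders; the paper fits each observed parameter distribution this way and keeps the candidate with the highest KS $p$-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical observed parameter values (e.g. log Eddington ratios).
observed = rng.normal(-0.7, 0.4, 58)

# A few candidate one-component distributions from scipy.stats.
candidates = [stats.norm, stats.logistic, stats.laplace]
best_name, best_p, best_params = None, -1.0, None
for dist in candidates:
    params = dist.fit(observed)                    # maximum-likelihood fit
    _, p = stats.kstest(observed, dist.name, args=params)
    if p > best_p:
        best_name, best_p, best_params = dist.name, p, params

# Draw the 1000-element random sample used later in the Monte Carlo step.
random_sample = getattr(stats, best_name).rvs(*best_params, size=1000,
                                              random_state=rng)
print(best_name, round(best_p, 3))
```

For the bimodal luminosity and black hole mass distributions, a mixture of two such components would be fitted instead, as described above.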
Later, we randomly selected 58 elements from the original 1000 realizations
and correlated them following the correlations in Figures 3 and 4, and the
correlation RFeII-RCaT. Finally, we repeated the process 2000 times and
estimated the Spearman rank correlation coefficient ($\rho_{\rm ran}$), the
corresponding $p-$value, and the fraction of significant
correlations ($f_{\rm ran}$). Results are reported in column (13) of Table
LABEL:tab:params_corr. In all the cases $f_{\rm ran}<1\times 10^{-3}$, which
means that it is very unlikely, at the $3\sigma$ confidence level, that two
independent distributions provide correlation coefficients as high as the
observational sample does. Therefore, our analysis supports the reliability
of the observed correlations.
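A minimal sketch of this Monte Carlo test, under the assumption of two independent normal parent distributions (the paper uses the distributions fitted in Sec. 4.5.1 instead):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Two independent 1000-element random parent samples (illustrative normals).
parent_x = rng.normal(0, 1, 1000)
parent_y = rng.normal(0, 1, 1000)

n_trials, n_obj = 2000, 58
significant = 0
for _ in range(n_trials):
    x = rng.choice(parent_x, n_obj, replace=False)   # mimic the 58 sources
    y = rng.choice(parent_y, n_obj, replace=False)
    rho, p = stats.spearmanr(x, y)
    if abs(rho) > 0.4 and p < 0.001:                 # significance criteria
        significant += 1

f_ran = significant / n_trials
print(f"f_ran = {f_ran:.4f}")
```

For genuinely independent parents, f_ran stays essentially at zero, which is the behavior the $f_{\rm ran}<10^{-3}$ result quantifies.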
#### 4.5.2 Linear regression fitting
Due to the small size of our sample and the gaps in luminosity and redshift,
we tested the statistical reliability of the correlations in Fig. 3 and 4 via
a bootstrap analysis (Efron & Tibshirani, 1993). Each bootstrap sample is
formed by selecting a subset of pairs from each one of the correlations
by random resampling with replacement. We created 1000 realizations and then
performed a linear regression fitting on each. The gray lines in Fig. 3 and 4
correspond to the 1000 realizations, which are in agreement with the best fit
of each correlation at the 2$\sigma$ confidence level (dotted black lines). As a
reference, the figures also show the prediction interval bands (lightgray
patch), which indicate the variation of the individual measurements and
predict that 95$\%$ of the individual points lie within the patch. As a
reference, we also analyzed the relation $L_{\rm opt}$-$L_{\rm NIR}$ (inset
panel, Fig. 2) to compare the behavior of the bootstrap analysis in a very
well-known correlation, obtaining an agreement within the $2\sigma$ confidence level.
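The pair-resampling bootstrap described above can be sketched as follows; the x, y arrays are synthetic stand-ins for one of the correlated pairs (e.g. RFeII vs RCaT), and the 2$\sigma$ interval is read off the percentiles of the bootstrap slope distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical correlated pair standing in for one of the observed relations.
x = rng.uniform(-1.0, 0.5, 58)
y = 0.9 * x + rng.normal(0, 0.1, 58)

n_boot = 1000
slopes, intercepts = np.empty(n_boot), np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, 58, 58)          # resample pairs with replacement
    res = stats.linregress(x[idx], y[idx])
    slopes[i], intercepts[i] = res.slope, res.intercept

# 2-sigma (95.45%) percentile interval of the bootstrap slope distribution.
lo, hi = np.percentile(slopes, [2.275, 97.725])
print(f"alpha_BS = {slopes.mean():.3f}, 2-sigma interval [{lo:.3f}, {hi:.3f}]")
```

The same loop, with the fit replaced by a Spearman test, yields the $f_{\rm sig}$ statistic discussed below.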
In order to quantify the bootstrap results, we considered the percentiles at
$2\sigma$ confidence level and estimated the errors of the slope ($\alpha_{\rm
BS}$) and ordinate ($\beta_{\rm BS}$) of the normal distribution drawn from
the 1000 realizations for each correlation. Results are reported in Table
LABEL:tab:params_corr. As expected, the distributions are centered on the
slope and ordinate values of each correlation, which are completely
equivalent to the ones from the observational correlations. The magnitude of
the errors indicates the reliability of the correlation. The largest errors are
associated with the correlations below the significance criteria
($-0.4<\rho_{\rm BS}<0.4$, $p>0.001$). A clear example is the errors in the
slope of the relations involving O i $\lambda 8446$ (Fig. 3) or RFeII (Fig.
4), which indicate the inaccuracy of the results, as the Spearman
correlation coefficient also shows. Meanwhile, good correlations, such as RFeII-
RCaT, show errors $<20\%$. As the correlation coefficients indicate, the
errors decrease considerably in the correlations where
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ is involved. This result points out
the relevance of $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ in the behavior of
our sample and its role in the Baldwin effect.
On the other hand, we also estimated the Spearman correlation coefficient
(${\rho_{\rm BS}}$) for the 1000 realizations and the fraction of
significant realizations with respect to the total number ($f_{\rm sig}$),
i.e., those which satisfy the significance criteria ($|\rho|>0.4$ and $p<0.001$). We also
modeled the distribution of ${\rho_{\rm BS}}$ using a skewnorm distribution
and estimated the error at $2\sigma$ confidence level. The maximum of the
${\rho_{\rm BS}}$ distribution and $f_{\rm sig}$ are reported in columns (9)
and (10) of Table LABEL:tab:params_corr. In the strongest correlation of the
sample, RFeII-RCaT, we obtained $f_{\rm sig}$=1, meaning that all 1000
bootstrap realizations satisfy the significance criteria, confirming the
reliability of the correlation. This is also highlighted by the errors of
${\rho_{\rm BS}}$, where the correlation remains significant within the
uncertainty range. In the correlations with $|\rho_{\rm BS}|>0.5$, $f_{\rm
sig}>0.75$, indicating reliable correlations. However, if the errors of
${\rho_{\rm BS}}$ are considered, there is a small possibility of dismissing the
significance of the correlation. It can be expressed by the parameter
1-$f_{\rm sig}$, which gives the probability of failing to detect a
correlation. Thus, there is a probability of $<25\%$ of detecting a false positive
correlation in EWHβ–$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ and Fe
ii/CaT–$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$. If $|\rho_{\rm BS}|=0.4-0.5$,
great care should be taken because the probability of detecting a false
positive correlation increases to $(1-f_{\rm sig})\sim 50\%$. This is the case
of correlations such as EWCaT-$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ or
Fe ii/CaT-$L_{\rm opt}$. The same interpretation of false positive probability
applies in the case of no detected correlation in the observed or bootstrap
samples ($|\rho|<0.4$), where the probabilities are always low, particularly for
the weakest correlations $(1-f_{\rm sig}>80\%)$.
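Computing $f_{\rm sig}$ and modeling the ${\rho_{\rm BS}}$ distribution might look like this; the data are again synthetic, and the skew-normal model follows the paper's choice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 0.5, 58)
y = 0.9 * x + rng.normal(0, 0.15, 58)    # hypothetical strong correlation

n_boot = 1000
rho_bs = np.empty(n_boot)
p_bs = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, 58, 58)        # resample pairs with replacement
    rho_bs[i], p_bs[i] = stats.spearmanr(x[idx], y[idx])

# Fraction of realizations meeting the significance criteria of Sec. 4.5.
f_sig = np.mean((np.abs(rho_bs) > 0.4) & (p_bs < 0.001))

# Model the rho_BS distribution with a skew-normal, as done in the paper.
a, loc, scale = stats.skewnorm.fit(rho_bs)
print(f"f_sig = {f_sig:.3f}, skewnorm (a, loc, scale) = "
      f"({a:.2f}, {loc:.2f}, {scale:.2f})")
```

For a correlation as strong as RFeII-RCaT, every realization passes the criteria and $f_{\rm sig}=1$; as the underlying $\rho$ approaches 0.4, $f_{\rm sig}$ drops toward 0.5, reproducing the false-positive probabilities quoted above.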
### 4.6 Residuals behavior
In order to assess a possible redshift effect in our results, we estimated the
residuals with respect to the best fit for the correlations in Fig. 3 and Fig.
4. We divided the sample into low– and high–$L$ subsamples, which is
equivalent to a division into low– and high–redshift. The behavior of the
distributions is shown in Fig. B.2 and B.3. If any significant difference
between the median of the distribution and the zero residual level were
observed, it could indicate a redshift effect. In all the correlations of
Fig. B.2, we observed a difference within the $2\sigma$ confidence level. On the
other hand, the relations of Fig. 4 show the same behavior; however, the width
of the distribution increases significantly, as do the median values,
particularly for the correlations involving RFeII. Since this behavior is only
observed in these correlations and the offsets remain within the
$2\sigma$ level, we cannot claim a redshift effect. As we mentioned
previously, RFeII is not well behaved in our sample compared to previous
results. Therefore any trend involving RFeII must be taken with caution.
Figure 4: Correlation matrix for optical and NIR luminosity, black hole mass,
Eddington ratio ($L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$), RFeII, RCaT and Fe
ii/CaT ratio. Colors and symbols are the same as Fig.3.
## 5 Principal Component analysis
Principal component analysis (hereafter PCA) allows us to get a better view of
the data, separating it in a quantitative manner such that the
relevant properties explain the maximum amount of variability in the dataset.
PCA works by initially finding the principal axis along which the variance in
the multidimensional space (corresponding to all recorded properties) is
maximized. This axis is known as eigenvector 1. Subsequent orthogonal
eigenvectors, in order of decreasing variance along their respective
directions, are found, until the entire parameter space is spanned (see, for
example, Boroson & Green, 1992; Kuraszkiewicz et al., 2009; Wildy et al.,
2019; Tripathi et al., 2020). The PCA method is particularly useful when the
variables within the data set are highly correlated. Correlation indicates
that there is redundancy in the data. Due to this redundancy, PCA can be used
to reduce the original variables into a smaller number of new variables
(principal components) explaining most of the variance in the original
variables. This allows to determine correlated parameters and, in the context
of our work, we utilize this technique to determine the physical parameter(s)
that lead to the correlations illustrated in Fig. 3 and 4.
Eigenvalues (or loadings) can be used to determine the number of principal
components to retain after PCA (Kaiser, 1961): (a) an eigenvalue $>$ 1
indicates that a principal component (PC) accounts for more variance than
that accounted for by one of the original variables in standardized data. This is
commonly used as a cutoff point for which PCs are retained. This holds true
only when the data are standardized, i.e., they are scaled to have standard
deviation one and mean zero; and (b) one can also limit the number of
components to those that account for a certain fraction of the total variance. Since
there is no well-accepted way to decide how many principal components are
enough, in this work we evaluate this using the scree plot (see, for example,
Figure 5), which is the plot of eigenvalues ordered from largest to the
smallest. A scree plot shows the variances (in percentages) against the
numbered principal component, and allows visualizing the number of significant
principal components in the data. The number of components is determined at
the point beyond which the remaining eigenvalues are all relatively small and
of comparable size (Peres-Neto et al., 2005; Jolliffe, 2011). This helps to
determine whether the given data set is governed by one or more dimensions,
where a dimension refers to a variable. Subsequently, these principal
components are investigated against the original variables of the dataset to
extract information of the importance of these variables in each principal
component.
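The eigenvalue bookkeeping described above (standardization, scree percentages, Kaiser cutoff) can be reproduced with a plain eigendecomposition of the correlation matrix; the 58 x 8 data matrix here is synthetic, built from two latent drivers purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical 58 x 8 data matrix standing in for the 8 observed variables.
latent = rng.normal(size=(58, 2))              # two underlying drivers
mixing = rng.normal(size=(2, 8))
X = latent @ mixing + 0.3 * rng.normal(size=(58, 8))

# Standardize (mean 0, std 1), as required for the Kaiser criterion.
Z = (X - X.mean(0)) / X.std(0, ddof=1)

# Eigendecomposition of the correlation matrix gives the PCs.
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]               # sort descending
eigval, eigvec = eigval[order], eigvec[:, order]

explained = 100 * eigval / eigval.sum()        # scree-plot percentages
n_kaiser = int(np.sum(eigval > 1))             # Kaiser (1961) cutoff
print("explained % :", np.round(explained, 1))
print("PCs retained by Kaiser criterion:", n_kaiser)
```

With standardized data the eigenvalues sum to the number of variables, so the scree percentages are simply eigenvalues over that total; the "elbow" of the descending sequence marks the retained components.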
We consider the 58 sources in our sample and collect the properties that are
common among them. Due to the diversity in the studied subsamples, we only
have 12 parameters that are obtained/estimated directly from the observations:
$z$, optical and NIR luminosity, RFeII, RCaT, FWHM and EW of H$\beta$, O i
$\lambda 8446$ and CaT, as well as the EW of Fe ii. Among these 12 parameters,
the redshift and the optical luminosities are correlated by a bias effect as
shown in Figure 1 and hence we drop the redshift and only choose the optical
luminosity. Thus we have 11 parameters that are considered in the initial
PCA. Later, we only adopt the NIR luminosity, which is equivalent to the
optical (Fig. 2). We refer the readers to Appendix C for more details on the
initial PCA tests, a note on the effect redundant parameters play in this
analysis and the final set of parameters used to infer the observed
correlations.
Next, we have the derived parameters - bolometric luminosity
($L\mathrm{{}_{bol}}$), black hole mass ($M\mathrm{{}_{BH}}$) and Eddington
ratio ($L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$), which are predicted using one
or more of the observed parameters that are already taken into account for the
PCA run. PCA is an orthogonal linear transformation technique applied to the
data and assumes that the input data contain linearly independent variables,
such that the eigenvectors can be represented as a linear combination
of variables with associated weights (eigenvalues or loadings) corresponding
to each variable. Thus, in order to remove any redundancy in the result
obtained via the PCA, one needs to make sure that the parameters that go in as
input are not scaled-up versions of another parameter, thereby avoiding the
unwanted bias that would result. The effect of the inclusion of
derived variables in the PCA is illustrated in Appendix C.
Similar to Wildy et al. (2019), we use the prcomp module in the R statistical
programming software. In addition to prcomp, we use the
factoextra package (https://cloud.r-project.org/web/packages/factoextra/index.html)
for visualizing the multivariate data at hand, especially to extract
and visualize the eigenvalues/variances of dimensions.
### 5.1 PCA on the full sample
The tests aimed at reducing the redundancy of the variables were applied as
described in Appendix C, and allow us to perform a final PCA run with
a dataset that contains variables which are obtained directly from the
observations and have as little redundancy as possible. The final input
contains 8 variables, namely, the NIR luminosity (at 8542 Å), the EWs of Fe
ii, H$\beta$, O i $\lambda 8446$ and CaT, and, the FWHMs of H$\beta$, O i
$\lambda 8446$ and CaT. The result of the PCA is illustrated in Figure 5.
In this section, we present the results of the PCA on the full sample and
infer the relative importance of the eigenvectors as a function of the input
variables. The PCA for the low- and high-luminosity subsamples described in
Sec. 4.3.3 is presented in Appendix D. Figure 5 shows the two-
dimensional factor-map between the first two principal components, scree plot
and the contributions of the input variables to the first four principal
components for the full sample. The factor-map shows the distribution of the
full sample categorically colored to highlight the different sources (see Sec.
2) in the eigen-space represented by the two principal components, Dim-1 and
Dim-2 (i.e., the PC1 and PC2). The first and the second PC contribute 40.6%
and 22.2% to the overall dispersion in the dataset. Combining the
contributions from the two subsequent PCs (PC3 and PC4), one can explain 89.1%
of the variation present in the data. We demonstrate the contributions of the
original variables on these four principal components to draw conclusions on
the primary driver(s) of the dataset.
First principal component: From the factor-map we find that the vectors
corresponding to the FWHMs of H$\beta$, O i $\lambda 8446$, CaT and the NIR
luminosity are nearly co-aligned, with the FWHM vectors of H$\beta$ and O i
$\lambda 8446$ having almost identical orientations. These four vectors are also
the major contributors to the variance along the primary principal component
(see third panel on the left of Figure 5). The red dashed line in that panel
indicates the expected average contribution: if the contribution of the
variables were uniform, the expected value would be 1/length(variables) = 1/8
$\approx$ 12.5%. For a given component, a variable with a contribution larger
than this cutoff could be considered as important in contributing to the
component. The EWFeII is barely over this cutoff limit.
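The per-variable contributions and the 1/8 cutoff can be computed from the unit-norm eigenvectors: for standardized PCA, the percent contribution of a variable to a component is 100 times the squared eigenvector entry, which is what factoextra plots. The data matrix is again a synthetic stand-in:

```python
import numpy as np

rng = np.random.default_rng(7)
latent = rng.normal(size=(58, 2))
mixing = rng.normal(size=(2, 8))
X = latent @ mixing + 0.3 * rng.normal(size=(58, 8))
Z = (X - X.mean(0)) / X.std(0, ddof=1)

eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigvec = eigvec[:, order]

# Percent contribution of each variable to PC1: the squared entries of the
# (unit-norm) eigenvector, times 100; they sum to 100 by construction.
contrib_pc1 = 100 * eigvec[:, 0] ** 2
cutoff = 100 / Z.shape[1]                      # uniform expectation: 12.5%
important = np.where(contrib_pc1 > cutoff)[0]
print(np.round(contrib_pc1, 1), "cutoff =", cutoff)
```

Variables above the cutoff (the red dashed line in Figure 5) are the ones flagged as important contributors to the component.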
Second principal component: The factor-map highlights the prevalence of the
EWCaT, which contributes $\sim$45% to this component, followed by the EWOI and
EWFeII. These three variables together account for $\sim$95% of the
contributions to this component.
Third and fourth principal components: The third PC is dominated by the EWHβ
with a minor contribution from FWHMHβ, EWOI and FWHMOI, whereas the fourth PC
is governed solely by the NIR luminosity. The other variables are below the
expected average contribution limit.
Figure 5: Graphical representation (factor-map) of the principal component
analysis (PCA) decomposition of our sample (58 sources) is shown in the first
panel. The dots represent individual objects on the standardized PC1 - PC2
plane that have variables indicated by the axis labels. The arrows represent
the prominence of the variables in the PC1 - PC2 plane. The dashed lines mark
the coordinate axes in the PC1 - PC2 plane and the ellipses depict the 95%
occupancy of the sources in their respective subsamples. The sample is
categorized based on their original source catalogues (see Panda et al.,
2020a, for details on the observational sample): (1) Martínez-Aldama et al.
(2015a); (2) Martínez-Aldama et al. (2015b); (3) Persson (1988); (4) Marinello
et al. (2016); and (5) PHL1092 (Marinello et al., 2020). The other panels
illustrate the precedence of the first 10 principal components in the form of
scree plots, followed by the contributions of the original variables to the
first four principal components.
### 5.2 Correlations between the principal eigenvectors and observed/derived
parameters
Figure 6: Correlation matrix showing dependence of the first four PCA vectors’
loadings versus the physical parameters (observed) for our full sample. The
Spearman’s rank correlation coefficient ($\rho$) and the $p-$value are
reported for the correlations whenever $p$-value $<$ 0.001. The OLS fits in
each panel are shown with red dashed lines. Black dotted lines mark the
confidence intervals at 95% for the 1000 realizations (dark gray lines) of the
bootstrap analysis. Lightgray patch marks the corresponding prediction
intervals bands for the sample. Figure 7: Correlation matrix showing
dependence of the first four PCA vectors’ loadings versus the physical
parameters (derived) for our full sample. The Spearman’s rank correlation
coefficients ($\rho$) and the $p-$values are reported for the correlations
whenever $p-$value $<$ 0.001. The OLS fits in each panel are shown with red
dashed lines. Black dotted lines mark the confidence intervals at 95% for the
1000 realizations (dark gray lines) of the bootstrap analysis. Lightgray patch
marks the corresponding prediction intervals bands for the sample.
To quantitatively assess the relevance of the observational variables and the
derived physical parameters, we show their correlation against the
contributions (loadings) of the first four principal components (PC1, PC2,
PC3 and PC4) for the full sample in Figures 6 and 7. The full sample is then
separated into two subsamples based on its median optical luminosity
(i.e. log $L_{\mathrm{opt}}$ = 44.49 erg s$^{-1}$). A comparative
analysis of the full-sample against the two subsamples is carried out in
Appendix D and in Figures D.2 and D.3.
Figure 6 is basically the correlation matrix representation of the PCA in
Figure 5, including all the intrinsic variables (except for the NIR luminosity). We
only state the values of the Spearman’s correlation coefficient and the
corresponding $p$-value when the correlation is significant ($p<$ 0.001). In
the full sample, the strongest correlation with respect to PC1 (in decreasing
order) are exhibited by FWHMOI ($\rho$=-0.845, $p$=7.51$\times 10^{-17}$),
FWHMCaT ($\rho$=-0.844, $p$=1.77$\times 10^{-13}$), FWHMHβ ($\rho$=-0.792,
$p$=1.32$\times 10^{-13}$), EWFeII ($\rho$=0.703, $p$=7.79$\times 10^{-10}$),
and by EWHβ ($\rho$=0.583, $p$=1.59$\times 10^{-6}$). In case of PC2,
significant correlations are obtained only for the EWs of CaT ($\rho$=-0.81,
$p$=1.29$\times 10^{-14}$), O i $\lambda 8446$ ($\rho$=-0.681, $p$=3.92$\times
10^{-9}$) and Fe ii ($\rho$=-0.509, $p$=4.57$\times 10^{-5}$). For the PC3,
there are negative correlations right above the significance limit for the
three FWHMs and the EWs for H$\beta$ and O i $\lambda 8446$. The correlations
for the subsamples are described in Appendix D.
Figure 7 corresponds to the derived parameters: bolometric luminosity,
black hole mass, Eddington ratio, RFeII, RCaT and the ratio of the two
species, Fe ii/CaT. For the full sample, we see clear, strong anti-correlations
with PC1 for the black hole mass ($\rho$=-0.845, $p$=6.99$\times
10^{-17}$), the bolometric luminosity ($\rho$=-0.748, $p$=1.49$\times
10^{-11}$), followed by the redshift ($\rho$=-0.7, $p$=1$\times 10^{-9}$) and
Eddington ratio ($\rho$=-0.519, $p$=2.94$\times 10^{-5}$). However, the
correlation with respect to the redshift is biased due to
the gaps in the sample distribution, which is highlighted in the panels in the
left column (this is also illustrated in Figure 1). The remaining trends
corroborate the correlations that were obtained with the FWHMs in
the previous figure. We also see a mild positive correlation of the PC1 with
the ratio, Fe ii/CaT ($\rho$=0.426, $p$=8.46$\times 10^{-4}$). There are only
two significant correlations obtained with respect to PC2, i.e. for the two
line ratios - RCaT ($\rho$=-0.506, $p$=5.08$\times 10^{-5}$) and RFeII
($\rho$=-0.46, $p$=2.79$\times 10^{-4}$). This is consistent with the observed
correlation obtained and studied in this work and in our previous studies. The
correlations of PC3 with the two ratios, RFeII ($\rho$=0.621, $p$=2$\times
10^{-7}$) and RCaT ($\rho$=0.6, $p$=1.52$\times 10^{-7}$), indicate that the
RFeII-RCaT correlation in the full dataset is at least two-dimensional and may
have multiple drivers. Following the same
analysis carried out in Sec. 4.5, we also performed a bootstrap analysis for
the correlations in Fig. 6 and 7 which reflects the behavior observed in Sec.
4.5. The correlations for the subsamples are described in Appendix D.
## 6 Discussion
In this paper, we carefully look at the observational correlations from the
up-to-date optical and near-infrared measurements centered around Fe ii and
CaT emission, respectively. Since our sample is affected by bias, we explored
its influence in our results, showing that the correlations with the Eddington
ratio and the independent observational parameters are less biased than the
ones involving luminosity or black hole mass. These results are supported by a
bootstrap analysis, which corroborated the statistical meaning of the
correlations. We did not detect any redshift dependence in the correlations
with luminosity and Eddington above 2$\sigma$ confidence level. We also
performed a Principal Component Analysis in order to define the main driver(s)
of our sample where the black hole mass, luminosity, Eddington ratio and the
Fe ii/CaT show the main correlation with the PC1.
### 6.1 The primary driver(s) of our sample
The PCA is a powerful tool; however, the principal eigenvectors are just
mathematical entities and it is not easy to connect them with a direct physical
meaning. As is shown in Sec. 4.2 and Appendix D, the PCA gives different
results if the full, low- or high-luminosity samples are considered. The
correlations between the PCi values and the observational parameters such as
FWHMHβ, RFeII or EWHβ for the low-luminosity subsample resemble the Boroson &
Green (1992) PCA results. However, the trends are different for the high-
luminosity subsample. This difference cannot be associated with luminosity or
redshifts effects, since PCA results are based on the space parameter
considered. Hence, since the objective is to understand the general drivers of
the sample, we only discuss the PCA for the full sample.
Figures 6 and 7 describe the relation between the observational and
independent parameters with the principal eigenvectors, where the FWHM shows
the primary correlations with a significance over 99.9$\%$, followed by the
relations with the equivalent widths, again with a high significance ($62.8\%$
of the variation). Hence, due to the relevance of the FWHM, a high correlation
between PC1 and the black hole mass is expected (Fig. 7). The luminosity and
black hole mass show the strongest correlations with the PC1, followed by the
Eddington ratio and the ratio Fe ii/CaT (see Sec. 6.3). The main correlations
with the PC2 are with EWCaT, EWOI, EWFeII, RCaT, EWFeII and RFeII ordered in
decreasing order of significance. Similar to the observational trends (Figures
3 and 4), all the correlations are stronger for the CaT than for Fe ii,
indicating the relevance of the CaT in our sample.
On the other hand, in Figures 3 and 4 the main correlations are the ones
involving the $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$, which is also supported
by the lowest errors provided by the bootstrap results (Fig. B.2 and B.3).
These facts point towards the Eddington ratio as the main driver. However,
from the PCA, black hole mass and luminosity have similar relevance
($\rho_{M_{\rm BH}}=0.845,\,\rho_{L}=0.748$), followed by the Eddington ratio
($\rho\sim-0.519$), all of them with a significance over 99.9$\%$. Since the
PCA reduces the dimensionality of the object-to-object variations, it is
expected that the main correlations are associated with luminosity, black
hole mass and Eddington ratio, as the last can be expressed by
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$$\propto L_{\rm opt}^{1/2}{\rm
FWHM_{H\beta}^{-2}}$. In order to test the self-dependence of the Eddington
ratio and hence its role in our sample, we performed a multivariate linear
regression fitting in the form $\log$ $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$
$\propto{\rm log}L_{\rm opt}+a\,{\rm log\,FWHM_{\rm H\beta}}$. We got $a=-3.1$
($\sigma_{\rm rms}\sim 7.71\times 10^{-5}$ dex), a variation of $25\%$ with respect
to the expected value ($a=-4$) from the definition of
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$. Therefore, this highlights the
Eddington ratio as the driver of the correlations in the sample.
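This self-dependence test can be illustrated on synthetic data built directly from the definition $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}\propto L_{\rm opt}^{1/2}\,{\rm FWHM}^{-2}$: a free bivariate fit recovers coefficients whose ratio is the $a=-4$ expected when the log $L$ coefficient is normalized to one. Ranges and scatter below are illustrative assumptions, not the sample's values:

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic data following L_bol/L_Edd ~ L^(1/2) * FWHM^(-2), plus scatter.
log_L = rng.uniform(42.5, 47.5, 58)
log_fwhm = rng.uniform(3.0, 3.85, 58)            # log FWHM(Hbeta) in km/s
log_edd = 0.5 * log_L - 2.0 * log_fwhm - 17.0 + rng.normal(0, 0.05, 58)

# Least-squares fit: log Edd = c + b*log L + a*log FWHM.
A = np.column_stack([np.ones(58), log_L, log_fwhm])
(c, b, a), *_ = np.linalg.lstsq(A, log_edd, rcond=None)

# Normalizing the log L coefficient to 1 (the form used in the text),
# the FWHM coefficient becomes a/b, expected to be -4 by definition.
print(f"a/b = {a / b:.2f} (expected -4)")
```

Deviations of the recovered ratio from -4 in the real sample are what the quoted $25\%$ variation measures.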
However, this result must be tested with the inclusion of more objects. In our
sample, the highest $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ values are always
associated with the highest luminosities, largest black hole masses and
highest redshifts, which is an artifact of the flux-limited sample. To verify
the Eddington ratio as the main driver, one should consider samples reducing
flux-limit and small-number biases, for example including low accretors at
high redshift, or enlarging the sample at low–z. In addition, our sample does
not include sources with FWHMHβ $>$ 7000 km s$^{-1}$, which usually show weak or
negligible Fe ii contribution. Hence, new sources with CaT-Fe ii estimates
are required to confirm the current results and to certify the Eddington ratio
as the driving mechanism.
### 6.2 Is the Eddington ratio the driver of the Baldwin effect?
The driver of the Baldwin effect is still under discussion. The most accepted
explanation for this effect is that high-luminosity AGNs have a soft ionizing
continuum, so the number of ionizing photons available for the emission-line
excitation decreases. This is supported by the fact that the spectral index
between the UV (2500Å) and X-ray continuum (2 keV) increases with luminosity
(Baskin & Laor, 2004; Shields, 2007). Thus the UV bump will be shifted to
longer wavelengths provoking a steeper EW–$L$ relation as a function of the
ionization potential. Metallicity also has an important role (Korista et al.,
1997), due to the correlation with the black hole mass and luminosity (Hamann
& Ferland, 1993a, 1999). An increment in the metallicity reduces the
equivalent width of the emission lines.
In our analysis only low-ionization lines are considered, therefore we expect
a weak relation between the EW and the luminosity. The values of the slopes are
around zero, $-0.1<\alpha<0.1$, in all the correlations, as predicted by
Dietrich et al. (2002), and the correlation coefficients are below the
significance level considered, except in the Fe ii/CaT-luminosity correlations,
although the bootstrap results predict a $\sim 50\%$ probability of detecting a
false positive in this case. Therefore, the statistical significance of the
EW–$L$ relations is called into question.
In the correlations where the Eddington ratio is involved, the slope is
steeper than in the luminosity case ($|\alpha|>0.3$). The same effect is found
for C iv$\lambda 1549$, a high-ionization emission line. Considering the
equivalent width of C iv$\lambda 1549$, luminosities and Eddington ratios
reported by Baskin & Laor (2004), we found a stronger correlation and higher
slope ($\rho=-0.5,\,\alpha=-0.3\pm 0.06$) in the relation
CIV-$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ than for the correlation with the
luminosity ($\rho=-0.04\pm 0.05,\,\alpha=-0.05\pm 0.09$). Additionally, the
bootstrap results predict the smallest errors and a low probability of
detecting a false positive in the EW-$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ correlations.
These results suggest that the Eddington ratio has more relevance than the
luminosity (Baskin & Laor, 2004; Bachev et al., 2004; Dong et al., 2009) and
thus the behavior of the equivalent width of low– and high–ionization lines is
driven by the Eddington ratio.
We can probe the role of the Eddington ratio in an independent way through
a multivariate linear regression fit of the form EW $\propto L_{\rm
opt}+a{\rm FWHM}$. For H$\beta$ we obtain $a=-2.5\pm 1.4$ ($\sigma_{\rm
rms}\sim 0.194$ dex), while for EWCaT $a=-3.8\pm 1.9$ ($\sigma_{\rm rms}\sim
0.308$ dex). In the latter case, the slope is close to the expected
value ($a=-4$), which again highlights the strong correlation between CaT and
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$. At least in our sample, the CaT is a
better proxy for the Eddington ratio than Fe ii, although there is a $50\%$
probability of detecting a false positive correlation, as the bootstrap results
point out.
A novel result from the PCA is the relevance of the metallicity expressed as
the ratio Fe ii/CaT (Sec. 6.3). According to Korista et al. (1998), the
metallicity has a secondary role in the Baldwin effect, so we included this
parameter in the multivariate linear regression fit: EW $\propto L_{\rm
opt}+a{\rm FWHM_{H\beta}}+b{\rm{FeII/CaII}}$. There is no improvement for the
H$\beta$ correlation ($a=-2.4\pm 1.6,\,b=-0.786\pm 1.4,\sigma_{\rm rms}\sim
0.193$), while for CaT the uncertainties are high ($a=-4.8\pm
4.1,\,b=-9.705\pm 6.35,\sigma_{\rm rms}\sim 0.279$). Therefore, the Eddington
ratio plays a more important role in the Baldwin effect than the metallicity,
when the latter is expressed as the Fe ii/CaT ratio.
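The multivariate fit above can be sketched as follows. This is a minimal illustration on synthetic numbers: only the functional form EW $\propto L_{\rm opt}+a{\rm FWHM}$ comes from the text, and `np.linalg.lstsq` stands in for whatever fitting routine was actually used.

```python
import numpy as np

# Synthetic data (NOT the paper's measurements), mimicking a fit of the form
# log EW = c + alpha * log L_opt + a * log FWHM.
rng = np.random.default_rng(0)
n = 30
logL = rng.uniform(43, 47, n)          # optical luminosity [log erg/s]
logFWHM = rng.uniform(2.8, 3.9, n)     # H-beta FWHM [log km/s]
logEW = 1.0 + 0.05 * logL - 2.5 * logFWHM + rng.normal(0, 0.05, n)

# Ordinary least squares via np.linalg.lstsq
X = np.column_stack([np.ones(n), logL, logFWHM])
coef, *_ = np.linalg.lstsq(X, logEW, rcond=None)
c, alpha, a = coef
rms = np.std(logEW - X @ coef)         # scatter of the fit in dex
```

The slope $a$ on the FWHM term is the quantity compared against the expected value $a=-4$ in the text; the residual scatter plays the role of $\sigma_{\rm rms}$.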
### 6.3 Implication for the chemical evolution
The relative abundance of iron with respect to the $\alpha$–elements has been
used as a proxy for the chemical abundance in AGN (see Hamann & Ferland, 1992,
for a review). The $\alpha$–elements (O, Mg, Si, Ca and Ti) are predominantly
produced by Type II supernovae (SNe) after the explosion of massive stars (7
M⊙ $<M_{\star}<100$ M⊙) on timescales of $10^{7}$ yr, while Fe is mostly
produced by Type Ia SNe from white dwarf progenitors on longer timescales
$\sim 1$ Gyr (Hamann & Ferland, 1993a). Depending on the host galaxy type, the
time delay between the manufacturing timescales varies from 0.3 to 1 Gyr for
massive elliptical and Milky Way-like galaxies, respectively (Matteucci &
Recchi, 2001). Thus, the ratio Fe/$\alpha$ can be used as a clock for
constraining the star formation, the metal content and the age of the AGN
(Matteucci, 1994; Hamann & Ferland, 1992). The UV Fe ii and Mg ii$\lambda
2800$ have been widely used for this purpose since the UV spectrum is
accessible in a wide redshift range, up to $z\sim 7.5$ (e.g. Dietrich et al.,
2003; Verner et al., 2009; Dong et al., 2011; De Rosa et al., 2014; Sameshima
et al., 2017; Shin et al., 2019; Onoue et al., 2020; Sarkar et al., 2020, and
references therein). However, the Fe ii/Mg ii flux ratio does not show a
significant redshift evolution suggesting a large number of Type Ia SNe (Onoue
et al., 2020) or AGN being chemically mature also at high–$z$ (Shin et al.,
2019).
The optical Fe ii and CaT have similar ionization potentials and both are
emitted by the same region in the BLR, although the CaT region is seemingly
more extended (Panda, 2020). Assuming that CaT scales with the rest of the
$\alpha$-elements and that the ratio Fe ii/CaT traces the abundance of iron over
calcium, we can use the ratio Fe ii/CaT as a metal estimator. Figure 8 shows
the distribution of the ratio Fe ii/CaT as a function of the redshift.
Dividing the sample at $z=0.8$, which basically separates the Persson and
Marinello et al. samples from the HE-sample (Martínez-Aldama et al. sample),
we find that the low-redshift sample has a median Fe ii/CaT ratio of $\sim 5.8$,
while the intermediate-redshift sample shows Fe ii/CaT$\sim 3.0$. A two-sample
Kolmogorov-Smirnov test yields a statistic of 0.489 with a probability of
$p_{KS}\sim 0.001$, meaning that the two samples are not drawn from the same
distribution. The higher Fe ii/CaT ratio at low redshift suggests some form of
chemical evolution. Our sample reaches a maximum redshift of $z\sim 1.7$, just
after the maximum star formation peak (Madau & Dickinson, 2014). Thus, the Fe
ii/CaT ratio will be lowered by the effect of a recent starburst enhancing the
$\alpha$–elements with respect to iron at intermediate redshifts (Martínez-
Aldama et al., 2015a). Surprisingly, the Fe ii/CaT ratio also has a mild
correlation with the PC1 (Fig. 7), suggesting that the metal content has a
relevant role in governing the properties of our sample.
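The two-sample test described above can be reproduced schematically. The Fe ii/CaT values below are synthetic, with medians chosen to mimic the $\sim 5.8$ and $\sim 3.0$ reported for the two subsamples; only the use of a two-sample Kolmogorov-Smirnov test comes from the text.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic Fe II/CaT values (NOT the paper's data): a low-z (z < 0.8)
# subsample centered near 5.8 and an intermediate-z one centered near 3.0.
rng = np.random.default_rng(1)
feiicat_lowz = rng.normal(5.8, 1.0, 24)
feiicat_midz = rng.normal(3.0, 1.0, 14)

stat, p_ks = ks_2samp(feiicat_lowz, feiicat_midz)
# A small p_ks rejects the hypothesis that both subsamples are drawn
# from the same parent distribution, as concluded in the text.
```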
Hamann & Ferland (1993a) found a positive relation between the metallicity,
black hole mass and luminosity; therefore the highest metallicity AGN might
also be the most massive, as the last row of Fig. 4 shows. An exception
to the Hamann & Ferland (1993a) results is the Narrow-Line Seyfert 1 (NLSy1)
galaxies, which exhibit a high NV/CIV flux ratio (an alternative proxy of the
metal content) despite their low luminosity (Shemmer et al., 2004). In our
sample a clear NLSy1 is the object PHL 1092 (Marinello et al., 2020), which
shows Fe ii/CaT$\sim 3.1$, close to the mean Fe ii/CaT value of the
high-redshift sample, putting this source within the regime of high metal
content. Previous studies indicate that NLSy1 show a deviation from the
NV/CIV-$L$ and -$M\mathrm{{}_{BH}}$ relations (Shemmer & Netzer, 2002); in our
sample we cannot confirm these results using the ratio Fe ii/CaT, since the
scatter for a fixed $L$ or $M\mathrm{{}_{BH}}$ is too large.
Based on the flux ratio NV/CIV, Shemmer et al. (2004) found a positive
correlation between the abundance ($Z$) and the Eddington ratio. Our sample
shows an anti-correlation between $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ and
the ratio Fe ii/CaT with a Spearman rank coefficient of $\rho\sim 0.554$ and a
significance above $99.9\%$. The bootstrap results prove the reliability of
this correlation, with a probability of less than $11\%$ of detecting a false
positive trend. Considering that it is very likely that a recent starburst
concomitantly increases $Z$ and lowers Fe ii/CaT (see for example Bizyaev et
al., 2019), we expect that the high Eddington ratio sources in our sample are
associated with high metal content and/or low Fe/$\alpha$ values. Similar to
the Baldwin effect, the relation between the metal content estimator and the
Eddington ratio is stronger than with luminosity, black hole mass or redshift
(Shemmer et al., 2004; Dong et al., 2011; Sameshima et al., 2017; Shin et al.,
2019). Conversely, the correlation between the Eddington ratio and Fe ii/Mg ii
remains unclear; depending on the sample considered there is a positive (Shin
et al., 2019; Sameshima et al., 2017; Onoue et al., 2020) or null correlation
(Sarkar et al., 2020). Since Fe ii and Mg ii$\lambda 2800$ are affected by
non-abundance parameters such as the density or the microturbulence, the ratio
Fe ii/Mg ii might be biased (Sameshima et al., 2017; Shin et al., 2019).
After correcting for these factors, the Fe ii/Mg
ii-$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ correlation roughly remains positive or
disappears.
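A minimal sketch of such a correlation-plus-bootstrap check, on synthetic data, is shown below. The shuffle-based false-positive estimate is one plausible reading of the bootstrap test used in the text, not its actual implementation, and all numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic data (NOT the paper's measurements): Fe II/CaT built to
# anti-correlate with log(L_bol/L_Edd), plus Gaussian scatter.
rng = np.random.default_rng(2)
n = 35
edd = rng.uniform(-1.5, 0.5, n)                      # log L_bol/L_Edd
feiicat = 4.0 - 1.5 * edd + rng.normal(0, 1.0, n)

rho, p_val = spearmanr(edd, feiicat)

# Shuffle one axis many times and count how often the shuffled |rho|
# exceeds the observed one: a permutation false-positive rate.
n_trials = 2000
hits = sum(abs(spearmanr(rng.permutation(edd), feiicat)[0]) >= abs(rho)
           for _ in range(n_trials))
false_positive_rate = hits / n_trials
```

A low `false_positive_rate` plays the role of the "probability of detecting a false positive trend" quoted in the text.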
Under the assumption that the Fe ii/CaT flux ratio is a first-order proxy of the
[Fe/Ca] abundances, we found that the behavior shown by the Fe ii/CaT flux
ratio is in agreement with normal chemical evolution (Hamann & Ferland,
1993b, 1999), where the main enrichment occurs in the early epochs. Our
results also support that the main Fe ii enrichment occurs 1-2 Gyr after the first
starburst (Hamann & Ferland, 1993b, 1999) and also suggest that strong Fe
ii could be associated with a second starburst. However, the overabundance of
Fe ii depends on the SNe Ia lifetime and the star formation epoch (Sameshima
et al., 2020). In order to confirm these results, models incorporating chemical
evolution and [Fe/Ca] abundances are required. In addition, we must explore
the dependence on non-abundance parameters, which could modify the relations of
Figs. 3 and 4, or the effect of the Baldwin effect on the abundance
determination, as Sameshima et al. (2020) have tested for Fe ii/Mg ii.
Enlarging the sample is one of the main challenges. To observe the CaT in
sources with $z>2$, near-infrared spectrometers with higher sensitivity are
required. However, since the near- and mid-infrared spectral
regions are strongly affected by atmospheric telluric bands, some redshift
ranges will remain inaccessible from ground-based telescopes. The most
attractive possibility to study the Fe ii/CaT ratio at larger redshifts is
offered by upcoming space observatories, such as the Near InfraRed Spectrograph
(NIRSpec) on the James Webb Space Telescope (JWST).
Figure 8: Left panel: Fe ii/CaT distribution as a function of redshift in log-
scale. Green symbols correspond to sources with $z<0.8$, the rest of the
sources are marked with blue symbols. Right panel: Fe ii/CaT distribution,
colors are the same as the left panel. Vertical lines mark the median redshift
for the low- and high-redshift subsamples.
## 7 Conclusions
We performed a detailed analysis of the observational correlations present in
our CaT-Fe ii sample, together with a bootstrap analysis to assess the
statistical reliability of the correlations. Through a Principal Component
Analysis, we identified the primary driver of our sample. We could not find any
redshift dependence above the $2\sigma$ confidence level. Since our sample is
flux-limited, the presented analysis must be confirmed by larger samples in
the future. The presented analysis shows the following:
* •
The correlations with luminosity, black hole mass and Eddington ratio are
stronger for CaT than for Fe ii. This suggests that CaT is a better proxy for the
Eddington ratio than Fe ii. A potential application of this result could be,
for example, the correction of the Radius-Luminosity relation for effects
dependent on the accretion rate. However, the bootstrap analysis provides a
probability of $50\%$ of detecting a false positive relation. Therefore a more
complete sample is needed to confirm this result.
* •
The EWHβ correlates negatively with luminosity (Baldwin effect), while the
EWCaT shows a positive correlation. This stresses the different nature of both
low-ionization emission lines. In general, the correlations with the Eddington
ratio are stronger, more reliable and show the smallest errors according to the
bootstrap analysis. This supports the Eddington ratio as the driver of the
Baldwin effect.
* •
We performed a principal component analysis (PCA) to deduce the driver of the
RCaT-RFeII relation observed in our sample. We confirm that the results of the
PCA depend on the selection of the sample and the chosen quantities. We
consider only the directly observable variables (FWHMs, luminosity and EWs) in
the final PCA and later correlate the first four eigenvectors with the derived
variables (bolometric luminosity, black hole mass, Eddington ratio, the ratios
RFeII and RCaT, and the metallicity indicator Fe ii/CaT). The dominant
eigenvector is primarily driven by the combination of black hole mass and
luminosity, which in turn is reflected in the strong correlation of the first
eigenvector with the Eddington ratio. We also find a notable correlation
of the primary eigenvector with the metallicity tracer, Fe ii/CaT.
* •
Combining the PCA results and the observational correlation, we conclude that
luminosity, black hole mass, and Eddington ratio are the main drivers of our
sample, however, the correlations are better described by the Eddington ratio,
setting this parameter one step ahead in comparison to the other parameters. A
larger, more complete sample spanning a larger redshift range including a wide
variety of AGN is needed to assert the driver(s) of the CaT-Fe ii
correlations.
* •
We found a significant decrease of the Fe ii/CaT ratio as a function
of redshift, suggesting the effect of a recent starburst which enhanced the
$\alpha$-elements with respect to iron at intermediate redshift. Thus, the Fe
ii/CaT ratio could be used to map the metal content and the star formation in AGN. The
Fe ii/CaT ratio is also highlighted by the PCA, pointing out the relevance of
the metal content in our sample. The negative correlation with the Eddington
ratio, corroborated by the bootstrap analysis, supports the Fe ii/CaT as a metal
indicator instead of the typical Fe ii/Mg ii$\lambda 2800$ ratio used for this
purpose. However, a more complete sample and sources at larger redshift are
needed to verify these results.
## Acknowledgements
We are grateful to the anonymous referee for her/his comments and suggestions
that helped to improve the manuscript. The project was partially supported by
the Polish Funding Agency National Science Centre, project 2017/26/A/ST9/00756
(MAESTRO 9) and MNiSW grant DIR/WK/2018/12. DD acknowledges support from grant
PAPIIT UNAM, 113719.
## References
* Abdi & Williams (2010) Abdi, H., & Williams, L. J. 2010, WIREs Computational Statistics, 2, 433, doi: 10.1002/wics.101
* Bachev et al. (2004) Bachev, R., Marziani, P., Sulentic, J. W., et al. 2004, The Astrophysical Journal, 617, 171, doi: 10.1086/425210
* Baldwin (1977) Baldwin, J. A. 1977, ApJ, 214, 679, doi: 10.1086/155294
* Baldwin et al. (1978) Baldwin, J. A., Burke, W. L., Gaskell, C. M., & Wampler, E. J. 1978, Nature, 273, 431, doi: 10.1038/273431a0
* Baldwin et al. (2004) Baldwin, J. A., Ferland, G. J., Korista, K. T., Hamann, F., & LaCluyzé, A. 2004, ApJ, 615, 610, doi: 10.1086/424683
* Baskin & Laor (2004) Baskin, A., & Laor, A. 2004, Monthly Notices of the Royal Astronomical Society, 350, L31, doi: 10.1111/j.1365-2966.2004.07833.x
* Bentz et al. (2013) Bentz, M. C., Denney, K. D., Grier, C. J., et al. 2013, ApJ, 767, 149, doi: 10.1088/0004-637X/767/2/149
* Bessell (1990) Bessell, M. S. 1990, PASP, 102, 1181, doi: 10.1086/132749
* Bizyaev et al. (2019) Bizyaev, D., Chen, Y.-M., Shi, Y., et al. 2019, ApJ, 882, 145, doi: 10.3847/1538-4357/ab3406
* Boroson & Green (1992) Boroson, T. A., & Green, R. F. 1992, ApJS, 80, 109, doi: 10.1086/191661
* Collin & Joly (2000) Collin, S., & Joly, M. 2000, New A Rev., 44, 531, doi: 10.1016/S1387-6473(00)00093-2
* Collin et al. (2006) Collin, S., Kawaguchi, T., Peterson, B. M., & Vestergaard, M. 2006, A&A, 456, 75, doi: 10.1051/0004-6361:20064878
* Collin-Souffrin et al. (1988) Collin-Souffrin, S., Dyson, J. E., McDowell, J. C., & Perry, J. J. 1988, MNRAS, 232, 539, doi: 10.1093/mnras/232.3.539
* de Bruyn & Sargent (1978) de Bruyn, A. G., & Sargent, W. L. W. 1978, AJ, 83, 1257, doi: 10.1086/112320
* De Rosa et al. (2014) De Rosa, G., Venemans, B. P., Decarli, R., et al. 2014, ApJ, 790, 145, doi: 10.1088/0004-637X/790/2/145
* Devereux (2018) Devereux, N. 2018, MNRAS, 473, 2930, doi: 10.1093/mnras/stx2537
* Dietrich et al. (2003) Dietrich, M., Hamann, F., Appenzeller, I., & Vestergaard, M. 2003, ApJ, 596, 817, doi: 10.1086/378045
* Dietrich et al. (2002) Dietrich, M., Hamann, F., Shields, J. C., et al. 2002, ApJ, 581, 912, doi: 10.1086/344410
* Dong et al. (2011) Dong, X.-B., Wang, J.-G., Ho, L. C., et al. 2011, ApJ, 736, 86, doi: 10.1088/0004-637X/736/2/86
* Dong et al. (2009) Dong, X.-B., Wang, T.-G., Wang, J.-G., et al. 2009, The Astrophysical Journal, 703, L1, doi: 10.1088/0004-637x/703/1/l1
* Du & Wang (2019) Du, P., & Wang, J.-M. 2019, ApJ, 886, 42, doi: 10.3847/1538-4357/ab4908
* Dultzin-Hacyan et al. (1999) Dultzin-Hacyan, D., Taniguchi, Y., & Uranga, L. 1999, in Astronomical Society of the Pacific Conference Series, Vol. 175, Structure and Kinematics of Quasar Broad Line Regions, ed. C. M. Gaskell, W. N. Brandt, M. Dietrich, D. Dultzin-Hacyan, & M. Eracleous, 303
* Efron & Tibshirani (1993) Efron, B., & Tibshirani, R. J. 1993, An Introduction to the Bootstrap, Monographs on Statistics and Applied Probability No. 57 (Boca Raton, Florida, USA: Chapman & Hall/CRC)
* Ferland & Persson (1989) Ferland, G. J., & Persson, S. E. 1989, ApJ, 347, 656, doi: 10.1086/168156
* Ferland et al. (2017) Ferland, G. J., Chatzikos, M., Guzmán, F., et al. 2017, Rev. Mexicana Astron. Astrofis., 53, 385. https://arxiv.org/abs/1705.10877
* Garcia-Rissmann et al. (2012) Garcia-Rissmann, A., Rodríguez-Ardila, A., Sigut, T. A. A., & Pradhan, A. K. 2012, ApJ, 751, 7, doi: 10.1088/0004-637X/751/1/7
* Garcia-Rissmann et al. (2005) Garcia-Rissmann, A., Vega, L. R., Asari, N. V., et al. 2005, MNRAS, 359, 765, doi: 10.1111/j.1365-2966.2005.08957.x
* Green et al. (2001) Green, P. J., Forster, K., & Kuraszkiewicz, J. 2001, The Astrophysical Journal, 556, 727, doi: 10.1086/321600
* Hamann & Ferland (1992) Hamann, F., & Ferland, G. 1992, ApJ, 391, L53, doi: 10.1086/186397
* Hamann & Ferland (1993a) —. 1993a, ApJ, 418, 11, doi: 10.1086/173366
* Hamann & Ferland (1993b) —. 1993b, ApJ, 418, 11, doi: 10.1086/173366
* Hamann & Ferland (1999) —. 1999, ARA&A, 37, 487, doi: 10.1146/annurev.astro.37.1.487
* Horne et al. (2020) Horne, K., De Rosa, G., Peterson, B. M., et al. 2020, arXiv e-prints, arXiv:2003.01448. https://arxiv.org/abs/2003.01448
* Hunter (2007) Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Jolliffe (2011) Jolliffe, I. 2011, Principal Component Analysis, ed. M. Lovric (Berlin, Heidelberg: Springer Berlin Heidelberg), 1094–1096, doi: 10.1007/978-3-642-04898-2_455
* Joly (1987) Joly, M. 1987, A&A, 184, 33
* Joly (1989) —. 1989, A&A, 208, 47
* Kaiser (1961) Kaiser, H. F. 1961, British Journal of Statistical Psychology, 14, 1, doi: 10.1111/j.2044-8317.1961.tb00061.x
* Korista et al. (1998) Korista, K., Baldwin, J., & Ferland, G. 1998, ApJ, 507, 24, doi: 10.1086/306321
* Korista et al. (1997) Korista, K., Baldwin, J., Ferland, G., & Verner, D. 1997, ApJS, 108, 401, doi: 10.1086/312966
* Koski (1978) Koski, A. T. 1978, ApJ, 223, 56, doi: 10.1086/156235
* Krawczyk et al. (2013) Krawczyk, C. M., Richards, G. T., Mehta, S. S., et al. 2013, ApJS, 206, 4, doi: 10.1088/0067-0049/206/1/4
* Kunth & Sargent (1979) Kunth, D., & Sargent, W. L. W. 1979, A&A, 76, 50
* Kuraszkiewicz et al. (2009) Kuraszkiewicz, J., Wilkes, B. J., Schmidt, G., et al. 2009, ApJ, 692, 1180, doi: 10.1088/0004-637X/692/2/1180
* Lusso & Risaliti (2016) Lusso, E., & Risaliti, G. 2016, ApJ, 819, 154, doi: 10.3847/0004-637X/819/2/154
* Madau & Dickinson (2014) Madau, P., & Dickinson, M. 2014, Annual Review of Astronomy and Astrophysics, 52, 415, doi: 10.1146/annurev-astro-081811-125615
* Marconi et al. (2004) Marconi, A., Risaliti, G., Gilli, R., et al. 2004, MNRAS, 351, 169, doi: 10.1111/j.1365-2966.2004.07765.x
* Marinello et al. (2016) Marinello, M., Rodríguez-Ardila, A., Garcia-Rissmann, A., Sigut, T. A. A., & Pradhan, A. K. 2016, ApJ, 820, 116, doi: 10.3847/0004-637X/820/2/116
* Marinello et al. (2020) Marinello, M., Rodríguez-Ardila, A., Marziani, P., Sigut, A., & Pradhan, A. 2020, MNRAS, 494, 4187, doi: 10.1093/mnras/staa934
* Martínez-Aldama et al. (2018) Martínez-Aldama, M. L., Del Olmo, A., Marziani, P., et al. 2018, Frontiers in Astronomy and Space Sciences, 4, 65, doi: 10.3389/fspas.2017.00065
* Martínez-Aldama et al. (2015a) Martínez-Aldama, M. L., Dultzin, D., Marziani, P., et al. 2015a, ApJS, 217, 3, doi: 10.1088/0067-0049/217/1/3
* Martínez-Aldama et al. (2015b) Martínez-Aldama, M. L., Marziani, P., Dultzin, D., et al. 2015b, Journal of Astrophysics and Astronomy, 36, 457, doi: 10.1007/s12036-015-9354-9
* Martínez-Aldama et al. (2020) Martínez-Aldama, M. L., Zajaček, M., Czerny, B., & Panda, S. 2020, ApJ, 903, 86, doi: 10.3847/1538-4357/abb6f8
* Marziani et al. (2014) Marziani, P., Martínez-Aldama, M. L., Dultzin, D., & Sulentic, J. W. 2014, Astronomical Review, 9, 29, doi: 10.1080/21672857.2014.11519729
* Marziani et al. (2013) Marziani, P., Sulentic, J. W., Plauchu-Frayn, I., & del Olmo, A. 2013, A&A, 555, A89, doi: 10.1051/0004-6361/201321374
* Marziani et al. (2003) Marziani, P., Zamanov, R. K., Sulentic, J. W., & Calvani, M. 2003, MNRAS, 345, 1133, doi: 10.1046/j.1365-2966.2003.07033.x
* Marziani et al. (2018) Marziani, P., Dultzin, D., Sulentic, J. W., et al. 2018, Frontiers in Astronomy and Space Sciences, 5, 6, doi: 10.3389/fspas.2018.00006
* Marziani et al. (2019) Marziani, P., Bon, E., Bon, N., et al. 2019, Atoms, 7, 18, doi: 10.3390/atoms7010018
* Matteucci (1994) Matteucci, F. 1994, A&A, 288, 57
* Matteucci & Recchi (2001) Matteucci, F., & Recchi, S. 2001, The Astrophysical Journal, 558, 351, doi: 10.1086/322472
* Mejía-Restrepo et al. (2018) Mejía-Restrepo, J. E., Lira, P., Netzer, H., Trakhtenbrot, B., & Capellupo, D. M. 2018, Nature Astronomy, 2, 63, doi: 10.1038/s41550-017-0305-z
* Negrete et al. (2018) Negrete, C. A., Dultzin, D., Marziani, P., & et al. 2018, in preparation
* Negrete et al. (2014) Negrete, C. A., Dultzin, D., Marziani, P., & Sulentic, J. W. 2014, Advances in Space Research, 54, 1355, doi: 10.1016/j.asr.2013.11.037
* Nemmen & Brotherton (2010) Nemmen, R. S., & Brotherton, M. S. 2010, MNRAS, 408, 1598, doi: 10.1111/j.1365-2966.2010.17224.x
* Netzer (2019) Netzer, H. 2019, MNRAS, 488, 5185, doi: 10.1093/mnras/stz2016
* Oke & Shields (1976) Oke, J. B., & Shields, G. A. 1976, ApJ, 207, 713, doi: 10.1086/154540
* Oliphant (2015) Oliphant, T. 2015, NumPy: A guide to NumPy, 2nd edn., USA: CreateSpace Independent Publishing Platform. http://www.numpy.org/
* Onoue et al. (2020) Onoue, M., Bañados, E., Mazzucchelli, C., et al. 2020, ApJ, 898, 105, doi: 10.3847/1538-4357/aba193
* Osmer et al. (1994) Osmer, P. S., Porter, A. C., & Green, R. F. 1994, ApJ, 436, 678, doi: 10.1086/174942
* Osmer & Shields (1999) Osmer, P. S., & Shields, J. C. 1999, in Astronomical Society of the Pacific Conference Series, Vol. 162, Quasars and Cosmology, ed. G. Ferland & J. Baldwin, 235. https://arxiv.org/abs/astro-ph/9811459
* Osterbrock (1976) Osterbrock, D. E. 1976, ApJ, 203, 329, doi: 10.1086/154083
* Osterbrock & Ferland (2006) Osterbrock, D. E., & Ferland, G. J. 2006, Astrophysics of gaseous nebulae and active galactic nuclei
* Osterbrock & Phillips (1977) Osterbrock, D. E., & Phillips, M. M. 1977, PASP, 89, 251, doi: 10.1086/130110
* Panda (2020) Panda, S. 2020, arXiv e-prints, arXiv:2004.13113. https://arxiv.org/abs/2004.13113
* Panda et al. (2018) Panda, S., Czerny, B., Adhikari, T. P., et al. 2018, ApJ, 866, 115, doi: 10.3847/1538-4357/aae209
* Panda et al. (2020a) Panda, S., Martínez-Aldama, M. L., Marinello, M., et al. 2020a, ApJ, 902, 76, doi: 10.3847/1538-4357/abb5b8
* Panda et al. (2019) Panda, S., Marziani, P., & Czerny, B. 2019, ApJ, 882, 79, doi: 10.3847/1538-4357/ab3292
* Panda et al. (2020b) —. 2020b, Contributions of the Astronomical Observatory Skalnate Pleso, 50, 293, doi: 10.31577/caosp.2020.50.1.293
* Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., et al. 2011, Journal of Machine Learning Research, 12, 2825
* Peres-Neto et al. (2005) Peres-Neto, P. R., Jackson, D. A., & Somers, K. M. 2005, Computational Statistics & Data Analysis, 49, 974 , doi: https://doi.org/10.1016/j.csda.2004.06.015
* Persson (1988) Persson, S. E. 1988, ApJ, 330, 751, doi: 10.1086/166509
* Pogge & Peterson (1992) Pogge, R. W., & Peterson, B. M. 1992, AJ, 103, 1084, doi: 10.1086/116127
* Punsly et al. (2020) Punsly, B., Marziani, P., Berton, M., & Kharb, P. 2020, ApJ, 903, 44, doi: 10.3847/1538-4357/abb950
* Rakić et al. (2017) Rakić, N., La Mura, G., Ilić, D., et al. 2017, A&A, 603, A49, doi: 10.1051/0004-6361/201630085
* Richards et al. (2006) Richards, G. T., Lacy, M., Storrie-Lombardi, L. J., et al. 2006, ApJS, 166, 470, doi: 10.1086/506525
* Rodriguez-Ardila et al. (2012) Rodriguez-Ardila, A., Garcia Rissmann, A., Sigut, A. A., & Pradhan, A. K. 2012, in Proceedings of Nuclei of Seyfert galaxies and QSOs - Central engine & conditions of star formation (Seyfert 2012). 6-8 November, 12
* Rodríguez-Ardila et al. (2002) Rodríguez-Ardila, A., Viegas, S. M., Pastoriza, M. G., & Prato, L. 2002, ApJ, 565, 140, doi: 10.1086/324598
* Runnoe et al. (2012a) Runnoe, J. C., Brotherton, M. S., & Shang, Z. 2012a, MNRAS, 426, 2677, doi: 10.1111/j.1365-2966.2012.21644.x
* Runnoe et al. (2012b) —. 2012b, MNRAS, 422, 478, doi: 10.1111/j.1365-2966.2012.20620.x
* Sameshima et al. (2017) Sameshima, H., Yoshii, Y., & Kawara, K. 2017, ApJ, 834, 203, doi: 10.3847/1538-4357/834/2/203
* Sameshima et al. (2020) Sameshima, H., Yoshii, Y., Matsunaga, N., et al. 2020, ApJ, 904, 162, doi: 10.3847/1538-4357/abc33b
* Sarkar et al. (2020) Sarkar, A., Ferland, G. J., Chatzikos, M., et al. 2020, arXiv e-prints, arXiv:2011.09007. https://arxiv.org/abs/2011.09007
* Schnorr-Müller et al. (2016) Schnorr-Müller, A., Davies, R. I., Korista, K. T., et al. 2016, MNRAS, 462, 3570, doi: 10.1093/mnras/stw1865
* Seabold & Perktold (2010) Seabold, S., & Perktold, J. 2010, in Proceedings of the 9th Python in Science Conference, ed. Stéfan van der Walt & Jarrod Millman, (Austin, TX), 92 – 96, doi: 10.25080/Majora-92bf1922-011
* Shemmer & Netzer (2002) Shemmer, O., & Netzer, H. 2002, ApJ, 567, L19, doi: 10.1086/339797
* Shemmer et al. (2004) Shemmer, O., Netzer, H., Maiolino, R., et al. 2004, ApJ, 614, 547, doi: 10.1086/423607
* Shen & Ho (2014) Shen, Y., & Ho, L. C. 2014, Nature, 513, 210, doi: 10.1038/nature13712
* Shields (2007) Shields, J. C. 2007, in Astronomical Society of the Pacific Conference Series, Vol. 373, The Central Engine of Active Galactic Nuclei, ed. L. C. Ho & J. W. Wang, 355. https://arxiv.org/abs/astro-ph/0612613
* Shin et al. (2019) Shin, J., Nagao, T., Woo, J.-H., & Le, H. A. N. 2019, ApJ, 874, 22, doi: 10.3847/1538-4357/ab05da
* Shin et al. (2013) Shin, J., Woo, J.-H., Nagao, T., & Kim, S. C. 2013, ApJ, 763, 58, doi: 10.1088/0004-637X/763/1/58
* Śniegowska et al. (2020) Śniegowska, M., Marziani, P., Czerny, B., et al. 2020, arXiv e-prints, arXiv:2009.14177. https://arxiv.org/abs/2009.14177
* Sulentic et al. (2000) Sulentic, J. W., Marziani, P., & Dultzin-Hacyan, D. 2000, ARA&A, 38, 521, doi: 10.1146/annurev.astro.38.1.521
* Taylor (2005) Taylor, M. B. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 347, Astronomical Data Analysis Software and Systems XIV, ed. P. Shopbell, M. Britton, & R. Ebert, (San Francisco, CA: ASP), 29
* Terlevich et al. (1990) Terlevich, E., Díaz, A. I., & Terlevich, R. 1990, Rev. Mexicana Astron. Astrofis., 21, 218
* Tripathi et al. (2020) Tripathi, S., McGrath, K. M., Gallo, L. C., et al. 2020, MNRAS, 499, 1266, doi: 10.1093/mnras/staa2817
* Vanden Berk et al. (2001) Vanden Berk, D. E., Richards, G. T., Bauer, A., et al. 2001, AJ, 122, 549, doi: 10.1086/321167
* Verner et al. (2009) Verner, E., Bruhweiler, F., Johansson, S., & Peterson, B. 2009, Physica Scripta, T134, 014006, doi: 10.1088/0031-8949/2009/t134/014006
* Verner et al. (1999) Verner, E. M., Verner, D. A., Korista, K. T., et al. 1999, ApJS, 120, 101, doi: 10.1086/313171
* Veron-Cetty & Veron (2010) Veron-Cetty, M. P., & Veron, P. 2010, VizieR Online Data Catalog, VII/258
* Vestergaard & Wilkes (2001) Vestergaard, M., & Wilkes, B. J. 2001, ApJS, 134, 1, doi: 10.1086/320357
* Wandel (1999) Wandel, A. 1999, ApJ, 527, 657, doi: 10.1086/308098
* Wang et al. (2005) Wang, J., Wei, J. Y., & He, X. T. 2005, A&A, 436, 417, doi: 10.1051/0004-6361:20042014
* Wildy et al. (2019) Wildy, C., Czerny, B., & Panda, S. 2019, A&A, 632, A41, doi: 10.1051/0004-6361/201935620
* Yu et al. (2019) Yu, L.-M., Bian, W.-H., Wang, C., Zhao, B.-X., & Ge, X. 2019, MNRAS, 488, 1519, doi: 10.1093/mnras/stz1766
* Zamfir et al. (2010) Zamfir, S., Sulentic, J. W., Marziani, P., & Dultzin, D. 2010, MNRAS, 403, 1759, doi: 10.1111/j.1365-2966.2009.16236.x
## Appendix A Tables
In this section, we summarize the observational properties of the full sample
employed in the analysis. The description of the columns is included in the
notes of each table.
Table A.1: Observational parameters
Object | $z$ | log $L_{\rm opt}$ | log $L_{\rm NIR}$ | RFeII | RCaT | FWHM H$\beta$ | FWHM OI | FWHM CaII | EW H$\beta$ | EW OI | EW CaII | EW FeII
---|---|---|---|---|---|---|---|---|---|---|---|---
 | | [erg s-1] | [erg s-1] | | | [km s-1] | [km s-1] | [km s-1] | [Å] | [Å] | [Å] | [Å]
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) | (12) | (13)
Persson (1988) Sample
IZw1$\dagger$ | 0.061 | 44.684 $\pm$ 0.048 | 44.694 $\pm$ 0.049 | 1.78 $\pm$ 0.05 | 0.51 $\pm$ 0.13 | 950 $\pm$ 100 | 600 $\pm$ 150 | 1150 $\pm$ 150 | 66 $\pm$ 5 | 13 $\pm$ 1 | 43 $\pm$ 4 | 119 $\pm$ 12
Mrk 42 | 0.025 | 43.299 $\pm$ 0.048 | 43.247 $\pm$ 0.049 | 1.07 $\pm$ 0.07 | 0.41 $\pm$ 0.14 | 500 $\pm$ 100 | 450 $\pm$ 150 | 800 $\pm$ 150 | 36 $\pm$ 2 | 13 $\pm$ 1 | 19 $\pm$ 2 | 40 $\pm$ 6
Mrk 478$\dagger$ | 0.077 | 44.685 $\pm$ 0.048 | 44.607 $\pm$ 0.049 | 0.93 $\pm$ 0.06 | 0.23 $\pm$ 0.07 | 1250 $\pm$ 100 | 1100 $\pm$ 150 | 2500 $\pm$ 200 | 78 $\pm$ 5 | 18 $\pm$ 2 | 28 $\pm$ 3 | 77 $\pm$ 10
II Zw 136 | 0.063 | 44.47 $\pm$ 0.048 | 44.424 $\pm$ 0.049 | 0.66 $\pm$ 0.05 | 0.17 $\pm$ 0.13 | 1850 $\pm$ 100 | 1200 $\pm$ 150 | 2300 $\pm$ 250 | 96 $\pm$ 7 | 25 $\pm$ 3 | 29 $\pm$ 6 | 68 $\pm$ 7
Mrk 231 | 0.044 | 44.456 $\pm$ 0.048 | 44.59 $\pm$ 0.049 | 2.46 $\pm$ 0.05 | 0.42 $\pm$ 0.19 | 5700 $\pm$ 100 | 3500 $\pm$ 150 | 1900 $\pm$ 250 | 50 $\pm$ 4 | 36 $\pm$ 5 | 53 $\pm$ 5 | 120 $\pm$ 10
3C 273 | 0.159 | 46.097 $\pm$ 0.048 | 45.833 $\pm$ 0.049 | 0.40 $\pm$ 0.05 | 0.12 $\pm$ 0.04 | 3800 $\pm$ 100 | 1900 $\pm$ 150 | 2250 $\pm$ 250 | 89 $\pm$ 6 | 22 $\pm$ 5 | 27 $\pm$ 5 | 40 $\pm$ 4
Mrk 6 | 0.019 | 43.556 $\pm$ 0.048 | 43.675 $\pm$ 0.049 | 0.83 $\pm$ 0.16 | 0.47 $\pm$ 0.43 | 2800 $\pm$ 200 | 1650 $\pm$ 200 | 1950 $\pm$ 200 | 29 $\pm$ 6 | 5 $\pm$ 2 | 15 $\pm$ 6 | 24 $\pm$ 7
Mrk 486 | 0.039 | 43.969 $\pm$ 0.048 | 43.892 $\pm$ 0.049 | 0.27 $\pm$ 0.06 | 0.07 $\pm$ 0.04 | 1450 $\pm$ 100 | 1450 $\pm$ 150 | 1300 $\pm$ 250 | 105 $\pm$ 5 | 23 $\pm$ 3 | 12 $\pm$ 6 | 31 $\pm$ 4
Mrk 1239$\dagger$ | 0.02 | 43.158 $\pm$ 0.048 | 44.2 $\pm$ 0.049 | 1.10 $\pm$ 0.07 | 0.11 $\pm$ 0.07 | 750 $\pm$ 100 | 950 $\pm$ 150 | 1550 $\pm$ 400 | 95 $\pm$ 5 | 25 $\pm$ 3 | 10 $\pm$ 5 | 103 $\pm$ 15
Mrk 766 | 0.013 | 43.468 $\pm$ 0.048 | 43.221 $\pm$ 0.049 | 0.68 $\pm$ 0.12 | 0.31 $\pm$ 0.25 | 700 $\pm$ 100 | 1050 $\pm$ 150 | 1450 $\pm$ 300 | 44 $\pm$ 3 | 12 $\pm$ 2 | 11 $\pm$ 5 | 29 $\pm$ 8
Zw 0033+45 | 0.047 | 44.352 $\pm$ 0.048 | 44.159 $\pm$ 0.049 | 0.53 $\pm$ 0.10 | 0.16 $\pm$ 0.08 | 2300 $\pm$ 200 | 2050 $\pm$ 200 | 2600 $\pm$ 400 | 110 $\pm$ 10 | 23 $\pm$ 4 | 27 $\pm$ 3 | 60 $\pm$ 12
Mrk 684 | 0.046 | 44.161 $\pm$ 0.048 | 44.2 $\pm$ 0.049 | 1.29 $\pm$ 0.07 | 0.16 $\pm$ 0.10 | 1300 $\pm$ 100 | 850 $\pm$ 150 | 850 $\pm$ 250 | 38 $\pm$ 4 | 5 $\pm$ 2 | 8 $\pm$ 4 | 51 $\pm$ 7
Mrk 335$\dagger$ | 0.026 | 43.974 $\pm$ 0.048 | 43.932 $\pm$ 0.049 | $\leq$0.417 | $\leq$0.025 | 1450 $\pm$ 100 | 1000 $\pm$ 150 | – | 86 $\pm$ 5 | 17 $\pm$ 2 | $\leq$5 | 40 $\pm$ 6
Mrk 376 | 0.056 | 44.367 $\pm$ 0.048 | 44.367 $\pm$ 0.049 | 0.62 $\pm$ 0.06 | 0.05 $\pm$ 0.03 | 5800 $\pm$ 100 | 4500 $\pm$ 200 | – | 101 $\pm$ 10 | 28 $\pm$ 6 | 10 $\pm$ 5 | 67 $\pm$ 7
Mrk 493$\dagger$ | 0.032 | 43.678 $\pm$ 0.048 | 43.356 $\pm$ 0.049 | 0.78 $\pm$ 0.06 | 0.21 $\pm$ 0.18 | 450 $\pm$ 100 | 450 $\pm$ 150 | 650 $\pm$ 300 | 41 $\pm$ 3 | 7 $\pm$ 1 | 11 $\pm$ 8 | 33 $\pm$ 4
Mrk 841 | 0.037 | 44.124 $\pm$ 0.048 | 44.045 $\pm$ 0.049 | $\leq$0.209 | $\leq$0.032 | 4950 $\pm$ 100 | 3300 $\pm$ 150 | – | 107 $\pm$ 6 | 21 $\pm$ 3 | 7 $\pm$ 0 | 25 $\pm$ 6
Ton 1542 | 0.063 | 44.207 $\pm$ 0.048 | 44.404 $\pm$ 0.049 | $\leq$0.363 | $\leq$0.05 | 3800 $\pm$ 100 | 2850 $\pm$ 250 | – | 114 $\pm$ 11 | 24 $\pm$ 3 | 8 $\pm$ 0 | 43 $\pm$ 4
VII Zw 118 | 0.079 | 44.697 $\pm$ 0.048 | 44.63 $\pm$ 0.049 | $\leq$0.447 | $\leq$0.076 | 3700 $\pm$ 100 | 2600 $\pm$ 300 | – | 90 $\pm$ 9 | 14 $\pm$ 2 | 5 $\pm$ 0 | 38 $\pm$ 4
Mrk 124 | 0.057 | 43.592 $\pm$ 0.048 | 44.073 $\pm$ 0.049 | $\leq$0.708 | $\leq$0.078 | 1050 $\pm$ 100 | 1050 $\pm$ 250 | – | 76 $\pm$ 8 | 20 $\pm$ 4 | 7 $\pm$ 0 | 55 $\pm$ 6
Mrk 9 | 0.04 | 44.155 $\pm$ 0.048 | 43.915 $\pm$ 0.049 | $\leq$0.398 | $\leq$0.036 | 3450 $\pm$ 100 | 2500 $\pm$ 250 | – | 104 $\pm$ 10 | 21 $\pm$ 2 | 7 $\pm$ 0 | 45 $\pm$ 5
NGC 7469 | 0.016 | 43.864 $\pm$ 0.048 | 43.863 $\pm$ 0.049 | $\leq$0.339 | $\leq$0.045 | 2700 $\pm$ 200 | 1750 $\pm$ 250 | – | 72 $\pm$ 7 | 11 $\pm$ 2 | 4 $\pm$ 0 | 25 $\pm$ 4
Akn 120 | 0.034 | 44.188 $\pm$ 0.048 | 44.62 $\pm$ 0.049 | $\leq$0.468 | $\leq$0.062 | 6300 $\pm$ 100 | 4900 $\pm$ 400 | – | 89 $\pm$ 5 | 22 $\pm$ 4 | 7 $\pm$ 0 | 42 $\pm$ 4
Mrk 352 | 0.014 | 43.037 $\pm$ 0.048 | 43.136 $\pm$ 0.049 | $\leq$0.219 | $\leq$0.048 | 3750 $\pm$ 100 | 3250 $\pm$ 300 | – | 83 $\pm$ 5 | 19 $\pm$ 4 | 6 $\pm$ 0 | 19 $\pm$ 4
Mrk 304 | 0.066 | 44.507 $\pm$ 0.048 | 44.416 $\pm$ 0.049 | $\leq$0.282 | $\leq$0.017 | 3300 $\pm$ 200 | 3300 $\pm$ 400 | – | 118 $\pm$ 6 | 11 $\pm$ 2 | 4 $\pm$ 0 | 36 $\pm$ 6
Mrk 509 | 0.034 | 44.508 $\pm$ 0.048 | 44.43 $\pm$ 0.049 | $\leq$0.129 | $\leq$0.041 | 6000 $\pm$ 200 | 2500 $\pm$ 200 | – | 125 $\pm$ 6 | 28 $\pm$ 3 | 10 $\pm$ 0 | 18 $\pm$ 3
Martínez-Aldama et al. (2015a, b) Sample |
HE 1349+0007 | 1.444 | 47.119 $\pm$ 0.047 | 46.887 $\pm$ 0.045 | 0.692 $\pm$ 0.143 | 0.234 $\pm$ 0.086 | 5027 $\pm$ 430 | 4580 $\pm$ 680 | 4530 $\pm$ 940 | 33.5 $\pm$ 4 | 19.9 $\pm$ 4 | 24.7 $\pm$ 7 | 18.5 $\pm$ 1
HE 1409+0101 | 1.65 | 47.629 $\pm$ 0.044 | 47.264 $\pm$ 0.048 | 1.549 $\pm$ 0.107 | 0.316 $\pm$ 0.066 | 4000 $\pm$ 160 | 3100 $\pm$ 310 | 3550 $\pm$ 500 | 26.8 $\pm$ 1 | 15.3 $\pm$ 2 | 19.9 $\pm$ 6 | 36.2 $\pm$ 2
HE 2349-3800 | 1.604 | 47.11 $\pm$ 0.044 | 46.933 $\pm$ 0.045 | 0.832 $\pm$ 0.057 | 0.288 $\pm$ 0.093 | 4000 $\pm$ 160 | 3480 $\pm$ 520 | 3520 $\pm$ 700 | 27.1 $\pm$ 1 | 17.6 $\pm$ 4 | 19.9 $\pm$ 6 | 19.9 $\pm$ 1
HE 2147-3212 | 1.543 | 47.163 $\pm$ 0.049 | 46.891 $\pm$ 0.045 | 1.479 $\pm$ 0.375 | 0.49 $\pm$ 0.079 | 4491 $\pm$ 660 | 4300 $\pm$ 860 | 3990 $\pm$ 150 | 28.4 $\pm$ 3 | 22.1 $\pm$ 6 | 47.4 $\pm$ 4 | 35 $\pm$ 4
HE 2202-2557 | 1.535 | 47.172 $\pm$ 0.048 | 46.585 $\pm$ 0.043 | 0.537 $\pm$ 0.062 | 0.117 $\pm$ 0.019 | 7000 $\pm$ 540 | 5810 $\pm$ 1060 | 5900 $\pm$ 330 | 27.4 $\pm$ 1 | 3.6 $\pm$ 1 | 13 $\pm$ 3 | 12.7 $\pm$ 1
HE 2340-4443 | 0.922 | 46.758 $\pm$ 0.046 | 46.674 $\pm$ 0.049 | 0.224 $\pm$ 0.021 | 0.089 $\pm$ 0.064 | 3200 $\pm$ 100 | 3430 $\pm$ 220 | 3190 $\pm$ 1700 | 77.5 $\pm$ 3 | 15.9 $\pm$ 1 | 14.8 $\pm$ 10 | 15.4 $\pm$ 1
HE 0248-3628 | 1.536 | 47.465 $\pm$ 0.045 | 47.222 $\pm$ 0.044 | 0.372 $\pm$ 0.034 | 0.251 $\pm$ 0.023 | 3800 $\pm$ 150 | 3490 $\pm$ 260 | 3990 $\pm$ 150 | 40.7 $\pm$ 2 | 12.2 $\pm$ 2 | 31.3 $\pm$ 2 | 13.6 $\pm$ 0.5
HE 2352-4010 | 1.58 | 47.727 $\pm$ 0.044 | 47.537 $\pm$ 0.05 | 0.447 $\pm$ 0.031 | 0.17 $\pm$ 0.023 | 2900 $\pm$ 90 | 1930 $\pm$ 110 | 3080 $\pm$ 180 | 45.3 $\pm$ 1 | 16.6 $\pm$ 2 | 21.7 $\pm$ 3 | 17.6 $\pm$ 0.3
HE 0035-2853 | 1.638 | 47.309 $\pm$ 0.048 | 47.177 $\pm$ 0.044 | 1.479 $\pm$ 0.17 | 0.447 $\pm$ 0.041 | 5141 $\pm$ 390 | 5000 $\pm$ 370 | 4540 $\pm$ 170 | 30.3 $\pm$ 1 | 9.4 $\pm$ 2 | 27.9 $\pm$ 2 | 40.7 $\pm$ 3
HE 0048-2804 | 0.847 | 46.032 $\pm$ 0.049 | 46.168 $\pm$ 0.044 | 0.617 $\pm$ 0.114 | 0.102 $\pm$ 0.075 | 5484 $\pm$ 470 | 4990 $\pm$ 270 | 5170 $\pm$ 2400 | 40.1 $\pm$ 4 | 18.4 $\pm$ 1 | 4.9 $\pm$ 3 | 22.3 $\pm$ 2
HE 0058-3231 | 1.582 | 47.336 $\pm$ 0.048 | 47.043 $\pm$ 0.055 | 0.589 $\pm$ 0.176 | 0.389 $\pm$ 0.09 | 5127 $\pm$ 160 | 4960 $\pm$ 100 | 4910 $\pm$ 760 | 55.4 $\pm$ 2 | 44.4 $\pm$ 8 | 59.6 $\pm$ 14 | 27.7 $\pm$ 2
HE 0203-4627 | 1.438 | 47.102 $\pm$ 0.048 | 46.941 $\pm$ 0.045 | 0.759 $\pm$ 0.192 | 0.479 $\pm$ 0.088 | 5486 $\pm$ 810 | 6000 $\pm$ 250 | 5960 $\pm$ 530 | 26.1 $\pm$ 3 | 13.6 $\pm$ 3 | 33.2 $\pm$ 3 | 14.4 $\pm$ 2
HE 0005-2355 | 1.412 | 47.161 $\pm$ 0.047 | 46.863 $\pm$ 0.045 | 0.933 $\pm$ 0.237 | 0.166 $\pm$ 0.107 | 4777 $\pm$ 710 | 3500 $\pm$ 820 | 4600 $\pm$ 1070 | 21.8 $\pm$ 2 | 7.8 $\pm$ 2 | 13.9 $\pm$ 8 | 17.2 $\pm$ 2
HE 0043-2300 | 1.54 | 47.409 $\pm$ 0.05 | 47.257 $\pm$ 0.045 | 0.316 $\pm$ 0.044 | 0.214 $\pm$ 0.025 | 3511 $\pm$ 110 | 4000 $\pm$ 360 | 4000 $\pm$ 150 | 69.3 $\pm$ 5 | 24.5 $\pm$ 3 | 36.1 $\pm$ 3 | 20 $\pm$ 1
HE 0349-5249 | 1.541 | 47.567 $\pm$ 0.047 | 46.826 $\pm$ 0.049 | 1.704 $\pm$ 0.102 | 0.165 $\pm$ 0.014 | 4000 $\pm$ 400 | 3810 $\pm$ 220 | 4000 $\pm$ 150 | 21.9 $\pm$ 2.4 | 13.5 $\pm$ 3 | 37.6 $\pm$ 8 | 32.1 $\pm$ 3.5
HE 0359-3959 | 1.521 | 47.132 $\pm$ 0.047 | 46.911 $\pm$ 0.049 | 1.173 $\pm$ 0.07 | 0.371 $\pm$ 0.031 | 4000 $\pm$ 400 | 1770 $\pm$ 320 | 1560 $\pm$ 60 | 40.6 $\pm$ 4.4 | 9.2 $\pm$ 2.1 | 46.6 $\pm$ 9.9 | 43.3 $\pm$ 4.7
HE 0436-3709 | 1.445 | 46.949 $\pm$ 0.062 | 46.855 $\pm$ 0.046 | 1.164 $\pm$ 0.07 | 0.335 $\pm$ 0.028 | 4491 $\pm$ 449 | 4440 $\pm$ 250 | 4610 $\pm$ 170 | 33.8 $\pm$ 3.7 | 13.9 $\pm$ 1.9 | 26.8 $\pm$ 5.7 | 33.7 $\pm$ 3.7
HE 0507-3236 | 1.577 | 47.094 $\pm$ 0.044 | 46.939 $\pm$ 0.063 | 0.291 $\pm$ 0.006 | 0.119 $\pm$ 0.034 | 3200 $\pm$ 320 | 2830 $\pm$ 420 | 3870 $\pm$ 800 | 71.3 $\pm$ 7.2 | 25.1 $\pm$ 6.1 | 23.4 $\pm$ 8.2 | 17.8 $\pm$ 1.8
HE 0512-3329 | 1.587 | 47.222 $\pm$ 0.044 | 47.095 $\pm$ 0.064 | 0.81 $\pm$ 0.017 | 0.24 $\pm$ 0.05 | 3800 $\pm$ 380 | 2170 $\pm$ 320 | 2320 $\pm$ 360 | 75.6 $\pm$ 7.6 | 26.7 $\pm$ 6.5 | 46.6 $\pm$ 13.4 | 53.5 $\pm$ 5.4
HE 0926-0201 | 1.682 | 47.572 $\pm$ 0.049 | 47.338 $\pm$ 0.054 | 1.139 $\pm$ 0.082 | 0.374 $\pm$ 0.033 | 2900 $\pm$ 290 | 4310 $\pm$ 380 | 4500 $\pm$ 170 | 27.6 $\pm$ 3.1 | 28.9 $\pm$ 4.3 | 33.1 $\pm$ 7 | 27.6 $\pm$ 3.1
HE 1039-0724 | 1.458 | 47.027 $\pm$ 0.049 | 46.882 $\pm$ 0.05 | 0.289 $\pm$ 0.021 | 0.033 $\pm$ 0.018 | 5141 $\pm$ 514 | 3810 $\pm$ 890 | 3970 $\pm$ 920 | 28.4 $\pm$ 3.2 | 11.5 $\pm$ 3.4 | 2.7 $\pm$ 1.6 | 6.7 $\pm$ 0.8
HE 1120+0154 | 1.472 | 47.568 $\pm$ 0.047 | 47.363 $\pm$ 0.052 | 0.204 $\pm$ 0.012 | 0.084 $\pm$ 0.011 | 5498 $\pm$ 550 | 4030 $\pm$ 230 | 4040 $\pm$ 220 | 31 $\pm$ 3.4 | 13.3 $\pm$ 1.9 | 8.1 $\pm$ 1.9 | 5.4 $\pm$ 0.6
Marinello et al. (2016) Sample |
1H 1934-063$\dagger$ | 0.011 | 42.624 $\pm$ 0.043 | 42.637 $\pm$ 0.044 | 1.404 $\pm$ 0.223 | 0.368 $\pm$ 0.047 | 1430 $\pm$ 100 | 1000 $\pm$ 80 | 1205 $\pm$ 84 | 35.3 $\pm$ 3.5 | 18.8 $\pm$ 1.3 | 29.4 $\pm$ 1.9 | 53.2 $\pm$ 5.1
1H 2107-097$\dagger$ | 0.027 | 43.217 $\pm$ 0.043 | 43.465 $\pm$ 0.044 | 1.047 $\pm$ 0.106 | 0.139 $\pm$ 0.019 | 2530 $\pm$ 320 | 1720 $\pm$ 138 | 1700 $\pm$ 136 | 38 $\pm$ 3.4 | 14.4 $\pm$ 1.5 | 14.2 $\pm$ 1.5 | 34.9 $\pm$ 3.6
IZw1$\dagger$ | 0.061 | 44.344 $\pm$ 0.043 | 44.195 $\pm$ 0.044 | 2.286 $\pm$ 0.199 | 0.564 $\pm$ 0.058 | 1450 $\pm$ 110 | 820 $\pm$ 57 | 1100 $\pm$ 77 | 38.1 $\pm$ 2.9 | 33.9 $\pm$ 1.8 | 73.4 $\pm$ 4.1 | 84.9 $\pm$ 4.9
Mrk 1044$\dagger$ | 0.016 | 43.076 $\pm$ 0.043 | 43.013 $\pm$ 0.046 | 1.181 $\pm$ 0.127 | 0.111 $\pm$ 0.016 | 1570 $\pm$ 145 | 1010 $\pm$ 61 | 1200 $\pm$ 72 | 57.2 $\pm$ 5.1 | 18.9 $\pm$ 1.2 | 24.6 $\pm$ 2.1 | 65.3 $\pm$ 5.7
Mrk 1239$\dagger$ | 0.019 | 43.19 $\pm$ 0.043 | 43.366 $\pm$ 0.051 | 1.34 $\pm$ 0.147 | 0.148 $\pm$ 0.016 | 1720 $\pm$ 130 | 1220 $\pm$ 98 | 1240 $\pm$ 74 | 64.3 $\pm$ 5.3 | 23.3 $\pm$ 1.9 | 20.1 $\pm$ 2.2 | 74.8 $\pm$ 6.2
Mrk 335$\dagger$ | 0.026 | 43.82 $\pm$ 0.043 | 43.721 $\pm$ 0.053 | 0.818 $\pm$ 0.092 | 0.054 $\pm$ 0.007 | 1715 $\pm$ 130 | 1140 $\pm$ 103 | 1490 $\pm$ 119 | 123.8 $\pm$ 7.6 | 25.2 $\pm$ 1.7 | 11.3 $\pm$ 1.1 | 100.5 $\pm$ 8.8
Mrk 478$\dagger$ | 0.078 | 44.258 $\pm$ 0.043 | 44.45 $\pm$ 0.044 | 1.023 $\pm$ 0.089 | 0.514 $\pm$ 0.056 | 1250 $\pm$ 100 | 1300 $\pm$ 91 | 1560 $\pm$ 94 | 76.9 $\pm$ 6.1 | 20.8 $\pm$ 1.7 | 19.9 $\pm$ 1.6 | 55.2 $\pm$ 4.9
Mrk 493$\dagger$ | 0.032 | 43.393 $\pm$ 0.043 | 43.515 $\pm$ 0.044 | 1.721 $\pm$ 0.179 | 0.387 $\pm$ 0.046 | 1450 $\pm$ 110 | 770 $\pm$ 31 | 1065 $\pm$ 64 | 62.5 $\pm$ 5.9 | 13.3 $\pm$ 1.1 | 17.5 $\pm$ 1.4 | 33.3 $\pm$ 1.8
PG 1448+273$\dagger$ | 0.065 | 44.305 $\pm$ 0.043 | 44.064 $\pm$ 0.044 | 1.189 $\pm$ 0.129 | 0.262 $\pm$ 0.034 | 1730 $\pm$ 135 | 880 $\pm$ 35 | 885 $\pm$ 44 | 31.2 $\pm$ 3.8 | 15.2 $\pm$ 1 | 18.8 $\pm$ 1.3 | 32.2 $\pm$ 1.8
Tons 180$\dagger$ | 0.062 | 44.283 $\pm$ 0.043 | 43.911 $\pm$ 0.058 | 0.985 $\pm$ 0.11 | 0.145 $\pm$ 0.015 | 1470 $\pm$ 135 | 930 $\pm$ 41 | 990 $\pm$ 59 | 32.3 $\pm$ 1.9 | 9.6 $\pm$ 0.7 | 19.5 $\pm$ 1.6 | 31.1 $\pm$ 2.2
Marinello et al. (2020) Sample |
PHL 1092 | 0.394 | 44.883 $\pm$ 0.043 | 44.604 $\pm$ 0.047 | 2.576 $\pm$ 0.108 | 0.839 $\pm$ 0.038 | 1850 $\pm$ 100 | 1250 $\pm$ 100 | 3750 $\pm$ 360 | 34.2 $\pm$ 2.2 | 24.4 $\pm$ 1.8 | 90.2 $\pm$ 7.1 | 88.5 $\pm$ 5.3
Notes. Columns are as follows: (1) Object name. (2) Redshift. (3) Optical
continuum luminosity at 5100Å. (4) NIR luminosity at 8542Å. (5) and (6) RFeII
and RCaT values, respectively. (7), (8) and (9) Full-width at half maximum of
H$\beta$, O i $\lambda 8446$ and CaT in units of km s${}^{-1}$, respectively. (10),
(11), (12) and (13) Equivalent width of H$\beta$, O i $\lambda 8446$, CaT and
Fe ii in units of Å, respectively.
Table A.2: Black hole parameters
Object | log $M\mathrm{{}_{BH}}$ | log $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$
---|---|---
[$M_{\odot}$]
(1) | (2) | (3)
Persson (1988) Sample
IZw1$\dagger$ | 7.935 ${}^{+0.177}_{-0.177}$ | -0.362 ${}^{+0.184}_{-0.184}$
Mrk 42 | 6.966 ${}^{+0.257}_{-0.257}$ | -0.501 ${}^{+0.262}_{-0.262}$
Mrk 478$\dagger$ | 8.035 ${}^{+0.158}_{-0.158}$ | -0.461 ${}^{+0.165}_{-0.165}$
II Zw 136 | 8.061 ${}^{+0.138}_{-0.138}$ | -0.66 ${}^{+0.147}_{-0.147}$
Mrk 231 | 8.46 ${}^{+0.121}_{-0.121}$ | -1.069 ${}^{+0.131}_{-0.131}$
3C 273 | 9.188 ${}^{+0.142}_{-0.14}$ | -0.485 ${}^{+0.15}_{-0.148}$
Mrk 6 | 7.724 ${}^{+0.141}_{-0.141}$ | -1.053 ${}^{+0.149}_{-0.149}$
Mrk 486 | 7.707 ${}^{+0.148}_{-0.148}$ | -0.705 ${}^{+0.156}_{-0.156}$
Mrk 1239$\dagger$ | 7.037 ${}^{+0.201}_{-0.201}$ | -0.684 ${}^{+0.207}_{-0.206}$
Mrk 766 | 7.177 ${}^{+0.207}_{-0.207}$ | -0.577 ${}^{+0.213}_{-0.213}$
Zw 0033+45 | 8.077 ${}^{+0.151}_{-0.151}$ | -0.769 ${}^{+0.159}_{-0.159}$
Mrk 684 | 7.77 ${}^{+0.154}_{-0.154}$ | -0.615 ${}^{+0.161}_{-0.161}$
Mrk 335$\dagger$ | 7.709 ${}^{+0.148}_{-0.148}$ | -0.704 ${}^{+0.156}_{-0.156}$
Mrk 376 | 8.419 ${}^{+0.121}_{-0.121}$ | -1.099 ${}^{+0.131}_{-0.131}$
Mrk 493$\dagger$ | 7.129 ${}^{+0.276}_{-0.276}$ | -0.361 ${}^{+0.28}_{-0.28}$
Mrk 841 | 8.232 ${}^{+0.12}_{-0.12}$ | -1.107 ${}^{+0.13}_{-0.13}$
Ton 1542 | 8.181 ${}^{+0.122}_{-0.122}$ | -0.989 ${}^{+0.131}_{-0.131}$
VII Zw 118 | 8.432 ${}^{+0.124}_{-0.124}$ | -0.849 ${}^{+0.134}_{-0.134}$
Mrk 124 | 7.389 ${}^{+0.168}_{-0.168}$ | -0.69 ${}^{+0.175}_{-0.175}$
Mrk 9 | 8.118 ${}^{+0.123}_{-0.123}$ | -0.968 ${}^{+0.132}_{-0.132}$
NGC 7469 | 7.875 ${}^{+0.142}_{-0.142}$ | -0.957 ${}^{+0.15}_{-0.15}$
Akn 120 | 8.353 ${}^{+0.121}_{-0.121}$ | -1.177 ${}^{+0.13}_{-0.13}$
Mrk 352 | 7.553 ${}^{+0.126}_{-0.126}$ | -1.297 ${}^{+0.136}_{-0.135}$
Mrk 304 | 8.289 ${}^{+0.135}_{-0.135}$ | -0.858 ${}^{+0.144}_{-0.144}$
Mrk 509 | 8.506 ${}^{+0.125}_{-0.125}$ | -1.074 ${}^{+0.134}_{-0.134}$
Martínez-Aldama et al. (2015a, b) Sample
HE 1349+0007 | 9.834 ${}^{+0.183}_{-0.179}$ | -0.313 ${}^{+0.189}_{-0.185}$
HE 1409+0101 | 10.023 ${}^{+0.178}_{-0.173}$ | -0.094 ${}^{+0.184}_{-0.179}$
HE 2349-3800 | 9.746 ${}^{+0.166}_{-0.162}$ | -0.233 ${}^{+0.172}_{-0.168}$
HE 2147-3212 | 9.817 ${}^{+0.219}_{-0.216}$ | -0.260 ${}^{+0.225}_{-0.222}$
HE 2202-2557 | 9.981 ${}^{+0.181}_{-0.177}$ | -0.418 ${}^{+0.188}_{-0.184}$
HE 2340-4443 | 9.479 ${}^{+0.157}_{-0.154}$ | -0.246 ${}^{+0.164}_{-0.160}$
HE 0248-3628 | 9.917 ${}^{+0.174}_{-0.169}$ | -0.119 ${}^{+0.180}_{-0.175}$
HE 2352-4010 | 9.960 ${}^{+0.180}_{-0.175}$ | 0.048 ${}^{+0.185}_{-0.180}$
HE 0035-2853 | 9.943 ${}^{+0.183}_{-0.178}$ | -0.270 ${}^{+0.189}_{-0.185}$
HE 0048-2804 | 9.286 ${}^{+0.163}_{-0.161}$ | -0.634 ${}^{+0.171}_{-0.169}$
HE 0058-3231 | 9.956 ${}^{+0.169}_{-0.165}$ | -0.262 ${}^{+0.176}_{-0.172}$
HE 0203-4627 | 9.856 ${}^{+0.219}_{-0.216}$ | -0.348 ${}^{+0.224}_{-0.222}$
HE 0005-2355 | 9.838 ${}^{+0.220}_{-0.217}$ | -0.283 ${}^{+0.226}_{-0.223}$
HE 0043-2300 | 9.859 ${}^{+0.172}_{-0.167}$ | -0.106 ${}^{+0.179}_{-0.175}$
HE 0349-5249 | 9.990 ${}^{+0.199}_{-0.195}$ | -0.110 ${}^{+0.205}_{-0.201}$
HE 0359-3959 | 9.758 ${}^{+0.190}_{-0.187}$ | -0.227 ${}^{+0.196}_{-0.193}$
HE 0436-3709 | 9.703 ${}^{+0.188}_{-0.185}$ | -0.317 ${}^{+0.198}_{-0.195}$
HE 0507-3236 | 9.658 ${}^{+0.190}_{-0.186}$ | -0.156 ${}^{+0.195}_{-0.192}$
HE 0512-3329 | 9.788 ${}^{+0.192}_{-0.188}$ | -0.184 ${}^{+0.197}_{-0.193}$
HE 0926-0201 | 9.877 ${}^{+0.201}_{-0.196}$ | 0.007 ${}^{+0.207}_{-0.203}$
HE 1039-0724 | 9.793 ${}^{+0.188}_{-0.185}$ | -0.345 ${}^{+0.195}_{-0.191}$
HE 1120+0154 | 10.105 ${}^{+0.200}_{-0.195}$ | -0.225 ${}^{+0.205}_{-0.201}$
Marinello et al. (2016) Sample
1H 1934-063$\dagger$ | 6.985 ${}^{+0.156}_{-0.155}$ | -1.059 ${}^{+0.162}_{-0.161}$
1H 2107-097$\dagger$ | 7.507 ${}^{+0.178}_{-0.178}$ | -1.107 ${}^{+0.183}_{-0.183}$
IZw1$\dagger$ | 7.907 ${}^{+0.151}_{-0.151}$ | -0.605 ${}^{+0.158}_{-0.158}$
Mrk 1044$\dagger$ | 7.259 ${}^{+0.162}_{-0.162}$ | -0.973 ${}^{+0.168}_{-0.167}$
Mrk 1239$\dagger$ | 7.353 ${}^{+0.151}_{-0.15}$ | -0.975 ${}^{+0.157}_{-0.157}$
Mrk 335$\dagger$ | 7.688 ${}^{+0.148}_{-0.148}$ | -0.805 ${}^{+0.155}_{-0.155}$
Mrk 478$\dagger$ | 7.807 ${}^{+0.156}_{-0.156}$ | -0.575 ${}^{+0.162}_{-0.162}$
Mrk 493$\dagger$ | 7.399 ${}^{+0.152}_{-0.152}$ | -0.859 ${}^{+0.158}_{-0.158}$
PG 1448+273$\dagger$ | 7.949 ${}^{+0.15}_{-0.149}$ | -0.679 ${}^{+0.156}_{-0.156}$
Tons 180$\dagger$ | 7.879 ${}^{+0.16}_{-0.16}$ | -0.626 ${}^{+0.166}_{-0.166}$
Marinello et al. (2020) Sample
PHL 1092 | 8.281 ${}^{+0.140}_{-0.140}$ | -0.549 ${}^{+0.147}_{-0.147}$
NOTES. Columns are as follows: (1) Object name. (2) Black hole mass estimated
using the classical RL relation (Eq. 3), in units of M⊙. (3) Eddington ratio.
Table A.3: Parameters of the correlations for the observed and bootstrap
sample
| | Observational data | | Bootstrap results |
---|---|---|---|---|---
Relation | $\alpha$ | $\beta$ | $\rho$ | $p$-value | $\sigma$ | | $\alpha_{\rm BS}$ | $\beta_{\rm BS}$ | ${\rho_{\rm BS}}$ | $f_{sig}$ | P. Dist. | $p-$valueran | $f_{\rm ran}$
(1) | (2) | (3) | (4) | (5) | (6) | | (7) | (8) | (9) | (10) | (11) | (12) | (13)
EWHβ | $L_{\mathrm{opt}}$ | -0.064 $\pm$ 0.032 | 4.632 $\pm$ 1.469 | -0.412 | 0.001 | 0.200 | | -0.064 $\pm$ 0.031 | 4.618 $\pm$ 1.408 | -0.428${}^{+0.21}_{-0.17}$ | 0.5 | c,a,b | 0.76,0.95, 0.67 | 3.5$\times 10^{-3}$
$L_{\rm NIR}$ | -0.064 $\pm$ 0.036 | 4.627 $\pm$ 1.618 | -0.352 | 0.007 | 0.204 | | -0.065 $\pm$ 0.034 | 4.656 $\pm$ 1.430 | -0.372${}^{+0.22}_{-0.17}$ | 0.2 | c,c,d | 0.76,0.98,0.60 | 1.0$\times 10^{-3}$
$M\mathrm{{}_{BH}}$ | -0.091 $\pm$ 0.053 | 2.508 $\pm$ 0.455 | -0.390 | 0.002 | 0.206 | | -0.092 $\pm$ 0.048 | 2.519 $\pm$ 0.409 | -0.405${}^{+0.24}_{-0.19}$ | 0.4 | c,b,d | 0.76,0.96,0.94 | 1.0$\times 10^{-3}$
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | -0.332 $\pm$ 0.149 | 1.53 $\pm$ 0.102 | -0.531 | 1.78$\times 10^{-5}$ | 0.195 | | -0.335 $\pm$ 0.148 | 1.529 $\pm$ 0.09 | -0.544${}^{+0.19}_{-0.16}$ | 0.85 | c,c | 0.76,0.61 | 1.5$\times 10^{-3}$
EWOI | $L_{\mathrm{opt}}$ | -0.007 $\pm$ 0.034 | 1.557 $\pm$ 1.548 | -0.015 | 0.910 | 0.211 | | -0.007 $\pm$ 0.033 | 1.000 $\pm$ 1.510 | -0.016${}^{+0.23}_{-0.27}$ | 0.0 | e,a,b | 0.97,0.95,0.67 | 1.0$\times 10^{-3}$
$L_{\rm NIR}$ | -0.003 $\pm$ 0.037 | 1.358 $\pm$ 1.673 | 0.053 | 0.691 | 0.211 | | -0.003 $\pm$ 0.034 | 1.358 $\pm$ 1.603 | 0.060${}^{+0.20}_{-0.28}$ | 0.0 | e,c,d | 0.97,0.98,0.60 | 1.5$\times 10^{-3}$
$M\mathrm{{}_{BH}}$ | -0.006 $\pm$ 0.054 | 1.271 $\pm$ 0.468 | -0.008 | 0.951 | 0.211 | | -0.005 $\pm$ 0.057 | 1.264 $\pm$ 0.459 | 0.008${}^{+0.23}_{-0.27}$ | 0.0 | e,b,d | 0.97,0.96,0.94 | 1.0$\times 10^{-3}$
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | -0.079 $\pm$ 0.161 | 1.172 $\pm$ 0.110 | -0.162 | 0.223 | 0.209 | | -0.082 $\pm$ 0.142 | 1.169 $\pm$ 0.107 | -0.161${}^{+0.23}_{-0.26}$ | 0.0 | e,c | 0.97,0.61 | 5.0$\times 10^{-4}$
EWFeII | $L_{\mathrm{opt}}$ | -0.091 $\pm$ 0.039 | 5.666 $\pm$ 1.754 | -0.409 | 0.001 | 0.239 | | -0.092 $\pm$ 0.039 | 5.678 $\pm$ 1.705 | -0.422${}^{+0.22}_{-0.17}$ | 0.5 | a,a,b | 0.94,0.95,0.67 | 5.0$\times 10^{-4}$
$L_{\rm NIR}$ | -0.095 $\pm$ 0.042 | 5.846 $\pm$ 1.917 | -0.367 | 0.005 | 0.242 | | -0.096 $\pm$ 0.043 | 5.881 $\pm$ 1.809 | -0.385${}^{+0.23}_{-0.18}$ | 0.3 | a,c,d | 0.94,0.95,0.55 | 5.0$\times 10^{-4}$
$M\mathrm{{}_{BH}}$ | -0.151 $\pm$ 0.06 | 2.831 $\pm$ 0.52 | -0.493 | 8.58$\times 10^{-5}$ | 0.234 | | -0.15 $\pm$ 0.059 | 2.824 $\pm$ 0.52 | -0.517${}^{+0.21}_{-0.14}$ | 0.75 | a,b,d | 0.94,0.96,0.94 | 1.0$\times 10^{-3}$
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | -0.281 $\pm$ 0.203 | 1.373 $\pm$ 0.139 | -0.360 | 0.005 | 0.265 | | -0.279 $\pm$ 0.211 | 1.374 $\pm$ 0.142 | -0.375${}^{+0.22}_{-0.19}$ | 0.3 | a,c | 0.94,0.61 | 5.0$\times 10^{-4}$
EWCaT | $L_{\mathrm{opt}}$ | 0.068 $\pm$ 0.053 | -1.851 $\pm$ 2.381 | 0.417 | 0.001 | 0.324 | | 0.069 $\pm$ 0.049 | -1.896 $\pm$ 1.842 | 0.425${}^{+0.14}_{-0.25}$ | 0.5 | a,a,b | 0.93,0.95,0.67 | 1.0$\times 10^{-3}$
$L_{\rm NIR}$ | 0.068 $\pm$ 0.057 | -1.827 $\pm$ 2.593 | 0.377 | 0.004 | 0.327 | | 0.069 $\pm$ 0.051 | -1.878 $\pm$ 2.451 | 0.380${}^{+0.15}_{-0.24}$ | 0.3 | a,c,d | 0.93,0.98, 0.60 | 1.5$\times 10^{-3}$
$M\mathrm{{}_{BH}}$ | 0.088 $\pm$ 0.085 | 0.470 $\pm$ 0.733 | 0.301 | 0.022 | 0.331 | | 0.089 $\pm$ 0.074 | 0.456 $\pm$ 0.608 | 0.305${}^{+0.17}_{-0.23}$ | 0.1 | a,b,d | 0.93,0.96,0.94 | 1.0$\times 10^{-3}$
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | 0.428 $\pm$ 0.237 | 1.477 $\pm$ 0.162 | 0.482 | 1.27$\times 10^{-4}$ | 0.309 | | 0.431 $\pm$ 0.200 | 1.477 $\pm$ 0.133 | 0.494${}^{+0.14}_{-0.24}$ | 0.71 | a,c | 0.93,0.61 | 1.0$\times 10^{-3}$
RFeII | $L_{\mathrm{opt}}$ | -0.015 $\pm$ 0.049 | 0.520 $\pm$ 2.209 | -0.063 | 0.640 | 0.301 | | -0.015 $\pm$ 0.045 | 0.536 $\pm$ 1.901 | -0.064${}^{+0.22}_{-0.28}$ | 0.0 | e,a,b | 0.96,0.95,0.67 | 1.5$\times 10^{-3}$
$L_{\rm NIR}$ | -0.017 $\pm$ 0.053 | 0.631 $\pm$ 2.383 | -0.081 | 0.546 | 0.301 | | -0.017 $\pm$ 0.047 | 0.602 $\pm$ 2.264 | -0.083${}^{+0.22}_{-0.26}$ | 0.0 | e,c,d | 0.96,0.98,0.60 | 5.0$\times 10^{-4}$
$M\mathrm{{}_{BH}}$ | -0.042 $\pm$ 0.076 | 0.204 $\pm$ 0.662 | -0.162 | 0.226 | 0.299 | | -0.042 $\pm$ 0.065 | 0.205 $\pm$ 0.566 | -0.160${}^{+0.22}_{-0.25}$ | 0.0 | e,b,d | 0.96,0.96,0.94 | 2.0$\times 10^{-3}$
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | 0.109 $\pm$ 0.230 | -0.092 $\pm$ 0.157 | 0.105 | 0.432 | 0.300 | | 0.111 $\pm$ 0.239 | -0.088 $\pm$ 0.141 | 0.102${}^{+0.25}_{-0.29}$ | 0.0 | e,c | 0.96,0.61 | 1.0$\times 10^{-3}$
RCaT | $L_{\mathrm{opt}}$ | 0.057 $\pm$ 0.063 | -3.392 $\pm$ 2.846 | 0.247 | 0.061 | 0.388 | | 0.056 $\pm$ 0.056 | -3.350 $\pm$ 2.437 | 0.243${}^{+0.18}_{-0.23}$ | 0.1 | f,a,b | 0.76,0.95,0.67 | 3.5$\times 10^{-3}$
$L_{\rm NIR}$ | 0.059 $\pm$ 0.068 | -3.476 $\pm$ 3.079 | 0.240 | 0.069 | 0.389 | | 0.060 $\pm$ 0.058 | -3.502 $\pm$ 2.823 | 0.241${}^{+0.18}_{-0.24}$ | 0.1 | f,c,d | 0.76,0.98,0.60 | 1.5$\times 10^{-3}$
$M\mathrm{{}_{BH}}$ | 0.058 $\pm$ 0.101 | -1.309 $\pm$ 0.874 | 0.117 | 0.381 | 0.394 | | 0.058 $\pm$ 0.091 | -1.306 $\pm$ 0.762 | 0.111${}^{+0.20}_{-0.24}$ | 0.0 | f,b,d | 0.76,0.96,0.94 | 1.5$\times 10^{-3}$
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | 0.500 $\pm$ 0.275 | -0.516 $\pm$ 0.188 | 0.425 | 8.90$\times 10^{-4}$ | 0.359 | | 0.498 $\pm$ 0.254 | -0.516 $\pm$ 0.152 | 0.426${}^{+0.18}_{-0.25}$ | 0.51 | f,c | 0.76,0.61 | 5.0$\times 10^{-4}$
Fe ii/CaT | $L_{\mathrm{opt}}$ | -0.072 $\pm$ 0.039 | 3.912 $\pm$ 1.78 | -0.441 | 5.31$\times 10^{-4}$ | 0.242 | | -0.071 $\pm$ 0.039 | 3.877 $\pm$ 1.747 | -0.444${}^{+0.22}_{-0.22}$ | 0.54 | e,a,b | 0.98,0.95,0.67 | 5.0$\times 10^{-4}$
$L_{\rm NIR}$ | -0.077 $\pm$ 0.043 | 4.107 $\pm$ 1.929 | -0.456 | 3.20$\times 10^{-4}$ | 0.244 | | -0.076 $\pm$ 0.042 | 4.086 $\pm$ 1.887 | -0.470${}^{+0.23}_{-0.20}$ | 0.61 | e,c,d | 0.98,0.98,0.60 | 5.0$\times 10^{-4}$
$M\mathrm{{}_{BH}}$ | -0.100 $\pm$ 0.064 | 1.513 $\pm$ 0.552 | -0.354 | 0.006 | 0.249 | | -0.102 $\pm$ 0.061 | 1.532 $\pm$ 0.559 | -0.348${}^{+0.23}_{-0.24}$ | 0.3 | e,b,d | 0.98,0.96,0.94 | 1.5$\times 10^{-3}$
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | -0.391 $\pm$ 0.179 | 0.424 $\pm$ 0.122 | -0.554 | 6.44$\times 10^{-6}$ | 0.233 | | -0.394 $\pm$ 0.164 | 0.422 $\pm$ 0.099 | -0.560${}^{+0.20}_{-0.17}$ | 0.89 | e,c | 0.98,0.61 | 1.0$\times 10^{-3}$
RFeII | RCaT | 0.973 $\pm$ 0.239 | -0.657 $\pm$ 0.081 | 0.721 | 1.75$\times 10^{-10}$ | 0.270 | | 0.974 $\pm$ 0.189 | -0.658 $\pm$ 0.07 | 0.737${}^{+0.07}_{-0.19}$ | 1.0 | e,f | 0.96,0.76 | 2.0$\times 10^{-3}$
NOTES. Columns are as follows: (1) Relations. (2) Slope of the observational
sample and error at $2\sigma$. (3) Intercept of the observational sample and
error at $2\sigma$. (4) Spearman rank correlation coefficient for the
observational sample. Significant correlations with respect to this parameter
are bold-faced. (5) $p-$value of the correlation coefficient. (6) Scatter of
the observational sample with respect to the best fit. (7) Slope of the
bootstrap sample and error at $2\sigma$. (8) Intercept of the bootstrap sample
and error at $2\sigma$. (9) Maximum of the Spearman rank correlation
coefficient distribution for 1000 realizations of the bootstrap sample and the
errors at $2\sigma$. (10) Fraction of significant bootstrap realizations with
respect to the total number. (11) Probability distributions used to model the
observational distributions using a random sample. The symbols are as follows:
a–skewnorm, b–powernorm, c–powerlaw, d–loglaplace, e–powerlognorm, f–lognorm.
The symbols follow the order of col. (1). In correlations involving luminosity
and black hole mass, two distributions were used in the modelling. (12)
$p-$value of the Kolmogorov–Smirnov test used to select the best-fitting
distribution; the order is the same as in col. (11). (13) Fraction of
significant correlations assuming two random samples.
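The bootstrap procedure summarized in columns (7)–(10) can be sketched as follows. This is a minimal illustration, assuming Gaussian perturbations within the quoted measurement errors and toy data in place of the observed sample; the function name and thresholds are illustrative, not taken from the paper's code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def bootstrap_spearman(x, y, x_err, y_err, n_boot=1000, p_thresh=0.001):
    """Perturb each data point within its (assumed Gaussian) error and
    recompute the Spearman rank correlation; return the rho distribution
    and the fraction of realizations that remain significant."""
    rhos, n_sig = [], 0
    for _ in range(n_boot):
        xb = rng.normal(x, x_err)
        yb = rng.normal(y, y_err)
        rho, p = stats.spearmanr(xb, yb)
        rhos.append(rho)
        if p < p_thresh:
            n_sig += 1
    return np.array(rhos), n_sig / n_boot

# toy sample of 58 objects with a genuine underlying correlation
x = rng.normal(0, 1, 58)
y = 0.8 * x + rng.normal(0, 0.5, 58)
rhos, f_sig = bootstrap_spearman(x, y, 0.1 * np.ones(58), 0.1 * np.ones(58))
print(round(np.median(rhos), 2), f_sig)
```

The median of `rhos` plays the role of $\rho_{\rm BS}$ and `f_sig` the role of $f_{sig}$ in Table A.3.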
Table A.4: Correlations between the first four eigenvectors and the physical parameters
Relations | Full | Low–$L$ | High–$L$
---|---|---|---
$\rho$ | $p$–value | $\rho$ | $p$–value | $\rho$ | $p$–value
(1) | (2) | (3) | (4) | (5) | (6) | (7)
PC1 | FWHMHβ | -0.792 | 1.32$\times 10^{-13}$ | 0.669 | 7.37$\times 10^{-5}$ | 0.76 | 1.73$\times 10^{-6}$
FWHMOI | -0.845 | 7.51$\times 10^{-17}$ | 0.728 | 7.65$\times 10^{-6}$ | 0.884 | 2.1$\times 10^{-10}$
FWHMCaT | -0.844 | 1.77$\times 10^{-13}$ | 0.659 | 0.00159 | 0.85 | 3.94$\times 10^{-8}$
EWHβ | 0.583 | 1.59$\times 10^{-6}$ | 0.736 | 5.46$\times 10^{-6}$ | -0.694 | 2.93$\times 10^{-5}$
EWOI | 0.180 | 0.177 | 0.871 | 8.11$\times 10^{-10}$ | -0.296 | 0.12
EWFeII | 0.703 | 7.79$\times 10^{-10}$ | 0.476 | 0.00897 | -0.681 | 4.73$\times 10^{-5}$
EWCaT | -0.179 | 0.178 | -0.072 | 0.712 | -0.257 | 0.178
$z$ | -0.700 | 1$\times 10^{-9}$ | 0.356 | 0.0578 | 0.366 | 0.0512
log $L\mathrm{{}_{bol}}$ | -0.748 | 1.49$\times 10^{-11}$ | 0.495 | 0.00638 | 0.458 | 0.0124
log $M\mathrm{{}_{BH}}$ | -0.845 | 6.99$\times 10^{-17}$ | 0.701 | 2.24$\times 10^{-5}$ | 0.662 | 9.16$\times 10^{-5}$
log $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | -0.519 | 2.94$\times 10^{-5}$ | -0.492 | 0.00672 | 0.174 | 0.367
log RFeII | 0.164 | 0.218 | -0.316 | 0.0952 | -0.089 | 0.645
log RCaT | -0.151 | 0.259 | -0.382 | 0.041 | -0.021 | 0.914
log Fe ii/CaT | 0.426 | 8.46$\times 10^{-4}$ | 0.338 | 0.073 | -0.125 | 0.518
PC2 | FWHMHβ | 0.023 | 0.864 | -0.344 | 0.0674 | -0.278 | 0.145
FWHMOI | -0.033 | 0.804 | -0.38 | 0.0422 | 0.057 | 0.769
FWHMCaT | -0.098 | 0.518 | 0.456 | 0.0435 | -0.016 | 0.939
EWHβ | -0.012 | 0.93 | -0.212 | 0.27 | -0.327 | 0.0835
EWOI | -0.681 | 3.92$\times 10^{-9}$ | 0.348 | 0.064 | 0.346 | 0.0661
EWFeII | -0.509 | 4.57$\times 10^{-5}$ | 0.66 | 9.88$\times 10^{-5}$ | 0.357 | 0.0572
EWCaT | -0.81 | 1.29$\times 10^{-14}$ | 0.855 | 3.57$\times 10^{-9}$ | 0.86 | 2.3$\times 10^{-9}$
$z$ | -0.291 | 0.0267 | 0.069 | 0.722 | 0.5 | 0.00569
log $L\mathrm{{}_{bol}}$ | -0.268 | 0.0416 | 0.065 | 0.738 | 0.492 | 0.00666
log $M\mathrm{{}_{BH}}$ | -0.169 | 0.205 | -0.233 | 0.223 | 0.306 | 0.106
log $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | 0.272 | 0.0392 | 0.354 | 0.0598 | 0.492 | 0.00669
log RFeII | -0.46 | 2.79$\times 10^{-4}$ | 0.698 | 2.6$\times 10^{-5}$ | 0.669 | 7.26$\times 10^{-5}$
log RCaT | -0.506 | 5.08$\times 10^{-5}$ | 0.65 | 1.34$\times 10^{-4}$ | 0.838 | 1.46$\times 10^{-8}$
log Fe ii/CaT | 0.225 | 0.0888 | -0.257 | 0.178 | -0.338 | 0.0725
PC3 | FWHMHβ | -0.512 | 3.97$\times 10^{-5}$ | 0.196 | 0.308 | -0.046 | 0.812
FWHMOI | -0.506 | 5.15$\times 10^{-5}$ | -0.036 | 0.852 | 0.074 | 0.701
FWHMCaT | -0.47 | 9.7$\times 10^{-4}$ | -0.244 | 0.301 | 0.042 | 0.837
EWHβ | -0.647 | 4.18$\times 10^{-8}$ | -0.743 | 3.87$\times 10^{-6}$ | 0.478 | 0.00874
EWOI | -0.562 | 4.39$\times 10^{-6}$ | -0.342 | 0.0693 | 0.83 | 2.57$\times 10^{-8}$
EWFeII | 0.027 | 0.838 | -0.244 | 0.202 | -0.104 | 0.59
EWCaT | 0.318 | 0.0149 | 0.251 | 0.189 | 0.065 | 0.739
$z$ | -0.007 | 0.959 | 0.023 | 0.905 | 0.119 | 0.54
log $L\mathrm{{}_{bol}}$ | -0.041 | 0.761 | 0.14 | 0.469 | 0.02 | 0.916
log $M\mathrm{{}_{BH}}$ | -0.195 | 0.142 | 0.185 | 0.337 | -0.064 | 0.741
log $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | 0.341 | 0.00884 | -0.124 | 0.521 | 0.097 | 0.617
log RFeII | 0.621 | 2$\times 10^{-7}$ | 0.4 | 0.0313 | -0.501 | 0.00564
log RCaT | 0.600 | 6.52$\times 10^{-7}$ | 0.486 | 0.00759 | -0.173 | 0.368
log Fe ii/CaT | -0.232 | 0.0794 | -0.351 | 0.0621 | -0.478 | 0.00875
PC4 | FWHMHβ | 0.279 | 0.034 | -0.096 | 0.619 | 0.378 | 0.0431
FWHMOI | 0.190 | 0.154 | -0.187 | 0.332 | 0.313 | 0.0985
FWHMCaT | 0.118 | 0.435 | -0.699 | 6.02$\times 10^{-4}$ | 0.423 | 0.0313
EWHβ | -0.381 | 0.00315 | -0.054 | 0.78 | -0.1 | 0.608
EWOI | -0.339 | 0.00934 | 0.075 | 0.698 | 0.017 | 0.929
EWFeII | 0.130 | 0.331 | 0.041 | 0.832 | 0.164 | 0.397
EWCaT | 0.076 | 0.571 | -0.19 | 0.324 | 0.153 | 0.43
$z$ | -0.089 | 0.506 | -0.189 | 0.327 | -0.423 | 0.0224
log $L\mathrm{{}_{bol}}$ | -0.057 | 0.67 | -0.266 | 0.163 | -0.469 | 0.0104
log $M\mathrm{{}_{BH}}$ | 0.016 | 0.907 | -0.197 | 0.307 | -0.281 | 0.14
log $L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ | -0.239 | 0.0708 | -0.07 | 0.717 | -0.693 | 3.13$\times 10^{-5}$
log RFeII | 0.42 | 0.00104 | 0.018 | 0.927 | 0.268 | 0.16
log RCaT | 0.207 | 0.119 | -0.279 | 0.143 | 0.18 | 0.35
log Fe ii/CaT | 0.224 | 0.0915 | 0.361 | 0.0547 | 0.244 | 0.201
NOTES. Columns are as follows: (1) Relations. (2), (4) and (6) Spearman rank
correlation coefficient for the full, low– and high–$L$ samples, respectively.
(3), (5) and (7) $p-$value of the correlation coefficient, for the full, low–
and high–$L$ samples, respectively. Significant correlations are bold-faced.
## Appendix B Random distribution and residuals plots
This section includes extra figures that complement the results in Sec. 4.5
and 4.6. Figure B.1 shows the distributions of the Fe ii/CaT and
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ parameters compared with the ones
obtained from the fitting process (see Sec. 4.5.1). The fitted distribution
and $p-$value are indicated in the left and middle panels. The right panel
shows the distribution obtained from random samples, used to estimate how
often a Fe ii/CaT–$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ correlation as
strong as the one from the observational sample arises by chance.
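The random-sample test can be sketched as below: draw uncorrelated samples of 58 objects from the two fitted distributions and count how often a correlation as strong as the observed one appears by chance. The specific distributions and the observed $\rho$ used here are stand-in assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def random_fraction(dist_x, dist_y, n_src=58, n_trials=2000,
                    rho_obs=0.554, p_thresh=0.001):
    """Draw independent samples from the two fitted distributions and
    count how often a spurious correlation at least as strong as the
    observed one shows up."""
    n_hit = 0
    for _ in range(n_trials):
        x = dist_x.rvs(n_src, random_state=rng)
        y = dist_y.rvs(n_src, random_state=rng)
        rho, p = stats.spearmanr(x, y)
        if abs(rho) >= rho_obs and p < p_thresh:
            n_hit += 1
    return n_hit / n_trials

# assumed stand-ins for the fitted distributions (e.g. lognorm for Fe ii/CaT)
f_ran = random_fraction(stats.lognorm(s=0.5), stats.norm(loc=-0.5, scale=0.3))
print(f_ran)
```

A small `f_ran` (well below the $3\sigma$ level) indicates the observed correlation is unlikely to be a chance alignment of two unrelated distributions.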
Figure B.1: Modeled probability distribution for
$L\mathrm{{}_{bol}}/L\mathrm{{}_{Edd}}$ (left panel) and Fe ii/CaT (middle
panel). In each panel the observed distribution is compared with the one
obtained via a bootstrap analysis, see Sec. 4.5.1. The right panel shows the
distribution of the Spearman rank correlation coefficient for 58 randomly
selected sources drawn from the distributions in the left and middle panels.
The number of significant correlations ($\rho_{\rm ran}$ and
$p-$val$<0.001$) is below the $3\sigma$ confidence level.
In order to assess any signature of redshift effects, Figs. B.2 and B.3 show
the distributions of the residuals as a function of redshift for low– and
high–$L$ objects. This division coincides with the split into low– and
high–redshift sources. The median of each distribution (dashed vertical lines)
does not differ significantly from zero, and no dependence on redshift is
observed within the $2\sigma$ confidence level. This suggests that the
relations in Figs. 3 and 4 are not artificially enhanced by redshift effects.
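The residual check can be sketched as below, using toy data in place of the observed sample (the linear relation, scatter, and redshift split are illustrative assumptions): fit the relation, compute residuals, and compare the subsample medians to zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# toy sample: a linear relation with scatter, split into low- and high-z halves
z = np.concatenate([rng.uniform(0.0, 0.1, 30), rng.uniform(0.8, 1.7, 28)])
x = rng.normal(0, 1, 58)
y = 0.5 * x + rng.normal(0, 0.2, 58)

# best linear fit and residuals with respect to it
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)

# medians per redshift subsample; values consistent with zero argue
# against a redshift-driven enhancement of the correlation
low, high = resid[z < 0.5], resid[z >= 0.5]
print(round(np.median(low), 3), round(np.median(high), 3))

# a Spearman test for any residual trend with redshift
rho, p = stats.spearmanr(z, resid)
```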
Figure B.2: Distribution of the residuals with respect to the best fits as a
function of redshift for the correlations shown in Fig. 3. Each panel
indicates the analyzed relation. The green distribution corresponds to the
low–$L$ (and low–$z$) subsample, while the blue one shows the high–$L$ (and
high–$z$) distribution. Vertical lines indicate the median of each
distribution.
Figure B.3: Same as Fig. B.2, but for correlations shown in Fig. 4.
## Appendix C Principal Component Analysis - effect of redundant parameters
The correlation between a variable and a principal component (PC) is used as
the coordinate of the variable on that PC. The representation of variables
differs from the plot of the observations: the observations are represented by
their projections, whereas the variables are represented by their correlations
(Abdi & Williams, 2010). (a) Positively correlated variables are grouped
together; (b) negatively correlated variables are positioned on opposite sides
of the plot origin (opposed quadrants); and (c) the distance between a
variable and the origin measures the quality of its representation on the
factor map: variables far from the origin are well represented.
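The factor-map coordinates described above can be computed as sketched below. This is a minimal illustration on standardized toy data (four correlated variables are an assumption, not the paper's 11-parameter set): the coordinate of variable $j$ on PC $k$ is its correlation with the corresponding PC score.

```python
import numpy as np

rng = np.random.default_rng(2)

# standardized toy data: 58 objects, 4 variables sharing a common driver
n, p = 58, 4
base = rng.normal(0, 1, n)
X = np.column_stack([base + rng.normal(0, s, n) for s in (0.3, 0.4, 0.8, 1.5)])
X = (X - X.mean(0)) / X.std(0)

# PCA via SVD of the standardized matrix
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S                      # observation projections (PC scores)

# variable coordinates on the factor map = correlation with each PC
coords = np.array([[np.corrcoef(X[:, j], scores[:, k])[0, 1]
                    for k in range(p)] for j in range(p)])
print(np.round(coords[:, 0], 2))    # loadings of each variable on PC1
```

Variables with small noise around the common driver land close to the unit circle on PC1 (well represented); noisier variables fall nearer the origin.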
Figure C.1: Graphical representation of the principal component analysis (PCA)
decomposition of our sample (58 sources). The dots represent individual
objects on the standardized PC1 - PC2 plane that have variables indicated by
the axis labels. The arrows represent the prominence of the variables in the
PC1 - PC2 plane. The dashed lines mark the coordinate axes in the PC1 - PC2
plane and the ellipses depict the 95% occupancy of the sources in their
respective subsamples. The sample is categorized based on their original
source catalogues (see Panda et al., 2020a, for details on the observational
sample) \- (1) Martínez-Aldama et al. (2015a); (2) Martínez-Aldama et al.
(2015b); (3) Persson (1988); (4) Marinello et al. (2016); and (5) PHL1092
(Marinello et al., 2020). LEFT: with $L_{\mathrm{opt}}$; RIGHT: without
$L_{\mathrm{opt}}$. The lower panels illustrate the precedence of the first 10
principal components in the form of scree plots.
Figure C.2: Similar to Figure C.1. LEFT: with RFeII and RCaT; RIGHT: without
RFeII and RCaT. The lower panels illustrate the precedence of the first 10
principal components in the form of scree plots.
As a first test for the PCA, we consider the aforementioned 11 parameters.
Before drawing any definitive conclusions, we wanted to remove redundant or
uncorrelated parameters, which eliminates the noise they introduce in the PCA
output. We perform this testing in two steps. In the first step, we analyze
the effect of including both the optical and the NIR luminosity in the PCA
run. The optical (at 5100Å) and NIR (8542Å) luminosities are almost identical
for our sample (see the bottom-right panel of Figure 2), with a correlation
coefficient r = 0.950 ($p-$value = 6.72$\times 10^{-30}$). This result is
consistent with our PCA (Figure C.1), where the factor maps for the two cases
(with and without the optical luminosity) are shown adjacent to each other. At
first glance, the differences among the sources constituting our full sample
can be clearly seen. The (3) Persson and (4) Marinello samples (i.e. the
low-luminosity sources) are similarly oriented in the PC1–PC2 diagram (factor
map), while the high-luminosity sources from the two Martínez-Aldama
catalogues (1 and 2) occupy a separate region. We study these subsamples in
more detail in the next section. The corresponding scree plots are similar and
highlight the dominance of the first principal component (43% for the case
with the optical luminosity, and 40.6% without it). The subsequent principal
components show similar precedence. We thus make use of only the NIR
luminosity henceforth.
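The effect of carrying a near-duplicate variable into the PCA can be sketched as below, using toy data (the luminosity values and noise levels are assumptions): a pair of nearly identical variables contributes an inflated eigenvalue that boosts the apparent dominance of PC1.

```python
import numpy as np

rng = np.random.default_rng(3)

def explained_variance(X):
    """Fraction of total variance along each PC of a standardized matrix."""
    Z = (X - X.mean(0)) / X.std(0)
    s = np.linalg.svd(Z, full_matrices=False, compute_uv=False)
    return s**2 / np.sum(s**2)

n = 58
l_opt = rng.normal(44, 0.8, n)
l_nir = l_opt + rng.normal(0, 0.2, n)   # nearly identical to l_opt
other = rng.normal(0, 1, (n, 3))        # three unrelated parameters

with_both = explained_variance(np.column_stack([l_opt, l_nir, other]))
without = explained_variance(np.column_stack([l_nir, other]))
print(np.round(with_both[0], 2), np.round(without[0], 2))
```

Comparing `with_both` and `without` mirrors the scree-plot comparison in Figure C.1: dropping one member of the redundant pair changes the variance budget without adding information.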
The parameters, RFeII and RCaT are estimated from the various observations
that are tabulated in Table LABEL:tab:table1. These values are estimated from
the ratio of the fluxes of the respective emission species (optical Fe ii
within the 4434-4684Å and H$\beta$; CaT and H$\beta$, respectively). In our
analysis, we use the EWs for the said species which are basically scaled
versions of the line fluxes, each normalized by the corresponding
continuum luminosity (at 5100Å for Fe ii and 8542Å for CaT). Thus, the RFeII
and RCaT seemingly become redundant in the presence of the EWs. We test this
effect of redundancy on our sample and the results are presented in Figure
C.2. The representation is similar to the Figure C.1. The factor-map on the
left panel shows the case where the RFeII and RCaT are included in the
analysis. Here, the RCaT vector is completely aligned with the EW(CaT) vector
suggesting that the quantity is strongly dependent on this variable itself and
is less affected by the EW(H$\beta$). On the other hand, the orientation of
the RFeII vector suggests that the quantity is affected by both EW(Fe ii) and
EW(H$\beta$). The corresponding scree plots for the two cases highlight the
importance of the noise introduced in the PCA due to the presence of RFeII and
RCaT. In the case where these variables were used, the dataset is organised
such that the first two principal components are almost identical (32.8% for PC1 and
31.7% for PC2). This gives a false impression that the dataset is driven by a
2D plane rather than a line. A similar aspect of the optical main sequence
being represented as a line or a plane was explored in Wildy et al. (2019). On
the other hand, when these two variables are removed and the PCA module is re-
run, we see that the dataset is dominated by the variance along the first
principal component (40.6%) and the second principal component becomes less
important (22.2%). Another effect of the removal of redundant variables is the
emergence of other quantities, e.g. EW(O i $\lambda 8446$).
These two tests further confirm that the results of the PCA are dependent on
the selection of the sample and the chosen properties (Kuraszkiewicz et al.,
2009).
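The redundancy check described above can be sketched numerically. The data below are synthetic stand-ins for the measured parameters (only the sample size of 58 sources is taken from the text): a near-duplicate column is detected through its correlation coefficient and dropped before a minimal eigendecomposition-based PCA.

```python
import numpy as np

# Synthetic stand-ins: 58 sources, a NIR luminosity, a nearly identical
# optical luminosity, and three further independent parameters.
rng = np.random.default_rng(0)
l_nir = rng.normal(size=58)
l_opt = l_nir + 0.05 * rng.normal(size=58)   # near-duplicate of l_nir
others = rng.normal(size=(58, 3))

# Step 1: detect the redundant pair via the correlation coefficient and
# keep only one of the two, mirroring the r = 0.950 test in the text.
r = np.corrcoef(l_opt, l_nir)[0, 1]
if r > 0.9:
    params = np.column_stack([l_nir, others])
else:
    params = np.column_stack([l_opt, l_nir, others])

# Step 2: a minimal PCA via eigendecomposition of the correlation matrix;
# the normalized eigenvalues are the explained-variance fractions of the PCs.
z = (params - params.mean(axis=0)) / params.std(axis=0)
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(z, rowvar=False)))[::-1]
explained = eigvals / eigvals.sum()
```

The `explained` fractions play the role of the scree plots in Figures C.1-C.2.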
## Appendix D Principal Component Analysis. Low- and high-luminosity samples
Figure D.1: Same as Figure 5. LEFT: low-luminosity sample (log
$L_{\mathrm{opt}}$ $\leq$ 44.49 erg s$^{-1}$). RIGHT: high-luminosity sample (log
$L_{\mathrm{opt}}$ $>$ 44.49 erg s$^{-1}$).
Taking note from the previous runs and the heterogeneity present in our
sample, we now separate the full sample into two subsamples based on the
median of the optical luminosity (at 5100 Å) of the distribution, i.e. log
$L_{\mathrm{opt}}$ = 44.49 erg s$^{-1}$ (Sec. 4.3.3), which gives 29 sources in
each case. We then perform the PCA on each of the subsamples (low $L_{\rm
opt}$ and high $L_{\rm opt}$) and illustrate the results in the left and right
columns of Figure D.1, respectively.
### D.1 Low-luminosity subsample
For the low-luminosity sample, the sources belong to the Persson (19/25) and
Marinello (10/10) samples, and the Marinello sample is almost enclosed within
the Persson sample in the factor-map. The corresponding scree plot shows the
dominance of the primary and secondary principal components (41.1% and 30.1%),
suggesting that the sample seems to be driven majorly by the combination of
the two components that can explain 71.2% of the variance in the dataset.
Accounting for the subsequent two principal components, 90.6% of the total
variation in the dataset can be explained in this case.
First principal component: Going back to the factor-map, we find that the
vectors corresponding to the FWHMs of H$\beta$ and O i $\lambda 8446$ and the
EWHβ are co-aligned, with the FWHM vectors having almost similar magnitudes. These
two FWHM vectors are also the major contributors to the variance along the
primary principal component (see third panel on the middle column of Figure
5). For the primary principal component, the EW of O i $\lambda 8446$ follows
after these two FWHMs, which then is followed by the NIR luminosity.
Second principal component: The factor-map highlights the prevalence of the
EWCaT, followed by the FWHMCaT and the EWFeII. This trend is also seen in the
contributions to the second principal component and supports our original
conclusion that the two species, Fe ii and CaT are similar in terms of their
excitation and line emissivities (see fourth panel on the middle of Figure 5).
We expect that the FWHMFeII would show behaviour similar to that of FWHMCaT
(as shown in Marinello et al., 2016, where the authors show an almost perfect
correlation between the Fe ii emission at 1 $\mu$m and CaT) strengthening the
inferences from the photoionization modelling (Panda et al., 2020a; Panda,
2020). This needs to be tested with a larger, higher-S/N and more complete
sample in the future.
Third and fourth principal components: The third and fourth principal
components further contribute to 19.4% of the total variance in the dataset.
The third PC is singularly dominated by the EWHβ with a minor contribution
from EWCaT and FWHMHβ. Similarly, the fourth PC is mainly driven by the
FWHMCaT with only a minor contribution from the EW of O i $\lambda 8446$.
### D.2 High-luminosity subsample
For the high-luminosity sample, the sources comprise part of the Persson
sample (6/25), one Marinello source (PHL1092), and all of Martinez-Aldama’s
sources. The 6
sources from Persson (Mrk304, Mrk509, IZw1, Mrk478, VIIZw118 and 3C273) and
PHL1092, lie around the 95% confidence limit of the Martinez-Aldama
sample shown with the ellipses on the factor-map (top-right panel in Figure
5). This points towards the homogeneity in the subsample as opposed to the
earlier scenario when all the sources were bunched together. The corresponding
scree plot shows the dominance of the primary and secondary principal
components (42.7% and 22.5%), suggesting that the high-luminosity sample
behaves similarly to the low-luminosity case. As in the low-luminosity
case, accounting for the subsequent two principal components (PC3 and PC4),
91.4% of the total variation in the high-luminosity dataset can be explained.
First principal component: Compared to the low-luminosity case, the FWHM of O
i $\lambda 8446$ remains the dominant driver of the primary PC, followed by
the EWFeII, FWHMHβ and FWHMCaT. The EWFeII dominates in the
negative space of the PC1.
Second principal component: The primary contributor to this PC is still EWCaT,
but in contrast to the corresponding PC for the low-luminosity case, the
EWFeII falls below the significance threshold here. Other
significant contributors are the EWHβ followed by the FWHMCaT.
Third and fourth principal components: The third and fourth principal
components further contribute to 26.2% (earlier this was 19.4% for the low-
luminosity case) of the total variance in the dataset. The third PC is
dominated by the EWOI with a contribution from EWHβ, whereas the fourth PC is
singularly driven by the NIR luminosity.
### D.3 Correlations between the principal eigenvectors and observed/derived
parameters for the subsamples
As shown in Figure D.2, for the PC1 there are significant positive correlations for both
the subsamples, especially, with respect to FWHMHβ (low-luminosity:
$\rho$=0.669, $p$=7.37$\times 10^{-5}$; high-luminosity: $\rho$=0.76,
$p$=1.73$\times 10^{-6}$) and FWHMOI (low-luminosity: $\rho$=0.728,
$p$=7.65$\times 10^{-6}$; high-luminosity: $\rho$=0.884, $p$=2.1$\times
10^{-10}$). This is highlighted by the strong correlation between the PC1 and
the black hole mass, shown in Figure D.3, since the
FWHMHβ is incorporated in the black hole mass estimate. A strong positive
correlation is obtained for FWHMCaT ($\rho$=0.85, $p$=3.94$\times 10^{-8}$)
for the high luminosity subsample. The EWHβ correlation with PC1 behaves
differently for the low-luminosity ($\rho$=0.736, $p$=5.46$\times 10^{-6}$)
and the high-luminosity ($\rho$=-0.694, $p$=2.93$\times 10^{-5}$) samples. The
low-luminosity sample follows the trend of the full sample in this case. For
EWOI, significant correlation is noted only for the low-luminosity case
($\rho$=0.871, $p$=8.11$\times 10^{-10}$). For EWFeII, significant anti-
correlation is noted only for the high-luminosity case ($\rho$=-0.681,
$p$=4.73$\times 10^{-5}$). For PC2, we have two significant correlations for
the low-luminosity case - for EWCaT ($\rho$=0.855, $p$=3.57$\times 10^{-9}$)
and EWFeII ($\rho$=0.66, $p$=9.88$\times 10^{-5}$). Additionally, there is a
correlation obtained for the EWCaT ($\rho$=0.86, $p$=2.3$\times 10^{-9}$) for
the high-luminosity case. For PC3, a significant anti-correlation is observed
for the low-luminosity case’s EWHβ ($\rho$=-0.743, $p$=3.87$\times 10^{-6}$)
and a significant correlation for the high luminosity case’s EWOI
($\rho$=0.83, $p$=2.57$\times 10^{-8}$). There is a single significant
(anti-)correlation observed for PC4, i.e. versus FWHMCaT ($\rho$=-0.699,
$p$=6.02$\times 10^{-4}$).
As shown in Figure D.3, the strongest and only correlation in the subsamples for the PC1
is with respect to the black hole mass - for the low-luminosity sample, the
correlation is relatively stronger ($\rho$=0.701, p=2.24$\times 10^{-5}$)
compared to the high-luminosity sample ($\rho$=0.662, $p$=9.16$\times
10^{-5}$). For PC2, significant correlations are obtained only for RFeII and
RCaT. For the low-luminosity sample, PC2 correlates significantly with RFeII
($\rho$=0.698, $p$=2.6$\times 10^{-5}$) and with RCaT ($\rho$=0.65,
$p$=1.34$\times 10^{-4}$), while for the high-luminosity sample, PC2
correlates with RFeII ($\rho$=0.669, $p$=7.26$\times 10^{-5}$) and with RCaT
($\rho$=0.838, $p$=1.46$\times 10^{-8}$). This further
corroborates the strong connection between these two parameters from our past
results obtained in Paper-1 and those obtained from the PCA analysis of the
full sample in this paper. There are no significant correlations with respect
to PC3. The only significant (anti-)correlation with respect to PC4 is
obtained for $L_{\mathrm{Edd}}$ in the high-luminosity sample ($\rho$=-0.693,
$p$=3.13$\times 10^{-5}$).
Figure D.2: Correlation matrix showing dependence of the first four PCA
vectors’ loadings versus the physical parameters (observed) for our sample.
The sample is divided into low luminosity (green dots) and high luminosity
(blue dots) subsamples based on the median value of the sample’s optical
luminosity distribution, i.e. at 44.49 erg s$^{-1}$ (see Appendix D). The full
sample is shown in red dots. The Spearman’s rank correlation coefficient
($\rho$) and the $p-$value are reported for the correlations whenever
$p$-value $<$ 0.001. The OLS fit for each sample is shown as a dashed line in
its respective color. Figure D.3: Correlation matrix showing
dependence of the first four PCA vectors’ loadings versus the physical
parameters (derived) for our sample. The colors for the data-points are
identical to that shown previously in Figure D.2. The Spearman’s rank
correlation coefficients ($\rho$) and the $p-$values are reported for the
correlations whenever $p-$value $<$ 0.001. The OLS fit for each sample is
shown as a dashed line in its respective color.
# A Spiking Central Pattern Generator for the control of a simulated lamprey
robot running on SpiNNaker and Loihi neuromorphic boards
Emmanouil Angelidis
Department of Neuromorphic Computing
fortiss - Research Institute of the Free State of Bavaria
Munich, Germany
<EMAIL_ADDRESS>
&Emanuel Buchholz
Department of Neuromorphic Computing
fortiss - Research Institute of the Free State of Bavaria
Munich, Germany
&Jonathan Patrick Arreguit O’Neil
Biorobotics Laboratory, Institute of Bioengineering
École Polytechnique Fédérale de Lausanne
Lausanne, Switzerland
&Alexis Rougé
Department of Neuromorphic Computing
fortiss - Research Institute of the Free State of Bavaria
Munich, Germany
&Terrence Stewart
Computational Neuroscience Research Group
University of Waterloo Centre for Theoretical Neuroscience
Waterloo, Canada
&Axel von Arnim
Department of Neuromorphic Computing
fortiss - Research Institute of the Free State of Bavaria
Munich, Germany
&Alois Knoll
Chair of Robotics, Artificial Intelligence and Embedded Systems
Technical University of Munich
Munich, Germany
&Auke Ijspeert
Biorobotics Laboratory, Institute of Bioengineering
École Polytechnique Fédérale de Lausanne
Lausanne, Switzerland
Also Chair of Robotics, Artificial Intelligence and Embedded Systems,
Technical University of Munich
###### Abstract
Central Pattern Generator (CPG) models have long been used both to investigate
the neural mechanisms that underlie animal locomotion and as a tool
for robotic research. In this work we propose a spiking CPG neural network and
its implementation on neuromorphic hardware as a means to control a simulated
lamprey model. To construct our CPG model, we employ the naturally emerging
dynamical systems that arise through the use of recurrent neural populations
in the Neural Engineering Framework (NEF). We define the mathematical
formulation behind our model, which consists of a system of coupled abstract
oscillators modulated by high-level signals, capable of producing a variety of
output gaits. We show that with this mathematical formulation of the Central
Pattern Generator model, the model can be turned into a Spiking Neural Network
(SNN) that can be easily simulated with Nengo, an SNN simulator. The spiking
CPG model is then used to produce the swimming gaits of a simulated lamprey
robot model in various scenarios. We show that by modifying the input to the
network, which can be provided by sensory information, the robot can be
controlled dynamically in direction and pace. The proposed methodology can be
generalized to other types of CPGs suitable for both engineering applications
and scientific research. We test our system on two neuromorphic platforms,
SpiNNaker and Loihi. Finally, we show that this category of spiking algorithms
shows a promising potential to exploit the theoretical advantages of
neuromorphic hardware in terms of energy efficiency and computational speed.
Keywords: Neurorobotics, Central Pattern Generators, Spiking Neural
Networks, Nengo, Neurorobotics Platform, SpiNNaker, Intel Loihi, Neuromorphic
Computing, Robotic Control, Hopf Oscillators
## 1 Introduction
Our work can be placed in the emerging field of Neurorobotics, a field that
combines knowledge acquired from different scientific fields and applies it
to the study and the control of animal models and robots. Within the context
of Neurorobotics, an artificial brain, either biologically or AI inspired, is
interacting with a robot model in physical or virtual experiments [1]. This
enables the testing of hypotheses on virtual embodiment, a concept which
encompasses the idea that a brain is not a system isolated from the outer
world, but one that constantly receives and processes stimuli and acts
according to them. Neurorobotics problems can fall into various categories,
for example robotic control based on cerebellar models [2, 3], dynamic vision
systems based on event-based cameras [4, 5], visual perception [6], motor
control and locomotion tasks [7, 8] and action selection [9].
A major limitation of existing neuronal models that are often used as
artificial brains is that they are both energy and computationally demanding,
since they are usually running on conventional CPUs. Even though spiking
neural network (SNN) models are computationally sparse by definition [10],
this characteristic is not taken into account when running them on
conventional hardware. Thus specialized hardware that is optimized to run
these models has been researched and developed, among others Intel Loihi [11],
IBM TrueNorth [12], SpiNNaker [13] and BrainScale [14], the latter two
developed within the context of the Human Brain Project. Our work makes use of
a SpiNNaker and a Loihi chip that run the spiking neural network that we
developed.
Many fields of robotics have taken inspiration from biological systems, and
particularly from the locomotor system. Locomotion of animals is hypothesized
to be controlled to a large extent by functional units in the central nervous
system (CNS) called Central Pattern Generators (CPGs) [15, 16], which
are usually described as neuronal systems that create rhythmic activity
patterns with minimal sensory feedback. In vertebrates, these locomotor
circuits are located mostly in the spinal cord, and receive stimulation from
the brainstem and other areas of the brain such as the motor cortex, the
cerebellum and the basal ganglia [17]. One interesting finding is that these
networks are capable of producing rhythmic output in the absence of feedback
with minimal stimulation, even if the spinal cord has been completely isolated
from the body [18]. The investigation of CPG based locomotion control is
motivated by the insight that it can give into animal locomotion systems and by
the fact that these kind of bio-inspired controllers present good capabilities
in terms of autonomy and modulation [19]. So far the CPG approach has been
largely validated for the locomotion of snake-like robots [20, 21, 22, 23]. On
an implementation level there exist several CPG models which are formulated as
SNNs, and these spiking CPGs (SCPGs) are often running on specialized or
generic neuromorphic hardware. It was shown that such SCPGs running on
neuromorphic hardware such as FPGAs, SpiNNaker or VLSI provide a robust
and efficient way to control a complex movement [24] including sensory
feedback, namely for bipedal walking [25, 26], for the movement of an arm [27,
28] or to control a six-legged robot [29, 30].
The mathematical modelling of CPGs can be categorized into roughly 3
approaches. The first treats the neural circuitry to the abstraction level of
biophysical models and incorporates information about ion pumps and ion
channels located in the neural cell membrane and their influence on membrane
potentials and the generation of action potentials, frequently modelled by
Hodgkin-Huxley neuron models. The second approach uses simpler leaky
integrate-and-fire neurons as the basis of computation, abstracting away low-
level biological information. The third category which is also our starting
point is deprived of lower level biological information and treats CPGs as
systems of nonlinear coupled oscillators, where one oscillator models the
activity of a whole oscillatory neural network at an abstract level. Although
conceptually the latter is a phenomenological approach based on the
observation of the emerging locomotor patterns, it still offers many
explanations of the underlying mechanisms of rhythmic pattern generation. One
of the first successful attempts to use a high-level mathematical formulation
of a CPG and model it as a dynamical system which can be simulated with
spiking neurons was the work of Eliasmith and Anderson [31]. Many of the
described models are accompanied by neuromechanical simulations that close
the loop between body and brain. For an extensive review on CPGs in robotics
and biology we refer to [16].
In this article, we present a high-level SCPG for a lamprey robot that was
trained to replicate the dynamics of a system of coupled Hopf-like
oscillators. This model is able to produce a set of travelling waves with
high-level modulation which correspond to a continuous space of swimming
gaits. It can run directly on the neuromorphic SpiNNaker and Loihi boards. It
builds on the core Neurorobotics idea of interaction between a virtual robot
or animal agent and a virtual brain that runs on neuromorphic hardware and
achieves a complex locomotion task. In Section 2, we present the underlying
mathematical formulation of the system of coupled Hopf-like oscillators as a
first step of the modeling, in Section 2.3 we present the spiking version of
the CPG and its performance on the two boards. We provide simulations of both
the isolated spiking CPG model as well as neuromechanical simulations under
different scenarios in Section 3. We then present our future work (Section
4.1) and a conclusion (Section 4).
## 2 Materials and Methods
### 2.1 Overall model architecture
Locomotor CPGs are modulated by higher level control centers of the brain with
low-dimensional control signals, a property which makes CPG models good
candidates for robotic control problems. This property gives CPGs a role
similar to that of a feed-forward controller inside a control framework:
producing oscillatory signals that are modulated by external stimulation. To
test whether our CPG model can successfully control a lamprey robot we
implemented a neuromechanical simulation for which we employed an accurate 3D
model of a lamprey robot that is composed of nine body parts similar to the
Amphibot robot in [32]. These parts are bound together by eight joints that
have one degree of freedom: the rotation around the vertical axis. To produce
the swimming patterns, the angular positions of these joints oscillate with
amplitudes, frequencies and phases prescribed by the CPG model. The complete
controller architecture can then be divided in three components (see Figure
1):
1. 1.
the mesencephalic locomotor region (MLR), that emits high level signals on
each side of the spinal cord: the drives;
2. 2.
the central pattern generator (CPG), that generates travelling waves for each
joint corresponding to the received drives;
3. 3.
the proportional derivative (PD) controller, that controls the torques applied
to the joints to reach the time-varying target angle positions.
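Component 3 above admits a very small sketch. The control law below is the textbook PD form; the gains are illustrative assumptions, not the values used in the actual simulation:

```python
# Minimal PD control law for one joint: the torque drives the joint angle
# toward the time-varying target prescribed by the CPG, damped by the
# angular velocity. The gains kp and kd are illustrative placeholders.
def pd_torque(target_angle, angle, angle_rate, kp=5.0, kd=0.1):
    return kp * (target_angle - angle) - kd * angle_rate
```

At every simulation step the CPG output $\Psi_{i}$ is passed as `target_angle`, so the joint tracks the travelling wave.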
### 2.2 Oscillatory signals generation based on coupled abstract Hopf-like
oscillators
In order to explain the synchronization phenomena between the different
oscillatory centers in the vertebrate spinal cord, Ijspeert [7] proposed a
model of nonlinear coupled oscillators, and used this model to control a
salamander robot. This model proposes a coupling between different oscillatory
centers based on coupling weights that dictate the phase difference and
frequency of the oscillatory centers. The oscillators can be chained either in
a single or double chain. In the double chain model, the one that we employ
here, the activity of the one side of the spinal cord is in antiphase with the
activity of the other side, a phenomenon which is also observed in
measurements of muscle activity of lampreys. Providing different stimuli,
coming from the high-level control centers, between the oscillators found on
each side can lead to a shift of the overall oscillatory patterns, which when
applied to a robot model induces turning due to the change of the overall
curvature of the robot. This dynamical system can be described by the
following differential equations which describe a system of phase oscillators
with controlled amplitude. The oscillators are described first in phase space,
which gives an intuition of how the coupling is induced, and then rewritten in
Cartesian space which as we explain is a form suitable for modelling with an
SNN:
$\dot{\theta}_{i}=2\pi\nu_{i}+\sum_{j}r_{j}w_{i,j}\sin\left(\theta_{j}-\theta_{i}-\Phi_{i,j}\right)$
(1)
$\ddot{r_{i}}=a_{i}\left(\frac{a_{i}}{4}\left(R_{i}-r_{i}\right)-\dot{r_{i}}\right)$
(2) $x_{i}=r_{i}\left(1+\cos\theta_{i}\right)$ (3)
$\Psi_{i}=\alpha\left(x_{i,right}-x_{i,left}\right)$ (4)
In this system $\theta_{i}$ and $\nu_{i}$ are the phase and the preferred
frequency of the $i$-th oscillator, $r_{i}$ is the amplitude, $x_{i}$ is the
output of the $i$-th oscillator which represents motoneuron activity, and
$\Psi_{i}$ is the output of the model that is applied to the robot and
combines the activity of the oscillators of left and the right side of the
double chained model. From equation 1 one can observe that the first
derivative with respect to time of the phase of each oscillator, is modulated
by the coupling weights $w_{ij}$ and the amplitude of the oscillators it is
connected to. It is interesting to note that when the phase differences
$\Phi_{ij}$ are reached between the coupled oscillators the term $\theta_{j}$
\- $\theta_{i}$ \- $\Phi_{ij}$ becomes zero, and thus the oscillator
oscillates with the preferred frequency $2\pi\nu_{i}$. This is indeed the case
when the steady state is reached, which takes place when certain convergence
criteria are met. Equation 2 describes how the amplitude of each oscillator
converges to the preferred amplitude $R_{i}$, with parameter $a_{i}$ dictating
the speed of convergence. This ensures smooth transitions of the amplitude
when abrupt changes of the high-level drive occur. Even though this system
fully describes a CPG in phase space, it is not suitable for approximation
with an SNN, as integrating equation 1 in time leads to a constantly
increasing phase. This constantly increasing value quickly saturates the
representational capabilities of neural populations, as they excel in
approximating values within a subset of a larger space. The solution for this
problem is to reformulate the problem in Cartesian space as follows [33]:
$\dot{x_{i}}=a({R_{i}}^{2}-{r_{i}}^{2})x_{i}-\overline{\omega}_{i}y_{i}$ (5)
$\dot{y_{i}}=a({R_{i}}^{2}-{r_{i}}^{2})y_{i}+\overline{\omega}_{i}x_{i}$ (6)
$\overline{\omega}_{i}=\omega_{i}+\sum_{j}\frac{w_{ij}}{r_{i}}[(x_{i}y_{j}-x_{j}y_{i})\cos{\Phi_{i,j}}-(x_{i}x_{j}+y_{i}y_{j})\sin{\Phi_{i,j}}]$
(7)
where $x_{i}$, $y_{i}$ denote the x and y-coordinates of a point in 2-D space
moving in a circle through time, with frequency controlled by equation 7. The
parameter $a$ dictates the speed of convergence of the amplitude to the steady
state, and $r_{i}$ is the norm of the $[x_{i},y_{i}]$ vector. This formulation is close
to the standard form of coupled Hopf oscillators with coupling to other
oscillators. This equation has the advantage that the x,y values stay within a
limit cycle, whose radius is dictated by the amplitude of the oscillation,
solving the problem of continuously increasing phase when one attempts to use
the phase representation.
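As a sanity check on equations 5-6, a plain Euler integration of a single uncoupled oscillator (so the coupling sum vanishes and $\overline{\omega}_{i}=\omega_{i}$) shows the state converging to the limit cycle of radius $R$ from any small initial kick. All parameter values here are illustrative, not the paper's:

```python
import numpy as np

# Euler integration of the Cartesian Hopf oscillator (Eqs. 5-6) for a single
# uncoupled oscillator. Parameter values are illustrative placeholders.
a, R, omega = 10.0, 1.0, 2.0 * np.pi   # convergence rate, amplitude, 1 Hz
dt, T = 1e-3, 3.0
x, y = 0.1, 0.0                        # small kick away from the origin
for _ in range(int(T / dt)):
    r2 = x * x + y * y
    dx = a * (R**2 - r2) * x - omega * y
    dy = a * (R**2 - r2) * y + omega * x
    x, y = x + dt * dx, y + dt * dy
radius = np.hypot(x, y)                # settles within ~1% of R
```

This bounded limit-cycle behaviour is precisely the property that makes the Cartesian form representable by a neural population, in contrast to the unbounded phase variable.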
To incorporate the drive corresponding to the high-level stimulation we use
two piece-wise linear functions, which saturate when the stimulation is
outside of a certain range. These two functions control the target frequency
and the target amplitude of each oscillator according to the relations:
$\omega_{i}(d)=\begin{cases}c_{\omega,1}d+c_{\omega,0},&\text{if }d_{low}\leq
d\leq d_{high}\\\ 0,&\text{otherwise}\end{cases}$ (8)
$R_{i}(d)=\begin{cases}c_{R,1}d+c_{R,0},&\text{if }d_{low}\leq d\leq
d_{high}\\\ 0,&\text{otherwise}\end{cases}$ (9)
These two equations replicate biological observations that the frequency and
amplitude of muscle contraction increase together with increased stimulation,
hence leading to faster locomotion. They complement the CPG with high-level
modulation, and with them we have a complete mathematical formulation of the
control framework, which we implement in an SNN.
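Equations 8-9 are the same saturating map with different coefficients, so they can be shared as one function. The coefficients and drive values below are placeholders for illustration, not fitted values:

```python
# Piece-wise linear drive-to-parameter map of Eqs. 8-9: linear inside the
# drive range [d_low, d_high], zero outside it (the oscillator shuts down).
# All coefficients below are illustrative placeholders.
def drive_map(d, c1, c0, d_low, d_high):
    return c1 * d + c0 if d_low <= d <= d_high else 0.0

omega_i = drive_map(3.0, c1=2.0, c0=0.5, d_low=1.0, d_high=5.0)  # target frequency
R_i = drive_map(3.0, c1=0.1, c0=0.2, d_low=1.0, d_high=5.0)      # target amplitude
```

A drive outside the admissible range returns zero for both parameters, which silences the oscillator, while increasing the drive within the range raises frequency and amplitude together, reproducing the faster-locomotion behaviour described above.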
### 2.3 Implementation of the coupled oscillators system in a spiking network
#### 2.3.1 Architecture of the spiking CPG neural network
The model that we introduced in the previous section is a mathematical
formulation of a system of coupled abstract Hopf-like oscillators, modulated
in frequency and amplitude by high-level stimulation. We show that such a
system can be easily simulated with an SNN simulator. To do so we designed a
modular SNN architecture where one oscillatory center is represented by one
population of spiking neurons and computes the equations described in (5 \-
7). This population at the same time encodes equation 9. For the coupling
between the neural oscillators we introduce an intermediate population which
receives the x,y values from neighbor oscillators, and computes the coupling
term of equation 7. This intermediate population facilitates the exchange of
data between the neural oscillators, and its presence is dictated purely by
the framework that we chose to implement the SNN. The overall architecture of
the model can be seen in Figure 2. At the same time each of the oscillatory
centers receives input from the high-level drive through equations 8-9.
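The coupling computation carried out by the intermediate populations can be written out directly. The sketch below transcribes the coupling sum of equation 7 in Cartesian form, with $x_{i}x_{j}+y_{i}y_{j}$ as the Cartesian counterpart of $\cos(\theta_{j}-\theta_{i})$; the numerical values in the usage line are illustrative:

```python
import numpy as np

# Instantaneous frequency of oscillator i: its preferred frequency omega[i]
# plus the phase-coupling corrections from the neighbours it is connected to.
def coupled_frequency(i, omega, x, y, w, phi):
    r_i = np.hypot(x[i], y[i])
    correction = sum(
        (w[i][j] / r_i) * ((x[i] * y[j] - x[j] * y[i]) * np.cos(phi[i][j])
                           - (x[i] * x[j] + y[i] * y[j]) * np.sin(phi[i][j]))
        for j in range(len(x)) if w[i][j] != 0.0
    )
    return omega[i] + correction

# two oscillators already at the target phase lag of pi/2: correction is zero
f0 = coupled_frequency(0, [2.0, 2.0], [1.0, 0.0], [0.0, 1.0],
                       [[0.0, 1.0], [1.0, 0.0]],
                       [[0.0, np.pi / 2], [-np.pi / 2, 0.0]])
```

With all weights zero the correction vanishes, and once the target phase lags $\Phi_{ij}$ are reached the correction is likewise zero, so each oscillator runs at its preferred frequency, matching the steady-state behaviour described for equation 1.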
#### 2.3.2 Choice of the neural simulator
In order to replicate the system of modulated oscillators with a spiking
neural network the choice of a framework that can perform such numerical
computations was necessary. A characteristic shared by most neural simulators
is that they allow the simulation of simple leaky integrate-and-fire neuron
models (LIF). According to this model [34] the neuron spikes when its membrane
potential reaches a certain threshold. Each neuron is excited by the neurons
that are connected to it either in an excitatory or inhibitory fashion,
increasing or decreasing the membrane potential respectively. After a period
of inactivity the membrane potential leaks back to a base value. A
neuron is usually connected with multiple other neurons via junctions called
synapses. The information flow from one neuron to the other is dictated,
among other factors, by the level of neurotransmitters present in the synapse,
whose release is regulated by dedicated proteins. The overall strength of the
connection between neurons is dictated by the synaptic weight. From a
computational perspective, the adaptation of the synaptic weights through
synaptic plasticity mechanisms is the process which allows these networks of
neurons to learn a representation. Synaptic plasticity mechanisms can be
either biologically accurate, i.e. STDP [35], or variations of some machine
learning inspired approach such as the ones making use of backpropagation
algorithms [36], or biologically plausible mechanisms such as the e-prop
algorithm [37]. Most computational models of spiking neurons employ the simple
leaky integrate-and-fire neuron model. We use these types of neurons for our
study as well. Several simulation platforms were suitable for the task of
simulating such neurons, but Nengo [38] was chosen for two reasons. First, it
has built-in methods for generating neural networks that approximate
differential equations. This approach is described in section 2.3.3. Second,
it can generate versions of these networks that can run on dedicated
neuromorphic hardware, as we discuss in section 2.5.
#### 2.3.3 Nengo and the Neural Engineering Framework
In this section we give an overview of the Neural Engineering Framework (NEF),
which is a general methodology for creating neural networks that approximate
differential equations [39]. Importantly, it generalizes to any neuron model,
including LIF spiking neurons, and takes into account the timing of synapses.
To understand the NEF, we start with the standard observation that a normal
feed-forward neural network is a function approximator. That is, if we have
some input $x$ and some output $y$, we can train a neural network to produce the
desired output $y=f(x)$. While this training can be done using any neural
network learning algorithm, here we just use the simple method of having a
network with a single hidden layer of LIF neurons (no non-linearities at the
input or output), randomly generate the first layer of weights, and use least-
squares minimization to solve for the second layer of weights. This method
works for a large range of functions and is robust to spiking neuron models
[39].
However, to generate the CPG model described here, we need networks that
approximate differential equations. Here, the NEF applies the following
method. Suppose we want the differential equation $\dot{x}=f(x,u)$. We build a
feed-forward network where the inputs are $x$ and $u$ and the output
approximates $\tau f(x,u)+x$. We introduce the variable $\tau$ here, which
will be used as the time constant of a simple exponential low-pass filter
synapse that will connect the neurons. Now to generate the recurrent neural
network, we simply connect the output of that network back to itself, and
scale the $u$ input by $\tau$. The resulting network will approximate
$\dot{x}=f(x,u)$. See [39] for a full proof, which is based on the observation
that the Laplace transform of the low-pass filter is $F(s)=1/(1+s\tau)$.
Similar transformations can be done for more complex synaptic filters, but we
do not use those here.
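This construction can be checked numerically without any neurons: if the represented value $x$ is the output of a first-order low-pass synapse, $\tau\dot{x}=u-x$, and its input is $u=\tau f(x)+x$, the synapse equation reduces exactly to $\dot{x}=f(x)$. A minimal sketch with the toy choice $f(x)=-x$, whose analytic solution from $x(0)=1$ is $e^{-t}$:

```python
import math

tau = 0.1                        # synaptic time constant
dt = 1e-4                        # Euler step
f = lambda x: -x                 # target dynamics: x' = f(x) = -x
g = lambda x: tau * f(x) + x     # what the recurrent connection must compute

# First-order low-pass synapse: tau * x' = (input - x), with input = g(x).
x = 1.0
for _ in range(int(1.0 / dt)):   # integrate for 1 simulated second
    x += dt * (g(x) - x) / tau

err = abs(x - math.exp(-1.0))    # compare to the analytic solution exp(-t) at t = 1
```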
As an example of this process, Figure 4 shows an NEF model of a single Hopf-
style oscillator. This was formed by creating a feed-forward single-hidden-
layer neural network with three inputs ($x$, $y$, and $\overline{\omega}$) and
two outputs ($\tau(a({R}^{2}-{r}^{2})x-\overline{\omega}y)+x$ and
$\tau(a({R}^{2}-{r}^{2})y+\overline{\omega}x)+y$). The weights for this
network were found by randomly sampling the inputs ($x$, $y$, and
$\overline{\omega}$), computing the desired outputs for each input, and then
training the network given this data. Afterwards, the resulting input and
output connection weights were multiplied together to create the recurrent
neural network shown.
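For reference, the ideal Hopf-style dynamics that the spiking network approximates can be integrated directly. The parameter values below are illustrative placeholders, not taken from the model:

```python
import math

a, R, omega = 5.0, 1.0, 2.0 * math.pi   # gain, target radius, angular frequency (placeholders)
dt = 1e-3
x, y = 0.1, 0.0                          # start well inside the limit cycle

for _ in range(int(10.0 / dt)):          # 10 simulated seconds of Euler integration
    r2 = x * x + y * y
    dx = a * (R * R - r2) * x - omega * y
    dy = a * (R * R - r2) * y + omega * x
    x += dt * dx
    y += dt * dy

radius = math.sqrt(x * x + y * y)        # should have converged to ~R
```

Regardless of the initial condition, the trajectory settles onto a circle of radius $R$; this is the limit-cycle behavior that the NEF network reproduces.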
The Nengo software toolkit [38], which is the software implementation of the
more general Neural Engineering Framework, provides high-level tools for
creating such networks for a variety of neuron models. Crucially, it also
provides facilities for linking networks together, so that large systems can
be built out of these components. Furthermore, the resulting systems can be
automatically compiled to run on CPUs, GPUs, or a variety of neuromorphic
hardware.
#### 2.3.4 The Nengo model
Based on the third principle of the NEF, we implement the oscillators in our model through the dynamical systems that emerge from recurrently connected neural populations. It is worth noting that such recurrent populations can implement various dynamical systems, such as integrators, oscillators, and even chaotic systems such as the Lorenz attractor. The network computes each function from equations (5-9) according to the NEF principles. By doing so, the decoded
spiking activity of each neural population can be seen as a real-valued vector
with the appropriate dimensions. For the populations that encode the
oscillators (depicted with $\theta_{i}$ in Figure 2) this 4-dimensional vector
represents the values $[\dot{x},\dot{y},\omega,R]$. For the intermediate
neuron populations that compute the coupling part of equation 7 the
4-dimensional vector represented is
$[\dot{x_{i}},\dot{y_{i}},\dot{x_{j}},\dot{y_{j}}]$. The high-level drive is approximated by the decoded activity of a neuronal population dedicated to receiving the drive and translating it into neural activity. A dedicated non-spiking readout node can be used to read the decoded output of the system, which corresponds to the x-coordinate of the Hopf-like oscillator. The
complete system with input and output for 4 oscillatory centers can be seen in
Figure 3. As will be shown, the system can scale to a larger number of
oscillatory centers but the scaling can be limited by the capabilities of the
neuromorphic hardware that it is running on.
As mentioned in section 2.3.3, the Neural Engineering Framework can be used to
approximate any linear or non-linear function with spiking activity by
computing the connection weights between the different components of a spiking
neural network, acting as a neural compiler. This alleviates the need for
explicit training of the SNN, as in the NEF the information that needs to be
provided is limited to the properties of the neurons (e.g. membrane threshold
potential, neuron types), the values that the neural populations need to
represent and the functions that they compute, and the NEF solves for the
connection weights that will compute the desired functions. This enables specifying the high-level mathematical functions that are encoded by the SNN, and it works both for feed-forward and for recurrent connections. The latter is particularly relevant for our work, as it enables dynamical systems such as the oscillator system that we employ to emerge from the neuronal activity. For the NEF to compute the connection weights, a random set of sampling points is drawn during the initialization phase of the simulation to serve as inputs to the function to be approximated. These points are drawn from the input space that the neuronal population represents, e.g. points in the interval [0,1] for a population that encodes 1-D values. They are then used to generate training data, by feeding them to the desired functions and collecting the outputs. Subsequently, a least-squares optimization computes the weights that best fit the decoded neuronal activity to the training data. For a more detailed technical overview of this method we refer the reader to [40].
#### 2.3.5 Perturbations and robustness of the CPG model
Animal CPGs have been documented to adapt to various perturbations (e.g. the external application of a force) by reacting smoothly and exhibiting stable
limit cycle behavior, i.e. recovering the gait patterns without losing
synchronization. Furthermore different degrees of stimulation of the
oscillatory centers on the spinal cord can lead to different gaits. Simple
asymmetrical stimulation between the right and left side drive of the spinal
cord can induce a shift of the gait patterns to the left or to the right, and
can induce turning. We show that these characteristics are exhibited by our
model under the following scenarios:
1. 1.
Perturbation of a single oscillatory center by external stimulation
2. 2.
Asymmetrical stimulation of the spinal cord from left to right side of the
spinal cord
These scenarios show the CPG model’s ability to quickly recover under external
perturbations as well as to modulate swimming gaits.
### 2.4 Neuromechanical simulation in the Neurorobotics Platform
To test the output and the high-level adaptation of the control signals we
performed a closed-loop neuromechanical simulation of our model with a robot
model as a body. The motivation behind simulating our model within a physical
simulation framework comes from the fact that neural circuits and control
algorithms cannot be separated from their natural habitat, the body. Only
within an embodied simulation can we test whether the system that we propose
can successfully control a robot. For such a full closed-loop robot-brain
interaction simulation we made use of a framework built exactly for this
purpose, the Neurorobotics Platform. The Neurorobotics Platform (NRP) is a
software simulator developed within the Human Brain Project [41] that enables
the synchronization and exchange of data between modelled brains and virtual
robots within a physical simulation environment. The Robot Operating System [42] is the middleware that enables communication between the different software components and is also supported by a multitude of physical robots. Within the NRP there is no need for an explicit synchronization
mechanism between the physical world and the modelled brain, as such a
mechanism is built into the framework. The physical simulation is provided by
Gazebo [43], which interfaces with multiple physics engines. It supports
directly many different brain simulators such as NEST [44], Nengo and
SpiNNaker, and through Nengo one can run models on Loihi. We used this
framework to connect the Nengo model presented in section 2.3.4 with the
lamprey robot (Figure 1).
To complement the simulation with a simplified fluid dynamics model, we
implemented a drag model, which computes the forces produced by the swimming motion, propelling the robot forward. The drag model is the one
presented in [45], and computes the forces applied on each robot link based on
the formulas:
$E_{i\parallel}=\lambda_{i\parallel}\upsilon_{i\parallel}^{2}$ (10)
$E_{i\bot}=\lambda_{i\bot}\upsilon_{i\bot}^{2}$ (11)
and the coefficients $\lambda$ can be computed by
$\lambda_{i\parallel}=\frac{1}{2}C_{i\parallel}S_{i}\rho$ (12)
$\lambda_{i\bot}=\frac{1}{2}C_{i\bot}S_{i}\rho$ (13)
where $\upsilon_{i\parallel}$ and $\upsilon_{i\bot}$ are the velocity
components of each link relative to the water in the parallel and
perpendicular directions. The parameter $\lambda$ depends on the fluid density
$\rho$ and the parameter $S_{i}$ is the surface of the link perpendicular to
the link movement. This drag model is only a simple approximation of the fluid
forces applied on the robot, but offers simplicity and computational speed
compared to the 3D Navier-Stokes equations.
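Equations (10)-(13) translate directly into code. The sketch below uses hypothetical coefficient values and treats the results as force magnitudes; the directions opposing the motion are applied separately:

```python
def drag_forces(v_par, v_perp, C_par, C_perp, S, rho=1000.0):
    """Per-link drag forces along and perpendicular to a link (eqs. 10-13).

    v_par, v_perp: link velocity components relative to the water
    C_par, C_perp: drag coefficients (hypothetical values in this sketch)
    S: link surface perpendicular to the movement; rho: fluid density
    """
    lam_par = 0.5 * C_par * S * rho    # eq. (12)
    lam_perp = 0.5 * C_perp * S * rho  # eq. (13)
    return lam_par * v_par ** 2, lam_perp * v_perp ** 2  # eqs. (10), (11)
```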
#### 2.4.1 The neuromechanical simulation scenarios
We tested the arising swimming gaits under different simulation scenarios.
Firstly, we show that the spiking CPG can produce swimming even with a low number of neurons. Secondly, we show unperturbed swimming with no high-level
modulation. Thirdly, we present modulation of the swimming by the high-level
drive with control of direction and speed. To show the ability of the
controller to incorporate sensory feedback from the simulation dynamically we
add a water speed barrier to the simulation. This speed barrier forces the
robot to move to the side without adaptation of the high-level drive, but with
modulation the robot manages to overcome it. The water speed barrier is
implemented in the form of a global fluid velocity vector opposite to the
forward direction. A summary of the scenarios:
1. 1.
Unperturbed swimming, effect of varying number of neurons per neural
population
2. 2.
Unperturbed swimming, no high-level modulation
3. 3.
Unperturbed swimming, control of the speed and direction of the robot
4. 4.
Presence of water speed barrier, no high-level modulation
5. 5.
Presence of water speed barrier, high-level modulation
The method that we used to modulate the high-level drive of the robot in the
presence of a speed barrier consists of a high-level feedback loop that
modulates the turning commands (i.e. the left-right asymmetry of drive
signals) towards a desired target angle (e.g. similar to a fish aiming to swim towards a particular far-away target). This is implemented through a linear minimization of the error between a target global angle around the z-axis of the robot’s head and the actual angle of the robot’s head around the z-axis. Thus, when the robot turns, e.g. to the left, the error between the target angle and the measured angle increases and the right drive increases linearly to compensate for the deviation from the target angle. The equations that we used for this strategy are:
$d_{right}=\begin{cases}d_{right0}+CF*abs(\vec{R}_{z,target}-\vec{R_{z}}),&\text{if
}R_{z}-R_{z,target}\leq 0\\\ d_{right0}&\text{otherwise}\end{cases}$ (14)
$d_{left}=\begin{cases}d_{left0}+CF*abs(\vec{R}_{z,target}-\vec{R_{z}}),&\text{if
}R_{z}-R_{z,target}\geq 0\\\ d_{left0}&\text{otherwise}\end{cases}$ (15)
where the left drive is increased when the error is positive, and the right drive when it is negative. $\vec{R}_{z,target}$ is the target rotation around the z-axis, $\vec{R}_{z}$ is the recorded rotation around the z-axis of the robot’s head, $CF$ is the correction factor that linearly multiplies the error, and $d_{right0}$ and $d_{left0}$ provide the baseline of the drive stimulation.
This simple error correction strategy proves to be enough to correct the
deviation of the robot from a target angle by modulating the CPG with the
high-level drive.
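Equations (14)-(15) amount to a small piecewise helper. The baseline drives and correction factor below are placeholders, not values used in the paper:

```python
def drives(Rz, Rz_target, d_right0=1.0, d_left0=1.0, CF=0.5):
    """Left/right tonic drives per equations (14)-(15): the side opposite
    the deviation receives a correction linear in the angular error."""
    err = abs(Rz_target - Rz)
    d_right = d_right0 + CF * err if Rz - Rz_target <= 0 else d_right0  # eq. (14)
    d_left = d_left0 + CF * err if Rz - Rz_target >= 0 else d_left0     # eq. (15)
    return d_right, d_left
```

For example, when $R_{z}-R_{z,target}$ is negative, only the right drive is raised, steering the robot back toward the target angle.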
### 2.5 Nengo on SpiNNaker-3 and Loihi boards
As stated in [46], the computational limitations for running spiking models on conventional CPUs originate in the von Neumann architecture.
Conventional computers are built and optimized to perform Boolean algebra
operations and arithmetic on the data stored in memory. Hence, this data needs
to be transferred back and forth between the memory and the CPUs, which can be
time consuming. Neuromorphic hardware on the other hand is specialized in
running spiking neural networks. The computation takes place in many small
calculators that have access to a small amount of local data. This strategy proves to be more time- and energy-efficient for neuron-oriented computations. For this reason, we tested our Nengo model on a SpiNNaker-3 [13]
and a Loihi board [11]. Because the SpiNNaker and Loihi boards connect directly to Nengo through a software interface, our model remained high-level but could still be run directly on the boards.
It should also be emphasized that, for efficiency reasons, the actual neuron models running on conventional CPUs, SpiNNaker-3, and Loihi are all slightly different. They can all implement Leaky Integrate-and-Fire neurons (and other
neuron models), but they all make slightly different approximations (e.g.
fixed-point rounding). This means that the optimal neural network connection
weights for these different hardware platforms will all be slightly different.
However, because we specify our model in Nengo using only the mathematical function to be approximated, Nengo can take the hardware details into account when solving for the connection weights, and the user does not have to modify their model to adjust for different hardware platforms.
That said, there are still some areas where the Nengo-SpiNNaker and Nengo-
Loihi interfaces have room for improvement. In particular, the software
support for automatically splitting a group of neurons to run across multiple
hardware cores is lacking, effectively giving an upper limit on the size of a
single group of neurons that is hardware-dependent. We also encountered
hardware limitations on the amount of data that could be probed (i.e.
recorded) during the running of the simulation, as discussed in Section 3.2.3.
## 3 Results
### 3.1 Running the isolated CPG model
The first test that we performed on the isolated (i.e. no time-varying external modulation) spinal cord model shows that our system can produce oscillations and traveling waves from random initial conditions, meaning that it exhibits limit cycle behavior. In such a scenario there is a clear periodic activation of the spiking neurons inside the oscillatory populations, as can be seen in Figure 6. In order to provide benchmarks for the neuromorphic
platforms vs the CPU as well as to show the adaptive capabilities of our model
we ran the model with different numbers of neurons and different numbers of
oscillatory centers. An interesting finding is that oscillatory patterns are
generated even with low numbers of neurons as can be seen in Figure 8.
Furthermore, perturbing the model by providing explicit stimuli to specific oscillatory centers can lead to some interesting behaviours which show the stability of the circuit. As can be seen in Figure 7, a single external
perturbation on one of the oscillatory centers leads to a temporary disruption
of the signals, localized around the neighbouring oscillatory centers. Upon
removal of the perturbation the oscillators quickly recover and stabilize.
This is the limit cycle property of the high-level mathematical model, which is captured well by the spiking network and demonstrates the robustness of the model, a property of particular importance for robotics problems.
The high-level modulation and control of the signals when varying the input to
the network under the scenario described in 2.3.5 can be seen in Figure 5. In
this scenario a simple asymmetrical variation of the input signals between the left and the right side of the spinal cord leads to the formation of different travelling wave patterns, which can induce different swimming behaviours. A
variation between the left and right side of the spinal cord leads according
to equation 4 to a shift of the center of the signals towards positive or
negative angles, which in turn induces a shift of the joints angles towards
one side, causing the robot’s curvature to change, inducing a change of
direction.
### 3.2 Neuromechanical simulations
#### 3.2.1 Unperturbed swimming
As mentioned in section 3.1, swimming patterns arise even with a smaller number of neurons per neural population in the spiking neural network, although the fewer the neurons, the less precise the approximation is. A comparison of the three simulation scenarios with successively larger numbers of neurons can be seen in videos 111https://youtu.be/E27Zj1ShI14 (500 neurons), 222https://youtu.be/b1E9EvVRoOw (1000 neurons), and 333https://youtu.be/n3Q-Sn6jUKU (2000 neurons). The robot configurations in the 2000-neuron scenario can be seen in Figure 9. The videos correspond to Figure 8, and as can be observed, the fewer the neurons, the less smooth the swimming is. Nevertheless, even 280 neurons per neural population are enough to produce a swimming pattern.
Asymmetry of the driving signals between left and right induces turning as can
be seen in video 444https://youtu.be/DKdtTFdthbI, and providing such drives is
a simple way to steer the robot in one direction. Using a closed-loop control method such as the one described in section 2.4.1, such asymmetries can be computed and provided automatically to the control loop.
#### 3.2.2 Presence of water speed barrier
As described in section 2.4.1, to demonstrate the controllability of the robot
with a closed loop controller we examine the behaviour of the robot with the
presence of a speed barrier, first without adaptation of the high-level signal
555https://youtu.be/hCtUVjqVr5g and then with high-level adaptation
666https://youtu.be/Q58Me79cfSs. In the first video, the speed barrier causes
the robot to follow a trajectory towards the side, by applying higher drag
forces to the robot in the lateral direction. In this scenario the robot does
not manage to compensate for the presence of the speed barrier as the
unmodulated oscillatory signals do not induce a correction of the direction of
the robot. In the second video on the other hand, the error correction
mechanism described in 2.4.1 is activated, causing the trajectory of the robot
to be corrected to compensate for the speed barrier, and eventually it manages
to orient itself and swim forward. We can observe that the model adapts well
when the high-level tonic drive signal is regulated by the error correction
mechanism, which conceptually corresponds to the adaptation that a decision
making center of the brain would perform in order to follow a certain
trajectory.
#### 3.2.3 Energy and computational speed metrics on SpiNNaker-3 and Loihi
boards
For robotics applications it is important that the control signals are
generated in real-time. In order to be able to control a robot with the two
neuromorphic boards that we examined, the quality of the generated signals has
to be similar to the one coming from the CPU. Such a comparison of the quality for a simulation of 10 secs can be seen in Figures 10 and 11. As can be observed, the signals are of better quality than those from the CPU for a low number of neurons. The quality of the produced signals depends heavily on the number of
neurons that are used to represent them. Due to limitations arising from the architecture of the two neuromorphic boards we tested, the total number of neurons that we could run on a SpiNNaker board is limited to 30000; for a Loihi board the limit is reached at a similar number of neurons when no probes for measuring the network’s output are used. With probes, the limit on Loihi is reached at approximately 22000 neurons. A probe is a software construct that can be used to collect simulation data such as neuron activity, energy consumption, etc. Here, probes are used to record the decoded output value of the neural populations representing the oscillatory centres.
A more detailed comparison of the runtime performance for the different
platforms can be seen in Figure 12. What we observed during the execution on
the neuromorphic chips is that most of the time is spent during phases other
than the network execution, mostly during the initialization phase where the
network configuration is being set up, and during input-output (I/O) operations
such as the transfer of spikes between the neuromorphic board and the host
computer. This is especially true for the Loihi board, as can be observed in
figure 13, where the actual execution of the network is around 1 second for 10
seconds of simulation time, almost 10 times faster than real-time, slightly
increasing as the network’s size increases. In contrast, most of the time
during execution is spent on other operations such as the exchange of spikes.
It is clear, that this is the main bottleneck of Loihi’s execution time.
SpiNNaker on the other hand, and especially the execution of spiking networks
on SpiNNaker through Nengo, is already optimized for real-time execution. This
is the reason why the total operation of SpiNNaker, including I/O operations and network execution, stays almost real-time. It should be noted that this time also includes waiting times induced by Nengo to make sure the simulation runs in real-time. The network itself executes on SpiNNaker in around 2 seconds, a slightly slower execution time than Loihi’s.
A more detailed analysis of the time spent during the execution of the network
on Loihi during larger simulation times is provided in figure 14. To explain
the observations it is useful to separate the operation of the board into three
distinct phases. The first would be the initialization and setup phase which
includes software overhead, overhead to boot the board, setup of the host
server, compilation of neurons and synapses on the board and which is
performed only once. The second phase would be the loading of the spikes into
the neuromorphic board which can be done in parallel with the execution of the
network, or before the execution of the simulation. The third phase
corresponds to the actual execution on the board. From these findings we can
conclude that as soon as the execution of the network is separated from the
setup it can perform much faster than real-time. It should be noted that these
metrics are relevant for this specific neural network and do not provide an
accurate metric for other types of models.
Due to software limitations it was not possible to provide accurate energy
benchmarks for the SpiNNaker board. However, a comparison of the energy
consumption between a CPU and Loihi is provided in Figure 15. On Loihi the energy consumption was measured with the built-in time and energy probes. For
measuring the energy consumption on the CPU, the RAPL interface was used. RAPL
is an Intel processor feature that provides the ability to monitor and control the SoC power consumption [47]. As the power measurement control
domain we used the package domain which includes the energy consumption of all
cores, integrated graphics and other uncore components like caches and memory
controllers. For the actual measurement, a framework developed by [48] was
used.
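As an aside, the package-domain energy counters that RAPL exposes through the Linux powercap interface (e.g. `/sys/class/powercap/intel-rapl:0/energy_uj`) report cumulative microjoules and wrap around at `max_energy_range_uj`, so any measurement has to handle the wrap. The helper below is a generic illustration of that arithmetic, not the framework from [48]:

```python
def rapl_energy_joules(e_start_uj, e_end_uj, max_range_uj):
    """Energy in joules between two readings of a RAPL energy counter.

    The powercap files (e.g. /sys/class/powercap/intel-rapl:0/energy_uj)
    report cumulative microjoules and wrap at max_energy_range_uj.
    """
    delta = e_end_uj - e_start_uj
    if delta < 0:                 # the counter wrapped between the readings
        delta += max_range_uj
    return delta / 1e6
```

Two reads of `energy_uj` taken around the workload, divided by the elapsed time, then give the average package power.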
As can be seen in Figure 15, the energy consumption of the Loihi chip is three orders of magnitude lower than that of executing the same network with Nengo on a CPU. This shows that neuromorphic hardware can deliver significant energy reductions for executing spiking neural networks when compared to traditional CPU architectures.
## 4 Conclusions
In this paper we presented a Spiking Central Pattern Generator based on a
high-level system of abstract coupled Hopf-like oscillators that can run on
both software and neuromorphic hardware. The method which we used can be
generalized to any type of similar CPG controller. Our model is highly
parametrizable, and is an excellent candidate for optimization methods. With
different parametrizations it can provide a vast number of possible
synchronized gaits, e.g. travelling and standing waves. Our method enables us
to smoothly control a lamprey robot that, with regulation of the high-level drive, adapts to various simulation scenarios. We presented a closed-loop
neurorobotics simulation within the Neurorobotics Platform achieving multiple
locomotor tasks. Lastly, we showed that running the controller on neuromorphic
hardware can achieve real-time operation and has potential advantages in terms
of energy efficiency and computational speed.
Our work is related to other works in the field that attempt to provide
insight on the performance of neuromorphic hardware. In particular, SpiNNaker
was benchmarked for its performance in terms of energy efficiency and
computational speed with similar accuracy, to an HPC system running a full-
scale microcircuit of the human cortex model [49]. It was shown that for such
complex models the energy consumption per synaptic event, which provides an estimate of the energy efficiency, is $5.9\,\mu\mathrm{J}$, close to the $5.8\,\mu\mathrm{J}$ consumed by the HPC system. However, for simpler models, closer in terms of synaptic connections and number of neurons to the model that we employ, the cost per synaptic event can be as low as $8\,\mathrm{nJ}$ [50]. Similarly, in [12]
they compared the performance of an IBM TrueNorth neuromorphic chip running a
set of computer vision neural networks with the performance of a dual 2.4 GHz
E5-2440 processor x86 system, as well as a Blue Gene/Q system with up to 32
compute cards, and found a two to three orders of magnitude speedup in execution time and five orders of magnitude lower energy consumption compared to the non-neuromorphic
systems. Blouw et al. [51] showed that the energy consumption of Intel’s Loihi chip was significantly lower than that of the Movidius Neural Compute Stick, Nvidia’s Jetson TX1, a CPU, and a GPU (5.3x, 20.5x, 23.2x, and 109.1x lower, respectively) for a keyword spotting task. However, it should be noted that
generating precise energy consumption benchmarks is a cumbersome task, and
often the claims about the theoretical energy efficiency of neuromorphic
hardware are not accompanied by the corresponding metrics.
### 4.1 Future work
In order to study the challenges presented in animal swimming locomotion, a
realistic simulation framework that can model all the different aspects of the
physical world is necessary. The dynamics of the system, the control part, and their communication and synchronization are already handled in the Neurorobotics Platform, but a realistic fluid simulation is still missing. We are planning to address this problem and present a unified framework in future work.
This would allow providing realistic force feedback in the control loop, thus
enabling the generation of more complex computational models.
Furthermore, our CPG model can be enriched with various forms of environmental or sensory feedback, which can be incorporated into the model itself. Sensory data such as that from stretch receptors, and high-level cognitive controllers that regulate the tonic drive, are examples of this type of feedback.
One natural continuation of our work would be the transfer of the control
framework on a real robot, such as the AmphiBot. This is currently limited by the size of the SpiNNaker board, which would prevent it from being fitted on the robot. However, Loihi comes in a USB-stick form factor that is more compact and would potentially fit on the robot. One important consideration would be
waterproofing the neuromorphic boards, as well as making sure that the changes
induced in the dynamics of the system by the extra weight would be negligible.
## Acknowledgments
The authors would like to thank Peter Blouw and Eric Hunsberger from Applied
Brain Research for their valuable help on setting up the Nengo simulations and
David Florey, Yulia Sandamirskaya and Andreas Wild from Intel for their help
with the Loihi simulation and interpretation of results.
## 5 Figures
Figure 1: The control framework. The brainstem component abstracts the brain areas that stimulate the spinal cord, separated into two
stimulations, one for each side of the spinal cord. The CPG component,
comprised of coupled oscillatory centers organised in a double chain, produces
the swimming gaits modulated by the high-level brainstem control. A PD
controller receives the output of the CPG network and applies it to the
robot, controlling the angular rotation of each joint. Figure 2: Architecture
of the spiking CPG model. Each oscillatory center, denoted by $\theta_{i}$, is coupled with its neighbours through an intermediate population, depicted with $C_{ij}$. The intermediate population computes the coupling term of
equation 7. The x-y diagrams corresponding to each oscillator show the
trajectory of a point traversing the limit cycle through time for the ideal
mathematical model. As can be observed, the oscillators in each side of the
spinal cord have an antiphase relationship between them, whereas the ones above or below have a fixed phase difference of $4\pi/NumOsc$. Figure 3:
(Left) The Nengo simulated model where 4 oscillatory centers are shown. In
this simulation the high-level stimulation is driving the oscillations.
(Right) The output of each oscillator that corresponds to the decoded spiking
activity, when 2000 neurons per oscillatory center are used, is depicted.
Figure 4: The behavior of a single Hopf-like oscillator implemented in spiking
neurons using Nengo and the Neural Engineering Framework (NEF). The model
consists of an all-to-all recurrently connected layer of LIF neurons with
exponential synapses with 100ms time constants. Their spiking activity is
shown in the middle row, sorted by similarity. A single input
($\overline{\omega}$) is provided, and the two outputs show that it functions
as a controlled oscillator. The input weights, recurrent weights, and output
weights are found using the NEF such that the network approximates
$\dot{x}=a({R}^{2}-{r}^{2})x-\overline{\omega}y$ and
$\dot{y}=a({R}^{2}-{r}^{2})y+\overline{\omega}x$. Figure 5: The output of the
CPG network for 16 oscillatory centers, where each oscillator is depicted with
$\theta_{i}$. An asymmetric drive is provided to the network after 5 seconds
of simulation, increasing the drive on the right side of the spinal cord, and
decreasing it on the left. As can be observed the amplitude of the
oscillations on the right side increases, whereas on the left side decreases.
Figure 6: Spike train of the first 50 neurons of an oscillatory population
with 2000 neurons for 4 secs. The activity of the neurons shows clear signs
of periodicity. The neurons are continuously alternating between high and low
firing rates. Figure 7: The output of the network when the 5th oscillator is perturbed by an external signal. The perturbation, lasting from 4.8 to 5 secs, disturbs the wave patterns of the neighbouring oscillators $\theta_{2}$, $\theta_{5}$, $\theta_{6}$. The model quickly recovers when the
perturbation is removed. Figure 8: The output of the network for different numbers of neurons per oscillatory population. Even with 500 neurons the network can produce an oscillatory output, albeit of lower quality, as some of the oscillators’ waves are not smooth and there is more high-frequency noise. With 1000 neurons there is an improvement in the quality of the signals, whereas with 2000 neurons the signals are smooth and without high-frequency noise.
Even with a low number of neurons the patterns are capable of producing
simulated swimming. The network was trained in Nengo with a random seed of 0.
Figure 9: Swimming with the simulated robot, with snapshots at 160ms intervals
for the unperturbed non-adaptive scenario. The network consists of 2000
neurons per neural population. The travelling wave is propagated along the
robot’s body from head to tail. Figure 10: The output of the network for
different numbers of neurons per oscillatory population when executed on
SpiNNaker. On SpiNNaker the output of the network is relatively accurate and
better than the CPU even for a small number of neurons. The weights were
trained with a random seed of 0. Note that high-frequency filtering is applied
by default on the output signals. Figure 11: The output of the network for
different numbers of neurons per oscillatory population when executed on Loihi.
The results have similar accuracy as SpiNNaker and perform better than the CPU
for a low number of neurons. The weights were trained using the random seed 0.
Note that high-frequency filtering is applied by default on the output
signals. Figure 12: Runtime of a 10 seconds experiment for various number of
neurons per platform. The total execution time in SpiNNaker is referring to
the complete execution cycle from the moment the simulation is launched to the
moment the execution data is collected, likewise in Loihi. It is important to
note that these values represent the execution of Nengo on the neuromorphic
hardware from the perspective of an application developer, treating the
hardware as a black box. The SpiNNaker on-chip execution time measures only
the time spent on the board for the execution of the network. The Loihi
execution measures the execution time reported by Loihi and represents the
actual time spent executing the network. The execution + spike transfer
represents the execution time plus the time spent during the exchange of
spikes between the Loihi board and the CPU. The reasoning behind these
benchmarks is to demonstrate that the times spent on the chip are very low
compared to real-time and the rest of the times is spent on IO operations or
other operations induced by the software. For a more detailed breakdown of the
execution times in Loihi see also Figure 13. It can be observed that the
actual execution time on the boards is much faster than real-time, showing
that neuromorphic hardware is a great candidate for running the CPG model in
real-time. Figure 13: Breakdown of total execution time on the Loihi chip into
different parts for 10 seconds of simulation time and increasing neurons.
Python timings refer to the execution of the network from an application
developer’s point of view and include all the software and IO induced times.
The Executing series shows the actual execution time on the chip and is
linearly increasing as the number of neurons increase. The Executor series
includes both the execution and the transferring of spikes between the board
and the CPU. It should be noted that these two processes can be performed in
parallel. The times spent during the setup and initialization phases (Host
server up, encoding axons/synapses, booting the board, configuring registers)
are performed only once and their relative duration is less significant if the
simulation time increases, see also 14 Figure 14: Nengo Loihi execution times
when the simulation time increases. All the benchmarks were performed with a
network with 450 neurons per oscillatory center. In this figure it is evident
that the initialization and setup times play an increasingly less significant
role as the simulation time increases, making it possible to execute the
network in real-time after roughly 35 secs of simulation time. This is
important from the perspective of the application developer as it is taking
into account all the software and IO bottlenecks, which usually treats the
chips as black boxes and optimizes on the software and network layer. From the
figure we can observe that the times spent during the operation of the chip
are on the transfer of spikes and on the actual execution, which increase
linearly in time, whereas all the other times remain relatively stable. Figure
15: Energy Benchmark of the CPG with Nengo Loihi and Nengo CPU, measured with
built-in energy probes in Loihi and with the RAPL interface on the CPU. Is it
clear that the energy consumption on the chip is orders of magnitude smaller
that the consumption on the CPU.
We investigate the performance of multi-user multiple-antenna downlink systems in which a base station (BS) serves multiple users via a shared wireless medium. In order to fully exploit the spatial diversity while minimizing the passive energy consumed by radio frequency (RF) components, the BS is equipped with $M$ RF chains and $N$ antennas, where $M < N$. Upon receiving pilot sequences to obtain the channel state information (CSI), the BS determines the best subset of $M$ antennas for serving the users. We propose a joint antenna selection and precoding design (JASPD) algorithm to maximize the system sum rate subject to a transmit power constraint and quality of service (QoS) requirements. The JASPD overcomes the non-convexity of the formulated problem via a doubly iterative algorithm, in which an inner loop successively optimizes the precoding vectors, followed by an outer loop that tries all valid antenna subsets. Although approaching the (near) global optimum, the JASPD suffers from a combinatorial complexity, which may limit its application in real-time network operations. To overcome this limitation, we propose a learning-based antenna selection and precoding design algorithm (L-ASPD), which employs a deep neural network (DNN) to establish underlying relations between the key system parameters and the selected antennas. The proposed L-ASPD is robust against the number of users and their locations, the BS's transmit power, as well as the small-scale channel fading. With a well-trained learning model, it is shown that the L-ASPD significantly outperforms baseline schemes based on block diagonalization [5] and a learning-assisted solution for broadcasting systems [29], and achieves a higher effective sum rate than the JASPD under limited processing time. In addition, we observe that the proposed L-ASPD can reduce the computational complexity by 95% while retaining more than 95% of the optimal performance.
Multiuser, precoding, antenna selection, machine learning, neural networks, successive convex optimization.
§ INTRODUCTION
Multiple-input multiple-output (MIMO) is an enabling technology to deal with the rapidly increasing demand for data-hungry applications in current and future mobile networks. By using a large number of antennas, a MIMO base station is able to send multiple information streams to multiple users simultaneously with negligible inter-user interference. The advantages of MIMO systems, under a proper beamforming design, comprise not only high spectral efficiency but also improved energy efficiency [1]. When the number of antennas in MIMO systems becomes very large, antenna selection (AS) can be employed to improve the performance in terms of both hardware cost and technological aspects [2]. This is because radio frequency (RF) chains are usually much more expensive than antenna elements. More importantly, a proper AS strategy is capable of not only obtaining full spatial diversity but also considerably minimizing the RF chains' energy consumption, hence improving the system energy efficiency [3].
In general, AS is an NP-hard problem whose optimal solution is only guaranteed via exhaustive search, which tries all possible antenna combinations. The high complexity of AS may limit its potential in practice, especially in 5G services which usually have stringent latency and real-time decision making requirements [4].
Low-complexity solutions have become necessary to make AS practically feasible, especially for BSs with a medium to large number of antennas. A block diagonalization-based algorithm is proposed in [5] for multiuser MIMO systems, which selects the best antennas to either minimize the symbol error rate (SER) upper bound or maximize the minimum capacity. This method consecutively eliminates, one at a time, the antenna that contributes the most energy to the corresponding orthogonal beamformers. The authors of [6] propose a joint beamforming design and AS algorithm to minimize the multicasting transmit power. By using group sparsity-promoting $l_{1,2}$ norms instead of the $l_0$ norm, the selected antennas and beamformers can be obtained via an iterative algorithm. The application of $l_{1,2}$ norms is also employed in massive MIMO for minimizing the transmit power [7] and in cell-free MIMO downlink setups for joint access point selection and power allocation [8]. In [9], an AS algorithm based on mirror-prox successive convex approximation (SCA) is proposed for maximizing the minimum rate in multiple-input single-output (MISO) broadcasting systems. A similar SCA-based approach is proposed in [10, 11] for energy efficiency maximization.
Recently, the use of machine learning (ML) in communications systems has attracted much attention [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. The main advantage of ML-aided communications lies in the capability of establishing underlying relations between system parameters and the desired objective, hence being able to shift the computation burden in real-time processing to the offline training phase [25, 26].
The authors of [16] propose a beamforming neural network (BNN) for minimizing the transmit power of multiuser MISO systems, which employs convolutional neural networks (CNN) and a supervised-learning method to predict the magnitude and direction of the beamforming vectors. This method is extended in [17, 18] for unsupervised-learning to maximize the system weighted sum-rate.
In [19], a deep learning-aided transmission strategy is proposed for single-user MIMO systems with limited feedback, which is capable of addressing both pilot-aided training and channel code selection.
The authors of [20] develop a deep learning-based beamforming design to maximize the spectral efficiency of a single-user millimeter wave (mmWave) MISO system, which achieves higher spectral efficiency than conventional hybrid beamforming designs.
The application of Q-learning is developed in [21] to overcome the combinatorially complex task of selecting the best channel impulse response in vehicle-to-infrastructure communications.
A similar Q-learning based method is proposed in [23] to solve the joint design of beamforming, power control, and interference coordination of cellular networks.
In [22], the authors develop a deep reinforcement learning framework which can autonomously optimize broadcast beams in MIMO broadcast systems based on users' measurements.
A common data set for training mmWave MIMO networks is provided in [24] regarding various performance metrics.
Towards a learning-aided physical layer design, the application of ML to AS is a promising way to tackle the high complexity of AS [27, 28, 29, 30]. A joint design of AS and hybrid beamformers for single-user mmWave MIMO is proposed in [27] based on two serial CNNs, in which one CNN is used to predict the selected antennas and another CNN is used to estimate the hybrid beamformers. The authors of [28] propose a multi-class classification approach to tackle the AS problem in single-user MIMO systems based on two classification methods, namely multiclass k-nearest neighbors and support vector machine (SVM). In [29], a neural network-based approach is proposed to reduce the computational complexity of AS for broadcasting. The neural network (NN) is employed to directly predict the selected antennas that maximize the minimum signal-to-noise ratio among the users. The authors of [30] propose a learning-based transmit antenna selection to improve the security of the wiretap channel. Therein, two learning-based schemes, SVM and naive Bayes, are considered. Although able to improve the secrecy performance with a reduced feedback overhead, the setup analyzed in [30] is limited to the selection of a single antenna.
§.§ Contributions
In this paper, we investigate the performance of a multiuser MISO downlink system via a joint design of AS and precoding vectors to improve the system sum rate while guaranteeing the users' quality of service (QoS) requirements. Our contributions are as follows:
* First, we develop a joint antenna selection and precoding design (JASPD) framework to maximize the effective system sum rate, which accounts for the time overhead spent on both channel estimation and computational processing, subject to users' QoS requirements and a limited transmit power budget. The proposed JASPD works in an iterative manner, which first optimizes the beamforming vectors for a given antenna subset, and then selects the best antenna subset.
* Second, to tackle the non-convexity in optimizing the beamforming vectors of JASPD, we propose two iterative optimization algorithms based on semidefinite relaxation (SDR) and SCA methods. The convergence of the proposed iterative algorithms to at least a local optimum is theoretically guaranteed.
* Third, we propose a learning-based antenna selection and precoding design (L-ASPD) algorithm to overcome the high computational complexity of AS, which employs a deep neural network (DNN) to capture and reveal the relationship between the system parameters and the selected antennas via an offline training process. More importantly, our learning model is robust against not only the channel fading but also the number of users and their locations. Compared to existing works, which study either single-user MIMO systems [27, 28], a single beamformer for broadcasting [29], or a single antenna selection [30], we consider a more general multi-user system.
* Finally, extensive simulation results show that, under the same limited processing time, the proposed L-ASPD outperforms the JASPD and significantly outperforms existing AS schemes on both model-based [5] and ML-aided [29] designs. We observed that the L-ASPD can achieve more than 95% of the optimal sum rate while reducing more than 95% of the computational time.
The rest of the paper is organized as follows. Section II presents the system model and key parameters.
Section III develops two iterative optimization algorithms used in the JASPD. Section IV introduces a ML-aided joint design to accelerate real-time processing. Section V demonstrates the effectiveness of the proposed algorithms via simulation results. Finally, Section VI concludes the paper.
Notations: The superscript $(.)^T$, $(.)^H$ and $\mathrm{Tr}(.)$ stand for the transpose, Hermitian transpose, and trace operation, respectively. $\binom{n}{k}$ represents the binomial coefficients. $|.|$ denotes the cardinality of a set and $\|.\|$ denotes the $l_2$-norm of a vector.
§ SYSTEM MODEL
We consider a multiuser MISO downlink system operated in time division duplex (TDD) mode, in which a multi-antenna base station (BS) serves $K$ single-antenna users in the same frequency resource[In practice the whole bandwidth is divided into multiple sub-frequency bands. The proposed scheme is directly applied to each band.], as depicted in Fig. <ref>. The BS is equipped with $M$ RF chains and $N$ antennas, where $N > M \ge K$. The motivation for having more antennas than RF chains is that the BS can i) fully exploit the spatial diversity gain and ii) minimize the static energy consumed by hardware components [3], e.g., RF chains and amplifiers.
The system operates in a quasi-static block fading channel in which the channel gains are constant within one block and change independently from one block to another. Before sending data to the users, the BS needs to acquire the channel state information (CSI) via pilot-aided channel estimation[The system is assumed to operate above certain SNR levels at which the CSI can be efficiently estimated.] in order to perform precoding, e.g., beamforming and power allocation.
Diagram of multiuser MISO system. A subset of antennas is selected for data transmission.
Fig. <ref> illustrates the three phases in one transmission block. Let $T$ and $\tau_{csi}$ denote the block duration and channel estimation time, respectively, both expressed in channel uses (c.u.). The block duration is determined by the system coherence time. Assuming mutually orthogonal pilot sequences across the users, the channel estimation time is $\tau_{csi} = K(\lfloor N/M\rfloor + 1)$ c.u., where $\lfloor x\rfloor$ denotes the largest integer not exceeding $x$. Unlike most previous works, which ignore the processing time, we consider the general case in which the processing takes $\tau_{pro}$ (c.u.). In practice, the value of $\tau_{pro}$ largely depends on the beamforming technique and the hardware capability.
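As a concrete check of the overhead model above, the following Python sketch evaluates $\tau_{csi} = K(\lfloor N/M\rfloor + 1)$ and the resulting fraction of the block left for data, $1 - (\tau_{csi}+\tau_{pro})/T$. The function names and sample values are illustrative only, not taken from the paper.

```python
import math

def csi_overhead(K: int, N: int, M: int) -> int:
    """Pilot overhead in channel uses: tau_csi = K * (floor(N/M) + 1),
    since the N antennas must be sounded in groups of M RF chains."""
    return K * (math.floor(N / M) + 1)

def effective_rate_factor(tau_csi: int, tau_pro: int, T: int) -> float:
    """Fraction of the block left for data: 1 - (tau_csi + tau_pro) / T."""
    return 1.0 - (tau_csi + tau_pro) / T

tau = csi_overhead(K=4, N=8, M=4)                          # 4 * (2 + 1) = 12 c.u.
print(tau, effective_rate_factor(tau, tau_pro=38, T=200))  # → 12 0.75
```

Note that spending more channel uses on either estimation or processing directly shrinks this pre-log factor, which is the trade-off the effective rate in (<ref>) captures.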
Let $\bs{h}_k \in \mathbb{C}^{1\times N}$ denote the channel vector from the BS's antennas to user $k$, including the pathloss.
We assume that full CSIs are available at the BS. Because there are only $M < N$ RF chains, the BS has to determine an optimal subset of $M$ antennas for sending data to the users. Let $\CA = \{a_1, a_2, \dots, a_M\}, a_m\in [N]\triangleq \{1, 2, \dots, N\}$, be a subset of $M$ antennas (out of $N$), and let $\CAb$ be the collection of all possible antenna subsets. By definition, we have $|\CA| = M$ and $|\CAb| = \binom{N}{M}$.
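The candidate collection $\CAb$ can be enumerated directly; a minimal Python sketch, assuming 1-based antenna indices as in $[N]$:

```python
from itertools import combinations
from math import comb

N, M = 6, 3
# every candidate subset A = {a_1, ..., a_M} of [N] = {1, ..., N}
subsets = list(combinations(range(1, N + 1), M))
assert len(subsets) == comb(N, M)   # |A_bar| = C(N, M)
print(len(subsets), subsets[0])     # → 20 (1, 2, 3)
```

Even for moderate sizes this collection grows quickly (e.g., $\binom{32}{8} > 10^7$), which is the combinatorial burden the learning-based design in Section IV is meant to avoid.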
Denote by $\bs{h}_{k,\CA} \in \mathbb{C}^{1\times M}$ the channel vector from active antennas in a subset $\CA$ to user $k$, i.e., $\bs{h}_{k,\CA} = \left[h_k[a_1], h_k[a_2], \dots, h_k[a_M]\right]$, where $a_m \in \CA$ and $h_k[n]$ is the $n$-th element of $\bs{h}_k$. Before serving the users, the BS first precodes the data to suppress inter-user interference. Let $\bs{w}_{k,\CA} \in \mathbb{C}^{M\times 1}$ be the precoding vector for user $k$ corresponding to the selected antenna subset $\CA$. The received signal at user $k$ is
\begin{align}
y_{k,\CA} = \bs{h}_{k,\CA} \bs{w}_{k,\CA} x_k + {\sum}_{i\neq k} \bs{h}_{k,\CA} \bs{w}_{i,\CA} x_i + n_k, \label{eq:y_k}
\end{align}
where $n_k$ is Gaussian noise with zero mean and variance $\sigma^2$. The first term in (<ref>) is the desired signal, and the second term is the inter-user interference.
By considering interference as noise, the effective achievable rate of user $k$ is
\begin{align}
R_k(\CA) =& B\left(1 - \frac{\tau_{csi} + \tau_{pro}}{T} \right) \notag \\
&\times \log_2\Big(1 + \frac{|\bs{h}_{k,\CA} \bs{w}_{k,\CA}|^2}{{\sum}_{i \neq k} |\bs{h}_{k,\CA} \bs{w}_{i,\CA}|^2 + \sigma^2} \Big), \forall k, \label{eq:R_k}
\end{align}
where $B$ is the shared channel bandwidth and $1 - \frac{\tau_{csi} + \tau_{pro}}{T}$ accounts for actual time for data transmission. The total transmit power[The energy consumed by hardware components is excluded since it is constant and does not affect the precoding design.] is ${\sum}_{k=1}^K\|\bs{w}_{k,\CA} \|^2$.
It is observed from (<ref>) that the effective data rate is determined not only by the precoding vectors $\bs{w}_{k,\CA}$ but also by the channel estimation and processing times. In particular, spending more time on either channel estimation or processing will degrade the effective transmission rate.
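To make the rate model concrete, the following numpy sketch evaluates the effective rate expression above for arbitrary (not optimized) precoders. All variable names and the sample values of $\sigma^2$, $B$, and the overhead fraction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 2, 4                           # users, selected antennas
sigma2, B, overhead = 1.0, 1.0, 0.25  # noise power, bandwidth, (tau_csi + tau_pro)/T

# random complex channels h_k (rows of H) and precoders w_k (columns of W)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
W = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))

G = np.abs(H @ W) ** 2                # G[k, i] = |h_k w_i|^2
signal = np.diag(G)                   # desired-signal powers
interference = G.sum(axis=1) - signal # inter-user interference powers
rates = B * (1 - overhead) * np.log2(1 + signal / (interference + sigma2))
total_power = np.sum(np.abs(W) ** 2)  # sum_k ||w_k||^2
```

The precoding design in Section III chooses `W` to maximize `rates.sum()` subject to per-user floors on `rates` and a cap on `total_power`.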
Block diagram of one transmission block.
§ OPTIMAL ANTENNA SELECTION AND PRECODING DESIGN
In this section, we develop a joint antenna selection and precoding design to maximize the system sum rate while satisfying the minimum QoS requirements and limited power budget. The joint optimization problem can be formulated as follows:
\begin{align}
\text{P0}:~&\underset{\CA\in \CAb, \{\bs{w}_{k,\CA}\}}{\mathtt{maximize}}
~~ {\sum}_{k=1}^K R_k(\CA) \label{OP:0} \\
\mathtt{s.t.}
&~~ R_k(\CA) \ge \eta_k, \forall k, \notag \\
&~~ {\sum}_{k=1}^K \|\bs{w}_{k,\CA} \|^2\leq P_{tot}, \notag %\subeqn \label{eq:p0 c0}
\end{align}
where $R_k(\CA)$ is given in (<ref>), $P_{tot}$ is the total transmit power budget at the BS, and $\eta_k$ is the QoS requirement of user $k$. In problem (<ref>), the first constraint guarantees the minimum user QoS requirement and the second constraint states that the total transmit power should not exceed the power budget. We note that the formulation in (<ref>) extends directly to the weighted sum rate metric for given weight coefficients, with the weights used as part of the training input.
In general, problem (<ref>) is a mixed binary non-linear problem where the binary variables of the activated antennas are strongly coupled with the continuous variables of the precoding vectors. Because the precoding vectors are designed for a given selected antenna subset, problem P0 can be reformulated in an iterative form as follows:
\begin{align}
\underset{\CA \in \CAb}{\mathtt{maximize}} ~~ \text{P1}(\CA), \label{OP:0 AS}
\end{align}
where $\text{P1}(\CA)$ is the precoding design problem for the candidate antenna subset $\CA$, which is defined as follows
\begin{align}
&\mathrm{P1}(\CA): \underset{\{\bs{w}_{k,\CA}\}}{\mathtt{maximize}}
~~ \bar{B}\sum_{k=1}^K \log_2\Big(1 + \frac{|\bs{h}_{k,\CA} \bs{w}_{k,\CA}|^2}{{\sum}_{i \neq k} |\bs{h}_{k,\CA} \bs{w}_{i,\CA}|^2 + \sigma^2} \Big) \label{OP:1}\\
&\mathtt{s.t.}~~ \bar{B}\log_2\Big(1 + \frac{|\bs{h}_{k,\CA} \bs{w}_{k,\CA}|^2}{{\sum}_{i \neq k} |\bs{h}_{k,\CA} \bs{w}_{i,\CA}|^2 + \sigma^2} \Big) \ge \eta_k,\forall k,\subeqn \label{eq:p1 c1}\\
&\qquad~~{\sum}_{k=1}^K \|\bs{w}_{k,\CA} \|^2\leq P_{tot}, \subeqn \label{eq:p1 c2}
\end{align}
where $\bar{B} \triangleq B(1 - \frac{\tau_{csi}+\tau_{pro}}{T})$ and we have used (<ref>) for $R_k(\CA)$.
If problem $\text{P1}(\CA)$ can be solved optimally, then the optimal solution of P0 can be obtained via an exhaustive search in (<ref>), which tries all possible antenna subsets. Unfortunately, solving problem $\text{P1}(\CA)$ is challenging due to the non-concavity of the objective function and the non-convexity of the first constraint.
In the following, we propose two solutions based on the SDR and SCA methods to tackle the non-convexity of the beamforming vector design in Section III-A. We then describe the proposed JASPD algorithm and analyze its complexity in Section III-B.
§.§ Near Optimal Beamforming Design for Selected Antennas
In this subsection, we design the beamforming vectors to maximize the system sum rate for a selected antenna subset. In the following, we propose two methods to solve (<ref>).
§.§.§ Semidefinite Relaxation based Solution
Semidefinite-based formulation is an efficient method to design the beamforming vectors of wireless systems, which converts quadratic terms into linear ones by lifting the original variables into a higher-dimensional space. We adopt the semidefinite method to deal with the signal-to-interference-plus-noise ratio (SINR) terms in both the objective function and the first constraint. Define a new set of variables $\bs{W}_{k} = \bs{w}_{k,\CA}\bs{w}^H_{k,\CA} \in \mathbb{C}^{M\times M}$, and denote $\bs{H}_{k} \triangleq \bs{h}^H_{k,\CA} \bs{h}_{k,\CA}$. It is straightforward to verify that $|\bs{h}_{k,\CA}\bs{w}_{l,\CA}|^2 = \bs{h}_{k,\CA}\bs{w}_{l,\CA} \bs{w}^H_{l,\CA} \bs{h}^H_{k,\CA} = \mathrm{Tr}(\bs{H}_{k}\bs{W}_{l})$ and $\|\bs{w}_{k,\CA}\|^2 = \mathrm{Tr}(\bs{W}_{k})$.
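The lifting identities above can be checked numerically; a minimal numpy sketch with a random channel and precoder (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # row channel h_k
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # precoder w_l

H = np.outer(h.conj(), h)   # H_k = h_k^H h_k  (M x M, rank one)
W = np.outer(w, w.conj())   # W_l = w_l w_l^H  (M x M, rank one)

lhs = abs(h @ w) ** 2            # |h_k w_l|^2: quadratic in w
rhs = np.trace(H @ W).real       # Tr(H_k W_l): linear in W
assert np.isclose(lhs, rhs)

# The power constraint also becomes linear: ||w||^2 = Tr(W).
assert np.isclose(np.linalg.norm(w) ** 2, np.trace(W).real)
```

This is precisely why the SDR formulation is attractive: every quadratic term in the precoders becomes linear in the lifted variables $\bs{W}_k$.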
By introducing arbitrary positive variables $\{x_k\}_{k=1}^K$, we can reformulate problem (<ref>) as follows:
\begin{align}
&\underset{\bs{W}, \bs{x}}{\mathtt{maximize}}
~~ \frac{\bar{B}}{\log(2)}{\sum}_{k=1}^K x_k \label{OP:SDR1} \\
&\mathtt{s.t.}~~ \log\Big(1 + \frac{\mathrm{Tr}(\bs{H}_k\bs{W}_k)}{{\sum}_{i\neq k} \mathrm{Tr}(\bs{H}_k\bs{W}_i) + \sigma^2}\Big) \ge x_k, \forall k, \subeqn \label{eq:OP SDR1 c1}\\
&\qquad~~ x_k \ge \frac{\eta_k\log(2)}{\bar{B}},\forall k, \subeqn \label{eq:OP SDR1 c2} \\
&\qquad~~ {\sum}_{k=1}^K\mathrm{Tr}(\bs{W}_k) \le P_{tot},\subeqn \label{eq:OP SDR1 c3}\\
&\qquad~~ \mathrm{rank}(\bs{W}_k) = 1, \forall k,\notag
\end{align}
where we use short-hand notations $\bs{W}$ and $\bs{x}$ for $(\bs{W}_1, \dots, \bs{W}_K)$ and $(x_1, \dots, x_K)$, respectively.
The equivalence between (<ref>) and (<ref>) can be verified since (<ref>) holds with equality at the optimum. It is observed that the objective is a linear function and constraints (<ref>) and (<ref>) are convex. Thus, the challenge in solving problem (<ref>) lies in (<ref>) and the rank-one constraint. While the latter can be handled efficiently by relaxation, followed by randomization if needed [32], dealing with the former is more challenging.
In the next step, we introduce slack variables $\{y_k\}_{k=1}^K$ and reformulate constraint (<ref>) as
\begin{align}
&\log\Big(\sigma^2 + {\sum}_{i=1}^K \mathrm{Tr}(\bs{H}_k\bs{W}_i)\Big) \ge x_k + y_k, \label{eq:app1}\\
&\sigma^2 + {\sum}_{i\ne k} \mathrm{Tr}(\bs{H}_k\bs{W}_i) \le e^{y_k}. \label{eq:app2}
\end{align}
Because the function $\log(\cdot)$ is concave, constraint (<ref>) is convex. However, since the function $\exp(\cdot)$ is convex, constraint (<ref>) is non-convex. To overcome this difficulty, we employ the inner approximation method, which replaces $e^{y_k}$ on the right-hand side of (<ref>) with its first-order approximation. As a result, the approximated problem of (<ref>) can be formulated as follows:
\begin{align}
&\mathrm{P2}(\bs{y}_0):~ \underset{\bs{W},\bs{x}, \bs{y}}{\mathtt{maximize}}
~~ \frac{\bar{B}}{\log(2)}{\sum}_{k=1}^K x_k \label{OP:SDR1 app} \\
&\mathtt{s.t.}~~ \eqref{eq:OP SDR1 c2}; \eqref{eq:OP SDR1 c3}; \eqref{eq:app1};~\mathrm{rank}(\bs{W}_k) = 1, \forall k, \notag \\
&~~ \sigma^2 + {\sum}_{i\neq k} \mathrm{Tr}(\bs{H}_k\bs{W}_i) \le
e^{y_{0k}}(y_k - y_{0k} + 1), \forall k, \subeqn \label{eq:OP SDR1 app c1}
\end{align}
where $\bs{y} \triangleq \{y_{k}\}_{k=1}^K$ and $\bs{y}_0$ is any feasible value of $\bs{y}$ that satisfies constraint (<ref>).
It is evident that, for a given $\bs{y}_0$, the objective and constraints of problem (<ref>) are convex except for the rank-one constraint. This suggests solving (<ref>) by the semidefinite relaxation (SDR) method [32], which drops the rank-one constraint; the relaxed problem can then be solved efficiently by standard solvers, e.g., CVX. Because $e^{y_0}(y - y_0 + 1) \leq e^{y}, \forall y_0$, the approximated problem (<ref>) always gives a suboptimal solution of the original problem (<ref>).
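The soundness of this inner approximation rests on the tangent-line bound $e^{y_0}(y - y_0 + 1) \le e^{y}$, with equality at $y = y_0$; a quick numerical check (a sketch, not part of the algorithm itself):

```python
import numpy as np

def surrogate(y, y0):
    # First-order Taylor expansion of the convex function e^y at y0.
    # Convexity guarantees it is a global lower bound on e^y.
    return np.exp(y0) * (y - y0 + 1.0)

y = np.linspace(-3.0, 3.0, 601)
for y0 in (-1.0, 0.0, 2.0):
    assert np.all(surrogate(y, y0) <= np.exp(y) + 1e-12)  # global bound
    assert np.isclose(surrogate(y0, y0), np.exp(y0))      # tight at y0
```

Because the surrogate under-estimates $e^{y_k}$, any point feasible for the approximated constraint is also feasible for the original one, which is what makes the iterates of the algorithm feasible at every step.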
Iterative Algorithm to solve (<ref>)
Initialize $\bs{y}_0$, $\epsilon$, $X_{\rm old}$ and $\mathtt{error}$.
while $\mathtt{error} > \epsilon$ do
Solve the SDR of (<ref>) by dropping the rank-one constraint to obtain $\{\bs{W}_{\star k}, x_{\star k}, y_{\star k}\}_{k=1}^K$
Compute $\mathtt{error} = \frac{\bar{B}}{\log(2)}|\sum_{k=1}^Kx_{\star k} - X_{\rm old}|$
Update $X_{\rm old} \gets \frac{\bar{B}}{\log(2)}\sum_{k=1}^Kx_{\star k}; y_{0k} \gets y_{\star k}, \forall k$
It is worth noting that the optimal solution of problem (<ref>) is largely determined by the parameters $\bs{y}_0$. Thus, it is crucial to select proper values $\bs{y}_0$ such that the solution of (<ref>) is close to the optimal solution of (<ref>). As such, we propose an iterative optimization algorithm to improve the performance of problem (<ref>), shown in Algorithm <ref>. The premise behind the proposed algorithm is to better estimate $\bs{y}_0$ through iterations.
The sequence of the objective values generated by Algorithm 1 in solving the SDR of problem P2($\bs{y}_0$) is non-decreasing.
The proof of Proposition <ref> is shown in Appendix <ref>. Although not guaranteeing the global optimum of problem (<ref>), Proposition <ref> justifies the convergence to at least a local optimum of the proposed iterative algorithm[The study of the performance gap to the global optimum is left for future work.].
The execution of Algorithm <ref> requires initial values $y_{0k}, \forall k$. Therefore, an efficient way to find these initial values is needed before tackling problem (<ref>). To this end, we start by solving the feasibility problem below:
\begin{align}
&\mathtt{find}~~ \bs{W} \label{OP:SDR1 ini} \\
\mathtt{s.t.}
&~~ \frac{\mathrm{Tr}(\bs{H}_k\bs{W}_k)}{2^{\eta_k/\bar{B}}-1} \ge {\sum}_{i\neq k} \mathrm{Tr}(\bs{H}_k\bs{W}_i) + \sigma^2, \forall k, \notag\\
&~~ {\sum}_{k=1}^K\mathrm{Tr}(\bs{W}_k) \le P_{tot}, \notag
\end{align}
which is convex. Then the initial values are computed as $y_{0k} = \log({\sum}_{i\neq k} \mathrm{Tr}(\bs{H}_k\bs{W}^*_i) + \sigma^2 ), \forall k$, where $\bs{W}^*_k$ is the solution of (<ref>).
The solution in (<ref>) is based on the SDR, which sometimes violates the rank-one constraint. In such cases, Gaussian randomization can be adopted; details of the procedure are available in [32]. Our simulation results show that Algorithm <ref> outputs rank-one solutions in more than 99% of cases.
§.§.§ Reformulation based on Difference of Convex
The SDR-based reformulation in the previous subsection handles the original problem's non-convexity by working in a higher-dimensional domain, which requires more memory. In this subsection, we solve (<ref>) based on a difference-of-convex (DC) reformulation directly on the original variable domain.
By introducing arbitrary positive variables $\bs{u} \triangleq \{u_k\}_{k=1}^K$, we can reformulate problem (<ref>) as follows:
\begin{align}
\underset{\bs{w}, \bs{u}}{\mathtt{maximize}}
&~~ \bar{B}{\sum}_{k=1}^K \log_2( 1 + u_k) \label{OP:dc1} \\
\mathtt{s.t.}
&~~ \frac{|\bs{h}_{k,\CA} \bs{w}_{k,\CA}|^2}{{\sum}_{i \neq k} |\bs{h}_{k,\CA} \bs{w}_{i,\CA}|^2 + \sigma^2} \ge u_k, \forall k, \subeqn \label{eq:OP dc1 c1}\\
&~~ u_k \ge \bar{\eta}_k, \forall k, \subeqn \label{eq:OP dc1 c2} \\
&~~{\sum}_{k=1}^K \|\bs{w}_{k,\CA} \|^2\leq P_{tot}, \subeqn \label{eq:OP dc1 c3}
\end{align}
where $\bar{\eta}_k \triangleq 2^{\eta_k/\bar{B}}-1$ and $\bs{w}$ is a short-hand notation for $(\bs{w}_{1,\CA}, \dots, \bs{w}_{K,\CA})$.
The equivalence between (<ref>) and (<ref>) can be verified since constraint (<ref>) holds with equality at the optimum.
As the denominator of the left-hand side of (<ref>) is positive, it can be rewritten as
\begin{align}
\frac{|\bs{h}_{k,\CA} \bs{w}_{k,\CA}|^2}{u_k} \ge {\sum}_{i \neq k} |\bs{h}_{k,\CA} \bs{w}_{i,\CA}|^2 + \sigma^2. \label{eq:OP2 c1 equiv}
\end{align}
An important observation from (<ref>) is that $\frac{|\bs{h}_{k,\CA} \bs{w}_{k,\CA}|^2}{u_k}$ is jointly convex in $\bs{w}_{k,\CA}$ and $u_k$ (see Appendix <ref>). Therefore, (<ref>) has the form of a DC representation, which suggests an efficient way to solve (<ref>). In particular, letting $\hat{\bs{w}}_{k,\CA}, \hat{u}_k$ be any feasible solution of (<ref>), we can approximate (<ref>) by applying the first-order approximation to the left-hand side of (<ref>), stated as
\begin{align}
&\sum_{i \neq k} \bs{w}^H_{k,\CA}\bs{H}_k \bs{w}_{i,\CA} + \sigma^2 \le \frac{\bs{w}^H_{k,\CA}\left(\!\bs{H}_k\! +\! \bs{H}^T_k\!\right) \hat{\bs{w}}_{k,\CA}}{\hat{u}_k} \notag\\
&- u_k \frac{\hat{\bs{w}}^H_{k,\CA}\bs{H}_k \hat{\bs{w}}_{k,\CA}}{\hat{u}^2_k} + \frac{\hat{\bs{w}}^H_{k,\CA}\!\left(\bs{H}_k - \bs{H}^T_k\right) \hat{\bs{w}}_{k,\CA}}{\hat{u}_k}, \label{eq:OP2 c1 app}
\end{align}
which is obviously convex in $\bs{w}_{k,\CA}$ and $u_k$, where $\bs{H}_k = \bs{h}^H_{k,\CA}\bs{h}_{k,\CA}$.
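The joint convexity of the quadratic-over-linear term $|\bs{h}_{k,\CA}\bs{w}_{k,\CA}|^2/u_k$ claimed above (and proved in the Appendix) can be probed numerically via midpoint convexity on random points; an illustrative check, not a proof:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)

def f(w, u):
    # Quadratic-over-linear function |h w|^2 / u, defined for u > 0;
    # such functions are jointly convex in (w, u).
    return abs(h @ w) ** 2 / u

# Midpoint convexity check on random pairs (w1, u1), (w2, u2).
for _ in range(1000):
    w1 = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    w2 = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    u1, u2 = rng.uniform(0.1, 5.0, 2)
    mid = f((w1 + w2) / 2, (u1 + u2) / 2)
    assert mid <= 0.5 * f(w1, u1) + 0.5 * f(w2, u2) + 1e-9
```

Convexity is exactly what licenses the linearization in (<ref>): the first-order expansion of a convex function is a global lower bound, so the approximated constraint is a valid inner restriction.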
By using (<ref>) as an approximation of (<ref>), problem (<ref>) can be approximated as
\begin{align}
\mathrm{P3}(\hat{\bs{w}},\hat{\bs{u}}):& ~\underset{\bs{w}, \bs{u}}{\mathtt{maximize}}
~~ \bar{B}{\sum}_{k=1}^K \log_2( 1 + u_k) \label{OP:2 app} \\
\mathtt{s.t.}
&~~ \eqref{eq:OP dc1 c2};~ \eqref{eq:OP dc1 c3};~\eqref{eq:OP2 c1 app}. \notag
\end{align}
For given $\hat{\bs{w}}_{k,\CA}, \hat{u}_k$, the objective function in (<ref>) is concave and the constraints are convex; hence the problem can be solved efficiently by standard solvers, e.g., CVX. Because the right-hand side of (<ref>) is always less than or equal to $\frac{{\bs{w}}^H_{k,\CA}\bs{H}_k {\bs{w}}_{k,\CA}}{{u}_k}$, the approximated problem (<ref>) always gives a suboptimal solution of the original problem (<ref>).
In order to reduce the performance gap between the approximated problem (<ref>) and the original problem (<ref>), we propose Algorithm <ref>, which solves a sequence of convex approximation (SCA) subproblems. The premise behind the proposed algorithm is to better select the parameters $\hat{\bs{w}}_{k,\CA}, \hat{u}_k$ through iterations.
Iterative Algorithm to solve (<ref>)
Initialize $\hat{\bs{w}}_{k,\CA},\hat{u}_k$, $\epsilon$, $X_{\rm old}$ and $\mathtt{error}$.
while $\mathtt{error} > \epsilon$ do
Solve problem $\mathrm{P3}(\hat{\bs{w}}_{k,\CA},\hat{u}_k)$ in (<ref>) to obtain $\bs{w}^\star_k, u^\star_k, \forall k$
Compute $\mathtt{error} = |\bar{B}\sum_{k=1}^K\log_2(1 + u^\star_k) - X_{\rm old}|$
Update $X_{\rm old} \gets \bar{B}\sum_{k=1}^K\log_2(1 + u^\star_k)$; $\hat{\bs{w}}_{k,\CA} \gets \bs{w}^\star_k$; $\hat{u}_k \gets u^\star_k, \forall k$
Finding a feasible starting point is always essential in SCA. Intuitively, one may consider the feasibility problem of (<ref>), which is stated as
\begin{align}
&\underset{\bs{w}}{\mathtt{maximize}} ~~ 1 \label{OP:P3 feasibility}\\
&\mathtt{s.t.}~~ \frac{1}{\bar{\eta}_k} |\bs{h}_{k,\CA} \bs{w}_{k,\CA}|^2 \ge \sum_{i \neq k} |\bs{h}_{k,\CA} \bs{w}_{i,\CA}|^2 + \sigma^2,\forall k, \subeqn \label{eq:p3 fea c1}\\
&\qquad~~{\sum}_{k=1}^K \|\bs{w}_{k,\CA} \|^2\leq P_{tot}. \subeqn \label{eq:p3 fea c2}
\end{align}
However, since both sides of (<ref>) are convex, this constraint is non-convex. Therefore, finding a feasible point by solving (<ref>) directly is not efficient. Instead, we adopt (<ref>) as the means to find initial values $\hat{\bs{w}}, \hat{\bs{u}}$. In particular, from $\bs{W}_{\star k}, \forall k$, the solution of the convex problem (<ref>), we obtain the corresponding feasible precoding vectors $\bs{w}_{\star k}$. Then, we assign $\hat{\bs{w}}_k = \bs{w}_{\star k}$ and $\hat{u}_k = \frac{|\bs{h}_{k,\CA} \bs{w}_{\star k}|^2}{{\sum}_{i \neq k} |\bs{h}_{k,\CA} \bs{w}_{\star i}|^2 + \sigma^2}$.
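The initialization step above can be sketched in numpy: a rank-one $\bs{W}_k$ is mapped back to a precoder via its principal eigenpair, and $\hat{u}_k$ is set to the resulting SINR. The data below is hypothetical; in the actual procedure the matrices $\bs{W}_{\star k}$ come from solving the convex problem (<ref>):

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 4, 2
sigma2 = 1.0
# Stand-in rank-one "SDR solutions" W_k, built from random precoders.
ws = [rng.standard_normal(M) + 1j * rng.standard_normal(M) for _ in range(K)]
Ws = [np.outer(w, w.conj()) for w in ws]
hs = [rng.standard_normal(M) + 1j * rng.standard_normal(M) for _ in range(K)]

def extract(W):
    # Principal eigenvector scaled by sqrt(eigenvalue) recovers w
    # (up to an irrelevant phase) from a rank-one W = w w^H.
    vals, vecs = np.linalg.eigh(W)
    return np.sqrt(vals[-1]) * vecs[:, -1]

w_hat = [extract(W) for W in Ws]
# Initial u_k: SINR of user k evaluated at the feasible precoders.
u_hat = [abs(hs[k] @ w_hat[k]) ** 2 /
         (sum(abs(hs[k] @ w_hat[i]) ** 2 for i in range(K) if i != k) + sigma2)
         for k in range(K)]

assert all(u > 0 for u in u_hat)
# Rebuilding W from the extracted precoder reproduces the original matrix.
assert np.allclose(Ws[0], np.outer(w_hat[0], w_hat[0].conj()))
```

The phase ambiguity of the eigenvector is harmless here because every quantity in the optimization depends on the precoders only through squared magnitudes.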
§.§ JASPD Algorithm and Complexity Analysis
Once the precoding vectors have been optimized for each antenna subset, i.e., problem (<ref>) is solved, we can tackle the original optimization problem (<ref>) via Algorithm <ref>.
The proposed JASPD algorithm consists of two loops: the outer loop tries all valid antenna subsets, and the inner loop optimizes the precoding vectors iteratively. While the complexity of the inner loop is relatively modest since (the SDR of) problem (<ref>) (or problem (<ref>)) is convex [36], the outer loop's complexity increases combinatorially with the number of antennas. In fact, the JASPD has to examine all $\binom{N}{M}$ candidate antenna subsets. As an example, for $N = 20, M = 8$, there are $125970$ possible antenna subsets to go through, each of which requires an inner loop of Algorithm <ref> or Algorithm <ref>.
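The combinatorial growth quoted above is easy to reproduce with Python's `math.comb`:

```python
from math import comb

# Number of antenna subsets the exhaustive search must examine.
assert comb(20, 8) == 125970
assert comb(8, 4) == 70  # the N = 8, M = 4 setup used later in the simulations

# Growth is combinatorial in N for fixed M = 4.
counts = [comb(n, 4) for n in range(6, 11)]
assert counts == [15, 35, 70, 126, 210]
```

Each count multiplies the cost of a full inner optimization loop, which is why the outer search dominates the overall running time.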
Although it guarantees the maximal achievable rate, the proposed JASPD suffers from combinatorial complexity due to the selection process. Its high computation time may limit its applicability in practice and degrade the effective rate (see (<ref>)). In the next section, we propose a low-complexity joint design to overcome the computational burden of the antenna selection process.
Exhaustive Search based Joint Antenna Selection and Precoding Design
Inputs: $\bs{H}, P_{tot}, \{\eta_k\}_{k=1}^K$. Outputs: $C_{opt}, \CA_{opt}, \bs{W}_{opt}$
Construct the super group $\CAb = \{\CA~|~ \CA \subset [N], |\CA| = M\}$
Initialize $C_{opt} = 0$
for $i=1:|\CAb|$ do
$\CA = \CAb[i]$
Apply Algorithm <ref> or Algorithm <ref> on the current antenna subset $\CA$
to obtain the optimal $X_{old}(\CA)$ and $\bs{W}_\star(\CA)$
$\mathtt{If}~ C_{opt} < X_{old}(\CA)$
$C_{opt} \gets X_{old}(\CA)$; $\CA_{opt} \leftarrow \CA$; $\bs{W}_{opt} = \bs{W}_\star(\CA)$.
§ ACCELERATING THE OPTIMIZATION: A DEEP LEARNING-BASED APPROACH
In this section, we exploit recent advances in machine learning to overcome the high-complexity limitation of the selection process by proposing a learning-based antenna selection and precoding design algorithm (L-ASPD).
The premise behind the proposed L-ASPD is to exploit machine learning-based predictions to help the optimal algorithm tackle the most difficult and time-consuming part of the optimization. In particular, the L-ASPD first predicts a small number of potential antenna subsets, far fewer than $\binom{N}{M}$.
We deploy a DNN as the learning model to establish the underlying relation between the system parameters (inputs) and the selected antenna subset (output). The DNN consists of three main parts: one input layer, one output layer, and hidden layers, as depicted in Fig. <ref>. Based on the labeled data, the DNN optimizes the learning parameters in order to minimize the prediction error, i.e., the cost function.
The L-ASPD is implemented via 3 steps: i) offline training data generation, ii) building the learning model, and iii) real-time prediction.
Illustration of a DNN with three hidden layers.
§.§ Training Data Generation
Since the communication between the BS and the users is specified by the channel gains, the transmit power budget, and the noise power, these quantities are essential inputs to the learning model. Let $\bs{H} = [\bs{h}_1^H, \dots, \bs{h}_K^H]^H \in \mathbb{C}^{K\times N}$ denote the channel coefficients from the BS's antennas to all users. Since the number of users can be anywhere between 1 and $M$ (the number of RF chains), the channel matrix $\bs{H}$ is first zero-padded to the standard size $\bar{\textbf{H}} = [\bs{H}^H, \boldsymbol{0}_{N\times (M-K)}]^H \in \mathbb{C}^{M \times N}$. Because the NN accepts only real-valued inputs, the complex channel matrix cannot be fed in directly. One could stack the real and imaginary parts of $\bar{\textbf{H}}$ and use them as the training input [29]. However, we observe that such a method is not efficient for our problem because it does not directly capture inter-user interference, the major limiting factor in multiuser systems. As the inter-user interference is determined by the cross-products of the users' channel vectors, we choose $\bs{x} = \frac{P_{tot}}{\sigma^2}\mathrm{abs}(\mathrm{vec}(\bar{\textbf{H}} \bar{\textbf{H}}^H)) \in \mathbb{R}^{M^2\times 1}$ as the training input. It is worth noting that the training input $\bs{x}$ is robust against the number of users, the pathloss, and the BS's transmit power. Last but not least, $\bs{x}$ should be normalized before being fed to the NN, i.e., $\bs{x} = \frac{\bs{x}}{\max(\bs{x})}$.
Once the input sample is given, we need to define the output: the antenna subset that yields the maximum objective in (<ref>). For each training input $\bs{x}$, we define an output vector $\bs{b} \in \{0,1\}^{\binom{N}{M}\times 1}$ indexing all possible antenna subsets: $b[n] = 1$ if the $n$-th subset is selected, and $b[n] = 0$ otherwise. Because we are interested in selecting only one subset, we have $\|\bs{b}\|_0 = 1$. To compute $\bs{b}$, for each channel realization $\bs{H}$ (corresponding to $\bs{x}$), we run the proposed JASPD algorithm to find the best antenna subset $\CA^\star$ and set the corresponding output element $b[n^\star] = 1$.
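The input/output encoding described above can be sketched as follows (hypothetical dimensions $N=6$, $M=4$, $K=3$; the label index 7 stands in for the subset the JASPD would actually return):

```python
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(3)
N, M, K = 6, 4, 3
Ptot, sigma2 = 2.0, 1e-3

# Channel matrix of K active users, zero-padded to M rows.
H = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
H_bar = np.vstack([H, np.zeros((M - K, N))])  # M x N

# Feature vector: scaled |vec(H_bar H_bar^H)|, then max-normalized.
# H_bar H_bar^H is M x M, so the feature has M^2 entries.
x = (Ptot / sigma2) * np.abs((H_bar @ H_bar.conj().T).flatten())
x = x / x.max()
assert x.shape == (M * M,) and x.max() == 1.0

# One-hot label over all C(N, M) antenna subsets.
subsets = list(combinations(range(N), M))
best = subsets[7]  # placeholder for the JASPD output A*
b = np.zeros(comb(N, M))
b[subsets.index(best)] = 1.0
assert b.sum() == 1.0  # exactly one subset is selected
```

The cross-product structure $\bar{\textbf{H}}\bar{\textbf{H}}^H$ makes the pairwise channel alignments between users, and hence the interference geometry, directly visible to the network.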
Steps to generate training samples for L-ASPD
1. For $t = 1:N_S$
2. Generate a random number of users $K$ between $[1, M]$.
3. Generate random locations of these $K$ users between 50 and
300m from the BS. Calculate the pathloss.
4. Generate a channel matrix $\bs{H}\in\mathbb{C}^{K\times N}$, including
the pathloss.
Output sample generation
5. Run JASPD algorithm to find the best antenna subset.
6. Compute the binary output vector $\bs{b}_t$ with only a single
non-zero element corresponding to the selected subset.
Input sample generation
5. Zero-padding: $\bar{\textbf{H}} = [\bs{H}^H, \boldsymbol{0}_{N\times (M-K)}]^H$.
6. Calculate $\bs{x}_t = \frac{P_{tot}}{\sigma^2}\mathrm{abs}(\mathrm{vec}(\bar{\textbf{H}} \bar{\textbf{H}}^H))$; $\bs{x}_t = \frac{\bs{x}_t}{\max(\bs{x}_t)}$.
7. Endfor
Denote by $N_S$ the number of samples used to train the learning model. The total training input is aggregated in the input matrix $\bs{X} = [\bs{x}_1, \bs{x}_2, \dots, \bs{x}_{N_S}]$, where $\bs{x}_t$ is the $t$-th input sample. Similarly, the training output matrix is $\bs{B} = [\bs{b}_1, \dots, \bs{b}_{N_S}]$, where $\bs{b}_t$ is the $t$-th output sample corresponding to $\bs{x}_t$. The steps for generating the training samples are listed in Table <ref>. We note that the JASPD algorithm in Table <ref> is used only to generate training samples and is executed off-line. Once the NN is well trained, it is used only to predict the selected antenna subsets in the real-time phase.
§.§ Building the Learning Model
When the training data is available, it is used to train the NN with learning parameters $\bs{\Theta}$. For an $L$-layer NN, we have $\bs{\Theta} = [\bs{\theta}_1, \dots, \bs{\theta}_L]$, where $\bs{\theta}_l \in \mathbb{R}^{N_l\times 1}, 1\le l \le L$, collects the learning parameters of the $l$-th layer, and $N_l$ is the number of nodes in the $l$-th layer. As a popular and efficient choice for classification problems, we employ the sigmoid-family function $\mathtt{tansig}(z) = 2(1+e^{-2z})^{-1}-1$ as the activation of the hidden layers and the soft-max as the activation of the output layer. The learning phase is carried out by minimizing the prediction error
\begin{align}
\Delta(\bs{\Theta}) =& \frac{1}{N_S}\Big(-\mathrm{Tr}\big(\bs{B}^T \log(f_{\bs{\Theta}}(\bs{X}))\big) \\
&- \mathrm{Tr}\big(\bar{\bs{B}}^T \log(1 - f_{\bs{\Theta}}(\bs{X}))\big)\Big)
+ \frac{\lambda}{2N_S} {\sum}_{l=1}^L \|\bs{\theta}_l\|^2, \notag
\end{align}
where $\lambda$ is the regularization parameter, $\bar{\bs{B}} = \bs{1} - \bs{B}$, and $f_{\bs{\Theta}}(\bs{X})$ is the prediction of the output layer.
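A small numpy sketch of the ingredients above: it confirms that $\mathtt{tansig}$ coincides with $\tanh$ and evaluates the regularized cross-entropy on stand-in data (the predictions $f_{\bs{\Theta}}(\bs{X})$ and parameters $\bs{\theta}_l$ below are hypothetical, not trained values):

```python
import numpy as np

# tansig(z) = 2 / (1 + e^{-2z}) - 1 is exactly tanh(z).
z = np.linspace(-5.0, 5.0, 101)
tansig = 2.0 / (1.0 + np.exp(-2.0 * z)) - 1.0
assert np.allclose(tansig, np.tanh(z))

# Regularized cross-entropy over N_S samples (random stand-in data).
rng = np.random.default_rng(4)
N_S, C = 8, 15                             # samples, classes (antenna subsets)
B = np.eye(C)[rng.integers(0, C, N_S)].T   # one-hot labels, C x N_S
F = rng.uniform(0.01, 0.99, (C, N_S))      # hypothetical output-layer predictions
lam = 0.1
thetas = [rng.standard_normal(5) for _ in range(3)]  # hypothetical layer weights

loss = (-np.trace(B.T @ np.log(F))
        - np.trace((1 - B).T @ np.log(1 - F))) / N_S \
       + lam / (2 * N_S) * sum(np.linalg.norm(t) ** 2 for t in thetas)
assert loss > 0
```

The trace terms sum the per-sample, per-class cross-entropy, and the $\ell_2$ penalty on the layer weights discourages overfitting to the generated samples.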
§.§ Real-time Prediction
When the NN has been well trained, it is ready to provide real-time, highly accurate predictions. From the current channel coefficient matrix $\bs{H}$, we construct $\bs{x} = \frac{P_{tot}}{\sigma^2}\mathrm{abs}(\mathrm{vec}(\bar{\textbf{H}} \bar{\textbf{H}}^H))$, where $\bar{\textbf{H}}= [\bs{H}^H, \boldsymbol{0}_{N\times (M-K)}]^H$, which is then normalized to obtain $\bs{x}_{\rm norm} = \frac{\bs{x}}{\max(\bs{x})}$. Then $\bs{x}_{\rm norm}$ is fed to the trained NN, which outputs the prediction vector $\hat{\bs{b}}$. It is worth noting that the NN does not provide hard predictions, e.g., $0$ or $1$, but probabilistic scores, i.e., $0\le \hat{b}[n] \le 1, \forall n$. In general, the larger an element of $\hat{\bs{b}}$ is, the higher the chance that the corresponding subset is the best one. Consequently, the subset $\CA_{n^\star}$ corresponding to the largest prediction, i.e., $n^\star = \arg\max_n \hat{b}[n]$, can be selected.
Proposed L-ASPD Algorithm
Inputs: $\Theta$, $\bs{H}, P_{tot}, \{\eta_k\}_{k=1}^K$. Outputs: $C_{opt}, \CA_{opt}, \bs{w}_{opt}$
Construct $\bs{x} = \frac{P_{tot}}{\sigma^2}\mathrm{abs}(\mathrm{vec}(\bar{\textbf{H}} \bar{\textbf{H}}^H))$; $\bs{x}_{\rm norm} = \frac{\bs{x}}{\max(\bs{x})}$
Apply $\bs{x}_{\rm norm}$ to the learned model $\bs{\Theta}$ to predict $\bs{\mathcal{K}}_S$
Initialize $C_{opt} = 0$
for $\CA \in \boldsymbol{\mathcal{K}}_S$
Apply Algorithm <ref> or <ref> on the current subset $\CA$ to
obtain the optimal $X_{old}(\CA)$ and $\bs{w}_{\star,\CA}$
$\mathbf{if}~ C_{opt} < X_{old}(\CA)$
$C_{opt} = X_{old}(\CA)$; $\CA_{opt} \leftarrow \CA$; $\bs{w}_{opt} \gets \bs{w}_{\star ,\CA}$.
However, the prediction is not always precise. Therefore, in order to improve the performance of the L-ASPD, instead of choosing only the single best candidate, we select the $K_S$ subsets, denoted by $\boldsymbol{\mathcal{K}}_S$, corresponding to the $K_S$ largest elements in $\hat{\bs{b}}$. Then, we apply the precoding design (Algorithm <ref> or <ref>) to these $K_S$ subsets. Intuitively, larger values of $K_S$ increase the chance for the L-ASPD to select the best antenna subset at the expense of higher computational complexity. The steps of the L-ASPD are listed in Algorithm <ref>. Compared with the JASPD, the L-ASPD significantly reduces the computational time since it tries only $K_S$ promising candidates instead of $\binom{N}{M}$. Consequently, the L-ASPD is expected to achieve a higher effective sum rate than the JASPD, especially when $K_S \ll \binom{N}{M}$.
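Selecting the $K_S$ most promising subsets from the prediction vector reduces to a top-$K$ argsort; a minimal sketch with hypothetical scores:

```python
import numpy as np
from itertools import combinations

N, M, K_S = 6, 4, 3
subsets = list(combinations(range(N), M))  # all C(N, M) antenna subsets

rng = np.random.default_rng(5)
b_hat = rng.uniform(0.0, 1.0, len(subsets))  # hypothetical NN prediction scores

# Indices of the K_S largest predictions -> candidate antenna subsets.
top = np.argsort(b_hat)[::-1][:K_S]
candidates = [subsets[i] for i in top]

assert len(candidates) == K_S
assert b_hat[top[0]] == b_hat.max()  # first candidate is the top prediction
```

Only these `candidates` are then passed to the precoding optimization, which is the source of the complexity saving quantified in the next section.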
§ PERFORMANCE EVALUATION
In this section, we evaluate the performance of the proposed algorithms via simulation results. The users are uniformly distributed in an annulus between $50$ and $300$ meters from the BS at the cell center. We employ the WINNER II line-of-sight pathloss model [33], which results in a pathloss between $-59.4$ dB and $-74.6$ dB. All wireless channels are subject to Rayleigh fading. The channel bandwidth is $B = 1$ MHz and the noise spectral density is $-140$ dBm/Hz.
We adopt the LTE specifications [34] in which one c.u. lasts one symbol duration, equal to 66.7 $\mu$s, and one block spans 200 c.u.. The BS is assumed to spend 0.2 c.u. to solve one convex optimization problem [36]. As a result, it takes $0.2K_S$ c.u. to execute the proposed L-ASPD, where $K_S$ is the number of predicted subsets. We employ an NN with two hidden layers to train the learning model for the L-ASPD, each consisting of 100 nodes[We heuristically tried different numbers of hidden layers and found that an NN with two hidden layers is sufficient for our problem.]. SVM could also be employed thanks to its fast training phase, but it results in poorer performance than the NN. This is because SVM discriminates the data with hyperplanes, whereas the NN can discriminate data using more elaborate functions. The NN is trained using the scaled conjugate gradient method. Other simulation parameters are listed in Table <ref>.
Simulation parameters
\begin{tabular}{ll}
Parameters & Value \\
Cell radius & 300 m \\
BS's transmit power & 1 W - 5 W \\
Number of RF chains $M$ & 4 \\
Number of antennas $N$ & Varies \\
Number of users $K$ & Varies between 1 and $M$ \\
QoS $\eta_k = \eta, \forall k$ & 2 Mbps \\
Training method & Scaled conjugate gradient \\
Activation function (hidden layers) & $\mathtt{tansig}$ \\
Activation function (output layer) & $\mathtt{soft}$-$\mathtt{max}$ \\
Loss function & Cross-entropy
\end{tabular}
[Sum rate versus number of iterations]
[Sum rate versus simulation time]
Performance comparison of the proposed Algorithms 1 and 2, $P_{tot} = 37$ dBm and $K = 4$. Both algorithms converge in fewer than 10 iterations.
§.§ Convergence of the Proposed Optimization Algorithms
We first evaluate the convergence of the proposed iterative Algorithms <ref> and <ref> presented in Section <ref>. The results are obtained from 200 random realizations of the channel fading coefficients and users' locations. For each realization, we run both Algorithms <ref> and <ref> until they converge. Fig. <ref>a compares the sum rate obtained by the two proposed algorithms as a function of the iteration number. It is clearly shown that both algorithms converge quickly, after fewer than 10 iterations, which demonstrates the effectiveness of the proposed iterative algorithms.
To provide insights into the computational performance of the proposed algorithms, we show in Fig. <ref>b the sum rate versus the simulation time. Both algorithms are carried out by the SeDuMi solver integrated in Matlab 2017b, running on a personal laptop with an Intel i7-6820HQ CPU and 8 GB of RAM. It is observed that Algorithm <ref> executes slightly faster than Algorithm <ref> but achieves a smaller sum rate. The performance gain of Algorithm <ref> stems from the fact that it uses more memory than Algorithm <ref>, as shown in Table <ref>. Due to its superior performance, we employ Algorithm <ref> in the remaining comparisons.
Number of variables required by Algorithms 1 and 2 for different setups with $N = 8$.
\begin{tabular}{lcccc}
$M$ & $2$ & $3$ & $4$ & $5$ \\
Algorithm 1 & 267 & 400 & 533 & 666 \\
Algorithm 2 & 55 & 94 & 141 & 196
\end{tabular}
§.§ Performance-complexity Trade-off of the L-ASPD
In this subsection, we examine the efficiency of the proposed L-ASPD via a performance-complexity trade-off. By confining the search space of the prediction output, i.e., $K_\CS$, the number of potential antenna subsets, we can manage the complexity of the L-ASPD since it works only on $K_\CS$ candidates. The complexity gain of the L-ASPD is defined as the relative time saving compared to the exhaustive search that tries every antenna subset, calculated as:
\begin{align}
\theta(K_\CS) = \frac{\tau(\binom{N}{M} - K_\CS)}{\tau \binom{N}{M}} = 1 - \frac{K_\CS}{\binom{N}{M}},
\end{align}
where $\tau$ is the computational time spent on optimizing the precoding vectors for one selected antenna subset. The performance gain is defined as the ratio of the sum rate obtained by the L-ASPD to the optimal sum rate, which is achieved by searching over all possible antenna subsets.
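The complexity gain above is straightforward to evaluate; for the simulated setup $N = 8$, $M = 4$:

```python
from math import comb

def complexity_gain(K_S, N=8, M=4):
    # theta(K_S) = 1 - K_S / C(N, M): fraction of subset evaluations saved
    # relative to the exhaustive search over all C(N, M) subsets.
    return 1.0 - K_S / comb(N, M)

assert comb(8, 4) == 70
assert abs(complexity_gain(5) - (1 - 5 / 70)) < 1e-12
assert round(complexity_gain(2), 3) == 0.971  # trying 2 of 70 subsets
```

Note that $\tau$ cancels in the ratio, so the gain depends only on how aggressively the candidate list is pruned, not on the per-subset solver time.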
Fig. <ref> plots the performance-complexity tradeoff of the proposed L-ASPD with $M=4$ RF chains and $N=8$ total antennas. It is observed that the L-ASPD retains more than 96% of the optimal sum rate (obtained by exhaustive search) while saving more than 95% of the complexity. Even when spending only 2% of the computational time, the L-ASPD still achieves 86% of the optimal performance, which confirms the effectiveness of the proposed algorithm. Compared with the heuristic solution, the L-ASPD further reduces the computational time by more than 13% at the 95% performance-gain target.
Fig. <ref> plots the relative real-time prediction performance of the L-ASPD versus the number of training samples. The relative performance is measured as the ratio of the L-ASPD's sum rate to that obtained by the JASPD. Each training sample is generated randomly and captures the randomness in both the channel small-scale fading and the user locations. In general, more training samples yield better prediction accuracy, since the L-ASPD learns more about the intrinsic relation between the selected antennas and the input features. It is shown that $2\times 10^5$ training samples are sufficient for the L-ASPD to achieve more than 94% of the optimal performance.
Performance-complexity tradeoff of the proposed L-ASPD. $ M = 4, N = 8$.
Learning (relative) performance versus the number of training samples. $M = 4, N = 8$.
§.§ Online Performance Comparison
This subsection demonstrates the effectiveness of the proposed L-ASPD algorithm via performance comparisons with existing solutions in different scenarios. The first baseline scheme is proposed in [5], which employs block diagonalization to successively eliminate the antennas that incur the largest transmit power cost. The second baseline is introduced in [29], which is a learning-assisted antenna selection for multicasting. In addition, a heuristic search is also presented, which applies the proposed beamforming design but searches for the antenna subset heuristically. We note that a comparison with [27, 28, 30] is not applicable because [27, 28] consider a single-user system and [30] selects only a single antenna.
Sum rate performance of the proposed algorithms versus the number of predicted subsets $K_S$. $P_{tot} = 33$ dBm, $M = 4$ and $N = 8$.
Fig. <ref> shows the achievable sum rate as a function of $K_S$, the number of most promising subsets predicted by the proposed L-ASPD algorithm. In order to reveal the benefit of the proposed beamforming design in Algorithm <ref>, we also show a curve that applies a zero-forcing based power control [35] on the antenna subsets predicted by Algorithm <ref>; this curve is named Proposed - Zero Forcing in the figures. It is shown that the proposed L-ASPD significantly surpasses all schemes for all observed $K_S$ values. In general, more predicted subsets $K_S$ result in a larger sum rate, in line with the results in Fig. <ref>. In particular, by searching over the five most promising subsets, the proposed L-ASPD achieves 1 Mbps and 2 Mbps more than the schemes in [29] and [5], respectively. We note that the sum rate of the scheme in [5] is independent of $K_S$ since it determines a single antenna subset. Similarly, the performance curve of [29] has a step shape because it uses the active antennas as the prediction outputs; hence it is only able to confine the original search space to $\binom{M+n}{M}$ subsets, with $0\le n\le N-M$.
Fig. <ref> plots the sum rate as a function of the transmit power. The effectiveness of the proposed learning-based method is shown by the largest sum rate being achieved by the L-ASPD among all schemes. On average, the L-ASPD algorithm produces 1.5 Mbps and 2 Mbps more than the solution in [29] and the heuristic scheme, respectively, proving that the NN has been well trained. Compared to the solution in [5], the L-ASPD achieves a relative sum rate gain of 5 Mbps and 2 Mbps at transmit powers of 30 dBm and 33 dBm, respectively. One interesting observation is that the zero-forcing scheme and the solution in [5] approach the L-ASPD's performance as the total transmit power budget increases. This is because, for large $P_{tot}$, the BS has a sufficient power budget to fully mitigate inter-user interference. For small $P_{tot}$, the system resources become scarce; therefore, completely eliminating inter-user interference is far from optimal, which is reflected in the large gap between the L-ASPD and these two schemes. In such high-load scenarios, employing the proposed design is highly beneficial.
Sum rate performance of the proposed algorithms versus the total transmit power $P_{tot}$. $K_S = 7$ and $N = 8$ available antennas.
Effective sum rate comparison for various number of total antennas $N$. $P_{tot}$ = 30 dBm, $M = 4, K_S = 10$.
Fig. <ref> presents the effective sum rate for different numbers of total antennas $N$. For a fair comparison, the total transmit power is kept constant at 30 dBm and the total overhead of channel estimation and computation is taken into account. For the former, it takes 8 c.u. to obtain the CSI when the total antenna number is $6, 7, 8$, and 12 c.u. when the number of antennas is 9 or 10. As for the latter, the L-ASPD algorithm searches over only the 10 most promising candidates, while the JASPD tries all $\binom{N}{M}$ antenna subsets. In general, having more antennas results in a higher effective sum rate for all schemes, which confirms the benefit of antenna selection. Interestingly, the proposed L-ASPD algorithm achieves the best performance and surpasses the exhaustive search scheme, especially for large $N$, in contrast to the common understanding that the exhaustive search achieves the best performance. This is because we take the computation time into account in the comparison, as shown in (<ref>). As a result, the exhaustive search scheme spends too much time searching for the best subset, particularly for large $N$, resulting in smaller effective rates. For example, with $N = 10$, the exhaustive search scheme requires a computation time 21 times that of the L-ASPD.
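The 21-fold figure for $N = 10$ can be verified by simple counting. The sketch below is illustrative and not from the paper; it equates computation time with the number of evaluated subsets, which is a simplification of the cost model in (<ref>):

```python
from math import comb

N, M, K_S = 10, 4, 10         # total antennas, active antennas, predicted subsets
exhaustive = comb(N, M)       # subsets evaluated by the exhaustive search
reduction = exhaustive // K_S # search-space (and roughly time) reduction factor

print(exhaustive, reduction)  # prints "210 21", matching the text
```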
§ CONCLUSIONS
We studied the joint design of antenna selection and precoding vectors in multi-user multi-antenna systems to fully exploit the spatial diversity. We first proposed a (near) optimal joint antenna selection and precoding algorithm to maximize the system sum rate, subject to the users' QoS and a limited transmit power. The proposed joint design successively optimizes the precoding vectors via two proposed iterative optimization algorithms based on the semidefinite relaxation and successive convex approximation methods. To further improve the optimization efficiency, we then developed a machine learning-based solution to provide appropriate and time-stringent antenna predictions. The proposed learning-based algorithm is robust against the number of users and their locations, the BS's transmit power, as well as the channel fading. We showed via simulation results that the proposed learning-based solution significantly surpasses existing selection schemes and outperforms the exhaustive search-based solution.
Based on the outcome of this work, several research directions can be considered. The first problem is how to improve the efficiency of the training phase, which is especially important when the number of available antennas is very large. In such a case, a low-complexity precoding design, e.g., zero-forcing, can be used to quickly obtain sufficient training samples. The second problem lies in dealing with network dynamics, which require the learning model to be adapted frequently and in a timely manner. Transfer learning and reinforcement learning are promising solutions in this case to avoid retraining the whole network.
§ PROOF OF PROPOSITION <REF>
Denote $\big(\bs{W}^{(t)}_\star, \bs{x}^{(t)}_{\star}, \bs{y}^{(t)}_{\star}\big)$ as the optimal solution of $\text{P2}(\bs{y}^{(t)}_0)$ at iteration $t$. We will show that if $y_{\star k}^{(t)} < y^{(t)}_{0k}, \forall k$, then by using $y^{(t+1)}_{0k} = y_{\star k}^{(t)}$ in the $(t+1)$-th iteration, we will have $\sum_k x_{\star k}^{(t+1)} > \sum_k x^{(t)}_{\star k} $, where $\{x_{\star k}^{(t+1)}\}_{k=1}^K$ is the solution at iteration $t+1$. Indeed, by choosing a relatively large initial value $\bs{y}^{(1)}_0$, we always have $y_{\star k}^{(1)} < y^{(1)}_{0k}, \forall k$.
Denote by $f(y;a) = e^a(y - a + 1)$ the first-order approximation of the function $e^y$ at $a$. At iteration $t+1$, we have $y^{(t+1)}_{0k} = y^{(t)}_{\star k}, \forall k$. Therefore, $f(y;y_{\star k}^{(t)})$ is used in the right-hand side of constraint (<ref>) at the $(t+1)$-th iteration. Consider a candidate $(y^{(t+1)}_1, \dots, y^{(t+1)}_K)$ with $y^{(t+1)}_k \in (\hat{y}_k, y^{(t)}_{\star k})$, where $\hat{y}_k = y_{\star k}^{(t)} - 1 + e^{y^{(t)}_{0k} - y_{\star k}^{(t)}}(y_{\star k}^{(t)} - y^{(t)}_{0k} + 1)$. Because the function $\exp()$ is convex and $y^{(t+1)}_k < y_{\star k}^{(t)}$, we have $f(y^{(t+1)}_k; y_{\star k}^{(t)}) > f(y_{\star k}^{(t)}; y^{(t)}_{0k}), \forall k$. Therefore, there exist $\bs{W}^{(t+1)}_k$ and $x^{(t+1)}_k > x^{(t)}_{\star k}$ which satisfy constraints (<ref>) and (<ref>). Consider the new set $\{\bs{W}^{(t+1)}_k, x^{(t+1)}_k, y^{(t+1)}_k\}_{k=1}^K$. This set satisfies all the constraints of problem $\text{P2}(\bs{y}^{(t)}_\star)$ and is therefore a feasible solution of the optimization problem. As a result, the optimal objective at iteration $(t+1)$, $\frac{\bar{B}}{\log(2)}\sum_k x^{(t+1)}_{\star k}$, must satisfy $\frac{\bar{B}}{\log(2)}\sum_k x^{(t+1)}_{\star k} \ge \frac{\bar{B}}{\log(2)}\sum_k x^{(t+1)}_k > \frac{\bar{B}}{\log(2)}\sum_k x^{(t)}_{\star k}$, which completes the proof of Proposition 1.
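The strict inequality above can be spot-checked numerically. The following sketch (illustrative values only, not from the paper) confirms that the tangent $f(y;a)$ lies below the convex function $e^y$ and that the inequality holds for a point in $(\hat{y}_k, y^{(t)}_{\star k})$:

```python
import math

def f(y, a):
    """First-order approximation of exp(y) at the point a."""
    return math.exp(a) * (y - a + 1)

y0, ystar = 2.0, 1.5                      # sample iterates with y* < y0
yhat = ystar - 1 + math.exp(y0 - ystar) * (ystar - y0 + 1)
y = 0.5 * (yhat + ystar)                  # any point in (yhat, y*)

assert yhat < y < ystar
assert f(y, ystar) <= math.exp(y)         # tangent lies below the convex exp
assert f(y, ystar) > f(ystar, y0)         # the strict inequality used in the proof
```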
§ CONVEXITY OF FUNCTION $\FRAC{\BS{X}^T\BS{A}\BS{X}}{Y}$
To prove the convexity of $F(\bs{x},y) = \frac{\bs{x}^T\bs{A}\bs{x}}{y}$ for any positive semi-definite matrix $\bs{A}$, we need to show that the Hessian matrix of $F(\bs{x},y)$ is positive semidefinite. Indeed, the Hessian matrix of $F(\bs{x},y)$ is
\begin{align}
\bs{H}_F = \left[
\begin{array}{c c}
\frac{\bs{A} + \bs{A}^T}{y} & -\frac{(\bs{A} + \bs{A}^T)\bs{x}}{y^2}\\
-\frac{\bs{x}^T(\bs{A} + \bs{A}^T)}{y^2} & \frac{2\bs{x}^T\bs{A}\bs{x}}{y^3}
\end{array}
\right]. \notag
\end{align}
For an arbitrary vector $\bs{c} = [\bs{a}^T\; b]^T$, where $\bs{a} \in \mathbb{R}^{N\times 1}$ and $b \in \mathbb{R}$, consider the quadratic form
\begin{align}
&\bs{c}^T \bs{H}_F \bs{c} = \frac{\bs{a}^T(\bs{A} + \bs{A}^T)\bs{a}}{y} - \frac{\bs{a}^T(\bs{A} + \bs{A}^T)\bs{x}b}{y^2} \notag \\
- \frac{\bs{x}^T(\bs{A} + \bs{A}^T)\bs{a}b}{y^2} + \frac{2\bs{x}^T\bs{A}\bs{x}b^2}{y^3} \notag \\
&\overset{(*)}{=}\frac{\bs{a}^T\!(\bs{A}\! +\! \bs{A}^T)\bs{a}}{y}\! -\! 2\frac{\bs{a}^T(\bs{A}\! +\! \bs{A}^T)\bs{x}b}{y^2}
+ \frac{\bs{x}^T(\bs{A}\! +\! \bs{A}^T)\bs{x}b^2}{y^3} \notag\\
&= \frac{\bs{a}^T\tilde{\bs{A}}\bs{a} - 2\bs{a}^T\tilde{\bs{A}}\tilde{\bs{x}} + {\tilde{\bs{x}}}^T\tilde{\bs{A}}\tilde{\bs{x}}}{y},\label{eq:HF}
\end{align}
where $\tilde{\bs{A}} \triangleq \bs{A}^T + \bs{A}$, $\tilde{\bs{x}} \triangleq \bs{x}b/y$, and $(*)$ results from the fact that $\bs{A}$ is symmetric and $\bs{a}^T\tilde{\bs{A}}\tilde{\bs{x}} = \tilde{\bs{x}}^T\tilde{\bs{A}}\bs{a}$. It is obvious that the RHS of (<ref>) is always non-negative for $y > 0$ and a positive semi-definite matrix $\tilde{\bs{A}}$, which establishes the positive semi-definiteness of the Hessian matrix of $F(\bs{x},y)$.
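As a numerical sanity check (not part of the paper), one can build $\bs{H}_F$ for a random positive semi-definite $\bs{A}$ and confirm both its positive semi-definiteness and the closed form $\bs{c}^T\bs{H}_F\bs{c} = (\bs{a}-\tilde{\bs{x}})^T\tilde{\bs{A}}(\bs{a}-\tilde{\bs{x}})/y$ implied by (<ref>):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
B = rng.standard_normal((N, N))
A = B @ B.T                          # random symmetric positive semi-definite A
x, y = rng.standard_normal(N), 1.7   # any x and y > 0

At = A + A.T                         # \tilde{A}
# Hessian of F(x, y) = x^T A x / y in the block form given above
H = np.block([[At / y,                    -(At @ x)[:, None] / y**2],
              [-(x @ At)[None, :] / y**2, np.array([[2 * x @ A @ x / y**3]])]])

assert np.linalg.eigvalsh(H).min() > -1e-10   # H is positive semi-definite

# spot-check the quadratic form against (a - x~)^T A~ (a - x~) / y
a, b = rng.standard_normal(N), rng.standard_normal()
c, xt = np.append(a, b), x * b / y
assert np.isclose(c @ H @ c, (a - xt) @ At @ (a - xt) / y, atol=1e-8)
```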
[1]
E. G. Larsson, O. Edfors, F. Tufvesson, and T. L. Marzetta, “Massive
MIMO for next generation wireless systems,” IEEE Commun.
Mag., vol. 52, no. 2, pp. 186–195, Feb. 2014.
[2]
R. Heath and A. Paulraj, “Antenna selection for spatial multiplexing systems
based on minimum error rate,” in Proc. IEEE Int. Conf. Commun., Jun.
2001, pp. 2276–2280.
[3]
Y. Pei, T.-H. Pham, and Y. Liang, “How many RF chains are optimal for
large-scale MIMO systems when circuit power is considered?” in Proc.
IEEE Global Commun. Conf., Dec. 2012, pp. 3868–3873.
[4]
J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. K. Soong, and
J. C. Zhang, “What will 5G be?” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1065–1082, Jun. 2014.
[5]
R. Chen, J. G. Andrews, and R. W. Heath, “Efficient transmit antenna
selection for multiuser MIMO systems with block diagonalization,” in
Proc. IEEE Global Telecommunications Conference, Nov. 2007, pp.
[6]
O. Mehanna, N. D. Sidiropoulos, and G. B. Giannakis, “Joint multicast
beamforming and antenna selection,” IEEE Trans. Signal
Process., vol. 61, no. 10, pp. 2660–2674, May 2013.
[7]
S. Qin, G. Li, G. Lv, G. Zhang, and H. Hui, “L1/2-regularization
based antenna selection for RF-chain limited massive MIMO systems,” in
Proc. IEEE Vehicular Technology Conference (VTC-Fall), Sep. 2016,
pp. 1–5.
[8]
T. X. Vu, S. Chatzinotas, S. ShahbazPanahi, and B. Ottersten, “Joint power
allocation and access point selection for cell-free massive MIMO,” in
Proc. IEEE Int. Conf. Commun., May 2020, pp. 1–6.
[9]
M. S. Ibrahim, A. Konar, M. Hong, and N. D. Sidiropoulos, “Mirror-prox
SCA algorithm for multicast beamforming and antenna selection,” in
Proc. IEEE Int. Workshop Signal Process. Adv. Wireless Commun., June 2018, pp. 1–5.
[10]
O. Tervo, L. Tran, H. Pennanen, S. Chatzinotas, B. Ottersten, and
M. Juntti, “Energy-efficient multicell multigroup multicasting with joint
beamforming and antenna selection,” IEEE Trans. Signal
Process., vol. 66, no. 18, pp. 4904–4919, Sep. 2018.
[11]
S. He, Y. Huang, J. Wang, L. Yang, and W. Hong, “Joint antenna
selection and energy-efficient beamforming design,” IEEE Signal
Process. Lett., vol. 23, no. 9, pp. 1165–1169, Sep. 2016.
[12]
T. O’Shea and J. Hoydis, “An introduction to deep learning for the physical
layer,” IEEE Trans. Cog. Commun. Netw., vol. 3, no. 4, pp. 563–575,
Dec. 2017.
[13]
A. Zappone, M. Di Renzo, and M. Debbah, “Wireless networks design in the era of deep learning: Model-based, AI-based, or both?” IEEE Trans. Commun., vol. 67, no. 10, pp. 7331–7376, Oct. 2019.
[14]
A. Zappone, M. Di Renzo, M. Debbah, T. T. Lam, and X. Qian, “Model-aided wireless artificial intelligence: Embedding expert knowledge in deep neural networks for wireless system optimization,” IEEE Veh. Techno. Mag., vol. 14, no. 3, pp. 60–69, Sept. 2019.
[15]
L. Lei, L. You, G. Dai, T. X. Vu, D. Yuan, and S. Chatzinotas, “A deep
learning approach for optimizing content delivering in cache-enabled
HetNet,” in Proc. IEEE Int. Symp. Wireless Commun. Syst., Aug. 2017,
pp. 449–453.
[16]
W. Xia, G. Zheng, Y. Zhu, J. Zhang, J. Wang, and A. P. Petropulu,
“A deep learning framework for optimization of MISO downlink beamforming,”
IEEE Trans. Commun., vol. 68, no. 3, pp. 1866–1880,
March 2020.
[17]
H. Huang, W. Xia, J. Xiong, J. Yang, G. Zheng, and X. Zhu,
“Unsupervised learning-based fast beamforming design for downlink MIMO,”
IEEE Access, vol. 7, pp. 7599–7605, 2019.
[18]
H. Huang, Y. Peng, J. Yang, W. Xia, and G. Gui, “Fast beamforming
design via deep learning,” IEEE Trans. Veh. Techno.,
vol. 69, no. 1, pp. 1065–1069, Jan 2020.
[19]
J. Jang, H. Lee, S. Hwang, H. Ren, and I. Lee, “Deep learning-based
limited feedback designs for MIMO systems,” IEEE Wireless
Commun. Lett., pp. 1–1, 2019.
[20]
T. Lin and Y. Zhu, “Beamforming design for large-scale antenna arrays
using deep learning,” IEEE Wireless Commun. Lett., vol. 9,
no. 1, pp. 103–107, Jan 2020.
[21]
T. E. Bogale, X. Wang, and L. Le, “Adaptive channel prediction,
beamforming and scheduling design for 5G V2I network: Analytical and machine
learning approaches,” IEEE Trans. Veh. Techno., pp.
1–1, 2020.
[22]
R. Shafin, H. Chen, Y. H. Nam, S. Hur, J. Park, J. Zhang,
J. Reed, and L. Liu, “Self-tuning sectorization: Deep reinforcement
learning meets broadcast beam optimization,” IEEE Trans.
Wireless Commun., pp. 1–1, 2020.
[23]
F. B. Mismar, B. L. Evans, and A. Alkhateeb, “Deep reinforcement
learning for 5G networks: Joint beamforming, power control, and interference
coordination,” IEEE Trans. Commun., vol. 68, no. 3,
pp. 1581–1592, March 2020.
[24]
A. Alkhateeb, “DeepMIMO: A generic deep learning dataset for millimeter wave
and massive MIMO applications,” in Proc. Info. Theory and
Applications Workshop (ITA), San Diego, CA, Feb. 2019, pp. 1–8.
[25]
H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to
optimize: Training deep neural networks for wireless resource management,”
in IEEE Int. Workshop Signal Process. Adv. Wireless Commun., Jul.
2017, pp. 247––252.
[26]
L. Lei, T. X. Vu, L. You, S. Fowler, and D. Yuan, “Efficient minimum-energy
scheduling with machine-learning based predictions for multiuser MISO
systems,” in Proc. IEEE Int. Conf. Commun., May 2018, pp. 1–6.
[27]
A. M. Elbir and K. V. Mishra, “Joint antenna selection and hybrid
beamformer design using unquantized and quantized deep learning networks,”
IEEE Trans. Wireless Commun., vol. 19, no. 3, pp.
1677–1688, Mar. 2020.
[28]
J. Joung, “Machine learning-based antenna selection in wireless
communications,” IEEE Commun. Lett., vol. 20, no. 11, pp. 2241–2244,
Nov. 2016.
[29]
M. S. Ibrahim, A. S. Zamzam, X. Fu, and N. D. Sidiropoulos, “Learning-based
antenna selection for multicasting,” in Proc. IEEE Int. Workshop
Signal Process. Adv. Wireless Commun., Jun. 2018, pp. 1–5.
[30]
D. He, C. Liu, T. Q. S. Quek, and H. Wang, “Transmit antenna selection in MIMO
wiretap channels: A machine learning approach,” IEEE Wireless Commun.
Lett., vol. 7, no. 4, pp. 634–637, Aug. 2018.
[31]
T. X. Vu, L. Lei, S. Chatzinotas, and B. Ottersten, “Machine learning based
antenna selection and power allocation in multi-user MISO systems,” in
Proc. Int. Symp. on Modeling and Opt. in Mobile, Ad Hoc, and Wireless
Netw., Jun. 2019, pp. 1–6.
[32]
Z.-Q. Luo, W. K. Ma, A. M. C. So, Y. Ye, and S. Zhang, “Semidefinite
relaxation of quadratic optimization problems,” IEEE Signal Process.
Mag., vol. 27, no. 3, pp. 20–34, Mar. 2010.
[33]
P. Kyosti et al., “WINNER II channel models,” Tech. Rep. D1.1.2 V1.2, 2007.
[34]
T. Innovations, “LTE in a nutshell: The physical layer,” 2010,
white paper.
[35]
T. X. Vu, L. Lei, S. Vuppala, A. Kalantari, S. Chatzinotas, and
B. Ottersten, “Latency minimization for content delivery networks with
wireless edge caching,” in Proc. IEEE Int. Conf. Commun., Kansas City, MO, 2018, pp. 1–6.
[36]
J. Mattingley and S. Boyd, “Real-time convex optimization in signal
processing,” IEEE Signal Process. Mag., vol. 27, no. 3, pp. 50–61,
May 2010.
# Hardy and Rellich inequalities with Bessel Pairs
Michael Ruzhansky: Department of Mathematics: Analysis, Logic and Discrete
Mathematics, Ghent University, Belgium, and School of Mathematical Sciences,
Queen Mary University of London, United Kingdom. E-mail
address<EMAIL_ADDRESS>and Bolys Sabitbek: School of Mathematical
Sciences, Queen Mary University of London, United Kingdom, and Al-Farabi
Kazakh National University, 71 al-Farabi Ave., Almaty, 050040, Kazakhstan.
E-mail address<EMAIL_ADDRESS>
###### Abstract.
In this paper, we establish suitable characterisations for a pair of functions
$(W(x),H(x))$ on a bounded, connected domain $\Omega\subset\mathbb{R}^{n}$ in
order to have the following Hardy inequality
$\int_{\Omega}W(x)|\nabla u|_{A}^{2}dx\geq\int_{\Omega}|\nabla
d|^{2}_{A}H(x)|u|^{2}dx,\,\,\,u\in C^{1}_{0}(\Omega),$
where $d(x)$ is a suitable quasi-norm (gauge), $|\xi|^{2}_{A}=\langle
A(x)\xi,\xi\rangle$ for $\xi\in\mathbb{R}^{n}$ and $A(x)$ is an $n\times n$
symmetric, uniformly positive definite matrix defined on a bounded domain
$\Omega\subset\mathbb{R}^{n}$. We also give its $L^{p}$ analogue. As a
consequence, we present examples for the standard Laplacian on $\mathbb{R}^{n}$,
the Baouendi-Grushin operator, and sub-Laplacians on the Heisenberg group, the
Engel group and the Cartan group. Characterisations of this kind for a pair
of functions $(W(x),H(x))$ are also obtained for the Rellich inequality. These
results answer the open problems of Ghoussoub-Moradifam [27].
###### Key words and phrases:
Hardy inequality; Rellich inequality; Bessel pairs; stratified Lie group.
###### 1991 Mathematics Subject Classification:
35A23; 35R45; 35B09; 34A40
The authors were supported by EPSRC grant EP/R003025/2. The first author was
also supported by FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial
Differential Equations. The second author was also supported by the MES RK
grant AP08053051.
## 1\. Introduction
In the work [28], Ghoussoub and Moradifam gave necessary and sufficient
conditions for a Bessel pair of positive radial functions $W(x)$ and $H(x)$ on
a ball $B$ of radius $R$ in $\mathbb{R}^{n}$, $n\geq 1$, so that one has the
Hardy inequality for all functions $u\in C^{\infty}_{0}(B)$
$\int_{B}W(x)|\nabla u|^{2}dx\geq\int_{B}H(x)|u|^{2}dx,$
and the Hardy-Rellich inequality for all functions $u\in C^{\infty}_{0}(B)$:
$\int_{B}W(x)|\Delta u|^{2}dx\geq\int_{B}H(x)|\nabla
u|^{2}dx+(n-1)\int_{B}\left(\frac{W(x)}{|x|^{2}}-\frac{W_{r}(|x|)}{|x|}\right)|\nabla
u|^{2}dx.$
The characterisation of pairs of functions $W(x)$ and $H(x)$ made a very
interesting connection between Hardy type inequalities and the oscillatory
behavior of ordinary differential equations. Choosing suitable Bessel pairs
$(W(x),H(x))$ allows one to improve, extend, and unify many results about
Hardy and Hardy-Rellich inequalities that were established by Caffarelli et
al. [17], Brezis and Vazquez [16], Wang and Willem [43], Adimurthi et al. [1]
and other authors.
In the book [27], Ghoussoub and Moradifam posed two questions:
* •
Develop suitable characterisations for a pair of functions $(W(x),H(x))$ in
order to have the following inequality
$\int_{\Omega}W(x)|\nabla
u|_{A}^{2}dx\geq\int_{\Omega}H(x)|u|^{2}dx,\,\,\,u\in C^{1}_{0}(\Omega),$
where $|\xi|^{2}_{A}=\langle A(x)\xi,\xi\rangle$ for $\xi\in\mathbb{R}^{n}$
and $A(x)$ is an $n\times n$ symmetric, uniformly positive definite matrix
defined on a bounded domain $\Omega\subset\mathbb{R}^{n}$.
* •
Devise a necessary and sufficient condition on a Bessel pair $(W(x),H(x))$ in
order for the Rellich inequality to hold:
$\int_{\Omega}W(x)|\Delta u|^{2}dx\geq\int_{\Omega}H(x)|u|^{2}dx,\,\,\,u\in
C^{\infty}_{0}(\Omega).$
The aim of this paper is to give suitable characterisations for a Bessel pair
of positive radial functions $W(x)$ and $H(x)$ for Hardy and Rellich
inequalities on a bounded, connected domain $\Omega\subset\mathbb{R}^{n}$ that
answers the open problems of Ghoussoub-Moradifam [27]. We prove Hardy and
Rellich inequalities expressing conditions for Bessel pairs in terms of
ordinary differential equations associated with the positive weight functions
$W(x)$ and $H(x)$. Our approach relies on the first and second order Picone
identities. This suggested approach seems very effective, allowing us to
recover almost all well-known Hardy and Rellich type inequalities. This
approach is an extension of the method of Allegretto-Huang [4, Theorem 2.1],
by adding the positive weight function $W(x)$. A similar approach was used by
the authors [37] to establish Hardy and Rellich type inequalities for general
(real-valued) vector fields with boundary terms. Recently, in [19] Cazacu
referred to this method (but without the function $W(x)$) as the method of
super-solutions for Hardy and Rellich inequalities, adopted from Davies [22].
This characterisation of Bessel pairs builds an interesting bridge between
Hardy (Rellich) type inequalities and ordinary differential equations. In
particular, we can extend and improve many results for Hardy and Rellich type
inequalities. Let us briefly recall several types of Hardy inequalities that
can be recovered:
* I.
The classical Hardy inequality for $n\geq 3$ on a bounded domain
$\Omega\subset\mathbb{R}^{n}$ asserts that
$\int_{\Omega}|\nabla
u|^{2}dx\geq\left(\frac{n-2}{2}\right)^{2}\int_{\Omega}\frac{|u|^{2}}{|x|^{2}}dx,\,\,\,u\in
C^{1}_{0}(\Omega),$
where the constant is optimal and not attained. This version of Hardy
inequality was investigated by many authors [31], [22], [38] and the
references therein.
* II.
The geometric Hardy inequality for any bounded convex domain
$\Omega\subset\mathbb{R}^{n}$ with smooth boundary asserts that
$\int_{\Omega}|\nabla
u|^{2}dx\geq\frac{1}{4}\int_{\Omega}\frac{|u|^{2}}{\delta^{2}(x)}dx,\,\,\,u\in
C^{1}_{0}(\Omega),$
where $\delta(x):=dist(x,\partial\Omega)$ is the Euclidean distance to
boundary $\partial\Omega$ and the constant is also optimal and not attained.
There is a number of studies related to this subject, see e.g. [5], [6], [7],
[22], [31] and [33].
* III.
The multipolar Hardy inequality on a bounded domain
$\Omega\subset\mathbb{R}^{n}$ asserts that
$\int_{\Omega}|\nabla u|^{2}dx\geq
C\sum_{i=1}^{k}\int_{\Omega}\frac{|u|^{2}}{|x-a_{i}|^{2}}dx,\,\,\,u\in
C^{1}_{0}(\Omega),$
where $k$ is the number of poles. This type of inequality was studied by
Felli-Terracini [24], Bosi-Dolbeault-Esteban [11] and Cazacu-Zuazua [20].
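As a quick numerical illustration (not from the paper), the classical inequality of item I can be checked in its radial form for $n=3$ with the test function $u(r)=1-r$ on the unit ball: since both sides carry the same surface factor $\omega_{n-1}$, it suffices to compare $\int_0^1 u'(r)^2 r^{n-1}dr$ with $\left(\frac{n-2}{2}\right)^2\int_0^1 u(r)^2 r^{n-3}dr$.

```python
n = 3                                    # dimension, n >= 3
C = ((n - 2) / 2) ** 2                   # Hardy constant ((n-2)/2)^2 = 1/4

u, du = (lambda r: 1 - r), (lambda r: -1.0)   # radial test function, u(1) = 0

m = 100_000
h = 1.0 / m
mid = [(k + 0.5) * h for k in range(m)]       # midpoint quadrature nodes
lhs = sum(du(r) ** 2 * r ** (n - 1) for r in mid) * h   # integral of u'(r)^2 r^{n-1}
rhs = sum(u(r) ** 2 * r ** (n - 3) for r in mid) * h    # integral of u(r)^2 r^{n-3}

assert lhs >= C * rhs                    # 1/3 >= (1/4) * (1/3)
```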
## 2\. Hardy inequalities with Bessel pairs
Let $\Omega\subset\mathbb{R}^{n}$ be a bounded domain with smooth boundary.
Define
$\mathcal{L}_{p,A}f=-\sum_{i,j=1}^{n}\frac{\partial}{\partial{x_{j}}}\left(a_{ij}(x)|\nabla
f|^{p-2}_{A}\frac{\partial f}{\partial x_{j}}\right),$ (2.1)
and
$|\nabla f|^{2}_{A}=\sum_{i,j=1}^{n}a_{ij}(x)\frac{\partial f}{\partial
x_{i}}\frac{\partial f}{\partial x_{j}},$
where $A(x)$ is an $n\times n$ symmetric, uniformly positive definite matrix
with smooth coefficients defined on $\Omega$.
Let $\Phi_{p}$ be a constant multiple of the fundamental solution (e.g. [9]
and [32]) for $\mathcal{L}_{p,A}$ that solves the equation
$\displaystyle\mathcal{L}_{p,A}\Phi_{p}(x)$ $\displaystyle=0,\,\,\,x\neq 0.$
From $\Phi_{p}$, we are able to define the quasi-norm
$d(x):=\left\\{\begin{matrix}\Phi_{p}(x)^{\frac{p-1}{p-Q}},&\text{for}\,\,\,x\neq
0,\\\ 0,&\text{for}\,\,\,x=0,\end{matrix}\right.$ (2.2)
where $Q$ is the appropriate homogeneous dimension and $1<p<Q$.
Define
$\Psi_{\mathcal{L}_{A}}(x):=|\nabla d|^{2}_{A}(x),$ (2.3)
for $x\neq 0$. The function $\Psi_{\mathcal{L}_{A}}(x)$ can be computed
explicitly from the quasi-norm $d(x)$. For example:
* •
In the Euclidean setting, when $\mathcal{L}_{A}=\Delta$ is the standard
Laplace operator, then $\Psi_{\Delta}(x)=1$.
* •
In the Heisenberg group, when $\mathcal{L}_{A}=\mathcal{L}_{\mathbb{H}}$ is
the sub-Laplacian with the quasi-norm ($\mathcal{L}$-gauge)
$d_{\mathbb{H}}(x)$, we have
$\Psi_{\mathcal{L}_{\mathbb{H}}}(x)=|x^{\prime}|^{2}d_{\mathbb{H}}^{-2}$.
* •
When $\mathcal{L}_{A}=\mathcal{L}_{\gamma}$ is
the Baouendi-Grushin operator and $d_{\gamma}(x)$ is the associated quasi-norm,
we have
$\Psi_{\mathcal{L}_{\gamma}}(x)=|\xi|^{2\gamma}d_{\gamma}^{-2\gamma}$, where
$x=(\xi,\zeta)\in\mathbb{R}^{k}\times\mathbb{R}^{l}$ and $\gamma>0$.
In the setting of stratified Lie groups, we remark that the function
$\Psi_{\mathcal{L}_{A}}(x)$ is $\delta_{\lambda}$-homogeneous of degree zero
and translation invariant (i.e. $\Psi_{\mathcal{L}}(\alpha\circ x,\alpha\circ
y)=\Psi_{\mathcal{L}}(x,y)$ for $x,y\in\mathbb{G}$ with $x\neq y$).
Furthermore, the function $\Psi_{\mathcal{L}_{A}}(x)$ is the kernel of the mean
volume formulas (see [10, Definition 5.5.1] for more details).
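For the Euclidean bullet point above, the claim $\Psi_{\Delta}(x)=1$ can be checked numerically (an illustration, not from the paper): for $p=2$ and $Q=n=3$, $\Phi_{2}(x)=|x|^{2-n}$ is harmonic away from the origin, and $d=\Phi_{2}^{(p-1)/(p-Q)}=|x|$ satisfies $|\nabla d|^{2}=1$.

```python
def phi(x, y, z):                        # Phi_2 = |x|^{2-n} with n = 3
    return (x * x + y * y + z * z) ** -0.5

h, p0 = 1e-4, (0.7, -0.4, 1.1)           # step size and a point away from 0

def laplacian(f, p):
    """Second-order central-difference Laplacian of f at the point p."""
    x, y, z = p
    return sum((f(*q1) - 2 * f(*p) + f(*q2)) / h**2 for q1, q2 in
               [((x + h, y, z), (x - h, y, z)),
                ((x, y + h, z), (x, y - h, z)),
                ((x, y, z + h), (x, y, z - h))])

assert abs(laplacian(phi, p0)) < 1e-5    # Phi_2 is harmonic away from 0

d = lambda x, y, z: phi(x, y, z) ** -1   # d = Phi_2^{-1} = |x|
x, y, z = p0
grad2 = sum(((d(*q1) - d(*q2)) / (2 * h)) ** 2 for q1, q2 in
            [((x + h, y, z), (x - h, y, z)),
             ((x, y + h, z), (x, y - h, z)),
             ((x, y, z + h), (x, y, z - h))])
assert abs(grad2 - 1) < 1e-6             # Psi_Delta = |grad d|^2 = 1
```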
Let us convert the equation
$\sum_{i,j=1}^{n}\frac{\partial}{\partial x_{j}}\left(W(|x|)|\nabla
v|^{p-2}_{A}a_{ij}(x)\frac{\partial v}{\partial x_{i}}\right)+|\nabla
d|^{p}_{A}H(|x|)v^{p-1}=0,$ (2.4)
into the quasilinear second-order differential equation of the form
$(r^{Q-1}W(r)(v^{\prime}(r))^{p-1})^{\prime}+r^{Q-1}H(r)v^{p-1}(r)=0,$ (2.5)
where ${}^{\prime}=\partial_{r}$ and $r:=d(x)$.
Suppose that $W(x)$, $H(x)$, and $v(x)$ are positive radially symmetric
functions. Let us rewrite the first term of (2.4) in terms of the radial
derivative. First note that for $i,j=1,\ldots,n$, we have
$\displaystyle\frac{\partial r}{\partial x_{i}}$
$\displaystyle=\left(\frac{p-1}{p-Q}\right)\Phi_{p}^{\frac{p-1}{p-Q}-1}\frac{\partial\Phi_{p}}{\partial
x_{i}},$ (2.6) $\displaystyle\frac{\partial r}{\partial x_{j}}\frac{\partial
r}{\partial x_{i}}$
$\displaystyle=\left(\frac{p-1}{p-Q}\right)^{2}\Phi_{p}^{2\frac{p-1}{p-Q}-2}\frac{\partial\Phi_{p}}{\partial
x_{i}}\frac{\partial\Phi_{p}}{\partial x_{j}},$ (2.7)
$\displaystyle\frac{\partial^{2}r}{\partial x_{i}\partial x_{j}}$
$\displaystyle=\left(\frac{p-1}{p-Q}\right)\Phi_{p}^{\frac{p-1}{p-Q}-1}\frac{\partial^{2}\Phi_{p}}{\partial
x_{i}\partial
x_{j}}+\frac{(p-1)(Q-1)}{(p-Q)^{2}}\Phi_{p}^{\frac{p-1}{p-Q}-2}\frac{\partial\Phi_{p}}{\partial
x_{i}}\frac{\partial\Phi_{p}}{\partial x_{j}}$ (2.8)
$\displaystyle=\left(\frac{p-1}{p-Q}\right)\Phi_{p}^{\frac{p-1}{p-Q}-1}\frac{\partial^{2}\Phi_{p}}{\partial
x_{i}\partial
x_{j}}+\left(\frac{Q-1}{p-1}\right)\Phi_{p}^{-\frac{p-1}{p-Q}}\frac{\partial
r}{\partial x_{j}}\frac{\partial r}{\partial x_{i}}.$
Then
$\displaystyle\frac{\partial v}{\partial x_{i}}=v^{\prime}\frac{\partial
r}{\partial x_{i}},\,\,\,\text{and}\,\,\,\frac{\partial^{2}v}{\partial
x_{i}\partial x_{j}}=\frac{\partial r}{\partial x_{i}}\frac{\partial
r}{\partial x_{j}}v^{\prime\prime}+\frac{\partial^{2}r}{\partial x_{i}\partial
x_{j}}v^{\prime}.$
Since $\Phi_{p}=r^{\frac{p-Q}{p-1}}$, we thus have
$\displaystyle|\nabla r|^{p-2}_{A}$
$\displaystyle=\left(\frac{p-1}{p-Q}\right)^{p-2}|\nabla\Phi_{p}|_{A}^{p-2}r^{\frac{(Q-1)(p-2)}{p-1}},$
(2.9) $\displaystyle|\nabla v|^{p-2}_{A}$ $\displaystyle=|\nabla
r|^{p-2}_{A}(v^{\prime})^{p-2},$ (2.10) $\displaystyle\frac{\partial|\nabla
v|^{p-2}_{A}}{\partial x_{j}}$ $\displaystyle=\frac{(Q-1)(p-2)}{(p-1)r}|\nabla
r|^{p-2}_{A}(v^{\prime})^{p-2}\frac{\partial r}{\partial x_{j}}+(p-2)|\nabla
r|^{p-2}_{A}(v^{\prime})^{p-3}v^{\prime\prime}\frac{\partial r}{\partial
x_{j}}$ (2.11)
$\displaystyle+\left(\frac{p-1}{p-Q}\right)^{p-2}\frac{\partial|\nabla\Phi_{p}|_{A}^{p-2}}{\partial
x_{j}}r^{\frac{(Q-1)(p-2)}{p-1}}(v^{\prime})^{p-2}.$
Using above expressions, a straightforward computation gives
$\displaystyle\frac{\partial}{\partial{x_{j}}}\left(Wa_{ij}(x)|\nabla
v|^{p-2}_{A}\frac{\partial v}{\partial x_{i}}\right)=Wa_{ij}(x)|\nabla
v|^{p-2}_{A}\frac{\partial^{2}v}{\partial x_{i}\partial x_{j}}$
$\displaystyle+a_{ij}(x)|\nabla v|^{p-2}_{A}\frac{\partial W}{\partial
x_{j}}\frac{\partial v}{\partial x_{i}}+W|\nabla v|^{p-2}_{A}\frac{\partial
a_{ij}(x)}{\partial x_{j}}\frac{\partial v}{\partial
x_{i}}+a_{ij}(x)W\frac{\partial|\nabla v|^{p-2}_{A}}{\partial
x_{j}}\frac{\partial v}{\partial x_{i}}$ $\displaystyle=W|\nabla
r|^{p-2}_{A}(v^{\prime})^{p-2}v^{\prime\prime}\underbrace{a_{ij}(x)\frac{\partial
r}{\partial x_{i}}\frac{\partial r}{\partial x_{j}}}_{=|\nabla
r|^{2}_{A}}+\left(\frac{p-1}{p-Q}\right)W|\nabla
r|^{p-2}_{A}(v^{\prime})^{p-1}\Phi_{p}^{\frac{p-1}{p-Q}-1}a_{ij}(x)\frac{\partial^{2}\Phi_{p}}{\partial
x_{i}\partial x_{j}}$ $\displaystyle+\left(\frac{Q-1}{p-1}\right)W|\nabla
r|^{p-2}_{A}(v^{\prime})^{p-2}\Phi_{p}^{-\frac{p-1}{p-Q}}\underbrace{a_{ij}(x)\frac{\partial
r}{\partial x_{i}}\frac{\partial r}{\partial x_{j}}}_{=|\nabla
r|^{2}_{A}}+|\nabla
r|^{p-2}_{A}(v^{\prime})^{p-1}W_{r}\underbrace{a_{ij}(x)\frac{\partial
r}{\partial x_{i}}\frac{\partial r}{\partial x_{j}}}_{=|\nabla r|^{2}_{A}}$
$\displaystyle+W|\nabla r|^{p-2}_{A}(v^{\prime})^{p-2}\frac{\partial
a_{ij}(x)}{\partial x_{j}}v^{\prime}\frac{\partial r}{\partial
x_{i}}+\left(\frac{p-1}{p-Q}\right)^{p-2}Wr^{\frac{(Q-1)(p-2)}{p-1}}(v^{\prime})^{p-1}a_{ij}(x)\frac{\partial|\nabla\Phi_{p}|_{A}^{p-2}}{\partial
x_{j}}\frac{\partial r}{\partial x_{i}}$
$\displaystyle+\frac{(Q-1)(p-2)}{(p-1)r}|\nabla
r|^{p-2}_{A}W(v^{\prime})^{p-1}\underbrace{a_{ij}(x)\frac{\partial r}{\partial
x_{i}}\frac{\partial r}{\partial x_{j}}}_{=|\nabla r|^{2}_{A}}+(p-2)W|\nabla
r|^{p-2}_{A}(v^{\prime})^{p-2}v^{\prime\prime}\underbrace{a_{ij}(x)\frac{\partial
r}{\partial x_{i}}\frac{\partial r}{\partial x_{j}}}_{=|\nabla r|^{2}_{A}}$
where $|\nabla r|^{2}_{A}=\sum_{i,j=1}^{n}a_{ij}(x)\partial_{x_{i}}r\,\partial_{x_{j}}r$.
Summing over $i,j=1,\ldots,n$, we obtain
$\displaystyle=W|\nabla
r|^{p}_{A}(v^{\prime})^{p-2}\left((p-1)v^{\prime\prime}+\left(\frac{Q-1}{r}\right)\frac{1}{p-1}v^{\prime}+\left(\frac{p-1}{p-Q}\right)\frac{\Phi_{p}^{\frac{p-1}{p-Q}-1}}{|\nabla
r|_{A}^{2}}a_{ij}(x)\frac{\partial^{2}\Phi_{p}}{\partial x_{i}\partial
x_{j}}v^{\prime}\right)$ $\displaystyle+|\nabla
r|_{A}^{p}(v^{\prime})^{p-1}W_{r}+W|\nabla
r|_{A}^{p-2}(v^{\prime})^{p-1}\frac{\partial a_{ij}}{\partial
x_{j}}\frac{\partial r}{\partial
x_{i}}+\left(\frac{Q-1}{r}\right)\left(1-\frac{1}{p-1}\right)W|\nabla
r|_{A}^{p}(v^{\prime})^{p-1}$
$\displaystyle+\underbrace{\left(\frac{p-1}{p-Q}\right)^{p-2}r^{\frac{(Q-1)(p-2)}{p-1}}|\nabla\Phi_{p}|^{p-2}_{A}}_{=|\nabla
r|_{A}^{p-2}}\frac{W(v^{\prime})^{p-1}a_{ij}(x)}{|\nabla\Phi_{p}|^{p-2}_{A}}\frac{\partial|\nabla\Phi_{p}|_{A}^{p-2}}{\partial
x_{j}}\frac{\partial r}{\partial x_{i}}.$
Now we apply equation (2.6) to $\partial r/\partial x_{i}$:
$\displaystyle=W|\nabla
r|_{A}^{p}(v^{\prime})^{p-2}\left((p-1)v^{\prime\prime}+\left[\frac{Q-1}{r}+\frac{W_{r}}{W}\right]v^{\prime}\right)+\left(\frac{p-1}{p-Q}\right)W|\nabla
r|_{A}^{p-2}(v^{\prime})^{p-1}\Phi_{p}^{\frac{p-1}{p-Q}-1}$
$\displaystyle\times\frac{1}{|\nabla\Phi_{p}|^{p-2}_{A}}\underbrace{\sum_{i,j=1}^{n}\left[|\nabla\Phi_{p}|^{p-2}_{A}a_{ij}(x)\frac{\partial^{2}\Phi_{p}}{\partial
x_{i}\partial
x_{j}}+a_{ij}(x)\frac{\partial|\nabla\Phi_{p}|_{A}^{p-2}}{\partial
x_{j}}\frac{\partial\Phi_{p}}{\partial
x_{i}}+|\nabla\Phi_{p}|^{p-2}_{A}\frac{\partial a_{ij}}{\partial
x_{j}}\frac{\partial\Phi_{p}}{\partial
x_{i}}\right]}_{\mathcal{L}_{p,A}\Phi_{p}(x)=0}$ $\displaystyle=W(r)|\nabla
r|_{A}^{p}(v^{\prime})^{p-2}\left((p-1)v^{\prime\prime}+\left[\frac{Q-1}{r}+\frac{W_{r}}{W}\right]v^{\prime}\right).$
We conclude that (2.4) can be rewritten as
$W(r)|\nabla
r|_{A}^{p}(v^{\prime})^{p-2}\left((p-1)v^{\prime\prime}+\left[\frac{Q-1}{r}+\frac{W_{r}}{W}\right]v^{\prime}\right)+|\nabla
r|_{A}^{p}H(r)v^{p-1}=0,$ (2.12)
which means
$(r^{Q-1}W(r)(v^{\prime}(r))^{p-1})^{\prime}+r^{Q-1}H(r)v^{p-1}(r)=0,$ which
is (2.5).
The next theorem provides an explicit criterion for the existence of a positive
solution to the ordinary differential equation (2.5); it was proved by
Agarwal-Bohner-Li [3, Theorem 4.6.13]:
###### Theorem 2.1 (Agarwal-Bohner-Li [3]).
Let $a:[r_{0},\infty)\rightarrow(0,\infty)$ and
$b:[r_{0},\infty)\rightarrow(0,\infty)$ be continuous functions with $b(r)\neq
0$. Suppose that
$\int_{r_{0}}^{\infty}b(s)ds<\infty,\,\,\text{and}\,\,\,\,\phi(r)=2\int_{r}^{\infty}b(s)ds<\infty\,\,\,\text{for}\,\,\,r\geq
r_{0}.$
Suppose further that
$\int_{r_{0}}^{\infty}\left(\frac{\phi(s)}{a(s)}\right)^{\frac{1}{p-1}}ds\leq\frac{1}{2(p-1)}.$
(2.13)
Then there exists a nonnegative solution to the following equation
$(a(r)[y^{\prime}(r)]^{p-1})^{\prime}+b(r)[y(r)]^{p-1}=0\,\,\,\text{for}\,\,\,r\geq
r_{0}.$ (2.14)
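To illustrate how the criterion is applied (the weights below are illustrative choices, not from the paper), take $p=2$, $Q=3$, $a(r)=r^{Q-1}W(r)$ with $W\equiv 1$, and $b(r)=r^{Q-1}H(r)$ with $H(r)=r^{-Q-1}$, so $b(r)=r^{-2}$; then $\phi(r)=2/r$ and condition (2.13) reads $1/r_{0}^{2}\le 1/2$, satisfied e.g. for $r_{0}=2$:

```python
p, Q, r0 = 2, 3, 2.0                     # sample parameters (illustrative)

def integral(f, lo, hi, m=200_000):      # composite midpoint rule on [lo, hi]
    h = (hi - lo) / m
    return sum(f(lo + (k + 0.5) * h) for k in range(m)) * h

R = 1e4                                  # large cutoff approximating infinity
b = lambda s: s ** -2                    # b(r) = r^{Q-1} H(r), H(r) = r^{-Q-1}
phi = lambda r: 2.0 / r                  # phi(r) = 2 * integral of b over (r, inf)

assert abs(integral(b, r0, R) - (1 / r0 - 1 / R)) < 1e-3     # finite integral of b
lhs = integral(lambda s: (phi(s) / s ** (Q - 1)) ** (1 / (p - 1)), r0, R)
assert lhs <= 1 / (2 * (p - 1))          # criterion (2.13): about 1/4 <= 1/2
```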
The following theorem characterises the relation between $W(x)$ and $H(x)$ in
order to obtain the weighted Hardy inequality:
###### Theorem 2.2.
Let $\Omega$ be a bounded domain in $\mathbb{R}^{n}$. Let $W(x)$ and $H(x)$ be
positive radially symmetric functions. Let $1<p<Q$. Let $d(x)$ be as in (2.2).
Then the inequality
$\int_{\Omega}W(x)|\nabla u|^{p}_{A}dx\geq\int_{\Omega}|\nabla
d|^{p}_{A}H(x)|u|^{p}dx$ (2.15)
holds for all complex-valued functions $u\in C^{1}_{0}(\Omega)$ provided that
the following conditions hold:
$\int_{r_{0}}^{\infty}s^{Q-1}H(s)ds<\infty,\,\,\text{and}\,\,\,\,\phi(r)=2\int_{r}^{\infty}s^{Q-1}H(s)ds<\infty\,\,\,\text{for}\,\,\,r\geq
r_{0},$ (2.16)
$\int_{r_{0}}^{\infty}\left(\frac{\phi(s)}{s^{Q-1}W(s)}\right)^{\frac{1}{p-1}}ds\leq\frac{1}{2(p-1)}\,\,\,\text{for
some}\,\,\,r_{0}>0.$ (2.17)
###### Remark 2.3.
Note that
* •
As usual, we denote $W(x)=W(|x|)$ and $H(x)=H(|x|)$. We write $f(x)>0$ for a
positive function and $f(x)\geq 0$ for a non-negative function.
* •
For $p=2$, Theorem 2.2 answers the question posed by Ghoussoub-Moradifam [27].
* •
For $A(x)\equiv 1$, Theorem 2.2 was established for general (real-valued)
vector fields with boundary terms by the authors [37] (see also [38], [39] and
[42]).
To prove Theorem 2.2, we need to establish the (first-order) Picone identity
with $A(x)$, an $n\times n$ symmetric, uniformly positive definite
matrix defined on $\Omega$. The proof of Lemma 2.4 is similar to that of the
standard Picone identity obtained by Allegretto-Huang [4] and the authors [37].
###### Lemma 2.4.
Let $\Omega$ be a bounded domain in $\mathbb{R}^{n}$. Let a complex-valued
function $u$ be differentiable a.e. in $\Omega$. Let $1<p<\infty$. Let a
positive function $v$ be differentiable in $\Omega$. Define
$\displaystyle R(u,v)$ $\displaystyle=|\nabla u|^{p}_{A}-\langle
A(x)\nabla\left(\frac{|u|^{p}}{v^{p-1}}\right),|\nabla v|^{p-2}_{A}\nabla
v\rangle,$ (2.18) $\displaystyle L(u,v)$ $\displaystyle=|\nabla
u|^{p}_{A}-p\frac{|u|^{p-1}}{v^{p-1}}|\nabla v|^{p-2}_{A}\langle
A(x)\nabla|u|,\nabla v\rangle+(p-1)\frac{|u|^{p}}{v^{p}}|\nabla v|^{p}_{A},$
(2.19)
where $|\xi|^{2}_{A}=\langle A(x)\xi,\xi\rangle$. Then
$L(u,v)=R(u,v)\geq 0.$
Moreover, $L(u,v)=0$ a.e. in $\Omega$ if and only if $u\geq 0$ and $u=cv$ a.e.
in $\Omega$ for some constant $c$ in each component of $\Omega$.
###### Proof of Lemma 2.4.
It is easy to show that $R(u,v)=L(u,v)$ by the expansion of $R(u,v)$ as
follows
$\displaystyle R(u,v)$ $\displaystyle=|\nabla u|^{p}_{A}-\langle
A(x)\nabla\left(\frac{|u|^{p}}{v^{p-1}}\right),|\nabla v|^{p-2}_{A}\nabla
v\rangle$ $\displaystyle=|\nabla u|^{p}_{A}-p\frac{|u|^{p-1}}{v^{p-1}}|\nabla
v|^{p-2}_{A}\langle A(x)\nabla|u|,\nabla
v\rangle+(p-1)\frac{|u|^{p}}{v^{p}}|\nabla v|^{p}_{A}$ $\displaystyle=L(u,v).$
Let $u(x)=R(x)+iI(x)$, where $R(x)$ and $I(x)$ are the real and imaginary
parts of $u$. We can restrict to the set where $u(x)\neq 0$. Then we have
$\displaystyle(\nabla|u|)(x)=\frac{1}{|u|}(R(x)\nabla R(x)+I(x)\nabla I(x)).$
(2.20)
Since
$\displaystyle\left|\frac{1}{|u|}(R\nabla R+I\nabla
I)\right|^{2}_{A}\leq|\nabla R|^{2}_{A}+|\nabla I|^{2}_{A},$
we get $|\nabla|u||_{A}\leq|\nabla u|_{A}$ a.e. in $\Omega$ (see [36, Theorem
2.1]).
Let us recall Young’s inequality: for non-negative real numbers $a$ and $b$ we have
$pab\leq a^{p}+(p-1)b^{\frac{p}{p-1}}.$
By taking $a=|\nabla u|_{A}$ and $b=\frac{|u|^{p-1}}{v^{p-1}}|\nabla
v|_{A}^{p-1}$, we prove $L(u,v)\geq 0$ in the following way:
$\displaystyle L(u,v)$ $\displaystyle=|\nabla
u|^{p}_{A}-p\frac{|u|^{p-1}}{v^{p-1}}|\nabla v|^{p-2}_{A}\langle
A(x)\nabla|u|,\nabla v\rangle+(p-1)\frac{|u|^{p}}{v^{p}}|\nabla v|^{p}_{A}$
$\displaystyle=|\nabla
u|^{p}_{A}-p\frac{|u|^{p-1}}{v^{p-1}}|\nabla|u||_{A}|\nabla
v|^{p-1}_{A}+(p-1)\frac{|u|^{p}}{v^{p}}|\nabla v|^{p}_{A}$
$\displaystyle+p\frac{|u|^{p-1}|\nabla
v|^{p-2}_{A}}{v^{p-1}}\left(|\nabla|u||_{A}|\nabla v|_{A}-\langle
A(x)\nabla|u|,\nabla v\rangle\right)$ $\displaystyle\geq|\nabla
u|^{p}_{A}-p\frac{|u|^{p-1}}{v^{p-1}}|\nabla u|_{A}|\nabla
v|^{p-1}_{A}+(p-1)\frac{|u|^{p}}{v^{p}}|\nabla v|^{p}_{A}$
$\displaystyle+p\frac{|u|^{p-1}|\nabla
v|^{p-2}_{A}}{v^{p-1}}\left(|\nabla|u||_{A}|\nabla v|_{A}-\langle
A(x)\nabla|u|,\nabla v\rangle\right)$ $\displaystyle\geq
p\frac{|u|^{p-1}|\nabla v|^{p-2}_{A}}{v^{p-1}}\left(|\nabla|u||_{A}|\nabla
v|_{A}-\langle A(x)\nabla|u|,\nabla v\rangle\right).$
We will now show that $|\nabla|u||_{A}|\nabla v|_{A}\geq\langle
A(x)\nabla|u|,\nabla v\rangle$, which implies $L(u,v)\geq 0$. A direct
computation gives
$\displaystyle 0\leq|\nabla|u|-b\nabla v|^{2}_{A}$ $\displaystyle=\langle
A(x)(\nabla|u|-b\nabla v),\nabla|u|-b\nabla v\rangle$
$\displaystyle=|\nabla|u||^{2}_{A}-2b\langle A(x)\nabla|u|,\nabla
v\rangle+b^{2}|\nabla v|^{2}_{A}.$
Setting $b=|\nabla v|_{A}^{-2}\langle A(x)\nabla|u|,\nabla v\rangle$ and
rearranging produces
$|\nabla|u||_{A}|\nabla v|_{A}\geq\langle A(x)\nabla|u|,\nabla v\rangle.$
(2.21)
Observe that $L(u,v)=0$ if and only if
* •
equality holds for $|\nabla|u||_{A}\leq|\nabla u|_{A}$ when $u\geq 0$;
* •
equality holds in (2.21) when $u=cv$ for some constant $c$.
The proof is complete. ∎
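As a quick symbolic sanity check (a sketch, not part of the paper), the identity $R(u,v)=L(u,v)$ of Lemma 2.4 can be verified for the case $p=2$, a real positive $u$ (so that $|u|=u$), and a diagonal positive definite $A(x)$; the functions below are arbitrary illustrative choices.

```python
# Verify R(u,v) = L(u,v) from Lemma 2.4 for p = 2 with sympy.
# u, v, A below are arbitrary sample choices (u > 0, v > 0, A symmetric > 0).
import sympy as sp

x, y = sp.symbols('x y', positive=True)
u = x**2 + y**2 + 1               # positive, so |u| = u
v = sp.exp(x + y)                 # positive
A = sp.diag(1 + x**2, 2 + y**2)   # symmetric positive definite

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
inner = lambda a, b: (a.T * A * b)[0]   # <A a, b> (A is symmetric)

gu, gv = grad(u), grad(v)
# For p = 2 the weight |grad v|_A^{p-2} is 1, so (2.18)-(2.19) reduce to:
R = inner(gu, gu) - inner(grad(u**2 / v), gv)
L = inner(gu, gu) - 2*(u/v)*inner(gu, gv) + (u/v)**2 * inner(gv, gv)

assert sp.simplify(R - L) == 0
```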
###### Proof of Theorem 2.2.
By Theorem 2.1, the conditions (2.16) and (2.17) provide the existence of a
nonnegative solution to the following equation
$\nabla\cdot(W(x)|\nabla v|^{p-2}_{A}A(x)\nabla v)+|\nabla
d|^{p}_{A}(x)H(x)v^{p-1}=0.$ (2.22)
We then prove the theorem by applying the (first-order) Picone identity, the
divergence theorem, and equation (2.22), in turn. We have
$\displaystyle 0$ $\displaystyle\leq\int_{\Omega}W(x)R(u,v)dx$
$\displaystyle=\int_{\Omega}W(x)|\nabla u|^{p}_{A}dx-\int_{\Omega}W(x)\langle
A(x)\nabla\left(\frac{|u|^{p}}{v^{p-1}}\right),|\nabla v|^{p-2}_{A}\nabla
v\rangle dx$ $\displaystyle=\int_{\Omega}W(x)|\nabla
u|^{p}_{A}dx+\int_{\Omega}\frac{|u|^{p}}{v^{p-1}}\nabla\cdot(W(x)|\nabla
v|^{p-2}_{A}A(x)\nabla v)dx$ $\displaystyle=\int_{\Omega}W(x)|\nabla
u|^{p}_{A}dx-\int_{\Omega}|\nabla d|^{p}_{A}H(x)|u|^{p}dx.$
This proves Theorem 2.2. ∎
Next, we give examples of operators $\mathcal{L}_{A}$ obtained by taking different
matrices $A(x)$.
Euclidean Space $\mathbb{R}^{n}$: Let $\Omega=\mathbb{R}^{n}$. If we take
$A(x)$ as an identity matrix, then $\mathcal{L}_{A}=-\Delta$ is the standard
Laplacian, $\Phi(x)=|x|^{2-n}$ and $d(x)=|x|$ with $x\in\mathbb{R}^{n}$.
###### Corollary 2.5.
Let $\Omega=\mathbb{R}^{n}$. Let $W(|x|)$ and $H(|x|)$ be positive radially
symmetric functions. Then the inequality
$\int_{\mathbb{R}^{n}}W(|x|)|\nabla
u|^{2}dx\geq\int_{\mathbb{R}^{n}}H(|x|)|u|^{2}dx$ (2.23)
holds for all complex-valued functions $u\in C^{1}_{0}(\Omega)$ provided that
the following conditions hold:
$\int_{r_{0}}^{\infty}s^{n-1}H(s)ds<\infty,\,\,\text{and}\,\,\,\,\phi(r)=2\int_{r}^{\infty}s^{n-1}H(s)ds<\infty\,\,\,\text{for}\,\,\,r\geq
r_{0},$ (2.24)
$\int_{r_{0}}^{\infty}\frac{\phi(s)}{s^{n-1}W(s)}ds\leq\frac{1}{2}\,\,\,\text{for
some}\,\,\,r_{0}>0.$ (2.25)
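For a concrete illustration (the weights here are our own sample choices, not from the paper), conditions (2.24)–(2.25) can be checked directly for the pair $W\equiv 1$, $H(s)=s^{-5}$ in dimension $n=3$ with $r_{0}=1$:

```python
# Check conditions (2.24)-(2.25) of Corollary 2.5 for the sample pair
# W = 1, H(s) = s^{-5}, n = 3, r0 = 1 (illustrative choices).
import sympy as sp

s, r = sp.symbols('s r', positive=True)
n, r0 = 3, 1
W = sp.Integer(1)
H = s**(-5)

I1 = sp.integrate(s**(n - 1)*H, (s, r0, sp.oo))     # = 1/2, finite: (2.24) holds
phi = 2*sp.integrate(s**(n - 1)*H, (s, r, sp.oo))   # phi(r) = r^{-2}
I2 = sp.integrate(phi.subs(r, s)/(s**(n - 1)*W), (s, r0, sp.oo))  # = 1/3

assert I1 == sp.Rational(1, 2)
assert I2 == sp.Rational(1, 3) and I2 <= sp.Rational(1, 2)  # (2.25) holds
```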
The Heisenberg group $\mathbb{H}^{1}$: Let
$\mathbb{H}^{1}:=\mathbb{R}^{2}\times\mathbb{R}$ be the Heisenberg group with
$x=(x_{1},x_{2},x_{3})$. We take
$A(x):=\begin{pmatrix}1&0&-\frac{x_{2}}{2}\\\ 0&1&\frac{x_{1}}{2}\\\
-\frac{x_{2}}{2}&\frac{x_{1}}{2}&\frac{x_{1}^{2}+x_{2}^{2}}{4}\end{pmatrix}.$
Then we have the following horizontal gradient
$\nabla_{\mathbb{H}}:=(\partial_{x_{1}}-\frac{x_{2}}{2}\partial_{x_{3}},\partial_{x_{2}}+\frac{x_{1}}{2}\partial_{x_{3}}),$
and the sub-Laplacian is given by
$\mathcal{L}_{\mathbb{H}}:=\Delta_{x_{1},x_{2}}+\frac{x_{1}^{2}+x_{2}^{2}}{4}\partial_{x_{3}}^{2}+(x_{1}\partial_{x_{2}}-x_{2}\partial_{x_{1}})\partial_{x_{3}}.$
The quasi-norm ($\mathcal{L}$-gauge) is given by
$d_{\mathbb{H}}(x)=((x_{1}^{2}+x_{2}^{2})^{2}+16x_{3}^{2})^{\frac{1}{4}}.$
Note that the function $\Psi_{\mathcal{L}_{\mathbb{H}}}(x)$ can be calculated
explicitly as follows:
$\displaystyle\Psi_{\mathcal{L}_{\mathbb{H}}}(x)=|\nabla_{\mathbb{H}}d_{\mathbb{H}}|^{2}(x)$
$\displaystyle=(X_{1}d_{\mathbb{H}})^{2}+(X_{2}d_{\mathbb{H}})^{2}$
$\displaystyle=d_{\mathbb{H}}^{-6}[(x_{1}^{2}+x_{2}^{2})^{2}x_{1}^{2}-8(x_{1}^{2}+x_{2}^{2})x_{1}x_{2}x_{3}+16x_{1}^{2}x_{3}^{2}]$
$\displaystyle+d_{\mathbb{H}}^{-6}[(x_{1}^{2}+x_{2}^{2})^{2}x_{2}^{2}+8(x_{1}^{2}+x_{2}^{2})x_{1}x_{2}x_{3}+16x_{2}^{2}x_{3}^{2}]$
$\displaystyle=d_{\mathbb{H}}^{-6}(x_{1}^{2}+x_{2}^{2})[(x_{1}^{2}+x_{2}^{2})^{2}+16x_{3}^{2}]$
$\displaystyle=|x^{\prime}|^{2}d_{\mathbb{H}}^{-2}(x),$
where $|x^{\prime}|^{2}=x_{1}^{2}+x_{2}^{2}$.
###### Corollary 2.6.
Let $\Omega$ be a bounded domain in $\mathbb{H}^{1}$. Let $W(x)$ and $H(x)$ be
positive radially symmetric functions. Then the inequality
$\int_{\Omega}W(x)|\nabla_{\mathbb{H}}u|^{2}dx\geq\int_{\Omega}|\nabla_{\mathbb{H}}d_{\mathbb{H}}|^{2}H(x)|u|^{2}dx$
(2.26)
holds for all complex-valued functions $u\in C^{1}_{0}(\Omega)$ provided that
the following conditions hold:
$\int_{r_{0}}^{\infty}s^{Q-1}H(s)ds<\infty,\,\,\text{and}\,\,\,\,\phi(r)=2\int_{r}^{\infty}s^{Q-1}H(s)ds<\infty\,\,\,\text{for}\,\,\,r\geq
r_{0},$ (2.27)
$\int_{r_{0}}^{\infty}\frac{\phi(s)}{s^{Q-1}W(s)}ds\leq\frac{1}{2}\,\,\,\text{for
some}\,\,\,r_{0}>0.$ (2.28)
Baouendi-Grushin operator: Let $\Omega$ be an open subset of
$\mathbb{R}^{n}=\mathbb{R}^{k}\times\mathbb{R}^{l}$ and $x\in\Omega$ with
$x=(\xi,\zeta)$. For $\gamma>0$, we take
$A(x):=\begin{pmatrix}I_{k}&0\\\ 0&\gamma|\xi|^{\gamma}I_{l}\end{pmatrix},$
where $I_{k}$ and $I_{l}$ are the identity matrices of size $k$ and $l$,
respectively. Then we have the following vector field
$\nabla_{\gamma}:=(\nabla_{\xi},\gamma|\xi|^{\gamma}\nabla_{\zeta})$ and the
Baouendi-Grushin operator
$\mathcal{L}_{\gamma}:=-\Delta_{\xi}-\gamma^{2}|\xi|^{2\gamma}\Delta_{\zeta}.$
For $x=(\xi,\zeta)\in\mathbb{R}^{k}\times\mathbb{R}^{l}$, let
$d_{\gamma}(x)=(|\xi|^{2\gamma}+|\zeta|^{2})^{1/2\gamma}.$
As in the Heisenberg group, the function $\Psi_{\mathcal{L}_{\gamma}}(x)$ can
be calculated explicitly as follows:
$\displaystyle\Psi_{\mathcal{L}_{\gamma}}(x)=|\nabla_{\gamma}d_{\gamma}|^{2}(x)$
$\displaystyle=\sum_{i=1}^{k}(\partial_{\xi_{i}}d_{\gamma})^{2}+\sum_{i=1}^{l}\gamma^{2}|\xi|^{2\gamma}(\partial_{\zeta_{i}}d_{\gamma})^{2}$
$\displaystyle=(|\xi|^{2\gamma}+|\zeta|^{2})^{\frac{1}{\gamma}-2}(|\xi|^{4\gamma}+|\xi|^{2\gamma}|\zeta|^{2})$
$\displaystyle=\frac{|\xi|^{2\gamma}}{d_{\gamma}^{2\gamma}(x)}.$
###### Corollary 2.7.
Let $\Omega$ be an open subset of
$\mathbb{R}^{n}=\mathbb{R}^{k}\times\mathbb{R}^{l}$ and $x\in\Omega$ with
$x=(\xi,\zeta)$. Let $W(x)$ and $H(x)$ be positive radially symmetric
functions. Then the inequality
$\int_{\Omega}W(x)|\nabla_{\gamma}u|^{2}dx\geq\int_{\Omega}|\nabla_{\gamma}d_{\gamma}|^{2}H(x)|u|^{2}dx$
(2.29)
holds for all complex-valued functions $u\in C^{1}_{0}(\Omega)$ provided that
the following conditions hold:
$\int_{r_{0}}^{\infty}s^{Q-1}H(s)ds<\infty,\,\,\text{and}\,\,\,\,\phi(r)=2\int_{r}^{\infty}s^{Q-1}H(s)ds<\infty\,\,\,\text{for}\,\,\,r\geq
r_{0},$ (2.30)
$\int_{r_{0}}^{\infty}\frac{\phi(s)}{s^{Q-1}W(s)}ds\leq\frac{1}{2}\,\,\,\text{for
some}\,\,\,r_{0}>0.$ (2.31)
The Engel group $\mathbb{E}$: Let
$\mathbb{E}:=\mathbb{R}^{2}\times\mathbb{R}\times\mathbb{R}$ be the Engel
group with $x=(x_{1},x_{2},x_{3},x_{4})$. We take
$A(x):=\begin{pmatrix}1&0&-\frac{x_{2}}{2}&-\frac{x_{3}}{2}+\frac{x_{1}x_{2}}{12}\\\
0&1&\frac{x_{1}}{2}&\frac{x_{1}^{2}}{12}\\\
-\frac{x_{2}}{2}&\frac{x_{1}}{2}&\frac{x_{1}^{2}+x_{2}^{2}}{4}&\frac{x_{2}}{2}\left(\frac{x_{3}}{2}-\frac{x_{1}x_{2}}{12}\right)+\frac{x_{1}^{3}}{24}\\\
-\frac{x_{3}}{2}+\frac{x_{1}x_{2}}{12}&\frac{x_{1}^{2}}{12}&\frac{x_{2}}{2}\left(\frac{x_{3}}{2}-\frac{x_{1}x_{2}}{12}\right)+\frac{x_{1}^{3}}{24}&\left(\frac{x_{3}}{2}-\frac{x_{1}x_{2}}{12}\right)^{2}+\frac{x_{1}^{4}}{144}\end{pmatrix}.$
Then the horizontal gradient and sub-Laplacian are given by
$\nabla_{\mathbb{E}}:=(X_{1},X_{2}),$
and
$\displaystyle\mathcal{L}_{\mathbb{E}}:=X_{1}^{2}+X_{2}^{2},$
where
$X_{1}:=\partial_{x_{1}}-\frac{x_{2}}{2}\partial_{x_{3}}-\left(\frac{x_{3}}{2}-\frac{x_{1}x_{2}}{12}\right)\partial_{x_{4}},\,\,\text{and}\,\,X_{2}:=\partial_{x_{2}}+\frac{x_{1}}{2}\partial_{x_{3}}+\frac{x_{1}^{2}}{12}\partial_{x_{4}}.$
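As a sanity check (a sketch, not from the paper), the matrix $A(x)$ above is exactly $v_{1}v_{1}^{T}+v_{2}v_{2}^{T}$, where $v_{1},v_{2}$ are the coefficient vectors of $X_{1},X_{2}$; this can be confirmed symbolically:

```python
# Check that the Engel matrix A(x) above equals v1 v1^T + v2 v2^T,
# where v1, v2 are the coefficient vectors of the fields X1, X2.
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)
v1 = sp.Matrix([1, 0, -x2/2, -(x3/2 - x1*x2/12)])
v2 = sp.Matrix([0, 1, x1/2, x1**2/12])

A = sp.Matrix([
    [1, 0, -x2/2, -x3/2 + x1*x2/12],
    [0, 1, x1/2, x1**2/12],
    [-x2/2, x1/2, (x1**2 + x2**2)/4, (x2/2)*(x3/2 - x1*x2/12) + x1**3/24],
    [-x3/2 + x1*x2/12, x1**2/12,
     (x2/2)*(x3/2 - x1*x2/12) + x1**3/24, (x3/2 - x1*x2/12)**2 + x1**4/144],
])
assert sp.simplify(v1*v1.T + v2*v2.T - A) == sp.zeros(4)
```

The Cartan-group matrix in the next example can be checked in the same way from its fields $X_{1},X_{2}$.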
###### Corollary 2.8.
Let $\Omega$ be a bounded domain in $\mathbb{E}$. Let $W(x)$ and $H(x)$ be
positive radially symmetric functions. Then the inequality
$\int_{\Omega}W(x)|\nabla_{\mathbb{E}}u|^{2}dx\geq\int_{\Omega}|\nabla_{\mathbb{E}}d|^{2}H(x)|u|^{2}dx$
(2.32)
holds for all complex-valued functions $u\in C^{1}_{0}(\Omega)$ provided that
the following conditions hold:
$\int_{r_{0}}^{\infty}s^{Q-1}H(s)ds<\infty,\,\,\text{and}\,\,\,\,\phi(r)=2\int_{r}^{\infty}s^{Q-1}H(s)ds<\infty\,\,\,\text{for}\,\,\,r\geq
r_{0},$ (2.33)
$\int_{r_{0}}^{\infty}\frac{\phi(s)}{s^{Q-1}W(s)}ds\leq\frac{1}{2}\,\,\,\text{for
some}\,\,\,r_{0}>0.$ (2.34)
The Cartan group $\mathcal{B}_{5}$: Let
$\mathcal{B}_{5}:=\mathbb{R}^{2}\times\mathbb{R}\times\mathbb{R}^{2}$ be the
Cartan group with $x=(x_{1},x_{2},x_{3},x_{4},x_{5})$. We take
$A(x):=\begin{pmatrix}1&0&0&0&0\\\
0&1&-x_{1}&\frac{x_{1}^{2}}{2}&x_{1}x_{2}\\\
0&-x_{1}&x_{1}^{2}&-\frac{x_{1}^{3}}{2}&-x_{1}^{2}x_{2}\\\
0&\frac{x_{1}^{2}}{2}&-\frac{x_{1}^{3}}{2}&\frac{x_{1}^{4}}{4}&\frac{x_{1}^{3}x_{2}}{2}\\\
0&x_{1}x_{2}&-x_{1}^{2}x_{2}&\frac{x_{1}^{3}x_{2}}{2}&x_{1}^{2}x_{2}^{2}\end{pmatrix}.$
Then the horizontal gradient and sub-Laplacian are given by
$\nabla_{\mathcal{B}_{5}}:=(X_{1},X_{2}),$
and
$\displaystyle\mathcal{L}_{\mathcal{B}_{5}}:=X_{1}^{2}+X_{2}^{2},$
where
$X_{1}:=\partial_{x_{1}}\,\,\text{and}\,\,X_{2}:=\partial_{x_{2}}-x_{1}\partial_{x_{3}}+\frac{x_{1}^{2}}{2}\partial_{x_{4}}+x_{1}x_{2}\partial_{x_{5}}.$
###### Corollary 2.9.
Let $\Omega$ be a bounded domain in $\mathcal{B}_{5}$. Let $W(x)$ and $H(x)$
be positive radially symmetric functions. Then the inequality
$\int_{\Omega}W(x)|\nabla_{\mathcal{B}_{5}}u|^{2}dx\geq\int_{\Omega}|\nabla_{\mathcal{B}_{5}}d|^{2}H(x)|u|^{2}dx$
(2.35)
holds for all complex-valued functions $u\in C^{1}_{0}(\Omega)$ provided that
the following conditions hold:
$\int_{r_{0}}^{\infty}s^{Q-1}H(s)ds<\infty,\,\,\text{and}\,\,\,\,\phi(r)=2\int_{r}^{\infty}s^{Q-1}H(s)ds<\infty\,\,\,\text{for}\,\,\,r\geq
r_{0},$ (2.36)
$\int_{r_{0}}^{\infty}\frac{\phi(s)}{s^{Q-1}W(s)}ds\leq\frac{1}{2}\,\,\,\text{for
some}\,\,\,r_{0}>0.$ (2.37)
## 3\. Rellich inequality with Bessel pairs
In this section, we present a Rellich inequality with Bessel pairs, which will
be obtained as a byproduct of the (second-order) Picone type identity and the
divergence theorem.
###### Theorem 3.1.
Let $\Omega$ be a bounded domain in $\mathbb{R}^{n}$. Let $W\in C^{2}(\Omega)$
and $H\in L^{1}_{loc}(\Omega)$ be positive radially symmetric functions.
Suppose that there exists a positive function $v\in C^{2}(\Omega)$ such that
$\Delta(W(x)|\Delta v|^{p-2}\Delta v)\geq H(x)v^{p-1},$ (3.1)
with $-\Delta v>0$ a.e. in $\Omega$. Then for all complex-valued functions
$u\in C^{2}_{0}(\Omega)$, we have
$\int_{\Omega}W(x)|\Delta|u||^{p}dx\geq\int_{\Omega}H(x)|u|^{p}dx,$ (3.2)
where $1<p<n$.
We now state the $p=2$ case of the above theorem as a corollary:
###### Corollary 3.2.
Let $\Omega$ be a bounded domain in $\mathbb{R}^{n}$. Let $W\in C^{2}(\Omega)$
and $H\in L^{1}_{loc}(\Omega)$ be positive radially symmetric functions.
Suppose that a positive function $v\in C^{\infty}(\Omega)$ satisfies
$\Delta(W(x)\Delta v)\geq H(x)v,$ (3.3)
with $-\Delta v>0$ a.e. in $\Omega$. Then for all complex-valued functions
$u\in C^{2}_{0}(\Omega)$, we have
$\int_{\Omega}W(x)|\Delta|u||^{2}dx\geq\int_{\Omega}H(x)|u|^{2}dx.$ (3.4)
In order to prove Theorem 3.1, we establish the (second-order) Picone type
identity.
###### Lemma 3.3.
Let $\Omega\subset\mathbb{R}^{n}$ be an open set. Let $v$ be twice
differentiable a.e. in $\Omega$ and satisfy $v>0$ and $-\Delta v>0$ a.e. in
$\Omega$. Let a complex-valued function $u$ be twice differentiable a.e. in
$\Omega$. For $p>1$ we define
$R_{1}(u,v):=|\Delta|u||^{p}-\Delta\left(\frac{|u|^{p}}{v^{p-1}}\right)|\Delta
v|^{p-2}\Delta v,$ (3.5)
and
$\displaystyle L_{1}(u,v):=$
$\displaystyle|\Delta|u||^{p}-p\left(\frac{|u|}{v}\right)^{p-1}\Delta|u||\Delta
v|^{p-2}\Delta v$ (3.6)
$\displaystyle+(p-1)\left(\frac{|u|}{v}\right)^{p}|\Delta
v|^{p}-p(p-1)\frac{|u|^{p-2}}{v^{p-1}}|\Delta v|^{p-2}\Delta
v\left(\nabla|u|-\frac{|u|}{v}\nabla v\right)^{2}.$
Then we have
$L_{1}(u,v)=R_{1}(u,v)\geq 0.$ (3.7)
###### Proof of Lemma 3.3.
We show that $R_{1}(u,v)=L_{1}(u,v)$ by a simple expansion of $R_{1}(u,v)$ as
follows
$\displaystyle\Delta\left(\frac{|u|^{p}}{v^{p-1}}\right)$
$\displaystyle=\nabla\cdot\left(\frac{p|u|^{p-1}\nabla|u|}{v^{p-1}}-\frac{(p-1)|u|^{p}\nabla
v}{v^{p}}\right)$
$\displaystyle=\sum_{i=1}^{n}\partial_{x_{i}}\left(\frac{p|u|^{p-1}\partial_{x_{i}}|u|}{v^{p-1}}-\frac{(p-1)|u|^{p}\partial_{x_{i}}v}{v^{p}}\right)$
$\displaystyle=\sum_{i=1}^{n}\left[\frac{p(p-1)|u|^{p-2}(\partial_{x_{i}}|u|)^{2}+p|u|^{p-1}\partial_{x_{i}}^{2}|u|}{v^{p-1}}-\frac{p(p-1)|u|^{p-1}\partial_{x_{i}}|u|\partial_{x_{i}}v}{v^{p}}\right.$
$\displaystyle-\left.\frac{p(p-1)|u|^{p-1}\partial_{x_{i}}|u|\partial_{x_{i}}v+(p-1)|u|^{p}\partial_{x_{i}}^{2}v}{v^{p}}+\frac{p(p-1)|u|^{p}(\partial_{x_{i}}v)^{2}}{v^{p+1}}\right]$
$\displaystyle=p\frac{|u|^{p-1}}{v^{p-1}}\Delta|u|-(p-1)\frac{|u|^{p}}{v^{p}}\Delta
v$
$\displaystyle+p(p-1)\left[\frac{|u|^{p-2}}{v^{p-1}}|\nabla|u||^{2}-2\frac{|u|^{p-1}}{v^{p}}\langle\nabla|u|,\nabla
v\rangle+\frac{|u|^{p}}{v^{p+1}}|\nabla v|^{2}\right]$
$\displaystyle=p\frac{|u|^{p-1}}{v^{p-1}}\Delta|u|-(p-1)\frac{|u|^{p}}{v^{p}}\Delta
v+p(p-1)\frac{|u|^{p-2}}{v^{p-1}}\left|\nabla|u|-\frac{|u|}{v}\nabla
v\right|^{2}.$
The rest of the proof applies Young’s inequality:
$p\frac{|u|^{p-1}}{v^{p-1}}\Delta|u||\Delta v|^{p-2}\Delta
v\leq|\Delta|u||^{p}+(p-1)\frac{|u|^{p}}{v^{p}}|\Delta v|^{p},$
where $p>1$. This gives
$\displaystyle L_{1}(u,v)\geq-p(p-1)\frac{|u|^{p-2}}{v^{p-1}}|\Delta
v|^{p-2}\Delta v\left(\nabla|u|-\frac{|u|}{v}\nabla v\right)^{2}.$
It is then easy to see that $L_{1}(u,v)\geq 0$, since $-\Delta v>0$ a.e. in
$\Omega$. ∎
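For $p=2$ and a real positive $u$ the identity $R_{1}(u,v)=L_{1}(u,v)$ can be checked symbolically (a sketch with arbitrary sample functions, not part of the paper):

```python
# Verify R1(u,v) = L1(u,v) from Lemma 3.3 for p = 2 with sympy.
# The sample u, v are arbitrary (u > 0 so |u| = u; the sign condition on
# lap(v) is not needed for the identity itself, only for the sign of L1).
import sympy as sp

x, y = sp.symbols('x y', positive=True)
u = x**2 + y**2 + 1
v = sp.exp(-x - y)

lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)

gx = sp.diff(u, x) - (u/v)*sp.diff(v, x)   # components of grad u - (u/v) grad v
gy = sp.diff(u, y) - (u/v)*sp.diff(v, y)

# For p = 2, (3.5)-(3.6) reduce to:
R1 = lap(u)**2 - lap(u**2/v)*lap(v)
L1 = (lap(u)**2 - 2*(u/v)*lap(u)*lap(v) + (u/v)**2*lap(v)**2
      - 2*(1/v)*lap(v)*(gx**2 + gy**2))

assert sp.simplify(R1 - L1) == 0
```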
###### Proof of Theorem 3.1.
We prove the theorem using the (second-order) Picone type identity and
Green’s second identity as follows:
$\displaystyle 0$ $\displaystyle\leq\int_{\Omega}W(x)R_{1}(u,v)dx$
$\displaystyle=\int_{\Omega}W(x)|\Delta|u||^{p}dx-\int_{\Omega}W(x)\Delta\left(\frac{|u|^{p}}{v^{p-1}}\right)|\Delta
v|^{p-2}\Delta vdx$
$\displaystyle=\int_{\Omega}W(x)|\Delta|u||^{p}dx-\int_{\Omega}\frac{|u|^{p}}{v^{p-1}}\Delta(W(x)|\Delta
v|^{p-2}\Delta v)dx$
$\displaystyle\leq\int_{\Omega}W(x)|\Delta|u||^{p}dx-\int_{\Omega}H(x)|u|^{p}dx,$
using (3.1). This completes the proof. ∎
### 3.1. Several versions of Rellich type inequalities
Here, inserting $W\equiv 1$ and $v=|x|^{-\frac{n-4}{2}}$ into (3.3), we obtain
the function
$H(x)=\frac{n^{2}(n-4)^{2}}{16}|x|^{-4},$
and inserting this into inequality (3.4) yields the following result:
###### Corollary 3.4 (Rellich inequality).
Let $n\geq 5$. Then for all complex-valued functions $u\in
C_{0}^{\infty}(\mathbb{R}^{n}\backslash\\{0\\})$, we have
$\int_{\mathbb{R}^{n}}|\Delta|u||^{2}dx\geq\frac{n^{2}(n-4)^{2}}{16}\int_{\mathbb{R}^{n}}\frac{|u|^{2}}{|x|^{4}}dx.$
(3.8)
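The pair behind Corollary 3.4 can be verified symbolically for radial functions (a sketch; for radial $f$ the Laplacian is $\Delta f=f''+\frac{n-1}{r}f'$):

```python
# Check that v = r^{-(n-4)/2} and H = n^2 (n-4)^2 / 16 * r^{-4} satisfy
# lap(lap(v)) = H v (equality in (3.3) with W = 1), via the radial Laplacian.
import sympy as sp

r, n = sp.symbols('r n', positive=True)
lap = lambda f: sp.diff(f, r, 2) + (n - 1)/r*sp.diff(f, r)

v = r**(-(n - 4)/sp.Integer(2))
H = n**2*(n - 4)**2/sp.Integer(16)*r**(-4)

assert sp.simplify(lap(lap(v)) - H*v) == 0
```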
###### Corollary 3.5.
Let $n\geq 3$ and $2-\frac{n}{p}<\gamma<\frac{n(p-1)}{p}$. Then for all
complex-valued functions $u\in
C_{0}^{\infty}(\mathbb{R}^{n}\backslash\\{0\\})$, we have
$\int_{\mathbb{R}^{n}}|x|^{\gamma p}|\Delta
u|^{p}dx\geq\left(\frac{n}{p}-2+\gamma\right)^{p}\left(\frac{n(p-1)}{p}-\gamma\right)^{p}\int_{\mathbb{R}^{n}}|x|^{(\gamma-2)p}|u|^{p}dx.$
(3.9)
In the case $\gamma=0$ and for $1<p<n/2$, we get
$\int_{\mathbb{R}^{n}}|\Delta
u|^{p}dx\geq\left(\frac{n}{p}-2\right)^{p}\left(\frac{n(p-1)}{p}\right)^{p}\int_{\mathbb{R}^{n}}|x|^{-2p}|u|^{p}dx.$
(3.10)
###### Remark 3.6.
Note that the weighted Rellich inequality (3.9) is proved by Mitidieri [34]
and $L^{p}$-Rellich inequality (3.10) by Okazawa [35] with the optimal
constants, respectively.
###### Proof of Corollary 3.5.
Let us set
$W=|x|^{\gamma p},\,\,\text{and}\,\,\,v=|x|^{\alpha},$ (3.11)
where $\alpha=-(n/p+\gamma-2)$, so that $\alpha<0$ and
$\alpha+n-2=\frac{n(p-1)}{p}-\gamma>0$. A direct computation gives
$\displaystyle\Delta v$ $\displaystyle=\alpha(\alpha+n-2)|x|^{\alpha-2},$
$\displaystyle|\Delta v|^{p-2}$
$\displaystyle=|\alpha|^{p-2}(\alpha+n-2)^{p-2}|x|^{(\alpha-2)(p-2)},$
$\displaystyle W|\Delta v|^{p-2}\Delta v$
$\displaystyle=-|\alpha|^{p-1}(\alpha+n-2)^{p-1}|x|^{(\alpha-2)(p-1)+\gamma
p}.$
Taking the Laplacian of the latter, we arrive at
$\displaystyle\Delta(W|\Delta v|^{p-2}\Delta
v)=-C_{\alpha,p,n,\gamma}|x|^{\alpha(p-1)+(\gamma-2)p},$
where
$C_{\alpha,p,n,\gamma}:=|\alpha|^{p-1}(\alpha+n-2)^{p-1}(\alpha
p-\alpha-2p+2+\gamma p)(\alpha p-\alpha-2p+\gamma p+n).$
Substituting the value of $\alpha$ into the constant shows that $v$ satisfies
(3.1) with equality for
$H(x)=\left(\frac{n}{p}-2+\gamma\right)^{p}\left(\frac{n(p-1)}{p}-\gamma\right)^{p}|x|^{(\gamma-2)p}.$
(3.12)
The statement then follows from Theorem 3.1. ∎
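The computation in the proof can be sanity-checked numerically for concrete admissible values, say $n=10$, $p=2$, $\gamma=1$ (so that $2-\frac{n}{p}<\gamma<\frac{n(p-1)}{p}$); this is a sketch, not part of the paper:

```python
# Check Delta(W |Delta v|^{p-2} Delta v) = H v^{p-1} for n=10, p=2, gamma=1,
# with W = r^{gamma p}, v = r^{alpha}, alpha = -(n/p + gamma - 2), using the
# radial Laplacian lap(f) = f'' + (n-1)/r f'.
import sympy as sp

r = sp.symbols('r', positive=True)
n, p, gamma = 10, 2, 1
alpha = -(sp.Rational(n, p) + gamma - 2)        # = -4 here

lap = lambda f: sp.diff(f, r, 2) + (n - 1)/r*sp.diff(f, r)

W = r**(gamma*p)
v = r**alpha
lhs = lap(W*sp.Abs(lap(v))**(p - 2)*lap(v))     # |.|^{p-2} = 1 for p = 2
H = (sp.Rational(n, p) - 2 + gamma)**p \
    * (sp.Rational(n*(p - 1), p) - gamma)**p * r**((gamma - 2)*p)

assert sp.simplify(lhs - H*v**(p - 1)) == 0
```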
## References
* [1] Adimurthi, Chaudhuri, N., Ramaswamy, N.: An improved Hardy Sobolev inequality and its applications. Proc. Am. Math. Soc. 130, 489-505 (2002)
* [2] Adimurthi, Sekar A.: Role of the fundamental solution in Hardy-Sobolev type inequalities. Proceedings of the Royal Society of Edinburgh 136A, 1111-1130 (2006)
* [3] Agarwal R. P., Bohner M., Li W-T.: Nonoscillation and oscillation: theory for functional differential equations, Dekker, New York, 1995
* [4] Allegretto W., Huang Y.X.: A Picone’s identity for the $p$-Laplacian and applications. Nonlinear Analysis, Theory, Methods and Applications 32(7), 819-830 (1998)
* [5] Ancona A.: On strong barriers and an inequality of Hardy for domains in $\mathbb{R}^{n}$. J. London Math. Soc. 43, 274-290 (1986)
* [6] Avkhadiev F.G. and Laptev A.: On a sharp Hardy inequality for convex domains. Springer; International Mathematical Series (New York) 12, Around the Research of Vladimir Maz’ya I, 1–12 (2010)
* [7] Avkhadiev F. G. and Wirths K.J.: Unified Poincaré and Hardy inequalities with sharp constants for convex domains. ZAMM Z. Angew. Math. Mech. 87, 632-642 (2007)
* [8] Blanchet, A., Bonforte, M., Dolbeault, J., Grillo, G., Vasquez, J.L.: Hardy-Poincaré inequalities and applications to nonlinear diffusions. C. R. Acad. Sci. Paris, Ser. I 344, 431-436 (2007)
* [9] Boccardo L., Galloutet Th.: Nonlinear elliptic and parabolic equations involving measure data. J. Funct. Anal. 87, 149-169 (1989)
* [10] Bonfiglioli A., Lanconelli E. and Uguzzoni F.: Stratified Lie Groups and Potential Theory for their Sub-Laplacians. Springer-Verlag, Berlin-Heidelberg, 2007
* [11] Bosi R., Dolbeault J., and Esteban M.J.: Estimates for the optimal constants in multipolar Hardy inequalities for Schrödinger and Dirac operators. Commun. Pure Appl. Anal. 7, no. 3, 533-562 (2008)
* [12] Bouchez V., Willem M.: Extremal functions for the Caffarelli-Kohn-Nirenberg inequalities: a simple proof of symmetry. J. Math. Anal. Appl. 352(1), 293-300 (2009)
* [13] Brezis, H., Lieb, E.H.: Sobolev inequalities with remainder terms. J. Funct. Anal. 62, 73-86 (1985)
* [14] Brezis, H., Marcus, M.: Hardy’s inequality revisited. Ann. Scuola. Norm. Sup. Pisa 25, 217-237 (1997)
* [15] Brezis, H., Marcus, M., Shafrir, I.: Extremal functions for Hardy’s inequality with weight. J. Funct. Anal. 171, 177-191 (2000)
* [16] Brezis, H., Vazquez, J.L.: Blow-up solutions of some nonlinear elliptic problems. Revista Mat. Univ. Complutense Madrid 10, 443-469 (1997)
* [17] Caffarelli, L., Kohn, R., Nirenberg, L.: First order interpolation inequalities with weights. Compos. Math. 53, 259-275 (1984)
* [18] Catrina F., Wang Z.: On the Caffarelli-Kohn-Nirenberg inequalities: sharp constants, existence (and nonexistence), and symmetry of extremal functions. Comm. Pure Appl. Math. 54(2), 229-258 (2001)
* [19] Cazacu C.: The method of super-solutions in Hardy and Rellich inequalities in the $L^{2}$ setting: an overview of well-known results and short proofs. Rev. Roumaine Math. Pures Appl. to appear. Preprint.
* [20] Cazacu C., Zuazua E.: Improved multipolar Hardy inequalities. Studies in phase space analysis with applications to PDEs, 35-52, Progr. Nonlinear Differential Equations Appl. 84, Birkhuser/Springer, New York, 2013
* [21] Cowan C.: Optimal Hardy inequalities for general elliptic operators with improvements. Comm. Pure Appl. Anal. 9 no. 1, 109-140 (2010)
* [22] Davies E. B.: A review of Hardy inequalities. The Mazya anniversary collection, Vol. 2 (Rostock, 1998), 55–67, Oper. Theory Adv. Appl., 110, Birkhäuser, Basel, 1999
* [23] D’Ambrosio L.: Hardy-type inequalities related to degenerate elliptic differential operators. Ann. Scuola Norm. Sup. Pisa CI. Sci. 5, 451-486 (2005)
* [24] Felli V., Terracini S: Elliptic equations with multi-singular inverse-square potentials and critical nonlinearity. Comm. Partial Differential Equations 31, 469-495 (2006)
* [25] Fischer V., Ruzhansky M.: Quantization on nilpotent Lie groups. Progress in Mathematics, 314, Birkhäuser, (open access book) (2016)
* [26] Garofalo N., Lanconelli E.: Frequency functions on the Heisenberg group, the uncertainty principle and unique continuation. Ann. Inst. Fourier (Grenoble), 40, No. 2, 313-356 (1990)
* [27] Ghoussoub N., Moradifam A.: Functional inequalities new perspectives and new applications. Mathematical Surveys and Monographs, 187, American Mathematical Society, Providence, RI 2013
* [28] Ghoussoub N., Moradifam A.: Bessel pairs and optimal Hardy and Hardy-Rellich inequalities. Math. Ann. 349, 1-57 (2011)
* [29] Goldstein J., Kombe I., and Yener A.: A unified approach to weighted Hardy type inequalities on Carnot groups. Discrete and Continuous Dynamical Systems 37, No. 4, 2009-2021 (2017)
* [30] Hardy G.: Note on a theorem of Hilbert. Math. Zeitschr. 6, 314-317 (1920)
* [31] Kufner A. and Opic B.: “Hardy Type Inequalities,” Pitman Research Notes in Mathematics Series, 219. Longman Scientific and Technical, Harlow, 1990
* [32] Kilpelainen T., Maly J.: Degenerate elliptic equations with measure data and nonlinear potentials. Ann. Scuola Norm. Sup. Pisa IV 19, 591-613, (1992)
* [33] Maz’ya V.G.: Sobolev Spaces, Berlin, Springer-Verlag, 1985
* [34] Mitidieri E.: A simple approach to Hardy inequalities. Mathematical Notes 67, 479-486 (2000)
* [35] Okazawa N.: $L^{p}$-theory of Schrödinger operators with strongly singular potentials. Japan. J. Math. 22, 199–239 (1996)
* [36] Ruzhansky M., Sabitbek B., Suragan D.: Weighted $L^{p}$-Hardy and $L^{p}$-Rellich inequalities with boundary terms on stratified Lie groups. Rev. Mat. Complutense, 32, 19-35 (2019)
* [37] Ruzhansky M., Sabitbek B., Suragan D.: Weighted anisotropic Hardy and Rellich type inequalities for general vector fields. NoDEA Nonlinear Differential Equations Appl. 26, no. 2, 26:13 (2019)
* [38] Ruzhansky M., Suragan D.: Hardy inequalities on homogeneous groups. Progress in Math. Vol. 327, Birkhäuser, 588 pp, 2019
* [39] Ruzhansky M., Verma D.: Hardy inequalities on metric measure spaces. Proc. R. Soc. A, 475, 20180310, 15pp. (2019)
* [40] Ruzhansky M., Suragan D.: Anisotropic $L^{2}$-weighted Hardy and $L^{2}$-Caffarelli-Kohn-Nirenberg inequalities. Commun. Contemp. Math. 19(6), 1750014 (2017)
* [41] Sabitbek B., Suragan D.: Horizontal Weighted Hardy-Rellich Type inequality on Stratified Lie groups. Complex Anal. Oper. Theory. 12(6), 1469-1480 (2018)
* [42] Sabitbek B.: Hardy-Sobolev type inequalities on homogeneous groups and applications. PhD thesis, Al-Farabi Kazakh National University, (2019)
* [43] Wang, Z.Q., Willem, M.: Caffarelli-Kohn-Nirenberg inequalities with remainder terms. J. Funct. Anal. 203, 550-568 (2003)
# Separating Polarization from Noise: Comparison and Normalization of
Structural Polarization Measures
Ali Salloum<EMAIL_ADDRESS>0000-0002-2381-6876 Aalto
UniversityKonemiehentie 2, 02150EspooFinland , Ted Hsuan Yun Chen
<EMAIL_ADDRESS>0000-0002-3279-8710 Aalto University and
University of HelsinkiUnioninkatu 37, 00170HelsinkiFinland and Mikko Kivelä
<EMAIL_ADDRESS>1234-5678-9012 Aalto UniversityKonemiehentie 2,
02150EspooFinland
(2021)
###### Abstract.
Quantifying the amount of polarization is crucial for understanding and
studying political polarization in political and social systems. Several
methods are commonly used to measure polarization in social networks purely by
inspecting their structure. We analyse eight such methods and show that all
of them yield high polarization scores even for random networks with similar
density and degree distributions to typical real-world networks. Further, some
of the methods are sensitive to degree distributions and relative sizes of the
polarized groups. We propose a normalization of the existing scores and a
minimal set of tests that a score should pass in order for it to be suitable
for separating polarized networks from random noise. The performance of the
scores increased by 38%-220% after normalization in a classification task of
203 networks. Further, we find that the choice of method is not as important
as normalization, after which most of the methods have better performance than
the best-performing method before normalization. This work opens up the
possibility to critically assess and compare the features and performance of
different methods for measuring structural polarization.
network science, polarization, political polarization, computational social
science, normalization, twitter, networks, sociology, community detection,
clustering, statistical significance
††copyright: acmcopyright††journalyear: 2021††ccs: Human-centered computing
Collaborative and social computing design and evaluation methods Social
network analysis††ccs: Human-centered computing Social network analysis††ccs:
Applied computing Sociology
## 1\. Introduction
Political polarization has long been a question in the social sciences
(Baldassarri and Gelman, 2008; Fiorina and Abrams, 2008), with mainstream
interest growing following observed political divisions in the 2000 and 2004
US Presidential elections (Fiorina and Abrams, 2008). Polarization, which is
generally understood in the social science literature as the division of
individuals into coherent and strongly opposed groups based on their opinion
of one or more issues (Fiorina and Abrams, 2008; DiMaggio et al., 1996), has
deleterious consequences for social systems. These undesirable outcomes
include increased divisiveness and animosity (Mason, 2015), policy gridlock
(Jones, 2001), and even decreased political representation (Baldassarri and
Gelman, 2008). Recent sociopolitical challenges have further strengthened
academic interest in this area, as political polarization has been associated
with difficulties in resolving pressing societal issues such as climate change
(Zhou, 2016), immigration and race relations (Hout and Maggio, 2020), and
recently the COVID-19 pandemic (Makridis and Rothwell, 2020). In the context
of social computing and computer mediated communication, political
polarization on social media has been shown to constrain communication and
ease the spread of misinformation (Iandoli et al., 2021).
Extensive research efforts have been put toward mitigating political
polarization in computer mediated communication. This body of work ranges from
studies exploring the role of polarization in social media (Rabab’ah et al.,
2016; Ozer et al., 2017; Darwish, 2019; Demszky et al., 2019; Bright, 2018;
Conover et al., 2011; Cossard et al., 2020) to platform design aimed at
attenuating polarization (Bail et al., 2018; Nelimarkka et al., 2018;
Nelimarkka et al., 2019). A fundamental requirement that underlies these
efforts is that we are able to identify polarized topics and systems, and
monitor changes over time. How will we know where intervention is required?
And how will we know if our interventions are successful? Much research has
been done in the area of measuring political polarization. Traditional methods
primarily relied on survey-based approaches, which tend to measure
distributional properties of responses to questions on public opinion surveys,
such as bimodality and dispersion (DiMaggio et al., 1996). With the increasing
availability and richness of publicly-observable social data, recent work from
the computational social science and network science fields have shifted
polarization measurement in two new directions (Garimella et al., 2018b). The
first area of work is content-based approaches (e.g. Belcastro et al., 2020;
Demszky et al., 2019), which have become widely used following developments in
natural language processing tools that allow researchers to detect
antagonistic positions between groups on social media.
Another fruitful area of work focuses on structural aspects of polarization
inferred from network representations of social or political systems. These
structural polarization measures tend to be based on the logic of identifying
what would be observable features of systems that are characterized by
polarization. Because polarization is a system-level phenomenon, these
features are defined at the network-level, making them different from node-
level (i.e. individual) behavioral mechanisms. While node-level mechanisms
such as homophily can contribute to polarization (Baumann et al., 2020; Jasny
et al., 2015), they do not necessitate it, as individuals will at any time
exhibit a multitude of behavioral tendencies. Most importantly, following the
definition of polarization outlined above, structural measures generally take
separation between structurally-identified communities to represent
polarization between groups in the system. Additionally, different measures
tend to be motivated by additional aspects of political polarization, such as,
for example, the difficulty of information to escape echo chambers (e.g.
Garimella et al., 2018b). Because these structural polarization measures can
flexibly capture theoretically-grounded aspects of political polarization,
especially at a relatively low cost compared to content-based and survey-based
approaches, they appear to be attractive measures for applied work.
Indeed, applications of structural polarization scores to real networks have
spanned many domains, including the studies of political party dynamics
(Baldassarri and Bearman, 2007; Moody and Mucha, 2013; Kirkland and Gross,
2014; Waugh et al., 2011; Neal, 2020), cohesion and voting behavior (Waugh et
al., 2011; Amelio and Pizzuti, 2012; Dal Maso et al., 2014; Chen et al.,
2020), political events (Weber et al., 2013; Morales et al., 2015; Darwish,
2019; Cossard et al., 2020; Rashed et al., 2020), and online echo chambers
(Garimella et al., 2018a; Cota et al., 2019; Cossard et al., 2020). They have
also been used to detect and study the presence of ‘controversy’ in
communication across different topics (Garimella et al., 2018b)—in fact, some
structural polarization scores are named as controversy scores—as polarized
groupings can be interpreted as a manifestation of a controversial issue.
Despite their widespread application, there are few systematic assessments of
how well these structural polarization measures perform in key aspects of
measurement validity (Trochim and Donnelly, 2001), such as predicting whether
a network is polarized based on human labeling, and whether they are invariant
to basic network statistics such as size and density. Exceptions to this
include a small number of studies that compare the performance of some scores
in classifying polarized political systems (Garimella et al., 2018b;
Emamgholizadeh et al., 2020) or those that show certain scores are invariant
to network size (Ozer et al., 2017). On the whole, the body of evidence is
sparse.
Further, beyond simply the lack of evidence, there are in fact theoretical
reasons to expect systematic shortcomings with the current approach. Consider
that the typical method for measuring structural polarization relies on the
joint tasks of (1) graph clustering to find two groups, and then (2) measuring
how separate the two groups are. A key characteristic of this approach is that
most clustering methods start from the assumption that the networks have
separate groups, and because these clustering algorithms are optimized to do
so, they can find convincing group structures even in completely random
networks (Zhang and Moore, 2014; Lancichinetti et al., 2010; Guimera et al.,
2004). Moreover, the quality function of these methods can be sensitive to
elementary features such as the size and density of these networks (Guimera et
al., 2004; Zhang and Moore, 2014). Given such a starting point, it is
difficult to develop a measure that would separate non-polarized networks from
polarized ones. Scores based on ostensibly intuitive community structures are
potentially poor measures of polarization because they yield high polarization
scores for non-polarized networks.
We address these concerns by analyzing eight different scores that have been
proposed and used for quantifying structural polarization. We test how well
they separate random networks from polarized networks and how sensitive they
are to various network features: number of nodes, average degree, degree
distribution, and relative size of the two groups. Further, we compute these
polarization values for a large array of political networks with varying
degrees of polarization, and use null models to see how much of the score
values are explained by these different network features. We find that none of
the measures tested here are able to systematically produce zero scores (i.e.,
low or non-polarization) for random networks without any planted group
structure, which we take to be a reasonable requirement for a polarization
measure (Guerra et al., 2013). Further, they are sensitive to variations in
basic network features, with the average degree being particularly challenging
for these measures.
Our study makes a number of contributions to the literature. First, our
results indicate that it is difficult to interpret any of the polarization
scores in isolation. Given a score value, it can be impossible to tell even
whether the network is more polarized than a random network unless the score
is reflected against the network features. Further, analysing the extent to
which the network is polarized or making comparisons claiming that one network
is more polarized than another one is not straightforward with the current
measures. Second, we present a straightforward solution to the problem. We
show that normalizing polarization scores against a distribution of scores
from their corresponding configuration models improves their performance in a
classification task of 203 labeled networks. Finally, we make our testing code
and data publicly available on GitHub (Salloum, 2021) and Zenodo (Salloum et
al., 2021).
The rest of this paper is organized as follows. We first briefly discuss prior
research and describe the current methods for measuring polarization in
networks. Next, we introduce the methods and data used in this study. We then
study the performance of modern polarization scores both on real networks and
synthetic networks. Finally, we present normalization to the existing scores.
We conclude by summarizing our results in the discussion section.
## 2\. Network-based Polarization Scores
Figure 1. The polarization measurement pipeline: construct graph, partition
graph, compute polarization.
Polarization can be understood as the division of individuals into coherent
and strongly opposed groups based on their opinion of one or more issues
(DiMaggio et al., 1996; Fiorina and Abrams, 2008; McCoy et al., 2018).
Polarization reflects strongly on social network structures by creating two
internally well-connected groups that are sparsely connected to each other
(Baumann et al., 2020), which means that the amount of polarization in a
system can be measured by observing the structure of interactions within that
system. This logic informs many structural polarization scores, which are
obtained following the procedure shown in Fig. 1: (1) social network
construction, (2) finding the two groups via graph partitioning, and (3)
quantifying the separation between the two groups.
When measuring structural polarization in a social system, constructing a
network representation of the system that allows us to identify the existence
of polarized group structures apart from other types of interactions is a
crucial step. It defines the appropriate bounds of the system, both in terms
of which units are included and which interactions are measured. A social
network is suitable for polarization analysis with the methods described here
if the link between two nodes indicates a positive relationship, such as
friendship, preference similarity, endorsement, or agreement (Conover et al.,
2011; Garimella et al., 2018b; Akoglu, 2014; Garimella, 2018). Bounding the
network to the appropriate set of interactions is key, as it has been observed
that simultaneously measuring unrelated topics will yield lower structural
polarization scores regardless of how polarized each of the topics are (Chen
et al., 2020).
Network-based polarization scores share the requirement that the network needs
to be partitioned, with the logic being that the partitioned units are somehow
closer to others in their in-group than to those from out-groups (Garimella et
al., 2018b). In some cases, information external to the network structure can
be used to infer the groups in the network (e.g. node partisanship labels
(Bright, 2018)). However, the group membership or position of each unit is
often not known or is difficult to otherwise infer, making graph partition
algorithms necessary. Graph partitioning is an ill-posed problem (Fortunato
and Hric, 2016), with a large number of algorithms available that try to find
slightly different types of clusters in networks. In this paper, we use three
different partition algorithms: METIS (Karypis and Kumar, 1998), regularized
spectral clustering (Zhang and Rohe, 2018), and a modularity (Newman, 2006)
optimization algorithm. These methods give similar overall results in our
analysis (see Appendix B); for brevity we will show only results for METIS,
which searches for two large groups such that there is a minimum number of
connections between them.
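As a rough illustration of the partitioning step, the sketch below implements a greedy pair-swap heuristic that shares METIS's objective (few links crossing two balanced halves) but none of its multilevel machinery. It is a toy stand-in for illustration only, not the algorithm METIS actually uses:

```python
# Toy two-way partitioner with METIS's objective: minimize the number of
# links crossing two equal halves. Greedy swaps only; illustrative stand-in.

def cut_size(adj, side):
    """Number of edges whose endpoints fall on different sides."""
    return sum(1 for u in adj for v in adj[u] if u < v and side[u] != side[v])

def greedy_bisection(adj):
    nodes = sorted(adj)
    # Start from an arbitrary balanced split, then swap pairs across the cut
    # whenever doing so reduces the number of crossing edges.
    side = {u: i < len(nodes) // 2 for i, u in enumerate(nodes)}
    best, improved = cut_size(adj, side), True
    while improved:
        improved = False
        for u in nodes:
            for v in nodes:
                if side[u] or not side[v]:
                    continue          # only consider pairs on opposite sides
                side[u], side[v] = side[v], side[u]
                new = cut_size(adj, side)
                if new < best:
                    best, improved = new, True
                else:
                    side[u], side[v] = side[v], side[u]   # undo the swap
    return side

# Two triangles {0, 2, 4} and {1, 3, 5} joined by one bridge edge (4, 1).
adj = {0: {2, 4}, 2: {0, 4}, 4: {0, 2, 1},
       1: {3, 5, 4}, 3: {1, 5}, 5: {1, 3}}
part = greedy_bisection(adj)
print(cut_size(adj, part))  # -> 1 (only the bridge edge crosses the cut)
```

Even on this toy graph the heuristic must escape a deliberately bad starting split, which hints at why production partitioners use more sophisticated (multilevel, spectral) strategies.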
The main difference between structural polarization scores is how they compute
the amount of polarization given a graph and its partition. As noted above,
structural polarization between groups is measured by the separation between
structurally-identified communities in the network. Here, separation is
usually characterized by the discrepancy between interaction patterns that
occur within groups and those that cross groups, but scores differ in the kind
of interactions they highlight. In this study, we examine eight different
structural polarization scores. A brief summary of all polarization scores
examined in this study can be found in Table 1. We selected these scores
because there is considerable variation in the kinds of interactions they
highlight, and because they are measures that have been used in applied work
across various fields, including computational social science (Darwish, 2019),
political science (Hargittai et al., 2008), communications (Bright, 2018), and
policy studies (Chen et al., 2020).
At its simplest, structural polarization is measured by comparing the
difference in the frequency of external to internal links using the EI-index
(Krackhardt and Stern, 1988) and similar density-based scores (Chen et al.,
2020). These scores disregard the internal structure of the groups and how the
links between them are placed. The Betweenness Centrality Controversy (BCC)
score (Garimella et al., 2018b) alternatively considers the difference in the
edge betweenness centrality of external and internal links. Another approach
is to use random walk simulations to determine how likely information is to
remain within groups or reach external groups (i.e. Random Walk Controversy
and Adaptive Random Walk Controversy; RWC and ARWC) (Garimella et al., 2018b;
Rabab’ah et al., 2016; Rumshisky et al., 2017; Darwish, 2019). We also explore
the performance of a polarization measure based on community boundaries
(Boundary Polarization; BP) (Guerra et al., 2013), where a high concentration
of high-degree nodes in the boundary of communities implies lower polarization
as the influential users are seen as bridging the gap between communities.
Lastly, we study a measure based on label propagation (Dipole Polarization;
DP) (Morales et al., 2015), where the influence of high-degree nodes is
believed to spread via the neighbors in the network, and the distance of the
quantified influence of each community is proportional to the polarization.
Full definitions of each of the eight structural polarization scores we study
are included in Appendix A.
Table 1. Polarization scores used in this study

Polarization score | Domain | Parameters
---|---|---
Random Walk Controversy (RWC) (Garimella et al., 2018b) | [-1, 1] | # of influencers in each group
Adaptive Random Walk Controversy (ARWC) | [-1, 1] | % of influencers in each group
Betweenness Centrality Controversy (BCC) (Garimella et al., 2018b) | [0, 1] | Kernel for density estimation
Boundary Polarization (BP) (Guerra et al., 2013) | [-0.5, 0.5] | -
Dipole polarization (DP) (Morales et al., 2015) | [0, 1] | % of influencers in each group
E-I Index (EI) (Krackhardt and Stern, 1988) | [-1, 1] | -
Adaptive E-I Index (AEI) (Chen et al., 2020) | [-1, 1] | -
Modularity (Q) (Waugh et al., 2011) | [-0.5, 1] | -
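One of the simpler scores in Table 1, modularity Q restricted to a given two-group partition, can be computed directly from its standard definition: for each group, the observed fraction of internal links minus the fraction expected from degrees alone. The toy graph below is an illustrative assumption:

```python
# Hedged sketch of Newman modularity Q for a fixed two-group partition, as
# used in the polarization setting. The toy graph is illustrative.

def modularity(edges, group):
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Q = sum over groups of (internal link fraction - expected fraction^2).
    q = 0.0
    for g in set(group.values()):
        internal = sum(1 for u, v in edges if group[u] == group[v] == g)
        degree_sum = sum(d for u, d in deg.items() if group[u] == g)
        q += internal / m - (degree_sum / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge: a clearly "polarized" toy graph.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
group = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, group), 3))  # -> 0.357
```

Full definitions of all eight scores, including the random-walk-based ones that need more machinery, are in Appendix A.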
There are several possible issues with the approach to measuring structural
polarization described above. First, if not designed carefully, such
polarization scores can be very sensitive to ostensibly unrelated network
features such as number of nodes, density, and degree heterogeneity. The
scores can also behave in opaque ways if the group sizes are not
balanced. Finally, graph partitioning algorithms can find clustering
structures even in randomized networks, giving them very high polarization
scores, especially if they are sparse. These problems have already been
documented in the related task of partitioning networks into an arbitrary
number of clusters (as opposed to two) and evaluating the partition (Bagrow,
2012; Zhang and Moore, 2014; Lancichinetti et al., 2010; Guimera et al.,
2004). This sensitivity to basic network features and cluster sizes
means that the scores are not on an absolute scale: they cannot be
compared across different networks unless these confounding factors are taken
into account. Such issues are not merely theoretical; practical problems
with the structural polarization score framework have been reported
recently. The RWC score, which
has been recently described as state-of-the-art (Emamgholizadeh et al., 2020;
de Zarate et al., 2020) and used as the main measure (Darwish, 2019; Cossard
et al., 2020; Rashed et al., 2020), has poor accuracy in separating polarized
networks from non-polarized ones (Emamgholizadeh et al., 2020).
Figure 2. A The network represents a Twitter conversation network for
#leadersdebate. Garimella et al. (Garimella et al., 2018b) has labeled this
topic polarized as it refers to the debate during the U.K. national elections
in May 2015. C A Twitter endorsement network around #translaw during the
Finnish Parliamentary Elections in 2019. (B, D) For both networks we also
visualized their randomized versions obtained with the configuration model. The
following RWC values ($k=10$) were computed: observed #leadersdebate 0.57,
randomized #leadersdebate 0.27, observed #translaw 0.68 and randomized
#translaw 0.74. An RWC value of zero indicates no polarization and one indicates
maximum polarization. Randomized networks still have positive polarization,
and for the randomized #translaw network the value is even higher than the
observed network’s score. Only the giant component of the network is
visualized.
We illustrate some of the problems in measuring structural polarization in
practice using two networks (shown in Fig. 2). The first network
(#leadersdebate) demonstrates the general problem of all scores studied here.
Here, the observed network has an RWC value of 0.57. After fixing the observed
degree sequence of the network and randomizing everything else, the shuffled
network still scores positively, with an average RWC value of 0.27. That is,
approximately half of the polarization score value is explained by the size
and the degree distribution of the network.
The second network (#translaw), with RWC value of 0.68, displays a serious
problem related to hubs. A randomized network can have a higher polarization
than the observed one due to the absorbing power of the hubs. In other words,
a random network with one or more hubs can keep the random walker trapped
inside the starting partition even in a non-polarized network. This leads to
higher values of RWC as in the case of the #translaw network, where the
randomized versions obtained an average score of 0.74. Note that this is also
higher than the score for the first network, which has a clearly visible
division to two clusters in the visualization. As will become evident in the
following sections, this is likely due to the larger size and higher density
of the first network. We will next systematically analyse how the various
network features affect all of the eight structural polarization scores.
## 3\. Methods
Our primary aim in this paper is to assess how well commonly-used structural
polarization measures perform on important aspects of measurement validity
(Trochim and Donnelly, 2001). We begin by examining the extent to which these
eight structural polarization scores are driven by basic network properties
using null models based on random networks. These models are built in a way
that they fix some structural properties but are otherwise maximally random,
i.e., they give equal probability of sampling every network while holding that
property constant. There are two main use cases. First, we will systematically
analyse various network features by sweeping through the parameter spaces of
these models. Second, we can match some of the features of the observed
networks and randomize everything else, giving us an estimate of how much of
each score is a consequence of these structural features. Valid measures
should not be systematically biased by these structural features.
We use (1) an Erdős-Rényi model (Erdős and Rényi, 1959) (fixing the number of
links), (2) a configuration model (Molloy and Reed, 1995; Fosdick et al.,
2018) (fixing the degree sequence), and (3) a model for fixing degree-degree
sequences (Mahadevan et al., 2006). All of these models fix the number of
nodes. One can see these models as a sequence where each of them shuffle less
of the data than the previous one (Mahadevan et al., 2006; Gauvin et al.,
2018), because fixing degree-degree sequences automatically fixes the degree
distribution, which in turn fixes the number of links. To emphasize this
relationship between the models, we label them using the $dk$-series framework
(Mahadevan et al., 2006; Orsini et al., 2015). The $dk$-distribution of graph
$G$ incorporates all degree correlations within $d$-sized subgraphs, hence
allowing us to ‘shuffle’ the original graph while keeping the desired
prescribed properties regarding network topology. Increasing values of $d$
captures more properties of the network at the expense of more complex
probability distributions. In the limit of large $d$, the complete
representation of the input graph is obtained. The intuition is very similar
to Taylor series. The more terms you include in the series (corresponding to
higher values of $d$), the closer you get to the exact value of the function
(i.e. the original network). Within this framework, the Erdős-Rényi network
fixing the number of links (or equivalently, the average node degree) is
$d=0$, the configuration model fixing the degree sequence is $d=1$, and the
model fixing the degree-degree sequence is $d=2$. Each randomized network is
repartitioned before computing its polarization score by applying the same
graph partitioning algorithm as for the corresponding original network.
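The $d=1$ (configuration-model) shuffle can be sketched with simple stub matching: list every node once per link endpoint, shuffle, and re-pair. A full analysis would also discard or rewire self-loops and multi-edges, and repartition each sample before scoring; the sketch below only illustrates the degree-preserving rewiring itself:

```python
import random

# Sketch of a configuration-model (d=1) shuffle via stub matching. Note that
# naive stub matching can create self-loops and multi-edges, which a full
# analysis would reject or rewire away (Fosdick et al., 2018).

def configuration_shuffle(edges, rng=random):
    stubs = [u for e in edges for u in e]   # each node appears deg(u) times
    rng.shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

def degrees(edges):
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
shuffled = configuration_shuffle(edges, random.Random(0))
assert degrees(shuffled) == degrees(edges)   # degree sequence is preserved
```

The $d=0$ model would instead keep only the number of nodes and links, and the $d=2$ model would additionally preserve degree-degree correlations.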
In addition to models fixing some observable network properties, we use the
stochastic block model (Holland et al., 1983) to generate networks with two
groups. This model is used in Section 4.2.3 to explore how unbalanced group
sizes affect the score values.
### 3.1. Data
Figure 3. Distributions of basic statistics of the 203 observed networks. The
average network studied had approximately 4000 nodes and an average degree of
3. Network assortativity refers to degree assortativity.
In addition to networks simulated with models, we use real-world data from
Twitter from three sources for a total of 203 networks. First, from Chen et
al., we used 150 networks constructed from the most popular hashtags during
the 2019 Finnish Parliamentary Elections (Chen et al., 2020). These were
constructed based on single hashtags (e.g. #police, #nature, #immigration).
Second, from the same source, we included 33 topic networks, which were
constructed from sets of hashtags related to larger topics, such as climate
change or education (see Appendix A in Chen et al., 2020). Third, we used 20
topic networks from Garimella et al.’s study on the RWC (Garimella et al.,
2018b). Each of the 203 resulting networks has a node set containing all users
who posted an original tweet with a specific set of hashtags and all users who
retweeted at least one of these tweets. Undirected ties on the network
indicate the connected nodes have at least one instance of retweeting between
them on the given topic.
Finally, we process all networks prior to assessment by (1) obtaining the giant
component of the network as has been done previously (Garimella et al.,
2018b), (2) removing self-loops, and (3) removing parallel edges. The latter
two steps did not have a significant effect on polarization values. The
average network in this study had approximately 4000 nodes, an average degree
of 3 and tended to be slightly assortative. Complete summary distributions for
the networks included in our study are presented in Fig. 3.
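The three preprocessing steps can be sketched in plain Python on an edge list: collapse parallel edges, drop self-loops, and keep only the largest connected component. The toy edge list is an illustrative assumption:

```python
# Sketch of the preprocessing applied to each network: (1) keep the giant
# (largest connected) component, (2) remove self-loops, (3) collapse
# parallel edges. Toy edge list for illustration.

def preprocess(edges):
    # Steps 2-3: drop self-loops, collapse parallel edges via canonical tuples.
    simple = {tuple(sorted(e)) for e in edges if e[0] != e[1]}
    # Build adjacency and find connected components by graph traversal.
    adj = {}
    for u, v in simple:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    stack.append(v)
        seen |= comp
        components.append(comp)
    giant = max(components, key=len)                # step 1
    return [e for e in simple if e[0] in giant]

edges = [(1, 1), (1, 2), (2, 1), (2, 3), (7, 8)]   # loop, duplicate, small comp
print(sorted(preprocess(edges)))  # -> [(1, 2), (2, 3)]
```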
## 4\. Results
### 4.1. Real Networks
We first compare the observed network data to random networks that are
shuffled in a way that we keep some of the features of the networks. As
expected, the more features we keep, the more similar the scores are to the
ones for original networks (see Fig. 4).
For BP, Q, EI, and AEI the scores produced by the random models cover the
observed score for most networks (black bar that corresponds to the observed
score of a single network is covered by the other colors that correspond to
the scores produced by different random models), indicating that the number of
links and size of the networks ($d=0$) are already enough to predict much of
the observed score. For RWC, ARWC, BCC, and DP, the more features are kept, the
higher (and therefore closer to the original value) the scores tend to be. In
general, the change in scores after randomization follows a pattern where both
low and high original scores can get very low values for the model keeping the
average degree ($d=0$). The degree sequence ($d=1$) and degree-degree sequence
($d=2$) can in many cases explain most of the observed scores, and in some
cases the scores for these random networks are even larger than for the
original networks. We have also included an alternative way to visualize the
polarization values of the randomized networks in Appendix B.
Figure 4. Polarization scores for the 203 observed networks and their
shuffled versions. Each bar corresponds to a score, and scores for a network and
its randomized versions are on top of each other, ordered from bottom to top
in the following order: observed network (black) and randomized networks where
degree-degree sequence ($d=2$, yellow), degree sequence ($d=1$, blue), or
average degree ($d=0$, red) is preserved. An interpretation for the figure is
that, the amount of color that is shown tells how much of the total bar height
(the score value) is explained by the corresponding network feature. Note that
in some cases, the randomized networks produce higher scores than the original
network and in this case the black bar is fully covered by the colored bar(s).
In this case we draw a black horizontal line on top of the colored bars
indicating the height of the black bar. See Appendix B for similar results
obtained by other partition algorithms.
Fig. 4 gives a detailed view of the polarization scores. It can be used to
read scores for each of the original networks and the corresponding random
networks. There are four methods for which the $d=0$ model already explains
most of the observed scores, and for the rest the degree sequence ($d=1$) is
usually a very good predictor.
Figure 5. A The RWC-scores for the 203 observed networks as a function of
average degree and network size. B The same score for configuration model
$d=1$ of the real network.
To get a more detailed view on the characteristics that are related with the
polarization scores, we show the RWC score as a function of both network size
and average degree in Fig. 5. Network size is correlated in a way that smaller
networks have higher RWC scores (Spearman correlation coefficient -0.42).
After shuffling the real networks while fixing their original degree sequences,
the smaller and sparser networks have higher RWC scores (respective Spearman
correlation coefficients -0.67 and -0.68). Although randomized networks had
lower RWC scores than the original networks, the averaged RWC for all networks
with average degree less than four was approximately 0.45 even after
randomization.
### 4.2. Synthetic Random Networks
Based on Section 4.1, we see that polarization scores are heavily affected by
elementary network statistics like network size, density, and degree
distribution. We will next explore more systematically which factors explain
the high polarization scores in randomized networks. In addition to the
aforementioned statistics, we analyse the effect of unequal community sizes,
as real networks are likely to have a range of polarized group sizes.
#### 4.2.1. Network size and density.
Even an ideal structural polarization score can be correlated with network
size and average degree in a specific set of real-world networks, but it
should not be biased by these elementary network features in the set of all
networks. As a consequence, structural polarization scores should return
neutral values for random networks of any size and average degree, since such
networks are generated without any explicit group structure.
To assess how the scores perform on this metric, we computed each score for
Erdős–Rényi graphs of varying sizes and densities. Our results shown in Fig. 6
indicate that all scores are affected by at least one of these elementary
network features.
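The sweep setup is simple to reproduce: sample $G(n,p)$ graphs with $p$ chosen to hit a target average degree $\langle k\rangle = p(n-1)$. In the paper each sampled graph would then be partitioned and scored; the sketch below only verifies the density control:

```python
import random

# Sketch of the Erdős–Rényi sweep: choose p so that the expected average
# degree <k> = p * (n - 1) matches a target value. Partitioning and scoring
# of each sample are omitted here.

def erdos_renyi_edges(n, avg_degree, rng=random):
    p = avg_degree / (n - 1)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

rng = random.Random(42)
edges = erdos_renyi_edges(2000, 6.0, rng)
mean_deg = 2 * len(edges) / 2000
print(round(mean_deg, 1))   # close to the target of 6.0
```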
First, we find that network size generally did not contribute to polarization
scores, with RWC being the sole exception. It was affected by the number of
nodes in the network, giving lower values for larger graphs with the same
average degree (see Fig. 6). For networks with enough nodes, the RWC is very
close to zero for all values of average degree, but this comes at the cost of
the score being sensitive to network size. On the other hand, despite being a
similar random walk-based score, the ARWC is invariant to network size. This
highlights the difference in their construction. Specifically, the RWC takes
as a parameter a fixed number of influencers in the system, meaning that the
number of influencers as a proportion of the network varies by network size,
leading to inconsistent variation in RWC across networks. The ARWC removes
this dependence by setting the number of influencers as a function of network
size (i.e., as a proportion of network size it remains constant). We discuss
the difference between the RWC and ARWC in Appendix A.
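The construction difference described above can be made concrete in two lines. The helper names and the 1% proportion below are illustrative assumptions, not the exact parameterization used in the paper:

```python
# RWC fixes an absolute number k of influencer nodes per group, so influencers
# become a vanishing fraction of large networks; ARWC fixes a proportion, so
# the count scales with network size. Names and 1% value are illustrative.

def rwc_influencers(n_nodes, k=10):
    return k                                      # constant, regardless of n

def arwc_influencers(n_nodes, proportion=0.01):
    return max(1, round(proportion * n_nodes))    # scales with network size

for n in (500, 5000, 50000):
    print(n, rwc_influencers(n), arwc_influencers(n))
# RWC uses 10 influencers at every size; ARWC uses 5, 50, and 500.
```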
Specific instances of RWC aside, all scores are dependent on average degree,
and only approach non-polarized levels when the network’s average degree
exceeds a certain value. For instance, BCC gives zero polarization for random
graphs only when the average degree is approximately 10. This is quite a
strict condition, especially for Twitter networks. BP decreases almost
linearly as a function of density. It reaches the value zero when the average
degree of the network is between 5 and 6. The negative values for larger
degrees indicate that nodes in the boundary are more likely to connect to the
other group. Morales et al., the authors behind the DP score, pointed out how
their score suffers from the “minimum polarization problem” due to the nonzero
standard deviations of the opinion distributions (Morales et al., 2015).
Dependence between density and modularity has been studied theoretically
before for the case where the number of clusters is not restricted to two like
in polarization scores. Previous research has shown that sparse random graphs
(and scale-free networks) can have very high modularity scores (Guimera et
al., 2004), and that, with high probability, modularity cannot be zero if the
average degree is bounded (McDiarmid and Skerman, 2020). It is therefore known
that using a network’s modularity score to measure the amount to which it is
clustered is inadvisable. This notion has been speculated to apply for the use
of modularity as a polarization score (Guerra et al., 2013). Our results
confirm this notion, and show that modularity behaves similarly for the case
where the number of clusters is limited to two, with the difference that the
maximum value in our simulations goes up to only approximately 0.5. Here, it
should be noted that none of the other scores seem to be immune to this
problem.
Figure 6. A The expected RWC values as a function of average degree for ER-
networks. The symbols denote the network size as indicated in the legend. B
The expected values of all polarization scores (except RWC) as a function of
size for ER networks with average degree 2.4. See panel C for legend. C The
expected values of all polarization scores (except RWC) as a function of
average degree for ER networks with 4000 nodes. The dashed line indicates zero
polarization. See Appendix B for the polarization scores as a function of both
number of nodes and average degree.
#### 4.2.2. Heterogeneity of degree sequence.
Figure 7. The effect of degree heterogeneity to polarization scores for the 8
scores in simulated scale-free networks with 1000 nodes. We show the expected
polarization score and its standard deviation as a function of average degree.
The symbols correspond to different exponent $\gamma$ values as indicated by
the legend. See Figs. 14-15 in Appendix B for similar results for larger
networks.
The role of the underlying degree sequence is essential to study as political
communication networks tend to have a few nodes with relatively high number of
edges. For networks produced by the Erdős–Rényi model, the degree distribution
is binomial, centered at the average degree $\langle k\rangle=p(n-1)$, where
$p$ is the probability that two nodes are connected and $n$ is the number of
nodes in the network. In contrast, many real networks’ degrees follow fat-tailed
distributions which can have considerably higher variance (Barabási and
Albert, 1999; Clauset et al., 2009; Serafino et al., 2021). To analyze the
effect of degree distribution, we simulate random graphs whose degree
sequences were drawn from a power law distribution $P(k)\propto k^{-\gamma}$
(Fasino et al., [n.d.]). We vary the exponent $\gamma$, which allows us to
explore the amount of variation in the degree distributions. The lower the
value of $\gamma$, the higher the variation and the larger the hubs.
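Sampling such a degree sequence can be sketched via inverse-transform sampling on the continuous (bounded Pareto) approximation of $P(k)\propto k^{-\gamma}$, rounded down to integers. This is an illustrative stand-in for the generator used in the paper; the bounds $k_{\min}$ and $k_{\max}$ are assumptions:

```python
import random

# Sketch: draw degrees from a bounded power law P(k) ∝ k^(-gamma) using the
# inverse CDF of the bounded Pareto distribution on [k_min, k_max], floored
# to integers. Illustrative stand-in; k_min and k_max are assumed bounds.

def powerlaw_degree(gamma, k_min=1, k_max=10_000, rng=random):
    u = rng.random()
    a = k_min ** (1 - gamma)
    b = k_max ** (1 - gamma)
    return int((a + u * (b - a)) ** (1 / (1 - gamma)))

rng = random.Random(0)
heavy = [powerlaw_degree(2.1, rng=rng) for _ in range(10_000)]  # heavy tail
light = [powerlaw_degree(3.5, rng=rng) for _ in range(10_000)]  # lighter tail
# Smaller gamma leaves more probability mass in the tail, so the `heavy`
# sequence will typically contain far larger hubs than `light`.
print(max(heavy), max(light))
```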
The RWC, ARWC, BCC, and DP give higher scores for more heterogeneous networks,
and there is a slight opposite effect for the other scores given a low average
degree (see Fig. 7). The average degree affects polarization scores for
networks with heterogeneous degree sequences as well. The observation that the
level of polarization approaches zero only when the network becomes denser
still holds for all scores, but for the scores that are lifted by degree
heterogeneity, much denser networks are required in order for the scores to
become close to zero.
#### 4.2.3. Communities of different sizes.
Figure 8. Polarization scores as the function of relative group size
$n_{\text{small}}/(n_{\text{small}}+n_{\text{large}})$ in an SBM model
described in the main text. The different markers correspond to the three
schemes of adding inter-block links.
Polarized groups within a system are often imbalanced in size, making it
important for scores to perform consistently across a range of group size
balance. To assess this metric, we used the stochastic block model to generate
a set of polarized networks differing by group size imbalance and level of
polarization. All networks were fixed to have size $n=10000$, and the size of
the smaller group ranges from 10% to 50% of the whole network. Within-
group links were generated for each node to have an average within-group
degree of $k_{\text{in}}$. To obtain different levels of polarization, we
generated between-group links at three levels of density. For low polarization
networks, links were generated with the expectation of adding
$k_{\text{out}}=k_{\text{in}}/c$ between-group links per node from the larger
group. Conversely, high polarization networks have an expected
$k_{\text{out}}$ between-group links per node from the smaller group. A third
set of networks, with medium polarization, have an expected
$k_{\text{out}}\times n/2$ total between-group links. Our networks were
generated with $k_{\text{in}}=4.5$ and $c=25$, but our results are robust to a
reasonable range of different values.
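The three link-density schemes reduce to expected between-group link counts; the following sketch is our reading of the description above (not the authors' code) and makes the arithmetic explicit:

```python
def expected_between_group_links(n_small, n_large, k_in=4.5, c=25):
    """Expected total number of between-group links under the three
    polarization schemes described in the text."""
    k_out = k_in / c  # expected between-group degree per node
    n = n_small + n_large
    return {
        "low":    k_out * n_large,   # k_out links per node of the larger group
        "medium": k_out * n / 2,     # k_out * n / 2 links in total
        "high":   k_out * n_small,   # k_out links per node of the smaller group
    }

# e.g. a 10%/90% split of n = 10000 nodes
counts = expected_between_group_links(1000, 9000)
```

Note that fewer between-group links mean a more polarized network, which is why the "high" scheme pins the count to the smaller group.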
Our results indicate that all scores depend on group size imbalance at least
for the high and low polarization schemes (see Fig. 8). The EI and AEI scores
are relatively insensitive to the group size imbalances as their values change
by only a few percentage points. For all scores except DP and Q, the simulated
level of polarization affects the extent to which they are dependent on group
size imbalances; at certain levels of simulated polarization, the dependence
disappears. For EI and AEI this level is exactly the one present in the medium
polarization networks. Finally, it is worth noting that DP has an intrinsic
scaling factor which penalizes the difference between groups. Specifically,
its coefficient is designed to contribute maximally to the polarization only
when the communities are equally-sized, thus the linear relationship between
imbalance and score.
## 5\. Evaluation and Normalization of the Polarization Scores
As our analysis suggests, nonzero polarization values arise in randomized
networks due to the scores being sensitive to network size, number of links,
and degree distribution. These features in themselves are not likely to be
linked to polarization. The number of nodes and the number of links depend on
unrelated system features such as overall activity in the network and
design choices such as link sampling schemes. Further, the fat-tailed degree
distributions we often observe in the data have been attributed to
preferential-attachment mechanisms (Barabási and Albert, 1999).
However, even if scores do not return zero values for random networks or are
biased by some features of the data, they can still be useful if they are able
to separate polarized networks from non-polarized ones by systematically
returning higher scores for the former set. In this section, we assess the
predictive validity of the scores against a manually labeled set of 203
polarized and non-polarized networks, which we introduced in Section 3.1. The
20 networks from Garimella et al.’s study are already manually labeled
(Garimella et al., 2018b), so we are able to directly use these external
labels. The 183 networks from Chen et al.’s study had not been labeled (Chen
et al., 2020), so we manually labeled them for our present task.
We labeled these networks based on the content of tweets, which ensures that
our labeling is independent of the structural polarization measures. We
checked whether any of the pre-defined features, which are selected based on
our definition of polarization, were present in a reasonable number of tweets
before marking the network polarized. The labeling process, which we describe
below, was performed before the main analysis, and resulted in a balanced data
set with around $47\%$ of the networks labeled as polarized. To the extent that
the basic network features we examined are not indicative of polarization, we
should see an increase in the classification performance of the scores when
the effects induced by them are removed.
We recognize that polarization labels based on content alone are not
necessarily the ground truth on polarization in the system. Instead, content-
based labeling is another method for quantifying polarization, which is based
on a set of criteria different from those used in structural polarization
scores. Because it is another label of the same underlying latent construct,
content-based labeling is useful for assessing the validity of structural
polarization scores. If a structural score correlates well with our content-
based labels, they are said to have high convergent validity, which is a form
of measurement validity (Adcock and Collier, 2001). This means that it can be
seen as a better measure of the latent overall polarization in the system
compared to a less correlated structural score.
### 5.1. Data Labeling
We labeled our networks before performing the main analysis. All networks with
at least one hashtag containing the substring ‘election’ were classified as
polarized. We manually sampled tweets from each network for confirmation. For
each network, we applied a four-stage process for labeling in the following
order:
1. Sample uniformly 5 days from which tweets are read.
2. Display all the tweets from each sampled day.
3. Sample 20 users from each sampled day and display all their tweets during the sampled days.
4. Partition the network and distinguish the 10 highest-degree nodes from both groups. Display all their tweets.
After displaying the tweets that were obtained by the described process, we
checked whether any of the following features were present in a reasonable
number of tweets.
* us-versus-them mentality, signs of disagreement, dispute, or friction
* strongly discrediting the out-group or strongly supporting the in-group from both sides
* direct, negative, or strong criticism of political adversaries or political actors from both sides
* completely opposite opinions, beliefs, or points of view on a political or social topic
Based on the content of the sampled tweets together with domain knowledge, a
researcher classified the network as either polarized or non-polarized. If the
sample was too vague to be labeled, we repeated the process to gain a clearer
view into the general content of tweets.
### 5.2. Classification Performance of the Polarization Scores
We adopt a typical framework where there is a decision threshold for the score
value under which all networks are predicted to be non-polarized and above
which they are predicted to be polarized. Each threshold value produces a false
positive rate and true positive rate, and varying the threshold gives us an
ROC curve (shown in Fig. 9) characterizing the overall performance of the
scores in the prediction task. This makes the evaluation independent of the
selected type of classifier. We also derive single scores to quantify the
overall performance. The Gini coefficient measures the performance improvement
compared to a uniformly random guess and is defined as the area between the
diagonal line and the ROC curve, normalized so $1$ indicates perfect
prediction and $0$ indicates a random guess. We also report the unnormalized
area under curve (AUC).
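The Gini coefficient used here is a rescaling of the AUC. A self-contained sketch of both, using the rank-statistic form of the AUC (an equivalent formulation, assumed for illustration):

```python
def roc_auc(scores, labels):
    """AUC as the probability that a randomly chosen positive network
    receives a higher score than a randomly chosen negative one
    (ties counted as 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def gini(scores, labels):
    # area between the ROC curve and the diagonal, normalized so that
    # 1 indicates perfect prediction and 0 a uniformly random guess
    return 2.0 * roc_auc(scores, labels) - 1.0
```

For example, a score that perfectly separates the two classes yields an AUC of 1.0 and a Gini coefficient of 1.0.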
Figure 9. ROC curves, Gini coefficient values, and AUC values for the task of
predicting manually curated labeling of polarized and non-polarized networks.
The results are shown (left) for the score values before the normalization and
(right) after the normalization with denoised scores $\hat{\Phi}(G)$ (see main
text). Figures are based on all the 203 empirical networks described in
section 3.1. Results for the alternative normalization scheme
$\hat{\Phi}_{z}(G)$, which includes standardization, are shown in Fig. 16 in
Appendix B. The ROC curves for the alternative graph partitioning methods are
also reported in Figs. 22 and 23 in Appendix B.
The Gini coefficients for the scores vary between $0.20$ and $0.53$ with Q and
BCC performing the worst. ARWC performs the best with a wide margin to the
second best AEI and EI (with coefficient values $0.40$ and $0.41$). The non-
adaptive RWC has a Gini coefficient of $0.34$, which is better than prior work
shows (Emamgholizadeh et al., 2020), but still generally poor. Notably, the
ARWC score performs very well if we accept a high false positive rate. That
is, if a system has a very small ARWC score, it is a good indication of the
network not being polarized. In contrast, a large ARWC score (or any other
measure) does not necessarily indicate that the network is polarized. As our
prior results show, a high score might be due to the effects of small and
sparse networks with hubs as opposed to anything related to polarized group
structures.
Figure 10. Scatter plot of denoised ($\hat{\Phi}$) and observed polarization
scores for the 203 networks described in Section 3.1. Red crosses are networks
labeled as polarized and black points are networks labeled as non-polarized.
An outlier was removed from the plots. Note that the scales for the scores are
different and not shown in this figure. See Fig. 19 in Appendix B for the
equivalent scatter plot for the standardized scores ($\hat{\Phi}_{z}$).
### 5.3. Normalization of the scores
To remove the effect of network size and the degree distribution, we computed
the averaged polarization score for multiple instances of the network shuffled
with the configuration model ($d=1$ in the $dk$-framework), and subtracted it
from the observed polarization score value. That is, given a network $G$ and a
polarization score $\Phi$, we define a normalized score as
$\hat{\Phi}(G)=\Phi(G)-\langle\Phi(G_{CM})\rangle\,,$
where $\Phi(G)$ is the polarization score of observed network and
$\langle\Phi(G_{CM})\rangle$ is the expected polarization score of graphs
generated by the configuration model. This score corrects for the expected
effect of the size and degree distribution of the network (i.e. removes the
blue part from the observed score shown previously in Fig. 4). Thus we call it
the denoised polarization score. It does not account for the fluctuations of
the score values in the configuration model. We correct
for this in another, slightly different normalization scheme, where we divide
the normalized score by the standard deviation of the score value distribution
for the configuration model:
$\hat{\Phi}_{z}(G)=\frac{\Phi(G)-\langle\Phi(G_{CM})\rangle}{\sqrt{\langle\Phi(G_{CM})^{2}\rangle-\langle\Phi(G_{CM})\rangle^{2}}}\,.$
We call this the standardized and denoised polarization score. Note
that the distribution of polarization values over the ensemble of null random
graphs is not necessarily Gaussian. If the values $\Phi(G_{CM})$ followed a
Gaussian distribution, then the statistical significance testing could be
performed with the standard normal distribution, and $\hat{\Phi}_{z}(G_{CM})$
would be the appropriate test statistic (the z-score). An approximate
significance value can be obtained with a large number of samples. The same
normalization has been proposed for modularity to mitigate the resolution
limit (Miyauchi and Kawase, 2016). In that work, the proposed quality function
gives higher values for statistically rarer observations.
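Given a list of score values computed on configuration-model shuffles of $G$, both normalizations reduce to simple sample statistics. A minimal sketch, where `phi_null` stands for the sampled $\Phi(G_{CM})$ values:

```python
from statistics import mean, pstdev

def denoised(phi_obs, phi_null):
    """Denoised score: observed score minus the configuration-model mean."""
    return phi_obs - mean(phi_null)

def standardized(phi_obs, phi_null):
    """Standardized and denoised score: a z-score against the null ensemble."""
    return (phi_obs - mean(phi_null)) / pstdev(phi_null)
```

In practice `phi_null` would be obtained by shuffling the observed network many times with the configuration model and recomputing the score for each shuffle.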
Fig. 9 shows the ROC curves and related coefficient values for the denoised
scores $\hat{\Phi}(G)$ (see the qualitatively similar results in Appendix B
for $\hat{\Phi}_{z}(G)$). The performance of all of the scores increased after
normalization as indicated by the overall lifted ROC curves and improved Gini
coefficients (and AUC values). Improvements in Gini coefficients after
normalization range from 38% to 220% depending on the measure used. The
ARWC remains among the best performing scores, along with the AEI and EI. The
AEI in particular performs best under conservative estimates of polarization
(i.e., low false positive rates). The BCC is still among the worst after the
normalization, along with DP. However, all post-normalization scores notably
outperform the best unnormalized score (ARWC).
Fig. 10 further illustrates the dramatic change in the predictive power of the
scores after normalization. It shows the normalized score values as a function
of the observed score values. With most scores, the two types of networks are
mixed together when sorted with the observed scores (on the x-axis).
Normalization largely lifts the polarized networks above the cloud of non-
polarized ones, making it possible to separate these two groups (on the
y-axis). This finding holds for all the polarization measures analyzed here.
Note that in practice, to implement the normalization procedure, one needs to
randomize the network multiple times with the configuration model and compute
the score $\Phi(G_{CM})$ to get estimates for the mean value of the score
$\langle\Phi(G_{CM})\rangle$ (and $\langle\Phi(G_{CM})^{2}\rangle$ for
$\hat{\Phi}_{z}(G)$). Here we sampled the networks 500 times, which was more
than sufficient, as it yields standard errors of the means ranging from 0.01 to
0.05.
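Whether a given number of configuration-model samples is adequate can be judged from the standard error of the estimated null mean, which shrinks as $1/\sqrt{n}$. A sketch:

```python
from math import sqrt
from statistics import mean, pstdev

def null_mean_with_error(phi_null):
    """Estimate the null mean and the standard error of that estimate
    from a list of sampled configuration-model score values."""
    return mean(phi_null), pstdev(phi_null) / sqrt(len(phi_null))
```

One can keep drawing samples until the returned error falls below the desired tolerance.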
To determine which types of networks benefit the most from the normalization,
we plotted the AUC as a function of network size and average degree for each
polarization score. This was done by evaluating the performance for subsets of
networks with a fixed window size of 100 (shown in Fig. 11). The results show
that the performance of all the polarization methods is better and more
stable after the normalization, independent of network size. Only ARWC has
a short region where the performances of both normalized and unnormalized
scores overlap. The same analysis for average degree is included in Appendix
B.
Figure 11. Quantifying how network size affects the performance. We group the
data such that there are 100 networks with consecutive sizes, and create a set
of such windows by varying the size range. We then evaluate the AUC for the
moving window of 100 networks. Generally, all the networks benefit from the
normalization across all the polarization methods. The same analysis for
average degree, along with details on the scale of the windows, is included in
Appendix B.
We also tested whether combining the results from all polarization methods
improves the accuracy of predicting whether a network is polarized. Our first
strategy was to take the average of all the scores and use that as a new
polarization score. The second strategy was to train a simple decision tree
classifier where the input vector contained all eight scores obtained for a
network. The AUC for the average of unnormalized values was 0.71, and for
normalized values it increased to 0.87. Although the averaged normalized score
outperformed some of the single normalized polarization scores (e.g. BP, DP,
and Q), it did not outperform the best-performing ones (e.g. ARWC and AEI).
Regarding the decision tree, the AUC of the pre-normalization classifier was
0.78, whereas for the post-normalization one, the AUC increased to 0.90. Our
results show that strategies based on combined scores can in some cases offer
improvements over single polarization scores, but only minimally. It is up to
the researcher to decide if these gains are worth the cost of additional work
and loss of transparency associated with training machine learning models.
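The first combination strategy is straightforward to sketch. Since the eight scores live on different scales, the sketch min-max scales each score column before averaging; the scaling step is our assumption for illustration, not stated in the text:

```python
def minmax(col):
    """Scale a list of values to [0, 1]."""
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

def averaged_score(score_matrix):
    """score_matrix[i][j] is score j for network i; returns one combined
    score per network by averaging the scaled score columns."""
    cols = [minmax(list(c)) for c in zip(*score_matrix)]
    return [sum(r) / len(r) for r in zip(*cols)]
```

The resulting combined score can then be evaluated with the same ROC/AUC machinery as any single polarization score.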
## 6\. Discussion
Measuring polarization is important for social science research, including the
social computing and computer mediated communication fields. Structural
polarization measures offer an ostensibly promising approach, but we
identified a number of undesirable properties associated with all eight
commonly-used measures studied here. These measures can be high for random
networks and they are sensitive to various network features such as size,
average degree, and degree distribution. These properties pose a clear problem
for polarization research. Considerable research effort has been put into
polarization identification and platform design for attenuating polarization,
but if the measurement of polarization is systematically biased by basic
network features, our ability to make valid inferences is greatly reduced.
For example, consider Bail et al.’s study that found increasing exposure to
opposing views increased political polarization (Bail et al., 2018). The study
did not rely on structural polarization measures, but had this study been
conducted in the field using the RWC to measure polarization, the increased
activity that likely would have resulted from the intervention could have
decreased polarization scores, resulting in the exact opposite conclusion
being drawn.
Based on our results, we strongly recommend applying the normalization
procedures introduced in Section 5.3 for applied work using any of the
network-based polarization scores included here. Doing so removes a
substantial amount of noise arising from the network’s local properties. For
our test networks, classification performance improved by 38%-220% (Gini
coefficient) depending on the measure. Notably, the differences in performance
across polarization scores were minor after normalization. In fact, the AEI
and EI, which are the simplest and least computationally-demanding scores,
were among the best performing scores.
In order for us to draw qualitative conclusions based on the score values we
should understand the scale of the score, e.g., what values constitute medium
or high polarization. The unnormalized score values often have interpretation
described in the logic of their definition. Despite their relatively high
accuracy, normalized scores are less amenable to this kind of direct
interpretation. If this is needed, a plausible alternative is to report both
the score itself and its expected value in one or more random network models.
This way, one has a sense of how much of the score is explained by the various
features of the network.
Our work has implications for additional structural polarization scores not
studied here, including those in development. It is clear from our results
that structural scores obtained via the consecutive procedures of network
clustering and score computation can be sensitive to various network features
in ways that are not apparent from the score’s definition.
Our argument (and others’ before us (Guerra et al., 2013)), backed up by the
results that normalization increases the performance of the scores, is that
these sensitivities bias the scores. At a minimum, one should have a clear
idea of how the given score behaves in relation to basic network statistics
and group sizes. To facilitate such examination, we have made our analysis
pipeline and data publicly available (Salloum, 2021; Salloum et al., 2021).
There could be other possible sources of bias, so our benchmarking framework
should be taken as a minimal test that is not necessarily sufficient.
More broadly, the fact that all eight scores we tested were affected to some
extent by the same problems suggests that the approach of separating
polarization scoring into independent clustering and score calculation phases
might be flawed. This is part of the wider problem where clustering methods
start from the assumption that the given network contains clusters and can
find them even in random data. A solution to this problem could be to break
the independence of the clustering phase and the score calculation phase,
using instead clustering methods that can test if the clusters could be
explained by random networks (Lancichinetti et al., 2010; Peixoto, 2013).
Scores can be set to zero if no significant clusters are found. This reduces
the false positive rate, which was especially problematic with the best-
performing ARWC method.
Our study presents some limitations which can be avenues for future research.
First, our results are based purely on simulations, and a more theoretical
approach could be taken to understand the individual scores better. This work
can build on the prior work on fundamental problems in graph clustering which,
as illustrated here, are reflected in polarization scores. In this context,
modularity is a well-studied example of how an apparently reasonably defined
method can have many underlying problems (Guimera et al., 2004; Fortunato and
Barthelemy, 2007; Bagrow, 2012; McDiarmid and Skerman, 2017, 2020). Given
this, analyzing modularity from the perspective of limiting the number of
clusters to two could be done as an extension to the previous literature on
modularity optimisation with an arbitrary number of communities. Even if
modularity is not important as a structural polarization score, this analysis
could shed light on the kind of phenomena to expect when scoring networks with
two clusters.
Second, public opinion on politics can be multisided. This means that instead
of having only two groups, there can be multiple cohesive clusters that are
segregated in some way in the network. However, the majority of polarization
measures, including the structural measures analyzed here, are defined
exclusively for two clusters, with the exception of modularity and the
EI-index. Conceptual and technical work that generalizes polarization to the
multiside context is therefore useful. This is a nascent area of study
(Reiljan, 2020), with some extensions to structural measures (Markgraf and
Schoch, 2019; Gaumont et al., 2018). Such generalizations are likely to retain
the same problems as their two-sided variants, because more degrees of freedom
in the number of groups for the clustering algorithms will lead to better
clusters (as measured with the internal evaluation metrics of the methods). As
discussed in section 4.2.1, previous work on modularity can again be useful
here, as it indicates that the issue of high score values in random networks is
even worse when the number of clusters is not limited.
Further, a clear limitation of the current work is the number and variety of
labeled network data that was used. While the number of networks is enough to
statistically show that normalization improves the score performance, a more
fine-grained view of the problem could be achieved with more networks.
Similarly, the generalizability of the classification results could be
improved by widening the range of network size and density but more
importantly by including different types of social networks. Here, it is worth
noting that our approach to labeling the content might not be as clear cut for
other contexts such as non-political communication. Finally, the analysis
regarding the different-sized clusters can be improved. Although our results
indicated that all scores depend on group size imbalance at least for the low
and high polarization schemes, other techniques for simulating polarization
between the communities should be examined.
Despite the issues we raised in this paper, structural polarization measures
as an approach remains useful. In addition to being based on emergent behavior
directly observed in the system under study, they facilitate an accessible
approach to studying polarization. A network-based approach generally has low
language processing requirements, making it equally easy to apply to different
linguistic contexts. Additionally, it has been argued that there is an uneven
development of natural language processing tools across languages (Djatmiko et
al., 2019), which presents an additional barrier to content-based polarization
measures. Content-based polarization measures often require language-specific
resources such as sentiment lexicons, which are often costly to build (Sun et
al., 2017). Similarly, survey-based measures, especially from original data
sources, are costly to obtain. In light of these sometimes considerable
barriers to research, structural polarization measures provide an accessible
alternative for applied researchers.
To facilitate the use of structural polarization measures, we introduced in
this paper a minimal set of tests that a structural polarization score should
pass in order for it to be able to distinguish polarization from noise. This
should serve as a benchmark for future developments of such scores. The fact
that all of the current scores perform relatively poorly indicates that
there is a need for alternative approaches to the typical scoring approach.
The normalization procedure we introduced here is a patch that alleviates this
problem. There is space for other, possibly fundamentally different
approaches, to be innovated for measuring structural polarization.
###### Acknowledgements.
This research is part of the ECANET-consortium (Echo Chambers, Experts and
Activists: Networks of Mediated Political Communication), part of the Media
and Society research programme (2019-2022) funded by the Academy of Finland
(grant number: 320781).
## References
* Adcock and Collier (2001) Robert Adcock and David Collier. 2001. Measurement validity: A shared standard for qualitative and quantitative research. _American political science review_ 95, 3 (2001), 529–546.
* Akoglu (2014) Leman Akoglu. 2014\. Quantifying Political Polarity Based on Bipartite Opinion Networks. _Proceedings of the International AAAI Conference on Web and Social Media_ 8, 1 (May 2014). https://ojs.aaai.org/index.php/ICWSM/article/view/14524
* Amelio and Pizzuti (2012) Alessia Amelio and Clara Pizzuti. 2012. Analyzing Voting Behavior in Italian Parliament: Group Cohesion and Evolution. In _2012 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining_. IEEE, 140–146. https://doi.org/10.1109/ASONAM.2012.33
* Bagrow (2012) James P Bagrow. 2012\. Communities and bottlenecks: Trees and treelike networks have high modularity. _Physical Review E_ 85, 6 (2012), 066118.
* Bail et al. (2018) Christopher A Bail, Lisa P Argyle, Taylor W Brown, John P Bumpus, Haohan Chen, MB Fallin Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, and Alexander Volfovsky. 2018. Exposure to opposing views on social media can increase political polarization. _Proceedings of the National Academy of Sciences_ 115, 37 (2018), 9216–9221.
* Baldassarri and Bearman (2007) Delia Baldassarri and Peter Bearman. 2007. Dynamics of political polarization. _American sociological review_ 72, 5 (2007), 784–811.
* Baldassarri and Gelman (2008) Delia Baldassarri and Andrew Gelman. 2008. Partisans without constraint: Political polarization and trends in American public opinion. _Amer. J. Sociology_ 114, 2 (2008), 408–446.
* Barabási and Albert (1999) Albert-László Barabási and Réka Albert. 1999. Emergence of Scaling in Random Networks. _Science_ 286, 5439 (1999), 509–512. https://doi.org/10.1126/science.286.5439.509
* Baumann et al. (2020) Fabian Baumann, Philipp Lorenz-Spreen, Igor M Sokolov, and Michele Starnini. 2020. Modeling echo chambers and polarization dynamics in social networks. _Physical Review Letters_ 124, 4 (2020), 048301\.
* Belcastro et al. (2020) Loris Belcastro, Riccardo Cantini, Fabrizio Marozzo, Domenico Talia, and Paolo Trunfio. 2020\. Learning political polarization on social media using neural networks. _IEEE Access_ 8 (2020), 47177–47187.
* Bright (2018) Jonathan Bright. 2018\. Explaining the emergence of political fragmentation on social media: The role of ideology and extremism. _Journal of Computer-Mediated Communication_ 23, 1 (2018), 17–33.
* Chen et al. (2020) Ted Hsuan Yun Chen, Ali Salloum, Antti Gronow, Tuomas Ylä-Anttila, and Mikko Kivelä. 2020\. Polarization of Climate Politics Results from Partisan Sorting: Evidence from Finnish Twittersphere. _arXiv:2007.02706_ (2020).
* Clauset et al. (2009) Aaron Clauset, Cosma Rohilla Shalizi, and Mark EJ Newman. 2009. Power-law distributions in empirical data. _SIAM review_ 51, 4 (2009), 661–703.
* Conover et al. (2011) Michael Conover, Jacob Ratkiewicz, Matthew Francisco, Bruno Gonçalves, Alessandro Flammini, and Filippo Menczer. 2011. Political Polarization on Twitter. In _Proc. 5th International AAAI Conference on Weblogs and Social Media (ICWSM)_. http://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2847
* Cossard et al. (2020) Alessandro Cossard, Gianmarco De Francisci Morales, Kyriaki Kalimeri, Yelena Mejova, Daniela Paolotti, and Michele Starnini. 2020\. Falling into the echo chamber: the italian vaccination debate on twitter. In _Proceedings of the International AAAI Conference on Web and Social Media_ , Vol. 14. 130–140.
* Cota et al. (2019) Wesley Cota, Silvio C Ferreira, Romualdo Pastor-Satorras, and Michele Starnini. 2019. Quantifying echo chamber effects in information spreading over political communication networks. _EPJ Data Science_ 8, 1 (2019), 35.
* Dal Maso et al. (2014) Carlo Dal Maso, Gabriele Pompa, Michelangelo Puliga, Gianni Riotta, and Alessandro Chessa. 2014\. Voting behavior, coalitions and government strength through a complex network analysis. _PloS one_ 9, 12 (2014), e116046.
* Darwish (2019) Kareem Darwish. 2019\. Quantifying Polarization on Twitter: The Kavanaugh Nomination. In _Social Informatics_. Springer International Publishing, Cham, 188–201.
* de Zarate et al. (2020) Juan Manuel Ortiz de Zarate, Marco Di Giovanni, Esteban Zindel Feuerstein, and Marco Brambilla. 2020\. Measuring Controversy in Social Networks Through NLP. In _String Processing and Information Retrieval_. Springer International Publishing, Cham, 194–209.
* Demszky et al. (2019) Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Matthew Gentzkow, Jesse Shapiro, and Dan Jurafsky. 2019. Analyzing Polarization in Social Media: Method and Application to Tweets on 21 Mass Shootings. In _Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)_. 2970–3005.
* DiMaggio et al. (1996) Paul DiMaggio, John Evans, and Bethany Bryson. 1996\. Have American’s social attitudes become more polarized? _American journal of Sociology_ 102, 3 (1996), 690–755.
* Djatmiko et al. (2019) Fahim Djatmiko, Ridi Ferdiana, and Muhammad Faris. 2019\. A Review of Sentiment Analysis for Non-English Language. In _2019 International Conference of Artificial Intelligence and Information Technology (ICAIIT)_. IEEE, 448–451.
* Emamgholizadeh et al. (2020) Hanif Emamgholizadeh, Milad Nourizade, Mir Saman Tajbakhsh, Mahdieh Hashminezhad, and Farzaneh Nasr Esfahani. 2020. A framework for quantifying controversy of social network debates using attributed networks: biased random walk (BRW). _Social Network Analysis and Mining_ 10, 1 (2020), 1–20.
* Erdős and Rényi (1959) P Erdős and A Rényi. 1959. On random graphs I. _Publicationes Mathematicae_ 6 (1959), 290–297.
* Fasino et al. ([n.d.]) Dario Fasino, Arianna Tonetto, and Francesco Tudisco. [n.d.]. Generating large scale-free networks with the Chung–Lu random graph model. _Networks_ ([n. d.]). https://doi.org/10.1002/net.22012
* Fiorina and Abrams (2008) Morris P Fiorina and Samuel J Abrams. 2008. Political polarization in the American public. _Annu. Rev. Polit. Sci._ 11 (2008), 563–588.
* Fortunato and Barthelemy (2007) Santo Fortunato and Marc Barthelemy. 2007. Resolution limit in community detection. _Proceedings of the national academy of sciences_ 104, 1 (2007), 36–41.
* Fortunato and Hric (2016) Santo Fortunato and Darko Hric. 2016. Community detection in networks: A user guide. _Physics reports_ 659 (2016), 1–44.
* Fosdick et al. (2018) Bailey K Fosdick, Daniel B Larremore, Joel Nishimura, and Johan Ugander. 2018. Configuring random graph models with fixed degree sequences. _SIAM Rev._ 60, 2 (2018), 315–355.
* Garimella (2018) Kiran Garimella. 2018\. _Polarization on Social Media_. Ph.D. Dissertation. Aalto University. http://urn.fi/URN:ISBN:978-952-60-7833-5
* Garimella et al. (2018a) Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018a. Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship. In _Proceedings of the 2018 World Wide Web Conference_ (Lyon, France) _(WWW ’18)_. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 913–922. https://doi.org/10.1145/3178876.3186139
* Garimella et al. (2018b) Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018b. Quantifying controversy on social media. _ACM Transactions on Social Computing_ 1, 1 (2018), 1–27.
* Gaumont et al. (2018) Noé Gaumont, Maziyar Panahi, and David Chavalarias. 2018\. Reconstruction of the socio-semantic dynamics of political activist Twitter networks—Method and application to the 2017 French presidential election. _PloS one_ 13, 9 (2018), e0201879.
* Gauvin et al. (2018) Laetitia Gauvin, Mathieu Génois, Márton Karsai, Mikko Kivelä, Taro Takaguchi, Eugenio Valdano, and Christian L Vestergaard. 2018\. Randomized reference models for temporal networks. _arXiv:1806.04032_ (2018).
* Guerra et al. (2013) P.H. Guerra, Wagner Meira Jr, Claire Cardie, and R. Kleinberg. 2013. A measure of polarization on social media networks based on community boundaries. _Proceedings of the 7th International Conference on Weblogs and Social Media, ICWSM 2013_ (01 2013), 215–224.
* Guimera et al. (2004) Roger Guimera, Marta Sales-Pardo, and Luís A Nunes Amaral. 2004\. Modularity from fluctuations in random graphs and complex networks. _Physical Review E_ 70, 2 (2004), 025101.
* Hargittai et al. (2008) Eszter Hargittai, Jason Gallo, and Matthew Kane. 2008\. Cross-ideological discussions among conservative and liberal bloggers. _Public Choice_ 134, 1-2 (2008), 67–86.
* Holland et al. (1983) Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. 1983. Stochastic blockmodels: First steps. _Social networks_ 5, 2 (1983), 109–137.
* Hout and Maggio (2020) Michael Hout and Christopher Maggio. 2020. Immigration, Race, and Political Polarization. https://doi.org/10.31235/osf.io/p7q2w
* Iandoli et al. (2021) Luca Iandoli, Simonetta Primario, and Giuseppe Zollo. 2021\. The impact of group polarization on the quality of online debate in social media: A systematic literature review. _Technological Forecasting and Social Change_ 170 (2021), 120924.
* Jasny et al. (2015) Lorien Jasny, Joseph Waggle, and Dana R Fisher. 2015\. An empirical examination of echo chambers in US climate policy networks. _Nature Climate Change_ 5, 8 (2015), 782–786.
* Jones (2001) David R Jones. 2001. Party polarization and legislative gridlock. _Political Research Quarterly_ 54, 1 (2001), 125–141.
* Karypis and Kumar (1998) George Karypis and Vipin Kumar. 1998. A fast and high quality multilevel scheme for partitioning irregular graphs. _SIAM Journal on scientific Computing_ 20, 1 (1998), 359–392.
* Kernighan and Lin (1970) B. W. Kernighan and S. Lin. 1970. An efficient heuristic procedure for partitioning graphs. _The Bell System Technical Journal_ 49, 2 (1970), 291–307. https://doi.org/10.1002/j.1538-7305.1970.tb01770.x
* Kirkland and Gross (2014) Justin H Kirkland and Justin H Gross. 2014. Measurement and theory in legislative networks: The evolving topology of Congressional collaboration. _Social Networks_ 36 (2014), 97–109.
* Krackhardt and Stern (1988) David Krackhardt and Robert N Stern. 1988. Informal networks and organizational crises: An experimental simulation. _Social psychology quarterly_ (1988), 123–140.
* Lancichinetti et al. (2010) Andrea Lancichinetti, Filippo Radicchi, and José J Ramasco. 2010. Statistical significance of communities in networks. _Physical Review E_ 81, 4 (2010), 046110.
* Mahadevan et al. (2006) Priya Mahadevan, Dmitri Krioukov, Kevin Fall, and Amin Vahdat. 2006. Systematic topology analysis and generation using degree correlations. _ACM SIGCOMM Computer Communication Review_ 36, 4 (2006), 135–146.
* Makridis and Rothwell (2020) Christos Makridis and Jonathan T Rothwell. 2020. The real cost of political polarization: evidence from the COVID-19 pandemic. _Available at SSRN 3638373_ (2020).
* Markgraf and Schoch (2019) Moritz Markgraf and Manfred Schoch. 2019. Quantification of Echo Chambers: A Methodological Framework Considering Multi-party Systems. In _Proceedings of the 27th European Conference on Information Systems (ECIS)_. https://aisel.aisnet.org/ecis2019_rp/91/
* Mason (2015) Lilliana Mason. 2015. “I disrespectfully agree”: The differential effects of partisan sorting on social and issue polarization. _American Journal of Political Science_ 59, 1 (2015), 128–145.
* McCoy et al. (2018) Jennifer McCoy, Tahmina Rahman, and Murat Somer. 2018\. Polarization and the global crisis of democracy: Common patterns, dynamics, and pernicious consequences for democratic polities. _American Behavioral Scientist_ 62, 1 (2018), 16–42.
* McDiarmid and Skerman (2017) Colin McDiarmid and Fiona Skerman. 2017. Modularity of regular and treelike graphs. _Journal of Complex Networks_ 6, 4 (10 2017), 596–619. https://doi.org/10.1093/comnet/cnx046
* McDiarmid and Skerman (2020) Colin McDiarmid and Fiona Skerman. 2020. Modularity of Erdős-Rényi random graphs. _Random Structures & Algorithms_ 57, 1 (2020), 211–243.
* Miyauchi and Kawase (2016) Atsushi Miyauchi and Yasushi Kawase. 2016. Z-score-based modularity for community detection in networks. _PloS one_ 11, 1 (2016), e0147805.
* Molloy and Reed (1995) Michael Molloy and Bruce Reed. 1995. A critical point for random graphs with a given degree sequence. _Random structures & algorithms_ 6, 2-3 (1995), 161–180.
* Moody and Mucha (2013) James Moody and Peter J Mucha. 2013. Portrait of political party polarization. _Network Science_ 1, 1 (2013), 119–121.
* Morales et al. (2015) Alfredo Jose Morales, Javier Borondo, Juan Carlos Losada, and Rosa M Benito. 2015. Measuring political polarization: Twitter shows the two sides of Venezuela. _Chaos: An Interdisciplinary Journal of Nonlinear Science_ 25, 3 (2015), 033114.
* Neal (2020) Zachary P Neal. 2020. A sign of the times? Weak and strong polarization in the US Congress, 1973–2016. _Social Networks_ 60 (2020), 103–112.
* Nelimarkka et al. (2018) Matti Nelimarkka, Salla-Maaria Laaksonen, and Bryan Semaan. 2018. Social media is polarized, social media is polarized: towards a new design agenda for mitigating polarization. In _Proceedings of the 2018 Designing Interactive Systems Conference_. 957–970.
* Nelimarkka et al. (2019) Matti Nelimarkka, Jean Philippe Rancy, Jennifer Grygiel, and Bryan Semaan. 2019. (Re) Design to Mitigate Political Polarization: Reflecting Habermas’ ideal communication space in the United States of America and Finland. _Proceedings of the ACM on Human-computer Interaction_ 3, CSCW (2019), 1–25.
* Newman (2006) Mark EJ Newman. 2006. Modularity and community structure in networks. _Proceedings of the national academy of sciences_ 103, 23 (2006), 8577–8582.
* Orsini et al. (2015) Chiara Orsini, Marija M Dankulov, Pol Colomer-de Simón, Almerima Jamakovic, Priya Mahadevan, Amin Vahdat, Kevin E Bassler, Zoltán Toroczkai, Marián Boguná, Guido Caldarelli, et al. 2015. Quantifying randomness in real networks. _Nature communications_ 6, 1 (2015), 1–10.
* Ozer et al. (2017) Mert Ozer, Mehmet Yigit Yildirim, and Hasan Davulcu. 2017\. Measuring the Polarization Effects of Bot Accounts in the US Gun Control Debate on Social Media. _Proceedings of ACM Conference (Conference’17)_ (2017).
* Peixoto (2013) Tiago P Peixoto. 2013. Parsimonious module inference in large networks. _Physical review letters_ 110, 14 (2013), 148701.
* Rabab’ah et al. (2016) Abdullateef Rabab’ah, Mahmoud Al-Ayyoub, Yaser Jararweh, and Mohammed Al-Kabi. 2016. Measuring the Controversy Level of Arabic Trending Topics on Twitter. In _2016 7th International Conference on Information and Communication Systems (ICICS)_. https://doi.org/10.1109/IACS.2016.7476097
* Rashed et al. (2020) Ammar Rashed, Mucahid Kutlu, Kareem Darwish, Tamer Elsayed, and Cansın Bayrak. 2020. Embeddings-Based Clustering for Target Specific Stances: The Case of a Polarized Turkey. arXiv:2005.09649 [cs.SI]
* Reiljan (2020) Andres Reiljan. 2020. ‘Fear and loathing across party lines’ (also) in Europe: Affective polarisation in European party systems. _European journal of political research_ 59, 2 (2020), 376–396.
* Rumshisky et al. (2017) Anna Rumshisky, Mikhail Gronas, Peter Potash, Mikhail Dubov, Alexey Romanov, Saurabh Kulshreshtha, and Alex Gribov. 2017. Combining Network and Language Indicators for Tracking Conflict Intensity. In _Social Informatics_ , Giovanni Luca Ciampaglia, Afra Mashhadi, and Taha Yasseri (Eds.). Springer International Publishing, Cham, 391–404.
* Salloum (2021) Ali Salloum. 2021. Code and Data for Separating Controversy from Noise: Comparison and Normalization of Structural Polarization Measures. https://github.com/alesalloum/normalized_polarization.
* Salloum et al. (2021) Ali Salloum, Ted Hsuan Yun Chen, and Mikko Kivelä. 2021. Data for Separating Controversy from Noise: Comparison and Normalization of Structural Polarization Measures. https://doi.org/10.5281/zenodo.4434245.
* Serafino et al. (2021) Matteo Serafino, Giulio Cimini, Amos Maritan, Andrea Rinaldo, Samir Suweis, Jayanth R Banavar, and Guido Caldarelli. 2021. True scale-free networks hidden by finite size effects. _Proceedings of the National Academy of Sciences_ 118, 2 (2021).
* Sun et al. (2017) Shiliang Sun, Chen Luo, and Junyu Chen. 2017. A review of natural language processing techniques for opinion mining systems. _Information fusion_ 36 (2017), 10–25.
* Trochim and Donnelly (2001) William MK Trochim and James P Donnelly. 2001. _Research methods knowledge base_. Vol. 2. Atomic Dog Pub.
* Waugh et al. (2011) Andrew Scott Waugh, Liuyi Pei, James H. Fowler, Peter J. Mucha, and Mason A. Porter. 2011. Party Polarization in Congress: A Network Science Approach. arXiv:0907.3509 [physics.soc-ph]
* Weber et al. (2013) Ingmar Weber, Venkata R Kiran Garimella, and Alaa Batayneh. 2013. Secular vs. islamist polarization in egypt on twitter. In _Proceedings of the 2013 IEEE/ACM international conference on advances in social networks analysis and mining_. 290–297.
* Zhang and Moore (2014) Pan Zhang and Cristopher Moore. 2014. Scalable detection of statistically significant communities and hierarchies, using message passing for modularity. _Proceedings of the National Academy of Sciences_ 111, 51 (2014), 18144–18149.
* Zhang and Rohe (2018) Yilin Zhang and Karl Rohe. 2018. Understanding Regularized Spectral Clustering via Graph Conductance. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_ (Montréal, Canada) _(NIPS’18)_. Curran Associates Inc., Red Hook, NY, USA, 10654–10663.
* Zhou (2016) Jack Zhou. 2016. Boomerangs versus javelins: how polarization constrains communication on climate change. _Environmental Politics_ 25, 5 (2016), 788–811.
## Appendix A Polarization measures
Here we briefly introduce the definitions of the polarization measures studied
in the main body of the paper. Each score assumes two disjoint sets $A$ and
$B$; throughout, a network refers to its giant component only. Let $V=A\cup B$
be the set of nodes and $E$ the set of edges in the network. The memberships
$c_{i}$ of the nodes are determined during the network partition stage. In a
polarized network, the two sets are expected to represent the opposing
communities or sides. Some of the scores, such as Random Walk Controversy and
Boundary Polarization, are also designed to capture potential antipolarization
behavior.
1. (1)
Random Walk Controversy
This measure captures how likely a random user on either side is to be exposed
to content produced by influencers from the opposing side. From each set, the
$k$ nodes with the highest degrees are selected and labeled as influencers; a
high degree is assumed to indicate a large number of received endorsements on
the specific topic.
A random walk begins on either side with equal probability and terminates when
it arrives at any influencer node (an absorbing state). Based on the
distribution of starting and ending sides over many random walks, the score is
computed as
$P_{RWC}=p_{AA}p_{BB}-p_{AB}p_{BA},$
where $p_{AB}$ is the conditional probability of a random walk ending on side
$B$ given that it started from side $A$; the other probabilities are defined
analogously. The score $P_{RWC}$ takes values between -1 and 1. A fully
polarized network has $P_{RWC}=1$, whereas a non-polarized network is expected
to have $P_{RWC}=0$. If users are more likely to be exposed to content
produced by influencers of the opposing group, $P_{RWC}$ becomes negative.
As there was no general network-dependent rule for choosing the parameter $k$
in (Garimella et al., 2018b), we chose a single value $k=10$ for all of the
networks.
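As a concrete illustration, the absorbing random walk described above can be sketched in a few lines of Python. The graph, partition, and parameter values below are toy assumptions of ours, not data from the paper.

```python
import random
from collections import defaultdict

def rwc(adj, A, B, k=10, n_walks=2000, seed=0):
    """Monte Carlo sketch of the Random Walk Controversy score."""
    rng = random.Random(seed)
    deg = lambda n: len(adj[n])
    # Top-k highest-degree nodes per side act as absorbing influencers.
    infl_A = set(sorted(A, key=deg, reverse=True)[:k])
    infl_B = set(sorted(B, key=deg, reverse=True)[:k])
    counts = defaultdict(int)
    for _ in range(n_walks):
        side = rng.choice("AB")                   # start side, equal probability
        node = rng.choice(sorted(A if side == "A" else B))
        while node not in infl_A and node not in infl_B:
            node = rng.choice(sorted(adj[node]))  # uniform step to a neighbor
        counts[(side, "A" if node in infl_A else "B")] += 1
    nA = counts[("A", "A")] + counts[("A", "B")]
    nB = counts[("B", "A")] + counts[("B", "B")]
    pAA, pAB = counts[("A", "A")] / nA, counts[("A", "B")] / nA
    pBA, pBB = counts[("B", "A")] / nB, counts[("B", "B")] / nB
    return pAA * pBB - pAB * pBA

# Toy network: two triangles bridged by a single edge (strongly polarized).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
A, B = {0, 1, 2}, {3, 4, 5}
print(rwc(adj, A, B, k=1))  # 1.0: walks never cross before absorption
```

In this toy graph the bridge endpoints are themselves the highest-degree nodes, so every walk is absorbed on its starting side and the score is exactly 1; on real networks the estimate fluctuates with `n_walks` and, as noted above, with the choice of `k`.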
2. (2)
Adaptive Random Walk Controversy
The Random Walk Controversy measure is very sensitive to the number of
influencers $k$. As no strategy for selecting $k$ based on the network was
presented in the article where the measure was defined (Garimella et al.,
2018a), we devised one that adapts $k$ to the network, based on an initial
sensitivity analysis of the score. Instead of selecting a fixed number of
influencers from both sides, the number of influencers in a community depends
on its size: the fraction $K$ of the highest-degree nodes on each side is
labeled as influencers, i.e., $k_{A}=Kn_{A}$ for community $A$ and
$k_{B}=Kn_{B}$ for community $B$ with fixed $K$, so that the measure
$P_{ARWC}$ scales with the number of nodes in each community. We used
$K=0.01$. It should be noted that the actual values of the ARWC score (and the
RWC score) are sensitive to these parameter choices, making comparison of
results difficult if different values are used, but the qualitative behavior
relative to random networks described in the article is not sensitive to small
changes in the parameter value $K$ (or $k$ for RWC).
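The adaptive influencer count can be written as a one-line helper. Reading the fraction-$K$ rule as $k=\max(1,\mathrm{round}(Kn))$, including the floor of one influencer per side, is our interpretation of the text, not a formula from the paper.

```python
def adaptive_k(n_side, K=0.01):
    """Number of influencers for a community of n_side nodes (ARWC sketch).

    Assumes the fraction-K rule with at least one influencer per side.
    """
    return max(1, round(K * n_side))

print(adaptive_k(1000))  # 10
print(adaptive_k(50))    # 1 (the floor kicks in for tiny communities)
```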
3. (3)
Betweenness Centrality Controversy
This measure is based on the distribution of edge betweenness centralities. If
the two sides are strongly separated, the links on the boundary are expected
to have high edge betweenness centralities; the intuition is that, in a highly
polarized network, links connecting the opposing communities play a critical
role in the network topology. The centrality $c_{B}$ of each edge in the
network is defined as
$c_{B}(e)=\sum_{s,t\in V}\frac{\sigma(s,t|e)}{\sigma(s,t)},$
where $\sigma(s,t)$ denotes the total number of shortest paths between nodes
$s$ and $t$ and $\sigma(s,t|e)$ is the number of those paths that include edge
$e$.
The KL divergence $d_{KL}$ is then computed between the distribution of edge
centralities for edges in the cut and the distribution for the remaining
edges; the probability densities entering the KL divergence are estimated by
kernel density estimation. The measure thus quantifies polarization by
comparing the centralities of boundary and non-boundary links:
$P_{BCC}=1-e^{-d_{KL}}.$
The score approaches 1 as the level of separation increases. For networks in
which the centrality values of links between the two communities do not differ
significantly, $P_{BCC}$ is close to 0.
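The pipeline can be sketched as follows. For brevity this sketch enumerates shortest paths by brute force (fine only for tiny graphs) and replaces the kernel density estimates with smoothed histograms, so it illustrates the idea rather than reproducing the measure as used in the paper.

```python
import math
from collections import deque, defaultdict
from itertools import combinations

def edge_betweenness(adj):
    """Brute-force edge betweenness via explicit shortest-path enumeration."""
    def shortest_paths(s, t):
        dist, preds = {s: 0}, defaultdict(list)
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
                if dist[v] == dist[u] + 1:
                    preds[v].append(u)
        def walk(v):
            if v == s:
                yield [s]
            for p in preds[v]:
                for path in walk(p):
                    yield path + [v]
        return list(walk(t)) if t in dist else []

    c = defaultdict(float)
    for s, t in combinations(sorted(adj), 2):
        paths = shortest_paths(s, t)
        for path in paths:                  # each path contributes 1/sigma(s,t)
            for e in zip(path, path[1:]):
                c[frozenset(e)] += 1 / len(paths)
    return c

def p_bcc(adj, A, bins=10):
    c = edge_betweenness(adj)
    cut = [v for e, v in c.items() if len(e & A) == 1]   # boundary edges
    rest = [v for e, v in c.items() if len(e & A) != 1]
    lo, hi = min(c.values()), max(c.values())
    def hist(xs):                       # smoothed histogram as a stand-in for KDE
        h = [1e-9] * bins
        for x in xs:
            h[min(bins - 1, int((x - lo) / (hi - lo + 1e-12) * bins))] += 1
        z = sum(h)
        return [p / z for p in h]
    P, Q = hist(cut), hist(rest)
    d_kl = sum(p * math.log(p / q) for p, q in zip(P, Q))
    return 1 - math.exp(-d_kl)

# Toy network: two triangles bridged by a single edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
score = p_bcc(adj, A={0, 1, 2})
print(0 < score < 1)  # True: the bridge dominates the shortest paths
```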
4. (4)
Boundary Polarization
This measure assumes that a low concentration of high-degree nodes on the
boundary between the communities implies polarization. The underlying
intuition is that the further an authoritative or influential user is from the
boundary, the larger the amount of antagonism present in the network. Two
sets, $C$ and $I$, are defined for the score. A node $s\in A$ belongs to the
set $C_{A}$ if and only if it is linked to at least one node of the other side
($t\in B$) and it is linked to a node $w\in A$ that is not connected to any
node of side $B$. Since both sides have their own boundary nodes, the boundary
of the whole network is $C=C_{A}\cup C_{B}$. The non-boundary nodes are called
internal nodes, $I_{A}=A-C_{A}$ and $I_{B}=B-C_{B}$, and are combined into
$I=I_{A}\cup I_{B}$.
The measure is defined as
$P_{BP}=\frac{1}{|C|}\sum_{s\in C}\frac{d_{I}(s)}{d_{C}(s)+d_{I}(s)}-0.5,$
where $d_{C}(s)$ is the number of edges between node $s$ and nodes in $C$, and
$d_{I}(s)$ is the number of edges between the same node $s$ and nodes in $I$.
The score is normalized by the number of boundary nodes $|C|$. The values of
$P_{BP}$ range from -0.5 to 0.5, where 0.5 indicates maximum polarization. A
non-polarized network is expected to have values close to zero, whereas
negative values indicate that the boundary nodes of community $A$ are more
likely to connect to the other side $B$ than to their own side.
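Following the set definitions above literally, a sketch of the score looks like this (the toy graph and names are ours, and the zero return for an empty boundary is a convention chosen here, not taken from the paper):

```python
def p_bp(adj, A, B):
    """Boundary Polarization sketch, following the textual definition above."""
    def boundary(side, other):
        out = set()
        for s in side:
            if not adj[s] & other:
                continue  # not linked to the other side
            # ...and linked to a same-side neighbor with no cross links
            if any(w in side and not (adj[w] & other) for w in adj[s]):
                out.add(s)
        return out
    C = boundary(A, B) | boundary(B, A)
    if not C:
        return 0.0  # no boundary nodes: convention chosen here
    I = (A | B) - C
    total = sum(len(adj[s] & I) / (len(adj[s] & C) + len(adj[s] & I))
                for s in C)
    return total / len(C) - 0.5

# Toy network: two triangles bridged by a single edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(p_bp(adj, {0, 1, 2}, {3, 4, 5}))  # 2/3 - 0.5 ≈ 0.1667
```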
5. (5)
Dipole Polarization
This measure applies label propagation to quantify the distance between the
opinions of the two sides. The intuition is that a network is perfectly
polarized when it is divided into two communities of the same size and
opposite opinions. First, the top-k% highest-degree nodes from both sides are
selected and assigned the “extreme opinion scores” of -1 or 1, depending on
which side they belong to. For these influencer nodes, $r_{t}$ is fixed to its
extreme value for all steps $t$. All other nodes begin with a neutral opinion
score $r_{t=0}=0$, and their opinion scores $r$ are then updated by label
propagation as follows
$r_{t}(s)=\frac{\sum_{v}W_{sv}r_{t-1}(v)}{d(s)},$
where $W_{sv}=1$ if there is an edge between nodes $s$ and $v$, $r_{t-1}(v)$
is the opinion score of node $v$ at the previous step, and $d(s)$ is the
degree of node $s$. This process is repeated until the opinion scores
converge.
Denote the average or the gravity center of positive and negative opinion
scores by $gc^{+}$ and $gc^{-}$. The distance between the means of the
opposite opinion score distributions is then $d=\frac{|gc^{+}-gc^{-}|}{2}$.
For the final polarization score $P_{DP}$, the distance $d$ is multiplied by
$(1-\Delta a)$ to penalize the potential difference in the community sizes.
The $\Delta a$ can be obtained either by a) taking the difference of definite
integrals of the opposite opinion score distributions or b) by computing the
absolute difference of the normalized community sizes. The latter is simply
obtained by $\Delta a=\frac{|n^{+}-n^{-}|}{|A\cup B|}$, where $n^{+}$ denotes
the number of nodes having a positive opinion score and $n^{-}$ denotes the
number of nodes having a negative opinion score.
The final polarization is calculated by
$P_{DP}=(1-\Delta a)d.$
The value of $P_{DP}$ reaches its maximum only when the label-propagation
based communities have equal sizes. The closer the means of the opinion score
distributions of the two communities are, the lower the polarization.
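A minimal sketch of the propagation and scoring, assuming the absolute-difference variant (b) for $\Delta a$, a fixed number of influencers per side, and a fixed iteration count instead of a convergence test:

```python
def p_dp(adj, A, B, k=1, iters=200):
    """Dipole Polarization sketch: label propagation from fixed influencers."""
    deg = lambda n: len(adj[n])
    fixed = {}
    for n in sorted(A, key=deg, reverse=True)[:k]:
        fixed[n] = 1.0   # extreme opinion, side A
    for n in sorted(B, key=deg, reverse=True)[:k]:
        fixed[n] = -1.0  # extreme opinion, side B
    r = {n: fixed.get(n, 0.0) for n in adj}
    for _ in range(iters):  # synchronous updates, fixed iteration budget
        r = {n: fixed[n] if n in fixed
             else sum(r[v] for v in adj[n]) / deg(n) for n in adj}
    pos = [x for x in r.values() if x > 0]
    neg = [x for x in r.values() if x < 0]
    d = abs(sum(pos) / len(pos) - sum(neg) / len(neg)) / 2  # |gc+ - gc-| / 2
    delta_a = abs(len(pos) - len(neg)) / len(r)             # size-imbalance penalty
    return (1 - delta_a) * d

# Toy network: two triangles bridged by a single edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(round(p_dp(adj, {0, 1, 2}, {3, 4, 5}), 6))  # 1.0: equal sides, opposite opinions
```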
6. (6)
Modularity
Modularity is one of the most popular scores for describing the community
structure of a social network. It measures how different the communities are
from the corresponding communities in an ensemble of random graphs generated
by the configuration model. The polarization score based on modularity is
simply the usual modularity formula used to evaluate the quality of a
partition:
$P_{Q}=\frac{1}{2|E|}\sum_{ij}(W_{ij}-\frac{k_{i}k_{j}}{2|E|})\delta(c_{i},c_{j}),$
where $|E|$ is the number of edges, $W_{ij}$ is an element of the adjacency
matrix, and $k_{i}$ is the degree of node $i$. The term $\delta(c_{i},c_{j})$
equals one only when nodes $i$ and $j$ belong to the same community, and zero
otherwise.
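A direct, if inefficient, transcription of the formula (quadratic in the number of nodes; real analyses use optimized library routines):

```python
def modularity(adj, communities):
    """Direct O(n^2) transcription of the modularity formula above."""
    m2 = sum(len(adj[n]) for n in adj)  # 2|E|: each edge counted twice
    label = {n: i for i, comm in enumerate(communities) for n in comm}
    q = 0.0
    for i in adj:
        for j in adj:
            if label[i] != label[j]:
                continue  # delta(c_i, c_j) = 0
            w = 1.0 if j in adj[i] else 0.0
            q += w - len(adj[i]) * len(adj[j]) / m2
    return q / m2

# Toy network: two triangles bridged by a single edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(round(modularity(adj, [{0, 1, 2}, {3, 4, 5}]), 4))  # 0.3571 (= 5/14)
```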
7. (7)
E-I Index
This simple measure, also known as the Krackhardt E/I ratio, compares the
number of external connections between the communities to the number of
internal connections within them. For two communities, it can be defined as
$P_{EI}=\frac{|C|}{|C^{\prime}|},$
where $C$ is the cut-set $\\{(s,t)\in E|s\in A,t\in B\\}$ and $C^{\prime}$ is
its complement ($C^{\prime}=E\setminus C$).
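As defined here the score is a plain ratio of cut edges to non-cut edges, so smaller values correspond to stronger separation; a sketch:

```python
def p_ei(adj, A, B):
    """E-I Index as defined above: cut edges over remaining edges."""
    cut = sum(1 for u in A for v in adj[u] if v in B)  # each cut edge counted once
    total = sum(len(adj[u]) for u in adj) // 2         # |E|
    return cut / (total - cut)

# Toy network: two triangles bridged by a single edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(p_ei(adj, {0, 1, 2}, {3, 4, 5}))  # 1/6: one cut edge, six internal edges
```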
8. (8)
Adaptive E-I Index
This measure extends the E-I Index by accounting for different community sizes
through the density of links within each community; it reduces to the E-I
Index when both communities have the same number of nodes. The measure is
defined as
$P_{AEI}=\frac{\sigma_{AA}+\sigma_{BB}-(\sigma_{AB}+\sigma_{BA})}{\sigma_{AA}+\sigma_{BB}+(\sigma_{AB}+\sigma_{BA})},$
where $\sigma_{AA}$ is the ratio of actual to potential links within community
$A$ (similarly for $\sigma_{BB}$) and $\sigma_{AB}$ is the observed number of
links between communities $A$ and $B$ divided by the number of all potential
links.
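For an undirected network $\sigma_{AB}=\sigma_{BA}$, which the sketch below assumes; taking $n_{A}n_{B}$ as the number of potential cross links is also our assumption:

```python
def p_aei(adj, A, B):
    """Adaptive E-I Index sketch for an undirected network."""
    def density(S):  # actual over potential links within a community
        e = sum(1 for u in S for v in adj[u] if v in S) / 2
        return e / (len(S) * (len(S) - 1) / 2)
    cut = sum(1 for u in A for v in adj[u] if v in B)
    s_ab = cut / (len(A) * len(B))       # assumed: sigma_AB = sigma_BA
    internal = density(A) + density(B)   # sigma_AA + sigma_BB
    external = 2 * s_ab                  # sigma_AB + sigma_BA
    return (internal - external) / (internal + external)

# Toy network: two triangles bridged by a single edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(round(p_aei(adj, {0, 1, 2}, {3, 4, 5}), 6))  # 0.8: dense sides, sparse cut
```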
## Appendix B Additional Figures and Analysis
In this appendix, we include figures that summarize results of additional
analysis. In section B.1, we include alternative illustrations and additional
analysis to support our arguments. In section B.2, we include results obtained
using different partitioning methods.
### B.1. Alternative visualisations and additional analysis
Fig. 12 is an alternative plot for illustrating our observation in Section
4.1. Fig. 13 is an alternative plot for illustrating our observation in
Section 4.2.1.
The effects of heterogeneous degree sequences on the polarization measures
were also studied for larger networks (Figs. 14 and 15). For the stability of
the performance across networks with different average degrees, see Figs. 17
and 18. Finally, Figs. 16 and 19 show the performance of the standardized
denoised polarization scores.
### B.2. Results for alternative graph partitioning methods
In addition to METIS, we performed our analysis using two alternative
clustering methods: regularized spectral clustering and modularity
optimization. While METIS uses the number of links between the two groups as
its optimisation criterion, the intuition behind spectral clustering is to
find groups in which a random walker would remain in the starting cluster as
long as possible. Modularity, in turn, measures the excess fraction of links
inside the groups compared to a null model. The clusters obtained by METIS
already had high modularity values; therefore, for modularity optimization we
used the partition produced by METIS as a pre-partition that was then
fine-tuned: we optimised the partition for maximum modularity with a greedy
stochastic method that repeatedly tries to swap the cluster of a random node
and accepts the swap if the value of the target function improves (Kernighan
and Lin, 1970). Reasonable convergence was achieved when the number of swaps
equaled twice the number of nodes in the network. Figures 20 and 21 display
the noise-bar analysis equivalent to Fig. 4 in the main text for these two
additional methods, but only considering the configuration model ($d=1$).
Figures 22 and 23 are alternatives to Fig. 9, displaying the ROC curves for
the classification task presented in Sec. 5.
Figure 12. The observed polarization score as a function of the score computed
for a randomized version of that network. Each point corresponds to the
average polarization score of 500 randomizations of the original network. The
colors represent the three different random models (see the legend in the
upper left panel). Note that the negative values of Boundary Polarization have
been left out of this visualisation. The figure contains data for all the 203
networks. Error bars for single points are omitted because they are mostly too
small to be visible.

Figure 13. 3D plots displaying the effect of network size and average degree
on polarization. For ER networks, only RWC gives lower polarization as the
network grows, but a fixed parameter is then required. Many of the scores
approach zero polarization only after the networks become very dense.

Figure 14. The effect of degree heterogeneity on the 8 polarization scores in
simulated scale-free networks with 2000 nodes. We show the expected
polarization score and its standard deviation as a function of average degree.
The symbols correspond to different exponent $\gamma$ values as indicated by
the legend. The RWC values are lower for larger networks, which was also shown
to be the case for ER networks in Section 4.2.1; the rest of the scores do not
depend on network size. See Fig. 7 in the main text for the networks with 1000
nodes.

Figure 15. The effect of degree heterogeneity on the 8 polarization scores in
simulated scale-free networks with 5000 nodes. We show the expected
polarization score and its standard deviation as a function of average degree.
The symbols correspond to different exponent $\gamma$ values as indicated by
the legend. See Fig. 7 in the main text for the networks with 1000 nodes.

Figure 16. ROC curves, Gini coefficient values, and AUC values for the task of
predicting the manually curated labeling of polarized and non-polarized
networks. The results are shown (left) for the score values before the
normalization and (right) after the normalization with standardized and
denoised scores $\hat{\Phi}_{z}(G)$ (see Section 5.3). The figures are based
on all the 203 empirical networks described in Section 3.1. The ROC curves for
denoised polarization scores are in Fig. 9.

Figure 17. Quantifying how a network's average degree affects the performance.
We group the data such that there are 100 networks with consecutive degrees in
our data, and create a set of such windows by varying the degree range. We
then evaluate the AUC for the moving window of 100 networks. The plots show
how the performance of both normalized and unnormalized scores increases with
the average degree for all the polarization methods. However, the
normalization still improves the overall accuracy, especially for the less
sparse networks. For instance, the normalization of the RWC score improves the
AUC by approximately 0.10 units for networks with an average degree of 2.4 or
higher.
Figure 18. Additional information about the windows for Fig. 11 and Fig. 17.
The smallest value of the window is on the x-axis and, respectively, the
largest value of the window is on the y-axis.

Figure 19. Scatter plot similar to Fig. 10 but for denoised polarization
scores with standardization. Red crosses are networks labeled as polarized and
black points are networks labeled as non-polarized. An outlier was removed
from the plots. Note that the scales for the scores are different and not
shown in this figure.

Figure 20. Polarization scores for the 203 observed networks and their
shuffled versions. Each bar corresponds to a score, and the scores for a
network and its randomized versions are on top of each other: the observed
network is represented with black bars, and the scores computed for random
networks where the degree sequence is preserved ($d=1$) are shown in blue. The
figure can be interpreted as follows: the amount of visible blue tells how
much of the total bar height (the score value) is explained by the degree
distribution, and the amount of visible black is not explained by it. Note
that in some cases the randomized networks produce higher scores than the
original network; in this case the black bar is fully covered by the blue bar,
and we draw a black horizontal line on top of the blue bar indicating the
height of the black bar. The difference of this figure from Fig. 4 in the main
text is that the groups here are produced with spectral clustering and only
the null model for the degree sequence is shown.

Figure 21. Polarization scores for the 203 observed networks and their
shuffled versions. Each bar corresponds to a score, and the scores for a
network and its randomized versions are on top of each other: the observed
network is represented with black bars, and the scores computed for random
networks where the degree sequence is preserved ($d=1$) are shown in blue. The
figure can be interpreted as follows: the amount of visible blue tells how
much of the total bar height (the score value) is explained by the degree
distribution, and the amount of visible black is not explained by it. Note
that in some cases the randomized networks produce higher scores than the
original network; in this case the black bar is fully covered by the blue bar,
and we draw a black horizontal line on top of the blue bar indicating the
height of the black bar. The difference of this figure from Fig. 4 in the main
text is that the groups here are fine-tuned with modularity optimization and
only the null model for the degree sequence is shown.

Figure 22. ROC curves, Gini coefficient values, and AUC values for the task of
predicting the manually curated labeling of polarized and non-polarized
networks. The results are shown (left) for the score values before the
normalization and (right) after the normalization with denoised scores
$\hat{\Phi}(G)$ (see main text). The figures are based on all the 203
empirical networks described in Section 3.1. The difference of this figure
from Fig. 9 in the main text is that the groups here are produced with
spectral clustering.

Figure 23. ROC curves, Gini coefficient values, and AUC values for the task of
predicting the manually curated labeling of polarized and non-polarized
networks. The results are shown (left) for the score values before the
normalization and (right) after the normalization with denoised scores
$\hat{\Phi}(G)$ (see main text). The figures are based on all the 203
empirical networks described in Section 3.1. The difference of this figure
from Fig. 9 in the main text is that the groups here are fine-tuned with
modularity optimisation.
# Specific heat of CeRhIn5 in high magnetic fields: Magnetic phase diagram
revisited
S. Mishra Laboratoire National des Champs Magnétiques Intenses (LNCMI-EMFL),
CNRS, UGA, 38042 Grenoble, France A. Demuer Laboratoire National des Champs
Magnétiques Intenses (LNCMI-EMFL), CNRS, UGA, 38042 Grenoble, France D. Aoki
Institute for Materials Research, Tohoku University, Oarai, Ibaraki, 311-1313,
Japan I. Sheikin<EMAIL_ADDRESS>Laboratoire National des Champs
Magnétiques Intenses (LNCMI-EMFL), CNRS, UGA, 38042 Grenoble, France
###### Abstract
CeRhIn5 is a prototypical antiferromagnetic heavy-fermion compound, whose
behavior in a magnetic field is unique. A magnetic field applied in the basal
plane of the tetragonal crystal structure induces two additional phase
transitions. When the magnetic field is applied along, or close to, the $c$
axis, a new phase characterized by a pronounced in-plane electronic anisotropy
emerges at $B^{*}\approx$ 30 T, well below the critical field $B_{c}\simeq$
50 T required to suppress the antiferromagnetic order. The exact origin of this new
phase, originally suggested to be an electronic-nematic state, remains
elusive. Here we report low-temperature specific heat measurements in CeRhIn5
in high static magnetic fields up to 36 T applied along both the $a$ and $c$
axes. For fields applied along the $a$ axis, we confirmed the previously
suggested phase diagram, and extended it to higher fields. This allowed us to
observe a triple point at $\sim$ 30 T, where the first-order transition from
an incommensurate to commensurate magnetic structure merges into the onset of
the second-order antiferromagnetic transition. For fields applied along the
$c$ axis, we observed a small but distinct anomaly at $B^{*}$, which we
discuss in terms of a possible field-induced transition, probably weakly
first-order. We further suggest that the transition corresponds to a change of
magnetic structure. We revise magnetic phase diagrams of CeRhIn5 for both
principal orientations of the magnetic field based entirely on thermodynamic
anomalies.
## I INTRODUCTION
Strongly correlated electron systems, such as high-temperature
superconductors, iron-based superconductors, and heavy-fermion compounds, are
of much experimental and theoretical interest. In all these materials,
unconventional superconductivity is believed to emerge in the vicinity of a
quantum critical point, a zero-temperature continuous phase transition. In
addition, some of these materials host even more exotic phases. The latter
include the pseudogap phase in high $T_{c}$ superconductors [1], an
electronic-nematic state in iron-based superconductors [2], and the mysterious
“hidden order” phase in the heavy-fermion compound URu2Si2 [3]. These phases
are still poorly understood, and their possible relation with unconventional
superconductivity is a subject of much theoretical debate.
From the experimental point of view, Ce-based heavy-fermion materials are
particularly suitable systems to investigate. In these compounds, the strength
of the electronic correlations can be tuned by pressure, doping, and magnetic
fields. Relatively small energy scales allow these materials to be tuned to
quantum critical points by accessible pressures and magnetic fields.
Unconventional superconductivity and/or other unusual states are often
observed in these systems in the vicinity of a quantum critical point.
CeRhIn5 is one of the best-studied heavy-fermion compounds. It crystallizes in
the tetragonal HoCoGa5 structure (space group $P4/mmm$), which can be viewed
as a stack of alternating layers of CeIn3 and RhIn2 along the $c$ axis. The
electronic specific heat coefficient, $\gamma\approx$ 400 mJ/(K$^{2}$ mol), makes
CeRhIn5 a moderate heavy-fermion material [4, 5, 6]. At ambient pressure and
zero magnetic field, it undergoes an antiferromagnetic (AFM) transition at
$T_{N}=$ 3.8 K. Within the AFM phase, the Ce moments are antiferromagnetically
aligned within the CeIn3 planes. The moments spiral transversally along the
$c$ axis with a propagation vector $\mathbf{Q}=(0.5,0.5,0.297)$ incommensurate
with the crystal lattice [7].
A magnetic field applied in the basal plane of CeRhIn5 induces two additional
transitions, observed in specific heat [8], thermal expansion, and
magnetostriction measurements [9]. The lower temperature transition is first
order. It occurs at $B_{m}$ $\sim$ 2 T at low temperatures, and corresponds to
a change of magnetic structure from incommensurate to commensurate [10]. The
higher temperature transition is second order. It corresponds to a change of
the ordered moment, while the propagation vector, almost the same as in a zero
magnetic field [10], becomes temperature-dependent [11]. Both transitions were
traced up to 18 T in static field measurements [9]. More recently, specific
heat measurements in pulsed fields applied along the $a$ axis revealed a non-
monotonic field dependence of $T_{N}$ [12]. The AFM transition temperature
initially increases up to about 12 T, and then decreases monotonically all the
way up to the critical field $B_{c}\sim 50$ T. In these measurements, however,
the two field-induced phases were not observed. Therefore, the complete phase
diagram for fields along the $a$ axis remains elusive.
When a magnetic field is applied along the $c$ axis, $T_{N}$ monotonically
decreases until it is completely suppressed at $B_{c}\sim 50$ T [12, 13].
Surprisingly, the critical field for this orientation is approximately the
same as along the $a$ axis in spite of a considerable crystallographic and
magnetic anisotropy of CeRhIn5. On the other hand, the critical field was
extrapolated from specific heat measurements in pulsed magnetic fields, while
there is a difference between the results obtained in pulsed [12, 13] and
static [6] fields.
The most interesting feature was observed in various measurements at
$B^{*}\simeq$ 30 T for a field applied either along or slightly tilted from
the $c$ axis [12, 14, 15, 16, 17, 18, 19]. While it was interpreted as a
transition into an electronic-nematic state [15], the exact origin and nature
of this anomaly is still under debate. Surprisingly, specific heat
measurements have so far failed to show a direct indication of this anomaly
[6, 12, 13]. It is thus still unclear whether the anomaly corresponds to a
real thermodynamic phase transition or a crossover.
Figure 1: Specific heat divided by temperature, $C/T$, of CeRhIn5 for a
magnetic field applied along the $a$ axis. (a) $C/T$ as a function of $T$
obtained from relaxation technique for several values of a magnetic field.
Curves are vertically shifted according to the magnetic field scale shown in
the right axis. A zoom at low and high fields is shown in (b) and (c),
respectively. (d) Total specific heat, $C$, obtained from field sweeps using
the AC technique.
The above mentioned inconsistencies and shortcomings demonstrate a clear need
to perform specific heat measurements in high static fields to ascertain the
quantum critical point, complete the phase diagram for fields along the $a$
axis, verify the phase diagram along the $c$ axis, and seek direct evidence
for the enigmatic novel state at $B^{*}$.
In this paper, we report high-field low-temperature specific heat measurements
on a single crystal of CeRhIn5. The measurements were performed in static
fields up to 36 T for field orientations both along the $a$ and $c$ axes. For
a field applied along the $a$ axis, we observed all the previously reported
transitions, and traced them to higher fields. For a field along the $c$ axis,
we observed the so far elusive anomaly at $B^{*}$ in addition to the AFM
transition. Based on these features observed in specific heat, we propose a
revision of the magnetic phase diagram of CeRhIn5 for both principal
orientations of the magnetic field.
## II EXPERIMENTAL DETAILS
The high-quality single crystal of CeRhIn5 with the dimensions of
1.3$\times$0.8$\times$0.2 mm$^{3}$ (the length of the sample is parallel to the $c$
axis) and a mass of 1.55 mg used in the present study was grown by the In self-flux technique, details of which can be found elsewhere [20]. Specific
heat measurements were performed in static magnetic fields to 36 T by either a
thermal relaxation technique at constant field or AC technique at constant
temperature [21].
## III RESULTS AND DISCUSSION
Figure 2: Magnetic phase diagram of CeRhIn5 obtained from relaxation
(circles) and AC (triangles) specific heat measurements for a field applied
along the $a$ axis. Closed and open symbols correspond to second- and first-
order transitions, respectively.
Figure 1(a) shows specific heat divided by temperature, $C/T$, obtained from
relaxation measurements for a magnetic field applied along the $a$ axis. For
this field orientation, apart from the AFM transition at $T_{N}$, there are
two additional field-induced transitions at $T_{1}$ and $T_{2}$, as shown in
Fig. 1(b). The transition at $T_{1}$ manifests itself by a sharp $\delta$-like
peak characteristic of a first-order transition. The transition at $T_{2}$
appears as a $\lambda$-type anomaly typical for a second-order transition.
This transition is observed only at low fields, as shown in Fig. 1(b). In
agreement with previous reports, $T_{N}$ initially increases up to about 10 T,
and then decreases monotonically up to the highest field of our measurements.
The transition temperature $T_{1}$ shows a similar trend. Above 3 T, $T_{1}$
increases up to about 12 T, and then starts to decrease. Its suppression rate,
however, is slower than that of $T_{N}$. With increasing field, the two
transitions approach each other. At 28 T, the two transitions are barely
distinguishable, and at 30 T only the transition at $T_{N}$ remains, as shown
in Fig. 1(c).
All the transitions are also observed in measurements using the AC technique,
as shown in Fig. 1(d). In particular, the curve obtained at 3.4 K shows four
phase transitions. Interestingly, while the low-field transition at
$B_{1}^{L}$ manifests itself as a sharp peak, its high-field counterpart at
$B_{1}^{H}$ appears as a rather smeared anomaly, although it is also a first-
order transition. This is not surprising considering that standard AC
calorimetry models are based on steady-state measurements, which is not the
case for a first-order phase transition due to the involvement of latent heat.
Therefore, a first-order transition does not always manifest itself as the
canonical $\delta$-like feature. The shape, and even the presence, of an
anomaly depends on the latent heat associated with the transition. On the
other hand, a second-order transition always manifests itself as a distinct
$\lambda$-like anomaly, which is, indeed, the case for the two known second-
order transitions at $B_{2}$ and $B_{N}$.
The resulting magnetic field-temperature, $B$-$T$, phase diagram for a field along
the $a$ axis is shown in Fig. 2. It contains three different antiferromagnetic
phases labeled AFM1, AFM2, and AFM3. The magnetic structure of all three
phases was previously determined by neutron diffraction [10, 11]. The zero-
field phase AFM1 corresponds to an incommensurate antiferromagnetic spin helix
with a propagation vector $\mathbf{Q}=(0.5,0.5,0.297)$. The AFM2 phase is an
incommensurate elliptical helix with strongly modulated magnetic moments and a
temperature-dependent propagation vector. The AFM3 phase is a commensurate
collinear square wave (‘up-up-down-down’ configuration) with a propagation
vector $\mathbf{Q}=(1/2,1/2,1/4)$. All three phases meet at a triple point
inside the AFM phase at (3 T, 3.4 K). The AFM2 phase exists only in a narrow
temperature range close to $T_{N}$. This range shrinks with increasing
magnetic field until the AFM2 phase is completely suppressed at $\sim$ 30 T,
giving rise to yet another triple point. Remarkably, this field is about the
same as $B^{*}$, at which the putative electronic-nematic phase emerges for
fields close to the $c$ axis. Above 30 T, only the commensurate phase AFM3
exists up to the complete suppression of the AFM order. A naive quadratic fit
of $T_{N}$ vs $B$ reveals a critical field $B_{c}\simeq$ 54 T, in agreement
with previous pulsed field results [12].
Figure 3: $C/T$ of CeRhIn5 for a magnetic field applied along the $c$ axis
obtained from temperature sweeps using the relaxation technique.
Figure 3 shows the temperature dependence of the specific heat divided by
temperature, $C/T$, obtained using the relaxation technique at different
magnetic fields applied along the $c$ axis. For this orientation of the
magnetic field, $T_{N}$ is gradually suppressed, consistent with previous
reports [6, 12, 13]. This is the usual behavior observed in AFM heavy-fermion
compounds. However, we observed a nonmonotonic behavior of the specific heat
jump at the AFM transition. With increasing field, the jump size gradually
increases up to 27 T, above which there is a small abrupt drop. The jump size
then remains almost constant between 29 and 35 T, the highest field of our
measurements. A similar behavior was also observed in the previous studies
[13, 8]. This unusual behavior indicates that there might be a change in the
AFM state between 27 and 29 T.
The most remarkable result is obtained using the AC technique with the field
applied along the $c$ axis. For this orientation, we observed a weak but
distinct anomaly at $B^{*}$, as shown in Fig. 4(a). The exact position of the
anomaly is defined from the second derivative of the specific heat with
respect to the magnetic field, where the anomaly manifests itself as a small
maximum, as shown in Fig. 4(b). This anomaly was not observed in previous
high-field specific heat measurements.
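Locating such a weak anomaly from the second field-derivative can be sketched numerically; the synthetic curve below (a smooth background plus a weak slope change at 30 T) is an illustrative assumption, not the measured data:

```python
import numpy as np

# Synthetic C(B): slowly varying background plus a weak, broad kink near
# B* = 30 T, mimicking an anomaly too subtle to see directly in C(B).
B = np.linspace(20.0, 36.0, 801)                   # field grid (T)
background = 1.0 + 0.01 * (B - 20.0)               # smooth background
kink = 0.005 * np.log(np.cosh((B - 30.0) / 0.5))   # weak slope change at 30 T
C = background + kink

# Second derivative with respect to field; the anomaly appears as a maximum.
dC = np.gradient(C, B)
d2C = np.gradient(dC, B)
B_star = B[np.argmax(d2C)]
print(f"anomaly located at B* = {B_star:.2f} T")
```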
Figure 4: (a) Specific heat of CeRhIn5 for a magnetic field applied along the
$c$ axis obtained from field sweeps using the AC technique. Curves are
vertically shifted for clarity. (b) Second derivatives of the heat capacity
shown in (a) with respect to a magnetic field. Arrows indicate the AFM
transition and the anomaly at $B^{*}$.
We will now discuss a possible origin of the high-field state above $B^{*}$
based on our findings. The presence of the specific heat anomaly at $B^{*}$
implies that it is likely a real thermodynamic phase transition rather than a
crossover, contrary to what was previously suggested [16]. The latter
suggestion, however, was based on magnetostriction measurements performed in a
magnetic field applied at $20^{\circ}$ from the $c$ axis. Furthermore, the anomaly we
observe at $B^{*}$ does not have the characteristic $\lambda$-like shape of a
second-order phase transition contrary to those at $B_{2}$ and $B_{N}$ in Fig.
1(d). Therefore, the anomaly observed at $B^{*}$ most likely corresponds to a
first-order phase transition. Moreover, it is thermodynamically forbidden that
three second-order phase boundary lines meet at a triple point [25]. This
further supports our hypothesis that $B^{*}$ is a first-order phase
transition. Finally, the anomaly at $B^{*}$ is observed only within the AFM
state, in agreement with previous reports [12, 14, 15, 16, 18]. Based on this,
the most natural explanation of the phase transition at $B^{*}$ is a change of
magnetic structure. Previous high-field NMR measurements unambiguously suggest
that the AFM phases both below and above $B^{*}$ are incommensurate [17].
Therefore, $B^{*}$ should correspond to a transition from one incommensurate
phase, AFM1, to another phase incommensurate along the $c$ axis, AFM4, with a
propagation vector $\mathbf{Q}=(0.5,0.5,l)$, where $l$ is different from 0.297
of the AFM1 phase.
This hypothesis is consistent with previous reports. Indeed, the previously
observed resistivity jump at $B^{*}$ [14, 15, 19] can be naturally accounted
for by a metamagnetic spin reorientation, as we suggest here. The only
previously reported result, which is difficult to reconcile with our
hypothesis, is that a Fermi surface reconstruction corresponding to the
delocalization of the $f$ electrons occurs at $B^{*}$ [12, 26]. This
conclusion, however, was challenged by recent angular-dependent de Haas-van
Alphen effect measurements, which suggest that the $f$ electrons in CeRhIn5
remain localized up to fields higher even than $B_{c}$ [27].
Figure 5: Magnetic phase diagram of CeRhIn5 obtained from relaxation
(circles) and AC (triangles) specific heat measurements. Closed symbols
correspond to second-order transitions from AFM to PM phase. Open symbols
indicate the anomaly at $B^{*}$, which presumably corresponds to a weakly
first-order transition.
Figure 5 shows the revised magnetic phase diagram of CeRhIn5 for a field
applied along the $c$ axis. The field dependence of $T_{N}$ obtained from our
static-field measurements is consistent with that previously reported, based
on the pulsed-field data [13]. A fit of the data to the
$T_{N}(B)=T_{N0}[1-(B/B_{c})^{2}]$ expression, where $T_{N0}$ is $T_{N}$ at zero field, reveals a critical field $B_{c}\simeq$ 52 T. This value is in
agreement with that previously reported [12]. The transition at $B^{*}$ is
weakly temperature-dependent in agreement with previous measurements [12, 14,
15, 16, 18]. As was already discussed above, we suggest that this first-order
transition separates two different incommensurate magnetic phases.
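The quadratic suppression is linear in $B^{2}$, so the fit can be sketched with an ordinary least-squares line; the data points below are synthetic stand-ins generated from the quoted expression with $T_{N0}=3.8$ K and $B_{c}=52$ T, not the measured transition temperatures:

```python
import numpy as np

# T_N(B) = T_N0 [1 - (B/B_c)^2] is linear in B^2:
# T_N = T_N0 - (T_N0 / B_c^2) * B^2, so a linear fit in B^2 suffices.
B_data = np.array([0.0, 10.0, 20.0, 30.0, 35.0])   # fields (T), synthetic
T_data = 3.8 * (1.0 - (B_data / 52.0) ** 2)        # K, generated from the expression

slope, Tn0 = np.polyfit(B_data ** 2, T_data, 1)
Bc = np.sqrt(-Tn0 / slope)                         # extrapolated critical field
print(f"T_N0 = {Tn0:.2f} K, extrapolated B_c = {Bc:.1f} T")
```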
We note that the situation is entirely different when a magnetic field is
tilted from the $c$ axis. First, a finite component of a magnetic field in the
basal plane explicitly breaks the $C_{4}$ rotational symmetry. Even more
important is that at angles larger than $2^{\circ}$, the transition from the incommensurate
AFM1 phase into the commensurate phase AFM3 occurs below 30 T [27]. Therefore,
the transition at $B^{*}$ is from the commensurate phase AFM3 to the
incommensurate phase AFM4. This is likely what was observed in recent
ultrasound velocity measurements [18]. In these measurements, the anomaly
observed at 20 T at the AFM1-AFM3 transition is very similar to that observed
at $B^{*}\simeq$ 30 T. Furthermore, the magnetostriction anomaly observed at
$B^{*}$ in a magnetic field tilted by about $20^{\circ}$ from the $c$ axis is similar
to that observed at 7.5 T, where it corresponds to the AFM1-AFM3 transition.
## IV CONCLUSIONS
In summary, we performed specific heat measurements in CeRhIn5 in static
fields up to 36 T applied along both the $a$ and $c$ axes. For the field
along the $a$ axis, we confirmed the previously established rich phase
diagram, and extended it to higher fields. For the field along the $c$ axis,
we observed a distinct anomaly at $B^{*}\simeq$ 30 T, suggesting that a real
thermodynamic phase transition, probably weakly first-order, is likely to take
place at this field. We suggest that this transition is from the low-field
incommensurate magnetic structure to another incommensurate phase,
characterized by a different propagation vector. High field inelastic neutron
scattering measurements are required to definitely confirm this hypothesis.
Such measurements, although very challenging, are now possible due to the
recent experimental breakthrough [28].
###### Acknowledgements.
We thank H. Harima and Y. Tokunaga for fruitful and enlightening discussions.
We acknowledge the support of the LNCMI-CNRS, member of the European Magnetic
Field Laboratory (EMFL), the ANR-DFG grant “Fermi-NESt,” and JSPS KAKENHI
grants number JP15H05882, JP15H05884, JP15H05886, and JP15K21732 (J-Physics).
## References
* Keimer _et al._ [2015] B. Keimer, S. A. Kivelson, M. R. Norman, S. Uchida, and J. Zaanen, From quantum matter to high-temperature superconductivity in copper oxides, Nature 518, 179 (2015).
* Fernandes _et al._ [2014] R. M. Fernandes, A. V. Chubukov, and J. Schmalian, What drives nematic order in iron-based superconductors?, Nat. Phys. 10, 97 (2014).
* Mydosh _et al._ [2020] J. A. Mydosh, P. M. Oppeneer, and P. S. Riseborough, Hidden order and beyond: an experimental-theoretical overview of the multifaceted behavior of URu2Si2, J. Phys.: Condens. Matter 32, 143002 (2020).
* Hegger _et al._ [2000] H. Hegger, C. Petrovic, E. G. Moshopoulou, M. F. Hundley, J. L. Sarrao, Z. Fisk, and J. D. Thompson, Pressure-Induced Superconductivity in Quasi-2D CeRhIn5, Phys. Rev. Lett. 84, 4986 (2000).
* Takeuchi _et al._ [2001] T. Takeuchi, T. Inoue, K. Sugiyama, D. Aoki, Y. Tokiwa, Y. Haga, K. Kindo, and Y. Ōnuki, Magnetic and Thermal Properties of CeIrIn5 and CeRhIn5, J. Phys. Soc. Jpn. 70, 877 (2001).
* Kim _et al._ [2001] J. S. Kim, J. Alwood, G. R. Stewart, J. L. Sarrao, and J. D. Thompson, Specific heat in high magnetic fields and non-Fermi-liquid behavior in $\mathrm{Ce}{M}\mathrm{In}_{5}$ $({M}=\mathrm{I}\mathrm{r},\mathrm{}\mathrm{Co})$, Phys. Rev. B 64, 134524 (2001).
* Bao _et al._ [2000] W. Bao, P. G. Pagliuso, J. L. Sarrao, J. D. Thompson, Z. Fisk, J. W. Lynn, and R. W. Erwin, Incommensurate magnetic structure of CeRhIn5, Phys. Rev. B 62, R14621 (2000).
* Cornelius _et al._ [2001] A. L. Cornelius, P. G. Pagliuso, M. F. Hundley, and J. L. Sarrao, Field-induced magnetic transitions in the quasi-two-dimensional heavy-fermion antiferromagnets $\mathrm{Ce}_{n}\mathrm{RhIn}_{3n+2}$ $(n=1$ or 2), Phys. Rev. B 64, 144411 (2001).
* Correa _et al._ [2005] V. F. Correa, W. E. Okraku, J. B. Betts, A. Migliori, J. L. Sarrao, and A. H. Lacerda, High-magnetic-field thermal expansion and elastic properties of CeRhIn5, Phys. Rev. B 72, 012407 (2005).
* Raymond _et al._ [2007] S. Raymond, E. Ressouche, G. Knebel, D. Aoki, and J. Flouquet, Magnetic structure of CeRhIn5 under magnetic field, J. Phys.: Condens. Matter 19, 242204 (2007).
* Fobes _et al._ [2018] D. M. Fobes, S. Zhang, S.-Z. Lin, P. Das, N. J. Ghimire, E. D. Bauer, J. D. Thompson, L. W. Harriger, G. Ehlers, A. Podlesnyak, R. I. Bewley, A. Sazonov, V. Hutanu, F. Ronning, C. D. Batista, and M. Janoschek, Tunable emergent heterostructures in a prototypical correlated metal, Nat. Phys. 14, 456 (2018).
* Jiao _et al._ [2015] L. Jiao, Y. Chen, Y. Kohama, D. Graf, E. D. Bauer, J. Singleton, J.-X. Zhu, Z. Weng, G. Pang, T. Shang, J. Zhang, H.-O. Lee, T. Park, M. Jaime, J. D. Thompson, F. Steglich, Q. Si, and H. Q. Yuan, Fermi surface reconstruction and multiple quantum phase transitions in the antiferromagnet CeRhIn5, Proc. Natl. Acad. Sci. USA 112, 673 (2015).
* Jiao _et al._ [2019] L. Jiao, M. Smidman, Y. Kohama, Z. S. Wang, D. Graf, Z. F. Weng, Y. J. Zhang, A. Matsuo, E. D. Bauer, H. Lee, S. Kirchner, J. Singleton, K. Kindo, J. Wosnitza, F. Steglich, J. D. Thompson, and H. Q. Yuan, Enhancement of the effective mass at high magnetic fields in $\mathrm{CeRhIn}_{5}$, Phys. Rev. B 99, 045127 (2019).
* Moll _et al._ [2015] P. J. W. Moll, B. Zeng, L. Balicas, S. Galeski, F. F. Balakirev, E. D. Bauer, and F. Ronning, Field-induced density wave in the heavy-fermion compound CeRhIn5, Nat. Commun. 6, 6663 (2015).
* Ronning _et al._ [2017] F. Ronning, T. Helm, K. R. Shirer, M. D. Bachmann, L. Balicas, M. K. Chan, B. J. Ramshaw, R. D. McDonald, F. F. Balakirev, M. Jaime, E. D. Bauer, and P. J. W. Moll, Electronic in-plane symmetry breaking at field-tuned quantum criticality in CeRhIn5, Nature 548, 313 (2017).
* Rosa _et al._ [2019] P. F. S. Rosa, S. M. Thomas, F. F. Balakirev, E. D. Bauer, R. M. Fernandes, J. D. Thompson, F. Ronning, and M. Jaime, Enhanced Hybridization Sets the Stage for Electronic Nematicity in $\mathrm{CeRhIn}_{5}$, Phys. Rev. Lett. 122, 016402 (2019).
* Lesseux _et al._ [2020] G. Lesseux, H. Sakai, T. Hattori, Y. Tokunaga, S. Kambe, P. Kuhns, A. Reyes, J. Thompson, P. Pagliuso, and R. Urbano, Orbitally defined field-induced electronic state in a Kondo lattice, Phys. Rev. B 101, 165111 (2020).
* Kurihara _et al._ [2020] R. Kurihara, A. Miyake, M. Tokunaga, Y. Hirose, and R. Settai, High-field ultrasonic study of quadrupole ordering and crystal symmetry breaking in CeRhIn5, Phys. Rev. B 101, 155125 (2020).
* Helm _et al._ [2020] T. Helm, A. D. Grockowiak, F. F. Balakirev, J. Singleton, J. B. Betts, K. R. Shirer, M. König, T. Förster, E. D. Bauer, F. Ronning, S. W. Tozer, and P. J. W. Moll, Non-monotonic pressure dependence of high-field nematicity and magnetism in CeRhIn5, Nat. Commun. 11, 3482 (2020).
* Shishido _et al._ [2002] H. Shishido, R. Settai, D. Aoki, S. Ikeda, H. Nakawaki, N. Nakamura, T. Iizuka, Y. Inada, K. Sugiyama, T. Takeuchi, K. Kindo, T. C. Kobayashi, Y. Haga, H. Harima, Y. Aoki, T. Namiki, H. Sato, and Y. Ōnuki, Fermi Surface, Magnetic and Superconducting Properties of LaRhIn5 and CeTIn5 (T: Co, Rh and Ir), J. Phys. Soc. Jpn. 71, 162 (2002).
* [21] See Supplemental Material at [URL] for the experimental details, which includes Refs. [22–24].
* Lortz _et al._ [2007] R. Lortz, Y. Wang, A. Demuer, P. H. M. Böttger, B. Bergk, G. Zwicknagl, Y. Nakazawa, and J. Wosnitza, Calorimetric Evidence for a Fulde-Ferrell-Larkin-Ovchinnikov Superconducting State in the Layered Organic Superconductor $\kappa\mathrm{\text{$-$}}(\mathrm{BEDT}\mathrm{\text{$-$}}\mathrm{TTF}{)}_{2}\mathrm{Cu}(\mathrm{NCS}{)}_{2}$, Phys. Rev. Lett. 99, 187002 (2007).
* Sullivan and Seidel [1968] P. F. Sullivan and G. Seidel, Steady-State, ac-Temperature Calorimetry, Phys. Rev. 173, 679 (1968).
* Baloga and Garland [1977] J. D. Baloga and C. W. Garland, ac calorimetry at high pressure, Rev. Sci. Instrum. 48, 105 (1977).
* Yip _et al._ [1991] S. Yip, T. Li, and P. Kumar, Thermodynamic considerations and the phase diagram of superconducting UPt3, Phys. Rev. B 43, 2742 (1991).
* Jiao _et al._ [2017] L. Jiao, Z. F. Weng, M. Smidman, D. Graf, J. Singleton, E. D. Bauer, J. D. Thompson, and H. Q. Yuan, Magnetic field-induced Fermi surface reconstruction and quantum criticality in CeRhIn5, Philos. Mag. 97, 3446 (2017).
* Mishra _et al._ [2021] S. Mishra, J. Hornung, M. Raba, J. Klotz, T. Förster, H. Harima, D. Aoki, J. Wosnitza, A. McCollam, and I. Sheikin, Robust Fermi-Surface Morphology of CeRhIn5 across the Putative Field-Induced Quantum Critical Point, Phys. Rev. Lett. 126, 016403 (2021).
* Duc _et al._ [2018] F. Duc, X. Tonon, J. Billette, B. Rollet, W. Knafo, F. Bourdarot, J. Béard, F. Mantegazza, B. Longuet, J. E. Lorenzo, E. Lelièvre-Berna, P. Frings, and L.-P. Regnault, 40-Tesla pulsed-field cryomagnet for single crystal neutron diffraction, Rev. Sci. Instrum. 89, 053905 (2018).
## Supplemental Material for “Specific heat of CeRhIn5 in high magnetic
fields: Magnetic phase diagram revisited”
### IV.1 Specific heat method
Figure 6: Setup used for specific heat measurement. (a) Cernox bare chip with
four contacts used in the thermal relaxation technique. (b) Cernox bare chip
cut into two parts separating the heater and the thermometer for the AC
calorimetric technique. (c) Single crystal of CeRhIn5 mounted on the back of
the Cernox bare chip connected to the thermal bath.
Low-temperature specific heat measurements were performed in magnetic fields
up to 36 T using both thermal relaxation and AC calorimetric techniques. Low-
field measurements using thermal relaxation technique were performed in a 4He
cryostat equipped with a 12 T superconducting magnet and a VTI providing the
lowest thermal bath temperature of 1.5 K. The AC calorimetric measurements and
the high field measurements using thermal relaxation technique were performed
in a 36 T resistive magnet equipped with a 4He cryostat reaching the lowest
thermal bath temperature of 1.3 K by pumping on the Helium bath. A Cernox
thermometer, calibrated from 1.3 K to 40 K in magnetic fields up to 36 T, was
used as a reference thermometer for the thermal bath.
### IV.2 Thermal relaxation technique
In the thermal relaxation method, a single Cernox bare chip, such as shown in
Fig. 6(a), was used as both the sample heater and the thermometer. The chip is
connected to the thermal bath via a weak thermal link provided by four thin
phosphor bronze wires, as shown in Fig. 6(c). The wires also provide a
mechanical support for the chip, and are used as the current leads. The sample
is mounted on the back of the chip using a small amount of Apiezon grease, as
shown in Fig. 6(c). Further details of this technique are given elsewhere
[22]. Here we use long relaxations with a temperature increase of $100\%$
above the reference thermal bath temperature yielding a larger number of data
points per relaxation. The resistance of the Cernox chip is calibrated in-situ
against the thermal bath temperature using the reference thermometer.
Similarly, the thermal conductance of the wires is calibrated in-situ against
the thermal bath. This technique provides an accuracy of 1$\%$, and its sensitivity of $10^{-3}$ makes it ideal for detecting even very small changes in specific heat.
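As a sketch of the underlying principle (an idealized single-time-constant picture with assumed numbers, not the actual analysis of the long relaxations used here), the heat capacity follows from the decay of the temperature excess through $C=\kappa\tau$:

```python
import numpy as np

# Idealized relaxation: after the heater is switched off, the temperature
# excess decays as dT(t) = dT0 * exp(-t / tau) with tau = C / kappa.
kappa = 2.0e-7        # thermal-link conductance (W/K), illustrative value
C_true = 1.0e-6       # heat capacity (J/K), illustrative value
tau = C_true / kappa  # 5 s

t = np.linspace(0.0, 20.0, 400)   # time (s)
dT = 1.5 * np.exp(-t / tau)       # K, a 100% rise above a 1.5 K bath

# Recover tau from the slope of ln(dT) vs t, then C = kappa * tau.
slope = np.polyfit(t, np.log(dT), 1)[0]
C_fit = kappa * (-1.0 / slope)
print(f"recovered C = {C_fit:.2e} J/K")
```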
The Cernox chip, the Apiezon grease, and the wires contribute, as an addenda,
to the total heat capacity of the system. This addenda must be subtracted to
obtain the absolute specific heat of the sample. To this end, the addenda,
i.e., the heat capacity of the system without sample, was first measured as a
function of temperature in zero field. The addenda was later subtracted from
all the curves obtained with a mounted sample.
### IV.3 AC calorimetric technique
Unlike the thermal relaxation technique, in AC calorimetry the heater and the
thermometer are separated, as shown in Fig. 6(b). The heater part of the
Cernox chip of resistance $R_{H}$ is excited by a small AC current
$I_{H}\cos(\omega t)$ of a few $\mu$A at a frequency $f=$ 4 Hz ($\omega=2\pi
f$). The sample temperature is measured using the thermometer part of the chip
excited by a small DC current of a few $\mu$A. The AC power generated by the
heater has two effects. First, it increases the sample temperature above the
thermal bath temperature $T_{bath}$. Second, it induces an oscillatory thermal
response $T_{AC}$ with the sample temperature $T$ oscillating at twice the
excitation frequency [23, 24]. The resulting sample temperature is thus given
by:
$T=T_{bath}+T_{DC}+T_{AC}$ (1)
Here $T_{DC}$ is an inevitable effect arising from self-heating of the heater
and the thermometer due to supplied excitations. $T_{DC}$ is the excess in
sample temperature $T$ with respect to the thermal bath temperature. It
depends on the average power generated by the heater,
$P_{H}=I_{H}^{2}R_{H}/2$, and the thermometer, $P_{Th}$, as well as the
thermal conductance $\kappa$ between the system and the thermal bath, i.e.,
$T_{DC}=(P_{Th}+P_{H})/\kappa$.
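These relations are simple to evaluate; the sketch below uses assumed values for the currents, resistances, and thermal conductance, chosen only for illustration:

```python
# DC self-heating offset from the relations in the text:
# P_H = I_H^2 * R_H / 2 (average AC heater power), T_DC = (P_Th + P_H) / kappa.
I_H = 5.0e-6       # heater AC current amplitude (A), illustrative
R_H = 1.0e3        # heater resistance (ohm), illustrative
P_Th = 5.0e-9      # thermometer self-heating power (W), illustrative
kappa = 2.0e-7     # thermal conductance to the bath (W/K), illustrative

P_H = I_H ** 2 * R_H / 2.0    # average heater power (W)
T_DC = (P_Th + P_H) / kappa   # DC temperature offset above the bath (K)
print(f"P_H = {P_H:.3e} W, T_DC = {T_DC:.4f} K")
```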
The heat capacity of the system, $C$, is obtained from the oscillatory thermal
response expressed in complex form as
$T_{AC}=\frac{P_{H}}{\kappa+2i\omega C}$ (2)
Both the real and the imaginary parts of the complex Eq. 2 depend on $P_{H}$, $\kappa$, $C$, and $\omega$. However, the phase, $\theta$, between the two terms
provides a more direct measure of the heat capacity of the system, independent
of $P_{H}$:
$\tan\theta=\frac{2C\omega}{\kappa}$ (3)
In the first-order approximation, $\kappa$ is proportional to the temperature,
and is independent of the magnetic field.
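Equations (2) and (3) can be verified numerically with complex arithmetic; the parameter values below are illustrative assumptions:

```python
import numpy as np

# Check that the phase of T_AC = P_H / (kappa + 2i*omega*C) obeys
# tan(theta) = 2*C*omega / kappa, independent of P_H (Eqs. 2 and 3).
P_H = 1.25e-8      # W, illustrative
kappa = 2.0e-7     # W/K, illustrative
C = 1.0e-6         # J/K, illustrative
omega = 2.0 * np.pi * 4.0   # rad/s for f = 4 Hz, as in the text

T_AC = P_H / (kappa + 2j * omega * C)
theta = -np.angle(T_AC)     # phase lag of the thermal response
print(f"tan(theta) = {np.tan(theta):.2f}")
```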
To precisely determine the sample temperature, the resistance of the Cernox
heater is calibrated in magnetic field. To take into account the self-heating
effects, i.e., $T_{DC}$, the heater resistance at zero excitation $R_{H0}$ is
obtained by a linear extrapolation of heater resistances $R_{H}$ at a few
different small excitations. At zero self-heating, the heater temperature is the same as the bath temperature. Once $R_{H0}$ is known, the heater can be
precisely calibrated in magnetic field using the field calibrated reference
thermometer of the thermal bath. We determined $R_{H0}$ values at four
different magnetic fields (20 T, 25 T, 30 T and 35 T) at several different
temperatures in the range of interest of this study, i.e., from 1.3 K to 4 K.
In this way, calibration curves, i.e., $R_{H0}$ vs $T$, are obtained at the above-mentioned magnetic fields. Finally, using these calibration curves with
appropriate polynomial fits, the sample temperature in magnetic field is
precisely determined. Due to the magnetoresistance of the heater and the
thermometer, there is a very small variation, less than 1%, of the sample
temperature in magnetic fields. The sample temperature, $T$, indicated in
figures 1(d) and 3(b) of the main text is the value averaged over the relevant
field interval.
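The zero-excitation extrapolation amounts to a linear fit of $R_{H}$ against heater power; the resistances below are assumed values following the qualitative Cernox behavior (resistance decreasing with temperature):

```python
import numpy as np

# R_H measured at a few small excitation powers; extrapolating linearly to
# zero power gives R_H0, the resistance at the bath temperature with no
# self-heating. Cernox resistance decreases as self-heating raises T.
P = np.array([1.0e-9, 2.0e-9, 4.0e-9, 8.0e-9])   # heater powers (W), illustrative
R_H = 1.2e4 - 2.0e11 * P                          # ohm, assumed linear response

slope, R_H0 = np.polyfit(P, R_H, 1)
print(f"R_H0 = {R_H0:.1f} ohm")
```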
### IV.4 Comparison of thermal-relaxation and AC calorimetric techniques
The specific heat measured using the AC calorimetric technique is consistent
with that obtained by the relaxation technique, as shown in Fig. 7, where we plot
$\tan\theta$. Given that $\kappa\propto T$, $\tan\theta\propto C/T$. For the
AC specific heat data, the sample temperature is determined by removing the
self-heating effect, i.e., $T_{DC}$, using the heater calibration described in
the previous section.
In the thermal relaxation technique, $C$ is the molar specific heat of CeRhIn5
obtained by subtracting the addenda contribution from the total heat capacity
of the system. On the other hand, in the AC calorimetric technique, $C$ is the
total heat capacity of the system including the addenda. However, the addenda
contribution to the heat capacity of the system is negligibly small over the
temperature range of Fig. 7, as was independently verified using the thermal-relaxation technique. That is why there is an excellent agreement between the
curves obtained by thermal relaxation and AC calorimetric techniques, as shown
in Fig. 7.
Figure 7: Specific heat of CeRhIn5 at zero field obtained using thermal
relaxation (closed circles) and AC calorimetric (solid line) techniques. For
the latter, $\tan\theta$, proportional to $C/T$, is plotted.
# $SU(N)\to SU(2)$ symmetry breaking in quantum antiferromagnets
A.K. Kolezhuk, T.L. Zavertanyi
(Received June 22, 2020, in final form August 14, 2020)
###### Abstract
We study an $SU(2)$-symmetric spin-$3/2$ system on a bipartite lattice
close to the antiferromagnetic $SU(4)$-symmetric point, which can be described
by the $CP^{3}$ model with a perturbation breaking the symmetry from $SU(4)$
down to $SU(2)$ and favoring the Néel ordering. We show that the effective
theory of the perturbed model is not the usual $O(3)$ nonlinear sigma model
(NLSM), but rather the $O(3)\times O(2)$ NLSM. We show that in the presence of
perturbation, the topological charge $q$ of the $CP^{3}$ field is connected to
the $O(3)$-NLSM type topological charge of the spin texture $Q$ (defined in a
usual way via the unit Néel vector) by the relation $q=3Q$, thus under the
influence of the perturbation unit-charge skyrmions of $CP^{3}$ model bind
into triplets. We also show that in the general spin-$S$ case, symmetry
breaking from $SU(2S+1)$ to $SU(2)$ results in the general relation
$2SQ_{O(3)}=q_{CP^{2S}}$ between $CP^{2S}$ and $O(3)$ charges, so one can
expect $2S$-multiplet binding of skyrmions.
Key words: frustrated magnets, skyrmions, cold gases in optical lattices
## 1 Introduction and model
Numerous studies in the past two decades have firmly positioned ultracold
gases as an extremely versatile “toolbox” capable of simulating a wide range
of problems originating in condensed matter physics and field theory.
Particularly, multicomponent ultracold gases in optical lattices[1, 2, 3]
allow one to model spin systems with strong non-Heisenberg exchange
interactions[4, 5, 6, 7, 8, 9], normally inaccessible in solid state magnets.
The presence of controllable strong higher-order (biquadratic, bicubic, etc.)
exchange interactions allows one to explore models with enhanced $SU(N)$
symmetry with $N>2$, which have been a subject of extensive theoretical
studies [10, 11, 12, 13, 14, 15, 16]. The realization of $SU(N)$ antiferromagnets
with $N$ up to 10 was proposed [17, 18] and achieved in experiments [19]. Spin
systems with strong higher-order exchange interactions may exhibit phases with
unconventional (multipole) order and can be considered as a special type of
frustrated magnets.
It has been shown [20, 21] that in spin-1 systems close to the
antiferromagnetic $SU(3)$ point, a perturbation that breaks this symmetry down
to $SU(2)$ can lead to an interesting effect: unit-charge topological
excitations (skyrmions, hedgehogs) of the effective $CP^{2}$ theory describing
the $SU(3)$-symmetric model bind into doublets that correspond to unit-charge
topological excitations of the effective $O(3)$ nonlinear sigma model (NLSM)
theory describing the $SU(2)$-symmetric model.
In the present work, we study spin-${3}/{2}$ systems close to the
antiferromagnetic $SU(4)$ point, and show that a similar effect of binding
topological excitations into triplets exists when the symmetry gets broken
down to $SU(2)$. We further show that this result can be generalized: for a
system with underlying spin $S$ close to the antiferromagnetic $SU(2S+1)$
point, a perturbation that brings the symmetry down to $SU(2)$ can lead to the
formation of $2S$-multiplets of topological excitations.
We start with a system of spin-${3}/{2}$ fermions on a bipartite optical
lattice in $d$ spatial dimensions ($d=1,2$) that can be described by the
following Hamiltonian in the $s$-wave scattering approximation [6]:
$\widehat{H}=-t\sum_{\sigma=\pm 1/2,\pm 3/2}\sum_{\langle
ij\rangle}\left(c^{{\dagger}}_{\sigma,i}c^{\vphantom{{\dagger}}}_{\sigma,j}+\text{h.c.}\right)+\sum_{i}\sum_{F=0,2}U_{F}\sum_{m=-F}^{F}P^{{\dagger}}_{Fm,i}P^{\vphantom{{\dagger}}}_{Fm,i}\,,$
(1.1)
where $c_{\sigma,i}$ are the spin-${3}/{2}$ fermionic operators at the lattice
site $i$, $t$ is the effective hopping amplitude between two neighboring sites
(which is for simplicity assumed to be the same for all spatial directions),
$P_{Fm,i}=\sum_{\sigma\sigma^{\prime}}\left\langle
Fm\left|\frac{3}{2}\sigma\right.,\frac{3}{2}\sigma^{\prime}\right\rangle
c_{\sigma,i}c_{\sigma^{\prime},i}$
are the operators describing an on-site pair with the total spin $F$, and
interaction strengths $U_{0}$, $U_{2}$ are proportional to the scattering
lengths in the $F=0$ and $F=2$ channels, respectively.
At quarter filling (one particle per site), and in the limit of strong on-site
repulsion $U_{0},U_{2}\gg t$, the charge degrees of freedom are strongly
gapped, and at low excitation energies the system can be described by the
effective Hamiltonian involving only spin degrees of freedom. The
antiferromagnetic $SU(4)$-symmetric point corresponds[22] to the limit
$U_{2}\to\infty$ and its Hamiltonian can be written in terms of the on-site
spin-${3}/{2}$ operators $\widehat{\bm{S}}_{i}$ as follows [23]:
$\widehat{\mathcal{H}}_{SU(4)}=J\sum_{\langle
ij\rangle}\Big{\\{}-\frac{31}{24}(\widehat{\bm{S}}_{i}\widehat{\bm{S}}_{j})+\frac{10}{36}(\widehat{\bm{S}}_{i}\widehat{\bm{S}}_{j})^{2}+\frac{2}{9}(\widehat{\bm{S}}_{i}\widehat{\bm{S}}_{j})^{3}\Big{\\}},$
(1.2)
with $J=t^{2}/U_{0}$. For general values of $U_{0}$, $U_{2}$, the effective
spin Hamiltonian still exhibits an enhanced $Sp(4)$ symmetry [6], which is
special for spin ${3}/{2}$. The effects of symmetry reduction from $SU(4)$ to
$Sp(4)$ were studied in [22], the corresponding perturbation has been shown to
be dangerously irrelevant and is not of interest to us in the present work.
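As a quick sanity check (ours, not part of the paper), one can diagonalize a single bond of (1.2) numerically: the 16 two-site eigenvalues collapse to just two levels, the lower one on the two-site spin singlet, so up to a constant the bond Hamiltonian is proportional to the projector onto the singlet, consistent with second-order perturbation theory acting only in the $F=0$ channel.

```python
import numpy as np

def spin_matrices(S):
    """Spin matrices (Sx, Sy, Sz) in the basis m = S, S-1, ..., -S."""
    m = np.arange(S, -S - 1, -1)
    Sz = np.diag(m).astype(complex)
    # <m+1|S+|m> = sqrt(S(S+1) - m(m+1)) on the superdiagonal
    Splus = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1).astype(complex)
    return (Splus + Splus.conj().T) / 2, (Splus - Splus.conj().T) / 2j, Sz

Sx, Sy, Sz = spin_matrices(1.5)
# S_i . S_j on the 16-dimensional two-site Hilbert space
SS = sum(np.kron(A, A) for A in (Sx, Sy, Sz))
# one bond of the SU(4)-symmetric Hamiltonian (1.2), with J = 1
H_bond = -31/24 * SS + 10/36 * SS @ SS + 2/9 * SS @ SS @ SS
levels = np.linalg.eigvalsh(H_bond)
```

The spectrum consists of a single level $-95J/32$ on the spin-singlet state and a 15-fold level $33J/32$, i.e., one bond equals $33J/32-4J\,\hat{P}_{J=0}$.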
Thus, we consider perturbing the $SU(4)$-invariant model (1.2)
$\widehat{\mathcal{H}}_{SU(4)}\mapsto\widehat{\mathcal{H}}_{SU(4)}+\lambda\sum_{\langle
ij\rangle}(\widehat{\bm{S}}_{i}\widehat{\bm{S}}_{j}),\qquad\lambda>0$ (1.3)
by the term that breaks the symmetry down to $SU(2)$ and favors the
antiferromagnetic spin ordering. According to the mean-field study [23], the
other sign of $\lambda$ would favor an exotic phase characterized by the
presence of octupolar and quadrupolar orders and will not be considered here.
Such a perturbation is not possible if only the $s$-wave scattering is taken
into account, but will naturally arise due to the contribution from the
$p$-wave scattering. Normally, the $p$-wave scattering is neglected because
the corresponding contributions to the interaction are about a few percent
compared to the $s$-wave ones [24], but in the present case this is sufficient
to break the enhanced symmetry. Besides that, the effective strength of the
$p$-wave scattering can be controlled by quasi-low-dimensional confinement
[25].
## 2 $SU(2)$-perturbed $CP^{3}$ model: effective theory
The effective low-energy continuum theory for the $SU(4)$ antiferromagnet
(1.2) is the well-known $CP^{3}$ model described by the following euclidean
action [12, 13]
$\mathcal{A}_{CP^{3}}=\frac{\Lambda^{d-1}}{2g_{0}}\int\mathrm{d}^{d+1}x|\mathcal{D}_{\mu}\bm{z}|^{2}+\mathcal{A}_{\rm
top}\,.$ (2.1)
Here, the Planck constant and the lattice spacing are set to unity, $\Lambda$
is the ultraviolet momentum cutoff, the 4-component complex vector field
$\bm{z}$ is subjected to the unit length constraint
$\bm{z}^{\dagger}\bm{z}=1$,
$\mathcal{D}_{\mu}=\partial_{\mu}-\mathrm{i}A_{\mu}$ is the gauge covariant
derivative, and $A_{\mu}=-\mathrm{i}(\bm{z}^{\dagger}\partial_{\mu}\bm{z})$ is
the gauge field, $x^{0}=c\tau$, $\tau=\mathrm{i}t$ is the imaginary time.
Assuming for definiteness that the lattice is hypercubic, one obtains the
limiting velocity $c=2J\sqrt{d}$, and the bare coupling constant
$g_{0}=\sqrt{d}$. The topological term in the action
$\mathcal{A}_{\rm
top}[\bm{z}]=\int\mathrm{d}\tau\sum_{j}\eta_{j}\bm{z}^{\dagger}_{j}\partial_{\tau}\bm{z}_{j}\,,$
(2.2)
where the phase factors $\eta_{j}=\pm 1$ take opposite signs at lattice sites
belonging to A and B sublattices, can be cast in the continuum form only for
$d=1$ [26, 12, 13]. Without the topological term, the action (2.1) can be
viewed as the energy of the static $(d+1)$ dimensional “classical” spin
texture.
The $CP^{N-1}$ model [27, 28, 29, 30, 31] has been extensively studied as an
effective theory for $SU(N)$ antiferromagnets [12, 13]. In $d=1$, there is no
long-range spin order and for $N>2$ excitations are gapped at any value of the
coupling $g_{0}$ even in the presence of the topological term (2.2), while in
$d=2$ the disordered phase appears if the coupling $g_{0}$ exceeds some
$N$-dependent critical value $g_{c}$ [29, 30]. Numerical work [15, 16, 32]
suggests that for $d=2$ the value $g_{c}N/g_{0}$ lies somewhere between 4 and
5.
The topological term becomes important in the disordered phase: it drives a
spontaneous breaking of the translational invariance of the underlying
lattice, leading to the twofold degenerate (dimerized) ground state in $d=1$,
while in $d=2$ the ground state degeneracy and the pattern of the resulting
“valence bond solid” are lattice-dependent [26, 12, 13].
The leading contribution to the continuum action from the perturbation (1.3)
is given by the gradient-free term, so the perturbed action takes the form
$\mathcal{A}_{AF}=\frac{\Lambda^{d-1}}{2g_{0}}\int\mathrm{d}^{d+1}x\Big{\\{}|\partial_{\mu}\bm{z}|^{2}-|\bm{z}^{\dagger}\partial_{\mu}\bm{z}|^{2}-m_{0}^{2}\langle\bm{S}\rangle^{2}\Big{\\}}+\mathcal{A}_{\rm
top}\,,$ (2.3)
where $m_{0}^{2}=2g_{0}\lambda/(c\Lambda^{d-1})>0$ is proportional to the
perturbation strength,
$\langle S^{a}\rangle=\bm{z}^{\dagger}\textrm{S}^{a}\bm{z}$ is the spin
average, and $\textrm{S}^{a}$ are spin-${3}/{2}$ matrices, $a=1,2,3$. Here, we
assume that the four components $z_{m}$ of the complex vector field $\bm{z}$ are
directly related to the amplitudes of the four spin-${3}/{2}$ basis states
$|\frac{3}{2}m\rangle$, $m=-\frac{3}{2}\ldots\frac{3}{2}$.
To analyze the behavior of the perturbed theory, it is convenient to separate
the modes becoming massive under the perturbation. We parameterize the
4-component field $\bm{z}$ in the following way:
$z_{m}=D^{(3/2)}_{mm^{\prime}}(\alpha,\theta,\varphi)\psi_{m^{\prime}}(\beta,\vartheta,\phi).$
(2.4)
Here, $\bm{\psi}$ is the spin-${3}/{2}$ state [23] taken in the principal axes
of the spin-quadrupolar tensor
$\psi_{3/2}=\cos(\beta)\cos\frac{\vartheta}{2}\,,\quad\psi_{1/2}=\sin\beta\sin\frac{\vartheta}{2}\mathrm{e}^{\mathrm{i}\phi},\quad\psi_{-1/2}=\sin\beta\cos\frac{\vartheta}{2}\,,\quad\psi_{-3/2}=\cos\beta\sin\frac{\vartheta}{2}\mathrm{e}^{\mathrm{i}\phi}\,.$
(2.5)
Such a choice ensures that the tensor
$\bm{\psi}^{\dagger}(\textrm{S}^{a}\textrm{S}^{b}+\textrm{S}^{b}\textrm{S}^{a})\bm{\psi}$
is diagonal. The spin average in the state (2.5) is given by
$\langle S^{z}\rangle=\frac{1}{2}\cos\vartheta(4\cos^{2}\beta-1),\quad\langle
S^{+}\rangle=\sin\vartheta\sin\beta(\sqrt{3}\cos\beta\mathrm{e}^{\mathrm{i}\phi}+\sin\beta\mathrm{e}^{-\mathrm{i}\phi}).$
(2.6)
The Wigner matrix $D^{(j)}$ is the standard $(2j+1)$-dimensional
representation of a rotation [33] and depends on three Euler angles
$(\alpha,\theta,\varphi)$:
$\displaystyle d_{m^{\prime}m}^{(j)}(\theta)$ $\displaystyle=$
$\displaystyle\left[\frac{(j+m^{\prime})!(j-m^{\prime})!}{(j+m)!(j-m)!}\right]^{1/2}\left(\cos{\frac{\theta}{2}}\right)^{m^{\prime}+m}\left(\sin{\frac{\theta}{2}}\right)^{m^{\prime}-m}P^{(m^{\prime}-m,m^{\prime}+m)}_{j-m^{\prime}}(\cos{\theta}),$
$\displaystyle D_{m^{\prime}m}^{(j)}(\alpha,\theta,\varphi)$ $\displaystyle=$
$\displaystyle\mathrm{e}^{\mathrm{i}m^{\prime}\varphi}d_{m^{\prime}m}^{(j)}(\theta)\mathrm{e}^{\mathrm{i}m\alpha},$
(2.7)
$P^{(a,b)}_{n}(\cos{\theta})$ being the Jacobi polynomials. Rotating the state
(2.5), we obtain the general normalized spin-${3}/{2}$ state characterized by
six real parameters (we omit the overall phase that depends on the gauge).
The antiferromagnetic perturbation (1.3) favors field configurations with small
$\vartheta$, $\beta$, so it is convenient to introduce the three-component real
field $\bm{h}=(h_{x},h_{y},h_{z})$:
$h_{x}+\mathrm{i}h_{y}=\sin\frac{\vartheta}{2}\mathrm{e}^{\mathrm{i}\phi},\qquad
h_{z}=\sin\beta.$ (2.8)
The perturbation in (2.3) amounts to making $\bm{h}$ massive, as
$\langle\bm{S}\rangle^{2}=\frac{9}{4}-9(h_{x}^{2}+h_{y}^{2})-6h_{z}^{2}+O(h^{4})$.
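One can check numerically (our sketch, not from the paper) that the corrections to this quadratic form are indeed quartic in the small fields: the exact $\langle\bm{S}\rangle^{2}=\langle S^{z}\rangle^{2}+|\langle S^{+}\rangle|^{2}$ built from (2.6), minus the quadratic form in $\bm{h}$ from (2.8), shrinks like the fourth power of the angles.

```python
import numpy as np

def S_avg_sq(vt, beta, phi):
    """Exact <S>^2 = <S^z>^2 + |<S^+>|^2 from the averages (2.6)."""
    sz = 0.5 * np.cos(vt) * (4 * np.cos(beta)**2 - 1)
    sp = np.sin(vt) * np.sin(beta) * (np.sqrt(3) * np.cos(beta) * np.exp(1j * phi)
                                      + np.sin(beta) * np.exp(-1j * phi))
    return sz**2 + abs(sp)**2

def quadratic_form(vt, beta):
    """9/4 - 9(h_x^2 + h_y^2) - 6 h_z^2, with h defined as in (2.8)."""
    return 9/4 - 9 * np.sin(vt / 2)**2 - 6 * np.sin(beta)**2

# the remainder scales as the 4th power of the (small) angles
errs = [abs(S_avg_sq(eps * 0.7, eps * 0.4, 1.3)
            - quadratic_form(eps * 0.7, eps * 0.4))
        for eps in (0.2, 0.1, 0.05)]
```

Halving the angles divides the remainder by roughly $2^{4}=16$, confirming that the omitted terms are $O(h^{4})$.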
Substituting the ansatz (2.4), (2.8) into the action (2.3) and retaining up to
quadratic terms in powers of $h_{x,y,z}$, one obtains the action in the
following form:
$\mathcal{A}_{\textrm{AF}}=\mathcal{A}_{\textrm{NLSM}}[\theta,\varphi]+\mathcal{A}_{\mathrm{m}}[\bm{h}]+\mathcal{A}_{\textrm{int}}[\bm{h},\theta,\varphi,\alpha]+\mathcal{A}_{\textrm{top}}\,,$
(2.9)
where $\mathcal{A}_{\textrm{NLSM}}$ is the action of the $O(3)$ nonlinear
sigma-model,
$\mathcal{A}_{\textrm{NLSM}}=\frac{3\Lambda^{d-1}}{8g_{0}}\int\mathrm{d}^{d+1}x\Big{\\{}(\partial_{\mu}\theta)^{2}+\sin^{2}\theta(\partial_{\mu}\varphi)^{2}\Big{\\}},$
(2.10)
$\mathcal{A}_{\mathrm{m}}$ is the quadratic action of the massive field,
$\mathcal{A}_{\mathrm{m}}=\frac{\Lambda^{d-1}}{2g_{0}}\int\mathrm{d}^{d+1}x\Big{\\{}(\partial_{\mu}\bm{h})^{2}+9m_{0}^{2}(h_{x}^{2}+h_{y}^{2})+6m_{0}^{2}h_{z}^{2}\Big{\\}},$
(2.11)
and $\mathcal{A}_{\textrm{int}}$ describes the interaction,
$\mathcal{A}_{\textrm{int}}=\frac{\Lambda^{d-1}}{2g_{0}}\int\mathrm{d}^{d+1}x\Big{\\{}h_{z}^{2}\big{[}(\partial_{\mu}\theta)^{2}+\sin^{2}\theta(\partial_{\mu}\varphi)^{2}\big{]}+(4h_{z}^{2}+9h_{x}^{2}+9h_{y}^{2})(\partial_{\mu}\alpha+\cos\theta\partial_{\mu}\varphi)^{2}+\ldots\Big{\\}}.$
(2.12)
We integrate out the “fast” field $\bm{h}$ and obtain the effective action
that depends only on the “slow” fields $\theta$, $\varphi$, $\alpha$:
$\mathcal{A}_{\textrm{eff}}=\frac{\Lambda^{d-1}}{2\Gamma}\int\mathrm{d}^{d+1}x\Big{\\{}(\partial_{\mu}\theta)^{2}+\sin^{2}\theta(\partial_{\mu}\varphi)^{2}\Big{\\}}+\frac{\Lambda^{d-1}}{2G}\int\mathrm{d}^{d+1}x(\partial_{\mu}\alpha+\cos\theta\partial_{\mu}\varphi)^{2}+\mathcal{A}_{\textrm{top}}\,,$
(2.13)
where the renormalized couplings $\Gamma$, $G$ are determined by the equations
$\displaystyle\frac{\Lambda^{d-1}}{\Gamma}=\frac{\Lambda^{d-1}}{\Gamma_{0}}+\frac{1}{(2\piup)^{d+1}}\int_{k<\Lambda}\frac{\mathrm{d}^{d+1}k}{k^{2}+6m_{0}^{2}}\,,\qquad\Gamma_{0}=\frac{4g_{0}}{3}\,,$
$\displaystyle\frac{\Lambda^{d-1}}{G}=\frac{1}{(2\piup)^{d+1}}\int_{k<\Lambda}\mathrm{d}^{d+1}k\left\\{\frac{4}{k^{2}+6m_{0}^{2}}+\frac{18}{k^{2}+9m_{0}^{2}}\right\\}.$
(2.14)
Beside terms of higher than quadratic order in $\bm{h}$, in the interaction
(2.12) we have omitted several types of terms that will not contribute to the
renormalized action at the one-loop level. Namely, terms proportional to
$h_{z}h_{x,y}(\partial_{\mu}\Phi)^{2}$, $h_{z}(\partial_{\mu}\Phi)^{2}$, where
$\Phi$ denotes any of the slow fields, would generate terms of the fourth and
higher order in gradients of $\Phi$. The other omitted term, of the structure
$(h_{x}\partial_{\mu}h_{y}-h_{y}\partial_{\mu}h_{x})(\partial_{\mu}\Phi)$,
yields contributions that vanish after integration over the wave vector.
For small AF perturbations, $m_{0}/\Lambda\ll 1$, from (2.14) one has
$\displaystyle\Gamma$ $\displaystyle\simeq$
$\displaystyle\frac{2\piup}{\ln\frac{\Lambda}{m_{0}}}\,,\qquad
G\simeq\frac{\piup}{11\ln\frac{\Lambda}{m_{0}}}\quad\text{for $d=1$},$
$\displaystyle\Gamma$ $\displaystyle\simeq$
$\displaystyle\frac{\Gamma_{0}}{1+\Gamma_{0}/2\piup^{2}}+O(m_{0}/\Lambda),\qquad
G\simeq\frac{\piup^{2}}{11}+O(m_{0}/\Lambda)\quad\text{for $d=2$}.$ (2.15)
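For $d=1$ the momentum integrals in (2.14) are elementary, and the leading-log asymptotics quoted above can be checked numerically (a sketch under the assumptions $\Lambda=1$, $g_{0}=1$; the function name is ours).

```python
import numpy as np

def couplings_d1(m0, Lam=1.0, g0=1.0):
    """Renormalized couplings Gamma, G of (2.14) for d=1, where the 2D
    momentum integrals have the closed form implemented below."""
    def I(a2):  # (2*pi)^{-2} * Int_{k < Lam} d^2k / (k^2 + a2)
        return np.log(1 + Lam**2 / a2) / (4 * np.pi)
    inv_Gamma = 3 / (4 * g0) + I(6 * m0**2)   # 1/Gamma_0 = 3/(4 g_0)
    inv_G = 4 * I(6 * m0**2) + 18 * I(9 * m0**2)
    return 1 / inv_Gamma, 1 / inv_G
```

For $m_{0}/\Lambda\to 0$ this reproduces $\Gamma\simeq 2\piup/\ln(\Lambda/m_{0})$ and $G\simeq\piup/\big(11\ln(\Lambda/m_{0})\big)$ up to slowly decaying corrections.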
The action (2.13) describes the $O(3)\times O(2)$ NLSM that is encountered as
an effective theory of frustrated antiferromagnets [34, 35]. It can be recast
in the form
$\mathcal{A}_{\textrm{eff}}=\frac{\Lambda^{d-1}}{2\Gamma}\int\mathrm{d}^{d+1}x\text{Tr}\left(\partial_{\mu}R^{T}P\partial_{\mu}R\right),\qquad
P=\text{diag}(1,1,\zeta),\qquad\zeta=\Gamma/G,$ (2.16)
where the matrix field $R\in SO(3)$ is the rotation matrix, and filelds
$\theta$, $\varphi$, $\alpha$ are connected to $R$ via the standard relations
$\Omega_{\mu}=\begin{pmatrix}0&-\omega_{\mu 3}&\omega_{\mu 2}\\\ \omega_{\mu
3}&0&-\omega_{\mu 1}\\\ -\omega_{\mu 2}&\omega_{\mu
1}&0\end{pmatrix}=\partial_{\mu}RR^{\mathrm{T}},$ (2.17)
where $\omega_{\mu a}$ are the rotation “frequencies” in the rest frame
$\omega_{\mu
1}=-\sin\alpha\partial_{\mu}\theta+\cos\alpha\sin\theta\partial_{\mu}\varphi,\quad\omega_{\mu
2}=\cos\alpha\partial_{\mu}\theta+\sin\alpha\sin\theta\partial_{\mu}\varphi,\quad\omega_{\mu
3}=\partial_{\mu}\alpha+\cos\theta\partial_{\mu}\varphi.$ (2.18)
The field $R$ may be visualized as a rotating axisymmetric top. In the standard
$O(3)$ NLSM, $\zeta=0$ and this top is an “arrow” (a unit vector defined by
the polar and azimuthal angles $\theta$, $\varphi$) with one moment of inertia
equal to zero. One can see that fluctuations of the massive fields lead to a
dynamic generation of the third moment of inertia, so the effective theory of
the AF-perturbed model is not the standard $O(3)$ NLSM, as one might naively
guess, but rather the $O(3)\times O(2)$ NLSM. Properties of the latter model
are well known [35, 36, 37]. In one dimension, $\zeta$ flows to the $O(4)$
fixed point $\zeta=1$, while $\Gamma$ flows to infinity, indicating the
dynamic generation of a finite correlation length. For $d=2$, the
$O(3)\times O(2)$ NLSM has long-range AF order; the couplings $\Gamma$, $\zeta$
get renormalized but stay finite. We have checked that a similar dynamic
generation of the third moment of inertia occurs for spin-1 systems close to
the antiferromagnetic $SU(3)$ point, resulting in $O(3)\times O(2)$ NLSM as
the effective model, so one may expect this result to be valid for general
$S$.
## 3 Multiplet binding of topological excitations
Consider the fate of topologically nontrivial excitations of the
$SU(4)$-symmetric model (2.1) under the AF perturbation (2.3) breaking the
symmetry down to $SU(2)$. In $(1+1)$ dimensions, such excitations (“skyrmions”)
in the $CP^{N-1}$ model are characterized by a nonzero topological charge [29] that
$CP^{N-1}$ model are characterized by the nonzero topological charge [29] that
is essentially the winding number of the overall phase taken over a contour at
infinity,
$q_{CP^{N-1}}=-\frac{1}{2\piup}\oint\bm{A}\cdot\mathrm{d}\bm{l}=-\frac{1}{2\piup}\int\mathrm{d}^{2}x\epsilon_{\mu\nu}(\partial_{\mu}A_{\nu})=-\frac{\mathrm{i}}{2\piup}\int\mathrm{d}^{2}x\epsilon_{\mu\nu}(\partial_{\mu}\bm{z}^{\dagger}\partial_{\nu}\bm{z})$
(3.1)
and for $d=1$ this charge is directly related to the topological term in the
action (2.1), $\mathcal{A}_{\textrm{top}}=\mathrm{i}\piup q_{CP^{3}}$. The
topological charge density is proportional to the dimerization order
parameter; the ground state of the $SU(4)$-symmetric model (2.1) has finite
topological charge density and thus is spontaneously dimerized [13].
The effect of the $SU(4)\mapsto SU(2)$ perturbation on the topological charge
can be illustrated by the following simple observation: finite $\lambda$
favors field configurations with the maximum spin length, i.e., with
$\bm{h}=0$. Such field configurations are given by
$z_{m}=D^{(3/2)}_{mm^{\prime}}(\alpha,\theta,\varphi)\psi^{(0)}_{m^{\prime}}\,,\qquad\psi^{(0)}_{m}=\delta_{3/2,m}\,.$
(3.2)
Substituting the above ansatz into (3.1), one straightforwardly obtains
$\displaystyle q_{CP^{3}}$ $\displaystyle=$
$\displaystyle\frac{3}{4\piup}\int\mathrm{d}^{2}x\sin\theta\epsilon_{\mu\nu}(\partial_{\mu}\theta)(\partial_{\nu}\varphi)=3Q_{O(3)}\,,$
(3.3)
where the topological charge $Q_{O(3)}$ is the winding number of the
$S^{2}\mapsto S^{2}$ mapping characterizing the space-time distribution of the
unit vector $\bm{n}(\theta,\varphi)$. It should be remarked that although the
homotopy group $\pi_{2}(SO(3))=0$, one can still define the $O(3)$-NLSM
topological charge of the spin texture $Q_{O(3)}$ in the usual way via the
unit vector $\bm{n}(\theta,\varphi)$ that corresponds to the local direction
of the Néel vector. The AF perturbation thus favors $\bm{z}$-field
configurations with charge $q_{CP^{3}}$ being a multiple of $3$. One may
conclude that unit-charge skyrmions of the $CP^{3}$ model bind into _triplets_
under the influence of the AF perturbation. Such a triplet is the well-known
unit-charge skyrmion (Belavin-Polyakov soliton [38]) of the $O(3)$ NLSM.
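The relation $q=3Q$ can also be checked by brute force (our sketch, with names of our choosing): build the configuration (3.2) for a Belavin-Polyakov profile $\theta(r)=2\arctan(\lambda/r)$ with $\varphi$ the polar angle (so that $|Q|=1$), choose a gauge in which every component of $\bm{z}$ carries an integer phase winding and is single-valued, and integrate the charge density (3.1) on a grid.

```python
import numpy as np
from math import comb

def cp_charge(S=1.5, lam=2.0, L=40.0, N=800):
    """Topological charge (3.1) of the embedded O(3) skyrmion with |Q| = 1,
    evaluated by finite differences; the sign depends on orientation
    conventions, the magnitude should approach 2S."""
    x = np.linspace(-L, L, N)
    X, Y = np.meshgrid(x, x, indexing='ij')
    r = np.hypot(X, Y) + 1e-12
    th = 2 * np.arctan(lam / r)        # theta: pi at the origin -> 0 at infinity
    ph = np.arctan2(Y, X)
    n = int(2 * S) + 1
    # magnitudes of the highest-weight column of D^{(S)}, gauged by a global
    # factor exp(i S phi) so that each component winds an integer number of
    # times and vanishes wherever its phase is singular
    z = np.array([np.sqrt(comb(n - 1, k))
                  * np.cos(th / 2)**(n - 1 - k) * np.sin(th / 2)**k
                  * np.exp(1j * (n - 1 - k) * ph) for k in range(n)])
    dx = x[1] - x[0]
    zx = np.gradient(z, dx, axis=1)
    zy = np.gradient(z, dx, axis=2)
    # q = (1/pi) * Int Im( d_x z^+ . d_y z ), equivalent to (3.1)
    dens = np.sum(zx.conj() * zy, axis=0).imag
    return np.sum(dens) * dx * dx / np.pi
```

With $S={3}/{2}$ the integral converges to $\pm 3$ (the small deviation is the charge residing outside the finite grid), and with $S=1$ it gives $\pm 2$, in line with the doublet binding of [20, 21].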
This is completely analogous to the formerly noted effect[20, 21] of
topological binding of skyrmions into _pairs_ in the $SU(3)$ spin-1
antiferromagnet under the AF perturbation lowering the symmetry from $SU(3)$
to $SU(2)$.
This statement is easily generalized: consider the $SU(2S+1)$ antiferromagnet
with the underlying spin $S$. Assume that the enhanced symmetry is broken down
to $SU(2)$ by the perturbation that favors the field configuration with the
maximal spin length, as in (2.3):
$\bm{z}=D^{(S)}(\alpha,\theta,\varphi)\bm{\psi}^{(0)}=\mathrm{e}^{\mathrm{i}\varphi\hat{S}_{3}}\mathrm{e}^{\mathrm{i}\theta\hat{S}_{2}}\mathrm{e}^{\mathrm{i}\alpha\hat{S}_{3}}\bm{\psi}^{(0)},\qquad\psi^{(0)}_{m}=\delta_{S,m}\,.$
(3.4)
Then,
$\displaystyle\partial_{\mu}\bm{z}$ $\displaystyle=$
$\displaystyle\mathrm{i}(\partial_{\mu}\varphi)\hat{S}_{3}\bm{z}+\mathrm{i}(\partial_{\mu}\theta)\mathrm{e}^{\mathrm{i}\varphi\hat{S}_{3}}\hat{S}_{2}\mathrm{e}^{\mathrm{i}\theta\hat{S}_{2}}\mathrm{e}^{\mathrm{i}\alpha\hat{S}_{3}}\bm{\psi}^{(0)}+\mathrm{i}(\partial_{\mu}\alpha)S\bm{z}\,,$
$\displaystyle\epsilon_{\mu\nu}\partial_{\mu}\bm{z}^{\dagger}\partial_{\nu}\bm{z}$
$\displaystyle=$
$\displaystyle\varepsilon_{\mu\nu}(\partial_{\mu}\theta)(\partial_{\nu}\varphi)\left(\bm{\psi}^{(0)}\right)^{\dagger}\mathrm{e}^{-\mathrm{i}\theta\hat{S}_{2}}(\hat{S}_{2}\hat{S}_{3}-\hat{S}_{3}\hat{S}_{2})\mathrm{e}^{\mathrm{i}\theta\hat{S}_{2}}\bm{\psi}^{(0)}$
(3.5) $\displaystyle=$
$\displaystyle\mathrm{i}S\sin{\theta}\varepsilon_{\mu\nu}(\partial_{\mu}\theta)(\partial_{\nu}\varphi).$
Substituting this into (3.1), we see that the $CP^{2S}$ charge takes the form
$q_{CP^{2S}}=2SQ_{O(3)}\,.$ (3.6)
Thus, in $SU(2S+1)$ antiferromagnets with underlying spin $S$, AF perturbation
that breaks the enhanced symmetry down to $SU(2)$ leads to the binding of
unit-charge skyrmions of the $CP^{2S}$ model to $2S$-_multiplets_.
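The key algebraic step in (3.5), namely $\big(\bm{\psi}^{(0)}\big)^{\dagger}\mathrm{e}^{-\mathrm{i}\theta\hat{S}_{2}}(\hat{S}_{2}\hat{S}_{3}-\hat{S}_{3}\hat{S}_{2})\mathrm{e}^{\mathrm{i}\theta\hat{S}_{2}}\bm{\psi}^{(0)}=\mathrm{i}S\sin\theta$, can be confirmed numerically for any $S$ (our sketch; the overall sign depends on the rotation conventions, so only the magnitude is asserted).

```python
import numpy as np

def spin_matrices(S):
    """Spin matrices (Sx, Sy, Sz) in the basis m = S, S-1, ..., -S."""
    m = np.arange(S, -S - 1, -1)
    Sz = np.diag(m).astype(complex)
    Splus = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1).astype(complex)
    return (Splus + Splus.conj().T) / 2, (Splus - Splus.conj().T) / 2j, Sz

def rotated_commutator(S, theta):
    """psi0^+ e^{-i theta S2} (S2 S3 - S3 S2) e^{i theta S2} psi0,
    with psi0 = |S, m=S> the highest-weight state."""
    Sx, Sy, Sz = spin_matrices(S)
    w, V = np.linalg.eigh(Sy)                             # Sy is Hermitian
    U = V @ np.diag(np.exp(1j * theta * w)) @ V.conj().T  # e^{i theta S2}
    M = U.conj().T @ (Sy @ Sz - Sz @ Sy) @ U
    return M[0, 0]    # m = S is the first basis vector
```

The matrix element is purely imaginary with magnitude $S\sin\theta$ for $S=1,{3}/{2},2$, which is exactly the factor producing $q_{CP^{2S}}=2SQ_{O(3)}$.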
Strictly speaking, in $(1+1)$ dimensions the skyrmions considered above are not
excitations but instanton events. The same effect obviously holds for
“monopoles” in $(2+1)$ dimensions (instanton events changing the skyrmion
topological quantum number $q_{CP^{2S}}$).
For the $(2+1)$ dimensional case, skyrmions may be viewed as static solitons
in two spatial dimensions; similarly, in $d=3$ monopoles may be viewed as
static solitons (“hedgehogs”), and the same reasoning on multiplet binding
applies.
To show that explicitly, one may look at the “skyrmion current”
$\bm{j}=(2\piup)^{-1}\nabla\times\bm{A}$, whose flux through a closed surface
surrounding the monopole, $\widetilde{q}=\oint\bm{j}\cdot\mathrm{d}\bm{S}$,
determines the monopole charge $\widetilde{q}$. A calculation essentially
following (3.5) shows that
$j_{a}=-\frac{\mathrm{i}}{2\piup}\varepsilon_{abc}(\partial_{b}\bm{z}^{\dagger}\partial_{c}\bm{z})=\frac{S}{2\piup}\varepsilon_{abc}\sin\theta(\partial_{b}\theta)(\partial_{c}\varphi)=2SJ_{a}\,,$
(3.7)
where $\bm{J}$ is the corresponding skyrmion current of the $O(3)$ NLSM. Thus,
$\widetilde{q}=\pm 1$ monopoles bind into $2S$ multiplets under the influence
of the perturbation.
The topological term in the action in the $(2+1)$ dimensional case is
determined by monopole events and can not be expressed in the continuum limit
as it is lattice-dependent [26, 13]. On the square lattice, it is given by
$\mathcal{A}_{\textrm{top}}=\frac{\mathrm{i}}{2}\piup
n_{\mathrm{c}}\sum_{\bm{r}_{i}}\zeta(\bm{r}_{i})\widetilde{q}_{i}\,,$ (3.8)
where the sum is over the locations $\bm{r}_{i}$ of monopoles having the
charge $\widetilde{q}_{i}$, and factors $\zeta(\bm{r}_{i})$ take on values
$0$, $1$, $2$, $3$ for $\bm{r}_{i}$ belonging to the four dual sublattices
$W$, $X$, $Y$, $Z$ respectively (see figure 7 of [13]), and $n_{\mathrm{c}}$
is the “colour number” that in our case is equal to 1.
In $(2+1)$ dimensions, the ground state of the $CP^{N-1}$ model can have the
long-range order or can be disordered, depending on the value of the coupling
$g_{0}$. In the ordered phase, the topological term is ineffective. However,
if for some reason the coupling gets driven over the critical value (e.g., due
to the presence of next-nearest neighbor interactions) and we land in the
disordered phase, the topological term becomes important: it leads to the
ground state with nonzero monopole density and thus to the spontaneous
dimerization (i.e., breaking of the translation symmetry) [13]. The
dimerization pattern of the ground state depends on the value of
$n_{\mathrm{c}}$: it is twofold degenerate for $n_{\mathrm{c}}=2\bmod 4$,
fourfold degenerate for $n_{\mathrm{c}}=1\bmod 4$ or $n_{\mathrm{c}}=3\bmod
4$, and non-degenerate (with unbroken translational invariance) for
$n_{\mathrm{c}}=0\bmod 4$. Thus, when the $SU(N)$-symmetric antiferromagnet
gets perturbed as in (2.3) by the $SU(2)$ term favoring the Néel order,
$2S$-multipletting of unit-charge monopoles leads to $\widetilde{q}_{i}$ in
(3.8) multiplied by $2S$, which is equivalent to changing the number of
“colours” $n_{\mathrm{c}}$ from 1 to $2S$. The ground state becomes
respectively twofold degenerate for odd-integer spins $S=2n+1$, stays fourfold
degenerate for half-integer $S$, and is non-degenerate for even-integer $S=2n$.
This result coincides with the conclusion obtained by Haldane [26] in the
framework of the $O(3)$ NLSM analysis.
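The counting in the last paragraph can be tabulated in a few lines (a trivial sketch of ours; the mapping of $n_{\mathrm{c}}\bmod 4$ to the degeneracy follows [13]).

```python
def dimer_degeneracy(n_c):
    """Ground-state degeneracy of the dimerized phase on the square lattice
    as a function of the colour number n_c (mod 4), following [13]:
    0 -> non-degenerate, 1 or 3 -> fourfold, 2 -> twofold."""
    return {0: 1, 1: 4, 2: 2, 3: 4}[n_c % 4]

# perturbed SU(2S+1) antiferromagnet: monopole charges come in 2S-multiplets,
# which is equivalent to setting n_c = 2S
degeneracies = {two_S / 2: dimer_degeneracy(two_S) for two_S in range(1, 9)}
```

This reproduces Haldane's pattern: fourfold for half-integer $S$, twofold for odd-integer $S$, non-degenerate for even-integer $S$.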
## 4 Summary
We have considered the low-dimensional spin-$S$ antiferromagnet on a bipartite
lattice, close to the point with the enhanced $SU(2S+1)$ symmetry which is
described in the continuum field approximation by the $CP^{2S}$ model, and
studied the consequences of explicit symmetry breaking from $SU(2S+1)$ to
$SU(2)$ (with the appropriate sign that favors the Néel order). This model is
motivated by the physics of cold spinor bosonic atoms in optical lattices, and
the symmetry-breaking perturbation can be associated with weak interactions
such as $p$-wave scattering that are usually neglected. We have derived the
effective theory for the perturbed system and shown that it is not the
standard $O(3)$ nonlinear sigma model (NLSM), as one might naively guess, but
rather the $O(3)\times O(2)$ NLSM. This occurs due to the dynamic generation
of the third moment of inertia for the “spin arrow”, caused by fluctuations of
massive fields that correspond to non-axisymmetric deformations of the
quadrupolar tensor. We have further shown that under the influence of the
$SU(2S+1)\mapsto SU(2)$ perturbation unit-charge topological excitations
(skyrmions and monopoles) of the $CP^{2S}$ model bind into $2S$ multiplets
that correspond to excitations with unit $O(3)$ topological charge defined in
terms of the unit Néel vector.
## References
* [1] Lewenstein M., Sanpera A., Ahufinger V., Damski B., Sen(De) A., Sen U., Adv. Phys., 2007,
56, No. 2, 243, doi:10.1080/00018730701223200.
* [2] Bloch I., Dalibard J., Zwerger W., Rev. Mod. Phys., 2008, 80, 885, doi:10.1103/RevModPhys.80.885.
* [3] Kawaguchi Y., Ueda M., Phys. Rep., 2012, 520, No. 5, 253, doi:10.1016/j.physrep.2012.07.005.
* [4] Zhou F., Snoek M., Ann. Phys., 2003, 308, No. 2, 692, doi:10.1016/j.aop.2003.08.009.
* [5] Imambekov A., Lukin M., Demler E., Phys. Rev. A, 2003, 68, 063602, doi:10.1103/PhysRevA.68.063602.
* [6] Wu C., Hu J.p., Zhang S.c., Phys. Rev. Lett., 2003, 91, 186402, doi:10.1103/PhysRevLett.91.186402.
* [7] Lecheminant P., Boulat E., Azaria P., Phys. Rev. Lett., 2005, 95, 240402, doi:10.1103/PhysRevLett.95.240402.
* [8] Wu C., Phys. Rev. Lett., 2005, 95, 266404, doi:10.1103/PhysRevLett.95.266404.
* [9] Wu C., Mod. Phys. Lett. B, 2006, 20, No. 27, 1707, doi:10.1142/S0217984906012213.
* [10] Affleck I., Nucl. Phys. B, 1986, 265, No. 3, 409, doi:10.1016/0550-3213(86)90167-7.
* [11] Affleck I., Nucl. Phys. B, 1988, 305, No. 4, 582, doi:10.1016/0550-3213(88)90117-4.
* [12] Read N., Sachdev S., Nucl. Phys. B, 1989, 316, No. 3, 609, doi:10.1016/0550-3213(89)90061-8.
* [13] Read N., Sachdev S., Phys. Rev. B, 1990, 42, 4568, doi:10.1103/PhysRevB.42.4568.
* [14] Itoi C., Kato M.H., Phys. Rev. B, 1997, 55, 8295, doi:10.1103/PhysRevB.55.8295.
* [15] Assaad F.F., Phys. Rev. B, 2005, 71, 075103, doi:10.1103/PhysRevB.71.075103.
* [16] Kawashima N., Tanabe Y., Phys. Rev. Lett., 2007, 98, 057202, doi:10.1103/PhysRevLett.98.057202.
* [17] Gorshkov A.V., Hermele M., Gurarie V., Xu C., Julienne P.S., Ye J., Zoller P., Demler E., Lukin M.D., Rey A.M., Nat. Phys., 2010, 6, 289, doi:10.1038/nphys1535.
* [18] Hermele M., Gurarie V., Rey A.M., Phys. Rev. Lett., 2009, 103, 135301, doi:10.1103/PhysRevLett.103.135301.
* [19] Scazza F., Hofrichter C., Höfer M., De Groot P., Bloch I., Fölling S., Nat. Phys., 2014, 10, No. 10, 779, doi:10.1038/nphys3061.
* [20] Ivanov B.A., Khymyn R.S., Kolezhuk A.K., Phys. Rev. Lett., 2008, 100, 047203, doi:10.1103/PhysRevLett.100.047203.
* [21] Kolezhuk A., Phys. Rev. B, 2008, 78, 144428, doi:10.1103/PhysRevB.78.144428.
* [22] Kolezhuk A.K., Vekua T., Phys. Rev. B, 2011, 83, 014418, doi:10.1103/PhysRevB.83.014418.
* [23] Fridman Y.A., Kosmachev O.A., Kolezhuk A.K., Ivanov B.A., Phys. Rev. Lett., 2011, 106, 097202,
doi:10.1103/PhysRevLett.106.097202.
* [24] Campbell G.K., Boyd M.M., Thomsen J.W., Martin M.J., Blatt S., Swallows M.D., Nicholson T.L., Fortier T., Oates C.W., Diddams S.A., Lemke N.D., Naidon P., Julienne P., Ye J., Ludlow A.D., Science, 2009, 324, No. 5925, 360, doi:10.1126/science.1169724.
* [25] Granger B.E., Blume D., Phys. Rev. Lett., 2004, 92, 133202, doi:10.1103/PhysRevLett.92.133202.
* [26] Haldane F.D.M., Phys. Rev. Lett., 1988, 61, 1029, doi:10.1103/PhysRevLett.61.1029.
* [27] Eichenherr H., Nucl. Phys. B, 1978, 146, No. 1, 215, doi:10.1016/0550-3213(78)90439-X.
* [28] Golo V., Perelomov A., Phys. Lett. B, 1978, 79, No. 1, 112, doi:10.1016/0370-2693(78)90447-1.
* [29] D’Adda A., Lüscher M., Di Vecchia P., Nucl. Phys. B, 1978, 146, No. 1, 63, doi:10.1016/0550-3213(78)90432-7.
* [30] Witten E., Nucl. Phys. B, 1979, 149, No. 2, 285, doi:10.1016/0550-3213(79)90243-8.
* [31] Aref’eva I., Azakov S., Nucl. Phys. B, 1980, 162, No. 2, 298, doi:10.1016/0550-3213(80)90266-7.
* [32] Beach K.S.D., Alet F., Mambrini M., Capponi S., Phys. Rev. B, 2009, 80, 184401,
doi:10.1103/PhysRevB.80.184401.
* [33] Landau L.D., Lifshitz E.M., Quantum Mechanics (3rd Edition): Non-Relativistic Theory,
Pergamon, Oxford, 1977.
* [34] Dombre T., Read N., Phys. Rev. B, 1989, 39, 6797, doi:10.1103/PhysRevB.39.6797.
* [35] Azaria P., Delamotte B., Mouhanna D., Phys. Rev. Lett., 1992, 68, 1762, doi:10.1103/PhysRevLett.68.1762.
* [36] Azaria P., Delamotte B., Jolicoeur T., Mouhanna D., Phys. Rev. B, 1992, 45, 12612,
doi:10.1103/PhysRevB.45.12612.
* [37] Apel W., Wintel M., Everts H.U., Z. Phys. B: Condens. Matter, 1992, 86, 139, doi:10.1007/BF01323558.
* [38] Belavin A.A., Polyakov A.M., JETP Lett., 1975, 22, 245.
${}^{\textsf{\footnotesize 1}}$ Institute of High Technologies, Taras Shevchenko National University of Kyiv,
MES of Ukraine, Kyiv 03022, Ukraine
${}^{\textsf{\footnotesize 2}}$ Institute of Magnetism, NAS of Ukraine and MES of Ukraine, 36–B Vernadskogo Ave.,
Kyiv 03142, Ukraine
|
# Deep Universal Blind Image Denoising
Jae Woong Soh Department of ECE, INMC
Seoul National University
Seoul, Korea
Email<EMAIL_ADDRESS>Nam Ik Cho Department of ECE, INMC
Seoul National University
Seoul, Korea
Email<EMAIL_ADDRESS>
###### Abstract
Image denoising is an essential part of many image processing and computer
vision tasks due to inevitable noise corruption during image acquisition.
Traditionally, many researchers have investigated image priors for the
denoising, within the Bayesian perspective based on image properties and
statistics. Recently, deep convolutional neural networks (CNNs) have shown
great success in image denoising by incorporating large-scale synthetic
datasets. However, they both have pros and cons. While the deep CNNs are
powerful for removing the noise with known statistics, they tend to lack
flexibility and practicality for the blind and real-world noise. Moreover,
they cannot easily employ explicit priors. On the other hand, traditional
non-learning methods can involve explicit image priors, but they require
considerable computation time and cannot exploit large-scale external
datasets. In this paper, we present a CNN-based method that leverages the advantages of both approaches from the Bayesian perspective. Concretely, we divide the blind image denoising problem into sub-problems and conquer each inference problem separately. As the CNN is a powerful tool for inference, our method is rooted in CNNs, and we propose a novel network design for efficient inference. With the proposed method, we can successfully remove blind and real-world noise with a moderate number of parameters in a single universal CNN.
## I Introduction
Image denoising aims to recover the latent clean image from an observed noisy
image. It has been a longstanding fundamental problem because noise
intervention is inevitable during the image acquisition process, which
degrades the visual quality of the acquired image and lowers the performance
of computer vision tasks. The overall noise is the accumulation of multiple
different noise sources, such as capturing sensors, in-camera pipeline, and
data transmission media. Such a noise generation process is too complicated to model exactly; hence, appealing to the central limit theorem, the noise is usually assumed to be additive white Gaussian noise (AWGN).
Specifically, an observed noisy image $\mathbf{y}$ has been assumed to be the
sum of the latent clean image $\mathbf{x}$ and an AWGN $\mathbf{n}$ as
${\mathbf{y}}={\mathbf{x}}+{\mathbf{n}}.$ (1)
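For illustration, the corruption model of Eq. (1) can be simulated in a few lines of numpy; the flat image and the noise-level below are toy assumptions:

```python
import numpy as np

def add_awgn(x, sigma, seed=0):
    """Corrupt a clean image x with additive white Gaussian noise of
    standard deviation sigma (Eq. 1): y = x + n, n ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, sigma, size=x.shape)
    return x + n

# Toy example: a flat gray image corrupted with sigma = 25 (0-255 scale).
x = np.full((128, 128), 128.0)
y = add_awgn(x, sigma=25.0)
```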
Traditionally, image denoising has been addressed as a statistical inference
problem, which focused on constructing a maximum a posteriori (MAP) inference
model. In the Bayesian perspective, the MAP inference can be divided into the
data fidelity term (log-likelihood) and regularization term (log-prior) as
$\mathbf{\hat{x}}=\arg\max_{{\mathbf{x}}}\log p({\mathbf{x}}|{\mathbf{y}})$ (2)
$=\arg\max_{{\mathbf{x}}}\log p({\mathbf{y}}|{\mathbf{x}})+\log p({\mathbf{x}})$ (3)
To obtain a plausible model in this form and its solution, many classical studies focused on the design of image prior terms based on experience and
knowledge of the data. These approaches include total variation regularization
[1], sparsity [2], low-rank constraints [3, 4], non-local self-similarity [5,
6, 7, 8], and so on. These methods _explicitly_ defined the data fidelity term
with the assumption of AWGN and appropriate image prior term. They are shown
to be superior in terms of interpretability and flexibility, but they are
limited when the noise deviates from spatially uniform i.i.d. Gaussian noise.
Also, their optimization mostly depends on iterative solvers, which take a long computation time.
Recently, the upsurge of powerful deep learning in computer vision introduced deep learning-based denoisers, which _implicitly_ learn the MAP estimate within the supervised learning framework [9, 10, 11, 12, 13, 14, 15, 16, 17]. These methods surpass the conventional non-learning methods by large margins, but they lack flexibility, especially for blind denoising. For example, a network trained to denoise a specific noise variance does not work well on an unseen noise-level. Hence, unless the noise-level is known, blind denoising requires a universal network trained over the whole expected range of noise-levels. Such a network is still inferior to a noise-level-specific denoiser, and discriminative learning methods suffer from the averaging problem: the network fits mid-level noise best [9, 10]. Moreover, knowledge from human experience cannot be easily injected into the deep learning framework. Although some works merge non-local self-similarity, one of the strongest priors, with deep networks [12, 15, 16], introducing other priors is not easy.
In this paper, we propose a convolutional neural network (CNN)-based universal
blind denoiser, dubbed Deep Universal Blind Denoiser (DUBD), which leverages
the advantages of MAP inference and the power of deep-learning. The design of
DUBD is motivated by the advantages and disadvantages of the above-stated approaches: namely, how to insert human-knowledge priors into the deep-learning framework without harming the power of the network. In particular, for
practicality, the proposed DUBD can handle a wide range of noise-levels and spatially varying noise without knowing the noise-level. Also, our method can
address spectrally variant noise, as evidenced in [4]. Moreover, our DUBD
outperforms other methods, including blind and non-blind denoisers, while
requiring a comparable number of parameters. Our contributions can be
summarized as:
* •
We propose a CNN-based universal blind denoiser that can handle a wide range of noise-levels, including spatially and spectrally varying noise.
* •
Our DUBD can explicitly incorporate prior knowledge, which further lifts the
performance of the network.
* •
Our DUBD outperforms other denoisers with a comparable number of parameters,
which eventually brings better practicality.
* •
Our DUBD can be applied to real-world noisy images and also shows outstanding
performance compared to the other methods.
## II Probabilistic View
For blind image denoising, a naïve approach is to train a universal CNN with
the whole expected range of noise-levels. However, this has been shown to be a suboptimal choice, lacking performance and flexibility [9, 10]. Hence, we approach
the blind image denoising problem with the divide-and-conquer scheme.
Precisely, we reformulate the log-posterior by introducing a new random variable ${\mathbf{c}}$ that encodes a prior based on human knowledge as
$\log p({\mathbf{x}}|{\mathbf{y}})=\log\int_{{\mathbf{c}}}p({\mathbf{x}}|{\mathbf{y}},{\mathbf{c}})\,p({\mathbf{c}}|{\mathbf{y}})\,d{\mathbf{c}}$ (4)
$\approx\log p({\mathbf{x}}|{\mathbf{y}},\mathbf{\hat{c}})\,p(\mathbf{\hat{c}}|{\mathbf{y}})$ (5)
$=\log p({\mathbf{x}}|{\mathbf{y}},\mathbf{\hat{c}})+\log p(\mathbf{\hat{c}}|{\mathbf{y}})$ (6)
Note that Eq. 4 marginalizes over the introduced random variable ${\mathbf{c}}$, but the integration is generally intractable. Hence, to approximate the integration, we use the point estimate for ${\mathbf{c}}$,
which is
$\mathbf{\hat{c}}=\arg\max_{{\mathbf{c}}}p({\mathbf{c}}|{\mathbf{y}})$. When
$p({\mathbf{c}}|{\mathbf{y}})$ has a unimodal distribution with a sharp peak,
the approximation is quite fair. Then, we can solve the MAP estimate with the
given point estimate $\mathbf{\hat{c}}$. Formally, the MAP inference problem
can be reformulated into solving two sub-problems as
$\mathbf{\hat{c}}=\arg\max_{{\mathbf{c}}}\log p({\mathbf{c}}|{\mathbf{y}})$ (7)
$\mathbf{\hat{x}}=\arg\max_{{\mathbf{x}}}\log p({\mathbf{x}}|{\mathbf{y}},\mathbf{\hat{c}})$ (8)
As neural networks are experts at inference, we employ CNNs for our two
inference problems, described as
$\mathbf{\hat{c}}\approx g_{\theta}({\mathbf{y}})$ (9)
$\mathbf{\hat{x}}\approx f_{\phi}({\mathbf{y}};\mathbf{\hat{c}})$ (10)
where $g_{\theta}(\cdot)$ and $f_{\phi}(\cdot)$ are CNNs with parameters
$\theta$ and $\phi$, respectively. By dividing the original problem into sub-problems, we reduce the number of modes of the multi-modal log-posterior. Thus, each network conquers a separate conditional, which eventually facilitates solving the whole blind denoising inference. Concretely, for
solving the second inference problem, we design a tunable CNN according to
${\mathbf{c}}$. Importantly, we must introduce an appropriate ${\mathbf{c}}$
based on our prior knowledge.
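The two-stage inference of Eqs. (7)-(10) can be sketched with hypothetical stand-ins for the two networks; the real $g_{\theta}$ and $f_{\phi}$ are trained CNNs, while the surrogates below are closed-form toys for illustration only:

```python
import numpy as np

# Hypothetical closed-form stand-ins for the two CNNs of Eqs. (9)-(10);
# in the paper, g_theta and f_phi are learned networks.
def g_theta(y):
    """Point estimate c_hat of Eq. (7): a crude global noise-level guess
    from the deviation around the mean (the paper uses a trained CENet)."""
    return float(np.std(y - np.mean(y)))

def f_phi(y, c_hat):
    """Denoiser of Eq. (8) conditioned on c_hat: shrinkage toward the mean
    whose strength grows with the estimated noise-level (a toy surrogate)."""
    w = c_hat / (c_hat + 10.0)       # heavier smoothing for larger c_hat
    return (1.0 - w) * y + w * np.mean(y)

def blind_denoise(y):
    c_hat = g_theta(y)               # first sub-problem: estimate c
    return f_phi(y, c_hat), c_hat    # second sub-problem: denoise given c

# Toy flat image corrupted with sigma = 20 noise.
rng = np.random.default_rng(0)
x = np.full((64, 64), 100.0)
y = x + rng.normal(0.0, 20.0, size=x.shape)
x_hat, c_hat = blind_denoise(y)
```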
## III Conditional Estimation Network
Figure 1: The network architecture of the conditional estimation network (CENet). We adopt $3\times 3$ convolution layers for all layers, and the number of channels is $64$ except for the last one. We decrease the spatial size of the feature maps with strided convolution layers. At the last stage, we bilinearly interpolate the map to recover its spatial size back to the same size as the input image.

Figure 2: The network architecture of the tunable denoiser. We adopt $3\times 3$ convolution layers for the main stem of the network, including conditional affine transform blocks (CATBlocks) and residual blocks (ResBlocks). Also, the number of channels is $64$ for the main stem, and $1\times 1$ convolution layers are employed for the condition encoder. In the condition encoder, the number of channels of the first layer is $128$, and the others are $64$.
In this section, we introduce the choice of ${\mathbf{c}}$, and how we design
a network for the point estimate $\mathbf{\hat{c}}$. Notably, for a tractable solution, we consider the case that $p({\mathbf{c}}|{\mathbf{y}})$ is unimodal with a sharp peak; ${\mathbf{c}}$ may belong to any category as long as it satisfies this property. Fortunately, the noise-level of a given noisy image
is deterministic, which means that its distribution is unimodal with a
relatively sharp peak. In other words, only a single noise-level corresponds
to the given noisy image. Therefore, we choose the noise-level as
${\mathbf{c}}$. In addition to the blind scenario, we can also handle the non-
blind case by solving the sub-problem of Eq. 8 with known information
${\mathbf{c}}$. Also, we can manually control ${\mathbf{c}}$ to manipulate the
network to produce better outputs.
To estimate the noise-level of a given image with an unknown and spatially
varying noise variance, we exploit a CNN architecture shown in Figure 1. In
addition to the choice of ${\mathbf{c}}$, we add another prior knowledge that
the variation of noise-level is spatially smooth in designing the network. We
decrease the size of feature maps using the strided convolutions and apply
bilinear interpolation at the last stage, rather than using a smoothness term
such as total variation regularization. By doing so, we have two advantages
compared to the additional loss term. First, we can decrease the computational
complexity by decreasing the feature map size, and second, we do not need to
tune the hyper-parameter that balances the original loss and the
regularization term. In summary, the network is trained with the loss:
$\mathcal{L}_{c}(\theta)=\mathbb{E}[\|\sigma-g_{\theta}({\mathbf{y}})\|^{2}_{2}].$ (11)
The output of $g_{\theta}({\mathbf{y}})$ is the estimated noise-level map
$\hat{\sigma}\in\mathbb{R}^{H\times W\times C}$, where $H$, $W$, and $C$
denote the height, width, and number of channels of the noisy image, respectively.
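The last stage of CENet can be mimicked with plain bilinear upsampling of a coarse map, which is what makes the predicted noise-level map spatially smooth by construction. Below is a minimal numpy sketch of such an upsampling; the map values and factor are hypothetical:

```python
import numpy as np

def bilinear_upsample(m, factor):
    """Bilinearly interpolate a coarse 2-D map m up by an integer factor,
    mimicking the last stage of CENet (Figure 1). Sample positions are
    aligned at pixel centers; borders are clamped."""
    h, w = m.shape
    ys = (np.arange(h * factor) + 0.5) / factor - 0.5
    xs = (np.arange(w * factor) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    top = (1 - wx) * m[y0][:, x0] + wx * m[y0][:, x1]
    bot = (1 - wx) * m[y1][:, x0] + wx * m[y1][:, x1]
    return (1 - wy) * top + wy * bot

# Hypothetical 2x2 coarse noise-level map upsampled by a factor of 2.
m = np.array([[0.0, 10.0], [20.0, 30.0]])
up = bilinear_upsample(m, 2)
```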
## IV Tunable Denoising Network
In this section, we present a novel network architecture where the input is a
noisy image with the intervention of an additional tunable parameter
${\mathbf{c}}$ to emulate $\arg\max
p({\mathbf{x}}|{\mathbf{y}},\mathbf{\hat{c}})$. The overall architecture of
our tunable denoiser is shown in Figure 2, whose design directly reflects $f_{\phi}({\mathbf{y}};\mathbf{\hat{c}})$. The denoising network mainly consists of two parts: the main stem and the condition encoder.
### IV-A Main Stem
The main stem of our tunable denoising network is based on the cascade of $D$
conditional affine transform blocks (CATBlocks). To incorporate conditional
information efficiently, we present a CATBlock which applies the affine
transformation to feature maps as
$F_{o}=\gamma\otimes F_{i}+\beta,$ (12)
where $F_{i}$ and $F_{o}$ denote feature maps before and after the
transformation, respectively, and $\otimes$ denotes element-wise
multiplication. In the CATBlock, the output feature maps of the last
convolution layer are adjusted by affine transformation with respect to the
condition-encoded affine parameters $\gamma$ and $\beta$, and the encoded
parameters are shared along with other CATBlocks. Similar approaches have been
applied in various tasks [18, 19, 20, 21].
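Eq. (12) itself is a one-liner; the sketch below applies the per-channel affine transform with hypothetical condition-encoded parameters:

```python
import numpy as np

def cat_affine(F_i, gamma, beta):
    """Conditional affine transform of Eq. (12): F_o = gamma (*) F_i + beta.
    gamma and beta are per-channel parameters produced by the condition
    encoder; broadcasting applies them across all spatial positions."""
    return gamma[None, None, :] * F_i + beta[None, None, :]

# Toy feature map of shape (H, W, C) = (4, 4, 3) with hypothetical
# condition-encoded parameters.
F_i = np.ones((4, 4, 3))
gamma = np.array([2.0, 0.5, 1.0])
beta = np.array([1.0, 0.0, -1.0])
F_o = cat_affine(F_i, gamma, beta)
```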
The CATBlock includes a cascade of $N$ residual blocks (ResBlocks), which is
shown in the upper-right corner of Figure 2. Additionally, we adopt the
residual learning scheme [9], which is to learn noise rather than the clean
image itself. Notably, our CATBlocks and ResBlocks adopt residual skip-connections to bypass short- and long-term information.
### IV-B Condition Encoder
The condition encoder network is a simple two-layer stack of $1\times 1$ convolutions. It takes the conditional variable and outputs the condition-
encoded parameters $\gamma$ and $\beta$. These parameters adjust the feature
values according to the condition. It is notable that the number of parameters
of the condition encoder is negligible compared to the whole network.
### IV-C Implementation Details
Now we specify the implementation details of our DUBD. For the number of blocks, we set $D=5$ and $N=5$. We train on the DIV2K [22] dataset, a recently released high-resolution dataset consisting of $800$ training, $100$ validation, and $100$ test images. The noise-levels of the Gaussian noise are sampled uniformly at random from the range $[5,70]$. We extract patches of size $96\times 96$ for training. We adopt the ADAM optimizer [23]; the initial learning rate is set to $2\times 10^{-4}$ and halved once during training. The mean squared error (MSE) loss,
$\mathcal{L}_{dn}(\phi)=\mathbb{E}[\|{\mathbf{x}}-f_{\phi}({\mathbf{y}};{\mathbf{c}})\|^{2}_{2}],$ (13)
is adopted as the denoising loss.
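The training procedure described above (uniform noise-level sampling in $[5,70]$ and the MSE loss of Eq. (13)) can be sketched as follows; the clean patch is a toy stand-in:

```python
import numpy as np

def training_pair(x, rng):
    """One training example for DUBD: sample a noise-level uniformly
    from [5, 70] as in Section IV-C, then corrupt the clean patch."""
    sigma = rng.uniform(5.0, 70.0)
    y = x + rng.normal(0.0, sigma, size=x.shape)
    return y, sigma

def mse_loss(x, x_hat):
    """Denoising loss of Eq. (13), up to the expectation over the data."""
    return float(np.mean((x - x_hat) ** 2))

# Toy clean patch of the training size 96x96 (zeros, for illustration).
rng = np.random.default_rng(0)
x = np.zeros((96, 96))
y, sigma = training_pair(x, rng)
```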
TABLE I: The average PSNR results on test sets. The best results are denoted
in red and the second best in blue. ‘*’ next to the name of the method means
blind denoising.
Noise-level | Dataset | CBM3D [8] | TNRD [24] | RED [11] | MemNet* [25] | CDnCNN* [9] | FFDNet [10] | UNLNet* [13] | ATDNet* [14] | VDN* [17] | DUBD-NB (Ours) | DUBD-B* (Ours)
---|---|---|---|---|---|---|---|---|---|---|---|---
$\sigma=10$ | CBSD68 | 35.91 | - | 33.89 | 28.52 | 36.13 | 36.14 | 36.20 | 36.29 | 36.29 | 36.35 | 36.33
| Kodak24 | 36.43 | - | 34.73 | 29.70 | 36.46 | 36.69 | - | 36.98 | 36.85 | 37.03 | 37.02
| Urban100 | 36.00 | - | 34.42 | 29.44 | 34.61 | 35.78 | - | 36.31 | 35.97 | 36.32 | 36.23
$\sigma=30$ | CBSD68 | 29.73 | - | 28.45 | 28.39 | 30.34 | 30.32 | 30.21 | 30.61 | 30.64 | 30.65 | 30.62
| Kodak24 | 30.75 | - | 29.53 | 29.55 | 31.17 | 31.27 | 31.18 | 31.72 | 31.67 | 31.75 | 31.75
| Urban100 | 30.36 | - | 28.84 | 28.93 | 30.00 | 30.53 | 30.41 | 31.48 | 31.14 | 31.46 | 31.43
$\sigma=50$ | CBSD68 | 27.38 | 25.96 | 26.34 | 26.33 | 27.95 | 27.97 | 27.85 | 28.33 | 28.33 | 28.35 | 28.31
| Kodak24 | 28.46 | 27.04 | 27.42 | 27.51 | 28.83 | 28.98 | 28.86 | 29.48 | 29.44 | 29.51 | 29.50
| Urban100 | 27.94 | 25.52 | 26.25 | 26.53 | 27.59 | 28.05 | 27.95 | 29.20 | 28.86 | 29.16 | 29.14
$\sigma=70$ | CBSD68 | 26.00 | - | 25.09 | 25.09 | 25.66 | 26.55 | - | - | 26.93 | 26.96 | 26.89
| Kodak24 | 27.09 | - | 26.16 | 26.24 | 26.36 | 27.56 | - | - | 28.05 | 28.12 | 28.11
| Urban100 | 26.31 | - | 24.58 | 24.93 | 25.24 | 26.40 | - | - | 27.31 | 27.59 | 27.58
TABLE II: The number of parameters and PSNR for several CNN-based methods. The
PSNR is measured on Urban100 [26] set with $\sigma=50$. For RNAN [16], we
brought the number from the paper.
Methods | RED [11] | CDnCNN* [9] | FFDNet [10] | ATDNet* [14] | RNAN [16] | VDN* [17] | DUBD-NB (Ours) | DUBD-B* (Ours)
---|---|---|---|---|---|---|---|---
Parameters | 4,135 K | 668 K | 825 K | 9,453 K | 7,409 K | 7,817 K | 2,088 K | 2,239 K
PSNR (dB) | 26.25 | 27.59 | 28.05 | 29.20 | 29.08 | 28.86 | 29.16 | 29.14
Ground Truth
Noisy
CBM3D [8]
TNRD [24]
RED [11]
MemNet [25]
CDnCNN [9]
FFDNet [10]
UNLNet [13]
ATDNet [14]
DUBD-NB (Ours)
DUBD-B (Ours)
Figure 3: Visualized examples of denoising for $\sigma=50$.
Ground Truth
Noisy
CBM3D [8]
TNRD [24]
RED [11]
MemNet [25]
CDnCNN [9]
FFDNet [10]
UNLNet [13]
ATDNet [14]
DUBD-NB (Ours)
DUBD-B (Ours)
Figure 4: Visualized examples of denoising for $\sigma=50$.
## V Experimental Results
We test denoising performances on the images corrupted with noise-levels
$\sigma=10,~{}30,~{}50,~{}70$, on three famous denoising datasets for color
images: CBSD68 [27], Kodak24, and Urban100 [26]. Importantly, considering practical usage, we mainly aim at color image denoising; gray-image denoising can also be done by changing the number of input channels.
We compare our method with several denoising algorithms, from conventional non-learning methods to recent CNN-based state-of-the-art methods: CBM3D [8], TNRD [24],
RED [11], MemNet [25], DnCNN [9], FFDNet [10], UNLNet [13], and ATDNet [14].
We present two results of our DUBD, namely DUBD-NB and DUBD-B. The DUBD-NB is
a non-blind model where we assume known noise-levels, and thus the output
corresponds to $f_{\phi}({\mathbf{y}};{\mathbf{c}})$. On the other hand,
DUBD-B is a blind model where the denoiser takes the estimated ${\mathbf{c}}$
from CENet, and hence the output is $f_{\phi}({\mathbf{y}};\mathbf{\hat{c}})$.
It is notable that we train only _one_ universal network for our DUBD that can
be used both as blind and non-blind, whereas other methods include blind and
non-blind models separately for each purpose. We evaluate the results by PSNR
(peak signal-to-noise ratio) on color images.
The overall results are listed in TABLE I, which shows that our methods are on par with or exceed previous state-of-the-art methods. Notably, our DUBD-B shows much better performance than the other non-blind ones. Interestingly, DUBD-B and DUBD-NB show a negligible performance gap, even though the noise-level estimation is not perfect. In other words, our method is robust, i.e., not very sensitive to the conditional estimates. The blind model of ATDNet [14] shows results comparable to ours and is sometimes the best among all methods, but its applicable noise-level range is narrower than ours. Also, it is interesting that CBM3D [8], a conventional approach exploiting non-local self-similarity with a 3D transform-domain Wiener filter, achieves better results than most CNN-based methods on the Urban100 [26] set, whose images contain many recurrent patterns.
For visual comparisons, we show some denoising results in Figure 3 and 4. As
shown, ours show a better separation of content and noise, _i.e._, noise reduction. As shown in Figure 4, the other methods often show cloudy
patterns or entangled noise, and sometimes show color artifacts. On the other
hand, our DUBDs achieve plausible detail preservation and naturalness.
## VI Analysis
We further analyze our DUBD with other recent state-of-the-art methods,
focusing on the complexity and robustness to spatially or spectrally varying
noises, which are important factors for practicability.
Figure 5: The number of parameter vs. PSNR on CBSD68 with $\sigma=50$.
### VI-A The Number of Parameters
Since computational resources are limited in real-world situations, the number of parameters is an important factor for the practicability of CNNs.
TABLE II and Figure 5 show the summary of network complexity vs. performance
for the methods compared earlier. Compared to CDnCNN [9] and FFDNet [10], our methods gain more than $1.0$ to $1.5$ dB in PSNR at the cost of doubling or tripling the number of parameters. Our DUBD outperforms RED [11] with about half the number of parameters. Also, compared to ATDNet [14] and RNAN [16], our methods need much fewer parameters while showing comparable PSNRs. Importantly, RNAN [16] is a non-blind method that requires a separate model for each noise-level. Additionally, in our method, changing from the non-blind to the blind model requires only 151 K additional parameters for CENet.
comparisons of several methods in terms of parameters versus performance in
Figure 5. It is noteworthy that although we include both blind and non-blind methods in the comparisons, our DUBD shows outstanding performance with a
comparable number of parameters. In summary, our DUBD is practical and
competitive compared to the state-of-the-art methods.
### VI-B Dealing with Spectrally-Spatially Variant Noise
In a real-world situation, the noise in an image is rarely spatially uniform.
Sometimes, the noise-level varies in different color channels, as pointed out
in [10, 4]. Thus, in this section, we show the results of spectrally-spatially
varying noise with our DUBD-B.
#### VI-B1 Spectrally Variant Noise
To test on spectrally variant noise, we feed an input as shown in Figure 6(d),
where the extent of noise corruption differs in each color channel. As shown
in Figure 6(a-c), the noise corruption is severe in B-channel while it is mild
in R-channel. As shown in Figure 6(e), addressing this type of noise with only
a single averaged noise-level does not show plausible results. It can be seen
that the noise in the B-channel (appearing in the bluish regions of the image) remains after denoising, while details of the red bird are lost. On the other hand, since our method can handle each channel separately, it shows visually promising results, as in Figure 6(f).
(a) Noise in R
(b) Noise in G
(c) Noise in B
(d) Input noisy image
(e) Output with average noise-level
(f) Our result
Figure 6: Input and results on spectrally varying noise.
#### VI-B2 Spatially Variant Noise
We feed an input image with spatially variant noise as shown in Figure 7(b).
The noise-level of each spatial position is shown in Figure 7(a) and the
results are shown in Figure 7(c) and (d). As shown, denoising with a single noise-level produces over-smoothed regions, as in the red bird, and some regions where the noise is not removed. On the other hand, our method handles spatially variant noise well within the blind denoising scheme, showing visually pleasing results.
(a) Noise-level in spatial dimension
(b) Input noisy image
(c) Output with average noise-level
(d) Our result
Figure 7: Input and results on spatially varying noise.
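A noisy input of this kind can be synthesized from a per-pixel noise-level map; the sketch below uses a hypothetical horizontal ramp of $\sigma$ from 10 to 50:

```python
import numpy as np

def spatially_variant_noise(x, sigma_map, seed=0):
    """Corrupt x with zero-mean Gaussian noise whose standard deviation
    varies per pixel; sigma_map has the same spatial size as x."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, 1.0, size=x.shape) * sigma_map

# Hypothetical smooth noise-level map ramping from sigma=10 (left) to
# sigma=50 (right) across the image width.
h, w = 64, 64
sigma_map = np.tile(np.linspace(10.0, 50.0, w), (h, 1))
x = np.full((h, w), 128.0)
y = spatially_variant_noise(x, sigma_map)
```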
### VI-C Traversing Conditional Variable
(a) PSNR results on different input ${\mathbf{c}}$.
(b) Visualized results of denoised images with different input ${\mathbf{c}}$.
Figure 8: Results by traversing the conditional variable ${\mathbf{c}}$.
In this section, we show the results by traversing the conditional variable
${\mathbf{c}}$ for two reasons. First, we show that our tunable denoiser can
also be manually controlled in accordance with user preferences. By
controlling the ${\mathbf{c}}$, the denoising strength can be tuned. Second,
we attempt to investigate the effect of the conditional variable and indirectly trace the conditional posterior $p({\mathbf{c}}|{\mathbf{y}})$.
The results are shown in Figure 8, where we feed an image corrupted by i.i.d.
Gaussian noise with $\sigma=30$ by changing the input ${\mathbf{c}}$ of our
tunable denoiser. As shown in Figure 8(a), the PSNR graph achieves its peak at ${\mathbf{c}}=31$, near the true $\sigma$. Also, our blind result with
conditional estimation shows quite a competent result. Notably, the graph is asymmetric and unimodal: feeding a ${\mathbf{c}}$ higher than the true value is much less sensitive than feeding a lower one. A coherent tendency can also be found in the corresponding
visualizations in Figure 8(b). Starting from a small ${\mathbf{c}}$, the noise
remains in the image, but by tuning ${\mathbf{c}}$ to higher values, the noise
gradually diminishes. Then, increasing ${\mathbf{c}}$ higher than the true
noise-level, the image gets blurry, losing details and textures.
## VII Results on Real Noisy Images
As real-world noise largely deviates from AWGN, it is important to validate whether the network also works well for real-world noise, with possible modification or retraining. Real noise originates from shot noise and read noise in the sensors and then spreads throughout the camera pipeline. Hence, the noise is no longer i.i.d.; it is signal-dependent and correlated with neighboring pixels due to demosaicking. Therefore, we also train our network to deal with real-noise images.
To train our network for real image denoising, we use Smartphone Image
Denoising Dataset (SIDD) [28], which includes pairs of clean and noisy images
from smartphone cameras. For the choice of ${\mathbf{c}}$, since we do not know the noise-level, we use the average pixel value of each $4\times 4$ region of the noisy image as our ${\mathbf{c}}$. It can be expressed as
${\mathbf{c}}=Avgpool_{4\times 4}({\mathbf{y}}).$ (14)
Then, the CENet becomes just a simple average-pooling operation, and our tunable denoiser targets regions of different color values separately. That is, it can be operated within the divide-and-conquer scheme. We choose a widely used benchmark for the evaluation: the Darmstadt Noise Dataset (DND) [29], which consists of real-noise images from 50 scenes; the provider further crops the scenes into 1,000 small patches. The ground-truth images are not available, and the evaluation can only be done through online submission.
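Eq. (14) reduces the condition estimation to plain average pooling; a minimal numpy sketch, assuming the image dimensions are multiples of 4:

```python
import numpy as np

def avgpool_4x4(y):
    """The condition of Eq. (14): the 4x4 spatial average of the noisy
    image, used in place of a learned CENet for real-noise training.
    Assumes H and W are multiples of 4; works for (H, W) or (H, W, C)."""
    h, w = y.shape[:2]
    return y.reshape(h // 4, 4, w // 4, 4, *y.shape[2:]).mean(axis=(1, 3))

# Toy 8x8 single-channel "image" with values 0..63.
y = np.arange(64.0).reshape(8, 8)
c = avgpool_4x4(y)
```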
### VII-A Analysis
We first analyze the effect of the choice of ${\mathbf{c}}$. As an ablation investigation, we train a baseline DUBD model in an end-to-end scheme without any constraints on the conditional variable ${\mathbf{c}}$. Then, we train our DUBD-R (R stands for the real-noise model) using the spatial average color value as the conditional variable. TABLE III shows the summary: setting ${\mathbf{c}}$ as in Eq. 14 improves the PSNR over the baseline model. Also, DUBD-R has fewer parameters than the baseline model, as it replaces the CENet with a simple average-pooling operation.
TABLE III: Ablation investigation for the real-noise denoising.
Dataset | SIDD [28] | DND [29]
---|---|---
Baseline | 39.13 | 39.13
DUBD-R | 39.27 | 39.38
### VII-B Results
TABLE IV: Average PSNR and SSIM results on DND [29] benchmark. The best results are in red and the second bests are in blue. Method | Parameters | PSNR | SSIM
---|---|---|---
DnCNN+ [9] | 668 K | 37.90 | 0.9430
FFDNet+ [10] | 825 K | 37.61 | 0.9415
CBDNet [30] | 4,365 K | 38.06 | 0.9421
ATDNet [14] | 9,453 K | 39.19 | 0.9526
RIDNet [31] | 1,499 K | 39.26 | 0.9528
VDN [17] | 7,817 K | 39.38 | 0.9518
DUBD-R (Ours) | 2,088 K | 39.38 | 0.9526
DUBD-R+ (Ours) | 2,088 K | 39.44 | 0.9530
In TABLE IV, we present several comparisons on the DND real-noise benchmark. The compared methods are DnCNN+ [9], FFDNet+ [10], CBDNet [30], ATDNet [14], RIDNet [31], and VDN [17]. To achieve a further performance gain, we adopt a self-ensemble approach for our method and denote it as “DUBD-R+.” As shown in the table, our method achieves competent results. Compared to the recently published blind denoiser VDN [17], our method shows comparable results while requiring a much smaller number of parameters. We also visualize comparisons in Figure 9 for qualitative evaluation. We expect that further investigation of the choice of ${\mathbf{c}}$ can boost the performance for real-world noisy images.
Noisy
23.55/0.5185
DnCNN+ [9]
34.51/0.9457
FFDNet+ [10]
34.47/0.9510
CBDNet [30]
35.43/0.9469
ATDNet [14]
36.03/0.9506
RIDNet [31]
37.17/0.9596
VDN [17]
37.34/0.9619
DUBD-R+
37.61/0.9637
Figure 9: Visualization of results on a real-noise image from DND [29]. The
PSNR/SSIM results are written below.
## VIII Conclusion
In this paper, we have proposed a universal blind denoiser, which can reduce
noise from various environments, including the ones encountered in a real
situation. Based on the idea of divide-and-conquer, we split the original
denoising MAP problem into two inference sub-problems, and we designed new CNN
architectures as sub-problem solvers. Concretely, we introduced an auxiliary
random variable to divide and approximate the original problem. Moreover, on
the choice of the auxiliary random variable, we explicitly reflected our prior
knowledge to augment the implicit priors learned from a large-scale dataset. Through experiments, we have shown that our method is efficient in terms of performance vs. complexity and shows promising results in various noise-corruption situations, such as spectrally and spatially varying noise without variance information. As a result, it also works robustly on real-world noise from cameras. The code is publicly available at
https://www.github.com/JWSoh/DUBD.
## Acknowledgment
This research was supported in part by Samsung Electronics Co., Ltd.
## References
* [1] L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” _Physica D: nonlinear phenomena_ , vol. 60, no. 1-4, pp. 259–268, 1992.
* [2] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” _IEEE Transactions on Image processing_ , vol. 15, no. 12, pp. 3736–3745, 2006.
* [3] S. Gu, L. Zhang, W. Zuo, and X. Feng, “Weighted nuclear norm minimization with application to image denoising,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2014, pp. 2862–2869.
* [4] J. Xu, L. Zhang, and D. Zhang, “A trilateral weighted sparse coding scheme for real-world image denoising,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 20–36.
* [5] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in _2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)_ , vol. 2. IEEE, 2005, pp. 60–65.
* [6] W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” _IEEE transactions on Image Processing_ , vol. 22, no. 4, pp. 1620–1630, 2012.
* [7] J. Mairal, F. R. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration.” in _ICCV_ , vol. 29. Citeseer, 2009, pp. 54–62.
* [8] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” _IEEE Transactions on image processing_ , vol. 16, no. 8, pp. 2080–2095, 2007.
# Droplet ejection by electrowetting actuation
Quoc Vo and Tuan Tran (corresponding author<EMAIL_ADDRESS><EMAIL_ADDRESS>)
School of Mechanical & Aerospace Engineering, Nanyang Technological
University, 50 Nanyang Avenue, 639798, Singapore
###### Abstract
Fast contact-line motion of a droplet spreading on a solid substrate under the
electrowetting effect generates strong capillary waves on the droplet’s
surface. The capillary waves may be strong enough to induce ejection of a
satellite droplet from the primary one. In this study, we show that the size
of the satellite droplet and the ejection time are not only dependent on the
contact-line velocity, which directly relates to the applied voltage enabling
the electrowetting effect, but also affected by the ejection dynamics. We
derive a theoretical model of the criteria for droplet ejection and
experimentally verify the proposed criteria for wide ranges of viscosity,
droplet size and the applied voltage.
Fast motion of the contact line during spreading of liquid droplets on solid
substrates may result in ejection of satellite droplets Ding _et al._ (2012).
Such droplet ejection is a fascinating physical phenomenon involving numerous
fundamental problems such as spreading dynamics Cox (1986), capillary waves
Keller and Miksis (1983); Billingham (1999), pinch-off singularity Day _et
al._ (1998), coalescence Zhang _et al._ (2015, 2009); Thoroddsen and Takehara
(2000); Shim and Stone (2017), or deformation and breakup dynamics of double
emulsion droplets Chen _et al._ (2013, 2015). Understanding the dynamics and
the criteria at which ejection happens also provides important insights for
improving industrial processes, including formation of aerosol droplets
Gordillo and Rodríguez-Rodríguez (2019), polymer emulsions, industrial sprays
Villermaux (2007), control of droplet jumping in digital microfluidics Merdasi
_et al._ (2019); Lee _et al._ (2012); Cavalli _et al._ (2016); Hong and Lee
(2015), and electronics cooling Foulkes _et al._ (2020).
The ejection of satellite droplets during spreading is directly tied to the
capillary wave on the surface of the primary droplet Ding _et al._ (2012). In
normal droplet wetting phenomena where the spreading motion of a droplet is
driven by capillarity at the contact line, capillary waves on the droplet
surface is directly generated from the fast motion of the contact line. Thus,
the ejection of satellite droplets is only possible for high wettability
surfaces, i.e., those with small static contact angles, to induce sufficient
spreading velocity Ding _et al._ (2012). This limits the capacity of using
normal wetting phenomena to systematically study satellite droplet ejection
during wetting. Due to this limitation, ejection dynamics of satellite
droplets during spreading, as well as the required conditions for ejection,
have been largely unexplored. Droplet spreading driven by electrowetting Vo
_et al._ (2018), on the other hand, does not suffer from this limitation. The
velocity of the contact line of the electrowetting-actuated droplet is
forcefully controlled by the electrowetting effect Vo and Tran (2021). As a
result, by using the electrowetting effect to control droplet spreading, it is
possible to trigger droplet ejection for varying surface wettability, liquid
viscosities, and droplet radius, an advantage that allows us to systematically
investigate ejection phenomena of satellite droplets.
In this Letter, we systematically examine the dynamics of satellite-droplet
ejection during the wetting process of droplets on solid substrates in which the
spreading velocity is forcefully controlled by the electrowetting effect. By
varying the drop’s size, viscosity, and the applied voltage to adjust the
electrowetting effect’s intensity, we study the contact line’s critical
velocity beyond which ejection of the satellite droplets is possible. We also
develop a predictive model for the condition of satellite droplet ejection and
verify it experimentally.
Figure 1: Schematic of the experimental setup using electrowetting to eject
satellite droplets. Figure 2: (a) Snapshots showing the spreading dynamics of
a $r_{0}=0.5\,$mm droplet when a voltage $U=120\,$V is applied. No ejection of
satellite droplets is observed. (b) Snapshots showing the spreading dynamics
of a $r_{0}=0.5\,$mm droplet when $U=140\,$V is applied resulting in $v_{\rm
i}=0.44\,{\rm m\,s^{-1}}$. A small satellite droplet is ejected when the
formed liquid balloon merges with the droplet. We categorise this as two-stage
ejection. (c) Snapshots showing the spreading dynamics of a $r_{0}=0.5\,$mm
droplet when $U=150\,$V is applied resulting in $v_{\rm i}=0.48\,{\rm
m\,s^{-1}}$. A big satellite droplet is ejected directly from the actuated
droplet. We categorise this as single-stage ejection. In all cases, the scale
bars represent 0.5 mm and the viscosity of droplets is $\eta=2.2\,$mPa s.
We prepare test substrates to induce electrowetting effect using indium-tin-
oxide (ITO) glass slides covered by an amorphous fluoropolymer (Teflon
AF-1601, DuPont) layer. The ITO layer has a thickness of 200 nm and a sheet
resistance of 20 $\Omega/{\rm sq}$. Teflon AF-1601, provided by the manufacturer
(DuPont) as a 6% solution in Fluorinert® FC-40, was spin-coated onto the ITO
substrate at 1000 revolutions per minute for 30 seconds using a spin
coater (Polos Spin150i, APT Automation). After spin-coating, the coated
substrate was immediately transferred to a hot plate for heating process to
remove the solvent. The heating process includes two consecutive steps: 1) at
$115^{\circ}$C for $10$ minutes, and 2) at $165^{\circ}$C for $30$ minutes
Datta _et al._ (2003). The thickness of the Teflon layer is $d=2.5\,{\rm\upmu
m}$, measured using a surface profiler (Talyscan 150, Taylor Hobson). The
Teflon coating has a surface roughness of $\approx 0.6\,$nm and negligible
variations in thickness, as examined by atomic force microscopy (Bioscope
Catalyst, Bruker) Vo and Tran (2019).
A droplet is then gently deposited on a test substrate in every experiment. To
form an electrowetting circuit, we dip an $18\,\mu$m-diameter tungsten wire
into the droplet and connect it to the positive terminal of a direct current
(DC) power supply (IT6723G, ITECH), while the ITO layer is connected to the
negative one (see Fig. 1). We use a solid-state relay (SSR) to control the
application of voltage to the electrowetting circuit. The applied voltage is in
the form of a pulse with amplitude $U$ and width $T$. The amplitude $U$ is varied
in the range $0\,{\rm V}\leq U\leq 190\,$V, while $T$ is kept fixed at $150$
ms, a period sufficiently long to ensure that droplets reach a new equilibrium
in every actuation.
The working liquids used to generate droplets are solutions of glycerol, DI
water, and 0.125 M sodium chloride. We vary the viscosity $\eta$ of the liquid
from $1\,{\rm mPa\,s}$ to $17.6\,{\rm mPa\,s}$ by changing the glycerol mass
concentrations from 0% to 67% Vo _et al._ (2018). The viscosity of the liquid
was measured using a rheometer (Discovery HR-2, TA Instrument). The radius of
the droplets $r_{0}$ is varied in the range $0.32\,{\rm mm}\leq r_{0}\leq
1.25\,{\rm mm}$. In every experiment, we immerse the droplet and the substrate
in silicone oil having viscosity $\eta_{\rm o}=1.8\,{\rm mPa\,s}$ and mass
density $\rho_{\rm o}=873\,{\rm kg\,m^{-3}}$ (Clearco Products Inc.). The
interfacial tension $\sigma$ between the working liquid and the oil, measured
by the pendant drop method, varies from $37.2\,{\rm mN\,m^{-1}}$ to $29.4\,{\rm
mN\,m^{-1}}$ for the tested glycerol solutions Vo _et al._ (2018). The
temperature of the oil pool is kept at $20\pm 0.5\,^{\circ}{\rm C}$ in all
experiments. We note that all droplets in our experiment have radii $r_{0}$
smaller than the capillary length $l_{\rm c}=[\sigma/(\rho-\rho_{\rm
o})g]^{1/2}\approx 5.5\,$mm, where $g=9.781\,{\rm m\,s^{-2}}$ is the
gravitational acceleration, and $\rho=1000\,{\rm kg\,m^{-3}}$ is the mass
density of the liquid. The contact angle of the droplet on the substrate when
no voltage is applied is $\approx 160^{\circ}$.
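As a quick consistency check, the capillary length quoted above follows directly from the stated fluid properties; the short sketch below (Python, with the numerical values taken from the text) reproduces it:

```python
# Capillary length l_c = [sigma / ((rho - rho_o) g)]^(1/2) for a water
# droplet immersed in silicone oil; all values as quoted in the text.
import math

sigma = 37.2e-3   # interfacial tension, N m^-1
rho = 1000.0      # droplet mass density, kg m^-3
rho_o = 873.0     # oil mass density, kg m^-3
g = 9.781         # gravitational acceleration, m s^-2

l_c = math.sqrt(sigma / ((rho - rho_o) * g))
print(f"l_c = {l_c * 1e3:.1f} mm")  # -> l_c = 5.5 mm
```

All tested droplet radii (0.32 mm to 1.25 mm) are well below this value, so gravity plays a negligible role in the ejection dynamics.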
We use a high speed camera (Photron, SAX2), typically running at $20000$
frames-per-second, to capture the behaviours of actuated droplets. The data
presented in this Letter are obtained by repeating each experiment under the same
conditions at least three times, and the standard deviations of these
repetitions are used for uncertainty estimation.
Figure 3: (a) Plot showing the normalized satellite droplets radius $r_{\rm
p}/r_{0}$ versus initial contact-line velocity $v_{\rm i}$. The shaded areas
showing the two groups of $r_{\rm p}/r_{0}$ corresponding to the two types of
droplet ejection are for guiding the eyes. (b) Plot showing the pinch-off time
$t_{\rm p}$ versus $v_{\rm i}$; $t_{\rm p}$ does not depend on $v_{\rm i}$ but
on $r_{0}$, $\eta$ and types of ejection. Inset: Plot showing the normalized
pinch-off time $t_{\rm p}/\tau$ versus $v_{\rm i}$ indicating that $t_{\rm
p}/\tau$ is well separated by the types of ejection, i.e., $t_{\rm
p}/\tau\approx 2.4$ for the single-stage ejection and $t_{\rm p}/\tau\approx
2.9$ for the two-stage ejection. In both plots, solid markers are data for
two-stage ejection, while open markers are for one-stage ejection.
In Fig. 2, we show snapshots presenting the spreading and satellite-droplet
ejecting dynamics of electrowetting-actuated droplets with several values of
the applied voltage $U$. We observe that, when $U$ is applied, fast spreading
motion of the contact line caused by electrowetting effect forces the lower
part of the droplet to rapidly expand and generates capillary waves
propagating along the droplet-oil interface toward the apex of the droplet.
When all the waves from the circular contact line hit the apex of the droplet
at the same time, they create a column of liquid on top of the droplet, which
subsequently evolves into a balloon shape (see Fig. 2, snapshots at $t\leq\,$5
ms). This process is similar to the formation of a pancake shape (Liu _et al._,
2014) or a pyramidal shape (Richard _et al._, 2002; Renardy _et al._,
2003) when a droplet impacts onto a solid substrate. Increasing the
applied voltage $U$, which yields a greater contact-line spreading velocity and
stronger capillary waves on the droplet’s surface, results in different
behaviours of the liquid balloon and subsequently different satellite-droplet
ejection dynamics. For voltages $U\leq 120\,$V, or equivalently, $v_{\rm i}\leq
0.39\,{\rm ms^{-1}}$, where $v_{\rm i}$ is the initial value of the contact-
line velocity directly controlled by the applied voltage Vo and Tran (2021),
the liquid balloon is pulled back and totally merges into the lower part of
the droplet; no ejection of satellite droplets occurs (Fig. 2a). At $U=130\,$V
or $v_{\rm i}=0.41\,{\rm ms^{-1}}$, the merging process of the balloon and the
lower part of the droplet creates sufficiently strong capillary waves on the
balloon surface and eventually generates a small satellite droplet on top of
the balloon (see Fig. 2b). This process is similar to the ejection of droplets
during coalescence of unequal-size drops (Zhang and Thoroddsen, 2008; Zhang
_et al._, 2009), of bubbles and droplets (Zhang _et al._, 2015; Li _et al._,
2014), or the coalescence cascade of a liquid drop (Thoroddsen and Takehara,
2000; Shim and Stone, 2017). However, due to the high damping of our
experimental system, we did
not observe any subsequent coalescence or ejection of smaller droplets. We
categorise this type of droplet ejection as two-stage ejection.
At the upper extreme, i.e., $U\geq 150\,$V or $v_{\rm i}\geq 0.48\,{\rm
ms^{-1}}$, the capillary waves on the primary droplet are sufficiently strong
to cause pinch-off of the balloon shape and eject a much larger satellite
droplet (see Fig. 2c). We categorise this type of ejection as single-stage
ejection, a similar process to the so-called first-stage pinch-off of droplets
during fast spreading reported by Ding _et al._ (2012).
_Size of ejected droplets and ejection time:_ The size of the ejected droplets
$r_{\rm p}$ and the ejection time $t_{\rm p}$ depend on the types of droplet
ejections, i.e., single-stage or two-stage. In Fig. 3a, we show the normalised
radius of ejected droplets, $r_{\rm p}/r_{0}$, versus the initial contact-line
velocity $v_{\rm i}$ for several values of droplet radius $r_{0}$ and
viscosity $\eta$. We observe that the data are well separated depending on the
type of droplet ejection: single-stage ejection typically results in $0.3\leq
r_{\rm p}/r_{0}\leq 0.5$, whereas two-stage ejection typically results in
$0.05\leq r_{\rm p}/r_{0}\leq 0.2$. For both types of ejection, $r_{\rm
p}/r_{0}$ increases with $v_{\rm i}$. However, in the single-stage ejection,
$r_{\rm p}/r_{0}$ increases at a higher rate compared to that in the two-stage
ejection. In Fig. 3b, we show the ejection time $t_{\rm p}$ versus $v_{\rm
i}$. We observe that $t_{\rm p}$ is independent of $v_{\rm i}$ when the other
parameters, i.e., $\eta$, $r_{0}$ and the type of ejection, are fixed. There is a measurable
increase in ejection time when the viscosity is increased from $\eta=1\,$mPa s
to $\eta=2.2\,$mPa s. We note that it is, however, not possible to eject
satellite droplets for viscosity larger than $\eta=2.2\,$mPa s. Therefore, we
are not able to examine the dependence of $t_{\rm p}$ on $\eta$ for a larger
range of viscosity. In the inset of Fig. 3b, we show the dependence of $t_{\rm
p}/\tau$ on $v_{\rm i}$, where $\tau=(\rho r_{0}^{3}/\sigma)^{1/2}$, revealing
that for a fixed value of viscosity ($\eta=1\,$mPa s), $t_{\rm p}/\tau\approx
2.42\pm 0.16$ for one-stage ejection (open markers), significantly different
from that for the two-stage ejection ($t_{\rm p}/\tau\approx 2.9\pm 0.14$,
solid markers). The difference in the ejection time between the two types of
ejection is accounted for by the merging process in the two-stage ejection (see
Fig. 2).
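The collapse of the ejection time onto the inertio-capillary timescale can be made concrete with a short numerical sketch (Python; $\rho$, $\sigma$, and the prefactors 2.42 and 2.9 are the values quoted above, evaluated here for a 0.5 mm droplet):

```python
import math

rho = 1000.0     # liquid mass density, kg m^-3
sigma = 37.2e-3  # interfacial tension, N m^-1

def capillary_time(r0):
    """Inertio-capillary timescale tau = (rho * r0^3 / sigma)^(1/2)."""
    return math.sqrt(rho * r0**3 / sigma)

tau = capillary_time(0.5e-3)  # s, for a 0.5 mm droplet
t_single = 2.42 * tau         # single-stage ejection time, t_p ~ 2.42 tau
t_two = 2.9 * tau             # two-stage ejection time, t_p ~ 2.9 tau
print(f"tau = {tau * 1e3:.2f} ms, t_p(single) = {t_single * 1e3:.1f} ms, "
      f"t_p(two) = {t_two * 1e3:.1f} ms")
```

The resulting few-millisecond ejection times are consistent with the snapshot timings shown in Fig. 2.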
_Criteria for two-stage ejection:_ Similar to the case of droplets ejected
during coalescence of unequal-size droplets (Zhang _et al._, 2009), the two-
stage ejection is driven by the Laplace pressure difference between the upper
part, i.e., the liquid balloon (Fig. 2b), and the lower part of the droplet.
A larger difference between the two curvatures creates stronger downward flows
rushing through the neck connecting the upper and lower parts. Subsequently,
this creates stronger capillary waves on the droplet’s surface. As a result,
the ejection of the satellite droplets depends on the size difference between
the two parts of the droplet. If we denote by $\alpha$ the ratio between the
radii of the primary droplet and its upper part, the value of $\alpha$
enabling droplet ejection in our experiment ranges between $2$ and $3$ for
Ohnesorge number ${\rm Oh}=\eta(\rho\sigma r_{0})^{-1/2}$ varying from
$0.0073$ to $0.016$, consistent with the observation for the pinch-off
criteria of unequal size droplets coalescence Zhang _et al._ (2009). In our
experiment, the two-stage ejection occurs in a small range of $v_{\rm i}$
sandwiched between no-ejection and the single-stage ejection. For example,
with a water droplet having $r_{0}=0.5\,$mm, we observe no ejection for $v_{\rm
i}<0.35\,{\rm ms^{-1}}$, two-stage ejection for $0.35\,{\rm ms^{-1}}\leq
v_{\rm i}<0.38\,{\rm ms^{-1}}$, and single-stage ejection for $v_{\rm i}\geq
0.38\,{\rm ms^{-1}}$. As a result, two-stage ejection can be viewed as an
intermediate regime between no-ejection and single-stage ejection when
increasing $v_{\rm i}$.
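The quoted Ohnesorge range can be reproduced from the stated viscosities (a quick sketch; $r_{0}=0.5$ mm is assumed here, with $\rho$ and $\sigma$ as given in the text):

```python
import math

rho = 1000.0     # kg m^-3
sigma = 37.2e-3  # N m^-1
r0 = 0.5e-3      # m

def ohnesorge(eta):
    """Oh = eta * (rho * sigma * r0)^(-1/2)."""
    return eta / math.sqrt(rho * sigma * r0)

for eta in (1.0e-3, 2.2e-3):  # Pa s: the two viscosities that show ejection
    print(f"eta = {eta * 1e3:.1f} mPa s -> Oh = {ohnesorge(eta):.4f}")
```

The two viscosities bracket the quoted range ${\rm Oh}\approx 0.0073$ to $0.016$.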
_Criteria for single-stage ejection:_ In the single-stage ejection, the
capillary waves on the droplet’s surface determining the ejection dynamics are
directly driven by the fast motion of the contact line with the initial
velocity $v_{\rm i}$, and decay over time by viscous damping with the
coefficient $\xi=\mu(\rho\sigma r_{0})^{-1/2}$ Vo and Tran (2021), where $\mu$
is the contact line friction coefficient Vo and Tran (2018); Vo _et al._
(2018). The necessary condition for the ejection to happen is capillary-wave
generation at the droplet-oil interface, which is previously reported as
$\xi\leq 1$ and ${\rm We}\geq 1$ Vo and Tran (2021). Here, ${\rm We}=v_{\rm
i}^{2}\rho r_{0}/\sigma$ is the Weber-like dimensionless number presenting the
ratio of inertia caused by the contact line velocity $v_{\rm i}$ to the
capillarity Vo and Tran (2021); Keller and Miksis (1983); Billingham (1999).
We now seek the sufficient condition to eject the satellite from the
primary droplet, i.e., pinching off at the neck connecting the satellite
droplet and the lower liquid body (Fig. 2). At the moment pinch-off happens,
the downward velocity $v_{\rm p}$ of the lower liquid body at the neck must be
higher than that of the satellite droplet. On the one hand, as fluid motion in
the upper neck toward the lower one is driven by the difference in Laplace
pressure between the satellite droplet and the lower body, the downward
velocity of the satellite droplet is $(\sigma/\rho r_{\rm p})^{1/2}$. On the
other hand, the downward velocity of the liquid body below the neck is
determined by the phase velocity $v_{\rm p}$ of the capillary waves at the
same position. In our case, the phase velocity $v_{\rm p}$ is estimated as
$v_{\rm p}\sim\lambda f_{\rm p}$, where $\lambda\sim r_{0}$ is the wavelength,
$f_{\rm p}\sim(k_{\rm d}/2\pi)(v_{\rm i}/r_{0})$ the wave frequency attenuated
by the viscous effect, and $k_{\rm d}=2\pi(1-\xi^{2})^{1/2}$ the wave number
Vo and Tran (2021). As a result, the sufficient condition for the transition
from no-ejection to single-stage ejection is $v_{\rm
i}(1-\xi^{2})^{1/2}=C(\sigma/\rho r_{\rm p})^{1/2}$, which can be conveniently
written in the dimensionless form
${\rm We}=C^{2}\frac{r_{0}}{r_{\rm p}}(1-\xi^{2})^{-1}.$ (1)
Here, $C$ is a constant of order unity arising from the dimensional analysis.
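Equation (1) translates into a simple go/no-go check for single-stage ejection. The sketch below uses the fitted values $C=0.73$ and $r_{0}/r_{\rm p}=2.5$ reported with Fig. 4; the contact-line friction coefficient `mu` is a hypothetical illustrative value, not a measured one:

```python
import math

C = 0.73          # fitted constant of order unity in Eq. (1)
SIZE_RATIO = 2.5  # experimental r0 / r_p used in Fig. 4

def ejects_single_stage(v_i, r0, mu, rho=1000.0, sigma=37.2e-3):
    """Return True if Eq. (1) predicts single-stage ejection:
    We >= C^2 (r0 / r_p) (1 - xi^2)^(-1), subject to the necessary
    capillary-wave-generation conditions We >= 1 and xi <= 1."""
    we = v_i**2 * rho * r0 / sigma          # Weber-like number
    xi = mu / math.sqrt(rho * sigma * r0)   # viscous damping coefficient
    if we < 1.0 or xi >= 1.0:               # no capillary waves generated
        return False
    return we >= C**2 * SIZE_RATIO / (1.0 - xi**2)

# mu = 0.04 Pa s is a placeholder friction coefficient for illustration:
print(ejects_single_stage(v_i=0.48, r0=0.5e-3, mu=0.04))  # fast spreading
print(ejects_single_stage(v_i=0.30, r0=0.5e-3, mu=0.04))  # slow spreading
```

With these placeholder parameters, the faster contact line crosses the ejection threshold while the slower one does not, mirroring the transition described above.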
Figure 4: Plot showing the phase diagram of the ejection behaviours of water-
glycerol droplets of various radii $r_{0}$ and viscosities $\eta$, rearranged in
the dimensionless quantities ${\rm We}$ versus $\xi$. The solid curve presents
Eq. 1 using $r_{0}/r_{\rm p}=2.5$ and $C=0.73$. The shaded area around the
solid curve shows the variation of Eq. 1 when the experimental values of
$r_{0}/r_{\rm p}$ vary from $2$ to $3$.
In Fig. 4, we show the map of various observed behaviours, most notably the
ejection and no-ejection behaviours and their transition depending on two
parameters ${\rm We}$ and $\xi$. The two lines ${\rm We}=1$ and $\xi=1$ form a
coordinate defining the quadrant in which ejection is possible, i.e., the
second quadrant of the coordinate where ${\rm We}\geq 1$ and $\xi\leq 1$. The
region having two-stage ejections (red circles) is clearly sandwiched between
those with no-ejection and single-stage ejections. The transition to single-
stage ejection is well captured by Eq. 1 (solid curve). We note that since
there is no available theoretical formula for the values of $r_{0}/r_{\rm p}$,
we use the experimental data of $r_{0}/r_{\rm p}$ in Fig. 3a to test our
theory, i.e., Eq. 1 is plotted using the experimental value of $r_{0}/r_{\rm
p}=2.5$ and $C=0.73$. The shaded area around the solid curves in Fig. 4 shows
the transition between the no-ejection and ejection regions caused by the variation
in the experimental values of $r_{0}/r_{\rm p}$ from $2$ to $3$. The good
agreement between Eq. 1 and the experimental data confirms our prediction of
the condition for satellite-droplet ejection by fast spreading due to
electrowetting.
_Relation between the applied voltage and the ejection criteria:_ With the
conditions $\xi\leq 1$ and ${\rm We}\geq 1$, i.e., capillary waves occur on
the surface of the electrowetting-actuated droplets, one can relate ${\rm We}$
to $U$ by balancing the electrowetting-induced driving force at the contact
line and the droplet’s inertia Vo and Tran (2021): ${\rm We}^{1/2}=A(U-U_{\rm
c})+1$. Here, $A$ is entirely determined by the system’s parameters and
$U_{\rm c}$ is the voltage at which capillary waves occur on the droplet’s
surface. We note that it is also possible to determine $U_{\rm c}$
theoretically using capillary waves generation condition Vo and Tran (2021).
In principle, the relation between ${\rm We}$ and $U$ together with Eq. 1
provides a complete set of theoretical conditions for satellite-droplet
ejection by electrowetting actuation.
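Putting the two relations together, the voltage threshold for ejection follows by inverting ${\rm We}^{1/2}=A(U-U_{\rm c})+1$ and comparing with the right-hand side of Eq. (1). In the sketch below, `A` and `U_c` are illustrative placeholders rather than fitted experimental values:

```python
import math

def weber_from_voltage(U, A, U_c):
    """We as a function of applied voltage: We^(1/2) = A (U - U_c) + 1."""
    return (A * (U - U_c) + 1.0) ** 2

def threshold_voltage(we_star, A, U_c):
    """Smallest voltage U at which We reaches we_star (the RHS of Eq. 1)."""
    return U_c + (math.sqrt(we_star) - 1.0) / A

A, U_c = 0.02, 100.0   # placeholder values: per volt, and volts
we_star = 1.46         # example RHS of Eq. (1) for xi ~ 0.3, r0/r_p = 2.5
U_star = threshold_voltage(we_star, A, U_c)
print(f"ejection predicted above U ~ {U_star:.0f} V")
```

Once $A$ and $U_{\rm c}$ are calibrated for a given system, this inversion gives the ejection voltage directly.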
In conclusion, we have utilized the electrowetting effect to induce strong
capillary waves on droplet-oil interfaces and investigated the resulting
ejection of satellite droplets. Our results show that the radius of ejected
droplets and the ejection time weakly depend on the wave magnitude, but vary
substantially with the ejection type: a single-stage ejection usually
generates larger satellite droplets in shorter time compared to a two-stage
counterpart. We also experimentally determined the conditions enabling droplet
ejection and proposed a model capturing such conditions using the Weber-like
number ${\rm We}$ and the damping coefficient $\xi$. The excellent agreement
between the theoretical model and the experimental data not only offers a
better understanding of fundamental problems such as the pinch-off of elongated
droplets (Day _et al._ , 1998; Burton and Taborek, 2007) or rupture of liquid
sheets (Kitavtsev _et al._ , 2018), but also is useful for practical
applications such as digital microfluidics and ink-jet printing.
This study is supported by Nanyang Technological University and the Agency for
Science, Technology and Research (A*STAR) under its Pharos Funding Scheme
(Grant No. 1523700102).
## I Data Availability
The data that support the findings of this study are available within the
article.
## References
* Ding _et al._ (2012) H. Ding, E. Q. Li, F. H. Zhang, Y. Sui, P. D. M. Spelt, and S. T. Thoroddsen, J. Fluid Mech. 697, 92 (2012).
* Cox (1986) R. G. Cox, J. Fluid Mech. 168, 169 (1986).
* Keller and Miksis (1983) J. B. Keller and M. J. Miksis, SIAM J. Appl. Math. 43, 268 (1983).
* Billingham (1999) J. Billingham, J. Fluid Mech. 397, 45 (1999).
* Day _et al._ (1998) R. F. Day, E. J. Hinch, and J. R. Lister, Phys. Rev. Lett. 80, 704 (1998).
* Zhang _et al._ (2015) F. H. Zhang, M. J. Thoraval, S. T. Thoroddsen, and P. Taborek, J. Fluid Mech. 782, 209 (2015).
* Zhang _et al._ (2009) F. H. Zhang, E. Q. Li, and S. T. Thoroddsen, Phys. Rev. Lett. 102, 104502 (2009).
* (8) S. T. Thoroddsen and K. Takehara, Phys. Fluids 12, 1265 (2000).
* (9) S. Shim and H. A. Stone, Phys. Rev. Fluids 2, 044001 (2017).
* (10) Y. Chen, X. Liu, and M. Shi, Appl. Phys. Lett. 102, 051609 (2013).
* (11) Y. Chen, X. Liu, and M. Shi, Appl. Phys. Lett. 102, 141601 (2015).
* Gordillo and Rodríguez-Rodríguez (2019) J. M. Gordillo and J. Rodríguez-Rodríguez, J. Fluid Mech. 867, 556 (2019).
* Villermaux (2007) E. Villermaux, Annu. Rev. Fluid Mech. 39, 419 (2007).
* Merdasi (2019) A. Merdasi, M. A. Daeian, A. Moosavi, and M. B. Shafii, Extreme Mech. Lett. 32, 100538 (2019).
* Lee (2012) S. J. Lee, S. Lee, and K. H. Kang, Appl. Phys. Lett. 100, 081604 (2012).
* Cavalli (2016) A. Cavalli, D. J. Preston, E. Tio, D. W. Martin, N. Miljkovic, E. N. Wang, F. Blanchette, and J. W. M. Bush, Phys. Fluids 28, 022101 (2016).
* Hong and Lee (2015) J. Hong and S. J. Lee, Lab Chip 15, 900 (2015).
* Foulkes (2020) T. Foulkes, J. Oh, P. Sokalski, L. Li, S. Sett, J. Sotelo, Y. Yan, R. Pilawa-Podgurski, A. Castaneda, M. Steinlauf, and N. Miljkovic, Appl. Phys. Lett. 116, 203701 (2020).
* Vo _et al._ (2018) Q. Vo, H. Su, and T. Tran, Sci. Rep. 8, 836 (2018).
* Vo and Tran (2021) Q. Vo and T. Tran, (2021), arXiv:2101.02821 .
* Datta _et al._ (2003) A. Datta, I.-Y. Eom, A. Dhar, P. Kuban, R. Manor, I. Ahmad, S. Gangopadhyay, T. Dallas, M. Holtz, H. Temkin, and P. K. Dasgupta, IEEE Sens. J. 3, 788-795 (2003) .
* Liu _et al._ (2014) Y. Liu, L. Moevius, X. Xu, T. Qian, J. M. Yeomans, and Z. Wang, Nat. Phys. 10, 515 (2014) .
* Richard _et al._ (2002) D. Richard, C. Clanet, and D. Quéré, Nature 417, 811 (2002).
* Renardy _et al._ (2003) Y. Renardy, S. Popinet, L. Duchemin, M. Renardy, S. Zaleski, C. Josserand, M. A. Drumright-Clarke, D. Richard, C. Clanet, and D. Quéré, J. Fluid Mech. 484, 69 (2003).
* Zhang and Thoroddsen (2008) F. H. Zhang and S. T. Thoroddsen, Phys. Fluids 20, 022104 (2008) .
* Li _et al._ (2014) E. Q. Li, S. A. Al-Otaibi, I. U. Vakarelski, and S. T. Thoroddsen, J. Fluid Mech. 744, R1 (2014).
* Vo and Tran (2018) Q. Vo and T. Tran, Phys. Rev. E 97, 063101 (2018).
* Vo and Tran (2019) Q. Vo and T. Tran, Phys. Rev. Lett. 123, 024502 (2019).
* Burton and Taborek (2007) J. C. Burton and P. Taborek, Phys. Rev. Lett. 98, 224502 (2007).
* Kitavtsev _et al._ (2018) G. Kitavtsev, M. A. Fontelos, and J. Eggers, J. Fluid Mech. 840, 555 (2018) .
# ${\rm NK}_{1}$ of Bak’s unitary group over Graded Rings
Rabeya Basu Indian Institute of Science Education and Research (IISER) Pune,
India<EMAIL_ADDRESS><EMAIL_ADDRESS>and Kuntal Chakraborty
Indian Institute of Science Education and Research (IISER) Pune, India
<EMAIL_ADDRESS>
Research by the first author was supported by SERB-MATRICS grant for the
financial year 2020–2021. And, research by the second author was supported by
IISER (Pune) post-doctoral research grant.
Corresponding Author<EMAIL_ADDRESS><EMAIL_ADDRESS>
Abstract: For an associative ring $R$ with identity, we study the absence of
$k$-torsion in ${\rm NK_{1}GQ}(R)$, the Bass nil-group for the general quadratic
or Bak’s unitary groups. By using a graded version of Quillen–Suslin theory we
deduce an analog for graded rings.
2020 Mathematics Subject Classification: 19-XX, 15-XX, 16-XX, 20-XX
Key words: General linear groups, Elementary subgroups, Quadratic forms,
Higman linearisation, $k$-torsion, Whitehead group $\rm K_{1}$.
## 1\. Introduction
Let $R$ be an associative ring with identity element $1$. When $R$ is
commutative, we define ${\rm SK_{1}}(R)$ as the kernel of the determinant map
from the Whitehead group ${\rm K_{1}}(R)$ to the group of units of $R$. The
Bass nil-group ${\rm NK_{1}}(R)=\textnormal{ker}({\rm
K_{1}}(R[X])\rightarrow{\rm K_{1}}(R))$; $X=0$. i.e., the subgroup consisting
of elements $[\alpha(X)]\in{\rm K_{1}}(R[X])$ such that $[\alpha(0)]=[{\rm
I}]$. Hence ${\rm K_{1}}(R[X])\cong{\rm NK}_{1}(R)\oplus{\rm K_{1}}(R)$. The
aim of this paper is to study some properties of Bass nil-groups ${\rm
NK_{1}}$ for the general quadratic groups or Bak’s unitary groups.
It is well known that for many rings, e.g. if $R$ is regular Noetherian, a
Dedekind domain, or any ring of finite global dimension, the group ${\rm
NK}_{1}(R)$ is trivial. On the other hand, if it is non-trivial, then it is
not finitely generated as a group. For example, if $G$ is a non-trivial finite
group, the group ring $\mathbb{Z}G$ is not regular. In many such cases, it is
difficult to compute ${\rm NK}_{1}(\mathbb{Z}[G])$. In [13], D.R. Harmon
proved the triviality of this group when $G$ is a finite group of square-free
order. C. Weibel, in [20], has shown the non-triviality of this group for $G$
= $\mathbb{Z}/2\oplus\mathbb{Z}/2$, $\mathbb{Z}/4$, and $D_{4}$. Some more
results are known for finite abelian groups from the work of R.D. Martin;
cf.[16]. It is also known (cf.[12]) that for a general finite group $G$, ${\rm
NK}_{1}(R[G])$ is a torsion group for the group ring $R[G]$. In fact, for
trivial ${\rm NK}_{1}(R)$, the order of every element of finite order of ${\rm
NK}_{1}(R[G])$ is some power of the cardinality of $G$. For $R=\mathbb{Z}$,
this is a result of Weibel. In particular, if $G$ is a finite $p$-group ($p$ a
prime), then every element of ${\rm NK}_{1}(\mathbb{Z}[G])$ has $p$-primary
order. In [17], J. Stienstra showed that ${\rm NK_{1}}(R)$ is a ${\rm
W}(R)$-module, where ${\rm W}(R)$ is the ring of big Witt vectors (cf.[11] and
[19]). Consequently, in ([18], §3), C. Weibel observed that if $k$ is a unit
in $R$, then ${\rm SK_{1}}(R[X])$ has no $k$-torsion, when $R$ is a
commutative local ring. Note that if $R$ is a commutative local ring then
${\rm SK_{1}}(R[X])$ coincides with ${\rm NK_{1}}(R)$; indeed, if $R$ is a
local ring then ${\rm SL}_{n}(R)={\rm E}_{n}(R)$ for all $n>0$. Therefore, we
may replace $\alpha(X)$ by $\alpha(X)\alpha(0)^{-1}$ and assume that
$[\alpha(0)]=[{\rm I}]$. In [7], the first author extended Weibel’s result for
arbitrary associative rings. In this paper we prove the analog result for
$\lambda$-unitary Bass nil-groups, viz. ${\rm NK_{1}GQ}^{\lambda}(R,\Lambda)$,
where $(R,\Lambda)$ is the form ring as introduced by A. Bak in [1]. The main
ingredient for our proof is an analog of Higman linearisation (for a subclass
of Bak’s unitary group) due to V. Kopeiko; cf.[15]. For the general linear
groups, Higman linearisation (cf.[6]) allows us to show that ${\rm NK_{1}}(R)$
has a unipotent representative. The same result is not true in general for the
unitary nil-groups. Kopeiko’s results in [15] explain a complete description
of the elements of ${\rm NK_{1}GQ}^{\lambda}(R,\Lambda)$ that have (unitary)
unipotent representatives. The following are the main results of this article.
###### Theorem 1.1.
Let $[\alpha(X)]=\big{[}\begin{pmatrix}A(X)&B(X)\\\
C(X)&D(X)\end{pmatrix}\big{]}\in{\rm NK_{1}GQ}^{\lambda}(R,\Lambda)$ with
$A(X)\in{\rm GL}_{r}(R[X])$ for some $r\in\mathbb{N}$. Then $[\alpha(X)]$ has
no $k$-torsion if $kR=R$.
And an analog for graded rings:
###### Theorem 1.2.
Let $R=R_{0}\oplus R_{1}\oplus\dots$ be a graded ring. Let $k$ be a unit in
$R_{0}$. Let $N=N_{0}+N_{1}+\dots+N_{r}\in{\rm M}_{r}(R)$ be a nilpotent
matrix, and ${\rm I}$ denote the identity matrix. If $[({\rm I}+N)]^{k}=[{\rm
I}]$ in ${\rm K_{1}GQ}^{\lambda}(R,\Lambda)$, then $[{\rm I}+N]=[{\rm
I}+N_{0}]$.
In the proof of Theorem 1.2 we use a graded version of the Quillen–Suslin
local-global principle for Bak’s unitary group over graded rings. This unifies
and generalizes the results proved in [5], [7], [9], and [10].
###### Theorem 1.3.
(Graded local-global principle) Let $R=R_{0}\oplus R_{1}\oplus
R_{2}\oplus\cdots$ be a graded ring with identity $1$. Let $\alpha\in{\rm
GQ}(2n,R,\Lambda)$ be such that $\alpha\equiv{\rm I}_{2n}\pmod{R_{+}}$. If
$\alpha_{\mathfrak{m}}\in{\rm
EQ}(2n,R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$, for every maximal ideal
$\mathfrak{m}\in\rm Max(C(R_{0}))$, then $\alpha\in{\rm EQ}(2n,R,\Lambda).$
## 2\. Preliminaries
Let $R$ be an associative ring with identity element $1$. Let ${\rm M}(n,R)$
denote the additive group of $n\times n$ matrices, and ${\rm GL}(n,R)$ denote
the multiplicative group of $n\times n$ invertible matrices. Let $e_{ij}$ be
the matrix with $1$ in the $ij$-th position and $0$’s elsewhere. The
elementary subgroup of ${\rm GL}(n,R)$ plays a key role in classical algebraic
K-theory. We recall,
###### Definition 2.1.
Elementary Group ${\rm E}(n,R)$: The subgroup of ${\rm GL}(n,R)$ generated by
$\\{{\rm E}_{ij}(\lambda):\lambda\in R,i\neq j\\}$, where ${\rm
E}_{ij}(\lambda)={\rm I}_{n}+\lambda e_{ij}$.
###### Definition 2.2.
For $\alpha\in{\rm M}(r,R)$ and $\beta\in{\rm M}(s,R)$, the matrix
$\alpha\perp\beta$ denotes its embedding in ${\rm M}(r+s,R)$ (here $r$ and $s$
are even integers in the non-linear cases), given by
$\alpha\perp\beta=\left(\begin{array}[]{cc}\alpha&0\\\
0&\beta\end{array}\right).$
There is an infinite counterpart: Identifying each matrix $\alpha\in{\rm
GL}(n,R)$ with the larger matrix $(\alpha\perp 1)$ gives an embedding of
${\rm GL}(n,R)$ into ${\rm GL}(n+1,R)$. Let ${\rm
GL}(R)=\underset{n=1}{\overset{\infty}{\cup}}{\rm GL}(n,R)$, and ${\rm
E}(R)=\underset{n=1}{\overset{\infty}{\cup}}{\rm E}(n,R)$ be the corresponding
infinite linear groups.
As a consequence of the classical Whitehead Lemma (cf.[3]), one gets
$[{\rm GL}(R),{\rm GL}(R)]={\rm E}(R).$
###### Definition 2.3.
The quotient group
${\rm K_{1}}(R)=\frac{{\rm GL}(R)}{[{\rm GL}(R),{\rm GL}(R)]}=\frac{{\rm
GL}(R)}{{\rm E}(R)}$
is called the Whitehead group of the ring $R$. For $\alpha\in{\rm GL}(n,R)$,
let $[\alpha]$ denote its equivalence class in ${\rm K_{1}}(R)$.
In a similar manner we define the ${\rm K_{1}}$ groups for the other types of
classical groups; viz., the symplectic Whitehead group ${\rm K_{1}}{\rm
Sp}(R)$ and the orthogonal Whitehead group ${\rm K_{1}}{\rm O}(R)$.
This paper explores a uniform framework for classical type groups over graded
structures. Let us begin by recalling the concept of form rings and form
parameters as introduced by A. Bak in [1]. This allows us to give a uniform
definition for the classical type groups.
###### Definition 2.4.
(Form rings): Let $R$ be an associative ring with identity, and with an
involution $-:R\rightarrow R$, $a\mapsto\overline{a}$. Let $\lambda\in C(R)$ =
the center of $R$, with the property $\lambda\overline{\lambda}=1$. We define
two additive subgroups of $R$, viz.
$\Lambda_{\rm max}=\\{a\in R\mid
a=-\lambda\overline{a}\\}~{}\textit{and}~{}\Lambda_{\rm
min}=\\{a-\lambda\overline{a}\mid a\in R\\}.$
One checks that for any $x\in R$, $\Lambda_{\rm max}$ and $\Lambda_{\rm min}$
are closed under the conjugation operation $a\mapsto\overline{x}ax$.
A $\lambda$-form parameter on $R$ is an additive subgroup $\Lambda$ of $R$
such that $\Lambda_{\rm min}\subseteq\Lambda\subseteq\Lambda_{\rm max}$, and
$\overline{x}\Lambda x\subseteq\Lambda$ for all $x\in R$; i.e., a subgroup
between these two additive groups which is also closed under the conjugation
operation. A pair $(R,\Lambda)$ is called a form ring.
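To make the definition concrete, here is a small example of our own (not taken from [1]), for $R=\mathbb{Z}$ with the trivial involution $a\mapsto a$:

```latex
% R = Z, trivial involution a -> a.
% For \lambda = -1: \Lambda_{max} = {a : a = a} = Z and \Lambda_{min} = {a + a} = 2Z,
% so the form parameters are exactly the subgroups with
\Lambda_{\rm min}=2\mathbb{Z}\ \subseteq\ \Lambda\ \subseteq\ \mathbb{Z}=\Lambda_{\rm max}
\qquad(\lambda=-1),
% while for \lambda = +1: \Lambda_{max} = {a : a = -a} = 0, hence
\Lambda_{\rm min}=0=\Lambda_{\rm max}\qquad(\lambda=+1),
% and \Lambda = 0 is the only form parameter.
```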
To define Bak’s unitary group or the general quadratic group, we fix a central
element $\lambda\in R$ with $\lambda\overline{\lambda}=1$, and then consider
the form
$\psi_{n}=\begin{pmatrix}0&{\rm I}_{n}\\\ \lambda{{\rm
I}}_{n}&0\end{pmatrix}.$
For more details, see [7], and [8].
Bak’s Unitary or General Quadratic Groups ${\rm GQ}$:
${\rm GQ}(2n,R,\Lambda)~{}=~{}\\{\sigma\in{\rm
GL}(2n,R)\,|\,\overline{\sigma}\psi_{n}\sigma=\psi_{n}\\}.$
### Elementary Quadratic Matrices :
Let $\rho$ be the permutation, defined by $\rho(i)=n+i$ for $i=1,\dots,n$. For
$a\in R$, and $1\leq i,j\leq n$, we define
$q\varepsilon_{ij}(a)={\rm I}_{2n}+ae_{ij}-\overline{a}e_{\rho(j)\rho(i)}$ for
$i\neq j$,
$qr_{ij}(a)=\left\\{\begin{array}[]{ll}{\rm
I}_{2n}+ae_{i\rho(j)}-\lambda\overline{a}e_{j\rho(i)}&\text{for}~{}i\neq j\\\
{\rm I}_{2n}+ae_{i\rho(j)}&\text{for}~{}i=j\end{array}\right.$
$ql_{ij}(a)=\left\\{\begin{array}[]{ll}{\rm
I}_{2n}+ae_{\rho(i)j}-\overline{\lambda}\overline{a}e_{\rho(j)i}&\text{for}~{}i\neq
j\\\ {\rm I}_{2n}+ae_{\rho(i)j}&\text{for}~{}i=j\end{array}\right.$
(Note that for the second and third types of elementary matrices, if $i=j$,
then we get $a=-\lambda\overline{a}$, and hence it forces that
$a\in\Lambda_{\rm max}(R)$.) One checks that the above matrices belong to
$\rm GQ(2n,R,\Lambda)$; cf.[1].
$n$-th Elementary Quadratic Group ${\rm EQ}(2n,R,\Lambda)$:
The subgroup generated by $q\varepsilon_{ij}(a)$, $qr_{ij}(a)$, and
$ql_{ij}(a)$, for $a\in R$ and $1\leq i,j\leq n$. For uniformity we denote the
elementary generators of ${\rm EQ}(2n,R,\Lambda)$ by $\eta_{ij}(*)$.
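As a sanity check of our own (not from [1]), one can verify numerically, in the symplectic specialisation $R=\mathbb{Z}$, trivial involution, $\lambda=-1$, $\Lambda=\Lambda_{\rm max}=R$, that these generators indeed satisfy $\overline{\sigma}\psi_{n}\sigma=\psi_{n}$:

```python
# Check σ^t ψ σ = ψ for sample elementary quadratic generators, n = 2, over Z,
# trivial involution, λ = -1 (so σ̄ = σ^t, ā = a, and λ̄ = λ).
import numpy as np

n, lam, a = 2, -1, 5
I = np.eye(2 * n, dtype=int)
psi = np.block([[np.zeros((n, n), dtype=int), np.eye(n, dtype=int)],
                [lam * np.eye(n, dtype=int), np.zeros((n, n), dtype=int)]])

def e(i, j):                      # matrix unit e_ij (1-indexed)
    m = np.zeros((2 * n, 2 * n), dtype=int)
    m[i - 1, j - 1] = 1
    return m

rho = lambda i: n + i             # the permutation ρ(i) = n + i

gens = [
    I + a * e(1, 2) - a * e(rho(2), rho(1)),        # qε_12(a)
    I + a * e(1, rho(2)) - lam * a * e(2, rho(1)),  # qr_12(a)
    I + a * e(rho(1), 2) - lam * a * e(rho(2), 1),  # ql_12(a)
    I + a * e(rho(1), 1),                           # ql_11(a); a = -λā holds here
]
for s in gens:
    assert (s.T @ psi @ s == psi).all()
print("all sampled generators lie in GQ(4, Z, Z)")
```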
Stabilization map: There are standard embeddings:
${\rm GQ}(2n,R,\Lambda)\longrightarrow{\rm GQ}(2n+2,R,\Lambda)$
given by
$\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\mapsto\begin{pmatrix}a&0&b&0\\\
0&1&0&0\\\ c&0&d&0\\\ 0&0&0&1\end{pmatrix}.$
Hence we have
${\rm GQ}(R,\Lambda)=\underset{\longrightarrow}{\lim}\,\,{\rm
GQ}(2n,R,\Lambda)$.
It is clear that the stabilization map takes generators of ${\rm
EQ}(2n,R,\Lambda)$ to the generators of ${\rm EQ}(2(n+1),R,\Lambda)$. Hence we
have
${\rm EQ}(R,\Lambda)=\underset{\longrightarrow}{\lim}\,\,{\rm
EQ}(2n,R,\Lambda)$
There are standard formulas for the commutators between quadratic elementary
matrices. For details, we refer to [1] (Lemma 3.16); later sections make
repeated use of those relations. The analogue of the Whitehead Lemma for the
general quadratic groups (cf.[1]), due to Bak, allows us to write:
$[{\rm GQ}(R,\Lambda),{\rm GQ}(R,\Lambda)]=[{\rm EQ}(R,\Lambda),{\rm
EQ}(R,\Lambda)]={\rm EQ}(R,\Lambda).$
Hence we define the Whitehead group of the general quadratic group
${\rm K_{1}}{\rm GQ}=\frac{{\rm GQ}(R,\Lambda)}{{\rm EQ}(R,\Lambda)}.$
And, the Whitehead group at the level $m$
${\rm K}_{1,m}{\rm GQ}=\frac{{\rm GQ}_{m}(R,\Lambda)}{{\rm
EQ}_{m}(R,\Lambda)},$
where $m=2n$ in the non-linear cases.
Let $(R,\Lambda)$ be a form ring. We extend the involution of $R$ to the ring
$R[X]$ of polynomials by setting $\overline{X}=X$. As a result we obtain a
form ring $(R[X],\Lambda[X])$.
###### Definition 2.5.
The kernel of the group homomorphism
${\rm K_{1}GQ}(R[X],\Lambda[X])\rightarrow{\rm K_{1}GQ}(R,\Lambda)$
induced from the form ring homomorphism
$(R[X],\Lambda[X])\rightarrow(R,\Lambda):X\mapsto 0$ is denoted by ${\rm
NK_{1}GQ}(R,\Lambda)$. We often refer to it as the Bass nilpotent unitary ${\rm
K_{1}}$-group of $R$, or simply the unitary nil-group.
From the definition it follows that
${\rm K_{1}GQ}(R[X],\Lambda[X])={\rm K_{1}GQ}(R,\Lambda)\oplus{\rm
NK_{1}GQ}(R,\Lambda).$
In this context, we will use the following two types of localizations, mainly
over a graded ring $R=R_{0}\oplus R_{1}\oplus R_{2}\oplus\cdots$.
1. (1)
Principal localization: for a non-nilpotent, non-zero divisor $s$ in $R_{0}$
with $\overline{s}=s$, we consider the multiplicative set
$S=\\{1,s,s^{2},\dots\\}$, and denote the localized form ring by
$(R_{s},\Lambda_{s})$.
2. (2)
Maximal localization: for a maximal ideal $\mathfrak{m}\in\rm Max(R_{0})$, we
take the multiplicative set $S=R_{0}-\mathfrak{m}$, and denote the
localized form ring by $(R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$.
Blanket assumption: We always assume that $2n\geq 6$.
Next, we recall the well-known “Swan–Weibel’s homotopy trick”, which is the
main ingredient to handle the graded case. Let $R=R_{0}\oplus R_{1}\oplus
R_{2}\oplus\cdots$ be a graded ring. An element $a\in R$ will be denoted by
$a=a_{0}+a_{1}+a_{2}+\cdots$, where $a_{i}\in R_{i}$ for each $i$, and all but
finitely many $a_{i}$ are zero. Let $R_{+}=R_{1}\oplus R_{2}\oplus\cdots$.
The graded structure of $R$ induces a graded structure on ${\rm M}_{n}(R)$
(the ring of $n\times n$ matrices).
###### Definition 2.6.
Fix $a\in R_{0}$. For an element $b=b_{0}+b_{1}+\cdots$ in $R$, define a ring
homomorphism $\epsilon:R\rightarrow R[X]$ by
$\epsilon(b)=\epsilon(b_{0}+b_{1}+\cdots)\;=\;b_{0}+b_{1}X+b_{2}X^{2}+\cdots+b_{i}X^{i}+\cdots.$
Then we evaluate the polynomial $\epsilon(b)(X)$ at $X=a$ and denote the image
by $b^{+}(a)$ i.e., $b^{+}(a)=\epsilon(b)(a)$. Note that
$\big{(}b^{+}(x)\big{)}^{+}(y)=b^{+}(xy)$. Observe, $b_{0}=b^{+}(0)$. We shall
use this fact frequently.
The above ring homomorphism $\epsilon$ induces a group homomorphism at the
${\rm GL}(2n,R)$ level for every $n\geq 1$, i.e., for $\alpha\in{\rm
GL}(2n,R)$ we get a map
$\epsilon:{\rm GL}(2n,R,\Lambda)\rightarrow{\rm GL}(2n,R[X],\Lambda[X])\text{
defined by}$
$\alpha=\alpha_{0}\oplus\alpha_{1}\oplus\alpha_{2}\oplus\cdots\mapsto\alpha_{0}\oplus\alpha_{1}X\oplus\alpha_{2}X^{2}\cdots,$
where $\alpha_{i}\in{\rm M}(2n,R_{i})$. As above for $a\in R_{0}$, we define
$\alpha^{+}(a)$ as
$\alpha^{+}(a)=\epsilon(\alpha)(a).$
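A small illustration of our own of the operator $b\mapsto b^{+}(a)$, on the graded ring $R=\mathbb{Z}[t]$ (graded by degree), with an element stored as its list of homogeneous components:

```python
# Swan–Weibel operator on R = Z[t]: the i-th homogeneous component b_i gets
# multiplied by a^i, so b^+(a) is componentwise b_i * a**i.
def plus(b, a):
    """b^+(a) for b given as [b_0, b_1, b_2, ...], with a in R_0 = Z."""
    return [bi * a**i for i, bi in enumerate(b)]

b = [7, 3, 0, 5]                  # b = 7 + 3t + 5t^3, component b_i = coeff of t^i
x, y = 2, 3
assert plus(plus(b, x), y) == plus(b, x * y)      # (b^+(x))^+(y) = b^+(xy)
assert plus(b, 0) == [b[0], 0, 0, 0]              # b^+(0) = b_0
```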
###### Notation 2.7.
By ${\rm GQ}(2n,R[X],\Lambda[X],(X))$ we shall mean the group of all quadratic
matrices over $R[X]$, which are ${\rm I}_{2n}$ modulo $(X)$. Also if $R$ is a
graded ring, then by ${\rm GQ}(2n,R,\Lambda,(R_{+}))$ we shall mean the group
of all quadratic matrices over $R$ which are ${\rm I}_{2n}$ modulo $R_{+}$.
The following lemma highlights a crucial fact which we use repeatedly in
the proof of the “Dilation Lemma”.
###### Lemma 2.8.
Let $R$ be a Noetherian ring and $s\in R$. Then there exists a natural number
$k$ such that the homomorphism ${\rm GQ}(2n,R,\Lambda,s^{k}R)\rightarrow{\rm
GQ}(2n,R_{s},\Lambda_{s})$ $($induced by localization homomorphism
$R\rightarrow R_{s})$ is injective. Moreover, it follows that the induced map
${\rm EQ}(2n,R,\Lambda,s^{k}R)\rightarrow{\rm EQ}(2n,R_{s},\Lambda_{s})$ is
injective.
For the proof of the above lemma we refer to [14], Lemma 5.1. Recall that any
module finite ring $R$ is a direct limit of its finitely generated subrings.
Also, ${\rm G}(R,\Lambda)=\underset{\longrightarrow}{\lim}\,{\rm
G}(R_{i},\Lambda_{i})$, where the limit is taken over all finitely generated
subrings of $R$. Thus one may assume that $C(R)$ is Noetherian, and hence one
may consider module finite (form) rings $(R,\Lambda)$ with identity.
Now we recall a few technical definitions and useful lemmas.
###### Definition 2.9.
A row $(a_{1},a_{2},\dots,a_{n})\in R^{n}$ is said to be unimodular if there
exists $(b_{1},b_{2},\dots,b_{n})\in R^{n}$ such that
$\sum_{i=1}^{n}a_{i}b_{i}=1$. The set of all unimodular rows of length $n$ is
denoted by ${\rm Um}_{n}(R)$.
For any column vector $v\in(R^{2n})^{t}$ we define the row vector
$\widetilde{v}=\overline{v}^{t}\psi_{n}$.
###### Definition 2.10.
We define the map $M:(R^{2n})^{t}\times(R^{2n})^{t}\rightarrow M(2n,R)$ and
the inner product $\langle,\rangle$ as follows:
$\displaystyle M(v,w)$
$\displaystyle=v.\widetilde{w}-\overline{\lambda}\,\overline{w}.\widetilde{v}$
$\displaystyle\langle v,w\rangle$ $\displaystyle=\widetilde{v}.w$
Note that the elementary generators of the groups ${\rm EQ}(2n,R,\Lambda)$ are
of the form ${\rm I}_{2n}+M(*_{1},*_{2})$ for suitably chosen standard basis
vectors.
###### Lemma 2.11.
$($cf.[1]$)$ The group ${\rm EQ}(2n,R,\Lambda)$ is perfect for $n\geq 3$, i.e.,
$[{\rm EQ}(2n,R,\Lambda),{\rm EQ}(2n,R,\Lambda)]={\rm EQ}(2n,R,\Lambda).$
###### Lemma 2.12.
For all elementary generators of ${\rm GQ}(2n,R,\Lambda)$ we have the
following splitting property: for all $x,y\in R$,
$\eta_{ij}(x+y)=\eta_{ij}(x)\eta_{ij}(y).$
${{\bf Proof:}}$ See pg. 43-44, Lemma 3.16, [1].
###### Lemma 2.13.
Let $G$ be a group, and $a_{i},b_{i}\in G$ for $i=1,2,\ldots,n$. Then for
$r_{i}=\Pi_{j=1}^{i}a_{j}$, we have
$\Pi_{i=1}^{n}a_{i}b_{i}=\left(\Pi_{i=1}^{n}r_{i}b_{i}r_{i}^{-1}\right)\Pi_{i=1}^{n}a_{i}.$
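The collection identity of Lemma 2.13 can be checked mechanically; the following small experiment of ours verifies it with exact integer arithmetic in ${\rm SL}_{2}(\mathbb{Z})$:

```python
# Verify: a_1 b_1 a_2 b_2 ... a_n b_n
#       = (r_1 b_1 r_1^{-1})(r_2 b_2 r_2^{-1}) ... (r_n b_n r_n^{-1}) (a_1 ... a_n),
# where r_i = a_1 ... a_i, using determinant-1 integer matrices throughout.
import numpy as np

def inv_sl2(m):                   # exact inverse of an integer matrix with det 1
    (p, q), (u, v) = m
    return np.array([[v, -q], [-u, p]], dtype=int)

E12 = lambda x: np.array([[1, x], [0, 1]], dtype=int)
E21 = lambda x: np.array([[1, 0], [x, 1]], dtype=int)
I2 = np.eye(2, dtype=int)

a = [E12(1), E21(2), E12(-3)]     # the a_i's
b = [E21(5), E12(4), E21(-1)]     # the b_i's

lhs = I2
for ai, bi in zip(a, b):
    lhs = lhs @ ai @ bi

r, rhs = I2, I2
for ai, bi in zip(a, b):
    r = r @ ai                    # r_i = a_1 ... a_i
    rhs = rhs @ r @ bi @ inv_sl2(r)
rhs = rhs @ r                     # trailing factor a_1 ... a_n

assert (lhs == rhs).all()
```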
###### Lemma 2.14.
The group ${\rm GQ}(2n,R,\Lambda,R_{+})\cap{\rm EQ}(2n,R,\Lambda)$ is generated
by the elements of the type $\varepsilon\eta_{ij}(*)\varepsilon^{-1}$, where
$\varepsilon\in{\rm EQ}(2n,R,\Lambda)$ and $*\in R_{+}$.
${{\bf Proof:}}$ Let $\alpha\in{\rm GQ}(2n,R,\Lambda,R_{+})\cap{\rm
EQ}(2n,R,\Lambda)$. Then we can write
$\alpha=\Pi_{k=1}^{r}\eta_{i_{k}j_{k}}(a_{k})$
for some elements $a_{k}\in R$, $k=1,\dots,r$. We can write
$a_{k}=(a_{0})_{k}+(a_{+})_{k}$ for some $(a_{0})_{k}\in R_{0}$ and
$(a_{+})_{k}\in R_{+}$. Using Lemma 2.12, we can write $\alpha$ as
$\alpha=\Pi_{k=1}^{r}(\eta_{i_{k}j_{k}}(a_{0})_{k})(\eta_{i_{k}j_{k}}(a_{+})_{k}).$
Let $\epsilon_{t}=\Pi_{k=1}^{t}\eta_{i_{k}j_{k}}((a_{0})_{k})$ for $1\leq
t\leq r$. By Lemma 2.13, we have
$\alpha=\left(\Pi_{k=1}^{r}\epsilon_{k}\eta_{i_{k}j_{k}}((a_{+})_{k})\epsilon_{k}^{-1}\right)\left(\Pi_{k=1}^{r}\eta_{i_{k}j_{k}}((a_{0})_{k})\right).$
Let us write
$A=\Pi_{k=1}^{r}\epsilon_{k}\eta_{i_{k}j_{k}}((a_{+})_{k})\epsilon_{k}^{-1}$
and $B=\Pi_{k=1}^{r}\eta_{i_{k}j_{k}}((a_{0})_{k})$. Hence $\alpha=AB$. Let
‘over-line’ denote reduction modulo $R_{+}$. Since each $(a_{+})_{k}\in R_{+}$,
we have $\bar{A}=\overline{\rm I}_{2n}$, so
$\overline{\alpha}=\bar{A}\bar{B}=\bar{B}$. On the other hand,
$\overline{\alpha}=\overline{\rm I}_{2n}$, as $\alpha\in{\rm
GQ}(2n,R,\Lambda,R_{+})$. Hence $\overline{B}=\overline{{\rm I}}_{2n}$. Since
the entries of $B$ are in $R_{0}$, it follows that $B={\rm I}_{2n}$. Therefore
it follows that
$\alpha=\Pi_{k=1}^{r}\epsilon_{k}\eta_{i_{k}j_{k}}((a_{+})_{k})\epsilon_{k}^{-1}.$
$\Box$
## 3\. Quillen–Suslin Theory for Bak’s Group over Graded Rings
### 3.1. Local–Global Principle
###### Lemma 3.1.
Let $(R,\Lambda)$ be a form ring and $v\in{\rm EQ}(2n,R,\Lambda)e_{1}$. Let
$w\in R^{2n}$ be a column vector such that $\langle v,w\rangle=0$. Then ${\rm
I}_{2n}+M(v,w)\in{\rm EQ}(2n,R,\Lambda)$.
${{\bf Proof:}}$ Let $v=\varepsilon e_{1}$. Then we have ${\rm
I}_{2n}+M(v,w)=\varepsilon({\rm I}_{2n}+M(e_{1},w_{1}))\varepsilon^{-1}$,
where $w_{1}=\varepsilon^{-1}w$. Since $\langle e_{1},w_{1}\rangle=\langle
v,w\rangle=0$, we have $w_{1}^{T}=(w_{11},\dots,w_{1n-1},0,\dots,w_{12n})$.
Therefore, since $\lambda\bar{\lambda}=1$, we have
${\rm I}_{2n}+M(v,w)=\prod_{\begin{subarray}{c}1\leq j\leq n\\\ 1\leq i\leq
n-1\end{subarray}}\varepsilon
ql_{in}(-\bar{\lambda}\overline{w}_{1n+i})q\varepsilon_{jn}(-\bar{\lambda}\overline{w}_{1j})ql_{nn}^{-1}(*)\varepsilon^{-1},$
which belongs to ${\rm EQ}(2n,R,\Lambda)$. $\Box$
###### Lemma 3.2.
Let $R$ be a graded ring. Let $\alpha\in{\rm EQ}(2n,R,\Lambda)$. Then for
every $a\in R_{0}$ one gets $\alpha^{+}(a)\in{\rm EQ}(2n,R,\Lambda)$.
${{\bf Proof:}}$ Let $\alpha=\Pi_{k=1}^{t}({\rm
I}_{2n}+a_{k}M(e_{i_{k}},e_{j_{k}}))$, where $a_{k}\in R$ and $t\geq 1$. Hence
for $b\in R_{0}$, we have $\alpha^{+}(b)=\Pi_{k=1}^{t}({\rm
I}_{2n}+a_{k}^{+}(b)M(e_{i_{k}},e_{j_{k}}))$. Now taking $v=e_{i_{k}}$ and
$w=a_{k}^{+}(b)e_{j_{k}}$, we have $\langle v,w\rangle=0$ and ${\rm
I}_{2n}+M(v,w)={\rm I}_{2n}+a_{k}^{+}(b)M(e_{i_{k}},e_{j_{k}})$, which belongs
to ${\rm EQ}(2n,R,\Lambda)$ by Lemma 3.1. Hence we have $\alpha^{+}(b)\in{\rm
EQ}(2n,R,\Lambda)$ for $b\in R_{0}$. $\Box$
###### Lemma 3.3.
(Graded Dilation Lemma) Let $\alpha\in{\rm GQ}(2n,R,\Lambda)$ with
$\alpha^{+}(0)={\rm I}_{2n}$ and $\alpha_{s}\in{\rm EQ}(2n,R_{s},\Lambda_{s})$
for some non-zero-divisor $s\in R_{0}$. Then there exists $\beta\in{\rm
EQ}(2n,R,\Lambda)$ such that
$\beta_{s}=\alpha_{s}^{+}(b)$
for some $b=s^{l}$, $l\gg 0$.
${{\bf Proof:}}$ Since $\alpha_{s}\in{\rm EQ}(2n,R_{s},\Lambda_{s})$ with
$(\alpha_{0})_{s}={\rm I}_{2n}$, we can write $\alpha_{s}=\gamma$, where
$\gamma_{ii}=1+g_{ii}$ and $\gamma_{ij}=g_{ij}$ for $i\neq j$, with
$g_{ij}\in(R_{+})_{s}$ for all $i,j$. Choose $l$ large enough such that
every denominator of $g_{ij}$, for all $i,j$, divides $s^{l}$. Then by Lemma
3.2, we have $\alpha_{s}^{+}(s^{l})\in{\rm EQ}(2n,R_{s},\Lambda_{s})$. As all
denominators are cleared, $\alpha_{s}^{+}(s^{l})$ admits a natural
pullback, and hence $\alpha^{+}(s^{l})\in{\rm EQ}(2n,R,\Lambda).$ Call
this pullback $\beta$. $\Box$
###### Lemma 3.4.
Let $\alpha_{s}\in{\rm EQ}(2n,R_{s},\Lambda_{s})$ with $\alpha_{s}^{+}(0)={\rm
I}_{2n}$. Then one gets
$\alpha_{s}^{+}(b+d)\alpha_{s}^{+}(d)^{-1}\in{\rm EQ}(2n,R,\Lambda)$
for any $d\in R_{0}$ and some $b=s^{l}$, $l\gg 0$.
${{\bf Proof:}}$ Since $\alpha_{s}\in{\rm EQ}(2n,R_{s},\Lambda_{s})$, we have
$\alpha_{s}^{+}(X)\in{\rm EQ}(2n,R_{s}[X],\Lambda_{s}[X])$. Let
$\beta^{+}(X)=\alpha^{+}(X+d)\alpha^{+}(d)^{-1},$
where $d\in R_{0}$. Then we have
$\beta^{+}_{s}(X)\in{\rm EQ}(2n,R_{s}[X],\Lambda_{s}[X])$
and $\beta^{+}(0)={\rm I}_{2n}$. Hence by Lemma 3.3, there exists
$b=s^{l}$, $l\gg 0$, such that $\beta^{+}(bX)\in{\rm EQ}(2n,R[X],\Lambda[X])$.
Putting $X=1$, we get the desired result. $\Box$
Proof of Theorem 1.3 – Graded Local-Global Principle:
Since $\alpha_{\mathfrak{m}}\in{\rm
EQ}(2n,R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$ for all $\mathfrak{m}\in{\rm
Max}(C(R_{0}))$, for each $\mathfrak{m}$ there exists $s\in
C(R_{0})\setminus\mathfrak{m}$ such that $\alpha_{s}\in{\rm
EQ}(2n,R_{s},\Lambda_{s})$. Using the Noetherian property, we can choose
finitely many such elements, say with $s_{1}+\dots+s_{r}=1$. From Lemma 3.3,
we have $\alpha^{+}(b_{i})\in{\rm EQ}(2n,R,\Lambda)$ for some
$b_{i}=s_{i}^{l_{i}}$ with $b_{1}+\dots+b_{r}=1$. Now consider
$\alpha_{s_{1}s_{2}\dots s_{r}}$,
which is the image of $\alpha$ in $R_{s_{1}s_{2}\dots s_{r}}$. By Lemma 2.8,
$\alpha\mapsto\alpha_{s_{1}s_{2}\dots s_{r}}$ is injective. Hence we can
perform our calculation in $R_{s_{1}s_{2}\dots s_{r}}$ and then pull it back
to $R$.
$\begin{split}\alpha_{s_{1}s_{2}\dots s_{r}}=&\alpha_{s_{1}s_{2}\dots
s_{r}}^{+}(b_{1}+b_{2}+\dots+b_{r})\\\
=&((\alpha_{s_{1}})_{s_{2}s_{3}\dots})^{+}(b_{1}+\dots+b_{r})((\alpha_{s_{1}})_{s_{2}s_{3}\dots})^{+}(b_{2}+\dots+b_{r})^{-1}\dots\\\
&((\alpha_{s_{i}})_{s_{1}\dots\hat{s_{i}}\dots
s_{r}})^{+}(b_{i}+\dots+b_{r})((\alpha_{s_{i}})_{s_{1}\dots\hat{s_{i}}\dots
s_{r}})^{+}(b_{i+1}+\dots+b_{r})^{-1}\\\ &((\alpha_{s_{r}})_{s_{1}s_{2}\dots
s_{r-1}})^{+}(b_{r})((\alpha_{s_{r}})_{s_{1}s_{2}\dots
s_{r-1}})^{+}(0)^{-1}\end{split}$
Observe that $((\alpha_{s_{i}})_{s_{1}\dots\hat{s_{i}}\dots
s_{r}})^{+}(b_{i}+\dots+b_{r})((\alpha_{s_{i}})_{s_{1}\dots\hat{s_{i}}\dots
s_{r}})^{+}(b_{i+1}+\dots+b_{r})^{-1}\in{\rm EQ}(2n,R,\Lambda)$ due to Lemma
3.4 (here $\hat{s_{i}}$ means we omit $s_{i}$ in the product
$s_{1}\dots\hat{s_{i}}\dots s_{r}$), and hence $\alpha_{s_{1}s_{2}\dots
s_{r}}\in{\rm EQ}(2n,R_{s_{1}\dots s_{r}},\Lambda_{s_{1}\dots s_{r}})$. This
proves $\alpha\in{\rm EQ}(2n,R,\Lambda)$. $\Box$
### 3.2. Normality and Local–Global
Next we are going to show that if $K$ is a commutative ring with identity and
$R$ is an associative $K$-algebra such that $R$ is finite as a left
$K$-module, then the normality of the elementary subgroup is equivalent
to the local-global principle for the quadratic group. (One can also consider
$R$ as a right $K$-algebra.)
###### Lemma 3.5.
$($Bass; cf.[4]$)$ Let $B$ be a commutative local ring with identity and $A$
an associative $B$-algebra which is finite as a left $B$-module.
Then $A$ is semilocal.
###### Theorem 3.6.
$($cf.[7]$)$ Let $A$ be a semilocal ring $($not necessarily commutative$)$
with involution. Let $v\in{\rm Um}_{2n}(A)$. Then $v\in{\rm EQ}(2n,A)e_{1}$.
In other words, the group ${\rm EQ}(2n,A)$ acts transitively on ${\rm
Um}_{2n}(A)$.
Before proving the next theorem we need to recall a theorem from [7]:
###### Theorem 3.7.
$($Local-Global Principle$)$ Let $B$ be a commutative ring with identity and
$A$ an associative $B$-algebra which is finite as a left $B$-module. If
$\alpha(X)\in{\rm GQ}(2n,A[X],\Lambda[X])$, $\alpha(0)={\rm
I}_{2n}$ and $\alpha_{\mathfrak{m}}(X)\in{\rm
EQ}(2n,A_{\mathfrak{m}}[X],\Lambda_{\mathfrak{m}}[X])$ for every maximal ideal
$\mathfrak{m}\in{\rm Max}(B)$, then $\alpha(X)\in{\rm EQ}(2n,A[X],\Lambda[X])$.
###### Theorem 3.8.
Let $K$ be a commutative ring with unity and $R=\oplus_{i=0}^{\infty}R_{i}$ be
a graded $K$-algebra such that $R_{0}$ is finite as a left $K$-module. Then
for $n\geq 3$ the following are equivalent:
$(1)$ ${\rm EQ}(2n,R,\Lambda)$ is a normal subgroup of ${\rm
GQ}(2n,R,\Lambda)$.
$(2)$ If $\alpha\in{\rm GQ}(2n,R,\Lambda)$ with $\alpha^{+}(0)={\rm I}_{2n}$
and $\alpha_{\mathfrak{m}}\in{\rm
EQ}(2n,R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$ for every maximal ideal
$\mathfrak{m}\in{\rm Max}(K)$, then $\alpha\in{\rm EQ}(2n,R,\Lambda)$.
${{\bf Proof:}}$ $(1)\Rightarrow(2)$ We have proved Lemma 3.1 for an arbitrary
form ring with identity and shown that the local-global principle is a
consequence of Lemma 3.1. So the result holds, in particular, when ${\rm
EQ}(2n,R,\Lambda)$ is a normal subgroup of ${\rm GQ}(2n,R,\Lambda)$.
$(2)\Rightarrow(1)$ Since polynomial rings are a special case of graded rings,
we may use Theorem 3.7. Let $\alpha\in{\rm EQ}(2n,R,\Lambda)$ and
$\beta\in{\rm GQ}(2n,R,\Lambda)$. Then $\beta\alpha\beta^{-1}$
can be written as a product of matrices of the form $({\rm I}_{2n}+\beta
M(*_{1},*_{2})\beta^{-1})$, with $\langle*_{1},*_{2}\rangle=0$, where $*_{1}$
and $*_{2}$ are suitably chosen basis vectors. Let $v=\beta*_{1}$. Then each
such factor is of the form ${\rm I}_{2n}+M(v,w)$ for some $w\in R^{2n}$. We
must show that each ${\rm I}_{2n}+M(v,w)\in{\rm EQ}(2n,R,\Lambda)$.
Consider $\gamma={\rm I}_{2n}+M(v,w)$. Then $\gamma^{+}(0)={\rm I}_{2n}$. By
Lemma 3.5, the ring $S^{-1}R$ is semilocal, where
$S=K\setminus\mathfrak{m}$ and $\mathfrak{m}\in{\rm Max}(K)$. Since $v\in{\rm
Um}_{2n}(R)$, by Theorem 3.6 we have $v\in{\rm
EQ}(2n,S^{-1}R,S^{-1}\Lambda)e_{1}$. Therefore, applying Lemma 3.1 to the
form ring $(S^{-1}R,S^{-1}\Lambda)$, we get $\gamma_{\mathfrak{m}}\in{\rm
EQ}(2n,R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$ for every maximal ideal
$\mathfrak{m}\in{\rm Max}(K)$. Hence by hypothesis $\gamma\in{\rm
EQ}(2n,R,\Lambda)$. This completes the proof. $\Box$
###### Remark 3.9.
We conclude that the local-global principle for the elementary subgroups and
their normality properties are equivalent.
## 4\. Bass Nil Group ${\rm NK_{1}GQ}(R)$
In this section we recall some basic definitions and properties of the
representatives of ${\rm NK_{1}GQ}(R)$. We represent any element of
${\rm M}_{2n}(R)$ as $\begin{pmatrix}a&b\\\ c&d\end{pmatrix},$ where
$a,b,c,d\in{\rm M}_{n}(R)$. For $\alpha=\begin{pmatrix}a&b\\\
c&d\end{pmatrix}$ we call $\begin{pmatrix}a&b\end{pmatrix}$ the upper half of
$\alpha$. Let $(R,\lambda,\Lambda)$ be a form ring. By setting
$\bar{\Lambda}=\\{\bar{a}:a\in\Lambda\\}$ we get another form ring
$(R,\bar{\lambda},\bar{\Lambda})$. We can extend the involution of $R$ to
${\rm M}_{n}(R)$ by setting $(a_{ij})^{*}=(\overline{a}_{ji})$.
###### Definition 4.1.
Let $(R,\lambda,\Lambda)$ be a form ring. A matrix $\alpha=(a_{ij})\in{\rm
M}_{n}(R)$ is said to be $\Lambda$-Hermitian if $\alpha=-\lambda\alpha^{*}$
and all the diagonal entries of $\alpha$ are contained in $\Lambda$. A matrix
$\beta\in{\rm M}_{n}(R)$ is said to be $\bar{\Lambda}$-Hermitian if
$\beta=-\bar{\lambda}\beta^{*}$ and all the diagonal entries of $\beta$ are
contained in $\bar{\Lambda}$.
###### Remark 4.2.
A matrix $\alpha\in{\rm M}_{n}(R)$ is $\Lambda$-Hermitian if and only if
$\alpha^{*}$ is $\bar{\Lambda}$-Hermitian. The set of all $\Lambda$-Hermitian
matrices forms a group under matrix addition.
###### Lemma 4.3.
[15, Example 2] Let $\beta\in{\rm GL}_{n}(R)$ be a $\Lambda$-Hermitian matrix.
Then the matrix $\alpha^{*}\beta\alpha$ is $\Lambda$-Hermitian for every
$\alpha\in{\rm GL}_{n}(R)$.
###### Definition 4.4.
Let $\alpha=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in{\rm M}_{2n}(R)$ be a
matrix. Then $\alpha$ is said to be a $\Lambda$-quadratic matrix if one of the
following equivalent conditions holds:
1. (1)
$\alpha\in{\rm GQ}(2n,R,\Lambda)$ and the diagonal entries of the matrices
$a^{*}c,b^{*}d$ are in $\Lambda$,
2. (2)
$a^{*}d+\lambda c^{*}b={\rm I}_{n}$ and the matrices $a^{*}c,b^{*}d$ are
$\Lambda$-Hermitian,
3. (3)
$\alpha\in{\rm GQ}(2n,R,\Lambda)$ and the diagonal entries of the matrices
$ab^{*},cd^{*}$ are in $\Lambda$,
4. (4)
$ad^{*}+\lambda bc^{*}={\rm I}_{n}$ and the matrices $ab^{*},cd^{*}$ are
$\Lambda$-Hermitian.
###### Remark 4.5.
The set of all $\Lambda$-quadratic matrices of order $2n$ forms a group called
$\Lambda$-quadratic group. We denote this group by ${\rm
GQ}^{\lambda}(2n,R,\Lambda)$. If $2\in R^{*}$, then we have $\Lambda_{\rm
min}=\Lambda_{\rm max}$. In this case the notions of quadratic groups and of
$\Lambda$-quadratic groups coincide; this also happens when
$\Lambda=\Lambda_{\rm max}$. Hence quadratic groups are special cases of
$\Lambda$-quadratic groups. Other classical groups appear as
$\Lambda$-quadratic groups in the following way. Let $R$ be a commutative ring
with trivial involution. Then
${\rm GQ}^{\lambda}(2n,R,\Lambda)=\begin{cases}{\rm Sp}_{2n}(R),&\text{if
}\lambda=-1\text{ and }\Lambda=\Lambda_{\rm max}=R\\\ {\rm
O}_{2n}(R),&\text{if }\lambda=1\text{ and }\Lambda=\Lambda_{\rm
min}=0\end{cases}$
And for the general linear group ${\rm GL}_{n}(R)$, we have ${\rm
GL}_{n}(R)={\rm GQ}^{1}(2n,\mathbb{H}(R),\Lambda=\Lambda_{\rm max})$, where
$\mathbb{H}(R)$ denotes the ring $R\oplus R^{op}$, with $R^{op}$ the opposite
ring of $R$, and the involution on $\mathbb{H}(R)$ is defined by
$\overline{(x,y)}=(y,x)$. Thus the study of $\Lambda$-quadratic matrices
unifies the study of the classical groups.
We recall the following results from [15].
###### Lemma 4.6.
Let $\alpha=\begin{pmatrix}a&0\\\ 0&d\end{pmatrix}\in{\rm M}_{2n}(R)$. Then
$\alpha\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$ if and only if $a\in{\rm
GL}_{n}(R)$ and $d=(a^{*})^{-1}$.
${{\bf Proof:}}$ Let $\alpha\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$. In view of
$(2)$ of Definition 4.4, we have $a^{*}d={\rm I}_{n}$. Hence $a$ is
invertible and $d=(a^{*})^{-1}$. The converse holds by $(2)$ of Definition 4.4.
$\Box$
###### Definition 4.7.
Let $\alpha\in{\rm GL}_{n}(R)$ be a matrix. A matrix of the form
$\begin{pmatrix}\alpha&0\\\ 0&(\alpha^{*})^{-1}\end{pmatrix}$ is denoted by
$\mathbb{H}(\alpha)$ and is said to be hyperbolic.
###### Remark 4.8.
In a similar way we can show that a matrix of the form
$T_{12}(\beta):=\begin{pmatrix}{\rm I}_{n}&\beta\\\ 0&{\rm
I}_{n}\end{pmatrix}$ is a $\Lambda$-quadratic matrix if and only if $\beta$ is
$\bar{\Lambda}$-Hermitian, and a matrix of the form
$T_{21}(\gamma):=\begin{pmatrix}{\rm I}_{n}&0\\\ \gamma&{\rm
I}_{n}\end{pmatrix}$ is a $\Lambda$-quadratic matrix if and only if $\gamma$
is $\Lambda$-Hermitian.
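To make Remark 4.8 concrete, here is a numerical sanity check of our own in the symplectic specialisation of Remark 4.5 ($R=\mathbb{Z}$, trivial involution, $\lambda=-1$, $\Lambda=R$), where both Hermitian conditions reduce to symmetry:

```python
# In this specialisation σ is Λ-quadratic iff σ^t ψ σ = ψ.  We check a
# hyperbolic matrix H(α) and the matrices T_12(β), T_21(γ) with β, γ symmetric.
import numpy as np

n = 2
Z, I2 = np.zeros((n, n), dtype=int), np.eye(n, dtype=int)
psi = np.block([[Z, I2], [-I2, Z]])

alpha = np.array([[1, 3], [2, 7]])            # det = 1, so α ∈ GL_2(Z)
alpha_inv_t = np.array([[7, -2], [-3, 1]])    # (α^*)^{-1} = (α^{-1})^t here
H = np.block([[alpha, Z], [Z, alpha_inv_t]])

beta = np.array([[2, 5], [5, -1]])            # symmetric = Λ̄-Hermitian here
gamma = np.array([[0, 4], [4, 3]])            # symmetric = Λ-Hermitian here
T12 = np.block([[I2, beta], [Z, I2]])
T21 = np.block([[I2, Z], [gamma, I2]])

for s in (H, T12, T21):
    assert (s.T @ psi @ s == psi).all()
```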
As in the quadratic case, we can define the notion of
$\Lambda$-elementary quadratic groups in the following way:
###### Definition 4.9.
The $\Lambda$-elementary quadratic group, denoted by ${\rm
EQ}^{\lambda}(2n,R,\Lambda)$, is the group generated by the $2n\times 2n$
matrices of the form $\mathbb{H}(\alpha)$ with $\alpha\in{\rm E}_{n}(R)$,
$T_{12}(\beta)$ with $\beta$ $\bar{\Lambda}$-Hermitian, and $T_{21}(\gamma)$
with $\gamma$ $\Lambda$-Hermitian.
###### Lemma 4.10.
Let $A=\begin{pmatrix}\alpha&\beta\\\ 0&\delta\end{pmatrix}\in{\rm
M}_{2n}(R)$. Then $A\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$ if and only if
$\alpha\in{\rm GL}_{n}(R)$, $\delta=(\alpha^{*})^{-1}$ and $\alpha^{-1}\beta$
is $\bar{\Lambda}$-Hermitian. In this case
$A\equiv\mathbb{H}(\alpha)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}$.
${{\bf Proof:}}$ Let $A\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$. Then by $(4)$ of
Definition 4.4, we have $\alpha\delta^{*}={\rm I}_{n}$ and $\alpha\beta^{*}$
is $\Lambda$-Hermitian. Hence $\alpha$ is invertible and
$\delta=(\alpha^{*})^{-1}$. For $\alpha^{-1}\beta$, we get
$(\alpha^{-1}\beta)^{*}=\beta^{*}(\alpha^{-1})^{*}=\alpha^{-1}(\alpha\beta^{*})(\alpha^{-1})^{*},$
which is $\Lambda$-Hermitian by Lemma 4.3. Hence $\alpha^{-1}\beta$ is
$\bar{\Lambda}$-Hermitian. Conversely, the conditions on $A$ fulfill
condition $(4)$ of Definition 4.4, hence $A$ is $\Lambda$-quadratic. Since
$\alpha^{-1}\beta$ is $\bar{\Lambda}$-Hermitian,
$T_{12}(-\alpha^{-1}\beta)\in{\rm EQ}^{\lambda}(2n,R,\Lambda)$
and $AT_{12}(-\alpha^{-1}\beta)=\mathbb{H}(\alpha)$. Thus
$A\equiv\mathbb{H}(\alpha)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}$. $\Box$
A similar proof will prove the following:
###### Lemma 4.11.
Let $B=\begin{pmatrix}\alpha&0\\\ \gamma&\delta\end{pmatrix}\in{\rm
M}_{2n}(R)$. Then $B\in{\rm GQ}^{\lambda}(2n,R,\Lambda)$ if and only if
$\alpha\in{\rm GL}_{n}(R)$, $\delta=(\alpha^{*})^{-1}$ and $\gamma$ is
$\Lambda$-Hermitian. In this case
$B\equiv\mathbb{H}(\alpha)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}.$
###### Lemma 4.12.
Let $\alpha=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in{\rm
GQ}^{\lambda}(2n,R,\Lambda)$. Then
$\alpha\equiv\mathbb{H}(a)\pmod{{\rm EQ}^{\lambda}(4n,R,\Lambda)}$
if $a\in{\rm GL}_{n}(R).$ Moreover, if $a\in{\rm E}_{n}(R)$, then
$\alpha\equiv\mathbb{H}(a)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}$.
${{\bf Proof:}}$ By the same argument as in Lemma 4.10, $a^{-1}b$ is
$\bar{\Lambda}$-Hermitian. Hence $T_{12}(-a^{-1}b)\in{\rm
EQ}^{\lambda}(2n,R,\Lambda)$, and consequently $\alpha
T_{12}(-a^{-1}b)=\begin{pmatrix}a&0\\\ c&d^{\prime}\end{pmatrix}\in{\rm
GQ}^{\lambda}(2n,R,\Lambda)$ for some $d^{\prime}\in{\rm GL}_{n}(R)$. Hence by
Lemma 4.11, we get
$\alpha T_{12}(-a^{-1}b)\equiv H(a)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}.$
Hence $\alpha\equiv H(a)\pmod{{\rm EQ}^{\lambda}(2n,R,\Lambda)}$. $\Box$
###### Definition 4.13.
Let $\alpha=\begin{pmatrix}a_{1}&b_{1}\\\ c_{1}&d_{1}\end{pmatrix}\in{\rm
M}_{2r}(R)$, $\beta=\begin{pmatrix}a_{2}&b_{2}\\\
c_{2}&d_{2}\end{pmatrix}\in{\rm M}_{2s}(R)$. As before, we define
$\alpha\perp\beta$, and consider an embedding
$\rm GQ^{\lambda}(2n,R,\Lambda)\rightarrow\rm
GQ^{\lambda}(2n+2,R,\Lambda),\,\,\,\alpha\mapsto\alpha\perp{\rm I}_{2}.$
We denote ${\rm
GQ}^{\lambda}(R,\Lambda)=\underset{n=1}{\overset{\infty}{\cup}}{\rm
GQ}^{\lambda}(2n,R,\Lambda)$ and ${\rm
EQ}^{\lambda}(R,\Lambda)=\underset{n=1}{\overset{\infty}{\cup}}{\rm
EQ}^{\lambda}(2n,R,\Lambda)$.
In view of the quadratic analogue of the Whitehead Lemma, the group ${\rm
EQ}^{\lambda}(R,\Lambda)$ coincides with the commutator subgroup of ${\rm
GQ}^{\lambda}(R,\Lambda)$. Therefore the group
${\rm K_{1}}{\rm GQ}^{\lambda}(R,\Lambda):=\frac{{\rm
GQ}^{\lambda}(R,\Lambda)}{{\rm EQ}^{\lambda}(R,\Lambda)}$
is well-defined. The class of a matrix $\alpha\in{\rm
GQ}^{\lambda}(R,\Lambda)$ in the group ${\rm K_{1}}{\rm
GQ}^{\lambda}(R,\Lambda)$ is denoted by $[\alpha]$. In this way we obtain a
${\rm K_{1}}$-functor ${\rm K_{1}}{\rm GQ}^{\lambda}$ acting form the category
of form rings to the category of abelian groups.
###### Remark 4.14.
As in the quadratic case, the kernel of the group homomorphism
${\rm K_{1}GQ}^{\lambda}(R[X],\Lambda[X])\rightarrow{\rm
K_{1}GQ}^{\lambda}(R,\Lambda)$
induced from the form ring homomorphism
$(R[X],\Lambda[X])\rightarrow(R,\Lambda);X\mapsto 0$ is denoted by ${\rm
NK_{1}GQ}^{\lambda}(R,\Lambda)$. Since the $\Lambda$-quadratic groups are a
subclass of the quadratic groups, the local-global principle holds for
$\Lambda$-quadratic groups as well. We use this throughout the next section.
## 5\. Absence of torsion in ${\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$
In this section we give the proofs of Theorem 1.1 and Theorem 1.2. In [6], the
proof of the theorem in the linear case is based on two key results, viz.
Higman linearisation and a lemma on a polynomial identity in truncated
polynomial rings. Here we recall the lemma with its proof, to highlight its
connection with the big Witt vectors. Recently, in [15], V. Kopeiko deduced an
analogue of the Higman linearisation process for a subclass of the general
quadratic groups.
###### Definition 5.1.
For an associative ring $R$ with unity we consider the truncated polynomial
ring
$R_{t}=\frac{R[X]}{(X^{t+1})}.$
###### Lemma 5.2.
$($cf.[6], Lemma 4.1$)$ Let $P(X)\in R[X]$ be any polynomial. Then the
following identity holds in the ring $R_{t}:$
$(1+X^{r}P(X))=(1+X^{r}P(0))(1+X^{r+1}Q(X)),$
where $r>0$ and $Q(X)\in R[X]$, with $\deg(Q(X))<t-r$.
${{\bf Proof:}}$ Let us write $P(X)=a_{0}+a_{1}X+\cdots+a_{t}X^{t}$. Then we
can write $P(X)=P(0)+XP^{\prime}(X)$ for some $P^{\prime}(X)\in R[X]$. Now, in
$R_{t}$
$\displaystyle(1+X^{r}P(X))(1+X^{r}P(0))^{-1}$
$\displaystyle=(1+X^{r}P(0)+X^{r+1}P^{\prime}(X))(1+X^{r}P(0))^{-1}$
$\displaystyle=1+X^{r+1}P^{\prime}(X)(1-X^{r}P(0)+X^{2r}(P(0))^{2}-\cdots)$
$\displaystyle=1+X^{r+1}Q(X)$
where $Q(X)\in R[X]$ with $\deg(Q(X))<t-r$. Hence the lemma follows. $\Box$
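The computation in this proof can be checked mechanically. Below is a minimal sketch over the commutative ring $R=\mathbb{Z}$ (the lemma itself needs no commutativity); polynomials are represented as coefficient lists and all products are truncated modulo $X^{t+1}$. The helper names `mul` and `split` are our own, not from the paper.

```python
# Verify the identity of Lemma 5.2 numerically over R = Z.
# Polynomials are coefficient lists [c0, c1, ...], arithmetic mod X^(t+1).

def mul(f, g, t):
    """Product of coefficient lists f, g truncated mod X^(t+1)."""
    h = [0] * (t + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j <= t:
                h[i + j] += a * b
    return h

def split(P, r, t):
    """Return Q with (1 + X^r P(X)) = (1 + X^r P(0))(1 + X^{r+1} Q(X))
    in R_t, following the geometric-series computation in the proof."""
    Pprime = P[1:]                       # P(X) = P(0) + X P'(X)
    inv = [0] * (t + 1)                  # (1 + X^r P(0))^{-1} in R_t
    sign, k = 1, 0
    while r * k <= t:
        inv[r * k] = sign * P[0] ** k
        sign, k = -sign, k + 1
    tail = mul(Pprime, inv, t)           # X^{r+1} Q(X) = X^{r+1} P'(X) inv
    return tail[: t - r]                 # deg(Q) < t - r

t, r = 6, 2
P = [3, -1, 4, 1]                        # P(X) = 3 - X + 4X^2 + X^3
Q = split(P, r, t)
lhs = [0] * (t + 1); lhs[0] = 1
for i, c in enumerate(P):
    if r + i <= t:
        lhs[r + i] += c                  # 1 + X^r P(X) mod X^{t+1}
factor1 = [0] * (t + 1); factor1[0] = 1; factor1[r] = P[0]
factor2 = [0] * (t + 1); factor2[0] = 1
for i, c in enumerate(Q):
    if r + 1 + i <= t:
        factor2[r + 1 + i] += c
assert mul(factor1, factor2, t) == lhs
```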
Remark. Iterating the above process we can write for any polynomial $P(X)\in
R[X]$,
$(1+XP(X))=\Pi_{i=1}^{t}(1+a_{i}X^{i})$
in $R_{t}$, for some $a_{i}\in R$. By ascending induction it will follow that
the $a_{i}$’s are uniquely determined. In fact, if $R$ is commutative then
$a_{i}$’s are the $i$-th component of the ghost vector corresponding to the
big Witt vector of $(1+XP(X))\in{\rm W}(R)=(1+XR[[X]])^{\times}$. For details
see ([11], §I).
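The iterative factorisation of the remark can also be carried out explicitly. The following sketch (again over $R=\mathbb{Z}$; the helper name `witt_coordinates` is ours) finds the $a_{i}$ by the ascending induction described above: at step $i$ the coefficient of $X^{i}$ in the remaining factor forces $a_{i}$.

```python
# Factor 1 + X P(X) in R_t as prod_{i=1}^{t} (1 + a_i X^i), over R = Z.

def mul(f, g, t):
    h = [0] * (t + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j <= t:
                h[i + j] += a * b
    return h

def witt_coordinates(f, t):
    """Given f = 1 + X P(X) as a coefficient list mod X^{t+1},
    return [a_1, ..., a_t] with f = prod (1 + a_i X^i) in R_t."""
    a = []
    rem = list(f) + [0] * (t + 1 - len(f))
    for i in range(1, t + 1):
        ai = rem[i]              # forced by comparing X^i coefficients
        a.append(ai)
        inv = [0] * (t + 1)      # power-series inverse of (1 + ai X^i)
        sign, k = 1, 0
        while i * k <= t:
            inv[i * k] = sign * ai ** k
            sign, k = -sign, k + 1
        rem = mul(rem, inv, t)   # divide out the factor just found
    return a

t = 5
f = [1, 2, -1, 0, 3, 1]          # 1 + 2X - X^2 + 3X^4 + X^5
a = witt_coordinates(f, t)
check = [1] + [0] * t
for i, ai in enumerate(a, start=1):
    factor = [0] * (t + 1); factor[0] = 1; factor[i] = ai
    check = mul(check, factor, t)
assert check == f                # the product reproduces f in R_t
```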
###### Lemma 5.3.
Let $R$ be a ring with $1/k\in R$ and $P(X)\in R[X]$. Assume $P(0)$ lies in
the center of $R$. Then
$(1+X^{r}P(X))^{k^{r}}=1\Rightarrow(1+X^{r}P(X))=(1+X^{r+1}Q(X))$
in the ring $R_{t}$ for some $r>0$ and $Q(X)\in R[X]$ with $\deg(Q(X))<t-r$.
The following result is due to V. Kopeiko, cf. [15].
###### Proposition 5.4.
(Higman linearisation) Let $(R,\Lambda)$ be a form ring. Then, every element
of the group ${\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$ has a representative
of the form
$[a;b,c]_{n}=\begin{pmatrix}{\rm I}_{r}-aX&bX\\\ -cX^{n}&{\rm
I}_{r}+a^{*}X+\cdots+(a^{*})^{n}X^{n}\end{pmatrix}\in{\rm
GQ}^{\lambda}(2r,R[X],\Lambda[X])$
for some positive integers $r$ and $n$, where $a,b,c\in{\rm M}_{r}(R)$ satisfy
the following conditions:
1. (1)
the matrices $b$ and $ab$ are Hermitian and also $ab=ba^{*}$,
2. (2)
the matrices $c$ and $ca$ are Hermitian and also $ca=a^{*}c$,
3. (3)
$bc=a^{n+1}$ and $cb=(a^{*})^{n+1}$.
###### Corollary 5.5.
Let $[\alpha]\in{\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$ have the
representation $[a;b,c]_{n}$ for some $a,b,c\in{\rm M}_{r}(R)$ according to
Proposition 5.4. Then
$[\alpha]=[\mathbb{H}({\rm I}_{r}-aX)]$
in ${\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$ if $({\rm I}_{r}-aX)\in{\rm
GL}_{r}(R)$.
${{\bf Proof:}}$ By Lemma 4.12 we have $[a;b,c]_{n}\equiv\mathbb{H}({\rm
I}_{r}-aX)\pmod{{\rm EQ}^{\lambda}(2r,R[X],\Lambda[X])}$. Hence we have
$[\alpha]=[\mathbb{H}({\rm I}_{r}-aX)]$ in ${\rm NK_{1}}{\rm
GQ}^{\lambda}(R,\Lambda)$. $\Box$
Proof of Theorem 1.1:
By Proposition 5.4, we have $[\alpha]=[[a;b,c]_{n}]$ for some $a,b,c\in{\rm
M}_{s}(R)$ and some natural numbers $n$ and $s$. Note that in Step $1$
of Proposition 5.4, the invertibility of the first corner of the matrix
$\alpha$ is not affected by the linearisation process, and it is also
preserved in the remaining steps of Proposition 5.4. Therefore, since the
first corner matrix $A(X)\in{\rm GL}_{r}(R[X])$, we have $({\rm
I}_{s}-aX)\in{\rm GL}_{s}(R[X])$. By Corollary 5.5, we have
$[\alpha]=[\mathbb{H}({\rm I}_{s}-aX)]$. Now let $[\alpha]$ be $k$-torsion.
Then $[\mathbb{H}({\rm I}_{s}-aX)]$ is $k$-torsion. Since $({\rm I}_{s}-aX)$
is invertible, it follows that $a$ is nilpotent. Let $a^{t+1}=0$. Since
$[({\rm I}_{s}-aX)]^{k}=[{\rm I}]$ in ${\rm K_{1}}{\rm
GQ}^{\lambda}(R[X],\Lambda[X])$, by arguing as in [7] we have $[{\rm
I}_{s}-aX]=[{\rm I}]$ in ${\rm K_{1}}{\rm GQ}^{\lambda}(R[X],\Lambda[X])$.
This completes the proof. $\Box$
Proof of Theorem 1.2 – (Graded Version):
Consider the ring homomorphism $f:R\rightarrow R[X]$ defined by
$f(a_{0}+a_{1}+\dots)=a_{0}+a_{1}X+\dots.$
Then
$\displaystyle[({\rm I}+N)^{k}]=[{\rm I}]$ $\displaystyle\Rightarrow f([{\rm
I}+N]^{k})=[f({\rm I}+N)]^{k}=[{\rm I}]$ $\displaystyle\Rightarrow[({\rm
I}+N_{0}+N_{1}X+\dots+N_{r}X^{r})]^{k}=[{\rm I}].$
Let $\mathfrak{m}$ be a maximal ideal in $R_{0}$. By Theorem 1.1, we have
$[({\rm I}+N_{0}+N_{1}X+\dots+N_{r}X^{r})]=[{\rm I}]$
in ${\rm NK_{1}}{\rm
GQ}^{\lambda}(R_{\mathfrak{m}},\Lambda_{\mathfrak{m}})$. Hence by using the
local-global principle we conclude
$[({\rm I}+N)]=[{\rm I}+N_{0}]$
in ${\rm NK_{1}}{\rm GQ}^{\lambda}(R,\Lambda)$, as required. $\Box$
Acknowledgment: We thank Sergey Sinchuk and V. Kopeiko for many useful
discussions.
## References
* [1] A. Bak; ${\rm K}$-Theory of forms. Annals of Mathematics Studies, 98. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo (1981).
* [2] A. Bak, R. Basu, R.A. Rao; Local-global principle for transvection groups. Proceedings of The American Mathematical Society 138 (2010), no. 4, 1191–1204.
* [3] H. Bass; Algebraic K-Theory, Benjamin, New York-Amsterdam (1968).
* [4] H. Bass; K-Theory and stable algebra, Publ. Math. I.H.E.S. No. 22 (1964), 5–60.
* [5] R. Basu, R.A. Rao, Reema Khanna; On Quillen’s local global principle. Commutative algebra and algebraic geometry, Contemp. Math., 390, Amer. Math. Soc., Providence, RI (2005), 17–30.
* [6] R. Basu; Absence of torsion for $NK_{1}(R)$ over associative rings, J. Algebra Appl. 10(4) (2011), 793–799.
* [7] R. Basu; Local-global principle for general quadratic and general Hermitian groups and the nilpotence of ${\rm KH}_{1}$. Zap. Nauchn. Sem. S.-Petersburg. Otdel. Mat. Inst. Steklov. (POMI) 452 (2016), Voprosy Teorii Predstavleniĭ Algebr i Grupp. 30, 5–31. translation in J. Math. Sci. (N.Y.) 232 (2018), no. 5, 591–609.
* [8] R. Basu; On Transvection Subgroups of General Quadratic Modules. Journal of Algebra and Its Application. Vol. 17, No. 11, 1850217 (2018).
* [9] R. Basu, Ravi A. Rao, Reema Khanna; Pillars of relative Quillen–Suslin Theory. “Leavitt Path Algebras and K-theory”, ISI Series, Springer (2020) 211–223.
* [10] R. Basu, Manish Kumar Singh; On Quillen–Suslin Theory for Classical Groups; Revisited over Graded Rings. Contemp. Math. Amer. Math. Soc., Vol. 751, (2020), 5–18.
* [11] S. Bloch; Algebraic ${\rm K}$-Theory and crystalline cohomology, Publ. Math. I.H.E.S. 47 (1977), 187–268.
* [12] I. Hambleton, W. Lück; Induction and computation of Bass Nil groups for finite groups. Pure Appl. Math. Q. 8 (2012), no. 1, 199–219.
* [13] D.R. Harmon; ${\rm NK}_{1}$ of finite groups. Proc. Amer. Math. Soc. 100(2) (1987), 229–232.
* [14] R. Hazrat, N. Vavilov; ${\rm K_{1}}$ of Chevalley groups are nilpotent. Journal of Pure and Applied Algebra 179 (2003), no. 1-2, 99–116.
* [15] V.I. Kopeiko; Bass nilpotent unitary ${\rm K_{1}}$ group of unitary ring. Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 460 (2017), Voprosy Teorii Predstavleniĭ Algebr i Grupp. 33, 111–119. translation in J. Math. Sci. (N.Y.) 243 (2019), no. 4, 577–582.
* [16] R.D. Martin; Nilgroups of finite abelian groups. Ph.D. Thesis, Columbia University, ProQuest LLC, Ann Arbor, MI, 1976.
* [17] J. Stienstra; Operation in the linear ${\rm K}$-theory of endomorphisms, Current Trends in Algebraic Topology, Conf. Proc. Can. Math. Soc. 2 (1982).
* [18] C. Weibel; Mayer-Vietoris sequences and module structures on ${\rm NK}_{*}$, Springer Lecture Notes in Mathematics 854 (1981), 466–498.
* [19] C. Weibel; Module Structures on the ${\rm K}$-theory of Graded Rings, Journal of Algebra 105 (1987), 465–483.
* [20] C. Weibel; ${\rm NK}_{0}$ and ${\rm NK}_{1}$ of the groups $C_{4}$ and $D_{4}$. Addendum to ”Lower algebraic K-theory of hyperbolic 3-simplex reflection groups” by J.-F. Lafont and I. J. Ortiz [MR2495796]. Comment. Math. Helv. 84 (2009), no. 2, 339–349.
# Large Deviation Principles for Block and Step Graphon Random Graph Models
Jan Grebík
Mathematics Institute
University of Warwick
Coventry CV4 7AL, UK

Oleg Pikhurko
Mathematics Institute and DIMAP
University of Warwick
Coventry CV4 7AL, UK
###### Abstract
Borgs, Chayes, Gaudio, Petti and Sen [BCGPS] proved a large deviation
principle for block model random graphs with rational block ratios. We
strengthen their result by allowing any block ratios (and also establish a
simpler formula for the rate function). We apply the new result to derive a
large deviation principle for graph sampling from any given step graphon.
## 1 Introduction
The theory of large deviations (see e.g. the books by Dembo and Zeitouni
[DemboZeitouni10ldta] or Rassoul-Agha and Seppäläinen
[RassoulaghaSeppelainen14cldigm]) studies the probabilities of rare events on
the exponential scale. This is formally captured by the following definition.
###### Definition 1.1.
A sequence of Borel probability measures $(\mu_{n})_{n\in{\mathbbm{N}}}$ on a
topological space $X$ satisfies a _large deviation principle_ (_LDP_ for
short) with _speed_ $s:{\mathbbm{N}}\to(0,\infty)$ and _rate function_
$I:X\to[0,\infty]$ if
* •
$s(n)\to\infty$ as $n\to\infty$,
* •
the function $I$ is _lower semi-continuous_ (_lsc_ for short), that is, for
each $r\in{\mathbbm{R}}$ the level set $\\{I\leqslant r\\}:=\\{x\in
X:I(x)\leqslant r\\}$ is a closed subset of $X$,
* •
the following _lower bound_ holds:
$\liminf_{n\to\infty}\frac{1}{s(n)}\,{\log\,\mu_{n}(G)}\geqslant-\inf_{x\in
G}I(x),\quad\mbox{for every open $G\subseteq X$,}$ (1)
* •
the following _upper bound_ holds:
$\limsup_{n\to\infty}\frac{1}{s(n)}\,{\log\,\mu_{n}(F)}\leqslant-\inf_{x\in
F}I(x),\quad\mbox{for every closed $F\subseteq X$.}$ (2)
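A classical concrete instance of Definition 1.1 is Cramér's theorem for Bernoulli sample means, where the speed is $n$ and the rate function is the relative entropy $h_{p}$ of (4). The following numerical sketch is our own illustration (not from the paper): it checks that $-\frac{1}{n}\log\mathbb{P}(S_{n}/n\geqslant r)$ is close to $h_{p}(r)$ for moderate $n$, computing the binomial tail exactly in log-space to avoid underflow.

```python
# Illustrate the LDP of Definition 1.1 via Cramér's theorem for
# Bernoulli(p): -(1/n) log P(S_n/n >= r) should be close to h_p(r).
import math

def h(p, r):
    """Relative entropy h_p(r) of (4), for p strictly between 0 and 1."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log(a / b)
    return term(r, p) + term(1 - r, 1 - p)

def log_tail(n, p, r):
    """log P(Binomial(n, p) >= r*n), via a log-sum-exp over the tail."""
    logs = [math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p)
            for k in range(math.ceil(r * n), n + 1)]
    m = max(logs)
    return m + math.log(sum(math.exp(x - m) for x in logs))

n, p, r = 4000, 0.5, 0.7
empirical_rate = -log_tail(n, p, r) / n
assert abs(empirical_rate - h(p, r)) < 0.01   # agreement up to O(log n / n)
```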
As it is well-known (see e.g. [RassoulaghaSeppelainen14cldigm, Lemma 2.11]),
if (1) and (2) hold for some (not necessarily lsc) function $I:X\to[0,\infty]$
then we can replace $I$ without violating these bounds by its _lower semi-
continuous regularization_
$I_{\mathrm{lsc}}(x):=\sup\left\\{\inf_{y\in G}I(y):G\ni x\mbox{ and
$G\subseteq X$ is open}\right\\},\quad x\in X;$ (3)
furthermore (see e.g. [RassoulaghaSeppelainen14cldigm, Lemma 2.8]),
$I_{\mathrm{lsc}}$ is lower semi-continuous and, in fact, $I_{\mathrm{lsc}}$
is the largest lsc function with $I_{\mathrm{lsc}}\leqslant I$. If $X$ is a
regular topological space then there can be at most one lower semi-continuous
rate function satisfying Definition 1.1 (see e.g. [DemboZeitouni10ldta, Lemma
4.1.4] or [RassoulaghaSeppelainen14cldigm, Theorem 2.13]). This (as well as
some other results, such as Lemma 2.5 below) motivates the requirement that
$I$ is lsc in Definition 1.1.
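On the real line the regularization (3) reduces to $I_{\mathrm{lsc}}(x)=\min\{I(x),\liminf_{y\to x}I(y)\}$. The following toy sketch (our own illustration; the approximation by a grid-sampled small ball is an assumption suited to this simple $I$) shows how the regularization removes an upper spike.

```python
# Approximate the lsc regularization (3) for a function on the real line.

def I(x):
    # an upper spike at 0 makes I fail to be lower semi-continuous there
    return 1.0 if x == 0 else abs(x)

def I_lsc_approx(x, eps=1e-7, grid=401):
    """Estimate min(I(x), liminf_{y->x} I(y)) by the infimum of I over a
    very small grid-sampled ball around x (adequate for this simple I)."""
    pts = [x - eps + 2 * eps * i / (grid - 1) for i in range(grid)]
    return min(min(I(y) for y in pts), I(x))

assert I(0) == 1.0
assert I_lsc_approx(0) < 1e-6            # the spike is regularized away
assert abs(I_lsc_approx(0.5) - 0.5) < 1e-6
```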
Large deviations for various models of random graphs have received much
attention in recent years; see e.g. the survey by Chatterjee
[Chatterjee16bams], or [BCGPS, Section 1.7] for references to some more recent
results. A basic but central model is the _binomial random graph_
${\mathbbm{G}}(n,p)$, where the vertex set is $[n]:=\\{1,\dots,n\\}$ and each
pair of vertices is an edge with probability $p$, independently of other
pairs. A large deviation principle for ${\mathbbm{G}}(n,p)$ for constant
$p\in(0,1)$ was established in a ground-breaking paper of Chatterjee and
Varadhan [ChatterjeeVaradhan11] as follows. (See also the exposition of this
proof in Chatterjee’s book [Chatterjee17ldrg].)
As it turns out, the “correct” setting is to consider _graphons_ , that is,
measurable symmetric functions $[0,1]^{2}\to[0,1]$. On the set ${\mathcal{W}}$
of all graphons, one can define the so-called _cut-distance_ $\delta_{\Box}$,
which is a pseudo-metric on ${\mathcal{W}}$ (see Section 2.1 for all missing
definitions related to graphons). Consider the factor space
$\widetilde{\mathcal{W}}:=\\{\widetilde{U}:U\in{\mathcal{W}}\\},$
where $\widetilde{U}:=\\{V\in{\mathcal{W}}:\delta_{\Box}(U,V)=0\\}$ consists
of all graphons _weakly isomorphic_ to $U$. The space
$(\widetilde{\mathcal{W}},\delta_{\Box})$ naturally appears in the limit
theory of dense graphs, see e.g. the book by Lovász [Lovasz:lngl]. In
particular, a graph $G$ on $[n]$ can be identified with the graphon $f^{G}$
where we partition $[0,1]$ into intervals of length $1/n$ each and let $f^{G}$
be the $\\{0,1\\}$-valued step function that encodes the adjacency relation.
This way, ${\mathbbm{G}}(n,p)$ gives a (discrete) probability measure
$\widetilde{\mathbb{P}}_{n,p}$ on $(\widetilde{\mathcal{W}},\delta_{\Box})$,
where $\widetilde{\mathbb{P}}_{n,p}(A)$ for
$A\subseteq\widetilde{\mathcal{W}}$ is the probability that the sampled graph,
when viewed as a graphon up to weak isomorphism, belongs to the set $A$.
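The identification $G\mapsto f^{G}$ is straightforward to implement. A minimal sketch (the helper name `f_G` is ours; the convention of sending the right endpoint into the last cell is one of several equivalent choices on a null set):

```python
# Build the step-function graphon f^G from an adjacency matrix on [n].

def f_G(A):
    n = len(A)
    def W(x, y):
        i = min(int(n * x), n - 1)   # cell index; endpoint 1 -> last cell
        j = min(int(n * y), n - 1)
        return float(A[i][j])
    return W

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # triangle on [3]
W = f_G(A)
assert W(0.1, 0.5) == 1.0               # vertices 1 and 2 are adjacent
assert W(0.1, 0.2) == 0.0               # no loop at vertex 1
assert W(0.9, 0.5) == W(0.5, 0.9)       # symmetry
```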
Also, recall that, for $p\in[0,1]$, the _relative entropy_
$h_{p}:[0,1]\to[0,\infty]$ is defined by
$h_{p}(r):=r\log\left(\frac{r}{p}\right)+(1-r)\log\left(\frac{1-r}{1-p}\right),\quad
r\in[0,1].$ (4)
###### Theorem 1.2 (Chatterjee and Varadhan [ChatterjeeVaradhan11]).
Let $p\in[0,1]$. The function $I_{p}:{\mathcal{W}}\to[0,\infty]$ defined by
$I_{p}(U):=\frac{1}{2}\int_{[0,1]^{2}}h_{p}(U(x,y))\,\mathrm{d}x\,\mathrm{d}y,\quad
U\in{\mathcal{W}},$ (5)
gives a well-defined function $\widetilde{\mathcal{W}}\to[0,\infty]$ (that is,
$I_{p}$ assumes the same value at any two graphons at $\delta_{\Box}$-distance
0) which is lower semi-continuous on
$(\widetilde{\mathcal{W}},\delta_{\Box})$. Moreover, the sequence of measures
$(\widetilde{\mathbb{P}}_{n,p})_{n\in{\mathbbm{N}}}$ on
$(\widetilde{\mathcal{W}},\delta_{\Box})$ satisfies an LDP with speed $n^{2}$
and rate function $I_{p}$.
Borgs, Chayes, Gaudio, Petti and Sen [BCGPS] extended this result to $k$-block
stochastic models as follows. Let $k\geqslant 1$ be a fixed integer. Let
$\mathbold{p}=(p_{i,j})_{i,j\in[k]}\in[0,1]^{k\times k}$ be a symmetric
$k\times k$ matrix with entries in $[0,1]$. For an integer vector
$\mathbold{a}=(a_{1},\dots,a_{k})\in{\mathbbm{N}}_{\geqslant 0}^{k}$, where
${\mathbbm{N}}_{\geqslant 0}:=\\{0,1,2,\ldots\\}$
denotes the set of non-negative integers, let
$\widetilde{\mathbb{P}}_{\mathbold{a},\mathbold{p}}$ be the probability
distribution on $\widetilde{\mathcal{W}}$ defined as follows. Set $n$ to be
$\|\mathbold{a}\|_{1}=a_{1}+\dots+a_{k}$ and let $(A_{1},\dots,A_{k})$ be the
partition of $[n]$ into $k$ consecutive intervals with $A_{i}$ having $a_{i}$
elements. The random graph ${\mathbbm{G}}(\mathbold{a},\mathbold{p})$ on $[n]$
is produced by making each pair $\\{x,y\\}$ of $[n]$ an edge with probability
$p_{i,j}$ where $i,j\in[k]$ are the indices with $x\in A_{i}$ and $y\in
A_{j}$, with all choices made independently of each other. Output the weak
isomorphism class $\widetilde{f^{G}}\in\widetilde{\mathcal{W}}$ of the graphon
$f^{G}$ corresponding to the generated graph
$G\sim{\mathbbm{G}}(\mathbold{a},\mathbold{p})$ on $[n]$. Informally speaking,
we take $k$ blocks consisting of exactly $a_{1},\dots,a_{k}$ vertices
respectively, make pairs into edges with the probabilities given by the
$k\times k$ matrix $\mathbold{p}$, and then forget the block structure.
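The sampling procedure just described can be sketched as follows (the helper name `sample_block_graph` is ours; for a seed-independent check we use a deterministic matrix $\mathbold{p}$ with entries in $\{0,1\}$):

```python
# Sample G(a, p): exactly a_i vertices in block i; a pair in blocks i, j
# becomes an edge with probability p[i][j], all choices independent.
import random

def sample_block_graph(a, p, seed=0):
    rng = random.Random(seed)
    n = sum(a)
    # block index of each vertex: consecutive intervals A_1, ..., A_k
    block = [i for i, ai in enumerate(a) for _ in range(ai)]
    A = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(x + 1, n):
            if rng.random() < p[block[x]][block[y]]:
                A[x][y] = A[y][x] = 1
    return A

a = [3, 2]
p = [[1.0, 0.0], [0.0, 1.0]]            # two disjoint cliques
A = sample_block_graph(a, p)
assert A[0][1] == 1 and A[3][4] == 1    # within-block pairs always joined
assert A[0][3] == 0                     # never across blocks
```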
Next, we define a rate function for a given non-zero real $k$-vector
$\mathbold{\alpha}=(\alpha_{1},\dots,\alpha_{k})\in[0,\infty)^{k}$. Let
$(I_{1}^{(\mathbold{\alpha})},\dots,I_{k}^{(\mathbold{\alpha})})$ denote the
partition of $[0,1]$ into consecutive intervals such that each interval
$I_{i}^{(\mathbold{\alpha})}$ has length
$\alpha_{i}/\|\mathbold{\alpha}\|_{1}$. Define the function
$J_{\mathbold{\alpha},\mathbold{p}}:\widetilde{\mathcal{W}}\to[0,\infty]$ by
$J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U}):=\inf_{V\in\widetilde{U}}\,\frac{1}{2}\sum_{i,j\in[k]}\int_{I_{i}^{(\mathbold{\alpha})}\times
I_{j}^{(\mathbold{\alpha})}}h_{p_{i,j}}(V(x,y))\,\mathrm{d}x\,\mathrm{d}y,\quad
U\in{{\mathcal{W}}}.$ (6)
Note that the function $J_{\mathbold{\alpha},\mathbold{p}}$ will not change if
we multiply the vector $\mathbold{\alpha}$ by any positive scalar.
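When $V$ is itself a step function on the partition $(I_{1}^{(\mathbold{\alpha})},\dots,I_{k}^{(\mathbold{\alpha})})$, the double integral in (6) is a finite sum, and evaluating it for that one representative gives an upper bound on $J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{V})$ (the infimum in (6) can only be smaller). A hedged sketch of this computation (the helper names `h` and `J_upper` are ours):

```python
# Upper bound on J_{alpha,p}(V~) from (6) for a step graphon V aligned
# with the partition: the integral becomes a weighted sum over cells.
import math

def h(p, r):
    """Relative entropy h_p(r) of (4), with the +infinity conventions."""
    def term(a, b):
        if a == 0:
            return 0.0
        if b == 0:
            return math.inf
        return a * math.log(a / b)
    return term(r, p) + term(1 - r, 1 - p)

def J_upper(alpha, p, v):
    """(1/2) sum_{i,j} h_{p_ij}(v_ij) * |I_i| * |I_j|."""
    s = sum(alpha)
    k = len(alpha)
    return sum(h(p[i][j], v[i][j]) * (alpha[i] / s) * (alpha[j] / s)
               for i in range(k) for j in range(k)) / 2

alpha = [1.0, 1.0]
p = [[0.5, 0.5], [0.5, 0.5]]
v = [[0.5, 0.5], [0.5, 0.5]]
assert J_upper(alpha, p, v) == 0.0        # V = p: no deviation, zero cost
v2 = [[0.7, 0.5], [0.5, 0.5]]
expected = 0.5 * h(0.5, 0.7) * 0.25       # only one cell contributes
assert abs(J_upper(alpha, p, v2) - expected) < 1e-12
```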
In the above notation, the LDP of Borgs et al [BCGPS, Theorem 1 and Remark 2]
states the following.
###### Theorem 1.3 (Borgs, Chayes, Gaudio, Petti and Sen [BCGPS]).
Let $\mathbold{\alpha}\in{\mathbbm{N}}_{\geqslant 0}^{k}$ be a non-zero
integer $k$-vector and let $\mathbold{p}\in[0,1]^{k\times k}$ be a symmetric
$k\times k$ matrix. Then the sequence of measures
$(\widetilde{\mathbb{P}}_{n\mathbold{\alpha},\mathbold{p}})_{n\in{\mathbbm{N}}}$
on $(\widetilde{\mathcal{W}},\delta_{\Box})$ satisfies an LDP with speed
$(n\,\|\mathbold{\alpha}\|_{1})^{2}$ and rate function
$(J_{\mathbold{\alpha},\mathbold{p}})_{\mathrm{lsc}}$.
Note that the special case $k=1$ and $\mathbold{\alpha}=(1)$ of Theorem 1.3,
combined with the lower semi-continuity of the function
$J_{\mathbold{\alpha},\mathbold{p}}:\widetilde{\mathcal{W}}\to[0,\infty]$,
gives the second part of Theorem 1.2.
Our contribution is as follows.
First, we prove that the function
$J_{\mathbold{\alpha},\mathbold{p}}:\widetilde{\mathcal{W}}\to[0,\infty]$ is
lower semi-continuous (so, in particular, there is no need to take the lower
semi-continuous regularization in Theorem 1.3):
###### Theorem 1.4.
For every symmetric matrix ${\mathbold p}\in[0,1]^{k\times k}$ and every non-
zero real $k$-vector $\mathbold{\alpha}\in[0,\infty)^{k}$, the function
$J_{\mathbold{\alpha},\mathbold{p}}:\widetilde{\mathcal{W}}\to[0,\infty]$ is
lower semi-continuous with respect to the metric $\delta_{\Box}$.
Second, we extend Theorem 1.3 by allowing the fraction of vertices assigned to
a part to depend on $n$ as long as it converges to any finite (possibly
irrational) limit.
###### Theorem 1.5.
Fix any symmetric $k\times k$ matrix $\mathbold{p}\in[0,1]^{k\times k}$ and a
non-zero real $k$-vector
$\mathbold{\alpha}=(\alpha_{1},\dots,\alpha_{k})\in[0,\infty)^{k}$. Let
$\mathbold{a}_{n}=(a_{n,1},\dots,a_{n,k})\in{\mathbbm{N}}_{\geqslant
0}^{k},\quad\mbox{for $n\in{\mathbbm{N}}$},$
be arbitrary non-zero integer $k$-vectors such that
$\lim_{n\to\infty}a_{n,i}/n=\alpha_{i}$ for each $i\in[k]$. Then the sequence
of measures
$(\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}})_{n\in{\mathbbm{N}}}$
on $(\widetilde{\mathcal{W}},\delta_{\Box})$ satisfies an LDP with speed
$\|\mathbold{a}_{n}\|_{1}^{2}$ and rate function
$J_{\mathbold{\alpha},\mathbold{p}}$.
One application of Theorem 1.5 is as follows. Each graphon $W\in{\mathcal{W}}$
gives rise to the following inhomogeneous random graph model. Namely, the
_random $W$-graph_ ${\mathbbm{G}}(n,W)$ is generated by first sampling uniform
elements $x_{1},\dots,x_{n}\in[0,1]$ and then making each pair $\\{i,j\\}$ an
edge with probability $W(x_{i},x_{j})$, where all choices are independent of
each other. Let $\widetilde{\mathbb{R}}_{n,W}$ be the corresponding (discrete)
measure on $\widetilde{\mathcal{W}}$ where we take the equivalence class
$\widetilde{f^{G}}\in\widetilde{\mathcal{W}}$ of the sampled graph $G$. When
$W$ is the constant function $p$, we get exactly the binomial random graph
${\mathbbm{G}}(n,p)$ and
$\widetilde{\mathbb{R}}_{n,W}=\widetilde{\mathbb{P}}_{n,p}$.
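The random $W$-graph model can be sketched in a few lines (the helper name `sample_W_graph` is ours; the asserted properties below are deterministic, so the check does not depend on the seed):

```python
# Sample G(n, W): draw x_1,...,x_n uniformly from [0,1], then join i and j
# with probability W(x_i, x_j), all choices independent.
import random

def sample_W_graph(n, W, seed=0):
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < W(xs[i], xs[j]):
                A[i][j] = A[j][i] = 1
    return A

# a 2-step graphon: 1 within [0,1/2)^2 and [1/2,1]^2, 0 across
W = lambda x, y: 1.0 if (x < 0.5) == (y < 0.5) else 0.0
A = sample_W_graph(6, W)
assert all(A[i][j] == A[j][i] for i in range(6) for j in range(6))  # symmetric
assert all(A[i][i] == 0 for i in range(6))                          # no loops
```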
The authors showed in [GrebikPikhurko:LDP] that, for any graphon
$W\in{\mathcal{W}}$, the only “interesting” speeds for the sequence of
measures $(\widetilde{\mathbb{R}}_{n,W})_{n\in{\mathbbm{N}}}$ are $\Theta(n)$
and $\Theta(n^{2})$, and established a general LDP for speed $n$. The case
when speed is $n^{2}$ seems rather difficult. Here (in Theorem 1.6) we prove
an LDP for speed $n^{2}$ when $W$ is a _$k$ -step graphon_, that is, there is
a measurable partition $[0,1]=A_{1}\cup\dots\cup A_{k}$ such that $W$ is a
constant $p_{i,j}$ on each product $A_{i}\times A_{j}$, $i,j\in[k]$. We can
assume that each $A_{i}$ has positive measure, since changing the values of
$W$ on a null subset of $[0,1]^{2}$ does not affect the distribution of
${\mathbbm{G}}(n,W)$.
Before stating our LDP, let us point out the difference between the random
graphs ${\mathbbm{G}}((a_{1},\dots,a_{k}),(p_{i,j})_{i,j\in[k]})$ and
${\mathbbm{G}}(a_{1}+\dots+a_{k},W)$ when $W$ and $(p_{i,j})_{i,j\in[k]}$ are
as above. In the former model, we have exactly $a_{i}$ vertices in the $i$-th
block for each $i\in[k]$. In the latter model, each vertex is put into one of
the $k$ blocks with the probabilities given by the measures of
$A_{1},\dots,A_{k}$, independently of the other vertices; thus the number of
vertices in each block is binomially distributed. It comes as no surprise that
if we consider large deviations for
$(\widetilde{\mathbb{R}}_{n,W})_{n\in{\mathbbm{N}}}$ at speed $n^{2}$ then the
rate function depends only on $(p_{i,j})_{i,j\in[k]}$ but not on the (non-
zero) measures of the parts $A_{i}$ since, informally speaking, the price we
“pay” to get any desired distribution of vertices per parts is multiplicative
${\mathrm{e}}^{-O(n)}$, which is negligible for speed $n^{2}$.
###### Theorem 1.6.
Let $W$ be a $k$-step graphon with $k$ non-null parts whose values are encoded
by a symmetric $k\times k$ matrix $\mathbold{p}\in[0,1]^{k\times k}$. Define
$R_{\mathbold{p}}(\widetilde{U}):=\inf_{\mathbold{\alpha}\in[0,1]^{k}\atop\alpha_{1}+\ldots+\alpha_{k}=1}J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U}),\quad
U\in{\mathcal{W}}.$ (7)
Then the function $R_{\mathbold{p}}:\widetilde{\mathcal{W}}\to[0,\infty]$ is
lower semi-continuous with respect to the metric $\delta_{\Box}$. Moreover,
the sequence of measures $(\widetilde{\mathbb{R}}_{n,W})_{n\in{\mathbbm{N}}}$
on $(\widetilde{\mathcal{W}},\delta_{\Box})$ satisfies an LDP with speed
$n^{2}$ and rate function $R_{\mathbold{p}}$.
For $k=1$, we recover the LDP result of Chatterjee and Varadhan
[ChatterjeeVaradhan11] (that is, Theorem 1.2).
Initially, we proved Theorem 1.6 independently of the work by Borgs et al
[BCGPS], by first proving an LDP for what we call _$k$ -coloured graphons_
(that are defined in Section 3). Since our original proof of Theorem 1.6 is
quite long and shares many common steps with the proof from [BCGPS] (with both
being built upon the method of Chatterjee and Varadhan
[ChatterjeeVaradhan11]), we decided to derive Theorem 1.6 from the results in
[BCGPS] with a rather short proof, also strengthening the LDP of Borgs et al
[BCGPS] in the process.
This paper is organised as follows. In Section 2 we give further definitions
(repeating some definitions from the Introduction) and provide some standard
or easy results that we will need later. Section 3 introduces $k$-coloured
graphons and proves a compactness result. This result is used in Section 4 to
prove that the functions $J_{\mathbold{a},\mathbold{p}}$ and
$R_{\mathbold{p}}$ are lower semi-continuous. The large deviation principles
stated in Theorems 1.5 and 1.6 are proved in Sections 5 and 6 respectively.
## 2 Preliminaries
Recall that the relative entropy $h_{p}$ was defined in (4) and observe the
conventions that $0\log(0)=0\log\left(\frac{0}{0}\right)=0$,
$h_{0}(r)=+\infty$ whenever $r\not=0$ and $h_{1}(r)=+\infty$ whenever
$r\not=1$. The _indicator function_ ${\mathbbm{1}}_{X}$ of a set $X$ assumes
value $1$ for every $x\in X$ and 0 otherwise.
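The conventions just recalled make $h_{p}$ well-defined for all $p\in[0,1]$. A sketch implementation (the helper name `h` is ours) that returns `math.inf` exactly where the conventions prescribe it:

```python
# h_p from (4) with the stated conventions: 0 log 0 = 0 log(0/0) = 0,
# h_0(r) = +inf for r != 0, and h_1(r) = +inf for r != 1.
import math

def h(p, r):
    def term(a, b):
        if a == 0:
            return 0.0
        if b == 0:
            return math.inf
        return a * math.log(a / b)
    return term(r, p) + term(1 - r, 1 - p)

assert h(0.3, 0.3) == 0.0                  # entropy vanishes at r = p
assert h(0, 0) == 0.0 and h(1, 1) == 0.0
assert h(0, 0.5) == math.inf and h(1, 0.5) == math.inf
assert h(0.5, 0.25) > 0
```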
A measurable space $(\Omega,{\mathcal{A}})$ is called _standard_ if there is a
Polish topology on $\Omega$ whose Borel $\sigma$-algebra is equal to
${\mathcal{A}}$. Given a measure $\mu$ on $(\Omega,{\mathcal{A}})$, we call a
subset of $\Omega$ _measurable_ if it belongs to the _completion_ of
${\mathcal{A}}$ by $\mu$, that is, the $\sigma$-algebra generated by
${\mathcal{A}}$ and $\mu$-null sets. We will usually omit $\sigma$-algebras
from our notation.
Unless specified otherwise, the interval $[0,1]$ of reals is always equipped
with the Lebesgue measure, denoted by $\lambda$. By $\lambda^{\oplus k}$ we
denote the completion of the $k$-th power of $\lambda$, that is,
$\lambda^{\oplus k}$ is the Lebesgue measure on $[0,1]^{k}$.
Denote as ${\bf A}^{(k)}$ the set of all ordered partitions of $[0,1]$ into
$k$ measurable sets. For a non-zero vector
$\mathbold{\alpha}=(\alpha_{1},\dots,\alpha_{k})\in[0,\infty)^{k}$, we let
${\bf A}^{(\mathbold{\alpha})}\subseteq{\bf A}^{(k)}$ be the set of all
ordered partitions of $[0,1]$ into $k$ measurable sets such that the $i$-th
set has Lebesgue measure exactly $\alpha_{i}/\|\mathbold{\alpha}\|_{1}$.
Recall that
$(I_{1}^{(\mathbold{\alpha})},\dots,I_{k}^{(\mathbold{\alpha})})\in{\bf
A}^{(\mathbold{\alpha})}$ denotes the partition of $[0,1]$ into consecutive
intervals whose lengths are given by
$\mathbold{\alpha}/\|\mathbold{\alpha}\|_{1}$ (where each dividing point is
assigned to e.g. its right interval for definiteness).
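The partition $(I_{1}^{(\mathbold{\alpha})},\dots,I_{k}^{(\mathbold{\alpha})})$ is easy to construct explicitly; a sketch (the helper name `interval_partition` is ours, and the intervals are taken left-closed/right-open, matching the right-interval convention above):

```python
# Consecutive intervals of [0,1] with lengths alpha_i / ||alpha||_1.

def interval_partition(alpha):
    s = sum(alpha)
    ends, acc = [], 0.0
    for a in alpha:
        ends.append((acc, acc + a / s))
        acc += a / s
    return ends   # list of (left, right) pairs

parts = interval_partition([1.0, 2.0, 1.0])
assert parts[0] == (0.0, 0.25)
assert parts[1] == (0.25, 0.75)
assert abs(parts[2][1] - 1.0) < 1e-12
assert abs(sum(b - a for (a, b) in parts) - 1.0) < 1e-12  # lengths sum to 1
```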
We will also need the following result.
###### Theorem 2.1.
For every two atomless standard measure spaces $(\Omega,\mu)$ and
$(\Omega^{\prime},\mu^{\prime})$, and Borel subsets $A\subseteq\Omega$ and
$A^{\prime}\subseteq\Omega^{\prime}$ with
$0<\mu(A)=\mu^{\prime}(A^{\prime})<\infty$, there is a measure-preserving
Borel isomorphism between $A$ and $A^{\prime}$.
###### Proof.
The case when $A=\Omega$ and $A^{\prime}=\Omega^{\prime}$ amounts to the
Isomorphism Theorem for Measure Spaces, whose proof can be found in e.g.
[Srivastava98cbs, Theorem 3.4.23]. The general case follows by restricting
everything to $A$ and $A^{\prime}$, and noting that the obtained measure
spaces are standard by e.g. [Srivastava98cbs, Theorem 3.2.4].∎
### 2.1 Graphons
A _graphon_ $U$ is a function $U:[0,1]^{2}\to[0,1]$ which is _symmetric_ (that
is, $U(x,y)=U(y,x)$ for all $x,y\in[0,1]$) and _measurable_ (that is, for
every $a\in{\mathbbm{R}}$, the level set $\\{U\leqslant a\\}$ is a (Lebesgue)
measurable subset of $[0,1]^{2}$). Recall that we denote the set of all
graphons by $\mathcal{W}$. We define the _cut-norm_
$d_{\Box}:{\mathcal{W}}^{2}\to[0,1]$ by
$d_{\Box}(U,V):=\sup_{A,B\subseteq[0,1]}\left|\int_{A\times
B}\left(U-V\right)\,\mathrm{d}\lambda^{\oplus 2}\right|,\quad
U,V\in\mathcal{W},$ (8)
where the supremum is taken over all pairs of measurable subsets of $[0,1]$.
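The supremum in (8) is hard to evaluate for general graphons, but for two step graphons on a common partition into $k$ blocks it is attained on unions of blocks: the integral is bilinear in the block masses $\lambda(A\cap I_{i})$ and $\lambda(B\cap I_{j})$, so the maximum over the box of admissible masses sits at a vertex. This gives a brute-force sketch over the $2^{k}\times 2^{k}$ pairs of block unions (the helper name `cut_norm_step` is ours):

```python
# d_box(U, V) for step graphons on a common partition: brute force over
# unions of blocks, where the supremum in (8) is attained.
from itertools import product

def cut_norm_step(F, lengths):
    """F[i][j] are the block values of U - V; lengths are block measures."""
    k = len(lengths)
    best = 0.0
    for S in product([0, 1], repeat=k):       # A = union of blocks in S
        for T in product([0, 1], repeat=k):   # B = union of blocks in T
            val = sum(F[i][j] * lengths[i] * lengths[j]
                      for i in range(k) for j in range(k)
                      if S[i] and T[j])
            best = max(best, abs(val))
    return best

# U and V differ by 0.6 only on the first diagonal block
F = [[0.6, 0.0], [0.0, 0.0]]
assert abs(cut_norm_step(F, [0.5, 0.5]) - 0.15) < 1e-12
# cancelling blocks: it is best to keep only one sign class
F2 = [[1.0, -1.0], [-1.0, 1.0]]
assert abs(cut_norm_step(F2, [0.5, 0.5]) - 0.25) < 1e-12
```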
For a function $\phi:[0,1]\to[0,1]$ and a graphon $U$, the _pull-back_
$U^{\phi}$ of $U$ along $\phi$ is defined by
$U^{\phi}(x,y):=U(\phi(x),\phi(y)),\quad x,y\in[0,1].$
The _cut-distance_ $\delta_{\Box}:{\mathcal{W}}^{2}\to[0,1]$ can be defined as
$\delta_{\Box}(U,V):=\inf_{\phi,\psi}d_{\Box}(U^{\phi},V^{\psi}),$ (9)
where the infimum is taken over all measure-preserving maps
$\phi,\psi:[0,1]\to[0,1]$ (then the pull-backs $U^{\phi}$ and $V^{\psi}$ are
necessarily measurable functions). See [Lovasz:lngl, Section 8.2] for more
details and, in particular, [Lovasz:lngl, Theorem 8.13] for some alternative
definitions that give the same distance. It can be easily verified that
$\delta_{\Box}$ is a pseudo-metric on ${\mathcal{W}}$. Recall that we denote
the metric quotient by $\widetilde{\mathcal{W}}$ and the equivalence class of
$U\in\mathcal{W}$ by $\widetilde{U}$. For $U\in{\mathcal{W}}$ and
$\eta\geqslant 0$, we will denote the closed radius-$\eta$ ball around
$\widetilde{U}$ in $\widetilde{\mathcal{W}}$ by
$S(\widetilde{U},\eta):=\left\\{\widetilde{V}\in\widetilde{{\mathcal{W}}}:\delta_{\Box}(U,V)\leqslant\eta\right\\}.$
###### Remark 2.2.
Without affecting $(\widetilde{\mathcal{W}},\delta_{\Box})$, we could have
defined a graphon as a Borel symmetric function $[0,1]^{2}\to[0,1]$ and
required that the measure-preserving maps in the definition of $\delta_{\Box}$
are Borel. Then some parts could be simplified (for example, the first claim
of Lemma 2.3 would not be necessary as the function $U^{\phi}$ would be Borel
for every Borel $\phi$). However, we prefer to use the (now standard)
conventions from Lovász’ book [Lovasz:lngl].
We will need the following auxiliary result.
###### Lemma 2.3.
Let $U$ be a graphon and $\phi:[0,1]\to[0,1]$ be a measurable function such
that the _push-forward measure_ $\phi_{*}\lambda$ (defined by
$(\phi_{*}\lambda)(X):=\lambda(\phi^{-1}(X))$ for measurable
$X\subseteq[0,1]$) satisfies $\phi_{*}\lambda\ll\lambda$, that is, is
absolutely continuous with respect to the Lebesgue measure $\lambda$. Then
$U^{\phi}$ is a graphon. Moreover, if the Radon-Nikodym derivative
$D:=\frac{\,\mathrm{d}(\phi_{*}\lambda)}{\,\mathrm{d}\lambda}$ satisfies
$D(x)\leqslant 1+\varepsilon$ for a.e. $x\in[0,1]$ then
$\delta_{\Box}(U,U^{\phi})\leqslant 2\varepsilon.$ (10)
###### Proof.
The function $U^{\phi}:[0,1]^{2}\to[0,1]$ is clearly symmetric so we have to
show that it is measurable. Since the pre-image under $\phi$ of any
$\lambda$-null set is again $\lambda$-null by our assumption
$\phi_{*}\lambda\ll\lambda$, there is a Borel map $\psi:[0,1]\to[0,1]$ such
that the set
$X:=\\{x\in[0,1]:\psi(x)\not=\phi(x)\\}$
is $\lambda$-null. (For a proof, see e.g. [Cohn13mt, Proposition 2.2.5].) Take
any $r\in{\mathbbm{R}}$. The set
$A:=\\{(x,y)\in[0,1]^{2}:U(x,y)\leqslant r\\}$
is measurable so by e.g. [Cohn13mt, Proposition 1.5.2] there are
$B,N\subseteq[0,1]^{2}$ such that $B$ is Borel, $N$ is $\lambda^{\oplus
2}$-null and $A\bigtriangleup B\subseteq N$, where $A\bigtriangleup
B:=(A\setminus B)\cup(B\setminus A)$ denotes the _symmetric difference_ of the
sets $A$ and $B$. The pre-image of $N$ under the Borel map $\psi^{\oplus
2}(x,y):=(\psi(x),\psi(y))$ is also $\lambda^{\oplus 2}$-null for otherwise
this would contradict the absolute continuity $(\phi_{*}\lambda)^{\oplus
2}\ll\lambda^{\oplus 2}$ (which follows from $\phi_{*}\lambda\ll\lambda$ by
the Fubini-Tonelli Theorem for Complete Measures). Thus the level set
$\\{U^{\phi}\leqslant r\\}$ is Lebesgue measurable since its symmetric
difference with the Borel set $(\psi^{\oplus 2})^{-1}(B)$ is a subset of the
null set $(\psi^{\oplus 2})^{-1}(N)\cup(X\times[0,1])\cup([0,1]\times X)$. As
$r\in{\mathbbm{R}}$ was arbitrary, $U^{\phi}$ is a measurable function and
thus a graphon.
For the second part, it will be convenient to use the following generalisation
of a graphon (which will not be used outside this proof). Namely, by a
_generalised graphon_ we mean a triple $(V,\Omega,\mu)$
where $(\Omega,\mu)$ is an atomless standard probability space and
$V:(\Omega^{2},\mu^{\oplus 2})\to[0,1]$ is a symmetric measurable function. In
the special case $(\Omega,\mu)=([0,1],\lambda)$ we get our notion of a graphon
from the Introduction. Most definitions and results extend with obvious
modifications from graphons to generalised graphons (see [Lovasz:lngl, Chapter
13.1] for details). In particular, we will need the facts that if
$(V,\Omega,\mu)$ is a generalised graphon and
$\phi:(\Omega^{\prime},\mu^{\prime})\to(\Omega,\mu)$ is a measure-preserving
map between standard probability spaces, then the function $V^{\phi}$ is
measurable (which can be proved by adapting the proof of the first part of the
lemma) and
$\delta_{\Box}((V,\Omega,\mu),(V^{\phi},\Omega^{\prime},\mu^{\prime}))=0,$
(11)
where we define $V^{\phi}(x,y):=V(\phi(x),\phi(y))$ for
$x,y\in\Omega^{\prime}$ and $\delta_{\Box}$ is the extension of the cut-
distance to generalised graphons via the obvious analogues of (8) and (9).
Let us return to the proof of the second part of the lemma. We can assume that
the set $\\{x\in[0,1]:D(x)\not=1\\}$ has positive measure for otherwise $\phi$
is a measure-preserving map and $\delta_{\Box}(U,U^{\phi})=0$ by (11), as
required. Let $(V,[0,1]^{2},\lambda^{\oplus 2})$ be the generalised graphon
defined by $V((x,y),(x^{\prime},y^{\prime})):=U(x,x^{\prime})$, for
$x,y,x^{\prime},y^{\prime}\in[0,1]$. Thus $V=U^{\pi}$, where
$\pi:[0,1]^{2}\to[0,1]$ is the (measure-preserving) projection on the first
coordinate. By (11), it holds that
$\delta_{\Box}((V,[0,1]^{2},\lambda^{\oplus 2}),U)=0.$
By changing $D$ on a $\lambda$-null set, we can make it a Borel function with
$D(x)\leqslant 1+\varepsilon$ for every $x\in[0,1]$. Then
$\Omega:=\\{(x,y)\in[0,1]\times{\mathbbm{R}}:0\leqslant y\leqslant D(x)\\}$
is a Borel subset of $[0,1]\times{\mathbbm{R}}$ (see e.g. [Cohn13mt, Example
5.3.1]) and thus induces a standard measurable space. Let $\mu$ be the
restriction of the Lebesgue measure on ${\mathbbm{R}}^{2}$ to $\Omega$. Define
$W:\Omega^{2}\to[0,1]$ by $W((x,y),(x^{\prime},y^{\prime})):=U(x,x^{\prime})$
for $(x,y),(x^{\prime},y^{\prime})\in\Omega$. Thus $U^{\phi}$ and
$(W,\Omega,\mu)$ are measure-preserving pull-backs of the generalised graphon
$(U,[0,1],\phi_{*}\mu)$ along respectively the map $\phi$ and the projection
$\Omega\to[0,1]$ on the first coordinate. Therefore we have by (11) and the
Triangle Inequality for $\delta_{\Box}$ that
$\delta_{\Box}(U^{\phi},(W,\Omega,\mu))\leqslant\delta_{\Box}(U^{\phi},(U,[0,1],\phi_{*}\mu))+\delta_{\Box}((U,[0,1],\phi_{*}\mu),(W,\Omega,\mu))=0.$
Thus it suffices to show that the cut-distance $\delta_{\Box}$ between
$(V,[0,1]^{2},\lambda^{\oplus 2})$ and $(W,\Omega,\mu)$ is at most
$2\varepsilon$. The functions $V$ and $W$ and the measures $\lambda^{\oplus
2}$ and $\mu$ coincide on $X^{2}$, where $X:=[0,1]^{2}\cap\Omega$. Since
$D\not=1$ on a set of positive measure, it holds that $\mu(X)<1$. The Borel
subsets $[0,1]^{2}\setminus X$ and $\Omega\setminus X$ of ${\mathbbm{R}}^{2}$
have the same positive Lebesgue measure and thus, by Theorem 2.1, there is a
Borel measure-preserving bijection $\psi$ between them. By letting $\psi$ be
the identity function on $X$, we get a Borel measure-preserving bijection
$\psi:\Omega\to[0,1]^{2}$. Since $D\leqslant 1+\varepsilon$, we have that
$\Omega\setminus X\subseteq[0,1]\times[1,1+\varepsilon]$ has measure at most
$\varepsilon$. The function $W$ and the pull-back $V^{\psi}$, as maps
$\Omega^{2}\to[0,1]$, coincide on the set $X^{2}$ of measure at least
$(1-\varepsilon)^{2}\geqslant 1-2\varepsilon$. It follows that the
$d_{\Box}$-distance between them is at most $2\varepsilon$. (Indeed, when we
compute it via the analogue of (8), the integrand is bounded by 1 in absolute
value and is non-zero on a set of measure at most $2\varepsilon$.) This
finishes the proof of the lemma.∎
Informally speaking, the following result states that if we delete a small
subset of $[0,1]$ and stretch the rest of a graphon uniformly then the new
graphon is close to the original one.
###### Lemma 2.4.
Let $U\in{\mathcal{W}}$ be a graphon, $s\in(0,1]$ be a non-zero real and
$\phi:[0,1]\to[0,1]$ be the map that sends $x$ to $sx$. Then $U^{\phi}$ is a
graphon and $\delta_{\Box}(U,U^{\phi})\leqslant 2(\frac{1}{s}-1)$.
###### Proof.
Clearly, the push-forward $\phi_{*}\lambda$ is the uniform probability measure
on $[0,s]$ so the Radon-Nikodym derivative
$\frac{\,\mathrm{d}(\phi_{*}\lambda)}{\,\mathrm{d}\lambda}$ is a.e. $1/s$ on
$[0,s]$ and $0$ on $[s,1]$. The result now follows from Lemma 2.3. ∎
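For block-constant (step) graphons the pull-back along $x\mapsto sx$ can be computed pointwise. The following Python sketch is purely illustrative and not part of the formal development; the representation of a step graphon by breakpoints and a matrix of block values is our own convention.

```python
from bisect import bisect_right

def make_step_graphon(breaks, vals):
    # A step graphon with interval steps given by breakpoints
    # 0 = breaks[0] < ... < breaks[-1] = 1 and a matrix of block values.
    def U(x, y):
        i = min(bisect_right(breaks, x) - 1, len(vals) - 1)
        j = min(bisect_right(breaks, y) - 1, len(vals) - 1)
        return vals[i][j]
    return U

def pull_back(U, s):
    # U^phi for phi(x) = s*x, as in Lemma 2.4: a copy of the restriction
    # of U to [0,s]^2, stretched to fill the whole unit square.
    return lambda x, y: U(s * x, s * y)

U = make_step_graphon([0.0, 0.5, 1.0], [[1.0, 0.0], [0.0, 1.0]])
V = pull_back(U, 0.8)
print(V(0.7, 0.1))  # = U(0.56, 0.08) → 0.0
```

Note that the stretched graphon simply ignores the strip $[s,1]\times[0,1]$ of $U$, which is why the cut-distance bound of Lemma 2.4 degrades as $s$ decreases.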
### 2.2 Large deviations for compact metric spaces
Recall that the definition of a large deviation principle was given in
Definition 1.1. Since we will be dealing with LDPs for compact metric spaces
only, we may use the following alternative characterisation that follows from
[DemboZeitouni10ldta, Theorems 4.1.11 and 4.1.18] (see also
[RassoulaghaSeppelainen14cldigm, Exercise 2.24]).
###### Lemma 2.5.
Let $(X,d)$ be a compact metric space, $s:{\mathbbm{N}}\to(0,\infty)$ satisfy
$s(n)\to\infty$, and $I:X\to[0,\infty]$ be a lower semi-continuous function on
$(X,d)$. Then a sequence of Borel probability measures
$(\mu_{n})_{n\in{\mathbbm{N}}}$ on $(X,d)$ satisfies an LDP with speed $s$ and
rate function $I$ if and only if
$\displaystyle\lim_{\eta\to
0}\liminf_{n\to\infty}\frac{1}{s(n)}\,{\log\Big{(}\mu_{n}\big{(}\\{y\in
X:d(x,y)\leqslant\eta\\}\big{)}\Big{)}}$ $\displaystyle\geqslant$
$\displaystyle-I(x),\quad\mbox{for every $x\in X$,}$ (12)
$\displaystyle\lim_{\eta\to
0}\limsup_{n\to\infty}\frac{1}{s(n)}\,{\log\Big{(}\mu_{n}\big{(}\\{y\in
X:d(x,y)\leqslant\eta\\}\big{)}\Big{)}}$ $\displaystyle\leqslant$
$\displaystyle-I(x),\quad\mbox{for every $x\in X$}.$ (13)
In fact, under the assumptions of Lemma 2.5, the bounds (1) and (12) (resp.
(2) and (13)) are equivalent to each other. So we will also refer to (12) and
(13) as the _lower bound_ and the _upper bound_ respectively.
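The ball characterisation of Lemma 2.5 can be seen numerically in the classical coin-flip example: the empirical mean of $n$ fair coin flips satisfies an LDP with speed $n$ and rate function the relative entropy with respect to $1/2$. The following Python sketch (illustrative only, not part of the formal development) estimates the quantity inside the limits of (12)-(13) from exact binomial probabilities.

```python
from math import ceil, comb, floor, log

def rate_estimate(n, x, eta):
    # (1/n) * log P(|S_n/n - x| <= eta) for S_n ~ Binomial(n, 1/2):
    # the quantity inside the limits of (12)-(13), with speed s(n) = n.
    lo, hi = max(ceil(n * (x - eta)), 0), min(floor(n * (x + eta)), n)
    total = sum(comb(n, k) for k in range(lo, hi + 1))
    return (log(total) - n * log(2)) / n

def cramer_rate(x):
    # The rate function for this example: relative entropy w.r.t. 1/2.
    return log(2) + x * log(x) + (1 - x) * log(1 - x)

# Shrinking eta recovers -I(x), as in Lemma 2.5:
print(rate_estimate(2000, 0.3, 0.001), -cramer_rate(0.3))
```

Already for $n=2000$ and $\eta=10^{-3}$ the two printed numbers agree to roughly three decimal places; the residual error of order $(\log n)/n$ disappears in the limits.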
## 3 Coloured graphons
The definitions and results of this section are needed in order to establish
the lower semi-continuity of the functions
$J_{\mathbold{\alpha},\mathbold{p}}$ and $R_{\mathbold{p}}$.
Fix $k\in\mathbb{N}$. By a _$k$ -coloured graphon_ we mean a pair
$(W,\mathcal{A})$ where $W\in\mathcal{W}$ and $\mathcal{A}\in{\bf A}^{(k)}$.
(One can view the partition $\mathcal{A}$ as a $k$-colouring of $[0,1]$.)
Write $\mathcal{W}^{(k)}:={\mathcal{W}}\times{\bf A}^{(k)}$ for the space of
all $k$-coloured graphons. We define the pseudo-metric $d^{(k)}_{\Box}$ (the
analogue of the cut norm $d_{\Box}$) on $\mathcal{W}^{(k)}$ as
$d^{(k)}_{\Box}((U,\mathcal{A}),(V,\mathcal{B})):=\sup_{C,D\subseteq[0,1]}\sum_{i,j\in[k]}\left|\int_{C\times
D}({{\mathbbm{1}}}_{A_{i}\times A_{j}}U-{{\mathbbm{1}}}_{B_{i}\times
B_{j}}V)\,\mathrm{d}\lambda^{\oplus
2}\right|+\sum_{i\in[k]}\lambda(A_{i}\triangle B_{i}),$
for $(U,\mathcal{A}),(V,\mathcal{B})\in\mathcal{W}^{(k)}$, where
${\mathcal{A}}=(A_{1},\dots,A_{k})$ and ${\mathcal{B}}=(B_{1},\dots,B_{k})$.
Informally speaking, two $k$-coloured graphons are close to each other in
$d^{(k)}_{\Box}$ if they have similar distributions of coloured edges across cuts,
where an edge is coloured by the colours of its endpoints. The second term is
added so that e.g. we can distinguish two constant-0 graphons with different
part measures.
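For step functions the supremum over measurable $C,D$ in the cut norm is attained on unions of steps (fix $D$ and optimise $C$, then vice versa), so for small step partitions the cut norm can be computed by brute force. The following Python sketch (an illustration under our own finite step representation, not part of the formal development) does this for the plain cut norm that underlies $d_{\Box}$ and $d^{(k)}_{\Box}$.

```python
from itertools import product

def cut_norm_step(lengths, vals):
    # Brute-force d_box of a step function W on [0,1]^2, given the
    # measures of its m steps and the m x m matrix of block values.
    # For step functions the supremum in (8) over measurable C, D is
    # attained at unions of steps, so enumerating 2^m cuts suffices.
    m = len(lengths)
    best = 0.0
    for C in product([0, 1], repeat=m):
        for D in product([0, 1], repeat=m):
            s = sum(lengths[i] * lengths[j] * vals[i][j]
                    for i in range(m) if C[i]
                    for j in range(m) if D[j])
            best = max(best, abs(s))
    return best

# U - V for U the "two equal cliques" step graphon and V constant 1/2:
print(cut_norm_step([0.5, 0.5], [[0.5, -0.5], [-0.5, 0.5]]))  # → 0.125
```

The coloured version $d^{(k)}_{\Box}$ adds one such term per colour pair (plus the symmetric-difference term), so the same enumeration applies.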
The _cut distance_ for coloured graphons is then defined as
$\delta^{(k)}_{\Box}((U,\mathcal{A}),(V,\mathcal{B})):=\inf_{\phi,\psi}d^{(k)}_{\Box}((U,\mathcal{A})^{\phi},(V,\mathcal{B})^{\psi}),$
(14)
where the infimum is taken over measure-preserving maps
$\phi,\psi:[0,1]\to[0,1]$ and we denote
$(U,\mathcal{A})^{\phi}:=(U^{\phi},{\mathcal{A}}^{\phi})$ and
${\mathcal{A}}^{\phi}:=(\phi^{-1}(A_{1}),\dots,\phi^{-1}(A_{k}))$ with
$(A_{1},\dots,A_{k})$ being the parts of ${\mathcal{A}}$. As in the graphon
case (compare with e.g. [Lovasz:lngl, Theorem 8.13]), some other definitions
give the same distance (e.g. it is enough to take the identity function for
$\psi$). We chose this definition as it is immediately clear from it that the
function $\delta^{(k)}_{\Box}$ is symmetric and defines a pseudo-metric. We
write $\widetilde{\mathcal{W}}^{(k)}$ for the corresponding quotient, where we
identify two $k$-coloured graphons at $\delta^{(k)}_{\Box}$-distance 0.
###### Theorem 3.1.
The metric space $(\widetilde{\mathcal{W}}^{(k)},\delta^{(k)}_{\Box})$ is
compact.
###### Proof.
The proof is obtained by the obvious adaptation of the proof of Lovász and
Szegedy [LovaszSzegedy07gafa, Theorem 5.1] (see also [Lovasz:lngl, Theorem
9.23]) that the space $(\widetilde{\mathcal{W}},\delta_{\Box})$ is compact.
Let $(W_{n},\mathcal{A}_{n})_{n\in\mathbb{N}}$ be an arbitrary sequence of
elements of $\mathcal{W}^{(k)}$. We have to find a subsequence that converges
to some element of $\mathcal{W}^{(k)}$ with respect to $\delta_{\Box}^{(k)}$.
When dealing with the elements of $\mathcal{W}^{(k)}$, we can ignore null
subsets of $[0,1]$; thus all relevant statements, e.g. that one partition
refines another, are meant to hold almost everywhere.
For $n\in\mathbb{N}$, let the parts of $\mathcal{A}_{n}$ be
$(A_{n,1},\dots,A_{n,k})$ and, by applying a measure-preserving bijection to
$(W_{n},\mathcal{A}_{n})$, assume by Theorem 2.1 that the colour classes
$A_{n,1},\dots,A_{n,k}\subseteq[0,1]$ are all intervals, coming in this order.
By passing to a subsequence, assume that, for each $i\in[k]$, the length of
$A_{n,i}$ converges to some $\alpha_{i}\in[0,1]$ as $n\to\infty$. With
$\mathbold{\alpha}:=(\alpha_{1},\dots,\alpha_{k})$, this gives rise to the
“limiting” partition
$\mathcal{A}:=(I_{1}^{(\mathbold{\alpha})},\dots,I_{k}^{(\mathbold{\alpha})})\in{\bf
A}^{(\mathbold{\alpha})}$
of $[0,1]$ into intervals.
Let $m_{1}:=k$ and inductively for $\ell=2,3,\ldots$ let $m_{\ell}$ be
sufficiently large such that for every graphon $W$ and a measurable partition
$\mathcal{A}^{\prime}$ of $[0,1]$ with $|\mathcal{A}^{\prime}|\leqslant
m_{\ell-1}$ there is a measurable partition $\mathcal{P}=(P_{1},\dots,P_{m})$
of $[0,1]$ refining $\mathcal{A}^{\prime}$ such that $m\leqslant m_{\ell}$ and
$d_{\Box}(W,W_{\mathcal{P}})\leqslant 1/\ell$. Here $W_{\mathcal{P}}$ denotes
the projection of $W$ to the space of ${\mathcal{P}}$-step graphons; namely,
for every $i,j\in[m]$ with $P_{i}\times P_{j}$ non-null in $\lambda^{\oplus
2}$, $W_{{\mathcal{P}}}$ assumes the constant value
$\frac{1}{\lambda(P_{i})\lambda(P_{j})}\int_{P_{i}\times
P_{j}}W\,\mathrm{d}\lambda^{\oplus 2}$ on $P_{i}\times P_{j}$ (and, say,
$W_{{\mathcal{P}}}$ is defined to be 0 on all $\lambda^{\oplus 2}$-null
products $P_{i}\times P_{j}$). Such a number $m_{\ell}$ exists by
[Lovasz:lngl, Lemma 9.15], a version of the Weak Regularity Lemma for
graphons.
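The projection $W_{\mathcal{P}}$ is simply block-averaging. For a $W$ that is itself a step function and a $\mathcal{P}$ whose parts are unions of the atoms of $W$, this can be written down directly; the following Python sketch (our own finite representation, illustrative only) computes the block values of $W_{\mathcal{P}}$.

```python
def step_projection(lengths, vals, groups):
    # W_P for a step function W: on P_a x P_b the projection takes the
    # average of W, i.e. the integral over the block divided by its
    # measure (and, by the convention in the text, 0 on null products).
    part_len = [sum(lengths[i] for i in g) for g in groups]
    out = []
    for a, ga in enumerate(groups):
        row = []
        for b, gb in enumerate(groups):
            mass = part_len[a] * part_len[b]
            if mass == 0:
                row.append(0.0)
                continue
            integral = sum(lengths[i] * lengths[j] * vals[i][j]
                           for i in ga for j in gb)
            row.append(integral / mass)
        out.append(row)
    return out

# Averaging a 4-step checkerboard over two coarser parts:
checkerboard = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
print(step_projection([0.25] * 4, checkerboard, [[0, 1], [2, 3]]))
# → [[0.5, 0.5], [0.5, 0.5]]
```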
For each $n\in\mathbb{N}$, we do the following. Let
$\mathcal{P}_{n,1}:=\mathcal{A}_{n}$ and, inductively on $\ell=2,3,\dots$, let
$\mathcal{P}_{n,\ell}$ be the partition with at most $m_{\ell}$ parts obtained
by applying the above Weak Regularity Lemma to
$(W_{n},\mathcal{P}_{n,\ell-1})$. By adding empty parts to
$\mathcal{P}_{n,\ell}$, for each $\ell\geqslant 1$, we can assume that it has
the same number of parts (namely, $m_{\ell}$) of each colour, that is, we can
denote its parts as $(P_{n,\ell,i,j})_{i\in[k],j\in[m_{\ell}]}$, so that
$P_{n,\ell,i,j}\subseteq A_{n,i}$ for all $(i,j)\in[k]\times[m_{\ell}]$. Also,
define $W_{n,\ell}:=(W_{n})_{\mathcal{P}_{n,\ell}}$ to be the projection of
the graphon $W_{n}$ on the space of $\mathcal{P}_{n,\ell}$-step graphons.
Then, iteratively for $\ell=2,3,\dots$, repeat the following. Find a measure-
preserving bijection $\phi:[0,1]\to[0,1]$ such that
$(\mathcal{P}_{n,\ell})^{\phi}$ is a partition into intervals and $\phi$
preserves the previous partitions
$\mathcal{P}_{n,1},\dots,\mathcal{P}_{n,\ell-1}$ (each of which is a partition
into intervals by induction on $\ell$). Then, for each $m\geqslant\ell$,
replace $(W_{n,m},\mathcal{P}_{n,m})$ by $(W_{n,m},\mathcal{P}_{n,m})^{\phi}$.
When we are done with this step, the following properties hold for each
integer $\ell\geqslant 2$:
* •
$\delta_{\Box}^{(k)}((W_{n,\ell},\mathcal{A}_{n}),(W_{n},\mathcal{A}_{n}))\leqslant
1/\ell$;
* •
The partition $\mathcal{P}_{n,\ell}$ refines $\mathcal{P}_{n,\ell-1}$ (and,
inductively, also refines $\mathcal{A}_{n}=\mathcal{P}_{n,1}$);
* •
$|\mathcal{P}_{n,\ell}|=km_{\ell}$ with exactly $m_{\ell}$ parts assigned to
each colour class of $\mathcal{A}_{n}$.
Next, iteratively for $\ell=1,2,\dots$, we pass to a subsequence of $n$ so
that for every $(i,j)\in[k]\times[m_{\ell}]$, the length of the interval
$P_{n,\ell,i,j}$ converges, and for every pair
$(i,j),(i^{\prime},j^{\prime})\in[k]\times[m_{\ell}]$, the common value of the
step-graphon $W_{n,\ell}$ on $P_{n,\ell,i,j}\times
P_{n,\ell,i^{\prime},j^{\prime}}$ converges. It follows that the sequence
$W_{n,\ell}$ converges pointwise to some graphon $U_{\ell}$ which is itself a
step-function with $km_{\ell}$ parts that are intervals. We use
diagonalisation to find a subsequence of $n$ so that, for each
$\ell\in\mathbb{N}$, $W_{n,\ell}$ converges to some step-graphon $U_{\ell}$
a.e. as $n\to\infty$, with the step partition $\mathcal{P}_{\ell}$ of
$U_{\ell}$ consisting of $km_{\ell}$ intervals and refining the partition
${\mathcal{A}}$.
It follows that, for all $s<t$ in ${\mathbbm{N}}$, the partition
$\mathcal{P}_{t}$ is a refinement of $\mathcal{P}_{s}$ and, moreover, $U_{s}$
is the conditional expectation $\mathbb{E}[U_{t}|\mathcal{P}_{s}]$ a.e. As
observed in the proof of Lovász and Szegedy [LovaszSzegedy07gafa, Theorem
5.1], this (and $0\leqslant U_{t}\leqslant 1$ a.e.) implies by the Martingale
Convergence Theorem that $U_{\ell}$ converge a.e. to some graphon $U$ as
$\ell\to\infty$. By the Dominated Convergence Theorem, $U_{\ell}\to U$ also in
the $L^{1}$-distance.
We claim that $\delta^{(k)}_{\Box}((W_{n},\mathcal{A}_{n}),(U,\mathcal{A}))\to 0$ as
$n\to\infty$ (after we passed to the subsequence defined as above). Take any
$\varepsilon>0$. Fix an integer $\ell>4/\varepsilon$ such that
$\|U-U_{\ell}\|_{1}\leqslant\varepsilon/4$. Given $\ell$, fix $n_{0}$ such
that for all $n\geqslant n_{0}$ we have
$\|U_{\ell}-W_{n,\ell}\|_{1}\leqslant\varepsilon/4$ and
$\sum_{i=1}^{k}\lambda(I_{i}^{(\mathbold{\alpha})}\bigtriangleup
A_{n,i})\leqslant\varepsilon/4$. Then, for every $n\geqslant n_{0}$ we have
$\displaystyle\delta_{\Box}^{(k)}((U,\mathcal{A}),(W_{n},\mathcal{A}_{n}))$
$\displaystyle\leqslant$ $\displaystyle
d_{\Box}^{(k)}((U,\mathcal{A}),(U_{\ell},\mathcal{A}))+d_{\Box}^{(k)}((U_{\ell},\mathcal{A}),(W_{n,\ell},\mathcal{A}_{n}))$
$\displaystyle+$
$\displaystyle\delta_{\Box}^{(k)}((W_{n,\ell},\mathcal{A}_{n}),(W_{n},\mathcal{A}_{n}))$
$\displaystyle\leqslant$
$\displaystyle\|U-U_{\ell}\|_{1}+\|U_{\ell}-W_{n,\ell}\|_{1}+\sum_{i=1}^{k}\lambda(I_{i}^{(\mathbold{\alpha})}\bigtriangleup
A_{n,i})+1/\ell$ $\displaystyle\leqslant$
$\displaystyle\varepsilon/4+\varepsilon/4+\varepsilon/4+\varepsilon/4\ =\
\varepsilon.$
Since $\varepsilon>0$ was arbitrary, the claim is proved. Thus the metric
space $(\widetilde{\mathcal{W}}^{(k)},\delta^{(k)}_{\Box})$ is indeed compact.
∎
## 4 The lower semi-continuity of $J_{\mathbold{\alpha},\mathbold{p}}$ and
$R_{\mathbold{p}}$
For this section we fix an integer $k\geqslant 1$, a symmetric $k\times k$
matrix $\mathbold{p}=(p_{i,j})_{i,j\in[k]}\in[0,1]^{k\times k}$ and a non-zero
real vector $\mathbold{\alpha}\in[0,\infty)^{k}$. We show that the functions
$J_{\mathbold{\alpha},\mathbold{p}}$ and $R_{\mathbold{p}}$ are lower semi-
continuous functions from $(\widetilde{\mathcal{W}},\delta_{\Box})$ to
$[0,+\infty]$.
Let $\Gamma:\mathcal{W}^{(k)}\to{\mathcal{W}}$ be the map that forgets the
colouring, i.e., $\Gamma(U,\mathcal{A}):=U$ for
$(U,{\mathcal{A}})\in\mathcal{W}^{(k)}$. For $i,j\in[k]$, let the map
$\Gamma_{i,j}:\mathcal{W}^{(k)}\to{\mathcal{W}}$ send
$(U,(A_{1},\dots,A_{k}))\in\mathcal{W}^{(k)}$ to the graphon $V$ defined as
$V(x,y):=\left\\{\begin{array}[]{ll}U(x,y),&(x,y)\in(A_{i}\times
A_{j})\cup(A_{j}\times A_{i}),\\\
p_{i,j},&\mbox{otherwise,}\end{array}\right.\quad\mbox{for $x,y\in[0,1]$.}$
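On coloured graphons that are constant on the colour blocks, $\Gamma_{i,j}$ acts directly on the $k\times k$ matrix of block values. A minimal Python sketch (our own block-constant representation, for illustration only):

```python
def gamma_ij(u, p_ij, i, j):
    # Keep the values of the block-constant coloured graphon u on
    # (A_i x A_j) u (A_j x A_i); overwrite all other blocks by p_ij.
    # (Set equality handles the diagonal case i == j as well.)
    k = len(u)
    return [[u[a][b] if {a, b} == {i, j} else p_ij
             for b in range(k)] for a in range(k)]

print(gamma_ij([[0.2, 0.8], [0.8, 0.2]], 0.5, 0, 1))  # → [[0.5, 0.8], [0.8, 0.5]]
```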
###### Lemma 4.1.
The maps $\Gamma$ and $\Gamma_{i,j}$, for ${i,j\in[k]}$, are $1$-Lipschitz
maps from $(\mathcal{W}^{(k)},d_{\Box}^{(k)})$ to $({\mathcal{W}},d_{\Box})$.
###### Proof.
First, consider $\Gamma:\mathcal{W}^{(k)}\to\mathcal{W}$. Take arbitrary
$(U,\mathcal{A}),(V,\mathcal{B})\in\mathcal{W}^{(k)}$. Let
${\mathcal{A}}=(A_{1},\dots,A_{k})$ and $\mathcal{B}=(B_{1},\dots,B_{k})$.
Clearly, the pairwise products $A_{i}\times A_{j}$ (resp. $B_{i}\times B_{j}$)
for $i,j\in[k]$ partition $[0,1]^{2}$. Thus we have
$\displaystyle d_{\Box}(\Gamma(U,\mathcal{A}),\Gamma(V,\mathcal{B}))$
$\displaystyle=$ $\displaystyle d_{\Box}(U,V)\ =\
\sup_{C,D\subseteq[0,1]}\left|\int_{C\times D}(U-V)\,\mathrm{d}\lambda^{\oplus
2}\right|$ (15) $\displaystyle\leqslant$
$\displaystyle\sup_{C,D\subseteq[0,1]}\sum_{i,j\in[k]}\left|\int_{C\times
D}(U\,{{\mathbbm{1}}}_{A_{i}\times A_{j}}-V\,{\mathbbm{1}}_{B_{i}\times
B_{j}})\,\mathrm{d}\lambda^{\oplus 2}\right|$ $\displaystyle\leqslant$
$\displaystyle d_{\Box}^{(k)}((U,\mathcal{A}),(V,\mathcal{B})).$
Thus the function $\Gamma$ is indeed $1$-Lipschitz.
The claim about $\Gamma_{i,j}$ follows by observing that
$d_{\Box}(\Gamma_{i,j}(U,\mathcal{A}),\Gamma_{i,j}(V,\mathcal{B}))\leqslant
d_{\Box}(\Gamma(U,\mathcal{A}),\Gamma(V,\mathcal{B}))$
for every $(U,\mathcal{A}),(V,\mathcal{B})\in\mathcal{W}^{(k)}$.∎
###### Lemma 4.2.
Let $F$ be $\Gamma$ or $\Gamma_{i,j}$ for some $i,j\in[k]$. Then $F$ gives
rise to a well-defined function
$\widetilde{\mathcal{W}}^{(k)}\to\widetilde{\mathcal{W}}$ which, moreover, is
$1$-Lipschitz as a function from
$(\widetilde{\mathcal{W}}^{(k)},\delta_{\Box}^{(k)})$ to
$(\widetilde{\mathcal{W}},\delta_{\Box})$.
###### Proof.
Take any $(U,{\mathcal{A}}),(V,{\mathcal{B}})\in\mathcal{W}^{(k)}$. Let
$\varepsilon>0$ be arbitrary. Fix measure-preserving maps
$\phi,\psi:[0,1]\to[0,1]$ with
$d_{\Box}^{(k)}((U,{\mathcal{A}})^{\psi},(V,{\mathcal{B}})^{\phi})<\delta_{\Box}^{(k)}((U,{\mathcal{A}}),(V,{\mathcal{B}}))+\varepsilon$.
By Lemma 4.1, we have
$\displaystyle\delta_{\Box}\left(F(U,{\mathcal{A}}),F(V,{\mathcal{B}})\right)$
$\displaystyle\leqslant$ $\displaystyle
d_{\Box}\left((F(U,{\mathcal{A}}))^{\psi},(F(V,{\mathcal{B}}))^{\phi}\right)\
=\
d_{\Box}\left(F(U^{\psi},{\mathcal{A}}^{\psi}),F(V^{\phi},{\mathcal{B}}^{\phi})\right)$
$\displaystyle\leqslant$ $\displaystyle
d_{\Box}^{(k)}((U^{\psi},{\mathcal{A}}^{\psi}),(V^{\phi},{\mathcal{B}}^{\phi}))\
<\
\delta_{\Box}^{(k)}\left((U,{\mathcal{A}}),(V,{\mathcal{B}})\right)+\varepsilon.$
This implies both claims about $F$ as $\varepsilon>0$ was arbitrary. ∎
For $(U,(A_{1},\dots,A_{k}))\in\mathcal{W}^{(k)}$, define
$I^{(k)}_{\mathbold{p}}(U,(A_{1},\dots,A_{k})):=\frac{1}{2}\sum_{i,j\in[k]}\int_{A_{i}\times
A_{j}}I_{p_{i,j}}(U)\,\mathrm{d}\lambda^{\oplus 2}.$ (16)
In the special case of (16) when $k=1$ and $p_{1,1}=p$ (and we ignore the
second component since ${\bf A}^{(1)}$ consists of just the trivial partition
of $[0,1]$ into one part), we get the function
$I_{p}:{\mathcal{W}}\to[0,\infty]$ of Chatterjee and Varadhan defined in (5).
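For graphons that are constant on the blocks of an interval partition, (16) reduces to a finite weighted sum of relative entropies. The following Python sketch illustrates this, assuming (as in (5)) that the pointwise integrand $I_{p}$ is the relative entropy of Bernoulli parameters; the finite representation is our own and not part of the formal development.

```python
from math import log

def h(p, u):
    # Pointwise integrand I_p of (5): the relative entropy of
    # Bernoulli(u) with respect to Bernoulli(p); infinite when u gives
    # positive mass to an outcome that p excludes.
    def term(x, y):
        if x == 0.0:
            return 0.0
        if y == 0.0:
            return float("inf")
        return x * log(x / y)
    return term(u, p) + term(1 - u, 1 - p)

def I_k(alpha, p, u):
    # (16) for a graphon equal to the constant u[i][j] on each colour
    # block A_i x A_j, where alpha[i] is the measure of A_i.
    k = len(alpha)
    return 0.5 * sum(alpha[i] * alpha[j] * h(p[i][j], u[i][j])
                     for i in range(k) for j in range(k))

print(h(0.5, 0.0))  # log 2: an empty graph against edge density 1/2
```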
###### Lemma 4.3.
The function $I^{(k)}_{\mathbold{p}}$ gives a well-defined function
$\widetilde{\mathcal{W}}^{(k)}\to[0,+\infty]$ which, moreover, is lower semi-
continuous as a function on
$(\widetilde{\mathcal{W}}^{(k)},\delta^{(k)}_{\Box})$.
###### Proof.
Note that we can write
$I^{(k)}_{\mathbold{p}}(U,\mathcal{A})=\sum_{1\leqslant i\leqslant j\leqslant
k}I_{p_{i,j}}(\Gamma_{i,j}(U,\mathcal{A})),\quad(U,{\mathcal{A}})\in\mathcal{W}^{(k)},$
(17)
because $\Gamma_{i,j}(U,{\mathcal{A}})$ assumes value $p_{i,j}$ outside of
$(A_{i}\times A_{j})\cup(A_{j}\times A_{i})$ while $I_{p}(p)=0$ for any
$p\in[0,1]$. Recall that, by Theorem 1.2, $I_{p}$ gives a well-defined
function $\widetilde{\mathcal{W}}\to[0,\infty]$ for every $p\in[0,1]$. Thus,
by Lemma 4.2, the right-hand side of (17) does not change if we replace
$(U,{\mathcal{A}})$ by any other element of $\mathcal{W}^{(k)}$ at
$\delta_{\Box}^{(k)}$-distance 0. We conclude that $I_{\mathbold{p}}^{(k)}$
gives a well-defined function on $\widetilde{\mathcal{W}}^{(k)}$.
Each composition $I_{p_{i,j}}\circ\Gamma_{i,j}$ is lsc as a function
$(\widetilde{\mathcal{W}}^{(k)},\delta_{\Box}^{(k)})\to[0,\infty]$ because,
for every $r\in{\mathbbm{R}}$, the level set
$\\{I_{p_{i,j}}\circ\Gamma_{i,j}\leqslant r\\}$ is closed as the pre-image under the
continuous function
$\Gamma_{i,j}:\widetilde{\mathcal{W}}^{(k)}\to\widetilde{\mathcal{W}}$ of the
closed set $\\{I_{p_{i,j}}\leqslant r\\}$. (Recall that, for every $p\in[0,1]$, the
function $I_{p}:\widetilde{\mathcal{W}}\to[0,\infty]$ is lsc by Theorem 1.2.) Thus
$I^{(k)}_{\mathbold{p}}:\widetilde{\mathcal{W}}^{(k)}\to[0,\infty]$ is lsc by
(17), as a finite sum of lsc functions. ∎
Now we are ready to show that $J_{\mathbold{\alpha},\mathbold{p}}$ and
$R_{\mathbold{p}}$ are lsc (in particular, proving Theorem 1.4). The argument
showing the lower semi-continuity of these functions is motivated by the
Contraction Principle (see e.g. [DemboZeitouni10ldta, Theorem 4.2.1] or
[RassoulaghaSeppelainen14cldigm, Section 3.1]).
###### Corollary 4.4.
For every symmetric matrix $\mathbold{p}\in[0,1]^{k\times k}$ and every non-
zero real vector $\mathbold{\alpha}\in[0,\infty)^{k}$, the functions
$J_{\mathbold{\alpha},\mathbold{p}}$ and $R_{\mathbold{p}}$ are lower semi-
continuous on $(\widetilde{\mathcal{W}},\delta_{\Box})$.
###### Proof.
Note that, for any $U\in{\mathcal{W}}$, the value
$J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U})$ is equal to the infimum of
$I_{\mathbold{p}}^{(k)}(V,{\mathcal{A}})$ over all
$(V,{\mathcal{A}})\in\mathcal{W}\times{\bf A}^{(\mathbold{\alpha})}$ such that
$\Gamma(V,{\mathcal{A}})=V$ belongs to $\widetilde{U}$. Indeed, for any
partition ${\mathcal{A}}\in{\bf A}^{(\mathbold{\alpha})}$ of $[0,1]$ one can
find by Theorem 2.1 a measure-preserving Borel bijection $\phi$ of $[0,1]$
such that ${\mathcal{A}}^{\phi}$ is equal a.e. to
$(I_{1}^{(\mathbold{\alpha})},\dots,I_{k}^{(\mathbold{\alpha})})$.
In the rest of the proof, let us view $\Gamma$ and $I^{(k)}_{\mathbold{p}}$ as
functions on $\widetilde{\mathcal{W}}^{(k)}$ (by Lemmas 4.2 and 4.3). Thus we
have
$J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U})=\inf_{\Gamma^{-1}(\widetilde{U})\cap\widetilde{\mathcal{W}}^{(\mathbold{\alpha})}}\,I_{\mathbold{p}}^{(k)},\quad\mbox{for
each $U\in{\mathcal{W}}$},$ (18)
where
$\widetilde{\mathcal{W}}^{(\mathbold{\alpha})}\subseteq\widetilde{\mathcal{W}}^{(k)}$
denotes the set of all $\delta_{\Box}^{(k)}$-equivalence classes that
intersect $\mathcal{W}\times{\bf A}^{(\mathbold{\alpha})}$ (equivalently, lie
entirely inside $\mathcal{W}\times{\bf A}^{(\mathbold{\alpha})}$).
Take any graphon $U\in{\mathcal{W}}$. The pre-image
$\Gamma^{-1}(\widetilde{U})$ is a closed subset of
$\widetilde{\mathcal{W}}^{(k)}$ by the continuity of $\Gamma$ (Lemma 4.2).
Also, $\widetilde{\mathcal{W}}^{(\mathbold{\alpha})}$ is a closed subset of
$\widetilde{\mathcal{W}}^{(k)}$: if
$(V,(B_{1},\dots,B_{k}))\in\mathcal{W}^{(k)}$ is not in $\mathcal{W}\times{\bf
A}^{(\mathbold{\alpha})}$, then the $\delta_{\Box}^{(k)}$-ball of radius e.g.
$\frac{1}{2}\,\left\|\,(\lambda(B_{i}))_{i\in[k]}-\frac{1}{\|\mathbold{\alpha}\|_{1}}\mathbold{\alpha}\,\right\|_{1}>0$
around it is disjoint from $\mathcal{W}\times{\bf A}^{(\mathbold{\alpha})}$.
Recall that the space $\widetilde{\mathcal{W}}^{(k)}$ is compact by Theorem
3.1. Thus the infimum in (18) is taken over a (non-empty) compact set. As any
lsc function attains its infimum on any non-empty compact set and
$I^{(k)}_{\mathbold{p}}:\widetilde{\mathcal{W}}^{(k)}\to[0,+\infty]$ is lsc by
Lemma 4.3, there is $(V,{\mathcal{A}})\in\mathcal{W}\times{\bf
A}^{(\mathbold{\alpha})}$ such that $V\in\widetilde{U}$ and
$I_{\mathbold{p}}^{(k)}(\widetilde{(V,{\mathcal{A}})})=J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U})$,
where $\widetilde{(V,{\mathcal{A}})}$ denotes the
$\delta_{\Box}^{(k)}$-equivalence class of $(V,{\mathcal{A}})$.
Thus for any $r\in{\mathbbm{R}}$ the level set
$\\{J_{\mathbold{\alpha},\mathbold{p}}\leqslant r\\}$ is equal to the image of
$\\{I_{\mathbold{p}}^{(k)}\leqslant
r\\}\cap\widetilde{\mathcal{W}}^{(\mathbold{\alpha})}$ under $\Gamma$. Since
the function $I_{\mathbold{p}}^{(k)}$ is lsc by Lemma 4.3, the level set
$\\{I_{\mathbold{p}}^{(k)}\leqslant r\\}$ is a closed and thus compact subset
of $\widetilde{\mathcal{W}}^{(k)}$. Thus the set
$\\{I_{\mathbold{p}}^{(k)}\leqslant
r\\}\cap\widetilde{\mathcal{W}}^{(\mathbold{\alpha})}$ is compact. Its image
$\\{J_{\mathbold{\alpha},\mathbold{p}}\leqslant r\\}$ under the continuous map
$\Gamma:\widetilde{\mathcal{W}}^{(k)}\to\widetilde{\mathcal{W}}$ is compact
and thus closed. Since $r\in{\mathbbm{R}}$ was arbitrary, the function
$J_{\mathbold{\alpha},\mathbold{p}}:\widetilde{\mathcal{W}}\to[0,\infty]$ is
lsc.
Since $R_{\mathbold{p}}(\widetilde{U})$ is equal to the infimum of
$I_{\mathbold{p}}^{(k)}$ over $\Gamma^{-1}(\widetilde{U})$, the same argument
(except we do not need to intersect $\Gamma^{-1}(\widetilde{U})$ with
$\widetilde{\mathcal{W}}^{(\mathbold{\alpha})}$ anywhere) also works for
$R_{\mathbold{p}}$. ∎
## 5 Proof of Theorem 1.5
First we prove two auxiliary lemmas. The first one states, informally
speaking, that the measure
$\widetilde{\mathbb{P}}_{\mathbold{a},\mathbold{p}}$ is “uniformly continuous”
in $\mathbold{a}$.
###### Lemma 5.1.
For every symmetric matrix $\mathbold{p}\in[0,1]^{k\times k}$, real
$\varepsilon\in(0,1)$ and non-zero integer vectors
$\mathbold{a},\mathbold{b}\in{\mathbbm{N}}^{k}$, if
$\|\mathbold{b}-\mathbold{a}\|_{1}\leqslant\varepsilon\,\min\left\\{\,\|\mathbold{a}\|_{1},\|\mathbold{b}\|_{1}\,\right\\}$
then there is a (discrete) measure $\widetilde{{\mathbbm{C}}}$ on
$\widetilde{\mathcal{W}}\times\widetilde{\mathcal{W}}$ which gives a coupling
between $\widetilde{\mathbb{P}}_{\mathbold{a},\mathbold{p}}$ and
$\widetilde{\mathbb{P}}_{\mathbold{b},\mathbold{p}}$ such that for every
$(\widetilde{U},\widetilde{V})$ in the support of $\widetilde{{\mathbbm{C}}}$
we have $\delta_{\Box}(\widetilde{U},\widetilde{V})\leqslant 4\varepsilon/(1-\varepsilon)$.
###### Proof.
Let $m:=\|\mathbold{a}\|_{1}$ and $n:=\|\mathbold{b}\|_{1}$. Let
$[m]=A_{1}\cup\dots\cup A_{k}$ (resp. $[n]=B_{1}\cup\dots\cup B_{k}$) be the
partition into consecutive intervals with $a_{1},\dots,a_{k}$ (resp.
$b_{1},\dots,b_{k}$) elements. For each $i\in[k]$ fix some subsets
$A_{i}^{\prime}\subseteq A_{i}$ and $B_{i}^{\prime}\subseteq B_{i}$ of size
$\min(a_{i},b_{i})$. Define $A^{\prime}:=\cup_{i=1}^{k}A_{i}^{\prime}$ and
$B^{\prime}:=\cup_{i=1}^{k}B_{i}^{\prime}$. Fix any bijection $h:A^{\prime}\to
B^{\prime}$ that sends each $A_{i}^{\prime}$ to $B_{i}^{\prime}$. We have
$\left|\,[m]\setminus
A^{\prime}\,\right|\leqslant\sum_{i=1}^{k}\max(0,a_{i}-b_{i})\leqslant\|\mathbold{a}-\mathbold{b}\|_{1}\leqslant\varepsilon
m$ (19)
and similarly $\left|\,[n]\setminus B^{\prime}\,\right|\leqslant\varepsilon
n$.
We can couple random graphs $G\sim{\mathbbm{G}}(\mathbold{a},\mathbold{p})$
and $H\sim{\mathbbm{G}}(\mathbold{b},\mathbold{p})$ so that every pair
$\\{x,y\\}$ in $A^{\prime}$ is an edge in $G$ if and only if $\\{h(x),h(y)\\}$
is an edge in $H$. This is possible because, with $i,j\in[k]$ satisfying
$(x,y)\in A_{i}^{\prime}\times A_{j}^{\prime}$, the probability of
$\\{x,y\\}\in E(G)$ is $p_{i,j}$, the same as the probability with which
$h(x)\in B_{i}$ and $h(y)\in B_{j}$ are made adjacent in $H$ (so we can just
use the same coin flip for both pairs). By making all edge choices to be
mutually independent otherwise, we get a probability measure ${\mathbbm{C}}$
on pairs of graphs which is a coupling between
${\mathbbm{G}}(\mathbold{a},\mathbold{p})$ and
${\mathbbm{G}}(\mathbold{b},\mathbold{p})$. The corresponding measure
$\widetilde{{\mathbbm{C}}}$ on
$\widetilde{\mathcal{W}}\times\widetilde{\mathcal{W}}$ gives a coupling
between $\widetilde{\mathbb{P}}_{\mathbold{a},\mathbold{p}}$ and
$\widetilde{\mathbb{P}}_{\mathbold{b},\mathbold{p}}$.
Let us show that this coupling ${\mathbbm{C}}$ satisfies the distance
requirement. Take any pair of graphs $(G,H)$ in the support of
${\mathbbm{C}}$. Let $G^{\prime}:=G[A^{\prime}]$ be obtained from $G$ by
removing all vertices in $[m]\setminus A^{\prime}$. By (19), we remove at most
an $\varepsilon$ fraction of the vertices. By relabelling the vertices of
$G$, we can assume that all removed vertices come at the very end. Then the
union of the intervals in the graphon $f^{G}$ corresponding to the vertices of
$G^{\prime}$ is an initial segment of $[0,1]$ of length $s\geqslant
1-\varepsilon$ and the graphon of $G^{\prime}$ is the pull-back of $f^{G}$
under the map $x\mapsto sx$. By Lemma 2.4, we have
$\delta_{\Box}(f^{G},f^{G^{\prime}})\leqslant
2(\frac{1}{1-\varepsilon}-1)=\frac{2\varepsilon}{1-\varepsilon}$. By symmetry,
the same estimate applies to the pair $(H,H^{\prime})$, where $H^{\prime}$ is
obtained from $H$ by deleting all vertices from $[n]\setminus B^{\prime}$. By
the definition of our coupling, the function $h:V(G^{\prime})\to
V(H^{\prime})$ is (deterministically) an isomorphism between $G^{\prime}$ and
$H^{\prime}$. Thus the graphons of $G^{\prime}$ and $H^{\prime}$ are weakly
isomorphic. This gives by the Triangle Inequality the required upper bound on
the cut-distance between $\widetilde{f^{G}}$ and $\widetilde{f^{H}}$. ∎
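The coupling in the proof above is easy to realise explicitly: matched vertex pairs reuse one coin flip, all other pairs flip independently. The following Python sketch (illustrative only; the vertex labelling by colour-index pairs is our own convention) implements this and checks that the two samples agree on the matched parts.

```python
import random

def coupled_block_graphs(a, b, p, rng):
    # The coupling from the proof of Lemma 5.1: vertex t of colour i is
    # "matched" when t < min(a_i, b_i), and a pair of matched vertices
    # reuses the same coin flip in both graphs; all other pairs flip
    # independently.
    k = len(a)
    def matched(v):
        return v[1] < min(a[v[0]], b[v[0]])
    shared = {}
    def edge(u, v):
        prob = p[u[0]][v[0]]
        if matched(u) and matched(v):
            key = frozenset((u, v))
            if key not in shared:
                shared[key] = rng.random() < prob
            return shared[key]
        return rng.random() < prob
    def sample(sizes):
        V = [(i, t) for i in range(k) for t in range(sizes[i])]
        return {frozenset((u, v)) for x, u in enumerate(V)
                for v in V[x + 1:] if edge(u, v)}
    return sample(a), sample(b)

rng = random.Random(0)
G, H = coupled_block_graphs([3, 2], [2, 4], [[0.5, 0.5], [0.5, 0.5]], rng)
# G and H agree (deterministically) on the matched vertices:
M = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(all((frozenset((M[x], v)) in G) == (frozenset((M[x], v)) in H)
          for x in range(4) for v in M[x + 1:]))  # → True
```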
The function $J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{V})$ is not in
general continuous in $\mathbold{\alpha}$ even when $\mathbold{p}$ and
$\widetilde{V}$ are fixed. (For example, if $k=2$, $V=1-f^{K_{2}}$ is the
limit of $K_{n}\sqcup K_{n}$, the disjoint union of two cliques of order $n$
each, and $\mathbold{p}$ is the identity $2\times 2$ matrix then
$J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{V})$ is $0$ for
$\mathbold{\alpha}=(\frac{1}{2},\frac{1}{2})$ and $\infty$ otherwise.)
However, the following version of “uniform semi-continuity” in
$\mathbold{\alpha}$ will suffice for our purposes.
###### Lemma 5.2.
Fix any symmetric $\mathbold{p}\in[0,1]^{k\times k}$. Then for every $\eta>0$
there is $\varepsilon=\varepsilon(\eta,\mathbold{p})>0$ such that if vectors
${\mathbold{\gamma}},\mathbold{\kappa}\in[0,\infty)^{k}$ with
$\|\mathbold{\gamma}\|_{1}=\|\mathbold{\kappa}\|_{1}=1$ satisfy
$\kappa_{i}\leqslant(1+\varepsilon)\gamma_{i},\quad\mbox{for every
$i\in[k]$},$ (20)
then for every $U\in{\mathcal{W}}$ there is ${V}\in{{\mathcal{W}}}$ with
$\delta_{\Box}(U,{V})\leqslant\eta$ and
$J_{\mathbold{\kappa},\mathbold{p}}(\widetilde{V})\leqslant
J_{{\mathbold{\gamma}},\mathbold{p}}(\widetilde{U})+\eta.$
###### Proof.
For every fixed $p\in(0,1)$, the relative entropy function
$h_{p}:[0,1]\to[0,\infty]$ is bounded (in fact, by
$\max\\{h_{p}(0),h_{p}(1)\\}<\infty$). Thus we can find a constant $C$ that
depends on $\mathbold{p}$ only such that for every $x\in[0,1]$ and
$i,j\in[k]$, $h_{p_{i,j}}(x)$ is either $\infty$ or at most $C$.
Let us show that any positive
$\varepsilon\leqslant\min\\{\,\frac{\eta}{3(2C+\eta)},\,\frac{\eta}{2}\,\\}$
works. Let ${\mathbold{\gamma}}$, $\mathbold{\kappa}$, and $U$ be as in the
lemma. Assume that
$J_{{\mathbold{\gamma}},\mathbold{p}}(\widetilde{U})<\infty$ for otherwise we
can trivially take ${V}:=U$. By the choice of $C$, we have that
$J_{{\mathbold{\gamma}},\mathbold{p}}(\widetilde{U})\leqslant C$.
By replacing $U\in{\mathcal{W}}$ with a weakly isomorphic graphon, assume
that, for the partition $(I_{i}^{({\mathbold{\gamma}})})_{i\in[k]}$ of $[0,1]$
into intervals, we have
$\frac{1}{2}\sum_{i,j\in[k]}\int_{I_{i}^{({\mathbold{\gamma}})}\times
I_{j}^{({\mathbold{\gamma}})}}I_{p_{i,j}}(U)\,\mathrm{d}\lambda^{\oplus
2}<J_{{\mathbold{\gamma}},\mathbold{p}}(\widetilde{U})+\frac{\eta}{2}.$
Let $\phi:[0,1]\to[0,1]$ be the a.e. defined function which on each interval
$I_{i}^{({\mathbold{\kappa}})}$ of positive length is the increasing linear
function that bijectively maps this interval onto
$I_{i}^{({\mathbold{\gamma}})}$. The Radon-Nikodym derivative
$D:=\frac{\,\mathrm{d}\mu}{\,\mathrm{d}\lambda}$ of the push-forward
$\mu:=\phi_{*}\lambda$ of the Lebesgue measure $\lambda$ along $\phi$ assumes
value $\kappa_{i}/\gamma_{i}$ a.e. on $I_{i}^{({\mathbold{\gamma}})}$. (The
union of the intervals $I_{i}^{(\mathbold{\gamma})}$ with $\gamma_{i}=0$ is a finite and
thus null set, and can be ignored from the point of view of $D$.) Let
$V:=U^{\phi}$. It is a graphon by the first part of Lemma 2.3. We have by the
definition of $J_{\mathbold{\kappa},\mathbold{p}}$ that
$J_{\mathbold{\kappa},\mathbold{p}}(\widetilde{V})\leqslant\
\frac{1}{2}\sum_{i,j\in[k]}\int_{I_{i}^{({\mathbold{\kappa}})}\times
I_{j}^{({\mathbold{\kappa}})}}I_{p_{i,j}}(U^{\phi})\
\,\mathrm{d}\lambda^{\oplus 2}=\
\frac{1}{2}\sum_{i,j\in[k]}\frac{\kappa_{i}\kappa_{j}}{\gamma_{i}\gamma_{j}}\int_{I_{i}^{({\mathbold{\gamma}})}\times
I_{j}^{({\mathbold{\gamma}})}}I_{p_{i,j}}(U)\,\mathrm{d}\lambda^{\oplus 2}.$
By
$\frac{\kappa_{i}\kappa_{j}}{\gamma_{i}\gamma_{j}}\leqslant(1+\varepsilon)^{2}\leqslant
1+3\varepsilon$, this is at most
$(1+3\varepsilon)\,\frac{1}{2}\sum_{i,j\in[k]}\int_{I_{i}^{({\mathbold{\gamma}})}\times
I_{j}^{({\mathbold{\gamma}})}}I_{p_{i,j}}(U)\,\mathrm{d}\lambda^{\oplus
2}\leqslant(1+3\varepsilon)\left(J_{{\mathbold{\gamma}},\mathbold{p}}(\widetilde{U})+\frac{\eta}{2}\right).$
By $J_{{\mathbold{\gamma}},\mathbold{p}}(\widetilde{U})\leqslant C$ and our
choice of $\varepsilon$, this in turn is at most
$J_{{\mathbold{\gamma}},\mathbold{p}}(\widetilde{U})+3\varepsilon
C+(1+3\varepsilon)\frac{\eta}{2}\leqslant\
J_{{\mathbold{\gamma}},\mathbold{p}}(\widetilde{U})+\eta.$
Thus it remains to estimate the cut-distance between $U$ and ${V}=U^{\phi}$.
Since the Radon-Nikodym derivative
$D=\frac{\,\mathrm{d}\phi_{*}\lambda}{\,\mathrm{d}\lambda}$ satisfies
$D(x)\leqslant 1+\varepsilon$ for a.e. $x\in[0,1]$ by (20), we have by Lemma
2.3 that $\delta_{\Box}(U,{V})\leqslant 2\varepsilon\leqslant\eta$. Thus $V$
is as desired.∎
Now we are ready to prove Theorem 1.5.
###### Proof of Theorem 1.5.
Recall that we have to establish an LDP for the sequence
$(\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}})_{n\in{\mathbbm{N}}}$
of measures, where the integer vectors $\mathbold{a}_{n}$, after scaling by
$1/n$, converge to a non-zero vector $\mathbold{\alpha}\in[0,\infty)^{k}$.
Since the underlying metric space $(\widetilde{\mathcal{W}},\delta_{\Box})$ is
compact and the proposed rate function $J_{\mathbold{\alpha},\mathbold{p}}$ is
lower semi-continuous by Theorem 1.4, it suffices to prove the bounds in (12) and (13) of Lemma
2.5 for any given graphon $\widetilde{U}\in\widetilde{\mathcal{W}}$.
Let us show the upper bound first, that is, that
$\lim_{\eta\to
0}\limsup_{n\to\infty}\frac{1}{(\|\mathbold{a}_{n}\|_{1})^{2}}\log\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}(S(\widetilde{U},\eta))\leqslant-
J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U}).$ (21)
Take any $\eta>0$. Assume that, for example, $\eta<1/9$. Let
$m\in{\mathbbm{N}}$ be sufficiently large. Define
$\mathbold{b}:=(a_{m,1}{\mathbbm{1}}_{\alpha_{1}>0},\ldots,a_{m,k}{\mathbbm{1}}_{\alpha_{k}>0})\in{\mathbbm{Z}}^{k}.$
(22)
In other words, we let $\mathbold{b}$ be $\mathbold{a}_{m}$ except that we set
its $i$-th entry $b_{i}$ to $0$ for each $i\in[k]$ with $\alpha_{i}=0$. By
Theorem 1.3 (the LDP by Borgs et al. [BCGPS]) and Theorem 1.4, we have that
$\limsup_{n\to\infty}\frac{1}{(n\,\|\mathbold{b}\|_{1})^{2}}\log\widetilde{\mathbb{P}}_{n\mathbold{b},\mathbold{p}}(S(\widetilde{U},\eta))\leqslant-\inf_{S(\widetilde{U},\eta)}J_{\mathbold{b},\mathbold{p}}.$
(23)
Since $m$ is sufficiently large, we can assume that
$\|\frac{1}{n}\,\mathbold{a}_{n}-\mathbold{\alpha}\|_{1}\leqslant\xi\,\|\mathbold{\alpha}\|_{1}$
for every $n\geqslant m$, where e.g. $\xi:=\eta/40$. In particular, we have
$\|\mathbold{a}_{m}-\mathbold{b}\|_{1}\leqslant\|\mathbold{a}_{m}-m\mathbold{\alpha}\|_{1}\leqslant\xi
m\|\mathbold{\alpha}\|_{1}$ from which it follows that
$\|\mathbold{b}-m\mathbold{\alpha}\|_{1}\leqslant\|\mathbold{b}-\mathbold{a}_{m}\|_{1}+\|\mathbold{a}_{m}-m\mathbold{\alpha}\|_{1}\leqslant
2\xi m\,\|\mathbold{\alpha}\|_{1}.$
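These $\ell^{1}$ estimates are elementary and can be sanity-checked numerically; the data below (a choice of $\mathbold{\alpha}$, $\xi$ and $\mathbold{a}_{m}$ meeting the stated hypotheses) is ours, for illustration only:

```python
def l1(v):
    return sum(abs(t) for t in v)

# Illustrative data (not from the paper): alpha has a zero entry,
# xi plays the role of eta/40, and a_m approximates m * alpha.
alpha = [0.5, 0.3, 0.0, 0.2]
m, xi = 1000, 0.01
a_m = [502, 298, 2, 198]

# Hypothesis: ||a_m - m*alpha||_1 <= xi * m * ||alpha||_1.
assert l1([a - m * al for a, al in zip(a_m, alpha)]) <= xi * m * l1(alpha)

# b equals a_m except that entries with alpha_i = 0 are zeroed out.
b = [a if al > 0 else 0 for a, al in zip(a_m, alpha)]

# ||a_m - b||_1 <= ||a_m - m*alpha||_1: the two vectors differ only in
# coordinates where m * alpha_i = 0.
assert l1([x - y for x, y in zip(a_m, b)]) \
    <= l1([a - m * al for a, al in zip(a_m, alpha)])

# Triangle inequality: ||b - m*alpha||_1 <= 2 * xi * m * ||alpha||_1.
assert l1([x - m * al for x, al in zip(b, alpha)]) <= 2 * xi * m * l1(alpha)
```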
Let $n$ be sufficiently large (in particular, $n\geqslant m$) and let
$n^{\prime}:=\lfloor n/m\rfloor$. By the above, we have
$\displaystyle\|n^{\prime}\mathbold{b}-\mathbold{a}_{n}\|_{1}$
$\displaystyle\leqslant$
$\displaystyle\textstyle\|\mathbold{b}\|_{1}+\|\frac{n}{m}\mathbold{b}-n\mathbold{\alpha}\|_{1}+\|n\mathbold{\alpha}-\mathbold{a}_{n}\|_{1}$
(24) $\displaystyle\leqslant$ $\displaystyle\|\mathbold{b}\|_{1}+3\xi
n\|\mathbold{\alpha}\|_{1}$ $\displaystyle\leqslant$
$\displaystyle\textstyle\|\mathbold{b}\|_{1}+4\xi\min\\{\,\|\frac{n}{m}\mathbold{b}\|_{1},\,\|\mathbold{a}_{n}\|_{1}\,\\}$
$\displaystyle\leqslant$
$\displaystyle\textstyle\frac{\eta}{9}\,\min\\{\,\|n^{\prime}\mathbold{b}\|_{1},\,\|\mathbold{a}_{n}\|_{1}\,\\}.$
Thus, by Lemma 5.1, there is a coupling $\widetilde{{\mathbbm{C}}}$ between
$\widetilde{\mathbb{P}}_{n^{\prime}\mathbold{b},\mathbold{p}}$ and
$\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}$ such that for every
$(\widetilde{V},\widetilde{W})$ in the support of $\widetilde{{\mathbbm{C}}}$
we have $\delta_{\Box}(V,{W})\leqslant\eta/2$. Thus, if
$\widetilde{W}\sim\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}$
lands in $S(\widetilde{U},\eta/2)$ then necessarily
$\widetilde{V}\sim\widetilde{\mathbb{P}}_{n^{\prime}\mathbold{b},\mathbold{p}}$
lands in $S(\widetilde{U},\eta)$. This gives that
$\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}(S(\widetilde{U},\eta/2))\leqslant\widetilde{\mathbb{P}}_{n^{\prime}\mathbold{b},\mathbold{p}}(S(\widetilde{U},\eta)).$
Since this is true for every sufficiently large $n$ and it holds by (24) that,
for example,
$\|\mathbold{a}_{n}\|_{1}/(n^{\prime}\,\|\mathbold{b}\|_{1})\geqslant
1-\eta/9\geqslant\sqrt{1-\eta}$, we have that
$\limsup_{n\to\infty}\frac{1-\eta}{(\|\mathbold{a}_{n}\|_{1})^{2}}\log\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}(S(\widetilde{U},\eta/2))\leqslant\limsup_{n^{\prime}\to\infty}\frac{1}{(n^{\prime}\,\|\mathbold{b}\|_{1})^{2}}\log\widetilde{\mathbb{P}}_{n^{\prime}\mathbold{b},\mathbold{p}}(S(\widetilde{U},\eta)).$
(25)
Let us turn our attention to the right-hand side of (23). Pick some
$\widetilde{V}\in S(\widetilde{U},\eta)$ with
$J_{\mathbold{b},\mathbold{p}}(\widetilde{V})\leqslant\inf_{S(\widetilde{U},\eta)}J_{\mathbold{b},\mathbold{p}}+\eta$.
(In fact, by the lower semi-continuity of $J_{\mathbold{b},\mathbold{p}}$ and
the compactness of $S(\widetilde{U},\eta)$, we could have required that
$J_{\mathbold{b},\mathbold{p}}(\widetilde{V})=\inf_{S(\widetilde{U},\eta)}J_{\mathbold{b},\mathbold{p}}$.)
Since $\mathbold{a}_{n}/\|\mathbold{a}_{n}\|_{1}$ converges to
$\mathbold{\alpha}/\|\mathbold{\alpha}\|_{1}$ as $n\to\infty$ and we chose $m$
to be sufficiently large, we can assume that $b_{i}=0$ if and only if
$\alpha_{i}=0$ and that Lemma 5.2 applies for
$\mathbold{\gamma}:=\mathbold{b}/\|\mathbold{b}\|_{1}$ and
$\mathbold{\kappa}:=\mathbold{\alpha}/\|\mathbold{\alpha}\|_{1}$ (and our
$\eta$). The lemma gives that there is $\widetilde{W}\in
S(\widetilde{V},\eta)$ such that
$J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{W})-\eta\leqslant
J_{\mathbold{b},\mathbold{p}}(\widetilde{V})$. Thus we get the following upper
bound on the right-hand side of (23):
$-\inf_{S(\widetilde{U},\eta)}J_{\mathbold{b},\mathbold{p}}\leqslant-
J_{\mathbold{b},\mathbold{p}}(\widetilde{V})+\eta\leqslant-
J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{W})+2\eta\leqslant-\inf_{S(\widetilde{U},2\eta)}J_{\mathbold{\alpha},\mathbold{p}}+2\eta.$
(26)
By putting (23), (25) and (26) together we get that, for every
$\eta\in(0,1/9)$,
$\limsup_{n\to\infty}\frac{1-\eta}{(\|\mathbold{a}_{n}\|_{1})^{2}}\log\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}(S(\widetilde{U},\eta/2))\leqslant-\inf_{S(\widetilde{U},2\eta)}J_{\mathbold{\alpha},\mathbold{p}}+2\eta.$
If we take here the limit as $\eta\to 0$ then the infimum in the right-hand
side converges to $J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U})$ by the
lower semi-continuity of $J_{\mathbold{\alpha},\mathbold{p}}$ (established in
Theorem 1.4), giving the claimed upper bound (21).
Let us turn to the lower bound, i.e. we prove (12) for
$\widetilde{U}\in\widetilde{\mathcal{W}}$. As before, take any sufficiently
small $\eta>0$, then sufficiently large $m\in{\mathbbm{N}}$ and define
$\mathbold{b}$ by (22). By Theorems 1.3 and 1.4 applied to the open ball
around $\widetilde{U}$ of radius $2\eta$, we have
$\liminf_{n\to\infty}\frac{1}{(n\,\|\mathbold{b}\|_{1})^{2}}\log\widetilde{\mathbb{P}}_{n\mathbold{b},\mathbold{p}}(S(\widetilde{U},2\eta))\geqslant-\inf_{S(\widetilde{U},\eta)}J_{\mathbold{b},\mathbold{p}}.$
(27)
Similarly to the upper bound, the left-hand side can be bounded from above,
via the coupling of Lemma 5.1, by, for example,
$\liminf_{n\to\infty}\frac{1+\eta}{(\|\mathbold{a}_{n}\|_{1})^{2}}\log\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}(S(\widetilde{U},3\eta)).$
(28)
Since $m$ is sufficiently large, we can apply Lemma 5.2 to
$\mathbold{\kappa}:={\mathbold b}/\|\mathbold{b}\|_{1}$,
$\mathbold{\gamma}:=\mathbold{\alpha}/\|\mathbold{\alpha}\|_{1}$, the given
graphon $U$ and our chosen $\eta$ to find $\widetilde{V}\in
S(\widetilde{U},\eta)$ such that
$J_{\mathbold{b},\mathbold{p}}(\widetilde{V})\leqslant
J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U})+\eta$. Thus
$-\inf_{S(\widetilde{U},\eta)}J_{\mathbold{b},\mathbold{p}}\geqslant-
J_{\mathbold{b},\mathbold{p}}(\widetilde{V})\geqslant-
J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U})-\eta.$
By (27), this is a lower bound on the expression in (28). Taking the limit as
$\eta\to 0$ we get the required LDP lower bound (12). This finishes the proof
of Theorem 1.5. ∎
## 6 Proof of Theorem 1.6
Recall that $W$ is a $k$-step graphon with non-null parts whose values are
encoded by a symmetric $k\times k$ matrix $\mathbold{p}\in[0,1]^{k\times k}$.
We consider the $W$-random graph ${\mathbbm{G}}(n,W)$ where we first sample
$n$ independent uniform points $x_{1},\dots,x_{n}\in[0,1]$ and then make each
pair $\\{i,j\\}\subseteq[n]$ an edge with probability $W(x_{i},x_{j})$. We
have to prove an LDP for the corresponding sequence
$(\widetilde{\mathbb{R}}_{n,W})_{n\in{\mathbbm{N}}}$ of measures on the metric
space $(\widetilde{\mathcal{W}},\delta_{\Box})$ with speed $n^{2}$ and the
rate function $R_{\mathbold{p}}$ that was defined in (7). Recall that the
lower semi-continuity of $R_{\mathbold{p}}$ was established in Corollary 4.4.
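For concreteness, the two-stage sampling of ${\mathbbm{G}}(n,W)$ for a $k$-step graphon can be sketched in a few lines of Python; the part boundaries and the matrix $p$ below are hypothetical, chosen only for illustration:

```python
import bisect
import random

def sample_G_nW(n, part_ends, p, seed=0):
    """Sample the W-random graph G(n, W) for a k-step graphon W.

    part_ends : increasing right endpoints of the k parts of [0, 1],
                e.g. [0.5, 0.8, 1.0] for parts of measure 0.5, 0.3, 0.2.
    p         : symmetric k x k matrix of edge probabilities (the
                values of W on products of parts).
    Returns an adjacency matrix as a list of lists of 0/1.
    """
    rng = random.Random(seed)
    # Stage 1: n independent uniform points in [0, 1].
    xs = [rng.random() for _ in range(n)]
    # Part index of each point; W is constant on products of parts.
    part = [bisect.bisect_left(part_ends, x) for x in xs]
    # Stage 2: each pair {i, j} becomes an edge independently with
    # probability W(x_i, x_j) = p[part[i]][part[j]].
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p[part[i]][part[j]]:
                adj[i][j] = adj[j][i] = 1
    return adj

# Hypothetical 3-step graphon, for illustration only.
A = sample_G_nW(30, [0.5, 0.8, 1.0],
                [[0.9, 0.2, 0.1], [0.2, 0.7, 0.3], [0.1, 0.3, 0.5]])
```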
Let us show the lower bound. Since the underlying space
$(\widetilde{\mathcal{W}},\delta_{\Box})$ is compact, it is enough to prove
the bound in (12) of Lemma 2.5 for any given
$\widetilde{U}\in\widetilde{{\mathcal{W}}}$, that is, that
$\lim_{\eta\to
0}\liminf_{n\to\infty}\frac{1}{n^{2}}\log\widetilde{\mathbb{R}}_{n,W}(S(\widetilde{U},\eta))\geqslant-
R_{\mathbold{p}}(\widetilde{U}).$ (29)
Take any $\varepsilon>0$. By the definition of $R_{\mathbold{p}}$, we can fix
a vector $\mathbold{\alpha}=(\alpha_{1},\dots,\alpha_{k})\in[0,1]^{k}$ such
that $\|\mathbold{\alpha}\|_{1}=1$ and
$R_{\mathbold{p}}(\widetilde{U})\geqslant
J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U})-\varepsilon.$
Let $m$ be the number of non-zero entries of $\mathbold{\alpha}$. As
$\mathbold{\alpha}$ is non-zero, we have $m\geqslant 1$. We can assume by
symmetry that $\alpha_{1},\dots,\alpha_{m}$ are the non-zero entries. Define
$c:=\frac{1}{2}\min_{i\in[m]}\alpha_{i}>0$.
For each $n\in{\mathbbm{N}}$, take any integer vector
$\mathbold{a}_{n}=(a_{n,1},\dots,a_{n,k})\in{\mathbbm{N}}_{\geqslant 0}^{k}$
such that $\|\mathbold{a}_{n}\|_{1}=n$ and
$\|\mathbold{a}_{n}-n\,\mathbold{\alpha}\|_{\infty}<1$ (in particular, we have
$a_{n,i}=0$ if $\alpha_{i}=0$).
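One concrete way to produce such a vector $\mathbold{a}_{n}$ is largest-remainder rounding of $n\,\mathbold{\alpha}$; the scheme below is our own illustrative choice (any vector meeting the two displayed conditions works):

```python
import math

def round_to_sum(alpha, n):
    """Return an integer vector a with ||a||_1 = n and
    ||a - n*alpha||_inf < 1, assuming alpha >= 0 and ||alpha||_1 = 1.

    Largest-remainder rounding: start from the entrywise floors and
    hand out the remaining units to the largest fractional parts.
    """
    floors = [math.floor(n * al) for al in alpha]
    fracs = [n * al - f for al, f in zip(alpha, floors)]
    deficit = n - sum(floors)   # equals the sum of fractional parts
    # Bump the `deficit` entries with the largest fractional parts by 1;
    # only entries with a positive fractional part can be bumped, so
    # entries with alpha_i = 0 stay 0.
    order = sorted(range(len(alpha)), key=lambda i: -fracs[i])
    for i in order[:deficit]:
        floors[i] += 1
    return floors

a = round_to_sum([0.45, 0.35, 0.2, 0.0], 101)
```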
Let $n$ be sufficiently large. In particular, for every $i\in[m]$ we have that
$a_{n,i}/n\geqslant c$. When we generate $G\sim{\mathbbm{G}}(n,W)$ by choosing
first random $x_{1},\dots,x_{n}\in[0,1]$, it holds with probability at least,
very roughly, $c^{-n}$ that for each $i\in[k]$ the number of $x_{j}$’s that
belong to the $i$-th part of the step graphon $W$ is exactly $a_{n,i}$.
Conditioned on this event of positive measure, the resulting graphon
$\widetilde{f^{G}}$ is distributed according to
$\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}$. Thus
$\widetilde{\mathbb{R}}_{n,W}(S)\geqslant
c^{n}\,\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}(S),\quad\mbox{for
every $S\subseteq\widetilde{\mathcal{W}}$}.$
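The event being conditioned on is a multinomial exact-count event, and a rigorous (if crude) version of the exponential bound is immediate: writing $\lambda_{1},\dots,\lambda_{k}>0$ for the part measures of $W$, the multinomial coefficient is at least $1$, so the probability of any count vector $\mathbold{a}$ with $\|\mathbold{a}\|_{1}=n$ is at least $\prod_{i}\lambda_{i}^{a_{i}}\geqslant(\min_{i}\lambda_{i})^{n}$, an exponential-in-$n$ factor that is negligible at speed $n^{2}$. A numeric sanity check with illustrative values of our own:

```python
from math import comb, prod

def multinomial_pmf(a, lam):
    """P(multinomial counts equal a) for cell probabilities lam."""
    coeff, rem = 1, sum(a)
    for cnt in a:
        coeff *= comb(rem, cnt)
        rem -= cnt
    return coeff * prod(l ** c for l, c in zip(lam, a))

# Illustrative part measures and target counts (not from the paper).
lam = [0.5, 0.3, 0.2]
a = [10, 6, 4]
n = sum(a)
p_exact = multinomial_pmf(a, lam)

# Crude exponential lower bound: the multinomial coefficient is >= 1.
assert p_exact >= min(lam) ** n
```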
This and the new LDP result (Theorem 1.5) give that, for every $\eta>0$,
$\displaystyle\liminf_{n\to\infty}\frac{1}{n^{2}}\log\widetilde{\mathbb{R}}_{n,W}(S(\widetilde{U},\eta))$
$\displaystyle\geqslant$
$\displaystyle\liminf_{n\to\infty}\frac{1}{n^{2}}\log\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}(S(\widetilde{U},\eta))$
$\displaystyle\geqslant$
$\displaystyle-\inf_{S(\widetilde{U},\eta/2)}J_{\mathbold{\alpha},\mathbold{p}}\
\geqslant\ -J_{\mathbold{\alpha},\mathbold{p}}(\widetilde{U})\ \geqslant\
-R_{\mathbold{p}}(\widetilde{U})-\varepsilon.$
Taking the limit as $\eta\to 0$, we conclude that the LDP lower bound (29)
holds within additive error $\varepsilon$. As $\varepsilon>0$ was arbitrary,
the lower bound holds.
Let us show the upper bound (2) of Definition 1.1 for any closed set
$F\subseteq\widetilde{\mathcal{W}}$, that is, that
$\limsup_{n\to\infty}\frac{1}{n^{2}}\log\widetilde{\mathbb{R}}_{n,W}(F)\leqslant-\inf_{F}R_{\mathbold{p}}.$
(30)
For each $n\in{\mathbbm{N}}$, we can write $\widetilde{\mathbb{R}}_{n,W}(F)$
as the sum, over all $\mathbold{a}\in{\mathbbm{N}}_{\geqslant 0}^{k}$ with
$\|\mathbold{a}\|_{1}=n$, of the probability that the numbers of the
independent uniform points $x_{1},\dots,x_{n}\in[0,1]$ falling into the $k$
parts of $W$ are given by the vector $\mathbold{a}$, times the probability,
conditioned on $\mathbold{a}$, of hitting the set $F$. This conditional
probability is exactly
$\widetilde{\mathbb{P}}_{\mathbold{a},\mathbold{p}}(F)$. Thus
$\widetilde{\mathbb{R}}_{n,W}(F)$ is a convex combination of the reals
$\widetilde{\mathbb{P}}_{\mathbold{a},\mathbold{p}}(F)$ and there is a vector
$\mathbold{a}_{n}\in{\mathbbm{N}}_{\geqslant 0}^{k}$ with
$\|\mathbold{a}_{n}\|_{1}=n$ such that
$\widetilde{\mathbb{R}}_{n,W}(F)\leqslant\widetilde{\mathbb{P}}_{\mathbold{a}_{n},\mathbold{p}}(F).$
(31)
Fix one such vector $\mathbold{a}_{n}$ for each $n\in{\mathbbm{N}}$.
Since the set of all real vectors $\mathbold{\alpha}\in[0,1]^{k}$ with
$\|\mathbold{\alpha}\|_{1}=1$ is compact, we can find an increasing sequence
$(n_{i})_{i\in{\mathbbm{N}}}$ of integers such that
$\limsup_{n\to\infty}\frac{1}{n^{2}}\log\widetilde{\mathbb{R}}_{n,W}(F)=\lim_{i\to\infty}\frac{1}{n_{i}^{2}}\log\widetilde{\mathbb{R}}_{n_{i},W}(F),$
(32)
and the scaled vectors $\frac{1}{n_{i}}\,\mathbold{a}_{n_{i}}$ converge to
some real vector $\mathbold{\alpha}\in[0,1]^{k}$.
Let $(\mathbold{b}_{n})_{n\in{\mathbbm{N}}}$ be obtained by filling the gaps
in $(\mathbold{a}_{n_{i}})_{i\in{\mathbbm{N}}}$, meaning that if $n=n_{i}$ for
some $i\in{\mathbbm{N}}$ then we let $\mathbold{b}_{n}:=\mathbold{a}_{n_{i}}$;
otherwise we pick any $\mathbold{b}_{n}\in{\mathbbm{N}}_{\geqslant 0}^{k}$
that satisfies $\|\mathbold{b}_{n}\|_{1}=n$ and
$\|\mathbold{b}_{n}-n\,\mathbold{\alpha}\|_{\infty}<1$. Since the normalised
vectors $\mathbold{b}_{n}/\|\mathbold{b}_{n}\|_{1}$ converge to the same
limiting vector $\mathbold{\alpha}$, we have by Theorem 1.5 that
$\limsup_{i\to\infty}\frac{1}{n_{i}^{2}}\log\widetilde{\mathbb{P}}_{\mathbold{a}_{n_{i}},\mathbold{p}}(F)\leqslant\limsup_{n\to\infty}\frac{1}{n^{2}}\log\widetilde{\mathbb{P}}_{\mathbold{b}_{n},\mathbold{p}}(F)\leqslant-\inf_{F}J_{\mathbold{\alpha},\mathbold{p}}.$
(33)
Putting (31), (32) and (33) together with the trivial consequence
$J_{\mathbold{\alpha},\mathbold{p}}\geqslant R_{\mathbold{p}}$ of the
definition of $R_{\mathbold{p}}$, we get the desired upper bound (30). This
finishes the proof of Theorem 1.6.
## Acknowledgements
Jan Grebík and Oleg Pikhurko were supported by Leverhulme Research Project
Grant RPG-2018-424.
## References
# Semiclassical analysis on compact nil-manifolds
Véronique Fischer University of Bath, Department of Mathematical Sciences,
Bath, BA2 7AY, UK<EMAIL_ADDRESS>
###### Abstract.
In this paper, we develop a semi-classical calculus on compact nil-manifolds.
As applications, we obtain Weyl laws for positive Rockland operators on any
graded compact nil-manifold and results on quantum ergodicity in position for
sub-Laplacians on any stratified nil-manifold.
###### Contents
1. 1 Introduction and main results
2. 2 Preliminaries on compact nil-manifold
1. 2.1 Definition and examples
2. 2.2 Discrete co-compact subgroups vs lattices
1. 2.2.1 Structures on $M$
3. 2.3 $\Gamma$-periodicity and periodisation
4. 2.4 Spaces of periodic functions
5. 2.5 Convolution and periodicity
6. 2.6 Operators on $M$ and $G$
3. 3 Preliminaries on graded groups and Rockland operators
1. 3.1 Graded nilpotent Lie group
1. 3.1.1 Dilations and homogeneity
2. 3.1.2 Approximate identity
3. 3.1.3 Homogeneous quasi-norms
2. 3.2 Discrete co-compact subgroups of $G$
3. 3.3 The dual of $G$ and the Plancherel theorem
4. 3.4 Positive Rockland operators on $G$
1. 3.4.1 Definitions
2. 3.4.2 Spectral multipliers in $\mathcal{R}$ and in $\widehat{\mathcal{R}}$
4. 4 Semiclassical calculus on graded compact nil-manifold
1. 4.1 Semiclassical pseudodifferential operators
2. 4.2 Properties of the semiclassical calculus
1. 4.2.1 Boundedness in $L^{2}(M)$
2. 4.2.2 Singularity of the operators as $\varepsilon\to 0$.
3. 4.2.3 Symbolic calculus
4. 4.2.4 Hilbert-Schmidt norm and trace
5. 4.2.5 The Hilbert space $L^{2}(M\times{\widehat{G}})$
3. 4.3 Properties of positive Rockland operators on $G$ and on $M$
4. 4.4 Weyl laws for $\mathcal{R}_{M}$
5. 5 Ergodicity for the symbols
1. 5.1 The differential operator $\mathscr{E}$
2. 5.2 Mean ergodicity of symbols
6. 6 Quantum Variance
1. 6.1 End of the proof of Theorem 1.1 from Proposition 6.1
2. 6.2 Some observations
1. 6.2.1 A weak result of Egorov type
2. 6.2.2 Some non-commutative but real considerations
3. 6.3 Proof of Proposition 6.1
4. 6.4 A generalisation of Proposition 6.1
## 1\. Introduction and main results
For the last twenty years, the analysis of hypoelliptic operators in sub-
Riemannian settings has made fundamental progress in powers and heat
expansions and in index theory (see respectively Ponge’s memoir [23] and van
Erp’s work in [28, 29] and the references in both). The underlying methods and
ideas are so comprehensive that their natural setting is not restricted to the
class of sub-Riemannian manifolds but extends more generally to filtered
manifolds. In fact, a significant tool for these results has turned out to be
the tangent groupoid to a filtered manifold [30, 8]. Unfortunately, this does
not seem to be suited for micro-local and semi-classical questions where a
notion of symbol is often needed. Very few results in that part of analysis
have been obtained for hypoelliptic operators. For instance, quantum
ergodicity in this context has been studied in specific examples amongst
contact or quasi-contact manifolds, and the list consists of [32, 3, 33, 6,
21, 24] to the author’s knowledge to-date.
The main aim of this paper is to start the development of semi-classical
analysis for a large class of hypoelliptic operators; this class includes the
natural sub-Laplacians on compact nil-manifolds without any further hypothesis
than a stratified structure on the underlying group. The methods and results
presented here support the systematic approach of semi-classical analysis in
sub-Riemannian and sub-elliptic settings developed by the author and her
collaborator Clotilde Fermanian-Kammerer on nilpotent Lie groups [11, 10, 12].
Indeed, the next natural context to test our approach after the group case is
the one of compact nil-manifolds as they are the analogues of the torus
$\mathbb{T}^{n}$ (the quotient of $\mathbb{R}^{n}$ by a lattice) in spectral
Euclidean geometry: although easily comprehended, tori provide a rich context
for the usual (i.e. Euclidean and commutative) semiclassical analysis, see
e.g. [4, 2].
As applications, we obtain the following results:
###### Theorem 1.1.
Let $\Gamma$ be a discrete co-compact subgroup of a stratified nilpotent Lie
group $G$. A Haar measure having been fixed on $G$, we denote by $d\dot{x}$
the corresponding natural measure on the compact nil-manifold
$M:=\Gamma\backslash G$. Let $X_{1},\ldots,X_{n_{1}}$ be a basis of the first
stratum $\mathfrak{g}_{1}$ of the Lie algebra of $G$. Let $\mathcal{L}_{M}$ be
the associated self-adjoint sub-Laplacian on $L^{2}(M)$. We denote its
eigenvalues and its spectral counting function by
$0=\lambda_{0}<\lambda_{1}\leq\lambda_{2}\leq\ldots\qquad\mbox{and}\qquad
N(\Lambda):=\left|\left\\{j\in\mathbb{N}_{0},\;\;\lambda_{j}\leq\Lambda\right\\}\right|.$
Weyl Law:
The spectral counting function admits the asymptotic
$N(\Lambda)\sim c_{1}\Lambda^{Q/2},\qquad\mbox{as}\ \Lambda\to+\infty,$
with a constant $c_{1}$ depending on $M$, $G$ and $\mathcal{L}$. More
precisely,
$c_{1}=\frac{2}{Q}\,{\rm Vol}(M)\,c_{0},$
where ${\rm Vol}(M)$ denotes the volume of $M$, $Q$ the homogeneous dimension
of $G$ and $c_{0}$ is a positive constant depending only on $\mathcal{L}$ and
$G$.
Quantum Ergodicity in position:
Let $(\varphi_{j})$ be an orthonormal basis of the Hilbert space $L^{2}(M)$
consisting of eigenfunctions of $\mathcal{L}_{M}$ satisfying
$\mathcal{L}_{M}\varphi_{j}=\lambda_{j}\varphi_{j}$. There exists a
subsequence $(j_{k})$ of density one
$\lim_{\Lambda\to\infty}\frac{|\\{j_{k}:\lambda_{j_{k}}\leq\Lambda\\}|}{N(\Lambda)}=1,$
such that for any continuous function $a:M\to\mathbb{C}$
$\lim_{k\to+\infty}\int_{M}a(\dot{x})|\varphi_{j_{k}}(\dot{x})|^{2}d\dot{x}=\int_{M}a(\dot{x})\,d\dot{x}.$
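As a commutative sanity check of the Weyl law in Theorem 1.1 (ours, for illustration): for $G=\mathbb{R}$, $\Gamma=\mathbb{Z}$ (so $M=\mathbb{T}^{1}$ and $Q=1$), the sub-Laplacian is $-d^{2}/dx^{2}$ with eigenfunctions $e^{2\pi imx}$ and eigenvalues $(2\pi m)^{2}$, $m\in\mathbb{Z}$, so $N(\Lambda)\sim\frac{1}{\pi}\Lambda^{1/2}$; this exponent and constant can be observed numerically:

```python
import math

def N(Lam):
    """Spectral counting function of -d^2/dx^2 on R/Z: eigenvalues are
    (2*pi*m)^2 for m in Z, so m and -m pair up and m = 0 counts once."""
    m_max = math.floor(math.sqrt(Lam) / (2 * math.pi))
    return 2 * m_max + 1

Lam = 1e8
ratio = N(Lam) / math.sqrt(Lam)      # should approach c_1 = 1/pi
assert abs(ratio - 1 / math.pi) < 1e-3
```

In this abelian case quantum ergodicity in position is immediate as well, since every eigenfunction $e^{2\pi imx}$ satisfies $|e^{2\pi imx}|^{2}\equiv 1$.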
Weyl laws for hypoelliptic operators in sub-Riemannian contexts have been
obtained using heat expansions, see Ponge’s memoire [23] and references
therein. Note that the constant involved in their Weyl laws is expressed in
terms of a coefficient of these expansions and that this result was
generalised recently to operators satisfying the Rockland condition on
filtered manifolds in [8, p. 347]. By contrast, our Weyl law above is a natural
consequence of the semi-classical approach already developed in the group
context [11, 10, 12] and on nil-manifolds of Heisenberg type [13];
furthermore, the constant we obtain is more explicit than the one from the
heat expansion approach as our constant $c_{0}$ is determined in Theorem 3.6
and Formula (3.6). Moreover, our result admits a semi-classical generalisation
stated in Theorem 4.12, which is new. All these results are proved in the more
general setting of positive Rockland operators on any graded nil-manifold in
Sections 3 and 4.
The second application of the semi-classical analysis on $M$ discussed in this
paper concerns quantum variance for sub-Laplacians, see Corollary 6.4 for the
most general form. The result on quantum variance and standard manipulations
imply the quantum ergodicity in position stated in Theorem 1.1. In the
commutative case, that is, on the torus $\mathbb{T}^{n}$, our result on
quantum variance boils down to quantum ergodicity for the 0-energy level (i.e.
the analogue of [34, Theorem 9.4] for $c=0$) for the standard Laplacian and
for the basic semi-classical calculus; by basic semi-classical calculus, we
mean the algebra of operators $a(\dot{x},\varepsilon D)$ obtained via the
Kohn-Nirenberg quantisation of symbols $a\in
C^{\infty}(\mathbb{T}^{n};\mathcal{S}(\mathbb{R}^{n}))$ with Fourier variables
dilated by $\varepsilon\in(0,1]$. Although this may sound disappointing, this
is what can be obtained on the torus without introducing more refined semi-
classical classes of symbols on the phase-space. As in the commutative case,
we expect that deeper results on quantum ergodicity for hypo-elliptic
operators will require a more involved semi-classical setting or a micro-local
version. We will develop these in future works.
On the more positive side, our result on quantum variance allows us to discuss
a notion of ergodicity for our operator-valued symbols and to obtain the
quantum ergodicity in position stated in Theorem 1.1. To the author’s
knowledge, this quantum ergodicity in position is new on nil-manifolds which
are not quotients of the three dimensional Heisenberg group $\mathbb{H}_{1}$
or products of these. Indeed, in its ‘usual Euclidean’ micro-local meaning,
quantum ergodicity was obtained recently on three dimensional contact
manifolds in [6]. This approach seems possible for the Heisenberg group [3]
and when the local structure is essentially $\mathbb{H}_{1}$ [6, 21, 24].
These works imply the quantum ergodicity in position of Theorem 1.1, but only
for nil-manifolds which are products of the canonical quotient of
$\mathbb{H}_{1}$.
The studies of hypoelliptic operators mentioned in the first paragraph of this
introduction suggest that this ‘Euclidean’ strategy may become unviable as the
non-commutativity becomes more involved. By contrast, our result on quantum
ergodicity in position holds on any nil-manifold where a natural sub-Laplacian
can be defined.
As mentioned earlier, the methods of the paper follow the approach developed
by the author and her collaborator Clotilde Fermanian-Kammerer on nilpotent
Lie groups $G$, see [11, 10, 12, 13], and presented independently in Section
4. The calculus is formed by operators obtained from symbols and the natural
quantisation based on the group Fourier transform. The group Fourier transform
of a function at a (unitary irreducible) representation is an operator on the
space of the representation. Consequently, denoting by ${\widehat{G}}$ the
unitary dual, i.e. the set of unitary irreducible representations of $G$
modulo equivalence, the symbols of pseudo-differential operators introduced in
[14] are measurable fields of operators on $G\times{\widehat{G}}$ for
operators on $G$, or $M\times{\widehat{G}}$ for operators on the nil-manifold
$M$. The ideas behind this symbolic ergodicity come from computing the
commutator of $\mathcal{L}_{M}$ with a semi-classical pseudo-differential
operator:
$[\mathcal{L}_{M},{\rm Op}^{(\varepsilon)}(\sigma)]={\rm
Op}^{(\varepsilon)}(\mathcal{L}_{M}\sigma)-2\varepsilon^{-1}{\rm
Op}^{(\varepsilon)}(\widehat{\mathscr{E}}\sigma)+\varepsilon^{-2}{\rm
Op}^{(\varepsilon)}\left([\widehat{\mathcal{L}},\sigma]\right),$
where $\widehat{\mathscr{E}}:=\sum_{j=1}^{n_{1}}X_{M,j}\pi(X_{j})$. In Section
5, we apply von Neumann’s mean ergodic theorem to the one-parameter group of
unitary operators $e^{i2t\widehat{\mathscr{E}}}$. As the difference between
the two formal expansions
$\displaystyle e^{-it\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}(\sigma)e^{it\varepsilon\mathcal{L}_{M}}$
$\displaystyle={\rm
Op}^{(\varepsilon)}(\sigma)-it\varepsilon[\mathcal{L}_{M},{\rm
Op}^{(\varepsilon)}(\sigma)]+\ldots,$ $\displaystyle{\rm
Op}^{(\varepsilon)}(e^{i2t\widehat{\mathscr{E}}}\sigma)$ $\displaystyle={\rm
Op}^{(\varepsilon)}(\sigma)+2it{\rm
Op}^{(\varepsilon)}(\widehat{\mathscr{E}}\sigma)+\ldots,$
is $O(\varepsilon)$ if $[\widehat{\mathcal{L}},\sigma]=0$, we can use an
Egorov-type argument to show the property of the quantum variance of Section
6. The methods presented here could be easily adapted to operators of the form
$\mathcal{L}_{M}+V$ where $V$ is a (real-valued and regular enough) potential.
But this is left for future work on full quantum ergodicity for hypo-elliptic
operators with our semi-classical and micro-local approaches on more general
sub-Riemannian or even filtered manifolds.
Notation: If $V$ and $W$ are topological vector spaces, we denote by
$\mathscr{L}(V,W)$ the space of linear continuous mappings from $V$ to $W$. If
$V=W$, then we write $\mathscr{L}(V,V)=\mathscr{L}(V)$. If $V$ is a
topological vector space and $M$ a smooth manifold, we denote by
$C^{\infty}(M;V)$ the space of smooth functions on $M$ valued in $V$.
## 2\. Preliminaries on compact nil-manifold
In this section, we set our notation for nil-manifolds and recall some
elements of analysis in this setting.
### 2.1. Definition and examples
A compact nil-manifold is the quotient $M=\Gamma\backslash G$ of a nilpotent
Lie group $G$ by a discrete co-compact subgroup $\Gamma$ of $G$. In this
paper, a nilpotent Lie group is always assumed connected and simply connected
unless otherwise stated.
The vocabulary varies, and a discrete co-compact subgroup may also be called a
uniform subgroup or a lattice in some of the literature, see e.g. [7, Section
5].
A concrete example of a uniform subgroup is the natural discrete subgroup of the
Heisenberg group, as described in [7, Example 5.4.1]. Abstract constructions
for graded groups are discussed in Section 3.2, and will use the following
statement which recalls some abstract examples and characterisations from [7,
Section 5.1] (the definition of Malcev bases is recalled below):
###### Theorem 2.1.
Let $G$ be a nilpotent Lie group.
1. (1)
A subgroup $\Gamma$ of $G$ is discrete co-compact if and only if ${\rm
Exp}Y_{j}\in\Gamma$ for $j=1,\ldots,n$ for some weak or strong Malcev basis
$Y_{1},\ldots,Y_{n}$ of $\mathfrak{g}$.
2. (2)
A subgroup $\Gamma$ of $G$ is discrete co-compact if and only if it can be
written as $\Gamma={\rm Exp}(\mathbb{Z}Y_{1})\ldots{\rm Exp}(\mathbb{Z}Y_{n})$
for some weak or strong Malcev basis $Y_{1},\ldots,Y_{n}$ of $\mathfrak{g}$.
Moreover, a group $G$ admits a uniform subgroup $\Gamma$ if and only if its
Lie algebra $\mathfrak{g}$ has a rational structure. If this is the case, then
a choice of rational structure $\mathfrak{g}_{\mathbb{Q}}$ is the
$\mathbb{Q}$-span of $\log\Gamma$.
Furthermore, if $Y_{1},\ldots,Y_{n}$ is a strong Malcev basis whose structural
constants $c_{i,j,k}$ from $[Y_{i},Y_{j}]=\sum_{k}c_{i,j,k}Y_{k}$ are all
rational, then there exists a positive integer $K\in\mathbb{N}$ such that the
set ${\rm Exp}(K\mathbb{Z}Y_{1})\ldots{\rm Exp}(K\mathbb{Z}Y_{n})$ is a
discrete co-compact subgroup of $G$.
Let us recall the notion of Malcev bases: an (ordered) basis
$Y_{1},\ldots,Y_{n}$ of $\mathfrak{g}$ is a strong (resp. weak) Malcev basis
when for each
$m=1,\ldots,n$, the subspace
$\mathbb{R}Y_{1}\oplus\ldots\oplus\mathbb{R}Y_{m}$ is a Lie sub-algebra (resp.
ideal) of the Lie algebra $\mathfrak{g}$. We refer the reader to [7] for
examples and properties of these bases, and the reader unfamiliar with this
notion can just consider this as a technical property satisfied by important
bases, for instance by the basis constructed in Section 3.1.1 in the case of
graded Lie groups.
### 2.2. Discrete co-compact subgroups vs lattices
There is a close connection between discrete co-compact subgroups and lattices
in $\mathfrak{g}$ described in [7, Section 5.4]:
###### Theorem 2.2.
Let $\Gamma$ be a discrete co-compact subgroup of a nilpotent Lie group $G$.
Then there exists $\Gamma_{0}$ and $\Gamma_{1}$ discrete co-compact subgroups
of $G$ such that
* •
$\log\Gamma_{0}$ and $\log\Gamma_{1}$ are lattices of the vector space
$\mathfrak{g}\sim\mathbb{R}^{n}$,
* •
the inclusions $\Gamma_{0}\subset\Gamma\subset\Gamma_{1}$ hold, and
* •
$\Gamma/\Gamma_{0}$ and $\Gamma_{1}/\Gamma_{0}$ are finite sets.
Furthermore, having written $\Gamma$ as ${\rm Exp}(\mathbb{Z}Y_{1})\ldots{\rm
Exp}(\mathbb{Z}Y_{n})$ for some strong Malcev basis $Y_{1},\ldots,Y_{n}$ of
$\mathfrak{g}$, we may choose $\Gamma_{0}={\rm
Exp}(K\mathbb{Z}Y_{1})\ldots{\rm Exp}(K\mathbb{Z}Y_{n})$ for some suitable
integer $K>0$, and $\Gamma_{1}={\rm Exp}(k_{1}\mathbb{Z}Y_{1})\ldots{\rm
Exp}(k_{n}\mathbb{Z}Y_{n})$ with $k_{j}=K^{-N^{n-j}}$, $j=1,\ldots,n$.
###### Corollary 2.3.
Let $\Gamma$ be a discrete co-compact subgroup of a nilpotent Lie group $G$.
We identify $G$ with $\mathbb{R}^{n}$ via the exponential mapping and a basis
of $\mathfrak{g}$. Then for any $N>n$ and any norm $|\cdot|$ on the vector
space $\mathbb{R}^{n}\sim G$, the sum
$\sum_{\gamma\in\Gamma}(1+|\gamma|)^{-N}$ is finite.
###### Proof.
We may assume that the basis of $\mathfrak{g}$ is a strong Malcev basis
$Y_{1},\ldots,Y_{n}$ such that $\Gamma={\rm Exp}(\mathbb{Z}Y_{1})\ldots{\rm
Exp}(\mathbb{Z}Y_{n})$. Theorem 2.2 implies that we may assume that
$\Lambda=\log\Gamma$ is a lattice of $\mathfrak{g}\sim\mathbb{R}^{n}$. In this
case,
$\sum_{\gamma\in\Gamma}(1+|\gamma|)^{-N}=\sum_{m\in\Lambda}(1+|m|)^{-N}.$
The result follows by comparison with integrals on $\mathbb{R}^{n}$ for a
suitable norm on $\mathbb{R}^{n}$, as all norms on $\mathbb{R}^{n}$ are
equivalent. ∎
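The integral comparison can also be watched numerically; the quick check below (our illustration: $n=2$, $N=3$, max-norm, where the tail beyond radius $R$ is $O(1/R)$) shows the partial sums over growing boxes stabilising:

```python
def box_sum(R, N=3):
    """Partial sum of (1 + |m|_inf)^(-N) over m in Z^2 with |m|_inf <= R."""
    total = 0.0
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            total += (1 + max(abs(i), abs(j))) ** (-N)
    return total

# There are 8r points with |m|_inf = r, so increments beyond radius R
# are about 8/R: they shrink, consistent with convergence for N > n.
s100, s200 = box_sum(100), box_sum(200)
assert s200 > s100
assert s200 - s100 < 0.05
```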
#### 2.2.1. Structures on $M$
An element of $M$ is a class
$\dot{x}:=\Gamma x$
of an element $x$ in $G$. If the context allows it, we may identify this class
with its representative $x$.
The quotient $M$ is naturally equipped with the structure of a compact smooth
manifold. Furthermore, fixing a Haar measure on the unimodular group $G$, $M$
inherits a measure $d\dot{x}$ which is invariant under the translations given
by
$\begin{array}[]{rcl}M&\longrightarrow&M\\\
\dot{x}&\longmapsto&\dot{x}g=\Gamma xg\end{array},\quad g\in G.$
Recall that the Haar measure $dx$ on $G$ is unique up to a constant and, once
it is fixed, $d\dot{x}$ is the only $G$-invariant measure on $M$ satisfying
for any function $f:G\to\mathbb{C}$, for instance continuous with compact
support,
(2.1) $\int_{G}f(x)dx=\int_{M}\sum_{\gamma\in\Gamma}f(\gamma x)\ d\dot{x}.$
We denote by ${\rm Vol}(M)=\int_{M}1d\dot{x}$ the volume of $M$.
Some ‘nice’ fundamental domains are described in [7, Section 5.3]:
###### Proposition 2.4.
Let $\Gamma$ be a discrete co-compact subgroup of a nilpotent Lie group $G$
described as $\Gamma={\rm Exp}(\mathbb{Z}Y_{1})\ldots{\rm
Exp}(\mathbb{Z}Y_{n})$ for some weak Malcev basis $Y_{1},\ldots,Y_{n}$ of
$\mathfrak{g}$ (see Theorem 2.1).
We set
$R_{0}:=[-\frac{1}{2},\frac{1}{2})\times\ldots\times[-\frac{1}{2},\frac{1}{2})=[-\frac{1}{2},\frac{1}{2})^{n}$
and for every $m\in\mathbb{R}^{n}$:
$R_{m}:=m+R_{0}\quad\mbox{and}\quad D_{m}:=\\{{\rm Exp}(t_{1}Y_{1})\ldots\
{\rm Exp}(t_{n}Y_{n})\ :\ t=(t_{1},\ldots,t_{n})\in R_{m}\\}.$
Then $D_{m}$ is a fundamental domain for $M=\Gamma\backslash G$. Furthermore,
the map
$\Theta:\left\\{\begin{array}[]{rcl}\mathbb{R}^{n}&\longrightarrow&M\\\
t&\longmapsto&\Gamma{\rm Exp}(t_{1}Y_{1})\ldots\ {\rm
Exp}(t_{n}Y_{n})\end{array}\right.,$
maps $R_{m}$ onto $M$ and the Lebesgue measure $dt$ restricted to $R_{m}$ to
the $G$-invariant measure on $M$. If $t,u\in R_{m}$ and $\gamma\in\Gamma$
satisfy $\Theta(t)=\gamma\Theta(u)$ then $t=u$ and $\gamma=0$. Furthermore, if
$t\in\mathbb{R}^{n}$ and $\gamma\in\Gamma$ satisfy
$\Theta(t)^{-1}\gamma\Theta(t)=0$ then $\gamma=0$.
###### About the proof of Proposition 2.4.
Proposition 2.4 follows from simple modifications of Theorem 5.3.1 in [7]
and its proof. ∎
### 2.3. $\Gamma$-periodicity and periodisation
Let $\Gamma$ be a discrete co-compact subgroup of a nilpotent Lie group $G$.
We say that a function $f:G\rightarrow\mathbb{C}$ is $\Gamma$-left-periodic or
just $\Gamma$-periodic when we have
$\forall x\in G,\;\;\forall\gamma\in\Gamma,\;\;f(\gamma x)=f(x).$
This definition extends readily to measurable functions and to distributions.
There is a natural one-to-one correspondence between the functions on $G$
which are $\Gamma$-periodic and the functions on $M$. Indeed, for any map $F$
on $M$, the corresponding periodic function on $G$ is $F_{G}$ defined via
$F_{G}(x):=F(\dot{x}),\quad x\in G,$
while if $f$ is a $\Gamma$-periodic function on $G$, it defines a function
$f_{M}$ on $M$ via
$f_{M}(\dot{x})=f(x),\qquad x\in G.$
Naturally, $(F_{G})_{M}=F$ and $(f_{M})_{G}=f$.
We also define, at least formally, the periodisation $\phi^{\Gamma}$ of a
function $\phi(x)$ of the variable $x\in G$ by:
$\phi^{\Gamma}(x)=\sum_{\gamma\in\Gamma}\phi(\gamma x),\qquad x\in G.$
If $E$ is a space of functions or of distributions on $G$, then we denote by
$E^{\Gamma}$ the space of elements in $E$ which are $\Gamma$-periodic. Let us
recall that $G$ is a smooth manifold which is identified with $\mathbb{R}^{n}$
via the exponential mapping and polynomial coordinate systems. This leads to a
corresponding Lebesgue measure on $\mathfrak{g}$ and the Haar measure $dx$ on
the group $G$, hence $L^{p}(G)\cong L^{p}(\mathbb{R}^{n})$. This also allows
us [7, p.16] to define the spaces
$\mathcal{D}(G)\cong\mathcal{D}(\mathbb{R}^{n})\quad\mbox{and}\quad\mathcal{S}(G)\cong\mathcal{S}(\mathbb{R}^{n})$
of test functions which are smooth and compactly supported or Schwartz, and
the corresponding spaces of distributions
$\mathcal{D}^{\prime}(G)\cong\mathcal{D}^{\prime}(\mathbb{R}^{n})\quad\mbox{and}\quad\mathcal{S}^{\prime}(G)\cong\mathcal{S}^{\prime}(\mathbb{R}^{n}).$
Note that this identification with $\mathbb{R}^{n}$ does not usually extend to
the convolution: the group convolution, i.e. the operation between two
functions on $G$ defined formally via
$(f_{1}*f_{2})(x):=\int_{G}f_{1}(y)f_{2}(y^{-1}x)dy,$
is not commutative in general whereas it is a commutative operation for
functions on the abelian group $\mathbb{R}^{n}$.
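To make the non-commutativity concrete, here is a minimal numerical sketch on a finite non-abelian group: the symmetric group $S_{3}$ with counting measure stands in for $G$, an illustrative assumption rather than the nilpotent setting; the helpers `mult`, `inv` and `conv` are ours.

```python
from itertools import permutations

# Toy numerical check that group convolution need not commute on a
# non-abelian group. The finite symmetric group S3 (with counting measure)
# stands in for G here; this is an illustrative assumption, not the
# nilpotent setting.
G = list(permutations(range(3)))  # the six elements of S3, as tuples

def mult(a, b):
    # composition: (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    out = [0, 0, 0]
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def conv(f1, f2):
    # (f1*f2)(x) = sum_y f1(y) f2(y^{-1} x)
    return {x: sum(f1[y] * f2[mult(inv(y), x)] for y in G) for x in G}

# Dirac masses at two non-commuting transpositions (0 1) and (1 2)
s, t = (1, 0, 2), (0, 2, 1)
f1 = {g: 1.0 if g == s else 0.0 for g in G}
f2 = {g: 1.0 if g == t else 0.0 for g in G}
print(conv(f1, f2) == conv(f2, f1))  # False: f1*f2 is supported at st, f2*f1 at ts
```

The same computation on any abelian group would of course return equal convolutions.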
We also define the set of functions
$C_{b}^{\infty}(G):=\left\\{f\in C^{\infty}(G):\sup_{G}|Y^{\alpha}f|<\infty\
\mbox{for every}\ \alpha\in\mathbb{N}_{0}^{n}\right\\},$
for some basis $Y_{1},\ldots,Y_{n}$ of $\mathfrak{g}$ identified with left-
invariant vector fields and
$Y^{\alpha}=Y_{1}^{\alpha_{1}}Y_{2}^{\alpha_{2}}\cdots
Y_{n}^{\alpha_{n}},\quad\alpha\in\mathbb{N}_{0}^{n}.$
We check readily that the vector space $C_{b}^{\infty}(G)$ and its natural
topology are independent of a choice of basis $Y_{1},\ldots,Y_{n}$ and that
$C^{\infty}(G)^{\Gamma}=C_{b}^{\infty}(G)^{\Gamma}.$
Furthermore, we have:
###### Lemma 2.5.
The periodisation of a Schwartz function $\phi\in\mathcal{S}(G)$ is a well-
defined function $\phi^{\Gamma}$ in $C_{b}^{\infty}(G)^{\Gamma}$. Furthermore,
the map $\phi\mapsto\phi^{\Gamma}$ yields a surjective morphism of topological
vector spaces from $\mathcal{S}(G)$ onto $C_{b}^{\infty}(G)^{\Gamma}$ and from
$\mathcal{D}(G)$ onto $C_{b}^{\infty}(G)^{\Gamma}$.
###### Proof of Lemma 2.5.
We first need to set some notation. By Theorem 2.1, we may assume that
$\Gamma={\rm Exp}(\mathbb{Z}Y_{1})\ldots{\rm Exp}(\mathbb{Z}Y_{n})$ for some
strong Malcev basis $Y_{1},\ldots,Y_{n}$ of $\mathfrak{g}$. As strong Malcev
bases yields polynomial coordinates, we may identify $G$ with $\mathbb{R}^{n}$
via the exponential mapping: $y=(y_{1},\ldots,y_{n})\mapsto{\rm
Exp}(y_{1}Y_{1}+\ldots+y_{n}Y_{n})$. We fix a Euclidean norm $|\cdot|$ on
$\mathbb{R}^{n}\sim G$. Note that $|y^{-1}|=|-y|=|y|$ and that the Baker-
Campbell-Hausdorff formula implies the following modified triangle inequality:
(2.2) $\exists C_{0}>0\qquad\forall a,b\in G\qquad 1+|ab|\leq
C_{0}(1+|a|)^{s}(1+|b|)^{s},$
where $s$ is the step of $G$.
We first show that the periodisation of a Schwartz function
$\phi\in\mathcal{S}(G)$ makes sense as a function on $G$. As $\phi$ is
Schwartz, for all $N\in\mathbb{N}$ there exists $C=C_{\phi,N}$ such that
$\forall x\in G\qquad|\phi(x)|\leq C(1+|x|)^{-N},$
so by (2.2),
$\forall x\in G,\ \gamma\in\Gamma\qquad|\phi(\gamma x)|\leq C(1+|\gamma
x|)^{-N}\leq CC_{0}^{N}(1+|x|)^{N}(1+|\gamma|)^{-N/s}.$
The sum $\sum_{\gamma\in\Gamma}(1+|\gamma|)^{-N/s}$ is finite by Corollary 2.3
for $N$ large enough (it suffices to have $N>ns$). Hence the function
$\phi^{\Gamma}$ is well defined on $G$. Furthermore, it is now a routine
exercise to check that $\phi^{\Gamma}\in C_{b}^{\infty}(G)^{\Gamma}$ and that
$\phi\mapsto\phi^{\Gamma}$ is a morphism of topological vector spaces from
$\mathcal{S}(G)$ to $C_{b}^{\infty}(G)^{\Gamma}$, and also from
$\mathcal{D}(G)$ to $C_{b}^{\infty}(G)^{\Gamma}$.
It remains to show the surjectivity of
$\mathcal{D}(G)\ni\phi\mapsto\phi^{\Gamma}\in C_{b}^{\infty}(G)^{\Gamma}$. We
observe
(2.3) $\forall\phi\in C_{b}^{\infty}(G)^{\Gamma},\
\forall\psi\in\mathcal{D}(G)\qquad\phi\,\psi\in\mathcal{D}(G)\quad\mbox{and}\quad(\phi\psi)^{\Gamma}=\phi\,\psi^{\Gamma}.$
We fix $\psi_{0}\in\mathcal{D}(G)$ valued in $[0,1]$ and such that
$\psi_{0}=1$ on a fundamental domain of $M$, for instance the fundamental
domain $D_{0}$ from Proposition 2.4 to fix the ideas. We observe that
$\psi_{0}^{\Gamma}\in C_{b}^{\infty}(G)^{\Gamma}$ is bounded above and also
bounded below by 1, and furthermore that $1/\psi_{0}^{\Gamma}$ is also in
$C_{b}^{\infty}(G)^{\Gamma}$. Given any $\phi\in C_{b}^{\infty}(G)^{\Gamma}$,
applying (2.3) to $\phi$ and $\psi=\psi_{0}/\psi_{0}^{\Gamma}$ shows the
surjectivity of $\mathcal{D}(G)\ni\phi\mapsto\phi^{\Gamma}\in
C_{b}^{\infty}(G)^{\Gamma}$ and concludes the proof of Lemma 2.5. ∎
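The convergence argument behind Lemma 2.5 can be sketched numerically in the abelian toy case $G=\mathbb{R}$, $\Gamma=\mathbb{Z}$ (an illustrative assumption standing in for the nilpotent setting; the function names are ours):

```python
import math

# Minimal numerical sketch of periodisation in the abelian toy case
# G = R, Gamma = Z (an illustrative assumption, not the nilpotent setting):
# phi^Gamma(x) = sum_{k in Z} phi(x + k) is well defined for a rapidly
# decaying phi, and is Z-periodic.
def phi(x):
    # a Schwartz function on R
    return math.exp(-x * x)

def periodise(x, K=50):
    # truncated lattice sum; the Gaussian tails make the truncation error tiny
    return sum(phi(x + k) for k in range(-K, K + 1))

print(abs(periodise(0.3) - periodise(1.3)) < 1e-12)  # True: Z-periodicity
```

The decay hypothesis is exactly what makes the lattice sum converge, mirroring the role of Corollary 2.3 in the proof.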
### 2.4. Spaces of periodic functions
We now examine $E^{\Gamma}$ for some spaces of functions $E$ on the nilpotent
Lie group $G$, where $\Gamma$ is a discrete co-compact subgroup of $G$.
Although
$\mathcal{D}(G)^{\Gamma}=\\{0\\}=\mathcal{S}(G)^{\Gamma},$
many other $E^{\Gamma}$ are isomorphic to important spaces of functions (or
distributions) on $M$ as the following lemmata suggest.
The definition of $F_{G}$ and $f_{M}$ extend to measurable functions and we
have:
###### Lemma 2.6.
For every $p\in[1,\infty]$, the map $F\mapsto F_{G}$ is an isomorphism of the
topological vector spaces (in fact Banach spaces) from $L^{p}(M)$ onto
$L^{p}_{loc}(G)^{\Gamma}$ with inverse $f\mapsto f_{M}$.
The proof follows readily from the description of fundamental domains above,
see Proposition 2.4. It is left to the reader.
We also check readily:
###### Lemma 2.7.
The mapping $F\mapsto F_{G}$ is an isomorphism of topological vector spaces
from $\mathcal{D}(M)$ onto $C_{b}^{\infty}(G)^{\Gamma}$ with inverse $f\mapsto
f_{M}$.
Consequently, $\mathcal{D}^{\prime}(M)$ is isomorphic to the dual of
$C_{b}^{\infty}(G)^{\Gamma}$. This allows for a first distributional meaning
to $F\mapsto F_{G}$ with $F_{G}$ in the continuous dual of
$(C_{b}^{\infty}(G))^{\Gamma}$ and extended to $C_{b}^{\infty}(G)$ by the
Hahn-Banach theorem. However, we prefer to extend the definition of $F_{G}$ to
the case of distributions in the following way: if
$F\in\mathcal{D}^{\prime}(M)$, then $F_{G}$ given by
$\forall\phi\in\mathcal{S}(G)\qquad\langle F_{G},\phi\rangle=\langle
F,(\phi^{\Gamma})_{M}\rangle$
is a tempered distribution by Lemmata 2.7 and 2.5. One checks easily that it
is periodic and that it coincides with any other definition given above, for
instance on $\cup_{p\in[1,\infty)}L^{p}(M)$. Furthermore, we have:
###### Lemma 2.8.
We have $\mathcal{D}^{\prime}(G)^{\Gamma}=\mathcal{S}^{\prime}(G)^{\Gamma}$,
and the map $F\mapsto F_{G}$ yields an isomorphism of topological vector
spaces from $\mathcal{D}^{\prime}(M)$ onto $\mathcal{S}^{\prime}(G)^{\Gamma}$.
###### Proof.
Lemmata 2.5 and 2.7 imply easily that $F\mapsto F_{G}$ is a morphism of
topological vector spaces from $\mathcal{D}^{\prime}(M)$ to
$\mathcal{S}^{\prime}(G)^{\Gamma}$. We can construct its inverse using the
function $\psi_{0}\in\mathcal{D}(G)$ from the proof of Lemma 2.5. If
$f\in\mathcal{D}^{\prime}(G)^{\Gamma}$ then we define
$\forall\psi\in\mathcal{D}(M)\qquad\left\langle
f_{M},\psi\right\rangle:=\left\langle
f,\psi_{G}\frac{\psi_{0}}{\psi_{0}^{\Gamma}}\right\rangle.$
Lemma 2.7 implies easily that $f_{M}$ is a distribution on $M$ and that
$f\mapsto f_{M}$ is a morphism of topological vector spaces from
$\mathcal{D}^{\prime}(G)^{\Gamma}$ to $\mathcal{D}^{\prime}(M)$. Furthermore,
it gives the inverse of $F\mapsto F_{G}$ since we have first from the
definitions of these two mappings:
$\forall\phi\in\mathcal{D}(G)\qquad\left\langle(f_{M})_{G},\phi\right\rangle=\left\langle
f_{M},(\phi^{\Gamma})_{M}\right\rangle=\left\langle
f,\phi^{\Gamma}\frac{\psi_{0}}{\psi_{0}^{\Gamma}}\right\rangle=\left\langle
f,\phi\right\rangle,$
by periodicity of $f$. Hence
$f=(f_{M})_{G}\in\mathcal{S}^{\prime}(G)^{\Gamma}$ for any
$f\in\mathcal{D}^{\prime}(G)^{\Gamma}$. The statement follows. ∎
One checks easily that the inverse $f\mapsto f_{M}$ of $F\mapsto F_{G}$
constructed in the proof above coincides with any other definition of
$f\mapsto f_{M}$ given above, for instance on
$\cup_{p\in[1,\infty)}L^{p}_{loc}(G)$. Moreover, for every $p\in[1,\infty)$,
since $L^{p}(M)\subset\mathcal{D}^{\prime}(M)$, we have
$L^{p}_{loc}(G)^{\Gamma}\subset\mathcal{S}^{\prime}(G)^{\Gamma}$ by Lemma 2.8.
### 2.5. Convolution and periodicity
We already know that the convolution of a tempered distribution with a
Schwartz function is smooth on a nilpotent Lie group $G$. When the
distribution is periodic under the discrete co-compact subgroup of $G$, we
also have the following properties, in particular a type of Young’s
convolution inequality:
###### Lemma 2.9.
Let $F\in\mathcal{S}^{\prime}(G)^{\Gamma}$ and $\kappa\in\mathcal{S}(G)$. Then
$F*\kappa\in C_{b}^{\infty}(G)^{\Gamma}$. Viewed as a function on $M$,
$(F*\kappa)_{M}(\dot{x})=\int_{M}F_{M}(\dot{y})\
(\kappa(\cdot^{-1}x)^{\Gamma})_{M}(\dot{y})d\dot{y}=\int_{M}F_{M}(\dot{y})\sum_{\gamma\in\Gamma}\kappa(y^{-1}\gamma
x)\ d\dot{y}.$
If $f\in L^{p}(M)$ for $p\in[1,+\infty]$, then
$f_{G}\in\mathcal{S}^{\prime}(G)^{\Gamma}$ and we have
$\|(f_{G}*\kappa)_{M}\|_{L^{p}(M)}\leq\|f\|_{L^{p}(M)}\|\kappa\|_{L^{1}(G)}.$
###### Proof.
We check readily for $x\in G$ and $\gamma\in\Gamma$
$F*\kappa(\gamma x)=\int_{G}F(y)\kappa(y^{-1}\gamma x)dy=\int_{G}F(\gamma
z)\kappa(z^{-1}x)dz=\int_{G}F(z)\kappa(z^{-1}x)dz=F*\kappa(x).$
The formula on $M$ follows from (2.1).
Let $f\in L^{p}(M)$ for $p\in[1,+\infty]$. As a consequence of Lemmata
2.6 and 2.8, $f_{G}\in\mathcal{S}^{\prime}(G)^{\Gamma}\cap
L^{p}_{loc}(G)^{\Gamma}$. By Lemmata 2.5 and 2.7, for each fixed $\dot{x}\in
M$, we can set
$d_{\dot{x}}(\dot{y}):=\kappa(\cdot^{-1}x)^{\Gamma}(y)=\sum_{\gamma\in\Gamma}\kappa(y^{-1}\gamma
x),$
and this defines a smooth function $d_{\dot{x}}$ on $M$. Furthermore,
$\dot{x}\mapsto d_{\dot{x}}$ is continuous from $M$ to $\mathcal{D}(M)$. This
function allows us to write the more concise formula
$(f_{G}*\kappa)_{M}(\dot{x})=\int_{M}f(\dot{y})d_{\dot{x}}(\dot{y})d\dot{y}.$
The decomposition of the Haar measure in (2.1) and its invariance under
translation imply
(2.4)
$\displaystyle\|d_{\dot{x}}\|_{L^{1}(M)}\leq\int_{M}\sum_{\gamma\in\Gamma}|\kappa(y^{-1}\gamma
x)|\ d\dot{y}=\int_{G}|\kappa(y^{-1}x)|\
dy=\|\kappa\|_{L^{1}(G)},\quad\dot{x}\in M\ \mbox{(fixed)},$ (2.5)
$\displaystyle\int_{M}|d_{\dot{x}}(\dot{y})|d\dot{x}\leq\int_{M}\sum_{\gamma\in\Gamma}|\kappa(y^{-1}\gamma
x)|\ d\dot{x}=\int_{G}|\kappa(y^{-1}x)|\
dx=\|\kappa\|_{L^{1}(G)},\quad\dot{y}\in M\ \mbox{(fixed)}.$
The case $p=+\infty$ follows from (2.4) since we have
$|(f_{G}*\kappa)_{M}(\dot{x})|\leq\|f\|_{L^{\infty}(M)}\|d_{\dot{x}}\|_{L^{1}(M)}=\|f\|_{L^{\infty}(M)}\|\kappa\|_{L^{1}(G)}.$
Let $p\in[1,+\infty)$. By Jensen’s inequality, we have for any fixed
$\dot{x}\in M$
$|(f_{G}*\kappa)_{M}(\dot{x})|^{p}=\left|\int_{M}f(\dot{y})d_{\dot{x}}(\dot{y})d\dot{y}\right|^{p}\leq\|d_{\dot{x}}\|_{L^{1}(M)}^{p}\int_{M}\left|f(\dot{y})\right|^{p}\frac{|d_{\dot{x}}(\dot{y})|}{\|d_{\dot{x}}\|_{L^{1}(M)}}d\dot{y}$
so that, now integrating against $\dot{x}\in M$ and using (2.5),
$\|(f_{G}*\kappa)_{M}\|_{L^{p}(M)}^{p}\leq\|\kappa\|_{L^{1}(G)}^{p-1}\int_{M}\left|f(\dot{y})\right|^{p}\int_{M}|d_{\dot{x}}(\dot{y})|d\dot{x}\,d\dot{y}=\|\kappa\|_{L^{1}(G)}^{p}\int_{M}\left|f(\dot{y})\right|^{p}d\dot{y}.$
The statement follows. ∎
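The Young-type bound of Lemma 2.9 can be sanity-checked numerically in the abelian toy case $G=\mathbb{R}$, $\Gamma=\mathbb{Z}$, $M=\mathbb{R}/\mathbb{Z}$ (an illustrative assumption, not the nilpotent setting; the choices of $f$, $\kappa$ and the discretisation are ours):

```python
import math

# Numerical sketch of the Young-type inequality of Lemma 2.9 in the abelian
# toy case G = R, Gamma = Z, M = R/Z (an illustrative assumption):
# ||(f_G * kappa)_M||_{L^p(M)} <= ||f||_{L^p(M)} ||kappa||_{L^1(G)}.
p = 3
n_grid = 128
h = 1.0 / n_grid
grid = [i * h for i in range(n_grid)]

def f(xdot):
    # a function on M = R/Z
    return 1.0 + math.sin(2.0 * math.pi * xdot)

def kappa(x):
    # an integrable kernel on G = R with ||kappa||_{L^1(R)} = 2
    return math.exp(-abs(x))

def conv(x, K=10):
    # (f_G * kappa)(x) = int_R f_G(y) kappa(x - y) dy, truncated to |y| <= K
    return h * sum(f((j * h) % 1.0) * kappa(x - j * h)
                   for j in range(-K * n_grid, K * n_grid))

lhs = (h * sum(abs(conv(x)) ** p for x in grid)) ** (1.0 / p)
rhs = (h * sum(abs(f(x)) ** p for x in grid)) ** (1.0 / p) * 2.0
print(lhs <= rhs)  # True: the inequality holds with room to spare
```

Here the kernel $e^{-|x|}$ averages out the oscillation of $f$, so the left-hand side is in fact much smaller than the right-hand side.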
The proofs of Lemmata 2.9 and 2.5 can be modified in a classical manner to
obtain:
###### Corollary 2.10.
Let $\kappa\in L^{1}(G)$. We assume that there is an $N>0$ large enough
($N=ns+1$ is sufficient, where $n$ and $s$ are the dimension and the step of
$G$) such that
$\exists C>0\qquad 0\leq\kappa(x)\leq C(1+|x|)^{-N}\quad\mbox{for almost
every}\ x\in G.$
Then
$K(\dot{x},\dot{y}):=\sum_{\gamma\in\Gamma}\kappa(y^{-1}\gamma x)$
defines a non-negative integrable function $(\dot{x},\dot{y})\mapsto
K(\dot{x},\dot{y})$ on $M\times M$ satisfying
$\int_{M}K(\dot{x},\dot{y})d\dot{y}=\|\kappa\|_{L^{1}(G)}\ \mbox{for a.e.}\
\dot{x}\in
M,\quad\mbox{and}\quad\int_{M}K(\dot{x},\dot{y})d\dot{x}=\|\kappa\|_{L^{1}(G)}\
\mbox{for a.e.}\ \dot{y}\in M.$
Furthermore, if $f\in L^{p}(M)$ for $p\in[1,+\infty)$, then
$\int_{M}\left(\int_{M}|f(\dot{y})|\,K(\dot{x},\dot{y})d\dot{y}\right)^{p}d\dot{x}\leq\|\kappa\|_{L^{1}(G)}^{p}\|f\|_{L^{p}(M)}^{p}.$
### 2.6. Operators on $M$ and $G$
Consider a linear continuous mapping
$T:\mathcal{S}(G)\to\mathcal{S}^{\prime}(G)$ which is invariant under $\Gamma$
in the sense that
$\forall F\in\mathcal{S}(G),\ \forall\gamma\in\Gamma,\qquad
T(F(\gamma\,\cdot))=(TF)(\gamma\,\cdot).$
Then it naturally induces an operator $T_{M}$ on $M$ via
$T_{M}f=(Tf_{G})_{M}.$
Furthermore, $T_{M}:\mathcal{D}(M)\to\mathcal{D}^{\prime}(M)$ is a linear
continuous mapping by Lemmata 2.7 and 2.8. Note that if $T$ is invariant under
$G$, then it is invariant under $\Gamma$.
Invariant differential operators keep many of their features from $G$ to $M$:
###### Lemma 2.11.
Let $T$ be a smooth differential operator on $G$ which is invariant under
$\Gamma$. Then $T_{M}$ is a smooth differential operator on $M$.
If $T$ is hypo-elliptic, then so is $T_{M}$.
If $T$ is symmetric and positive on $\mathcal{S}(G)$ in the sense that
$\forall
F_{1},F_{2}\in\mathcal{S}(G)\quad\int_{G}TF_{1}(x)\,\overline{F_{2}(x)}dx=\int_{G}F_{1}(x)\,\overline{TF_{2}(x)}dx\quad\mbox{and}\quad\int_{G}TF_{1}(x)\,\overline{F_{1}(x)}dx\geq
0,$
then $T_{M}$ is also symmetric and positive on $\mathcal{D}(M)$. Moreover, $T$
is essentially self-adjoint on $L^{2}(G)$ and $T_{M}$ is essentially self-
adjoint on $L^{2}(M)$.
These arguments are very classical and we only sketch their proofs.
###### Sketch of the proof.
The manifold $M$ has natural charts given by the descriptions of the
fundamental domains in Proposition 2.4. They imply that $T_{M}$ is a smooth
differential operator on $M$ and also that if $T$ is hypo-elliptic (resp.
symmetric and positive), then so is $T_{M}$. The self-adjointness follows from
the fact that the positivity implies the density of the ranges of the operators
$T+i{\rm I}$ and $T-i{\rm I}$, and $T_{M}+i{\rm I}$ and $T_{M}-i{\rm I}$. ∎
## 3\. Preliminaries on graded groups and Rockland operators
### 3.1. Graded nilpotent Lie group
In the rest of the paper, we will be concerned with graded Lie groups.
References on this subject include [16] and [14].
A graded Lie group $G$ is a connected and simply connected Lie group whose Lie
algebra $\mathfrak{g}$ admits an $\mathbb{N}$-gradation
$\mathfrak{g}=\oplus_{\ell=1}^{\infty}\mathfrak{g}_{\ell}$ where the
$\mathfrak{g}_{\ell}$, $\ell=1,2,\ldots$, are vector subspaces of
$\mathfrak{g}$, almost all equal to $\\{0\\}$, and satisfying
$[\mathfrak{g}_{\ell},\mathfrak{g}_{\ell^{\prime}}]\subset\mathfrak{g}_{\ell+\ell^{\prime}}$
for any $\ell,\ell^{\prime}\in\mathbb{N}$. This implies that the group $G$ is
nilpotent. Examples of such groups are the Heisenberg group and, more
generally, all stratified groups (which by definition correspond to the case
where $\mathfrak{g}_{1}$ generates the full Lie algebra $\mathfrak{g}$).
#### 3.1.1. Dilations and homogeneity
For any $r>0$, we define the linear mapping
$D_{r}:\mathfrak{g}\to\mathfrak{g}$ by $D_{r}X=r^{\ell}X$ for every
$X\in\mathfrak{g}_{\ell}$, $\ell\in\mathbb{N}$. Then the Lie algebra
$\mathfrak{g}$ is endowed with the family of dilations $\\{D_{r},r>0\\}$ and
becomes a homogeneous Lie algebra in the sense of [16]. We re-write the set of
integers $\ell\in\mathbb{N}$ such that $\mathfrak{g}_{\ell}\not=\\{0\\}$ into
the increasing sequence of positive integers
$\upsilon_{1},\ldots,\upsilon_{n}$ counted with multiplicity, the multiplicity
of $\mathfrak{g}_{\ell}$ being its dimension. In this way, the integers
$\upsilon_{1},\ldots,\upsilon_{n}$ become the weights of the dilations.
We construct a basis $X_{1},\ldots,X_{n}$ of $\mathfrak{g}$ adapted to the
gradation, by choosing a basis $\\{X_{1},\ldots X_{n_{1}}\\}$ of
$\mathfrak{g}_{1}$ (this basis is possibly reduced to $\emptyset$), then
$\\{X_{n_{1}+1},\ldots,X_{n_{1}+n_{2}}\\}$ a basis of $\mathfrak{g}_{2}$
(possibly empty, as may be the others), and so on. We have
$D_{r}X_{j}=r^{\upsilon_{j}}X_{j}$, $j=1,\ldots,n$.
The associated group dilations are defined by
$D_{r}(x)=rx:=(r^{\upsilon_{1}}x_{1},r^{\upsilon_{2}}x_{2},\ldots,r^{\upsilon_{n}}x_{n}),\quad
x=(x_{1},\ldots,x_{n})\in G,\ r>0.$
In a canonical way, this leads to the notions of homogeneity for functions,
distributions and operators and we now give a few important examples.
The Haar measure is $Q$-homogeneous where
$Q:=\sum_{\ell\in\mathbb{N}}\ell\dim\mathfrak{g}_{\ell}=\upsilon_{1}+\ldots+\upsilon_{n},$
is called the homogeneous dimension of $G$.
Identifying the elements of $\mathfrak{g}$ with left invariant vector fields,
each $X_{j}$ is a homogeneous differential operator of degree $\upsilon_{j}$.
More generally, the differential operator
$X^{\alpha}=X_{1}^{\alpha_{1}}X_{2}^{\alpha_{2}}\cdots
X_{n}^{\alpha_{n}},\quad\alpha\in\mathbb{N}_{0}^{n}$
is homogeneous with degree
$[\alpha]:=\alpha_{1}\upsilon_{1}+\cdots+\alpha_{n}\upsilon_{n}.$
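For instance (a standard illustration, not needed in the sequel), on the Heisenberg group $\mathbb{H}_{1}$ one may take $X_{1},X_{2}$ spanning $\mathfrak{g}_{1}$ and $X_{3}=[X_{1},X_{2}]$ spanning $\mathfrak{g}_{2}$, so that the weights are $(\upsilon_{1},\upsilon_{2},\upsilon_{3})=(1,1,2)$ and

```latex
Q=1+1+2=4,
\qquad
[\alpha]=\alpha_{1}+\alpha_{2}+2\alpha_{3};
```

e.g. the operator $X_{1}X_{3}$, corresponding to $\alpha=(1,0,1)$, is homogeneous of degree $3$.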
#### 3.1.2. Approximate identity
On $G$, we can easily construct approximations of the identity operator in the
following way: if $\kappa\in\mathcal{S}(G)$, then for each $s\in(0,1)$, we
define $\kappa_{s}\in\mathcal{S}(G)$ via
$\kappa_{s}(y)=s^{-Q}\kappa(s^{-1}y)$. Setting
$c_{\kappa}=\int_{G}\kappa(y)dy$, we have the convergence [14, Section 3.1.10]
$f*\kappa_{s}\longrightarrow_{s\to 0}c_{\kappa}f,$
in $\mathcal{S}(G)$ for any $f\in\mathcal{S}(G)$ and in
$\mathcal{S}^{\prime}(G)$ for any $f\in\mathcal{S}^{\prime}(G)$. The
convergence also takes place in $L^{p}(G)$, $p\in(0,\infty)$, but we will not
use this here.
#### 3.1.3. Homogeneous quasi-norms
An important class of homogeneous maps is that of the homogeneous quasi-norms,
that is, the $1$-homogeneous non-negative maps $G\ni x\mapsto\|x\|$ which are
symmetric and definite in the sense that $\|x^{-1}\|=\|x\|$ and
$\|x\|=0\Longleftrightarrow x=0$. In fact, all the homogeneous quasi-norms are
equivalent in the sense that if $\|\cdot\|_{1}$ and $\|\cdot\|_{2}$ are two of
them, then
$\exists C>0\qquad\forall x\in G\qquad C^{-1}\|x\|_{1}\leq\|x\|_{2}\leq
C\|x\|_{1}.$
Examples may be constructed easily, such as
$\|x\|=(\sum_{j=1}^{n}|x_{j}|^{N/\upsilon_{j}})^{1/N}\ \mbox{for any}\
N\in\mathbb{N},$
with the convention above.
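With the exponent $1/N$, the $1$-homogeneity is a direct computation:

```latex
\|rx\|
=\Big(\sum_{j=1}^{n}|r^{\upsilon_{j}}x_{j}|^{N/\upsilon_{j}}\Big)^{1/N}
=\Big(r^{N}\sum_{j=1}^{n}|x_{j}|^{N/\upsilon_{j}}\Big)^{1/N}
=r\,\|x\|,
\qquad r>0,\ x\in G,
```

while symmetry and definiteness are immediate from the formula.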
An important property of homogeneous quasi-norms is that they satisfy the
triangle inequality up to a constant:
$\exists C>0\qquad\forall x,y\in G\qquad\|xy\|\leq C(\|x\|+\|y\|).$
However, it is possible to construct a homogeneous quasi-norm on $G$ which
yields a distance on $G\sim\mathbb{R}^{n}$ in the sense that the triangle
inequality above is satisfied with constant 1 [14, Theorem 3.1.39].
Using polar coordinates [14, Proposition 3.1.42], it is easily seen that for
any homogeneous quasi-norm $\|\cdot\|$ on $G$, we have for any $s>Q$
(3.1)
$\int_{G}(1+\|x\|)^{-s}dx<\infty\quad\mbox{and}\quad\int_{\|y\|>\varepsilon}\|y\|^{-s}dy\leq
C_{s,G}\varepsilon^{-s+Q},$
for some finite constant $C_{s,G}>0$.
### 3.2. Discrete co-compact subgroups of $G$
If the structural constants $c_{i,j,k}$ from
$[X_{i},X_{j}]=\sum_{k}c_{i,j,k}X_{k}$ are all rational, then there
exists a positive integer $K\in\mathbb{N}$ such that the set
$\Gamma:={\rm Exp}(K\mathbb{Z}X_{n})\ldots{\rm Exp}(K\mathbb{Z}X_{1})={\rm
Exp}(K\mathbb{Z}X_{1})\ldots{\rm Exp}(K\mathbb{Z}X_{n})$
is a discrete co-compact subgroup of $G$ as a consequence of Theorem 2.1 and
the following statement:
###### Lemma 3.1.
Let $G$ be a graded Lie group. The basis constructed in Section 3.1.1 but
ordered as $X_{n},\ldots,X_{1}$ is a strong Malcev basis of $\mathfrak{g}$.
###### Proof of Lemma 3.1.
The properties of the dilations imply that the Lie bracket of an element
$X\in\mathfrak{g}$ with $X_{m}$ is in the linear span of the $X_{j}$'s with
weights $>\upsilon_{m}$, hence in $\mathbb{R}X_{m+1}\oplus\ldots\oplus\mathbb{R}X_{n}$. ∎
###### Remark 3.2.
* •
In what follows, we do not require that $\Gamma$ is constructed out of a basis
constructed in Section 3.1.1.
* •
Although we will not need this property, we observe that the same proof yields
the fact that $X_{n},\ldots,X_{1}$ is a strong Malcev basis of $\mathfrak{g}$
through the sequence of ideals
$\mathfrak{g}_{(1)}\subset\mathfrak{g}_{(2)}\subset\ldots\subset\mathfrak{g}_{(k)}=\mathfrak{g}$
defined as follows. Re-labeling the weights as a sequence of strictly
decreasing integers
$\\{1\leq\upsilon_{1}\leq\ldots\leq\upsilon_{n}\\}=\\{\upsilon_{n}=\omega_{1}>\ldots>\omega_{k}=\upsilon_{1}\\},$
$\mathfrak{g}_{(j)}$ denotes the vector space spanned by the elements in
$\mathfrak{g}$ with weights $\geq\omega_{j}$ for each $j=1,\ldots,k$.
We will need the following property which is similar to Corollary 2.3 but for
homogeneous quasi-norms:
###### Lemma 3.3.
Let $\Gamma$ be a discrete co-compact subgroup of a graded group $G$. Then for
any $N>\upsilon_{n}n$ and any homogeneous quasi-norm $\|\cdot\|$ on $G$, the
sum $\sum_{\gamma\in\Gamma\setminus\\{0\\}}\|\gamma\|^{-N}$ is finite.
###### Proof.
As all the homogeneous quasi-norms are equivalent, it suffices to prove the
result for the one given by
$\|x\|=\max_{j=1,\ldots,n}|x_{j}|^{1/\upsilon_{j}}$, where we have written
$x={\rm Exp}\sum_{j=1}^{n}x_{j}X_{j}$, for the basis $X_{1},\ldots,X_{n}$
adapted to the gradation and constructed in Section 3.1.1. We observe that it
suffices to show that the sum over $\gamma\in\Gamma$ with $\|\gamma\|\geq 1$
is finite, and that for any $x\in G$
$\|x\|\geq
1\Longrightarrow\|x\|\geq(\max_{j=1,\ldots,n}|x_{j}|)^{1/\upsilon_{n}}.$
Now $x\mapsto\max_{j=1,\ldots,n}|x_{j}|$ is a norm on $\mathbb{R}^{n}$, and we
can conclude with Corollary 2.3. ∎
A consequence of the analysis in Section 2 is the existence of approximate
identities on $M$:
###### Corollary 3.4.
Let $\Gamma$ be a discrete co-compact subgroup of a graded group $G$. Let
$\kappa\in\mathcal{S}(G)$, then for each $s\in(0,1)$, we define
$\kappa_{s}\in\mathcal{S}(G)$ via $\kappa_{s}(y)=s^{-Q}\kappa(s^{-1}y)$ and
set $c_{\kappa}=\int_{G}\kappa(y)dy$. Then for any
$f\in\mathcal{D}^{\prime}(M)$, we have the convergence in
$\mathcal{D}^{\prime}(M)$:
$(f_{G}*\kappa_{s})_{M}\longrightarrow_{s\to 0}c_{\kappa}f.$
###### Proof of Corollary 3.4.
This is a consequence of the convergence of the approximate identity in
$\mathcal{S}^{\prime}(G)$, see Section 3.1.2, and Lemma 2.8. ∎
### 3.3. The dual of $G$ and the Plancherel theorem
In this paper, we always assume that the representations of the group $G$ are
strongly continuous and acting on separable Hilbert spaces. Unless otherwise
stated, the representations of $G$ will also be assumed unitary. For a
representation $\pi$ of $G$, we keep the same notation for the corresponding
infinitesimal representation which acts on the universal enveloping algebra
$\mathfrak{U}(\mathfrak{g})$ of the Lie algebra of the group. It is
characterised by its action on $\mathfrak{g}$:
(3.2) $\pi(X)=\partial_{t=0}\pi(e^{tX}),\quad X\in\mathfrak{g}.$
The infinitesimal representation acts on the space $\mathcal{H}_{\pi}^{\infty}$ of
smooth vectors, that is, the space of vectors $v\in\mathcal{H}_{\pi}$ such
that the mapping $G\ni x\mapsto\pi(x)v\in\mathcal{H}_{\pi}$ is smooth.
We will use the following equivalent notations for the group Fourier transform
of a function $f\in L^{1}(G)$ at $\pi$
$\pi(f)\equiv\widehat{f}(\pi)\equiv\mathcal{F}_{G}(f)(\pi)=\int_{G}f(x)\pi(x)^{*}dx.$
We denote by ${\widehat{G}}$ the unitary dual of $G$, that is, the unitary
irreducible representations of $G$ modulo equivalence and identify a unitary
irreducible representation with its class in ${\widehat{G}}$. The set
${\widehat{G}}$ is naturally equipped with a structure of standard Borel
space. The Plancherel measure is the unique positive Borel measure $\mu$ on
${\widehat{G}}$ such that for any $f\in C_{c}(G)$, we have:
(3.3)
$\int_{G}|f(x)|^{2}dx=\int_{{\widehat{G}}}\|\mathcal{F}_{G}(f)(\pi)\|_{HS(\mathcal{H}_{\pi})}^{2}d\mu(\pi).$
Here $\|\cdot\|_{HS(\mathcal{H}_{\pi})}$ denotes the Hilbert-Schmidt norm on
$\mathcal{H}_{\pi}$. This implies that the group Fourier transform extends
unitarily from $L^{1}(G)\cap L^{2}(G)$ to $L^{2}(G)$ onto
$L^{2}({\widehat{G}}):=\int_{{\widehat{G}}}\mathcal{H}_{\pi}\otimes\mathcal{H}_{\pi}^{*}d\mu(\pi),$
which we identify with the space of $\mu$-square integrable fields on
${\widehat{G}}$. Consequently (3.3) holds for any $f\in L^{2}(G)$; this
formula is called the Plancherel formula. It is possible to give an expression
for the Plancherel measure $\mu$, see [7, Section 4.3], although we will not
need this in this paper. We deduce the inversion formula: for any
$\kappa\in\mathcal{S}(G)$, we have [14, Proposition 5.1.15]
(3.4) $\int_{{\widehat{G}}}{\rm
Tr}|\mathcal{F}\kappa(\pi)|d\mu(\pi)<\infty,\qquad\forall x\in G\
\int_{{\widehat{G}}}{\rm
Tr}(\pi(x)\mathcal{F}_{G}\kappa(\pi))d\mu(\pi)=\kappa(x).$
We also recall that ${\widehat{G}}$ inherits a dilation from the one on $G$,
see [11, Section 2.2]. We denote by $r\cdot\pi$ the element of ${\widehat{G}}$
obtained from $\pi$ through dilation by $r$, that is,
$r\cdot\pi(x)=\pi(rx)$, $r>0$ and $x\in G$.
### 3.4. Positive Rockland operators on $G$
Let us briefly review the definition and main properties of positive Rockland
operators. References on this subject include [16] and [14].
#### 3.4.1. Definitions
A _Rockland operator_ $\mathcal{R}$ on $G$ is a left-invariant differential
operator which is homogeneous of positive degree and satisfies the Rockland
condition, that is, for each unitary irreducible representation $\pi$ on $G$,
except for the trivial representation, the operator $\pi(\mathcal{R})$ is
injective on the space $\mathcal{H}_{\pi}^{\infty}$ of smooth vectors of the
infinitesimal representation.
Recall [19] that Rockland operators are hypoelliptic. In fact, they may
equivalently be characterised as the left-invariant differential operators
which are hypoelliptic. If this is the case, then
$\mathcal{R}+\sum_{[\alpha]<\nu}c_{\alpha}X^{\alpha}$, where
$c_{\alpha}\in\mathbb{C}$ and $\nu$ is the homogeneous degree of $\mathcal{R}$,
is hypoelliptic.
A Rockland operator is _positive_ when
$\forall f\in\mathcal{S}(G),\qquad\int_{G}\mathcal{R}f(x)\
\overline{f(x)}dx\geq 0.$
Any sub-Laplacian with the sign convention
$-(Z_{1}^{2}+\ldots+Z_{n^{\prime}}^{2})$ of a stratified Lie group is a
positive Rockland operator; here $Z_{1},\ldots,Z_{n^{\prime}}$ form a basis of
the first stratum $\mathfrak{g}_{1}$. The reader familiar with the Carnot
group setting may view positive Rockland operators as generalisations of the
natural sub-Laplacians. Positive Rockland operators are easily constructed on
any graded Lie group [14, Section 4.2].
A positive Rockland operator is essentially self-adjoint on $L^{2}(G)$ and we
keep the same notation for its self-adjoint extension. Its spectrum is
included in $[0,+\infty)$ and the point 0 may be neglected in its spectrum,
see [10, Lemma 2.13].
For each unitary irreducible representation $\pi$ of $G$, the operator
$\pi(\mathcal{R})$ is essentially self-adjoint on $\mathcal{H}^{\infty}_{\pi}$
and we keep the same notation for this self-adjoint extension. Its spectrum is
a discrete subset of $(0,\infty)$ if $\pi$ is not the trivial representation
$1_{{\widehat{G}}}$, while $\pi(\mathcal{R})=0$ if $\pi=1_{{\widehat{G}}}$.
We denote by $E_{\pi}$ the spectral measure of $\pi(\mathcal{R})$, so that
$\pi(\mathcal{R})=\int_{\mathbb{R}}\lambda dE_{\pi}(\lambda)$.
#### 3.4.2. Spectral multipliers in $\mathcal{R}$ and in
$\widehat{\mathcal{R}}$
If $\psi:\mathbb{R}^{+}\to\mathbb{C}$ is a measurable function, the spectral
multiplier $\psi(\mathcal{R})$ is well defined as a possibly unbounded
operator on $L^{2}(G)$. Its domain is the space of functions $f\in L^{2}(G)$
such that $\int_{0}^{+\infty}|\psi(\lambda)|^{2}d(E_{\lambda}f,f)$ is
finite, where $E$ denotes the spectral measure of
$\mathcal{R}=\int_{0}^{+\infty}\lambda dE_{\lambda}.$ For instance, if $\psi$
is bounded, then $\psi(\mathcal{R})$ is bounded on $L^{2}(G)$. However, we
will be concerned with multipliers $\psi$ that may not be in
$L^{\infty}(\mathbb{R})$. If the domain of $\psi(\mathcal{R})$ contains
$\mathcal{S}(G)$ and defines a continuous map
$\mathcal{S}(G)\to\mathcal{S}^{\prime}(G)$, then it is invariant under right-
translation and, by the Schwartz kernel theorem, admits a right-convolution
kernel $\psi(\mathcal{R})\delta_{0}\in\mathcal{S}^{\prime}(G)$ which satisfies
the following homogeneity property:
(3.5)
$\psi(r^{\nu}\mathcal{R})\delta_{0}(x)=r^{-Q}\psi(\mathcal{R})\delta_{0}(r^{-1}x),\quad
x\in G.$
Furthermore, for each unitary irreducible representation $\pi$ of $G$, the
domain of the operator
$\psi(\pi(\mathcal{R}))=\int_{\mathbb{R}}\psi(\lambda)dE_{\pi}(\lambda)$
contains $\mathcal{H}_{\pi}^{\infty}$ and we have
$\mathcal{F}_{G}\\{\psi(\mathcal{R})\varphi\\}(\pi)=\psi(\pi(\mathcal{R}))\
\widehat{\varphi}(\pi),\quad\varphi\in\mathcal{S}(G).$
The following statement is the famous result due to Hulanicki [20]:
###### Theorem 3.5 (Hulanicki’s theorem).
Let $\mathcal{R}$ be a positive Rockland operator on $G$. If
$\psi\in\mathcal{S}(\mathbb{R})$ then
$\psi(\mathcal{R})\delta_{0}\in\mathcal{S}(G)$.
For instance, the heat kernels
$p_{t}:=e^{-t\mathcal{R}}\delta_{0},\quad t>0,$
are Schwartz, although this property is in fact used in the proof of
Hulanicki’s Theorem.
The following result was mainly obtained by Christ for sub-Laplacians on
stratified groups [5, Proposition 3]. As explained below, the proof extends to
positive Rockland operators:
###### Theorem 3.6.
Let $\mathcal{R}$ be a positive Rockland operator of homogeneous degree $\nu$
on $G$. If the measurable function $\psi:\mathbb{R}^{+}\to\mathbb{C}$ is in
$L^{2}(\mathbb{R}^{+},\lambda^{Q/\nu}d\lambda/\lambda)$, then
$\psi(\mathcal{R})$ defines a continuous map
$\mathcal{S}(G)\to\mathcal{S}^{\prime}(G)$ whose convolution kernel
$\psi(\mathcal{R})\delta_{0}$ is in $L^{2}(G)$. Moreover, we have
$\|\psi(\mathcal{R})\delta_{0}\|_{L^{2}(G)}^{2}=c_{0}\int_{0}^{\infty}|\psi(\lambda)|^{2}\lambda^{\frac{Q}{\nu}}\frac{d\lambda}{\lambda},$
where $c_{0}=c_{0}(\mathcal{R})$ is a positive constant depending only on
$\mathcal{R}$ and $G$.
In other words, the map $\psi\mapsto\psi(\mathcal{R})\delta_{0}$ is an
isometry from $L^{2}((0,\infty),c_{0}\lambda^{Q/\nu}d\lambda/\lambda)$ onto its
image, which is a closed subspace of $L^{2}(G)$.
###### Remark 3.7.
* •
By plugging the function $\psi(\lambda)=e^{-\lambda/2}$ in the formula of
Theorem 3.6 and noticing $\|p_{1/2}\|_{L^{2}}^{2}=p_{1}(0)$, we obtain the
following expression for the constant in the statement in terms of the heat
kernel $p_{t}$ of $\mathcal{R}$:
(3.6)
$c_{0}=c_{0}(\mathcal{R})=\frac{p_{1}(0)}{\Gamma(Q/\nu)},\quad\mbox{where}\quad\Gamma(Q/\nu)=\int_{0}^{\infty}e^{-\lambda}\lambda^{Q/\nu}\frac{d\lambda}{\lambda}.$
* •
Using the Plancherel formula (3.3), we have the expression
$\|\psi(\mathcal{R})\delta_{0}\|_{L^{2}(G)}^{2}=\int_{{\widehat{G}}}\|\psi(\widehat{\mathcal{R}}(\pi))\|_{HS}^{2}d\mu(\pi),$
in terms of the Plancherel measure. Taking for instance $\psi=1_{[0,1]}$
readily leads to
$c_{0}=\frac{Q}{\nu}\int_{{\widehat{G}}}{\rm
Tr}\left(1_{[0,1]}(\widehat{\mathcal{R}}(\pi))\right)d\mu(\pi).$
More generally, taking $\psi=1_{[a,b]}$ where $0\leq a<b$, we have
(3.7)
$\frac{c_{0}\nu}{Q}(b^{\frac{Q}{\nu}}-a^{\frac{Q}{\nu}})=\int_{{\widehat{G}}}{\rm
Tr}\left(1_{[a,b]}(\widehat{\mathcal{R}}(\pi))\right)d\mu(\pi).$
* •
We can check Theorem 3.6 in the familiar setting of the canonical Laplacian
$\Delta_{\mathbb{R}^{n}}=-\sum_{j}\partial_{j}^{2}$ on the abelian group
$G=\mathbb{R}^{n}$. Using the Euclidean Fourier transform and a change to
polar coordinates, we readily obtain
$c_{0}(\Delta_{\mathbb{R}^{n}})=(\Gamma(n/2)2^{n}\pi^{n/2})^{-1}$ which is
indeed equal to $p_{1}(0)/\Gamma(n/2)$ since the heat kernels in this setting
are the Gaussians $p_{t}(x)=(4\pi t)^{-n/2}e^{-|x|^{2}/4t}$.
* •
We can also check Theorem 3.6 and the formula in (3.6) in the case of the
Heisenberg group. Realising the Heisenberg group $\mathbb{H}_{n}$ as
$\mathbb{R}^{2n+1}=\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}$ with
the group law
$(x,y,t)(x^{\prime},y^{\prime},t^{\prime})=(x+x^{\prime},y+y^{\prime},t+t^{\prime}+\frac{1}{2}(x\cdot
y^{\prime}-y\cdot x^{\prime})),$
the canonical sub-Laplacian $\mathcal{L}_{\mathbb{H}_{n}}$ is well understood,
see e.g. [26] or [14]. In particular, the relations between the Fourier
transform of $\mathbb{H}_{n}$ and the functional calculus of
$\mathcal{L}_{\mathbb{H}_{n}}$ yield
$\displaystyle\|\psi(\mathcal{L}_{\mathbb{H}_{n}})\delta_{0}\|_{L^{2}(\mathbb{H}_{n})}^{2}$
$\displaystyle=(2\pi)^{-(3n+1)}\int_{-\infty}^{+\infty}\sum_{a=0}^{\infty}\frac{(n+a-1)!}{(n-1)!a!}|\psi(|\lambda|(2a+n))|^{2}|\lambda|^{n}d\lambda$
$\displaystyle=c_{0}(\mathcal{L}_{\mathbb{H}_{n}})\int_{0}^{\infty}|\psi(\lambda)|^{2}\lambda^{Q/2}\frac{d\lambda}{\lambda},$
with $Q=2n+2$, so
$c_{0}(\mathcal{L}_{\mathbb{H}_{n}}):=(2\pi)^{-(3n+1)}2\sum_{a=0}^{\infty}\frac{(n+a-1)!}{(n-1)!a!}(2a+n)^{-n-1}.$
* •
If $\mathcal{R}$ is a positive Rockland operator, then any positive powers
$\mathcal{R}^{\ell}$ is also a positive Rockland operator, and a simple change
of variable on $(0,+\infty)$ implies
(3.8)
$c_{0}(\mathcal{R}^{\ell})=\frac{1}{\ell}c_{0}(\mathcal{R})\quad\mbox{for
any}\ \ell\in\mathbb{N}.$
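The explicit constants in the remark above lend themselves to numeric verification. The snippet below (an illustration with ad hoc discretizations, not part of the text) checks the Euclidean value of $c_{0}$ against formula (3.6), evaluates the Heisenberg series for $n=1$, where it sums in closed form to $1/(64\pi^{2})$ because $\sum_{a\geq 0}(2a+1)^{-2}=\pi^{2}/8$, and verifies the change of variable behind (3.8):

```python
import math

# (i) Euclidean case: c0(Δ_{R^n}) = (Γ(n/2) 2^n π^{n/2})^{-1} and formula (3.6), for n = 2
n = 2
c0 = 1.0 / (math.gamma(n / 2) * 2 ** n * math.pi ** (n / 2))
assert abs(c0 - (4 * math.pi) ** (-n / 2) / math.gamma(n / 2)) < 1e-15  # p_1(0)/Γ(Q/ν)

# the isometry of Theorem 3.6 for ψ(λ) = e^{-λ/2}, via the Euclidean Plancherel theorem
psi = lambda lam: math.exp(-lam / 2)
h, L = 2e-4, 30.0
grid = [h * i for i in range(1, int(L / h))]
sphere = 2 * math.pi ** (n / 2) / math.gamma(n / 2)        # surface measure |S^{n-1}|
lhs = (2 * math.pi) ** (-n) * sphere * sum(psi(r * r) ** 2 * r ** (n - 1) * h for r in grid)
rhs = c0 * sum(psi(lam) ** 2 * lam ** (n / 2 - 1) * h for lam in grid)
assert abs(lhs - rhs) < 1e-4                                # both sides equal 1/(4π)

# (ii) Heisenberg series for n = 1: Σ_{a≥0}(2a+1)^{-2} = π²/8, hence c0 = 1/(64π²)
c0_H1 = (2 * math.pi) ** (-4) * 2 * sum((2 * a + 1) ** (-2) for a in range(200000))
assert abs(c0_H1 - 1 / (64 * math.pi ** 2)) < 1e-7

# (iii) substitution μ = λ^ℓ behind (3.8), here with Q/ν = 3 and ℓ = 3
s0, l0 = 3.0, 3
pow_lhs = sum(psi(x ** l0) ** 2 * x ** (s0 - 1) * h for x in grid)
pow_rhs = sum(psi(x) ** 2 * x ** (s0 / l0 - 1) * h for x in grid) / l0
assert abs(pow_lhs - pow_rhs) < 1e-3
```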
###### Sketch of the proof of Theorem 3.6.
Since the heat kernels are Schwartz, we can argue as in the proof of [5,
Proposition 3]: for any $b>0$, the kernel
$\phi_{0}:=1_{(0,b]}(\mathcal{R})\delta_{0}\ \mbox{is in}\ L^{2}(G),$
and, for every $\psi\in L^{\infty}((0,b])$, the kernel
$\psi(\mathcal{R})\delta_{0}$ is in $L^{2}(G)$ with
$\|\psi(\mathcal{R})\delta_{0}\|_{L^{2}(G)}^{2}=\int_{0}^{\infty}|\psi(\lambda)|^{2}d(E_{\lambda}\phi_{0},\phi_{0}).$
This implies the existence and uniqueness of a sigma-finite Borel measure $m$
on $\mathbb{R}^{+}$ satisfying
$\|\psi(\mathcal{R})\delta_{0}\|_{L^{2}(G)}^{2}=\int_{0}^{\infty}|\psi(\lambda)|^{2}dm(\lambda),$
for every $\psi\in L^{\infty}(\mathbb{R}^{+})$ with compact support. From the
uniqueness and the homogeneity property in (3.5), it follows that the measure
$m$ is homogeneous of degree $Q/\nu$ on $\mathbb{R}^{+}$. This means that
$\lambda^{-Q/\nu}dm(\lambda)$ is a Haar measure for the multiplicative group
$\mathbb{R}^{+}$, and is therefore a constant multiple of $d\lambda/\lambda$.
This shows the theorem for any $\psi\in L^{\infty}(\mathbb{R}^{+})$ with
compact support.
Let us now prove the theorem for $\psi$ in
$L^{2}(\mathbb{R}^{+},\lambda^{Q/\nu}d\lambda/\lambda)$. The first problem is
to ensure that $\psi(\mathcal{R})\delta_{0}$ makes sense. For this, let
$(\psi_{j})_{j\in\mathbb{N}}$ be a sequence of bounded compactly supported
functions that converges to $\psi$ in
$L^{2}(\mathbb{R}^{+},\lambda^{Q/\nu}d\lambda/\lambda)$ as $j\to\infty$. As
the statement is shown for the $\psi_{j}$’s, the kernels
$\psi_{j}(\mathcal{R})\delta_{0}$, $j\in\mathbb{N}$, form a Cauchy sequence in
the Hilbert space $L^{2}(G)$; we denote by $\kappa\in L^{2}(G)$ its limit. We
observe that for each $\phi\in\mathcal{S}(G)$, Fatou’s inequality yields
$\int_{0}^{\infty}|\psi(\lambda)|^{2}d(E_{\lambda}\phi,\phi)\leq\liminf_{j\to\infty}\int_{0}^{\infty}|\psi_{j}(\lambda)|^{2}d(E_{\lambda}\phi,\phi)\leq
c_{0}^{-1}\|\phi\|_{L^{1}(G)}^{2}\|\kappa\|_{L^{2}(G)}^{2},$
since we have
$c_{0}\int_{0}^{\infty}|\psi_{j}(\lambda)|^{2}d(E_{\lambda}\phi,\phi)=\|\psi_{j}(\mathcal{R})\phi\|_{L^{2}(G)}^{2}=\|\phi*\psi_{j}(\mathcal{R})\delta_{0}\|_{L^{2}(G)}^{2}\leq\|\phi\|_{L^{1}(G)}^{2}\|\psi_{j}(\mathcal{R})\delta_{0}\|_{L^{2}(G)}^{2},$
by the Young convolution inequality. This shows that $\psi(\mathcal{R})$
defines a continuous map $\mathcal{S}(G)\to\mathcal{S}^{\prime}(G)$ and thus
admits a right-convolution kernel
$\psi(\mathcal{R})\delta_{0}\in\mathcal{S}^{\prime}(G)$ in the sense of
distributions. It remains to show $\kappa=\psi(\mathcal{R})\delta_{0}$. For
this we observe that we have for any $\chi\in\mathcal{D}(\mathbb{R})$ and any
$\phi\in\mathcal{S}(G)$:
$\|(\psi(\mathcal{R})-\psi_{j}(\mathcal{R}))\chi(\mathcal{R})\phi\|_{L^{2}(G)}\leq\|(\psi-\psi_{j})\chi(\mathcal{R})\delta_{0}\|_{L^{2}(G)}\|\phi\|_{L^{1}(G)},$
and, since the statement is proved for functions with compact supports,
$\|(\psi-\psi_{j})\chi(\mathcal{R})\delta_{0}\|_{L^{2}(G)}^{2}=c_{0}\int_{0}^{\infty}|(\psi-\psi_{j})\chi(\lambda)|^{2}\lambda^{\frac{Q}{\nu}}\frac{d\lambda}{\lambda}\leq
c_{0}\|\chi\|_{L^{\infty}(\mathbb{R})}^{2}\|\psi-\psi_{j}\|_{L^{2}(\mathbb{R}^{+},\lambda^{Q/\nu}d\lambda/\lambda)}^{2}.$
The last expression converges to 0 as $j\to\infty$ by hypothesis. Hence
$\lim_{j\to\infty}\|(\psi(\mathcal{R})-\psi_{j}(\mathcal{R}))\chi(\mathcal{R})\phi\|_{L^{2}(G)}=0=\|(\chi(\mathcal{R})\phi)*(\psi(\mathcal{R})\delta_{0}-\kappa)\|_{L^{2}(G)}.$
This implies that
$\langle\chi(\mathcal{R})\phi,\psi(\mathcal{R})\delta_{0}-\kappa\rangle=0$ and
thus $\psi(\mathcal{R})\delta_{0}-\kappa=0$ as tempered distributions because
$\chi(r\mathcal{R})\delta_{0}$ is an approximate identity (see Section 3.1.2
and Hulanicki’s theorem in Theorem 3.5 with (3.5)). This concludes the sketch
of the proof of Theorem 3.6. ∎
## 4\. Semiclassical calculus on a graded compact nil-manifold
### 4.1. Semiclassical pseudodifferential operators
The semiclassical pseudodifferential calculus in the context of groups of
Heisenberg type was presented in [11, 12], but in fact extends readily to any
graded group $G$. Here, we show how to define it on the quotient manifold $M$.
We consider the class of symbols ${\mathcal{A}}_{0}$ of fields of operators
defined on $M\times{\widehat{G}}$
$\sigma(\dot{x},\pi)\in{\mathcal{L}}({\mathcal{H}}_{\pi}),\;\;(\dot{x},\pi)\in
M\times{\widehat{G}},$
that are of the form
$\sigma(\dot{x},\pi)=\mathcal{F}_{G}\kappa_{\dot{x}}(\pi),$
where $\dot{x}\mapsto\kappa_{\dot{x}}$ is a smooth map from $M$ to
$\mathcal{S}(G)$. Equivalently, the map $\dot{x}\mapsto\kappa_{\dot{x}}$ in
$C^{\infty}(M:\mathcal{S}(G))$ may also be viewed as the $\Gamma$-periodic map
$x\longmapsto\kappa_{\dot{x}}\ \mbox{in}\
C^{\infty}(G:\mathcal{S}(G))^{\Gamma},$
and the symbol $\sigma=\\{\sigma(\dot{x},\pi):(\dot{x},\pi)\in
M\times{\widehat{G}}\\}$ as a field of operators on $G\times{\widehat{G}}$
which is $\Gamma$-periodic in $x\in G$. The group Fourier transform yields a
bijection
$(\dot{x}\mapsto\kappa_{\dot{x}})\mapsto(\dot{x}\mapsto\sigma(\dot{x},\cdot)=\mathcal{F}(\kappa_{\dot{x}}))$
from $C^{\infty}(M:\mathcal{S}(G))$ onto $\mathcal{A}_{0}$. We equip
$\mathcal{A}_{0}$ with the Fréchet topology for which this mapping is an
isomorphism of topological vector spaces.
We observe that $\mathcal{A}_{0}$ is an algebra for the usual composition of
symbols. Furthermore, it is also equipped with the involution
$\sigma\mapsto\sigma^{*}$, where
$\sigma^{*}=\\{\sigma(\dot{x},\pi)^{*},(\dot{x},\pi)\in
M\times{\widehat{G}}\\}$. We say that a symbol $\sigma\in\mathcal{A}_{0}$ is
_self-adjoint_ if $\sigma^{*}=\sigma$. If $\sigma\in\mathcal{A}_{0}$, we then
define the symbols
$\Re\sigma:=\frac{1}{2}(\sigma+\sigma^{*})\quad\mbox{and}\quad\Im\sigma:=\frac{1}{2i}(\sigma-\sigma^{*}),$
so that $\sigma=\Re\sigma+i\Im\sigma$, and both $\Re\sigma$ and $\Im\sigma$
are self-adjoint and in $\mathcal{A}_{0}$.
Note that by the Fourier inversion formula (3.4), we have
$\kappa_{\dot{x}}(z)=\int_{{\widehat{G}}}{\rm
Tr}(\pi(z)\,\sigma(\dot{x},\pi))d\mu(\pi)=\int_{{\widehat{G}}}{\rm
Tr}(\pi(z)\,\sigma_{G}(x,\pi))d\mu(\pi).$
We then define the operator ${\rm Op}_{G}(\sigma)$ acting on
$F\in\mathcal{S}^{\prime}(G)$ via
${\rm Op}_{G}(\sigma)F(x):=F*\kappa_{\dot{x}}(x),\quad x\in G.$
This makes sense since, for each $x\in G$, the convolution of the tempered
distribution $F$ with the Schwartz function $\kappa_{\dot{x}}$ yields a smooth
function $F*\kappa_{\dot{x}}$ on $G$. Because of the Fourier inversion formula
(3.4), it may be written formally as
${\rm Op}_{G}(\sigma)F(x)=\int_{G\times{\widehat{G}}}{\rm
Tr}(\pi(y^{-1}x)\sigma_{G}(x,\pi))F(y)dyd\mu(\pi).$
If $F$ is periodic, then ${\rm Op}_{G}(\sigma)F$ is also periodic with ${\rm
Op}_{G}(\sigma)F\in C^{\infty}(G)^{\Gamma}$ and we can view $F$ and ${\rm
Op}_{G}(\sigma)F$ as functions on $M$, see Section 2.4. In other words, we set
for any $f\in\mathcal{D}^{\prime}(M)$ and $\dot{x}\in M$:
${\rm Op}(\sigma)f(\dot{x}):={\rm
Op}_{G}(\sigma)f_{G}(x)=(f_{G}*\kappa_{\dot{x}})_{M}(\dot{x})=\int_{M}f(\dot{y})\sum_{\gamma\in\Gamma}\kappa_{\dot{x}}(y^{-1}\gamma
x)\,d\dot{y},$
and this defines the function ${\rm Op}(\sigma)f\in\mathcal{D}(M)$. We say
that $\kappa_{\dot{x}}$ is the _kernel associated_ with the symbol $\sigma$ or
${\rm Op}(\sigma)$.
The results in Section 2.4, especially Lemma 2.9, yield:
###### Lemma 4.1.
Let $\sigma\in\mathcal{A}_{0}$ and let $\kappa_{\dot{x}}$ be its associated
kernel. Then ${\rm Op}(\sigma)$ maps $\mathcal{D}^{\prime}(M)$ to
$\mathcal{D}(M)$ continuously, and its Schwartz integral kernel is the smooth
function $K$ on $M\times M$ given by
$K(\dot{x},\dot{y})=\sum_{\gamma\in\Gamma}\kappa_{\dot{x}}(y^{-1}\gamma x).$
Consequently, the operator ${\rm Op}(\sigma)$ is Hilbert-Schmidt on $L^{2}(M)$
with Hilbert-Schmidt norm
$\|{\rm Op}(\sigma)\|_{HS}=\|K\|_{L^{2}(M\times M)}.$
Let $\varepsilon\in(0,1]$ be a small parameter. For every symbol
$\sigma\in{\mathcal{A}}_{0}$, we consider the symbol
(4.1)
$\sigma^{(\varepsilon)}:=\\{\sigma(\dot{x},\varepsilon\pi):(\dot{x},\pi)\in
M\times{\widehat{G}}\\},$
whose associated kernel is then
(4.2)
$\kappa^{(\varepsilon)}_{\dot{x}}(z):=\varepsilon^{-Q}\kappa_{\dot{x}}(\varepsilon^{-1}\cdot
z),\quad z\in G,$
if $\kappa_{\dot{x}}=\kappa^{(1)}_{\dot{x}}$ is the kernel associated with the
symbol $\sigma=\sigma^{(1)}$. The semi-classical pseudo-differential calculus
is then defined via
${\rm Op}^{(\varepsilon)}(\sigma):={\rm
Op}(\sigma^{(\varepsilon)})\quad\mbox{and}\quad{\rm
Op}_{G}^{(\varepsilon)}(\sigma):={\rm Op}_{G}(\sigma^{(\varepsilon)}).$
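In the abelian model $G=\mathbb{R}$ (so $Q=1$ and the dilations are the usual ones), the passage from (4.1) to (4.2) is the classical Fourier scaling $\mathcal{F}\\{\varepsilon^{-1}\kappa(\varepsilon^{-1}\cdot)\\}(\xi)=\widehat{\kappa}(\varepsilon\xi)$. A numeric sketch with a Gaussian kernel (illustrative only; this uses the Euclidean Fourier transform, not the group Fourier transform of the text):

```python
import math, cmath

kappa = lambda z: math.exp(-z * z)                                   # model kernel on G = R
kappa_hat = lambda xi: math.sqrt(math.pi) * math.exp(-xi * xi / 4)   # its exact Fourier transform

def ft(f, xi, L=12.0, npts=24001):
    """Riemann-sum Fourier transform ∫ f(z) e^{-izξ} dz (ad hoc discretization)."""
    h = 2 * L / (npts - 1)
    return sum(f(-L + i * h) * cmath.exp(-1j * (-L + i * h) * xi) for i in range(npts)) * h

eps, Q = 0.5, 1
kappa_eps = lambda z: eps ** (-Q) * kappa(z / eps)                   # dilated kernel as in (4.2)
for xi in (0.0, 0.7, 1.9):
    assert abs(ft(kappa_eps, xi) - kappa_hat(eps * xi)) < 1e-6       # symbol dilation as in (4.1)
```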
### 4.2. Properties of the semiclassical calculus
The first two results in this section will justify the introduction of the
following semi-norm on $\mathcal{A}_{0}$:
$\|\sigma\|_{\mathcal{A}_{0}}:=\int_{G}\sup_{\dot{x}\in
M}|\kappa_{\dot{x}}(y)|dy,$
where $\kappa_{\dot{x}}$ is the kernel associated with
$\sigma\in\mathcal{A}_{0}$.
#### 4.2.1. Boundedness in $L^{2}(M)$
###### Proposition 4.2.
For every $\varepsilon\in(0,1]$ and $\sigma\in\mathcal{A}_{0}$,
$\|{\rm
Op}(\sigma)\|_{\mathcal{L}(L^{2}(M))}\leq\|\sigma\|_{\mathcal{A}_{0}}\quad\mbox{and}\quad\|\sigma\|_{\mathcal{A}_{0}}=\|\sigma^{(\varepsilon)}\|_{\mathcal{A}_{0}}$
where $\sigma^{(\varepsilon)}$ is given in (4.1). Consequently, we have:
$\|{\rm
Op}^{(\varepsilon)}(\sigma)\|_{\mathcal{L}(L^{2}(M))}\leq\|\sigma\|_{\mathcal{A}_{0}}.$
###### Proof.
We observe that for $f\in L^{2}(M)$,
$|{\rm
Op}(\sigma)f(\dot{x})|=\big{|}\int_{M}f(\dot{y})\sum_{\gamma\in\Gamma}\kappa_{\dot{x}}(y^{-1}\gamma
x)\,d\dot{y}\big{|}\leq\int_{M}|f(\dot{y})|\sum_{\gamma\in\Gamma}\sup_{\dot{x}_{1}\in
M}|\kappa_{\dot{x}_{1}}(y^{-1}\gamma x)|\ d\dot{y}.$
Furthermore, we can apply Corollary 2.10 to the function defined on $G$ by
$\sup_{\dot{x}_{1}\in M}|\kappa_{\dot{x}_{1}}|$ since
$\dot{x}\mapsto\kappa_{\dot{x}}$ is a smooth function from $G$ to
$\mathcal{S}(G)$. We obtain
$\|{\rm Op}(\sigma)f\|_{L^{2}(M)}\leq\|f\|_{L^{2}(M)}\|\sup_{\dot{x}_{1}\in
M}|\kappa_{\dot{x}_{1}}|\|_{L^{1}(G)}.$
This implies the first inequality. For the equality, a simple change of
variable $y=\varepsilon^{-1}\cdot z$ yields
$\|\sigma^{(\varepsilon)}\|_{\mathcal{A}_{0}}=\|\sup_{\dot{x}_{1}\in
M}|\kappa^{(\varepsilon)}_{\dot{x}_{1}}|\|_{L^{1}(G)}=\int_{G}\sup_{\dot{x}_{1}\in
M}|\kappa_{\dot{x}_{1}}|(\varepsilon^{-1}\cdot z)\
\varepsilon^{-Q}dz=\int_{G}\sup_{\dot{x}_{1}\in
M}|\kappa_{\dot{x}_{1}}(y)|dy=\|\sigma\|_{\mathcal{A}_{0}}.$
The rest follows. ∎
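The key step in the proof above is the Young convolution inequality $\|f*\kappa\|_{L^{2}}\leq\|f\|_{L^{2}}\|\kappa\|_{L^{1}}$. Its discrete analogue on the cyclic group $\mathbb{Z}/N\mathbb{Z}$ can be checked directly (a toy illustration with arbitrary data):

```python
# Young's inequality on Z/NZ: the ℓ² → ℓ² norm of convolution by κ is at most ||κ||_{ℓ¹}
N = 32
kappa = [((3 * j * j + 1) % 17 - 8) / 10 for j in range(N)]   # an arbitrary real kernel
f = [((5 * j + 2) % 11 - 5) / 7 for j in range(N)]            # an arbitrary input vector

conv = [sum(f[(i - j) % N] * kappa[j] for j in range(N)) for i in range(N)]
l2 = lambda v: sum(x * x for x in v) ** 0.5
bound = l2(f) * sum(abs(k) for k in kappa)
assert l2(conv) <= bound + 1e-12
```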
#### 4.2.2. Singularities of the operators as $\varepsilon\to 0$.
The following lemma is similar to Proposition 3.4 in [10] and shows that the
singularities of the integral kernels of the operators ${\rm
Op}^{(\varepsilon)}(\sigma)$ concentrate on the diagonal as $\varepsilon\to
0$. It also justifies that, for many semi-classical properties, the kernel
associated with a symbol may be assumed to be compactly supported in the
group variable:
###### Lemma 4.3.
Let $\chi\in\mathcal{D}(G)$ be identically equal to $1$ close to $0$. Let
$\sigma\in\mathcal{A}_{0}$ and let $\kappa_{\dot{x}}(z)$ denote its associated
kernel. For every $\varepsilon>0$, the symbol $\sigma_{\varepsilon}$ defined
via
$\sigma_{\varepsilon}(\dot{x},\pi)=\mathcal{F}_{G}\left(\kappa_{\dot{x}}\chi(\varepsilon\,\cdot)\right),$
that is, the symbol with associated kernel
$\kappa_{\dot{x}}(z)\chi(\varepsilon\cdot z)$, is in $\mathcal{A}_{0}$.
Furthermore, for all $N\in\mathbb{N}$, there exists a constant
$C=C_{N,\sigma,\chi}>0$ such that
$\forall\varepsilon\in(0,1]\qquad\left\|\sigma_{\varepsilon}-\sigma\right\|_{\mathcal{A}_{0}}\leq
C{\varepsilon}^{N}.$
###### Proof.
As the function $\chi$ is identically $1$ close to $z=0$, for all
$N\in\mathbb{N}$, there exists a bounded continuous function $\theta$,
identically 0 near 0, such that
$\forall y\in G,\;\;1-\chi(y)=\theta(y)\|y\|^{N},$
where $\|\cdot\|$ is a fixed homogeneous quasi-norm on $G$ (see Section
3.1.1). This implies
$\kappa_{\dot{x}}(z)-\kappa_{\dot{x}}(z)\chi(\varepsilon\cdot
z)=\kappa_{\dot{x}}(z)\theta(\varepsilon\cdot z)\|\varepsilon\cdot z\|^{N}.$
As $\|\varepsilon\cdot z\|=\varepsilon\|z\|$, we obtain
$\left\|\sigma_{\varepsilon}-\sigma\right\|_{\mathcal{A}_{0}}=\int_{G}\sup_{\dot{x}}|\kappa_{\dot{x}}(z)-\kappa_{\dot{x}}(z)\chi(\varepsilon\cdot
z)|dz\leq\varepsilon^{N}\|\theta\|_{L^{\infty}}\int_{G}\sup_{\dot{x}\in
M}|\kappa_{\dot{x}}(z)|\|z\|^{N}dz.$
This last integral is finite and this concludes the proof. ∎
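The mechanism of Lemma 4.3 can be illustrated numerically on $\mathbb{R}$: where the cutoff vanishes one has $1\leq(\varepsilon\|z\|)^{N}$, so the $L^{1}$ tail is dominated by $\varepsilon^{N}$ times a finite moment of the kernel. The sketch below uses a Gaussian kernel and a sharp indicator cutoff in place of the smooth $\chi$ (a simplification for illustration):

```python
import math

kappa = lambda z: math.exp(-z * z)      # Schwartz kernel stand-in
h, L, Npow = 1e-3, 20.0, 4
grid = [-L + h * i for i in range(int(2 * L / h) + 1)]
moment = sum(abs(kappa(z)) * abs(z) ** Npow * h for z in grid)      # ∫ |κ(z)| ||z||^N dz
for eps in (0.5, 0.25, 0.125):
    # L¹ mass where the cutoff χ(ε·z) vanishes, i.e. |εz| > 1
    tail = sum(abs(kappa(z)) * h for z in grid if abs(eps * z) > 1)
    assert tail <= eps ** Npow * moment + 1e-9                       # tail = O(ε^N)
```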
#### 4.2.3. Symbolic calculus
In order to write down the symbolic properties of the semi-classical calculus,
we need to introduce the notion of difference operators, which replace the
derivatives with respect to the Fourier variable from the Euclidean setting.
If $q$ is a smooth function on $G$ with polynomial growth, we define
$\Delta_{q}$ via
$\Delta_{q}\widehat{f}(\pi)=\mathcal{F}_{G}(qf)(\pi),\quad\pi\in{\widehat{G}},$
for any function $f\in\mathcal{S}(G)$ and even any tempered distribution
$f\in\mathcal{S}^{\prime}(G)$. In particular, considering the basis $(X_{j})$
constructed in Section 3.1.1, we consider [14, Proposition 5.2.3] the
polynomials $q_{\alpha}$ such that $X^{\beta}q_{\alpha}(0)=\delta_{\alpha,\beta}$
for all $\alpha,\beta\in\mathbb{N}_{0}^{n}$. We then define
$\Delta^{\alpha}:=\Delta_{q_{\alpha}(\cdot^{-1})}.$
We can now describe the symbolic calculus. The quantitative results on the
symbolic calculus for the pseudo-differential calculus on $G$ in [14, Section
5.5] imply the following statement:
###### Proposition 4.4.
1. (1)
If $\sigma_{1},\sigma_{2}\in\mathcal{A}_{0}$, then for every $N\in\mathbb{N}$,
we have
${\rm Op}^{(\varepsilon)}(\sigma_{1}){\rm
Op}^{(\varepsilon)}(\sigma_{2})=\sum_{[\alpha]<N}\varepsilon^{[\alpha]}{\rm
Op}^{(\varepsilon)}(\Delta^{\alpha}\sigma_{1}\
X_{\dot{x}}^{\alpha}\sigma_{2})\ +\ O(\varepsilon)^{N},$
in the sense that there exists a constant
$C=C_{\sigma_{1},\sigma_{2},G,\Gamma,N}>0$ such that
$\forall\varepsilon\in(0,1]\qquad\|{\rm Op}^{(\varepsilon)}(\sigma_{1}){\rm
Op}^{(\varepsilon)}(\sigma_{2})-\sum_{[\alpha]<N}\varepsilon^{[\alpha]}{\rm
Op}^{(\varepsilon)}(\Delta^{\alpha}\sigma_{1}\
X_{\dot{x}}^{\alpha}\sigma_{2})\|_{\mathcal{L}(L^{2}(M))}\leq
C\varepsilon^{N}.$
2. (2)
If $\sigma\in\mathcal{A}_{0}$, then for every $N\in\mathbb{N}$, we have
${\rm
Op}^{(\varepsilon)}(\sigma)^{*}=\sum_{[\alpha]<N}\varepsilon^{[\alpha]}{\rm
Op}^{(\varepsilon)}(X_{\dot{x}}^{\alpha}\Delta^{\alpha}\sigma^{*})\ +\
O(\varepsilon)^{N},$
in the sense that there exists a constant $C=C_{\sigma,G,\Gamma,N}>0$ such
that
$\forall\varepsilon\in(0,1]\qquad\|{\rm
Op}^{(\varepsilon)}(\sigma)^{*}-\sum_{[\alpha]<N}\varepsilon^{[\alpha]}{\rm
Op}^{(\varepsilon)}(X_{\dot{x}}^{\alpha}\Delta^{\alpha}\sigma^{*})\|_{\mathcal{L}(L^{2}(M))}\leq
C\varepsilon^{N}.$
###### Remark 4.5.
If $\sigma_{1},\sigma_{2}\in\mathcal{A}_{0}$ and $\sigma_{2}$ does not depend
on $\dot{x}$, then we have the equality
${\rm Op}^{(\varepsilon)}(\sigma_{1}){\rm Op}^{(\varepsilon)}(\sigma_{2})={\rm
Op}^{(\varepsilon)}(\sigma_{1}\sigma_{2})$
for every $\varepsilon\in(0,1]$.
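Remark 4.5 can be tested in a toy abelian model: take $G=\mathbb{R}$ and $\Gamma=\mathbb{Z}$, so that $M$ is the circle and the representations are the characters $e^{2\pi ikx}$; truncating to finitely many frequencies gives a matrix model of ${\rm Op}^{(\varepsilon)}$ in which composition with an $\dot{x}$-independent $\sigma_{2}$ is exact. This is only a finite-dimensional caricature, not the nonabelian calculus of this paper:

```python
import cmath

# Toy model: Op^(eps)(sigma) f(x) = Σ_k sigma(x, eps·k) fhat(k) e^{2πikx} on the circle,
# truncated to frequencies k = -N..N and sampled on n = 2N+1 grid points.
N = 16
n = 2 * N + 1
xs = [j / n for j in range(n)]

def quantize(sigma, eps):
    """Matrix of the truncated quantization in the grid basis."""
    mat = [[0j] * n for _ in range(n)]
    for a, x in enumerate(xs):
        for b, y in enumerate(xs):
            s = 0j
            for k in range(-N, N + 1):
                s += sigma(x, eps * k) * cmath.exp(2j * cmath.pi * k * (x - y))
            mat[a][b] = s / n
    return mat

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

sigma1 = lambda x, xi: cmath.exp(2j * cmath.pi * x) / (1 + xi * xi)  # depends on x
sigma2 = lambda x, xi: cmath.exp(-xi * xi)                           # independent of x
eps = 0.5
lhs = matmul(quantize(sigma1, eps), quantize(sigma2, eps))
rhs = quantize(lambda x, xi: sigma1(x, xi) * sigma2(x, xi), eps)
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(n) for j in range(n))
assert err < 1e-9   # composition with an x-independent symbol is exact here
```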
#### 4.2.4. Hilbert-Schmidt norm and trace
###### Lemma 4.6.
Let $\sigma\in\mathcal{A}_{0}$ with associated kernel $\kappa_{\dot{x}}(z)$.
For every $\varepsilon\in(0,1]$, ${\rm Op}^{(\varepsilon)}(\sigma)$ is a
Hilbert-Schmidt and trace-class operator on $L^{2}(M)$, with Hilbert-Schmidt norm
$\|{\rm Op}^{(\varepsilon)}(\sigma)\|_{HS}$ satisfying for every
$N\in\mathbb{N}$
$\|{\rm Op}^{(\varepsilon)}(\sigma)\|_{HS}^{2}=\varepsilon^{-Q}\iint_{G\times
M}|\kappa_{\dot{x}}(z)|^{2}dzd\dot{x}\ +\ O(\varepsilon)^{N},$
and trace satisfying for every $N\in\mathbb{N}$
${\rm Tr}\left({\rm
Op}^{(\varepsilon)}(\sigma)\right)=\varepsilon^{-Q}\int_{M}\kappa_{\dot{x}}(0)d\dot{x}\
+\ O(\varepsilon)^{N}.$
This means that there exists a constant $C=C_{N,\sigma,G,\Gamma}>0$ such that
for every $\varepsilon\in(0,1]$
$\displaystyle\left|\|{\rm
Op}^{(\varepsilon)}(\sigma)\|_{HS}^{2}-\varepsilon^{-Q}\iint_{G\times
M}|\kappa_{\dot{x}}(z)|^{2}dzd\dot{x}\right|$ $\displaystyle\leq
C\varepsilon^{N}$ $\displaystyle\mbox{and}\quad\left|{\rm Tr}\left({\rm
Op}^{(\varepsilon)}(\sigma)\right)-\varepsilon^{-Q}\int_{M}\kappa_{\dot{x}}(0)d\dot{x}\right|$
$\displaystyle\leq C\varepsilon^{N}.$
###### Proof of Lemma 4.6.
Lemma 4.1 implies that ${\rm Op}^{(\varepsilon)}(\sigma)$ is Hilbert-Schmidt
and trace-class, with Hilbert-Schmidt norm and trace that can be expressed in
terms of their smooth kernels.
First, let us consider the Hilbert-Schmidt norm for $\varepsilon=1$. By Lemma
4.1, we have
$\displaystyle\|{\rm Op}(\sigma)\|_{HS}^{2}$ $\displaystyle=\iint_{M\times
M}\big{|}\sum_{\gamma\in\Gamma}\kappa_{\dot{x}}(y^{-1}\gamma
x)\big{|}^{2}d\dot{y}d\dot{x}$ $\displaystyle=\iint_{M\times
M}\sum_{\gamma_{1},\gamma_{2}\in\Gamma}\kappa_{\dot{x}}((y^{-1}\gamma_{1})x)\overline{\kappa_{\dot{x}}((y^{-1}\gamma_{1})\gamma_{2}x)}d\dot{y}d\dot{x}$
$\displaystyle=\iint_{G\times
M}\sum_{\gamma_{2}\in\Gamma}\kappa_{\dot{x}}(y^{-1}x)\overline{\kappa_{\dot{x}}(y^{-1}\gamma_{2}x)}dyd\dot{x},$
by (2.1). Using the change of variable $z=y^{-1}x$, we have obtained
$\displaystyle\|{\rm Op}(\sigma)\|_{HS}^{2}$ $\displaystyle=\iint_{G\times
M}\sum_{\gamma_{2}\in\Gamma}\kappa_{\dot{x}}(z)\overline{\kappa_{\dot{x}}(zx^{-1}\gamma_{2}x)}dzd\dot{x}$
$\displaystyle=\sum_{\gamma_{2}\in\Gamma}\iint_{G\times
R_{0}}\kappa_{\Gamma\Theta(t)}(z)\overline{\kappa_{\Gamma\Theta(t)}(z\Theta(t)^{-1}\gamma_{2}\Theta(t))}dzdt$
having used the fundamental domain $R_{0}$ described in Proposition 2.4. Note
that the term corresponding to $\gamma_{2}=0$ in this last sum is equal to
$\iint_{G\times M}|\kappa_{\dot{x}}(z)|^{2}dzd\dot{x}$.
For every $\varepsilon\in(0,1]$, the computations above yield
$\|{\rm
Op}^{(\varepsilon)}(\sigma)\|_{HS}^{2}=\sum_{\gamma\in\Gamma}\rho_{\varepsilon,\gamma},$
where
$\displaystyle\rho_{\varepsilon,\gamma}$ $\displaystyle:=\iint_{G\times
R_{0}}\kappa^{(\varepsilon)}_{\Gamma\Theta(t)}(z)\overline{\kappa^{(\varepsilon)}_{\Gamma\Theta(t)}(z\Theta(t)^{-1}\gamma\Theta(t))}dzdt$
$\displaystyle=\varepsilon^{-Q}\iint_{G\times
R_{0}}\kappa_{\Gamma\Theta(t)}(z)\overline{\kappa_{\Gamma\Theta(t)}(z\,\varepsilon^{-1}\cdot(\Theta(t)^{-1}\gamma\Theta(t)))}dzdt,$
after the change of variable $z\mapsto\varepsilon\cdot z$. For $\gamma=0$, we
see
$\rho_{\varepsilon,0}=\iint_{G\times
M}|\kappa^{(\varepsilon)}_{\dot{x}}(z)|^{2}dzd\dot{x}=\varepsilon^{-Q}\iint_{G\times
M}|\kappa_{\dot{x}}(z)|^{2}dzd\dot{x}.$
We now consider $\gamma\in\Gamma\setminus\\{0\\}$. We fix a homogeneous quasi-
norm $\|\cdot\|$ on $G$ (see Section 3.1.1). In order to avoid introducing
unnecessary constants, we may assume that it yields a distance on
$G\sim\mathbb{R}^{n}$. By assumption on the kernel $\kappa_{\dot{x}}(z)$
associated with $\sigma$, we have
$\forall N\in\mathbb{N}\quad\exists C_{N}\quad\forall\dot{x}\in M,\ z\in
G\quad|\kappa_{\dot{x}}(z)|\leq C_{N}(1+\|z\|)^{-N}.$
These estimates imply for any $N_{1},N\in\mathbb{N}$:
$\displaystyle|\rho_{\varepsilon,\gamma}|$ $\displaystyle\leq
C_{N_{1}}C_{N}\iint_{G\times
R_{0}}(1+\|z\|)^{-N_{1}}(1+\|z\,\varepsilon^{-1}\cdot(\Theta(t)^{-1}\gamma\Theta(t))\|)^{-N}dzdt$
$\displaystyle\leq
C_{N_{1}}C_{N}\int_{G}(1+\|z\|)^{-N_{1}+N}dz\int_{R_{0}}(1+\varepsilon^{-1}\|\Theta(t)^{-1}\gamma\Theta(t)\|)^{-N}dt.$
Let $N\in\mathbb{N}$ be fixed. We choose $N_{1}=N+Q+1$ so that the last
integral over $G$ is finite, see (3.1). For the integral over $R_{0}$, we
observe that the function $t\mapsto\|\Theta(t)^{-1}\gamma\Theta(t)\|$ is
continuous from $\mathbb{R}^{n}$ to $[0,+\infty)$ and never vanishes by
Proposition 2.4 and the properties of the quasi-norms; let $c_{\gamma}>0$
denote its infimum. We deduce:
$\int_{R_{0}}(1+\varepsilon^{-1}\|\Theta(t)^{-1}\gamma\Theta(t)\|)^{-N}dt\leq(1+c_{\gamma}\varepsilon^{-1})^{-N}\leq
c_{\gamma}^{-N}\varepsilon^{N}.$
We will use this for the finite number of $\gamma\in\Gamma\setminus\\{0\\}$
such that $\|\gamma\|\leq 4\max_{t\in\bar{R}_{0}}\|\Theta(t)\|$. For the
others, the triangle inequality and
$\|\gamma\|>4\max_{t\in\bar{R}_{0}}\|\Theta(t)\|$ imply that
$\|\Theta(t)^{-1}\gamma\Theta(t)\|\geq\|\gamma\|/2$, so
$\int_{R_{0}}(1+\varepsilon^{-1}\|\Theta(t)^{-1}\gamma\Theta(t)\|)^{-N}dt\leq(1+\varepsilon^{-1}\|\gamma\|/2)^{-N}\leq
2^{N}\varepsilon^{N}\|\gamma\|^{-N}.$
Summing over $\gamma\in\Gamma\setminus\\{0\\}$, we obtain the estimate
$\sum_{\gamma\in\Gamma\setminus\\{0\\}}|\rho_{\varepsilon,\gamma}|\leq\varepsilon^{N}\left(\sum_{0<\|\gamma\|\leq
4\max_{t\in\bar{R}_{0}}\|\Theta(t)\|}c_{\gamma}^{-N}+2^{N}\sum_{\|\gamma\|>0}\|\gamma\|^{-N}\right).$
By Lemma 3.3, the very last sum is finite for $N$ large, $N>n\nu_{n}$ being
sufficient. Hence the right-hand side above is $\lesssim\varepsilon^{N}$, and
this shows the estimate for the Hilbert-Schmidt norm in the statement.
The proof for the trace is similar. Indeed, we have
${\rm Tr}({\rm
Op}^{(\varepsilon)}(\sigma))=\int_{M}\sum_{\gamma\in\Gamma}\kappa^{(\varepsilon)}_{\dot{x}}(x^{-1}\gamma
x)d\dot{x}=\int_{M}\kappa^{(\varepsilon)}_{\dot{x}}(0)d\dot{x}+\rho_{\varepsilon},$
where
$|\rho_{\varepsilon}|\leq\sum_{\gamma\in\Gamma\setminus\\{0\\}}\int_{R_{0}}\left|\kappa^{(\varepsilon)}_{\Gamma\Theta(t)}(\Theta(t)^{-1}\gamma\Theta(t))\right|dt.$
Proceeding as above, the right-hand side is $O(\varepsilon^{N})$ for any
$N\in\mathbb{N}$. This concludes the proof of Lemma 4.6. ∎
#### 4.2.5. The Hilbert space $L^{2}(M\times{\widehat{G}})$
We open a brief parenthesis devoted to the tensor product of the Hilbert
spaces $L^{2}(M)$ and $L^{2}({\widehat{G}})$ (see Section 3.3 for the latter):
$L^{2}(M\times{\widehat{G}}):=\overline{L^{2}(M)\otimes
L^{2}({\widehat{G}})}.$
We may identify $L^{2}(M\times{\widehat{G}})$ with the space of measurable
fields of Hilbert-Schmidt operators $\sigma=\\{\sigma(\dot{x},\pi)\ :\
(\dot{x},\pi)\in M\times{\widehat{G}}\\}$ such that
$\|\sigma\|_{L^{2}(M\times{\widehat{G}})}^{2}:=\iint_{M\times{\widehat{G}}}\|\sigma(\dot{x},\pi)\|_{HS(\mathcal{H}_{\pi})}^{2}d\dot{x}d\mu(\pi)<\infty.$
Here $\mu$ is the Plancherel measure on ${\widehat{G}}$, see Section 3.3. The
group Fourier transform yields an isomorphism between the Hilbert spaces
$L^{2}(M\times{\widehat{G}})$ and $L^{2}(M\times G)$, and
$\mathcal{F}_{G}^{-1}\sigma(\dot{x},\cdot)=\kappa_{\dot{x}}$ will still be
called the associated kernel of $\sigma$. Naturally $\mathcal{A}_{0}\subset
L^{2}(M\times{\widehat{G}})$.
Note that the first formula in Lemma 4.6 may be written as
$\forall\sigma\in\mathcal{A}_{0}\qquad\|{\rm
Op}^{(\varepsilon)}(\sigma)\|_{HS}^{2}=\varepsilon^{-Q}\|\sigma\|_{L^{2}(M\times{\widehat{G}})}^{2}\
+\ O(\varepsilon)^{N},$
while by the Fourier inversion formula (3.4), the second formula is
$\forall\sigma\in\mathcal{A}_{0}\qquad{\rm Tr}\left({\rm
Op}^{(\varepsilon)}(\sigma)\right)=\varepsilon^{-Q}\iint_{M\times{\widehat{G}}}{\rm
Tr}\left(\sigma(\dot{x},\pi)\right)d\dot{x}d\mu(\pi)\ +\ O(\varepsilon)^{N}.$
### 4.3. Properties of positive Rockland operators on $G$ and on $M$
This section is devoted to the general properties of positive Rockland
operators. Many of these properties, for instance regarding self-adjointness
and heat kernels, are well-known for general sub-Laplacians on smooth
manifolds [25, p. 261-262], and some of them are known for Rockland operators
on compact manifolds from the recent paper [8].
Let $\mathcal{R}$ be a positive Rockland operator on $G$. The operator
$\mathcal{R}_{M}$ it induces on $M$ is a smooth differential operator which is
positive and essentially self-adjoint on $L^{2}(M)$, see Section 2.6. We will
keep the same notation for $\mathcal{R}_{M}$ and for its self-adjoint
extension. The properties of the functional calculus for $\mathcal{R}$ imply:
###### Lemma 4.7.
Let $\mathcal{R}$ be a positive Rockland operator on $G$. Let
$\psi\in\mathcal{S}(\mathbb{R})$.
1. (1)
The operator $\psi(\mathcal{R}_{M})$ defined as a bounded spectral multiplier
on $L^{2}(M)$ coincides with the operator ${\rm Op}(\sigma)$ on
$C^{\infty}(M)$, with symbol $\sigma(\pi):=\psi(\widehat{\mathcal{R}}(\pi))$
in $\mathcal{A}_{0}$ independent of $\dot{x}\in M$. The associated kernel is
$\kappa:=\psi(\mathcal{R})\delta_{0}\in\mathcal{S}(G)$.
2. (2)
For every $\varepsilon\in(0,1]$, we have
$\psi(\varepsilon^{\nu}\mathcal{R}_{M})={\rm Op}^{(\varepsilon)}(\sigma)$ on
$C^{\infty}(M)$, where $\nu$ is the degree of homogeneity of $\mathcal{R}$.
3. (3)
The integral kernel of $\psi(\mathcal{R}_{M})$ is a smooth function on
$M\times M$ given by
$K(\dot{x},\dot{y})=\sum_{\gamma\in\Gamma}\kappa(y^{-1}\gamma x).$
The operator $\psi(\mathcal{R}_{M})$ is Hilbert-Schmidt on $L^{2}(M)$.
###### Proof.
We have $\psi(\mathcal{R})={\rm Op}_{G}(\sigma)$ with symbol
$\sigma(\pi)=\psi(\widehat{\mathcal{R}}(\pi))$ independent of $\dot{x}\in M$,
see [11, 10, 14]. Its associated kernel
$\kappa:=\psi(\mathcal{R})\delta_{0}$ is Schwartz by Hulanicki’s theorem
(Theorem 3.5). Hence $\sigma\in\mathcal{A}_{0}$. This shows Part (1). The
homogeneity properties of $\mathcal{R}$ and its functional calculus show Part
(2). Part (3) follows from Lemma 4.1. ∎
By Lemma 4.7 applied to $\psi(\lambda)=e^{-t\lambda}$, the heat operators
$e^{-t\mathcal{R}_{M}}$, $t>0$, admit the following smooth kernels $K_{t}$ on
$M\times M$:
$K_{t}(\dot{x},\dot{y})=\sum_{\gamma\in\Gamma}p_{t}(y^{-1}\gamma
x),\quad\dot{x},\dot{y}\in M,$
where $p_{t}=e^{-t\mathcal{R}}\delta_{0}$, $t>0$, are the heat kernels for
$\mathcal{R}$.
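For $G=\mathbb{R}$ and $\Gamma=\mathbb{Z}$, the periodization formula above is the classical Jacobi theta relation: the spectral heat trace $\sum_{k}e^{-4\pi^{2}k^{2}t}$ on $\mathbb{R}/\mathbb{Z}$ equals $\sum_{\gamma}p_{t}(\gamma)$, whose $\gamma=0$ term gives the leading singularity $(4\pi t)^{-1/2}$, in line with Lemma 4.6. A numeric check (illustrative only):

```python
import math

def p(t, x):  # Euclidean heat kernel on R (n = 1)
    return (4 * math.pi * t) ** -0.5 * math.exp(-x * x / (4 * t))

t = 0.005
# spectral side: eigenvalues of -d²/dx² on R/Z are (2πk)², k ∈ Z
spectral = sum(math.exp(-4 * math.pi ** 2 * k * k * t) for k in range(-200, 201))
# geometric side: K_t(ẋ, ẋ) = Σ_γ p_t(γ), independent of ẋ here
periodized = sum(p(t, float(g)) for g in range(-200, 201))
assert abs(spectral - periodized) < 1e-12
# the γ = 0 term dominates: trace = (4πt)^{-1/2} + O(t^∞)
assert abs(periodized - p(t, 0.0)) < 1e-10
```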
The following statement is classical for sub-Laplacians on compact manifolds.
We provide here a self-contained proof for $\mathcal{R}_{M}$:
###### Proposition 4.8.
1. (1)
The spectrum ${\rm Sp}(\mathcal{R}_{M})$ of $\mathcal{R}_{M}$ is a discrete
and unbounded subset of $[0,+\infty)$. Each eigenspace of $\mathcal{R}_{M}$
has finite dimension.
2. (2)
The constant functions on $M$ form the 0-eigenspace of $\mathcal{R}_{M}$.
Note that a consequence of Part (1) is that the resolvent operators
$(\mathcal{R}_{M}-z)^{-1}$, $z\in\mathbb{C}\setminus{\rm Sp}(\mathcal{R}_{M})$,
are compact on $L^{2}(M)$.
###### Proof of Proposition 4.8 (1).
The heat operator $e^{-t\mathcal{R}_{M}}$ is positive and Hilbert-Schmidt on
$L^{2}(M)$, so its spectrum is a discrete subset of $[0,\infty)$ and the only
possible accumulation point is zero. Furthermore, the eigenspaces of
$e^{-t\mathcal{R}_{M}}$ for positive eigenvalues have finite dimensions. Let us
show that the kernel of each $e^{-t\mathcal{R}_{M}}$ is trivial. If
$e^{-t\mathcal{R}_{M}}f=0$ for some $f\in L^{2}(M)$ then
$e^{-t^{\prime}\mathcal{R}_{M}}f=0$ for $t^{\prime}=t,t/2,t/4,\ldots$ since
$\|e^{-(t^{\prime}/2)\mathcal{R}_{M}}f\|_{L^{2}(M)}^{2}=(e^{-t^{\prime}\mathcal{R}_{M}}f,f)=0$.
By Section 2.4, $f_{G}$ is a periodic tempered distribution in
$L^{2}_{loc}(G)$ satisfying $e^{-t^{\prime}\mathcal{R}}f_{G}=0$ for
$t^{\prime}=t,t/2,t/4,\ldots$, but this implies $f_{G}=0$ since
$e^{-s\mathcal{R}}$ converges to the identity on $\mathcal{S}^{\prime}(G)$ as
$s\to 0$ [14, Sections 3.1.10 and 4.2.2]. So $f=0$. We have obtained that the
kernel of each $e^{-t\mathcal{R}_{M}}$ is trivial and thus that their spectrum
is included in $(0,+\infty)$.
The spectrum of $\mathcal{R}_{M}$ is the discrete set ${\rm
Sp}(\mathcal{R}_{M})=-\ln{\rm Sp}(e^{-\mathcal{R}_{M}})\subset\mathbb{R}$. It is
unbounded since $\mathcal{R}_{M}$ is a (non-constant) differential operator.
It is included in $[0,+\infty)$ as $\mathcal{R}_{M}$ is positive. The
eigenspaces for $\mathcal{R}_{M}$ and for its heat kernels are in one-to-one
correspondence, and therefore have finite dimensions. ∎
###### Proof of Proposition 4.8 (2).
If a function is constant on $M$, then it is a 0-eigenfunction. Let us prove
the converse. Let $f$ be a 0-eigenfunction, i.e. $f\in L^{2}(M)$ and
$\mathcal{R}_{M}f=0$. By Section 2.4, $f_{G}$ is a periodic tempered
distribution in $L^{2}_{loc}(G)$ satisfying $\mathcal{R}f_{G}=0$. By the
Liouville theorem for homogeneous groups [14, Theorem 3.2.45] due to Geller
[17], $f_{G}$ is a polynomial on $G\sim\mathbb{R}^{n}$. As it is periodic, it
must be a constant. Hence $f$ is a constant. ∎
A consequence of Sections 2.6 and 3.4.1 is that the operator $\mathcal{R}_{M}$
is hypoelliptic on the manifold $M$. The same argument shows that the operator
$\mathcal{R}_{M}-E{\rm I}$ is hypoelliptic for every constant
$E\in\mathbb{C}$, and this implies:
###### Lemma 4.9.
The eigenfunctions of $\mathcal{R}_{M}$ are smooth on $M$.
### 4.4. Weyl laws for $\mathcal{R}_{M}$
In this section, we consider a positive Rockland operator $\mathcal{R}$ on a
graded Lie group $G$ and its corresponding operator $\mathcal{R}_{M}$ on the
nil-manifold $M=\Gamma\backslash G$ where $\Gamma$ is a discrete co-compact
subgroup of $G$. By Proposition 4.8, we may order the eigenvalues of
$\mathcal{R}_{M}$ (counted with multiplicity) into the sequence
$0=\lambda_{0}<\lambda_{1}\leq\lambda_{2}\leq\ldots\leq\lambda_{j}\longrightarrow+\infty\quad\mbox{as}\quad
j\to+\infty.$
We denote its spectral counting function by
$N(\Lambda):=\left|\left\\{j\in\mathbb{N}_{0},\;\;\lambda_{j}\leq\Lambda\right\\}\right|.$
This section is devoted to Weyl laws for $\mathcal{R}_{M}$, starting with the
following statement:
###### Theorem 4.10 (Weyl law).
We have
$\lim_{\Lambda\to+\infty}\Lambda^{-Q/\nu}N(\Lambda)={\rm Vol}(M)\,c_{0}\nu/Q$
where $Q$ is the homogeneous dimension of $G$ and $c_{0}$ is the constant from
Theorem 3.6, see also (3.6).
Let us comment on Theorem 4.10 before presenting its proof:
* •
In the particular case of the canonical Laplacian
$\Delta_{\mathbb{T}^{n}}=-\sum_{j}\partial_{j}^{2}$ on the torus
$\mathbb{T}^{n}=\mathbb{R}^{n}/\mathbb{Z}^{n}$, we recover the well known
result since $\nu=2$, $Q=n$, ${\rm Vol}(M)=1$ and
$c_{0}(\Delta_{\mathbb{R}^{n}})=(\Gamma(n/2)2^{n}\pi^{n/2})^{-1}$ (see Remark
3.7).
* •
The Weyl law for Rockland operators has been recently obtained in greater
generality by Dave and Haller [8, Corollary 3]. Their proof is different from
the one presented here: they develop heat kernel expansions on filtered
manifolds, and the constant in their Weyl law is characterised as a
coefficient of these expansions.
* •
Let us consider the case of the canonical Heisenberg nil-manifold, that is,
the quotient $M=\Gamma\backslash\mathbb{H}_{n}$ of the Heisenberg group
$\mathbb{H}_{n}$ by the canonical lattice
$\Gamma=\mathbb{Z}^{n}\times\mathbb{Z}^{n}\times\frac{1}{2}\mathbb{Z}$. The
spectrum of the canonical sub-Laplacian $\mathcal{L}_{M}$ is known [9, 15,
27]: it consists of the eigenvalues $4\pi^{2}|m|^{2}$ where $m$ runs
over $\mathbb{Z}^{2n}$, and of the eigenvalues $4(2a+n)\pi|k|$ with
multiplicity $(2|k|)^{n}\frac{(n+a-1)!}{(n-1)!a!}$ where $a$ and $k$ run over
$\mathbb{N}$ and $\mathbb{Z}\setminus\\{0\\}$ respectively.
Since ${\rm Vol}(M)=1/2$, the Weyl law for $\mathcal{L}_{M}$ gives as
$\Lambda\to+\infty$
$c_{1}\Lambda^{n+1}\sim\sum_{m\in\mathbb{Z}^{2n}:4\pi^{2}|m|^{2}\leq\Lambda}1+\sum_{\begin{subarray}{c}k\in\mathbb{Z}\setminus\\{0\\},\,a\in\mathbb{N}\\\
4(2a+n)\pi|k|<\Lambda\end{subarray}}(2|k|)^{n}\frac{(n+a-1)!}{(n-1)!a!}$
where $c_{1}=c_{0}(\mathcal{L}_{\mathbb{H}_{n}})/(2n+2)$, and the constant
$c_{0}(\mathcal{L}_{\mathbb{H}_{n}})$ was explicitly given in Remark 3.7. The
Weyl law for the torus implies that the first sum is $\sim
c^{\prime}_{1}\Lambda^{n}$. Hence we have obtained:
$c_{1}\Lambda^{n+1}\sim\sum_{\begin{subarray}{c}k\in\mathbb{Z}\setminus\\{0\\},\,a\in\mathbb{N}\\\
4(2a+n)\pi|k|<\Lambda\end{subarray}}(2|k|)^{n}\frac{(n+a-1)!}{(n-1)!a!}.$
* •
If $\mathcal{R}$ is a positive Rockland operator, then any positive integer
power of $\mathcal{R}$ is also a positive Rockland operator and we can check using the
property (3.8) of the constant $c_{0}$ that the Weyl law above for
$\mathcal{R}$ implies the Weyl law for $\mathcal{R}^{\ell}$ for any
$\ell\in\mathbb{N}$.
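The first and third bullet points lend themselves to quick numerical sanity checks. The sketch below is only an illustration, not part of any proof; the cut-offs $R$ and $\Lambda$ are arbitrary choices. It counts the torus eigenvalues $4\pi^{2}|m|^{2}$ on $\mathbb{T}^{2}$ directly and compares $N(\Lambda)/\Lambda$ with ${\rm Vol}(M)c_{0}\nu/Q=1/(4\pi)$, then checks the growth exponent $n+1=2$ of the Heisenberg sum for $n=1$ by doubling $\Lambda$.

```python
import math

# Check 1: torus T^2 = R^2/Z^2 (nu = 2, Q = n = 2, Vol(M) = 1,
# c0 = 1/(4*pi)): N(Lam)/Lam should approach c0*nu/Q = 1/(4*pi).
R = 400                                  # eigenvalues 4*pi^2*|m|^2 with |m| <= R
Lam = 4 * math.pi**2 * R**2
N = sum(2 * math.isqrt(R * R - m1 * m1) + 1 for m1 in range(-R, R + 1))
assert abs(N / Lam - 1 / (4 * math.pi)) < 1e-3

# Check 2: growth exponent n + 1 = 2 of the Heisenberg sum for n = 1,
# where the multiplicity (2|k|)^n is 2|k|; doubling Lam should multiply
# the sum by roughly 2^(n+1) = 4.
def S(lam):
    total, k = 0, 1
    while 4 * math.pi * k < lam:         # a = 0 already requires this
        a = 0
        while 4 * (2 * a + 1) * math.pi * k < lam:
            total += 2 * k
            a += 1
        k += 1
    return 2 * total                     # account for k and -k

ratio = S(40000.0) / S(20000.0)
assert 3.6 < ratio < 4.4
```

Both checks only test asymptotics, so the tolerances are deliberately loose.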
Theorem 4.10 follows from taking $\varepsilon^{\nu}=\Lambda^{-1}$ and a
convenient choice of functions $\psi\in\mathcal{D}(\mathbb{R})$ approximating
the indicatrix of $[0,1]$ in the following statement:
###### Lemma 4.11.
For any $\psi\in\mathcal{S}(\mathbb{R})$ and $N\in\mathbb{N}$, there exists a
constant $C=C_{N,\psi,G,\Gamma,\mathcal{R}}>0$ such that we have for every
$\varepsilon\in(0,1]$:
$\left|\sum_{j=0}^{\infty}|\psi|^{2}(\varepsilon^{\nu}\lambda_{j})-\varepsilon^{-Q}{\rm
Vol}(M)\,c_{0}\int_{0}^{\infty}|\psi(\lambda)|^{2}\lambda^{\frac{Q}{\nu}}\frac{d\lambda}{\lambda}\right|\leq
C\varepsilon^{N}.$
###### Proof of Lemma 4.11.
By Lemmata 4.7 and 4.6 (see also Section 4.2.5), we have
$\|\psi(\varepsilon^{\nu}\mathcal{R}_{M})\|_{HS(M)}^{2}=\varepsilon^{-Q}\|\kappa\|_{L^{2}(M\times
G)}^{2}+O(\varepsilon)^{N}.$
We can compute the $L^{2}$-norm with Theorem 3.6:
$\|\kappa\|_{L^{2}(M\times G)}^{2}={\rm
Vol}(M)\,\|\kappa\|_{L^{2}(G)}^{2}={\rm
Vol}(M)\,c_{0}\int_{0}^{\infty}|\psi(\lambda)|^{2}\lambda^{\frac{Q}{\nu}}\frac{d\lambda}{\lambda}.$
Now by functional calculus, we also have:
$\|\psi(\varepsilon^{\nu}\mathcal{R}_{M})\|_{HS(M)}^{2}=\sum_{j=0}^{\infty}|\psi|^{2}(\varepsilon^{\nu}\lambda_{j}).$
The statement follows. ∎
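In the simplest possible setting, Lemma 4.11 can be observed numerically. The sketch below is an illustration under toy choices, not the general proof: it takes $G=\mathbb{R}$, $M=\mathbb{T}^{1}=\mathbb{R}/\mathbb{Z}$, $\mathcal{R}=-d^{2}/dx^{2}$ (so $\nu=2$, $Q=1$, ${\rm Vol}(M)=1$ and $c_{0}=1/(2\pi)$ by the formula of Remark 3.7 with $n=1$), the eigenvalues $\lambda_{m}=4\pi^{2}m^{2}$, and the arbitrary choice $\psi(\lambda)=e^{-\lambda^{2}}$, and compares the two sides of the lemma.

```python
import math

# Toy case: G = R, M = T^1, R = -d^2/dx^2, nu = 2, Q = 1, Vol(M) = 1,
# c0 = 1/(2*pi); eigenvalues lambda_m = 4*pi^2*m^2; psi(lam) = exp(-lam^2).
psi2 = lambda lam: math.exp(-2.0 * lam * lam)      # |psi|^2

eps = 1e-3
lhs = eps * sum(psi2(eps**2 * 4 * math.pi**2 * m * m)
                for m in range(-2000, 2001))

# c0 * int_0^oo |psi|^2(lam) lam^{Q/nu} dlam/lam; substituting lam = u^2
# turns it into (1/pi) * int_0^oo exp(-2 u^4) du, here a Riemann sum.
du = 1e-4
rhs = (1 / math.pi) * du * sum(math.exp(-2.0 * (j * du)**4)
                               for j in range(200000))

assert abs(lhs - rhs) / rhs < 1e-2
```

The left-hand side is $\varepsilon^{Q}\sum_{j}|\psi|^{2}(\varepsilon^{\nu}\lambda_{j})$ and the agreement to within a percent at $\varepsilon=10^{-3}$ is consistent with the $O(\varepsilon^{N})$ remainder.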
We finish this section with the following generalised Weyl law:
###### Theorem 4.12 (Generalised Weyl law).
Let $0\leq a<b$. Denoting the semi-classical counting function for $[a,b]$ by
$N_{[a,b]}(\Lambda):=\left|\left\\{j\in\mathbb{N}_{0}\ :\ \Lambda
a\leq\lambda_{j}\leq\Lambda b\right\\}\right|,$
we have as $\Lambda\to+\infty$
$N_{[a,b]}(\Lambda)\sim c\Lambda^{Q/\nu},$
where the (positive, finite) constant $c$ is
$c={\rm Vol}(M)\,\frac{c_{0}\nu}{Q}(b^{\frac{Q}{\nu}}-a^{\frac{Q}{\nu}})={\rm
Vol}(M)\int_{{\widehat{G}}}{\rm
Tr}\left(1_{[a,b]}(\widehat{\mathcal{R}}(\pi))\right)d\mu(\pi).$
Furthermore, if we consider an orthonormal basis
$(\varphi_{j})_{j\in\mathbb{N}_{0}}$ of the Hilbert space $L^{2}(M)$
consisting of eigenfunctions of $\mathcal{R}_{M}$:
$\mathcal{R}_{M}\varphi_{j}=\lambda_{j}\varphi_{j},\
j=0,1,2,\ldots\quad\mbox{with increasing eigenvalues}\
\lambda_{j}\leq\lambda_{j+1},$
then we have for any $\sigma\in\mathcal{A}_{0}$ as $\varepsilon\to 0$
$\frac{1}{N_{[a,b]}(\varepsilon^{-\nu})}\sum_{j\,:\,\lambda_{j}\in\varepsilon^{-\nu}[a,b]}\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},\varphi_{j}\right\rangle_{L^{2}(M)}\
\longrightarrow\ \iint_{M\times{\widehat{G}}}{\rm
Tr}\left(\sigma(\dot{x},\pi)1_{[a,b]}(\widehat{\mathcal{R}}(\pi))\right)d\dot{x}d\mu(\pi).$
This statement should be compared with its ‘commutative’ counterpart, for
instance in [34, Theorem 9.3].
###### Proof.
Taking smooth approximations of $1_{[a,b]}$ from below and above in Lemma 4.11
easily leads to the estimate for $N_{[a,b]}(\varepsilon^{-\nu})$, see also
(3.7). Let us prove the rest of the statement. First let us check that we can
define the following linear functional on $\mathcal{A}_{0}$:
$\ell(\sigma):=\iint_{M\times{\widehat{G}}}{\rm
Tr}\left(\sigma(\dot{x},\pi)1_{[a,b]}(\widehat{\mathcal{R}}(\pi))\right)d\dot{x}d\mu(\pi).$
As $1_{[a,b]}=1_{[a,b]}^{2}\in L^{2}(\lambda^{Q/\nu}d\lambda/\lambda)$, we
have
$\int_{{\widehat{G}}}{\rm
Tr}|1_{[a,b]}(\widehat{\mathcal{R}})|d\mu=\int_{{\widehat{G}}}\|1_{[a,b]}(\widehat{\mathcal{R}})\|_{HS}^{2}d\mu=c_{0}\|1_{[a,b]}\|_{L^{2}((0,\infty),\lambda^{Q/\nu}d\lambda/\lambda)}^{2}<\infty,$
by the Plancherel formula (3.3) and Theorem 3.6. Thus the following quantity
is finite
$\iint_{M\times{\widehat{G}}}{\rm
Tr}\left|\sigma(\dot{x},\pi)1_{[a,b]}(\widehat{\mathcal{R}}(\pi))\right|d\dot{x}d\mu(\pi)\leq\sup_{(\dot{x},\pi)\in
M\times{\widehat{G}}}\|\sigma(\dot{x},\pi)\|_{\mathscr{L}(\mathcal{H}_{\pi})}\int_{{\widehat{G}}}{\rm
Tr}|1_{[a,b]}(\widehat{\mathcal{R}})|d\mu<\infty$
since we have (with $\kappa_{\dot{x}}(y)$ denoting the kernel associated to
$\sigma$)
$\|\sigma\|_{L^{\infty}(M\times{\widehat{G}})}:=\sup_{(\dot{x},\pi)\in
M\times{\widehat{G}}}\|\sigma(\dot{x},\pi)\|_{\mathscr{L}(\mathcal{H}_{\pi})}\leq\sup_{\dot{x}}\int_{G}|\kappa_{\dot{x}}(y)|dy<\infty.$
We observe that if $\psi\in\mathcal{D}(\mathbb{R})$ satisfies $\psi
1_{[a,b]}=1_{[a,b]}$, then
$\ell(\psi(\varepsilon^{\nu}\widehat{\mathcal{R}}))={\rm
Vol}(M)\int_{{\widehat{G}}}{\rm
Tr}\left(\psi(\varepsilon^{\nu}\widehat{\mathcal{R}}(\pi))1_{[a,b]}(\widehat{\mathcal{R}}(\pi))\right)d\mu(\pi).$
Hence, replacing $\sigma$ with $\sigma-{\rm
Vol}(M)^{-1}\ell(\sigma)\psi(\varepsilon^{\nu}\widehat{\mathcal{R}})$, we may
assume that $\sigma\in\mathcal{A}_{0}$ with $\ell(\sigma)=0$.
We are left with showing the convergence for any $\sigma\in\mathcal{A}_{0}$
satisfying $\ell(\sigma)=0$
$\frac{S_{\varepsilon}(\sigma)}{N_{[a,b]}(\varepsilon^{-\nu})}\longrightarrow_{\varepsilon\to
0}0,\quad\mbox{where}\quad
S_{\varepsilon}(\sigma):=\sum_{\lambda_{j}\in\varepsilon^{-\nu}[a,b]}\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},\varphi_{j}\right\rangle;$
in this proof, we use the short hand $\langle.,.\rangle$ for the scalar
product on $L^{2}(M)$. By linearity and since $\sigma=\Re\sigma+i\Im\sigma$,
we may assume that $\sigma=\sigma^{*}$. We observe that, by Remark 4.5,
$S_{\varepsilon}(\sigma)=S_{\varepsilon}(\sigma\psi_{1}(\widehat{\mathcal{R}}))$
where $\psi_{1}\in\mathcal{D}(\mathbb{R})$ is a function satisfying
$\psi_{1}1_{[a,b]}=1_{[a,b]}$, and that for any $\psi\in\mathcal{D}(\mathbb{R})$
supported in $[a,b]$
$S_{\varepsilon}(\psi(\varepsilon^{\nu}\widehat{\mathcal{R}})\sigma)={\rm
Tr}\left({\rm
Op}^{(\varepsilon)}(\sigma\psi(\widehat{\mathcal{R}}))\right)=\varepsilon^{-Q}\iint_{M\times{\widehat{G}}}{\rm
Tr}\left(\sigma\psi(\widehat{\mathcal{R}})\right)d\dot{x}d\mu+O(\varepsilon)^{N},$
by Lemma 4.6, see also Section 4.2.5. By linearity, we have
$|S_{\varepsilon}(\sigma)|\leq|S_{\varepsilon}(\sigma\psi(\widehat{\mathcal{R}}))|+|S_{\varepsilon}(\sigma(\psi-\psi_{1})(\widehat{\mathcal{R}}))|.$
For the second term, we observe that
$\displaystyle\overline{S_{\varepsilon}(\sigma(\psi-\psi_{1})(\widehat{\mathcal{R}}))}$
$\displaystyle=\sum_{\lambda_{j}\in\varepsilon^{-\nu}[a,b]}\left\langle({\rm
Op}^{(\varepsilon)}(\sigma))^{*}\varphi_{j},(\psi-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\varphi_{j}\right\rangle$
$\displaystyle=\sum_{\lambda_{j}\in\varepsilon^{-\nu}[a,b]}\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},(\psi-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\varphi_{j}\right\rangle\quad+N(\varepsilon^{-\nu})O(\varepsilon),$
by the properties of the semiclassical calculus (see Proposition 4.4 (2)).
Proceeding as above, we see
$|S_{\varepsilon}(\sigma(\psi-\psi_{1})(\widehat{\mathcal{R}}))|\leq|S_{\varepsilon,0}|+|S_{\varepsilon,1}|+N(\varepsilon^{-\nu})O(\varepsilon),$
where for $\psi_{0}\in\mathcal{D}(\mathbb{R})$
$\displaystyle S_{\varepsilon,0}$
$\displaystyle:=\sum_{\lambda_{j}\in\varepsilon^{-\nu}[a,b]}\left\langle{\rm
Op}^{(\varepsilon)}(\sigma\psi_{0}(\widehat{\mathcal{R}}))\varphi_{j},(\psi-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\varphi_{j}\right\rangle,$
$\displaystyle S_{\varepsilon,1}$
$\displaystyle:=\sum_{\lambda_{j}\in\varepsilon^{-\nu}[a,b]}\left\langle{\rm
Op}^{(\varepsilon)}(\sigma(\psi_{0}-\psi_{1})(\widehat{\mathcal{R}}))\varphi_{j},(\psi-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\varphi_{j}\right\rangle.$
We assume that $\psi_{0}=0$ on the support of $\psi-\psi_{1}$. This implies
$S_{\varepsilon,0}=0$. For the second term, we see
$\displaystyle|S_{\varepsilon,1}|$ $\displaystyle\leq\sum_{j}\|{\rm
Op}^{(\varepsilon)}(\sigma)\
(\psi_{0}-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\varphi_{j}\|_{L^{2}(M)}\|(\psi-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\varphi_{j}\|_{L^{2}(M)}$
$\displaystyle\leq\|{\rm
Op}^{(\varepsilon)}(\sigma)\|_{\mathscr{L}(L^{2}(M))}\sum_{j}\|(\psi_{0}-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\varphi_{j}\|_{L^{2}(M)}\|(\psi-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\varphi_{j}\|_{L^{2}(M)}$
$\displaystyle\leq\|\sigma\|_{\mathcal{A}_{0}}\frac{1}{2}\left(\|(\psi_{0}-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\|_{HS(L^{2}(M))}^{2}+\|(\psi-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\|_{HS(L^{2}(M))}^{2}\right),$
by Proposition 4.2. By Lemma 4.6, see also Section 4.2.5, we have
$\|(\psi_{0}-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\|_{HS(L^{2}(M))}^{2}=\varepsilon^{-Q}\|(\psi_{0}-\psi_{1})(\widehat{\mathcal{R}})\|_{L^{2}({\widehat{G}})}^{2}+O(\varepsilon)^{N}$
while the Plancherel formula (3.3) and Theorem 3.6 yield:
$\|(\psi_{0}-\psi_{1})(\widehat{\mathcal{R}})\|_{L^{2}({\widehat{G}})}^{2}=c_{0}\|\psi_{0}-\psi_{1}\|_{L^{2}((0,\infty),\lambda^{Q/\nu}d\lambda/\lambda)}^{2}.$
We have a similar result for
$\|(\psi-\psi_{1})(\varepsilon^{\nu}\mathcal{R}_{M})\|_{HS(L^{2}(M))}^{2}$.
Collecting all the estimates above yields:
$\displaystyle\limsup_{\varepsilon\to
0}\frac{|S_{\varepsilon}(\sigma)|}{N_{[a,b]}(\varepsilon^{-\nu})}$
$\displaystyle\leq c\left|\iint_{M\times{\widehat{G}}}{\rm
Tr}\left(\sigma\psi(\widehat{\mathcal{R}})\right)d\dot{x}d\mu\right|+\frac{c_{0}}{2}\|\psi_{0}-\psi_{1}\|_{L^{2}((0,\infty),\lambda^{Q/\nu}d\lambda/\lambda)}^{2}$
$\displaystyle\qquad+\frac{c_{0}}{2}\|\psi-\psi_{1}\|_{L^{2}((0,\infty),\lambda^{Q/\nu}d\lambda/\lambda)}^{2},$
for any $\psi_{0},\psi,\psi_{1}\in\mathcal{D}(\mathbb{R})$, with
$\psi_{1}1_{[a,b]}=1_{[a,b]}$, ${{\rm supp}}\,\psi\subset[a,b]$ and
$\psi_{0}=0$ on ${{\rm supp}}\,(\psi-\psi_{1})$. For the first term on the
right-hand side, since $\ell(\sigma)=0$, we have
$\displaystyle\left|\iint_{M\times{\widehat{G}}}{\rm
Tr}\left(\sigma\psi(\widehat{\mathcal{R}})\right)d\dot{x}d\mu\right|$
$\displaystyle=\left|\iint_{M\times{\widehat{G}}}{\rm
Tr}\left(\sigma(\psi-1_{[a,b]})(\widehat{\mathcal{R}})\right)d\dot{x}d\mu\right|$
$\displaystyle\leq\|\sigma\|_{L^{\infty}(M\times{\widehat{G}})}{\rm
Vol}(M)\,\int_{{\widehat{G}}}{\rm
Tr}\left|(\psi-1_{[a,b]})(\widehat{\mathcal{R}})\right|d\mu.$
Furthermore, since $\widehat{\mathcal{R}}^{*}=\widehat{\mathcal{R}}$, the
Plancherel formula (3.3) and Theorem 3.6 yield
$\int_{{\widehat{G}}}{\rm
Tr}\left|(\psi-1_{[a,b]})(\widehat{\mathcal{R}})\right|d\mu=\int_{{\widehat{G}}}\left\|\sqrt{|\psi-1_{[a,b]}|}(\widehat{\mathcal{R}})\right\|_{HS}^{2}d\mu=c_{0}\|\sqrt{|\psi-1_{[a,b]}|}\|_{L^{2}((0,\infty),\lambda^{Q/\nu}d\lambda/\lambda)}^{2}.$
It is not difficult to construct suitable sequences of functions $\psi_{0}$,
$\psi$ and $\psi_{1}$ converging to $1_{[a,b]}$ in
$L^{2}((0,\infty),\lambda^{Q/\nu}d\lambda/\lambda)$ and such that
$\|\sqrt{|\psi-1_{[a,b]}|}\|_{L^{2}((0,\infty),\lambda^{Q/\nu}d\lambda/\lambda)}$
converges to 0. This shows that $\limsup_{\varepsilon\to
0}N_{[a,b]}(\varepsilon^{-\nu})^{-1}|S_{\varepsilon}(\sigma)|=0$ and concludes
the proof of Theorem 4.12. ∎
## 5\. Ergodicity for the symbols
In this section, we discuss the ergodicity of the symbols in Section 5.2. The
setting we consider from now until the end of the paper is the one of Theorem 1.1,
that is, a discrete co-compact subgroup $\Gamma$ of a stratified nilpotent Lie
group $G$, together with a basis $X_{1},\ldots,X_{n_{1}}$ of the first stratum
$\mathfrak{g}_{1}$ of its Lie algebra $\mathfrak{g}$. We denote the associated
sub-Laplacians by $\mathcal{L}=-X_{1}^{2}-\ldots-X_{n_{1}}^{2}$ on $G$ and by
$\mathcal{L}_{M}$ on $M=\Gamma\backslash G$ and keep the same notation for
their self-adjoint extensions.
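To fix ideas, the stratified structure behind this setting can be checked numerically in the first non-abelian example. The sketch below is purely illustrative: the coordinates $(x,y,t)$, the convention $X_{1}=\partial_{x}-\frac{y}{2}\partial_{t}$, $X_{2}=\partial_{y}+\frac{x}{2}\partial_{t}$ for a basis of the first stratum of the Heisenberg Lie algebra $\mathfrak{h}_{1}$, the test function and the test point are common but arbitrary choices, not data from the text; derivatives are taken by nested central differences.

```python
import math

h = 1e-3
def d(g, i):
    """Central difference of g: R^3 -> R in coordinate i."""
    def dg(p):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        return (g(q1) - g(q2)) / (2 * h)
    return dg

# One common model of H_1, points p = (x, y, t):
def X1(g):  # X1 = d/dx - (y/2) d/dt
    gx, gt = d(g, 0), d(g, 2)
    return lambda p: gx(p) - p[1] / 2 * gt(p)

def X2(g):  # X2 = d/dy + (x/2) d/dt
    gy, gt = d(g, 1), d(g, 2)
    return lambda p: gy(p) + p[0] / 2 * gt(p)

f = lambda p: math.sin(p[0]) * math.cos(p[1]) * math.exp(0.3 * p[2])
p0 = [0.7, -0.4, 0.2]

# The bracket [X1, X2] = d/dt generates the second stratum (step s = 2),
# so X1, X2 generate the whole Lie algebra, as used throughout Section 5.
comm = X1(X2(f))(p0) - X2(X1(f))(p0)
assert abs(comm - d(f, 2)(p0)) < 1e-4
```

The sub-Laplacian of the text is then $\mathcal{L}=-X_{1}^{2}-X_{2}^{2}$ in this model.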
### 5.1. The differential operator $\mathscr{E}$
We denote by $\mathscr{E}$ the operator on $M\times G$
$\mathscr{E}=\sum_{j=1}^{n_{1}}X_{M,j}X_{j}.$
Here $X_{M,j}$ denotes the differential operator on $M$ obtained by
periodisation of the vector field $X_{j}$ on $G$; see Section 2.6 for the
periodisation of left-invariant operators on $G$. Clearly, $\mathscr{E}$ is a
real smooth differential operator on $M\times G$. As the vector fields
$X_{j}$’s are left invariant, $\mathscr{E}$ is symmetric in $L^{2}(M\times
G)$. Furthermore, the ranges of $\mathscr{E}+i\text{\rm I}$ and
$\mathscr{E}-i\text{\rm I}$ are dense, so $\mathscr{E}$ is essentially self-
adjoint on $L^{2}(M\times G)$.
We are interested in its symbol
$\widehat{\mathscr{E}}=\sum_{j=1}^{n_{1}}X_{M,j}\pi(X_{j}),$
that is, in the operator $\widehat{\mathscr{E}}$ acting on $\mathcal{A}_{0}$
via
$\widehat{\mathscr{E}}\sigma=\mathcal{F}_{G}(\mathscr{E}\kappa_{\dot{x}}),$
for any $\sigma\in\mathcal{A}_{0}$ with associated kernel
$(\dot{x},y)\mapsto\kappa_{\dot{x}}(y)\in C^{\infty}(M:\mathcal{S}(G))$.
In Section 6, $\mathscr{E}$ and its symbol will play a pivotal role, and this
comes from the following computational remark mentioned in the introduction:
###### Lemma 5.1.
If $\sigma\in\mathcal{A}_{0}$, then
$[\mathcal{L}_{M},{\rm Op}^{(\varepsilon)}(\sigma)]={\rm
Op}^{(\varepsilon)}(\mathcal{L}_{M}\sigma)-2\varepsilon^{-1}{\rm
Op}^{(\varepsilon)}(\widehat{\mathscr{E}}\sigma)+\varepsilon^{-2}{\rm
Op}^{(\varepsilon)}\left([\widehat{\mathcal{L}},\sigma]\right),$
where
$\mathcal{L}_{M}\sigma(\dot{x},\pi)=\mathcal{L}_{M,\dot{x}}\sigma(\dot{x},\pi)$.
The symbols $\mathcal{L}_{M}\sigma$, $\widehat{\mathscr{E}}\sigma$ and
$[\widehat{\mathcal{L}},\sigma]$ are in $\mathcal{A}_{0}$.
We could deduce this computation from an extension of the symbolic calculus
stated for symbols in $\mathcal{A}_{0}$ in Proposition 4.2.3, since
$\varepsilon^{2}\mathcal{L}_{M}={\rm
Op}^{(\varepsilon)}(\widehat{\mathcal{L}})\quad\mbox{and}\quad\sum_{[\alpha]\leq
2}\varepsilon^{[\alpha]}(\Delta^{\alpha}\widehat{\mathcal{L}})\
X^{\alpha}=\sum_{j=1}^{n_{1}}-\pi(X_{j})^{2}-2\varepsilon\sum_{j=1}^{n_{1}}\pi(X_{j})X_{j}+\varepsilon^{2}\sum_{j=1}^{n_{1}}-X_{j}^{2}.$
But we choose to give a direct proof.
###### Proof of Lemma 5.1.
Let $\kappa_{x}$ be the kernel associated with $\sigma$. We have for any
$f\in\mathcal{D}(M)$
$\displaystyle\left(\mathcal{L}_{M}{\rm
Op}(\sigma)f\right)_{G}(x)=\mathcal{L}_{x}\left(f_{G}*\kappa_{x}(x)\right)$
$\displaystyle\quad=f_{G}*\mathcal{L}_{x_{1}=x}\kappa_{x_{1}}(x)+\mathcal{L}_{x_{2}=x}\left(f_{G}*\kappa_{x}(x_{2})\right)-2\sum_{j=1}^{n_{1}}X_{j,x_{1}=x}X_{j,x_{2}=x}f_{G}*\kappa_{x_{1}}(x_{2}).$
The result follows from
$\mathcal{L}_{x_{2}=x}\left(f_{G}*\kappa_{x}(x_{2})\right)=f_{G}*(\mathcal{L}\kappa_{x})(x)$.
∎
In the next section, we will need the following technical properties regarding
the commutators
${\rm ad}(\mathscr{E})B=[\mathscr{E},B]=\mathscr{E}B-B\mathscr{E},$
for operators $B$ given by differential operators $X_{M}^{\alpha}$ or
$X^{\alpha}$ or by the multiplication operator $f(\dot{x},y)\mapsto
q(y)f(\dot{x},y)$.
###### Lemma 5.2.
We denote by $s$ the step of the stratified Lie group $G$; this is the
smallest integer such that ${\rm ad}(Y_{1})\ldots{\rm ad}(Y_{s+1})=0$ for any
$Y_{1},\ldots,Y_{s+1}$ in $\mathfrak{g}$. We have the same property with any
$s+1$ $G$-invariant vector fields on $M$.
For any $\alpha\in\mathbb{N}_{0}^{n_{1}}$, if $k>s|\alpha|$ then ${\rm
ad}^{k}(\mathscr{E})X_{M}^{\alpha}=0$ and ${\rm
ad}^{k}(\mathscr{E})(X)^{\alpha}=0$, whereas ${\rm ad}^{k}(\mathscr{E})B=0$
for a multiplication operator $B$ with any homogeneous polynomial $q$ of
homogeneous degree $a<k$.
###### Proof of Lemma 5.2.
For $B=X_{M}^{\alpha}$, we see
${\rm ad}^{k}(\mathscr{E})X_{M}^{\alpha}=\sum_{1\leq j_{1},\ldots,j_{k}\leq
n_{1}}{\rm ad}(X_{M,j_{1}})\ldots{\rm
ad}(X_{M,j_{k}})(X_{M}^{\alpha})\,X_{j_{1}}\ldots X_{j_{k}}.$
If $X_{M}^{\alpha}=X_{M,1}^{a}$ for instance, then
${\rm ad}(X_{M,2})(X_{M,1}^{a})=\sum_{b=1}^{a}X_{M,1}^{b-1}\ {\rm
ad}(X_{M,2})(X_{M,1})\ X_{M,1}^{a-b}.$
More generally, ${\rm ad}(X_{M,j_{1}})\ldots{\rm
ad}(X_{M,j_{k}})(X_{M}^{\alpha})$ will be given by a similar linear
combination of products of $|\alpha|$ vector fields, with $k$-instances of
${\rm ad}$ applied in certain ways. A pigeonhole argument shows that if
$k>s|\alpha|$, then ${\rm ad}$ has been applied at least $s+1$ times on at
least one vector within a term of the linear combination of ${\rm
ad}(X_{M,j_{1}})\ldots{\rm ad}(X_{M,j_{k}})(X_{M}^{\alpha})$ which thus
vanishes.
We proceed similarly for $B=X^{\alpha}$:
${\rm ad}^{k}(\mathscr{E})(X^{\alpha})=\sum_{1\leq j_{1},\ldots,j_{k}\leq
n_{1}}X_{M,j_{1}}\ldots X_{M,j_{k}}\ {\rm ad}(X_{j_{1}})\ldots{\rm
ad}(X_{j_{k}})(X^{\alpha}).$
The argument above shows that if $k>s|\alpha|$ then this expression vanishes.
Let us now consider the case of $B_{q}$, the multiplication by an
$a$-homogeneous polynomial $q$ on $G$:
${\rm ad}^{k}(\mathscr{E})B_{q}=\sum_{1\leq j_{1},\ldots,j_{k}\leq
n_{1}}X_{M,j_{1}}\ldots X_{M,j_{k}}\ {\rm ad}(X_{j_{1}})\ldots{\rm
ad}(X_{j_{k}})B_{q}.$
One checks easily that ${\rm ad}(X)B_{q}=B_{Xq}$ for any $X\in\mathfrak{g}$.
We obtain recursively that ${\rm ad}(X_{j_{1}})\ldots{\rm ad}(X_{j_{k}})B_{q}$
is the multiplication by a homogeneous polynomial of homogeneous degree
$a-k$, with the convention that a polynomial of negative degree is identically
zero. Hence, ${\rm ad}^{k}(\mathscr{E})B_{q}=0$ for $k>a$.
This concludes the proof of Lemma 5.2. ∎
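The degree-lowering mechanism ${\rm ad}(X)B_{q}=B_{Xq}$ in the last step can be checked exactly in a one-dimensional toy picture, with $X=d/dz$ acting on polynomials represented as coefficient lists; the choices $q(z)=z^{3}$ and the test polynomial are arbitrary and only meant to illustrate the nilpotency ${\rm ad}^{k}(X)B_{q}=0$ for $k>a$.

```python
# Polynomials p = sum_i c_i z^i as coefficient lists [c0, c1, ...].
def pad(p, n):
    return p + [0] * (n - len(p))

def sub(p, q):
    n = max(len(p), len(q))
    return [a - b for a, b in zip(pad(p, n), pad(q, n))]

def deriv(p):                       # X = d/dz
    return [i * c for i, c in enumerate(p)][1:] or [0]

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

q = [0, 0, 0, 1]                    # q(z) = z^3, homogeneous degree a = 3
X = deriv
B_q = lambda g: mul(q, g)           # multiplication operator B_q

def ad(A, B):
    return lambda g: sub(A(B(g)), B(A(g)))

# ad(X)B_q = B_{q'} lowers the degree by one, so ad^k(X)B_q = 0 for k > a.
C = B_q
for _ in range(4):                  # k = 4 > a = 3
    C = ad(X, C)

g = [1, 2, 0, 5]                    # arbitrary test polynomial
assert all(c == 0 for c in C(g))
```

Since the arithmetic is exact over the integers, the vanishing is not a numerical artefact.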
### 5.2. Mean ergodicity of symbols
By ergodicity of the symbol, we mean understanding the ergodic behaviour in
the sense of von Neumann’s mean ergodic theorem of the one-parameter group
associated with $i\widehat{\mathscr{E}}$. Here
$\widehat{\mathscr{E}}=\sum_{j=1}^{n_{1}}X_{M,j}\pi(X_{j})$ is the symbol of
the differential operator $\mathscr{E}$ defined in Section 5.1. As
$\mathscr{E}$ is essentially self-adjoint on $L^{2}(M\times G)$ with domain
$C^{\infty}(M:\mathcal{S}(G))$, so is $\widehat{\mathscr{E}}$ on the Hilbert
space $L^{2}(M\times{\widehat{G}})$ with domain $\mathcal{A}_{0}$; the latter
Hilbert space is defined in Section 4.2.5. We will keep the same notation for
their self-adjoint extensions.
It is instructive to look at the commutative case, that is, $G=\mathbb{R}^{n}$
is abelian so $M\sim\mathbb{T}^{n}$ is a torus. An element of
$\mathcal{A}_{0}$ is then a function $a\in
C^{\infty}(\mathbb{T}^{n};\mathcal{S}(\mathbb{R}^{n}))$, i.e. a function
$a(\dot{x},\xi)$ Schwartz in $\xi$ and depending smoothly on
$\dot{x}\in\mathbb{T}^{n}$, and we have
$\widehat{\mathscr{E}}=\sum_{j=1}^{n}\partial_{x_{j}}\frac{1}{2i\pi}\partial_{\xi_{j}}\qquad\mbox{and}\qquad
e^{it\widehat{\mathscr{E}}}a(x,\xi)=a(x+t\xi,\xi).$
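The ergodic averaging in this commutative picture is easy to test numerically: for a symbol with zero mean in $x$ and an irrational frequency $\xi$, the time average of $a(x+t\xi,\xi)$ tends to $\int_{\mathbb{T}}a(x^{\prime},\xi)dx^{\prime}=0$. The specific symbol, base point, frequency $\sqrt{2}$ and horizon $T$ in the sketch below are illustrative choices.

```python
import math

# a(x, xi) = cos(2*pi*x) * exp(-xi^2): smooth and 1-periodic in x,
# Schwartz in xi, with zero mean in x, so the projection P sends a to 0.
def a(x, xi):
    return math.cos(2 * math.pi * x) * math.exp(-xi * xi)

def time_average(x, xi, T, steps=200000):
    """Midpoint-rule approximation of (1/T) * int_0^T a(x + t*xi, xi) dt."""
    dt = T / steps
    return sum(a(x + (k + 0.5) * dt * xi, xi) for k in range(steps)) * dt / T

avg = time_average(x=0.3, xi=math.sqrt(2), T=1000.0)
assert abs(avg) < 1e-2      # tends to the x-average, which is 0
```

In closed form the average is $O(1/(\xi T))$, so the decay is visible already at moderate $T$.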
Let us now go back to the general case of $G$ stratified Lie group. This
section is devoted to showing the following properties for
$e^{it\widehat{\mathscr{E}}}$:
###### Theorem 5.3.
The operators $e^{it\widehat{\mathscr{E}}}$, $t\in\mathbb{R}$, form a strongly
continuous one-parameter group of unitary operators on
$L^{2}(M\times{\widehat{G}})$.
1. (1)
If $\sigma\in\mathcal{A}_{0}$ then
$e^{it\widehat{\mathscr{E}}}\sigma\in\mathcal{A}_{0}$ for every
$t\in\mathbb{R}$. Furthermore, the mapping $t\mapsto
e^{it\widehat{\mathscr{E}}}$ is continuous from $\mathbb{R}$ to
$\mathscr{L}(\mathcal{A}_{0})$.
2. (2)
For any $\sigma\in L^{2}(M\times{\widehat{G}})$, we have the convergence as
$T\to+\infty$
$\left\|\frac{1}{T}\int_{0}^{T}e^{it\widehat{\mathscr{E}}}\sigma
dt-\mathcal{P}\sigma\right\|_{L^{2}(M\times{\widehat{G}})}\longrightarrow 0,$
where $\mathcal{P}$ is the orthogonal projection of
$L^{2}(M\times{\widehat{G}})$ given by
$\mathcal{P}\sigma(\dot{x},\pi)=\int_{M}\sigma(\dot{x}^{\prime},\pi)d\dot{x}^{\prime}\quad\mbox{for
almost every}\ (\dot{x},\pi)\in M\times{\widehat{G}}.$
The proof of the first part of Theorem 5.3 relies on the following
observations of a computational nature. We start with recalling that, at least
formally, if $B$ is a linear operator on the space of symbols, then we have
(5.1) $e^{-it\widehat{\mathscr{E}}}Be^{it\widehat{\mathscr{E}}}=e^{{{\rm
ad}}(-it\widehat{\mathscr{E}})}B=\sum_{k=0}^{\infty}\frac{(-it)^{k}}{k!}{{\rm
ad}}^{k}(\widehat{\mathscr{E}})B,$
where ${{\rm ad}}$ is short for the commutator bracket, that is, ${\rm
ad}(A)B=[A,B]$. This makes sense for the operators $B$ of the form
$X_{M}^{\alpha}$, $\Delta^{\alpha}$ and $\pi(X)^{\alpha}$ because the sum over
$k$ is finite by Lemma 5.2.
###### Proof of Theorem 5.3 (1).
Let $\sigma\in\mathcal{A}_{0}$ and $t\in\mathbb{R}$. At least formally, we may
write for any $\alpha\in\mathbb{N}_{0}^{n}$,
$\beta,\gamma\in\mathbb{N}_{0}^{n_{1}}$,
$\Delta^{\alpha}\pi(X^{\beta})X_{M}^{\gamma}\
e^{it\widehat{\mathscr{E}}}\sigma=e^{it\widehat{\mathscr{E}}}\tau_{\alpha,\beta,\gamma}\quad\mbox{where}\quad\tau_{\alpha,\beta,\gamma}:=e^{-it\widehat{\mathscr{E}}}\Delta^{\alpha}\pi(X^{\beta})X_{M}^{\gamma}\
e^{it\widehat{\mathscr{E}}}\sigma.$
By (5.1) and Lemma 5.2, every symbol
$\tau_{\alpha,\beta,\gamma}=e^{-it\widehat{\mathscr{E}}}\Delta^{\alpha}e^{it\widehat{\mathscr{E}}}\
e^{-it\widehat{\mathscr{E}}}\pi(X^{\beta})e^{it\widehat{\mathscr{E}}}\
e^{-it\widehat{\mathscr{E}}}X_{M}^{\gamma}e^{it\widehat{\mathscr{E}}}\
\sigma,$
is well defined in $\mathcal{A}_{0}$ and the formula above holds.
Consequently, $e^{it\widehat{\mathscr{E}}}\tau_{\alpha,\beta,\gamma}\in
L^{2}(M\times{\widehat{G}})$.
We observe that $\Delta^{\alpha}e^{it\widehat{\mathscr{E}}}\sigma\in
L^{2}(M\times{\widehat{G}})$ for every $\alpha\in\mathbb{N}_{0}^{n}$ and the
Plancherel theorem imply that
$(1+\|\cdot\|)^{N}\mathcal{F}_{G}^{-1}\left(e^{it\widehat{\mathscr{E}}}\sigma(\dot{x},\cdot)\right)$
is in $L^{2}(G)$ for any $N\in\mathbb{N}$ and a well chosen homogeneous quasi-
norm on $G$. Similarly, $\pi(X)^{\beta}e^{it\widehat{\mathscr{E}}}\sigma\in
L^{2}(M\times{\widehat{G}})$ for every $\beta\in\mathbb{N}_{0}^{n_{1}}$ and
the Sobolev embeddings on $G$ [14, Theorem 4.4.25] imply that the map
$\dot{x}\mapsto\mathcal{F}_{G}^{-1}\left(e^{it\widehat{\mathscr{E}}}\sigma(\dot{x},\cdot)\right)$
is in $L^{2}(M;L^{2}(G)\cap C_{b}^{\infty}(G))$. Here $L^{2}(M;B)$ denotes the
topological vector space of square integrable functions on $M$ valued in a
Fréchet space $B$. Finally,
$X_{M}^{\gamma}e^{it\widehat{\mathscr{E}}}\sigma\in
L^{2}(M\times{\widehat{G}})$ for every $\gamma\in\mathbb{N}_{0}^{n_{1}}$ and
the (Euclidean) Sobolev embeddings on $M$ imply that the map
$\dot{x}\mapsto\mathcal{F}_{G}^{-1}\left(e^{it\widehat{\mathscr{E}}}\sigma(\dot{x},\cdot)\right)$
is smooth from $M$ to $L^{2}(G)$.
Performing all three operations simultaneously shows that the map
$\dot{x}\mapsto\mathcal{F}_{G}^{-1}\left(e^{it\widehat{\mathscr{E}}}\sigma(\dot{x},\cdot)\right)$
is smooth from $M$ to $\mathcal{S}(G)$. In other words,
$e^{it\widehat{\mathscr{E}}}\sigma\in\mathcal{A}_{0}$. Furthermore, since the
proof above involves only finite sums, the Plancherel formula implies the
continuity in $t$ of $e^{it\widehat{\mathscr{E}}}$ on
$\mathscr{L}(\mathcal{A}_{0})$. ∎
As already mentioned, the core of the proof of the second part of Theorem 5.3
is an instance of the famous Mean Ergodic Theorem due to von Neumann.
Variations of this theorem can be found in many textbooks on functional
analysis, and a very clear proof of the version below may be found in [18,
p.71]:
###### Theorem 5.4 (Mean Ergodic Theorem).
Let $(U_{t})_{t\in\mathbb{R}}$ be a strongly continuous one-parameter group of
unitary operators on a Hilbert space $\mathcal{H}$. Let $\mathcal{P}$ denote
the orthogonal projection on the space of invariant vectors under the group.
Then for every vector $v\in\mathcal{H}$, we have the convergence
$\left\|\frac{1}{T}\int_{0}^{T}U_{t}v\,dt-\mathcal{P}v\right\|_{\mathcal{H}}\longrightarrow
0\quad\mbox{as}\ T\to+\infty.$
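For intuition, the theorem can be observed in the smallest non-trivial case: $U_{t}$ acting diagonally on $\mathbb{C}^{2}$ with frequencies $0$ and $1$, where the invariant vectors are the multiples of $e_{1}$. This finite-dimensional sketch is only an illustration of the statement; the averaging integral is evaluated in closed form.

```python
import cmath

# U_t = diag(1, e^{it}) on C^2; the invariant vectors are C x {0},
# so the projection P sends (v1, v2) to (v1, 0).
def average(v, T):
    """(1/T) * int_0^T U_t v dt, componentwise in closed form."""
    w0 = v[0]                                       # frequency 0: invariant
    w1 = v[1] * (cmath.exp(1j * T) - 1) / (1j * T)  # frequency 1: O(1/T)
    return (w0, w1)

v = (1.0 + 0j, 1.0 + 0j)
w = average(v, T=1e5)
assert abs(w[0] - 1) < 1e-12 and abs(w[1]) < 1e-4   # converges to P v = (1, 0)
```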
Another ingredient in the proof of the second part will be the decomposition
of the regular representation of $G$ on $L^{2}(M)$ given by
$\pi_{R}(g)f(\dot{x})=f(\dot{x}g)$. It decomposes as a discrete direct sum
$\pi_{R}=\oplus^{\perp}m(\rho)\rho,$
over a countable family of irreducible representations $\rho$ of $G$ with
finite multiplicity $m(\rho)$, see e.g. [31, Section 2.7] or [22, p.146]; we
denote this set of $\rho$ by $\widehat{\Gamma\backslash G}$. The trivial
representation $1_{{\widehat{G}}}$ has multiplicity 1, and its corresponding
space $\mathcal{H}_{\rho}$ is the space of constant functions on $M$.
The proof of the second part of Theorem 5.3 then boils down to the following
lemma:
###### Lemma 5.5.
Let $\rho\in\widehat{\Gamma\backslash G}$ be non-trivial, i.e.
$\rho\not=1_{{\widehat{G}}}$. The operator
$\tilde{\mathscr{E}}_{\rho}:=\sum_{j=1}^{n_{1}}\rho(X_{j})X_{j}$ is
essentially self-adjoint on the Hilbert space
$\overline{\mathcal{H}_{\rho}\otimes L^{2}(G)}$. The only vector in
$\overline{\mathcal{H}_{\rho}\otimes L^{2}(G)}$ invariant under the one-
parameter group of unitary operators $e^{it\tilde{\mathscr{E}}_{\rho}}$,
$t\in\mathbb{R}$, is $0$.
###### Proof of Lemma 5.5.
Since the vector fields $X_{j}$ are $G$-invariant, the self-adjointness of
$\tilde{\mathscr{E}}_{\rho}$ follows from that of $\mathscr{E}$ by composing
with the projection of $L^{2}(M)$ onto the $\rho$-component tensored with the
Fourier transform $\mathcal{F}_{G}$.
Let $\tilde{\kappa}_{\rho}\in\overline{\mathcal{H}_{\rho}\otimes L^{2}(G)}$ be
invariant under $e^{it\tilde{\mathscr{E}}_{\rho}}$, $t\in\mathbb{R}$. This is
equivalent to $\tilde{\kappa}_{\rho}$ being in the domain of
$\tilde{\mathscr{E}}_{\rho}$ and
$\tilde{\mathscr{E}}_{\rho}\tilde{\kappa}_{\rho}=0$. Since the vector fields
$X_{j}$ are left invariant, we can convolve a scalar function
$\phi\in\mathcal{D}(G)$ on the left with any element
$\tilde{\kappa}^{\prime}_{\rho}$ in $\overline{\mathcal{H}_{\rho}\otimes
L^{2}(G)}$, and obtain
$\phi*\tilde{\mathscr{E}}_{\rho}\tilde{\kappa}^{\prime}_{\rho}=\tilde{\mathscr{E}}_{\rho}(\phi*\tilde{\kappa}^{\prime}_{\rho})$.
This implies that $\phi*\tilde{\kappa}_{\rho}$ is also invariant and that it
suffices to prove the result for an invariant $\tilde{\kappa}_{\rho}(z)$
smooth in $z\in G$. We assume so.
Let $\phi,\psi\in\mathcal{D}(G)$ be scalar valued functions such that $\psi$
is supported where $\phi$ is constant. This implies
$\psi\tilde{\mathscr{E}}_{\rho}(\tilde{\kappa}_{\rho}\phi)=\psi\phi\tilde{\mathscr{E}}_{\rho}\tilde{\kappa}_{\rho}+\sum_{j=1}^{n_{1}}\rho(X_{j})\tilde{\kappa}_{\rho}\
\psi X_{j}\phi=0,$
and more generally
$\forall\ell\in\mathbb{N}\qquad\psi\tilde{\mathscr{E}}_{\rho}^{\ell}(\tilde{\kappa}_{\rho}\phi)=0,\quad\mbox{thus}\quad\psi
e^{it\tilde{\mathscr{E}}_{\rho}}(\tilde{\kappa}_{\rho}\phi)=\sum_{\ell=0}^{+\infty}\frac{(it)^{\ell}}{\ell!}\psi\tilde{\mathscr{E}}_{\rho}^{\ell}(\tilde{\kappa}_{\rho}\phi)=\psi\phi\tilde{\kappa}_{\rho}.$
Taking a suitable limit in $L^{2}(G)$ of functions $\phi$, we obtain
(5.2)
$e^{it\tilde{\mathscr{E}}_{\rho}}(\tilde{\kappa}_{\rho}\phi_{\infty})=\tilde{\kappa}_{\rho}\phi_{\infty},$
where $\phi_{\infty}$ is the indicatrix $1_{B}$ of a ball $B$. As this is so
for any $B$, by linearity, (5.2) holds for any simple function
$\phi_{\infty}$, and by $L^{2}$-continuity of
$e^{it\tilde{\mathscr{E}}_{\rho}}$, for any $\phi_{\infty}\in\mathcal{D}(G)$.
Hence, we have obtained
$0=\tilde{\mathscr{E}}_{\rho}(\tilde{\kappa}_{\rho}\phi)$ for any
$\phi\in\mathcal{D}(G)$, and thus
$\sum_{j=1}^{n_{1}}\rho(X_{j})\tilde{\kappa}_{\rho}(z)\
X_{j}\phi(z)=0\quad\mbox{for any point}\ z\in G.$
As we can choose the scalars $X_{j}\phi(z)$ arbitrarily, we have
$\rho(X_{j})\tilde{\kappa}_{\rho}(z)=0$ in $\mathcal{H}_{\rho}$ for each
$j=1,\ldots,n_{1}$ and each $z\in G$. As the $X_{j}$, $j=1,\ldots,n_{1}$,
generate the Lie algebra $\mathfrak{g}$, the vector $\tilde{\kappa}_{\rho}(z)$
in $\mathcal{H}_{\rho}$ is annihilated by the infinitesimal representation
$\rho$ of $\mathfrak{g}$, which is irreducible and non-trivial. This implies
that $\tilde{\kappa}_{\rho}(z)=0$ for each $z\in G$, and concludes the proof
of Lemma 5.5. ∎
###### Proof of Theorem 5.3 (2).
By the mean ergodic theorem, it suffices to identify the space of symbols
$\sigma\in L^{2}(M\times{\widehat{G}})$ invariant under
$e^{it\widehat{\mathscr{E}}}$, $t\in\mathbb{R}$. Such a symbol $\sigma$ may be
decomposed as
$\sigma=\sum_{\rho\in\widehat{\Gamma\backslash
G}}\sigma_{\rho},\quad\mbox{with}\quad\sigma_{\rho}\in\overline{\mathcal{H}_{\rho}\otimes
L^{2}({\widehat{G}})}$
with each
$\mathcal{F}_{G}^{-1}\sigma_{\rho}\in\overline{\mathcal{H}_{\rho}\otimes
L^{2}(G)}$ invariant under $e^{it\tilde{\mathscr{E}}_{\rho}}$,
$t\in\mathbb{R}$. By Lemma 5.5, for $\rho\not=1_{{\widehat{G}}}$,
$\mathcal{F}_{G}^{-1}\sigma_{\rho}=0$ so $\sigma_{\rho}=0$, while for
$\rho=1_{{\widehat{G}}}$, we have $\tilde{\mathscr{E}}_{\rho}=0$ so
$e^{it\tilde{\mathscr{E}}_{\rho}}$ is the identity operator on
$\overline{\mathcal{H}_{\rho}\otimes L^{2}(G)}\sim L^{2}(G)$. This implies
that $\sigma=\sigma_{\rho}$ with $\rho=1_{{\widehat{G}}}$. In other words, the
orthogonal projection $\mathcal{P}$ on the invariants is the projection onto
the $\rho$-component with $\rho=1_{{\widehat{G}}}$ which is given by
integration on $M$. This concludes the proof of Theorem 5.3. ∎
## 6\. Quantum Variance
In this section, we keep the same setting as in Section 5. We also consider an
orthonormal basis $(\varphi_{j})_{j\in\mathbb{N}_{0}}$ of the Hilbert space
$L^{2}(M)$ consisting of eigenfunctions of $\mathcal{L}_{M}$:
$\mathcal{L}_{M}\varphi_{j}=\lambda_{j}\varphi_{j},\
j=0,1,2,\ldots\quad\mbox{with increasing eigenvalues}\
\lambda_{j}\leq\lambda_{j+1}.$
We denote the spectral counting function by
$N(\Lambda):=\left|\left\\{j\in\mathbb{N}_{0}\ :\ \lambda_{j}\leq\Lambda\right\\}\right|.$
This section is devoted to finishing the proof of Theorem 1.1. The Weyl law
$N(\varepsilon^{-2})\sim c_{1}\varepsilon^{-Q}$ has already been obtained in
Section 4.4 while the quantum ergodicity will follow from the following
statement.
###### Proposition 6.1 (Quantum Variance).
For any $\sigma\in\mathcal{A}_{0}$ satisfying
$\int_{M}\sigma(\dot{x},\pi)d\dot{x}=0$ for almost every
$\pi\in{\widehat{G}}$, we have the convergence
$\frac{1}{N(\varepsilon^{-2})}\sum_{j\,:\,\lambda_{j}\leq\varepsilon^{-2}}\left|\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},\varphi_{j}\right\rangle_{L^{2}(M)}\right|^{2}\longrightarrow
0\quad\mbox{as}\ \varepsilon\to 0.$
We start by showing how Proposition 6.1 implies the last part of Theorem
1.1. The next subsection gives important observations for the proof of
Proposition 6.1. The last subsection states and proves a generalisation of
Proposition 6.1.
### 6.1. End of the proof of Theorem 1.1 from Proposition 6.1
The last part of Theorem 1.1 follows classically ([34, Proof of Theorem 9.4
(i)] or [1, p.111]) from the separability of the space of continuous functions
on $M$ together with the following statement which is a consequence of
Proposition 6.1:
###### Corollary 6.2.
For any continuous function $a:M\to\mathbb{C}$, we have the convergence
$\frac{1}{N(\varepsilon^{-2})}\sum_{j\,:\,\lambda_{j}\leq\varepsilon^{-2}}\left|\int_{M}a(\dot{x})|\varphi_{j}(x)|^{2}d\dot{x}-\int_{M}a(\dot{x})d\dot{x}\right|^{2}\longrightarrow
0\quad\mbox{as}\ \varepsilon\to 0.$
###### Proof of Corollary 6.2.
A density argument shows that we may assume $a\in\mathcal{D}(M)$. We may
also assume $\int_{M}a(\dot{x})d\dot{x}=0$.
Let $\psi\in\mathcal{D}(\mathbb{R})$ be such that $\psi=1_{[0,1]}\psi$. Then
$\sigma(\dot{x},\pi)=a(\dot{x})\psi(\widehat{\mathcal{L}})$ defines a symbol
$\sigma$ in $\mathcal{A}_{0}$ by Lemma 4.7 and we have
$\sum_{j\,:\,\lambda_{j}\leq\varepsilon^{-2}}\left|\int_{M}a(\dot{x})|\varphi_{j}(x)|^{2}d\dot{x}\right|^{2}\leq\sum_{j\,:\,\lambda_{j}\leq\varepsilon^{-2}}\left|\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},\varphi_{j}\right\rangle_{L^{2}(M)}\right|^{2}.$
We conclude with Proposition 6.1. ∎
### 6.2. Some observations
In this section, we present some remarks which will guide us in some steps of
the proof of Proposition 6.1.
#### 6.2.1. A weak result of Egorov type
The observations explained at the end of the introduction lead us to the
following first result of Egorov type. Although we will not use this result,
the proof sheds light on an important problem related to the non-commutativity
of our symbols:
###### Lemma 6.3 (Weak Egorov).
Let $\sigma\in\mathcal{A}_{0}$ be such that
$\sigma_{t}:=e^{2it\widehat{\mathscr{E}}}\sigma$ commutes with
$\widehat{\mathcal{L}}$ for every $t\in\mathbb{R}$. For any $T>0$, there
exists a constant $C=C_{G,\Gamma,(X_{j}),\sigma,T}>0$ such that we have
$\forall\varepsilon\in(0,1]\qquad\forall
t\in[0,T]\qquad\|e^{-it\varepsilon\mathcal{L}_{M}}\ {\rm
Op}^{(\varepsilon)}(\sigma)\ e^{it\varepsilon\mathcal{L}_{M}}-{\rm
Op}^{(\varepsilon)}(\sigma_{t})\|_{\mathscr{L}(L^{2}(M))}\leq C\varepsilon.$
###### Proof of Lemma 6.3.
We write
$\displaystyle e^{-it\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}(\sigma)e^{it\varepsilon\mathcal{L}_{M}}-{\rm
Op}^{(\varepsilon)}(\sigma_{t})=\int_{0}^{1}\partial_{s}\left\\{e^{-its\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}(\sigma_{t(1-s)})e^{its\varepsilon\mathcal{L}_{M}}\right\\}ds$
$\displaystyle\qquad=\int_{0}^{1}e^{-its\varepsilon\mathcal{L}_{M}}\left([-it\varepsilon\mathcal{L}_{M},{\rm
Op}^{(\varepsilon)}(\sigma_{t(1-s)})]-2i{\rm
Op}^{(\varepsilon)}(\widehat{\mathscr{E}}\sigma_{t(1-s)})\right)e^{its\varepsilon\mathcal{L}_{M}}ds$
$\displaystyle\qquad=-it\int_{0}^{1}e^{-its\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}\left(\varepsilon^{-1}[\widehat{\mathcal{L}},\sigma_{t(1-s)}]+\varepsilon\mathcal{L}_{M}\sigma_{t(1-s)}\right)e^{its\varepsilon\mathcal{L}_{M}}ds,$
by Lemma 5.1. The statement follows from the hypothesis on $\sigma_{t}$ and
the properties of the semi-classical calculus (see Section 4) and of
$e^{i2t\widehat{\mathscr{E}}}$ (see Theorem 5.3 (1)). ∎
#### 6.2.2. Some non-commutative but real considerations
We are therefore led to study the term ${\rm
Op}^{(\varepsilon)}([\widehat{\mathcal{L}},\sigma_{t(1-s)}])$. In fact, it
will appear in a scalar product, and to handle it we will use the following
observations, which first require setting some vocabulary.
If $\sigma\in\mathcal{A}_{0}$, we also define the symbol $(\sigma)\breve{\
}:=\breve{\sigma}\in\mathcal{A}_{0}$ via
$\breve{\sigma}(\dot{x},\pi):=\overline{\sigma(\dot{x},\bar{\pi})}.$
We observe that this operation commutes with taking the adjoint:
(6.1) $(\sigma)\breve{\ }\,^{*}=(\sigma)^{*}\breve{\
}\qquad\mbox{thus}\qquad(\Re\sigma)\breve{\ }=\Re\breve{\sigma}.$
If $\kappa_{\dot{x}}(y)$ is the kernel associated with $\sigma$, then the
kernels associated with $\sigma^{*}$ and $\breve{\sigma}$ are
$\bar{\kappa}_{\dot{x}}(y^{-1})$ and $\bar{\kappa}_{\dot{x}}(y)$ respectively.
The relation between ${\rm Op}^{\varepsilon}(\sigma)$ and ${\rm
Op}^{\varepsilon}(\sigma^{*})$ is given in Proposition 4.4. For
$\breve{\sigma}$, we see
(6.2) ${\rm Op}^{(\varepsilon)}(\breve{\sigma})\phi=\overline{{\rm
Op}^{(\varepsilon)}(\sigma)\bar{\phi}},\qquad\mbox{so}\qquad\overline{\langle{\rm
Op}^{(\varepsilon)}(\sigma)\phi,\phi\rangle_{L^{2}(M)}}=\langle{\rm
Op}^{(\varepsilon)}(\breve{\sigma})\bar{\phi},\bar{\phi}\rangle_{L^{2}(M)},$
which easily implies
(6.3) $\sigma=\breve{\sigma}\ \Longrightarrow\ \Re\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi,\varphi\right\rangle=\Re\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\Re\varphi,\Re\varphi\right\rangle-\Re\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\Im\varphi,\Im\varphi\right\rangle.$
We see that for $\tau_{1},\tau_{2}\in\mathcal{A}_{0}$ or
$\tau_{2}\in\mathcal{A}_{0}$ and
$\tau_{1}\in\mathcal{F}_{G}\mathfrak{U}(\mathfrak{g})$ we have
(6.4)
$[\tau_{1},\tau_{2}]^{*}=-[\tau_{1}^{*},\tau_{2}^{*}]\quad\mbox{and}\quad([\tau_{1},\tau_{2}])\breve{\
}=[\breve{\tau}_{1},\breve{\tau}_{2}].$
We will also need the following observations describing the relations of
$\breve{\,}$ and $\widehat{\mathscr{E}}$:
(6.5)
$(\widehat{\mathscr{E}}\sigma)\breve{\,}=\widehat{\mathscr{E}}\breve{\sigma}\qquad\mbox{so}\qquad(e^{it\widehat{\mathscr{E}}}\sigma)\breve{\,}=e^{-it\widehat{\mathscr{E}}}\breve{\sigma}.$
### 6.3. Proof of Proposition 6.1
As we can write $\sigma=\Re\sigma+i\Im\sigma$, it suffices to prove the result
for $\sigma\in\mathcal{A}_{0}$ self-adjoint. Similarly, it suffices to prove
the result for $\sigma\in\mathcal{A}_{0}$ such that $\breve{\sigma}=\sigma$.
Hence we may assume $\sigma=\sigma^{*}=\breve{\sigma}$. Furthermore, the
properties of the calculus, especially Proposition 4.4 (2), imply
$\sum_{\lambda_{j}\leq\varepsilon^{-2}}\left|\Im\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},\varphi_{j}\right\rangle\right|^{2}=O(\varepsilon^{1-Q});$
in this section, we use the shorthand $\langle.,.\rangle$ for the scalar
product on $L^{2}(M)$. Therefore, it suffices to show that
(6.6) $\lim_{\varepsilon\to
0}\frac{S_{\varepsilon}}{N(\varepsilon^{-2})}=0\qquad\mbox{where}\qquad
S_{\varepsilon}:=\sum_{\lambda_{j}\leq\varepsilon^{-2}}\left|\Re\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},\varphi_{j}\right\rangle\right|^{2}.$
Using (6.3), we may now assume that the $\varphi_{j}$’s are real-valued.
With all these assumptions, we now start the analysis with
$S_{\varepsilon}=\sum_{\lambda_{j}\leq\varepsilon^{-2}}\left|\Re\left\langle\frac{1}{2T}\int_{-T}^{T}e^{-it\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}(\sigma)e^{it\varepsilon\mathcal{L}_{M}}dt\,\varphi_{j},\varphi_{j}\right\rangle\right|^{2}\leq
2(S_{1,\varepsilon}+S_{2,\varepsilon}),
for $T>0$ where, having denoted
$\sigma_{t}:=e^{2it\widehat{\mathscr{E}}}\sigma$,
$\displaystyle S_{2,\varepsilon}$
$\displaystyle:=\sum_{\lambda_{j}\leq\varepsilon^{-2}}\left|\Re\left\langle\frac{1}{2T}\int_{-T}^{T}\left(e^{-it\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}(\sigma)e^{it\varepsilon\mathcal{L}_{M}}-{\rm
Op}^{(\varepsilon)}(\sigma_{t})\right)dt\,\varphi_{j},\varphi_{j}\right\rangle\right|^{2},$
$\displaystyle S_{1,\varepsilon}$ $\displaystyle:=\|{\rm
Op}^{(\varepsilon)}\left(\frac{1}{2T}\int_{-T}^{T}\sigma_{t}dt\right)\|^{2}_{HS}=\varepsilon^{-Q}\|\frac{1}{2T}\int_{-T}^{T}\sigma_{t}dt\|^{2}_{L^{2}(M\times{\widehat{G}})}+O(\varepsilon^{N}),$
by Lemma 4.6, see also Section 4.2.5. The properties of the calculus,
especially Proposition 4.4 (2), imply
$S_{2,\varepsilon}=S^{\prime}_{2,\varepsilon}+O(\varepsilon^{1-Q})$ where
$S^{\prime}_{2,\varepsilon}:=\sum_{\lambda_{j}\leq\varepsilon^{-2}}\left|\Re\left\langle\frac{1}{2T}\int_{-T}^{T}\left(e^{-it\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}(\sigma)e^{it\varepsilon\mathcal{L}_{M}}-{\rm
Op}^{(\varepsilon)}(\Re\sigma_{t})\right)dt\,\varphi_{j},\varphi_{j}\right\rangle\right|^{2}.$
We now proceed as in the proof of Lemma 6.3, although introducing the real
part of $\sigma_{t}$ produces some additional terms:
$\displaystyle e^{-it\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}(\sigma)e^{it\varepsilon\mathcal{L}_{M}}-{\rm
Op}^{(\varepsilon)}(\Re\sigma_{t})=\int_{0}^{1}\partial_{s}\left\\{e^{-its\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}(\Re\sigma_{t(1-s)})e^{its\varepsilon\mathcal{L}_{M}}\right\\}ds$
$\displaystyle\quad=\int_{0}^{1}e^{-its\varepsilon\mathcal{L}_{M}}\left([-it\varepsilon\mathcal{L}_{M},{\rm
Op}^{(\varepsilon)}(\Re\sigma_{t(1-s)})]-2{\rm Op}^{(\varepsilon)}(\Re
i\widehat{\mathscr{E}}\sigma_{t(1-s)})\right)e^{its\varepsilon\mathcal{L}_{M}}ds$
$\displaystyle\quad=\int_{0}^{1}e^{-its\varepsilon\mathcal{L}_{M}}{\rm
Op}^{(\varepsilon)}\left(-it\varepsilon^{-1}[\widehat{\mathcal{L}},\Re\sigma_{t(1-s)}]+2i\widehat{\mathscr{E}}\Re\sigma_{t(1-s)}-2\Re(i\widehat{\mathscr{E}}\sigma_{t(1-s)})\right.$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\left.-it\varepsilon\mathcal{L}_{M}\Re\sigma_{t(1-s)}\right)e^{its\varepsilon\mathcal{L}_{M}}ds.$
This, together with the properties of the calculus, especially Proposition 4.4
(2), implies
$S^{\prime}_{2,\varepsilon}=S^{\prime\prime}_{2,\varepsilon}+O(\varepsilon^{1-Q})$
where
$S^{\prime\prime}_{2,\varepsilon}:=\sum_{\lambda_{j}\leq\varepsilon^{-2}}\left|\Re\left\langle{\rm
Op}^{(\varepsilon)}\left(-i\varepsilon^{-1}[\widehat{\mathcal{L}},\tau_{1}]+\Re(i\tau_{2})\right)\varphi_{j},\varphi_{j}\right\rangle\right|^{2},$
where $\tau_{1}$ and $\tau_{2}$ are the symbols given by:
$\displaystyle\tau_{1}$
$\displaystyle:=\frac{1}{2T}\int_{-T}^{T}\int_{0}^{1}\Re\sigma_{t(1-s)}\,ds\,t\,dt,$
$\displaystyle\tau_{2}$
$\displaystyle:=\frac{1}{T}\int_{-T}^{T}\int_{0}^{1}\left(\widehat{\mathscr{E}}\Re\sigma_{t(1-s)}-\widehat{\mathscr{E}}\sigma_{t(1-s)}\right)ds\,dt.$
For the term in $\tau_{1}$, we observe that $\tau_{1}$ is in $\mathcal{A}_{0}$
and satisfies
$\breve{\tau}_{1}=\frac{1}{2T}\int_{-T}^{T}\int_{0}^{1}\Re\sigma_{-t(1-s)}\,ds\,t\,dt=\tau_{1},$
because of (6.5) and (6.1). Thus, using (6.2), (6.4),
$\breve{\widehat{\mathcal{L}}}=\widehat{\mathcal{L}}$ and $\varphi_{j}=\Re\varphi_{j}$, we see
that
$\overline{\left\langle{\rm
Op}^{(\varepsilon)}([\widehat{\mathcal{L}},\tau_{1}])\varphi_{j},\varphi_{j}\right\rangle}=\left\langle{\rm
Op}^{(\varepsilon)}([\widehat{\mathcal{L}},\tau_{1}])\varphi_{j},\varphi_{j}\right\rangle,\quad\mbox{so}\quad\Re\left\langle{\rm
Op}^{(\varepsilon)}(i[\widehat{\mathcal{L}},\tau_{1}])\varphi_{j},\varphi_{j}\right\rangle=0.$
For the term in $\tau_{2}$, we observe similarly that $\tau_{2}$ is in
$\mathcal{A}_{0}$ and satisfies $\breve{\tau}_{2}=\tau_{2}$ so
$\Re\left\langle{\rm
Op}^{(\varepsilon)}(i\tau_{2})\varphi_{j},\varphi_{j}\right\rangle=0.$
We have the same observation with $\tau_{2}$ replaced with $\tau_{2}^{*}$
because of (6.1). This implies
$\Re\left\langle{\rm
Op}^{(\varepsilon)}(\Re(i\tau_{2}))\varphi_{j},\varphi_{j}\right\rangle=0,$
and therefore $S_{2,\varepsilon}^{\prime\prime}=0$.
Collecting all the equalities and inequalities above, we see
$S_{\varepsilon}\leq\varepsilon^{-Q}\|\frac{1}{2T}\int_{-T}^{T}\sigma_{t}dt\|^{2}_{L^{2}(M\times{\widehat{G}})}+C\varepsilon^{1-Q},$
where the positive constant $C$ may depend on $T$. This implies
$\limsup_{\varepsilon\to 0}\frac{S_{\varepsilon}}{N(\varepsilon^{-2})}\leq
c_{1}^{-1}\|\frac{1}{2T}\int_{-T}^{T}\sigma_{t}dt\|^{2}_{L^{2}(M\times{\widehat{G}})},$
where $c_{1}$ is the constant from the Weyl law $N(\varepsilon^{-2})\sim
c_{1}\varepsilon^{-Q}$. By our Ergodic Theorem (see Theorem 5.3 (2)), the
right-hand side tends to 0 as $T\to+\infty$. This shows the convergence in
(6.6).
This concludes the proofs of Proposition 6.1 and of our main result (Theorem
1.1).
### 6.4. A generalisation of Proposition 6.1
In this section, we prove the following generalisation of Proposition 6.1:
###### Corollary 6.4.
For any $\sigma\in\mathcal{A}_{0}$, we have the convergence as $\varepsilon\to
0$
$\frac{1}{N(\varepsilon^{-2})}\sum_{\lambda_{j}\leq\varepsilon^{-2}}\left|\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},\varphi_{j}\right\rangle_{L^{2}(M)}-\frac{1}{\rm
Vol(M)}\int_{M}\sigma(\dot{x},1_{{\widehat{G}}})d\dot{x}\right|^{2}\longrightarrow
0\,.$
As explained in the introduction, a commutative counterpart on the torus would
be the analogue of [34, Theorem 9.4] for the 0-energy level of the standard
Laplacian on the torus for the basic semi-classical calculus.
Before starting the proof, we observe that the integral in Corollary 6.4 is
indeed a scalar, and that it can be computed as
$\sigma(\dot{x},1_{{\widehat{G}}})=\int_{G}\kappa_{\dot{x}}(y)dy,\quad\mbox{so}\quad\int_{M}\sigma(\dot{x},1_{{\widehat{G}}})d\dot{x}=\iint_{M\times
G}\kappa_{\dot{x}}(y)d\dot{x}dy,$
where $\kappa_{\dot{x}}(y)$ is the kernel associated with $\sigma$.
###### Proof of Corollary 6.4.
We want to show that for any $\sigma\in\mathcal{A}_{0}$ we have
$\lim_{\varepsilon\to
0}\frac{S_{\varepsilon}(\sigma)}{N(\varepsilon^{-2})}=0\quad\mbox{where}\quad
S_{\varepsilon}(\sigma):=\sum_{\lambda_{j}\leq\varepsilon^{-2}}|u_{j,\varepsilon,\sigma}|^{2},$
and
$u_{j,\varepsilon,\sigma}:=\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},\varphi_{j}\right\rangle_{L^{2}(M)}-\frac{1}{\rm
Vol(M)}\int_{M}\sigma(\dot{x},1_{{\widehat{G}}})d\dot{x}.$
Writing $\tau$ for the symbol in $\mathcal{A}_{0}$ independent of $\dot{x}$
given by $\tau(\pi)=\int_{M}\sigma(\dot{x},\pi)d\dot{x}$, we observe that
$S_{\varepsilon}(\sigma)\leq
2(S_{\varepsilon}(\sigma-\tau)+S_{\varepsilon}(\tau)),$
and that the average over $M$ of the symbol $\sigma-\tau$ vanishes, so we have
$\lim_{\varepsilon\to 0}S_{\varepsilon}(\sigma-\tau)/N(\varepsilon^{-2})=0$ by
Proposition 6.1. This shows that we may assume that $\sigma$ is independent of
$\dot{x}$.
Note that $\widehat{\mathcal{L}}(1_{{\widehat{G}}})=0$, so
$\psi(\widehat{\mathcal{L}}(\pi))=\psi(0)$ at $\pi=1_{{\widehat{G}}}$ for any
$\psi\in\mathcal{D}(\mathbb{R})$. Consequently, considering
$\psi\in\mathcal{D}(\mathbb{R})$ satisfying $\psi 1_{[0,1]}=1_{[0,1]}$, we see
that
$\sigma-\sigma(1_{{\widehat{G}}})\psi(\widehat{\mathcal{L}})$ is in
$\mathcal{A}_{0}$ with
$S_{\varepsilon}(\sigma)=S_{\varepsilon}(\sigma-\sigma(1_{{\widehat{G}}})\psi(\widehat{\mathcal{L}}))\quad\mbox{and}\quad(\sigma-\sigma(1_{{\widehat{G}}})\psi(\widehat{\mathcal{L}}))(\pi)=0\
\mbox{at}\ \pi=1_{{\widehat{G}}}.$
This shows that we may assume in addition $\sigma(1_{{\widehat{G}}})=0$.
As explained above, we may assume $\sigma=\widehat{\kappa}$ with
$\kappa\in\mathcal{S}(G)$ and
$\sigma(1_{{\widehat{G}}})=\int_{G}\kappa(y)dy=0$. We have
$u_{j,\varepsilon,\sigma}=\left\langle{\rm
Op}^{(\varepsilon)}(\sigma)\varphi_{j},\varphi_{j}\right\rangle_{L^{2}(M)},\quad\mbox{so}\quad|u_{j,\varepsilon,\sigma}|\leq\|{\rm
Op}^{(\varepsilon)}(\sigma)\|_{\mathscr{L}(L^{2}(M))}\|\varphi_{j}\|_{L^{2}(M)}^{2}\leq\|\sigma\|_{\mathcal{A}_{0}},
by Proposition 4.2 and since $\|\varphi_{j}\|_{L^{2}(M)}=1$; this implies
$\forall\varepsilon\in(0,1]\qquad 0\leq
N(\varepsilon^{-2})^{-1}S_{\varepsilon}(\sigma)\leq\|\sigma\|_{\mathcal{A}_{0}}.$
Furthermore, we have
$u_{j,\varepsilon,\sigma}=\left\langle(\varphi_{j,G}*\kappa^{(\varepsilon)})_{M},\varphi_{j}\right\rangle_{L^{2}(M)}\longrightarrow_{\varepsilon\to
0}0\,,$
by Corollary 3.4 since $\int_{G}\kappa(y)dy=0$. The convergence is for
$j\in\mathbb{N}_{0}$ fixed, but it shows that for every $j\in\mathbb{N}_{0}$,
there exists $\varepsilon^{(j)}\in(0,1]$ such that
$|u_{j,\varepsilon,\sigma}|\leq 1/(j+1)$ holds for every
$\varepsilon\in(0,\varepsilon^{(j)})$. We may assume that the sequence
$(\varepsilon^{(j)})_{j\in\mathbb{N}_{0}}$ is decreasing. Consequently, if
$(\varepsilon_{\ell})_{\ell\in\mathbb{N}}$ is a sequence in $(0,1]$ converging
to 0, then we may extract a subsequence
$(\varepsilon_{\ell_{k}})_{k\in\mathbb{N}}$ such that
$\varepsilon_{\ell_{k}}<\varepsilon^{(k)}$ for all $k\in\mathbb{N}$ and we
have
$S_{\varepsilon_{\ell_{k}}}(\sigma)\leq\sum_{0\leq j\leq
N(\varepsilon_{\ell_{k}}^{-2})}\frac{1}{j+1}\sim\ln
N(\varepsilon_{\ell_{k}}^{-2}),\qquad\mbox{so}\qquad\lim_{k\to\infty}\frac{S_{\varepsilon_{\ell_{k}}}(\sigma)}{N(\varepsilon_{\ell_{k}}^{-2})}=0.$
This implies $\lim_{\varepsilon\to
0}S_{\varepsilon}(\sigma)/N(\varepsilon^{-2})=0$ and concludes the proof of
Corollary 6.4. ∎
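The last step of the proof rests on the elementary fact that the partial harmonic sum grows only like $\ln N$, so its average tends to 0. A quick numerical sanity check of this averaging (illustrative only, not part of the proof):

```python
import math

def harmonic_average(n):
    """Average of the bound 1/(j+1) over j = 0, ..., n-1."""
    return sum(1.0 / (j + 1) for j in range(n)) / n

# the average behaves like ln(n)/n and therefore tends to 0
ratios = [harmonic_average(n) for n in (10, 1_000, 100_000)]
# ratios is strictly decreasing towards 0
```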
## References
* [1] N. Anantharaman, F. Faure and C. Fermanian-Kammerer _Chaos en mécanique quantique_ , Editor: Harinck, Pascale and Plagne, Alain and Sabbah, Claude, Lectures from the Mathematical Days (X-UPS 2014) held at l’École Polytechnique, Palaiseau, April 28–29, 2014, Éditions de l’École Polytechnique, Palaiseau, 2014\.
* [2] N. Anantharaman and F. Macià, Semiclassical measures for the Schrödinger equation on the torus, _J. Eur. Math. Soc. (JEMS)_ , 16, 2014, No 6, pp 1253–1288.
* [3] R. Beals, B. Gaveau and C. Greiner, Hamilton-Jacobi theory and the heat kernel on Heisenberg groups, _J. Math. Pures Appl. (9)_ , 79, 2000, No 7, pp 633–689.
* [4] J. Bourgain, N. Burq and M. Zworski, Control for Schrödinger operators on 2-tori: rough potentials, _J. Eur. Math. Soc. (JEMS)_ , 15, 2013, No 5, pp 1597–1628.
* [5] M. Christ, $L^{p}$ bounds for spectral multipliers on nilpotent groups, _Trans. Amer. Math. Soc._ , 328, 1991, 1, pp 73–81.
* [6] Y. Colin de Verdière, L. Hillairet and E. Trélat, Spectral asymptotics for sub-Riemannian Laplacians, I: Quantum ergodicity and quantum limits in the 3-dimensional contact case, _Duke Math. J._ , 167, 2018, No 1, pp 109–174.
* [7] L.-J. Corwin and F.-P. Greenleaf, Representations of nilpotent Lie groups and their applications, Part 1: Basic theory and examples, Cambridge studies in advanced Mathematics, 18, Cambridge university Press, 1990.
* [8] S. Dave and S. Haller, The heat asymptotics on filtered manifolds, _J. Geom. Anal._ , 30, 2020, No 1, pp 337–389.
* [9] C. Deninger and W. Singhof, The $e$-invariant and the spectrum of the Laplacian for compact nil-manifolds covered by Heisenberg groups, _Invent. Math._ , 78, 1984, No 1, pp 101–112.
* [10] C. Fermanian Kammerer and V. Fischer, Semi-classical analysis on H-type groups, Sci. China Math., 62, 2019, No 6, pp 1057–1086.
* [11] C. Fermanian Kammerer and V. Fischer, Defect measures on graded Lie groups. To appear in _Annali della Scuola Normale de Pisa_.
* [12] C. Fermanian Kammerer and V. Fischer, Quantum evolution and sub-Laplacian operators on groups of Heisenberg type. To appear in _Journal of Spectral Theory_.
* [13] C. Fermanian Kammerer and C. Letrouit, Observability and Controllability for the Schrödinger Equation on Quotients of Groups of Heisenberg Type. Submitted and Arxiv 2009.13877.
* [14] V. Fischer and M. Ruzhansky, Quantization on nilpotent Lie groups, Progress in Mathematics, 314, Birkhäuser Basel, 2016.
* [15] G. Folland, Compact Heisenberg manifolds as CR manifolds, _J. Geom. Anal._ , 14, 2004, No 3, pp 521–532.
* [16] G. Folland and E. Stein, Hardy spaces on homogeneous groups, Mathematical Notes, 28, Princeton University Press, 1982\.
* [17] D. Geller, Liouville’s theorem for homogeneous groups, _Comm. Partial Differential Equations_ , 8, 1983, 15, pp 1665–1677.
* [18] A. Gorodnik and A. Nevo, Quantitative ergodic theorems and their number-theoretic applications, _Bull. Amer. Math. Soc. (N.S.)_ , 52, 2015, No 1, pp 65–113.
* [19] B. Helffer and J. Nourrigat, Caracterisation des opérateurs hypoelliptiques homogènes invariants à gauche sur un groupe de Lie nilpotent gradué, _Comm. Partial Differential Equations_ , 4, 1979, 8, pp 899–958.
* [20] A. Hulanicki, A functional calculus for Rockland operators on nilpotent Lie groups, Studia Mathematica, 78 (1984), pp 253–266.
* [21] C. Letrouit. Quantum limits of products of Heisenberg manifolds. Submitted and Arxiv:2007.00910.
* [22] C. Moore, Decomposition of unitary representations defined by discrete subgroups of nilpotent groups, _Ann. of Math. (2)_ , 82, 1965, pp 146–182.
* [23] R. Ponge, _Heisenberg calculus and spectral theory of hypoelliptic operators on Heisenberg manifolds_ , Mem. Amer. Math. Soc., 194, 2008, No 906.
* [24] N. Savale. Spectrum and abnormals in sub-Riemannian geometry: the 4D quasi-contact case. Submitted and ArXiv:1909.00409.
* [25] R. Strichartz, Sub-Riemannian geometry, _J. Differential Geom._ , 24, 1986, 2, pp 221–263.
* [26] S. Thangavelu, _Harmonic analysis on the Heisenberg group_ , Progress in Mathematics, 159, Birkhäuser Boston, Inc., Boston, MA, 1998\.
* [27] S. Thangavelu, Harmonic analysis on Heisenberg nil-manifolds, _Rev. Un. Mat. Argentina_ , 50, 2009, No 2, pp 75–93.
* [28] E. van Erp, The Atiyah-Singer index formula for subelliptic operators on contact manifolds. Part I, _Ann. of Math. (2)_ , 171, 2010, No 3, pp 1647–1681.
* [29] E. van Erp, The Atiyah-Singer index formula for subelliptic operators on contact manifolds. Part II, _Ann. of Math. (2)_ , 171, 2010, No 3, pp 1683–1706.
* [30] E. van Erp and R. Yuncken, A groupoid approach to pseudodifferential calculi, _J. Reine Angew. Math._ , 756, 2019, pp 151–182.
* [31] N. Wallach, _Harmonic analysis on homogeneous spaces_ , Pure and Applied Mathematics, No. 19, Marcel Dekker, Inc., New York, 1973\.
* [32] S. Zelditch, Index and dynamics of quantized contact transformations, _Ann. Inst. Fourier (Grenoble)_ , 47, 1997, No 1, pp 305–363.
* [33] S. Zelditch, Complex zeros of real ergodic eigenfunctions, _Invent. Math._ , 167, 2007, No 2, pp 419–443.
* [34] M. Zworski, _Semiclassical analysis_ , Graduate Studies in Mathematics, 138, American Mathematical Society, Providence, RI, 2012\.
✉ Felix Bestehorn, Christian Kirches: {f.bestehorn<EMAIL_ADDRESS>
${}^{\textbf{1}}$ Institute for Mathematical Optimization, Technische Universität Braunschweig, Braunschweig, Germany
Maike Bestehorn: <EMAIL_ADDRESS>
${}^{\textbf{2}}$ Schäftlarn, Germany
# A deterministic matching method for exact matchings to compare the outcome
of different interventions ††thanks: C. Kirches acknowledges funding by
Deutsche Forschungsgemeinschaft through Priority Programme 1962 “Non-smooth
and Complementarity-based Distributed Parameter Systems: Simulation and
Hierarchical Optimization”. C. Kirches was supported by the German Federal
Ministry of Education and Research, grants no 61210304-ODINE, 05M17MBA-
MoPhaPro and 05M18MBA-MOReNet.
Felix Bestehorn${}^{\textbf{1}}$ Maike Bestehorn${}^{\textbf{2}}$ Christian
Kirches${}^{\textbf{1}}$
(Received: date / Accepted: date)
###### Abstract
Statistical matching methods are widely used in the social and health sciences
to estimate causal effects using observational data. Often the objective is to
find comparable groups with similar covariate distributions in a dataset, with
the aim to reduce bias in a random experiment. We aim to develop a foundation
for deterministic methods which provide results with low bias, while retaining
interpretability. The proposed method matches on the covariates and calculates
all possible maximal exact matches for a given dataset without adding
numerical errors. Notable advantages of our method over existing matching
algorithms are that all available information for exact matches is used, no
additional bias is introduced, it can be combined with other matching methods
for inexact matching to reduce pruning, and that the result is calculated in a
fast and deterministic way. For a given dataset the result is therefore
provably unique for exact matches in the mathematical sense. We provide
proofs, instructions for implementation as well as a numerical example
calculated for comparison on a complete survey.
###### Keywords:
Statistical exact matching; evaluation of observational studies; matched
sampling; weighted matching
###### pacs:
C15 C18
###### MSC:
62-07 62D20 91B68
## 1 Introduction
Statistical matching (SM) is widely used to reduce the effect of confounding
(Rubin, 1973; Anderson et al., 1980; Kupper et al., 1981) when estimating the
causal effects of two different paths of action in an observational study.
Such a study consists e.g. of a dataset containing two therapy groups $A$ and
$B$, which in turn contain patients
$a_{1},\,\ldots,\,a_{|A|},\,b_{1},\,\ldots,\,b_{|B|}$. Every patient $p$ in
the dataset $D$ has an $s$-dimensional covariate vector $cv(p)$, describing
the patient's condition, and an observed value $\mathfrak{o}(p)$, describing the result of
the therapy, for examples see (Ray et al., 2012; Zhang et al., 2015; Gozalo et
al., 2015; Zhang et al., 2016; Cho et al., 2017; Burden et al., 2017; McEvoy
et al., 2016; Schermerhorn et al., 2008; Lee et al., 2017; Capucci et al.,
2017; Tranchart et al., 2016; Zangbar et al., 2016; Dou et al., 2017; Fukami
et al., 2017; McDonald et al., 2017; Lai et al., 2016; Abidov et al., 2005;
Adams et al., 2017; Kishimoto et al., 2017; Kong et al., 2017; Chen et al.,
2016; Seung et al., 2008; Shaw et al., 2008; Liu et al., 2016; Svanström et
al., 2013; Salati et al., 2017).
The goal of SM is then to find a matching in which patients that are similar
according to a chosen similarity measure, e.g. the Mahalanobis distance or the
propensity score, are compared with each other. From such a matching a
reliable conclusion about the preferable therapy, under the circumstances
defined by the underlying model and hypothesis, can be drawn, while the bias
potentially introduced by comparing dissimilar patients is reduced. If the
matching is also maximal in the sense that all possibly matchable patients,
i.e. patients which are similar to other patients, are matched, the matching
is called a maximal matching.
Regrettably, minimizing bias is not the only key aspect to be considered:
maximal matching can lead to the pruning of patients, thus possibly ignoring
relevant information contained in the dataset. As these matchings are usually
not unique, the information they carry can vary considerably between
matchings, and thus the conclusions drawn from them can differ substantially.
Hence one needs to find a matching that is optimized with regard to both bias
and pruning.
As the underlying distribution of the dataset and the influence of the therapy
is unknown, finding the optimal maximal patient-to-patient matching is
difficult. Based on the foundation laid by Rosenbaum and Rubin (1983) for
propensity score matching (PSM), many different methods have been proposed to
deal with this problem, e.g. nearest neighbour matching (Rubin, 1972),
stratification matching on propensity scores (Anderson et al., 1980), caliper
matching (Stuart, 2010), optimal matching (Rosenbaum, 1989), coarsened
matching (Iacus et al., 2012) or full matching (Hansen, 2004; Hansen and
Klopfer, 2012). A comprehensive overview can be found for example in the
article from Stuart (2010).
The aforementioned methods inspect either one or several matchings and try to
cope with the problem of not being able to calculate all possible maximal
matchings through numerical or stochastic methods (Stuart, 2010), and have
some limitations which have been investigated lately (King and Nielsen, 2019;
Austin, 2011; Pearl, 2000). Therefore, researchers find themselves sometimes
in the difficult position where different matchings, while being statistically
sound, can suggest different conclusions.
Due to the aforementioned reasons we investigate a slightly different approach
in this article. After showing that considering only one or several patient-
to-patient matchings over the whole dataset is inadequate as exponentially
many different patient-to-patient matchings exist, implying that high variance
in the deduced conclusions is possible, we proceed to propose a method that
matches clusters of patients. The goal is to develop a method which uses all
information contained in the dataset and considers all possible maximal
matchings of a dataset in accordance with a chosen similarity, thus leading to
low confounding and low variance. The proposed method has the desirable
property of calculating a matching in accordance with the expected value of
all possible maximal exact matchings in the dataset, while being fast and
deterministic; it can therefore support decision-making processes, as no
additional errors are introduced during the matching process.
A short note on terminology: We use the terms therapy group, patient,
covariate vector and observed value for clarity and simplicity of
presentation; they can be substituted for any type of group, member of said
group, properties of the member and observed result for the member.
### 1.1 Contribution
We investigate the quantity of possible exact matchings and show that, even
under the restriction that only exact matches are considered, many possible
matchings exist. For this reason we propose a different approach, which uses
all available information in a given dataset for an exact matching and show
that the proposed method has desirable properties for SM. We confirm our
theoretical contributions by evaluating a complete survey and comparing the
proposed method with the de-facto standard for SM in such applications, namely
propensity score matching (PSM).
### 1.2 Structure of the Remainder
We proceed to show that even in an exact matching context multiple possible
matchings exist and propose an algorithm to cope with this problem and prove
desirable properties of the algorithm. In Section 3, we describe and confirm
the findings made in the previous section based on a comparison of the
proposed algorithm with an established method for SM on a complete survey. We
conclude with Section 4, which is used to summarize our results as well as the
algorithm’s benefits and drawbacks and to give an outlook for potential
further development.
## 2 Deterministic exact matching
As stated in the introduction (Section 1), the goal of SM in a general setting
is to match as many patients between two groups with regards to a chosen
similarity or distance measure as possible. One can immediately distinguish
two cases:
* •
Exact matching (Stuart, 2010; Iacus et al., 2012): Only members of different
sets with equal covariate vectors are matched.
* •
$\delta$-matching or caliper/inexact matching (Stuart, 2010): Members of
different sets can be matched if their difference with regard to a chosen
similarity measure is smaller than $\delta$.
Thus exact matching is a special case of $\delta$-matching for $\delta=0$ and
one can define potentially matchable patients in the following way.
###### Definition 2.1 (Matchable Patients)
Let $p$ and $q$ be two patients from different therapy groups and let
$d(\,\cdot\,,\,\cdot)$ be an arbitrary similarity measure. Then $p$ and $q$
are matchable patients for a $\delta$-matching, if $d(p,\,q)\leq\delta$.
We will only consider exact matches, $\delta=0$, for the remainder of this
manuscript. We refer to Section 4 for a discussion of a potential extension of
this method to $\delta$-matchings with $\delta>0$.
We first formalize an observation made during statistical matching, which
states that the number of all different possible exact matchings may be
prohibitively large, so that not all matchings can be computed in acceptable
time. Based on this we proceed by showing that it is possible to calculate a
matching on a dataset in accordance with the expectancy of the observed
values, if one uses clusters instead of patients and show that the proposed
algorithm has additional desirable properties for statistical matching.
### 2.1 Preparations
The general idea of the proposed method is to cluster patients with the same
covariate vector for each therapy group and generate a matching between both
therapy groups for the constructed clusters.
Clustering of patients $p$ and $q$ on their covariate vectors requires a
similarity measure. In the remainder, we will use the $L_{1}$ distance measure
(also known as Manhattan metric)
$d(p,\,q)\coloneqq\sum_{i=1}^{s}|cv_{i}(p)-cv_{i}(q)|,$ (2.1)
but other distance measures (possibly defined through similarity measures) are
applicable as well as long as they can be calculated for every pair of
patients.
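As an illustration, equation (2.1) can be evaluated directly for covariate vectors given as lists; the following minimal Python sketch (the helper name `l1_distance` is ours, not from the source) computes it for a pair of patients:

```python
def l1_distance(cv_p, cv_q):
    """L1 (Manhattan) distance between two covariate vectors, equation (2.1)."""
    assert len(cv_p) == len(cv_q), "covariate vectors must have equal length s"
    return sum(abs(a - b) for a, b in zip(cv_p, cv_q))

# For exact matching, two patients are matchable iff their distance is 0.
print(l1_distance([1, 0, 3], [1, 0, 3]))  # 0 -> exact match
print(l1_distance([1, 0, 3], [1, 1, 2]))  # 2 -> no exact match
```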
###### Remark 2.2
Note that two patients $p$ and $q$ have identical covariate vectors if and
only if $d(p,\,q)=0$. Thus for exact matching $d(p,\,q)=0$ is required for all
matchable patients $p$ and $q$.
Together with a distance measure we can define the notion of clusters.
###### Definition 2.3 (Cluster of patients)
In an SM context, a cluster of patients from one therapy group $H$ is a non-
empty set $C_{H}$ of patients with properties
1. 1.
$d(p,\,q)=0\,\,\forall p,\,q\in C_{H}$.
2. 2.
For $p\in C_{H}$ it holds that $\nexists q\in H$ such that $q\notin C_{H}$ and
$d(p,\,q)=0$.
3. 3.
If $p\in C_{H}$, then the assigned covariate vector of $C_{H}$ is $cv(p)$.
Hence, clusters have the following properties:
###### Proposition 2.4
Let $H$ be a therapy group in an SM context, then the following holds for
clusters in this therapy group:
1. 1.
Every patient in $H$ belongs to exactly one cluster.
2. 2.
Every cluster can have exactly one covariate vector assigned to it.
3. 3.
Any two clusters in $H$ have different assigned covariate vectors.
###### Proof
We prove every characteristic individually:
1. 1.
As $d(p,\,p)=0,\,\forall p\in H$, all patients belong to at least one cluster.
Thus it remains to show that there exists no patient $p\in H$ belonging to two
different clusters $C_{H,1}$ and $C_{H,2}$. Assume that $p\in C_{H,1}\cap
C_{H,2}$ and let $q_{1}\in C_{H,1}$ and $q_{2}\in C_{H,2}$ be two patients in
$C_{H,1}$ and $C_{H,2}$ respectively. Then it holds by Definition 2.3.1 that
${d(p,\,q_{1})=0=d(p,\,q_{2})}$ and therefore $d(q_{1},\,q_{2})=0$. This is a
contradiction to Definition 2.3.2 and therefore every patient belongs to
exactly one cluster.
2. 2.
As clusters are non-empty sets of patients every cluster has at least one
covariate vector assigned to it. Therefore assume that cluster $C$ has two
assigned covariate vectors $v_{1}$ and $v_{2}$ differing in at least one
entry. Then by Definition 2.3.3 it holds that there exist patients $p,\,q\in
C$ such that $v_{1}=cv(p)$ and $v_{2}=cv(q)$. As $v_{1}\neq v_{2}$ holds by
assumption it follows that $d(p,\,q)\neq 0$, contradicting Definition
2.3.1 as $p,\,q\in C$.
3. 3.
Assume that different clusters $C_{H,1}$ and $C_{H,2}$ have the same assigned
covariate vector. This implies that $d(p,\,q)=0,\,\forall p\in C_{H,1},\,q\in
C_{H,2}$ and is a contradiction to Definition 2.3.2.
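Since exact matching only requires equality of covariate vectors, the clusters of Definition 2.3 can be computed by hashing the vectors. The following Python sketch (function name and data layout are illustrative) groups one therapy group accordingly; by Proposition 2.4 every patient lands in exactly one cluster:

```python
from collections import defaultdict

def cluster_patients(group):
    """Group patients with identical covariate vectors into clusters.

    `group` is a list of (patient_id, covariate_vector) pairs. For exact
    matching, d(p, q) = 0 iff the vectors are equal, so keying on the
    vector as a tuple yields exactly the clusters of Definition 2.3.
    """
    clusters = defaultdict(list)
    for patient_id, cv in group:
        clusters[tuple(cv)].append(patient_id)
    return dict(clusters)

A = [("p1", [1, 0]), ("p2", [1, 0]), ("p3", [0, 1])]
print(cluster_patients(A))  # {(1, 0): ['p1', 'p2'], (0, 1): ['p3']}
```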
Because of Proposition 2.4, clusters can be assigned unique covariate vectors.
Thus the similarity of clusters $C_{A}$ and $C_{B}$ – for therapy groups $A$
and $B$ respectively – can be denoted similarly to patients as
$d(C_{A},\,C_{B})$. This leads to the following observation in regards to
exact matching:
For an arbitrary dataset $D=(A,\,B)$ and a cluster $C_{H}$ belonging to a
therapy group ${H\in\\{A,\,B\\}}$ only the following situations can occur:
1. 1.
For $C_{A}$ there exists one cluster $C_{B}$ with $d(C_{A},\,C_{B})\equiv 0$.
2. 2.
For $C_{A}$ there exists no cluster $C_{B}$ with $d(C_{A},\,C_{B})\equiv 0$.
3. 3.
For $C_{B}$ there exists no cluster $C_{A}$ with $d(C_{A},\,C_{B})\equiv 0$.
Only situation (1) is relevant for exact matching of single patients or
clusters, as exact matching can only occur for clusters with a corresponding
cluster in the opposite therapy group. Furthermore if situation (1) occurs,
then the match is unique in regards to the clusters.
###### Proposition 2.5
Let $C_{A}$ and $C_{B}$ be clusters from different therapy groups, then
${d(C_{A},\,C_{B})\equiv 0}$ holds iff the two clusters have the same assigned
covariate vector.
###### Proof
Let $C_{A}$ and $C_{B}$ be clusters from different therapy groups and
${d(C_{A},\,C_{B})\equiv 0}$. As every cluster has exactly one assigned
covariate vector it remains to show ${cv(C_{A})\equiv cv(C_{B})}$:
$d(C_{A},\,C_{B})\equiv
0\Leftrightarrow\sum_{i=1}^{s}|cv_{i}(C_{A})-cv_{i}(C_{B})|\equiv
0\Leftrightarrow cv_{i}(C_{A})\equiv cv_{i}(C_{B}),\,\forall 1\leq i\leq s.$
(2.2)
Thus both clusters have the same assigned covariate vector. The reverse
direction follows as all implications in equation (2.2) are given through
equivalence.
Motivated by the previous proposition and by the definition of matchable
patients (Definition 2.1) we can define exact matchable clusters.
###### Definition 2.6 (Matchable Cluster)
Two clusters $C_{A}$ and $C_{B}$ of $A$ and $B$ respectively are exact
matchable clusters iff $d(C_{A},\,C_{B})\equiv 0$.
Equipped with Definitions 2.3 and 2.6 as well as Propositions 2.4 and 2.5 we
can determine the exact number $n$ of pairs of exact matchable clusters.
Additionally, we can calculate the number of all different possible matchings
between two exact matchable clusters (Proposition 2.7) and for whole datasets
(Proposition 2.8). Note that the necessity to calculate all possible matchings
arises as observed results between different maximal matchings can show a
large variation, see Section 3, even if the dataset is large and matches are
exact (Table 1).
###### Proposition 2.7
Let $C_{A}$ and $C_{B}$ be exact matchable clusters of $A$ and $B$ with
$|C_{A}|=\mathfrak{a}$ and $|C_{B}|=\mathfrak{b}$ respectively, then
1. 1.
A set of exact matches $M$ between $A$ and $B$ with a maximal number of
matches includes $\min(\mathfrak{a},\,\mathfrak{b})$ exact matches from
$C_{A}$ and $C_{B}$.
2. 2.
For $C_{A}$ and $C_{B}$ there exists
$\binom{\max(\mathfrak{a},\,\mathfrak{b})}{\min(\mathfrak{a},\,\mathfrak{b})}$
sets with $\min(\mathfrak{a},\,\mathfrak{b})$ exact matches, where
$\binom{\max(\mathfrak{a},\,\mathfrak{b})}{\min(\mathfrak{a},\,\mathfrak{b})}$
denotes the binomial coefficient of $\max(\mathfrak{a},\,\mathfrak{b})$ and
$\min(\mathfrak{a},\,\mathfrak{b})$.
###### Proof
We can w.l.o.g. assume that $\mathfrak{a}\leq\mathfrak{b}$, as one can simply
substitute $\mathfrak{a}$ for $\mathfrak{b}$ and $\mathfrak{b}$ for
$\mathfrak{a}$ in the other case. Thus
$\min(\mathfrak{a},\,\mathfrak{b})=\mathfrak{a}$ and as $C_{A}$ and $C_{B}$
are exact matchable clusters, $\mathfrak{a}$ matchable pairs $(p,\,q)$ with
$p\in C_{A},\,q\in C_{B}$ exist. Due to Proposition 2.5, this is the maximal
matchable number of patients between $C_{A}$ and $C_{B}$ as $M$ was assumed to
be maximal. Assume now that only $a<\mathfrak{a}$ of these matchable pairs are
contained in $M$. This contradicts the maximality of $M$ as the remaining
matchable pairs could be added. This proves (1).
As $\mathfrak{a}\leq\mathfrak{b}$, it holds that every element from $C_{A}$ is
matched to elements of $C_{B}$ and these matches constitute $\mathfrak{a}$
many tuples of matched elements, in short an $\mathfrak{a}$-tuple. As
$|C_{B}|=\mathfrak{b}$, one can construct $\binom{\mathfrak{b}}{\mathfrak{a}}$
different matchings by selecting different elements of $C_{B}$ for the
matches.
Therefore, for two matchable clusters $C_{A}$ and $C_{B}$ there exist
$\binom{\max(\mathfrak{a},\,\mathfrak{b})}{\min(\mathfrak{a},\,\mathfrak{b})}$
sets with $\min(\mathfrak{a},\,\mathfrak{b})$ exact matches in general.
###### Proposition 2.8
Let $n$ be the number of exact matching clusters in $A$ and $B$ and let
$C_{A,j}$ and $C_{B,j}$ with $1\leq j\leq n$ be exact matching clusters of $A$
and $B$ with $|C_{A,\,j}|=\mathfrak{a}_{j}$ and $|C_{B,\,j}|=\mathfrak{b}_{j}$
respectively. Then the number of different exact matchings that exist on the
whole dataset is
$\prod_{j=1}^{n}\binom{\max(\mathfrak{a}_{j},\,\mathfrak{b}_{j})}{\min(\mathfrak{a}_{j},\,\mathfrak{b}_{j})}$
(2.3)
###### Proof
By Proposition 2.7 there exist
$\binom{\max(\mathfrak{a}_{j},\,\mathfrak{b}_{j})}{\min(\mathfrak{a}_{j},\,\mathfrak{b}_{j})}$
exact maximal matchings for every pair of exact matching clusters $1\leq j\leq
n$. Therefore we have
$\prod_{j=1}^{n}\binom{\max(\mathfrak{a}_{j},\,\mathfrak{b}_{j})}{\min(\mathfrak{a}_{j},\,\mathfrak{b}_{j})}$
maximal exact matchings in total.
Note that the number of different maximal matchings in Proposition 2.7 is
smaller for exact matchings than it is for $\delta$-matchings with $\delta>0$
or one-to-many matchings.
Proposition 2.7 shows that even for small datasets the naive way of
calculating all possible matchings can be infeasible due to the binomial
coefficient. One could now calculate only one or several matchings, but these
will possibly neglect important parts of available information in the dataset.
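The combinatorial explosion described by equation (2.3) is easy to make concrete. A short Python sketch (the function name is ours) evaluates the product of binomial coefficients for a list of cluster-size pairs:

```python
from math import comb, prod

def number_of_exact_matchings(cluster_sizes):
    """Number of distinct maximal exact matchings, equation (2.3).

    `cluster_sizes` is a list of (a_j, b_j) pairs for the n pairs of
    exact matchable clusters.
    """
    return prod(comb(max(a, b), min(a, b)) for a, b in cluster_sizes)

# Even a handful of moderately sized clusters yields far too many
# matchings to enumerate one by one:
print(number_of_exact_matchings([(10, 40), (5, 25), (8, 30)]))
```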
### 2.2 A deterministic balancing exact matching algorithm
Motivated by the findings of the previous subsection and the practical
necessity to use all available information in a given dataset, we investigate
the outcomes of an observed value in a dataset regarding clusters. For
simplicity of presentation we henceforth assume that $\mathfrak{o}(x)$ is in
$\\{0,\,1\\}$, see Remark 2.10 for a short discussion of other settings.
###### Definition 2.9 (Relative frequency of an observed value in a cluster)
Let $C_{H}\coloneqq\\{x_{1},\,\ldots,\,x_{|C_{H}|}\\}$ be a cluster in therapy
group $H$. Then the relative frequency of the observed value
$\mathfrak{o}(x_{u})=1$ in $C_{H}$ is defined as
$F(C_{H})\coloneqq\frac{1}{|C_{H}|}\sum_{u=1}^{|C_{H}|}\mathfrak{o}(x_{u}).$
(2.4)
###### Remark 2.10 (Different intervals for observational values)
Besides simplicity of presentation, the assumption that $\mathfrak{o}(x)$ is
in $\\{0,\,1\\}$ has several advantages and can be easily generalized:
1. 1.
The relative frequency of the observed value $\mathfrak{o}(x_{u})=0$ in
$C_{H}$ is $1-F(C_{H})$.
2. 2.
Any binary setting with $\tilde{\mathfrak{o}}\in\\{0,\,K\\},\,K\in\mathbb{R}$
can be mapped to $\mathfrak{o}\in\\{0,\,1\\}$.
3. 3.
For non-binary outcomes $\mathfrak{o}\in\\{K_{1},\,K_{2},\,\ldots\\}$, one has
to consider the modification
$F^{(K_{i})}(C_{H})=\frac{1}{|C_{H}|}\sum_{u=1}^{|C_{H}|}\chi\big{(}\mathfrak{o}(x_{u})=K_{i}\big{)}$,
where $\chi\big{(}\mathfrak{o}(x_{u})=K_{i}\big{)}$ denotes the indicator
function, i.e.
$\chi\big{(}\mathfrak{o}(x_{u})=K_{i}\big{)}=\begin{cases}1,\quad\text{if
}\mathfrak{o}(x_{u})=K_{i}\\\ 0,\quad\text{otherwise}\end{cases}$
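Equation (2.4) and the generalization of Remark 2.10.3 are one-liners in code. A minimal Python sketch (function names are ours) over a cluster's list of outcomes:

```python
def relative_frequency(outcomes):
    """F(C_H), equation (2.4): share of patients in the cluster with outcome 1."""
    return sum(outcomes) / len(outcomes)

def relative_frequency_of(outcomes, k):
    """F^(K_i)(C_H) from Remark 2.10.3 for a non-binary outcome value k."""
    return sum(1 for o in outcomes if o == k) / len(outcomes)

print(relative_frequency([1, 0, 1, 1]))             # 0.75
print(relative_frequency_of(["a", "b", "a"], "a"))  # 2/3
```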
The relative frequency of an observed value in a cluster can be seen as the
relative outcome value for the cluster. In the context of statistical matching
the observed value data of patients should only contribute to the final
conclusion if the whole cluster can be matched. Regrettably this is not the
case in general as a cluster
$C_{A}\coloneqq\\{x_{1},\,\ldots,\,x_{\mathfrak{a}}\\}$ does not necessarily
have a matching cluster. Additionally even if a matching cluster
${C_{B}\coloneqq\\{z_{1},\,\ldots,\,z_{\mathfrak{b}}\\}}$ for $C_{A}$ exists
only an accumulated observed value of $\min(\mathfrak{a},\,\mathfrak{b})$
patients should contribute to the frequency evaluated in the end to prevent a
distortion of the end result by large clusters. A first approach, which we
will refine subsequently, to prevent this distortion leads to the definition
of the relative matching frequency of an observed value.
###### Definition 2.11 (Relative matching frequency of an observed value)
Let
$C_{A}\coloneqq\\{x_{1},\,\ldots,\,x_{\mathfrak{a}}\\},\,C_{B}\coloneqq\\{z_{1},\,\ldots,\,z_{\mathfrak{b}}\\}$
be two exact matching clusters. Then the relative matching frequency of an
observed value $\mathfrak{o}(x_{v})=1$ for $C_{A}$ is defined as
$F_{M}(C_{A})\coloneqq\frac{1}{\mathfrak{a}}\sum_{v=1}^{\min(\mathfrak{a},\,\mathfrak{b})}\mathfrak{o}(x_{v}).$
(2.5)
Using the relative matching frequency of observed values to evaluate the final
outcome of a dataset in terms of an observed value results in incomplete usage
of information as only $\min\\{\mathfrak{a},\,\mathfrak{b}\\}$ patients are
matched and thus only the observed values of
$\min\\{\mathfrak{a},\,\mathfrak{b}\\}$ patients affect the outcome. This
problem is independent of the matching method used, if the method does not
consider all possible matchings. Note that many commonly used matching methods
as described e.g. in (Stuart, 2010) implicitly consider the relative matching
frequency $F_{M}(C_{A})$ as a result after a single matching realization as
can be seen by interpreting the used patients as clusters appropriate to the
chosen $\delta$.
We change the perspective to show that usage of the full information available
is possible. For this we begin by considering one realization of a maximal
matching between clusters as the result of a random experiment. As all
patients in a cluster are the same with respect to their covariates, all
patients in the same cluster should have the same probability to be chosen in
a maximal matching to be matched to patients from a matching cluster. Thus
every possible maximal matching has the same probability to appear in a single
maximal matching experiment. Repeating the random experiment for maximal
matchings between two clusters results in a sequence of maximal matchings,
which we call a uniform sequence of matchings.
###### Definition 2.12 (Uniform sequence of matchings)
Let $C_{A}\coloneqq\\{x_{1},\,\ldots,\,x_{\mathfrak{a}}\\}$ and
$C_{B}\coloneqq\\{z_{1},\,\ldots,\,z_{\mathfrak{b}}\\}$ be two exact matching
clusters. An infinite sequence of matchings $M=(M_{1},\,M_{2},\,\ldots)$ is
called a uniform sequence of matchings iff every possible matching between
patients of $C_{A}$ and $C_{B}$ has the same probability to be drawn as a
matching $M_{r}$ in the sequence.
With these notions we can show that the expectancy of the observed value over
all possible maximal matchings is a quantity whose value can be directly
calculated through a cluster matching. Proposition 2.13 examines this for the
case of two exact matching clusters.
###### Proposition 2.13
Let
$C_{A}\coloneqq\\{x_{1},\,\ldots,\,x_{\mathfrak{a}}\\},\,C_{B}\coloneqq\\{z_{1},\,\ldots,\,z_{\mathfrak{b}}\\}$
be two exact matching clusters and let $M$ be a uniform sequence of maximal
matchings between $C_{A}$ and $C_{B}$. Then
1. 1.
Every $M_{k}$ contains $\min(\mathfrak{a},\,\mathfrak{b})$ matching pairs and
2. 2.
The expectancy of the relative matching frequency over the sequence of uniform
matchings for $C_{A}$ and $C_{B}$ can be calculated as
$\displaystyle\mathbb{E}({C_{A}})$
$\displaystyle\coloneqq\lim\limits_{r\to\infty}\frac{1}{r}\Big{(}\sum_{k=1}^{r}F_{M}(C_{A})_{k}\Big{)}=\frac{\min(\mathfrak{a},\,\mathfrak{b})}{\mathfrak{a}^{2}}\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(x_{v})\quad\text{and}$
(2.6) $\displaystyle\mathbb{E}({C_{B}})$
$\displaystyle\coloneqq\lim\limits_{r\to\infty}\frac{1}{r}\Big{(}\sum_{k=1}^{r}F_{M}(C_{B})_{k}\Big{)}=\frac{\min(\mathfrak{a},\,\mathfrak{b})}{\mathfrak{b}^{2}}\sum_{w=1}^{\mathfrak{b}}\mathfrak{o}(z_{w}).$
###### Proof
By Proposition 2.7, every exact matching with a maximal number of matches
includes $\min(\mathfrak{a},\,\mathfrak{b})$ pairs of $C_{A}$ and $C_{B}$.
Therefore every $M_{k}$ contains $\min(\mathfrak{a},\,\mathfrak{b})$ pairs of
patients from $C_{A}$ and $C_{B}$. For the second part we can w.l.o.g. assume
that $\mathfrak{a}\leq\mathfrak{b}$, as one can simply substitute in the other
case. As $\mathfrak{a}=\min(\mathfrak{a},\,\mathfrak{b})$ it follows that
$F_{M}(C_{A})_{k}=F_{M}(C_{A})=F(C_{A})=\frac{1}{\mathfrak{a}}\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(x_{v})=\frac{\mathfrak{a}}{\mathfrak{a}^{2}}\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(x_{v})$
for all $k$.
For $\mathfrak{a}=1$ it follows that exactly one patient
$\widetilde{z}_{k,\,w},\,w\in\\{1,\,\ldots,\,\mathfrak{b}\\}$ of $C_{B}$ gets
chosen in every maximal matching $M_{k}$. As all patients have the same
probability to be chosen the probability is $\frac{1}{\mathfrak{b}}$ for every
patient. Evaluating the limits and referring to the patients of $C_{B}$ chosen
in one realization of a maximal matching as $\widetilde{z}_{k,\,w}$ leads to:
$\lim\limits_{r\to\infty}\frac{1}{r}\Big{(}\sum_{k=1}^{r}F_{M}(C_{B})_{k}\Big{)}=\lim\limits_{r\to\infty}\frac{1}{r}\Big{(}\sum_{k=1}^{r}\frac{1}{\mathfrak{b}}\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(\widetilde{z}_{k,\,w})\Big{)}=\frac{1}{\mathfrak{b}^{2}}\sum_{w=1}^{\mathfrak{b}}\mathfrak{o}(z_{w}),$
(2.7)
where the second equation follows by the law of large numbers as all patients
have the same probability to be chosen in a maximal matching.
Now let $\mathfrak{a}>1$. Thus out of $\mathfrak{b}$ patients $\mathfrak{a}$
patients get matched and every patient has the same probability to be chosen
in a maximal matching $M_{k}$. Again by the law of large numbers in the second
equation it follows that
$\lim\limits_{r\to\infty}\frac{1}{r}\Big{(}\sum_{k=1}^{r}F_{M}(C_{B})_{k}\Big{)}=\lim\limits_{r\to\infty}\frac{1}{r}\Big{(}\sum_{k=1}^{r}\frac{1}{\mathfrak{b}}\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(\widetilde{z}_{k,\,v})\Big{)}=\frac{\mathfrak{a}}{\mathfrak{b}^{2}}\sum_{w=1}^{\mathfrak{b}}\mathfrak{o}(z_{w}).$
(2.8)
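Equation (2.6) can be checked empirically. The following Monte Carlo sketch (all names are ours) draws uniform maximal matchings for $C_{B}$ by sampling $\min(\mathfrak{a},\,\mathfrak{b})$ of the $\mathfrak{b}$ patients without replacement and averages the relative matching frequency; the empirical mean approaches the closed-form expectancy:

```python
import random

def simulate_expected_matching_frequency(outcomes_b, a, trials=20000, seed=0):
    """Monte Carlo check of equation (2.6) for C_B: draw uniform maximal
    matchings (min(a, b) of the b patients, chosen uniformly without
    replacement) and average F_M(C_B)_k over the trials."""
    rng = random.Random(seed)
    b = len(outcomes_b)
    total = 0.0
    for _ in range(trials):
        chosen = rng.sample(outcomes_b, min(a, b))
        total += sum(chosen) / b  # F_M(C_B)_k with normalization 1/b
    return total / trials

outcomes_b = [1, 1, 0, 0, 0]  # b = 5, two patients with outcome 1
a = 3
empirical = simulate_expected_matching_frequency(outcomes_b, a)
exact = min(a, len(outcomes_b)) / len(outcomes_b) ** 2 * sum(outcomes_b)
print(empirical, exact)  # both close to 3/25 * 2 = 0.24
```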
It is straightforward to generalize Proposition 2.13 to uniform sequences of
matchings over therapy groups containing several clusters, as a single
realization of maximal matchings between clusters is independent of maximal
matchings between other clusters.
###### Proposition 2.14
Let $n$ be the number of exact matching clusters and let $C_{A,j}$ and
$C_{B,j}$ with $1\leq j\leq n$ be exact matching clusters of $A$ and $B$ with
$|C_{A,\,j}|=\mathfrak{a}_{j}$ and ${|C_{B,\,j}|=\mathfrak{b}_{j}}$
respectively. Let $M$ be the uniform sequence of maximal exact matchings
between all clusters. Then
1. 1.
Every element $M_{k}$ of $M$ contains
$|M_{k}|=\sum_{j=1}^{n}\min(\mathfrak{a}_{j},\,\mathfrak{b}_{j})$ (2.9)
matches.
2. 2.
For the expectancy of the relative matching frequencies of observed values for
$A$ of any maximum exact matching it holds that
$\mathbb{E}_{A}\coloneqq\lim\limits_{r\to\infty}\frac{1}{r}\sum_{k=1}^{r}\Big{(}\sum_{j=1}^{n}F_{M}(C_{A,\,j})_{k}\Big{)}=\sum_{j=1}^{n}\mathbb{E}(C_{A,\,j}),$
(2.10)
with $\widetilde{x}_{j,\,k,\,v},\,{v\in\\{1,\,\ldots,\,\mathfrak{a_{j}}\\}}$
as the patients from cluster $C_{A,\,j}$ chosen in the $k$-th matching
$M_{k}$. Analogously it holds that
$\mathbb{E}_{B}\coloneqq\lim\limits_{r\to\infty}\frac{1}{r}\sum_{k=1}^{r}\Big{(}\sum_{j=1}^{n}F_{M}(C_{B,\,j})_{k}\Big{)}=\sum_{j=1}^{n}\mathbb{E}(C_{B,\,j}),$
###### Proof
Equation (2.9) holds as it is a summation over the equation from Proposition
2.13.1. Similarly equation (2.10) holds as a summation over equation (2.6) as
a maximal matching between two clusters is independent of a maximal matching
between other clusters.
The previous proposition shows that for each therapy group $A$ and $B$ not all
possible matches are realized during a single matching and that the relative
matching frequency of observed values for uniform sequences of matchings
converges to the expectancy of the observed results, which in this case is an
easily calculable value. An added benefit is that this value is unique for a
dataset and independent of the used matching method.
We are now prepared to propose an algorithm, which calculates the expectancy
of the relative matching frequencies of observed values in a deterministic
fashion. The full algorithm (Algorithm 4) is divided into three stages.
In its first stage clusters according to Definition 2.3 are generated
(Algorithm 1). In the second stage the algorithm will try to match as many
clusters as possible (Algorithm 2), while in the third stage it weights the
patients of each cluster in accordance to the size of its matching cluster and
its own size according to equation (2.6) (Algorithm 3).
Algorithm 1 Clustering step
1:Set $c=0$ and $is\\_clustered(x_{v})=0$ for all patients in $A$.
2:for each patient $x_{v},\,1\leq v\leq|A|$ do
3: if $is\\_clustered(x_{v})\equiv 0$ then
4: Set $c=c+1$, $C_{A,\,c}:=\\{x_{v}\\}$ and $is\\_clustered(x_{v})=1$.
5: else
6: continue
7: end if
8: for each patient $x_{u}$ with $v<u\leq|A|$ and $is\\_clustered(x_{u})\equiv
0$ do
9: if $d(x_{u},\,C_{A,\,c})\equiv 0$ then
10: Set $C_{A,\,c}=C_{A,\,c}\cup\\{x_{u}\\}$ and $is\\_clustered(x_{u})=1$
11: end if
12: end for
13:end for
14:Set $n_{A}=c$.
15:Repeat steps 1 – 13 for $B$ and set $n_{B}=c$.
16:return
$C_{A,\,1},\,\ldots,\,C_{A,n_{A}},\,C_{B,\,1},\,\ldots,\,C_{B,n_{B}}$.
Algorithm 2 Matching step
1:Set $i=1$.
2:for every cluster $C_{A,\,g},\,1\leq g\leq n_{A}$ do
3: Search for a cluster $C_{B,\,h},\,1\leq h\leq n_{B}$, with $d(C_{A,\,g},\,C_{B,\,h})\equiv 0$.
4: if a cluster $C_{B,\,h}$ was found in the previous step then
5: Create matching set $M_{i}=\emptyset$.
6: Set $M_{i}=\\{C_{A,\,g},\,C_{B,\,h}\\}$.
7: Set $i=i+1$.
8: end if
9:end for
10:return Matching sets $M_{1},\,\ldots,\,M_{n}$.
Algorithm 3 Weighting step
1:Set $w(C_{A,\,g})=0\,\,\forall 1\leq g\leq n_{A}$ and
$w(C_{B,\,h})=0\,\,\forall 1\leq h\leq n_{B}$
2:for all $C_{A,\,g},\,1\leq g\leq n_{A}$, with $M_{g}\neq\emptyset$ do
3: Search for $C_{B,\,h},\,1\leq h\leq n_{B}$, as the previously calculated
matching cluster of $C_{A,g}$.
4: Calculate $S_{A,\,g}:=S_{B,\,h}:=\min\\{|C_{A,\,g}|,\,|C_{B,\,h}|\\}$.
5: Compute $w(C_{A,\,g}):=S_{A,\,g}/|C_{A,\,g}|^{2}$ and
$w(C_{B,\,h}):=S_{B,\,h}/|C_{B,\,h}|^{2}$.
6:end for
7:Compute Min-weighted results: $\displaystyle R_{A}$ $\displaystyle:=$
$\displaystyle\sum_{g=1}^{n_{A}}\big{[}w(C_{A,\,g})\sum_{v=1}^{|C_{A,\,g}|}\mathfrak{o}(x_{v,\,g})\big{]},$
(2.11) $\displaystyle R_{B}$ $\displaystyle:=$
$\displaystyle\sum_{h=1}^{n_{B}}\big{[}w(C_{B,\,h})\sum_{w=1}^{|C_{B,\,h}|}\mathfrak{o}(z_{w,\,h})\big{]},$
(2.12) where $x_{v,\,g}\in C_{A,\,g}$ and $z_{w,\,h}\in C_{B,\,h}$.
8:return weighted results $R_{A},\,R_{B}$.
Linked together, Algorithms 1 – 3 form the DeM algorithm.
Algorithm 4 Deterministic balancing score exact matching algorithm (DeM)
1:Cluster the patients with Algorithm 1 for $A$ and $B$ into
$C_{A,\,1},\,\ldots,\,C_{A,n_{A}},\,C_{B,\,1},\,\ldots,\,C_{B,n_{B}}$.
2:Compute matchings sets $M_{1},\,\ldots,\,M_{n}$ through application of the
matching Algorithm 2 on the clusters
$C_{A,\,1},\,\ldots,\,C_{A,n_{A}},\,C_{B,\,1},\,\ldots,\,C_{B,n_{B}}$.
3:Compute the weighted result with Algorithm 3 for $M_{1},\,\ldots,\,M_{n}$
4:return Weighted results $R_{A},R_{B}$ and the set of matched clusters $M$.
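The three stages above can be condensed into a short Python sketch. This is an illustrative reimplementation under the assumption of binary outcomes and exact matching, not the authors' reference code; hash-based clustering replaces the pairwise loop of Algorithm 1, which changes the runtime but not the result:

```python
from collections import defaultdict

def dem(group_a, group_b):
    """Minimal sketch of the DeM algorithm (Algorithms 1-4).

    Each group is a list of (covariate_vector, outcome) pairs with
    outcome in {0, 1}. Returns the min-weighted results R_A and R_B
    of equations (2.11) and (2.12).
    """
    def clusters(group):  # Algorithm 1: cluster by identical covariate vector
        c = defaultdict(list)
        for cv, outcome in group:
            c[tuple(cv)].append(outcome)
        return c

    ca, cb = clusters(group_a), clusters(group_b)
    r_a = r_b = 0.0
    for cv in ca.keys() & cb.keys():  # Algorithm 2: exact matchable clusters
        a, b = ca[cv], cb[cv]
        s = min(len(a), len(b))       # Algorithm 3: weight by s / |C|^2
        r_a += s / len(a) ** 2 * sum(a)
        r_b += s / len(b) ** 2 * sum(b)
    return r_a, r_b

A = [([1, 0], 1), ([1, 0], 0), ([0, 1], 1)]
B = [([1, 0], 1), ([2, 2], 0)]
print(dem(A, B))  # (0.25, 1.0): only the (1, 0)-clusters match
```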
As shown in Proposition 2.14, the DeM algorithm calculates the expectancy
value for the uniform sequence of exact maximal matchings for a given dataset
(equations (2.10) and respectively (2.11) or (2.12)), and uses all
information contained in the dataset available for an exact matching
(equations (2.9) and steps 4 – 8 of Algorithm 2 in conjunction with Algorithm
1). This is summarized in the following theorem.
###### Theorem 2.15 (Matching properties of the DeM algorithm)
The DeM algorithm, Algorithm 4,
1. 1.
matches all possible exact matches and
2. 2.
produces exactly one matching result in accordance to the expectancy value of
all possible matches in the dataset.
Additionally one can prove that the proposed DeM algorithm (Algorithm 4) is
fast and deterministic.
###### Theorem 2.16
The DeM algorithm (Algorithm 4) is a deterministic algorithm and has a runtime
of $O(|A|\cdot|B|\cdot s+|A|^{2}+|B|^{2})$, where $s$ is the dimension of the
covariate vectors.
###### Proof
An algorithm is deterministic if given a particular input it will always
produce the same output. Algorithm 4 takes a dataset as input and will, in the
case of an exact matching, always produce the same clusters during step 1. As
the same clusters were produced in step 1, the same clusters are matched in
step 2, because of Proposition 2.5. Step 3 calculates the weight of the
respective matched clusters, which is always the same since the matched
clusters from step 2 are the same. Thus Algorithm 4 is deterministic.
Evaluating the runtime can be achieved by looking at every step separately:
1. 1.
In Algorithm 1 the steps 1–14 have a runtime of $O(|A|^{2})$, while step 15 has
a runtime of $O(|B|^{2})$.
2. 2.
Algorithm 2 investigates every cluster in $B$ at most $|A|$ times and every
comparison between clusters needs $s$ operations to determine the distance.
Thus Algorithm 2 has a runtime of $O(|A|\cdot|B|\cdot s)$.
3. 3.
As $n_{A}\leq|A|$ and $n_{B}\leq|B|$ it follows that Algorithm 3 has a runtime
of $O(\max\\{|A|,\,|B|\\})$.
Adding all the runtimes together and making no further assumptions in regards
to the comparative size of $s,\,|A|$ and $|B|$, one concludes that Algorithm 4
has a runtime of $O(|A|\cdot|B|\cdot s+|A|^{2}+|B|^{2})$.
The exclusive applicability of the proposed algorithm to exact matches, which
can be seen as a limitation, will be discussed in Section 4.
### 2.3 Additional properties of the DeM algorithm
As shown in the previous subsection, the DeM algorithm calculates matchings in
accordance to the expected value over all possible matchings in the dataset.
This section discusses two additional properties of the DeM algorithm.
We first discuss an a posteriori property of the proposed cluster matching.
For statistical tests it is often necessary to calculate the variance inherent
to the final matching result. For clusters this can be achieved by looking at
matchings through the perspective of hypergeometric distributions:
Let
$C_{A}\coloneqq\\{x_{1},\,\ldots,\,x_{\mathfrak{a}}\\},\,C_{B}\coloneqq\\{z_{1},\,\ldots,\,z_{\mathfrak{b}}\\}$
be two exact matching clusters, then $\mathfrak{a}$ can be interpreted as the
population number of which $\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(x_{v})$ have
some property and $\min(\mathfrak{a},\,\mathfrak{b})$ of $\mathfrak{a}$
patients are chosen in this maximal matching. Thus a realization of a maximal
matching in the sense of relative matching frequencies can be interpreted as a
sample drawn from a hypergeometrically distributed random variable projected
onto the interval $[0,\,1]$.
The view of maximal matchings as realization of a drawing from a
hypergeometric distribution concurs with the results of Propositions 2.13 and
2.14 from the previous subsection as the expectancy of a hypergeometric
distribution for two exact matching clusters is $\mathbb{E}(C_{A})$ and
$\mathbb{E}(C_{B})$, respectively, where the terms $\mathfrak{a}$ and
$\mathfrak{b}$ stem from reversing the normalization done in the previous
subsection.
Taking this perspective allows to calculate the variance for maximal
matchings.
###### Proposition 2.17
Let
$C_{A}\coloneqq\\{x_{1},\,\ldots,\,x_{\mathfrak{a}}\\},\,C_{B}\coloneqq\\{z_{1},\,\ldots,\,z_{\mathfrak{b}}\\}$
be two exact matching clusters. Then the variance of matchings for $C_{A}$ is
given by
$\mathrm{Var}(C_{A})=\mathbb{E}(C_{A})\Big{(}1-\frac{\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(x_{v})}{\mathfrak{a}}\Big{)}\frac{\mathfrak{a}-\min(\mathfrak{a},\,\mathfrak{b})}{\mathfrak{a}-1}$
(2.13)
###### Proof
Viewing one realization of a cluster matching as the realization of a
hypergeometrically distributed random variable yields the probability of
picking one patient with $\mathfrak{o}(x_{v})\equiv 1$ as
$\frac{\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(x_{v})}{\mathfrak{a}}$. From
Proposition 2.13 it is known that
$\mathbb{E}(C_{A})=\frac{\min(\mathfrak{a},\,\mathfrak{b})}{\mathfrak{a}^{2}}\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(x_{v})$.
Thus using the formula for the variance of hypergeometric distributions yields
equation (2.13).
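Equation (2.13) can be evaluated directly from the quantities available after matching. A minimal Python sketch (the function name is ours; outcomes are assumed binary):

```python
def matching_variance(outcomes_a, b):
    """Var(C_A) from equation (2.13) for a cluster with outcome list
    `outcomes_a`, matched to an exact matchable cluster of size `b`."""
    a = len(outcomes_a)
    k = sum(outcomes_a)            # patients in C_A with outcome 1
    s = min(a, b)                  # number of matched patients
    expectancy = s / a ** 2 * k    # E(C_A) from Proposition 2.13
    if a == 1:
        return 0.0                 # single patient: no sampling variation
    return expectancy * (1 - k / a) * (a - s) / (a - 1)

# Fully matched clusters (min(a, b) = a) have variance 0:
print(matching_variance([1, 0, 1, 0], b=7))  # 0.0
print(matching_variance([1, 0, 1, 0], b=2))  # > 0
```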
As exact matchings of different clusters are independent from each other, the
variance for a matching over a therapy group follows immediately from the
previous Proposition 2.17.
###### Corollary 2.18
Let $n$ be the number of exact matching clusters in $A$ and $B$ and let
$C_{A,j}$ and $C_{B,j}$ with $1\leq j\leq n$ be exact matching clusters of $A$
and $B$ with $|C_{A,\,j}|=\mathfrak{a}_{j}$ and
$|C_{B,\,j}|=\mathfrak{b}_{j}$, respectively. Let $M$ be the uniform sequence
of maximal exact matchings between all clusters. Then the variance of therapy
group $A$ is
$\mathrm{Var}(A)=\sum_{j=1}^{n}\mathrm{Var}(C_{A,\,j}).$ (2.14)
For therapy group $B$ equation (2.14) holds similarly.
Note that all values used in Proposition 2.17 and Corollary 2.18 are available
after matching with the DeM algorithm and that the clusters are matched such
that no additional factor is introduced into the variance. Additionally note
that the variance of a cluster for which all patients are matched, i.e. for
$C_{A}$ with $\min(\mathfrak{a},\,\mathfrak{b})=\mathfrak{a}$, is $0$. The
same holds for clusters for which all patients have the same observed result,
as then either $\mathbb{E}(C_{A})=0$ or
$\Big{(}1-\frac{\sum_{v=1}^{\mathfrak{a}}\mathfrak{o}(x_{v})}{\mathfrak{a}}\Big{)}=0$.
Thus at least half of all matched clusters from $A$ and $B$ fulfil either the
$\min(\mathfrak{a},\,\mathfrak{b})=\mathfrak{a}$ or
$\min(\mathfrak{a},\,\mathfrak{b})=\mathfrak{b}$ condition, and therefore have
a variance of $0$ in the matching calculated by the DeM algorithm.
The second property we discuss relates to the calculation of clusters and the
initial matching procedure. Rosenbaum and Rubin (1983) defined the balancing
score $b(p)$ of a patient $p$ as a value assignment such that the conditional
distribution of $cv(p)$ given $b(p)$ is the same for patients $p$ from both
groups, $A$ and $B$. They have shown that $cv(p)$ is the finest balancing
score ((Rosenbaum and Rubin, 1983), section $2$) and that if treatment
assignment is strongly ignorable, then the difference between the two
respective treatments is an unbiased estimate of the average treatment effect
at that balancing score value ((Rosenbaum and Rubin, 1983), theorem $3$).
Since we use $cv(p)$ in our calculations, the result calculated by Algorithm 4
is an unbiased estimate of the average treatment effect, if the strong
ignorability assumption holds, additionally to the properties proven
previously.
## 3 Numerical example
### 3.1 Description of dataset and setup
We use an official complete survey to illustrate the effect of ignoring
different possible matchings as well as the results of the proposed DeM
algorithm.
The dataset used is the quality assurance dataset of isolated aortic valve
procedures in $2013$, which is an official mandatory dataset including all
aortic valve surgery cases in German hospitals. It contains patient
information (covariates) and mortality information (observed results) for
$17,427$ patients. For each patient the corresponding record contains $s=19$
covariate variables. The $17,427$ patients are divided into two therapy
groups. $9,848$ SAVR cases (replacement surgery of aortic valves) and $7,579$
TF-AVI cases (transcatheter/transfemoral implantation of aortic valves). The
cases were documented in accordance with §137 Social Security Code V (SGB V)
by hospitals registered under §108 SGB V. The data collection is compulsory
for all in-patient isolated aortic valve procedures in German hospitals. The
dataset is held by the Federal Joint Committee (Germany) and freely accessible
for researchers after application. Given this dataset, it can be safely
assumed that the data is independent in a statistical sense as patients were
only recorded once.
We proceed to compare the proposed DeM algorithm with two other approaches:
the de-facto standard for statistical matching, the $1$:$1$ propensity score
matching (PSM), as well as a bootstrapped variant of $1$:$1$ PSM by Austin and
Small (2014). For the regression based PSM algorithms, relevant regression
variables and their values have to be determined. For our example, we consider
the $H_{0}$-hypothesis: The mortality-rate does not depend on therapy, for
which the relevant variables are internationally validated in the Euroscore II
(http://www.euroscore.org). The corresponding regression values for this
setting are taken from the quality assurance dataset of isolated aortic valve
procedures. PSM itself was then calculated using functions provided by IBM
SPSS Statistics for Windows, Version $24.0$.
The decision to use $1$:$1$ PSM for comparison was made as it has the highest
number of possible matchings for fixed match-sizes and is the most commonly
used variant (Stuart, 2010). Furthermore, all possible $\mathfrak{v}:\mathfrak{w}$
matchings, for arbitrary $\mathfrak{v},\,\mathfrak{w}\in\mathbb{N}$ are
included in the set of possible $1$:$1$ matchings, while the reverse is
obviously not true for arbitrary $\mathfrak{v},\,\mathfrak{w}\in\mathbb{N}$
and any given dataset.
We additionally note that a match of two patients with $\delta>0$ in this
dataset would imply a difference of at least $5\%$ between patients in regards
to covariates as there are only $19$ covariate variables.
We explicitly stress that the purpose of this section is not the
recommendation of any kind of treatment, but the illustration of the usage of
results presented in Subsections 2.1–2.3.
### 3.2 Computational results
We computed the exact $1$:$1$ matchings with PSM and the proposed DeM
algorithm. For comparison purposes we present seven realizations of the non-
deterministic PSM (Sets 1–7). Out of the $9,848$ SAVR patients $3,361$ had at
least one exact TF-AVI match, while out of the $7,579$ TF-AVI patients,
$2,249$ patients had at least one exact SAVR match. Thus one third of all
patients could be exactly matched. As stated in the previous section, the null
hypothesis for calculation of the p-values was $H_{0}:$ The mortality-rate
does not depend on therapy. The results of some maximum matchings and the
differences between them are indicated in Table 1.
Table 1: Results for maximal matchings ($1{,}502$ exact matchings with regards to all $19$ Euroscore II variables, without replacement)

| Matching | SAVR in-hospital death, count | SAVR, % | TF-AVI in-hospital death, count | TF-AVI, % | $\chi^{2}$ test (2-tailed) p-value |
|---|---|---|---|---|---|
| PSM Set 1 | $73$ | $4.9\%$ | $33$ | $2.2\%$ | $<0.0001$ |
| PSM Set 2 | $73$ | $4.9\%$ | $34$ | $2.3\%$ | $<0.0001$ |
| PSM Set 3 | $42$ | $2.8\%$ | $32$ | $2.1\%$ | $0.2398$ |
| PSM Set 4 | $24$ | $1.6\%$ | $15$ | $1.0\%$ | $0.1470$ |
| PSM Set 5 | $73$ | $4.9\%$ | $50$ | $3.3\%$ | $0.0342$ |
| PSM Set 6 | $24$ | $1.6\%$ | $50$ | $3.3\%$ | $0.0021$ |
| PSM Set 7 | $73$ | $4.9\%$ | $15$ | $1.0\%$ | $0.0001$ |
| Uniform Bootstrapping ($10{,}000$ samples) | $52.47$ | $3.49\%$ | $32.10$ | $2.14\%$ | $0.0210$ (t-test)* |
| DeM | $53.01$ | $3.5\%$ | $32.32$ | $2.1\%$ | $0.0227$ |

*The t-test values for all sets without replacement are $<0.0001$, with replacement $0.0005$.
A maximal $1$:$1$ matching comprises $1,502$ matched pairs of patients,
meaning that $3,004$ patients were matched in any exact non-cluster matching.
We only considered maximal exact matches; therefore, every shown matching
matches the maximal possible number of patients and matches two patients if
and only if their covariates are equal, meaning that the standardized
differences in all presented sets are $0$. Still, the large discrepancy
between the observed results in the shown sets immediately indicates that many
maximal matchings exist, as shown in Proposition 2.8.
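Why many maximal matchings exist is easy to see once patients with identical covariate vectors are grouped into clusters: a cluster with $a$ treated and $b$ control patients contributes $\min(a,b)$ pairs to every maximal exact matching, but those pairs can be chosen in many ways. A toy sketch (hypothetical counts, not the paper's data):

```python
from collections import defaultdict
from math import comb, factorial

# Hypothetical patients: (covariate tuple, group), group 0 = SAVR, 1 = TF-AVI.
patients = [
    ((1, 0), 0), ((1, 0), 0), ((1, 0), 1),               # cluster A: 2 vs 1
    ((0, 1), 0), ((0, 1), 1), ((0, 1), 1), ((0, 1), 1),  # cluster B: 1 vs 3
    ((1, 1), 0),                                          # cluster C: unmatchable
]

clusters = defaultdict(lambda: [0, 0])
for cov, grp in patients:
    clusters[cov][grp] += 1

# Every maximal exact matching has the same size, but there are many of them:
# per cluster, comb(a, m) * comb(b, m) * m! ways to pick m = min(a, b) pairs.
size, count = 0, 1
for a, b in clusters.values():
    m = min(a, b)
    size += m
    count *= comb(a, m) * comb(b, m) * factorial(m)

print(size, count)  # → 2 6
```

Here all six maximal matchings pair the same number of patients, yet each can attach different observed outcomes, which is exactly the variability visible in Table 1.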
Calculating all possible maximal matchings would be a futile endeavour, and it
would not be necessary if the observed results did not vary between different
maximal matchings. Unfortunately, observed results can vary to a very high
degree, as can be seen in Table 1. They vary in such a way that one could even
draw different conclusions depending on the matching one calculated, see Sets
$1$, $6$ and $7$, while arguing that the calculated p-value is below a
threshold of $1\%$. For other sets one can see that they lie on either side of
the spectrum, favouring one therapy, the other, or neither. Even the
bootstrapping result for $10,000$ samples did not exhaust all possible maximal
matchings and is only similar to the result of DeM, which yields the
expectancy of all possible maximal exact matches in the given dataset, see
Theorem 2.15.
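The p-values above can be recomputed from the raw counts; the sketch below uses a plain Pearson $\chi^2$ test on the 2×2 mortality table, without continuity correction, so reported values may differ slightly from a corrected variant:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no correction) for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom: P(chi2 > x) = erfc(sqrt(x / 2)).
    return stat, math.erfc(math.sqrt(stat / 2))

# PSM Set 1: 73 of 1502 SAVR deaths vs 33 of 1502 TF-AVI deaths.
stat, p = chi2_2x2(73, 1502 - 73, 33, 1502 - 33)
print(round(stat, 2), p < 0.0001)  # highly significant, as in Table 1
```

For Set 1 this reproduces the $<0.0001$ entry of Table 1.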
As can be seen from the results, given one dataset and a non-deterministic
method one could obtain different results even when regression variables are
given, which leads to uncertainty in the evaluation process as fellow
researchers cannot reconstruct results obtained through statistical matching
based on regression methods. The proposed algorithm tries to resolve this
issue for exact matches. Even though this is a limitation in applicability,
exact cluster matches obtained through the DeM algorithm can be used at the
core of a matching, ascertaining that at least the exact matchable contingent
of a dataset is matched deterministically (Theorem 2.16) and corresponds to
the expectancy value of the exact matches (Theorem 2.15). Additionally, if in
large datasets no exact matches can be found, researchers should thoroughly
investigate for systematic differences in the therapy groups, as comparisons
of therapy-effects are not recommended if systematic differences exist.
## 4 Conclusion
We proposed an alternative deterministic exact matching method (DeM) for SM in
the exact case. The proposed method is based on matching clusters of patients
from therapy groups instead of matching patients to patients directly. The
presented cluster matching approach computed with the DeM algorithm (Algorithm
4) extracts all possible information from a given dataset as all possibly
matchable patients get matched and the constructed matching is in accordance
with the expectancy value of the dataset (Theorem 2.15). The constructed
matching also has the desirable property of having low variance while being in
accordance with the expectancy of all possible maximal exact matchings in the
dataset (Proposition 2.17 and Corollary 2.18).
As the proposed algorithm is deterministic and fast (Theorem 2.16) as well as
easy to implement, it can be used to produce exact matchings on datasets and
to discuss findings in a reliable way as the results are easily reproducible.
This is an important property, as it makes the subsequent decision-making
process more transparent and not susceptible to random events, such as random
draws not in accordance with the expectancy. Thus, discussions about the
conclusions drawn can be based on the dataset and the method used for data
acquisition, as uncertainties regarding the matching method are eliminated
through a proven guarantee that no additional errors are introduced by the
matching procedure.
The results from the numerical example, calculated on a dataset containing a
complete survey, validate the shown theoretical propositions and theorems. The
proposed method can furthermore be seen as an extension of state-of-the-art
methods, as results obtained through their usage would converge in the limit
toward the result calculated through the proposed algorithm.
The exclusive applicability of the proposed algorithm to exact matches might
be seen as a limitation. Then again, for small datasets, which are
statistically more prone to high variance in regards to two different
matchings, the proposed algorithm provides a reliable result for the exact
matches. For the case of large datasets, a practitioner should be wary if only
few exact matches exist or a matching result varies to a high degree from the
result given by the proposed deterministic algorithm as a systematic
difference between the two compared therapy groups might exist or measuring
inaccuracies for continuous covariates might be too large in the given dataset
to draw reliable conclusions.
Finally, we highlight that the algorithm can be used as an a priori method for
another matching method to extract all available information contained in
exact matchings, thereby ascertaining that at least the exactly matchable
patients of both therapy groups are matched deterministically and their
information is completely used. Further research will be dedicated to extend
the presented model towards $\delta$-matching for $\delta>0$ while keeping the
desirable properties presented in this paper and therefore extend the
applicability of the proposed method.
## References
* Abidov et al. (2005) Abidov, A., A. Rozanski, R. Hachamovitch, S. W. Hayes, F. Aboul-Enein, I. Cohen, J. D. Friedman, G. Germano, and D. S. Berman (2005). Prognostic significance of dyspnea in patients referred for cardiac stress testing. New England Journal of Medicine 353(18), 1889–1898. PMID: 16267320.
* Adams et al. (2017) Adams, N., K. S. Gibbons, and D. Tudehope (2017, Apr). Public-private differences in short-term neonatal outcomes following birth by prelabour caesarean section at early and full term. The Australian & New Zealand journal of obstetrics & gynaecology 57, 176–185.
* Anderson et al. (1980) Anderson, D. W., L. Kish, and R. G. Cornell (1980). On stratification, grouping and matching. Scandinavian Journal of Statistics 7(2), 61–66.
* Austin (2011) Austin, P. C. (2011, May). An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate behavioral research 46, 399–424.
* Austin and Small (2014) Austin, P. C. and D. S. Small (2014). The use of bootstrapping when using propensity-score matching without replacement: a simulation study. Statistics in Medicine 33(24), 4306–4319.
* Burden et al. (2017) Burden, A., N. Roche, C. Miglio, E. V. Hillyer, D. S. Postma, R. M. Herings, J. A. Overbeek, J. M. Khalid, D. van Eickels, and D. B. Price (2017). An evaluation of exact matching and propensity score methods as applied in a comparative effectiveness study of inhaled corticosteroids in asthma. Pragmatic and observational research 8, 15–30.
* Capucci et al. (2017) Capucci, A., A. De Simone, M. Luzi, V. Calvi, G. Stabile, A. D’Onofrio, S. Maffei, L. Leoni, G. Morani, R. Sangiuolo, C. Amellone, C. Checchinato, E. Ammendola, and G. Buja (2017, Sep). Economic impact of remote monitoring after implantable defibrillators implantation in heart failure patients: an analysis from the effect study. Europace : European pacing, arrhythmias, and cardiac electrophysiology : journal of the working groups on cardiac pacing, arrhythmias, and cardiac cellular electrophysiology of the European Society of Cardiology 19, 1493–1499.
* Chen et al. (2016) Chen, H.-Y., Q. Wang, Q.-H. Xu, L. Yan, X.-F. Gao, Y.-H. Lu, and L. Wang (2016). Statin as a combined therapy for advanced-stage ovarian cancer: A propensity score matched analysis. BioMed research international 2016, 9125238.
* Cho et al. (2017) Cho, S. H., G.-S. Choi, G. C. Kim, A. N. Seo, H. J. Kim, W. H. Kim, K.-M. Shin, S. M. Lee, H. Ryeom, and S. H. Kim (2017, Mar). Long-term outcomes of surgery alone versus surgery following preoperative chemoradiotherapy for early t3 rectal cancer: A propensity score analysis. Medicine 96, e6362.
* Dou et al. (2017) Dou, J.-P., J. Yu, X.-H. Yang, Z.-G. Cheng, Z.-Y. Han, F.-Y. Liu, X.-L. Yu, and P. Liang (2017, Apr). Outcomes of microwave ablation for hepatocellular carcinoma adjacent to large vessels: a propensity score analysis. Oncotarget 8, 28758–28768.
* Fukami et al. (2017) Fukami, H., Y. Takeuchi, S. Kagaya, Y. Ojima, A. Saito, H. Sato, K. Matsuda, and T. Nagasawa (2017). Perirenal fat stranding is not a powerful diagnostic tool for acute pyelonephritis. International journal of general medicine 10, 137–144.
* Gozalo et al. (2015) Gozalo, P., M. Plotzke, V. Mor, S. C. Miller, and J. M. Teno (2015). Changes in medicare costs with the growth of hospice care in nursing homes. New England Journal of Medicine 372(19), 1823–1831. PMID: 25946281.
* Hansen (2004) Hansen, B. (2004, 02). Full matching in an observational study of coaching for the SAT. Journal of the American Statistical Association 99, 609–618.
* Hansen and Klopfer (2012) Hansen, B. and S. Klopfer (2012, 01). Optimal full matching and related designs via network flows. Journal of Computational and Graphical Statistics 15.
* Iacus et al. (2012) Iacus, S., G. King, G. Porro, and J. Katz (2012, 12). Causal inference without balance checking: Coarsened exact matching. Political Analysis 20, 1–24.
* King and Nielsen (2019) King, G. and R. Nielsen (2019). Why propensity scores should not be used for matching. Political Analysis, 1–20.
* Kishimoto et al. (2017) Kishimoto, M., H. Yamana, S. Inoue, T. Noda, T. Myojin, H. Matsui, H. Yasunaga, M. Kawaguchi, and T. Imamura (2017, Jun). Sivelestat sodium and mortality in pneumonia patients requiring mechanical ventilation: propensity score analysis of a japanese nationwide database. Journal of anesthesia 31, 405–412.
* Kong et al. (2017) Kong, L., M. Li, L. Li, L. Jiang, J. Yang, and L. Yan (2017, Apr). Splenectomy before adult liver transplantation: a retrospective study. BMC surgery 17, 44.
* Kupper et al. (1981) Kupper, L. L., J. M. Karon, D. G. Kleinbaum, H. Morgenstern, and D. K. Lewis (1981). Matching in epidemiologic studies: Validity and efficiency considerations. Biometrics 37(2), 271–291.
* Lai et al. (2016) Lai, W.-H., C.-S. Rau, S.-C. Wu, Y.-C. Chen, P.-J. Kuo, S.-Y. Hsu, C.-H. Hsieh, and H.-Y. Hsieh (2016, Nov). Post-traumatic acute kidney injury: a cross-sectional study of trauma patients. Scandinavian journal of trauma, resuscitation and emergency medicine 24, 136.
* Lee et al. (2017) Lee, S. I., K. S. Lee, J. B. Kim, S. J. Choo, C. H. Chung, J. W. Lee, and S.-H. Jung (2017, Jun). Early antithrombotic therapy after bioprosthetic aortic valve replacement in elderly patients: A single-center experience. Annals of thoracic and cardiovascular surgery : official journal of the Association of Thoracic and Cardiovascular Surgeons of Asia 23, 128–134.
* Liu et al. (2016) Liu, Y., J. Han, T. Liu, Z. Yang, H. Jiang, and H. Wang (2016). The effects of diabetes mellitus in patients undergoing off-pump coronary artery bypass grafting. BioMed research international 2016, 4967275.
* McDonald et al. (2017) McDonald, J. S., R. J. McDonald, E. E. Williamson, D. F. Kallmes, and K. Kashani (2017, Jun). Post-contrast acute kidney injury in intensive care unit patients: a propensity score-adjusted study. Intensive care medicine 43, 774–784.
* McEvoy et al. (2016) McEvoy, R. D., N. A. Antic, E. Heeley, Y. Luo, Q. Ou, X. Zhang, O. Mediano, R. Chen, L. F. Drager, Z. Liu, G. Chen, B. Du, N. McArdle, S. Mukherjee, M. Tripathi, L. Billot, Q. Li, G. Lorenzi-Filho, F. Barbe, S. Redline, J. Wang, H. Arima, B. Neal, D. P. White, R. R. Grunstein, N. Zhong, and C. S. Anderson (2016). Cpap for prevention of cardiovascular events in obstructive sleep apnea. New England Journal of Medicine 375(10), 919–931. PMID: 27571048.
* Pearl (2000) Pearl, J. (2000, 01). Causality: Models, reasoning, and inference, second edition. Causality 29.
* Ray et al. (2012) Ray, W., K. Murray, K. Hall, P. Arbogast, and C. Stein (2012, 05). Azithromycin and the risk of cardiovascular death reply. The New England journal of medicine 366, 1881–90.
* Rosenbaum and Rubin (1983) Rosenbaum, P. and D. Rubin (1983, 04). The central role of the propensity score in observational studies for causal effects. Biometrika 70, 41–55.
* Rosenbaum (1989) Rosenbaum, P. R. (1989). Optimal matching for observational studies. Journal of the American Statistical Association 84(408), 1024–1032.
* Rubin (1972) Rubin, D. (1972). Estimating causal effects of treatments in experimental and observational studies. ETS Research Bulletin Series 1972(2), i–31.
* Rubin (1973) Rubin, D. B. (1973). Matching to remove bias in observational studies. Biometrics 29(1), 159–183.
* Salati et al. (2017) Salati, M., A. Brunelli, F. Xiume, M. Monteverde, A. Sabbatini, M. Tiberi, C. Pompili, R. Palloni, and M. Refai (2017, Jun). Video-assisted thoracic surgery lobectomy does not offer any functional recovery advantage in comparison to the open approach 3 months after the operation: a case matched analysisdagger. European journal of cardio-thoracic surgery : official journal of the European Association for Cardio-thoracic Surgery 51, 1177–1182.
* Schermerhorn et al. (2008) Schermerhorn, M. L., A. J. O’Malley, A. Jhaveri, P. Cotterill, F. Pomposelli, and B. E. Landon (2008). Endovascular vs. open repair of abdominal aortic aneurysms in the medicare population. New England Journal of Medicine 358(5), 464–474. PMID: 18234751.
* Seung et al. (2008) Seung, K. B., D.-W. Park, Y.-H. Kim, S.-W. Lee, C. W. Lee, M.-K. Hong, S.-W. Park, S.-C. Yun, H.-C. Gwon, M.-H. Jeong, Y. Jang, H.-S. Kim, P. J. Kim, I.-W. Seong, H. S. Park, T. Ahn, I.-H. Chae, S.-J. Tahk, W.-S. Chung, and S.-J. Park (2008). Stents versus coronary-artery bypass grafting for left main coronary artery disease. New England Journal of Medicine 358(17), 1781–1792. PMID: 18378517.
* Shaw et al. (2008) Shaw, A. D., M. Stafford-Smith, W. D. White, B. Phillips-Bute, M. Swaminathan, C. Milano, I. J. Welsby, S. Aronson, J. P. Mathew, E. D. Peterson, and M. F. Newman (2008, Feb). The effect of aprotinin on outcome after coronary-artery bypass grafting. The New England journal of medicine 358, 784–93.
* Stuart (2010) Stuart, E. A. (2010, Feb). Matching methods for causal inference: A review and a look forward. Statistical science : a review journal of the Institute of Mathematical Statistics 25, 1–21.
* Svanström et al. (2013) Svanström, H., B. Pasternak, and A. Hviid (2013). Use of azithromycin and death from cardiovascular causes. New England Journal of Medicine 368(18), 1704–1712. PMID: 23635050.
* Tranchart et al. (2016) Tranchart, H., D. Fuks, L. Vigano, S. Ferretti, F. Paye, G. Wakabayashi, A. Ferrero, B. Gayet, and I. Dagher (2016, May). Laparoscopic simultaneous resection of colorectal primary tumor and liver metastases: a propensity score matching analysis. Surgical endoscopy 30, 1853–62.
* Zangbar et al. (2016) Zangbar, B., M. Khalil, A. Gruessner, B. Joseph, R. Friese, N. Kulvatunyou, J. Wynne, R. Latifi, P. Rhee, and T. O’Keeffe (2016, Nov). Levetiracetam prophylaxis for post-traumatic brain injury seizures is ineffective: A propensity score analysis. World journal of surgery 40, 2667–2672.
* Zhang et al. (2016) Zhang, M., R. R. Guddeti, Y. Matsuzawa, J. D. Sara, T. Kwon, Z. yue Liu, T. Sun, S. Lee, R. J. Lennon, M. R. Bell, H. V. Schaff, R. C. Daly, L. O. Lerman, A. Lerman, and C. Locker (2016). Left internal mammary artery versus coronary stents: Impact on downstream coronary stenoses and conduit patency. In Journal of the American Heart Association.
* Zhang et al. (2015) Zhang, Z., K. Chen, and H. Ni (2015). Calcium supplementation improves clinical outcome in intensive care unit patients: a propensity score matched analysis of a large clinical database mimic-ii. SpringerPlus 4, 594.
# Analysis of key flavors of event-driven predictive maintenance using logs of
phenomena described by Weibull distributions
Petros Petsinis Aristotle University of Thessaloniki, Greece
<EMAIL_ADDRESS>, Athanasios Naskos Atlantis Engineering,
Thessaloniki, Greece<EMAIL_ADDRESS>and Anastasios Gounaris Aristotle
University of Thessaloniki, Greece<EMAIL_ADDRESS>
###### Abstract.
This work explores two approaches to event-driven predictive maintenance in
Industry 4.0 that cast the problem at hand as a classification or a regression
one, respectively, using as a starting point two state-of-the-art solutions.
For each of the two approaches, we examine different data preprocessing
techniques, different prediction algorithms and the impact of ensemble and
sampling methods. Through systematic experiments regarding the aspects
mentioned above, we aim to understand the strengths of the alternatives, and
more importantly, shed light on how to navigate through the vast number of
such alternatives in an informed manner. Our work constitutes a key step
towards understanding the true potential of this type of data-driven
predictive maintenance to date, and assists practitioners in focusing on
the aspects that have the greatest impact.
Predictive Maintenance, Industry 4.0, Machine Learning, Classification,
Regression
††copyright: rightsretained ††conference: arXiv.org ††journalyear: 2021
## 1\. Introduction
One of the key characteristics of the 4th Industrial Evolution, broadly known
as Industry 4.0, is the wide spreading of Predictive Maintenance (PdM), which
is reported to be capable of yielding _“tremendous”_
benefits.111https://www.pwc.nl/nl/assets/documents/pwc-predictive-maintenance-
beyond-the-hype-40.pdf As such, PdM has evolved into a multi-billion-dollar
market (i.e., \$23.8B), according to the IoT Analytics industry report
(Scully, 2019b). PdM has attracted the attention of big companies like
IBM222https://www.ibm.com/services/technology-support/multivendor-
it/predictive-maintenance, SAP333https://www.sap.com/products/predictive-
maintenance.html, SAS444https://www.sas.com/en_us/software/asset-performance-
analytics.html,
Siemens555https://new.siemens.com/global/en/products/services/industry/predictive-
services.html, GE666https://www.ge.com/digital/iiot-platform and
ABB777https://ability.abb.com/customer-segments/marine-ports/remote-
diagnostics-predictive-maintenance/ to smaller companies that focus solely on
that field, like Augury888https://www.augury.com/category/predictive-
maintenance/, Falkonry999https://falkonry.com/, Sight
Machine101010https://sightmachine.com/product/ema/ and others (Scully, 2019a).
The goal of PdM is to eliminate machinery downtime and operational costs.
Broadly, PdM capitalizes on the benefits stemming from mature techniques in
the fields of data analytics, machine learning and big data management in
distributed, IoT and edge-computing settings (Lu, 2017), i.e., it essentially
constitutes a specific case of applied data science (Ullman, 2020; Ozsu,
2020). The main functionality of PdM to achieve its overarching goal is to
predict the failure of the equipment. To this end, one can resort to
domain knowledge encapsulated in accurate models and/or rules covering the
equipment behavior. A second direction, which is leveraged in this work, is
driven by the events and log data regarding the equipment operation and
maintenance, i.e., it is a data-driven direction. These event logs are
processed to extract patterns of equipment failure automatically. Typically,
the event-based PdM techniques rely on methods like data mining, feature
selection and machine learning. The main subject of this work is to study,
contribute, compare and come to conclusions on two different data-driven PdM
approaches that cast the problem of scheduled maintenance into a problem of
classification and regression, respectively, while the raw log data comprise a
series of event occurrences following a Weibull distribution. This type of
distribution is widely used to describe industrial equipment failures (Lai et
al., 2006).
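Event logs of this kind can be synthesized for experimentation by drawing inter-arrival times per event type from a Weibull distribution; the sketch below uses hypothetical shape/scale parameters and Python's standard-library sampler:

```python
import random

random.seed(42)

def weibull_event_log(event_types, horizon_days):
    """Build one (day, event_type) log; each type has its own Weibull inter-arrival."""
    log = []
    for ev, (shape, scale) in event_types.items():
        t = 0.0
        while True:
            # random.weibullvariate(alpha, beta): alpha = scale, beta = shape
            t += random.weibullvariate(scale, shape)
            if t > horizon_days:
                break
            log.append((int(t), ev))
    return sorted(log)

# Hypothetical parameters: shape < 1 gives bursty occurrences,
# shape > 1 gives wear-out behaviour typical of equipment failures.
types = {"warning_A": (0.8, 5.0), "warning_B": (1.5, 12.0), "failure": (2.5, 60.0)}
log = weibull_event_log(types, horizon_days=365)
```

Logs of this shape (per-day occurrences of warning and failure event types) are the input assumed by both methodologies compared below.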
More specifically, we take the state-of-the-art solutions in (Wang et al.,
2017) and (Korvesis et al., 2018) as a starting point and explore the impact
of employing different model construction, dimensionality reduction, dataset
preparation and ensemble techniques. The testing framework is based on the one
in (Naskos and Gounaris, 2019) with appropriate extensions. However, in both
solutions the main rationale is the same: firstly, discrete events are
continuously monitored and logged, and secondly, before the failure, there are
warning events adhering to no crisp patterns (therefore, no sequential pattern
mining solutions are effective) and the aim of PdM is to learn to identify
such early phenomena. The two pillar solutions of this work, namely (Wang et
al., 2017) and (Korvesis et al., 2018) come with different approaches
regarding the selection of their features and the building of a prediction
model (classification vs regression). More importantly, they have been
successfully extended to build solutions for PdM in Industry 4.0 (Naskos et
al., 2019) and detect threat patterns in an IT infrastructure (Bellas et al.,
2020). However, their application is tricky because it involves decision
making at various design levels, while the performance is sensitive to several
parameters.
In this work, we aim to answer the following questions with a view to
deepening the understanding of the afore-mentioned PdM techniques: (1) _“What
is the impact on the performance of different prediction techniques compared
to the ones originally employed or evaluated?”_ , which calls for evaluation
of an extensive set of predictors. (2) _“Is the behavior of feature selection
techniques correlated with the exact prediction technique employed?”_ , which
aims to understand whether the feature selection and prediction techniques can
be addressed independently in a divide-and-conquer rationale to reduce the
search space complexity. (3) _“Which are the most important dataset attributes
that affect the technique choices?”_ , which calls for experimenting with
various dataset types. (4) _“Can the manipulation of the training set to deal
with class imbalance and small training sizes yield benefits?”_ ; to answer
this question we explore several alternatives. (5) _“What is the impact of
ensemble solutions?”_ , where we build on top of the single-predictor results
in a quest for an efficient combination, something that we show to be
challenging. The main contribution of this work is that it provides
insightful answers to these questions, and thus constitutes a key step towards
understanding the true potential of event-driven predictive maintenance to
date, when log entries of different types follow a Weibull distribution,
and assists practitioners in focusing on the aspects that have the greatest
impact. All the code is publicly available.111111
https://github.com/petsinis/JupyterNotebookPdM
We provide the background about the main methodologies around which our
analysis revolves, along with related work in the subsequent section. The
datasets and the rationale of experimental result assessment are described in
Section 3. The next section comprises the first main part of our contribution:
the alternative feature selection and model building techniques along with
their experimental evaluation and key observations. In Section 5, we explore
the impact of training set generation and, in Section 6, we deal with ensemble
solutions. We conclude in Section 7 with our final remarks.
## 2\. Related Work and Background
In this section, we first present the related work and then we give the
background details regarding the two main methodologies considered in our
experimental analysis.
### 2.1. Related Work
Data-driven techniques, where the data refer to past events, commonly in the
form of log entries, are widely used in PdM. (Korvesis et al., 2018) is a key
representative of the state-of-the-art, where the underlying problem solved
is a regression one. Another event-based approach is presented in (Sipos et
al., 2014), where historical and service data from a ticketing system are
combined with domain knowledge to train a binary classifier for the prediction
of a failure. As in several similar works, a feature selection (Bach, 2008)
and an event amplification technique is used to enhance the effectiveness of
the SVM-based classifier. Event-based analysis, based on event and failure
logs, is also performed in (Wang et al., 2017), where it is assumed that the
system is capable of generating exceptions and error log entries that are
inherently relevant to significant failures. The proposal in (Wang et al.,
2017) relies on pattern extraction and similarity between patterns preceding a
failure, while emphasis is posed on feature selection.
Data-driven PdM is also related to online frequent episodes mining; research
works (Ao et al., 2015) and (Li et al., 2018) propose techniques in this
topic. The key strength of (Ao et al., 2015) is that it can apply to an online
(i.e. streaming) scenario. (Li et al., 2018) further improves upon it by
providing solutions for the case where the number of event types is unbounded. Complex-
event processing (CEP) (Kolchinsky and Schuster, 2018) is also a technology
that enables PdM enforcement after warning sequential patterns have been
extracted.
An overview of data-driven PdM is presented in works such as (Carvalho
et al., 2019; Kovalev et al., 2018), where the authors list techniques for
data preprocessing, feature selection, data storing, model building and
validation, fault detection and prediction and estimation of remaining useful
life. However, each application comes with its unique characteristics that
affect the corresponding design (Korvesis et al., 2018). In the remainder of
this section, we further elaborate on the solutions in (Wang et al., 2017) and
(Korvesis et al., 2018), which are deemed as key representatives of
classification- and regression-based event-driven PdM, respectively. An early
comparison of these techniques has appeared in (Naskos and Gounaris, 2019);
the main difference is that in this work, we investigate several alternatives
using each of these techniques as a basis (i.e., we keep the main rationale
but we explore different techniques in the way the prediction model is built)
and we thoroughly assess their efficiency, while previously, the initial
proposals were compared without exploring alternatives.
### 2.2. A classification-based methodology
The first methodology considered is characterized by two main features:
firstly, it casts the problem as a binary classification one, and secondly, it
follows a specific approach to expanding the feature space. Although it has
been proposed in (Wang et al., 2017) for ATM failures, it is generic and thus
can be applied to other predictive maintenance scenarios, provided that the
data comprise error logs similar to those in the original proposal.
Parameter | Description
---|---
X | number of sub-windows inside an $OW$
M | length (number of days) of a sub-window
Y | length (number of days) of a $PW$
Z | length (number of days) of a $BW$
Table 1. Window-related parameters.
The event logs are mapped to time segments; in our scenario, the finest level
of granularity of such segments is a day as discussed in more detail in
Section 3. The time segments are then grouped into (sliding) windows. For each
prediction, the segments in a fixed-length observation window ($OW$) are
considered and the prediction refers to a subsequent window, the prediction
window ($PW$). An $OW$ is in turn divided into $X$ sub-windows of length $M$.
From each of them, several basic features are exported, which will eventually
constitute several of the new features of the $OW$.
During training, for each $OW$, a feature vector is produced, the label of
which is given through its corresponding $PW$. More specifically, if the $PW$
contains a day on which the equipment failed, the $OW$ vector receives the
label 1; otherwise the label is 0. To account for real-world scenarios, where
failures very close to a correct prediction cannot be practically prevented,
there is a third type of fixed interval of days, referred to as the buffer
window ($BW$) in between $OW$ and $PW$. For example, assume that $OW$, $PW$
and $BW$ are 3, 2 and 1 days, respectively. This corresponds to a scenario
where predictions are made taking into account the log entries of the 3 most
recent days and refer to forthcoming failures occurring more than 1 day and
at most 3 days ahead. Table 1 summarizes the window-related parameters.
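The segmentation just described can be sketched as follows (hypothetical per-day event counts and helper names; the label of each $OW$ is 1 exactly when a failure falls in its $PW$, which starts $BW$ days after the $OW$ ends):

```python
def make_samples(day_events, failure_days, ow=3, bw=1, pw=2):
    """Slide an OW over the per-day log; label 1 iff a failure falls in the PW."""
    samples = []
    n = len(day_events)
    for start in range(n - ow - bw - pw + 1):
        ow_days = day_events[start:start + ow]      # observation window
        pw_start = start + ow + bw                  # buffer window is skipped
        label = int(any(d in failure_days
                        for d in range(pw_start, pw_start + pw)))
        samples.append((ow_days, label))
    return samples

# Hypothetical log: event counts per day for two error types, failure on day 6.
day_events = [{"E1": 2}, {}, {"E2": 1}, {"E1": 1, "E2": 3}, {}, {"E1": 4}, {}, {}]
samples = make_samples(day_events, failure_days={6})
# With OW=3, BW=1, PW=2 (the running example), the three windows get labels 0, 1, 1.
```

The feature-extraction step described next turns each `ow_days` slice into the actual input vector of the classifier.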
Five types of features comprise the final vector for each $OW$ as follows.
1. (1)
_Basic features:_ a frequency vector for each type of error/log in a sub-
window within $OW$. Overall $X$ such vectors are produced, which are
concatenated.
2. (2)
_Advanced statistical features:_ a statistics vector that contains, for each
logged error type, the minimum, maximum, and mean distance of the error type
within the $OW$ from the beginning of the $PW$; the mean distance between
occurrences of the same fault type within the $OW$ (and its standard
deviation) is also considered.
3. (3)
_Failure-similarity features:_ The Jaccard similarity between this $OW$ and a
reference $OW$ with a positive label.
4. (4)
_Pattern-based features:_ this is a binary vector of patterns of different
types of error based on applying association rule mining.
5. (5)
_Equipment features:_ this includes metadata regarding the specific equipment,
such as model, date of installation, and so on.
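As an illustration of the basic features, the sketch below (a hypothetical helper, not part of the original implementation) builds the $X$ concatenated frequency vectors for one $OW$:

```python
from collections import Counter

def basic_features(ow_days, event_types, X, M):
    """ow_days: per-day event lists covering the OW (len(ow_days) == X * M).
    Returns X frequency vectors, one per sub-window of M days, concatenated."""
    vec = []
    for i in range(X):
        sub = ow_days[i * M:(i + 1) * M]
        counts = Counter(e for day in sub for e in day)
        vec.extend(counts.get(t, 0) for t in event_types)
    return vec

basic_features([[1, 2], [2], [3], [1]], event_types=[1, 2, 3], X=2, M=2)
# -> [1, 2, 0, 1, 0, 1]
```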
In our comparison, we employ the first three types of features. In the
original proposal in (Wang et al., 2017), several feature selection
techniques, such as ReliefF, and classifiers, such as XGBoost, are
investigated. In this work, we consider both the best performing techniques in
(Wang et al., 2017) and additional solutions, such as nearest neighbors and
neural network-based ones, as detailed in Section 4, and we discuss the cases
in which each alternative behaves better.
### 2.3. A regression-based methodology
Instead of casting the problem as a (binary) classification one, another
approach is to cast it as a regression one, in which each prediction is a real
value and the aim is to minimize the prediction error, as proposed in
(Korvesis et al., 2018) to solve the event-based PdM problem in the aviation
industry.
In this approach, the fault event logs are grouped in episodes. An episode
begins with the first event that occurs after the occurrence of the main
failure event that we aim to predict (called target event) and ends with the
last event before the occurrence of the next target event. Contrary to the
classification-based methodology that focuses on feature generation, this
methodology emphasizes more on log pre-processing. More specifically, the
following pre-processing steps are considerd:
1. (1)
_Deletion of rare events:_ if the occurrence of a fault type event is rare
compared to that of the target event, then it is considered unrelated to it,
so it is not useful and can be removed.
2. (2)
_Deletion of frequent events:_ if the occurrence of a fault type event is very
frequent, then it is also considered not related to the target event, so it is
not useful and can be removed.
3. (3)
_Dealing with multiple appearances:_ in some cases, we do not care how many
times an event occurred during a time segment. In this methodology, the
occurrence vector corresponding to each time period and fed to the model
is a binary one, indicating only whether an event has appeared at least once
in this period.
4. (4)
_Dealing with continuous appearances:_ in some cases, the appearance of a
fault type event in consecutive time segments can be due to the fact that no
action was taken, for example, because of a low priority. To avoid inserting
unnecessary event occurrences, in the case of appearances of the same fault
type in the log in multiple consecutive time segments, only the first
appearance is kept.
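A minimal sketch of these four steps follows; the rare/frequent thresholds are illustrative assumptions, not values from (Korvesis et al., 2018):

```python
from collections import Counter

def preprocess(segments, n_targets, rare_ratio=0.5, freq_ratio=50.0):
    """segments: per-time-segment event-type lists. Drops event types that
    are rare or very frequent relative to the n_targets target occurrences,
    binarizes each segment (set membership), and keeps only the first of
    consecutive appearances of the same type."""
    counts = Counter(e for seg in segments for e in set(seg))
    keep = {e for e, c in counts.items()
            if rare_ratio * n_targets <= c <= freq_ratio * n_targets}
    out, prev = [], set()
    for seg in segments:
        present = {e for e in seg if e in keep}
        out.append(sorted(present - prev))  # first appearance only
        prev = present
    return out

preprocess([[1, 1, 2], [1, 3], [2]], n_targets=1)
# -> [[1, 2], [3], [2]]
```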
Overall, each time segment in an episode is described by a binary vector with
size equal to the number of different event types in the log. The value
associated with this vector during training is derived from a risk function,
which encapsulates the rationale that the closer to the target event a set of
event types appear, the more they constitute warning signs of a subsequent
failure of key equipment. This is naturally quantified with the help of a
sigmoid function, as shown in Figure 1. The values of this function are
bounded in the (0,1) range. There are two main parameters, the midpoint $m$
and the steepness $s$. The former defines the distance, in number of days from
the target event, at which the sigmoid takes the value of 0.5. The $s$
parameter defines how steep the curve is; the higher its value, the less
significant the sets of events before the midpoint become, and similarly, the
more significant after the midpoint.
In the original implementation in (Korvesis et al., 2018), the superiority of
a Random Forest regression solution upon SVM was shown; here we explore
further alternatives, keeping the pre-processing steps as defined above.
Figure 1. Impact of midpoint and steepness on the sigmoid function (the
target event is on time point 14); the plotted curves are $\sigma_{1}(x)$ with
$m=7,s=0.9$, $\sigma_{2}(x)$ with $m=10,s=0.9$, and $\sigma_{3}(x)$ with
$m=7,s=0.4$.
## 3\. Experimental setting
There are several datasets used in PdM as reported in (Carvalho et al., 2019).
Also, NASA maintains a repository for prognostic datasets
(https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognostic-data-repository/).
However, these datasets contain sensor data time series rather
than partially ordered sequences of discrete events, which is our focus. To
overcome this problem, we re-engineer in our code implementation the event
generator in (Naskos and Gounaris, 2019), which is publicly available. We
first describe the datasets used and then the assessment methodology.
### 3.1. Datasets
The data used in this work is a set of timestamped event logs. They are
synthetically generated in a manner that resembles real-world applications,
such as the one in (Korvesis et al., 2018). An example is shown in Table 2,
where the time granularity is complete days. On each day, several events may
occur, which yields a partially ordered sequence. Note that the time
granularity of complete days does not refer to the frequency of monitoring the
status of industrial equipment continuously in a modern Industry 4.0 setting,
where, typically, several measurements are taken every second, but to the
frequency of recording higher-level incidents, a (very) small portion of which
_may_ be relevant to a target event of high importance. Nevertheless, the
described approach can be trivially modified to apply to any time granularity.
Timestamps | EventId
---|---
2020-01-01 | 1
2020-01-01 | 22
2020-01-02 | 12
2020-01-02 | 7
2020-01-02 | 12
2020-01-03 | 7
2020-01-04 | 3
… | …
Table 2. An example event log
The main parameters of the generator in (Naskos and Gounaris, 2019) are
provided in Table 3. In each synthetic dataset, there are _ft_ fault/event
types. Each one of them follows a Weibull distribution, since this type of
distribution can describe several types of equipment failures (Lai et al.,
2006). One event plays the role of the target event. The rationale of the data
generator is to artificially inject event patterns before the occurrence of
the target event, and then test the capability of the classifiers to detect
such patterns and become capable of predicting the target event.
Parameter | Description
---|---
ft | number of different event/fault types
$s_{tr}$ | size of training dataset (in days)
$s_{te}$ | size of testing dataset (in days)
pl | pattern length
$min_{t}/max_{t}$ | min./max. distance (in days) of the last pattern event from the target event
$min_{p}/max_{p}$ | min./max. distance (in days) between pattern events
$min_{f}/max_{f}$ | min./max. pattern event forms
pc | pattern clarity
pps | the percentage of the missing events in the distorted patterns
Table 3. Main dataset generator parameters from (Naskos and Gounaris, 2019)
A key feature of the data generator is that the injected patterns preceding a
target event can be perturbed using several tuning parameters to imitate the
fact that, typically, no crisp patterns, which can be more easily detected,
appear. More specifically, there may be no full pattern before a target event
instance; moreover, a pattern may not always be followed by a target event.
The extent to which these two behaviors appear in the logs is configurable
through the _pc_ and _pps_ parameters. The clarity percentage (captured by
_pc_) defines the percentage of full pattern instances; the remaining
instances are partial patterns, i.e., pattern instances that are missing some
of the events of the original pattern. The partial pattern size (_pps_)
defines the percentage of the missing events. For example, setting $pc=0.9$
and $pps=0.5$ enforces 10% of the pattern instances to include only half of
the events of the original pattern, which is of size _pl_. In addition, in 10%
of the injected patterns, no target event follows.
Moreover, in a pattern comprising _pl_ elements, each of the elements may
correspond to multiple event types, captured by the parameters $min_{f}$ and
$max_{f}$. This corresponds to the situation, where there are families of
faults (event types) that might give early warning for an upcoming target
event. Finally, the pattern elements may be shuffled. Overall, the combination
of the _pc_ , _pps_ , _shuffle_ , $min_{f}$ and $max_{f}$ parameters allows us
to create synthetic datasets, where it is challenging to detect and create
models regarding warning event sets.
Dataset | $ft$ | $shuffle$ | $pl$ | $min_{f}$ | $max_{f}$ | $s_{tr}$ | $s_{te}$ | $min_{t}$ | $max_{t}$ | $min_{p}$ | $max_{p}$ | $pc$ | $pps$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
DS1 | 150 | no | 6 | 1 | 3 | 1094 | 730 | 1 | 5 | 1 | 2 | 90% | 50%
DS2 | 150 | no | 4 | 3 | 4 | same as DS1 for the remaining parameters
DS3 | 1500 | no | same as DS1 for the remaining parameters
DS4 | 1500 | no | same as DS2 for the remaining parameters
DS5 | 150 | yes | same as DS2 for the remaining parameters
DS6 | 150 | yes | same as DS2 for the remaining parameters
Table 4. Dataset generator parameters taken from (Naskos and Gounaris, 2019);
$DS1$ to $DS5$ contain approx. 50 target events, whereas $DS6$ contains 25
target events.
Overall, in the evaluation, 6 types of synthetic datasets are generated, as
shown in Table 4. The types of the datasets are the same as in (Naskos and
Gounaris, 2019) but the random instances of each dataset type are different.
The datasets correspond to a period of 5 years and are split into 3 years of
training logs and 2 years of test logs. The distance of the warning pattern
from the target event ranges from 1 to 5 days, while the distance between the
events of the pattern ranges from 1 to 2 days. The pattern clarity is set to
90%, while the partial patterns include only half of the events of the full
pattern (i.e., $pps=50\%$). These datasets comprise approximately 50 target
events (on average, one target event every 36 days), apart from the last
dataset type, which comprises only 25 target events. There are two main
dataset types, namely DS1 and DS2, and the other datasets (i.e., DS3-6) are
slightly altered versions of the main ones. More specifically, DS3 and DS4
have 10X more event types than DS1 and DS2, respectively. DS5 and DS6 include
shuffled patterns compared to DS2, while, as already mentioned above, DS6
includes fewer target events. In the experiments, ten instances of each
dataset are produced.
### 3.2. Evaluation Methods
The test set is divided into episodes, where each episode ends with a target
event. There are three main time intervals within an episode that impact on
the evaluation:
1. (1)
The _Correct Prediction_ period, in which the prediction of an equipment’s
failure is made in a timely manner. The predictions falling into this period
are counted as True Positives (TP).
2. (2)
The _Early Prediction Period_, which starts just after the occurrence of the
previous failure (target event) and ends just before the correct prediction
period. The predictions for failure during this period are considered to be
premature and unnecessary. They are counted as False Positives (FP).
3. (3)
The _Repair period_ , which represents the days needed to repair the fault
before it manifests itself. If equipment’s failure is predicted within the
repair period, then it is ignored.
Case | Description
---|---
True Positive (TP) | Set to 1 if at least one failure prediction is made during the Correct Prediction Period; otherwise, it is set to 0.
False Negative (FN) | Set to 1 if no failure prediction is made during the Correct Prediction Period; otherwise, it is set to 0.
False Positive (FP) | The amount of failure predictions during the Early Prediction Period.
True Negative (TN) | The amount of non-failure predictions during the Early Prediction Period.
Ignored | All failure predictions during the Repair Period.
Table 5. Description of prediction cases within an episode.
Table 5 presents all the prediction cases within an episode. Note that there
can be up to 1 TP or FN per episode, but arbitrary numbers of FPs and TNs. In
all the experiments, we have used the F1 metric to evaluate the results:
$F1\;Score=2\frac{{Precision*Recall}}{{Precision+Recall}}$
where $Precision=\frac{{TP}}{{TP+FP}}$ and $Recall=\frac{{TP}}{{TP+FN}}.$
The reason behind employing F1 is that it balances recall and precision.
Recall is important, since it reflects the number of failures predicted.
However, it is trivial to create a dummy predictor that always votes for
failure. Therefore, we need to balance recall with precision, exactly as F1
targets.
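For completeness, the episode-level counts map to the F1 score as follows (a straightforward helper of our own, guarding against zero denominators):

```python
def f1_score(tp, fp, fn):
    """F1 from episode-level counts; returns 0.0 on degenerate cases."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

f1_score(tp=8, fp=4, fn=2)   # precision = 2/3, recall = 0.8 -> ~0.727
```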
In the classification-based technique, the sequence of days is divided into
windows ($OW$s) using a moving step of predefined length (see also Figure 2).
One prediction is made for each different window slide. The interval in which
the last day of the window falls determines the type of forecast (TP, TN, FP,
FN), as explained above.
Figure 2. Example of dividing a sequence of fourteen days into 3 windows,
using a moving step of 3 days; the last day of each window falls into the
Early Prediction, Correct Prediction or Repair period delimited by two
consecutive failures.
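The window slicing can be sketched as follows (0-based day indices; the window length of 8 days is only illustrative):

```python
def sliding_windows(n_days, window, step):
    """(start, end) day indices of each OW produced by a moving step."""
    return [(s, s + window - 1) for s in range(0, n_days - window + 1, step)]

sliding_windows(14, window=8, step=3)   # -> [(0, 7), (3, 10), (6, 13)]
```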
In the regression-based technique as initially described in (Korvesis et al.,
2018), the predictions were made on a daily basis. Since this technique
assigns, in each prediction, a risk value, a threshold is specified. If the
risk value is below a threshold, then this is deemed as a non-failure
prediction. If it is equal to or greater than the threshold, a failure alarm
is raised. In order for the comparison of the two techniques to be fair, the
predictions of the regression-based technique are made for each group of $N$
days, where $N$ is equal to the size of the $OW$. Also, the training is done
using a moving window. As previously, the risk value received on the last day
of the $N$ days is used during training; also, the period in which this day
falls determines the manner in which the prediction outcome is assessed.
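The thresholding of risk values into alarms described above is then simply (hypothetical helper name; the tie-breaking at the threshold follows the text):

```python
def classify_risk(risk_values, threshold):
    """Map regression risk outputs to failure alarms (1) / non-failures (0);
    a value equal to or above the threshold raises an alarm."""
    return [int(r >= threshold) for r in risk_values]

classify_risk([0.1, 0.55, 0.9], threshold=0.55)   # -> [0, 1, 1]
```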
## 4\. Impact of classifiers and dimensionality reduction
In this section, we first present the alternatives for model building for the
two main approaches under investigation. These alternatives extend the ones
included originally in the proposals in (Wang et al., 2017) and (Korvesis et
al., 2018). Then we present the alternatives for dimensionality reduction that
are examined. For each of the alternatives introduced, we present their main
settings; these settings have been determined through experimentation that is
not presented in this work due to space limitations. Finally, we conduct a
thorough experimental analysis.
### 4.1. Classification and Regression Predictors
#### 4.1.1. K-Nearest Neighbors (used for both techniques)
One of the earliest classifiers, but still widely used today, is the _K-
Nearest Neighbors_ (KNN). It belongs to the category of lazy classifiers,
since it does not rely on model building. There exist several flavors; in our
work, KNN classifies an unknown sample into the class of the largest majority
among the $k$ closest samples. As such, it comprises two main configuration
parameters. The first one refers to the definition of the distance. In our
work, the distance metric employed is the Euclidean one. The second tuning
knob is the value of $k$, which, here, is set in the range of [1,5].
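The KNN flavor described above can be sketched in a few lines (a toy stand-in for the actual library implementation used in the experiments):

```python
from collections import Counter
import math

def knn_predict(train, sample, k):
    """train: list of (vector, label) pairs; Euclidean-distance majority vote."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], sample))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), 0), ((0, 1), 0), ((5, 5), 1), ((6, 5), 1)]
knn_predict(train, (5, 4), k=3)   # -> 1
```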
#### 4.1.2. Neural Networks (used for both techniques)
Another broad category of classifiers is _(Artificial) Neural Networks_ (NNs),
which are inspired by the manner the brain operates based on biological neural
networks. Briefly, the purpose of a NN is to minimize a cost or loss function,
which depends on the problem at hand. For example, for the classification
problem, a NN is typically trained so that, by entering and processing the
training examples, it minimizes the wrong predictions (maximizes the correct
predictions). In our setting, the classification problem is a binary one, and
we employ the binary cross-entropy loss function. In the regression-based
technique, minimizing the mean square error yielded the best results, and this
is used for the experiments presented later. The error can be any metric
distance between the predicted and the actual value, such as the absolute
difference between the risk values. The optimization is based on stochastic
gradient descent, and more specifically, on Adam (Kingma and Ba, 2015), with a
fixed learning rate of 0.001.
As holds for all classifiers, a key objective for NNs is generalization. To
this end, many NN architectures have been proposed; three of them are examined
in the present work.
##### Multilayer Perceptron (used for both techniques)
Multilayer Perceptron (MLP) is one of the most well-known and widely used NN
architectures consisting of at least one hidden level and characterized by a
high degree of interconnection between the neurons of two successive levels.
The NNs are further configured as follows. Each neuron has a rectifying
activation function (ReLU) (Ramachandran et al., 2017). For both techniques,
we have used 3-layer MLPs with 128, 64 and 128 units, respectively, and a
fixed dropout of 25% in the last two layers. For both the binary
classification and the regression problem, there is only one output neuron.
##### Long Short Term Memory (used for both techniques)
Long Short Term Memory (LSTM) is a Recurrent Neural Network (RNN) architecture
that has feedback connections. As such, a RNN is just a multi-layer NN where
there are connections between higher-level outputs and inputs of the same or
lower levels. As a result, the entry of a neuron at time $t$ does not depend
only on the input data at $t$, but also on the previous values $t-1$, $t-2$,
and so on, so that the system acquires memory. LSTMs can be effectively
applied to time series classification problems like the ones in our setting.
However, the downside is that they need a relatively larger amount of training
data. For this study, the LSTM architecture has only one hidden layer with 128
neurons.
##### Convolutional Neural Network (used for both techniques)
Convolutional Neural Network (CNN) is an extension to MLPs. It is widely used
in image detection as it can effectively detect patterns in images using
filters. The fixed scheme of CNNs that is used comprises two convolution
layers followed by an average pooling layer, with dropout 20% just before the
output level; the kernel size is set to 3 and ReLU is employed for the
activation function. As in the case of LSTMs, the input examples are not a
vector but a set of vectors. We can imagine training samples not as simple
file logs but as images. If one considers that images are two-dimensional
arrays of numbers depending on the color representation (e.g., 0 for black
and 1 for white), we can think of our samples as images, with a view to
benefiting from a NN technique excelling in image analysis, such as CNN (see
also Figure 3).
Figure 3. Treating an observation window as an image.
#### 4.1.3. XGBoost (used for both techniques)
It is an implementation of gradient boosting (Chen and Guestrin, 2016), which
minimizes the average square error using gradient descent. It starts with the
creation of an initial tree model that predicts the average of the labels of
each training sample. It continues with the creation of a new tree model to
predict the errors of the previous tree model. This process continues until a
certain number of models (i.e., an ensemble) are created or until it
converges.
#### 4.1.4. Random Forest (used for both techniques)
This also belongs to ensemble methods but follows a bagging approach. Random
Forest (RF) is a meta estimator that fits a number of decision tree
classifiers on various samples of the dataset and uses averaging to improve
the predictive accuracy and mitigate over-fitting. It predates XGBoost.
#### 4.1.5. Partial Least Squares (used for regression)
Partial Least Squares (PLS) is a method of predicting dependent values from a
set of observations. In our tests, we have used a PLS version that
encapsulates singular value decomposition (SVD) to decompose the matrices of
the initial data into components.
### 4.2. Dimensionality Reduction Algorithms
Dimensionality reduction is particularly relevant in our setting given that
very few event types have actually predictive power; the vast majority of
event types is irrelevant to the failure target events. By pruning and
decreasing the search space, the classifier performance can be significantly
improved. In this work, we have investigated two techniques to this end.
#### 4.2.1. ReliefF (Kononenko et al., 1997)
This algorithm implements an iterative process that takes as input training
examples along with their corresponding labels and exports a vector with
quality estimates for each attribute. The evaluation of each feature is done
using two concepts based on the $k$ nearest hits and misses, respectively.
#### 4.2.2. PCA
Principal Component Analysis (PCA) transforms a high-dimensional dataset into
a low-dimensional one while losing as little information as possible. In our
application of PCA, the new space is computed after subtracting the mean in
each dimension, i.e., we center the dataset at the origin of the axes.
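The centering step we highlight can be sketched as follows (the projection onto principal components itself is left to a library implementation):

```python
def center(data):
    """Subtract the per-dimension mean, as done before applying PCA."""
    dims = len(data[0])
    means = [sum(row[d] for row in data) / len(data) for d in range(dims)]
    return [[row[d] - means[d] for d in range(dims)] for row in data]

center([[1, 2], [3, 4]])   # -> [[-1.0, -1.0], [1.0, 1.0]]
```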
### 4.3. Evaluation of Classification Predictors
In this subsection, we explore different classifiers for the classification-based
technique inspired by (Wang et al., 2017). For each of the 6 dataset
types in Section 3, we have created 10 random instances, which remain the same
through all experiments to allow for an apples-to-apples comparison. We have
also chosen 4 main settings for the classification-based technique, as
described below:
1. (1)
A, where the period $Y$ within which the prediction can be made is 16 days
(approximately 2 weeks) and we set $X=M=4$, so that $X\cdot M=Y$, i.e., the
observation window is of the same size as the prediction window. Also, there
is no buffer window, i.e., $Z=0$.
2. (2)
B, where we shrink $Y$ to 6 days, and we set $X=3$ and $M=2$.
3. (3)
C, which is similar to A but with $Z=3$.
4. (4)
D, which is similar to B but with $Z=3$.
The length of the step in each case is defined as half the length of the
prediction window in order for each day to be considered in 2 predictions.
For each algorithm, several configurations are tested and the best performing
tuning on average is chosen. For PCA and ReliefF, in order to find the number
of retained features that maximizes the score, we repeatedly reduce the
dimensionality by 20%. Table 6 presents the mean F1 score of all combinations
of classifiers and dimensionality reduction techniques. At the bottom of the
table, we show two naive classifiers, one random and one always raising an
alarm. For each classifier, the best performing dimensionality reduction
setting is underlined. The overall best approach on average is in bold.
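The tuning loop over retained dimensions can be sketched as follows (the stopping point at a single remaining feature is our assumption):

```python
def reduction_steps(n_features, ratio=0.2):
    """Feature counts tried when repeatedly reducing dimensions by `ratio`."""
    steps = []
    while n_features > 1:
        steps.append(n_features)
        n_features = int(n_features * (1 - ratio))
    return steps

reduction_steps(10)   # -> [10, 8, 6, 4, 3, 2]
```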
Setting | A | B | C | D
---|---|---|---|---
Algorithm | Feat. Sel. | Mean F1 score in 10 DS1 instances
XGBoost | None | 0.685 | 0.225 | 0.600 | 0.321
PCA | 0.654 | 0.315 | 0.670 | 0.387
ReliefF | 0.752 | 0.613 | 0.756 | 0.609
RF | None | 0.181 | 0.000 | 0.034 | 0.000
PCA | 0.504 | 0.059 | 0.501 | 0.259
ReliefF | 0.697 | 0.244 | 0.587 | 0.203
KNN | None | 0.543 | 0.066 | 0.499 | 0.046
PCA | 0.671 | 0.269 | 0.672 | 0.420
ReliefF | 0.698 | 0.273 | 0.660 | 0.326
MLP | None | 0.606 | 0.150 | 0.576 | 0.259
PCA | 0.656 | 0.280 | 0.670 | 0.414
ReliefF | 0.620 | 0.282 | 0.667 | 0.372
LSTM | None | 0.532 | 0.204 | 0.475 | 0.236
PCA | 0.623 | 0.322 | 0.574 | 0.370
ReliefF | 0.644 | 0.330 | 0.608 | 0.362
CNN | None | 0.593 | 0.204 | 0.589 | 0.291
PCA | 0.612 | 0.346 | 0.598 | 0.366
ReliefF | 0.610 | 0.331 | 0.642 | 0.376
Random Classifier | 0.520 | 0.223 | 0.543 | 0.275
All True Classifier | 0.464 | 0.176 | 0.500 | 0.190
Table 6. Mean F1 score of classification predictors for DS1.
Setting | A | B | C | D
---|---|---|---|---
Algorithm | Feat. Sel. | Min F1 score in 10 DS1 instances
XGBoost | None | 0.605 | 0.12 | 0.474 | 0.19
PCA | 0.615 | 0.233 | 0.571 | 0.338
ReliefF | 0.653 | 0.5 | 0.676 | 0.588
RF | None | 0.000 | 0.000 | 0.000 | 0.000
PCA | 0.435 | 0.000 | 0.316 | 0.129
ReliefF | 0.562 | 0.129 | 0.435 | 0.129
KNN | None | 0.4 | 0.000 | 0.375 | 0.000
PCA | 0.586 | 0.211 | 0.588 | 0.355
ReliefF | 0.583 | 0.2 | 0.565 | 0.304
MLP | None | 0.409 | 0.045 | 0.449 | 0.000
PCA | 0.571 | 0.214 | 0.548 | 0.311
ReliefF | 0.543 | 0.213 | 0.6 | 0.259
LSTM | None | 0.429 | 0.111 | 0.381 | 0.111
PCA | 0.444 | 0.214 | 0.44 | 0.279
ReliefF | 0.571 | 0.291 | 0.543 | 0.314
CNN | None | 0.451 | 0.08 | 0.44 | 0.17
PCA | 0.545 | 0.224 | 0.508 | 0.299
ReliefF | 0.54 | 0.283 | 0.556 | 0.333
Table 7. Minimum F1 score of classification predictors for DS1.
Observations for DS1: XGBoost combined with ReliefF displays the best scores
for all the settings, as shown in Table 6. To better assess its superiority,
in Table 7, we provide the lowest F1 score among the 10 DS1 instances for the
best performing configuration of each technique; still the combination
XGBoost-ReliefF dominates. Apart from this main observation, several
additional interesting observations can be drawn: (i) For XGBoost, the
difference between not employing dimensionality reduction and employing PCA is
relatively small. More importantly, if ReliefF is not employed, there is a
significant degradation in performance when the prediction window is narrower
(settings B and D) compared to settings A and C. (ii) RF cannot handle cases B
and D well, and it requires either PCA or ReliefF. (iii) KNN is superior to
RF, especially when there is a buffer window (settings C and D). In addition,
KNN's worst behavior, as shown in Table 7, is significantly better than RF's.
(iv) NN techniques exhibit higher F1 scores than
RF but lower than XGBoost and KNN in all cases except B. In general, regarding
NN flavors, there is not a clear pattern. Employing ReliefF or PCA is always
better than not employing any of them. On average, LSTM is never better than
MLP or CNN, but its worst performance for setting B is better than the worst
performance of the latter two NN architectures. As in all techniques apart
from XGBoost-ReliefF, the F1 scores in settings B and D are particularly low.
(v) The best average performing combination for each algorithm (underlined in
Table 6) is not always the one with the better worst behavior (underlined in
Table 7). Finally, the two dummy classifiers often behave better than many
combinations, which is attributed to the high difficulty in training effective
models in such settings.
Further to the last observation, in the following, we do not examine some
inferior combinations, such as those not employing PCA or ReliefF. We continue
the discussion for DS2-DS6, while detailed results are in Tables 8 and 9.
Setting | A | B | C | D
---|---|---|---|---
Algorithm | Feat. Sel. | Mean F1 score in 10 DS2 instances
XGBoost | PCA | 0.588 | 0.219 | 0.617 | 0.275
ReliefF | 0.701 | 0.379 | 0.766 | 0.391
RF | ReliefF | 0.597 | 0.291 | 0.723 | 0.261
KNN | PCA | 0.643 | 0.336 | 0.683 | 0.360
ReliefF | 0.624 | 0.384 | 0.708 | 0.396
MLP | PCA | 0.627 | 0.428 | 0.669 | 0.420
LSTM | ReliefF | 0.566 | 0.298 | 0.649 | 0.369
CNN | PCA | 0.531 | 0.467 | 0.607 | 0.372
ReliefF | 0.536 | 0.292 | 0.633 | 0.366
Random Classifier | 0.484 | 0.225 | 0.519 | 0.282
All True Classifier | 0.459 | 0.174 | 0.494 | 0.190
Algorithm | Feat. Selection | Mean F1 score in 10 DS3 instances
XGBoost | PCA | 0.657 | 0.337 | 0.694 | 0.368
ReliefF | 0.674 | 0.600 | 0.678 | 0.660
RF | ReliefF | 0.059 | 0.036 | 0.038 | 0.022
KNN | PCA | 0.660 | 0.280 | 0.684 | 0.339
ReliefF | 0.650 | 0.597 | 0.658 | 0.659
MLP | PCA | 0.660 | 0.515 | 0.700 | 0.564
LSTM | ReliefF | 0.597 | 0.330 | 0.658 | 0.398
CNN | PCA | 0.564 | 0.358 | 0.612 | 0.367
ReliefF | 0.565 | 0.351 | 0.668 | 0.388
Random Classifier | 0.485 | 0.246 | 0.520 | 0.303
All True Classifier | 0.462 | 0.174 | 0.492 | 0.191
Algorithm | Feat. Selection | Mean F1 score in 10 DS4 instances
XGBoost | PCA | 0.633 | 0.271 | 0.672 | 0.297
ReliefF | 0.653 | 0.232 | 0.663 | 0.301
RF | ReliefF | 0.058 | 0.077 | 0.034 | 0.058
KNN | PCA | 0.611 | 0.229 | 0.665 | 0.274
ReliefF | 0.655 | 0.305 | 0.671 | 0.339
MLP | PCA | 0.624 | 0.342 | 0.652 | 0.394
LSTM | ReliefF | 0.616 | 0.314 | 0.643 | 0.399
CNN | PCA | 0.545 | 0.304 | 0.590 | 0.347
ReliefF | 0.565 | 0.313 | 0.634 | 0.402
Random Classifier | 0.531 | 0.227 | 0.569 | 0.285
All True Classifier | 0.460 | 0.176 | 0.496 | 0.190
Algorithm | Feat. Selection | Mean F1 score in 10 DS5 instances
XGBoost | PCA | 0.560 | 0.245 | 0.628 | 0.275
ReliefF | 0.689 | 0.351 | 0.752 | 0.391
RF | ReliefF | 0.618 | 0.217 | 0.719 | 0.219
KNN | PCA | 0.627 | 0.322 | 0.688 | 0.369
ReliefF | 0.640 | 0.338 | 0.701 | 0.388
MLP | PCA | 0.611 | 0.418 | 0.666 | 0.412
LSTM | ReliefF | 0.575 | 0.293 | 0.663 | 0.364
CNN | PCA | 0.554 | 0.428 | 0.623 | 0.367
ReliefF | 0.504 | 0.388 | 0.572 | 0.342
Random Classifier | 0.484 | 0.225 | 0.519 | 0.282
All True Classifier | 0.459 | 0.174 | 0.494 | 0.190
Algorithm | Feat. Selection | Mean F1 score in 10 DS6 instances
XGBoost | PCA | 0.330 | 0.180 | 0.355 | 0.196
ReliefF | 0.394 | 0.333 | 0.452 | 0.222
RF | ReliefF | 0.338 | 0.240 | 0.351 | 0.127
KNN | PCA | 0.446 | 0.293 | 0.455 | 0.315
ReliefF | 0.475 | 0.338 | 0.489 | 0.319
MLP | PCA | 0.479 | 0.418 | 0.492 | 0.310
LSTM | ReliefF | 0.437 | 0.212 | 0.480 | 0.261
CNN | PCA | 0.408 | 0.402 | 0.422 | 0.241
ReliefF | 0.438 | 0.257 | 0.456 | 0.294
Random Classifier | 0.347 | 0.135 | 0.345 | 0.158
All True Classifier | 0.252 | 0.094 | 0.265 | 0.099
Table 8. Mean F1 score of classification predictors for DS2-DS6.
Setting | A | B | C | D
---|---|---|---|---
Algorithm | Feat. Sel. | Min F1 score in 10 DS2 instances
XGBoost | PCA | 0.462 | 0.095 | 0.514 | 0.154
ReliefF | 0.564 | 0.222 | 0.634 | 0.263
RF | ReliefF | 0.391 | 0.08 | 0.571 | 0.118
KNN | PCA | 0.596 | 0.226 | 0.652 | 0.311
ReliefF | 0.558 | 0.308 | 0.629 | 0.348
MLP | PCA | 0.493 | 0.326 | 0.566 | 0.356
LSTM | ReliefF | 0.484 | 0.228 | 0.523 | 0.316
CNN | PCA | 0.487 | 0.345 | 0.563 | 0.326
ReliefF | 0.484 | 0.219 | 0.556 | 0.344
Algorithm | Feat. Selection | Min F1 score in 10 DS3 instances
XGBoost | PCA | 0.54 | 0.143 | 0.606 | 0.25
ReliefF | 0.526 | 0.200 | 0.512 | 0.457
RF | ReliefF | 0.0 | 0.0 | 0.0 | 0.0
KNN | PCA | 0.576 | 0.148 | 0.627 | 0.167
ReliefF | 0.591 | 0.267 | 0.549 | 0.45
MLP | PCA | 0.529 | 0.367 | 0.615 | 0.41
LSTM | ReliefF | 0.475 | 0.233 | 0.612 | 0.348
CNN | PCA | 0.462 | 0.281 | 0.531 | 0.327
ReliefF | 0.452 | 0.282 | 0.579 | 0.314
Algorithm | Feat. Selection | Min F1 score in 10 DS4 instances
XGBoost | PCA | 0.524 | 0.129 | 0.588 | 0.138
ReliefF | 0.545 | 0.118 | 0.522 | 0.2
RF | ReliefF | 0.0 | 0.0 | 0.0 | 0.0
KNN | PCA | 0.553 | 0.121 | 0.576 | 0.148
ReliefF | 0.576 | 0.263 | 0.554 | 0.291
MLP | PCA | 0.517 | 0.212 | 0.592 | 0.299
LSTM | ReliefF | 0.533 | 0.225 | 0.581 | 0.338
CNN | PCA | 0.474 | 0.263 | 0.521 | 0.286
ReliefF | 0.462 | 0.24 | 0.593 | 0.37
Algorithm | Feat. Selection | Min F1 score in 10 DS5 instances
XGBoost | PCA | 0.419 | 0.214 | 0.491 | 0.154
ReliefF | 0.5 | 0.235 | 0.588 | 0.279
RF | ReliefF | 0.408 | 0.069 | 0.571 | 0.091
KNN | PCA | 0.578 | 0.222 | 0.638 | 0.27
ReliefF | 0.5 | 0.216 | 0.612 | 0.339
MLP | PCA | 0.514 | 0.333 | 0.571 | 0.333
LSTM | ReliefF | 0.508 | 0.22 | 0.557 | 0.296
CNN | PCA | 0.483 | 0.339 | 0.563 | 0.304
ReliefF | 0.286 | 0.267 | 0.351 | 0.0
Algorithm | Feat. Selection | Min F1 score in 10 DS6 instances
XGBoost | PCA | 0.167 | 0.143 | 0.167 | 0.143
ReliefF | 0.125 | 0.19 | 0.167 | 0.0
RF | ReliefF | 0.1 | 0.154 | 0.118 | 0.0
KNN | PCA | 0.25 | 0.24 | 0.4 | 0.194
ReliefF | 0.267 | 0.235 | 0.308 | 0.143
MLP | PCA | 0.333 | 0.211 | 0.286 | 0.235
LSTM | ReliefF | 0.25 | 0.093 | 0.345 | 0.175
CNN | PCA | 0.333 | 0.32 | 0.246 | 0.195
ReliefF | 0.333 | 0.146 | 0.317 | 0.231
Table 9. Minimum F1 score of classification predictors for DS2-DS6.
Observations for DS2: The main difference between DS1 and DS2 is that DS2
contains shorter and more diverse patterns before target events. On average,
the combination XGBoost-ReliefF is still the dominant one for settings A and
C, which correspond to a larger prediction window. However, for the settings B
and D where the correct prediction period is shorter (6 days) the NN
solutions, and more specifically CNN and MLP, respectively, perform better
than XGBoost. Also, NN-based solutions seem to perform better when combined
with PCA rather than ReliefF. Finally, when considering the instance with
worst performance, KNN is more robust than XGBoost regarding the A and C
settings.
Observations for DS3: For DS3, which is similar to DS1 but with a ten-fold
increase in the number of event types, the combination of XGBoost-ReliefF is,
on average, superior, but (i) the difference in performance compared to KNN
and NN-based solutions is smaller, and (ii) in setting C, a NN-based solution,
namely MLP, is the best performing one. While all NN flavors can handle
settings A and C in a manner that their performance is not much inferior to
the best performing one, only MLP performs well in settings B and D. As in
DS2, KNN exhibits higher robustness with regards to the worst performing
instance compared to XGBoost. Finally, RF cannot cope with the challenges of
this DS type.
Observations for DS4: DS4 modifies DS2 by examining the behavior after a
10-fold increase in the event types. XGBoost-ReliefF is close to the best
performance for settings A and C but it is not the most efficient anymore. KNN
is also very competent for the A and C cases, while NN-based solutions exhibit
the highest performance in settings B and D, with MLP-PCA achieving the best
scores on average. In terms of robustness, KNN and CNN are the dominant
solutions.
Observations for DS5: DS5 introduces shuffled patterns into DS2, which
increase the diversity in the patterns preceding target events. The behavior
is similar to DS2 with a slight decrease in the F1 scores overall. This
implies that the fact that the warning event patterns contain their elements
in random order does not play a key role. Compared to DS4, the behavior is
also similar, but the F1 scores are higher, which implies that the techniques
are more heavily affected by the increase in the size of the event types.
Observations for DS6: DS6 employs the same setting as DS5 but with half
training and test instances. This heavily impacts all classifiers but NN-based
solutions, and MLP-PCA in particular, exhibit the most limited degradation in
their performance. KNN is the second best approach for this DS type.
Generic observations: Overall, the main lesson learned is that, for the
classification-based techniques, XGBoost combined with ReliefF is a good
starting point, as evidenced also in the proposal in (Wang et al., 2017). But
there are several other additional dimensions that need to be considered,
which extend this main intermediate conclusion: (1) XGBoost alone does not
guarantee good performance; e.g., when combined with PCA, in some settings
like D, it cannot even beat a random classifier. (2) RF, which is shown to
have good performance in (Wang et al., 2017), cannot cope with many settings.
(3) When the preceding patterns are diverse, in the sense that the constituent
elements may have multiple alternatives and the prediction window is
relatively narrow, NN-based solutions should be examined. Similarly, KNN-based solutions
along with NN-based ones need to be considered in cases where the set of event
types is large and/or the failure instances are infrequent.
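As a concrete sketch of the main lesson above, namely that a predictor should be coupled with a feature-selection step rather than treating the two aspects separately, the snippet below chains a univariate filter with a gradient-boosting classifier in a scikit-learn Pipeline. SelectKBest is used only as a simple stand-in for ReliefF and GradientBoostingClassifier for XGBoost, since neither of those implementations is assumed to be available; the synthetic event-count data and the label rule are purely illustrative.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Illustrative features: one row per day, one column per event type (counts).
X = rng.poisson(2.0, size=(365, 150)).astype(float)
# Hypothetical label: "failure ahead" when the first 5 event types spike.
y = (X[:, :5].sum(axis=1) > 12).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),         # feature-selection step
    ("clf", GradientBoostingClassifier(random_state=0)),
])
model.fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, model.predict(X_te)), 3))
```

Keeping both steps inside one Pipeline also ensures that feature selection is fitted on the training split only, avoiding leakage into the test F1 score.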
Setting | A | B | C | D
---|---|---|---|---
Algorithm | Feat. Sel. | Mean F1 score in 10 DS1 instances
XGBoost | None | 0.545 | 0.323 | 0.708 | 0.315
PCA | 0.611 | 0.370 | 0.706 | 0.409
ReliefF | 0.640 | 0.459 | 0.717 | 0.353
RF | None | 0.537 | 0.407 | 0.609 | 0.274
PCA | 0.581 | 0.324 | 0.636 | 0.321
ReliefF | 0.630 | 0.434 | 0.714 | 0.305
KNN | None | 0.501 | 0.000 | 0.499 | 0.000
PCA | 0.627 | 0.409 | 0.700 | 0.400
ReliefF | 0.620 | 0.408 | 0.679 | 0.371
PLS | None | 0.543 | 0.309 | 0.645 | 0.276
PCA | 0.587 | 0.375 | 0.760 | 0.380
ReliefF | 0.627 | 0.433 | 0.764 | 0.415
MLP | None | 0.506 | 0.150 | 0.576 | 0.259
PCA | 0.505 | 0.409 | 0.567 | 0.296
ReliefF | 0.548 | 0.402 | 0.581 | 0.306
LSTM | None | 0.561 | 0.316 | 0.581 | 0.246
PCA | 0.564 | 0.305 | 0.613 | 0.287
ReliefF | 0.557 | 0.378 | 0.605 | 0.298
CNN | None | 0.546 | 0.294 | 0.543 | 0.245
PCA | 0.502 | 0.309 | 0.546 | 0.291
ReliefF | 0.495 | 0.333 | 0.547 | 0.297
Random Classifier | 0.520 | 0.223 | 0.543 | 0.275
All True Classifier | 0.464 | 0.176 | 0.500 | 0.190
Table 10. Mean F1 score of regression predictors for DS1.
Setting | A | B | C | D
---|---|---|---|---
Algorithm | Feat. Sel. | Min F1 score in 10 DS1 instances
XGBoost | None | 0.442 | 0.207 | 0.515 | 0.285
PCA | 0.483 | 0.323 | 0.613 | 0.381
ReliefF | 0.538 | 0.327 | 0.679 | 0.279
RF | None | 0.412 | 0.333 | 0.515 | 0.222
PCA | 0.468 | 0.273 | 0.5 | 0.25
ReliefF | 0.556 | 0.333 | 0.615 | 0.276
KNN | None | 0.378 | 0.000 | 0.355 | 0.000
PCA | 0.553 | 0.375 | 0.679 | 0.345
ReliefF | 0.511 | 0.378 | 0.615 | 0.345
PLS | None | 0.427 | 0.273 | 0.5 | 0.222
PCA | 0.448 | 0.327 | 0.68 | 0.35
ReliefF | 0.5 | 0.375 | 0.755 | 0.378
MLP | None | 0.511 | 0.312 | 0.555 | 0.212
PCA | 0.4 | 0.327 | 0.459 | 0.222
ReliefF | 0.416 | 0.222 | 0.486 | 0.243
LSTM | None | 0.429 | 0.191 | 0.5 | 0.207
PCA | 0.452 | 0.239 | 0.526 | 0.246
ReliefF | 0.455 | 0.333 | 0.507 | 0.22
CNN | None | 0.455 | 0.16 | 0.438 | 0.0
PCA | 0.432 | 0.275 | 0.485 | 0.212
ReliefF | 0.421 | 0.273 | 0.472 | 0.237
Table 11. Minimum F1 score of regression predictors for DS1.
### 4.4. Evaluation of Regression Predictors
We now turn our attention to the regression-based approach that is based on
the methodology in (Korvesis et al., 2018). The evaluation of the alternatives
follows exactly the same rationale. In all cases, the steepness of the sigmoid
function was set to 0.7 and the midpoint to 16 days for settings A and C, and
6 days for settings B and D. The alarm threshold was a configuration
parameter, and we present the results corresponding to the best configuration.
All the DS instances are the same as previously. The days are grouped in
segments of $N=Y$ days, so that again, the counterpart of the observation
window is of the same size as the prediction window.
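The sigmoid risk target used by the regression predictors can be sketched as follows. The steepness and midpoint parameters match the values quoted above (0.7, and 16 days for settings A and C or 6 days for B and D), while the function name and the exact orientation of the curve are our assumptions rather than the precise formulation in (Korvesis et al., 2018).

```python
import math

def risk_target(days_to_failure, steepness=0.7, midpoint=16):
    """Map days-to-failure to a risk score in (0, 1).

    The target approaches 1 as the failure gets close and 0 far from
    it, crossing 0.5 exactly at the midpoint.
    """
    return 1.0 / (1.0 + math.exp(steepness * (days_to_failure - midpoint)))

print(round(risk_target(30), 3))  # far from the failure: ~0.0
print(round(risk_target(16), 3))  # at the midpoint: 0.5
print(round(risk_target(0), 3))   # on the failure day: ~1.0
```

The regression model is then trained to predict this continuous risk value per segment, and the alarm threshold mentioned above is applied to the predicted risk.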
Observations for DS1: The average results are presented in Table 10. The main
observations are summarized as follows: (i) Compared to the results for the
classification-based approach in Table 6, the main conclusion drawn is that
the classification-based techniques perform better. Especially for settings B
and D, the best performing regression techniques achieve significantly lower
F1 scores, while they are marginally superior in setting C. (ii) XGBoost-
ReliefF is better in settings A and B, but as shown in Table 11, it is not the
most robust technique for these settings. In settings C and D, PLS-ReliefF is
the dominant solution. (iii) Solutions that employ XGBoost are either the best
performing ones or very close to the best performing ones, but without a clear
winner regarding dimensionality reduction between PCA and ReliefF. (iv) RF
combined with ReliefF can handle settings A, B and C close to the best
performing ones and for setting A, it exhibits the highest worst F1 score
among the 10 instances tested. (v) KNN is worse than RF in all settings except
D and exhibits a consistent pattern, according to which it improves its
performance with PCA to a larger degree than with ReliefF. (vi) PLS-ReliefF can
be deemed as the best solution, since it is superior in C and D and the gap
from the superior technique for settings A and B, XGBoost-ReliefF, is
relatively small. (vii) NN-based solutions are clearly inferior to all the
afore-mentioned techniques, in the sense that they achieve the lowest average
F1 scores in all settings.
Setting | A | B | C | D
---|---|---|---|---
Algorithm | Feat. Sel. | Mean F1 score in 10 DS2 instances
XGBoost | PCA | 0.574 | 0.320 | 0.622 | 0.316
ReliefF | 0.592 | 0.354 | 0.619 | 0.306
RF | ReliefF | 0.595 | 0.329 | 0.621 | 0.304
KNN | PCA | 0.617 | 0.326 | 0.663 | 0.361
ReliefF | 0.634 | 0.363 | 0.638 | 0.352
PLS | ReliefF | 0.598 | 0.331 | 0.638 | 0.336
MLP | ReliefF | 0.522 | 0.337 | 0.568 | 0.326
LSTM | ReliefF | 0.520 | 0.289 | 0.575 | 0.311
CNN | ReliefF | 0.478 | 0.277 | 0.534 | 0.296
Random Classifier | 0.484 | 0.225 | 0.519 | 0.282
All True Classifier | 0.459 | 0.174 | 0.494 | 0.190
Algorithm | Feature Selection | Mean F1 score in 10 DS3 instances
XGBoost | PCA | 0.606 | 0.299 | 0.639 | 0.338
ReliefF | 0.623 | 0.427 | 0.657 | 0.354
RF | ReliefF | 0.610 | 0.383 | 0.623 | 0.334
KNN | PCA | 0.653 | 0.349 | 0.654 | 0.400
ReliefF | 0.642 | 0.373 | 0.655 | 0.407
PLS | ReliefF | 0.615 | 0.410 | 0.659 | 0.366
MLP | ReliefF | 0.545 | 0.312 | 0.555 | 0.330
LSTM | ReliefF | 0.557 | 0.289 | 0.572 | 0.311
CNN | ReliefF | 0.504 | 0.271 | 0.523 | 0.304
Random Classifier | 0.485 | 0.246 | 0.520 | 0.303
All True Classifier | 0.462 | 0.174 | 0.492 | 0.191
Algorithm | Feature Selection | Mean F1 score in 10 DS4 instances
XGBoost | PCA | 0.626 | 0.284 | 0.640 | 0.328
ReliefF | 0.625 | 0.306 | 0.644 | 0.337
RF | ReliefF | 0.612 | 0.313 | 0.640 | 0.322
KNN | PCA | 0.628 | 0.368 | 0.660 | 0.343
ReliefF | 0.626 | 0.365 | 0.664 | 0.335
PLS | ReliefF | 0.594 | 0.335 | 0.656 | 0.328
MLP | PCA | 0.530 | 0.286 | 0.562 | 0.316
LSTM | ReliefF | 0.525 | 0.267 | 0.571 | 0.270
CNN | ReliefF | 0.483 | 0.260 | 0.533 | 0.268
Random Classifier | 0.531 | 0.227 | 0.569 | 0.285
All True Classifier | 0.460 | 0.176 | 0.496 | 0.190
Algorithm | Feature Selection | Mean F1 score in 10 DS5 instances
XGBoost | PCA | 0.581 | 0.314 | 0.624 | 0.307
ReliefF | 0.591 | 0.331 | 0.612 | 0.322
RF | ReliefF | 0.597 | 0.318 | 0.607 | 0.299
KNN | PCA | 0.604 | 0.326 | 0.646 | 0.363
ReliefF | 0.621 | 0.340 | 0.617 | 0.340
PLS | ReliefF | 0.605 | 0.321 | 0.637 | 0.328
MLP | PCA | 0.510 | 0.309 | 0.572 | 0.331
LSTM | ReliefF | 0.489 | 0.273 | 0.538 | 0.320
CNN | ReliefF | 0.489 | 0.273 | 0.538 | 0.320
Random Classifier | 0.484 | 0.225 | 0.519 | 0.282
All True Classifier | 0.459 | 0.174 | 0.494 | 0.190
Algorithm | Feature Selection | Mean F1 score in 10 DS6 instances
XGBoost | PCA | 0.425 | 0.268 | 0.446 | 0.224
ReliefF | 0.474 | 0.366 | 0.439 | 0.242
RF | ReliefF | 0.487 | 0.301 | 0.452 | 0.225
KNN | PCA | 0.466 | 0.330 | 0.489 | 0.312
ReliefF | 0.436 | 0.247 | 0.446 | 0.272
PLS | ReliefF | 0.500 | 0.344 | 0.501 | 0.233
MLP | ReliefF | 0.394 | 0.300 | 0.415 | 0.250
LSTM | ReliefF | 0.397 | 0.243 | 0.402 | 0.261
CNN | ReliefF | 0.355 | 0.242 | 0.373 | 0.268
Random Classifier | 0.347 | 0.135 | 0.345 | 0.158
All True Classifier | 0.252 | 0.094 | 0.265 | 0.099
Table 12. Mean F1 score of regression predictors for DS2-DS6.
As previously, for the rest of the DS types, we examine a smaller set of
techniques to better focus on the combinations that achieve the highest F1
scores. The detailed results are shown in Table 12.
Observations for DS2: KNN is the clear winner in this DS type, where the
patterns of events preceding a failure become smaller and more diverse. KNN
behaves equally well with PCA and ReliefF; the former dimensionality reduction technique
is preferable when there is a buffer window (settings C and D). RF and XGBoost
behave similarly, while NN-based solutions remain inferior to all other
alternatives. In DS2, classification-based solutions perform better than
regression-based ones as in DS1.
Observations for DS3: The increase of the event types from 150 to 1500 seems
to be advantageous for KNN, which, along with XGBoost and PLS, is among the best
performing solutions. Still, the regression-based techniques perform worse
than their classification counterparts for this DS type, especially for
settings B and D.
Observations for DS4: Similarly to DS2, in DS4 KNN remains the clear winner. In
addition, the performance is similar to the one achieved by classification-
based solutions.
Observations for DS5: As in the previous section, we can observe that shuffling
has a small impact and the behavior is very similar to the one for DS2.
Observations for DS6: This is a very challenging setting with fewer failure
cases for training. There is no clear winner, but in different settings, any
of XGBoost, KNN or PLS may behave better. Also, the performance is slightly
better than the one reported in Table 8.
Generic observations: The regression-based methodology is in general inferior
to the classification-based one in terms of F1 score, with the exception of DS6.
XGBoost, KNN and PLS are the main options, while NN-based solutions
consistently fail to achieve comparable performance with the other solutions.
These results extend the remarks originally made in (Korvesis et al., 2018),
where RF was compared against SVM and was shown to behave better.
## 5\. Impact of manipulating training data
Figure 4. Tackling the class imbalance problem.
Setting | A | B | C | D
---|---|---|---|---
Algorithm | Feat. Sel. | Mean F1 score in 10 DS1 instances
XGB | ReliefF | 0.752 | 0.613 | 0.756 | 0.609
Undersampling | 0.755 | 0.548 | 0.771 | 0.599
Oversampling | 0.747 | 0.459 | 0.745 | 0.503
| Mean F1 score in 10 DS2 instances
XGB | ReliefF | 0.701 | 0.379 | 0.766 | 0.391
Undersampling | 0.683 | 0.422 | 0.730 | 0.416
Oversampling | 0.732 | 0.353 | 0.772 | 0.374
| Mean F1 score in 10 DS3 instances
XGB | ReliefF | 0.674 | 0.600 | 0.678 | 0.660
Undersampling | 0.656 | 0.565 | 0.671 | 0.592
Oversampling | 0.676 | 0.464 | 0.723 | 0.500
| Mean F1 score in 10 DS4 instances
MLP | PCA | 0.632 | 0.337 | 0.662 | 0.382
Undersampling | 0.640 | 0.321 | 0.664 | 0.377
Oversampling | 0.628 | 0.333 | 0.670 | 0.386
| Mean F1 score in 10 DS5 instances
XGB | ReliefF | 0.689 | 0.351 | 0.752 | 0.352
Undersampling | 0.640 | 0.398 | 0.739 | 0.400
Oversampling | 0.739 | 0.330 | 0.768 | 0.351
| Mean F1 score in 10 DS6 instances
MLP | PCA | 0.520 | 0.369 | 0.491 | 0.328
Undersampling | 0.422 | 0.253 | 0.471 | 0.240
Oversampling | 0.443 | 0.240 | 0.453 | 0.253
Table 13. Results for classification predictors after balancing the training dataset.
Setting | A | B | C | D
---|---|---|---|---
Algorithm | Feat. Sel. | Mean F1 score in 10 DS1 instances
PLS | ReliefF | 0.653 | 0.349 | 0.654 | 0.400
Undersampling | 0.629 | 0.378 | 0.653 | 0.339
Oversampling | 0.625 | 0.364 | 0.606 | 0.380
| Mean F1 score in 10 DS2 instances
KNN | ReliefF | 0.634 | 0.363 | 0.638 | 0.352
Undersampling | 0.658 | 0.393 | 0.628 | 0.352
Oversampling | 0.640 | 0.381 | 0.650 | 0.357
| Mean F1 score in 10 DS3 instances
XGB | ReliefF | 0.653 | 0.349 | 0.654 | 0.400
Undersampling | 0.629 | 0.378 | 0.635 | 0.339
Oversampling | 0.625 | 0.364 | 0.606 | 0.380
| Mean F1 score in 10 DS4 instances
KNN | PCA | 0.628 | 0.368 | 0.660 | 0.343
Undersampling | 0.589 | 0.362 | 0.588 | 0.346
Oversampling | 0.617 | 0.355 | 0.649 | 0.364
| Mean F1 score in 10 DS5 instances
KNN | ReliefF | 0.621 | 0.351 | 0.617 | 0.340
Undersampling | 0.639 | 0.386 | 0.611 | 0.363
Oversampling | 0.625 | 0.383 | 0.611 | 0.369
| Mean F1 score in 10 DS6 instances
PLS | ReliefF | 0.500 | 0.344 | 0.501 | 0.233
Undersampling | 0.482 | 0.367 | 0.435 | 0.318
Oversampling | 0.489 | 0.163 | 0.497 | 0.211
Table 14. Results for regression predictors after balancing the training
dataset.
The test datasets employed in this work and the vast majority of real-world
PdM scenarios in general are inherently characterized by class imbalance and
low volume. For example, if we have daily data for one year, this corresponds
to 365 training examples only. If the observation window exceeds 1 day, as in
our settings, the training dataset is decreased even more. Moreover, if the
critical failures corresponding to a target event occur 10 times in a year,
there are far fewer training samples indicating a failure than those
referring to normal operation. Both these factors impact the efficiency of the
methodology. We have explored two ways to ameliorate the problem of class
imbalance, as shown in Figure 4: we either undersample the class referring to
normal operation before model building, or we oversample the examples
referring to failures. After balancing the training dataset, the data can
additionally be oversampled to increase its size.
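The two balancing options can be illustrated with simple random resampling, as sketched below; the actual experiments may use a different resampling scheme, so treat this only as an illustration of the idea.

```python
import numpy as np

def undersample(X, y, rng):
    """Randomly drop majority-class (normal operation) examples
    until both classes have equal size."""
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    keep_neg = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep_neg])
    return X[idx], y[idx]

def oversample(X, y, rng):
    """Randomly duplicate minority-class (failure) examples
    until both classes have equal size."""
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    extra = rng.choice(pos, size=len(neg) - len(pos), replace=True)
    idx = np.concatenate([pos, neg, extra])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = np.arange(20).reshape(10, 2)                    # 10 toy daily examples
y = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 0])        # 2 failures, 8 normal days
for balance in (undersample, oversample):
    Xb, yb = balance(X, y, rng)
    print(balance.__name__, (yb == 1).sum(), (yb == 0).sum())
# undersample -> 2 of each class; oversample -> 8 of each class
```

Undersampling shrinks an already small training set further, which is why the additional oversampling step after balancing can be useful when data volume is the limiting factor.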
The results are shown in Table 13, where we investigate only the best
performing combinations for each dataset type according to the results in the
previous section. From the table, we conclude that in 15 out of the 24 cases
(6 dataset types × 4 parameter settings), manipulating the input yielded F1
improvements. Out of these improvements, 7 were through undersampling and 8
through oversampling. The improvements were up to 13.6% (D setting in DS5).
Interestingly, for DS6, manipulating the training dataset did not prove
beneficial in any of the 4 settings.
We applied the same modifications to the regression-based techniques, and as
shown in Table 14, there are fewer cases in which improvements were observed (14
out of 24). However, the largest relative improvement reached 36.5% (setting D
in DS6). Still, in absolute numbers, this F1 score is inferior to the best
performing classification-based technique. For regression-based techniques,
inspired by oversampling solutions described in (Korvesis et al., 2018), we
have explored two additional flavors that perform more systematic
undersampling and oversampling based also on the risk values. We omit details,
but due to these flavors some additional minor improvements were noticed in
DS5 and DS6.
The main lesson learnt is that manipulating the training dataset can indeed
yield benefits, but there is no clear pattern regarding its exact usage. More
importantly, as can be easily noticed from the two tables above, in a
significant portion of the cases, not only are there no improvements in terms
of the F1 score, but the degradation is more significant than the highest
observed benefits, e.g., more than 25%. Therefore, this step needs to be
performed with care.
## 6\. Impact of ensemble solutions
Figure 5. The ratio between the F1 score of the ensemble solution to the best
performing single classifier solution for the techniques: a) simple average,
b) weighted average, c) dynamic weighted average and d) dynamic threshold
(from top to bottom)
The last part of our experimental analysis deals with heterogeneous ensembles.
More specifically, we combine multiple classifiers as follows. To narrow the
scope of this experiment, we first consider only three classifiers, namely
XGBoost, Random Forest and KNN coupled with ReliefF. These techniques are
combined in four different manners as follows:
1. (1)
Simple average: a prediction for a forthcoming failure is made whenever at
least half (i.e., 2 out of 3) of the classifiers predict so.
2. (2)
Weighted average: each classifier's vote for a forthcoming failure is weighted
by its relative F1 score according to the performance of the classifiers in
Section 4. E.g., if the corresponding scores are 0.6, 0.5 and 0.7, the weights
become 0.33, 0.28 and 0.39, respectively. If the aggregated value exceeds 0.5,
a failure alarm is raised.
3. (3)
Dynamic weighted average: the weights here are optimal, in the sense that they
are set after each classifier runs according to the classifier performance for
each dataset instance; as such, they are different across the 60 datasets.
Doing so, we aim to find the upper bound of an ensemble solution based on
weights.
4. (4)
Dynamic threshold: inspired by our experience in (Naskos et al., 2019), where
in some cases combining solutions with the logical OR instead of AND proved beneficial, we
examine a setting where, for each combination of dataset type and parameter
setting, we may choose any threshold between 1 and 3 out of 3 classifiers to
vote for a failure in order to raise an alarm. In the experiments, we show
only the best performing configuration.
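Strategies (2) and (4) above can be sketched as follows; the weights reproduce the worked example in the text (F1 scores 0.6, 0.5 and 0.7), while the dynamic threshold simply varies the number of positive votes required. This is a sketch of the voting logic only, not of the full experimental pipeline.

```python
def weighted_vote(preds, f1_scores):
    """Strategy (2): raise an alarm when the F1-weighted average of
    the binary predictions exceeds 0.5."""
    total = sum(f1_scores)
    weights = [s / total for s in f1_scores]
    score = sum(w * p for w, p in zip(weights, preds))
    return int(score > 0.5)

def threshold_vote(preds, k):
    """Strategy (4): raise an alarm when at least k of the classifiers
    vote for a failure (k=2 out of 3 reproduces the simple average)."""
    return int(sum(preds) >= k)

# F1 scores 0.6, 0.5 and 0.7 give the weights quoted in the text.
f1 = [0.6, 0.5, 0.7]
print([round(s / sum(f1), 2) for s in f1])  # [0.33, 0.28, 0.39]
print(weighted_vote([1, 0, 1], f1))         # 0.33 + 0.39 = 0.72 > 0.5 -> 1
print(threshold_vote([1, 0, 0], k=1))       # permissive OR-like threshold -> 1
```

Lowering k towards 1 makes the ensemble behave like a logical OR (more alarms, higher recall), while k equal to the number of classifiers behaves like a logical AND.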
Figure 5 shows the results compared to the best performing single-classifier
solutions in Section 4, as reported in Tables 6 and 8. Simple and weighted
averaging cannot yield any benefits and lead to performance degradation by 13%
and 12.5%, respectively. Dynamic weighting can increase the F1 score in some
cases, but in general, F1 decreases by 8.2%. However, dynamic threshold
setting manages to produce tangible benefits: the average F1 score
improvement is 7.05%, while any degradation observed in a combination of
dataset type and parameter setting does not exceed 1.05%.
We repeated a similar experiment for the regression-based predictors. Due to
lack of space, we do not present detailed results but the two main
observations drawn are: (i) it is beneficial to first combine the predictors
and apply any threshold afterwards, and (ii) the best-performing behavior of
the regression-based ensembles is inferior to the behavior of the worst-
performing classification-based ensembles at the top of Figure 5.
## 7\. Discussion and Future Work
In the introduction, we listed five questions. Here, we provide the summary
answers to them, where we group the first three together.
_“What is the impact on the performance of different prediction techniques
compared to the ones originally employed or evaluated? Is the behavior of
feature selection techniques correlated with the exact prediction technique
employed? Which are the most important dataset attributes that affect the
technique choices?”_ : we have shown that it is beneficial to depart from the
initial predictor suggestions in many cases, and also explore KNN- and NN-
based solutions. More specifically, our results provide strong evidence that
the prediction technique needs to be coupled with the feature selection one,
and that, despite the explosion in the search space, these aspects should not
be treated separately. The final choice about the technique to be employed depends on
several characteristics of the dataset type, including the number of the
training examples with failures, the domain size of the event types, and the
diversity in the warning patterns. The summary observations are mentioned at
the end of Sections 4.3 and 4.4.
_“Can the manipulation of the training set to deal with class imbalance and
typically small sizes yield benefits?”_ : we have dealt with this question
in Section 5, where it is shown that improvements can be made. However, there
is no clear pattern as to which manipulation techniques should be chosen;
therefore a meta-classifier is required, and this constitutes a significant
direction for future work.
_“What is the impact of ensemble solutions?”_ : as discussed in Section 6,
simply combining predictors in a meaningful way does not guarantee
improvements and can even lead to significant performance degradation. The
most promising solution is to dynamically set the threshold regarding raising
an alarm for a forthcoming failure. This again calls for the development of a
meta-classifier, to judiciously take such decisions.
An additional remark from our work is that, in the datasets examined, the
classification-based solution outperforms the regression-based one. After
reaching this conclusion, further investigation is required to establish the
exact cause and the contribution of the extended features extracted from the
raw logs in the former technique.
In summary, in this work we deal with event logs referring to phenomena
following Weibull distributions, as is the case in many industrial settings.
Assuming that a small portion of such phenomena may be treated as warning
signals for a forthcoming significant failure, albeit in an unclear manner,
we have investigated several techniques and methodologies, significantly
extending the preliminary results in (Naskos and Gounaris, 2019) and the
conclusions from proposals, such as (Wang et al., 2017) and (Korvesis et al.,
2018).
## References
* Ao et al. (2015) Xiang Ao, Ping Luo, Chengkai Li, Fuzhen Zhuang, and Qing He. 2015. Online frequent episode mining. In _IEEE 31st Int. Conf. on Data Engineering (ICDE)_. 891–902.
* Bach (2008) Francis R Bach. 2008\. Bolasso: model consistent lasso estimation through the bootstrap. In _Proc. of the 25th Int. Conf. on Machine learning_. ACM, 33–40.
* Bellas et al. (2020) Christos Bellas, Athanasios Naskos, Georgia Kougka, George Vlahavas, Anastasios Gounaris, Athena Vakali, Apostolos Papadopoulos, Evmorfia Biliri, Nefeli Bountouni, and Gustavo Gonzalez Granadillo. 2020\. A Methodology for Runtime Detection and Extraction of Threat Patterns. _SN Comput. Sci._ 1, 5 (2020), 238.
* Carvalho et al. (2019) Thyago Peres Carvalho, Fabrízzio Alphonsus A. M. N. Soares, Roberto Vita, Roberto da Piedade Francisco, João P. Basto, and Symone G. S. Alcalá. 2019\. A systematic literature review of machine learning methods applied to predictive maintenance. _Comput. Ind. Eng._ 137 (2019).
* Chen and Guestrin (2016) Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In _Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining_. 785–794.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_ , Yoshua Bengio and Yann LeCun (Eds.).
* Kolchinsky and Schuster (2018) Ilya Kolchinsky and Assaf Schuster. 2018. Efficient Adaptive Detection of Complex Event Patterns. _PVLDB_ 11, 11 (2018), 1346–1359.
* Kononenko et al. (1997) Igor Kononenko, Edvard Šimec, and Marko Robnik-Šikonja. 1997\. Overcoming the myopia of inductive learning algorithms with RELIEFF. _Applied Intelligence_ 7, 1 (1997), 39–55.
* Korvesis et al. (2018) Panagiotis Korvesis, Stephane Besseau, and Michalis Vazirgiannis. 2018. Predictive Maintenance in Aviation: Failure Prediction from Post-Flight Reports. In _2018 IEEE 34th International Conference on Data Engineering (ICDE)_. IEEE, 1414–1422.
* Kovalev et al. (2018) D. Kovalev, I. Shanin, S. Stupnikov, and V. Zakharov. 2018. Data Mining Methods and Techniques for Fault Detection and Predictive Maintenance in Housing and Utility Infrastructure. In _2018 International Conference on Engineering Technologies and Computer Science (EnT)_. 47–52.
* Lai et al. (2006) Chin-Diew Lai, D.N. Murthy, and Min Xie. 2006. _Weibull Distributions and Their Applications_. 63–78.
* Li et al. (2018) Hui Li, Sizhe Peng, Jian Li, Jingjing Li, Jiangtao Cui, and Jianfeng Ma. 2018\. ONCE and ONCE+: Counting the Frequency of Time-constrained Serial Episodes in a Streaming Sequence. _arXiv preprint arXiv:1801.09639_ (2018).
* Lu (2017) Yang Lu. 2017. Industry 4.0: A survey on technologies, applications and open research issues. _Journal of Industrial Information Integration_ 6 (2017), 1 – 10.
* Naskos and Gounaris (2019) Athanasios Naskos and Anastasios Gounaris. 2019. Efficiency assessment of event-based predictive maintenance in Industry 4.0. In _Advances in Data Mining - Applications and Theoretical Aspects, 19th Industrial Conference, ICDM 2019, New York, USA, July 17 - July 21, 2019_. 103–117.
* Naskos et al. (2019) Athanasios Naskos, Georgia Kougka, Theodoros Toliopoulos, Anastasios Gounaris, Cosmas Vamvalis, and Daniel Caljouw. 2019. Event-based predictive maintenance on top of sensor data in a real Industry 4.0 case study. In _Machine Learning and Knowledge Discovery in Databases - International Workshops of ECML PKDD Proceedings, Part II_. 345–356.
* Ozsu (2020) M. Tamer Ozsu. 2020\. A Systematic View of Data Science. _IEEE Data Eng. Bull._ 43, 3 (2020), 3–11.
* Ramachandran et al. (2017) Prajit Ramachandran, Barret Zoph, and Quoc V Le. 2017\. Searching for activation functions. _arXiv preprint arXiv:1710.05941_ (2017).
* Scully (2019a) P Scully. 2019a. Predictive Maintenance Companies Landscape 2019 IoT Analytics. _Iot-analytics.com_ (2019).
* Scully (2019b) P Scully. 2019b. Predictive Maintenance Report 2019-2024 IoT Analytics. _Iot-analytics.com_ (2019).
* Sipos et al. (2014) Ruben Sipos, Dmitriy Fradkin, Fabian Moerchen, and Zhuang Wang. 2014. Log-based predictive maintenance. In _Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining_. ACM, 1867–1876.
* Ullman (2020) Jeffrey D. Ullman. 2020\. The Battle for Data Science. _IEEE Data Eng. Bull._ 43, 2 (2020), 8–14.
* Wang et al. (2017) J Wang, C Li, S Han, S Sarkar, and X Zhou. 2017. Predictive maintenance based on event-log analysis: A case study. _IBM Journal of Research and Development_ 61, 1 (2017), 11–121.
# AGRNet: Adaptive Graph Representation Learning and Reasoning for Face
Parsing
Gusi Te, Wei Hu, Yinglu Liu, Hailin Shi, and Tao Mei
G. Te and W. Hu are with Wangxuan Institute of Computer Technology, Peking
University, No. 128, Zhongguancun North Street, Beijing, China. E-mail:
<EMAIL_ADDRESS>. Y. Liu, H. Shi and T. Mei are with JD AI Research, Beijing,
China. E-mail: <EMAIL_ADDRESS>. Corresponding author: Wei Hu
(forhuwei@pku.edu.cn). This work was supported by the National Natural Science
Foundation of China (61972009).
###### Abstract
Face parsing infers a pixel-wise label to each facial component, which has
drawn much attention recently. Previous methods have shown their success in
face parsing, which however overlook the correlation among facial components.
As a matter of fact, the component-wise relationship is a critical clue in
discriminating ambiguous pixels in facial area. To address this issue, we
propose adaptive graph representation learning and reasoning over facial
components, aiming to learn representative vertices that describe each
component, exploit the component-wise relationship and thereby produce
accurate parsing results against ambiguity. In particular, we devise an
adaptive and differentiable graph abstraction method to represent the
components on a graph via pixel-to-vertex projection under the initial
condition of a predicted parsing map, where pixel features within a certain
facial region are aggregated onto a vertex. Further, we explicitly incorporate
the image edge as a prior in the model, which helps to discriminate edge and
non-edge pixels during the projection, thus leading to refined parsing results
along the edges. Then, our model learns and reasons over the relations among
components by propagating information across vertices on the graph. Finally,
the refined vertex features are projected back to pixel grids for the
prediction of the final parsing map. To train our model, we propose a
discriminative loss to penalize small distances between vertices in the
feature space, which leads to distinct vertices with strong semantics.
Experimental results show the superior performance of the proposed model on
multiple face parsing datasets, along with the validation on the human parsing
task to demonstrate the generalizability of our model.
###### Index Terms:
Face parsing, graph representation, attention mechanism, graph reasoning.
Figure 1: Illustration of the proposed adaptive graph representation learning
and reasoning for face parsing, which aims to capture the long range
dependencies among facial components. Given an input image, we represent each
facial component by a few vertices via adaptive pixel-to-vertex projection,
learn and reason over the graph connectivity to infer the relations among
facial components, and reproject the refined vertex features to the pixel
grids for face parsing.
## I Introduction
Face parsing is a particular task in semantic segmentation, which assigns a
pixel-wise label to each semantic component, such as facial skin, eyes, mouth
and nose. It enables more complex face analysis tasks, including face swapping
[1, 2], face editing [3, 4] and face completion [5, 6].
A plethora of methods have been proposed to fulfill this task with remarkable
success, which can be classified into two categories: 1) Face-based parsing,
which takes the holistic face image as input for segmentation [7, 8, 9, 10,
11]. This however may neglect the scale discrepancy in different facial
components, sometimes resulting in the lack of details. 2) Region-based
parsing, which addresses the above problem by first predicting the bounding
box and then acquiring the parsing map of each facial component individually
[12, 13, 14, 15, 16]. However, there still exist limitations and challenges.
Firstly, region-based parsing methods are based on the individual information
within each component, while the correlation among components is not exploited
yet to capture long range dependencies. In fact, facial components present
themselves with abundant correlation between each other. For instance, eyes,
mouth and eyebrows will generally become more curvy when people smile, as
shown in Fig. 1; facial skin and other components will be dark when the
lighting is weak, etc. Secondly, it remains a challenge to segment pixels
around the boundary between components, since boundaries tend to be ambiguous
in real scenarios.
To this end, we explicitly model the component-wise correlations on a graph
abstraction, with each vertex describing a facial component and each link
capturing the correlation between a pair of components. To learn such a graph
representation, we propose Adaptive Graph Representation learning and
reasoning over facial components—AGRNet, which learns and reasons over non-
local components to capture long range dependencies while leveraging the
boundary information between components. In particular, to bridge the image
pixels and graph vertices, AGRNet adaptively projects a collection of pixels
with similar features to each vertex. To obtain representative vertices,
instead of projecting pixels within an image block to a vertex by fixed
spatial pooling as in our previous work [10], we adaptively select pixels with
high responses to a distinct facial component as a graph vertex. Since facial
components are unknown at the beginning, we employ a predicted parsing map as
the initial condition. Further, to achieve accurate segmentation along the
edges between components, AGRNet incorporates edge attention in the pixel-to-
vertex projection, which assigns larger weights to the features of edge pixels
during the feature aggregation.
Based on the projected vertices, AGRNet learns the correlations among facial
components, i.e., the graph links between vertices, and reasons over the
correlations by propagating information across all vertices on the graph via
graph convolution [17]. This enables characterizing long range correlations in
the facial image for learning high-level semantic information. Finally, we
project the learned graph representation back to the pixel grids, which is
integrated with the original pixel-wise feature map for face parsing.
To train the proposed model, we design a boundary-attention loss as well as a
discriminative loss to reinforce edge pixels and vertex features respectively.
The boundary-attention loss measures the error of the predicted parsing map
only at edge pixels, which aims to obtain accurate segmentation along the
boundary. Meanwhile, the discriminative loss is designed to penalize small
distances among vertices in the feature space, thus leading to more
representative and distinct vertices with strong semantics.
Compared with our previous work [10], the proposed AGRNet makes significant
improvements in the following three aspects. 1) AGRNet projects pixels to
vertices in an adaptive fashion conditioned on a predicted parsing map, rather
than simply applying spatial pooling in [10]. Specifically, as illustrated in
Fig. 3, [10] projects pixels within an image patch to a vertex via spatial
pooling, which results in ambiguous semantics in vertices. In contrast, AGRNet
projects a collection of pixels to a vertex adaptively so as to represent each
semantic facial component, which leads to more semantic and compact vertices
for the subsequent efficient reasoning among components. This improvement is
substantial and critical for capturing the long-range dependencies between
facial components. 2) We design a discriminative loss, which encourages
vertices to keep distant in the feature space. This is advantageous in
learning distinct vertices for accurate segmentation of ambiguous pixels along
the boundary. While the parsing map in our model provides hints in selecting
semantic vertices, the similarity among different vertices is overlooked, thus
the discriminative loss is critical in pushing pixels to their corresponding
components. 3) We conduct more extensive experiments on large-scale datasets,
including LaPa [9] and CelebAMask-HQ [18]. Also, we generalize the proposed
model to the human parsing task on the LIP dataset [19], and the results
demonstrate the effectiveness of our model.
Our main contributions are summarized as follows.
1. We propose a component-level adaptive graph representation learning and
reasoning framework for face parsing—AGRNet, aiming to exploit the
correlations among components for capturing long range dependencies.
2. The graph representation is adaptively learned by projecting a collection
of pixels with similar features to each vertex in a differentiable manner
along with edge attention.
3. To train the proposed model effectively, we propose a discriminative loss,
which encourages larger feature distances among distinct vertices.
4. We conduct extensive experiments on face parsing as well as human parsing
benchmarks to show the generalizability of our model. Results demonstrate that
our model outperforms the state-of-the-art methods on almost every category
for face parsing.
The paper is organized as follows. We first review previous works in Section
II. Then we elaborate on the proposed model in Section III. Next, we present
the loss functions for training the proposed model and provide further
analysis in Section IV. Finally, experiments and conclusions are presented in
Section V and VI, respectively.
## II Related Work
### II-A Face Parsing
Previous methods can be classified into two categories: face-based parsing and
region-based parsing.
Face-based parsing takes the whole image as input regardless of component
properties [7, 8, 9, 10, 11]. Liu et al. import features learned from a
convolutional neural network into the Conditional Random Field (CRF) framework
to model individual pixel labels and neighborhood dependencies [7]. Luo et al.
propose a hierarchical deep neural network to extract multi-scale facial
features [16]. Zhou et al. adopt an adversarial learning approach to train the
network and capture the high-order inconsistency [20]. Liu et al. design a
CNN-RNN hybrid model that benefits from both high-quality features of CNNs and
non-local properties of RNNs [12]. To refine the segmentation along critical
edges, Liu et al. introduce edge cues and propose a boundary-aware loss for
face parsing, leading to improved performance [9]. However, this class of
methods neglects the discrepancy in scales of various facial components,
sometimes resulting in a lack of details in some components.
Region-based parsing takes the scale discrepancy into account and predicts
each component respectively, which is advantageous in capturing elaborate
details [12, 13, 14, 15, 16]. Zhou et al. present an interlinked CNN that
takes multi-scale images as input and allows bidirectional information passing
[15]. Lin et al. propose a novel RoI Tanh-Warping operator that preserves both
central and peripheral information. It contains two branches with the local-
based branch for inner facial components and the global-based branch for outer
facial ones. This method demonstrates high performance especially for hair
segmentation [13]. Yin et al. introduce the Spatial Transformer Network and
build a training connection between traditional interlinked CNNs, which makes
the end-to-end joint training process possible [14]. Nevertheless, this class
of methods often neglects the correlation among components that is needed to
characterize long range dependencies.
Figure 2: The overview of the proposed face parsing framework.
### II-B Attention Mechanism
Limited by the locality of the convolution operator, CNNs lack the ability to
model the global contextual information in a single layer. Therefore,
attention mechanism has been proposed to capture long-range information [21],
which has been successfully applied to many applications such as sentence
encoding [22] and image recognition [23].
Chen et al. propose a Double Attention Model that gathers information
spatially and temporally to reduce the complexity of traditional non-local
modules [24]. Zhu et al. present an asymmetric module to reduce the
computation and distill features [25]. Fu et al. devise a dual attention
module that applies both spatial and channel attention in feature maps [26].
To research the underlying relationship among different regions, Chen et al.
project the original features into an interactive space and utilize the GCN
[17] to exploit the high-order relationship [27]. Li et al. devise a robust
attention module that incorporates the Expectation-Maximization algorithm
[28]. Yin et al. propose the disentangled non-local module, aiming at
disentangling pairwise and unary relations in the classical non-local network
[29].
### II-C Graph Representation for Images
Images can be interpreted as a particular form of regular grid graphs, which
enables studying images from the graph perspective. Chandra et al.
propose a Conditional Random Field based method for image segmentation [30].
Recently, graph convolution networks [17, 31, 32] have been leveraged in image
segmentation. Li et al. introduce the graph convolution to semantic
segmentation, which projects features onto vertices in the graph domain and
applies graph convolution afterwards [33]. Furthermore, Lu et al. propose
Graph-FCN where semantic segmentation is cast as vertex classification by
directly transforming an image into regular grids [34]. Pourian et al. propose
a graph-based method for semi-supervised segmentation [35], where the image is
divided into community graphs and labels are assigned to corresponding
communities. Zhang et al. utilize the graph convolution both in the coordinate
space and the feature space [36]. Li et al. propose a spatial pyramid graph
reasoning module based on an improved Laplacian matrix to learn a better
distance metric for feature filtering [37].
## III Adaptive Graph Representation Learning and Reasoning
In this section, we first introduce the overall framework in Section III-A.
Then, we elaborate on the proposed adaptive graph projection, graph reasoning,
and graph reprojection in Section III-B, Section III-C and Section III-D,
respectively.
### III-A Overview
As illustrated in Fig. 2, given an input face image
$\mathbf{I}\in\mathbb{R}^{H\times W}$ of height $H$ and width $W$, we aim to
predict a pixel-wise label for each facial semantic component. In particular,
we aim to model the long-range dependencies among distant components on a
graph, which is critical for the description of the facial structure. The
overall framework of the proposed AGRNet consists of three procedures as
follows.
* Initial Feature and Edge Extraction. We take the ResNet-101 [38] as the
backbone and extract low-level and high-level features for the subsequent
graph representation. The low-level features generally contain image details
but often lack semantic information, while the high-level features provide
rich semantics with global information at the cost of image details. To fully
exploit the global information in high-level features, we employ spatial
pyramid pooling to learn multi-scale contextual information. Further, we
propose an edge perceiving module to acquire an edge map for the subsequent
edge attention operation.
* Adaptive Graph Representation Learning and Reasoning. We project a
collection of pixels with similar features to graph vertices in an adaptive and
differentiable manner, and reason over the vertices to learn the long-range
relations among facial components. This consists of three operations: adaptive
graph projection, graph reasoning and graph reprojection, which extracts
component features as vertices in a differentiable fashion, reasons the
relations between vertices with a graph convolution network, and projects the
learned graph representation back to pixel grids, leading to a refined feature
map with rich local and non-local semantics.
* Semantic Decoding. Finally, we add the refined feature map to the original
pixel-wise feature map and predict the parsing map via a convolution layer
with a kernel size of $1\times 1$.
Next, we will elaborate on the three operations in Adaptive Graph
Representation Learning and Reasoning in order as follows. The specific
architecture is illustrated in Fig. 4.
Figure 3: Comparison of the spatial graph projection in [10] and the proposed
adaptive graph projection. (a) Spatial graph projection, where pixels within
an image patch are projected to a vertex via spatial pooling. (b) Adaptive
graph projection, where a collection of pixels are projected to a vertex
adaptively so as to represent each semantic facial component. Figure 4:
Architecture of the proposed adaptive graph representation learning and
reasoning for face parsing.
### III-B Adaptive Graph Projection
We learn the graph representation of facial components in an adaptive and
differentiable manner, which projects a collection of pixels that tend to
reside in the same facial component to $K$ ($K\geq 1$) vertices in the graph.
We choose $K$ vertices to represent a facial component, because 1) the
features from $K$ vertices provide an abundant visual description of the
component, and 2)
both the intra-component relations and inter-component relations will be
exploited during the graph reasoning.
Considering that the facial components are unknown at test time, we first generate a
preliminary raw parsing map that provides rough component information, which
is supervised by the ground truth parsing map during the training. Based on
the raw prediction, we select $K$ pixels with the highest confidence with
respect to each component. Then the corresponding indices are utilized to
sample representative vertices on the original feature map. These vertices
will be fed into the subsequent graph reasoning module to extract the mutual
relationship.
Specifically, the input to the proposed adaptive graph projection consists of
a low-level feature map ${\mathbf{X}}_{1}\in\mathbb{R}^{H_{1}W_{1}\times
C_{1}}$ and a high-level feature map
${\mathbf{X}}_{2}\in\mathbb{R}^{H_{2}W_{2}\times C_{2}}$
($H_{1}W_{1}>H_{2}W_{2}$) as well as an edge map
$\mathbf{E}\in\mathbb{R}^{H_{1}W_{1}\times 1}$ from the first procedure of
Initial Feature and Edge Extraction, where $H_{i}$ and $W_{i}$ denote the
height and width of the $i$th feature map respectively while $C_{i}$ is the
number of feature channels, $i\in\\{1,2\\}$. Taking them as input, we aim to
construct a set of vertices, each of which corresponds to a distinct facial
component.
First, we upsample the feature map ${\mathbf{X}}_{2}$ to the scale of
${\mathbf{X}}_{1}$ with bilinear interpolation, and concatenate the upsampled
version $\widetilde{{\mathbf{X}}}_{2}$ and ${\mathbf{X}}_{1}$ followed by a 1
$\times$ 1 convolution layer to reduce the channel dimension to $C$. Then we
obtain the fused feature map ${\mathbf{X}}_{0}\in\mathbb{R}^{H_{1}W_{1}\times
C}$, with both detailed texture and abundant semantic information:
${\mathbf{X}}_{0}=\text{conv}([{\mathbf{X}}_{1},\widetilde{{\mathbf{X}}}_{2}]),$
(1)
where $[\cdot,\cdot]$ denotes the concatenation operation, and
$\text{conv}(\cdot)$ represents the 1 $\times$ 1 convolution.
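The fusion in Eq. (1) can be sketched in a few lines of numpy. This is an illustrative toy, not the actual implementation: nearest-neighbor repetition stands in for bilinear interpolation, the 1 $\times$ 1 convolution is written as a per-pixel matrix product, and all shapes and the function name `fuse_features` are assumptions.

```python
import numpy as np

def fuse_features(X1, X2, W, H1, W1, H2, W2):
    """Sketch of Eq. (1): upsample X2 to the resolution of X1,
    concatenate along channels, and apply a 1x1 convolution.

    X1: (H1*W1, C1) low-level features; X2: (H2*W2, C2) high-level features.
    W:  (C1+C2, C) weights of the 1x1 conv (a per-pixel linear map).
    Nearest-neighbor upsampling stands in for bilinear interpolation here.
    """
    C2 = X2.shape[1]
    X2_img = X2.reshape(H2, W2, C2)
    fy, fx = H1 // H2, W1 // W2
    # nearest-neighbor upsampling by integer factors
    X2_up = X2_img.repeat(fy, axis=0).repeat(fx, axis=1).reshape(H1 * W1, C2)
    # concatenation followed by the 1x1 convolution as a matmul
    return np.concatenate([X1, X2_up], axis=1) @ W

H1, W1, H2, W2, C1, C2, C = 4, 4, 2, 2, 3, 5, 6
rng = np.random.default_rng(0)
X1 = rng.normal(size=(H1 * W1, C1))
X2 = rng.normal(size=(H2 * W2, C2))
W = rng.normal(size=(C1 + C2, C))
X0 = fuse_features(X1, X2, W, H1, W1, H2, W2)
print(X0.shape)  # (16, 6)
```

Since a 1 $\times$ 1 convolution mixes channels independently at each pixel, writing it as a matrix product over the flattened $(HW, C)$ layout used in the text is exact.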
To separate pixels into edge pixels and non-edge pixels, we employ the learned
edge map $\mathbf{E}$ to mask the original feature map ${\mathbf{X}}_{0}$:
$\displaystyle{\mathbf{X}}_{e}$
$\displaystyle={\mathbf{X}}_{0}\circ\mathbf{E},$ (2)
$\displaystyle{\mathbf{X}}_{ne}$
$\displaystyle={\mathbf{X}}_{0}\circ(\mathbf{1}-\mathbf{E}),$
where $\circ$ denotes the Hadamard product and $\mathbf{1}$ is a matrix with
all the elements as $1$. ${\mathbf{X}}_{e}$ and ${\mathbf{X}}_{ne}$ denote the
masked feature map of edge pixels and non-edge pixels, respectively.
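Eq. (2) is a plain element-wise masking; the toy sketch below (with an assumed hard 0/1 edge map and toy sizes) makes the edge/non-edge split explicit:

```python
import numpy as np

# Sketch of Eq. (2): split features into edge / non-edge parts with an
# edge map E in [0, 1] via the Hadamard product (hypothetical toy sizes).
rng = np.random.default_rng(1)
HW, C = 6, 4
X0 = rng.normal(size=(HW, C))
E = np.array([[1.0], [0.0], [1.0], [0.0], [0.0], [1.0]])  # (HW, 1) edge map

X_e = X0 * E          # masked features of edge pixels
X_ne = X0 * (1 - E)   # masked features of non-edge pixels

# With a hard 0/1 edge map, the two masked maps partition X0 exactly.
assert np.allclose(X_e + X_ne, X0)
print(X_e.shape, X_ne.shape)
```

In the model the learned edge map is soft (values in $[0,1]$), so the split is a weighting rather than a hard partition; the hard mask here is only for illustration.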
In order to generate vertices that capture semantic features of facial
components, we propose to first predict a preliminary parsing map
${\mathbf{Z}}_{0}\in\mathbb{R}^{H_{1}W_{1}\times N_{c}}$, where $N_{c}$
denotes the number of segmentation categories. The prediction of
${\mathbf{Z}}_{0}$ is produced by a Multi-layer Perceptron (MLP) under the
supervision of the ground truth parsing map, i.e.,
${\mathbf{Z}}_{0}=\text{conv}({\mathbf{X}}_{0})$. The predicted parsing map
${\mathbf{Z}}_{0}$ corresponds to the confidence in terms of each facial
component.
As component vertices should be compact and representative, we choose the top
$K$ pixels with the highest confidence in ${\mathbf{Z}}_{0}$ as component vertices.
Specifically, we extract the top $K$ pixels and their indices, along with the
corresponding feature vectors from the masked feature map of non-edge pixels
${\mathbf{X}}_{ne}$. That is, only non-edge interior pixels are taken from
${\mathbf{X}}_{ne}$ as vertices:
$\displaystyle{\mathbf{X}}_{G}$
$\displaystyle={\mathbf{X}}_{ne}\\{\parallel_{i=1}^{N_{c}}\text{topk}({\mathbf{Z}}_{0}[:,i])\\}$
(3)
$\displaystyle={\mathbf{X}}_{ne}\\{\parallel_{i=1}^{N_{c}}\text{topk}(\text{conv}({\mathbf{X}}_{0})[:,i])\\},$
where $\\{\cdot\\}$ denotes selection by the indices. The semantic vertices
are then effectively extracted, each of which corresponds to a vertex in the
graph.
By the adaptive graph projection, we bridge the connection between pixels and
each component via the selected vertices, leading to the features of the
projected vertices on the graph ${\mathbf{X}}_{G}\in\mathbb{R}^{KN_{c}\times
C}$.
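The per-class top-$K$ selection of Eq. (3) can be sketched as follows; the function name and the toy sizes are hypothetical, and `np.argsort` stands in for a differentiable top-$K$ operator:

```python
import numpy as np

def adaptive_graph_projection(X_ne, Z0, K):
    """Sketch of Eq. (3): for each of the Nc categories, pick the K pixels
    with the highest confidence in the preliminary parsing map Z0 and
    gather their non-edge-masked features as graph vertices.

    X_ne: (HW, C) non-edge features; Z0: (HW, Nc) per-class confidences.
    Returns X_G of shape (K*Nc, C).
    """
    HW, Nc = Z0.shape
    idx_per_class = []
    for i in range(Nc):
        # indices of the top-K confidences for class i, descending
        topk = np.argsort(Z0[:, i])[-K:][::-1]
        idx_per_class.append(topk)
    idx = np.concatenate(idx_per_class)  # the "||" concatenation in Eq. (3)
    return X_ne[idx]                     # selection by indices, "{.}" in Eq. (3)

rng = np.random.default_rng(2)
HW, C, Nc, K = 12, 5, 3, 2
X_ne = rng.normal(size=(HW, C))
Z0 = rng.random(size=(HW, Nc))
X_G = adaptive_graph_projection(X_ne, Z0, K)
print(X_G.shape)  # (6, 5), i.e. (K*Nc, C)
```

Note how the index set, not the features themselves, comes from the preliminary parsing map: the confidence map only decides *where* to sample, while the vertex features are read from the original (non-edge) feature map, exactly as described above.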
The adaptive graph projection significantly improves the projection in our
previous work [10]. As in Fig. 3, we improve the traditional regular spatial
pooling strategy by adaptively selecting representative vertices. Although
facial components tend to be located at certain positions, regular spatial pooling
fails to capture the representative information in each component and
introduces irrelevant pixels. In contrast, our model characterizes each
component with vertices adaptively, leveraging on a predicted parsing map that
provides confident semantic information.
### III-C Graph Reasoning
The connectivity between vertices represents the relation between each pair of
facial components. Hence, we reason over the relations by propagating
information across vertices to learn higher-level semantic information.
Instead of constructing a pre-defined graph, the proposed AGRNet learns the
graph connectivity dynamically without the supervision of a ground truth
graph. We propose to leverage a single-layer Graph Convolution Network (GCN)
[17]—a first-order approximation of spectral graph convolution—to aggregate
the neighborhood information and learn local vertex features.
Specifically, we feed the input vertex features ${\mathbf{X}}_{G}$ into the
GCN. The output feature map $\hat{{\mathbf{X}}}_{G}\in\mathbb{R}^{KN_{c}\times
C}$ is
$\hat{{\mathbf{X}}}_{G}=\text{ReLU}\left[({\mathbf{I}}-{\mathbf{A}}){\mathbf{X}}_{G}{\mathbf{W}}_{G}\right],$
(4)
where ${\mathbf{A}}$ denotes the adjacency matrix that encodes the graph
connectivity to learn, ${\mathbf{W}}_{G}\in\mathbb{R}^{C\times C}$ denotes the
weights of the GCN, and ReLU is the activation function. The features
$\hat{{\mathbf{X}}}_{G}$ are acquired by the vertex-wise interaction
(multiplication with $({\mathbf{I}}-{\mathbf{A}})$) and channel-wise
interaction (multiplication with ${\mathbf{W}}_{G}$). We set the same number
of output channels as the input to keep consistency, allowing the module to be
compatible with the subsequent process.
The main difference between previous GCNs [17, 39] and ours is that, instead
of a hand-crafted graph in [17] or a graph with available connectivities [39],
we randomly initialize the graph ${\mathbf{A}}$ and then iteratively learn
both the connectivity and edge weights by a linear layer during the training,
which adaptively captures the implicit relations among facial components.
Besides, we add a residual connection to reserve the input features of
vertices ${\mathbf{X}}_{G}$. The finally reasoned relations are acquired with
the information propagation across all vertices based on the learned graph.
After graph reasoning, the semantic information is greatly enhanced across
different vertices.
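The single GCN layer of Eq. (4) together with the residual connection described above reduces to two matrix products and a ReLU; the sketch below is a minimal numpy rendition with assumed toy sizes and a randomly initialized adjacency, mirroring the initialization described in the text:

```python
import numpy as np

def graph_reasoning(X_G, A, W_G):
    """Sketch of Eq. (4) plus the residual connection:
    X_hat = ReLU[(I - A) X_G W_G], then add X_G back.

    A:   (n, n) learned adjacency (random here, as at initialization).
    W_G: (C, C) GCN weights; C is kept unchanged, as stated in the text.
    """
    n = X_G.shape[0]
    # vertex-wise interaction via (I - A), channel-wise via W_G
    X_hat = np.maximum(0.0, (np.eye(n) - A) @ X_G @ W_G)
    return X_hat + X_G  # residual connection preserving the input features

rng = np.random.default_rng(3)
n, C = 6, 4                            # n = K * Nc vertices
X_G = rng.normal(size=(n, C))
A = rng.normal(scale=0.1, size=(n, n))  # randomly initialized adjacency
W_G = rng.normal(size=(C, C))
out = graph_reasoning(X_G, A, W_G)
print(out.shape)  # (6, 4): same shape as the input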
### III-D Graph Reprojection
Having learned vertex features that capture the semantic information of each
facial component, we project vertex features in the graph domain back to the
original pixel domain for face parsing. That is, we aim to compute a matrix
${\mathbf{P}}:\hat{{\mathbf{X}}}_{G}\mapsto{\mathbf{X}}_{P}$ that reprojects
vertex features $\hat{{\mathbf{X}}}_{G}$ to pixel features ${\mathbf{X}}_{P}$.
In particular, we leverage vertex features ${\mathbf{X}}_{G}$ prior to
reasoning and edge-masked pixel features ${\mathbf{X}}_{e}$ to construct the
projection matrix ${\mathbf{P}}$, which models the correlation between
vertices and pixels. Specifically, we take the inner product of
${\mathbf{X}}_{G}$ and ${\mathbf{X}}_{e}$ to capture the similarity between
vertices and each pixel. We then apply a softmax function for normalization.
Formally, the projection matrix takes the form:
${\mathbf{P}}=\text{softmax}\left({\mathbf{X}}_{G}\cdot{\mathbf{X}}_{e}^{\top}\right),$
(5)
where ${\mathbf{P}}\in\mathbb{R}^{KN_{c}\times H_{1}W_{1}}$.
Having acquired the projection matrix ${\mathbf{P}}$ modeling the correlation
between vertices and pixels, we obtain the final refined pixel-level feature
map ${\mathbf{X}}_{P}$ by taking the product of the projection matrix
${\mathbf{P}}$ and refined vertex features $\hat{{\mathbf{X}}}_{G}$, i.e.,
${\mathbf{X}}_{P}={\mathbf{P}}^{\top}\cdot\hat{{\mathbf{X}}}_{G}.$ (6)
After reprojection, in order to take advantage of the original feature map
${\mathbf{X}}_{0}$ that captures features of pixel-level texture information,
we add ${\mathbf{X}}_{0}$ to the refined feature map ${\mathbf{X}}_{P}$ that
captures edge-aware component-level semantic information by element-wise
summation, leading to multi-level feature representations. Followed by a
$1\times 1$ convolution, our model predicts the final parsing result
${\mathbf{Y}}$ as
${\mathbf{Y}}=\text{conv}({\mathbf{X}}_{0}+{\mathbf{P}}^{\top}\cdot\hat{{\mathbf{X}}}_{G}),$
(7)
where $\text{conv}(\cdot)$ denotes the $1\times 1$ convolution.
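Eqs. (5)-(7) chain together as below. This is an illustrative sketch with toy shapes; in particular, Eq. (5) does not state the normalization axis of the softmax, so normalizing over vertices (so each pixel's weights over the $KN_c$ vertices sum to one) is an assumption here, and the final 1 $\times$ 1 convolution of Eq. (7) is omitted:

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def reproject(X_G, X_e, X_G_hat, X0):
    """Sketch of Eqs. (5)-(7): build P from vertex-pixel similarities,
    reproject refined vertex features to pixels, fuse with X0."""
    P = softmax(X_G @ X_e.T, axis=0)  # (K*Nc, HW), Eq. (5); axis assumed
    X_P = P.T @ X_G_hat               # (HW, C) refined pixel features, Eq. (6)
    return X0 + X_P                   # element-wise fusion inside Eq. (7)

rng = np.random.default_rng(4)
n, HW, C = 6, 12, 4
X_G = rng.normal(size=(n, C))       # vertices before reasoning
X_e = rng.normal(size=(HW, C))      # edge-masked pixel features
X_G_hat = rng.normal(size=(n, C))   # vertices after reasoning
X0 = rng.normal(size=(HW, C))       # original fused feature map
fused = reproject(X_G, X_e, X_G_hat, X0)
print(fused.shape)  # (12, 4)
```

A design detail worth noting: the projection matrix is built from the *pre*-reasoning vertex features ${\mathbf{X}}_{G}$, while the features being reprojected are the *post*-reasoning $\hat{{\mathbf{X}}}_{G}$, so the pixel-vertex correspondence is anchored to the features the vertices were originally sampled from.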
## IV Training Objectives and Analysis
Having presented our model, we propose a boundary-attention loss and a
discriminative loss tailored to the training of our model in Section IV-A.
Further, we provide detailed analysis and discussion in Section IV-B.
### IV-A The Loss Function
In addition to the commonly adopted parsing loss, the training objective of
AGRNet further aims to promote the segmentation accuracy along the boundary as
well as learning discriminative vertex features. Hence, we propose a boundary-
attention loss and a discriminative loss respectively, which are detailed
below.
#### IV-A1 The Proposed Boundary-Attention Loss
To improve the segmentation results along the boundary, we introduce the
boundary-attention loss (BA-Loss) inspired by [9] as in our previous work
[10]. The BA-loss measures the loss of the predicted parsing labels compared
to the ground truth only at edge pixels, thus strengthening the model capacity
for critical edge pixels that are challenging to distinguish. Mathematically,
the BA-loss is formulated as
$\mathcal{L}_{\text{BA}}=-\sum_{i=1}^{HW}\sum_{j=1}^{N_{c}}\left[e_{i}=1\right]y_{ij}\log{p_{ij}},$
(8)
where $i$ is the pixel index, $j$ is the class index and $N_{c}$ is the number
of categories. $e_{i}$ is a binary scalar to indicate an edge pixel
($e_{i}=1$), $y_{ij}$ denotes the ground truth label of face parsing, and
$p_{ij}$ denotes the predicted parsing label. $\left[\cdot\right]$ is the
Iverson bracket, which denotes a number that is $1$ if the condition in the
bracket is satisfied, and $0$ otherwise.
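As a concrete toy rendition of Eq. (8), written with the usual negative sign of a cross-entropy so that the loss is non-negative (all sizes and the uniform predictions below are hypothetical):

```python
import numpy as np

def ba_loss(e, y, p):
    """Sketch of the boundary-attention loss of Eq. (8): only pixels with
    e_i = 1 contribute a cross-entropy term.

    e: (HW,) binary edge indicator; y: (HW, Nc) one-hot ground truth;
    p: (HW, Nc) predicted class probabilities.
    """
    mask = (e == 1).astype(float)  # the Iverson bracket [e_i = 1]
    return -np.sum(mask[:, None] * y * np.log(p))

HW, Nc = 4, 3
e = np.array([1, 0, 1, 0])          # two edge pixels
y = np.eye(Nc)[[0, 1, 2, 0]]        # one-hot ground truth labels
p = np.full((HW, Nc), 1.0 / Nc)     # uniform predictions
loss = ba_loss(e, y, p)
print(round(loss, 4))  # 2 edge pixels, each -log(1/3): 2.1972
```

The two non-edge pixels contribute nothing regardless of how badly they are predicted, which is exactly the "attention" effect: gradient signal from this term flows only through the hard-to-classify boundary pixels.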
#### IV-A2 The Proposed Discriminative Loss
In semantic segmentation, pixels belonging to different categories should be
distant from each other in the feature space, while pixels belonging to the
same category should be as similar as possible. In our model, while the
parsing map provides hints in selecting semantic vertices, the similarity
among different vertices is however overlooked. The separation of vertices is
also critical in pushing pixels to their corresponding components.
Motivated by [40] [41] [42] [43], we propose a discriminative loss that
penalizes small feature distances between vertices representing different
components and encourages multiple vertices corresponding to the same
component to be more diverse. Specifically, let $\mathbf{x_{1}}$ and
$\mathbf{x_{2}}$ denote the features of two semantic vertices respectively, we
formulate the penalty function as
$\phi(\mathbf{x_{1}},\mathbf{x_{2}})=\left\\{\begin{aligned}
&(\delta-||\mathbf{x_{1}}-\mathbf{x_{2}}||_{2})^{2},&||\mathbf{x_{1}}-\mathbf{x_{2}}||_{2}<\delta\\\
&0,&||\mathbf{x_{1}}-\mathbf{x_{2}}||_{2}\geq\delta\end{aligned}\right.,$ (9)
where $\delta$ is a pre-defined margin that controls the minimum feature
distance between two semantic vertices. If the $l_{2}$ distance
between the two features exceeds $\delta$, the function does not impose any
penalty. Otherwise, the penalty is a quadratic function that takes a larger
value for the smaller distance.
Considering all the vertices with cardinality $KN_{c}$, we formulate the
complete discriminative loss $\mathcal{L}_{\text{dis}}$ as follows:
$\mathcal{L}_{\text{dis}}=\frac{1}{KN_{c}(KN_{c}-1)}\sum_{i=1}^{KN_{c}}{\sum_{j=1,j\neq
i}^{KN_{c}}{\phi(\mathbf{x}_{i},\mathbf{x}_{j})}}.$ (10)
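Eqs. (9) and (10) translate directly into code. The sketch below uses hypothetical 2-D toy vertices to show the hinge behavior: pairs farther apart than $\delta$ contribute nothing, close pairs are penalized quadratically:

```python
import numpy as np

def penalty(x1, x2, delta):
    """Hinge penalty of Eq. (9): quadratic when the distance is below delta."""
    d = np.linalg.norm(x1 - x2)
    return (delta - d) ** 2 if d < delta else 0.0

def discriminative_loss(X, delta):
    """Eq. (10): mean penalty over all ordered pairs of distinct vertices.

    X: (n, C) vertex features, with n = K * Nc.
    """
    n = X.shape[0]
    total = sum(penalty(X[i], X[j], delta)
                for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))

# Toy check: well-separated vertices incur no loss, close ones do.
X_far = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
X_close = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(discriminative_loss(X_far, delta=2.0))    # 0.0
print(discriminative_loss(X_close, delta=2.0))  # positive: pushes vertices apart
```

Because $\phi$ is symmetric, summing over ordered pairs simply double-counts each unordered pair, and the $KN_{c}(KN_{c}-1)$ normalizer in Eq. (10) compensates for this.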
#### IV-A3 The Total Loss
In addition to the above two loss functions, we have three more losses: 1) the
prediction loss of the raw parsing map for adaptive graph projection
$\mathcal{L}_{\text{raw}}$, which takes the cross entropy between the
predicted raw parsing map and the ground truth parsing map; 2) the final
parsing loss $\mathcal{L}_{\text{final}}$, which takes the cross entropy
between each predicted label and the ground truth label; 3) the edge
prediction loss $\mathcal{L}_{\text{edge}}$, which measures the estimation
error of the image edges. The three loss functions take the following forms:
$\displaystyle\mathcal{L}_{\text{raw}}=$
$\displaystyle-\frac{1}{HW}\sum_{i=1}^{HW}{\sum_{j=1}^{N_{c}}{y_{ij}\log(\text{pred}^{raw}_{ij})}};$
$\displaystyle\mathcal{L}_{\text{final}}=$
$\displaystyle-\frac{1}{HW}\sum_{i=1}^{HW}{\sum_{j=1}^{N_{c}}{y_{ij}\log(\text{pred}^{final}_{ij})}};$
$\displaystyle\mathcal{L}_{\text{edge}}=$
$\displaystyle-\frac{1}{HW}\sum_{i=1}^{HW}\left[e_{i}\log(\text{pred}^{edge}_{i})+(1-e_{i})\log(1-\text{pred}^{edge}_{i})\right].$
The total loss function is then defined as follows:
$\mathcal{L}=\lambda_{1}\mathcal{L}_{\text{raw}}+\lambda_{2}\mathcal{L}_{\text{edge}}+\lambda_{3}\mathcal{L}_{\text{BA}}+\lambda_{4}\mathcal{L}_{\text{final}}+\lambda_{5}\mathcal{L}_{\text{dis}},$
(11)
where $\\{\lambda_{i}\\}_{i=1}^{5}$ are hyper-parameters to strike a balance
among different loss functions.
Finally, we provide a detailed training pipeline in Algorithm 1.
Input: Image set $\\{\mathbf{I_{1}},...,\mathbf{I_{n}}\\}$, Edge set
$\\{\mathbf{E_{1}},...,\mathbf{E_{n}}\\}$, Label set
$\\{\mathbf{Y_{1}},...,\mathbf{Y_{n}}\\}$,
Hyper parameters $\lambda_{1},...,\lambda_{5}$, Number of categories $N_{c}$
Output: Parsing Map ${\mathbf{Y}}$, Model parameters $\Phi$
while _not converged_ do
Sample $\mathbf{I_{i}}$, $\mathbf{E_{i}}$, $\mathbf{Y_{i}}$ from
$\\{\mathbf{I_{1}},...,\mathbf{I_{n}}\\},\\{\mathbf{E_{1}},...,\mathbf{E_{n}}\\},\\{\mathbf{Y_{1}},...,\mathbf{Y_{n}}\\}$
Acquire a low-level/high-level feature map $\mathbf{X_{1}},\mathbf{X_{2}}$ =
$Resnet$($\mathbf{I_{i}}$)
$\mathbf{X_{0}}$ = $Fuse([\mathbf{X_{1}},\mathbf{X_{2}}])$ as in (1)
Learn an edge map $\mathbf{E}$ = $EdgeModule$($\mathbf{X_{0}}$)
Predict a preliminary parsing map $\mathbf{Z_{0}}$ =
$Predict$($\mathbf{X_{0}}$)
// Adaptive Graph Projection
${\mathbf{X}}_{e}={\mathbf{X}}_{0}\circ\mathbf{E}$
${\mathbf{X}}_{ne}={\mathbf{X}}_{0}\circ(\mathbf{1}-\mathbf{E})$
${\mathbf{X}}_{G}={\mathbf{X}}_{ne}\\{\parallel_{i=1}^{N_{c}}\text{topk}(\mathbf{Z_{0}}[:,i])\\}$
// Graph Learning and Reasoning
$\hat{{\mathbf{X}}}_{G}=\text{ReLU}\left[({\mathbf{I}}-{\mathbf{A}}){\mathbf{X}}_{G}{\mathbf{W}}_{G}\right]$
// Graph Reprojection
Compute the projection matrix
$\mathbf{P}=\text{softmax}\left({\mathbf{X}}_{G}\cdot{\mathbf{X}}_{e}^{\top}\right)$
Obtain the final refined pixel-level feature map via graph reprojection
${\mathbf{X}}_{P}=\mathbf{P}^{\top}\cdot\hat{{\mathbf{X}}}_{G}$
Predict the final parsing result
${\mathbf{Y}}=\text{conv}({\mathbf{X}}_{0}+{\mathbf{X}}_{P})$
// Loss Aggregation
$\mathcal{L}=\lambda_{1}\mathcal{L}_{\text{raw}}+\lambda_{2}\mathcal{L}_{\text{edge}}+\lambda_{3}\mathcal{L}_{\text{BA}}+\lambda_{4}\mathcal{L}_{\text{final}}+\lambda_{5}\mathcal{L}_{\text{dis}}$
as in (11)
Update network parameters with the SGD optimizer and loss $\mathcal{L}$
Algorithm 1: Training for AGRNet
TABLE I: Ablation study on the LaPa dataset.
We conduct comparison experiments from multiple aspects, including composition
of modules, loss functions, vertex numbers, FLOPs and parameters.
Model | ResNet | Spatial | Adaptive | Edge | Graph | Mean F1 (%) | Mean IoU (%) | Mean Accuracy (%)
---|---|---|---|---|---|---|---|---
1 | ✓ | | | | | 90.9 | 85.1 | 90.2
2 | ✓ | ✓ | | | | 91.1 | 85.7 | 90.8
3 | ✓ | | ✓ | | | 91.4 | 86.0 | 91.3
4 | ✓ | | ✓ | | ✓ | 91.9 | 86.3 | 91.7
5 | ✓ | | ✓ | ✓ | | 91.6 | 85.6 | 92.1
6 | ✓ | | ✓ | ✓ | ✓ | 92.3 | 87.0 | 92.7
(a) On composition of modules
Model | Cross Entropy | Discriminative | Boundary-Attention | Mean F1 (%) | Mean IoU (%) | Mean Accuracy (%)
---|---|---|---|---|---|---
1 | ✓ | | | 90.9 | 85.1 | 90.2
2 | ✓ | ✓ | | 91.4 | 86.1 | 92.5
3 | ✓ | | ✓ | 91.8 | 86.3 | 91.8
4 | ✓ | ✓ | ✓ | 92.3 | 87.0 | 92.7
(b) On loss functions
| Top K
---|---
Metric (%) | top 1 | top 2 | top 4 | top 8
Mean F1 | 91.6 | 91.9 | 92.3 | 91.1
Mean IoU | 86.0 | 86.3 | 87.0 | 86.2
Mean Accuracy | 92.0 | 93.5 | 92.7 | 92.3
(c) On vertex numbers
| FLOPs (G) | Parameters (M)
---|---|---
ResNet | 6.652 | 14.10
Edge Perceiving | 0.249 | 0.03
Graph Projection | 0.471 | 0.02
Graph Reasoning | 0.037 | <0.01
(d) On FLOPs and parameters
### IV-B Difference from Relevant Models
While the proposed graph projection is inspired by non-local modules and the
entire model belongs to the family of graph-based methods, we analyze the
prominent differences between previous works and our method.
Comparison with non-local modules. Typically, a traditional non-local module
models pixel-wise correlations based on pixel-wise feature similarities, while
neglecting the high-order relationship among regions. In contrast, we
explicitly exploit the correlation among distinct components via the proposed
pixel-to-vertex projection and graph reasoning over vertices. Each vertex
embeds a certain facial component, modeling its most prominent semantic
characteristics. We further learn and reason over the
relations between vertices by graph convolution, which captures high-order
semantic relations among different facial components.
Also, the computational complexity of non-local modules is high in general
[23]. The proposed edge-aware adaptive pooling addresses this issue by
extracting a few significant vertices instead of redundant query points, which
reduces the attention map size from $O(N^{2})$ to $O(NK)$, where $K\ll N$.
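To make the $O(N^{2})$ versus $O(NK)$ claim concrete, a back-of-the-envelope comparison under an assumed feature resolution (the numbers below, including the $128\times 128$ resolution and $N_{c}=11$ categories, are hypothetical choices for illustration only):

```python
# Attention-map size: a non-local module over N pixels stores an N x N map,
# while projecting to K*Nc vertices stores only an N x (K*Nc) map.
H, W = 128, 128          # assumed feature-map resolution (hypothetical)
N = H * W                # number of pixels
K, Nc = 4, 11            # vertices per component and number of categories (assumed)

nonlocal_size = N * N          # O(N^2) entries
ours_size = N * (K * Nc)       # O(NK*Nc) entries, with K*Nc << N
print(nonlocal_size // ours_size)  # 372: two orders of magnitude fewer entries
```

Since $KN_{c}=44$ while $N=16384$ here, the attention map shrinks by a factor of $N/(KN_{c})\approx 372$, which is the source of the efficiency gain claimed above.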
Further, we separate and assign different weights to edge and non-edge pixels
via the learned edge map, which emphasizes challenging edge pixels, thus
improving the parsing quality.
Comparison with graph-based models. In comparison with other graph-based
models, such as [27, 33, 10], we significantly improve the graph projection
process by introducing semantics in an adaptive and differentiable manner. It
is worth noting that vertex selection is critical in graph construction. In
previous works, each vertex is simply represented by a weighted sum of image
pixels or acquired by spatial pooling, which may lead to ambiguity in
interpreting vertices. Besides, while pixel-wise features often vary greatly
across different input feature maps, the projection matrix is fixed after
training. In contrast, we incorporate explicit semantics into the projection
process to extract robust vertices, and construct the projection matrix
on-the-fly based on the similarity between vertices and each pixel in the
feature space.
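A minimal sketch of such an on-the-fly projection, assuming a dot-product similarity followed by a softmax over pixels (randomly generated features stand in for the real ones; the paper's exact normalization may differ):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_projection(pixels, vertices):
    """Build a projection matrix on-the-fly from vertex-pixel similarity.

    pixels:   (N, C) pixel features, vertices: (K, C) vertex anchors.
    Returns P of shape (K, N), each row a soft assignment of pixels to
    one vertex (rows sum to 1)."""
    sim = vertices @ pixels.T            # (K, N) dot-product similarity
    return softmax(sim, axis=1)          # normalize over pixels

rng = np.random.default_rng(0)
pixels = rng.normal(size=(3600, 64))     # e.g. a 60 x 60 map, C = 64
vertices = rng.normal(size=(76, 64))
P = adaptive_projection(pixels, vertices)
print(P.shape)                           # (76, 3600)
```

Because `P` is recomputed from the current features, the assignment adapts to each input rather than being fixed after training.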
Furthermore, in comparison with hand-crafted graph models [44, 45], which are
broadly used in human parsing, our model learns the implicit graph and the
correlation between facial components. Different from the human structure
which implies the graph connectivity, there is no conventional topology
predefined on human faces. Therefore, we propose the dynamically learned graph
reasoning module to infer the underlying graph adjacency matrix instead of the
hand-crafted human structure in Graphonomy [44].
### IV-C Theoretical Analysis
In particular, our method has addressed existing major challenges of face
parsing from the following two crucial aspects.
Firstly, the correlation between different facial components is critical in
face parsing. However, existing methods, whether face-based or region-based,
overlook such correlations among different parts: face-based parsing may
neglect the scale discrepancy between facial components, while region-based
parsing exploits neither inter-region relationships nor long-range
dependencies, although exploring such correlations would clearly enhance
parsing performance. A similar principle has been validated on the task of
human parsing [44]. Further, faces exhibit much less structural variance than
human bodies, leading to more stable structural correlations for effective
learning. Graph representation is one of the most effective ways to model such
correlations. Hence, we introduce a graph-based model for face representation
and employ graph convolutional networks to construct and reason about the
component-wise relationships.
Secondly, it remains a challenge to segment pixels around the edges between
components, since edges tend to be ambiguous in real-world scenarios. This
issue is more severe in face parsing, as edge pixels cover a higher proportion
of face images than in other tasks such as scene parsing. Therefore, improving
the segmentation accuracy of edge pixels is an effective way to enhance face
parsing performance. In this paper, we achieve accurate segmentation along the
edges from two aspects: 1) we incorporate edge attention in the pixel-to-vertex
projection, which assigns larger weights to the features of edge pixels during
feature aggregation; 2) we design a boundary-attention loss that measures the
error of the predicted parsing map only at edge pixels, thereby reinforcing
them.
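A minimal sketch of the second ingredient, assuming the loss is a plain cross-entropy restricted to edge pixels (the paper's exact formulation may weight rather than mask the non-edge pixels):

```python
import numpy as np

def boundary_attention_loss(log_probs, labels, edge_mask):
    """Cross-entropy evaluated only at edge pixels.

    log_probs: (N, C) per-pixel log class probabilities,
    labels:    (N,) ground-truth class indices,
    edge_mask: (N,) boolean, True at edge pixels."""
    nll = -log_probs[np.arange(len(labels)), labels]
    edge_nll = nll[edge_mask]
    return edge_nll.mean() if edge_nll.size else 0.0

# Uniform predictions over 4 classes: the loss at edge pixels is log(4).
log_probs = np.full((6, 4), np.log(0.25))
labels = np.array([0, 1, 2, 3, 0, 1])
edge = np.array([True, True, False, False, True, False])
print(boundary_attention_loss(log_probs, labels, edge))
```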
In the face of the above two challenges, and following the principles to
address them, we explicitly model the component-wise correlations on a graph
abstraction with edge pixels taken into consideration, where each vertex
describes a facial component and each link captures the correlation between a
pair of components. To learn such a graph representation, we propose adaptive
graph representation learning and reasoning over facial components, which
learns and reasons over non-local components to capture long-range
dependencies while leveraging the boundary information between components. To
obtain representative vertices, we adaptively select pixels with high
responses to a distinct facial component as a graph vertex. Since the facial
components are unknown at the beginning, we employ a predicted parsing map as
the initial condition, and propose a discriminative loss to enhance the
discrimination between vertices, leading to more representative and distinct
vertices with strong semantics. Further, we integrate the component-wise
correlation into the feature representation of edge pixels, so that it
contains not only the local contextual information but also the correlation
with other components, thus enriching the semantics.
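The reasoning step over the extracted vertices can be sketched as a single graph-convolution update. This is a generic GCN-style propagation under our assumptions (self-loops, row normalization, ReLU); the paper's exact layer may differ:

```python
import numpy as np

def graph_reason(X, A, W):
    """One propagation step over vertex features.

    X: (K, C) vertex features, A: (K, K) learned non-negative adjacency,
    W: (C, C') feature transform. Adds self-loops, row-normalizes the
    adjacency, then propagates, transforms and applies ReLU."""
    A_hat = A + np.eye(len(A))
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(76, 64))     # e.g. 4 vertices x 19 categories
A = rng.random((76, 76))          # stand-in for the learned adjacency
W = rng.normal(size=(64, 64))
print(graph_reason(X, A, W).shape)
```

After such an update, each vertex feature mixes in information from correlated components, which is then projected back to the pixel grid.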
The above analysis provides the interpretability of our method for
deep-learning based face analysis, which can be summarized as follows: we
propose an effective network to address two key challenges in face parsing,
namely the insufficient use of component-wise correlations and the ambiguity
of edge pixels, and design several losses to guide the network to adaptively
learn the intrinsic features of face images from data.
## V Experiments
To validate the proposed AGRNet, we conduct extensive experiments on face
parsing as well as on human parsing for generalizability.
### V-A Datasets and Metrics
TABLE II: Comparison with the state-of-the-art methods on face parsing
datasets (in F1 score).
Methods | Skin | Nose | U-lip | I-mouth | L-lip | Eyes | Brows | Mouth | Overall
---|---|---|---|---|---|---|---|---|---
Liu et al. [12] | 92.1 | 93.0 | 74.3 | 79.2 | 81.7 | 86.8 | 77.0 | 89.1 | 88.6
Lin et al. [13] | 94.5 | 95.6 | 79.6 | 86.7 | 89.8 | 89.6 | 83.1 | 95.0 | 92.7
Wei et al. [46] | 95.6 | 95.2 | 80.0 | 86.7 | 86.4 | 89.0 | 82.6 | 93.6 | 91.7
Yin et al. [14] | - | 96.3 | 82.4 | 85.6 | 86.6 | 89.5 | 84.8 | 92.8 | 91.0
Liu et al. [9] | 94.9 | 95.8 | 83.7 | 89.1 | 91.4 | 89.8 | 83.5 | 96.1 | 93.1
Te et al. [10] | 94.6 | 96.1 | 83.6 | 89.8 | 91.0 | 90.2 | 84.9 | 95.5 | 93.2
Ours | 95.1 | 95.9 | 83.2 | 90.0 | 90.9 | 90.1 | 85.0 | 96.2 | 93.2
(a) The Helen dataset
Methods | Skin | Hair | L-Eye | R-Eye | U-lip | I-mouth | L-lip | Nose | L-Brow | R-Brow | Mean
---|---|---|---|---|---|---|---|---|---|---|---
Zhao et al. [47] | 93.5 | 94.1 | 86.3 | 86.0 | 83.6 | 86.9 | 84.7 | 94.8 | 86.8 | 86.9 | 88.4
Liu et al. [9] | 97.2 | 96.3 | 88.1 | 88.0 | 84.4 | 87.6 | 85.7 | 95.5 | 87.7 | 87.6 | 89.8
Te et al. [10] | 97.3 | 96.2 | 89.5 | 90.0 | 88.1 | 90.0 | 89.0 | 97.1 | 86.5 | 87.0 | 91.1
Luo et al. [11] | 95.8 | 94.3 | 87.0 | 89.1 | 85.3 | 85.6 | 88.8 | 94.3 | 85.9 | 86.1 | 89.2
Wei et al. [46] | 96.1 | 95.1 | 88.9 | 87.5 | 83.1 | 89.2 | 83.8 | 96.1 | 86.0 | 87.8 | 89.4
Ours | 97.7 | 96.5 | 91.6 | 91.1 | 88.5 | 90.7 | 90.1 | 97.3 | 89.9 | 90.0 | 92.3
(b) The LaPa dataset
Methods | Face | Nose | Glasses | L-Eye | R-Eye | L-Brow | R-Brow | L-Ear | R-Ear | I-Mouth | U-Lip | L-Lip | Hair | Hat | Earring | Necklace | Neck | Cloth | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Zhao et al. [47] | 94.8 | 90.3 | 75.8 | 79.9 | 80.1 | 77.3 | 78.0 | 75.6 | 73.1 | 89.8 | 87.1 | 88.8 | 90.4 | 58.2 | 65.7 | 19.4 | 82.7 | 64.2 | 76.2
Lee et al. [18] | 95.5 | 85.6 | 92.9 | 84.3 | 85.2 | 81.4 | 81.2 | 84.9 | 83.1 | 63.4 | 88.9 | 90.1 | 86.6 | 91.3 | 63.2 | 26.1 | 92.8 | 68.3 | 80.3
Luo et al. [11] | 96.0 | 93.7 | 90.6 | 86.2 | 86.5 | 83.2 | 83.1 | 86.5 | 84.1 | 93.8 | 88.6 | 90.3 | 93.9 | 85.9 | 67.8 | 30.1 | 88.8 | 83.5 | 84.0
Wei et al. [46] | 96.4 | 91.9 | 89.5 | 87.1 | 85.0 | 80.8 | 82.5 | 84.1 | 83.3 | 90.6 | 87.9 | 91.0 | 91.1 | 83.9 | 65.4 | 17.8 | 88.1 | 80.6 | 82.1
Te et al. [10] | 96.2 | 94.0 | 92.3 | 88.6 | 88.7 | 85.7 | 85.2 | 88.0 | 85.7 | 95.0 | 88.9 | 91.2 | 94.9 | 87.6 | 68.3 | 27.6 | 89.4 | 85.3 | 85.1
Ours | 96.5 | 93.9 | 91.8 | 88.7 | 89.1 | 85.5 | 85.6 | 88.1 | 88.7 | 92.0 | 89.1 | 91.1 | 95.2 | 87.2 | 69.6 | 32.8 | 89.9 | 84.9 | 85.5
(c) The CelebAMask-HQ dataset
Face Parsing. The Helen dataset [48] includes 2,330 images with 11 categories:
background, skin, left/right brows, left/right eyes, upper/lower lips, inner
mouth and hair. We keep the same training/validation/test protocol as in [48],
with 2,000 training, 230 validation and 100 test samples. The LaPa dataset [9]
is a large-scale face parsing dataset consisting of more than 22,000 facial
images with abundant variations in expression, pose and occlusion; each image
is provided with an 11-category pixel-level label map and 106-point landmarks.
The CelebAMask-HQ dataset [18] is composed of 24,183 training images, 2,993
validation images and 2,824 test images, annotated with 19 categories. In
addition to facial components, accessories such as eyeglasses, earrings,
necklaces, neck and cloth are also annotated.
Human Parsing. The LIP dataset [19] is a large-scale dataset focusing on
semantic understanding of persons, containing 50,000 images with elaborate
pixel-wise annotations of 19 semantic human part labels as well as 2D human
poses with 16 keypoints. The images, collected from real-world scenarios,
contain humans in challenging poses and views, with serious occlusions,
various appearances and low resolutions.
Metrics: We employ three evaluation metrics to measure the performance of our
model: pixel accuracy, intersection over union (IoU) and F1 score. Mean values
are computed by averaging over all categories. Pixel accuracy alone ignores
the scale variance among facial components, so the mean IoU and F1 score are
better suited for evaluation. To keep consistent with previous methods, we
report the overall F1 score on the Helen dataset, which is computed over the
merged facial components: brows (left + right), eyes (left + right), nose, and
mouth (upper lip + lower lip + inner mouth).
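The per-class metrics above follow their standard definitions and can be computed from a confusion matrix; a small sketch (our code, not from the paper):

```python
import numpy as np

def per_class_scores(conf):
    """IoU and F1 per class from a confusion matrix conf[t, p]
    (rows: ground truth, columns: prediction)."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp       # predicted as class but wrong
    fn = conf.sum(axis=1) - tp       # missed pixels of the class
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1

# Toy two-class confusion matrix:
conf = np.array([[8, 2],
                 [1, 9]])
iou, f1 = per_class_scores(conf)
print(iou.mean(), f1.mean())
```

The mean IoU and mean F1 reported in the tables are these per-class scores averaged over all categories.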
### V-B Implementation Details
During training, we employ random rotation and scaling augmentation. The
rotation angle is randomly selected from $(-30^{\circ},30^{\circ})$ and the
scale factor from $(0.75,1.25)$. The ground truth of the edge mask is
extracted from the semantic label map: if the label of a pixel differs from
any of its four neighbors, it is regarded as an edge pixel. For the Helen
dataset, we pre-process the face images by face alignment as in other works.
For the LaPa and CelebAMask-HQ datasets, we directly utilize the original
images without any pre-processing.
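The four-neighbor rule for the edge-mask ground truth can be rendered in a few lines of NumPy (our sketch of the rule stated above):

```python
import numpy as np

def edge_mask(labels):
    """Mark a pixel as edge if its label differs from any 4-neighbor."""
    mask = np.zeros(labels.shape, dtype=bool)
    mask[:-1, :] |= labels[:-1, :] != labels[1:, :]   # differs from below
    mask[1:, :]  |= labels[1:, :]  != labels[:-1, :]  # differs from above
    mask[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # differs from right
    mask[:, 1:]  |= labels[:, 1:]  != labels[:, :-1]  # differs from left
    return mask

labels = np.array([[0, 0, 1],
                   [0, 0, 1],
                   [2, 2, 2]])
print(edge_mask(labels).astype(int))
```

Only the top-left pixel of this toy map is surrounded by identical labels; every other pixel touches a label boundary and is marked as edge.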
We take ResNet-101 [38] as the backbone and extract the outputs of the conv_2
and conv_5 layers as the low-level and high-level feature maps for multi-scale
representations. To reduce the information loss in the spatial domain, we
utilize dilated convolution in the last two blocks, so the final feature map
is 1/8 the size of the input image. To fully exploit the global information in
high-level features, we employ a spatial pyramid pooling operation [47] to
learn multi-scale contextual information. In the edge prediction branch, we
concatenate the outputs of the conv_2, conv_3 and conv_4 layers and apply a 1
$\times$ 1 convolution to predict the edge map. In the differentiable graph
projection, we select the top-4 pixels as representative vertices for each
facial component, so the number of graph vertices is 4 times the number of
categories. We choose the hyper-parameters in (11) as $\lambda_{1}=1$,
$\lambda_{2}=1$, $\lambda_{3}=1$, $\lambda_{4}=0.5$, $\lambda_{5}=0.1$ by grid
search, using prior knowledge of the scale of each loss for the initial
setting of the search (the first four losses in (11) are logarithmic while the
last is a Euclidean distance, so the weights of the first four should be
larger than that of the last). In the proposed discriminative loss in (9), we
normalize the vertex features and set $\delta=1$.
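As an illustration of the discriminative loss, one plausible form consistent with this description (L2-normalized vertex features, margin $\delta=1$) is a hinge on pairwise distances; the exact expression (9) is given in the paper and may differ:

```python
import numpy as np

def discriminative_loss(V, delta=1.0):
    """Encourage distinct vertices: penalize pairs of L2-normalized
    vertex features whose distance falls below the margin delta."""
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    loss, pairs = 0.0, 0
    for i in range(len(V)):
        for j in range(i + 1, len(V)):
            d = np.linalg.norm(V[i] - V[j])
            loss += max(0.0, delta - d) ** 2
            pairs += 1
    return loss / pairs

# Identical vertices are maximally penalized; orthogonal ones are not.
print(discriminative_loss(np.ones((3, 4))))   # 1.0
print(discriminative_loss(np.eye(3)))         # 0.0
```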
All the experiments are run on 4 NVIDIA RTX 2080Ti GPUs. Stochastic Gradient
Descent (SGD) is employed for optimization. We initialize the network with a
model pre-trained on ImageNet. The input size is $473\times 473$ and the batch
size is set to 8. The learning rate starts at 0.001 with a weight decay of
0.0005. Batch normalization is implemented with In-Place Activated Batch Norm
[49]. For fair comparisons, these settings are adopted for all the compared
methods.
Figure 5: Visualization of parsing results from different methods on the
CelebAMask-HQ dataset.
### V-C Face Parsing
We perform comprehensive experiments on the commonly used three face parsing
datasets and present the experimental results as follows.
#### V-C1 Ablation study
The ablation study is conducted from multiple aspects on the LaPa dataset.
On composition of modules. We evaluate the improvement brought by each module
in AGRNet. Specifically, we remove some components and train the model from
scratch under the same initialization. The quantitative results are reported
in Table I-(a), where ResNet denotes the baseline model consisting of the
backbone and the spatial pyramid pooling, which achieves 90.9% in Mean F1
score. Spatial and Adaptive refer to two schemes of graph construction:
Spatial denotes uniform spatial pooling as in previous work [10], and Adaptive
denotes the adaptive pooling proposed in this paper; they improve the Mean F1
score of Model 1 (ResNet) by 0.2% and 0.5%, respectively. This validates the
effectiveness of the proposed adaptive pixel-to-vertex projection. Edge and
Graph represent the edge module and the graph reasoning module, respectively.
Compared with Model 3, the edge module improves the Mean F1 score by 0.2%,
while the graph reasoning module improves it by 0.5%, demonstrating the
superiority of the proposed graph reasoning. Employing all the modules yields
the best performance of 92.3% in Mean F1 score.
On loss functions. We evaluate the effectiveness of different loss functions
described in Section IV-A and show the comparison results in Table I-(b). We
observe that the Discriminative loss leads to 0.5% of improvement and the
Boundary-Attention loss brings 0.9% of improvement in Mean F1 score compared
with the traditional cross entropy loss. The best performance is obtained by
utilizing all of these loss functions.
On vertex numbers. Table I-(c) presents the performance with respect to
different numbers of vertices. Our model achieves the best Mean F1 and Mean
IoU under the top-4 setting. We conjecture that too many vertices introduce
redundant, noisy vertices, whereas too few vertices fail to represent enough
semantic information.
On FLOPs and parameters. We report the complexity of different modules in
FLOPs and parameter size in Table I-(d). All the numbers are computed with an
input image size of 473 $\times$ 473 and a batch size of 1. ResNet refers to
the backbone network, which has 6.652 GFLOPs and 14.10M parameters. The Edge
Perceiving module adds only about 0.25 GFLOPs and 0.03M parameters. The Graph
Projection module further adds only 0.471 GFLOPs and 0.02M parameters. After
integrating the Graph Reasoning module, the network has 7.411 GFLOPs and
14.15M parameters in total. Thus our model improves the F1 score by 1.4% at
the cost of a mere 0.8 GFLOPs and 0.05M parameters compared with the ResNet
backbone.
Figure 6: Visualization of the hyper-parameter search.
On hyper-parameters. In the experiments, we set the parameters
$\\{\lambda_{i}\\}_{i=1}^{4}$ in (11) following the setting of the LaPa work
[9], which achieves convincing performance on several face parsing datasets.
As to the weighting parameter of the discriminative loss, $\lambda_{5}$, we
assign its value according to the approximate proportion of the loss scales.
For a more rigorous setting, we additionally adopt the traditional way of
hyper-parameter optimization, a grid search (parameter sweep), i.e., an
exhaustive search through a manually specified subset of the hyper-parameter
space. In our case, the grid search is guided by the Overall F1 score on the
test set of the Helen dataset. Specifically, we train our model under a finite
set of reasonable settings of the hyper-parameters
$\\{\lambda_{i}\\}_{i=1}^{5}$, changing a single parameter at a time, and
report the results in Table III. We see that the choice
$\lambda_{1}=1,\lambda_{2}=1,\lambda_{3}=1,\lambda_{4}=0.5,\lambda_{5}=0.1$,
which is our setting, is empirically optimal in this case.
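The one-parameter-at-a-time sweep can be sketched generically as follows; the `evaluate` callback stands in for a full training-plus-testing run, which is our simplification:

```python
def coordinate_grid_search(evaluate, base, grid):
    """Vary one hyper-parameter at a time around the current best
    configuration and keep the setting with the highest score."""
    best_cfg, best_score = dict(base), evaluate(base)
    for name, values in grid.items():
        for v in values:
            cfg = dict(best_cfg)
            cfg[name] = v
            score = evaluate(cfg)
            if score > best_score:
                best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective peaking at lambda5 = 0.1, mirroring the trend in Table III.
cfg, score = coordinate_grid_search(
    evaluate=lambda c: -abs(c["lambda5"] - 0.1),
    base={"lambda5": 1.0},
    grid={"lambda5": [0.1, 0.5, 1, 2, 5]},
)
print(cfg)
```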
Further, we illustrate the grid search in Fig. 6. We observe that the results
are insensitive to $\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}$, but
sensitive to $\lambda_{5}$, the weighting parameter of the discriminative
loss. This is mainly because the discriminative loss contributes less to the
final segmentation result than the cross entropy loss: assigning a large
weight to the discriminative loss weakens the supervision of the ground truth
parsing map and thus degrades the overall performance.
TABLE III: The parsing results under different hyper-parameters (measured by overall F1).
Parameter | 0.1 | 0.5 | 1 | 2 | 5
---|---|---|---|---|---
$\lambda_{1}$ | 92.8 | 93.0 | 93.1 | 93.1 | 92.7
$\lambda_{2}$ | 92.9 | 92.8 | 93.1 | 93.0 | 92.4
$\lambda_{3}$ | 92.2 | 92.6 | 93.1 | 93.0 | 92.1
$\lambda_{4}$ | 93.1 | 93.2 | 93.1 | 93.0 | 92.9
$\lambda_{5}$ | 93.1 | 92.0 | 91.5 | 90.0 | 85.4
TABLE IV: The performance over domain gaps (measured in F1 score).
Train | Test | Skin | Nose | U-lip | I-mouth | L-lip | Eyes | Brows | Mouth | Overall
---|---|---|---|---|---|---|---|---|---|---
Helen-Train | Helen-Test | 95.0 | 96.0 | 83.1 | 90.0 | 90.7 | 89.9 | 85.1 | 96.2 | 93.1
CelebAMask-HQ | Helen-Test | 93.2 | 94.7 | 82.7 | 88.5 | 90.1 | 89.1 | 79.6 | 92.4 | 91.8
Helen-Primary | Helen-Reference | 92.6 | 92.0 | 74.1 | 91.0 | 89.2 | 81.2 | 83.1 | 95.6 | 90.9
Helen-Reference | Helen-Primary | 61.3 | 34.8 | 4.6 | 0 | 45.1 | 69.4 | 30.7 | 33.2 | 42.9
TABLE V: Experimental comparison on the LIP dataset for human parsing (in IoU
score).
| background | hat | hair | glove | glasses | u-cloth | dress | coat | socks | pants | j-suits | scarf | skirt | face | l-arm | r-arm | l-leg | r-leg | l-shoe | r-shoe | mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
SegNet [50] | 70.62 | 26.60 | 44.01 | 0.01 | 0.00 | 34.46 | 0.00 | 15.97 | 3.59 | 33.56 | 0.01 | 0.00 | 0.00 | 52.38 | 15.30 | 24.23 | 13.82 | 13.17 | 9.26 | 6.47 | 18.17
FCN-8s [51] | 78.02 | 39.79 | 58.96 | 5.32 | 3.08 | 49.08 | 12.36 | 26.82 | 15.66 | 49.41 | 6.48 | 0.00 | 2.16 | 62.65 | 29.78 | 36.63 | 28.12 | 26.05 | 17.76 | 17.70 | 28.29
DeepLabV2 [52] | 84.53 | 56.48 | 65.33 | 29.98 | 19.67 | 62.44 | 30.33 | 51.03 | 40.51 | 69.00 | 22.38 | 11.29 | 20.56 | 70.11 | 49.25 | 52.88 | 42.37 | 35.78 | 33.81 | 32.89 | 41.64
Attention [53] | 84.00 | 58.87 | 66.78 | 23.32 | 19.48 | 63.20 | 29.63 | 49.70 | 35.23 | 66.04 | 24.73 | 12.84 | 20.41 | 70.58 | 50.17 | 54.03 | 38.35 | 37.70 | 26.20 | 27.09 | 42.92
Attention+SSL [19] | 84.56 | 59.75 | 67.25 | 28.95 | 21.57 | 65.30 | 29.49 | 51.92 | 38.52 | 68.02 | 24.48 | 14.92 | 24.32 | 71.01 | 52.64 | 55.79 | 40.23 | 38.80 | 28.08 | 29.03 | 44.73
ASN [54] | 84.01 | 56.92 | 64.34 | 28.07 | 17.78 | 64.90 | 30.85 | 51.90 | 39.75 | 71.78 | 25.57 | 7.97 | 17.63 | 70.77 | 53.53 | 56.70 | 49.58 | 48.21 | 34.57 | 33.31 | 45.41
SSL [19] | 84.64 | 58.21 | 67.17 | 31.20 | 23.65 | 63.66 | 28.31 | 52.35 | 39.58 | 69.40 | 28.61 | 13.70 | 22.52 | 74.84 | 52.83 | 55.67 | 48.22 | 47.49 | 31.80 | 29.97 | 46.19
MMAN [55] | 84.75 | 57.66 | 65.63 | 30.07 | 20.02 | 64.15 | 28.39 | 51.98 | 41.46 | 71.03 | 23.61 | 9.65 | 23.20 | 69.54 | 55.30 | 58.13 | 51.90 | 52.17 | 38.58 | 39.05 | 46.81
SS-NAN [56] | 88.67 | 63.86 | 70.12 | 30.63 | 23.92 | 70.27 | 33.51 | 56.75 | 40.18 | 72.19 | 27.68 | 16.98 | 26.41 | 75.33 | 55.24 | 58.93 | 44.01 | 41.87 | 29.15 | 32.64 | 47.92
CE2P [57] | 87.6 | 65.29 | 72.54 | 39.09 | 32.73 | 69.46 | 32.52 | 56.28 | 49.67 | 74.11 | 27.23 | 14.19 | 22.51 | 75.5 | 65.14 | 66.59 | 60.1 | 58.59 | 46.63 | 46.12 | 53.1
Ours | 87.4 | 67.56 | 72.63 | 42.96 | 36.65 | 69.15 | 35.35 | 55.74 | 51.13 | 74.19 | 30.49 | 15.69 | 22.61 | 75.01 | 65.04 | 67.76 | 58.51 | 57.13 | 45.79 | 45.37 | 53.8
#### V-C2 Comparison with the state of the art
We compare our method with state-of-the-art approaches on the three datasets,
including both small-scale and large-scale ones, as presented in Table II. The
score of eyes/brows is the sum of the scores of the left and right ones, and
the overall score is the average score of the mouth, eyes, nose and brows. It
is worth noting that, because the code of some algorithms is not publicly
available and the requirements on training data differ, it is hard to
re-implement all the methods on every dataset. Therefore, we focus on
implementing and testing the latest methods while dropping several earlier
ones, including Liu et al. [12], Lin et al. [13] and Lee et al. [18].
Specifically, the work by Lin et al. [13] cannot be tested on the LaPa and
CelebAMask-HQ datasets, because both lack the fine facial segmentation
bounding boxes required by [13]. Liu et al. [12] and Lee et al. [18] are
improved upon by follow-up works, so we report the better results of Wei et
al. [46] and Luo et al. [11].
On the small-scale Helen dataset, our method achieves comparable performance
with the state-of-the-art approaches. On the two large-scale face parsing
datasets—LaPa and CelebAMask-HQ, our method outperforms the previous methods
by a large margin. Specifically, our model achieves improvement over Te et al.
[10] by 1.2% on the LaPa dataset and by 0.4% on the CelebAMask-HQ dataset,
especially on brows and eyes.
We also visualize some parsing results of the CelebAMask-HQ dataset in
comparison with competitive methods in Fig. 5. We see that our results exhibit
accurate segmentation even over delicate details such as the categories of
hair, earring and necklace. For example, in the first row, our model
distinguishes the hand from the hair component accurately even if they are
overlapped, while other methods assign the label of the hair to the hand. In
the second row, our model generates the most consistent parsing result for the
thin necklace while other results are fragmented. In the third row, our model
separates the delicate earring from the hair, producing the finest parsing
result.
#### V-C3 Performance over domain gaps
Although there is no significant domain gap among the datasets, domain gaps
[42, 43] can be examined from two aspects: training and testing on different
datasets, and exploring the diversity of face poses. The evaluation results
are listed in Table IV.
Firstly, we conduct a cross test across datasets, namely CelebAMask-HQ and
Helen. We divide the Helen dataset into a training set (denoted as
"Helen-Train") and a test set (denoted as "Helen-Test"). Specifically, we take
the model trained on the CelebAMask-HQ dataset and test it directly on
"Helen-Test". However, since the two datasets are labeled with different
categories, we select the common facial components for testing and mark the
other labels as background. Consequently, finer labels such as "Glasses" and
"Ears" are categorized as background, which affects the performance compared
with the result of training and evaluating on the Helen dataset, as shown in
Table IV. Besides, among the common labels, the segmentation accuracy
decreases most on "Brows" and "Mouth".
Secondly, we conduct a cross test between data with different face poses.
Specifically, we detect the pitch angle of each input image and divide the
Helen dataset into two subsets according to whether the pitch angle is within
$0^{\circ}-40^{\circ}$. If so, we refer to the images as the "Helen-Primary"
subset, and otherwise the "Helen-Reference" subset. "Helen-Reference" contains
798 of the 2,330 images in the Helen dataset, i.e., 34% of the complete
dataset. As presented in Table IV, the performance drops by only 1.4% when we
train on "Helen-Primary" and test on "Helen-Reference". In contrast, when we
swap the two subsets, the parsing result is only 42.9% in Overall F1 score.
This is because the "Helen-Reference" subset, with its large pitch angles,
does not provide enough information for understanding complete faces,
especially the subtle mouth components.
In addition, the cross test between different face poses also demonstrates the
performance of AGRNet when parts of the face are missing, e.g., due to
self-occlusion, since non-frontal faces are generally self-occluded. The good
performance of training on "Helen-Primary" and testing on "Helen-Reference"
shows that our method generalizes well to self-occluded faces. This
generalization ability is credited to the intrinsic structure embedded in the
proposed graph representation: even though self-occluded data may exhibit
rather different distribution characteristics, the intrinsic structure
embedded in the data tends to be preserved from seen to unseen datasets.
Consequently, the graph representation learned from the face structure
provides extra insights from the structure domain, in addition to the data
domain, which ultimately enhances the generalizability of the network.
Figure 7: Visualization of the learned graph adjacency matrix.
#### V-C4 Visualization of vertices and response
Further, we present visualization examples of the vertices and the
pixel-to-vertex projection matrix to provide intuitive interpretation. Fig. 8
shows the selected vertices with respect to specific facial components, with
keypoints marked in yellow. We observe that the vertices lie in the interior
region of the corresponding facial component. Besides, symmetric components,
such as the left and right brows, are well separated.
Also, we visualize the adjacency matrix trained on the Helen dataset in Fig.
7. As shown by the color bar, lighter colors indicate larger edge weights. We
observe that the pair of left brow ("lb" in the figure) and right brow ("rb")
has the strongest positive correlation, while the pair of inner mouth
("imouth") and nose has the strongest negative correlation, which is
reasonable to some extent.
Furthermore, we visualize the pixel-to-vertex projection matrix via response
maps. As in Fig. 11, given a projection matrix
${\mathbf{P}}\in\mathbb{R}^{KN_{c}\times H_{1}W_{1}}$, we visualize the weight
with which each pixel contributes to the semantic vertices. Since there are 4
vertices corresponding to each component, we sum them up into a complete
component response map. Brighter colors in Fig. 11 indicate higher responses.
We observe that, in general, pixels exhibit high responses to vertices of the
corresponding facial component, which validates the effectiveness of the
proposed adaptive graph projection. In particular, edge pixels show higher
responses within each component, thanks to the proposed edge attention and
boundary-attention loss. Note that outliers exist when a component is
occluded: for example, while one ear is occluded in both images of rows 3 and
4, several pixels still exhibit high responses in the other ear's region of
the response map.
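Folding the $K=4$ vertex rows of $\mathbf{P}$ into per-component response maps can be sketched as follows; the concrete shapes (11 categories, a 6 x 6 map) are our illustration:

```python
import numpy as np

def component_response(P, k, h, w):
    """Sum the k vertex rows of each component in a (K*Nc, H*W)
    projection matrix, yielding one (H, W) response map per component."""
    nc = P.shape[0] // k
    return P.reshape(nc, k, h, w).sum(axis=1)   # (Nc, H, W)

rng = np.random.default_rng(1)
P = rng.random((4 * 11, 36))          # 4 vertices x 11 categories, 6x6 map
maps = component_response(P, k=4, h=6, w=6)
print(maps.shape)
```

Each of the resulting maps can then be rendered as an image, as in Fig. 11.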
Figure 8: Visualization of projected vertices with respect to facial
components. The yellow points represent vertices acquired from the proposed
adaptive graph projection, and each column represents a certain category. Note
that the vertices significantly overlap with the mainstream facial landmark
layout [58, 59], which benefits learning and reasoning the semantic
representation for face parsing.
Figure 9: Human parsing results on the LIP dataset. Our parsing maps are clear
and smooth along the boundaries, and even exhibit better results for shorts
than the ground truth, as depicted in the third row.
Figure 10: Failure cases. The earring is mislabeled in the first sample, the
hair region is not continuous in the second sample, and some background is
classified as nose in the third.
Figure 11: Response maps with respect to different facial components. Brighter
colors indicate higher responses. The response maps exhibit high responses to
ambiguous pixels along the inter-component boundaries.
#### V-C5 Failure cases
We show several unsatisfactory examples from the CelebAMask-HQ dataset in Fig.
10, where most incorrect labels lie along the boundaries or in the
"accessories" categories (e.g., earring and necklace). The failure cases are
mostly caused by the significant imbalance of pixel numbers across categories
or by the disturbance of adjacent pixels along the boundary.
### V-D Human Parsing
Human parsing is another segmentation task, which assigns a pixel-wise label
to each semantic human part. Unlike faces, the human structure is
hierarchical: a large component can be decomposed into several fragments.
Unlike recent works that construct a fixed graph topology based on a
hand-crafted human hierarchy [60, 61, 62], our model learns the graph
connectivity adaptively. Following the standard training and validation split
described in [57], we evaluate our model on the validation set of the LIP
dataset and present the results in Table V. Several compared methods involve
attention modules, including SS-NAN [56] and Attention [53].
The results in Table V show that our model outperforms the competitive method
CE2P [57] by 0.7% in mean IoU, which validates that our model generalizes to
human parsing. It is also worth noting that our model achieves significant
improvements in elaborate categories such as socks, gloves and j-suits.
Furthermore, we provide visualization examples in Fig. 9. The visualized
results demonstrate that our model yields accurate predictions of human
components, especially around the boundaries between components, which is
credited to the proposed component-level modeling.
## VI Conclusion
In this paper, we propose an adaptive graph representation learning and
reasoning method (AGRNet) for the component-wise modeling of face images,
aiming to exploit the component-wise relationship for accurate face parsing.
In particular, we adaptively project the representation from the pixel space
to the vertex space that represents semantic components as graph abstraction
under the initial condition of a predicted parsing map. The image edge
information is incorporated to highlight ambiguous pixels during the
projection for precise segmentation along the edges. Then, the model learns
and reasons over the relationship among components by propagating features
across vertices on the graph. Finally, the refined vertex features are
projected back to pixel grids for the prediction of the final parsing map.
Further, we propose a discriminative loss to learn vertices with distinct
features for semantic description of facial components. Experimental results
demonstrate that AGRNet sets the new state of the art on large-scale face
parsing datasets, with accurate segmentation along the boundaries. Also, our
model shows effectiveness on the human parsing task, which validates its
generalizability.
Gusi Te is currently an MSc student with the Wangxuan Institute of Computer Technology, Peking University, where he also received his bachelor's degree. His research interests include 3D computer vision and graph neural networks.

Wei Hu (Senior Member, IEEE) received the B.S. degree in Electrical Engineering from the University of Science and Technology of China in 2010, and the Ph.D. degree in Electronic and Computer Engineering from the Hong Kong University of Science and Technology in 2015. She was a Researcher with Technicolor, Rennes, France, from 2015 to 2017. She is currently an Assistant Professor with the Wangxuan Institute of Computer Technology, Peking University. Her research interests are graph signal processing, graph-based machine learning, and 3D visual computing. She has authored over 50 international journal and conference publications, with several paper awards including the Best Student Paper Runner Up Award at ICME 2020 and Best Paper Candidate at CVPR 2021. She was awarded the 2021 IEEE Multimedia Rising Star Award—Honorable Mention. She serves as an Associate Editor for Signal Processing Magazine, IEEE Transactions on Signal and Information Processing over Networks, etc.

Yinglu Liu is currently a senior researcher at JD AI Research, Beijing, China. She received the B.E. degree in Information Engineering from Xiamen University in 2009, and the Ph.D. degree in Pattern Recognition and Intelligent Systems from the Institute of Automation, Chinese Academy of Sciences, in 2015. Before joining JD.com, she was a senior algorithm engineer at Samsung Research China-Beijing. Her research interests include computer vision and machine learning, with a focus on image generation, semantic segmentation, etc.

Hailin Shi received his Ph.D. from the Institute of Automation, Chinese Academy of Sciences, in 2017. He is currently a senior researcher at JD AI Research. His research interests include face analysis and deep learning. He has authored or co-authored over 40 publications in the major conferences and journals of computer vision and pattern recognition, as well as a book chapter on deep metric learning, and has filed over 20 face-analysis-related patents.

Tao Mei (M'07-SM'11-F'19) is a Vice President with JD.COM and the Deputy Managing Director of JD AI Research, where he also serves as the Director of the Computer Vision and Multimedia Lab. Prior to joining JD.COM in 2018, he was a Senior Research Manager with Microsoft Research Asia in Beijing, China. He has authored or co-authored over 200 publications (with 12 best paper awards) in journals and conferences, 10 book chapters, and edited five books. He holds over 25 US and international patents. He is or has been an Editorial Board Member of IEEE Trans. on Image Processing, IEEE Trans. on Circuits and Systems for Video Technology, IEEE Trans. on Multimedia, ACM Trans. on Multimedia Computing, Communications, and Applications, Pattern Recognition, etc. He is the General Co-chair of IEEE ICME 2019, and the Program Co-chair of ACM Multimedia 2018, IEEE ICME 2015, and IEEE MMSP 2015. Tao received his B.E. and Ph.D. degrees from the University of Science and Technology of China, Hefei, China, in 2001 and 2006, respectively. He is an adjunct professor of the University of Science and Technology of China, The Chinese University of Hong Kong (Shenzhen), and Ryerson University. Tao is a Fellow of IEEE (2019), a Fellow of IAPR (2016), a Distinguished Scientist of ACM (2016), and a Distinguished Industry Speaker of the IEEE Signal Processing Society (2017).
Funded by a FRIA fellowship from the F.R.S.-FNRS. Research associate of F.R.S.-FNRS, supported by the ARC Project Transform Fédération Wallonie-Bruxelles and the FNRS CDR J013116F and MIS F451019F projects. Partly funded by the ANR projects DeLTA (ANR-16-CE40-0007) and Ticktac (ANR-18-CE40-0015).
# Computability of Data-Word Transductions
over Different Data Domains
Léo Exibard Université Libre de Bruxelles, Brussels, Belgium and Aix
Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France
<EMAIL_ADDRESS>Emmanuel Filiot Université Libre de Bruxelles,
Brussels, Belgium Nathan Lhote Aix Marseille Univ, Université de Toulon,
CNRS, LIS, Marseille, France Pierre-Alain Reynier Aix Marseille Univ,
Université de Toulon, CNRS, LIS, Marseille, France
###### Abstract
In this paper, we investigate the problem of synthesizing computable functions
of infinite words over an infinite alphabet (data $\omega$-words). The notion
of computability is defined through Turing machines with infinite inputs which
can produce the corresponding infinite outputs in the limit. We use non-
deterministic transducers equipped with registers, an extension of register
automata with outputs, to describe specifications. Being non-deterministic,
such transducers may not define functions but more generally relations of data
$\omega$-words. In order to increase the expressive power of these machines,
we even allow guessing of arbitrary data values when updating their registers.
For functions over data $\omega$-words, we identify a sufficient condition
(the possibility of determining the next letter to be outputted, which we call
next letter problem) under which computability (resp. uniform computability)
and continuity (resp. uniform continuity) coincide.
We focus on two kinds of data domains: first, the general setting of
oligomorphic data, which encompasses any data domain with equality, as well as
the setting of rational numbers with linear order; and second, the set of
natural numbers equipped with linear order. For both settings, we prove that
functionality, _i.e._ determining whether the relation recognized by the
transducer is actually a function, is decidable. We also show that the so-
called next letter problem is decidable, yielding equivalence between
(uniform) continuity and (uniform) computability. Last, we provide
characterizations of (uniform) continuity, which allow us to prove that these
notions, and thus also (uniform) computability, are decidable. We even show
that all these decision problems are PSpace-complete for $(\mathbb{N},<)$ and
for a large class of oligomorphic data domains, including for instance
$(\mathbb{Q},<)$.
###### keywords:
Data words, register automata, register transducers, functionality, continuity, computability.
###### Contents
1. 1 Data alphabet, languages and transducers
1. 1.1 Data as logical structures
2. 1.2 Words and data words
3. 1.3 Functions and relations
4. 1.4 Register transducers
2. 2 Continuity and computability
1. 2.1 Continuity notions
1. 2.1.1 Continuity
2. 2.1.2 Cauchy continuity
3. 2.1.3 Uniform continuity
2. 2.2 Computability notions
3. 2.3 Computability versus continuity
1. 2.3.1 From computability to continuity
2. 2.3.2 From continuity to computability
3. 3 Oligomorphic data
1. 3.1 Characterizing functionality and continuity
1. 3.1.1 Characterization of non-emptiness
2. 3.1.2 Characterizing functionality
3. 3.1.3 Characterizing continuity
2. 3.2 Deciding functionality, continuity and computability
1. 3.2.1 Computing the next letter
2. 3.2.2 Deciding functionality, continuity and computability
4. 4 A non oligomorphic case: $(\mathbb{N},\\{<,0\\})$
1. 4.1 On loop iteration
2. 4.2 $\mathbb{Q}$-types
3. 4.3 Relations between machines over $\mathbb{N}$ and over $\mathbb{Q}_{+}$
4. 4.4 Emptiness of automata
5. 4.5 Functionality
6. 4.6 Next-letter problem
7. 4.7 Uniform continuity
8. 4.8 Continuity
9. 4.9 Transfer result
5. A Proof of Lemma 14
6. B Proof of Lemma 30
## Introduction
#### Synthesis
Program synthesis aims at deriving, in an automatic way, a program that
fulfils a given specification. It is very appealing when for instance the
specification describes, in some abstract formalism (an automaton or ideally a
logic), important properties that the program must satisfy. The synthesised
program is then _correct-by-construction_ with regard to those properties. It
is particularly important and desirable for the design of safety-critical
systems with hard dependability constraints, which are notoriously hard to
design correctly. In their most general forms, synthesis problems have two
parameters, a set of inputs In and a set of outputs Out, and relate two
classes $\mathcal{S}$ and $\mathcal{I}$ of specifications and implementations
respectively. A specification $S\in\mathcal{S}$ is a relation
$S\subseteq\textsf{In}\times\textsf{Out}$ and an implementation
$I\in\mathcal{I}$ is a function $I:\textsf{In}\rightarrow\textsf{Out}$. The
$(\mathcal{S},\mathcal{I})$-synthesis problem asks, given a (finite
representation of a) specification $S\in\mathcal{S}$, whether there exists
$I\in\mathcal{I}$ such that for all $u\in\textsf{In}$, $(u,I(u))\in S$. If
such $I$ exists, then the procedure must return a program implementing $I$. If
all specifications in $\mathcal{S}$ are _functional_ , in the sense that they
are the graphs of functions from In to Out, then the
$(\mathcal{S},\mathcal{I})$-synthesis is a membership problem: given
$f\in\mathcal{S}$, does $f\in\mathcal{I}$ hold?
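As a toy illustration of this setting over finite words (the names `spec`, `impl` and `realizes` are ours, not part of the formal development), a specification is a relation that a candidate implementation can be checked against pointwise:

```python
# Toy sketch of the (S, I)-synthesis setting on finite words.
# All names here are illustrative, not from the paper.

def realizes(spec, impl, inputs):
    """Check that (u, impl(u)) belongs to the specification for each sample."""
    return all(spec(u, impl(u)) for u in inputs)

# Specification: the output is the input with every letter duplicated.
spec = lambda u, v: v == "".join(c * 2 for c in u)

# Candidate implementation realizing the specification.
impl = lambda u: "".join(c * 2 for c in u)

assert realizes(spec, impl, ["", "a", "ab", "abc"])
```

Of course, an actual synthesis procedure must certify the inclusion for all inputs, not just samples; the sketch only fixes the objects involved.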
#### Automata-theoretic approach to synthesis
In this paper, we are interested in the automata-theoretic approach to
synthesis, in the sense that specifications and implementations can be defined
by automata, or by automata extended with outputs called _transducers_. In
this approach, In and Out are sets of words over input and output alphabets
$\Sigma$ and $\Gamma$ respectively. Perhaps the most well-known decidable
instance of synthesis in this context is the celebrated result of Büchi and
Landweber [JL69]: $\mathcal{S}$ is the class of $\omega$-regular
specifications, which relates infinite input words
$i_{0}i_{1}\dots\in\Sigma^{\omega}$ to infinite output words
$o_{0}o_{1}\dots\in\Gamma^{\omega}$ through $\omega$-automata (e.g.
deterministic parity automata), in the sense that the infinite convolution
$i_{0}o_{0}i_{1}o_{1}\dots\in(\Sigma\Gamma)^{\omega}$ must be accepted by an
$\omega$-automaton defining the specification. The class of implementations
$\mathcal{I}$ is all the functions which can be defined by Mealy machines, or
equivalently, deterministic _synchronous_ transducers which, whenever they
read some input $i\in\Sigma$, produce some output $o\in\Gamma$ and possibly
change their own internal state. The seminal result of Büchi and Landweber has
recently triggered a lot of research in reactive system synthesis and game
theory, both on the theoretical and practical sides, see for instance
[CHVB18]. We identify two important limitations to the now classical setting
of $\omega$-regular reactive synthesis:
1. $(i)$
specifications and implementations are required to be synchronous, in the
sense that a single output $o\in\Gamma$ must be produced for each input
$i\in\Sigma$, and
2. $(ii)$
the alphabets $\Sigma$ and $\Gamma$ are assumed to be finite.
Let us argue why we believe $(i)$ and $(ii)$ are indeed limitations. First of
all, if a specification is not realizable by a synchronous transducer, then a
classical synthesis algorithm stops with a negative answer. However, the
specification could be realizable in a larger class of implementations
$\mathcal{I}$. As an example, if $S$ is the set of words $i_{0}o_{0}\dots$
such that $o_{\ell}=i_{\ell+1}$, then $S$ is not realizable synchronously
because it is impossible to produce $o_{\ell}$ before knowing $i_{\ell+1}$.
But this specification is realizable by a program which can delay its output
production by one time unit. Enlarging the class of implementations can
therefore allow one to give finer answers to the synthesis problem in cases
where the specification is not synchronously realizable. We refer to this type
of relaxations as _asynchronous_ implementations. An asynchronous
implementation can be modelled in automata-theoretic terms as a transducer
which, whenever it reads an input $i\in\Sigma$, produces none or several
outputs, i.e. a finite word $u\in\Gamma^{*}$. Generalizations of reactive
system synthesis to asynchronous implementations have been considered in
[HKT12, FLZ11, WZ20]. In these works however, the specification is still
synchronous, given by an automaton which strictly alternates between reading
input and output symbols. Here, we also assume that specifications are
asynchronous, as it gives more flexibility in the relations that can be
expressed. For instance, one can specify that some response has to be delayed, or, when transforming data streams, allow for erasure and/or duplication of some data values.
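The earlier specification $o_{\ell}=i_{\ell+1}$ admits a minimal asynchronous implementation, sketched below (illustrative code, not from the paper): emit the empty word on the first input letter, then copy each subsequent input letter, so every output is produced one step late.

```python
# Minimal sketch of an asynchronous implementation of o_l = i_{l+1}
# (illustrative only): nothing is emitted on reading the first input
# letter, and every later input letter i is emitted as the pending output.

def delayed_copy(inputs):
    it = iter(inputs)
    next(it, None)      # read i_0 and produce the empty word
    for i in it:
        yield i         # o_l is produced upon reading i_{l+1}

print(list(delayed_copy("abcd")))  # ['b', 'c', 'd']
```

This is exactly the one-step delay that a synchronous transducer cannot express, since it would have to commit to $o_{\ell}$ before seeing $i_{\ell+1}$.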
The synchronicity assumption made by classical reactive synthesis is motivated
by the fact that such methods focus on the control of systems rather than on
the data, in the sense that input symbols are Boolean signals issued by some
environment, and output symbols are actions controlling the system in order to
fulfil some correctness properties. From a data-processing perspective, this is
a strong limitation. The synthesis of systems which need to process streams of
data, like a monitoring system or a system which cleans noisy data coming from
sensors, cannot be addressed using classical $\omega$-regular synthesis.
Therefore, one needs to extend specifications to asynchronous specifications,
in the sense that the specifications must describe properties of executions
which do not strictly alternate between inputs and outputs. Already on finite
words however, the synthesis problem of asynchronous specifications by
asynchronous implementations, both defined by transducers, is undecidable in
general [CL14], and decidable only in some restricted cases [FJLW16]. The
second limitation $(ii)$ is addressed in the next paragraph.
#### From finite to infinite alphabets
To address the synthesis of systems where _data_ are taken into account, one
also needs to extend synthesis methods to handle infinite alphabets. As an
example, in a system scheduling processes, the data values are process ids. In
a stream processing system, data values can be temperature or pressure
measurements, for example. Not only does one need synthesis methods able to handle infinite alphabets of data values, but also methods in which those values can be compared through predicates, such as equality or a linear order. Recent works have
considered the synthesis of (synchronous) reactive systems processing _data
words_ whose elements can be compared for equality [KMB18, ESKG14, KK19,
EFR19] as well as comparison with a linear order on the data [EFK21]. To
handle data words, just as automata have been extended to _register automata_
, transducers have been extended to _register transducers_. Such transducers
are equipped with a finite set of registers in which they can store data
values and with which they can compare them for equality, inequality or in
general any predicate, depending on the considered data domain. When a
register transducer reads a data value, it can compare it to the values stored
in its registers, assign it to some register, and output the content of none
or several registers, i.e., a finite word $v$ of register contents. To have
more expressive power, we also allow transducers to guess an arbitrary data
value and assign it to some register. This feature, called data guessing, is arguably a more robust notion of non-determinism for machines with registers, and was introduced to enhance register automata [KZ10]. We denote by
NRT the class of non-deterministic register transducers. As an example,
consider the (partial111In this paper, data word functions can be partial by
default and therefore we do not explicitly mention it in the sequel.) data
word function $g$ which takes as input any data word of the form $u=su_{1}su_{2}\cdots\in\mathbb{N}^{\omega}$, where $s$ occurs infinitely many times in $u$ and $u_{i}\in(\mathbb{N}\setminus\\{s\\})^{+}$ for all $i\geq 1$. Now,
for all $i\geq 1$, denote by $|u_{i}|$ the length of $u_{i}$ and by $d_{i}$
the last data value occurring in $u_{i}$. The function $g$ is then defined as
$g(u)=d_{1}^{|u_{1}|}sd_{2}^{|u_{2}|}s\dots$. This function can be defined by
the NRT of Figure 1. Note that without the guessing feature, this function
could not be defined by any NRT.
Figure 1: An NRT defining the
data word function $g$, equipped with a Büchi condition. The current data
value denoted by $\star$ is tested with respect to the content of the
registers on the left of the bar $|$. On the right of the bar, there are
instructions such as assigning an arbitrary data value to $r$ (notation $?r$),
outputting the content of a register or nothing ($\mathrm{out}~{}r$), or
assigning the current data value to some register ($\downarrow{}r$). The Büchi
condition makes sure that the first data value, initially stored in $r_{s}$
during the first transition, occurs infinitely many times. The register
$r_{c}$ stores the last data value that has been read. $r_{o}$ is meant to
store the last data value $d_{i}$ of an input chunk $u_{i}$. It has to be
guessed whenever a new chunk $u_{i}$ is starting to be read, and on reading
again $r_{s}$, the automaton checks that the guess was right by evaluating
whether $r_{c}=r_{o}$ (at that moment, $r_{c}$ contains $d_{i}$).
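As a sanity check on the definition of $g$, here is a streaming sketch (our own illustrative code, not the transducer of Figure 1): buffer each chunk $u_{i}$, and when the separator $s$ reappears, emit $d_{i}^{|u_{i}|}$ followed by $s$.

```python
# Streaming sketch of g (illustrative, not the NRT itself): buffer each
# maximal chunk u_i that avoids the separator s; once s is read again,
# the last value d_i and the length |u_i| of the chunk are both known,
# so d_i^{|u_i|} s can be emitted at once.

def compute_g(stream):
    it = iter(stream)
    s = next(it)                 # the separator is the first data value
    chunk = []
    for d in it:
        if d == s:               # the chunk u_i is complete
            yield from [chunk[-1]] * len(chunk)
            yield s
            chunk = []
        else:
            chunk.append(d)

# Prefix of s u_1 s u_2 s ... with s = 0, u_1 = 1 2, u_2 = 3 3 4:
print(list(compute_g([0, 1, 2, 0, 3, 3, 4, 0])))  # [2, 2, 0, 4, 4, 4, 0]
```

On an infinite input in the domain of $g$, this procedure produces the output in the limit, chunk by chunk.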
Thanks to the non-determinism of NRT, in general and unlike the previous
example, there might be several accepting runs for the same input data word,
each of them producing a possibly different output data word. Thus, NRT can be
used to define binary relations of data $\omega$-words, and hence
specifications. In the works [KMB18, ESKG14, KK19, EFR19, EFK21] already
mentioned, NRT have been used as a description of specifications, however they
are assumed to be synchronous and without guessing.
#### Objective: synthesis of computable data word functions
In this paper, our goal is to define a synthesis setting where both
limitations $(i)$ and $(ii)$ are lifted. In particular, specifications are
assumed to be given by (asynchronous) non-deterministic register transducers
equipped with a Büchi condition (called NRT). To retain decidability, we
however make some hypothesis: specifications are assumed to be functional,
i.e., they already define a function from input data $\omega$-words to output
data $\omega$-words. While this is a strong hypothesis, it is motivated by two
facts. First, the synthesis problem of asynchronous implementations from
asynchronous specifications given by (non-functional) NRT is undecidable in
general, already in the finite alphabet case [CL14]. Second, functional NRT
define uncomputable functions in general, and therefore they cannot be used as
machines that compute the function they specify. Since those functions are
defined over infinite inputs, let us make clear what we mean by computable
functions. A (partial) function $f$ of data $\omega$-words is computable if
there exists a Turing machine $M$ that has an infinite input
$x\in\mathrm{dom}(f)$, and produces longer and longer prefixes of the output
$f(x)$ as it reads longer and longer prefixes of the input $x$. Therefore,
such a machine produces the output $f(x)$ in the limit. As an example, the
function $g$ previously defined is computable. A Turing machine computing it
simply has to wait until it sees the last data value $d_{i}$ of a chunk
$u_{i}$ (which necessarily happens after a finite amount of time), compute the
length $\ell_{i}$ of $u_{i}$ and once it sees $d_{i}$, output
$d_{i}^{\ell_{i}}$ at once. However, consider the extension $f$ to any input
data word defined as follows: $f(u)=g(u)$ if $u$ is in the domain of $g$, and
otherwise $f(u)=s^{\omega}$ where $s$ is the first data value of $u$. Such a
function is not computable. For instance, on input $x=sd^{\omega}$ (where
$d\neq s$ is an arbitrary data value), we have $f(sd^{\omega})=s^{\omega}$, as
$x$ is not in the domain of $g$. Yet, on any finite prefix $\alpha_{k}=sd^{k}$
of $sd^{\omega}$, any hypothetical machine computing $f$ cannot output
anything. Indeed, there exists a continuation of $\alpha_{k}$ which is in the
domain of $g$, and for which $f$ produces a word which starts with a different
data value than $f(\alpha_{k}d^{\omega})$: it suffices to take the
continuation $(sd)^{\omega}$, as we have
$f(\alpha_{k}(sd)^{\omega})=g(\alpha_{k}(sd)^{\omega})=d^{k}(sd)^{\omega}$.
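To make this concrete, here is a small Python sketch (data values as integers, with `0` playing the role of the separator $s$ and `1` the role of $d$; the helper name `g_prefix` is ours). It computes the output of $g$ on the complete chunks of a finite prefix, following the definition $g(x)=d_{1}^{|u_{1}|+1}sd_{2}^{|u_{2}|+1}s\dots$ recalled in Section 2, and checks that the two continuations of $sd^{k}$ force different first output letters, so no machine can commit to any output after reading $sd^{k}$ alone.

```python
# Data values are integers; SEP plays the role of the separator s.
SEP, D = 0, 1

def g_prefix(x, s=SEP):
    """Output of g on the complete chunks of the finite prefix x.
    x is assumed to start with the separator s; a chunk is the (nonempty)
    block u_i d_i between two consecutive occurrences of s, and g outputs
    the chunk's last value repeated |chunk| times, followed by s."""
    chunks, cur = [], []
    for a in x[1:]:
        if a == s:
            chunks.append(cur)
            cur = []
        else:
            cur.append(a)
    # cur holds an incomplete chunk: no output can be produced for it yet.
    out = []
    for c in chunks:
        out += [c[-1]] * len(c) + [s]
    return out

k = 3
prefix = [SEP] + [D] * k                 # the prefix s d^k
# Continuation 1: d^omega -- s never reappears, the input leaves dom(g),
# so f outputs s^omega, whose first letter is SEP.
first_letter_1 = SEP
# Continuation 2: (s d)^omega -- the input is in dom(g).
first_letter_2 = g_prefix(prefix + [SEP, D] * 5)[0]
assert first_letter_1 != first_letter_2  # no safe first output after s d^k
```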
In this paper, our goal is therefore to study the following synthesis problem:
given a functional NRT defining a function $f$ of data $\omega$-words,
generate a Turing machine which computes $f$ if one exists. In other words,
one wants to decide whether $f$ is computable, and if it is, to synthesize an
algorithm which computes it.
#### Contributions
Register transducers can be parameterized by the set of data values from which
the $\omega$-data words are built, along with the set of predicates which can
be used to test those values. We distinguish a large class of data sets for
which we obtain decidability results for the latter problem, namely the class
of oligomorphic data sets [BKL14]. Briefly, oligomorphic data sets are
countable sets $D$ equipped with a finite set of predicates which satisfies
that for all $n$, $D^{n}$ can be partitioned into finitely many equivalence
classes by identifying tuples which are equal up to automorphisms (predicate-
preserving bijections). For example, any set equipped with equality is
oligomorphic, such as $(\mathbb{N},\\{=\\})$; likewise, $(\mathbb{Q},\\{<\\})$
is oligomorphic, while $(\mathbb{N},\\{<\\})$ is not. However,
$(\mathbb{N},\\{<\\})$ is an interesting data set in and of itself. We also
investigate NRT over such a data set, using the fact that it is a substructure
of $(\mathbb{Q},\\{<\\})$, which is oligomorphic. Our detailed contributions
are the following:
1. 1.
We first establish a general correspondence between computability and the
classical mathematical notion of continuity (for the Cantor distance) for
functions of data $\omega$-words (Theorems 9 and 10). This correspondence
holds under a general assumption, namely the decidability of what we call
the _next-letter problem_ , which, in short, asks to compute, if it exists, the
next data value which can be safely output knowing only a finite prefix of the
input data $\omega$-word. We also show similar
correspondences for more constrained computability and continuity notions,
namely Cauchy, uniform and $m$-uniform computability and continuity. In these
correspondences, the construction of a Turing machine computing the function
is effective.
2. 2.
We consider a general computability assumption for oligomorphic data sets,
namely that they have a decidable first-order satisfiability problem [Boj19]. We
call such data sets _decidable_. We then show that functions defined by NRT
over decidable oligomorphic data sets and over $(\mathbb{N},\\{<\\})$, have
decidable next-letter problem. As a consequence (Theorems 20 and 37), we
obtain that a function of data $\omega$-words definable by an NRT over
decidable oligomorphic data sets and over $(\mathbb{N},\left\\{<\right\\})$,
is computable iff it is continuous (and likewise for all computability and
continuity notions we introduce). This is a useful mathematical
characterization of computability, which we use to obtain our main result.
3. 3.
As explained before, an NRT may not define a function in general but a
relation, due to non-determinism. Functionality is a semantical, and not
syntactical, notion. We nevertheless show that checking whether an NRT defines
a function is decidable for decidable oligomorphic data sets (Theorem 21).
This problem is called the _functionality problem_ and is a prerequisite to
our study of computability, as we assume specifications to be functional. We
establish PSpace-completeness of the functionality problem for NRT over
$(\mathbb{N},\\{<\\})$ (Corollary 35) and for oligomorphic data sets (Theorem
21) under some additional assumptions on the data set that we call polynomial
decidability. In short, it is required that the data set has a PSpace-decidable
first-order satisfiability problem.
4. 4.
Finally, we show (again Theorem 21) that continuity of functions defined by
NRT over decidable (resp. polynomially decidable) oligomorphic data sets is
decidable (resp. PSpace-c). We also obtain PSpace-completeness in the non-
oligomorphic case $(\mathbb{N},\\{<\\})$ (Theorem 42). These results also hold
for the stronger notion of uniform continuity (see also Theorem 39). As a
result of the correspondence between computability and continuity, we also
obtain that computability and uniform computability are decidable for
functions defined by NRT over decidable oligomorphic data sets, and PSpace-c
for polynomially decidable oligomorphic data sets as well as
$(\mathbb{N},\\{<\\})$. This is our main result and it answers positively our
initial synthesis motivation.
The proof techniques we use have the following structure in common: first, we
characterize non-functionality and non-continuity by structural patterns on
NRT and establish small witness properties for the existence of these patterns.
Then, based on the small witness properties, we show how to decide whether a
given NRT matches such patterns. While the proofs have some
similarities between the oligomorphic case, the case $(\mathbb{N},\\{<\\})$
and the functionality and continuity problems, there are subtle technical
differences which make them hard to factorize with a reasonable amount of
additional notation and theoretical assumptions.
#### Related Work
We have already mentioned works related to the synthesis problem. We now give
references to results on computability and continuity. The notion of
continuity with regard to the Cantor distance is not new, and for rational
functions over finite alphabets, it was already known to be decidable [Pri02].
The approach of Prieur is to reduce continuity to functionality by defining
from a transducer $T$ a transducer realizing its topological closure. We were
able to extend this approach to almost all the cases we considered, except for
transducers over $(\mathbb{N},\left\\{<\right\\})$ with guessing allowed, so
we chose a different proof strategy. The connection between continuity and
computability for functions of $\omega$-words over a finite alphabet has
recently been investigated in [DFKL20] for one-way and two-way transducers.
Our results lift the case of one-way transducers from [DFKL20] to data words.
Our results were partially published in conference proceedings [EFR20]. In
that publication, only the case of data sets equipped with the equality
predicate was considered. We now consider oligomorphic data sets (which
generalise the latter case), the data set $(\mathbb{N},\\{<\\})$ and new
computability notions. Besides being more general,
this generalisation also allows us to extract the essential arguments needed to
prove this kind of result. Moreover, compared to [EFR20], we add here the
possibility for the register transducer to make non-deterministic register
assignments (data guessing), which strictly increases their expressive power.
## 1 Data alphabet, languages and transducers
### 1.1 Data as logical structures
Let $\Sigma$ be a finite signature with relation and constant symbols. Let
$\mathbb{D}=(D,\Sigma^{\mathbb{D}})$ be a logical structure over $\Sigma$ with
a countably infinite domain $D$ and an interpretation of each symbol of
$\Sigma$. Note that we often identify $\mathbb{D}$ and $D$ when the structure
considered is clear from context.
An _automorphism_ of a structure $\mathbb{D}$ is a bijection $\mu:D\rightarrow
D$ which preserves the constants and the predicates of $\mathbb{D}$: for any
constant $c$ in $D$, $\mu(c)=c$ and for any relation of $\Sigma$, $R\subseteq
D^{r}$, we have $\forall\ \bar{x},\ R(\bar{x})\Leftrightarrow
R(\mu(\bar{x}))$, where $\mu$ is naturally extended to $D^{r}$ by applying it
pointwise. We denote by $\mathrm{Aut}(\mathbb{D})$ the set of automorphisms of
$\mathbb{D}$. For $\bar{x}\in\mathbb{D}^{d}$, the set
$\left\\{\mu(\bar{x})\mid\ \mu\in\mathrm{Aut}(\mathbb{D})\right\\}$ is called
the _orbit_ of $\bar{x}$ under the action of $\mathrm{Aut}(\mathbb{D})$.
We will be interested in structures that have a lot of symmetry. For instance
the structures $(\mathbb{N},\left\\{0,=\right\\})$,
$(\mathbb{Z},\left\\{<\right\\})$ and $(\mathbb{Q},\left\\{<\right\\})$ fall
under our study as well as more sophisticated structures like
$(1(0+1)^{*},{\otimes})$ where $\otimes$ is the bitwise xor operation. Other
structures like $(\mathbb{Z},\left\\{+\right\\})$ will not have enough
internal symmetry to be captured by our results.
A logical structure $\mathbb{D}$ is _oligomorphic_ if for any natural number
$n$ the set $\mathbb{D}^{n}$ has finitely many orbits under the action of
$\mathrm{Aut}(\mathbb{D})$.
Oligomorphic structures can be thought of as “almost finite”. Consider
$(\mathbb{N},\left\\{=\right\\})$: the set $\mathbb{N}^{2}$ has only two orbits,
the diagonal $\left\\{(x,x)\mid\ x\in\mathbb{N}\right\\}$ and its complement
$\left\\{(x,y)\in\mathbb{N}^{2}\mid\ x\neq y\right\\}$. In fact
$(\mathbb{N},\left\\{=\right\\})$ is oligomorphic, since the orbit of an
element of $\mathbb{N}^{n}$ is entirely determined by which coordinates are
equal to each other. Similarly, one can see that the dense linear order
$(\mathbb{Q},\left\\{<\right\\})$ is oligomorphic.
The automorphism group of $(\mathbb{Z},\left\\{<\right\\})$ consists of all
translations $n\mapsto n+c$ for some fixed $c\in\mathbb{Z}$. This means that
$\mathbb{Z}$ only has one orbit. However, $\mathbb{Z}^{2}$ has an infinite
number of orbits since the difference between two numbers is preserved by
translation. Hence $(\mathbb{Z},\left\\{<\right\\})$ is not oligomorphic.
However, the fact that $(\mathbb{Z},\left\\{<\right\\})$ is a substructure of
$(\mathbb{Q},\left\\{<\right\\})$ will allow us to extend our results to this
structure, with some additional work. For more details on oligomorphic
structures see [Boj19, Chap. 3].
Let $G$ be a group acting on both $X$ and $Y$; then a function $f:X\rightarrow
called _equivariant_ if for all $x\in X,\mu\in G$ we have
$f(\mu(x))=\mu(f(x))$.
### 1.2 Words and data words
For a (possibly infinite) set $A$, we denote by $A^{*}$ (resp. $A^{\omega}$)
the set of finite (resp. infinite) words over this alphabet, and we let
$A^{\infty}=A^{*}\cup A^{\omega}$. For a word $u=u_{1}\dots u_{n}$, we denote
by $\lvert u\rvert=n$ its length, and, by convention, for $x\in A^{\omega}$,
$\lvert x\rvert=\infty$. The empty word is denoted $\varepsilon$. For $1\leq
i\leq j\leq\lvert w\rvert$, we let $w[i{:}j]=w_{i}w_{i+1}\dots w_{j}$ and
$w[i]=w[i{:}i]$ the $i$th letter of $w$. For $u,v\in A^{\infty}$, we say that
$u$ is a prefix of $v$, written $u\sqsubseteq v$, if there exists $w\in
A^{\infty}$ such that $v=uw$. In this case, we define $u^{-1}v=w$. For $u,v\in
A^{\infty}$, we say that $u$ and $v$ _match_ if either $u\sqsubseteq v$ or
$v\sqsubseteq u$, which we denote by
$u\mathrel{{\hstretch{.8}{\parallel}}}v$, and we say that they _mismatch_ ,
written $u\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}v$,
otherwise. Finally, for $u,v\in A^{\infty}$, we denote by $u\wedge v$ their
longest common prefix, i.e. the longest word $w\in A^{\infty}$ such that
$w\sqsubseteq u$ and $w\sqsubseteq v$.
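The prefix, match and longest-common-prefix operations can be sketched on finite words as follows (Python; helper names are ours, and infinite words are of course not directly representable):

```python
def is_prefix(u, v):
    """u is a prefix of v."""
    return v[:len(u)] == u

def match(u, v):
    """u and v match iff one is a prefix of the other."""
    return is_prefix(u, v) or is_prefix(v, u)

def lcp(u, v):
    """The longest common prefix of u and v."""
    i = 0
    while i < min(len(u), len(v)) and u[i] == v[i]:
        i += 1
    return u[:i]

assert match("abc", "ab") and not match("abc", "abd")
assert lcp("abde", "abce") == "ab"
```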
Let $\mathbb{D}$ be a logical structure. A word over $\mathbb{D}$ is called a
_$\mathbb{D}$ -data word_ (or just _data word_). Note that
$\mathrm{Aut}(\mathbb{D})$ naturally acts on $\mathbb{D}^{\infty}$.
### 1.3 Functions and relations
A (binary) relation between sets $X$ and $Y$ is a subset $R\subseteq X\times
Y$. We denote its domain $\mathrm{dom}(R)=\\{x\in X\mid\exists y\in Y,(x,y)\in
R\\}$. It is _functional_ if for all $x\in\mathrm{dom}(R)$, there exists
exactly one $y\in Y$ such that $(x,y)\in R$. Then, we can also represent it as
the function $f_{R}:\mathrm{dom}(R)\rightarrow Y$ mapping each
$x\in\mathrm{dom}(R)$ to the unique $y\in Y$ such that $(x,y)\in R$.
$f_{R}$ can also be seen as a _partial_ function $f_{R}:X\rightarrow
Y$. In this paper, unless otherwise stated, functions of data words are
assumed to be partial, and we denote by $\mathrm{dom}(f)$ the domain of any
(partial) function $f$.
### 1.4 Register transducers
Let $\mathbb{D}$ be a logical structure, and let $R$ be a finite set of
variables. We define _$R$ -tests_ by the following grammar:
$\phi::=P(\bar{t})|\phi\wedge\phi|\phi\vee\phi|\neg\phi$
where $P$ is a symbol of arity $k$ in the signature of $\mathbb{D}$ and
$\bar{t}$ is a $k$-tuple of terms, a term being either a constant of
$\mathbb{D}$ or a variable of $R$. We denote by $\mathsf{Test}(R)$ the set of
$R$-tests. In other words, $R$-tests are exactly the quantifier-free
formulas over the signature of $\mathbb{D}$ using variables in $R$.
###### Remark 1.
We choose tests to be quantifier-free formulas. However we could have chosen
existential first-order formulas without affecting our results. Note that we
choose this formalism just for simplicity’s sake, and that it does not make
any difference for structures which admit quantifier elimination such as
$(\mathbb{N},\left\\{=\right\\})$ or $(\mathbb{Q},\left\\{<\right\\})$.
A _non-deterministic register transducer_ (NRT for short) over $\mathbb{D}$ is
a tuple $\left(Q,R,\Delta,q_{0},\overline{\mathsf{c}_{0}},F\right)$, where $Q$
is a finite set of states, $R$ is a finite set of registers, $q_{0}\in Q$ is
the initial state, $\overline{\mathsf{c}_{0}}\in\Sigma^{R}$ is a vector of
constant symbols, $F\subseteq Q$ is the set of accepting states, and $\Delta$
is a finite subset of
$\underbrace{Q}_{\text{current
state}}\times\underbrace{\mathsf{Test}(R\uplus\left\\{\mathsf{input}\right\\})}_{\text{current
registers + input data}}\times\underbrace{Q}_{\text{target
state}}\times\underbrace{\left\\{\mathsf{keep},\mathsf{set},\mathsf{guess}\right\\}^{R}}_{\text{register
operations}}\times\underbrace{R^{*}}_{\text{output word}}$
A _non-guessing_ transducer (NGRT) has a transition relation which is included
in $Q\times\mathsf{Test}(R\uplus\left\\{\mathsf{input}\right\\})\times
Q\times\left\\{\mathsf{keep},\mathsf{set}\right\\}^{R}\times R^{*}$. Finally,
a _deterministic_ transducer (DRT) satisfies an even stronger condition: its
transition relation is a function of type
$\delta:Q\times\mathsf{Test}(R\uplus\left\\{\mathsf{input}\right\\})\rightarrow
Q\times\left\\{\mathsf{keep},\mathsf{set}\right\\}^{R}\times R^{*}$ where,
additionally, tests are mutually exclusive, i.e. for all
$\phi,\psi\in\mathrm{dom}(\delta)$, $\phi\wedge\psi$ is unsatisfiable.
###### Remark 2.
Note that in the definition of a transducer we require that $\mathbb{D}$
contains at least one constant symbol. This is needed for annoying technical
reasons, namely in order to initialize registers to some value.
However it is not too damaging since, given a $\Sigma$-structure $\mathbb{D}$
of domain $D$, one can always consider the
$\Sigma\uplus\left\\{\mathsf{c}_{0}\right\\}$-structure $\mathbb{D}_{\bot}$
with domain $D\uplus\left\\{\bot\right\\}$, which is just the structure
$\mathbb{D}$ with the extra constant symbol being interpreted as the new
element $\bot$, the other relations and constants are unchanged, except
naturally for the equality relation which is extended to include
$(\bot,\bot)$.
For simplicity’s sake we will sometimes talk about structures without
mentioning any constant, implicitly stating that we extend the structure to
include $\bot$. Also note that this operation of adding a fresh constant does
not affect oligomorphicity.
Let $T$ be an NRT given as above. A _configuration_ $C$ of $T$ is given by a
pair $(q,\bar{d})$ where $q\in Q$ is a state and $\bar{d}\in\mathbb{D}^{R}$ is
a tuple of data values, hence the group $\mathrm{Aut}(\mathbb{D})$ naturally
acts on the configurations of $T$ by not touching the states and acting on the
content of the registers pointwise. The _initial configuration_ is the pair
$C_{0}=(q_{0},\bar{d}_{0})$ with
$\bar{d}_{0}=\overline{\mathsf{c}}_{0}^{\mathbb{D}}$ being the interpretation
of the constants in $\mathbb{D}$. A configuration is called _final_ if the
state component is in $F$. Let
$C_{1}=(q_{1},\bar{d}_{1}),C_{2}=(q_{2},\bar{d}_{2})$ be two configurations,
let $d\in\mathbb{D}$ and let
$t=(q_{1},\phi,q_{2},\mathsf{update},v)\in\Delta$. We say that $C_{2}$ is a
_successor configuration_ of $C_{1}$ by reading $d$ through $t$ and producing
$w\in\mathbb{D}^{|v|}$ if the following hold:
* •
By letting $\bar{d}_{1}[\mathsf{input}\leftarrow
d]:\left\\{\begin{array}[]{l}r\in R\mapsto\bar{d}_{1}(r)\\\
\mathsf{input}\mapsto d\end{array}\right.$, we have
$\bar{d}_{1}[\mathsf{input}\leftarrow d]\models\phi$
* •
for all $r\in R$, if $\mathsf{update}(r)=\mathsf{keep}$, then
$\bar{d}_{2}(r)=\bar{d}_{1}(r)$
* •
for all $r\in R$, if $\mathsf{update}(r)=\mathsf{set}$, then
$\bar{d}_{2}(r)=d$
* •
for all $r\in R$, if $\mathsf{update}(r)=\mathsf{guess}$, then
$\bar{d}_{2}(r)$ is unconstrained: its value is guessed non-deterministically
* •
$w(i)=\bar{d}_{2}(v(i))$ for all $i\in\left\\{1,\ldots,|v|\right\\}$
Moreover, we write $C_{1}\xrightarrow{d,\phi,\mathsf{update}|w}_{T}C_{2}$ to
indicate that fact. Often we don’t mention $T$ (when clear from context), nor
$\phi$ and $\mathsf{update}$, and we simply write
$C_{1}\xrightarrow{d|w}C_{2}$. Given a sequence of successor configurations,
called a _run_ ,
$\rho=C_{1}\xrightarrow{d_{1}|w_{1}}C_{2}\xrightarrow{d_{2}|w_{2}}C_{3}\ldots
C_{n}\xrightarrow{d_{n}|w_{n}}C_{n+1}$, we write
$C_{1}\xrightarrow{d_{1}d_{2}\cdots d_{n}|w_{1}w_{2}\cdots w_{n}}C_{n+1}$. We
sometimes also omit the output, writing $C\xrightarrow{u}C^{\prime}$ to state
that there is a sequence of transitions reading $u$ going from $C$ to
$C^{\prime}$.
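A minimal sketch of the successor-configuration relation, restricted to non-guessing transitions (Python; representing tests as Python predicates and all helper names are our own simplifications, not the paper's formalism):

```python
def step(config, d, transition):
    """One transition step of a non-guessing register transducer.
    config = (state, regs), with regs a dict from register names to values;
    transition = (q1, phi, q2, update, out), where phi is a predicate over
    (regs, input), update maps each register to 'keep' or 'set', and out
    is a list of register names (the output word). Returns the successor
    configuration and the produced output, or None if not enabled."""
    q1, phi, q2, update, out = transition
    q, regs = config
    if q != q1 or not phi(regs, d):
        return None
    new_regs = {r: (d if update[r] == 'set' else v) for r, v in regs.items()}
    return (q2, new_regs), [new_regs[r] for r in out]

# A transition over (N, {=}) that stores the input when it differs from
# register r, and outputs the stored value.
t = ('q', lambda regs, d: d != regs['r'], 'q', {'r': 'set'}, ['r'])
assert step(('q', {'r': 0}), 5, t) == (('q', {'r': 5}), [5])
assert step(('q', {'r': 0}), 0, t) is None  # the test d != r fails
```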
Let $\rho=C_{1}\xrightarrow{d_{1}|v_{1}}C_{2}\ldots
C_{n}\xrightarrow{d_{n}|v_{n}}C_{n+1}\ldots$ denote a possibly infinite run.
If $C_{1}=C_{0}$, then $\rho$ is called _initial_. If an infinite number of
configurations of $\rho$ are final, we say that $\rho$ is _final_. A run which
is both initial and final is _accepting_. We say that the run $\rho$ is over
the input word $x=d_{1}\ldots d_{n}\ldots$ and produces $w=v_{1}\ldots
$v_{n}\ldots$ in the output. Then the semantics of $T$ is defined as
$\llbracket T\rrbracket=\left\\{(x,w)\mid\ \text{some accepting run of $T$ is
over $x$ and produces
$w$}\right\\}\subseteq\mathbb{D}^{\omega}\times\mathbb{D}^{\infty}$. An
NRT is called _functional_ if $\llbracket T\rrbracket$ is a (partial)
function. Note that in the following we will mainly consider transducers that
only produce $\omega$-words. Restricting the accepting runs of a transducer to
runs producing infinite outputs is a Büchi condition and can easily be done by
adding one bit of information to states.
## 2 Continuity and computability
### 2.1 Continuity notions
We equip the set $A^{\infty}$ with the usual distance: for $u,v\in
A^{\infty}$, $\lVert u,v\rVert=0$ if $u=v$ and $\lVert u,v\rVert=2^{-\lvert
u\wedge v\rvert}$ otherwise. A sequence of (finite or infinite) words
$(w_{n})_{n\in\mathbb{N}}$ converges to some word $w$ if for all $\epsilon>0$,
there exists $N\geq 0$ such that for all $n\geq N$, $\lVert
w_{n},w\rVert\leq\epsilon$. Given a language $L\subseteq A^{\infty}$, we
denote by $\bar{L}$ its topological closure, i.e. the set of words which can
be approached arbitrarily close by words of $L$.
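The distance above can be sketched on finite words as follows (Python; the helper name `cantor_dist` is ours):

```python
def cantor_dist(u, v):
    """Cantor distance on finite words: 0 if u = v, else 2^{-|u ∧ v|},
    where |u ∧ v| is the length of the longest common prefix."""
    if u == v:
        return 0.0
    i = 0
    while i < min(len(u), len(v)) and u[i] == v[i]:
        i += 1
    return 2.0 ** -i

assert cantor_dist("abc", "abc") == 0.0
assert cantor_dist("abc", "abd") == 0.25   # |u ∧ v| = 2
assert cantor_dist("a", "b") == 1.0
```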
###### Remark 3.
Whether the alphabet $A$ is finite or infinite substantially modifies the
properties of the metric space $A^{\infty}$. Indeed when $A$ is finite this
space is compact, but it is not when $A$ is infinite.
#### 2.1.1 Continuity
[Continuity] A function $f:A^{\omega}\rightarrow B^{\omega}$ is _continuous_
at $x\in\mathrm{dom}(f)$ if (equivalently):
1. 1.
for all sequences of words $(x_{n})_{n\in\mathbb{N}}$ converging towards $x$,
where for all $i\in\mathbb{N}$, $x_{i}\in\mathrm{dom}(f)$, we have that
$(f(x_{n}))_{n\in\mathbb{N}}$ converges towards $f(x)$.
2. 2.
$\forall i\geq 0,\ \exists j,\ \forall y\in\mathrm{dom}(f),\ \lvert x\wedge
y\rvert\geq j\Rightarrow\lvert f(x)\wedge f(y)\rvert\geq i$
The function $f$ is called _continuous_ if it is continuous at each
$x\in\mathrm{dom}(f)$.
#### 2.1.2 Cauchy continuity
A Cauchy continuous function maps any Cauchy sequence to a Cauchy sequence.
One interesting property of Cauchy continuous functions is that they always
admit a (unique) continuous extension to the completion of their domain. Since
we deal with $A^{\infty}$ which is complete, the completion of the domain of a
function $f$, denoted $\overline{\mathrm{dom}(f)}$, is simply its closure.
[Cauchy continuity] A function $f:A^{\omega}\rightarrow A^{\omega}$ is _Cauchy
continuous_ if the image of a Cauchy sequence in $\mathrm{dom}(f)$ is a Cauchy
sequence.
###### Remark 4.
Any Cauchy continuous function $f$ can be continuously extended over
$\overline{\mathrm{dom}(f)}$ in a unique way, which we denote by $\bar{f}$.
#### 2.1.3 Uniform continuity
[Uniform continuity] A function $f:A^{\omega}\rightarrow A^{\omega}$ is
_uniformly continuous_ if there exists a mapping
$m:\mathbb{N}\rightarrow\mathbb{N}$ such that:
$\forall i\geq 0,\forall x,y\in\mathrm{dom}(f),\lvert x\wedge y\lvert\geq
m(i)\Rightarrow\lvert f(x)\wedge f(y)\lvert\geq i$
Such a function $m$ is called a _modulus of continuity_ for $f$. (The usual
notion of modulus of continuity is defined with respect to the distance, but
here we choose to define it with respect to longest common prefixes, for
convenience. Given a modulus of continuity $m$ in our setting, one can define
$\omega:x\mapsto
2^{-m\left(\left\lceil\log_{2}\left(\frac{1}{x}\right)\right\rceil\right)}$
and recover the usual notion.) We also say that $f$ is _$m$ -continuous_.
Finally, a functional NRT $T$ is _uniformly continuous_ when
$\llbracket T\rrbracket$ is uniformly continuous.
###### Remark 5.
In the case of a finite alphabet, and in general for compact spaces, Cauchy
continuity is equivalent to uniform continuity, but for infinite alphabets
this does not hold anymore. Consider the following function $f$ computable by
a DRT over the data alphabet $(\mathbb{N},\left\\{0,<\right\\})$ and defined
by $u0x\mapsto x$, where the prefix $u0\in\mathbb{N}^{*}$ is a strictly
decreasing sequence. Then this function is not uniformly continuous, since two words may
be arbitrarily close yet have very different images. However one can check
that the image of a Cauchy sequence is indeed Cauchy: let
$(x_{n})_{n\in\mathbb{N}}$ be a Cauchy sequence in the domain of $f$. Let us
assume without loss of generality that all the $x_{n}$’s begin with the same
letter $i\in\mathbb{N}$. Then, after reading at most $i+1$ symbols of one of
the $x_{n}$’s, the DRT outputs something. Let $j\in\mathbb{N}$ and let $N$ be
such that for all $m,n\geq N$ we have $|x_{m}\wedge x_{n}|\geq i+j+1$. Thus we
have $|f(x_{m})\wedge f(x_{n})|\geq j$, which means that
$(f(x_{n}))_{n\in\mathbb{N}}$ is Cauchy.
We’ve seen in the previous remark that Cauchy continuity and uniform
continuity don’t coincide over infinite alphabets. However when dealing with
oligomorphic structures we recover some form of compactness, that is
compactness of $\mathbb{D}^{\infty}/{\mathrm{Aut}(\mathbb{D})}$, which ensures
that the two notions do coincide in this case.
###### Proposition 6.
Let $\mathbb{D}$ be an oligomorphic structure and let
$f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ be an equivariant
function. Then $f$ is uniformly continuous if and only if it is Cauchy
continuous.
###### Proof 2.1.
It is clear that a uniformly continuous function is in particular Cauchy
continuous. Let $\mathbb{D}$ be an oligomorphic structure and let
$f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ be an equivariant
function. Let us assume that $f$ is not uniformly continuous. This means that
there exists $i\in\mathbb{N}$ and a sequence $(x_{n},y_{n})_{n\in\mathbb{N}}$
such that for all $n$, $|x_{n}\wedge y_{n}|\geq n$ and $|f(x_{n})\wedge
f(y_{n})|\leq i$. Let us consider the sequence
$([x_{n}],[y_{n}])_{n\in\mathbb{N}}$ of pairs of elements in
$\mathbb{D}^{\omega}/{\mathrm{Aut}(\mathbb{D})}$, i.e. words are seen up to
automorphism. Using standard arguments, one can show that the set
$\mathbb{D}^{\omega}/{\mathrm{Aut}(\mathbb{D})}$, equipped with the distance
$d([x],[y])=\min_{u\in[x],v\in[y]}d(u,v)$, is compact (see [Exi21, Proposition
12.28] for details). As a consequence, we can extract a convergent subsequence
(which we also call $([x_{n}],[y_{n}])_{n\in\mathbb{N}}$ for convenience).
This means that there are automorphisms
$(\mu_{n})_{n\in\mathbb{N}}$ such that the sequence
$(\mu_{n}(x_{n}))_{n\in\mathbb{N}}$ (and thus
$(\mu_{n}(y_{n}))_{n\in\mathbb{N}}$) converges. Hence by interleaving
$(\mu_{n}(x_{n}))_{n\in\mathbb{N}}$ and $(\mu_{n}(y_{n}))_{n\in\mathbb{N}}$,
we obtain a converging sequence whose image is divergent, which means that $f$
is not Cauchy continuous.
###### Remark 7.
In order to refine uniform continuity one can study $m$-continuity for
particular kinds of functions $m$. For instance for $m:i\mapsto i+b$,
$m$-continuous functions are exactly $2^{b}$-Lipschitz continuous functions.
Similarly, for $m:i\mapsto ai+b$, $m$-continuous functions are exactly the
$\frac{1}{a}$-Hölder continuous functions.
Note that while these notions are interesting in and of themselves, they are
very sensitive to the metric that is being used. For instance the metric
$d(x,y)=\frac{1}{|x\wedge y|}$, while defining the same topology over words,
yields different notions of Lipschitz and Hölder continuity.
### 2.2 Computability notions
Let $\mathbb{D}$ be a data set. In order to reason with computability, we
assume in the sequel that the countable set of data values $\mathbb{D}$ we are
dealing with has an effective representation, meaning that each element can be
represented in a finite way. For instance, this is the case when
$\mathbb{D}=\mathbb{N}$. Moreover, we assume that checking if a tuple of
values belongs to some relation of $\mathbb{D}$ is decidable. We say that the
structure $\mathbb{D}$ is _representable_. Formally, a structure is
representable if there exists a finite alphabet $A$ and an injective function
$\mathsf{enc}:\mathbb{D}\rightarrow A^{*}$ such that the sets
$\\{\mathsf{enc}(d)\mid d\in\mathbb{D}\\}$, $\\{\mathsf{enc}(c)\mid c\text{ is
a constant of }\mathbb{D}\\}$ and
$\left\\{\mathsf{enc}(d_{1})\sharp\cdots\sharp\mathsf{enc}(d_{k})\mid\
(d_{1},\ldots,d_{k})\in R\right\\}$ are decidable for all predicates $R$ of
$\mathbb{D}$ and $\sharp\not\in A$. Any infinite word
$d_{1}d_{2}\dots\in\mathbb{D}^{\omega}$ can be encoded as the $\omega$-word
$\mathsf{enc}(d_{1})\sharp\mathsf{enc}(d_{2})\sharp\dots\in(A^{*}\sharp)^{\omega}$.
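As a concrete instance of representability, here is a sketch for $(\mathbb{N},\{=\})$ with $A=\{0,1\}$ and $\sharp=$ `#`, using the binary encoding (an illustrative choice; the helper names are ours):

```python
def enc(d):
    """Injective encoding of a natural number over A = {0, 1}."""
    return format(d, 'b')

def enc_word(ds):
    """Encode a finite data word, each value followed by '#'."""
    return ''.join(enc(d) + '#' for d in ds)

def eq_holds(code):
    """Decides the encoded equality predicate {enc(d1)#enc(d2) | d1 = d2}."""
    parts = code.split('#')
    return len(parts) == 2 and parts[0] == parts[1]

assert enc_word([5, 3]) == '101#11#'
assert eq_holds('101#101') and not eq_holds('101#11')
```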
We now define how a Turing machine can compute a function from
$\mathbb{D}^{\omega}$ to $\mathbb{D}^{\omega}$. We consider deterministic
Turing machines whose cells can contain a letter from $A\cup\\{\sharp\\}$ or a
letter from a finite working alphabet. They have three tapes: a read-only one-
way input tape on alphabet $A\cup\\{\sharp\\}$ (containing an encoding of an
infinite input data word), a two-way working tape, and a write-only one-way
output tape on alphabet $A\cup\\{\sharp\\}$ (on which they write the encoding
of the infinite output data word). Since we always work modulo encoding, for
the sake of simplicity, from now on and in the rest of the paper, we assume
that each cell of the Turing machine, on the input and output tapes, contains a
data value $d\in\mathbb{D}$, while cells of the working tape are assumed to
contain either a data value $d\in\mathbb{D}$ or a letter from the working
alphabet. So, instead of saying that the input contains the encoding of a data
word $x$, we just say that it contains the input data word $x$. We discuss in
Remark 8 how the computability notions we introduce hereafter are sensitive to
encodings.
Consider such a Turing machine $M$ and some input data word
$x\in\mathbb{D}^{\omega}$. For any integer $k\in\mathbb{N}$, we let $M(x,k)$
denote the finite output data word written by $M$ on its output tape after
reaching cell number $k$ of the input tape (assuming it does). Observe that as
the output tape is write-only, the sequence of data words $(M(x,k))_{k\geq 0}$
is non-decreasing, and thus we denote by $M(x)$ the limit content of the
output tape.
[Computability] Let $\mathbb{D}$ be a representable data domain. A data word
function $f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ is _computable_
if there exists a deterministic multi-tape machine $M$ such that for all
$x\in\mathrm{dom}(f)$, $M(x)=f(x)$. We say that $M$ _computes_ $f$.
[Cauchy computability] Let $\mathbb{D}$ be a representable data domain. A data
word function $f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ is _Cauchy
computable_ if there exists a deterministic multi-tape machine $M$ computing
$f$ such that for all $x$ in the topological closure
$\overline{\mathrm{dom}(f)}$ of $\mathrm{dom}(f)$, the sequence
$(M(x,k))_{k\geq 0}$ converges to an infinite word. In other words a Cauchy
computable function is a function which admits a continuous extension to the
closure of its domain and which is computable. We say that $M$ _Cauchy
computes_ $f$.
[Uniform computability] Let $\mathbb{D}$ be a representable data domain. A
data word function $f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ is
_uniformly computable_ if there exists a deterministic multi-tape machine $M$
and a computable mapping $m:\mathbb{N}\rightarrow\mathbb{N}$ such that $M$
computes $f$ and for all $i\geq 0$ and $x\in\mathrm{dom}(f)$, $|M(x,m(i))|\geq
i$. Such a function $m$ is called a _modulus of computability_ for $f$. In
that case $f$ is called _$m$ -computable_. We say that $M$ _uniformly
computes_ $f$, and also that $M$ _$m$ -computes_ $f$.
The function $g$ defined in the Introduction (paragraph “From finite to
infinite alphabets”), for the data domain of integers, is computable. Recall
that it is defined on all input words of the form $x=su_{1}d_{1}su_{2}d_{2}s\dots$ such
that $s$ occurs infinitely often, and for all $i$, $s$ does not occur in
$u_{i}d_{i}$, by $g(x)=d_{1}^{|u_{1}|+1}sd_{2}^{|u_{2}|+1}s\dots$. A Turing
machine just needs to read the input up to $d_{1}$, then output $d_{1}$
exactly $|u_{1}|+1$ times, and so on for the other pieces of the input.
In Remark 5, the function $f$ is not uniformly continuous but Cauchy
continuous. It is actually not uniformly computable but Cauchy computable. As
a matter of fact, all the computability notions we define here entail the
respective continuity notions defined before. We make this formal in Section
2.3.
###### Remark 8 (Robustness to encoding).
When actually representing words over an infinite alphabet, it is not
realistic to assume that one letter takes a constant amount of space and can
be read in a constant amount of time. Then, which of the many notions
introduced above are sensitive to encoding and which ones are more robust?
Let $\mathbb{D}$ be a representable structure, and let
$\mathsf{enc}:\mathbb{D}\rightarrow A^{*}$ be its encoding function. Let
$f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ be a function and let
$f_{\mathsf{enc}}:(A\uplus\\{\sharp\\})^{\omega}\rightarrow(A\uplus\\{\sharp\\})^{\omega}$
be defined as ${\mathsf{enc}_{\sharp}}\circ
f\circ{\mathsf{enc}_{\sharp}}^{-1}$, where $\mathsf{enc}_{\sharp}:d_{1}\cdots
d_{n}\mapsto\mathsf{enc}(d_{1})\sharp\cdots\sharp\mathsf{enc}(d_{n})$.
Continuity and computability are robust enough so that $f$ is continuous
(resp. computable) if and only if $f_{\mathsf{enc}}$ is. Cauchy continuity and
Cauchy computability also fall under this category. In contrast, uniform continuity
and uniform computability are very sensitive to encoding. As an example, the
function which maps a word to the second letter in the word is _never_
uniformly continuous, since the encoding of the first letter may be
arbitrarily long. Nevertheless, uniform computability is still a relevant
notion, as it provides guarantees on the maximal number of input data values
which need to be read to produce a given number of output data values, even
though the encoding of those values can be arbitrarily large.
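For concreteness, here is a sketch of $\mathsf{enc}_{\sharp}$ for integer data with decimal encoding (our own illustration; the choice of `enc` and of `#` as separator are assumptions). It makes visible why the second-letter function is not uniformly continuous under encoding: reaching $\mathsf{enc}(d_{2})$ requires skipping the arbitrarily long $\mathsf{enc}(d_{1})$.

```python
def enc_sharp(ds, enc=str):
    """enc_#: map d1 d2 ... to enc(d1) # enc(d2) # ... (streaming)."""
    for d in ds:
        yield from enc(d)   # the encoding of one value: arbitrarily long
        yield '#'           # separator marking the end of a value

# e.g. ''.join(enc_sharp([3, 14, 159])) == '3#14#159#'
```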
### 2.3 Computability versus continuity
In this section, we show that all the computability notions (computability,
Cauchy computability, …) imply their respective continuity notions. We then
give general conditions under which the converse also holds.
#### 2.3.1 From computability to continuity
###### Theorem 9.
Let $f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ and let
$m:\mathbb{N}\rightarrow\mathbb{N}$. The following implications hold:
* •
$f$ is computable $\Rightarrow$ $f$ is continuous
* •
$f$ is Cauchy computable $\Rightarrow$ $f$ is Cauchy continuous
* •
$f$ is uniformly computable $\Rightarrow$ $f$ is uniformly continuous
* •
$f$ is $m$-computable $\Rightarrow$ $f$ is $m$-continuous
###### Proof 2.2.
Assume that $f$ is computable by a deterministic multi-tape Turing machine
$M$. Let $x$ be in the topological closure of $\mathrm{dom}(f)$ and
$(x_{n})_{n}$ be a sequence in $\mathrm{dom}(f)$ converging to $x$. We show
that $(f(x_{n}))_{n}$ converges to $M(x)$ if $M(x)$ is infinite, which is the
case for all $x\in\mathrm{dom}(f)$ since $f$ is computable. For all $k\geq 0$,
let $p_{k}$ be the prefix of $x$ of length $k$. Since
$\lim_{n\rightarrow\infty}x_{n}=x$, for all $k$, there exists $n_{k}$ such
that for all $m\geq n_{k}$, $p_{k}\leq x_{m}$. As $M$ is a deterministic
machine, it implies that $M(x,k)\leq f(x_{m})$. So, for all $k$, $M(x,k)\leq
f(x_{m})$ for all but finitely many $m$. It follows that $(f(x_{m}))_{m}$
converges to $M(x)=\lim_{k\rightarrow\infty}M(x,k)$ if $M(x)$ is an infinite
word, which it is if $x\in\mathrm{dom}(f)$, entailing continuity of $f$. If
additionally $f$ is Cauchy computable, then it is also the case for all
$x\in\overline{\mathrm{dom}(f)}$, entailing Cauchy continuity of $f$.
It remains to show the fourth statement, which entails the third. So, let us
assume that $f$ is $m$-computable by some machine $M$. We show it is
$m$-continuous. Let $i\geq 0$, $x,y\in\mathrm{dom}(f)$ such that $|x\wedge
y|\geq m(i)$. We must show that $|f(x)\wedge f(y)|\geq i$. We have
$M(x,m(i))=M(y,m(i))$ because $M$ is deterministic and $|x\wedge y|\geq m(i)$.
By definition of $m$-computability, we also have $|M(x,m(i))|\geq i$. Since
$M(x,m(i))\leq f(x)$ and $M(y,m(i))\leq f(y)$, we get $|f(x)\wedge f(y)|\geq
i$, concluding the proof.
#### 2.3.2 From continuity to computability
While, as we have seen in the last section, computability of a function $f$
implies its continuity, and respectively for all the notions of computability
and continuity we consider in this paper, the converse may not hold in
general. We give sufficient conditions under which the converse holds for $f$,
namely when it has a computable next-letter problem. This problem asks, given
as input two finite words $u,v\in\mathbb{D}^{*}$, to output, if it exists, a
data value $d\in\mathbb{D}$ such that for all $y\in\mathbb{D}^{\omega}$ such
that $uy\in\mathrm{dom}(f)$, we have $vd\leq f(uy)$. Because of the universal
quantification on $y$, note that $d$ is unique if it exists. Formally:
Next-Letter Problem for $f$:
Input: $u,v\in\mathbb{D}^{*}$
Output: $\left\\{\begin{array}[]{ll}d\in\mathbb{D}&\text{if for all }y\in\mathbb{D}^{\omega}\text{ s.t. }uy\in\mathrm{dom}(f),vd\leq f(uy)\\\ \texttt{none}&\text{otherwise}\end{array}\right.$
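As a toy illustration (ours, not from the paper), consider the letter-doubling function $f(d_{1}d_{2}\dots)=d_{1}d_{1}d_{2}d_{2}\dots$: after reading an input prefix $u$, exactly the first $2|u|$ output letters are determined, so its next-letter problem is computable by:

```python
def next_letter_double(u, v):
    """Next-letter procedure for f(d1 d2 ...) = d1 d1 d2 d2 ... :
    given input prefix u and committed output prefix v, the next output
    letter exists iff fewer than 2|u| letters have been produced."""
    if len(v) < 2 * len(u):
        return u[len(v) // 2]   # each input letter accounts for 2 outputs
    return None                 # 'none': depends on unread input
```

For instance `next_letter_double("ab", "aab")` returns `"b"`, while `next_letter_double("ab", "aabb")` returns `None`: the fifth output letter is not yet determined by the two input letters read so far.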
###### Theorem 10.
Let $f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ be a function with a
computable next-letter problem and let $m:\mathbb{N}\rightarrow\mathbb{N}$.
The following implications hold:
* •
$f$ is continuous $\Rightarrow$ $f$ is computable
* •
$f$ is Cauchy continuous $\Rightarrow$ $f$ is Cauchy computable
* •
$f$ is uniformly continuous $\Rightarrow$ $f$ is uniformly computable
* •
$f$ is $m$-continuous $\Rightarrow$ $f$ is $m$-computable
###### Proof 2.3.
Let us assume the existence of a procedure $\textsf{Next}_{f}(u,v)$ which
computes the next-letter problem. To show the four statements, we show the
existence of a deterministic Turing machine $M_{f}$ common to the four
statements, in the sense that $M_{f}$ computes $f$ if $f$ is continuous,
respectively Cauchy computes $f$ if $f$ is Cauchy continuous, etc. The Turing
machine $M_{f}$ is best presented in pseudo-code as follows.
Data: $x\in\mathbb{D}^{\omega}$
$v:=\epsilon$;
for _$i=0$ to $+\infty$_ do
  $d:=\textsf{Next}_{f}(x[{:}i],v)$;
  while _$d\neq\texttt{none}$_ do
    output $d$; // write $d$ on the output tape
    $v:=vd$;
    $d:=\textsf{Next}_{f}(x[{:}i],v)$;
  end while /* end while loop when $d=\texttt{none}$ */
end for
Algorithm 1 Turing machine $M_{f}$ defined in pseudo-code.
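The algorithm translates almost literally into a Python generator (a sketch under the assumption that the input $x$ supports prefix slicing, e.g. a string; `next_letter` stands for the oracle $\textsf{Next}_{f}$). We pair it with the trivial next-letter procedure of the identity function for illustration:

```python
from itertools import count, islice

def M(x, next_letter):
    """Algorithm 1: enumerate ever longer input prefixes and emit every
    output letter the next-letter oracle can already guarantee."""
    v = ""
    for i in count():                 # for i = 0 to +infinity
        d = next_letter(x[:i], v)
        while d is not None:          # end while loop when d = none
            yield d                   # write d on the output tape
            v += d
            d = next_letter(x[:i], v)

def next_letter_id(u, v):
    # next-letter procedure for the identity function f(x) = x
    return u[len(v)] if len(v) < len(u) else None

# e.g. ''.join(islice(M("hello", next_letter_id), 5)) == "hello"
```

Since the generator never terminates, one observes its output through finite prefixes (`islice`), exactly as $M_{f}(x,i)$ denotes the output after a bounded computation.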
Now, we show that if $f$ is continuous, then $M_{f}$ computes $f$, i.e.
$M_{f}(x)=f(x)$ for all $x\in\mathrm{dom}(f)$. First, for all $i\geq 0$, let
$v_{i}=d_{i,1}d_{i,2}\dots$ be the sequence of data values output during the
$i$th iteration of the for loop. Note that the $i$th iteration may not exist,
namely when the while loop of an earlier iteration never terminates; in that
case, we set $v_{i}=\epsilon$. By definition of $M_{f}(x,i)$, and with the
convention that $\alpha.\epsilon=\alpha$ for any $\omega$-word $\alpha$, we
have $M_{f}(x,i)=v_{0}v_{1}\dots v_{i}$ for all $i\geq 0$. Moreover, by definition
of the next-letter problem, we also have $M_{f}(x,i)\leq f(x)$. Now, by
definition, $M_{f}(x)=v_{0}v_{1}v_{2}\dots$. So, if it is infinite, then
$M_{f}(x)=f(x)$. It remains to show that it is indeed true when $f$ is
continuous. Suppose it is not the case. Then there exists $i_{0}$ such that
for all $i\geq i_{0}$ the call $\textsf{Next}_{f}(x[{:}i],v_{0}\dots
v_{i_{0}})$ returns none (assume $i_{0}$ is the smallest index having this
property). Let $d$ be such that $v_{0}\dots v_{i_{0}}d<f(x)$. Then, for all
$i\geq i_{0}$, there exists $\alpha_{i}\in\mathbb{D}^{\omega}$ and
$d^{\prime}\neq d$ such that $x[{:}i]\alpha_{i}\in\mathrm{dom}(f)$ and
$v_{0}\dots v_{i_{0}}d^{\prime}<f(x[{:}i]\alpha_{i})$. Clearly, the sequence
$(f(x[{:}i]\alpha_{i}))_{i}$, if it converges, does not converge to $f(x)$.
Since $(x[{:}i]\alpha_{i})_{i}$ converges to $x$, this contradicts the
continuity of $f$.
Consider now the case where $f$ is Cauchy-continuous. It implies that $f$
admits a unique continuous extension $\overline{f}$ to
$\overline{\mathrm{dom}(f)}$ (see Remark 4). By the first statement we just
proved, $M_{\overline{f}}$ computes $\overline{f}$, and by definition of
Cauchy-computability, we conclude that $M_{f}$ Cauchy-computes $f$.
We finally prove the fourth statement, which implies the third. Suppose that
$f$ is $m$-continuous for some modulus of continuity $m$. We show that $M_{f}$
$m$-computes $f$. We already proved that $M_{f}$ computes $f$ (as $f$ is
continuous). Let $i\geq 0$ and $x\in\mathrm{dom}(f)$. It remains to show that
$|M_{f}(x,m(i))|\geq i$. Let $j=m(i)$. Since the algorithm $M_{f}$ calls
$\textsf{Next}_{f}(x[{:}j],\cdot)$ until it returns none, we get that
$v_{0}v_{1}\dots v_{j}$ is the longest output which can be safely output given
only $x[{:}j]$, i.e. $v_{0}\dots v_{j}=\bigwedge\\{f(x[{:}j]\alpha)\mid
x[{:}j]\alpha\in\mathrm{dom}(f)\\}$. If $v_{j}$ is infinite, then
$|M_{f}(x,j)|=+\infty$ and we are done. Suppose $v_{j}$ is finite. Then, there
exists $\alpha\in\mathbb{D}^{\omega}$ such that
$x[{:}j]\alpha\in\mathrm{dom}(f)$ and $f(x)\wedge
f(x[{:}j]\alpha)=v_{0}v_{1}\dots v_{j}=M_{f}(x,j)$. Since $|x\wedge
x[{:}j]\alpha|\geq j=m(i)$, by $m$-continuity of $f$, we get that
$|M_{f}(x,j)|\geq i$, concluding the proof.
## 3 Oligomorphic data
In this section we consider a structure $\mathbb{D}$ which is oligomorphic. We
will show that in this case one can decide, under reasonable computability
assumptions, all the notions of continuity introduced in the previous section,
as well as compute the next-letter problem. The first step is to prove
characterizations of these properties, and then show that the
characterizations are decidable. Let
$\mathcal{R}:\mathbb{N}\rightarrow\mathbb{N}$ denote the _Ryll-Nardzewski
function_ of $\mathbb{D}$ which maps $k$ to the number of orbits of $k$-tuples
of data values, which is finite thanks to oligomorphism.
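For instance, over the pure-equality structure $(\mathbb{N},=)$, the orbit of a $k$-tuple is determined by which coordinates coincide, so $\mathcal{R}(k)$ is the $k$-th Bell number. A quick cross-check of this claim (our own illustration):

```python
from itertools import product

def orbit_count(k):
    """R(k) over (N, =): count equality types of k-tuples by canonical
    renaming (each value renamed to its first-occurrence index)."""
    seen = set()
    for t in product(range(k), repeat=k):  # k distinct values suffice
        ren = {}
        seen.add(tuple(ren.setdefault(v, len(ren)) for v in t))
    return len(seen)

def bell(k):
    """Same count via the Bell triangle."""
    row = [1]
    for _ in range(k - 1):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[-1]

# e.g. [orbit_count(k) for k in (1, 2, 3, 4)] == [1, 2, 5, 15]
```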
### 3.1 Characterizing functionality and continuity
The main goal of this section is to give, for NRTs, characterizations of
functionality, continuity and uniform continuity that consist of small
witnesses. These small witnesses are obtained by pumping arguments that rely
on the fact that there is only a finite number of configuration orbits.
We start by defining _loop removal_ , a tool which will prove useful
throughout this section. The main idea is that although no actual loop over
the same configuration can be guaranteed over infinite runs, the
oligomorphicity property guarantees that a configuration _orbit_ will be
repeated over long enough runs. A _loop_ is a run of the shape
$C\xrightarrow{u|v}D\xrightarrow{w|z}\mu(D)$ such that $\mu(C)=C$. The shorter
run $C\xrightarrow{\mu(u)|\mu(v)}\mu(D)$ is thus called the run obtained after
_removing the loop_.
###### Proposition 11 (Small run witness).
Let $T$ be an NRT with $k$ registers and state space $Q$, and let
$C\xrightarrow{u|v}D$ be a run. Then there exists $u^{\prime},v^{\prime}$ with
$|u^{\prime}|\leq|Q|\cdot\mathcal{R}(2k)$ such that
$C\xrightarrow{u^{\prime}|v^{\prime}}D$.
###### Proof 3.1.
The idea behind this proposition is simple: any large enough run must contain
a loop and can thus be shortened. Let $u=a_{1}\cdots a_{n}$ with a run
$C_{1}\xrightarrow{a_{1}|v_{1}}C_{2}\xrightarrow{a_{2}|v_{2}}C_{3}\cdots
C_{n}$. Let us assume that $n>|Q|\mathcal{R}(2k)$; we want to obtain a shorter
run from $C_{1}$ to $C_{n}$. Let us consider the orbits of the pairs
$(C_{1},C_{1}),(C_{1},C_{2}),\ldots,(C_{1},C_{n})$. Since
$n>|Q|\mathcal{R}(2k)$, there must be two pairs in the same orbit, i.e. there
must be two indices $1\leq i<j\leq n$ and some automorphism $\mu$ such that
$\mu(C_{1},C_{i})=(C_{1},C_{j})$. Hence we obtain the run
$C_{1}=\mu(C_{1})\xrightarrow{\mu(a_{1}\cdots a_{i-1})|\mu(v_{1}\cdots
v_{i-1})}\mu(C_{i})=C_{j}\xrightarrow{a_{j}\cdots a_{n-1}|v_{j}\cdots
v_{n-1}}C_{n}$, which is strictly shorter.
#### 3.1.1 Characterization of non-emptiness
Let an NRA (non-deterministic register automaton) be simply an NRT without any
outputs. We give a characterization of non-emptiness in terms of small
witnesses.
###### Proposition 12.
Let $A$ be an NRA. The following are equivalent:
1. 1.
$A$ recognizes at least one word
2. 2.
there exist $C_{0}\xrightarrow{u_{1}}C\xrightarrow{u_{2}}\mu(C)$ with $C$
being a final configuration, and $\mu\in\mathrm{Aut}(\mathbb{D})$
3. 3.
there exist $C_{0}\xrightarrow{u_{1}}C\xrightarrow{u_{2}}\mu(C)$ with
$|u_{1}|,|u_{2}|\leq|Q|\cdot\mathcal{R}(2k)$, $C$ being a final configuration,
and $\mu\in\mathrm{Aut}(\mathbb{D})$
###### Proof 3.2.
Let us assume (1) and let $x$ be a word accepted by $A$. Since $\mathbb{D}$ is
oligomorphic, there is only a finite number of orbits of configurations. Thus
an accepting run of $A$ over $x$ must go through some accepting configuration
orbit infinitely often, and in particular at least twice. Hence (2) holds.
Let us assume that (2) holds. To bound the size of $u_{1}$ and $u_{2}$ and
obtain (3), we apply Proposition 11 twice in $A$ (seen as a trivial NRT
producing only epsilon outputs): once to remove loops in the run
$C_{0}\xrightarrow{u_{1}\mid\epsilon}C$ and once in
$C\xrightarrow{u_{2}\mid\epsilon}\mu(C)$.
Let us assume (3), then the word $u_{1}u_{2}\mu(u_{2})\mu^{2}(u_{2})\cdots$ is
accepted by $A$.
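Once configurations are abstracted by their orbits (finitely many, by oligomorphicity), condition (2) becomes an ordinary lasso check on a finite graph. A sketch on such an abstraction (the adjacency-dictionary encoding is our own assumption, not part of the NRA model):

```python
from collections import deque

def nonempty(edges, initial, accepting):
    """Condition (2) of Proposition 12 on a finite orbit graph:
    some accepting orbit C is reachable from the initial orbit and lies
    on a cycle (a run C -> ... -> mu(C) within the same orbit)."""
    def reach(src):
        # orbits reachable from src by at least one transition
        seen = set(edges.get(src, []))
        q = deque(seen)
        while q:
            u = q.popleft()
            for w in edges.get(u, []):
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        return seen
    return any(c in accepting and c in reach(c)
               for c in reach(initial) | {initial})
```

For example, with `edges = {0: [1], 1: [1]}` and initial orbit `0`, the automaton is non-empty exactly when orbit `1` is accepting: orbit `0` is reachable but lies on no cycle.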
#### 3.1.2 Characterizing functionality
Characterizing functionality is slightly more complicated since we have to
care about outputs. Moreover, we need to exhibit patterns involving two runs
which makes pumping more involved.
We start with a useful yet quite technical lemma, which will allow us to
remove loops while preserving mismatches.
###### Lemma 13 (Small mismatch witness).
There exists a polynomial $P(x,y,z)$ such that the following holds.
Let $T$ be an NRT, with $k$ registers, a state space $Q$ and a maximal output
length $L$. Let $C_{1}\xrightarrow{u|u_{1}}D_{1}$ and
$C_{2}\xrightarrow{u|u_{2}}D_{2}$ be runs such that
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}$. Then
there exists $u^{\prime}$ of length less than $P(\mathcal{R}(4k+2),|Q|,L)$ and
$u_{1}^{\prime},u_{2}^{\prime}$, so that
$C_{1}\xrightarrow{u^{\prime}|u_{1}^{\prime}}D_{1}$,
$C_{2}\xrightarrow{u^{\prime}|u_{2}^{\prime}}D_{2}$ with
$u_{1}^{\prime}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}^{\prime}$.
###### Proof 3.3.
Let $\rho_{1}=C_{1}\xrightarrow{u|u_{1}}D_{1}$ and
$\rho_{2}=C_{2}\xrightarrow{u|u_{2}}D_{2}$ be runs such that
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}$.
Our goal is to show that either $|u|$ is smaller than some polynomial
(technically, $5\mathcal{R}(4k+2)^{7}\cdot|Q|^{14}\cdot L^{5}$ is a
suitable bound, although better ones could be computed) in
$\mathcal{R}(4k+2),|Q|$ and $L$ or we can remove a synchronous loop in
$\rho_{1},\rho_{2}$ while preserving the mismatch. A synchronous loop is
defined as a loop over the same input in the product transducer $T\times T$.
Let $u_{1}=\alpha a_{1}\beta_{1}$, $u_{2}=\alpha a_{2}\beta_{2}$ with
$a_{1}\neq a_{2}$. We will only consider removing loops that do _not_ contain
the transitions producing the mismatching letters $a_{1}$ or $a_{2}$.
In order to make things more precise we introduce some useful loop vocabulary.
An _initial loop_ in $\rho=C_{1}\rightarrow C_{2}\rightarrow\cdots\rightarrow
C_{n}$ is a loop starting in $C_{1}$. A _simple loop_ is a loop $C\rightarrow
D\rightarrow\mu(D)$ so that there is no pair of configurations $B,\lambda(B)$
with $\lambda(C)=C$ which occur within the part $D\rightarrow\mu(D)$ of the
run. Similarly a _simple synchronous loop_ is a simple loop over $T\times T$.
In the following we will consider loops that are synchronous and simple. They
will be of the shape
$(C_{1},C_{2})\rightarrow(B_{1},B_{2})\rightarrow\mu(B_{1},B_{2})$ with
$\mu(C_{1},C_{2})=(C_{1},C_{2})$. Moreover, we will ask that the automorphism
satisfies $\mu(a_{1})=a_{1}$ and $\mu(a_{2})=a_{2}$; this ensures
that removing or adding loops does not affect the fact that $a_{1}\neq a_{2}$
(note that it would be enough to simply ask that $\mu(a_{1})=a_{1}$). We call
such loops _nice_. With these constraints, if we have a sequence of
configuration pairs in $(\rho_{1},\rho_{2})$ of length larger than
$\mathcal{R}(4k+2)|Q|^{2}$, we are sure to find a nice loop that preserves
$(a_{1},a_{2})$.
We call the _effect_ of a loop on $\rho_{1}$ the length of the output factor
removed from $\alpha$; in particular, the effect is $0$ if the factor removed
is in $\beta_{1}$ (and symmetrically for $\rho_{2}$). We call the _net effect_
of a synchronous loop on $\rho_{1},\rho_{2}$ the difference between the effect
in $\rho_{1}$ and the effect on $\rho_{2}$. Loops with a positive, negative
and null net effect are called positive, negative and null loops,
respectively.
If there exists a null nice loop, then removing it preserves the mismatch and
we are done. We split the proof into the two remaining cases: 1) either all
nice loops are strictly positive (without loss of generality) or 2) some nice
loops are strictly positive while others are strictly negative. In the first
case, we will show that $u$ has to be _small_. In the second case, we will
show that, after removing all nice loops, we can repump (a small number of)
positive and negative nice loops to realign the mismatch.
Let us consider a configuration $(B_{1},B_{2})$ in $(\rho_{1},\rho_{2})$. The
_effect set_ of $(B_{1},B_{2})$ is the set of effects of nice loops starting
in $(C_{1},C_{2})$ and ending in $(B_{1},B_{2})$ (note that we consider any
nice loop, not just the ones occurring in $(\rho_{1},\rho_{2})$). Then we say
that $(B_{1},B_{2})$ is _positive_ (resp. _negative_) if all its effects are
positive (resp. negative).
We consider two cases, 1) either $(\rho_{1},\rho_{2})$ only has strictly
positive configurations (without loss of generality) or 2) it has
configurations with positive and negative effects (possibly different
configurations).
Let us start with the first case. We observe two simple facts: the effect of a
nice loop is bounded by $\mathcal{R}(4k+2)|Q|^{2}L$ and, similarly, the output
length of a run without nice loops is bounded by $\mathcal{R}(4k+2)|Q|^{2}L$.
Let us denote by $\Delta_{0}$ the difference of output lengths
$|u_{2}|-|u_{1}|$. Since there are no nice loops having $0$ effect on
$\rho_{1}$, this means that $|\beta_{1}|\leq\mathcal{R}(4k+2)|Q|^{2}L$, and
thus $\Delta_{0}\geq-\mathcal{R}(4k+2)|Q|^{2}L$. We denote by $\Delta_{1}$ the
difference of output length after removing one nice loop from
$(\rho_{1},\rho_{2})$. We observe that
$-\mathcal{R}(4k+2)|Q|^{2}L\leq\Delta_{0}<\Delta_{1}$ since all nice loops
must be strictly positive. We carry on removing nice loops until there are
none left. Note that removing nice loops cannot introduce configurations with
new effects: some configurations are erased, and to some an automorphism $\mu$
satisfying $\mu(C_{1},C_{2},a_{1},a_{2})=(C_{1},C_{2},a_{1},a_{2})$ is
applied, which preserves the effect set. We thus obtain
$-\mathcal{R}(4k+2)|Q|^{2}L\leq\Delta_{0}<\Delta_{1}<\ldots<\Delta_{l}\leq\mathcal{R}(4k+2)|Q|^{2}L$
after removing $l$ nice loops and obtaining a run without nice loops. Hence we
get that $l\leq 2\mathcal{R}(4k+2)|Q|^{2}L$. This means that the runs
$\rho_{1},\rho_{2}$ could not have been very large. In fact we have
$|\rho_{1}|\leq(l+1)\times\mathcal{R}(4k+2)|Q|^{2}\leq
3\mathcal{R}(4k+2)^{2}|Q|^{4}L$.
We only have left to consider the second case, that is, $(\rho_{1},\rho_{2})$
has configurations with positive and negative effects. Note that the effects
of configurations belong to the interval
$[-\mathcal{R}(4k+2)|Q|^{2}L,\mathcal{R}(4k+2)|Q|^{2}L]$. Let $d$ denote in
the following the $\gcd$ of all effects of configurations in
$(\rho_{1},\rho_{2})$. Let
$(B_{1}^{1},B_{2}^{1}),\ldots,(B_{1}^{l},B_{2}^{l})$ be configurations so that
the $\gcd$ of their collective effects is $d$, and with at least one positive
and one negative effect. We can assume that $l\leq\mathcal{R}(4k+2)|Q|^{2}L$.
We remove nice loops without deleting these configurations. Note that removing
loops may mean applying an automorphism to some of these configurations.
However, the automorphisms always preserve $(C_{1},C_{2},a_{1},a_{2})$ so the
effect set of the configuration is left unchanged. Also note that the effects
of such loops are all multiples of $d$. After removing all these loops we are
left with runs that are small ($\leq(l+1)\mathcal{R}(4k+2)|Q|^{2}$) but the
outputs $a_{1}$, $a_{2}$ may very well be misaligned (by a multiple of $d$). The
idea is to use the configurations
$(B_{1}^{1},B_{2}^{1}),\ldots,(B_{1}^{l},B_{2}^{l})$ to pump simple loops into
the run to realign the two mismatching outputs. Let us explain how these nice
loops can be pumped into the runs: Let
$\alpha:(C_{1},C_{2})\xrightarrow{(u_{1},u_{2})|(v_{1},v_{2})}(B_{1},B_{2})$
be a run and let
$\beta:(C_{1},C_{2})\xrightarrow{(x_{1},x_{2})|(y_{1},y_{2})}(B_{1},B_{2})\xrightarrow{(z_{1},z_{2})|(w_{1},w_{2})}\mu(B_{1},B_{2})$
be a nice loop. By applying $\mu^{-1}$ to $\alpha$ and the second part of
$\beta$ we obtain a run of the shape:
$(C_{1},C_{2})\xrightarrow{\mu^{-1}(u_{1},u_{2})|\mu^{-1}(v_{1},v_{2})}\mu^{-1}(B_{1},B_{2})\xrightarrow{\mu^{-1}(z_{1},z_{2})|\mu^{-1}(w_{1},w_{2})}(B_{1},B_{2})$,
with the added effect of $\beta$.
Note that we have chosen the configurations carefully so that we can change
the alignment by any multiple of $d$.
###### Lemma 14 (Signed generalized Bézout identity).
Let $p_{1},\ldots,p_{k}\in\mathbb{Z}{\setminus}{\left\\{0\right\\}}$ be
non-zero integers, such that at least two have different signs. Then there exist
natural numbers $n_{1},\ldots,n_{k}\leq\max(|p_{1}|^{3},\ldots,|p_{k}|^{3})$
such that:
$n_{1}p_{1}+\ldots+n_{k}p_{k}=\gcd(p_{1},\ldots,p_{k})$
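The identity and its bound can be explored by brute force; the following sketch (ours, practical only for small $k$ and small $|p_{i}|$) searches for the coefficients $n_{1},\ldots,n_{k}$ within the stated bound:

```python
from functools import reduce
from itertools import product
from math import gcd

def signed_bezout(ps):
    """Find non-negative n_1, ..., n_k with sum n_i * p_i = gcd(p_i),
    searching up to the bound max |p_i|^3 of Lemma 14 (the p_i are
    assumed non-zero, with at least two of different signs)."""
    g = reduce(gcd, (abs(p) for p in ps))
    bound = max(abs(p) ** 3 for p in ps)
    for ns in product(range(bound + 1), repeat=len(ps)):
        if sum(n * p for n, p in zip(ns, ps)) == g:
            return ns
    return None   # cannot happen under the lemma's hypotheses

# e.g. signed_bezout((6, -4)) == (1, 1), since 1*6 + 1*(-4) == gcd(6, 4)
```

The mixed signs are essential: with all $p_{i}$ positive, non-negative coefficients can only reach the gcd when some $p_{i}$ already equals it.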
Using the previous lemma (see Appendix A for the proof), we can change the
alignment by $d$ using only a polynomial number of times each loop (at most
$(\mathcal{R}(4k+2)|Q|^{2}L)^{3}$ for each loop). Since the runs are small,
the misalignment is also small (at most $(l+1)\mathcal{R}(4k+2)|Q|^{2}L$) and
we only need to repeat this operation a polynomial number of times. Finally
since we have chosen automorphisms that preserve $a_{1},a_{2}$, we are sure to
obtain a mismatch.
###### Proposition 15 (Functionality).
There exists a polynomial $P(x,y,z)$ such that the following holds.
Let $R\subseteq\mathbb{D}^{\omega}\times\mathbb{D}^{\omega}$ be given by an
NRT $T$ with $k$ registers, a state space $Q$ and a maximum output length $L$.
The following are equivalent:
1. 1.
$R$ is not functional
2. 2.
there exist
$C_{0}\xrightarrow{u|u_{1}}C_{1}\xrightarrow{v|v_{1}}D_{1}\xrightarrow{w|w_{1}}\mu(C_{1})$,
$C_{0}\xrightarrow{u|u_{2}}C_{2}\xrightarrow{v|v_{2}}D_{2}\xrightarrow{w|w_{2}}\mu(C_{2})$
with $C_{1},D_{2}$ final, $\mu\in\mathrm{Aut}(\mathbb{D})$ such that
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}$
3. 3.
there exist
$C_{0}\xrightarrow{u|u_{1}}C_{1}\xrightarrow{v|v_{1}}D_{1}\xrightarrow{w|w_{1}}\mu(C_{1})$,
$C_{0}\xrightarrow{u|u_{2}}C_{2}\xrightarrow{v|v_{2}}D_{2}\xrightarrow{w|w_{2}}\mu(C_{2})$
with $C_{1},D_{2}$ final and $|u|,|v|,|w|\leq P(\mathcal{R}(4k),|Q|,L)$,
$\mu\in\mathrm{Aut}(\mathbb{D})$ such that
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}$
###### Proof 3.4.
Let us assume that (1) holds, meaning that $T$ has two accepting runs
$\rho_{1},\rho_{2}$ over some word $x\in\mathbb{D}^{\omega}$ which produce
different outputs. Let $C_{0},C_{1},C_{2}\ldots$ denote the configurations of
$\rho_{1}$ and $C_{0},C_{1}^{\prime},C_{2}^{\prime}\ldots$ the ones of
$\rho_{2}$. Let us consider the orbits of pairs of configurations
$C_{i},C_{i}^{\prime}$. We know that there is a finite number of such orbits.
We also know that an infinite number of such pairs is accepting in the first
component and an infinite number is accepting in the second component. Thus we
can see $(\rho_{1},\rho_{2})$ as a sequence of the following shape, with $C$
and $D^{\prime}$ final:
$(C_{0},C_{0})\xrightarrow{u_{0}}(C,C^{\prime})\xrightarrow{v_{0}}(D,D^{\prime})\xrightarrow{u_{1}}{\mu_{1}(C,C^{\prime})}\xrightarrow{v_{1}}\nu_{1}(D,D^{\prime})\xrightarrow{u_{2}}\cdots$
Since the outputs are infinite and different, they mismatch at some position
$i$. Then, there exists $n$ such that reading $u=u_{0}v_{0}\cdots u_{n}v_{n}$
has produced at least $i$ output symbols, both in $\rho_{1}$ and $\rho_{2}$. Hence
we have shown that
$C_{0}\xrightarrow{u|\alpha_{1}}\mu_{n}(C)\xrightarrow{u_{n+1}|\beta_{1}}\nu_{n+1}(D)\xrightarrow{v_{n+1}|\gamma_{1}}\mu_{n+1}(C)$,
$C_{0}\xrightarrow{u|\alpha_{2}}\mu_{n}(C^{\prime})\xrightarrow{u_{n+1}|\beta_{2}}\nu_{n+1}(D^{\prime})\xrightarrow{v_{n+1}|\gamma_{2}}\mu_{n+1}(C^{\prime})$,
and by assumption $|\alpha_{1}|,|\alpha_{2}|\geq i$, and thus
$\alpha_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}\alpha_{2}$.
Hence we have shown that (2) holds.
Let us assume that (2) holds: i.e. we have
$C_{0}\xrightarrow{u|u_{1}}C_{1}\xrightarrow{v|v_{1}}D_{1}\xrightarrow{w|w_{1}}\mu(C_{1})$,
$C_{0}\xrightarrow{u|u_{2}}C_{2}\xrightarrow{v|v_{2}}D_{2}\xrightarrow{w|w_{2}}\mu(C_{2})$
with $C_{1},D_{2}$ final, $\mu\in\mathrm{Aut}(\mathbb{D})$ such that
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}$. We use
Lemma 13 (small mismatch witness) to obtain that
$C_{0}\xrightarrow{u^{\prime}|u_{1}^{\prime}}C_{1}$,
$C_{0}\xrightarrow{u^{\prime}|u_{2}^{\prime}}C_{2}$ with
$u_{1}^{\prime}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}^{\prime}$
and $|u^{\prime}|$ small. We get
$C_{0}\xrightarrow{u^{\prime}|u_{1}^{\prime}}C_{1}\xrightarrow{v|v_{1}}D_{1}\xrightarrow{w|w_{1}}\mu(C_{1})$,
$C_{0}\xrightarrow{u^{\prime}|u_{2}^{\prime}}C_{2}\xrightarrow{v|v_{2}}D_{2}\xrightarrow{w|w_{2}}\mu(C_{2})$.
We now only have to use some loop removal on $v$ and $w$, just as in
Proposition 12, in order to obtain words smaller than
$|Q|^{2}\mathcal{R}(4k)$.
Showing that (3) implies (1) is the easiest part since the pattern clearly
causes non-functionality over the word $uvw\mu(vw)\mu^{2}(vw)\cdots$.
#### 3.1.3 Characterizing continuity
Here we characterize continuity and uniform continuity using patterns similar
to the one used for functionality. Before doing so, we introduce a property of
configurations: a configuration is _co-reachable_ if there is a final run from
it.
For this we define a notion of _critical pattern_. Let $T$ be an NRT. We
associate to it the set of critical patterns, denoted
$\mathsf{Critical}_{T}(u,v,w,z,C_{1},\mu(C_{1}),D_{1},C_{2},\mu(C_{2}),D_{2})$
($T$ is omitted when clear from context), given by the runs
$C_{0}\xrightarrow{u|u_{1}}C_{1}\xrightarrow{v|v_{1}}\mu(C_{1})\xrightarrow{w|w_{1}}D_{1}$,
$C_{0}\xrightarrow{u|u_{2}}C_{2}\xrightarrow{v|v_{2}}\mu(C_{2})\xrightarrow{z|w_{2}}D_{2}$
so that $D_{1}$, $D_{2}$ are co-reachable and one of the following holds:
1. a)
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}$, or
2. b)
$v_{i}=\epsilon$ and
$u_{j}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{i}w_{i}$
for $\left\\{i,j\right\\}=\left\\{1,2\right\\}$, or
3. c)
$v_{1}=v_{2}=\epsilon$ and
$u_{1}w_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}w_{2}$
Before characterizing continuity and uniform continuity, we show a small
critical pattern property.
###### Claim 3.1.3 (Small critical pattern).
There exists a polynomial $P^{\prime}(x,y,z)$ such that the following holds.
Let $T$ be an NRT, with $k$ registers, a state space $Q$ and a maximal output
length $L$. Let
$C_{0}\xrightarrow{u|u_{1}}C_{1}\xrightarrow{v|v_{1}}\mu(C_{1})\xrightarrow{w|w_{1}}D_{1}$,
$C_{0}\xrightarrow{u|u_{2}}C_{2}\xrightarrow{v|v_{2}}\mu(C_{2})\xrightarrow{z|w_{2}}D_{2}$
be a critical pattern. Then there exists
$u^{\prime},v^{\prime},w^{\prime},z^{\prime}$ of length less than
$P^{\prime}(\mathcal{R}(4k),|Q|,L)$ and
$u_{1}^{\prime},v_{1}^{\prime},w_{1}^{\prime},u_{2}^{\prime},v_{2}^{\prime},w_{2}^{\prime}$,
so that
$C_{0}\xrightarrow{u^{\prime}|u_{1}^{\prime}}C_{1}\xrightarrow{v^{\prime}|v_{1}^{\prime}}\mu(C_{1})\xrightarrow{w^{\prime}|w_{1}^{\prime}}D_{1}$,
$C_{0}\xrightarrow{u^{\prime}|u_{2}^{\prime}}C_{2}\xrightarrow{v^{\prime}|v_{2}^{\prime}}\mu(C_{2})\xrightarrow{z^{\prime}|w_{2}^{\prime}}D_{2}$
is a critical pattern.
###### Proof 3.5.
We want to remove loops in $u,v,w,z$ without affecting the mismatches. The
idea is to see such a critical pattern as a pair of runs which mismatch and
leverage Lemma 13. In order to make sure that the loops which are removed do
not interfere with the intermediate configurations, we color the states of the
transducer. We consider runs which start with red configurations, then the
middle parts $C_{1}\xrightarrow{v|v_{1}}\mu(C_{1})$ and
$C_{2}\xrightarrow{v|v_{2}}\mu(C_{2})$ are colored in blue and the final parts
are colored in green. Using Lemma 13, we obtain runs smaller than
$P(\mathcal{R}(4k),3|Q|,L)$ (the $3$ factor comes from the coloring). Since
the loops have to be removed in the monochromatic parts, we obtain the desired
result.
Let us give a characterization of continuity and uniform continuity for
functions given by an NRT.
###### Proposition 16 (Continuity/uniform continuity).
There exists a polynomial $P^{\prime}(x,y,z)$ such that the following holds.
Let $f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ be given by an NRT
$T$ with $k$ registers, a state space $Q$ and a maximum output length $L$. The
following are equivalent:
1. 1.
$f$ is not uniformly continuous (resp. continuous)
2. 2.
there exists a critical pattern in
$\mathsf{Critical}(u,v,w,z,C_{1},\mu(C_{1}),D_{1},C_{2},\mu(C_{2}),D_{2})$
(resp. with $C_{1}$ final).
3. 3.
there exists a critical pattern in
$\mathsf{Critical}(u,v,w,z,C_{1},\mu(C_{1}),D_{1},C_{2},\mu(C_{2}),D_{2})$
such that $|u|,|v|,|w|,|z|\leq P^{\prime}(\mathcal{R}(4k),|Q|,L)$ (resp. with
$C_{1}$ final).
###### Proof 3.6.
Let us assume that (1) holds, meaning that $f$ is not uniformly continuous
(resp. not continuous) at some point $x\in\mathbb{D}^{\omega}$. This means
that there exists $i$ such that for any $n$, there are two accepting runs
$\rho_{1},\rho_{2}$ over $x_{1},x_{2}$ and producing $y_{1},y_{2}$
respectively such that $|x\wedge x_{1}\wedge x_{2}|>n$ and $|y_{1}\wedge
y_{2}|<i-1$. Moreover, if $f$ is not continuous, we can even assume that
$x_{1}=x$ and $\rho_{1}=\rho$, some accepting run over $x$. Let us consider
some $n>2i\cdot|Q|^{2}\cdot\mathcal{R}(2k)$. Let $u=x\wedge x_{1}\wedge
x_{2}$, since $u$ is large enough we have that some pair of configurations in
$\rho_{1},\rho_{2}$ has to repeat at least $2i$ times, up to automorphism. If
$f$ is not continuous, we choose $n$ large enough so that a final
configuration appears at least $2i\cdot|Q|^{2}\cdot\mathcal{R}(2k)$ times in
the first $n$ transitions of $\rho$. That way we can ensure that some pair of
configuration in $\rho,\rho_{2}$ repeats at least $2i$ times, up to
automorphism, with the configuration of $\rho$ being accepting.
Thus we obtain two sequences:
$C_{0}\xrightarrow{u_{0}|v_{0}}C\xrightarrow{u_{1}|v_{1}}\mu_{1}(C)\cdots\mu_{{2i-1}}(C)\xrightarrow{u_{2i}|v_{2i}}\mu_{2i}(C)$
and
$C_{0}\xrightarrow{u_{0}|w_{0}}D\xrightarrow{u_{1}|w_{1}}\mu_{1}(D)\cdots\mu_{{2i-1}}(D)\xrightarrow{u_{2i}|w_{2i}}\mu_{2i}(D)$.
Moreover, let $x_{1}=ux_{1}^{\prime}$, $x_{2}=ux_{2}^{\prime}$, let
$y_{1}=v_{0}\cdots v_{2i}y_{1}^{\prime}$ and $y_{2}=w_{0}\cdots
w_{2i}y_{2}^{\prime}$. We do a case analysis; note that the cases are not
necessarily mutually exclusive.
Case 1) Let us first assume that there is some index
$j\in\left\\{1,\ldots,2i\right\\}$ so that
$\mu_{j-1}(C)\xrightarrow{u_{j}|\epsilon}\mu_{j}(C)$ and
$\mu_{j-1}(D)\xrightarrow{u_{j}|\epsilon}\mu_{j}(D)$. Since the outputs
$y_{1},y_{2}$ mismatch, we have some prefixes of $\rho_{1},\rho_{2}$ of the
shape $C_{0}\xrightarrow{u_{0}\cdots u_{j-1}|v_{0}\cdots
v_{j-1}}\mu_{j-1}(C)\xrightarrow{u_{j}|\epsilon}\mu_{j}(C)\xrightarrow{x_{1}^{\prime\prime}|y_{1}^{\prime\prime}}E$
and $C_{0}\xrightarrow{u_{0}\cdots u_{j-1}|w_{0}\cdots
w_{j-1}}\mu_{j-1}(D)\xrightarrow{u_{j}|\epsilon}\mu_{j}(D)\xrightarrow{x_{2}^{\prime\prime}|y_{2}^{\prime\prime}}F$
with $v_{0}\cdots
v_{j-1}y_{1}^{\prime\prime}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}w_{0}\cdots
w_{j-1}y_{2}^{\prime\prime}$. Hence we obtain a critical pattern of shape c).
Case 2) We assume that there are at least $i$ indices
$j\in\left\\{1,\ldots,2i\right\\}$ such that $v_{j}\neq\epsilon$ and at least
$i$ indices such that $w_{j}\neq\epsilon$. Then we have that $v_{0}\cdots
v_{2i}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}w_{0}\cdots
w_{2i}$ and thus we get a critical pattern of shape a).
Case 3) Let us assume (without loss of generality) that there are strictly
fewer than $i$ indices such that $w_{j}\neq\epsilon$. This means that there
must be at least $i$ indices such that $v_{j}\neq\epsilon$, otherwise we refer
back to case 1). Let us consider a prefix of $\rho_{2}$ of the shape
$C_{0}\xrightarrow{u_{0}\cdots u_{2i}|w_{0}\cdots
w_{2i}}\mu_{2i}(D)\xrightarrow{x_{2}^{\prime\prime}|y_{2}^{\prime\prime}}F$ such
that $v_{0}\cdots
v_{2i}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}w_{0}\cdots
w_{2i}y_{2}^{\prime\prime}$. Let $j$ be such that $w_{j}=\epsilon$, then we
add the loop corresponding to index $j$ after the configuration
$(\mu_{2i}(C),\mu_{2i}(D))$. Doing this may modify the mismatching letter of
$y_{2}^{\prime\prime}$ so as to cancel the mismatch; however, if that is the
case, we can simply add the loop twice, which guarantees that the mismatch is
preserved. Thus we obtain a critical pattern of shape b).
If we assume that (2) holds then Claim 3.1.3 gives us (3).
Let us assume that (3) holds. Then $f$ is discontinuous at
$x=uv\mu(v)\mu^{2}(v)\cdots\in\overline{\mathrm{dom}(f)}$ and is thus not
uniformly continuous. Moreover, if $C_{1}$ is final we have that $f$ is
discontinuous at $x\in\mathrm{dom}(f)$, and hence $f$ is not continuous.
### 3.2 Deciding functionality, continuity and computability
We use a key property of oligomorphic structures, namely that orbits can be
defined by first order formulas.
Let $\mathbb{D}=(D,\Sigma^{\mathbb{D}})$ be an oligomorphic structure. We
denote by $\textsf{FO}[\Sigma]$ the set of first-order formulas over signature
$\Sigma$, and just FO if $\Sigma$ is clear from the context.
###### Proposition 17 ([Boj19, Lemma 4.11]).
Let $\mathbb{D}$ be an oligomorphic structure and let $k$ be a natural number.
Any orbit of an element of $\mathbb{D}^{k}$ is first-order definable.
We say that $\mathbb{D}$ is _decidable_ if its Ryll-Nardzewski function is
computable and $\textsf{FO}[\Sigma]$ has decidable satisfiability problem over
$\mathbb{D}$. Moreover, we say that $\mathbb{D}$ is _polynomially decidable_
if an orbit of $\mathbb{D}^{k}$ can be expressed by an FO formula of
polynomial size in $k$ and the FO satisfiability problem is decidable in
PSpace. One could similarly define a notion of exponentially
decidable, or $f$-decidable for some fixed complexity function $f$, but we
will not need it here.
Roughly speaking the main automata problems which we will consider (emptiness,
functionality, continuity, etc) will be PSpace (resp. decidable) whenever the
structure $\mathbb{D}$ is polynomially decidable (resp. decidable).
#### 3.2.1 Computing the next letter
In this section, we show how to solve the next letter problem for NRT over a
decidable representable oligomorphic structure $\mathbb{D}$. By Theorems 10
and 9, this entails that continuity and computability coincide for functions
defined by transducers over decidable representable oligomorphic structures,
as stated in Theorem 20 below.
Before tackling the next letter problem, we consider the emptiness problem of
register automata.
###### Theorem 18.
Let $\mathbb{D}$ be a decidable (resp. polynomially decidable) oligomorphic
structure. The emptiness problem for NRA is decidable (resp. in PSpace).
###### Proof 3.7.
We show the result in case of a polynomially decidable structure, the more
general case can be obtained by forgetting about complexity. Let $\mathbb{D}$
be a polynomially decidable oligomorphic structure and let $T$ be an NRA with
$k$ registers and state space $Q$. Since $\mathbb{D}$ is polynomially
decidable, any orbit of $\mathbb{D}^{k}$ can be represented by a formula of
size polynomial in $k$. This means that $\mathcal{R}(k)$ is exponential. Using
the non-emptiness characterization from Proposition 12, we only need to find a
run of length polynomial in $|Q|$ and $\mathcal{R}(k)$. The idea is to use a
counter bounded by this polynomial, using space in $\log(|Q|\mathcal{R}(k))$,
and execute an NPSpace algorithm which will guess a run of the automaton and
update the type of configurations in space polynomial in $k$ and $\log(|Q|)$.
Simulating the run goes like this: first the type of the configuration is
initialized to $q_{0},(d_{0},\ldots,d_{0})$. Then a new type
$\phi(y_{1},\ldots,y_{k})$ is guessed, as well as a transition which we see as
given by a state $q$ and a formula
$\psi(x_{1},\ldots,x_{k},y_{1},\ldots,y_{k})$. We check that the transition is
valid by deciding the satisfiability of $\exists y_{1},\ldots,y_{k}\
\psi(d_{0},\ldots,d_{0},y_{1},\ldots,y_{k})\wedge\phi(y_{1},\ldots,y_{k})$
which can be done in PSpace, by assumption. We thus move to the new
configuration type given by $q,\phi$ and we continue the simulation. At some
point when $q$ is final, we keep the configuration type in memory and guess
that we will see it again.
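The simulation in the proof can be sketched as follows. This is a toy finite-state model, not the actual PSpace procedure: configuration types are opaque hashable values, the guessed transitions are abstracted into a `successors` function, and nondeterministic guessing is replaced by exhaustive search. All names are ours, for illustration only.

```python
from collections import deque

def nra_nonempty(initial_type, successors, is_final, bound):
    """Toy emptiness check mirroring the proof: search for a reachable
    final configuration type that can be reached again from itself (a
    lasso), exploring runs of length at most `bound`.  The real
    algorithm guesses the run nondeterministically in PSpace; here we
    replace guessing by exhaustive breadth-first search."""
    def reachable_from(start):
        seen, frontier = set(), deque([(start, 0)])
        while frontier:
            t, n = frontier.popleft()
            if t in seen or n > bound:
                continue
            seen.add(t)
            for t2 in successors(t):
                frontier.append((t2, n + 1))
        return seen

    for t in reachable_from(initial_type):
        # a final type that repeats: keep it "in memory and guess that
        # we will see it again", as in the proof
        if is_final(t) and any(t in reachable_from(t2) for t2 in successors(t)):
            return True
    return False
```

The bounded counter of the proof corresponds here to the `bound` cutoff on explored run length.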
###### Lemma 19.
Let $f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ be a function
defined by an NRT over a decidable representable oligomorphic structure
$\mathbb{D}$. Then, its next letter problem is computable.
###### Proof 3.8.
In the next letter problem we get as input two words $u,v\in\mathbb{D}^{*}$.
Our first goal is to decide whether there exists $d\in D$ such that
$f(u\mathbb{D}^{\omega})\subseteq vd\mathbb{D}^{\omega}$. We check the
negation of this property, i.e. we try to exhibit two runs
$C_{0}\xrightarrow{u|u_{1}}C_{1}\xrightarrow{w|w_{1}}D_{1}$,
$C_{0}\xrightarrow{u|u_{2}}C_{2}\xrightarrow{w|w_{2}}D_{2}$ such that
$|u_{1}w_{1}|,|u_{2}w_{2}|>|v|$ and either $|u_{1}w_{1}\wedge v|<|v|$ or
$|u_{1}w_{1}\wedge u_{2}w_{2}|\leq|v|$. The non-existence of such runs only
depends on the type of $u,v$, hence we can define an automaton which simulates
$T$ and starts by reading some input of the type of $u$ and checks whether
there is a mismatch occurring before the $|v|$ outputs, which can be done by
adding one register to store the mismatch, and two $|v|$-bounded counters in
memory (recall that $v$ is given as input). It finally checks that the reached
configurations are co-reachable by guessing some continuation for each and
simulating $T$ over it. Thus we reduce the non-existence of a next letter to
the non-emptiness of an automaton, which is decidable.
Once we know that such a next letter exists, we only have to simulate any run
of $T$ over $uw$ for an arbitrary $w\in\mathbb{D}^{\omega}$ such that
$uw\in\mathrm{dom}(T)$, and see what the $(|v|+1)^{\text{th}}$ output is (note
that we can avoid $\epsilon$-producing loops, so this data value will be
output in a finite number of steps). To be able to simulate $T$ over $uw$, we
use the fact that $\mathbb{D}$ is representable and decidable. For every
transition that we want to take, from decidability we can check whether the
transition is possible. Then, once we know the transition is possible, we can
enumerate the representations of elements of $\mathbb{D}$ and check that they
satisfy the transition formula.
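To illustrate the last step, here is a hedged sketch of what "enumerate the representations of elements of $\mathbb{D}$" can look like when $\mathbb{D}$ is $\mathbb{Q}_{+}$: a diagonal enumeration of the non-negative rationals, used to find a witness for a transition formula already known to be satisfiable. The function names and the `limit` safety net are our own illustration, not part of the paper's construction.

```python
from fractions import Fraction
from itertools import count

def enumerate_qplus():
    """Enumerate all non-negative rationals p/q: diagonal enumeration
    over p + q, skipping non-reduced fractions to avoid duplicates."""
    yield Fraction(0)
    for s in count(2):               # s = p + q
        for p in range(1, s):
            q = s - p
            f = Fraction(p, q)
            if f.numerator == p and f.denominator == q:   # reduced
                yield f

def find_witness(transition_formula, limit=10_000):
    """Search for a data value satisfying the (decidable) transition
    formula.  In the setting of the proof a witness is known to exist
    once the transition has been checked possible, so the enumeration
    terminates; `limit` is only a safety net for this sketch."""
    for _, d in zip(range(limit), enumerate_qplus()):
        if transition_formula(d):
            return d
    return None
```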
As a direct corollary of Lemma 19, Theorem 10 and Theorem 9, we obtain:
###### Theorem 20.
Let $f:\mathbb{D}^{\omega}\rightarrow\mathbb{D}^{\omega}$ be a function
defined by an NRT over a decidable oligomorphic structure $\mathbb{D}$, and
let $m:\mathbb{N}\rightarrow\mathbb{N}$ be a total function. Then,
1. 1.
$f$ is computable iff $f$ is continuous
2. 2.
$f$ is uniformly computable iff $f$ is uniformly continuous
3. 3.
$f$ is $m$-computable iff $f$ is $m$-continuous
#### 3.2.2 Deciding functionality, continuity and computability
###### Theorem 21.
Given a decidable (resp. polynomially decidable) oligomorphic structure
$\mathbb{D}$ functionality, continuity and uniform continuity are decidable
(resp. PSpace-c) for functions given by NRT. As a consequence, if $\mathbb{D}$
is representable, then computability and uniform computability are decidable
(resp. PSpace-c).
###### Proof 3.9.
The proofs are very similar, whether we consider functionality, continuity or
uniform continuity. Let us show the result for functionality. Moreover we
assume that $\mathbb{D}$ is polynomially decidable, the argument in the more
general case can easily be obtained just by forgetting about complexity.
Let us consider an NRT $T$ with $k$ registers, state space $Q$ and maximum
output length $L$. We want to show that we can decide the existence of two
runs with a mismatch. From the characterization given in Proposition 15, we
know that the pattern we are looking for is small. We consider a counter
bounded by the value $3P(\mathcal{R}(4k),|Q|,L)$, which can be represented
using polynomial space because $\mathcal{R}(4k)$ is exponential in $k$
($\mathbb{D}$ is polynomially decidable). Our goal is to simulate $T$ and
exhibit a pattern of length bounded by that counter. As we have seen before,
we can easily simulate runs of $T$ in PSpace. The additional difficulty here
is that at some point we have to check that two output positions mismatch. We
use two additional counters which will ensure that the two mismatching outputs
correspond to the same position. Let us now describe how the algorithm goes,
in a high-level manner. We start by initializing two runs in parallel, as well
as our counters. We keep in memory the $2k$-type of the two configurations,
which can be done in polynomial space since $\mathbb{D}$ is polynomially
decidable. We keep guessing in parallel two transitions for our runs and
updating the $2k$-type using the fact that satisfiability of FO is in PSpace.
Every time a run outputs some letter, its counter is incremented. At some
point we may guess that we output the mismatching value in one of the runs, in
which case we stop the counter corresponding to that run. We crucially also
need to be able to check later that the value output mismatches. In order to
do this we keep in memory a $2k+1$-type, always keeping the value which we
output. At some point we output the second mismatching position, we check that
the counters coincide and that the outputs are indeed different, which is
given by the $2k+1$-type. In parallel, we also have to check that we reach
some final configurations and that some configuration repeats. To do this we
need to keep one or two additional $2k$-type in memory, which again can easily
be done in PSpace.
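The counter mechanism can be illustrated by the following sketch. This is our own toy model: outputs are materialized as chunks (one chunk per transition), whereas the actual algorithm only keeps the counter values and a $2k+1$-type in memory.

```python
def mismatch_at(chunks1, chunks2, guess1, guess2):
    """Check a guessed mismatch between two runs whose outputs are
    produced in chunks.  Each guess is a (transition index, offset)
    pair; we compute the counter value at which that letter is emitted,
    then check that the two counters coincide (same output position)
    while the letters differ -- the role played by the two counters and
    the stored mismatching value in the proof."""
    def position_and_letter(chunks, guess):
        t, off = guess
        counter = sum(len(c) for c in chunks[:t]) + off
        return counter, chunks[t][off]

    p1, a = position_and_letter(chunks1, guess1)
    p2, b = position_and_letter(chunks2, guess2)
    return p1 == p2 and a != b
```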
The approach for continuity and uniform continuity is exactly the same except
that the patterns of Proposition 16 are slightly more involved. Moreover we
also need to check on the fly that the configurations reached are co-
reachable. This can be done exactly like for non-emptiness of automata in
PSpace.
Finally, the PSpace lower bound is obtained by reducing from emptiness of
register automata over $(\mathbb{N},\\{=\\})$, which is PSpace-c [DL09]. Since
the data domains are countable, they can always simulate
$(\mathbb{N},\\{=\\})$, and the proofs of [EFR20] can easily be adapted: given
a NRA $A$ over $(\mathbb{N},\\{=\\})$, define a relation associating words of
the form $w\\#x$ with $w\in L(A)$ and $x\in\mathcal{D}^{\omega}$ to all
elements of $\mathcal{D}^{\omega}$; it is functional if and only if its domain
is empty. Similarly, given some non-continuous function $g$ defined by some
NRT, define the function $w\\#x\mapsto g(x)$ for all $w\in L(A)$ and
$x\in\mathcal{D}^{\omega}$. Again, it is continuous iff it is empty; similarly
for uniform continuity.
Thus, given a specification represented as a NRT over a representable and
decidable domain, one can examine whether it can be implemented (provided it
is functional, which is then decidable) by checking whether it is computable.
Indeed, computability is the most liberal criterion for being realisable. The
notion of uniform computability then allows one to refine this check, as for
functions that are uniformly computable, one can focus on implementations that
have a bounded lookahead. In the case of $(\mathbb{Q},\left\\{<\right\\})$, we
further get that both problems are PSpace-complete:
###### Theorem 22.
For relations given by NRT over $(\mathbb{Q},\left\\{<\right\\})$ deciding
functionality, continuity/computability and uniform continuity/uniform
computability are PSpace-complete.
###### Proof 3.10.
We only need to argue that $(\mathbb{Q},\left\\{<\right\\})$ is representable
and polynomially decidable. Clearly rational numbers are representable.
Moreover, since $(\mathbb{Q},\left\\{<\right\\})$ is homogeneous, it admits
quantifier elimination, which means that any $k$-type can be defined by a
formula polynomial in $k$ (actually linear). Indeed a type is just given by
the linear order between the free variables. Moreover, satisfiability of
first-order logic over $(\mathbb{Q},\left\\{<\right\\})$ is in PSpace.
Note that the same reasoning applies for $(\mathbb{N},\left\\{=\right\\})$;
the case of $(\mathbb{N},\left\\{=\right\\})$ can also be obtained by encoding
it in $(\mathbb{Q},\left\\{<\right\\})$.
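To make the quantifier-elimination argument concrete, here is a hedged sketch (with our own naming) of how the $k$-type of a tuple over $(\mathbb{Q},\left\\{<\right\\})$ can be written as a chain of comparisons along the sorted order, giving a formula of size linear in $k$, as claimed above.

```python
from fractions import Fraction

def order_type_formula(values, varnames):
    """k-type of a tuple in (Q, {<}) as a chain of comparisons along
    the sorted order: a quantifier-free formula of linear size that
    defines the orbit of the tuple."""
    idx = sorted(range(len(values)), key=lambda i: values[i])
    atoms = []
    for a, b in zip(idx, idx[1:]):
        op = "=" if values[a] == values[b] else "<"
        atoms.append(f"{varnames[a]} {op} {varnames[b]}")
    return " & ".join(atoms) if atoms else "true"
```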
## 4 A non-oligomorphic case: $(\mathbb{N},\\{<,0\\})$
We now turn to the study of the case of natural numbers equipped with the
usual order. This domain is not oligomorphic (cf Example 1.1), so there might
not exist loops in the transducer, as there are infinitely many orbits of
configurations. For simplicity, in the rest of this section,
$(\mathbb{N},\\{<,0\\})$ and $(\mathbb{Q}_{+},\\{<,0\\})$ (where
$\mathbb{Q}_{+}$ is the set of rational numbers which are greater than or
equal to $0$) are respectively denoted $\mathbb{N}$ and $\mathbb{Q}_{+}$. We
thus need to study more precisely which paths are iterable, i.e. can be
repeated unboundedly or infinitely many times. We show that such property only
depends on the relative order between the registers, i.e. on the type of the
configurations, seen as belonging to $\mathbb{Q}_{+}$, where the type of a
configuration is an FO-formula describing its orbit (cf Proposition 17 and
Section 4.2). More generally, the fact that $\mathbb{N}$ is a subdomain of
$\mathbb{Q}_{+}$, along with the property that any finite run in
$\mathbb{Q}_{+}$ corresponds to a run in $\mathbb{N}$ (by multiplying all
involved data values by the product of their denominator), allows us to
provide a characterisation of continuity and uniform continuity which yields a
PSpace decision procedure for those properties.
### 4.1 On loop iteration
[Diagrams of the two NRAs of Example 4.1: the first has states $0$, $1$, an initial transition $\top,\downarrow{}r$ and a loop $*<r,\downarrow{}r$ (a strictly decreasing sequence); the second has an initial guess $?r_{M}$ and a loop $r<*<r_{M},\downarrow{}r$ (a strictly increasing sequence bounded by $r_{M}$).]
The NRA of Example 4.1 is non-empty in $\mathbb{Q}_{+}$, since it accepts e.g.
the word $1\cdot\dfrac{1}{2}\cdots\dfrac{1}{n}\cdots$. However, it is empty in
$\mathbb{N}$. Indeed, any data word compatible with its only infinite run
necessarily forms an infinite descending chain, which is impossible in
$\mathbb{N}$. Similarly, in Example 4.1, the NRA initially guesses some upper
bound $B$ which it stores in $r_{M}$, and then asks to see an infinite
increasing chain which is bounded from above by $B$. This is possible in
$\mathbb{Q}_{+}$, but not in $\mathbb{N}$.
That is why we need to study more closely what makes a given path _$\omega$
-iterable_, i.e. that can be taken an infinite number of times. To
characterise continuity, we will also need the weaker notion of _iterable_
path, i.e. of a path that can be taken arbitrarily many times over finite
inputs which are increasing for the prefix order. For instance, the loop in
Example 4.1 is not iterable: the first letter in the input sets a bound on the
number of times it can be taken. The loop in Example 4.1 is iterable: it
suffices to guess bigger and bigger values of the initial upper bound.
However, there can be no infinite run which contains infinitely many
occurrences of such a loop, as the value that is initially guessed for a given
run sets a bound on the number of times the loop can be taken, so it is not
$\omega$-iterable.
We show that the notions of iterability and $\omega$-iterability are both
characterised by properties on the order between registers of a pair of
configurations, which can be summed up into a type, hence opening the way to
deciding such properties.
### 4.2 $\mathbb{Q}$-types
In our study, the relative order between registers plays a key role. Such
information is summed up by the type of the configuration, interpreted as a
configuration in $\mathbb{Q}_{+}$.
Since we will need to manipulate several copies of a given set of registers,
we adopt the following convention: for a set of registers $R$, we assume that
$R_{1}$ and $R_{2}$ are two disjoint copies of
$R$, whose elements are respectively $r_{1}$ and $r_{2}$ for $r\in R$.
Similarly, $R^{\prime}$ is a primed copy of $R$, whose elements are
$r^{\prime}$ for $r\in R$. Note that the two can be combined to get
$R_{1}^{\prime},R_{2}^{\prime}$. Note also that primes and indices are also
used as usual notations, but no ambiguity should arise.
For a register valuation $\bar{d}:R\rightarrow\mathbb{N}$, we define
$\tau(\bar{d})$ as $\tau_{\mathbb{Q}_{+}}(\bar{d})$ the type of the valuation
in $\mathbb{Q}_{+}$, i.e. an FO-formula describing its orbit (such an FO-
formula exists by Proposition 17 since $\mathbb{Q}_{+}$ is oligomorphic). Note
that such type can be represented e.g. by the formula
$\bigwedge_{\bowtie\in\\{<,>,=\\}}\bigwedge_{r,r^{\prime}\in
R\mid\bar{d}(r)\bowtie\bar{d}(r^{\prime})}r\bowtie
r^{\prime}\wedge\bigwedge_{r\in R\mid\bar{d}(r)=0}r=0$. We extend the notion
to configurations $(p,\bar{d})\in Q\times\mathbb{N}^{R}$ by letting
$\tau((p,\bar{d}))=(p,\tau(\bar{d}))$. Thus, the type specifies the current
state, and summarises the information of the order between registers, as well
as whether they are equal to $0$ or not.
We will also need to have access to the relative order between registers of
two configurations. Thus, for two register valuations
$\bar{d}_{1},\bar{d}_{2}:R\rightarrow\mathbb{N}$, we define
$\sigma(\bar{d}_{1},\bar{d}_{2})=\tau_{\mathbb{Q}_{+}}(\bar{d}_{1}\uplus\bar{d}_{2}^{\prime})$,
where $\bar{d}_{1}\uplus\bar{d}_{2}^{\prime}$ is the disjoint union of
$\bar{d}_{1}$ and of a primed copy of $\bar{d}_{2}$, so that the registers of
$\bar{d}_{2}$ can be distinguished from those of $\bar{d}_{1}$. We then have,
for all registers $r,s\in R$ and all relations $\bowtie\in\\{<,>,=\\}$ that
$\sigma(\bar{d}_{1},\bar{d}_{2})\Rightarrow r\bowtie s^{\prime}$ if and only
if $\bar{d}_{1}(r)\bowtie\bar{d}_{2}(s)$. Again, the definition is naturally
lifted to configurations by letting
$\sigma((p,\bar{d}_{1}),(q,\bar{d}_{2}))=(p,q,\sigma(\bar{d}_{1},\bar{d}_{2}))$.
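The types $\tau$ and $\sigma$ can be computed directly from valuations. As a sketch, we represent a type by its comparison table and zero set, which matches the formula given above; the dictionary encoding is ours.

```python
def tau(d):
    """Type of a valuation d: register -> value, as in the text: the
    order relations between registers plus the zero tests."""
    regs = sorted(d)
    order = {}
    for r in regs:
        for s in regs:
            order[(r, s)] = "<" if d[r] < d[s] else (">" if d[r] > d[s] else "=")
    zeros = frozenset(r for r in regs if d[r] == 0)
    return (frozenset(order.items()), zeros)

def sigma(d1, d2):
    """Pairwise type sigma(d1, d2): the type of the disjoint union of
    d1 and a primed copy of d2, so that cross-comparisons between r and
    s' are recorded."""
    union = dict(d1)
    union.update({r + "'": v for r, v in d2.items()})
    return tau(union)
```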
###### Remark 23.
Recall that by definition of an orbit, we have that for any register
valuations $\bar{d}_{1}$ and $\bar{d}_{2}$ such that
$\tau(\bar{d}_{1})=\tau(\bar{d}_{2})$, there exists an automorphism
$\mu\in\mathrm{Aut}(\mathbb{Q}_{+})$ such that $\mu(\bar{d}_{1})=\bar{d}_{2}$.
The core property is the following:
Let $R$ be a set of registers, and let $\sigma$ be a $\mathbb{Q}$-type defined
over $R\uplus R^{\prime}$, where $R^{\prime}$ is a primed copy of $R$. We say
that $\sigma$ has the property $\star$ for the set of registers $X\subseteq R$
if:
* •
for all $r\in X$, $\sigma\Rightarrow r\leq r^{\prime}$
* •
for all $r,s\in X$, if $\sigma\Rightarrow s=s^{\prime}$ and $\sigma\Rightarrow
r\leq s$, then $\sigma\Rightarrow r=r^{\prime}$
By extension, for two configurations $C$ and $C^{\prime}$ over $R$, we say
that $C$ and $C^{\prime}$ have the property $\star$ for the set of registers
$X\subseteq R$ if $\sigma(C,C^{\prime})$ has the $\star$ property for $X$.
Finally, when $X=R$, we simply state that $\sigma$ has the $\star$ property.
Such property ensures, for the considered subset of registers, that they
cannot induce infinite descending chains nor infinite bounded increasing
chains (as both are not feasible in $\mathbb{N}$), if a run loops over
configurations whose pairwise type is $\sigma$.
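The $\star$ property is a finite check on $\sigma(C,C^{\prime})$. As a sketch, we read $\sigma$ off a pair of concrete valuations (our own encoding):

```python
def star_property(d, dp, X=None):
    """Check the * property of sigma(d, dp) for the registers in X
    (defaults to all): every register is non-decreasing, and a register
    lying below a stationary register must itself be stationary."""
    X = set(d) if X is None else set(X)
    if any(dp[r] < d[r] for r in X):       # some r with r > r'
        return False
    for s in X:
        if d[s] == dp[s]:                  # sigma => s = s'
            for r in X:
                if d[r] <= d[s] and d[r] != dp[r]:   # r <= s but r != r'
                    return False
    return True
```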
### 4.3 Relations between machines over $\mathbb{N}$ and over $\mathbb{Q}_{+}$
There is a tight relation between machines operating over $\mathbb{N}$ and
over $\mathbb{Q}_{+}$. First, since $\mathbb{N}$ is a subdomain of
$\mathbb{Q}_{+}$, runs in $\mathbb{N}$ are also runs in $\mathbb{Q}_{+}$. Over
finite runs, by multiplying all data values by the product of their
denominators, we can get the converse property.
###### Proposition 24.
Let $X\subset_{f}\mathbb{Q}_{+}$ be a finite subset of $\mathbb{Q}_{+}$. There
exists an automorphism $\lambda\in\mathrm{Aut}(\mathbb{Q}_{+})$ such that
$\lambda(X)\subset\mathbb{N}$, $\lambda(\mathbb{N})\subseteq\mathbb{N}$ and
$\lambda$ is non-contracting, i.e. for all $x,y\in\mathbb{Q}_{+}$,
$\lvert\lambda(x)-\lambda(y)\rvert\geq\lvert x-y\rvert$.
###### Proof 4.1.
By writing $X=\left\\{\frac{p_{1}}{q_{1}},\dots,\frac{p_{n}}{q_{n}}\right\\}$,
let $K=\prod_{i}q_{i}$. Then $\lambda:d\mapsto Kd$ is an automorphism
satisfying the required properties.
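The automorphism of Proposition 24 is explicit enough to write down; a sketch using exact rational arithmetic:

```python
from fractions import Fraction

def clearing_automorphism(X):
    """Automorphism lambda of (Q+, <) from Proposition 24: multiply by
    K, the product of the denominators of the elements of X.  It maps X
    into N, preserves N, and is non-contracting since K >= 1."""
    K = 1
    for x in X:
        K *= Fraction(x).denominator
    return lambda d: K * Fraction(d)
```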
We then get the following:
###### Proposition 25.
Let $A$ be a NRA over $\mathbb{Q}_{+}$, and let $\nu$ and $\nu^{\prime}$ be
$\mathbb{Q}$-types.
If there exist two configurations
$C=(p,\bar{d}),C^{\prime}=(q,\bar{d}^{\prime})$ where
$\bar{d},\bar{d}^{\prime}:R\rightarrow\mathbb{Q}_{+}$ are such that
$\tau(\bar{d})=\nu$, $\tau(\bar{d}^{\prime})=\nu^{\prime}$ and if there exists
a data word $u\in\mathbb{Q}_{+}^{*}$ such that $C\xrightarrow{u}C^{\prime}$,
then there also exist two configurations $D=(p,\bar{e})$,
$D^{\prime}=(q,\bar{e}^{\prime})$ and a data word $w$ which satisfy the same
properties, i.e. $\tau(D)=\nu$, $\tau(D^{\prime})=\nu^{\prime}$ and
$D\xrightarrow{w}D^{\prime}$; and which belong to $\mathbb{N}$, i.e. such that
$\bar{e},\bar{e}^{\prime}:R\rightarrow\mathbb{N}$ and $w\in\mathbb{N}^{*}$.
###### Proof 4.2.
Let $A$ be a NRA over $\mathbb{Q}_{+}$, and assume that
$C\xrightarrow{u}C^{\prime}$. By applying Proposition 24 to $X=C(R)\cup
C^{\prime}(R)\cup\mathsf{data(u)}$ (states do not play a role here), we get
that $\lambda(C)\xrightarrow{\lambda(u)}\lambda(C^{\prime})$ is also a run of
$A$, and $D=\lambda(C)$, $D^{\prime}=\lambda(C^{\prime})$ and $w=\lambda(u)$
satisfy the required properties.
###### Remark 26.
As a corollary, we obtain that for any NRA $A$ over finite words,
$L_{\mathbb{N}}(A)\neq\varnothing$ if and only if
$L_{\mathbb{Q}_{+}}(A)\neq\varnothing$.
Note that such property does not hold over infinite runs, as witnessed by
Examples 4.1 and 4.1. The property $\star$ ensures that a loop can be iterated
in $\mathbb{N}$, as shown in the next key proposition:
###### Proposition 27.
Let $T$ be an NRT and assume that $B\xrightarrow{u}B^{\prime}$ following some
sequence of transitions $\pi$, where $\tau(B)=\tau(B^{\prime})$,
$u\in\mathbb{Q}_{+}^{*}$ and $B$ and $B^{\prime}$ have the property $\star$.
Then there exists an infinite run $D\xrightarrow{x}$ over the sequence of
transitions $\pi^{\omega}$, with $x\in\mathbb{N}^{\omega}$ and
$\tau(D)=\tau(B)$.
Before showing this proposition, let us introduce some intermediate notions:
Let $\bar{d},\bar{d}^{\prime}$ be two register valuations over
$\mathbb{Q}_{+}$ such that $\tau(\bar{d})=\tau(\bar{d}^{\prime})$. We say that
$\bar{d}^{\prime}$ is _wider_ than $\bar{d}$ whenever for any $r,s\in R$, we
have:
* •
$\lvert\bar{d}^{\prime}(s)-\bar{d}^{\prime}(r)\rvert\geq\lvert\bar{d}(s)-\bar{d}(r)\rvert$
* •
$\bar{d}^{\prime}(r)\geq\bar{d}(r)$
Note that the second item of the definition is required to ensure that the
interval between $\bar{d}^{\prime}(r)$ and $0$ is also wider than the interval
between $\bar{d}(r)$ and $0$.
The notion is extended to configurations by saying that
$C^{\prime}=(p^{\prime},\bar{d}^{\prime})$ is wider than $C=(p,\bar{d})$ if
$\bar{d}^{\prime}$ is wider than $\bar{d}$. In the following, we only apply
this notion to configurations $C$ and $C^{\prime}$ that have the same state,
i.e. $p=p^{\prime}$.
###### Proposition 28.
Let $\sigma$ be a type over $R\uplus R^{\prime}$ such that $\sigma_{\mid
R}=\sigma_{\mid R^{\prime}}$. If $\sigma$ has the $\star$ property, then there
exist two register valuations $\bar{d}$ and $\bar{d}^{\prime}$ in $\mathbb{N}$
such that $\sigma(\bar{d},\bar{d}^{\prime})=\sigma$ and such that
$\bar{d}^{\prime}$ is wider than $\bar{d}$.
###### Proof 4.3.
First, assume that there exists some register $r_{0}\in R$ such that
$\sigma\Rightarrow r_{0}=0$ (thus $\sigma\Rightarrow r_{0}^{\prime}=0$, since
we assumed that $\sigma_{\mid R}=\sigma_{\mid R^{\prime}}$). If this is not
the case, consider instead the type $\sigma_{0}=\sigma\wedge r_{0}=0\wedge
r^{\prime}_{0}=0$ over $R\uplus\\{r_{0}\\}$. Indeed, if $\bar{d}$ and
$\bar{d}^{\prime}$ are two valuations in $\mathbb{N}$ such that
$\sigma(\bar{d},\bar{d}^{\prime})=\sigma_{0}$ and $\bar{d}^{\prime}$ is wider
than $\bar{d}$, then $\bar{d}_{\mid R}$ and $\bar{d}^{\prime}_{\mid
R}$ are two valuations in $\mathbb{N}$ such that
$\sigma(\bar{d},\bar{d}^{\prime})=\sigma$ and $\bar{d}^{\prime}_{\mid R}$ is
wider than $\bar{d}_{\mid R}$.
Now, for a pair of valuations $(\bar{d},\bar{d}^{\prime})$, define its
shrinking intervals:
$S(\bar{d},\bar{d}^{\prime})=\\{(r,s)\mid\lvert\bar{d}^{\prime}(s)-\bar{d}^{\prime}(r)\rvert<\lvert\bar{d}(s)-\bar{d}(r)\rvert\\}$,
and say that $(\bar{d},\bar{d}^{\prime})$ has $k=\lvert
S(\bar{d},\bar{d}^{\prime})\rvert$ shrinking intervals. We need to show that
given a pair of valuations $\bar{e}$ and $\bar{e}^{\prime}$ such that
$\sigma(\bar{e},\bar{e}^{\prime})=\sigma$, if they have $k>0$ intervals that
shrink, we can exhibit a pair of valuations $\bar{d}$ and $\bar{d}^{\prime}$
such that $\sigma(\bar{d},\bar{d}^{\prime})=\sigma$ and which has $l<k$
intervals that shrink.
Thus, let $\bar{e}$ and $\bar{e}^{\prime}$ be two valuations such that
$\sigma(\bar{e},\bar{e}^{\prime})=\sigma$ and which have $k>0$ shrinking
intervals. Then, let $(r,s)\in S(\bar{e},\bar{e}^{\prime})$; w.l.o.g. assume
that $\bar{e}^{\prime}(s)\geq\bar{e}^{\prime}(r)$. As $\bar{e}$ and
$\bar{e}^{\prime}$ have the same type, we get $\bar{e}(s)\geq\bar{e}(r)$.
Moreover, as $(r,s)\in S$, $\bar{e}(s)\neq\bar{e}(r)$, so
$\bar{e}(s)>\bar{e}(r)$, which implies
$\bar{e}^{\prime}(s)>\bar{e}^{\prime}(r)$. Finally, we have that
$\bar{e}^{\prime}(s)>\bar{e}(s)$. Indeed, since $\sigma$ has the $\star$
property, we have that $\bar{e}^{\prime}(s)\geq\bar{e}(s)$, and moreover if we
had $\bar{e}^{\prime}(s)=\bar{e}(s)$, we would get that
$\bar{e}(r)=\bar{e}^{\prime}(r)$ as $\bar{e}(r)\leq\bar{e}(s)$, which would
mean that $(r,s)\notin S(\bar{e},\bar{e}^{\prime})$.
Finally, let $M=\max\left(\\{\bar{e}(t)\mid t\in
R,\bar{e}(t)<\bar{e}^{\prime}(s)\\}\cup\\{\bar{e}^{\prime}(t)\mid t\in
R,\bar{e}^{\prime}(t)<\bar{e}^{\prime}(s)\\}\right)$ be the maximum value seen
in $\bar{e}$ and $\bar{e}^{\prime}$ which is lower than $\bar{e}^{\prime}(s)$.
Note that we have $M\geq\bar{e}(r)$, $M\geq\bar{e}^{\prime}(r)$ and
$M\geq\bar{e}(s)$.
Now, let
$c=\big{(}\bar{e}(s)-\bar{e}(r)\big{)}-\big{(}\bar{e}^{\prime}(s)-\bar{e}^{\prime}(r)\big{)}$.
As $(r,s)\in S$, $c>0$. Then, consider the automorphism
$\mu\in\mathrm{Aut}(\mathbb{Q}_{+})$ defined as
$\left\\{\begin{array}[]{l}x\in[0;M]\mapsto x\\\
x\in]M;\bar{e}^{\prime}(s)]\mapsto
M+\frac{\bar{e}^{\prime}(s)+c-M}{\bar{e}^{\prime}(s)-M}(x-M)\\\
x\in]\bar{e}^{\prime}(s);+\infty[\mapsto x+c\end{array}\right.$
It can be checked that for all $x,y\in\mathbb{Q}_{+}$,
$\lvert\mu(y)-\mu(x)\rvert\geq\lvert y-x\rvert$, so $\lvert
S(\mu(\bar{e}),\mu(\bar{e}^{\prime}))\rvert\leq\lvert
S(\bar{e},\bar{e}^{\prime})\rvert$. Now, we have that $(r,s)\notin
S(\mu(\bar{e}),\mu(\bar{e}^{\prime}))$. Indeed, $\mu(\bar{e}(s))=\bar{e}(s)$,
$\mu(\bar{e}(r))=\bar{e}(r)$ and
$\mu(\bar{e}^{\prime}(r))=\bar{e}^{\prime}(r)$ since
$\bar{e}(s),\bar{e}(r),\bar{e}^{\prime}(r)\in[0;M]$. Finally,
$\mu(\bar{e}^{\prime}(s))=\bar{e}^{\prime}(s)+c$. Overall, we get that
$\big{(}\mu(\bar{e}(s))-\mu(\bar{e}(r))\big{)}-\big{(}\mu(\bar{e}^{\prime}(s))-\mu(\bar{e}^{\prime}(r))\big{)}=\big{(}\bar{e}(s)-\bar{e}(r)\big{)}-\big{(}\bar{e}^{\prime}(s)+c-\bar{e}^{\prime}(r)\big{)}=c-c=0$,
which means that
$\lvert\bar{e}^{\prime}(s)-\bar{e}^{\prime}(r)\rvert=\lvert\bar{e}(s)-\bar{e}(r)\rvert$,
so $(r,s)\notin S(\mu(\bar{e}),\mu(\bar{e}^{\prime}))$: $\lvert
S(\mu(\bar{e}),\mu(\bar{e}^{\prime}))\rvert<\lvert
S(\bar{e},\bar{e}^{\prime})\rvert$.
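The automorphism $\mu$ used above is piecewise linear and can be written down explicitly; a sketch with our own parameter names ($M$, $b=\bar{e}^{\prime}(s)$, $c$):

```python
from fractions import Fraction

def stretch_map(M, b, c):
    """The piecewise automorphism mu from the proof: identity on
    [0, M], linear stretch of (M, b] onto (M, b + c], and translation
    x + c beyond b.  Assumes 0 <= M < b and c > 0, as in the text."""
    M, b, c = Fraction(M), Fraction(b), Fraction(c)
    slope = (b + c - M) / (b - M)     # >= 1, so mu is non-contracting
    def mu(x):
        x = Fraction(x)
        if x <= M:
            return x
        if x <= b:
            return M + slope * (x - M)
        return x + c
    return mu
```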
Since $\mu\in\mathrm{Aut}(\mathbb{Q}_{+})$, we get that
$(\mu(\bar{e}),\mu(\bar{e}^{\prime}))$ is such that
$\sigma(\mu(\bar{e}),\mu(\bar{e}^{\prime}))=\sigma$, so we exhibited a pair of
valuations which has $l<k$ intervals that shrink.
By iteratively applying this process to $(\bar{e},\bar{e}^{\prime})$ until no
shrinking intervals remain, we get a pair $(\bar{d},\bar{d}^{\prime})$ which
is such that $\sigma(\bar{d},\bar{d}^{\prime})=\sigma$ and $\bar{d}^{\prime}$
is wider than $\bar{d}$. Now, by multiplying all data values in $\bar{d}$ and
$\bar{d}^{\prime}$ by the product of their denominators, we get two valuations
$\bar{f}$ and $\bar{f}^{\prime}$ which are in $\mathbb{N}$ such that
$\sigma(\bar{f},\bar{f}^{\prime})=\sigma$ and $\bar{f}^{\prime}$ is wider than
$\bar{f}$.
We finally need the following technical result:
###### Proposition 29.
Let $\bar{d}$ and $\bar{d}^{\prime}$ be two configurations in $\mathbb{N}$
such that $\bar{d}^{\prime}$ is wider than $\bar{d}$. Then, there exists some
automorphism $\mu\in\mathrm{Aut}(\mathbb{Q}_{+})$ such that
$\bar{d}^{\prime}=\mu(\bar{d})$ and $\mu(\mathbb{N})\subseteq\mathbb{N}$.
###### Proof 4.4.
Let $\\{a_{0},\dots,a_{k}\\}=\bar{d}(R)\cup\\{0\\}$ and
$\\{a^{\prime}_{0},\dots,a^{\prime}_{k}\\}=\bar{d}^{\prime}(R)\cup\\{0\\}$,
with $0=a_{0}<\dots<a_{k}$ and $a^{\prime}_{0}<\dots<a^{\prime}_{k}$. Note
that since $\tau(\bar{d})=\tau(\bar{d}^{\prime})$, we indeed have that
$\lvert\bar{d}(R)\cup\\{0\\}\rvert=\lvert\bar{d}^{\prime}(R)\cup\\{0\\}\rvert$;
moreover, since $\bar{d}^{\prime}$ is wider than $\bar{d}$, we have that for
all $0\leq i<k$, $a^{\prime}_{i+1}-a^{\prime}_{i}\geq a_{i+1}-a_{i}$. Consider the
following easy lemma (the proof is in Appendix B for completeness):
###### Lemma 30.
Let $a,b,c,d\in\mathbb{N}$ be such that $a<b$, $c<d$ and $d-c\geq b-a$. Then,
there exists a function $f:[a;b]\rightarrow[c;d]$ which is increasing and
bijective, and such that $f([a;b]\cap\mathbb{N})\subseteq\mathbb{N}$.
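One possible witness for Lemma 30 (our construction, one among many): translate the integer part of $[a;b]$ and stretch only the last unit interval onto what remains of $[c;d]$.

```python
from fractions import Fraction

def interval_map(a, b, c, d):
    """Witness for Lemma 30 (a < b, c < d, d - c >= b - a, all in N):
    translate [a, b-1] onto [c, c + (b-1-a)], then stretch the final
    unit interval [b-1, b] onto [c + (b-1-a), d].  The map is
    increasing, bijective, and sends integers to integers."""
    a, b, c, d = map(Fraction, (a, b, c, d))
    knee_x, knee_y = b - 1, c + (b - 1 - a)
    def f(x):
        x = Fraction(x)
        if x <= knee_x:
            return c + (x - a)
        return knee_y + (d - knee_y) * (x - knee_x)
    return f
```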
Then, apply it to each interval $[a_{i};a_{i+1}]$ to get a family of
increasing and bijective functions $(\mu_{i})_{0\leq i<k}$ which are such that
$\mu_{i}([a_{i};a_{i+1}])=[a^{\prime}_{i};a^{\prime}_{i+1}]$ and
$\mu_{i}([a_{i};a_{i+1}]\cap\mathbb{N})\subseteq\mathbb{N}$. Then, let
$\mu_{k}:x\in[a_{k};+\infty[\mapsto x+a^{\prime}_{k}-a_{k}$. We get that
$\mu=\cup_{0\leq i\leq k}\mu_{i}\in\mathrm{Aut}(\mathbb{Q}_{+})$ is such that
$\bar{d}^{\prime}=\mu(\bar{d})$ and satisfies
$\mu(\mathbb{N})\subseteq\mathbb{N}$.
We are now ready to prove Proposition 27:
###### Proof 4.5 (Proof of Proposition 27).
Let $T$ be an NRT and assume that $B\xrightarrow[\pi]{u}B^{\prime}$ following
some sequence of transitions $\pi$, where $\tau(B)=\tau(B^{\prime})$,
$u\in\mathbb{Q}_{+}^{*}$ and $B$ and $B^{\prime}$ have the property $\star$.
Let $\sigma=\sigma(B,B^{\prime})$. By Proposition 28, we know that there exist
two configurations $C$ and $C^{\prime}$ in $\mathbb{N}$ such that
$\sigma(C,C^{\prime})=\sigma$ and $C^{\prime}$ is wider than $C$. Let
$\nu\in\mathrm{Aut}(\mathbb{Q}_{+})$ be some automorphism such that $\nu(B)=C$
and $\nu(B^{\prime})=C^{\prime}$ (such an automorphism exists since
$\sigma(B,B^{\prime})=\sigma(C,C^{\prime})$). Then, we have that
$C\xrightarrow[\pi]{\nu(u)}C^{\prime}$. By multiplying all involved data
values by the product of their denominators (cf Proposition 24), we get two
configurations $D$ and $D^{\prime}$, along with some data word $v$, all
belonging to $\mathbb{N}$, such that $D\xrightarrow[\pi]{v}D^{\prime}$, and
such that $D^{\prime}$ is wider than $D$. By Proposition 29, there exists an
automorphism $\mu\in\mathrm{Aut}(\mathbb{Q}_{+})$ such that
$\mu(D)=D^{\prime}$ and $\mu(\mathbb{N})\subseteq\mathbb{N}$. Thus, by letting
$x=v\mu(v)\mu^{2}(v)\dots$, we get that
$D\xrightarrow[\pi]{v}\mu(D)\xrightarrow[\pi]{\mu(v)}\mu^{2}(D)\dots$ is a run
over $x\in\mathbb{N}^{\omega}$ over the sequence of transitions
$\pi^{\omega}$, and $\tau(D)=\tau(B)$.
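The final step, building $x=v\mu(v)\mu^{2}(v)\dots$, can be sketched as a generator; the doubling map used in the usage below is just a sample automorphism of $\mathbb{Q}_{+}$ that preserves $\mathbb{N}$, not the $\mu$ of the proof.

```python
from fractions import Fraction
from itertools import islice

def iterate_word(v, mu):
    """Generate the infinite word x = v mu(v) mu^2(v) ... from the
    proof of Proposition 27, letter by letter."""
    block = [Fraction(d) for d in v]
    while True:
        yield from block
        block = [mu(d) for d in block]
```

For instance, `islice(iterate_word([1, 3], lambda d: 2 * d), 6)` yields the prefix $1,3,2,6,4,12$.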
A last important property is the following:
###### Proposition 31.
Let $A$ be a NRA over $\mathbb{Q}_{+}$, and assume that
$C\xrightarrow{u}C^{\prime}$ and that $C^{\prime\prime}\xrightarrow{v}$, where
$\tau(C^{\prime})=\tau(C^{\prime\prime})$, $u\in\mathbb{Q}_{+}^{*}$ and
$v\in\mathbb{N}^{\omega}$. Then there exist $w\in\mathbb{N}^{*}$,
$x\in\mathbb{N}^{\omega}$ and two configurations $D,D^{\prime}$ whose
valuations take their values in $\mathbb{N}$ such that
$D\xrightarrow{w}D^{\prime}\xrightarrow{x}$, $\tau(C)=\tau(D)$ and
$\tau(C^{\prime})=\tau(D^{\prime})$.
###### Proof 4.6.
Since $\tau(C^{\prime})=\tau(C^{\prime\prime})$, there exists some
$\mu\in\mathrm{Aut}(\mathbb{Q}_{+})$ such that
$\mu(C^{\prime})=C^{\prime\prime}$, thus
$\mu(C)\xrightarrow{\mu(u)}C^{\prime\prime}$. Now, by applying Proposition 24
to $X=(\mu(C))(R)\cup C^{\prime\prime}(R)\cup\mathsf{data}(\mu(u))$, we get
that
$\lambda(\mu(C))\xrightarrow{\lambda(\mu(u))}\lambda(C^{\prime\prime})\xrightarrow{\lambda(v)}$
satisfies the required property.
### 4.4 Emptiness of automata
###### Proposition 32 (Non-emptiness).
Let $A$ be an NRA over $\mathbb{N}^{\omega}$. The following are equivalent:
1. 1.
$L(A)$ is non-empty
2. 2.
there exist two runs whose input words belong to $\mathbb{Q}_{+}^{*}$, which
are as follows:
1. (a)
$C_{0}\xrightarrow{u}C$
2. (b)
$D\xrightarrow{v}D^{\prime}$ with $\tau(D)=\tau(D^{\prime})=\tau(C)$, $D$ is a
final configuration, and $\sigma=\sigma(D,D^{\prime})$ satisfies the $\star$
property.
###### Proof 4.7.
Assume $(2)$ holds. The result easily follows from Propositions 25, 27 and 31.
Assume now that $L(A)$ is non-empty. Let
$\rho=C_{0}\xrightarrow{d_{0}}C_{1}\xrightarrow{d_{1}}C_{2}\dots$ be an
accepting run over input $x=d_{0}d_{1}\dots$ in $A$ (where
$C_{0}=(q_{0},\overline{0})$). For each $i\geq 0$, let $\nu_{i}=\tau(C_{i})$.
As $\rho$ is accepting and there are only finitely many types, there exist
some accepting state $q$ and some type $\nu$ such that
$(q_{i},\nu_{i})=(q,\nu)$ for infinitely many $i\in\mathbb{N}$. Let
$(C_{j})_{j\in\mathbb{N}}$ be an infinite subsequence of $C_{i}$ such that for
all $j$, $\tau(C_{j})=\nu$. Now, colour the set of unordered pairs of indices
as follows: $c\left(\left\\{j,k\right\\}\right)=\sigma(C_{j},C_{k})$
(where we assume w.l.o.g. that $j<k$). By Ramsey’s theorem, there is an
infinite subset such that all pairs have the same colour $\sigma$. Let
$(C_{k})_{k\in\mathbb{N}}$ be an infinite subsequence such that for all $j<k$,
$\sigma(C_{j},C_{k})=\sigma$. Now, assume that $\sigma$ breaks the $\star$
property. There are two cases:
* •
There exists some $r$ such that $\sigma\Rightarrow r>r^{\prime}$. Then, it
means that for all $j<k$, $C_{j}(r)>C_{k}(r)$. In particular, this means that
$C_{0}(r)>C_{1}(r)>\dots>C_{n}(r)>\dots$, which yields an infinite descending
chain in $\mathbb{N}$, and leads to a contradiction.
* •
There exists some $s$ such that $\sigma\Rightarrow s=s^{\prime}$ and some $r$
which satisfies $\sigma\Rightarrow r<s$ and $\sigma\not\Rightarrow
r=r^{\prime}$. If $\sigma\Rightarrow r>r^{\prime}$, we are back to the first
case. Otherwise, it means $\sigma\Rightarrow r<r^{\prime}$. Then, on the one
hand, $C_{0}(r)<C_{1}(r)<\dots<C_{n}(r)<\dots$. On the other hand,
$C_{0}(s)=C_{1}(s)=\dots=C_{n}(s)=\dots$. But we also have that for all
$k\in\mathbb{N},C_{k}(r)<C_{k}(s)=C_{0}(s)$. Overall, we get an infinite
increasing chain which is bounded from above by $C_{0}(s)$, which again leads
to a contradiction.
Thus, $\sigma$ satisfies the $\star$ property. So, this is in particular the
case for some pair of configurations $C=D=C_{k}$ and
$D^{\prime}=C_{k^{\prime}}$ for some $k<k^{\prime}$ taken from the last
extracted subsequence. These configurations satisfy (recall that the
$C_{k}$ are configurations of an accepting run over some input, a run which
in particular starts in $C_{0}$):
1. (a)
$C_{0}\xrightarrow{u}C$
2. (b)
$D\xrightarrow{v}D^{\prime}$.
Moreover, $\tau(D)=\tau(D^{\prime})=\tau(C)$ and $D$ is final, by definition
of $(C_{k})_{k\in\mathbb{N}}$.
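The Ramsey extraction used in this proof can be illustrated on a toy model. In the sketch below (an assumption made purely for illustration), configurations are reduced to a single register valuation in $\mathbb{N}$ and the pairwise type $\sigma(C_{j},C_{k})$ to the order relation between the two values; the greedy pair-Ramsey argument then extracts a subsequence on which all pairs share the same colour:

```python
from collections import Counter

def colour(cj, ck):
    # toy pairwise type of (C_j, C_k), j < k: order of the register values
    return '<' if cj < ck else ('=' if cj == ck else '>')

def monochromatic_subsequence(seq):
    """Greedy pair-Ramsey extraction: returns indices I and a colour c
    such that colour(seq[i], seq[j]) == c for all i < j in I."""
    pivots, idx = [], list(range(len(seq)))
    while idx:
        p, rest = idx[0], idx[1:]
        classes = {}
        for q in rest:
            classes.setdefault(colour(seq[p], seq[q]), []).append(q)
        if not classes:          # p is the last element, no pair to colour
            break
        c, cls = max(classes.items(), key=lambda kv: len(kv[1]))
        pivots.append((p, c))    # record p with its majority colour
        idx = cls                # recurse into the largest colour class
    if not pivots:
        return idx, None
    best = Counter(c for _, c in pivots).most_common(1)[0][0]
    return [p for p, c in pivots if c == best], best
```

On an infinite sequence the same recursion (keeping an infinite colour class at each step) yields an infinite monochromatic subsequence, which is exactly how $(C_{k})_{k\in\mathbb{N}}$ is obtained.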
###### Corollary 33.
Emptiness for NRA over $\mathbb{N}^{\omega}$ is decidable in PSpace.
###### Proof 4.8.
The algorithm is similar to the one for deciding non-emptiness for NRA over
oligomorphic domains. Indeed, the sought witness lies in $\mathbb{Q}_{+}$,
which is oligomorphic; it suffices to additionally check that the pairwise
type of $D$ and $D^{\prime}$ satisfies the star property. Thus, the algorithm
initially guesses $\tau(C)$ and $\sigma$. Then, checking that there indeed
exists a configuration whose type is $\tau(C)$ and that can be reached from
$C_{0}$ (item 2a of Proposition 32) can be done in the same way as for Theorem
21, by simulating symbolically (i.e. over $\mathbb{Q}$-types) a run of the
automaton. Now, for item 2b, the algorithm again symbolically simulates a run
from $D$, by keeping track of the type of the current configuration
$\tau(D^{\prime})$, and additionally of the pairwise type
$\sigma(D,D^{\prime})$. Since $\sigma(D,D^{\prime})$ is a $\mathbb{Q}$-type
over $2\lvert R\rvert$ registers, it can be stored in polynomial space;
moreover, given a transition test $\phi$, it can also be updated in polynomial
space.
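As a hedged sketch of the additional check, the $\star$ property of a stored pairwise type can be tested directly from the two failure cases analysed in the proof of Proposition 32. The encoding below (two dictionaries) is an assumption about how $\sigma$ is represented, not the paper's data structure:

```python
# Assumed encoding of a pairwise type sigma over registers R and their
# primed copies R':
#   self_rel[r] in {'<', '=', '>'} : relation sigma enforces between r and r'
#   less[r]                        : set of registers s with sigma => r < s
def breaks_star(self_rel, less):
    # failure case 1: some register strictly decreases (sigma => r > r')
    if any(rel == '>' for rel in self_rel.values()):
        return True
    # failure case 2: some r strictly below a frozen register s (s = s')
    # is not itself frozen (sigma does not imply r = r')
    for s, rel_s in self_rel.items():
        if rel_s != '=':
            continue
        for r, rel_r in self_rel.items():
            if s in less.get(r, set()) and rel_r != '=':
                return True
    return False

# r increases while bounded above by the frozen register s: star fails,
# since iterating would force an unbounded strictly increasing chain below s.
sigma_bad = ({'r': '<', 's': '='}, {'r': {'s'}})
sigma_ok = ({'r': '<', 's': '='}, {'s': {'r'}})
```

The check is clearly polynomial in $\lvert R\rvert$, consistent with the PSpace bound claimed above.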
### 4.5 Functionality
Following the study of the relationships between $\mathbb{N}$ and
$\mathbb{Q}$, we are now ready to provide a characterization of
non-functionality over $\mathbb{N}$. Intuitively, it amounts to finding two pairs
of runs whose inputs are in $\mathbb{Q}$: first, a prefix witnessing a
mismatch, and second, an accepting loop satisfying the $\star$ property to
ensure its iterability over $\mathbb{N}$.
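Throughout this subsection, $u_{1}\mathrel{\centernot{\parallel}}u_{2}$ denotes a mismatch: the two words differ at some position where both are defined (so neither is a prefix of the other). A minimal sketch of this check, on words modelled as Python lists:

```python
# Two finite data words mismatch iff they differ at some position where
# both are defined; in particular a word never mismatches its own prefixes.
def mismatch(u1, u2):
    return any(a != b for a, b in zip(u1, u2))

# Automorphisms act pointwise and are injective, so applying one to both
# words preserves mismatches, a fact used in the proof of Proposition 34.
double = lambda d: 2 * d  # toy injective renaming
```

For instance, `mismatch([1, 2, 5], [1, 3])` holds while `mismatch([1, 2], [1, 2, 5])` does not, and applying `double` pointwise changes neither answer.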
###### Proposition 34 (Functionality).
Let $R\subseteq\mathbb{N}^{\omega}\times\mathbb{N}^{\omega}$ be given by an
NRT $T$. The following are equivalent:
1. 1.
$R$ is not functional
2. 2.
there exist two pairs of runs whose input words belong to
$\mathbb{Q}_{+}^{*}$, which are as follows:
1. (a)
$C_{0}\xrightarrow{u|u_{1}}C_{1}$ and $C_{0}\xrightarrow{u|u_{2}}C_{2}$ with
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}$,
2. (b)
$D_{1}\xrightarrow{v}D^{\prime}_{1}$ and $D_{2}\xrightarrow{v}D^{\prime}_{2}$
with $\tau(D_{1}\uplus D_{2})=\tau(D^{\prime}_{1}\uplus
D^{\prime}_{2})=\tau(C_{1}\uplus C_{2})$, both runs visit a final state of $T$, and
$\sigma=\tau((D_{1}\uplus D_{2})\uplus(D^{\prime}_{1}\uplus
D^{\prime}_{2}))$ satisfies the $\star$ property.
###### Proof 4.9.
First, assume that (2) holds. By applying Proposition 27 to the product
transducer $T\times T$, there exist two configurations $E_{1}$ and $E_{2}$ and
an infinite data word $x\in\mathbb{N}^{\omega}$ such that
$E_{1}\xrightarrow{x\mid y_{1}}$ and $E_{2}\xrightarrow{x\mid y_{2}}$.
Moreover, both runs are accepting as each finite run is required to visit a
final state of $T$.
Now, since $\tau(E_{1}\uplus E_{2})=\tau(D_{1}\uplus D_{2})=\tau(C_{1}\uplus
C_{2})$, we can apply Proposition 31 to get two runs
$C_{0}\xrightarrow{u^{\prime}\mid
u^{\prime}_{1}}F_{1}\xrightarrow{x^{\prime}\mid y_{1}^{\prime}}$ and
$C_{0}\xrightarrow{u^{\prime}\mid
u^{\prime}_{2}}F_{2}\xrightarrow{x^{\prime}\mid y_{2}^{\prime}}$. Moreover,
since automorphisms preserve mismatches, we know that
$u^{\prime}_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u^{\prime}_{2}$.
Thus, we obtained a witness of non-functionality, since we have
$(u^{\prime}x,u^{\prime}_{1}y_{1}^{\prime}),(u^{\prime}x,u^{\prime}_{2}y_{2}^{\prime})\in
T$, with $u^{\prime}_{1}y_{1}^{\prime}\neq u^{\prime}_{2}y_{2}^{\prime}$ since
$u^{\prime}_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u^{\prime}_{2}$.
We now assume that $R$ is not functional. By definition, and as we assume $R$
only contains infinite output words, there are two runs on some input word $x$
whose outputs mismatch. By splitting both runs after the first mismatch, we
get two runs $C_{0}\xrightarrow{t\mid t_{1}}B_{1}\xrightarrow{w\mid y}$ and
$C_{0}\xrightarrow{t\mid t_{2}}B_{2}\xrightarrow{w\mid z}$ with
$t_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}t_{2}$ (note
that we do not necessarily have that $y=z$). Now, from the product transducer
$T\times T$, one can define an NRA $A$ with registers $R\uplus R^{\prime}$
recognising the language $L(A)=\left\\{w^{\prime}\mid
E_{1}\xrightarrow[T]{w^{\prime}\mid
y^{\prime}},E_{2}\xrightarrow[T]{w^{\prime}\mid z^{\prime}}\text{ and
}\tau(E_{1}\uplus E_{2})=\tau(B_{1}\uplus B_{2})\right\\}$ which starts by
guessing some configuration $E_{1}\uplus E_{2}$ and checks that
$\tau(E_{1}\uplus E_{2})=\tau(B_{1}\uplus B_{2})$, then simulates $T\times T$.
This language is non-empty, since it at least contains $w$. By Proposition 32,
we get that there exist runs whose input words belong to $\mathbb{Q}_{+}^{*}$
which are $C_{0}\uplus C_{0}\xrightarrow{u^{\prime}}C_{1}\uplus C_{2}$ and
$D_{1}\uplus D_{2}\xrightarrow{v}D^{\prime}_{1}\uplus D^{\prime}_{2}$ with
$\tau(D_{1}\uplus D_{2})=\tau(D^{\prime}_{1}\uplus
D^{\prime}_{2})=\tau(C_{1}\uplus C_{2})$, $D_{1}$ and $D^{\prime}_{1}$ final
and $\sigma(D_{1}\uplus D_{2},D^{\prime}_{1}\uplus D^{\prime}_{2})$ satisfies
the $\star$ property, which immediately yields item 2b.
Since $C_{0}\uplus C_{0}\xrightarrow{u^{\prime}}C_{1}\uplus C_{2}$, by
definition of the considered NRA, we have that
$E_{1}\xrightarrow[T]{u^{\prime}\mid u^{\prime}_{1}}C_{1}$ and
$E_{2}\xrightarrow[T]{u^{\prime}\mid u^{\prime}_{2}}C_{2}$, with
$\tau(E_{1}\uplus E_{2})=\tau(B_{1}\uplus B_{2})$. Thus, by applying an
automorphism $\mu$ such that $\mu(B_{1})=E_{1}$ and $\mu(B_{2})=E_{2}$, such runs
can be glued with the runs $C_{0}\xrightarrow{t\mid t_{1}}B_{1}$ and
$C_{0}\xrightarrow{t\mid t_{2}}B_{2}$ to yield two runs
$C_{0}\xrightarrow{u\mid u_{1}}C_{1}$ and $C_{0}\xrightarrow{u\mid
u_{2}}C_{2}$, with $u=\mu(t)u^{\prime}$ and $u_{1}=\mu(t_{1})u^{\prime}_{1}$,
$u_{2}=\mu(t_{2})u^{\prime}_{2}$, with
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}$ since
$\mu(t_{1})\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}\mu(t_{2})$
(recall that automorphisms preserve mismatches, since they are bijective), so
we get item 2a.
###### Corollary 35.
Functionality for relations over
$\mathbb{N}^{\omega}\times\mathbb{N}^{\omega}$ given by NRT is decidable in
PSpace.
###### Proof 4.10.
By Lemma 13, if item 2a holds, then we can assume that the length of $u$ is
bounded by $P(\mathcal{R}(4k),\lvert Q\rvert,L)$, where $\mathcal{R}$ denotes
the Ryll-Nardzewski function of $\mathbb{Q}_{+}$, which is exponential. Thus,
the existence of $u$ can be checked using a counter that fits in polynomial
space. Then, item 2b can be checked in polynomial space, since it reduces to
checking emptiness of the NRA $A$ that we described in the above proof.
### 4.6 Next-letter problem
We now show that for any function definable by an NRT over $\mathbb{N}$, the
next-letter problem is computable.
###### Lemma 36.
Let $f:\mathbb{N}^{\omega}\rightarrow\mathbb{N}^{\omega}$ be a function
defined by an NRT over $\mathbb{N}$. Then, its next-letter problem is
computable.
###### Proof 4.11.
The algorithm is in two phases:
1. 1.
decide whether there exists a next letter
2. 2.
if there exists a next letter, compute it
Recall that as input to the next-letter problem, there are two finite data
words $u,v\in\mathbb{N}^{*}$ and the goal is to decide whether there exists
some $d\in\mathbb{N}$ such that for all $uy\in\mathrm{dom}(f)$, $v\leq f(uy)$
implies $vd\leq f(uy)$. Let us assume that $f$ is defined by some NRT
$T=\left(Q,R,\Delta,q_{0},\overline{\mathsf{c}_{0}},F\right)$. Let us explain
how to realize these two steps algorithmically.
1\. To decide the existence of such a $d$, we reduce the problem to a
functionality problem for an NRT $T_{uv}$. First, let us define the
convolution $x_{1}\otimes x_{2}$ of two data words
$x_{1}=d_{1}d_{2}\dots,x_{2}=d^{\prime}_{1}d^{\prime}_{2}\dots\in\mathbb{N}^{\omega}$
as the data word $d_{1}d^{\prime}_{1}d_{2}d^{\prime}_{2}\dots$. The intuitive
idea is to construct $T_{uv}$ in such a way that it defines the relation
$R_{uv}=R_{uv}^{1}\cup R_{uv}^{2}$ defined by $R_{uv}^{i}=\\{(ux_{1}\otimes
ux_{2},vd^{\omega})\mid ux_{1},ux_{2}\in\mathrm{dom}(f),vd\preceq
f(ux_{i})\\}$. It is not difficult to see that $R_{uv}$ is a function iff the
next-letter problem has a positive answer for $u$ and $v$. Once $T_{uv}$ is
constructed, checking its functionality is possible thanks to Corollary 35.
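The convolution used in the reduction is easy to make explicit; here it is on finite prefixes, together with its inverse (splitting back into the odd- and even-indexed positions, 1-indexed, which is how $T_{uv}^{1}$ and $T_{uv}^{2}$ recover the two inputs):

```python
# Convolution of two data words over N: d1 d'1 d2 d'2 ...
def convolve(x1, x2):
    assert len(x1) == len(x2)
    out = []
    for d, dp in zip(x1, x2):
        out.extend((d, dp))
    return out

def deconvolve(x):
    # odd positions, even positions (1-indexed)
    return x[0::2], x[1::2]

w = convolve([1, 2, 3], [4, 5, 6])  # [1, 4, 2, 5, 3, 6]
```

On infinite words the definition is the same letter-by-letter interleaving, which is what $T_{uv}$ reads.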
Let us now explain how to construct $T_{uv}$. It is the union of two
transducers $T_{uv}^{1}$ and $T_{uv}^{2}$ defining $R_{uv}^{1}$ and
$R_{uv}^{2}$ respectively. The constructions are similar for both:
$T_{uv}^{1}$ ignores the even positions of the input word after $u$ has been
read while $T_{uv}^{2}$ ignores the odd positions after $u$ has been read, but
they otherwise are defined in the same way. Let us explain how to construct
$T_{uv}^{1}$. First, $T_{uv}^{1}$ makes sure that its input
$\alpha\otimes\beta$ is such that $u\leq\alpha$ and $u\leq\beta$, or
equivalently that $\alpha\otimes\beta=(u\otimes u)(x\otimes y)$ for some
$x,y$. This is possible by hardcoding $u$ in the transitions of $T$. Likewise,
$T_{uv}^{1}$ also makes sure that $v$ is a prefix of $f(ux)$. This is also
possible by hardcoding the output $v$ in $T$ (to check for instance that a run
of $T$ outputs the $i$th data value $d_{i}$ of $v$, it suffices, whenever $T$
outputs a register $r$ on a transition, to add the test $r=d_{i}$). So,
$T_{uv}^{1}$ simulates a run of $T$ on $ux$ (the odd positions of
$\alpha\otimes\beta$) and a run of $T$ on $uy$ (the even positions of
$\alpha\otimes\beta$). Once $v$ has been consumed by the simulated run on
$ux$, the first time this simulated run outputs something, the data value is
kept in a special register and output at every step by $T_{uv}^{1}$.
2\. If we know that there exists a next letter $d$, the only thing that
remains to be done is to compute it. To do so, we can again construct a
transducer $T^{\prime}_{uv}$ which simulates $T$ but makes sure to accept only
input words that start with $u$, accepted by runs which output words starting
with $v$. This can again be done by hardcoding $u$ and $v$ in $T$. Then, to
compute a data value $d$, it suffices to execute $T^{\prime}_{uv}$ by
computing, at each step $i$, all the configurations reached by
$T^{\prime}_{uv}$, and by keeping only those which are co-reachable. Testing
whether a configuration is co-reachable can be done by testing the
non-emptiness of an NRA with this configuration as initial configuration.
Doing so, the algorithm computes all prefixes of accepting runs. Eventually,
since there exists a next letter $d$, one of the runs will output it.
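Phase 2 can be sketched as the following on-the-fly exploration. This is an illustrative sketch under stated assumptions: `successors` and `co_reachable` are hypothetical helpers standing respectively for one simulation step of $T^{\prime}_{uv}$ (returning successor configurations together with the data value output so far, if any) and the non-emptiness test of Corollary 33; returning the first produced value is correct only because a next letter is known to exist, so all surviving runs agree on $d$:

```python
# Frontier-based exploration: advance all configurations of T'_{uv} one
# step at a time, prune those that are not co-reachable, and return the
# first data value output after v by a surviving run.
def next_letter(initial, successors, co_reachable, max_steps=1000):
    frontier = [(initial, None)]  # (configuration, first output after v)
    for _ in range(max_steps):
        new_frontier = []
        for conf, out in frontier:
            for conf2, out2 in successors(conf):
                if co_reachable(conf2):
                    new_frontier.append(
                        (conf2, out if out is not None else out2))
        frontier = new_frontier
        for conf, out in frontier:
            if out is not None:
                return out  # the next letter d
    return None  # bounded sketch only; the paper's procedure always halts
```

The `max_steps` bound is an artefact of this finite sketch; in the actual procedure termination is guaranteed by the existence of $d$.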
As a direct corollary of Lemma 36, Theorem 10 and Theorem 9, we obtain:
###### Theorem 37.
Let $f:\mathbb{N}^{\omega}\rightarrow\mathbb{N}^{\omega}$ be a function
defined by an NRT over $\mathbb{N}$, and let
$m:\mathbb{N}\rightarrow\mathbb{N}$ be a total function. Then,
1. 1.
$f$ is computable iff $f$ is continuous
2. 2.
$f$ is uniformly computable iff $f$ is uniformly continuous
3. 3.
$f$ is $m$-computable iff $f$ is $m$-continuous
### 4.7 Uniform continuity
We now turn to uniform continuity over $\mathbb{N}$, and show that it is
enough to consider uniform continuity over $\mathbb{Q}_{+}$ and restrict our
attention to configurations that are co-reachable w.r.t. data words over
$\mathbb{N}$.
Given a configuration $\mathbb{Q}$-type $\tau$, we say that it is co-reachable
in $\mathbb{N}$ if there is an actual configuration $C$ of type $\tau$ which
is co-reachable.
###### Proposition 38.
Let $T$ be an NRT over $(\mathbb{N},\left\\{<,0\right\\})$ and $f=\llbracket
T\rrbracket$. The following are equivalent:
* •
$f$ is not uniformly continuous,
* •
There is a critical pattern (see Definition 3.1.3) with $D_{1},D_{2}$ co-
reachable in $\mathbb{N}$.
###### Proof 4.12.
Let $T^{\prime}$ denote the same transducer as $T$ except that 1) it is over
$\mathbb{Q}$ and 2) it is restricted to configurations which are co-reachable
in $\mathbb{N}$.
We claim that $T$ realizes a uniformly continuous function if and only if
$T^{\prime}$ does. From Proposition 16, this is enough to show the expected
result.
Any run of $T$ is in particular a run of $T^{\prime}$, hence if $T$ is not
uniformly continuous then neither is $T^{\prime}$.
Conversely, let us assume that $T^{\prime}$ is not uniformly continuous.
According to Proposition 16, this means that we can exhibit a critical pattern
$C_{0}\xrightarrow{u|u_{1}}C_{1}\xrightarrow{v|v_{1}}\mu(C_{1})\xrightarrow{w|w_{1}}D_{1}$,
$C_{0}\xrightarrow{u|u_{2}}C_{2}\xrightarrow{v|v_{2}}\mu(C_{2})\xrightarrow{z|w_{2}}D_{2}$
such that $D_{1},D_{2}$ are co-reachable. By definition we have that
$D_{1},D_{2}$ are co-reachable in $\mathbb{N}$. Let $i$ denote the mismatching
position in the pattern. Let $n>0$; we want to exhibit two inputs that have a
common prefix of length at least $n$ but outputs that have a longest common
prefix smaller than $i$. We consider the two runs
$C_{0}\xrightarrow{u|u_{1}}C_{1}\xrightarrow{v|v_{1}}\mu(C_{1})\cdots\mu^{n}(C_{1})\xrightarrow{\mu^{n}(v)|\mu^{n}(v_{1})}\mu^{n+1}(C_{1})\xrightarrow{\mu^{n}(w)|\mu^{n}(w_{1})}\mu^{n}(D_{1})$
and
$C_{0}\xrightarrow{u|u_{2}}C_{2}\xrightarrow{v|v_{2}}\mu(C_{2})\cdots\mu^{n}(C_{2})\xrightarrow{\mu^{n}(v)|\mu^{n}(v_{2})}\mu^{n+1}(C_{2})\xrightarrow{\mu^{n}(z)|\mu^{n}(w_{2})}\mu^{n}(D_{2})$.
Note that it could be that $\mu^{n}$ cancels the mismatch, in which case we
can consider $\mu^{n+1}$. Since $\mu^{n}(D_{1})$ and $\mu^{n}(D_{2})$ are co-
reachable, we can use Proposition 31 and we obtain the result.
As a corollary, we obtain:
###### Theorem 39.
Let $f:\mathbb{N}^{\omega}\rightarrow\mathbb{N}^{\omega}$ be a function
defined by an NRT over $\mathbb{N}$. Deciding its uniform continuity is
PSpace-c.
###### Proof 4.13.
We are left to show that uniform continuity of $T^{\prime}$ is in PSpace. This
problem is in PSpace from Theorem 21. We have to be careful however, since
computing $T^{\prime}$ explicitly may be too costly. The trick is that
checking if a configuration is co-reachable can be done on the fly in PSpace,
using Proposition 32 and Corollary 33.
The complexity lower bound can again be obtained by reducing the problem to
emptiness of register automata over $(\mathbb{N},\\{=\\})$, which is PSpace-c
[DL09].
### 4.8 Continuity
We end our study with the property of continuity of NRT over $\mathbb{N}$.
To simplify the following statement, we assume that transducers are equipped
with a distinguished register $r_{m}$ which along the run stores the maximal
data value seen in the input so far.
###### Proposition 40 (Continuity).
Let $f:\mathbb{N}^{\omega}\rightarrow\mathbb{N}^{\omega}$ be given by an NRT
$T$ over $\mathbb{N}$. The following are equivalent:
1. 1.
$f$ is not continuous
2. 2.
there exists a critical pattern in
$\mathsf{Critical}(u,v,w,z,C_{1},C_{2},C_{1}^{\prime},C_{2}^{\prime},D_{1},D_{2})$
with $u,v,w,z\in\mathbb{Q}_{+}^{*}$, where $C_{1}$ is final, $D_{1}$ and
$D_{2}$ are co-reachable in $\mathbb{N}$ and such that
$\sigma=\tau((C_{1}\uplus C_{2})\uplus(C^{\prime}_{1}\uplus C^{\prime}_{2}))$
satisfies the $\star$ property for $X=R_{1}\cup R_{2}^{l}$, where
$R_{2}^{l}=\\{r_{2}\in R_{2}\mid\exists r^{\prime}_{1}\in
R^{\prime}_{1},\sigma\Rightarrow r_{2}\leq r^{\prime}_{1}\\}$.
###### Proof 4.14 (Proof of
$(\ref{itm:NfNotCont})~{}\Rightarrow~{}(\ref{itm:NfPattern})$).
Assume first that $f$ is not continuous. Let $x\in\mathbb{N}^{\omega}$ be a
point at which $f$ is not continuous, and let $(x_{n})_{n\in\mathbb{N}}$ be
such that $\forall n\in\mathbb{N},x_{n}\in\mathrm{dom}(f)$,
$x_{n}\xrightarrow[n\to\infty]{}x$ but
$f(x_{n})\not\xrightarrow[n\to\infty]{}f(x)$. Up to extracting subsequences, we
can assume w.l.o.g. that there exists some $k\in\mathbb{N}$ such that for all
$n\in\mathbb{N}$, $\lvert f(x)\wedge f(x_{n})\rvert\leq k$. We denote by
$\rho$ a run of $T$ over $x$ yielding output $f(x)$, and by
$(\rho_{n})_{n\in\mathbb{N}}$ a sequence of runs of $T$ such that $\rho_{n}$
yields output $f(x_{n})$ over input $x_{n}$.
First, since the set $\Delta$ of transitions of $T$ is finite,
$\Delta^{\omega}$ is compact (cf. Remark 3), so $(\rho_{n})_{n\in\mathbb{N}}$
admits a converging subsequence, so we can assume w.l.o.g. that
$(x_{n})_{n\in\mathbb{N}}$ is such that $(\rho_{n})_{n\in\mathbb{N}}$
converges to some infinite sequence of transitions $\pi$.
###### Remark 41.
Note however that $\pi$ is not necessarily a run over some concrete data word:
since transducers are allowed to guess arbitrary data values, we do not have
the property that after reading input $u$, the configuration of the transducer
takes its values in $\mathsf{data}(u)\cup\\{0\\}$, which would allow us to
build a concrete run by extracting subsequences of runs whose configurations
coincide on longer and longer prefixes. If we disallow guessing, this property
is restored, which greatly simplifies the proof. Indeed, we can then require
that $\sigma$ has the $\star$ property, without allowing some wild registers
to go astray (namely those that are above $r^{\prime}_{m}$, which necessarily
correspond to data values that have been guessed, otherwise their content is
at most equal to that of $r_{m}$), and having two runs that can be
instantiated by actual data words allows us to extract $\sigma$ with a Ramsey
argument similar to the one used for emptiness (see Proposition 32).
Now, let $(E_{i})_{i\in\mathbb{N}}$ be the sequence of configurations of
$\rho$, and, for each $\rho_{n}$, let $(C_{n,i})_{i\in\mathbb{N}}$ be its
corresponding sequence of configurations. Then, for all $0\leq i<j$, let
$\tau_{n,i}=\tau(C_{n,i})$ and $\sigma_{n,i,j}=\sigma(E_{i}\uplus
C_{n,i},E_{j}\uplus C_{n,j})$. Since the $\tau_{n,i}$ and $\sigma_{n,i,j}$
take their values in a finite set, by using compactness arguments, we can
extract two subsequences $(\tau^{\prime}_{n,i})_{n,i\in\mathbb{N}}$ and
$(\sigma^{\prime}_{n,i,j})_{n\in\mathbb{N},0\leq i<j}$ which respectively
converge to $(\tau^{\prime}_{i})_{i\in\mathbb{N}}$ and to a family
$(\sigma_{i,j})_{0\leq i<j}$. Formally, we require that for all $M\geq 0$,
there exists some $N\geq 0$ such that for all $n\geq N$, we have that
$\tau^{\prime}_{n,i}=\tau^{\prime}_{i}$ and
$\sigma^{\prime}_{n,i,j}=\sigma_{i,j}$ for all $0\leq i<j\leq M$ (note that
for types $\tau^{\prime}_{n,i}$, this corresponds to convergence for infinite
words). By first restricting to final configurations in $\rho$ (we know there
are infinitely many since $\rho$ is accepting), we can assume that all $E_{i}$
are final. We further restrict to $E_{i}$ which all have the same type $\nu$
(there is at least one type which repeats infinitely often). To avoid
cluttering the notation, we again name $(\rho_{n})_{n\in\mathbb{N}}$ the
corresponding subsequence of runs, $(\tau_{n,i})_{n,i\in\mathbb{N}}$ the
corresponding types, and $(\sigma_{n,i,j})_{n,i,j}$ their associated family of
pairwise types. Finally, by applying Ramsey’s theorem to
$(\tau_{i})_{i\in\mathbb{N}}$ and $(\sigma_{i,j})_{0\leq i<j}$, we know that
there exists some type $\tau$, some pairwise type $\sigma$ and some infinite
subset $I\subseteq\mathbb{N}$ such that for all $i,j\in I$ such that $i<j$,
$\tau_{i}=\tau$ and $\sigma_{i,j}=\sigma$. For simplicity, we reindex the
$(E_{j})_{j\in I}$ and the $(C_{n,j})_{j\in I}$ over $\mathbb{N}$, that we
again name $(E_{j})_{j\in\mathbb{N}}$ and $(C_{n,i})_{i\in\mathbb{N}}$.
Now, assume by contradiction that $\sigma$ does not have the property $\star$
for $X=R_{1}\cup R_{2}^{l}$. There are two cases, which we treat successively:
* •
There exists some $r\in X$ such that $\sigma\Rightarrow r>r^{\prime}$. There
are two subcases:
* –
If there exists such $r\in R_{1}$, then we get that for all $0\leq i<j$,
$E_{i}(r)>E_{j}(r)$, which immediately yields an infinite descending chain in
$\mathbb{N}$, and hence a contradiction.
* –
If there exists such $r\in R_{2}^{l}$, then let $r^{\prime}_{M}$ be such that
$\sigma\Rightarrow r\leq r^{\prime}_{M}$. Since $r_{M}\in R_{1}$, we know that
$\sigma\Rightarrow r_{M}\leq r^{\prime}_{M}$, otherwise we are back to the
previous case. Then, let $N\in\mathbb{N}$ be such that $\sigma_{N,i,j}=\sigma$
for all $0\leq i<j\leq B=E_{1}(r^{\prime}_{M})+1$. Such $N$ necessarily exists
since we took the $(\rho_{n})_{n\in\mathbb{N}}$ such that the
$(\sigma_{n,i,j})_{0\leq i<j}$ converges. Thus, $E_{1}(r^{\prime}_{M})\geq
C_{N,0}(r)>C_{N,1}(r)>\dots>C_{N,B}(r)$, which again yields a contradiction.
* •
Assume now that there exists $r,s\in X$ such that $\sigma\Rightarrow
s=s^{\prime}$, $\sigma\Rightarrow r\leq s$ but $\sigma\Rightarrow
r<r^{\prime}$. There are again two subcases:
* –
If $s\in R_{1}$, then we know that for all $i\in\mathbb{N}$,
$E_{i}(s)=E_{0}(s)$. Now, if $r\in R_{1}$, then we get that
$E_{0}(r)<E_{1}(r)<\dots$ but for all $i\in\mathbb{N}$, $E_{i}(r)<E_{i}(s)$,
which leads to a contradiction. If $r\in R_{2}^{l}$, then let $N$ be such that
$\sigma_{N,i,j}=\sigma$ for all $0\leq i<j\leq B=E_{0}(s)+1$. We then get a
strictly increasing chain $C_{N,0}(r)<C_{N,1}(r)<\dots<C_{N,B}(r)$ of length
$B$ which is bounded from above by $B-1$, which does not exist in
$\mathbb{N}$.
* –
If $s\in R_{2}^{l}$ then let $r^{\prime}_{M}\in R_{1}$ be such that
$\sigma\Rightarrow s\leq r^{\prime}_{M}$. Then, let
$B=E_{1}(r^{\prime}_{M})+1$, and let $N$ be such that $\sigma_{N,i,j}=\sigma$
for all $0\leq i<j\leq B$. If $r\in R_{1}$ then we have that
$E_{0}(r)<E_{1}(r)<\dots<E_{B}(r)\leq E_{B}(s)=E_{0}(s)\leq
E_{1}(r^{\prime}_{M})$; similarly if $r\in R_{2}^{l}$ we get
$C_{N,0}(r)<C_{N,1}(r)<\dots<C_{N,B}(r)\leq C_{N,B}(s)=C_{N,0}(s)\leq
E_{1}(r_{M}^{\prime})$. In both cases, this leads to a contradiction.
As a consequence, $\sigma$ indeed has the property $\star$ for $X=R_{1}\cup
R_{2}^{l}$.
Now, it remains to exhibit a critical pattern from $\rho$ and the (last)
extracted subsequence $(\rho_{n})_{n\in\mathbb{N}}$ and the corresponding
inputs $(x_{n})_{n\in\mathbb{N}}$. Let $k\in\mathbb{N}$ be such that for all
$n\in\mathbb{N}$, $\lvert f(x)\wedge f(x_{n})\rvert\leq k$. On the one hand,
we have $\rho=E_{0}\xrightarrow{u_{0}\mid v_{0}}E_{1}\xrightarrow{u_{1}\mid
v_{1}}E_{2}\dots$, where $x=u_{0}u_{1}\dots$ and such that all $(E_{i})_{i>0}$
are final and they all have the same type. On the other hand, each $\rho_{n}$
can be written $\rho_{n}=C_{n,0}\xrightarrow{u^{n}_{0}\mid
v^{n}_{0}}C_{n,1}\xrightarrow{u^{n}_{1}\mid v^{n}_{1}}C_{n,2}\dots$, where for
all $j\geq 0,\lvert u^{n}_{j}\rvert=\lvert u_{j}\rvert$ (the configurations
$C_{n,i}$ are synchronised with those of $E_{i}$, as we always extracted
subsequences in a synchronous way) and $u^{n}_{0}u^{n}_{1}\dots=x_{n}$. Now,
let $M$ be such that $\forall n\geq M$, we have $\forall 0\leq i<j\leq
B+3,\sigma_{n,i,j}=\sigma$, where $B=2k$. Finally, take some $l\geq M$ such
that $\lvert x_{l}\wedge x\rvert\geq\lvert u_{0}u_{1}\dots u_{B+1}\rvert$.
There are two cases:
* •
$v_{0}\dots
v_{B}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}v^{l}_{0}\dots
v^{l}_{B}$. Then, we exhibited a critical pattern with two runs
$E_{0}\xrightarrow{u_{0}\dots u_{B}\mid v_{0}\dots
v_{B}}E_{B+1}\xrightarrow{u_{B+1}\mid v_{B+1}}E_{B+2}\xrightarrow{u_{B+2}\mid
v_{B+2}}E_{B+3}$ along with $C_{l,0}\xrightarrow{u^{l}_{0}\dots u^{l}_{B}\mid
v^{l}_{0}\dots v^{l}_{B}}C_{l,B+1}\xrightarrow{u^{l}_{B+1}\mid
v^{l}_{B+1}}C_{l,B+2}\xrightarrow{u^{l}_{B+2}\mid v^{l}_{B+2}}C_{l,B+3}$.
The outputs indeed mismatch by hypothesis, so we are in case 1 of the
definition of a critical pattern (see Definition 3.1.3).
* •
$v_{0}\dots v_{B}\mathrel{{\hstretch{.8}{\parallel}}}v^{l}_{0}\dots
v^{l}_{B}$. Then, since $v_{0}\dots v_{B}\leq f(x)$ and $v^{l}_{0}\dots
v^{l}_{B}\leq f(x_{l})$, and since $\lvert f(x)\wedge f(x_{l})\rvert\leq k$,
we get that $\lvert v_{0}\dots v_{B}\rvert\leq k$ or $\lvert v^{l}_{0}\dots
v^{l}_{B}\rvert\leq k$. Now, there are again two cases:
* –
There exists some $i\leq B$ such that $v_{i}=v^{l}_{i}=\varepsilon$. Then, we
exhibited a critical pattern with two runs $E_{0}\xrightarrow{u_{0}\dots
u_{i-1}\mid v_{0}\dots
v_{i-1}}E_{i}\xrightarrow{u_{i}\mid\varepsilon}E_{i+1}\xrightarrow{u_{i+1}\dots
u_{m}\mid v_{i+1}\dots v_{m}}E_{m+1}$ and $C_{l,0}\xrightarrow{u^{l}_{0}\dots
u^{l}_{i-1}\mid v^{l}_{0}\dots
v^{l}_{i-1}}C_{l,i}\xrightarrow{u^{l}_{i}\mid\varepsilon}C_{l,i+1}\xrightarrow{u^{l}_{i+1}\dots
u^{l}_{m}\mid v^{l}_{i+1}\dots v^{l}_{m}}C_{l,m+1}$, where $m$ is such that
$v^{l}_{0}\dots
v^{l}_{m}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}v_{0}\dots
v_{m}$ (such $m$ exists since
$f(x_{n})\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}f(x)$). We
are then in case 3 of the definition of a critical pattern (see Definition
3.1.3).
* –
Otherwise, there necessarily exists some $i\leq B$ such that
$v_{i}=\varepsilon$ or $v^{l}_{i}=\varepsilon$ since $\lvert v_{0}\dots
v_{B}\rvert\leq k$ or $\lvert v^{l}_{0}\dots v^{l}_{B}\rvert\leq k$. We assume
that $v_{i}=\varepsilon$, the reasoning is symmetric if instead
$v^{l}_{i}=\varepsilon$. Necessarily, $v^{l}_{0}\dots
v^{l}_{i-1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}f(x)$,
otherwise we can find some $i$ such that $v_{i}=v^{l}_{i}=\varepsilon$ and we are
back to the previous case. Then, we exhibited a critical pattern with two runs
$E_{0}\xrightarrow{u_{0}\dots u_{i-1}\mid v_{0}\dots
v_{i-1}}E_{i}\xrightarrow{u_{i}\mid\varepsilon}E_{i+1}\xrightarrow{u_{i+1}\dots
u_{m}\mid v_{i+1}\dots v_{m}}E_{m+1}$ and $C_{l,0}\xrightarrow{u^{l}_{0}\dots
u^{l}_{i-1}\mid v^{l}_{0}\dots v^{l}_{i-1}}C_{l,i}\xrightarrow{u^{l}_{i}\mid
v^{l}_{i}}C_{l,i+1}\xrightarrow{u^{l}_{i+1}\dots u^{l}_{m}\mid
v^{l}_{i+1}\dots v^{l}_{m}}C_{l,m+1}$, where $m$ is such that $v^{l}_{0}\dots
v^{l}_{m}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}v_{0}\dots
v_{m}$ (such $m$ exists since we assumed that $v^{l}_{0}\dots
v^{l}_{i-1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}f(x)$).
We are then in case 2 of the definition of a critical pattern (see Definition
3.1.3).
Finally, in all the considered cases, the last configuration $D_{1}=E_{j}$ of
the first run of the critical pattern (the one which is a prefix of $\rho$) is
final, since we extracted the $(E_{i})$ so that we only kept the final ones.
Also, both $D_{1}=E_{j}$ and $D_{2}=C_{l,j}$ are co-reachable in $\mathbb{N}$
since they are configurations of accepting runs in $\mathbb{N}$, which
concludes the proof of
$(\ref{itm:NfNotCont})~{}\Rightarrow~{}(\ref{itm:NfPattern})$.
###### Proof 4.15 (Proof of
$(\ref{itm:NfPattern})~{}\Rightarrow~{}(\ref{itm:NfNotCont})$).
We assume now that $(\ref{itm:NfPattern})$ holds and show that $f$ is not
continuous. Intuitively, following notations of the critical pattern, we will
prove that $f$ is not continuous at some input
$x=u^{\prime}v^{\prime}\mu(v^{\prime})\mu^{2}(v^{\prime})\ldots$, where
$u^{\prime},v^{\prime}$ are elements of $\mathbb{N}^{+}$, images of $u,v$ under
some automorphism.
Thus, let us consider two runs
$C_{0}\xrightarrow{u|u_{1}}C_{1}\xrightarrow{v|v_{1}}\mu(C_{1})$ with $C_{1}$
final, and
$C_{0}\xrightarrow{u|u_{2}}C_{2}\xrightarrow{v|v_{2}}\mu(C_{2})\xrightarrow{z|w_{2}}D_{2}$
with $D_{2}$ co-reachable in $\mathbb{N}$. Two cases can occur: either
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}$, or
$v_{2}=\varepsilon$ and
$u_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u_{2}w_{2}$.
As in the proof in the oligomorphic setting, we want to show that it is
possible to build from the first run an infinite run looping forever along
(some renaming of) $v$, and from the second run a family of infinite runs,
looping more and more. While this is directly true in some oligomorphic data
domain, as we saw before, iteration in $\mathbb{N}$ is trickier.
These runs can be seen as finite runs in the transducer $T\times T$, with
twice as many registers as $T$, which we denote by $R_{1}$ for the first copy,
and $R_{2}$ for the second. By assumption, $\sigma=\tau((C_{1}\uplus
C_{2})\uplus(C^{\prime}_{1}\uplus C^{\prime}_{2}))$ satisfies the $\star$
property for $X=R_{1}\cup R_{2}^{l}$. If we had that $\sigma$ satisfies the
$\star$ property for $R_{1}\cup R_{2}$, then we could immediately deduce by
Proposition 27 that the two loops on $v$ could be iterated infinitely many
times. However, the weaker hypothesis we have will only allow us to show that
the loop from $C_{1}$ can indeed be $\omega$-iterated, while the loop from
$C_{2}$ can be iterated as many times as we want, but finitely many times. To
this end, we have to take care of registers in $R_{2}\setminus R_{2}^{l}$ to
show that these runs indeed exist.
As we assumed that there is a register $r_{m}$ storing the maximal data value
appearing in the input word, the definition of the set $R_{2}^{l}$ ensures
that every register not in $R_{2}^{l}$ has its value coming from a guess along
the run $C_{0}\xrightarrow{u}C_{2}$. This is directly linked with the
technical difficulty inherent to the presence of guessing. In particular, this
allows us to consider different runs, in which the input data word is not
modified, but the guessed values are.
As $C_{2}$ and $\mu(C_{2})$ have the same type, and registers not in
$R_{2}^{l}$ have “large” values, their behaviour along the cycle (w.r.t.
types) $C_{2}\xrightarrow{v}\mu(C_{2})$ is very restricted: they can only
increase or decrease at each step. In particular, if one starts from some
configuration $C_{2}$ in which values of registers not in $R_{2}^{l}$ are very
far apart, this will allow us to iterate the cycle several times. More precisely,
for any integer $n$, we can compute an integer $n^{\prime}$ and values for the
guesses of registers not in $R_{2}^{l}$ which are pairwise $n^{\prime}$ far
apart, ensuring that the cycle can be repeated at least $n$ times.
Thus, the previous reasoning allows us to replicate the proof of Proposition
27 to show that there exists a word $v^{\prime}\in\mathbb{N}^{*}$ and an
automorphism $\alpha$ preserving $\mathbb{N}$, such that:
* •
$C^{\prime}_{1}\xrightarrow{x|y}$, with $\tau(C^{\prime}_{1})=\tau(C_{1})$ and
$x=v^{\prime}\alpha(v^{\prime})\alpha^{2}(v^{\prime})\ldots$
* •
there exists $C^{\prime}_{2}$ and, for all $n$, some $C^{\prime}_{2}(n)$ with
$C^{\prime}_{2}(n)_{|R_{2}^{l}}=C^{\prime}_{2|R_{2}^{l}}$,
$\tau(C^{\prime}_{2}(n))=\tau(C_{2})$, and
$C^{\prime}_{2}(n)\xrightarrow{v^{\prime}}\alpha(C^{\prime}_{2}(n))\ldots\xrightarrow{\alpha^{n}(v^{\prime})}\alpha^{n+1}(C^{\prime}_{2}(n))$
Using the previous remark on guessed values, together with Proposition 31 applied
on $C_{0}\xrightarrow{u}C_{1}$ and $C_{0}\xrightarrow{u}C_{2}$, and
Proposition 25 applied on
$\mu(C_{2})\xrightarrow{z|w_{2}}D_{2}\xrightarrow{z_{l}}$, we end up with:
* •
a run
$C_{0}\xrightarrow{u^{\prime}|u^{\prime}_{1}}E_{1}\xrightarrow{x^{\prime}|y^{\prime}}$,
with
$x^{\prime}=v^{\prime\prime}\alpha(v^{\prime\prime})\alpha^{2}(v^{\prime\prime})\ldots$
* •
for every $n$, there is a run
$C_{0}\xrightarrow{u^{\prime}|u^{\prime}_{2}}E_{2}\xrightarrow{v^{\prime\prime}}\alpha(E_{2})\ldots\xrightarrow{\alpha^{n}(v^{\prime\prime})}\alpha^{n+1}(E_{2})\xrightarrow{z^{\prime}|w^{\prime}_{2}}D^{\prime}_{2}\xrightarrow{z^{\prime}_{l}}$
* •
$u^{\prime}_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u^{\prime}_{2}$
or
$u^{\prime}_{1}\mathrel{\centernot{\mathrel{{\hstretch{.8}{\parallel}}}}}u^{\prime}_{2}w^{\prime}_{2}$
Hence, this proves non-functionality, as we have a sequence of runs whose
inputs converge towards $u^{\prime}x^{\prime}$ but whose outputs mismatch.
###### Theorem 42.
Continuity of NRT on $\mathbb{N}$ is PSpace-c.
###### Proof 4.16.
Small critical pattern property (Claim 3.1.3) on oligomorphic data can easily
be adapted to $\mathbb{N}$. Indeed, the only difference lies in the loops on
$v$, but it is important to notice that the loop removal used to reduce the length of
runs preserves the extremal configurations (see Proposition 11), hence it
preserves the type of the global run, here the type $\sigma=\tau((C_{1}\uplus
C_{2})\uplus(C^{\prime}_{1}\uplus C^{\prime}_{2}))$.
Then, one can proceed similarly by guessing such a small critical pattern, and
checking mismatches with counters. Note that we have to verify in addition
that $D_{2}$ is co-reachable in $\mathbb{N}$, but again, as already detailed
in previous proofs, this property can be decided on-the-fly.
Finally, the PSpace lower bound can again be obtained by a reduction from the
emptiness problem of register automata over $(\mathbb{N},\\{=\\})$, which is
PSpace-c [DL09].
### 4.9 Transfer result
We have extended our result to one non-oligomorphic structure, namely
$(\mathbb{N},\left\\{<,0\right\\})$, which is a substructure of
$(\mathbb{Q},\left\\{<,0\right\\})$. If we want to extend the result to other
substructures of $(\mathbb{Q},\left\\{<\right\\})$, say
$(\mathbb{Z},\left\\{<\right\\})$, we could do the same work again and study
the properties of iterable loops in $\mathbb{Z}$. However, a simpler way is to
observe that $(\mathbb{Z},\left\\{<\right\\})$ can be simulated by
$\mathbb{N}$, using two copies of the structure. We show a quite simple yet
powerful result: given a structure $\mathbb{D}$, if a structure
$\mathbb{D}^{\prime}$ can be defined as a quantifier-free interpretation over
$\mathbb{D}$, then the problems of emptiness, functionality, continuity, etc.,
over $\mathbb{D}^{\prime}$ reduce to the same problems over $\mathbb{D}$.
A _quantifier-free interpretation_ of dimension $l$ with signature $\Gamma$
over a structure $(\mathbb{D},\Sigma)$ is given by $\textsf{QF}[\Sigma]$
quantifier-free formulas: a formula
$\phi_{\text{domain}}(x_{1},\ldots,x_{l})$, for each constant symbol
$\mathsf{c}$ of $\Gamma$ a formula $\phi_{\mathsf{c}}(x_{1},\ldots,x_{l})$ and
for each relation symbol $R$ of $\Gamma$ of arity $r$ a formula
$\phi_{R}(x_{1}^{1},\ldots,x_{l}^{1},\ldots,x_{1}^{r},\ldots,x_{l}^{r})$. The
structure $\mathbb{D}^{\prime}$ is defined by the following: a domain
$D^{\prime}=\left\\{(d_{1},\ldots,d_{l})|\
\mathbb{D}\models\phi_{\text{domain}}(d_{1},\ldots,d_{l})\right\\}$; an
interpretation $(d_{1},\ldots,d_{l})\in D^{\prime}$ for each constant symbol
$\mathsf{c}$ such that
$\mathbb{D}\models\phi_{\mathsf{c}}(d_{1},\ldots,d_{l})$ (we assume that there
is a unique possible tuple satisfying the formula, which can be syntactically
ensured); an interpretation
$R^{\mathbb{D}^{\prime}}=\left\\{(x_{1}^{1},\ldots,x_{l}^{1},\ldots,x_{1}^{r},\ldots,x_{l}^{r})|\
\mathbb{D}\models\phi_{R}(x_{1}^{1},\ldots,x_{l}^{1},\ldots,x_{1}^{r},\ldots,x_{l}^{r})\right\\}$
for each relation symbol $R$.
###### Theorem 43.
Let $\mathbb{D}^{\prime}$ be a quantifier-free interpretation over
$\mathbb{D}$. Let $P$ denote a decision problem among non-emptiness,
functionality, continuity, Cauchy continuity or uniform continuity. There is a
PSpace reduction from $P$ over $\mathbb{D}^{\prime}$ to $P$ over $\mathbb{D}$.
###### Proof 4.17.
Let $R\subseteq\mathbb{D}^{\prime\omega}\times\mathbb{D}^{\prime\omega}$ be
given by an NRT $T$. If we assume that $\mathbb{D}^{\prime}$ is an
$l$-dimension interpretation of $\mathbb{D}$, then we can view $R$ as a
relation $P\subseteq(\mathbb{D}^{l})^{\omega}\times(\mathbb{D}^{l})^{\omega}$.
Note that $P$ is empty (resp. functional, continuous, Cauchy continuous,
uniformly continuous) if and only if $R$ is.
Moreover, since $\mathbb{D}^{\prime}$ is an interpretation, one can construct
an NRT $S$ which realizes $P$. It uses $l$ registers for every register of $T$
plus $l-1$ registers to store the input. In its first $l-1$ transitions it
just stores the input, and at every $l$-th transition it simulates a transition of
$T$, simply by substituting the formulas of the interpretation for the
predicates. As usual, we do not construct $S$ explicitly, but we are able to
simulate it using only polynomial space.
As a direct corollary we can, in particular, transfer our result over
$(\mathbb{N},\\{<,0\\})$ to $(\mathbb{Z},\\{<,0\\})$:
###### Corollary 44.
The problems of non-emptiness, functionality, continuity, Cauchy continuity,
uniform continuity over $(\mathbb{Z},\\{<,0\\})$ are in PSpace.
###### Proof 4.18.
From Theorem 43, using the fact that $(\mathbb{Z},\\{<,0\\})$ is given by the
following two-dimensional interpretation over $(\mathbb{N},\\{<,0\\})$:
$\phi_{\text{domain}}:=(x=0)\vee(y=0)$,
$\phi_{0}:=(x=y=0),\phi_{<}:=(x_{1}=x_{2}=0\wedge(y_{1}<y_{2}))\vee(y_{1}=y_{2}=0\wedge(x_{1}>x_{2}))\vee(x_{2}=y_{1}=0\wedge\neg(x_{1}=y_{2}=0))$.
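To make the encoding concrete, here is a small Python sketch (function names such as `encode_z` and `phi_lt` are our own illustrative choices, not from the paper) that checks the two-dimensional interpretation against the usual order on $\mathbb{Z}$:

```python
# Sketch: Z encoded over N as pairs (x, y) with x = 0 or y = 0,
# following the two-dimensional interpretation of Corollary 44.
# z >= 0 is encoded as (0, z); z < 0 is encoded as (-z, 0).

def encode_z(z):
    return (0, z) if z >= 0 else (-z, 0)

def phi_domain(x, y):
    return x == 0 or y == 0

def phi_lt(x1, y1, x2, y2):
    # phi_< from the interpretation
    return ((x1 == 0 and x2 == 0 and y1 < y2)
            or (y1 == 0 and y2 == 0 and x1 > x2)
            or (x2 == 0 and y1 == 0 and not (x1 == 0 and y2 == 0)))

# The interpreted order must agree with the usual order on Z.
for a in range(-5, 6):
    for b in range(-5, 6):
        xa, ya = encode_z(a)
        xb, yb = encode_z(b)
        assert phi_domain(xa, ya) and phi_domain(xb, yb)
        assert phi_lt(xa, ya, xb, yb) == (a < b)
```

Here $z\geq 0$ is represented as $(0,z)$ and $z<0$ as $(-z,0)$, matching $\phi_{\text{domain}}$: negative numbers live on the first copy of $\mathbb{N}$ (with the order reversed), nonnegative numbers on the second.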
###### Remark 45.
Of course our transfer result applies to many other substructures of
$(\mathbb{Q},\\{<,0\\})$, such as the ordinals $\omega+\omega$,
$\omega\times\omega$, etc.
###### Remark 46.
Considering first-order interpretations (i.e. where $\phi_{\text{domain}}$, etc.
are first-order formulas instead of quantifier-free ones) would yield too much
expressive power. Indeed, $(\mathbb{N},\left\\{+1,0\right\\})$, where
$+1(x,y)$ holds whenever $y=x+1$, can easily be defined as a (one-dimensional)
first-order interpretation of $(\mathbb{N},\left\\{<,0\right\\})$, by letting
$\phi_{+1}(x,y)=y>x\wedge\neg\exists z,x<z<y$. However, over such a domain,
register automata coincide with counter machines, which is a Turing-complete
model (with two or more counters); in particular, emptiness of such register
automata is easily shown undecidable (with two or more registers).
## Future work
We have given tight complexity results in a family of oligomorphic structures,
namely polynomially decidable ones. An example of an exponentially decidable
oligomorphic structure is the one of finite bitvectors with bitwise xor
operation. Representing the type of $k$ elements may require exponential sized
formulas (for example stating that a family of $k$ elements is free, i.e. any
non-trivial sum is non-zero). The same kind of proof would give ExpSpace
algorithms over this particular data set (for the transducer problems we have
been studying). One could try to classify the different structures based on
the complexity of solving these kinds of problems.
We have been able to show decidability of several transducer problems over the
data set $\mathbb{N}$ with the linear order. This was done using two
properties: 1) that $\mathbb{N}$ is a substructure of $\mathbb{Q}$ and 2) that
we were able to characterize the iterable loops in $\mathbb{N}$. Moreover we
can transfer the result to other substructures of $\mathbb{Q}$, e.g.
$\mathbb{Z}$. One possible extension would be to investigate data sets which
have a tree-like structure, e.g. the infinite binary tree. There exists a
tree-like oligomorphic structure, of which the binary tree is a substructure.
Studying the iterable loops of the binary tree may yield a similar result as
in the linear order case. But, as it turns out, the notion of quantifier-free
interpretation can directly yield decidability, as the binary tree is a
quantifier-free interpretation of $\mathbb{N}$, thanks to a result of Demri
and Deters [DD16, Section 3]. Indeed, they show that words over a finite
alphabet of size $k$ with the prefix relation (which corresponds to a $k$-ary
tree) can be encoded in $\mathbb{N}$. However, the question remains open for
non-tree-like structures that are substructures of an oligomorphic structure,
for which we might be able to characterize iterable loops.
There are several classical ways of extending synthesis results, for instance
considering larger classes of specifications, larger classes of
implementations or both. In particular, an interesting direction is to
consider non-functional specifications. As mentioned in the Introduction however,
and as a motivation for studying the functional case, enlarging both (non-
functional) specifications and implementations to an asynchronous setting
leads to undecidability. Indeed, already in the finite alphabet setting, the
synthesis problem of deterministic transducers over $\omega$-words from
specifications given by non-deterministic transducers is undecidable [CL14]. A
simple adaptation of the proof of [CL14] allows one to show that in this finite
alphabet setting, enlarging the class of implementations to any computable
function also yields an undecidable synthesis problem. An interesting case
however is yet unexplored already in the finite alphabet setting: given a
synchronous specification, as an $\omega$-automaton, is it possible to
synthesise a computable function realizing it? In [HKT12, FLZ11], this
question has been shown to be decidable for specifications with _total_ input
domain (any input word has at least one correct output by the specification).
More precisely, it is shown that realizability by a continuous function is
decidable, but it turns out that the synthesised function is definable by a
deterministic transducer (hence computable). When the domain of the
specification is partial, the situation changes drastically: deterministic
transducers may not suffice to realize a specification realizable by a
computable function. This can be seen by considering the (partial) function
$g$ of the Introduction, seen as a specification and cast to a finite alphabet
$\\{a,b,c\\}$: it is not realizable by a deterministic transducer, since
computing this function requires an unbounded amount of memory.
## References
* [BKL14] Mikolaj Bojanczyk, Bartek Klin, and Slawomir Lasota. Automata theory in nominal sets. Log. Methods Comput. Sci., 10(3), 2014.
* [Boj19] Mikołaj Bojańczyk. Atom Book. 2019.
* [CHVB18] Edmund M. Clarke, Thomas A. Henzinger, Helmut Veith, and Roderick Bloem, editors. Handbook of Model Checking. Springer, 2018.
* [CL14] Arnaud Carayol and Christof Löding. Uniformization in Automata Theory. In Proceedings of the 14th Congress of Logic, Methodology and Philosophy of Science Nancy, July 19-26, 2011, pages 153–178. London: College Publications, 2014.
* [DD16] Stéphane Demri and Morgan Deters. Temporal logics on strings with prefix relation. J. Log. Comput., 26(3):989–1017, 2016.
* [DFKL20] Vrunda Dave, Emmanuel Filiot, Shankara Narayanan Krishna, and Nathan Lhote. Synthesis of computable regular functions of infinite words. In Igor Konnov and Laura Kovács, editors, 31st International Conference on Concurrency Theory, CONCUR 2020, September 1-4, 2020, Vienna, Austria (Virtual Conference), volume 171 of LIPIcs, pages 43:1–43:17. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
* [DL09] Stéphane Demri and Ranko Lazic. LTL with the freeze quantifier and register automata. ACM Trans. Comput. Log., 10(3):16:1–16:30, 2009.
* [EFK21] Léo Exibard, Emmanuel Filiot, and Ayrat Khalimov. Church synthesis on register automata over infinite ordered data domains. STACS, 2021.
* [EFR19] Léo Exibard, Emmanuel Filiot, and Pierre-Alain Reynier. Synthesis of data word transducers. In 30th International Conference on Concurrency Theory, CONCUR 2019, August 27-30, 2019, Amsterdam, the Netherlands, pages 24:1–24:15, 2019.
* [EFR20] Léo Exibard, Emmanuel Filiot, and Pierre-Alain Reynier. On computability of data word functions defined by transducers. In Jean Goubault-Larrecq and Barbara König, editors, Foundations of Software Science and Computation Structures - 23rd International Conference, FOSSACS 2020, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020, Dublin, Ireland, April 25-30, 2020, Proceedings, volume 12077 of Lecture Notes in Computer Science, pages 217–236. Springer, 2020.
* [ESKG14] Rüdiger Ehlers, Sanjit A. Seshia, and Hadas Kress-Gazit. Synthesis with identifiers. In Proceedings of the 15th International Conference on Verification, Model Checking, and Abstract Interpretation - Volume 8318, VMCAI 2014, pages 415–433, 2014.
* [Exi21] Léo Exibard. Automatic Synthesis of Systems with Data. Theses, Aix Marseille Université (AMU); Université libre de Bruxelles (ULB), September 2021.
* [FJLW16] Emmanuel Filiot, Ismaël Jecker, Christof Löding, and Sarah Winter. On equivalence and uniformisation problems for finite transducers. In 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, Rome, Italy, pages 125:1–125:14, 2016.
* [FLZ11] Wladimir Fridman, Christof Löding, and Martin Zimmermann. Degrees of lookahead in context-free infinite games. In Marc Bezem, editor, Computer Science Logic, 25th International Workshop / 20th Annual Conference of the EACSL, CSL 2011, September 12-15, 2011, Bergen, Norway, Proceedings, volume 12 of LIPIcs, pages 264–276. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2011.
* [HKT12] Michael Holtmann, Lukasz Kaiser, and Wolfgang Thomas. Degrees of lookahead in regular infinite games. Logical Methods in Computer Science, 8(3), 2012.
* [JL69] J.R. Büchi and L.H. Landweber. Solving sequential conditions by finite-state strategies. Transactions of the American Mathematical Society, 138:295–311, 1969.
* [KK19] Ayrat Khalimov and Orna Kupferman. Register-bounded synthesis. In 30th International Conference on Concurrency Theory, CONCUR 2019, August 27-30, 2019, Amsterdam, the Netherlands, pages 25:1–25:16, 2019.
* [KMB18] Ayrat Khalimov, Benedikt Maderbacher, and Roderick Bloem. Bounded synthesis of register transducers. In Automated Technology for Verification and Analysis, 16th International Symposium, ATVA 2018, Los Angeles, October 7-10, 2018. Proceedings, 2018.
* [KZ10] Michael Kaminski and Daniel Zeitlin. Finite-memory automata with non-deterministic reassignment. Int. J. Found. Comput. Sci., 21(5):741–760, 2010.
* [MH94] Bohdan S. Majewski and George Havas. The complexity of greatest common divisor computations. In Leonard M. Adleman and Ming-Deh A. Huang, editors, Algorithmic Number Theory, First International Symposium, ANTS-I, Ithaca, NY, USA, May 6-9, 1994, Proceedings, volume 877 of Lecture Notes in Computer Science, pages 184–193. Springer, 1994.
* [Pri02] Christophe Prieur. How to decide continuity of rational functions on infinite words. Theor. Comput. Sci., 276(1-2):445–447, 2002.
* [WZ20] Sarah Winter and Martin Zimmermann. Finite-state strategies in delay games. Inf. Comput., 272:104500, 2020.
## Appendix A Proof of Lemma 14
Recall Lemma 14. In order to show the lemma, we use the following result from
[MH94, Theorem 9]:
###### Theorem 47.
Let $p_{1},\ldots,p_{k}\in\mathbb{Z}{\setminus}{\left\\{0\right\\}}$ be non-
zero integers. Then there exist integers
$z_{1},\ldots,z_{k}$ with $|z_{i}|\leq\max(|p_{1}|,\ldots,|p_{k}|)$ such that:
$z_{1}p_{1}+\ldots+z_{k}p_{k}=\gcd(p_{1},\ldots,p_{k})$
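As a quick illustration (not part of the proof), one can brute-force a tiny instance of this theorem; the instance and the search below are our own:

```python
from itertools import product
from math import gcd
from functools import reduce

# Brute-force illustration of Theorem 47 on a small instance: for
# p_1, ..., p_k there exist coefficients z_i with |z_i| <= max|p_i|
# whose combination equals gcd(p_1, ..., p_k). This checks one
# example only; it is not a proof.

p = [6, 10, 15]
M = max(abs(x) for x in p)
d = reduce(gcd, p)

found = any(sum(z * pi for z, pi in zip(zs, p)) == d
            for zs in product(range(-M, M + 1), repeat=len(p)))
assert found and d == 1   # e.g. 1*6 + 1*10 - 1*15 = 1
```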
It is more convenient to split the integers into two groups depending on their
signs. Let $p_{0},\ldots,p_{k}$ be positive integers and $q_{0},\ldots,q_{l}$
be negative integers, let $d=\gcd(p_{0},\ldots,p_{k},q_{0},\ldots,q_{l})$ and
let $M=\max(p_{0},\ldots,p_{k},-q_{0},\ldots,-q_{l})$. Using the previous
theorem, we know there are integers $m_{0},\ldots,m_{k}$ and
$n_{0},\ldots,n_{l}$, all in $[-M,M]$, so that $\sum_{0\leq i\leq
k}m_{i}p_{i}+\sum_{0\leq j\leq l}n_{j}q_{j}=d$. We modify the coefficients in
the following way: $m_{0}^{\prime}=m_{0}-M\sum_{1\leq j\leq l}q_{j}$,
$m_{i}^{\prime}=m_{i}-Mq_{0}$ for $i\in\left\\{1,\ldots,k\right\\}$,
$n_{0}^{\prime}=n_{0}+M\sum_{1\leq i\leq k}p_{i}$,
$n_{j}^{\prime}=n_{j}+Mp_{0}$ for $j\in\left\\{1,\ldots,l\right\\}$. Note that
all the new coefficients are positive. We only have left to check that the sum
is unchanged.
$\begin{array}[]{l}\sum_{0\leq i\leq k}m_{i}^{\prime}p_{i}+\sum_{0\leq j\leq
l}n_{j}^{\prime}q_{j}=\\\ \sum_{0\leq i\leq k}m_{i}p_{i}-p_{0}M\sum_{1\leq
j\leq l}q_{j}-Mq_{0}\sum_{1\leq i\leq k}p_{i}\\\ +\sum_{0\leq j\leq
l}n_{j}q_{j}+q_{0}M\sum_{1\leq i\leq k}p_{i}+Mp_{0}\sum_{1\leq j\leq
l}q_{j}\\\ =\sum_{0\leq i\leq k}m_{i}p_{i}+\sum_{0\leq j\leq
l}n_{j}q_{j}\end{array}$
We have managed to obtain positive coefficients, all smaller than $M^{3}$.
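The coefficient shift above can be checked mechanically. The following Python sketch (the helper name `shift_coefficients` is ours) applies the modification to a small instance and verifies that the sum is unchanged and the new coefficients are nonnegative:

```python
# Sketch of the coefficient modification from Appendix A: starting from
# signed Bézout-style coefficients, shift them to nonnegative ones
# without changing the sum.

def shift_coefficients(m, n, p, q):
    """p: positive integers p_0..p_k, q: negative integers q_0..q_l,
    m, n: integer coefficients with sum(m_i p_i) + sum(n_j q_j) = d."""
    M = max(max(p), max(-x for x in q))
    m2 = [m[0] - M * sum(q[1:])] + [m[i] - M * q[0] for i in range(1, len(m))]
    n2 = [n[0] + M * sum(p[1:])] + [n[j] + M * p[0] for j in range(1, len(n))]
    return m2, n2

p, q = [6, 10], [-15]
m, n = [1, 1], [1]          # 1*6 + 1*10 + 1*(-15) = 1 = gcd(6, 10, 15)
m2, n2 = shift_coefficients(m, n, p, q)

before = sum(a * b for a, b in zip(m, p)) + sum(a * b for a, b in zip(n, q))
after = sum(a * b for a, b in zip(m2, p)) + sum(a * b for a, b in zip(n2, q))
assert before == after == 1
assert all(c >= 0 for c in m2 + n2)
```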
## Appendix B Proof of Lemma 30
Recall Lemma 30.
###### Proof B.1.
There are two cases:
* •
If $d-c=b-a$, then take $f:x\rightarrow x+c-a$.
* •
Otherwise, define $f$ as follows: for all $x\in[a;b-1]$ (note that this interval can
be empty), let $f(x)=x+c-a$ as above. Then, for $x\in[b-1;b]$, let
$e=c+(b-1-a)$ and $f(x)=d-(b-x)(d-e)$. We have $f(b-1)=e$, which is consistent
with the definition of $f$ on $[a;b-1]$, and $f(b)=d$, and $f$ is moreover
increasing and bijective. Finally, $f(b-1)\in\mathbb{N}$ and
$f(b)\in\mathbb{N}$. Overall, $f$ satisfies the required properties.
(Figure: $f$ maps $a\mapsto c$, $\ldots$, $b-1\mapsto e=c+(b-1-a)$, $b-1/2\mapsto e+\dfrac{d-e}{2}$, and $b\mapsto d$.)
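A minimal Python sketch of this construction (assuming, as the lemma requires for $f$ to be increasing, that $d-c\geq b-a$):

```python
# Sketch of the increasing bijection f: [a, b] -> [c, d] built in the proof
# of Lemma 30: identity-shift on [a, b-1], then a linear stretch on [b-1, b].
# We assume d - c >= b - a (otherwise the stretch would not be increasing).

def make_f(a, b, c, d):
    if d - c == b - a:
        return lambda x: x + c - a
    e = c + (b - 1 - a)
    def f(x):
        if x <= b - 1:
            return x + c - a
        return d - (b - x) * (d - e)
    return f

f = make_f(0, 3, 0, 10)
assert f(0) == 0 and f(3) == 10        # endpoints map to endpoints
assert f(2) == 2                       # consistent at the junction b-1
xs = [i / 10 for i in range(31)]
assert all(f(x1) < f(x2) for x1, x2 in zip(xs, xs[1:]))  # increasing
```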
# Optical phonon contribution to the thermal conductivity of a quantum
paraelectric
Pankaj Bhalla Beijing Computational Science Research Center, Beijing, 100193,
China Nabyendu Das Department of Physics, The LNM-Institute of Information
Technology, Jaipur 302031, India
###### Abstract
Motivated by recent experimental findings, we study the contribution of a
quantum critical optical phonon branch to the thermal conductivity of a
paraelectric system. We consider the proximity of the optical phonon branch to
transverse acoustic phonon branch and calculate its contribution to the
thermal conductivity within the Kubo formalism. We find a low temperature
power law dependence of the thermal conductivity as $T^{\alpha}$, with
$1<\alpha<2$, (lower than $T^{3}$ behavior) due to optical phonons near the
quantum critical point. This result is in accord with the experimental
findings and indicates the importance of quantum fluctuations in the thermal
conduction in these materials.
## I Introduction
In the last few decades, the study of quantum paraelectrics such as SrTiO3 and
KTaO3 has become a topic of considerable interest [1, 2, 3, 4, 5, 6]. In
general, interest in the quantum fluctuation induced novel states is on rise
as it can potentially lead to technological advancements. The main feature of
these materials is a suppressed long range ferroelectric order down to the
zero temperature due to quantum fluctuations. It is found that the dielectric
constant saturates at a very high value on lowering the temperature. Thus
these materials in their pristine state may be considered to be on the verge
of a phase transition near $T=0$. In principle, the ferroelectric transition
can be induced by some non thermal parameters such as pressure, chemical
composition, etc. at a zero temperature. A continuous zero temperature
transition without any discontinuity in the order parameter is
termed a quantum critical point. The “zero temperature transition” or
“quantum critical point” is not just a theoretical concept of purely academic
interest. Once the non-thermal parameters in a system are tuned to values
that lead to a zero temperature continuous transition, they determine the
finite temperature properties of the system in terms of quantum critical
fluctuations of the associated degrees of freedom. In such cases, the finite
temperature properties show a universal behaviour which is independent of
many details of the microscopic interactions [7]. Dielectric materials
such as SrTiO3 and KTaO3, which exhibit such phenomena as a result of quantum
critical polarization fluctuations, are termed quantum paraelectrics. In
these materials the (suppressed) ferroelectric transition is associated with a
nearly soft zone center optical mode and in the low temperature regime, many
of the thermodynamic and transport properties can be described by quantum
fluctuations of the associated optic branch and its interactions with other
degrees of freedom. Such novel dielectric behavior was first reported by
Müller et al. [8] in strontium titanate compounds. In recent times, pressure
induced quantum phase transitions and associated quantum critical behavior
have been experimentally confirmed [2]. A lot of theoretical and experimental
activities in this direction to describe the dielectric behavior of these
systems have been successfully taken place in the recent times [3, 9, 10, 11,
12, 13, 14, 15, 16, 17, 18].
To explore the consequences of quantum fluctuations in these materials, the study
of heat transport properties is as important as that of their dielectric
behavior. This may shed further light on the behavior of quantum paraelectrics
at or near the quantum critical point. In an insulator, heat conduction is
mainly driven by the flow of phonons. Optical phonons have higher energies
$\sim$ Debye temperature and they often lead to novel thermal conductivity
behavior at the said temperature regime [19, 20]. However, at low temperature
thermal conductivity is dominated by the flow of the acoustic phonons. In case
of a quantum paraelectric, zone center transverse optic mode is nearly soft
and the phonons in the respective transverse optical (TO) branch have lowered
energy compared to the usual insulators. This is to mention that LO phonons in
these materials still have higher energy owing to the long range dipolar
interaction [21]. This scenario is observed in experiments [22, 23] and
schematically shown in Fig. 1. Thus it is plausible that optical phonons in a
quantum paraelectric shape the low temperature behavior in certain systems,
such as KTaO3.
Figure 1: A schematic description of proximity of the transverse acoustic (TA)
and transverse optical (TO) phonon modes associated with (incipient)
ferroelectric transition in a typical quantum paraelectric. Here
$\Delta=\Delta_{0}+aT^{2}$ is the temperature dependent gap between the TA and
TO phonons, where $T$ is the temperature, $a$ is material dependent parameter,
and $\Delta_{0}$ is the phonon energy gap at $T=0$.
Being semiconductors with a high electronic energy gap, these materials have their
electronic excitations gapped out at low temperature, and their transport properties
are governed by the interactions of the lowered energy optical phonons with
the acoustic phonons [24, 25, 26] and other degrees of freedom like
impurities, structural domains [16], etc.
The basic understanding of the phonon thermal conductivity is as follows. In a
classical kinetic theory estimate, the thermal conductivity $\kappa\sim
v_{\text{ph}}c_{V}\lambda_{\text{ph}}$, where $v_{\text{ph}}$, $c_{V}$, and
$\lambda_{\text{ph}}$ are the phonon velocity, phonon specific heat and the
phonon mean free path, respectively [27, 28, 29]. The thermal conductivity, being a
transport coefficient, depends on the relaxation of the phonons due to their
interactions with other degrees of freedom and/or anharmonicities. This is
encoded in $\lambda_{\text{ph}}=v_{\text{ph}}\tau_{\text{ph}}$, where $\tau_{\text{ph}}$ is
the relaxation time. At low temperature and in the acoustic phonon dominated
scenario, $c_{V}\sim T^{3}$ and $\lambda_{\text{ph}}$ is determined by
scattering with impurities and is temperature independent. Thus
$\kappa\sim T^{3}$. Experimental findings on these materials also support such
$T^{3}$ behavior of the specific heat [30]. Thus any departure from the said
$T^{3}$ behavior of the thermal conductivity, especially in a regime where
some other interacting degrees of freedom become active, indicates the
presence of a new scattering mechanism which results in a frequency and
temperature dependent mean free path or scattering rate.
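The scaling argument can be illustrated numerically. In the sketch below the prefactors are arbitrary illustrative values, not material parameters; the point is only that a constant $v_{\text{ph}}$ and $\lambda_{\text{ph}}$ together with $c_{V}\sim T^{3}$ force $\kappa\sim T^{3}$:

```python
import math

# Sanity check of the kinetic-theory estimate kappa ~ v_ph * c_V * lambda_ph:
# with a constant phonon velocity and mean free path, and c_V ~ T^3, the
# thermal conductivity must scale as T^3. All prefactors are arbitrary
# illustrative values, not material parameters.

v_ph = 5.0e3        # phonon velocity (arbitrary units)
lam_ph = 1.0e-6     # temperature-independent mean free path
c_v = lambda T: 2.0 * T**3   # acoustic-phonon specific heat, ~ T^3

kappa = lambda T: v_ph * c_v(T) * lam_ph / 3.0

# Log-log slope between two low temperatures should be 3.
T1, T2 = 1.0, 4.0
slope = math.log(kappa(T2) / kappa(T1)) / math.log(T2 / T1)
assert abs(slope - 3.0) < 1e-9
```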
A recent experiment shows a few such novel behaviors of the thermal
conductivity in quantum paraelectrics. For SrTiO3 there is a stronger than
$T^{3}$ increase of the thermal conductivity at low temperature. Poiseuille
flow of the phonons has been argued as a possible explanation of this
behavior. On the other hand, it has been reported that in the case of KTaO3, there
is a weaker than $T^{3}$ behavior of the thermal conductivity [24]. The latter
behavior is typically attributed to the mutual interactions between more than
one low lying mode [31]. These experimental results suggest that quantum
paraelectrics are wonderful playgrounds to explore such effects. We put forward such
an explanation in terms of the thermal conduction of the optical phonons and
their interaction with the acoustic phonons, which are the relevant degrees of
freedom in this system and in the temperature regime of our interest.
Our main result can be summarized as follows. Within a minimal model with
coupled TA-TO phonons, it is shown that near the quantum critical point, the
temperature variation of the thermal conductivity due to the nearly soft optical
phonon follows a power law $T^{\alpha}$, $1<\alpha<2$, and its numerical value
can increase by an order of magnitude. The same is shown in Fig. 2.
Figure 2: Power law behavior of the optical phonon contribution to the thermal
conductivity near the quantum critical point as obtained in the present
formalism.
This paper is organized as follows. First, we provide the mathematical
description of the anharmonic effects in Sec. II by introducing the model
Hamiltonian terms of the order of cubic and quartic in phonon displacement
functions. Then, we derive the expressions of the decay constants and thermal
conductivity in Sec. III. In Sec. IV, we present the numerical results. In
Sec. V, we discuss our results and conclude.
## II Phonon decay due to anharmonicity
In a perfect crystal, harmonic phonons do not decay due to their infinitely
long lived nature and lead to an infinite thermal conductivity [32]. We need to
introduce symmetry permitted anharmonic interactions to study the finite
lifetime of the phonons and hence the associated transport properties [33, 34].
In the present work, we focus on the study of anharmonic effects on the
thermal conductivity of quantum paraelectrics.
For a system in which atoms are displaced from their equilibrium positions,
the total Hamiltonian can be written as
$\displaystyle H$ $\displaystyle=$ $\displaystyle
H_{\text{har}}+H_{\text{anhar}},$ (1)
where $H_{\text{har}}$ corresponds to a free phonon part of the Hamiltonian
and $H_{\text{anhar}}$ represents the corrections to the Harmonic
approximation. The Harmonic part in terms of the phonon coordinates can be
expressed like
$\displaystyle H_{\text{har}}$ $\displaystyle=$
$\displaystyle\sum_{\alpha\mathbf{q}}\frac{1}{2}\omega_{\mathbf{\alpha
q}}A_{\alpha\mathbf{q}}A_{\alpha\mathbf{-q}}.$ (2)
Here
$A_{\alpha\mathbf{q}}=a^{\dagger}_{\alpha\mathbf{q}}+a_{\alpha\mathbf{q}}$ is
the sum of the creation $(a^{\dagger}_{\alpha\mathbf{q}})$ and the
annihilation $(a_{\alpha\mathbf{q}})$ phonon operators,
$\omega_{\alpha\mathbf{q}}$ is the phonon dispersion for the $\alpha$-th
branch. The latter quantity for small wave vectors has the form
$\omega_{\alpha q}=\sqrt{{\Delta_{\alpha 0}}^{2}+c_{\alpha}^{2}q^{2}},$ (3)
having $\Delta_{\alpha 0}$ the phonon energy gap, $c_{\alpha}$ a sound
velocity and $q$ a phonon wave vector. The gap distinguishes the phonon
branches: for $\Delta_{\alpha 0}=0$ the dispersion corresponds to an acoustic
phonon, and for $\Delta_{\alpha 0}\neq 0$ it represents an optical phonon.
Since we are considering a minimal model with one
TA and one TO branch, we drop the branch index $\alpha$ for further
discussions, and $\Delta_{0}$ represents the energy gap for the zone center TO
phonon at $T=0$. Further, the ferroelectric instability in materials with
anharmonic effects is determined by the $q=0$ optical phonon, i.e.
$\omega_{q}=\Delta_{0}$. The ferroelectric quantum critical point is
determined by $\Delta_{0}=0$ at $T=0$ for the optic branch. Near this quantum
critical point, the phonon energy gap for the optical branch acquires a quadratic
temperature dependence at finite temperature,
$\Delta=\Delta_{0}+aT^{2}$, with $a$ a material dependent parameter [2, 1],
as depicted in Fig. 1.
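A short numerical sketch of Eq. (3) with the temperature dependent gap (the parameter values below are illustrative only, not fitted material parameters):

```python
import math

# Sketch of the dispersion of Eq. (3) with the temperature-dependent gap
# Delta = Delta_0 + a*T^2. At the ferroelectric quantum critical point
# (Delta_0 = 0) the zone-center TO phonon energy vanishes as T^2.

def omega(q, Delta, c=1.0):
    return math.sqrt(Delta**2 + (c * q)**2)

a, Delta0 = 0.5, 0.0          # at the quantum critical point
gap = lambda T: Delta0 + a * T**2

# Acoustic branch (Delta = 0) is gapless; the optical gap scales as T^2.
assert omega(0.0, 0.0) == 0.0
for T in (0.1, 0.2, 0.4):
    assert abs(omega(0.0, gap(T)) / T**2 - a) < 1e-9
```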
Figure 3: a): Lowest order three phonon process due to the cubic anharmonic
term where the quantum fluctuations of the relevant optic branch contribute
the most and b) Second order four phonon process due to the quartic anharmonic
term. Here the dotted and the dashed curves represent two different kind of
phonons.
In the anharmonic part $H_{\text{anhar}}$ of the total Hamiltonian, we
consider the two lowest order terms, of the following forms.
$\displaystyle H_{3}$ $\displaystyle=$
$\displaystyle\sum_{\alpha\beta\gamma\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}}V_{\alpha\beta\gamma}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})A_{\alpha,\mathbf{k}_{1}}A_{\beta,\mathbf{k}_{2}}A_{\gamma,\mathbf{k}_{3}}$
$\displaystyle H_{4}$ $\displaystyle=$
$\displaystyle\sum_{\alpha\beta\gamma\delta\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}\mathbf{k}_{4}}V_{\alpha\beta\gamma\delta}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3},\mathbf{k}_{4})A_{\alpha,\mathbf{k}_{1}}A_{\beta,\mathbf{k}_{2}}A_{\gamma,\mathbf{k}_{3}}A_{\delta,\mathbf{k}_{4}}$
Here $V$ in both terms represents the anharmonic coefficients and the relative
weights of these coefficients depend on the symmetry of system. The
Hamiltonian $H_{3}$ corresponds to the three phonon process and $H_{4}$ to the
four phonon process. The lowest order processes from both types of
anharmonicities which involve momentum transfer between the two kinds of phonons
are shown in Fig. 3. The first order effect from the fourth order
anharmonicity is not associated with any net momentum transfer. It
contributes the quadratic temperature dependence to the phonon energy gap,
as shown in Fig. 4.
Figure 4: Mass enhancement due to the lowest order quartic anharmonic effect
which leads to Eq. 3 with renormalized gap $\Delta_{0}\sim T^{2}.$
### II.1 Three Phonon process
For the three-phonon process, the decay rate, or inverse lifetime, of a phonon
at frequency $\omega$ due to the cubic anharmonicity at temperature
$T$ is given as:
$\Gamma^{3}(\omega,T)=18\pi\sum_{\mathbf{k}_{1}\mathbf{k}_{2}\lambda_{1}\lambda_{2}}|V(-\mathbf{k}\lambda;\mathbf{k}_{1}\lambda_{1};\mathbf{k}_{2}\lambda_{2})|^{2}\bigg\{(n_{\omega_{1}}-n_{-\omega_{2}})\delta_{12}^{s}+(n_{\omega_{1}}-n_{\omega_{2}})\delta_{12}^{d}\bigg\}.$
(5)
Here $\omega_{i}\equiv\omega(\mathbf{k}_{i}\lambda_{i})$,
$n_{\omega_{i}}=(e^{\beta\omega_{i}}-1)^{-1}$ is the Bose-Einstein
distribution function with $\beta$ the inverse temperature, and the
Dirac delta functions are abbreviated as
$\delta_{12}^{s}=\delta_{\omega-\omega_{1}-\omega_{2}}-\delta_{\omega+\omega_{1}+\omega_{2}}$
and
$\delta_{12}^{d}=\delta_{\omega+\omega_{1}-\omega_{2}}-\delta_{\omega-\omega_{1}+\omega_{2}}$.
Considering the anharmonic coefficient
$|V(-\mathbf{k}\lambda;\mathbf{k}_{1}\lambda_{1};\mathbf{k}_{2}\lambda_{2})|^{2}=\frac{V_{0}^{2}}{\omega(-\mathbf{k}\lambda)\omega(\mathbf{k}_{1}\lambda_{1})\omega(\mathbf{k}_{2}\lambda_{2})}$[35]
and then converting the summations into integrals, we have
$\Gamma^{3}(\omega,T)=18\pi\frac{V_{0}^{2}}{\omega}\int
d\omega_{1}\rho_{1}^{\omega_{1}}\bigg\{\rho_{2,\omega}^{\omega_{1}}(n_{\omega_{1}}-n_{-\omega+\omega_{1}})+\rho_{2,\omega}^{-\omega_{1}}(n_{\omega_{1}}-n_{\omega+\omega_{1}})-\text{terms
with }(\omega\rightarrow-\omega)\bigg\},$ (6)
where
$\rho_{i,\omega}^{\pm\omega_{1}}=\rho_{i}(\omega\mp\omega_{1})/(\omega\mp\omega_{1})$,
with $\rho_{i}(\omega)$ the density of states. For a given phonon branch it
takes the form $\rho_{i}({\omega})\sim\omega\sqrt{\omega^{2}-\Delta^{2}}$
for $\omega>\Delta$ and vanishes otherwise. Here we take
$\Delta=\Delta_{0}+aT^{2}$, with $a$ (in eV$^{-1}$) a constant for the optic
phonon; it is not calculated explicitly here but taken from Refs. [1, 2]
and shown in Fig. 1.
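A sketch of this gapped density of states, with the temperature entering only through $\Delta(T)=\Delta_{0}+aT^{2}$ (overall normalization dropped; units and the value of $a$ are illustrative):

```python
import numpy as np

def gap(T, Delta0=0.0, a=0.01):
    """Optic gap Delta(T) = Delta0 + a*T^2 (illustrative units)."""
    return Delta0 + a * T**2

def dos(omega, Delta):
    """rho(omega) ~ omega*sqrt(omega^2 - Delta^2) for omega > Delta, else 0."""
    omega = np.atleast_1d(np.asarray(omega, dtype=float))
    out = np.zeros_like(omega)
    m = omega > Delta
    out[m] = omega[m] * np.sqrt(omega[m] ** 2 - Delta ** 2)
    return out
```

Below the gap the density of states vanishes, which is what suppresses the low-temperature scattering phase space away from the quantum critical point.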
To simplify the expression for the decay rate, we use $\rho_{1}(\omega)$ for
the optic phonon and $\rho_{2}(\omega)$ for the acoustic phonon. This yields
$\Gamma^{3}(\omega,T)=\frac{18\pi}{4\pi^{4}}\frac{V_{0}^{2}}{\omega}\int
d\omega_{1}\sqrt{\omega_{1}^{2}-\Delta^{2}}\bigg\{(\omega-\omega_{1})\big[n_{\omega_{1}}-n_{-\omega+\omega_{1}}\big]+(\omega+\omega_{1})\big[n_{\omega_{1}}-n_{\omega+\omega_{1}}\big]\bigg\}.$
(7)
This is the decay rate of the optical phonon due to the three-phonon
process.
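Eq. (7) can be evaluated directly by quadrature; a midpoint-rule sketch in units with $k_{B}=\hbar=1$ (the values of `V0`, the cutoff `wmax`, and the grid size are assumptions for illustration, not parameters from the text):

```python
import numpy as np

def n_be(x, T):
    """Bose-Einstein occupation, k_B = 1."""
    return 1.0 / np.expm1(x / T)

def gamma3(omega, T, Delta, V0=1.0, wmax=10.0, N=20000):
    """Midpoint-rule quadrature of the three-phonon decay rate, Eq. (7)."""
    edges = np.linspace(Delta, wmax, N + 1)
    # Midpoints avoid the endpoint w1 = Delta and (for generic grids) the
    # integrable point w1 = omega where n(w1 - omega) diverges.
    w1 = 0.5 * (edges[:-1] + edges[1:])
    dw = (wmax - Delta) / N
    root = np.sqrt(w1**2 - Delta**2)
    term1 = (omega - w1) * (n_be(w1, T) - n_be(w1 - omega, T))
    term2 = (omega + w1) * (n_be(w1, T) - n_be(w1 + omega, T))
    pref = (18.0 * np.pi / (4.0 * np.pi**4)) * V0**2 / omega
    return pref * np.sum(root * (term1 + term2)) * dw
```

With this form the rate grows with temperature and is suppressed by a finite gap $\Delta$, consistent with the behavior discussed in the Results section.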
### II.2 Four Phonon process
Due to the fourth order anharmonicity, the phonon decay rate can be expressed
as
$\displaystyle\Gamma^{4}(\omega,T)=$ $\displaystyle
96\pi\sum_{\mathbf{k}_{1}\lambda_{1}}\sum_{\mathbf{k}_{2}\lambda_{2}}\sum_{\mathbf{k}_{3}\lambda_{3}}|V(-\mathbf{k}\lambda;\mathbf{k}_{1}\lambda_{1};\mathbf{k}_{2}\lambda_{2};\mathbf{k}_{3}\lambda_{3})|^{2}$
$\displaystyle\quad\bigg\{(-n_{-\omega_{1}}n_{-\omega_{2}}n_{-\omega_{3}}-n_{\omega_{1}}n_{\omega_{2}}n_{\omega_{3}})\delta_{123}^{s}+3(n_{\omega_{1}}n_{-\omega_{2}}n_{-\omega_{3}}+n_{-\omega_{1}}n_{\omega_{2}}n_{\omega_{3}})\delta_{123}^{d}\bigg\},$
(8)
where $\delta_{123}$ represents the Dirac delta function arising from the
combination of different phonon frequencies. The corresponding factors with
superscripts $s$ and $d$ are
$\delta_{123}^{s}=\delta_{\omega-\omega_{1}-\omega_{2}-\omega_{3}}-\delta_{\omega+\omega_{1}+\omega_{2}+\omega_{3}}$
and
$\delta_{123}^{d}=\delta_{\omega+\omega_{1}-\omega_{2}-\omega_{3}}-\delta_{\omega-\omega_{1}+\omega_{2}+\omega_{3}}$,
respectively.
Assuming the anharmonic coefficient for this process in a similar fashion to
the three phonon process
$|V(-\mathbf{k}\lambda;\mathbf{k}_{1}\lambda_{1};\mathbf{k}_{2}\lambda_{2};\mathbf{k}_{3}\lambda_{3})|^{2}=\frac{V_{1}^{2}}{\omega(-\mathbf{k}\lambda)\omega(\mathbf{k}_{1}\lambda_{1})\omega(\mathbf{k}_{2}\lambda_{2})\omega(\mathbf{k}_{3}\lambda_{3})}$
and simplifying the above equation, the decay rate for the four-phonon process
can be written as
$\displaystyle\Gamma^{4}(\omega,T)=$ $\displaystyle
96\pi\frac{V_{1}^{2}}{\omega}\int d\omega_{1}\rho_{1}^{\omega_{1}}\int
d\omega_{2}\rho_{2}^{\omega_{2}}\bigg\{\rho_{2}^{-\omega_{1}-\omega_{2}}(-n_{-\omega_{1}}n_{-\omega_{2}}n_{-\omega+\omega_{1}+\omega_{2}}-n_{\omega_{1}}n_{\omega_{2}}n_{\omega-\omega_{1}-\omega_{2}})$
$\displaystyle+3\rho_{2}^{\omega_{1}-\omega_{2}}(n_{\omega_{1}}n_{-\omega_{2}}n_{-\omega-\omega_{1}+\omega_{2}}+n_{-\omega_{1}}n_{\omega_{2}}n_{\omega+\omega_{1}-\omega_{2}})-\text{terms
with }(\omega\rightarrow-\omega)\bigg\}.$
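Numerically, the double integral over $\omega_{1}$ and $\omega_{2}$ can be discretized the same way as in the three-phonon case; a small, generic 2D midpoint-rule helper (not tied to specific parameter values):

```python
import numpy as np

def midpoints(a, b, N):
    """Midpoint grid and cell width for one integration axis."""
    edges = np.linspace(a, b, N + 1)
    return 0.5 * (edges[:-1] + edges[1:]), (b - a) / N

def integrate2d(f, ax, bx, ay, by, N=200):
    """2D midpoint rule, as used for the omega_1, omega_2 integrals above."""
    x, dx = midpoints(ax, bx, N)
    y, dy = midpoints(ay, by, N)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return float(np.sum(f(X, Y)) * dx * dy)

# Sanity check on a separable integrand: int_0^1 int_0^1 x*y dx dy = 1/4.
val = integrate2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0)
```

The integrand for $\Gamma^{4}$ is then the occupation-factor combination above evaluated on the $(\omega_{1},\omega_{2})$ grid.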
Here we restrict ourselves to the simplest model that captures the
essential aspects of the quantum fluctuations associated with the nearly soft
optic mode, and use $\rho_{1}(\omega)$ for the optic phonon and the other
densities of states for the acoustic phonon branches.
## III Thermal conductivity
In order to compute the thermal conductivity, we start with its expression
based on the Kubo approach [36], given by
$\displaystyle\kappa(\omega,T)$ $\displaystyle=$
$\displaystyle\frac{\kappa_{0}}{T}\sum_{\mathbf{k}}\omega^{2}(\mathbf{k})v^{2}_{\mathbf{k}}\int_{-\infty}^{\infty}d\omega^{\prime}f_{T}(\omega,\omega^{\prime})\mathcal{S}(\omega,\omega^{\prime}).$
Here
$f_{T}(\omega,\omega^{\prime})=\frac{n_{\omega^{\prime}}-n_{\omega+\omega^{\prime}}}{\omega}$
is the thermal weight and the spectral weight function
$\mathcal{S}(\omega,\omega^{\prime})$ is defined as
$\mathcal{S}(\omega,\omega^{\prime})=\mathcal{A}(\mathbf{k},\omega^{\prime})\mathcal{A}(\mathbf{k},\omega+\omega^{\prime})$.
Further, in the zero-frequency limit, the above equation can be written as
$\kappa(\omega=0,T)=\frac{\kappa_{0}}{T}\sum_{\mathbf{k}}\omega^{2}(\mathbf{k})v^{2}_{\mathbf{k}}\int_{-\infty}^{\infty}d\omega^{\prime}\frac{\partial
n(\omega^{\prime})}{\partial\omega^{\prime}}\mathcal{A}^{2}(\mathbf{k},\omega^{\prime}).$
(11)
Here the spectral function $\mathcal{A}(\mathbf{k},\omega)$ is defined as the
imaginary part of the phonon propagator multiplied by $-1/\pi$,
i.e.,
$\displaystyle\mathcal{A}(\mathbf{k},\omega)$ $\displaystyle=$
$\displaystyle-\frac{1}{\pi}\mathrm{Im}\,\mathcal{D}(\omega,\mathbf{k})$ (12)
$\displaystyle=$
$\displaystyle-\frac{1}{\pi}\frac{2\omega(\mathbf{k})\Gamma(\mathbf{k},\omega)}{\left(\omega^{2}-\omega^{2}(\mathbf{k})\right)^{2}+4\omega(\mathbf{k})^{2}\Gamma(\mathbf{k},\omega)^{2}}.$
Using the expressions for the decay rates, Eq. (7) and the four-phonon
expression above, together with the phonon dispersion, the thermal
conductivity, Eq. (11), can be calculated. This calculation is within linear
response theory but goes beyond the relaxation-time approximation. We evaluate
these expressions numerically and discuss the behavior of the relevant
quantities in the next section.
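A sketch of the two ingredients of Eq. (11): the spectral function of Eq. (12), taken positive here (the overall sign convention drops out since Eq. (11) involves $\mathcal{A}^{2}$), and the thermal weight $-\partial n/\partial\omega'$ (units with $k_{B}=1$; the numbers below are illustrative):

```python
import numpy as np

def spectral(omega, wk, gamma):
    """Phonon spectral function of Eq. (12), taken positive; the sign drops
    out of Eq. (11), which uses A^2."""
    return (1.0 / np.pi) * 2.0 * wk * gamma / (
        (omega**2 - wk**2) ** 2 + 4.0 * wk**2 * gamma**2)

def thermal_weight(omega, T):
    """-dn/domega for the Bose function (positive); Eq. (11) carries the
    corresponding sign convention. k_B = 1."""
    x = omega / T
    return (1.0 / T) * np.exp(x) / np.expm1(x) ** 2

# The spectral function peaks at the phonon frequency omega = omega(k)
# and sharpens as the decay rate Gamma decreases.
w = np.linspace(0.01, 3.0, 3000)
peak = w[np.argmax(spectral(w, wk=1.5, gamma=0.05))]
```

The $\mathbf{k}$-sum in Eq. (11) then weights these Lorentzian-like peaks by $\omega^{2}(\mathbf{k})v_{\mathbf{k}}^{2}$.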
Figure 5: (a) Normalized temperature-dependent thermal decay rate as a
function of $T/\Theta_{D}$, where $\Theta_{D}$ is the Debye temperature, for
the three (T) and four (F) phonon processes. The solid blue curve refers to
the three-phonon contribution and the dotted black curve to the combined
three- and four-phonon (T+F) contribution. (b) Decay rate at a fixed
temperature as a function of frequency.
## IV Results
In Fig. 5(a), we plot the normalized thermal decay rate $\Gamma(T)/\Gamma_{0}$
computed from Eq. (7) and the four-phonon expression above as a function of
$T/\Theta_{D}$, where $\Theta_{D}$ is the Debye temperature, at gap
$\Delta_{0}=0$. We observe that the scattering rate increases with
temperature. Both three- and four-phonon processes contribute to the rise in
$\Gamma(T)/\Gamma_{0}$; however, the contribution from the four-phonon process
is quite small, as shown by the black dotted curve in Fig. 5(a). Further, it
shows nonlinear behavior at temperatures greater than $0.1$ of the Debye
temperature. In the frequency-dependent case, the decay rate varies
nonlinearly with the frequency, as shown in Fig. 5(b).
Figure 6: Normalized thermal decay rate of the optical phonon as a function of
(a) scaled temperature $T/\Theta_{D}$ and (b) frequency, at different gap
values $\Delta_{0}=0.0$, $0.1$, and $1.0$ meV.
Figure 7: Thermal conductivity of a paraelectric due to the three-phonon (T)
process and the combination of three- and four-phonon (T+F) processes. The
curves are plotted at the quantum critical point $\Delta_{0}=0$.
Referring to Fig. 6(a) for the temperature-dependent normalized scattering
rate at different gap values, we have the following findings. First, an
increase in the gap $\Delta_{0}$ suppresses $\Gamma(T)/\Gamma_{0}$.
Physically, this arises from the smaller number of excited phonons in both the
acoustic and optical branches. Second, on moving away from the quantum
critical point $\Delta_{0}=0$, the contribution made by acoustic phonons
decreases, which results in a lower decay rate. In the frequency-dependent
case, Fig. 6(b), the variation of the scattering rate with the gap is not
clearly visible because the gap is small compared to the frequency scale.
The thermal conductivity of a paraelectric material is presented in Fig. 7 for
the different phonon processes. At low temperatures, the four-phonon process
makes no appreciable contribution to the thermal conductivity for the fitting
parameters $a=0.01$ eV$^{-1}$ and $V_{1}$, $V_{0}=1000$ eV$^{-1}$. Our choice
of parameters is such that the maximum optical phonon decay rate $\Gamma_{0}$
is of the order of a few meV. This choice is in accord with experimental
findings on the spectral width of the optical phonons due to TA-TO
interactions [37, 22] and is well justified for the temperature regime of our
interest. Even with the same or slightly higher values of $V_{1}$, the fourth
order anharmonicity contributes little, which indicates that third order
anharmonicity effects play the most significant role in the thermal
conduction, as expected.
We note that the four-phonon process is usually considered to be important
near the Debye temperature [38]. However, we anticipated that, owing to the
presence of a nearly soft optic phonon which has a significant amount of
excitation even at low temperature, it might turn out to be important here.
Our calculation rules out that possibility.
Figure 8: Normalized thermal conductivity as a function of $T/\Theta_{D}$ at
different values of the gap. The blue curve corresponds to the quantum
critical scenario.
Here the plot of $\kappa(T)/\kappa_{0}$ vs $T/\Theta_{D}$ shows a peak around
a temperature
$T\approx 0.075\Theta_{D}$ and then decays toward zero as the temperature
is lowered. Further, the broadening of the curve is decided by the decay rate
$\Gamma(T)$ whose magnitude depends on the strength of the anharmonic
coefficients. To look at its behavior away from a quantum critical point, we
plot thermal conductivity at different gap values in Fig. 8 as a function of
$T/\Theta_{D}$. We see that at points away from the quantum critical point,
the optical phonon does not conduct, resulting in a vanishing contribution to
the thermal conductivity at low temperatures. Also, the peak shifts toward
higher temperatures, as shown in Fig. 8 for $\Delta_{0}=0.1$ and $1$
meV. These results are qualitatively in agreement with experimental results
[24] as discussed.
## V Discussion and Conclusion
Our results can be understood as follows. Like the acoustic phonon density,
the optical phonon density near the quantum critical point is also strongly
temperature dependent. Thus the decay rate of the optical phonon is strongly
temperature dependent, as plotted in Fig. 6(a). We may describe the
temperature dependence as $\sim
T^{1+x}$ where $0<x<1$. A finite temperature scaling description should give
$c_{v}\sim T^{3}$ for a quantum critical optical phonon, the same as for
acoustic phonons. Considering the above and following the classical
expression, at low temperature $\kappa\sim c_{V}/\Gamma\sim T^{3-1-x}\sim
T^{\alpha}$ with $2>\alpha=(2-x)>1$. The same has been obtained in our
study and shown in the log plot for thermal conductivity in Fig. 2. Though we
do not have a closed analytic expression for the thermal conductivity, the
power law obtained here is universal in the sense that it is determined only
by the available energy scale $k_{B}T$, the spatial dimension, the symmetry of
the order parameter, and the dynamical critical exponent $z=1$, and does not
depend on the specific values of the interaction parameters. Similar behavior
has been obtained in various model calculations on different quantum critical
systems: e.g., the loop contribution in a 2D superconductor-diffusive metal
with $z=2$ gives $\kappa\sim\ln T$ [39].
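The exponent in this scaling argument can be read off from a log-log slope; a synthetic check of $\kappa\sim c_{V}/\Gamma\sim T^{2-x}$ (the value of $x$ below is arbitrary, chosen only to exercise the fit):

```python
import numpy as np

# c_V ~ T^3 and Gamma ~ T^(1+x) give kappa ~ T^(2-x); recover the exponent
# alpha = 2 - x from the slope of log(kappa) vs log(T) on synthetic data.
x = 0.4
T = np.logspace(-2, -1, 50)
kappa = T**3 / T**(1 + x)                             # ~ T^(2-x)
alpha = np.polyfit(np.log(T), np.log(kappa), 1)[0]    # -> 1.6 = 2 - x
```

The same slope extraction applied to the numerically computed $\kappa(T)$ gives the exponent $\alpha$ between 1 and 2 quoted above.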
Another important finding from the numerical evaluation of the expression for
the thermal conductivity is that near the critical point the thermal
conductivity due to optic phonons is enhanced by an order of magnitude
compared to away from the quantum critical point. Thus we propose that
TA-quantum critical TO scattering introduces an additional channel for thermal
conduction which may dominate over other contributions in a certain
temperature regime. As the related experimental results suggest, our results
seem to be relevant for KTaO3 [24], where any structural transition is absent.
Our analysis paves a way for addressing quantum critical aspects of thermal
conductivity and thermal Hall effects in quantum paraelectrics [26, 40] and
may be extended to bicritical scenarios in certain multiferroic systems [4,
41, 10].
## Acknowledgments
PB acknowledges the National Key Research and Development Program of China
(grant No. 2017YFA0303400), China postdoctoral science foundation (grant no.
2019M650461) and NSFC grant no. U1930402 for financial support.
## References
* Das and Mishra [2009] N. Das and S. G. Mishra, Fluctuations and criticality in quantum paraelectrics, Journal of Physics: Condensed Matter 21, 095901 (2009).
* Rowley _et al._ [2014] S. E. Rowley, L. J. Spalek, R. P. Smith, M. P. M. Dean, M. Itoh, J. F. Scott, G. G. Lonzarich, and S. S. Saxena, Ferroelectric quantum criticality, Nature Physics 10, 367 (2014).
* Pálová _et al._ [2009] L. Pálová, P. Chandra, and P. Coleman, Quantum critical paraelectrics and the Casimir effect in time, Phys. Rev. B 79, 075101 (2009).
* Das [2012] N. Das, Quantum critical behavior of a magnetic quantum paraelectric, Physics Letters A 376, 2683 (2012).
* Das [2013] N. Das, Effects of strain coupling and marginal dimensionality in the nature of phase transition in quantum paraelectrics, International Journal of Modern Physics B 27, 1350028 (2013).
* Das [2014] N. Das, On the possibility of mixed phases in disordered quantum paraelectrics, Modern Physics Letters B 28, 1450167 (2014).
* Sachdev [1999] S. Sachdev, _Quantum Phase Transitions_ (Cambridge University Press, UK, 1999).
* Müller and Burkard [1979] K. A. Müller and H. Burkard, SrTiO3: An intrinsic quantum paraelectric below 4 K, Phys. Rev. B 19, 3593 (1979).
* Conduit and Simons [2010] G. J. Conduit and B. D. Simons, Theory of quantum paraelectrics and the metaelectric transition, Phys. Rev. B 81, 024102 (2010).
* Narayan _et al._ [2019] A. Narayan, A. Cano, A. V. Balatsky, and N. A. Spaldin, Multiferroic quantum criticality, Nature Materials 18, 223 (2019).
* Li _et al._ [2019] X. Li, T. Qiu, J. Zhang, E. Baldini, J. Lu, A. M. Rappe, and K. A. Nelson, Terahertz field–induced ferroelectricity in quantum paraelectric SrTiO3, Science 364, 1079 (2019).
* Ang _et al._ [2000] C. Ang, Z. Yu, and Z. Jing, Impurity-induced ferroelectric relaxor behavior in quantum paraelectric SrTiO3 and ferroelectric BaTiO3, Phys. Rev. B 61, 957 (2000).
* Khaetskii _et al._ [2020] A. Khaetskii, V. Juričič, and A. V. Balatsky, Thermal magnetic fluctuations of a ferroelectric quantum critical point, Journal of Physics: Condensed Matter 33, 04LT01 (2020).
* Geirhos _et al._ [2020] K. Geirhos, P. Lunkenheimer, M. Blankenhorn, R. Claus, Y. Matsumoto, K. Kitagawa, T. Takayama, H. Takagi, I. Kézsmárki, and A. Loidl, Quantum paraelectricity in the Kitaev quantum spin liquid candidates H3LiIr2O6 and D3LiIr2O6, Phys. Rev. B 101, 184410 (2020).
* He _et al._ [2020] X. He, D. Bansal, B. Winn, S. Chi, L. Boatner, and O. Delaire, Anharmonic eigenvectors and acoustic phonon disappearance in quantum paraelectric SrTiO3, Phys. Rev. Lett. 124, 145901 (2020).
* Kustov _et al._ [2020] S. Kustov, I. Liubimova, and E. K. H. Salje, Domain dynamics in quantum-paraelectric SrTiO3, Phys. Rev. Lett. 124, 016801 (2020).
* Nataf _et al._ [2020] G. F. Nataf, M. Guennou, J. M. Gregg, D. Meier, J. Hlinka, E. K. H. Salje, and J. Kreisel, Domain-wall engineering and topological defects in ferroelectric and ferroelastic materials, Nature Reviews Physics 2, 634 (2020).
* Sim _et al._ [2021] S. Sim, H. Yang, H.-L. Kim, M. J. Coak, M. Itoh, Y. Noda, and J.-G. Park, Sizable suppression of thermal Hall effect upon isotopic substitution in SrTiO3, Phys. Rev. Lett. 126, 015901 (2021).
* Ye _et al._ [2015] Z.-Q. Ye, B.-Y. Cao, W.-J. Yao, T. Feng, and X. Ruan, Spectral phonon thermal properties in graphene nanoribbons, Carbon 93, 915 (2015).
* Zou _et al._ [2019] J.-H. Zou, X.-T. Xu, and B.-Y. Cao, Size-dependent mode contributions to the thermal transport of suspended and supported graphene, Applied Physics Letters 115, 123105 (2019).
* Coak _et al._ [2019] M. J. Coak, C. R. S. Haines, C. Liu, G. G. Guzmán-Verri, and S. S. Saxena, Pressure dependence of ferroelectric quantum critical fluctuations, Phys. Rev. B 100, 214111 (2019).
* Shirane _et al._ [1967] G. Shirane, R. Nathans, and V. J. Minkiewicz, Temperature dependence of the soft ferroelectric mode in KTaO$_3$, Phys. Rev. 157, 396 (1967).
* Farhi _et al._ [2000] E. Farhi, A. Tagantsev, R. Currat, B. Hehlen, E. Courtens, and L. Boatner, Low energy phonon spectrum and its parameterization in pure KTaO3 below 80 K, EPJB 15, 615 (2000).
* Martelli _et al._ [2018] V. Martelli, J. L. Jiménez, M. Continentino, E. Baggio-Saitovitch, and K. Behnia, Thermal transport and phonon hydrodynamics in strontium titanate, Phys. Rev. Lett. 120, 125901 (2018).
* Yang _et al._ [2020] Y.-f. Yang, G.-M. Zhang, and F.-C. Zhang, Universal behavior of the thermal Hall conductivity, Phys. Rev. Lett. 124, 186602 (2020).
* Li _et al._ [2020] X. Li, B. Fauqué, Z. Zhu, and K. Behnia, Phonon thermal Hall effect in strontium titanate, Phys. Rev. Lett. 124, 105901 (2020).
* Ziman [1960] J. M. Ziman, _Electrons and Phonons_ (Clarendon Oxford, 1960).
* Tritt [2005] T. M. Tritt, _Conductivity Theory, Properties, and Applications_ (Springer, Berlin, 2005).
* Berman [1976] R. Berman, _Thermal Conduction in Solids_ (Clarendon Press, Oxford, 1976).
* Bourgeal _et al._ [1988] S. Bourgeal, R. Villar, S. Vieira, and V. A. Trepakov, Low temperature specific heat of KTaO3 from 0.3 K, Ferroelectrics 79, 237 (1988).
* Gurzhi [1968] R. N. Gurzhi, Hydrodynamic effects in solids at low temperature, Phys. Usp. 11, 255 (1968).
* Ashcroft and Mermin [1976] N. W. Ashcroft and N. D. Mermin, _Solid state physics_ (Science: Physics (Saunders College), 1976).
* Klemens [1966] P. G. Klemens, Anharmonic decay of optical phonons, Phys. Rev. 148, 845 (1966).
* Chang and Jones [1962] G. K. Chang and R. E. Jones, Low-temperature thermal conductivity of amorphous solids, Phys. Rev. 126, 2055 (1962).
* Peierls [1974] R. Peierls, _Quantum Theory of Solids_ (Oxford University Press, 1974).
* Mahan [1990] G. D. Mahan, _Many-Particle Physics_ (Plenum, New York and London, 2nd. Ed., 1990).
* Axe _et al._ [1970] J. D. Axe, J. Harada, and G. Shirane, Anomalous acoustic dispersion in centrosymmetric crystals with soft optic phonons, Phys. Rev. B 1, 1227 (1970).
* Glassbrenner and Slack [1964] C. J. Glassbrenner and G. A. Slack, Thermal conductivity of silicon and germanium from 3 K to the melting point, Phys. Rev. 134, A1058 (1964).
* Podolsky _et al._ [2007] D. Podolsky, A. Vishwanath, J. Moore, and S. Sachdev, Thermoelectric transport near pair breaking quantum phase transition out of $d$-wave superconductivity, Phys. Rev. B 75, 014520 (2007).
* Chen _et al._ [2020] J.-Y. Chen, S. A. Kivelson, and X.-Q. Sun, Enhanced thermal hall effect in nearly ferroelectric insulators, Phys. Rev. Lett. 124, 167601 (2020).
* Morice _et al._ [2017] C. Morice, P. Chandra, S. E. Rowley, G. Lonzarich, and S. S. Saxena, Hidden fluctuations close to a quantum bicritical point, Phys. Rev. B 96, 245104 (2017).
# CLASTER: Clustering with Reinforcement Learning
for Zero-Shot Action Recognition
Shreyank N Gowda
University of Edinburgh
<EMAIL_ADDRESS>
Laura Sevilla-Lara
University of Edinburgh
<EMAIL_ADDRESS>
Frank Keller
University of Edinburgh
<EMAIL_ADDRESS>
Marcus Rohrbach
Facebook AI Research
<EMAIL_ADDRESS>
###### Abstract
Zero-shot action recognition is the task of recognizing action classes without
visual examples, only with a semantic embedding which relates unseen to seen
classes. The problem can be seen as learning a function which generalizes well
to instances of unseen classes without losing discrimination between classes.
Neural networks can model the complex boundaries between visual classes, which
explains their success as supervised models. However, in zero-shot learning,
these highly specialized class boundaries may not transfer well from seen to
unseen classes. In this paper we propose a centroid-based representation
which clusters visual and semantic representations, considers all training
samples at once, and in this way generalizes well to instances from unseen
classes. We optimize the clustering using Reinforcement Learning, which we show
is critical for our approach to work. We call the proposed method CLASTER and
observe that it consistently outperforms the state-of-the-art in all standard
datasets, including UCF101, HMDB51 and Olympic Sports; both in the standard
zero-shot evaluation and in generalized zero-shot learning. Further, we show
that our model performs competitively in the image domain as well,
outperforming the state-of-the-art in many settings.
## 1 Introduction
Research on action recognition in videos has made rapid progress in the last
years, with models becoming more accurate and even some datasets becoming
saturated. Much of this progress has hinged on large scale training sets.
However, it is not practical to collect thousands of video samples every time
we want a network to recognize a new class. This idea has led to research in
the zero-shot learning (ZSL) domain, where training occurs in a set of seen
classes, and testing occurs in a set of unseen classes. In particular, in the
case of video ZSL, class labels are typically “enriched” with semantic
embeddings, which are sometimes manually annotated and other times inferred
automatically. These semantic embeddings map the seen training classes to new
unseen classes. In the typical prediction pipeline, at test time a seen class
is predicted, and its semantic embedding is used to search for a nearest
neighbor in the space of the semantic embeddings of unseen classes. The
predicted class will be the class corresponding to the nearest neighbor. While
ZSL is potentially a very useful technology, it also presents challenges.
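The nearest-neighbour step of this prediction pipeline can be sketched as follows (the embeddings and class names below are made up for illustration; real systems would use e.g. word2vec vectors):

```python
import numpy as np

def zsl_predict(seen_embedding, unseen_embeddings, unseen_labels):
    """Map the semantic embedding of a predicted seen class to an unseen
    class via nearest-neighbour search in the embedding space."""
    d = np.linalg.norm(unseen_embeddings - seen_embedding, axis=1)
    return unseen_labels[int(np.argmin(d))]

# Hypothetical 2-D embeddings for two unseen classes.
emb = np.array([[0.0, 1.0], [1.0, 0.0]])
labels = ["surfing", "archery"]
pred = zsl_predict(np.array([0.1, 0.9]), emb, labels)   # -> "surfing"
```

The predicted unseen class is simply the one whose semantic embedding lies closest to that of the predicted seen class.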
Figure 1: CLASTER improves the representation and clustering in unseen
classes. The figure shows t-SNE [26] of video instances, where each color
corresponds to a unique unseen class label. Our reinforcement learning (RL)
optimization improves the representation by making it more compact: in (b),
instances of the same class, i.e., the same color, are together and there are
fewer outliers for each class compared to (a).
Neural networks have proven extraordinarily powerful at learning complex
discrimination functions of classes with many modes. In other words, instances
of the same class can be very different and still be projected by the neural
network to the same category. While this works well in supervised training, it
can be a problem in zero-shot recognition, where the highly specialized
discrimination function might not transfer well to instances of unseen
categories. In this work, we build a representation based on three main ideas.
First, we turn to clustering, and use the centroids of the clusters to
represent a video. We postulate that centroids are more robust to outliers.
The initial clustering considers all training instances at once instead of
optimizing the classification of single instances. Second, our representation
is a combination of a visual and a semantic representation, both during
clustering and when representing each instance; this allows better transfer to
unseen classes.
Third, to directly use the signal from the classification supervision for
clustering, we use Reinforcement Learning (RL). Specifically, we use the
REINFORCE algorithm to directly update the cluster centroids. This
optimization improves the clustering significantly. As we can see in Fig. 1,
it leads to less noisy and more compact representations for unseen classes. We
call the proposed method CLASTER, for _CLustering for Action recognition in
zero-ShoT lEaRning_ , and show that it significantly outperforms all existing
methods across all standard zero-shot action recognition datasets and tasks.
Our method also generalizes well to the image domain, showing strong
performance on four datasets.
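To make the REINFORCE idea concrete, here is a toy single-step sketch: a softmax policy over centroids parameterized by negative distances, a $\pm 1$ reward for whether the sampled centroid's label matches, and a policy-gradient step on the sampled centroid only. This is an illustrative simplification with made-up reward and parameterization, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_centroid_step(centroids, x, y, centroid_labels, lr=0.1):
    """One toy REINFORCE step on cluster centroids.

    Samples a centroid from softmax(-distance), rewards +1/-1 for a
    correct/incorrect label match, and moves only the sampled centroid
    along r * grad log pi (gradients w.r.t. the other centroids are
    dropped for brevity).
    """
    d = np.linalg.norm(centroids - x, axis=1)
    logits = -d
    p = np.exp(logits - logits.max())
    p /= p.sum()
    a = rng.choice(len(centroids), p=p)
    r = 1.0 if centroid_labels[a] == y else -1.0
    # d log pi(a) / d c_a = (1 - p_a) * d(-||c_a - x||)/dc_a
    #                     = (1 - p_a) * (x - c_a) / ||c_a - x||
    g = (1.0 - p[a]) * (x - centroids[a]) / (d[a] + 1e-8)
    centroids[a] += lr * r * g
    return centroids
```

Repeating such steps over the training set nudges centroids so that the classification reward improves, which is the role REINFORCE plays in CLASTER.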
#### Contributions:
Our main contribution is CLASTER, a novel model for zero-shot action
recognition which learns a clustering-based representation optimized with
reinforcement learning (RL). Clustering with RL has not previously been
explored for this task, and we show in our ablations that the RL optimization
is the critical aspect of our approach which learns a representation which
generalizes better to unseen classes. In our experimental evaluation we find
that CLASTER consistently outperforms recent state-of-the-art on three
challenging zero-shot action recognition benchmarks, Olympics, HMDB51, and
UCF101, both in zero-shot learning and in the more challenging generalized
zero-shot learning (GZSL) task.
## 2 Related Work
#### Traditional Fully Supervised Action Recognition.
The seminal work of Simonyan and Zisserman [41] introduced the now standard
two-stream deep learning framework, which combines spatial and temporal
information. Spatio-temporal CNNs [43, 37, 5] are also widely used as
backbones for many applications, including this work. More recently, research
has incorporated attention [45, 15] and leveraged the multi-modal nature of
videos [2]. Ji et al. [18] proposed the use of knowledge maps for
coarse-to-fine action recognition.
#### Zero-shot Learning.
Early approaches followed the idea of learning semantic classifiers for seen
classes and then classifying the visual patterns by predicting semantic
descriptions and comparing them with descriptions of unseen classes. In this
space, Lampert et al. [22] propose attribute prediction, using the posterior
of each semantic description. The SJE model [1] uses multiple compatibility
functions to construct a joint embedding space. ESZSL [39] uses a Frobenius
norm regularizer to learn an embedding space. In videos, there are additional
challenges: action labels need more complex representations than objects and
hence give rise to more complex manual annotations.
#### ZSL for Action Recognition.
Early work [38] was restricted to cooking activities, using script data to
transfer to unseen classes. Gan et al. [12] consider each action class as a
domain, and address semantic representation identification as a multi-source
domain generalization problem. Manually specified semantic representations are
simple and effective [56] but labor-intensive to annotate. To overcome this,
the use of label embeddings has proven popular, as only category names are
needed. Some approaches use common embedding space between class labels and
video features [53, 52], pairwise relationships between classes [10], error-
correcting codes [36], inter-class relationships [11], out-of-distribution
detectors [27], and Graph Neural networks [14]. In contrast, we are learning
to optimize centroids of visual semantic representations that generalize
better to unseen classes.
Figure 2: Overview of CLASTER. We train an MLP to map the semantic embedding
to the same space as the visual features. We cluster these visual-semantic
representations with k-means to obtain initial cluster centroids (Sec. 3.2).
We then represent our video by a weighted representation of all clusters,
which we refer to as CLASTER representation (Sec. 3.3), which is expanded in
Fig. 3. This is used as input for the final classification (Sec. 3.4 and Eq.
3). Based on the classification result, we send a reward and optimize the
cluster centroids using REINFORCE (Sec. 3.5). At test time, we first perform
classification on the seen classes and then do a nearest neighbor (NN) search
to predict the unseen class.
#### Reinforcement Learning for Zero-Shot Learning.
RL for ZSL in images was introduced by Liu et al. [24] by using a combination
of ontology and RL. In zero-shot text classification, Ye et al. [54] propose a
self-training method to leverage unlabeled data. RL has also been used in the
zero-shot setting for task generalization [34], active learning [7], and video
object segmentation [17]. To the best of our knowledge, there is no previous
work using RL for optimizing centroids in zero-shot recognition.
#### Deep Approaches to Centroid Learning for Classification.
Since our approach learns cluster centroids using RL, it is related to the
popular cluster learning strategy for classification called Vector of Locally
Aggregated Descriptors (VLAD) [3]. The more recent NetVLAD [3] leverages
neural networks which helps outperform the standard VLAD by a wide margin.
ActionVLAD [16] aggregates NetVLAD over time to obtain descriptors for videos.
ActionVLAD uses clusters that correspond to spatial locations in a video while
we use joint visual semantic embeddings for the entire video. In general, VLAD
uses residuals with respect to cluster centroids as representation while
CLASTER uses a weighting of the centroids. The proposed CLASTER outperforms
NetVLAD by a large margin on both HMDB51 and UCF101.
## 3 CLASTER
We now describe the proposed method, CLASTER, which leverages clustering of
visual and semantic features for video action recognition and optimizes the
clustering with RL. Figure 2 shows an overview of the method.
### 3.1 Problem Definition
Let $S$ be the training set of seen classes. $S$ is composed of tuples
$\left(x,y,a(y)\right)$, where $x$ represents the spatio-temporal features of
a video, $y$ represents the class label in the set of $Y_{s}$ seen class
labels, and $a(y)$ denotes the category-specific semantic representation of
class $y$. These semantic representations are either manually annotated or
computed using a language-based embedding of the category name, such as
word2vec or sentence2vec.
Let $U$ be the set composed of pairs $\left(u,a(u)\right)$, where $u$ is a
class in the set of unseen classes $Y_{u}$ and $a(u)$ are the corresponding
semantic representations. The seen classes $Y_{s}$ and the unseen classes
$Y_{u}$ do not overlap.
In the zero-shot learning (ZSL) setting, given an input video the task is to
predict a class label in the unseen classes, as $f_{ZSL}:X\rightarrow Y_{u}$.
In the related generalized zero-shot learning (GZSL) setting, given an input
video, the task is to predict a class label in the union of the seen and
unseen classes, as $f_{GZSL}:X\rightarrow Y_{s}\cup Y_{u}$.
### 3.2 Cluster Initialization
We initialize the clustering of all instances in the training set $S$ using
k-means [9]. We obtain a representation for clustering which is aware not only
of the visual features but also of the semantic embedding. For this, we create a
visual-semantic representation as follows. Given video $i$, we compute visual
features $x_{i}$ and a semantic embedding of their class $y_{i}$ as $a(y_{i})$
(see Sec. 4 for details how we obtain these representations). We want to map
both to an equal dimension and similar magnitude so they have a similar weight
during clustering. Additionally, to reduce overfitting to the training classes
and improve generalization, we learn a multi-layer perceptron (MLP) [55] which
maps $a(y_{i})$ in the space of $x_{i}$ with an MLP, which consists of two
fully connected (FC) layers and a ReLU. This MLP is trained with a least-
square embedding loss to minimize the distance between $x_{i}$ and the output
from the MLP, which we call $a^{\prime}(y)$. $x_{i}$ is kept fixed. Given
these two representations, we concatenate $x_{i}$ and $a^{\prime}(y)$ to
obtain the visual-semantic representations that we cluster with k-means. Each
resulting cluster $j$ has a centroid $c_{j}$, that is the average of all
visual-semantic samples in that particular cluster. Note that we keep the MLP
fixed after this initial training.
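The initialization above can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions (the function name, shapes, and the evenly spaced seeding are ours, not the authors' code): it concatenates visual features with semantic embeddings already projected into the visual space, then runs plain k-means.

```python
import numpy as np

def init_clusters(X, A_proj, k, iters=50):
    """k-means over concatenated visual-semantic samples (Sec. 3.2 sketch).

    X: (n, d) visual features; A_proj: (n, d) semantic embeddings a'(y)
    already mapped into the visual space by the frozen MLP.
    Returns centroids C of shape (k, 2d) and hard assignments of shape (n,).
    """
    Z = np.concatenate([X, A_proj], axis=1)       # visual-semantic samples
    C = Z[:: max(1, len(Z) // k)][:k].copy()      # evenly spaced seed points
    for _ in range(iters):
        d = np.linalg.norm(Z[:, None, :] - C[None, :, :], axis=2)
        assign = d.argmin(axis=1)                 # nearest centroid per sample
        for j in range(k):                        # centroid = mean of members
            if np.any(assign == j):
                C[j] = Z[assign == j].mean(axis=0)
    return C, assign
```

Each centroid $c_{j}$ is then the average of the visual-semantic samples in its cluster, as in the text.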
### 3.3 CLASTER Representation
We now detail how we represent videos using the clusters we computed, our
_CLASTER_ representation. The intuition here is that representing data w.r.t.
the visual-semantic centroids of the training set will lead to a
representation which generalizes well to unseen classes. This avoids the
overfitting to the training classes that often happens with hidden
representations. In other words, a representation w.r.t. centroids is more
robust to outliers, which is helpful since all instances of the unseen classes
are effectively outliers w.r.t. the training distribution.
Figure 3: Our CLASTER Representation, see Fig. 2 for the overview of the full
model. The visual feature is mapped to match the space of the visual-semantic
cluster centroids with an MLP and concatenation. Based on the distances to the
cluster centroids the final representation $\omega$ is a weighted
representation of the centroids, more robust to the out-of-distribution
instances of the unseen test classes. Details in Sec. 3.3 and Eq. 1.
To classify a video $x_{i}$ we first project it in the visual-semantic space
of the clusters and represent it using the centroids. This CLASTER
representation is then used as input to the classification layer. Figure 3
shows this network in detail. Recall that our centroids are based on a visual
_and_ a semantic vector. To estimate the semantic vector, we project $x_{i}$
to $\phi_{i}$ with an MLP. Concatenating $x_{i}$ and $\phi_{i}$ we obtain the
intermediate representation $\psi_{i}$ which is in the same visual-semantic
space as the cluster centroids. Now, to represent instance $i$ with respect to
the centroids, we compute the Euclidean distance to each cluster $j$, which we
refer to as $d_{i,j}$, take its inverse $1/d_{i,j}$ and normalize the
distances using their maximum and minimum values. We refer to these normalized
values as $\eta_{i,j}$, and they are used as the weights of each cluster
centroid in our final CLASTER representation $\omega_{i}$:
$\omega_{i}=\psi_{i}+\sum_{j=1}^{k}\eta_{i,j}c_{j}$ (1)
Our CLASTER representation $\omega_{i}$ is the input to the classification
layer, discussed next. Sec. 3.5 describes how we further optimize the
centroids with RL. The implementation details of the architecture of both
networks are described in Sec. 4.
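The weighting in Eq. 1 can be written compactly. The following NumPy sketch follows the text under our own naming assumptions (a small `eps` guards the divisions; the authors' exact normalization constants are not specified):

```python
import numpy as np

def claster_representation(psi, C, eps=1e-8):
    """Eq. 1: omega_i = psi_i + sum_j eta_ij * c_j.

    psi: (m,) visual-semantic embedding of one instance;
    C: (k, m) cluster centroids.
    """
    d = np.linalg.norm(C - psi, axis=1)           # Euclidean distance d_ij
    inv = 1.0 / (d + eps)                         # closer centroids weigh more
    eta = (inv - inv.min()) / (inv.max() - inv.min() + eps)  # min-max normalize
    return psi + eta @ C                          # weighted sum of centroids
```

With this normalization the nearest centroid receives weight 1 and the farthest weight 0; when all distances are equal the weights collapse to 0 and $\omega_{i}=\psi_{i}$.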
### 3.4 Loss Function
Given the CLASTER representation $\omega$ we learn to predict a training
class. We use a semantic softmax function [19] to make the classifier aware of
the semantic embedding, so that it can transfer better to zero-shot
classes:
$\hat{y}_{i}=\frac{e^{a(y_{i})^{T}V(\omega_{i})}}{\sum_{j=1}^{S}e^{a(y_{j})^{T}V(\omega_{i})}},$
(2)
where $S$ is the total number of seen classes, $V$ is an MLP and the output
$\hat{y}_{i}$ is a vector with a probability distribution over the seen
classes. We train a multi-class classifier, which minimizes the cross-entropy
loss with a regularization term ($W$ refers to all weights in the network):
$\textup{min}_{W}\sum_{i=1}^{N}\mathcal{L}(x_{i})+\lambda\left\|W\right\|_{F}^{2}.$
(3)
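A minimal sketch of Eq. 2 and the per-instance term of Eq. 3 (NumPy; `V(omega)` is assumed to have been applied already, and the weight regularizer is omitted):

```python
import numpy as np

def semantic_softmax(v_omega, A_seen):
    """Eq. 2: logits are dot products with the seen-class semantic vectors.

    v_omega: (m,) output of the MLP V for one instance;
    A_seen: (S, m) matrix whose rows are a(y_j) for the S seen classes.
    """
    logits = A_seen @ v_omega
    logits = logits - logits.max()        # shift for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def cross_entropy(p, y_idx):
    """Per-instance loss term of Eq. 3 (regularizer omitted)."""
    return -np.log(p[y_idx] + 1e-12)
```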
### 3.5 Optimization with Reinforcement Learning
We now describe the optimization of the cluster centroids with RL. For this,
we compute two variables that will determine each centroid update: the reward,
which measures whether the classification is correct and will determine the
direction of the update, and the classification score, which measures how far
the prediction is from the correct answer and will determine the magnitude of
the update.
Given the probabilistic prediction $\hat{y}_{i}$ and the one-hot
representation of the ground-truth class $y_{i}$, we compute the
classification score as the dot product of the two:
$z_{i}=y_{i}\cdot\hat{y}_{i}$. To obtain the reward, we check whether the
maxima of $\hat{y}_{i}$ and $y_{i}$ lie at the same index:
$r=\begin{cases}1&\text{if }\arg\max\hat{y}_{i}=\arg\max y_{i}\\ -1&\text{otherwise}\end{cases}$ (4)
This essentially gives a positive reward if the model has predicted a correct
classification and a negative reward if the classification was incorrect. This
formulation is inspired by Likas [23], where it is used for competitive
learning.
Now we can update the cluster centroid $c_{j}$ closest to $\psi_{i}$ using the
REINFORCE [23] algorithm.
We compute the update of $c_{j}$, which we call $\Delta c_{j}$ as:
$\Delta c_{j}=\alpha(r-\beta_{j})\frac{\partial\textup{ln}\ g_{j}}{\partial
c_{j}},$ (5)
where $\alpha$ is a fixed learning rate, $r$ is the reward, $\beta_{j}$ is
called the reinforcement baseline, and
$\frac{\partial\textup{ln}g_{j}}{\partial c_{j}}$ is called the characteristic
eligibility of cluster centroid $c_{j}$, which quantifies the match of a
cluster $j$ with respect to a given input. This term is approximated as:
$\frac{\partial\textup{ln}\ g_{j}}{\partial
p_{j}}=\frac{z_{i}-p_{j}}{p_{j}(1-p_{j})},$ (6)
where $p_{j}=2(1-f(\eta_{i,j}))$ and $f(x)=\frac{1}{1+e^{-x}}$ is the sigmoid
function. Substituting Eq. 6 into Eq. 5, we obtain:
$\Delta c_{j}=\alpha(r-\beta_{j})\frac{\partial\textup{ln}\ g_{j}}{\partial
p_{j}}\frac{\partial
p_{j}}{\partial\eta_{i,j}}\frac{\partial\eta_{i,j}}{\partial c_{j}}.$ (7)
From Eq. 6 and the definition of $p_{j}$, we get to:
$\Delta
c_{j}=\alpha(r-\beta_{j})(z_{i}-p_{j})\frac{\partial\eta_{i,j}}{\partial
c_{j}}.$ (8)
Using Euclidean distance and setting $\beta_{j}$ to zero, the updating rule
for cluster centroid $c_{j}$ is:
$\Delta c_{j}=\alpha\ r\ (z_{i}-p_{j})\ (\psi_{i}-c_{j}).$ (9)
For further details on this derivation, please refer to Likas [23]. The main
difference is that we do not consider our clusters to be Bernoulli units,
where the modification of the cluster representative is discrete (either 0 or
1). Instead, we modify the cluster centroid with $z_{i}$, which is continuous
between 0 and 1.
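Putting Eqs. 4 and 9 together, one centroid update can be sketched as follows (NumPy; $\beta_{j}=0$ as in the text, and the function signature is our own illustrative choice, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rl_centroid_update(C, psi, eta, y_true, y_hat, alpha=0.1):
    """Update the centroid closest to psi (Eqs. 4 and 9, beta_j = 0).

    C: (k, m) centroids; psi: (m,) instance embedding;
    eta: (k,) normalized inverse distances;
    y_true: one-hot label; y_hat: predicted class distribution.
    """
    r = 1.0 if np.argmax(y_hat) == np.argmax(y_true) else -1.0  # reward, Eq. 4
    z = float(y_true @ y_hat)                            # classification score
    j = int(np.argmin(np.linalg.norm(C - psi, axis=1)))  # closest centroid
    p_j = 2.0 * (1.0 - sigmoid(eta[j]))                  # p_j = 2(1 - f(eta_ij))
    C = C.copy()
    C[j] += alpha * r * (z - p_j) * (psi - C[j])         # Eq. 9
    return C
```

In this sketch, a confidently correct prediction ($r=1$, $z_{i}>p_{j}$) moves the closest centroid toward $\psi_{i}$.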
### 3.6 Use of Sentence2vec
Some datasets, such as HMDB51 [21], do not have semantic manual annotations.
Thus, semantic representations are often computed using a word2vec model [29].
In most action recognition datasets, labels are phrases (e.g. “playing
guitar”) and thus the semantic embeddings are computed by averaging the
word2vec embeddings of each word. This works in some cases; however, simple
averaging does not always capture the inter-dependency of action classes. For
example, “horse riding” and “horse racing” lie far apart in the word2vec
space.
To alleviate this, we propose using sentence2vec [35], a model designed to
capture the meaning of sentences. Specifically, sentence2vec learns embeddings
with respect to the sentence context. It represents the sentence context as
n-grams and optimizes the additive combination of the word vectors to obtain
sentence embeddings. Figure 4 illustrates how class neighbors become more
meaningful when we move from word2vec to sentence2vec.
Figure 4: HMDB51 (a) Averaging word embeddings can produce poor results in
certain cases. For example, the nearest neighbor of “shoot ball” is “shoot
gun”, and of “sit up” is “sit down”, which are not necessarily meaningful. (b)
Sentence2vec better captures phrase meanings: the nearest neighbor of “sit up”
is now “push up”, and of “shoot ball” it is “golf”. UCF101 (c) The same effect
is observed: after averaging word2vec representations, the nearest neighbor
of “pommel horse” is “horse riding”. (d) Sentence2vec helps capture phrase
meanings: the nearest neighbor of “pommel horse” is now “balance beam”.
The circles contain the nearest neighbor to the given unseen class and are for
illustration purposes.
We show in Sec. 5.5 and 5.6 that sentence2vec significantly improves
performance of some of the recent state-of-the-art approaches, reaching
performance close to using manual semantic representation. This suggests the
potential of sentence2vec to automatically annotate large scale datasets. We
also observe that combining multiple of these embeddings (e.g., through
averaging) leads to a small but consistent improvement, suggesting they may
contain complementary information.
## 4 Implementation Details
#### Visual features.
We use RGB and flow features extracted from the Mixed 5c layer of an I3D
network pre-trained on the Kinetics [5] dataset. The Mixed 5c output of the
flow network is averaged across the temporal dimension and pooled by four in
the spatial dimension and then flattened to a vector of size 4096. We then
concatenate the two. In the case of images, similar to other approaches, we
use 2048D ResNet101 features pre-trained on ImageNet [6].
#### Network architecture.
The MLP in the CLASTER Representation module is a two-layer FC network, whose
output after concatenation with the video feature has the same dimensions as
the cluster representatives. The size of the FC layers is 8192 each. The final
classification MLP (represented as a classification block in Figure 2)
consists of two convolutional layers and two FC layers, where the last layer
equals the number of unseen classes in the dataset we are looking at. All the
modules are trained with the Adam optimizer with a learning rate of 0.0001 and
weight decay of 0.0005.
#### Number of clusters.
Since the number of clusters is a hyperparameter, we evaluate its effect on
the UCF101 dataset for videos and choose 6 clusters, the point after which the
average performance stabilizes, as can be seen in the supplementary material.
We then use the same number for the HMDB51 and Olympics datasets. Similarly,
we test the effect of the number of clusters on the SUN dataset for images
and, based on this, use 9 clusters for all image datasets.
#### RL optimization.
We use 10,000 iterations and the learning rate $\alpha$ is fixed to 0.1 for
the first 1000 iterations, 0.01 for the next 1000 iterations and then drop it
to 0.001 for the remaining iterations.
#### Semantic embeddings.
We experiment with three types of embeddings as semantic representations of
the classes. We have human annotated semantic representations for UCF101 and
the Olympic sports dataset of sizes 40 and 115 respectively. HMDB51 does not
have human annotated semantic representations. Instead, we use a skip-gram
model trained on the news corpus provided by Google to generate word2vec
embeddings. Using action classes as input, we obtain a vector representation
of 300 dimensions. Some class labels contain multiple words. In those cases,
we use the average of the word2vec embeddings. We also use sentence2vec
embeddings, trained on Wikipedia. These can be obtained for both single words
and multi-word expressions. In case of images, we use only the manual
annotations as features since they are the best performing embedding in recent
approaches.
#### Rectification of the Semantic Embedding
In ZSL, certain data points tend to appear as the nearest neighbor of
many other points in the projection space; this is referred to as the hubness
problem [40]. We mitigate this problem using semantic rectification [25], where
the class representation is modified by averaging the output generated by the
projection network, which in our case is the penultimate layer of the
classification MLP. Specifically, for the unseen classes, we perform
rectification by first using the MLP trained on the seen classes to project
the semantic embedding to the visual space. We add the average of projected
semantic embeddings from the k-nearest neighbors of the seen classes,
specifically as follows:
$\hat{a}(y_{i})=a^{\prime}(y_{i})+\frac{1}{k}\sum_{n\in
N}\cos\left(a^{\prime}(y_{i}),n\right)\cdot n,$ (10)
where $a^{\prime}(y)$ refers to the embedding after the MLP introduced in Sec.
3.2, $\cos(a,n)$ refers to the cosine similarity between $a$ and $n$, the
operator $\cdot$ refers to scalar multiplication and $N$ refers to the
k-nearest neighbors of $a^{\prime}(y_{u_{i}})$.
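Eq. 10 amounts to shifting each projected unseen-class embedding toward its nearest seen-class embeddings, weighted by cosine similarity. A NumPy sketch (the Euclidean nearest-neighbor choice and the `k` default are our assumptions):

```python
import numpy as np

def rectify(a_u, A_seen_proj, k=3):
    """Eq. 10: a_hat = a' + (1/k) * sum over the k nearest seen
    embeddings n of cos(a', n) * n.

    a_u: (d,) projected unseen-class embedding a'(y_u);
    A_seen_proj: (S, d) projected seen-class embeddings.
    """
    dist = np.linalg.norm(A_seen_proj - a_u, axis=1)
    nn = A_seen_proj[np.argsort(dist)[:k]]            # k nearest neighbors N
    cos = nn @ a_u / (np.linalg.norm(nn, axis=1) * np.linalg.norm(a_u) + 1e-12)
    return a_u + (cos[:, None] * nn).sum(axis=0) / k
```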
#### Nearest Neighbor Search
At test time in the ZSL setting, given a test video, we predict a seen class
and compute or retrieve its semantic representation. After rectification, we
find the nearest neighbor in the set of unseen classes. In the GZSL task,
class predictions may be of seen or unseen classes. Thus, we first use a bias
detector [13], which helps us detect whether the video belongs to a seen or an
unseen class. If it belongs to a seen class, we predict the class directly
from our model; otherwise, we proceed as in ZSL.
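The final nearest-neighbor step is then a single argmin over the unseen-class embeddings. A short sketch (the label names in the test below are hypothetical):

```python
import numpy as np

def zsl_predict(a_hat, A_unseen, unseen_labels):
    """Assign the rectified embedding a_hat to the closest unseen class.

    a_hat: (d,) rectified semantic embedding; A_unseen: (U, d) embeddings
    of the unseen classes; unseen_labels: list of U class names.
    """
    dist = np.linalg.norm(A_unseen - a_hat, axis=1)
    return unseen_labels[int(np.argmin(dist))]
```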
## 5 Experimental Analysis
In this section, we look at the qualitative and quantitative performance of
our model. We first describe the experimental settings, and then show an
ablation study, that explores the contribution of each component. We then
compare the proposed method to the state-of-the-art in the ZSL and GZSL tasks,
and give analytical insights into the advantages of CLASTER.
### 5.1 Datasets
We choose the Olympic Sports [33], HMDB-51 [21] and UCF-101 [42] datasets, so
that we can compare to recent state-of-the-art models [12, 27, 36]. We follow
the commonly used 50/50 splits of Xu et al. [52], where 50 percent of the
classes are seen and 50 percent are unseen. Similar to previous approaches [56, 12, 36,
28, 20], we report average accuracy and standard deviation over 10 independent
runs. For images, we use CUB [44], SUN [50], AWA2 [47] and APY [8]. We report
results on the split proposed by [47], in the standard inductive setting.
### 5.2 Ablation Study
We test the performance of using the different components of CLASTER in Table
1. We consider no clustering (which in our case is the equivalent of having a
single cluster), random clustering and the standard k-means. We observe that
using clusters is beneficial, but only if they are meaningful, as in the case
of k-means.
We observe that using semantic embedding rectification (+R) improves the
accuracy, as the chances of reaching previously unreachable classes increases.
Next we replace the standard softmax with the semantic softmax (+SS), which
results in a small increase in accuracy.
Since we use RL optimization to learn cluster centroids, we compare our full
model (CLASTER) to a baseline where the centroids are updated using standard
SGD instead of RL (referred to in Table 1 as “CLASTER w/o RL”). As a
comparison, we also replace our centroid-learning with NetVLAD [3]. The input
to NetVLAD are also the joint visual-semantic features. We see that our model
outperforms NetVLAD by an average of 4.7% and the CLASTER w/o RL by 7.3% on
the UCF101 dataset. One reason for this difference is that the loss is back-
propagated through multiple parts of the model before reaching the centroids.
However, with RL the centroids are directly updated using the reward signal.
We see that the cluster optimization with RL gives a clear performance
improvement. Section 5.7 explores how the clusters change after this
optimization. In a nutshell, the RL optimization makes the clusters cleaner,
moving most instances of a class into the same cluster.
Component | HMDB51 | Olympics | UCF101
---|---|---|---
No clustering | 25.6 ± 2.8 | 57.7 ± 3.1 | 31.6 ± 4.6
Random clustering (K=6) | 20.2 ± 4.2 | 55.4 ± 3.1 | 24.1 ± 6.3
k-means (K=6) | 27.9 ± 3.7 | 58.6 ± 3.5 | 35.3 ± 3.9
k-means + R | 29.3 ± 3.8 | 59.4 ± 2.9 | 37.1 ± 2.7
k-means + R + SS | 29.6 ± 3.5 | 59.5 ± 2.6 | 37.5 ± 3.2
CLASTER w/o RL | 30.1 ± 3.4 | 60.5 ± 1.9 | 39.1 ± 3.2
CLASTER w/ NetVLAD | 33.2 ± 2.8 | 62.6 ± 4.1 | 41.7 ± 3.8
CLASTER | 36.8 ± 4.2 | 63.5 ± 4.4 | 46.4 ± 5.1
Table 1: Results of the ablation study of the different components of CLASTER
on ZSL. Each cell shows the average accuracy and the standard deviation over 5
independent runs. The study shows the effect of using a standard clustering
algorithm, rectification of semantic embeddings (“R”), and replacing the
standard softmax with the semantic softmax. Finally, the last row represents
the proposed model. All the reported accuracies are on the same five splits;
note that Table 3 uses 10 splits.
### 5.3 Number of Clusters
We test using different number of clusters on the UCF-101 dataset and show the
results in Figure 5. These are for 5 runs on random splits. As we can see, the
average accuracy increases until 6 clusters, and after that remains more or
less constant. Thus, we use 6 clusters and continue with the same number for
both HMDB51 and Olympics. For images, similarly, we used 5 random splits of
CUB and found the performance stabilizes after having 9 clusters and use the
same number of clusters for the other image datasets.
Figure 5: Effect of using different number of clusters. The green line
represents the standard deviation. The reported accuracy is on the UCF101
dataset. As can be seen, the average cluster accuracy increases till about 6
clusters and then remains more or less constant. The vertical lines correspond
to the standard deviation.
### 5.4 Ablation Study on Image Datasets
In Table 2 we show the ablation results on the proposed splits of the CUB,
SUN, AWA and APY datasets. The results are consistent with those reported in
the paper for the ablation on video datasets. Again, we can see that optimizing
the visual-semantic centroids with REINFORCE yields the best results, with a
consistent improvement over NetVLAD and over optimizing with backpropagation.
Component | CUB | SUN | AWA | APY
---|---|---|---|---
No clustering | 61.2 | 54.6 | 62.8 | 33.6
Random clustering (K=9) | 56.6 | 48.3 | 56.9 | 28.2
k-means (K=9) | 68.9 | 60.1 | 68.8 | 37.9
k-means + R | 69.5 | 61.2 | 69.4 | 38.3
k-means + R + SS | 69.8 | 61.4 | 69.8 | 38.5
CLASTER w/o RL | 71.4 | 63.2 | 70.9 | 39.3
CLASTER w/ NetVLAD | 73.1 | 64.7 | 71.8 | 41.2
CLASTER | 76.4 | 67.2 | 74.1 | 44.1
Table 2: Results of the ablation study on image datasets of different
components of CLASTER ZSL. Each cell shows the average unseen class accuracy
over 5 independent runs. The study shows the effect of using a standard
clustering algorithm, rectification of semantic embeddings (“R”), and
replacing the standard softmax with semantic softmax. Finally, the last row
represents the proposed model. All the reported accuracies are on the proposed
split [48].
### 5.5 Results on ZSL
We compare our approach to several state-of-the-art methods: the out-of-
distribution detector method (OD) [27], a generative approach to zero-shot
action recognition (GGM) [31], the evaluation of output embeddings (SJE) [1],
the feature generating networks (WGAN) [46], the end-to-end training for
realistic applications approach (E2E) [4], the inverse autoregressive flow
(IAF) based generative model, bi-directional adversarial GAN(Bi-dir GAN) [30]
and prototype sampling graph neural network (PS-GNN) [14]. We use a pre-
trained model on Kinetics.
We observe that the proposed CLASTER consistently outperforms other state-of-
the-art approaches. Results are shown in Table 3. On the HMDB51 dataset, it
improves 3.5% over the next best performing model E2E [4]. On UCF101 it
improves 13.5% over the next best performing model, when using semantic manual
annotations.
On the Olympics dataset, CLASTER improves 1.5% over the next best performing
model OD [27] when using manual semantic representations; and 2% over PS-GNN
[14] when using word2vec.
We measure the impact of using sentence2vec instead of word2vec. We test this
on our own method, as well as on OD and WGAN, using the authors’
code. We show that sentence2vec significantly improves over using word2vec,
especially on UCF101 and HMDB51. Combining embeddings resulted in average
improvements of 0.3%, 0.8% and 0.9% over the individual best performing
embedding of CLASTER.
Method | SE | Olympics | HMDB51 | UCF101
---|---|---|---|---
SJE [1] | M | 47.5 ± 14.8 | - | 12.0 ± 1.2
Bi-Dir GAN [30] | M | 53.2 ± 10.5 | - | 24.7 ± 3.7
IAF [30] | M | 54.9 ± 11.7 | - | 26.1 ± 2.9
GGM [31] | M | 57.9 ± 14.1 | - | 24.5 ± 2.9
OD [27] | M | 65.9 ± 8.1 | - | 38.3 ± 3.0
WGAN [46] | M | 64.7 ± 7.5 | - | 37.5 ± 3.1
CLASTER (ours) | M | 67.4 ± 7.8 | - | 51.8 ± 2.8
SJE [1] | W | 28.6 ± 4.9 | 13.3 ± 2.4 | 9.9 ± 1.4
IAF [30] | W | 39.8 ± 11.6 | 19.2 ± 3.7 | 22.2 ± 2.7
Bi-Dir GAN [30] | W | 40.2 ± 10.6 | 21.3 ± 3.2 | 21.8 ± 3.6
GGM [31] | W | 41.3 ± 11.4 | 20.7 ± 3.1 | 20.3 ± 1.9
WGAN [46] | W | 47.1 ± 6.4 | 29.1 ± 3.8 | 25.8 ± 3.2
OD [27] | W | 50.5 ± 6.9 | 30.2 ± 2.7 | 26.9 ± 2.8
PS-GNN [14] | W | 61.8 ± 6.8 | 32.6 ± 2.9 | 43.0 ± 4.9
E2E [4]* | W | 61.4 ± 5.5 | 33.1 ± 3.4 | 46.2 ± 3.8
CLASTER (ours) | W | 63.8 ± 5.7 | 36.6 ± 4.6 | 46.7 ± 5.4
WGAN* | S | 46.8 ± 4.2 | 34.7 ± 4.3 | 32.8 ± 5.4
OD* | S | 50.8 ± 2.1 | 39.3 ± 3.1 | 45.7 ± 2.3
CLASTER (ours) | S | 64.2 ± 3.3 | 41.8 ± 2.1 | 50.2 ± 3.8
CLASTER (ours) | C | 67.7 ± 2.7 | 42.6 ± 2.6 | 52.7 ± 2.2
Table 3: Results on ZSL. SE: semantic embedding, M: manual representation, W: word2vec embedding, S: sentence2vec, C: combination of embeddings. * run by us with author’s code on same splits as ours.
Method | SE | Olympics | HMDB51 | UCF101
---|---|---|---|---
Bi-Dir GAN [30] | M | 44.2 ± 11.2 | - | 22.7 ± 2.5
IAF [30] | M | 48.4 ± 7.0 | - | 25.9 ± 2.6
GGM [31] | M | 52.4 ± 12.2 | - | 23.7 ± 1.2
WGAN [46] | M | 59.9 ± 5.3 | - | 44.4 ± 3.0
OD[27] | M | 66.2 ± 6.3 | - | 49.4 ± 2.4
CLASTER (ours) | M | 68.8 ± 6.6 | - | 50.9 ± 3.2
IAF [30] | W | 30.2 ± 11.1 | 15.6 ± 2.2 | 20.2 ± 2.6
Bi-Dir GAN [30] | W | 32.2 ± 10.5 | 7.5 ± 2.4 | 17.2 ± 2.3
SJE [1] | W | 32.5 ± 6.7 | 10.5 ± 2.4 | 8.9 ± 2.2
GGM[31] | W | 42.2 ± 10.2 | 20.1 ± 2.1 | 17.5 ± 2.2
WGAN [46] | W | 46.1 ± 3.7 | 32.7 ± 3.4 | 32.4 ± 3.3
PS-GNN [14] | W | 52.9 ± 6.2 | 24.2 ± 3.3 | 35.1 ± 4.6
OD [27] | W | 53.1 ± 3.6 | 36.1 ± 2.2 | 37.3 ± 2.1
CLASTER (ours) | W | 58.1 ± 2.4 | 42.4 ± 3.6 | 42.1 ± 2.6
WGAN* | S | 47.6 ± 4.2 | 38.2 ± 4.1 | 40.2 ± 5.1
OD* | S | 54.1 ± 2.7 | 42.3 ± 3.1 | 45.7 ± 2.3
CLASTER (ours) | S | 58.7 ± 3.1 | 47.4 ± 2.8 | 48.3 ± 3.1
CLASTER (ours) | C | 69.1 ± 5.4 | 48.0 ± 2.4 | 51.3 ± 3.5
Table 4: GZSL. SE: semantic embedding, M: manual representation, W: word2vec
embedding, S: sentence2vec, C: combination of embeddings. * run by us with
author’s code. Reported results are the harmonic mean of seen and unseen class
accuracies.
### 5.6 Results on GZSL
We now compare to the same approaches in the GZSL task in Table 4, the
reported results are the harmonic mean of the seen and unseen class
accuracies. Here CLASTER outperforms all previous methods across different
modalities. We obtain an improvement on average of 2.6% and 5% over the next
best performing method on the Olympics dataset using manual representations
and word2vec respectively. We obtain an average improvement of 6.3% over the
next best performing model on the HMDB51 dataset using word2vec. We obtain an
improvement on average performance by 1.5% and 4.8% over the next best
performing model on the UCF101 dataset using manual representations and
word2vec respectively. Similarly to ZSL, we show generalized performance
improvements using sentence2vec. We also report results on the combination of
embeddings. We see an improvement of 0.3%, 0.6% and 0.4% over the individual
best embedding for CLASTER.
### 5.7 Analysis of the RL optimization
We analyze how RL affects clustering on the UCF101 training set. For each
class in the training set, we measure the distribution of clusters that they
belong to, visualized in the supplementary material. We observe that after the
RL optimization, the clustering becomes “cleaner”. This is, most instances in
a class belong to a dominant cluster. This effect can be measured using the
purity of the cluster:
$\mathrm{Purity}=\frac{1}{N}\sum_{i=1}^{k}\max_{j}\left|c_{i}\cap t_{j}\right|,$ (11)
where $N$ is the number of data points (video instances), $k$ is the number of
clusters, $c_{i}$ is a cluster in the set of clusters, and $t_{j}$ is the
class which has the maximum count for cluster $c_{i}$. Poor clustering results
in purity values close to 0, and a perfect clustering will return a purity of
1. Using k-means, the purity of the clusters is 0.77, while optimizing the
clusters with RL results in a purity of 0.89.
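For reference, Eq. 11 can be computed directly from the hard assignments (a sketch with our own argument names):

```python
import numpy as np

def purity(assign, labels):
    """Eq. 11: fraction of instances belonging to their cluster's
    dominant class. assign, labels: (N,) non-negative integer arrays."""
    total = 0
    for j in np.unique(assign):
        members = labels[assign == j]
        total += np.bincount(members).max()   # size of the dominant class
    return total / len(labels)
```

A perfect clustering, with each class confined to its own cluster, returns 1.0.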
Finally, we observe another interesting side effect of clustering. Some of the
most commonly confused classes before clustering (e.g. “Baby crawling” vs.
“Mopping floor”, “Breaststroke” vs. “front crawl”, “Rowing” vs. “front crawl”)
are assigned to different clusters after the RL optimization, resolving the
confusion. This suggests that clusters also serve as a means to differentiate
between similar classes.
### 5.8 Results on Images
Our method also generalizes to the image domain as shown in Table 5. CLASTER
outperforms previous work in five out of eight tasks and obtains comparable
results in the remaining three tasks. We compare our approach to several
state-of-the-art methods: the Region Graph Embedding Network with the Parts
Relation Reasoning branch (RGEN) and without the branch (R-PRR) [51], the
evaluation of output embeddings (SJE) [1], the feature generating networks
(WGAN) [46], the feature generating framework (VAEGAN) [49] and the latent
embedding feedback method (LEF) [32].
Method | CUB | SUN | AWA2 | APY
---|---|---|---|---
| ZSL | GZSL | ZSL | GZSL | ZSL | GZSL | ZSL | GZSL
SJE [1] | 53.9 | 33.6 | 53.7 | 19.8 | 61.9 | 14.4 | 32.9 | 6.9
WGAN [46] | 57.3 | 52.3 | 60.8 | 40.0 | 68.2 | 60.2 | - | -
VAEGAN [49] | 72.9 | 68.9 | 65.6 | 43.1 | 70.3 | 65.2 | - | -
LEF [32] | 74.3 | 70.7 | 66.7 | 46.3 | 73.4 | 66.7 | - | -
R-PRR [51] | 75.0 | 64.7 | 63.4 | 36.2 | 72.5 | 69.7 | 43.9 | 36.3
RGEN [51] | 76.1 | 66.1 | 63.8 | 36.8 | 73.6 | 71.5 | 44.4 | 37.2
CLASTER | 76.4 | 69.8 | 67.2 | 44.8 | 74.1 | 72.7 | 44.1 | 40.8
Table 5: Results on image datasets. For ZSL, we report the mean class accuracy
and for GZSL, we report the harmonic mean of seen and unseen class accuracies.
All approaches use manual annotations as the form of semantic embedding.
### 5.9 Seen and Unseen Class Performance for GZSL
In order to better analyze performance of the model on GZSL, we report the
average seen and unseen accuracies along with their harmonic mean. The results
using different embeddings and on the UCF101, HMDB51 and Olympics datasets are
reported in Table 7.
Model | CUB | SUN | AWA2 | APY
---|---|---|---|---
| u | s | H | u | s | H | u | s | H | u | s | H
WGAN [46] | 43.7 | 57.7 | 49.7 | 42.6 | 36.6 | 39.4 | 57.9 | 61.4 | 59.6 | – | – | –
R-PRR [51] | 61.4 | 68.5 | 64.7 | 42.7 | 31.5 | 36.2 | 64.1 | 76.4 | 69.7 | 29.2 | 48.0 | 36.3
RGEN [51] | 60.0 | 73.5 | 66.1 | 44.0 | 31.7 | 36.8 | 67.1 | 76.5 | 71.5 | 30.4 | 48.1 | 37.2
CLASTER | 66.3 | 73.8 | 69.8 | 55.2 | 37.7 | 44.8 | 68.8 | 77.1 | 72.7 | 33.7 | 51.9 | 40.8
Table 6: Seen and unseen accuracies for CLASTER on different datasets compared against recent state-of-the-art approaches. ’u’, ’s’ and ’H’ correspond to average unseen accuracy, average seen accuracy and the harmonic mean of the two.
Figure 6: Analysis of how RL optimization changes the cluster to which an instance belongs. The frequencies are represented as percentages of instances in each cluster. We can see that the clusters are a lot “cleaner” after the optimization by RL.
Model | E | Olympics | HMDB51 | UCF-101
---|---|---|---|---
| | u | s | H | u | s | H | u | s | H
WGAN [46] | A | 50.8 | 71.4 | 59.4 | - | - | - | 30.4 | 83.6 | 44.6
OD [27] | A | 61.8 | 71.1 | 66.1 | - | - | - | 36.2 | 76.1 | 49.1
CLASTER | A | 66.2 | 71.7 | 68.8 | - | - | - | 40.2 | 69.4 | 50.9
WGAN [46] | W | 35.4 | 65.6 | 46.0 | 23.1 | 55.1 | 32.5 | 20.6 | 73.9 | 32.2
OD [27] | W | 41.3 | 72.5 | 52.6 | 25.9 | 55.8 | 35.4 | 25.3 | 74.1 | 37.7
CLASTER | W | 49.2 | 71.1 | 58.1 | 35.5 | 52.8 | 42.4 | 30.4 | 68.9 | 42.1
WGAN [46] | S | 36.1 | 66.2 | 46.7 | 28.6 | 57.8 | 38.2 | 27.5 | 74.7 | 40.2
OD [27] | S | 42.9 | 73.5 | 54.1 | 33.4 | 57.8 | 42.3 | 32.7 | 75.9 | 45.7
CLASTER | S | 49.9 | 71.3 | 58.7 | 42.7 | 53.2 | 47.4 | 36.9 | 69.8 | 48.3
CLASTER | C | 66.8 | 71.6 | 69.1 | 43.7 | 53.3 | 48.0 | 40.8 | 69.3 | 51.3
Table 7: Seen and unseen accuracies for CLASTER on different datasets using
different embeddings. ’E’ corresponds to the type of embedding used, wherein
’A’, ’W’, ’S’ and ’C’ refers to manual annotations, word2vec, sen2vec and
combination of the embeddings respectively. ’u’, ’s’ and ’H’ corresponds to
average unseen accuracy, average seen accuracy and the harmonic mean of the
two. All the reported results are on the same splits.
Similarly we report results on CUB, AWA2, APY and SUN and this can be seen in
Table 6.
### 5.10 Change in Clusters After RL
How do the clusters change after the RL optimization? For each class in the
training set, we measure the distribution of clusters that they belong to,
visualized in Figure 6. Here, each column represents a class, and each color a
cluster. In a perfect clustering, each column would have a single color. We
observe that after the RL optimization, the clustering becomes “cleaner”. This
is, most instances in a class belong to a dominant cluster.
## 6 Statistical Significance
We consider the dependent t-test for paired samples. This test is used for
dependent samples, in our case the performance of different models on the
same random data split, and is a paired difference test. It is calculated as
shown in Eq. 12.
$t=\frac{\bar{X}_{D}-\mu_{0}}{s_{D}/\sqrt{n}}$ (12)
where $\bar{X}_{D}$ is the average of the differences between all pairs and
$s_{D}$ is the standard deviation of these differences. The constant
$\mu_{0}$ is set to zero, since we wish to test whether the average
difference is different from zero; $n$ represents the number of samples,
$n=10$ in our case. The comparisons can be seen in Table 8. The lower the
value of ’p’, the higher the significance.
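Eq. 12 with $\mu_{0}=0$ reduces to a few lines (a sketch; converting the t statistic to a p-value would additionally need the Student-t CDF with $n-1$ degrees of freedom):

```python
import numpy as np

def paired_t(x, y):
    """Eq. 12 with mu_0 = 0: t statistic for two models' accuracies
    x and y measured on the same n random splits."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = len(d)
    # sample std of the paired differences (ddof=1), as in Eq. 12
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))
```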
As we can see, our results are statistically significant in comparison to both
OD [27] and WGAN [46] in both ZSL and GZSL. We also see that our results are
statistically significant for both HMDB51 and Olympics in comparison to E2E
[4]. In GZSL, OD [27] also achieves results that are significantly different
in comparison to WGAN [46].
Pairs | Dataset | t-value | Statistical significance(p$<$0.05) | Type
---|---|---|---|---
CLASTER and OD [27] | UCF101 | -15.77 | Significant, p$<$0.00001 | ZSL
CLASTER and WGAN [46] | UCF101 | -9.08 | Significant, p$<$0.00001 | ZSL
CLASTER and E2E [4] | UCF101 | -0.67 | Not Significant, p = 0.26 | ZSL
OD [27] and WGAN [46] | UCF101 | -1.70 | Not Significant, p$=$0.12278 | ZSL
CLASTER and OD [27] | HMDB51 | -4.33 | Significant, p$=$0.00189 | ZSL
CLASTER and WGAN [46] | HMDB51 | -5.54 | Significant, p$=$0.00036 | ZSL
CLASTER and E2E [4] | HMDB51 | -3.77 | Significant, p = 0.00219 | ZSL
OD [27] and WGAN [46] | HMDB51 | -3.71 | Significant, p$=$0.00483 | ZSL
CLASTER and OD [27] | Olympics | -9.06 | Significant, p$<$0.00001 | ZSL
CLASTER and WGAN [46] | Olympics | -11.73 | Significant, p$<$0.00001 | ZSL
CLASTER and E2E [4] | Olympics | -2.72 | Significant, p = 0.012 | ZSL
OD [27] and WGAN [46] | Olympics | -2.47 | Significant, p$=$0.03547 | ZSL
CLASTER and OD [27] | UCF101 | -4.51 | Significant, p$=$0.00148 | GZSL
CLASTER and WGAN [46] | UCF101 | -5.49 | Significant, p$=$0.00039 | GZSL
OD [27] and WGAN [46] | UCF101 | -3.16 | Significant, p$=$0.01144 | GZSL
CLASTER and OD [27] | HMDB51 | -5.08 | Significant, p$=$0.00066 | GZSL
CLASTER and WGAN [46] | HMDB51 | -7.51 | Significant, p$=$0.00004 | GZSL
OD [27] and WGAN [46] | HMDB51 | -5.27 | Significant, p$=$0.00051 | GZSL
CLASTER and OD [27] | Olympics | -5.79 | Significant, p$=$0.00026 | GZSL
CLASTER and WGAN [46] | Olympics | -8.39 | Significant, p$=$0.00002 | GZSL
OD [27] and WGAN [46] | Olympics | -6.22 | Significant, p$=$0.00014 | GZSL
Table 8: Comparison of the t-test for different pairs of models on the same
random split. The lower the value of p, the higher the significance. As we can see,
our results are statistically significant in comparison to both OD [27] and
WGAN [46] in both ZSL and GZSL. For GZSL, OD [27] also achieves results that
are significant in comparison to WGAN [46].
## 7 Average of Differences in Performance for Same Splits
Since the performance of the model varies for each random split (as witnessed
by the standard deviation values), we average the difference in performance
between CLASTER, OD, WGAN and E2E on the same splits. We believe that this
gives us a better metric to check the performance of CLASTER with the other
approaches. The results are depicted in Table 9.
Models | Setting | Olympics | HMDB51 | UCF101
---|---|---|---|---
Ours and WGAN [46] | ZSL | 17.5 ± 4.5 | 7.0 ± 3.8 | 17.4 ± 5.7
Ours and OD [27] | ZSL | 13.6 ± 4.5 | 2.4 ± 1.6 | 14.3 ± 2.7
Ours and E2E [4] | ZSL | 2.6 ± 2.8 | 3.7 ± 2.8 | 0.4 ± 1.8
Ours and WGAN [46] | GZSL | 11.2 ± 4.0 | 9.3 ± 3.7 | 8.1 ± 4.4
Ours and OD [27] | GZSL | 4.6 ± 2.4 | 5.2 ± 3.1 | 2.7 ± 1.8
Table 9: Comparing the average of the difference in performance for recent
state-of-the-art approaches in zero-shot and generalized zero-shot action
recognition on the same splits. All results were computed using sen2vec as the
embedding. We can see that we outperform recent approaches in every scenario.
## 8 Conclusion
Zero-shot action recognition is the task of recognizing action classes without
any visual examples. The challenge is to map the knowledge of seen classes at
training time to that of novel unseen classes at test time. We propose a novel
model that learns clustering-based representation optimized by reinforcement
learning. Our method consistently outperforms prior work, regardless of the
semantic embeddings used, the dataset, and, both, for standard and for
generalized zero-shot evaluation (GZSL). We also show that better semantic
representations of action classes can be obtained using sentence2vec instead
of word2vec, as the former is specifically trained to capture the meaning of
multi-word expressions such as the labels of action classes. Overall, we
achieve remarkable improvements over the previously reported results, up to
11.9% absolute improvement on HMDB51 for GZSL. We also show that our method
generalizes to the image domain as well.
## References
* [1] Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, and Bernt Schiele. Evaluation of output embeddings for fine-grained image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2927–2936, 2015.
* [2] Humam Alwassel, Dhruv Mahajan, Lorenzo Torresani, Bernard Ghanem, and Du Tran. Self-supervised learning by cross-modal audio-video clustering. arXiv preprint arXiv:1911.12667, 2019.
* [3] Relja Arandjelovic, Petr Gronat, Akihiko Torii, Tomas Pajdla, and Josef Sivic. Netvlad: Cnn architecture for weakly supervised place recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5297–5307, 2016.
* [4] Biagio Brattoli, Joseph Tighe, Fedor Zhdanov, Pietro Perona, and Krzysztof Chalupka. Rethinking zero-shot video classification: End-to-end training for realistic applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4613–4623, 2020.
* [5] J. Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In IEEE Conf. Comput. Vis. Pattern Recog., 2017.
* [6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* [7] Yang Fan, Fei Tian, Tao Qin, Jiang Bian, and Tie-Yan Liu. Learning what data to learn. arXiv preprint arXiv:1702.08635, 2017.
* [8] Ali Farhadi, Ian Endres, Derek Hoiem, and David Forsyth. Describing objects by their attributes. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 1778–1785. IEEE, 2009.
* [9] Edward W Forgy. Cluster analysis of multivariate data: efficiency versus interpretability of classifications. biometrics, 21:768–769, 1965.
* [10] Chuang Gan, Ming Lin, Yi Yang, Gerard De Melo, and Alexander G Hauptmann. Concepts not alone: Exploring pairwise relationships for zero-shot video activity recognition. In Thirtieth AAAI conference on artificial intelligence, 2016.
* [11] Chuang Gan, Ming Lin, Yi Yang, Yueting Zhuang, and Alexander G Hauptmann. Exploring semantic inter-class relationships (sir) for zero-shot action recognition. In Proceedings of the National Conference on Artificial Intelligence, 2015.
* [12] Chuang Gan, Tianbao Yang, and Boqing Gong. Learning attributes equals multi-source domain generalization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 87–97, 2016.
* [13] Junyu Gao, Tianzhu Zhang, and Changsheng Xu. I know the relationships: Zero-shot action recognition via two-stream graph convolutional networks and knowledge graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8303–8311, 2019.
* [14] Junyu Gao, Tianzhu Zhang, and Changsheng Xu. Learning to model relationships for zero-shot video classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
* [15] Rohit Girdhar and Deva Ramanan. Attentional pooling for action recognition. In Advances in Neural Information Processing Systems, pages 34–45, 2017.
* [16] Rohit Girdhar, Deva Ramanan, Abhinav Gupta, Josef Sivic, and Bryan Russell. Actionvlad: Learning spatio-temporal aggregation for action classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 971–980, 2017.
* [17] Shreyank N Gowda, Panagiotis Eustratiadis, Timothy Hospedales, and Laura Sevilla-Lara. Alba: Reinforcement learning for video object segmentation. arXiv preprint arXiv:2005.13039, 2020.
* [18] Yanli Ji, Yue Zhan, Yang Yang, Xing Xu, Fumin Shen, and Heng Tao Shen. A context knowledge map guided coarse-to-fine action recognition. IEEE Transactions on Image Processing, 29:2742–2752, 2019.
* [19] Zhong Ji, Yuxin Sun, Yunlong Yu, Jichang Guo, and Yanwei Pang. Semantic softmax loss for zero-shot learning. Neurocomputing, 316:369–375, 2018.
* [20] Elyor Kodirov, Tao Xiang, Zhenyong Fu, and Shaogang Gong. Unsupervised domain adaptation for zero-shot learning. In Proceedings of the IEEE international conference on computer vision, pages 2452–2460, 2015.
* [21] Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. Hmdb: a large video database for human motion recognition. In 2011 International Conference on Computer Vision, pages 2556–2563. IEEE, 2011.
* [22] Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 951–958. IEEE, 2009.
* [23] Aristidis Likas. A reinforcement learning approach to online clustering. Neural computation, 11(8):1915–1932, 1999.
* [24] Bin Liu, Li Yao, Zheyuan Ding, Junyi Xu, and Junfeng Wu. Combining ontology and reinforcement learning for zero-shot classification. Knowledge-Based Systems, 144:42–50, 2018.
* [25] Changzhi Luo, Zhetao Li, Kaizhu Huang, Jiashi Feng, and Meng Wang. Zero-shot learning via attribute regression and class prototype rectification. IEEE Transactions on Image Processing, 27(2):637–648, 2017.
* [26] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605, 2008.
* [27] Devraj Mandal, Sanath Narayan, Sai Kumar Dwivedi, Vikram Gupta, Shuaib Ahmed, Fahad Shahbaz Khan, and Ling Shao. Out-of-distribution detection for generalized zero-shot action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9985–9993, 2019.
* [28] Pascal Mettes and Cees GM Snoek. Spatial-aware object embeddings for zero-shot localization and classification of actions. In Proceedings of the IEEE International Conference on Computer Vision, pages 4443–4452, 2017.
* [29] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013.
* [30] Ashish Mishra, Anubha Pandey, and Hema A Murthy. Zero-shot learning for action recognition using synthesized features. Neurocomputing, 390:117–130, 2020.
* [31] Ashish Mishra, Vinay Kumar Verma, M Shiva Krishna Reddy, S Arulkumar, Piyush Rai, and Anurag Mittal. A generative approach to zero-shot and few-shot action recognition. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 372–380. IEEE, 2018.
* [32] Sanath Narayan, Akshita Gupta, Fahad Shahbaz Khan, Cees GM Snoek, and Ling Shao. Latent embedding feedback and discriminative features for zero-shot classification. arXiv preprint arXiv:2003.07833, 2020.
* [33] Juan Carlos Niebles, Chih-Wei Chen, and Li Fei-Fei. Modeling temporal structure of decomposable motion segments for activity classification. In European conference on computer vision, pages 392–405. Springer, 2010.
* [34] Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. Zero-shot task generalization with multi-task deep reinforcement learning. In International Conference on Machine Learning, pages 2661–2670, 2017.
* [35] Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. Unsupervised learning of sentence embeddings using compositional n-gram features. In Proceedings of NAACL-HLT, pages 528–540, 2018.
* [36] Jie Qin, Li Liu, Ling Shao, Fumin Shen, Bingbing Ni, Jiaxin Chen, and Yunhong Wang. Zero-shot action recognition with error-correcting output codes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2833–2842, 2017.
* [37] Zhaofan Qiu, Ting Yao, and Tao Mei. Learning spatio-temporal representation with pseudo-3d residual networks. In proceedings of the IEEE International Conference on Computer Vision, pages 5533–5541, 2017.
* [38] Marcus Rohrbach, Michaela Regneri, Mykhaylo Andriluka, Sikandar Amin, Manfred Pinkal, and Bernt Schiele. Script data for attribute-based recognition of composite activities. In Eur. Conf. Comput. Vis., 2012.
* [39] Bernardino Romera-Paredes and Philip Torr. An embarrassingly simple approach to zero-shot learning. In International Conference on Machine Learning, pages 2152–2161, 2015.
* [40] Yutaro Shigeto, Ikumi Suzuki, Kazuo Hara, Masashi Shimbo, and Yuji Matsumoto. Ridge regression, hubness, and zero-shot learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 135–151. Springer, 2015.
* [41] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in neural information processing systems, pages 568–576, 2014.
* [42] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
* [43] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 4489–4497, 2015.
* [44] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset, 2011.
* [45] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7794–7803, 2018.
* [46] Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. Feature generating networks for zero-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5542–5551, 2018.
* [47] Yongqin Xian, Bernt Schiele, and Zeynep Akata. Zero-shot learning-the good, the bad and the ugly. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4582–4591, 2017.
* [48] Yongqin Xian, Bernt Schiele, and Zeynep Akata. Zero-shot learning-the good, the bad and the ugly. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4582–4591, 2017.
* [49] Yongqin Xian, Saurabh Sharma, Bernt Schiele, and Zeynep Akata. f-vaegan-d2: A feature generating framework for any-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10275–10284, 2019.
* [50] Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pages 3485–3492. IEEE, 2010.
* [51] Guo-Sen Xie, Li Liu, Fan Zhu, Fang Zhao, Zheng Zhang, Yazhou Yao, Jie Qin, and Ling Shao. Region graph embedding network for zero-shot learning. In European Conference on Computer Vision, pages 562–580. Springer, 2020.
* [52] Xun Xu, Timothy Hospedales, and Shaogang Gong. Transductive zero-shot action recognition by word-vector embedding. International Journal of Computer Vision, 123(3):309–333, 2017\.
* [53] Xun Xu, Timothy M Hospedales, and Shaogang Gong. Multi-task zero-shot action recognition with prioritised data augmentation. In European Conference on Computer Vision, pages 343–359. Springer, 2016.
* [54] Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, SuHang Zheng, Feng Wang, Jun Zhang, and Huajun Chen. Zero-shot text classification via reinforced self-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3014–3024, 2020.
* [55] Li Zhang, Tao Xiang, and Shaogang Gong. Learning a deep embedding model for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2021–2030, 2017.
* [56] Yi Zhu, Yang Long, Yu Guan, Shawn Newsam, and Ling Shao. Towards universal representation for unseen action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9436–9445, 2018.
# In-situ AC-hysteresis measurements of SPD-processed Cu20(Fe15Co85)80
Martin Stückler1, Stefan Wurster, Reinhard Pippan, Andrea Bachmaier
Erich Schmid Institute of Materials Science, Austrian Academy of Sciences
Jahnstraße 12, 8700 Leoben, Austria
<EMAIL_ADDRESS>
###### Abstract
The changes of magnetic properties upon heat treatment of a metastable
supersaturated solid solution processed by severe plastic deformation are
investigated by in-situ AC-hysteresis measurements. Data are analyzed in the
framework of dynamic loss theory, with correlative investigations of the
microstructural properties. The evolution of the hysteresis upon annealing
shows that the single-phase supersaturated solid solution remains stable up to
400°C, at which temperature hindering of domain wall motion sets in. At
600°C, a multi-phase microstructure is present, causing a significant increase
in coercivity.
## 1 Introduction
With high-pressure torsion (HPT), a technique of severe plastic deformation
(SPD), it is possible to form bulk supersaturated solid solutions with grain
sizes in the nanocrystalline regime [1, 2]. It has been shown that, with
different ratios of Co to Cu, huge reductions in the coercivity can be
achieved by means of SPD, resulting in soft magnetic materials [3, 4]. In the
present study, Fe is added to lower the magnetocrystalline anisotropy to
further reduce the coercivity [5]. To study the evolution of magnetic
properties as a function of temperature, the hysteresis is recorded during in-
situ annealing treatments with a concomitant investigation of the dynamic
magnetic behavior for certain temperatures. The resulting data are discussed
in the framework of dynamic loss theory and correlated to the evolving
microstructure.
## 2 Experimental
Powders (Fe: MaTeck 99.9% -100+200 mesh; Co: GoodFellow 99.9% 50-150 $\mu$m;
Cu: Alfa Aesar -170+400 mesh 99.9%) were mixed and hydrostatically
consolidated in Ar-atmosphere. A coin-shaped specimen (diameter: 8 mm;
thickness: 1 mm) was processed by two subsequent steps of HPT deformation (100
turns at 300°C; 50 turns at room temperature (RT)), as described elsewhere [4].
The sample was further processed into a ring shaped specimen and equipped with
68 primary windings and 61 secondary windings (Cu-wire; diameter: 0.315 mm and
0.200 mm, respectively). Electrical isolation between the sample and the
windings was maintained with a high temperature adhesive (Minco FortaFix
Autostic FC8). To apply the magnetic field, a sinusoidal current (4 A; 5-1000
Hz) was applied to the primary winding with a KEPCO BOP 100-4M power supply,
according to [eq. 1a]. The voltage induced in the secondary windings [eq. 2]
was measured with a National Instruments BNC-2110 terminal block. Data
processing was carried out with LabView (version 14.0.1f3). The hysteresis
measurement is described in more detail in Ref. [6]. For in-situ measurements,
the sample was clamped between two Cu-blocks and heated by cartridge heaters
(hotset hotrod HHP HT4030504). The temperature was measured by a K-type
thermocouple, close to the sample’s position, to ensure that the hysteresis
measurements take place within $\pm$5°C of the target temperature. To maintain a
homogeneously heated sample, measurements were started 15 min after
stabilization of the target temperature. In-situ measurements were performed
in a customized vacuum chamber to prevent oxidation, maintaining a pressure
below $10^{-2}$ mbar during the whole experiment.
Microstructural investigations were performed by X-ray diffraction using Co-Kα
radiation (XRD; Bruker D2-Phaser) and scanning electron microscopy (SEM; Zeiss
LEO1525) in backscattered electron (BSE) mode. The composition was determined
by an energy dispersive X-ray spectroscopy (EDS; Bruker XFlash 6|60) system.
## 3 Results and discussion
The magnetic field $H$ is controlled by the number of primary windings
$N_{p}$, the applied current $I$ and the mean diameter $d_{m}$ [eq. 1a],
resulting in the present case in a maximum magnetic field of 11.9 kA m$^{-1}$
($d_{outer}$=8.76 mm; $d_{inner}$=5.76 mm).
$H=\frac{N_{p}\cdot I}{\pi\cdot d_{m}}$ (1a)
$d_{m}=\frac{d_{outer}+d_{inner}}{2}$ (1b)
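As a quick numerical check of [eq. 1a], using the winding and geometry values quoted in the text (a sketch, not part of the original analysis):

```python
import math

N_p = 68                          # primary windings
I_peak = 4.0                      # A, peak of the sinusoidal current
d_m = (8.76e-3 + 5.76e-3) / 2     # mean diameter in m (eq. 1b)

H_max = N_p * I_peak / (math.pi * d_m)   # eq. 1a, in A/m
print(round(H_max / 1e3, 1))             # → 11.9 (kA/m), matching the text
```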
Eq. 2 gives the magnetic induction $B$ as a function of the time dependent
induced voltage $U_{ind}(t)$, the number of secondary windings $N_{s}$ and the
cross-sectional area $A$ of the ring-core (here: $A$=0.707 mm$^{2}$).
$B=\frac{\int U_{ind}(t)dt}{N_{s}\cdot A}$ (2)
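[Eq. 2] can likewise be checked numerically. The induced-voltage waveform below is synthetic, with an arbitrary amplitude chosen only to illustrate the running integral; it is not measured data:

```python
import numpy as np

N_s = 61            # secondary windings
A = 0.707e-6        # core cross-section in m^2
f = 50.0            # Hz

t = np.linspace(0.0, 1.0 / f, 2001)        # one field cycle
U_ind = 1e-3 * np.cos(2 * np.pi * f * t)   # V, synthetic waveform

# eq. 2: B(t) is the running integral of U_ind, divided by N_s * A
dU = 0.5 * (U_ind[1:] + U_ind[:-1]) * np.diff(t)   # trapezoidal slices
B = np.concatenate(([0.0], np.cumsum(dU))) / (N_s * A)
```

For this cosine input the analytic peak is $B_{max}=U_{0}/(2\pi f\,N_{s}A)\approx 0.074$ T, which the numerical integral reproduces.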
Figure 1: Evolution of in-situ AC-hysteresis curves according to the
temperature treatment in fig. 2. The as-deformed state is measured at RT (a).
After in-situ temperature treatment at 150°C (b), 300°C (c), 400°C (d) and
600°C (e), the specimen is measured again after cooling down to room
temperature (600°C-RT; (f)). The legend in (a) applies to all diagrams.
The specimen is measured in the as-deformed state at RT, shown in fig. 1(a).
It can be seen that saturation is achieved and the area of the hysteresis
loop rises slightly with increasing frequency, indicating low eddy-current
losses. The in-situ temperature treatment was performed according to the
thermal profile shown in fig. 2. Measurements were started 15 min after the
target temperature had settled, since similarly processed Co-Cu samples showed
the majority of microstructural changes happening only during a short time
period after reaching the target temperature [7]. The time stamps of the
measurements are represented by the black diamonds in fig. 2. Hysteresis loops are
measured at 150°C (fig. 1(b)), 300°C (fig. 1(c)), 400°C (fig. 1(d)) and 600°C
(fig. 1(e)). For the 150°C measurement, the temperature exceeded 160°C for a
short period of time, but it was shown that the microstructure does not change
in this temperature range since only relief of internal stresses takes place
to a small extent [8]. The hysteresis loops exhibit similar shapes up to 300°C, but
at 400°C the frequency behavior changes significantly, which is clearly
visible in the 1000 Hz measurement. The area of the 1000 Hz hysteresis
increases again in the 600°C-state. The measurement of the in-situ treated
specimen is repeated after cooling down to RT (fig. 1(f); 600°C-RT). For the
600°C-RT state, saturation is not achieved in the 1000 Hz measurement.
means, the measured hysteresis represents a minor loop and the coercivity
cannot be evaluated for this frequency. Lower frequencies reveal a larger area
of the hysteresis loop with respect to the 600°C-state, arising most likely
from the temperature dependence of the magnetocrystalline anisotropy according
to Brukhatov-Kirensky [9].
Figure 2: Temperature $T$ during the in-situ experiment as a function of time
$t$. Black diamonds mark the periods in which hysteresis measurements are
performed.
In the following, a quantitative analysis on the evolution of the coercivity
is carried out. Fig. 3 shows the measured coercivities as a function of
frequency. To disentangle the (static) intrinsic magnetic properties from the
dynamic losses, [eq. 3] is fitted to the data [10, 11, 12].
$H_{C}(f)=H_{C}(0)+b\cdot\sqrt{f}+c\cdot f$ (3)
In [eq. 3], the dynamic loss is separated into anomalous-loss $b$, caused by
domain wall motion, and eddy current loss $c$, which is mainly controlled by
the conductivity [13]. Since the conductivity of SPD-processed materials is
significantly lowered with respect to coarse-grained materials [14], we assume
that the dynamic losses are mainly controlled by anomalous losses and therefore
neglect the third term in [eq. 3]. In fig. 3, the measured coercivities are
plotted versus the square-root of frequency, showing a linear scaling with
$\sqrt{f}$, confirming the aforementioned statement. The results from linearly
fitting the data are shown in fig. 4. Diminishing static coercivity, as well
as anomalous loss, can be identified between the as-deformed, 150°C- and
300°C-annealed state. For SPD-processed Cu-Co and Cu-Fe-Co, a diminishing
defect density was reported in this temperature window, but no apparent
changes in the grain size have been determined [4, 8]. The decreasing
coercivity between RT and 400°C might therefore originate from a decrease in
the magnetoelastic anisotropy constant [15] $K_{el}\propto\sigma\cdot\lambda$,
with the residual stress $\sigma$ and the magnetostriction $\lambda$. It
should be stated, that the coercivity is further lowered by the temperature
dependence of the magnetocrystalline anisotropy, as already mentioned [9]. A
huge jump in $H_{C}(0)$ can be noticed at 600°C, indicating a large
microstructural variation, such as grain growth. Large microstructural
variations have been determined in similar materials at this temperature [4,
8]. A further increase in coercivity is visible for the 600°C-RT state, which
is again traced back to temperature dependence.
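With the eddy-current term neglected, the fit behind fig. 4 reduces to a straight line of $H_{C}$ versus $\sqrt{f}$. A minimal sketch; the coercivity values are synthetic, chosen only to show that the fit recovers $H_{C}(0)$ and $b$:

```python
import numpy as np

f = np.array([5.0, 50.0, 100.0, 500.0, 1000.0])   # Hz, measurement frequencies
Hc = 120.0 + 3.0 * np.sqrt(f)                     # A/m; Hc(0)=120, b=3 (synthetic)

# eq. 3 without the eddy-current term: Hc(f) = Hc(0) + b*sqrt(f),
# i.e. a degree-1 polynomial in sqrt(f)
b_fit, Hc0_fit = np.polyfit(np.sqrt(f), Hc, 1)
print(Hc0_fit, b_fit)   # recovers the static coercivity and anomalous loss slope
```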
Figure 3: Coercivity plotted versus the square-root of frequency as recorded
during temperature treatment. The lines represent the fits to [eq. 3], taking
only anomalous dynamic losses into account.
The anomalous loss parameter $b$ is closely related to the microstructure, and
takes into account the energy needed for domain wall motion, which can be
increased due to pinning at lattice defects or grain / phase boundaries. The
increase in $b$ at 400°C therefore indicates the formation of pinning centers
hindering domain wall motion [16, 17], which precedes the demixing of
the microstructure at 600°C. For the 600°C-state, the anomalous loss parameter
stays rather constant.
Figure 4: Static coercivity $H_{C}(0)$ and anomalous loss parameter $b$,
determined according to [eq. 3].
The microstructure of the in-situ heat-treated sample is investigated by SEM
and XRD and compared to the initial (as-deformed) state. For this purpose, a
second sample is fabricated, representing the as-deformed state. Fig. 5 shows
SEM images of both samples. The as-deformed state (fig. 5(a)) shows a highly
homogeneous, nanocrystalline microstructure. In the 600°C-RT state (fig.
5(b)), a significantly larger grain size in the ultra-fine grained regime is
visible. Furthermore, phase contrast indicates a chemical inhomogeneity,
showing demixing tendencies. Bright areas indicate high-Z and therefore Cu-rich
regions, whereas dark-grey or black areas point to the presence of low-Z
Fe-Co-rich phases. EDS measurements at 20 different spots
reveal the composition of the in-situ treated sample to be Cu23(Fe12Co88)77
(wt.%; $\pm$1 wt.%).
Figure 5: SEM-BSE images of a second specimen in as-deformed state (a) and the
sample after in-situ heat-treatment (600°C-RT; (b)) in tangential sample
direction. The scale bar in (a) applies to both images.
In fig. 6, the XRD pattern of the in-situ treated sample is shown in
comparison to the as-deformed state. In the as-deformed state, only one fcc
pattern is visible, revealing the single-phase crystallographic structure,
whereas in the 600°C-RT state, three different patterns evolve: the fcc-Cu
pattern coincides with the theoretical values, indicating the presence of pure
Cu. In contrast, the fcc-Co, as well as the bcc-Fe pattern, show deviations
with respect to the theoretical values, suggesting the presence of
$\gamma$-(Fe,Co) and $\alpha$-(Fe,Co), in line with the SEM investigations
above.
Figure 6: XRD pattern of the sample after in-situ heat treatment (600°C-RT) in
comparison with a second specimen in as-deformed state, measured with Co-Kα
radiation.
## 4 Conclusion
In-situ AC-hysteresis measurements of SPD-processed Cu20(Fe15Co85)80 reveal a
persisting soft magnetic behavior up to 400°C. The amount of eddy current
losses is low by comparison, owing to the high resistivity of SPD-processed
materials. At 400°C, pinning centers start to form, accelerating domain wall
motion and causing an increase in dynamic loss. At 600°C, the microstructure
has changed from the initial single-phase supersaturated solid solution into a
multi phase microstructure according to the thermodynamical equilibrium. The
formation of pinning centers rushes ahead this phase change. The results
demonstrate the capability of magnetic measurements capturing smallest
microstructural changes before they become evident with other techniques.
## 5 Acknowledgments
This project has received funding from the European Research Council (ERC)
under the European Union’s Horizon 2020 research and innovation programme
(Grant No. 757333). The authors thank R. Neubauer, M. Reiter and M. Kasalo for
preparing the sample and the respective holder for in-situ experiment.
## Data Availability Statement
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* [1] R. Z. Valiev, R. K. Islamgaliev, and I. V. Alexandrov. Bulk nanostructured materials from severe plastic deformation. Prog. Mater. Sci., 45(2):103–189, 2000.
* [2] K. S. Kormout, R. Pippan, and A. Bachmaier. Deformation-induced supersaturation in immiscible material systems during high-pressure torsion. Adv. Eng. Mater., 19(4):1600675, 2017.
* [3] M. Stückler, H. Krenn, R. Pippan, L. Weissitsch, S. Wurster, and A. Bachmaier. Magnetic binary supersaturated solid solutions processed by severe plastic deformation. Nanomaterials, 9(1):6, 2019.
* [4] M. Stückler, C. Teichert, A. Matković, H. Krenn, L. Weissitsch, S. Wurster, R. Pippan, and A. Bachmaier. On the magnetic nanostructure of a Co-Cu alloy processed by high-pressure torsion. J. Sci. Adv. Mater. Dev., 2020. in press.
* [5] C. Kuhrt and L. Schultz. Formation and magnetic properties of nanocrystalline mechanically alloyed Fe-Co. J. Appl. Phys., 71(4):1896–1900, 1992.
* [6] R. S. Turtelli, S. Hartl, R. Grössinger, R. Wöhrnschimmel, D. Horwatitsch, F. Spieckermann, G. Polt, and M. Zehetbauer. Hysteresis and loss measurements on the plastically deformed Fe–(3 wt%) Si under sinusoidal and triangular external field. IEEE Trans. Magn., 52(5):1–7, 2016.
* [7] S. Wurster, M. Stückler, L. Weissitsch, T. Müller, and A. Bachmaier. Microstructural changes influencing the magnetoresistive behavior of bulk nanocrystalline materials. Appl. Sci., 10(15):5094, Jul 2020.
* [8] M. Stückler, L. Weissitsch, S. Wurster, H. Krenn, R. Pippan, and A. Bachmaier. Formation of solid solutions of Cu-Fe-Co by severe plastic deformation.
* [9] N. L. Brukhatov and L. V. Kirensky. The anisotropy of the magnetic energy in single crystals of nickel as a function of temperature. Phys Z Sowjetunion, 12(5):601, 1937.
* [10] G. Bertotti, F. Fiorillo, and G. P. Soardo. The prediction of power losses in soft magnetic materials. J. Phys. Colloques, 49(C8):C8–1915, 1988.
* [11] F. Fiorillo. Measurement and characterization of magnetic materials. North-Holland, 2004.
* [12] R. Hilzinger and W. Rodewald. Magnetic materials: fundamentals, products, properties, applications. Vacuumschmelze, 2013.
* [13] F. Fiorillo, G. Bertotti, C. Appino, and M. Pasquale. Soft Magnetic Materials, pages 1–42, 2016.
* [14] J. M. Cubero-Sesin, M. Arita, and Z. Horita. High strength and electrical conductivity of Al-Fe alloys produced by synergistic combination of high-pressure torsion and aging. Adv. Eng. Mater., 17(12):1792–1803, 2015.
* [15] T. D. Shen, R. B. Schwarz, and J. D. Thompson. Soft magnetism in mechanically alloyed nanocrystalline materials. Phys. Rev. B, 72:014431, Jul 2005.
* [16] G. L. Houze Jr. Domain-wall motion in grain-oriented silicon steel in cyclic magnetic fields. J. Appl. Phys., 38(3):1089–1096, 1967.
* [17] K. Overshott. The use of domain observations in understanding and improving the magnetic properties of transformer steels. IEEE Trans. Magn., 12(6):840–845, 1976.
# Integrating theory and experiments to link local mechanisms and ecosystem-
level consequences of vegetation patterns in drylands
Ricardo Martinez-Garcia1,∗, Ciro Cabal2, Justin M. Calabrese3,4,5,
Emilio Hernández-García6, Corina E. Tarnita2, Cristóbal López6,†, Juan A.
Bonachela7,†
( 1 ICTP South American Institute for Fundamental Research & Instituto de
Física Teórica - Universidade Estadual Paulista, São Paulo, SP, Brazil.
2 Department of Ecology and Evolutionary Biology, Princeton University,
Princeton, NJ, USA.
3 Center for Advanced Systems Understanding (CASUS), Görlitz, Germany.
4 Helmholtz-Zentrum Dresden Rossendorf (HZDR), Dresden, Germany.
5 Department of Ecological Modelling, Helmholtz Centre for Environmental
Research – UFZ, Leipzig, Germany.
6 IFISC, Instituto de Física Interdisciplinar y Sistemas Complejos (CSIC-UIB),
Palma de Mallorca, Spain.
7 Department of Ecology, Evolution, and Natural Resources, Rutgers University,
New Brunswick, NJ, USA
∗ Correspondence<EMAIL_ADDRESS>$\dagger$ equal contribution. )
###### Abstract
Self-organized spatial patterns of vegetation are frequent in water-limited
regions and have been suggested as important indicators of ecosystem health.
However, the mechanisms underlying their emergence remain unclear. Some
theories hypothesize that patterns could result from a water-mediated scale-
dependent feedback (SDF), whereby interactions favoring plant growth dominate
at short distances and growth-inhibitory interactions dominate in the long
range. However, we know little about how net plant-to-plant interactions may
shift from positive to negative as a function of inter-individual distance,
and in the absence of strong empirical support, the relevance of this SDF for
vegetation pattern formation remains disputed. These theories predict a
sequential change in pattern shape from gapped to labyrinthine to spotted
spatial patterns as precipitation declines. Nonetheless, alternative theories
show that the same sequence of patterns could emerge even if net interactions
between plants were always inhibitory (purely competitive feedbacks, PCF).
Importantly, although these alternative hypotheses lead to visually
indistinguishable patterns they predict very different desertification
dynamics following the spotted pattern, limiting their potential use as an
ecosystem-state indicator. Moreover, vegetation interaction with other
ecosystem components can introduce additional spatio-temporal scales that
reshape both the patterns and the desertification dynamics. Therefore, to make
reliable ecological predictions for a focal ecosystem, it is crucial that
models accurately capture the mechanisms at play in the system of interest.
Here, we review existing theories for vegetation self-organization and their
conflicting predictions about desertification dynamics. We further discuss
possible ways for reconciling these predictions and potential empirical tests
via manipulative experiments to improve our understanding of how vegetation
self-organizes and better predict the fate of the ecosystems where they form.
Keywords: ecological patterns, competition, scale-dependent feedback,
ecological transitions, spatial self-organization, mathematical models.
## 1 Introduction
From microbial colonies to ecosystems extending over continental scales,
complex biological systems often feature self-organized patterns, regular
structures that cover large portions of the system and emerge from nonlinear
interactions among its components (Meinhardt,, 1982; Camazine et al.,, 2003;
Sole and Bascompte,, 2006; Pringle and Tarnita,, 2017). Importantly, because
harsh environmental conditions provide a context in which self-organization
becomes important for survival, emergent patterns contain crucial information
about the physical and biological processes that occur in the systems in which
they form (Sole and Bascompte,, 2006; Meron,, 2018; Zhao et al.,, 2019).
A well-known example of ecological self-organization is vegetation pattern
formation in water-limited ecosystems (Deblauwe et al.,, 2008; Rietkerk and
van de Koppel,, 2008). Flat landscapes can show vegetation spots regularly
distributed on a matrix of bare soil, soil-vegetation labyrinths, and gaps of
bare soil regularly interspersed on a homogeneous layer of vegetation (Fig.
1). Importantly, water availability strongly influences the specific shape of
the pattern. In agreement with model predictions (von Hardenberg et al.,,
2001; Meron et al.,, 2004), a Fourier-based analysis of satellite imagery
covering extensive areas of Sudan revealed that more humid regions are
dominated by gapped patterns, whereas spotted patterns dominate in more arid
conditions (Deblauwe et al.,, 2011). However, imagery time series are not long
enough to observe whether vegetation cover in a specific region undergoes
these transitions between patterns when aridity increases over time.
Figure 1: Aerial images of self-organized vegetation patterns. In all panels,
vegetated regions are darker and bare-soil regions, lighter. a) Spot pattern
in Sudan (11∘34’55.2” N; 27∘54’47.52”E). b) Labyrinthine pattern in Mali
(12∘27’50”N; 3∘18’30”E). c) Gap pattern in Niger (11∘1’12”N; 28∘10’48”E). d)
Band pattern in Sudan (11∘3’0”N; 28∘17’24”E). Microsoft product screen shot(s)
reprinted with permission from Microsoft Corporation. Image Copyrights
$\copyright$2021 Maxar.
Following the spotted pattern, if precipitation continues to decrease, models
predict that patterned ecosystems undergo a transition to a desert state. This
observed correlation between pattern shape and water availability suggests
that the spotted pattern could serve as a reliable and easy-to-identify early-
warning indicator of this ecosystem shift (Scheffer and Carpenter,, 2003;
Rietkerk et al.,, 2004; Scheffer et al.,, 2009; Dakos et al.,, 2011, 2015).
This has reinforced the motivation to develop several models aiming to explain
both the formation of spatial patterns of vegetation and their dependence on
environmental variables (von Hardenberg et al.,, 2001; Rietkerk et al.,, 2002;
Meron et al.,, 2004; Borgogno et al.,, 2009; Martínez-García et al.,, 2014;
Gowda et al.,, 2014). Although some attempts to test model predictions with
satellite imagery exist (Weissmann et al.,, 2017; Bastiaansen et al.,, 2018),
theoretical studies using models remain the dominant approach to study this
hypothesized transition.
All these frameworks successfully reproduce the sequence of gapped,
labyrinthine, and spotted patterns found in satellite imagery (Fig. 2a).
However, they disagree in their predictions regarding the nature of the
desertification transition that follows the spotted pattern. For example,
Rietkerk et al., (2002) and von Hardenberg et al., (2001) predicted that
ecosystems may undergo abrupt desertification, including a hysteresis loop,
following the spotted pattern. Martínez-García et al., 2013a and Yizhaq and
Bel, (2016) predicted that desertification could also occur gradually with
progressive loss of vegetation biomass. Using alternative modeling approaches,
other studies have supported the idea that whether an ecosystem will collapse
gradually or abruptly is system-dependent and determined by the intensity of
stochasticity (Weissmann et al.,, 2017), vegetation and soil type (Kéfi et
al., 2007b, ), colonization rates (Corrado et al.,, 2015), and intensity of
external stresses, such as grazing (Kéfi et al., 2007a, ). Because drylands
cover $\sim 40\%$ of Earth’s land surface and are home to $\sim 35\%$ of the
world population (Mortimore et al.,, 2009), determining whether they will
respond abruptly or gradually to aridification is critical both from an
ecosystem-management and socio-economic point of view.
Active lines of theoretical research aiming to address this question have
focused on understanding how different components of the ecosystem may
interact with each other to determine an ecosystem’s response to aridification
(Bonachela et al.,, 2015; Yizhaq and Bel,, 2016), as well as on designing
synthetic feedbacks (in the form of artificial microbiomes) that could prevent
ecosystems collapses or make such transitions smoother (Villa Martín et al.,,
2015; Conde-Pueyo et al.,, 2020; Vidiella et al.,, 2020). The question has
also attracted considerable attention from empirical researchers (Maestre et
al.,, 2016), and recent evidence suggests that certain structural and
functional ecosystem attributes respond abruptly to aridity (Berdugo et al.,,
2020). Despite current efforts, whether desertification is more likely to
occur gradually or abruptly remains largely unknown.
Here, we outline and rank strategies that will help answer this question. In
section 2, we discuss the ecological rationale behind existing models for
vegetation self-organization. We review such models in section 3 and summarize
their opposing predictions about the ecosystem collapse in section 4. In
section 5, we describe possible manipulative experiments and empirical
measures to select among the previously scrutinized models. Finally, in
section 6, we discuss different research lines that build on current knowledge
and discuss how to apply lessons learned from studying self-organized
vegetation patterns to other self-organizing systems.
## 2 Ecological rationale behind current models for vegetation spatial self-
organization
The net biotic interaction between plants, i.e., the effect that plants have
on each other’s survival, reproduction, and growth rate, ultimately determines
vegetation spatial patterns. Depending on the number of individuals that form
each patch, we can classify non-random vegetation spatial patterns into two
families: segregated and aggregated patterns (Fig. 2b). Segregated patterns
emerge when competition is the dominant net biotic interaction and surviving
individuals avoid interacting with each other. In the long term, vegetation
arranges in a hexagonal lattice of patches with each individual representing
one patch (Pringle and Tarnita,, 2017). Aggregated patterns, on the other
hand, emerge when surviving plants form vegetation patches with several
individuals. Segregated patterns are very common in drylands and are expected
to emerge exclusively due to ecological interference or competition (Pringle
and Tarnita,, 2017). Aggregated patterns, which are the focus of this study,
are ecologically more intriguing because they might result from a richer set
of mechanisms (van de Koppel et al.,, 2008; Rietkerk and van de Koppel,, 2008;
Lee et al.,, 2021) and their ecological implications strongly depend on them
(Fig. 2c, d). Moreover, direct evidence of which feedback type drives
aggregated patterns of vegetation in drylands remains elusive.
Existing theories for vegetation self-organization hypothesize two alternative
ways for the formation of aggregated patterns: scale-dependent feedbacks (SDF)
and purely competitive feedbacks (PCF). Both are based on the biophysical
effects of the plant canopy on the microclimate underneath and of the root
system on the soil conditions (Cabal et al., 2020b, ) (Fig. 2c), but they
differ in how the net interaction between plants changes with the inter-
individual distance. Next, we will briefly review the mechanisms that have
been suggested to underpin each of these two feedbacks and the type of
patterns they might create.
Scale-dependent feedbacks. Biotic facilitation is a very common interaction in
semiarid and arid ecosystems (Holmgren and Scheffer,, 2010). The SDFs invoked
to explain vegetation self-organization are caused by the facilitative effects
of plants nearby their stems, coupled with negative effects at longer
distances. Several ecological processes have been suggested to support these
SDFs. One is the positive effects of shade, which can overcome competition for
light and the effects of root competition for water, and lead to under-canopy
facilitation (Valladares et al.,, 2016). In this context, focal plants have an
overall facilitative effect in the area of most intense shade at the center of
the crown. This effect progressively loses intensity with distance to the
center of the crown and ultimately vanishes, leaving just simple below-ground
competition in areas farther from the plant. A complementary rationale is that
plants modify soil crust, structure, and porosity, and therefore enhance soil
water infiltration (Eldridge et al.,, 2000; Ludwig et al.,, 2004). Enhanced
water infiltration has a direct positive effect close to the plant because it
increases soil water content but, as a by-product, it has negative
consequences farther away from the plant’s insertion point because, by
increasing local infiltration, plants also reduce the amount of water that can
infiltrate further away in bare soil locations (Montaña,, 1992; Bromley et
al.,, 1997). The spatial heterogeneity in water availability due to plant-
enhanced infiltration is higher in sloped landscapes where down-slope water
runoff leads to the formation of banded vegetation patterns (Deblauwe et al.,,
2012; Valentin et al.,, 1999), but it is also significant in flat landscapes
and might cause the emergence of gaps, labyrinths, and spots of vegetation
(HilleRisLambers et al.,, 2001; Gilad et al.,, 2004; Okayasu and Aizawa,,
2001) (Fig. 2a).
Figure 2: Conceptual summary of existing theories for vegetation self-
organization, their emergent patterns and the type of desertification
processes they predict. a) Observed spatial patterns across a gradient of
rainfall, with more humid ecosystems showing gaps and more arid, spots. b)
Examples of spatial random, segregated, and aggregated patterns as defined in
the main text. Aggregated patterns may result both from PCF and SDF, whereas
segregated patterns emerge from PCF. c) Types of feedbacks invoked to explain
the emergence of self-organized vegetation patterns and d) the different
desertification transitions they predict.
Purely competitive feedbacks. On the other hand, competition for resources is
a ubiquitous interaction mechanism that affects the relation between any two
plants that are located in sufficient proximity. Above ground, plants compete
for light through their canopies; below ground, they compete for several soil
resources, including water and nitrogen, through their roots (Craine and
Dybzinski,, 2013). If only competitive mechanisms occur, we should expect
plants to have a negative effect on any other plant within their interaction
range (Fig. 2c) and the intensity of this effect to peak at intermediate
distances between vegetation patches (van de Koppel and Crain,, 2006; Rietkerk
and van de Koppel,, 2008). Because finite-range competition is the only
interaction required by PCF models to explain vegetation self-organization,
PCF is the most parsimonious feedback type that generates vegetation patterns.
## 3 Models for vegetation self-organization
Mathematical frameworks for vegetation self-organization are grouped into two
main categories: individual-based models (IBMs) and partial differential
equation models (PDEMs). IBMs describe each plant as a discrete entity whose attributes
change in time following a stochastic updating rule (DeAngelis and Yurek,,
2016; Railsback and Grimm,, 2019). PDEMs describe vegetation biomass and water
concentration as continuous fields that change in space and time following a
system of deterministic partial differential equations (Meron,, 2015). IBMs are
the most convenient approach to study segregated patterns, where single
individuals are easy to identify and central to the formation of vegetation
patches (Bolker and Pacala,, 1999; Iwasa,, 2010; Wiegand and Moloney,, 2013;
Plank and Law,, 2015). Conversely, PDEMs are a better approximation to
aggregated patterns because they focus on a continuous measure of vegetation
abundance and describe the dynamics of patches that can spread or shrink
without any upper or lower limit on their size. Natural multi-individual
patches can change in size and shape depending on the interaction among the
individual plants within them, whereas the size of single-plant patches is
subject to stronger constraints (they usually grow, not shrink, and their
maximum size is bounded by plant physiology). Therefore, PDEMs represent a
simplification of the biological reality that is more accurate in aggregated
than in segregated patterns. Because we are only considering aggregated
patterns, we will focus our review of the mathematical literature on PDEMs.
### 3.1 Reaction-diffusion SDF models
In 1952, Turing showed that differences in the diffusion coefficients of two
reacting chemicals can lead to the formation of stable spatial heterogeneities
in their concentration (Turing,, 1952). In Turing’s original model, one of the
chemicals acts as an activator and produces both the second chemical and more
of itself via an autocatalytic reaction. The second substance inhibits the
production of the activator and therefore balances its concentration (Fig.
3a). Spatial heterogeneities in the concentrations can form if the inhibitor
diffuses much faster than the activator, so that it inhibits the production of
the activator at a long range and confines the concentration of the activator
locally (Fig. 3b). This activation-inhibition principle thus relies on a SDF
to produce patterns: positive feedbacks (autocatalysis) dominate on short
scales and negative, inhibitory feedbacks dominate on larger scales. In the
context of vegetation pattern formation, plant biomass acts as the self-
replicating activator. Water is a limiting resource and thus water scarcity is
an inhibitor of vegetation growth (Rietkerk and van de Koppel,, 2008; Meron,,
2012).
Figure 3: a) Schematic of the Turing activation-inhibition principle. The
activator, with diffusion coefficient $D_{a}$, produces the inhibitor at rate
$k_{ai}$ as well as more of itself at rate $k_{aa}$ through an autocatalytic
reaction. The inhibitor degrades the activator at rate $k_{ia}$ and diffuses
at rate $D_{i}>D_{a}$. b) Schematic of the pattern-forming process in a one-
dimensional system.
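This activation-inhibition principle can be reproduced in a few lines. The sketch below integrates the classic Gierer-Meinhardt activator-inhibitor equations, used here as an illustrative stand-in for the schematic in Fig. 3 (not a model from this review); all parameter values are assumptions, chosen only so that the inhibitor diffuses much faster than the activator:

```python
import numpy as np

# Illustrative 1D Gierer-Meinhardt activator-inhibitor system (assumed
# parameters; not taken from any model reviewed here):
#   da/dt = a^2/i - a       + D_a lap(a)   # autocatalysis, degraded by inhibitor
#   di/dt = a^2   - mu_i*i  + D_i lap(i)   # activator produces the inhibitor
N, dx, dt, steps = 256, 0.5, 0.02, 7500
D_a, D_i, mu_i = 0.05, 1.0, 1.5            # inhibitor diffuses 20x faster

def lap(f):                                 # periodic 1D Laplacian
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

rng = np.random.default_rng(0)
a = mu_i + 0.01 * rng.standard_normal(N)    # homogeneous state: a* = i* = mu_i
i = mu_i * np.ones(N)

for _ in range(steps):                      # explicit Euler, old values on RHS
    a, i = (a + dt * (a**2 / np.maximum(i, 1e-9) - a + D_a * lap(a)),
            i + dt * (a**2 - mu_i * i + D_i * lap(i)))

print(f"activator range: {a.min():.2f} .. {a.max():.2f}")
```

Starting from small noise around the uniform state, the fast-diffusing inhibitor suppresses activator growth at long range, and a spatially periodic activator profile emerges, exactly the confinement sketched in Fig. 3b.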
#### 3.1.1 Two-equation water-vegetation dynamics: the generalized Klausmeier
model
The Klausmeier model was initially formulated to describe the formation of
stripes of vegetation in sloping landscapes (Fig. 1d); subsequent studies have
generalized it to flat surfaces (Kealy and Wollkind,, 2011; Bastiaansen et
al.,, 2018; Eigentler and Sherratt,, 2020). Mathematically, the generalized
version of the Klausmeier model is given by the following equations:
$\displaystyle\frac{\partial w({\bm{r}},t)}{\partial t}$ $\displaystyle=$
$\displaystyle
R-l\,w({\bm{r}},t)-a\,g\left(w\right)f\left(v\right)v({\bm{r}},t)+D_{w}\mathrm{\nabla}^{2}w({\bm{r}},t),$
(1) $\displaystyle\frac{\partial v({\bm{r}},t)}{\partial t}$ $\displaystyle=$
$\displaystyle
a\,q\,g\left(w\right)f\left(v\right)v({\bm{r}},t)-m\,v({\bm{r}},t)+D_{v}\mathrm{\nabla}^{2}v({\bm{r}},t),$
(2)
where $w({\bm{r}},t)$ and $v({\bm{r}},t)$ represent water concentration and
density of vegetation biomass at location ${\bm{r}}$ and time $t$,
respectively. In Eq. (1), water is continuously supplied at a precipitation
rate $R$, and its concentration decreases due to physical losses such as
evaporation, occurring at rate $l$ (second term), and local uptake by plants
(third term). In the latter, $a$ is the plant absorption rate, $g(w)$
describes the dependence of vegetation growth on water availability, and
$f(v)$ is an increasing function of vegetation density that represents the
positive effect that the presence of plants has on water infiltration.
Finally, water diffuses with a diffusion coefficient $D_{w}$. Similarly, Eq.
(2) accounts for vegetation growth due to water uptake (first term), plant
mortality at rate $m$ (second term), and plant dispersal (last term). In the
plant growth term, the parameter $q$ represents the yield of plant biomass per
unit of consumed water; although the original model assumed, for mathematical
simplicity, a linear absorption rate and plant response to water
($g(w)=w({\bm{r}},t)$ and $f(v)=v({\bm{r}},t)$), other biologically-plausible
choices can account for processes such as saturation in plant growth due to
intraspecific competition (Eigentler,, 2020).
The generalized Klausmeier model with linear absorption rate and plant
responses to water has three spatially-uniform equilibria obtained from the
fixed points of Eqs. (1)-(2): an unvegetated state $(v^{*}=0,w^{*}=R/l)$,
stable for any value of the rainfall parameter $R$; and two states in which
vegetation and water coexist at different non-zero values. Only the vegetated
state with higher vegetation biomass is stable against non-spatial
perturbations, and only for a certain range of values of $R$. The latter
suffices to guarantee bistability, that is, the presence of alternative stable
states (vegetated vs unvegetated), and hysteresis. For spatial perturbations,
however, the stable vegetated state becomes unstable within a range of $R$,
and the system develops spatial patterns.
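For the linear choices $g(w)=w$ and $f(v)=v$, these equilibria can be written in closed form. The sketch below (parameter values are illustrative assumptions, not calibrated to any ecosystem) solves the fixed-point conditions of Eqs. (1)-(2) and verifies that the two vegetated branches appear only above a rainfall threshold $R_{c}=(2m/q)\sqrt{l/a}$:

```python
import numpy as np

# Spatially uniform equilibria of the generalized Klausmeier model,
# Eqs. (1)-(2), with g(w) = w and f(v) = v.  Parameters are illustrative.
a, q, l, m = 1.0, 1.0, 1.0, 0.5     # uptake, yield, evaporation, mortality

def rhs(w, v, R):
    dw = R - l * w - a * w * v * v              # Eq. (1) without diffusion
    dv = a * q * w * v * v - m * v              # Eq. (2) without diffusion
    return dw, dv

def equilibria(R):
    """Unvegetated state plus the two vegetated roots (when real)."""
    states = [(R / l, 0.0)]                     # (w*, v*) = (R/l, 0)
    # v != 0: a*q*w*v = m  =>  w = m/(a*q*v); the water balance then gives
    # (m/q) v^2 - R v + l*m/(a*q) = 0
    disc = R * R - 4.0 * l * m * m / (a * q * q)
    if disc >= 0.0:
        for sgn in (+1.0, -1.0):
            v = q * (R + sgn * np.sqrt(disc)) / (2.0 * m)
            states.append((m / (a * q * v), v))
    return states

R_c = 2.0 * m * np.sqrt(l / a) / q              # rainfall threshold (here 1.0)
for w, v in equilibria(R=1.5):                  # above R_c: three equilibria
    dw, dv = rhs(w, v, 1.5)
    assert abs(dw) < 1e-12 and abs(dv) < 1e-12
assert len(equilibria(R=0.8)) == 1              # below R_c only the desert state
```

The check confirms the structure described in the text: a desert state for all $R$, and a pair of vegetated states that exist only when rainfall exceeds the threshold.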
#### 3.1.2 Three-equation water-vegetation dynamics: the Rietkerk model
The Rietkerk model extends the generalized Klausmeier model by splitting Eq.
(1) into two equations (one for surface water, and another one for soil water)
and including a term that represents water infiltration. Moreover, the
functions that represent water uptake and infiltration are nonlinear, which
introduces additional feedbacks between vegetation, soil water, and surface
water. The model equations are as follows:
$\displaystyle\frac{\partial u(\mathbf{r},t)}{\partial t}$ $\displaystyle=$
$\displaystyle
R-\alpha\frac{v(\mathbf{r},t)+k_{2}\,w_{0}}{v(\mathbf{r},t)+k_{2}}u(\mathbf{r},t)+D_{u}\nabla^{2}u(\mathbf{r},t)$
(3) $\displaystyle\frac{\partial w(\mathbf{r},t)}{\partial t}$
$\displaystyle=$
$\displaystyle\alpha\frac{v(\mathbf{r},t)+k_{2}\,w_{0}}{v(\mathbf{r},t)+k_{2}}u(\mathbf{r},t)-g_{m}\,\frac{v(\mathbf{r},t)\,w(\mathbf{r},t)}{k_{1}+w(\mathbf{r},t)}-\delta_{w}\,w(\mathbf{r},t)+D_{w}\nabla^{2}w(\mathbf{r},t)$
(4) $\displaystyle\frac{\partial v(\mathbf{r},t)}{\partial t}$
$\displaystyle=$ $\displaystyle
c\,g_{m}\,\frac{v(\mathbf{r},t)\,w(\mathbf{r},t)}{k_{1}+w(\mathbf{r},t)}-\delta_{v}\,v(\mathbf{r},t)+D_{v}\nabla^{2}v(\mathbf{r},t)$
(5)
where $u(\mathbf{r},t)$, $w(\mathbf{r},t)$, and $v(\mathbf{r},t)$ are the
density of surface water, soil water, and vegetation, respectively. In Eq.
(3), $R$ is the mean annual rainfall, providing a constant supply of water to
the system; the second term accounts for infiltration; and the diffusion term
accounts for the lateral circulation of water on the surface. In Eq. (4), the
first term represents the infiltration of surface water into the soil, which
is enhanced by the presence of plants; the second term represents water
uptake; the third one accounts for physical losses of soil water, such as
evaporation; and the diffusion term describes the lateral circulation of
underground water. Finally, the first term in Eq. (5) represents vegetation
growth due to the uptake of soil water, which is a function that saturates for
high water concentrations; the second term accounts for biomass loss at
constant rate due to natural death or external hazards; and the diffusion term
accounts for plant dispersal. The meaning of each parameter in the equations,
together with the values used in Rietkerk et al., (2002) for their numerical
analysis, are provided in Table 1.
In the spatially uniform case, this model allows for two different steady
states: a vegetated state in which vegetation, soil water, and surface water
coexist at non-zero values; and an unvegetated (i.e., desert) state in which
only soil water and surface water are non-zero. Considering the
parameterization in Table 1, the stability of each of these states switches at
$R=1$. For $R<1$, only the unvegetated equilibrium is stable against non-
spatial perturbations, whereas for $R>1$ the vegetated equilibrium becomes
stable and the desert state, unstable. When allowing for spatial
perturbations, numerical simulations using the parameterization in Table 1
show the existence of spatial patterns within the interval $0.7\lesssim
R\lesssim 1.3$, which is in agreement with analytical approximations (Gowda et
al.,, 2016). Within this range of mean annual rainfall, the patterns
sequentially transition from gaps to labyrinths to spots with increasing
aridity. For $R\approx 0.7$, the system transitions abruptly from the spotted
pattern to the desert state.
Symbol | Parameter | Value
---|---|---
$c$ | Water-biomass conversion factor | $10$ (g mm$^{-1}$ m$^{-2}$)
$\alpha$ | Maximum infiltration rate | $0.2$ (day$^{-1}$)
$g_{m}$ | Maximum uptake rate | $0.05$ (mm g$^{-1}$ m$^{2}$ day$^{-1}$)
$w_{0}$ | Water infiltration in the absence of plants | $0.2$ (-)
$k_{1}$ | Water uptake half-saturation constant | $5$ (mm)
$k_{2}$ | Saturation constant of water infiltration | $5$ (g m$^{-2}$)
$\delta_{w}$ | Soil water loss rate | $0.2$ (day$^{-1}$)
$\delta_{v}$ | Plant mortality | $0.25$ (day$^{-1}$)
$D_{w}$ | Soil water lateral diffusion | $0.1$ (m$^{2}$ day$^{-1}$)
$D_{v}$ | Vegetation dispersal | $0.1$ (m$^{2}$ day$^{-1}$)
$D_{u}$ | Surface water lateral diffusion | $100$ (m$^{2}$ day$^{-1}$)
Table 1: Typical parameterization of the Rietkerk model (Rietkerk et al.,,
2002).
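With this parameterization, the vegetated uniform steady state of Eqs. (3)-(5) follows from three successive balances: Eq. (5) fixes the soil-water level, Eq. (4) then fixes the biomass as a function of $R$, and Eq. (3) fixes the surface water. A minimal consistency check under the Table 1 values (not a full spatial simulation):

```python
# Vegetated uniform steady state of the Rietkerk model, Eqs. (3)-(5),
# under the Table 1 parameterization.
c, alpha, g_m, w0 = 10.0, 0.2, 0.05, 0.2
k1, k2, d_w, d_v = 5.0, 5.0, 0.2, 0.25

def vegetated_state(R):
    # Eq. (5): c*g_m*w/(k1 + w) = d_v  =>  w* = k1*d_v/(c*g_m - d_v)
    w = k1 * d_v / (c * g_m - d_v)
    # Eqs. (3)-(4): infiltration = R balances uptake + evaporation
    v = (R - d_w * w) * (k1 + w) / (g_m * w)
    # Eq. (3): R = alpha*u*(v + k2*w0)/(v + k2)  =>  solve for u*
    u = R * (v + k2) / (alpha * (v + k2 * w0))
    return u, w, v

def rhs(u, w, v, R):
    infil = alpha * (v + k2 * w0) / (v + k2) * u
    uptake = g_m * v * w / (k1 + w)
    return (R - infil,                       # Eq. (3), no diffusion
            infil - uptake - d_w * w,        # Eq. (4)
            c * uptake - d_v * v)            # Eq. (5)

u, w, v = vegetated_state(R=1.2)             # R in mm/day
assert all(abs(t) < 1e-12 for t in rhs(u, w, v, 1.2))
print(f"w* = {w:.1f} mm, v* = {v:.1f} g/m^2")
# Below R = d_w*w* = 1 mm/day the vegetated branch turns unphysical (v* < 0),
# consistent with the stability switch at R = 1 described above.
assert vegetated_state(R=0.9)[2] < 0.0
```

The biomass branch $v^{*}=(R-1)/0.025$ vanishes exactly at $R=1$, matching the stability switch of the spatially uniform states discussed in the text.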
The Rietkerk model assumes constant rainfall, homogeneous soil properties, and
only local and short-range processes. Therefore, all the parameters are
constant in space and time, and patterns emerge from SDFs between vegetation
biomass and water availability alone. This simplification is, however, not
valid for most ecosystems. Arid and semi-arid regions feature seasonal
variability in rainfall (Salem et al.,, 1989). Kletter et al., (2009) showed
that, depending on the functional dependence between water uptake and soil
moisture, stochastic rainfall might increase the amount of vegetation biomass
in the ecosystem compared to a constant rainfall scenario. Moreover, the
properties of the soil often change in space. A widespread cause of this
heterogeneity is soil-dwelling macrofauna, such as ants, earthworms, and
termites (Pringle and Tarnita,, 2017). Bonachela et al., (2015) found that
heterogeneity in substrate properties induced by soil-dwelling macrofauna, and
modeled by space-dependent parameters, might interact with SDFs between water
and vegetation and introduce new characteristic spatial scales in the pattern.
Finally, researchers have also extended the Rietkerk model to account for
long-range, nonlocal processes. For example, Gilad et al., (2004) introduced a
nonlocal mechanism in the interaction between vegetation biomass and soil
water of Eqs. (4)-(5) to model the water conduction of lateral roots towards
the plant canopy. As explained in the next section, although this model
accounts for nonlocal processes, it is conceptually very different from
kernel-based models.
### 3.2 Kernel-based SDF models
Kernel-based models are those in which all the feedbacks that control the
interaction between plants are encapsulated in a single nonlocal net
interaction between plants. The nonlocality in the net plant interaction
accounts for the fact that individual (or patches of) plants can interact with
each other within a finite neighborhood. Therefore, the vegetation dynamics at
any point of the space is coupled to the density of vegetation at locations
within the interaction range. Because all feedbacks are collapsed into a net
interaction between plants, kernel-based models do not describe the dynamics
of any type of water and use a single partial integro-differential equation
for the spatiotemporal dynamics of the vegetation. The kernel is often defined
as the addition of two Gaussian functions with different widths, with the
wider function taking negative values to account for the longer range of
competitive interactions (D’Odorico et al.,, 2006) (central panels of Fig. 2).
#### 3.2.1 Models with additive nonlocal interactions
In the simplest kernel-based SDF models, the spatial coupling is introduced
linearly in the equation for the local dynamics (D’Odorico et al.,, 2006),
$\frac{\partial v({\bm{r}},t)}{\partial t}=h\left(v\right)+\int
d{\bm{r}^{\prime}}G\left({\bm{r}^{\prime}};{\bm{r}}\right)\left[v\left({\bm{r}^{\prime}},t\right)-v_{0}\right].$
(6)
The first term describes the local dynamics of the vegetation, i.e., temporal
changes in vegetation density at a location ${\bm{r}}$ due to processes in
which neighboring vegetation does not play any role. The integral term
describes any spatial coupling, i.e., changes in vegetation density at
${\bm{r}}$ due to vegetation density at neighboring locations ${\bm{r}^{\prime}}$.
Assuming spatial isotropy, the kernel function $G({\bm{r}^{\prime}},{\bm{r}})$
decays radially with the distance from the focal location,
$\rvert{\bm{r}^{\prime}}-{\bm{r}}\rvert$, so
$G\left({\bm{r}^{\prime}},{\bm{r}}\right)=G(\rvert{\bm{r}^{\prime}}-{\bm{r}}\rvert)$.
Therefore, the dynamics of the vegetation density is governed by two main
contributions: first, if the spatial coupling is neglected, the vegetation
density increases or decreases locally depending on the sign of $h(v)$ until
reaching a uniform steady state $v_{0}$, solution of $h(v_{0})=0$; second, the
spatial coupling enhances or diminishes vegetation growth depending on the
sign of the kernel function (i.e., whether the spatial interactions affect
growth positively or negatively) and the difference between the local
vegetation density and the uniform steady state $v_{0}$.
Assuming kernels that are positive close to the focal location and negative
far from it (à la SDF), local perturbations in the vegetation density around
$v_{0}$ are locally enhanced if they are larger than $v_{0}$ and attenuated
otherwise. As a result, the integral term destabilizes the homogeneous state
when perturbed, and spatial patterns arise in the system. Long-range growth-
inhibition interactions, together with nonlinear terms in the local-growth
function $h(v)$, avoid the unbounded growth of perturbations and stabilize the
pattern. However, although this mechanism imposes an upper bound to vegetation
density, nothing prevents $v$ from taking unrealistic, negative values. To
avoid this issue, the model must include an artificial bound at $v=0$ such
that vegetation density is reset to zero whenever it becomes negative.
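A minimal numerical sketch of Eq. (6) illustrates both the pattern-forming instability and this artificial bound at $v=0$. The logistic local term $h(v)=v(1-v)$ (so $v_{0}=1$) and the difference-of-Gaussians kernel below are illustrative assumptions in the spirit of D’Odorico et al., (2006), not their exact model:

```python
import numpy as np

# 1D sketch of the additive kernel-based SDF model, Eq. (6): assumed logistic
# local dynamics h(v) = v(1 - v), and an assumed difference-of-Gaussians
# kernel, positive near the focal point and negative farther away
# (short-range facilitation, long-range competition).
N, dx, dt, steps = 256, 0.5, 0.02, 5000
L = N * dx
x = dx * np.arange(N)
d = np.minimum(x, L - x)                     # periodic distance to the origin
G = 2.0 * np.exp(-d**2 / 2.0) - np.exp(-d**2 / 18.0)   # widths sigma = 1, 3
G_hat = np.fft.fft(G)                        # kernel spectrum, reused each step

rng = np.random.default_rng(1)
v0 = 1.0                                     # uniform state, h(v0) = 0
v = v0 + 0.01 * rng.standard_normal(N)

for _ in range(steps):
    # integral term of Eq. (6) as a periodic (circular) convolution
    coupling = np.real(np.fft.ifft(G_hat * np.fft.fft(v - v0))) * dx
    # explicit Euler step plus the artificial bound: reset v < 0 to zero
    v = np.maximum(v + dt * (v * (1.0 - v) + coupling), 0.0)

print(f"vegetation range: {v.min():.2f} .. {v.max():.2f}")
```

Perturbations around $v_{0}$ grow at intermediate wavelengths and saturate through the nonlinear local term, while the clipping step keeps the density from turning negative, as described above.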
#### 3.2.2 Models with multiplicative nonlocal interactions
A less artificial way to ensure that vegetation density always remains positive
is to modulate the spatial coupling with nonlinear terms. For example, the
pioneering model developed by Lefever and Lejeune, (1997) consists of a
modified logistic equation in which each of the terms includes an integral
term to encode long-range spatial interactions,
$\frac{\partial v({\bm{r}},t)}{\partial
t}=\beta\,\left(\omega_{1}*v\right)({\bm{r}},t)\left[1-\frac{\left(\omega_{2}*v\right)({\bm{r}},t)}{K}\right]-\eta\left(\omega_{3}*v\right)({\bm{r}},t)$
(7)
where $\beta$ is the rate at which seeds are produced (a proxy for the number
of seeds produced by each plant) and $\eta$ is the rate at which vegetation
biomass is lost due to spontaneous death and external hazards such as grazing,
fires, or anthropogenic factors (last term). The model assumes spatial
isotropy, and the symbol $*$ indicates a linear convolution operation:
$\left(\omega_{i}*v\right)({\bm{r}},t)=\int
d{\bm{r}}^{\prime}\,\omega_{i}({\bm{r}}-{\bm{r}}^{\prime};\ell_{i})\,v({\bm{r}}^{\prime},t),\qquad
i=1,2,3$ (8)
in which each $\omega_{i}$ is a weighting function with a characteristic
spatial scale $\ell_{i}$ that defines the size of the neighborhood
contributing to the focal process. For instance,
$\omega_{1}({\bm{r}}-{\bm{r}}^{\prime};\ell_{1})$ defines the size of the
neighborhood that contributes to the growth of vegetation biomass at
${\bm{r}}$. Similarly, $\ell_{2}$ defines the scale over which plants inhibit
the growth of their neighbors, and $\ell_{3}$ the scale over which vegetation
density influences the spontaneous death rate of vegetation at the focal
location. Because the sign of the interaction is explicit in each term of Eq.
(7), the convolutions only represent weighted averages of vegetation biomass
and the weighting functions are always positive. In addition, because Lefever
and Lejeune, (1997) set the scale of the inhibitory interactions larger than
the scale of the positive interactions ($\ell_{2}>\ell_{1}$), the model
includes a SDF with short-range facilitation and long-range competition.
Expanding upon this work, several other models have introduced non-linear
spatial couplings via integral terms (Fernandez-Oto et al.,, 2014; Escaff et
al.,, 2015; Berríos-Caro et al.,, 2020), and others have expanded the integral
terms and studied the formation of localized structures of vegetation (Parra-
Rivas and Fernandez-Oto,, 2020).
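Because each $\omega_{i}$ is a normalized, positive weighting function, every convolution in Eq. (7) reduces to the plain density whenever vegetation cover is uniform, so the model collapses to nonspatial logistic growth. The sketch below verifies this with Gaussian weights of assumed widths satisfying $\ell_{2}>\ell_{1}$:

```python
import numpy as np

# Discretized convolutions of Eq. (8) with normalized Gaussian weights on a
# periodic 1D domain.  Widths and rates are illustrative assumptions, with
# l2 > l1 as in Lefever and Lejeune (1997).
N, dx = 512, 0.25
L = N * dx
d = np.minimum(dx * np.arange(N), L - dx * np.arange(N))

def weight(ell):                     # normalized Gaussian weighting function
    w = np.exp(-d**2 / (2.0 * ell**2))
    return w / (w.sum() * dx)        # discrete normalization: sum(w)*dx = 1

def conv(w, v):                      # periodic convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(v))) * dx

beta, eta, K = 1.0, 0.2, 10.0
w1, w2, w3 = weight(1.0), weight(3.0), weight(1.0)   # l1 = l3 = 1, l2 = 3

v = np.full(N, 4.0)                  # uniform vegetation cover, v = 4
growth = beta * conv(w1, v) * (1.0 - conv(w2, v) / K) - eta * conv(w3, v)
expected = beta * 4.0 * (1.0 - 4.0 / K) - eta * 4.0   # nonspatial logistic
assert np.allclose(growth, expected)
```

For non-uniform cover the three convolutions average vegetation over different neighborhoods, and it is the mismatch of scales ($\ell_{2}>\ell_{1}$) that destabilizes the uniform state.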
### 3.3 Kernel-based PCF models.
In previous sections, we invoked the existence of SDFs in the interactions
among plants to explain the emergence of self-organized spatial patterns of
vegetation. Both theoretical and empirical studies, however, have highlighted
the importance of long-range negative feedbacks on pattern formation,
suggesting that short-range positive feedbacks might be secondary actors that
sharpen the boundaries of clusters rather than being key for the instabilities
that lead to the patterns (van de Koppel and Crain,, 2006; Rietkerk and van de
Koppel,, 2008; van de Koppel et al.,, 2008). Following these arguments,
Martínez-García et al., (2013a, 2014) proposed a family of purely competitive
models with the goal of identifying the smallest set of mechanisms needed for
self-organized vegetation patterns to form.
The simplest PCF models consider additive nonlocal interactions (Martínez-
García et al.,, 2014). Alternatively, nonlocal interactions can be represented
through nonlinear functions modulating either the growth or the death terms.
In both cases, the models develop the full sequence of patterns (gaps,
labyrinths, and spots). The model proposed in Martínez-García et al. (2013a)
introduces competition through the growth term:
$\frac{\partial v({\bm{r}},t)}{\partial
t}=P_{\mbox{\tiny{E}}}\left(\widetilde{v},\delta\right)\beta\,v({\bm{r}},t)\left(1-\frac{v({\bm{r}},t)}{K}\right)-\eta\,v({\bm{r}},t),$
(9)
where $\beta$ and $K$ are the seed production rate and the local carrying
capacity, respectively. $\delta$ is the competition-strength parameter, and
$\widetilde{v}\left({\bm{r}},t\right)$ is the average density of vegetation
around the focal position ${\bm{r}}$:
$\widetilde{v}\left({\bm{r}},t\right)=\int
d{\bm{r}}^{\prime}\mathcal{G}\left(\rvert{\bm{r}^{\prime}}-{\bm{r}}\rvert\right)\,v\left({\bm{r}}^{\prime},t\right).$
(10)
where the kernel function $\mathcal{G}$, assumed spatially isotropic, weighs
how vegetation at a location ${\bm{r}^{\prime}}$ contributes to the average
vegetation density around ${\bm{r}}$ and is necessarily positive.
Because it is a weighting function, $\mathcal{G}$ plays the same role and has
the same properties described for the $\omega_{i}$ functions in section 3.2.
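Equation (10) is a standard convolution and can be evaluated efficiently with the FFT. The sketch below is a minimal illustration, not part of the model specification: the one-dimensional grid, the Gaussian shape of $\mathcal{G}$, and the parameter values are all assumptions (the model only requires a positive, isotropic weighting function, here normalized so that a uniform field is averaged onto itself).

```python
import numpy as np

def nonlocal_average(v, dx, sigma):
    """Evaluate Eq. (10) numerically: convolve v with a normalized kernel G.

    The Gaussian is only an illustrative choice; the model requires G to be
    positive and isotropic but does not fix its shape.
    """
    n = v.size
    x = (np.arange(n) - n // 2) * dx          # centered spatial grid
    G = np.exp(-x**2 / (2 * sigma**2))        # positive, symmetric kernel
    G /= G.sum() * dx                         # normalize: integral of G = 1
    # periodic convolution via FFT; ifftshift aligns the kernel's center
    v_tilde = np.real(
        np.fft.ifft(np.fft.fft(v) * np.fft.fft(np.fft.ifftshift(G)))
    ) * dx
    return v_tilde

# Sanity check: a normalized kernel maps a uniform field onto itself.
v = np.full(256, 0.7)
vt = nonlocal_average(v, dx=0.1, sigma=1.0)
```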
The model further assumes that vegetation losses occur at constant rate $\eta$
and vegetation grows through a three-step sequence of seed production, local
dispersal, and establishment (Calabrese et al.,, 2010), represented by the
three factors that contribute to the first term in Eq. (9). First, plants
produce seeds at a constant rate $\beta$, which leads to a growth term
$\beta v({\bm{r}},t)$. Second, seeds disperse locally and compete for space,
which defines a local carrying capacity $K$. Third, plants compete for
resources with other plants, which is modeled using a plant establishment
probability, $P_{\mbox{\tiny{E}}}$. Because the only long-range interaction in
the model is root-mediated interference, and competition for resources is more
intense in more crowded environments, $P_{\mbox{\tiny{E}}}$ is a monotonically decreasing
function of the nonlocal vegetation density $\tilde{v}({\bm{r}},t)$ defined in
Eq. (10). Moreover, $P_{\mbox{\tiny{E}}}$ also depends on the competition-
strength parameter, $\delta$, representing resource limitation. In the limit
$\delta=0$, resources are abundant, competition is weak, and
$P_{\mbox{\tiny{E}}}=1$. Conversely, in the limit $\delta\rightarrow\infty$,
resources are very scarce, competition is very strong, and
$P_{\mbox{\tiny{E}}}\rightarrow 0$.
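These limiting behaviors pin down only the qualitative shape of $P_{\mbox{\tiny{E}}}$. A minimal sketch, assuming an exponential form (a simple function with the stated properties, not one prescribed here):

```python
import numpy as np

def establishment_probability(v_tilde, delta):
    """Hypothetical establishment probability: monotonically decreasing in
    the nonlocal density v_tilde, equal to 1 when delta = 0 (no resource
    limitation) and vanishing as delta grows (severe resource limitation)."""
    return np.exp(-delta * v_tilde)

# The two limits discussed in the text:
weak = establishment_probability(0.5, 0.0)      # delta = 0  -> P_E = 1
strong = establishment_probability(0.5, 1e3)    # delta >> 1 -> P_E ~ 0
```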
In PCF models, spatial patterns form solely due to long-range competition.
If patches are far enough from each other, individuals attempting to establish
in between patches compete with vegetation from more than one adjacent patch,
whereas individuals within a patch only interact with plants in that same
patch. As a result, competition is more intense in the regions between patches
than inside each patch, which stabilizes an aggregated pattern of vegetation
(Fig. 4) whose shape (gaps, labyrinths, or spots) depends on the model
parameterization. This same mechanism has been suggested to drive the
formation of clusters of competing species in the niche space (Scheffer and
van Nes,, 2006; Pigolotti et al.,, 2007; Hernández-García et al.,, 2009; Fort
et al.,, 2009; Leimar et al.,, 2013) and the aggregation of individuals in
models of interacting particles with density-dependent reproduction rates
(Hernández-García and López,, 2004).
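This exclusion-region mechanism has a compact mathematical fingerprint: in kernel-based competition models, a uniform state can only destabilize at a finite wavelength if the Fourier transform of the kernel becomes negative at some wavenumber, which holds for finite-range (e.g., top-hat) kernels but not for Gaussians. The numerical check below is our own illustration; the grid and kernel parameters are assumed values.

```python
import numpy as np

def kernel_spectrum(G, dx):
    """Fourier transform of a symmetric, normalized kernel sampled on a
    centered periodic grid (real-valued by symmetry)."""
    return np.real(np.fft.fft(np.fft.ifftshift(G))) * dx

n, dx = 2048, 0.05
x = (np.arange(n) - n // 2) * dx

# Finite-range (top-hat) competition kernel of radius 1 (assumed value):
tophat = (np.abs(x) <= 1.0).astype(float)
tophat /= tophat.sum() * dx

# Gaussian kernel of unit width, for comparison:
gauss = np.exp(-x**2 / 2)
gauss /= gauss.sum() * dx

# Patterns require the spectrum to dip below zero at some wavenumber.
can_pattern_tophat = kernel_spectrum(tophat, dx).min() < -1e-6   # True
can_pattern_gauss = kernel_spectrum(gauss, dx).min() < -1e-6     # False
```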
Figure 4: In PCF models, patchy distributions of vegetation in which the
distance between patches is between one and two times the range of the
nonlocal interactions are stable. Individuals within each patch only compete
with the individuals in that patch (a,b), whereas individuals in between
patches compete with individuals from both patches (c). Color code: green
trees are focal individuals, and dashed circles limit the range of interaction
of the focal individual. Dark gray is used for individuals that interact with
the focal one, whereas light gray indicates individuals that are out of the
range of interaction of the focal individual.
## 4 Self-organized patterns as indicators of ecological transitions
Models using different forms for the net biotic interaction between neighbor
patches (SDF vs PCF) have successfully reproduced qualitatively the spatial
patterns of vegetation observed in water-limited ecosystems (Deblauwe et al.,,
2011). All these different models also predict that the spotted pattern
precedes a transition to an unvegetated state, which positions it as an early-
warning indicator of desertification transitions (Scheffer et al.,, 2009;
Dakos et al.,, 2011). Despite the similar prediction, models with different
underlying mechanisms for the formation of these patterns result in different
desertification processes.
The Rietkerk model from section 3.1.2, for example, predicts that self-
organized ecosystems eventually collapse following an abrupt transition that
includes a hysteresis loop (Fig. 5a). Abrupt transitions such as this one are
typical of bistable systems in which the stationary state depends on the
environmental and the initial conditions. Bistability is a persistent feature
of models for vegetation pattern formation, sometimes occurring also in
transitions between patterned states (von Hardenberg et al.,, 2001). It also
denotes the existence of thresholds in the system that trigger sudden, abrupt
responses in its dynamics. These thresholds are often created by positive
feedbacks or quorum-regulated behaviors, as is the case in populations subject
to strong Allee effects (Courchamp et al.,, 1999). In the Rietkerk model, as
rainfall decreases, the spatial distribution of vegetation moves through the
gapped-labyrinthine-spotted sequence of patterns (Fig. 5a). However, when the
rainfall crosses a threshold value, the system responds abruptly and all
vegetation dies. Setting up the simulations as indicated in Rietkerk et al.,
(2002) and using the parameterization of Table 1, this threshold is located at
$R\approx 0.55$ mm day$^{-1}$. Once the system reaches this unvegetated state,
increasing water availability does not allow vegetation recovery until
$R\approx 0.70$ mm day$^{-1}$, which results in a hysteresis loop and a region of
bistability ($R\in[0.55,0.70]$ mm day$^{-1}$ in Fig. 5a). Bistability and hysteresis loops
make abrupt, sudden transitions like this one extremely hard to revert. Hence,
anticipating such abrupt transitions is crucial from a conservation and
ecosystem-management point of view (Scheffer et al.,, 2009; Dakos et al.,,
2011).
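The mechanics of such a loop can be illustrated with a deliberately minimal, hypothetical one-variable caricature (not the Rietkerk model): $dv/dt = R\,v(1-v) - m\,v/(v+a)$, where growth depends on a rainfall-like parameter $R$ and losses saturate at low biomass. Sweeping $R$ down and then up traces two different stable branches over an intermediate window, which is exactly the bistability signature described above. All parameter values below are assumptions for illustration.

```python
import numpy as np

def sweep(R_values, v0, dt=0.01, T=50.0, m=0.5, a=0.1, seed=1e-6):
    """Quasi-static sweep of dv/dt = R v (1 - v) - m v / (v + a).

    A hypothetical caricature of bistability, not the Rietkerk model; 'seed'
    is a tiny floor standing in for seed rain, so the bare state can be
    re-invaded once it loses stability.
    """
    v, out = v0, []
    for R in R_values:
        for _ in range(int(T / dt)):     # relax toward the current attractor
            v += dt * (R * v * (1 - v) - m * v / (v + a))
            v = max(v, seed)
        out.append(v)
    return np.array(out)

Rs = np.round(np.linspace(6.0, 0.5, 56), 2)      # rainfall-like parameter
v_down = sweep(Rs, v0=0.9)                       # down sweep: 6.0 -> 0.5
v_up = sweep(Rs[::-1], v0=v_down[-1])[::-1]      # up sweep:   0.5 -> 6.0

i = list(Rs).index(3.0)   # a value inside the bistable window
# vegetated on the way down, bare on the way up: a hysteresis loop
```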
Extended versions of the Rietkerk model have suggested that the interaction
between vegetation and other biotic components of the ecosystem may change the
transition to the unvegetated state. Specifically, Bonachela et al., (2015)
suggested that soil-dwelling termites, in establishing their nests (mounds),
engineer the chemical and physical properties of the soil in a way that turns
the abrupt desertification into a two-step process (Fig. 5b). At a certain
precipitation level ($R\approx 0.75$ mm day$^{-1}$ using the parameterization in
Table 1 and the same initial condition used for the original Rietkerk model),
vegetation dies in most of the landscape (T1 in Fig. 5b) but persists on the
mounds due to improved properties for plant growth created by the termites.
On-mound vegetation survives even if precipitation continues to decline, and
is finally lost at a rainfall threshold $R\approx 0.35$ mm day$^{-1}$ (T2 in Fig.
5b). As a consequence of the two-step transition, the ecosystem collapse is
easier to prevent because a bare soil matrix with vegetation only on mounds
serves as an early-warning signal of desertification, and it is easier to
revert since termite-induced heterogeneity breaks the large hysteresis loop of
the original model into two smaller ones (compare the hysteresis loops in
Figs. 5a and 5b).
On the other hand, the PCF model in Martínez-García et al. (2013a)
discussed in section 3.3 predicts a smooth desertification in which vegetation
biomass decreases continuously in response to a decreasing seed production rate
(a proxy for worsening environmental conditions). It is important to note that
considering only local vegetation dispersal, and thus not introducing a
diffusive term in Eq. (9), is also one of the assumptions leading to this
smooth transition.
precipitation declines, with vegetation biomass decreasing until it eventually
disappears (Fig. 5c). As opposed to the catastrophic shift described for the
Rietkerk model, smooth transitions such as the one depicted by this model do
not show bistability and do not feature hysteresis loops. This difference has
important socio-ecological implications, because it enables easier and more
affordable management strategies to restore the ecosystem after the collapse
(Villa Martín et al.,, 2015). Moreover, continuous transitions are also more
predictable because the density of vegetation is univocally determined by the
control parameter (seed production rate $\beta$ in Fig. 5c).
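The continuous character of this transition is easy to verify on the spatially uniform limit of Eq. (9). The sketch below assumes an exponential $P_{\mbox{\tiny{E}}}$ (an illustrative choice with the limits stated in section 3.3) and solves the uniform steady state $\beta\,P_{\mbox{\tiny{E}}}(v,\delta)(1-v/K)=\eta$ by bisection: biomass rises continuously from zero at $\beta=\eta$, with no jump and no hysteresis.

```python
import numpy as np

def steady_biomass(beta, eta=1.0, delta=1.0, K=1.0):
    """Spatially uniform steady state of Eq. (9), assuming P_E = exp(-delta v).

    The exponential P_E is an illustrative choice (monotonically decreasing,
    equal to 1 for delta = 0), not prescribed by the text. Returns 0 below
    the extinction threshold beta = eta.
    """
    f = lambda v: beta * np.exp(-delta * v) * (1 - v / K) - eta
    if f(0.0) <= 0.0:
        return 0.0                    # bare state: growth cannot offset death
    lo, hi = 0.0, K                   # bisection bracket: f(lo) > 0 > f(hi)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

betas = np.linspace(0.5, 4.0, 200)
biomass = np.array([steady_biomass(b) for b in betas])
# biomass rises continuously from zero at beta = eta: no jump, no hysteresis
```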
Figure 5: Although different models for vegetation pattern formation may
recover the same sequence of gapped-labyrinthine-spotted patterns from
different mechanisms, the type of desertification transition that follows the
spotted pattern strongly depends on the model ingredients. a) Abrupt
desertification as predicted by the Rietkerk model (Rietkerk et al.,, 2002).
b) Two-step desertification process as predicted in Bonachela et al., (2015)
when soil-dwelling insects are added to the Rietkerk model. c) Progressive
desertification as predicted by the PCF model introduced in Martínez-García
et al. (2013a). For each panel, numerical simulations were conducted using
the same setup described in the original publications.
In the three scenarios discussed above, patterns have tremendous potential for
ecosystem management as an inexpensive and reliable early indicator of
ecological transitions (Scheffer et al.,, 2009; Dakos et al.,, 2011). However,
the different predictions that models make about the transition highlight the
need for tailored models that not only reproduce the observed patterns but do
so through the mechanisms relevant to the focal system. We have shown that
widespread spotted patterns can form in models accounting for very different
mechanisms (Fig. 5). Because ecosystems are highly complex, it is very likely
that spotted patterns observed in different regions emerge from very different
mechanisms (or combinations of them) and thus anticipate very different
transitions (Kéfi et al., 2007a, ; Kéfi et al., 2007b, ; Corrado et al.,,
2015; Weissmann et al.,, 2017). Therefore, a reliable use of spotted patterns
as early warning indicators of ecosystem collapse requires (i) mechanistic
models that are parameterized and validated by empirical observations of both
mechanisms and patterns; (ii) quantitative analyses of field observations; and
(iii) manipulative experiments.
## 5 Testing models for vegetation self-organization in the field
As depicted throughout this review, the study of self-organized vegetation
patterns has been mostly theoretical, and empirical evidence for the self-
organization hypotheses explaining the formation of vegetation spatial
patterns is much less widespread. In this section, we explore the possible
empirical approaches to understanding the mechanisms responsible for the
formation and persistence of self-organized aggregated patterns and propose a
two-step protocol to this end.
At least three possible approaches exist to study existing self-organization
hypotheses for pattern formation in natural systems. First, the observation
and measurement of the spatial structure of patterns from aerial and satellite
photography (the observational approach). Second, the assessment of the net
interactions among plants or plant patches as a function of the distance
between them (the net-interaction approach). Third, the investigation of the
specific mechanisms underpinning that interaction (the selective mechanistic
approach). The observational approach has been relatively common, and is the
one that has motivated the development of most existing models (Borgogno et
al.,, 2009; Wu et al.,, 2000). Nevertheless, without a more detailed
understanding of the focal systems, one cannot rule out that the agreement
between model predictions and natural observations is coincidental. The
selective mechanistic approach has also been relatively common: some studies
have investigated the specific mechanisms potentially leading to pattern
formation in tiger bush banded vegetation in Niger (Valentin and Poesen,,
1999), vegetation rings in Israel (Sheffer et al.,, 2011; Yizhaq et al.,,
2019), or fairy circles in Namibia (Ravi et al.,, 2017) and Australia (Getzin
et al.,, 2021). To our knowledge, the net-interaction approach is conspicuous
by its absence: researchers have scarcely measured directly the net
interaction among plants or vegetation patches, or its variation with the
inter-plant distance, in patterned ecosystems.
Because we are discussing here aggregated patterns, a first step in such an
approach is to confirm that vegetation patches are formed by several
aggregated individuals. In the case of segregated patterns, we recommend the
use of point pattern analysis under the hypothesis of the dominance of
competition forces (Franklin,, 2010). Following this preliminary test, we
propose a two-step protocol to conduct future field research on the emergence
and maintenance of vegetation spatial patterns.
First step: phenomenological investigation of the net interaction among plants
within the vegetated patch and in bare soil. This first step is needed because
a myriad of alternative mechanisms can explain the formation of spatial
patterns, some of them well aligned with the idea of vegetation self-
organization and others dependent on external biological or geological
factors. For example, in the case of fairy circles researchers have explored
the role of higher evaporation (Vlieghe and Picker,, 2019) and increased
termite activity (Juergens,, 2013) within the circles; spatial heterogeneity
in hydrological processes, such as increased infiltration in the circles of
bare soil (Ravi et al.,, 2017) or increased water runoff in the circles and in
the matrix (Getzin et al.,, 2021); and the geological emanation of toxic gases
(Naudé et al.,, 2011) and the presence of allelochemical substances (Meyer et
al.,, 2020) in the circles. By using a phenomenological approach as a first
step, researchers can discard many of these potential mechanisms, directing
their efforts towards more specific hypotheses in a second step. For this
first step, a simple experimental setup, based on mainstream methods to
measure plant biotic interactions, would reveal whether a PCF, an SDF, or
neither is responsible for the emergent pattern (Armas et al.,, 2004). Our
proposed experiment would compare a fitness proxy (e.g., growth, survival) of
experimental plants planted in the system under study, where we observe a
regular vegetation pattern and we assume that the vegetation dynamics has
reached a stationary state for the location-specific environmental
conditions. Each experimental block would consist of a plant
growing under-canopy (Fig. 6a), a plant growing in bare soil (i.e., between
two vegetation patches) (Fig. 6b), and a control plant growing in the same
ecosystem but artificially isolated from the interaction with pattern-forming
individuals (Fig. 6c). To isolate control plants from canopy interactions,
they need to be planted in bare-soil areas. To isolate them from below-ground
competition, one can excavate narrow, deep trenches in which a root barrier
can be inserted (Morgenroth,, 2008). To isolate them from the competition for
runoff water with the vegetation patches, root barriers should protrude a few
centimeters from the soil, preventing precipitation water from leaving the
area and runoff water from entering. Comparing the fitness proxy of the control plant
with that of plants growing in vegetation patches or bare soil reveals the
sign and strength of the net biotic interaction. By replicating this
experimental block, we can statistically determine whether the pattern
formation results from a SDF, a PCF, or whether it involves a different
process. The SDF hypothesis would be validated if a predominantly positive net
interaction is observed under the canopy, and a negative interaction is
observed in bare soils. Conversely, the PCF hypothesis would be validated if we
observe a negative net interaction both in bare soils and under the canopy (see Table 2). Any
other outcome in the spatial distribution of the sign of the net interaction
between plants would suggest that other mechanisms are at play, which could
include the action of different ecosystem components, such as soil-dwelling
macrofauna (Tarnita et al.,, 2017), or abiotic factors, such as micro-
topography.
Figure 6: Schematic representation of a simple experimental setup to test in the field whether the mechanism of spatial patterning is a purely competitive feedback (PCF) or a classic scale-dependent feedback (SDF). Plant (a) is an experimental plant growing under the canopy, (b) is a plant growing in bare soil, and (c) is a control plant growing in the same environment but isolated from the biotic interaction by root barriers in bare-soil areas.
Under canopy vs control | Bare soil vs control | Outcome
---|---|---
$0/-$ | $-\ \ -$ | Purely competitive feedback
$+$ | $-$ | Scale-dependent feedback
Table 2: Testing the PCF versus SDF hypotheses in the experimental setup
introduced in Fig. 6. Double signs indicate stronger intensity. Indexes to
calculate the sign of the net interaction can be taken from Armas et al.,
(2004).
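The decision logic of Table 2 can be written down explicitly. The function below is our own encoding: its arguments are the qualitative signs obtained by comparing experimental plants against controls (the quantitative indexes of Armas et al., (2004) would be computed first), and any combination outside the two hypothesized rows points to other mechanisms.

```python
def diagnose_feedback(under_canopy, bare_soil):
    """Map the signs of the net biotic interaction (Table 2) to a candidate
    feedback. Arguments are '+', '0', or '-' for the comparison of
    under-canopy and bare-soil plants against isolated controls; under the
    PCF hypothesis, Table 2 additionally expects the bare-soil competition
    to be the stronger of the two."""
    if under_canopy in ("0", "-") and bare_soil == "-":
        return "purely competitive feedback (PCF)"
    if under_canopy == "+" and bare_soil == "-":
        return "scale-dependent feedback (SDF)"
    return "other mechanisms at play"

print(diagnose_feedback("+", "-"))   # scale-dependent feedback (SDF)
```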
Second step: direct measurement of the biophysical processes responsible for
the pattern. After confirming a PCF or an SDF, finding an alternative spatial
distribution of plant interactions, or rejecting the self-organization
hypothesis altogether, a second experimental step would test the specific
biophysical mechanisms responsible for the measured interaction and driving
the spatial pattern. For example, PCF models hypothesize that spatial patterns are driven
by long-range below-ground competition for a limiting resource through the
formation of exclusion regions. As discussed in section 3.3, these exclusion
regions are territories between patches of vegetation in which the intensity
of competition is higher than within the patch (van de Koppel and Crain,,
2006), possibly because they present a higher density of roots (Fig. 4)
(Martínez-García et al., 2013a, ; Martínez-García et al.,, 2014). To test for
the existence of exclusion regions and confirm whether below-ground
competition is driving the spatial pattern, researchers can measure the
changes in root density using coring devices (Cabal et al.,, 2021) across soil
transects going from the center of a vegetated patch to the center of a bare
soil patch. Field tests and manipulative experiments to confirm that SDFs are
responsible for vegetation patterns are not easy to perform, but researchers
can conduct a handful of analyses. For example, ecohydrological SDF models assume
that water infiltration is significantly faster in vegetation patches than in
bare soil areas (Rietkerk et al.,, 2002). Many empirical researchers have
tested this assumption in patterned vegetation using mini disk infiltrometers
to quantify unsaturated hydraulic conductivity, dual head infiltrometers to
measure saturated hydraulic conductivity, or buried moisture sensors connected
to data loggers to record volumetric soil moisture content (Yizhaq et al.,,
2019; Ravi et al.,, 2017; Getzin et al.,, 2021; Cramer and Barger,, 2013).
These measurements should show higher infiltration rates and moisture under vegetated
patches than in bare soil. Note, however, that infiltration rates might be
very hard to measure due to small-scale soil heterogeneities. Ecohydrological
models make other assumptions that are less often considered in the field. For
instance, they assume that the lateral transport of water is several orders of
magnitude larger than vegetation diffusion (i.e., patch growth speed), which
might be true or not depending on soil properties. To test these assumptions,
field researchers need to measure the intensity of the water runoff and
compare it to a measure of the lateral growth rate of vegetation patches.
Water runoff is very challenging to measure directly, but estimates can be
calculated using the infiltration rates obtained with infiltrometers (Cook,,
1946). The lateral growth rate of vegetation patches can be estimated based on
drone or satellite images repeated over time (Trichon et al.,, 2018).
Combining measures of both water runoff and the expansion rates of vegetation
patches, one can estimate approximate values for the ratio between the two
metrics.
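As a rough arithmetic sketch of that comparison (all numbers below are hypothetical placeholders, not field data): runoff can be approximated, in the spirit of the infiltration approach, as rainfall intensity in excess of the soil's infiltration capacity, and the lateral growth rate follows from the change in patch radius between repeated images.

```python
def runoff_rate(rain_mm_h, infiltration_mm_h):
    """Infiltration-excess estimate: rainfall beyond the soil's infiltration
    capacity runs off; zero if the soil absorbs it all."""
    return max(rain_mm_h - infiltration_mm_h, 0.0)

def lateral_growth_rate(radius_t0_m, radius_t1_m, years):
    """Patch expansion speed from radii measured in two repeated images."""
    return (radius_t1_m - radius_t0_m) / years

# Hypothetical placeholder values, for illustration only:
runoff = runoff_rate(rain_mm_h=20.0, infiltration_mm_h=8.0)      # 12.0 mm/h
growth = lateral_growth_rate(radius_t0_m=3.0, radius_t1_m=3.6,
                             years=2.0)                          # 0.3 m/year
```

Converting both quantities into comparable effective transport scales (for instance, equivalent diffusivities) requires additional site-specific assumptions about slope, storm duration, and soil properties, which is precisely where the field measurements enter.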
## 6 Conclusions and future lines of research
As our ability to obtain and analyze large, high-resolution images of the
Earth’s surface increases, more examples of self-organized vegetation patterns
are found in water-limited ecosystems. Here, we have reviewed different
modeling approaches employed to understand the mechanistic causes and the
predicted consequences of those patterns. We have discussed how different
models, relying on different mechanisms, can successfully reproduce the
patterns observed in natural systems, yet each of these models
predicts very different ecosystem-level consequences of the emergent pattern.
This discrepancy limits the utility of the patterns as applied ecological
tools in the absence of explicit knowledge of underlying mechanisms. To solve
this issue, we claim that models need to move from their current
phenomenological formulation towards a more system-specific and mechanistic
one. This new approach must necessarily focus on isolating the system-
specific, key feedbacks for vegetation self-organization. We identify two main
directions for future research to develop this new approach to vegetation
pattern formation.
First, biologically-grounded studies should aim to combine system-specific
models with empirical measures of vegetation-mediated feedbacks. Existing
models for vegetation self-organization are mostly phenomenological and are
only validated qualitatively via the visual comparison of simulated and
observed (macroscopic) patterns. Experimental measures of the (microscopic)
processes and feedbacks central to most models of vegetation pattern formation
are hard to obtain, leading to arbitrary (free) parameter values and response
functions. For example, very few models incorporate empirically-validated
values of water diffusion and plant dispersal rates, despite the crucial role
of these parameters in the emergence of patterns. Instead, these models fine-
tune such values to obtain patterns that resemble the natural pattern in, for
example, their wavelength. Similarly, we are only beginning to understand how
plants rearrange their root system in the presence of competing individuals
(Cabal et al., 2020a, ), and hence kernel-based models still do not
incorporate realistic functional forms for the kernels. Instead, these models
use phenomenological functions to test potential mechanisms for pattern
formation by qualitatively comparing model output and target pattern, thus
limiting the potential of the models to make quantitative predictions. To
establish a dialogue between experiments and theory, models should develop
from a microscopic description of the system (DeAngelis and Yurek,, 2016;
Railsback and Grimm,, 2019). This approach allows for a more realistic and
accurate description of the plant-to-plant and plant-water interactions as
well as for a better reconciliation between model parameters and system-
specific empirical measures. Hopefully, it will also allow researchers to unify
existing theoretical approaches to vegetation pattern formation (Martínez-
García et al.,, 2014). Subsequently, existing tools from mathematics,
statistical physics, and/or computer science can be used to reach a
macroscopic PDEM that captures the key ingredients of the microscopic
dynamics. Statistical physics, which was conceived to describe how observed
macroscopic properties of physical systems emerge from the underlying
microscopic processes, provides a compelling and well-developed framework to
make such a micro-macro connection.
Second, recent developments in remotely sensed imagery have enabled the
measurement of an ecosystem’s state indicators, which will allow researchers
to compare observed and simulated patterns quantitatively (Bastiaansen et
al.,, 2018). On the one hand, using existing databases of ecosystem responses
to aridity (Berdugo et al.,, 2020) and satellite imagery of vegetation
coverage (Deblauwe et al.,, 2011), researchers could conduct a model selection
analysis and rank existing models from more to less realistic depending on
whether (and how many) features of the focal ecosystem each model manages to
reproduce under the correct environmental conditions. For example, models could
be classified depending on whether, after proper parameterization, they can
predict ecosystem responses such as transitions between pattern types at the
correct aridity thresholds. To elaborate this model classification, the use of
Fourier analysis for identifying regularity in natural patterns, geostatistics
for quantifying spatial correlations, and time series analysis for tracking
changes in the ecosystem properties through time will be essential.
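For the Fourier-analysis step, the standard procedure is to locate the peak of the radially averaged power spectrum of an image of the vegetation field; the wavelength of the dominant mode is the inverse of the peak wavenumber. A minimal sketch on a synthetic striped field follows (array sizes, binning, and the test pattern are our own assumptions):

```python
import numpy as np

def dominant_wavelength(field, dx):
    """Characteristic wavelength of a square field from the peak of its
    radially averaged power spectrum (periodic boundaries assumed)."""
    n = field.shape[0]
    power = np.abs(np.fft.fft2(field - field.mean()))**2
    k = np.fft.fftfreq(n, d=dx)                 # wavenumbers, cycles/length
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kr, p = np.hypot(kx, ky).ravel(), power.ravel()
    edges = np.linspace(kr[kr > 0].min(), kr.max(), 120)
    idx = np.digitize(kr, edges)                # annular bins in k-space
    radial = np.array([p[idx == i].mean() if np.any(idx == i) else 0.0
                       for i in range(1, edges.size)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return 1.0 / centers[np.argmax(radial)]     # wavelength = 1 / k_peak

# Synthetic stripes of known wavelength 8 (arbitrary length units):
n, dx = 256, 1.0
x = np.arange(n) * dx
stripes = np.cos(2 * np.pi * x / 8.0)[:, None] * np.ones((1, n))
wavelength = dominant_wavelength(stripes, dx)   # close to 8
```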
Beyond water-limited ecosystems, SDFs have been invoked to explain spatial
pattern formation in mussel beds (Rietkerk and van de Koppel,, 2008),
freshwater and salt marshes (van de Koppel et al.,, 2005; Van Wesenbeeck et
al.,, 2008; Zhao et al.,, 2021), and seagrasses (van der Heide et al.,, 2011;
Ruiz-Reynés et al.,, 2017). Conversely, nonlocal competition drives the
emergence of aggregated patterns in freshwater marshes (van de Koppel et al.,,
2008) and in theoretical models of population dynamics (Fuentes et al.,, 2003;
Dornelas et al.,, 2019; Maruvka and Shnerb,, 2006; Da Cunha et al.,, 2011;
Clerc et al.,, 2005; Maciel and Martinez-Garcia,, 2020). Understanding the
conditions in which negative feedbacks dominate over positive feedbacks,
finding the key features that distinguish the patterns caused by these
different feedbacks, and contrasting their divergent ecological consequences
constitute an exciting avenue for future research that has just started to
develop (Lee et al.,, 2021).
### Acknowledgments
We thank Robert M. Pringle, Rubén Juanes, and Ignacio Rodríguez-Iturbe for
various discussions at different stages of the development of this work. This
work is supported by FAPESP through grants ICTP-SAIFR 2016/01343-7, and
Programa Jovens Pesquisadores em Centros Emergentes 2019/24433-0 and
2019/05523-8, Instituto Serrapilheira through grant Serra-1911-31200, and the
Simons Foundation (RMG); the Princeton University May Fellowship in the
department of Ecology and Evolutionary Biology (CC); MINECO/AEI/FEDER through
the María de Maeztu Program for Units of Excellence in R&D (MDM-2017-0711,
Spain) (EHG & CL); The Gordon and Betty Moore Foundation, grant #7800 (CET &
JAB); and the NSF grant DMS-2052616 (JAB). This work was partially funded by
the Center of Advanced Systems Understanding (CASUS) which is financed by
Germany’s Federal Ministry of Education and Research (BMBF) and by the Saxon
Ministry for Science, Culture and Tourism (SMWK) with tax funds on the basis
of the budget approved by the Saxon State Parliament (JMC).
### Author contribution
Conceptualization, R.M.G., C.C., J.A.B., J.M.C., E.H.G., C.L., C.E.T.;
writing–original draft preparation, R.M.G.; writing–review and editing,
R.M.G., C.C., J.A.B., J.M.C., E.H.G., C.L., C.E.T.; visualization, R.M.G.,
C.C. All authors have read and agreed to the published version of the
manuscript.
## References
* Armas et al., (2004) Armas, C., Ordiales, R., and Pugnaire, F. I. (2004). Measuring plant interactions: a new comparative index. Ecology, 85(10):2682–2686.
* Bastiaansen et al., (2018) Bastiaansen, R., Jaïbi, O., Deblauwe, V., Eppinga, M. B., Siteur, K., Siero, E., Mermoz, S., Bouvet, A., Doelman, A., and Rietkerk, M. (2018). Multistability of model and real dryland ecosystems through spatial self-organization. Proceedings of the National Academy of Sciences, 115(44):11256–11261.
* Berdugo et al., (2020) Berdugo, M., Delgado-Baquerizo, M., Soliveres, S., Hernández-Clemente, R., Zhao, Y., Gaitán, J. J., Gross, N., Saiz, H., Maire, V., Lehman, A., Rillig, M. C., Solé, R. V., and Maestre, F. T. (2020). Global ecosystem thresholds driven by aridity. Science, 367(6479):787–790.
* Berríos-Caro et al., (2020) Berríos-Caro, E., Clerc, M., Escaff, D., Sandivari, C., and Tlidi, M. (2020). On the repulsive interaction between localised vegetation patches in scarce environments. Scientific Reports, 10(1):1–8.
* Bolker and Pacala, (1999) Bolker, B. M. and Pacala, S. W. (1999). Spatial moment equations for plant competition: Understanding spatial strategies and the advantages of short dispersal. American Naturalist, 153(6):575–602.
* Bonachela et al., (2015) Bonachela, J. A., Pringle, R. M., Sheffer, E., Coverdale, T. C., Guyton, J. A., Caylor, K. K., Levin, S. A., and Tarnita, C. E. (2015). Termite mounds can increase the robustness of dryland ecosystems to climatic change. Science, 347(6222):651–655.
* Borgogno et al., (2009) Borgogno, F., D’Odorico, P., Laio, F., and Ridolfi, L. (2009). Mathematical models of vegetation pattern formation in ecohydrology. Reviews of Geophysics, 47(1).
* Bromley et al., (1997) Bromley, J., Brouwer, J., Barker, A. P., Gaze, S. R., and Valentin, C. (1997). The role of surface water redistribution in an area of patterned vegetation in a semi-arid environment, south-west Niger. Journal of Hydrology, 198:1–29.
# Electron-hole asymmetry in electron systems with orbital degeneracy and
correlated hopping
Yu. Skorenkyy, O. Kramar, Yu. Dovhopyaty
(Received June 15, 2020, in final form September 15, 2020)
###### Abstract
Microscopic models of electronic subsystems with orbital degeneracy of energy
states and non-diagonal matrix elements of electron interactions (correlated
hopping) are considered within the configuration-operator approach. Equations
for arbitrary-temperature numerical calculation of the doublon concentration
for the integer band filling $n=1$ at different forms of the model density of
states are derived. The energy spectra obtained within the Green function
method are discussed with special emphasis on the role of correlated hopping
in transition from itinerant to localized behavior observed in vanadium
Magneli phases VnO2n-1, transition-metal dichalcogenides NiS2-xSex, fulleride
AnC60 systems.
Key words: electron correlations, energy spectrum, orbital degeneracy
## 1 Introduction
The idea of interaction-driven electron localization in a paramagnet due to
Coulomb forces (the Mott transition [1, 2]) and attempts to explain
ferromagnetism of transition metals have led to the elaboration of extremely
simple and insightful models of quantum statistics, namely the Anderson model [3]
and the Hubbard model [4]. The Hubbard model, describing a single non-degenerate
band of electrons with on-site Coulomb interaction, provided mechanisms for
the metal-to-insulator transition (MIT) and for magnetic orderings.
The model Hamiltonian contains just two energy parameters: the matrix element
$t$, the hopping integral of an electron to a neighboring site, and the
parameter $U$ of the intra-atomic Coulomb repulsion of two electrons on the
same site. This model is used for the description of a variety of phenomena and
is intensively studied [5, 6, 7]. Despite its apparent simplicity, the Hubbard
model admits an exact solution only in the very special cases of
$d=1$ (one spatial dimension [8]) and $d\to\infty$ (infinite spatial
dimensionality [9]). The limit $d\to\infty$ appears to capture the physics of
real 3D materials quite well, but the wide variety of compounds in which
strong electron correlations govern localization phenomena requires more than
a two-parameter model to account for all of their distinctive features. To
describe the peculiarities of real materials, the matrix elements of electron
interactions neglected in the original form of the Hubbard model as well as
orbital degeneracy of atomic energy levels [10, 11] are to be taken into
account.
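Although no general closed-form solution exists, the two-parameter model is easily diagonalized exactly on small clusters, which is a standard benchmark. A minimal sketch (the values $t=1$, $U=4$ are illustrative only, not taken from the text) for the half-filled two-site Hubbard model in the $S_z=0$ sector:

```python
import numpy as np

# Two-site Hubbard model at half filling, S_z = 0 sector.
# Basis: |up,down>, |down,up>, |updown,0>, |0,updown>.
# Illustrative parameters (not from the paper):
t, U = 1.0, 4.0

H = np.array([
    [0.0, 0.0, -t,  -t ],
    [0.0, 0.0, -t,  -t ],
    [-t,  -t,   U,  0.0],
    [-t,  -t,  0.0,  U ],
])

E = np.linalg.eigvalsh(H)            # ascending eigenvalues
# Known exact ground-state energy of the two-site Hubbard model:
E0 = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))
print(E[0], E0)
```

The lowest eigenvalue reproduces the textbook result $E_0=\bigl(U-\sqrt{U^2+16t^2}\bigr)/2$, interpolating between the non-interacting ($-2t$) and strong-coupling ($\sim -4t^2/U$) limits.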
Classic materials manifesting the Mott transition are the vanadium
Magneli phases, which have recently seen renewed attention [12]. In these
compounds, not only is a metal-to-insulator transition observed with increasing
temperature, but the electrical properties are also controlled by doping of the
cation subsystem or by applied external pressure. The transitions in VO2
and V2O3 can be explained within the framework of the non-degenerate Hubbard
model [1, 13]. By contrast, the Mott-Hubbard physics of transition-metal
dichalcogenides [14], which may find new applications in next-generation
nanoelectronics [15], is best described when the double orbital degeneracy
of the $e_{g}$ electrons is explicitly incorporated into the Hamiltonian [16].
Another class of materials with inherent orbital degeneracy and promising
physical properties is the fullerides [17], solid-state systems built of doped
fullerene molecules C60. An unusual metal-insulator transition is observed in
fullerides AxC60 doped with alkali metals A. Among these compounds, A3C60 is
metallic while the phases AC60, A2C60 and A4C60 are insulators [18], although
band-structure calculations predict purely metallic behavior [19]. The
orbital degeneracy of the energy levels is responsible for this disagreement
between theory and experiment. The Gutzwiller variational approach [20] predicts
an MIT for all integer band fillings $n=1,2,3,4,5$. The experimentally measured
electrical resistivity of C60 polycrystals [21] changes monotonically with
temperature, and the energy gap depends monotonically on the external
pressure.
A breakthrough in solving the puzzle of strong correlations came with the
development of the dynamical mean-field theory (DMFT) in the limit of infinite
dimensionality [9, 22]. It was quickly realized that DMFT is not directly
applicable to systems whose hopping amplitudes depend on the site occupancies [23].
Studies of the Falicov-Kimball model [24, 25], a special case of the Hubbard model
with two sorts of particles, one localized and the other itinerant, shed light
on the peculiarities of systems with correlated hopping of electrons
(a dependence of the hopping amplitudes on the lattice-site occupancies). Within DMFT
for the Falicov-Kimball model, an exact two-pole structure was obtained for the Green
function, while for the general case of the Hubbard model, a finite-temperature
perturbation theory was built with the electron hopping term as the perturbation
[26]. On this basis, a rigorous approach to the description of correlated
hopping within the Falicov-Kimball model was constructed, and the thermodynamics of the
model and the temperature-induced MIT were studied [27, 28]. Within the elaborated
scheme, considerable progress was achieved in the theoretical description of
inelastic Raman and X-ray scattering in the vicinity of the MIT [29, 30] and in
the charge-density-wave phase [31, 32]. Due to the correlated hopping of electrons,
the model was also shown to exhibit peculiar optical conductivity [33] and
thermoelectric properties [34, 35, 36].
The present study is devoted to the investigation of systems close to the Mott-
Hubbard transition within models taking into account the orbital
degeneracy of energy levels, strong Coulomb interaction and correlated hopping
of electrons. The structure of this paper is as follows. Section 2 is devoted
to the model formulation and the model Hamiltonians for the non-degenerate, doubly
degenerate and triply degenerate cases; the general representations of the
Hamiltonians in terms of electron and Hubbard operators are given. In section 3, the
concentrations of empty sites (holes) and doubly occupied sites close to the
metal-insulator transition at the electron concentration $n=1$ are calculated for
models with non-degenerate, doubly and triply degenerate energy levels.
For the calculation of quasiparticle energy spectra, a nonperturbative approach,
the generalized mean-field approximation [37, 38], is used. Section 4 presents our
conclusions on the role of correlated hopping in models with orbital
degeneracy.
## 2 The Hamiltonians of electronic subsystems with orbitally degenerate
energy levels
Within the second quantization formalism, the Hamiltonian of interacting
electron systems can be written [39] as
$\displaystyle
H=-\mu\sum_{i\lambda\sigma}a_{i\lambda\sigma}^{+}a_{i\lambda\sigma}+{\sum_{ij\lambda\sigma}}^{\prime}t_{ij}a_{i\lambda\sigma}^{+}a_{j\lambda\sigma}+\frac{1}{2}{\sum_{ijkl}}{\sum_{\alpha\beta\gamma\delta}}{\sum_{\sigma\sigma^{\prime}}}J^{\alpha\beta\gamma\delta}_{ijkl}a_{i\alpha\sigma}^{+}a_{j\beta\sigma^{\prime}}^{+}a_{l\delta\sigma^{\prime}}a_{k\gamma\sigma}\,,$
(2.1)
where $\mu$ stands for chemical potential, $a_{i\lambda\sigma}^{+}$,
$a_{i\lambda\sigma}$ are operators of spin-$\sigma$ electron creation and
annihilation in orbital state $\lambda$ on lattice site $i$, respectively, the
second sum with matrix element
$\displaystyle t_{ij}=\int{\mathrm{d}^{3}}r{\phi}_{\lambda}^{*}({\bf r}-{\bf
R}_{i})\left[-\frac{\hbar^{2}}{2m}\Delta+V^{\text{ion}}({\bf
r})\right]\phi_{\lambda}({\bf r}-{\bf R}_{j})$ (2.2)
describes translations (hopping) of electrons in the crystal field
$V^{\text{ion}}(\bf{r})$. The third sum in equation (2.1) is the general
expression for pair electron interactions described by matrix elements
$\displaystyle J^{\alpha\beta\gamma\delta}_{ijkl}$ $\displaystyle=$
$\displaystyle\int{\int{{\phi}_{\alpha}^{*}({\bf r}-{\bf
R}_{i}){\phi}_{\beta}({\bf r}-{\bf
R}_{j})}}\frac{\mathrm{e}^{2}}{|r-r^{\prime}|}{\phi}_{\delta}^{*}({\bf
r^{\prime}}-{\bf R}_{l}){\phi}_{\gamma}({\bf r^{\prime}}-{\bf
R}_{k})\mathrm{d}r\mathrm{d}r^{\prime}.$ (2.3)
In the above formulae, the indices $\alpha$, $\beta$, $\gamma$, $\delta$,
$\lambda$ denote orbital states, ${\phi}_{\lambda}({\bf r}-{\bf R}_{i})$ is the
wave function in the Wannier (site) representation, and other notations are standard.
Hamiltonian (2.1) is difficult to treat mathematically because of its non-diagonal
form in both the Wannier and the wave-vector representations. The model is greatly
simplified if all matrix elements are neglected except the on-site Coulomb
correlation (the Hubbard parameter $U$):
$\displaystyle U=\int{\int{|{\phi}_{\lambda}^{*}({\bf r}-{\bf
R}_{i})|^{2}\frac{\mathrm{e}^{2}}{|r-r^{\prime}|}|\phi_{\lambda}({\bf
r^{\prime}}-{\bf R}_{i})|^{2}\mathrm{d}r\mathrm{d}r^{\prime}}}.$ (2.4)
In this case, however, we fail to describe the influence of the lattice-site
occupation on the electron hopping (correlated hopping of electrons), which
removes the electron-hole symmetry in real correlated electron systems [16]
and plays an important role in the onset of itinerant ferromagnetism
[40, 41, 42, 43, 44]. Besides, for a system with orbital degeneracy of energy
states, the on-site exchange integral (Hund's rule coupling)
$\displaystyle J_{H}$ $\displaystyle=$
$\displaystyle\int\int{\phi}_{\lambda}^{*}({\bf r}-{\bf
R}_{i})\phi_{\lambda^{\prime}}({\bf r}-{\bf
R}_{i})\frac{\mathrm{e}^{2}}{|{r}-{r}^{\prime}|}\phi_{\lambda^{\prime}}^{*}({\bf
r}^{\prime}-{\bf R}_{i})\phi_{\lambda}({\bf r}^{\prime}-{\bf
R}_{i})\mathrm{d}{\bf r}\,\mathrm{d}{\bf r}^{\prime},$ (2.5)
which makes the configurations with parallel spins energetically more
favorable, should be taken into account.
The Hamiltonian of the generalized Hubbard model with correlated hopping of
electrons then reads
$\displaystyle H_{s}$ $\displaystyle=$
$\displaystyle-\mu\sum_{i\sigma}n_{i\sigma}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}+{\sum_{ij\sigma}}^{\prime}t_{ij}(n)a_{i\sigma}^{+}a_{j\sigma}+{\sum_{ij\sigma}}^{\prime}T^{\prime}_{ij}\left(a_{i\sigma}^{+}a_{j\sigma}n_{i\bar{\sigma}}+h.c.\right),$
(2.6)
where $n_{i\sigma}=a_{i\sigma}^{+}a_{i\sigma}$ is the site occupancy operator,
and the hopping integrals $t_{ij}(n)$, $T^{\prime}_{ij}$ take into account two
types of correlated hopping of electrons [37] in the non-degenerate model.
Such a generalized model is not invariant under the particle-hole
transformation $a_{i\sigma}^{+}\to a_{i\sigma}$, in contrast to the
Hubbard model [45].
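The broken particle-hole symmetry can be made explicit numerically. The following sketch builds a two-site version of (2.6) via a Jordan-Wigner mapping (all parameter values $t=1$, $U=4$, $T'=0.3$ are illustrative assumptions, not from the text): with $T'\neq 0$ the one-electron spectrum stays at $\pm t$, while the one-hole spectrum becomes $U\pm(t+2T')$, so the two sectors no longer mirror each other.

```python
import numpy as np
from functools import reduce

# Single-mode matrices; modes ordered (site0 up, site0 down, site1 up, site1 down).
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilator on one mode

def ann(p, n_modes=4):
    # Jordan-Wigner fermionic annihilator for mode p
    return reduce(np.kron, [Z] * p + [a] + [I2] * (n_modes - p - 1))

c = [ann(p) for p in range(4)]
cd = [m.T for m in c]
n = [cd[p] @ c[p] for p in range(4)]

t, U, Tp = 1.0, 4.0, 0.3   # illustrative values only
H = U * (n[0] @ n[1] + n[2] @ n[3])
for s in (0, 1):                    # spin: 0 = up, 1 = down
    for i, j in ((0, 1), (1, 0)):   # ordered site pairs
        hop = cd[2 * i + s] @ c[2 * j + s]
        H = H + t * hop                     # bare hopping
        ch = hop @ n[2 * i + (1 - s)]       # correlated hopping a+ a n
        H = H + Tp * (ch + ch.T)            # ... + h.c.

def sector(H, N):
    # computational basis states are eigenstates of the total particle number
    idx = [k for k in range(16) if bin(k).count("1") == N]
    return np.linalg.eigvalsh(H[np.ix_(idx, idx)])

E1, E3 = sector(H, 1), sector(H, 3)
print(E1)  # one electron:  +-t
print(E3)  # one hole: U +- (t + 2 T') -- asymmetric for T' != 0
```

Setting `Tp = 0` restores the mirror relation $E_{3}=U+E_{1}$, illustrating that the asymmetry is entirely due to the correlated-hopping integral.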
The Hamiltonian of the orbitally degenerated electronic system with strong
electron correlation and correlated hopping of electrons can be represented in
the form
$\displaystyle H_{\text{deg}}$ $\displaystyle=H_{\text{loc}}+H_{\text{tr}}\,,$
$\displaystyle H_{\text{loc}}$
$\displaystyle=-\mu\sum_{i\lambda\sigma}n_{i\lambda\sigma}+U\sum_{i\lambda}n_{i\lambda\uparrow}n_{i\lambda\downarrow}+\frac{U^{\prime}}{2}\sum_{i\lambda\lambda^{\prime}\sigma}n_{i\lambda\sigma}n_{i\lambda^{\prime}\bar{\sigma}}+\frac{U^{\prime}-J_{H}}{2}\sum_{i\lambda\lambda^{\prime}\sigma}n_{i\lambda\sigma}n_{i\lambda^{\prime}\sigma}\,,$
$\displaystyle H_{\text{tr}}$
$\displaystyle={\sum_{ij\lambda\sigma}}^{\prime}t_{ij}(n)a_{i\lambda\sigma}^{+}a_{j\lambda\sigma}+{\sum_{ij\lambda\sigma}}^{\prime}t^{\prime}_{ij}\left(a_{i\lambda\sigma}^{+}a_{j\lambda\sigma}n_{i\bar{\lambda}}+h.c.\right)+{\sum_{ij\lambda\sigma}}^{\prime}t^{\prime\prime}_{ij}\left(a_{i\lambda\sigma}^{+}a_{j\lambda\sigma}n_{i\lambda\bar{\sigma}}+h.c.\right),$
(2.7)
where $n_{i\lambda\sigma}=a_{i\lambda\sigma}^{+}a_{i\lambda\sigma}$,
$U^{\prime}=U-2J_{H}$, and the hopping integrals $t_{ij}(n)$,
$t^{\prime}_{ij}$, $t^{\prime\prime}_{ij}$ take into
account different types of correlated hopping of electrons [16]. In the model of
a doubly degenerate band, every site can be in one of 16 configurations [16],
while for triple degeneracy, inherent to fullerides, the number of possible
configurations rises to 64 [46].
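These configuration counts follow from the fact that each orbital of a site can be empty, hold one up or one down electron, or be doubly occupied, and can be checked by direct enumeration:

```python
from itertools import product

# Each orbital contributes 4 states; M degenerate orbitals give 4**M
# site configurations: 4 (non-degenerate), 16 (doubly), 64 (triply).
orbital_states = ("empty", "up", "down", "double")
n_double = len(list(product(orbital_states, repeat=2)))
n_triple = len(list(product(orbital_states, repeat=3)))
print(n_double, n_triple)  # 16 64
```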
In fullerenes, which have triply degenerated lowest unoccupied molecular
orbital of $t_{1u}$ symmetry, the competition between intra-site Coulomb
interaction and electron hopping defines the electrical properties, with
insulating or metallic behavior realized [10]. Fullerides (fullerene crystals)
are semiconductors with an energy gap of $1.2-1.9$ eV [47, 48]. Intra-atomic
repulsion energy estimates [49] give $2.7$ eV. Calculation with a screening
effect taken into account gives $U=2.7$ eV [50, 51]. The energy cost of
electron configurations with two spins aligned in parallel on different
orbitals is considerably less than for anti-parallel alignment due to Hund’s
rule coupling.
To consider the case of strong electron correlation, it is reasonable to pass
from electron operator to Hubbard operators $X^{pl}$ of site transition from
state $|l\rangle$ to state $|p\rangle$ with anticommutation relations
$\\{X_{i}^{pl};X_{j}^{kt}\\}=\delta_{ij}(\delta_{lk}X_{i}^{pt}+\delta_{pt}X_{i}^{kl})$,
and the completeness condition $\sum\limits_{p}X_{i}^{p}=1$ for the number operators
$X_{i}^{p}=X_{i}^{pl}X_{i}^{lp}$ of the $|p\rangle$-state on site $i$.
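These algebraic relations can be verified directly by representing the on-site Hubbard operators as matrix units $|p\rangle\langle l|$; a minimal NumPy sketch (the four-dimensional on-site space is an arbitrary illustration, not tied to a particular model):

```python
import numpy as np

DIM = 4  # illustrative on-site state space |0>, |1>, |2>, |3>

def X(p, l):
    """Hubbard operator X^{pl} = |p><l| represented as a matrix unit."""
    m = np.zeros((DIM, DIM))
    m[p, l] = 1.0
    return m

# check {X^{pl}, X^{kt}} = delta_{lk} X^{pt} + delta_{pt} X^{kl}
# for all on-site index combinations
for p in range(DIM):
    for l in range(DIM):
        for k in range(DIM):
            for t in range(DIM):
                anti = X(p, l) @ X(k, t) + X(k, t) @ X(p, l)
                rhs = (l == k) * X(p, t) + (p == t) * X(k, l)
                assert np.allclose(anti, rhs)

# completeness: the number operators X^p = X^{pl} X^{lp} sum to unity
assert np.allclose(sum(X(p, p) for p in range(DIM)), np.eye(DIM))
```

Since $X^{pl}X^{lp}=|p\rangle\langle p|$, the diagonal matrix units indeed resolve the identity, matching the completeness condition used below to constrain configuration concentrations.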
In the configuration-operator representation, the model Hamiltonian takes the
form
$\displaystyle
H=H_{\text{loc}}+\sum_{\lambda=\alpha,\beta,\gamma}\left(H_{b}^{(\lambda)}+H_{h}^{(\lambda)}\right),$
(2.8)
where the interaction part $H_{\text{loc}}$ is diagonal, while the processes forming
the Hubbard subbands, described by $H_{b}^{(\lambda)}$, and the hybridization of
these subbands, $H_{h}^{(\lambda)}$, are clearly separated in the translational
part of the Hamiltonian; this separation is a distinctive feature of the
configuration-operator representation. Different hopping integrals correspond
to transitions within (or between) the different subbands.
In the particular case of band filling $n=1$, strong Coulomb correlation and
strong Hund’s coupling (parameter $U-3J_{H}$ is much greater than the
bandwidth), the states with three and more electrons on the same site are
excluded. Then, the influence of correlated hopping can be described by three
different hopping integrals. The bare band hopping integral $t_{ij}$ is
renormalized to take into account the band narrowing caused by concentration
dependent correlated hopping as $t_{ij}(n)=t_{ij}(1-\tau_{1}n)$. This hopping
integral characterizes the lower Hubbard subband. Parameter $\tau_{1}$ is
usually neglected, but it is of principal importance for a consistent
description of correlation effects in narrow band systems. The hopping
integral for the upper Hubbard subband is
$\tilde{t}_{ij}(n)=t_{ij}(n)+2t^{\prime}_{ij}=(1-\tau_{1}n-2\tau_{2})t_{ij}$,
while $\bar{t}_{ij}(n)=t_{ij}(n)+t^{\prime}_{ij}=(1-\tau_{1}n-\tau_{2})t_{ij}$
describes the hybridization of the lower and upper Hubbard subbands. The
correlated hopping parameter $\tau_{2}$ describes the effect of the occupancy
of the sites involved in the electron transfer, and $\tau_{1}$ describes that
of the nearest-neighbor sites.
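As a quick numerical illustration of these renormalizations (the values of $t$, $\tau_{1}$, $\tau_{2}$ below are arbitrary), the three effective hopping integrals can be tabulated:

```python
def hopping_integrals(t, n, tau1, tau2):
    """Effective hopping integrals quoted in the text: lower subband,
    upper subband, and lower-upper hybridization."""
    t_lower = t * (1 - tau1 * n)             # t_ij(n)
    t_upper = t * (1 - tau1 * n - 2 * tau2)  # t~_ij(n) = t_ij(n) + 2 t'_ij
    t_hybr = t * (1 - tau1 * n - tau2)       # tbar_ij(n) = t_ij(n) + t'_ij
    return t_lower, t_upper, t_hybr

# without correlated hopping all three coincide with the bare integral
assert hopping_integrals(1.0, 1, 0.0, 0.0) == (1.0, 1.0, 1.0)

# correlated hopping narrows the upper subband the most
t_lower, t_upper, t_hybr = hopping_integrals(1.0, 1, 0.1, 0.1)
assert t_lower > t_hybr > t_upper
```

The ordering of the three integrals makes explicit that correlated hopping narrows the upper subband more strongly than the lower one, which is the source of the subband non-equivalence discussed below.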
## 3 Results and discussion
Green functions technique allows us to calculate the energy spectrum of the
model in the case of electron concentration $n=1$. One can rewrite the single-
particle Green function $\langle\langle
a_{i\lambda\sigma}|a_{j\lambda\sigma}^{+}\rangle\rangle$ on the basis of the
relation between electronic operators and Hubbard’s $X$-operators describing
electron hoppings between holes, single-electron configurations and Hund’s
doublons. Probabilities of processes involving other types of doubly
occupied states, as well as states with three or more electrons, tend to zero
in the considered limit of strong intra-atomic correlation.
Substantial extension of the Hilbert space for the cases with orbital
degeneracy makes the direct application of standard methods quite complicated
[52]. One may considerably simplify the calculation of quasiparticle energy
spectrum by projecting the terms arising from commutation with $H_{\text{tr}}$
in the Green function equation of motions onto a set of basic operators which
describe processes responsible for Hubbard subbands formation. The
configuration representation of Hubbard operators is especially well-suited
for such procedures and makes the physical nature of the metal-to-insulator
transition and the influence of correlated hopping relatively transparent. In
the limit of strong Coulomb correlation and strong Hund's coupling, for the
doubly degenerated band, such a set of operators is $X^{0,\lambda\sigma}_{i}$,
$X^{\lambda\sigma,2}_{i}$, and for triply degenerated model it consists of
$X^{0,\lambda\sigma}_{i}$ and $Y^{\lambda\sigma,2}_{i}$, where the $Y$-operator is
introduced to take into account that for the triply degenerate orbitals two
equivalent states are involved, for example,
$Y^{\alpha\uparrow,2}_{i}=X^{0\uparrow 0,\uparrow\uparrow
0}_{i}+X^{00\uparrow,\uparrow 0\uparrow}_{i}$.
To obtain a closed system of equations for Green functions $\langle\langle
X_{p}^{0,\alpha\uparrow}|X_{p^{\prime}}^{\alpha\uparrow 0}\rangle\rangle$ and
$\langle\langle Y_{p}|X_{p^{\prime}}^{\alpha\uparrow,0}\rangle\rangle$, we use
the projection procedure [37]:
$\displaystyle[X_{p}^{0,\alpha\uparrow};\sum_{\lambda}{H_{b}^{(\lambda)}}]$
$\displaystyle=$
$\displaystyle\sum_{i}\varepsilon_{pi}^{b}X_{i}^{0,\uparrow};\qquad[Y_{p};\sum_{\lambda}{H_{b}^{(\lambda)}}]=\sum_{i}\tilde{\varepsilon}_{pi}^{b}Y_{i};$
(3.1)
$\displaystyle[X_{p}^{0,\alpha\uparrow};\sum_{\lambda}{H_{h}^{(\lambda)}}]$
$\displaystyle=$
$\displaystyle\sum_{i}\varepsilon_{pi}^{h}Y_{i};\qquad\quad[Y_{p};\sum_{\lambda}{H_{h}^{(\lambda)}}]=\sum_{i}\tilde{\varepsilon}_{pi}^{h}X_{i}^{0,\uparrow}.$
This procedure realizes the generalized mean-field approximation, which allows
one to easily break off the sequence of Green function equations and results
in a two-pole quasiparticle Green function, with the spectrum describing two
Hubbard subbands with renormalized widths and hybridization. To obtain the
single-electron Green function, however, one should solve two systems of
equations (one for $\langle\langle
X_{p}^{0,\alpha\uparrow}|X_{p^{\prime}}^{\alpha\uparrow 0}\rangle\rangle$,
$\langle\langle Y_{p}|X_{p^{\prime}}^{\alpha\uparrow,0}\rangle\rangle$ and
another for $\langle\langle X_{p}^{0,\alpha\uparrow}|Y^{+}_{p}\rangle\rangle$,
$\langle\langle Y_{p}|Y^{+}_{p}\rangle\rangle$), neglecting the irreducible
parts. Through the non-operator coefficients $\varepsilon^{b}({\bf
k}),\tilde{\varepsilon}^{b}({\bf k}),\varepsilon^{h}({\bf
k}),\tilde{\varepsilon}^{h}({\bf k})$, the spectrum becomes dependent on the
mean numbers of sites in particular electron configurations (for example, Hund
doublons in the doubly orbitally degenerated case) and thus dependent on
temperature.
Solving analytically the system of equations for $\langle\langle
X_{p}^{0,\alpha\uparrow}|X_{p^{\prime}}^{\alpha\uparrow 0}\rangle\rangle$ and
$\langle\langle Y_{p}|X_{p^{\prime}}^{\alpha\uparrow,0}\rangle\rangle$ after
Fourier transformation, we obtain the quasi-particle energy spectrum
$\displaystyle E_{1,2}({\bf k})=$ $\displaystyle-$
$\displaystyle\mu+\frac{\Delta}{2}+\frac{\varepsilon^{b}({\bf
k})+\tilde{\varepsilon}^{b}({\bf k})}{2}$ (3.2) $\displaystyle\mp$
$\displaystyle\frac{1}{2}\sqrt{(\Delta-\varepsilon^{b}({\bf
k})+\tilde{\varepsilon}^{b}({\bf k}))^{2}+4\varepsilon^{h}({\bf
k})\tilde{\varepsilon}^{h}({\bf k})}$
with $\Delta=U$ corresponding to the non-degenerated Hubbard model and
$\Delta=U-3J_{H}$ being the energy cost of creating a Hund’s doublon — a pair
of electrons with parallel spins in different orbitals of the same lattice
site [16, 11] for the models with orbital degeneracy. In the more rigorous
approach of work [26], the two-pole structure of the Green function was found
to be realized only for the Falicov-Kimball model, while in the general case
four state subspaces (electron, hole and resonant-valence-bond) were considered. In
terminology of papers [26, 53], we study here only the low-energy sector of
the model to compare the effects produced by the degeneracy of the energy
levels and the correlated hopping, both removing the particle-hole symmetry.
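The gap criterion invoked below can be stated directly on the spectrum (3.2): the system is insulating when the bottom of the upper quasiparticle subband lies above the top of the lower one. A hedged sketch (the coefficient values are placeholders, not the self-consistent ones computed in the paper):

```python
import numpy as np

def subbands(eps, Delta, eb, ebt, eh, eht, mu=0.0):
    """Two-pole spectrum E_{1,2} of equation (3.2); eb, ebt, eh, eht are the
    dimensionless band/hybridization coefficients multiplying the bare
    energy eps, so that e.g. epsilon^b(k) = eb * eps."""
    center = -mu + Delta / 2.0 + (eb + ebt) * eps / 2.0
    root = 0.5 * np.sqrt((Delta - (eb - ebt) * eps) ** 2
                         + 4.0 * eh * eht * eps ** 2)
    return center - root, center + root

eps = np.linspace(-1.0, 1.0, 2001)  # bare band with half-width w = 1

# large Delta: indirect gap E2.min() - E1.max() opens -> insulator
E1, E2 = subbands(eps, Delta=3.0, eb=1.0, ebt=1.0, eh=-0.2, eht=-0.2)
assert E2.min() - E1.max() > 0

# small Delta: the subbands overlap -> metal
E1, E2 = subbands(eps, Delta=0.5, eb=1.0, ebt=1.0, eh=-0.2, eht=-0.2)
assert E2.min() - E1.max() < 0
```

The indirect gap $E_{2}^{\min}-E_{1}^{\max}$ changes sign as $\Delta/w$ grows, which is exactly the criterion used for the metal-insulator transition in section 3.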
The doublon concentration appears to be an important parameter in this
approach, allowing one to naturally introduce a dependence of the energy
spectrum (3.2) on temperature or doping. In the degenerate model, in
distinction from the original Hubbard model, a doublon is not just a charge
excitation but may carry a spin as well. For sufficiently high temperatures,
in the absence of any type of magnetic order, we denote the concentration of
empty lattice sites (holes) by $c$, concentration of sites occupied by a
single electron of any spin projection $\sigma$ in any orbital state $\lambda$
by $s$, Hund’s doublons concentration by $d$ and neglect all other
configurations. For the case of strong Hund’s coupling, the high energy
doublon configurations (spin-singlet Hubbard doublons and spin-triplet non-
Hund doublons) are excluded. We make use of the completeness condition for the
$X$-operator set to obtain constraints $c+2s+d=1$ for a non-degenerate model,
$c+4s+6d=1$ for a model with double degeneracy, $c+6s+6d=1$ for a model with
triple degeneracy at strong Coulomb correlation and at strong Hund’s coupling.
The non-operator coefficients $\varepsilon^{b}({\bf
k}),\tilde{\varepsilon}^{b}({\bf k}),\varepsilon^{h}({\bf
k}),\tilde{\varepsilon}^{h}({\bf k})$ can be obtained by a procedure described
in detail in paper [38] in the paramagnetic case at $n=1$ for the doubly
degenerated model as
$\displaystyle\varepsilon^{b}=\left((1-\tau_{1})\frac{1+4d+12(1-4d)^{2}}{4(1+4d)}+(1-\tau_{1}-2\tau_{2})\frac{8d^{2}}{1+4d}\right)\varepsilon;$
(3.3) $\displaystyle\varepsilon^{h}=(1-\tau_{1}-\tau_{2})d(3+4d)\varepsilon,$
(3.4)
$\displaystyle\tilde{\varepsilon}^{b}=\left(-8d^{2}(1-\tau_{1})+\frac{(1-\tau_{1}-2\tau_{2})}{8}(1-16d+32d^{2})\right)\varepsilon,$
(3.5)
$\displaystyle\tilde{\varepsilon}^{h}=(1-\tau_{1}-\tau_{2})\frac{d(3+4d)}{1+4d}\varepsilon$
(3.6)
and for the triply degenerated paramagnet at $n=1$ as
$\displaystyle\varepsilon^{b}=\left((1-\tau_{1})\frac{216d^{2}-12d+1}{24d+1}+(1-\tau_{1}-2\tau_{2})\frac{72d^{2}}{24d+1}\right)\varepsilon;$
(3.7)
$\displaystyle\varepsilon^{h}=(1-\tau_{1}-\tau_{2})\frac{8d-12d^{2}}{1-6d}\varepsilon,$
(3.8)
$\displaystyle\tilde{\varepsilon}^{b}=\left(-(1-\tau_{1})\frac{36d^{2}}{1-6d}+(1-\tau_{1}-2\tau_{2})\frac{1-16d+84d^{2}}{2(1-6d)}\right)\varepsilon,$
(3.9)
$\displaystyle\tilde{\varepsilon}^{h}=(1-\tau_{1}-\tau_{2})\frac{24d+1-216d^{2}}{3(24d+1)}\varepsilon.$
(3.10)
Corresponding expressions for non-degenerated model [38] are
$\displaystyle\varepsilon^{b}=\left[(1-\tau_{1})(1-2d+2d^{2})-2(1-\tau_{1}-2\tau_{2})d^{2}\right]\varepsilon;$
(3.11)
$\displaystyle\tilde{\varepsilon}^{b}=\left[(1-\tau_{1}-2\tau_{2})(1-2d+2d^{2})-2(1-\tau_{1})d^{2}\right]\varepsilon;$
(3.12)
$\displaystyle\varepsilon^{h}=\tilde{\varepsilon}^{h}=-2(1-\tau_{1}-\tau_{2})d\varepsilon.$
(3.13)
One can see that in the degenerated models there is no symmetry between the
expressions describing the hole and doublon sectors.
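This hole-doublon asymmetry can be checked numerically: per (3.11)-(3.13) the two hybridization coefficients of the non-degenerate model coincide, while per (3.3)-(3.6) those of the doubly degenerate model do not. A sketch (the doublon concentration $d$ is an arbitrary test value):

```python
def coeffs_nondeg(d, tau1=0.0, tau2=0.0):
    """Band/hybridization coefficients (3.11)-(3.13), in units of eps."""
    eb = (1 - tau1) * (1 - 2*d + 2*d**2) - 2 * (1 - tau1 - 2*tau2) * d**2
    ebt = (1 - tau1 - 2*tau2) * (1 - 2*d + 2*d**2) - 2 * (1 - tau1) * d**2
    eh = eht = -2 * (1 - tau1 - tau2) * d
    return eb, ebt, eh, eht

def coeffs_deg2(d, tau1=0.0, tau2=0.0):
    """Coefficients (3.3)-(3.6) of the doubly degenerate model at n = 1."""
    eb = ((1 - tau1) * (1 + 4*d + 12*(1 - 4*d)**2) / (4 * (1 + 4*d))
          + (1 - tau1 - 2*tau2) * 8 * d**2 / (1 + 4*d))
    eh = (1 - tau1 - tau2) * d * (3 + 4*d)
    ebt = (-8 * d**2 * (1 - tau1)
           + (1 - tau1 - 2*tau2) / 8 * (1 - 16*d + 32*d**2))
    eht = (1 - tau1 - tau2) * d * (3 + 4*d) / (1 + 4*d)
    return eb, ebt, eh, eht

d = 0.1
_, _, eh, eht = coeffs_nondeg(d)
assert eh == eht            # hole and doublon sectors hybridize symmetrically

_, _, eh2, eht2 = coeffs_deg2(d)
assert abs(eh2 - eht2) > 1e-3   # the symmetry is lost under degeneracy
```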
For the generalized Hubbard model with correlated hopping [38], the doublon
concentration is given by
$\displaystyle
d=C\int^{w}_{-w}\rho(\varepsilon){\left[\frac{A(\varepsilon)}{\exp\left(\frac{E_{1}(\varepsilon)}{\Theta}\right)+1}+\frac{B(\varepsilon)}{\exp\left(\frac{E_{2}(\varepsilon)}{\Theta}\right)+1}\right]}\mathrm{d}\varepsilon,$
(3.14)
where $C={1\over 2}$, $\Theta=k_{\text{B}}T$,
$\displaystyle A({\bf k})={1\over
2}\left(1+\frac{\Delta+\tilde{\varepsilon}^{b}-\varepsilon^{b}}{\sqrt{(\Delta-\varepsilon^{b}+\tilde{\varepsilon}^{b})^{2}+4\varepsilon^{h}\tilde{\varepsilon}^{h}}}\right),$
$\displaystyle B({\bf k})=1-A({\bf k}).$ (3.15)
For the non-degenerated model, the equation for holes is completely identical
to the above equation, as $c=d$; thus, $\mu$ is obtained from the condition $n=1$.
For the doubly degenerated model with correlated hopping, the doublon
concentration is determined by the condition (3.14) with $C={1\over 4}$ to be
solved together with the condition for the chemical potential $c=2d$, which
for the doubly degenerated model can be written as
$\displaystyle
8d=(1+4d)\int^{w}_{-w}{\rho(\varepsilon)\left(\frac{B(\varepsilon)}{\exp\left(\frac{-E_{1}(\varepsilon)}{\Theta}\right)+1}+\frac{A(\varepsilon)}{\exp\left(\frac{-E_{2}(\varepsilon)}{\Theta}\right)+1}\right)}\mathrm{d}\varepsilon.$
(3.16)
In an analogous way, we obtain equation (3.14) with $C={1\over 12}$ for the triply
degenerated model with correlated hopping. To complete the set of equations, the
condition for the chemical potential $c=6d$ for the triply degenerated model
is also used
$\displaystyle
36d=(1-24d)\int^{w}_{-w}{\rho(\varepsilon)\left[\frac{B(\varepsilon)}{\exp\left(\frac{-E_{1}(\varepsilon)}{\Theta}\right)+1}+\frac{A(\varepsilon)}{\exp\left(\frac{-E_{2}(\varepsilon)}{\Theta}\right)+1}\right]}\mathrm{d}\varepsilon.$
(3.17)
Solving these equations numerically with an appropriate model density
$\rho(\varepsilon)$ of electronic states (DOS), we obtain the doublon
concentration as a function of the model parameters and may apply the gap
criterion for the spectra (3.2) to study the metal-insulator transition (MIT)
[2, 14]. In the ground state, at the point of the MIT, the polar states (holes
and doublons) concentrations tend to zero.
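For the non-degenerate model at $\tau_{1}=\tau_{2}=0$, particle-hole symmetry pins the chemical potential at $\mu=\Delta/2$, and the self-consistency loop for (3.14) can be sketched as below (semi-elliptic DOS; parameter values are illustrative; the doublon weight of the lower subband is taken with the sign that makes $d$ vanish in the atomic limit):

```python
import numpy as np

def doublon_concentration(Delta, Theta, w=1.0, n_eps=400, n_iter=300):
    """Damped fixed-point iteration for d of equation (3.14) in the
    non-degenerate model at tau_1 = tau_2 = 0 and half filling, where
    mu = Delta/2 cancels the -mu + Delta/2 terms of the spectrum (3.2)."""
    eps = np.linspace(-w, w, n_eps)
    rho = 2.0 / (np.pi * w**2) * np.sqrt(w**2 - eps**2)  # semi-elliptic DOS
    de = eps[1] - eps[0]
    fermi = lambda E: 1.0 / (np.exp(E / Theta) + 1.0)
    d = 0.2
    for _ in range(n_iter):
        eb = ebt = (1 - 2*d) * eps          # equations (3.11)-(3.12)
        eh = -2 * d * eps                   # equation (3.13)
        root = 0.5 * np.sqrt((Delta - eb + ebt)**2 + 4 * eh**2)
        E1, E2 = (eb + ebt)/2 - root, (eb + ebt)/2 + root
        # doublon spectral weight of the lower subband; the sign is chosen
        # so that d -> 0 in the atomic limit Delta >> w, cf. (3.14)-(3.15)
        A = 0.5 * (1 - Delta / (2 * root))
        B = 1.0 - A
        d_new = 0.5 * np.sum(rho * (A * fermi(E1) + B * fermi(E2))) * de
        d = 0.5 * d + 0.5 * d_new           # damped update
    return d

# doublons are suppressed by the correlation strength, as in figure 1
d_weak = doublon_concentration(Delta=1.0, Theta=0.2)
d_strong = doublon_concentration(Delta=2.5, Theta=0.2)
assert 0 < d_strong < d_weak < 0.3
```

In the non-interacting, zero-temperature limit this construction reproduces $d=1/4$, while the decrease of $d$ with $\Delta/w$ mirrors the trend plotted in figure 1.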
In distinction from the non-degenerated model, where it is the correlated
hopping that destroys the particle-hole symmetry, in the doubly and triply
degenerated models there is no symmetry even at $\tau_{1}=\tau_{2}=0$, due to the
non-equivalence of the subspaces of empty states and doublon states (see [27]).
Dependencies of the doublon concentration on the correlation strength, shown
in figures 1 and 2, allow us to illustrate the influence of correlated hopping
and of the asymmetry of the DOS form.
Figure 1: (Colour online) Dependence of doublon concentration on the energy
parameter $\Delta/w$ for different correlated hopping values at $\Theta/w=0.2$
for non-degenerated model. Solid curve corresponds to the Hubbard model
$\tau_{1}=\tau_{2}=0$, short-dashed curve corresponds to
$\tau_{1}=\tau_{2}=0.1$ and long-dashed curve corresponds to
$\tau_{1}=\tau_{2}=0.2$. Semi-elliptic DOS is used in calculations.
Figure 2: (Colour online) Dependence of doublon concentration in doubly
orbitally degenerated model at $\Theta/w=0.4$ on the energy parameter
$\Delta/w$ for DOS with asymmetry on a band edge at different values of the
asymmetry parameter $a=0,0.3,0.5,0.99$ from upper to lower curve.
Figure 3: (Colour online) Dependence of doublon concentration in doubly
orbitally degenerated model on temperature at fixed values of the energy
parameter $\Delta/w=0.6$ for the upper group of curves, $\Delta/w=1.2$ for the
middle group of curves and $\Delta/w=1.8$ for the lower group of curves. Solid
curves correspond to the semi-elliptic DOS, long-dashed curves correspond to
simple cubic lattice DOS and short-dashed curves correspond to DOS with
asymmetry.
Figure 4: (Colour online) Dependence of the energy gap on the interaction strength
parameter for the non-degenerated and triply degenerated models. In each pair, the
lower curve corresponds to the non-degenerated model and the upper one corresponds
to the triply degenerated model: for solid curves $\tau_{1}=\tau_{2}=0$, for
long-dashed curves $\tau_{1}=\tau_{2}=0.1$, for short-dashed curves
$\tau_{1}=\tau_{2}=0.2$.
One can see from figures 1–4 that the correlated hopping substantially
influences the doubly occupied states concentration in a system with orbital
degeneracy of the energy levels. Quite naturally, an increase of the
correlated hopping suppresses the conductance through electron localization,
and the doublon concentration decreases; for the non-degenerated model this
effect can be seen from figure 1. In a way, the role of the correlated hopping
does not change when the models with orbital degeneracy are considered, but
those models offer more mechanisms for particle-hole asymmetry than just the
correlated hopping. The DOS form can also affect the probability of doublon
and hole pair creation. To study the effect of the bare-band density of states
asymmetry on the energy spectrum of a model with orbital degeneracy, we make
use of a model DOS with asymmetry on the band edge [41]. If the density of
states is not symmetrically distributed with respect to the subband center,
the doublon concentration is smaller than for a semi-elliptical DOS (see
figure 2), and the DOS form has a more prominent effect at weaker
correlations. The DOS asymmetry was shown to provide more favorable conditions
for the stabilization of ferromagnetism [41]. From our results we conclude that both
DOS asymmetry and correlated hopping favor electron delocalization and
ferromagnetism (see also [44] where a similar effect was observed in the case
of weak correlation). Figure 3 visualizes the temperature dependence of the
doublon concentration. Three families of dependencies are shown, each
corresponding to a fixed value of the correlation strength $\Delta=U-3J_{H}$.
The dependence for the DOS form which corresponds to a simple cubic lattice of
transition metal compounds with double orbital degeneracy of the $e_{g}$ band is
depicted by long-dashed curves. One can see that the model DOS with a tunable
asymmetry parameter provides lower (at high asymmetries) and upper (in the
limiting case of a semi-elliptic DOS) bounds for the doublon concentration obtained with the
sc-lattice DOS. Qualitatively, the temperature dependencies increase
monotonically in the whole temperature interval, but flatten at greater
values of the interaction parameter. Both the occupancy of the sites involved
in the hopping processes (through the correlated hopping of the first type)
and that of the neighboring sites (through the correlated hopping of the
second type) favor the opening of a gap in the energy spectrum and the
stabilization of the insulating state. The energy gap, however, does not open
up until a relatively large increase of the correlated hopping parameters,
hardly achievable in real systems, takes place. At an increase of the
intra-site interaction parameter over a critical
value (dependent on the correlated hopping strength), the energy gap occurs
and the metal-insulator transition takes place (see figure 4 for zero
temperature). An increase of the polar states (holes and doublons)
concentration with temperature modifies the energy spectrum, so the energy
gap opens up and the insulating state is stabilized. On the contrary, the
application of the external pressure or doping in systems such as
(V$_{1-x}$Cr$_{x}$)$_{2}$O$_{3}$ and NiS$_{2-x}$Se$_{x}$ leads to metallization [1] by increasing
the energy subband width and reducing $\frac{\Delta}{w}$ below the critical
value. The critical values for the particular case of no correlated hopping
are in agreement with the results of paper [38] for the non-degenerated
Hubbard model and of paper [46] for the triply degenerated model.
## 4 Conclusions
Continuing the studies of the non-degenerate Hubbard model generalized by taking
into account the correlated hopping of electrons, quasiparticle energy spectra
for the doubly and triply degenerated models are calculated to show that both the
orbital degeneracy and the correlated hopping remove the particle-hole
symmetry and have a strong effect on the electron localization. The energy
spectra of the lower and upper subbands depend on the polar states
concentration and are essentially non-equivalent.
Equations are derived for an arbitrary-temperature numerical calculation of
the doublon concentration for the integer band filling $n=1$ at different
forms of the model density of states. Within the models of strongly correlated
electron systems with orbitally degenerated energy levels, both the correlated
hopping and the degeneracy of energy levels are crucial to describe the
effects of particle-hole asymmetry. The use of the Hubbard $X$-operator
representation appears useful for structuring the model Hamiltonian
according to the occupancies of the state subspaces and allows one to single
out the quasiparticle subbands relevant for the metal-insulator transition. The
expression for the energy gap, which is temperature dependent through the polar
state concentration and also depends on the DOS form, allows one to describe
the phase transition under external actions. Critical values of the
correlation strength parameter for correlation-driven metal-insulator
transition in models with the degeneracy of energy levels and correlated
hopping substantially differ from those of the Hubbard model. Taking into account a
number of microscopic parameters can provide leverage for tuning the analytical
expressions to reveal the relevant microscopic mechanisms of electron
localization in materials with non-equivalent subbands.
## References
* [1] Mott N.F., Metal-insulator transition, Taylor & Francis, London, 1990, doi:10.1201/b12795.
* [2] Gebhard F., The Mott metal-insulator transition: models and methods, Springer, Berlin, 1997,
doi:10.1007/3-540-14858-2.
* [3] Anderson P.W., Phys. Rev., 1961, 124, 41–53, doi:10.1103/PhysRev.124.41.
* [4] Hubbard J., Proc. R. Soc. London, Ser. A, 1963, 276, No. 1365, 238–257, doi:10.1098/rspa.1963.0204.
* [5] Tasaki H., J. Phys.: Condens. Matter, 1998, 10, No. 20, 4353–4378, doi:10.1088/0953-8984/10/20/004.
* [6] Irkhin V.Yu., Irkhin Yu.P., Electronic Structure, Correlation Effects and Physical Properties of 3d- and 4f-metals and their Compounds, Cambridge International Science Publishing, 2007.
* [7] Dutta O., Gajda M., Hauke P., Lewenstein M., Lühmann D-S., Malomed B., Sowiński T., Zakrzewski J., Rep. Prog. Phys., 2015, 78, No. 6, 066001 (47 pages), doi:10.1088/0034-4885/78/6/066001.
* [8] Lieb E.H., Wu F.Y., Phys. Rev. Lett., 1968, 20, 1445–1448, doi:10.1103/PhysRevLett.20.1445.
* [9] Metzner W., Vollhardt D., Phys. Rev. Lett., 1989, 62, No. 3, 324–327, doi:10.1103/PhysRevLett.62.324.
* [10] Gunnarsson O., Koch E., Martin R.M., Phys. Rev. B, 1996, 54, R11026–R11029,
doi:10.1103/PhysRevB.54.R11026.
* [11] Pavarini E., In: DMFT: From Infinite Dimensions to Real Materials Modeling and Simulation 8, Pavarini E., Koch E., Lichtenstein A., Vollhardt D. (Eds.), Forschungszentrum Jülich, 2018, 1–40.
* [12] Mogunov I.A., Lysenko S., Fedianin A.E., Fernández F.E., Rúa A., Kent A.J., Akimov A.V., Kalashnikova A.M., Nat. Commun., 2020, 11, 1690 (8 pages), doi:10.1038/s41467-020-15372-z.
* [13] Rozenberg M., In: Many-Body Methods for Real Materials Modeling and Simulation 9, Pavarini E., Koch E., Zhang S. (Eds.), Forschungszentrum Jülich, 2019, 1–32.
* [14] Wilson J.A., In: The Metallic and Nonmetallic State of Matter, Edwards P.P., Rao C.N.R., Taylor and Francis, London, 1985, 215–261.
* [15] Hu Z., Wu Z., Han C., He J., Ni Z., Chen W., Chem. Soc. Rev., 2018, 47, 3100–3128, doi:10.1039/C8CS00024G.
* [16] Didukh L., Skorenkyy Y., Dovhopyaty Y., Hankevych V., Phys. Rev. B, 2000, 61, No. 12, 7893–7908, doi:10.1103/PhysRevB.61.7893.
* [17] Holczer K., Whetten R.L., Carbon, 1992, 30, No. 8, 1261–1276, doi:10.1016/0008-6223(92)90067-7.
* [18] Poirier D.M., Ohno T.R., Kroll G.H., Benning P.J., Stepniak F., Weaver J.H., Chibante L.P.F., Smalley R.E., Phys. Rev. B, 1993, 47, 9870–9877, doi:10.1103/physrevb.47.9870.
* [19] Satpathy S., Antropov V.P., Andersen O.K., Jepsen O., Gunnarsson O., Liechtenstein A.I., Phys. Rev. B, 1992, 46, 1773–1793, doi:10.1103/PhysRevB.46.1773.
* [20] Lu J.P., Phys. Rev. B, 1994, 49, 5687–5690, doi:10.1103/PhysRevB.49.5687.
* [21] Núñez Regueiro M., Monceau P., Rassat A., Bernier P., Zahab A., Nature, 1991, 354, 289–291, doi:10.1038/354289a0.
* [22] Georges A., Kotliar G., Krauth W., Rozenberg M.J., Rev. Mod. Phys., 1996, 68, No. 1, 13–125,
doi:10.1103/RevModPhys.68.13.
* [23] Schiller A., Phys. Rev. B, 1999, 60, No. 23, 15660–15663, doi:10.1103/PhysRevB.60.15660.
* [24] Falicov L.M., Kimball J.C., Phys. Rev. Lett., 1969, 22, No. 19, 997–999, doi:10.1103/PhysRevLett.22.997.
* [25] Freericks J.K., Zlatić V., Rev. Mod. Phys., 2003, 75, No. 4, 1333–1382, doi:10.1103/RevModPhys.75.1333.
* [26] Shvaika A.M., Phys. Rev. B, 2000, 62, No. 4, 2358–2371, doi:10.1103/PhysRevB.62.2358.
* [27] Shvaika A.M., Phys. Rev. B, 2003, 67, No. 7, 075101 (12 pages), doi:10.1103/PhysRevB.67.075101.
* [28] Shvaika A.M., Phys. Status Solidi B, 2003, 236, No. 2, 368–371, doi:10.1002/pssb.200301681.
* [29] Shvaika A.M., Vorobyov O., Freericks J.K., Devereaux T.P., Physica B, 2005, 359-361, 705–707, doi:10.1016/j.physb.2005.01.200.
* [30] Pakhira N., Freericks J.K., Shvaika A.M., Phys. Rev. B, 2012, 86, 125103 (14 pages),
doi:10.1103/PhysRevB.86.125103.
* [31] Matveev O.P., Shvaika A.M., Freericks J.K., Phys. Rev. B, 2009, 79, No. 11, 115130 (17 pages),
doi:10.1103/PhysRevB.79.115130.
* [32] Matveev O.P., Shvaika A.M., Devereaux T.P., Freericks J.K., J. Supercond. Novel Magn., 2016, 29, No. 3, 581–585, doi:10.1007/s10948-015-3304-2.
* [33] Dobushovskyi D.A., Shvaika A.M., Condens. Matter Phys., 2018, 21, No. 2, 23702 (16 pages), doi:10.5488/CMP.21.23702.
* [34] Shvaika A.M., Condens. Matter Phys., 2014, 17, No. 4, 43704 (14 pages), doi:10.5488/CMP.17.43704.
* [35] Dobushovskyi D.A., Shvaika A.M., Zlatić V., Phys. Rev. B, 2017, 95, No. 12, 125133 (15 pages), doi:10.1103/PhysRevB.95.125133.
* [36] Dobushovskyi D.A., Shvaika A.M., Condens. Matter Phys., 2020, 23, No. 1, 13703 (14 pages),
doi:10.5488/CMP.23.13703.
* [37] Didukh L., Condens. Matter Phys., 1998, 1, No. 1, 125–143, doi:10.5488/CMP.1.1.125.
* [38] Didukh L., Skorenkyy Yu., Kramar O., Condens. Matter Phys., 2008, 11, No. 3, 443–454, doi:10.5488/CMP.11.3.443.
* [39] Fetter A.L., Walecka J.D., Quantum theory of many-particle systems, McGraw-Hill, New York, 1971.
* [40] Amadon J.C., Hirsch J.E., Phys. Rev. B, 1996, 54, 6364–6375, doi:10.1103/PhysRevB.54.6364.
* [41] Vollhardt D., Blümer N., Held K., Kollar M., Schlipf J., Ulmke M., Wahle J., In: Advances in Solid State Physics, Kramer B. (Ed.), 1999, 38, 383–396, Springer, Berlin, Heidelberg, doi:10.1007/BFb0107631.
* [42] Kollar M., Vollhardt D., Phys. Rev. B, 2001, 63, 045107 (5 pages), doi:10.1103/PhysRevB.63.045107.
* [43] Farkasovsky P., Czech. J. Phys., 2004, 54, 419–422, doi:10.1007/s10582-004-0110-7.
* [44] Didukh L., Skorenkyy Yu., Hankevych V., Kramar O., Phys. Rev. B, 2001, 64, No. 14, 144428 (10 pages), doi:10.1103/PhysRevB.64.144428.
* [45] Mielke A., In: Many-Body Physics: From Kondo to Hubbard Modeling and Simulation, Pavarini E., Koch E., Coleman P. (Eds.), Vol. 5, Verlag des Forschungszentrum Jülich, 2015.
* [46] Dovhopyaty Yu., Ukr. J. Phys., 2012, 57, No. 9, 920–928.
* [47] Saito S., Oshiyama A., Phys. Rev. Lett., 1991, 66, 2637–2640, doi:10.1103/PhysRevLett.66.2637.
* [48] Achiba Y., Takashi N., Yoko M., Shinzo S., Haruo S., Kotaro Y., Kozaburo N., Masatsune K., Hajime H., Yusei M., Tadaoki M., Chem. Lett., 1991, 20, 1233–1236, doi:10.1246/cl.1991.1233.
* [49] Hettich R.L., Compton R.N., Ritchie R.H., Phys. Rev. Lett., 1991, 67, 1242–1245,
doi:10.1103/PhysRevLett.67.1242.
* [50] Pederson M.R., Quong A.A., Phys. Rev. B, 1992, 46, 13584–13591,
doi:10.1103/PhysRevB.46.13584.
* [51] Antropov V.P., Gunnarsson O., Jepsen O., Phys. Rev. B, 1992, 46, 13647–13650,
doi:10.1103/PhysRevB.46.13647.
* [52] Pavarini E., In: The Physics of Correlated Insulators, Metals, and Superconductors Modeling and Simulation, Vol. 7, Pavarini E., Koch E., Scalettar R., Martin R. (Eds.), Forschungszentrum Jülich, 2017.
* [53] Shvaika A.M., Acta Phys. Pol. B, 2001, 32, No. 10, 3415–3420.
Electron-hole asymmetry in systems with orbital degeneracy and correlated hopping
Yu. Skorenkyy, O. Kramar, Yu. Dovhopyaty
Ternopil Ivan Puluj National Technical University, 56 Ruska St., 46001 Ternopil, Ukraine
11institutetext: Xin-long Luo 22institutetext: Corresponding author. School of
Artificial Intelligence,
Beijing University of Posts and Telecommunications, P. O. Box 101,
Xitucheng Road No. 10, Haidian District, 100876, Beijing China
22email<EMAIL_ADDRESS>33institutetext: Jia-hui Lv 44institutetext:
School of Artificial Intelligence,
Beijing University of Posts and Telecommunications, P. O. Box 101,
Xitucheng Road No. 10, Haidian District, 100876, Beijing China
44email<EMAIL_ADDRESS>55institutetext: Hang Xiao 66institutetext: School
of Artificial Intelligence,
Beijing University of Posts and Telecommunications, P. O. Box 101,
Xitucheng Road No. 10, Haidian District, 100876, Beijing China
66email<EMAIL_ADDRESS>
# Explicit continuation methods with L-BFGS updating formulas for linearly
constrained optimization problems
Xin-long Luo$\ast$ Jia-hui Lv Hang Xiao
(Received: date / Accepted: date)
###### Abstract
This paper considers an explicit continuation method with the trusty time-
stepping scheme and the limited-memory BFGS (L-BFGS) updating formula (Eptctr)
for the linearly constrained optimization problem. At every iteration, Eptctr
only involves three vector inner products and one matrix-vector product, in
contrast to traditional and representative optimization methods such as
sequential quadratic programming (SQP) or the latest continuation method
Ptctr LLS2020 , which need to solve a quadratic programming subproblem (SQP)
or a linear system of equations (Ptctr). Thus, Eptctr can save much more
computational time than SQP or Ptctr. Numerical results also show that the
consumed time of Eptctr is about one tenth of that of Ptctr and about one
fifteenth to 0.4 percent of that of SQP. Furthermore, Eptctr saves the
storage space of an $(n+m)\times(n+m)$ large-scale matrix, in comparison to
SQP. The required memory of Eptctr is about one fifth of that of SQP. Finally,
we also give the global convergence analysis of the new method under the
standard assumptions.
###### Keywords:
continuation method · trust-region method · SQP · structure-preserving algorithm · generalized projected gradient flow · large-scale optimization
###### MSC:
65J15 65K05 65L05
††journal: Journal of XXX
## 1 Introduction
In this article, we consider the following linearly equality-constrained
optimization problem
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$ $\displaystyle\text{subject
to}\;\;Ax=b,$ (1)
where matrix $A\in\Re^{m\times n}$ and vector $b\in\Re^{m}$ may have random
noise. This problem has many applications in engineering fields such as the
visual-inertial navigation of an unmanned aerial vehicle maintaining the
horizontal flight CMFO2009 ; LLS2020 , and there are many practical methods to
solve it such as the sequential quadratic programming (SQP) method
Bertsekas2018 ; NW1999 or the penalty function method FM1990 .
For the constrained optimization problem (1), the continuation method AG2003 ;
CKK2003 ; Goh2011 ; KLQCRW2008 ; Pan1992 ; Tanabe1980 is another method other
than the traditional optimization method such as SQP or the penalty function
method. The advantage of the continuation method over the SQP method is that
the continuation method is capable of finding many local optimal points of the
non-convex optimization problem by tracking its trajectory, and it is even
possible to find the global optimal solution BB1989 ; Schropp2000 ;
Yamashita1980 . However, the computational efficiency of the continuation
method may be lower than that of SQP. Recently, Luo, Lv and Sun LLS2020 gave
a continuation method with the trusty time-stepping scheme whose consumed
time is about one fifth of that of SQP for the linearly constrained
optimization problem (1). Their method only needs to solve a linear system of
equations with an $n\times n$ symmetric positive definite coefficient matrix at every
iteration, which involves about $\frac{1}{3}n^{3}$ flops. SQP needs to solve a
linear system of equations with an $(m+n)\times(m+n)$ coefficient matrix,
which involves about $\frac{2}{3}(m+n)^{3}$ flops. In order to improve the
computational efficiency further and save the storage of the continuation
method LLS2020 for the large-scale optimization problem, we consider a
special limited-memory BFGS updating formula and the trusty time-stepping
scheme in this article.
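The flop counts quoted above already suggest the gain; a quick arithmetic check of the per-iteration ratio for one representative size (the values of $n$ and $m$ are illustrative):

```python
def flops_ptctr(n):
    """Dominant cost of one n x n symmetric solve, about n^3 / 3 flops."""
    return n**3 / 3.0

def flops_sqp(n, m):
    """Dominant cost of one (m+n) x (m+n) solve, about 2 (m+n)^3 / 3 flops."""
    return 2.0 * (m + n)**3 / 3.0

# for n = 1000 variables and m = 500 constraints the SQP linear algebra
# costs about 2 * 1.5**3 = 6.75 times that of Ptctr per iteration
ratio = flops_sqp(1000, 500) / flops_ptctr(1000)
assert abs(ratio - 6.75) < 1e-9
```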
The rest of the paper is organized as follows. In section 2, we give a new
continuation method with the trusty time-stepping scheme and the L-BFGS
updating formula for the linearly equality-constrained optimization problem
(1). In section 3, we analyze the global convergence of this new method. In
section 4, we report some promising numerical results of the new method, in
comparison to the traditional optimization method (SQP) and the latest
continuation method (Ptctr) for some large-scale problems. Finally, we give
some discussions and conclusions in section 5.
## 2 The explicit continuation method with L-BFGS updating formulas
In this section, we construct an explicit continuation method with the
adaptive time-stepping scheme based on the trust-region updating strategy
Yuan2015 for the linearly equality-constrained optimization problem (1).
Firstly, we construct a generalized projected gradient flow based on the KKT
conditions of the linearly constrained optimization problem. Then, in order to
efficiently follow the generalized gradient flow, we construct an explicit
continuation method with an adaptive time-stepping scheme for this special
system of ordinary differential equations (ODEs). Furthermore, we give a
preprocessing method for infeasible initial points.
### 2.1 The generalized projected gradient flow
For the linearly constrained optimization problem (1), it is well known that
its optimal solution $x^{\ast}$ needs to satisfy the Karush-Kuhn-Tucker
conditions (p. 328, NW1999 ) as follows:
$\displaystyle\nabla_{x}L(x,\,\lambda)$ $\displaystyle=\nabla
f(x)+A^{T}\lambda=0,$ (2) $\displaystyle Ax-b$ $\displaystyle=0,$ (3)
where the Lagrangian function $L(x,\,\lambda)$ is defined by
$\displaystyle L(x,\,\lambda)=f(x)+\lambda^{T}(Ax-b).$ (4)
Similarly to the method of the negative gradient flow for the unconstrained
optimization problem LKLT2009 , from the first-order necessary conditions
(2)-(3), we can construct a dynamical system of differential-algebraic
equations for problem (1) LL2010 ; Luo2012 ; LLW2013 ; Schropp2003 as
follows:
$\displaystyle\frac{dx}{dt}=-\nabla_{x}L(x,\,\lambda)=-\left(\nabla
f(x)+A^{T}\lambda\right),$ (5) $\displaystyle Ax-b=0.$ (6)
By differentiating the algebraic constraint (6) with respect to $t$ and
combining it with the differential equation (5), we obtain
$\displaystyle A\frac{dx}{dt}=-A\left(\nabla f(x)+A^{T}\lambda\right)=-A\nabla
f(x)-AA^{T}\lambda=0.$ (7)
If we further assume that matrix $A$ has full row rank, from equation (7), we
obtain
$\displaystyle\lambda=-\left(AA^{T}\right)^{-1}A\nabla f(x).$ (8)
By replacing $\lambda$ of equation (8) into equation (5), we obtain the
dynamical system of ordinary differential equations (ODEs) as follows:
$\displaystyle\frac{dx}{dt}=-\left(I-A^{T}\left(AA^{T}\right)^{-1}A\right)\nabla
f(x).$ (9)
Thus, we also obtain the projected gradient flow for the constrained
optimization problem Tanabe1980 .
For convenience, we denote the projection matrix $P$ as
$\displaystyle P=I-A^{T}\left(AA^{T}\right)^{-1}A.$ (10)
It is not difficult to verify $P^{2}=P$ and $AP=0$. That is to say, $P$ is a
symmetric projection matrix and its eigenvalues are 0 or 1. From Theorem 2.3.1
in p. 73 of GV2013 , we know that its matrix 2-norm is
$\displaystyle\|P\|=1.$ (11)
We denote $P^{+}$ as the Moore-Penrose generalized inverse of $P$ (p. 11,
SY2006 ). Since $P$ is symmetric and its eigenvalues are 0 or 1, it is not
difficult to verify
$\displaystyle P^{+}=P.$ (12)
Thus, for a full rank matrix $B\in\Re^{n\times n}$, we obtain the generalized
inverse $(PB)^{+}$ of $PB$ as follows:
$\displaystyle(PB)^{+}=B^{+}P^{+}=B^{-1}P.$ (13)
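To make the algebra concrete, the following minimal sketch (hypothetical data, not from the paper) builds $P$ from equation (10) for a $1\times 3$ full-row-rank matrix $A=[1,\,2,\,2]$ with exact rational arithmetic, and checks the stated properties $P^{2}=P$ and $AP=0$:

```python
from fractions import Fraction

def matmul(A, B):
    # Naive dense matrix product, sufficient for small examples.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# A hypothetical 1x3 full-row-rank constraint matrix, so AA^T is a scalar.
A = [[Fraction(1), Fraction(2), Fraction(2)]]
At = [list(r) for r in zip(*A)]                      # A^T
AAt_inv = Fraction(1) / matmul(A, At)[0][0]          # (AA^T)^{-1} for m = 1
I3 = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
# P = I - A^T (AA^T)^{-1} A, as in equation (10).
corr = [[AAt_inv * v for v in row] for row in matmul(At, A)]
P = [[I3[i][j] - corr[i][j] for j in range(3)] for i in range(3)]

assert matmul(P, P) == P                # idempotent: P^2 = P
assert matmul(A, P) == [[0, 0, 0]]      # AP = 0, so P maps into the null space of A
```

The exact arithmetic makes the idempotency check hold without floating-point tolerances.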
Similarly to the generalized gradient flow for an unconstrained optimization
problem (p. 361, HM1996 ), from the projected gradient flow (9), we can
construct the generalized projected gradient flow for the constrained
optimization problem (1) as follows:
$\displaystyle\frac{dx}{dt}=-(PH(x)P)P\nabla f(x),\;x(0)=x_{0},$ (14)
where $H(x)$ is a symmetric positive definite matrix for any $x\in\Re^{n}$.
Here, $H(x)$ may be selected as the inverse of the Hessian matrix
$\nabla^{2}f(x)$ of $f(x)$ and $PH(x)P$ can be regarded as a pre-conditioner
of $P\nabla f(x)$ to mitigate the stiffness of the ODEs (14). Consequently, we
can adopt the explicit numerical method to compute the trajectory of the ODEs
(14) efficiently LXL2020 ; LY2021 .
###### Remark 1
If $x(t)$ is the solution of the ODEs (14), it is not difficult to verify that
$x(t)$ satisfies $A(dx/dt)=0$. That is to say, if the initial point $x_{0}$
satisfies $Ax_{0}=b$, the solution $x(t)$ of the generalized projected
gradient flow (14) also satisfies $Ax(t)=b,\;\forall t\geq 0$. This property
is very useful when we construct a structure-preserving algorithm HLW2006 ;
Simos2013 to follow the trajectory of the ODEs (14) and obtain its
equilibrium point $x^{\ast}$.
###### Remark 2
If we assume that $x(t)$ is the solution of the ODEs (14), from equations
(10)-(11) and the positive definite property of $H(x)$, we obtain
$\displaystyle\frac{df(x)}{dt}$ $\displaystyle=\left(\nabla
f(x)\right)^{T}\frac{dx}{dt}=-(\nabla f(x))^{T}PH(x)P\nabla f(x)$
$\displaystyle=-(P\nabla f(x))^{T}H(x)(P\nabla f(x))\leq 0.$
That is to say, $f(x)$ is monotonically decreasing along the solution curve
$x(t)$ of the dynamical system (14). Furthermore, the solution $x(t)$
converges to $x^{\ast}$ when $f(x)$ is lower bounded and $t$ tends to infinity
HM1996 ; Schropp2000 ; Tanabe1980 , where $x^{\ast}$ satisfies the first-order
Karush-Kuhn-Tucker conditions (2)-(3). Thus, we can follow the trajectory
$x(t)$ of the ODEs (14) to obtain its equilibrium point $x^{\ast}$, which
together with the multiplier $\lambda^{\ast}$ of equation (8) is a saddle
point of the Lagrangian function (4).
### 2.2 The explicit continuation method
The solution curve of the degenerate ODEs (14) cannot be efficiently followed
over an infinite interval by the traditional ODE methods AP1998 ; BCP1996 ;
LF2000 , so one needs to construct a particular method for this problem. We
apply the first-order implicit Euler method SGT2003 to the ODEs (14), then we
obtain
$\displaystyle x_{k+1}=x_{k}-\Delta t_{k}(PH(x_{k+1}))(P\nabla f(x_{k+1})),$ (15)
where $\Delta t_{k}$ is the time-stepping size.
Since the system of equations (15) is nonlinear and cannot be solved directly,
we seek an explicit approximation formula for it. We denote
$s_{k}=x_{k+1}-x_{k}$. By using the first-order Taylor expansion, we have the
linear approximation $\nabla f(x_{k})+\nabla^{2}f(x_{k})s_{k}$ of $\nabla
f(x_{k+1})$. By substituting it into equation (15) and using the zero-order
approximation $H(x_{k})$ of $H(x_{k+1})$, we have
$\displaystyle s_{k}$ $\displaystyle\approx-\Delta
t_{k}(PH(x_{k}))\left(P\left(\nabla
f(x_{k})+\nabla^{2}f(x_{k})s_{k}\right)\right)$ $\displaystyle=-\Delta
t_{k}(PH(x_{k}))(P\nabla f(x_{k}))-\Delta
t_{k}P(H(x_{k})P)(P\nabla^{2}f(x_{k}))s_{k}.$ (16)
From equation (15) and $P^{2}=P$, we have $Ps_{k}=s_{k}$. Let
$H(x_{k})=(\nabla^{2}f(x_{k}))^{-1}$. Then, we have
$H(x_{k})P=(P\nabla^{2}f(x_{k}))^{+}$. Thus, we regard
$\displaystyle
P(H(x_{k})P)(P\nabla^{2}f(x_{k})Ps_{k})=P(P\nabla^{2}f(x_{k}))^{+}(P\nabla^{2}f(x_{k}))Ps_{k}\approx
Ps_{k}=s_{k}.$ (17)
By substituting it into equation (16), we obtain the explicit continuation
method as follows:
$\displaystyle s_{k}=-\frac{\Delta t_{k}}{1+\Delta t_{k}}(PH_{k})(Pg_{k}),$
(18) $\displaystyle x_{k+1}=x_{k}+s_{k},$ (19)
where $g_{k}=\nabla f(x_{k})$ and $H_{k}=(\nabla^{2}f(x_{k}))^{-1}$ or its
quasi-Newton approximation in the projective space
$S_{p}^{k}=\\{x:\;x=x_{k}+Pd,\,d\in\Re^{n}\\}$.
If we let the projection matrix $P=I$, the formula (18) is equivalent to the
explicit continuation method given by Luo, Xiao and Lv LXL2020 for nonlinear
equations. The explicit continuation method (18)-(19) is similar to the
projected damped Newton method if we let $\alpha_{k}=\Delta t_{k}/(1+\Delta
t_{k})$ in equation (18). However, from the viewpoint of the ODE method, they
are different. The projected damped Newton method is obtained by applying the
explicit Euler scheme to the generalized projected gradient flow (14), and its
time-stepping size $\alpha_{k}$ is restricted by numerical stability
SGT2003 . That is to say, a large time-stepping size $\alpha_{k}$ cannot be
adopted in the steady-state phase.
The explicit continuation method (18)-(19) is obtained by the implicit Euler
approximation method applied to the generalized projected gradient flow (14),
and its time-stepping size $\Delta t_{k}$ is not restricted by the numerical
stability. Therefore, the large time-stepping size can be adopted in the
steady-state phase for the explicit continuation method (18)-(19), and it
mimics the Newton method near the equilibrium solution $x^{\ast}$ such that it
has a fast local convergence rate. Above all, the new step size
$\alpha_{k}=\Delta t_{k}/(\Delta t_{k}+1)$ makes it convenient to adopt the
trust-region updating technique for adaptively adjusting the time-stepping
size $\Delta t_{k}$, such that the explicit continuation method (18)-(19)
accurately tracks the trajectory of the generalized projected gradient flow in
the transient-state phase and achieves a fast convergence rate near the
equilibrium point $x^{\ast}$.
###### Remark 3
From equation (18) and the property $AP=0$ of the projection matrix $P$, it is
not difficult to verify $As_{k}=0$. Thus, if the initial point $x_{0}$
satisfies the linear constraint $Ax_{0}=b$, the point $x_{k}$ also satisfies
the linear constraint $Ax_{k}=b$. That is to say, the explicit continuation
method (18)-(19) is a structure-preserving method.
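The structure-preserving property can be seen on a small example. The following sketch (hypothetical toy problem, with $H_{k}=I$ and a fixed time-stepping size) applies the step (18)-(19) to $\min\;\tfrac{1}{2}\|x\|^{2}$ subject to $x_{1}+x_{2}+x_{3}=3$: every step satisfies $As_{k}=0$, so $Ax_{k}=b$ is kept along the whole iteration.

```python
# Minimal sketch of steps (18)-(19) with H_k = I for the hypothetical
# constraint x1 + x2 + x3 = 3 (A = [1, 1, 1], b = 3).

def proj(g):
    # P g for A = [1, 1, 1]: subtract the mean of the components.
    mu = sum(g) / 3.0
    return [gi - mu for gi in g]

x = [3.0, 0.0, 0.0]                    # feasible start: sum(x) = 3
dt = 1.0                               # a fixed time-stepping size
for _ in range(50):
    pg = proj(x)                       # projected gradient of f(x) = 0.5*||x||^2
    a = dt / (1.0 + dt)
    s = [-a * v for v in pg]           # explicit continuation step (18)
    assert abs(sum(s)) < 1e-12         # A s_k = 0 up to rounding
    x = [xi + si for xi, si in zip(x, s)]

assert abs(sum(x) - 3.0) < 1e-9        # feasibility preserved: A x_k = b
assert all(abs(xi - 1.0) < 1e-8 for xi in x)   # converged toward x* = (1, 1, 1)
```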
### 2.3 The L-BFGS quasi-Newton updating formula
For the large-scale problem, the numerical evaluation of the Hessian matrix
$\nabla^{2}f(x_{k})$ consumes much time and stores an $n\times n$ matrix. In
order to overcome these two shortcomings, we use the L-BFGS quasi-Newton
formula (BNY1987 ; Goldfarb1970 or pp. 222-230, NW1999 ) to approximate the
generalized inverse $H(x_{k})P$ of $P\nabla^{2}f(x_{k})$. Recently, Ullah,
Sabi and Shah USS2020 gave an efficient L-BFGS updating formula for the
system of monotone nonlinear equations. Here, in order to suit the generalized
projected gradient flow (14), we revise their L-BFGS updating formula as
$\displaystyle
H_{k+1}=\begin{cases}I-\frac{y_{k}s_{k}^{T}+s_{k}y_{k}^{T}}{y_{k}^{T}s_{k}}+2\frac{y_{k}^{T}y_{k}}{(y_{k}^{T}s_{k})^{2}}s_{k}s_{k}^{T},\;\text{if}\;|s_{k}^{T}y_{k}|>\theta\|s_{k}\|^{2},\\\
I,\;\text{otherwise}.\end{cases}$ (20)
where $s_{k}=x_{k+1}-x_{k},\;y_{k}=P\nabla f(x_{k+1})-P\nabla f(x_{k})$ and
$\theta$ is a small positive constant such as $\theta=10^{-6}$. The initial
matrix $H_{0}$ can be simply selected as the identity matrix. When
$|s_{k}^{T}y_{k}|>\theta\|s_{k}\|^{2}$, from equation (20), it is not
difficult to verify
$\displaystyle H_{k+1}y_{k}=\frac{y_{k}^{T}y_{k}}{y_{k}^{T}s_{k}}s_{k}.$
That is to say, $H_{k+1}$ satisfies the scaling quasi-Newton property. By
using the Sherman-Morrison-Woodbury formula, from equation (20), when
$|s_{k}^{T}y_{k}|>\theta\|s_{k}\|^{2}$, we have
$\displaystyle
B_{k+1}=H_{k+1}^{-1}=I-\frac{s_{k}s_{k}^{T}}{s_{k}^{T}s_{k}}+\frac{y_{k}y_{k}^{T}}{y_{k}^{T}y_{k}}.$
The L-BFGS updating formula (20) has some nice properties such as the
symmetric positive definite property and the positive lower bound of its
eigenvalues.
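As a quick numerical sanity check of the update (20) on hand-picked 3-vectors (placeholder data, not from the paper), the sketch below applies $H_{k+1}$ matrix-free in the case $|s_{k}^{T}y_{k}|>\theta\|s_{k}\|^{2}$ and verifies the scaling quasi-Newton property $H_{k+1}y_{k}=({y_{k}^{T}y_{k}}/{y_{k}^{T}s_{k}})\,s_{k}$:

```python
def Hv(s, y, v):
    # Apply H_{k+1} = I - (y s^T + s y^T)/(y^T s) + 2 (y^T y)/(y^T s)^2 s s^T
    # to a vector v, without forming the matrix.
    ys = sum(a * b for a, b in zip(y, s))
    yy = sum(a * a for a in y)
    sv = sum(a * b for a, b in zip(s, v))
    yv = sum(a * b for a, b in zip(y, v))
    return [vi - (yi * sv + si * yv) / ys + 2.0 * yy * sv / ys**2 * si
            for vi, yi, si in zip(v, y, s)]

s = [1.0, 2.0, 0.0]
y = [0.5, 1.0, 1.0]
ys = sum(a * b for a, b in zip(y, s))    # y^T s = 2.5, so the update branch applies
yy = sum(a * a for a in y)               # y^T y = 2.25

lhs = Hv(s, y, y)                        # H_{k+1} y
rhs = [yy / ys * si for si in s]         # (y^T y / y^T s) s
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```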
###### Lemma 1
Matrix $H_{k+1}$ defined by equation (20) is symmetric positive definite and
its eigenvalues are greater than $1/2$.
Proof. (i) For any nonzero vector $z\in\Re^{n}$, from equation (20), when
$|s_{k}^{T}y_{k}|>\theta\|s_{k}\|^{2}$, we have
$\displaystyle
z^{T}H_{k+1}z=\|z\|^{2}-2{(z^{T}y_{k})(z^{T}s_{k})}/{y_{k}^{T}s_{k}}+2(z^{T}s_{k})^{2}{\|y_{k}\|^{2}}/{(y_{k}^{T}s_{k})^{2}}$
$\displaystyle\quad=\left(\|z\|-\left|{z^{T}s_{k}}/{y_{k}^{T}s_{k}}\right|\|y_{k}\|\right)^{2}+2\|z\|\left|{z^{T}s_{k}}/{y_{k}^{T}s_{k}}\right|\|y_{k}\|$
$\displaystyle\quad\quad-2{(z^{T}y_{k})(z^{T}s_{k})}/{y_{k}^{T}s_{k}}+\|y_{k}\|^{2}(z^{T}s_{k}/y_{k}^{T}s_{k})^{2}\geq
0.$ (21)
In the last inequality of equation (21), we use the Cauchy-Schwarz inequality
$|z^{T}y_{k}|\leq\|z\|\|y_{k}\|$, whose equality holds if and only if
$z=ty_{k}$. When $z=ty_{k}$, from equation (21), we have
$z^{T}H_{k+1}z=t^{2}\|y_{k}\|^{2}=\|z\|^{2}>0$. When $z^{T}s_{k}=0$, from
equation (21), we also have $z^{T}H_{k+1}z=\|z\|^{2}>0$. Therefore, we
conclude that $H_{k+1}$ is a symmetric positive definite matrix when
$|s_{k}^{T}y_{k}|>\theta\|s_{k}\|^{2}$. From equation (20), we immediately
conclude that $H_{k+1}$ is a symmetric positive definite matrix since
$H_{k+1}=I$ when $|s_{k}^{T}y_{k}|\leq\theta\|s_{k}\|^{2}$.
(ii) It is not difficult to see that there exist at least $n-2$ linearly
independent vectors $z_{1},\,z_{2},\,\ldots,\,z_{n-2}$ such that
$z_{i}^{T}s_{k}=0,\,z_{i}^{T}y_{k}=0\,(i=1:(n-2))$ hold. That is to say,
matrix $H_{k+1}$ defined by equation (20) has at least $(n-2)$ linearly
independent eigenvectors whose corresponding eigenvalues are 1. We denote the
other two eigenvalues of $H_{k+1}$ as $\mu_{i}^{k+1}\,(i=1:2)$ and their
corresponding eigenvectors as $p_{1}$ and $p_{2}$, respectively. Then, from
equation (20), we know that the eigenvectors $p_{i}\,(i=1:2)$ can be
represented as $p_{i}=y_{k}+\beta_{i}s_{k}$ when $\mu_{i}^{k+1}\neq
1\,(i=1:2)$. From equation (20) and
$H_{k+1}p_{i}=\mu_{i}^{k+1}p_{i}\,(i=1:2)$, we have
$\displaystyle-\left(\mu_{i}^{k+1}+\beta_{i}\frac{s_{k}^{T}s_{k}}{s_{k}^{T}y_{k}}\right)y_{k}+\left(\frac{y_{k}^{T}y_{k}}{y_{k}^{T}s_{k}}+2\beta_{i}\frac{(y_{k}^{T}y_{k})(s_{k}^{T}s_{k})}{(y_{k}^{T}s_{k})^{2}}-\mu_{i}^{k+1}\beta_{i}\right)s_{k}=0,\;i=1:2.$
(22)
When $y_{k}=ts_{k}$, from equation (20), we have $H_{k+1}=I$. In this case, we
conclude that the eigenvalues of $H_{k+1}$ are greater than $1/2$. When
vectors $y_{k}$ and $s_{k}$ are linearly independent, from equation (22), we
have
$\displaystyle\mu_{i}^{k+1}+\beta_{i}{s_{k}^{T}s_{k}}/{s_{k}^{T}y_{k}}=0,$
$\displaystyle{y_{k}^{T}y_{k}}/{y_{k}^{T}s_{k}}+2\beta_{i}{(y_{k}^{T}y_{k})(s_{k}^{T}s_{k})}/{(y_{k}^{T}s_{k})^{2}}-\mu_{i}^{k+1}\beta_{i}=0,\;i=1:2.$
That is to say, $\mu_{i}^{k+1}\,(i=1:2)$ are the two solutions of the
following equation:
$\displaystyle\mu^{2}-2\mu(y_{k}^{T}y_{k})(s_{k}^{T}s_{k})/(s_{k}^{T}y_{k})^{2}+(y_{k}^{T}y_{k})(s_{k}^{T}s_{k})/(s_{k}^{T}y_{k})^{2}=0.$
(23)
Consequently, from equation (23), we obtain
$\displaystyle\mu_{1}^{k+1}+\mu_{2}^{k+1}=2(y_{k}^{T}y_{k})(s_{k}^{T}s_{k})/(s_{k}^{T}y_{k})^{2},\;\mu_{1}^{k+1}\mu_{2}^{k+1}=(y_{k}^{T}y_{k})(s_{k}^{T}s_{k})/(s_{k}^{T}y_{k})^{2}.$
(24)
From equation (24), it is not difficult to obtain
$\displaystyle{1}/{\mu_{1}^{k+1}}+{1}/{\mu_{2}^{k+1}}=2,\;\mu_{i}^{k+1}>0,\;i=1:2.$
(25)
Therefore, from equation (25), we conclude that
$\mu_{i}^{k+1}>\frac{1}{2}\,(i=1:2)$. Consequently, the eigenvalues of
$H_{k+1}$ are greater than 1/2. ∎
If $s_{k-1}$ is obtained from the explicit continuation method (18), we have
$Ps_{k-1}=s_{k-1}$ since $P^{2}=P$. By combining it with the L-BFGS updating
formula (20), the explicit continuation method (18)-(19) can be simplified by
$\displaystyle
d_{k}=\begin{cases}-p_{g_{k}},\;\text{if}\;|s_{k-1}^{T}y_{k-1}|\leq\theta\|s_{k-1}\|^{2},\\\
-p_{g_{k}}+\frac{y_{k-1}(s_{k-1}^{T}p_{g_{k}})+s_{k-1}(y_{k-1}^{T}p_{g_{k}})}{y_{k-1}^{T}s_{k-1}}-2\frac{\|y_{k-1}\|^{2}(s_{k-1}^{T}p_{g_{k}})}{(y_{k-1}^{T}s_{k-1})^{2}}s_{k-1},\,\text{otherwise},\end{cases}$
(26) $\displaystyle s_{k}=\frac{\Delta t_{k}}{1+\Delta
t_{k}}d_{k},\;x_{k+1}=x_{k}+s_{k},$ (27)
where $p_{g_{k}}=Pg_{k}=P\nabla f(x_{k})$ and $y_{k-1}=P\nabla
f(x_{k})-P\nabla f(x_{k-1})$. Thus, we do not need to store the matrix
$H_{k}$ in practical computation. Furthermore, it only requires three inner
products of vector pairs and one matrix-vector product ($p_{g_{k}}=Pg_{k}$)
to obtain the trial step $s_{k}$ and involves $O((n-m)n)$ flops when we use
the QR decomposition or the singular value decomposition to obtain the
projection matrix $P$ in subsection 2.4.
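The matrix-free evaluation of the direction in formula (26) can be sketched as follows; the vectors are hand-picked placeholders (not data from the paper) satisfying $|s^{T}y|>\theta\|s\|^{2}$, and the result is cross-checked against forming $H_{k+1}$ from (20) explicitly:

```python
def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def direction(pg, s, y):
    # d_k = -H_{k+1} p_gk from (26), using only vector inner products.
    ys, sp, yp, yy = dot(y, s), dot(s, pg), dot(y, pg), dot(y, y)
    return [-p + (yi * sp + si * yp) / ys - 2.0 * yy * sp / ys**2 * si
            for p, yi, si in zip(pg, y, s)]

pg = [1.0, 0.0, 2.0]
s = [1.0, 2.0, 0.0]
y = [0.5, 1.0, 1.0]
d = direction(pg, s, y)

# Cross-check: form H_{k+1} from (20) explicitly and compute -H pg.
ys, yy = dot(y, s), dot(y, y)
H = [[(1.0 if i == j else 0.0)
      - (y[i] * s[j] + s[i] * y[j]) / ys
      + 2.0 * yy / ys**2 * s[i] * s[j] for j in range(3)] for i in range(3)]
d_ref = [-dot(row, pg) for row in H]
assert all(abs(a - b) < 1e-12 for a, b in zip(d, d_ref))
```

Only inner products and vector additions are needed, which is where the $O(n)$ per-direction cost quoted above comes from (the remaining cost is the single product $p_{g_{k}}=Pg_{k}$).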
### 2.4 The treatments of infeasible initial points and projection matrices
We need to compute the projected gradient $Pg_{k}$ at every iteration in the
updating formula (26). In order to reduce the computational complexity, we use
the QR factorization (pp.276-278, GV2013 ) to factor $A^{T}$ into a product of
an orthogonal matrix $Q\in\Re^{n\times n}$ and an upper triangular matrix
$R\in\Re^{n\times m}$:
$\displaystyle
A^{T}=QR=\begin{bmatrix}Q_{1}|Q_{2}\end{bmatrix}\begin{bmatrix}R_{1}\\\
0\end{bmatrix},$ (28)
where $Q_{1}=Q(1:n,\,1:m),\;Q_{2}=Q(1:n,\,m+1:n)$, and $R_{1}=R(1:m,\,1:m)$ is
upper triangular and nonsingular. Then, from equations (10) and (28), we
simplify the projection matrix $P$ as
$\displaystyle P=I-Q_{1}Q_{1}^{T}=Q_{2}Q_{2}^{T}.$ (29)
In practical computation, we adopt different formulas for the projected
gradient according to whether $m\leq n/2$ or $m>n/2$. Thus, we give the
computational formula of the projected gradient $Pg_{k}$ as follows:
$\displaystyle
Pg_{k}=\begin{cases}g_{k}-Q_{1}\left(Q_{1}^{T}g_{k}\right),\;\text{if}\;m\leq\frac{1}{2}n,\\\
Q_{2}\left(Q_{2}^{T}g_{k}\right),\;\text{otherwise}.\end{cases}$ (30)
For a real-world optimization problem (1), we may encounter an infeasible
initial point $x_{0}$, that is, an initial point which does not satisfy the
constraint $Ax=b$. We handle this problem by solving the following projection
problem:
$\displaystyle\min_{x\in\Re^{n}}\;\left\|x-x_{0}\right\|^{2}\;\text{subject
to}\hskip 5.69054ptQ_{1}^{T}x=b_{r},$ (31)
where $b_{r}=\left(R_{1}R_{1}^{T}\right)^{-1}\left(R_{1}b\right)$. By using
the Lagrangian multiplier method and the QR factorization (28) of matrix
$A^{T}$ to solve problem (31), we obtain the initial feasible point
$x_{0}^{F}$ of problem (1) as follows:
$\displaystyle
x_{0}^{F}=x_{0}-Q_{1}\left(Q_{1}^{T}Q_{1}\right)^{-1}\left(Q_{1}^{T}x_{0}-b_{r}\right)=x_{0}-Q_{1}\left(Q_{1}^{T}x_{0}-b_{r}\right).$
(32)
For convenience, we set $x_{0}=x_{0}^{F}$ in line 4, Algorithm 1.
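A minimal sketch of this feasibility preprocessing for the hypothetical constraint $x_{1}+x_{2}+x_{3}=3$ (i.e. $A=[1,\,1,\,1]$, $b=3$) is given below; instead of the QR factors $Q_{1}$ and $b_{r}$ used in (32), it uses the equivalent closed form $x_{0}-A^{T}(AA^{T})^{-1}(Ax_{0}-b)$, which solves the projection problem (31) directly:

```python
# Feasibility preprocessing for A = [1, 1, 1], b = 3 (hypothetical example).

def make_feasible(x0, b=3.0):
    r = (sum(x0) - b) / 3.0            # (A A^T)^{-1} (A x0 - b), since A A^T = 3
    return [xi - r for xi in x0]       # x0 - A^T r: nearest point with A x = b

x0 = [5.0, -1.0, 2.0]                  # infeasible: sum(x0) = 6 != 3
xF = make_feasible(x0)
assert abs(sum(xF) - 3.0) < 1e-12      # xF now satisfies A x = b
```

For this data the result is $x_{0}^{F}=(4,\,-2,\,1)$, the closest feasible point to $x_{0}$ in the Euclidean norm.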
### 2.5 The trusty time-stepping scheme
Another issue is how to adaptively adjust the time-stepping size $\Delta
t_{k}$ at every iteration. We borrow the adjustment method of the trust-region
radius from the trust-region method due to its robust convergence and fast
local convergence CGT2000 . After the preprocessing, the initial point
$x_{0}$ is feasible. According to the structure-preserving property of the
explicit continuation method (18)-(20), $x_{k+1}$ preserves the feasibility.
That is to say, $x_{k+1}$ satisfies $Ax_{k+1}=b$. Therefore, we use the objective
function $f(x)$ instead of the nonsmooth penalty function
$f(x)+\sigma\|Ax-b\|$ as the cost function.
When we use the trust-region updating strategy to adaptively adjust time-
stepping size $\Delta t_{k}$ Higham1999 , we need to construct a local
approximation model of the objective $f(x)$ around $x_{k}$. Here, we adopt the
following quadratic function as its approximation model:
$\displaystyle
q_{k}(x_{k}+s)=f(x_{k})+s^{T}g_{k}+\frac{1}{2}s^{T}H_{k}^{-1}s.$ (33)
In practical computation, we do not store the matrix $H_{k}$. Thus, we use the
explicit continuation method (18)-(20) and regard $(H_{k}P)(H_{k}P)^{+}\approx
I$ to simplify the quadratic model difference $q_{k}(x_{k}+s_{k})-q_{k}(x_{k})$
as follows:
$\displaystyle m_{k}(s_{k})=g_{k}^{T}s_{k}-\frac{0.5\Delta t_{k}}{1+\Delta
t_{k}}g_{k}^{T}s_{k}=\frac{1+0.5\Delta t_{k}}{1+\Delta
t_{k}}g_{k}^{T}s_{k}\approx q_{k}(x_{k}+s_{k})-q_{k}(x_{k}),$ (34)
where $g_{k}=\nabla f(x_{k})$. We enlarge or reduce the time-stepping size
$\Delta t_{k}$ at every iteration according to the following ratio:
$\displaystyle\rho_{k}=\frac{f(x_{k})-f(x_{k+1})}{m_{k}(0)-m_{k}(s_{k})}.$
(35)
A particular adjustment strategy is given as follows:
$\displaystyle\Delta t_{k+1}=\begin{cases}\gamma_{1}\Delta t_{k},&{if\hskip
2.84526pt0\leq\left|1-\rho_{k}\right|\leq\eta_{1},}\\\ \Delta t_{k},&{if\hskip
2.84526pt\eta_{1}<\left|1-\rho_{k}\right|<\eta_{2},}\\\ \gamma_{2}\Delta
t_{k},&{if\hskip 2.84526pt\left|1-\rho_{k}\right|\geq\eta_{2},}\end{cases}$
(36)
where the constants are selected as
$\eta_{1}=0.25,\;\gamma_{1}=2,\;\eta_{2}=0.75,\;\gamma_{2}=0.5$ according to
numerical experiments. When $\rho_{k}\geq\eta_{a}$, we accept the trial step
$s_{k}$ and let $x_{k+1}=x_{k}+s_{k}$, where $\eta_{a}$ is a small positive
number such as $\eta_{a}=1.0\times 10^{-6}$. Otherwise, we discard it and let
$x_{k+1}=x_{k}$.
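The adjustment strategy (36), with the constants $\eta_{1}=0.25,\;\gamma_{1}=2,\;\eta_{2}=0.75,\;\gamma_{2}=0.5$ stated above, can be sketched as a small function (a direct transcription, not code from the paper):

```python
# Time-stepping adjustment (36): enlarge Delta t_k when the model (34)
# predicts the actual reduction well, shrink it when the prediction is poor.

def adjust_dt(dt, rho, eta1=0.25, eta2=0.75, gamma1=2.0, gamma2=0.5):
    r = abs(1.0 - rho)
    if r <= eta1:
        return gamma1 * dt             # model predicts well: enlarge Delta t_k
    if r >= eta2:
        return gamma2 * dt             # model predicts poorly: reduce Delta t_k
    return dt                          # otherwise keep Delta t_k unchanged

assert adjust_dt(0.1, 1.1) == 0.2      # |1 - rho| = 0.1 <= 0.25
assert adjust_dt(0.1, 2.0) == 0.05     # |1 - rho| = 1.0 >= 0.75
assert adjust_dt(0.1, 0.5) == 0.1      # 0.25 < |1 - rho| = 0.5 < 0.75
```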
According to the above discussions, we give the detailed implementation of the
explicit continuation method with the trusty time-stepping scheme for the
linearly equality-constrained optimization problem (1) in Algorithm 1.
Algorithm 1 The explicit continuation method with the trusty time-stepping
scheme for linearly constrained optimization (the Eptctr method)
0: the objective function $f(x)$, the linear constraint $Ax=b$, the initial
point $x_{0}$ (optional), the termination parameter $\epsilon$ (optional).
0: the optimal approximation solution $x^{\ast}$.
1: Set $x_{0}=\text{ones}(n,\,1)$ and $\epsilon=10^{-6}$ as the default
values.
2: Initialize the parameters:
$\eta_{a}=10^{-6},\;\eta_{1}=0.25,\;\gamma_{1}=2,\;\eta_{2}=0.75,\;\gamma_{2}=0.5,\;\theta=10^{-6}$.
3: Factorize matrix $A^{T}$ with the QR factorization (28).
4: Compute $x_{0}\leftarrow x_{0}-Q_{1}\left(Q_{1}^{T}x_{0}-b_{r}\right),$
such that $x_{0}$ satisfies the linear system of constraints $Ax=b$.
5: Set $k=0$. Evaluate $f_{0}=f(x_{0})$ and $g_{0}=\nabla f(x_{0})$.
6: Compute the projected gradient $p_{g_{0}}=Pg_{0}$ according to the formula
(30). Set $y_{-1}=0$ and $s_{-1}=0$.
7: Set $\Delta t_{0}=10^{-2}$.
8: while $\|p_{g_{k}}\|>\epsilon$ do
9: if $\left(|s_{k-1}^{T}y_{k-1}|>\theta s_{k-1}^{T}s_{k-1}\right)$ then
10: $s_{k}=-\frac{\Delta t_{k}}{1+\Delta
t_{k}}\left(p_{g_{k}}-\frac{y_{k-1}(s_{k-1}^{T}p_{g_{k}})+s_{k-1}(y_{k-1}^{T}p_{g_{k}})}{y_{k-1}^{T}s_{k-1}}+2\frac{\|y_{k-1}\|^{2}(s_{k-1}^{T}p_{g_{k}})}{(y_{k-1}^{T}s_{k-1})^{2}}s_{k-1}\right)$.
11: else
12: $s_{k}=-\frac{\Delta t_{k}}{1+\Delta t_{k}}p_{g_{k}}$.
13: end if
14: Compute $x_{k+1}=x_{k}+s_{k}$.
15: Evaluate $f_{k+1}=f(x_{k+1})$ and compute the ratio $\rho_{k}$ from
equations (34)-(35).
16: if $\rho_{k}\leq\eta_{a}$ then
17: Set
$x_{k+1}=x_{k},\;f_{k+1}=f_{k},\;p_{g_{k+1}}=p_{g_{k}},\;g_{k+1}=g_{k},\;y_{k}=y_{k-1}.$
18: else
19: Evaluate $g_{k+1}=\nabla f(x_{k+1})$.
20: Compute $p_{g_{k+1}}=Pg_{k+1}$ according to the formula (30). Set
$y_{k}=p_{g_{k+1}}-p_{g_{k}}$ and $s_{k}=x_{k+1}-x_{k}$.
21: end if
22: Adjust the time-stepping size $\Delta t_{k+1}$ based on the trust-region
updating scheme (36).
23: Set $k\leftarrow k+1$.
24: end while
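The main loop of Algorithm 1 can be sketched compactly on a toy problem. The sketch below (hypothetical example, not one of the paper's test problems) minimizes $\tfrac{1}{2}\|x\|^{2}$ subject to $x_{1}+x_{2}+x_{3}=3$; for brevity it keeps $H_{k}=I$, i.e. the L-BFGS branch of line 10 is omitted, so it is the steepest-descent case of the method rather than the full algorithm:

```python
# Hedged sketch of Algorithm 1 with H_k = I on the toy problem
# min 0.5*||x||^2 subject to x1 + x2 + x3 = 3, whose solution is x* = (1,1,1).

def proj(g):
    # P g for A = [1, 1, 1]: subtract the mean, i.e. project onto sum(v) = 0.
    mu = sum(g) / 3.0
    return [gi - mu for gi in g]

def f(x):
    return 0.5 * sum(xi * xi for xi in x)

x, dt = [3.0, 0.0, 0.0], 1e-2          # feasible start, initial step size (line 7)
fx = f(x)
for _ in range(500):
    pg = proj(x)                       # projected gradient (gradient of f is x)
    if max(abs(v) for v in pg) <= 1e-6:
        break                          # termination test, line 8
    a = dt / (1.0 + dt)
    s = [-a * v for v in pg]           # trial step (18) with H_k = I, line 12
    xt = [xi + si for xi, si in zip(x, s)]
    ft = f(xt)
    # predicted reduction m_k(0) - m_k(s_k) from the model (34)
    pred = -(1.0 + 0.5 * dt) / (1.0 + dt) * sum(xi * si for xi, si in zip(x, s))
    rho = (fx - ft) / pred             # ratio (35)
    if rho > 1e-6:                     # accept the trial step (eta_a = 1e-6)
        x, fx = xt, ft
    r = abs(1.0 - rho)                 # time-stepping adjustment (36)
    dt = 2.0 * dt if r <= 0.25 else (0.5 * dt if r >= 0.75 else dt)

assert max(abs(xi - 1.0) for xi in x) < 1e-4   # converged to x* = (1, 1, 1)
```

Because the objective is quadratic, the model (34) predicts the reduction almost exactly, the time-stepping size keeps doubling, and the iteration converges in a handful of steps.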
## 3 Algorithm Analysis
In this section, we analyze the global convergence of the explicit
continuation method (18)-(19) with the trusty time-stepping scheme and the
L-BFGS updating formula (20) for the linearly equality-constrained
optimization problem (i.e. Algorithm 1). Firstly, we give a lower bound on
$m_{k}(0)-m_{k}(s_{k})$ $(k=1,\,2,\,\ldots)$. This result is similar to that
of the trust-region method for the unconstrained optimization problem
Powell1975 . For simplicity, we assume that matrix $A$ has full row rank.
###### Lemma 2
Assume that the approximation model $m_{k}(s)$ is defined by equation (34) and
$s_{k}$ is computed by the explicit continuation method (18)-(20). Then, we
have
$\displaystyle m_{k}(0)-m_{k}(s_{k})\geq\frac{\Delta t_{k}}{4(1+\Delta
t_{k})}\left\|p_{g_{k}}\right\|^{2},$ (37)
where $p_{g_{k}}=Pg_{k}=P\nabla f(x_{k})$ and the projection matrix $P$ is
defined by equation (10).
Proof. From equation (20) and Lemma 1, we know that $H_{k}$ is symmetric
positive definite and its eigenvalues are greater than 1/2. According to the
eigenvalue decomposition of $H_{k}$, we know that there exists an orthogonal
matrix $Q_{k}$ such that $H_{k}=Q_{k}^{T}S_{k}Q_{k}$ holds, where
$S_{k}=\text{diag}(\mu_{1}^{k},\,\ldots,\,\mu_{n}^{k})$ and
$\mu_{i}^{k}\,(i=1:n)$ are the eigenvalues of $H_{k}$. We denote the smallest
eigenvalue of $H_{k}$ as $\mu_{min}^{k}$. From the explicit continuation
method (18) and $P^{2}=P$, we know that $s_{k}=Ps_{k}$. By combining it with
the explicit continuation method (18) and the quadratic model (34), we have
$\displaystyle
m_{k}(0)-m_{k}(s_{k})\geq-\frac{1}{2}g_{k}^{T}s_{k}=-\frac{1}{2}g_{k}^{T}Ps_{k}=-\frac{1}{2}(Pg_{k})^{T}s_{k}=\frac{\Delta
t_{k}}{2(1+\Delta t_{k})}(Pg_{k})^{T}H_{k}(Pg_{k})$
$\displaystyle\quad=\frac{\Delta t_{k}}{2(1+\Delta
t_{k})}(Pg_{k})^{T}(Q_{k}^{T}S_{k}Q_{k})(Pg_{k})=\frac{\Delta
t_{k}}{2(1+\Delta t_{k})}(Q_{k}Pg_{k})^{T}S_{k}(Q_{k}Pg_{k})$
$\displaystyle\quad\geq\mu_{min}^{k}\frac{\Delta t_{k}}{2(1+\Delta
t_{k})}\|Q_{k}Pg_{k}\|^{2}\geq\frac{\Delta t_{k}}{4(1+\Delta
t_{k})}\|Q_{k}Pg_{k}\|^{2}=\frac{\Delta t_{k}}{4(1+\Delta t_{k})}\|Pg_{k}\|^{2}.$
(38)
In the first inequality in equation (38), we use the property $(1+0.5\Delta
t_{k})/(1+\Delta t_{k})\geq 0.5$ when $\Delta t_{k}\geq 0$. Consequently, we
prove the result (37). ∎
In order to prove that $p_{g_{k}}$ converges to zero when $k$ tends to
infinity, we need to estimate the lower bound of time-stepping sizes $\Delta
t_{k}\,(k=1,\,2,\,\ldots)$. We denote the constrained level set $S_{f}$ as
$\displaystyle S_{f}=\\{x:\;f(x)\leq f(x_{0}),\;Ax=b\\}.$ (39)
###### Lemma 3
Assume that $f:\;\Re^{n}\rightarrow\Re$ is continuously differentiable and its
gradient $g(x)$ satisfies the following Lipschitz continuity:
$\displaystyle\|g(x)-g(y)\|\leq L_{C}\|x-y\|,\;\forall x,\,y\in S_{f},$ (40)
where $L_{C}$ is the Lipschitz constant. We suppose that the sequence
$\\{x_{k}\\}$ is generated by Algorithm 1. Then, there exists a positive
constant $\delta_{\Delta t}$ such that
$\displaystyle\Delta t_{k}\geq\gamma_{2}\delta_{\Delta t}$ (41)
holds for all $k=1,\,2,\,\dots$, where $\Delta t_{k}$ is adaptively adjusted
by the trust-region updating scheme (34)-(36).
Proof. From Lemma 1, we know that the eigenvalues of $H_{k}$ are greater than
1/2 and at least $n-2$ of them equal 1. When
$|s_{k-1}^{T}y_{k-1}|>\theta\|s_{k-1}\|^{2}$, we denote the other two
eigenvalues of $H_{k}$ as $\mu_{1}^{k}$ and $\mu_{2}^{k}$. By substituting it
into equation (24), we obtain
$\displaystyle\mu_{1}^{k}\mu_{2}^{k}=\frac{\|y_{k-1}\|^{2}\|s_{k-1}\|^{2}}{(s_{k-1}^{T}y_{k-1})^{2}}\leq\frac{\|y_{k-1}\|^{2}\|s_{k-1}\|^{2}}{\theta^{2}\|s_{k-1}\|^{4}}=\frac{1}{\theta^{2}}\frac{\|y_{k-1}\|^{2}}{\|s_{k-1}\|^{2}}.$
(42)
From Lemma 2 and Algorithm 1, we know $f(x_{k})\leq
f(x_{0})\,(k=1,\,2,\,\ldots)$. From the explicit continuation method (18)-(20)
and Remark 3, we know that $Ax_{k}=Ax_{0}=b\,(k=1,\,2,\,\ldots)$. Thus, from
the Lipschitz continuity (40) of $g(x)$, we have
$\displaystyle\|y_{k-1}\|\leq\|P\|\|g(x_{k})-g(x_{k-1})\|\leq
L_{C}\|x_{k}-x_{k-1}\|=L_{C}\|s_{k-1}\|.$ (43)
By substituting it into equation (42) and using
$\mu_{i}^{k}>\frac{1}{2}\,(i=1,\,2)$, we obtain
$\displaystyle\frac{1}{2}<\mu_{i}^{k}<\frac{2L_{C}^{2}}{\theta^{2}},\;i=1,\,2.$
(44)
That is to say, the eigenvalues of $H_{k}$ are less than or equal to $M_{H}$,
where $M_{H}=\max\\{1,\,{2L_{C}^{2}}/{\theta^{2}}\\}$. According to the
eigenvalue decomposition theorem, we know that there exists an orthogonal
matrix $Q_{k}$ such that $H_{k}=Q_{k}^{T}S_{k}Q_{k}$ holds, where
$S_{k}=\text{diag}(\mu_{1}^{k},\,\ldots,\,\mu_{n}^{k})$ and
$\mu_{i}^{k}\,(i=1:n)$ are the eigenvalues of $H_{k}$. Consequently, we have
$\displaystyle\|H_{k}(Pg_{k})\|=\|(Q_{k}^{T}S_{k}Q_{k})Pg_{k}\|=\|S_{k}(Q_{k}Pg_{k})\|\leq
M_{H}\|Pg_{k}\|.$ (45)
From the first-order Taylor expansion, we have
$\displaystyle
f(x_{k}+s_{k})=f(x_{k})+\int_{0}^{1}s_{k}^{T}g(x_{k}+ts_{k})dt.$ (46)
Thus, from equations (34)-(37), (46) and the Lipschitz continuity (40) of
$g(x)$, we have
$\displaystyle\left|\rho_{k}-1\right|=\left|\frac{(f(x_{k})-f(x_{k}+s_{k}))-(m_{k}(0)-m_{k}(s_{k}))}{m_{k}(0)-m_{k}(s_{k})}\right|$
$\displaystyle\quad\leq\frac{1+\Delta t_{k}}{1+0.5\Delta
t_{k}}\frac{\left|\int_{0}^{1}s_{k}^{T}(g(x_{k}+ts_{k})-g(x_{k}))dt\right|}{\left|s_{k}^{T}g_{k}\right|}+\frac{0.5\Delta
t_{k}}{1+0.5\Delta t_{k}}$ $\displaystyle\quad\leq\frac{2L_{C}(1+\Delta
t_{k})}{\Delta t_{k}}\frac{\|s_{k}\|^{2}}{\|Pg_{k}\|^{2}}+\frac{0.5\Delta
t_{k}}{1+0.5\Delta t_{k}}.$ (47)
By substituting equation (18) and equation (45) into equation (47), we have
$\displaystyle\left|\rho_{k}-1\right|\leq\frac{2L_{C}\Delta t_{k}}{1+\Delta
t_{k}}\frac{\|PH_{k}(Pg_{k})\|^{2}}{\|Pg_{k}\|^{2}}+\frac{0.5\Delta
t_{k}}{1+0.5\Delta t_{k}}$ $\displaystyle\leq\frac{2L_{C}\Delta
t_{k}}{1+\Delta
t_{k}}\frac{\|P\|^{2}\|H_{k}(Pg_{k})\|^{2}}{\|Pg_{k}\|^{2}}+\frac{0.5\Delta
t_{k}}{1+0.5\Delta t_{k}}\leq\frac{(2L_{C}M_{H}^{2}+0.5)\Delta
t_{k}}{1+0.5\Delta t_{k}}.$ (48)
In the last inequality of equation (48), we use the property $\|P\|=1$. We
denote
$\displaystyle\delta_{\Delta
t}\triangleq\frac{\eta_{1}}{2L_{C}M_{H}^{2}+0.5}.$ (49)
Then, from equation (48)-(49), when $\Delta t_{k}\leq\delta_{\Delta t}$, it is
not difficult to verify
$\displaystyle\left|\rho_{k}-1\right|\leq(2L_{C}M_{H}^{2}+0.5)\Delta
t_{k}\leq\eta_{1}.$ (50)
We assume that $K$ is the first index such that $\Delta
t_{K}\leq\delta_{\Delta t}$ where $\delta_{\Delta t}$ is defined by equation
(49). Then, from equations (49)-(50), we know that $|\rho_{K}-1|\leq\eta_{1}$.
According to the time-stepping adjustment formula (36), $x_{K}+s_{K}$ will be
accepted and the time-stepping size $\Delta t_{K+1}$ will be enlarged.
Consequently, the time-stepping size $\Delta t_{k}$ satisfies $\Delta
t_{k}\geq\gamma_{2}\delta_{\Delta t}$ for all $k=1,\,2,\ldots$. ∎
By using the results of Lemma 2 and Lemma 3, we prove the global convergence
of Algorithm 1 for the linearly constrained optimization problem (1) as
follows.
###### Theorem 3.1
Assume that $f:\;\Re^{n}\rightarrow\Re$ is continuously differentiable and its
gradient $\nabla f(x)$ satisfies the Lipschitz continuity (40). Furthermore,
we suppose that $f(x)$ is lower bounded when $x\in S_{f}$, where the
constrained level set $S_{f}$ is defined by equation (39). The sequence
$\\{x_{k}\\}$ is generated by Algorithm 1. Then, we have
$\displaystyle\liminf_{k\to\infty}\|Pg_{k}\|=0,$ (51)
where $g_{k}=\nabla f(x_{k})$ and matrix $P$ is defined by equation (10).
Proof. According to Lemma 3 and Algorithm 1, we know that there exists an
infinite subsequence $\\{x_{k_{i}}\\}$ such that trial steps $s_{k_{i}}$ are
accepted, i.e., $\rho_{k_{i}}\geq\eta_{a},\,i=1,\,2,\ldots$. Otherwise, all
steps are rejected after a given iteration index, then the time-stepping size
will keep decreasing, which contradicts (41). Therefore, from equations (35)
and (37), we have
$\displaystyle
f(x_{0})-\lim_{k\to\infty}f(x_{k})=\sum_{k=0}^{\infty}(f(x_{k})-f(x_{k+1}))$
$\displaystyle\geq\eta_{a}\sum_{i=0}^{\infty}\left(m_{k_{i}}(0)-m_{k_{i}}(s_{k_{i}})\right)\geq\eta_{a}\sum_{i=0}^{\infty}\frac{\Delta
t_{k_{i}}}{4(\Delta t_{k_{i}}+1)}\|Pg_{k_{i}}\|^{2}.$ (52)
From the result (41) of Lemma 3, we know that $\Delta
t_{k}\geq\gamma_{2}\delta_{\Delta t}\,(k=1,\,2,\,\dots)$. By substituting it
into equation (52), we have
$\displaystyle
f(x_{0})-\lim_{k\to\infty}f(x_{k})\geq\eta_{a}\sum_{i=0}^{\infty}\frac{\gamma_{2}\delta_{\Delta
t}}{4(\gamma_{2}\delta_{\Delta t}+1)}\|Pg_{k_{i}}\|^{2}.$ (53)
Since $f(x)$ is lower bounded when $x\in S_{f}$ and the sequence
$\\{f(x_{k})\\}$ is monotonically decreasing, we have
$\lim_{k\to\infty}f(x_{k})=f^{\ast}$. By substituting it into equation (53),
we obtain the result (51). ∎
## 4 Numerical Experiments
In this section, some numerical experiments are executed to test the
performance of Algorithm 1 (the Eptctr method). The codes are executed on a
Dell G3 notebook with an Intel quad-core CPU and 20 GB memory. We compare
Eptctr with SQP (the built-in subroutine fmincon.m of the MATLAB2018a
environment FP1963 ; Goldfarb1970 ; MATLAB ; NW1999 ; Wilson1963 ) and Ptctr
LLS2020 for some large-scale linearly equality-constrained optimization
problems, which are listed in Appendix A. SQP is a traditional and
representative method for the constrained optimization problem. Ptctr is
significantly better than SQP for linearly constrained optimization problems
according to the numerical results in LLS2020 . Therefore, we select these two
typical methods as the basis for comparison.
The termination conditions of the three compared methods are all set by
$\displaystyle\|\nabla_{x}L(x_{k},\,\lambda_{k})\|_{\infty}\leq 1.0\times
10^{-6},$ (54) $\displaystyle\|Ax_{k}-b\|_{\infty}\leq 1.0\times
10^{-6},\;k=1,\,2,\,\ldots,$ (55)
where the Lagrange function $L(x,\,\lambda)$ is defined by equation (4) and
$\lambda$ is defined by equation (8).
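For concreteness, the stopping test (54)-(55) can be sketched as follows. This is a minimal illustration in Python/NumPy rather than the MATLAB code used in the experiments, and the argument names (`grad_L`, and so on) are ours:

```python
import numpy as np

TOL = 1.0e-6  # tolerance used in (54)-(55)

def converged(grad_L, A, x, b):
    """Check the termination conditions (54)-(55): the infinity norms of the
    Lagrangian gradient and of the constraint residual Ax - b must both be
    below TOL."""
    return (np.linalg.norm(grad_L, np.inf) <= TOL and
            np.linalg.norm(A @ x - b, np.inf) <= TOL)

# Tiny illustration: a feasible point of x1 + x2 = 4 with zero gradient.
A = np.array([[1.0, 1.0]])
b = np.array([4.0])
x = np.array([2.0, 2.0])
grad_L = np.zeros(2)
print(converged(grad_L, A, x, b))  # True
```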
We test these ten problems with $n\approx 5000$. The numerical results are
arranged in Table 1 and illustrated in Figure 1. From Table 1, we find that
all three methods correctly solve the ten test problems and that, for every
test problem, the time consumed by Eptctr is significantly less than that of
the other two methods. The time consumed by Eptctr is about one tenth of that
of Ptctr, and ranges from one fifteenth to 0.4 percent of that of SQP.
From these test data, we find that Eptctr performs significantly better than
the other two methods. One of the reasons is that Eptctr only involves three
pairs of inner products of two vectors and one matrix-vector product
($p_{g_{k}}=Pg_{k}$) to obtain the trial step $s_{k}$, which amounts to about
$(n-m)n$ flops per iteration. By contrast, Ptctr needs to solve a linear
system of equations with an $n\times n$ symmetric definite coefficient matrix,
which requires about $\frac{1}{3}n^{3}$ flops (p. 169, GV2013 ) per iteration,
and SQP needs to solve a linear system of equations of dimension $(m+n)$ when
it solves a quadratic programming subproblem at every iteration (pp. 531-532,
NW1999 ), which requires about $\frac{2}{3}(m+n)^{3}$ flops (p. 116, GV2013 ).
Furthermore, Eptctr saves the storage of an $(n+m)\times(n+m)$ large-scale
matrix in comparison with SQP, and the memory required by Eptctr is about one
fifth of that of SQP.
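To put these per-iteration flop counts in perspective, the following back-of-the-envelope computation (our illustration, not from the paper's code) evaluates the three estimates for $n=5000$, $m=n/2$:

```python
# Per-iteration flop estimates quoted in the text, evaluated for n = 5000.
n, m = 5000, 2500

flops_eptctr = (n - m) * n              # about (n - m) n flops
flops_ptctr = n**3 / 3.0                # about n^3 / 3 flops
flops_sqp = 2.0 * (m + n)**3 / 3.0      # about 2 (m + n)^3 / 3 flops

print(f"Eptctr: {flops_eptctr:.2e} flops")  # 1.25e+07
print(f"Ptctr : {flops_ptctr:.2e} flops")   # 4.17e+10
print(f"SQP   : {flops_sqp:.2e} flops")     # 2.81e+11
```

So Ptctr and SQP cost roughly three to four orders of magnitude more arithmetic per iteration than Eptctr at this problem size.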
Table 1: Numerical results of the test problems with $n\approx 5000$. Each cell lists the number of steps with the consumed time in seconds in parentheses, the objective value $f(x^{\star})$, and the required memory in Gb.

Problems | Ptctr | Eptctr | SQP
---|---|---|---
Exam. 1 (n = 5000, m = n/2) | 11 (15.46), 3.64E+04, 3.41 | 11 (1.94), 3.64E+04, 0.51 | 2 (34.93), 3.64E+04, 2.43
Exam. 2 (n = 4800, m = n/2) | 17 (16.06), 5.78E+03, 4.09 | 15 (1.58), 5.78E+03, 0.31 | 14 (116.16), 5.78E+03, 1.53
Exam. 3 (n = 4800, m = 2n/3) | 12 (23.51), 2.86E+03, 3.40 | 12 (2.11), 2.86E+03, 0.54 | 3 (56.06), 2.86E+03, 3.08
Exam. 4 (n = 5000, m = n/2) | 11 (17.07), 493.79, 3.41 | 11 (2.02), 493.79, 0.51 | 8 (123.04), 493.79, 2.43
Exam. 5 (n = 5000, m = n/2) | 14 (16.79), 432.15, 3.97 | 13 (2.04), 432.15, 0.51 | 11 (178.41), 432.15, 2.43
Exam. 6 (n = 4800, m = 2n/3) | 13 (23.58), 2.06E+03, 3.57 | 15 (2.17), 2.06E+03, 0.54 | 11 (211.35), 2.06E+03, 3.10
Exam. 7 (n = 5000, m = n/2) | 10 (15.02), 5.94E+04, 3.22 | 13 (2.07), 5.94E+04, 0.51 | 20 (344.37), 5.94E+04, 2.43
Exam. 8 (n = 4800, m = n/3) | 38 (38.29), 776.88, 7.36 | 133 (3.21), -1.21E+04, 0.31 | 28 (243.09), 784.94, 1.53
Exam. 9 (n = 5000, m = n/2) | 12 (22.59), 2.21E+05, 3.59 | 8 (1.94), 2.21E+05, 0.51 | 29 (490.36), 2.21E+05, 2.43
Exam. 10 (n = 4800, m = n/3) | 16 (13.51), 2.00, 3.92 | 20 (1.39), 2.00, 0.31 | 14 (109.59), 2.00, 1.53
Figure 1: The consumed CPU time (s) of Ptctr, Eptctr and SQP for test problems
with $n\approx 5000$.
## 5 Conclusion and Future Work
In this paper, we give an explicit continuation method with the trusty time-
stepping scheme and the L-BFGS updating formula (Eptctr) for linearly
equality-constrained optimization problems. This method only involves three
pairs of inner products of two vectors and one matrix-vector product
($p_{g_{k}}=Pg_{k}$) at every iteration, unlike traditional optimization
methods such as SQP or the latest continuation method Ptctr LLS2020 , which
need to solve a quadratic programming subproblem (SQP) or a linear system of
equations (Ptctr). Thus, Eptctr involves about $(n-m)n$ flops, Ptctr involves
about $\frac{1}{3}n^{3}$ flops, and SQP involves about
$\frac{2}{3}(m+n)^{3}$ flops at every iteration. This means that Eptctr can
save much more computational time than SQP or Ptctr. Numerical results also
show that the time consumed by Eptctr is about one tenth of that of Ptctr, and
ranges from one fifteenth to 0.4 percent of that of SQP, for the test problems
with $n\approx 5000$. Furthermore, Eptctr saves the storage of an
$(n+m)\times(n+m)$ large-scale matrix in comparison with SQP; the required
memory of Eptctr is about one fifth of that of SQP. Therefore, Eptctr is worth
exploring further, and we will extend it to the general nonlinear optimization
problem in the future.
## Acknowledgments
This work was supported in part by Grant 61876199 from the National Natural
Science Foundation of China, Grant YBWL2011085 from Huawei Technologies Co.,
Ltd., and Grant YJCB2011003HI from the Innovation Research Program of Huawei
Technologies Co., Ltd. The first author is grateful to Prof. Ya-xiang Yuan
and Prof. Li-zhi Liao for their suggestions.
## Appendix A Test Problems
Example 1.
$\displaystyle\quad m$ $\displaystyle=n/2$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n/2}\;\left(x_{2k-1}^{2}+10x_{2k}^{2}\right),\;\text{subject
to}\;x_{2i-1}+x_{2i}=4,\;i=1,\,2,\ldots,\,m.$
This problem is extended from the problem of Kim2010 . We assume that the
feasible initial point is $(2,\,2,\,\ldots,\,2,\,2)$.
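As a sanity check on the Table 1 entry for this problem, Example 1 separates into $n/2$ identical two-variable problems, so its optimum can be computed in closed form via a Lagrange multiplier (our verification, not part of the paper): for each pair, $2x_{2k-1}=\lambda$ and $20x_{2k}=\lambda$ give $x_{2k-1}=10x_{2k}$, and the constraint $x_{2k-1}+x_{2k}=4$ yields $x_{2k-1}=40/11$, $x_{2k}=4/11$.

```python
# Closed-form optimum of Example 1 (our check; each pair of variables is
# independent): minimize x1^2 + 10 x2^2 subject to x1 + x2 = 4.
n = 5000
x1, x2 = 40.0 / 11.0, 4.0 / 11.0      # per-pair minimizer from the KKT conditions
assert abs(x1 + x2 - 4.0) < 1e-12     # each constraint is satisfied

f_star = (n // 2) * (x1**2 + 10.0 * x2**2)  # n/2 identical pairs
print(f"{f_star:.2e}")  # 3.64e+04, matching the Table 1 entry for Exam. 1
```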
Example 2.
$\displaystyle\quad m$ $\displaystyle=n/3$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n/2}\;\left(\left(x_{2k-1}-2\right)^{2}+2\left(x_{2k}-1\right)^{4}\right)-5,\;\text{subject
to}\;x_{3i-2}+4x_{3i-1}+2x_{3i}=3,\;i=1,\,2,\ldots,\,n/3.$
We assume that the infeasible initial point is
$(-0.5,\,1.5,\,1,\,0,\,\ldots,\,0,\,0)$.
Example 3.
$\displaystyle\quad m$ $\displaystyle=(2/3)n$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n}\;x_{k}^{2},\;\text{subject
to}\;x_{3i-2}+2x_{3i-1}+x_{3i}=1,\;2x_{3i-2}-x_{3i-1}-3x_{3i}=4,\;i=1,\,2,\ldots,\,n/3.$
This problem is extended from the problem of Osborne2016 . The infeasible
initial point is $(1,\,0.5,\,-1,\,\ldots,\,1,\,0.5,\,-1)$.
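Example 3 is also separable: each triple $(x_{3i-2},x_{3i-1},x_{3i})$ is the minimum-norm solution of its two linear constraints, $x=A^{T}(AA^{T})^{-1}b$, so the optimal value in Table 1 can be verified directly (this check is ours, not part of the paper):

```python
import numpy as np

# Each triple of variables in Example 3 solves: minimize ||x||^2 subject to
# A x = b, whose solution is the minimum-norm point x = A^T (A A^T)^{-1} b.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, -1.0, -3.0]])   # the two constraints of one triple
b = np.array([1.0, 4.0])

x = A.T @ np.linalg.solve(A @ A.T, b)  # minimum-norm solution of A x = b
assert np.allclose(A @ x, b)

n = 4800
f_star = (n // 3) * float(x @ x)       # n/3 identical triples
print(f"{f_star:.2e}")  # 2.86e+03, matching Table 1
```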
Example 4.
$\displaystyle\quad m$ $\displaystyle=n/2$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n/2}\;\left(x_{2k-1}^{2}+x_{2k}^{6}\right)-1,\;\text{subject
to}\;x_{2i-1}+x_{2i}=1,\;i=1,\,2,\,\ldots,\,n/2.$
This problem is modified from the problem of MAK2019 . We assume that the
infeasible initial point is $(1,\,1,\,\ldots,\,1)$.
Example 5.
$\displaystyle\quad m$ $\displaystyle=n/2$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n/2}\;\left(\left(x_{2k-1}-2\right)^{4}+2\left(x_{2k}-1\right)^{6}\right)-5,\;\text{subject
to}\;x_{2i-1}+4x_{2i}=3,\;i=1,\,2,\,\ldots,\,m.$
We assume that the feasible initial point is
$(-1,\,1,\,-1,\,1,\,\ldots,\,-1,\,1)$.
Example 6.
$\displaystyle\quad m$ $\displaystyle=(2/3)n$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n/3}\;\left(x_{3k-2}^{2}+x_{3k-1}^{4}+x_{3k}^{6}\right),$
subject to
$\displaystyle{x_{3i-2}}+2{x_{3i-1}}+{x_{3i}}=1,\;2{x_{3i-2}}-{x_{3i-1}}-3{x_{3i}}=4,\;i=1,\,2,\,\ldots,\,m/2.$
This problem is extended from the problem of Osborne2016 . We assume that the
infeasible initial point is $(2,\,0,\,\ldots,\,0)$.
Example 7.
$\displaystyle\quad m$ $\displaystyle=n/2$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n/2}\;\left(x_{2k-1}^{4}+3x_{2k}^{2}\right),\;\text{subject
to}\;x_{2i-1}+x_{2i}=4,\;i=1,\,2,\,\ldots,\,n/2.$
This problem is extended from the problem of Carlberg2009 . We assume that the
infeasible initial point is $(2,\,2,\,0,\,\ldots,\,0,\,0)$.
Example 8.
$\displaystyle\quad m$ $\displaystyle=n/3$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n/3}\;\left(x_{3k-2}^{2}+x_{3k-2}^{2}\,x_{3k}^{2}+2x_{3k-2}\,x_{3k-1}+x_{3k-1}^{4}+8x_{3k-1}\right),$
subject to $\displaystyle
2x_{3i-2}+5x_{3i-1}+x_{3i}=3,\;i=1,\,2,\,\ldots,\,m.$
We assume that the infeasible initial point is $(1.5,\,0,\,0,\,\ldots,\,0)$.
Example 9.
$\displaystyle\quad m$ $\displaystyle=n/2$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n/2}\;\left(x_{2k-1}^{4}+10x_{2k}^{6}\right),\;\text{subject
to}\;x_{2i-1}+x_{2i}=4,\;i=1,\,2,\,\ldots,\,m.$
This problem is extended from the problem of Kim2010 . We assume that the
feasible initial point is $(2,\,2,\,\ldots,\,2,\,2)$.
Example 10.
$\displaystyle\quad m$ $\displaystyle=n/3$
$\displaystyle\min_{x\in\Re^{n}}\;f(x)$
$\displaystyle=\sum_{k=1}^{n/3}\;\left(x_{3k-2}^{8}+x_{3k-1}^{6}+x_{3k}^{2}\,\right),\;\text{subject
to}\;x_{3i-2}+2x_{3i-1}+2x_{3i}=1,\;i=1,\,2,\,\ldots,\,m.$
This problem is modified from the problem of Yamashita1980 . The feasible
initial point is $(1,\,0,\,0,\,\ldots,\,1,\,0,\,0)$.
## References
* (1) E. L. Allgower and K. Georg, _Introduction to Numerical Continuation Methods_ , SIAM, Philadelphia, PA, 2003.
* (2) U. M. Ascher and L. R. Petzold, _Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations_ , SIAM, Philadelphia, PA, 1998.
* (3) D. P. Bertsekas, _Nonlinear Programming (3rd Edition)_ , Tsinghua University Press, 2018.
* (4) A. A. Brown and M. C. Bartholomew-Biggs, _ODE versus SQP methods for constrained optimization_ , Journal of Optimization and Theory Applications, 62 (3): 371-386, 1989.
* (5) K. E. Brenan, S. L. Campbell and L. R. Petzold, _Numerical solution of initial-value problems in differential-algebraic equations_ , SIAM, Philadelphia, PA, 1996.
* (6) R. Byrd, J. Nocedal and Y. X. Yuan, _Global convergence of a class of quasi-Newton methods on convex problems_ , SIAM Journal of Numerical Analysis, 24: 1171-1189, 1987.
* (7) K. Carlberg, _Lecture notes of constrained optimization_ , https://www.sandia.gov/~ktcarlb/opt_class/OPT_Lecture3.pdf, 2009.
* (8) F. Caballero, L. Merino, J. Ferruz and A. Ollero, _Vision-based odometry and SLAM for medium and high altitude flying UAVs_ , Journal of Intelligent and Robotic Systems, 54 (1-3): 137-161, 2009.
* (9) T. S. Coffey, C. T. Kelley and D. E. Keyes, _Pseudotransient continuation and differential-algebraic equations_ , SIAM Journal on Scientific Computing, 25: 553-569, 2003.
* (10) A. R. Conn, N. Gould and Ph. L. Toint, _Trust-Region Methods_ , SIAM, Philadelphia, USA, 2000.
* (11) A. V. Fiacco and G. P. McCormick, _Nonlinear programming: Sequential Unconstrained Minimization Techniques_ , SIAM, 1990.
* (12) R. Fletcher and M. J. D. Powell, _A rapidly convergent descent method for minimization_ , Computer Journal, 6: 163-168, 1963.
* (13) B. S. Goh, _Approximate greatest descent methods for optimization with equality constraints_ , Journal of Optimization Theory and Applications 148 (3): 505-527, 2011.
* (14) D. Goldfarb, _A family of variable metric updates derived by variational means_ , Mathematics of Computing, 24: 23-26, 1970.
* (15) G. H. Golub and C. F. Van Loan, _Matrix Computations_ , 4th ed., The Johns Hopkins University Press, 2013.
* (16) E. Hairer, C. Lubich and G. Wanner, _Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations_ , 2nd ed., Springer, Berlin, 2006.
* (17) U. Helmke and J. B. Moore, _Optimization and Dynamical Systems_ , 2nd ed., Springer-Verlag, London, 1996.
* (18) D. J. Higham, _Trust region algorithms and timestep selection_ , SIAM Journal on Numerical Analysis, 37: 194-210, 1999.
* (19) C. T. Kelley, L.-Z. Liao, L. Qi, M. T. Chu, J. P. Reese and C. Winton, _Projected Pseudotransient Continuation_ , SIAM Journal on Numerical Analysis, 46: 3071-3083, 2008.
* (20) D. G. Liu and J. G. Fei, _Digital Simulation Algorithms for Dynamic Systems_ (in Chinese), Science Press, Beijing, 2000.
* (21) S.-T. Liu and X.-L. Luo, _A method based on Rayleigh quotient gradient flow for extreme and interior eigenvalue problems_ , Linear Algebra and its Applications, 432 (7): 1851-1863, 2010.
* (22) X.-L. Luo, _A dynamical method of DAEs for the smallest eigenvalue problem_ , Journal of Computational Science, 3 (3): 113-119, 2012.
* (23) X.-L. Luo, C. T. Kelley, L.-Z. Liao and H.-W. Tam, _Combining trust-region techniques and Rosenbrock methods to compute stationary points_ , Journal of Optimization Theory and Applications, 140 (2): 265-286, 2009.
* (24) X.-L. Luo, J.-R. Lin and W.-L. Wu, _A prediction-correction dynamic method for large-scale generalized eigenvalue problems_ , Abstract and Applied Analysis, Article ID 845459, 1-8, http://dx.doi.org/10.1155/2013/845459, 2013.
* (25) X.-L. Luo, J.-H. Lv and G. Sun, _Continuation method with the trusty time-stepping scheme for linearly constrained optimization with noisy data_ , published online in http://arxiv.org/abs/2005.05965 or http://doi.org/10.1007/s11081-020-09590-z, Optimization and Engineering, Accepted, January 10, 2021.
* (26) X.-L. Luo, H. Xiao and J.-H. Lv, _Continuation Newton methods with the residual trust-region time-stepping scheme for nonlinear equations_ , June 2020, arXiv preprint, http://arxiv.org/abs/2006.02634.
* (27) X.-L. Luo and Y.Y. Yao, _Primal-dual path-following methods and the trust-region updating strategy for linear programming with noisy data_ , June 2020, arXiv preprint available at http://arxiv.org/abs/2006.07568, minor revision resubmitted to Journal of Computational Mathematics, January 16, 2021.
* (28) M.-W. Mak, _Lecture notes of constrained optimization and support vector machines_ , http://www.eie.polyu.edu.hk/~mwmak/EIE6207/ContOpt-SVM-beamer.pdf, 2019.
* (29) MATLAB 9.4.0 (R2018a), The MathWorks Inc., http://www.mathworks.com, 2018.
* (30) J. Nocedal and S. J. Wright, _Numerical Optimization_ , Springer-Verlag, 1999.
* (31) N. H. Kim, _Lecture notes of constrained optimization_ , https://mae.ufl.edu/nkim/eas6939/ConstrainedOpt.pdf, 2010.
* (32) M. J. Osborne, _Mathematical methods for economic theory_ , https://mjo.osborne.economics.utoronto.ca/index.php/tutorial/index/1/mem, 2016.
* (33) P.-Q. Pan, _New ODE methods for equality constrained optimization (2): algorithms_ , Journal of Computational Mathematics, 10 (2): 129-146, 1992.
* (34) M. J. D. Powell, _Convergence properties of a class of minimization algorithms_ , in: O.L. Mangasarian, R. R. Meyer and S. M. Robinson, eds., _Nonlinear Programming 2_ , Academic Press, New York, 1-27, 1975.
* (35) J. Schropp, _A dynamical systems approach to constrained minimization_ , Numerical Functional Analysis and Optimization, 21 (3-4): 537-551, 2000.
* (36) J. Schropp, _One- and multistep discretizations of index 2 differential algebraic systems and their use in optimization_ , Journal of Computational and Applied Mathematics, 150: 375-396, 2003.
* (37) L. F. Shampine, I. Gladwell and S. Thompson, _Solving ODEs with MATLAB_ , Cambridge University Press, Cambridge, 2003.
* (38) W. Y. Sun and Y. X. Yuan, _Optimization Theory and Methods: Nonlinear Programming_ , Springer, New York, 2006.
* (39) K. Tanabe, _A geometric method in nonlinear programming_ , Journal of Optimization Theory and Applications, 30 (2): 181-210, 1980.
* (40) T. E. Simos, _New open modified Newton Cotes type formulae as multilayer symplectic integrators_ , Applied Mathematical Modelling, 37: 1983-1991, 2013.
* (41) N. Ullah, J. Sabi and A. Shah, _A derivative-free scaled memoryless BFGS method for solving a system of monotone nonlinear equations_ , Submitted to Numerical Linear Algebra with Applications, October 2020.
* (42) R. B. Wilson, _A Simplicial Method for Convex Programming_ , Phd thesis, Harvard University, 1963.
* (43) H. Yamashita, _A differential equation approach to nonlinear programming_ , Mathematical Programming, 18: 155-168, https://doi.org/10.1007/BF01588311, 1980.
* (44) Y. Yuan, _Recent advances in trust region algorithms_ , Mathematical Programming, 151: 249-281, 2015.
# Simple closed geodesics on regular tetrahedra in spherical space
Alexander A. Borisenko, Darya D. Sukhorebska111 The second author is supported
by IMU Breakout Graduate Fellowship
Abstract. On a regular tetrahedron in spherical space there exist only finitely
many simple closed geodesics. For any pair of coprime integers $(p,q)$ we find
numbers $\alpha_{1}$ and $\alpha_{2}$, depending on $p$, $q$ and satisfying
the inequalities $\pi/3<\alpha_{1}<\alpha_{2}<2\pi/3$, such that on a regular
tetrahedron in spherical space with the faces angle
$\alpha\in\left(\pi/3,\alpha_{1}\right)$ there exists a unique, up to a rigid
motion of the tetrahedron, simple closed geodesic of type $(p,q)$, and on a
regular tetrahedron with the faces angle
$\alpha\in\left(\alpha_{2},2\pi/3\right)$ there is no simple closed geodesic
of type $(p,q)$.
Keywords: closed geodesic, regular tetrahedron, spherical space.
MSC: 53C22, 52B10
## 1 Introduction
Working on the three-body problem, Poincaré conjectured the existence of a
simple (without points of self-intersection) closed geodesic on a smooth
closed convex surface in three-dimensional Euclidean space. In 1929 Lyusternik
and Shnirelman proved that on a Riemannian manifold homeomorphic to a sphere
there exist at least three simple closed geodesics (see [1], [2]).
In 1898 Hadamard showed that on a closed surface of negative curvature any
closed curve that is not homotopic to zero can be deformed into the convex
curve of minimal length within its free homotopy class. This minimal curve is
unique and it is a closed geodesic (see [3]). An interesting problem is to
find the asymptotic behavior of the number of simple closed geodesics,
depending on the length of these geodesics, on a compact manifold of negative
curvature. For instance, Huber proved that on a complete closed two-dimensional
manifold of constant negative curvature the number of closed geodesics of
length at most $L$ has the order of growth $e^{L}/L$ as $L\rightarrow+\infty$
(see [4], [5]). In Rivin’s work [6], and later in Mirzakhani’s work [7], it is
proved that on a surface of constant negative curvature of genus $g$ and
with $n$ cusps (points at infinity) the number of simple closed geodesics of
length at most $L$ is asymptotic to a (positive) constant times $L^{6g-6+2n}$
as $L\rightarrow+\infty$.
Substantive results about the behavior of geodesic lines on a convex two-
dimensional surface were found by Cohn-Vossen [8], Alexandrov [9], and
Pogorelov [10]. In one of the earliest works, Pogorelov proved that a geodesic
of length $\leq\pi/\sqrt{k}$ realizes the shortest path between its endpoints
on a closed convex surface of Gaussian curvature $\leq k$ [11]. Toponogov
proved that on a $C^{2}$-regular closed surface of curvature $\geq k>0$ the
length of a simple closed geodesic is $\leq 2\pi/\sqrt{k}$ [12]. Vaigant and
Matukevich obtained that on such a surface a geodesic of length
$\geq 3\pi/\sqrt{k}$ has a point of self-intersection [13].
Geodesics have also been studied on non-smooth surfaces, including convex
polyhedra (see [14] and [15]). D. Fuchs and E. Fuchs supplemented and
systematized the results on closed geodesics on regular polyhedra in three-
dimensional Euclidean space (see [16] and [17]). Protasov obtained a condition
for the existence of simple closed geodesics on a tetrahedron in Euclidean
space and evaluated the number of these geodesics in terms of the difference
between $\pi$ and the sum of the angles at a vertex of the tetrahedron [18].
A simple closed geodesic is said to be of type $(p,q)$ if it has $p$ vertices
on each of two opposite edges of the tetrahedron, $q$ vertices on each of
other two opposite edges, and $p+q$ vertices on each of the remaining two
opposite edges. Geodesics are called equivalent if they intersect the same
edges of the tetrahedron in the same order.
On a regular tetrahedron in Euclidean space, for each ordered pair of coprime
integers $(p,q)$ there exists a class of equivalent simple closed geodesics of
type $(p,q)$, up to an isometry of the tetrahedron. Each of these classes
contains infinitely many geodesics. Furthermore, each class contains a simple
closed geodesic passing through the midpoints of two pairs of opposite edges
of the tetrahedron.
In [19] we studied simple closed geodesics on regular tetrahedra in
Lobachevsky (hyperbolic) three-dimensional space. In Euclidean space, the
faces of a tetrahedron have zero Gaussian curvature, and the curvature of a
tetrahedron is concentrated only at its vertices. In Lobachevsky space, the
Gaussian curvature of the faces is $-1$, so the curvature of a tetrahedron is
determined not only by its vertices but also by its faces. Moreover, in
hyperbolic space the value $\alpha$ of the faces angle satisfies $0<\alpha<\pi/3$.
The intrinsic geometry of such a tetrahedron depends on the value of its faces
angle. It follows that the behavior of closed geodesics on a regular
tetrahedron in Lobachevsky space differs from the Euclidean case.
It is proved that on a regular tetrahedron in hyperbolic space for any coprime
integers $(p,q)$, $0\leq p<q$, there exists a unique, up to a rigid motion of
the tetrahedron, simple closed geodesic of type $(p,q)$, and it passes through
the midpoints of two pairs of opposite edges of the tetrahedron. These
geodesics exhaust all simple closed geodesics on a regular tetrahedron in
hyperbolic space. The number of simple closed geodesics of length bounded by
$L$ is asymptotic to a constant (depending on $\alpha$) times $L^{2}$ as $L$
tends to infinity [19].
In this work we consider simple closed geodesics on a regular tetrahedron in
spherical three-dimensional space. In this space the curvature of a face
equals 1, so the curvature of a tetrahedron is also determined by its
vertices and faces. The intrinsic geometry of a tetrahedron depends on the
value $\alpha$ of its faces angle, where $\alpha$ satisfies $\pi/3<\alpha\leq
2\pi/3$. If $\alpha=2\pi/3$, then the tetrahedron coincides with the unit two-
dimensional sphere. Hence there are infinitely many simple closed geodesics on
it, and they are great circles of the sphere.
On a regular tetrahedron in spherical space there exist only finitely many
simple closed geodesics. The length of each of these geodesics is less than
$2\pi$.
For any coprime integers $(p,q)$ we present numbers $\alpha_{1}$ and
$\alpha_{2}$, depending on $p$, $q$ and satisfying the inequalities
$\pi/3<\alpha_{1}<\alpha_{2}<2\pi/3$, such that
1) if $\pi/3<\alpha<\alpha_{1}$, then on a regular tetrahedron in spherical
space with the faces angle $\alpha$ there exists a unique simple closed
geodesic of type $(p,q)$, up to a rigid motion of this tetrahedron;
2) if $\alpha_{2}<\alpha<2\pi/3$, then on a regular tetrahedron with the faces
angle $\alpha$ there is no simple closed geodesic of type $(p,q)$.
## 2 Definitions
A geodesic is locally the shortest curve. On a convex polyhedron, a geodesic
has the following properties (see [9]):
1) it consists of line segments on faces of the polyhedron;
2) it forms equal angles with edges of adjacent faces;
3) a geodesic cannot pass through a vertex of a convex polyhedron.
Note that by a ‘line segment’ we mean a geodesic segment in the space of
constant curvature in which the polyhedron lies.
Let us take two tetrahedra in spaces of constant curvature and consider a
closed geodesic on each of them. Construct a bijection between the vertices of
the tetrahedra and give the same labels to the corresponding vertices. Closed
geodesics on these tetrahedra are called equivalent if they intersect the
same-labeled edges in the same order [18].
Fix a point of a geodesic on an edge of the tetrahedron and roll the
tetrahedron along the plane in such a way that the geodesic always touches the
plane. The traces of the faces form the development of the tetrahedron on the
plane, and the geodesic becomes a line segment inside the development.
A spherical triangle is a convex polygon on a unit sphere bounded by three
shortest lines. A regular tetrahedron in three-dimensional spherical space
$\mathbb{S}^{3}$ is a closed convex polyhedron such that all its faces are
regular spherical triangles and all its vertices are regular trihedral angles.
The value $\alpha$ of its faces angle satisfies the conditions
$\pi/3<\alpha\leq 2\pi/3$. Note that there exists a unique (up to a rigid
motion) tetrahedron in spherical space with a given value of the faces angle.
The edge length equals
$a=\text{arccos}\left(\frac{\cos\alpha}{1-\cos\alpha}\right),$ (2.1)
$\lim\limits_{\alpha\to\frac{\pi}{3}}a=0;\;\;\;\lim\limits_{\alpha\to\frac{\pi}{2}}a=\frac{\pi}{2};\;\;\;\lim\limits_{\alpha\to\frac{2\pi}{3}}a=\pi-\text{arccos}\frac{1}{3}.$
(2.2)
If $\alpha=2\pi/3$, then the tetrahedron coincides with a unit two-dimensional
sphere, and hence there are infinitely many simple closed geodesics on it. In
the following we consider $\alpha$ such that $\pi/3<\alpha<2\pi/3$.
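The limits (2.2) can be verified numerically from formula (2.1); the following short computation is our illustration, not part of the paper:

```python
import math

def edge_length(alpha):
    """Edge length of the regular tetrahedron in S^3 with faces angle alpha,
    formula (2.1): a = arccos(cos(alpha) / (1 - cos(alpha)))."""
    c = math.cos(alpha)
    return math.acos(c / (1.0 - c))

print(edge_length(math.pi / 3 + 1e-8))   # close to 0
print(edge_length(math.pi / 2))          # pi/2
print(edge_length(2 * math.pi / 3))      # pi - arccos(1/3)
```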
## 3 Closed geodesics on regular tetrahedra in Euclidean space
Consider a regular tetrahedron $A_{1}A_{2}A_{3}A_{4}$ in Euclidean space with
the edge of length $1$. A development of the tetrahedron is a part of the
standard triangulation of the Euclidean plane. Denote the vertices of the
triangulation in accordance with the vertices of the tetrahedron. Choose two
identically oriented edges $A_{1}A_{2}$ of the triangulation, which don’t
belong to the same line. Take two points $X$ and $X^{\prime}$ at equal
distances from the vertex $A_{1}$ such that the segment $XX^{\prime}$ doesn’t
contain any vertex of the triangulation. Then the segment $XX^{\prime}$
corresponds to a closed geodesic on the tetrahedron $A_{1}A_{2}A_{3}A_{4}$.
Any closed geodesic on a regular tetrahedron in Euclidean space can be
constructed in this way (see Figure 1).
Note that the segments of a geodesic lying on the same face of the tetrahedron
are parallel to each other. It follows that any closed geodesic on a regular
tetrahedron in Euclidean space does not have points of self-intersection.
Figure 1:
We introduce a rectangular Cartesian coordinate system with the origin at
$A_{1}$ and the $x$-axis along the edge $A_{1}A_{2}$ containing $X$. Then the
vertices $A_{1}$ and $A_{2}$ have coordinates $\left(l,k\sqrt{3}\right)$,
and the coordinates of $A_{3}$ and $A_{4}$ are
$\left(l+1/2,k\sqrt{3}+\sqrt{3}/2\right)$, where $k,l$ are integers. The
coordinates of $X$ and $X^{\prime}$ equal $(\mu,0)$ and
$(\mu+q+2p,q\sqrt{3})$, where $0<\mu<1$. The segment $XX^{\prime}$ corresponds
to the simple closed geodesic $\gamma$ of type $(p,q)$ on a regular
tetrahedron in Euclidean space. If $p$ and $q$ are coprime, then $\gamma$ does
not repeat itself. The length of $\gamma$ equals
$L=2\sqrt{p^{2}+pq+q^{2}}.$ (3.1)
Note that for each pair of coprime integers $(p,q)$ there exist infinitely
many simple closed geodesics of type $(p,q)$, and all of them are parallel in
the development and intersect the tetrahedron’s edges in the same order.
If $q=0$ and $p=1$, then the geodesic consists of four segments that
consecutively intersect four edges of the tetrahedron and does not cross one
pair of opposite edges.
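Formula (3.1) is simply the Euclidean length of the development segment $XX^{\prime}$: $|XX^{\prime}|^{2}=(q+2p)^{2}+3q^{2}=4(p^{2}+pq+q^{2})$. A short numeric confirmation for a few coprime pairs (our illustration, not part of the paper):

```python
import math

def geodesic_length(p, q):
    """Length of the development segment XX' with endpoints
    (mu, 0) and (mu + q + 2p, q*sqrt(3)); equals formula (3.1)."""
    return math.hypot(q + 2 * p, q * math.sqrt(3.0))

for p, q in [(1, 0), (1, 1), (1, 2), (2, 3)]:
    direct = 2.0 * math.sqrt(p * p + p * q + q * q)
    assert abs(geodesic_length(p, q) - direct) < 1e-12
    print((p, q), round(direct, 4))
```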
###### Proposition 1.
(see [19]) For each pair of coprime integers $(p,q)$ there exists a simple
closed geodesic intersecting the midpoints of two pairs of opposite edges of
the regular tetrahedron in Euclidean space.
###### Proposition 2.
(see [19]) The development of the tetrahedron obtained by unrolling along a
closed geodesic consists of four equal polygons, and any two adjacent polygons
can be transformed into each other by a rotation through an angle $\pi$ around
the midpoint of their common edge.
###### Lemma 3.1.
Let $\gamma$ be a simple closed geodesic of type $(p,q)$ on a regular
tetrahedron in Euclidean space such that $\gamma$ intersects the midpoints of
two pairs of opposite edges. Then the distance $h$ from the tetrahedron’s
vertices to $\gamma$ satisfies the inequality
$h\geq\frac{\sqrt{3}}{4\sqrt{p^{2}+pq+q^{2}}}.$ (3.2)
###### Proof.
Let us take a regular tetrahedron $A_{1}A_{2}A_{3}A_{4}$ in Euclidean space
with the edge of length $1$. Suppose $\gamma$ intersects the edge $A_{1}A_{2}$
at the midpoint $X$. Consider the development of the tetrahedron along
$\gamma$ starting from the point $X$ and introduce a Cartesian coordinate
system as described above. The geodesic $\gamma$ unrolls into the segment
$XX^{\prime}$ lying on the line $y=\frac{q\sqrt{3}}{q+2p}(x-\frac{1}{2})$ (see
Figure 1). The segment $XX^{\prime}$ intersects the edges $A_{1}A_{2}$ at the
points $(x_{b},y_{b})=\left(\frac{2(q+2p)k+q}{2q},k\sqrt{3}\right)$, where
$k\leq q$. Since $\gamma$ does not pass through a vertex of the tiling,
$x_{b}$ cannot be an integer. Hence on the edge $A_{1}A_{2}$ the distance
from the vertices to the points of $\gamma$ is not less than $1/(2q)$.
In the same way, on the edge $A_{3}A_{2}$ the distance from the vertices to
the points of $\gamma$ is not less than $1/(2p)$.
Figure 2:
Choose the point $B_{1}$ on the edge $A_{2}A_{1}$ and the point $B_{2}$ on the
edge $A_{2}A_{3}$ such that the length of $A_{2}B_{1}$ is $1/(2q)$ and the
length of $A_{2}B_{2}$ equals $1/(2p)$ (see Figure 2). The distance $h$ from
the vertex $A_{2}$ to $\gamma$ is not less than the height $A_{2}H$ of the
triangle $B_{1}A_{2}B_{2}$. The length of $B_{1}B_{2}$ equals
$\frac{\sqrt{p^{2}+pq+q^{2}}}{2pq}$. Then the length of $A_{2}H$ is
$|A_{2}H|=\frac{\sqrt{3}}{4\sqrt{p^{2}+pq+q^{2}}}.$
Hence the inequality (3.2) is proved. ∎
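The last two lengths in the proof can be checked numerically: taking the stated side lengths $|A_{2}B_{1}|=1/(2q)$, $|A_{2}B_{2}|=1/(2p)$ and $|B_{1}B_{2}|=\sqrt{p^{2}+pq+q^{2}}/(2pq)$, Heron's formula recovers exactly the height appearing in (3.2). The following check is ours, for illustration:

```python
import math

def height_A2H(p, q):
    """Height from A2 onto B1B2 in the triangle B1 A2 B2,
    computed from the three side lengths via Heron's formula."""
    a = 1.0 / (2.0 * q)                                   # |A2 B1|
    b = 1.0 / (2.0 * p)                                   # |A2 B2|
    c = math.sqrt(p * p + p * q + q * q) / (2.0 * p * q)  # |B1 B2|
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return 2.0 * area / c

for p, q in [(1, 1), (1, 2), (2, 3)]:
    bound = math.sqrt(3.0) / (4.0 * math.sqrt(p * p + p * q + q * q))
    assert abs(height_A2H(p, q) - bound) < 1e-9
print("height matches the bound in (3.2)")
```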
We introduce some definitions following [18]. A broken line on a tetrahedron
is a curve consisting of line segments which consecutively connect points on
the edges of this tetrahedron. A generalized geodesic on a tetrahedron is a
closed broken line with the following properties:
(1) it does not have points of self-intersection, and its adjacent segments
lie on different faces;
(2) it crosses more than three edges of the tetrahedron and does not pass
through the tetrahedron’s vertices.
###### Proposition 3.
(V. Protasov [18]) For every generalized geodesic on a tetrahedron in
Euclidean space there exists a simple closed geodesic on a regular tetrahedron
in Euclidean space that is equivalent to this generalized geodesic.
## 4 Geodesics of type $(0,1)$ and $(1,1)$ on a regular tetrahedron in
$\mathbb{S}^{3}$
Recall that a simple closed geodesic $\gamma$ has type $(p,q)$ if it
has $p$ vertices on each of two opposite edges of the tetrahedron, $q$
vertices on each of another two opposite edges, and $p+q$ vertices on each of
the remaining two opposite edges. If $q=0$ and $p=1$, then the geodesic
consists of four segments that consecutively intersect four edges of the
tetrahedron and does not cross one pair of opposite edges.
###### Lemma 4.1.
On a regular tetrahedron in spherical space there exist three different simple
closed geodesics of type $(0,1)$. They coincide under isometries of the
tetrahedron.
###### Proof.
Consider a regular tetrahedron $A_{1}A_{2}A_{3}A_{4}$ in $\mathbb{S}^{3}$ with
the faces angle $\alpha$ where $\pi/3<\alpha<2\pi/3$. Let $X_{1}$ and $X_{2}$
be the midpoints of $A_{1}A_{4}$ and $A_{3}A_{2}$, and $Y_{1}$, $Y_{2}$ be the
midpoints of $A_{4}A_{2}$ and $A_{1}A_{3}$ respectively. Join these points
consecutively with segments through the faces. We obtain a closed broken
line $X_{1}Y_{1}X_{2}Y_{2}$ on the tetrahedron. Since the points $X_{1}$,
$Y_{1}$, $X_{2}$ and $Y_{2}$ are midpoints, the triangles
$X_{1}A_{4}Y_{1}$, $Y_{1}A_{2}X_{2}$, $X_{2}A_{3}Y_{2}$ and $Y_{2}A_{1}X_{1}$
are equal. It follows that the broken line $X_{1}Y_{1}X_{2}Y_{2}$ is a simple
closed geodesic of type $(0,1)$ on a regular tetrahedron in spherical space
(see Figure 3). Choosing the midpoints of other pairs of opposite edges, we
can construct the other two geodesics of type $(0,1)$ on the tetrahedron. ∎
Figure 3: Simple closed geodesic of type $(0,1)$ on a regular tetrahedron in
$\mathbb{S}^{3}$
###### Lemma 4.2.
On a regular tetrahedron in spherical space with the faces angle
$\alpha<\pi/2$ there exist three simple closed geodesics of type $(1,1)$.
###### Proof.
Consider a regular tetrahedron $A_{1}A_{2}A_{3}A_{4}$ in $\mathbb{S}^{3}$ with
face angle $\alpha$, where $\pi/3<\alpha<\pi/2$. As above, the points
$X_{1}$ and $X_{2}$ are the midpoints of $A_{1}A_{4}$ and $A_{3}A_{2}$, and
$Y_{1}$, $Y_{2}$ are the midpoints of $A_{4}A_{2}$ and $A_{1}A_{3}$.
Develop two adjacent faces $A_{1}A_{4}A_{3}$ and $A_{4}A_{3}A_{2}$ into the
plane and join the points $X_{1}$ and $Y_{1}$ with a line segment. Since
$\alpha<\pi/2$, the segment $X_{1}Y_{1}$ is contained inside the
development and meets the edge $A_{4}A_{2}$ at a right angle. Then develop
another two adjacent faces $A_{4}A_{1}A_{2}$ and $A_{1}A_{2}A_{3}$ and
construct the segment $Y_{1}X_{2}$. In the same way join the points $X_{2}$
and $Y_{2}$ within the faces $A_{2}A_{3}A_{4}$ and $A_{3}A_{4}A_{1}$, and join
$Y_{2}$ and $X_{1}$ within $A_{1}A_{2}A_{3}$ and $A_{4}A_{1}A_{2}$ (see Figure
4). Since the points $X_{1}$, $Y_{1}$, $X_{2}$ and $Y_{2}$ are the midpoints,
the triangles $X_{1}A_{4}Y_{1}$, $Y_{1}A_{2}X_{2}$,
$X_{2}A_{3}Y_{2}$ and $Y_{2}A_{1}X_{1}$ are congruent. Hence the segments
$X_{1}Y_{1}$, $Y_{1}X_{2}$, $X_{2}Y_{2}$, $Y_{2}X_{1}$ form a simple closed
geodesic of type $(1,1)$ on the tetrahedron.
The other two geodesics of type $(1,1)$ on the tetrahedron can be constructed
in the same way by choosing the midpoints of the other pairs of opposite edges. ∎
Figure 4: Simple closed geodesic of type $(1,1)$ on a regular tetrahedron in
$\mathbb{S}^{3}$
###### Lemma 4.3.
On a regular tetrahedron in spherical space with face angle
$\alpha\geq\pi/2$ there exist only three simple closed geodesics, and all of
them have type $(0,1)$.
###### Proof.
Consider a regular tetrahedron in spherical space with face angle
$\alpha\geq\pi/2$. A geodesic unrolls to a line segment inside the development
of the tetrahedron, and for $\alpha\geq\pi/2$ the two faces adjacent to a vertex
develop to an angle $2\alpha\geq\pi$ at that vertex; hence the geodesic cannot
intersect, in succession, three edges of the tetrahedron coming out from the
same vertex.
If a simple closed geodesic on the tetrahedron is of type $(p,q)$, where
$p=q=1$ or $1\leq p<q$, then this geodesic intersects, in succession, three
edges of the tetrahedron starting at the same vertex (see [18]). Only a simple
closed geodesic of type $(0,1)$ intersects two edges of the tetrahedron that
share a common vertex and does not intersect the third. It follows that on a
regular tetrahedron in spherical space with face angle
$\alpha\in\left[\pi/2,2\pi/3\right)$ there exist only the three simple closed
geodesics of type $(0,1)$, and no others. ∎
In the next sections we will assume that $\alpha$ satisfies
$\pi/3<\alpha<\pi/2$.
## 5 The length of a simple closed geodesic on a regular tetrahedron in
$\mathbb{S}^{3}$
###### Lemma 5.1.
The length of a simple closed geodesic on a regular tetrahedron in spherical
space is less than $2\pi$.
###### Proof.
Consider a regular tetrahedron $A_{1}A_{2}A_{3}A_{4}$ in $\mathbb{S}^{3}$ with
face angle $\alpha$, where $\pi/3<\alpha<\pi/2$. The spherical
space $\mathbb{S}^{3}$ of curvature $1$ is realized as the unit three-
dimensional sphere in four-dimensional Euclidean space. Hence the tetrahedron
$A_{1}A_{2}A_{3}A_{4}$ is contained in an open hemisphere. Consider the
Euclidean space tangent to this hemisphere at the center of the tetrahedron (by
'center of the tetrahedron' we mean the center of the circumscribed sphere of the
tetrahedron). The central projection of the hemisphere to this tangent space
maps the regular tetrahedron in spherical space to a regular tetrahedron in
Euclidean space. A simple closed geodesic $\gamma$ on $A_{1}A_{2}A_{3}A_{4}$
is mapped to a generalized geodesic on a regular Euclidean tetrahedron. From
Proposition 3 we know that there exists a simple closed geodesic on a regular
tetrahedron in Euclidean space equivalent to this generalized geodesic. It
follows that a simple closed geodesic on a regular tetrahedron in
$\mathbb{S}^{3}$ is also uniquely determined by a pair of coprime integers
$(p,q)$ and has the same structure as a closed geodesic on a regular
tetrahedron in Euclidean space. We investigate this structure following [18].
By a 'vertex of a geodesic' we mean a point of the geodesic on an edge. A
vertex of a geodesic is called a catching node if it and its two adjacent
vertices lie on the three edges coming out from the same vertex $A_{i}$ of the
tetrahedron and are the vertices of the geodesic nearest to $A_{i}$ on these
edges.
###### Proposition 4.
(V. Protasov [18]) Let $\gamma^{1}_{1}$ and $\gamma^{2}_{1}$ be the segments of
a simple closed geodesic $\gamma$ starting at a catching node on a regular
tetrahedron, let $\gamma^{1}_{2}$ and $\gamma^{2}_{2}$ be the segments following
$\gamma^{1}_{1}$ and $\gamma^{2}_{1}$, and so on. Then for each
$i=2,\dots,2p+2q-1$ the segments $\gamma^{1}_{i}$ and $\gamma^{2}_{i}$ lie on
the same face of the tetrahedron, and there are no other geodesic points
between them. The segments $\gamma^{1}_{2p+2q}$ and $\gamma^{2}_{2p+2q}$ meet at
the second catching node of the geodesic.
Figure 5:
Suppose $\gamma$ has $q$ points on the edges $A_{1}A_{2}$ and $A_{3}A_{4}$,
$p$ points on $A_{1}A_{4}$ and $A_{2}A_{3}$, and $p+q$ points on $A_{2}A_{4}$
and $A_{1}A_{3}$. Consider the catching node $B_{0}$ of $\gamma$ at the edge
$A_{4}A_{2}$. The adjacent geodesic vertices $B^{1}_{1}$ and $B^{2}_{1}$ are
the points of $\gamma$ nearest to $A_{4}$ at the edges $A_{4}A_{1}$ and
$A_{4}A_{3}$ respectively. The geodesic segments $B_{0}B^{1}_{1}$ and
$B_{0}B^{2}_{1}$ correspond to $\gamma^{1}_{1}$ and $\gamma^{2}_{1}$ (see
Figure 5). If we develop the faces $A_{1}A_{2}A_{4}$ and $A_{2}A_{4}A_{3}$
into the plane, then the segments $\gamma^{1}_{1}$ and $\gamma^{2}_{1}$ form
one line segment. The triangle $B^{1}_{1}A_{4}B^{2}_{1}$ in the
development is called a catching triangle (see Figure 7).
The segments $\gamma^{1}_{2p+2q}$ and $\gamma^{2}_{2p+2q}$ meet at the second
catching node $B_{pq}$ of the geodesic. We will assume that $B_{pq}$ is the
geodesic vertex at the edge $A_{1}A_{3}$ nearest to $A_{1}$. The adjacent
geodesic vertices $B^{1}_{pq}$ and $B^{2}_{pq}$ are the points of
$\gamma$ nearest to $A_{1}$ at $A_{1}A_{2}$ and $A_{1}A_{4}$ respectively. They form
the second catching triangle $B^{1}_{pq}A_{1}B^{2}_{pq}$.
From the catching triangles $B^{1}_{1}A_{4}B^{2}_{1}$ and
$B^{1}_{pq}A_{1}B^{2}_{pq}$ we obtain the inequalities
$|B^{1}_{1}B^{2}_{1}|<|B^{1}_{1}A_{4}|+|A_{4}B^{2}_{1}|;\;\;|B^{2}_{pq}B^{1}_{pq}|<|B^{2}_{pq}A_{1}|+|A_{1}B^{1}_{pq}|.$
(5.1)
Now let us develop the tetrahedron onto the two-dimensional sphere, starting
from the face $A_{1}A_{4}A_{3}$ and going along the segments $\gamma^{1}_{i}$ and
$\gamma^{2}_{i}$, $i=2,\dots,2p+2q-1$. The segments $\gamma^{1}_{2}$ and
$\gamma^{2}_{2}$ start at the points $B^{1}_{1}$ and $B^{2}_{1}$ respectively,
then they intersect the edge $A_{1}A_{3}$. The other segments $\gamma^{1}_{i}$ and
$\gamma^{2}_{i}$, $i=3,\dots,2p+2q-2$, are unrolled into two line segments
that intersect the same edges in the same order, and there are no other
geodesic points between them. The last segments $\gamma^{1}_{2p+2q-1}$ and
$\gamma^{2}_{2p+2q-1}$ intersect the edge $A_{2}A_{4}$, then go within
$A_{1}A_{2}A_{4}$ and end at the points $B^{1}_{pq}$ and $B^{2}_{pq}$
respectively (see Figure 7). It follows that the tetrahedron's vertices
$A_{4}$ and $A_{1}$ lie inside the spherical lune formed by the two great arcs
containing $B^{1}_{1}B^{1}_{pq}$ and $B^{2}_{1}B^{2}_{pq}$. We obtain a
convex hexagon $B^{1}_{1}A_{4}B^{2}_{1}B^{2}_{pq}A_{1}B^{1}_{pq}$ on the
sphere.
Figure 6:
Figure 7:
From the inequalities (5.1) it follows that the length of the geodesic
$\gamma$ is less than the perimeter of the hexagon
$B^{1}_{1}A_{4}B^{2}_{1}B^{2}_{pq}A_{1}B^{1}_{pq}$. Since the perimeter of a
convex polygon on a unit sphere is less than $2\pi$, we get that on a regular
tetrahedron in spherical space with face angle $\alpha<\pi/2$ the length
of a simple closed geodesic is less than $2\pi$.
From Lemma 4.3 we know that if the face angle $\alpha$ satisfies the
condition $\pi/2\leq\alpha<2\pi/3$, then on a regular tetrahedron there exist
only three simple closed geodesics, and they have type $(0,1)$. The length of
these geodesics equals
$L_{0,1}=4\arccos\left(\frac{\sin\frac{3\alpha}{2}}{2\sin\frac{\alpha}{2}}\right).$
(5.2)
It is easy to check that $L_{0,1}<2\pi$ when $\pi/2\leq\alpha<2\pi/3$. ∎
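As a numerical sanity check of the last claim (it is not part of the proof), one can evaluate formula (5.2) on a grid of face angles in $[\pi/2,2\pi/3)$ and confirm that $L_{0,1}$ stays below $2\pi$, approaching it as $\alpha\to 2\pi/3$:

```python
import math

def L01(alpha):
    # Length of a type-(0,1) geodesic, equation (5.2).
    return 4 * math.acos(math.sin(1.5 * alpha) / (2 * math.sin(alpha / 2)))

# Face angles from pi/2 up to (almost) 2*pi/3.
alphas = [math.pi / 2 + k * (math.pi / 6 - 1e-3) / 50 for k in range(51)]
lengths = [L01(a) for a in alphas]
```

At $\alpha=\pi/2$ this returns $4\pi/3$, in agreement with $4\arccos(1/2)$.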
###### Remark 1.
Lemma 5.1 can be considered as a particular case of the result of [20], proved
by the first author, which generalizes V. Toponogov's theorem [12] to
two-dimensional Alexandrov spaces.
## 6 Uniqueness of a simple closed geodesic on a regular tetrahedron in
$\mathbb{S}^{3}$
In spherical space the analogue of Proposition 1 holds.
###### Lemma 6.1.
On a regular tetrahedron in spherical space a simple closed geodesic
passes through the midpoints of two pairs of opposite edges.
###### Proof.
Let $\gamma$ be a simple closed geodesic on a regular tetrahedron
$A_{1}A_{2}A_{3}A_{4}$ in $\mathbb{S}^{3}$. In Section 5 we showed that
there exists a simple closed geodesic $\widetilde{\gamma}$ on a regular
tetrahedron in Euclidean space such that $\widetilde{\gamma}$ is equivalent to
$\gamma$. By Proposition 1 we may assume that $\widetilde{\gamma}$ intersects the
midpoints $\widetilde{X}_{1}$ and $\widetilde{X}_{2}$ of the edges
$A_{1}A_{2}$ and $A_{3}A_{4}$ of the tetrahedron. Denote by $X_{1}$ and
$X_{2}$ the vertices of $\gamma$ at the edges $A_{1}A_{2}$ and $A_{3}A_{4}$ of
the spherical tetrahedron such that $X_{1}$ and $X_{2}$ correspond to the
points $\widetilde{X}_{1}$ and $\widetilde{X}_{2}$.
Consider the development of the tetrahedron along $\gamma$ starting from the
point $X_{1}$ on a two-dimensional unit sphere. The geodesic $\gamma$
unrolls into the line segment $X_{1}X^{\prime}_{1}$ of length less than
$2\pi$ inside the development. Denote by $T_{1}$ and $T_{2}$ the parts of the
development along $X_{1}X_{2}$ and $X_{2}X^{\prime}_{1}$ respectively.
On the tetrahedron $A_{1}A_{2}A_{3}A_{4}$ mark the midpoints $M_{1}$ and
$M_{2}$ of the edges $A_{1}A_{2}$ and $A_{3}A_{4}$ respectively. The rotation by
the angle $\pi$ about the line $M_{1}M_{2}$ is an isometry of the tetrahedron.
It follows that the development of the tetrahedron is centrally symmetric with
center $M_{2}$.
On the other hand, the symmetry about $M_{2}$ interchanges $T_{1}$ and $T_{2}$. The
point $X^{\prime}_{1}$ at the edge $A_{1}A_{2}$ of the part $T_{2}$ is mapped
to the point $\widehat{X}^{\prime}_{1}$ at the edge $A_{2}A_{1}$ of $T_{1}$
containing $X_{1}$, and the lengths of $A_{2}X_{1}$ and
$\widehat{X}^{\prime}_{1}A_{1}$ are equal.
The image of the point $X_{1}$ on $T_{1}$ is a point $\widehat{X}_{1}$ at the
edge $A_{1}A_{2}$ on $T_{2}$. Since $M_{2}$ is the midpoint of $A_{3}A_{4}$,
the symmetry maps the point $X_{2}$ at $A_{3}A_{4}$ to the point
$\widehat{X}_{2}$ at the same edge $A_{3}A_{4}$ such that the lengths of
$A_{4}X_{2}$ and $\widehat{X}_{2}A_{3}$ are equal. Thus the segment
$X_{1}X^{\prime}_{1}$ is mapped to the segment
$\widehat{X}^{\prime}_{1}\widehat{X}_{1}$ inside the development.
Figure 8:
Figure 9:
Suppose the segments $\widehat{X}^{\prime}_{1}\widehat{X}_{2}$ and
$X_{1}X_{2}$ intersect at a point $Z_{1}$ inside $T_{1}$. Then the segments
$\widehat{X}_{2}\widehat{X}_{1}$ and $X_{2}X^{\prime}_{1}$ intersect at a
point $Z_{2}$ inside $T_{2}$, and the point $Z_{2}$ is symmetric to $Z_{1}$
with respect to $M_{2}$ (see Figure 9). We obtain two great arcs $X_{1}X^{\prime}_{1}$
and $\widehat{X}^{\prime}_{1}\widehat{X}_{1}$ intersecting in two points. It
follows that $Z_{1}$ and $Z_{2}$ are antipodal points on the sphere, and the
length of the geodesic segment $Z_{1}X_{2}Z_{2}$ equals $\pi$.
Now consider the development of the tetrahedron along $\gamma$ starting from
the point $X_{2}$. This development also consists of the spherical polygons
$T_{2}$ and $T_{1}$, but in this case they are glued along the edge $A_{1}A_{2}$
and are centrally symmetric with center $M_{1}$ (see Figure 9).
By analogy with the previous case, apply the symmetry with respect to $M_{1}$. The segment
$X_{2}X_{1}X^{\prime}_{2}$ is mapped to the segment
$\widehat{X}_{2}\widehat{X}_{1}\widehat{X}^{\prime}_{2}$ inside the
development. Since the symmetries with respect to $M_{1}$ and $M_{2}$ correspond to
the same isometry of the tetrahedron, the arcs $X_{2}X_{1}X^{\prime}_{2}$
and $\widehat{X}_{2}\widehat{X}_{1}\widehat{X}^{\prime}_{2}$ also intersect at
the points $Z_{1}$ and $Z_{2}$ (see Figure 9). It follows that the length of
the geodesic segment $Z_{1}X_{1}Z_{2}$ also equals $\pi$. Hence the length of
the geodesic $\gamma$ on a regular tetrahedron in spherical space equals
$2\pi$, which contradicts Lemma 5.1. We conclude that the segments
$\widehat{X}^{\prime}_{1}\widehat{X}_{2}$ and $X_{1}X_{2}$ either do not
intersect or coincide on the part $T_{1}$ of the development.
Figure 10:
If the segments $X_{1}X_{2}$ and $\widehat{X}^{\prime}_{1}\widehat{X}_{2}$ do
not intersect on the development, then they form the quadrilateral
$X_{1}X_{2}\widehat{X}_{2}\widehat{X}^{\prime}_{1}$ on the unit sphere. Since
$\gamma$ is a closed geodesic, $\angle A_{1}X_{1}X_{2}+\angle
A_{2}\widehat{X}^{\prime}_{1}\widehat{X}_{2}=\pi$. Furthermore,
$\angle
X_{1}X_{2}A_{3}+\angle\widehat{X}^{\prime}_{1}\widehat{X}_{2}A_{4}=\pi$. We
obtain a convex quadrilateral on the sphere with the sum of inner angles
$2\pi$ (see Figure 10). By the Gauss–Bonnet theorem, the integral of the
Gaussian curvature over the interior of
$X_{1}X_{2}\widehat{X}_{2}\widehat{X}^{\prime}_{1}$ on the sphere equals zero,
so the quadrilateral degenerates. Hence the segments $X_{1}X_{2}$ and
$\widehat{X}^{\prime}_{1}\widehat{X}_{2}$ coincide under the symmetry of the
development. We obtain that the points $X_{1}$ and $X_{2}$ of the geodesic
$\gamma$ are the midpoints of the edges $A_{1}A_{2}$ and $A_{3}A_{4}$
respectively.
Similarly it can be proved that $\gamma$ passes through the midpoints of the
second pair of opposite edges of the tetrahedron. ∎
###### Corollary 6.1.
If two simple closed geodesics on a regular tetrahedron in spherical space
intersect the edges of the tetrahedron in the same order, then they coincide.
## 7 Lower bound for the length of a simple closed geodesic on a regular
tetrahedron in $\mathbb{S}^{3}$
###### Lemma 7.1.
On a regular tetrahedron with face angle $\alpha$ in spherical space the
length of a simple closed geodesic of type $(p,q)$ satisfies the inequality
$L_{p,q}>2\sqrt{p^{2}+pq+q^{2}}\frac{\sqrt{4\sin^{2}\frac{\alpha}{2}-1}}{\sin\frac{\alpha}{2}}.$
(7.1)
###### Proof.
Consider a regular tetrahedron $A_{1}A_{2}A_{3}A_{4}$ in $\mathbb{S}^{3}$ with
face angle $\alpha$, and let $\gamma$ be a simple closed geodesic of type
$(p,q)$ on it.
Each of the faces of this tetrahedron is a regular spherical triangle.
Consider a two-dimensional unit sphere containing the face $A_{1}A_{2}A_{3}$.
Construct the Euclidean plane $\Pi$ passing through the points $A_{1}$,
$A_{2}$ and $A_{3}$. The intersection of the sphere with $\Pi$ is a small
circle. A ray starting at the sphere's center $O$ and going through a point of
the triangle $A_{1}A_{2}A_{3}$ intersects the plane $\Pi$. Thus we get a
geodesic map between the sphere and the plane $\Pi$. The image of the
spherical triangle $A_{1}A_{2}A_{3}$ is the triangle $\bigtriangleup
A_{1}A_{2}A_{3}$ in the Euclidean plane $\Pi$. The edges of $\bigtriangleup
A_{1}A_{2}A_{3}$ are the chords joining the vertices of the spherical
triangle. From (2.1) it follows that the length $\widetilde{a}$ of the plane
triangle's edge equals
$\widetilde{a}=\frac{\sqrt{4\sin^{2}\frac{\alpha}{2}-1}}{\sin\frac{\alpha}{2}}.$
(7.2)
The segments of the geodesic $\gamma$ lying inside $A_{1}A_{2}A_{3}$ are
mapped to straight line segments inside $\bigtriangleup A_{1}A_{2}A_{3}$
(see Figure 11).
Figure 11:
In a similar way the other faces $A_{2}A_{3}A_{4}$,
$A_{2}A_{4}A_{1}$ and $A_{1}A_{4}A_{3}$ of the tetrahedron are mapped to the plane triangles
$\bigtriangleup A_{2}A_{3}A_{4}$, $\bigtriangleup A_{2}A_{4}A_{1}$ and
$\bigtriangleup A_{1}A_{4}A_{3}$ respectively. Since the spherical tetrahedron
is regular, the constructed plane triangles are congruent. We can glue them
together, identifying the edges with the same labels. Hence we obtain a
regular tetrahedron in Euclidean space. Since the segments of $\gamma$ are
mapped to straight line segments within the plane triangles, they
form a generalized geodesic $\widetilde{\gamma}$ on the regular Euclidean
tetrahedron. Furthermore, $\widetilde{\gamma}$ is equivalent to $\gamma$, so
$\widetilde{\gamma}$ passes through the tetrahedron's edges in the same order
as a simple closed geodesic of type $(p,q)$.
Let us show that the length of $\gamma$ is greater than the length of
$\widetilde{\gamma}$. Consider an arc $MN$ of the geodesic $\gamma$ within the
face $A_{1}A_{2}A_{3}$. The rays $OM$ and $ON$ intersect the plane $\Pi$ at
the points $\widetilde{M}$ and $\widetilde{N}$ respectively. The line segment
$\widetilde{M}\widetilde{N}$ lying in $\bigtriangleup A_{1}A_{2}A_{3}$ is
the image of the arc $MN$ under the geodesic map (see Figure 11). If the
length of the arc $MN$ equals $2\phi$, then the length of the segment
$\widetilde{M}\widetilde{N}$ equals $2\sin\phi<2\phi$. Thus the length of $\gamma$
on a regular tetrahedron in spherical space is greater than the length of its
image $\widetilde{\gamma}$ on a regular tetrahedron in Euclidean space.
From Proposition 3 we know that on a regular tetrahedron in Euclidean space
there exists a simple closed geodesic $\widehat{\gamma}$ such that
$\widehat{\gamma}$ is equivalent to $\widetilde{\gamma}$. Since on the
development of the tetrahedron the geodesic $\widehat{\gamma}$ is a line
segment, while the generalized geodesic $\widetilde{\gamma}$ is a broken line,
the length of $\widehat{\gamma}$ is less than the length of
$\widetilde{\gamma}$.
Hence on a regular tetrahedron $A_{1}A_{2}A_{3}A_{4}$ in
$\mathbb{S}^{3}$ with face angle $\alpha$ the length $L_{p,q}$ of a
simple closed geodesic $\gamma$ of type $(p,q)$ is greater than the length of
a simple closed geodesic $\widehat{\gamma}$ of type $(p,q)$ on a regular
tetrahedron in Euclidean space with edge length $\widetilde{a}$. From the
equations (3.1) and (7.2) we get
$L_{p,q}>2\sqrt{p^{2}+pq+q^{2}}\frac{\sqrt{4\sin^{2}\frac{\alpha}{2}-1}}{\sin\frac{\alpha}{2}}.$
∎
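For the type $(0,1)$, where the explicit length formula (5.2) is available, the bound (7.1) can be checked numerically on a grid of admissible face angles (a sanity check only; all formulas are those stated above):

```python
import math

def L01(alpha):
    # Explicit length of a type-(0,1) geodesic, equation (5.2).
    return 4 * math.acos(math.sin(1.5 * alpha) / (2 * math.sin(alpha / 2)))

def lower_bound(p, q, alpha):
    # Right-hand side of inequality (7.1).
    s = math.sin(alpha / 2)
    return (2 * math.sqrt(p * p + p * q + q * q)
            * math.sqrt(4 * s * s - 1) / s)

# Face angles strictly between pi/3 and pi/2.
alphas = [math.pi / 3 + 0.01 + k * (math.pi / 6 - 0.01) / 40 for k in range(41)]
gaps = [L01(a) - lower_bound(0, 1, a) for a in alphas]
```

The gaps are small near $\alpha=\pi/3$, reflecting that the spherical geodesic is compared with the chord length of its Euclidean image.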
###### Theorem 1.
On a regular tetrahedron in spherical space with face angle $\alpha$ such
that
$\alpha>2\arcsin\sqrt{\frac{p^{2}+pq+q^{2}}{4(p^{2}+pq+q^{2})-\pi^{2}}},$
(7.3)
where $(p,q)$ is a pair of coprime integers, there is no simple closed
geodesic of type $(p,q)$.
###### Proof.
From Lemma 5.1 and the inequality (7.1) we get that if $\alpha$ satisfies the
inequality
$2\sqrt{p^{2}+pq+q^{2}}\frac{\sqrt{4\sin^{2}\frac{\alpha}{2}-1}}{\sin\frac{\alpha}{2}}>2\pi,$
(7.4)
then the necessary condition for the existence of a simple closed geodesic of
type $(p,q)$ on a regular tetrahedron with face angle $\alpha$ in spherical
space is violated. Solving (7.4) for $\alpha$, we get that if
$\alpha>2\arcsin\sqrt{\frac{p^{2}+pq+q^{2}}{4(p^{2}+pq+q^{2})-\pi^{2}}},$
then there are no simple closed geodesics of type $(p,q)$ on a tetrahedron
with face angle $\alpha$ in spherical space. ∎
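Numerically, the threshold angle in (7.3) is exactly the point where the lower bound (7.1) crosses $2\pi$. The following sketch verifies this for the sample type $(1,2)$ (a sanity check, not part of the proof):

```python
import math

def threshold(p, q):
    # Right-hand side of condition (7.3).
    n = p * p + p * q + q * q
    return 2 * math.asin(math.sqrt(n / (4 * n - math.pi ** 2)))

def lower_bound(p, q, alpha):
    # Right-hand side of (7.1); once it exceeds 2*pi, type (p,q) is excluded.
    s = math.sin(alpha / 2)
    return (2 * math.sqrt(p * p + p * q + q * q)
            * math.sqrt(4 * s * s - 1) / s)

p, q = 1, 2
a0 = threshold(p, q)
at_threshold = lower_bound(p, q, a0)
just_above = lower_bound(p, q, a0 + 1e-6)
just_below = lower_bound(p, q, a0 - 1e-6)
```

Since (7.1) is increasing in $\alpha$ on $(\pi/3,2\pi/3)$, the bound exceeds $2\pi$ precisely above the threshold.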
###### Corollary 7.1.
On a regular tetrahedron in spherical space there exist only finitely many
simple closed geodesics.
###### Proof.
As the integers $p,q$ go to infinity,
$\lim\limits_{p,q\to\infty}2\arcsin\sqrt{\frac{p^{2}+pq+q^{2}}{4(p^{2}+pq+q^{2})-\pi^{2}}}=2\arcsin\frac{1}{2}=\frac{\pi}{3},$
and the right-hand side of (7.3) decreases to $\pi/3$. Hence, for a fixed face
angle $\alpha>\pi/3$, the condition (7.3) is satisfied by all but finitely
many pairs $(p,q)$, so simple closed geodesics of only finitely many types
$(p,q)$ can exist. A simple closed geodesic of type $(p,q)$ with large $p,q$
can exist only on a regular tetrahedron in spherical space with face angle
$\alpha$ close to $\pi/3$. ∎
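The finiteness can be seen concretely: for a fixed face angle, Theorem 1 leaves only a short list of candidate types. The following sketch (a numerical illustration with an arbitrarily chosen $\alpha$) enumerates the pairs $(p,q)$, $0\leq p\leq q$ coprime, that are not excluded by condition (7.3):

```python
import math

def threshold(p, q):
    # RHS of (7.3); it is undefined when 4(p^2+pq+q^2) <= pi^2
    # or when the argument of arcsin exceeds 1 (the smallest types).
    n = p * p + p * q + q * q
    if 4 * n <= math.pi ** 2:
        return None
    arg = n / (4 * n - math.pi ** 2)
    return None if arg > 1 else 2 * math.asin(math.sqrt(arg))

alpha = math.pi / 3 + 0.05  # sample face angle
candidates = [
    (p, q)
    for q in range(1, 60)
    for p in range(0, q + 1)
    if math.gcd(p, q) == 1
    and (threshold(p, q) is None or threshold(p, q) >= alpha)
]
```

For this $\alpha$ only the types $(0,1)$, $(1,1)$, $(1,2)$, $(1,3)$, $(2,3)$ and $(1,4)$ survive.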
For the pairs $p=0$, $q=1$ and $p=1$, $q=1$ the condition (7.3) cannot be
satisfied, so Theorem 1 does not exclude the corresponding geodesics.
Geodesics of these types were considered in Section 4.
## 8 Sufficient condition for the existence of a simple closed geodesic on a
regular tetrahedron in $\mathbb{S}^{3}$
In the previous sections we assumed that the Gaussian curvature of the
tetrahedron's faces in spherical space equals $1$. In this case the faces of a
tetrahedron were regular spherical triangles with angles $\alpha$ on a unit
two-dimensional sphere, and the length $a$ of the edges was a function of
$\alpha$ given by (2.1). In the current section we assume that the
tetrahedron's faces are spherical triangles with angle $\alpha$ on a
sphere of radius $R=1/a$. In this case the length of the tetrahedron's edges
equals $1$, and the curvature of the faces is $a^{2}$.
Since $\alpha>\pi/3$, we can write $\alpha=\pi/3+\varepsilon$, where
$\varepsilon>0$. Taking into account Lemma 4.3, we also assume
$\varepsilon<\pi/6$. Let us prove some subsidiary results.
###### Lemma 8.1.
The length of the edges of a regular tetrahedron in spherical space of
curvature one satisfies the inequality
$a<\pi\sqrt{2\cos\frac{\pi}{12}}\cdot\sqrt{\varepsilon},$ (8.1)
where $\alpha=\frac{\pi}{3}+\varepsilon$ is the value of the face angle.
###### Proof.
From the equality (2.1) it follows that
$\sin a=\frac{\sqrt{4\sin^{2}\frac{\alpha}{2}-1}}{2\sin^{2}\frac{\alpha}{2}}.$
Substituting $\alpha=\frac{\pi}{3}+\varepsilon$ into this equality, we get
$\sin
a=\frac{\sqrt{\sin\frac{\varepsilon}{2}\cos\left(\frac{\pi}{6}-\frac{\varepsilon}{2}\right)}}{\sin^{2}\left(\frac{\pi}{6}+\frac{\varepsilon}{2}\right)}.$
Since $\varepsilon<\frac{\pi}{6}$, we have
$\cos\left(\frac{\pi}{6}-\frac{\varepsilon}{2}\right)<\cos\frac{\pi}{12}$,
$\sin\left(\frac{\pi}{6}+\frac{\varepsilon}{2}\right)>\sin\frac{\pi}{6}$ and
$\sin\frac{\varepsilon}{2}<\frac{\varepsilon}{2}$. Using these estimates, we
obtain
$\sin a<2\sqrt{2\cos\frac{\pi}{12}}\cdot\sqrt{\varepsilon}.$ (8.2)
Since $a<\frac{\pi}{2}$, we have $\sin a>\frac{2}{\pi}a$. Substituting this
estimate into (8.2), we get the required inequality (8.1). ∎
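The estimate (8.1) can be checked against the exact edge length recovered from the displayed formula for $\sin a$ (a numerical sanity check; $\arcsin$ returns the correct branch because $a\leq\pi/2$ when $\alpha\leq\pi/2$):

```python
import math

C = math.pi * math.sqrt(2 * math.cos(math.pi / 12))  # the constant in (8.1)

def edge_length(eps):
    # Exact edge length a from
    # sin a = sqrt(sin(eps/2) cos(pi/6 - eps/2)) / sin^2(pi/6 + eps/2).
    s = math.sqrt(math.sin(eps / 2) * math.cos(math.pi / 6 - eps / 2))
    return math.asin(s / math.sin(math.pi / 6 + eps / 2) ** 2)

eps_grid = [k * (math.pi / 6) / 200 for k in range(1, 200)]
ok = all(edge_length(e) < C * math.sqrt(e) for e in eps_grid)
```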
Consider a parametrization of a two-dimensional sphere $S^{2}$ of radius $R$
in three-dimensional Euclidean space:
$\begin{cases}x=R\sin\phi\cos\theta\\\ y=R\sin\phi\sin\theta\\\
z=-R\cos\phi\end{cases},$ (8.3)
where $\phi\in[0,\pi]$, $\theta\in[0,2\pi)$. Let the point $P$ have the
coordinates $\phi=r/R$, $\theta=0$, where $r/R<\pi/2$, and let the point $X_{1}$
correspond to $\phi=0$. Apply the central projection of the hemisphere
$\phi\in[0,\pi/2]$, $\theta\in[0,2\pi)$ to the tangent plane at the point
$X_{1}$.
###### Lemma 8.2.
The central projection of the hemisphere of radius $R=1/a$ to the tangent
plane at the point $X_{1}$ maps the angle $\alpha=\pi/3+\varepsilon$ with
vertex $P(R\sin\frac{r}{R},0,-R\cos\frac{r}{R})$ on this hemisphere to an
angle $\widehat{\alpha}_{r}$ on the plane which satisfies the inequality
$\Big{|}\widehat{\alpha}_{r}-\frac{\pi}{3}\Big{|}<\pi\tan^{2}\frac{r}{R}+\varepsilon.$
(8.4)
###### Proof.
Construct two planes $\Pi_{1}$ and $\Pi_{2}$ passing through the center of the
hemisphere and the point $P(R\sin\frac{r}{R},0,-R\cos\frac{r}{R})$:
$\Pi_{1}:a_{1}\cos\frac{r}{R}\>x+\sqrt{1-a_{1}^{2}}\>\>y+a_{1}\sin\frac{r}{R}\>z=0;$
$\Pi_{2}:a_{2}\cos\frac{r}{R}\>x+\sqrt{1-a_{2}^{2}}\>\>y+a_{2}\sin\frac{r}{R}\>z=0,$
where
$|a_{1}|,|a_{2}|\leq 1.$ (8.5)
Let the angle between the two planes $\Pi_{1}$ and $\Pi_{2}$ equal $\alpha$.
Then
$\cos\alpha=a_{1}a_{2}+\sqrt{(1-a_{1}^{2})(1-a_{2}^{2})}.$ (8.6)
Figure 12:
The tangent plane to the sphere $S^{2}$ at the point $X_{1}$ is given by the
equation $z=-R$. This tangent plane intersects the planes $\Pi_{1}$ and $\Pi_{2}$
along two lines which form the angle $\widehat{\alpha}_{r}$ (see Figure 12),
and
$\cos\widehat{\alpha}_{r}=\frac{a_{1}a_{2}\cos^{2}\frac{r}{R}+\sqrt{(1-a_{1}^{2})(1-a_{2}^{2})}}{\sqrt{1-a_{1}^{2}\sin^{2}\frac{r}{R}}\sqrt{1-a_{2}^{2}\sin^{2}\frac{r}{R}}}.$
(8.7)
From the equations (8.6) and (8.7) we get
$|\cos\widehat{\alpha}_{r}-\cos\alpha|<\frac{|a_{1}a_{2}\sin^{2}\frac{r}{R}|}{\sqrt{1-a_{1}^{2}\sin^{2}\frac{r}{R}}\sqrt{1-a_{2}^{2}\sin^{2}\frac{r}{R}}}.$
(8.8)
Applying the inequality (8.5) to the estimate (8.8), we get
$|\cos\widehat{\alpha}_{r}-\cos\alpha|<\tan^{2}\frac{r}{R}.$ (8.9)
From the equation
$|\cos\widehat{\alpha}_{r}-\cos\alpha|=\Big{|}2\sin\frac{\widehat{\alpha}_{r}-\alpha}{2}\sin\frac{\widehat{\alpha}_{r}+\alpha}{2}\Big{|}$
and the inequalities
$\left|\sin\frac{\widehat{\alpha}_{r}+\alpha}{2}\right|>\sin\frac{\pi}{6},\;\;\;\left|\sin\frac{\widehat{\alpha}_{r}-\alpha}{2}\right|>\frac{2}{\pi}\left|\frac{\widehat{\alpha}_{r}-\alpha}{2}\right|,$
which hold since $\alpha>\frac{\pi}{3}$ and $\widehat{\alpha}_{r}<\pi$, it follows that
$\frac{2}{\pi}\left|\frac{\widehat{\alpha}_{r}-\alpha}{2}\right|<|\cos\widehat{\alpha}_{r}-\cos\alpha|.$
Combining this with (8.9) and the equality $\alpha=\frac{\pi}{3}+\varepsilon$, we obtain
$\Big{|}\widehat{\alpha}_{r}-\frac{\pi}{3}\Big{|}<\pi\tan^{2}\frac{r}{R}+\varepsilon.$
∎
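The key step (8.9) can be probed numerically by sampling admissible $a_{1}$, $a_{2}$ and $r/R$, and comparing (8.6) with (8.7) (a sanity check only, with a deterministic random sample):

```python
import math
import random

def cos_alpha(a1, a2):
    # cos of the angle between Pi_1 and Pi_2, equation (8.6).
    return a1 * a2 + math.sqrt((1 - a1 * a1) * (1 - a2 * a2))

def cos_alpha_hat(a1, a2, t):
    # cos of the projected angle, equation (8.7); t stands for r/R.
    s, c = math.sin(t), math.cos(t)
    num = a1 * a2 * c * c + math.sqrt((1 - a1 * a1) * (1 - a2 * a2))
    den = math.sqrt(1 - a1 * a1 * s * s) * math.sqrt(1 - a2 * a2 * s * s)
    return num / den

random.seed(0)
excess = 0.0
for _ in range(10000):
    a1, a2 = random.uniform(-1, 1), random.uniform(-1, 1)
    t = random.uniform(0.0, 1.2)  # r/R < pi/2
    gap = abs(cos_alpha_hat(a1, a2, t) - cos_alpha(a1, a2))
    excess = max(excess, gap - math.tan(t) ** 2)
```

The difference of cosines never exceeds $\tan^{2}\frac{r}{R}$, in agreement with (8.9).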
Let us now consider an arc of length one starting at the point $P$ with the
coordinates $\phi=r/R$, $\theta=0$, where $r/R<\pi/2$, on the sphere (8.3). Apply
the central projection of this arc to the plane $z=-R$, which is tangent to
the sphere at the point $X_{1}$ ($\phi=0$).
###### Lemma 8.3.
The central projection of the hemisphere of radius $R=1/a$ to the tangent
plane at the point $X_{1}$ maps an arc of length one starting from the
point $P(R\sin\frac{r}{R},0,-R\cos\frac{r}{R})$ to a segment of length
$\widehat{l}_{r}$ satisfying the inequality
$\widehat{l}_{r}-1<\frac{\cos\frac{\pi}{12}\left(4+\pi^{2}(2r+1)^{2}\right)}{\left(1-\frac{2}{\pi}a(r+1)\right)^{2}}\cdot\varepsilon.$
(8.10)
###### Proof.
The point $P(R\sin\frac{r}{R},0,-R\cos\frac{r}{R})$ on the sphere $S^{2}$ is
projected to the point $\widehat{P}(R\tan\frac{r}{R},0,-R)$ on the tangent
plane $z=-R$.
Take a point $Q(Ra_{1},Ra_{2},Ra_{3})$ on the sphere such that the spherical
distance $PQ$ equals $1$. Then $\angle POQ=1/R$, where $O$ is the center of the
sphere $S^{2}$ (see Figure 13). Thus we get the following conditions on the
constants $a_{1},a_{2},a_{3}$:
$a_{1}\sin\frac{r}{R}-a_{3}\cos\frac{r}{R}=\cos\frac{1}{R};$ (8.11)
$a_{1}^{2}+a_{2}^{2}+a_{3}^{2}=1.$ (8.12)
The central projection to the plane $z=-R$ maps the point $Q$ to the point
$\widehat{Q}\left(-\frac{a_{1}}{a_{3}}R,-\frac{a_{2}}{a_{3}}R,-R\right)$. The
length of $\widehat{P}\widehat{Q}$ equals
$|\widehat{P}\widehat{Q}|=R\sqrt{\left(\frac{a_{1}}{a_{3}}-\tan\frac{r}{R}\right)^{2}+\frac{a^{2}_{2}}{a_{3}^{2}}}.$
(8.13)
Applying the method of Lagrange multipliers to find the extrema of
the length $|\widehat{P}\widehat{Q}|$, we get that the minimum of
$|\widehat{P}\widehat{Q}|$ is attained when $Q$ has the coordinates
$\left(R\sin\frac{r-1}{R},0,-R\cos\frac{r-1}{R}\right)$. Then
$|\widehat{P}\widehat{Q}|_{min}=R\left|\tan\frac{r}{R}-\tan\frac{r-1}{R}\right|=\frac{R\sin\frac{1}{R}}{\cos\frac{r}{R}\cos\frac{r-1}{R}}.$
Note that $|\widehat{P}\widehat{Q}|_{min}>1$.
Figure 13:
The maximum of $|\widehat{P}\widehat{Q}|$ is attained at the point
$Q\left(R\sin\frac{r+1}{R},0,-R\cos\frac{r+1}{R}\right)$. This maximum value
equals
$|\widehat{P}\widehat{Q}|_{max}=R\left|\tan\frac{r}{R}-\tan\frac{r+1}{R}\right|=\frac{R\sin\frac{1}{R}}{\cos\frac{r}{R}\cos\frac{r+1}{R}}.$
Since $R=1/a$, the length $\widehat{l}_{r}$ of the projection of $PQ$
satisfies
$\widehat{l}_{r}<\frac{\sin a}{a\cos(ar)\cos\big{(}a(r+1)\big{)}}.$
Applying the estimate $\sin a<a$, we obtain
$\widehat{l}_{r}-1<\frac{2-\cos
a-\cos\big{(}a(2r+1)\big{)}}{2\cos(ar)\cos\big{(}a(r+1)\big{)}}.$ (8.14)
From the inequality (8.2) it follows that
$1-\cos a=\frac{\sin^{2}a}{1+\cos a}\leq 8\cos\frac{\pi}{12}\;\varepsilon.$
(8.15)
In the same way, from the inequality (8.1) we obtain
$1-\cos\left(a(2r+1)\right)\leq
2\pi^{2}\cos\frac{\pi}{12}(2r+1)^{2}\;\varepsilon.$ (8.16)
Estimate the denominator of (8.14) using the inequality $\cos
x>1-\frac{2}{\pi}x$ for $0<x<\pi/2$. Applying the inequalities (8.15) and (8.16),
we get
$\widehat{l}_{r}-1<\frac{4\cos\frac{\pi}{12}+\pi^{2}\cos\frac{\pi}{12}(2r+1)^{2}}{\left(1-\frac{2}{\pi}a\left(r+1\right)\right)^{2}}\cdot\varepsilon.$
∎
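As a numerical sanity check of (8.10) (not part of the proof), one can compare the exact maximal projected length $R\sin\frac{1}{R}\big/\big(\cos\frac{r}{R}\cos\frac{r+1}{R}\big)$ with the right-hand side of (8.10), recovering $a$ from $\varepsilon$ as in the proof of Lemma 8.1:

```python
import math

def edge_length(eps):
    # Exact edge length a(eps) as in the proof of Lemma 8.1.
    s = math.sqrt(math.sin(eps / 2) * math.cos(math.pi / 6 - eps / 2))
    return math.asin(s / math.sin(math.pi / 6 + eps / 2) ** 2)

def projected_max(eps, r):
    # Maximal projected length with R = 1/a: sin(a) / (a cos(ar) cos(a(r+1))).
    a = edge_length(eps)
    return math.sin(a) / (a * math.cos(a * r) * math.cos(a * (r + 1)))

def rhs(eps, r):
    # 1 plus the right-hand side of inequality (8.10).
    a = edge_length(eps)
    num = math.cos(math.pi / 12) * (4 + math.pi ** 2 * (2 * r + 1) ** 2)
    return 1 + num / (1 - (2 / math.pi) * a * (r + 1)) ** 2 * eps

margins = [
    rhs(eps, r) - projected_max(eps, r)
    for eps in (0.01, 0.03, 0.05)
    for r in (0, 1, 2, 3)
    if edge_length(eps) * (r + 1) < math.pi / 2  # hemisphere condition
]
```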
###### Theorem 2.
Let $(p,q)$ be a pair of coprime integers, $0\leq p<q$, and let $\varepsilon$
satisfy
$\varepsilon<\min\left\\{\frac{\sqrt{3}}{4c_{0}\sqrt{p^{2}+q^{2}+pq}\;\sum_{i=0}^{\left[\frac{p+q}{2}\right]+2}\left(c_{l}(i)+\sum_{j=0}^{i}c_{\alpha}(j)\right)};\frac{1}{8\cos\frac{\pi}{12}(p+q)^{2}}\right\\},$
(8.17)
where
$c_{0}=\frac{3-\frac{(p+q+2)}{\pi\cos\frac{\pi}{12}(p+q)^{2}}-16\sum_{i=0}^{\left[\frac{p+q}{2}\right]+2}\tan^{2}\left(\frac{\pi
i}{2(p+q)}\right)}{1-\frac{(p+q+2)}{2\pi\cos\frac{\pi}{12}(p+q)^{2}}-8\sum_{i=0}^{\left[\frac{p+q}{2}\right]+2}\tan^{2}\left(\frac{\pi
i}{2(p+q)}\right)},$
$c_{l}(i)=\frac{\cos\frac{\pi}{12}(p+q)^{2}\left(4+\pi^{2}(2i+1)^{2}\right)}{\left(p+q-i-1\right)^{2}},$
$c_{\alpha}(j)=4\left(8\pi(p+q)^{2}\cos\frac{\pi}{12}\tan^{2}\frac{\pi
j}{2(p+q)}+1\right).$
Then on a regular tetrahedron in spherical space with face angle
$\alpha=\pi/3+\varepsilon$ there exists a simple closed geodesic of type
$(p,q)$, unique up to a rigid motion of the tetrahedron.
###### Proof.
Take a pair of coprime integers $(p,q)$, where $0<p<q$. Consider a simple
closed geodesic $\widetilde{\gamma}$ of type $(p,q)$ on a regular tetrahedron
$\widetilde{A}_{1}\widetilde{A}_{2}\widetilde{A}_{3}\widetilde{A}_{4}$ with
edges of length one in Euclidean space. Suppose that
$\widetilde{\gamma}$ passes through the midpoints $\widetilde{X}_{1}$,
$\widetilde{X}_{2}$ and $\widetilde{Y}_{1}$, $\widetilde{Y}_{2}$ of the edges
$\widetilde{A}_{1}\widetilde{A}_{2}$, $\widetilde{A}_{3}\widetilde{A}_{4}$ and
$\widetilde{A}_{1}\widetilde{A}_{3}$, $\widetilde{A}_{4}\widetilde{A}_{2}$
respectively.
Consider the development $\widetilde{T}_{pq}$ of the tetrahedron along
$\widetilde{\gamma}$ starting from the point $\widetilde{X}_{1}$. The geodesic
unfolds to the segment
$\widetilde{X}_{1}\widetilde{Y}_{1}\widetilde{X}_{2}\widetilde{Y}_{2}\widetilde{X}^{\prime}_{1}$
inside the development $\widetilde{T}_{pq}$. From Proposition 2 we know that
the parts of the development along the geodesic segments
$\widetilde{X}_{1}\widetilde{Y}_{1}$, $\widetilde{Y}_{1}\widetilde{X}_{2}$,
$\widetilde{X}_{2}\widetilde{Y}_{2}$ and
$\widetilde{Y}_{2}\widetilde{X}^{\prime}_{1}$ are congruent, and any two adjacent
polygons can be transformed into each other by a rotation through the angle
$\pi$ around the midpoint of their common edge (see Figure 15).
Figure 14:
Figure 15:
Now consider a two-dimensional sphere $S^{2}$ of radius $R=1/a$, where $a$
depends on $\alpha$ according to (2.1). On this sphere take several
copies of the regular spherical triangle with angle $\alpha$ at the vertices,
where $\pi/3<\alpha<\pi/2$. Arrange these triangles in the same order as the
faces of the Euclidean tetrahedron are developed along $\widetilde{\gamma}$ into
the plane. In other words, we construct a polygon $T_{pq}$ on the sphere $S^{2}$
formed by the same sequence of regular triangles as the polygon
$\widetilde{T}_{pq}$ on the Euclidean plane. Denote the vertices of $T_{pq}$
according to the vertices of $\widetilde{T}_{pq}$. By construction, the
spherical polygon $T_{pq}$ has the same central symmetry properties as
the Euclidean polygon $\widetilde{T}_{pq}$. Since the groups of isometries of regular
tetrahedra in spherical space and in Euclidean space coincide, $T_{pq}$
corresponds to the development of a regular tetrahedron with face angle
$\alpha$ in spherical space.
Denote by $X_{1}$, $X^{\prime}_{1}$ and $X_{2}$, $Y_{1}$, $Y_{2}$ the
midpoints of the edges $A_{1}A_{2}$, $A_{3}A_{4}$, $A_{1}A_{3}$, $A_{4}A_{2}$
on $T_{pq}$ such that these midpoints correspond to the points
$\widetilde{X}_{1}$, $\widetilde{X}^{\prime}_{1}$ and $\widetilde{X}_{2}$,
$\widetilde{Y}_{1}$, $\widetilde{Y}_{2}$ on the Euclidean development
$\widetilde{T}_{pq}$. Construct the great arcs $X_{1}Y_{1}$, $Y_{1}X_{2}$,
$X_{2}Y_{2}$, $Y_{2}X^{\prime}_{1}$ on the sphere. From the central symmetry
properties of $T_{pq}$ we obtain that these arcs form one great arc
$X_{1}X^{\prime}_{1}$ on $S^{2}$ (see Figure 15). Since the polygon $T_{pq}$
is not convex, we want to find $\alpha$ such that the polygon $T_{pq}$
contains the arc $X_{1}Y_{1}$ inside. Then the whole arc
$X_{1}X^{\prime}_{1}$ will also be contained inside $T_{pq}$, and
$X_{1}X^{\prime}_{1}$ will correspond to the simple closed geodesic of type
$(p,q)$ on a regular tetrahedron with face angle $\alpha$ in spherical
space.
In the following we consider the part of the polygon $T_{pq}$ along
$X_{1}Y_{1}$ and denote it also by $T_{pq}$. This part consists of $p+q$
regular spherical triangles with edges of length one. If $a$ satisfies the
inequality
$a(p+q)<\frac{\pi}{2},$ (8.18)
then the polygon $T_{pq}$ is contained inside an open hemisphere. Since
$\alpha=\pi/3+\varepsilon$, it follows from the condition
$(\ref{a_above_epsilon})$ that the estimate (8.18) holds if
$\varepsilon<\frac{1}{8\cos\frac{\pi}{12}(p+q)^{2}}.$ (8.19)
In this case the length of the arc $X_{1}Y_{1}$ is less than $\pi/(2a)$, so
$X_{1}Y_{1}$ satisfies the necessary condition of Lemma 5.1.
Apply the central projection of $T_{pq}$ from the center of the sphere
$S^{2}$ onto the tangent plane $T_{X_{1}}S^{2}$ at the point $X_{1}$. Since
the central projection is a geodesic map, the image of the spherical polygon
$T_{pq}$ on $T_{X_{1}}S^{2}$ is a polygon $\widehat{T}_{pq}$.
Denote by $\widehat{A}_{i}$ the vertex of $\widehat{T}_{pq}$ that is the
image of the vertex $A_{i}$ of $T_{pq}$. The arc $X_{1}Y_{1}$ maps to the
line segment $\widehat{X}_{1}\widehat{Y}_{1}$ on $T_{X_{1}}S^{2}$, which joins
the midpoints of the edges $\widehat{A}_{1}\widehat{A}_{2}$ and
$\widehat{A}_{1}\widehat{A}_{3}$. If for some $\alpha$ the segment
$\widehat{X}_{1}\widehat{Y}_{1}$ lies inside the polygon $\widehat{T}_{pq}$,
then the arc $X_{1}Y_{1}$ is also contained inside $T_{pq}$ on the sphere.
The vector $\widehat{X}_{1}\widehat{Y}_{1}$ equals
$\widehat{X}_{1}\widehat{Y}_{1}=\widehat{a_{0}}+\widehat{a_{1}}+\dots+\widehat{a_{s}}+\widehat{a}_{s+1},$
(8.20)
where $\widehat{a_{i}}$ are the sequential vectors of the $\widehat{T}_{pq}$
boundary, $\widehat{a_{0}}=\widehat{X_{1}}\widehat{A_{2}}$,
$\widehat{a}_{s+1}=\widehat{A_{1}}\widehat{Y_{1}}$, and
$s=\left[\frac{p+q}{2}\right]+1$ (if we take the boundary of
$\widehat{T}_{pq}$ from the other side of $\widehat{X}_{1}\widehat{Y}_{1}$,
then $s=\left[\frac{p+q}{2}\right]$ ) (see Figure 16).
On the other hand, on the Euclidean plane $T_{X_{1}}S^{2}$ there exists a
development $\widetilde{T}_{pq}$ of a regular Euclidean tetrahedron
$\widetilde{A}_{1}\widetilde{A}_{2}\widetilde{A}_{3}\widetilde{A}_{4}$ with
edges of length one along a simple closed geodesic $\widetilde{\gamma}$.
The development $\widetilde{T}_{pq}$ is equivalent to $T_{pq}$, and hence it
is equivalent to $\widehat{T}_{pq}$. The segment
$\widetilde{X}_{1}\widetilde{Y}_{1}$ lies inside $\widetilde{T}_{pq}$ and
corresponds to a segment of $\widetilde{\gamma}$.
Let the development $\widetilde{T}_{pq}$ be placed so that the point
$\widetilde{X}_{1}$ coincides with the point $\widehat{X}_{1}$ of
$\widehat{T}_{pq}$ and the vector $\widehat{X}_{1}\widehat{A}_{2}$ has the
same direction as $\widetilde{X}_{1}\widetilde{A}_{2}$. Similarly, the vector
$\widetilde{X}_{1}\widetilde{Y}_{1}$ equals
$\widetilde{X}_{1}\widetilde{Y}_{1}=\widetilde{a_{0}}+\widetilde{a_{1}}+\dots+\widetilde{a_{s}}+\widetilde{a}_{s+1},$
(8.21)
where $\widetilde{a_{i}}$ are the sequential vectors of the
$\widetilde{T}_{pq}$ boundary, $s=\left[\frac{p+q}{2}\right]+1$ and
$\widetilde{a_{0}}=\widetilde{X}_{1}\widetilde{A}_{2}$,
$\widetilde{a}_{s+1}=\widetilde{A}_{1}\widetilde{Y}_{1}$ (see Figure 16).
Figure 16:
Suppose the minimal distance from the vertices of $\widetilde{T}_{pq}$ to the
segment $\widetilde{X}_{1}\widetilde{Y}_{1}$ is attained at the vertex
$\widetilde{A}_{k}$ and equals $\widetilde{h}$ from formula (3.2). Let us
estimate the distance $\widehat{h}$ between the segment
$\widehat{X}_{1}\widehat{Y}_{1}$ and the corresponding vertex
$\widehat{A}_{k}$ of $\widehat{T}_{pq}$. A geodesic on a regular tetrahedron
in Euclidean space intersects at most three edges starting from the same
vertex of the tetrahedron. It follows that the interior angles of the polygon
$\widetilde{T}_{pq}$ are not greater than $\frac{4\pi}{3}$, and the angles at
the corresponding vertices of $\widehat{T}_{pq}$ are not greater than
$4\widehat{\alpha}_{i}$. Applying the formula (8.4) for $1\leq i\leq s$, we
get that the angle between $\widehat{a_{i}}$ and $\widetilde{a_{i}}$
satisfies the inequality
$\angle(\widehat{a_{i}},\widetilde{a_{i}})<\sum_{j=0}^{i}4\left(\pi\tan^{2}\frac{j}{R}+\varepsilon\right).$
(8.22)
Since $R=1/a$, then using (8.1) we obtain
$\tan\frac{j}{R}<\tan\left(j\pi\sqrt{2\cos\frac{\pi}{12}}\;\sqrt{\varepsilon}\right).$
(8.23)
The following inequality holds under the condition (8.19):
$\tan\left(j\pi\sqrt{2\cos\frac{\pi}{12}}\;\sqrt{\varepsilon}\right)<\tan\frac{\pi
j}{2(p+q)}.$ (8.24)
If $0<x<x_{0}<\pi/2$, then $\tan x<\frac{\tan x_{0}}{x_{0}}x$, since the
function $\tan t/t$ is increasing on $(0,\pi/2)$. From (8.24)
it follows
$\tan\left(j\pi\sqrt{2\cos\frac{\pi}{12}}\;\sqrt{\varepsilon}\right)<2(p+q)\tan\frac{\pi
j}{2(p+q)}\sqrt{2\cos\frac{\pi}{12}}\;\sqrt{\varepsilon}.$ (8.25)
Therefore from (8.23) and (8.25) we get
$\tan\frac{j}{R}<2(p+q)\tan\frac{\pi
j}{2(p+q)}\sqrt{2\cos\frac{\pi}{12}}\;\sqrt{\varepsilon}.$ (8.26)
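The elementary inequality used in the step from (8.24) to (8.25), namely $\tan x<\frac{\tan x_{0}}{x_{0}}x$ for $0<x<x_{0}<\pi/2$, rests on the monotonicity of $\tan t/t$. A quick numerical spot check (our own illustration, not part of the proof):

```python
import math

# tan(t)/t is increasing on (0, pi/2), so for 0 < x < x0 < pi/2
# we have tan(x) <= (tan(x0)/x0) * x.
x0 = math.pi / 3
for k in range(1, 100):
    x = x0 * k / 100.0                                  # sample points 0 < x < x0
    assert math.tan(x) <= (math.tan(x0) / x0) * x + 1e-12
```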
Using (8.22) and (8.26) we obtain the final estimation for the angle between
the vectors $\widehat{a_{i}}$ and $\widetilde{a_{i}}$:
$\angle(\widehat{a_{i}},\widetilde{a_{i}})<\sum_{j=0}^{i}4\left(8\pi(p+q)^{2}\cos\frac{\pi}{12}\tan^{2}\frac{\pi
j}{2(p+q)}+1\right)\varepsilon.$ (8.27)
Figure 17:
Figure 18:
Now we estimate the length of the vector $\widehat{a_{i}}-\widetilde{a_{i}}$
(see Figure 18). By the triangle inequality,
$|\widehat{a_{i}}-\widetilde{a_{i}}|\leq\left|\frac{\widehat{a_{i}}}{|\widehat{a_{i}}|}-\widetilde{a_{i}}\right|+\left|\widehat{a_{i}}-\frac{\widehat{a_{i}}}{|\widehat{a_{i}}|}\right|.$
(8.28)
Since $\widetilde{a_{i}}$ is a unit vector, we have
$\left|\frac{\widehat{a_{i}}}{|\widehat{a_{i}}|}-\widetilde{a_{i}}\right|\leq\angle(\widehat{a_{i}},\widetilde{a_{i}})\;\;\;\textnormal{and}\;\;\;\left|\widehat{a_{i}}-\frac{\widehat{a_{i}}}{|\widehat{a_{i}}|}\right|\leq\widehat{l}_{i}-1.$
(8.29)
From the inequality (8.10) we get
$\left|\widehat{a_{i}}-\frac{\widehat{a_{i}}}{|\widehat{a_{i}}|}\right|<\frac{\cos\frac{\pi}{12}\left(4+\pi^{2}(2i+1)^{2}\right)}{\left(1-\frac{2}{\pi}a(i+1)\right)^{2}}\cdot\varepsilon.$
(8.30)
Using the estimation (8.18), we obtain
$\left|\widehat{a_{i}}-\frac{\widehat{a_{i}}}{|\widehat{a_{i}}|}\right|<\frac{\cos\frac{\pi}{12}(p+q)^{2}\left(4+\pi^{2}(2i+1)^{2}\right)}{\left(p+q-i-1\right)^{2}}\cdot\varepsilon.$
(8.31)
Therefore, from (8.28), (8.27) and (8.31) we get
$|\widehat{a_{i}}-\widetilde{a_{i}}|\leq\left(c_{l}(i)+\sum_{j=0}^{i}c_{\alpha}(j)\right)\varepsilon,$
(8.32)
where
$c_{l}(i)=\frac{\cos\frac{\pi}{12}(p+q)^{2}\left(4+\pi^{2}(2i+1)^{2}\right)}{\left(p+q-i-1\right)^{2}},$
(8.33) $c_{\alpha}(j)=4\left(8\pi(p+q)^{2}\cos\frac{\pi}{12}\tan^{2}\frac{\pi
j}{2(p+q)}+1\right).$ (8.34)
Using (8.32) we estimate the length of $\widehat{Y}_{1}\widetilde{Y}_{1}$:
$|\widehat{Y}_{1}\widetilde{Y}_{1}|<\sum_{i=0}^{s+1}|\widehat{a_{i}}-\widetilde{a_{i}}|<\sum_{i=0}^{s+1}\left(c_{l}(i)+\sum_{j=0}^{i}c_{\alpha}(j)\right)\varepsilon.$
(8.35)
From (8.27) it follows that the angle
$\angle\widehat{Y}_{1}\widehat{X}_{1}\widetilde{Y}_{1}$ satisfies
$\angle\widehat{Y}_{1}\widehat{X}_{1}\widetilde{Y}_{1}<\sum_{i=0}^{s+1}c_{\alpha}(i)\varepsilon.$
(8.36)
The distance between the vertices $\widehat{A}_{k}$ and $\widetilde{A}_{k}$
satisfies
$|\widehat{A}_{k}\widetilde{A}_{k}|<\sum_{i=0}^{k}\left(c_{l}(i)+\sum_{j=0}^{i}c_{\alpha}(j)\right)\varepsilon.$
(8.37)
We drop a perpendicular $\widehat{A}_{k}\widehat{H}$ from the vertex
$\widehat{A}_{k}$ onto the segment $\widehat{X}_{1}\widehat{Y}_{1}$; the
length of $\widehat{A}_{k}\widehat{H}$ equals $\widehat{h}$. Then we drop the
perpendicular $\widetilde{A}_{k}\widetilde{H}$ onto the segment
$\widetilde{X}_{1}\widetilde{Y}_{1}$; the length of
$\widetilde{A}_{k}\widetilde{H}$ equals $\widetilde{h}$.
Let the point $F$ on $\widetilde{X}_{1}\widetilde{Y}_{1}$ be such that the
segment $\widetilde{A}_{k}F$ intersects $\widehat{X}_{1}\widehat{Y}_{1}$ at a
right angle. Then the length of $\widetilde{A}_{k}F$ is not less than
$\widetilde{h}$. Let the point $G$ on $\widetilde{X}_{1}\widetilde{Y}_{1}$ lie
on the extension of the segment $\widehat{A}_{k}\widehat{H}$, and let $FK$ be
perpendicular to $\widehat{H}G$ (see Figure 18). Then the length of $FK$ is
not greater than the length of $\widehat{A}_{k}\widetilde{A}_{k}$, and $\angle
KFG=\angle\widehat{Y}_{1}\widehat{X}_{1}\widetilde{Y}_{1}$. From the triangle
$GFK$ we get that
$|FG|=\frac{|FK|}{\cos\angle\widehat{Y}_{1}\widehat{X}_{1}\widetilde{Y}_{1}}.$
(8.38)
Applying the inequality $\cos x>1-\frac{2}{\pi}x$, valid for
$0<x<\frac{\pi}{2}$, to (8.38), we obtain
$|FG|<\frac{|\widehat{A}_{k}\widetilde{A}_{k}|}{1-\frac{2}{\pi}\angle\widehat{Y}_{1}\widehat{X}_{1}\widetilde{Y}_{1}}.$
(8.39)
So from (8.36), (8.37), and (8.39) it follows
$|FG|<\frac{\sum_{i=0}^{k}\left(c_{l}(i)+\sum_{j=0}^{i}c_{\alpha}(j)\right)\varepsilon}{1-\sum_{i=0}^{s}\left(64\pi(p+q)^{2}\cos\frac{\pi}{12}\tan^{2}\frac{\pi
i}{2(p+q)}+\frac{8}{\pi}\right)\varepsilon}.$ (8.40)
From (8.19) and (8.40) we obtain
$|FG|<\frac{\sum_{i=0}^{k}\left(c_{l}(i)+\sum_{j=0}^{i}c_{\alpha}(j)\right)\varepsilon}{1-\frac{(p+q+2)}{2\pi\cos\frac{\pi}{12}(p+q)^{2}}-8\sum_{i=0}^{s+1}\tan^{2}\left(\frac{\pi
i}{2(p+q)}\right)}.$ (8.41)
From our construction it follows that
$\widetilde{h}\leq|\widetilde{A}_{k}F|\leq\widehat{h}+|\widehat{H}G|+|\widehat{A}_{k}\widetilde{A}_{k}|+|FG|.$
(8.42)
Note that $|\widehat{H}G|<|\widehat{Y}_{1}\widetilde{Y}_{1}|$. From Lemma 3.1
we know that the distance $\widetilde{h}$ satisfies the inequality
$\widetilde{h}>\frac{\sqrt{3}}{4\sqrt{p^{2}+q^{2}+pq}}.$
Hence from (8.42) it follows that
$\widehat{h}>\frac{\sqrt{3}}{4\sqrt{p^{2}+q^{2}+pq}}-|\widehat{Y}_{1}\widetilde{Y}_{1}|-|\widehat{A}_{k}\widetilde{A}_{k}|-|FG|.$
(8.43)
Applying the estimations (8.35), (8.37), (8.41) and the equality
$s=\left[\frac{p+q}{2}\right]+1$, we get
$\widehat{h}>\frac{\sqrt{3}}{4\sqrt{p^{2}+q^{2}+pq}}-c_{0}\sum_{i=0}^{\left[\frac{p+q}{2}\right]+2}\left(c_{l}(i)+\sum_{j=0}^{i}c_{\alpha}(j)\right)\varepsilon,$
(8.44)
where $c_{l}(i)$ is given by (8.33), $c_{\alpha}(j)$ is given by (8.34), and
$c_{0}=\frac{3-\frac{(p+q+2)}{\pi\cos\frac{\pi}{12}(p+q)^{2}}-16\sum_{i=0}^{\left[\frac{p+q}{2}\right]+2}\tan^{2}\left(\frac{\pi
i}{2(p+q)}\right)}{1-\frac{(p+q+2)}{2\pi\cos\frac{\pi}{12}(p+q)^{2}}-8\sum_{i=0}^{\left[\frac{p+q}{2}\right]+2}\tan^{2}\left(\frac{\pi
i}{2(p+q)}\right)}.$
From the inequality (8.44) we obtain that if $\varepsilon$ satisfies the
condition
$\varepsilon<\frac{\sqrt{3}}{4c_{0}\sqrt{p^{2}+q^{2}+pq}\;\sum_{i=0}^{\left[\frac{p+q}{2}\right]+2}\left(c_{l}(i)+\sum_{j=0}^{i}c_{\alpha}(j)\right)},$
(8.45)
then the distance from the vertices of the polygon $\widehat{T}_{pq}$ to
$\widehat{X}_{1}\widehat{Y}_{1}$ is greater than zero.
Taking the estimate (8.19) into account as well, we conclude that if
$\varepsilon<\min\left\\{\frac{\sqrt{3}}{4c_{0}\sqrt{p^{2}+q^{2}+pq}\;\sum_{i=0}^{\left[\frac{p+q}{2}\right]+2}\left(c_{l}(i)+\sum_{j=0}^{i}c_{\alpha}(j)\right)};\frac{1}{8\cos\frac{\pi}{12}(p+q)^{2}}\right\\},$
(8.46)
then the segment $\widehat{X}_{1}\widehat{Y}_{1}$ lies inside the polygon
$\widehat{T}_{pq}$. It follows that the arc $X_{1}Y_{1}$ on the sphere is also
contained inside the polygon $T_{pq}$. The arc $X_{1}Y_{1}$ corresponds to a
simple closed geodesic $\gamma$ of type $(p,q)$ on a regular tetrahedron with
face angle $\alpha=\pi/3+\varepsilon$ in spherical space. From Corollary 6.1
we get that this geodesic is unique up to a rigid motion of the tetrahedron.
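To get a feeling for how small the admissible $\varepsilon$ in (8.46) is, the bound can be evaluated numerically. The sketch below (our own illustration; the function names are ours) transcribes $c_{l}(i)$, $c_{\alpha}(j)$, and $c_{0}$ from (8.33), (8.34), and (8.44) and evaluates both bounds in (8.46) for a sample type $(p,q)=(5,4)$:

```python
import math

def eps_admissible(p, q):
    n = p + q
    C = math.cos(math.pi / 12)
    N = n // 2 + 2                     # upper summation limit [(p+q)/2]+2
    c_l = lambda i: C * n**2 * (4 + math.pi**2 * (2*i + 1)**2) / (n - i - 1)**2
    c_a = lambda j: 4 * (8 * math.pi * n**2 * C * math.tan(math.pi*j/(2*n))**2 + 1)
    tan2 = sum(math.tan(math.pi * i / (2 * n))**2 for i in range(N + 1))
    c0 = (3 - (n + 2) / (math.pi * C * n**2) - 16 * tan2) / \
         (1 - (n + 2) / (2 * math.pi * C * n**2) - 8 * tan2)   # c0 from (8.44)
    total = sum(c_l(i) + sum(c_a(j) for j in range(i + 1)) for i in range(N + 1))
    bound1 = math.sqrt(3) / (4 * c0 * math.sqrt(p*p + q*q + p*q) * total)  # (8.45)
    bound2 = 1 / (8 * C * n**2)                                            # (8.19)
    return min(bound1, bound2)

print(eps_admissible(5, 4))   # a very small positive number
```

For $(p,q)=(5,4)$ the bound (8.45) is far smaller than the hemisphere bound (8.19), illustrating that the admissible interval of face angles shrinks rapidly with the type of the geodesic.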
Note that the geodesic $\gamma$ is invariant under the rotation of the
tetrahedron through the angle $\pi$ about the line passing through the
midpoints of a pair of opposite edges of the tetrahedron. A rotation of the
tetrahedron through the angle $2\pi/3$ or $4\pi/3$ about the line passing
through a vertex of the tetrahedron and the center of its opposite face takes
$\gamma$ into another geodesic of type $(p,q)$.
The rotations about the lines connecting the other vertices of the tetrahedron
with the centers of their opposite faces produce geodesics already obtained.
Hence, if $\varepsilon$ satisfies the condition (8.46), then on a regular
tetrahedron with face angle $\alpha=\pi/3+\varepsilon$ in spherical space
there exist three different simple closed geodesics of type $(p,q)$. Theorem 2
is proved. ∎
B.Verkin Institute for Low Temperature Physics and Engineering of the National
Academy of Sciences of Ukraine, Kharkiv, 61103, Ukraine
# Linearization stability of reflection-asymmetric thin-shell wormholes with
double shadows
Naoki Tsukamoto
Department of General Science and Education, National Institute of Technology,
Hachinohe College, Aomori 039-1192, Japan
###### Abstract
Wormholes are hypothetical objects which can be black hole mimickers with
strong gravitational fields. Recently, Wielgus et al. have constructed
reflection-asymmetric thin-shell wormholes which are composed of the parts of
a Schwarzschild spacetime and a Reissner-Nordstrom spacetime and which have
two photon spheres of different sizes [M. Wielgus, J. Horak, F. Vincent,
and M. Abramowicz, Phys. Rev. D 102, 084044 (2020)]. They have discussed the
observational properties of the shadows, which show two photon rings of
different sizes as seen by an observer, and they have named such shadows
double shadows. In this paper, we study the linearization stability of the
reflection-asymmetric thin-shell wormholes with the double shadows.
## I Introduction
Recently, the LIGO and VIRGO Collaborations have detected gravitational waves
from black hole binaries Abbott:2016blz , and the Event Horizon Telescope
Collaboration has reported the shadow image of a black hole candidate at the
center of the giant elliptical galaxy M87 Akiyama:2019cqa . The theoretical
and observational aspects of compact objects in general relativity will
become more important than ever.
Wormholes are hypothetical objects with a non-trivial topology in general
relativity Visser_1995 ; Morris:1988cz . Morris and Thorne discussed the
traversability of wormholes and also showed that the energy conditions are
violated at least at the throat of a static and spherically symmetric wormhole
if one assumes general relativity without a cosmological constant
Morris:1988cz . Wormholes can be black hole mimickers because they can have
strong gravitational fields. For example, spherically symmetric wormholes with
strong gravitational fields can have unstable (stable) circular light orbits,
called photon spheres (antiphoton spheres) Perlick:2003vg ; Nandi:2006ds ;
Tsukamoto:2012xs ; Tsukamoto:2016qro ; Nandi:2018mzm ; Shaikh:2018oul ;
Shaikh:2019jfr ; Tsukamoto:2020uay ; Tsukamoto:2020bjm .
A thin-shell wormhole, in which the energy conditions are violated only at a
throat supported by a thin shell Lanczos:1922 ; Lanczos:1924 ; Israel:1966rt
; Poisson:2004 , was considered by Visser Visser:1989kg with the
Darmois-Israel matching Darmois:1927 ; Israel:1966rt ; Poisson:2004 . The
linearization stability of the thin shell of a Schwarzschild wormhole was
investigated by Poisson and Visser Poisson:1995sv , and then the stability of
thin-shell wormholes such as Reissner-Nordstrom wormholes Eiroa:2003wp ; Eiroa:2007qz ;
wormholes such as Reissner-Nordstrom wormholes Eiroa:2003wp ; Eiroa:2007qz ;
Eiroa:2008ky ; Eiroa:2009hm ; Kuhfittig:2010pb ; Sharif:2013lna ;
Sharif:2013tva ; Eid:2016axb ; HabibMazharimousavi:2017qfb ; Wang:2017gdp ;
Forghani:2019wgt , the other static and spherically symmetric wormholes
Kim:1992sh ; Barcelo:2000ta ; Ishak:2001az ; Lobo:2003xd ; Lobo:2005zu ;
Eiroa:2005pc ; Lobo:2005yv ; Rahaman:2006xb ; Rahaman:2007bf ; Eiroa:2007qz ;
Eiroa:2008ky ; Eiroa:2009hm ; Garcia:2011aa ; Eiroa:2015hrt , plane symmetric
wormholes Lemos:2008aj , cylindrical symmetric wormholes Eiroa:2004at ;
Bejarano:2006uj , higher-dimensional wormholes Thibeault:2005ha ;
Rahaman:2006vg ; Mazharimousavi:2010bm ; Kokubu:2014vwa ; Mehdizadeh:2015dta ;
Kokubu:2015spa , lower-dimensional wormholes Perry:1991qq ; Kim:1993bz ;
Rahaman:2011yh ; Eiroa:2013xj ; Tsukamoto:2018lsg , and wormholes in an
expanding spacetime LaCamera:2011qi have been discussed. The passability
Nakao:2013hba and nonlinear stability Akai:2017aro of thin-shell wormholes
have been studied.
Usually, two copies of a manifold are used to construct thin-shell wormholes.
It would be difficult to distinguish a reflection-symmetric wormhole spacetime
constructed from the parts of the copied black hole spacetimes from the
original black hole spacetime by astronomical observations if the wormhole has
photon spheres and if there are few light sources on the other side of the
throat. (In principle, one could distinguish wormholes from black holes. For
example, if there are stars bound by a wormhole on both sides of the wormhole,
the orbit of a star on one side can be affected by the gravity of the star on
the other side Simonetti:2020vhw ; Dai:2019mse .)
There have been few studies of reflection-asymmetric thin-shell wormholes,
such as Garcia:2011aa ; Eid:2017xcg ; Forghani:2018gza . Recently, the shadow
of a reflection-asymmetric thin-shell wormhole which is composed of the parts
of two Schwarzschild spacetimes with different masses was discussed by Wang et
al. Wang:2020emr . Wielgus et al. have constructed a reflection-asymmetric
thin-shell wormhole which is composed of the parts of a Schwarzschild
spacetime with a photon sphere and a Reissner-Nordstrom spacetime with another
photon sphere, and they have discussed the observational properties of its
shadow Wielgus:2020uqz . They have concluded that the shadow of the asymmetric
wormhole has two photon rings of different sizes as seen by an observer, and
they have named the shadow a double shadow. The asymmetric thin-shell wormhole
with the photon spheres can be distinguished from black holes by observations,
since light rays can be reflected by a potential wall near the throat
Wielgus:2020uqz .
In this paper, we investigate the linearization stability of the reflection-
asymmetric thin-shell wormhole which is composed of the parts of the
Schwarzschild spacetime and the Reissner-Nordstrom spacetime with the double
shadow constructed in Wielgus:2020uqz .
This paper is organized as follows. We review the Reissner-Nordstrom spacetime
very briefly in Sec. II and we construct the reflection-asymmetric thin-shell
wormhole with the double shadow in Sec. III. We investigate the linearization
stability of the reflection-asymmetric thin-shell wormhole in Sec. IV and we
summarize our results in Sec. V. Throughout this paper we use units in which
the speed of light and Newton's constant are unity.
## II Reissner-Nordstrom spacetime
A Reissner-Nordstrom spacetime has a line element
$\displaystyle
ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}(d\theta^{2}+\sin^{2}\theta
d\phi^{2}),$ (1)
where $f(r)$ is given by
$\displaystyle f(r)\equiv 1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}$ (2)
and where $M>0$ and $Q$ are the mass and the charge, respectively. It is a
black hole spacetime with an event horizon at $r=r_{\mathrm{EH}}\equiv
M+\sqrt{M^{2}-Q^{2}}$ for $Q^{2}\leq M^{2}$, while it has a naked singularity
for $M^{2}<Q^{2}$. There is a photon sphere at
$\displaystyle r=r_{\mathrm{PS}}\equiv\frac{3M+\sqrt{9M^{2}-8Q^{2}}}{2}$ (3)
for $Q^{2}\leq 9M^{2}/8$ and there is an antiphoton sphere at
$\displaystyle r=r_{\mathrm{APS}}\equiv\frac{3M-\sqrt{9M^{2}-8Q^{2}}}{2}$ (4)
for $M^{2}<Q^{2}\leq 9M^{2}/8$.
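The radii (3) and (4) are the two roots of the standard circular-photon-orbit condition $rf^{\prime}(r)=2f(r)$, i.e., the extrema of $f(r)/r^{2}$. A short numerical check (our own illustration; the sample values are arbitrary):

```python
import math

M, Q = 1.0, 0.9                        # sample mass and charge with Q^2 < M^2

f  = lambda r: 1 - 2*M/r + Q**2/r**2   # metric function, Eq. (2)
fp = lambda r: 2*M/r**2 - 2*Q**2/r**3  # f'(r)

disc = math.sqrt(9*M**2 - 8*Q**2)
r_ps  = (3*M + disc) / 2               # photon sphere, Eq. (3)
r_aps = (3*M - disc) / 2               # antiphoton sphere, Eq. (4)
r_eh  = M + math.sqrt(M**2 - Q**2)     # event horizon

# Both radii solve r f'(r) = 2 f(r), and the horizon is a root of f.
# For Q^2 <= M^2 the smaller root lies inside the horizon; it is a
# physical antiphoton sphere only for M^2 < Q^2 <= 9M^2/8, as stated above.
assert abs(r_ps * fp(r_ps) - 2*f(r_ps)) < 1e-9
assert abs(r_aps * fp(r_aps) - 2*f(r_aps)) < 1e-9
assert abs(f(r_eh)) < 1e-9
```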
## III Reflection-asymmetric thin-shell wormhole
In this section, we construct a wormhole spacetime without reflection symmetry,
or Z2 symmetry, by the Darmois-Israel matching Darmois:1927 ; Israel:1966rt ;
Poisson:2004 . We make two manifolds
$\mathcal{M}_{\pm}\equiv\left\\{r>a\right\\}$, where $a$ is a constant
satisfying $a>r_{\mathrm{EH}\pm}$, by removing $\Omega_{\pm}\equiv\left\\{r\leq
a\right\\}$ from two Reissner-Nordstrom spacetimes. The boundaries of the
manifolds $\mathcal{M}_{\pm}$ are timelike hypersurfaces
$\Sigma_{\pm}\equiv\left\\{r=a\right\\}$ and we identify the hypersurfaces,
$\Sigma\equiv\Sigma_{+}=\Sigma_{-}$. As a result, we obtain a manifold
$\mathcal{M}$ by gluing the manifolds $\mathcal{M}_{\pm}$ at a throat located
at $\Sigma$. The hypersurface $\Sigma$ is filled with matter with a
Dirac-distribution profile and is called the thin shell. We permit
$a=a(\tau)$, where $\tau$ is the proper time of the thin shell, since we are
interested in the stability of the thin shell $\Sigma$.
The line elements in the domains $\mathcal{M}_{\pm}$ are given by
$ds_{\pm}^{2}=-f_{\pm}(r)dt^{2}_{\pm}+\frac{dr^{2}}{f_{\pm}(r)}+r^{2}(d\theta^{2}+\sin^{2}\theta
d\phi^{2}),$ (5)
where $f_{\pm}(r)$ are given by
$\displaystyle f_{\pm}(r)\equiv
1-\frac{2M_{\pm}}{r}+\frac{Q_{\pm}^{2}}{r^{2}}.$ (6)
Note that the time coordinates $t_{\pm}$ are discontinuous on the hypersurface
$\Sigma$, while the coordinates $r$, $\theta$, and $\phi$ are continuous
across it.
We assume that we can set coordinates $y^{i}=(\tau,\theta,\phi)$ on both
sides of $\Sigma$. Let the thin shell be at $t_{\pm}=T_{\pm}(\tau)$ and
$r=a(\tau)$. We set the unit normal vectors of the thin shell as
$\displaystyle
n_{\mu\pm}dx^{\mu}_{\pm}=\pm\left(-\dot{a}dt_{\pm}+\dot{T}_{\pm}dr\right),$
(7)
where the overdot is a differentiation with respect to $\tau$. The four
velocity of the thin shell is given by
$u^{\mu}_{\pm}\partial_{\mu\pm}=\dot{T}_{\pm}\partial_{t\pm}+\dot{a}\partial_{r}$.
From the normalization of the four velocity $u^{\mu}_{\pm}u_{\mu\pm}=-1$, we
obtain
$f_{\pm}(a)\dot{T}^{2}_{\pm}-\frac{\dot{a}^{2}}{f_{\pm}(a)}=1,$ (8)
where $\dot{T}_{\pm}$ is given by
$\displaystyle\dot{T}_{\pm}=\frac{\sqrt{f_{\pm}+\dot{a}^{2}}}{f_{\pm}}.$ (9)
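Equation (9) indeed solves the normalization condition (8); a quick numerical spot check (our own illustration, with arbitrary sample values):

```python
import math

# For any f(a) > 0 and any throat velocity adot, Tdot from Eq. (9)
# satisfies the normalization f*Tdot^2 - adot^2/f = 1 of Eq. (8).
for f in (0.1, 0.5, 0.9):
    for adot in (0.0, 0.3, 1.7):
        Tdot = math.sqrt(f + adot**2) / f
        assert abs(f * Tdot**2 - adot**2 / f - 1.0) < 1e-9
```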
By using Eq. (8) and the basis vectors $e^{\mu}_{i\pm}\equiv\partial
x^{\mu}_{\pm}/\partial y^{i}$ given by
$\displaystyle
e^{\mu}_{\tau\pm}\partial_{\mu\pm}=\dot{T}_{\pm}\partial_{t\pm}+\dot{a}\partial_{r},$
(10) $\displaystyle e^{\mu}_{\theta\pm}\partial_{\mu\pm}=\partial_{\theta},$
(11) $\displaystyle e^{\mu}_{\phi\pm}\partial_{\mu\pm}=\partial_{\phi},$ (12)
the induced metric $h_{ij\pm}\equiv g_{\mu\nu\pm}e^{\mu}_{i\pm}e^{\nu}_{j\pm}$
on the hypersurface $\Sigma$ in $\mathcal{M}_{\pm}$ is given by
$\displaystyle ds_{\Sigma}^{2}$ $\displaystyle=$
$\displaystyle\left.ds_{\pm}^{2}\right|_{\Sigma}$ (13) $\displaystyle=$
$\displaystyle h_{ij\pm}dy^{i}dy^{j}$ $\displaystyle=$
$\displaystyle-d\tau^{2}+a^{2}\left(d\theta^{2}+\sin^{2}\theta
d\phi^{2}\right).$
This guarantees that the metric on the hypersurface $\Sigma$ is the same as
viewed from either side.
The thin shell satisfies Einstein equations
$S^{i}_{j}=-\frac{1}{8\pi}\left(\left[K^{i}_{j}\right]-\left[K\right]\delta^{i}_{j}\right),$
(14)
where the bracket $\left[\textrm{\boldmath$F$}\right]$ denotes the jump of any
function $F$ across $\Sigma$,
$\left[\textrm{\boldmath$F$}\right]\equiv\left.\textrm{\boldmath$F$}_{+}\right|_{\Sigma}-\left.\textrm{\boldmath$F$}_{-}\right|_{\Sigma},$
(15)
where $\textrm{\boldmath$F$}_{+}$ and $\textrm{\boldmath$F$}_{-}$ are $F$ in
$\mathcal{M}_{+}$ and $\mathcal{M}_{-}$, respectively, and $S^{i}_{j}$ is the
surface stress-energy tensor of the thin shell given by
$S^{i}_{j}=(\sigma+p)U^{i}U_{j}+p\delta^{i}_{j},$ (16)
where $U_{i}$ is given by $U_{i}dy^{i}\equiv
u_{\mu\pm}e^{\mu}_{i\pm}dy^{i}=-d\tau$, and where $\sigma=-S^{\tau}_{\tau}$
and $p=S^{\theta}_{\theta}=S^{\phi}_{\phi}$ are the surface energy density and
the surface pressure of the thin shell, respectively. Here, $K_{ij}$ is the
extrinsic curvature given by
$K_{ij}\equiv n_{\mu;\nu}e^{\mu}_{i}e^{\nu}_{j},$ (17)
where the semicolon denotes the covariant derivative. By using the normal vectors (7), the
extrinsic curvatures of the hypersurfaces in $\mathcal{M}_{\pm}$ are given by
$\displaystyle K^{\tau}_{\tau\pm}$ $\displaystyle=$ $\displaystyle\frac{\pm
1}{\sqrt{\dot{a}^{2}+f_{\pm}}}\left(\ddot{a}+\frac{f^{\prime}_{\pm}}{2}\right),$
(18) $\displaystyle K^{\theta}_{\theta\pm}$ $\displaystyle=$ $\displaystyle
K^{\phi}_{\phi\pm}=\frac{\pm\sqrt{\dot{a}^{2}+f_{\pm}}}{a},$ (19)
and the traces are obtained as
$K_{\pm}=\frac{\pm
1}{\sqrt{\dot{a}^{2}+f_{\pm}}}\left(\ddot{a}+\frac{f^{\prime}_{\pm}}{2}\right)\pm\frac{2}{a}\sqrt{\dot{a}^{2}+f_{\pm}}.$
(20)
From $(\tau,\tau)$ and $(\theta,\theta)$ components of the Einstein equations
(14), we obtain
$\sigma=-\frac{\sqrt{\dot{a}^{2}+f_{+}}}{4\pi
a}-\frac{\sqrt{\dot{a}^{2}+f_{-}}}{4\pi a}$ (21)
and
$\displaystyle p$ $\displaystyle=$
$\displaystyle\frac{1}{8\pi\sqrt{\dot{a}^{2}+f_{+}}}\left(\ddot{a}+\frac{\dot{a}^{2}+f_{+}}{a}+\frac{f^{\prime}_{+}}{2}\right)$
(22)
$\displaystyle+\frac{1}{8\pi\sqrt{\dot{a}^{2}+f_{-}}}\left(\ddot{a}+\frac{\dot{a}^{2}+f_{-}}{a}+\frac{f^{\prime}_{-}}{2}\right)$
and then we get, from Eqs. (21) and (22),
$\frac{d(\sigma\mathcal{A})}{d\tau}+p\frac{d\mathcal{A}}{d\tau}=0,$ (23)
where $\mathcal{A}\equiv 4\pi a^{2}$ is the area of the throat. Equation (23)
can be expressed as
$a\sigma^{\prime}+2(\sigma+p)=0,$ (24)
where the prime denotes the differentiation with respect to $a$ and
$\sigma^{\prime}=\dot{\sigma}/\dot{a}$. We assume that the thin shell is
filled with a barotropic fluid with $p=p(\sigma)$. From Eq. (24), we notice
that the surface density of the barotropic fluid can be expressed as a
function of $a$, i.e., $\sigma=\sigma(a)$. The equation of motion of the thin
shell is given, from Eq. (21), by
$\dot{a}^{2}+V(a)=0,$ (25)
where $V(a)$ is an effective potential defined by
$V(a)\equiv\bar{f}-\left(\frac{\Delta}{4\pi a\sigma}\right)^{2}-\left(2\pi
a\sigma\right)^{2},$ (26)
where $\bar{f}$ and $\Delta$ are given by
$\displaystyle\bar{f}\equiv\frac{f_{-}+f_{+}}{2}$ (27)
and
$\displaystyle\Delta\equiv\frac{f_{+}-f_{-}}{2},$ (28)
respectively. The derivative of $V$ with respect to $a$ is obtained as
$V^{\prime}=\bar{f}^{\prime}-\frac{\Delta\left[\Delta^{\prime}a\sigma-\Delta(\sigma+a\sigma^{\prime})\right]}{8\pi^{2}a^{3}\sigma^{3}}-8\pi^{2}a\sigma(\sigma+a\sigma^{\prime})$
(29)
and, from Eq. (24), it can be rewritten as
$V^{\prime}=\bar{f}^{\prime}-\frac{\Delta\left[\Delta^{\prime}a\sigma+\Delta(\sigma+2p)\right]}{8\pi^{2}a^{3}\sigma^{3}}+8\pi^{2}a\sigma(\sigma+2p).$
(30)
By using Eq. (24) again, the second derivative of $V$ is obtained as
$\displaystyle V^{\prime\prime}=$
$\displaystyle\bar{f}^{\prime\prime}-\frac{\Delta^{\prime
2}}{8\pi^{2}a^{2}\sigma^{2}}-\frac{\Delta}{8\pi^{2}a^{4}\sigma^{4}}\left[4\Delta^{\prime}(\sigma+2p)a\sigma\right.$
(31)
$\displaystyle\left.+\Delta^{\prime\prime}a^{2}\sigma^{2}-2\Delta\sigma(\sigma+p)(1+2\beta^{2})+3\Delta(\sigma+2p)^{2}\right]$
$\displaystyle-8\pi^{2}\left[(\sigma+2p)^{2}+2\sigma(\sigma+p)(1+2\beta^{2})\right],$
where $\beta^{2}\equiv dp/d\sigma=p^{\prime}/\sigma^{\prime}$.
Here and hereafter, we impose a constraint
$\displaystyle f_{+}(a)=f_{-}(a)$ (32)
and we concentrate on the case that the manifold $\mathcal{M}_{-}$ is the part
of the Schwarzschild black hole spacetime, i.e., $Q_{-}=0$, as in Ref.
Wielgus:2020uqz . The constraint is expressed as
$\displaystyle Q_{+}^{2}=2aM_{-}(\xi-1),$ (33)
where $\xi$ is an asymmetry parameter defined by $\xi\equiv M_{+}/M_{-}$. The
reflection-asymmetric thin-shell wormhole with a double shadow must have its
throat in the domains $r_{\mathrm{EH}-}<a<r_{\mathrm{PS}-}$ and
$r_{\mathrm{EH}+}<a<r_{\mathrm{PS}+}$. The permitted parameters $(\xi,a/M_{-})$
for the reflection-asymmetric thin-shell wormhole with the double shadow are
shown in Fig. 1.
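The constraint (33) follows directly from (32) with $Q_{-}=0$: equating $1-2M_{+}/a+Q_{+}^{2}/a^{2}$ with $1-2M_{-}/a$ gives $Q_{+}^{2}=2a(M_{+}-M_{-})=2aM_{-}(\xi-1)$. A numerical check (our own illustration, with arbitrary sample parameters):

```python
def f(r, M, Q2):
    # Metric function of Eq. (6), with Q^2 passed directly.
    return 1 - 2 * M / r + Q2 / r**2

M_minus, xi, a = 1.0, 1.5, 2.4          # arbitrary sample parameters
M_plus = xi * M_minus
Q2_plus = 2 * a * M_minus * (xi - 1)    # constraint (33)

# Eq. (32): the two metric functions match at the throat.
assert abs(f(a, M_plus, Q2_plus) - f(a, M_minus, 0.0)) < 1e-12
```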
Figure 1: Permitted parameters $(\xi,a/M_{-})$ of the asymmetric wormhole with
the double shadow are in shaded zones. $\mathcal{M}_{+}$ is the part of the
Reissner-Nordstrom black hole (BH) spacetime in a deep blue shaded zone while
it is the part of the Reissner-Nordstrom naked singularity (NS) spacetime in
light blue shaded zones. $\mathcal{M}_{-}$ is the subset of the Schwarzschild
black hole spacetime. A blue dotted line denotes $a=r_{\mathrm{PS}+}$. A red
dashed line is explained in Sec. IV.
## IV Stability of thin-shell wormhole
We consider the linearization stability of a static wormhole with a thin shell
at $a=a_{0}$ under the constraint (32) and $Q_{-}=0$. The surface energy
density $\sigma_{0}$ and pressure $p_{0}$ of the thin shell are given by
$\sigma_{0}=-\frac{\sqrt{f_{0}}}{2\pi a_{0}}$ (34)
and
$\displaystyle
p_{0}=\frac{1}{8\pi\sqrt{f_{0}}}\left(\frac{2f_{0}}{a_{0}}+\bar{f}^{\prime}_{0}\right),$
(35)
respectively. Here and hereafter a function with subscript $0$ means the
function at $a=a_{0}$. Since $V_{0}=V^{\prime}_{0}=0$ is satisfied, the
effective potential can be expanded around $a=a_{0}$ as
$V(a)=\frac{V_{0}^{\prime\prime}}{2}(a-a_{0})^{2}+O\left(\left(a-a_{0}\right)^{3}\right),$
(36)
where $V_{0}^{\prime\prime}$ is given by
$V_{0}^{\prime\prime}=A_{0}-B_{0}\left(1+2\beta^{2}_{0}\right),$ (37)
where $A_{0}$ and $B_{0}$ are defined by
$A_{0}\equiv\bar{f}^{\prime\prime}_{0}-\frac{\Delta^{\prime
2}_{0}}{2f_{0}}-\frac{\bar{f}^{\prime 2}_{0}}{2f_{0}}$ (38)
and
$B_{0}\equiv\frac{2f_{0}}{a_{0}^{2}}-\frac{\bar{f}^{\prime}_{0}}{a_{0}},$ (39)
respectively. The thin shell is stable (unstable) when
$V_{0}^{\prime\prime}>0$ $(V_{0}^{\prime\prime}<0)$. Thus, the thin shell is
stable when
$\beta^{2}_{0}<\frac{1}{2}\left(\frac{A_{0}}{B_{0}}-1\right)$ (40)
and $B_{0}>0$ hold, or when
$\beta^{2}_{0}>\frac{1}{2}\left(\frac{A_{0}}{B_{0}}-1\right)$ (41)
and $B_{0}<0$ hold. Figure 2 shows the parameter zones of $(\xi,a_{0}/M_{-})$
for $B_{0}>0$ and for $B_{0}<0$, together with their boundary $B_{0}=0$.
Figure 2: $B_{0}<0$ holds in a deep blue shaded zone and $B_{0}>0$ holds in a
light blue shaded zone. The red dashed line $a_{0}/M_{-}=(7-\xi)/2$ for
$(32+6\sqrt{2})/17\leq\xi\leq 2+\sqrt{2}/2$ denotes the boundary $B_{0}=0$.
The boundary $B_{0}=0$ is given by
$a_{0}/M_{-}=(7-\xi)/2\quad\mathrm{for}\quad\frac{32+6\sqrt{2}}{17}\leq\xi\leq
2+\frac{\sqrt{2}}{2}.$ (42)
On the boundary, the thin shell is unstable for any $\beta_{0}^{2}$. The
parameters $(a_{0}/M_{-},\beta^{2}_{0})$ for the stable thin shell are shown
in Fig. 3.
Figure 3: The static thin shell is stable if it is in blue shaded zones of
$(a_{0}/M_{-},\beta^{2}_{0})$. Left top, right top, left middle, right middle,
left bottom, and right bottom panels show the cases of $\xi=1.0$ for
$2<a_{0}/M_{-}<3$, $\xi=1.2$ for $2<a_{0}/M_{-}<2.8$, $\xi=1.6$ for
$2<a_{0}/M_{-}<2.4$, $\xi=1.8$ for $2<a_{0}/M_{-}<729/320$, $\xi=2.5$ for
$25/12<a_{0}/M_{-}<75/32$, and $\xi=3.5$ for $49/20<a_{0}/M_{-}<441/160$,
respectively.
## V Conclusion and Discussion
We have investigated the linearization stability of the reflection-asymmetric
thin-shell wormhole which is composed of the parts of the Schwarzschild and
Reissner-Nordstrom manifolds with the double shadow. We have imposed the
constraint (32), as in Ref. Wielgus:2020uqz . The linearization stability
given by Fig. 3 is characterized by the boundary $B_{0}=0$ or
$a_{0}/M_{-}=(7-\xi)/2$ for $(32+6\sqrt{2})/17\leq\xi\leq 2+\sqrt{2}/2$ shown
as the red line in Fig. 2. In the boundary case $B_{0}=0$, the throat with
$a_{0}/M_{-}=2.25$ is unstable for any $\beta^{2}_{0}$, as shown in the left
bottom panel with $\xi=2.5$ in Fig. 3. We also note that the throat at
$a_{0}/M_{-}=r_{\mathrm{EH}-}/M_{-}=2$ for $1\leq\xi\leq 2$ is unstable for
any $\beta^{2}_{0}$, as shown in the panels of Fig. 3.
We comment on the Schwarzschild thin-shell wormhole with reflection symmetry,
i.e., $\xi=1.0$. In Fig. 2, $B_{0}$ vanishes at the point
$(\xi,a_{0}/M_{-})=(1.0,3.0)$. Thus, the throat of the Schwarzschild
reflection-symmetric wormhole with $\xi=1.0$ at $a_{0}/M_{-}=3$ is unstable
for any $\beta_{0}^{2}$, as shown in Fig. 3.
Although we have concentrated on the thin-shell wormhole in this paper, we
also comment on wormholes without a thin shell. The earliest traversable
wormhole, filled with a phantom scalar field Martinez:2020hjm , was
investigated by Ellis Ellis:1973yv and Bronnikov Bronnikov:1973fh . The
Ellis-Bronnikov wormhole is a reflection-asymmetric wormhole when it has a
positive Arnowitt-Deser-Misner (ADM) mass in a side and a negative ADM mass in
another side and it is a reflection-symmetric wormhole when it has vanishing
ADM masses in the both sides. See Refs. Ellis:1973yv ; Nandi:2016uzg for
gravitational lensing by the Ellis-Bronnikov wormhole with the positive ADM
mass, see Refs. Cramer:1994qj ; Bronnikov:2018nub for the negative ADM mass,
and see Ref. Tsukamoto:2016qro and references therein for the vanishing ADM
mass. The instability of the Ellis-Bronnikov wormhole was reported in Refs.
Shinkai_Hayward_2002 ; Gonzalez_Guzman_Sarbach_2008_I ;
Gonzalez_Guzman_Sarbach_2008_II . Reliable stable wormholes without thin
shells have not been reported yet in general relativity, but stable wormhole
solutions may be constructed by choosing exotic matter other than the phantom
scalar field Bronnikov:2013coa . Recently, Shaikh et al. have studied
gravitational lensing by reflection-symmetric wormholes with multiple photon
spheres and the shadow images of the multiple photon spheres Shaikh:2018oul ,
and Bronnikov and Baleevskikh have argued that all asymptotically flat,
static, spherically symmetric wormholes with reflection symmetry have a photon
sphere on the throat Bronnikov:2018nub . However, we notice that they
overlook the possibility that
circular photon orbits on the throat can form an antiphoton sphere. The
gravitational lensing by wormholes with the antiphoton sphere on the throat
has been suggested by Shaikh et al. in Ref. Shaikh:2019jfr . Tsukamoto has
shown that the circular photon orbits on the throats of a Damour-Solodukhin
wormhole Damour:2007ap and a Simpson-Visser wormhole Simpson:2018tsi can be
the antiphoton spheres Tsukamoto:2020uay ; Tsukamoto:2020bjm . In Ref.
Bronnikov:2018nub , Bronnikov and Baleevskikh have discussed the deflection
angle of a light ray scattered by reflection-asymmetric wormholes with the
photon sphere which is slightly off the throat. Gravitational lensing by
reflection-asymmetric wormholes with the antiphoton sphere which is slightly
off the throat is left as a future work.
Note added. Recently, related papers have appeared on arXiv. Guerrero et al.
have constructed reflection-asymmetric wormholes supported by a positive
energy thin shell with a double shadow in Palatini $f(R)$ gravity
Guerrero:2021pxt . Peng et al. have studied the observational appearance of an
accretion disk around a reflection-asymmetric thin-shell wormhole
Peng:2021osd .
## References
* (1) B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], Phys. Rev. Lett. 116, 061102 (2016).
* (2) K. Akiyama et al. [Event Horizon Telescope Collaboration], Astrophys. J. 875, L1 (2019).
* (3) M. S. Morris and K. S. Thorne, Am. J. Phys. 56, 395 (1988).
* (4) M. Visser, Lorentzian Wormholes: From Einstein to Hawking (American Institute of Physics, Woodbury, NY, 1995).
* (5) V. Perlick, Phys. Rev. D 69, 064017 (2004).
* (6) K. K. Nandi, Y. Z. Zhang, and A. V. Zakharov, Phys. Rev. D 74, 024020 (2006).
* (7) N. Tsukamoto, T. Harada, and K. Yajima, Phys. Rev. D 86, 104062 (2012).
* (8) N. Tsukamoto, Phys. Rev. D 94, 124001 (2016).
* (9) K. K. Nandi, R. N. Izmailov, E. R. Zhdanov, and A. Bhattacharya, JCAP 07, 027 (2018).
* (10) R. Shaikh, P. Banerjee, S. Paul, and T. Sarkar, Phys. Lett. B 789, 270 (2019). [erratum: Phys. Lett. B 791, 422 (2019)].
* (11) R. Shaikh, P. Banerjee, S. Paul, and T. Sarkar, JCAP 07, 028 (2019).
* (12) N. Tsukamoto, Phys. Rev. D 101, 104021 (2020).
* (13) N. Tsukamoto, Phys. Rev. D 103, 024033 (2021).
* (14) C. Lanczos, Phys. Zeits. 23, 539 (1922).
* (15) C. Lanczos, Ann. Phys. 74, 518 (1924).
* (16) W. Israel, Nuovo Cim. B 44, 1 (1966), Erratum: [Nuovo Cim. B 48, 463 (1967)].
* (17) E. Poisson, A Relativist’s Toolkit: The Mathematics of Black-Hole Mechanics (Cambridge University Press, Cambridge, 2004).
* (18) M. Visser, Nucl. Phys. B 328, 203 (1989).
* (19) G. Darmois, Les équations de la gravitation Einsteinienne, vol. XXV of Mémorial des sciences mathématiques (Gauthier-Villars, Paris, 1927).
* (20) E. Poisson and M. Visser, Phys. Rev. D 52, 7318 (1995).
* (21) E. F. Eiroa, Phys. Rev. D 78, 024018 (2008).
* (22) E. F. Eiroa and C. Simeone, Phys. Rev. D 76, 024021 (2007).
* (23) E. F. Eiroa, Phys. Rev. D 80, 044033 (2009).
* (24) E. F. Eiroa and G. E. Romero, Gen. Rel. Grav. 36, 651 (2004).
* (25) M. Sharif and M. Azam, Eur. Phys. J. C 73, 2554 (2013).
* (26) A. Eid, Eur. Phys. J. Plus 131, 23 (2016).
* (27) S. Habib Mazharimousavi and M. Halilsoy, Int. J. Mod. Phys. D 27, 1850028 (2018).
* (28) M. Sharif and M. Azam, JCAP 05, 025 (2013).
* (29) D. Wang and X. Meng, Phys. Dark Univ. 17, 46 (2017).
* (30) S. D. Forghani, S. H. Mazharimousavi, and M. Halilsoy, Eur. Phys. J. Plus 134, 342 (2019).
* (31) P. K. F. Kuhfittig, Acta Phys. Polon. B 41, 2017 (2010).
* (32) N. M. Garcia, F. S. Lobo, and M. Visser, Phys. Rev. D 86, 044026 (2012).
* (33) E. F. Eiroa and C. Simeone, Phys. Rev. D 71, 127501 (2005).
* (34) F. S. Lobo, Phys. Rev. D 71, 124022 (2005).
* (35) F. Rahaman, M. Kalam, and S. Chakraborti, Int. J. Mod. Phys. D 16, 1669 (2007).
* (36) F. Rahaman, M. Kalam, K. Rahman, and S. Chakraborti, Gen. Rel. Grav. 39, 945 (2007).
* (37) E. F. Eiroa and G. Figueroa Aguirre, Eur. Phys. J. C 76, 132 (2016).
* (38) M. Ishak and K. Lake, Phys. Rev. D 65, 044011 (2002).
* (39) F. S. N. Lobo and P. Crawford, Class. Quant. Grav. 22, 4869 (2005).
* (40) S. Kim, Phys. Lett. A 166, 13 (1992).
* (41) F. S. N. Lobo and P. Crawford, Class. Quant. Grav. 21, 391 (2004).
* (42) C. Barcelo and M. Visser, Nucl. Phys. B 584, 415 (2000).
* (43) J. P. S. Lemos and F. S. N. Lobo, Phys. Rev. D 78, 044030 (2008).
* (44) E. F. Eiroa and C. Simeone, Phys. Rev. D 70, 044008 (2004).
* (45) C. Bejarano, E. F. Eiroa, and C. Simeone, Phys. Rev. D 75, 027501 (2007).
* (46) M. Thibeault, C. Simeone, and E. F. Eiroa, Gen. Rel. Grav. 38, 1593 (2006).
* (47) F. Rahaman, M. Kalam, and S. Chakraborty, Gen. Rel. Grav. 38, 1687 (2006).
* (48) S. H. Mazharimousavi, M. Halilsoy, and Z. Amirabi, Phys. Rev. D 81, 104002 (2010).
* (49) T. Kokubu and T. Harada, Class. Quant. Grav. 32, 205001 (2015).
* (50) M. R. Mehdizadeh, M. Kord Zangeneh, and F. S. N. Lobo, Phys. Rev. D 92, 044022 (2015).
* (51) T. Kokubu, H. Maeda, and T. Harada, Class. Quant. Grav. 32, 235021 (2015).
* (52) G. Perry and R. B. Mann, Gen. Rel. Grav. 24, 305 (1992).
* (53) S. Kim, H. Lee, S. K. Kim, and J. Yang, Phys. Lett. A 183, 359 (1993).
* (54) F. Rahaman, A. Banerjee, and I. Radinschi, Int. J. Theor. Phys. 51, 1680 (2012).
* (55) E. F. Eiroa and C. Simeone, Phys. Rev. D 87, 064041 (2013).
* (56) N. Tsukamoto and T. Kokubu, Phys. Rev. D 98, 044026 (2018).
* (57) M. La Camera, Mod. Phys. Lett. A 26, 857 (2011).
* (58) K.-i. Nakao, T. Uno, and S. Kinoshita, Phys. Rev. D 88, 044036 (2013).
* (59) Y. Akai and K.-i. Nakao, Phys. Rev. D 96, 024033 (2017).
* (60) D. C. Dai and D. Stojkovic, Phys. Rev. D 100, 083513 (2019).
* (61) J. H. Simonetti, M. J. Kavic, D. Minic, D. Stojkovic, and D. C. Dai, [arXiv:2007.12184 [gr-qc]].
* (62) A. Eid, New Astron. 53, 6 (2017).
* (63) S. D. Forghani, S. Habib Mazharimousavi, and M. Halilsoy, Eur. Phys. J. C 78, 469 (2018).
* (64) X. Wang, P. C. Li, C. Y. Zhang, and M. Guo, Phys. Lett. B 811, 135930 (2020).
* (65) M. Wielgus, J. Horak, F. Vincent, and M. Abramowicz, Phys. Rev. D 102, 084044 (2020).
* (66) C. Martinez and M. Nozawa, Phys. Rev. D 103, 024003 (2021).
* (67) H. G. Ellis, J. Math. Phys. 14, 104 (1973).
* (68) K. A. Bronnikov, Acta Phys. Polon. B 4, 251 (1973).
* (69) K. K. Nandi, R. N. Izmailov, A. A. Yanbekov, and A. A. Shayakhmetov, Phys. Rev. D 95, 104011 (2017).
* (70) J. G. Cramer, R. L. Forward, M. S. Morris, M. Visser, G. Benford, and G. A. Landis, Phys. Rev. D 51, 3117 (1995).
* (71) K. A. Bronnikov and K. A. Baleevskikh, Grav. Cosmol. 25, 44 (2019).
* (72) H. Shinkai and S. A. Hayward, Phys. Rev. D 66, 044005 (2002).
* (73) J. A. González, F. S. Guzmán, and O. Sarbach, Class. Quant. Grav. 26, 015010 (2009).
* (74) J. A. González, F. S. Guzmán, and O. Sarbach, Class. Quant. Grav. 26, 015011 (2009).
* (75) K. A. Bronnikov, L. N. Lipatova, I. D. Novikov, and A. A. Shatskiy, Grav. Cosmol. 19, 269 (2013).
* (76) T. Damour and S. N. Solodukhin, Phys. Rev. D 76, 024016 (2007).
* (77) A. Simpson and M. Visser, JCAP 02, 042 (2019).
* (78) M. Guerrero, G. J. Olmo, and D. Rubiera-Garcia, [arXiv:2102.00840 [gr-qc]].
* (79) J. Peng, M. Guo, and X. H. Feng, [arXiv:2102.05488 [gr-qc]].
# Deep Inertial Odometry with Accurate IMU Preintegration
Rooholla Khorrambakht1 Chris Xiaoxuan Lu2 Hamed Damirchi1 Zhenghua Chen3
Zhengguo Li3 1Rooholla Khorrambakht and Hamed Damirchi are with the Faculty of
Electrical Engineering and Computer Science, K. N. Toosi University of
Technology, Iran, r.khorrambakht<EMAIL_ADDRESS>Xiaoxuan Lu
is with the School of Informatics, University of Edinburgh, United Kingdom,
<EMAIL_ADDRESS>Chen and Zhengguo Li are with the Institute for
Infocomm Research (I2R), A*STAR, Singapore, Chen_Zhenghua,
<EMAIL_ADDRESS>
###### Abstract
Inertial Measurement Units (IMUs) are interoceptive modalities that provide
ego-motion measurements independent of environmental factors. They are
widely adopted in various autonomous systems. Motivated by the limitations in
processing the noisy measurements from these sensors using their mathematical
models, researchers have recently proposed various deep learning architectures
to estimate inertial odometry in an end-to-end manner. Nevertheless, the high-
frequency and redundant measurements from IMUs lead to long raw sequences to
be processed. In this study, we investigate the efficacy of accurate
preintegration as a more realistic solution to the IMU motion model for deep
inertial odometry (DIO); the resultant DIO is a fusion of model-driven and
data-driven approaches. Accurate IMU preintegration has the potential to
outperform the numerical approximation of the continuous IMU model used in
existing DIOs. Experimental results validate the proposed DIO.
## I Introduction
Inertial Odometry (IO) is concerned with the estimation of ego-pose
transformations using raw IMU measurements. Performing IO based on the
mathematical model of an IMU leads to large accumulated errors over long runs
and is only reliable for the short time intervals between the measurements
from other complementary sensors in a fusion setup. Learning inertial
odometry, in contrast, casts the IO problem as sequence modeling and is
motivated by the flexibility of deep learning in exploiting complex motion
patterns in the data as pseudo-measurements that keep the error in check.
IMU measurements are produced at high frequencies, which leads to long
sequences for relatively short time periods. Processing these long sequences
using Recurrent Neural Networks (RNNs) is challenging and is prone to washout
[1, 2], while processing them using Convolutional Neural Networks (CNNs)
requires deep architectures to cover large enough receptive fields. Although
WaveNets have been employed to address such problems [3], they have not yet
found widespread adoption due to their specialized architecture. As another
alternative to this solution, we proposed exploiting preintegration as a
model-aware preprocessing step for compressing the temporal dimension of the
raw motion measurements from the IMU [4]. As pointed out in the literature
[5], correct inductive biases, and proper intermediate representations can
lead to significant performance boosts. By introducing preintegration as a
method of extracting intermediate representations, we achieved significant
performance improvements while also reducing the computational load.
Furthermore, the produced intermediate representations do not impose any
architectural constraints on the deep learning models.
However, the validity of this compression relies on the correctness of the
assumptions made for formulating the motion preintegration. The formulation
presented in [4] relied on the derivations of [6], which are based on the
assumption that the angular velocity in the body frame and the linear
acceleration in the world frame remain constant between two consecutive
samples. This assumption
holds when the motion is not highly dynamic or when the IMU sample rate is
high. The authors of [7] addressed this issue by exploiting the linear
switched systems theory and proposed an accurate preintegration formulation
for Visual Inertial Odometry (VIO) applications. This study aims to adopt the
presented formulation by [7] in computing the PreIntegrated (PI) features in
our deep learning setup and study its effectiveness in two dynamic and
moderate motion domains. As such, the model-driven and data-driven approaches
co-exist under one roof. Using real-world datasets, we show that the two
features lead to similar performances in moderate motions. However, at higher
speeds and dynamic motions, accurate preintegration yields better performance.
Overall, the fusion of IMU preintegration and deep learning results in a
deep inertial odometry (DIO) that is useful for the emerging cognitive
navigation of robots. Cognitive navigation has the potential to replace the
popular metric navigation owing to developments in artificial
intelligence [8].
The remainder of this report is structured as follows. Section II presents the
preintegration theory and the formulation of its accurate solution. Next, the
experimental setups, datasets, and results are introduced in Section III and
the concluding remarks are presented in Section IV.
## II Preintegration Theory and PI Features
Preintegration is the method of computing motion constraint variables between
keyframes in a pose graph [6]. It is based on the mathematical model of the
IMU and compresses the samples between frames into 9D vectors constraining the
orientations, velocities, and positions of adjacent nodes in the graph. In
this section, the relevant formulations for computing the Forster and accurate
PI features are presented.
### II-A IMU Model
The measurement model of an IMU may be expressed as follows [6]:
$\displaystyle{}_{\mathrm{B}}\tilde{\boldsymbol{\omega}}(t)$
$\displaystyle={}_{\mathrm{B}}\boldsymbol{\omega}_{\mathrm{WB}}(t)+\mathbf{b}^{g}(t)+\boldsymbol{\eta}^{g}(t)$
(1) $\displaystyle{}_{\mathrm{B}}\tilde{\mathbf{a}}(t)$
$\displaystyle=\mathrm{R}_{\mathrm{wB}}^{\top}(t)\left({}_{\mathrm{w}}\mathbf{a}(t)-{}_{\mathrm{w}}\mathbf{g}\right)+\mathbf{b}^{a}(t)+\boldsymbol{\eta}^{a}(t)$
where $\mathrm{R}_{\mathrm{wB}}$ represents the orientation of the IMU body
frame $B$ with respect to the world frame $W$, ${}_{\mathrm{w}}\mathbf{a}(t)$
represents the acceleration of the IMU with respect to the world frame, and
${}_{\mathrm{B}}\boldsymbol{\omega}_{\mathrm{WB}}(t)$ represents the angular velocity
of the IMU expressed in its local body coordinate frame. The IMU measurements
${}_{\mathrm{B}}\tilde{\boldsymbol{\omega}}(t)$, and
${}_{\mathrm{B}}\tilde{\mathbf{a}}(t)$ are contaminated with additive Gaussian
noise $\boldsymbol{\eta}^{g}(t)$, $\boldsymbol{\eta}^{a}(t)$, and random walk
biases $\mathbf{b}^{g}(t)$ and $\mathbf{b}^{a}(t)$. Furthermore, $\mathbf{g}$
in the above equation represents the known gravity direction in the world
coordinate frame.
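As a concrete illustration of the measurement model in Eq. (1), the snippet below synthesizes one noisy gyroscope sample. The noise scales and variable names are illustrative values chosen by us, not taken from any sensor datasheet:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                        # 100 Hz sampling, as in both datasets used later
sigma_g, sigma_bg = 1e-3, 1e-5   # illustrative white-noise / bias-walk scales (assumed)

omega_true = np.array([0.1, -0.2, 0.05])   # true body rate omega_WB (rad/s)
b_g = np.zeros(3)

# One propagation step of the gyro part of measurement model (1):
b_g = b_g + sigma_bg * np.sqrt(dt) * rng.standard_normal(3)   # bias random walk
eta_g = sigma_g / np.sqrt(dt) * rng.standard_normal(3)        # discretized white noise
omega_meas = omega_true + b_g + eta_g

assert omega_meas.shape == (3,)
```

The accelerometer channel follows the same pattern, with the additional rotation of gravity into the body frame as in Eq. (1).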
Based on the kinematic equations of the IMU, the motion propagation of the
sensor in the world frame may be expressed as follows[6]:
$\displaystyle\dot{\mathrm{R}}_{\mathrm{WB}}=\mathrm{R}_{\mathrm{WB}}\,{}_{\mathrm{B}}\boldsymbol{\omega}_{\mathrm{WB}}^{\wedge},\quad{}_{\mathrm{w}}\dot{\mathbf{v}}={}_{\mathrm{w}}\mathbf{a},\quad{}_{\mathrm{w}}\dot{\mathbf{p}}={}_{\mathrm{w}}\mathbf{v}$
(2)
where the $(.)^{\wedge}$ operator converts a 3D vector into its skew-symmetric
matrix representation. Through integration we have:
$\begin{array}[]{l}\mathrm{R}_{\mathrm{wB}}(t+\Delta
t)=\mathrm{R}_{\mathrm{wB}}(t)\operatorname{exp}\left(\int_{t}^{t+\Delta
t}{}_{\mathrm{B}}\boldsymbol{\omega}_{\mathrm{WB}}(\tau)d\tau\right)\\\
{}_{\mathrm{w}}\mathbf{v}(t+\Delta
t)={}_{\mathrm{w}}\mathbf{v}(t)+\int_{t}^{t+\Delta
t}{}_{\mathrm{w}}\mathbf{a}(\tau)d\tau\\\ {}_{\mathrm{w}}\mathbf{p}(t+\Delta
t)={}_{\mathrm{w}}\mathbf{p}(t)+\int_{t}^{t+\Delta
t}{}_{\mathrm{w}}\mathbf{v}(\tau)d\tau+\iint_{t}^{t+\Delta
t}{}_{\mathrm{w}}\mathbf{a}(\tau)d\tau^{2}\end{array}$
where $\Delta t$ is the sampling interval of the sensor and
$\operatorname{exp}(.)$ is the $SO(3)$ exponential map.
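The $(\cdot)^{\wedge}$ operator and the $SO(3)$ exponential map used above can be sketched in a few lines (a minimal numpy version based on Rodrigues' formula):

```python
import numpy as np

def hat(v):
    """Map a 3-vector to its skew-symmetric matrix so that hat(v) @ u = cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """SO(3) exponential map of a rotation vector phi, via Rodrigues' formula."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + hat(phi)   # first-order fallback near zero
    a = hat(phi / theta)
    return np.eye(3) + np.sin(theta) * a + (1.0 - np.cos(theta)) * (a @ a)

R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))   # 90-degree yaw
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0], atol=1e-9)
assert np.allclose(R @ R.T, np.eye(3), atol=1e-9)   # result is orthogonal
```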
### II-B Forster Formulation
Assuming that the angular velocity in the body frame and the acceleration in
the world frame are constant between two IMU samples, Eqs. 1 and 2 lead to the
following discrete motion propagation model:
$\displaystyle\mathrm{R}(t+\Delta t)$
$\displaystyle=\mathrm{R}(t)\operatorname{Exp}\left(\left(\tilde{\boldsymbol{\omega}}(t)-\mathbf{b}^{g}(t)-\boldsymbol{\eta}^{gd}(t)\right)\Delta
t\right)$ (3) $\displaystyle\mathbf{v}(t+\Delta t)$
$\displaystyle=\mathbf{v}(t)$ $\displaystyle+\mathbf{g}\Delta
t+\mathrm{R}(t)\left(\tilde{\mathbf{a}}(t)-\mathbf{b}^{a}(t)-\boldsymbol{\eta}^{ad}(t)\right)\Delta
t$ $\displaystyle\mathbf{p}(t+\Delta t)$
$\displaystyle=\mathbf{p}(t)+\mathbf{v}(t)\Delta t+\frac{1}{2}\mathbf{g}\Delta
t^{2}$
$\displaystyle+\frac{1}{2}\mathrm{R}(t)\left(\tilde{\mathbf{a}}(t)-\mathbf{b}^{a}(t)-\boldsymbol{\eta}^{ad}(t)\right)\Delta
t^{2}$
Based on the above model, the motion corresponding to a batch of IMU samples
between time steps $i$ and $j$ may be formulated as follows:
$\displaystyle\mathrm{R}_{j}$
$\displaystyle=\mathrm{R}_{i}\prod_{k=i}^{j-1}\operatorname{Exp}\left(\left(\tilde{\boldsymbol{\omega}}_{k}-\mathbf{b}_{k}^{g}-\boldsymbol{\eta}_{k}^{gd}\right)\Delta
t\right)$ (4) $\displaystyle\mathbf{v}_{j}$
$\displaystyle=\mathbf{v}_{i}+\mathbf{g}\Delta
t_{ij}+\sum_{k=i}^{j-1}\mathrm{R}_{k}\left(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{k}^{a}-\boldsymbol{\eta}_{k}^{ad}\right)\Delta
t$ $\displaystyle\mathbf{p}_{j}$ $\displaystyle=\mathbf{p}_{i}$
$\displaystyle+\sum_{k=i}^{j-1}\left[\mathbf{v}_{k}\Delta
t+\frac{1}{2}\mathbf{g}\Delta
t^{2}+\frac{1}{2}\mathrm{R}_{k}\left(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{k}^{a}-\boldsymbol{\eta}_{k}^{ad}\right)\Delta
t^{2}\right]$
As proposed in [6], through the multiplication of both sides of the above
equations by $\mathrm{R}_{i}^{\top}$, the initial state-dependent and
sensor-dependent integrated measurements can be separated:
$\displaystyle\Delta\mathrm{R}_{ij}$
$\displaystyle\doteq\mathrm{R}_{i}^{\top}\mathrm{R}_{j}=\prod_{k=i}^{j-1}\operatorname{exp}\left(\left(\tilde{\boldsymbol{\omega}}_{k}-\mathbf{b}_{k}^{g}-\boldsymbol{\eta}_{k}^{gd}\right)\Delta
t\right)$ (5) $\displaystyle\Delta\mathbf{v}_{ij}$
$\displaystyle\doteq\mathrm{R}_{i}^{\top}\left(\mathbf{v}_{j}-\mathbf{v}_{i}-\mathrm{g}\Delta
t_{ij}\right)$
$\displaystyle=\sum_{k=i}^{j-1}\Delta\mathrm{R}_{ik}\left(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{k}^{a}-\boldsymbol{\eta}_{k}^{ad}\right)\Delta
t$ $\displaystyle\Delta\mathbf{p}_{ij}$
$\displaystyle\doteq\mathrm{R}_{i}^{\top}\left(\mathbf{p}_{j}-\mathbf{p}_{i}-\mathbf{v}_{i}\Delta
t_{ij}-\frac{1}{2}\sum_{k=i}^{j-1}\mathbf{g}\Delta t^{2}\right)$
$\displaystyle=\sum_{k=i}^{j-1}\left[\Delta\mathbf{v}_{ik}\Delta
t+\frac{1}{2}\Delta\mathrm{R}_{ik}\left(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{k}^{a}-\boldsymbol{\eta}_{k}^{ad}\right)\Delta
t^{2}\right]$
The right-hand sides of the above equations represent motion constraints that
are functions of the IMU measurements only. In [4], we proposed adopting these
constraints as an intermediate representation for deep inertial odometry,
which we call PI features. It is worth noting that we substitute the
bias-corrected values in the above equations with their corresponding raw
measurements and rely on our model to compensate for their impact.
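A minimal sketch of the recursion behind Eq. (5) follows (function and interface names are ours; as in the paper, raw measurements are fed in and bias compensation is left to the network):

```python
import numpy as np

def _exp_so3(phi):
    """Rodrigues' formula: so(3) rotation vector -> rotation matrix."""
    t = np.linalg.norm(phi)
    K = np.array([[0.0, -phi[2], phi[1]],
                  [phi[2], 0.0, -phi[0]],
                  [-phi[1], phi[0], 0.0]])
    if t < 1e-9:
        return np.eye(3) + K
    return np.eye(3) + np.sin(t) / t * K + (1 - np.cos(t)) / t**2 * (K @ K)

def forster_pi_features(gyro, accel, dt):
    """Preintegrated constraints of Eq. (5) from a raw (N, 3) IMU batch.

    Returns (dR, dv, dp): the rotation, velocity, and position deltas.
    dp and dv are updated with the *current* dR_ik before dR advances,
    matching the summation limits k = i ... j-1 of Eq. (5).
    """
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ _exp_so3(w * dt)
    return dR, dv, dp

# Sanity check: constant forward acceleration, no rotation.
N, dt = 10, 0.01
dR, dv, dp = forster_pi_features(np.zeros((N, 3)),
                                 np.tile([1.0, 0.0, 0.0], (N, 1)), dt)
assert np.allclose(dR, np.eye(3))
assert np.allclose(dv, [N * dt, 0.0, 0.0])     # a * T = 0.1 m/s
assert np.allclose(dp, [0.005, 0.0, 0.0])
```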
### II-C Accurate Preintegration
The assumption of constant world-frame acceleration between consecutive IMU
samples can be violated in the case of highly dynamic motions, or when the
IMU sampling frequency is not very high. To account for this problem, [7]
exploits the switched linear systems theory to model the state transition between each pair
of the IMU samples. In other words, [7] assumes the acceleration and angular
velocity between two IMU samples to be constant in the body frame as opposed
to the world frame which is a more realistic assumption. With this assumption
and after deriving the closed form solutions of the state transition between
each pair of IMU samples, the following accurate preintegrated constraints are
derived:
$\displaystyle\Delta\mathrm{R}_{ij}$
$\displaystyle=\prod_{k=i}^{j-1}\operatorname{exp}(\boldsymbol{\Theta}(t))$
(6) $\displaystyle\Delta\mathbf{v}_{ij}$
$\displaystyle=\sum_{k=i}^{j-1}\Delta\mathrm{R}_{ik}\Gamma(\boldsymbol{\Theta}(t))\left(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{k}^{a}-\boldsymbol{\eta}_{k}^{ad}\right)\Delta
t$ $\displaystyle\Delta\mathbf{p}_{ij}$
$\displaystyle=\sum_{k=i}^{j-1}\left[\Delta\mathbf{v}_{ik}\Delta
t+\Delta\mathrm{R}_{ik}\Lambda(\boldsymbol{\Theta}(t))\left(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{k}^{a}-\boldsymbol{\eta}_{k}^{ad}\right)\Delta
t^{2}\right]$
where $\Lambda(.)$ and $\Gamma(.)$ are corrective terms defined in [7].
Furthermore $\boldsymbol{\Theta}(t)$ is defined as follows:
$\boldsymbol{\Theta}(t)\doteq\left(\tilde{\boldsymbol{\omega}}_{k}-\mathbf{b}_{k}^{g}-\boldsymbol{\eta}_{k}^{gd}\right)\Delta
t$
It can be shown that when $\boldsymbol{\Theta}(t)$ is small (high sampling
rate or moderate dynamics), $\Lambda(.)$ converges to $0.5$ and $\Gamma(.)$
converges to $1$. In other words, when the sampling rate of the IMU is high or
the motion is not highly dynamic, the two preintegrated features become identical.
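As a hedged numerical illustration of the small-angle claim, assume, as the constant body-frame-acceleration integral suggests, that $\Gamma$ takes the form of the $SO(3)$ left Jacobian; the authoritative closed forms of $\Gamma$ and $\Lambda$ are those of [7]:

```python
import numpy as np

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def left_jacobian(theta_vec):
    """SO(3) left Jacobian J_l = I + (1-cos t)/t^2 hat + (t-sin t)/t^3 hat^2.

    Assumed here to play the role of Gamma in the accurate preintegration;
    see [7] for the definitive closed forms.
    """
    t = np.linalg.norm(theta_vec)
    A = hat(theta_vec)
    if t < 1e-9:
        return np.eye(3) + 0.5 * A
    return (np.eye(3) + (1 - np.cos(t)) / t**2 * A
            + (t - np.sin(t)) / t**3 * (A @ A))

# For a small rotation increment, Gamma is close to the identity, so the
# accurate and Forster velocity updates coincide, as stated in the text.
theta = np.array([1e-4, -2e-4, 5e-5])
assert np.allclose(left_jacobian(theta), np.eye(3), atol=1e-3)
```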
## III Experiments
In this section, we use two real-world datasets to investigate the
effectiveness of accurate preintegration compared to two baselines, a model
trained using raw IMU data and another using preintegrated features based on
Forster’s formulation.
Figure 1: The architecture of the models used in all our experiments.
### III-A Setup
#### III-A1 Network Architecture
In this study, we use the base architecture from the IO-Net paper [9], which
is a single-layer bi-directional LSTM with a hidden state size of 128 and
independently learned initial hidden states. The choice of an LSTM brings the
flexibility of feeding inputs with various temporal resolutions without any
architectural modifications. Furthermore, as reported in [9], bi-
directionality improves the capacity of the model for capturing the motion
dynamics by allowing the predictions from one step to use both past and future
histories of the signal.
As depicted in Fig. 1, we consider a temporal history of 200 IMU samples on
each inference. The window length of 200 is the value at which we achieved a
good balance between performance and computational load. The presented
architecture is common in all of our experiments. Two experiments are designed
for each type of accurate and Forster preintegrated features. In this paper,
we choose a preintegration length of 10 IMU samples based on the slowest
ground-truth frequency in our datasets. Our baseline experiment bypasses the
preintegration module and feeds the LSTM with the raw IMU samples. For each
odometry transformation between 10 IMU samples, the hidden states of the LSTM
are fed into fully connected layers with $\boldsymbol{\xi}_{i}\in se(3)$
predictions as outputs. These $se(3)$ vectors are converted into $SE(3)$
transformation matrices through the exponential mapping function,
$\mathbf{T}_{i}=\exp{(\boldsymbol{\xi}_{i})}$.
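The mapping $\mathbf{T}_{i}=\exp(\boldsymbol{\xi}_{i})$ from the 6D prediction to a rigid transform can be sketched as follows (minimal numpy version; ordering $\boldsymbol{\xi}$ as translation part $\boldsymbol{\rho}$ followed by rotation part $\boldsymbol{\phi}$ is one common convention, not necessarily the authors'):

```python
import numpy as np

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def se3_exp(xi):
    """Map xi = (rho, phi) in se(3) to a 4x4 SE(3) matrix, T = exp(xi^)."""
    rho, phi = xi[:3], xi[3:]
    t = np.linalg.norm(phi)
    A = hat(phi)
    if t < 1e-9:
        R, V = np.eye(3) + A, np.eye(3) + 0.5 * A   # first-order fallback
    else:
        R = np.eye(3) + np.sin(t) / t * A + (1 - np.cos(t)) / t**2 * (A @ A)
        V = (np.eye(3) + (1 - np.cos(t)) / t**2 * A          # left Jacobian
             + (t - np.sin(t)) / t**3 * (A @ A))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ rho
    return T

T = se3_exp(np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]))   # pure translation
assert np.allclose(T[:3, 3], [1.0, 0.0, 0.0])
assert np.allclose(T[:3, :3], np.eye(3))
```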
#### III-A2 Training and Loss
The training loss has been formulated similar to DPC-Net[10], with geodesic
distances between the network predictions and ground-truth labels as the loss:
$\displaystyle L$
$\displaystyle=\sum_{i=1}^{N}\boldsymbol{\Phi}_{i}^{T}\boldsymbol{\Sigma}^{-1}\boldsymbol{\Phi}_{i}$
(7) $\displaystyle\boldsymbol{\Phi}_{i}$
$\displaystyle\doteq\log(\Delta\boldsymbol{T^{*}}_{i,i+1}^{-1}\exp{(\boldsymbol{\xi}_{i}^{\wedge})})^{\vee}$
where
$\Delta\boldsymbol{T^{*}}_{i,i+1}=\boldsymbol{T^{*}}_{i}^{-1}\boldsymbol{T^{*}}_{i+1}$
are the odometry labels, and $\boldsymbol{\Sigma}$ is an empirical covariance
matrix computed using the training data. Furthermore, the $(.)^{\wedge}$ and
$(.)^{\vee}$ operators respectively transform $se(3)$ vectors into their
matrix form and vice versa.
Finally, the models are implemented using PyTorch, and the Adam optimizer with
an initial learning rate of $0.001$ has been adopted to train them. In order
to avoid overfitting, dropout layers with a rate of $25\%$ are added between
the LSTM outputs and the FC inputs. On average, our models converged after 100
epochs.
Figure 2: The cumulative distribution of the dynamic acceleration of the Kitti
and OxfordIO datasets.
#### III-A3 Datasets
We have chosen two datasets to represent fast and slow motion distributions:
the OxfordIO pedestrian odometry dataset [11] as a representative of an
application domain with moderate motions, and Kitti autonomous driving dataset
as an example of a domain with fast motions. Fig. 2 illustrates the cumulative
distribution of the dynamic acceleration of a snippet from the test sets of
the two datasets. As can be seen in the graph, the 90th percentile of the
OxfordIO dataset is at around $1.25\,m/s^{2}$, while this value for the Kitti
dataset is $2.5\,m/s^{2}$, with maximum accelerations of over $3\,m/s^{2}$,
which coincides with our assumption about the fast and slow motion nature of
each dataset.
In terms of sensor specifications, the Kitti dataset is recorded using a car
equipped with vision, Lidar, and RTK-GPS+IMU units traversing urban and
countryside environments. The IMU measurements are available at a rate of 100
Hz, and a 10 Hz centimeter-level accurate ground truth is provided through the
fusion of Lidar and GPS-IMU sensors. On the other hand, the OxfordIO dataset
contains the IMU readings at a rate of 100 Hz from a smartphone held in
various configurations by multiple users undergoing different motion patterns.
The ground-truth for this dataset is recorded using a Vicon motion capture
system with millimeter-level accuracy.
### III-B Autonomous Driving Motion Domain
TABLE II: The performance evaluation of the three baseline models tested on the Kitti dataset.

| test seq. | IO-Net $t_{rel}$ $(\%)$ | IO-Net $r_{rel}$ $(deg/m)$ | PI-Net (Forster) $t_{rel}$ $(\%)$ | PI-Net (Forster) $r_{rel}$ $(deg/m)$ | PI-Net (Accurate PI) $t_{rel}$ $(\%)$ | PI-Net (Accurate PI) $r_{rel}$ $(deg/m)$ |
|---|---|---|---|---|---|---|
| 10 | 11.37 | 0.018 | 10.23 | 0.015 | 10.6 | 0.019 |
| 07 | 20.91 | 0.016 | 8.52 | 0.013 | 5.82 | 0.014 |
| 05 | 18.6 | 0.035 | 8.8 | 0.021 | 8.77 | 0.019 |
| avg. | 16.96 | 0.023 | 9.18 | 0.0163 | 8.3 | 0.017 |
This section investigates the effectiveness of accurate preintegration for
improving IO performance in a dynamic motion domain.
#### III-B1 Baselines
We train three models using the Kitti dataset. The base model takes the raw
IMU measurements as input while the other two are fed with the two types of PI
features, one computed using the accurate formulation and the other using
Forster’s method. Based on the 10 Hz frequency of the ground-truth labels and
the 100 Hz IMU sampling rate, we set the odometry step (preintegration length)
to 10 IMU samples ($100/10=10$). Furthermore, sequences {00-10} excluding
{03,05,07,10} were used for training, and testing was performed on sequences
{03,05,07,10}.
#### III-B2 Evaluation Metric
The relative translation and rotation errors defined by the KITTI benchmark
[12] have been adopted as the evaluation metric. These relative errors are
computed as the averaged position/orientation errors within all possible sub-
sequences of lengths $100m,...,800m$. We use the open-source implementation
provided in [13].
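A simplified position-only sketch of the relative translation metric follows (the official KITTI benchmark also evaluates rotation and averages over segment lengths $100\,m,\dots,800\,m$; function and variable names here are ours):

```python
import numpy as np

def rel_translation_error(pred_xyz, gt_xyz, seg_len=100.0):
    """Average relative translation error (%) over all sub-sequences whose
    ground-truth path length first exceeds seg_len.

    Positions only; the official metric also evaluates rotations and
    averages over several segment lengths (100 m ... 800 m)."""
    step = np.linalg.norm(np.diff(gt_xyz, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(step)])   # cumulative path length
    errs = []
    for i in range(len(gt_xyz)):
        j = int(np.searchsorted(cum, cum[i] + seg_len))
        if j >= len(gt_xyz):
            break
        gt_d = gt_xyz[j] - gt_xyz[i]
        pr_d = pred_xyz[j] - pred_xyz[i]
        errs.append(np.linalg.norm(pr_d - gt_d) / (cum[j] - cum[i]) * 100.0)
    return float(np.mean(errs))

# A prediction with a uniform 5% scale error along a straight 500 m path:
gt = np.stack([np.linspace(0, 500, 501), np.zeros(501), np.zeros(501)], axis=1)
pred = 1.05 * gt
assert abs(rel_translation_error(pred, gt) - 5.0) < 1e-6
```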
#### III-B3 Results
The results of this experiment are reported in Table II. Each of the three
main columns of the table represents the performance of one of the three
baselines, and each row indicates the performance on one test sequence.
As can be seen, the average translation error for the model trained with
accurate PI features surpasses the other two baselines, by $0.88\%$ compared
to Forster's method and by $8.66\%$ compared to the model with raw input. It
is important to note that the models using either integration method perform
better than the baseline model with raw inputs. It is also worth noting that
the orientation errors of the two preintegration methods are close to each
other; this is because, based on Eq. 6 and Eq. 5, both the accurate and
Forster integration methods employ identical formulas to compute the rotation
portion of the PI features.
### III-C Pedestrian Odometry Motion Domain
Unlike the driving motion domain, pedestrians do not exhibit frequent high
acceleration/deceleration and high-speed motions. Thus, in this section, we
repeat the experiments on this domain to investigate the hypothesis that both
accurate and Forster methods perform similarly under moderate motions. It is
important to note that the sampling frequencies of the IMUs in both datasets
are equal, which is important for isolating the motion characteristics as the
only influential factor.
#### III-C1 Baselines
The three baseline models of the previous section were trained on the handheld
domain of the OxfordIO dataset. In order to maintain comparability, we adopted
identical odometry steps and integration lengths (10 IMU samples). The
sequences shown in Table III are used for testing, and the remaining sequences
were adopted for training and validation.
#### III-C2 Evaluation Metric
We integrate the 6-DOF $SE(3)$ odometry predictions corresponding to batches
of 200 IMU samples to calculate the displacements. We compare these
displacements against their ground-truth values to compute the error for each
model. We then divide these errors by the displacement length to compute
normalized error values.
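A minimal sketch of this normalized-error computation (array names and shapes are illustrative assumptions, not the paper's code):

```python
import numpy as np

def normalized_errors(pred_disp, gt_disp):
    """Displacement error per window divided by the ground-truth displacement
    length; pred_disp/gt_disp are (M, 3) arrays, one row per batch of 200
    IMU samples."""
    err = np.linalg.norm(pred_disp - gt_disp, axis=1)
    return err / np.linalg.norm(gt_disp, axis=1)

gt = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
pred = np.array([[1.1, 0.0, 0.0], [0.0, 1.9, 0.0]])
print(normalized_errors(pred, gt))  # ~ [0.1, 0.05]
```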
#### III-C3 Results
The results of this experiment have been reported in Table III. As can be seen
in the table, the difference between the average performance of the two
preintegration methods is marginal in this experiment. However, similar to the
previous section, both preintegration methods surpass the performance of the
model operating on raw data. The close gap between the performances of the two
preintegration methods was expected for this motion domain. As indicated in
Section II, the two preintegration formulations are identical when the IMU
sampling frequency is high or when the accelerations and rotation rates are
not highly dynamic.
TABLE III: The performance evaluation of the three baseline models tested on the OxfordIO dataset.

| Test seq. | IO-Net ($\%$) | PI-Net (Forster) ($\%$) | PI-Net (Accurate PI) ($\%$) |
| --- | --- | --- | --- |
| handheld-d1-s2 | $6.5$ | $\mathbf{4.94}$ | $5.0$ |
| handheld-d1-s5 | $3.33$ | $2.91$ | $2.67$ |
| handheld-d1-s6 | $3.12$ | $\mathbf{2.82}$ | $2.91$ |
| handheld-d3-s1 | $4.48$ | $\mathbf{3.42}$ | $3.6$ |
| handheld-d4-s1 | $4.32$ | $4.28$ | $3.92$ |
| handheld-d4-s3 | $4.6$ | $\mathbf{4.05}$ | $4.14$ |
| handheld-d5-s1 | $3.64$ | $3.53$ | $3.42$ |
| average | $4.35$ | $3.71$ | $\mathbf{3.66}$ |
## IV Conclusion and Discussion
Preintegration reduces the temporal dimension of the raw IMU signals by
incorporating the mathematical model of the sensor. This reduction of temporal
steps leads to fewer recursions by the RNN, which facilitates faster inference
and better performance. In this study, we investigated the impact of the numerical inaccuracy of approximate preintegration on the performance of a deep learning model trained using PI features. We observed that the adoption of accurate preintegration leads to
performance improvements in highly dynamic motions. In contrast, the
performance gap is marginal when the movements are not highly dynamic or
equivalently when the sampling frequency of the sensor is very high.
## References
* [1] J. Zhao, F. Huang, J. Lv, Y. Duan, Z. Qin, G. Li, and G. Tian, “Do rnn and lstm have long memory?” 2020.
* [2] T. H. Trinh, A. M. Dai, M.-T. Luong, and Q. V. Le, “Learning longer-term dependencies in rnns with auxiliary losses,” 2018. [Online]. Available: https://openreview.net/forum?id=Hy9xDwyPM
* [3] C. Chen, P. Zhao, C. X. Lu, W. Wang, A. Markham, and N. Trigoni, “Deep-learning-based pedestrian inertial navigation: Methods, data set, and on-device inference,” _IEEE Internet of Things Journal_ , vol. 7, no. 5, pp. 4431–4441, 2020.
* [4] R. Khorrambakht, H. Damirchi, and H. Taghirad, “Preintegrated imu features for efficient deep inertial odometry,” _arXiv preprint arXiv:2007.02929_ , 2020.
* [5] E. Kaufmann, A. Loquercio, R. Ranftl, M. Müller, V. Koltun, and D. Scaramuzza, “Deep drone acrobatics,” _arXiv preprint arXiv:2006.05768_ , 2020.
* [6] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, “On-manifold preintegration for real-time visual–inertial odometry,” _IEEE Transactions on Robotics_ , vol. 33, no. 1, pp. 1–21, 2017.
* [7] J. Henawy, Z. Li, W. Y. Yau, G. Seet, and K. W. Wan, “Accurate imu preintegration using switched linear systems for autonomous systems,” in _2019 IEEE Intelligent Transportation Systems Conference (ITSC)_. IEEE, 2019, pp. 3839–3844.
* [8] T. Wolbers and J. M. Wiener, “Challenges for identifying the neural mechanisms that support spatial navigation: the impact of spatial scale,” _Frontiers in human neuroscience_ , vol. 8, p. 571, 2014.
* [9] C. Chen, X. Lu, A. Markham, and N. Trigoni, “Ionet: Learning to cure the curse of drift in inertial odometry,” in _Thirty-Second AAAI Conference on Artificial Intelligence_ , 2018.
* [10] V. Peretroukhin and J. Kelly, “Dpc-net: Deep pose correction for visual localization,” _IEEE Robotics and Automation Letters_ , vol. 3, no. 3, pp. 2424–2431, 2018.
* [11] C. Chen, P. Zhao, C. X. Lu, W. Wang, A. Markham, and A. Trigoni, “Oxiod: The dataset for deep inertial odometry,” _ArXiv_ , vol. abs/1809.07491, 2018.
* [12] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in _Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2012.
* [13] H. Zhan, C. S. Weerasekera, J. Bian, and I. Reid, “Visual odometry revisited: What should be learnt?” _arXiv preprint arXiv:1909.09803_ , 2019.
# Rigorous Bounds on the Heating Rate in Thue-Morse Quasiperiodically and
Randomly Driven Quantum Many-Body Systems
Takashi Mori RIKEN Center for Emergent Matter Science (CEMS), Wako 351-0198,
Japan Hongzheng Zhao Blackett Laboratory, Imperial College London, London
SW7 2AZ, United Kingdom Florian Mintert Blackett Laboratory, Imperial
College London, London SW7 2AZ, United Kingdom Johannes Knolle Department of
Physics TQM, Technische Universität München, James-Franck-Straße 1, D-85748
Garching, Germany Munich Center for Quantum Science and Technology (MCQST),
80799 Munich, Germany Blackett Laboratory, Imperial College London, London
SW7 2AZ, United Kingdom Roderich Moessner Max-Planck-Institut für Physik
komplexer Systeme, Nöthnitzer Straße 38, 01187 Dresden, Germany
###### Abstract
The nonequilibrium quantum dynamics of closed many-body systems is a rich yet
challenging field. While recent progress for periodically driven (Floquet)
systems has yielded a number of rigorous results, our understanding of quantum many-body systems driven by rapidly varying but aperiodic or quasi-periodic fields is still limited. Here, we derive rigorous, non-perturbative bounds on the
heating rate in quantum many-body systems under Thue-Morse quasi-periodic
driving and under random multipolar driving, the latter being a tunably
randomized variant of the former. In the process, we derive a static effective
Hamiltonian that describes the transient prethermal state, including the
dynamics of local observables. Our bound for Thue-Morse quasi-periodic driving
suggests that the heating time scales like $(\omega/g)^{C\ln(\omega/g)}$ with
a positive constant $C$ and a typical energy scale $g$ of the Hamiltonian, in
agreement with our numerical simulations.
Introduction.— When a thermally isolated ergodic quantum many-body system is
subjected to time-dependent external fields, it eventually enters an entirely
featureless (‘infinite-temperature’) state. The route it takes involves
intriguing non-equilibrium dynamics. The technical challenges involved in
describing non-equilibrium many-body dynamics such as this are formidable, and
specifically rigorous results are very rare.
Periodically driven (Floquet) systems provide a notable exception: here the
heating rate is exponentially small at high $\omega$ [1, 2, 3, 4]. This result
implies that a Floquet system generically exhibits prethermalization behavior
[5]: it first relaxes to a long-lived prethermal state, followed by slow
heating to infinite temperature. The prethermal state is described by a static
effective Hamiltonian obtained by the high-frequency (Magnus) expansion. This
opens the possibility of Floquet engineering, i.e., implementing a desired
effective Hamiltonian by applying periodic driving [6, 7]. Floquet prethermal
states can even realize novel phases of matter such as prethermal Floquet time
crystals in clean systems [8]. In experiments, Floquet prethermalization has
been observed for ultra-cold atoms in a driven optical lattice [9] and for
nuclear spins in Fluorapatite [10].
Recently, the focus has shifted beyond fully periodic drives to quasi-periodic
driving [11, 12, 13, 14, 15, 16, 17, 18]. A further step was taken by
considering structured random driving [16], which permits interpolating
between fully random and quasi-periodic drives. Under fast random driving,
heating is often swift, and no prethermalization is observed. However, recent
reports suggest that prethermalization can still be encountered under fast
quasi-periodic driving [13, 16] or certain structured random drives [16]. For
the former, rigorous results on the heating rates are available for
continuously varying quasi-periodic driving [17]. However, no such results
exist for discrete quasi-periodic driving, such as Fibonacci [13, 15] and
Thue-Morse [12, 16, 18] driving.
Here, we provide rigorous bounds on the heating rate for Thue-Morse quasi-
periodic driving, as well as for the random multipolar driving introduced in
Ref. [16]. These turn out to differ from those of the abovementioned settings:
we find a power-law bound with a tunable exponent in random multipolar
driving, as well as a rate vanishing faster than any power for Thue-Morse. In
a Magnus expansion, the heating bounds are derived alongside a static
effective Hamiltonian that describes prethermal states and their transient
nonequilibrium dynamics. These results are non-perturbative in driving
amplitude, and they provide a transparent connection to the lore of Floquet
systems.
In the following, we first introduce the class of models under consideration,
and the Magnus expansion used to derive our core results. These we supplement
by numerical investigations of actual model Hamiltonians. We describe possible
generalisations and close with a discussion and outlook.
Setting.— We consider two Hamiltonians $H_{+}$ and $H_{-}$ for a lattice
system. The time evolution operators over a time period $T$ generated by
$H_{\pm}$ are denoted by $U^{(\pm)}=e^{-iH_{\pm}T}$. The quantity
$\omega=2\pi/T$ is referred to as the “frequency” of the drive. For a periodic
sequence of $U_{\pm}$, Floquet prethermalization occurs and the lifetime of a
prethermal state scales exponentially in $\omega$. On the other hand, for
random sequences of $U_{\pm}$, the heating time remains finite even for
$\omega\rightarrow\infty$, and hence no prethermalization occurs in general.
Random multipolar driving [16] ($n$-RMD in short) generated by $U_{\pm}$
involves a random sequence of “dipoles” $U_{1}^{(\pm)}$, where
$U_{1}^{(+)}=U_{-}U_{+}$ and $U_{1}^{(-)}=U_{+}U_{-}$, or “quadrupoles”
$U_{2}^{(\pm)}$, where $U_{2}^{(+)}=U_{+}U_{-}U_{-}U_{+}$ and
$U_{2}^{(-)}=U_{-}U_{+}U_{+}U_{-}$, and so on recursively: $n$-multipoles
$U_{n}^{(\pm)}$ are given by $U_{n}^{(+)}=U_{n-1}^{(-)}U_{n-1}^{(+)}$ and
$U_{n}^{(-)}=U_{n-1}^{(+)}U_{n-1}^{(-)}$ starting from
$U_{0}^{(\pm)}=U_{\pm}$. In Ref. [16], it is found that the heating time
scales like $\omega^{2n+1}$ with $n\geq 1$, which corresponds to a random
sequence of $U_{n}^{(\pm)}$. The limit of $n\to\infty$ of $n$-RMD corresponds
to the Thue-Morse quasi-periodic driving. Prethermalization also occurs in
this limit, but the situation has not been fully clarified: the heating time is
longer than algebraic in $\omega$ and was found to be consistent with
exponential scaling for finite size numerical calculations [16].
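The recursion above is compactly expressed in code. A minimal sketch, where '+'/'-' label the factors $U_{\pm}$ in the order they act in time:

```python
def multipole(n, sign):
    """n-multipole U_n^{(sign)} as a string of '+'/'-' labels for U_+/U_-,
    listed in the order the factors act in time (first factor first)."""
    if n == 0:
        return sign
    other = '-' if sign == '+' else '+'
    # U_n^{(s)} = U_{n-1}^{(-s)} U_{n-1}^{(s)}: in time order, U_{n-1}^{(s)} acts first
    return multipole(n - 1, sign) + multipole(n - 1, other)

print(multipole(1, '+'))  # dipole U_- U_+ in time order: '+-'
print(multipole(2, '+'))  # quadrupole U_+ U_- U_- U_+: '+--+'
print(multipole(4, '+'))  # a Thue-Morse prefix: '+--+-++--++-+--+'
```

The $n\to\infty$ limit of `multipole(n, '+')` reproduces the Thue-Morse sequence, consistent with the text.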
An increasing heating time $\sim\omega^{2n+1}$ with $n$ for $n$-RMD has been
derived within the linear-response regime, perturbatively via Fermi’s golden
rule. For example, when $n=0$ (a random sequence of $U_{\pm}$), Fermi’s golden
rule predicts a heating time proportional to $\omega$, but it actually remains
finite in the limit of $\omega\to\infty$ [19]. In this example, for any fixed
driving amplitude, the driving cannot be regarded as perturbatively weak for
sufficiently high frequencies which motivates us to derive rigorous results.
As slow heating is numerically observed in the Thue-Morse and random cases
even for strong driving, it is at any rate imperative to go beyond linear-response.
Below we derive such rigorous upper bounds on the heating rate for $n$-RMD and
the Thue-Morse quasi-periodic driving without the assumption of weak driving.
The previous results for periodic driving [2] can be discussed in a parallel
way.
Magnus expansion.— Since $U_{n}^{(\pm)}$ is a sequence of $U_{+}$ and $U_{-}$
of length $2^{n}$, it represents unitary time evolution over a time $2^{n}T$.
Let us express $U_{n}^{(\pm)}$ in the form
$U_{n}^{(\pm)}=e^{-i\tilde{H}_{n}^{(\pm)}2^{n}T}$, where
$\tilde{H}_{n}^{(\pm)}$ is a certain Hermitian operator. It is in general
hopeless to obtain an exact explicit expression of $\tilde{H}_{n}^{(\pm)}$ for
many-body systems, but its high-frequency expansion is available. The Magnus
expansion of $\tilde{H}_{n}^{(\pm)}$ is given as follows:
$\tilde{H}_{n}^{(\pm)}=\sum_{m=0}^{\infty}(2^{n}T)^{m}\Omega_{n,m}^{(\pm)}.$
(1)
By introducing an instantaneous Hamiltonian $H_{n}^{(\pm)}(t)$, which is
either $H_{+}$ or $H_{-}$ for each time step $t\in[\ell T,(\ell+1)T)$
depending on whether the $\ell$th unitary of $U_{n}^{(\pm)}$ is $U_{+}$ or $U_{-}$, $U_{n}^{(\pm)}$ is expressed as $U_{n}^{(\pm)}=\mathcal{T}e^{-i\int_{0}^{2^{n}T}dt\,H_{n}^{(\pm)}(t)}$, where
$\mathcal{T}$ stands for the time-ordering operator. It should be noted that
the lowest term of the Magnus expansion is given by the time-averaged
Hamiltonian:
$\Omega_{n,0}^{(\pm)}=\frac{1}{2^{n}T}\int_{0}^{2^{n}T}dt\,H_{n}^{(\pm)}(t)=\frac{H_{+}+H_{-}}{2}.$
(2)
Explicit expressions of higher order terms are provided in, e.g., Refs. [1,
2]. By using the relations $U_{n}^{(\pm)}=U_{n-1}^{(\mp)}U_{n-1}^{(\pm)}$, we
can obtain $\Omega_{n,m}^{(\pm)}$ recursively starting from $n=0$:
$\Omega_{0,m}^{(\pm)}=\delta_{m,0}H_{\pm}$.
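As a quick numerical illustration (with small random Hermitian matrices standing in for a many-body Hamiltonian, an assumption made purely for tractability), one can check that $U_{n}^{(\pm)}$ approaches evolution under the averaged Hamiltonian of Eq. (2) as $T\to 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    # random Hermitian matrix as a toy Hamiltonian
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def expm_herm(H, t):
    # e^{-iHt} for Hermitian H via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

Hp, Hm = rand_herm(4), rand_herm(4)
Hbar = (Hp + Hm) / 2  # lowest-order Magnus term, Eq. (2)

def U_n(n, sign, T):
    # recursive multipole unitary U_n^{(sign)}
    if n == 0:
        return expm_herm(Hp if sign == '+' else Hm, T)
    other = '-' if sign == '+' else '+'
    return U_n(n - 1, other, T) @ U_n(n - 1, sign, T)

# deviation from the averaged-Hamiltonian evolution shrinks as T -> 0
errs = [np.linalg.norm(U_n(2, '+', T) - expm_herm(Hbar, 4 * T)) for T in (0.1, 0.05)]
print(errs)
```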
A crucial observation here is that the Magnus expansion in $U_{n}^{(\pm)}$ has
the following remarkable property: for each $n$,
$\Omega_{n,m}^{(+)}=\Omega_{n,m}^{(-)}=2^{-m(n-m)}\frac{\Omega_{m,m}^{(+)}+\Omega_{m,m}^{(-)}}{2}\quad\text{for
all }m\leq n-1.$ (3)
This property stems from self-similarity of the Thue-Morse sequence, and it
plays an essential role in deriving rigorous bounds on the heating rate
discussed below. We can prove Eq. (3) by induction; see Supplementary Material [19] for details.
In this way, although $\tilde{H}_{n}^{(+)}$ differs from
$\tilde{H}_{n}^{(-)}$, their Magnus expansions coincide with each other up to
$(n-1)$th order. Therefore, their common truncation $H_{\mathrm{eff}}^{(n)}:=\sum_{m=0}^{n-1}(2^{n}T)^{m}\Omega_{n,m}^{(\pm)}$ plays the role of a
static effective Hamiltonian for time evolution generated by an arbitrary
sequence of $U_{n}^{(+)}$ and $U_{n}^{(-)}$. It should be noted that by using
Eq. (3), $H_{\mathrm{eff}}^{(n)}$ is also expressed as
$H_{\mathrm{eff}}^{(n)}=\sum_{m=0}^{n-1}(2^{m}T)^{m}\frac{\Omega_{m,m}^{(+)}+\Omega_{m,m}^{(-)}}{2}.$
(4)
Thus the $m$th order term of the Magnus expansion of $\tilde{H}_{n}^{(\pm)}$
is independent of $n$ as long as $n\geq m$, which enables us to define the
high-frequency expansion of the effective Hamiltonian for the Thue-Morse
quasi-periodic driving $n\to\infty$.
Rigorous bounds on heating rate.— Time evolution under a generic time-
dependent Hamiltonian $H(t)$ from time $t=0$ to $t=\tau$ is expressed by a
unitary operator
$U_{\tau}=\mathcal{T}e^{-i\int_{0}^{\tau}dt\,H(t)}=:e^{-i\tilde{H}\tau}$. Let
us consider the Magnus expansion of $\tilde{H}$:
$\tilde{H}=\sum_{m=0}^{\infty}\tau^{m}\Omega_{m}$. Its truncation at $n$th
order is denoted by $\tilde{H}^{(n)}=\sum_{m=0}^{n}\tau^{m}\Omega_{m}$. Under
this general setting, the following inequality is proved for a generic local
Hamiltonian [20]:
$\|U_{\tau}^{\dagger}\tilde{H}^{(n)}U_{\tau}-\tilde{H}^{(n)}\|\lesssim(n+1)!(g\tau)^{n+2}gN,$
(5)
where $g$ denotes the maximum energy of a single site and $N$ denotes the
number of lattice sites (system size). See Supplementary Material for an exact
formal expression of the inequality (5) and its proof [19].
Exponentially slow heating in Floquet systems is derived by using the
inequality (5). In the case of periodic driving, we choose $\tau$ to be the
period $T=2\pi/\omega$ of driving. In this case $\tilde{H}$ is nothing but the
Floquet Hamiltonian $H_{F}$, which is independent of time. Therefore, after
$M$ periods $(t=MT)$, the possible change of $H_{F}^{(n)}$ is bounded by
$\displaystyle\frac{\|U_{T}^{\dagger M}H_{F}^{(n)}U_{T}^{M}-H_{F}^{(n)}\|}{N}\leq M\frac{\|U_{T}^{\dagger}H_{F}^{(n)}U_{T}-H_{F}^{(n)}\|}{N}\lesssim(n+1)!(gT)^{n+2}gM=(n+1)!(Tg)^{n+1}t.$ (6)
The heating rate $\kappa$ is therefore bounded as
$\kappa\lesssim(n+1)!(Tg)^{n+1}$. Since $n$ is arbitrary, we choose it so that
the upper bound of $\kappa$ becomes minimum. We then have the optimal
truncation order $n^{*}\sim\omega/g$ and $\kappa\lesssim e^{-O(\omega/g)}$.
The heating rate in Floquet systems is thus exponentially small in $\omega$
[1, 2].
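The optimal truncation can be found numerically by minimizing the bound $(n+1)!(Tg)^{n+1}$ over $n$ in log space (a sketch; constants and $2\pi$ factors are ignored):

```python
from math import lgamma, log

def floquet_log_bound(x, nmax=200):
    """Minimise log[(n+1)! x^{n+1}] over the truncation order n, where x = g*T.
    Returns (log of the optimal bound on kappa, optimal n)."""
    return min((lgamma(n + 2) + (n + 1) * log(x), n) for n in range(nmax))

for omega_over_g in (10, 20, 40):
    logk, nstar = floquet_log_bound(1.0 / omega_over_g)
    print(omega_over_g, nstar, logk)  # n* grows ~ omega/g; log-bound drops ~ -omega/g
```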
In the case of $n$-RMD, we choose $\tau=2^{n}T$. Then, $U_{\tau}$ is either
$U_{n}^{(+)}$ or $U_{n}^{(-)}$. As we have already seen,
$\tilde{H}^{(n-1)}=H_{\mathrm{eff}}^{(n)}$ for both $U_{n}^{(+)}$ and
$U_{n}^{(-)}$. Thus we have from Eq. (5)
$\|U_{n}^{(\pm)\dagger}H_{\mathrm{eff}}^{(n)}U_{n}^{(\pm)}-H_{\mathrm{eff}}^{(n)}\|\lesssim
n!(2^{n}gT)^{n+1}gN.$ (7)
The time evolution $U_{t}$ over time $t=M2^{n}T$ is generated by a certain
sequence of $U_{n}^{(\pm)}$, i.e.,
$U_{t}=U_{n}^{(\sigma_{M})}U_{n}^{(\sigma_{M-1})}\dots U_{n}^{(\sigma_{1})}$
with $\sigma_{\ell}\in\\{+,-\\}$. The change of $H_{\mathrm{eff}}^{(n)}$ over
time $t=M2^{n}T$ is evaluated as
$\displaystyle\frac{\|U_{t}^{\dagger}H_{\mathrm{eff}}^{(n)}U_{t}-H_{\mathrm{eff}}^{(n)}\|}{N}$
$\displaystyle\leq\sum_{\ell=1}^{M}\frac{\|U_{n}^{(\sigma_{\ell})\dagger}H_{\mathrm{eff}}^{(n)}U_{n}^{(\sigma_{\ell})}-H_{\mathrm{eff}}^{(n)}\|}{N}$
$\displaystyle\lesssim n!(2^{n}gT)^{n}t.$ (8)
The heating rate $\kappa$ is thus evaluated as
$\kappa\lesssim n!(2^{n}gT)^{n}\sim n!\left(\frac{2^{n}g}{\omega}\right)^{n}.$
(9)
This bound shows that heating becomes slower with $n$ for large enough
$\omega$. This upper bound is however not optimal since numerical results
indicate $\kappa\sim\omega^{-(2n+1)}$ [16]. Later we will see that the correct
scaling of the heating rate is obtained by combining the Magnus expansion for
the $n$-RMD developed above with Fermi’s golden rule.
The heating rate under the Thue-Morse quasi-periodic driving is also evaluated
by using Eq. (9). Since the Thue-Morse driving is regarded as a certain
sequence of $U_{n}^{(\pm)}$ for arbitrary $n$, the heating rate $\kappa$ in
this case satisfies Eq. (9) for arbitrary $n$. Therefore, the optimal
truncation order $n^{*}$ is determined by minimizing the right-hand-side of
Eq. (9). As a result, we have
$n^{*}\sim\ln(\omega/g)$ (10)
and
$\kappa\lesssim
e^{-C[\ln(\omega/g)]^{2}}=\left(\frac{\omega}{g}\right)^{-C\ln(\omega/g)},\quad
C\geq\frac{1}{4\ln 2}.$ (11)
This upper bound of $\kappa$ implies that the heating time under the Thue-
Morse quasi-periodic driving is shorter than $e^{O(\omega/g)}$ but longer than
any polynomial in $\omega/g$.
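A sketch of this minimization in log space (constants are again ignored), reproducing the logarithmic growth of $n^{*}$ in Eq. (10):

```python
from math import lgamma, log

def tm_log_bound(r, nmax=60):
    """Minimise log[n! (2^n r)^n] over n (right-hand side of Eq. (9)), r = g/omega.
    Returns (log of the optimal bound, optimal truncation order n*)."""
    return min((lgamma(n + 1) + n * (n * log(2) + log(r)), n) for n in range(1, nmax))

for omega_over_g in (1e2, 1e4, 1e6):
    logk, nstar = tm_log_bound(1.0 / omega_over_g)
    print(omega_over_g, nstar, logk)  # n* grows ~ ln(omega/g), cf. Eq. (10)
```

The log of the optimal bound decreases superlinearly in $\ln(\omega/g)$, consistent with Eq. (11).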
Up to here, we have shown that $H_{\mathrm{eff}}^{(n)}$ is a quasi-conserved
quantity. If no (quasi)conserved quantity exists besides
$H_{\mathrm{eff}}^{(n)}$, a prethermal state is described by the Gibbs state
$\rho_{\mathrm{pre}}\propto e^{-\beta_{\mathrm{eff}}H_{\mathrm{eff}}^{(n)}}$
with an effective temperature $\beta_{\mathrm{eff}}^{-1}$, which is determined
by the initial value of the expectation value of $H_{\mathrm{eff}}^{(n)}$.
Accurate heating rates for $n$-RMD.— For $n$-RMD, we have obtained a rigorous
upper bound $\kappa\lesssim O(\omega^{-n})$ for large $\omega$, but it is not
optimal as we have already mentioned. The correct scaling
$\kappa\sim\omega^{-(2n+1)}$ is theoretically (but not rigorously) obtained by
combining the Magnus expansion technique with Fermi’s golden rule. For $n$-RMD
with $n\geq 1$, $\tilde{H}_{n}^{(\pm)}$ shares the common Magnus expansion
$H_{\mathrm{eff}}^{(n)}$ up to $(n-1)$th order. Now we investigate the effect
of one more higher order term $(2^{n}T)^{n}\Omega_{n,n}^{(\pm)}$ by
considering it as a perturbation. Truncated Magnus expansions of
$\tilde{H}_{n}^{(\pm)}$ at $n$th order are given by
$\tilde{H}_{\mathrm{eff}}^{(n,\pm)}:=H_{\mathrm{eff}}^{(n)}+(2^{n}T)^{n}\Omega_{n,n}^{(\pm)}$.
For each time interval $t\in[\ell 2^{n}T,(\ell+1)2^{n}T)$, either
$\tilde{H}_{\mathrm{eff}}^{(n,+)}$ or $\tilde{H}_{\mathrm{eff}}^{(n,-)}$ is
chosen randomly. Therefore the additional term
$(2^{n}T)^{n}\Omega_{n,n}^{(\pm)}$ is regarded as a random driving with
strength proportional to $T^{n}$. According to Fermi’s golden rule, the
heating rate under a random sequence of two Hamiltonians whose difference is
proportional to $v$ is given by $\kappa\sim\omega^{-1}v^{2}$ [19]. In our case $v\sim T^{n}\sim\omega^{-n}$, and therefore
$\kappa\sim\omega^{-1}\cdot\omega^{-2n}=\omega^{-(2n+1)},$ (12)
which agrees with numerical results [16].
We emphasize that Fermi’s golden rule calculations discussed here are beyond
the linear-response argument. Here we have applied the golden rule to the
effective Hamiltonian (random driving of $H_{\mathrm{eff}}^{(n,\pm)}$), in
which nonlinear effects of the driving field are already taken into account.
Indeed, the assumption of weak driving is unnecessary. Equation (12) is valid
as long as the frequency $\omega$ is large enough.
Comparison with numerical results.—
Figure 1: (a) Time evolutions of $\braket{\bar{H}}_{t}/\braket{\bar{H}}_{0}$
in the model under the Thue-Morse quasi-periodic driving for various driving
frequencies. (b) Semi-log plot and (c) log-log plot of the heating rate (blue
circles). Straight lines in (b) (green squares) and (c) (pink diamonds)
correspond to an exponential fit and an algebraic fit, respectively. In (b),
numerical data curves down from an exponential fit. In (c), numerical data
curves up from an algebraic fit.
We have obtained $\kappa\sim\omega^{-(2n+1)}$ for $n$-RMD by applying Fermi’s
golden rule to the Magnus expansion, which agrees with numerical results
obtained in the previous work [16]. On the other hand, for the Thue-Morse
quasi-periodic driving, we have obtained the less familiar scaling
$\kappa\lesssim(\omega/g)^{-C\ln(\omega/g)}$. Previous work [16] reported that
the heating time looks exponential in $\omega$, while our upper bound (11)
suggests neither exponential nor algebraic dependence of the heating rate on
the frequency. Although our upper bound does not contradict an exponential
dependence, we should carefully analyze numerical data because it is sometimes
a tricky problem to determine even whether given numerical data show an
exponential or algebraic dependence.
In numerical calculations, the following Hamiltonians are considered:
$H_{\pm}=\sum_{i=1}^{N}\left[\sigma_{i}^{z}\sigma_{i+1}^{z}+J_{x}\sigma_{i}^{x}\sigma_{i+1}^{x}+(B_{0}\pm
B_{x})\sigma_{i}^{x}+B_{z}\sigma_{i}^{z}\right],$ (13)
where $\sigma_{i}^{x},\sigma_{i}^{y},\sigma_{i}^{z}$ are Pauli matrices and
periodic boundary conditions are imposed. In this Letter we fix $J_{x}=0.72$,
$B_{z}=0.49$, $B_{x}=0.61$, and $B_{0}=0.21$. For the Thue-Morse quasi-
periodic driving, we calculate the time evolution of the averaged Hamiltonian
$\bar{H}=(H_{+}+H_{-})/2$, which is regarded as the energy of the system.
Figure 1 (a) shows time evolutions of
$\braket{\bar{H}}_{t}/\braket{\bar{H}}_{0}$ for various driving frequencies.
Prethermalization occurs and the heating time increases with $\omega\sim 1/T$.
Numerically, the heating time $\tau_{h}$ is extracted by averaging the time
such that $\braket{\bar{H}}_{t}/\braket{\bar{H}}_{0}=0.75,0.75\pm 0.03,0.75\pm
0.06$, where $\braket{\bar{H}}_{t}$ denotes the expectation value of the
energy at time $t$. The heating rate $\kappa$ is just given by
$\kappa=\tau_{h}^{-1}$.
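This threshold-averaging extraction of $\tau_{h}$ can be sketched with linear interpolation; here a toy exponential decay stands in for the simulated energy curve:

```python
import numpy as np

def heating_time(t, e_ratio, levels=(0.69, 0.72, 0.75, 0.78, 0.81)):
    """Average the times at which <Hbar>_t/<Hbar>_0 crosses the thresholds
    0.75 and 0.75 +/- 0.03, 0.75 +/- 0.06. Assumes a monotonic decay."""
    # np.interp needs an increasing abscissa, so flip the decaying curve
    return float(np.mean([np.interp(lv, e_ratio[::-1], t[::-1]) for lv in levels]))

# toy stand-in for a simulated energy curve: exp(-t/tau) with tau = 10
t = np.linspace(0.0, 50.0, 5001)
tau_h = heating_time(t, np.exp(-t / 10.0))
print(tau_h)
```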
Figure 1 (b) and (c) show a semi-log plot and a log-log plot of the heating
time $\tau_{h}$, respectively. In the semi-log (log-log) plot, numerical data
curves down (up) from a straight line, which indicates that the heating time
is shorter than exponential but longer than algebraic in $\omega$. This is
consistent with the upper bound given in Eq. (11). In Fig. 2, we plot the
function $\sqrt{\ln\tau_{h}}$ against $\ln\omega$. If $\kappa\approx
e^{-C[\ln(\omega/g)]^{2}}$ as suggested by Eq. (11), we will have a straight
line of slope $\sqrt{C}$. Numerical data fits well with a straight line of
slope $\sqrt{C}=1.1$. Although the constant $C\geq 1/(4\ln 2)$ in our upper
bound (11) seems not tight, our upper bound captures a correct frequency
dependence of the heating rate.
Figure 2: Plot of $\sqrt{\ln\tau_{h}}$ against $\ln\omega$. Numerical data
(blue circles) fits well with a straight line of slope $\sqrt{C}=1.1$, which
implies that $\tau_{h}$ behaves like $\tau_{h}\sim e^{C[\ln(\omega/g)]^{2}}$,
as suggested by Eq. (11).
Transient dynamics.— Finally, we make a remark that the effective Hamiltonian
$H_{\mathrm{eff}}^{(n)}$ not only describes a prethermal state, but also
governs transient dynamics of local observables under an arbitrary sequence of
$U_{n}^{(\pm)}$. Let us assume that the Hamiltonians $H_{\pm}(t)$ describe a
$d$-dimensional lattice system with short-ranged interactions (interactions
decay exponentially fast). In Ref. [21], the following inequality has been
proved for any local operator $O_{X}$ that acts nontrivially only on a set of
sites $X\subset\\{1,2,\dots,N\\}$ and is normalized as $\|O_{X}\|=1$:
$\left\|U_{t}^{\dagger}O_{X}U_{t}-e^{iH_{\mathrm{eff}}^{(n)}t}O_{X}e^{-iH_{\mathrm{eff}}^{(n)}t}\right\|\lesssim|X|n!(2^{n}gT)^{n}t^{d+1},$
(14)
where $t=M2^{n}T$ and $U_{t}=U_{n}^{(\sigma_{M})}U_{n}^{(\sigma_{M-1})}\dots
U_{n}^{(\sigma_{1})}$ with an arbitrary sequence of
$\sigma_{\ell}\in\\{+,-\\}$. This inequality is proved by using the inequality
like Eq. (5) and the Lieb-Robinson bound [22, 23].
The inequality (14) tells us that the effective Hamiltonian approximates the
dynamics of local observables during a time $\tau_{d}$, where
$\tau_{d}\gtrsim(\omega/g)^{n/(d+1)}$ for $n$-RMD and
$\tau_{d}\gtrsim\exp\left\\{\frac{C}{d+1}[\ln(\omega/g)]^{2}\right\\}$ for the
Thue-Morse quasi-periodic driving.
Some extensions.— Our derivation essentially relies on Eq. (3), which is
tailored to the Thue-Morse sequence. Our results allow some extensions. First,
we have assumed static $H_{\pm}$, but our main results, i.e. Eqs. (9), (11),
(12), and (14), hold for time-dependent $H_{\pm}(t)$ also, because Eq. (3) is
not affected by this modification. Second, the Thue-Morse sequence can be
generalised. For example, consider three unitaries $U_{0}=e^{-iH_{0}T}$,
$U_{1}=e^{-iH_{1}T}$, and $U_{2}=e^{-iH_{2}T}$, where $H_{\sigma}$
($\sigma=0,1,2$) are local Hamiltonians. We recursively define
$U_{n}^{(\sigma)}$ as
$U_{n}^{(\sigma)}=U_{n-1}^{(\sigma+2)}U_{n-1}^{(\sigma+1)}U_{n-1}^{(\sigma)}$,
where the sums $\sigma+1$ and $\sigma+2$ are defined modulo 3. Then, a similar upper bound holds for a random sequence of $\\{U_{n}^{(\sigma)}\\}$ or for a quasi-periodic sequence defined by $\lim_{n\to\infty}U_{n}^{(\sigma)}$. However, Eq.
(3) is in general not satisfied for other quasi-periodic driving such as the
Fibonacci driving. It is an open problem to rigorously evaluate the heating
rate for general quasi-periodic drives.
Conclusion.— We have derived rigorous upper bounds on the heating rate for
quantum many-body systems under quasi- and a-periodic driving with tunable
structured randomness. These differ from those for Floquet [1, 2, 3, 4] and
smooth quasi-periodic driving [17].
We emphasize that nonperturbative and rigorous results for aperiodic – tunable
yet random – driven systems are very rare. Remarkably, such rigorous results
are established based on mathematical machineries developed for regular
systems (i.e. periodically driven systems) [1, 2]. It is an interesting
problem to investigate whether there are any other structures to the driving
which give rise to systematically understandable behavior.
###### Acknowledgements.
This work was in part supported by Japan Society for the Promotion of Science
KAKENHI Grants No. 19K14622, by the Deutsche Forschungsgemeinschaft under
cluster of excellence ct.qmat (EXC 2147, project-id 390858490), and by a
Doctoral-Program of the German Academic Exchange Service (DAAD) Fellowship. We
acknowledge support from the Imperial-TUM flagship partnership.
## References
* Kuwahara _et al._ [2016] T. Kuwahara, T. Mori, and K. Saito, Floquet-Magnus theory and generic transient dynamics in periodically driven many-body quantum systems, Ann. Phys. (N. Y). 367, 96–124 (2016), arXiv:1508.05797 .
* Mori _et al._ [2016] T. Mori, T. Kuwahara, and K. Saito, Rigorous Bound on Energy Absorption and Generic Relaxation in Periodically Driven Quantum Systems, Phys. Rev. Lett. 116, 120401 (2016).
* Abanin _et al._ [2017a] D. Abanin, W. De Roeck, W. W. Ho, and F. Huveneers, A Rigorous Theory of Many-Body Prethermalization for Periodically Driven and Closed Quantum Systems, Commun. Math. Phys. 354, 809–827 (2017a), arXiv:1509.05386 .
* Abanin _et al._ [2017b] D. A. Abanin, W. De Roeck, W. W. Ho, and F. Huveneers, Effective Hamiltonians, prethermalization, and slow energy absorption in periodically driven many-body systems, Phys. Rev. B 95, 014112 (2017b), arXiv:1510.03405 .
* Mori _et al._ [2018] T. Mori, T. N. Ikeda, E. Kaminishi, and M. Ueda, Thermalization and prethermalization in isolated quantum systems: A theoretical overview, J. Phys. B At. Mol. Opt. Phys. 51, 112001 (2018), arXiv:1712.08790 .
* Eckardt [2017] A. Eckardt, Colloquium: Atomic quantum gases in periodically driven optical lattices, Rev. Mod. Phys. 89, 011004 (2017).
* Oka and Kitamura [2019] T. Oka and S. Kitamura, Floquet engineering of quantum materials, Annu. Rev. Condens. Matter Phys. 10, 387–408 (2019), arXiv:1804.03212 .
Corresponding author:<EMAIL_ADDRESS>
# Computational design of armored nanodroplets as nanocarriers for
encapsulation and release under flow conditions
François Sicard1,2 Jhoan Toro-Mendoza3 1 Department of Physics and Astronomy,
University College London, WC1E 6BT London, UK 2 Department of Chemical
Engineering, University College London, WC1E 7JE London, UK 3Centro de
Estudios Interdisciplinarios de la Fisica, Instituto Venezolano de
Investigaciones Cientificas, Caracas 1020A, Venezuela
###### Abstract
Nanocarriers are nanosized materials commonly used for the targeted delivery
of active compounds, including antimicrobials and small-molecule drugs.
They pose both fundamental and engineering challenges, since sophisticated
nanocarriers must show adequate structure, stability, and function in complex
environments. Here, we report on the computational design of a
distinctive class of nanocarriers, built from buckled armored nanodroplets,
able to selectively encapsulate or release a probe load under specific flow
conditions. Mesoscopic simulations offer detailed insight into the interplay
between the characteristics of laden surface coverage and evolution of the
droplet morphology. First, we describe in detail the formation of pocket-like
structures in Pickering emulsion nanodroplets and their stability under
external flow. Then we use that knowledge to test the capacity of these
emulsion-based pockets to yield flow-assisted encapsulation or expulsion of a
probe load. Finally, the rheological properties of our model carrier are put
into perspective with those of delivery systems employed in pharmaceutical and
cosmetic technology.
Nanocarriers, Pickering emulsion, buckling, shear flow, encapsulation,
Dissipative Particle Dynamics
Over the last decade, special attention has been devoted to the design,
characterization, and development of nanocarrier systems with potential for
the targeted delivery of active molecules. They offer remarkable
advantages in a wide range of industrial and medical applications, including
food Silva et al. (2012), cosmetic Aziz et al. (2019) and pharmaceutical
industries Rosenblum et al. (2018). In this context, nanoparticle-stabilized
emulsions, aka Pickering emulsions Pickering (1907), have been intensively
used as drug-delivery vehicles in topical medication Frelichowska et al.
(2009), where their surfactant-free character makes them attractive for
different applications since surfactants often produce adverse effects, such
as irritation and haemolytic disturbances Aparicio et al. (2005); Chevalier
and Bolzinger (2013). They can also serve as ideal compartments for reactions
catalyzed by nanoparticles (NPs) attached at the oil-water interfaces Shi et
al. (2014); Faria et al. (2015); Qu et al. (2017) and can be used in bacterial
recognition technologies Shen et al. (2014); Horváth et al. (2019). Another
important and useful advantage of Pickering emulsions over conventional
surfactant-stabilized systems is their enhanced stabilization against
coalescence Sicard and Striolo (2016) and their smaller environmental
footprint Ortiz et al. (2020). While tremendous progress has been made in
particle-based microfluidic technology Seemann et al. (2012); Orellana and
Baret (2019), the characteristics of Pickering emulsions pose a number of
intriguing physical questions, chief among them the perennial lack of detail
about how particles arrange at the liquid/liquid
interface. Predicting and controlling this interfacial arrangement is even
more challenging under flow conditions.
Here, we report on the computational design of a new class of nanocarriers
built from Pickering nano-emulsions, which exhibit a pocket-like morphology
able to encapsulate or release a probe load under specific flow conditions.
Dissipative Particle Dynamics (DPD) is employed as a mesoscopic simulation
method Groot and Warren (1997) with two aims: (1) to describe in detail the
formation of pocket-like structures in Pickering nanodroplets and their
stability under specific flow conditions and then (2) to test the capacity of
the formed pockets to encapsulate or release a probe load. Also, the physical
properties of our model carrier are put into perspective within the conditions
encountered in the high-shear regime of spreading topical medication on the
skin and the transport of targeted carriers in pathological alterations of the
vascular system. Despite technological advances in experimental methods to
control NP assembly at fluid interfaces Reguera et al. (2012); Giner-Casares
and Reguera (2016), the inherent limitation in experimental resolution precludes
direct access to local observables, such as the particles’ three-phase contact
angle distribution and the details of the particles’ interfacial network Binks
and Yin (2016) in systems presenting complex geometries, whereas these pieces of
information can be accessed by numerical simulations Sicard and Striolo (2016,
2017, 2018); Sicard et al. (2019).
Figure 1: Formation of pocket-like structures. (a) Simulation snapshots
representing the initial (left) and final (center) water-in-oil droplets
armored with different nanoparticle surface coverages, obtained from the
evaporation process: uniformly covered droplets with small ($\textrm{UC}_{S}$)
or large ($\textrm{UC}_{L}$) NPs, and heterogeneously covered droplets with either each
hemisphere covered with small or large NPs ($\textrm{HC}_{1}$) or three
distinct layers made of small-large-small NPs ($\textrm{HC}_{2}$). The cross-
sectional view of each system is also shown (right). Cyan and purple spheres
represent the small and large Janus NPs, respectively. The detailed structure
of the NPs is shown in Fig. S1a in the SI. Pink spheres represent water beads.
The oil molecules surrounding the system are not shown for clarity. (b)
Evolution of the radius of gyration, $R_{\textrm{GYR}}$, of $\textrm{UC}_{S}$,
$\textrm{UC}_{L}$, $\textrm{HC}_{1}$, and $\textrm{HC}_{2}$, as a function of
the dimensionless parameter $\Delta N_{W}=N_{W}/N_{W}^{(0)}$. $N_{W}$
represents the number of water beads that remain in the droplet after each
removal and $N_{W}^{(0)}$ is the initial number of water beads. The
statistical errors are estimated as one standard deviation from the average
obtained for equilibrated trajectories and are always smaller than the
symbols. The dashed line represents the spherical-shrinking regime defined as
$R_{\textrm{GYR}}\sim(\Delta N_{W})^{1/3}$. (c) Three-phase contact angle
distribution of small (blue) and large (red) NPs for $\textrm{HC}_{1}$ after
the last pumping/equilibration iteration ($\Delta N_{W}\sim 0.35$). (d)
Evolution of the radial distribution function, $g(r)$, with $r$ the distance
between the center of the NPs, of small (blue) and large (red) NPs for
$\textrm{HC}_{1}$ and $\textrm{HC}_{2}$.
Water nanodroplets coated with spherical NPs with different diameters and
immersed in an organic solvent are considered. The coating is formed by Janus
NPs (particles whose surface shows two distinct wetting properties) Zhang et
al. (2017); Agrawal and Agrawal (2019) whose initial three-phase contact
angles result in maximum adsorption energy at the fluid-fluid interface Binks
and Lumsdon (2000). Hence, we are able to quantify the role played by
homogeneous and heterogeneous NP surface coverage at the emulsion droplet
interface when the volume of the droplet is reduced. In particular, we observe
in detail the formation of crater-like depressions with selective geometry,
which can structurally favour the accommodation of a probe load. The flow conditions
clearly affect the dynamical response of the pocket-like armored nanodroplets.
Under specific conditions, we observe the formation of long-lived anisotropic
structures, characteristic of a jammed particle coverage at the liquid-liquid
interface. Furthermore, we examine the capacity of the system to control the
flow-assisted encapsulation or release of a probe load, which depends on the
interplay between NP surface coverage, the level of buckling, and the shear
flow conditions.
System characteristics. In Fig. 1a we show representative snapshots of water
emulsion nanodroplets in organic solvent (decane) stabilized with Janus NPs.
The scaled temperature in the DPD framework is equivalent to $298.73$ K. The
details of the numerical parametrization and NP structures are given in the
Methods section and the Supporting Information (SI). The configurations differ
by the size of the NPs and the characteristics of the surface coverage. We
consider small (S) and large (L) NPs with diameters $d_{S}\sim 2.2$ nm and
$d_{L}\sim 4.5$ nm, whose diffusion coefficients measured on a planar
decane/water interface are $D_{S}\sim(4.7\pm 3.1)\times 10^{-7}$
$\textrm{cm}^{2}~{}\textrm{s}^{-1}$ and $D_{L}\sim(1.8\pm 0.7)\times 10^{-7}$
$\textrm{cm}^{2}~{}\textrm{s}^{-1}$, respectively (see Methods and Fig. S1c in
the SI). The NPs are originally located at the surface of the emulsion
nanodroplets of diameter $d_{D}\sim 45$ nm. Similar NP surface coverage
$\phi\sim 0.8$, as defined in Refs. Luu et al. (2013); Sicard and Striolo
(2017), is considered on the armored nanodroplets. This yields similar initial
three-phase contact angles $\theta_{S}\sim 84.1^{\circ}\pm 2.7^{\circ}$ and
$\theta_{L}\sim 86.8^{\circ}\pm 1.1^{\circ}$ for the small and large NPs,
respectively (see Methods and Fig. S1b in the SI), in qualitative agreement
with simulations Fan et al. (2011); Fan and Striolo (2012); Sicard and Striolo
(2016) and experimental observations Arnaudov et al. (2010). From the error
bars estimated, it is observed that the small NPs are more sensitive to
thermal fluctuations at the interface compared to the large ones,
characteristic of the increase of the adsorption energy with the particle
radius Binks and Fletcher (2001); Jiang and Granick (2007); Khedr and Striolo
(2020). We also measure the decrease of the interfacial tension,
$\Delta\gamma_{S}$ and $\Delta\gamma_{L}$, for small and large NPs at planar
interfaces for similar NP surface coverage (see Methods). We obtain
$\Delta\gamma_{S}=\gamma_{0}-\gamma_{S}\sim 5.1~{}\textrm{mN}.\textrm{m}^{-1}$
and $\Delta\gamma_{L}=\gamma_{0}-\gamma_{L}\sim
2.2~{}\textrm{mN}.\textrm{m}^{-1}$, with $\gamma_{0}\sim
51.7~{}\textrm{mN.m}^{-1}$ the interfacial tension for a planar decane/water
interface Fan and Striolo (2012); Sicard et al. (2019), and $\gamma_{S,L}$ the
interfacial tension when the interface is covered with small or large NPs,
respectively. In particular, large NPs have less effect on the reduction of
the interfacial tension and are less diffusive than smaller ones, in
qualitative agreement with simulations Khedr and Striolo (2020) and
experimental observations Wang et al. (2011). A lower mobility, along with
the size of the NPs, will play a key role in pocket formation.
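The quoted diffusion coefficients follow from the lateral mean-squared displacement (MSD) of the NPs on the planar interface. A minimal sketch of this estimate, assuming two-dimensional interfacial diffusion (Einstein relation $\textrm{MSD}=4Dt$) and using a synthetic random walk in place of the DPD trajectories:

```python
import numpy as np

def lateral_diffusion_coefficient(positions, dt, max_lag=50):
    """Estimate D from the lateral MSD, assuming 2D interfacial
    diffusion (Einstein relation MSD = 4*D*t).

    positions : (n_frames, n_particles, 2) in-plane coordinates
    dt        : time between frames
    """
    lags = np.arange(1, max_lag + 1)
    msd = np.array([
        np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=-1))
        for lag in lags
    ])
    # Average the per-lag estimates D = MSD / (4 t)
    return np.mean(msd / (4.0 * lags * dt))

# Synthetic 2D random walk with a known input coefficient
rng = np.random.default_rng(0)
D_true, dt = 4.7e-7, 1e-6                      # cm^2/s, s
steps = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), size=(2000, 50, 2))
trajectory = np.cumsum(steps, axis=0)
D_est = lateral_diffusion_coefficient(trajectory, dt)
```

The recovered value agrees with the input coefficient to within a few percent, the residual error being purely statistical.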
Formation of pocket-like structures. The volume of the droplets is
systematically reduced, by iteratively pumping a small constant proportion of
water molecules out of the droplets and letting the systems equilibrate
between each iteration, until the systems present dimples and cups at the
droplet interface followed by the formation of crater-like depressions,
characteristic of the buckling instability Datta et al. (2010); Sicard and
Striolo (2017); Gu and Botto (2018) (see details in the SI). This process is
physically equivalent to a process of solubilization of the dispersed phase
into the solvent Datta et al. (2010); Sicard and Striolo (2017). We
arbitrarily stop the pumping when the number of water molecules constituting
the droplets reaches the value $\Delta N_{W}=N_{W}/N_{W}^{(0)}\sim 0.35$,
where $N_{W}^{(0)}$ and $N_{W}$ are the initial number of water beads and the
number of water beads remaining in the droplets, respectively.
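Since a fixed proportion of the remaining water is removed at each iteration, $\Delta N_{W}$ decays geometrically with the iteration number. A short sketch, where the removal fraction $f=0.05$ per iteration is an assumed value (the actual proportion is not specified here):

```python
import math

f = 0.05          # assumed fraction of remaining water removed per iteration
target = 0.35     # final value of dN_W = N_W / N_W^(0)

# Geometric decay: dN_W(k) = (1 - f)**k, so the target is reached after
# k = ceil(ln(target) / ln(1 - f)) pumping/equilibration iterations.
k = math.ceil(math.log(target) / math.log(1.0 - f))
dN_W = (1.0 - f) ** k
```

With this assumed fraction, about twenty pumping/equilibration iterations are needed to reach $\Delta N_{W}\sim 0.35$.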
Figure 2: Dynamical response under shear flow. (a) Temporal evolution of the
center-of-mass velocity $V_{COM}$ and the relative shape anisotropy
$\kappa^{2}$ of $\textrm{HC}_{1}$ subjected to shear flow and after abrupt
shear cessation for three different values of the shear rate $\dot{\epsilon}$.
The shear flow is continuously applied for a time duration $\Delta t\sim
0.6~{}\mu s$ before it is abruptly stopped and the structure relaxes for
another $\Delta t\sim 0.6~{}\mu s$. (b) Representative snapshots of the
armored nanodroplets obtained just before the flow cessation ($t\sim 0.6~{}\mu
s$) and at the end of the simulation ($t\sim 1.2~{}\mu s$) are shown. Cyan and
purple spheres represent the small and large Janus NPs, respectively. The
detailed structure of the NPs is shown in Fig. S1a in the SI. Pink spheres
represent water beads. The oil molecules surrounding the system are not shown
for clarity. (c) Radial distribution function, $g_{S}(r)$ and $g_{L}(r)$, with
$r$ the distance between the center of the NPs, of small (blue) and large
(red) NPs for $\textrm{HC1}_{a,b,c}^{*}$. The corresponding radial
distribution functions measured before the shear flow is applied (HC1) are
shown in black color for comparison.
In Fig. 1b, we show the evolution of the radius of gyration of the emulsion
nanodroplets, $R_{\textrm{GYR}}$, as a function of the dimensionless parameter
$\Delta N_{W}$. We initially consider spherical droplets whose surface is
either uniformly covered (UC) with NPs of identical diameter or
heterogeneously covered (HC) with NPs of different diameters. In particular,
$\textrm{UC}_{S}$ (respectively $\textrm{UC}_{L}$) is solely covered with
small (respectively large) NPs, as shown in Fig. 1a. $\textrm{HC}_{1}$ and
$\textrm{HC}_{2}$ have each hemisphere covered with small and large NPs, or
three distinct layers made of small-large-small NPs, respectively (cf. Fig.
1a). When $\Delta N_{W}>0.75$, the radii of gyration of the four systems follow
a similar evolution, regardless of the NP coverage (UC or HC), characteristic of a
spherical-shrinking regime, $R_{\textrm{GYR}}\sim(\Delta N_{W})^{1/3}$ (dashed
line in Fig. 1b). When $\Delta N_{W}<0.75$, the systems follow different
transitions from spherical shrinking to buckling, depending on the
characteristics of the NP interfacial packing originating from the difference
in surface coverage Basavaraj et al. (2006). This transition happens when the
NP monolayer becomes close to its maximum packing, as observed with the
evolution of the radial distribution function $g(r)$, with $r$ the distance
between the center of the NPs, shown in Fig. 1d and Figs. S2 in the SI. In
particular, $\textrm{UC}_{S}$ and $\textrm{UC}_{L}$ show different
morphological evolutions when $\Delta N_{W}$ decreases, with $\textrm{UC}_{S}$
entering the buckling regime at larger $\Delta N_{W}$ than $\textrm{UC}_{L}$, in
qualitative agreement with the numerical work of Gu et al. Gu and Botto
(2018). Finally, below $\Delta N_{W}\sim 0.45$, $R_{\textrm{GYR}}$ increases
as the droplets become approximately hemispherical.
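The spherical-shrinking law $R_{\textrm{GYR}}\sim(\Delta N_{W})^{1/3}$ follows from volume conservation at fixed bead density: for a uniform sphere, $R_{\textrm{GYR}}=\sqrt{3/5}\,R$ and $R\propto N_{W}^{1/3}$. A minimal numerical check with beads sampled uniformly in a sphere (droplet radius taken from $d_{D}\sim 45$ nm):

```python
import numpy as np

def radius_of_gyration(points):
    centered = points - points.mean(axis=0)
    return np.sqrt(np.mean(np.sum(centered ** 2, axis=-1)))

rng = np.random.default_rng(1)
R = 22.5          # nm, initial droplet radius (d_D ~ 45 nm)
n = 200_000

# Sample bead positions uniformly inside the sphere
directions = rng.normal(size=(n, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
radii = R * rng.random(n) ** (1.0 / 3.0)
beads = directions * radii[:, None]

r_gyr = radius_of_gyration(beads)                 # ~ sqrt(3/5) * R
# Removing water at fixed density rescales R as N_W^(1/3), hence
# R_GYR(dN_W) / R_GYR(1) = dN_W**(1/3) in the spherical-shrinking regime.
ratio_at_075 = 0.75 ** (1.0 / 3.0)
```

At the edge of the regime, $\Delta N_{W}=0.75$, the droplet has thus lost only about $9\%$ of its radius of gyration.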
The structures of the armored nanodroplets obtained after the last
pumping/equilibration iteration are shown in Fig. 1a (central panel). Visual
inspection shows different folding morphologies, depending on the
characteristics of the NP coverage. Unlike UC where crater-like depressions
form evenly at the interface when the droplet is subject to a compressive
surface stress, we observe the formation of well-localised crater-like
depressions when the droplet is heterogeneously covered ($\textrm{HC}_{1}$ or
$\textrm{HC}_{2}$), depending on the localisation of the interfacial areas
covered with small or large NPs. Notably, we observe the crater-like
depressions form in the interfacial areas covered with the smallest NPs, where
maximum packing of the interfacial network is achieved more quickly and the
interfacial tension is lower than that measured for larger NPs.
The properties of the interfacial layers are quantitatively assessed via the
analysis of the distribution of the three-phase contact angles,
$\theta_{C}^{(S)}$ and $\theta_{C}^{(L)}$, of small and large NPs,
respectively. As shown in Fig. S1b in the SI, $\theta_{C}^{(S)}$ and
$\theta_{C}^{(L)}$ follow Gaussian distributions in the initial configurations
($\Delta N_{W}\sim 1$), where the shape of the droplets is spherical. When the
volume of $\textrm{UC}_{S}$ and $\textrm{UC}_{L}$ is reduced,
$\theta_{C}^{(S)}$ and $\theta_{C}^{(L)}$ uniformly evolve from Gaussian to
skewed unimodal distributions, in line with previous work Sicard and Striolo
(2017). The values of the respective means, $\mu_{S}$ and $\mu_{L}$, and
standard deviations, $\sigma_{S}$ and $\sigma_{L}$, for small and large NPs,
respectively, are shown in Table 1. Whereas the contact angle distributions
show a single peak centered at the same value as the one measured for the
initial configuration, $\sigma_{S}$ and $\sigma_{L}$ show significant
variations when the volume of the droplets is reduced, characteristic of the
skewness of the distribution and the decrease of the NP–NP distance (cf. Fig.
S2 in the SI).
$\Delta N_{W}$ | | $\textrm{UC}_{S}$ | $\textrm{UC}_{L}$ | $\textrm{HC}_{1}$ | $\textrm{HC}_{2}$
---|---|---|---|---|---
1.0 | (S) | $84.1^{\circ}\pm 2.7^{\circ}$ | $-$ | $84.1^{\circ}\pm 2.7^{\circ}$ | $84.1^{\circ}\pm 2.7^{\circ}$
 | (L) | $-$ | $86.8^{\circ}\pm 1.1^{\circ}$ | $86.8^{\circ}\pm 1.1^{\circ}$ | $86.8^{\circ}\pm 1.1^{\circ}$
0.35 | (S) | $82.9^{\circ}\pm 5.9^{\circ}$ | $-$ | $82.8^{\circ}\pm 6.0^{\circ}$ | $82.4^{\circ}\pm 6.4^{\circ}$
 | (L) | $-$ | $83.6^{\circ}\pm 9.9^{\circ}$ | $86.7^{\circ}\pm 1.9^{\circ}$ | $87.0^{\circ}\pm 1.8^{\circ}$
Table 1: Mean ($\mu$) and standard deviation ($\sigma$) of the
three-phase contact angle distributions in UC and HC droplets in the initial
($\Delta N_{W}\sim 1.0$) and final ($\Delta N_{W}\sim 0.35$) configurations.
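The evolution from Gaussian to skewed unimodal distributions can be quantified with the sample skewness alongside $\mu$ and $\sigma$. A minimal sketch on synthetic angle samples (the Gaussian and gamma-skewed distributions below are illustrative stand-ins for the measured ones, not the simulation data):

```python
import numpy as np

def angle_stats(theta):
    """Mean, standard deviation, and sample skewness of a
    three-phase contact-angle sample (in degrees)."""
    mu = theta.mean()
    sigma = theta.std(ddof=1)
    skewness = np.mean(((theta - mu) / sigma) ** 3)
    return mu, sigma, skewness

rng = np.random.default_rng(2)
# Spherical droplet (dN_W ~ 1): near-Gaussian angles, cf. 84.1 +/- 2.7 deg
spherical = rng.normal(84.1, 2.7, size=10_000)
# Buckled droplet (dN_W ~ 0.35): compression skews the distribution
buckled = 90.0 - rng.gamma(2.0, 3.0, size=10_000)

mu_sph, sig_sph, skew_sph = angle_stats(spherical)
mu_buc, sig_buc, skew_buc = angle_stats(buckled)
```

The near-zero skewness of the first sample and the strongly negative skewness of the second mirror the qualitative change reported between the initial and final configurations.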
When the volume of $\textrm{HC}_{1}$ and $\textrm{HC}_{2}$ is reduced, on the
other hand, we observe significant differences in the evolution of the
distributions of $\theta_{C}^{(S)}$ and $\theta_{C}^{(L)}$, due to the
heterogeneity in NP size and surface coverage. In particular, the distribution
of $\theta_{C}^{(L)}$ is similar to the one measured in the initial
configuration, while the distribution of $\theta_{C}^{(S)}$ shows large
variability, similar to the one measured in $\textrm{UC}_{S}$, during the
buckling transition, originating from the difference in packing of the
monolayer at the droplet interface, as shown in Fig. 1c.
Dynamical response under shear flow. Thereafter, we investigate the structural
response of the buckled armored nanodroplets subjected to shear flow of the
surrounding fluid using the SLLOD algorithm Evans and Morriss (1984)
coupled with Lee-Edwards periodic boundary conditions Lees and Edwards (1972)
(see Methods). We focus our analysis on $\textrm{HC}_{1}$ whose structural
morphology is more likely to accommodate a probe load (cf. Fig.
1a). The minimum value for the shear rate, $\dot{\epsilon}_{0}\sim
0.9~{}\textrm{ns}^{-1}$, is set to the one for which the initial structure
starts showing significant deformations. The system is first stressed under a
constant shear rate, $\dot{\epsilon}=\alpha\times\dot{\epsilon}_{0}$, along
the $x$-axis for a time duration $\Delta t\sim 0.6~{}\mu\textrm{s}$, with the
parameter $\alpha=1.0$, $1.5$, and $2.0$. The length of the simulation is
chosen sufficiently long for the center-of-mass velocity of the droplet,
$V_{COM}$, to level off to a plateau whose value matches the one obtained from
the stationary velocity profile of laminar flow, $V_{COM}=\dot{\epsilon}\times
L_{y}/2$, with $L_{y}\sim 77$ nm the size of the simulation box along the
$y$-direction (cf. Fig. 2a). The flow is then abruptly halted and the
dynamical stability of the nanodroplet is studied for a time duration $\Delta
t\sim 0.6~{}\mu\textrm{s}$.
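The plateau values follow directly from the laminar profile. A quick numerical check with the stated parameters ($\dot{\epsilon}_{0}\sim 0.9~\textrm{ns}^{-1}$, $L_{y}\sim 77$ nm):

```python
# Stationary laminar flow: the droplet centre-of-mass velocity levels off at
# V_COM = eps_dot * L_y / 2.
eps0 = 0.9e9                  # s^-1  (0.9 ns^-1, minimum shear rate)
L_y = 77e-9                   # m    (box size along the gradient direction)

v_com = [alpha * eps0 * L_y / 2.0 for alpha in (1.0, 1.5, 2.0)]
# roughly 35, 52, and 69 m/s for the three shear rates considered
```

The two largest values bracket the $v\sim 50-70~\textrm{m.s}^{-1}$ range used later for the Weber-number estimate.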
Figure 3: Encapsulation and release of a probe load. (a) Representative
snapshots of the structural morphology of $\textrm{HC}1$ preliminarily loaded
with small (S) and large (L) hydrophobic spherical solutes of diameter
$d_{S}\sim 7.7$ nm (top panel) and $d_{L}\sim 15.4$ nm (bottom panel),
respectively. Left, middle, and right panels correspond to the initial
structure, and those obtained after $0.2~{}\mu$s and $0.5~{}\mu$s,
respectively. Cyan and purple spheres represent the small and large Janus NPs,
respectively. The detailed structure of the NPs is shown in Fig. S1a in the
SI. Pink and gold spheres represent water and solute beads. The oil molecules
surrounding the system are not shown for clarity. (b) Representative temporal
evolution of the relative shape anisotropy, $\kappa^{2}$, of $\textrm{HC}_{1}$
loaded with small and large solute, subjected to shear flow
($\dot{\epsilon}=2\times\dot{\epsilon}_{0}$) and after abrupt shear cessation.
The shear flow is continuously applied for a time duration $\Delta t\sim
0.6~{}\mu s$ before it is abruptly stopped and the structure relaxes for
another $\Delta t\sim 0.6~{}\mu s$. The evolution of $\kappa^{2}$ for the free
system is shown for comparison.
Representative snapshots of the structural morphology of the armored
nanodroplets obtained after $t\sim 0.6~{}\mu\textrm{s}$ and $t\sim
1.2~{}\mu\textrm{s}$, identified in Fig. 2a, are shown in Fig. 2b. Visual
inspection shows different morphologies depending on the intensity of the
shear rate and the relaxation of the system. The changes in structural
morphology is quantitatively assessed with the measure of the relative shape
anisotropy parameter, $\kappa^{2}$, which reflects both the symmetry and
dimensionality of the system Vymetal and Vondrásek (2011); Arkin and Janke
(2013) (see Methods). As shown in Fig. 2a (right panel), we observe the
increase of $\kappa^{2}$ at relatively short time until it levels off to a
plateau when the velocity profile of the laminar fluid becomes stationary, and
whose value depends on the intensity of the shear rate. In Figs. 2b and Fig.
S3a in the SI, we observe the increase of $\kappa^{2}$, associated with the
elongation of the droplet along the deformation axis $x$ and with the
squeezing of the crater-like depression along the orthogonal $z$-direction
($\textrm{HC}1_{a,b,c}$).
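The relative shape anisotropy used here is the standard gyration-tensor invariant, $\kappa^{2}=1-3(\lambda_{1}\lambda_{2}+\lambda_{2}\lambda_{3}+\lambda_{3}\lambda_{1})/(\lambda_{1}+\lambda_{2}+\lambda_{3})^{2}$, which vanishes for spherical symmetry and tends to 1 for a linear arrangement. A minimal sketch on synthetic bead clouds:

```python
import numpy as np

def relative_shape_anisotropy(points):
    """kappa^2 from the eigenvalues of the gyration tensor:
    kappa^2 = 1 - 3*(l1*l2 + l2*l3 + l3*l1) / (l1 + l2 + l3)**2."""
    centered = points - points.mean(axis=0)
    S = centered.T @ centered / len(points)       # 3x3 gyration tensor
    l1, l2, l3 = np.linalg.eigvalsh(S)
    return 1.0 - 3.0 * (l1 * l2 + l2 * l3 + l3 * l1) / (l1 + l2 + l3) ** 2

rng = np.random.default_rng(3)
isotropic = rng.normal(size=(50_000, 3))              # sphere-like cloud
elongated = isotropic * np.array([10.0, 0.1, 0.1])    # rod-like cloud
k2_iso = relative_shape_anisotropy(isotropic)         # close to 0
k2_rod = relative_shape_anisotropy(elongated)         # close to 1
```

The elongation of the droplet along the deformation axis therefore translates directly into an increase of $\kappa^{2}$.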
When the flow is abruptly halted at $t\sim 0.6~{}\mu\textrm{s}$, we observe
either the relaxation of $\kappa^{2}$ towards its initial value
($\textrm{HC}1_{a}^{*}$) or the formation of long-lived anisotropic structure
($\textrm{HC}1_{b,c}^{*}$), depending on the intensity of $\dot{\epsilon}$.
The specificity of the structural morphology of $\textrm{HC}1_{b,c}^{*}$ can
be explained by the formation of a jammed particle layer at the droplet
interface, in qualitative agreement with recently reported experimental
observations Kaganyuk and Mohraz (2020). To confirm this, we assess the
characteristics of the NP interfacial layer of $\textrm{HC}1_{a,b,c}^{*}$ with
the analysis of the three-phase contact angle distribution and the NP radial
distribution function of small and large NPs. Within the range of shear rates
considered in this work, $\theta_{C}^{(L)}$ follows a Gaussian distribution of
mean $\mu_{L}\sim 87.2^{\circ}$ and standard deviation $\sigma_{L}\sim
1.8^{\circ}$, similar to the one measured in both the initial and buckled
configurations (cf. Fig. 1c). $\theta_{C}^{(S)}$, on the other hand, shows a
skewed unimodal distribution with a central peak located at the same value as
the one measured for both the initial and buckled configurations. The skewness
of the distribution does not depend significantly on the intensity of the
shear rate within the standard errors (cf. Fig. S3 in the SI).
Most importantly, the radial distribution functions, $g_{S}$ and $g_{L}$, of
small and large NPs, respectively, show different behaviours depending on the
size of the NPs, as shown in Fig. 2c. Whereas $g_{S}$ follows the same
evolution as the one measured in $\textrm{HC}_{1}$ before the shear rate is
applied, the evolution of $g_{L}$ reflects the local reorganisation of the
layer made solely of large NPs at the droplet interface, as shown with the
gradual decrease of its first peak associated with the first coordination
sphere, eventually recovering the distribution observed in the initial
spherical configuration shown in Fig. 1d.
Encapsulation and release of probe load. Our results so far allow us to
address our second aim of investigating the dynamical response of the system
under shear stress, when the buckled armored nanodroplet is preliminarily loaded
with a probe load, as shown in Fig. 3a. Then, we determine the ability of
$\textrm{HC}1_{c}$ to lead to the encapsulation or release of the solutes
under flow conditions identical to those studied in the free configuration. We
consider the largest shear rate, $\dot{\epsilon}=2\times\dot{\epsilon}_{0}\sim
1.8~{}\textrm{ns}^{-1}$, which shows the strongest structural deformation of
the system, as shown in Fig. 2a (right panel). One small ($S_{S}$) and one
large ($S_{L}$) spherical hydrophobic solute are considered, with radii
$r_{S}^{(s)}\sim 4$ nm and $r_{L}^{(s)}\sim 8$ nm, respectively. The size of
$S_{S}$ and $S_{L}$ is specifically chosen so that they can be preliminarily
loaded in the crater-like depression formed at the interface of
$\textrm{HC}1$, obtained after the last removal of water (cf. Fig. 1a). $S_{S}$
and $S_{L}$, however, differ in their ability to eventually fit or not in
$\textrm{HC}1_{c}$ when the shear stress is applied. The characteristics of
the spherical solutes in the DPD framework are given in the Methods section.
The system is first stressed under constant shear rate along the $x$-axis for
a time duration $\Delta t\sim 0.6~{}\mu$s, sufficiently long to observe the
flow-assisted encapsulation or release of the small and large solutes,
respectively. The flow is then abruptly halted and the relaxation of the
system is studied for a time duration $\Delta t\sim 0.6~{}\mu$s. In Fig. 3a we
show representative snapshots of the systems loaded with the two spherical
solutes, $S_{S}$ and $S_{L}$, at different simulation stages. When the solute
is sufficiently small, the particle-laden interface folds inward under surface
stress leading to the encapsulation of the solute. When the solute is
sufficiently large, however, the crater-like depression cannot accommodate the
solute when the system is stressed. Therefore, $S_{L}$ is progressively
expelled from the pocket following the narrowing and elongation of the
nanodroplet. As the flow is abruptly halted, the armored nanodroplet relaxes
its structural morphology, accommodating the solute load inside the residual
pocket, regardless of the size of the load.
The evolution of the structural morphology of the loaded nanodroplets is
quantitatively assessed with the estimation of the relative shape anisotropy,
$\kappa^{2}$, as shown in Fig. 3b. In particular, we compare the average value
of $\kappa^{2}$ in the stationary regime, i.e. $0.2~{}\mu s\leq t\leq
0.6~{}\mu s$, defined as $\langle\kappa^{2}\rangle=\frac{1}{\Delta
t}\int\kappa^{2}(t)dt$, along with the relative change
$\delta\kappa^{2}=\Big{|}\kappa^{2}(t=1.2~{}\mu s)-\kappa^{2}(t=0.6~{}\mu
s)\Big{|}/\kappa^{2}(t=0.6~{}\mu s)$, measured between the beginning
($t=0.6~{}\mu s$) and the end ($t=1.2~{}\mu s$) of the relaxation period.
| free | $S_{S}$ | $S_{L}$
---|---|---|---
$\langle\kappa^{2}\rangle$ | $0.81\pm 0.02$ | $0.79\pm 0.04$ | $0.80\pm 0.05$
$\delta\kappa^{2}$ | $1.2\%\pm 0.3\%$ | $4.4\%\pm 1.6\%$ | $6.4\%\pm 4.5\%$
Table 2: Estimation of the average value of $\kappa^{2}$ when the system
reaches a stationary state under flow conditions, $\langle\kappa^{2}\rangle$,
and the relative change $\delta\kappa^{2}$ between the beginning and the end
of the relaxation period. Uncertainties are determined by considering three
replicas of the systems and calculating the standard error.
As shown in Tab. 2, the values of $\langle\kappa^{2}\rangle$ estimated in the
free and loaded configurations do not differ significantly within the standard
errors, suggesting the pocket-like nanodroplet passively encapsulates or
expels the small and large solutes, respectively, under the flow conditions
and solute characteristics considered in this work. When the flow is abruptly
halted, on the other hand, we observe the relaxation of the system, which
accommodates the solute load inside the residual pockets. During this process,
the relaxation of the structural morphology of the loaded nanodroplets differs
from the solute-free configuration, as quantified with the relative change
$\delta\kappa^{2}$ in Tab. 2, in qualitative agreement with the visual
inspection in Fig. 3a.
Perspectives in delivery technology. The flow-assisted encapsulation and
release of probe loads in armored nanodroplets reported so far can be extended
to systems of larger dimensions under conditions similar to those expected in
the high-shear regime of spreading topical medication on the skin (such as
creams and ointments) and the transport of targeted carriers in pathological
alterations of the vascular system (such as venous or arterial thrombosis).
These predictions would depend on the original dimension of the spherical
droplet along with the initial NP surface coverage, and the NP dimension to
droplet size ratio, which would affect the surface area to volume ratio of the
system and the average surface pressure of the particle–laden interface Gu and
Botto (2018), respectively.
To extend our results, the flow properties of the system are analyzed with two
essential control parameters, i.e. the Weber number ($We$) and the Ohnesorge
number ($Oh$), commonly used in microfluidics Hall et al. (2013); Xu et al.
(2020) and droplet formation Roas-Escalona et al. (2018). The Weber number,
$We=\rho_{o}v^{2}d_{D}/\gamma$, represents the ratio of the disrupting
inertial force to the restorative surface tension force, where $\rho_{o}$ and
$v$ are the density and the relative velocity of the ambient fluid (decane
oil) and $d_{D}$ and $\gamma$ are the diameter and the interfacial tension of
the droplet, respectively. The Ohnesorge number,
$Oh=\mu_{W}/\sqrt{\rho_{W}\gamma d_{D}}$, represents the relative importance
of the viscous force to the inertial and surface tension forces, where
$\mu_{W}$ and $\rho_{W}$ are the dynamic viscosity and the density of the
water droplet, respectively. From the calculation of $Oh$, one can define the
critical Weber number, $We_{C}=12~{}(1+1.5\times Oh^{0.74})$, which
corresponds to the minimum Weber number for a droplet to exhibit breakup modes
Gelfand (1996). Given $\gamma\sim 51.7~{}\textrm{mN.m}^{-1}$ the interfacial
tension for a planar decane/water interface Fan and Striolo (2012); Sicard et
al. (2019), $\rho_{W}\sim 1000~{}\textrm{kg.m}^{-3}$ and $\rho_{o}\sim
726~{}\textrm{kg.m}^{-3}$ the density of water and decane oil, respectively,
$v\sim 50-70~{}\textrm{m.s}^{-1}$ the stationary velocity of the laminar flow
(cf. Fig. 3a), $\mu_{W}=8.9\times 10^{-4}~{}\textrm{Pa.s}$ the dynamic
viscosity of water, and $d_{D}\sim 40$ nm the droplet diameter obtained from
the measure of $R_{\textrm{GYR}}$ (cf. Fig. 2a), we obtain $Oh\sim 0.6$,
$We_{C}\sim 25$, and $We\sim 1.4-2.8$, indicating that the armored droplets
considered in the flow-assisted encapsulation and release processes are
outside their breakup regime Derby (2010).
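The quoted values can be reproduced directly from the definitions above; a minimal numerical check:

```python
import math

gamma = 51.7e-3                 # N/m, planar decane/water interfacial tension
rho_w, rho_o = 1000.0, 726.0    # kg/m^3, water and decane densities
mu_w = 8.9e-4                   # Pa.s, dynamic viscosity of water
d_d = 40e-9                     # m, droplet diameter from R_GYR

oh = mu_w / math.sqrt(rho_w * gamma * d_d)        # Ohnesorge number, ~0.6
we_c = 12.0 * (1.0 + 1.5 * oh ** 0.74)            # critical Weber number, ~25
we = [rho_o * v ** 2 * d_d / gamma                # Weber number, ~1.4-2.8
      for v in (50.0, 70.0)]                      # m/s, laminar-flow plateau
```

Since even the largest Weber number stays an order of magnitude below $We_{C}$, the simulated droplets remain well within the non-breakup regime.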
Now, based on the estimation of the Weber number, we first extend our
predictions to the high-shear regime of spreading water-in-oil/oil-in-water
emulsion-based products. Given the relation $v\sim\dot{\epsilon}\times
L_{\perp}$ with $L_{\perp}$ the dimension of the system orthogonal to the flow
direction, we obtain
$We\sim\rho_{o}\dot{\epsilon}^{2}L_{\perp}^{2}d_{D}/\gamma$. Considering the
average thickness of a cream $L_{\perp}\sim 1$ cm and representative shear
rates $\dot{\epsilon}\sim 10^{2}-10^{3}~{}\textrm{s}^{-1}$ Walicka et al.
(2019); Simoes et al. (2020), we obtain the characteristic dimension of the
emulsion droplet $d_{D}\sim 1-100~{}\mu\textrm{m}$, corresponding to the
minimal droplet size to observe the encapsulation or release mechanism, in
agreement with the range of characteristic droplet sizes commonly used in
topical pharmaceutical products Lu and Gao (2010); Simoes et al. (2020).
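A minimal sketch of this inversion follows. Taking $We\sim 1.4$ (the lower end of the simulated range) is an assumption made for illustration, since the text does not state which Weber number is used to invert the relation:

```python
# Hypothetical back-of-the-envelope sketch: inverting the relation
# We ~ rho_o * eps_dot^2 * L_perp^2 * d_D / gamma gives the minimal droplet
# diameter d_D. Taking We ~ 1.4 (the lower end of the simulated range) is an
# assumption made for illustration; the text does not state the We used.
gamma = 51.7e-3   # N/m
rho_o = 726.0     # kg/m^3
L_perp = 1e-2     # m, average cream thickness
We = 1.4

for eps_dot in (1e2, 1e3):   # representative shear rates, 1/s
    d_d = We * gamma / (rho_o * eps_dot ** 2 * L_perp ** 2)
    print(f"eps_dot = {eps_dot:.0e} 1/s -> d_D ~ {d_d * 1e6:.0f} um")
# eps_dot = 1e+02 1/s -> d_D ~ 100 um
# eps_dot = 1e+03 1/s -> d_D ~ 1 um
```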
Either by skin absorption or other intake paths, targeted carriers can reach
the bloodstream as required. The complexity of the flow scenarios present in
the circulatory system precludes a full description of the behaviour of our
model carrier once it enters the body. However, it is possible to put our
predictions into perspective with the transport of our model carrier in the
vascular subsystem, in particular in the pathological flow conditions
encountered in venous or arterial thrombosis Esmon (2009). The hepatic artery
in non-pathological conditions, which is representative of a large artery, has
a characteristic dimension $L_{\perp}\sim 5~{}\textrm{mm}$ and shear rate
$\dot{\epsilon}\sim 500~{}s^{-1}$ Sakariassen et al. (2015). A pathological
flow, on the other hand, can be defined as one where the blood reaches shear
rates $\dot{\epsilon}>5000~{}s^{-1}$, resulting, for example, from pathological
clotting of blood within the lumen of a vessel Herbig et al. (2018).
Considering $\rho_{\textrm{blood}}\sim 1060~{}\textrm{kg.m}^{-3}$ and
$\gamma\sim 42~{}\textrm{mN.m}^{-1}$ as representative values of the average
density and interfacial tension (against fluorocarbon) of the blood fluid
Mottaghy and Hahn (1981), along with the narrowing of the pathological vessel
$L_{\perp}\to L_{\perp}/2$, we obtain $d_{D}\sim 500~{}\textrm{nm}$ for the
minimal droplet dimension in the conditions of the hepatic artery with
pathological alterations to observe the encapsulation or release mechanism.
For comparison, we obtain $d_{D}\sim 10~{}\mu\textrm{m}$ for the minimal
droplet dimension in the conditions of the normal hepatic artery, in the range
of sizes characteristic of leucocyte and red blood cells Phillips et al.
(2008). As a result, the process of targeted-delivery of active-compounds
(such as antithrombotic agents) can be selectively controlled with the size of
the model nanocarrier.
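The same order-of-magnitude inversion, $d_{D}=We\,\gamma/(\rho\,\dot{\epsilon}^{2}L_{\perp}^{2})$, can be sketched for the hepatic artery. Taking $We$ in the simulated regime ($\sim 1.4$) is again an assumption for illustration; it reproduces the quoted $\sim 10~\mu$m (normal) and $\sim 500$ nm (pathological) values to within a factor of about two:

```python
# Order-of-magnitude sketch, d_D = We*gamma/(rho*eps_dot^2*L_perp^2), applied
# to the hepatic artery. We ~ 1.4 is an illustrative assumption (see lead-in).
gamma = 42e-3   # N/m, blood against fluorocarbon
rho = 1060.0    # kg/m^3, average blood density
We = 1.4

# (L_perp [m], eps_dot [1/s]) for normal and pathological conditions
conditions = {"normal": (5e-3, 500.0),
              "pathological": (5e-3 / 2.0, 5000.0)}   # narrowed vessel

d_min = {}
for label, (L, eps) in conditions.items():
    d_min[label] = We * gamma / (rho * eps ** 2 * L ** 2)
    print(f"{label}: d_D ~ {d_min[label] * 1e6:.2g} um")
```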
## Conclusions
The numerical simulations discussed above allowed us to unravel the interplay
between the structural morphology of armored nanodroplets and the organisation
of the NP interfacial network, when the volume of the system is reduced, in
qualitative agreement with experimental observation Datta et al. (2010). We
showed that finite-size NPs can strongly affect the droplet shape with the
formation of pocket-like depressions, which can structurally favour the
loading of a probe load. Eventually, our method would allow including specific
interactions inside the formed cavity in order to mimic, for example, protein
binding pockets or catalytic nanosurfaces.
The dynamical response of specifically designed pocket-like nanodroplets under
different shear flow conditions exhibited the formation of long-lived
anisotropic structures, characteristic of a jammed particle coverage at the
liquid-liquid interface, associated with the dynamical rearrangement of the NP
interfacial network. Most importantly, the ability of pocket-like nanodroplets
to encapsulate or release spherical solute loads, located inside the crater-
like depression, during their transport under shear-flow conditions was
validated.
Our predictions on the flow-assisted encapsulation and release of load probes
in armored nanodroplets were extended to systems in the micron scale
encountered in pharmaceutical and cosmetic technology. Noticeably, we
demonstrated that the mechanism reported in our work could be at play at
larger scales, such as those encountered in the high-shear regime of spreading
creams and ointments on the skin, and the transport of targeted carriers in
pathological alterations of the vascular system. We put the physical
properties of our model carrier into perspective within the conditions
encountered in the pathological alteration of the hepatic artery, where the
formation of a blood clot inside the blood vessel can obstruct the flow of
blood through the circulatory system increasing the haemodynamic shear stress
and the risk of bleeding complications. In particular, hepatic artery
thrombosis can be a very serious complication of liver transplantation, with
mortality in children which can be as high as $70\%$ Acharya and Sarangi
(2016). Hence, it is essential to develop distinctive means to control the
process of targeted-delivery of antithrombotic agents in the vascular system.
The physical insights discussed here provide a deeper understanding on the
potential role played by nanoparticle-stabilized emulsions in the biomimetic
design of novel hybrid materials for targeted-oriented active load delivery.
This information could be useful for a variety of applications including the
design of pharmaceutical carriers for drug delivery and pathogen
encapsulation, where knowledge of the rheological properties of the system
must be quantitatively assessed.
## Acknowledgements
F.S. acknowledges J. Reguera for fruitful suggestions and A. Striolo for
useful discussions. Via our membership of the UK's HEC Materials Chemistry
Consortium, which is funded by EPSRC (EP/L000202), this work used the ARCHER
UK National Supercomputing Service (http://www.archer.ac.uk).
## References
* Silva et al. (2012) Silva, H.; Cerqueira, M.; Vicente, A. Nanoemulsions for Food Applications: Development and Characterization. _Food Bioprocess Technol._ 2012, _5_ , 854–867.
* Aziz et al. (2019) Aziz, Z.; Mohd-Nasir, H.; Ahmad, A.; Setapar, S.; Peng, W.; Chuo, S.; Khatoon, A.; Umar, K.; Yaqoob, A.; Ibrahim, M. Role of Nanotechnology for Design and Development of Cosmeceutical: Application in Makeup and Skin Care. _Front. Chem._ 2019, _7_ , 739.
* Rosenblum et al. (2018) Rosenblum, D.; Joshi, N.; Tao, W.; Karp, J.; Peer, D. Progress and challenges towards targeted delivery of cancer therapeutics. _Nature Comm._ 2018, _9_ , 1410.
* Pickering (1907) Pickering, S. CXCVI.-Emulsions. _J. Chem. Soc._ 1907, _91_ , 2001–2021.
* Frelichowska et al. (2009) Frelichowska, J.; Bolzinger, M.-A.; Valour, J.-P.; Mouaziz, H.; Pelletier, J.; Chevalier, Y. Pickering W/O Emulsions: Drug Release and Topical Delivery. _Int. J. Pharm._ 2009, _23_ , 7–15.
* Aparicio et al. (2005) Aparicio, R.; García-Celma, M.; Vinardell, M. P.; Mitjans, M. In vitro studies of the hemolytic activity of microemulsions in human erythrocytes. _J Pharm Biomed Anal._ 2005, _39_ , 1063–7.
* Chevalier and Bolzinger (2013) Chevalier, Y.; Bolzinger, M. Emulsions stabilized with solid nanoparticles: pickering emulsions. _Colloids Surfaces A: Physicochem. Eng. Aspects_ 2013, _439_ , 23–34.
* Shi et al. (2014) Shi, D.; Faria, J.; Pham, T.; Resasco, D. Enhanced Activity and Selectivity of Fischer-Tropsch Synthesis Catalysts in Water/Oil Emulsions. _ACS Catal._ 2014, _4_ , 1944–1952.
* Faria et al. (2015) Faria, J.; Ruiz, M. P.; Resasco, D. Carbon Nanotube/Zeolite Hybrid Catalysts for Glucose Conversion in Water/Oil Emulsions. _ACS Catal._ 2015, _5_ , 4761–4771.
* Qu et al. (2017) Qu, Y.; Huang, R.; Qu, W. Q.; Su, R.; He, Z. Structural Insight into Stabilization of Pickering Emulsions with Fe3O4@SiO2 Nanoparticles for Enzyme Catalysis in Organic Media. _Part. Part. Syst. Charact._ 2017, _34_ , 1700117.
* Shen et al. (2014) Shen, X.; Bonde, J. S.; Kamra, T.; Bülow, L.; Leo, J.; Linke, D.; Ye, L. Bacterial Imprinting at Pickering Emulsion Interfaces. _Angew. Chem. Int. Ed._ 2014, _53_ , 10687–10690.
* Horváth et al. (2019) Horváth, B.; Balázs, V.; Varga, A.; Böszörményi, A.; Kocsis, B.; Horváth, G.; Széchenyi, A. Preparation, characterisation and microbiological examination of Pickering nano-emulsions containing essential oils, and their effect on Streptococcus mutans biofilm treatment. _Nature Sci. Rep._ 2019, _9_ , 16611.
* Sicard and Striolo (2016) Sicard, F.; Striolo, A. Numerical Analysis of Pickering Emulsion Stability: Insights from ABMD Simulations. _Faraday Discuss._ 2016, _191_ , 287–304.
* Ortiz et al. (2020) Ortiz, D. G.; Pochat-Bohatier, C.; Cambedouzou, J.; Bechelany, M.; Miele, P. Current Trends in Pickering Emulsions: Particle Morphology and Applications. _Engineering_ 2020, _6_ , 468–482.
* Seemann et al. (2012) Seemann, R.; Brinkmann, M.; Pfohl, T.; Herminghaus, S. Droplet based microfluidics. _Rep. Prog. Phys._ 2012, _75_ , 16601–2012.
* Orellana and Baret (2019) Orellana, L. C.; Baret, J. Rapid Stabilization of Droplets by Particles in Microfluidics: Role of Droplet Formation. _Chem. System Chem._ 2019, _1_ , 16–24.
* Groot and Warren (1997) Groot, R.; Warren, P. Dissipative Particle Dynamics: Bridging the Gap between Atomistic and Mesoscopic Simulation. _J. Chem. Phys._ 1997, _107_ , 4423–4435.
* Reguera et al. (2012) Reguera, J.; Ponomarev, E.; Geue, T.; Stellacci, F.; Bresme, F.; Moglianetti, M. Contact Angle and Adsorption Energies of Nanoparticles at the Air-Liquid Interface Determined by Neutron Reflectivity and Molecular Dynamics. _Nanoscale_ 2012, _7_ , 5665–5673.
* Giner-Casares and Reguera (2016) Giner-Casares, J.; Reguera, J. Directed self-assembly of inorganic nanoparticles at air/liquid interfaces. _Nanoscale_ 2016, _8_ , 16589–16595.
* Binks and Yin (2016) Binks, B.; Yin, D. Pickering emulsions stabilized by hydrophilic nanoparticles: in situ surface modification by oil. _Soft Matter_ 2016, _12_ , 6858–6867.
* Sicard and Striolo (2017) Sicard, F.; Striolo, A. Buckling in Armored Droplet. _Nanoscale_ 2017, _9_ , 8567–8572.
* Sicard and Striolo (2018) Sicard, F.; Striolo, A. In _Anisotropic Particle Assemblies_ ; Wu, N., Lee, D., Striolo, A., Eds.; Elsevier: Amsterdam, 2018; pp 167 – 200.
* Sicard et al. (2019) Sicard, F.; Toro-Mendoza, J.; Striolo, A. Nanoparticles Actively Fragment Armored Droplets. _ACS Nano_ 2019, _13_ , 9498–9503.
* Zhang et al. (2017) Zhang, J.; Grzybowski, B.; Granick, S. Janus Particle Synthesis, Assembly, and Application. _Langmuir_ 2017, _33_ , 6964–6977.
* Agrawal and Agrawal (2019) Agrawal, G.; Agrawal, R. Janus Nanoparticles: Recent Advances in Their Interfacial and Biomedical Applications. _ACS Appl. Nano Mater._ 2019, _2_ , 1738–1757.
* Binks and Lumsdon (2000) Binks, B.; Lumsdon, S. Influence of Particle Wettability on the Type and Stability of Surfactant-Free Emulsions. _Langmuir_ 2000, _16_ , 8622–8631.
* Luu et al. (2013) Luu, X.-C.; Yu, J.; Striolo, A. Nanoparticles Adsorbed at the Water/Oil Interface: Coverage and Composition Effects on Structure and Diffusion. _Langmuir_ 2013, _29_ , 7221.
* Fan et al. (2011) Fan, H.; Resasco, D.; Striolo, A. Amphiphilic Silica Nanoparticles at the Decane-Water Interface: Insights from Atomistic Simulations. _Langmuir_ 2011, _27_ , 5264–5274.
* Fan and Striolo (2012) Fan, H.; Striolo, A. Nanoparticle Effects on the Water-Oil Interfacial Tension. _Phys. Rev. E_ 2012, _86_ , 051610.
* Arnaudov et al. (2010) Arnaudov, L.; Cayre, O.; Stuart, M. C.; Stoyanov, S.; Paunov, V. Measuring the three-phase contact angle of nanoparticles at fluid interfaces. _Phys. Chem. Chem. Phys._ 2010, _12_ , 328–331.
* Binks and Fletcher (2001) Binks, B.; Fletcher, P. Pickering Emulsions Stabilized by Monodisperse Latex Particles: Effects of Particle Size. _Langmuir_ 2001, _16_ , 21–41.
* Jiang and Granick (2007) Jiang, S.; Granick, S. Janus balance of amphiphilic colloidal particles. _J. Chem. Phys._ 2007, _127_ , 161102.
* Khedr and Striolo (2020) Khedr, A.; Striolo, A. Self-assembly of mono- and poly-dispersed nanoparticles on emulsion droplets: antagonistic vs. synergistic effects as a function of particle size. _Phys. Chem. Chem. Phys._ 2020, _22_ , 22662\.
* Wang et al. (2011) Wang, D.; Yordanov, S.; Paroor, H.; Mukhopadhyay, A.; Li, C.; Butt, H.; Koynov, K. Probing Diffusion of Single Nanoparticles at Water–Oil Interfaces. _Small_ 2011, _7_ , 3502–3507.
* Datta et al. (2010) Datta, S.; Shum, H.; Weitz, D. Controlled Buckling and Crumpling of Nanoparticle-Coated Droplets. _Langmuir Lett._ 2010, _26_ , 18612–18616.
* Gu and Botto (2018) Gu, C.; Botto, L. Buckling vs. particle desorption in a particle-covered drop subject to compressive surface stresses: a simulation study. _Soft Matter_ 2018, _14_ , 711.
* Basavaraj et al. (2006) Basavaraj, M.; Fuller, G.; Fransaer, J.; Vermant, J. Packing, Flipping, and Buckling Transitions in Compressed Monolayers of Ellipsoidal Latex Particles. _Langmuir_ 2006, _22_ , 6605–6612.
* Evans and Morriss (1984) Evans, D. J.; Morriss, G. P. Non-Newtonian molecular dynamics. _Comput. Phys. Rep._ 1984, _1_ , 297.
* Evans and Morriss (1984) Evans, D.; Morriss, G. Nonlinear-response theory for steady planar Couette flow. _Phys. Rev. A_ 1984, _30_ , 1528.
* Lees and Edwards (1972) Lees, A.; Edwards, S. F. The computer study of transport processes under extreme conditions. _J. Phys. C_ 1972, _5_ , 1921.
* Vymetal and Vondrásek (2011) Vymetal, J.; Vondrásek, J. Gyration- and Inertia-Tensor-Based Collective Coordinates for Metadynamics. Application on the Conformational Behavior of Polyalanine Peptides and Trp-Cage Folding. _J. Phys. Chem. A_ 2011, _115_ , 11455–11465.
* Arkin and Janke (2013) Arkin, H.; Janke, W. Gyration tensor based analysis of the shapes of polymer chains in an attractive spherical cage. _J. Chem. Phys._ 2013, _138_ , 054904.
* Kaganyuk and Mohraz (2020) Kaganyuk, M.; Mohraz, A. Shear-induced deformation and interfacial jamming of solid-stabilized droplets. _Soft Matter_ 2020, _16_ , 4431\.
* Hall et al. (2013) Hall, S.; Pacek, A.; Kowalski, A.; Cooke, M.; Rothman, D. The effect of scale and interfacial tension on liquid–liquid dispersion in in-line Silverson rotor–stator mixers. _Chem. Engineering Research Design_ 2013, _91_ , 2156–2168.
* Xu et al. (2020) Xu, Z.; Wang, T.; Che, Z. Droplet deformation and breakup in shear flow of air. _Phys. Fluids_ 2020, _32_ , 052109.
* Roas-Escalona et al. (2018) Roas-Escalona, N.; Williams, Y. O.; Cruz-Barrios, E.; Toro-Mendoza, J. Intertwining Roles of the Disperse Phase Properties during Emulsification. _Langmuir_ 2018, _34_ , 6480–6488, PMID: 29758983.
* Gelfand (1996) Gelfand, B. Droplet Breakup Phenomena in Flows with Velocity Lag. _Progress Energy Combustion Sci._ 1996, _22_ , 201–265.
* Derby (2010) Derby, B. Inkjet Printing of Functional and Structural Materials: Fluid Property Requirements, Feature Stability, and Resolution. _Annu. Rev. Mater. Res._ 2010, _40_ , 395–414.
* Walicka et al. (2019) Walicka, A.; Falicki, J.; Iwanowska-Chomiak, B. Rheology of drugs for topical and transdermal delivery. _Int. J. of Applied Mechanics and Engineering_ 2019, _24_ , 179–198.
* Simoes et al. (2020) Simoes, A.; Veiga, F.; Vitorino, C. Progressing Towards the Sustainable Development of Cream Formulations. _Pharmaceutics_ 2020, _12_ , 647.
* Lu and Gao (2010) Lu, G.; Gao, P. In _Handbook of Non-Invasive Drug Delivery Systems_ ; Kulkarni, V. S., Ed.; Personal Care & Cosmetic Technology; William Andrew Publishing: Boston, 2010; pp 59 – 94.
* Esmon (2009) Esmon, C. Basic Mechanisms and Pathogenesis of Venous Thrombosis. _Blood Rev._ 2009, _23_ , 225–229.
* Sakariassen et al. (2015) Sakariassen, K.; Orning, L.; Turitto, V. The impact of blood shear rate on arterial thrombus formation. _Future Sci. OA_ 2015, _1_ , FSO30.
* Herbig et al. (2018) Herbig, B.; Yu, X.; Diamond, S. Using microfluidic devices to study thrombosis in pathological blood flows. _Biomicrofluidics_ 2018, _12_ , 042201.
* Mottaghy and Hahn (1981) Mottaghy, K.; Hahn, A. Interfacial tension of some biological fluids: A comparative study. _J. Clin. Chem. Clin. Biochem._ 1981, _19_ , 267–271.
* Phillips et al. (2008) Phillips, R.; Kondev, J.; Theriot, J. _Physical Biology of the Cell_ ; Garland Science, Taylor & Francis Group, 2008.
* Acharya and Sarangi (2016) Acharya, S. S.; Sarangi, S. N. In _Lanzkowsky’s Manual of Pediatric Hematology and Oncology (Sixth Edition)_ , sixth edition ed.; Lanzkowsky, P., Lipton, J. M., Fish, J. D., Eds.; Academic Press: San Diego, 2016; pp 279 – 333\.
* Plimpton (1995) Plimpton, S. Fast Parallel Algorithms for Short-Range Molecular Dynamcis. _J. Comput. Phys._ 1995, _117_ , 1–19.
* Groot and Rabone (2001) Groot, R.; Rabone, K. Mesoscopic Simulation of Cell Membrane Damage, Morphology Change and Rupture by Nonionic Surfactants. _Biophys. J._ 2001, _81_ , 725.
* Calvaresi et al. (2009) Calvaresi, M.; Dallavalle, M.; Zerbetto, F. Wrapping Nanotubes with Micelles, Hemimicelles, and Cylindrical Micelles. _Small_ 2009, _5_ , 2191–2198.
* X-C.Luu et al. (2013) Luu, X.-C.; Yu, J.; Striolo, A. Ellipsoidal Janus Nanoparticles Adsorbed at the Water-Oil Interface: some evidence of emergent behavior. _J. Phys. Chem. B_ 2013, _117_ , 13922–13929.
* Partington et al. (1952) Partington, J.; Hudson, R. F.; Bagnall, K. Self-Diffusion of Aliphatic Alcohols. _Nature_ 1952, _169_ , 583.
* Warren (2003) Warren, P. Vapor-liquid coexistence in many-body dissipative particle dynamics. _Phys. Rev. E: Stat. Phys., Plasmas, Fluids, Relat. Interdiscip. Top._ 2003, _68_ , 066702.
* Fan and Striolo (2012) Fan, H.; Striolo, A. Mechanistic Study of Droplets Coalescence in Pickering Emulsions. _Soft Matter_ 2012, _8_ , 9533–9538.
* Solc (1971) Solc, K. Shape of a Random-Flight Chain. _J. Chem. Phys._ 1971, _55_ , 335.
## Methods
Mesoscopic framework. The Dissipative Particle Dynamics (DPD) simulation
method Groot and Warren (1997) is implemented within the simulation package
LAMMPS Plimpton (1995). In the DPD simulations, a particle represents a
cluster of atoms rather than an individual atom. These particles interact with
each other through soft particle-particle interactions. The movement of the
particle can be realized by solving the Newton’s equation of motion
$\frac{d\mathbf{r}_{i}}{dt}=\mathbf{v}_{i}\,,~{}~{}~{}~{}~{}m_{i}\frac{d\mathbf{v}_{i}}{dt}=\mathbf{F}_{i}\,,$
(1)
where $m_{i}$, $\mathbf{r}_{i}$, $\mathbf{v}_{i}$, and $\mathbf{F}_{i}$ denote
the mass, position, velocity, and total force acting on the $i$th particle,
respectively. The total force $\mathbf{F}_{i}$ is divided into three parts,
the conservative force $\big{(}\mathbf{F}^{C}_{ij}\big{)}$, dissipative force
$\big{(}\mathbf{F}^{D}_{ij}\big{)}$, and random force
$\big{(}\mathbf{F}^{R}_{ij}\big{)}$, and defined as
$\mathbf{F}_{i}=\sum_{j\neq
i}\Big{(}\mathbf{F}^{C}_{ij}+\mathbf{F}^{D}_{ij}+\mathbf{F}^{R}_{ij}\Big{)}$
with
$\displaystyle\mathbf{F}^{C}_{ij}$ $\displaystyle=$ $\displaystyle
a_{ij}\sqrt{\omega(r_{ij})}\,\hat{\mathbf{r}}_{ij}\,,$ (2)
$\displaystyle\mathbf{F}^{D}_{ij}$ $\displaystyle=$
$\displaystyle-\Gamma\omega(r_{ij})(\hat{\mathbf{r}}_{ij}\cdot\mathbf{v}_{ij})\hat{\mathbf{r}}_{ij}\,,$
(3) $\displaystyle\mathbf{F}^{R}_{ij}$ $\displaystyle=$
$\displaystyle\sigma\sqrt{\omega(r_{ij})}\theta_{ij}\hat{\mathbf{r}}_{ij}\,$
(4)
where $\mathbf{r}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}$,
$r_{ij}=|\mathbf{r}_{ij}|$, $\hat{\mathbf{r}}_{ij}=\mathbf{r}_{ij}/r_{ij}$,
and $\mathbf{v}_{ij}=\mathbf{v}_{i}-\mathbf{v}_{j}$. The weight function
$\omega(r_{ij})$ equals $(1-r_{ij}/R_{c})^{2}$ for $r_{ij}<R_{c}$ and vanishes
beyond the cut-off distance $R_{c}$. $a_{ij}$, $\Gamma$, $\sigma$, and
$\theta_{ij}$ are the repulsive parameter, friction coefficient, noise
amplitude, and Gaussian random variable, respectively. To keep the temperature
of the system constant, $\Gamma$ and $\sigma$ satisfy the fluctuation-dissipation
theorem, $\sigma^{2}=2\Gamma k_{B}T$, where $k_{B}$ and $T$ are the Boltzmann
constant and the absolute temperature, respectively.
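The pairwise forces of Eqs. (2)-(4) can be sketched for a single pair of beads as follows. The parameter values are illustrative, not those of Table 3, and the $1/\sqrt{\delta t}$ factor in the random force is the standard Groot-Warren discretization detail, not written explicitly in Eq. (4):

```python
import numpy as np

# Minimal sketch of the DPD pairwise forces of Eqs. (2)-(4) for one bead pair.
rng = np.random.default_rng(0)

R_c = 1.0                            # cut-off distance (reduced units)
a_ij = 131.5                         # repulsive parameter
Gamma = 4.5                          # friction coefficient (illustrative)
kBT = 1.0
sigma = np.sqrt(2.0 * Gamma * kBT)   # fluctuation-dissipation theorem
dt = 0.03

r_i, r_j = np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])
v_i, v_j = np.array([0.1, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0])

r_ij = r_i - r_j
r = np.linalg.norm(r_ij)
e_ij = r_ij / r                      # unit vector \hat{r}_ij
v_ij = v_i - v_j
w = (1.0 - r / R_c) ** 2 if r < R_c else 0.0   # weight function omega(r_ij)

F_C = a_ij * np.sqrt(w) * e_ij                          # conservative, Eq. (2)
F_D = -Gamma * w * np.dot(e_ij, v_ij) * e_ij            # dissipative, Eq. (3)
theta = rng.standard_normal()                           # Gaussian random variable
F_R = sigma * np.sqrt(w) * theta * e_ij / np.sqrt(dt)   # random, Eq. (4)
F_i = F_C + F_D + F_R
```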
The system simulated here is composed of water, oil (decane), nanoparticles
(NPs) and solute molecules. Following previous work Luu et al. (2013); Sicard
and Striolo (2016, 2017); Sicard et al. (2019), we choose the degree of
coarse-graining $N_{m}=5$ with the understanding that one "water bead" (w) represents
$5$ water molecules. Within this assumption, the volume of each bead is
$V_{\textrm{bead}}\approx 150{\AA}^{3}$. The scaled density is set to $\rho=3$
beads/$R_{c}^{3}$, where $R_{c}$ is the DPD cutoff distance given as
$R_{c}=\sqrt[3]{\rho V_{\textrm{bead}}}\approx 0.766$ nm. The scaled mass of
each bead (oil, water, solute molecule, and NP beads) was set to 1. One decane
molecule is modeled as two "oil beads" (o) connected by one harmonic spring of
length $0.72$ $R_{c}$ and spring constant $350$ $k_{B}T/R_{c}$ Groot and
Rabone (2001). The size of the triclinic simulation box (initially orthogonal)
is $L_{x}\times L_{y}\times L_{z}\equiv 200\times 100\times 100$ $R_{c}^{3}$,
where $L_{x}$ (respectively $L_{y}$ and $L_{z}$) is the box length along the
$X$ (respectively $Y$ and $Z$) direction. Periodic boundary conditions are
applied in all three directions. The solute molecules and the NPs are modelled
as hollow rigid spheres, as already described in previous work Luu et al.
(2013); Sicard and Striolo (2016, 2017); Sicard et al. (2019). The hydrophobic
solute molecules are made of nonpolar DPD beads, whereas the NPs contain polar
(p) and nonpolar (ap) DPD beads on their surface Calvaresi et al. (2009). One
DPD bead was placed at the NP and solute molecule centers for convenience, as
described elsewhere Luu et al. (2013); X-C.Luu et al. (2013). All types of
beads in our simulations have reduced mass of $1$. We maintain the surface
bead density on the NPs and solute molecule sufficiently high to prevent other
DPD beads (either decane or water) from penetrating the NPs and solute
molecules X-C.Luu et al. (2013).
The interaction parameters shown in Table 3 are used here. These parameters
are adjusted to reproduce selected atomistic simulation results, as explained
in prior work Luu et al. (2013). The interaction parameters between NP polar
and nonpolar beads, as well as solute molecule beads, are adjusted to ensure
that NPs/NPs and NPs/solute are able to assemble and disassemble without
yielding permanent dimers at the water/oil interface Luu et al. (2013). The
scaled temperature was set to $1$, equivalent to $298.73$ K. The time step
$\delta t=0.03\times\tau$ was used to integrate the equations of motion, where
$\tau$ is the DPD time constant. As demonstrated by Groot and Rabone Groot and
Rabone (2001), the time constant of the simulation can be gauged by matching
the simulated self-diffusion coefficient of water, $D_{\textrm{sim}}$, with the
experimental water self-diffusion coefficient, $D_{\textrm{water}}=2.43\times
10^{-5}$ $\textrm{cm}^{2}/s$ Partington et al. (1952), calculated as
$\tau=\frac{N_{m}D_{\textrm{sim}}R_{c}^{2}}{D_{\textrm{water}}}$, as shown in
previous work Luu et al. (2013). When $a_{w-w}=131.5$ $k_{B}T/R_{c}$, this
results in a time step $\delta t=5.6$ ps.
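As a sanity check, one can invert this gauging: from the quoted $\delta t=5.6$ ps and $\delta t=0.03\,\tau$ we recover $\tau$, and inverting $\tau=N_{m}D_{\textrm{sim}}R_{c}^{2}/D_{\textrm{water}}$ gives the implied reduced self-diffusion coefficient $D_{\textrm{sim}}$ (an inferred value, not quoted in the text):

```python
# Inverse of the time-constant gauging described above.
N_m = 5              # degree of coarse-graining
R_c = 0.766e-9       # m
D_water = 2.43e-9    # m^2/s (= 2.43e-5 cm^2/s)
dt = 5.6e-12         # s

tau = dt / 0.03                            # DPD time constant, ~187 ps
D_sim = tau * D_water / (N_m * R_c ** 2)   # in reduced units of R_c^2/tau
print(f"tau ~ {tau * 1e12:.0f} ps, implied D_sim ~ {D_sim:.2f} R_c^2/tau")
```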
| $w$ | $o$ | $ap$ | $p$ | $s$
---|---|---|---|---|---
$w$ | $131.5$ | $198.5$ | $178.5$ | $110$ | $670$
$o$ | | $131.5$ | $161.5$ | $218.5$ | $161.5$
$ap$ | | | $450$ | $670$ | $450$
$p$ | | | | $450$ | $670$
$s$ | | | | | $131.5$
Table 3: DPD interaction parameters expressed in $k_{B}T/R_{c}$ units. Symbols
$w$, $o$, $ap$, $p$, and $s$ stand for water beads, oil beads, NP nonpolar
beads, NP polar beads, and solute beads, respectively.
While the DPD framework satisfies the Navier-Stokes equations in the continuum
limit Groot and Warren (1997), the traditional DPD algorithm cannot reproduce
the vapour-liquid coexistence of water at the droplet interface Warren (2003).
This is due to the DPD conservative force, which determines the thermodynamics
of the DPD system and yields the equation of state (EOS) Groot and Warren
(1997)
$p=\rho k_{B}T+\alpha a\rho^{2}~{},$ (5)
where $p$ is the pressure, $\rho$ is the number density of the DPD beads, $a$
is the repulsion strength, and $\alpha$ is a fitting parameter equal to
$0.101\pm 0.001$ in DPD reduced units Groot and Warren (1997). As shown by
Warren Warren (2003), the DPD system is unstable for $a<0$, so one is
restricted to $a\geq 0$ and therefore to strictly repulsive (conservative)
interactions. This implies that calculations such as vapor-liquid
coexistence and free-surface simulations cannot be attempted. This can be
adjusted by considering higher order terms of the density, $\rho$, in Eq. (5),
i.e. making the conservative force in Eq. (2) density dependent Warren (2003).
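Evaluating Eq. (5) at the simulation conditions gives the reduced pressure of the bulk fluid; the sketch below uses the water-water repulsion $a=131.5$ and density $\rho=3$ beads/$R_{c}^{3}$ quoted in this section (reduced units with $k_{B}T=1$):

```python
# DPD equation of state, Eq. (5), at the simulation density and repulsion.
alpha = 0.101
rho = 3.0
a = 131.5
kBT = 1.0

p = rho * kBT + alpha * a * rho ** 2
print(f"p ~ {p:.1f} (reduced units)")   # p ~ 122.5
```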
Nonequilibrium simulation. To simulate the response of the system subjected to
a homogeneous shear flow, we employ the SLLOD algorithm Evans and Morriss
(1984, 1984) coupled with the Lees-Edwards periodic boundary conditions Lees
and Edwards (1972), as implemented in the simulation package LAMMPS Plimpton
(1995). The SLLOD algorithm modifies the equations of motion in Eq. 1 as:
$\displaystyle\frac{d\mathbf{r}_{i}}{dt}$ $\displaystyle=$
$\displaystyle\mathbf{v}_{i}+\mathbf{e}_{x}\dot{\epsilon}r_{i,y}\,,$ (6)
$\displaystyle m_{i}\frac{d\mathbf{v}_{i}}{dt}$ $\displaystyle=$
$\displaystyle\mathbf{F}_{i}-m_{i}\mathbf{e}_{x}\dot{\epsilon}v_{i,y}\,,$ (7)
where $\dot{\epsilon}=\partial v_{x}/\partial r_{y}$ is the shear rate of the
external flow and $\mathbf{e}_{x,y}$ are the unit vectors along the $x$ and
$y$ directions, respectively. The velocity of the $i$th particle is divided
into two parts: the peculiar velocity $\mathbf{v}_{i}$, representing the
random thermal motion, and the streaming (shear flow) velocity
$\mathbf{e}_{x}\dot{\epsilon}r_{i,y}$, imposed by the external flow.
Specifically, we impose a linear velocity profile in the $x$
direction with a constant gradient in the $y$ direction, keeping the density
of the system constant, by changing the $xy$-tilt factor, $T_{xy}$, of the
triclinic simulation box at a constant shear rate, $\dot{\epsilon}$, as
$T_{xy}(t)=T_{xy}^{(0)}+\dot{\epsilon}~{}L_{0}~{}\Delta t\,.$ (8)
In Eq. 8, $T_{xy}^{(0)}$ and $L_{0}$ are the initial tilt factor and the
original length of the box perpendicular to the shear direction. This can be
related to the shear stress of the external shear flow
$\tau_{s}=\mu\dot{\epsilon}$, with $\mu$ the dynamic viscosity of the
continuous phase.
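The tilt-factor update of Eq. (8) amounts to a linear drift of the $xy$ tilt at constant shear rate; the periodic remapping of the tilt into $[-L_{0}/2,L_{0}/2]$ in the sketch below is a standard implementation detail assumed here, not stated explicitly in the text:

```python
# Lees-Edwards tilt-factor update, Eq. (8), with illustrative parameters.
eps_dot = 0.01   # shear rate (reduced units)
L0 = 100.0       # box length perpendicular to the shear direction
dt = 0.03
T_xy = 0.0       # initial tilt factor T_xy^(0)

for _ in range(1000):
    T_xy += eps_dot * L0 * dt
    if T_xy > 0.5 * L0:
        T_xy -= L0   # periodic remapping of the tilt (assumed convention)
```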
Three-phase contact angle. To estimate the three-phase contact angle,
$\theta_{C}$, for the NPs on the droplets, we calculate the fraction of the
spherical NP surface area that is wetted by water Fan and Striolo (2012),
$\theta_{C}=180-\arccos\Big{(}1-\frac{2A_{w}}{4\pi R^{2}}\Big{)}\,,$ (9)
where $A_{w}$ is the area of the NP surface that is wetted by water and $R$ is
the radius of the NP. The ratio $A_{w}/4\pi R^{2}$ is obtained by dividing the
number of NP surface beads (ap or p), which are wetted by water, by the total
number of beads on the NP surface. One surface bead is wet by water if a water
bead is the solvent bead nearest to it. One standard deviation from the
average is used to estimate the statistical uncertainty.
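Eq. (9) maps the wetted area fraction directly to an angle in degrees; a minimal sketch:

```python
from math import acos, degrees

def contact_angle(wetted_fraction):
    """Three-phase contact angle (degrees) from the wetted area fraction
    A_w / (4*pi*R^2), following Eq. (9)."""
    return 180.0 - degrees(acos(1.0 - 2.0 * wetted_fraction))

# A Janus NP sitting symmetrically at the interface (half of its surface
# wetted by water) gives a 90-degree contact angle:
print(round(contact_angle(0.5), 6))   # 90.0
```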
Interfacial tension. The interfacial tension $\gamma$ at the water/oil
interface as a function of the NP surface coverage $\Phi$ is calculated as Fan
and Striolo (2012); Luu et al. (2013)
$\gamma=\Bigg{\langle}P_{zz}-\frac{P_{xx}+P_{yy}}{2}\Bigg{\rangle}\frac{L_{z}}{2}\,.$
(10)
In Eq. 10, $P_{ij}$ is the $ij$ element of the pressure tensor, $L_{z}$ is the
simulation box length in the $z$ dimension, and the angular brackets denote
the ensemble average.
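Eq. (10) reduces to averaging the pressure-tensor anisotropy over snapshots; the sketch below uses synthetic illustrative pressure values, not simulation data:

```python
# Interfacial tension from Eq. (10), for a planar interface normal to z.
L_z = 100.0   # box length along z (reduced units)

snapshots = [  # synthetic (P_xx, P_yy, P_zz) samples, for illustration only
    (122.4, 122.6, 122.9),
    (122.5, 122.5, 123.1),
    (122.6, 122.4, 123.0),
]
gamma = sum(pzz - 0.5 * (pxx + pyy)
            for pxx, pyy, pzz in snapshots) / len(snapshots) * L_z / 2.0
```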
Self-diffusion coefficient. To characterize the self-diffusion coefficient of
the NPs at the water/oil interface, we estimate the mean squared displacement
(MSD) of a single NP adsorbed at a planar interface parallel to the $x-y$
plane. For each particle size, the simulated diffusion coefficient is
estimated according to
$D_{x-y}=\frac{1}{4}\Bigg{\langle}\frac{|r_{i}(t)-r_{i}(0)|^{2}}{t}\Bigg{\rangle}$
(11)
where $r_{i}(t)$ is the position of particle $i$ at time $t$ on the plane of
the interface.
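For two-dimensional interfacial diffusion, Eq. (11) follows from $\textrm{MSD}(t)=4Dt$. The sketch below verifies the estimator on a synthetic random walk with a known diffusion coefficient (illustrative; in the simulations the MSD comes from the NP trajectories):

```python
import random

# Recover D from the mean squared displacement over a lag time, Eq. (11).
random.seed(1)
D_true, t_lag, n_walkers = 0.25, 100.0, 2000
step = (2.0 * D_true * t_lag) ** 0.5   # per-axis displacement std over t_lag

msd = 0.0
for _ in range(n_walkers):
    dx = random.gauss(0.0, step)
    dy = random.gauss(0.0, step)
    msd += dx * dx + dy * dy
msd /= n_walkers

D_est = msd / (4.0 * t_lag)   # 2D estimator, averaged over walkers
```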
Gyration tensor. To measure the evolution of the structural morphology of the
emulsion droplet, we estimate the principal components of the gyration tensor
Vymetal and Vondrásek (2011); Solc (1971); Sicard and Striolo (2016), which
allow the evaluation of the overall shape of the system and reveal its
symmetry. Considering the definition for the gyration tensor,
$\mathcal{T}_{GYR}=\frac{1}{N}\begin{bmatrix}\sum x_{i}^{2}&\sum
x_{i}y_{i}&\sum x_{i}z_{i}\\\ \sum x_{i}y_{i}&\sum y_{i}^{2}&\sum
y_{i}z_{i}\\\ \sum x_{i}z_{i}&\sum y_{i}z_{i}&\sum z_{i}^{2}\end{bmatrix}\,,$
(12)
where the summation is performed over $N$ atoms and the coordinates $x$, $y$,
and $z$ are related to the geometrical center of the atoms, one can define a
reference frame where $\mathcal{T}_{GYR}$ can be diagonalized:
$\mathcal{T}_{GYR}^{diag}=\begin{bmatrix}S_{1}^{2}&0&0\\\ 0&S_{2}^{2}&0\\\
0&0&S_{3}^{2}\end{bmatrix}\,.$ (13)
In Eq. 13, we follow the convention of indexing the eigenvalues according to
their magnitude, i.e. $S_{1}^{2}>S_{2}^{2}>S_{3}^{2}$. We define the radius of
gyration $R_{GYR}^{2}\equiv S_{1}^{2}+S_{2}^{2}+S_{3}^{2}$ and the relative
shape anisotropy
$\kappa^{2}=\frac{3}{2}\frac{S_{1}^{4}+S_{2}^{4}+S_{3}^{4}}{(S_{1}^{2}+S_{2}^{2}+S_{3}^{2})^{2}}-\frac{1}{2}$,
and we calculate $R_{GYR}$ and $\kappa^{2}$ using the centers of the water
beads.
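The gyration-tensor analysis of Eqs. (12)-(13) can be sketched as follows; the two limiting cases illustrate the meaning of $\kappa^{2}$ (a rod gives $\kappa^{2}=1$, a configuration with full spherical symmetry of the tensor gives $\kappa^{2}=0$):

```python
import numpy as np

def shape_descriptors(coords):
    """Return (R_GYR, kappa^2) for an (N, 3) array of bead coordinates."""
    r = coords - coords.mean(axis=0)            # coordinates relative to center
    T = r.T @ r / len(r)                        # gyration tensor, Eq. (12)
    S2 = np.sort(np.linalg.eigvalsh(T))[::-1]   # S1^2 >= S2^2 >= S3^2
    R_gyr = np.sqrt(S2.sum())
    kappa2 = 1.5 * (S2 ** 2).sum() / S2.sum() ** 2 - 0.5
    return R_gyr, kappa2

rod = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])      # perfectly linear
octa = np.array([[1.0, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
```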
## Supporting Information: Computational design of armored nanodroplets as nanocarriers for encapsulation and release under flow conditions
## Nanoparticle characteristics
Following previous work Fan et al. (2011); Fan and Striolo (2012); Luu et al.
(2013); Sicard and Striolo (2016, 2017); Sicard et al. (2019), the
nanoparticles (NPs) are specifically designed to represent Janus silica NPs
(particles whose surface shows two distinct wetting properties) at the
decane/water interface. The NPs are modelled as hollow rigid spheres with two
different diameters, $d_{S}\sim 3R_{c}$ and $d_{L}\sim 6R_{c}$ for small and
large NPs, respectively, with $R_{c}\sim 0.766$ nm the DPD cutoff distance.
Each NP contains polar (p) and nonpolar (ap) DPD beads on its surface and one
DPD bead is placed at the NP center for convenience, as shown in Fig. S1a.
Hollow models have been used in the literature to simulate NPs, and hollow NPs
can also be synthesized experimentally Calvaresi et al. (2009). All types of
beads in our simulations have reduced mass of $1$. To cover small and large
NPs, $108$ and $432$ beads are required, respectively, yielding a surface
density of $\approx 3.8$ beads per $R_{c}^{2}$ on the NP surface Fan and
Striolo (2012). The total number of beads on one NP surface is chosen so that
the surface bead density is high enough to prevent other DPD beads
(either decane or water) from penetrating the NPs (which would be unphysical),
as explained elsewhere X-C.Luu et al. (2013). We use the
same surface density for the hydrophobic spherical probe load.
The NP-solvent interaction parameters in the DPD framework, given in the
Methods section in the main text, were originally parametrized to reproduce
the three-phase contact angle, $\theta_{c}\sim 85.3^{\circ}\pm 1.9^{\circ}$,
obtained via atomistic molecular dynamics (MD) simulations for one silica
Janus NP of diameter $\sim 2R_{c}$ at the decane/water interface, as explained
in previous work Fan et al. (2011); Fan and Striolo (2012); Luu et al. (2013).
In our case, we check that the three-phase contact angles for small and large
NPs, $\theta_{S}\sim 84.1^{\circ}\pm 2.7^{\circ}$ and $\theta_{L}\sim
86.8^{\circ}\pm 1.1^{\circ}$, respectively, as shown in Fig. S1b, are in
qualitative agreement, within the standard errors, with experimental
observations Arnaudov et al. (2010). From the error bars measured, we observe
that the small NPs are more sensitive to thermal fluctuations at the interface
compared to the large ones, consistent with the increase of the adsorption
energy with particle radius Binks and Fletcher (2001); Jiang and Granick
(2007); Khedr and Striolo (2020).
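As an illustration, the three-phase contact angle of a spherical particle straddling a flat interface can be estimated from the height of the particle centre above the interface plane, via the standard relation cos θ = h/R. The sketch below is not the production analysis code; the function name, the inputs, and the phase convention (water below the plane, angle measured through water) are assumptions for illustration:

```python
import numpy as np

def contact_angle_deg(z_center, z_interface, radius):
    """Three-phase contact angle (degrees) of a spherical particle
    straddling a planar interface at z_interface, measured through the
    lower (water) phase. Geometry: cos(theta) = h / R, with h the signed
    height of the particle centre above the interface plane."""
    h = np.clip((z_center - z_interface) / radius, -1.0, 1.0)
    return np.degrees(np.arccos(h))

# A particle whose centre sits exactly on the interface has theta = 90 deg.
print(contact_angle_deg(0.0, 0.0, 1.5))  # 90.0
```

Averaging this quantity over frames, and fitting the resulting histogram with a Gaussian, gives the means and standard deviations quoted above.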
To evaluate the diffusion of the small and large NPs at the water/oil
interface, we estimate the mean squared displacement (MSD) of a single NP
adsorbed at a planar water/oil interface parallel to the $x-y$ plane for
increasing simulation lag time, as shown in Fig. S1c. For each particle size,
the MSD is averaged over $5$ replicas conducted for $1~{}\mu\textrm{s}$ each,
and the simulated diffusion coefficient is estimated accordingly (see Methods
section in the main text). We measure $D_{S}\sim 4.7\pm 3.1\times 10^{-7}$
$\textrm{cm}^{2}~{}\textrm{s}^{-1}$ and $D_{L}\sim 1.8\pm 0.7\times 10^{-7}$
$\textrm{cm}^{2}~{}\textrm{s}^{-1}$, for small and large NPs, respectively. In
particular, large NPs are less diffusive than smaller ones, in qualitative
agreement with simulations Khedr and Striolo (2020) and experimental
observations Wang et al. (2011).
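The diffusion estimate can be sketched as follows: the in-plane MSD is averaged over all time origins, then fitted through the origin with MSD = 4Dt, the two-dimensional Einstein relation appropriate for interfacial diffusion. The function names and the synthetic random walk are illustrative, not the production analysis:

```python
import numpy as np

def msd_2d(positions, max_lag):
    """Mean squared in-plane displacement for an (n_frames, 2) trajectory,
    averaged over all time origins, for lags 0..max_lag."""
    msd = np.zeros(max_lag + 1)
    for lag in range(1, max_lag + 1):
        d = positions[lag:] - positions[:-lag]
        msd[lag] = np.mean(np.sum(d**2, axis=1))
    return msd

def diffusion_coefficient(msd, dt):
    """Fit MSD = 4*D*t (2D diffusion at the interface) through the origin."""
    t = np.arange(len(msd)) * dt
    # least-squares slope through the origin: D = sum(t*msd) / (4*sum(t^2))
    return np.sum(t * msd) / (4.0 * np.sum(t**2))

# Synthetic 2D random walk with unit per-step variance per axis:
# MSD(lag) = 2*lag, so the recovered D should be close to 0.5.
rng = np.random.default_rng(0)
pos = np.cumsum(rng.normal(0.0, 1.0, (50_000, 2)), axis=0)
D = diffusion_coefficient(msd_2d(pos, 50), 1.0)
```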
Figure S1: (a) Cross sectional view of the small (left panel) and large (right
panel) spherical NPs simulated in this work. Cyan, purple and gold spheres
represent the nonpolar (ap), polar (p), and NP center beads, respectively.
Small and large NPs are covered with 108 and 432 beads, respectively,
corresponding to a surface density of $\sim 3.8$ beads per $R_{c}^{2}$ on the
NP surface. The fractions of nonpolar and polar beads on the NP surface are
identical. (b) Probability distributions of the three-phase contact angles
$\theta_{S}$ and $\theta_{L}$ for small (S) and large (L) NPs, respectively.
The probability distributions are fitted with Gaussian distributions of means
$\mu_{S}\sim 84.1^{\circ}$ and $\mu_{L}\sim 86.8^{\circ}$, and standard
deviations $\sigma_{S}\sim 2.7^{\circ}$ and $\sigma_{L}\sim 1.1^{\circ}$, as
shown with continuous lines. (c) MSD as a function of simulation lag time for
small and large NPs measured at the water/oil planar interface.
## Formation of pocket-like structures
The number of water beads constituting the initial water-in-oil emulsion
droplets is fixed to $N_{W}\approx 3\times 10^{5}$. At the beginning of each
simulation, the solvent (oil) beads are uniformly distributed within the
simulation box. One water droplet of radius $\approx 32~{}R_{C}$ is generated
by replacing the oil beads with water beads within that spherical volume. A
number of spherical NPs are placed randomly at the water-decane
interface with their polar (nonpolar) part in the water (oil) phase to achieve
the desired water-decane interfacial area per NP. The initial configuration
obtained is simulated for $10^{6}$ timesteps in order to relax the density of
the system and the contact angle of the NPs on the droplet. The system
pressure and the three-phase contact angle distributions converged after 5000
simulation steps. Then, we let the system run for an additional $2\times
10^{6}$ timesteps to generate two new initial configurations, which allows us
to test the reproducibility of the simulations.
To study the surface mechanical instabilities and the collapse mechanisms
responsible for the formation of the crater-like depressions at the droplet
interface, we follow the numerical protocol discussed by Sicard et al. in
previous work Sicard and Striolo (2017). The surface area of the droplets is
slowly diminished by randomly pumping a constant proportion, i.e. $10$
percent, of the water molecules out of the droplet and letting the system
pressure and the three-phase contact angle distribution equilibrate at
constant density. By slowly, we mean that we do not create any hollow volume
in the droplet, which would strongly drive the system out of equilibrium. In
doing so, the three-phase contact angle distribution of the NPs evolves
sufficiently smoothly when the droplet buckles and becomes nonspherical,
thereby preventing particles from being artifactually released. This
numerical protocol is comparable to a
solubilization experiment, where the dispersed phase is slightly soluble in
the continuous phase Datta et al. (2010). By adding a fixed amount of
unsaturated continuous phase, the volume of the droplets can then be
controllably reduced.
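One pumping step of this protocol can be sketched as the random removal of a fixed fraction of the water beads; the re-equilibration of pressure and contact angle between steps, which is the essential part of the DPD protocol, is not shown, and the function name is illustrative:

```python
import numpy as np

def shrink_droplet(water_ids, fraction=0.10, rng=None):
    """Remove a random `fraction` of the water beads, mimicking one pumping
    step of the slow volume-reduction protocol (the removed beads would be
    deleted from the simulation before re-equilibration)."""
    rng = rng or np.random.default_rng()
    n_remove = int(round(fraction * len(water_ids)))
    removed = rng.choice(water_ids, size=n_remove, replace=False)
    return np.setdiff1d(water_ids, removed)

# Five pumping steps reduce N_W by a factor (0.9)^5 ~ 0.59:
ids = np.arange(300_000)
for _ in range(5):
    ids = shrink_droplet(ids, 0.10)
print(len(ids))  # 177147
```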
To study quantitatively the transition from spherical shrinking to buckling in
the uniformly covered droplets, $\textrm{UC}_{S}$ and $\textrm{UC}_{L}$, we
follow the evolution of the radial distribution functions, $g_{S}(r)$ and
$g_{L}(r)$, with $r$ the distance between the center of the NPs, along with
the distributions of the three-phase contact angles, $\theta_{S}$ and
$\theta_{L}$, of small and large NPs, respectively. In Fig. S2a, we show the
evolution of $g(r)$, as a function of the dimensionless parameter $\Delta
N_{W}$ defined in the main text, for $\textrm{UC}_{S}$ (blue) and
$\textrm{UC}_{L}$ (red). Unlike $\textrm{UC}_{S}$ where the first peak in
$g(r)$ is already present for $\Delta N_{W}\sim 0.8$ and increases
significantly as $\Delta N_{W}$ decreases, we observe the appearance of the
first peak in $g(r)$ for $\textrm{UC}_{L}$ at a later stage ($\Delta N_{W}\sim
0.72$). This peak also grows significantly more slowly as $\Delta N_{W}$
decreases. This behaviour is representative of the difference in NP
interfacial packing as a function of the NP size, with a transition from
spherical shrinking to buckling happening when the NP monolayer becomes close
to its maximum packing. When the volume of $\textrm{UC}_{S}$ and
$\textrm{UC}_{L}$ is reduced, $\theta_{C}^{(S)}$ and $\theta_{C}^{(L)}$
uniformly evolve from a Gaussian to a skewed unimodal distribution, as shown
in Fig. S2b for $\textrm{UC}_{L}$, in line with previous work Sicard and
Striolo (2017). When the volume of $\textrm{HC}_{1}$ or $\textrm{HC}_{2}$ is
reduced, on the other hand, we observe significant differences in the
evolution of the distributions of $\theta_{C}^{(S)}$ and $\theta_{C}^{(L)}$,
due to heterogeneity in NP size and surface coverage, as shown in Fig. S2c. In
particular, the distribution of $\theta_{C}^{(L)}$ remains similar to the
Gaussian distribution observed in the initial configuration (continuous line),
while the distribution of $\theta_{C}^{(S)}$ shows larger variability,
characterized by an increased asymmetry of the distribution towards lower
values of $\theta_{S}$.
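A minimal sketch of the radial distribution function used in this analysis, written here for particle centres in a cubic periodic box with the minimum-image convention (the production analysis may differ in binning and boundary handling):

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins=100):
    """Pair radial distribution function g(r) for particle centres in a
    cubic periodic box of side `box` (minimum-image convention)."""
    n = len(positions)
    rho = n / box**3                          # mean number density
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum image
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    # each pair is counted once, hence the factor 2/n in the normalisation
    return edges[:-1] + np.diff(edges) / 2, 2.0 * counts / (n * rho * shell)
```

For an ideal (uniform random) configuration this returns g(r) ≈ 1 at all separations; structuring of the NP monolayer appears as peaks above 1.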
Figure S2: (a) Evolution of the NP radial distribution function, $g(r)$, as a
function of the dimensionless parameter $\Delta N_{W}$, defined in the main
text, when the droplet is uniformly covered with small (top panel) and large
(bottom panel) NPs. (b) Probability distribution of the three-phase contact
angle of large NPs, $\theta_{L}$, at the interface of $\textrm{UC}_{L}$, when
$\Delta N_{W}\sim 0.35$. The initial Gaussian distribution, fitted with
continuous line, is shown for comparison. (c) Probability distributions of the
three-phase contact angle of $\theta_{S}$ and $\theta_{L}$, for small and
large NPs, respectively, at the interface of $\textrm{HC}_{1}$, when $\Delta
N_{W}\sim 0.35$. The initial Gaussian distributions, fitted with continuous
lines, are shown for comparison.
## Evolution of the structural morphology of the droplets under flow
conditions
As explained in detail in the main text and the Methods section, we
investigate the dynamical response of the buckled armored nanodroplets
$\textrm{HC}_{1}$ subjected to shear flow of the surrounding fluid, using the
SLLOD algorithm Evans and Morriss (1984) coupled with Lee-Edwards
periodic boundary conditions Lees and Edwards (1972). The changes in the
structural morphology of the system are characterized with the elongation of
the nanodroplet along the deformation axis $x$, and the squeezing of the
crater-like depression along the orthogonal $z$-direction, as shown in Fig.
S3a. In Fig. S3b, we show the probability distribution of the three-phase
contact angle, $\theta_{C}^{(S)}$, for small NPs, at the interface of the
structures $\textrm{HC}1_{a,b,c}$ defined in the main text. Within the range
of shear rates considered in this work, $\theta_{C}^{(S)}$ shows a skewed
unimodal distribution with a central peak located at the same value as the one
measured for both the initial and buckled configurations (shown with
continuous lines).
Figure S3: (a) Representative cross-view (top) and side-view (bottom) of
$\textrm{HC}1_{a,b,c}$ (from left to right) obtained after the relaxation of
the system ($t\sim 1.2~{}\mu s$). Cyan and purple spheres represent the small
and large Janus NPs, respectively. Pink spheres represent water beads. The oil
molecules surrounding the system are not shown for clarity. (b) Corresponding
distributions of the three-phase contact angle, $\theta_{C}^{(S)}$, for small
NPs, at the interface of the structures $\textrm{HC}1_{a,b,c}$.
# Old and New Major Mergers in the SOSIMPLE galaxy, NGC 7135
Thomas A. Davison,1,2 Harald Kuntschner,1 Bernd Husemann,3 Mark A. Norris,2
Julianne J. Dalcanton,4 Alessandra De Rosa,5 Pierre-Alain Duc,6 Stefano
Bianchi,7 Pedro R. Capelo,8 Cristian Vignali9,10
1European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching
bei Muenchen, Germany
2Jeremiah Horrocks Institute, University of Central Lancashire, Preston PR1
2HE, UK
3Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg,
Germany
4Department of Astronomy, University of Washington, Box 351580, Seattle, WA
98195, USA
5INAF - Istituto di Astrofisica e Planetologie Spaziali, Via Fosso del
Cavaliere, 00133 Rome, Italy
6Université de Strasbourg, CNRS, Observatoire astronomique de Strasbourg
(ObAS), UMR 7550, 67000 Strasbourg, France
7Dipartimento di Matematica e Fisica, Università degli Studi Roma Tre, via
della Vasca Navale 84, I-00146 Roma, Italy
8Center for Theoretical Astrophysics and Cosmology, Institute for
Computational Science, University of Zurich,
Winterthurerstrasse 190, CH-8057 Zürich, Switzerland
9Dipartimento di Fisica e Astronomia, Alma Mater Studiorum, Università degli
Studi di Bologna, Via Gobetti 93/2, I-40129 Bologna, Italy
10INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via
Gobetti 93/3, I-40129 Bologna, Italy E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The simultaneous advancement of high resolution integral field unit
spectroscopy and robust full-spectral fitting codes now make it possible to
examine spatially-resolved kinematic, chemical composition, and star-formation
history from nearby galaxies. We take new MUSE data from the Snapshot Optical
Spectroscopic Imaging of Mergers and Pairs for Legacy Exploration (SOSIMPLE)
survey to examine NGC 7135. With counter-rotation of gas, disrupted kinematics
and asymmetric chemical distribution, NGC 7135 is consistent with an ongoing
merger. Though well hidden by the current merger, we are able to distinguish
stars originating from an older merger, occurring 6-10 Gyr ago. We further
find a gradient in ex-situ material with galactocentric radius, with the
accreted fraction rising from 0% in the galaxy centre, to $\sim$7% within 0.6
effective radii.
###### keywords:
galaxies: interactions – galaxies: evolution – galaxies: stellar content
††pubyear: 2020††pagerange: Old and New Major Mergers in the SOSIMPLE galaxy,
NGC 7135–References
## 1 Introduction
Galaxy merger research has shown how fundamental merging is to galaxy
evolution, with historical merger rates generally increasing with galaxy mass
(Bundy et al., 2009; Schawinski et al., 2010; L’Huillier et al., 2012;
Pillepich et al., 2018). Distant galaxies (z$\approx$2) are often quoted as
being a factor of 2-5 times smaller than those found locally (Daddi et al.,
2005; Van Dokkum et al., 2008; Saracco et al., 2009). As such, it is widely
assumed that a large amount of mass assembly after z$\approx$2 is a result of
hierarchical growth through galaxy mergers and accretion, an assumption widely
corroborated by galaxy evolution models. Not only does merger history impact
almost all other aspects of galaxy evolution, but many galaxies have
experienced large mergers throughout their history with around 50% of galaxies
experiencing a major merger (Maller et al., 2006), and essentially all
surviving galaxies experiencing minor mergers, with frequency increasing with
merger mass-ratio (Lotz et al., 2011). The exceptions are some
rare pristine galaxy types ($\lesssim$ 0.1% of galaxies according to Quilis &
Trujillo 2013) which have likely experienced no outside interaction or
accretion events (Trujillo et al., 2013).
Modelling is an excellent way to delve into the mechanics and subsequent
effects of galaxy mergers. Using simulations, the ex-situ mass fraction of
accreted galaxies has been explored in depth (Pillepich et al., 2015a; Qu et
al., 2017; Davison et al., 2020). This is useful for defining expected current
merger rates to be compared to observationally. A challenging aspect of
observational astronomy is demonstrating the merger history of observed nearby
galaxies to verify these models, particularly if potential mergers occurred
several Gyr ago.
Integral Field Spectroscopy has proven particularly useful in exploring galaxy
kinematics and populations. Integral Field Units (IFUs) have provided
spatially resolved maps of galaxies which can be used to diagnose population
differences and kinematic effects resulting from mergers. This has been shown
to be effective in numerous observational cases (see e.g. Guérou et al., 2016;
Faifer et al., 2017; Ge et al., 2019).
The impact of mergers and merger history on galaxy evolution is an important
aspect to understand. For one thing, mergers are known to drive gas towards
the galaxy centre (Mihos & Hernquist, 1995), causing AGN activity and black
hole growth, which in turn can shut down or suppress star formation in the
galaxy at large (Cales & Brotherton, 2015; Choi et al., 2015). On the other
hand, mergers can cause sudden and significant bursts of star formation due to
the disruption of previously unperturbed gas kinematics (Di Matteo et al.,
2008; Ellison et al., 2013; Moreno et al., 2015; Capelo et al., 2015).
Disruption in the gas kinematics of galaxies can leave key fingerprints in
identification of merger events. One of the most readily identifiable features
of a recent or ongoing merger is counter rotating components, with up to 40%
of S0 galaxies displaying signatures of counter-rotation (Rubin, 1994; Davis
et al., 2011; Coccato et al., 2015; Bassett et al., 2017). Galaxy-galaxy
mergers of the right combination can change the very morphological type of a
galaxy. As such, mergers hold the power to define entire galaxy futures.
The S01-pec galaxy NGC 7135 (AM 2146–350, IC 5136) in the constellation of
Piscis Austrinus is a merger remnant galaxy (Keel, 1985a) that is likely en
route to forming an S0 galaxy. It currently displays several immediately
striking visual features including an extended tail, shell features, and
curved structure (Figure 1) based on photometry from the Carnegie-Irvine
Galaxy Survey (Ho et al., 2011).
NGC 7135 was first described as having ‘a curious jet and shell’ in Malin &
Carter (1983) with the ‘jet’ later shown to be a tail in Rampazzo et al.
(2003). The shell structures of the galaxy were found to be particularly clear
in UV (Rampazzo et al., 2007; Marino et al., 2011), with FUV gas structure
further linked to an accretion event that also likely formed the shells. Ueda
et al. (2014) found CO emitting gas that was unassociated with the nucleus,
along with 3 mm continuum associated with the nucleus. Despite speculation,
NGC 7135 was determined to have no active nucleus as shown in Zaw et al.
(2009) through optical spectra analysis. Analysis in Keel (1985b) identifies
NGC 7135 as a merger galaxy, and in Rampazzo et al. (2003) NGC 7135 is shown
to possess an elongated, asymmetric gas structure relative to the stellar
material.
The local environment of NGC 7135 is described by Samir et al. (2016) as being
‘low density’, with the classification of ‘low density’ (Annibali et al.,
2010) a result of the richness parameter $\rho_{xyz}$=0.32 gal Mpc-3 (Tully &
Fisher, 1988). Early type galaxies in low density environments are known to
possess on average younger populations ($\sim$ 2Gyr younger) than similar
galaxies in higher density environments (Thomas et al., 2003), a likely result
of more recent mergers and star formation.
In this work we present new observations of the galaxy NGC 7135, recently
obtained with MUSE. We aim to show that NGC 7135 is currently undergoing a
major merger, with a history of older mergers underlying in the galaxy
populations. The paper is presented as follows: In Section 2 we describe the
motivation behind the observations, as well as the data reduction and
limitations. In Section 3 we describe our methodology, including the use of
regularisation during spectral fitting. In Section 4 we present the resultant
maps of stellar populations and kinematics, as well as gas properties
similarly derived, including rotation differences between the two components.
In Section 5 we discuss the implications of the results and finally in Section
6 we provide a summary and concluding remarks.
## 2 Observations and data reduction
We observed NGC 7135 with the Multi Unit Spectroscopic Explorer (MUSE, Bacon
et al., 2010, 2014) at the Very Large Telescope (VLT) as part of the Snapshot
Optical Spectroscopic Imaging of Mergers and Pairs for Legacy Exploration
(SOSIMPLE) survey (Program ID: 0103.A-0637(A), PI: B. Husemann). The aim of
the SOSIMPLE survey is to provide complementary IFU observations for an
ongoing Hubble filler gap snapshot imaging program (Program ID: 15446, PI: J.
Dalcanton). HST imaging of NGC 7135 is not yet taken due to the filler nature
of the HST program, thus these MUSE observations act as a first look at the
data, to which HST data can be compared to at a later date. Combining IFU
spectroscopy with a large set of high-quality ancillary data will hopefully
provide observational and theoretical insights into the evolution of merging
systems.
The MUSE observations were conducted on 6 July 2019 during dark sky conditions
and split into 3$\times$560 s dithered pointings along with a 300 s dedicated
blank sky field exposure for background subtraction of this extended galaxy.
Rotations of 90° were applied between exposures covering approximately 3.4
arcmin2 as shown in Fig 1. The seeing during the observations remained at
$\sim$1″ , and the sky was covered with thin clouds, with strong winds from
the north-west.
The data were reduced with the standard ESO pipeline (Weilbacher et al., 2020)
which performs detector calibrations, flat-fielding, wavelength calibration,
flux calibration as well as sky subtraction, exposure alignment, and cube
reconstruction of the combined exposures. We performed an additional
correction for residual sky lines using a simple PCA algorithm. The MUSE pixel
scale is 0.2 arcsec pixel-1, with a mean spectral resolution of $\sim$2.5Å
though this can vary across the wavelength range (see figure 5 of Husser et
al. 2016). The resulting mean Signal-to-Noise (SN) ratio of the spaxels in the
MUSE image within a wavelength range of 4759–6849 Å (limited from 4759–9300 Å)
is 9.5, with a maximum spaxel SN of 131.
## 3 Methodology
Spaxels were Voronoi binned to a minimum SN of 50 per Å, so that poor-signal
regions could be included in the analysis while higher-SN spaxels remained
unbinned. This optimally allowed for spatial investigation of spectral
properties without losing valuable high-resolution data at high-SN locations.
The wavelength range was restricted to 4759 - 6849 Å for all spaxels to ensure
the strongest Balmer lines were included, and to exclude noisier sky-dominated
regions at redder wavelengths. All spectra of spaxels within a bin were summed
into a single spectrum representing the area covered by the bin. An area
containing a foreground star was masked from analysis in the West of the image
(see Figure 1).
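The co-adding step can be sketched as follows, assuming the per-spaxel bin assignments have already been computed (e.g. with the vorbin implementation of Voronoi binning by Cappellari & Copin). Noise is propagated in quadrature, so for uniform data the combined S/N grows as the square root of the number of spaxels in a bin; the function name and array layout are illustrative:

```python
import numpy as np

def coadd_bins(spectra, noise, bin_num):
    """Sum spaxel spectra within each Voronoi bin and propagate the noise.

    spectra : (n_spaxels, n_wave) flux array
    noise   : (n_spaxels, n_wave) 1-sigma noise array
    bin_num : (n_spaxels,) bin index per spaxel (e.g. from vorbin)

    Returns co-added spectra, their noise (added in quadrature), and the
    mean S/N per bin.
    """
    bins = np.unique(bin_num)
    summed = np.array([spectra[bin_num == b].sum(axis=0) for b in bins])
    var = np.array([(noise[bin_num == b] ** 2).sum(axis=0) for b in bins])
    sigma = np.sqrt(var)
    sn = np.mean(summed / sigma, axis=1)
    return summed, sigma, sn
```

For example, two spaxels each at S/N = 5 combine to S/N = 5√2 ≈ 7.1, which is why binning lifts poor-signal regions above the analysis threshold.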
To analyse the spectra from the binned NGC 7135 data we utilised the Penalized
PiXel-Fitting (pPXF) method, described in Cappellari & Emsellem (2004) and
upgraded in Cappellari (2017). With this method, single-age single-metallicity
stellar population (SSP) models are fit to spectra to build a map of stellar
populations across age and metallicity space. By identifying the combination
of SSP models that approximate a given spectrum, the estimated constituent
populations are extracted, as well as velocity and dispersion. Stellar models
are weighted as per the estimated fraction of the population present in the
galaxy. As a result, output weights of stellar models indicate the fractions
of specific stellar populations present in the spectrum. The output model of
combined spectra is made more physical by the use of template regularisation
(see e.g. section 3.5 of Cappellari 2017), the methodology of which is
explained in detail below. Standard pPXF cleaning algorithms were included to
mask emission lines where necessary.
A total of 552 MILES SSP models (Vazdekis et al., 2010) were used to fit to
galaxy spectra. These models used a Kroupa revised initial mass function (log
slope of 1.3, Mmax=100M⊙) using BaSTI isochrones, with a metallicity range of
-2.27 to +0.4 [M/H] in 12 non-linear steps, and an age range of 0.1 to 14.0
Gyr in 46 non-linear steps Kroupa (2001); Cassisi et al. (2006); Pietrinferni
et al. (2006); Falcón-Barroso et al. (2011); Vazdekis et al. (2012).
Application of regularisation allows smoothing over stellar model weights to
reproduce a population map consistent with physical results. The weighted
templates that have been combined to produce a target spectrum will often be
unphysically localised to only the strongest of possible solutions, with many
other valid solutions being overlooked, despite their physicality. To produce
more representative distributions, regularisation seeks to smooth the
solutions to a physical state. The challenge is to smooth the template weights
to a solution that most accurately represents observed conditions, whilst not
overlooking genuine fluctuations and details present in the model-fit. The
regularisation parameter controls the strength of the smoothing and is deduced
through a robust iterative approach for each spectrum individually. The
regularisation parameter is derived such that it corresponds to the maximum
value consistent with observations. Thus the derived star formation history
will be the smoothest that is consistent with the observations. This has been
shown in literature to be an accurate and useful method of galaxy population
extraction (see e.g. Comerón et al., 2015; Norris et al., 2015; Guérou et al.,
2016; Faifer et al., 2017; Ge et al., 2019; Boecker et al., 2020).
In this work an iterative routine is applied to extract the optimal
regularisation parameter. For the best possible fit, the $\chi^{2}$ of the
solution is expected to be approximately equal to the number of available
voxels in the spectrum, $N$ (i.e. the number of voxels available after any
masking). To obtain this optimal solution, the $\chi^{2}$ must be increased
from the unregularised $\chi^{2}$ (referred to as $\chi^{2}_{0}$) by
$\sqrt{2N}$.
After rescaling noise from the unregularised solution such that
$\frac{\chi^{2}}{N}$ = 1, we make a number of primary guesses at the
regularisation parameter. We find the $\Delta\chi^{2}$ of these initial
guesses and fit a function to the input regularisation guesses and output
$\Delta\chi^{2}$ values. By doing so we can precisely find the optimal
regularisation parameter such that $\chi^{2}=\chi^{2}_{0}+\sqrt{2N}$. This
action is performed for every bin, resulting in optimal solutions across the
entire image map.
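The parameter search described above can be sketched as a one-dimensional root find on the monotonically increasing χ²(regul) curve. In practice the callable would wrap a full pPXF fit (its `regul` keyword) with the noise pre-scaled so that χ²/N = 1 at zero regularisation; here a simple bisection version with illustrative defaults:

```python
import numpy as np

def optimal_regularisation(chi2_of_regul, n_pixels, regul_lo=0.0,
                           regul_hi=1e4, tol=1e-3):
    """Find the regularisation parameter at which chi^2 exceeds the
    unregularised chi^2 by sqrt(2N), as prescribed in the text.

    chi2_of_regul : callable returning chi^2 for a given regul value
                    (in practice, a pPXF fit with rescaled noise)
    n_pixels      : number of unmasked spectral pixels N
    """
    chi2_0 = chi2_of_regul(regul_lo)
    target = chi2_0 + np.sqrt(2.0 * n_pixels)
    lo, hi = regul_lo, regul_hi
    while hi - lo > tol:                  # bisection on a monotone curve
        mid = 0.5 * (lo + hi)
        if chi2_of_regul(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is robust but slow when each evaluation is a full spectral fit; the routine used in the paper instead fits a function to a few initial guesses, which achieves the same target χ² with fewer fits.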
Figure 1: A colour image of NGC 7135 showing the MUSE cube footprint.
Photometry of NGC 7135 is from the Carnegie-Irvine Galaxy Survey (Ho et al.,
2011). The blue border shows the boundaries of the reduced MUSE IFU data used
in this study. A green circle traces an area containing a bright foreground
star that was entirely excluded from the analysis.
## 4 Results
We separate the analysis of NGC 7135 into three components; the stellar
component analysis, encompassing the stellar kinematics; the gaseous component
analysis, encompassing gas kinematics, emission lines and star formation
aspects; and the population analysis, examining the various stellar
populations and the resulting implications for the assembly history of NGC
7135.
To examine the stellar component we utilise Voronoi binning as described in
Section 3. From this we are able to examine the stellar rotation and bulk
velocities, as well as mean age and metallicities spatially across the galaxy
(Fig 2). To investigate details related to the gaseous component we use
regular binning to view the gas velocities and rotation, as well as the line
strengths of H$\alpha$ and H$\beta$ (Fig 3). Though we see reasonable amounts
of H$\alpha$ emission, there is scant evidence for significant ongoing star
formation. This is explained in detail in Section 4.2. Finally, in Section 4.3
we further analyse age and metallicity distributions for sampled regions
across the galaxy to diagnose assembly history and current merger status, then
go on to examine underlying metal poor populations in Section 4.4.
### 4.1 Stellar Properties
Application of the pPXF method to the NGC 7135 data cube provides mean
kinematic properties which are extracted from each bin. Demonstrations of this
for velocity and velocity dispersion of the galaxy are found in the top panels
of Figure 2. Application of regularisation and mass-to-light ratios produce
maps of the constituent stellar populations within each bin of the galaxy.
From these bins we can derive mean mass-weighted stellar age and metallicity
values, as demonstrated in the lower panels of Figure 2.
Figure 2: Voronoi map of NGC 7135 showing 4 different stellar kinematic or
mass-weighted population properties. The top left panel shows the mean
velocity in km/s for each bin. The top right panel shows mean velocity
dispersion within bins in km/s. The lower left panel shows the mean age of
populations within the bin in Gyr. Finally the lower right panel shows mean
metallicity within each bin. North is to the top of the image, and East is to
the left. The stars show clear rotation in the centre. Velocity dispersion,
age and metallicity all increase towards the galaxy centre. Distinct
kinematics and metallicity south of the centre highlight a distinct component.
The stellar kinematic, age, and metallicity maps of NGC 7135 reveal much about
the galaxy. Stellar rotation is immediately visible. This is of key interest
when comparing to gas which rotates counter to the direction of stellar
rotation. This is explored in detail in Section 4.2. One prominent kinematic
feature, perhaps most clearly seen in the velocity map (top left panel) of
Figure 2, is an arc of incongruous material at higher than average velocity,
stretching from the South West of the Figure to the West. The Southern end of
this arc is matched in the metallicity map (lower right panel, Figure 2) by a
higher metallicity region, which is also distinct in velocity and velocity
dispersion. Upon inspection, this is revealed to be an infalling galaxy
currently merging onto NGC 7135. This can be clearly seen in photometry shown
in Figure 6, and even more compelling evidence comes from population analysis
below.
### 4.2 Gas Properties
Figure 3: Regularly binned map of NGC 7135 showing 4 different gas kinematic
and strength properties. The top left panel shows the mean velocity of gas in
km/s for each bin. The top right panel shows mean velocity dispersion of gas
within bins in km/s. The lower left panel shows the H$\alpha$ flux throughout
NGC 7135. The scale has been limited from the true maximum to better display
regions of intermediate strength. This limits the core from a true strength of
at most 36.2$\times$10-16erg/s/cm2 (limited to 2.5$\times$10-16erg/s/cm2). The
lower right panel shows H$\beta$ flux throughout NGC 7135. The scale has been
limited from the true maximum to better display regions of intermediate
strength. This limits the core from a true strength of at most
5$\times$10-16erg/s/cm2 (limited to 2.1$\times$10-16erg/s/cm2). The gas
velocity shows counter rotation compared to the stellar component, and on a
slightly different axis, suggesting a merger origin. Figure 4: Regularly
binned and zoomed-in map of NGC 7135 showing 4 different gas kinematic and
strength properties. The top left panel shows the mean velocity of gas in km/s
for each bin. The top right panel shows mean velocity dispersion of gas within
bins in km/s. The lower left shows the H$\alpha$ flux throughout NGC 7135. The
scale has been limited from the true maximum to better display regions of
intermediate strength. This limits the strongest emission near the core from a
true strength of at most 36.2$\times$10-16erg/s/cm2 (limited to
2.5$\times$10-16erg/s/cm2). The lower right panel shows H$\beta$ flux
throughout NGC 7135. The scale here has also been limited. This limits the
strongest emission from a true strength of at most 5$\times$10-16erg/s/cm2
(limited to 2.1$\times$10-16erg/s/cm2). In the upper left panel, arrows show
the average positive rotation direction. The solid arrow indicates the average
stellar component positive rotation whilst the dotted arrow shows the average
gas positive rotation direction. Shaded regions show the standard deviation of
vectors for both components for bins of 0.1 effective radii. In the lower left
panel, contours show integrated CO(J=1–0) emission detected in ALMA
observations (Ueda et al., 2014). Contours show the 0.8, 1.0 and 1.2 Jy km s-1
levels. There is pervasive H$\alpha$ emission with a high luminosity and high
velocity dispersion component in the centre, though there is little evidence
of star formation.
To explore gas kinematics and distribution in NGC 7135, regular binning was
employed to avoid biases caused by the stellar light controlling Voronoi
binning. Large square bins containing 64 pixels were selected across the face
of the data cube, and spectra within a given bin were summed and analysed with
pPXF as described in Section 3. Following this, those bins with signal-to-
noise that exceeded the minimum detection threshold were re-binned to a higher
resolution. This adaptive ‘zoom’ binning gave high resolution in areas of
strong H$\alpha$ emission. The zoom resolution was limited to central regions
of the galaxy, where the finest detail was required.
NGC 7135 displays localised areas of strong Balmer emission, shown in Figure 3
with a cropped version showing the galaxy centre in Figure 4. As seen from all
panels, the gas is asymmetric in distribution as well as in kinematics. The
rotation of the gas highlights the decoupled nature of the stellar material in
the core.
Gas is counter-rotating to the stellar component, strongly indicating a
disrupted system. A slight deviation to the coherent gas movement is seen in
the galaxy centre, giving an ‘S’ shaped gas rotation profile. Counter rotation
has long been associated with galaxy mergers (see e.g. Bertola et al., 1988).
Total decoupling of gas rotation from stellar components as a result of
prograde-prograde merger shocks has been shown in simulation in Capelo & Dotti
(2017), and a similar event appears to be in play here, wherein a major merger
has resulted in counter-rotation of the gas component. Plausibly, this is the
result of a previous prograde-prograde merger providing the counter-rotation;
this is expanded upon in Section 4.3. Alternatively,
counter rotation could have arisen as a result of a first pass of the
currently infalling galaxy.
Velocity vectorisation of the gas and stars allows us to measure the gas and
stellar rotation misalignment. The gas rotation is fairly regular, with the
gas rotating coherently around the centre. In the stellar component
however, matters are complicated by the velocity of the in-falling galaxy,
which shifts the positive rotation vector compared to the core. If we consider
only the core, the misalignment of gas and stars is $176^{\circ}$, whereas when the
entire cube is considered, the misalignment is $139^{\circ}$. This is entirely within
the realm of expected values for an interacting galaxy (see e.g. Barrera-
Ballesteros et al., 2015; Bryant et al., 2019). This is shown in Figure 4 as
solid and dashed arrows for the directions of mean positive stellar and gas
rotation respectively, with associated errors shown as shaded regions.
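The misalignment measurement above amounts to differencing the mean stellar and gas kinematic position angles with a wrap into [0, 180] degrees, a common convention. The sketch below is illustrative only: the input position angles are hypothetical values chosen to reproduce the quoted misalignments, not the measured angles.

```python
def kinematic_misalignment(pa_stars_deg, pa_gas_deg):
    """Misalignment between stellar and gas kinematic position angles,
    wrapped into [0, 180] degrees. Counter-rotation appears as a
    misalignment approaching 180 degrees."""
    delta = abs(pa_stars_deg - pa_gas_deg) % 360.0
    return 360.0 - delta if delta > 180.0 else delta

# Hypothetical position angles reproducing the quoted misalignments:
print(kinematic_misalignment(10.0, 186.0))   # 176.0 (core-only value)
print(kinematic_misalignment(350.0, 129.0))  # 139.0 (full-cube value)
```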
Regions of H$\alpha$ emission can be seen in the southern areas of the lower
left panel of Figure 3. This forms a large arc with patches exhibiting
particularly strong emission. These are seemingly matched by arcs in the north
in an asymmetrical manner.
Considering the gas asymmetry and the increase in both gas velocity and
velocity dispersion, a large amount of gas can be attributed to material
stripped from the outskirts of the infalling galaxy and which is currently in
the process of accreting onto the host galaxy. This is seen in the largest
area of gas velocity dispersion occurring outside the core, located in a tight
region south of the galaxy core. This region indicates a quantity of gas that
is not associated with the native gas of NGC 7135, marking a region where
infalling gas is interacting with the galaxy's interstellar medium. This
area of higher than expected dispersion is in the plane of the galaxy gas
rotation, again evidence that gas is infalling, creating high velocity
dispersion at the position where in-situ gas meets ex-situ gas.
A strong presence of H$\alpha$ in concentrated regions is consistent with the
picture of NGC 7135 as a galaxy that has perhaps recently undergone star
formation as suggested in Rampazzo et al. (2007), though at low levels.
Despite this, there is little to no evidence of strong ongoing star formation.
This can be seen in the emission line diagnostic diagram in Figure 5. Almost
all the sources of emission are associated with low-ionization nuclear
emission-line regions (LINERs). Though a handful of active galactic nuclei
(AGN) sources can be seen, they largely lie in the outer noisier regions of
the data-cube, which makes the presence of true AGN sources doubtful, as shown
in Zaw et al. (2009). This strong bias towards LINER emission is typical of
merging systems with shock driven LINER emission (Monreal-Ibero et al., 2010;
Rich et al., 2011).
ALMA data (Ueda et al., 2014) showing the 12CO(J=1–0) emission is overlaid in
the lower left panel of Figure 4. The ALMA observations reveal a significant
peak in CO emission offset from the galaxy core with an integrated molecular
gas mass of $M_{\mathrm{H2}}=(5.4\pm 1.4)\times 10^{7}M_{\sun}$ adopting an
$\alpha_{\mathrm{CO}}=4.8M_{\sun}\,\mathrm{pc}^{-2}(\mathrm{K\,km\,s}^{-1})^{-1}$
(Solomon & Barrett, 1991). This cold gas mass would correspond to an expected
SFR of only $\sim 0.025M_{\sun}\,\mathrm{yr}^{-1}$ if a normal depletion time
of 2 Gyr for galaxies is assumed (Bigiel et al., 2011; Leroy et al., 2013).
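The expected SFR quoted above follows from dividing the molecular gas mass by the assumed depletion time (SFR $= M_{\mathrm{H2}}/t_{\mathrm{dep}}$), a quick arithmetic check:

```python
# Expected SFR from the cold gas mass, assuming a fixed depletion time.
M_H2 = 5.4e7      # Msun, ALMA CO-based gas mass (Ueda et al. 2014)
t_dep = 2e9       # yr, normal depletion time (Bigiel et al. 2011)
sfr = M_H2 / t_dep
print(f"{sfr:.3f} Msun/yr")  # 0.027, i.e. ~0.025 as quoted
```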
Although there is no similarly distinct ionised gas structure observed with
MUSE, there is plenty of ionized gas which may partially originate from star
formation despite the LINER-like classification. The extinction-corrected
H$\alpha$ flux within the central r=1$\arcsec$ is $(4\pm 0.4)\times
10^{-13}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}$ which would
correspond to $\mathrm{SFR}=0.5\pm 0.05M_{\sun}\,\mathrm{yr}^{-1}$ following
Kennicutt Jr (1998). So only 5% of the central H$\alpha$, hidden among the
LINER-like classified ionised gas, would need to originate from star formation
to agree with ongoing star formation. Such a low fraction would not alter the line
diagnostics significantly and would remain hidden. Hence, we cannot rule out
ongoing star formation based on the central cold gas mass observed by Ueda et
al. (2014). Given the highly disturbed kinematics, the possibility that
dynamical suppression of star formation is preventing cold gas collapse cannot
be tested by our observations.
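The central SFR estimate can be reproduced from the quoted H$\alpha$ flux via the Kennicutt (1998) calibration, SFR $= 7.9\times10^{-42}\,L(\mathrm{H}\alpha)$ with $L = 4\pi d^{2} F$. Note the distance used below is an assumption of ours (roughly 35 Mpc); the text quotes only the flux and the resulting SFR.

```python
import math

F_Ha = 4e-13                 # erg/s/cm^2, extinction-corrected, central r=1"
d_cm = 35.0 * 3.086e24       # assumed distance of ~35 Mpc, in cm
L_Ha = 4.0 * math.pi * d_cm**2 * F_Ha   # erg/s
sfr = 7.9e-42 * L_Ha                    # Kennicutt (1998) H-alpha calibration
print(f"SFR ~ {sfr:.2f} Msun/yr")       # ~0.5 Msun/yr, as quoted
```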
Figure 5: An emission line diagnostic diagram (Baldwin et al., 1981) divided
into various sources. Each bin is shown as a point according to its emission
ratios of [NII]/H$\alpha$ and [OIII]/H$\beta$ allowing for the identification
of regions of star formation, AGN emission or Low-ionization nuclear emission-
line region (LINER) emission. Detailed description of the line equations can
be found in Park et al. (2013). NGC 7135 shows no bins where current star
formation is clear in the emission. Slight overlaps outside the LINER emission
bin are unlikely to be genuine, but rather likely arise because of noise and
intrinsic variations. The galaxy emission is overwhelmingly LINER type.
### 4.3 Stellar Population Mapping
Populations of a galaxy evolve in metallicity over time, gradually enriching
with age. The exact quantities and rates of this enrichment are well known
(Carraro & Chiosi, 1994; Layden & Sarajedini, 2000; Pont & Eyer, 2004), with
the rate of enrichment directly tied to galaxy mass resulting in the mass-
metallicity relation. Thus, we can quickly establish whether a galaxy has
followed the standard enrichment of its population as would be expected from
an isolated galaxy.
In reality, galaxies are more often than not experiencing regular disturbances
in the form of mergers, fly-bys and intracluster medium interaction such as
ram-pressure stripping (Lotz et al., 2011; Sinha & Holley-Bockelmann, 2012;
Ebeling et al., 2014; Ventou et al., 2017). One effect of this is the
variation of the age-metallicity relation of a galaxy from the modelled form.
This is most strikingly clear when a galaxy accretes material from a lower
mass galaxy (Spolaor et al., 2009; Leaman et al., 2013). Due to the lower
metal enrichment rate of lower mass galaxies than that of larger mass
galaxies, one finds that in general a smaller mass galaxy will exhibit far
lower values of metallicity at late ages. Because of the ability of full
spectral fitting methods to identify populations based on age and metallicity
models, one would see these two populations as distinct and separate areas on
an age-metallicity diagram. This depends, however, on the mass difference
between the merging galaxies: if two galaxies of similar mass were to merge, the
separation of populations on the age-metallicity diagram would be too little
to distinguish at the current resolutions of full-spectral fitting methods.
Using these principles we can estimate which of the populations present are
those which have accreted onto the host galaxy, and are therefore ex-situ in
origin.
We apply these principles to the population maps of NGC 7135 in order to
derive the history of formation and evolution. In Figure 6, nine regions are
marked with sequential letters corresponding to population maps, which are
similarly sequentially lettered, with maps taken from the Voronoi bin below
the labelled cross. Each position marks an area of interest or standard
uniformity across the maps of Figure 2 with which we can build a picture of
the assembly and current status of NGC 7135. Region ‘A’ marks the core of NGC
7135. Regions ‘B’ and ‘C’ sample the tidal tail clearly seen in the unsharp
mask image (lower right panel of Figure 6), with increasing galactocentric
radius. Regions ‘D’, ‘E’, and ‘F’ also sample with increasing galactocentric
radius, however they do so outside of any prominent tidal features. These are
assumed to be a ‘control’ sample chosen to represent the underlying
galaxy, though they show signs of probing accreted material. Regions ‘G’ and ‘H’
sample the tidal regions opposite the tail, with ‘H’ particularly covering
unstripped remnants of the infalling galaxy. Finally region ‘K’ covers the
core of the infalling galaxy.
Starting with region ‘A’, we see a very high metallicity, very old population
associated with the galaxy core. This is to be expected and is commonly seen
in galaxy cores (see e.g. Guérou et al., 2016). There is little obvious
evidence for accreted populations as expected, as shown by the old and high
metallicity population, and lack of any clear population bimodality.
Moving along the main tidal tail in region ‘B’ we see a much younger
population at high metallicity. When comparing to regions not associated with
tidal features but at similar radius such as ‘E’ and ‘F’, we see that the
population of ‘B’ is not comparable to ‘E’ or ‘F’. This is largely due to a
lack of older material that would be expected to be associated with the host
galaxy. Plausibly this is the result of the vast majority of the stellar
material originating in the infalling galaxy and comprising the tidal tail,
and thus the populations visible are instead associated with this infalling
object, rather than original populations of NGC 7135. A small amount of
material is also visible as a young and metal poor population. This can be
attributed to ex-situ material that merged onto either NGC 7135 or the
infalling galaxy in the past prior to the current merger, and thus shows a
separate population signature.
As we move further out along the tidal tail to region ‘C’, many of the
features become more prominent. For one thing, the high metallicity population
associated with the stripped material from the infalling galaxy remains.
Furthermore, low metallicity ex-situ populations increase in the fraction of
contributed mass (as seen as a distinctly separate low metallicity
population). Care must be taken in comparison due to colour normalisation
differences on the plot, however the maximum low metallicity ex-situ fraction
increases from $\sim$0.5% in ‘B’ to $\sim$1.0% in ‘C’, with a higher sum total
of ex-situ material. This increase is to be expected, as ex-situ material
commonly increases in fraction with galactocentric radius (La Barbera et al.,
2012; Martin et al., 2018; Davison et al., 2020). It is unclear whether this
ex-situ population is associated with NGC 7135 or the infalling galaxy,
however it could plausibly be from both, as models of hierarchical growth
suggest both galaxies would have undergone historical minor mergers in all but
the rarest cases (Fakhouri et al., 2010). A burst of star formation is also
seen in the final Gyr history. This is suggestive of a rapid star formation
event, most likely triggered as a result of the galaxy interactions. Following
this, no star formation is noticed in any bin. A shutdown of star formation
after a major merger is discussed widely in literature (see e.g. Bekki, 2001;
Barr et al., 2007; Cortesi et al., 2013; Querejeta et al., 2015a; Puglisi et
al., 2021).
Region ‘D’ samples an inner region of NGC 7135. It shows similar populations
as in ‘A’, however extends slightly to lower ages as expected following galaxy
population age gradients. Little to no ex-situ material is clear. Moving
further out in radius, we come to region ‘E’. This also shows the expected
populations previously seen in ‘A’ and ‘D’. This time however there is a more
significant low metallicity ex-situ population, which as mentioned previously
is expected as one reaches regions further from the galaxy centre according to
galaxy simulations. Also prominent in region ‘E’ is a population of
intermediate age and high metallicity stars. As shown below in region ‘H’,
this is almost certainly associated with the infalling galaxy.
Region ‘F’ samples at a slightly greater radius than ‘E’, again with more
prominent features, though in similar positions to ‘E’. We see an increase in
the low metallicity ex-situ population radially along the tidal tail (‘A’, ‘B’
and ‘C’) as well as radially in areas not associated with tidal features
(‘D’, ‘E’ and ‘F’).
The final regions sample the galaxy shell and associated infalling object.
Region ‘G’ examines an area of tidal shell seemingly also originating from the
infalling galaxy. The region almost identically matches ‘H’ which is placed to
examine the outskirts of the infalling object, in regions that have yet to be
stripped. The fact that these two populations are quite so similar suggests
they are of the same origin, and that the tidal shells and tails are the
result of scattered accreted material from the infalling galaxy.
Finally region ‘K’ examines the core of the infalling galaxy at approximately
0.5 effective radii from the centre of NGC 7135. It shows a highly metal rich
and old population with the exact tendencies of a galaxy nucleus. It shows
largely the same properties as the nucleus of NGC 7135, though with marginally
lower metallicity and a greater extent in age, suggesting a lower mass.
The velocity dispersion of region ‘K’ (seen in Fig 2) is at a much lower
average velocity dispersion than the host galaxy, again suggesting a lower
mass of the merging galaxy compared to NGC 7135. This is curious considering
its high metallicity. One explanation would be that the in-falling galaxy is
the remnant of a galaxy core stripped of its halo, which would explain both
its relatively high brightness and high metallicity. This is also supported by
the large amounts of seemingly ex-situ gas that are seen in Figure 3, where
this gas would have formed the outer regions of the infalling galaxy as
explained further in section 4.2.
The velocity dispersion (Fig 2) increases significantly midway between the
accreting galaxy core and the host galaxy core. This further lends weight to
the idea that material is accreting onto the host galaxy, as the high velocity
dispersion area indicates a region where accreted material begins encountering
large amounts of in-situ material, and the difference in velocities becomes
more evident, inflating the velocity dispersion, prior to mixing.
In summary, the population maps are indicative of three distinct galaxy
populations, in which two significant merger events are present. The first is
ongoing, with an intact core of a second galaxy currently in close proximity
to NGC 7135, with material being stripped off, accreted onto NGC 7135, and
creating large tidal features. These make up the high metallicity populations
at intermediate ages. Yet another population is consistently present, as a low
metallicity, intermediate to old aged population. As discussed previously,
chemical enrichment and mass-metallicity relations mean this population is not
associated with either galaxy. Therefore we attribute these stars to older
historical mergers, now mixed loosely with the main populations. It is unclear
which of these two present galaxies these populations accreted to, however as
mentioned previously, the ex-situ population is likely present in both
galaxies independently, and was captured by each prior to this ongoing merger.
Figure 6: NGC 7135 population sampling diagram. The upper nine panels display
mass weighted metallicity space of NGC 7135 for various regions. Corresponding
regions are marked in the lower left panel with crosses marking the position
extracted, and the corresponding letter. The lower right panel shows the same
region as an unsharp masked image to highlight tidal features. Data for the
unsharp masked image are taken from the VST ATLAS survey (Shanks et al.,
2015). The diagrams build a narrative in which a recent and ongoing merger
creates large tidal features in NGC 7135. There are also populations of far
lower metallicity which are well mixed in the galaxy. These populations
indicate historical mergers of high merger-mass ratio.
### 4.4 Accreted Populations
As seen in Figure 6, many bins display a bimodality in population distribution
(see e.g. panels ‘B’, ‘C’, ‘E’, ‘F’, ‘G’, and ‘H’). Such a strong separation
in populations suggests stellar material being obtained from more than a
single source. Galaxies not associated with the main galaxy will evolve with a
different metallicity due to the mass metallicity relation. As such, when the
galaxies merge, there will be a distinct separation in the Age-Metallicity
relation of each galaxy. The most obvious explanation for the bimodal
populations seen in Figure 6 would be the merger of a less massive, lower
metallicity galaxy to the host galaxy or onto the infalling galaxy, beginning
$\sim$10 Gyr ago. Furthermore, the fact that the bi-modality of populations
is seen at almost all positions across the galaxy outside of the cores (panels
‘B’, ‘C’, ‘E’, ‘F’, ‘G’, and ‘H’) suggests that this material has been well
mixed and is distributed throughout the galaxy, with the exception of the two
galaxy cores (see panels ‘A’, ‘D’, and ‘K’).
To explore the population bi-modality, the fraction of stars not associated
with the main host population was determined from each bin. To identify two
discontinuous populations, a dividing line was sought across the population
map, which would follow the lowest saddle points. This ‘path of least
resistance’ then divided the populations into two distinct sources; one being
material from NGC 7135 and the in-situ material of the infalling galaxy; and
the other source being low metallicity populations accreted onto both galaxies
at earlier times. This can be imagined as the valley between two hills, with
the dividing line taking the natural path of a river at the lowest potential.
This is visualised in Figure 7 with a red line showing the calculated
separation path for one random bin, separating the populations into two
sources.
Figure 7: Population map of one bin projected on 3D axes. A line is sought for
each map to bisect the lower metallicity population from the older using low
saddle points. For this example, the path is marked by a red line.
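The ‘path of least resistance’ along low saddle points can be sketched as a minimum-cost path across the age-metallicity mass map, found here with simple dynamic programming. This is an illustrative reconstruction: the paper does not specify its exact path-finding algorithm, and `dividing_path` is a hypothetical helper.

```python
import numpy as np

def dividing_path(mass_map):
    """Minimum-cost left-to-right path across a 2D mass map (age along x,
    metallicity along y), following the lowest saddle points. Each step
    moves one column right, to the same, upper, or lower row."""
    ny, nx = mass_map.shape
    cost = np.full((ny, nx), np.inf)
    cost[:, 0] = mass_map[:, 0]
    step = np.zeros((ny, nx), dtype=int)
    for x in range(1, nx):
        for y in range(ny):
            # Best predecessor among rows y-1, y, y+1 in the previous column.
            lo, hi = max(0, y - 1), min(ny, y + 2)
            prev = int(np.argmin(cost[lo:hi, x - 1])) + lo
            cost[y, x] = mass_map[y, x] + cost[prev, x - 1]
            step[y, x] = prev
    # Backtrack from the cheapest endpoint.
    y = int(np.argmin(cost[:, -1]))
    path = [y]
    for x in range(nx - 1, 0, -1):
        y = int(step[y, x])
        path.append(y)
    return path[::-1]  # row index of the divide, per column

# Toy map: two 'hills' (populations) separated by a valley at row 2.
m = np.array([[9, 9, 9, 9],
              [5, 6, 5, 6],
              [1, 0, 1, 0],
              [5, 6, 5, 6],
              [9, 9, 9, 9]], dtype=float)
print(dividing_path(m))  # [2, 2, 2, 2] -- the valley between the two hills
```

Summing the mass on either side of the returned path then gives the in-situ and accreted fractions per bin.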
Application of this to all bins provides a map such as in Figure 8, where we
can examine the fraction of stellar material associated with the lower
metallicity source. Figure 8 shows a polar view of NGC 7135 to better examine
radial features. By examining fraction across the galaxy we can infer regions
of higher or lower concentration of the accreted material.
At the centre of NGC 7135 we see no accreted material suggesting the core is
dominated by in-situ stars. The density of accreted material rises with radius
which is indicative of galaxy mergers depositing material on the outer regions
of the galaxy. The material seems to be unevenly radially mixed, with
proportionally higher quantities of ex-situ material deposited between 0 and 1
radians from North. This is likely a projection effect, as the area at the
south of the galaxy (the left and right extents of Figure 8) aligns with the
previously mentioned high metallicity galaxy, with the stream of stellar
material obscuring the host galaxy structure, and dominating the spectral
light.
Figure 8: The left panel shows a polar oriented map of NGC 7135. Blue colour
shows the mass fraction of derived material not associated with the host
galaxy population, with contouring shown in red-orange-yellow. The angle is
shown with 0 radians as the North of the image and positive angle increase
showing clockwise movement around the galaxy. Gaussian smoothing has been
applied to show more clearly larger structures of ex-situ material. The radius
from centre has been limited to include only radii in which a complete circle
can be arranged within the image. The adjoining right-hand panel shows the
same radial positions as the left side, however it shows the mean
discontinuous mass fraction for a complete circle for the radii. Mean fraction
was calculated using circular annuli of radius 3 pixels with a moving average.
The effective radius is taken from table 1 of Marino et al. (2011). The
fraction of accreted material increases with radius, with a roughly 7%
increase within 0.6 effective radii.
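The radial profile in the right-hand panel, built from mean fractions in circular annuli, can be sketched as below. This is an illustrative reconstruction: `radial_fraction_profile` is a hypothetical helper, and the centre, toy map, and annulus handling are assumptions.

```python
import numpy as np

def radial_fraction_profile(frac_map, centre, dr=3):
    """Mean accreted-material fraction in circular annuli of width dr pixels,
    in the spirit of the 3-pixel annuli described in the caption."""
    ny, nx = frac_map.shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(yy - centre[0], xx - centre[1])
    radii, means = [], []
    for r0 in range(0, int(r.max()), dr):
        sel = (r >= r0) & (r < r0 + dr)
        if sel.any():
            radii.append(r0 + dr / 2)
            means.append(float(np.nanmean(frac_map[sel])))
    return radii, means

# Toy fraction map rising linearly with radius, mimicking the observed
# outward increase of ex-situ fraction.
ny = nx = 41
yy, xx = np.indices((ny, nx))
frac = 0.005 * np.hypot(yy - 20, xx - 20)   # 0 at centre, ~0.07 at r~14
radii, means = radial_fraction_profile(frac, centre=(20, 20), dr=3)
print(means[0] < means[-1])  # True: fraction increases with radius
```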
We can further see evidence of the division of the various populations by
examining stellar mass estimates per population, determined with the division
of the age-metallicity plane in combination with mass-to-light ratios. We show
this in Figure 9, with three regions of different populations separated
roughly. Using mass to light ratios from Thomas et al. (2003), we estimate the
stellar mass per population division, per pixel. The panel labelled ‘1’
corresponds to intermediate age stars with high metallicity which were
associated with the infalling galaxy. This is confirmed in the first map in
the Figure (panel 2) in which there is a noticeably higher stellar mass
associated with the infalling object for only this population. This panel also
encompasses much of the stellar material of NGC 7135 near to the centre though
at a slight distance, as is expected from standard galaxy age gradients.
Though effects from the pointing overlaps are visible, it is notable that we
see a small amount of material tracing the tidal tail and other tidally
derived features. This suggests that the intermediate age material and tidal
tail is associated with the infalling galaxy exclusively, though further data
analysis from a higher resolution stellar model grid would be required for
verification of this.
In the second labelled map (panel 3) we see that the most metal rich and
oldest material is associated heavily with the host galaxy, with a strong
gradient from the galaxy centre. This in-situ population is generally
undisturbed and centrally concentrated, in comparison to the largely ex-situ
population represented in the 1st map. Finally in the third labelled map
(panel 4), we see again a gradient of stellar mass associated with the host
galaxy. This third map shows only stars at far lower metallicities than the
majority of the stellar material. This material is assumed to be low mass
objects which have historically accreted to NGC 7135, and are now well mixed
into the galaxy. It should be noted that these are rigid divisions, and that
the true population distributions from each object undoubtedly bleed over into
the other divided regions (especially in regions ‘1’ and ‘2’).
Figure 9: The first panel shows a general galaxy age-metallicity map. This is
divided by the red boxes into 3 groups of populations to examine the mass
associated with each area. Panel labels correspond to the numbers on the age-
metallicity map. These show the divided nature of the populations, in which
the intermediate age high metallicity population is more strongly associated
with the infalling object and tidal features, whilst the older metal rich
population is associated with the host galaxy.
## 5 Discussion
Analysis of the galaxy kinematics and gas of NGC 7135 yielded evidence for
both historical galaxy mergers, as well as an ongoing disruptive major merger.
Despite the kinematics of past mergers being hidden (to the available
resolution of data) due to mixing over time, ex-situ populations were
extracted from the galaxy using full spectral fitting. This allowed for the
identification of a well mixed low-metallicity stellar population relative to
the larger fraction of higher metallicity stellar population. Considering
expected enrichment patterns, this can only have occurred if either gas or
stars (or both) originating in an ex-situ galaxy rapidly accreted or fully
merged onto NGC 7135. The lower metal content of this population made it
distinct from the original population.
Potentially, all the stellar material in this population could have been
created in-situ using gas that was accreted from another galaxy. This is
highly unlikely however considering the specificity of the age and metallicity
of the two distinct populations. Were these stars to be the product of new
infalling gas, we would expect to see a mixing of the gas, and for the
metallicity of new stars born after the merger event to be at a more
intermediate metallicity. Instead, we see the two populations continuing to
form stars without a sharp change in metallicity, thus the lower metallicity
population stars are considered to be born ex-situ.
The bimodality of these stars allowed for clean separation of the ex-situ and
in-situ populations. Thus the relative fraction of ex-situ material could be
ascertained. This allowed for the exploration of ex-situ fraction with
galactocentric radius, as shown in Figure 8. The Figure shows a clear
preference for ex-situ material to be located at the outer edges of the
galaxy, with no detectable ex-situ material in the centre of the galaxy. This
is akin to simulated results showing the same preference for ex-situ fraction
increase with galactocentric radius (Schaye et al., 2014; Crain et al., 2015;
Rodriguez-Gomez et al., 2016; Davison et al., 2020), as well as observational
studies showing the same principles (Forbes et al., 2011; Pillepich et al.,
2015b; Oyarzún et al., 2019). The mean ex-situ fraction measured for NGC 7135
at approximately 0.6 effective radii (the greatest extent captured by the MUSE
image) is 7%. This is only representative of the low metallicity populations
from low-mass systems. Higher metallicity populations from smaller mass-ratio
mergers would be disguised amongst in-situ populations.
Limitations of this technique largely arise from the ability to separate
populations. At current resolutions of full spectral fitting techniques,
populations must be wholly distinct in metallicity to be noticeably separable
from the host population. Accreted material with age and metallicity similar
to that of the host galaxy would be largely indistinguishable from the main
population. Further limitations are the inability to directly distinguish
between stars that are born ex-situ, and those born in-situ but of ex-situ
material. As discussed above, these limitations are unlikely to be dominant in
this scenario.
One interesting area to consider is the eventual fate of NGC 7135. Will it
retain some semblance of a spiral structure, or evolve into an S0 or
elliptical galaxy? Conversion into an S0 galaxy seems to be a distinct
possibility as S0 galaxies with coherent disk kinematics form through merger
mechanisms, though the exact merger specifics continue to be debated within
the community. Some evidence suggests that S0 galaxies are rarely expected to
be formed through major mergers (<4:1 merger ratio) (Bournaud et al., 2005;
Lofthouse et al., 2016), with the conclusion given that major mergers are a
plausible but non-dominant mechanism for early type formation. Conversely,
other arguments suggest that S0 galaxies can indeed be formed from major
mergers (Querejeta et al., 2015a, b). Furthermore major mergers can be shown
to give rise to much of the inner structure often found in early types
(Eliche-Moral et al., 2018). Perhaps the most consistent agreement for the
formation requirements of an S0 via mergers is the necessity for a
misalignment of angular momentum between the in-situ and ex-situ accreted
baryonic components (see e.g. Sales et al., 2012). Considering the existing
baryonic misalignment present in NGC 7135 in the form of a counter rotating
disk, and considering the seemingly misaligned orbit of the ongoing merger, it
is perhaps likely that the ongoing disruption will lead to NGC 7135 tending
towards S0 morphology. Plausibly the kinematics would increasingly reflect
those of general spheroid galaxies as newly formed stars with an opposing
angular momentum to the mean, and those recently accreted, would begin to
reduce kinematic coherence. Though this is a distinct possibility, the true
future of NGC 7135 will remain unknown until more decisive techniques and
modelling are developed. Due to the complex nature of the recent history of
NGC 7135, any predictions on future evolution are speculation.
## 6 Conclusions
We have used a Voronoi binned map of NGC 7135 to explore kinematic stellar
features such as velocity and velocity dispersion, as well as the
distributions of stellar properties such as age and metallicity. Gas
properties were also explored in regular bins, with both kinematic gas
properties and gas distribution investigated. Gas was shown to be counter
rotating compared to stellar material, with significant evidence of
disturbance in the galaxy core. This along with population evidence shows a
galaxy currently merging onto NGC 7135. Despite gas being present, little to
no current star formation was identified. ALMA data of the galaxy core points
to a star formation rate of only $0.025M_{\sun}\,\mathrm{yr}^{-1}$ assuming
normal depletion times. Strong LINER emission likely obscures emission
associated with star formation and as such a higher SFR cannot be ruled out.
During population analysis of NGC 7135 from data provided by the SOSIMPLE
project, we have identified both historic and ongoing merger activity. This
was achieved using a ‘full spectral fitting’ method to disentangle strong bi-
modalities in stellar populations. We show that in a snapshot of a ‘single’
galaxy, we are in reality witnessing the product of three distinct galaxy
populations.
An ongoing merger or large accretion event is clear from the stellar kinematic
maps, showing a distinct area of stellar material not associated with the host
galaxy, but interacting with the galaxy structure. Likewise in gas maps we see
large velocity dispersion in areas where ex-situ infalling gas interacts with
in-situ gas.
At least one historical large merger event took place 6-10 Gyr ago
according to star-formation history derived by full spectral fitting. This
potentially provided lower-enrichment gas from which NGC 7135 formed stars of
lower metallicity; however, the timeline of stellar ages, matched with
the likely merger date makes it highly likely that most, if not all of the
stars belonging to this population are ex-situ stars originating in another
galaxy. Considering there is no discernible change in the host population
metallicity of new stars born after the merger, we assume that all lower
metallicity population stars are ex-situ in origin. The timeline of star
formation history suggests that this merger caused a general shut-down of star
formation in NGC 7135, not long after the merger event.
We calculate the fraction of the ex-situ material as a function of
galactocentric radius, finding a steep increase in ex-situ material as we
probe further to the outskirts of the galaxy. The centre of the galaxy
exhibits no signs of ex-situ material, whilst by 0.6 effective radii, this
fraction is at 7%. This is consistent with literature expectations of ‘two
phase’ galaxy assembly, seen both observationally and in simulation, where ex-
situ material is preferentially deposited on the outskirts of a galaxy.
Many more SOSIMPLE galaxies are available from the survey, with much left to
explore.
## 7 Acknowledgements
Many thanks to an anonymous referee for useful comments. This work was
completed with the support of the ESO studentship program and the Moses Holden
Scholarship. BH acknowledges support by the DFG grant GE625/17-1 and DLR grant
50OR1911. Based on observations collected at the European Southern Observatory
under ESO programme 0103.A-0637(A).
## 8 Data Availability
The data described in this article are accessible via the ESO archive of MUSE
data.
# Sum-Rate Maximization in Distributed Intelligent Reflecting Surfaces-Aided
mmWave Communications
Yue Xiu1, Wei Sun2, Jiao Wu3, Guan Gui4, Ning Wei1, Zhongpei Zhang1
This work was supported in part by the Guangdong province Key Project of
science and Technology (2018B010115001), the National Natural Science
Foundation of China (NSFC) under Grant 91938202 and 61871070, and the Defense
Industrial Technology Development Program (JCKY2016204A603). The corresponding
author is Ning Wei. 1 National Key Laboratory of Science and Technology on
Communications, UESTC, Chengdu, China
2School of Computer Science and Engineering, Northeastern University,
Shenyang, China
3College of Electrical and Computer Engineering, Seoul National University,
Seoul, South Korea
4 College of Telecommunications and Information Engineering, NJUPT, Nanjing,
China
###### Abstract
In this paper, we focus on the sum-rate optimization in a multi-user
millimeter-wave (mmWave) system with distributed intelligent reflecting
surfaces (D-IRSs), where a base station (BS) communicates with users via
multiple IRSs. The BS transmit beamforming, IRS switch vector, and phase
shifts of the IRS are jointly optimized to maximize the sum-rate under minimum
user rate, unit-modulus, and transmit power constraints. To solve the
resulting non-convex optimization problem, we develop an efficient alternating
optimization (AO) algorithm. Specifically, the non-convex problem is converted
into three subproblems, which are solved alternately. The solutions for the
transmit beamforming at the BS and the phase shifts at the IRSs are derived
using a successive convex approximation (SCA)-based algorithm, and a greedy
algorithm is proposed to design the IRS switch vector. The complexity of the
proposed AO algorithm is analyzed theoretically. Numerical results show that
the D-IRSs-aided scheme can significantly improve the sum-rate and energy
efficiency performance.
###### Index Terms:
Millimeter-wave, distributed intelligent reflecting surfaces, sum-rate,
alternating optimization.
## I Introduction
Millimeter-wave (mmWave) is widely acknowledged as a promising technology for
the fifth-generation (5G) communications, which can achieve ultra-high data-
rate [1, 2, 3, 4, 5]. However, the high path loss and severe blockages in
mmWave bands greatly degrade the quality of service (QoS) [6]. The intelligent
reflecting surface (IRS) technology has been recently investigated for
overcoming the serious path loss and enhancing the data-rate [7].
Specifically, IRS is a planar metasurface consisting of a large number of
passive reflecting elements, each of which is able to reflect the incident
signals with a desired phase shift [8]. By adaptively altering the propagation
of the reflected signal, the IRS is capable of improving the received signal
power via constructive signal combination and destructive interference
mitigation at the receivers, thereby enhancing the system performance.
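To make the constructive-combination idea concrete, the following sketch (with a hypothetical element count and randomly drawn channels, not values from this paper) shows that choosing each IRS phase to cancel the phase of its cascaded channel coefficient makes all reflected paths add coherently at the receiver:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # number of IRS elements (illustrative)

# Random cascaded single-user channel; the received amplitude is
# |sum_n h_n^* e^{j theta_n} g_n|.
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Co-phasing: choose theta_n to cancel the phase of each term h_n^* g_n,
# so every reflected path adds constructively.
theta = -np.angle(np.conj(h) * g)
aligned = np.abs(np.sum(np.conj(h) * np.exp(1j * theta) * g))

# With random phases the terms add incoherently and the gain is smaller.
phi = rng.uniform(0.0, 2.0 * np.pi, N)
random_gain = np.abs(np.sum(np.conj(h) * np.exp(1j * phi) * g))

# By the triangle inequality, the co-phased gain is the maximum achievable
# and equals the sum of per-element amplitudes.
assert aligned >= random_gain
assert np.isclose(aligned, np.sum(np.abs(h) * np.abs(g)))
```

The co-phased gain grows linearly with $N$, which is the usual squared-power scaling argument for large IRSs.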
Various studies on the joint design of transmit beamforming and IRS phase
shifts for IRS-aided wireless systems have been carried out [8, 9, 10, 11]. In
[8], the transmit beamforming at the base station (BS) and the phase shifts at
the IRS were jointly designed to maximize the achievable sum-rate. In [9], an
IRS-aided multi-user multiple-input single-output (MISO) system was studied,
where the phase shift matrix and transmit beamforming were jointly optimized
by semidefinite relaxation and alternating optimization techniques. In [10],
the authors studied a single-user IRS-aided MISO system, where a dual
decomposition and price-based method was used to maximize the weighted
sum-rate. In [11], a sum-rate maximization problem was studied, where a
manifold optimization algorithm was adopted to design the phase shifts of the
IRS. However, these works mainly target microwave communications, and mmWave
systems with multiple IRSs remain unexplored.
These open problems motivate us to investigate the sum-rate optimization
problem in a multi-user mmWave system assisted by distributed IRSs (D-IRSs).
Specifically, the key idea of the proposed alternating optimization (AO)
scheme is to jointly optimize the phase shifts at the IRSs, the switch vector
of the IRSs, and the transmit beamforming at the BS, subject to constraints on
the transmit power and the user rates. Due to the non-convexity of the
optimization problem, we propose a novel algorithm to maximize the sum-rate of
a D-IRSs-aided multi-user MISO system, referred to as the joint beamforming,
switch, and phase shifts optimization algorithm. In the proposed algorithm,
the transmit beamforming is first derived by the successive convex
approximation (SCA) algorithm, which is then also used to obtain the phase
shift matrices at the IRSs. For the design of the IRS switch vector, a greedy
algorithm is exploited. Finally, numerical results demonstrate that the
proposed algorithm achieves a high sum-rate while improving the energy
efficiency.
## II System Model
Figure 1: System model for mmWave communication system with D-IRSs.
Consider the system model in Fig. 1, where an $N_{t}$-antenna BS transmits
signals to $K$ single-antenna users. This communication is aided by $L$ IRS
units, where each IRS comprises $N_{r}$ reflecting elements, each able to
reflect the incident signal independently with an adjustable phase shift. In
this paper, we assume that the direct link between the BS and each user is
blocked by obstacles, and that the $L$ IRSs are deployed on high buildings
around the users. Thus, the BS-IRS channel is dominated by the line-of-sight
(LoS) path and can be expressed as
$\displaystyle\boldsymbol{G}_{l}=\sqrt{\frac{1}{\beta_{l}}}\alpha_{l}\boldsymbol{a}\boldsymbol{b}^{H},$
(1)
where $\beta_{l}$ is the large-scale fading coefficient of the $l$th BS-IRS
link, and $\alpha_{l}\sim\mathcal{CN}(0,1)$ is the small-scale fading coefficient.
$\boldsymbol{b}\in\mathbb{C}^{N_{t}\times 1}$ and
$\boldsymbol{a}\in\mathbb{C}^{N_{r}\times 1}$ are the array response vector at
the BS and IRS, respectively. The channel from the $l$th IRS to the $k$th user
is expressed as
$\displaystyle\boldsymbol{h}_{kl}=\sqrt{\frac{1}{\beta_{kl}L_{kl}}}\sum\nolimits_{p=0}^{L_{kl}-1}\alpha_{kl,p}\boldsymbol{a}_{kl,p},$
(2)
where $\beta_{kl}$ and $\alpha_{kl,p}\sim\mathcal{CN}(0,1)$ are defined
similarly to those in (1). $L_{kl}$ is the number of paths from the $l$th IRS to
the $k$th user, and $\boldsymbol{a}_{kl,p}$ is the transmit array steering
vector of the $p$th path at the IRS. In this work, the channel state
information (CSI) is assumed to be
perfectly known at BS.
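As an illustration, the channel models (1) and (2) can be instantiated as follows. The half-wavelength uniform linear array steering vector, all dimensions, and the fading parameters are assumptions for this sketch; the paper does not specify the array geometry:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr = 8, 16  # BS antennas, IRS elements (illustrative)

def ula_response(n, angle):
    """Half-wavelength ULA steering vector (an assumed geometry)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(angle)) / np.sqrt(n)

# BS-IRS channel (1): rank-one LoS link with large-scale fading beta_l.
beta_l = 1e3
alpha_l = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
a = ula_response(Nr, 0.3)   # IRS-side array response
b = ula_response(Nt, -0.5)  # BS-side array response
G_l = np.sqrt(1.0 / beta_l) * alpha_l * np.outer(a, b.conj())

# IRS-user channel (2): sum of L_kl paths with i.i.d. small-scale fading.
beta_kl, L_kl = 1e2, 4
h_kl = np.zeros(Nr, dtype=complex)
for p in range(L_kl):
    alpha_p = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    h_kl += alpha_p * ula_response(Nr, rng.uniform(-np.pi / 2, np.pi / 2))
h_kl *= np.sqrt(1.0 / (beta_kl * L_kl))

# The LoS BS-IRS link is rank one, as the outer-product form implies.
assert G_l.shape == (Nr, Nt) and np.linalg.matrix_rank(G_l) == 1
```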
In the D-IRSs-aided mmWave system, the received signal at the $k$th user can
be written as
$\displaystyle y_{k}=$
$\displaystyle\sum\nolimits_{l=1}^{L}x_{l}\boldsymbol{h}^{H}_{kl}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{k}s_{k}+\sum\nolimits_{i\neq
k}^{K}\sum\nolimits_{l=1}^{L}x_{l}\boldsymbol{h}^{H}_{kl}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{i}$
$\displaystyle s_{i}+n_{k},$ (3)
where $\boldsymbol{s}=[s_{1},\ldots,s_{K}]\in\mathbb{C}^{K\times 1}$ is the
transmit signal, satisfying $\mathbb{E}\\{s_{k}s_{k}^{H}\\}=1$,
$\mathbb{E}\\{s_{i}s_{j}^{H}\\}=0,i\neq j$, and
$n_{k}\sim\mathcal{CN}(0,\sigma^{2}_{k})$ is the additive noise at the $k$th user.
$\boldsymbol{\Theta}_{l}=\mathrm{diag}\\{e^{j\theta_{l1}},\cdots,e^{j\theta_{lN_{r}}}\\}$
denotes the reflecting matrix of the $l$th IRS.
$\boldsymbol{x}=[x_{1},\ldots,x_{L}]^{T}$ is defined as the switch vector with
$x_{l}\in\\{0,1\\}$. $x_{l}=1$ means that the $l$th IRS is active, while
$x_{l}=0$ means that the $l$th IRS does not work and consumes no power.
Then, the signal-to-interference-plus-noise ratio (SINR) at user $k$ can
be expressed as
$\displaystyle\mathrm{SINR}_{k}=\frac{|\sum\nolimits_{l=1}^{L}x_{l}\boldsymbol{h}^{H}_{kl}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{k}|^{2}}{\sum\nolimits_{i\neq
k}^{K}|\sum\nolimits_{l=1}^{L}x_{l}\boldsymbol{h}^{H}_{kl}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{i}|^{2}+\sigma_{k}^{2}}.$
(4)
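The cascaded signal model (3) and the SINR in (4) can be evaluated directly. The sketch below uses randomly drawn stand-in channels, phases, and beams (all dimensions and the noise power are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
Nt, Nr, L, K = 8, 16, 3, 3   # illustrative dimensions
sigma2 = 1e-3                # per-user noise power (assumed equal)

# Stand-in channels, phases, switch vector, and beamformers.
G = (rng.standard_normal((L, Nr, Nt)) + 1j * rng.standard_normal((L, Nr, Nt))) / np.sqrt(2)
h = (rng.standard_normal((K, L, Nr)) + 1j * rng.standard_normal((K, L, Nr))) / np.sqrt(2)
theta = rng.uniform(0.0, 2.0 * np.pi, (L, Nr))
x = np.array([1, 0, 1])      # IRS 2 switched off
W = rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))
W /= np.linalg.norm(W)       # total transmit power normalised to P = 1

def effective_gain(k, i):
    """sum_l x_l h_kl^H Theta_l G_l w_i, the cascaded channel in (3)."""
    return sum(
        x[l] * h[k, l].conj() @ (np.exp(1j * theta[l])[:, None] * G[l]) @ W[i]
        for l in range(L)
    )

def sinr(k):
    """SINR_k as defined in (4)."""
    sig = abs(effective_gain(k, k)) ** 2
    interf = sum(abs(effective_gain(k, i)) ** 2 for i in range(K) if i != k)
    return sig / (interf + sigma2)

# Sum-rate objective of (5a): sum_k log2(1 + SINR_k).
sum_rate = sum(np.log2(1.0 + sinr(k)) for k in range(K))
assert sum_rate > 0
```

Note that `np.exp(1j * theta[l])[:, None] * G[l]` realises $\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}$ without forming the diagonal matrix explicitly.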
In this paper, we aim to maximize the sum-rate by jointly designing the phase
shift matrix and the switch vector at the IRS, and the transmit beamforming at
the BS, under the constraints of user rate and the transmit power. Therefore,
the optimization problem is formulated as
$\displaystyle(\text{P1}):\max_{\\{\boldsymbol{w}_{k}\\},\\{\boldsymbol{\Theta}_{l}\\},\boldsymbol{x}}~{}$
$\displaystyle\sum\nolimits_{k=1}^{K}R_{k},$ (5a) s.t. $\displaystyle
R_{k}\geq\gamma_{k},$ (5b)
$\displaystyle\sum\nolimits_{k=1}^{K}\mathrm{Tr}\left(\boldsymbol{w}_{k}\boldsymbol{w}_{k}^{H}\right)\leq
P,$ (5c) $\displaystyle|\theta_{l,j}|=1,~{}\forall
l\in\mathcal{L},j=1,\cdots,N_{r}.$ (5d) $\displaystyle
x_{l}\in\\{0,1\\},\forall l\in\mathcal{L},$ (5e)
where $R_{k}=\log_{2}(1+\mathrm{SINR}_{k})$ is defined as the achievable rate
of user $k$, and $P$ is the maximum transmit power. It is easy to see that
problem (P1) is highly non-convex due to the non-convexity of the objective
function and constraints, which makes it challenging to solve.
## III Sum-Rate Maximization Via the Alternating Optimization Algorithm
In this section, we propose a new scheme to handle the non-convex problem
(P1). Firstly, we optimize the transmit beamforming vector
$\boldsymbol{w_{k}}$ and the phase shift matrix $\boldsymbol{\Theta}_{l}$ by
using the SCA algorithm. Then, the IRS switch vector $\boldsymbol{x}$ is
optimized via a greedy algorithm.
### III-A Transmit Beamforming Design
Given the phase shift matrix $\boldsymbol{\Theta}_{l}$ and the IRS switch
vector $\boldsymbol{x}$, (P1) can be rewritten as
$\displaystyle(\text{P2}):\max_{\\{\boldsymbol{w}_{k}\\}}~{}$
$\displaystyle\sum\nolimits_{k=1}^{K}R_{k},$ (6a) s.t. $\displaystyle
R_{k}\geq\gamma,$ (6b)
$\displaystyle\sum\nolimits_{k=1}^{K}\mathrm{Tr}\left(\boldsymbol{w}_{k}\boldsymbol{w}_{k}^{H}\right)\leq
P.$ (6c)
In order to make the problem (P2) more tractable, we introduce two new
variables $\boldsymbol{W}_{k}=\boldsymbol{w}_{k}\boldsymbol{w}_{k}^{H}$, and
$\boldsymbol{a}^{H}_{k}=\sum\nolimits_{l=1}^{L}x_{l}\boldsymbol{h}_{kl}^{H}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}$.
Then, we have
$|\sum\nolimits_{l=1}^{L}x_{l}\boldsymbol{h}_{kl}^{H}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{k}|^{2}=\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{k})$,
and
$|\sum\nolimits_{l=1}^{L}x_{l}\boldsymbol{h}_{kl}^{H}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{i}|^{2}=\mathrm{Tr}(\boldsymbol{W}_{i}\boldsymbol{A}_{k})$,
with $\boldsymbol{A}_{k}=\boldsymbol{a}_{k}\boldsymbol{a}_{k}^{H}$ and
$\mathrm{rank}(\boldsymbol{W}_{k})=1$. Here, we exploit the semidefinite
relaxation (SDR) to drop the rank-one constraint
$\mathrm{rank}(\boldsymbol{W}_{k})=1$. Therefore, (P2) can be expressed as
$\displaystyle(\text{P3}):\max_{\\{\boldsymbol{W}_{k}\\}}~{}$
$\displaystyle\sum\nolimits_{k=1}^{K}\log_{2}\left(1+\frac{\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{k})}{\sum\nolimits_{i\neq
k}^{K}\mathrm{Tr}(\boldsymbol{W}_{i}\boldsymbol{A}_{k})+\sigma^{2}_{k}}\right),$
(7a) s.t.
$\displaystyle\log_{2}\left(1+\frac{\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{k})}{\sum\nolimits_{i\neq
k}^{K}\mathrm{Tr}(\boldsymbol{W}_{i}\boldsymbol{A}_{k})+\sigma^{2}_{k}}\right)\geq\gamma,$
(7b)
$\displaystyle\sum\nolimits_{k=1}^{K}\mathrm{Tr}\left(\boldsymbol{W}_{k}\right)\leq
P,$ (7c) $\displaystyle\boldsymbol{W}_{k}\succeq\boldsymbol{0}.$ (7d)
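The lifting used to pass from (P2) to (P3) rests on the identity $|\boldsymbol{a}_{k}^{H}\boldsymbol{w}_{k}|^{2}=\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{k})$ for rank-one $\boldsymbol{W}_{k}$ and $\boldsymbol{A}_{k}$, which can be sanity-checked numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
Nt = 8
w = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)
a = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)

W = np.outer(w, w.conj())   # W_k = w_k w_k^H, rank one by construction
A = np.outer(a, a.conj())   # A_k = a_k a_k^H

# |a^H w|^2 = Tr(W A): Tr(w w^H a a^H) = (a^H w)(w^H a) = |a^H w|^2.
assert np.isclose(abs(a.conj() @ w) ** 2, np.real(np.trace(W @ A)))
assert np.linalg.matrix_rank(W) == 1
```

Dropping the rank-one constraint (SDR) makes (P3) a relaxation; the constraint is restored afterwards in Step 7 of Algorithm 1.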
However, the problem (P3) is still non-convex due to the non-convex objective
function (7a). In order to transform (P3) into a convex problem, we introduce
new variables
$\displaystyle
e^{p_{i}}=\sum\nolimits_{k=1}^{K}\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{i})+\sigma_{i}^{2}$
(8) $\displaystyle e^{q_{i}}=\sum\nolimits_{k\neq
i}^{K}\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{i})+\sigma_{i}^{2}.$ (9)
Then, we have
$\displaystyle(\text{P4}):\max_{\\{\boldsymbol{W}_{k}\\},\\{p_{i}\\},\\{q_{i}\\}}~{}$
$\displaystyle\sum\nolimits_{i=1}^{K}\log_{2}\left(e^{p_{i}-q_{i}}\right),$
(10a) s.t. $\displaystyle(p_{i}-q_{i})\log_{2}(e)\geq\gamma,$ (10b)
$\displaystyle\sum\nolimits_{k=1}^{K}\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{i})+\sigma_{i}^{2}\geq
e^{p_{i}},$ (10c) $\displaystyle\sum\nolimits_{k\neq
i}^{K}\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{i})+\sigma_{i}^{2}\leq
e^{q_{i}},$ (10d)
$\displaystyle\sum\nolimits_{k=1}^{K}\mathrm{Tr}\left(\boldsymbol{W}_{k}\right)\leq
P,\boldsymbol{W}_{k}\succeq\boldsymbol{0},$ (10e)
$\displaystyle\mathrm{Tr}({\boldsymbol{W}_{k}\boldsymbol{A}_{i}})\geq 0.$
(10f)
We observe that
$\displaystyle\sum\nolimits_{i=1}^{K}\log_{2}\left(e^{p_{i}-q_{i}}\right)=\sum\nolimits_{i=1}^{K}(p_{i}-q_{i})\log_{2}(e),$
(11)
thus, the objective function (10a) is linear and hence concave.
Note that (10c) and (10d) are relaxations of the equalities (8) and (9);
nevertheless, the inequalities (10c) and (10d) hold with equality at the
optimal solution. The main reason is that the objective function of problem
(P4) is monotonic: maximizing (10a) drives $e^{p_{i}}$ as large, and
$e^{q_{i}}$ as small, as the constraints allow.
Next, we use the successive convex approximation (SCA) algorithm to solve the
problem (P4). The first-order Taylor expansion of $e^{q_{i}}$ at the point
$\bar{q}_{i}$ is given by
$\displaystyle e^{\bar{q}_{i}}+e^{\bar{q}_{i}}(q_{i}-\bar{q}_{i}),$ (12)
where $\bar{q}_{i}$ is feasible for problem (P4). Then the constraint in
(10d) can be rewritten as
$\displaystyle\sum\nolimits_{k\neq
i}^{K}\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{i})+\sigma_{i}^{2}\leq
e^{\bar{q}_{i}}+e^{\bar{q}_{i}}(q_{i}-\bar{q}_{i}).$ (13)
It is observed that (13) is convex since (12) is linear in $q_{i}$. Then,
by replacing (10d) with (13), we have
$\displaystyle(\text{P5}):\max_{\boldsymbol{W},\\{p_{i}\\},\\{q_{i}\\}}~{}$
$\displaystyle\sum\nolimits_{i=1}^{K}\log_{2}\left(e^{p_{i}-q_{i}}\right),$
(14a) s.t. $\displaystyle\sum\nolimits_{k\neq
i}^{K}\mathrm{Tr}(\boldsymbol{W}_{k}\boldsymbol{A}_{i})+\sigma_{i}^{2}$
$\displaystyle\leq e^{\bar{q}_{i}}+e^{\bar{q}_{i}}(q_{i}-\bar{q}_{i}),$ (14b)
$\displaystyle\text{(\ref{6-10b})},\text{(\ref{6-10c})},\text{(\ref{6-10e})},\text{(\ref{6-10f})}.$
(14c)
Note that (P5) is a convex optimization problem that can be solved by using
the convex optimization toolbox, e.g. CVX[12].
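The key property behind this SCA step is that the linearization (12) of the convex function $e^{q}$ is a global under-estimator, so replacing (10d) by (13) tightens the feasible set: every point feasible for (P5) remains feasible for (P4). A quick numerical check:

```python
import numpy as np

# First-order Taylor expansion (12) of e^q around a feasible point q_bar.
q_bar = 0.7
taylor = lambda q: np.exp(q_bar) + np.exp(q_bar) * (q - q_bar)

# Because e^q is convex, its tangent at q_bar lies below it everywhere,
# so the surrogate constraint (13) is a valid tightening of (10d).
for q in np.linspace(-3.0, 3.0, 61):
    assert taylor(q) <= np.exp(q) + 1e-12

# The bound is tight at the expansion point, which is what gives the
# SCA iterates their monotone improvement property.
assert np.isclose(taylor(q_bar), np.exp(q_bar))
```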
The SCA-based algorithm for solving (P5) is summarized in Algorithm 1. (In
practical mmWave systems, the transmitter is usually equipped with a hybrid
beamforming structure; after obtaining
$\boldsymbol{W}=[\boldsymbol{w}_{1},\cdots,\boldsymbol{w}_{K}]$ from Algorithm
1, the OMP algorithm can therefore be used to design the hybrid beamforming [13].)
1 Initialization: $t=0$; given $\boldsymbol{w}^{0}$ satisfying the
constraints, calculate $q^{0}_{i}$ based on (9) and let
$\bar{q}_{i}^{1}=q^{0}_{i}$.
2 Repeat
3 Solve the problem in (14) to obtain the optimal solution
$\\{\boldsymbol{W}_{k}^{t}\\}$ and $\\{q_{i}^{t}\\}$.
4 Update $\bar{q}_{i}^{t+1}=q_{i}^{t}$.
5 Set $t=t+1$.
6 Until The stopping criterion is met.
7 Output: Obtain $\boldsymbol{w}_{k}^{*}$ by decomposition of
$\boldsymbol{W}_{k}^{*}$ when the $\mathrm{rank}(\boldsymbol{W}_{k}^{*})=1$;
otherwise the Gaussian Randomization method is utilized to obtain a rank-one
approximation.
Algorithm 1 Proposed SCA-based Algorithm for Solving the Problem (P2).
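Step 7 of Algorithm 1 (rank-one extraction with a Gaussian-randomization fallback) can be sketched as below. The probe direction used to rank the randomization samples is a hypothetical stand-in for the actual sum-rate objective, which would be evaluated in the full algorithm:

```python
import numpy as np

def extract_beam(W, n_samples=200, rng=None):
    """Recover a beamforming vector w from an SDR solution W. If W is
    (numerically) rank one, return the scaled principal eigenvector;
    otherwise draw samples w ~ CN(0, W) and keep the one with the largest
    gain |a^H w|^2 along an illustrative probe direction a."""
    if rng is None:
        rng = np.random.default_rng(0)
    vals, vecs = np.linalg.eigh(W)          # eigenvalues in ascending order
    if vals[-1] / max(vals.sum(), 1e-12) > 1 - 1e-6:   # effectively rank one
        return np.sqrt(vals[-1]) * vecs[:, -1]
    # Gaussian randomization: w = V sqrt(Lambda) e with e ~ CN(0, I),
    # so that E[w w^H] = W.
    root = vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0)))
    n = W.shape[0]
    a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    best, best_gain = None, -1.0
    for _ in range(n_samples):
        e = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        w = root @ e
        gain = abs(a.conj() @ w) ** 2
        if gain > best_gain:
            best, best_gain = w, gain
    return best

# Rank-one input: the eigenvector branch recovers w up to a global phase.
w0 = np.array([1.0, 2.0, -1.0]) + 1j * np.array([0.5, 0.0, 1.0])
w_hat = extract_beam(np.outer(w0, w0.conj()))
assert np.isclose(abs(w_hat.conj() @ w0), np.linalg.norm(w0) ** 2, rtol=1e-6)
```

In practice the candidate returned by the randomization branch would be rescaled to satisfy the power constraint (7c) before use.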
### III-B Phase Shifts Optimization
Given the beamforming vectors $\\{\boldsymbol{w}_{k}\\}$ and the IRS switch
vector $\boldsymbol{x}$, the problem in (6) can be expressed as
$\displaystyle(\text{P6}):\max_{\boldsymbol{\lambda},\\{\boldsymbol{\Theta}_{l}\\}}~{}$
$\displaystyle\sum\nolimits_{k=1}^{K}\log_{2}(1+\lambda_{k})$ (15a) s.t.
$\displaystyle\lambda_{k}\leq\frac{|\sum\nolimits_{l=1}^{L}\boldsymbol{h}^{H}_{kl}x_{l}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{k}|^{2}}{\sum\nolimits_{i\neq
k}^{K}|\sum\nolimits_{l=1}^{L}\boldsymbol{h}^{H}_{kl}x_{l}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{i}|^{2}+\sigma_{k}^{2}},$
(15b) $\displaystyle\lambda_{k}\geq 2^{\gamma}-1,$ (15c)
$\displaystyle|\theta_{l,j}|=1,$ (15d)
where $\boldsymbol{\lambda}=[\lambda_{1},\cdots,\lambda_{K}]^{T}$ and each
$\lambda_{k}$ is a slack variable; the constraint (15b) always holds with
equality at the optimal solution.
Let $u_{ln}=e^{j\theta_{ln}}$,
$\boldsymbol{u}_{l}=[u_{l1},\cdots,u_{lN_{r}}]^{T}$, and
$\boldsymbol{u}=[u_{11},\cdots,u_{1N_{r}},\cdots,u_{L1},\cdots,u_{LN_{r}}]^{T}$.
Since
$x_{l}\boldsymbol{h}_{kl}^{H}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{i}=\boldsymbol{v}_{kli}^{H}\boldsymbol{u}_{l}$
with
$\boldsymbol{v}_{kli}=x_{l}(\mathrm{diag}(\boldsymbol{h}_{kl}^{H})\boldsymbol{G}_{l}\boldsymbol{w}_{i})^{*}$,
the constraint (15b) can be rewritten as
$\displaystyle\lambda_{k}\leq\frac{|\boldsymbol{v}_{kk}^{H}\boldsymbol{u}|^{2}}{\sum\nolimits_{i\neq
k}^{K}|\boldsymbol{v}_{ki}^{H}\boldsymbol{u}|^{2}+\sigma^{2}},$ (16)
where
$\boldsymbol{v}_{ki}=[\boldsymbol{v}_{ki1},\cdots,\boldsymbol{v}_{kiL}]$.
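The change of variables $x_{l}\boldsymbol{h}_{kl}^{H}\boldsymbol{\Theta}_{l}\boldsymbol{G}_{l}\boldsymbol{w}_{i}=\boldsymbol{v}_{kli}^{H}\boldsymbol{u}_{l}$ can be verified numerically with random stand-in data (dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, Nr = 8, 16
x_l = 1
h = (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) / np.sqrt(2)
Gl = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
w = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)
theta = rng.uniform(0.0, 2.0 * np.pi, Nr)
u = np.exp(1j * theta)           # phase vector u_l

# Left-hand side: x_l h^H Theta_l G_l w with Theta_l = diag(u).
lhs = x_l * h.conj() @ np.diag(u) @ Gl @ w

# Right-hand side: v^H u with v = x_l (diag(h^H) G_l w)^*.
v = x_l * np.conj(np.diag(h.conj()) @ Gl @ w)
rhs = v.conj() @ u

assert np.isclose(lhs, rhs)
```

This identity is what turns the phase-shift optimization into a problem over the vector $\boldsymbol{u}$ with unit-modulus entries.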
Then, (16) can be transformed into
$\displaystyle\lambda_{k}(\sum\nolimits_{i\neq
k}^{K}|\boldsymbol{v}_{ki}^{H}\boldsymbol{u}|^{2}+\sigma^{2})-|\boldsymbol{v}_{kk}^{H}\boldsymbol{u}|^{2}\leq
0.$ (17)
Therefore, the problem (P6) can be rewritten as
$\displaystyle(\text{P7}):\max_{\boldsymbol{u},\boldsymbol{\lambda}}~{}$
$\displaystyle\sum\nolimits_{k=1}^{K}\log_{2}(1+\lambda_{k}),$ (18a) s.t.
$\displaystyle|u_{ln}|=1,$ (18b)
$\displaystyle\text{(\ref{6-16c})},\text{(\ref{6-17m})}.$ (18c)
By introducing the penalty factor, (18) can be reformulated as
$\displaystyle(\text{P8}):\max_{\boldsymbol{u},\boldsymbol{\lambda}}~{}$
$\displaystyle\sum\nolimits_{k=1}^{K}\log_{2}(1+\lambda_{k})$
$\displaystyle+\mu\sum\nolimits_{l=1}^{L}\sum\nolimits_{n=1}^{N_{r}}(|u_{ln}|^{2}-1),$
(19a) s.t. $\displaystyle|u_{ln}|\leq 1,$ (19b)
$\displaystyle\text{(\ref{6-16c})},\text{(\ref{6-17m})},$ (19c)
where $\mu$ is a large positive constant. To handle the non-convex parts in
(17) and (19a), the SCA algorithm is used. The first-order Taylor expansions
of (19a) and (17) can be respectively expressed as
$\displaystyle\sum\nolimits_{k=1}^{K}\log_{2}(1+\lambda_{k})+\mu\sum\nolimits_{l=1}^{L}\sum\nolimits_{n=1}^{N_{r}}2\mathrm{Re}[(u_{ln}^{t})^{*}(u_{ln}-u_{ln}^{t})],$
(20)
and
$\displaystyle\lambda_{k}(\sum\nolimits_{i\neq
k}^{K}|\boldsymbol{v}_{ki}^{H}\boldsymbol{u}|^{2}+\sigma^{2})-|\boldsymbol{v}_{kk}^{H}\boldsymbol{u}^{t}|^{2}-2\mathrm{Re}[(\boldsymbol{u}^{H})^{t}\boldsymbol{v}_{kk}\boldsymbol{v}_{kk}^{H}$
$\displaystyle(\boldsymbol{u}-\boldsymbol{u}^{t})]\leq 0.$ (21)
With the above approximations at hand, the non-convex problem (P8) can be
rewritten as
$\displaystyle(\text{P9}):\max_{\boldsymbol{u},\boldsymbol{\lambda}}~{}$
$\displaystyle\sum\nolimits_{k=1}^{K}\log_{2}(1+\lambda_{k})+\mu\sum\nolimits_{l=1}^{L}\sum\nolimits_{n=1}^{N_{r}}2\mathrm{Re}[(u_{ln}^{t})^{*}(u_{ln}-u_{ln}^{t})],$
(22a) s.t.
$\displaystyle\text{(\ref{6-19b})},\text{(\ref{6-16c})},\text{(\ref{6-20m})}.$
(22b)
The procedure for solving the problem (P9) is similar to Algorithm 1.
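Analogously to (12), the linearization of $|\boldsymbol{v}_{kk}^{H}\boldsymbol{u}|^{2}$ used in (21) is a global under-estimator, because the quadratic form is convex in $\boldsymbol{u}$. This can be checked numerically with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 12
v = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
u_t = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))   # expansion point

def exact(u):
    """|v^H u|^2, a convex (PSD Hermitian) quadratic form in u."""
    return abs(v.conj() @ u) ** 2

def linearized(u):
    """First-order expansion of |v^H u|^2 at u_t, as used in (21)."""
    return exact(u_t) + 2.0 * np.real(
        u_t.conj() @ np.outer(v, v.conj()) @ (u - u_t)
    )

# The gap exact(u) - linearized(u) equals (u-u_t)^H v v^H (u-u_t) >= 0,
# so the linearization is a global under-estimator, tight at u_t.
for _ in range(100):
    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert linearized(u) <= exact(u) + 1e-9

assert np.isclose(linearized(u_t), exact(u_t))
```

Replacing the concave term $-|\boldsymbol{v}_{kk}^{H}\boldsymbol{u}|^{2}$ in (17) with minus this under-estimator therefore yields the convex surrogate constraint in (21).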
### III-C IRS Switch Optimization
Given the phase shift matrices $\\{\boldsymbol{\Theta}_{l}\\}$ and the
beamforming vectors $\\{\boldsymbol{w}_{k}\\}$, problem (6) is a nonlinear
integer optimization problem with respect to $\boldsymbol{x}$. Since
nonlinear integer optimization is NP-hard in general, it is difficult
to obtain the globally optimal solution. Thus, we propose a greedy method,
which is summarized in Algorithm 2.
1 Initialization $\mathcal{S}=\\{1,\cdots,L\\}$ and $x_{l}=1$, $\forall
l\in\mathcal{S}$.
2 Calculate the objective function value in (6a), which is denoted by $V_{0}$.
3 If $\mathcal{S}\neq\emptyset$.
4 For $l\in\mathcal{S}$.
5 Turn off the $l$th IRS, i.e., $x_{l}=0$, $x_{m}=1$, $x_{n}=0$, $\forall
m\in\mathcal{S}\setminus\\{l\\}$, $n\in\mathcal{L}\setminus\mathcal{S}$.
6 When $R_{k}\geq\gamma$, $\forall~{}k$, calculate (6a), which is denoted by
$V_{l}$.
7 When $R_{k}<\gamma$, $\exists~{}k$, set $V_{l}=0$.
8 End.
9 Calculate $k=\arg\max_{j\in\mathcal{S}\bigcup\\{0\\}}V_{j}$.
10 If $k\neq 0$
11 Set $\mathcal{S}=\mathcal{S}\setminus\\{k\\}$ and $V_{0}=V_{k}$.
12 Else
13 Break and jump to Step 16.
14 End.
15 End.
16 Output:
$x_{l}=1,x_{n}=0,~{}\forall~{}l\in\mathcal{S},n\in\mathcal{L}\setminus\mathcal{S}$.
Algorithm 2 Proposed Greedy Algorithm for $\boldsymbol{x}$.
In Algorithm 2, $\mathcal{S}$ denotes the set of active IRSs, and $V_{0}$ is
initialized as the objective function value. In step 5, we deactivate one of the
IRSs and obtain a candidate IRS switch vector $\boldsymbol{x}$. If the obtained
$\boldsymbol{x}$ is feasible, the value of the objective function is
calculated based on (6a) in step 6; otherwise, the value is set to 0 in step
7. Next, the sum-rates of the newly obtained feasible candidates are compared
with the current value. Here, $k\neq 0$ means that the sum-rate can be
increased by deactivation, and that the highest sum-rate is achieved by
deactivating IRS $k$ rather than any other IRS; the index $k$ is then removed
from the active set $\mathcal{S}$, and the current sum-rate value is updated
in step 11. Conversely, $k=0$ means that deactivating any IRS would reduce
the sum-rate, so the iteration is terminated.
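The deactivation loop of Algorithm 2 can be sketched generically; `sum_rate` and `feasible` below are hypothetical black-box stand-ins for evaluating (6a) and the per-user rate constraints over a set of active IRS indices:

```python
def greedy_switch(L, sum_rate, feasible):
    """Greedy IRS deactivation in the spirit of Algorithm 2: starting from
    all IRSs active, repeatedly switch off the single IRS whose removal
    yields the largest feasible sum-rate, and stop as soon as no
    deactivation improves on the current value."""
    active = set(range(L))
    best = sum_rate(active) if feasible(active) else 0.0
    while active:
        # Steps 4-8: evaluate every single-IRS deactivation candidate.
        candidates = {
            l: (sum_rate(active - {l}) if feasible(active - {l}) else 0.0)
            for l in active
        }
        l_star = max(candidates, key=candidates.get)
        if candidates[l_star] <= best:
            break  # k = 0 case: no deactivation increases the sum-rate
        active.discard(l_star)      # steps 10-11: remove l_star, update V_0
        best = candidates[l_star]
    return active, best

# Toy model in which IRS 1 only adds interference:
# rate(S) = 2|S| - 3*[1 in S]; feasible whenever at least one IRS is on.
rate = lambda S: 2 * len(S) - 3 * (1 in S)
active, best = greedy_switch(3, rate, lambda S: len(S) >= 1)
assert active == {0, 2} and best == 4
```

The loop performs at most $L$ outer iterations of $O(L)$ candidate evaluations each, matching the $L^{2}$ iteration count used in the complexity analysis below.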
Finally, the proposed AO algorithm is summarized in Algorithm 3.
1 Initialization $\\{\boldsymbol{w}_{k}^{0}\\}$, $\boldsymbol{x}^{0}$, and
$\\{\boldsymbol{\Theta}_{l}^{0}\\}$.
2 Repeat
3 Solve (6) via Algorithm 1.
4 Solve (15) via SCA-based algorithm.
5 Obtain $\boldsymbol{x}$ via Algorithm 2.
6 Until the objective value (5a) converges.
7 Output: $\\{\boldsymbol{w}_{k}\\}$, $\boldsymbol{x}$, and $\\{\boldsymbol{\Theta}_{l}\\}$.
Algorithm 3 Proposed AO Algorithm for Problem (5).
### III-D Computational Complexity
According to [14], the total complexity of the SCA algorithm for solving the
problem in (14) is
$\mathcal{O}(S_{1}(2KN_{t}^{2}+2K)^{3.5}\log_{2}(\frac{1}{\epsilon_{1}}))$,
where $S_{1}$ is the number of iterations. With similar analysis, the total
complexity of the SCA algorithm for solving the beamforming optimization
problem in (22) is
$\mathcal{O}(S_{2}(2LN_{r}+K)^{3.5}\log_{2}(\frac{1}{\epsilon_{2}}))$, where
$\epsilon_{1}$ and $\epsilon_{2}$ are the accuracies of the SCA algorithm for
solving problem (14) and (22), respectively. $S_{2}$ is the number of
iterations. The computational complexity of the proposed greedy algorithm for obtaining the IRS switch vector is $\mathcal{O}(L^{3}N_{r}N_{t})$, where $L$ is the number of variables and $L^{2}$ is the number of iterations for Algorithm 2. As a result, the total complexity of the proposed JSTPO algorithm for
solving problem (5) is
$\mathcal{O}(TS_{1}(2KN_{t}^{2}+2K)^{3.5}\log_{2}(\frac{1}{\epsilon_{1}})+TS_{2}(2LN_{r}+K)^{3.5}\log_{2}(\frac{1}{\epsilon_{2}})+TL^{3}N_{r}N_{t})$,
where $T$ is the number of iterations for proposed AO algorithm.
## IV Numerical Results
In this section, numerical results are provided to demonstrate the
effectiveness of the proposed scheme. As shown in Fig. 2, it is assumed that
the BS is located at $(0,0,0)$. Three IRSs are located at $(0,30,20)$, $(0,60,20)$, and $(0,90,20)$ in meters, respectively, while three users are located at $(0,30,0)$, $(0,60,0)$, and $(0,90,0)$ in meters, respectively. The
large-scale fading coefficients $\beta_{l}$ and $\beta_{kl}$ are modeled as $\beta_{0}+10c\log_{10}(d)$, where $d$ is the propagation distance of the signal, and the path-loss exponents of the LoS and NLoS links are set as $c=2$ and $c=5$, respectively. In the simulations, we set $N_{t}=16$, $N_{r}=16$, $L=3$,
$\beta_{0}=61.4$ dB, and $\sigma_{k}^{2}=-100$ dBm. We compare the proposed
scheme with the Single-IRS (S-IRS) scheme and the all-active-IRS scheme (D-IRSs without switches). For the S-IRS scheme, two scenarios are considered, i.e., the IRS located at $(0,60,20)$ and at $(0,90,20)$ in meters. In the S-IRS scheme, the number of reflecting elements of the single IRS is set to the total number of reflecting elements of all IRSs in the D-IRSs-aided mmWave system. In
addition, we evaluate the energy efficiency of the proposed scheme. In this
paper, the energy efficiency is defined as
$\displaystyle\eta_{EE}=\frac{R_{sum}}{P_{W}+N_{RF}P_{RF}+\sum\nolimits_{l=1}^{L}x_{l}N_{r}P_{IRS}},$
(23)
where $R_{sum}=\sum\nolimits_{k=1}^{K}R_{k}$ is the sum-rate,
$P_{W}=\sum\nolimits_{k=1}^{K}\mathrm{Tr}(\boldsymbol{w}_{k}\boldsymbol{w}_{k}^{H})$
is the transmit power, $P_{RF}$ is the RF chain power, and $N_{r}P_{IRS}$ is
the power consumption of each IRS. Also, we set $P_{RF}$ and $P_{IRS}$ as
$250$ mW and $10$ mW, respectively[15].
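For concreteness, (23) can be evaluated directly. In the sketch below (ours), the sum-rate and transmit power are made-up sample numbers, while $P_{RF}=250$ mW and $P_{IRS}=10$ mW follow the simulation setup, expressed in watts.

```python
def energy_efficiency(R_sum, P_W, N_RF, P_RF, x, N_r, P_IRS):
    """eta_EE of (23): sum-rate divided by total power consumption.
    x is the IRS switch vector (x_l = 1 if IRS l is active)."""
    P_total = P_W + N_RF * P_RF + sum(x_l * N_r * P_IRS for x_l in x)
    return R_sum / P_total

# Illustrative call: R_sum and P_W are sample values, two of three
# IRSs active, N_r = 16 elements per IRS, N_RF = 8 RF chains.
eta = energy_efficiency(R_sum=10.0, P_W=1.0, N_RF=8, P_RF=0.25,
                        x=[1, 1, 0], N_r=16, P_IRS=0.01)
```

Because the denominator grows linearly in the transmit power while the sum-rate grows only logarithmically, $\eta_{EE}$ is not monotonically increasing in $P$.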
Figure 2: Simulation Setup.
(a) $N_{r}=16$, $N_{RF}=8$
(b) $N_{r}=16$, $N_{RF}=8$
Figure 3: Sum-rate versus transmit power $P$
(a) $P=5~{}dBm$
(b) $P=5~{}dBm$
Figure 4: Sum-rate versus the number of reflecting elements
(a) $N_{r}=16$, $N_{RF}=8$
(b) $N_{r}=16$, $N_{RF}=8$
Figure 5: Energy efficiency versus transmit power $P$
In the S-IRS scheme, the IRS is located at $(0,60,20)$ and $(0,90,20)$ in Fig. 3(a) and Fig. 3(b), respectively. The sum-rate versus transmit power $P$ is shown in Fig. 3(a) and Fig. 3(b). Based on the optimal beamformer obtained, it is easily shown that the sum-rate is an increasing function of $P$; thus, as $P$ increases, the sum-rate increases monotonically. In addition, as shown in Fig. 3(a) and Fig. 3(b), the sum-rate of the IRS-aided mmWave system with the HBF structure is lower than that of the IRS-aided mmWave system with digital beamforming (DBF). The reason is that the HBF obtained by the OMP algorithm is an approximate solution, which degrades the sum-rate.
In the S-IRS scheme, the IRS is located at $(0,60,20)$ and $(0,90,20)$ in Fig. 4(a) and Fig. 4(b), respectively. Fig. 4(a) and Fig. 4(b) reveal that as the number of reflecting elements increases from $16$ to $96$, the sum-rate increases monotonically. This is because more reflecting elements result in sharper reflecting beams, thereby enhancing the sum-rate. Finally, from Fig. 4(a) and Fig. 4(b), the D-IRSs scheme can increase the sum-rate by up to 40% compared with the mmWave system with the S-IRS scheme, benefiting from the distributed deployment. Compared to the mmWave system with only one IRS, the D-IRSs scheme can provide robust data transmission since the different IRSs can be deployed geometrically apart from each other. Meanwhile, the D-IRSs scheme can provide multiple paths with low path loss for the received signals, which increases the received signal strength.
Fig. 5(a) and Fig. 5(b) show the variation of the energy efficiency with the transmit power. In the S-IRS scheme, the IRS is located at $(0,60,20)$ in Fig. 5(a) and at $(0,90,20)$ in Fig. 5(b). The simulation results show that, as the BS transmit power increases, the energy efficiency of all the schemes first increases and then tends to be stable. This is because the energy efficiency is not a monotonically increasing function of the transmit power, as shown in (23). In addition, from the figure, we can find that the energy efficiency of the proposed D-IRSs scheme with HBF is larger than that of the D-IRSs scheme with DBF, because HBF requires fewer RF chains than DBF, which results in lower RF power consumption. Compared with the all-active-IRS scheme and the S-IRS scheme, the proposed D-IRSs scheme has higher energy efficiency; this is because the switches of the D-IRSs can adaptively adjust the number of active IRSs while increasing the sum-rate of the system, which reduces the energy consumption of the IRSs and increases the energy efficiency.
## V Conclusion
In this paper, we have investigated the sum-rate maximization problem for a mmWave communication system with D-IRSs. The IRS phase shifts, the transmit beamforming at the BS, and the IRS switch vector have been jointly optimized to maximize the sum-rate, subject to the constraints on the transmit power, the minimum user rate, and the unit-modulus phase shifts. To solve this problem, we have proposed an AO algorithm for the multi-user case. In particular, the joint design of the transmit beamforming and phase shifts was solved by using the SCA algorithm, while the IRS switch vector was obtained by using the greedy method. Finally, numerical results have shown that the proposed D-IRSs-aided scheme outperforms the S-IRS scheme in terms of sum-rate and energy efficiency.
## References
* [1] M. Di Renzo, M. Debbah, D.-T. Phan-Huy, A. Zappone, M.-S. Alouini, C. Yuen, V. Sciancalepore, G. C. Alexandropoulos, J. Hoydis, H. Gacanin _et al._ , “Smart radio environments empowered by reconfigurable AI meta-surfaces: An idea whose time has come,” _EURASIP J. Wireless Commun. Netw._ , vol. 2019, no. 1, pp. 1–20, 2019.
* [2] X. Lu, W. Yang, X. Guan, Q. Wu, and Y. Cai, “Robust and secure beamforming for intelligent reflecting surface aided mmwave MISO systems,” _arXiv preprint arXiv:2003.11195_ , 2020.
* [3] Z. Pi and F. Khan, “An introduction to millimeter-wave mobile broadband systems,” _IEEE Commun. Mag._ , vol. 49, no. 6, pp. 101–107, 2011.
* [4] G. Gui, H. Sari, and E. Biglieri, “A new definition of fairness for non-orthogonal multiple access,” _IEEE Commun. Lett._ , vol. 23, no. 7, pp. 1267–1271, 2019.
* [5] G. Gui, M. Liu, F. Tang, N. Kato, and F. Adachi, “6G: Opening new horizons for integration of comfort, security and intelligence,” _IEEE Wireless Commun._ , 2020.
* [6] L. Dai, B. Wang, M. Peng, and S. Chen, “Hybrid precoding-based millimeter-wave massive MIMO-NOMA with simultaneous wireless information and power transfer,” _IEEE J. Sel. Areas Commun._ , vol. 37, no. 1, pp. 131–141, 2018.
* [7] Y. Cao and T. Lv, “Intelligent reflecting surface enhanced resilient design for MEC offloading over millimeter wave links,” _arXiv preprint arXiv:1912.06361_ , 2019.
* [8] Q. Wu and R. Zhang, “Towards smart and reconfigurable environment: Intelligent reflecting surface aided wireless network,” _IEEE Commun. Mag._ , 2019.
* [9] ——, “Intelligent reflecting surface enhanced wireless network via joint active and passive beamforming,” _IEEE Trans. Wireless Commun._ , vol. 18, no. 11, pp. 5394–5409, 2019.
* [10] H. Guo, Y.-C. Liang, J. Chen, and E. G. Larsson, “Weighted sum-rate optimization for intelligent reflecting surface enhanced wireless networks,” _arXiv preprint arXiv:1905.07920_ , 2019.
* [11] C. Pan, H. Ren, K. Wang, W. Xu, M. Elkashlan, A. Nallanathan, and L. Hanzo, “Multicell MIMO communications relying on intelligent reflecting surfaces,” _IEEE Trans. Wireless Commun._ , 2020.
* [12] S. Boyd and L. Vandenberghe, _Convex Optimization_. Cambridge university press, 2004.
* [13] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” _IEEE Trans. Inf. Theory_ , vol. 53, no. 12, pp. 4655–4666, 2007.
* [14] M. Grant, S. Boyd, and Y. Ye, _CVX: Matlab software for disciplined convex programming_. Stanford University Press, 2009.
* [15] C. Huang, A. Zappone, G. C. Alexandropoulos, M. Debbah, and C. Yuen, “Reconfigurable intelligent surfaces for energy efficiency in wireless communication,” _IEEE Trans. Wireless Commun._ , vol. 18, no. 8, pp. 4157–4170, 2019.
# Another Family of Permutations Counted by the Bell Numbers
Fufa Beyene 111Corresponding author. Addis Ababa University, Addis Ababa,
Ethiopia and CoRS; email<EMAIL_ADDRESS>Roberto Mantaci IRIF,
Université de Paris, Paris, France and CoRS; email<EMAIL_ADDRESS>
###### Abstract
Using a permutation code introduced by Rakotondrajao and the second author, we
associate with every set partition of $[n]$ a permutation over $[n]$, thus
defining a class of permutations whose size is the $n$-th Bell number. We
characterize the permutations belonging to this class and we study the
distribution of weak exceedances over these permutations, which turns out to
be enumerated by the Stirling numbers of the second kind. We provide a direct
bijection between our class of permutations and another equisized class of
permutations introduced by Poneti and Vajnovszki.
###### keywords:
Permutations, Set Partitions, Codes, Subexceedant functions, Exceedances, Bell
numbers, Stirling numbers of the second kind.
## 1 Introduction
Permutations and set partitions are among the richest objects in
combinatorics, they have been enumerated according to several criteria of
interest.
There is a plethora of studies of statistics on permutations, many of which are counted by Eulerian or Mahonian numbers, such as descents, exceedances, right-to-left minima or maxima, inversions, etc. On the other hand, we recall the two most basic enumerations for set partitions: the total number of set partitions of $[n]$ is the Bell number [11] and the number of set partitions
of $[n]$ with $k$ blocks is the Stirling number of the second kind, as given
in [3, 12].
Both permutations and set partitions can be coded by subexceedant functions,
that is, functions $f$ over $[n]$ such that $1\leq f(i)\leq i$ for all
$i\in[n]$ (in some contexts it is rather required that $0\leq f(i)\leq i-1$).
Some permutation codes with subexceedant functions are very well known (Lehmer
code or inversion table, Denert code, …[6, 4, 5]). On the other hand, a way to
code set partitions with subexceedant functions is provided by Mansour’s
definition of canonical form for a set partition $\pi$ ([7]). In this form any
integer $i\in[n]$ is coded with the number of the block of $\pi$ where $i$
belongs when $\pi$ is written in standard form, that is, the elements in each
block are arranged increasingly and the blocks are arranged in increasing
order of their first elements. In fact, canonical forms of set partitions are
restricted growth functions (RGF), a particular case of subexceedant
functions.
Several properties of set partitions or permutations can easily be read off their corresponding codes; this makes it possible to prove some results elegantly by reasoning on the codes rather than on the coded objects themselves, see for instance the nice article of Baril and Vajnovszki [1], and also the article of Foata and Zeilberger [5].
Furthermore, these codes are also useful to implement efficient algorithms for
the exhaustive generation of the corresponding class of objects. For instance,
M. Orlov [9] used the representation of set partitions as RGFs to implement an
algorithm to generate all set partitions in constant space and amortized time
complexity.
F. Rakotondrajao and the second author in [8] defined a new way of coding
permutations with subexceedant functions. This code associates with a
subexceedant function $f$ with the permutation $\sigma=\tilde{\phi}(f)$
defined by the product of transpositions (the leftmost always acts first):
$\sigma=(n\leavevmode\nobreak\ f(n))\leavevmode\nobreak\
(n-1\leavevmode\nobreak\ f(n-1))\leavevmode\nobreak\
\cdots\leavevmode\nobreak\ (1\leavevmode\nobreak\ f(1)).$
We studied this code further in [2], where we gave a new interpretation of it based on the action and the cycle structure of the permutation. In that work we also introduced the definition of _inom_ , which will be recalled in Section 2 and after which we propose to call this code the “ _inom code_ ”.
In this paper we associate bijectively a permutation on $[n]$ with every set
partition of $[n]$ by composing the canonical form and the inom code. We
obtain this way a family of permutations counted by the Bell numbers and we
present some properties and some recurrence relations satisfied by these
objects.
We show in particular in Section 3 that the permutations belonging to this class can be characterised combinatorially and that the distribution of the weak exceedance statistic on this class is the same as the distribution of the number of blocks on set partitions, and is therefore given by the Stirling numbers of the second kind.
In Section 4, we provide a direct bijection between each partition and its
corresponding permutation (without passing through canonical form and inom
code).
Finally, in Section 5, we provide a bijection between our family of
permutations and another Bell-counted class of permutations introduced by
Poneti and Vajnovszki [10].
## 2 Definitions, Notations and Preliminaries
### 2.1 Subexceedant functions
###### Definition 2.1.
A function $f$ over $[n]$ is said to be subexceedant if $1\leq f(i)\leq i$ for
all $i$, $1\leq i\leq n$ (in some contexts it is required that $0\leq f(i)\leq
i-1$).
###### Notation 2.1.
We denote by $F_{n}$ the set of all subexceedant functions over $[n]$, and if
$f$ is a function over $[n]$, we often denote by $f_{i}$ the value $f(i)$ and
write $f$ as the word $f_{1}f_{2}\ldots f_{n}$.
Subexceedant functions can be used to code permutations; the inversion table and the Denert table are two examples. F. Rakotondrajao and the second author defined in [8] a new bijection $\phi$ associating with each subexceedant function $f$ a permutation $\sigma=\phi(f)$ defined as the product of transpositions (the product is from left to right):
$\sigma=(1\leavevmode\nobreak\ f_{1})\leavevmode\nobreak\
(2\leavevmode\nobreak\ f_{2})\leavevmode\nobreak\ \cdots\leavevmode\nobreak\
(n\leavevmode\nobreak\ f_{n}).$
If $f_{i}=i$, then $(i\leavevmode\nobreak\ f_{i})=(i)$ does not represent a
transposition but the identity permutation.
Further, a variation of this bijection associates with a subexceedant function
$f$ the permutation $\sigma=\tilde{\phi}(f)$ defined by the product of
transpositions (the product is from left to right):
$\sigma=(n\leavevmode\nobreak\ f_{n})\leavevmode\nobreak\
(n-1\leavevmode\nobreak\ f_{n-1})\leavevmode\nobreak\
\cdots\leavevmode\nobreak\ (1\leavevmode\nobreak\ f_{1}).$
For example, take $f=121132342$. Then $\phi(f)=568179342$ while
$\tilde{\phi}(f)=497812536$.
In this paper we will work with the variation $\tilde{\phi}$.
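As an illustration (ours, not code from [8] or [2]), both bijections can be implemented directly from their definitions as products of transpositions; the script below reproduces the example $f=121132342$.

```python
def apply_transpositions(x, transpositions):
    """Image of x under a product of transpositions, the leftmost
    transposition in the list acting first."""
    for a, b in transpositions:
        if x == a:
            x = b
        elif x == b:
            x = a
    return x

def phi(f):
    """phi: apply (1 f_1), (2 f_2), ..., (n f_n), in that order."""
    n = len(f)
    trans = [(i, f[i - 1]) for i in range(1, n + 1)]
    return [apply_transpositions(x, trans) for x in range(1, n + 1)]

def phi_tilde(f):
    """phi-tilde: apply (n f_n), (n-1 f_{n-1}), ..., (1 f_1), in that order."""
    n = len(f)
    trans = [(i, f[i - 1]) for i in range(n, 0, -1)]
    return [apply_transpositions(x, trans) for x in range(1, n + 1)]

f = [1, 2, 1, 1, 3, 2, 3, 4, 2]   # the example f = 121132342:
# phi(f) gives 568179342 and phi_tilde(f) gives 497812536.
```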
In a previous work ([2]) we gave the following :
###### Definition 2.2.
Let $\sigma=\sigma(1)\sigma(2)\cdots\sigma(n)\in\mathfrak{S}_{n}$. Then the
inverse nearest orbital minorant $(inom)$ of $i\in[n]$ is the integer
$j=\sigma^{-t}(i)\leq i$ with $t\geq 1$ chosen as small as possible.
###### Example 2.1.
Let $\sigma=10\leavevmode\nobreak\ 6\leavevmode\nobreak\ 8\leavevmode\nobreak\
5\leavevmode\nobreak\ 1\leavevmode\nobreak\ 4\leavevmode\nobreak\
9\leavevmode\nobreak\ 3\leavevmode\nobreak\ 2\leavevmode\nobreak\
7=(1\leavevmode\nobreak\ 10\leavevmode\nobreak\ 7\leavevmode\nobreak\
9\leavevmode\nobreak\ 2\leavevmode\nobreak\ 6\leavevmode\nobreak\
4\leavevmode\nobreak\ 5)(3\leavevmode\nobreak\ 8)$. Then
$\begin{array}[]{|c|c|c|c|c|c|c|c|c|c|c|}\hline\cr x&1&2&3&4&5&6&7&8&9&10\\\
\hline\cr inom(x)&1&1&3&2&4&2&1&3&7&1\\\ \hline\cr\end{array}$
This can be clearer with a picture where the permutation is represented as a
union of cyclic graphs. The blue continuous arcs $(i,\sigma(i))$ represent the
action of the permutation, the red dashed arcs $(i,inom(i))$ represent the
action of the corresponding subexceedant function.
Figure 1: $\sigma$ and $inom$
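The computation of $inom$ can be sketched as follows (an illustration of Definition 2.2, ours, not code from [2]); it reproduces the table of Example 2.1 by iterating $\sigma^{-1}$ until the orbit first drops to a value at most $i$.

```python
def inom(sigma, i):
    """Inverse nearest orbital minorant of i: the first iterate
    sigma^{-t}(i), t >= 1, that is <= i.  sigma[x-1] = sigma(x)."""
    inv = {v: x for x, v in enumerate(sigma, 1)}   # sigma^{-1}
    j = inv[i]
    while j > i:
        j = inv[j]
    return j

sigma = [10, 6, 8, 5, 1, 4, 9, 3, 2, 7]            # Example 2.1
table = [inom(sigma, x) for x in range(1, 11)]     # row inom(x) of the table
```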
###### Definition 2.3.
We call _inom code_ the bijection $\tilde{\phi}^{-1}$.
In ([2]), we also proved the following:
###### Theorem 2.1.
If $\tilde{\phi}^{-1}(\sigma)=f=f_{1}f_{2}...f_{n}$, then $f(i)=inom(i)$.
###### Notation 2.2.
Let $f=f_{1}f_{2}...f_{n}\in F_{n}$ (recall Notation 2.1). The set of images of $f$ is denoted by $Im(f)$ and its cardinality by $IMA(f)$. For instance, if $f=121132342$, then $Im(f)=\\{1,2,3,4\\}$ and $IMA(f)=4$.
### 2.2 Set Partitions
###### Definition 2.4.
Let $S=[n]$, the set of the first $n$ positive integers. A set partition $\pi$
of $S$ is defined as a collection $B_{1},\ldots,B_{k}$ of nonempty disjoint
subsets such that $\cup_{i=1}^{k}B_{i}=S$. The subsets $B_{i}$ will be
referred to as “blocks”.
The block representation of a set partition $\pi$ is:
$\pi=B_{1}/B_{2}/\ldots/B_{k}$.
###### Definition 2.5.
The block representation of a set partition is said to be standard if the
blocks $B_{1},\ldots,B_{k}$ are sorted in such way that
$min(B_{1})<min(B_{2})<\cdots<min(B_{k})$ and if the elements of every block
are arranged in an increasing order.
We consider set partitions only in their standard representation. The
condition on the order of the blocks and the arrangement of the integers in
each block implies that the standard representation of a set partition is
unique and that two different set partitions have different standard
representations.
###### Remark 2.1.
Note that in the standard representation, the integer $1$ is always in the
first block, $2$ is in one of the first two blocks, and $3$ is in any one of
the first three blocks. Indeed, it is easy to show that in the standard
representation of a set partition $\pi$ of $[n]$,
$\displaystyle\mbox{every element}\leavevmode\nobreak\ i\leavevmode\nobreak\
\mbox{of}\leavevmode\nobreak\ [n]\leavevmode\nobreak\ \mbox{is necessarily in
one of the first}\leavevmode\nobreak\ i\leavevmode\nobreak\ \mbox{blocks.}$
(1)
###### Definition 2.6.
The canonical form of a set partition of $[n]$ is an $n$-tuple indicating the block of the standard representation in which each integer occurs, that is, $f=\langle f_{1},f_{2},\ldots,f_{n}\rangle$ such that $j\in B_{f_{j}}$ for all $j$ with $1\leq j\leq n$.
###### Example 2.2.
The sequences of canonical forms corresponding to the set partitions of $[3]$
are :
$\langle 1,1,1\rangle,\langle 1,1,2\rangle,\langle 1,2,2\rangle,\langle
1,2,1\rangle,\langle 1,2,3\rangle$.
Note that the canonical form of a set partition is a subexceedant function, but not all subexceedant functions are canonical forms of set partitions.
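As a sketch (ours), the canonical form can be computed directly from the standard block representation; the five set partitions of $[3]$ recover the canonical forms of Example 2.2.

```python
def canonical_form(blocks):
    """Canonical form of a set partition given in standard block form:
    f_j is the index of the block containing j."""
    n = sum(len(b) for b in blocks)
    f = [0] * n
    for num, block in enumerate(blocks, 1):
        for j in block:
            f[j - 1] = num
    return f

# The five set partitions of [3] in standard form, in the order of
# Example 2.2, and their canonical forms:
partitions = [[[1, 2, 3]], [[1, 2], [3]], [[1], [2, 3]],
              [[1, 3], [2]], [[1], [2], [3]]]
forms = [canonical_form(p) for p in partitions]
```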
The set partitions of $[n]$ having exactly $k$ blocks are counted by the
Stirling numbers of the second kind ([3, 12]) denoted $S(n,k)$, which satisfy
the recurrence relation:
$S(n,k)=kS(n-1,k)+S(n-1,k-1)$
On the other hand, the number of all set partitions over $[n]$ is counted by
the Bell numbers, denoted $B(n)$ and satisfying:
$B(n)=\sum_{k=0}^{n}S(n,k)\leavevmode\nobreak\ \leavevmode\nobreak\
\mbox{and}\leavevmode\nobreak\ \leavevmode\nobreak\
B(n+1)=\sum_{i=0}^{n}\binom{n}{i}B(i),n\geq 0,\mbox{ with }B(0)=1.$
We denote the set of all set partitions of $[n]$ by $\mathfrak{P}(n)$, and its
cardinality by $B_{n}$, (equal to the $n$-th Bell number $B(n)$) with
$B_{0}=1$ (as there is only one set partition of the empty set).
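The two recurrences above translate directly into code; the following sketch (ours) computes $S(n,k)$ and obtains $B(n)$ as a row sum of the Stirling triangle.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind via
    S(n,k) = k*S(n-1,k) + S(n-1,k-1), with S(0,0) = 1."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    """Bell number B(n) = sum over k of S(n,k)."""
    return sum(stirling2(n, k) for k in range(n + 1))
```

For instance, `bell(3)` returns $5$, matching the five set partitions of $[3]$ listed in Example 2.2.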
In Subsection 3.1 we will associate each set partition of $[n]$ with a
permutation of the symmetric group $\mathfrak{S}_{n}$ having certain
properties.
### 2.3 Statistics on permutations
Let $\sigma=\sigma(1)\sigma(2)...\sigma(n)\in\mathfrak{S}_{n}$, where
$\mathfrak{S}_{n}$ is the symmetric group of permutations over $[n]$.
Recall that a weak exceedance of $\sigma$ is a position $i$ such that
$\sigma(i)\geq i$; the set of weak exceedances of $\sigma$ is
w-$Exc(\sigma)=\\{i:\sigma(i)\geq i\\}$. The values of weak exceedances are
said to be _weak exceedance letters_ and the subword of
$\sigma=\sigma(1)\sigma(2)...\sigma(n)$ including all weak exceedance letters
is denoted by w-$ExcL(\sigma)$. An anti-exceedance of $\sigma$ is a position
$i$ such that $\sigma(i)\leq i$, and its value is called an anti-exceedance
letter.
###### Example 2.3.
Let $\sigma=435129678$. Then w-$Exc(\sigma)=\\{1,2,3,6\\}$. The set of weak
exceedance letters of $\sigma$, w-$ExcL(\sigma)=\\{\sigma(i):i\in$
w-$Exc(\sigma)\\}=\langle 4,3,5,9\rangle$.
In [8] it is proved that if $\sigma=\phi(f)$ then the elements of $Im(f)$ correspond to the anti-exceedance letters of $\sigma$, while in [2] it is proved that if $\sigma=\tilde{\phi}(f)$ then the elements of $Im(f)$ correspond to the weak exceedances (positions) of $\sigma$.
###### Remark 2.2.
It is also proved that $\sigma$ has an anti-exceedance at $i$ if and only if at position $i$ there is the rightmost occurrence of the integer $k=f(i)$ in $f=\phi^{-1}(\sigma)$. Analogously, $\sigma$ has a weak exceedance at $i$ if and only if at some position $k$ there is the rightmost occurrence of the integer $i=f(k)$ in $f=\tilde{\phi}^{-1}(\sigma)$.
## 3 Inom code and Canonical Forms
In this section we will compare the inom code for permutations with the
canonical form of set partitions of $[n]$ and we define a family of
permutations counted by the Bell numbers.
M. Orlov in [9] used the following characterization of canonical forms of set
partitions to implement an efficient algorithm for the exhaustive generation
of all set partitions for a given $n$, with constant space and amortised
constant time complexity.
###### Lemma 3.1.
([9]) There is a bijection between all set partitions of the set $[n]$ and the
set
$\displaystyle\\{\langle 1,k_{2},\ldots,k_{n}\rangle:k_{i}\in\mathbb{N}\mbox{
and for all }i\mbox{ with }2\leq i\leq n\mbox{ one has }1\leq k_{i}\leq
1+max_{1\leq j<i}k_{j}\\}$ (2)
for all $n\in\mathbb{N}$.
The $n-$tuples satisfying condition (2) are called Restricted Growth Functions
(RGF).
The canonical form of a set partition of $[n]$ is indeed a subexceedant function: we noted before that in the standard representation of a set partition, an integer $i$ is always in one of the first $i$ blocks, so one has $f_{i}\leq i$ for all $i$ in $[n]$. As we have also noted, not all subexceedant functions are RGFs, and therefore not all of them are canonical forms of set partitions.
We want to study the subset of $F_{n}$ corresponding to $\mathfrak{P}(n)$, as well as the permutations whose inom code is one of these subexceedant functions.
This set of permutations is counted by the Bell numbers, therefore we will
call these objects “Bell permutations of the second kind” (M. Poneti and V.
Vajnovszki in [10] already introduced another family of permutations counted
by the Bell numbers that they called “Bell permutations”).
We will first reformulate the condition expressed in equation (2) of Lemma 3.1
in such a manner that is more useful for our purposes.
###### Proposition 3.1.
Let $f=f_{1}f_{2}...f_{n}$ be a subexceedant function, then $f$ satisfies the
condition expressed in equation (2) of Lemma 3.1 if and only if for all
$i\in[n]$ one has $\\{f_{1},f_{2},...,f_{i}\\}=\\{1,2,...,p\\}=[p]$ for a
certain $p$. In other terms, for all $i\in[n]$, the set
$\\{f_{1},f_{2},...,f_{i}\\}$ is an integer interval with minimum value 1.
###### Proof.
Suppose that $f$ satisfies the condition expressed in equation (2) of Lemma
3.1. We use induction on $i$.
For $i=1$, $f_{1}=1$ and $\\{1\\}$ is an interval.
Suppose $\\{f_{1},f_{2},...,f_{i-1}\\}$ is an interval $[p]$. Then the condition expressed in equation (2) implies $f_{i}\leq 1+Max\\{f_{1},\cdots,f_{i-1}\\}\leq p+1$.
So $\\{f_{1},f_{2},...,f_{i}\\}=[p]\cup\\{f_{i}\\}$ is either the interval
$[p]$ or the interval $[p+1]$.
Conversely, suppose that for all $i\in[n]$ the set
$\\{f_{1},f_{2},...,f_{i}\\}$ is an integer interval $[k]$ for some $k$, let
us prove that for all $i$, $f_{1}\leq f_{i}\leq Max_{1\leq j<i}\\{f_{j}\\}+1$.
This is obviously true if $i=1$.
Let $i\geq 2$. Let $\\{f_{1},f_{2},...,f_{i}\\}=[k_{1}]$ and
$\\{f_{1},f_{2},...,f_{i-1}\\}=[k_{2}]$ for some integers $k_{1}$ and $k_{2}$.
Then there are only two possibilities: either $k_{1}=k_{2}$ or
$k_{1}=k_{2}+1$.
1. 1.
Case $k_{1}=k_{2}$: this implies that $f_{i}\leq Max_{1\leq j<i}\\{f_{j}\\}\leq Max_{1\leq j<i}\\{f_{j}\\}+1$.
2. 2.
Case $k_{1}=k_{2}+1$: this implies that $f_{i}=k_{1}=k_{2}+1=Max_{1\leq j<i}\\{f_{j}\\}+1$.
∎
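The equivalence of Proposition 3.1 can be checked exhaustively for small $n$. In the sketch below (ours), `is_rgf` implements condition (2) of Lemma 3.1 and `prefix_intervals` the prefix-interval condition; the two agree on all $24$ subexceedant functions on $[4]$, of which $15=B(4)$ are RGFs.

```python
from itertools import product

def is_rgf(f):
    """Condition (2): each value exceeds the running maximum by
    at most one (in particular f_1 must be 1)."""
    m = 0
    for v in f:
        if v > m + 1:
            return False
        m = max(m, v)
    return True

def prefix_intervals(f):
    """Proposition 3.1: every prefix image {f_1,...,f_i} is [p] for some p."""
    seen = set()
    for v in f:
        seen.add(v)
        if seen != set(range(1, max(seen) + 1)):
            return False
    return True

# All subexceedant functions on [4]: f_i ranges over 1..i.
all_f = list(product(*[range(1, i + 1) for i in range(1, 5)]))
agree = all(is_rgf(f) == prefix_intervals(f) for f in all_f)
rgf_count = sum(1 for f in all_f if is_rgf(f))   # expected: B(4) = 15
```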
### 3.1 Bell Permutations of the second kind
In this subsection, we will present the class of permutations associated with $RGF$s under $\tilde{\phi}$.
###### Definition 3.1.
We define Bell permutation of the second kind a permutation
$\sigma\in\mathfrak{S}_{n}$ whose corresponding subexceedant function
$f=f_{1}f_{2}...f_{n}=(\tilde{\phi})^{-1}(\sigma)$ satisfies : the set
$\\{f_{1},f_{2},...,f_{i}\\}$ is an integer interval for all $i\in[n]$.
We denote by $BP_{2}(n)$ the set of Bell permutations of the second kind over
$[n]$ and by $b(n)$ its cardinality, the $n$-th Bell number.
###### Example 3.1.
There are five Bell permutations of the second kind on $[3]$. These are all
the permutations of $\mathfrak{S}_{3}$ except the permutation $213$. Note that
$f=(\tilde{\phi})^{-1}(213)=113$ and $Im(f)=\\{f_{1},f_{2},f_{3}\\}=\\{1,3\\}$
is not an integer interval.
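Example 3.1 can be reproduced by mapping the five RGFs of length $3$ through $\tilde{\phi}$; the following sketch (ours) recovers exactly the permutations of $\mathfrak{S}_{3}$ other than $213$.

```python
from itertools import permutations, product

def phi_tilde(f):
    """Permutation coded by f via the product (n f_n)...(1 f_1),
    the leftmost transposition acting first."""
    n = len(f)
    def image(x):
        for i in range(n, 0, -1):          # apply (n f_n) first
            a, b = i, f[i - 1]
            if x == a:
                x = b
            elif x == b:
                x = a
        return x
    return tuple(image(x) for x in range(1, n + 1))

def is_rgf(f):
    """Condition (2) of Lemma 3.1."""
    m = 0
    for v in f:
        if v > m + 1:
            return False
        m = max(m, v)
    return True

n = 3
rgfs = [f for f in product(*[range(1, i + 1) for i in range(1, n + 1)])
        if is_rgf(f)]
bp2 = {phi_tilde(f) for f in rgfs}                  # BP_2(3)
missing = set(permutations(range(1, n + 1))) - bp2  # expected: {(2, 1, 3)}
```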
###### Remark 3.1.
If $\sigma=\sigma(1)\sigma(2)...\sigma(n)\in\mathfrak{S}_{n}$ is a Bell permutation of the second kind, then its inom code $f$ has $Im(f)=\\{1,2,\ldots,p\\}$ for a certain $p$, therefore $\sigma$ has weak exceedances exactly at $\\{1,2,\ldots,p\\}$.
###### Definition 3.2.
Let $\sigma=\sigma(1)\sigma(2)...\sigma(n)\in\mathfrak{S}_{n}$. Then the
increasing integer sequence $Seq(\sigma)$ associated to $\sigma$ is given as
follows :
For $x=1,2,...,n$, we add $x$ to $Seq(\sigma)$ if $inom(x)$ is not the $inom$ of any integer smaller than $x$.
###### Example 3.2.
Let $\sigma=435129678=(1\leavevmode\nobreak\ 4)(2\leavevmode\nobreak\
3\leavevmode\nobreak\ 5)(6\leavevmode\nobreak\ 9\leavevmode\nobreak\
8\leavevmode\nobreak\ 7)$. Then $Seq(\sigma)=\langle 1,2,5,6\rangle$.
###### Remark 3.2.
The cardinality of $Seq(\sigma)$, $\sigma\in\mathfrak{S}_{n}$ is equal to the
cardinality of w-$Exc(\sigma)$. That is, if $f=\tilde{\phi}^{-1}(\sigma)$,
then $i\in Seq(\sigma)$ if and only if at the position $i$ there is the
leftmost occurrence of the integer $k=f(i)$ in $f$.
The characterisation of $RGF$s as expressed in Proposition 3.1 implies a
characterization for Bell permutations of the second kind.
###### Theorem 3.1.
Let $\sigma=\sigma(1)\sigma(2)...\sigma(n)\in\mathfrak{S}_{n}$ with the set of
weak exceedance letters
w-$ExcL(\sigma)=\langle\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\rangle$ and
$Seq(\sigma)=\langle\gamma_{1},\gamma_{2},\ldots,\gamma_{k}\rangle$. Then
$\sigma$ is a Bell-permutation of the second kind if and only if
1. 1.
the set of the weak exceedances of $\sigma$ is exactly the interval
$\\{1,2,...,k\\}$, and
2. 2.
$\gamma_{i}\leq\alpha_{i},\leavevmode\nobreak\ \mbox{for
all}\leavevmode\nobreak\ i=1,2,\ldots,k.$ (3)
###### Proof.
Let $f\in F_{n}$ be the code of the permutation $\sigma$ via the bijection
$(\tilde{\phi})^{-1}$. As a special case of Proposition 3.1 for $i=n$, we have
$Im(f)=\\{f_{1},f_{2},...,f_{n}\\}$ is an integer interval $[p]$ for some $p$.
But on the other hand $Im(f)$ coincides with the set of the weak exceedances
of $\sigma$ whose cardinality is $k$ (Remark 2.2). Therefore $p=k$ and the set
of the weak exceedances of $\sigma$ is exactly $\\{1,2,\ldots,k\\}$.
Further, for $\gamma_{i}\in Seq(\sigma)$, at position $\gamma_{i}$ in $f$ there is the leftmost occurrence of a value of $f$, since $Seq(\sigma)$ collects exactly the positions of the leftmost occurrences of the values of the $inom$s (recall that $f_{j}=inom(j)$); while for $\alpha_{i}\in$ w-$ExcL(\sigma)$, at position $\alpha_{i}$ there is the rightmost occurrence of the value $i$ in $f$ (Remark 2.2). From this we see that $\\{f_{1},f_{2},...,f_{j}\\}$ is an integer interval for each $j\in[n]$ if and only if w-$Exc(\sigma)=[k]$ and the leftmost occurrences in $f$ appear in increasing order of value. Hence $\gamma_{i}\leq\alpha_{i}$ for all $i\in[k]$. ∎
###### Example 3.3.
1. 1.
Let $\sigma=763592148=(1\leavevmode\nobreak\ 7)(2\leavevmode\nobreak\
6)(3)(4\leavevmode\nobreak\ 5\leavevmode\nobreak\ 9\leavevmode\nobreak\ 8)$.
Then $Seq(\sigma)=\langle 1,2,3,4,8\rangle$ and w-$ExcL(\sigma)=\langle
7,6,3,5,9\rangle$. Thus the two sequences satisfy (3) and hence $\sigma$ is a
Bell permutation of the second kind.
2. 2.
Let $\sigma=245987316=(1\leavevmode\nobreak\ 2\leavevmode\nobreak\
4\leavevmode\nobreak\ 9\leavevmode\nobreak\ 6\leavevmode\nobreak\
7\leavevmode\nobreak\ 3\leavevmode\nobreak\ 5\leavevmode\nobreak\ 8)$. Then
$Seq(\sigma)=\langle 1,3,5,6,7,8\rangle$ and w-$ExcL(\sigma)=\langle
2,4,5,9,8,7\rangle$. Observe that $\gamma_{6}=8>7=\alpha_{6}$ and hence
$\sigma$ is not a Bell permutation of the second kind.
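The criterion of Theorem 3.1 can be tested mechanically; the following sketch (ours) computes the inom code, $Seq(\sigma)$, and the weak exceedance letters, and recovers both items of Example 3.3.

```python
def inom_code(sigma):
    """The inom code f of sigma: f(i) = inom(i) (Theorem 2.1).
    sigma[x-1] = sigma(x)."""
    n = len(sigma)
    inv = {v: x for x, v in enumerate(sigma, 1)}
    f = []
    for i in range(1, n + 1):
        j = inv[i]
        while j > i:
            j = inv[j]
        f.append(j)
    return f

def is_bell2(sigma):
    """Theorem 3.1: w-Exc(sigma) = [k] and gamma_i <= alpha_i for all i."""
    n = len(sigma)
    wexc = [i for i in range(1, n + 1) if sigma[i - 1] >= i]
    if wexc != list(range(1, len(wexc) + 1)):      # condition 1
        return False
    f = inom_code(sigma)
    # Seq(sigma): positions of leftmost occurrences in f (Remark 3.2)
    seq = [i for i in range(1, n + 1) if f[i - 1] not in f[:i - 1]]
    wexcl = [sigma[i - 1] for i in wexc]           # weak exceedance letters
    return all(g <= a for g, a in zip(seq, wexcl))  # condition 2
```

Applied to the permutations of Example 3.3, the first is accepted and the second rejected, as expected.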
The following proposition gives a recursive procedure to check whether a permutation is a Bell permutation of the second kind; for its proof it is convenient to first give the following lemma.
###### Lemma 3.2.
Let $\tilde{\phi}(f)=\sigma\in\mathfrak{S}_{n}$, where $f$ is the subexceedant function over $[n]$ obtained from the subexceedant function $f^{\prime}$ over $[n-1]$ by concatenating some $j\in[n]$ at its end, and let $\tilde{\phi}(f^{\prime})=\sigma^{\prime}$. Then $\sigma$ is obtained from $\sigma^{\prime}$ by replacing the integer $\sigma^{\prime}(j)$ by the integer $n$ in $\sigma^{\prime}$ and appending the integer $\sigma^{\prime}(j)$ at the end.
If $j=n$, then $\sigma$ is obtained from $\sigma^{\prime}$ by appending $n$ at the end of $\sigma^{\prime}$.
###### Proposition 3.2.
A permutation $\sigma=\sigma(1)\sigma(2)...\sigma(n)\in\mathfrak{S}_{n}$ whose
set of weak exceedances is an integer interval $[k]$ is in $BP_{2}(n)$ if and
only if the permutation $\sigma^{\prime}\in\mathfrak{S}_{n-1}$ obtained from
$\sigma$ by exchanging the integer $n$ by $\sigma(n)$ in the word
$\sigma(1)\sigma(2)...\sigma(n-1)$ is in $BP_{2}(n-1)$.
###### Proof.
According to Lemma 3.2, for all permutations $\sigma$, if $f=f_{1}\cdots
f_{n}=(\tilde{\phi})^{-1}(\sigma)$ and $\sigma^{\prime}\in\mathfrak{S}_{n-1}$
is the permutation obtained from $\sigma$ by replacing the integer $n$ by
$\sigma(n)$, then the subexceedant function associated with $\sigma^{\prime}$
is $f^{\prime}=f_{1}f_{2}\cdots f_{n-1}$.
Under the hypothesis that $w$-$Exc(\sigma)=\\{f_{1},...,f_{n}\\}$ is an integer
interval $[k]$, the following two conditions are trivially equivalent :
1. 1.
for all $i\in[n]$, the set $\\{f_{1},f_{2},...,f_{i}\\}$ is an integer
interval with minimum value 1.
2. 2.
for all $i\in[n-1]$, the set $\\{f_{1},f_{2},...,f_{i}\\}$ is an integer
interval with minimum value 1.
That is, according to Proposition 3.1, $\sigma$ is Bell if and only if
$\sigma^{\prime}$ is Bell. ∎
###### Example 3.4.
We will look at the three different cases :
1. 1.
Consider $\sigma=7156432$. We have $w$-$Exc(\sigma)=\\{1,3,4\\}$, which is not
an integer interval, thus $\sigma$ is not a Bell permutation of the second
kind.
2. 2.
As we have already observed, if one considers $\sigma=2431$, this permutation
has set of weak exceedances $\\{1,2,3\\}=[3]$ but it is not a Bell permutation
of the second kind, as $2431\rightarrow 213$ and $213$ is not a Bell
permutation of the second kind because its set of weak exceedances,
$\\{1,3\\}$, is not an interval.
3. 3.
Let $\sigma=7245613$. We have $w$-$Exc(\sigma)=[5]$, so $\sigma$ may be a Bell
permutation of the second kind. We apply Proposition 3.2 : $7245613\rightarrow
324561\rightarrow 32451\rightarrow 3241\rightarrow 321$. Since $321$ is a Bell
permutation of the second kind of $\mathfrak{S}_{3}$ we can conclude that
$\sigma$ and those permutations obtained in the process are Bell permutations
of the second kind.
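The recursive test of Proposition 3.2 used in the three cases above can be sketched in Python (the language and the function name are ours; permutations are 1-indexed tuples):

```python
def is_bell_second_kind(sigma):
    """Test of Proposition 3.2: sigma is in BP_2(n) iff its set of weak
    exceedance positions is an interval [k] and the reduced permutation
    (n replaced by sigma(n), last letter dropped) is in BP_2(n-1)."""
    n = len(sigma)
    if n <= 1:
        return True
    wexc = {i for i in range(1, n + 1) if sigma[i - 1] >= i}
    if wexc != set(range(1, len(wexc) + 1)):      # must equal [k]
        return False
    reduced = tuple(sigma[-1] if x == n else x for x in sigma[:-1])
    return is_bell_second_kind(reduced)
```

On the examples above: $7245613$ passes, while $7156432$ fails immediately and $2431$ fails after one reduction step.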
### 3.2 The distribution of weak exceedances on Bell Permutations of the
second kind
We denote by $BP_{2}(n,k)$ the set of Bell permutations of the second kind
over $[n]$ having $k$ weak exceedances and by $b(n,k)$ their cardinalities. We
can use Proposition 3.2 to give a direct, constructive proof that the
cardinalities of the sets $BP_{2}(n,k)$ equal the Stirling numbers of the
second kind $S(n,k)$.
Accordingly, we define the following operation to construct a Bell permutation
of the second kind $\sigma$ in $BP_{2}(n)$ starting from a Bell-permutation of
the second kind $\sigma^{\prime}\in BP_{2}(n-1)$ (and an integer):
Let $\sigma^{\prime}\in BP_{2}(n-1,k)$ and $i\in[k+1]$,
$\displaystyle\mbox{then }\sigma\mbox{ is obtained by replacing
}\sigma^{\prime}(i)\mbox{ by }n\mbox{ and then appending
}\sigma^{\prime}(i)\mbox{ at the end.}$ (4)
###### Proposition 3.3.
The number of Bell permutations of the second kind over $[n]$ having $k$ weak
exceedances equals the Stirling number of the second kind $S(n,k)$.
###### Proof.
We prove that the numbers $b(n,k)$ satisfy the same recurrence relation as the
Stirling number of the second kind :
$b(n,k)=kb(n-1,k)+b(n-1,k-1).$
Observe that the operation defined in (4) either preserves the number of weak
exceedances or increases it by $1$. Therefore, any Bell permutation of the
second kind in $BP_{2}(n,k)$ can uniquely be obtained from a permutation
$\sigma^{\prime}\in BP_{2}(n-1,k)$ or from a permutation $\sigma^{\prime}\in
BP_{2}(n-1,k-1)$. More precisely :
1. 1.
if $w$-$Exc(\sigma^{\prime})=[k]$, $i\in[k]$ and $\sigma$ is obtained from
$\sigma^{\prime}$ by the operation defined in (4), then $\sigma\in
BP_{2}(n,k)$. There are $b(n-1,k)$ possible choices for $\sigma^{\prime}$ and
$k$ possible choices for $i$ hence this contributes $kb(n-1,k)$ to $b(n,k)$.
2. 2.
if $w$-$Exc(\sigma^{\prime})=[k-1]$, $i=k$ and $\sigma$ is obtained from
$\sigma^{\prime}$ by the operation defined in (4), then $\sigma\in
BP_{2}(n,k)$. This contributes $b(n-1,k-1)$ to the number $b(n,k)$.
This completes the proof. ∎
###### Example 3.5.
Note that $b(4,2)=2b(3,2)+b(3,1)=2\times 3+1=6+1=7$. So, from each of the
elements of $BP_{2}(3,2)=\\{132,231,321\\}$ we obtain 2 Bell permutations of
the second kind in $BP_{2}(4,2)$ by the first operation described in the proof
of Proposition 3.3.
That is, take $[2]\times\\{132,231,321\\}$ and apply the first operation to
get
$\\{4321,1423,4312,2413,4213,3412\\}\subseteq BP_{2}(4,2)$.
Again from $BP_{2}(3,1)=\\{312\\}$ we obtain 1 Bell permutation of the second
kind in $BP_{2}(4,2)$ by the second operation. That is, take $312$ and
$k=1+1=2$. Then by the second operation of the proof of Proposition 3.3 we
have $\\{3421\\}\subseteq BP_{2}(4,2)$.
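The recurrence from the proof of Proposition 3.3 is easy to tabulate. A minimal Python sketch (names ours), whose row sums give the Bell numbers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def b(n, k):
    """b(n, k) = #BP_2(n, k); by Proposition 3.3 it satisfies the same
    recurrence as the Stirling numbers of the second kind."""
    if n == 0:
        return 1 if k == 0 else 0
    if k <= 0 or k > n:
        return 0
    return k * b(n - 1, k) + b(n - 1, k - 1)

def bell(n):
    """#BP_2(n) = sum over k of b(n, k): the Bell numbers."""
    return sum(b(n, k) for k in range(n + 1))
```

In particular `b(4, 2)` returns $7$, matching Example 3.5, and `b(5, 4)` returns $\binom{5}{2}=10$, matching Corollary 4.2.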
## 4 A bijection between Bell permutations of the second kind and set
partitions
Here we give a direct bijection between Bell permutations of the second kind
in $BP_{2}(n)$ and set partitions in $\mathfrak{P}(n)$.
Let $\pi=B_{1}/B_{2}/\cdots/B_{k}$ be a set partition of $[n]$ having $k$
blocks. Then by (1) each element of $B_{i}$ is greater than or equal to $i$,
for all $i\in[k]$.
Let $\sigma=\sigma(1)\sigma(2)\cdots\sigma(n)$ be a Bell permutation of the
second kind with $k$ weak exceedances, then the set of these weak exceedances
is $w$-$Exc(\sigma)=[k]$.
Define a map $\lambda:BP_{2}(n)\rightarrow\mathfrak{P}(n)$ by
$\lambda(\sigma)=\pi$ provided that :
1. 1.
for all weak exceedances $(i,\sigma(i))$ with $\sigma(i)\geq i$, insert
$\sigma(i)$ in the $i$-th block, and
2. 2.
for all non weak exceedance letters $i$, taken in decreasing order, insert $i$
at the beginning of the $inom(i)$-th block.
###### Example 4.1.
Let $\sigma=45213\in BP_{2}(5,2)$. Then $\lambda(\sigma)=\pi$ has two
blocks. That is, we have $4/5$.
Observe that $inom(3)=2,inom(2)=2$ and $inom(1)=1$. So $3$ and $2$ are in the
second block, and $1$ is in the first block. Thus we have
$\pi=14/235\in\mathfrak{P}(5,2)$.
###### Remark 4.1.
If $\lambda(\sigma)=\pi=B_{1}/\ldots/B_{k}$, where $w$-$Exc(\sigma)=[k]$, then
$max(B_{i})=\sigma(i),i=1,\ldots,k$.
###### Proposition 4.1.
The map $\lambda$ is a bijection from $BP_{2}(n)$ onto $\mathfrak{P}(n)$.
###### Proof.
We will prove that $\lambda$ is the composition of two bijections: the
restriction of the permutation code $(\tilde{\phi})^{-1}$ to $BP_{2}(n)$ and
the bijection associating to a canonical form its corresponding partition.
For any $\sigma\in BP_{2}(n)$, let $\pi\in\mathfrak{P}(n)$ with
$\pi=\lambda(\sigma)$, let $f_{\sigma}={(\tilde{\phi})^{-1}(\sigma)}$ and let
$f_{\pi}$ be the canonical form of $\pi$.
It suffices to prove that for all Bell-permutations of the second kind
$\sigma$ one has $f_{\sigma}=f_{\pi}=f_{\lambda(\sigma)}$.
Now the definition of $\tilde{\phi}$ easily implies that every integer $i$ is
placed in the $f_{\sigma}(i)$-th block of $\pi$ and hence
$f_{\pi}(i)=f_{\sigma}(i)$ for all $i$. ∎
###### Example 4.2.
Take $\sigma=36821457\in BP_{2}(8)$. Then $\lambda(\sigma)=\pi=13/246/578$.
By applying $\tilde{\phi}^{-1}$ to $\sigma$ we get
$\tilde{\phi}^{-1}(\sigma)=f_{\sigma}=12123233=f_{\pi}$.
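Following the proof of Proposition 4.1, $\lambda$ can be computed by reading off the inom code: the integer $i$ goes into the $f_{\sigma}(i)$-th block. A Python sketch (names ours; the partition is returned as a list of blocks), checked against Examples 4.1 and 4.2:

```python
def inom_code(sigma):
    """(phi~)^{-1}(sigma): f_n is the position of n; then reduce sigma by
    putting sigma(n) back in place of n and dropping the last letter."""
    sigma, f = list(sigma), []
    for n in range(len(sigma), 0, -1):
        f.append(sigma.index(n) + 1)
        last = sigma.pop()
        sigma = [last if x == n else x for x in sigma]
    return list(reversed(f))

def lam(sigma):
    """lambda as in the proof of Proposition 4.1: the integer i is placed
    in the f_sigma(i)-th block of the partition."""
    f = inom_code(sigma)
    blocks = [[] for _ in range(max(f))]
    for i, r in enumerate(f, start=1):
        blocks[r - 1].append(i)
    return blocks
```

For instance `lam((3, 6, 8, 2, 1, 4, 5, 7))` returns the blocks of $13/246/578$.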
We now give the direct inverse of the bijection $\lambda$.
Let us define a mapping $\chi:\mathfrak{P}(n)\mapsto BP_{2}(n)$ such that
$\chi(\pi)=\sigma$, where $\sigma$ is obtained from
$\pi=B_{1}/B_{2}/\ldots/B_{k}\in\mathfrak{P}(n)$ as follows. Let
$m_{i}=max(B_{i})$ and $\sigma=\sigma(1)\sigma(2)\ldots\sigma(n)$. Then
1. 1.
For $i=1,2,\ldots,k$ set $\sigma(i)=m_{i}$.
2. 2.
For $j\in[n]\backslash\\{m_{1},\ldots,m_{k}\\}$, in decreasing order, set
$j=\sigma(\sigma^{t}(r))$, where $r$ is the number of the block containing $j$
and $t$ is the smallest positive integer such that the position
$\sigma^{t}(r)$ has not yet received an image.
###### Lemma 4.1.
For all $\pi\in\mathfrak{P}(n)$ the image $\sigma=\chi(\pi)\in BP_{2}(n)$.
###### Proof.
Let us start by proving $w$-$Exc(\sigma)=[k]$, where $k$ is the number of
blocks of $\pi$.
Note that every element of block $B_{i}$ of $\pi$ is greater than or equal to
$i$. Hence $\sigma(i)=m_{i}\geq i$ for $i=1,\ldots,k$, so every $i\in[k]$ is a
weak exceedance of $\sigma$.
Take an integer $p>k$. We want to prove that $q=\sigma(p)<p$. Let $q\in B_{r}$
for some $r\leq k$. Then $p=\sigma^{t}(r)$ for a certain $t$.
If $t=1$, then $p=m_{r}$ and hence $p>q$ since $q$ is in the $r$-th block.
If $t>1$, then the position $\sigma^{t-1}(r)$ had already received
$\sigma(\sigma^{t-1}(r))=\sigma^{t}(r)=p$ as image. Since the integers are
inserted in decreasing order we have $p>q$.
Therefore, $w$-$Exc(\sigma)=[k]$ and
$w$-$ExcL(\sigma)=\langle\alpha_{1},\ldots,\alpha_{k}\rangle$, where
$\alpha_{i}=m_{i}$ and $m_{i}$ is the largest integer having $inom$ equal to
$i$.
By the definition of $Seq(\sigma)=\langle\gamma_{1},\ldots,\gamma_{k}\rangle$,
the integer $\gamma_{i}$ is the smallest integer having $inom$ equal to $i$.
Therefore, $\gamma_{i}\leq\alpha_{i}$ for all $i$.
Hence $\sigma\in BP_{2}(n)$. ∎
###### Proposition 4.2.
The map $\chi$ is the inverse of the map $\lambda$.
###### Proof.
Let $\chi(\pi)=\sigma$ and $\lambda(\sigma)=\pi^{\prime}$, where
$\pi\in\mathfrak{P}(n,k)$. Then we show that $\pi=\pi^{\prime}$.
Since $w$-$Exc(\sigma)=[k]$ we have that $\pi^{\prime}\in\mathfrak{P}(n,k)$.
That is, $\pi^{\prime}$ has $k$ blocks.
Let $x$ be in the $i$-th block of $\pi$.
If $x=m_{i}$, then $\sigma(i)=x$ and hence $x$ is in the $i$-th block of
$\pi^{\prime}$ under $\lambda$. This is because $i\in$ $w$-$Exc(\sigma)$.
If $x\neq m_{i}$, then there is the smallest integer $t$ such that
$x=\sigma(\sigma^{t}(i))$. This means that $\sigma^{-t-1}(x)=i$ and hence
$inom(x)=i$ and $x$ is in the $i$-th block of $\pi^{\prime}$ under $\lambda$.
Thus $\pi=\pi^{\prime}$. ∎
###### Example 4.3.
Let $\pi=1\leavevmode\nobreak\ 4\leavevmode\nobreak\ 7/2\leavevmode\nobreak\
9/3\leavevmode\nobreak\ 5\leavevmode\nobreak\ 10/6\leavevmode\nobreak\
8\in\mathfrak{P}(10)$. Then $\sigma=\sigma(1)\sigma(2)\ldots\sigma(n)$, where
: $\sigma(1)=7,\sigma(2)=9,\sigma(3)=10,\sigma(4)=8$, and
$6=\sigma^{2}(4)=\sigma(8),5=\sigma^{2}(3)=\sigma(10),4=\sigma^{2}(1)=\sigma(7),3=\sigma^{3}(3)=\sigma(5),2=\sigma^{2}(2)=\sigma(9),1=\sigma^{5}(1)=\sigma(6)$.
Thus, $\sigma=\chi(\pi)=7\leavevmode\nobreak\ 9\leavevmode\nobreak\
10\leavevmode\nobreak\ 8\leavevmode\nobreak\ 3\leavevmode\nobreak\
1\leavevmode\nobreak\ 4\leavevmode\nobreak\ 6\leavevmode\nobreak\
2\leavevmode\nobreak\ 5\in BP_{2}(10)$.
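The two-step construction of $\chi$ above can be sketched in Python (names ours; a partition is given as a list of blocks in canonical order), checked against Example 4.3 and the inverse of Example 4.1:

```python
def chi(blocks):
    """chi: set partition (list of blocks, canonical order) -> permutation.
    First sigma(i) = max(B_i); then each remaining letter j, in decreasing
    order, is placed at the first image-free position along the orbit of
    its block number r, i.e. sigma(sigma^t(r)) = j."""
    n = sum(len(blk) for blk in blocks)
    maxima = [max(blk) for blk in blocks]
    sigma = {i: m for i, m in enumerate(maxima, start=1)}
    block_of = {j: r for r, blk in enumerate(blocks, start=1) for j in blk}
    for j in sorted(set(range(1, n + 1)) - set(maxima), reverse=True):
        p = sigma[block_of[j]]            # sigma^1(r)
        while p in sigma:                 # walk the orbit to a free position
            p = sigma[p]
        sigma[p] = j
    return tuple(sigma[i] for i in range(1, n + 1))
```

Applied to the partition $147/29/3\,5\,10/68$ of Example 4.3, it returns the permutation $7\,9\,10\,8\,3\,1\,4\,6\,2\,5$.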
Our bijection $\lambda$ allows us to characterise and enumerate some
subclasses of $BP_{2}(n)$ that are associated to some remarkable subclasses of
$\mathfrak{P}(n)$. We give two examples in the following corollaries.
###### Corollary 4.1.
The number of Bell permutations of the second kind in $BP_{2}(n,k)$ with
$\sigma(k)=n$ and $\sigma(n)<k,k\neq 1,n$ equals the total number of Bell
permutations of the second kind in $BP_{2}(n-1,k-1)$. That is,
$\displaystyle b(n-1,k-1)=\\#\\{\sigma\in BP_{2}(n,k):\sigma(k)=n,k\neq
1,\sigma(n)<k<n\\}$ (5)
###### Proof.
We prove that the integer $n$ forms a singleton block in the set
$\mathfrak{P}(n,k)$ if and only if the corresponding Bell permutation of the
second kind in $BP_{2}(n,k)$ satisfies condition (5).
Let $\pi\in\mathfrak{P}(n,k)$ and $n$ forms a singleton block in $\pi$. Then
deleting this block we get $\pi^{\prime}\in\mathfrak{P}(n-1,k-1)$.
Let $\lambda^{-1}(\pi)=\sigma$ and
$\lambda^{-1}(\pi^{\prime})=\sigma^{\prime}$. Then $\sigma\in BP_{2}(n,k)$ and
$\sigma^{\prime}\in BP_{2}(n-1,k-1)$ and $\sigma^{\prime}(k)<k$ since
$\sigma^{\prime}$ has only $k-1$ weak exceedances. So by Proposition 3.3,
$\sigma\in BP_{2}(n,k)$ satisfies condition (5) if and only if
$\sigma=(k\leavevmode\nobreak\ n)\cdot\sigma^{\prime}$. That is, $\sigma(k)=n$
and $\sigma(n)=\sigma^{\prime}(k)<k$.
Note that the number of set partitions in $\mathfrak{P}(n,k)$ in which $n$
forms a singleton block equals $b(n-1,k-1)$, and hence the proof. ∎
###### Remark 4.2.
Equation (5) also holds for $k=1$ or $k=n$. That is, there is only one set
partition with $n$ blocks, which corresponds to the identity permutation, and
only one set partition with one block, which corresponds to
$n\leavevmode\nobreak\ 1\leavevmode\nobreak\ 2\leavevmode\nobreak\
\ldots\leavevmode\nobreak\ n-1$. So, ranging over all $k$ satisfying the
condition of the previous corollary, we obtain all Bell permutations of the
second kind over $[n-1]$.
###### Example 4.4.
There are five Bell permutations of the second kind over $[4]$ that satisfy
the condition of Corollary 4.1. These are: $4123,2341,3241,1342,1234$. Their
corresponding set partitions under $\lambda$, respectively, are:
$123/4,12/3/4,13/2/4,1/23/4,1/2/3/4$. Removing the singleton block of $4$ from
each of them we get all set partitions over $[3]$.
Further, by the corollary we have $4123\leftrightarrow 312,2341\leftrightarrow
231,3241\leftrightarrow 321,1342\leftrightarrow 132$, and $1234\leftrightarrow
123$.
###### Corollary 4.2.
The number of Bell permutations of the second kind of size $n$ having $n-1$
weak exceedances is equal to the number $b(n,n-1)$ of set partitions over
$[n]$ having $n-1$ blocks, which, as commonly known, is equal to the sum of
the first $n-1$ positive integers $\binom{n}{2}=\frac{n(n-1)}{2}$.
###### Proof.
The second part can be easily demonstrated. Let $\pi\in\mathfrak{P}(n)$ with
$n-1$ blocks. Then there is some $i\in[n-1]$ such that the $i$-th block
contains exactly two elements. Since every element $j\in[n]$ must lie in one
of the first $j$ blocks, for all $j\in[i]$ the integer $j$ is in the $j$-th
block; hence the second element of block $i$ can only be chosen from
$[i+1,n]$, so there are $n-i$ possible ways to select the other element of
the $i$-th block. This implies that there are
$(n-1)+(n-2)+\cdots+2+1=\binom{n}{2}=\frac{n(n-1)}{2}$ total such set
partitions.
Further we have observed that set partitions with $n-1$ blocks correspond to
Bell permutations of the second kind with $n-1$ weak exceedances. Then the
number of Bell-permutations of the second kind over $[n]$ with $n-1$ weak
exceedances also equals the sum of the first $n-1$ positive integers.
Note that set partitions with $n-1$ blocks in which the block containing two
elements is the $i$-th block correspond to Bell-permutations of the second
kind $\sigma=\sigma(1)\sigma(2)\ldots\sigma(n)$ with $n-1$ weak exceedances,
where $\sigma(n)=i$. ∎
## 5 A bijection between Bell permutations of the first and the second kind
In this part we present a bijection between the set $BP_{1}(n)$ of Bell
permutations over $[n]$ introduced by M. Poneti and V. Vajnovszki ([10])
(which we will call _Bell permutations of the first kind_) and the set
$BP_{2}(n)$ of Bell permutations of the second kind over $[n]$.
First, we recall the definition of Bell permutations of the first kind.
Let $\pi=B_{1}/B_{2}/\ldots/B_{k}\in\mathfrak{P}(n)$ be a set partition in its
standard representation and let $\mu:\mathfrak{P}(n)\mapsto BP_{1}(n)$ be the
map where the permutation $\mu(\pi)$ is constructed as follows :
* 1.
reorder all integers in each block $B_{i}$ in decreasing order;
* 2.
transform each of these blocks into a cycle.
###### Example 5.1.
Let $\pi=1279/356/48$. Then $\mu(\pi)=\sigma=(9721)(653)(84)$.
###### Remark 5.1.
By definition of Bell permutations of the first kind and by definition of inom
code, if $\sigma\in BP_{1}(n)$ and $f=\tilde{\phi}^{-1}(\sigma)$ is its inom
code, then for all $i\in[n]$,
$f(i)=inom(i)=\mbox{ minimum of the block containing }i.$
Recall also that by definition of Bell permutations of the second kind, if
$\sigma\in BP_{2}(n)$ and $f=\tilde{\phi}^{-1}(\sigma)$ is its inom code, then
for all $i\in[n]$,
$f(i)=inom(i)=\mbox{ number of the block containing }i.$
Let us define a map $\beta:BP_{1}(n)\mapsto BP_{2}(n)$.
Let $\sigma_{1}=C_{1}C_{2}\ldots C_{k}\in BP_{1}(n)$ be a permutation of
$BP_{1}(n)$ written as a product of disjoint cycles, where the cycles have
been ordered with their respective minima increasing. Then
$\sigma_{2}=\beta(\sigma_{1})$ is constructed from $\sigma_{1}$ according to
the rule :
for $i=k,\ldots,2$, if the integer $i$ is not in the cycle $C_{i}$, then
insert the cycle $C_{i}$ after $i$ in the cycle containing $i$.
###### Example 5.2.
Let $\sigma_{1}=(9721)(653)(84)$. Then $\sigma_{2}$ is obtained as :
$\sigma_{1}=(9721)(653)(84)\longrightarrow(9721)(65384)\longrightarrow(972653841)=\sigma_{2}$
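The insertion rule defining $\beta$ can be sketched in Python (names ours); as in the example above, the cycles are given as lists, each written in decreasing order and listed by increasing minima:

```python
def beta(cycles):
    """beta: Bell permutation of the first kind -> second kind.  For
    i = k, ..., 2: if the integer i is not in the cycle C_i, splice C_i
    into the cycle containing i, right after i."""
    cycles = [list(c) for c in cycles]
    for i in range(len(cycles), 1, -1):
        ci = cycles[i - 1]
        if i in ci:
            continue
        host = next(c for c in cycles if i in c and c is not ci)
        pos = host.index(i)
        host[pos + 1:pos + 1] = ci        # insert C_i right after i
        ci.clear()                        # C_i has been merged
    return [c for c in cycles if c]
```

On Example 5.2, `beta([[9, 7, 2, 1], [6, 5, 3], [8, 4]])` returns the single cycle $(972653841)$.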
We start by proving that $\beta(BP_{1}(n))\subseteq BP_{2}(n)$.
###### Lemma 5.1.
Let $\sigma_{1}\in BP_{1}(n)$ with $k$ cycles. If
$\sigma_{2}=\beta(\sigma_{1})$, then w-$Exc(\sigma_{2})=\\{1,2,\ldots,k\\}$.
###### Proof.
Let $\sigma_{1}=C_{1}C_{2}\ldots C_{k}$, where $k$ is the number of cycles of
$\sigma_{1}$.
The operation obviously constructs a weak exceedance at each $p\leq k$,
because it inserts after $p$ a sequence of integers all larger than $p$. It
remains to be proved that if $p>k$ then $p$ is a strict anti-exceedance for
$\sigma_{2}$, that is $\sigma_{2}(p)<p$.
If $p>k$ and $p$ is a strict anti-exceedance for $\sigma_{1}$, that is if
$\sigma_{1}(p)<p$, then the construction never inserts anything between $p$
and $\sigma_{1}(p)$, therefore $\sigma_{2}(p)=\sigma_{1}(p)<p$.
Let then $p>k$ and $\sigma_{1}(p)\geq p$. Note that this can only happen if
$p$ is the minimum of its cycle of $\sigma_{1}$, say $C_{t}$.
Let $\sigma_{1}(p)\ldots p$ be the sequence of elements of the cycle $C_{t}$
of $\sigma_{1}$, where $p=min(C_{t})$ and $\sigma_{1}(p)=max(C_{t})$. Then
$t\leq k<p$ and $t$ is not in the cycle $C_{t}$ because $p$ is the minimum in
$C_{t}$, hence there is some integer $t_{1}<t$ such that $t\in C_{t_{1}}$.
For $i=k,\ldots,t+1$ the procedure has not yet modified
$C_{t}=(\sigma_{1}(p)\ldots p)$ since all of its elements are greater than
$k$.
Now for $i=t$, the operation inserts $\sigma_{1}(p)\ldots p$ into $C_{t_{1}}$
after $t$ (the following diagram shows this).
$\ldots\underbrace{(\ldots t\leavevmode\nobreak\
\overbrace{\sigma_{1}(p)\ldots p}^{C_{t}}\ldots)}_{C_{t_{1}}}\ldots$
If $t\neq min(C_{t_{1}})$, then $t$ is a strict anti-exceedance for
$\sigma_{1}$, that is, $\sigma_{1}(t)<t$. The steps of the transformation for
$k,k-1,\ldots,t+1$ have not inserted any integer between $t$ and
$\sigma_{1}(t)$. When we insert $C_{t}$ between these two integers,
$\sigma_{1}(t)$ becomes the image of $p$, and then we have
$\sigma_{2}(p)=\sigma_{1}(t)<t$.
Suppose $t=min(C_{t_{1}})$, then $t_{1}\notin C_{t_{1}}$ for otherwise
$t_{1}\geq t$ and hence a contradiction.
Thus, there is some integer $t_{2}<t_{1}$ such that $t_{1}\in C_{t_{2}}$.
For $i=t_{1}$, the operation inserts the sequence of the elements of
$C_{t_{1}}$ into $C_{t_{2}}$ after $t_{1}$.
$\ldots\underbrace{(\ldots t_{1}\overbrace{\ldots t\leavevmode\nobreak\
\sigma_{1}(p)\ldots p}^{C_{t_{1}}}\ldots)}_{C_{t_{2}}}\ldots$
By the same argument as above, if $t_{1}\neq min(C_{t_{2}})$, then we have
$\sigma_{2}(p)<t_{1}$.
Suppose $t_{1}=min(C_{t_{2}})$, then $t_{2}\notin C_{t_{2}}$ for otherwise
$t_{2}\geq t_{1}$ and hence a contradiction.
Thus, repeating the same procedure a finite number of times we must eventually
find some integer $t_{s}$ such that $1\leq t_{s}<t_{s-1}\neq min(C_{t_{s}})$
and $t_{s}\in C_{t_{s}}$. Then $\sigma_{2}(p)<t_{s-1}$.
Therefore, $p$ is not a weak exceedance of $\sigma_{2}$. Hence
w-$Exc(\sigma_{2})=[k]$. ∎
###### Lemma 5.2.
For all $\sigma_{1}\in BP_{1}(n)$, $\beta(\sigma_{1})=\sigma_{2}\in
BP_{2}(n)$.
###### Proof.
Let $\sigma_{1}=C_{1}C_{2}\ldots C_{k}$. By Lemma 5.1, we have
w-$Exc(\sigma_{2})=[k]$ for a certain integer $k$. Then
w-$ExcL(\sigma_{2})=\langle\alpha_{1},\ldots,\alpha_{k}\rangle$, where
$\alpha_{i}=\sigma_{2}(i)$, $i=1,2,\ldots,k$.
Let $Seq(\sigma_{2})=\langle\gamma_{1},\ldots,\gamma_{k}\rangle$. Then note
that for all $i\in[k]$, the integers $\gamma_{i}$ and $\alpha_{i}$, are
respectively the minimum and maximum integers having $inom$ equal to $i$. Thus
$\gamma_{i}\leq\alpha_{i}$ for all $i$.
Therefore, by Theorem 3.1, $\sigma_{2}\in BP_{2}(n)$. ∎
###### Theorem 5.1.
The map $\beta$ is a bijection between $BP_{1}(n)$ and $BP_{2}(n)$.
###### Proof.
We deduce the result from the fact that the following diagram is commutative
where :
* 1.
$\tilde{\phi}$ denotes the $inom$ code;
* 2.
$\tau$ denotes the bijection associating each partition $\pi$ with its
canonical form;
* 3.
$\mu$ denotes the bijection introduced by Poneti and Vajnovszki;
* 4.
$\nu$ denotes the transformation that normalizes any
$f\in\tilde{\phi}^{-1}(BP_{1}(n))$ via the order-preserving bijection of
$Im(f)$ onto $[\,|Im(f)|\,]$;
* 5.
$\zeta$ denotes the transformation that replaces every integer in any
$f=f_{1}f_{2}\ldots f_{n}\in\tilde{\phi}^{-1}(BP_{2}(n))$ with the leftmost
position where such integer occurs.
Figure 2: The commutative diagram used in the proof of Theorem 5.1.
From Remark 5.1 it is easy to see that the inom code of the permutation
$\beta(\sigma_{1})$ is obtained by applying $\nu$ to the inom code of
$\sigma_{1}$, and that the inom code of $\sigma_{1}$ is obtained by applying
$\zeta$ to the inom code of $\beta(\sigma_{1})$ (the maps $\nu$ and $\zeta$
are inverses of each other when restricted to
$\tilde{\phi}^{-1}(BP_{1}(n))$ and $\tilde{\phi}^{-1}(BP_{2}(n))$
respectively).
So we have $\tilde{\phi}^{-1}\circ\beta=\nu\circ\tilde{\phi}^{-1}$, or
equivalently $\beta=\tilde{\phi}\circ\nu\circ\tilde{\phi}^{-1}$ and therefore
$\beta$ is a bijection. ∎
We can also define directly the inverse of $\beta$, a map
$\vartheta:BP_{2}(n)\mapsto BP_{1}(n)$ such that
$\vartheta(\sigma_{2})=\sigma_{1}$, where $\sigma_{1}$ is obtained as follows.
Take $\sigma_{2}\in BP_{2}(n)$ and let $C_{1}C_{2}\ldots C_{t}$ be its cycle
decomposition, the cycles being ordered by increasing minima. Recall that we
showed that the set of weak exceedances of a Bell permutation of the second
kind is an interval $[k]$.
For $i=2,\ldots,k$, re-indexing the current cycles by increasing minima, if
the integer $i$ does not belong to the $i$-th cycle, then form a new cycle by
taking out of the cycle containing $i$ the longest sequence of integers
greater than $i$ starting immediately after $i$.
###### Example 5.3.
Let $\sigma_{2}=465982317=(\mathbf{14}97\mathbf{35}8)(\mathbf{2}6)$ in cycle
notation and with the weak exceedances in bold. Then $\sigma_{1}$ is obtained
as :
$\sigma_{2}=(1497358)(\mathbf{2}6)\longrightarrow(1497\mathbf{3})(26)(58)\longrightarrow(1\mathbf{4}3)(26)(58)(97)\longrightarrow(143)(26)(\mathbf{5})(97)(8)=\sigma_{1}$
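The steps above can be sketched in Python (names ours). One caveat: to reproduce the example we read the extraction condition as "the integer $i$ does not lie in the $i$-th cycle, the cycles being re-ordered by increasing minima at each step"; this is our interpretation, consistent with Examples 5.2 and 5.3 and with inverting the rule defining $\beta$:

```python
def vartheta(sigma):
    """Sketch of the inverse of beta on a one-line permutation (1-indexed
    tuple); returns the list of cycles ordered by increasing minima."""
    n = len(sigma)
    k = sum(1 for i in range(1, n + 1) if sigma[i - 1] >= i)
    # cycle decomposition, each cycle written starting from its minimum
    seen, cycles = set(), []
    for s in range(1, n + 1):
        if s in seen:
            continue
        c, x = [], s
        while x not in seen:
            seen.add(x)
            c.append(x)
            x = sigma[x - 1]
        cycles.append(c)
    for i in range(2, k + 1):
        cycles.sort(key=min)
        if i <= len(cycles) and i in cycles[i - 1]:
            continue                      # i already sits in the i-th cycle
        host = next(c for c in cycles if i in c)
        pos = host.index(i)
        run = []                          # longest run > i right after i
        while pos + 1 < len(host) and host[pos + 1] > i:
            run.append(host.pop(pos + 1))
        if run:
            cycles.append(run)
    cycles.sort(key=min)
    return cycles
```

On the one-line form $465982317$ of Example 5.3 this recovers the cycles $(143)(26)(5)(97)(8)$, and on Example 5.2 it recovers $(9721)(653)(84)$ (each cycle written starting from its minimum).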
###### Lemma 5.3.
For all $\sigma_{2}\in BP_{2}(n)$, $\vartheta(\sigma_{2})=\sigma_{1}\in
BP_{1}(n)$.
###### Proof.
The construction produces a permutation with $k$ cycles in which the elements
of each cycle are decreasing when each cycle is written starting from its
largest element. ∎
###### Proposition 5.1.
The map $\vartheta$ is the inverse of $\beta$.
###### Proof.
Let $\sigma_{2}\in BP_{2}(n)$, w-$Exc(\sigma_{2})=[k]$ and let
$\beta(\vartheta(\sigma_{2}))=\beta(\sigma_{1})=\sigma_{2}^{\prime}$. We show
that $\sigma_{2}=\sigma_{2}^{\prime}$. Since $\sigma_{1}$ has $k$ cycles,
$\sigma_{2}^{\prime}$ has its weak exceedances at $\\{1,2,\ldots,k\\}$, as
does $\sigma_{2}$.
If $i>k$, then $\sigma_{2}(i)<i$ and therefore the construction never changes
the image of $i$ and we have
$\sigma_{2}(i)=\sigma_{1}(i)=\sigma_{2}^{\prime}(i)$.
Assume that $\sigma_{2}(i)\neq\sigma_{2}^{\prime}(i)$ for some $i\in[k]$.
Note that if $\vartheta(\sigma_{2})=\sigma_{1}$, then for every weak
exceedance $i$ of $\sigma_{2}$, one has that $\sigma_{2}(i)$ is equal to the
maximum element of the $i$-th cycle of $\sigma_{1}$. On the other hand, if
$\sigma^{\prime}_{2}=\beta(\sigma_{1})$ then for every weak exceedance $i$ of
$\sigma^{\prime}_{2}$, one also has that $\sigma^{\prime}_{2}(i)$ is equal to
the maximum element of the $i$-th cycle of $\sigma_{1}$. This is a
contradiction.
Therefore $\sigma_{2}=\sigma_{2}^{\prime}$ and hence $\vartheta=\beta^{-1}$. ∎
###### Remark 5.2.
Under the above bijection $\beta:\sigma_{1}\mapsto\sigma_{2}$, the number of
cycles of $\sigma_{1}$ is equal to the number of weak exceedances of
$\sigma_{2}$.
### Acknowledgements
Both authors are members of the project CoRS (Combinatorial Research Studio),
supported by the Swedish government agency SIDA. The most significant advances
of this research work have been made during two visits of the first author to
IRIF. The first visit was entirely supported by IRIF, the second visit was
substantially supported by ISP (International Science Programme) of Uppsala
University (Sweden) and partially supported by IRIF. The authors are deeply
grateful to these two institutions. We also thank our colleagues from CoRS for
valuable discussions and comments.
## References
* [1] J. Baril and V. Vajnovszki, A permutation code preserving a double Eulerian bistatistic, Discrete Applied Mathematics, Vol. 224, 9-15, (2017).
* [2] F. Beyene and R. Mantaci, Investigations on a Permutation Code, Submitted to Electronic Journal of Combinatorics, (2020)
* [3] M. Bona, Introduction to Enumerative Combinatorics, The McGraw Hill Companies, (2007).
* [4] D. Dumont and G. Viennot, A combinatorial interpretation of the Seidel generation of Genocchi numbers, Ann. Discrete Math. 6, 77-87, (1980).
* [5] D. Foata and D. Zeilberger, Denert’s Permutation Statistic is indeed Euler-Mahonian, Studies in Applied Mathematics, 31-59, (1990).
* [6] D. H. Lehmer, Teaching combinatorial tricks to a computer, Proc. Sympos. Appl. Math. Combinatorial Analysis, Amer. Math. Soc., vol. 10, p. 179-193, (1960).
* [7] T. Mansour. Combinatorics of set partitions. Taylor & Francis Group, LLC, (2013).
* [8] R. Mantaci and F. Rakotondrajao. A permutation representation that knows what “Eulerian” means. Discrete Mathematics and Theoretical Computer Science 4, 101-108, (2001).
* [9] M. Orlov, Efficient Generation of Set Partitions, (2002)
* [10] M. Poneti and V. Vajnovszki, Generating restricted classes of involutions, Bell and Stirling permutations, European Journal of Combinatorics 31, 553–564, (2010).
* [11] G. Rota, The number of partitions of a set, Amer. Math. Monthly 71, 498–504, (1964).
* St [1] R. P. Stanley, Enumerative Combinatorics, Vol. 1, $2^{ed}$ ed, Cambridge Studies of Advanced Mathematics, Cambridge University Press, (2011).
* St [2] R. P. Stanley. Enumerative combinatorics. Vol. 2. Cambridge Studies of Advanced Mathematics, Cambridge University Press, (1999).
# Some Examples of Family Floer Mirrors
Man-Wai Cheung, Yu-Shen Lin
###### Abstract
In this article, we give explicit calculations for the family Floer mirrors of
some non-compact Calabi-Yau surfaces. We compare them with the mirror
construction of Gross-Hacking-Keel-Siebert for suitably chosen log Calabi-Yau
pairs and with rank two cluster varieties of finite type. In particular, the
analytifications of the latter two give partial compactifications of the
family Floer mirrors that we computed.
## 1 Introduction
The Strominger-Yau-Zaslow (SYZ) conjecture predicts that Calabi-Yau manifolds
have the structure of special Lagrangian fibrations and that their mirrors can
be constructed via dual special Lagrangian fibrations. Moreover, the Ricci-
flat metrics of Calabi-Yau manifolds receive instanton corrections from
holomorphic discs with boundaries on the special Lagrangian torus fibres. The
conjecture not only gives a geometric way to construct the mirror, it also
gives intuitive reasoning for mirror symmetry, for instance see [FLTZ, LYZ].
The SYZ philosophy has become a helpful tool for studying mirror symmetry
and many of its implications have been proven. However, the difficulty of the
analysis involving singular special Lagrangian fibres makes progress toward
the original conjecture relatively slow (see [CJL, CJL2, L16] for the recent
progress).
To understand instanton corrections rigorously in the mathematical context,
Fukaya [F3] proposed a way to understand the relation between instanton
corrections from holomorphic curves/discs and the mirror complex structure via
a Floer theoretic approach. Kontsevich-Soibelman [KS1] and Gross-Siebert [GS1]
later systematically formulated how to construct the mirror in various
settings via algebraic approaches. These approaches opened up a window to
understanding mirror symmetry intrinsically.
In their algebro-geometric approach, Gross-Siebert first constructed affine
manifolds with singularities from toric degenerations of Calabi-Yau manifolds.
Then there is a systematic way of constructing so-called scattering diagrams,
which capture the information of the instanton corrections, on the affine
manifold. The data of the scattering diagrams encode how to glue the expected
local models into the mirror Calabi-Yau manifolds. On the other hand, family
Floer homology as proposed by Fukaya [F4] lays out the foundation for
realizing mirror symmetry intrinsically from the symplectic geometry point of
view. Given a Lagrangian fibration, Fukaya’s trick introduced later in Section
4.1 provides pseudo-isotopies between the $A_{\infty}$ structures of fibres
after compensation by symplectic flux. In particular, the pseudo-isotopies
induce canonical isomorphisms of the corresponding Maurer-Cartan spaces. The
family Floer mirror is then the gluing of the Maurer-Cartan spaces via these
isomorphisms. Not only have the family Floer mirrors been constructed [A1, T4,
Y, Y2], but Abouzaid proved that the family Floer functor induces homological
mirror symmetry [A2, A3]. It is natural to ask if the mirrors constructed via
the Gross-Siebert program and the family Floer homology approach coincide or
not.
The following is an expected dictionary connecting the two approaches:
family Floer SYZ | GHKS mirror construction
---|---
Large complex structure limit | Toric degeneration or Looijenga pair
Base of SYZ fibration with complex affine structure | Dual intersection complex of the toric degeneration or $B_{\mathrm{GHK}}$
Loci of SYZ fibres bounding Maslov index $0$ holomorphic discs | Rays in scattering diagram
Homology of the boundary of a holomorphic disc | Direction of the ray
Exp of the generating function of open Gromov-Witten invariants of Maslov index zero | Wall functions attached to the ray
Coefficients of the superpotential = Open Gromov-Witten invariants of Maslov index 2 discs | Coefficients of theta functions = Counting of broken lines
Isomorphisms of Maurer-Cartan spaces induced by pseudo isotopies | Wall crossing transformations
Lemma 4.1 in this article | Consistency of the scattering diagrams
Family Floer mirror | GHK/GHKS mirror
Table 1: Dictionary between the symplectic and algebraic approaches of mirror
construction.
However, it is hard to have good control over all possible discs in a Calabi-
Yau manifold due to the wall-crossing phenomenon. Thus, it is generally hard
to write down the family Floer mirror explicitly. In the examples of family
Floer mirrors computed in the literature, there exist torus symmetries, and
one can write down all the possible holomorphic discs explicitly. In
particular, the loci of Lagrangian fibres bounding Maslov index zero discs do
not intersect and thus exclude the presence of more complicated bubbling
phenomena.
In this paper, we engineer some $2$-dimensional examples where the family
Floer mirrors can be computed explicitly, and we realize most of the above
dictionary step by step. We first prove that the complex affine structures of
the bases of special Lagrangian fibrations coincide with the affine manifolds
with singularities constructed in Gross-Hacking-Keel [GHK] from some log
Calabi-Yau surfaces. See the similar results for the case of $\mathbb{P}^{2}$
[LLL], general del Pezzo surfaces relative to smooth anti-canonical divisors
[LLL2], rational elliptic surfaces [CJL2] and Fermat hypersurfaces [L16]. When
a Calabi-Yau surface admits a special Lagrangian fibration, it is well-known
that the special Lagrangian torus fibres bounding holomorphic discs are fibres
above certain affine lines with respect to the complex affine coordinates on
the base. Using Fukaya’s trick, the second author identified a version of open
Gromov-Witten invariants with tropical discs counting [L8, L14], which lays
out a foundation for the connection between family Floer mirrors and Gross-
Siebert/Gross-Hacking-Keel mirror. The examples are engineered such that all
the wall functions are polynomials. Therefore, there are no convergence issues
in the gluing procedure and the complexity is kept to a minimum. In
particular, the family Floer mirror has a model over the complex numbers. On
the other hand, one can compare this process with that of Gross-Hacking-Keel:
we can construct a corresponding log Calabi-Yau pair $({Y},{D})$ such that the
induced affine manifold with singularities coincides with the complex affine
structure of the base of the special Lagrangian fibration. Then we identify
the loci of special Lagrangian fibres bounding holomorphic discs with the rays
of the canonical scattering diagram and the corresponding wall-crossing
transformations in Gross-Hacking-Keel [GHK]. The technical part is to prove
that the family Floer mirror has a partial compactification by the gluing of
rigid analytic tori. Notice that directly computing the family Floer mirror of
$Y$ would lead to only a small subset of the mirror from the Gross-Hacking-
Keel mirror construction. One requires a certain renormalization procedure
(see [A4][L4]) and such machinery has not been developed for family Floer
mirror yet.
By comparing with the calculation of Gross-Hacking-Keel, we then know that the
family Floer mirror has a partial compactification that is the analytification
of the mirror of $(Y,D)$ constructed in Gross-Hacking-Keel. The mirror
construction of Gross-Hacking-Keel produces a family, where the base can be
viewed as the complexified Kähler moduli of $Y$. We further determine the
distinguished point that corresponds to the family Floer mirror. Let
$Y^{\prime}_{*}$ be the extremal rational elliptic surface with exactly two
singular fibres and singular fibre over $0$ in the base of type $*$, where
$*=II,III,IV$. Let $X^{\prime}_{*}$ be the complement of the other singular
fibre. We will let $X_{*}$ be a suitable hyperKähler rotation of
$X^{\prime}_{*}$. The following is a summary of Theorem 5.23, Theorem 6.6 and
Theorem 7.4.
###### Theorem 1.1.
The analytification of an $\mathcal{X}$-cluster variety of type $A_{2}$ (or
$B_{2}$ or $G_{2}$) or the Gross-Hacking-Keel mirror of a suitable log Calabi-
Yau pair $(Y,D)$ is a partial compactification of the family Floer mirror of
$X_{II}$ (or $X_{III}$ or $X_{IV}$ respectively).
We will sketch the proof of the $A_{2}$ case, and the other cases are similar.
We first compute the complex affine structure of the SYZ fibration in $X_{II}$
by taking advantage of the fact that its hyperKähler rotation
$X^{\prime}_{II}$ is an elliptic fibration. Using the local Gromov-Witten
invariants computed by the second author [L12], we then apply the split
attractor flow mechanism to prove that there are exactly five families of SYZ
fibres bounding holomorphic discs. The loci parametrizing such fibres in the
base of the SYZ fibration are affine lines with respect to the complex affine
structure and thus naturally give a cone decomposition on the base. It is not
too hard to engineer a Looijenga pair $(Y,D)$ such that its corresponding
affine manifold with singularity and cone decomposition are the ones derived
from the SYZ fibration of $X_{II}$. In this case, $Y$ is the del Pezzo surface
of degree five with $D$ an anti-canonical cycle consisting of five rational
curves. The canonical scattering diagram of $(Y,D)$ contains only finitely
many rays, and each wall function is a polynomial. Thus, the Gross-Hacking-
Keel mirror of $(Y,D)$ is the gluing of finitely many tori via certain
birational transformations. On the other hand, the family Floer mirror
$\check{X}$ is the gluing of building blocks of the form
$\mathfrak{Trop}^{-1}(U)$, where $U$ is a rational domain. The sets
$\mathfrak{Trop}^{-1}(U)$ are “small” subsets in $(\mathbb{G}_{m}^{an})^{2}$,
but we prove that the inclusion
$\mathfrak{Trop}^{-1}(U)\hookrightarrow\check{X}$ can be extended to
$(\mathbb{G}_{m}^{an})^{2}\hookrightarrow\check{X}$. Thus, the family Floer
mirror $\check{X}$ can also be realized as the gluing of finitely many rigid
analytic tori. Finally, we identify the gluing functions, which are
polynomials, of the two mirror constructions via the identification of the
affine manifolds.
In addition to being the first example of explicit computation of a family
Floer mirror without $S^{1}$-symmetries and providing a comparison of two
mirror constructions, the above theorem has other significance. For instance,
it is not clear that the Gross-Hacking-Keel mirror would in general satisfy
homological mirror symmetry. On the other hand, the family Floer mirror is
designed to prove the homological mirror symmetry conjecture. Abouzaid proved
that (when there is no singular fibre) the family Floer mirror implies
homological mirror symmetry [A3]. The comparison of the two mirror
constructions provides an intermediate step towards the homological mirror
symmetry for Gross-Hacking-Keel mirrors. The current work relies heavily on
the fact that the relevant scattering diagrams contain only finitely many rays
and that the wall functions are all polynomials. However, the more crucial
part is the identification (or a weaker version of equivalence) of the
scattering diagrams on the SYZ base with the canonical scattering diagrams.
Since these have been carried out in the cases of $\mathbb{P}^{2}$ relative to
a smooth anti-canonical divisor [L14] and the del Pezzo surface of degree
three [BCHL], the authors expect the equivalence of the two mirror
constructions to hold with certain modifications to the treatment of rigid
analytic geometry.
### 1.1 Structure
The structure of the paper is arranged as follows: In Section 2, we review the
definition of cluster varieties and the mirror construction in Gross-Hacking-
Keel [GHK] and Gross-Hacking-Keel-Siebert [ghks_cubic]. In Section 3, we
formulate the surfaces for which we are going to compute the family Floer
mirror. They come from hyperKähler rotation of rational elliptic surfaces with
prescribed singularities.
In Section 4, we review the family Floer mirror construction and the open
Gromov-Witten invariants. In Section 5, we compute the family Floer mirror of
a non-compact Calabi-Yau surface $X_{II}$ explicitly in full detail. Then we
compare it with the analytification of the $A_{2}$-cluster variety. We also
compare it with the Gross-Hacking-Keel mirror for a del Pezzo surface of
degree five. In particular, the family Floer mirror of $X_{II}$ can be
compactified to a del Pezzo surface of degree five via the algebraic structure
of the theta functions. In Section 6 and Section 7, we sketch the calculation
for the family Floer mirror of $X_{III}$ and $X_{IV}$, pointing out the
differences from the case of $X_{II}$.
## Acknowledgement
The authors would like to thank Mark Gross and Shing-Tung Yau for the constant
support and encouragement. The authors would also like to thank Hülya Argüz,
Dori Bejleri, Paul Hacking, Hansol Hong, Chi-Yun Hsu, Laura Friedrickson, and
Tom Sutherland for helpful discussions. The first author is supported by NSF grant
DMS-1854512. The second author is supported by Simons Collaboration Grant #
635846 and NSF grant DMS #2204109.
## 2 Cluster Varieties and GHK Mirrors
### 2.1 Gross-Hacking-Keel Mirror Construction
Building upon the work of [GS][GPS], Gross-Hacking-Keel [GHK] utilized the
enumerative invariants coming from $\mathbb{A}^{1}$-curves in a log Calabi-Yau
surface to recover its mirror family. By the use of theta functions and broken
lines, they extended the mirror from a family over the Gross-Siebert locus to
a family over a version of the complexified ample cone. Heuristically, as the
SYZ fibres move to infinity in a direction, the Maslov index zero holomorphic
discs with suitable boundary homology class close up to holomorphic curves
with exactly one intersection with some boundary divisor, and hence become
$\mathbb{A}^{1}$-curves. The $\mathbb{A}^{1}$-curves tropicalize to rays and
the counting of $\mathbb{A}^{1}$-curves will determine the wall functions of
the canonical scattering diagram. In this section, we review the mirror
construction of Gross-Hacking-Keel [GHK] and Gross-Hacking-Keel-Siebert
[ghks_cubic].
Consider the pair $(Y,D)$, where $Y$ is a smooth projective rational surface,
and $D=D_{1}+\cdots+D_{n}$ is an anti-canonical cycle of rational curves. Then
$X:=Y\setminus D$ is a log Calabi-Yau surface (note that $X$ is denoted by $U$
in [GHK, ghks_cubic]). The tropicalization of $(Y,D)$ is a pair
$(B_{\mathrm{GHK}},\Sigma)$, where $B_{\mathrm{GHK}}$ is homeomorphic to
${\mathbb{R}}^{2}$ and has the structure of an integral affine manifold with
singularity at the origin, and $\Sigma$ is a decomposition of
$B_{\mathrm{GHK}}$ into cones. The construction of $(B_{\mathrm{GHK}},\Sigma)$
starts by associating each node $p_{i,i+1}:=D_{i}\cap D_{i+1}$ with a rank two
lattice $M_{i,i+1}$ with basis $v_{i}$, $v_{i+1}$ and with the cone
$\sigma_{i,i+1}\subset M_{i,i+1}\otimes_{{\mathbb{Z}}}{\mathbb{R}}$ generated
by $v_{i}$ and $v_{i+1}$. Then $\sigma_{i,i+1}$ is glued to $\sigma_{i-1,i}$
along the rays $\rho_{i}:={\mathbb{R}}_{\geq 0}v_{i}$ to obtain a piecewise-
linear manifold $B_{\mathrm{GHK}}$ and a decomposition
$\displaystyle\Sigma=\\{\sigma_{i,i+1}|i=1,\dots,n\\}\cup\\{\rho_{i}|i=1,\dots,n\\}\cup\\{0\\}\subseteq\mathbb{R}^{2}.$
Define
$U_{i}=\mathrm{Int}(\sigma_{i-1,i}\cup\sigma_{i,i+1}).$
The integral affine structure on
$B_{\mathrm{GHK},0}=B_{\mathrm{GHK}}\setminus\\{0\\}$ is defined by the charts
$\psi_{i}:U_{i}\rightarrow M_{{\mathbb{R}}},\qquad\psi_{i}(v_{i-1})=(1,0),\
\psi_{i}(v_{i})=(0,1),\ \psi_{i}(v_{i+1})=(-1,-D^{2}_{i}),$
with $\psi_{i}$ linear on $\sigma_{i-1,i}$ and $\sigma_{i,i+1}$. It may be
worth noting here that at the end of the gluing process, $\rho_{n+1}$ may not
agree with $\rho_{1}$. This induces an affine structure on
$B_{\mathrm{GHK},0}$ which might not extend over the origin when we identify
$\rho_{n+1}$ with $\rho_{1}$. We are going to demonstrate the affine
structures explicitly in examples later in this article. Heuristically, one
collides all the singular fibres of the SYZ fibration into one to achieve
$B_{\mathrm{GHK}}$. Note that if we consider three successive rays
$\rho_{i-1},\rho_{i},\rho_{i+1}$, there is the relation
$\displaystyle\psi_{i}(v_{i-1})+D^{2}_{i}\psi_{i}(v_{i})+\psi_{i}(v_{i+1})=0.$ (1)
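As a worked example, anticipating the case relevant to this paper, suppose
$D$ is a cycle of $n=5$ rational curves with $D_{i}^{2}=-1$ for all $i$, so
(1) becomes $v_{i+1}=v_{i}-v_{i-1}$ in each chart. Working in a single chart
extended across the cones, set $v_{1}=(1,0)$, $v_{2}=(0,1)$, and iterate:
$\displaystyle v_{3}=(-1,1),\quad v_{4}=(-1,0),\quad v_{5}=(0,-1),\quad v_{6}=(1,-1).$
Since $v_{6}\neq v_{1}$, identifying $\rho_{6}$ with $\rho_{1}$ forces a
nontrivial monodromy at the origin, given by the linear map sending
$(v_{6},v_{7})=((1,-1),(1,0))$ to $(v_{1},v_{2})$, i.e.
$M=\begin{pmatrix}0&-1\\\ 1&1\end{pmatrix}\in SL(2,{\mathbb{Z}})$, which has
trace $1$ and order $6$, matching the monodromy of a Kodaira type $II$ fibre.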
Consider a toric monoid $P$, that is, a commutative monoid whose
Grothendieck group $P^{\mathrm{gp}}$ is a finitely generated free abelian
group and $P=P^{\mathrm{gp}}\cap\sigma_{P}$, where $\sigma_{P}\subseteq
P^{\mathrm{gp}}\otimes_{{\mathbb{Z}}}{\mathbb{R}}=P^{\mathrm{gp}}_{{\mathbb{R}}}$
is a convex rational polyhedral cone. We will assume that $P$ comes with a
homomorphism $\eta:\mathrm{NE}(Y)\rightarrow P$ of monoids, where $\mathrm{NE}(Y)$ is
the intersection of the cone generated by effective curves with
$A_{1}(Y,\mathbb{Z})$. In later discussion, we will in particular choose
$P=\mathrm{NE}(Y)$ and take $\eta$ to be the identity.
Next we define a multi-valued $\Sigma$-piecewise linear function as a
continuous function $\varphi:|\Sigma|\rightarrow
P^{\mathrm{gp}}_{{\mathbb{R}}}$ such that for each
$\sigma_{i,i+1}\in\Sigma_{\max}$, $\varphi_{i}=\varphi|_{\sigma_{i,i+1}}$ is
given by an element
$\varphi_{\sigma_{i,i+1}}\in\operatorname{Hom}_{{\mathbb{Z}}}(M,P^{\mathrm{gp}})=N\otimes_{{\mathbb{Z}}}P^{\mathrm{gp}}$.
For each codimension one cone $\rho=\mathbb{R}_{+}v_{i}\in\Sigma$ contained in
two maximal cones $\sigma_{i-1,i}$ and $\sigma_{i,i+1}$, we have
$\displaystyle\varphi_{i+1}-\varphi_{i}=n_{\rho}\otimes[D_{i}],$ (2)
where $n_{\rho}\in N$ is the unique primitive element annihilating $\rho$ and
positive on $\sigma_{i,i+1}$. Such data $\\{\varphi_{i}\\}$ gives a local
system $\mathcal{P}$ on $B_{\mathrm{GHK},0}$ with the structure of
$P^{\mathrm{gp}}_{{\mathbb{R}}}$-principal bundle
$\pi:{\mathbb{P}}_{0}\rightarrow B_{\mathrm{GHK},0}$. To determine such a
local system, we first construct an affine manifold ${\mathbb{P}}_{0}$ by
gluing $U_{i}\times P^{\mathrm{gp}}_{{\mathbb{R}}}$ to $U_{i+1}\times
P^{\mathrm{gp}}_{{\mathbb{R}}}$ along $(U_{i}\cap U_{i+1})\times
P^{\mathrm{gp}}_{{\mathbb{R}}}$ by
$(x,p)\mapsto\left(x,p+\varphi_{i+1}(x)-\varphi_{i}(x)\right).$
The local sections $x\mapsto(x,\varphi_{i}(x))$ patch to give a piecewise
linear section $\varphi:B_{\mathrm{GHK},0}\rightarrow{\mathbb{P}}_{0}$. Let
$\Lambda_{B}$ denote the sheaf of integral constant vector fields, and
$\Lambda_{B,{\mathbb{R}}}:=\Lambda_{B}\otimes_{{\mathbb{Z}}}{\mathbb{R}}$. We
can then define
$\mathcal{P}:=\pi_{*}\Lambda_{B,{\mathbb{P}}_{0}}\cong\varphi^{-1}\Lambda_{B,{\mathbb{P}}_{0}}$
on $B_{\mathrm{GHK},0}$. There is an exact sequence
$\displaystyle
0\rightarrow\underline{P}^{\mathrm{gp}}\rightarrow\mathcal{P}\xrightarrow{r}\Lambda_{B}\rightarrow
0$ (3)
of local systems on $B_{\mathrm{GHK},0}$, where $r$ is the derivative of
$\pi$. Then (2) is equivalent to
$\displaystyle\varphi_{i}(v_{i-1})+\varphi_{i}(v_{i+1})=[D_{i}]-D_{i}^{2}\varphi_{i}(v_{i}),$
(4)
which is the lifting of (1) to $\mathcal{P}$. We will describe the symplectic
meaning of $P$, $P^{\mathrm{gp}}$, and $\mathcal{P}$ in Section 5.2; see in
particular (50).
Next we define the canonical scattering diagram $\mathfrak{D}_{can}$ on
$(B_{\mathrm{GHK}},\Sigma)$. We will first state the definition of a
scattering diagram as in [ghks_cubic] and then restrict to the finite case in
this article. A ray in $\mathfrak{D}_{can}$ is a pair
$(\mathfrak{d},f_{\mathfrak{d}})$ where
* •
$\mathfrak{d}\subset\sigma_{i,i+1}$ for some $i$, called the support of a ray,
is a ray generated by $av_{i}+bv_{i+1}\neq 0$, $a,b\in{\mathbb{Z}}_{\geq 0}$;
* •
$\log{f_{\mathfrak{d}}}=\sum_{k\geq
1}kc_{k}X_{i}^{-ak}X_{i+1}^{-bk}\in\Bbbk[P][[X_{i}^{-a}X_{i+1}^{-b}]]$ with
$c_{k}$ in the maximal ideal $\mathfrak{m}\subseteq\Bbbk[P]$. (At first
glance, there appears to be a sign change between this expression and the
wall functions of the scattering diagram from Floer theory in Definition 4.3;
this discrepancy is explained in the discussion after Lemma 5.22.)
The coefficient $c_{k}$ is the generating function of relative Gromov-Witten
invariants,
$\displaystyle c_{k}=\sum_{\beta}N_{\beta}z^{\beta},$
where the summation is over all possible classes $\beta\in
H_{2}(Y,\mathbb{Z})$ with incidence relations
$\beta.D_{i}=ak,\beta.D_{i+1}=bk$ and $\beta.D_{j}=0$, for $j\neq i,i+1$. The
coefficient $N_{\beta}$ is the counting of $\mathbb{A}^{1}$-curves in such a
class $\beta$. We refer the reader to [GHK]*Section 3 for the technical
details of the definition of relative Gromov-Witten invariants and remark that
this is mostly replaced by logarithmic Gromov-Witten theory nowadays
[GS3][C3].
Roughly speaking, a scattering diagram for the data
$(B_{\mathrm{GHK}},\Sigma)$ is a set
$\mathfrak{D}=\\{(\mathfrak{d},f_{\mathfrak{d}})\\}$ such that, for every co-
Artinian monomial ideal $I$, $f_{\mathfrak{d}}\ (\mbox{mod }I)$ is a finite sum
for each wall function $f_{\mathfrak{d}}$ and there are only finitely many
$f_{\mathfrak{d}}\neq 1\ (\mbox{mod }I)$. For notational simplicity, we will
write $\mathfrak{D}=\mathfrak{D}_{can}$ later. Note that scattering diagrams
may give a refinement of the original fan structure given by $\Sigma$. We will
call the maximal cones of this refinement chambers.
The scattering diagram will lead to a flat family over
$\operatorname{Spec}A_{I}$, where $A_{I}=\Bbbk[P]/I$ and $I\subseteq\Bbbk[P]$
is any monomial ideal with $\Bbbk[P]/I$ Artinian. Now consider each $\rho_{i}$
as the support of a ray $(\rho_{i},f_{i})$ in $\mathfrak{D}_{can}$. Define
$\displaystyle R_{i,I}$ $\displaystyle:=A_{I}[X_{i-1},X_{i}^{\pm
1},X_{i+1}]/(X_{i-1}X_{i+1}-z^{[D_{i}]}X_{i}^{-D_{i}^{2}}f_{i}),$
$\displaystyle R_{i,i+1,I}$ $\displaystyle:=A_{I}[X_{i}^{\pm 1},X_{i+1}^{\pm
1}]\cong(R_{i,I})_{X_{i+1}},$
where $z^{[D_{i}]}$ is the monomial in $\Bbbk[P]$ corresponding to the class
of $[D_{i}]$. Let
$U_{i,I}:=\operatorname{Spec}R_{i,I}\text{ and
}U_{i,i+1,I}:=\operatorname{Spec}R_{i,i+1,I}.$
One would then like to glue $U_{i,I}$ and $U_{i+1,I}$ over the identified
piece $U_{i,i+1,I}$ to obtain a scheme $X_{I}^{\circ}$ flat over
$\operatorname{Spec}A_{I}$.
To obtain a quasi-affine $X_{I}^{\circ}$, one needs to consider an
automorphism of $R_{i,i+1,I}$, called the path ordered product, associated to a
path $\gamma:[0,1]\rightarrow\operatorname{Int}(\sigma_{i,i+1})$. Suppose
$\gamma$ crosses a given ray $\left(\mathfrak{d}={\mathbb{R}}_{\geq
0}(av_{i}+bv_{i+1}),f_{\mathfrak{d}}\right)$. The $A_{I}$-algebra homomorphism
$\theta_{\gamma,\mathfrak{d}}:R_{i,i+1,I}\rightarrow R_{i,i+1,I}$ is defined
by $X_{i}^{k_{i}}X_{i+1}^{k_{i+1}}\mapsto
X_{i}^{k_{i}}X_{i+1}^{k_{i+1}}f_{\mathfrak{d}}^{\pm(-bk_{i}+ak_{i+1})}$, where
the sign $\pm$ is positive if $\gamma$ goes from $\sigma_{i-1,i}$ to
$\sigma_{i,i+1}$ when passing through $\mathfrak{d}$ and is negative if
$\gamma$ goes in the opposite direction. One can see this is the same as the
wall crossing transformation stated in (11). If $\gamma$ passes through more
than one ray, we define the path ordered product by composing each individual
path ordered product of each ray in the order $\gamma$ passes them. Choosing a
path $\gamma$ by starting very close to $\rho_{i}$ and ending near
$\rho_{i+1}$ in $\sigma_{i,i+1}$, we see that $\gamma$ passes all the rays in
$\sigma_{i,i+1}$. We then define
$X_{I,\mathfrak{D}}^{\circ}=\coprod_{i}U_{i,I}/\sim$ with the gluing given by
$U_{i,I}\hookleftarrow
U_{i,i+1,I}\xrightarrow{\theta_{\gamma,\mathfrak{D}}}U_{i,i+1,I}\hookrightarrow
U_{i+1,I}.$
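As a simple illustration (with a hypothetical wall function chosen for
concreteness), consider a ray supported on $\rho_{i}$ itself, so
$(a,b)=(1,0)$, with wall function $f_{\mathfrak{d}}=1+z^{\beta}X_{i}^{-1}$.
For a path $\gamma$ crossing $\mathfrak{d}$ from $\sigma_{i-1,i}$ to
$\sigma_{i,i+1}$, the exponent $-bk_{i}+ak_{i+1}$ equals $k_{i+1}$, so
$\displaystyle\theta_{\gamma,\mathfrak{d}}:X_{i}\mapsto X_{i},\qquad X_{i+1}\mapsto X_{i+1}\left(1+z^{\beta}X_{i}^{-1}\right).$
In particular, monomials with $k_{i+1}=0$ are unchanged: the path ordered
product acts trivially in the direction along the wall.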
The following observation is important later for the comparison between the
Gross-Hacking-Keel mirror and the family Floer mirror in the examples
considered in this paper.
###### Remark 2.1.
When there are only finitely many rays $\mathfrak{d}$ with nontrivial
$f_{\mathfrak{d}}$ and $I=\mathfrak{m}$, one can replace $(Y,D)$ by a minimal
resolution such that all the $\mathbb{A}^{1}$-curves are transversal to the
toric boundary divisors. This procedure replaces $\Sigma$ by the refinement
given by the original canonical scattering diagram. The integral affine
manifold $B_{\mathrm{GHK}}$ remains the same. Then
$X_{I,\mathfrak{D}}^{\circ}$ is a gluing of tori, with one torus per chamber.
The next step in [GHK, ghks_cubic] is considering the broken lines to define
consistency of a scattering diagram and to construct the theta functions.
Since we will focus on the cases with only finitely many rays in the canonical
scattering diagram and wall functions are polynomials in this paper, we can
make use of path-ordered products directly without the use of broken lines.
For the definition of broken lines and theta functions, one can refer to [GHK,
Section 2.2]. Instead, to define consistency, we can extend the definition of
path ordered product to a path $\gamma:[0,1]\rightarrow B_{0}$
with starting point $q$ and endpoint $Q$, where neither $q$ nor $Q$ lies on
any ray. Then the path ordered product $\theta_{\gamma,\mathfrak{D}}$ can be
defined similarly by composing $\theta_{\gamma,\mathfrak{d}}$’s of the walls
$\mathfrak{d}$ passed by $\gamma$. Then the canonical scattering diagram
$\mathfrak{D}$ is consistent in the sense that the path ordered product
$\theta_{\gamma,\mathfrak{D}}$ only depends on the two end points $q$ and $Q$.
For a point $q\in B_{0}({\mathbb{Z}})$, let us assume
$q=av_{i-1}+bv_{i}\in\sigma_{i-1,i}$ and associate the monomial
$X_{i-1}^{a}X_{i}^{b}$ to $q$. Consider now another point $Q\in
B_{0}\setminus\bigcup_{\mathfrak{d}\in\mathfrak{D}_{can}}\mbox{Supp}\mathfrak{d}$
and a path $\gamma$ from $q$ to $Q$, then
$\vartheta_{q,Q}=\varprojlim\theta_{\gamma,\mathfrak{D}}(X_{i-1}^{a}X_{i}^{b})(\mbox{
mod }I)$. This is well-defined because the canonical scattering diagram
$\mathfrak{D}$ is consistent. We will define
$\vartheta_{0,Q}=\vartheta_{0}=1$. Thus the $\vartheta_{q,Q}$ for various $Q$
can be glued to give the global function
$\vartheta_{q}\in\Gamma(X_{I,\mathfrak{D}}^{\circ},{\mathcal{O}}_{X_{I,\mathfrak{D}}^{\circ}})$.
Then, by [GHK]*Theorem 2.28,
$X_{I,\mathfrak{D}}:=\operatorname{Spec}\Gamma\left(X_{I,\mathfrak{D}}^{\circ},{\mathcal{O}}_{X_{I,\mathfrak{D}}^{\circ}}\right)$
is a partial compactification of $X_{I,\mathfrak{D}}^{\circ}$.
### 2.2 Cluster varieties
Gross-Hacking-Keel-Kontsevich [GHKK] constructed cluster scattering diagrams
and showed that the cluster monomials can be expressed as theta functions
defined on the cluster scattering diagrams. The collections of the theta
functions form the bases of the (middle) cluster algebras defined by Fomin-
Zelevinsky [cluster1].
One can perform a similar construction as in the Gross-Hacking-Keel mirror
construction by associating each chamber in the cluster scattering diagram
with an algebraic torus. The path-ordered products (wall crossings) give
birational maps between the tori. The $\mathcal{A}_{\mathrm{prin}}$-cluster
varieties are then defined as the schemes (up to codimension 2) obtained from
gluing the tori associated to the chambers by the birational maps. The
${\mathcal{X}}$-cluster varieties can be described as quotients of the
$\mathcal{A}_{\mathrm{prin}}$-varieties by a torus action. The reader can
refer to [GHK_bir] for a detailed definition of cluster varieties.
We can define the $\mathcal{A}_{\mathrm{prin}}$-scattering diagrams and
$\mathcal{A}$-scattering diagrams given an injectivity assumption. We can
define the ${\mathcal{X}}$ scattering diagrams as quotients of the
$\mathcal{A}_{\mathrm{prin}}$-scattering diagrams as described in [broken].
Note that the underlying affine manifolds of the cluster scattering diagrams
do not carry any monodromy, and thus are not exactly the same as those of
the canonical scattering diagrams. The cluster scattering diagrams can be seen
as pushing the singularities of the affine structures of $B$ to infinity, as
explained in [vianna]. We will illustrate how to choose branch cuts and
decompose the monodromy of $B$ in Section 5.3. Then we can translate from the
canonical scattering diagrams to the cluster scattering diagrams. The
resulting schemes, no matter whether described by the canonical or the cluster
scattering diagrams, are determined (up to codimension 2) by the algebras
generated by the set of theta functions. We will see that the cases in this
article are all associated to cluster algebras.
For the dimension two case, the fixed data (we follow the definition of fixed
data as in [GHK_bir]) are given by the bilinear form $\begin{pmatrix}0&1\\\
-1&0\end{pmatrix}$ and the scalars $d_{1},d_{2}\in{\mathbb{N}}$. Given fixed
data, we can define the $\mathcal{A}$ (and ${\mathcal{X}}$) cluster varieties
such that the rings of regular functions carry the $\mathcal{A}$ (and
${\mathcal{X}}$) cluster structures respectively. Gross-Hacking-Keel-
Kontsevich [GHKK] showed that the middle $\mathcal{A}$ and ${\mathcal{X}}$
cluster algebras can be constructed from the theta functions of the
corresponding schemes. Relations between the generators $\vartheta_{i}$ in the
cluster complex of the (middle) ${\mathcal{X}}$ cluster algebras can be
expressed as
$\displaystyle\vartheta_{i-1}\cdot\vartheta_{i+1}$
$\displaystyle=\begin{cases}(1+\vartheta_{i})^{d_{1}},&\mbox{if $i$ is odd}\\\
(1+\vartheta_{i})^{d_{2}},&\mbox{if $i$ is even},\end{cases}$ (5)
where $i\in{\mathbb{Z}}$. Conversely, given such relations between the
variables, we can determine the algebras.
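For instance, in the $A_{2}$ case we have $d_{1}=d_{2}=1$, so (5) reads
$\vartheta_{i-1}\cdot\vartheta_{i+1}=1+\vartheta_{i}$. Starting from
$\vartheta_{1}=x$ and $\vartheta_{2}=y$ and iterating, one finds
$\displaystyle\vartheta_{3}=\frac{1+y}{x},\quad\vartheta_{4}=\frac{1+x+y}{xy},\quad\vartheta_{5}=\frac{1+x}{y},\quad\vartheta_{6}=x,$
so the recursion is $5$-periodic and there are exactly five cluster
variables, matching the five chambers of the $A_{2}$ cluster scattering
diagram and, on the mirror side, the five families of SYZ fibres bounding
holomorphic discs in $X_{II}$ described above.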
## 3 Set-Up of the Geometry
Consider $Y^{\prime}$ an extremal rational elliptic surface with its singular
configuration one of the following: $II^{*}II$, $III^{*}III$, $IV^{*}IV$. Here
we use the notation in the Kodaira classification of the singular fibres. We
will denote $Y^{\prime}=Y^{\prime}_{*}$, where $*=II,III$ or $IV$ is the type
of the fibre over zero. These rational elliptic surfaces can be constructed explicitly.
We will first consider the case where $Y^{\prime}$ is the unique rational
elliptic surface with singular configuration $II^{*}II$. Consider the compact
complex surface given by the minimal resolution of
$\displaystyle\\{ty^{2}z=tx^{3}+uz^{3}\\}\subseteq\mathbb{P}^{2}_{(x,y,z)}\times\mathbb{P}^{1}_{(u,t)}.$
(6)
By the Tate algorithm [T5], this is an elliptic surface with a type $II^{*}$
singular fibre over $u=\infty$. A straightforward calculation shows that it has
singular configuration $II^{*}II$ and produces the extremal rational elliptic
surface $Y^{\prime}_{II}$. By Castelnuovo’s criterion of rationality,
$Y^{\prime}$ is rational and thus a rational elliptic surface. The other
extremal rational elliptic surfaces can be constructed similarly with the
corresponding affine equations below [Miranda]*p.545:
$\displaystyle y^{2}=x^{4}+u\qquad\text{and}\qquad y^{2}=x^{3}+t^{2}s^{4}.$
Recall that any rational elliptic surface $Y^{\prime}$ has canonical bundle
$K_{Y^{\prime}}=\mathcal{O}_{Y^{\prime}}(-D^{\prime})$, where $D^{\prime}$
denotes an elliptic fibre. Thus, there exists a meromorphic $2$-form
$\Omega^{\prime}$ with simple pole along an assigned fibre which is unique up
to a $\mathbb{C}^{*}$-scaling. In particular, the non-compact surface
$X^{\prime}=Y^{\prime}\backslash D^{\prime}$ can be viewed as a non-compact
Calabi-Yau surface. Indeed,
###### Theorem 3.1.
[H] There exists a Ricci-flat Kähler form $\omega^{\prime}$ on $X^{\prime}$
for any choice of the fibre $D^{\prime}$. In particular, $2\omega^{\prime
2}=\Omega^{\prime}\wedge\bar{\Omega^{\prime}}$, and $X^{\prime}$ is
hyperKähler.
The Ricci-flat Kähler form may not be unique (even in a given cohomology
class), but we will just fix one for later purposes and family Floer mirror
computations will not be affected by such choices. Consider $D^{\prime}_{*}$
to be the infinity fibre in $Y^{\prime}_{*}$ and denote the hyperKähler
rotation of $X^{\prime}_{*}=Y^{\prime}_{*}\setminus D^{\prime}_{*}$ by
$X=X_{*}$. Explicitly, $X_{*}$ has the same underlying space as
$X^{\prime}_{*}$ and is equipped with the Kähler form and the holomorphic
volume form
$\displaystyle\omega$ $\displaystyle=\mbox{Re}\Omega^{\prime}$
$\displaystyle\Omega$
$\displaystyle=\omega^{\prime}-i\mbox{Im}\Omega^{\prime}.$ (7)
Then the elliptic fibration $X^{\prime}_{*}\rightarrow\mathbb{C}$ becomes the
special Lagrangian fibration $X_{*}\rightarrow B$, where
$B\cong\mathbb{R}^{2}$ topologically [HL] (see the diagram below). Let
$B_{0}\cong\mathbb{R}^{2}\setminus\\{0\\}$ be the complement of the
discriminant locus. We refer the reader to [CJL]*P.35 for a more explicit
calculation of the hyperKähler (HK) rotation. We will omit the subindex when
there is no confusion.
(Diagram: the elliptic fibration $Y^{\prime}\rightarrow{\mathbb{P}}^{1}$
restricts to $X^{\prime}=Y^{\prime}\setminus D^{\prime}\rightarrow
B\cong{\mathbb{C}}$, which becomes the special Lagrangian fibration
$X\rightarrow B\cong{\mathbb{R}}^{2}$ after the hyperKähler rotation
$\mathrm{HK}$.)
The fibrewise relative homology $H_{2}(X,L_{u})$ glues to a local system of
lattices over $B_{0}$. For any relative class $\gamma\in H_{2}(X,L_{u})$, we
define the central charge
$\displaystyle Z_{\gamma}(u):=\int_{\gamma}\Omega^{\prime}$
to be a function from the local system $\Gamma$ to $\mathbb{C}$. Notice that
$B_{0}\cong\mathbb{C}^{*}$ admits a complex structure and $Z_{\gamma}$ is
locally a holomorphic function in $u$ by [L4, Corollary 2.8] (since there is
monodromy, it is a multi-valued function on $B_{0}$). The
central charge will help to locate the special Lagrangian torus fibre bounding
holomorphic discs in Section 4.2.
### 3.1 Affine Structures of the Base
Let $(X,\omega)$ be a Kähler surface with holomorphic volume form $\Omega$
satisfying $2\omega^{2}=\Omega\wedge\bar{\Omega}$. Assume that $X$ admits a
special Lagrangian fibration $\pi:X\rightarrow B$, possibly with singular
fibres with respect to $(\omega,\Omega)$, i.e.,
$\omega|_{L}=\mbox{Im}\Omega|_{L}=0$ for any fibre $L$. We will use $L_{u}$ to
denote the fibre over $u\in B$. There are two natural integral affine
structures defined on $B_{0}$ by Hitchin [H2]. One is called the symplectic
affine structure and the other one is the complex affine structure. Given a
reference point $u_{0}\in B_{0}$ and a choice of the basis
$\check{e}_{1},\check{e}_{2}\in H_{1}(L_{u_{0}},\mathbb{Z})$, we will define
the local affine coordinates around $u_{0}$. For any $u\in B_{0}$ in a small
neighborhood of $u_{0}$, we choose a path $\phi$ contained in $B_{0}$
connecting $u,u_{0}$. Let $C_{k}$ be the $S^{1}$-fibration over the image of
$\phi$ such that the fibres are in the homology class of the parallel
transport of $\check{e}_{k}$ along $\phi$. Then the local symplectic affine
coordinates are defined by
$\displaystyle x_{k}(u)=\int_{C_{k}}\omega.$ (8)
It is straightforward to check that the transition functions fall in
$GL(2,\mathbb{Z})\rtimes\mathbb{R}^{2}$ (if there is a global Lagrangian
section, then the transition functions fall in $GL(2,\mathbb{Z})$), and thus
the above coordinates give an integral affine structure on $B_{0}$. Replacing
$\omega$ in (8) by $\mbox{Im}\Omega$ gives the complex integral affine
coordinates $\check{x}_{k}(u)=\int_{C_{k}}\mbox{Im}\Omega$.
###### Remark 3.2.
By the construction, primitive classes $\check{e}\in H_{1}(L_{u},\mathbb{Z})$
are in one-to-one correspondence with the primitive integral vectors in
$(T_{\mathbb{Z}}B_{0})_{u}$. Indeed, each $v\in T_{u}B_{0}$ has a
corresponding functional $\int_{-}\iota_{v}\mbox{Im}\Omega$ on
$H_{1}(L_{u},\mathbb{Z})$ and thus corresponds to a primitive element in
$H_{1}(L_{u},\mathbb{Z})$ via the natural symplectic pairing and Poincaré
duality.
###### Remark 3.3.
Since special Lagrangians have vanishing Maslov classes, all the special
Lagrangian torus fibres only bound Maslov index zero discs and the assumption
of the Lemma 4.1 holds. In general, it is hard to control all the bubbling of
pseudo-holomorphic discs (of Maslov index zero). However, when the Lagrangian
fibration is further special, the special Lagrangian fibres bounding
holomorphic discs of a fixed relative class map to an affine line with
respect to the complex affine structure under the fibration map $\pi$. Indeed, if
$u_{t}$ is a path in $B_{0}$ such that each $L_{u_{t}}$ bounds a holomorphic
disc in class $\gamma$ (we identify the relative classes via parallel
transport along the path $u_{t}$), then $\int_{\gamma}\mbox{Im}\Omega=0$ along
$u_{t}$ since $\gamma$ can be represented by a holomorphic cycle and cannot
support a non-zero holomorphic $2$-form. In particular, $u_{t}$ is contained
in an affine line with respect to the complex affine structure, which locally
we will denote by $l_{\gamma}$ (the notation $\gamma$, and thus $l_{\gamma}$,
is only locally defined since the monodromy of the fibration acts non-
trivially on the local system $\bigcup_{u\in B_{0}}H_{2}(X,L_{u})$). Notice that
$l_{\gamma}$ is naturally oriented such that the symplectic area of $\gamma$
is increasing along $l_{\gamma}$. From the expected dictionary in the
introduction, these affine lines correspond to rays in the scattering
diagrams, and Lemma 4.1 translates to the consistency of scattering diagrams.
We will use both integral affine structures later: the complex affine
structures will be used to locate the fibres bounding holomorphic discs (see
Section 4.2) while the symplectic affine structures will be used to define the
family Floer mirrors (see Section 4.3).
## 4 Floer Theory and Family Floer Mirror
In this section, we review the background for the explicit calculation of a
family Floer mirror in Section 5. We will review the family Floer mirror
construction of Tu [T4] in Section 4.3. Given a
Lagrangian torus fibration $X\rightarrow B$ with fibre $L_{u}$ over $u\in B$,
Fukaya-Oh-Ohta-Ono [FOOO] constructed $A_{\infty}$ algebras on de Rham
cohomologies of the fibres. Assuming that the fibres are unobstructed, the
exponentials of the corresponding Maurer-Cartan spaces are the analogues of
the dual tori for the original Lagrangian fibration. The family Floer mirror
is then the gluing of these exponentials of Maurer-Cartan spaces. The gluing
morphisms, known as the “quantum corrections” to the mirror complex structure,
are induced by the wall-crossing of the Maurer-Cartan spaces. Such wall-
crossing phenomena receive contributions from the holomorphic discs of Maslov
index zero with boundaries on SYZ fibres. We review the relation of the open
Gromov-Witten invariants and the gluing morphisms in Section 4.1. To have a
better understanding of the gluing morphisms, in Section 4.2 we study the
location of all fibres with non-trivial open Gromov-Witten invariants within
the geometry discussed in Section 3 by taking advantage of the special
Lagrangian boundary conditions.
### 4.1 Fukaya’s Trick and Open Gromov-Witten Invariants
We will first review the so-called Fukaya’s trick, which is a procedure for
comparing the variation of the $A_{\infty}$ structures of a Lagrangian and
those of its nearby deformations. We will use Fukaya’s trick to detect the
open Gromov-Witten invariants.
Let $X$ be a symplectic manifold with Lagrangian fibration $X\rightarrow B$.
Recall the definition of the Novikov field,
$\displaystyle\Lambda:=\left\\{\sum_{i\in\mathbb{N}}c_{i}T^{\lambda_{i}}\middle|\quad\lambda_{i}\in\mathbb{R},\lim_{i\rightarrow\infty}\lambda_{i}=\infty,c_{i}\in\mathbb{C}\right\\}.$
Denote
$\Lambda_{+}=\\{\sum_{i\in\mathbb{N}}c_{i}T^{\lambda_{i}}|\lambda_{i}>0\\}$
and $\Lambda^{*}=\Lambda\backslash\\{0\\}$. There is a natural valuation
$\displaystyle\mbox{val}:\Lambda^{*}\longrightarrow\mathbb{R},\qquad\sum_{i\in\mathbb{N}}c_{i}T^{\lambda_{i}}\mapsto\lambda_{i_{0}},$
where $i_{0}$ is the smallest $i$ with $c_{i}\neq 0$. One can extend the
domain of val to $\Lambda$ by setting $\mbox{val}(0)=\infty$.
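As a toy illustration (the code and representation are ours, not the paper's), one can model a truncated Novikov series by a finite list of (exponent, coefficient) pairs; the valuation then reads off the smallest exponent carrying a non-zero coefficient:

```python
# Toy model (not from the paper) of a truncated Novikov series: a finite list
# of (exponent lambda_i, coefficient c_i) pairs with real exponents.
def val(series):
    """Smallest exponent with non-zero coefficient; val(0) = infinity."""
    exps = [lam for lam, c in series if c != 0]
    return min(exps) if exps else float("inf")

f = [(0.5, 3), (1.0, 0), (2.0, -1)]   # 3*T^{1/2} + 0*T - T^2
zero = []                             # the zero element of Lambda

print(val(f))     # 0.5
print(val(zero))  # inf
```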
Let $B_{0}$ be the complement of the discriminant locus of the special
Lagrangian fibration and $L_{u}$ be the fibre over $u\in B_{0}$. Fix an almost
complex structure $J$ which is compatible with $\omega$. For our later
purposes, $X$ will be Kähler and we will take $J$ to be its complex structure.
Given a relative class $\gamma\in H_{2}(X,L_{u})$, we use
$\mathcal{M}_{\gamma}((X,J),L_{u})$ to denote the moduli space of stable
$J$-holomorphic discs in relative class $\gamma$ with respect to the almost
complex structure $J$. We may omit the $J$ if there is no confusion. Fukaya-
Oh-Ohta-Ono [FOOO] constructed a filtered unital $A_{\infty}$ structure
$\\{m_{k}\\}_{k\geq 0}$ on $H^{*}(L_{u},\Lambda)$ by considering the boundary
relations of $\mathcal{M}_{\gamma}((X,J),L_{u})$, for all $\gamma\in
H_{2}(X,L_{u})$. We will assume that there exist only Maslov index zero discs
in $X$. If we restrict to $\mbox{dim}_{\mathbb{R}}X=4$, then the moduli space
$\mathcal{M}_{\gamma}((X,J),L_{u})$ has virtual dimension negative one. In particular,
the Maurer-Cartan space associated to the $A_{\infty}$ structure is simply
$H^{1}(L_{u},\Lambda_{+})$.
Now we explain Fukaya’s trick. Given $p\in B_{0}$ and a path $\phi$ contained
in a small neighborhood of $p$ such that $\phi(0)=u_{-},\phi(1)=u_{+}$, one
can choose a $1$-parameter family of paths $\phi_{s}(t)$ such that
$\phi_{0}(t)=\phi(t),\phi_{1}(t)=p$ and $\phi_{s}(t)$ is contained in a small
enough neighborhood of $p$. It is illustrated as follows:
Figure 1: Fukaya’s trick (the paths $\phi_{s}$ interpolate between $\phi$, running from $\phi(0)=u_{-}$ to $\phi(1)=u_{+}$, and the constant path at $p$)
Then there exists a $2$-parameter family of fibrewise preserving
diffeomorphisms $f_{s,t}$ of $X$ such that
1. 1.
$f_{s,1}=id$.
2. 2.
$f_{s,t}$ sends $L_{\phi_{s}(t)}$ to $L_{p}$.
3. 3.
$f_{s,t}$ is the identity outside the preimage of a compact subset of $B_{0}$.
Then $J_{t}=(f_{1,t})_{*}J$ is a $1$-parameter family of almost complex
structures tamed with respect to $\omega$ since $\phi$ is contained in a small
enough neighborhood of $p$. There is a canonical isomorphism of moduli spaces
of holomorphic discs
$\displaystyle\mathcal{M}_{k,\gamma}((X,J),L_{\phi(t)})\cong\mathcal{M}_{k,(f_{1,t})_{*}\gamma}\big{(}(X,(f_{1,t})_{*}J),L_{p}\big{)}$
(9)
which carries over to the identification of the Kuranishi structures. However,
the $A_{\infty}$ structures from the two sides of (9) are not the same under the
parallel transport $H^{*}(L_{\phi(t)},\Lambda)\cong H^{*}(L_{p},\Lambda)$
because of the difference of the corresponding symplectic areas (also known as
the flux)
$\displaystyle\int_{{f_{1,t}}_{*}\gamma}\omega-\int_{\gamma}\omega=\left\langle\sum_{k=1}^{n}\big{(}x_{k}(\phi(t))-x_{k}(p)\big{)}e_{k},\
\partial\gamma\right\rangle,$
where $\\{e_{k}\\}$ is an integral basis of $H^{1}(L_{p},\mathbb{Z})$.
From the $1$-parameter family of almost complex structures $J_{t}$, one can
construct a pseudo-isotopy of unital $A_{\infty}$ structures on
$H^{*}(L_{p},\Lambda)$, connecting the $A_{\infty}$ structures on
$H^{*}(L_{p},\Lambda)$ from $u_{\pm}$ (for instance see [F2]). This induces a
pseudo-isotopy of $A_{\infty}$ structures from $H^{*}(L_{p},\Lambda)$ to
itself. In particular, this induces an isomorphism on the corresponding
Maurer-Cartan spaces, which are isomorphic to $H^{1}(L_{p},\Lambda_{+})$
because of the dimension,
$\displaystyle\Phi:H^{1}(L_{p},\Lambda_{+})\rightarrow
H^{1}(L_{p},\Lambda_{+}).$ (10)
A priori this is not the identity if $L_{\phi(t)}$ bounds holomorphic discs of
Maslov index zero for some $t\in[0,1]$ [FOOO]. The following lemma states that
$\Phi$ only depends on the homotopy class of the path $\phi$.
###### Lemma 4.1.
[T4]*Theorem 2.7 $\Phi\equiv 1$ (mod $\Lambda_{+}$) and $\Phi$ only depends on
the homotopy class of $\phi$, assuming no appearance of negative Maslov index
discs in the homotopy. In particular, if $\phi$ is a contractible loop, then
the corresponding $\Phi=1$ (before reducing modulo $\Lambda_{+}$). (Lemma 4.3.15
of [FOOO] shows that homotopic $A_{\infty}$-homomorphisms induce the same
map on the Maurer-Cartan spaces.)
The explicit form of $\Phi$ can be computed under the hyperKähler surface
assumption, and the theorem below shows that $\Phi$ acts like a wall-crossing
transformation in the Gross-Siebert program.
###### Theorem 4.2.
[L8]*Theorem 6.15 Let $\gamma$ be a primitive relative class and assume that
$k\gamma$, with $k\in\mathbb{N}$, are the only relative classes such that
$L_{\phi(t)}$ bounds holomorphic discs. Suppose that
$\mbox{Arg}Z_{\gamma}(u_{-})<\mbox{Arg}Z_{\gamma}(u_{+})$ (see Remark 5.17
for a discussion of the signs). Then the transformation $\Phi$ is given by
$\displaystyle\mathcal{K}_{\gamma}:z^{\partial\gamma^{\prime}}\mapsto
z^{\partial\gamma^{\prime}}f_{\gamma}^{\langle\gamma^{\prime},\gamma\rangle},$
(11)
for some power series $f_{\gamma}\in
1+T^{\omega(\gamma)}z^{\partial\gamma}\mathbb{R}[[T^{\omega(\gamma)}z^{\partial\gamma}]]$.
Here $\langle\gamma^{\prime},\gamma\rangle$ denotes the intersection pairing
of the corresponding boundary classes in the torus fibre.
The coefficients of $\log{f_{\gamma}}$ have enumerative meanings: counting
Maslov index zero discs bounded by the $1$-parameter family of Lagrangians
[L14], or counting rational curves with certain tangency conditions [GPS]. This
motivates the following definition of the open Gromov-Witten invariants
$\tilde{\Omega}(\gamma;u)$:
###### Definition 4.3.
[L8] With the notation as in Theorem 4.2 and $u\in B_{0}$ the intersection of
the image of $\phi$ and $l_{\gamma}$, the open Gromov-Witten invariants
$\tilde{\Omega}(\gamma;u)$ are defined via the formula
$\displaystyle\log{f_{\gamma}}=\sum_{d\in\mathbb{N}}d\tilde{\Omega}(d\gamma;u)(T^{\omega(\gamma)}z^{\partial\gamma})^{d}.$
For other choices of $(u,\gamma)$, we set $\tilde{\Omega}(\gamma;u)=0$. In
other words, $\tilde{\Omega}(\gamma;u)\neq 0$ only if
$\mathcal{M}_{\gamma}((X,J),L_{u})\neq\emptyset$ from (9).
BPS rays are then defined to be the loci of fibres supporting non-trivial
open Gromov-Witten invariants in a fixed relative class (up to parallel
transport).
###### Definition 4.4.
Given $u\in B_{0}$ and a relative class $\gamma\in H_{2}(X,L_{u})$, then the
associated BPS ray is defined (locally) to be
$\displaystyle l_{\gamma}:=\\{u^{\prime}\in B_{0}\ |\
\tilde{\Omega}(\gamma;u^{\prime})\neq 0\mbox{ and
$Z_{\gamma}(u^{\prime})\in\mathbb{R}_{+}$}\\}.$
Recall that the geometry $X=X_{*}$ is not compact and a priori the moduli
spaces of $J$-holomorphic discs may not be compact. However, from the
curvature decay and injectivity radius decay in [H, Theorem 1.5] and the
qualitative version of Gromov compactness theorem (see for instance [CJL,
Proposition 5.3] or [Gro]), the moduli spaces of discs in $X$ with compact
Lagrangian boundary conditions are compact. To compute the open Gromov-Witten
invariants on $X$, we first recall the following fact: given a rational
elliptic surface $Y^{\prime}$ and a fibre $D^{\prime}$, there exists a
$1$-parameter deformation $(Y^{\prime}_{t},D^{\prime}_{t})$ such that $D^{\prime}_{t}\cong
D^{\prime}$ and $Y^{\prime}_{t}$ are rational elliptic surfaces whose singular
fibres other than $D^{\prime}_{t}$ are all of type $I_{1}$. The following theorem
explains how to compute the local open Gromov-Witten invariants near a general
singular fibre other than those of type $I_{1}$. We will denote $X_{t}$ to be
the hyperKähler rotation of $Y^{\prime}_{t}\backslash D^{\prime}_{t}$ with
relation similar to (3). Then let $X_{t}\rightarrow B_{t}$ be a $1$-parameter
family of hyperKähler surfaces with special Lagrangian fibration and
$X_{0}=X$. We will identify $B_{t}\cong B_{0}=B$ topologically.
###### Theorem 4.5.
[L12]*Theorem 4.3 Given any $u\in B_{0}$, $\gamma\in H_{2}(X,L_{u})$, there
exists $t_{0}$ and a neighborhood $\mathcal{U}\subseteq B_{0}$ of $u$ such
that
1. 1.
If $\tilde{\Omega}(\gamma;u)=0$, then $\tilde{\Omega}(\gamma;u^{\prime})=0$
for $u^{\prime}\in\mathcal{U}$.
2. 2.
If $\tilde{\Omega}(\gamma;u)\neq 0$, then
$l^{t}_{\gamma}\cap\mathcal{U}\neq\emptyset$ and
$\displaystyle\tilde{\Omega}_{t}(\gamma;u^{\prime})=\tilde{\Omega}(\gamma;u),$
for $u^{\prime}\in l^{t}_{\gamma}\cap\mathcal{U}$ and $t$ with $|t|<t_{0}$.
Here $\tilde{\Omega}_{t}(\gamma;u)$ denotes the open Gromov-Witten invariant
of $X_{t}$.
For instance, when the singular configuration of $Y^{\prime}$ is
$II^{*}II$, the BPS rays of $X_{t}$ look like the following picture,
with the notation defined in Section 5:
Figure 2: BPS rays on $B_{t}$ for the case discussed in Section 5 (the rays are labelled $\gamma_{2}$, $\gamma_{1}+\gamma_{2}$, $\gamma_{1}$, $-\gamma_{2}$, $-\gamma_{1}$)
### 4.2 Location of BPS Rays
In this section, we will restrict to the case where $Y^{\prime}$ has exactly
two singular fibres at $0,\infty$ and the monodromy of each singular fibre is
of finite order. The examples listed in Section 3 are exactly those possible
$Y^{\prime}$. We will show that the BPS rays divide the base into chambers
which are in one-to-one correspondence with the torus charts of the family
Floer mirror constructed later. In particular, the following observation
simplifies the explicit computation of the family Floer mirror.
###### Lemma 4.6.
Let $\gamma_{i}$ be the relative classes in Theorem 5.8. Then the rays
$l_{\gamma_{i}}$ do not intersect one another. In particular, $B_{0}$ is
divided into chambers by the $l_{\gamma_{i}}$.
###### Proof.
Let $v\in TB$ and recall that
$vZ_{\gamma}=\int_{\partial\gamma}\iota_{\tilde{v}}\Omega$, where $\tilde{v}$
is a lifting of $v$, by direct computation. Since $\Omega$ is holomorphic
symplectic, it follows that $Z_{\gamma}$ has no critical point in $B_{0}$. Let
$l_{\gamma}$ be a BPS ray; then by definition the holomorphic function
$Z_{\gamma}$ has phase $0$ along $l_{\gamma}$. Now take $v$ to be the tangent
of $l_{\gamma}$ at $u\in l_{\gamma}$ pointing away from the origin. Then
$vZ_{\gamma}(u)\neq 0$: otherwise $u$ would be a critical point of
$Z_{\gamma}$, contradicting the fact that $\Omega$ is holomorphic symplectic.
In other words, the function $|Z_{\gamma}|$ is strictly increasing along
$l_{\gamma}$.
Next we claim that $l_{\gamma}$ cannot be contained in a compact set.
Otherwise, there exists a sequence of points $u_{i}\in l_{\gamma}$ converging
to some point $u_{\infty}\in B_{0}$. Since the order of the monodromy is
finite, there are only finitely many possible relative classes among the
$\gamma_{u_{i}}$ with respect to the trivialization of the local system
$H_{2}(X,L_{u})$ in a small neighborhood of $u_{\infty}$. Here
$\gamma_{u_{i}}$ is the parallel transport of $\gamma$ to $u_{i}$. After
passing to a subsequence, one has
$\lim_{i\rightarrow\infty}Z_{\gamma}(u_{i})=Z_{\gamma}(u_{\infty})$. Since
$u_{\infty}\in B_{0}$, the ray $l_{\gamma}$ can be extended past $u_{\infty}$,
which leads to a contradiction. Therefore, $l_{\gamma}$ connects $0$ and
$\infty$.
Recall the asymptotic geometry of $X^{\prime}$ from [BPV, p.208]: consider the
model space $X_{mod}$ defined by
$\displaystyle
X_{mod}:=\\{(u,v)\in\mathbb{C}\times\mathbb{C}\ |\ |u|>R,\ \mbox{Arg}(u)\in[0,2\pi\beta]\\}/\sim$
where the equivalence relation is
$\displaystyle(u,v)$ $\displaystyle\sim(e^{2\pi i\beta}u,e^{-2\pi
i\beta}v),\mbox{ for }u\in\mathbb{R}_{+}$ $\displaystyle(u,v)$
$\displaystyle\sim(u,v+m+n\tau),\mbox{ for }m,n\in\mathbb{Z}.$
Here $\beta,\tau$ are suitable constants depending on the type of the fibre at
infinity [BPV, p.208, Table 5]. Then there exists a compact set $K\subseteq
X^{\prime}$, a diffeomorphism $\Psi:X^{\prime}\setminus K\rightarrow X_{mod}$
and a holomorphic volume form $\Omega^{\prime}$ on $X^{\prime}$ such that
$\displaystyle(\Psi^{-1})^{*}\Omega^{\prime}=du\wedge dv+O(|u|^{-1}).$
Here $O(|u|^{-1})$ denotes differential forms whose components in the
$(u,v)$-coordinates are of order $O(|u|^{-1})$. Then a straightforward
calculation shows that $|Z_{\gamma}|\nearrow\infty$ along
$l_{\gamma}$.
Notice that the above argument holds for $l_{\gamma}^{\theta}$, where
$l_{\gamma}^{\theta}$ is the locus where $Z_{\gamma}$ has phase $\theta\in
S^{1}$. This implies that $|Z_{\gamma}(u)|\rightarrow\infty$ as
$u\rightarrow\infty$. Recall that $Z_{\gamma}(u)$ is a multi-valued
holomorphic function on $B_{0}\cong\mathbb{C}^{*}$. Since
$\pi_{1}(B_{0})\cong\mathbb{Z}$ and the monodromy is of order $k$, the
function $Z_{\gamma}(u^{k})$ is a well-defined holomorphic function
$\mathbb{C}^{*}\rightarrow\mathbb{C}^{*}$. A straightforward calculation gives
$\lim_{u\rightarrow 0}Z_{\gamma}(u^{k})=0$, so $u=0$ is a removable
singularity. The previous discussion implies that $\infty$ is a pole, and the
holomorphic function $Z_{\gamma}(u^{k})$ extends to a map
$\mathbb{P}^{1}\rightarrow\mathbb{P}^{1}$ fixing $0$ and $\infty$. Thus, we
conclude that
$\displaystyle Z_{\gamma}(u^{k})=cu,$ (12)
for some constant $c\in\mathbb{C}^{*}$ and the lemma follows. ∎
###### Remark 4.7.
Let $Y_{t}$ be a small deformation of $Y$ such that $Y_{t}$ has a fibre
isomorphic to $D$ and all other singular fibres are of type $I_{1}$, then the
BPS rays still divide the base into five chambers.
###### Remark 4.8.
Let $Y$ be the del Pezzo surface of degree five and $D$ be an anti-canonical
divisor consisting of a wheel of five rational curves. Set $X=Y\setminus D$. It
is known that $X$ is the moduli space of flat connections on a punctured sphere
[Bo]. There exists a hyperKähler metric on it such that a suitable hyperKähler
rotation becomes a meromorphic Hitchin moduli space, namely $X^{\prime}$,
the complement of the $II^{*}$ fibre of the rational elliptic surface
$Y^{\prime}$ with singular configuration $II^{*}II$. It is not clear if the
holomorphic volume form $\Omega^{\prime}$ on $X^{\prime}$ extends as a
meromorphic form with a simple pole along the $II^{*}$ fibre. However, since the
Hitchin metric is exponentially asymptotic to the semi-flat metric at infinity
[FMSW], the proof of Lemma 4.6 also applies to this case.
### 4.3 Construction of the Family Floer Mirror
We will briefly recall the construction of the family Floer mirror due to
Tu [T4] in this section. We refer to [C] for the details of the analytic
geometry.
###### Definition 4.9.
Let $U\subseteq B_{0}$ be an open set and $\psi:U\rightarrow\mathbb{R}^{n}$ be
the affine coordinate such that $\psi(u)=0$ for some $u\in U$. Then we say $U$
is a rational domain if $U\cong\psi(U)=P\subseteq\mathbb{R}^{n}$ is a rational
convex polytope. The Tate algebra $T_{U}$ associated to a rational domain $U$
consists of the power series of the form
$\displaystyle\sum_{\mathbf{k}\in\mathbb{Z}^{n}}a_{\mathbf{k}}z_{1}^{k_{1}}\cdots
z_{n}^{k_{n}},$
where $\mathbf{k}=(k_{1},\cdots,k_{n})$ with the following conditions:
1. 1.
$a_{\mathbf{k}}\in\Lambda$ the Novikov field and
2. 2.
(convergence in $T$-adic topology)
$\displaystyle\mbox{val}(a_{\mathbf{k}})+\langle\mathbf{k},\mathbf{x}\rangle\rightarrow\infty,$
(13)
as $\mathbf{k}\rightarrow\infty$, for each $\mathbf{x}=(x_{1},\cdots,x_{n})\in
U$.
For such a rational domain $U$, we denote by $\mathcal{U}$ the maximal
spectrum of the associated Tate algebra $T_{U}$.
###### Remark 4.10.
Recall that the Novikov field $\Lambda$ is algebraically closed. From
Proposition 3.1.8 (c) [EKL], $\mathcal{U}$ is identified with
$\mbox{Val}^{-1}(\psi(U))$, where
$\mbox{Val}:(\Lambda^{*})^{n}\rightarrow\mathbb{R}^{n}$ is component-wise the
valuation on $\Lambda^{*}$. If $f\in T_{U}$, then $f(x)$ converges for all
points $x\in\mathcal{U}$ and thus we may view $f$ as a function defined on
$\mbox{Val}^{-1}(\psi(U))$.
Take an open cover of rational domains $\\{U_{i}\\}_{i\in I}$ of $B_{0}$ with
affine coordinates $\psi_{i}:U_{i}\rightarrow\mathbb{R}^{n}$ such that
$\psi_{i}(u_{i})=0\in\mathbb{R}^{n}$ for some $u_{i}\in U_{i}$. For each pair
$i,j$ with $U_{i}\cap U_{j}\neq\emptyset$, there is a natural gluing data
$\displaystyle\Psi_{ij}:\mathcal{U}_{i}\rightarrow\mathcal{U}_{j},$
which we now explain. Choose $p\in U_{i}\cap U_{j}$ and a fibrewise
preserving diffeomorphism $f_{u_{i},p}$ sending $L_{u_{i}}$ to
$L_{p}$ which is the identity outside $U_{i}$. Let $(x^{i}_{1},\cdots,x^{i}_{n})$
(and $(x^{j}_{1},\cdots,x^{j}_{n})$) be the local symplectic affine
coordinates on $U_{i}$ (and $U_{j}$ respectively) with respect to the same set
of basis up to parallel transport within $U_{i},U_{j}$. The corresponding
functions in $T_{U_{i}}$ are denoted by $(z^{i}_{1},\cdots,z^{i}_{n})$, where
$\mbox{val}(z^{i}_{k})=x^{i}_{k}$.
The difference of symplectic affine coordinates is
$\displaystyle\int_{{f_{u_{i},p}}_{*}\gamma}\omega-\int_{\gamma}\omega=\left\langle\sum_{k=1}^{n}\big{(}x_{k}(p)-x_{k}(u_{i})\big{)}e^{i}_{k},\partial\gamma\right\rangle.$
Denote by $T_{U_{i,p}}$ the Tate algebra satisfying the $T$-adic convergence
condition (13) on the rational domain $\psi_{i}(U_{i})-\psi_{i}(p)$, the
translation of $\psi_{i}(U_{i})$, and denote by $\mathcal{U}_{i,p}$ the
spectrum of $T_{U_{i,p}}$. Then there is the transition map
$\displaystyle S_{u_{i},p}:\mathcal{U}_{i}$
$\displaystyle\rightarrow\mathcal{U}_{i,p}$ $\displaystyle z^{i}_{k}$
$\displaystyle\mapsto T^{x_{k}(u_{i})-x_{k}(p)}z^{i}_{k}.$ (14)
Then define the gluing data $\Psi_{ij}$ to be the composition
$\displaystyle\Psi_{ij}:\mathcal{U}_{ij}\xrightarrow{S_{u_{i},p}}\mathcal{U}_{ij,p}\xrightarrow{\Phi_{ij}}\mathcal{U}_{ji,p}\xrightarrow{S^{-1}_{u_{j},p}}\mathcal{U}_{ji},$
(15)
where $\Phi_{ij}$ is defined in (10). The gluing data $\Psi_{ij}$ satisfies
(Section 4.9 [T4])
1. 1.
$\Psi_{ij}$ is independent of the choice of the reference point $p\in U_{i}\cap U_{j}$.
2. 2.
$\Psi_{ij}=\Psi_{ji}$ and $\Psi_{ij}\Psi_{jk}=\Psi_{ik}$.
3. 3.
For $p\in U_{i}\cap U_{j}\cap U_{k}$, we have
$\Psi_{ij}(\mathcal{U}_{ij}\cap\mathcal{U}_{ik})\subseteq\mathcal{U}_{ji}\cap\mathcal{U}_{jk}$.
Then the family Floer mirror $\check{X}$ is defined to be the gluing of the
affinoid domains
$\displaystyle\check{X}:=\bigcup_{i\in I}\mathcal{U}_{i}/\sim,$ (16)
where $\sim$ is defined by (15). The natural projection maps
$\mathcal{U}_{i}\rightarrow U_{i}$ induced by the valuation glue together and
give the family Floer mirror a projection map
$\displaystyle\mathfrak{Trop}:\check{X}\rightarrow B_{0}.$
The following example follows straightforwardly from the construction.
###### Example 4.11.
Recall that the rigid analytic torus $(\mathbb{G}_{m}^{an})^{2}$ admits a
valuation map
$\mathfrak{Trop}:(\mathbb{G}_{m}^{an})^{2}\rightarrow\mathbb{R}^{2}$. Let
$\pi:X\rightarrow U$ be a Lagrangian fibration such that for any path $\phi$
connecting $u_{i},u_{j}\in U$, the corresponding $F_{\phi}=id$. Assume that
the symplectic affine coordinates give an embedding
$U\rightarrow\mathbb{R}^{2}$ and we will simply identify $U$ with its image.
Then the gluing
$\displaystyle\Psi_{ij}:\mathcal{U}_{ij}$
$\displaystyle\rightarrow\mathcal{U}_{ji}$ $\displaystyle z_{k}^{i}$
$\displaystyle\mapsto T^{x_{k}(u_{i})-x_{k}(u_{j})}z_{k}^{j},$
is simply a translation, by equation (15). Thus the corresponding family Floer
mirror $\check{X}$ is simply $\mathfrak{Trop}^{-1}(U)\rightarrow U$. In
particular, when $U\cong\mathbb{R}^{2}$, the family Floer mirror is simply the
rigid analytic torus $(\mathbb{G}_{m}^{an})^{2}$. It is worth noticing that if
$U\subseteq\mathbb{R}^{2}$ is a proper subset, then $\mathfrak{Trop}^{-1}(U)$
is not a dense subset of $(\mathbb{G}_{m}^{an})^{2}$.
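In a toy model (hypothetical code, with a point of $\mbox{Val}^{-1}(\psi(U))$ recorded only by the valuations of its coordinates), the gluing $z_{k}^{i}\mapsto T^{x_{k}(u_{i})-x_{k}(u_{j})}z_{k}^{j}$ acts on valuations by adding $x_{k}(u_{i})-x_{k}(u_{j})$, i.e., as a pure translation of $\mathbb{R}^{2}$ matching the $\mathfrak{Trop}$-fibres:

```python
# Toy model: record a point of Val^{-1}(psi(U)) by the valuations of its
# coordinates.  The gluing z^i_k -> T^{x_k(u_i) - x_k(u_j)} z^j_k shifts each
# valuation by the difference of the affine coordinates of u_i and u_j.
def glue(vals_j, x_ui, x_uj):
    return tuple(v + (a - b) for v, a, b in zip(vals_j, x_ui, x_uj))

# Hypothetical affine coordinates of two base points u_i, u_j:
x_ui, x_uj = (1.0, 2.0), (0.5, 2.5)

p = (0.25, -0.75)            # valuations of a point in chart j
print(glue(p, x_ui, x_uj))   # (0.75, -1.25): a pure translation of R^2
```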
## 5 Family Floer Mirror of $X_{II}$
In this section, we will carry out a detailed computation of the family Floer
mirror of $X=X_{II}$ (see Section 3). We sketch the proof below. We will first
identify the locus of special Lagrangian fibres bounding holomorphic discs as
five rays $l_{\gamma_{i}}$ connecting $0$ and $\infty$. Then we compute
their corresponding wall-crossing transformations, which are analytifications
of certain birational maps. Thus, the family Floer mirror $\check{X}$ can be
glued from five charts. Then we will prove that the embedding of each of the
five charts into $\check{X}$ can be extended to an embedding of the analytic
torus $\mathbb{G}_{m}^{an}$ into $\check{X}$. In other words, $\check{X}$ is a
gluing of five analytic tori. On the other hand, consider the del Pezzo
surface $Y$ of degree $5$ and let $D$ be the cycle of five rational curves. Let
$B_{\mathrm{GHK}}$ be the affine manifold with the singularity constructed in
Section 2.1. We identify the complex affine structure on $B$ with that on
$B_{\mathrm{GHK}}$, together with the rays and the corresponding wall-crossing
transformations. Then from [GHK]*Example 3.7, we know that $\check{X}$ is the
analytification of the del Pezzo surface of degree five. Furthermore, by
choosing the branch cuts on $B$ in a different way, we obtain another
realization of $\check{X}$ as a gluing of five tori with different gluing
morphisms, which later identifies $\check{X}$ with the
${\mathcal{X}}$-cluster variety of type $A_{2}$.
As discussed in Section 3, the base $B$ with the complex structure from the
elliptic fibration $Y^{\prime}_{II}\rightarrow\mathbb{P}^{1}$ is biholomorphic
to $\mathbb{C}$ as a subset of $\mathbb{P}^{1}$. We may choose a holomorphic
coordinate $u$ on $B$ such that the fibres over $u=0,\infty$ are the type
$II,II^{*}$ singular fibres. Such a coordinate is unique up to a
$\mathbb{C}^{*}$-scaling, which we will determine later. Applying Theorem
4.5 to the $1$-parameter family of hyperKähler rotations of the rational
elliptic surfaces described in Section 3, one gets:
###### Theorem 5.1.
[L12]*Theorem 4.11 Choose a branch cut from the singularity to infinity and a
basis $\\{\gamma_{1}^{\prime},\gamma_{2}^{\prime}\\}$ of
$H_{2}(X,L_{u})\cong\mathbb{Z}^{2}$ such that
$\langle{\gamma_{1}^{\prime}},{\gamma_{2}^{\prime}}\rangle=1$ and the counter-
clockwise monodromy $M$ around the singularity is
$\displaystyle{\gamma_{1}^{\prime}}$
$\displaystyle\mapsto-{\gamma_{2}^{\prime}}$
$\displaystyle{\gamma_{2}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}.$ (17)
Then for $|u|\ll 1$, one has
$\displaystyle\tilde{\Omega}(\gamma;u)=\begin{cases}\frac{(-1)^{d-1}}{d^{2}},&\mbox{if
}\gamma=\pm d{\gamma_{1}^{\prime}},\pm d{\gamma_{2}^{\prime}},\pm
d({\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}),\mbox{ for }d\in\mathbb{N}\\\
0,&\mbox{otherwise.}\end{cases}$
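As a consistency check (our computation, combining Definition 4.3 with the invariants above): for one of these classes, $\sum_{d}d\tilde{\Omega}(d\gamma;u)q^{d}=\sum_{d}\frac{(-1)^{d-1}}{d}q^{d}$ is the Taylor series of $\log(1+q)$, so the corresponding wall-crossing function is $f_{\gamma}=1+T^{\omega(\gamma)}z^{\partial\gamma}$. A quick numerical sketch:

```python
import math

# Open Gromov-Witten invariants of Theorem 5.1 for a multiple cover d*gamma.
def omega_tilde(d):
    return (-1) ** (d - 1) / d ** 2

# q stands for the monomial T^{omega(gamma)} z^{d gamma}; take |q| < 1 so the
# series of Definition 4.3 converges numerically.
q = 0.3
log_f = sum(d * omega_tilde(d) * q ** d for d in range(1, 200))
print(abs(log_f - math.log(1 + q)) < 1e-12)  # True: log f_gamma = log(1 + q)
```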
###### Remark 5.2.
See Remark 2.10 [L12] for the relation between $\tilde{\Omega}(\gamma;u)$ and
${\Omega}(\gamma;u)$ in the reference.
###### Remark 5.3.
The monodromy (17) acts transitively on the set
$\\{\pm{\gamma_{1}^{\prime}},\pm{\gamma_{2}^{\prime}},\pm({\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}})\\}$.
###### Remark 5.4.
If $A\in GL(2,\mathbb{Z})$ such that $A\begin{pmatrix}0&-1\\\
1&1\end{pmatrix}=\begin{pmatrix}0&-1\\\ 1&1\end{pmatrix}A$, then
$A=\begin{pmatrix}0&-1\\\ 1&1\end{pmatrix}^{k}$ for some $k\in\mathbb{N}$.
Therefore, if $\gamma_{1},\gamma_{2}\in H_{2}(X,L_{u})$ are such that the
monodromy takes the form (17), then
$\gamma_{1}=M^{k}{\gamma_{1}^{\prime}},\gamma_{2}=M^{k}{\gamma_{2}^{\prime}}$
for some $k\in\mathbb{N}$.
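The matrix identities used here can be verified directly (a minimal sketch in our notation): $M=\begin{pmatrix}0&-1\\\ 1&1\end{pmatrix}$ satisfies $M^{2}=M-I$, which together with $\gamma_{i+1}=M\gamma_{i}$ gives the recurrence $\gamma_{i-1}-\gamma_{i}+\gamma_{i+1}=0$, and $M$ has order exactly six in $GL(2,\mathbb{Z})$:

```python
# Verify properties of the monodromy matrix M = [[0, -1], [1, 1]] from (17).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = matmul(R, A)
    return R

M = [[0, -1], [1, 1]]
ID = [[1, 0], [0, 1]]

# M^2 = M - I, i.e. M^2 - M + I = 0: with gamma_{i+1} = M gamma_i this is
# exactly the recurrence gamma_{i-1} - gamma_i + gamma_{i+1} = 0.
print(matpow(M, 2) == [[M[i][j] - ID[i][j] for j in range(2)] for i in range(2)])  # True

# M has order exactly 6.
print(matpow(M, 6) == ID and all(matpow(M, k) != ID for k in range(1, 6)))         # True
```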
Furthermore, we will next show that these are the only possible relative
classes with non-trivial open Gromov-Witten invariants even globally in
$X_{II}$.
###### Corollary 5.5.
If $\tilde{\Omega}(\gamma;u)\neq 0$, then $\gamma$ is one of the relative
classes in Theorem 5.1, $u\in l_{\gamma}$ and
$\tilde{\Omega}(\gamma;u)=\frac{(-1)^{d-1}}{d^{2}}$, where $d$ is the
divisibility of $\gamma$.
###### Proof.
This is a direct consequence of the split attractor flow mechanism of the open
Gromov-Witten invariants $\tilde{\Omega}(\gamma;u)$ (see [L8]*Theorem 6.32).
We will sketch the proof here for self-containedness. Let $l_{\gamma}$ be a
ray emanating from $u$ such that $\omega(\gamma)$ is decreasing along
$l_{\gamma}$. By the Gromov compactness theorem, the loci where
$\tilde{\Omega}(\gamma)$ jumps are discrete. If $\tilde{\Omega}(\gamma)$ is
invariant along $l_{\gamma}$, then the holomorphic discs representing $\gamma$
can only fall into a tubular neighborhood of the singular fibre over $0$ by
[CJL]*Proposition 5.3 once the symplectic area decreases to a small enough
value; therefore, by Theorem 5.1, $\gamma$ is one of the relative classes
listed there. Otherwise, let $u_{1}$ be the first point where
$\tilde{\Omega}(\gamma)$ jumps. Applying Lemma 4.1 to a small loop around
$u_{1}$, there exist $\gamma_{\alpha}$, $\alpha\in A$, such that
$\tilde{\Omega}(\gamma_{\alpha};u_{1})\neq 0$ and $\gamma=\sum_{\alpha\in
A}\gamma_{\alpha}$ with $|A|\geq 2$. In particular,
$\omega(\gamma_{\alpha})<\omega(\gamma)$. One may replace $(\gamma,u)$ by
$(\gamma_{\alpha},u_{1})$ and repeat the procedure. Again by the Gromov
compactness theorem, after finitely many splittings all the relative classes
are among those listed in Theorem 5.1. To sum up, there exist a rooted tree
$T$ and a continuous map $f$ such that the root maps to $u$, each edge is
mapped to an affine line segment, and all the $1$-valent vertices are mapped
to $0$. Since the affine lines corresponding to the relative classes in
Theorem 5.1 do not intersect by Lemma 4.6, the lemma follows. ∎
Although there are six families of relative classes supporting non-trivial
open Gromov-Witten invariants, we next explain that there are actually only
five BPS rays emanating from $u=0$, due to the monodromy.
Thanks to Remark 5.3, we will choose the scaling of the coordinate $u$ such
that the branch cut is $\mbox{Arg}(u)=0$ and
$l_{\gamma_{1}}=\\{u\in\mathbb{R}_{+}\\}$, where
$\gamma_{1}=-{\gamma_{1}^{\prime}}$. Denote $\gamma_{i+1}=M\gamma_{i}$ on the
complement of the branch cut, for $i\in\mathbb{Z}$. A straightforward
calculation shows that $\gamma_{i+6}=\gamma_{i}$ and
$\gamma_{i-1}-\gamma_{i}+\gamma_{i+1}=0$. Denote the symplectic and complex
affine coordinates (with respect to $\gamma_{k},\gamma_{k+1}$) discussed in
Section 3.1 by
$\displaystyle x_{k}$ $\displaystyle=\int_{\gamma_{k}}\omega,\qquad
y_{k}=\int_{\gamma_{k+1}}\omega$ (18) $\displaystyle\check{x}_{k}$
$\displaystyle=\int_{\gamma_{k}}\mbox{Im}\Omega,\qquad\check{y}_{k}=\int_{\gamma_{k+1}}\mbox{Im}\Omega.$
(19)
We will also denote
$\displaystyle x$ $\displaystyle=\int_{-{\gamma_{1}^{\prime}}}\omega,\qquad
y=\int_{{\gamma_{2}^{\prime}}}\omega,$ $\displaystyle\check{x}$
$\displaystyle=\int_{{\gamma_{2}^{\prime}}}\mbox{Im}\Omega,\qquad\check{y}=\int_{-{\gamma_{1}^{\prime}}}\mbox{Im}\Omega,$
(20)
which give another set of symplectic/complex affine coordinates.
Recall that $x_{k}(u)-i\check{x}_{k}(u)=Z_{\gamma_{k}}(u)$ is a holomorphic
function with respect to the above complex structure on $B$, defined on the
complement of the branch cut, and can be analytically continued to a
multi-valued holomorphic function on $\mathbb{C}^{*}$. In particular, if
$\gamma_{k}$ is a relative class in Theorem 5.1, then $x_{k}(u)>0$ and
$\check{x}_{k}(u)=0$ along the BPS ray $l_{\gamma_{k}}$. From (12), one has
$\displaystyle x_{k}-i\check{x}_{k}=c_{k}u^{\frac{a}{6}},\quad
k\in\mathbb{Z},$ (21)
for some constants $a\in\mathbb{N},c_{k}\in\mathbb{C}^{*}$. With more
analysis, we have the following lemma.
###### Lemma 5.6.
With the above choice of coordinate $u$ on $B_{0}\cong\mathbb{C}^{*}$, we have
$\displaystyle x_{k}-i\check{x}_{k}=e^{2\pi
i(k-1)\frac{5}{6}}u^{\frac{5}{6}}.$ (22)
In particular, the angle between $l_{\gamma_{k}}$ and $l_{\gamma_{k+1}}$ is
$\frac{2\pi}{5}$ with respect to the conformal structure after the hyperKähler
rotation. (Notice that there is no well-defined notion of angle with only an
affine structure on $B_{0}$, and thus one does not see this aspect on the
affine manifold used in Gross-Hacking-Keel.)
###### Proof.
From the normalization, we have $x_{1}-i\check{x}_{1}=u^{\frac{a}{6}}$. Recall
that $Z_{\gamma_{k}}:=x_{k}-i\check{x}_{k}$. From the monodromy
$M{\gamma_{k}}=\gamma_{k+1}$, we have
$\displaystyle Z_{\gamma_{k+1}}(u)=Z_{\gamma_{k}}(ue^{2\pi i})=e^{2\pi
i\frac{a}{6}}Z_{\gamma_{k}}(u).$
Here recall that $Z_{\gamma_{i}}(u)$ is a priori only defined on the
complement of the branch cut, and we use $Z_{\gamma}(ue^{2\pi i})$ to denote
the value at $u$ of the analytic continuation once counter-clockwise across
the branch cut.
Now it suffices to show that $a=5$, or equivalently that
$Z_{\gamma_{i}}(u)=O(|u|^{\frac{5}{6}})$. This can be seen by direct
computation. Indeed, for $u$ close enough to the origin $0\in B$, the
representatives of $\gamma_{i}$ can be chosen to be in a neighborhood of the
singular point of the type $II$ singular fibre. In such a neighborhood,
$X^{\prime}_{II}$ is defined by $y^{2}=x^{3}+u$ from (6). One can write
$\Omega^{\prime}=2f(u)\,dy\wedge dx=f(u)\,du\wedge\frac{dx}{y}$ (notice
that $u=y^{2}-x^{3}$ is a well-defined function on the chart) for some
holomorphic function $f(u)$ with $f(0)\neq 0$, since $\Omega^{\prime}$ is a
nowhere vanishing holomorphic $2$-form on $X^{\prime}_{II}$. Recall that the
fibre over $u$ is topologically the compactification of $y^{2}=x^{3}+u$, a
double cover of the $x$-plane ramified at three points
$\zeta^{i}(-u)^{\frac{1}{3}},i=0,1,2$ where $\zeta=\exp{(2\pi i/3)}$. A path
connecting $\zeta^{i}(-u)^{\frac{1}{3}},\zeta^{j}(-u)^{\frac{1}{3}}$ in the
$x$-plane lifts to an $S^{1}$ in the fibre. Consider the $2$-chain
$\gamma_{i,j},i\neq j$, which is an $S^{1}$-fibration over a line segment from
$u=0$ to $u=u_{0}$ such that the $S^{1}$-fibre in $L_{u}$ is the double cover
of the path connecting $\zeta^{i}(-u)^{\frac{1}{3}},\zeta^{j}(-u)^{\frac{1}{3}}$
in the $x$-plane. Each of the $\gamma_{i,j}$ can be represented by the $2$-chain
parametrized by $u=tu_{0}$, and the double cover of
$x=s\zeta^{i}(-u)^{\frac{1}{3}}+(1-s)\zeta^{j}(-u)^{\frac{1}{3}}$, with
$t\in[0,1],s\in[0,1]$. Since $\partial\gamma_{i}$ are vanishing cycles and
generate $H_{1}(L_{u})\cong H_{2}(X,L_{u})$ (the isomorphism can be easily
seen from the Mayer-Vietoris sequence), $\gamma_{i}$ can be represented by
some linear combination $a\gamma_{0,1}+b\gamma_{1,2}$ with $a,b\in\mathbb{Z}$
and $a^{2}+b^{2}\neq 0$.
Then by direct calculation, one has
$\displaystyle Z_{\gamma_{i,j}}(u_{0})=\int_{\gamma_{i,j}}\Omega^{\prime}$
$\displaystyle=\int_{u=0}^{u=u_{0}}\int_{x=\zeta^{i}(-u)^{\frac{1}{3}}}^{x=\zeta^{j}(-u)^{\frac{1}{3}}}f(u)du\wedge\frac{dx}{y}$
(23)
$\displaystyle=\int_{u=0}^{u=u_{0}}\bigg{(}\int_{x=\zeta^{i}(-u)^{\frac{1}{3}}}^{x=\zeta^{j}(-u)^{\frac{1}{3}}}\frac{dx}{y}\bigg{)}du+O(|u_{0}|)$
$\displaystyle=\int_{u=0}^{u=u_{0}}\bigg{(}\int_{x=\zeta^{i}(-u)^{\frac{1}{3}}}^{x=\zeta^{j}(-u)^{\frac{1}{3}}}\frac{dx}{(x^{3}+u)^{\frac{1}{2}}}\bigg{)}du+O(|u_{0}|)$
$\displaystyle=\int_{u=0}^{u=u_{0}}\bigg{(}\int_{s=0}^{s=1}\frac{x^{\prime}(s)ds}{\big{(}(x(s)-\zeta^{i}(-u)^{\frac{1}{3}})(x(s)-\zeta^{j}(-u)^{\frac{1}{3}})(x(s)-\zeta^{k}(-u)^{\frac{1}{3}})\big{)}^{\frac{1}{2}}}\bigg{)}du+O(|u_{0}|).$
(24)
Here $k\in\\{0,1,2\\}\setminus\\{i,j\\}$, and we used the change of
variable $x(s)=s\zeta^{i}(-u)^{\frac{1}{3}}+(1-s)\zeta^{j}(-u)^{\frac{1}{3}}$
in the fourth line. Using $x^{\prime}(s)=O(u^{\frac{1}{3}})$ and factoring out
$u^{\frac{1}{2}}$ in the denominator of the last line of (23), the part in the
parentheses is asymptotic to
$\frac{u^{\frac{1}{3}}}{u^{\frac{1}{2}}}\int_{0}^{1}\frac{ds}{\big{(}s(1-s)\big{)}^{\frac{1}{2}}}=O(u^{-\frac{1}{6}})$.
From the fact that
$\int_{0}^{u_{0}}u^{-\frac{1}{6}}du=O(u_{0}^{\frac{5}{6}})$, we arrive at
$\displaystyle Z_{\gamma_{i,j}}(u)=C_{i,j}u^{\frac{5}{6}}+O(|u|),$
where $C_{i,j}$ is some constant independent of $u_{0}$ and $C_{0,1},C_{1,2}$
are linearly independent over $\mathbb{Z}$. Thus, we have a similar estimate
$\displaystyle Z_{\gamma_{i}}(u)=O(u^{\frac{5}{6}}).$
In particular, we may choose the normalization of $u$ so that
$Z_{\gamma_{1}}(u)=u^{\frac{5}{6}}$. Then
$\displaystyle Z_{\gamma_{2}}(u)=Z_{M\gamma_{1}}(u)=(e^{2\pi
i}u)^{\frac{5}{6}}=e^{2\pi i\frac{5}{6}}u^{\frac{5}{6}}.$
Thus, $Z_{\gamma_{2}}(u)\in\mathbb{R}_{>0}$ if and only if
$u\in\mathbb{R}_{>0}e^{\frac{2\pi}{5}i}$. On the other hand, when $u\in
l_{\gamma_{k}}$, one has
$Z_{\gamma_{k}}(u)=\int_{\gamma_{k}}\omega-i\int_{\gamma_{k}}\mbox{Im}\Omega\in\mathbb{R}_{>0}$
by Remark 3.3 and the fact that the symplectic area of a holomorphic disc is
positive. Thus, we have $l_{\gamma_{2}}=\mathbb{R}_{>0}e^{\frac{2\pi
i}{5}}$, i.e., the angle between $l_{\gamma_{1}},l_{\gamma_{2}}$ is
$\frac{2\pi}{5}$. The general statement of the second part of the lemma can be
then proved inductively.
∎
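The $\frac{2\pi}{5}$ spacing of the BPS rays can also be checked numerically. Assuming, as derived above, that $Z_{\gamma_{k}}(u)=(e^{2\pi i})^{(k-1)\frac{5}{6}}u^{\frac{5}{6}}$, so that $\mbox{Arg}Z_{\gamma_{k}}(e^{i\theta})=\frac{5}{6}(\theta+2\pi(k-1))$, the ray $l_{\gamma_{k}}$ (where $Z_{\gamma_{k}}\in\mathbb{R}_{>0}$) should sit at $\theta=\frac{2\pi(k-1)}{5}$. This is only an illustrative sanity check; the helper `phase_of_Z` is ours, not from the text:

```python
import math

# Arg Z_{gamma_k}(u) for u = e^{i*theta}, using the phase relation
# Z_{gamma_{k+1}}(u) = Z_{gamma_k}(e^{2*pi*i} u) and Z_{gamma_1}(u) = u^{5/6}:
# the phase is (5/6) * (theta + 2*pi*(k-1)), reduced mod 2*pi.
def phase_of_Z(k, theta):
    return ((5.0 / 6.0) * (theta + 2.0 * math.pi * (k - 1))) % (2.0 * math.pi)

# l_{gamma_k} = { Z_{gamma_k} in R_{>0} } should sit at Arg(u) = 2*pi*(k-1)/5
for k in range(1, 6):
    theta = 2.0 * math.pi * (k - 1) / 5.0
    p = phase_of_Z(k, theta)
    assert min(p, 2.0 * math.pi - p) < 1e-9   # phase is 0 mod 2*pi on the claimed ray
```

In particular the five rays are pairwise separated by an angle of $\frac{2\pi}{5}$, matching the lemma.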
###### Remark 5.7.
With Lemma 5.6, one can see that $Z_{\gamma_{6}}\notin\mathbb{R}_{+}$ until
$u$ varies across the branch cut counter-clockwise. If one analytically
continues $Z_{\gamma_{6}}$ across the branch cut counter-clockwise, it becomes
$Z_{\gamma_{7}}=Z_{\gamma_{1}}$ because $\gamma_{7}=M\gamma_{6}$ and
$M^{6}=id$. Therefore, the corresponding BPS ray again has the same locus
as $l_{\gamma_{1}}$. In particular, there are only five BPS rays in total. In
other words, there are only five families of discs with non-trivial open
Gromov-Witten invariants that contribute to the construction of the family
Floer mirror.
We conclude the above discussion now:
###### Theorem 5.8.
With the notation above,
$\gamma_{1}=-\gamma_{1}^{\prime},\ \gamma_{2}=\gamma_{2}^{\prime},\
\gamma_{3}=\gamma_{1}^{\prime}+\gamma_{2}^{\prime},\
\gamma_{4}=\gamma^{\prime}_{1},\ \gamma_{5}=-\gamma_{2}^{\prime}.$
Then
1. 1.
$f_{\gamma}(u)\neq 1$ if and only if $u\in l_{\gamma_{i}}$ and
$\gamma=\gamma_{i}$ for some $i=1,\cdots,5$.
2. 2.
In such cases,
$f_{\gamma_{i}}=1+T^{\omega(\gamma_{i})}z^{\partial\gamma_{i}}$.
3. 3.
The branch cut can be chosen to be between $l_{\gamma_{1}}$ and
$l_{\gamma_{5}}$.
###### Proof.
Remark 5.7 explains that there are actually five BPS rays
$l_{\gamma_{i}},i=1,\cdots,5$. The second statement comes from Definition 4.3
and Theorem 5.1. The third statement follows from how we defined the branch
cut below Corollary 5.5. ∎
The affine structure is illustrated in Figure 3. In Figure 3, the curvy ray
between $l_{\gamma_{5}}$ and $l_{\gamma_{1}}$ represents the branch cut. The
‘monodromy’ of the affine structure can be seen as gluing the curvy ray
with $l_{\gamma_{1}}$. The shaded region denotes where the gluing is.
Figure 3: BPS rays near the singularity.
Straight-forward calculation shows that
$\displaystyle\gamma_{i+2}=-\gamma_{i}+\gamma_{i+1},$ (25)
which is the analogue of (1).
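The recursion (25), the identifications of Theorem 5.8, and the order of the monodromy can be cross-checked mechanically. We work in the basis $({\gamma_{1}^{\prime}},{\gamma_{2}^{\prime}})$ and take the matrix of $M$ from the monodromy computed in the proof of Lemma 5.16 ($\gamma_{1}^{\prime}\mapsto-\gamma_{2}^{\prime}$, $\gamma_{2}^{\prime}\mapsto\gamma_{1}^{\prime}+\gamma_{2}^{\prime}$); this is only a sanity check, not part of the argument:

```python
from sympy import Matrix, eye

# Matrix of the monodromy M in the basis (gamma_1', gamma_2'):
# columns are the images M(gamma_1') = -gamma_2' and M(gamma_2') = gamma_1' + gamma_2'
M = Matrix([[0, 1], [-1, 1]])

# the recursion (25): gamma_{i+2} = -gamma_i + gamma_{i+1}, i.e. M^2 = -I + M
assert M**2 == -eye(2) + M

# M has order 6, consistent with gamma_7 = M gamma_6 and M^6 = id in Remark 5.7
assert M**3 == -eye(2) and M**6 == eye(2)

# Theorem 5.8: gamma_1 = -gamma_1' and gamma_{k+1} = M gamma_k reproduce
# gamma_2 = gamma_2', gamma_3 = gamma_1'+gamma_2', gamma_4 = gamma_1', gamma_5 = -gamma_2'
gammas = [M**k * Matrix([-1, 0]) for k in range(5)]
assert gammas == [Matrix(v) for v in ([-1, 0], [0, 1], [1, 1], [1, 0], [0, -1])]
```

In particular $M^{2}=M-I$ is exactly the matrix form of (25).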
### 5.1 Construction of Family Floer Mirror of $X_{II}$
Let $U_{k}$ be the chamber bounded by $l_{\gamma_{k}}$ and $l_{\gamma_{k+1}}$
in $B_{0}$, $k=1,\cdots,4$, and $U_{5}$ be the chamber bounded by
$l_{\gamma_{5}}$ and $l_{\gamma_{1}}$. Thus there are only 5 chambers. Recall
that the dotted line represents a branch cut between $l_{\gamma_{1}}$ and
$l_{\gamma_{5}}$. With such a branch cut and monodromy, we trivialize the local
system $H_{2}(X,L_{u})$ over the complement of the branch cut. Recall that we
have $M\gamma_{i}=\gamma_{i+1}$ by definition.
Next, we compare the affine structure from the SYZ fibration with the one from
Gross-Hacking-Keel (see Section 2.1).
###### Lemma 5.9.
The complex affine structure on $B_{0}$ is isomorphic to the affine manifold
$B_{\mathrm{GHK}}$ with singularity constructed from del Pezzo surface of
degree five relative to a cycle of five rational curves in [GHK].
###### Proof.
First notice that one can compute the complex affine coordinates on
$l_{\gamma_{1}},l_{\gamma_{2}}$,
$\displaystyle l_{\gamma_{1}}$ $\displaystyle=\\{\check{y}=0,\check{x}>0\\}$
$\displaystyle l_{\gamma_{2}}$ $\displaystyle=\\{\check{x}=0,\check{y}>0\\}.$
(26)
Indeed, we have $\check{y}=0$ on $l_{\gamma_{1}}$ by Remark 3.3. From Lemma
5.6, we have
$\displaystyle\check{x}(u)=\int_{{\gamma_{2}^{\prime}}}\mbox{Im}\Omega=\mbox{Re}Z_{{\gamma_{2}^{\prime}}}(u)=\mbox{Re}Z_{-M{\gamma_{1}^{\prime}}}(u)=-\mbox{Re}(ue^{2\pi
i})^{\frac{5}{6}}>0,$
for $u\in l_{\gamma_{1}}$. One can compute the case of $u\in l_{\gamma_{2}}$
similarly. Therefore, with respect to the complex affine structure, the
primitive tangent vectors of $l_{\gamma_{1}},l_{\gamma_{2}}$ are given by
$\frac{\partial}{\partial\check{x}},\frac{\partial}{\partial\check{y}}$. To
compare with the affine structure from Gross-Hacking-Keel, we will identify
them with $\mathbb{R}_{>0}(1,0),\mathbb{R}_{>0}(0,1)$. Then
$(-1,1),(-1,0),(0,-1)$ are the tangents of
$l_{\gamma_{3}},l_{\gamma_{4}},l_{\gamma_{5}}$ respectively by Lemma 5.6 and
the relation $-Z_{\gamma_{i}}+Z_{\gamma_{i+1}}=Z_{\gamma_{i+2}}$ which is the
analogue of (1). Therefore, the complex affine structure on the region
$\\{u\in B_{0}|0<\mbox{Arg}u<\frac{8\pi}{5}\\}$ is isomorphic to the one on
the sector (without the vertex) from $(1,0)$ counter-clockwise to $(0,-1)$
viewed as an affine submanifold of $\mathbb{R}^{2}$. To understand the
monodromy of the complex affine structure on $B_{0}$, one needs to do a
similar calculation across the branch cut. Consider the complex affine
structure on the universal cover of $B_{0}$ (by abuse of notation, we
still use the coordinate $u$ for the corresponding holomorphic coordinate and
$\check{x},\check{y}$ for the pull back of the complex affine structure). Then
similar calculation shows that the complex affine structure on the region
$\\{u\in B_{0}|0<\mbox{Arg}(u)<2\pi\\}$ is isomorphic to the one on the sector
(without the vertex) from $(1,0)$ counter-clockwise to $(-1,-1)$, which
denoted as $\mathcal{V}_{1}$, viewed as an affine submanifold of
$\mathbb{R}^{2}$. If one changes the location of the branch cut to
$\mbox{Arg}(u)=-\frac{2\pi}{5}^{-}$, the complex affine structure on the
region $\\{u\in B_{0}|-\frac{2\pi}{5}<\mbox{Arg}u<\frac{8\pi}{5}\\}$ is
isomorphic to the one on the sector (without the vertex) from $(-1,-1)$
counter-clockwise to $(0,-1)$, denoted as $\mathcal{V}_{2}$, viewed as an
affine submanifold of $\mathbb{R}^{2}$ (alternatively, one may extend the
affine structure across the original branch cut clockwise and then
$(-1,-1)$ is the primitive tangent of $l_{\gamma_{0}}$). The affine
coordinates on $\\{u\in B_{0}|0<\mbox{Arg}(u)<\frac{8\pi}{5}\\}$ pulled back
from $\mathcal{V}_{1},\mathcal{V}_{2}$ coincide, so the complex affine
structure on $\\{-\frac{2\pi}{5}<\mbox{Arg}(u)<2\pi\\}$ (viewed as a subset of
the universal cover of $B_{0}$) is isomorphic to the natural affine structure on
$\mathcal{V}_{1}\cup\mathcal{V}_{2}$ as a subset of $\mathbb{R}^{2}$ (but not
with respect to the affine structure inherited from $\mathbb{R}^{2}$). Recall
that we have $l_{\gamma_{6}}=l_{\gamma_{1}}$ and
$l_{\gamma_{5}}=l_{\gamma_{0}}$ from Remark 5.7. Therefore, $B_{0}$ as an
affine manifold is simply the gluing of the sector bounded by $(0,-1),(-1,-1)$
in $\mathcal{V}_{1}$ and the sector bounded by $(-1,-1),(1,0)$ in
$\mathcal{V}_{2}$. Let $M:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ be the
linear map sending $(0,-1)$ to $(-1,-1)$ and $(-1,-1)$ to $(1,0)$. Explicitly,
we have $B_{0}=\mathcal{V}_{1}\cup\mathcal{V}_{2}/\sim$ as affine manifolds,
where $x\sim y$ if $x$ is contained in the sector bounded by $(0,-1),(-1,-1)$
and $y=Mx\in\mathcal{V}_{2}$. This is exactly the description of $B_{\mathrm{GHK}}$.
Moreover, one sees that $\\{U_{i}\\}_{i=1,\cdots,5}$ coincides with the
decomposition $\Sigma$ in Section 2.1.
∎
###### Remark 5.10.
Write $\check{x}^{\prime}(u)=\check{x}(ue^{2\pi
i}),\check{y}^{\prime}(u)=\check{y}(ue^{2\pi i})$ as the continuation of
$\check{x},\check{y}$ counterclockwise. From (5.1)(5), one has
$\displaystyle d\check{x}^{\prime}$ $\displaystyle=d\check{x}-d\check{y}$
$\displaystyle d\check{y}^{\prime}$ $\displaystyle=d\check{x}$
or equivalently
$\displaystyle d\check{x}$ $\displaystyle=d\check{y}^{\prime}$ $\displaystyle
d\check{y}$ $\displaystyle=-d\check{x}^{\prime}+d\check{y}^{\prime}.$
Dually, the monodromy on the complex affine coordinate is given by
$\displaystyle\frac{\partial}{\partial\check{x}^{\prime}}$
$\displaystyle=\frac{\partial}{\partial\check{x}}+\frac{\partial}{\partial\check{y}}$
$\displaystyle\frac{\partial}{\partial\check{y}^{\prime}}$
$\displaystyle=-\frac{\partial}{\partial\check{x}},$
which is exactly the gluing at the end of Lemma 5.9, sending $(-1,0)$ (and
$(-1,-1)$) to $(-1,-1)$ (and $(-1,0)$ respectively).
Figure 4: Illustration for the notations in the beginning of Section 5.1.
Notice that a priori $l_{\gamma_{i}}$ is only an affine line with respect to
the complex affine coordinates. To compute the family Floer mirror, we need to
have a better control of the BPS rays in terms of the symplectic affine
structure. The following observation comes from (21) directly.
###### Lemma 5.11.
Any ray with a constant phase is affine with respect to the symplectic affine
structure. In particular, $l_{\gamma_{i}}$ is an affine line with respect to
the symplectic affine structure.
###### Proof.
Any such ray can be parametrized by $z=Ct$ for some complex number $C$. From
(21), the symplectic coordinates along the ray are given by
${x}_{k}=C^{\prime}_{k}t^{\frac{2\pi
k}{5}},y_{k}=C^{\prime\prime}_{k}t^{\frac{2\pi k}{5}}$, for some
$C^{\prime}_{k},C^{\prime\prime}_{k}\in\mathbb{R}$, and the lemma follows. In
other words, such a ray is given by the affine line
$C^{\prime\prime}_{k}x_{k}=C^{\prime}_{k}y_{k}$ with respect to the symplectic
affine coordinates $(x_{k},y_{k})$. ∎
Recall that the family Floer mirror $\check{X}$ is defined by
$\coprod\mathcal{U}_{\alpha}/\sim$, where $\mathcal{U}_{\alpha}$ is the
maximal spectrum of $T_{U_{\alpha}}$, for a sufficiently refined open covering
$\\{U_{\alpha}\\}_{\alpha\in A}$ (so that the Fukaya trick applies), together
with symplectic affine coordinates
$\psi_{\alpha}:U_{\alpha}\rightarrow\mathbb{R}^{2}$ such that
$\psi_{\alpha}(u_{\alpha})=0$ for some $u_{\alpha}\in U_{\alpha}$. We may take
$\displaystyle\psi_{\alpha}(u)=(x(u)-x(u_{\alpha}),y(u)-y(u_{\alpha})).$
Here we abuse the notation: $x,y$ denote the natural extension of the
coordinates clockwise across the branch cut if $U_{\alpha}$ intersects the
branch cut and $u\in U_{5}$. In other words, one should replace $(x,y)$ by
$(y,x-y)$ under these circumstances from (5.1).
###### Remark 5.12.
On one hand, from Remark 4.10, we have $\mathcal{U}_{\alpha}$ is identified
with $\mbox{Val}^{-1}(\psi_{\alpha}(U))\subseteq(\Lambda^{*})^{2}$. On the
other hand, from Example 4.11 and Theorem 5.8, if $U_{\alpha}\subseteq U_{k}$
for some $k\in\\{1,\cdots,5\\}$, then
$\mathcal{U}_{\alpha}\cong\mathfrak{Trop}^{-1}(U_{\alpha})$. Here we use the
symplectic affine coordinates $x,y$ on $U_{\alpha}$ to embed $U_{\alpha}$ into
$\mathbb{R}^{2}$ as an affine submanifold and
$\mathfrak{Trop}:(\mathbb{G}_{m}^{an})^{2}\rightarrow\mathbb{R}^{2}$. Recall that
there is a natural identification
$(\Lambda^{*})^{2}\cong(\mathbb{G}_{m}^{an})^{2}$ as sets such that the below
diagram commutes.
${(\Lambda^{*})^{2}}$${(\mathbb{G}_{m}^{an})^{2}}$${{\mathbb{R}}^{2}}$Val$\scriptstyle{\mathfrak{Trop}}$
(27)
The two descriptions of $\mathcal{U}_{\alpha}$ simply differ by a translation
$\displaystyle\mbox{Val}^{-1}(U_{\alpha})$
$\displaystyle\rightarrow\mathfrak{Trop}^{-1}(U_{\alpha})$
$\displaystyle(z_{1},z_{2})$
$\displaystyle\mapsto(T^{x(u_{\alpha})}z_{1},T^{y(u_{\alpha})}z_{2})$
under the above identification.
Let
$\mathfrak{Trop}_{i}:(\mathbb{G}_{m}^{an})_{i}^{2}\rightarrow\mathbb{R}^{2}_{i}$
be the standard valuation map. Here we put a subindex $i$ for each analytic
torus; later these will correspond to the five different tori. Now if
$U_{\alpha}\subseteq U_{i}$ for some $i=1,\cdots,5$ and $U_{\alpha}\cap
U_{\alpha^{\prime}}\neq\emptyset$ with the reference $u_{\alpha^{\prime}}\in
U_{i+1}$, then again from Example 4.11 and Theorem 5.8, one can naturally identify
$\mathcal{U}_{\alpha}\coprod\mathcal{U}_{\alpha^{\prime}}/\sim\cong\mathfrak{Trop}^{-1}(U_{\alpha}\cup
U_{\alpha^{\prime}})$. From the identification in Remark 5.12, the function
$T^{\omega(\gamma_{u_{\alpha}})}z^{\partial\gamma}\in T_{U_{\alpha}}$ and
$T^{\omega(\gamma_{u_{\alpha^{\prime}}})}z^{\partial\gamma}\in
T_{U_{\alpha^{\prime}}}$ glue to a function on
$\mathfrak{Trop}^{-1}(U_{\alpha}\cup U_{\alpha^{\prime}})$, which is simply
the restriction of $z^{\partial\gamma}$ on
$(\Lambda^{*})^{2}\cong(\mathbb{G}_{m}^{an})^{2}$.
Denote $U^{\prime}_{i}=\cup_{\alpha}U_{\alpha}$, where $\alpha$ runs through
those $u_{\alpha}\in U_{i}$. By taking a refinement of the open cover, we may
assume that $U_{i}\subseteq U^{\prime}_{i}$ without loss of generality. Then
we have the extension of the embedding
$\mathfrak{Trop}^{-1}(U_{i})\subseteq\check{X}$ to
$\mathfrak{Trop}^{-1}(U^{\prime}_{i})\subseteq\check{X}$. From the previous
discussion, the family Floer mirror is simply
$\coprod_{i=1}^{5}\mathfrak{Trop}_{i}^{-1}(U_{i}^{\prime})/\sim$. Note that
${\check{X}=\bigcup_{i}\mathfrak{Trop}_{i}^{-1}(U_{i}^{\prime})/\sim}$${(\mathbb{G}_{m}^{an})^{2}}$${\mathfrak{Trop}_{i}^{-1}(U_{i})}$$\supseteq$$\subseteq$
To distinguish the two inclusions, we will always view
$\mathfrak{Trop}_{i}^{-1}(U_{i})$ as a subset of $(\mathbb{G}_{m}^{an})^{2}$
and consider
$\alpha_{i}:\mathfrak{Trop}_{i}^{-1}(U_{i})\rightarrow\check{X}.$
Notice that $\mathfrak{Trop}^{-1}_{i}(U^{\prime}_{i})$ only occupies a small
portion of $(\mathbb{G}^{an}_{m})^{2}$. Thus we need to extend $\alpha_{i}$ to
most of $(\mathbb{G}_{m}^{an})_{i}^{2}$. For simplicity of
notation, we will still denote these extensions of $\alpha_{i}$ by the same
notation.
Now we want to understand how $\mathcal{U}^{\prime}_{i}$ glues with
$\mathcal{U}^{\prime}_{i+1}$. Let $V_{i},V_{i+1}$ be any small enough rational
domains on $B_{0}$ such that $V_{i}\subseteq U_{i}^{\prime}$,
$V_{i+1}\subseteq U_{i+1}^{\prime}$ and Fukaya's trick applies. Let $p\in
V_{i}\cap V_{i+1}$ be the reference point, and one has
$\displaystyle(\mathbb{G}_{m}^{an})^{2}_{i}\supseteq\mathfrak{Trop}_{i}^{-1}(V_{i})\supseteq\mathfrak{Trop}_{i}^{-1}(V_{i}\cap
V_{i+1})\xrightarrow{\Psi_{i,i+1}}\mathfrak{Trop}_{i+1}^{-1}(V_{i}\cap
V_{i+1})\subseteq\mathfrak{Trop}_{i+1}^{-1}(V_{i+1})\subseteq(\mathbb{G}_{m}^{an})^{2}_{i+1},$
where $\Psi_{i,i+1}=\alpha_{i+1}^{-1}\circ\alpha_{i}$ is given by
$\Psi_{i,i+1}=S^{-1}_{u_{i+1},p}\circ\Phi_{i,i+1}\circ S_{u_{i},p}$ by (15)
and
$\displaystyle\Phi_{i,i+1}:z^{\partial\gamma}\mapsto
z^{\partial\gamma}(1+T^{\omega(\gamma_{i+1})}z^{\partial\gamma_{i}})^{\langle\gamma,\gamma_{i+1}\rangle}$
from Definition 4.3 and Theorem 5.8. Here $\omega(\gamma_{i+1})$ is evaluated
at $p$. From (25), we have $\langle\gamma_{i+1},\gamma_{i}\rangle=1$. Then
with the notation and discussion below Remark 5.12, we have $\Phi_{i,i+1}$ is
simply the polynomial map
$\displaystyle z^{\gamma_{i}}\mapsto z^{\gamma_{i}}(1+z^{\gamma_{i+1}})^{-1}$
$\displaystyle z^{\gamma_{i+1}}\mapsto z^{\gamma_{i+1}}.$ (28)
Since near $l_{\gamma_{i+1}}$ one has $\omega(\gamma_{i+1})>0$, one has
$\displaystyle\mbox{val}(z^{\gamma})=\mbox{val}(z^{\gamma}(1+z^{\gamma_{i+1}})^{-1}).$
(29)
Here we view $z^{\gamma}$ as a function on $(\Lambda^{*})^{2}$ and val is the
valuation on $\Lambda^{*}$. Thus, the following commutative diagram holds under
the identification $(\Lambda^{*})^{2}\cong(\mathbb{G}_{m}^{an})^{2}$,
${\mathfrak{Trop}_{i}^{-1}(V_{i})\supseteq\mathfrak{Trop}_{i}^{-1}(V_{i}\cap
V_{i+1})}$${\mathfrak{Trop}_{i+1}^{-1}(V_{i}\cap
V_{i+1})\subseteq\mathfrak{Trop}_{i+1}^{-1}(V_{i+1})}$${{\mathbb{R}}^{2}_{i}\supseteq
V_{i}\cap V_{i+1}}$${V_{i}\cap
V_{i+1}\subseteq{\mathbb{R}}^{2}_{i+1}}$$\scriptstyle{\mathfrak{Trop}_{i}}$$\scriptstyle{\Phi_{i,i+1}}$$\scriptstyle{\mathfrak{Trop}_{i+1}}$
(30)
We may view $(\Lambda^{*})^{2}$ as the $\Lambda$-points of the scheme
$(\mathbb{G}_{m})^{2}=\mbox{Spec}\Lambda[z^{\pm\gamma_{i}},z^{\pm\gamma_{i+1}}]$.
Then we have the commutative diagram from the functoriality of the GAGA map on
objects:
${(\mathbb{G}_{m}^{an})^{2}}$${(\mathbb{G}_{m}^{an})^{2}}$${(\mathbb{G}_{m})^{2}}$${(\mathbb{G}_{m})^{2}}$$\scriptstyle{\Psi_{i,i+1}}$GAGAGAGA
(31)
Under the identification $(\Lambda^{*})^{2}\cong(\mathbb{G}_{m}^{an})^{2}$,
$\Psi_{i,i+1}$ is simply the restriction of the map
$(\mathbb{G}_{m}^{an})^{2}\rightarrow(\mathbb{G}_{m}^{an})^{2}$ with the same
equation as in (5.1). Therefore, we have the same commutative diagram as in
(30) with $V_{i},V_{i+1}$ replaced by $U_{i}^{+},U_{i+1}$ for any open subset
$U^{+}_{i}\subseteq\mathbb{R}^{2}$ such that $\omega(\gamma_{i+1})>0$ on
$U_{i}^{+}$, which we will choose explicitly later.
To see the largest possible extension $U^{+}_{i}$, and thus the largest
possible extension of the above diagram, we want to know explicitly where
$\omega(\gamma_{i+1})>0$. Viewing $B\cong\mathbb{C}$, we may take $U_{i}^{+}$
as the interior of the sector bounded by $l_{\gamma_{i}}$ and the ray obtained
by rotating $l_{\gamma_{i+1}}$ counter-clockwise by $\frac{3\pi}{5}$, by Lemma
5.6; this is the largest possible region (extending $U_{i}$ counter-clockwise)
such that $\omega(\gamma_{i+1})>0$ holds. In particular,
$U_{i}^{+}$ occupies $U_{i},U_{i+1}$ and half of $U_{i+2}$. Therefore, we have
the following lemma.
###### Lemma 5.13.
The inclusion
$\alpha_{i}:\mathfrak{Trop}_{i}^{-1}(U_{i})\hookrightarrow\check{X}$ can be
extended to $\mathfrak{Trop}_{i}^{-1}(U_{i}^{+})\hookrightarrow\check{X}$,
$i=1,\cdots,5$. We will still denote the inclusion map by $\alpha_{i}$. In
particular, $\alpha_{i}(\mathfrak{Trop}^{-1}_{i}(U_{i}\cup
U_{i+1}))\subseteq\check{X}$. Here we use the convention $U_{i+5}=U_{i}$.
Notice that the commutative diagram (30) no longer holds on
$U_{i+2}\setminus U_{i}^{+}$ since
$\displaystyle\mbox{val}(z^{\gamma_{i}}\big{(}1+z^{\gamma_{i+1}}\big{)}^{-1})=\mbox{val}(z^{\gamma_{i}})-\mbox{val}(1+z^{\gamma_{i+1}})=\mbox{val}(z^{\gamma_{i}})-\mbox{val}(z^{\gamma_{i+1}})$
(32)
outside of $U^{+}_{i}$, which is no longer $\mbox{val}(z^{\gamma_{i}})$ on the
right hand side as in (29). Now for $V_{i}$ disjoint from $U_{i}^{+}$ and
$V_{i+1}\subseteq U_{i+2}\subseteq U_{i+1}^{+}$, the diagram becomes
${\mathfrak{Trop}_{i}^{-1}(V_{i})\supseteq\mathfrak{Trop}_{i}^{-1}(V_{i}\cap
V_{i+1})\setminus\\{1+z^{\gamma_{i+1}}=0\\}}$${\mathfrak{Trop}_{i}^{-1}(V_{i}\cap
V_{i+1})\subseteq(\mathbb{G}_{m}^{an})^{2}}$${{\mathbb{R}}^{2}_{i}\supseteq
V_{i}\cap
V_{i+1}}$${{\mathbb{R}}^{2}_{i+1}}$$\scriptstyle{\mathfrak{Trop}_{i}}$$\scriptstyle{\Psi_{i,i+1}}$$\scriptstyle{\mathfrak{Trop}_{i+1}}$$\scriptstyle{\phi_{i,i+1}}$
(33)
Recall that
$\displaystyle\mbox{val}(z^{\gamma_{i}})=\int_{\gamma_{i}}\omega=x_{i},\hskip
8.53581pt\mbox{val}(z^{\gamma_{i+1}})=\int_{\gamma_{i+1}}\omega=y_{i}$
from (18) and thus together with (32) we have
$\displaystyle\phi_{i,i+1}:x_{i}$ $\displaystyle\mapsto x_{i}-y_{i}$
$\displaystyle y_{i}$ $\displaystyle\mapsto y_{i}$ (34)
on its domain. Notice that $\Psi_{i,i+1}$ is only defined when
$1+z^{\gamma_{i+1}}\neq 0$. Since the linear map (5.1) is well-defined on
$\mathbb{R}^{2}$, we will still use the same notation for its natural
extension.
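The valuation identities (29) and (32) driving this case analysis can be illustrated with a toy archimedean model: set $\mathrm{val}(x)=\log|x|/\log T$ for a small real $T$, so that $\mathrm{val}(T^{c})=c$. This is purely an illustration (the actual valuation is non-archimedean on the Novikov field), and the test exponents are arbitrary:

```python
import math

T = 1e-4                                        # archimedean stand-in for the Novikov parameter
def val(x):
    # toy valuation: val(T**c) = c, and val(x) > 0 exactly when |x| is small
    return math.log(abs(x)) / math.log(T)

z_i   = 3.0 * T**1.5      # plays z^{gamma_i}
z_ip  = 2.0 * T**0.7      # plays z^{gamma_{i+1}} with val > 0, i.e. omega(gamma_{i+1}) > 0
z_ipn = 2.0 * T**-0.7     # plays z^{gamma_{i+1}} with val < 0, i.e. outside U_i^+

# (29): when val(z^{gamma_{i+1}}) > 0, dividing by 1 + z^{gamma_{i+1}} keeps the valuation
assert abs(val(z_i / (1 + z_ip)) - val(z_i)) < 1e-2

# (32): when val(z^{gamma_{i+1}}) < 0, the valuation shifts by -val(z^{gamma_{i+1}})
assert abs(val(z_i / (1 + z_ipn)) - (val(z_i) - val(z_ipn))) < 1e-2
```

This is exactly the dichotomy between $U_{i}^{+}$, where the diagram (30) commutes, and its complement, where the piecewise-linear correction $\phi_{i,i+1}$ appears.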
###### Lemma 5.14.
$\phi_{i,i+1}(U_{i+2}\setminus U_{i}^{+})\subseteq U_{i+1}^{+}$. In
particular,
$\alpha_{i}\big{(}\mathfrak{Trop}_{i}^{-1}(U_{i+2})\big{)}\subseteq\alpha_{i+1}\big{(}\mathfrak{Trop}_{i+1}^{-1}(U_{i+1}^{+})\big{)}\subseteq\check{X}.$
###### Proof.
The left boundary of $U_{i}^{+}$ is characterized by $x_{i+1}=0,y_{i+1}>0$ and
the left boundary of $U_{i+1}^{+}$ is characterized by $x_{i+1}<0,y_{i+1}=0$
from Lemma 5.6. Therefore, we may identify the region bounded by the above two
affine lines with the third quadrant of $\mathbb{R}^{2}_{x_{i+1},y_{i+1}}$ as
affine manifolds. Notice that this is a subset of $U_{i+1}^{+}$. Under such
identification, we have $U_{i+2}\setminus U_{i}^{+}$ as the region bounded by
$x_{i+1}+y_{i+1}=0$ and $y_{i+1}$-axis in the third quadrant by Lemma 5.11. In
terms of $(x_{i+1},y_{i+1})$, (5.1) becomes
$\displaystyle\phi_{i,i+1}:x_{i+1}$ $\displaystyle\mapsto x_{i+1}$
$\displaystyle y_{i+1}$ $\displaystyle\mapsto x_{i+1}+y_{i+1},$
from the relation $\gamma_{i}+\gamma_{i+2}=\gamma_{i+1}$. The lemma then
follows from direct computation. ∎
To sum up, one can extend the original inclusion
$\alpha_{i}\big{(}\mathfrak{Trop}_{i}^{-1}(U_{i})\big{)}\subseteq\check{X}$ in
the counter-clockwise direction to
$\displaystyle\alpha_{i}\big{(}\mathfrak{Trop}_{i}^{-1}(\overline{U_{i}\cup
U_{i+1}\cup
U_{i+2}})\setminus\\{1+z^{\gamma_{i+1}}=0\\}\big{)}\subseteq\check{X}.$ (35)
Here we use $\overline{U}$ to denote the interior of the compactification of
$U$.
###### Lemma 5.15.
The inclusion (35) extends over
$\\{1+z^{\gamma_{i+1}}=0\\}\setminus\mathfrak{Trop}_{i}^{-1}(0)$.
###### Proof.
Let $W_{i}$ be a small neighborhood of (a component of) $\partial U_{i}^{+}$
such that
$\\{1+z^{\gamma_{i+1}}=0\\}\subseteq\mathfrak{Trop}_{i}^{-1}(W_{i})$. Notice
that from Lemma 5.14, we have that
$\mathfrak{Trop}\left(\alpha_{i}(\mathfrak{Trop}_{i}^{-1}(W_{i}))\right)\subseteq
U_{i+2}$. We will show that
$\displaystyle\alpha_{i}\big{(}\mathfrak{Trop}_{i}^{-1}(W_{i})\big{)}\subseteq\alpha_{i+1}\big{(}\mathfrak{Trop}_{i+1}^{-1}(U_{i+1}^{+})\big{)}\cup\alpha_{i+2}\big{(}\mathfrak{Trop}_{i+2}^{-1}(U_{i+2})\big{)}\cup\alpha_{i+3}\big{(}\mathfrak{Trop}_{i+3}^{-1}(U_{i+2})\big{)}.$
(36)
From the earlier discussion, we have
$\displaystyle\alpha_{i}\big{(}\mathfrak{Trop}_{i}^{-1}(W_{i})\setminus\\{1+z^{\gamma_{i+1}}=0\\}\big{)}\subseteq\alpha_{i+1}\big{(}\mathfrak{Trop}_{i+1}^{-1}(U_{i+1}^{+})\big{)}.$
Similarly, we have
$\displaystyle\Psi_{i+1,i+2}:\mathfrak{Trop}_{i+1}^{-1}(U_{i+2})\cong\mathfrak{Trop}_{i+2}^{-1}(U_{i+2})$
$\displaystyle\Psi_{i+3,i+2}:\mathfrak{Trop}_{i+3}^{-1}(U_{i+2})\cong\mathfrak{Trop}_{i+2}^{-1}(U_{i+2}).$
(37)
Recall that $\Psi_{i,j}=\alpha_{j}^{-1}\circ\alpha_{i}$. It suffices to check
that
$\displaystyle
A=\\{1+z^{\gamma_{i+1}}=0\\}\subseteq\Psi_{i+2,i}\big{(}\mathfrak{Trop}_{i+2}^{-1}(U_{i+2})\big{)}\cup\Psi_{i+3,i}\big{(}\mathfrak{Trop}_{i+3}^{-1}(U_{i+2})\big{)}$
(38)
as subsets of $(\mathbb{G}_{m}^{an})^{2}_{i}$. A straightforward calculation shows that
$\displaystyle\Psi_{i,i+2}:$
$\displaystyle\mathfrak{Trop}_{i}^{-1}(W_{i})\rightarrow\mathfrak{Trop}_{i+2}^{-1}(U_{i+2})$
$\displaystyle z^{\gamma}\mapsto$ $\displaystyle
z^{\gamma}(1+z^{\gamma_{i+2}})^{\langle\gamma,\gamma_{i+2}\rangle}\left(1+\frac{z^{\gamma_{i+1}}}{1+z^{\gamma_{i+2}}}\right)^{\langle\gamma,\gamma_{i+1}\rangle}$
Since $\langle\gamma,\gamma_{i+2}\rangle>0$ and
$\langle\gamma,\gamma_{i+1}\rangle>0$ over $U_{i+2}$, the map $\Psi_{i,i+2}$
fails to be defined only on
$\displaystyle
B=\\{1+z^{\gamma_{i+2}}=0\\}\cup\\{1+z^{\gamma_{i+1}}+z^{\gamma_{i+2}}=0\\}.$
Therefore, $\alpha_{i}$ can be extended over
$\mathfrak{Trop}_{i}^{-1}(W_{i})\setminus B$. Similarly, $\Psi_{i,i+3}$ is
defined except on
$\displaystyle
C=\\{1+z^{\gamma_{i+3}}=0\\}\cup\\{1+z^{\gamma_{i+2}}+z^{\gamma_{i+3}}=0\\}\cup\\{1+z^{\gamma_{i+1}}+z^{\gamma_{i+2}}+z^{\gamma_{i+3}}=0\\}.$
Therefore, $\alpha_{i}$ can be extended over
$\mathfrak{Trop}_{i}^{-1}(W_{i})\setminus C$. It is easy to check that $A\cap
B\cap
C=\\{z^{\gamma_{i+1}}=z^{\gamma_{i+2}}=-1\\}\subseteq\mathfrak{Trop}^{-1}(0)$.
Since $\Psi_{i,j}=\alpha_{j}^{-1}\circ\alpha_{i}$, the extensions are
compatible. Now the lemma is proved. ∎
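The intersection $A\cap B\cap C$ can be checked mechanically. Using $\gamma_{i+3}=-\gamma_{i+1}+\gamma_{i+2}$ from (25), write $z^{\gamma_{i+3}}=z^{\gamma_{i+2}}/z^{\gamma_{i+1}}$ and enumerate the components (a short sympy sketch; the variable names are ours):

```python
import itertools
from sympy import symbols, solve

z2, z3 = symbols("z2 z3")     # z2 = z^{gamma_{i+1}},  z3 = z^{gamma_{i+2}}
z4 = z3 / z2                  # z^{gamma_{i+3}}, via gamma_{i+3} = -gamma_{i+1} + gamma_{i+2}

A_eq = 1 + z2
B_eqs = [1 + z3, 1 + z2 + z3]
C_eqs = [1 + z4, 1 + z3 + z4, 1 + z2 + z3 + z4]

points = set()
for b, c in itertools.product(B_eqs, C_eqs):
    for sol in solve([A_eq, b, c], [z2, z3], dict=True):
        if sol[z2] != 0 and sol[z3] != 0:          # the z's are invertible monomials
            points.add((sol[z2], sol[z3]))

# every common point of A, B and C has z^{gamma_{i+1}} = z^{gamma_{i+2}} = -1
assert points == {(-1, -1)}
```

So the extension genuinely fails only over the single fibre $\mathfrak{Trop}^{-1}(0)$, as used in the proof.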
For the same reason, one can extend the inclusion in the clockwise direction
$\displaystyle\alpha_{i}\big{(}\mathfrak{Trop}_{i}^{-1}(\overline{U_{i}\cup
U_{i-1}\cup U_{i-2}})\big{)}\subseteq\check{X}.$ (39)
Notice that $l_{\gamma_{i+3}}=l_{\gamma_{i-2}}$ is the boundary of both
$U_{i+2}$ and $U_{i-2}$. Then (35)(39) together imply the inclusion
$\displaystyle\alpha_{i}\big{(}\mathfrak{Trop}_{i}^{-1}(\mathbb{R}^{2}\backslash
l_{\gamma_{i+3}})\big{)}\subseteq\check{X}.$ (40)
Then Lemma 5.16 below guarantees that the inclusion extends over the ray
$l_{\gamma_{i+3}}$ and we reach an extension
$\displaystyle\alpha_{i}:\mathfrak{Trop}_{i}^{-1}(\mathbb{R}^{2}\setminus\\{0\\})\rightarrow\check{X}.$
Finally we claim that $\alpha_{i}$ is an embedding when restricted to
$\mathfrak{Trop}^{-1}(U)$ for any small enough open subset
$U\subseteq\mathbb{R}^{2}$. On the other hand, $\alpha_{i}$ is
fibre-preserving with respect to
$\mathfrak{Trop}_{i}:(\mathbb{G}_{m}^{an})^{2}\rightarrow\mathbb{R}^{2}$ and
$\mathfrak{Trop}:\check{X}\rightarrow B$, and the induced map on the base is
piecewise-linear. Direct computation shows that the induced map on the base is
injective. Therefore, $\alpha_{i}$ is an embedding, and hence $\check{X}$ has
a partial compactification
$\bigcup_{i=1}^{5}(\mathbb{G}_{m}^{an})^{2}_{i}/\sim$, with the identification
$\Psi_{i,j}:(\mathbb{G}_{m}^{an})^{2}_{i}\rightarrow(\mathbb{G}_{m}^{an})^{2}_{j}$.
The following lemma is due to Gross-Siebert [GS].
###### Lemma 5.16.
The composition of the wall-crossing transformations cancel out the monodromy.
Explicitly,
$\displaystyle\mathcal{K}_{\gamma_{5}}\mathcal{K}_{\gamma_{4}}\mathcal{K}_{\gamma_{3}}\mathcal{K}_{\gamma_{2}}\mathcal{K}_{\gamma_{1}}(z^{\partial\gamma})=z^{M^{-1}(\partial\gamma)}.$
###### Proof.
We will use the identification as in (5.1). Let us consider
$a=a_{1}{\gamma_{1}^{\prime}}+a_{2}{\gamma_{2}^{\prime}}\in
H_{1}(L_{p},\mathbb{Z})$, where $p\in B_{0}$ is a reference point, and a loop
from $l_{\gamma_{1}}$ anticlockwise to itself:
We will first compute the case without any singularities. This is very
standard from [GPS]; we repeat it only because the signs may cause
confusion.
(Figure: the rays $l_{\gamma_{2}},l_{\gamma_{3}},l_{\gamma_{4}},l_{\gamma_{5}},l_{\gamma_{1}}$ with the wall functions $1+z^{{\gamma_{2}^{\prime}}}$, $1+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}}$, $1+z^{{\gamma_{1}^{\prime}}}$ attached, and the loop $\delta$.)
###### Remark 5.17.
Before we go into the calculation, let us unfold the sign convention in
Theorem 4.2. To determine the sign, we have the condition
$\mbox{Arg}Z_{\gamma}(u_{-})<\mbox{Arg}Z_{\gamma}(u_{+})$. This means that the
loop $\delta$ goes in the anti-clockwise direction.
In the calculation of the exponents, we consider
$\gamma\mapsto\langle\cdot,\gamma\rangle$. Note that
$\langle\cdot,\cdot\rangle$ is the intersection pairing but not the usual
inner product. Together with
$\langle{\gamma_{1}^{\prime}},{\gamma_{2}^{\prime}}\rangle=1$, we have
$\langle\cdot,\gamma\rangle$ is the normal of $l_{\gamma}$ pointing in the
same direction as $\delta$ in the language of [GPS].
Let us consider the transformation
$\mathcal{K}_{\delta}=\mathcal{K}_{{\delta},l_{\gamma_{1}}}\mathcal{K}_{{\delta},l_{\gamma_{5}}}\mathcal{K}_{{\delta},l_{\gamma_{4}}}\mathcal{K}_{{\delta},l_{\gamma_{3}}}\mathcal{K}_{{\delta},l_{\gamma_{2}}}$,
where $\mathcal{K}_{{\delta},l_{\gamma_{k}}}=\mathcal{K}_{\gamma_{k}}$ for
$k=1,2,3$; $\mathcal{K}_{{\delta},l_{\gamma_{k+3}}}=\mathcal{K}_{\gamma_{k}}$
for $k=1,2$.
To simplify the notation, we will write
$\xmapsto{\mathcal{K}_{{\delta},l_{\gamma_{k}}}}$ for the wall crossing over
the wall $l_{\gamma_{k}}$ along the curve $\delta$.
$\displaystyle z^{a}$
$\displaystyle\xmapsto{\mathcal{K}_{{\delta},l_{\gamma_{2}}}}z^{a}(1+z^{{\gamma_{2}^{\prime}}})^{a_{1}},$
$\displaystyle\xmapsto{\mathcal{K}_{\delta,l_{\gamma_{3}}}}z^{a}(1+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}})^{a_{1}-a_{2}}\left(1+z^{{\gamma_{2}^{\prime}}}(1+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}})^{-1}\right)^{a_{1}},$
$\displaystyle=z^{a}(1+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}})^{-a_{2}}(1+z^{{\gamma_{2}^{\prime}}}+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}})^{a_{1}},$
$\displaystyle\xmapsto{\mathcal{K}_{\delta,l_{\gamma_{4}}}}z^{a}(1+z^{{\gamma_{1}^{\prime}}})^{-a_{2}}\left(1+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}}(1+z^{{\gamma_{1}^{\prime}}})^{-1}\right)^{-a_{2}}\left(1+z^{{\gamma_{2}^{\prime}}}(1+z^{{\gamma_{1}^{\prime}}})^{-1}(1+z^{{\gamma_{1}^{\prime}}})\right)^{a_{1}},$
$\displaystyle=z^{a}(1+z^{{\gamma_{1}^{\prime}}}+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}})^{-a_{2}}(1+z^{{\gamma_{2}^{\prime}}})^{a_{1}},$
$\displaystyle\xmapsto{\mathcal{K}_{\delta,l_{\gamma_{5}}}}z^{a}(1+z^{{\gamma_{2}^{\prime}}})^{-a_{1}}\left(1+z^{{\gamma_{1}^{\prime}}}(1+z^{{\gamma_{2}^{\prime}}})^{-1}(1+z^{{\gamma_{2}^{\prime}}})\right)^{-a_{2}}(1+z^{{\gamma_{2}^{\prime}}})^{a_{1}}$
$\displaystyle=z^{a}(1+z^{{\gamma_{1}^{\prime}}})^{-a_{2}},$
$\displaystyle\xmapsto{\mathcal{K}_{\delta,l_{\gamma_{1}}}}z^{a}(1+z^{{\gamma_{1}^{\prime}}})^{a_{2}}(1+z^{{\gamma_{1}^{\prime}}})^{-a_{2}},$
$\displaystyle=z^{a}.$
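The five-step composition above can be verified symbolically. Writing $z_{1}=z^{{\gamma_{1}^{\prime}}}$, $z_{2}=z^{{\gamma_{2}^{\prime}}}$, each $\mathcal{K}_{\delta,l_{\gamma_{k}}}$ is the algebra automorphism determined by its values on $z_{1},z_{2}$, read off from the exponents in the computation above (a sympy sketch, not part of the proof):

```python
from sympy import symbols, simplify

z1, z2 = symbols("z1 z2")   # z1 = z^{gamma_1'}, z2 = z^{gamma_2'}

# Each crossing K_{delta, l_{gamma_k}}, in the order they are met along delta,
# as a substitution on the generators (the automorphism extends multiplicatively).
walls = [
    {z1: z1 * (1 + z2),        z2: z2},                  # l_{gamma_2}
    {z1: z1 * (1 + z1 * z2),   z2: z2 / (1 + z1 * z2)},  # l_{gamma_3}
    {z1: z1,                   z2: z2 / (1 + z1)},       # l_{gamma_4}
    {z1: z1 / (1 + z2),        z2: z2},                  # l_{gamma_5}
    {z1: z1,                   z2: z2 * (1 + z1)},       # l_{gamma_1}
]

for gen in (z1, z2):
    expr = gen
    for K in walls:
        expr = expr.subs(K, simultaneous=True)
    assert simplify(expr - gen) == 0    # the full composition K_delta is the identity
```

Both generators return to themselves, which is the consistency statement.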
Thus we obtain the consistency as usual. Next we investigate the wall crossing
transformation and the monodromy induced by the focus-focus singularities on
$l_{{\gamma_{2}^{\prime}}}$.
(Figure: a loop $\beta$ around the focus-focus singularity at $0$, crossing the walls with functions $1+z^{{\gamma_{2}^{\prime}}}$ and $1+z^{-{\gamma_{2}^{\prime}}}$.)
Let us consider the wall crossing
$\mathcal{K}_{\beta}=\mathcal{K}_{\beta,2}\mathcal{K}_{\beta,1}$ over the
curve $\beta$, where
$\mathcal{K}_{\beta,1}=\mathcal{K}_{{\gamma_{2}^{\prime}}}$, and
$\mathcal{K}_{\beta,2}=\mathcal{K}_{-{\gamma_{2}^{\prime}}}$. The first wall
crossing will lead us to
$\displaystyle\mathcal{K}_{\beta,1}(z^{a})=z^{a}(1+z^{{\gamma_{2}^{\prime}}})^{a_{1}}.$
Then passing over the wall again by using $\beta$ will get us
$\displaystyle\mathcal{K}_{\beta}(z^{a})$
$\displaystyle=\mathcal{K}_{\beta,2}\circ\mathcal{K}_{\beta,1}(z^{a})=z^{a}(1+z^{-{\gamma_{2}^{\prime}}})^{-a_{1}}(1+z^{{\gamma_{2}^{\prime}}})^{a_{1}}$
$\displaystyle=z^{a_{1}{\gamma_{1}^{\prime}}+(a_{1}+a_{2}){\gamma_{2}^{\prime}}}.$
To have $z^{a_{1}{\gamma_{1}^{\prime}}+(a_{1}+a_{2}){\gamma_{2}^{\prime}}}$
go back to $z^{a}$, we need the monodromy $M_{2}$
$\displaystyle{\gamma_{1}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}}-{\gamma_{2}^{\prime}},$ (41)
$\displaystyle{\gamma_{2}^{\prime}}$
$\displaystyle\mapsto{\gamma_{2}^{\prime}}.$ (42)
Next, let us consider the monodromy over the focus-focus singularities on
$l_{{\gamma_{1}^{\prime}}}$:
(Figure: a loop $\alpha$ around the focus-focus singularity at $0$, crossing the walls with functions $1+z^{{\gamma_{1}^{\prime}}}$ and $1+z^{-{\gamma_{1}^{\prime}}}$.)
Consider the transformation according to the loop $\alpha$. Let
$\mathcal{K}_{\alpha,1}=\mathcal{K}_{{\gamma_{1}^{\prime}}}$, and
$\mathcal{K}_{\alpha,2}=\mathcal{K}_{-{\gamma_{1}^{\prime}}}$. We have
$\displaystyle\mathcal{K}_{\alpha,1}(z^{a})=z^{a}(1+z^{{\gamma_{1}^{\prime}}})^{-a_{2}}.$
Then the whole loop $\alpha$ leads us to
$\displaystyle\mathcal{K}_{\alpha}(z^{a})$
$\displaystyle=\mathcal{K}_{\alpha,2}\circ\mathcal{K}_{\alpha,1}(z^{a})=z^{a}(1+z^{-{\gamma_{1}^{\prime}}})^{a_{2}}(1+z^{{\gamma_{1}^{\prime}}})^{-a_{2}}$
$\displaystyle=z^{(a_{1}-a_{2}){\gamma_{1}^{\prime}}+a_{2}{\gamma_{2}^{\prime}}}.$
Then we obtain the monodromy $M_{1}$
$\displaystyle{\gamma_{1}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}},$ (43)
$\displaystyle{\gamma_{2}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}.$ (44)
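Both focus-focus computations hinge on the simplification $(1+z^{-\gamma})^{-1}(1+z^{\gamma})=z^{\gamma}$, and the two local monodromies $M_{1},M_{2}$ compose to the total monodromy $M$ computed below. A short sympy check in the basis $({\gamma_{1}^{\prime}},{\gamma_{2}^{\prime}})$ (the order of composition is our convention for how the two singularities are arranged):

```python
from sympy import Matrix, symbols, simplify

w = symbols("w")                 # w stands for z^{gamma'}

# the simplification used in both K_beta and K_alpha:
# (1 + z^{-gamma'})^{-1} (1 + z^{gamma'}) = z^{gamma'}
assert simplify((1 + w) / (1 + 1 / w) - w) == 0

# monodromies in the basis (gamma_1', gamma_2'), columns = images of basis vectors
M1 = Matrix([[1, 1], [0, 1]])    # (43)-(44): gamma_1' -> gamma_1', gamma_2' -> gamma_1' + gamma_2'
M2 = Matrix([[1, 0], [-1, 1]])   # (41)-(42): gamma_1' -> gamma_1' - gamma_2', gamma_2' -> gamma_2'
M  = Matrix([[0, 1], [-1, 1]])   # total: gamma_1' -> -gamma_2', gamma_2' -> gamma_1' + gamma_2'

# decomposing the singularity at the origin into the two focus-focus singularities
assert M1 * M2 == M
```

So the two focus-focus contributions indeed account for the full monodromy at the origin.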
Thus, we can compute the monodromy when the singularity is at the origin by
decomposing the singularity at the origin into two focus-focus singularities,
and check consistency as in [GHK]. Explicitly, there are two ways of checking
it. The first one is a calculation similar to the one at the beginning of the
proof. Now we consider
[Figure: the five walls $l_{\gamma_{1}},\dots,l_{\gamma_{5}}$ with attached functions $1+z^{{\gamma_{2}^{\prime}}}$, $1+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}}$, $1+z^{{\gamma_{1}^{\prime}}}$, $1+z^{-{\gamma_{1}^{\prime}}}$, $1+z^{-{\gamma_{2}^{\prime}}}$.]
The first three wall crossings are the same as before; let us recap:
$\displaystyle\mathcal{K}_{\delta,l_{\gamma_{4}}}\mathcal{K}_{\delta,l_{\gamma_{3}}}\mathcal{K}_{\delta,l_{\gamma_{2}}}(z^{a})$
$\displaystyle=z^{a}(1+z^{{\gamma_{1}^{\prime}}}+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}})^{-a_{2}}(1+z^{{\gamma_{2}^{\prime}}})^{a_{1}}.$
Now to pass over $l_{\gamma_{5}}$, we will have
$\displaystyle\mathcal{K}(z^{a}(1+z^{{\gamma_{1}^{\prime}}}+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}})^{-a_{2}}(1+z^{{\gamma_{2}^{\prime}}})^{a_{1}})$
$\displaystyle=z^{a}(1+z^{-{\gamma_{2}^{\prime}}})^{-a_{1}}\left(1+z^{{\gamma_{1}^{\prime}}}(1+z^{-{\gamma_{2}^{\prime}}})^{-1}(1+z^{{\gamma_{2}^{\prime}}})\right)^{-a_{2}}(1+z^{{\gamma_{2}^{\prime}}})^{a_{1}}$
$\displaystyle=z^{a_{1}{\gamma_{1}^{\prime}}+(a_{1}+a_{2}){\gamma_{2}^{\prime}}}(1+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}})^{-a_{2}}.$
The monodromy $M$ would then be
$\displaystyle{\gamma_{1}^{\prime}}$
$\displaystyle\mapsto-{\gamma_{2}^{\prime}};$
$\displaystyle{\gamma_{2}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}.$
and gives us
$\displaystyle\mathcal{K}_{M}(z^{a_{1}{\gamma_{1}^{\prime}}+(a_{1}+a_{2}){\gamma_{2}^{\prime}}}(1+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}})^{-a_{2}})=z^{(a_{1}+a_{2}){\gamma_{1}^{\prime}}+a_{2}{\gamma_{2}^{\prime}}}(1+z^{{\gamma_{1}^{\prime}}})^{-a_{2}}.$
The last wall crossing would then be
$\displaystyle\mathcal{K}_{\delta,l_{\gamma_{1}}}\left(z^{(a_{1}+a_{2}){\gamma_{1}^{\prime}}+a_{2}{\gamma_{2}^{\prime}}}(1+z^{{\gamma_{1}^{\prime}}})^{-a_{2}}\right)$
$\displaystyle=z^{(a_{1}+a_{2}){\gamma_{1}^{\prime}}+a_{2}{\gamma_{2}^{\prime}}}(1+z^{-{\gamma_{1}^{\prime}}})^{a_{2}}(1+z^{{\gamma_{1}^{\prime}}})^{-a_{2}}$
$\displaystyle=z^{a}.$
The second way is to use the following claim, which can be verified by a direct computation.
###### Claim 5.18.
$\mathcal{K}_{-\gamma}\mathcal{K}_{\gamma}(z^{\gamma^{\prime}})=z^{M^{-1}\gamma^{\prime}}$,
where $M$ is the transformation
$\gamma^{\prime}\mapsto\gamma^{\prime}+\langle\gamma,\gamma^{\prime}\rangle\gamma$.
Note that if $\gamma$ is primitive, then $M$ is the Picard-Lefschetz
transformation of a focus-focus singularity with Lefschetz thimble $\gamma$.
Recall that if $\langle\gamma^{\prime},\gamma\rangle=1$, then the pentagon
equation reads
$\displaystyle\mathcal{K}_{\gamma}\mathcal{K}_{\gamma^{\prime}}=\mathcal{K}_{\gamma^{\prime}}\mathcal{K}_{\gamma+\gamma^{\prime}}\mathcal{K}_{\gamma}.$
(45)
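Claim 5.18 and the pentagon equation (45) can be checked symbolically. The sketch below is ours, not the paper's: the helper names `z`, `pair`, `K`, `run` are invented for illustration, the sign convention $\mathcal{K}_{\gamma}(z^{a})=z^{a}(1+z^{\gamma})^{\langle\gamma,a\rangle}$ with $\langle{\gamma_{2}^{\prime}},{\gamma_{1}^{\prime}}\rangle=1$ is read off from the displayed computations above, and compositions are applied leftmost factor first (other conventions permute the two sides of (45)).

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def z(g):
    # Monomial z^g for g = (g1, g2) in the basis (gamma_1', gamma_2').
    return x**g[0] * y**g[1]

def pair(a, b):
    # Pairing normalized so that <gamma_2', gamma_1'> = 1, matching
    # K_{gamma_1'}(z^a) = z^a (1 + z^{gamma_1'})^{-a_2} above.
    return a[1]*b[0] - a[0]*b[1]

def K(g):
    # Wall-crossing automorphism determined by z^a -> z^a (1+z^g)^{<g,a>}.
    def act(f):
        return sp.cancel(f.subs([(x, x*(1 + z(g))**pair(g, (1, 0))),
                                 (y, y*(1 + z(g))**pair(g, (0, 1)))],
                                simultaneous=True))
    return act

def run(maps, f):
    # Apply the maps in list order (leftmost first).
    for m in maps:
        f = m(f)
    return f

g1, g2, g12 = (1, 0), (0, 1), (1, 1)

# Claim 5.18: K_{-gamma} K_{gamma} acts on monomials as a lattice map;
# z^{gamma_2'} goes to the pure monomial z^{gamma_2' - gamma_1'}.
assert sp.simplify(run([K(g1), K((-1, 0))], y) - y/x) == 0

# Pentagon equation (45) on both torus generators:
for gen in (x, y):
    lhs = run([K(g1), K(g2)], gen)
    rhs = run([K(g2), K(g12), K(g1)], gen)
    assert sp.simplify(lhs - rhs) == 0
```

Since a wall-crossing is an algebra automorphism, checking the identity on the two generators $x=z^{{\gamma_{1}^{\prime}}}$ and $y=z^{{\gamma_{2}^{\prime}}}$ suffices.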
Let $M_{1},M_{2}$ denote the transformations in Claim 5.18 with respect to
$\gamma^{\prime}_{1},\gamma^{\prime}_{2}$ respectively.
With the branch cut as in Figure 3, one has
$\displaystyle\mathcal{K}_{\gamma_{5}}\mathcal{K}_{\gamma_{4}}\mathcal{K}_{\gamma_{3}}\mathcal{K}_{\gamma_{2}}\mathcal{K}_{\gamma_{1}}=\bigg{(}\mathcal{K}_{-\gamma_{2}}\mathcal{K}_{\gamma_{2}}\bigg{)}\bigg{(}\mathcal{K}_{\gamma_{2}}^{-1}\mathcal{K}_{\gamma_{4}}\mathcal{K}_{\gamma_{3}}\mathcal{K}_{\gamma_{2}}\mathcal{K}_{-\gamma_{1}}^{-1}\bigg{)}\bigg{(}\mathcal{K}_{-\gamma_{1}}\mathcal{K}_{\gamma_{1}}\bigg{)}.$
Notice that the middle factor of the right hand side is the identity by the
pentagon identity (45). From Claim 5.18, we have
$\displaystyle\mathcal{K}_{\gamma_{5}}\mathcal{K}_{\gamma_{4}}\mathcal{K}_{\gamma_{3}}\mathcal{K}_{\gamma_{2}}\mathcal{K}_{\gamma_{1}}(z^{\gamma})=z^{M_{2}^{-1}M_{1}^{-1}\gamma}=z^{(M_{1}M_{2})^{-1}\gamma}$
and the lemma follows from the fact that $M=M_{1}M_{2}$. Notice that the proof
is motivated by deforming the type $II$ singular fibre into two $I_{1}$
singular fibres as in Figure 5. However, the proof does not depend on the
actual geometric deformation.
[Figure: the walls with attached functions $1+z^{{\gamma_{2}^{\prime}}}$, $1+z^{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}}$, $1+z^{{\gamma_{1}^{\prime}}}$, $1+z^{-{\gamma_{1}^{\prime}}}$, $1+z^{-{\gamma_{2}^{\prime}}}$.]
Figure 5: Geometric interpretation of Lemma 5.16.
∎
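The factorization $M=M_{1}M_{2}$ used at the end of the proof can be verified as a matrix identity in the basis $({\gamma_{1}^{\prime}},{\gamma_{2}^{\prime}})$. A quick numerical sketch (ours, for illustration), which also checks that $M$ has order six, as expected for the monodromy of a type $II$ fibre:

```python
import numpy as np

# Columns are the images of gamma_1', gamma_2' in the basis (gamma_1', gamma_2').
M1 = np.array([[1, 1],    # M1: gamma_1' -> gamma_1',            gamma_2' -> gamma_1' + gamma_2'
               [0, 1]])
M2 = np.array([[1, 0],    # M2: gamma_1' -> gamma_1' - gamma_2', gamma_2' -> gamma_2'
               [-1, 1]])
M  = np.array([[0, 1],    # M:  gamma_1' -> -gamma_2',           gamma_2' -> gamma_1' + gamma_2'
               [-1, 1]])

assert (M1 @ M2 == M).all()

# Kodaira's type II monodromy has order 6; check that M^6 = I and no
# smaller positive power is the identity.
P = np.eye(2, dtype=int)
orders = []
for k in range(1, 7):
    P = P @ M
    if (P == np.eye(2, dtype=int)).all():
        orders.append(k)
assert orders == [6]
```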
###### Remark 5.19.
It is worth noticing that the above calculation may a priori differ from
the composition of wall-crossings for the $A_{2}$ cluster variety, for two
reasons. The first difference comes from the appearance of the monodromy at
the origin, while there is no such monodromy in the cluster scattering
diagram. We will explain the identification in Section 5.3. The second
difference comes from the fact that in the calculation for the cluster
variety there is a preferred choice of basis in each chamber, while the
calculation in Floer theory uses a fixed basis (up to parallel transport).
However, thanks to (25), the two calculations coincide.
In the next section, we will show that the latter has a compactification to
the analytification of the del Pezzo surface of degree five, obtained by
adding a cycle of five rational curves via the ring of theta functions,
following [GHK].
###### Remark 5.20.
One would naturally expect that the family Floer mirror of the hyperKähler
rotation of $X^{\prime}_{t}$ still compactifies to the del Pezzo surface of
degree five. In this case, there are only two families of holomorphic discs
at each of the singularities, and one can glue in the local model of
[KS1]*Section 8 and get a partial compactification of the family Floer
mirror. The author will compare it with the Gross-Siebert construction of the
mirror in future work.
###### Remark 5.21.
Shen, Zaslow and Zhou prove the homological mirror symmetry for the $A_{2}$
cluster variety featuring the canonical equivariant $\mathbb{Z}_{5}$-action
[SZZ].
### 5.2 Comparison with GHK Mirror of $dP_{5}$
Let ${Y}$ be the del Pezzo surface of degree five and ${D}$ be the
anti-canonical divisor consisting of a wheel of five rational curves. Here we
will explain the comparison of the family Floer mirror of $X_{II}$ with the
GHK mirror of $({Y},{D})$. Recall that in Lemma 5.9 we identified the
integral affine structures on $B_{0}$ and $B_{\mathrm{GHK}}$. Moreover, the
BPS rays naturally divide $B_{0}$ into cones, and this is exactly the cone
decomposition of $B_{\mathrm{GHK}}$. The canonical scattering diagram in this
case is computed in [GHK]*Example 3.7, and all the $\mathbb{A}^{1}$-curves
are shown in Figure 8.
###### Lemma 5.22.
There exists a homeomorphism $X_{II}\cong{Y}\setminus{D}$.
###### Proof.
From the explicit equation in Section 3, a deformation of $X_{II}$ has two
singular fibres of type $I_{1}$, and the vanishing cycles have intersection
number $1$. On the other hand, [A4]*Example 3.1.2 provides the local model of
a Lagrangian fibration near the blow-up of a point on the surface. Since
${Y}$ can be realized as the blow-up of two non-toric boundary points on the
del Pezzo surface of degree $7$, one can topologically glue the pull-back of
the moment map torus fibration with the local Lagrangian fibration to get a
torus fibration on ${Y}\setminus{D}$ with two nodal fibres whose vanishing
cycles have intersection number $1$. This gives the homeomorphism between
$X_{II}$ and ${Y}\setminus{D}$ and the identification of the classes of the
tori under $H_{2}(X_{II},\mathbb{Z})\cong H_{2}({Y}\setminus{D},\mathbb{Z})$.
In particular, we can use ${Y}$ as an auxiliary topological compactification
of $X_{II}$. ∎
We will take $P=\mathrm{NE}(Y)$ in the Gross-Hacking-Keel construction. We
have $P^{gp}\cong\mbox{Pic}(Y)^{*}\cong H_{2}(Y,\mathbb{Z})$, where the first
isomorphism comes from Poincaré duality and the projectivity of $Y$, while
the second comes from $H^{1,0}(Y)=H^{2,0}(Y)=0$. The rank two lattices
$H_{1}(L_{u},\mathbb{Z})$ glue to a local system of lattices over $B_{0}$,
which is naturally identified with $\Lambda_{B_{0}}$ by Remark 3.2. Then we
have the following commutative diagram, except for the middle map, which is
yet to be constructed. Here $\underline{H_{2}(Y,\mathbb{Z})}$ denotes the
constant sheaf with fibre $H_{2}(Y,\mathbb{Z})$ over $B_{0}$, and $\Gamma$
(respectively $\Gamma_{g}$) is the local system of lattices over $B_{0}$ with
fibre $H_{2}(Y,L_{u};\mathbb{Z})$ (respectively $H_{1}(L_{u})$) over $u\in
B_{0}$.
(50)
Notice that the bottom short exact sequence is (3). Next we will construct
the middle map $\Psi$. Recall that $D_{i}^{2}<0$; by a theorem of Grauert
[G8], one can contract $D_{i}$ to an orbifold singularity locally modeled on
a neighborhood of the origin in the cyclic quotient
$\mathbb{C}^{2}/\mathbb{Z}_{-D_{i}^{2}}$. Since the blow-up of this quotient
is the total space of $\mathcal{O}_{\mathbb{P}^{1}}(D_{i}^{2})$, a
neighborhood of $D_{i}$ is biholomorphic to a neighborhood of the zero
section in $\mathcal{O}_{\mathbb{P}^{1}}(D_{i}^{2})$. Therefore, a
neighborhood of $D$ is covered by charts
$W_{i}=\\{(x_{i},y_{i})\in\mathbb{C}^{2}:|x_{i}y_{i}|<1\\}$ such that
1. 1.
$(Y,D)$ is modeled by $(W_{i},\\{x_{i}y_{i}=0\\})$ near a node $D_{i}\cap
D_{i+1}$
2. 2.
$D_{i}=\\{x_{i}=0\\}$ and $D_{i+1}=\\{y_{i}=0\\}$
3. 3.
$x_{i+1}=y_{i}^{-1},y_{i+1}=x_{i}y_{i}^{-D_{i}^{2}}$.
Notice that $N_{D_{i}/Y}\cong\mathcal{O}_{\mathbb{P}^{1}}(D_{i}^{2})$;
indeed, the last relation above is the transition function for
$\mathcal{O}_{\mathbb{P}^{1}}(D_{i}^{2})$. The torus fibre in $Y\setminus D$
near the node $D_{i}\cap D_{i+1}$ is isotopic to
$L=\\{|x_{i}|=|y_{i}|=\frac{1}{2}\\}$. It is easy to see that $L$ bounds two
families of holomorphic discs, $\\{|x_{i}|\leq\frac{1}{2},y_{i}=const\\}$ and
$\\{x_{i}=const,|y_{i}|\leq\frac{1}{2}\\}$. Denote by $\beta_{i}\in
H_{2}(Y,L)$ the relative class of the discs intersecting $D_{i}$, represented
by the $2$-chain $b_{i}$. Over a simply connected subset $U_{i}\subseteq
B_{0}$, both of the short exact sequences in (50) split (non-canonically),
and we define the middle map by $\Psi(\beta_{i})=\phi_{i}(v_{i})$. From
Remark 3.2,
the right hand side square commutes and $\partial\beta_{i}$ (up to parallel
transport) generate $H_{1}(L_{u},\mathbb{Z})$. To see that the middle map is
independent of $i$ and the left hand side square commutes, one has the
following observation: We may choose $u$ to be in a neighborhood of $D_{i}$,
which is diffeomorphic to
$N_{D_{i}/Y}\cong\mathcal{O}_{\mathbb{P}^{1}}(D_{i}^{2})$. The relation
$x_{i+1}=y_{i}^{-1},y_{i+1}=x_{i}y_{i}^{-D_{i}^{2}}$ translates to
$\partial\beta_{i-1}+D_{i}^{2}\partial\beta_{i}+\partial\beta_{i+1}=0\in
H_{1}(L,\mathbb{Z})$, which is the analogue of (1). Therefore, there exists a
$2$-chain $C$ in $L$ such that $C\cup b_{i-1}\cup D_{i}^{2}b_{i}\cup b_{i+1}$
is a $2$-cycle in the neighborhood of $D_{i}$. To lift the relation (1) to
$\Gamma$, notice that the $2$-cycle lies in $Y\setminus\cup_{j\neq
i-1,i,i+1}D_{j}$, which is homeomorphic to the total space of
$\mathcal{O}_{\mathbb{P}^{1}}(D_{i}^{2})$ and hence has second homology
generated by $[D_{i}]$. Therefore, the $2$-cycle must be a multiple of
$[D_{i}]$. On the other hand, the intersection numbers of the $2$-cycle with
$D_{i-1}$ and $D_{i+1}$ are both $1$ from the explicit representatives
chosen, and thus
$\displaystyle\beta_{i-1}+D_{i}^{2}\beta_{i}+\beta_{i+1}=[D_{i}],$ (51)
which is exactly the analogue of (4) and defines the gluing relation in
$\mathcal{P}$. We also remark that $H_{2}(Y,\mathbb{Z})$ is generated by the
$[D_{i}]$. Thus the left hand side square of (50) also commutes, and
$\Psi$ is an isomorphism by the five lemma.
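The relation $\partial\beta_{i-1}+D_{i}^{2}\partial\beta_{i}+\partial\beta_{i+1}=0$ can also be verified at the level of winding numbers: the chart change $x_{i+1}=y_{i}^{-1}$, $y_{i+1}=x_{i}y_{i}^{-D_{i}^{2}}$ induces the map $(a,b)\mapsto(-b,\,a-D_{i}^{2}b)$ on $H_{1}(L,\mathbb{Z})$, and $\partial\beta_{j}$ is the $x_{j}$-circle class in chart $j$. A small sketch of ours (the function name `transition` is invented), sampling a few values of $n=D_{i}^{2}$:

```python
import numpy as np

def transition(n):
    """Map induced on H_1 of the torus L = {|x_i| = |y_i| = 1/2} by the
    chart change x_{i+1} = y_i^{-1}, y_{i+1} = x_i y_i^{-n} (n = D_i^2):
    winding numbers (a, b) around (x_i, y_i) become (-b, a - n*b)."""
    return np.array([[0, -1],
                     [1, -n]])

for n in [-1, -2, -3]:                   # sample self-intersections D_i^2
    A = transition(n)                    # chart i -> chart i+1
    Ainv = np.array([[-n, 1], [-1, 0]])  # inverse of A (note det A = 1)
    assert (A @ Ainv == np.eye(2, dtype=int)).all()

    disc = np.array([1, 0])              # x_j-disc boundary class in chart j
    b_prev = transition(-2) @ disc       # boundary of beta_{i-1} pushed to chart i;
                                         # the result (0, 1) is independent of D_{i-1}^2
    b_i = disc                           # boundary of beta_i in chart i
    b_next = Ainv @ disc                 # boundary of beta_{i+1} pulled back to chart i
    assert (b_prev + n*b_i + b_next == 0).all()
```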
[Figure: a $2$-chain $Q$ near $D_{i-1}$, $D_{i}$, $D_{i+1}$, shown before and after convergence.]
Figure 6: Illustration for (51).
Therefore, the middle map is well-defined by (51), and it is an isomorphism
by the five lemma. Notice that $\beta_{i}+\gamma_{i}$ represents a $2$-chain
which defines a $2$-cycle up to a multiple of the fibre. Since the fibre is
contractible in $Y$, we may view $\beta_{i}+\gamma_{i}$ as a $2$-cycle in
$H_{2}(Y,\mathbb{Z})$. Since $[E_{i}]$ is the unique class with intersections
$[E_{i}].[D_{j}]=\delta_{ij}$, we have $z^{[E_{i}]-\phi_{i}(v_{i})}$
identified with $z^{\gamma_{i}}$ (see Figure 7).
[Figure: the disc class $\beta_{i}$ meeting $D_{i}$, together with the class $\gamma_{i}$.]
Figure 7: The class $[E_{i}]$ decomposes into sum of $\gamma_{i}$ and
$\beta_{i}$
In particular, the transformation $\Psi_{i,i+1}$ coincides with
(LABEL:gluing_of_tori). This leads to the identification of $\check{X}$ and
the GHK mirror of $({Y},{D})$ as gluings of tori. Notice that the Gross-
Hacking-Keel mirror of $(Y,D)$ comes with a family over
$\mbox{Spec}\mathbb{C}[\mathrm{NE}(Y)]$. We will have to determine which
particular point of $\mbox{Spec}\mathbb{C}[\mathrm{NE}(Y)]$ the family Floer
mirror $\check{X}$ corresponds to. Notice that the monodromy sends
$\gamma_{i}$ to $\gamma_{i+1}$. This implies that $\check{X}$ corresponds to
the point where the values of $z^{[E_{i}]}$ all coincide. From the explicit
relations among the curve classes $[E_{i}]$, $\check{X}$ corresponds to the
point where $z^{[D_{i}]}=z^{[E_{i}]}=1$.
Indeed, one can see this via the identification of $\check{X}$ with a subset
of the analytification of the del Pezzo surface of degree five. We will see
in the next section (Section 5.3) that this is the cluster variety of type
$A_{2}$. Recall that the Gross-Hacking-Keel mirror is determined by the
algebraic equations among the theta functions [GHK]*Equation (3.2),
$\displaystyle\vartheta_{i-1}\vartheta_{i+1}=z^{[D_{i}]}(\vartheta_{i}+z^{[E_{i}]}).$
Comparing with (5) (and later (52)), we see that the family Floer mirror
$\check{X}$ corresponds to the fibre with
$\displaystyle z^{[D_{i}]}=z^{[E_{i}]}=1.$
Here we remark that the complex structure of the fibre defined by
$z^{[D_{i}]}=z^{[E_{i}]}=1$ is expected to be mirror to the del Pezzo surface
of degree five with monotone symplectic structure. However, the verification
seems hard due to the analytic difficulty explained in Remark 4.8.
[Figure: rays labeled by the curve classes $E_{x}$, $E_{y}$, $H-E_{x}$, $H-E_{y}$, $H-E_{x}-E_{y}$.]
Figure 8: The canonical scattering diagram and the $\mathbb{A}^{1}$-curves in
del Pezzo surfaces of degree $5$ (illustrated by a projection to
$\mathbb{P}^{2}$).
We will show in Sections 5.1, 5.2 and 5.3 the following result:
###### Theorem 5.23.
The analytification of the ${\mathcal{X}}$-cluster variety of type $A_{2}$,
or equivalently the Gross-Hacking-Keel mirror of $(Y,D)$, is a partial
compactification of the family Floer mirror of $X_{II}$.
###### Remark 5.24.
Since we do not include the singular fibre in the family Floer mirror
construction, the family Floer mirror is missing
$\mathfrak{Trop}^{-1}(0)\subseteq(\mathbb{G}_{m}^{an})^{2}$ in each rigid
analytic torus.
### 5.3 Comparison with $A_{2}$-Cluster Variety
In this section, we will prove that the family Floer mirror constructed in
Section 5.1 is simply the ${\mathcal{X}}$-cluster variety of type $A_{2}$. The
${\mathcal{X}}$-cluster algebra of type $A_{2}$ are defined in Section 2.2
with $d_{1}=d_{2}=1$. The following observation helps to link the scattering
diagram in Theorem 5.8 and ${\mathcal{X}}$ the scattering diagram of type
$A_{2}$.
The operation we are going to perform can be viewed as a symplectic analogue
of “pushing singularities to infinity” in [GHK]. Recall that if one has a
special Lagrangian fibration with a focus-focus singularity at $u_{0}$ and
Lefschetz thimble $\gamma$, then locally there exist two affine rays
$l_{\pm\gamma}$ emanating from $u_{0}$ on the base, parametrizing special
Lagrangian fibres bounding holomorphic discs in classes $\pm\gamma$ [A]. The
rays $l_{\pm\gamma}$ divide a neighborhood of $u_{0}$ into two chambers
$U_{\pm}$, where $U_{\pm}$ is characterized by
$\int_{\pm\gamma}\mbox{Im}\Omega>0$. The corresponding wall-crossing across
$l_{\pm\gamma}$ from $U_{-}$ to $U_{+}$ is $\mathcal{K}_{\pm\gamma}$, and the
monodromy around $u_{0}$ is given by $M$ in Claim 5.18. We make a branch cut
from $u_{0}$ to infinity, and the parallel transport should be composed with
$M$ when crossing the branch cut. Notice that the three transformations
$\mathcal{K}_{\pm\gamma}$ and $M$ commute. If we choose the cut to coincide
with $l_{-\gamma}$, then the transformation crossing $l_{-\gamma}$ from
$U_{-}$ to $U_{+}$ is $\mathcal{K}_{\gamma}$, which coincides with the
transformation crossing $l_{\gamma}$ from $U_{-}$ to $U_{+}$. Similarly, if
we choose the cut to coincide with $l_{\gamma}$, then the transformation
crossing $l_{\gamma}$ from $U_{+}$ to $U_{-}$ is $\mathcal{K}_{-\gamma}$,
which coincides with the transformation crossing $l_{-\gamma}$ from $U_{+}$
to $U_{-}$.
To sum up, choosing the branch cut to coincide with $l_{-\gamma}$ makes the
transformations across $l_{\pm\gamma}$ from $U_{-}$ to $U_{+}$ both equal to
$\mathcal{K}_{\gamma}$, as if the singularity $u_{0}$ were moved to infinity
along $l_{-\gamma}$. Similarly, if we choose the branch cut to coincide with
$l_{\gamma}$, then the transformation from $U_{-}$ to $U_{+}$ is
$\mathcal{K}_{-\gamma}$, as if the singularity were moved to infinity along
$l_{\gamma}$.
Now back to the scattering diagram in Theorem 5.8. We can express the
underlying integral affine structure on $B_{0}$ in a different way by
choosing different branch cuts. First we decompose $M=M_{1}M_{2}$, where
$M_{1},M_{2}$ are the Picard-Lefschetz transformations with vanishing cycles
${\gamma_{1}^{\prime}},{\gamma_{2}^{\prime}}$. Choose the branch cuts to be
$l_{\gamma_{1}}$ (and $l_{\gamma_{5}}$) with the corresponding
identifications $M_{1}$ (and $M_{2}$ respectively), as in Figure 9. Then from
the previous discussion in this section and the same argument as in Section
5.1, the family Floer mirror is a gluing of five tori, with the gluings
coinciding with those of the $A_{2}$-cluster variety
$\check{X}_{\mathbb{C}}$.
[Figure: the BPS rays $l_{\gamma_{1}},\dots,l_{\gamma_{5}}$ with branch cuts labeled $M_{1}$ and $M_{2}$.]
Figure 9: The different choice of branch cuts for $X_{II}$.
Note that one can similarly define theta functions in the analytic situation.
Since we are working with finite type, we can express theta functions in
different torus charts by path ordered products. These functions are well
defined since the scattering diagram is consistent (see Lemma 5.16). Further
note that, in the finite case, we can replicate (5) to define multiplications
between theta functions without broken lines. (In general, the product of
theta functions can be expressed as a linear combination of theta functions
[GHK, GHKK], whose coefficients can be computed via broken lines.)
A standard and straightforward calculation shows that
$\displaystyle\vartheta_{v_{i-1}}\cdot\vartheta_{v_{i+1}}$
$\displaystyle=1+\vartheta_{v_{i}},$ (52)
where $v_{i}$ denotes the primitive generator of $l_{\gamma_{i}}$ for
$i\in\\{1,\dots,5\\}$, ordered cyclically. This agrees with the exchange
relations in Section 2.1, and gives a natural embedding of
$\check{X}_{\mathbb{C}}$ into $\mathbb{P}^{5}$ after suitable homogenization
of (52); it is thus compactified to a del Pezzo surface of degree five.
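The relation (52) is the classical Lyness recurrence $\vartheta_{v_{i+1}}=(1+\vartheta_{v_{i}})/\vartheta_{v_{i-1}}$, whose $5$-periodicity matches the five chambers and five tori. A quick symbolic check of ours:

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)

# theta_{i+1} = (1 + theta_i) / theta_{i-1}, from relation (52).
seq = [a, b]
for _ in range(5):
    seq.append(sp.cancel((1 + seq[-1]) / seq[-2]))

# The sequence is 5-periodic: theta_6 = theta_1 and theta_7 = theta_2.
assert sp.simplify(seq[5] - a) == 0
assert sp.simplify(seq[6] - b) == 0
```

For instance, starting from $\vartheta_{1}=\vartheta_{2}=1$ the recurrence produces $1,1,2,3,2,1,1,\dots$, returning to the initial pair after five steps.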
## 6 Family Floer Mirror of $X_{III}$
In this section, we will consider the case when
$Y^{\prime}=Y^{\prime}_{III}$ is a rational elliptic surface with singular
configuration $III^{*}III$ and $D^{\prime}$ is the type $III^{*}$ fibre. We
claim that the family Floer mirror of $X=X_{III}$ is then the del Pezzo
surface of degree $6$. The argument is similar to that in Section 5.
First of all, such $Y^{\prime}$ has the explicit affine equation
$\displaystyle y^{2}=x^{4}+u.$
It is easy to see that the fibre over $u=0$ is a singular fibre of type $III$,
while the fibre at infinity is of type $III^{*}$. There is a natural
deformation $Y^{\prime}_{t}$, given by the minimal resolution of the surface
$\displaystyle\\{z^{2}y^{2}=x^{4}+4t^{2}x^{2}z^{2}+uz^{4}\\}\subseteq\mathbb{P}^{2}_{(x:y:z)}\times\mathbb{P}^{1}_{(t:u)}$
such that there are two singular fibres, of types $I_{1}$ and $I_{2}$, near
$u=0$ for $|t|\ll 1$, with vanishing thimbles ${\gamma_{1}^{\prime}}$ and
${\gamma_{2}^{\prime}},{\gamma_{3}^{\prime}}$ respectively. By Theorem 4.5,
we have the analogue of Theorem 5.8.
###### Theorem 6.1.
[L12]*Theorem 4.12 There exist
${\gamma_{1}^{\prime}},{\gamma_{2}^{\prime}},{\gamma_{3}^{\prime}}\in
H_{2}(X,L_{u})\cong\mathbb{Z}^{3}$ such that
$\langle{\gamma_{1}^{\prime}},{\gamma_{2}^{\prime}}\rangle=\langle{\gamma_{1}^{\prime}},{\gamma_{3}^{\prime}}\rangle=1$,
$\langle{\gamma_{2}^{\prime}},{\gamma_{3}^{\prime}}\rangle=0$ and
$Z_{{\gamma_{2}^{\prime}}}=Z_{{\gamma_{3}^{\prime}}}$. Moreover, if we set
$\gamma_{1}=-{\gamma_{1}^{\prime}},\ \gamma_{2}={\gamma_{2}^{\prime}},\
\gamma_{3}={\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}+{\gamma_{3}^{\prime}},\
\gamma_{4}={\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}},\
\gamma_{5}={\gamma_{1}^{\prime}},\gamma_{6}=-{\gamma_{3}^{\prime}}.$
Then
1. 1.
$f_{\gamma}(u)\neq 1$ if and only if $u\in l_{\gamma_{i}}$ and
$\gamma=\gamma_{i}$ for some $i\in\\{1,\cdots,6\\}$ .
2. 2.
In such cases,
$\displaystyle
f_{\gamma_{i}}=\begin{cases}1+T^{\omega(\gamma_{i})}z^{\partial\gamma_{i}}&\mbox{
if $i$ odd,}\\\ (1+T^{\omega(\gamma_{i})}z^{\partial\gamma_{i}})^{2}&\mbox{ if
$i$ even.}\end{cases}$
3. 3.
If we choose the branch cut between $l_{\gamma_{1}}$ and $l_{\gamma_{6}}$,
then the counter-clockwise monodromy $M$ across the branch cut is given by
$\displaystyle{\gamma_{1}^{\prime}}$
$\displaystyle\mapsto-({\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}+{\gamma_{3}^{\prime}})$
$\displaystyle{\gamma_{2}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}$
$\displaystyle{\gamma_{3}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}}+{\gamma_{3}^{\prime}}.$ (53)
Notice that from the condition
$Z_{{\gamma_{2}^{\prime}}}=Z_{{\gamma_{3}^{\prime}}}$, we have
$l_{{\gamma_{2}^{\prime}}}=l_{{\gamma_{3}^{\prime}}}$ and
$l_{{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}}=l_{{\gamma_{1}^{\prime}}+{\gamma_{3}^{\prime}}}$.
Then we compute the central charges $Z_{\gamma_{i}}$, parallel to Lemma 5.6.
Taking the branch cut between $l_{\gamma_{1}}$ and $l_{\gamma_{6}}$, we
obtain the diagram in Figure 10.
[Figure: six BPS rays labeled $\gamma_{1}=-{\gamma_{1}^{\prime}}$, $\gamma_{2}={\gamma_{2}^{\prime}}$, $\gamma_{3}={\gamma_{1}^{\prime}}+2{\gamma_{2}^{\prime}}$, $\gamma_{4}={\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}$, $\gamma_{5}={\gamma_{1}^{\prime}}$, $\gamma_{6}=-{\gamma_{2}^{\prime}}$.]
Figure 10: BPS rays near the singular fibre in $X_{III}$.
###### Lemma 6.2.
With suitable choice of coordinate $u$ on $B_{0}\cong\mathbb{C}^{*}$, we have
$\displaystyle Z_{\gamma_{k}}(u)=\begin{cases}e^{\pi
i(k-1)\frac{3}{4}}u^{\frac{3}{4}}&\mbox{if $k$ odd,}\\\ \frac{1-i}{2}e^{\pi
i(k-2)\frac{3}{4}}u^{\frac{3}{4}}&\mbox{if $k$ even.}\end{cases}$ (54)
In particular, the angle between $l_{\gamma_{k}}$ and $l_{\gamma_{k+1}}$ is
$\frac{\pi}{3}$; the positions of the BPS rays are shown in Figure 10.
###### Proof.
A straightforward calculation shows that
$Z_{\gamma_{k}}(u)=O(|u|^{\frac{3}{4}})$. Normalize the coordinate $u$ such
that $Z_{\gamma_{1}}(u)=u^{\frac{3}{4}}$. Since $M{\gamma_{k}}=\gamma_{k+2}$,
the case of odd $k$ follows immediately. Similarly, when $k$ is even,
$Z_{\gamma_{k}}(u)=ce^{\pi i(k-2)\frac{3}{4}}u^{\frac{3}{4}}$ for some
$c\in\mathbb{C}$. From $Z_{\gamma_{2}}+Z_{\gamma_{4}}=Z_{\gamma_{3}}$ we get
$c=\frac{1-i}{2}$. ∎
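The constant $c=\frac{1-i}{2}$ and the $\frac{\pi}{3}$ spacing can be sanity-checked numerically from (54). The sketch below is ours and assumes the ray labels of Figure 10, so that $\gamma_{2}+\gamma_{4}=\gamma_{3}$ and $\gamma_{3}+\gamma_{5}=2\gamma_{4}$ hold on central charges; consecutive charges then differ by a fixed phase of $-\frac{\pi}{4}$, which the coordinate change $u\mapsto u^{3/4}$ converts into rays $\frac{\pi}{3}$ apart in the $u$-plane.

```python
import cmath
import math

c = (1 - 1j) / 2

def Z(k, u):
    """Central charge Z_{gamma_k}(u) from (54), principal branch of u^{3/4}."""
    if k % 2 == 1:
        return cmath.exp(0.75j * math.pi * (k - 1)) * u ** 0.75
    return c * cmath.exp(0.75j * math.pi * (k - 2)) * u ** 0.75

u = 1.7  # sample point with arg(u) = 0, away from the branch cut

# Additive relations among the gamma_k (from Figure 10) on central charges:
assert abs(Z(2, u) + Z(4, u) - Z(3, u)) < 1e-12       # gamma_2 + gamma_4 = gamma_3
assert abs(Z(3, u) + Z(5, u) - 2 * Z(4, u)) < 1e-12   # gamma_3 + gamma_5 = 2 gamma_4

# Consecutive central charges differ by a fixed phase -pi/4; via u^{3/4}
# this is exactly a pi/3 spacing of the BPS rays l_{gamma_k}.
for k in range(1, 6):
    assert abs(cmath.phase(Z(k + 1, u) / Z(k, u)) + math.pi / 4) < 1e-12
```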
We take $U_{i}$ to be the sector bounded by $l_{\gamma_{i}}$ and
$l_{\gamma_{i+1}}$. Let $\check{X}$ be the family Floer mirror constructed by
Tu [T4]. Again we denote the embedding
$\alpha_{i}:\mathfrak{Trop}_{i}^{-1}(U_{i})\rightarrow\check{X}$. From Lemma
6.2, $x_{i}>0$ on a sector symmetric with respect to $l_{\gamma_{i}}$ and of
angle $\frac{2\pi}{3}\times 2$. Thus, $\alpha_{i}$ can be extended to
$\mathfrak{Trop}_{i}^{-1}\bigg{(}\overline{\bigcup_{k=i-2}^{k=i+2}U_{k}}\bigg{)}$.
Following the same lines as Lemma 5.14 and Lemma 5.15, $\alpha_{i}$ extends to
$\mathfrak{Trop}^{-1}\bigg{(}\overline{\bigcup_{k=i-2}^{k=i+3}U_{k}}\bigg{)}$.
Finally, $\alpha_{i}$ extends over $l_{\gamma_{i+4}}$ by the following
analogue of Lemma 5.16, whose proof is similar and will be omitted.
###### Lemma 6.3.
The composition of the wall-crossing transformations cancel out the monodromy.
Explicitly,
$\displaystyle\mathcal{K}_{\gamma_{6}}\mathcal{K}_{\gamma_{5}}\mathcal{K}_{\gamma_{4}}\mathcal{K}_{\gamma_{3}}\mathcal{K}_{\gamma_{2}}\mathcal{K}_{\gamma_{1}}(z^{\gamma})=z^{M^{-1}\gamma}.$
Similarly to the argument of Section 5.3, we may split the branch cut in
Figure 10 into two, as in Figure 11. The explicit gluing functions of the
$B_{2}$-cluster variety can be found in [thesis]*p.54 Figure 4.1. Then the
family Floer mirror $\check{X}$ can be partially compactified to a gluing of
six tori (up to GAGA) with the same gluing functions as the
${\mathcal{X}}$-cluster variety of type $B_{2}$. One can compute the products
of the theta functions via broken lines and obtain
$\displaystyle\vartheta_{v_{1}}\vartheta_{v_{3}}$
$\displaystyle=1+\vartheta_{v_{2}},$
$\displaystyle\vartheta_{v_{2}}\vartheta_{v_{4}}$
$\displaystyle=(1+\vartheta_{v_{3}})^{2},$
$\displaystyle\vartheta_{v_{3}}\vartheta_{v_{5}}$
$\displaystyle=1+\vartheta_{v_{4}},$
$\displaystyle\vartheta_{v_{4}}\vartheta_{v_{6}}$
$\displaystyle=(1+\vartheta_{v_{5}})^{2},$
$\displaystyle\vartheta_{v_{5}}\vartheta_{v_{1}}$
$\displaystyle=1+\vartheta_{v_{6}},$
$\displaystyle\vartheta_{v_{6}}\vartheta_{v_{2}}$
$\displaystyle=(1+\vartheta_{v_{1}})^{2},$ (55)
where $v_{i}$ denotes the primitive generator of $l_{\gamma_{i}}$ for
$i\in\\{1,\dots,6\\}$, ordered cyclically. In [positive], Cheung and Magee
showed that the compactification of the cluster variety of type $B_{2}$ is
the del Pezzo surface of degree $6$.
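As with (52), the relations (55) define a recurrence $\vartheta_{v_{i+1}}=(1+\vartheta_{v_{i}})^{c_{i}}/\vartheta_{v_{i-1}}$ with exponents $c_{i}$ alternating between $1$ and $2$, and this recurrence is $6$-periodic (the finite-type periodicity for $B_{2}$), matching the six tori. A symbolic check of ours:

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)

# From (55): theta_{i+1} = (1 + theta_i)^{c_i} / theta_{i-1},
# with exponents c_i alternating 1, 2, 1, 2, ...
seq, exps = [a, b], [1, 2, 1, 2, 1, 2]
for c in exps:
    seq.append(sp.cancel((1 + seq[-1]) ** c / seq[-2]))

# Period six: theta_7 = theta_1 and theta_8 = theta_2.
assert sp.simplify(seq[6] - a) == 0
assert sp.simplify(seq[7] - b) == 0
```

For instance, starting from $\vartheta_{1}=\vartheta_{2}=1$ the recurrence gives $1,1,2,9,5,4,1,1,\dots$, closing up after six steps.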
[Figure: the BPS rays $l_{\gamma_{1}},\dots,l_{\gamma_{6}}$ with branch cuts labeled $M_{1}$ and $M_{2}$.]
Figure 11: The choice of a different branch cut for $X_{III}$.
To compare with the mirror constructed by Gross-Hacking-Keel, we take the
corresponding log Calabi-Yau pair $({Y},{D})$ with ${Y}$ the del Pezzo
surface of degree six. Since all del Pezzo surfaces of degree $6$ are
isomorphic, we will identify it with the blow-up of $\mathbb{P}^{2}$ at three
points: two non-toric points on the $y$-axis and one non-toric point on the
$x$-axis. The anti-canonical divisor ${D}$ is the proper transform of the
union of the $x$-, $y$-, and $z$-axes of $\mathbb{P}^{2}$. Denote by $H$ the
pull-back of the hyperplane class, and by $E_{1}$ (respectively
$E_{2},E_{3}$) the exceptional divisors of the blow-ups on the $x$-axis
(respectively the $y$-axis).
###### Lemma 6.4.
There is an isomorphism of affine manifolds $B_{{\mathrm{GHK}}}\cong B$.
###### Proof.
From [GHK]*Lemma 1.6, toric blow-ups correspond to refinements of the cone
decomposition but do not change the integral affine structure. We will find a
sequence of toric blow-ups $(\tilde{Y},\tilde{D})\rightarrow(Y,D)$ such that
not only does the corresponding integral affine structure with singularities
coincide with $B$, but its cone decomposition also coincides with the chamber
structure bounded by the BPS rays. Such $\tilde{Y}$ is the ordered blow-up of
the intersection point of the $x$\- and $z$-axes, then of the intersection of
the proper transform of the $z$-axis with the exceptional divisor, and then
of the intersection of the proper transforms of the $y$\- and $x$-axes. Then
we take $\tilde{D}$ to be the pull-back of the $x$-, $y$-, $z$-axes. If we
take the proper transform of the $y$-axis as $\tilde{D}_{1}$ and number the
boundary divisors in counter-clockwise order, then we have
$\tilde{D}_{i}^{2}=-1$ if $i$ is odd and $\tilde{D}_{i}^{2}=-2$ if $i$ is
even.
Using Lemma 6.2, we have
$\displaystyle l_{\gamma_{1}}=$ $\displaystyle\\{\check{x}>0,\check{y}=0\\},$
$\displaystyle l_{\gamma_{2}}=$ $\displaystyle\\{\check{y}>0,\check{x}=0\\},$
and we will identify $l_{\gamma_{1}}=\mathbb{R}_{>0}(1,0)$ and
$l_{\gamma_{2}}=\mathbb{R}_{>0}(0,1)$; the rest of the proof is similar to
that of Lemma 5.9. ∎
By the same argument as in Lemma 5.22, we have homeomorphisms $X_{III}\cong
Y\setminus D\cong\tilde{Y}\setminus\tilde{D}$, and $\tilde{Y}$ provides a
compactification of $X_{III}$. For the later discussion, we will replace
$(Y,D)$ by $(\tilde{Y},\tilde{D})$ for the rest of the section (see Remark
2.1). Similarly, we have the identification of the short exact sequences
(50). Next we need to compute the canonical scattering diagram for $(Y,D)$.
Let $D_{i}$ be the components of $D$, where $D_{i}$ is an exceptional curve
when $i$ is even.
###### Lemma 6.5.
Under the identification of integral affine structures with singularities
$B\cong B_{\mathrm{GHK}}$, the canonical scattering diagram of Gross-Hacking-
Keel coincides with the scattering diagram in Theorem 6.1 via identification
$z^{[C_{i}]-\phi_{\rho_{i}}(v_{i})}=z^{\gamma_{i}}$ (or
$z^{[C^{j}_{i}]-\phi_{\rho_{i}}(v_{i})}=z^{\gamma_{i}}$) for $i$ is odd (or
even).
###### Proof.
We will first compute all the $\mathbb{A}^{1}$-curves of $({Y},{D})$; this is
standard and we include it for completeness. Any irreducible curve, in
particular any irreducible $\mathbb{A}^{1}$-curve in $({Y},{D})$, is either
an exceptional curve of the blow-up from $\mathbb{P}^{2}$ or the proper
transform of a curve $C\subseteq\mathbb{P}^{2}$. All three exceptional curves
are $\mathbb{A}^{1}$-curves intersecting $D_{i}$ for $i$ odd. If $C$ is of
degree one and its proper transform is an $\mathbb{A}^{1}$-curve, then it
either
1. 1.
passes through two of the blow-up points, and its proper transform intersects
$\tilde{D}_{i}$ for $i$ odd. There are three such lines.
2. 2.
passes through one blow-up point and one intersection point of the toric
$1$-strata. There are three such lines, and their proper transforms intersect
$\tilde{D}_{i}$ for $i$ even.
There are no higher-degree curves whose proper transforms are
$\mathbb{A}^{1}$-curves, and we draw the canonical scattering diagram and the
corresponding $\mathbb{A}^{1}$-curves in Figure 12.
Since $D\in|-K_{Y}|$ is ample, there are no holomorphic curves contained in
$Y\setminus D$. In particular, all the simple $\mathbb{A}^{1}$-curves are
irreducible, and all the possible $\mathbb{A}^{1}$-curves are multiple covers
of the above ones. The contribution of multiple covers of degree $d$ is
$(-1)^{d-1}/d^{2}$ by [GPS]*Proposition 6.1. Then the lemma follows from the
definition of the canonical scattering diagram [GHK]*Definition 3.3. The
function attached to the ray $\rho_{i}$ is
$\displaystyle
f_{i}=\begin{cases}(1+z^{[C_{i}]-\phi_{\rho_{i}}(v_{i})}),&\mbox{ if $i$ is
odd,}\\\ \prod_{j=1}^{2}(1+z^{[C_{i}^{j}]-\phi_{\rho_{i}}(v_{i})}),&\mbox{ if
$i$ is even,}\end{cases}$ (56)
where $C_{i},C_{i}^{j}$ are the $\mathbb{A}^{1}$-curve classes corresponding
to $l_{\gamma_{i}}$ in Figure 12. The assumption
$Z_{\gamma_{2}}=Z_{\gamma_{3}}$ implies that $z^{[E_{2}]}=z^{[E_{3}]}$.
Notice that the monodromy of the only singular fibre shifts $\gamma_{i}$ to
$\gamma_{i+2}$. This implies that one would also need to identify
$\displaystyle z^{[E_{1}]}=z^{[H-E_{1}]}=z^{[2H-E_{1}-E_{2}-E_{3}]},\qquad
z^{[E_{i}]}=z^{[H-E_{i}]}=z^{[H-E_{1}-E_{i}]},\quad i=2,3.$
Equivalently, this corresponds to
$\displaystyle z^{[D_{i}]}=z^{[C_{i}]}=z^{[C_{i}^{j}]}=1.$
∎
[Figure: the rays $l_{\gamma_{1}},\dots,l_{\gamma_{6}}$ labeled by the curve classes $E_{1}$; $E_{i}$ ($i=2,3$); $H-E_{i}$ ($i=2,3$); $H-E_{1}$; $H-E_{1}-E_{i}$ ($i=2,3$); $2H-E_{1}-E_{2}-E_{3}$.]
Figure 12: The canonical scattering diagram and the $\mathbb{A}^{1}$-curves in
del Pezzo surfaces of degree $6$ (illustrated by a projection to
$\mathbb{P}^{2}$).
The GHK mirror can be computed via the spectrum of the algebra generated by
the theta functions. The products of the theta functions are
$\displaystyle\vartheta_{i-1}\vartheta_{i+1}=z^{[D_{i}]}\prod_{j=1}^{2}\left(\vartheta_{i}+z^{[C_{i}^{j}]}\right)\quad\text{for
$i$ even,}$
$\displaystyle\vartheta_{i-1}\vartheta_{i+1}=z^{[D_{i}]}\left(\vartheta_{i}+z^{[C_{i}]}\right)\quad\text{for
$i$ odd}.$
Again comparing with the analogous relations (6) from the
${\mathcal{X}}$-cluster algebra of type $B_{2}$, we conclude that the family
Floer mirror $\check{X}$ corresponds to the particular fibre of the GHK
mirror characterized by
$\displaystyle z^{[D_{i}]}=z^{[C_{i}]}=z^{[C_{i}^{j}]}=1.$
To sum up, we conclude the section with the following theorem.
###### Theorem 6.6.
The family Floer mirror of $X_{III}$ has a partial compactification as the
analytification of the $B_{2}$-cluster variety or the Gross-Hacking-Keel
mirror of $({Y},{D})$ described above Lemma 6.4. In particular, the family
Floer mirror of $X_{III}$ can be compactified as the analytification of a del
Pezzo surface of degree $6$.
## 7 Family Floer Mirror of $X_{IV}$
In this section, we will consider the case when $Y^{\prime}$ is a rational
elliptic surface with singular configuration $IV^{*}IV$ and $D^{\prime}$ is
the type $IV^{*}$ fibre. We claim that the family Floer mirror of $X$ is then
the del Pezzo surface of degree $4$. The argument is also similar to that in
Section 5. Such a rational elliptic surface $Y^{\prime}$ has Weierstrass model
$\displaystyle y^{2}=x^{3}+t^{2}s^{4}.$ (57)
###### Theorem 7.1.
[L12]*Theorem 4.14 There exist
${\gamma_{1}^{\prime}},{\gamma_{2}^{\prime}},{\gamma_{3}^{\prime}},{\gamma_{4}^{\prime}}\in
H_{2}(X,L_{u})\cong\mathbb{Z}^{4}$ such that
$\langle{\gamma_{1}^{\prime}},{\gamma_{i}^{\prime}}\rangle=1$,
$\langle{\gamma_{i}^{\prime}},{\gamma_{j}^{\prime}}\rangle=0$ and
$Z_{{\gamma_{i}^{\prime}}}=Z_{{\gamma_{j}^{\prime}}}$, for
$i,j\in\\{2,3,4\\}$. Moreover, if we set
$\displaystyle\gamma_{1}$ $\displaystyle=-{\gamma_{1}^{\prime}},\
\gamma_{2}={\gamma_{2}^{\prime}},\
\gamma_{3}={\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}+{\gamma_{3}^{\prime}}+{\gamma_{4}^{\prime}},\
\gamma_{4}={\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}+{\gamma_{3}^{\prime}},$
$\displaystyle\gamma_{5}$
$\displaystyle=2{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}+{\gamma_{3}^{\prime}}+{\gamma_{4}^{\prime}},\
\gamma_{6}={\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}},\
\gamma_{7}={\gamma_{1}^{\prime}},\ \gamma_{8}=-{\gamma_{4}^{\prime}}.$
Then
1. 1.
$f_{\gamma}(u)\neq 1$ if and only if $u\in l_{\gamma_{i}}$ and
$\gamma=\gamma_{i}$ for some $i\in\\{1,\cdots,8\\}$ .
2. 2.
In such cases,
$\displaystyle
f_{\gamma_{i}}=\begin{cases}1+T^{\omega(\gamma_{i})}z^{\partial\gamma_{i}}&\mbox{
if $i$ odd,}\\\ (1+T^{\omega(\gamma_{i})}z^{\partial\gamma_{i}})^{3}&\mbox{ if
$i$ even.}\end{cases}$
3. 3.
If we choose the branch cut between $l_{\gamma_{1}}$ and $l_{\gamma_{8}}$,
then the counter-clockwise monodromy $M$ across the branch cut is given by
$\displaystyle{\gamma_{1}^{\prime}}$
$\displaystyle\mapsto-({\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}+{\gamma_{3}^{\prime}}+{\gamma_{4}^{\prime}})$
$\displaystyle{\gamma_{2}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}}+{\gamma_{2}^{\prime}}$
$\displaystyle{\gamma_{3}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}}+{\gamma_{3}^{\prime}}$ (58)
$\displaystyle{\gamma_{4}^{\prime}}$
$\displaystyle\mapsto{\gamma_{1}^{\prime}}+{\gamma_{4}^{\prime}}.$ (59)
###### Lemma 7.2.
With suitable choice of coordinate $u$ on $B_{0}\cong\mathbb{C}^{*}$, we have
$\displaystyle
Z_{\gamma_{k}}(u)=\begin{cases}e^{\frac{5\pi}{6}i(k-1)}u^{\frac{2}{3}}&\mbox{if
$k$ odd,}\\\ \frac{1}{\sqrt{3}}e^{\frac{5\pi}{6}i(k-1)}e^{-\frac{\pi
i}{6}}u^{\frac{2}{3}}&\mbox{if $k$ even.}\end{cases}$ (60)
In particular, the angle between $l_{\gamma_{i}}$ and $l_{\gamma_{i+1}}$ is
$\frac{\pi}{4}$. The positions of the BPS rays are illustrated in Figure 13.
Figure 13: BPS rays near the singular fibre in $X_{IV}$. Note in Theorem 7.1,
we have $Z_{{\gamma_{i}^{\prime}}}=Z_{{\gamma_{j}^{\prime}}}$, for
$i,j\in\\{2,3,4\\}$.
###### Proof.
One can check that $Z_{\gamma}(u)=O(|u|^{\frac{2}{3}})$, so we may write
$Z_{\gamma_{k}}(u)=c_{k}u^{\frac{2}{3}}$. Using the relations among the
$\gamma_{i}$, a straightforward calculation shows that
$\displaystyle c_{1}=1,c_{2}=\frac{1}{\sqrt{3}}e^{-\frac{\pi
i}{6}},c_{3}=e^{-\frac{\pi i}{3}},c_{4}=-\frac{i}{\sqrt{3}}$
after a suitable normalization of the coordinate $u$. Then the relation
$M\gamma_{i}=\gamma_{i+4}$ determines the rest of the $c_{k}$. ∎
With the data above, an argument similar to that in Section 5.1 shows that
the family Floer mirror of $X_{IV}$ is a gluing of eight copies of
$\mathfrak{Trop}^{-1}(\mathbb{R}^{2}\setminus\\{0\\})\subseteq(\mathbb{G}_{m}^{an})^{2}$,
with the gluing functions in Theorem 7.1. As in Section 5.3, we may replace
the branch cut in Figure 13 by two branch cuts, as in Figure 14.
Figure 14: A choice of a different branch cut for $X_{IV}$
The scattering diagram of cluster type $G_{2}$ can be found in [GHKK]*Figure
1.2. One can show that the corresponding gluing functions of the
${\mathcal{X}}$ case are the same as those in Theorem 7.1 under suitable
identification. Then the family Floer mirror of ${X}_{IV}$ can be partially
compactified to a gluing of eight tori (up to GAGA) with the same gluing
functions as the ${\mathcal{X}}$-cluster variety of type $G_{2}$.
Next we will construct a log Calabi-Yau pair $(Y,D)$ such that the
corresponding Gross-Hacking-Keel mirror corresponds to the family Floer mirror
of $X_{IV}$. We will take
1. 1.
$Y$ to be the blow up of $\mathbb{P}^{2}$ at $4$ points, three of them
non-toric points on the $y$-axis and one a non-toric point on the $x$-axis.
2. 2.
$D$ to be the proper transform of the $x,y,z$-coordinate axes.
Let $\tilde{Y}$ be the successive toric blow up of $(Y,D)$ at the intersection
of the $x$- and $z$-axes, the intersection of the proper transform of the
$z$-axis with the exceptional divisor, the two nodes on the last exceptional
divisor, and then the proper transform of the intersection of the $y$- and
$z$-axes, in this order. Then take $\tilde{D}$ to be the proper transform of
$D$.
Denote by $H$ the pull-back of the hyperplane class on $\mathbb{P}^{2}$, by
$E_{1}$ the exceptional divisor of the blow up at the non-toric point on the
$x$-axis, and by $E_{2},E_{3},E_{4}$ the exceptional divisors of the blow ups
at the non-toric points on the $y$-axis.
Similarly to the argument in Section 5.2, we have the following lemma.
###### Lemma 7.3.
The complex affine structure on $B_{0}$ together with $l_{\gamma_{i}}$ is
isomorphic to the integral affine manifold $B_{\mathrm{GHK}}$ of
$(\tilde{Y},\tilde{D})$. Moreover, the BPS rays $l_{\gamma_{i}}$ give the
corresponding cone decomposition on $B_{\mathrm{GHK}}$ from
$(\tilde{Y},\tilde{D})$, and, under this identification, the wall functions
with the restriction $z^{[D_{i}]}=z^{[E_{i}]}=1$ coincide with the functions
in Theorem 7.1.
We can then compute the canonical scattering diagram for $(Y,D)$. In fact,
all the simple $\mathbb{A}^{1}$-curves contributing to the scattering diagram
are toric transverse in $(\tilde{Y},\tilde{D})$; they are depicted in Figure
15 below.
[Figure 15 labels: curve classes $E_{1}$; $E_{i}$, $i=2,3,4$; $H-E_{i}$,
$i=2,3,4$; $H-E_{1}$; $H-E_{1}-E_{i}$; $H-(\sum_{i=1}^{4}E_{i})$;
$2H-E_{1}-E_{i}-E_{j}$, $\\{i,j\\}=\\{2,3,4\\}$;
$3H-2E_{1}-E_{2}-E_{3}-E_{4}$; together with multiplicity labels on some
rays.]
Figure 15: The scattering diagram of Gross-Pandharipande-Siebert [GPS] and the
$\mathbb{A}^{1}$-curves corresponding to $X_{IV}$ (illustrated by a projection
to $\mathbb{P}^{2}$).
We conclude the section with the following theorem.
###### Theorem 7.4.
The family Floer mirror of $X_{IV}$ has a partial compactification as the
analytification of the $G_{2}$-cluster variety or the Gross-Hacking-Keel
mirror of a suitable pair $({Y},{D})$.
## 8 Further Remarks
Here we consider the family Floer mirror of $X$ without reference to the
geometry of its compactification. Following the idea of Gross-Hacking-Keel as
summarized in Section 2.1, one would need to use the theta functions, the
tropicalization of the counting of Maslov index two discs, to construct a
(partial) compactification of the original mirror. Assume that $X=X_{*}$ in
the previous sections admits a compactification to a rational surface with an
anti-canonical cycle at infinity. Moreover, assume that there is a certain
compatibility between the compactification and the asymptotic behaviour of
the metric. Then one can follow an argument similar to the work of the second
author [L14] and prove that the counting of Maslov index two discs with
Lagrangian fibre boundary conditions can be computed by the weighted count of
broken lines. However, the authors are unaware of such asymptotic estimates
of the metrics in the literature.
One can further construct the pair $({Y},{D})$ such that the corresponding
monodromy is conjugate to the monodromy of the type
$IV^{*},III^{*},II^{*},I_{0}^{*}$. For instance, the case of $I_{0}^{*}$ can
be realized by a cubic surface with anti-canonical cycle consisting of three
$(-1)$-curves [ghks_cubic]. The authors would expect that the family Floer
mirror of $X=Y\setminus D$ coincides with a particular fibre in the mirror
family constructed by Gross-Hacking-Keel, and that the families of Maslov
index zero discs emanating from the singular fibres in $X$ are in one-to-one
correspondence with the $\mathbb{A}^{1}$-curves of the pair $({Y},{D})$. This
may help to understand the Floer theory of more singular Lagrangians. In this
case, the wall functions are algebraic functions and GAGA still applies.
Although the walls are dense, it is likely that the mirror can be covered by
finitely many tori up to some codimension two locus. In general, the wall
functions may not be algebraic a priori and GAGA may not apply directly. The
authors leave this for future work.
## References
Man-Wai Cheung
Department of Mathematics, Harvard University, One Oxford Street, Cambridge,
MA 02138, USA
e-mail: <EMAIL_ADDRESS>
Yu-Shen Lin
Department of Mathematics and Statistics, Boston University, 111 Cummington
Mall, Boston, MA 02215, USA
e-mail: <EMAIL_ADDRESS>
Merging-Free Partitions and Run-Sorted Permutations
Fufa Beyene (corresponding author)
Department of Mathematics, Addis Ababa University
P.O. Box 1176
Addis Ababa, Ethiopia
<EMAIL_ADDRESS>
Roberto Mantaci
IRIF, Université de Paris
8, Place Aurélie Nemours, 75013
Paris, France
<EMAIL_ADDRESS>
###### Abstract
In this paper, we study merging-free partitions with their canonical forms and
run-sorted permutations. We give a combinatorial proof of the conjecture made
by Nabawanda et al. We describe the distribution of the statistics of runs and
right-to-left minima over the set of run-sorted permutations and we give the
exponential generating function for their joint distribution. We show the
number of right-to-left minima follows the shifted distribution of the
Stirling numbers of the second kind. We also prove that the non-crossing
merging-free partitions are enumerated by powers of $2$. We use one of the
constructive proofs given in the paper to implement an algorithm for the
exhaustive generation of run-sorted permutations by number of runs.
## 1 Introduction
Given a non-empty finite subset $A$ of positive integers, a _set partition_
$P$ of $A$ is a collection of disjoint non-empty subsets $A_{i}$ called
_blocks_ of $A$ such that $\cup_{i=1}^{k}A_{i}=A$ [5, 9]. We shall use the
notation $[n]:=\\{1,2,\ldots,n\\}$, where $n$ is a fixed positive integer. It
is well known that set partitions over $[n]$ and set partitions over $[n]$
having $k$ blocks are counted by the Bell numbers $b_{n}$ and by the Stirling
numbers of the second kind $S(n,k)$, respectively ([5, 15, 19]). Mansour [9]
defined the _block representation_ of a set partition where the elements in a
block are arranged increasingly and the blocks are arranged in increasing
order of their first elements. Mansour also gave a way to encode a set
partition (in its block representation) by its _canonical form_ , that is,
every integer is encoded by the number of the block it belongs to. We note
that canonical forms of set partitions coincide with the so-called _restricted
growth functions_ ($\operatorname{RGF}$).
Callan [6] introduced the “flattening” operation ($\operatorname{Flatten}$) on
set partitions, which acts in such a way that a permutation $\sigma$ is
obtained from a set partition $P$ by removing the separators enclosing the
different blocks of $P$ in its block representation. For example, the set
partition $P=126/3/48/57$ is in the block representation, and so we remove the
separators $``/"$ and obtain the permutation $\sigma=12634857$. As a result of
Callan’s work, such objects are getting the attention of different researchers
and several new findings are emerging, for example see [1, 10, 13].
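The $\operatorname{Flatten}$ operation is easy to make concrete. The
following Python sketch (the function names are ours, not from the paper)
reproduces Callan's example:

```python
def block_representation(partition):
    """Sort each block increasingly, then sort the blocks by their minima."""
    return sorted((sorted(b) for b in partition), key=min)

def flatten(partition):
    """Remove the separators: concatenate the blocks of the block representation."""
    return [x for block in block_representation(partition) for x in block]

# Callan's example: Flatten(126/3/48/57) = 12634857.
P = [{1, 2, 6}, {3}, {4, 8}, {5, 7}]
print(flatten(P))  # [1, 2, 6, 3, 4, 8, 5, 7]
```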
In the literature, permutations obtained this way are sometimes called
“flattened partitions”. We found this term somewhat confusing because these
objects are permutations and not partitions; consequently, since the runs of
the resulting permutations are sorted by the increasing values of their
respective minima, we chose to adopt the term run-sorted permutations already
used by Alexandersson and Nabawanda [1]. Run-sorted permutations are counted
by the shifted Bell numbers (see [13]).
The same permutation can be obtained by flattening several set partitions. For
instance, the permutation $\sigma=12634857$ can also be obtained by flattening
the set partition $P^{\prime}=126/348/57$. Among all the set partitions having
the same $\operatorname{Flatten}$, we will distinguish the unique one whose
number of blocks equals the number of runs of the permutation obtained
by flattening it (this is the set partition $P^{\prime}$ for the permutation
$\sigma$). For obvious reasons we named these objects merging-free partitions.
The $\operatorname{Flatten}$ operation clearly becomes injective and hence a
bijection if restricted to the set of merging-free partitions.
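Since the blocks of the merging-free partition flattening to a run-sorted
permutation $\sigma$ are exactly the runs of $\sigma$, that partition can be
recovered by cutting $\sigma$ at its descents. A minimal Python sketch (our
own function name):

```python
def runs(perm):
    """Split a permutation into its maximal increasing runs."""
    result, current = [], [perm[0]]
    for x in perm[1:]:
        if x > current[-1]:
            current.append(x)
        else:
            result.append(current)
            current = [x]
    result.append(current)
    return result

# sigma = Flatten(126/3/48/57) = Flatten(126/348/57); its runs give back
# the merging-free partition P' = 126/348/57.
sigma = [1, 2, 6, 3, 4, 8, 5, 7]
print(runs(sigma))  # [[1, 2, 6], [3, 4, 8], [5, 7]]
```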
In this article we study some properties of run-sorted permutations as well as
of merging-free partitions and their canonical forms, we compute the
distribution of some statistics (runs, right-to-left minima, $\ldots$) over
these sets, we relate these classes to the classes of separated partitions and
of non-crossing partitions and we provide an exhaustive generation algorithm
for the class of run-sorted permutations partitioned by number of runs. In
particular, in Section 2, we give the characterization of the canonical forms
of merging-free partitions, and show that they can be bijectively related to
$\operatorname{RGF}$s of one size smaller.
In Section 3, we give a combinatorial bijective proof of a recurrence relation
in Theorem 23 satisfied by run-sorted permutations over $[n]$ having $k$ runs,
a recurrence relation that was conjectured by Nabawanda et al. [13]. We also
give the interpretation of the proof of the same result by working on the
canonical forms of merging-free partitions.
In Section 4.1, we prove that the distribution of right-to-left minima over
run-sorted permutations is the same as the distribution of the number of
blocks over set partitions of one size smaller (and also given by the shifted
Stirling number of the second kind). We refine the recurrence relation
satisfied by the number of run-sorted permutations over $[n]$ having $k$ runs
by counting these permutations by number of runs and by number of right-to-
left minima simultaneously and we obtain an exponential generating function
for the associated three-variables formal series. Munagi [12] proved that the
set partitions over $[n]$ having $k$ blocks such that no two consecutive
integers in the same block are also counted by the shifted Stirling numbers of
the second kind. So, in this section we also show that these partitions
bijectively correspond to run-sorted permutations over $[n]$ having $k$ right-
to-left minima.
Non-crossing partitions are Catalan enumerated objects introduced in the
founding work of Becker [3] and later deeply studied by different eminent
scholars like Kreweras and Simion ([7, 17]). We characterize the class of non-
crossing merging-free partitions in Section 5 and we enumerate them according
to their number of blocks and show that the total number of such partitions is
counted by the power of $2$.
Finally, Section 6 presents an exhaustive generation algorithm for run-sorted
permutations partitioned by the number of runs, based on the recurrence
relation proved in Theorem 23 and using the classical dynamic programming
techniques.
### 1.1 Definitions, Notation, and Preliminaries
###### Definition 1.
A set partition $P$ of $[n]$ is defined as a collection $B_{1},\ldots,B_{k}$
of nonempty disjoint subsets such that $\cup_{i=1}^{k}B_{i}=[n]$. The subsets
$B_{i}$ will be referred to as “blocks”.
###### Definition 2.
A set partition $P=B_{1}/\cdots/B_{k}$ is said to be the _block
representation_ of $P$ if the blocks $B_{1},\ldots,B_{k}$ are sorted in such
way that $\min(B_{1})<\min(B_{2})<\cdots<\min(B_{k})$ and the elements of
every block are arranged in increasing order.
We will always write set partitions in their block representation. Let
$\mathcal{SP}(n)$ denote the set of all set partitions over $[n]$ and
$b_{n}=|\mathcal{SP}(n)|$ the $n$-th Bell number.
###### Definition 3.
The _canonical form_ of a set partition of $[n]$ is an $n$-tuple indicating the
block in which each integer occurs, that is, $f=f_{1}f_{2}\cdots f_{n}$ such
that $j\in B_{f_{j}}$ for all $j$ with $1\leq j\leq n$.
###### Example 4.
If $P=138/2/47/56\in\mathcal{SP}(n)$, then its canonical form is $f=12134431$.
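The canonical form can be computed directly from the block representation; a
Python sketch (our own names), checked against Example 4:

```python
def canonical_form(partition):
    """f_j = index (1-based) of the block containing j, blocks ordered by minima."""
    blocks = sorted((sorted(b) for b in partition), key=min)
    block_of = {x: idx for idx, block in enumerate(blocks, start=1)
                for x in block}
    return [block_of[j] for j in range(1, len(block_of) + 1)]

# Example 4: P = 138/2/47/56 has canonical form 12134431.
P = [{1, 3, 8}, {2}, {4, 7}, {5, 6}]
print(canonical_form(P))  # [1, 2, 1, 3, 4, 4, 3, 1]
```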
###### Definition 5.
A _restricted growth function_ $(\operatorname{RGF})$ over $[n]$ is a function
$f:[n]\mapsto[n]$, where $f=f_{1}\cdots f_{n}$ such that $f_{1}=1$ and
$f_{i}\leq 1+\max\\{f_{1},\ldots,f_{i-1}\\}$ for $2\leq i\leq n$, or
equivalently, such that the set $\\{f_{1},f_{2},\ldots,f_{i}\\}$ is an integer
interval for all $i\in[n]$.
The canonical forms of set partitions are exactly the restricted growth
functions ($\operatorname{RGF}$). We let $\operatorname{RGF}(n)$ denote the
set of all restricted growth functions over $[n]$. We will note
$f\in\operatorname{RGF}(n)$ as a word $f_{1}f_{2}\cdots f_{n}$ over the
alphabet $[n]$, where $f_{i}=f(i)$. We define the statistic of the set of
left-to-right maxima of $f$ by
$\operatorname{LrMax}(f)=\\{i:f_{i}>f_{j}\text{ for all }j<i,\,1\leq i\leq n\\},$
and the statistic of the set of weak left-to-right maxima of $f$ by
$\operatorname{WLrMax}(f)=\\{i:f_{i}\geq f_{j}\text{ for all }j<i,\,1\leq i\leq n\\}.$
We also use the notation $\operatorname{lrmax}(f):=|\operatorname{LrMax}(f)|$
and $\operatorname{wlrmax}(f):=|\operatorname{WLrMax}(f)|$.
###### Example 6.
If $f=121132342\in\operatorname{RGF}(9)$, then observe that
$\operatorname{LrMax}(f)=\\{1,2,5,8\\}$ and
$\operatorname{WLrMax}(f)=\\{1,2,5,7,8\\}$.
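These statistics translate directly into code; the sketch below (our own
names) verifies Example 6:

```python
def lr_max(f):
    """Positions (1-based) of strict left-to-right maxima of the word f."""
    return [i + 1 for i in range(len(f))
            if all(f[i] > f[j] for j in range(i))]

def wlr_max(f):
    """Positions (1-based) of weak left-to-right maxima of the word f."""
    return [i + 1 for i in range(len(f))
            if all(f[i] >= f[j] for j in range(i))]

f = [1, 2, 1, 1, 3, 2, 3, 4, 2]  # Example 6
print(lr_max(f))   # [1, 2, 5, 8]
print(wlr_max(f))  # [1, 2, 5, 7, 8]
```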
###### Definition 7.
Let $P=B_{1}/\cdots/B_{k}$ be a set partition. We say that $P$ is merging-free
if $\max(B_{i})>\min(B_{i+1})$ for all $1\leq i\leq k-1$.
A permutation $\pi$ over $[n]$ will be represented in the one-line notation,
$\pi=\pi_{1}\pi_{2}\cdots\pi_{n}$. In particular, every permutation can be
considered as a word of length $n$, with letters in $[n]$. We define the set
of right-to-left minima of $\pi$ by
$\operatorname{RlMin}(\pi)=\\{\pi_{i}:\pi_{i}<\pi_{j}\text{ for all }j>i\\},$
and we use the notation
$\operatorname{rlmin}(\pi):=|\operatorname{RlMin}(\pi)|$.
###### Definition 8.
A maximal increasing subsequence of consecutive letters in the word of a
permutation $\pi$ is called a run.
###### Definition 9.
_Flattening_ a set partition is an operation by which we obtain a permutation
from the set partition $P=B_{1}/\cdots/B_{k}$ over $[n]$ by concatenating its
blocks. We denote the resulting permutation by
$\operatorname{Flatten}(P)=\pi$.
If a permutation is obtained by flattening a set partition, then its runs are
ordered in such a way that the minima of the runs are increasing; therefore,
we will call all permutations in $\operatorname{Flatten}(\mathcal{SP}_{n})$
_run-sorted permutations_. We let $\mathcal{RSP}(n)$ denote the set of all
run-sorted permutations over $[n]$ and $r_{n}$ its cardinality.
###### Remark 10.
Merging-free partitions over $[n]$ and run-sorted permutations over $[n]$ are
in bijection, because the restriction of $\operatorname{Flatten}$ to the
merging-free partitions is a bijection.
Nabawanda et al. [13] proved the following result.
###### Proposition 11.
The number of set partitions over $[n]$ and the number of run-sorted
permutations over $[n+1]$ (and therefore, the number of merging-free
partitions over $[n+1]$) are equal. That is, $r_{n+1}=b_{n}$ for all $n\geq
1$.
###### Proof.
We give a sketch of the proof. Let $P=B_{1}/\cdots/B_{k}\in\mathcal{SP}(n)$.
The corresponding run-sorted permutation in $\mathcal{RSP}(n+1)$ is
constructed as follows: move the minimum element of each block to the end of
its block, remove the slashes, increase every integer by 1, and finally attach
the integer 1 at the front. Conversely, we construct the set partition over
$[n]$ corresponding to a run-sorted permutation $\pi\in\mathcal{RSP}(n+1)$ as
follows. Put a slash after each right-to-left minimum of $\pi$, then delete
the integer $1$, decrease every integer by 1, and finally arrange the elements
of each block in increasing order.∎
###### Example 12.
If $P=14/258/37/6\in\mathcal{SP}(8)$, then by the above operation we obtain
the run-sorted permutation $\pi=152693847\in\mathcal{RSP}(9)$. Conversely, for
$\pi=152693847\in\mathcal{RSP}(9)$ we have
$\operatorname{RlMin}(\pi)=\\{1,2,3,4,7\\}$. So by putting a slash after each
right-to-left minimum we obtain $1/52/693/84/7\longrightarrow 14/258/37/6=P$.
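Both directions of the construction in the proof of Proposition 11 are
mechanical; the following Python sketch (our own function names, assuming
the description above) reproduces Example 12:

```python
def partition_to_rsp(partition):
    """Set partition of [n] -> run-sorted permutation of [n+1] (Prop. 11)."""
    blocks = sorted((sorted(b) for b in partition), key=min)
    word = []
    for b in blocks:
        word.extend(b[1:] + [b[0]])  # move the block minimum to the end
    return [1] + [x + 1 for x in word]

def rsp_to_partition(pi):
    """Inverse: cut after each right-to-left minimum, drop 1, shift down."""
    rl_min = {pi[i] for i in range(len(pi))
              if all(pi[i] < pi[j] for j in range(i + 1, len(pi)))}
    blocks, current = [], []
    for x in pi:
        current.append(x)
        if x in rl_min:
            blocks.append(current)
            current = []
    # delete the block [1], decrease by 1, sort each block increasingly
    return [sorted(y - 1 for y in b) for b in blocks if b != [1]]

P = [{1, 4}, {2, 5, 8}, {3, 7}, {6}]  # Example 12
pi = partition_to_rsp(P)
print(pi)                    # [1, 5, 2, 6, 9, 3, 8, 4, 7]
print(rsp_to_partition(pi))  # [[1, 4], [2, 5, 8], [3, 7], [6]]
```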
## 2 Canonical forms of merging-free partitions
In this section we characterize the $\operatorname{RGF}$s corresponding to
merging-free partitions and we present some results related to these canonical
forms.
###### Remark 13.
Let $f=f_{1}\cdots f_{n}$ be the canonical form of a set partition
$P=B_{1}/\cdots/B_{k}$ over $[n]$ having $k$ blocks. We have
$i\in\operatorname{LrMax}(f)$ if and only if $i=\min(B_{f_{i}})$.
###### Proposition 14.
There is a bijection between the set of merging-free partitions over $[n]$ and
the set $T_{n}$ of $\operatorname{RGF}$s $f=f_{1}f_{2}\cdots f_{n}$ over $[n]$
satisfying the condition that every left-to-right maximum letter $s>1$ of $f$
has at least one occurrence of $s-1$ on its right.
###### Proof.
If $P=B_{1}/B_{2}/\cdots/B_{k}$ is a merging-free partition with $k$ blocks,
then $\min(B_{s-1})<\min(B_{s})$ and
$\max(B_{s-1})>\min(B_{s}),~{}s=2,\ldots,k$. Note that every leftmost
occurrence of a letter in $f$ is a left-to-right maximum letter. The positions
of the leftmost and rightmost occurrences of the letter $s$ in $f$ correspond
to the minimum and the maximum elements of the block $B_{s}$, respectively.
Thus, if $1<s\leq k$, then $f_{\min(B_{s})}=s$ and $f_{\max(B_{s-1})}=s-1$;
since $\max(B_{s-1})>\min(B_{s})$, this occurrence of $s-1$ lies to the right
of the left-to-right maximum $s$. ∎
###### Definition 15.
Let $f=f_{1}f_{2}\cdots f_{n}\in\operatorname{RGF}(n)$. If the occurrence of
the letter $f_{i}$ in $f$ has no repetition, then we say that $f_{i}$ is
unique in $f$, that is, $f_{i}$ is unique if and only if $i$ forms a singleton
block in the partition. A weak left-to-right maximum $i$ in $f$ for which
there exists $i_{0}<i$ such that $f_{i}=f_{i_{0}}$ is called a non-strict
left-to-right maximum.
We shall give a combinatorial proof of Proposition 11 in terms of canonical
forms. For each $i\in[n]$ we define $u_{i}$ as the number of unique left-to-
right maximum letters of $f$ in the positions $1,\ldots,i-1$ that are smaller
than $f_{i}$. We let $u=(u_{1},\ldots,u_{n})$. Let
$\delta=(\delta_{1},\ldots,\delta_{n})$, where
$\delta_{i}=\begin{cases}1,&\text{ if $f_{i}$ is a non-unique left-to-right
maximum letter of $f$;}\\\ 0,&\text{ otherwise}.\end{cases}$
Define a mapping $\alpha:\operatorname{RGF}(n)\mapsto T_{n+1}$, where
$T_{n+1}$ is the set of canonical forms of merging-free partitions over
$[n+1]$, by $\alpha(f)=1\cdot f^{\prime}$, that is, a concatenation of $1$ and
$f^{\prime}$, where $f^{\prime}=f^{\prime}_{1}\cdots f^{\prime}_{n}$ is
obtained from $f$ as follows:
$f^{\prime}=f-u+\delta.$
###### Example 16.
If $f=1213124$, then $f_{1}=1$ and $f_{2}=2$ are non-unique left-to-right
maximum letters, while $f_{4}=3$ and $f_{7}=4$ are the unique ones. So,
$u=(0,0,0,0,0,0,1)$ and $\delta=(1,1,0,0,0,0,0)$. Thus,
$f^{\prime}=f-u+\delta=2313123$ and $\alpha(f)=1\cdot f^{\prime}=12313123\in
T_{8}$.
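The map $\alpha$ can be sketched in Python as follows (our own helper
names); the sketch reproduces Example 16:

```python
def alpha(f):
    """alpha(f) = 1 . (f - u + delta), as defined above."""
    n = len(f)
    lrmax = [i for i in range(n) if all(f[i] > f[j] for j in range(i))]
    # unique left-to-right maximum letters occur exactly once in f
    unique = {f[i] for i in lrmax if f.count(f[i]) == 1}
    u = [sum(1 for s in unique if s < f[i] and f.index(s) < i)
         for i in range(n)]
    delta = [1 if i in lrmax and f.count(f[i]) > 1 else 0 for i in range(n)]
    return [1] + [f[i] - u[i] + delta[i] for i in range(n)]

print(alpha([1, 2, 1, 3, 1, 2, 4]))  # [1, 2, 3, 1, 3, 1, 2, 3]
```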
###### Lemma 17.
If $f\in\operatorname{RGF}(n)$ and $f^{\prime}$ is obtained from $f$ as in the
above construction, then
$\operatorname{LrMax}(f)\subseteq\operatorname{WLrMax}(f^{\prime})$.
###### Proof.
Let $\operatorname{LrMax}(f)=\\{i_{1},\ldots i_{k}\\}$. We proceed by
induction on $k$. For $k=1$, note that $i_{1}=1$ and
$f_{1}^{\prime}=f_{1}-u_{1}+\delta_{1}=f_{1}+\delta_{1}\in\\{1,2\\}$. Thus,
the assertion is true for the basis step. Suppose that the assertion is true
for $k-1$, and we show that $i_{k}$ is a weak left-to-right maximum in
$f^{\prime}$. By definition we have
$u_{i_{k}}=\begin{cases}u_{i_{k-1}}+1,&\text{ if }f_{i_{k-1}}\text{ is
unique;}\\\ u_{i_{k-1}},&\text{ if }f_{i_{k-1}}\text{ is non-
unique.}\end{cases}$
If $f_{i_{k}}$ is unique, then $\delta_{i_{k}}=0$ and in either cases we have
$\displaystyle f_{i_{k}}^{\prime}=f_{i_{k}}-u_{i_{k}}=f_{i_{k-1}}^{\prime}.$
If $f_{i_{k}}$ is non-unique, then $\delta_{i_{k}}=1$ and in either cases we
have
$\displaystyle
f_{i_{k}}^{\prime}=f_{i_{k}}-u_{i_{k}}+1=f_{i_{k-1}}^{\prime}+1.$
Therefore, in all of the above cases we have $f_{i_{k}}^{\prime}\geq
f_{i_{k-1}}^{\prime}$. The letters at the intermediate positions are not weak
left-to-right maxima, hence they are not greater than
$f_{i_{k-1}}^{\prime}$. Thus, by the induction hypothesis, $i_{k}$ is a weak
left-to-right maximum in $f^{\prime}$. ∎
###### Lemma 18.
For all $f\in\operatorname{RGF}(n)$ we have that $\alpha(f)\in T_{n+1}$.
###### Proof.
By the above lemma it is easy to see that $1\cdot
f^{\prime}\in\operatorname{RGF}(n+1)$. Let
$f^{\prime}=f_{1}^{\prime}f_{2}^{\prime}\cdots f_{n}^{\prime}$ and let
$f_{i}^{\prime}>1$ be a left-to-right maximum letter in $1\cdot f^{\prime}$,
then $f_{i}$ is a non-unique left-to-right maximum letter in $f$. This implies
that there is some $j>i$ such that $f_{i}=f_{j}$, and hence
$u_{i}=u_{j},\delta_{j}=0$. So,
$f^{\prime}_{i}=f_{i}-u_{i}+1=f_{j}^{\prime}+1$. Therefore, every
left-to-right maximum letter $s>1$ of $1\cdot f^{\prime}$ has some occurrence
of $s-1$ to its right, and hence $1\cdot f^{\prime}\in T_{n+1}$. ∎
We now define a map $\beta:T_{n+1}\mapsto\operatorname{RGF}(n)$ which
associates each $1\cdot g=1\cdot g_{1}\cdots g_{n}\in T_{n+1}$ with a function
$\beta(1\cdot g)=g^{\prime}=g^{\prime}_{1}g^{\prime}_{2}\cdots
g^{\prime}_{n}$, where $g^{\prime}$ is obtained from $g$ as follows. For each
$i\in[n]$, we let $v_{i}$ denote the number of non-strict left-to-right
maximum letters in $g$ that are less than or equal to $g_{i}$ in the positions
$1,\ldots,i-1$. Let $v=(v_{1},\ldots,v_{n})$. Further, let
$\delta^{\prime}=(\delta^{\prime}_{1},\ldots,\delta^{\prime}_{n})$, where
$\delta^{\prime}_{i}=\begin{cases}1,&\text{ if $g_{i}$ is left-to-right
maximum of $g$};\\\ 0,&\text{ otherwise}.\end{cases}$
Then, $g^{\prime}$ is obtained from $g$ as follows:
$g^{\prime}=g+v-\delta^{\prime}.$
For instance, if $1\cdot g=122134321\in T_{9}$, then
$g=22134321,v=(0,0,0,1,1,1,1,0)$, and $\delta^{\prime}=(1,0,0,1,1,0,0,0)$.
Thus, $g^{\prime}=12134431\in\operatorname{RGF}(8)$. Note that
$\beta=\alpha^{-1}$ and as a result, we have the following proposition.
###### Proposition 19.
The mapping $\alpha$ from the set $\operatorname{RGF}(n)$ to the set $T_{n+1}$
is a bijection. ∎
###### Corollary 20.
If $f\in\operatorname{RGF}(n)$ and $\alpha(f)=1\cdot f^{\prime}$, then
$\operatorname{LrMax}(f)=\operatorname{WLrMax}(f^{\prime})$. ∎
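A short Python sketch of $\beta$ (our own helper names) recovers the worked
example $\beta(122134321)=12134431$ given above:

```python
def beta(word):
    """beta(1.g) = g + v - delta', as defined above (word starts with 1)."""
    g = word[1:]
    n = len(g)
    strict = [i for i in range(n) if all(g[i] > g[j] for j in range(i))]
    weak = [i for i in range(n) if all(g[i] >= g[j] for j in range(i))]
    non_strict = [i for i in weak if i not in strict]
    v = [sum(1 for p in non_strict if p < i and g[p] <= g[i])
         for i in range(n)]
    delta = [1 if i in strict else 0 for i in range(n)]
    return [g[i] + v[i] - delta[i] for i in range(n)]

print(beta([1, 2, 2, 1, 3, 4, 3, 2, 1]))  # [1, 2, 1, 3, 4, 4, 3, 1]
```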
We now evaluate the number $l_{n}$ of $f\in\operatorname{RGF}(n)$ having the
sequence $u=0$, that is, if $f=f_{1}f_{2}\cdots f_{n}$, then for each
$i\in[n]$ there is no unique left-to-right maximum letter smaller than $f_{i}$
on its left. The set partitions corresponding to such functions are exactly
those satisfying the condition that their blocks have size at least two except
for the last block, which may be a singleton. The sequence of numbers
$l_{n}$, $n\geq 0$, is OEIS sequence A346771.
###### Theorem 21.
For all $n\geq 1$ we have
$l_{n}=\sum_{k=1}^{n-1}{n-1\choose k}l_{n-k-1},l_{0}=l_{1}=1.$ (1)
###### Proof.
Let $f\in\operatorname{RGF}(n)$ satisfy the above condition. Since $f_{1}=1$
is the smallest integer, every such function has at least two $1$s. Suppose
that $f$ has $k+1$ occurrences of $1$s. If we delete all the $1$s and decrease
each of the remaining integers by $1$, then we obtain a $\operatorname{RGF}$
over $[n-k-1]$, with the same condition as $f$. So, there are $l_{n-k-1}$ such
functions. We now choose $k$ positions from $\\{2,3,\ldots,n\\}$ where $1$ can
be inserted, and this is possible in $\displaystyle{n-1\choose k}$ ways.
Therefore, by applying the product rule and then taking the sum over all
possible $k$ we have the right hand side of (1). ∎
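Recurrence (1) is immediate to iterate; a minimal Python sketch giving the
first few values $l_{0},\ldots,l_{5}=1,1,1,3,7,23$:

```python
from math import comb

def l_sequence(n_max):
    """Iterate the recurrence l_n = sum_{k=1}^{n-1} C(n-1,k) l_{n-k-1}."""
    l = [1, 1]  # l_0 = l_1 = 1
    for n in range(2, n_max + 1):
        l.append(sum(comb(n - 1, k) * l[n - k - 1] for k in range(1, n)))
    return l

print(l_sequence(5))  # [1, 1, 1, 3, 7, 23]
```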
## 3 Run distribution in run-sorted permutations
The following table presents the first few values $r_{n,k}$ of the number of
run-sorted permutations over $[n]$ having $k$ runs (see A124324).
$n\backslash k$ | 1 | 2 | 3 | 4 | 5 | 6 | 7
---|---|---|---|---|---|---|---
1 | 0 | | | | | |
2 | 1 | 0 | | | | |
3 | 1 | 1 | 0 | | | |
4 | 1 | 4 | 0 | 0 | | |
5 | 1 | 11 | 3 | 0 | 0 | |
6 | 1 | 26 | 25 | 0 | 0 | 0 |
7 | 1 | 57 | 130 | 15 | 0 | 0 | 0
Table 1: The values of $r_{n,k}$ for $1\leq k,n\leq 7$
###### Remark 22.
The number $k$ of runs of a run-sorted permutation over $[n]$ satisfies the
condition
$1\leq k\leq\lceil\frac{n}{2}\rceil,~{}n\geq 1,$
because each run except the last has length at least $2$.
The following result was conjectured by Nabawanda et al. [13], who also gave
a justification of the first term of the right-hand side of (2). We were the
first to provide a combinatorial bijective proof justifying the second term,
thus proving the conjecture; we present the complete combinatorial proof here
using our bijection.
###### Theorem 23.
The number $r_{n,k}$ of run-sorted permutations of $[n]$ having $k$ runs
satisfies the recurrence relation
$r_{n,k}=kr_{n-1,k}+(n-2)r_{n-2,k-1},~{}~{}n\geq 2,~{}k\geq 1,$ (2)
where $r_{0,0}=1,~{}r_{1,0}=0,~{}r_{1,1}=1$.
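Before turning to the proof, recurrence (2) can be iterated and checked
against Table 1 (a minimal Python sketch; the row sums reproduce the shifted
Bell numbers):

```python
def run_sorted_counts(n_max):
    """Table r[n][k] from r_{n,k} = k*r_{n-1,k} + (n-2)*r_{n-2,k-1}."""
    r = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    r[0][0] = 1
    r[1][1] = 1
    for n in range(2, n_max + 1):
        for k in range(1, n_max + 1):
            r[n][k] = k * r[n - 1][k] + (n - 2) * r[n - 2][k - 1]
    return r

r = run_sorted_counts(7)
print([r[7][k] for k in range(1, 5)])  # [1, 57, 130, 15], as in Table 1
print(sum(r[7]))                       # 203, the Bell number b_6
```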
In order to prove this result, we partition the set $\mathcal{RSP}(n,k)$ of
run-sorted permutations over $[n]$ having $k$ runs into two subsets:
$\mathcal{RSP}^{(1)}(n,k)$ and $\mathcal{RSP}^{(2)}(n,k)$, where
$\mathcal{RSP}^{(1)}(n,k)$ is the set of elements of $\mathcal{RSP}(n,k)$ in
which the removal of the integer $n$ does not decrease the number of runs and
$\mathcal{RSP}^{(2)}(n,k)$ is the set of elements of $\mathcal{RSP}(n,k)$ in
which the removal of the integer $n$ decreases the number of runs; this
happens exactly when the integer $n$ occurs between two integers $x$ and $y$ with
$x<y$. For example, $12435\in\mathcal{RSP}^{(1)}(5,2)$ and
$15234\in\mathcal{RSP}^{(2)}(5,2)$. We will denote the cardinalities of these
subsets by $r^{(1)}_{n,k}$ and $r^{(2)}_{n,k}$, respectively.
Let $\phi:[k]\times\mathcal{RSP}(n-1,k)\mapsto\mathcal{RSP}^{(1)}(n,k)$ be
the map associating with each element
$(i,\sigma)\in[k]\times\mathcal{RSP}(n-1,k)$ the permutation
$\sigma^{\prime}=\phi(i,\sigma)$ obtained from $\sigma$ by inserting $n$ at
the end of the $i$-th run of $\sigma$. It is easy to see that $\phi$ is a
bijection (see [13], p. 6).
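A Python sketch of $\phi$ (our own function name), checked on the example
$12435\in\mathcal{RSP}^{(1)}(5,2)$ mentioned above:

```python
def phi(i, sigma):
    """Insert n = len(sigma)+1 at the end of the i-th run of sigma (1-based i)."""
    n = len(sigma) + 1
    result, run_index = [], 1
    for pos, x in enumerate(sigma):
        result.append(x)
        end_of_run = pos == len(sigma) - 1 or sigma[pos + 1] < x
        if end_of_run:
            if run_index == i:
                result.append(n)
            run_index += 1
    return result

# 1243 has runs 124 | 3; inserting 5 at the end of the second run gives 12435.
print(phi(2, [1, 2, 4, 3]))  # [1, 2, 4, 3, 5]
print(phi(1, [1, 2, 4, 3]))  # [1, 2, 4, 5, 3]
```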
We now define the mapping
$\psi:[n-2]\times\mathcal{RSP}(n-2,k-1)\mapsto\mathcal{RSP}^{(2)}(n,k)$,
associating each element $(i,\pi)\in[n-2]\times\mathcal{RSP}(n-2,k-1)$ with
the permutation $\pi^{\prime}=\psi(i,\pi)$ obtained from $\pi$ by increasing
all integers greater than $i$ by 1 and inserting the subword $n~{}i{+}1$
immediately after the rightmost of the integers of the set
$\\{1,2,\ldots,i\\}$.
###### Example 24.
Let $i=3$ and $\pi=13524\in\mathcal{RSP}(5,2)$. We construct $\psi(i,\pi)$ as
follows: increase each integer greater than 3 in $\pi$ by $1$ to get $13625$,
then insert the subword $7~{}(3{+}1)=7~{}4$ into the position after the
rightmost of the integers $1,2,3$, thus the subword must be inserted between
$2$ and $5$, hence,
$\psi(3,13524)=\pi^{\prime}=1362745\in\mathcal{RSP}^{(2)}(7,3)$.
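The map $\psi$ can also be checked computationally; the following Python sketch (ours, not part of the formal development; lists are 0-based while the text uses 1-based positions) is a direct transcription of the definition:

```python
def psi(i, pi):
    """The map psi: (i, pi) in [n-2] x RSP(n-2, k-1) -> RSP^(2)(n, k).

    pi is a run-sorted permutation given as a list; n = len(pi) + 2.
    """
    n = len(pi) + 2
    # increase every integer greater than i by 1
    sigma = [x + 1 if x > i else x for x in pi]
    # locate the rightmost occurrence of an element of {1, ..., i}
    pos = max(j for j, x in enumerate(sigma) if x <= i)
    # insert the subword  n, i+1  immediately after it
    return sigma[:pos + 1] + [n, i + 1] + sigma[pos + 1:]

# Example 24: psi(3, 13524) = 1362745
print(psi(3, [1, 3, 5, 2, 4]))  # [1, 3, 6, 2, 7, 4, 5]
```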
###### Lemma 25.
For all $(i,\pi)\in[n-2]\times\mathcal{RSP}(n-2,k-1)$, we have
$\psi(i,\pi)\in\mathcal{RSP}^{(2)}(n,k)$.
###### Proof.
Since $\pi\in\mathcal{RSP}(n-2,k-1)$ and the procedure inserts the subword
$n~{}i{+}1$ immediately after the rightmost integer of the set
$\\{1,\ldots,i\\}$, all integers to the right of $i+1$ are greater than $i+1$
and $i+1$ is the first element of a new run. Thus the resulting permutation is
run-sorted with the number of runs increased by $1$. Furthermore, in the
resulting permutation the integer $n$ is immediately preceded by some integer
in the set $\\{1,\ldots,i\\}$ and immediately followed by $i+1$, hence its
removal decreases the number of runs, so
$\pi^{\prime}\in\mathcal{RSP}^{(2)}(n,k)$. ∎
###### Proposition 26.
The map $\psi$ defined above is a bijection.
###### Proof.
We prove that $\psi$ is both injective and surjective. First let us assume
that $(i_{1},\pi_{1})\neq(i_{2},\pi_{2})$ for $i_{1},i_{2}\in[n-2]$ and
$\pi_{1},\pi_{2}\in\mathcal{RSP}(n-2,k-1)$. Let
$\psi(i_{1},\pi_{1})=\pi_{1}^{\prime}$ and
$\psi(i_{2},\pi_{2})=\pi_{2}^{\prime}$. Then $\pi_{1}^{\prime}$ and
$\pi_{2}^{\prime}$ are run-sorted permutations in $\mathcal{RSP}^{(2)}(n,k)$
by the previous lemma. We consider two cases. If $i_{1}\neq i_{2}$, then in
one of the two resulting permutations $n$ is followed by $i_{1}+1$ while in
the other $n$ is followed by $i_{2}+1$. If $i_{1}=i_{2}$ and
$\pi_{1}\neq\pi_{2}$, then the two run-sorted permutations $\pi_{1}$ and
$\pi_{2}$ have at least two entries in which they differ. Thus inserting
$n~{}i_{1}{+}1=n~{}i_{2}{+}1$ after the rightmost element of the set
$\\{1,2,\ldots,i_{1}=i_{2}\\}$ produces two different permutations
$\pi_{1}^{\prime}$ and $\pi_{2}^{\prime}$. Thus, in both cases,
$\pi_{1}^{\prime}=\psi(i_{1},\pi_{1})\neq\psi(i_{2},\pi_{2})=\pi_{2}^{\prime}$,
hence $\psi$ is injective. Next, consider any
$\pi^{\prime}\in\mathcal{RSP}^{(2)}(n,k)$, then $n$ does not appear in the
last position. Let $j>1$ be the integer following $n$ in $\pi^{\prime}$. We
exhibit a pair $(i,\pi)\in[n-2]\times\mathcal{RSP}(n-2,k-1)$ such that
$\psi(i,\pi)=\pi^{\prime}$. Define $\pi$ to be the run-sorted permutation
obtained from $\pi^{\prime}$ by deleting the subword $n~{}j$ and by decreasing
by 1 every integer greater than or equal to $j+1$ in the resulting word. Note
that if $n$ follows the integer $i$ in $\pi^{\prime}$, then $i<j$ and hence
deleting the subword $n~{}j$ from $\pi^{\prime}$ reduces the number of runs by
$1$ and the size of the permutation by $2$, whence $\pi\in\mathcal{RSP}(n-2,k-1)$
and $\psi(j-1,\pi)=\pi^{\prime}$. Therefore, $\psi$ is a bijection. ∎
We are now ready to present the proof of Theorem 23.
###### Proof.
The left-hand side counts the number of run-sorted permutations in
$\mathcal{RSP}(n,k)$. The first term of the right-hand side counts the number
of elements in $\mathcal{RSP}^{(1)}(n,k)$. Since $\phi$ is a bijection, we
have $r_{n,k}^{(1)}=kr_{n-1,k}$. We show that the second term of the right-
hand side counts the number of elements in $\mathcal{RSP}^{(2)}(n,k)$. By
Proposition 26 the sets $[n-2]\times\mathcal{RSP}(n-2,k-1)$ and
$\mathcal{RSP}^{(2)}(n,k)$ have the same cardinality, that is,
$(n-2)r_{n-2,k-1}=r_{n,k}^{(2)}$. Thus by combining the two parts we obtain
$r_{n,k}=r_{n,k}^{(1)}+r_{n,k}^{(2)}$, and hence the recurrence relation in
(2). ∎
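Recurrence (2) is easy to iterate; the short Python sketch below (our code, with the theorem's initial conditions) tabulates $r_{n,k}$ and reproduces the last row of Table 1 as well as the total $203$, the Bell number $B(6)$:

```python
def rsp_counts(N):
    """Table r[n][k] of run-sorted permutations of [n] with k runs,
    from recurrence (2): r(n,k) = k*r(n-1,k) + (n-2)*r(n-2,k-1)."""
    r = [[0] * (N + 1) for _ in range(N + 1)]
    r[0][0] = 1
    if N >= 1:
        r[1][1] = 1                      # r(1,0) = 0 by default
    for n in range(2, N + 1):
        for k in range(1, N + 1):
            r[n][k] = k * r[n - 1][k] + (n - 2) * r[n - 2][k - 1]
    return r

r = rsp_counts(7)
print(r[7][1:5])  # [1, 57, 130, 15] -- the row n = 7 of Table 1
print(sum(r[7]))  # 203
```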
We also provide a bijective proof of Theorem 23 in terms of canonical forms.
Let $T_{n,k}=\\{f\in T_{n}:\operatorname{lrmax}(f)=k\\}$, so
$|T_{n,k}|=r_{n,k}$. Recall that $T_{n,k}$ is the set of the canonical forms
of merging-free partitions over $[n]$ having $k$ blocks.
###### Proof.
Firstly, if $f=f_{1}f_{2}\cdots f_{n-1}\in T_{n-1,k}$, then by concatenating
any integer $i\in[k]$ at the end of $f$ we obtain an $f^{\prime}\in T_{n,k}$.
This is because $f^{\prime}$ satisfies the condition of Proposition 14 if and
only if $f$ does. This construction obviously produces $kr_{n-1,k}$ functions
of $T_{n,k}$ having the property that by erasing the last value $f_{n}$ we
obtain a function in $T_{n-1,k}$.
Secondly, if $f=f_{1}f_{2}\cdots f_{n-2}\in T_{n-2,k-1}$, let $i\in[n-2]$, and
let $m=\max_{1\leq j\leq i}\\{f_{j}\\}$, then we construct
$f^{\prime}=f^{\prime}_{1}f^{\prime}_{2}\cdots f^{\prime}_{n}\in T_{n,k}$
associated with $(i,f)$ as follows: increase by $1$ all $f_{j}$s such that
$f_{j}\geq m,j>i$, insert $m+1$ at the position $i+1$, and append $m$ at the
end. The functions obtained with the second construction are all different
from those obtained using the former one. Indeed, by erasing the last integer
from $f^{\prime}$ we do not obtain a function in $T_{n-1,k}$. The reason is
that the value $m+1$ in the position $i+1$ is a left-to-right maximum letter
because of the choice of $m$. Now, by construction, the only occurrence of $m$
in $f^{\prime}$ to the right of position $i+1$ is the one at position $n$; by
erasing this value the left-to-right maximum letter $m+1$ in the position
$i+1$ is left without an occurrence of $m$ on its right. Therefore,
$f_{1}^{\prime}f_{2}^{\prime}\cdots
f_{n-1}^{\prime}$ does not satisfy the property characterizing canonical forms
of merging-free partitions. So, this contributes $(n-2)r_{n-2,k-1}$ to the
number $r_{n,k}$ as there are $n-2$ possibilities for $i$ and the number of
image values of $f^{\prime}$ increases by $1$. ∎
###### Example 27.
Take $f=12132\in T_{5,3}$, and let $i=3$. We construct $(i,f)\mapsto
f^{\prime}$ as follows: we have $m=\max_{1\leq j\leq 3}\\{f_{j}\\}=\max_{1\leq
j\leq 3}\\{1,2,1\\}=2$, and
$f^{\prime}_{1}=f_{1}=1,f^{\prime}_{2}=f_{2}=2,f^{\prime}_{3}=f_{3}=1,f^{\prime}_{4}=m+1=3,f^{\prime}_{5}=f_{4}+1=4,f^{\prime}_{6}=f_{5}+1=3,f^{\prime}_{7}=m=2$.
Thus, $f^{\prime}=1213432\in T_{7,4}$.
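The second construction on canonical forms can be sketched in Python as follows (our transcription; `i` counts entries from the left, as in the text):

```python
def extend_canonical(i, f):
    """Second construction in the proof above: from f in T(n-2, k-1) and
    i in [n-2], build f' in T(n, k)."""
    m = max(f[:i])                                       # m = max_{1<=j<=i} f_j
    g = f[:i] + [x + 1 if x >= m else x for x in f[i:]]  # bump f_j >= m, j > i
    return g[:i] + [m + 1] + g[i:] + [m]                 # insert m+1, append m

# Example 27: (i, f) = (3, 12132) gives 1213432
print(extend_canonical(3, [1, 2, 1, 3, 2]))  # [1, 2, 1, 3, 4, 3, 2]
```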
## 4 Right-to-left minima in run-sorted permutations
### 4.1 The distribution of right-to-left minima over the set of run-sorted
permutations
The following proposition gives us the relation between the statistics of
right-to-left minima of run-sorted permutations and the weak left-to-right
maxima of the canonical forms of the corresponding merging-free partitions.
###### Proposition 28.
The set of right-to-left minima of a run-sorted permutation over $[n]$ and the
set of weak left-to-right maxima of the canonical form of the corresponding
merging-free partition are the same.
###### Proof.
Let $\operatorname{Flatten}(P)=\pi=\pi(1)\cdots\pi(n)\in\mathcal{RSP}(n)$,
where $P$ is a merging-free partition over $[n]$. Let $f=f_{1}\cdots f_{n}$ be
the canonical form of $P$ and let $\\{i_{1},\ldots,i_{r}\\}$ be the set of the
positions of the right-to-left minima of $\pi$, then by definition of right-
to-left minima
$\operatorname{RlMin}(\pi)=\\{1=\pi(i_{1})<\pi(i_{2})<\cdots<\pi(i_{r})=\pi(n)\\}$.
Furthermore, if $1\leq j_{1}<j_{2}\leq r$ and we let $B_{q_{1}}$ be the block
of $P$ containing $\pi(i_{j_{1}})$ and $B_{q_{2}}$ the block of $P$ containing
$\pi(i_{j_{2}})$, then $q_{1}\leq q_{2}$ and by the definition of canonical
form we have $f_{\pi(i_{1})}\leq f_{\pi(i_{2})}\leq\cdots\leq f_{\pi(i_{r})}$.
If $\pi(j)\in\operatorname{RlMin}(\pi)$, then every integer $t<\pi(j)$ appears
to the left of $\pi(j)$ in $\pi$, hence $t$ lies in the block of $\pi(j)$ or
in a preceding block of $P$, that is, $f_{t}\leq f_{\pi(j)}$. Therefore
$\pi(j)\in\operatorname{WLrMax}(f)$, and thus
$\\{\pi(i_{1}),\ldots,\pi(i_{r})\\}\subseteq\operatorname{WLrMax}(f)$.
Conversely, if $\operatorname{WLrMax}(f)=\\{i_{1},\ldots,i_{s}\\}$, then for
each $i_{j}$ we have $f_{i_{j}}\geq f_{t},t<i_{j}$, that is, all integers
$t<i_{j}$ belong either to $f_{i_{j}}$-th block or to a preceding block of
$P$, therefore, in $\pi$ there is no integer smaller than $i_{j}$ on the right
of $i_{j}$. Hence $i_{j}\in\operatorname{RlMin}(\pi)$. Therefore,
$\operatorname{RlMin}(\pi)=\operatorname{WLrMax}(f)$. ∎
###### Example 29.
If $P=149/238/57/6$, then its canonical form is $f=122134321$, and
$\operatorname{Flatten}(P)=\pi=149238576$. Thus, we have
$\operatorname{RlMin}(\pi)=\\{1,2,3,5,6\\}=\operatorname{WLrMax}(f)$.
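The two statistics of Proposition 28 are easy to compute directly; the following Python sketch (ours) verifies Example 29:

```python
def rl_min(pi):
    """Set of right-to-left minima (values) of the permutation pi."""
    return {x for j, x in enumerate(pi) if all(y > x for y in pi[j + 1:])}

def wlr_max(f):
    """Set of weak left-to-right maxima (1-based positions) of f."""
    return {j + 1 for j, v in enumerate(f) if all(u <= v for u in f[:j])}

# Example 29: P = 149/238/57/6
pi = [1, 4, 9, 2, 3, 8, 5, 7, 6]
f = [1, 2, 2, 1, 3, 4, 3, 2, 1]
print(rl_min(pi))  # {1, 2, 3, 5, 6}
print(wlr_max(f))  # {1, 2, 3, 5, 6}
```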
Let $h_{n,r}$ denote the number of run-sorted permutations over $[n]$ having
$r$ right-to-left minima.
###### Proposition 30.
For all positive integers $n$ and $r$ with $2\leq r\leq n$ we have
$h_{n,r}=h_{n-1,r-1}+(r-1)h_{n-1,r},~{}h_{1,1}=1.$ (3)
###### Proof.
A run-sorted permutation $\pi^{\prime}$ over $[n]$ can be obtained from a run-
sorted permutation $\pi$ over $[n-1]$ either by appending $n$ at its end, or
by inserting $n$ before any of its right-to-left minima that is different from
$1$. In the former case, the number of right-to-left minima increases by $1$,
hence this contributes $h_{n-1,r-1}$ to the number $h_{n,r}$. In the latter
case, if
$\operatorname{RlMin}(\pi)=\\{\pi(i_{1}),\pi(i_{2}),\ldots,\pi(i_{r})\\}$,
then $1=\pi(i_{1})<\pi(i_{2})<\cdots<\pi(i_{r})$. So, inserting $n$ before any
$\pi(i_{j})$ for $j\neq 1$ makes $\pi(i_{j})$ the minimum element of its
run in $\pi^{\prime}$. Thus the permutation $\pi^{\prime}$ is run-sorted with
the same number of right-to-left minima as $\pi$, and this contributes
$(r-1)h_{n-1,r}$ as there are $r-1$ right-to-left minima different from $1$. ∎
We also give the interpretation of the bijective proof of the recursion
formula in (3) for the corresponding set of canonical forms of merging-free
partitions using Proposition 28. We interpret $h_{n,r}$ as the number of
canonical forms in $T_{n}$ having $r$ weak left-to-right maxima, that is,
$h_{n,r}=|\\{f\in T_{n}:\operatorname{wlrmax}(f)=r\\}|$. All the elements of
the set $\\{f\in T_{n}:\operatorname{wlrmax}(f)=r\\}$ are obtained in a unique
way
1. 1.
either from an $f=f_{1}f_{2}\cdots f_{n-1}\in T_{n-1,r-1}$ by concatenating
$\max_{1\leq j\leq n-1}\\{f_{j}\\}$ at its end;
2. 2.
or from an $f=f_{1}f_{2}\cdots f_{n-1}\in T_{n-1,r}$ with weak left-to-right
maxima $\\{i_{1},i_{2},\ldots,i_{r}\\}$ as follows. For each $j=2,\ldots,r$ :
* -
if $f_{i_{j}}$ is a non-strict left-to-right maximum letter of $f$, then
increase by $1$ every integer $f_{s}$ such that $f_{s}\geq f_{i_{j}}$ and
$s\geq i_{j}$, and
* -
concatenate $f_{i_{j-1}}$ at the end of the resulting function.
Thus, the recurrence relation in (3) follows.
Recall that the recurrence relation satisfied by the Stirling numbers of the
second kind is $S(n,r)=S(n-1,r-1)+rS(n-1,r)$. It is easy to see that from
Corollary 20 of Section 2 and Proposition 28, the number of blocks in a set
partition over $[n-1]$ is one less than the number of right-to-left minima of
the corresponding run-sorted permutation over $[n]$ under the bijection in
Proposition 11. So, the values of $h_{n,r}$ given in (3) are the shifted
values of the Stirling numbers of the second kind, that is,
$h_{n,r}=S(n-1,r-1)$, for all $n\geq r\geq 1$.
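The identity $h_{n,r}=S(n-1,r-1)$ can be confirmed numerically by running the two recurrences side by side (a Python sketch, ours):

```python
def h_table(N):
    """h[n][r] from recurrence (3): h(n,r) = h(n-1,r-1) + (r-1)*h(n-1,r)."""
    h = [[0] * (N + 1) for _ in range(N + 1)]
    h[1][1] = 1
    for n in range(2, N + 1):
        for r in range(1, n + 1):
            h[n][r] = h[n - 1][r - 1] + (r - 1) * h[n - 1][r]
    return h

def stirling2(N):
    """S[n][r]: Stirling numbers of the second kind."""
    S = [[0] * (N + 1) for _ in range(N + 1)]
    S[0][0] = 1
    for n in range(1, N + 1):
        for r in range(1, n + 1):
            S[n][r] = S[n - 1][r - 1] + r * S[n - 1][r]
    return S

h, S = h_table(8), stirling2(8)
print(all(h[n][r] == S[n - 1][r - 1]
          for n in range(1, 9) for r in range(1, n + 1)))  # True
```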
### 4.2 The joint distribution of $\operatorname{run}$ and
$\operatorname{rlmin}$ over the set of run-sorted permutations
The statistics $\operatorname{run}$ and $\operatorname{rlmin}$ of a run-sorted
permutation are obviously related. In particular, each minimum element of a
run is always a right-to-left minimum, so
$\operatorname{run}(\pi)\leq\operatorname{rlmin}(\pi),\forall\pi\in\mathcal{RSP}(n)$.
We are interested in the joint distribution of these statistics. Let
$a_{n,k,r}$ denote the number of run-sorted permutations over $[n]$ having $k$
runs and $r$ right-to-left minima. If $n=0$, the only nonzero term is
$a_{0,0,0}=1$; if $n\geq 1$, then $a_{n,k,r}=0$ whenever
$k>\lceil\frac{n}{2}\rceil$, $r>n$, $r<k$, $k<1$, $r<1$, or $r>n-k+1$.
###### Proposition 31.
For all integers $n,k,r$ such that $1\leq k,r\leq n$ the number $a_{n+2,k,r}$
of run-sorted permutations over $[n+2]$ having $k$ runs and $r$ right-to-left
minima satisfies
$a_{n+2,k,r}=a_{n+1,k,r-1}+\sum_{i=1}^{n}{n\choose i}a_{n+1-i,k-1,r-1}.$
###### Proof.
Let $\pi\in\mathcal{RSP}(n+2)$. Let us suppose that the integers $1$ and $2$
are in the same run of $\pi$. Let $\pi^{\prime}$ be the permutation obtained
from $\pi$ by deleting $1$ and then decreasing each of the remaining integers
by $1$, then $\pi^{\prime}\in\mathcal{RSP}(n+1)$ and
$\operatorname{run}(\pi)=\operatorname{run}(\pi^{\prime})$ and
$\operatorname{rlmin}(\pi)=\operatorname{rlmin}(\pi^{\prime})+1$. This implies
that
$|\\{\pi\in\mathcal{RSP}(n+2):~{}\operatorname{run}(\pi)=k,\operatorname{rlmin}(\pi)=r,1\text{
and }2\text{ are in the same run}\\}|=a_{n+1,k,r-1}.$
Let us suppose now that $1$ and $2$ are in different runs of $\pi$ and that
the first run (containing $1$) has length $i+1,i\geq 1$, then we can choose
$i$ elements from the set $\\{3,4,\ldots,n+2\\}$ to include in the first run.
There are $\displaystyle{n\choose i}$ ways to do so. The remaining part of
$\pi$ is a run-sorted permutation over $[n-i+1]$ and there are
$a_{n+1-i,k-1,r-1}$ of
them. In this case, the number of runs and the number of right-to-left minima
of $\pi$ each increase by $1$. This completes the proof. ∎
###### Theorem 32.
We have
$a_{n,k,r}=a_{n-1,k,r-1}+(k-1)a_{n-1,k,r}+(n-2)a_{n-2,k-1,r-1},n\geq 2,k,r\geq
1$ (4)
with the initial conditions $a_{0,0,0}=1,a_{1,1,1}=1$.
###### Proof.
The proof is based on the technique used in the proof of Theorem 23. Let
$\pi^{\prime}$ be a run-sorted permutation over $[n]$ obtained from
$\pi\in\mathcal{RSP}(n-1)$ by inserting $n$ at the end of any of its runs.
This operation preserves the number of runs. It also preserves the number of
right-to-left minima except when $n$ is inserted at the end of the last run of
$\pi$, in which case the number of right-to-left minima increases by $1$. So,
we get the first two terms of the right-hand side of the recurrence relation.
Again, if $\pi^{\prime}\in\mathcal{RSP}(n)$ is obtained from
$\pi\in\mathcal{RSP}(n-2)$ by the operation defined in Lemma 25, that is,
$\pi^{\prime}=\psi(i,\pi)$, where $i\in[n-2]$, then the number of runs and the
number of right-to-left minima each increases by $1$. We showed already that
this is true for the number of runs, let us show it for the number of right-
to-left minima. The operation increases by $1$ each integer greater than $i$
in $\pi$ and inserts the subword $n~{}i{+}1$ immediately after the rightmost
occurrence of an integer of the set $\\{1,2,\ldots,i\\}$; then the newly
created run beginning at $i+1$ contributes one more right-to-left minimum
since the minima of the runs form an increasing subsequence. Thus, we have the
last term of the right-hand side of the recurrence. ∎
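Recurrence (4) can also be verified by brute force for small $n$: the Python sketch below (ours; `stats` and the table `a` are ad hoc names) enumerates all permutations, keeps the run-sorted ones, tabulates the pair of statistics, and checks (4):

```python
from itertools import permutations

def stats(pi):
    """Return (runs, right-to-left minima) of pi, or None if pi is not
    run-sorted (the first elements of its runs must be increasing)."""
    starts = [x for j, x in enumerate(pi) if j == 0 or x < pi[j - 1]]
    if starts != sorted(starts):
        return None
    rlmin = sum(1 for j, x in enumerate(pi) if all(y > x for y in pi[j + 1:]))
    return len(starts), rlmin

# tabulate a[n, k, r] by brute force
a = {}
for n in range(1, 8):
    for pi in permutations(range(1, n + 1)):
        s = stats(pi)
        if s is not None:
            a[n, s[0], s[1]] = a.get((n, s[0], s[1]), 0) + 1

# check recurrence (4) for 3 <= n <= 7
ok = all(a.get((n, k, r), 0) == a.get((n - 1, k, r - 1), 0)
         + (k - 1) * a.get((n - 1, k, r), 0)
         + (n - 2) * a.get((n - 2, k - 1, r - 1), 0)
         for n in range(3, 8) for k in range(1, 8) for r in range(1, 8))
print(ok)  # True
```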
###### Theorem 33.
The exponential generating function
$A(x,y,z)=\sum_{n,k,r\geq 0}a_{n,k,r}\frac{x^{n}}{n!}y^{k}z^{r}$
satisfies the differential equation
$\frac{\partial A}{\partial x}=yze^{xz+yz(-x-1)+yze^{x}}$ (5)
with the initial condition $\frac{\partial A}{\partial x}|_{x=0}=yz$.
###### Proof.
From (4) we have
$\displaystyle\sum_{n\geq 2,k,r\geq 1}a_{n,k,r}\frac{x^{n}}{n!}y^{k}z^{r}$
$\displaystyle=\sum_{n\geq 2,k,r\geq
1}a_{n-1,k,r-1}\frac{x^{n}}{n!}y^{k}z^{r}+\sum_{n\geq 2,k,r\geq
1}(k-1)a_{n-1,k,r}\frac{x^{n}}{n!}y^{k}z^{r}+$
$\displaystyle~{}~{}~{}~{}\sum_{n\geq 2,k,r\geq
1}a_{n-2,k-1,r-1}\frac{x^{n}}{n!}y^{k}z^{r}.$
Using the notation $\frac{\partial A}{\partial y}=A_{y}$ and expressing the
above equation in terms of $A$ we obtain
$\displaystyle A$ $\displaystyle=z\left(\int Adx-x\right)+\int
yA_{y}dx-\left(\int Adx-x\right)+xyz\int Adx-$
$\displaystyle~{}~{}~{}~{}2yz\int\int Adxdx+1+xyz$ $\displaystyle=\int
yA_{y}dx-(1-z-xyz)\int Adx+x-xz-2yz\int\int Adxdx+1+xyz.$ (6)
By differentiating both sides of (6) with respect to $x$ we obtain the
following:
$\displaystyle A_{x}=yA_{y}-(1-z-xyz)A+1-z-yz\int Adx+yz.$
Again by differentiating the above equation with respect to $x$ we obtain
$\displaystyle A_{xx}=yA_{yx}-(1-z-xyz)A_{x}.$ (7)
By letting $A_{x}=B$ in (7) we obtain
$\displaystyle B_{x}-yB_{y}+(1-z-xyz)B=0.$ (8)
Then, the characteristic equation is $\frac{dy}{dx}=\frac{-y}{1}$, that is,
$\ln y+x=c$ with $c$ constant. We make the transformation $\epsilon=x,\mu=\ln
y+x,\zeta=z$, and $w(\epsilon,\mu,\zeta)=B(x,y,z)$. Using the substitution we
find that (8) transforms to
$\displaystyle w_{\epsilon}+\left(1-\zeta-\epsilon\zeta
e^{\mu-\epsilon}\right)w=0.$
By the integrating factor method we have
$\displaystyle\frac{\partial}{\partial\epsilon}\left(e^{\int\left(1-\zeta-\epsilon\zeta
e^{\mu-\epsilon}\right)d\epsilon}w\right)=0$
and integrating it with respect to $\epsilon$ and simplifying
$\displaystyle w(\epsilon,\mu,\zeta)$
$\displaystyle=g(\mu,\zeta)e^{-\int\left(1-\zeta-\epsilon\zeta
e^{\mu-\epsilon}\right)d\epsilon}$
$\displaystyle=g(\mu,\zeta)e^{-\epsilon+\zeta\epsilon+\zeta
e^{\mu-\epsilon}(-\epsilon-1)+h(\mu,\zeta)},$
where $g$ and $h$ are any differentiable functions of two variables. Using the
initial condition $B(0,y,z)=yz$ we have $x=0,\epsilon=0,\mu=\ln
y,\zeta=z,w(\epsilon=0,\mu=\ln y,\zeta=z)=yz$, and
$\displaystyle yz$ $\displaystyle=g(\ln y,z)e^{-yz+h(\ln y,z)}$ $\displaystyle
yze^{yz}$ $\displaystyle=g(\ln y,z)e^{h(\ln y,z)}.$
Thus, we obtain $g(t,z)=ze^{t},h(t,z)=ze^{t}$. Therefore, transforming back to
the variables $x,y,z$ we get
$\displaystyle B(x,y,z)$ $\displaystyle=g(\ln y+x,z)e^{-x+zx+yz(-x-1)+h(\ln
y+x,z)}$ $\displaystyle=yze^{xz+yz(-x-1)+yze^{x}}.$
∎
By specializing $z=1$ in (5) we obtain the result of [13] on the exponential
generating function counting run-sorted permutations by the number of runs.
Recall that $r_{n,k}$ is the number of run-sorted permutations over
$[n]$ having $k$ runs.
###### Corollary 34.
If $A(x,y)=\sum_{n,k\geq 1}r_{n,k}\frac{x^{n}}{n!}y^{k}$, then $A$ satisfies
$\displaystyle\frac{\partial A}{\partial x}=ye^{x+y(-x-1)+ye^{x}}$
with the initial condition $\frac{\partial A}{\partial x}|_{x=0}=y$.
By specializing $y=z=1$ in (5) we obtain the well-known result about the
exponential generating function counting the number of run-sorted permutations
(merging-free partitions) [5, 19].
###### Corollary 35.
The exponential generating function $A(x)$ of the number of run-sorted
permutations satisfies
$A^{\prime}(x)=e^{e^{x}-1}.$
###### Corollary 36.
For every integer $n\geq 1$, the number $a_{n}$ of run-sorted
permutations over $[n]$ is given by
$a_{n}=\frac{1}{e}\sum_{m\geq 0}\frac{m^{n-1}}{m!}.$
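Since $a_{n}$ equals the Bell number $B(n-1)$, Corollary 36 is Dobinski's formula shifted by one; a quick numerical check (a Python sketch, ours, truncating the series):

```python
from math import exp, factorial

def bell(n):
    """Bell number B(n) via the Bell triangle; a_n = B(n-1)."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

def dobinski(n, terms=60):
    """Truncation of Corollary 36: (1/e) * sum_{m>=0} m^(n-1)/m!."""
    return sum(m ** (n - 1) / factorial(m) for m in range(terms)) / exp(1)

print(bell(6))                # 203 run-sorted permutations of [7]
print(round(dobinski(7), 6))  # 203.0
```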
### 4.3 A bijection with separated partitions
We now consider set partitions with no two consecutive integers in the same
block. Such partitions have been studied, for instance, by Munagi [12], who
called them “separated” partitions and proved that separated partitions over
$[n]$ having $k$ blocks are counted by the shifted Stirling numbers of the
second kind (see A008277), like run sorted permutations over $[n]$ having $k$
right-to-left minima.
It is then natural to provide a bijection between these two equisized classes
of objects. Let $\mathcal{P}_{n}$ denote the set of all separated partitions
over $[n]$. Let $P=B_{1}/B_{2}/\cdots/B_{k}\in\mathcal{P}_{n}$ with $k$
blocks. Define a map $\theta:\mathcal{P}_{n}\mapsto\mathcal{RSP}(n)$ given by
$\pi=\theta(P)$, where $\pi$ is obtained as follows:
* -
for $i=2,\ldots,k$ :
if $b\in B_{i},b\neq\min(B_{i})$ and $b-1\in B_{j}$ such that $j<i$, then
move $b$ to $B_{i-1}$ and rearrange the elements of $B_{i-1}$ in increasing
order;
* -
flatten the resulting partition and set it to $\pi$.
###### Example 37.
If $P=1358/26/47$, then $1358/26/47\rightarrow 13568/2/47\rightarrow
13568/27/4~{}\rightarrow 13568274=\pi$.
###### Theorem 38.
The map $\theta$ is a bijection.
###### Proof.
We first prove that $\pi\in\mathcal{RSP}(n)$, for every $P\in\mathcal{P}_{n}$.
The procedure never moves $\min(B_{i})$ for all $i$. Thus, the minima remain
in increasing order and hence $\pi$ is a run-sorted permutation. We now show
that if $P$ has $k$ blocks, then $\pi$ has $k$ right-to-left minima.
Obviously, the minimum of each block of $P$ becomes a right-to-left minimum of
$\pi$. Let $b$ be in the block $B_{i}$ with $b\neq\min(B_{i})$. The integer
$b-1$ is in a different block, say $B_{j}$. If $j<i$, then the procedure moves
$b$ to the block $B_{i-1}$ leaving $\min(B_{i})$ on its right in $\pi$.
Therefore, $b$ cannot be a right-to-left minimum of $\pi$. Suppose that $j>i$.
Since $b-1\geq\min(B_{j})$ we have $b>\min(B_{j})$, so the procedure moves
neither $b$ nor $\min(B_{j})$ which implies that $b$ cannot be a right-to-left
minimum of $\pi$. Therefore, $b$ is a right-to-left minimum of $\pi$ if and
only if $b=\min(B_{i}),i=1,\ldots,k$. We next prove that $\theta$ is one-to-
one. Suppose that $P\neq P^{\prime}$, where $P,P^{\prime}\in\mathcal{P}_{n}$.
If the number of blocks of $P$ and the number of blocks of $P^{\prime}$ are
different, then we are done since $\theta(P)$ and $\theta(P^{\prime})$ have
different number of right-to-left minima. Let $P=B_{1}/\cdots/B_{k}$ and
$P^{\prime}=B^{\prime}_{1}/\cdots/B^{\prime}_{k}$, and assume that there
exists an element $b\in B_{i}$ and $b\in B^{\prime}_{j}$ such that $B_{i}$ is
the block of $P$ and $B^{\prime}_{j}$ is the block of $P^{\prime}$ with $i\neq
j$. We take $b$ minimal with this property. Up to exchanging $P$ and
$P^{\prime}$ we may suppose $i<j$.
1. 1.
If $b=\min(B_{i})$, then $b$ is the $i$-th right-to-left minimum of
$\theta(P)$ and it would not be the case for $\theta(P^{\prime})$.
2. 2.
Let $b\neq\min(B_{i})$ and let $b-1\in B_{m}$. Note that $b-1\in
B^{\prime}_{m}$ for the minimality of $b$. Three sub-cases are possible:
* •
if $m<i<j$, then $\theta$ moves $b$ to the block $B_{i-1}$ of $P$ and moves
$b$ to the block $B^{\prime}_{j-1}$ of $P^{\prime}$;
* •
if $i<m<j$, then $\theta$ leaves $b$ in the block $B_{i}$ in $P$ while it
moves $b$ to the block $B_{j-1}^{\prime}$ in $P^{\prime}$. Note that $j-1\neq
i$, because $i\lneq m\lneq j$;
* •
if $i<j<m$, then $\theta$ leaves $b$ in the block $B_{i}$ in $P$ and leaves
$b$ in the block $B^{\prime}_{j}$ in $P^{\prime}$.
In all cases we have $\theta(P)\neq\theta(P^{\prime})$. Therefore, $\theta$ is
a bijection. ∎
We now present the inverse of $\theta$. Let $\pi\in\mathcal{RSP}(n)$ with $k$
right-to-left minima. We construct $P=\theta^{-1}(\pi)$ as follows:
* -
insert a slash before each right-to-left minimum of $\pi$ and let
$B_{1}/\cdots/B_{k}$ be the resulting partition;
* -
for $i=k,\ldots,1$:
for $b$ in $B_{i}\setminus\\{\min(B_{i})\\}$ taken in increasing order
if $b-1\in B_{j},j\leq i$, then
move $b$ to $B_{i+1}$ and rearrange the elements in each block in increasing
order.
It can be easily checked that $\theta^{-1}$ constructs a separated partition.
For instance, if $\pi=13625784$, then by inserting slashes before each right-
to-left minimum we have the partition $136/2578/4$, and we move $7$ to $B_{3}$
since $6\in B_{1}$. So, $P=136/258/47$.
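Both directions of $\theta$ can be transcribed into Python; the sketch below (ours, a direct reading of the two procedures, with partitions as lists of blocks ordered by minima) reproduces Example 37 and the worked example of the inverse:

```python
def theta(blocks):
    """Separated partition (blocks by increasing minima) -> run-sorted
    permutation, following the forward procedure above."""
    B = [sorted(b) for b in blocks]
    for i in range(1, len(B)):                  # blocks B_2, ..., B_k
        for b in list(B[i]):
            if b != min(B[i]) and any(b - 1 in B[j] for j in range(i)):
                B[i].remove(b)                  # move b to the previous block
                B[i - 1] = sorted(B[i - 1] + [b])
    return [x for blk in B for x in blk]        # flatten

def theta_inv(pi):
    """Inverse map: cut pi before each right-to-left minimum, then push
    offending elements one block to the right."""
    rlmin = {x for j, x in enumerate(pi) if all(y > x for y in pi[j + 1:])}
    B, cur = [], []
    for x in pi:
        if x in rlmin and cur:
            B.append(cur)
            cur = []
        cur.append(x)
    B.append(cur)
    for i in range(len(B) - 1, -1, -1):         # i = k, ..., 1
        for b in sorted(B[i]):
            if b != min(B[i]) and any(b - 1 in B[j] for j in range(i + 1)):
                B[i].remove(b)                  # move b to the next block
                B[i + 1].append(b)
    return [sorted(b) for b in B]

print(theta([[1, 3, 5, 8], [2, 6], [4, 7]]))  # [1, 3, 5, 6, 8, 2, 7, 4]
print(theta_inv([1, 3, 6, 2, 5, 7, 8, 4]))    # [[1, 3, 6], [2, 5, 8], [4, 7]]
```

Note that the last block built by `theta_inv` is always a singleton (the last entry of $\pi$ is a right-to-left minimum), so the inner move never falls off the end of the block list.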
## 5 Non-crossing merging-free partitions
A non-crossing partition of a set $A=[n]$ is a partition in which no two
blocks “cross” each other, that is, if $a$ and $b$ belong to one block and $x$
and $y$ to another, then they cannot be arranged in the order $axby$. If one
draws an arc connecting $a$ and $b$, and another arc connecting $x$ and $y$,
then the two arcs cross each other if the order is $axby$ but not if it is
$axyb$ or $abxy$. In the latter two orders the partition
$\\{\\{a,b\\},\\{x,y\\}\\}$ is non-crossing [17].
###### Example 39.
The following figure shows the diagrams of
$P=1~{}2~{}5~{}7/~{}3~{}9~{}10/~{}4~{}6~{}8$ and of
$P^{\prime}=1~{}2~{}3~{}9~{}10/~{}4~{}6~{}7~{}8/~{}5$, which are respectively
crossing and non-crossing partitions.
Figure 1: Crossing and non-crossing merging-free partitions
We are interested in non-crossing merging-free partitions over $[n]$. We let
$\mathcal{M}_{n}$ denote the set of all non-crossing merging-free partitions
over $[n]$, and $\mathcal{M}_{n,t}:=\\{P\in
\mathcal{M}_{n}:\operatorname{blocks}(P)=t\\}$, where $\operatorname{blocks}(P)$
denotes the number of blocks of the partition $P$.
###### Theorem 40.
For all integers $n\geq 1$ we have
$\sum_{P\in\mathcal{M}_{n}}q^{\operatorname{blocks}(P)}=\sum_{t=1}^{\frac{n+1}{2}}{n-1\choose
2(t-1)}q^{t}.$ (9)
###### Proof.
We use strong induction on $n$, and provide a recursive construction for the
merging-free partitions of $\mathcal{M}_{n}$. For $n=1$ the assertion is
trivially true (initial condition). Assume that $n\geq 2$ and the assertion is
true for all integers smaller than $n$. We distinguish two cases, depending on
whether $n$ is in the same block of a merging-free partition $P$ as $n-1$ or not.
Suppose that $P$ has $t$ blocks.
Case-1. If $n$ is in the same block as $n-1$, then we delete $n$ and obtain a
non-crossing merging-free partition $P^{\prime}$ in $\mathcal{M}_{n-1,t}$. The
induction hypothesis implies that
$\displaystyle\sum_{\begin{subarray}{c}P\in\mathcal{M}_{n}\\\ n\text{ and
}n-1\text{ are in}\\\ \text{the same block of
}P\end{subarray}}q^{\operatorname{blocks}(P)}$
$\displaystyle=\sum_{\begin{subarray}{c}P^{\prime}\in\mathcal{M}_{n-1}\end{subarray}}q^{\operatorname{blocks}(P^{\prime})}$
$\displaystyle=\sum_{t=1}^{\frac{n}{2}}{n-2\choose 2(t-1)}q^{t}.$ (10)
Case-2. Suppose that $n$ is not in the same block as $n-1$ in $P$, and that
$n$ is in the same block as a certain $i$ with $i<n-1$. Let $i$ be the
largest such element. In this case, as we shall see, all the integers
$1,2,\ldots,i,n$ are in the first block of $P$. Assume for a contradiction
that there is $j\in\\{1,2,\ldots,i-1\\}$ such that $j$ is not in the same
block as $i$. We can choose $j$ such that $j+1$ is in the same block as $i$.
If $j$ is not the maximum element of its block, then the arc relating it to
its successor in the partition creates a crossing.
Figure 2:
If instead $j$ is the maximum element of its block, then there is an integer
$k>i$ such that $k$ is in the same block as an integer $j^{\prime}<j$ (and
hence creates a crossing with the arc $(i,n)$), because otherwise, the block
containing $j$ should be merged with one of the blocks containing integers of
$[i+1,n-1]$ and hence the partition would not be merging-free.
Figure 3:
Thus, the first block is uniquely determined by the integer $i$ and the
remaining $n-i-1$ integers must form a non-crossing merging-free partition. So
let $P^{\prime\prime}$ be the partition obtained by deleting the first block
of $P$ and then standardizing the resulting partition, that is, subtract $i$
from each of the remaining integers, so we have
$P^{\prime\prime}\in\mathcal{M}_{n-i-1,t-1}$. Thus by the induction hypothesis
and taking the sum over all possible $i$ we have
$\displaystyle\sum_{\begin{subarray}{c}P\in\mathcal{M}_{n}\\\ n\text{ and
}n-1\text{ are not}\\\ \text{in the same block of
}P\end{subarray}}q^{\operatorname{blocks}(P)}$
$\displaystyle=\sum_{i=1}^{n-2}q\sum_{\begin{subarray}{c}P^{\prime\prime}\in\mathcal{M}_{n-i-1}\end{subarray}}q^{\operatorname{blocks}(P^{\prime\prime})}$
$\displaystyle=\sum_{i=1}^{n-2}q\sum_{t=2}^{\frac{n-i+2}{2}}{n-i-2\choose
2(t-2)}q^{t-1}.$ (11)
Therefore, putting (10) and (11) together we have
$\displaystyle\sum_{P\in\mathcal{M}_{n}}q^{\operatorname{blocks}(P)}$
$\displaystyle=\sum_{t=1}^{\frac{n}{2}}{n-2\choose
2(t-1)}q^{t}+\sum_{i=1}^{n-2}q\sum_{t=2}^{\frac{n-i+2}{2}}{n-i-2\choose
2(t-2)}q^{t-1}$ $\displaystyle=\sum_{t=1}^{\frac{n}{2}}{n-2\choose
2(t-1)}q^{t}+\sum_{t=2}^{\frac{n+1}{2}}q^{t}\sum_{i=1}^{n-2t+2}{n-i-2\choose
2t-4}$ $\displaystyle=\sum_{t=1}^{\frac{n}{2}}{n-2\choose
2(t-1)}q^{t}+\sum_{t=2}^{\frac{n+1}{2}}{n-2\choose 2t-3}q^{t}$
$\displaystyle=q+\sum_{t=2}^{\frac{n+1}{2}}\left[{n-2\choose 2t-2}+{n-2\choose
2t-3}\right]q^{t}=\sum_{t=1}^{\frac{n+1}{2}}{n-1\choose 2t-2}q^{t}.$
∎
###### Corollary 41.
The number $m_{n}$ of non-crossing merging-free partitions over $[n]$ is equal
to $2^{n-2}$, where $n\geq 2$ and $m_{1}=m_{2}=1$.
###### Proof.
This follows from putting $q=1$ in (9) and using the well-known identity
$\sum_{k\geq 0}{n\choose 2k}=2^{n-1}.$
∎
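The binomial identity behind Corollary 41 is easy to confirm numerically (a Python sketch, ours):

```python
from math import comb

# Corollary 41: setting q = 1 in (9) gives  sum_t C(n-1, 2(t-1)) = 2^(n-2)
ok = all(sum(comb(n - 1, 2 * (t - 1)) for t in range(1, (n + 3) // 2))
         == 2 ** (n - 2)
         for n in range(2, 12))
print(ok)  # True
```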
A function $f\in\operatorname{RGF}(n)$ is said to avoid the pattern $212$ if
there do not exist indices $a<b<c$ such that $f_{a}=f_{c}>f_{b}$. Let
$f$ be the canonical form of a set partition $P$ over $[n]$; then $P$ is non-
crossing if and only if $f$ is $212$-avoiding (see [8, 16]). A function
$f=f_{1}f_{2}\cdots f_{n}$ is said to be weakly uni-modal if there exists a
value $m\leq n$ for which it is weakly increasing for $i\leq m$ and weakly
decreasing for $i\geq m$, that is, $f_{1}\leq f_{2}\leq\cdots\leq f_{m}\geq
f_{m+1}\geq\cdots\geq f_{n}$. Thus, we have the following result.
###### Proposition 42.
Let $f$ be the canonical form of a merging-free partition $P$, then $f$ is
$212$-avoiding if and only if it is weakly uni-modal.
###### Proof.
Consider the forward implication, and suppose that $f$ is $212$-avoiding but
not weakly uni-modal. Then $f$ contains either a pattern $312$ or a pattern
$213$. If $f$ contains a $312$, then, since $f$ is a $\operatorname{RGF}$,
before the $3$ there is a $2$ and hence $f$ contains $212$. This is a
contradiction. Suppose
that $f$ contains a pattern $213$ in the positions $a<a+1<b$, where $b$ is the
smallest such integer. If $f_{b}$ is a left-to-right maximum letter in $f$,
then there exists some integer $c>b$ such that $f_{c}=f_{b}-1$ because $P$ is
merging-free. Since $f$ is a $\operatorname{RGF}$ there exists an integer
$d\leq a$ such that $f_{d}=f_{c}$ because of the choice of $b$. Thus, $f$
contains the pattern $212$ in $d<a+1<c$ and this is a contradiction. If
$f_{b}$ is not a left-to-right maximum letter, then we have some integer $e<a$
such that $f_{e}=f_{b}$, hence $f$ contains the pattern $212$ in $e<a+1<b$ and
this is also a contradiction. Therefore, $f$ is weakly uni-modal. The converse
implication is clearly true. ∎
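Proposition 42 can be checked exhaustively for small $n$, assuming (as in Example 29) that the blocks of the merging-free partition corresponding to $\pi\in\mathcal{RSP}(n)$ are exactly the runs of $\pi$; the following Python sketch (ours) does so for $n\leq 7$:

```python
from itertools import permutations

def canonical_form(pi):
    """Canonical form f of the merging-free partition whose blocks are the
    runs of the run-sorted permutation pi: f[v-1] is the run containing v."""
    f = [0] * len(pi)
    block = 1
    for j, x in enumerate(pi):
        if j > 0 and x < pi[j - 1]:
            block += 1
        f[x - 1] = block
    return f

def avoids_212(f):
    return not any(f[a] == f[c] > f[b]
                   for a in range(len(f))
                   for b in range(a + 1, len(f))
                   for c in range(b + 1, len(f)))

def weakly_unimodal(f):
    return any(f[:m] == sorted(f[:m]) and f[m:] == sorted(f[m:], reverse=True)
               for m in range(len(f) + 1))

ok = True
for n in range(1, 8):
    for pi in permutations(range(1, n + 1)):
        starts = [x for j, x in enumerate(pi) if j == 0 or x < pi[j - 1]]
        if starts == sorted(starts):            # pi is run-sorted
            f = canonical_form(pi)
            ok = ok and (avoids_212(f) == weakly_unimodal(f))
print(ok)  # True
```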
## 6 The Exhaustive Generation
We used the results presented here – and in particular the construction shown
in the proof of the recurrence relation for the sets $\mathcal{RSP}(n,k)$ – to
implement an algorithm to generate these objects, that is, an algorithm which,
for any fixed integer $n$, solves the following problem:
Problem: Run-Sorted-Permutations-Generation
Input: an integer $n$
Output: the set of all run-sorted permutations $\mathcal{RSP}(n)$, partitioned
into the subsets $\mathcal{RSP}(n,k)$.
Compared to previous, more naive algorithms, this algorithm extends the range
of calculations that can be performed in acceptable time and allows various
conjectures to be confirmed for larger values of $n$. Rather than
implementing recursive algorithms we made use of dynamic programming and
obtained iterative algorithms. All algorithms represent a permutation as a
list of integers and a set of permutations as a list of permutations, hence as
a list of lists of integers. Algorithm 1 is used to generate run-sorted
permutations in $\mathcal{RSP}^{(1)}(n,k)\subseteq\mathcal{RSP}(n,k)$ from
$\mathcal{RSP}(n-1,k)$ based on the idea presented in Section 3.
Algorithm 1 Exhaustive Generation of Run-Sorted Permutations in
$\mathcal{RSP}^{(1)}(n,k)$ from the partitions of $\mathcal{RSP}(n-1,k)$
Procedure: FUNCTION_ONE$(\mathcal{RSP},n)$
$\mathcal{RSP}$ is a list of lists whose elements represent run-sorted
permutations of size $n-1$, and $n$ is the integer to be inserted so that the
number of blocks remains the same.
$L\longleftarrow[~{}]$
for $\pi$ in $\mathcal{RSP}$ do
for $t$ in $Range(Length(\pi)-1)$ do
if $\pi[t]>\pi[t+1]$ then
$\pi^{\prime}\longleftarrow Copy(\pi)$
$\pi^{\prime}.Insert(t+1,n)$
$L.Append(\pi^{\prime})$
end if
end for
$\pi^{\prime}\longleftarrow Copy(\pi)$
$\pi^{\prime}.Append(n)$
$L.Append(\pi^{\prime})$
end for
return $L$
The exhaustive generation of run-sorted permutations in
$\mathcal{RSP}^{(2)}(n,k)$ starts from the set
$[n-2]\times\mathcal{RSP}(n-2,k-1)$, that is, from a pair
$(p,\pi)\in[n-2]\times\mathcal{RSP}(n-2,k-1)$ we obtain a run-sorted
permutation $\pi^{\prime}\in\mathcal{RSP}^{(2)}(n,k)$ using Algorithm 2. The
idea is based on the operation given in section 3.
Algorithm 2 Generation of a run-sorted permutation of
$\mathcal{RSP}^{(2)}(n,k)$ from an element of
$[n-2]\times\mathcal{RSP}(n-2,k-1)$
Procedure: RUN_SORTED_PERMUTATION_SIZE_INC_BY_TWO$(\pi,p)$
$\pi$ is a run-sorted permutation in $\mathcal{RSP}(n-2,k-1)$ and $p$ is an
integer in $[n-2]$.
for $t$ in $Range(Length(\pi))$ do
if $\pi[t]>p$ then
$\pi[t]\longleftarrow\pi[t]+1$
end if
end for
$pos\longleftarrow Length(\pi)-1$
while $\pi[pos]>p$ do
$pos\longleftarrow pos-1$
end while
$\pi.Insert(pos+1,Length(\pi)+2)$
$\pi.Insert(pos+2,p+1)$
return $\pi$
Algorithm 3 calls Algorithm 2 and gives us the exhaustive generation algorithm
for the set of run-sorted permutations in $\mathcal{RSP}^{(2)}(n,k)$.
Algorithm 3 Exhaustive Generation of Run-sorted Permutations in
$\mathcal{RSP}^{(2)}(n,k)$
Procedure: FUNCTION_TWO$(\mathcal{RSP})$
$\mathcal{RSP}$ is a list of lists whose elements represent a run-sorted
permutation of $\mathcal{RSP}(n-2,k-1)$.
$L\leftarrow[~{}]$
for $\pi$ in $\mathcal{RSP}$ do
for $p$ in $Range(1,Length(\pi)+1)$ do
$\pi^{\prime}\longleftarrow$ RUN_SORTED_PERMUTATION_SIZE_INC_BY_TWO$(\pi,p)$
$L.Append(\pi^{\prime})$
end for
end for
return $L$
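Algorithms 2 and 3 can likewise be sketched in Python; `inc_by_two` and `function_two` are our hypothetical names for the two procedures:

```python
def inc_by_two(pi, p):
    """Algorithm 2: map a pair (p, pi) with pi in RSP(n-2, k-1) and
    p in [n-2] to a run-sorted permutation in RSP(n, k)."""
    pi = [v + 1 if v > p else v for v in pi]   # relabel: make room for p+1
    pos = len(pi) - 1
    while pi[pos] > p:                         # rightmost entry <= p
        pos -= 1
    n = len(pi) + 2
    pi.insert(pos + 1, n)                      # n opens the new run
    pi.insert(pos + 2, p + 1)
    return pi

def function_two(perms):
    """Algorithm 3: apply Algorithm 2 for every pi and every p in [n-2]."""
    return [inc_by_two(list(pi), p)
            for pi in perms
            for p in range(1, len(pi) + 1)]
```

For instance, `inc_by_two([1, 2], 1)` gives `[1, 4, 2, 3]`, an element of $\mathcal{RSP}(4,2)$.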
Now we present the main exhaustive generation algorithm that generates all and
only those run-sorted permutations in $\mathcal{RSP}(n)$ for all possible $n$.
The algorithm returns a list of lists of lists of integers, namely the list
$\mathcal{RSP}(n)=[\mathcal{RSP}(n,1),\mathcal{RSP}(n,2),\ldots,\mathcal{RSP}(n,n)]$
where each element $\mathcal{RSP}(n,k)$ is the list of all run-sorted
permutations of $[n]$ having $k$ runs. Since $\mathcal{RSP}(n,k)=\varnothing$
if $k>\lceil{\frac{n}{2}}\rceil$, the algorithm can be optimized by computing
only those sets $\mathcal{RSP}(n,k)$ that are not empty. As we said, the
algorithm is based on dynamic programming. We store the values of the lists
$\mathcal{RSP}(n-1)$ and $\mathcal{RSP}(n-2)$ and use them to compute the list
$\mathcal{RSP}(n)$. In order to save memory, only the last two lists are kept
at any time: the list $\mathcal{RSP}(n-1)$ is stored in a variable
called LastRow and the list $\mathcal{RSP}(n-2)$ is stored in a variable
called RowBeforeLast, while the list $\mathcal{RSP}(n)$ is assigned to the
variable CurrentRow. At the end of each iteration, the three variables are
shifted.
Algorithm 4 Exhaustive Generation of Run-Sorted Permutations
Procedure: RUN_SORTED_PERMUTATIONS($n$)
$n$ is a positive integer; the procedure computes the list $\mathcal{RSP}(n)$,
partitioned into the sublists $\mathcal{RSP}(n,k)$.
RowBeforeLast$\longleftarrow[[[1]]]$
LastRow$\longleftarrow[[[1,2]],[~{}]]$
for $i$ in $Range(3,n+1)$ do
CurrentRow$\longleftarrow[~{}]$
CurrentRow$.Append$(FUNCTION_ONE(LastRow$[0],i$))
for $j$ in $Range(1,\lceil\frac{i}{2}\rceil)$ do
CurrentRow$.Append($FUNCTION_ONE$($LastRow$[j],i)~{}+~{}$FUNCTION_TWO$($RowBeforeLast$[j-1]))$
end for
RowBeforeLast$\longleftarrow$ LastRow
LastRow$\longleftarrow$ CurrentRow
end for
return LastRow
All these algorithms have been implemented in Python.
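Putting the pieces together, the following self-contained Python sketch mirrors Algorithm 4 (the helper functions are restated so the sketch runs on its own; padding each row with empty lists for $k>\lceil i/2\rceil$ and returning LastRow at the end are implementation details of ours):

```python
from math import ceil

def function_one(perms, n):
    """Algorithm 1: insert n keeping the number of runs fixed."""
    out = []
    for pi in perms:
        for t in range(len(pi) - 1):
            if pi[t] > pi[t + 1]:
                out.append(pi[:t + 1] + [n] + pi[t + 1:])
        out.append(pi + [n])
    return out

def function_two(perms):
    """Algorithms 2-3: grow size by two and runs by one, for all p."""
    out = []
    for pi in perms:
        for p in range(1, len(pi) + 1):
            new = [v + 1 if v > p else v for v in pi]
            pos = len(new) - 1
            while new[pos] > p:
                pos -= 1
            new.insert(pos + 1, len(pi) + 2)
            new.insert(pos + 2, p + 1)
            out.append(new)
    return out

def run_sorted_permutations(n):
    """Algorithm 4: return RSP(n) as [RSP(n,1), ..., RSP(n,n)]."""
    row_before_last = [[[1]]]          # RSP(1)
    last_row = [[[1, 2]], []]          # RSP(2)
    if n == 1:
        return row_before_last
    if n == 2:
        return last_row
    for i in range(3, n + 1):
        current = [function_one(last_row[0], i)]
        for j in range(1, ceil(i / 2)):
            current.append(function_one(last_row[j], i)
                           + function_two(row_before_last[j - 1]))
        current += [[] for _ in range(i - len(current))]  # empty for large k
        row_before_last, last_row = last_row, current
    return last_row
```

Running `run_sorted_permutations(5)` reproduces the counts $a_{5,1}=1$, $a_{5,2}=11$, $a_{5,3}=3$, $a_{5,4}=0$ of Example 43.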
###### Example 43.
When RUN_SORTED_PERMUTATIONS($n$) is executed for $n=5$ we get the list
$\mathcal{RSP}(5)$:
$\displaystyle[[[1,2,3,4,5]],$
$\displaystyle[[1,3,4,5,2],[1,3,4,2,5],[1,3,5,2,4],[1,3,2,4,5],[1,4,5,2,3],[1,4,2,3,5],[1,2,4,5,3],$
$\displaystyle[1,2,4,3,5],[1,5,2,3,4],[1,2,5,3,4],[1,2,3,5,4]],$
$\displaystyle[[1,5,2,4,3],[1,4,2,5,3],[1,3,2,5,4]],$ $\displaystyle[~{}]].$
Observe that $a_{5,1}=1,a_{5,2}=11,a_{5,3}=3,a_{5,4}=0$, and
$a_{5}=1+11+3+0=15$.
## 7 Acknowledgements
The first author is grateful for the financial support extended by IRIF, the
cooperation agreement between International Science Program (ISP) at Uppsala
University and Addis Ababa University, and the support by the Wenner-Gren
Foundations. We appreciate the hospitality we got from IRIF during the
research visits of the first author. We also thank our colleagues from CoRS
(Combinatorial Research Studio) for valuable discussions and comments, and in
particular we thank Dr. Per Alexandersson and Prof. Jörgen Backelin for their
useful discussions and suggestions.
## References
* [1] P. Alexandersson and O. Nabwanda, Peaks are preserved under run-sorting, Enumerative Combinatorics and Applications, 2:1 (2022), Article #S2R2.
* [2] J. Baril and V. Vajnovszki, A permutation code preserving a double Eulerian bistatistic, Discrete Applied Mathematics, 224 (2017), 9-15.
* [3] H. W. Becker, Planar rhyme schemes, Bull. Amer. Math. Soc., 58 (1952), 39. Math. Mag., 22 (1948-49), 23-26.
* [4] F. Beyene and R. Mantaci, “Nom” code and pattern avoidance in a Catalan numbered class of permutations, 2021, https://arxiv.org/abs/2111.11527v1.
* [5] M. Bona, Introduction to Enumerative Combinatorics, The McGraw Hill Companies, 2007.
* [6] D. Callan, Pattern avoidance in “flattened” partitions. Discrete Math., 309(2009), 4187-4191.
* [7] G. Kreweras, Sur les partitions non croisées d’un cycle, Discrete Mathematics, 4 (1972), 333-350.
* [8] Z. Lin and Sh. Fu, On 1212-avoiding restricted growth functions, Electronic Journal of Combinatorics, 24(1) (2017), $\\#$P1.53.
* [9] T. Mansour, Combinatorics of Set Partitions. Taylor & Francis Group, LLC, 2013.
* [10] T. Mansour, M. Shattuck and S. Wagner, Counting subwords in flattened partitions of sets, Discrete Mathematics 338 (2015), 1989-2005.
* [11] R. Mantaci and F. Rakotondrajao, A permutation representation that knows what “Eulerian” means, Discrete Mathematics and Theoretical Computer Science, 4 (2001), 101-108.
* [12] A. O. Munagi, Set partitions and separations, International Journal of Mathematics and Mathematical Sciences 3 (2005), 451-463.
* [13] O. Nabawanda, F. Rakotondrajao, and A.S. Bamunoba, Run distribution over flattened partitions, Journal of Integer Seq., 23 (2020), Article 20.9.6
* [14] M. Orlov, Efficient generation of set partitions, (2002).
* [15] G. Rota, The number of partitions of a set, Amer. Math. Monthly 71 (1964), 498-504.
* [16] R. Simion, Combinatorial statistics on non-crossing partitions, Journal of Combinatorial Theory, 66 (1994), 270-301.
* [17] R. Simion, Non-crossing partitions, Discrete Mathematics, 217 (2000), 367-409.
* [18] N. J. A. Sloane et al. , The on-line encyclopedia of integer sequences, Available at http://oeis.org, 2019.
* [19] R. P. Stanley, Enumerative Combinatorics, 1, Cambridge Studies of Advanced Mathematics, 2011.
2010 Mathematics Subject Classification: Primary 05A05; Secondary 05A15,
05A19.
_Keywords:_ merging-free partition, canonical form, run-sorted permutation,
non-crossing partition, algorithm.
# Model Compression for Domain Adaptation through Causal Effect Estimation
Guy Rotman, Amir Feder, Roi Reichart
Faculty of Industrial Engineering and Management, Technion, IIT
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
Authors contributed equally.
###### Abstract
Recent improvements in the predictive quality of natural language processing
systems are often dependent on a substantial increase in the number of model
parameters. This has led to various attempts of compressing such models, but
existing methods have not considered the differences in the predictive power
of various model components or in the generalizability of the compressed
models. To understand the connection between model compression and out-of-
distribution generalization, we define the task of compressing language
representation models such that they perform best in a domain adaptation
setting. We choose to address this problem from a causal perspective,
attempting to estimate the average treatment effect (ATE) of a model
component, such as a single layer, on the model’s predictions. Our proposed
ATE-guided Model Compression scheme (AMoC), generates many model candidates,
differing by the model components that were removed. Then, we select the best
candidate through a stepwise regression model that utilizes the ATE to predict
the expected performance on the target domain. AMoC outperforms strong
baselines on dozens of domain pairs across three text classification and
sequence tagging tasks. Our code and data are available at:
https://github.com/rotmanguy/AMoC. This is a pre-MIT Press publication
version.
## 1 Introduction
The rise of deep neural networks (DNNs) has transformed the way we represent
language, allowing models to learn useful features directly from raw inputs.
However, recent improvements in the predictive quality of language
representations are often related to a substantial increase in the number of
model parameters. Indeed, the introduction of the Transformer architecture
Vaswani et al. (2017) and attention-based models Devlin et al. (2019); Liu et
al. (2019); Brown et al. (2020) has improved performance on most natural
language processing (NLP) tasks, while facilitating a large increase in model
sizes.
Since large models require a significant amount of computation and memory
during training and inference, there is a growing demand for compressing such
models while retaining the most relevant information. While recent attempts
have shown promising results Sanh et al. (2019), they have some limitations.
Specifically, they attempt to mimic the behavior of the larger models without
trying to understand the information preserved or lost in the compression
process.
In compressing the information represented in billions of parameters, we
identify three main challenges. First, current methods for model compression
are not interpretable. While the importance of different model parameters is
certainly not uniform, it is hard to know a priori which of the model
components should be discarded in the compression process. This notion of
feature importance has not yet trickled down into compression methods, and
they often attempt to solve a dimensionality reduction problem where a smaller
model aims to mimic the predictions of the larger model. Nonetheless, not all
parameters are born equal, and only a subset of the information captured in
the network is actually useful for generalization Frankle and Carbin (2018).
The second challenge we observe in model compression is out-of-distribution
generalization. Typically, compressed models are tested for their in-domain
generalization. However, in reality the distribution of examples often varies
and is different than that seen during training. Without testing for the
generalization of the compressed models on different test-set distributions,
it is hard to fully assess what was lost in the compression process. The
setting explored in domain adaptation provides us with a platform to test the
ability of the compressed models to generalize across-domains, where some
information that the model has learned to rely on might not exist. Strong
model performance across domains provides a stronger signal on retaining
valuable information.
Lastly, another challenge we identify in training and selecting compressed
models is confidence estimation. In trying to understand what gives large
models the advantage over their smaller competitors, recent probing efforts
have discovered that commonly used models such as BERT Devlin et al. (2019),
learn to capture semantic and syntactic information in different layers and
neurons across the network Rogers et al. (2021). While some features might be
crucial for the model, others could learn spurious correlations that are only
present in the training set and are absent in the test set Kaushik et al.
(2019). Such cases have led to some intuitive common practices such as keeping
only layers with the same parity or the top or bottom layers Fan et al.
(2019); Sajjad et al. (2020). Those practices can be good on average, but do
not provide model confidence scores or success rate estimates on unseen data.
Our approach addresses each of the three main challenges we identify, as it
allows estimating the marginal effect of each model component, is designed and
tested for out-of-distribution generalization, and provides estimates for each
compressed model performance on an unlabeled target domain. We dive here into
the connection between model compression and out-of-distribution
generalization, and ask whether compression schemes should consider the effect
of individual model components on the resulting compressed model.
Particularly, we present a method that attempts to compress a model while
maintaining components that can generalize well across domains.
Inspired by causal inference Pearl (1995), our compression scheme is based on
estimating the average effect of model components on the decisions the model
makes, at both the source and target domains. In causal inference, we measure
the effect of interventions by comparing the difference in outcome between the
control and treatment groups. In our setting, we take advantage of the fact
that we have access to unlabeled target examples, and treat the model’s
predictions as our outcome variable. We then try to estimate the effect of a
subset of the model components, such as one or more layers, on the model’s
output.
To do that, we propose an approximation of a counterfactual model where a
model component of choice is removed. We train an instance of the model
without that component and keep everything else equal apart from the input and
output to that component, which allows us to perform only a small number of
gradient steps. Using this approximation, we then estimate the average
treatment effect (ATE) by comparing the predictions of the base model to those
of its counterfactual instance.
Since our compressed models are very efficiently trained, we can generate a
large number of such models per each source-target domain pair. We then train
a regression model on our training domain pairs in order to predict how well a
compressed model would generalize from a source to a target domain, using the
ATE as well as other variables. This regression model can then be applied to
new source-target domain pairs in order to select the compressed model that
best supports cross-domain generalization.
To organize our contributions, we formulate three research questions:
1. 1.
Can we produce a compressed model that outperforms all baselines in out-of-
distribution generalization?
2. 2.
Does the model component we decide to remove indeed hurt performance the
least?
3. 3.
Can we use the average treatment effect to guide our model selection process?
In $\S$ 6 we directly address each of the three research questions, and
demonstrate the usefulness of our method, ATE-guided model compression (AMoC),
to improve model generalization.
## 2 Previous Work
Previous work on the intersection of neural model compression, domain
adaptation and causal inference is limited, as our application of causal
inference to model compression and our discussion of the connection between
compression and cross-domain generalization are novel. However, there is an
abundance of work in each field on its own, and on the connection between
domain adaptation and causal inference. Since our goal is to explore the
connection between compression and out-of-distribution generalization, as
framed in the setting of domain adaptation, we survey the literature on model
compression and the connection between generalization, causality and domain
adaptation.
### 2.1 Model Compression
NLP models have grown exponentially in size, from less than a
million parameters a few years ago to hundreds of billions. Since the
introduction of the Transformer architecture, this trend has been
strengthened, with some models reaching more than 175 billion parameters Brown
et al. (2020). As a result, there has been a growing interest in compressing
the information captured in Transformers into smaller models Chen et al.
(2020); Ganesh et al. (2020); Sun et al. (2020).
Usually, such smaller models are trained using the base model as a teacher,
with the smaller student model learning to predict its output probabilities
Hinton et al. (2015); Jiao et al. (2020); Sanh et al. (2019). However, even if
the student closely matches the teacher’s soft labels, their internal
representations may be considerably different. This internal mismatch can
undermine the generalization capabilities originally intended to be
transferred from the teacher to the student Aguilar et al. (2020); Mirzadeh et
al. (2020).
As an alternative, we try not to interfere or alter the learned representation
of the model. Compression schemes such as those presented in Sanh et al.
(2019) discard model components randomly. Instead, we choose to focus on
understanding which components of the model capture the information that is
most useful for it to perform well across domains, and hence should not be
discarded.
### 2.2 Domain Adaptation and Causality
Domain adaptation is a longstanding challenge in machine learning (ML) and
NLP, which deals with cases where the train and test sets are drawn from
different distributions. A great effort has been dedicated to exploit labels
from both source and target domains for that purpose Daumé III et al. (2010);
Sato et al. (2017); Cui et al. (2018); Lin and Lu (2018); Wang et al. (2018).
However, a much more challenging and realistic scenario, also termed as
unsupervised domain adaptation, occurs when no labeled target samples exist
Blitzer et al. (2006); Ganin et al. (2016); Ziser and Reichart (2017, 2018a,
2018b, 2019); Rotman and Reichart (2019); Ben-David et al. (2020). In this
setting, we have access to labeled and unlabeled data from the source domain
and to unlabeled data from the target, and models are tested by their
performance on unseen examples from the target domain.
A closely related task is domain adaptation success prediction. This task
explores the possibility of predicting the expected performance degradation
between source and target domains McClosky et al. (2010); Elsahar and Gallé
(2019). Similar to predicting performance in a given NLP task, methods for
predicting domain adaptation success often rely on in-domain performance and
distance metrics estimating the difference between the source and target
distributions Reichart and Rappoport (2007); Ravi et al. (2008); Louis and
Nenkova (2009); Van Asch and Daelemans (2010); Xia et al. (2020). While these
efforts have demonstrated the importance of out-of-domain performance
prediction, they have not been made as far as we know in relation to model
compression.
As the fundamental purpose of domain adaptation algorithms is improving the
out-of-distribution generalization of learning models, it is often linked with
causal inference Johansson et al. (2016). In causal inference we typically
care about estimating the effect that an intervention on a variable of
interest would have on an outcome Pearl (2009). Recently, using causal methods
to improve the out-of-distribution performance of trained classifiers is
gaining traction Rojas-Carulla et al. (2018); Wald et al. (2021).
Indeed, recent papers applied a causal approach to domain adaptation. Some
researchers proposed using causal graphs to predict under distribution shifts
Schölkopf et al. (2012) and to understand the type of shift Zhang et al.
(2013). Adapting these ideas to computer vision, Gong et al. (2016) were one
of the first to propose a causal graph describing the generative process of an
image as being generated by a “domain”. The causal graph served for learning
invariant components that transfer across domains. Since then, the notion of
invariant prediction has emerged as an important operational concept in causal
inference Peters et al. (2017). This idea has been used to learn classifiers
that are robust to domain shifts and can perform well on unseen target
distributions Gong et al. (2016); Magliacane et al. (2018); Rojas-Carulla et
al. (2018); Greenfeld and Shalit (2020).
Here we borrow ideas from causality to help us reason on the importance of
specific model components, such as individual layers. That is, we estimate the
effect of a given model component (denoted as the treatment) on the model’s
predictions in the unlabeled target domain, and use the estimated effect as an
evaluation of the importance of this component. Our treatment effect
estimation method is inspired by previous causal model explanation work Goyal
et al. (2019); Feder et al. (2021), although our algorithm is very different.
## 3 Causal Terminology
Causal methodology is most commonly used in cases where the goal is estimating
effects on real-world outcomes, but it can be adapted to help us understand
and explain what affects NLP models Feder et al. (2021). Specifically, we can
think of intervening on a model and altering its components as a causal
question, and measure the effect of this intervention on model predictions. A
core benefit of this approach is that we can estimate treatment effects on
model’s predictions without the need for manually-labeled target data.
Borrowing causal methodology into our setting, we treat model components as
our treatment, and try to estimate the effect of removing them on our model’s
predictions. The predictions of a model are driven by its components, and by
changing one component and holding everything else equal, we can estimate the
effect of this intervention. We can use this estimation in deciding which
model component should be kept in the compression process.
As the link between model compression and causal inference was not explored
previously, we provide here a short introduction to causal inference and its
basic terminology, focusing on its application to our use case. We then
discuss the connection to Pearl’s _do_ -operator Pearl et al. (2009) and the
estimation of treatment effects.
Imagine we have a model $m$ that classifies examples to one of $L$ classes.
Given a set $\mathcal{C}$ of $K$ model components, which we hypothesize might
affect the model’s decision, we denote the set of binary variables
$I_{c}=\\{I_{c_{j}}\in\\{0,1\\}|j\in\\{1,\ldots,K\\}\\}$, where each
variable indicates whether the corresponding component is included in the model, i.e., if
$I_{c_{j}}=1$ then the $j$-th component ($c_{j}$) is in the model. Our goal is
to assert how the model’s predictions are affected by the components in
$\mathcal{C}$. As we are interested in the effect on the class probability
assigned by $m$, we measure this probability for an example $x$, and denote it
for a class $l$ as $z(m(x))_{l}$ and for all $L$ classes as $\vec{z}(m(x))$.
Using this setup, we can now define the ATE, the common metric used when
estimating causal effects. ATE is the difference in mean outcomes between the
treatment and control groups, and using _do_ -calculus Pearl (1995) we can
define it as follows:
###### Definition 1 (Average Treatment Effect (ATE)).
The average treatment effect of a binary treatment $I_{c_{j}}$ on the outcome
$\vec{z}(m(x))$ is:
$\displaystyle\text{ATE}({c_{j}})=$
$\displaystyle\mathbb{E}\left[\vec{z}(m(x))|do(I_{c_{j}}=1)\right]$ (1)
$\displaystyle-\mathbb{E}\left[\vec{z}(m(x))|do(I_{c_{j}}=0)\right],$
where the do-operator is a mathematical operator introduced by Pearl (1995),
which indicates that we intervene on $c_{j}$ such that it is included
($do(I_{c_{j}}=1)$) or not ($do(I_{c_{j}}=0)$) in the model.
While the setup usually explored with _do_ -calculus involves a fixed joint-
distribution where treatments are assigned to individuals (or examples), we
borrow intuition from a specialized case where interventions are made on the
process which generates outcomes given examples. This type of an intervention
is called Process Control, and was proposed by Pearl et al. (2009) and further
explored by Bottou et al. (2013). This unique setup is designed to improve our
understanding of the behavior of complex learning systems and predict the
consequences of changes made to the system. Recently, Feder et al. (2021) used
it to intervene on language representation models, generating a counterfactual
representation model through an adversarial training algorithm which biases
the representation model to forget information about treatment concepts and
maintain information about control concepts.
In our approach we intervene on the $j$-th component, by holding the rest of
the model fixed and training only the parameters that control the input and
output to that component. This is crucial for our estimation procedure as we
want to know the effect of the $j$-th component on a specific model instance.
This effect can be computed by comparing the predictions of the original model
instance to those of the intervened model (see below). This computation is
fundamentally different from measuring the conditional probability where the
$j$-th component is not in the model by estimating
$\mathbb{E}\left[\vec{z}(m(x))|I_{c_{j}}=0\right]$.
## 4 Methodology
We start by describing the task of compressing models such that they perform
well on out-of-distribution examples, detailing the domain adaptation
framework we focus on. Then, we describe our compression scheme, designed to
allow us to approximate the ATE and responsible for producing compressed model
candidates. Finally, we propose a regression model that uses the ATE and other
features to predict a candidate model’s performance on a target domain. This
regression allows us to select a strong candidate model.
### 4.1 Task Definition and Framework
To test the ability of a compressed model to generalize on out-of-distribution
examples, we choose to focus on a domain adaptation setting. An appealing
property of domain adaptation setups is that they allow us to measure out-of-
distribution performance in a very natural way by training on one domain and
testing on another.
In our setup, during training, we have access to $n$ source-target domain
pairs $(\mathbf{S}^{i},\mathbf{T}^{i})_{i=1}^{n}$. For each pair we assume to
have labeled data from the source domains $(\mathbf{L_{S^{i}}})_{i=1}^{n}$ and
unlabeled data from the source and target domains $(\mathbf{U_{S^{i}}}$,
$\mathbf{U_{T^{i}}})_{i=1}^{n}$. We also assume to have held-out labeled data
for all domains, for measuring test performance
$(\mathbf{H_{S^{i}}},\mathbf{H_{T^{i}}})_{i=1}^{n}$. At test time we are given
an unseen domain pair $(\mathbf{S^{n+1}},\mathbf{T^{n+1}})$ with labeled
source data $\mathbf{L_{S^{n+1}}}$ and unlabeled data from both domains
$\mathbf{U_{S^{n+1}}}$ and $\mathbf{U_{T^{n+1}}}$, respectively. Our goal is
to classify examples on the unseen target domain $\mathbf{T^{n+1}}$ using a
compressed model $m^{n+1}$ trained on the new source domain.
For each domain pair in $(\mathbf{S^{i}},\mathbf{T^{i}})_{i=1}^{n}$, we
generate a set of $K$ candidate models
$M^{i}=\\{m_{1}^{i},\ldots,m_{K}^{i}\\}$, differing by the model components
that were removed from the base model $m^{i}_{B}$. For each candidate, we
compute the ATE and other relevant features which we discuss in $\S$ 4.3.
Then, using the training domain pairs, for which we have access to a limited
amount of labeled target data, we train a stepwise linear regression to
predict the performance of all candidate models in $\\{M^{i}\\}_{i=1}^{n}$ on
their target domain. Finally, at test time, after computing the regression
features on the unseen source-target pair, we use the trained regression model
to select the compressed model $(m^{n+1})^{*}\in M^{n+1}$ that is expected to
perform best on the unseen unlabeled target domain.
While this task definition relies on a limited number of labeled examples from
some target domains at training time, at test time we only use labeled
examples from the source domain and unlabeled examples from the target. We
elaborate on our compression scheme, responsible for generating the compressed
model candidates in $\S$ 4.2. We then describe the regression features and the
regression model in $\S$ 4.3 and $\S$ 4.4, respectively.
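To make the selection step concrete, here is a deliberately simplified, single-feature stand-in for it (ordinary least squares on one hypothetical candidate feature such as an ATE magnitude; the actual method uses a stepwise regression over several features, and all numbers below are made up):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a * x + b, single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def select_candidate(train_feats, train_perf, cand_feats):
    """Fit on training domain pairs, then pick the candidate compressed
    model with the highest predicted target-domain performance."""
    a, b = fit_line(train_feats, train_perf)
    preds = [a * f + b for f in cand_feats]
    return max(range(len(preds)), key=preds.__getitem__)

# Made-up training data: candidates with a small feature value happened to
# generalize better, so the regression prefers the candidate with 0.08.
train_feats = [0.05, 0.10, 0.20, 0.30]
train_perf = [0.84, 0.82, 0.76, 0.70]
best = select_candidate(train_feats, train_perf, [0.25, 0.08, 0.15])
# best == 1
```

The real regression operates on all candidates from all training domain pairs at once, but the selection logic at test time is the same argmax over predicted performance.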
(a) Base Model
(b) Compressed Model
Figure 1: An example of our method with a 3-layer encoder when considering the
removal of layer components. (a) At first, the base model is trained (Alg. 1,
step $1(a)$). (b) The second encoder layer is removed from the base model, and
the first layer is connected to the final encoder layer. The compressed model
is then fine-tuned for one or more epochs, where only the parameters of the
first layer and the decoder are updated (Alg. 1, step $1(b)$). We mark frozen
layers and non-frozen layers with snow-flakes and fire symbols, respectively.
### 4.2 Compression Scheme
Our compression scheme (AMoC) assumes to operate on a large classifier,
consisting of an encoder-decoder architecture, that serves as the base model
being compressed. In such models, the encoder is the language representation
model (e.g., BERT), and the decoder is the task classifier. Each input
sentence $x$ to the base model $m^{i}_{B}$ is encoded by the encoder $e$.
Then, the encoded sentence $e(x)$ is passed through the decoder $d$ to compute
a distribution over the label space $L$:
$\vec{z}(m^{i}_{B}(x))=Softmax(d(e(x)))$. AMoC is designed to remove a set of
encoder components, and can in principle be used with any language encoder.
As described in Algorithm 1, AMoC generates candidate compressed versions of
$m^{i}_{B}$. In each iteration it selects from $\mathcal{C}$, the set
containing subsets of encoder components, a candidate $c_{k}\in\mathcal{C}$ to
be removed. For example, if components correspond to layers, and we wish to
remove an individual layer from a 12-layer encoder, then
$\mathcal{C}=\\{\\{i\\}|i\in\\{1,\ldots,12\\}\\}$. The goal of this process is
to generate many compressed model candidates, such that the $k$-th candidate
$c_{k}$ differs from the base model $m^{i}_{B}$ only by the effect of the
parameters in $c_{k}$ on the model’s predictions. After generating these
candidates, AMoC tries to choose the best performing model for the unseen
target domain.
Algorithm 1 ATE-Guided Model Compression (AMoC)
Input: Domain pairs $(\mathbf{S^{i}},\mathbf{T^{i}})_{i=1}^{n+1}$ with Labeled
source data $\mathbf{(L_{S^{i}})_{i=1}^{n+1}}$, Unlabeled source and target
data $(\mathbf{U_{S^{i}}},\mathbf{U_{T^{i}}})_{i=1}^{n+1}$, Labeled held-out
source and target data $(\mathbf{H_{S^{i}}},\mathbf{H_{T^{i}}})_{i=1}^{n}$,
and a set $\mathcal{C}$ of subsets of encoder components to be removed.
Algorithm:
1. 1.
For each domain pair in $(\mathbf{S^{i}},\mathbf{T^{i}})_{i=1}^{n}$
1. (a)
Train the base model $m^{i}_{B}$ on $\mathbf{L_{S^{i}}}$.
2. (b)
For $c_{k}\in\mathcal{C}$
\- Freeze all encoder parameters.
\- Remove every component in $c_{k}$ from $m^{i}_{B}$.
\- Connect and unfreeze the remaining components according to $\S$ 4.2.
\- Fine-tune the new model $m^{i}_{k}$ on $\mathbf{L_{S^{i}}}$ for one or more
epochs.
\- Compute $\widehat{ATE}_{S^{i}}(c_{k})$ and $\widehat{ATE}_{T^{i}}(c_{k})$
according to Eq. 2, using $\mathbf{U_{S^{i}}}$ and $\mathbf{U_{T^{i}}}$.
\- Compute the remaining features in 4.3.
2. 2.
Train the stepwise regression according to Eq. 4, using all compressed models
generated in step 1.
3. 3.
Repeat steps 1(a)-1(b) for $(\mathbf{S^{n+1}},\mathbf{T^{n+1}})$ and choose
$(m^{n+1})^{*}$ with the highest expected performance according to the
regression model.
When generating the $k$-th compressed model of the $i$-th source-target pair,
we start by removing all parameters in $c_{k}$ from the computational graph of
$m^{i}_{B}$. Then, we connect the predecessor of each detached component from
$c_{k}$ to its successor in the graph, which yields the new ${m^{i}_{k}}$ (see
Figure 1). To estimate the effect of $c_{k}$ on the predictions of
$m^{i}_{B}$, we freeze all remaining model parameters in ${m^{i}_{k}}$ and
fine-tune it for one or more epochs, training only the decoder and the
parameters of the new connections between the predecessors and successors of
the removed components. An advantage of this procedure is that we can
efficiently generate many model candidates. Figure 1 demonstrates this process
on a simple architecture when considering the removal of layer components.
Guiding our model selection step is the ATE of $c_{k}$ on the base model
$m^{i}_{B}$. The generation of each compressed candidate ${m^{i}_{k}}$ is
designed to allow us to estimate the effect of $c_{k}$ on the model’s
predictions. By comparing the predictions of $m^{i}_{B}$ and the compressed
model ${m^{i}_{k}}$ on many examples, we mimic the process of
generating control and treatment groups. As is done in controlled experiments,
we compare examples that are given a treatment, i.e., encoded by the
compressed model ${m^{i}_{k}}$, and examples that were encoded by the base
model $m^{i}_{B}$. Intervening on the example-generating process was explored
previously in the causality literature by Bottou et al. (2013); Feder et al.
(2021).
Alongside the ATE, we compute other features that might be predictive of a
compressed model’s performance on an unlabeled target domain, which we discuss
in detail in $\S$ 4.3. Using those features and the ATE, we train a linear
stepwise regression to predict a compressed model’s performance on target
domains ($\S$ 4.4). Finally, at test time AMoC is given an unseen domain pair
and applies the regression in order to choose the compressed source model
expected to perform best on the target domain. Using the regression, we can
estimate the power of the ATE in predicting model performance and answer
Question $3$ of $\S$ 1.
In this paper, we choose to focus on the removal of sets of layers, as done in
previous work Fan et al. (2019); Sanh et al. (2019); Sajjad et al. (2020).
While our method can support any other parameter partitioning, such as
clusters of neurons, we leave this for future work. In the case of layers, to
establish the new compressed model we simply connect the remaining layers
according to their hierarchy. For example, for a base model with a 12-layer
encoder and $c=\\{2,3,7\\}$ the unconnected components are
$\\{1\\},\\{4,5,6\\}$ and $\\{8,9,10,11,12\\}$. Layer $1$ will then be
connected to layer $4$, and layer $6$ to layer $8$. The compressed model will
then be trained for one or more epochs, where only the decoder and layers $1$
and $6$ (using the original indices) are fine-tuned. When layer 1 is removed,
the embedding layer is connected to the first unremoved layer and is
fine-tuned.
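The reconnection rule above can be sketched in a few lines of Python (the function name and 1-based layer indexing are ours; the 12-layer example with $c=\\{2,3,7\\}$ from the text is reproduced):

```python
def layer_connections(num_layers, removed):
    """Given removed layer indices (1-based), return the remaining-layer
    components and the new (predecessor, successor) connections."""
    remaining = [l for l in range(1, num_layers + 1) if l not in removed]
    components, current = [], [remaining[0]]
    for l in remaining[1:]:
        if l == current[-1] + 1:
            current.append(l)  # extend the current consecutive run
        else:
            components.append(current)
            current = [l]
    components.append(current)
    # Connect the last layer of each component to the first of the next.
    connections = [(a[-1], b[0]) for a, b in zip(components, components[1:])]
    return components, connections

comps, conns = layer_connections(12, {2, 3, 7})
print(comps)  # [[1], [4, 5, 6], [8, 9, 10, 11, 12]]
print(conns)  # [(1, 4), (6, 8)]
```

When layer 1 is among the removed layers, `remaining[0]` is simply the first unremoved layer, matching the embedding-layer handling described above.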
### 4.3 Regression Features
Apart from the ATE, which estimates the impact of the intervention on the base
model, we naturally need to consider other features. Indeed, without any
information on the target domain, predicting that a model will perform the
same as in the source domain could be a reasonable first-order approximation
McClosky et al. (2010). Also, adding information on the distance between the
source and target distributions Van Asch and Daelemans (2010) or on the type
of components that were removed (such as the number of layers) might also be
useful for predicting the model’s success. We present here all the features we
consider, and discuss their usefulness in predicting model performance. To
answer Q$3$, we need to show that given all this information, the ATE is still
predictive for the model’s performance in the target domain.
#### ATE
Our main variable of interest is the average treatment effect of the
components in $c_{k}$ on the predictions of the model. In our compression
scheme, we estimate for a specific domain $d\in\\{S^{i},T^{i}\\}$ the ATE for
each compressed model ${m^{i}_{k}}$ by comparing it to the base model
$m^{i}_{B}$:
$\widehat{ATE}_{d}(c_{k})=\frac{1}{|\mathbf{U_{d}}|}\sum_{x\in\mathbf{U_{d}}}\langle\vec{z}\big{(}m^{i}_{B}(x)\big{)}-\vec{z}\big{(}{m^{i}_{k}}(x)\big{)}\rangle$
(2)
where the operator $\langle\rangle$ denotes the total variation distance: a
summation over the absolute values of the vector coordinates. For example, for
a three-class prediction and a single example, where the probability
distributions for the base and the compressed models are $(0.7,0.2,0.1)$ and
$(0.5,0.1,0.4)$, respectively, $\widehat{ATE}_{d}(c_{k})$ =
$|0.7-0.5|+|0.2-0.1|+|0.1-0.4|=0.6$. As we are interested in the effect on the
probability assigned to each class by the classifier $m^{i}_{k}$, we measure
the class probability of its output for an example $x$, as proposed by Feder
et al. (2021). (For sequence tagging tasks, we first compute sentence-level
ATEs by averaging the word-level probability differences, and then average
those ATEs to obtain the final ATE.)
In our regression model we choose to include the ATE of the source and the
target domains, $\widehat{ATE}_{S^{i}}(c_{k})$ (estimated on
$\mathbf{U_{S^{i}}}$) and $\widehat{ATE}_{T^{i}}(c_{k})$ (estimated on
$\mathbf{U_{T^{i}}}$) , respectively. We note that in computing the ATE we
only require the predictions of the models, and do not need labeled data.
#### In-domain Performance
A common metric for selecting a classification model is its performance on a
held-out set. Indeed, in cases where we do not have access to any information
from the target domain, the naive choice is the best performing model on a
held-out source domain set Elsahar and Gallé (2019). Hence, for every
$c_{k}\in\mathcal{C}$ we compute the performance of ${m^{i}_{k}}$ on
$\mathbf{H_{S^{i}}}$.
#### Domain Classification
An important variable when predicting model performance on an unseen test
domain is the distance between its training domain and that test domain
Elsahar and Gallé (2019). While there are many ways to approximate this
distance, we choose to do so by training a domain classifier on
$\mathbf{U_{S^{i}}}$ and $\mathbf{U_{T^{i}}}$, classifying each example
according to its domain. We then compute, according to this classifier, the
average probability that a target example belongs to the source domain:
$\widehat{P(S^{i}|T^{i})}=\frac{1}{|\mathbf{H_{T^{i}}}|}\sum_{x\in\mathbf{H_{T^{i}}}}P(S^{i}|x),$
(3)
where $P(S^{i}|x)$ denotes for an unlabeled target example $x$, the
probability that it belongs to the source domain $\mathbf{S^{i}}$, based on
the domain classifier.
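To make Eq. 3 concrete, here is a self-contained toy sketch (the data and the gradient-descent logistic classifier are synthetic stand-ins, not the paper's setup): a small domain classifier is trained on unlabeled "source" and "target" points, and Eq. 3 is the mean source-probability it assigns to held-out target points.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy unlabeled "source" and "target" samples (synthetic 2-d features,
# standing in for encoder representations).
U_S = rng.normal(loc=0.0, size=(200, 2))
U_T = rng.normal(loc=2.0, size=(200, 2))

# Train a logistic-regression domain classifier: label 1 = source domain.
X = np.vstack([U_S, U_T])
y = np.concatenate([np.ones(200), np.zeros(200)])
w, b = np.zeros(2), 0.0
for _ in range(500):  # plain batch gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# Eq. 3: average P(source | x) over held-out target examples.
H_T = rng.normal(loc=2.0, size=(50, 2))
p_s_given_t = float((1.0 / (1.0 + np.exp(-(H_T @ w + b)))).mean())
print(p_s_given_t)  # small, since the two toy domains are well separated
```

A value near 0 indicates distant domains; a value near 0.5 indicates the classifier cannot tell them apart.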
#### Compression-size Effects
We include in our regression binary variables indicating the number of layers
that were removed. Naturally, we assume that the larger the number of layers
removed, the bigger the gap from the base model should be.
### 4.4 Regression Analysis
In order to decide which $c_{k}$ should be removed from the base model, we
follow the process described in Algorithm 1 for all $c\in\mathcal{C}$ and end
up with many candidate compressed models, differing by the model components
that were removed. As our goal is to choose a candidate model to be used in an
unseen target domain, we train a standard linear stepwise regression model
Hocking (1976); Draper and Smith (1998); Dubossarsky et al. (2020) to predict
the candidate’s performance on the seen target domains:
$Y=\beta_{0}+\beta_{1}X_{1}+\cdots+\beta_{m}X_{m}+\epsilon,$ (4)
where $Y$ is performance on these target domains, computed using their held-
out sets $(\mathbf{H_{T^{i}}})_{i=1}^{n}$, and $X_{1},\cdots,X_{m}$ are the
set of variables described in $\S$ 4.3, including the ATE. In stepwise regression
variables are added to the model incrementally only if their marginal addition
for predicting $Y$ is statistically significant ($p<0.01$). This method is
useful for finding variables with maximal and unique contribution to the
explanation of $Y$. The value of this regression is two-fold in our case as it
allows us to: (1) get a predictive model that can choose a high quality
compressed model candidate, and (2) estimate the predictive power of the ATE
on model performance in the target domain.
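The selection procedure can be sketched as greedy forward selection. Note that the paper enters variables by statistical significance ($p<0.01$); the dependency-free sketch below substitutes an adjusted-$R^{2}$ gain threshold as the entry rule, which plays the same gatekeeping role (function names ours):

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an OLS fit with an intercept column."""
    n, m = X.shape
    A = np.hstack([np.ones((n, 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - m - 1)

def forward_stepwise(X, y, min_gain=0.01):
    """Greedy forward selection: repeatedly add the variable with the
    largest adjusted-R^2 gain, stopping when no candidate improves the
    fit by at least min_gain (a stand-in for the paper's p < 0.01 rule)."""
    selected, best = [], -np.inf
    while True:
        gains = {j: adjusted_r2(X[:, selected + [j]], y)
                 for j in range(X.shape[1]) if j not in selected}
        if not gains:
            return selected
        j_star = max(gains, key=gains.get)
        if gains[j_star] - best < min_gain:
            return selected
        selected.append(j_star)
        best = gains[j_star]

# Synthetic check: y depends only on columns 0 and 2 of X.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=200)
selected = forward_stepwise(X, y)
print(sorted(selected))  # [0, 2]
```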
## 5 Experiments
### 5.1 Data
We consider three challenging datasets (tasks):
(1) The Amazon product reviews dataset for sentiment classification He and
McAuley (2016) (http://jmcauley.ucsd.edu/data/amazon/). This dataset consists
of product reviews and metadata, from which we choose 6 distinct domains:
Amazon Instant Video (AIV), Beauty (B), Digital Music (DM), Musical
Instruments (MI), Sports and Outdoors (SAO) and Video Games (VG). All reviews
are annotated with an integer score between 0 and 5. We label $>3$ reviews as
positive and $<3$ reviews as negative. Ambiguous reviews (rating $=3$) are
discarded. Since the dataset does not contain development and test sets, we
randomly split each domain into training (64%), development (16%) and test
(20%) sets.
(2) The Multi-Genre Natural Language Inference (MultiNLI) corpus for natural
language inference classification Williams et al. (2018)
(https://cims.nyu.edu/~sbowman/multinli/). This corpus consists of pairs of
sentences, a premise and a hypothesis, where the hypothesis either entails the
premise, is neutral to it or contradicts it. The MultiNLI dataset extends upon
the SNLI corpus Bowman et al. (2015), assembled from image captions, to 10
additional domains: 5 $matched$ domains, containing training, development and
test samples and 5 $mismatched$, containing only development and test samples.
We experiment with the original SNLI corpus (Captions domain) as well as the
$matched$ version of MultiNLI, containing the Fiction, Government, Slate,
Telephone and Travel domains, for a total of 6 domains.
(3) The OntoNotes 5.0 dataset Hovy et al. (2006), consisting of sentences
annotated with named entities, part-of-speech tags and parse trees
(https://catalog.ldc.upenn.edu/LDC2013T19). We focus on the Named Entity
Recognition (NER) task with 6 different English domains: Broadcast
Conversation (BC), Broadcast News (BN), Magazine (MZ), Newswire (NW),
Telephone Conversation (TC) and Web data (WB). This setup allows us to
evaluate the quality of AMoC on a sequence tagging task.
The statistics of our experimental setups are reported in Table 1. Since the
test sets of the MultiNLI domains are not publicly available, we treat the
original development sets as our test sets, and randomly choose 2,000 examples
from the training set of each domain to serve as the development sets. We use
the original splits of the SNLI as they are all publicly available. Since our
datasets manifest class imbalance phenomena we use the macro average F1 as our
evaluation measure.
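Macro-average F1 weights each class equally regardless of its frequency, which is why it suits imbalanced data; a minimal numpy sketch (helper name ours):

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores,
    so minority classes count as much as majority ones."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 0, 1, 0])
print(macro_f1(y_true, y_pred, 2))  # 0.625
```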
For the regression step of Algorithm 1, we use the development set of each
target domain to compute the model’s macro F1 score (for the $Y$ and the in-
domain performance variables). We compute the ATE variables on the development
sets of both domains, train the domain classifier on unlabeled versions of the
training sets and compute $\widehat{P(S|T)}$ on the target development set.
Amazon Reviews
---
| Train | Dev | Test
Amazon Instant Video | 21K | 5.2K | 6.5K
Beauty | 112K | 28K | 35K
Digital Music | 37K | 9.2K | 11K
Musical Instruments | 6K | 1.5K | 1.9K
Sports and Outdoors | 174K | 43K | 54K
Video Games | 130K | 32K | 40K
MultiNLI
| Train | Dev | Test
Captions | 550K | 10K | 10K
Fiction | 75K | 2K | 2K
Government | 75K | 2K | 2K
Slate | 75K | 2K | 2K
Telephone | 81K | 2K | 2K
Travel | 75K | 2K | 2K
OntoNotes
| Train | Dev | Test
Broadcast Conversation | 173K | 30K | 36K
Broadcast News | 207K | 25K | 26K
Magazine | 161K | 15K | 17K
News | 878K | 148K | 60K
Telephone Conversation | 92K | 11K | 11K
Web | 361K | 48K | 50K
Table 1: Data statistics. We report the number of sentences for Amazon Reviews
and MultiNLI, and the number of tokens for OntoNotes.
### 5.2 Model and Baselines
#### Model
The encoder being compressed is the BERT-base model Devlin et al. (2019). BERT
is a 12-layer Transformer model Vaswani et al. (2017); Radford et al. (2018),
representing textual inputs contextually and sequentially. Our decoder
consists of a layer attention mechanism Kondratyuk and Straka (2019) which
computes a parameterized weighted average over the layers’ output, followed by
a $1D$ convolution with the max-pooling operation and a final Softmax layer.
Figure 1(a) presents a simplified version of the architecture of this model
with 3 encoder layers.
#### Baselines
To put our results in context of previous model compression work, we compare
our models to three strong baselines. Like AMoC, the baselines generate
reduced-size encoders. These encoders are augmented with the same decoder as
in our model to yield the baseline architectures.
The first baseline is DistilBERT (DB) Sanh et al. (2019): A 6-layer compressed
version of BERT-base, trained on the masked language modelling task with the
goal of mimicking the predictions of the larger model. We used its default
setting, i.e., removal of 6 layers with $c=\\{2,4,6,7,9,11\\}$. Sanh et al.
(2019) demonstrated that DistilBERT achieves comparable results to the large
model with only half of its layers.
Since DistilBERT was not designed or tested on out-of-distribution data, we
create an additional version, denoted as DB + DA. In this version, the
training process is performed on the masked language modelling task using an
unlabeled version of the training data from both the source and the target
domains, with its original hyper-parameters.
We further add an additional adaptation-aware baseline: DB + GR, the
DistilBERT model equipped with the gradient reversal (GR) layer Ganin and
Lempitsky (2015). Particularly, we augment the DistilBERT model with a domain
classifier, similar in structure to the task classifier, which aims to
distinguish between the unlabeled source and the unlabeled target examples. By
reversing the gradients resulting from the objective function of this
classifier, the encoder is biased to produce domain-invariant representations.
We set the weights of the main task loss and the domain classification loss to
$1$ and $0.1$, respectively.
Another baseline is LayerDrop (LD), a procedure that applies layer dropout
during training, making the model robust to the removal of certain layers
during inference Fan et al. (2019). During training, we apply a fixed dropout
rate of $0.5$ for all layers. At inference, we apply their $Every$ $Other$
strategy by removing all even layers to obtain a reduced 6-layer model.
Finally, we compare AMoC to ALBERT, a recently proposed BERT-based variant
designed to mimic the performance of the larger BERT model with only a tenth
of its parameters (11M parameters compared to BERT’s 110M parameters) Lan et
al. (2020). ALBERT is trained with cross-layer parameter sharing and sentence
ordering objectives, leading to better model efficiency. Unlike other
baselines explored here, it is not directly comparable since it consists of 12
layers and was pre-trained on substantially more data. As such, we do not
include it in the main results table (Table 2), and instead discuss its
performance compared to AMoC in Section 6.
### 5.3 Compression Scheme Experiments
While our compression algorithm is neither restricted to a specific DNN
architecture nor to the removal of certain model components, we follow
previous work and focus on the removal of layer sets Fan et al. (2019); Sanh
et al. (2019); Sajjad et al. (2020). With the goal of addressing our research
questions posed in $\S$ 1, we perform extensive compression experiments on the
12-layer BERT by considering the removal of $4,6$ and $8$ layers. For each
number of layers removed, we randomly sample $100$ layer sets to generate our
model candidates. To be able to test our method on all domain pairs, we
randomly split these pairs into five 20% domain pair sets and train five
regression models, differing in the set used for testing. Our splits respect
the restriction that no test set domain (source or target) appears in the
training set.
### 5.4 Hyper-parameters
We implement all models using HuggingFace’s Transformers package Wolf et al.
(2020) (https://github.com/huggingface/transformers). We consider the
following hyper-parameters for the uncompressed models: Training for 10 epochs
(Amazon Reviews and MultiNLI) or 30 epochs (OntoNotes) with an early stopping
criterion according to the development set, optimizing all parameters using
the ADAM optimizer Kingma and Ba (2015) with a weight decay of 0.01 and a
learning rate of 1e-4, a batch size of 32, a window size of 9, 16 output
channels for the $1D$ convolution, and a dropout layer probability of 0.1 for
the layer attention module. The compressed models are trained on the labeled
source data for 1 epoch (Amazon Reviews and MultiNLI) or 10 epochs
(OntoNotes).
The domain classifiers are identical in architecture to our task classifiers
and use the uncompressed encoder after it was optimized during the above task-
based training. These classifiers are trained on the unlabeled version of the
source and target training sets for 25 epochs with early stopping, using the
same hyper-parameters as above.
## 6 Results
#### Performance of Compressed Models
Table 2 reports macro F1 scores for all domain pairs of the Amazon Reviews,
MultiNLI and OntoNotes datasets, when considering the removal of 6 layers,
while Figure 2 provides summary statistics. Clearly, AMoC outperforms all
baselines in the vast majority of setups (see, e.g., the lower graphs of
Figure 2). Moreover, its average target-domain performance (across the 5
source domains) improves over the second best model (DB + DA) by up to 4.56%,
5.16% and 1.63%, on Amazon Reviews, MultiNLI and OntoNotes, respectively
(lowest rows of each table in Table 2; see also the average across setups in
the upper graphs of Figure 2). These results provide a positive answer to Q$1$
of $\S$ 1, by indicating the superiority of AMoC over strong alternatives.
DB+GR is overall the worst performing baseline, followed by DB, with an
average degradation of 11.3% and 8.2% macro F1 score, respectively, compared
to the more successful cross-domain oriented variant DB + DA. This implies
that out-of-the-box compressed models such as DB struggle to generalize well
to out-of-distribution data. DB + DA also performs worse than AMoC in a large
portion of the experiments. These results are even more appealing given that
AMoC does not perform any gradient step on the target data, performing only a
small number of gradient steps on the source data. In fact, AMoC only uses the
unlabeled target data for computing the regression features. Lastly, LD,
another strong baseline which was specifically designed to remove layers from
BERT, is surpassed by AMoC by as much as 6.76% F1, when averaging over all
source-target domain pairs.
Finally, we compare AMoC to ALBERT. We find that on average ALBERT is
outperformed by AMoC by $8.8\%$ F1 on Amazon Reviews, and by $1.6\%$ F1 on
MultiNLI. On OntoNotes the performance gap between ALBERT and AMoC is an
astounding $24.8\%$ F1 in favor of AMoC, which might be a result of ALBERT
being an uncased model, an important feature for NER tasks.
Amazon Reviews
---
S\T | AIV | B | DM
| Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD
AIV | | | | | | | 75.49 | 82.14 | 65.00 | 75.86 | 65.42 | 69.51 | 77.66 | 76.02 | 67.12 | 75.94 | 62.8 | 71.92
B | 80.05 | 79.18 | 69.23 | 74.07 | 66.73 | 74.10 | | | | | | | 77.10 | 76.60 | 65.42 | 72.74 | 58.52 | 69.94
DM | 78.97 | 78.57 | 69.52 | 76.00 | 70.39 | 72.14 | 76.54 | 74.37 | 63.83 | 74.94 | 65.21 | 67.36 | | | | | |
MI | 65.24 | 69.87 | 54.96 | 67.21 | 55.99 | 56.53 | 72.72 | 72.78 | 55.75 | 74.83 | 46.44 | 61.25 | 60.09 | 63.88 | 50.01 | 68.24 | 30.42 | 52.67
SAO | 77.10 | 77.64 | 63.26 | 70.01 | 63.43 | 67.72 | 83.88 | 85.12 | 69.87 | 81.74 | 67.19 | 76.32 | 74.30 | 75.15 | 58.51 | 67.60 | 60.58 | 64.60
VG | 82.73 | 83.79 | 73.66 | 78.98 | 73.24 | 76.24 | 85.20 | 85.21 | 69.62 | 80.34 | 70.91 | 77.13 | 81.10 | 82.43 | 71.21 | 75.08 | 72.51 | 76.01
AVG | 76.81 | 77.81 | 66.13 | 73.25 | 65.96 | 69.35 | 78.77 | 79.92 | 64.81 | 77.54 | 63.03 | 70.31 | 74.05 | 74.82 | 62.45 | 71.92 | 56.97 | 67.03
| MI | SAO | VG
| Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD
AIV | 67.99 | 64.44 | 58.26 | 61.64 | 58.64 | 61.43 | 69.76 | 69.52 | 59.71 | 71.62 | 58.96 | 62.97 | 77.71 | 76.52 | 67.43 | 76.44 | 67.11 | 70.19
B | 82.70 | 80.16 | 66.47 | 76.28 | 68.03 | 71.87 | 83.73 | 83.21 | 72.23 | 79.57 | 72.11 | 77.29 | 82.57 | 82.23 | 65.52 | 76.96 | 65.50 | 71.59
DM | 71.53 | 71.10 | 59.18 | 67.21 | 61.37 | 63.13 | 70.94 | 63.83 | 58.45 | 65.29 | 61.75 | 62.79 | 78.45 | 76.04 | 68.67 | 76.21 | 66.93 | 70.66
MI | | | | | | | 70.08 | 72.71 | 59.23 | 71.39 | 58.30 | 66.10 | 65.10 | 67.91 | 51.60 | 56.37 | 49.67 | 56.87
SAO | 84.16 | 84.73 | 71.09 | 78.64 | 72.27 | 72.44 | | | | | | | 80.05 | 81.06 | 64.51 | 75.14 | 65.78 | 70.00
VG | 86.43 | 82.07 | 66.22 | 76.77 | 67.38 | 70.59 | 82.61 | 82.23 | 68.96 | 79.12 | 70.18 | 73.83 | | | | | |
AVG | 78.56 | 76.50 | 64.24 | 72.11 | 65.54 | 67.89 | 75.42 | 74.30 | 63.72 | 73.40 | 64.26 | 68.60 | 76.78 | 76.75 | 63.55 | 72.22 | 63.00 | 67.86
MNLI
S\T | Captions | Fiction | Govern.
| Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD
Captions | | | | | | | 58.92 | 58.37 | 46.96 | 57.37 | 46.04 | 54.93 | 59.51 | 59.35 | 40.14 | 57.85 | 42.54 | 56.85
Fiction | 71.33 | 68.81 | 39.04 | 67.60 | 45.26 | 63.27 | | | | | | | 73.41 | 69.71 | 46.83 | 69.55 | 47.10 | 63.56
Govern. | 62.52 | 68.04 | 44.45 | 63.47 | 39.23 | 54.68 | 67.61 | 66.05 | 44.5 | 63.47 | 46.75 | 60.44 | | | | | |
Slate | 65.04 | 62.40 | 37.58 | 46.99 | 44.87 | 55.39 | 69.83 | 67.70 | 46.53 | 58.59 | 43.58 | 62.07 | 72.95 | 72.16 | 49.53 | 71.31 | 49.23 | 66.82
Telephone | 65.04 | 61.22 | 40.03 | 58.65 | 36.64 | 59.77 | 69.07 | 67.77 | 47.45 | 63.76 | 46.76 | 61.91 | 71.46 | 65.47 | 46.83 | 66.63 | 45.99 | 65.53
Travel | 65.77 | 62.11 | 36.54 | 60.11 | 38.29 | 55.41 | 66.97 | 65.19 | 44.05 | 60.06 | 42.94 | 56.67 | 74.24 | 72.07 | 49.03 | 72.69 | 51.32 | 65.47
AVG | 65.94 | 64.52 | 39.53 | 59.36 | 40.86 | 57.70 | 66.48 | 65.02 | 45.90 | 60.65 | 45.21 | 59.20 | 70.31 | 67.75 | 46.47 | 67.61 | 47.24 | 63.65
| Slate | Telephone | Travel
| Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD
Captions | 52.83 | 53.26 | 41.30 | 52.96 | 42.23 | 50.56 | 56.94 | 56.68 | 41.22 | 58.35 | 45.53 | 54.01 | 57.88 | 57.40 | 42.86 | 54.84 | 43.64 | 54.88
Fiction | 66.76 | 62.94 | 44.79 | 64.13 | 45.70 | 59.82 | 71.83 | 68.47 | 41.66 | 67.70 | 44.52 | 64.97 | 69.86 | 66.28 | 46.98 | 66.52 | 46.81 | 62.36
Govern. | 65.22 | 65.59 | 46.57 | 62.89 | 45.42 | 61.06 | 67.54 | 67.87 | 43.73 | 65.39 | 45.88 | 65.46 | 67.45 | 64.70 | 48.67 | 66.99 | 48.58 | 63.09
Slate | | | | | | | 68.27 | 71.27 | 45.21 | 59.39 | 39.50 | 61.06 | 71.47 | 69.01 | 46.19 | 57.94 | 46.92 | 61.79
Telephone | 65.53 | 63.62 | 45.73 | 56.35 | 44.68 | 60.70 | | | | | | | 69.20 | 65.97 | 47.30 | 65.53 | 42.94 | 61.76
Travel | 65.02 | 60.11 | 45.65 | 60.96 | 47.08 | 56.51 | 69.57 | 66.31 | 42.30 | 64.35 | 45.86 | 61.63 | | | | | |
AVG | 63.07 | 61.10 | 44.81 | 59.46 | 45.02 | 57.73 | 66.83 | 66.12 | 42.82 | 63.04 | 44.26 | 61.43 | 67.17 | 64.67 | 46.40 | 62.36 | 45.78 | 60.78
OntoNotes
S\T | BC | BN | MZ
| Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD
BC | | | | | | | 73.78 | 71.28 | 70.76 | 70.94 | 58.22 | 66.46 | 64.06 | 60.96 | 63.44 | 64.75 | 48.48 | 53.78
BN | 74.25 | 71.06 | 70.83 | 70.11 | 70.29 | 65.61 | | | | | | | 69.92 | 68.34 | 68.71 | 69.39 | 69.70 | 60.87
MZ | 66.56 | 62.00 | 60.55 | 61.76 | 62.06 | 54.76 | 71.47 | 67.32 | 66.5 | 66.41 | 59.67 | 61.29 | | | | | |
NW | 72.23 | 70.26 | 68.22 | 70.16 | 41.20 | 63.57 | 80.85 | 79.54 | 78.15 | 79.34 | 68.92 | 75.07 | 74.66 | 71.78 | 71.86 | 72.28 | 65.76 | 64.82
TC | 42.63 | 41.78 | 45.14 | 39.18 | 21.32 | 29.64 | 53.08 | 52.37 | 54.56 | 51.69 | 19.80 | 42.16 | 39.17 | 38.59 | 41.94 | 38.75 | 16.98 | 33.81
WB | 28.47 | 27.58 | 26.79 | 25.17 | 26.97 | 21.97 | 40.39 | 40.68 | 39.09 | 40.35 | 30.79 | 33.55 | 15.86 | 20.09 | 22.84 | 24.84 | 15.53 | 13.52
AVG | 56.83 | 54.54 | 54.31 | 53.28 | 44.37 | 47.11 | 63.91 | 62.24 | 61.81 | 61.75 | 47.48 | 55.71 | 52.73 | 51.95 | 53.76 | 54.00 | 43.29 | 45.36
| NW | TC | WB
| Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD | Base | AMoC | DB | DB+DA | DB+GR | LD
BC | 61.31 | 58.80 | 58.44 | 57.75 | 46.95 | 50.73 | 62.39 | 63.07 | 58.19 | 59.53 | 59.31 | 55.21 | 48.90 | 47.42 | 45.58 | 46.00 | 45.56 | 40.17
BN | 73.55 | 69.79 | 70.51 | 71.26 | 58.80 | 62.31 | 69.64 | 65.69 | 61.45 | 64.68 | 64.98 | 60.40 | 51.34 | 50.14 | 48.02 | 48.72 | 48.39 | 43.45
MZ | 67.40 | 63.80 | 63.04 | 63.64 | 50.33 | 52.08 | 60.31 | 56.94 | 54.61 | 55.51 | 63.37 | 42.00 | 48.25 | 44.78 | 43.11 | 43.91 | 39.98 | 38.80
NW | | | | | | | 61.20 | 51.88 | 50.73 | 49.78 | 36.48 | 44.38 | 52.23 | 50.52 | 49.07 | 49.30 | 41.34 | 45.72
TC | 35.25 | 35.15 | 36.73 | 35.58 | 20.83 | 27.93 | | | | | | | 36.50 | 35.36 | 37.00 | 36.23 | 25.72 | 27.04
WB | 22.60 | 26.40 | 23.64 | 27.57 | 20.61 | 17.02 | 18.68 | 15.45 | 18.36 | 15.38 | 7.64 | 10.77 | | | | | |
AVG | 52.02 | 50.79 | 50.47 | 51.16 | 39.50 | 42.01 | 54.44 | 50.61 | 48.67 | 48.98 | 46.36 | 42.55 | 47.44 | 45.64 | 44.56 | 44.83 | 40.20 | 39.04
Table 2: Domain adaptation results in terms of macro F1 scores on Amazon
Reviews (top), MultiNLI (middle) and OntoNotes (bottom) with 6 removed layers.
S and T denote Source and Target, respectively. The best result among the
compressed models (all models except from Base) is highlighted in bold. We
mark results that outperform the uncompressed Base model with an underscore.
#### Compressed Model Selection
We next evaluate how well the regression model and its variables predict the
performance of a candidate compressed model on the target domain. Table 3
presents the Adjusted $R^{2}$, indicating the share of the variance in the
predicted outcome that the variables explain. Across all experiments and
regardless of the number of layers removed, our regression model predicts well
the performance on unseen domain pairs, averaging an $R^{2}$ of $0.881,0.916$
and $0.826$ on Amazon Reviews, MultiNLI and OntoNotes, respectively. This
indicates that our regression properly estimates the performance of candidate
models.
Figure 2: Summary of domain adaptation results. Overall average score (up) and
overall number of wins (down) over all source-target domain pairs.
| $\\#$ of removed Layers |
---|---|---
Dataset | 4 | 6 | 8 | Average
Amazon Reviews | 0.844 | 0.898 | 0.902 | 0.881
MultiNLI | 0.902 | 0.921 | 0.926 | 0.916
OntoNotes | 0.827 | 0.830 | 0.821 | 0.826
Table 3: Adjusted $R^{2}$ on the test set for each type of compression ($4,6$
or $8$ layers) on each dataset.
Another support for this observation is that in $75\%$ of the experiments the
model selected by the regression is among the top 10 performing compressed
candidates. In $55\%$ of the experiments, it is among the top 5 models. On
average it performs only $1\%$ worse than the best performing compressed
model. Combined with the high adjusted $R^{2}$ across experiments, this
suggests a positive answer to Q$2$ of $\S$ 1.
Finally, as expected, we find that AMoC is often outperformed by the full
model. However, the gap between the models is small, averaging only
$1.26\%$. Moreover, in almost 25% of all experiments AMoC was able to surpass
the full model (underscored scores in Table 2).
#### Marginal Effects of Regression Variables
While the performance of the model on data drawn from the same distribution
may also be indicative of its out-of-distribution performance, additional
information is likely to be needed in order to make an exact prediction. Here,
we supplement this indicator with the variables described in $\S$ 4.3 and ask
whether they can be useful to select the best compressed model out of a set of
candidates. Table 4 presents the most statistically significant variables in
our stepwise regression analysis. It demonstrates that the ATE and the model’s
performance in the source domain are usually very indicative of the model’s
performance.
Indeed, most of the regression’s predictive power comes from the model
performance on the source domain ($F1_{S}$) and the treatment effects on the
source and target domains ($\widehat{ATE_{S}}$, $\widehat{ATE_{T}}$). In
contrast, the distance metric ($\widehat{P(S|T)}$) and the interaction terms
($\widehat{ATE_{T}}\cdot\widehat{P(S|T)}$, $F1_{S}\cdot\widehat{P(S|T)}$)
contribute much less to the total $R^{2}$. The predictive power of the ATE in
both source and target domains suggests a positive answer to Q$3$ of $\S$ 1.
## 7 Additional Analysis
### 7.1 Layer Importance
To further understand the importance of each of BERT’s layers, we compute the
frequency in which each layer appears in the best candidate model, i.e., the
model with the highest F1 score on the target test set, of every experiment.
Figure 3 captures the layer frequencies across the different datasets and
across the number of removed layers.
| Amazon | MultiNLI | OntoNotes
---|---|---|---
Variable | $\beta$ | $\Delta R^{2}$ | $\beta$ | $\Delta R^{2}$ | $\beta$ | $\Delta R^{2}$
$F1_{S}$ | 0.435 | 0.603 | -0.299 | 0.143 | 0.748 | 0.510
$\widehat{ATE_{T}}$ | -1.207 | 0.239 | -0.666 | 0.413 | 117.5 | 0.202
$\widehat{ATE_{S}}$ | 1.836 | 0.029 | 0.557 | 0.232 | 125.9 | 0.072
$\widehat{P(S|T)}$ | -0.298 | 0.028 | -0.652 | 0.061 | 15.60 | 0.052
$\widehat{ATE_{T}}\cdot\widehat{P(S|T)}$ | -0.560 | 0.007 | -0.092 | 0.029 | -115.8 | 0.004
$F1_{S}\cdot\widehat{P(S|T)}$ | 0.472 | 0.004 | 1.027 | 0.043 | 0.187 | 0.004
8 layers | -0.137 | 0.001 | -0.303 | 0.001 | -3.145 | 0.001
6 layers | -0.066 | 0 | -0.146 | 0.007 | -1.020 | 0.005
const | 0.259 | 0 | 0.594 | 0 | -12.18 | 0
Table 4: Stepwise regression coefficients ($\beta$) and their marginal
contribution to the adjusted $R^{2}$ ($\Delta R^{2}$) on all experiments on
all three datasets.
The plots suggest that the two final layers, layers 11 and 12, are the least
important layers with average frequencies of 30.3% and 24.8%, respectively.
Additionally, in most cases layer 1 is ranked below the other layers. These
results imply that the compressed models are able to better recover from the
loss of parameters when the external layers are removed. The most important
layer appears to be layer 4, with an average frequency of 73.3%. Finally, we
notice that a large frequency variance exists across the different subplots.
Such variance supports our hypothesis that the decision of which layers to
remove should not be based solely on the architecture of the model.
Figure 3: Layer frequency at the best (oracle) compressed models when
considering the removal of 4, 6 and 8 layers in the three datasets.
To pin down the importance of a specific layer for a given base model, we
utilize a similar regression analysis to that of $\S$ 6. Specifically, we
train a regression model on all compressed candidates for a given source-
target domain pair (in all three tasks), adding indicator variables for the
exclusion of each layer from the model. This model associates each layer with
a regression coefficient, which can be interpreted as the marginal effect of
that layer being removed on expected target performance. We then compute for
each layer its average coefficient across source-target pairs (Table 5,
$\beta$ column) and compare it to the fraction of source-target pairs where
this layer is not included in the best possible (oracle) compressed model
(Table 5, $P(\text{Layer removed})$ column).
As can be seen in the table, layers whose removal is associated with
better model performance are more often excluded from the best performing
compressed models. Indeed, the Spearman’s rank correlation between the two
rankings is as high as 0.924. Such analysis demonstrates that the regression
model used as part of AMoC not only selects high quality candidates, but can
also shed light on the importance of individual layers.
Layer Rank | $\bar{\beta}$ | $P(\text{Layer removed})$
---|---|---
1 | 0.0448 | 0.300
2 | 0.0464 | 0.333
3 | 0.0473 | 0.333
4 | 0.0483 | 0.333
5 | 0.0487 | 0.416
6 | 0.0495 | 0.555
7 | 0.0501 | 0.472
8 | 0.0507 | 0.638
9 | 0.0514 | 0.500
10 | 0.0522 | 0.638
11 | 0.0538 | 0.611
12 | 0.0577 | 0.666
Table 5: Layer rank according to regression coefficients ($\beta$) and the
probability that the layer was removed from the best compressed model. Results
are averaged across all source-target domain pairs in our experiments.
### 7.2 Training Epochs
We next analyze the number of epochs required to fine-tune our compressed
models. For each dataset (task) we randomly choose for every target domain 10
compressed models and create two alternatives, differing in the number of
training epochs performed after layer removal: One trained for a single epoch
and another for 5 epochs (Amazon Reviews, MultiNLI) or 10 epochs (Ontonotes).
Table 6 compares the average F1 (target-domain task performance) and
$\widehat{ATE_{T}}$ differences between the two alternatives, on the target
domain test and dev sets, respectively. The results suggest that when training
for more epochs on Amazon Reviews and MultiNLI, the differences in both F1
and ATE are negligible. For OntoNotes (NER), in contrast, additional training
improves the F1, suggesting that further training of the compressed model
candidates may be favorable for sequence tagging tasks such as NER.
| F1 Difference | $\widehat{ATE_{T}}$ Difference
---|---|---
Amazon Reviews | 0.080 | 0.011
MNLI | -0.250 | 0.003
OntoNotes | 2.940 | -0.009
Table 6: F1 and ATE differences when training AMoC after layer removal for
multiple epochs vs. a single epoch.
### 7.3 Space and Time Complexity
Table 7 compares the number of overall and trainable parameters and the
training time of BERT, DistilBERT and AMoC. Removing $L$ layers from BERT
yields a reduction of $7L$ million parameters. As can be seen in the table,
AMoC requires training only a small fraction of the overall parameters. Since
we only unfreeze one layer per new connected component, in the worst case
our algorithm requires the training of $\min\\{L,12-L\\}$ layers. The only
exception is in the case where Layer 1 is removed ($1\in c$). In such a case
we unfreeze the embedding layer, which adds 24 million trained parameters. In
terms of total training time (one epoch of task-based fine-tuning), when
averaging over all setups, a single compressed AMoC model is $\times 11$
faster than BERT and $\times 6$ faster than DistilBERT.
| Overall Parameters | Trainable Parameters | Train Time Reduction
---|---|---|---
BERT-base | $110$M | $110$M | $\times 1$
DistilBERT | $66$M | $66$M | $\times 1.83$
AMoC | $110$M - $7$M $\cdot$ L | $7$M $\cdot$ $\min\\{L,12-L\\}$ $+17$M $\cdot\mathbb{1}_{\\{1\in c\\}}$ | $\times 11$
Table 7: Comparison of number of parameters and training time between BERT-
base, DistilBERT and AMoC when removing $L$ layers. AMoC’s number of trainable
parameters is an upper bound.
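Table 7's accounting can be written out as a small helper (hypothetical, not the paper's code; it assumes Table 7's figures of roughly 7M parameters per BERT-base transformer layer, 110M total, and the table's $+17$M term when layer 1 is removed and the embedding layer is unfrozen):

```python
def amoc_param_counts(L, layer_1_removed=False):
    """Overall and trainable parameter counts (in millions) when AMoC
    removes L of BERT-base's 12 transformer layers."""
    overall = 110 - 7 * L
    # At worst, one unfrozen layer per new connected component
    trainable = 7 * min(L, 12 - L) + (17 if layer_1_removed else 0)
    return overall, trainable

print(amoc_param_counts(4))        # -> (82, 28)
print(amoc_param_counts(8, True))  # -> (54, 45)
```

For example, removing 4 layers leaves an 82M-parameter model of which at most 28M are trained, versus 110M trainable parameters for full BERT fine-tuning.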
### 7.4 Design Choices
#### Computing the ATE
Following Goyal et al. (2019) and Feder et al. (2021), we implement the ATE
with the total variation distance between the probability output of the
original model and that of the compressed models. To verify the quality of
this design choice, we re-ran our experiments where the ATE is calculated
using the KL-divergence between the same distributions. While the results in
both conditions are qualitatively similar, we did find a consistent
quantitative improvement of the $R^{2}$ (average of $0.05$ across setups) when
considering our total variation distance.
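The two ATE variants compared here can be sketched as follows (a minimal illustration with made-up probability vectors; the paper computes these quantities between the output distributions of the original and compressed models):

```python
import numpy as np

def total_variation(p, q):
    """TV(p, q) = 0.5 * sum_i |p_i - q_i| for discrete distributions."""
    return 0.5 * float(np.abs(np.asarray(p) - np.asarray(q)).sum())

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), with eps smoothing to guard against log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

p = [0.7, 0.2, 0.1]   # e.g. original model's output probabilities
q = [0.5, 0.3, 0.2]   # compressed model's output probabilities
print(total_variation(p, q))   # -> 0.2
```

Unlike KL divergence, total variation is symmetric and bounded in $[0,1]$, which makes the resulting ATE estimates directly comparable across examples.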
#### Regression Analysis
Our regression approach is designed to allow us to both select high-quality
compressed candidates and to interpret the importance of each explanatory
variable, including the ATEs. As this regression has relatively few features,
we do not expect to lose significant predictive power by choosing to focus on
linear predictors. To verify this, we re-ran our experiments using a fully
connected feed-forward network (with one intermediate layer, the same input
features as the regression, and hyper-parameters tuned on the development set
of each source-target pair) to predict target performance. This model,
which is less interpretable than our regression, is also less accurate: We
have observed an increased mean squared error of 1-3% with the network.
## 8 Conclusion
We explored the relationship between model compression and out-of-distribution
generalization. AMoC, our proposed algorithm, relies on causal inference tools
for estimating the effects of interventions. It hence creates an interpretable
process that allows us to understand the role of specific model components. Our
results indicate that AMoC is able to produce a smaller model with minimal
loss in performance across domains, without any use of target labeled data at
test time (Q$1$).
AMoC can efficiently train a large number of compressed model candidates, which
can then serve as training examples for a regression model. We have shown that
this approach results in a high quality estimation of the performance of
compressed models on unseen target domains (Q$2$). Moreover, our stepwise
regression analysis indicates that the $\widehat{ATE_{S}}$ and
$\widehat{ATE_{T}}$ estimates are instrumental for these attractive properties
(Q$3$).
As training and test set mismatches are common, we steered our model
compression research towards out-of-domain generalization. Besides its
realistic nature, this setup poses additional modeling challenges, such as
understanding the proximity between domains, identifying which components are
invariant to domain shift, and estimating performance on unseen domains.
Hence, AMoC is designed for model compression in the out-of-distribution
setup. We leave the design of similar in-domain compression methods for future
work.
Finally, we believe that using causal methods to produce compressed NLP models
that can well generalize across distributions is a promising direction of
research, and hope that more work will be done in this intersection.
## References
* Aguilar et al. (2020) Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. 2020\. Knowledge distillation from internal representations. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pages 7350–7357.
* Ben-David et al. (2020) Eyal Ben-David, Carmel Rabinovitz, and Roi Reichart. 2020. Perl: Pivot-based domain adaptation for pre-trained deep contextualized embedding models. _Transactions of the Association for Computational Linguistics_ , 8:504–521.
* Blitzer et al. (2006) John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In _Proceedings of the 2006 conference on empirical methods in natural language processing_ , pages 120–128.
* Bottou et al. (2013) Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X Charles, D Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. 2013. Counterfactual reasoning and learning systems: The example of computational advertising. _The Journal of Machine Learning Research_ , 14(1):3207–3260.
* Bowman et al. (2015) Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015\. A large annotated corpus for learning natural language inference. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 632–642.
* Brown et al. (2020) Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _arXiv preprint arXiv:2005.14165_.
* Chen et al. (2020) Daoyuan Chen, Yaliang Li, Minghui Qiu, Zhen Wang, Bofang Li, Bolin Ding, Hongbo Deng, Jun Huang, Wei Lin, and Jingren Zhou. 2020. Adabert: Task-adaptive bert compression with differentiable neural architecture search. In _Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20_ , pages 2463–2469. International Joint Conferences on Artificial Intelligence Organization. Main track.
* Cui et al. (2018) Wanyun Cui, Guangyu Zheng, Zhiqiang Shen, Sihang Jiang, and Wei Wang. 2018. Transfer learning for sequences via learning to collocate. In _International Conference on Learning Representations_.
* Daumé III et al. (2010) Hal Daumé III, Abhishek Kumar, and Avishek Saha. 2010. Frustratingly easy semi-supervised domain adaptation. In _Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing_ , pages 53–59.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186.
* Draper and Smith (1998) Norman R Draper and Harry Smith. 1998. _Applied regression analysis_, volume 326. John Wiley & Sons.
* Dubossarsky et al. (2020) Haim Dubossarsky, Ivan Vulić, Roi Reichart, and Anna Korhonen. 2020. The secret is in the spectra: Predicting cross-lingual task performance with spectral similarity measures. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing_ , pages 2377–2390.
* Elsahar and Gallé (2019) Hady Elsahar and Matthias Gallé. 2019. To annotate or not? predicting performance drop under domain shift. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing_ , pages 2163–2173.
* Fan et al. (2019) Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing transformer depth on demand with structured dropout. In _International Conference on Learning Representations_.
* Feder et al. (2021) Amir Feder, Nadav Oved, Uri Shalit, and Roi Reichart. 2021. CausaLM: Causal Model Explanation Through Counterfactual Language Models. _Computational Linguistics_ , 47(2):333–386.
* Frankle and Carbin (2018) Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In _International Conference on Learning Representations_.
* Ganesh et al. (2020) Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Deming Chen, Marianne Winslett, Hassan Sajjad, and Preslav Nakov. 2020. Compressing large-scale transformer-based models: A case study on bert. _arXiv preprint arXiv:2002.11985_.
* Ganin and Lempitsky (2015) Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In _International conference on machine learning_ , pages 1180–1189. PMLR.
* Ganin et al. (2016) Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016\. Domain-adversarial training of neural networks. _The Journal of Machine Learning Research_ , 17(1):2096–2030.
* Gong et al. (2016) Mingming Gong, Kun Zhang, Tongliang Liu, Dacheng Tao, Clark Glymour, and Bernhard Schölkopf. 2016. Domain adaptation with conditional transferable components. In _International conference on machine learning_ , pages 2839–2848.
* Goyal et al. (2019) Yash Goyal, Amir Feder, Uri Shalit, and Been Kim. 2019. Explaining classifiers with causal concept effect (cace). _arXiv preprint arXiv:1907.07165_.
* Greenfeld and Shalit (2020) Daniel Greenfeld and Uri Shalit. 2020. Robust learning with the Hilbert-schmidt independence criterion. In _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 3759–3768. PMLR.
* He and McAuley (2016) Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In _proceedings of the 25th international conference on world wide web_ , pages 507–517.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In _NIPS Deep Learning and Representation Learning Workshop_.
* Hocking (1976) Ronald R Hocking. 1976. A biometrics invited paper. the analysis and selection of variables in linear regression. _Biometrics_ , 32(1):1–49.
* Hovy et al. (2006) Eduard Hovy, Mitch Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006\. Ontonotes: the 90% solution. In _Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers_ , pages 57–60.
* Jiao et al. (2020) Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 4163–4174, Online. Association for Computational Linguistics.
* Johansson et al. (2016) Fredrik Johansson, Uri Shalit, and David Sontag. 2016. Learning representations for counterfactual inference. In _International conference on machine learning_ , pages 3020–3029.
* Kaushik et al. (2019) Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2019. Learning the difference that makes a difference with counterfactually-augmented data. In _International Conference on Learning Representations_.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In _International Conference on Learning Representations_.
* Kondratyuk and Straka (2019) Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing_ , pages 2779–2795.
* Lan et al. (2020) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In _International Conference on Learning Representations_.
* Lin and Lu (2018) Bill Yuchen Lin and Wei Lu. 2018. Neural adaptation layers for cross-domain named entity recognition. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2012–2022.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_.
* Louis and Nenkova (2009) Annie Louis and Ani Nenkova. 2009. Performance confidence estimation for automatic summarization. In _Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)_ , pages 541–548. Association for Computational Linguistics.
* Magliacane et al. (2018) Sara Magliacane, Thijs van Ommen, Tom Claassen, Stephan Bongers, Philip Versteeg, and Joris M Mooij. 2018. Domain adaptation by using causal inference to predict invariant conditional distributions. In _Advances in Neural Information Processing Systems_ , pages 10846–10856.
* McClosky et al. (2010) David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In _Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics_ , pages 28–36. Association for Computational Linguistics.
* Mirzadeh et al. (2020) Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. 2020. Improved knowledge distillation via teacher assistant. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pages 5191–5198.
* Pearl (1995) Judea Pearl. 1995. Causal diagrams for empirical research. _Biometrika_ , 82(4):669–688.
* Pearl (2009) Judea Pearl. 2009. _Causality_. Cambridge university press.
* Pearl et al. (2009) Judea Pearl et al. 2009. Causal inference in statistics: An overview. _Statistics surveys_ , 3:96–146.
* Peters et al. (2017) Jonas Peters, Dominik Janzing, and Bernhard Schlkopf. 2017. _Elements of Causal Inference: Foundations and Learning Algorithms_. The MIT Press.
* Radford et al. (2018) Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. _Technical report, OpenAI_.
* Ravi et al. (2008) Sujith Ravi, Kevin Knight, and Radu Soricut. 2008. Automatic prediction of parser accuracy. In _Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing_ , pages 887–896.
* Reichart and Rappoport (2007) Roi Reichart and Ari Rappoport. 2007. An ensemble method for selection of high quality parses. In _Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics_ , pages 408–415.
* Rogers et al. (2021) Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2021. A Primer in BERTology: What We Know About How BERT Works. _Transactions of the Association for Computational Linguistics_ , 8:842–866.
* Rojas-Carulla et al. (2018) Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters. 2018\. Invariant models for causal transfer learning. _The Journal of Machine Learning Research_ , 19(1):1309–1342.
* Rotman and Reichart (2019) Guy Rotman and Roi Reichart. 2019. Deep contextualized self-training for low resource dependency parsing. _Transactions of the Association for Computational Linguistics_ , 7:695–713.
* Sajjad et al. (2020) Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2020. Poor man’s bert: Smaller and faster transformer models. _arXiv preprint arXiv:2004.03844_.
* Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. In _Proceedings of the 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing in Advances in neural information processing systems_.
* Sato et al. (2017) Motoki Sato, Hitoshi Manabe, Hiroshi Noji, and Yuji Matsumoto. 2017. Adversarial training for cross-domain universal dependency parsing. In _Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies_.
* Schölkopf et al. (2012) Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. 2012. On causal and anticausal learning. In _Proceedings of the 29th International Coference on International Conference on Machine Learning_ , pages 459–466.
* Sun et al. (2020) Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020\. MobileBERT: a compact task-agnostic BERT for resource-limited devices. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 2158–2170. Association for Computational Linguistics.
* Van Asch and Daelemans (2010) Vincent Van Asch and Walter Daelemans. 2010. Using domain similarity for performance estimation. In _Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing_ , pages 31–36.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in neural information processing systems_ , pages 5998–6008.
* Wald et al. (2021) Yoav Wald, Amir Feder, Daniel Greenfeld, and Uri Shalit. 2021. On calibration and out-of-domain generalization. _arXiv preprint arXiv:2102.10395_.
* Wang et al. (2018) Zhenghui Wang, Yanru Qu, Liheng Chen, Jian Shen, Weinan Zhang, Shaodian Zhang, Yimei Gao, Gen Gu, Ken Chen, and Yong Yu. 2018. Label-aware double transfer learning for cross-specialty medical named entity recognition. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1–15. Association for Computational Linguistics.
* Williams et al. (2018) Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1112–1122.
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45. Association for Computational Linguistics.
* Xia et al. (2020) Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, and Graham Neubig. 2020. Predicting performance for natural language processing tasks. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8625–8646. Association for Computational Linguistics.
* Zhang et al. (2013) Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. 2013. Domain adaptation under target and conditional shift. In _International Conference on Machine Learning_ , pages 819–827.
* Ziser and Reichart (2017) Yftah Ziser and Roi Reichart. 2017. Neural structural correspondence learning for domain adaptation. In _Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)_ , pages 400–410.
* Ziser and Reichart (2018a) Yftah Ziser and Roi Reichart. 2018a. Deep pivot-based modeling for cross-language cross-domain transfer with minimal guidance. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 238–249. Association for Computational Linguistics.
* Ziser and Reichart (2018b) Yftah Ziser and Roi Reichart. 2018b. Pivot based language modeling for improved neural domain adaptation. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1241–1251. Association for Computational Linguistics.
* Ziser and Reichart (2019) Yftah Ziser and Roi Reichart. 2019. Task refinement learning for improved accuracy and stability of unsupervised domain adaptation. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 5895–5906. Association for Computational Linguistics.
# Byzantine Generals in the Permissionless Setting
Andrew Lewis-Pye <EMAIL_ADDRESS> Department of Mathematics, London School of Economics, London, UK and Tim Roughgarden <EMAIL_ADDRESS> Department of Computer Science, Columbia University, New York, USA
###### Abstract.
Consensus protocols have traditionally been studied in a setting where all
participants are known to each other from the start of the protocol execution.
In the parlance of the ‘blockchain’ literature, this is referred to as the
_permissioned_ setting. What differentiates the most prominent blockchain
protocol Bitcoin (N+, 08) from these previously studied protocols is that it
operates in a _permissionless_ setting, i.e. it is a protocol for establishing
consensus over an unknown network of participants that anybody can join, with
as many identities as they like in any role. The arrival of this new form of
protocol brings with it many questions. Beyond Bitcoin and other proof-of-work
(PoW) protocols, what can we prove about permissionless protocols in a general
sense? How does the recent stream of work on permissionless protocols in the
blockchain literature relate to the well-developed history of research on
permissioned protocols?
To help answer these questions, we describe a formal framework for the
analysis of both permissioned and permissionless systems. Our framework allows
for “apples-to-apples” comparisons between different categories of protocols
and, in turn, the development of theory to formally discuss their relative
merits. A major benefit of the framework is that it facilitates the
application of a rich history of proofs and techniques for permissioned
systems to problems in blockchain and the study of permissionless systems.
Within our framework, we then address the questions above. We consider a
programme of research that asks, “Under what adversarial conditions, and for
what types of permissionless protocol, is consensus possible?” We prove
several results for this programme, our main result being that _deterministic_
consensus is not possible for decentralised permissionless protocols. To
close, we give a list of eight open questions.
## 1\. Introduction
The Byzantine Generals Problem (PSL, 80; LSP, 82) was introduced by Lamport,
Shostak and Pease to formalise the problem of reaching consensus in a context
where faulty processors may display arbitrary behaviour. Famously, they were
able to show that there are deterministic protocols for the broadcast variant
of the problem that can tolerate any number of faulty processors if message
delivery is reliable and if messages can be authenticated, i.e. if a signature
scheme is available. If messages cannot be authenticated, then protocols exist
to solve the problem for a set of $n$ processors iff $n>3f$, where $f$ is the
number of processors that are faulty. The problem has subsequently become a
central topic in distributed computing. Of particular relevance to us here are
the seminal works of Dwork, Lynch and Stockmeyer (DLS, 88), who considered the
problem in a range of synchronicity settings, and the result of Dolev and
Strong (DS, 83) showing that, even in the strongly synchronous setting of
reliable next-round message delivery with authentication, $f+1$ rounds of
interaction are necessary to solve the problem.
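The authenticated protocol behind the Dolev-Strong bound can be illustrated with a toy simulation (a minimal sketch, not the original formulation: signatures are modelled as append-only chains of node ids, unforgeable by construction, and the adversary's strategy is limited to the sender equivocating in round 0):

```python
def dolev_strong(n, faulty, sender_values, f):
    """Toy Dolev-Strong broadcast: n nodes (0..n-1), sender is node 0,
    `faulty` is a set of Byzantine ids, `sender_values` maps each receiver
    to the value the (possibly equivocating) sender signs and sends to it.
    Honest nodes run f+1 rounds and decide on their extracted value."""
    honest = [i for i in range(n) if i not in faulty]
    extracted = {i: set() for i in honest}
    # Round 0: sender signs and sends (one signature: the chain (0,))
    inbox = {i: [(sender_values[i], (0,))] for i in range(n)}
    for r in range(1, f + 2):                 # rounds 1 .. f+1
        outbox = {i: [] for i in range(n)}
        for i in honest:
            for v, chain in inbox[i]:
                # Valid at round r: starts at sender, r distinct signatures
                if chain[0] == 0 and len(set(chain)) == len(chain) >= r:
                    if v not in extracted[i]:
                        extracted[i].add(v)
                        if i not in chain:    # append own signature, relay
                            for j in range(n):
                                outbox[j].append((v, chain + (i,)))
        inbox = outbox
    # Decide: the unique extracted value, or a default on equivocation
    return {i: (next(iter(extracted[i])) if len(extracted[i]) == 1
                else "DEFAULT") for i in honest}
```

With an honest sender all honest nodes output the sender's value; with an equivocating sender they still agree (here, on the default), and the f+1 rounds are exactly what the Dolev-Strong argument shows to be necessary.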
The permissionless setting (and the need for a framework). This rich history
of analysis considers the problem of consensus in a setting where all
participants are known to each other from the start of the protocol execution.
In the parlance of the ‘blockchain’ literature, this is referred to as the
_permissioned_ setting. More recently, however, there has been significant
interest in a number of protocols, such as Bitcoin (N+, 08) and Ethereum (But,
18), that operate in a fundamentally different way. What differentiates these
new protocols from those previously studied is that they operate in a
_permissionless_ setting, i.e. these are protocols for establishing consensus
over an unknown network of participants that anybody can join, with as many
identities as they like in any role. Interest in these new protocols is such
that, at the time of writing, Bitcoin has a market capitalisation of over $700
billion. (See www.coinmarketcap.com for a comprehensive list of
cryptocurrencies and their market capitalisations.) Beyond their use as
cryptocurrency protocols, there is also substantial interest in the
possibility that these permissionless protocols might be applied to broad
swathes of other applications, such as establishing decentralised financial
markets (Sch, 20), or perhaps even a truly decentralised world-wide-web (RJC+,
19). Given the level of investment, it seems important to put the study of
permissionless protocols on a firm theoretical footing.
Since results for the permissioned setting rely on bounding the number of
faulty participants, and since there may be an _unbounded_ number of faulty
participants in the permissionless setting, it is clear that classical results
for the permissioned setting will not carry over to the permissionless setting
directly. Consider the aforementioned proof of Dolev and Strong (DS, 83) that
$f+1$ rounds are required and suffice if $f$ many participants may be faulty,
for example. If the number of faulty participants is unbounded, then the
apparent conclusion is that consensus is not possible. To make consensus
possible in the permissionless setting, some substantial changes to the setup
assumptions are therefore required. Bitcoin approaches this issue by
introducing the notion of ‘proof-of-work’ (PoW) and limiting the computational
(or hashing) power of faulty participants. A number of papers (GKL, 18; PSas,
16; GKO+, 20) have appeared that consider frameworks for the analysis of
Bitcoin and other PoW protocols. The PoW mechanism used by Bitcoin is,
however, just one approach to defining permissionless protocols. As has been
well documented (BCNPW, 19), proof-of-stake (PoS) protocols, such as Ouroboros
(KRDO, 17) and Algorand (CM, 16), are a form of permissionless protocol with
very different properties, and face a different set of design challenges. As
we will expand on here, there are a number of reasons why PoS protocols do not
fit into the previously mentioned frameworks for the analysis of Bitcoin. The
deeper question remains, how best to understand permissionless protocols more
generally?
Defining a framework. The first aim of this paper is to describe a framework
that allows one to formally describe and analyse both permissioned and
permissionless protocols in a general sense, and to compare their properties.
To our knowledge, our framework is the first capable of modelling all
significant features of PoW and PoS protocols simultaneously, as well as other
approaches like proof-of-space (RD, 16). This allows us to prove general
impossibility results for permissionless protocols – of course, impossibility
results cannot be proved without reference to an appropriate framework. The
framework is constructed according to two related design principles:
1. (1)
Our aim is to establish a framework capable of dealing with permissionless
protocols, but which is as similar as possible to the standard frameworks in
distributed computing for dealing with permissioned protocols. As we will see
in Sections 3 and 4, a major benefit of this approach is that it facilitates
the application of classical proofs and techniques in distributed computing to
problems in ‘blockchain’ and the study of permissionless protocols.
2. (2)
We aim to produce a framework which is as accessible as possible for
researchers in blockchain without a strong background in security. To do so,
we blackbox the use of cryptographic methods where possible, and isolate a
small number of properties for permissionless protocols that are the key
factors in determining the performance guarantees that are possible for
different types of protocol (such as availability and consistency in different
synchronicity settings).
In Section 2 we describe a framework of this kind, according to which
protocols run relative to a _resource pool_. This resource pool specifies a
_resource balance_ for each participant over the duration of the execution
(such as hashrate or stake in the currency), which may be used in determining
which participants are permitted to make broadcasts updating the state.
Byzantine Generals in the Permissionless Setting. Our second aim is to address
a programme of research that looks to replicate for the permissionless setting
what papers such as (DLS, 88; DS, 83; LSP, 82) achieved for the permissioned
case. Our framework allows us to formalise the question, “Under what
adversarial conditions, under what synchronicity assumptions, and for what
types of permissionless protocol (proof-of-work/proof-of-stake/proof-of-
space), are solutions to the Byzantine Generals Problem possible?” In fact,
the theory of consensus for permissionless protocols is quite different from
that for the permissioned case. Our main theorem establishes one such major
difference. All terms in the statement of Theorem 3.3 below will be formally
defined in Sections 2 and 3. Roughly, the adversary is $q$-bounded if it
always has at most a $q$-fraction of the total resource balance (e.g. a
$q$-fraction of the total hashrate), while being weakly decentralised is a
minimal requirement that ensures that the criterion for permissionless entry
to the protocol isn’t circumvented in some trivial sense.222This requirement
could really be absorbed into the definition of a permissionless protocol,
but, for technical reasons, we find it convenient to keep weak decentralisation
as a separate notion. It is formally defined in Section 3.
Theorem 3.3. _Consider the synchronous and permissionless setting, and suppose
$q\in(0,1]$. There is no deterministic and weakly decentralised protocol that
solves the Byzantine Generals Problem for a $q$-bounded adversary._
The positive results that we previously mentioned for the permissioned case
concerned deterministic protocols. So, Theorem 3.3 describes a fundamental
difference in the theory for the permissioned and permissionless settings.
With Theorem 3.3 in place, we then focus on probabilistic solutions to the
Byzantine Generals Problem. We leave the details until Sections 3 and 4, but
highlight below another theorem of significant interest, which clearly
separates the functionalities that can be achieved by PoW and PoS protocols.
Separating PoW and PoS protocols. The resource pool will be defined as a
function that allocates a resource balance to each participant, depending on
time and on the messages broadcast by protocol participants. One of our major
concerns is to understand how properties of the resource pool may influence
the functionality of the resulting protocol. In Sections 2, 3 and 4 we will be
concerned, in particular, with the distinction between scenarios in which the
resource pool is given as a protocol input, and scenarios where the resource
pool is unknown. We refer to these as the sized and unsized settings,
respectively. PoS protocols are best modelled in the sized setting, because
the way in which a participant’s resource balance depends on the set of
broadcast messages (such as blocks of transactions) is given from the start of
the protocol execution. PoW protocols, on the other hand, are best modelled in
the unsized setting, because one does not know in advance how a participant’s
hashrate will vary over time. The fundamental result when communication is
partially synchronous is that no PoW protocol gives a probabilistic solution
to the Byzantine Generals Problem:
Theorem 4.1. _There is no permissionless protocol giving a probabilistic
solution to the Byzantine Generals Problem in the unsized setting with
partially synchronous communication._
In some sense, Theorem 4.1 can be seen as an analogue of the CAP Theorem (Bre,
00; GL, 02) for our framework, but with a trade-off now established between
‘consistency’ and a weaker notion of ‘availability’ than that considered in
the CAP Theorem (and with the unsized setting playing a crucial role in
establishing this trade-off). For details see Section 4.
### 1.1. Related work
The Byzantine Generals Problem was introduced in (PSL, 80; LSP, 82) and has
become a central topic in distributed computing. Prior to Bitcoin, a variety
of papers analysed the Byzantine Generals Problem in settings somewhere
between the permissioned and permissionless settings. For example, Okun (Oku,
05) considered certain relaxations of the classical permissioned setting
(without resource restrictions of the kind employed by Bitcoin). In his
setting, a fixed number of processors communicate by private channels, but
processors may or may not have unique identifiers and might be ‘port unaware’,
meaning that they are unable to determine from which private channel a message
has arrived. Okun showed that deterministic consensus is not possible in the
absence of a signature scheme and unique identifiers when processors are port
unaware – our Theorem 3.3 establishes a similar result even when unique
identifiers and a signature scheme are available (but without full PKI), and
when resource bounds may be used to limit the ability of processors to
broadcast. Bocherdung (Bor, 96) considered a setting in which a fixed set of
$n$ participants communicate by private channels (without the ‘port unaware’
condition of Okun), and in which a signature scheme is available. Now,
however, processors are not made aware of each others’ public keys before the
protocol execution. In this setting, he was able to show that Byzantine
Agreement is not possible when $n\leq 3f$, where $f$ denotes the number of
processors that may display Byzantine failures. A number of papers (CSS, 04;
ABdSFG, 08) have also considered the CUP framework (Consensus amongst Unknown
Participants). In the framework considered in those papers, the number and the
identifiers of other participants may be unknown from the start of the
protocol execution. A fundamental difference with the permissionless setting
considered here is that, in the CUP framework, all participants have a unique
identifier and the adversary is unable to obtain additional identifiers to be
able to launch a Sybil attack against the system, i.e. the number of
identifiers controlled by the adversary is bounded.
The Bitcoin protocol was first described in 2008 (N+, 08). Since then, a
number of papers (see, for example, (GKL, 18; PSas, 16; PS, 17; GPS, 19)) have
considered frameworks for the analysis of PoW protocols. These papers
generally work within the UC framework of Canetti (Can, 01), and make use of a
random-oracle (RO) functionality to model PoW. As we shall see in Section 2,
however, a more general form of oracle is required for modelling PoS and other
forms of permissionless protocol. With a PoS protocol, for example, a
participant’s apparent stake (and their corresponding ability to update state)
depends on the set of broadcast messages that have been received, and _may
therefore appear different from the perspective of different participants_
(i.e. unlike hashrate, measurement of a user’s stake is user-relative). In
Section 2 we will also describe various other modelling differences that are
required to be able to properly analyse a range of attacks, such as ‘nothing-
at-stake’ attacks, on PoS protocols. We take the approach of avoiding use of
the UC framework, since this provides a substantial barrier to entry for
researchers in blockchain who do not have a strong background in security.
The idea of blackboxing the process of participant selection as an oracle
(akin to our _permitter_ , as described in Section 2) was explored in (AM+,
17). Our paper may be seen as taking the same basic approach, and then
fleshing out the details to the point where it becomes possible to prove
impossibility results like those presented here. As here, a broad aim of (AM+,
17) was to understand the relationship between permissionless and permissioned
consensus protocols, but the focus of that paper was somewhat different from
our objectives in this paper. While our aim is to describe a framework which
is as general as possible, and to establish impossibility results which hold
for all protocols, the aim of (AM+, 17) was to examine specific permissioned
consensus protocols, such as Paxos (L+, 01), and to understand on a deep level
how their techniques for establishing consensus connect with and are echoed by
Bitcoin.
In (GKO+, 20), the authors considered a framework with similarities to that
considered here, in the sense that ability to broadcast is limited by access
to a restricted resource. In particular, they abstract the core properties
that the resource-restricting paradigm offers by means of a _functionality
wrapper_ , in the UC framework, which when applied to a standard point-to-
point network restricts the ability (of the adversary) to send new messages.
Once again, however, the random oracle functionality they consider is
appropriate for modelling PoW rather than PoS protocols, and does not reflect,
for example, the sense in which resources such as stake can be user relative
(as discussed above), as well as other significant features of PoS protocols
discussed in Section 2.3. So, the question remains as to how to model and
prove impossibility results for PoS, proof-of-space and other permissionless
protocols in a general setting.
In (Ter, 20), a model is considered which carries out an analysis somewhat
similar to that in (GKL, 18), but which blackboxes all probabilistic elements
of the process by which processors are selected to update state. Again, the
model provides a potentially useful way to analyse PoW protocols, but fails to
reflect PoS protocols in certain fundamental regards. In particular, the model
does not reflect the fact that stake is user relative (i.e. the stake of user
$x$ may appear different from the perspectives of users $y$ and $z$). The
model also does not allow for analysis of the ‘nothing-at-stake’ problem, and
does not properly reflect timing differences that exist between PoW and PoS
protocols, whereby users who are selected to update state may delay their
choice of block to broadcast upon selection. These issues are discussed in
more depth in Section 2.
As stated in the introduction, Theorem 4.1 can be seen as a recasting of the
CAP Theorem (Bre, 00; GL, 02) for our framework. CAP-type theorems have
previously been shown for various PoW frameworks (PS, 17; GPS, 19). In (PS,
17), for example, a framework for analysing PoW protocols is considered, in
which $n$ processors participate and where the number of participants
controlled by the adversary depends on their hashing power. It was shown that
if the protocol is unsure about the number of participants to a factor of 2
and still needs to provide availability if between $n$ and $2n$ participants
show up, then it is not possible to guarantee consistency in the event of
network partitions.
Of course, a general framework is required to be able to provide negative
(impossibility) results of the sort presented here in Sections 3 and 4. In
those sections we also describe how existing positive results fit into the
narrative, as well as outlining some of the most significant remaining open
questions.
## 2\. The framework
### 2.1. The computational model
Informal overview. We use a very simple computational model, which is designed
to be as similar as possible to standard models from distributed computing
(e.g. (DLS, 88)), while also being adapted to deal with the permissionless
setting.333There are a number of papers analysing Bitcoin (GKL, 18; PSas, 16)
that take the approach of working within the language of the UC framework of
Canetti (Can, 01). Our position is that this provides a substantial barrier to
entry for researchers in blockchain who do not have a strong background in
security, and that the power of the UC framework remains largely unused in the
subsequent analysis. We thus consider a simple model in which processors are
specified by state transition diagrams. A _permitter oracle_ is introduced as
a generalisation of the random oracle functionality in the Bitcoin Backbone
paper (GKL, 18): It is the permitter oracle’s role to grant _permissions_ to
broadcast messages. The duration of the execution is divided into timeslots.
Each processor enters each timeslot $t$ in a given _state_ $x$, which
determines the instructions for the processor in that timeslot – those
instructions may involve broadcasting messages, as well as sending _requests_
to the permitter oracle. The state $x^{\prime}$ of the processor at the next
timeslot is determined by the state $x$, together with the messages and
permissions received at $t$.
Formal description. For a list of commonly used variables and terms, see Table
1 in Appendix 1. We consider a (potentially infinite) system of _processors_ ,
some of which may be _faulty_. Each processor is specified by a state
transition diagram, for which the number of states may be infinite. At each
_step_ $s$ of its operation, a processor $p$ _receives_ a pair $(M,M^{\ast})$,
where either or both of $M$ and $M^{\ast}$ may be empty. Here, $M$ is a finite
set of _messages_ (i.e. strings) that have previously been _broadcast_ by
other processors. We refer to $M$ as the _message set_ received by $p$ at step
$s$, and say that each message $m\in M$ is received by $p$ at step $s$.
$M^{\ast}$ is a potentially infinite set of messages, referred to as the
_permission set_ received by $p$ at step $s$. If $m\in M^{\ast}$, then receipt
of the permission set $M^{\ast}$ means that $p$ is able to broadcast $m$ at
subsequent steps: Once $M^{\ast}$ has been received, we refer to $m$ as being
_permitted_ for $p$.444So, at step $s$, $m$ has been _received_ by $p$ if it
belongs to one of the message sets received at steps $\leq s$, while $m$ is
_permitted_ for $p$ if it belongs to one of the permission sets received by
$p$ at steps $\leq s$. To complete step $s$, $p$ then broadcasts a finite set
of messages $M^{\prime}$ that are permitted for $p$, makes a finite _request
set_ $R$, and then enters a new state $x^{\prime}$, where
$x^{\prime},M^{\prime}$ and $R$ are determined by the present state $x$ and
$(M,M^{\ast})$, according to the state transition diagram. The form of the
request set $R$ will be described shortly, together with how $R$ determines
the permission set received at the next step of $p$’s operation.
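The step semantics just described can be sketched in code. The sketch below is purely illustrative (the class and function names are our own, not part of the formal model); the `transition` function stands in for the state transition diagram:

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    """Illustrative sketch of one processor step. The formal model
    specifies processors as state transition diagrams; `transition`
    stands in for that diagram."""
    state: object
    received: set = field(default_factory=set)   # messages received so far
    permitted: set = field(default_factory=set)  # messages permitted so far

    def step(self, M, M_star, transition):
        # Receive the message set M and the permission set M*.
        self.received |= set(M)
        self.permitted |= set(M_star)
        # The diagram determines the new state x', the broadcast set
        # M', and the request set R from the present state and (M, M*).
        new_state, M_prime, R = transition(self.state, M, M_star)
        # A processor may only broadcast messages permitted for it.
        assert set(M_prime) <= self.permitted
        self.state = new_state
        return M_prime, R
```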
Amongst the states of a processor are a non-empty set of possible _initial
states_. The _inputs_ to $p$ determine which initial state it starts in. If a
variable is specified as an input to $p$, then we refer to it as _determined_
for $p$, referring to the variable as _undetermined_ for $p$ otherwise. If a
variable is determined/undetermined for all $p$, we will simply refer to it as
determined/undetermined. To define outputs, we consider each processor to have
a distinguished set of _output states_ , a processor’s output being determined
by the first output state it enters. Amongst the inputs to $p$ is an
_identifier_ $\mathtt{U}_{p}$, which can be thought of as a name for $p$, and
which is unique in the sense that $\mathtt{U}_{p}\neq\mathtt{U}_{p^{\prime}}$
when $p\neq p^{\prime}$. A principal difference between the permissionless
setting (as considered here) and the permissioned setting is that, in the
permissionless setting, the number of processors is undetermined, and
$\mathtt{U}_{p}$ is undetermined for $p^{\prime}$ when $p^{\prime}\neq p$.
We consider a real-time clock, which exists outside the system and measures
time in natural number timeslots. We also allow the inputs to $p$ to include
messages, which are thought of as having been received by $p$ at timeslot
$t=0$. A _run_ of the system is described by specifying the initial states for
all processors and by specifying, for each timeslot $t\geq 1$: (1) The
messages and permission sets received by each processor at that timeslot, and;
(2) The instruction that each processor executes, i.e., what messages it
broadcasts, what requests it makes, and the new state it enters.
We require that each message is received by $p$ at most once for each time it
is broadcast, i.e. at the end of the run it must be possible to specify an
injective function $d_{p}$ mapping each pair $(m,t)$, such that $m$ is
received by $p$ at timeslot $t$, to a triple $(p^{\prime},m,t^{\prime})$, such
that $t^{\prime}<t$, $p^{\prime}\neq p$ and such that $p^{\prime}$ broadcast
$m$ at $t^{\prime}$.
In the _authenticated_ setting, we assume the existence of a signature scheme
(without PKI); see Appendix 2 for formal details. We let $m_{\mathtt{U}}$
denote the message $m$ signed by $\mathtt{U}$. We consider standard versions
of the _synchronous_ and _partially synchronous_ settings (as in (DLS, 88)) –
the version of the partially synchronous setting we consider is that in which
the determined upper bound $\Delta$ on message delay holds after some
undetermined stabilisation time. See Appendix 2 for further details.
### 2.2. The resource pool and the permitter
Informal motivation. Who should be allowed to create and broadcast new Bitcoin
blocks? More broadly, when defining a permissionless protocol, who should be
able to broadcast new messages? For a PoW protocol, the selection is made
depending on computational power. PoS protocols are defined in the context of
specifying how to run a currency, and select identifiers according to their
stake in the given currency. More generally, one may consider a scarce
resource, and then select identifiers according to their corresponding
resource balance.
We consider a framework according to which protocols run relative to a
_resource pool_ , which specifies a resource balance for each identifier over
the duration of the run. The precise way in which the resource pool is used to
determine identifier selection is then black boxed through the use of what we
call the _permitter oracle_ , to which processors can make requests to
broadcast, and which will respond depending on their resource balance. To
model Bitcoin, for example, we simply allow each identifier (or rather, the
processor allocated the identifier) to make a request to broadcast a block at
each step of operation. The permitter oracle then gives a positive response
with probability depending on their resource balance, which in this case is
defined by hashrate. So, this gives a straightforward way to model the
process, without the need for a detailed discussion of hash functions and how
they are used to instantiate the selection process.
Formal specification. At each timeslot $t$, we refer to the set of all
messages that have been received or broadcast by $p$ at timeslots
$t^{\prime}\leq t$ as the _message state_ $M$ of $p$. Each run of the system
happens relative to a (determined or undetermined) _resource pool_ ,555As
described more precisely in Section 2.3, whether the resource pool is
determined or undetermined will decide whether we are in the _sized_ or
_unsized_ setting. which in the general case is a function
$\mathcal{R}:\mathcal{U}\times\mathbb{N}\times\mathcal{M}\rightarrow\mathbb{R}_{\geq
0}$, where $\mathcal{U}$ is the set of all identifiers and $\mathcal{M}$ is
the set of all possible sets of messages (so, $\mathcal{R}$ can be thought of
as specifying the resource balance of each identifier at each timeslot,
possibly relative to a given message state).666For a PoW protocol like
Bitcoin, the resource balance of each identifier will be their (relevant)
computational power at the given timeslot (and hence independent of the
message state). For PoS protocols, such as Ouroboros (KRDO, 17) and Algorand
(CM, 16), however, the resource balance will be determined by ‘on-chain’
information, i.e. information recorded in the message state $M$. For each $t$
and $M$, we suppose: (a) If $\mathcal{R}(\mathtt{U},t,M)\neq 0$ then
$\mathtt{U}=\mathtt{U}_{p}$ for some processor $p$; (b) There are finitely
many $\mathtt{U}$ for which $\mathcal{R}(\mathtt{U},t,M)\neq 0$, and; (c)
$\sum_{\mathtt{U}}\mathcal{R}(\mathtt{U},t,M)>0$.
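Conditions (a)–(c) can be stated as a simple mechanical check. In this sketch (all names are ours), `balances` maps each identifier with nonzero balance to its value $\mathcal{R}(\mathtt{U},t,M)$ for a fixed timeslot and message state:

```python
def check_resource_pool(balances, processor_ids):
    """Check conditions (a)-(c) on a resource pool at a fixed
    timeslot t and message state M. Illustrative sketch only."""
    support = [U for U, b in balances.items() if b != 0]
    # (a) every identifier with nonzero balance belongs to some processor
    cond_a = all(U in processor_ids for U in support)
    # (b) only finitely many identifiers have nonzero balance
    # (trivially true here, since a dict has finitely many keys)
    cond_b = True
    # (c) the total resource balance is strictly positive
    cond_c = sum(balances.values()) > 0
    return cond_a and cond_b and cond_c
```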
Suppose $p$ performs a step at timeslot $t$. After receiving messages and a
permission set, suppose that $p$’s message state is $M_{0}$, and that
$M_{0}^{\ast}$ is the set of all messages that are permitted for $p$. We
consider two _settings_ – the _timed_ and _untimed_ settings. The form of each
request $r\in R$ made by $p$ at timeslot $t$ depends on the setting, as
specified below. While the following definitions might initially seem a little
abstract, we will shortly give some concrete examples to make things clear.
* •
The untimed setting. Here, each request $r$ made by $p$ must be777To model a
perfectly co-ordinated adversary, we will later modify this definition to
allow the adversary to make requests of a slightly more general form (see
Section 6.4). of the form $(M,A)$, where $M\subseteq M_{0}\cup M_{0}^{\ast}$,
and where $A$ is some (possibly empty) extra data. The permitter oracle will
respond with a (possibly empty) set $M^{\ast}$. The value of $M^{\ast}$ will
be assumed to be a probabilistic function888We can allow some flexibility with
regard to what it means to be a ‘probabilistic function’ here. To describe a
permitter oracle in the most general form, we can suppose that $\mathtt{O}$ is
actually a distribution on the set of functions which specify a distribution
on outputs for each input (the input being specified by the determined
variables, $(M,A)$, and $\mathcal{R}(\mathtt{U}_{p},t,M)$). Then we can
suppose that one such function $\mathcal{O}$ is sampled according to the
distribution specified by $\mathtt{O}$ at the start of each run, and that,
each time a request is sent to the permitter oracle, a response is
independently sampled according to the distribution specified by
$\mathcal{O}$. This allows us to model both permitter oracles that give
independent responses each time they are queried, and also permitter oracles
that randomly select outputs but give the same response each time the same
request is made within a single run. of the determined variables, $(M,A)$, and
of $\mathcal{R}(\mathtt{U}_{p},t,M)$, subject to the condition that
$M^{\ast}=\emptyset$ if $\mathcal{R}(\mathtt{U}_{p},t,M)=0$. (If modelling
Bitcoin, for example, $M$ might be a set of blocks that have been received by
$p$, or that $p$ is already permitted to broadcast, while $A$ specifies a new
block extending the ‘longest chain’ in $M$. If the block is valid, then the
permitter oracle will give permission to broadcast it with probability
depending on the resource balance of $p$ at time $t$. We will expand on this
example below.)
* •
The timed setting. Here, each request $r$ made by $p$ must be of the form
$(t^{\prime},M,A)$, where $t^{\prime}$ is a timeslot, and where $M$ and $A$
are as in the untimed setting. The response $M^{\ast}$ of the permitter oracle
will be assumed to be a probabilistic function of the determined variables,
$(t^{\prime},M,A)$, and of $\mathcal{R}(\mathtt{U}_{p},t^{\prime},M)$, subject
to the condition that $M^{\ast}=\emptyset$ if
$\mathcal{R}(\mathtt{U}_{p},t^{\prime},M)=0$.
If the set of requests made by $p$ at timeslot $t$ is
$R=\\{r_{1},\dots,r_{k}\\}$, and if the permitter oracle responds with
$M_{1}^{\ast},\dots,M_{k}^{\ast}$ respectively, then
$M^{\ast}:=\cup_{i=1}^{k}M_{i}^{\ast}$ is the permission set received by $p$
at its next step of operation.
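The aggregation of oracle responses into the next permission set is a simple union; a minimal sketch (hypothetical names):

```python
def next_permission_set(requests, permitter):
    """Given the request set R = {r_1, ..., r_k} made by p at
    timeslot t, the permission set received by p at its next step is
    the union of the permitter oracle's responses. Illustrative."""
    M_star = set()
    for r in requests:
        M_star |= permitter(r)  # each response M_i* is a message set
    return M_star
```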
By a _permissionless protocol_ we mean a pair $(\mathtt{S},\mathtt{O})$, where
$\mathtt{S}$ is a state transition diagram to be followed by all non-faulty
processors, and where $\mathtt{O}$ is a permitter oracle, i.e. a probabilistic
function of the form described above. It should be noted that the roles of the
resource pool and the permitter oracle are different, in the following sense:
While the resource pool is a variable (meaning that a given protocol will be
expected to function with respect to all possible resource pools consistent
with the setting), the permitter is part of the protocol description.
How to understand the form of requests (informal). To help explain these
definitions, we consider how to model some simple protocols.
#### Modelling Bitcoin.
To model Bitcoin, we work in the untimed setting, and we define the set of
possible messages to be the set of possible _blocks_ 999In this paper, we use
the terms ‘block’ and ‘chain’ in an informal sense, for the purpose of giving
examples.. We then allow $p$ to make a single request of the form $(M,A)$ at
each step of its operation. Here $M$ will be a set of blocks that have been
received by $p$, or that $p$ is already permitted to broadcast. The entry $A$
will be data (without PoW attached) that specifies a block extending the
‘longest chain’ in $M$. If $A$ specifies a valid block, then the permitter
oracle will give permission to broadcast the block specified by $A$ with
probability depending on the resource balance of $\mathtt{U}_{p}$ at time $t$
(which is $p$’s hashrate, and is independent of $M$). So, if each timeslot
corresponds to a short time interval (one second, say), then the model ‘pools’
all attempts by $p$ to find a nonce within that time interval into a single
request. The higher $\mathtt{U}_{p}$’s resource balance at a given timeslot,
the greater the probability $p$ will be able to mine a block at that timeslot.
101010So, in this simple model, we don’t deal with any notion of a
‘transaction’. It is clear, though, that the model is sufficient to be able to
define what it means for blocks to be _confirmed_ , to define notions of
_liveness_ (roughly, that the set of confirmed blocks grows over time with
high probability) and _consistency_ (roughly, that with high probability, the
set of confirmed blocks is monotonically increasing over time), and to prove
liveness and consistency for the Bitcoin protocol in this model (by importing
existing proofs, such as that in (GKL, 18)). Note that the resource pool is
best modelled as undetermined here, because one does not know in advance how
the hashrate attached to each identifier (or even the total hashrate) will
vary over time.
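The Bitcoin permitter just described might be sketched as follows. The names, the placeholder validity check, and the choice of per-timeslot success probability `p_total` are all illustrative assumptions, not part of the formal definition:

```python
import random

def is_valid_block(A, M):
    # Placeholder: a real model would check that A specifies a valid
    # block extending the 'longest chain' in M.
    return A is not None

def bitcoin_permitter(request, hashrate, total_hashrate,
                      p_total=0.01, rng=random):
    """Untimed-setting permitter sketch for Bitcoin. `request` is a
    pair (M, A): M is a set of blocks received by p (or already
    permitted for p), and A specifies a candidate block extending
    the longest chain in M."""
    M, A = request
    if not is_valid_block(A, M):
        return set()
    # Permission is granted with probability proportional to p's
    # share of the total hashrate (independent of M).
    if rng.random() < p_total * (hashrate / total_hashrate):
        return {A}  # permission to broadcast this specific block
    return set()
```

Note that, as in the text, the resource balance (hashrate) enters only through the success probability, and a successful response permits exactly the one block specified by the request.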
#### Modelling PoS protocols
The first major difference for a PoS protocol is that the resource balance of
each participant now depends on the message state, and may also be a function
of time.111111It is standard practice in PoS blockchain protocols to require a
participant to have a currency balance that has been recorded in the
blockchain for at least a certain minimum amount of time before they can
produce new blocks, for example. So, a given participant may not be permitted
to extend a given chain of blocks at timeslot $t$, but may be permitted to
extend the same chain at a later timeslot $t^{\prime}$. So, the resource pool
is a function
$\mathcal{R}:\mathcal{U}\times\mathbb{N}\times\mathcal{M}\rightarrow\mathbb{R}_{\geq
0}$. A second difference is that $\mathcal{R}$ is determined, because one
knows from the start how the resource balance of each participant depends on
the message state as a function of time. Note that advance knowledge of
$\mathcal{R}$ _does not_ mean that one knows from the start which processors
will have large resource balances throughout the run, unless one knows which
messages will be broadcast. A third difference is that, with PoS protocols,
processors can generally look ahead to determine their permission to broadcast
at future timeslots, when their resource balance may be different than it is
at present. This means that PoS protocols are best modelled in the timed
setting, where processors can make requests corresponding to
timeslots $t^{\prime}$ other than the current timeslot $t$. To make these
ideas concrete, let us consider a simple example.
There are various ways in which ‘standard’ PoS selection processes can work.
Let us restrict ourselves, just for now and for the purposes of this example,
to considering blockchain protocols in which the only broadcast messages are
blocks, and let us consider a longest chain PoS protocol which works as
follows: For each broadcast chain of blocks $C$ and for all timeslots in a set
$T(C)$, the protocol being modelled selects precisely _one_ identifier who is
permitted to produce blocks extending $C$, with the probability each
identifier is chosen being proportional to their wealth, which is a time
dependent function of $C$. To model a protocol of this form, we work in the
timed and authenticated setting. We consider a resource pool which takes any
chain $C$ and allocates to each identifier $\mathtt{U}_{p}$ their wealth
according to $C$ as a function of $t$. Then we can consider a permitter oracle
which chooses one identifier $\mathtt{U}_{p}$ for each chain $C$ and each
timeslot $t^{\prime}$ in $T(C)$, each identifier $\mathtt{U}_{p}$ being chosen
with probability proportional to $\mathcal{R}(\mathtt{U}_{p},t^{\prime},C)$.
The owner $p$ of the chosen identifier $\mathtt{U}_{p}$ corresponding to $C$
and $t^{\prime}$ is then given permission to broadcast blocks extending $C$
whenever $p$ makes a request $(t^{\prime},C,\emptyset)$. This isolates a
fourth major difference from the PoW case: For the PoS protocol, the request
to broadcast and the resulting permission is not block specific, i.e. requests
are of the form $(t^{\prime},M,A)$ for $A=\emptyset$, and the resulting
permission is to broadcast _any_ of the appropriately timestamped and valid
blocks extending $C$. If one were to make requests block specific,
then users would be motivated to churn through large numbers of blocks, making
the protocol best modelled as partly PoW.
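A sketch of the permitter for this longest chain PoS example follows. The memoisation (so that the oracle answers consistently for a given chain and timeslot within a run) and all names are our own illustrative choices:

```python
import random

def make_pos_permitter(resource_pool, rng=random):
    """Timed-setting permitter sketch for the longest chain PoS
    example. `resource_pool(U, t, C)` gives U's wealth according to
    chain C at timeslot t. One identifier is selected per (chain,
    timeslot) pair, with probability proportional to its balance."""
    chosen = {}  # (C, t') -> selected identifier

    def permitter(U, request, identifiers):
        t_prime, C, A = request
        assert A is None  # PoS requests are not block specific
        key = (C, t_prime)
        if key not in chosen:
            weights = [resource_pool(V, t_prime, C) for V in identifiers]
            chosen[key] = rng.choices(identifiers, weights=weights)[0]
        if chosen[key] == U:
            # Permission to broadcast *any* valid, appropriately
            # timestamped block extending C, represented abstractly.
            return {("extend", C, t_prime)}
        return set()

    return permitter
```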
To model a BFT PoS protocol like Algorand, the basic approach will be very
similar to that described for the longest chain PoS protocol above, except
that certain other messages might be now required in $M$ (such as
authenticated votes on blocks) before permission to broadcast is granted, and
permission may now be given for the broadcast of messages other than blocks
(such as votes on blocks).
### 2.3. Defining the timed/untimed, sized/unsized and single/multi-permitter
settings
In the previous section we isolated four qualitative differences between PoW
and PoS protocols. The first difference is that, for PoW protocols, the
resource pool is a function
$\mathcal{R}:\mathcal{U}\times\mathbb{N}\rightarrow\mathbb{R}_{\geq 0}$, while
for PoS protocols, the resource pool is a function
$\mathcal{R}:\mathcal{U}\times\mathbb{N}\times\mathcal{M}\rightarrow\mathbb{R}_{\geq
0}$. Then there are three differences in the _settings_ that are appropriate
for modelling PoW and PoS protocols. We make the following formal definitions:
1. (1)
The timed and untimed settings. As we define them here, there are two
differences between the timed and untimed settings. The first concerns the
form of requests, as detailed in Section 2.2. We also require that the
following holds in the timed setting: For each broadcast message $m$, there
exists a unique timeslot $t_{m}$ such that permission to broadcast $m$ was
given in response to some request $(t_{m},M,A)$, and $t_{m}$ is computable
from $m$. We call $t_{m}$ the _timestamp_ of $m$.
2. (2)
The sized and unsized settings. We call the setting _sized_ if the resource
pool is determined. By the _total resource balance_ we mean the function
$\mathcal{T}:\mathbb{N}\times\mathcal{M}\rightarrow\mathbb{R}_{>0}$ defined by
$\mathcal{T}(t,M):=\sum_{\mathtt{U}}\mathcal{R}(\mathtt{U},t,M)$. For the
unsized setting, $\mathcal{R}$ and $\mathcal{T}$ are undetermined, with the
only restrictions being:
1. (i)
$\mathcal{T}$ only takes values in a determined interval
$[\alpha_{0},\alpha_{1}]$, where $\alpha_{0}>0$ (meaning that, although
$\alpha_{0}$ and $\alpha_{1}$ are determined, protocols will be required to
function for all possible $\alpha_{0}>0$ and $\alpha_{1}>\alpha_{0}$, and for
all undetermined $\mathcal{R}$ consistent with $\alpha_{0},\alpha_{1}$,
subject to (ii) below).121212We consider resource pools with range restricted
in this way, because it turns out to be an overly strong condition to require
a protocol to function without _any_ further conditions on the resource pool,
beyond the fact that it is a function to $\mathbb{R}_{\geq 0}$. Bitcoin will
certainly fail if the total resource balance over all identifiers decreases
sufficiently quickly over time, or if it increases too quickly, causing blocks
to be produced too quickly compared to $\Delta$.
2. (ii)
There may also be bounds placed on the resource balance of identifiers owned
by the adversary.
3. (3)
The multi-permitter and single-permitter settings. In the _single-permitter_
setting, each processor may submit a single request of the form $(M,A)$ or
$(t,M,A)$ (depending on whether we are in the timed setting or not) at each
timeslot, and it is allowed that $A\neq\emptyset$. In the _multi-permitter_
setting, processors can submit any finite number of requests at each timeslot,
but they must all satisfy the condition that $A=\emptyset$.131313The names
‘single-permitter’ and ‘multi-permitter’ come from the sizes of the resulting
permission sets when modelling blockchain protocols. For PoW protocols the
permission set received at a single step will generally be of size at most 1,
while this is not generally true for PoS protocols.
In this paper we do not actually define the general classes of PoW and PoS
protocols141414We will nevertheless be happy to refer to specific protocols as
PoW or PoS. – such an approach would be too limited, being overly focussed on
the step-by-step operations. In our impossibility results, we assume nothing
about the protocol other than basic properties of the resource pool and
permitter, as specified by the various settings above. We model PoW protocols
in the untimed, unsized, and single permitter settings, with
$\mathcal{R}:\mathcal{U}\times\mathbb{N}\rightarrow\mathbb{R}_{\geq 0}$. We
model PoS protocols in the timed, sized, multi-permitter and authenticated
settings, and with
$\mathcal{R}:\mathcal{U}\times\mathbb{N}\times\mathcal{M}\rightarrow\mathbb{R}_{\geq
0}$. Appendix 3 expands on the reasoning behind these modelling choices. In
the following sections, we will see that whether a protocol operates in the
sized/unsized, timed/untimed, or multi/single-permitter settings is a key
factor in determining the performance guarantees that are possible (such as
availability and consistency in different synchronicity settings).
### 2.4. The adversary
Appendix 4 gives an expanded version of this subsection, which also considers
the meaning of probabilistic statements in detail. In the permissionless
setting, we generally consider Byzantine faults, thought of as being carried
out with malicious intent by an _adversary_. The adversary controls a fixed
set of faulty processors – in formal terms, the difference between faulty and
non-faulty processors is that the state transition diagram for faulty
processors might not be $\mathtt{S}$, as specified by the protocol. In this
paper, we consider a static (i.e. non-mobile) adversary that controls a set of
processors that is fixed from the start of the protocol execution. We do this
to give the strongest possible form of our impossibility results. Since
protocols will be expected to behave well with respect to all possible message
delays consistent with the setting, it will sometimes be useful to _think of_
the adversary as also having control over the length of message delays.
We place no bound on the _size_ of the set of processors controlled by the
adversary. Rather, placing bounds on the power of the adversary in the
permissionless setting means limiting their resource balance. For $q\in[0,1]$,
we say the adversary is $q$-_bounded_ if their total resource balance is
always at most a $q$ fraction of the total, i.e. for all $M,t$, $\sum_{p\in
P_{A}}\mathcal{R}(\mathtt{U}_{p},t,M)\leq q\cdot\sum_{p\in
P}\mathcal{R}(\mathtt{U}_{p},t,M)$, where $P_{A}$ is the set of processors
controlled by the adversary.
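The $q$-bounded condition can be checked mechanically at any $(t,M)$. A minimal sketch, with identifiers simplified to strings and all names illustrative:

```python
def is_q_bounded(R, identifiers, adversary_ids, q, t, M):
    """The adversary is q-bounded if its resource balance is at most a q
    fraction of the total at (t, M):
        sum_{p in P_A} R(U_p, t, M) <= q * sum_{p in P} R(U_p, t, M)."""
    total = sum(R(u, t, M) for u in identifiers)
    adversarial = sum(R(u, t, M) for u in adversary_ids)
    return adversarial <= q * total
```

In the framework proper the condition must hold for *all* $t$ and $M$; the sketch checks a single point.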
### 2.5. The permissioned setting
So that we can compare the permissioned and permissionless settings, it is
useful to specify how the permissioned setting is to be defined within our
framework. According to our framework, the permissioned setting is exactly the
same as the permissionless setting that we have been describing, but with the
following differences:
* •
The finite number $n$ of processors is determined, together with the
identifier for each processor.
* •
All processors are automatically permitted to broadcast all messages (subject
only to the same rules as formally specified in Appendix 2 for the
authenticated setting).151515It is technically convenient here to allow that
processors can still submit requests, but that requests always get the same
response (the particular value then being immaterial).
* •
Bounds on the adversary are now placed by limiting the _number_ of faulty
processors – the adversary is $q$-_bounded_ if at most a fraction $q$ of all
processors are faulty.
## 3\. Byzantine Generals in the synchronous setting
### 3.1. Simulating permissioned protocols
Recall from Section 2.2 that we write $m_{\mathtt{U}}$ to denote the message
$m$ signed by $\mathtt{U}$. We consider protocols for solving the ‘Byzantine
Broadcast’ (BB) variant of the Byzantine Generals Problem. 161616We consider
the broadcast version of the problem (as originally considered in (PSL, 80;
LSP, 82)) because, although the required conditions for broadcast are
apparently easier to satisfy, solutions to the broadcast problem with optimal
resiliency (i.e. tolerating a maximal adversary) can easily be transformed
into solutions of the ‘agreement’ version of the problem which also have
optimal resiliency (while the reverse is not true). The ‘agreement’ version of
the problem is the same as broadcast, but with the stronger requirement that
all non-faulty processors must give output $z$ whenever all _non-faulty_
processors are given input $z$. A distinguished identifier
$\mathtt{U}^{\ast}$, which does not belong to any processor, is thought of as
belonging to the _general_. Each processor $p$ begins with a protocol input
$\mathtt{in}_{p}$, which is a set of messages from the general: either
$\\{0_{\mathtt{U}^{\ast}}\\},\\{1_{\mathtt{U}^{\ast}}\\}$, or
$\\{0_{\mathtt{U}^{\ast}},1_{\mathtt{U}^{\ast}}\\}$. All non-faulty processors
$p$ must give the same output $o_{p}\in\\{0,1\\}$. In the case that the
general is ‘honest’, there will exist $z\in\\{0,1\\}$, such that
$\mathtt{in}_{p}=\\{z_{\mathtt{U}^{\ast}}\\}$ for all $p$, and in this case we
require that $o_{p}=z$ for all non-faulty processors.
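The output requirements for BB just described can be written as a simple validity check. This is a hedged sketch: the general's signatures are elided, protocol inputs are modelled as subsets of $\{0,1\}$, and the function name is illustrative.

```python
def bb_outputs_valid(inputs, outputs, nonfaulty):
    """inputs[p]: p's protocol input, a non-empty subset of {0, 1}.
    outputs[p]: p's output in {0, 1}. Checks agreement among non-faulty
    processors, plus validity in the honest-general case."""
    outs = {outputs[p] for p in nonfaulty}
    if len(outs) != 1:            # agreement: all non-faulty outputs equal
        return False
    all_inputs = {frozenset(inputs[p]) for p in inputs}
    if len(all_inputs) == 1 and len(next(iter(all_inputs))) == 1:
        (z,) = next(iter(all_inputs))  # honest general: everyone received {z}
        return outs == {z}        # validity: the common output must be z
    return True
```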
As we have already stipulated, processors also take other inputs beyond their
_protocol input_ as described in the last paragraph, such as their identifier
and $\Delta$ – to distinguish these latter inputs from the protocol inputs, we
will henceforth refer to them as _parameter inputs_. The protocol inputs and
the parameter inputs have different roles, in that the form of the outputs
required to ‘solve’ BB only depend on the protocol inputs, but the protocol
will be required to produce correct outputs for all possible parameter inputs.
Our first simple, but important, observation is that we can define PoS
protocols that solve Byzantine Broadcast (BB) in an uninteresting way,
essentially just by simulating existing protocols for the permissioned setting
(we will subsequently argue that such a solution is not really decentralised,
and discuss how to avoid this issue). Roughly, one takes the set of processors
who have non-zero resource balance at $t=0$ and have them carry out a
permissioned version of the protocol. Interestingly, however, a complication
arises which stems from one of the more subtle differences between the
requirements for BB and state-machine-replication (SMR): The former only
requires protocol participants to output correctly, while the latter also
requires output conditions on clients who do not actively participate in the
process of reaching consensus. In the permissionless setting, we also require
processors that may have zero resource balance throughout the run to output
correctly, and those processors will not be able to actively participate. So,
the output requirements are more similar to those for SMR in that particular
sense.
###### Theorem 3.1.
Consider the synchronous setting, and suppose $q<1$. There exists a
(deterministic) PoS protocol solving Byzantine Broadcast for a $q$-bounded
adversary.
###### Proof.
See Appendix 5. ∎
Probabilistic analogues of Theorem 3.1 are proved for PoW protocols in (AD,
15) and (KMS, 14) (with each of those papers making slightly different
assumptions on the setup).
### 3.2. Defining (weak) decentralisation
The protocol described in the proof of Theorem 3.1 could be criticised
(informally) for not really being _decentralised_. The protocol is really
carried out by a closed circle of participants who start with non-zero
resource balance, while other participants are just observers – the whole
point of permissionless protocols is to avoid having a closed circle of
participants of this kind. In this section, we state conditions that suffice
to ensure permissionless protocols cannot operate in this way. While we cannot
stop faulty processors from delaying the point at which they broadcast
messages (or from broadcasting messages earlier than their timestamp in the
timed setting), the key is simply to ensure that non-faulty processors do not
play such timing tricks.
###### Definition 3.2.
The conditions for a permissionless protocol to be _weakly decentralised_
depend on the setting:
* •
In the untimed setting, we require that if non-faulty $p$ receives
$(M,M^{\ast})$ at $t$ and broadcasts the set of messages $M^{\prime}$ (at
$t$), then $M^{\prime}\subseteq M^{\ast}$.
* •
In the timed setting, we require that if non-faulty $p$ broadcasts $m$ at
timeslot $t$, then $t=t_{m}$, i.e. $m$ is broadcast at $t$ which is its
timestamp.
So, roughly, the point of Definition 3.2 is to ensure that, when the resource
balance shifts from one set of processors to another, this actually affects
their ability to participate in the protocol. If some processors with non-zero
resource balance at the start of the run are permitted to broadcast as they
please throughout the run, while all other processors are never permitted to
broadcast (as in the proof of Theorem 3.1), for example, then the protocol
should not be considered decentralised.
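The two conditions of Definition 3.2 can be sketched as predicates on the behaviour of non-faulty processors. The representations below (permission sets and broadcasts as Python sets, a log of `(processor, timestamp, timeslot)` tuples) are simplifying assumptions of this illustration:

```python
def weakly_decentralised_untimed(permission_set, broadcast_set):
    """Untimed setting: if non-faulty p receives (M, M*) at t and
    broadcasts M' at t, then M' must be a subset of M*."""
    return broadcast_set <= permission_set

def weakly_decentralised_timed(broadcast_log):
    """Timed setting: each entry (p, t_m, t) records non-faulty p
    broadcasting a message with timestamp t_m at timeslot t; we require
    t == t_m for every such broadcast."""
    return all(t == t_m for (_p, t_m, t) in broadcast_log)
```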
Combined with our requirement that protocols work for _any_ resource pool
(consistent with the setting), Definition 3.2 is similar to Algorand’s (CGMV,
18) requirement for ‘player replaceability’, which means that the protocol
should still function if the set of processors that are allowed to broadcast
messages changes arbitrarily in each round. Interestingly, the motivation for
achieving player replaceability in Algorand is not to ensure decentralisation,
but rather to defend against DDoS attacks and a mobile adversary who can
corrupt users adaptively and instantaneously, but cannot control more than 1/3
of the total stake in the system. It remains a topic for future research to
understand whether there is some sense in which Definition 3.2 is equivalent
to an ability to defend against such mobile adversaries. It is also worth
noting that Algorand gives a probabilistic solution to the BB problem, and an
interesting consequence of our Theorem 3.3 is that the solution is
_necessarily_ probabilistic.
###### Question 1.
Is there some sense in which Definition 3.2 is equivalent to the requirement
of being able to defend against certain types of mobile adversary?
### 3.3. The impossibility of deterministic consensus in the permissionless
setting
In Section 2.2, we allowed the permitter oracle $\mathtt{O}$ to be a
probabilistic function. In the case that $\mathtt{O}$ is deterministic, i.e.
if there is a single output for each input, we will refer to the protocol
$(\mathtt{S},\mathtt{O})$ as deterministic.
In the following proof, it is convenient to consider an infinite set of
processors. As always, though, (see Section 2.2) we assume for each $t$ and
$M$, that there are finitely many $\mathtt{U}$ for which
$\mathcal{R}(\mathtt{U},t,M)\neq 0$, and thus only finitely many corresponding
processors given permission to broadcast. All that is really required for the
proof to go through is that there are an unbounded number of identifiers that
can participate _at some timeslot_ (such as is true for Bitcoin, or in any
context where the adversary can transfer their resource balance to an
unbounded number of possible public keys), and that the set of identifiers
with non-zero resource balance can change quickly. In particular, this means
that the adversary can broadcast using new identifiers at each timeslot. Given
this condition, one can then adapt the proof of (DS, 83), that a permissioned
protocol solving BB for a system with $t$ many faulty processors requires at
least $t+1$ many steps, to show that a deterministic and weakly decentralised
protocol in the permissionless setting cannot always give correct outputs.
Adapting the proof, however, is highly non-trivial, and requires establishing
certain compactness conditions on the space of runs, which are straightforward
in the permissioned setting but require substantial effort to establish in the
permissionless setting. Of course, Theorem 3.1 also means that weak
decentralisation must play a significant role in the proof.
###### Theorem 3.3.
Consider the synchronous setting and suppose $q\in(0,1]$. There is no
deterministic and weakly decentralised permissionless protocol that solves BB
for a $q$-bounded adversary.
###### Proof.
See Appendix 6. ∎
Theorem 3.3 limits the kind of solution to BB that is possible in the
permissionless setting. In the context of a blockchain protocol (for state
machine replication), however, one is (in some sense) carrying out multiple
versions of (non-binary) BB in sequence. One approach to circumventing Theorem
3.3 would be to accept some limited centralisation: One might have a fixed
circle of participants carry out each round of BB (involving interactions over
multiple timeslots according to a permissioned protocol), only allowing in new
participants after the completion of each such round. While this approach
clearly does _not_ involve a decentralised solution to BB, it might well be
considered sufficiently decentralised in the context of state machine
replication.
### 3.4. Probabilistic consensus in the authenticated setting
In light of Theorem 3.3, it becomes interesting to consider weakly
decentralised permissionless protocols giving _probabilistic_ solutions to BB.
To this end, we consider protocols that take an extra parameter input
$\varepsilon>0$, which we call the _security parameter_. Now we require that,
for any value of the security parameter input $\varepsilon>0$, it holds with
probability $>1-\varepsilon$ that all non-faulty processors give correct
outputs.
In the remaining sections of this paper, we describe some impossibility
results for probabilistic permissionless protocols. As we do so, we will also
refer to positive (possibility) results from the existing literature: _The aim
in doing so is not to attempt a survey of existing results, but just to show
how our negative results fit into existing positive results_.
Remaining in the synchronous setting for now, the pertinent question then
becomes:
###### Question 2.
For which $q\in[0,1)$ do there exist weakly decentralised protocols giving a
probabilistic solution to BB for a $q$-bounded adversary in the authenticated
and synchronous setting?
Longest chain protocols such as Bitcoin, Ouroboros (KRDO, 17) and Snow White
(BPS, 16) suffice to give a positive response to Question 2 for
$q\in[0,\frac{1}{2})$, and with respect to both PoW and PoS protocols. The
case $q\in[\frac{1}{2},1)$ remains open for BB.
### 3.5. Probabilistic consensus in the unauthenticated setting
So far it might seem that, whatever the setting, permissioned protocols can be
found to solve any problem that can be solved by a permissionless protocol. In
fact, this is not so. In the original papers (PSL, 80; LSP, 82) it was shown
that there exists a permissioned protocol solving BB (and Byzantine Agreement)
in the unauthenticated and synchronous setting for a $q$-bounded adversary iff
$q<1/3$. This was for a framework in which processors communicate using
private channels, however. Theorem 3.4 below shows that, for our framework
without private channels, there does not exist a permissioned protocol solving
the Byzantine Generals problem in the unauthenticated and synchronous setting
for a $q$-bounded adversary if $q>0$. Since PoW protocols can be defined
solving this problem when $q<\frac{1}{2}$, this demonstrates a setting in
which PoW protocols can solve problems that cannot be solved by any
permissioned protocol.
A version of Theorem 3.4 below was already proved (for a different framework)
in (PS, 17). We include a proof here171717To describe probabilistic protocols
in the permissioned setting, we allow that state transitions may be
probabilistic. because it is significantly simpler than the proof in (PS,
17), and because the version we give here is easily modified to give a proof
of Theorem 3.5.
###### Theorem 3.4.
(PS, 17) Consider the synchronous and unauthenticated setting (with the
framework described in Section 2, whereby processors broadcast, rather than
communicate by private channels). If $q\in(0,1]$, then there is no
permissioned protocol giving a probabilistic solution to BB for a $q$-bounded
adversary.
###### Proof.
See Appendix 7. ∎
What happens in the permissionless setting? First of all, the proof of Theorem
3.4 is easily modified to show that it is not possible to deal with the
case $q\geq\frac{1}{2}$. Superficially, the statement of Theorem 3.5 below
sounds similar to Theorem 1 from (GK, 20), but that paper deals with the
Byzantine Agreement Problem, for which it is easy to see that it is never
possible to deal with $q\geq\frac{1}{2}$.
###### Theorem 3.5.
Consider the synchronous and unauthenticated setting. If $q\geq\frac{1}{2}$,
then there is no permissionless protocol giving a probabilistic solution to BB
for a $q$-bounded adversary.
###### Proof.
See Appendix 8. ∎
We have required that PoS protocols operate in the authenticated setting. So,
in the opposite direction to Theorem 3.5, this leaves us to consider what can
be done with PoW protocols. As shown in (GKL, 18), Bitcoin is a PoW protocol
which solves BB in the unauthenticated setting for all $q\in[0,\frac{1}{2})$.
## 4\. Byzantine Generals with partially synchronous communication
Throughout this section, we assume that communication is partially
synchronous. We note first that, in this setting, protocols giving a
probabilistic solution to BB will not be possible if the adversary is
$q$-bounded for $q\geq\frac{1}{3}$ – this follows easily by modifying the
argument presented in (DLS, 88), although that proof was given for
deterministic protocols in the permissioned setting. For $q<\frac{1}{3}$ and
working in the sized setting, there are multiple PoS protocols, such as
Algorand,181818For an exposition of Algorand that explains how to deal with
the partially synchronous setting, see (CGMV, 18). which work successfully
when communication is partially synchronous.
The fundamental result with respect to the _unsized_ setting with partially
synchronous communication is that there is no permissionless protocol giving a
probabilistic solution to BB. So, PoW protocols cannot give a probabilistic
solution to BB when communication is partially synchronous.191919Of course, it
is crucial to our analysis here that PoW protocols are being modelled in the
unsized setting. It is also interesting to understand why Theorem 4.1 does not
contradict the results of Section 7 in (GKL, 18). In that paper, they consider
the form of partially synchronous setting from (DLS, 88) in which the delay
bound $\Delta$ always holds, but is undetermined. In order for the ‘common
prefix property’ to hold in Lemma 34 of (GKL, 18), the number of blocks $k$
that have to be removed from the longest chain is a function of $\Delta$. When
$\Delta$ is unknown, the conditions for block confirmation are therefore also
unknown. It is for this reason that the Bitcoin protocol cannot be used to
give a probabilistic solution to BB in the partially synchronous and unsized
setting.
###### Theorem 4.1.
There is no permissionless protocol giving a probabilistic solution to BB in
the unsized setting with partially synchronous communication.
###### Proof.
See Appendix 9. ∎
As stated previously, Theorem 4.1 can be seen as an analog of the CAP Theorem
for our framework. While the CAP Theorem asserts that (under the threat of
unbounded network partitions), no protocol can be both available and
consistent, it _is_ possible to describe protocols that give a solution to BB
in the partially synchronous setting (DLS, 88). The crucial distinction is
that such solutions are not required to give outputs until after the
undetermined stabilisation time has passed. The key idea behind the proof of
Theorem 4.1 is that, in the unsized and partially synchronous setting, this
distinction disappears. Network partitions are now indistinguishable from
waning resource pools. In the unsized setting, the requirement to give an
output can therefore force participants to give an output before the
stabilisation time has passed.
###### Question 3.
What are the results for the timed/untimed, sized/unsized, single/multi-
permitter settings other than those used to model PoW and PoS protocols? What
happens, for example, when communication is partially synchronous and we
consider a variant of PoW protocols for which the total resource balance (see
Section 2.3) is determined?
## 5\. Concluding comments
We close with a list of further questions. In Section 2 we assumed that
processors are synchronous, in the sense that each processor takes one step at
each timeslot. It would be easy to adapt the framework to deal with partially
synchronous processors as in (DLS, 88).
###### Question 4.
What happens in the setting of partially synchronous processors, and to what
extent can the techniques of (DLS, 88) be used to solve this question? How
does this depend on whether we are working in the timed/untimed and
sized/unsized settings?
While we have defined the single-permitter and multi-permitter settings, we
did not analyse the resulting differences in Sections 3 and 4. In fact, this is
the distinction between PoS and PoW protocols which has probably received the
most attention in the previous literature (but not within the framework we
have presented here) in the form of the ‘nothing-at-stake’ problem (BCNPW,
19). In the framework outlined in Section 2, we did not allow for a mobile
adversary (who can make non-faulty processors faulty, perhaps for a temporary
period). It seems reasonable to suggest that the difference between these two
settings becomes particularly significant in the context of a mobile
adversary:
###### Question 5.
What happens in the context of a mobile adversary, and how does this depend on
whether we are working in the single-permitter or multi-permitter settings? Is
this a significant advantage of PoW protocols?
In the framework we have described here, we have followed much of the
classical literature in not limiting the length of messages, or the finite
number of messages that can be sent in each timeslot. While the imagined
network over which processors communicate does have message delays, it
apparently has infinite bandwidth so that these delays are independent of the
number and size of messages being sent. While this is an appropriate model for
some circumstances, in looking to model such things as sharding protocols
(ZMR, 18) it will be necessary to adopt a more realistic model:
###### Question 6.
How best to modify the framework, so as to model limited bandwidth (and
protocols such as those for implementing sharding)?
In this paper we have tried to follow a piecemeal approach, in which new
complexities are introduced one at a time. This means that there are a number
of differences between the forms of analysis that normally take place in the
blockchain literature and in distributed computing that we have not yet
addressed. One such difference is that it is standard in the blockchain world
to consider a setting in which participants may be late joining. A number of
papers (PS, 17; GPS, 19) have already carried out an analysis of some of the
nuanced considerations to be had here, but there is more to be done:
###### Question 7.
What changes in the context of late joining? In what ways is this different
from the partially synchronous setting, and how does this relate to Question
6? How does all of this depend on other aspects of the setting?
Another crucial difference between the blockchain and distributed computing
literatures, is that the former normally takes very seriously the idea that
incentives for participants must stand up to game theoretic scrutiny. One
might argue that, beyond any political motivations, the reason permissionless
protocols are normally introduced in the context of defining _currencies_ is
because it is in such a context that incentives for participants (and
‘miners’, in particular) can be properly aligned. So far, though, any attempt
at such an analysis is entirely missing from the framework we have described.
The Rational Protocol Design framework of Garay et al. could be useful in this
regard (GKM+, 13).
###### Question 8.
How should ‘incentives’ be incorporated into the framework? Does it make sense
to do this outside the context of defining state machine replication protocols
for running currencies?
## References
* ABdSFG (08) Eduardo AP Alchieri, Alysson Neves Bessani, Joni da Silva Fraga, and Fabíola Greve. Byzantine consensus with unknown participants. In International Conference On Principles Of Distributed Systems, pages 22–40. Springer, 2008.
* AD (15) Marcin Andrychowicz and Stefan Dziembowski. Pow-based distributed cryptography with no trusted setup. In Annual Cryptology Conference, pages 379–399. Springer, 2015.
* AM+ (17) Ittai Abraham, Dahlia Malkhi, et al. The blockchain consensus layer and bft. Bulletin of EATCS, 3(123), 2017.
* BCNPW (19) Jonah Brown-Cohen, Arvind Narayanan, Alexandros Psomas, and S Matthew Weinberg. Formal barriers to longest-chain proof-of-stake protocols. In Proceedings of the 2019 ACM Conference on Economics and Computation, pages 459–473, 2019.
* Bor (96) Malte Borcherding. Levels of authentication in distributed agreement. In International Workshop on Distributed Algorithms, pages 40–55. Springer, 1996.
* BPS (16) Iddo Bentov, Rafael Pass, and Elaine Shi. Snow white: Provably secure proofs of stake. IACR Cryptology ePrint Archive, 2016(919), 2016.
* Bre (00) Eric A Brewer. Towards robust distributed systems. In PODC, volume 7, pages 343477–343502. Portland, OR, 2000.
* But (18) Vitalik Buterin. What is ethereum? Ethereum Official webpage. Available: http://www.ethdocs.org/en/latest/introduction/what-is-ethereum.html. Accessed, 14, 2018.
* Can (01) Ran Canetti. Universally composable security: A new paradigm for cryptographic protocols. In Proceedings 42nd IEEE Symposium on Foundations of Computer Science, pages 136–145. IEEE, 2001.
* CGMV (18) Jing Chen, Sergey Gorbunov, Silvio Micali, and Georgios Vlachos. Algorand agreement: Super fast and partition resilient byzantine agreement. IACR Cryptol. ePrint Arch., 2018:377, 2018.
* CM (16) Jing Chen and Silvio Micali. Algorand. arXiv preprint arXiv:1607.01341, 2016.
* CSS (04) David Cavin, Yoav Sasson, and André Schiper. Consensus with unknown participants or fundamental self-organization. In International Conference on Ad-Hoc Networks and Wireless, pages 135–148. Springer, 2004.
* DLS (88) Cynthia Dwork, Nancy A. Lynch, and Larry Stockmeyer. Consensus in the presence of partial synchrony. Journal of the ACM, 35(2):288–323, 1988.
* DS (83) Danny Dolev and H. Raymond Strong. Authenticated algorithms for byzantine agreement. SIAM Journal on Computing, 12(4):656–666, 1983.
* GK (20) Juan Garay and Aggelos Kiayias. Sok: A consensus taxonomy in the blockchain era. In Cryptographers’ Track at the RSA Conference, pages 284–318. Springer, 2020.
* GKL (18) Juan A Garay, Aggelos Kiayias, and Nikos Leonardos. The bitcoin backbone protocol: Analysis and applications. 2018.
* GKM+ (13) Juan Garay, Jonathan Katz, Ueli Maurer, Björn Tackmann, and Vassilis Zikas. Rational protocol design: Cryptography against incentive-driven adversaries. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pages 648–657. IEEE, 2013.
* GKO+ (20) Juan Garay, Aggelos Kiayias, Rafail M Ostrovsky, Giorgos Panagiotakos, and Vassilis Zikas. Resource-restricted cryptography: Revisiting mpc bounds in the proof-of-work era. Advances in Cryptology–EUROCRYPT 2020, 12106:129, 2020.
* GKR (18) Peter Gaži, Aggelos Kiayias, and Alexander Russell. Stake-bleeding attacks on proof-of-stake blockchains. In 2018 Crypto Valley Conference on Blockchain Technology (CVCBT), pages 85–92. IEEE, 2018.
* GL (02) Seth Gilbert and Nancy Lynch. Brewer’s conjecture and the feasibility of consistent, available, partition-tolerant web services. Acm Sigact News, 33(2):51–59, 2002.
* GPS (19) Yue Guo, Rafael Pass, and Elaine Shi. Synchronous, with a chance of partition tolerance. In Annual International Cryptology Conference, pages 499–529. Springer, 2019.
* KMS (14) Jonathan Katz, Andrew Miller, and Elaine Shi. Pseudonymous broadcast and secure computation from cryptographic puzzles. Technical report, Cryptology ePrint Archive, Report 2014/857, 2014. http://eprint.iacr.org.
* KRDO (17) Aggelos Kiayias, Alexander Russell, Bernardo David, and Roman Oliynykov. Ouroboros: A provably secure proof-of-stake blockchain protocol. In Annual International Cryptology Conference, pages 357–388. Springer, 2017.
* L+ (01) Leslie Lamport et al. Paxos made simple. ACM Sigact News, 32(4):18–25, 2001.
* LSP (82) Leslie Lamport, Robert Shostak, and Marshall Pease. The byzantine generals problem. ACM Transactions on Programming Languages and Systems (TOPLAS), 4(3):382–401, 1982.
* N+ (08) Satoshi Nakamoto et al. Bitcoin: A peer-to-peer electronic cash system. 2008.
* Oku (05) Michael Okun. Distributed computing among unacquainted processors in the presence of Byzantine failures. Hebrew University of Jerusalem, 2005.
* PS (17) Rafael Pass and Elaine Shi. Rethinking large-scale consensus. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pages 115–129. IEEE, 2017.
* PSas (16) Rafael Pass, Lior Seeman, and abhi shelat. Analysis of the blockchain protocol in asynchronous networks, 2016. eprint.iacr.org/2016/454.
* PSL (80) Marshall Pease, Robert Shostak, and Leslie Lamport. Reaching agreement in the presence of faults. Journal of the ACM (JACM), 27(2):228–234, 1980.
* RD (16) Ling Ren and Srinivas Devadas. Proof of space from stacked expanders. In Theory of Cryptography Conference, pages 262–285. Springer, 2016.
* RJC+ (19) Aravindh Raman, Sagar Joglekar, Emiliano De Cristofaro, Nishanth Sastry, and Gareth Tyson. Challenges in the decentralised web: The mastodon case. In Proceedings of the Internet Measurement Conference, pages 217–229, 2019.
* Sch (20) Fabian Schär. Decentralized finance: On blockchain-and smart contract-based financial markets. Available at SSRN 3571335, 2020.
* Ter (20) Benjamin Terner. Permissionless consensus in the resource model. IACR Cryptol. ePrint Arch., 2020:355, 2020.
* ZMR (18) Mahdi Zamani, Mahnush Movahedi, and Mariana Raykova. Rapidchain: Scaling blockchain via full sharding. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 931–948, 2018.
## 6\. Appendices
### 6.1. Appendix 1 – Table 1
term | meaning
---|---
$\Delta$ | bound on message delay
$\mathtt{I}$ | a protocol instance
$m$ | a message
$M$ | a set of messages
$\mathcal{M}$ | the set of all possible sets of messages
$\mathtt{O}$ | a permitter oracle
$p$ | a processor
$P$ | a permission set
$\mathtt{P}$ | a permissionless protocol
$R$ | a request set
$\mathcal{R}$ | the resource pool
$\mathtt{S}$ | a state transition diagram
$t$ | a timeslot
$(t,M,A)$ | a request in the timed setting
$\mathtt{T}$ | a timing rule
$(M,A)$ | a request in the untimed setting
$\mathcal{U}$ | the set of all identifiers
$\mathtt{U}_{p}$ | the identifier for $p$
Table 1. Some commonly used variables and terms.
### 6.2. Appendix 2 – Timing rules and the definitions of the synchronous,
partially synchronous and authenticated settings
The synchronicity settings we consider with regard to message delivery are
just the standard settings introduced in (DLS, 88). In the _synchronous_
setting it holds for some determined $\Delta\geq 1$, for all $p_{1}\neq p_{2}$
and all $t$, that if $p_{1}$ broadcasts a message $m$ at timeslot $t$, then
$p_{2}$ receives $m$ at some timeslot in $(t,t+\Delta]$. In the _partially
synchronous_ setting, there exists an undetermined stabilisation time $T$ and
determined $\Delta\geq 1$, such that, for all $p_{1}\neq p_{2}$ and all $t\geq
T$, if $p_{1}$ broadcasts a message $m$ at timeslot $t$, then $p_{2}$ receives
$m$ at some timeslot in $(t,t+\Delta]$. For the sake of simplicity (and since
we consider mainly impossibility results), we will suppose in this paper that
each processor takes one step at each timeslot, but it would be easy to adapt
the framework to deal with partially synchronous processors as in (DLS, 88).
It is also useful to consider the notion of a _timing rule_ , by which we mean
a partial function $\mathtt{T}$ mapping tuples of the form
$(p,p^{\prime},m,t)$ to timeslots. We say that a run follows the timing rule
$\mathtt{T}$ if the following holds for all processors $p$ and $p^{\prime}$:
We have that $p^{\prime}$ receives $m$ at $t^{\prime}$ iff there exists some
$p$ and $t<t^{\prime}$ such that $p$ broadcasts the message $m$ at $t$ and
$\mathtt{T}(p,p^{\prime},m,t)\downarrow=t^{\prime}$.202020Note that a single
timing rule might result in many different sequences of messages being
received, if different sequences of messages are broadcast. We restrict
attention to timing rules which are consistent with the setting. So the timing
rule just specifies how long messages take to be received by each processor
after broadcast.
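The timing-rule definition above can be sketched concretely: below, a timing rule is modelled as a Python dict standing in for the partial function $\mathtt{T}$, mapping tuples (sender, receiver, message, broadcast timeslot) to receipt timeslots, and "consistent with the setting" is illustrated by the synchronous bound for a fixed $\Delta$. All names and encodings here are illustrative assumptions, not part of the formal framework.

```python
# Sketch: a timing rule as a partial function, encoded as a dict from
# (sender, receiver, message, broadcast_timeslot) to receipt timeslot.
# Undefined tuples simply have no entry.

def consistent_with_synchrony(timing_rule, delta):
    """Check every defined value of the partial function lands in the
    synchronous window (t, t + delta]."""
    return all(t < t_recv <= t + delta
               for (_p, _q, _m, t), t_recv in timing_rule.items())

T = {("p1", "p2", "m", 3): 4,   # p2 receives m one timeslot after broadcast
     ("p1", "p3", "m", 3): 5}   # p3 receives it two timeslots after
print(consistent_with_synchrony(T, delta=2))  # True
print(consistent_with_synchrony(T, delta=1))  # False
```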
In the _authenticated_ setting, we make the following modification to the
definitions of Section 2.2: The response $M^{\ast}$ of the permitter to a
request $r$ made by $p$ is now allowed to be a probabilistic function also of
$\mathtt{U}_{p}$ (as well as the determined variables, $r$ and the resource
balance, as previously). Then we consider an extra filter on the set of
messages that are permitted for $p$: To be permitted for $p$, $m$ must belong
to some permission set $M^{\ast}$ that has previously been received by $p$
_and_ must satisfy the condition that for any ordered pair of the form
$(\mathtt{U}_{p^{\prime}},m^{\prime})$ contained in $m$ with
$\mathtt{U}_{p^{\prime}}\in\mathcal{U}$, either $p=p^{\prime}$, or else
$(\mathtt{U}_{p^{\prime}},m^{\prime})$ is contained in a message that has been
received by $p$.212121Formally, messages and identifiers are strings forming a
prefix-free set, i.e. such that no message or identifier is an initial segment
of another. For strings $\sigma$ and $\tau$, we say $\sigma$ is contained in
$\tau$ if $\sigma$ is a substring of $\tau$, i.e. if there exist (possibly
empty) strings $\rho_{0}$ and $\rho_{1}$ such that $\tau$ is the concatenation
of $\rho_{0}$, $\sigma$ and $\rho_{1}$. The point of these definitions is to
be able to model the authenticated setting within an information-theoretic
state transition model. We write $m_{\mathtt{U}}$ to denote the ordered pair
$(\mathtt{U},m)$, thought of as ‘$m$ signed by $\mathtt{U}$’. In the
_unauthenticated_ setting, the previously described modifications do not
apply, so that $m$ is permitted for $p$ whenever it belongs to some permission
set that has previously been received by $p$.
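The extra filter of the authenticated setting can be sketched as follows. Here a signed pair $(\mathtt{U},m)$ is modelled as the tuple `('sig', U, m)` and "contained in" as the sub-term relation; this encoding and the helper names are illustrative assumptions, not the paper's exact string-based formalisation.

```python
# Sketch of the authenticated-setting permission filter described above.

def signed_pairs(m):
    """All ordered pairs (U, m') contained in the message m."""
    pairs = []
    while isinstance(m, tuple) and m[0] == 'sig':
        _, u, inner = m
        pairs.append((u, inner))
        m = inner
    return pairs

def contained_in(pair, message):
    return pair in signed_pairs(message)

def permitted_for(m, p_id, permission_sets, received):
    # m must belong to some permission set previously received by p...
    if not any(m in M for M in permission_sets):
        return False
    # ...and every signed pair (U_{p'}, m') in m with p' != p must be
    # contained in some message p has already received.
    return all(u == p_id or any(contained_in((u, inner), r) for r in received)
               for (u, inner) in signed_pairs(m))

# p (with identifier U2) relays a value signed first by U1: this is
# permitted only because p has already received the inner signed message.
inner = ('sig', 'U1', 'z')
m = ('sig', 'U2', inner)
print(permitted_for(m, 'U2', [{m}], [inner]))  # True
print(permitted_for(m, 'U2', [{m}], []))       # False
```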
### 6.3. Appendix 3 – Modelling PoW and PoS protocols
PoW protocols will generally be best modelled in the untimed, unsized and
single-permitter settings. They are best modelled in the untimed setting,
because a processor’s probability of being granted permission to broadcast a
block at timeslot $t$ (even if that block has a different timestamp) depends
on their resource balance at $t$, rather than at any other timeslot. They are
best modelled in the unsized setting, because one does not know in advance of
the protocol execution the amount of mining which will take place at a given
timeslot in the future. They are best modelled in the single-permitter
setting, so long as permission to broadcast is block-specific.
PoS protocols are best modelled in the timed, sized and multi-permitter
settings. They are best modelled in the timed setting, because blocks will
generally have non-manipulable timestamps, and because a processor’s ability
to broadcast a block may be determined at a timestamp $t$ even though the
probability of success depends on their resource balance at $t^{\prime}$ other
than $t$. They are best modelled in the sized setting, because the resource
pool is known from the start of the protocol execution. They are best modelled
in the multi-permitter setting, so long as permission to broadcast is not
block-specific, i.e. when permission is granted, it is to broadcast a range of
permissible blocks at a given position in the blockchain. One further
difference is that PoS permitter oracles would seem to require the
authenticated setting for their implementation, while PoW protocols might be
modelled as operating in either the authenticated or unauthenticated settings
– we do not attempt to _prove_ any such fact here, and indeed our framework is
not appropriate for such an analysis.
### 6.4. Appendix 4 – The adversary and the meaning of probabilistic
statements
In the permissionless setting, we are generally most interested in dealing
with Byzantine faults, normally thought of as being carried out with malicious
intent by an _adversary_. The adversary controls a fixed set of faulty
processors - in formal terms, the difference between faulty and non-faulty
processors is that the state transition diagram for faulty processors might
not be $\mathtt{S}$, as specified by the protocol. In this paper, we consider
a static (i.e. non-mobile) adversary that controls a set of processors that is
fixed from the start of the protocol execution so as to give the strongest
possible form of our impossibility results.
In order to model an adversary that is able to perfectly co-ordinate the
processors it controls, we also make the two following changes to the
definitions of previous sections. Let $P$ be the set of processors, and let
$P_{A}$ be the set of processors controlled by the adversary. If $p\in P_{A}$
then:
* •
At timeslot $t$, $p$’s next state $x^{\prime}$ is allowed to depend on (the
present state $x$ and) messages and permission sets received at $t$ by all
$p^{\prime}\in P_{A}$ (rather than just those received by $p$).
* •
If $p$ makes a request $(M,A)$ or $(t,M,A)$, the only requirement on $M$ is
that all $m\in M$ must be permitted for, or else have been received or
broadcast by, some $p^{\prime}\in P_{A}$.
Since protocols will be expected to behave well with respect to all timing
rules consistent with the setting (see Appendix 2 for the definition of a
timing rule), it will sometimes be useful to _think of_ the adversary as also
having control over the choice of timing rule.
Placing bounds on the power of the adversary in the permissionless setting
means limiting their resource balance. For $q\in[0,1]$, we say the adversary
is $q$-_bounded_ if their total resource balance is always at most a $q$
fraction of the total, i.e. for all $M,t$, $\sum_{p\in
P_{A}}\mathcal{R}(\mathtt{U}_{p},t,M)\leq q\cdot\sum_{p\in
P}\mathcal{R}(\mathtt{U}_{p},t,M)$.222222In the context of discussing PoS
protocols, an objection that may be raised to simply assuming the adversary is
$q$-bounded (for some $q<1$) is that there may be attacks, such as ‘stake
bleeding’ attacks (GKR, 18), which allow an adversary with a lower resource
balance to achieve a resource balance $>q$ (at least, relative to certain
message states). A simple approach to dealing with this issue is to maintain
the assumption that the adversary is $q$-bounded, but then to add the
existence of certain message states (e.g. those conferring too great a
proportion of block rewards to the adversary) to the set of other failure
conditions (such as the existence of incompatible confirmed blocks if one is
analysing a blockchain protocol).
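The $q$-bounded inequality can be checked mechanically. In the sketch below the resource pool $\mathcal{R}$ is modelled as a plain Python function of $(\mathtt{U},t,M)$; the balances and names are purely illustrative.

```python
# Sketch of the q-bounded condition: at every (t, M), the adversary's
# identifiers hold at most a q fraction of the total resource balance.

def q_bounded(R, all_ids, adversary_ids, t, M, q):
    total = sum(R(u, t, M) for u in all_ids)
    adversarial = sum(R(u, t, M) for u in adversary_ids)
    return adversarial <= q * total

# Example: fixed balances, with the adversary controlling one identifier.
balances = {"U1": 5.0, "U2": 3.0, "U3": 2.0}
R = lambda u, t, M: balances[u]
print(q_bounded(R, balances, {"U3"}, t=0, M=frozenset(), q=0.25))  # True
print(q_bounded(R, balances, {"U1"}, t=0, M=frozenset(), q=0.25))  # False
```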
For a given protocol, another way to completely specify a run (beyond that
described in Section 2.1) is via the following breakdown: (1) The set of
processors and their inputs; (2) The set of processors controlled by the
adversary, and their state transition diagrams; (3) The timing rule; (4) The
resource pool (which may or may not be undetermined); (5) The probabilistic
responses of the permitter.
When we say that a protocol satisfies a certain condition (such as solving the
Byzantine Generals Problem), we mean that this holds for all values of (1)-(5)
above that are consistent with the setting. We call a set of values for
(1)-(4) above a _protocol instance_. When we make a probabilistic statement to
the effect that a certain condition holds with at most/least a certain
probability, this means that the probabilistic bound holds for all protocol
instances that are consistent with the setting.232323Thus far we have assumed
that it is only the permitter oracle that may behave probabilistically. One
could also allow that state transitions may be probabilistic without any
substantial change to the presentation.
### 6.5. Appendix 5 – The proof of Theorem 3.1.
###### Proof.
We work in the appropriate setting for PoS protocols, i.e. the authenticated,
sized, timed and multi-permitter setting. Note that in Section 2.2 we made the
assumption that, for all $t$ and $M$,
$\sum_{\mathtt{U}}\mathcal{R}(\mathtt{U},t,M)>0$. Some sort of assumption
along these lines is necessary for permissionless protocols to be able to
solve any problem, given the ‘no balance no voice’ condition for the permitter
oracle response, as described in Section 2.2, that the permitter oracle’s
response to any request $(t^{\prime},M,\emptyset)$ must be
$M^{\ast}=\emptyset$ whenever $\mathcal{R}(\mathtt{U}_{p},t^{\prime},M)=0$.
Let $\mathcal{U}^{\ast}$ be the set of $\mathtt{U}$ for which
$\mathcal{R}(\mathtt{U},0,\emptyset)\neq 0$. Let $P$ be the set of processors,
and let $P^{\ast}$ be the processors $p$ with
$\mathtt{U}_{p}\in\mathcal{U}^{\ast}$. The basic idea behind the proof is that
we should have the processors in $P^{\ast}$ carry out a standard protocol for
the permissioned setting (DS, 83). Note that, by the conventions of Section
2.2, there are a finite number of processors in $P^{\ast}$ and, since $q<1$,
at least one member of $P^{\ast}$ must be non-faulty. We also have to make
sure, however, that all non-faulty processors in $P-P^{\ast}$ output
correctly. This can be achieved by filtering the messages received by
processors in $P-P^{\ast}$ with some careful timing. Let us now see the
details.
Since $\mathcal{R}$ is determined, and since $\mathcal{U}^{\ast}$ is finite,
all non-faulty processors can decide a common value $k$ that upper bounds the
number of identifiers in $\mathcal{U}^{\ast}$ controlled by the adversary.
Recall from Section 2.1 that we write $m_{\mathtt{U}}$ to denote the
ordered pair $(\mathtt{U},m)$, thought of as ‘$m$ signed by $\mathtt{U}$’. For
$z\in\\{0,1\\}$, and for distinct $\mathtt{U}_{1},\dots,\mathtt{U}_{i}$, we
let $z_{\mathtt{U}_{1},\dots,\mathtt{U}_{i}}$ be $z_{\mathtt{U}^{\ast}}$
signed by $\mathtt{U}_{1},\dots,\mathtt{U}_{i}$ in order, i.e., for the empty
sequence $\emptyset$, $z_{\emptyset}$ is $z_{\mathtt{U}^{\ast}}$ ($z$ signed
by the general), and for each $i^{\prime}\in[0,i-1]$,
$z_{\mathtt{U}_{1},\dots,\mathtt{U}_{i^{\prime}+1}}$ is
$z_{\mathtt{U}_{1},\dots,\mathtt{U}_{i^{\prime}}}$ signed by
$\mathtt{U}_{i^{\prime}+1}$. Let $M_{i}$ be the set of all messages of the
form $z_{\mathtt{U}_{1},\dots,\mathtt{U}_{i}}$ such that $z\in\\{0,1\\}$ and
$\mathtt{U}_{1},\dots,\mathtt{U}_{i}$ are distinct members of
$\mathcal{U}^{\ast}$.
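The signed chains $z_{\mathtt{U}_{1},\dots,\mathtt{U}_{i}}$ and the sets $M_{i}$ can be sketched as follows, with signing modelled abstractly as pair formation. The encoding is illustrative only.

```python
# Sketch: building z_{U_1,...,U_i} (z signed by the general, then by each
# U_j in order) and the set M_i of all such messages with distinct signers.

from itertools import permutations

def sign(u, m):
    return (u, m)

def chain(z, general, signers):
    """z_{U_1,...,U_i}: z signed by the general, then by each U in order."""
    m = sign(general, z)           # z_emptyset, i.e. z signed by the general
    for u in signers:
        m = sign(u, m)
    return m

def M_i(i, general, identifiers):
    """All messages z_{U_1,...,U_i} with distinct U_j drawn from identifiers."""
    return {chain(z, general, perm)
            for z in (0, 1)
            for perm in permutations(identifiers, i)}

ids = ["U1", "U2", "U3"]
print(len(M_i(2, "U*", ids)))  # 2 values * (3 * 2) signer orderings = 12
```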
Given $\Delta$ (as in Section 6.2), we divide the timeslots up into intervals
of length $2\Delta$. For $i\in\mathbb{N}$, we define
$t^{\ast}_{i}:=2+i(2\Delta)$. Each processor $p$ maintains a set $O_{p}$,
which can be thought of as the set of protocol inputs that $p$ has seen, and
which is initially empty. Roughly speaking, the instructions below can then be
summed up as follows. At timeslot $t=1$, processors in $P^{\ast}$ submit a
request, which results in permission to broadcast as they please, subject only
to the conditions imposed by the authenticated setting. Then, at every
$t_{i}^{\ast}$, processors in $P^{\ast}$ carry out another step in a standard
permissioned protocol. Processors in $P-P^{\ast}$ proceed similarly, except
that they don’t broadcast messages, and require messages to be received
$\Delta$ many timeslots earlier than processors in $P^{\ast}$ do, if they are
to pay attention to them. At timeslot $t_{k+1}^{\ast}$ (with $k$ defined as
above), all processors give an output which either corresponds to the single
input value they have seen, or else is the default output 0. The precise
instructions are as follows.
The instructions for $p\in P^{\ast}$.
Timeslot $1$: Submit the request $(0,\emptyset,\emptyset)$. The permitter
oracle responds with $M^{\ast}$ which is the set of all possible messages.
Timeslot $t_{i}^{\ast}$: Consider the set of messages $m\in M_{i}$ that $p$
has received at timeslots $t\leq t_{i}^{\ast}$.242424Note that, according to
the conventions of Section 2.1 and our description of BB, $p$’s protocol
inputs are messages received at $t=0$. For each such message
$m=z_{\mathtt{U}_{1},\dots,\mathtt{U}_{i}}$, if $z\notin O_{p}$, proceed as
follows: Enumerate $z$ into $O_{p}$ and broadcast $m_{\mathtt{U}_{p}}$.
The instructions for $p\in P-P^{\ast}$.
At each timeslot $t_{i}^{\ast}-\Delta$ with $i\geq 1$, consider the set of
messages $m\in M_{i}$ that $p$ has received at timeslots $t\leq
t_{i}^{\ast}-\Delta$. For each such message
$m=z_{\mathtt{U}_{1},\dots,\mathtt{U}_{i}}$, if $z\notin O_{p}$, then
enumerate $z$ into $O_{p}$. Note that $p$ requires messages to be received
earlier than processors in $P^{\ast}$ to act on them, and that $p$ ignores its
own input.
The output for $p\in P$. Let $k$ be as defined above, so that $k$ upper bounds
the number of identifiers in $\mathcal{U}^{\ast}$ controlled by the adversary.
At timeslot $t_{k+1}^{\ast}$, $p$ outputs $z$ if $O_{p}$ contains a single
value $z$, and otherwise $p$ outputs 0.
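The $O_{p}$ bookkeeping and the output rule above can be illustrated by a minimal simulation for an ideal network with no faults: each processor enumerates a newly seen value into its set and relays it with one more signature. Timing, permission requests and faulty behaviour are abstracted away; the structures below are illustrative, not the protocol's formal semantics.

```python
# Minimal fault-free sketch of the relay bookkeeping in the proof above.

def run_relay(general_input, ids, k):
    """ids: identifiers in U*; k: the bound on adversarial identifiers
    (here no faults actually occur). Returns each processor's output."""
    O = {u: set() for u in ids}            # the sets O_p
    pool = [(general_input, ())]           # (value, signature chain)
    for _phase in range(k + 1):
        next_pool = []
        for (z, signers) in pool:
            for u in ids:
                if u not in signers and z not in O[u]:
                    O[u].add(z)
                    next_pool.append((z, signers + (u,)))
        pool = next_pool
    # Output the unique value seen, or else the default output 0.
    return {u: (next(iter(O[u])) if len(O[u]) == 1 else 0) for u in ids}

print(run_relay(1, ["U1", "U2", "U3"], k=1))  # every processor outputs 1
```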
Verification. For $z\in\\{0,1\\}$, let $i_{z,p}$ be the least $i$ such that
$z\in O_{p}$ at the end of timeslot $t_{i}^{\ast}$, letting $i_{z,p}$ be
undefined if no such $i$ exists. To verify that the non-faulty processors
produce a correct set of outputs, it suffices to show for all non-faulty
$p,p^{\prime}$, and all $z\in\\{0,1\\}$, that if $i_{z,p}\leq k+1$, then
$i_{z,p^{\prime}}\leq k+1$ also. Towards this end, note first that:
1. $(\dagger_{0})$
If $p$ and $p^{\prime}$ are non-faulty, if $p\in P^{\ast}$, and if
$i_{z,p}\leq k$, then $i_{z,p^{\prime}}\leq i_{z,p}+1$.
This follows because the given conditions ensure $p$ will broadcast some $m\in
M_{i_{z,p}+1}$ of the form $z_{\mathtt{U}_{1},\dots,\mathtt{U}_{p}}$ at
timeslot $t_{i_{z,p}}^{\ast}$, which will be received by $p^{\prime}$ by
timeslot $t_{i_{z,p}+1}^{\ast}-\Delta$. Note also that:
1. $(\dagger_{1})$
If any $p$ receives $m\in M_{k+1}$ at $t\leq t_{k+1}^{\ast}$, then
$i_{z,p^{\prime}}\leq k+1$ for all non-faulty processors $p^{\prime}$.
The statement $(\dagger_{1})$ holds since $k$ upper bounds the number of
members of $\mathcal{U}^{\ast}$ controlled by the adversary. The stated
condition thus means that at least one non-faulty processor
$p^{\prime\prime}\in P^{\ast}$ has $i_{z,p^{\prime\prime}}\leq k$. Then
$(\dagger_{1})$ follows from $(\dagger_{0})$.
To show for all non-faulty $p,p^{\prime}$, and all $z\in\\{0,1\\}$, that if
$i_{z,p}\leq k+1$, then $i_{z,p^{\prime}}\leq k+1$ also, suppose first that
$p\in P^{\ast}$. If $i_{z,p}\leq k$ then the result follows from
$(\dagger_{0})$. If $i_{z,p}=k+1$ then the result follows from
$(\dagger_{1})$. Suppose next that $p\in P-P^{\ast}$. If $i_{z,p}\leq k$, then
all non-faulty processors $p^{\prime\prime}\in P^{\ast}$ must have
$i_{z,p^{\prime\prime}}\leq k$. In this case, the result then follows from
$(\dagger_{0})$. If $i_{z,p}=k+1$ then the result follows from
$(\dagger_{1})$. ∎
### 6.6. Appendix 6 – The proof of Theorem 3.3.
Towards a contradiction, suppose we are given $(\mathtt{S},\mathtt{O})$ which
is a deterministic and weakly decentralised permissionless protocol solving BB
for a $q$-bounded adversary. Consider an infinite set of processors $P$, and
suppose $p_{0},p_{1}\in P$. If $(\mathtt{S},\mathtt{O})$ solves BB, then it
must do so for all protocol and parameter inputs consistent with the setting.
So, fix a set of parameter inputs for all processors with $\Delta=2$, and fix
$\mathcal{R}$ satisfying the condition that
$\mathcal{R}(\mathtt{U}_{p_{i}},t,M)=0$ for all $i\in\\{0,1\\}$ and for all
$t,M$ (while $\mathcal{R}$ takes arbitrary values amongst those consistent
with the setting for other processors) – the possibility of two processors
with zero resource balance throughout the run is not really important for the
proof, but simplifies the presentation.
We consider runs in which the only faulty behaviour is to delay the broadcast
of messages, perhaps indefinitely. Given that all faults are of this kind, it
will be presentationally convenient to think of all processors as having the
state transition diagram specified by $\mathtt{S}$, but then to allow that the
adversary can intervene to delay the broadcast of certain messages for an
undetermined set of processors (causing certain processors to deviate from
their ‘instructions’ in that sense). By a _system input_ , we mean a choice of
protocol input for each processor in $P$. We let $\Pi$ be the set of all
possible runs given this fixed set of parameter inputs, given the fixed value
of $\mathcal{R}$, and given the described restrictions on the behaviour of the
adversary. To specify a run $\mathtt{R}\in\Pi$ it therefore suffices to
specify the system input, which broadcasts are delayed and for how long, and
when broadcast messages are received by each processor (subject to the
condition that $\Delta=2$).
Proof Outline. By a $k$_-run_ , we mean the first $k$ timeslots of a run
$\mathtt{R}\in\Pi$. The proof outline then breaks down into the following
parts:
1. (P1)
We show there exists $k$ such that $p_{0}$ and $p_{1}$ give an output within
the first $k$ timeslots of all $\mathtt{R}\in\Pi$.
2. (P2)
Let $k$ be as given in (P1). We produce $\zeta_{0},\dots,\zeta_{m}$, where
each $\zeta_{j}$ is a $k$-run, and such that: (a) All processors have protocol
input $\\{0_{\mathtt{U}^{\ast}}\\}$ in $\zeta_{0}$; (b) For each $j$ with
$0\leq j<m$, there exists $i\in\\{0,1\\}$, such that $\zeta_{j}$ and
$\zeta_{j+1}$ are indistinguishable from the point of view of $p_{i}$; (c) All
processors have protocol input $\\{1_{\mathtt{U}^{\ast}}\\}$ in $\zeta_{m}$.
From (P2)(a) above, it follows that in $\zeta_{0}$, processors $p_{0}$ and
$p_{1}$ must both output 0. Repeated applications of (P2)(b) then suffice to
show that $p_{0}$ and $p_{1}$ must both output 0 in all of the $k$-runs
$\zeta_{0},\dots,\zeta_{m}$ (since they must each give the same output as the
other). This contradicts (P2)(c), and completes the proof.
Establishing (P1). Towards establishing (P1) above, we first prove the
following technical lemma. For each $t$, let $M_{t}$ be the set of messages
$m$ for which there exists $\mathtt{R}\in\Pi$ in which $m$ is broadcast at a
timeslot $t^{\prime}\leq t$. Let $Q_{t}$ be the set of $p$ for which there is
some $\mathtt{R}\in\Pi$ in which a non-empty set of messages $m$ are: (a)
Permitted for $p$ by the end of timeslot $t$, and; (b) Satisfy $t_{m}\leq t$
if we are in the timed setting. Let $B_{t}$ be the set of $p$ for which there
is some $\mathtt{R}\in\Pi$ in which $p$ is (permitted and) instructed to
broadcast a message at $t$.
###### Lemma 6.1.
For each $t$, $M_{t}$, $Q_{t}$ and $B_{t}$ are finite.
###### Proof.
If $Q_{t}$ is finite, then clearly $B_{t}$ must be finite. The proof for
$M_{t}$ and $Q_{t}$ is by induction on $t$. At $t=1$, no processor has yet
been permitted to broadcast any messages.
Suppose $t>1$ and that the induction hypothesis holds for all $t^{\prime}<t$.
The state of each of the finitely many processors $p\in Q_{t-1}$ at the end of
timeslot $t-1$ is dictated by the protocol inputs for $p$, and by the messages
$p$ receives at each timeslot $t^{\prime}<t$.252525Recall that we consider
faulty processors to follow the same state transition diagram as non-faulty
processors, but to have certain broadcasts delayed in contravention of those
instructions. The state of a faulty processor is thus determined in the same
way as that of a non-faulty processor. It therefore follows from the
induction hypothesis that:
1. $(\diamond_{0})$
There are only finitely many states that the processors in $Q_{t-1}$ can be in
at any timeslot $t^{\prime}<t$.
Recall from Section 2.2, that if $p$ makes a request $(M,A)$, or
$(t^{\prime},M,A)$, then every $m\in M$ must either be in the message state of
$p$, or else be permitted for $p$.262626In Section 6.4, we loosened these
requirements for the adversary. They still hold here, though, since we assume
that the only faulty behaviour of processors controlled by the adversary is to
delay the broadcast of certain messages. By the induction hypothesis for
$M_{t-1}$ and from $(\diamond_{0})$, it therefore follows that there are only
finitely many different possible $M$ for which requests $(M,A)$ or
$(t^{\prime},M,A)$ can be made by processors at timeslots $<t$. The fact that
$Q_{t}$ is finite then follows, since for each $t^{\prime}$ and $M$, there are
finitely many $p$ for which $\mathcal{R}(\mathtt{U}_{p},t^{\prime},M)\neq 0$.
For each non-faulty $p\in Q_{t}$, the finite set of messages $p$ broadcasts at
timeslot $t$ is decided by the protocol inputs for $p$ and by the messages $p$
receives at each timeslot $t^{\prime}<t$. Since there are only finitely many
possibilities for these values, it follows that there are only a finite set of
messages that can be broadcast by non-faulty processors at timeslot $t$. The
messages that can be broadcast by faulty processors at $t$ are just those
that can be broadcast by non-faulty processors at timeslots $t^{\prime}\leq
t$. So $M_{t}$ is finite, as required for the induction step. ∎
_Continuing with the proof of Theorem 3.3: The application of König’s Lemma._
Our next aim is to use Lemma 6.1 to establish (P1) via an application of
König’s Lemma. To do this, we want to show that the runs in $\Pi$ are _in some
sense_ finitely branching, i.e. that there are essentially finitely many
different things that can happen at each timeslot. The difficulty is that
there are infinitely many possible system inputs and, when $m$ is broadcast at
$t$, there are infinitely many different sets of processors that could receive
$m$ at $t+1$ (while the rest receive $m$ at $t+2$). To deal with this, we
first define an appropriate partition of the processors. Then we will further
restrict $\Pi$, by insisting that some elements of this partition act as a
collective unit with respect to the receipt of messages.
For each $t\in\mathbb{N}$, let $B_{t}$ be defined as above. Let
$B_{\infty}=P-\cup_{t}B_{t}$, and let $B_{\infty}^{0},B_{\infty}^{1}$ be a
partition of $B_{\infty}$, such that $p_{i}\in B_{\infty}^{i}$. From now on,
we further restrict $\Pi$, by requiring all processors in each
$B_{\infty}^{i}$ to have the same protocol input, and to receive the same
message set at each timeslot (we _do not_ make the same requirement for each
$B_{t}$, $t\in\mathbb{N}$). As things stand,
$B^{0}_{\infty},B^{1}_{\infty},B_{1},B_{2},\dots$ need not be a partition of
$P$, though, because processors might belong to multiple $B_{i}$. We can
further restrict $\mathcal{R}$ to rectify this. If
$\mathcal{R}(\mathtt{U},t,M)>0$, then we write $\mathtt{U}\in S(t,M)$, and say
that $\mathtt{U}$ is _in the support of_ $(t,M)$. Roughly, we restrict
attention to $\mathcal{R}$ which has disjoint supports. More precisely, we
assume:
1. $(\diamond_{1})$
For all $t\neq t^{\prime}$ and all $M,M^{\prime}$, $S(t,M)\cap
S(t^{\prime},M^{\prime})=\emptyset$.
We will also suppose, for the remainder of the proof, that:
1. $(\diamond_{2})$
No single identifier has more than a $q$ fraction of the total resource
balance corresponding to any given $(t,M)$, i.e. for any given
$\mathtt{U},t,M$, we have $\mathcal{R}(\mathtt{U},t,M)\leq
q\cdot\sum_{\mathtt{U}^{\prime}\in\mathcal{U}}\mathcal{R}(\mathtt{U}^{\prime},t,M)$.
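The two restrictions $(\diamond_{1})$ and $(\diamond_{2})$ on $\mathcal{R}$ can be sketched as checks on a finite table of balances. Below, $\mathcal{R}$ is encoded as a dict mapping $(\mathtt{U},t,M)$ to a balance; the encoding is illustrative only.

```python
# Sketch of (◇1) disjoint supports and (◇2) no dominant identifier.

def disjoint_supports(R):
    """(◇1): no identifier is in the support of (t, M) and (t', M')
    for t != t'."""
    seen = {}
    for (u, t, M), bal in R.items():
        if bal > 0:
            if u in seen and seen[u] != t:
                return False
            seen[u] = t
    return True

def no_dominant_identifier(R, q):
    """(◇2): R(U, t, M) <= q * total balance at (t, M), for every U."""
    totals = {}
    for (u, t, M), bal in R.items():
        totals[(t, M)] = totals.get((t, M), 0) + bal
    return all(bal <= q * totals[(t, M)] for (u, t, M), bal in R.items())

R = {("U1", 1, "M0"): 1.0, ("U2", 1, "M0"): 1.0, ("U3", 2, "M1"): 1.0}
print(disjoint_supports(R))            # True
print(no_dominant_identifier(R, 0.5))  # False: U3 holds all of (2, M1)
```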
With the restrictions on $\mathcal{R}$ described above,
$B^{0}_{\infty},B^{1}_{\infty},B_{1},B_{2},\dots$ is a partition of $P$ (only
$(\diamond_{1})$ is required for this). By a _$t$ -specification_ we mean, for
some $\mathtt{R}\in\Pi$:
* •
A specification of the protocol inputs for processors in
$B^{0}_{\infty},B^{1}_{\infty},B_{1},B_{2},\dots B_{t}$.
* •
A specification of which messages are broadcast by which processors at
timeslots $\leq t$, and when these messages are received by the processors in
$B^{0}_{\infty},B^{1}_{\infty},B_{1},B_{2},\dots B_{t}$.
If $\eta$ is a $t$-specification, and $\eta^{\prime}$ is a
$t^{\prime}$-specification for $t^{\prime}\geq t$, then we say
$\eta^{\prime}\supseteq\eta$ if the protocol inputs and message sets specified
by $\eta$ and $\eta^{\prime}$ are consistent, i.e. there is no processor $p$
for which they specify a different protocol input, or for which they have
messages being received or broadcast at different timeslots. We also extend
this notation to runs in the obvious way, so that we may write
$\eta\subset\mathtt{R}$, for example. Now, by Lemma 6.1, there are finitely
many $t$-specifications for each $t\in\mathbb{N}$. Let us say that a
$t$-specification $\eta$ is _undecided_ if either of $p_{0}$ or $p_{1}$ do not
give an output during the first $t$ timeslots during some run
$\mathtt{R}\supset\eta$. Towards a contradiction, suppose that there is no
upper bound on the $t$ for which there exists a $t$-specification which is
undecided. Then it follows directly from König’s Lemma that there exists an
infinite sequence $\eta_{1},\eta_{2},\dots$, such that each $\eta_{i}$ is
undecided, and $\eta_{i}\subset\eta_{i+1}$ for all $i\geq 1$. Let
$\mathtt{R}\in\Pi$ be the unique run with $\eta_{i}\subset\mathtt{R}$ for all
$i$. Then $\mathtt{R}$ is a run in which at least one of $p_{0}$ or $p_{1}$
does not give an output. This gives the required contradiction, and suffices
to establish (P1).
Establishing (P2). To complete the proof, it suffices to establish (P2). For
the remainder of the proof, we let $k$ be as given by (P1). We also further
restrict $\Pi$ by assuming that, for $\mathtt{R}\in\Pi$, each protocol input
$\mathtt{in}_{p}$ is either $\\{0_{\mathtt{U}^{\ast}}\\}$ or
$\\{1_{\mathtt{U}^{\ast}}\\}$, and that when a processor delays until
$t^{\prime}$ the broadcast of a certain message that it is instructed to
broadcast at timeslot $t$, it does the same for all messages that it is
instructed to broadcast at $t$.
It will be convenient to define a new way of specifying $k$-runs. To do so, we
will assume that, unless explicitly stated otherwise: (i) Each protocol input
$\mathtt{in}_{p}=\\{0_{\mathtt{U}^{\ast}}\\}$; (ii) Messages are broadcast as
per the instructions given by $\mathtt{S}$, and; (iii) Broadcast messages are
received at the next timeslot. So, to specify a $k$-run, all we need to do is
to specify the deviations from these ‘norms’. More precisely, for
$\mathtt{R}\in\Pi$, we let $\zeta(\mathtt{R})$ be the set of all tuples $q$
such that, for some $t<k$, either:
* •
$q=(p)$ and $\mathtt{in}_{p}=\\{1_{\mathtt{U}^{\ast}}\\}$ in $\mathtt{R}$, or;
* •
$q=(p,p^{\prime})$ and, in $\mathtt{R}$, the non-empty set of messages that
$p$ broadcasts at $t$ are all received by $p^{\prime}$ at $t+2$, or;
* •
$q=(p,t^{\prime})$ and, in $\mathtt{R}$, the non-empty set of messages that
$p$ is instructed to broadcast at $t$ are all delayed for broadcast until
$t^{\prime}>t$.
If $\zeta=\zeta(\mathtt{R})$ for some $\mathtt{R}\in\Pi$, we also identify
$\zeta$ with the $k$-run that it specifies, and refer to $\zeta$ as a $k$-run.
We say $p$ is _faulty in_ $\zeta$ if there exists some tuple $(p,t^{\prime})$
in $\zeta$, i.e. if $p$ delays the broadcast of some messages. For
$i\in\\{0,1\\}$, we say $\zeta$ and $\zeta^{\prime}$ are _indistinguishable
for_ $p_{i}$, if $p_{i}$ has the same protocol inputs in $\zeta$ and
$\zeta^{\prime}$ and receives the same message sets at each $t\leq k$. We say
that any sequence $\zeta_{0},\dots,\zeta_{m}$ is _compliant_ if the following
holds for each $j$ with $0\leq j\leq m$:
1. (i)
If $j<m$, there exists $i\in\\{0,1\\}$ such that $\zeta_{j}$ and $\zeta_{j+1}$
are indistinguishable for $p_{i}$.
2. (ii)
For each $t$ with $1\leq t\leq k$, there exists at most one processor in
$B_{t}$ that is faulty in $\zeta_{j}$.
It follows from $(\diamond_{1})$ and $(\diamond_{2})$ that satisfaction of
(ii) suffices to ensure each $\zeta_{j}$ is a $k$-run, i.e. that it actually
specifies the first $k$ timeslots of a run in $\Pi$ (for which the adversary
is thus $q$-bounded).
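The deviation-set encoding of $k$-runs, and compliance condition (ii), can be sketched directly. Below, processors are strings and timeslots are integers, so that the tuples $(p,p^{\prime})$ and $(p,t^{\prime})$ are distinguished by the type of their second component; this encoding is an illustrative assumption.

```python
# Sketch: a k-run as a set of deviation tuples (p,), (p, p'), (p, t'),
# and the check that each B_t contains at most one faulty processor.

def faulty(zeta):
    """Processors delaying a broadcast, i.e. those with a tuple (p, t')."""
    return {q[0] for q in zeta if len(q) == 2 and isinstance(q[1], int)}

def satisfies_ii(zeta, B):
    """B maps each timeslot t to the set B_t."""
    f = faulty(zeta)
    return all(len(f & B_t) <= 1 for B_t in B.values())

zeta = {("p",), ("p", 4), ("p", "p0")}   # p has input 1, delays a broadcast
                                         # until t=4, and p0 receives p's
                                         # messages one timeslot late
B = {2: {"p", "r"}, 3: {"s"}}
print(faulty(zeta))           # {'p'}
print(satisfies_ii(zeta, B))  # True
```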
The following lemma completes the proof.
###### Lemma 6.2.
There exists $\zeta_{0},\dots,\zeta_{m}$ that is compliant, and such that: (a)
$\zeta_{0}=\emptyset$; (b) For all $p\in P$, $(p)\in\zeta_{m}$.
Before giving the formal proof of Lemma 6.2, we first outline the basic idea.
The proof is similar to that described in (DS, 83) for permissioned protocols,
but is complicated by the fact that we consider broadcast messages, rather
than private channels.
To explain the basic idea behind the proof, we consider the case $k=4$, noting
that $t=2$ is the first timeslot at which any processor can possibly broadcast
a non-empty set of (permitted) messages. We define $\zeta_{0}:=\emptyset$.
Note that $\zeta_{0}$ is a $k$-run in which no processors are faulty, and in
which $p_{0}$ and $p_{1}$ both output 0. The rough idea is that we now want to
define a compliant sequence of runs, starting with $\zeta_{0}$, and in which
we gradually get to change the inputs of all processors.
First, suppose that $p\in B_{3}$, and that we want to change $p$’s protocol
input to $\\{1_{\mathtt{U}^{\ast}}\\}$. If we just do this directly, by
defining $\zeta_{1}:=\zeta_{0}\cup\\{(p)\\}$, then the sequence
$\zeta_{0},\zeta_{1}$ will not necessarily be compliant, because messages
broadcast by $p$ at $t=3$ will be received by both $p_{0}$ and $p_{1}$ at
$t=4$. To avoid this issue, we must first ‘remove’ (the effect of) $p$’s
broadcast at $t=3$ from the $k$-run, so as to produce a compliant sequence. To
do this, we can consider the sequence $\zeta_{1},\zeta_{2},\zeta_{3}$, where
$\zeta_{1}:=\zeta_{0}\cup\\{(p,p_{0})\\}$,
$\zeta_{2}:=\zeta_{1}\cup\\{(p,p_{1})\\}$, and
$\zeta_{3}:=\zeta_{0}\cup\\{(p,4)\\}$. So, first we delay (by one) the
timeslot at which $p_{0}$ receives $p$’s messages. Then, we delay (by one) the
timeslot at which $p_{1}$ receives $p$’s messages. Then, finally, we remove
those delays in the receipt of messages, but instead have $p$ delay the
broadcast of all messages by a single timeslot. It is then clear that
$\zeta_{0},\zeta_{1},\zeta_{2},\zeta_{3}$ is a compliant sequence. To finish
changing $p$’s input and remove any faulty behaviour, we can define
$\zeta_{4}:=\zeta_{3}\cup\\{(p)\\}$, and then we can define
$\zeta_{5},\zeta_{6},\zeta_{7}$ to be a compliant sequence which adds $p$’s
broadcast back into the $k$-run, by carrying out the previous ‘removal’ in
reverse. Then $\zeta_{0},\dots\zeta_{7}$ is a compliant sequence changing
$p$’s protocol input, and which ends with a $k$-run in which there is no
faulty behaviour by any processor. To sum up, we carry out the following
(which has more steps than really required, to fit more closely with the
general case):
1. (1)
Delay by one timeslot the receipt of $p$’s messages by $p_{0}$.
2. (2)
Delay by one timeslot the receipt of $p$’s messages by $p_{1}$.
3. (3)
Remove the delays introduced in steps (1) and (2), and instead have $p$ delay
the broadcast of all messages by one timeslot. So far, we have ‘removed’ the
broadcasts of $p\in B_{3}$, by delaying them until $t_{4}$.
4. (4)
Change $p$’s input, before reversing the previous sequence of changes so as to
remove delays in $p$’s broadcasts, making $p$ non-faulty once again.
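The eight $k$-runs $\zeta_{0},\dots,\zeta_{7}$ described above for $p\in B_{3}$ (with $k=4$) can be written out in the deviation-set encoding. This is purely illustrative bookkeeping; the compliance of adjacent pairs is argued in the text, not checked here.

```python
# The compliant sequence for p in B_3, as sets of deviation tuples
# (p,), (p, p'), (p, t').

p = "p"
z0 = frozenset()                               # no deviations at all
z1 = frozenset({(p, "p0")})                    # (1) delay receipt by p0
z2 = frozenset({(p, "p0"), (p, "p1")})         # (2) delay receipt by p1
z3 = frozenset({(p, 4)})                       # (3) delay p's broadcast to t=4
z4 = frozenset({(p,), (p, 4)})                 # (4) flip p's input...
z5 = frozenset({(p,), (p, "p0"), (p, "p1")})   # ...then reverse step (3)
z6 = frozenset({(p,), (p, "p0")})              # ...reverse step (2)
z7 = frozenset({(p,)})                         # ...reverse step (1): p is
                                               # non-faulty, with input flipped
sequence = [z0, z1, z2, z3, z4, z5, z6, z7]
print(len(sequence), sequence[-1] == frozenset({(p,)}))  # 8 True
```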
Next, suppose that $p\in B_{2}$, and that we want to change $p$’s protocol
input to $\\{1_{\mathtt{U}^{\ast}}\\}$. The added complication now is that, as
well as delaying the receipt of $p$’s messages by $p_{0}$ and $p_{1}$, we must
also delay the receipt of $p$’s messages by processors in $B_{3}$. This is
because any difference observed by $p^{\prime}\in B_{3}$ by timeslot $t_{3}$
can be relayed to $p_{0}$ and $p_{1}$ by $t_{4}$. In order that the delay in
the receipt of $p$’s messages by $p^{\prime}\in B_{3}$ is not relayed
simultaneously to $p_{0}$ and $p_{1}$, we must also remove the broadcasts of
$p^{\prime}\in B_{3}$. We can therefore proceed roughly as follows (the
following approximate description is made precise in Supplementary Materials
4). Let $p_{0}^{\ast},\dots,p^{\ast}_{\ell}$ be an enumeration of the
processors in $B_{3}$.
1. (1)
‘Remove’ (delay) the broadcasts of $p_{0}^{\ast}$, just as we did for $p\in
B_{3}$ above.
2. (2)
Delay by one timeslot the receipt of $p$’s messages by $p_{0}^{\ast}$.
3. (3)
Reverse the previous removal (delay) of $p_{0}^{\ast}$’s broadcasts, so that
$p_{0}^{\ast}$ is non-faulty.
4. (4)
Repeat (1)–(3) above, for each of $p_{1}^{\ast},\dots,p_{\ell}^{\ast}$ in
turn.
5. (5)
Delay by one timeslot the receipt of $p$’s messages by $p_{0}$.
6. (6)
Delay by one timeslot the receipt of $p$’s messages by $p_{1}$.
7. (7)
Remove all delays introduced in (1)–(6), and instead have $p$ delay the
broadcast of all messages by one timeslot. So far, we have formed a compliant
sequence ending with a $k$-run that delays the broadcast of $p$’s messages by
one timeslot, until $t_{3}$.
8. (8)
Repeating the same process allows us to delay the broadcast of $p$’s messages
by another timeslot, until $t_{4}$.
9. (9)
Change $p$’s input, before reversing the previous sequence of changes so as to
remove delays in $p$’s broadcasts, making $p$ non-faulty again.
In the above, we have dealt with $p\in B_{3}$ and then $p\in B_{2}$, for the
case that $k=4$. These ideas are easily extended to the general case, so as to
form a compliant sequence which changes the protocol inputs for all
processors. We now give the formal details.
The formal proof of Lemma 6.2. The variable $\kappa$ is used to range over
finite sequences of $k$-runs. We let $\kappa_{1}\ast\kappa_{2}$ be the
concatenation of the two sequences, and also extend this notation to singleton
sequences in the obvious way, so that we may write $\kappa_{1}\ast\zeta$, for
example. If $\kappa=\zeta_{0},\dots,\zeta_{\ell}$, then we define
$\zeta(\kappa):=\zeta_{\ell}$. For $t\in[1,k]$, and any $k$-run $\zeta$, we
let $\zeta_{\geq t}$ be the set of all $q\in\zeta$ such that either
$q=(p,p^{\prime})$ or $q=(p,t^{\prime})$, and such that $p\in\cup_{j\geq
t}B_{j}$. We also define $\zeta_{<t}$, by modifying the previous definition in
the obvious way.
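Under the same hypothetical set-of-tuples encoding of runs used informally above (pairs $(p,p^{\prime})$ and $(p,t^{\prime})$, plus singletons $(p,)$, which the definition deliberately excludes), $\zeta_{\geq t}$ and $\zeta_{<t}$ can be sketched as:

```python
def zeta_geq(zeta, t, B, k):
    """Deviations (p, p') or (p, t') in zeta whose processor p lies in B_j for some j >= t."""
    high = {p for j in range(t, k) for p in B.get(j, set())}
    return {q for q in zeta if len(q) == 2 and q[0] in high}

def zeta_lt(zeta, t, B, k):
    """The complementary restriction: deviating processor lies in B_j for some j < t."""
    low = {p for j in range(1, t) for p in B.get(j, set())}
    return {q for q in zeta if len(q) == 2 and q[0] in low}
```

For example, with `B = {1: {"a"}, 2: {"b"}, 3: {"c"}}` and `run = {("c", 4), ("b", "p0"), ("a",), ("a", 2)}`, the call `zeta_geq(run, 3, B, 4)` keeps only `("c", 4)`, while the singleton `("a",)` belongs to neither restriction.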
Ultimately, the plan is to start with the sequence $\kappa$ that has just a
single element $\zeta_{0}:=\emptyset$. Then we’ll repeatedly redefine
$\kappa$, by extending it, until it is equal to the sequence required to
establish the lemma. To help in this process, we define the three functions
$\mathtt{Remove}(p,\kappa)$, $\mathtt{Add}(p,\kappa)$ and
$\mathtt{Change}(p,\kappa)$ by backwards induction on $t$ such that $p\in
B_{t}$. The rough idea is that $\mathtt{Remove}(p,\kappa)$ will remove the
broadcasts of $p$ from the $k$-run (or, rather, postpone them until $t=k$).
Then $\mathtt{Add}(p,\kappa)$ will reverse the process carried out by
$\mathtt{Remove}(p,\kappa)$. $\mathtt{Change}(p,\kappa)$ will produce a
compliant sequence that changes the protocol input for $p$. First of all,
though, we define $\mathtt{Remove}(p,\kappa)$ and $\mathtt{Add}(p,\kappa)$ for
$p\notin\cup_{t<k}B_{t}$.
If $p\notin\cup_{t<k}B_{t}$ then:
1. (1)
$\mathtt{Remove}(p,\kappa):=\kappa$.
2. (2)
$\mathtt{Add}(p,\kappa):=\kappa$.
Then $\mathtt{Change}(p,\kappa)$ is defined for any processor $p$ by the
following process:
$\mathtt{Change}(p,\kappa)$.
1. (1)
$\kappa\leftarrow\mathtt{Remove}(p,\kappa)$.
2. (2)
$\kappa\leftarrow\kappa\ast\zeta$, where $\zeta:=\zeta(\kappa)\cup\\{(p)\\}$.
3. (3)
$\kappa\leftarrow\mathtt{Add}(p,\kappa)$.
4. (4)
Return $\kappa$.
Now suppose that $p\in B_{t}$ for $t<k$. Suppose we have already defined
$\mathtt{Remove}(p^{\prime},\kappa)$, and $\mathtt{Add}(p^{\prime},\kappa)$
for $p^{\prime}\in B_{t^{\prime}}$ when $t<t^{\prime}<k$, and suppose
inductively that $(\diamond_{3})_{p^{\prime}},(\diamond_{4})_{p^{\prime}}$ and
$(\diamond_{5})_{p^{\prime}}$ below all hold whenever $p^{\prime}\in
B_{t^{\prime}}$ for $t<t^{\prime}<k$ and $\kappa$ is compliant:
1. $(\diamond_{3})_{p}$
If $\zeta(\kappa)_{\geq t^{\prime}}=\emptyset$, then
$\kappa^{\prime}:=\mathtt{Remove}(p,\kappa)$ is compliant, with
$\zeta(\kappa^{\prime})_{\geq t^{\prime}}=\\{(p,k)\\}$ and
$\zeta(\kappa^{\prime})_{<t^{\prime}}=\zeta(\kappa)_{<t^{\prime}}$.
2. $(\diamond_{4})_{p}$
If $\zeta(\kappa)_{\geq t^{\prime}}=\\{(p,k)\\}$, then
$\kappa^{\prime}:=\mathtt{Add}(p,\kappa)$ is compliant, with
$\zeta(\kappa^{\prime})_{\geq t^{\prime}}=\emptyset$ and
$\zeta(\kappa^{\prime})_{<t^{\prime}}=\zeta(\kappa)_{<t^{\prime}}$.
3. $(\diamond_{5})_{p}$
If $\zeta(\kappa)_{\geq 1}=\emptyset$, then
$\kappa^{\prime}:=\mathtt{Change}(p,\kappa)$ is compliant, with
$\zeta(\kappa^{\prime})=\zeta(\kappa)\cup\\{(p)\\}$.
Let $p_{0}^{\ast},\dots,p_{\ell}^{\ast}$ be an enumeration of the processors
$p^{\prime}\in
P_{>t}:=(\cup_{t^{\prime}\in(t,k)}B_{t^{\prime}})\cup\\{p_{0},p_{1}\\}$, in
any order.
Then $\mathtt{Remove}(p,\kappa)$ is defined via the following process:
$\mathtt{Remove}(p,\kappa)$.
1. (1)
For $j=t+1$ to $k$ do:
2. (2)
For $i=0$ to $\ell$ do:
3. (3)
$\kappa\leftarrow\mathtt{Remove}(p_{i}^{\ast},\kappa)$.
4. (4)
$\kappa\leftarrow\kappa\ast\zeta$, where
$\zeta:=\zeta(\kappa)\cup\\{(p,p_{i}^{\ast})\\}$.
5. (5)
$\kappa\leftarrow\mathtt{Add}(p_{i}^{\ast},\kappa)$.
6. (6)
$\kappa\leftarrow\kappa\ast\zeta$, where
$\zeta:=(\zeta(\kappa)-\\{(p,p^{\prime}):p^{\prime}\in
P_{>t}\\})-\\{(p,j-1)\\}\cup\\{(p,j)\\}$.
7. (7)
Return $\kappa$.
Then $\mathtt{Add}(p,\kappa)$ is defined via the following process:
$\mathtt{Add}(p,\kappa)$.
1. (1)
For $j=k-1$ down to $t$ do:
2. (2)
If $j>t$ then $\kappa\leftarrow\kappa\ast\zeta$, where
$\zeta:=\zeta(\kappa)-\\{(p,j+1)\\}\cup\\{(p,j)\\}\cup\\{(p,p^{\prime}):p^{\prime}\in
P_{>t}\\}$.
3. (3)
If $j=t$ then $\kappa\leftarrow\kappa\ast\zeta$, where
$\zeta:=\zeta(\kappa)-\\{(p,j+1)\\}\cup\\{(p,p^{\prime}):p^{\prime}\in
P_{>t}\\}$.
4. (4)
For $i=0$ to $\ell$ do:
5. (5)
$\kappa\leftarrow\mathtt{Remove}(p_{i}^{\ast},\kappa)$.
6. (6)
$\kappa\leftarrow\kappa\ast\zeta$, where
$\zeta:=\zeta(\kappa)-\\{(p,p_{i}^{\ast})\\}$.
7. (7)
$\kappa\leftarrow\mathtt{Add}(p_{i}^{\ast},\kappa)$.
8. (8)
Return $\kappa$.
It then follows easily from the induction hypothesis, and by induction on the
stages of the definition, that
$(\diamond_{3})_{p^{\prime}},(\diamond_{4})_{p^{\prime}}$ and
$(\diamond_{5})_{p^{\prime}}$ all hold whenever $p^{\prime}\in B_{t^{\prime}}$
for $t\leq t^{\prime}<k$.
Finally, we can define the sequence $\kappa$ as required to establish the
statement of the lemma, as follows. Initially we let $\zeta_{0}=\emptyset$,
and we set $\kappa$ to be the single element sequence $\zeta_{0}$. Then we
carry out the following process, where $p_{0}^{\ast},\dots,p_{\ell}^{\ast}$ is
an enumeration of the processors in $P_{>1}$, and where $P$ is the set of all
processors.
Defining $\kappa=\zeta_{0},\dots,\zeta_{m}$ as required by the lemma.
1. (1)
For $i=0$ to $\ell$ do:
2. (2)
$\kappa\leftarrow\mathtt{Change}(p_{i}^{\ast},\kappa)$.
3. (3)
$\kappa\leftarrow\kappa\ast\zeta$, where $\zeta:=\zeta(\kappa)\cup\\{(p):p\in
P\\}$.
4. (4)
Return $\kappa$.
It follows from repeated applications of $(\diamond_{5})_{p}$ that the
sequence $\kappa$ produced is compliant. For all processors $p$, we also have
that $(p)\in\zeta(\kappa)$, as required.
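The three procedures can be run end to end in code, again under our hypothetical encoding of runs as sets of deviations ($(p,q)$ a receipt delay, $(p,j)$ a broadcast delay to timeslot $j$, $(p,)$ a changed input). Compliance checking is omitted, and the tiny universe of processors below is illustrative only:

```python
# Sketch of Remove / Add / Change, mirroring the processes defined above.
k = 4
B = {1: {"a"}, 2: {"b"}, 3: {"c"}}             # B_t for 1 <= t < k
p0, p1 = "p0", "p1"
P = {"a", "b", "c", p0, p1}                    # all processors

def level(p):
    return next((t for t, s in B.items() if p in s), k)

def P_gt(t):                                   # P_{>t} from the proof
    return {p for j in range(t + 1, k) for p in B[j]} | {p0, p1}

def remove(p, kappa):                          # postpone p's broadcasts to t = k
    t = level(p)
    if t >= k:
        return kappa                           # base case: p not in any B_t, t < k
    for j in range(t + 1, k + 1):
        for q in sorted(P_gt(t)):              # steps (1)-(5) of Remove
            kappa = remove(q, kappa)
            kappa = kappa + [kappa[-1] | {(p, q)}]
            kappa = add(q, kappa)
        z = {d for d in kappa[-1]              # step (6): drop the receipt delays,
             if not (len(d) == 2 and d[0] == p and d[1] in P_gt(t))}
        kappa = kappa + [(z - {(p, j - 1)}) | {(p, j)}]   # push broadcast delay to j
    return kappa

def add(p, kappa):                             # reverse of remove
    t = level(p)
    if t >= k:
        return kappa
    for j in range(k - 1, t - 1, -1):          # steps (2)-(3) of Add
        z = (kappa[-1] - {(p, j + 1)}) | {(p, q) for q in P_gt(t)}
        if j > t:
            z |= {(p, j)}
        kappa = kappa + [z]
        for q in sorted(P_gt(t)):              # steps (4)-(7) of Add
            kappa = remove(q, kappa)
            kappa = kappa + [kappa[-1] - {(p, q)}]
            kappa = add(q, kappa)
    return kappa

def change(p, kappa):                          # change p's protocol input
    kappa = remove(p, kappa)
    kappa = kappa + [kappa[-1] | {(p,)}]
    return add(p, kappa)

kappa = [set()]
for q in sorted(P_gt(1)):                      # change inputs for all of P_{>1}
    kappa = change(q, kappa)
kappa = kappa + [kappa[-1] | {(p,) for p in P}]  # finally flip every input in P
```

As in the lemma, the final run contains $(p)$ for every processor and no residual broadcast or receipt delays, because each `add` exactly undoes the corresponding `remove`.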
### 6.7. Appendix 7 – The proof of Theorem 3.4
Towards a contradiction, suppose that such a permissioned protocol exists. Let
the set of processors be $P=\\{p_{0},p_{1},\dots,p_{n}\\}$, suppose that
$n\geq 2$ and that the adversary controls $p_{0}$. Fix a set of parameter
inputs and a timing rule consistent with those inputs, such that the security
parameter $\varepsilon$ is small (see Appendix 2 for the definition of a
‘timing rule’). We say that two runs are _indistinguishable for $p_{i}$_ if
the distribution on $p_{i}$’s state and the messages it receives at each
timeslot is identical in both runs. By a _system input_ $s$, we mean a choice
of protocol input for each processor in $P$. We restrict attention to system
inputs in which each protocol input $\mathtt{in}_{p}$ is either
$\\{0_{\mathtt{U}^{\ast}}\\}$ or $\\{1_{\mathtt{U}^{\ast}}\\}$ (but never
$\\{0_{\mathtt{U}^{\ast}},1_{\mathtt{U}^{\ast}}\\}$). (Note that it still
makes sense to consider inputs of this form in the unauthenticated setting,
but now any participant will be able to send messages that look like they are
signed by the general.) For any system input $s$, we let $\bar{s}$ be the
system input in which each protocol input is reversed, i.e. if
$\mathtt{in}_{p}=\\{z_{\mathtt{U}^{\ast}}\\}$ in $s$, then
$\mathtt{in}_{p}=\\{(1-z)_{\mathtt{U}^{\ast}}\\}$ in $\bar{s}$. For each
system input $s$ and each $i\in[1,n]$, we consider a strategy (i.e. state
transition diagram) for the adversary $\mathtt{adv}(i,s)$, in which the
adversary ignores their own protocol input, and instead simulates all
processors in $\\{p_{1},\dots,p_{n}\\}$ except $p_{i}$, with the protocol
input specified by $\bar{s}$, i.e. the adversary follows the state transition
diagram for each $p_{j}\in\\{p_{1},\dots,p_{n}\\}$ other than $p_{i}$, and
broadcasts all of the messages it is instructed to broadcast.
Fix arbitrary $i\in[1,\dots,n]$ and a protocol input $\mathtt{in}_{p_{i}}$.
For any two system inputs $s$ and $s^{\prime}$ compatible with
$\mathtt{in}_{p_{i}}$ (i.e. which give $p_{i}$ protocol input
$\mathtt{in}_{p_{i}}$), the two runs produced when the adversary follows the
strategies $\mathtt{adv}(i,s)$ and $\mathtt{adv}(i,s^{\prime})$ respectively
are then indistinguishable for $p_{i}$. When $s$ specifies a protocol input of
$\\{z_{\mathtt{U}^{\ast}}\\}$ for all processors, $p_{i}$ must output $z$
with probability $>1-\varepsilon$. We conclude that, whatever the system
input, if $\mathtt{in}_{p_{i}}=\\{0_{\mathtt{U}^{\ast}}\\}$, then $p_{i}$ must
output 0 with probability $>1-\varepsilon$, and if
$\mathtt{in}_{p_{i}}=\\{1_{\mathtt{U}^{\ast}}\\}$, then $p_{i}$ must output 1
with probability $>1-\varepsilon$. This is true for all $i\in[1,\dots,n]$,
however, meaning that the protocol fails when $\varepsilon$ is small and when
$p_{1}$ and $p_{2}$ receive inputs $\\{0_{\mathtt{U}^{\ast}}\\}$ and
$\\{1_{\mathtt{U}^{\ast}}\\}$ respectively.
### 6.8. Appendix 8 – The proof of Theorem 3.5
We follow the proof of Theorem 3.4 very closely. Towards a contradiction,
suppose that such a permissionless protocol exists. We give a proof which
considers a set of three processors, but which is easily adapted to deal with
any number of processors $n\geq 3$. Let the set of processors be
$P=\\{p_{0},p_{1},p_{2}\\}$, and suppose that the adversary controls $p_{0}$.
Fix a set of parameter inputs and a timing rule consistent with those inputs
(see Appendix 2 for the definition of a ‘timing rule’), such that the security
parameter $\varepsilon$ is small, and such that $\mathcal{R}$ allocates
$p_{0}$ and $p_{1}$ the same constant resource balance for all inputs, and
allocates $p_{2}$ resource balance 0 for all inputs. Recall the definition of
a protocol instance from Section 6.4. We say that two protocol instances are
_indistinguishable for $p$_ if both of the following hold: (a) Processor $p$
receives the same protocol inputs for both instances, and; (b) The
distributions on the pairs $(M,M^{\ast})$ received by $p$ at each timeslot are
the same for the two instances, i.e. for any (possibly infinite) sequence
$(M_{1},M^{\ast}_{1}),(M_{2},M^{\ast}_{2}),\dots$ the probability that, for
all $t\geq 1$, $p$ receives $(M_{t},M^{\ast}_{t})$ at timeslot $t$, is the
same for both protocol instances. As in the proof of Theorem 3.4, by a _system
input_ $s$ we mean a choice of protocol input for each processor in $P$.
Again, we restrict attention to system inputs in which each protocol input
$\mathtt{in}_{p}$ is either $\\{0_{\mathtt{U}^{\ast}}\\}$ or
$\\{1_{\mathtt{U}^{\ast}}\\}$ (but never
$\\{0_{\mathtt{U}^{\ast}},1_{\mathtt{U}^{\ast}}\\}$ ). For any system input
$s$, we let $\bar{s}$ be the system input in which each protocol input is
reversed, i.e. if $\mathtt{in}_{p}=\\{z_{\mathtt{U}^{\ast}}\\}$ in $s$, then
$\mathtt{in}_{p}=\\{(1-z)_{\mathtt{U}^{\ast}}\\}$ in $\bar{s}$. For each
system input $s$ we consider a strategy (i.e. state transition diagram) for
the adversary $\mathtt{adv}(s)$, in which the adversary ignores their own
protocol input, and instead simulates $p_{1}$ with the protocol input
specified by $\bar{s}$, i.e. the adversary follows the state transition
diagram for $p_{1}$, and broadcasts all of the messages it is instructed to
broadcast.
Let us say a system input is compatible with $\mathtt{in}_{p_{2}}$ if it gives
$p_{2}$ the protocol input $\mathtt{in}_{p_{2}}$. For any two system inputs
$s$ and $s^{\prime}$ compatible with a fixed value $\mathtt{in}_{p_{2}}$, the
protocol instances produced when the adversary follows the strategies
$\mathtt{adv}(s)$ and $\mathtt{adv}(s^{\prime})$ respectively are then
indistinguishable for $p_{2}$. When $s$ specifies a protocol input of
$\\{z_{\mathtt{U}^{\ast}}\\}$ for all processors, $p_{2}$ must output $z$
with probability $>1-\varepsilon$. We conclude that, whatever the system
input, if $\mathtt{in}_{p_{2}}=\\{0_{\mathtt{U}^{\ast}}\\}$, then $p_{2}$ must
output 0 with probability $>1-\varepsilon$, and if
$\mathtt{in}_{p_{2}}=\\{1_{\mathtt{U}^{\ast}}\\}$, then $p_{2}$ must output 1
with probability $>1-\varepsilon$. Note also, that any two protocol instances
that differ only in the protocol input for $p_{2}$ are indistinguishable for
$p_{1}$. So, $p_{1}$ also satisfies the property that, whatever the system
input, if $\mathtt{in}_{p_{1}}=\\{0_{\mathtt{U}^{\ast}}\\}$, then $p_{1}$ must
output 0 with probability $>1-\varepsilon$, and if
$\mathtt{in}_{p_{1}}=\\{1_{\mathtt{U}^{\ast}}\\}$, then $p_{1}$ must output 1
with probability $>1-\varepsilon$. The protocol thus fails when $p_{1}$ and
$p_{2}$ receive inputs $\\{0_{\mathtt{U}^{\ast}}\\}$ and
$\\{1_{\mathtt{U}^{\ast}}\\}$ respectively.
### 6.9. Appendix 9 – The proof of Theorem 4.1
The idea behind the proof can be summed up as follows. Recall the definition
of a protocol instance from Section 6.4. We consider protocol instances in
which there are at least two processors $p_{0}$ and $p_{1}$, both of which are
non-faulty, and with identifiers $\mathtt{U}_{0}$ and $\mathtt{U}_{1}$
respectively. Suppose that, in a certain protocol instance, $\mathtt{U}_{0}$
and $\mathtt{U}_{1}$ both have the same constant and non-zero resource balance
for all inputs, and that all other identifiers have resource balance zero for
all $t$ and $M$. According to the ‘no balance, no voice’ assumptions of
Section 2.2 (that the permitter oracle’s response to any request
$(t^{\prime},M,\emptyset)$ must be $M^{\ast}=\emptyset$ whenever
$\mathcal{R}(\mathtt{U}_{p},t^{\prime},M)=0$), this means that $p_{0}$ and
$p_{1}$ will be the only processors that are able to broadcast messages. For
as long as messages broadcast by each $p_{i}$ are prevented from being
received by $p_{1-i}$ ($i\in\\{0,1\\}$), however, the protocol instance will
be indistinguishable for $p_{i}$ from one in which only $\mathtt{U}_{i}$ has
the same constant and non-zero resource balance. After some finite time
$p_{0}$ and $p_{1}$ must therefore give outputs, which will be incorrect for
certain protocol inputs.
To describe the argument in more detail, let $\mathtt{U}_{0}$ and
$\mathtt{U}_{1}$ be identifiers allocated to the non-faulty processors $p_{0}$
and $p_{1}$ respectively. We consider three different resource pools:
1. $\mathcal{R}_{0}:$
For all inputs $t$ and $M$, $\mathtt{U}_{0}$ and $\mathtt{U}_{1}$ are given
the same constant value $I>0$, while all other identifiers are assigned the
constant value 0.
2. $\mathcal{R}_{1}:$
For all inputs $t$ and $M$, $\mathtt{U}_{0}$ is given the same constant value
$I>0$, while all other identifiers are assigned the constant value 0.
3. $\mathcal{R}_{2}:$
For all inputs $t$ and $M$, $\mathtt{U}_{1}$ is given the same constant value
$I>0$, while all other identifiers are assigned the constant value 0.
We also consider three different instances of the protocol
$\mathtt{I}_{0},\mathtt{I}_{1}$ and $\mathtt{I}_{2}$. In all three instances,
the security parameter $\varepsilon$ is given the same small value, and for
all $i\in\\{0,1\\}$, $p_{i}$ has protocol input $\\{i_{\mathtt{U}^{\ast}}\\}$.
More generally, all three instances have identical parameter and protocol
inputs, except for the differences detailed below:
1. $\mathtt{I}_{0}:$
Here $\mathcal{R}:=\mathcal{R}_{0}$. For $i\in\\{0,1\\}$, messages broadcast
by $p_{i}$ are not received by $p_{1-i}$ until after the (undetermined)
stabilisation time $T$.
2. $\mathtt{I}_{1}:$
Here $\mathcal{R}:=\mathcal{R}_{1}$, and the choice of timing rule is
arbitrary.
3. $\mathtt{I}_{2}:$
Here $\mathcal{R}:=\mathcal{R}_{2}$, and the choice of timing rule is
arbitrary.
For any timeslot $t$, we say that two protocol instances are
_indistinguishable for $p$ until $t$_ if both of the following hold: (a)
Processor $p$ receives the same protocol inputs for both instances, and; (b)
The distributions on the pairs $(M,M^{\ast})$ received by $p$ at each timeslot
$\leq t$ are the same for the two instances, i.e. for any sequence
$(M_{1},M^{\ast}_{1}),\dots,(M_{t},M^{\ast}_{t})$, the probability that, for
all $t^{\prime}\leq t$, $p$ receives $(M_{t^{\prime}},M^{\ast}_{t^{\prime}})$
at timeslot $t^{\prime}$, is the same for both protocol instances.
According to the ‘no balance, no voice’ assumptions of Section 2.2, it follows
that only $p_{0}$ and $p_{1}$ will be able to broadcast messages in any run
corresponding to any of these three instances. Our framework also stipulates
that the response of the permitter to a request from $p$ at timeslot $t$ of
the form $(M,A)$ (or $(t^{\prime},M,A)$) is a probabilistic function of the
determined variables, $(M,A)$ (or $(t^{\prime},M,A)$), and of
$\mathcal{R}(\mathtt{U}_{p},t,M)$ (or
$\mathcal{R}(\mathtt{U}_{p},t^{\prime},M)$), and also $\mathtt{U}_{p}$ if we
are working in the authenticated setting. It therefore follows by induction on
timeslots $\leq T$ that, because the resource pool is undetermined:
1. $(\dagger)$
For each $i\in\\{0,1\\}$ and all $t\leq T$, $\mathtt{I}_{0}$ and
$\mathtt{I}_{1+i}$ are indistinguishable for $p_{i}$ until $t$.
If $T$ is chosen sufficiently large, it follows that we can find $t_{0}<T$
satisfying the following condition: For both $\mathtt{I}_{1+i}$
($i\in\\{0,1\\}$), it holds with probability $>1-\varepsilon$ that $p_{i}$
outputs $i$ before timeslot $t_{0}$. By $(\dagger)$, it therefore holds for
$\mathtt{I}_{0}$ that, with probability $>1-2\varepsilon$, $p_{0}$ outputs 0
and $p_{1}$ outputs 1 before $t_{0}$. This gives the required contradiction,
so long as $\varepsilon<\frac{1}{3}$.
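The final step is a union bound; spelling it out (our paraphrase, writing $A_{i}$ for the event that $p_{i}$ gives the stated output before $t_{0}$ in $\mathtt{I}_{0}$, and assuming the protocol must satisfy agreement except with probability at most $\varepsilon$):

$\Pr[A_{0}\cap A_{1}]\;\geq\;1-\Pr[\lnot A_{0}]-\Pr[\lnot A_{1}]\;>\;1-2\varepsilon.$

On $A_{0}\cap A_{1}$ the two processors output 0 and 1 respectively, so agreement fails with probability $>1-2\varepsilon$, which exceeds $\varepsilon$ exactly when $\varepsilon<\frac{1}{3}$.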
# The ExoGRAVITY project: using single mode interferometry to characterize
exoplanets
S. Lacour LESIA, Observatoire de Paris, PSL, CNRS, Sorbonne Université, 92195
Meudon, France European Southern Observatory, Karl-Schwarzschild-Straße 2,
85748 Garching, Germany J. J. Wang Department of Astronomy, California
Institute of Technology, Pasadena, CA 91125, USA M. Nowak Institute of
Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK L.
Pueyo Space Telescope Science Institute, Baltimore, MD 21218, USA F.
Eisenhauer Max Planck Institute for extraterrestrial Physics, 85748 Garching,
Germany A.-M. Lagrange Université Grenoble Alpes, CNRS, IPAG, 38000
Grenoble, France LESIA, Observatoire de Paris, PSL, CNRS, Sorbonne
Université, 92195 Meudon, France P. Mollière Max Planck Institute for
Astronomy, Königstuhl 17, 69117 Heidelberg, Germany R. Abuter European
Southern Observatory, Karl-Schwarzschild-Straße 2, 85748 Garching, Germany A.
Amorim Universidade de Lisboa - Faculdade de Ciências, Campo Grande, 1749-016
Lisboa, Portugal CENTRA - Centro de Astrofísica e Gravitação, Universidade de
Lisboa, Lisboa, Portugal R. Asensio-Torres Max Planck Institute for
Astronomy, Königstuhl 17, 69117 Heidelberg, Germany M. Bauböck Max Planck
Institute for extraterrestrial Physics, 85748 Garching, Germany M. Benisty
Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France J.P. Berger
Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France H. Beust
Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France S. Blunt
Department of Astronomy, California Institute of Technology, Pasadena, CA
91125, USA A. Boccaletti LESIA, Observatoire de Paris, PSL, CNRS, Sorbonne
Université, 92195 Meudon, France A. Bohn Leiden Observatory, Leiden
University, P.O. Box 9513, 2300 RA Leiden, The Netherlands M. Bonnefoy
Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France H. Bonnet
European Southern Observatory, Karl-Schwarzschild-Straße 2, 85748 Garching,
Germany W. Brandner Max Planck Institute for Astronomy, Königstuhl 17, 69117
Heidelberg, Germany F. Cantalloube Max Planck Institute for Astronomy,
Königstuhl 17, 69117 Heidelberg, Germany P. Caselli Max Planck Institute for
extraterrestrial Physics, 85748 Garching, Germany B. Charnay LESIA,
Observatoire de Paris, PSL, CNRS, Sorbonne Université, 92195 Meudon, France
G. Chauvin Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France E.
Choquet Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France V.
Christiaens School of Physics and Astronomy, Monash University, Clayton,
Melbourne, Australia Y. Clénet LESIA, Observatoire de Paris, PSL, CNRS,
Sorbonne Université, 92195 Meudon, France A. Cridland Leiden Observatory,
Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands P.T. de
Zeeuw Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden,
The Netherlands Max Planck Institute for extraterrestrial Physics, 85748
Garching, Germany R. Dembet European Southern Observatory, Karl-
Schwarzschild-Straße 2, 85748 Garching, Germany J. Dexter Max Planck
Institute for extraterrestrial Physics, 85748 Garching, Germany A. Drescher
Max Planck Institute for extraterrestrial Physics, 85748 Garching, Germany G.
Duvert Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France F. Gao
Max Planck Institute for extraterrestrial Physics, 85748 Garching, Germany P.
Garcia CENTRA - Centro de Astrofísica e Gravitação, Universidade de Lisboa,
Lisboa, Portugal Universidade do Porto, Faculdade de Engenharia, Rua Dr.
Roberto Frias, Porto, Portugal R. Garcia Lopez School of Physics, University
College Dublin, Belfield, Dublin 4, Ireland Max Planck Institute for
Astronomy, Königstuhl 17, 69117 Heidelberg, Germany T. Gardner Astronomy
Department, University of Michigan, Ann Arbor, MI 48109 USA E. Gendron
LESIA, Observatoire de Paris, PSL, CNRS, Sorbonne Université, 92195 Meudon,
France R. Genzel Max Planck Institute for extraterrestrial Physics, 85748
Garching, Germany S. Gillessen Max Planck Institute for extraterrestrial
Physics, 85748 Garching, Germany J. H. Girard Space Telescope Science
Institute, Baltimore, MD 21218, USA X. Haubois European Southern
Observatory, Casilla 19001, Santiago 19, Chile G. Heißel LESIA, Observatoire
de Paris, PSL, CNRS, Sorbonne Université, 92195 Meudon, France T. Henning
Max Planck Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany
S. Hinkley University of Exeter, Physics Building, Stocker Road, Exeter EX4
4QL, United Kingdom S. Hippler Max Planck Institute for Astronomy,
Königstuhl 17, 69117 Heidelberg, Germany M. Horrobin Institute of
Physics, University of Cologne, Zülpicher Straße 77, 50937 Cologne, Germany
M. Houllé Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France Z. Hubert
Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France A. Jiménez-
Rosales Max Planck Institute for extraterrestrial Physics, 85748 Garching,
Germany L. Jocou Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble,
France J. Kammerer European Southern Observatory, Karl-Schwarzschild-Straße
2, 85748 Garching, Germany Research School of Astronomy & Astrophysics,
Australian National University, Australia M. Keppler Max Planck Institute
for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany P. Kervella LESIA,
Observatoire de Paris, PSL, CNRS, Sorbonne Université, 92195 Meudon, France
L. Kreidberg Max Planck Institute for Astronomy, Königstuhl 17, 69117
Heidelberg, Germany V. Lapeyrère LESIA, Observatoire de Paris, PSL, CNRS,
Sorbonne Université, 92195 Meudon, France J.-B. Le Bouquin Université
Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France P. Léna LESIA,
Observatoire de Paris, PSL, CNRS, Sorbonne Université, 92195 Meudon, France
D. Lutz Max Planck Institute for extraterrestrial Physics, 85748 Garching,
Germany A.-L. Maire STAR Institute/Université de Liège, Belgium Max Planck
Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany A. Mérand
European Southern Observatory, Karl-Schwarzschild-Straße 2, 85748 Garching,
Germany J.D. Monnier Astronomy Department, University of Michigan, Ann
Arbor, MI 48109 USA D. Mouillet Université Grenoble Alpes, CNRS, IPAG, 38000
Grenoble, France A. Muller Max Planck Institute for Astronomy, Königstuhl
17, 69117 Heidelberg, Germany E. Nasedkin Max Planck Institute for
Astronomy, Königstuhl 17, 69117 Heidelberg, Germany T. Ott Max Planck
Institute for extraterrestrial Physics, 85748 Garching, Germany G. P. P. L.
Otten Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France C. Paladini
European Southern Observatory, Casilla 19001, Santiago 19, Chile T. Paumard
LESIA, Observatoire de Paris, PSL, CNRS, Sorbonne Université, 92195 Meudon,
France K. Perraut Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble,
France G. Perrin LESIA, Observatoire de Paris, PSL, CNRS, Sorbonne
Université, 92195 Meudon, France O. Pfuhl European Southern Observatory,
Karl-Schwarzschild-Straße 2, 85748 Garching, Germany J. Rameau Université
Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France L. Rodet Department of
Astronomy, Cornell University, Ithaca, NY 14853, USA G. Rodriguez-Coira
LESIA, Observatoire de Paris, PSL, CNRS, Sorbonne Université, 92195 Meudon,
France G. Rousset LESIA, Observatoire de Paris, PSL, CNRS, Sorbonne
Université, 92195 Meudon, France J. Shangguan Max Planck Institute for
extraterrestrial Physics, 85748 Garching, Germany T. Shimizu Max Planck
Institute for extraterrestrial Physics, 85748 Garching, Germany J. Stadler
Max Planck Institute for extraterrestrial Physics, 85748 Garching, Germany O.
Straub Max Planck Institute for extraterrestrial Physics, 85748 Garching,
Germany C. Straubmeier Institute of Physics, University of Cologne,
Zülpicher Straße 77, 50937 Cologne, Germany E. Sturm Max Planck Institute
for extraterrestrial Physics, 85748 Garching, Germany T. Stolker Leiden
Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands
E.F. van Dishoeck Leiden Observatory, Leiden University, P.O. Box 9513, 2300
RA Leiden, The Netherlands Max Planck Institute for extraterrestrial Physics,
85748 Garching, Germany A. Vigan Aix Marseille Univ, CNRS, CNES, LAM,
Marseille, France F. Vincent LESIA, Observatoire de Paris, PSL, CNRS,
Sorbonne Université, 92195 Meudon, France S.D. von Fellenberg Max Planck
Institute for extraterrestrial Physics, 85748 Garching, Germany K. Ward-Duong
Five College Astronomy Department, Amherst College, Amherst, MA 01002, USA F.
Widmann Max Planck Institute for extraterrestrial Physics, 85748 Garching,
Germany E. Wieprecht Max Planck Institute for extraterrestrial Physics,
85748 Garching, Germany E. Wiezorrek Max Planck Institute for
extraterrestrial Physics, 85748 Garching, Germany J. Woillez European
Southern Observatory, Karl-Schwarzschild-Straße 2, 85748 Garching, Germany
###### Abstract
Combining adaptive optics and interferometric observations results in a
considerable contrast gain compared to single-telescope, extreme AO systems.
Taking advantage of this, the ExoGRAVITY project is a survey of known young
giant exoplanets located in the range of 0.1” to 2” from their stars. The
observations provide astrometric data of unprecedented accuracy, which is crucial
for refining the orbital parameters of planets and illuminating their
dynamical histories. Furthermore, GRAVITY will measure non-Keplerian
perturbations due to planet-planet interactions in multi-planet systems and
measure dynamical masses. Over time, repeated observations of the exoplanets
at medium resolution ($R=500$) will provide a catalogue of K-band spectra of
unprecedented quality for a number of exoplanets. The K band is unique in
that it contains many molecular signatures (CO, H2O, CH4, CO2). This allows
surface gravity, metallicity, and temperature to be constrained precisely
when used in conjunction with self-consistent models like Exo-REM.
Further, we will use the parameter-retrieval algorithm petitRADTRANS to
constrain the C/O ratio of the planets. Ultimately, we plan to produce the
first C/O survey of exoplanets, kick-starting the difficult process of linking
planetary formation with measured atomic abundances.
###### keywords:
Exoplanets, optical interferometry, planet formation
## 1 INTRODUCTION
With more than 4000 exoplanets discovered to date, the focus is rapidly
shifting from census to characterization. The directly imaged exoplanets, seen
through thermal emission, offer unique possibilities compared to transit
spectroscopy: they are a distinct and young subset of exoplanets at large
separations.
We have started a program to observe young exoplanets by optical
interferometry, whose objective is to obtain high-precision astrometry and
K-band spectra of all known directly-imaged exoplanets, and more. We are using
the GRAVITY instrument[1], which offers the best astrometric accuracy and
highest quality spectra. The scientific goal of this program is to answer the
following two key scientific questions:
• What are the dynamics of directly imaged planetary systems? We are
monitoring the dynamical interactions between seen and unseen companions. We
are also getting the best dynamical masses for directly-imaged planets either
in combination with GAIA stellar astrometry or by directly measuring dynamical
perturbations caused by planet-planet interactions in resonant multi-planet
systems.
• How does the carbon-to-oxygen ratio (C/O) vary for these young exoplanets?
This ratio will allow us to establish, for the first time, correlations
between atmospheric composition and formation.
This ongoing program is already delivering to the community a catalogue of
high quality spectra that will not be rivalled until the beginning of the ELT
era. This catalogue will be used to test planetary atmosphere models for years
to come. Astrometry will have an even longer-lasting legacy, extending into
the era of the ELTs. GRAVITY will remain an unrivalled astrometric facility
for imaged exoplanets for the next few decades, and the orbits it measures
will be the foundation for understanding these planetary systems.
Figure 1: Orbital and dynamical mass constraints with GRAVITY astrometry.
Upper panels: Astrometry of HR 8799 e. The black point is the GRAVITY
measurement, and the gray points are from previous astrometry [2, 3].
Interferometric astrometry is an order of magnitude more accurate than direct
imaging. Lower panels: Predicted deviation from a Keplerian orbit over the
next two years for each of the four HR 8799 planets due to planet-planet
perturbations based on the orbital constraints[3]. The deviations are
essentially linear as a function of time over the next few years. We explored
three cases where we assumed different mass ranges for the inner three
planets. The shaded region for each case corresponds to the 2$\sigma$ credible
region, marginalizing over the mass range and over the uncertainty in the
orbital parameters.
## 2 THE DYNAMICS OF EXOPLANETS
A key diagnostic of planet formation and evolution is provided by analyzing
planetary orbital parameters as tracers of their dynamical history. However,
milliarcsecond-level astrometry from single-dish telescopes can only place
limited constraints on key orbital parameters without additional
priors: for example, we currently cannot rule out eccentricities between 0 and
0.7 for 51 Eri b. GRAVITY has demonstrated 50-100 $\mu$as precision on
directly-imaged planets[4, 5, 6, 7] with potential to reach 10 $\mu$as
precision, as demonstrated on the Galactic Center. With a 10-100x refinement
in the orbits (see Fig. 1, top), we will drastically improve our knowledge of
several planetary systems.
Here are a few examples. 51 Eridani b could be entering its first Kozai-Lidov
cycle, as the timescale is too long for any cycles to have completed
[8]. Such dynamics are seen in a significant fraction of triple star systems
and are thought to destabilize any interior planets [9]. HD 95086 b cannot
alone carve out the gap between two debris belts in this system. Detecting an
eccentricity of $\sim$0.3 would allow HD 95086 b to maintain the inner edge of
the outer disk as well as indicate dynamical interactions with closer-in
planets that are sculpting the outer edge of the inner disk [10]. HIP 65426 b
is predicted to have a moderate eccentricity ($e\sim 0.3$) if it formed via core
accretion, since it is believed to have been scattered out by an inner
companion[11]. $\beta$ Pictoris harbors one of the first imaged
exoplanets[12]. Recently, it was confirmed as a multi-planetary system with a
new exoplanet detected by radial velocity situated at 2.7 AU[7]. Measuring the
orbits of both planets helps us reconstruct the past and future dynamical
history of the system, such as past scattering events and how coplanar their
orbits are. PDS 70 b and c form the best system to study still-accreting
exoplanets[13, 14, 15]. Their orbits will be crucial for modeling how they
accrete material from their circumplanetary disks. Secular stability will help
us to constrain their masses. HR 8799 is a resonant multi-planet system. We
can reveal deviations from purely single-planet Keplerian orbital trajectories
in the next few years due to the strong mutual interaction between the
planets. With GRAVITY, we hope to achieve 2 $M_{Jup}$ precision on the masses
of the outer three planets in two years, providing model-independent planet
mass estimates (see Fig. 1, bottom). While it will be more difficult to
constrain the mass of HR 8799 e directly, an accurate understanding of the
orbits of all four planets is required to model the resonant dynamics. GRAVITY
recently ruled out perfectly coplanar solutions for HR 8799 [4]. Finding
stable, non-coplanar orbits will reveal the true orbital architecture as well
as give us clues to how these planets migrated into resonance[3].
These precise orbits can also be combined with GAIA astrometry of the host
star to obtain dynamical masses of the planets. This is of critical importance
as the masses of directly imaged planets have mostly been estimated by
comparing their luminosity to evolutionary models, whose predictions are
highly dependent on the amount of gravitational energy gained at formation
(initial entropy)[16, 17]. As GAIA will only measure a 5-year acceleration of
the star due to these planets, planet mass will be degenerate with planet
semi-major axis and eccentricity. Precise constraints of these key orbital
parameters using GRAVITY will lead to the best constraints on these planet
masses and initial entropies at formation.
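To give a sense of the scales involved, the textbook small-angle signature a planet imprints on its host star's position can be sketched as below. This is an illustrative back-of-envelope relation, not the collaboration's fitting machinery, and the example masses and distances are assumptions chosen to be roughly HR 8799-like:

```python
def astrometric_signature_uas(m_planet_mjup, m_star_msun, a_au, d_pc):
    """Peak astrometric wobble of the host star, in microarcseconds.

    Small-angle, circular-orbit approximation:
        alpha [arcsec] = (M_p / M_*) * a [AU] / d [pc]
    """
    MJUP_PER_MSUN = 1047.6  # Jupiter masses per solar mass
    mass_ratio = m_planet_mjup / (m_star_msun * MJUP_PER_MSUN)
    return mass_ratio * (a_au / d_pc) * 1e6  # arcsec -> microarcsec

# Illustrative, roughly HR 8799-like numbers: a 7 M_Jup planet at 16 AU
# around a 1.5 M_Sun star at 41 pc.
print(f"{astrometric_signature_uas(7.0, 1.5, 16.0, 41.0):.0f} uas")
```

The full wobble of a wide-orbit planet is sizeable, but over a 5-year baseline Gaia samples only a short arc of it, essentially an acceleration, which is why the planet mass stays degenerate with semi-major axis and eccentricity until an independent orbit pins those down.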
## 3 ATMOSPHERIC PHYSICS AND C/O DETERMINATION
---
Figure 2: On the superior quality of spectro-interferometry. Left panel: GPI
observations of exoplanets and brown dwarf companions. Right panel: comparison
with GRAVITY spectra for two data sets: $\beta$ Pictoris b and HR 8799 e.
GRAVITY only observes in the K band, but with a better resolution and
normalization that enables precision fitting by atmospheric models, revealing
faint absorption bands otherwise not detected.
From core accretion [18, 19] to gravitational instability [20, 21], solar and
extra-solar planet formation paradigms are debated in the community. Our
understanding can be improved by studying the main planetary characteristics
over a range of age and mass regimes: their bulk properties (mass, $T_{\rm
eff}$, log g, semi-major axis), their chemical composition (related to their
atmospheric absorber abundances, as accessible from studying their spectra),
and their dynamical history (orbital eccentricities, resonances). Directly
imaged giant planets are excellent targets for such studies as the
aforementioned characteristics are deducible from multi-band photometry,
spectroscopy, and astrometric monitoring. About a dozen massive planets have
been imaged to date, and are now being characterized through low-resolution
spectroscopy ($R\sim 50$) with high-contrast imagers like VLT-SPHERE and
Gemini-GPI[22, 23, 24]. In order to better interpret what these observations
and the derived planetary characteristics mean in view of planet formation,
several authors now aim at linking giant planet and brown dwarf formation
processes to observable quantities like elemental ratios [25, 26, 27, 28]. In
this framework, the carbon-to-oxygen number ratio (C/O) is emerging as a
parameter of paramount importance, as it can hold crucial information about
the fractional content of solid and gaseous material accreted by the
planet/brown dwarf [29, 30]. Hence, combined with chemically evolving disk
models[31], measurements of C/O ratios can be used to constrain when and where
in the disk a substellar companion has formed. A sample of planets with
well-constrained C/O ratios would also help quantify the impact of solid accretion
processes, like pebble accretion [32] or late-stage atmospheric enrichment by
planetesimals[26].
But despite the community's growing interest in exoplanetary C/O ratios,
the number of actual measurements remains limited: for example, Madhusudhan et
al. (2011) [33] obtained a controversial estimate for the transiting hot
Jupiter WASP 12b [34, 35, 36, 37, 38], while other teams report values for HR
8799 b and c [39, 40]. In the case of hot Jupiters, high temperatures and poor
data quality result in upper limits only (close to C/O$\sim 0.9$[37]) or very
large uncertainties (see Table 2 in Molaverdikhani et al. 2019[41]). From the
ground, observations in the K band strongly constrain planetary C/O ratios
[39, 40], due to the presence of spectral signatures of many absorbers: CO
(overtone band at 2.3-2.4 microns), CH4, CO2, and H2O. H2O has an opacity
minimum at $\sim$2.2 microns that defines the K-band. For C/O ratios below
$\sim$0.7 to 0.9 at high temperatures, and at all C/O ratios at low
temperatures, water is an important absorber in the atmospheres, and therefore
determines the overall flux level and depth of the atmospheric absorption
features. The K-band is also well suited to constrain the planetary
metallicity[24] and can be used to constrain the temperature profile,
especially if absorption features are detected. However, a minimum resolution
of a few hundred is necessary to well resolve, for example, the multiple
features of the CO overtone band at 2.3-2.4 microns. Yet, to date, only a few
giant planets have been characterized at $R\geq 100$ [39, 42, 22, 43, 44].
This is due to the technical challenge of extracting a faint planet’s spectrum
buried in the stellar halo with single-dish telescopes, limiting these studies
to the brightest and furthest-out objects (separation $\geq 1$” and contrast
$\geq 10^{-4}$).
## 4 RESULTS TO DATE
In GRAVITY Collaboration: Lacour et al. (2019)[4], we observed the target HR
8799 e, one of the most challenging exoplanets in terms of dynamic range
($\Delta$mag=11) and separation (390 mas). The astrometry revealed that the
orbital plane of HR 8799 e is not coplanar with those of the other planets. We also demonstrated
that the spatial resolution of the interferometer is capable of retrieving the
planetary spectra cleaned of stellar contamination. We used Exo-REM atmosphere
grids to derive the main characteristics of the atmosphere of HR 8799 e:
surface gravity and temperature.
We also detected $\beta$ Pictoris b while it was at 140 mas from its star[5].
This is another powerful demonstration that the technique works at even closer
separations. In 2.5h integration time, we reached an exquisite S/N of 50 (see
Fig. 2). We have therefore included an additional, powerful approach for
spectral analysis: free atmospheric parameter-retrieval. Our team has
developed the petitRADTRANS code[45], which accounts for clouds, their
condensation, and non-equilibrium chemistry. We are currently using this code on
several exoplanet datasets (papers in preparation). For $\beta$ Pictoris b,
the high SNR allows us to determine the temperature profile and C/O ratio with good
accuracy ($\approx 10\%$). For HR 8799 e (with an SNR of the order of 10 per
spectral channel), we additionally used GPI data to constrain the C/O. We
found very different C/O ratios for the two targets. This hints towards a
very different formation process between the two, or a different birth
environment due to the larger separation of HR 8799 e. For both of these
planets, we achieved 100 $\mu$as astrometric precision, ten times better than
any other facility.
Recently, we used the technique to detect, for the first time, the emission of
a planet previously detected by radial velocity[7]. This exoplanet is $\beta$
Pictoris c, orbiting at a distance of 2.7 au from its star[46, 6]. The
exoplanet orbits at a maximum of 150 mas with a contrast ratio of $2\times
10^{4}$. It is the first demonstration that optical interferometry can surpass
direct imaging with monolithic telescopes.
## 5 PROSPECTS AND CONCLUSION
Figure 3: Sensitivity region of Gaia, classical imaging and interferometric
imaging, overlaid on known planets from the NASA database. Because of
observational and astrophysical biases, most known young planets are at $>$10
AU (brown in this picture). Thanks to its exhaustive approach, Gaia will
detect many more of these young planets at $<$10 AU. We know this population
of young planets should exist according to exoplanet population models (blue
in this picture)[47, 48]. GRAVITY+ is uniquely suited to characterise the intrinsic
infrared flux and thus the formation entropy of these young planets.
While interferometry can surpass direct imaging, it has an intrinsic
limitation: the field of view (FOV). The FOV of GRAVITY is set by the
diffraction limit of a single telescope, $\pm 60\,$mas, whereas the FOV of a
classical imager spans several tens of arcseconds. This explains why, to observe $\beta$
Pictoris c, we had to rely on an expected position predicted from radial
velocity. The alternative would have been to search blindly using dithering,
but at the cost of a very large observational overhead.
Therefore, to detect new giant exoplanets, we have to rely on predictions.
Radial velocity is a good solution (as shown), except that to be detectable,
the exoplanets must be young; otherwise, the contrast ratio would be too high.
The alternative will come from the upcoming GAIA releases. For stars at 20 pc
from the Sun and a nominal 5-year mission, Gaia’s peak sensitivity corresponds
to planets at 100 mas or 2 AU from their host star. This separation is too
small for measuring their infrared flux with classical imaging, but it falls well within the
range of interferometric imaging. At a separation of 130 mas, $\beta$ Pictoris
c is actually a prototypical example (see Figure 3). We expect a few of them
within the current contrast limit of GRAVITY (many others at larger
separations will be observable with classical imaging, but they are much less
challenging for the formation theories).
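The angular-to-physical conversion used here is simply separation[mas] = 1000 × a[AU] / d[pc] for a face-on circular orbit. A minimal sanity check of the numbers quoted above (the 19.4 pc distance to $\beta$ Pictoris is an assumed round value, used only for illustration):

```python
def separation_mas(a_au, d_pc):
    """Angular separation (face-on, circular orbit) in milliarcseconds."""
    return 1000.0 * a_au / d_pc

# Gaia's quoted sweet spot: 2 AU at 20 pc
print(separation_mas(2.0, 20.0))         # -> 100.0 mas
# beta Pictoris c: 2.7 AU at ~19.4 pc (assumed distance)
print(round(separation_mas(2.7, 19.4)))  # -> 139 mas
```

Projection effects and orbital eccentricity modulate the instantaneous separation around this value, which is why the planet reaches up to $\sim$150 mas along its orbit.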
Last, GRAVITY+ will come as an upgrade of the GRAVITY instrument [49]. It will
provide new extreme Adaptive Optics to replace the 20-year-old system that
currently feeds the instrument. This is a critical ingredient to increase the
contrast of GRAVITY. For the first time, the need to optimise a dedicated
high-contrast mode is part of the instrument design, and should pay off with
an increased contrast ratio.
## References
* [1] Gravity Collaboration, Abuter, R., Accardo, M., Amorim, A., Anugu, N., Ávila, G., Azouaoui, N., Benisty, M., Berger, J. P., Blind, N., Bonnet, H., Bourget, P., Brandner, W., Brast, R., Buron, A., Burtscher, L., Cassaing, F., Chapron, F., Choquet, É., Clénet, Y., Collin, C., Coudé Du Foresto, V., de Wit, W., de Zeeuw, P. T., Deen, C., Delplancke-Ströbele, F., Dembet, R., Derie, F., Dexter, J., Duvert, G., Ebert, M., Eckart, A., Eisenhauer, F., Esselborn, M., Fédou, P., Finger, G., Garcia, P., Garcia Dabo, C. E., Garcia Lopez, R., Gendron, E., Genzel, R., Gillessen, S., Gonte, F., Gordo, P., Grould, M., Grözinger, U., Guieu, S., Haguenauer, P., Hans, O., Haubois, X., Haug, M., Haussmann, F., Henning, T., Hippler, S., Horrobin, M., Huber, A., Hubert, Z., Hubin, N., Hummel, C. A., Jakob, G., Janssen, A., Jochum, L., Jocou, L., Kaufer, A., Kellner, S., Kendrew, S., Kern, L., Kervella, P., Kiekebusch, M., Klein, R., Kok, Y., Kolb, J., Kulas, M., Lacour, S., Lapeyrère, V., Lazareff, B., Le Bouquin, J.-B., Lèna, P., Lenzen, R., Lévêque, S., Lippa, M., Magnard, Y., Mehrgan, L., Mellein, M., Mérand, A., Moreno-Ventas, J., Moulin, T., Müller, E., Müller, F., Neumann, U., Oberti, S., Ott, T., Pallanca, L., Panduro, J., Pasquini, L., Paumard, T., Percheron, I., Perraut, K., Perrin, G., Pflüger, A., Pfuhl, O., Phan Duc, T., Plewa, P. M., Popovic, D., Rabien, S., Ramírez, A., Ramos, J., Rau, C., Riquelme, M., Rohloff, R.-R., Rousset, G., Sanchez-Bermudez, J., Scheithauer, S., Schöller, M., Schuhler, N., Spyromilio, J., Straubmeier, C., Sturm, E., Suarez, M., Tristram, K. R. W., Ventura, N., Vincent, F., Waisberg, I., Wank, I., Weber, J., Wieprecht, E., Wiest, M., Wiezorrek, E., Wittkowski, M., Woillez, J., Wolff, B., Yazici, S., Ziegler, D., and Zins, G., “First light for GRAVITY: Phase referencing optical interferometry for the Very Large Telescope Interferometer,” Astronomy and Astrophysics 602, A94 (June 2017).
* [2] Konopacky, Q. M., Marois, C., Macintosh, B. A., Galicher, R., Barman, T. S., Metchev, S. A., and Zuckerman, B., “ASTROMETRIC MONITORING OF THE HR 8799 PLANETS: ORBIT CONSTRAINTS FROM SELF-CONSISTENT MEASUREMENTS,” The Astronomical Journal 152, 28 (July 2016).
* [3] Wang, J. J., Graham, J. R., Dawson, R., Fabrycky, D., De Rosa, R. J., Pueyo, L., Konopacky, Q., Macintosh, B., Marois, C., Chiang, E., Ammons, S. M., Arriaga, P., Bailey, V. P., Barman, T., Bulger, J., Chilcote, J., Cotten, T., Doyon, R., Duchêne, G., Esposito, T. M., Fitzgerald, M. P., Follette, K. B., Gerard, B. L., Goodsell, S. J., Greenbaum, A. Z., Hibon, P., Hung, L.-W., Ingraham, P., Kalas, P., Larkin, J. E., Maire, J., Marchis, F., Marley, M. S., Metchev, S., Millar-Blanchaer, M. A., Nielsen, E. L., Oppenheimer, R., Palmer, D., Patience, J., Perrin, M., Poyneer, L., Rajan, A., Rameau, J., Rantakyrö, F. T., Ruffio, J.-B., Savransky, D., Schneider, A. C., Sivaramakrishnan, A., Song, I., Soummer, R., Thomas, S., Wallace, J. K., Ward-Duong, K., Wiktorowicz, S., and Wolff, S., “Dynamical Constraints on the HR 8799 Planets with GPI,” The Astronomical Journal 156, 192 (Nov. 2018).
* [4] Gravity Collaboration, Lacour, S., Nowak, M., Wang, J., Pfuhl, O., Eisenhauer, F., Abuter, R., Amorim, A., Anugu, N., Benisty, M., Berger, J. P., Beust, H., Blind, N., Bonnefoy, M., Bonnet, H., Bourget, P., Brandner, W., Buron, A., Collin, C., Charnay, B., Chapron, F., Clénet, Y., Coudé Du Foresto, V., de Zeeuw, P. T., Deen, C., Dembet, R., Dexter, J., Duvert, G., Eckart, A., Förster Schreiber, N. M., Fédou, P., Garcia, P., Garcia Lopez, R., Gao, F., Gendron, E., Genzel, R., Gillessen, S., Gordo, P., Greenbaum, A., Habibi, M., Haubois, X., Haußmann, F., Henning, T., Hippler, S., Horrobin, M., Hubert, Z., Jimenez Rosales, A., Jocou, L., Kendrew, S., Kervella, P., Kolb, J., Lagrange, A.-M., Lapeyrère, V., Le Bouquin, J.-B., Léna, P., Lippa, M., Lenzen, R., Maire, A.-L., Mollière, P., Ott, T., Paumard, T., Perraut, K., Perrin, G., Pueyo, L., Rabien, S., Ramírez, A., Rau, C., Rodríguez-Coira, G., Rousset, G., Sanchez-Bermudez, J., Scheithauer, S., Schuhler, N., Straub, O., Straubmeier, C., Sturm, E., Tacconi, L. J., Vincent, F., van Dishoeck, E. F., von Fellenberg, S., Wank, I., Waisberg, I., Widmann, F., Wieprecht, E., Wiest, M., Wiezorrek, E., Woillez, J., Yazici, S., Ziegler, D., and Zins, G., “First direct detection of an exoplanet by optical interferometry. Astrometry and K-band spectroscopy of HR 8799 e,” Astronomy and Astrophysics 623, L11 (Mar. 2019).
* [5] Gravity Collaboration, Nowak, M., Lacour, S., Mollière, P., Wang, J., Charnay, B., van Dishoeck, E. F., Abuter, R., Amorim, A., Berger, J. P., Beust, H., Bonnefoy, M., Bonnet, H., Brandner, W., Buron, A., Cantalloube, F., Collin, C., Chapron, F., Clénet, Y., Coudé Du Foresto, V., de Zeeuw, P. T., Dembet, R., Dexter, J., Duvert, G., Eckart, A., Eisenhauer, F., Förster Schreiber, N. M., Fédou, P., Garcia Lopez, R., Gao, F., Gendron, E., Genzel, R., Gillessen, S., Haußmann, F., Henning, T., Hippler, S., Hubert, Z., Jocou, L., Kervella, P., Lagrange, A.-M., Lapeyrère, V., Le Bouquin, J.-B., Léna, P., Maire, A.-L., Ott, T., Paumard, T., Paladini, C., Perraut, K., Perrin, G., Pueyo, L., Pfuhl, O., Rabien, S., Rau, C., Rodríguez-Coira, G., Rousset, G., Scheithauer, S., Shangguan, J., Straub, O., Straubmeier, C., Sturm, E., Tacconi, L. J., Vincent, F., Widmann, F., Wieprecht, E., Wiezorrek, E., Woillez, J., Yazici, S., and Ziegler, D., “Peering into the formation history of $\beta$ Pictoris b with VLTI/GRAVITY long-baseline interferometry,” Astronomy and Astrophysics 633, A110 (Jan. 2020).
* [6] Lagrange, A. M., Rubini, P., Nowak, M., Lacour, S., Grandjean, A., Boccaletti, A., Langlois, M., Delorme, P., Gratton, R., Wang, J., Flasseur, O., Galicher, R., Kral, Q., Meunier, N., Beust, H., Babusiaux, C., Le Coroller, H., Thebault, P., Kervella, P., Zurlo, A., Maire, A.-L., Wahhaj, Z., Amorim, A., Asensio-Torres, R., Benisty, M., Berger, J. P., Bonnefoy, M., Brandner, W., Cantalloube, F., Charnay, B., Chauvin, G., Choquet, E., Clénet, Y., Christiaens, V., Coudé Du Foresto, V., de Zeeuw, P. T., Desidera, S., Duvert, G., Eckart, A., Eisenhauer, F., Galland, F., Gao, F., Garcia, P., Garcia Lopez, R., Gendron, E., Genzel, R., Gillessen, S., Girard, J., Hagelberg, J., Haubois, X., Henning, T., Heissel, G., Hippler, S., Horrobin, M., Janson, M., Kammerer, J., Kenworthy, M., Keppler, M., Kreidberg, L., Lapeyrère, V., Le Bouquin, J.-B., Léna, P., Mérand, A., Messina, S., Mollière, P., Monnier, J. D., Ott, T., Otten, G., Paumard, T., Paladini, C., Perraut, K., Perrin, G., Pueyo, L., Pfuhl, O., Rodet, L., Rodriguez-Coira, G., Rousset, G., Samland, M., Shangguan, J., Schmidt, T., Straub, O., Straubmeier, C., Stolker, T., Vigan, A., Vincent, F., Widmann, F., Woillez, J., and Gravity Collaboration, “Unveiling the $\beta$ Pictoris system, coupling high contrast imaging, interferometric, and radial velocity data,” Astronomy and Astrophysics 642, A18 (Oct. 2020).
* [7] Nowak, M., Lacour, S., Lagrange, A.-M., Rubini, P., Wang, J., Stolker, T., Abuter, R., Amorim, A., Asensio-Torres, R., Bauböck, M., Benisty, M., Berger, J. P., Beust, H., Blunt, S., Boccaletti, A., Bonnefoy, M., Bonnet, H., Brandner, W., Cantalloube, F., Charnay, B., Choquet, E., Christiaens, V., Clénet, Y., Coudé Du Foresto, V., Cridland, A., de Zeeuw, P. T., Dembet, R., Dexter, J., Drescher, A., Duvert, G., Eckart, A., Eisenhauer, F., Gao, F., Garcia, P., Garcia Lopez, R., Gardner, T., Gendron, E., Genzel, R., Gillessen, S., Girard, J., Grandjean, A., Haubois, X., Heißel, G., Henning, T., Hinkley, S., Hippler, S., Horrobin, M., Houllé, M., Hubert, Z., Jiménez-Rosales, A., Jocou, L., Kammerer, J., Kervella, P., Keppler, M., Kreidberg, L., Kulikauskas, M., Lapeyrère, V., Le Bouquin, J.-B., Léna, P., Mérand, A., Maire, A.-L., Mollière, P., Monnier, J. D., Mouillet, D., Müller, A., Nasedkin, E., Ott, T., Otten, G., Paumard, T., Paladini, C., Perraut, K., Perrin, G., Pueyo, L., Pfuhl, O., Rameau, J., Rodet, L., Rodríguez-Coira, G., Rousset, G., Scheithauer, S., Shangguan, J., Stadler, J., Straub, O., Straubmeier, C., Sturm, E., Tacconi, L. J., van Dishoeck, E. F., Vigan, A., Vincent, F., von Fellenberg, S. D., Ward-Duong, K., Widmann, F., Wieprecht, E., Wiezorrek, E., Woillez, J., and Gravity Collaboration, “Direct confirmation of the radial-velocity planet $\beta$ Pictoris c,” Astronomy and Astrophysics 642, L2 (Oct. 2020).
* [8] Macintosh, B., Graham, J. R., Barman, T., De Rosa, R. J., Konopacky, Q., Marley, M. S., Marois, C., Nielsen, E. L., Pueyo, L., Rajan, A., Rameau, J., Saumon, D., Wang, J. J., Patience, J., Ammons, M., Arriaga, P., Artigau, E., Beckwith, S., Brewster, J., Bruzzone, S., Bulger, J., Burningham, B., Burrows, A. S., Chen, C., Chiang, E., Chilcote, J. K., Dawson, R. I., Dong, R., Doyon, R., Draper, Z. H., Duchêne, G., Esposito, T. M., Fabrycky, D., Fitzgerald, M. P., Follette, K. B., Fortney, J. J., Gerard, B., Goodsell, S., Greenbaum, A. Z., Hibon, P., Hinkley, S., Cotten, T. H., Hung, L.-W., Ingraham, P., Johnson-Groh, M., Kalas, P., Lafreniere, D., Larkin, J. E., Lee, J., Line, M., Long, D., Maire, J., Marchis, F., Matthews, B. C., Max, C. E., Metchev, S., Millar-Blanchaer, M. A., Mittal, T., Morley, C. V., Morzinski, K. M., Murray-Clay, R., Oppenheimer, R., Palmer, D. W., Patel, R., Perrin, M. D., Poyneer, L. A., Rafikov, R. R., Rantakyrö, F. T., Rice, E. L., Rojo, P., Rudy, A. R., Ruffio, J.-B., Ruiz, M. T., Sadakuni, N., Saddlemyer, L., Salama, M., Savransky, D., Schneider, A. C., Sivaramakrishnan, A., Song, I., Soummer, R., Thomas, S., Vasisht, G., Wallace, J. K., Ward-Duong, K., Wiktorowicz, S. J., Wolff, S. G., and Zuckerman, B., “Discovery and spectroscopy of the young jovian planet 51 Eri b with the Gemini Planet Imager,” Science 350, 64–67 (Oct. 2015).
* [9] Moe, M. and Kratter, K. M., “Dynamical Formation of Close Binaries during the Pre-main-sequence Phase,” The Astrophysical Journal 854, 44 (Feb. 2018).
* [10] Su, K. Y. L., MacGregor, M. A., Booth, M., Wilner, D. J., Flaherty, K., Hughes, A. M., Phillips, N. M., Malhotra, R., Hales, A. S., Morrison, S., Ertel, S., Matthews, B. C., Dent, W. R. F., and Casassus, S., “ALMA 1.3 mm Map of the HD 95086 System,” The Astronomical Journal 154, 225 (Dec. 2017).
* [11] Marleau, G.-D., Coleman, G. A. L., Leleu, A., and Mordasini, C., “Exploring the formation by core accretion and the luminosity evolution of directly imaged planets. The case of HIP 65426 b,” Astronomy and Astrophysics 624, A20 (Apr. 2019).
* [12] Lagrange, A.-M., Bonnefoy, M., Chauvin, G., Apai, D., Ehrenreich, D., Boccaletti, A., Gratadour, D., Rouan, D., Mouillet, D., Lacour, S., and Kasper, M., “A Giant Planet Imaged in the Disk of the Young Star $\beta$ Pictoris,” Science 329, 57 (July 2010).
* [13] Keppler, M., Benisty, M., Müller, A., Henning, T., van Boekel, R., Cantalloube, F., Ginski, C., van Holstein, R. G., Maire, A.-L., Pohl, A., Samland, M., Avenhaus, H., Baudino, J.-L., Boccaletti, A., de Boer, J., Bonnefoy, M., Chauvin, G., Desidera, S., Langlois, M., Lazzoni, C., Marleau, G.-D., Mordasini, C., Pawellek, N., Stolker, T., Vigan, A., Zurlo, A., Birnstiel, T., Brandner, W., Feldt, M., Flock, M., Girard, J., Gratton, R., Hagelberg, J., Isella, A., Janson, M., Juhasz, A., Kemmer, J., Kral, Q., Lagrange, A.-M., Launhardt, R., Matter, A., Ménard, F., Milli, J., Mollière, P., Olofsson, J., Pérez, L., Pinilla, P., Pinte, C., Quanz, S. P., Schmidt, T., Udry, S., Wahhaj, Z., Williams, J. P., Buenzli, E., Cudel, M., Dominik, C., Galicher, R., Kasper, M., Lannier, J., Mesa, D., Mouillet, D., Peretti, S., Perrot, C., Salter, G., Sissa, E., Wildi, F., Abe, L., Antichi, J., Augereau, J.-C., Baruffolo, A., Baudoz, P., Bazzon, A., Beuzit, J.-L., Blanchard, P., Brems, S. S., Buey, T., De Caprio, V., Carbillet, M., Carle, M., Cascone, E., Cheetham, A., Claudi, R., Costille, A., Delboulbé, A., Dohlen, K., Fantinel, D., Feautrier, P., Fusco, T., Giro, E., Gluck, L., Gry, C., Hubin, N., Hugot, E., Jaquet, M., Le Mignant, D., Llored, M., Madec, F., Magnard, Y., Martinez, P., Maurel, D., Meyer, M., Möller-Nilsson, O., Moulin, T., Mugnier, L., Origné, A., Pavlov, A., Perret, D., Petit, C., Pragt, J., Puget, P., Rabou, P., Ramos, J., Rigal, F., Rochat, S., Roelfsema, R., Rousset, G., Roux, A., Salasnich, B., Sauvage, J.-F., Sevin, A., Soenke, C., Stadler, E., Suarez, M., Turatto, M., and Weber, L., “Discovery of a planetary-mass companion within the gap of the transition disk around PDS 70,” Astronomy and Astrophysics 617, A44 (Sept. 2018).
* [14] Wagner, K., Follete, K. B., Close, L. M., Apai, D., Gibbs, A., Keppler, M., Müller, A., Henning, T., Kasper, M., Wu, Y.-L., Long, J., Males, J., Morzinski, K., and McClure, M., “Magellan Adaptive Optics Imaging of PDS 70: Measuring the Mass Accretion Rate of a Young Giant Planet within a Gapped Disk,” The Astrophysical Journal Letters 863, L8 (Aug. 2018).
* [15] Haffert, S. Y., Bohn, A. J., de Boer, J., Snellen, I. A. G., Brinchmann, J., Girard, J. H., Keller, C. U., and Bacon, R., “Two accreting protoplanets around the young star PDS 70,” Nature Astronomy (June 2019).
* [16] Marley, M. S., Fortney, J. J., Hubickyj, O., Bodenheimer, P., and Lissauer, J. J., “On the Luminosity of Young Jupiters,” The Astrophysical Journal 655, 541–549 (Jan. 2007).
* [17] Marleau, G.-D. and Cumming, A., “Constraining the initial entropy of directly detected exoplanets,” Monthly Notices of the Royal Astronomical Society 437, 1378–1399 (Jan. 2014).
* [18] Pollack, J. B., Hubickyj, O., Bodenheimer, P., Lissauer, J. J., Podolak, M., and Greenzweig, Y., “Formation of the Giant Planets by Concurrent Accretion of Solids and Gas,” Icarus 124, 62–85 (Nov. 1996).
* [19] Alibert, Y., Mordasini, C., and Benz, W., “Migration and giant planet formation,” Astronomy and Astrophysics 417, L25–L28 (Apr. 2004).
* [20] Boss, A. P., “Giant planet formation by gravitational instability.,” Science 276, 1836–1839 (1997).
* [21] Nayakshin, S., “Dawes Review 7: The Tidal Downsizing Hypothesis of Planet Formation,” Publications of the Astronomical Society of Australia 34, e002 (Jan. 2017).
* [22] Bonnefoy, M., Marleau, G.-D., Galicher, R., Beust, H., Lagrange, A.-M., Baudino, J.-L., Chauvin, G., Borgniet, S., Meunier, N., Rameau, J., Boccaletti, A., Cumming, A., Helling, C., Homeier, D., Allard, F., and Delorme, P., “Physical and orbital properties of $\beta$ Pictoris b,” Astronomy and Astrophysics 567, L9 (July 2014).
* [23] Bonnefoy, M., Zurlo, A., Baudino, J. L., Lucas, P., Mesa, D., Maire, A.-L., Vigan, A., Galicher, R., Homeier, D., Marocco, F., Gratton, R., Chauvin, G., Allard, F., Desidera, S., Kasper, M., Moutou, C., Lagrange, A.-M., Antichi, J., Baruffolo, A., Baudrand, J., Beuzit, J.-L., Boccaletti, A., Cantalloube, F., Carbillet, M., Charton, J., Claudi, R. U., Costille, A., Dohlen, K., Dominik, C., Fantinel, D., Feautrier, P., Feldt, M., Fusco, T., Gigan, P., Girard, J. H., Gluck, L., Gry, C., Henning, T., Janson, M., Langlois, M., Madec, F., Magnard, Y., Maurel, D., Mawet, D., Meyer, M. R., Milli, J., Moeller-Nilsson, O., Mouillet, D., Pavlov, A., Perret, D., Pujet, P., Quanz, S. P., Rochat, S., Rousset, G., Roux, A., Salasnich, B., Salter, G., Sauvage, J.-F., Schmid, H. M., Sevin, A., Soenke, C., Stadler, E., Turatto, M., Udry, S., Vakili, F., Wahhaj, Z., and Wildi, F., “First light of the VLT planet finder SPHERE. IV. Physical and chemical properties of the planets around HR8799,” Astronomy and Astrophysics 587, A58 (Mar. 2016).
* [24] Samland, M., Mollière, P., Bonnefoy, M., Maire, A.-L., Cantalloube, F., Cheetham, A. C., Mesa, D., Gratton, R., Biller, B. A., Wahhaj, Z., Bouwman, J., Brandner, W., Melnick, D., Carson, J., Janson, M., Henning, T., Homeier, D., Mordasini, C., Langlois, M., Quanz, S. P., van Boekel, R., Zurlo, A., Schlieder, J. E., Avenhaus, H., Beuzit, J.-L., Boccaletti, A., Bonavita, M., Chauvin, G., Claudi, R., Cudel, M., Desidera, S., Feldt, M., Fusco, T., Galicher, R., Kopytova, T. G., Lagrange, A.-M., Le Coroller, H., Martinez, P., Moeller-Nilsson, O., Mouillet, D., Mugnier, L. M., Perrot, C., Sevin, A., Sissa, E., Vigan, A., and Weber, L., “Spectral and atmospheric characterization of 51 Eridani b using VLT/SPHERE,” Astronomy & Astrophysics 603, A57 (July 2017).
* [25] Madhusudhan, N., Amin, M. A., and Kennedy, G. M., “Toward Chemical Constraints on Hot Jupiter Migration,” The Astrophysical Journal Letters 794, L12 (Oct. 2014).
* [26] Mordasini, C., van Boekel, R., Mollière, P., Henning, T., and Benneke, B., “The Imprint of Exoplanet Formation History on Observable Present-day Spectra of Hot Jupiters,” The Astrophysical Journal 832, 41 (Nov. 2016).
* [27] Eistrup, C., Walsh, C., and van Dishoeck, E. F., “Setting the volatile composition of (exo)planet-building material. Does chemical evolution in disk midplanes matter?,” Astronomy and Astrophysics 595, A83 (Nov. 2016).
* [28] Mollière, P. and Snellen, I. a. G., “Detecting isotopologues in exoplanet atmospheres using ground-based high-dispersion spectroscopy,” arXiv e-prints , arXiv:1809.01156 (Sept. 2018).
* [29] Öberg, K. I., Murray-Clay, R., and Bergin, E. A., “The Effects of Snowlines on C/O in Planetary Atmospheres,” The Astrophysical Journal Letters 743, L16 (Dec. 2011).
* [30] Espinoza, N., Fortney, J. J., Miguel, Y., Thorngren, D., and Murray-Clay, R., “Metal Enrichment Leads to Low Atmospheric C/O Ratios in Transiting Giant Exoplanets,” The Astrophysical Journal Letters 838, L9 (Mar. 2017).
# Gaussian Process Modelling for Improved Resolution in Faraday Depth
Reconstruction
S. W. Ndiritu1,2, A. M. M. Scaife2,3, D. L. Tabb4, M. Cárcamo2,5 & J. Hanson2
1 Max Planck Institute for Astrophysics, Karl-Schwarzschildstraße 1, 85748
Garching, Germany
2 Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy,
University of Manchester, Alan Turing Building,
Oxford Road, Manchester, M13 9PL, UK
3 The Alan Turing Institute, Euston Road, London NW1 2DB, UK
4 Division of Molecular Biology and Human Genetics, Stellenbosch University
Faculty of Medicine and Health Sciences,
Cape Town, South Africa.
5 Departamento de Ingeniería Informática, Universidad de Santiago de Chile,
Av. Ecuador 3659, Santiago, Chile
E-mail<EMAIL_ADDRESS>(SWN)
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The incomplete sampling of data in complex polarization measurements from
radio telescopes negatively affects both the rotation measure (RM) transfer
function and the Faraday depth spectra derived from these data. Such gaps in
polarization data are mostly caused by flagging of radio frequency
interference and their effects worsen as the percentage of missing data
increases. In this paper we present a novel method for inferring missing
polarization data based on Gaussian processes (GPs). Gaussian processes are
stochastic processes that enable us to encode prior knowledge in our models.
They also provide a comprehensive way of incorporating and quantifying
uncertainties in regression modelling. In addition to providing non-parametric
model estimates for missing values, we also demonstrate that Gaussian process
modelling can be used for recovering rotation measure values directly from
complex polarization data, and that inferring missing polarization data using
this probabilistic method improves the resolution of reconstructed Faraday
depth spectra.
###### keywords:
techniques: polarimetric – methods: statistical – radio continuum: general
††pubyear: 2019
## 1 Introduction
Polarisation measurements from radio telescopes with broadband receiver
systems have changed the way in which we investigate magnetised astrophysical
plasmas by enabling the use of the RM synthesis method (Burn, 1966; Brentjens
& de Bruyn, 2005). In this method the Fourier relationship between polarized
intensity as a function of wavelength-squared and the Faraday dispersion
function is exploited to recover the polarized intensity as a function of
Faraday depth, $\phi$, such that
$F(\phi)=\int_{-\infty}^{\infty}{P(\lambda^{2}){\rm
e}^{-2i\phi\lambda^{2}}~{}{\rm d}\lambda^{2}},$ (1)
where
$P(\lambda^{2})=|P(\lambda^{2})|{\rm
e}^{2i\chi(\lambda^{2})}=Q(\lambda^{2})+iU(\lambda^{2}).$ (2)
Although conceptually simple, in practical terms the implementation of this
method and the interpretation of the resulting Faraday dispersion function are
complicated by a number of factors. The first of these complications is that
the Stokes pseudo-vectors $Q$ and $U$ are not sampled natively in wavelength-
squared but linearly in frequency. A further complication is that Eq. 1 does
not represent a true Fourier relationship as $P(\lambda^{2})$ does not exist
at $\lambda^{2}<0$. Since $F(\phi)$ is not necessarily a purely real quantity
this represents a fundamental limitation in attempting to reconstruct an
unknown Faraday dispersion function from measured values of $P(\lambda^{2})$.
An additional limitation comes from the finite bandwidth and hence range of
wavelength-squared, $\Delta(\lambda^{2})$, measured by a radio telescope
receiver, as well as the potentially incomplete sampling over this bandwidth
due to various effects (RFI, instrumental problems etc.) that require flagging
(excision) of specific frequency channel data during an observation.
Incomplete sampling of complex polarization in frequency-space, as well as the
non-linear mapping of frequency to $\lambda^{2}$, results in a non-uniform
distribution of measurements in $\lambda^{2}$-space. This is typically
represented as a multiplicative weighting function, $W(\lambda^{2})$, which
results in the convolution of the Faraday dispersion function with a transfer
function,
${\rm RMTF}(\phi)=\frac{\int_{-\infty}^{\infty}{W(\lambda^{2}){\rm
e}^{-2i\phi\lambda^{2}}~{}{\rm
d}\lambda^{2}}}{\int_{-\infty}^{\infty}{W(\lambda^{2})~{}{\rm
d}\lambda^{2}}},$ (3)
known as the rotation measure transfer function, RMTF$(\phi)$ (Brentjens & de
Bruyn, 2005). Consequently, the complications caused by both the
$\lambda^{2}<0$ problem and the mapping of linear frequency to wavelength-
squared may be encapsulated in the weighting function $W(\lambda^{2})$ and its
Fourier counterpart, RMTF($\phi$). Although attempts have been made to
deconvolve the RMTF from the Faraday depth spectra, e.g. Heald (2009), the
methods are inherently under-constrained due to the
$P(\lambda^{2}<0)=\varnothing$ problem.
This can cause statistical ambiguity in the results of such a deconvolution
process, as Faraday depth is inherently complex-valued and therefore not
subject to the same conjugate symmetry as, for example, interferometric
imaging. It has also been shown that certain Faraday structures will
themselves not have conjugate symmetry (Brandenburg & Stepanov, 2014).
Consequently, under-constrained deconvolution processes are liable to
introduce spurious non-physical structures, whilst also being unable to
reconstruct all true physical structures (Macquart et al., 2012; Pratley et
al., 2020).
These issues are of particular importance when complex or compound Faraday
structures are seen along a single line of sight. In a blind data challenge
comparing different methods for extracting the parameters of Faraday
structures, Sun et al. (2015) noted that RM synthesis was highly prone to
missing composite Faraday structures and that the $\sim$25% of extragalactic
sources found observationally by Law et al. (2011) to be Faraday complex was
likely an underestimate. Sun et al. (2015) concluded that when Faraday
complexity is present, only QU-fitting methods will produce results that do
not introduce extraneous scatter in recovered RM values. Such QU-fitting
methods, e.g. Farnsworth et al. (2011); O’Sullivan et al. (2012); Ideguchi et
al. (2014), model the observed polarization, $\tilde{P}(\lambda^{2})$,
directly rather than decomposing Faraday depth spectra to recover parameters.
However, whilst QU-fitting methods can be more reliable for recovering
composite structures, in particular those including Faraday thick components,
in general they rely on physically parameterised models of the synchrotron-
emitting and Faraday rotating media along the line of sight to be known a
priori. One recent exception to this is the deconvolution method proposed by
Pratley et al. (2020), which uses a constrained clean-style non-parametric QU-
fitting approach that combines QU-fitting with deconvolution.
In this work we explore the potential for using a Gaussian Process Modelling
(GPM) approach to improve both the interpretation of Faraday depth spectra and
the recovery of physical parameters from polarization measurements. In its
simplest form, a GPM approach can be considered part of the family of QU-
fitting methods; however, it has certain advantages over physically
parameterised QU-fitting methods, which include both a robustness to using
non-exact models for imperfectly known datasets and an ability to provide
predictive models for imputation including posterior variances. The contents
of this paper are as follows: in § 3 we describe the underlying GPM
methodology including a description of the covariance kernel function employed
and the optimization of hyper-parameters; in § 4 we illustrate the impact of
GP optimisation and re-sampling of polarization data on their resulting
Faraday depth spectra in the presence of significant RFI flagging, and
demonstrate how the rotation measures for Faraday thin data can be recovered
from the hyper-parameters of the GP; in § 5 we demonstrate the method using a
simulated source population with observational specifications matching that of
the MeerKAT radio telescope; in § 6 we discuss the implications of these
results, compare the method to previous work in the literature and consider
the use of more complex covariance kernels; and in § 7 we draw our conclusions.
## 2 Gaussian Process Modelling for Faraday Depth Reconstruction
Gaussian Processes (GPs) are probabilistic non-parametric models that can be
used for a variety of regression and classification problems. A Gaussian
Process Model (GPM) can be used to describe a given dataset by assuming a
priori that the data themselves represent a realisation of a stochastic
process described by a multi-variate Gaussian distribution. Such models are
non-parametric as the Gaussian process (GP) is a prior on the data itself
rather than on a parametric model, i.e. the complexity of the model grows with
the inclusion of additional data points. GPMs are considered to be an
attractive solution when modelling systems where the degree of complexity
required by a parametric model is not necessarily supported by the amount of
information in the observations. Overly complex parametric models are prone to
over-fitting and can lead to misinterpretation of the data. In contrast,
Gaussian processes are flexible probabilistic regression models that do not
require exact knowledge of the underlying physical model and can also provide
posterior uncertainties that naturally reflect high uncertainty in regions of
data space where there are few observational constraints.
In astrophysics, Gaussian processes have been used in various applications
including in the study of photometric redshifts (Way et al., 2009),
variability in stellar light curves (Angus et al., 2018), Galactic dust
emission (Leike & Enßlin, 2019), eclipses in cataclysmic variable stars
(McAllister et al., 2016), transmission spectroscopy of extrasolar planets
(Gibson et al., 2012), and radio velocity studies for planetary candidates
(Rajpaul et al., 2015). They have also been used extensively in cosmic
microwave background analysis (Bond & Efstathiou, 1987; Wandelt & Hansen,
2003), as well as for modelling instrumental systematics and calibration, e.g.
Gibson et al. (2012); Haywood et al. (2014); Barclay et al. (2015); Czekala et
al. (2015); Evans et al. (2015); Rajpaul et al. (2015); Aigrain et al. (2016);
Rajpaul et al. (2016); Littlefair et al. (2017); Mertens et al. (2018) and
interferometric image reconstruction (Junklewitz et al., 2016).
### 2.1 GPM for imputation of missing polarization data
The transfer function described in Equation 3 is a function of the
observational sampling in $\lambda^{2}$-space. In practice this sampling is
affected not only by the characteristics of the telescope’s receiver system
but also by the necessary removal of frequency channels affected by radio
frequency interference (RFI). Radio telescopes operating at or close to L-band
(1.4 GHz) typically experience data flagging at a level of $20-50\%$ in
bandwidth due to a wide variety of both terrestrial and satellite RFI sources
(e.g. Mauch et al., 2020). As well as negatively impacting the sensitivity of
an observation, this flagging can also result in large contiguous gaps in the
frequency coverage that have a detrimental effect on the form of the RMTF.
Contingent on there being sufficient information in the remaining data,
Gaussian process modelling may provide a potential method for imputing these
missing data (in statistics, _imputation_ is the process of replacing missing
data points with substituted values such as model estimates), subject to only
weak assumptions about the form of the data and without the need for a
parameterised physical model. Furthermore, GPM not only provides a model
estimate of the value for each missing data point but also an uncertainty on
that value, allowing astronomers to set thresholds on the degree of
reliability they are prepared to accept when imputing the values of missing
data points.
From the perspective of imputation in a more general sense, this kind of
application addressing RFI excision can be considered as an example of
recovering data missing completely at random (MCAR) and/or missing at random
(MAR; e.g. Seaman et al., 2013), depending on the nature of the RFI, i.e. it
is being used to repair deficits from local interruptions in the data set
rather than recovering information from a signal at or below the reliable cusp
of detection.
### 2.2 GPM for resampling of irregular data
In addition to the imputation of missing data values, GPM can also provide a
method of re-sampling observational data onto a uniform spacing in
$\lambda^{2}$-space. Since data are natively taken in channels uniformly
spaced in frequency, some previous practical implementations of RM Synthesis
have employed an initial processing step to re-bin spectral data,
$Q(\nu)~{}\&~{}U(\nu)$, into regularly spaced arrays of $\lambda^{2}$ in order
to use an FFT transformation into Faraday depth space, as implemented in for
example pyrmsynth222https://github.com/mrbell/pyrmsynth package (described in
Pratley & Johnston-Hollitt, 2020) that was used to perform RM synthesis for
the LOFAR Two-metre Sky Survey (Van Eck et al., 2018). Like RFI flagging, the
non-linear relationship between frequency and wavelength-squared results in a
distribution of measured data points that are often separated by large
contiguous gaps in $\lambda^{2}$-space, resulting in high sidelobes in the
RMTF. Furthermore, GPM can not only be used to re-sample data within the
observed bandwidth but also to infer data beyond the boundaries of that
bandwidth. Without additional data, these inferences will be subject to
rapidly increasing degrees of uncertainty but may still be used over moderate
distances in frequency space to improve the resolution of a measurement in
Faraday depth. In practice, re-sampling to uniformly spaced
$\lambda^{2}$-positions and imputing the values of data that are missing due
to RFI flagging can be done simultaneously.
## 3 Method
The basic premise of GPM is that, instead of employing a physically motivated
or semi-empirical parametric model to describe a dataset, we can
assume that the data, $\mathbf{y}(\mathbf{x})$, represent the realization of a
random Gaussian process,
$\mathbf{y}(\mathbf{x})\sim\mathcal{N}(\mathbf{\mu}(\mathbf{x}),\mathbf{K}(\mathbf{x},\mathbf{x})),$ (4)
where $K(\mathbf{x},\mathbf{x})$ is the matrix that defines the degree of
covariance between every pair of measurement positions, $(x,x^{\prime})$, for
$\mathbf{x}=\\{x_{1},x_{2},\dots,x_{n}\\}$, and $\mathbf{\mu}(\mathbf{x})$ is
an underlying mean function, which in this case is considered to be zero-
valued. In the case of non-negligible polarization leakage, this would
manifest as $\mu(x)\neq 0$.
The expected value of the data at all other positions, $x_{\ast}$, can be
calculated as a posterior mean, $\mu_{\ast}$, with an associated posterior
variance, $C_{\ast}$, such that
$\mu_{\ast}=\mathbf{K}(x_{\ast},\mathbf{x})^{T}\mathbf{K}(\mathbf{x},\mathbf{x})^{-1}\mathbf{y},$ (5)
$C_{\ast}=\mathbf{K}(x_{\ast},x_{\ast})-\mathbf{K}(x_{\ast},\mathbf{x})^{T}\mathbf{K}(\mathbf{x},\mathbf{x})^{-1}\mathbf{K}(\mathbf{x},x_{\ast}),$ (6)
see e.g. Rasmussen & Williams (2006); Roberts et al. (2013).
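As an illustration, Eqs. 5 & 6 can be written in a few lines of numpy. Here a squared-exponential kernel stands in for the covariance kernel defined later in § 3.2, and all function and variable names are ours, not the paper's:

```python
import numpy as np

def rbf_kernel(xa, xb, h=1.0, ell=0.05):
    # Illustrative squared-exponential covariance; a stand-in for the
    # quasi-periodic kernel of Section 3.2.
    return h * np.exp(-0.5 * (xa[:, None] - xb[None, :]) ** 2 / ell ** 2)

def gp_predict(x, y, x_star, sigma_n=0.1):
    # Posterior mean (Eq. 5) and variance (Eq. 6), with the measurement
    # noise sigma_n folded into the training covariance.
    K = rbf_kernel(x, x) + sigma_n ** 2 * np.eye(x.size)
    K_s = rbf_kernel(x, x_star)          # K(x, x_*)
    K_ss = rbf_kernel(x_star, x_star)    # K(x_*, x_*)
    alpha = np.linalg.solve(K, y)        # K(x, x)^{-1} y
    mu_star = K_s.T @ alpha              # Eq. 5
    C_star = K_ss - K_s.T @ np.linalg.solve(K, K_s)  # Eq. 6
    return mu_star, np.diag(C_star)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(6.0 * x) + 0.1 * rng.normal(size=x.size)
x_star = np.linspace(0.0, 1.0, 100)
mu, var = gp_predict(x, y, x_star)
```

Note that the same posterior variance `var` is what later defines the weights at the resampled $\lambda_{\ast}^{2}$ positions.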
For polarization data measured at regular intervals in frequency (spectral
channels) and hence irregular intervals in $\lambda^{2}$, the posterior mean
can be used to infer the values of the polarization data at regular intervals,
$\lambda_{\ast}^{2}$, avoiding the need for explicit re-binning of the data,
and the posterior variance can be used to define the weighting function
$W(\lambda_{\ast}^{2})$ at those positions and hence calculate the RMTF.
### 3.1 Kernel Definition
The covariance matrix used in Eqs. 5 & 6 can be populated analytically using a
kernel function, $k$, that defines the degree of covariance for each
measurement separation,
$\mathbf{K}(\mathbf{x},\mathbf{x})=\left(\begin{array}[]{cccc}k(x_{1},x_{1})&k(x_{1},x_{2})&...&k(x_{1},x_{n})\\\
k(x_{2},x_{1})&k(x_{2},x_{2})&...&k(x_{2},x_{n})\\\
\vdots&\vdots&\vdots&\vdots\\\
k(x_{n},x_{1})&k(x_{n},x_{2})&...&k(x_{n},x_{n})\end{array}\right),$ (7)
where $k(x_{1},x_{2})$ is the covariance between two measurements at a
separation $|x_{1}-x_{2}|$. We note that this assumes statistical homogeneity
within a given data space, which may not be the case for data sets that
combine significantly different $\lambda^{2}$ regimes.
### 3.2 Covariance kernel definition
For the application to radio polarization data we use a compound covariance
kernel. The contributions to this kernel comprise covariance arising both from
the expected signal and from the properties of the measurement data. Faraday
rotation results in periodicity of the complex polarization signal but the
presence of additional Faraday components, including thick structures, can
cause deviations from exact periodicity in the data. To represent this
behaviour we use a quasi-periodic kernel formed from the multiplication of an
exponential kernel and a periodic kernel.
We implement our kernel using the celerite Gaussian process library
(Foreman-Mackey et al., 2017). Traditional GPM scales as $\mathcal{O}(N^{3})$.
This is due to the need to invert the covariance matrix and often makes GPM
computationally intractable for large datasets. In contrast, the celerite GP
implementation uses a restricted set of kernels that allow for fast separable
evaluation with a linear complexity of $\mathcal{O}(N)$.
The celerite kernel has a generalised exponential form,
$k(x_{i},x_{j})=\sum_{n=1}^{J}{\alpha_{n}{\rm
e}^{-\beta_{n}|x_{i}-x_{j}|}+\alpha_{n}^{\ast}{\rm
e}^{-\beta_{n}^{\ast}|x_{i}-x_{j}|}},$ (8)
which can be used to produce a quasi-periodic kernel of the form,
$k_{1}(x_{i},x_{j})=h\,{\rm
e}^{-c|x_{i}-x_{j}|}\left[\cos\left(\frac{2\pi|x_{i}-x_{j}|}{P}\right)\right].$
(9)
In this work we combine this quasi-periodic kernel with a white noise kernel,
$k_{2}(x_{i},x_{j})=\sigma_{\rm n}^{2}\,\delta_{ij},$ (10)
to take account of the thermal noise present on observational measurements of
the complex polarization. Together, $k_{1}$ and $k_{2}$ are used to create a
composite kernel
$k=k_{1}+k_{2},$ (11)
where the contributions of the different components are governed by the values
of their hyper-parameters, $h$, $c$, $P$ and $\sigma^{2}$. While the values of
these hyper-parameters need to be inferred from the data, they do not
explicitly characterise a functional form for the data, but instead govern the
scales on which different covariance distributions act.
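The composite kernel of Eqs. 9-11 can also be evaluated densely, which is enough to illustrate the roles of the hyper-parameters $h$, $c$, $P$ and $\sigma_{\rm n}$. The sketch below is a plain $\mathcal{O}(N^{2})$-storage numpy version (names ours); the celerite implementation used in the paper evaluates the same quasi-periodic form with $\mathcal{O}(N)$ scaling:

```python
import numpy as np

def quasi_periodic_kernel(xi, xj, h, c, P):
    # Eq. 9: exponential decay (scale 1/c) times a cosine of period P,
    # with overall amplitude h.
    tau = np.abs(xi[:, None] - xj[None, :])
    return h * np.exp(-c * tau) * np.cos(2.0 * np.pi * tau / P)

def composite_covariance(x, h, c, P, sigma_n):
    # Eq. 11: k = k1 + k2, where k2 is the white-noise term of Eq. 10.
    K = quasi_periodic_kernel(x, x, h, c, P)
    K += sigma_n ** 2 * np.eye(x.size)  # sigma_n^2 * delta_ij
    return K

# Illustrative lambda^2 sampling positions [m^2] (our choice of values)
x = np.linspace(0.01, 0.27, 64)
K = composite_covariance(x, h=1.0, c=0.5, P=0.06, sigma_n=0.1)
```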
### 3.3 Hyper-parameter Estimation
The values of the hyper-parameters can be optimized by evaluation of a
likelihood function using the measured data. Polarization data are complex
valued, where
$\mathbf{P}(\mathbf{\nu})=\mathbf{Q}(\mathbf{\nu})+i\mathbf{U}(\mathbf{\nu}),$
(12)
and so the loglikelihood function can be broken down such that
$\log L=\log L_{\rm Q}+\log L_{\rm U}.$ (13)
The components of this likelihood are
$\log L_{\rm Q}=-\frac{1}{2}(\mathbf{Q}-\mathbf{\mu}_{\rm Q})^{\rm T}\mathbf{K}_{\rm Q}^{-1}(\mathbf{Q}-\mathbf{\mu}_{\rm Q})-\frac{1}{2}\ln|\mathbf{K}_{\rm Q}|,$ (14)
$\log L_{\rm U}=-\frac{1}{2}(\mathbf{U}-\mathbf{\mu}_{\rm U})^{\rm T}\mathbf{K}_{\rm U}^{-1}(\mathbf{U}-\mathbf{\mu}_{\rm U})-\frac{1}{2}\ln|\mathbf{K}_{\rm U}|,$ (15)
where
$\mathbf{\mu}_{\rm Q}=\mathbf{K}_{\rm Q}(x_{\ast},\mathbf{x})^{T}\mathbf{K}_{\rm Q}(\mathbf{x},\mathbf{x})^{-1}\mathbf{Q},$ (16)
$\mathbf{\mu}_{\rm U}=\mathbf{K}_{\rm U}(x_{\ast},\mathbf{x})^{T}\mathbf{K}_{\rm U}(\mathbf{x},\mathbf{x})^{-1}\mathbf{U}.$ (17)
In the case where the covariance matrix is characterised solely by stationary
kernels, i.e. kernels that are functions of the sample separation,
$|x_{i}-x_{j}|$, only, it can be assumed that $\mathbf{K}_{\rm
U}\equiv\mathbf{K}_{\rm Q}$ and in this paper we assume a single kernel of the
form given in Equation 9 to describe both Stokes parameters, the hyper-
parameters of which are optimised using the joint likelihood in Equation 13.
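For the zero-mean GP stated in § 3, each likelihood component reduces to a standard Gaussian log-likelihood, and the joint likelihood of Equation 13 is simply their sum. A simplified sketch (names ours; constant $2\pi$ terms dropped, prior mean taken as zero):

```python
import numpy as np

def gauss_loglike(y, K):
    # Gaussian log-likelihood of Eqs. 14-15 for a zero-mean GP,
    # with additive 2*pi constants dropped.
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * y @ np.linalg.solve(K, y) - 0.5 * logdet

def joint_loglike(Q, U, K):
    # Eq. 13: log L = log L_Q + log L_U, with K_U = K_Q for a
    # stationary kernel shared between the Stokes parameters.
    return gauss_loglike(Q, K) + gauss_loglike(U, K)
```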
## 4 Application to Simulated Data
### 4.1 Example scenarios
We illustrate the use of the GPM method on two examples: (1) a simple Faraday
Thin scenario, and (2) a more complex scenario involving both Faraday Thin and
Faraday Thick components. This second scenario corresponds to the example
geometry outlined in § 1 of Brentjens & de Bruyn (2005).
Scenario One In this scenario we assume polarized emission from the lobe of a
radio galaxy with an intensity of 1 Jy at a reference frequency of $\nu_{\rm
ref}=1.4$ GHz and a synchrotron spectral index of $\alpha=0.7$, which has a
single valued Faraday depth of 50 rad m-2 such that,
$P_{\rm rg}(\lambda^{2})=\left(\frac{\lambda^{2}}{\lambda_{\rm ref}^{2}}\right)^{\alpha/2}\exp(2i\phi_{1}\lambda^{2})$ (18)
$=\left(\frac{\lambda^{2}}{\lambda_{\rm ref}^{2}}\right)^{\alpha/2}\left[\cos(2\phi_{1}\lambda^{2})+i\sin(2\phi_{1}\lambda^{2})\right].$ (19)
Scenario Two In this scenario we adopt the second line of sight geometry of
Brentjens & de Bruyn (2005), where
$F_{\rm gal}(\phi)=\begin{cases}(2\phi_{\rm fg})^{-1}&-\phi_{\rm
fg}~{}<~{}\phi~{}<~{}\phi_{\rm fg}\\\ 0&{\rm elsewhere}\end{cases}$ (20)
$P_{\rm gal}(\lambda^{2})=\frac{\sin(2\phi_{\rm fg}\lambda^{2})}{2\phi_{\rm
fg}\lambda^{2}}$ (21)
and the total polarization is given by
$P_{\rm tot}(\lambda^{2})=P_{\rm gal}(\lambda^{2})+P_{\rm rg}(\lambda^{2}).$
(22)
In both scenarios we assume that the signal is measured over a frequency
bandwidth from 0.58 GHz to 2.50 GHz in 512 evenly spaced frequency channels.
We map each channel frequency to a corresponding value of $\lambda^{2}$. For
each measurement we assume that $(Q,U,\sigma_{\rm Q},\sigma_{\rm U})$ are
recorded, where $Q$ corresponds to the real part of the polarization and $U$
corresponds to the imaginary component.
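A minimal simulation of the Scenario 1 measurements (Eqs. 18-19 on the stated channel grid, with independent Gaussian noise added to Q and U) might look as follows; variable names are ours, and the per-channel noise level corresponds to ${\rm SNR}_{\rm chan}=1/\sigma$ for the 1 Jy source:

```python
import numpy as np

C_LIGHT = 2.998e8  # speed of light [m/s]

# Observational setup of Section 4.1: 512 channels from 0.58 to 2.50 GHz
nu = np.linspace(0.58e9, 2.50e9, 512)
lam2 = (C_LIGHT / nu) ** 2
lam2_ref = (C_LIGHT / 1.4e9) ** 2

def scenario_one(lam2, phi1=50.0, alpha=0.7, sigma=1.0, seed=0):
    # Eqs. 18-19: Faraday-thin source, 1 Jy at the reference frequency,
    # with noise added independently to the real (Q) and imaginary (U) parts.
    rng = np.random.default_rng(seed)
    P = (lam2 / lam2_ref) ** (alpha / 2) * np.exp(2j * phi1 * lam2)
    Q = P.real + rng.normal(0.0, sigma, lam2.size)
    U = P.imag + rng.normal(0.0, sigma, lam2.size)
    return P, Q, U

P, Q, U = scenario_one(lam2)
```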
### 4.2 Faraday Depth Spectra
We project the data into Faraday depth space following the numerical method
outlined in Brentjens & de Bruyn (2005). The resulting Faraday depth spectra
are calculated using,
$F(\phi)=K^{-1}\sum_{i=1}^{N}{W_{i}P_{i}{\rm
e}^{-2j(\lambda_{i}^{2}-\lambda_{0}^{2})\phi}},$ (23)
where $W_{i}$ is the weight for channel $i$ at a wavelength of $\lambda_{i}$,
and $P_{i}$ is the complex polarization measured in that channel, see Equation
2. These are the _dirty_ Faraday depth spectra, i.e. they are convolved with
the RMTF, which itself is calculated using,
$R(\phi)=K^{-1}\sum_{i=1}^{N}{W_{i}{\rm
e}^{-2j(\lambda_{i}^{2}-\lambda_{0}^{2})\phi}}.$ (24)
In each case, $K$, is the sum of the weights,
$K=\sum_{i=1}^{N}{W_{i}},$ (25)
where the weights are defined as the reciprocal of the variance in each
channel, and $\lambda_{0}$ is a reference wavelength used to derotate the
polarization data to a common zero-point defined as
$\lambda_{0}^{2}=K^{-1}\sum_{i=1}^{N}{W_{i}\lambda_{i}^{2}}.$ (26)
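Equations 23-26 translate directly into a short numpy routine (an illustrative sketch; function and variable names are ours):

```python
import numpy as np

def faraday_spectrum(phi, lam2, P, W):
    # Dirty Faraday depth spectrum F(phi) (Eq. 23) and RMTF R(phi) (Eq. 24)
    K = W.sum()                                  # Eq. 25: sum of weights
    lam2_0 = (W * lam2).sum() / K                # Eq. 26: reference wavelength^2
    phase = np.exp(-2j * np.outer(phi, lam2 - lam2_0))
    F = phase @ (W * P) / K                      # Eq. 23
    R = phase @ W / K                            # Eq. 24
    return F, R

# Faraday-thin test source at phi = 50 rad/m^2 with uniform weights
lam2 = np.linspace(0.01, 0.27, 256)
W = np.ones_like(lam2)
phi = np.linspace(-200.0, 200.0, 401)
F, R = faraday_spectrum(phi, lam2, np.exp(2j * 50.0 * lam2), W)
```

For this noiseless Faraday-thin input, $|F(\phi)|$ peaks at the input Faraday depth with unit amplitude, and $|R(0)|=1$ by construction.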
The recovered Faraday depth spectra for Scenario 2 are shown in Figure 1,
which illustrates the difference in the Faraday depth spectra when
polarization data are sampled uniformly in frequency compared to being sampled
uniformly in $\lambda^{2}$-space. From the figure, two key differences can be
observed. Firstly, by comparing the FWHM of the RMTF in each case, it can be
seen that the resolution in Faraday depth is improved by a factor of $\sim 2$.
This is important for resolving different Faraday components and complex
structure. Secondly, it can be seen that the main lobe of the RMTF is clearly
separated from the sidelobe structure in the case of uniform
$\lambda^{2}$-sampling, and that the amplitude of the first sidelobe is $\sim
3$ times smaller than that resulting from uniform sampling in frequency. As
well as aiding visual inspection, this is important for approaches that
incorporate deconvolution, as incorrect or imperfect identification of model
component positions during deconvolution will result in residuals with
amplitudes proportional to RMTF sidelobe levels. As a consequence of these
factors, the Faraday thick structure from Scenario 2 in Figure 1 is resolved
in the resulting Faraday depth spectrum when using uniform
$\lambda^{2}$-sampling, with the positions of the edges clearly delineated.
Additionally, we note that the sidelobe response of regular sampling in
$\lambda^{2}$-space could be further improved by the application of a window
function prior to Fourier transforming, and that such an application could be
tailored to balance the contributions from sidelobe structure, thermal noise
and resolution, as needed for an individual science case.
Figure 1: Scenario 2: RMTF (left), Faraday depth spectrum (centre) and
Faraday depth spectrum overlaid with input model (right) for the case of (i)
data evenly sampled in frequency (top) and (ii) data evenly sampled in
wavelength-squared (bottom). Data are noiseless in both cases.
In what follows, we use inverse variance weights for calculating Faraday depth
spectra. For the optimised GP reconstructions, we use an approximate
predictive posterior variance for the weights given by
$\sigma_{\ast}^{2}=C_{\ast}+\sigma_{\rm n}^{2},$ (27)
(Rasmussen & Williams, 2006) where $\sigma_{\rm n}^{2}$ is the amplitude of
the white noise kernel in Equation 10. The white noise variance needs to be
included because we are building a Faraday depth spectrum from the model
estimate of the noisy complex polarization data.
### 4.3 Parameter Estimation
To optimize the hyper-parameters of our GP kernel we use MCMC implemented via
the emcee package (Foreman-Mackey et al., 2013) to explore the posterior
probability distribution. We use 500 samples for a burn-in then take the
maximum likelihood hyper-parameter values from the burn-in and perturb them by
adding noise at a level of $10^{-5}$. We then use these perturbed values to
start a production run of 5000 samples. Prior ranges for each of the hyper-
parameters are shown in Table 1. The lower limit on $\log P$ corresponds to a
maximum Faraday depth of $\sim 1200$ rad m-2.
Table 1: Range of uniform priors used for each of the hyper-parameters in GP kernel, see Equation 9.

Hyper-parameter | Prior
---|---
$\ln h$ | (-25, 5)
$\ln c$ | (-25, 5)
$\ln P$ | (-6, 10)
$\ln\sigma$ | (-25, 5)
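As a sketch of how the Table 1 priors enter the sampling, a log-prior and log-posterior of the kind passed to an MCMC sampler might look as follows. Names are ours, and `loglike` stands for any callable returning the joint log-likelihood of Equation 13:

```python
import numpy as np

# Uniform prior ranges on (ln h, ln c, ln P, ln sigma) from Table 1
PRIOR_BOUNDS = ((-25.0, 5.0), (-25.0, 5.0), (-6.0, 10.0), (-25.0, 5.0))

def log_prior(theta):
    # Flat prior inside the Table 1 ranges, -inf outside
    for value, (lo, hi) in zip(theta, PRIOR_BOUNDS):
        if not lo < value < hi:
            return -np.inf
    return 0.0

def log_posterior(theta, loglike):
    # Posterior probability explored by the MCMC sampler
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + loglike(theta)
```

Such a function could be handed to `emcee.EnsembleSampler(nwalkers, 4, log_posterior, args=(loglike,))` for the burn-in and production runs described above.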
### 4.4 Imputation of Missing Data
RFI flagging creates gaps in the polarization data thereby reducing the
$\lambda^{2}$-coverage in an irregular fashion and causing the sidelobe
structure in the RMTF to become more prominent.
Figure 2: Left upper: Stokes parameters for Scenario 1 (see § 4.1) with
SNR${}_{\rm chan}=1$ and 30% of the data flagged (black & grey error bars),
and the maximum a posteriori GP model (Equation 9; blue solid line) with one
standard deviation posterior uncertainty (grey shaded area). Left lower:
residuals between the posterior mean for Stokes Q and the data (black points)
and residuals between the posterior mean and the missing/flagged Stokes Q data
(blue points). The $\pm 3\sigma$ limits are marked by grey dashed lines.
Right: The inferred period of the process with the true period marked
(vertical blue line) as well as the 16%, 50% and 84% confidence intervals of
the posterior probability distribution (grey dashed lines).
To mimic the RFI flagging process, we remove random portions of data from the
simulated data set. To do this, a random position in the data set is selected
and a chunk comprised of a randomly selected number of entries is removed.
This process is repeated until the desired percentage of data is removed. We
simulate data sets with 20%, 30% and 40% missing values. An example of this
flagging using the Scenario 1 dataset is shown in Figure 2, which shows the
input data, flagged at a level of 30%. These data are scattered by white
noise, giving a signal-to-noise of one in each channel.
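The chunk-removal procedure described above can be sketched as follows. This is an illustrative reconstruction: the cap on chunk length (`max_chunk`) is our assumption, since the distribution of chunk sizes is not specified:

```python
import numpy as np

def flag_chunks(n_chan, frac, max_chunk=20, seed=0):
    # Boolean keep-mask with ~frac of channels removed in random contiguous
    # chunks, mimicking RFI flagging: pick a random start position and a
    # random chunk length, flag that chunk, and repeat until the target
    # fraction of channels is removed.
    rng = np.random.default_rng(seed)
    keep = np.ones(n_chan, dtype=bool)
    target = int(frac * n_chan)
    while (~keep).sum() < target:
        start = int(rng.integers(0, n_chan))
        length = int(rng.integers(1, max_chunk + 1))
        keep[start:start + length] = False
    return keep

keep = flag_chunks(512, 0.30)  # 30% flagging, as in Figure 2
```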
Following Sun et al. (2018), we define the SNR of the integrated polarization
data as
${\rm SNR}_{\rm int}=\sqrt{N_{\rm chan}}\times\frac{P_{0}}{\sigma_{\nu}},$
(28)
where $\sigma_{\nu}$ is the standard deviation of the random Gaussian noise
added to each frequency channel and $N_{\rm chan}$ is the number of frequency
channels. $P_{0}$ is the polarized amplitude at the reference frequency, taken
in this case to be 1.4 GHz. Noise is added independently to both Stokes Q and
Stokes U data.
Figure 2 also shows the posterior mean from the GP model, optimized using only
the unflagged data points, as well as the residual between the input data and
the optimised GP model for both the unflagged and flagged data points. The
right-hand panel of Figure 2 shows the posterior probability distribution for
the hyper-parameter, $P$, which can be used to calculate the rotation measure
as described later in this work, see Section 4.6.
The effect of the flagging process on the Faraday depth spectrum is shown for
Scenario 1 in Figure 3. In Figure 3 (centre), data have been imputed at
regular intervals across the full bandwidth of the simulation and the Faraday
depth spectrum calculated using the posterior mean values, weighted by the
posterior covariance at each point.
Figure 3: Faraday depth spectra for Scenario 1 (Faraday thin) with 20% (top),
30% (middle) and 40% (bottom) of channel data flagged and ${\rm SNR}_{\rm
chan}=1$. Left: Faraday depth spectrum from flagged data. Centre: Faraday
depth spectra from GPM reconstructed data as described in § 4.4. Right:
Faraday depth residuals formed by subtracting the noiseless model from the GP
estimate. The $\pm$5 $\sigma_{\phi}$ limits are shown as a light grey shaded
area.
It is noticeable in Figure 3 that the spectra calculated from the optimized GP
model appear significantly cleaner than those calculated from the original
data. This is in part because the sidelobe structure due to the gaps in
frequency coverage is reduced, but also because the posterior mean values from
the model are not scattered by the thermal noise in the same way as the
original data. Although this may be advantageous in enhancing some features of
the spectra, it can also potentially lead to over-interpretation of low-level
features. We caution that structures present at amplitudes below
$5-8\,\sigma_{\phi}$ should not be considered to represent true astrophysical
components but are more likely to arise from structure in the noise. For
uniform spectral noise, i.e. equal noise in each frequency channel, the value
of $\sigma_{\phi}$ can be calculated by taking into account the number of
independent measurements in complex polarization, which is $N_{\rm
chan}^{\prime}$, where $N_{\rm chan}^{\prime}$ is the number of unflagged
frequency channels in the data set. By analogy to the quantity
$P(\lambda^{2})$, which has $\sigma_{P}=\sigma$ when
$\sigma_{Q}=\sigma_{U}=\sigma$ (Brentjens & de Bruyn, 2005), the amplitude of
the Faraday depth spectrum will have noise $\sigma_{\phi}=\sigma/\sqrt{N_{\rm
chan}^{\prime}}$. The threshold indicated in Figure 3 is shown at
$5\sigma_{\phi}$.
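The noise threshold in Faraday depth space is a one-line calculation; a minimal sketch (the function name and the example values are ours):

```python
import numpy as np

def faraday_noise(sigma_qu, n_unflagged):
    """Noise in the Faraday depth spectrum amplitude,
    sigma_phi = sigma / sqrt(N'_chan), assuming equal per-channel
    noise sigma in Stokes Q and U (Brentjens & de Bruyn 2005)."""
    return sigma_qu / np.sqrt(n_unflagged)

# e.g. 320 channels with 30% flagged leaves 224 unflagged channels;
# the 5 sigma_phi threshold for sigma = 0.1 (arbitrary flux units):
threshold = 5 * faraday_noise(0.1, 224)
```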
The right-hand column of Figure 3 shows the Faraday depth spectrum
reconstructed from the difference between the GP estimate and the noiseless
model in Scenario 1. We note that this difference is not the same as the
residual that would be obtained from deconvolution of Faraday depth
reconstruction of the GP estimate, which depends only on the data itself, but
instead describes the degree of difference between the true model structure
and the estimate fitted to the noisy data. Residuals in this structure below
the level of the noise threshold are consistent with the model over-fitting
noise fluctuations in the Stokes Q and U data. One notable feature of this
difference at low signal-to-noise is the appearance of a residual peak at
$\phi=-50$ rad m-2 for Scenario
1. This is due to the fact that the GP model is not parametrically constrained
to produce the same amplitude for both the Stokes Q and U components, but
rather the amplitude will follow the pattern of the noise in the data. For low
signal-to-noise data, this can result in local offsets between the amplitude
of Q and U inferred by the GP estimate as a function of $\lambda^{2}$. From
the perspective of the model this local offset removes flux from the true peak
and moves it to the mirrored peak, resulting in the double structure seen in
the residual Faraday depth spectrum. However, once again we note that from the
perspective of the data, this is a consequence of the noise realisation, which
is why it is reflected in the GP model estimate.
As illustrated by Foreman-Mackey et al. (2017), it is often the case that
valid inferences can be made using incorrect GP models. For physical processes
whose underlying properties are not well understood, such as generalised
Faraday spectra, this can significantly complicate the process of selecting a
parametric model. Incorporating domain knowledge helps to make the problem of
model selection and interpretation of parameters less difficult. We
demonstrate this by using the GP kernel from Scenario 1 to impute missing data
for simulations of Scenario 2. An example of this is shown in Figure 4, which
shows a simulated dataset where 20% of the data points have been removed to
mimic RFI flagging. In spite of not being an exact model, the optimized GP is
able to reconstruct the missing data within the measurement uncertainties. It
can also be seen that the optimized model is able to recover the rotation
measure of the radio galaxy with high accuracy.
Figure 4: Left upper: Stokes parameters for Scenario 2 (see § 4.1; black &
grey error bars), and maximum a posteriori GP model (Equation 9; blue solid
line) with one standard deviation posterior uncertainty (grey shaded area).
Left lower: residuals between the posterior mean for Stokes Q and the data
(black points) and residuals between the posterior mean and the
missing/flagged Stokes Q data (blue points). The $\pm 3\sigma$ limits are
marked by grey dashed lines. Right: The inferred period of the process with
the true period marked (vertical blue line) as well as the 16%, 50% and 84%
confidence intervals of the posterior probability distribution (grey dashed
lines).
In this case, recovering the Faraday depth of the thin component is not as
strong an indicator of performance as in Scenario 1. Instead we compare the
posterior mean of the optimized GP to the input model without noise and the
residuals for this comparison are shown in the lower left hand panel of Figure
4. Figure 5 shows the Faraday depth spectrum reconstructed from the optimized
GP for Scenario 2 in the presence of different levels of RFI flagging. These
should be compared to the noiseless model spectrum shown in Figure 1.
Figure 5: Faraday depth spectra for Scenario 2 (Faraday complex) with 20%
(top), 30% (middle) and 40% (bottom) of channel data flagged and ${\rm
SNR}_{\rm chan}=5$ at a reference frequency of 1.4 GHz. Left: Faraday depth
spectrum from flagged data. Centre: Faraday depth spectra reconstructed from
the GP estimate as described in § 4.4. Right: Faraday depth residuals formed
by subtracting the noiseless model from the GP estimate. The $\pm$5
$\sigma_{\phi}$ limits are shown as a light grey shaded area.
### 4.5 Multiple Sources along the line of sight
As described in Sun et al. (2015), it is expected that many lines of sight
will have more than one Faraday thin component. Such Faraday geometries are
also able to be recovered by the kernel in Equation 9. In this case the hyper-
parameter $P$ will represent the $|RM|$ of the component with the highest
rotation. An example of where this can occur is the super-position of radio
galaxy jets or lobes oriented along a line of sight unresolved by the beam of
a telescope. Such geometries result in double Faraday components, typically
with similar Faraday depths as the integrated rotation differs only by the
degree contributed in the local medium between the jets whilst experiencing a
common degree of rotation from the comparatively larger path between the
galaxy and the observer. An example of such a situation is shown in Figure 6.
We illustrate the GP model in this figure at high signal to noise, SNR${}_{\rm
chan}=20$, to demonstrate that the same GP kernel from Equation 9 can
accurately represent the complex polarization behaviour of multiple sources
without requiring additional kernel components. The Faraday depth spectra in
the lower panels of Figure 6 show the recovered Faraday depth spectrum from
the noisy input data (left) and from the optimised GP reconstruction (right).
It can be observed that the double structure is more apparent in the GP
reconstruction. We note that the separation of the peaks in Faraday depth is
wider than the separation of components in the model. This is not an artifact
of the GP reconstruction, but due to the position of the first null in the
RMTF.
Figure 6: Upper: Stokes parameters for line of sight comprised of two Faraday
thin sources with SNR${}_{\rm chan}=20$ (black & grey error bars), and maximum
a posteriori GP model (Equation 9; blue solid line) with one standard
deviation posterior uncertainty (grey shaded area). Left lower: Faraday depth
spectrum calculated from the noisy input data sampled at regular intervals in
frequency. Right lower: Faraday depth spectrum calculated from the optimised
GPM sampled at regular intervals in $\lambda^{2}$-space with the model
components marked as solid black lines.
### 4.6 Rotation Measure Extraction
Under certain circumstances it is possible to use the optimised hyper-
parameters to recover the rotation measure of a source. In principle, for
Faraday thin sources the absolute value of the rotation measure can be
recovered from the periodicity using,
$|{\rm RM}|=\frac{\pi}{P}.$ (29)
Due to the stationary nature of the covariance kernels, it is not possible to
recover the sign of the rotation measure from the hyper-parameters of the
covariance matrix; however, it can be recovered by comparing the correlation
co-efficient, $c_{\rm corr}$, between (e.g.) the Stokes Q data and the Stokes
U data shifted by $\pm\pi/2$ radians, $U^{\pm}$:
$c_{\rm corr}^{\pm}=\langle Q,U^{\pm}\rangle.$ (30)
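Equations 29 and 30 can be sketched as below. On a regular $\lambda^{2}$ grid, the $\pm\pi/2$ shifts of Equation 30 correspond to shifting U by a quarter of the period $P$; the shift that makes U resemble Q fixes the sign. This is a minimal illustration under those assumptions, with helper names that are ours, not the paper's.

```python
import numpy as np

def abs_rm(period):
    """|RM| from the optimised periodicity hyper-parameter (Equation 29)."""
    return np.pi / period

def rm_sign(q, u, quarter_samples):
    """Sign of the RM via the correlations of Equation 30: shift U by
    +/- a quarter period (+/- pi/2 radians) and compare the correlation
    coefficients with Q. Assumes a regular lambda^2 grid."""
    c_plus = np.dot(q, np.roll(u, -quarter_samples))   # U advanced by pi/2
    c_minus = np.dot(q, np.roll(u, quarter_samples))   # U delayed by pi/2
    return 1 if c_plus > c_minus else -1

# synthetic Faraday thin source: Q + iU = exp(2i * phi * lambda^2)
phi_true = 50.0                          # rad m^-2
lam2 = np.linspace(0.0, 0.5, 1000)       # regular lambda^2 grid, m^2
q = np.cos(2 * phi_true * lam2)
u = np.sin(2 * phi_true * lam2)

period = np.pi / abs(phi_true)           # periodicity in lambda^2
quarter = int(round(period / 4 / (lam2[1] - lam2[0])))
rm = rm_sign(q, u, quarter) * abs_rm(period)
```

Applied to the synthetic source above, `rm` recovers both the magnitude and the sign of `phi_true`.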
Using Scenario 1, we evaluate the accuracy of RM recovery for the Faraday
thin source as a function of the signal-to-noise (SNR) of the integrated
polarization data, ${\rm SNR}_{\rm int}$. It can be seen in Figure 7 that for
Scenario 1, the rotation measure is recovered with $<1\,\sigma$ deviation from
the true value even for low values of ${\rm SNR}_{\rm int}<10$, where the
signal-to-noise on an individual frequency channel, ${\rm SNR}_{\rm chan}\ll
1$. A minimum value of ${\rm SNR}_{\rm int}=8$ is typically used to identify
sources in total polarization images, see e.g. George et al. (2012), and this
value is marked in Figure 7. It can be seen that the optimized RM value is in
good agreement with both the expectation value and the true value of the RM
above this limit.
However, we note that when using this method to obtain an RM there is a lower
limit on the absolute value of the rotation measure that can be reliably
recovered. This limit is dependent on both the frequency coverage of the
observational data and the signal to noise ratio. At very low values of
Faraday depth the degree of rotation in the complex polarization signal
becomes smaller and causes a degeneracy between hyper-parameter $c$ in
Equation 9 and the periodicity, $P$. This is exacerbated for data at higher
frequencies where the rotation is more poorly sampled. This lack of rotation
also causes the method for assigning the direction of the rotation measure
described in Equation 30 to be less effective, as there are not guaranteed to
be $\pm\pi/2$ radians of rotation present in the observed data. As this lower
limit, $\phi_{\rm lim}$, is a strong function of frequency coverage, it is
possible to determine it a priori for a particular telescope such that
$\phi_{\rm lim}=\frac{\pi}{4\Delta\lambda^{2}},$ (31)
where $\Delta\lambda^{2}$ is the coverage of the observational data in
$\lambda^{2}$-space and the given expression corresponds to that coverage
being equivalent to $\pi/2$ radians for $\phi_{\rm lim}$. As illustrated in
Figure 7, RM values above this limit are recovered reliably for SNR${}_{\rm
int}\geq 8$.
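Because $\phi_{\rm lim}$ depends only on the band, Equation 31 can be evaluated a priori. A sketch for the 0.88–1.68 GHz band used in the simulations later in this work:

```python
import numpy as np

c = 299_792_458.0                  # speed of light, m/s
nu_lo, nu_hi = 0.88e9, 1.68e9      # band edges in Hz (as in Section 5)

lam2_max = (c / nu_lo) ** 2        # lambda^2 at the low-frequency edge
lam2_min = (c / nu_hi) ** 2        # lambda^2 at the high-frequency edge
delta_lam2 = lam2_max - lam2_min   # lambda^2 coverage, m^2

# Equation 31: the lower limit on reliably recoverable |RM|
phi_lim = np.pi / (4 * delta_lam2)
```

For this band the limit works out to roughly 9 rad m-2.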
We emphasise that this limit is only applicable to the calculation of the
rotation measure from the hyper-parameters and does not affect the improvement
in the Faraday depth spectrum from GPM re-sampling or imputation, as shown in
Figures 3 & 5, where structure at low Faraday depths is recovered equivalently
well.
Figure 7: Recovered rotation measures (RM) as a function of the signal-to-
noise (SNR) in the integrated polarization data for Scenario 1 are shown for a
range of different $\phi_{\rm gal}$ values. The difference between the
expectation of the RM from the GP and the input to the Scenario 1 model are
shown in each case with uncertainties calculated using the 68% confidence
interval on the optimised value of $\log P$, which results in asymmetric
uncertainties on the RM values. The point at which the signal-to-noise in an
individual frequency channel is equal to one is shown as a vertical dashed
line (blue). The distribution of equivalent expectation values for ten
different noise realisations are shown as light grey lines.
## 5 Performance considerations for surveys
We now extend the application of the method to a large simulated data set,
such as might be recovered from a MeerKAT observation of a survey field. We
use a
simulated data set containing Stokes Q and Stokes U data for 10,000 Faraday
thin sources. Observational parameters are matched to those of the MeerKAT
telescope, one of the precursor arrays of the Square Kilometre Array (SKA)
(Dewdney et al., 2009; Jonas, 2009). The MeerKAT consists of 64 13.5 m
parabolic dishes sited in the Northern Cape province of South Africa (Jonas,
2016; Taylor & Jarvis, 2017).
A population of 10,000 Faraday thin sources was created with a Stokes I flux
density at 1.4 GHz taken from a uniform distribution with values between 10
$\mu$Jy and 30 mJy, and an RM taken from a uniform distribution between -100
and +100 rad m-2. Stokes Q and U values were generated for 320 channels
spaced uniformly in frequency from 0.88 GHz to 1.68 GHz. The intrinsic angle
of polarisation for each source, $\chi_{0}$, was sampled randomly from a
uniform distribution $[-\pi,+\pi)$. For each frequency channel, noise was
added to the Stokes Q and U values using a normal distribution with a mean of
zero and a standard deviation of $180\,\mu$Jy beam-1, equivalent to an integrated noise
level of $\sim 10\mu$Jy beam-1. These specifications are representative of the
observational parameters for the MeerKAT MIGHTEE survey (Taylor & Jarvis,
2017).
The frequency-dependence of the flux density was modelled such that
$S_{\nu}=S_{\rm 1.4\,GHz}\left(\frac{\nu}{\rm 1.4\,GHz}\right)^{-\alpha},$
(32)
where $S_{\nu}$ is the flux density at each frequency, $\nu$,
$S_{1.4\rm{GHz}}$ is the flux density at 1.4 $\rm{GHz}$, and $\alpha$ is the
spectral index, which is taken to be the canonical value of $\alpha$ = 0.7 for
synchrotron radiation.
The intensity of the polarised signal $P$ is given by
$P=\sqrt{Q^{2}+U^{2}}=I\,\Pi,$ (33)
where $Q$ and $U$ are the Stokes parameters for linear polarisation, $I$ is
the total intensity Stokes parameter, and $\Pi$ is the polarisation fraction.
To simulate values for $\Pi$, the following relationship was used,
$\log\Pi=-(0.051\pm 0.004)\log S_{\rm 1.4\,GHz}+(0.388\pm 0.007)$ (34)
(Stil et al., 2014), where $\Pi$ is the median percentage polarisation
fraction and $S_{\rm 1.4\,GHz}$ is the 1.4 GHz flux density in mJy.
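The simulation steps above (Equations 32–34) can be sketched for a single source as follows. This is an illustrative reduction of the described procedure: we use the median of the Stil et al. (2014) relation, omit its quoted scatter, omit the noise-addition step, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

nu = np.linspace(0.88e9, 1.68e9, 320)      # 320 channels, Hz
lam2 = (299_792_458.0 / nu) ** 2           # lambda^2, m^2

s14 = rng.uniform(0.01, 30.0)              # Stokes I at 1.4 GHz, mJy (10 uJy - 30 mJy)
rm = rng.uniform(-100.0, 100.0)            # rotation measure, rad m^-2
chi0 = rng.uniform(-np.pi, np.pi)          # intrinsic polarization angle, rad

s_nu = s14 * (nu / 1.4e9) ** -0.7          # Equation 32 with alpha = 0.7
# Equation 34 (median relation), converting percentage to fraction
pol_frac = 10 ** (-0.051 * np.log10(s14) + 0.388) / 100.0
p_nu = s_nu * pol_frac                     # Equation 33: P = I * Pi
q = p_nu * np.cos(2 * (chi0 + rm * lam2))  # Stokes Q, before noise
u = p_nu * np.sin(2 * (chi0 + rm * lam2))  # Stokes U, before noise
```

Per-channel Gaussian noise would then be added to `q` and `u` as described above.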
We randomly select 500 sources from this population with an SNR${}_{\rm
int}\geq 8$ and $|RM|>\phi_{\rm lim}$ and use an optimised GP with the kernel
specified in Equation 9 to recover the rotation measure for each object. We
use prior ranges as specified in Table 1. Uncertainties on the recovered RMs
are calculated from the 68% confidence interval on the optimised value of
$\log P$, which results in asymmetric uncertainties on the RM values. For 500
randomly selected samples under these constraints all of the recovered RMs lie
within 5$\sigma$ of the true value and $>99\%$ within 3$\sigma$. The average
run time per object is 43 seconds on a MacBook Pro with a 3.5 GHz Intel Core
i7 processor.
## 6 Discussion
### 6.1 Using GP reconstructed Faraday depth spectra
Examples of Faraday depth spectra reconstructed from the optimized GP for
Scenario 1 and Scenario 2 are shown in Figures 3 & 5. The reconstructed
Faraday depth spectra in each case (right hand column of both figures)
represent the structure in the quasi-periodic component of the covariance
kernel only. Although the white noise component is taken into account during
the optimization, it is not appropriate to scatter the predicted values by
this distribution as the white component represents the measurement noise on
the original data only. As mentioned already in § 4.4, in order to avoid
over-interpretation of the reconstructed Faraday depth spectrum, the
significance of features below the equivalent detection threshold in Faraday
depth space should be carefully considered.
If the optimized GPM was used to infer the posterior mean only at the
positions of the input data, the model could be subtracted from the data at
those positions to estimate a residual, as shown in the lower panel of Figures
2 & 4. This residual could then be transformed separately to the model and
combined with the model reconstruction in a manner similar to that employed by
the final step of the Cotton-Schwab CLEAN algorithm in synthesis imaging. We
note that a kernel-based GPM approach to two-dimensional image synthesis and
deconvolution in radio astronomy is likely to be prohibitive in its
computational complexity, as the fast separable kernels employed here are only
appropriate for one-dimensional data. However, kernel-free methods such as
those described by Arras et al. (2020) can achieve similar scaling.
#### 6.1.1 Recovery of Faraday thick structure
As can be seen in Figure 5, using an optimized GP to reconstruct the Faraday
depth spectrum in the case where Faraday thick structure is present can reveal
these structures without the need for assuming any specific parametric model
of Faraday depth a priori. Although it is not possible to recover the
parameterised geometry of the thick structure from the hyper-parameters of the
GP in this case, the shape is far more clearly delineated in the GP-
reconstructed Faraday depth spectrum than in the spectrum calculated from the
original data. This remains the case even in the presence of high degrees of
RFI excision.
In principle, GPs might also be used to infer the complex polarization signal
at smaller values of $\lambda^{2}$ than are present in the original dataset,
which could be beneficial for recovering extended structure in Faraday depth
space. When used to infer the behaviour of a signal outside the range of
measurements, the posterior covariance of a GP will increase in response to
the stationary nature of the covariance not finding any additional
measurements to anchor it. This increase in covariance can be used to down-
weight predicted values of the posterior mean, relative to those known with
more confidence. The behaviour of the inferred posterior mean will tend
towards the mean function and in the general case where the mean function is
assumed to be zero-valued, this is equivalent to the same logical sampling as
in the original dataset; however, for spectra where Faraday thick structures
are present, which have non-zero means at low $\lambda^{2}$, this may
artificially truncate structure. This was found to be the case here for the
kernel in Equation 9 when used to extend $\lambda^{2}$-space.
It is also important to note that structures which exist only on scales larger
than that set by the measured $\lambda^{2}_{\rm min}$ will not be recovered by
a GP model used for inferring data at positions outside the range of the
original measurements. Hence this form of structure enhancement for Faraday
thick structures can only be used to improve the recovery of structures which
exist on scales that are already sampled in part by the original measurements.
Any larger structures will not appear in the Faraday depth spectrum of the GP
reconstruction, an effect analogous to flux loss in synthesis imaging for
radio interferometry.
#### 6.1.2 Zero-valued Faraday depth signals
In the scenarios considered here, we have assumed a zero-valued mean function.
In the absence of instrumental leakage, this assumption is valid for modelling
sources with non-zero Faraday rotation; however, there are also two scenarios
where a non-zero offset in both Stokes Q and Stokes U can occur, resulting in
a zero-valued Faraday depth signal. The first of these is the case of un-
rotated radio polarization signals, the second is the presence of residual
instrumental leakage. The inclusion of such a constant offset as a free
parameter (separately for Q and U) in the optimization process is
straightforward to implement and requires only very minor changes to be made
in the calculation of the posterior mean. However, for some telescopes with
wide bandwidths, polarization leakage can be frequency-dependent and
consequently modelling it as a constant offset will not be sufficient. In this
case there is potential for confusion between the leakage term and covariate
behaviour in the complex polarization; as can be seen in Figure 4, smoothly
varying systematic changes in amplitude can be modelled perfectly well using
stationary kernels.
### 6.2 Faraday Challenge
In order to evaluate the performance of the GP approach against other methods
in the literature we use the 17 models from the Faraday challenge of Sun et
al. (2015). These models include examples of single Faraday thin objects,
multiple (two) Faraday thin objects, and combinations of Faraday thick and
Faraday thin objects. While the GP approach can only determine RM values in
the case of single Faraday thin models, we can use the model evaluation
metrics to compare the predicted Faraday depth spectra in all cases. For this
purpose we use the kernel from Equation 10 for all test data sets with priors
as listed in Table 1. All evaluations are made on models with ${\rm SNR}_{\rm int}=32$, as in
Sun et al. (2015).
The reduced $\chi^{2}$ for parametric models described in Sun et al. (2015) is
roughly equivalent to the standardized mean squared error (SMSE). For an
optimised GP, the SMSE is defined as
${\rm SMSE}=\frac{1}{N}\sum_{i=1}^{N}\frac{(P_{i}-P_{\ast,i})^{2}}{\sigma_{\rm n}^{2}},$
(35)
where $P_{\ast}$ is the posterior mean of complex polarization from the
optimized GP, $P$ is the input complex polarization, and $\sigma_{\rm n}^{2}$
is the variance of the white noise on the input data. In order to account for
the independence of the white noise between Stokes Q and U we set $N=600$. The
difference between the SMSE and the $\chi^{2}_{\rm r}$ used by Sun et al.
(2015) is the normalisation, which instead uses $N-K$ in the denominator where
$K$ is the number of parameters. Although the GP used here is not strictly
parametric, we calculate this quantity using $K=4$ to represent the number of
hyper-parameters in our model.
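The SMSE of Equation 35 is straightforward to compute; a minimal sketch, where the caller is expected to supply the residuals (in the text, the Stokes Q and U residuals counted separately so that $N=600$):

```python
import numpy as np

def smse(p, p_star, sigma_n):
    """Standardised mean squared error (Equation 35): the mean of the
    squared residuals (P - P*)^2, normalised by the white-noise
    variance sigma_n^2 of the input data."""
    r = (np.asarray(p) - np.asarray(p_star)) / sigma_n
    return float(np.mean(r ** 2))
```

A perfect prediction gives an SMSE of zero, and residuals comparable to the noise give a value near one, matching the expected optimum $\chi^{2}_{\rm r}\approx 1$.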
Since the GP optimisation method produces a full predictive distribution at
the position of each test sample, we also calculate the mean standardised log
likelihood as a measure of performance (Rasmussen & Williams, 2006). The
negative log-likelihood for the GP is given by,
$-\log
L^{\ast}=\frac{1}{2}\log(2\pi\sigma_{\ast}^{2})+\frac{(P-P_{\ast})^{2}}{2\sigma_{\ast}^{2}},$
(36)
where $P$ are the input polarization data, $P_{\ast}$ are the GP reconstructed
polarization data, and $\sigma_{\ast}^{2}$ is the variance of the GP
reconstruction, defined in Equation 27. We standardise this loss by
subtracting the equivalent loss as calculated using a model that predicts
using a simple Gaussian with the mean and variance of the input data. The
individual performance metrics for each model in the Faraday challenge are
listed in Table 2.
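The standardised log loss can be sketched as below, following the definition in Rasmussen & Williams (2006): the pointwise loss of Equation 36 minus the loss of a trivial Gaussian predictor with the mean and variance of the input data, averaged over the samples. Function names are ours.

```python
import numpy as np

def neg_log_like(p, p_star, var_star):
    """Pointwise negative log predictive likelihood (Equation 36)."""
    return 0.5 * np.log(2 * np.pi * var_star) + (p - p_star) ** 2 / (2 * var_star)

def msll(p, p_star, var_star):
    """Mean standardised log loss: subtract the loss of a trivial
    Gaussian fitted to the data, then average. Negative values mean
    the GP beats the trivial predictor."""
    p = np.asarray(p, dtype=float)
    gp_loss = neg_log_like(p, np.asarray(p_star, dtype=float),
                           np.asarray(var_star, dtype=float))
    trivial_loss = neg_log_like(p, p.mean(), p.var())
    return float(np.mean(gp_loss - trivial_loss))
```

A well-calibrated GP prediction with small posterior variance produces a strongly negative MSLL, as seen in Table 2.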
Table 2: Faraday challenge models. This table lists the input parameters for the 17 Faraday challenge models in Columns 1 to 9, and the evaluation metrics for the predicted complex Stokes data from the optimised GP in Columns 10 to 13. See Section 6.2 for details.
| Input Parameters | Evaluation Metrics
---|---|---
Model | $p_{1}$ | $\phi_{1}$ | $\chi_{1}$ | $p_{2}$ | $\phi_{2}$ | $\chi_{2}$ | $p_{0}$ | $\phi_{\rm c}$ | $\phi_{\rm s}$ | $\chi^{2}_{r}$ | SMSE | MSLL | $\langle\phi$-SMSE$\rangle$
| $\%$ | rad m-2 | deg | $\%$ | rad m-2 | deg | $\%$ | rad m-2 | rad m-2 | | | |
01 | 100.0 | 500.10 | 40 | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 0.88 | 0.87 | -1.46 | 1.62
02 | 100.0 | 49.38 | 60 | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 1.00 | 0.99 | -0.94 | 1.31
03 | 100.0 | 4.96 | 60 | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 1.06 | 1.05 | -0.47 | 1.01
04 | 25.0 | $-37.84$ | 0 | 16.70 | 103.18 | $-36$ | $-$ | $-$ | $-$ | 1.03 | 1.03 | -0.99 | 1.33
05 | 25.0 | $-37.84$ | $-40$ | 24.00 | 5.05 | $-40$ | $-$ | $-$ | $-$ | 0.96 | 0.95 | -0.71 | 1.18
06 | 25.0 | $-37.84$ | $-40$ | 9.00 | 5.05 | $-40$ | $-$ | $-$ | $-$ | 1.05 | 1.04 | -0.80 | 1.16
07 | 25.0 | $-44.55$ | 0 | 16.70 | 37.50 | 72 | $-$ | $-$ | $-$ | 1.08 | 1.07 | -0.51 | 1.28
08 | 25.0 | 232.56 | 40 | 9.00 | 192.70 | 40 | $-$ | $-$ | $-$ | 1.04 | 1.03 | -1.27 | 1.30
09 | 25.0 | $-37.83$ | $-40$ | 16.50 | 5.05 | 140 | $-$ | $-$ | $-$ | 0.91 | 0.90 | -0.72 | 1.05
10 | 25.0 | $-37.84$ | 0 | 9.00 | 103.00 | $-36$ | $-$ | $-$ | $-$ | 1.05 | 1.05 | -0.90 | 1.27
11 | 25.0 | 149.50 | $40$ | 23.75 | 163.50 | $-68$ | $-$ | $-$ | $-$ | 1.10 | 1.09 | -1.34 | 1.33
12 | 25.0 | $-232.56$ | 0 | 9.00 | $-50.10$ | 72 | $-$ | $-$ | $-$ | 1.03 | 1.02 | -1.47 | 1.41
13 | 25.0 | $-44.55$ | 0 | 24.00 | 37.54 | 72 | $-$ | $-$ | $-$ | 0.99 | 0.98 | -0.58 | 1.17
14 | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 1.9 | $-136.98$ | 50 | 0.87 | 0.87 | -1.41 | 1.59
15 | 1.8 | $-240.22$ | $-36$ | $-$ | $-$ | $-$ | 1.9 | $-250.17$ | 50 | 1.07 | 1.06 | -1.35 | 1.16
16 | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 1.9 | $-136.98$ | 25 | 0.99 | 0.98 | -1.53 | 1.25
17 | 1.8 | $-240.00$ | $-36$ | $-$ | $-$ | $-$ | 1.9 | $-250.17$ | 25 | 1.02 | 1.01 | -1.41 | 1.20
The median of the reduced chi-squared statistic recovered from our model,
$\chi^{2}_{\rm r}=0.99$, is in good agreement with the expected optimum value
of $1.00\pm 0.02$ from Sun et al. (2015). Results for both the SMSE and
$\chi_{\rm r}^{2}$ across all 17 test models, evaluated using ${\rm SNR}_{\rm int}=32$ as in Sun
et al. (2015), are shown in Figure 8. By comparison to Figure 3 in that work
it can be seen that the GP $\chi_{\rm r}^{2}$ has a median value comparable to
that of the three best performing methods in that work, each of which use QU-
fitting approaches. Furthermore, it can also be seen that the distribution of
$\chi_{\rm r}^{2}$ for the GP method is significantly narrower than those
methods, each of which has a 68% confidence interval that extends to values
larger than $\chi_{\rm r}^{2}=1.1$.
Figure 8: Box-and-whisker plots for figures of merit over all 17 Faraday
challenge models. The boxes show the first, the second (median), and the third
quartile, and the ends of the whiskers show the edges of the distribution.
Points deemed to be outliers are shown as open circles.
In order to evaluate the resulting Faraday depth reconstructions we draw
$T=100$ Monte Carlo realisations from the posterior distribution of the
optimised GP for each model. We then construct a residual Faraday depth
spectrum from the difference between the model realisation and the noiseless
Faraday challenge model. To mitigate the effect of noise correlation in
Faraday depth space, we down-sample these residual Faraday depth data by a
factor
$f=\Delta\phi_{\rm FWHM}/\delta\phi,$ (37)
where $\Delta\phi_{\rm FWHM}\simeq 135$ rad m-2 is the FWHM of the RMTF for
the Faraday challenge data and $\delta\phi$ is the pixel size in Faraday
depth. We use the down-sampled residual Faraday depth spectra to calculate an
SMSE of the form shown in Equation 35, relative to a model of $0+0j$. The
Faraday depth SMSE uses $\sigma_{\phi}=\sigma/\sqrt{N_{\rm chan}}$ as a
normalisation, as described in Section 4.4, and $N=2\times N_{\rm ds}$, where
$N_{\rm ds}$ is the number of data points in each of the down-sampled complex
parts of the residual Faraday depth spectrum. We average the resulting SMSE
values across all realisations for a particular model to obtain the quantity
$\langle\phi$-SMSE$\rangle$ and this is listed for each model in Table 2 and
the distribution visualised in Figure 8.
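The down-sampling and $\phi$-space SMSE for a single realisation can be sketched as below. This is a simplified illustration of the procedure described above (simple strided down-sampling, real and imaginary residuals compared to a model of $0+0j$); argument names are ours.

```python
import numpy as np

def phi_smse(residual_fd, sigma_phi, dphi, fwhm=135.0):
    """phi-space SMSE for one realisation: down-sample the complex
    residual Faraday depth spectrum by f = FWHM / dphi (Equation 37),
    then average the squared real and imaginary residuals relative to
    a model of 0 + 0j, normalised by sigma_phi."""
    f = max(int(round(fwhm / dphi)), 1)
    ds = np.asarray(residual_fd)[::f]
    z = np.concatenate([ds.real, ds.imag]) / sigma_phi
    return float(np.mean(z ** 2))
```

Averaging this quantity over the Monte Carlo realisations gives $\langle\phi$-SMSE$\rangle$.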
By comparing the SMSE and $\langle\phi$-SMSE$\rangle$ values for each model it
can be seen that the goodness of fit for the GP model estimates is higher in
the Stokes Q and U data space than it is in the Faraday depth signal space.
This is to be expected as the model itself is optimised against the noisy
Stokes data. Figure 8 also indicates models that are deemed to be outliers
based on the interquartile range of the data. These two outliers are Models 1
and 14, which are low outliers for the data (QU) space SMSE distribution and
high outliers for the signal ($\phi$) space $\langle\phi$-SMSE$\rangle$
distribution, indicating that the GP model estimates are over-fitting the
noisy QU data in these cases, resulting in larger Faraday depth residuals.
The calculation of $\langle\phi$-SMSE$\rangle$ performed here is possible due
to the nature of the GP model, which provides a full predictive posterior
distribution for each estimate. Whilst equivalent values are not available for
the other methods profiled in the Faraday challenge of Sun et al. (2015), we
are able to visualise the equivalent residual Faraday depth spectra. For
example, for Model 1, the average difference between the Faraday depth,
$\phi_{1}$ (see Table 2), of the noiseless model and the fitted value of this
parameter across all of the methods in the Sun et al. (2015) paper is
$\Delta\phi_{1}=1.57$ rad m-2 (see Table 3 in that work). In Figure 9 we show
the residual Faraday depth spectra for the difference between the true
noiseless model and a model with a value of $\phi_{1}$ that is incorrect by
this amount. We also include residuals for $\Delta\phi_{1}=0.60$ rad m-2 and
$\Delta\phi_{1}=3.60$ rad m-2, which were the smallest and largest differences
returned by the methods evaluated against Model 1 in the Sun et al. (2015)
comparison. It can be seen from this figure that the amplitude of the
residuals from the GP model are equivalent to those of the best fitting model
from the Sun et al. (2015) Faraday challenge in this case; however, it should
also be noted that the nature of these residuals is different in form to those
resulting from an inexact parametric model.
Figure 9: Upper: Residual Faraday depth spectra for Model 1 of the Faraday
challenge, constructed using the transform of the difference between the
noiseless model and a model where the fitted Faraday depth, $\phi_{1}$,
differs from the input model by (i) an average deviation of
$\Delta\phi_{1}=1.57$ rad m-2 (grey line indicates absolute polarization, with
real and imaginary components shown in cyan), (ii) a minimum deviation of
$\Delta\phi_{1}=0.60$ rad m-2 (blue dashed line), and (iii) a maximum
deviation of $\Delta\phi_{1}=3.60$ rad m-2 (magenta dashed line). Lower:
Residual Faraday depth spectrum for Model 1 of the Faraday challenge,
constructed using the transform of the difference between the noiseless
model and the posterior mean of the optimised GP model. The $\pm$5
$\sigma_{\phi}$ limits appropriate to SNR${}_{\rm int}=32$ for the Faraday
challenge are shown as a light grey shaded area.
### 6.3 Alternative kernels
For a given regression problem, there are an infinite number of models that
can be used. As such, it is usually not feasible to test all possible models.
In this work so far we have used the GP kernel defined in Equation 9; here we
consider the use of alternative kernels. We denote our original kernel as
“3-HP” and we compare it to (i) the quasi-periodic kernel used by Foreman-
Mackey et al. (2017) to model stellar variability, and (ii) a hybrid kernel
including a second squared-exponential term.
In Foreman-Mackey et al. (2017), the period of rotation of a star was
determined by extracting the optimised $P$ hyper-parameter, analogous to our
use of $P$ to estimate the rotation measure of a Faraday thin source. The
stellar variability kernel from Foreman-Mackey et al. (2017) is also built
using the celerite library and has the form,
$k(x_{i},x_{j})=\frac{B}{1+C}\,{\rm e}^{-A|x_{i}-x_{j}|}\left[\cos\left(\frac{2\pi|x_{i}-x_{j}|}{P}\right)+(1+C)\right],$ (38)
where $A$, $B$, $C$ and $P$ are hyper-parameters. We denote this kernel as
“4-HP”.
The hybrid kernel, denoted here as “5-HP”, has the form,
$k(x_{i},x_{j})=h_{1}\,{\rm e}^{-c_{1}|x_{i}-x_{j}|}\cos\left(\frac{2\pi|x_{i}-x_{j}|}{P}\right)+h_{2}\,{\rm e}^{-c_{2}|x_{i}-x_{j}|},$ (39)
where $h_{1}$, $c_{1}$, $P$, $h_{2}$ and $c_{2}$ are hyper-parameters. Both
the 4-HP and 5-HP kernels have an extra term which is intended to model
smoothly varying non-periodic behaviour such as that which might arise from
Faraday thick components along the line of sight.
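For illustration, the two alternative kernels in Equations 38 and 39 can be written as plain functions of the lag $|x_{i}-x_{j}|$. The sketch below is a minimal numpy version; the hyper-parameter names follow the text, but note that the models in this work are actually built with the celerite library rather than as dense covariance matrices:

```python
import numpy as np

def kernel_4hp(tau, A, B, C, P):
    """Quasi-periodic stellar-variability kernel of Eq. (38)."""
    tau = np.abs(tau)
    return B / (1.0 + C) * np.exp(-A * tau) * (np.cos(2 * np.pi * tau / P) + (1.0 + C))

def kernel_5hp(tau, h1, c1, P, h2, c2):
    """Hybrid kernel of Eq. (39): damped cosine plus a second exponential term."""
    tau = np.abs(tau)
    return h1 * np.exp(-c1 * tau) * np.cos(2 * np.pi * tau / P) + h2 * np.exp(-c2 * tau)

# Covariance matrix over an (illustrative) grid of lambda^2 samples:
x = np.linspace(0.0, 1.0, 5)
K = kernel_5hp(x[:, None] - x[None, :], h1=1.0, c1=0.5, P=0.3, h2=0.2, c2=2.0)
```

At zero lag the 5-HP kernel reduces to $h_{1}+h_{2}$, the total variance of the two components.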
Although different methods of evaluating competing models have been developed,
the computational complexity involved in applying them limits their
applicability. To overcome this limitation, easier-to-use approximate methods
are commonly used to compare the performance of two or more models. These
include the Bayesian Information Criterion (BIC), the Akaike Information
Criterion (AIC) and the corrected Akaike Information Criterion (AICc). These
information criteria are designed to evaluate the amount of information lost
by a particular model. The model that loses the least information, i.e. has
the lowest valued information criterion, is considered to be the preferred
model.
Burnham & Anderson (2004) suggest that the BIC is a powerful model selection
tool when the ‘true’ model is in the set of test models; however, in the case
of empirical data where the true model is unlikely to be known a priori the
AIC is considered to be a more appropriate criterion. Furthermore, the
_corrected_ AIC places a stronger complexity penalty on models used with
finite data series than the original AIC. In the case of small sample sizes it
is considered to be more appropriate and accurate for finding the preferred
model. However, Kass & Raftery (1995) suggest that the AIC should only be used
in situations where the prior distribution is similar in form to the
likelihood; when this is not the case, and the prior is significantly less
informative than the likelihood, the BIC will favour simpler models than the
AIC.
For each sample dataset we optimize the hyper-parameters for each of the three
kernels using the method described in § 4.3 and use these values to compute
the Bayesian Information Criterion (BIC; Schwarz, 1978),
${\rm BIC}=-2\log L^{\ast}+K\log N,$ (40)
the Akaike Information Criterion (AIC; Akaike, 1974),
${\rm AIC}=2K-2\log L^{\ast},$ (41)
and the corrected AIC (AICc; Hurvich & Tsai, 1989),
${\rm AICc}={\rm AIC}+\frac{2K(K+1)}{N-K-1},$ (42)
where $\log L^{\ast}$ is given by Equation 36, $K$ is the number of hyper-
parameters, and $N=600$ is the number of data points.
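The three criteria are simple functions of the optimised log-likelihood $\log L^{\ast}$, the number of hyper-parameters $K$ and the number of data points $N$. A short sketch of Equations 40–42; the likelihood values in the comparison loop are illustrative placeholders, not results from this work:

```python
import numpy as np

def bic(logL, K, N):
    """Bayesian Information Criterion, Eq. (40)."""
    return -2.0 * logL + K * np.log(N)

def aic(logL, K):
    """Akaike Information Criterion, Eq. (41)."""
    return 2.0 * K - 2.0 * logL

def aicc(logL, K, N):
    """Corrected AIC, Eq. (42): stronger complexity penalty for finite N."""
    return aic(logL, K) + 2.0 * K * (K + 1) / (N - K - 1)

# Compare the three kernels at the same (illustrative) optimised likelihoods:
N = 600
for name, logL, K in [("3-HP", -1005.0, 3), ("4-HP", -1002.0, 4), ("5-HP", -1001.0, 5)]:
    print(name, round(bic(logL, K, N), 1), round(aic(logL, K), 1), round(aicc(logL, K, N), 2))
```

The lowest-valued criterion marks the preferred model; since $N\gg K$ here, the three criteria rank models almost identically.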
We calculate the median value and inter-quartile range for each of the three
information criteria from ten different noise realisations with SNR${}_{\rm int}=32$. The
goodness of fit statistics for the Faraday challenge models using the two
alternative kernels are $\chi^{2}_{\rm r}=0.98$ for the 4-HP model, and
$\chi^{2}_{\rm r}=0.97$ for the 5-HP model.
The median values of the information criteria suggest a preference for the
4-HP kernel in the case of single Faraday thin component models, a preference
for the 5-HP kernel in the case of double Faraday thin models, and a
preference for the 3-HP kernel in the case of Faraday thick models. However,
we note that these are modal preferences and no model type shows a unanimous
preference for a particular kernel. Indeed, although ranking the three kernels
using the median value of the information criteria for each data set may
nominally show a preference for one kernel above others, the overlapping
inter-quartile ranges for most models suggest that there is in fact no clear
preference. The only model where one kernel is significantly preferred is
Model 1, where the inter-quartile range of the 4-HP kernel is outside the
inter-quartile range of both other kernels, see Figure 10. We also see no
difference in the median ranking across the three information criteria, which
is perhaps unsurprising given that $N\gg K$ in all cases. Furthermore, as
noted by Bollen et al. (2014), information criteria comparisons are
appropriate only for data sets with the same number of measurement samples,
and we emphasise that the kernel comparison presented here is for the
specifications of the test sets in the Sun et al. (2015) Faraday challenge.
Figure 10: Box-and-whisker plots for BIC values derived from each of the 3
kernels using 10 noise realisations of Model 1 from Sun et al. (2015). The
boxes show the first, the second (median), and the third quartile, and the
ends of the whiskers show the minimum and maximum values.
## 7 Conclusions
In this work we suggest that Gaussian Process Modelling (GPM) may be a
flexible pre-processing approach for the improved analysis of Faraday depth
spectra as it does not rely on an exact parameterised physical model, or set
of models, to be known a priori and is therefore suitable for situations where
some prior knowledge of the form of the signal is understood, but the exact
line of sight structure is imperfectly known. This is often the case for
Faraday depth analysis. With regard to this, we have shown that even simple
GPM covariance kernels can be used to impute data reliably for complex lines
of sight.
We have demonstrated the applicability of GPM for improved recovery of
structure in Faraday depth spectra. We have shown that these improvements
include the reduction of sidelobe contamination in Faraday depth spectra
through imputation of missing data when RFI flagging causes significant
losses, even in the case where significant portions of the original data set
are lost. We have also illustrated the benefits of using GPM to re-sample
complex polarization data onto regular spacings in $\lambda^{2}$-space as an
alternative to currently used re-gridding methods. We have shown that these
applications in combination can provide an improved representation of Faraday
thick structure in Faraday depth spectra.
We have shown that under certain circumstances the hyper-parameters of the
periodic covariance kernel can be used to recover rotation measure values for
Faraday thin sources and that this hyper-parameter can also be physically
meaningful in a more limited way for complex lines of sight. We have presented
a method to calculate the range of RM over which this method is applicable for
different telescopes and/or data sets and we have demonstrated that within
this range RM recovery from GP hyper-parameters is robust to typical detection
thresholds.
We have discussed limitations in the use of GPM reconstructed Faraday depth
spectra, highlighted situations in which GPM is not an appropriate solution
and presented a quantitative method for determining a threshold above which
Faraday structure can be safely considered significant when reconstructed
using an optimised GP fitted to noisy data.
We have evaluated the GP reconstruction proposed here against other methods in
the literature and have shown that it has a data space performance equivalent
to other QU-fitting methods and furthermore that the variance in results from
the GP approach is smaller in the Stokes data space than seen from those
methods. Using the same test data we demonstrate that even complex
Faraday structure can be recovered using a comparatively simple covariance
kernel.
## Acknowledgements
We thank the anonymous referee for their careful reading of the manuscript and
their helpful comments, which improved the content of this paper. The authors
also thank Torsten Enßlin for useful comments on an earlier draft of this
paper. This work makes extensive use of the celerite
library (https://celerite.readthedocs.io). SWN acknowledges support from the
UK Newton Fund via the Science & Technology Facilities Council (STFC) through
the Development in Africa with Radio Astronomy (DARA) Big Data project
ST/R001898/1. AMS gratefully acknowledges support from an Alan Turing
Institute AI fellowship EP/V030302/1. JH acknowledges support from the UK
Science & Technology Facilities Council (STFC). MC acknowledges support from
ANID PFCHA/DOCTORADO BECAS CHILE/2018-72190574.
## Data Availability
All code from this work is publicly available on Github at the following
address: https://github.com/as595/GPM-for-Faraday-Depth-Spectra
## References
* Aigrain et al. (2016) Aigrain S., Parviainen H., Pope B. J. S., 2016, MNRAS, 459, 2408
* Akaike (1974) Akaike H., 1974, IEEE Transactions on Automatic Control, 19, 716
* Angus et al. (2018) Angus R., Morton T., Aigrain S., Foreman-Mackey D., Rajpaul V., 2018, MNRAS, 474, 2094
* Arras et al. (2020) Arras P., Perley R. A., Bester H. L., Leike R., Smirnov O., Westermann R., Enßlin T. A., 2020, arXiv e-prints, p. arXiv:2008.11435
* Barclay et al. (2015) Barclay T., Endl M., Huber D., Foreman-Mackey D., Cochran W. D., MacQueen P. J., Rowe J. F., Quintana E. V., 2015, ApJ, 800, 46
* Bollen et al. (2014) Bollen K. A., Harden J. J., Ray S., Zavisca J., 2014, Structural equation modeling : a multidisciplinary journal, 21, 1
* Bond & Efstathiou (1987) Bond J. R., Efstathiou G., 1987, MNRAS, 226, 655
* Brandenburg & Stepanov (2014) Brandenburg A., Stepanov R., 2014, The Astrophysical Journal, 786, 91
* Brentjens & de Bruyn (2005) Brentjens M. A., de Bruyn A. G., 2005, A&A, 441, 1217
* Burn (1966) Burn B. J., 1966, MNRAS, 133, 67
* Burnham & Anderson (2004) Burnham K. P., Anderson D. R., 2004, Sociological Methods & Research, 33, 261
* Czekala et al. (2015) Czekala I., Andrews S. M., Mandel K. S., Hogg D. W., Green G. M., 2015, The Astrophysical Journal, 812, 128
* Dewdney et al. (2009) Dewdney P. E., Hall P. J., Schilizzi R. T., Lazio T. J. L. W., 2009, Proceedings of the IEEE, 97, 1482
* Evans et al. (2015) Evans T. M., Aigrain S., Gibson N., Barstow J. K., Amundsen D. S., Tremblin P., Mourier P., 2015, MNRAS, 451, 680
* Farnsworth et al. (2011) Farnsworth D., Rudnick L., Brown S., 2011, The Astronomical Journal, 141, 191
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
* Foreman-Mackey et al. (2017) Foreman-Mackey D., Agol E., Angus R., Ambikasaran S., 2017, ArXiv
* George et al. (2012) George S. J., Stil J. M., Keller B. W., 2012, Publications of the Astronomical Society of Australia, 29, 214–220
* Gibson et al. (2012) Gibson N. P., Aigrain S., Roberts S., Evans T. M., Osborne M., Pont F., 2012, MNRAS, 419, 2683
* Haywood et al. (2014) Haywood R. D., et al., 2014, MNRAS, 443, 2517
* Heald (2009) Heald G., 2009, in Strassmeier K. G., Kosovichev A. G., Beckman J. E., eds, IAU Symposium Vol. 259, Cosmic Magnetic Fields: From Planets, to Stars and Galaxies. pp 591–602, doi:10.1017/S1743921309031421
* Hurvich & Tsai (1989) Hurvich C. M., Tsai C.-L., 1989, Biometrika, 76, 297
* Ideguchi et al. (2014) Ideguchi S., Tashiro Y., Akahori T., Takahashi K., Ryu D., 2014, ApJ, 792, 51
* Jonas (2009) Jonas J. L., 2009, IEEE Proceedings, 97, 1522
* Jonas (2016) Jonas J. L., 2016, ., .
* Junklewitz et al. (2016) Junklewitz H., Bell M. R., Selig M., Enßlin T. A., 2016, A&A, 586, A76
* Kass & Raftery (1995) Kass R. E., Raftery A. E., 1995, Journal of the American Statistical Association, 90, 773
* Law et al. (2011) Law C. J., et al., 2011, ApJ, 728, 57
* Leike & Enßlin (2019) Leike R. H., Enßlin T. A., 2019, A&A, 631, A32
* Littlefair et al. (2017) Littlefair S. P., Burningham B., Helling C., 2017, MNRAS, 466, 4250
* Macquart et al. (2012) Macquart J. P., Ekers R. D., Feain I., Johnston-Hollitt M., 2012, ApJ, 750, 139
* Mauch et al. (2020) Mauch T., et al., 2020, ApJ, 888, 61
* McAllister et al. (2016) McAllister M. J., et al., 2016, Monthly Notices of the Royal Astronomical Society, 464, 1353
* Mertens et al. (2018) Mertens F. G., Ghosh A., Koopmans L. V. E., 2018, MNRAS, 478, 3640
* O’Sullivan et al. (2012) O’Sullivan S. P., et al., 2012, Monthly Notices of the Royal Astronomical Society, 421, 3300
* Pratley & Johnston-Hollitt (2020) Pratley L., Johnston-Hollitt M., 2020, The Astrophysical Journal, 894, 38
* Pratley et al. (2020) Pratley L., Johnston-Hollitt M., Gaensler B. M., 2020, arXiv e-prints, p. arXiv:2010.07932
* Rajpaul et al. (2015) Rajpaul V., Aigrain S., Osborne M. A., Reece S., Roberts S., 2015, Monthly Notices of the Royal Astronomical Society, 452, 2269
* Rajpaul et al. (2016) Rajpaul V., Aigrain S., Roberts S., 2016, MNRAS, 456, L6
* Rasmussen & Williams (2006) Rasmussen C., Williams C., 2006, Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, USA
* Roberts et al. (2013) Roberts S., Osborne M., Ebden M., Reece S., Gibson N., Aigrain S., 2013, Philosophical transactions. Series A, Mathematical, physical, and engineering sciences, 371, 20110550
* Schwarz (1978) Schwarz G., 1978, Annals of Statistics, 6, 461
* Seaman et al. (2013) Seaman S., Galati J., Jackson D., Carlin J., 2013, arXiv e-prints, p. arXiv:1306.2812
* Stil et al. (2014) Stil J. M., Keller B. W., George S. J., Taylor A. R., 2014, ApJ, 787, 99
* Sun et al. (2015) Sun X. H., et al., 2015, The Astronomical Journal, 149, 60
* Sun et al. (2018) Sun S., Zhang G., Wang C., Zeng W., Li J., Grosse R., 2018, arXiv e-prints, p. arXiv:1806.04326
* Taylor & Jarvis (2017) Taylor A. R., Jarvis M., 2017, IOP Conference Series: Materials Science and Engineering, 198, 012014
* Van Eck et al. (2018) Van Eck C. L., et al., 2018, A&A, 613, A58
* Wandelt & Hansen (2003) Wandelt B. D., Hansen F. K., 2003, Phys. Rev. D, 67, 023001
* Way et al. (2009) Way M. J., Foster L. V., Gazis P. R., Srivastava A. N., 2009, Astrophys. J., 706, 623
# Online detection of failures generated by storage simulator
Kenenbek Arzymatov, Mikhail Hushchyn, Andrey Sapronov, Vladislav Belavin,
Leonid Gremyachikh, Maksim Karpov and Andrey Ustyuzhanin
National Research University Higher School of Economics, 11 Pokrovsky
Boulevard, Moscow, Russia, 109028<EMAIL_ADDRESS>
###### Abstract
Modern large-scale data-farms consist of hundreds of thousands of storage
devices that span distributed infrastructure. Devices used in modern data
centers (such as controllers, links, SSD- and HDD-disks) can fail due to
hardware as well as software problems. Such failures or anomalies can be
detected by monitoring the activity of components using machine learning
techniques. To use these techniques, researchers need large amounts of
historical data from devices in normal and failure modes for training the
algorithms. In this work, we address two problems: 1) the lack of storage
data for the methods above, by creating a simulator, and 2) the application
of existing online algorithms that can more quickly detect a failure
occurring in one of the components.
We created a Go-based (golang) package for simulating the behavior of modern
storage infrastructure. The software is based on the discrete-event modeling
paradigm and captures the structure and dynamics of high-level storage system
building blocks. The package’s flexible structure allows us to create a model
of a real-world storage system with a configurable number of components. The
primary area of interest is exploring the storage machine’s behavior under
stress testing or exploitation in the medium- or long-term for observing
failures of its components.
To discover failures in the time series distribution generated by the
simulator, we modified a change point detection algorithm that works in online
mode. The goal of the change-point detection is to discover differences in
time series distribution. This work describes an approach for failure
detection in time series data based on direct density ratio estimation via
binary classifiers.
## Introduction
The disk drive is one of the crucial elements of any computer and IT
infrastructure. Disk failures are a major contributing factor to outages of
the overall computing system. During the last decades, the reliability and
modeling of storage systems has been an active area of research in industry
and academia [1, 2, 3]. Nowadays, the total number of hard disk drives (HDD)
and solid-state drives (SSD) deployed in data-farms and cloud systems has
passed tens of millions of units [4]. Consequently, early identification of
defects that lead to future failures can result in significant benefits. Such
failures or anomalies can be detected by
monitoring the components’ activity using machine learning techniques such as
change point detection [5, 6, 7]. To use these techniques, especially for
anomaly detection, historical data from devices in normal and failure modes
are necessary for training the algorithms. In this paper, for the reasons
mentioned above, we address two problems: 1) the lack of storage data for the
methods above, by creating a simulator, and 2) the application of new online
algorithms that can more quickly detect a failure occurring in one of the
components [8].
A Go-based (golang) package for simulating the behavior of modern storage
infrastructure is created. The primary area of interest is exploring the
storage machine’s behavior under stress testing or exploitation in the medium-
or long-term for observing failures of its components. The software is based
on the discrete-event modeling paradigm and captures the structure and
dynamics of high-level storage system building blocks. It represents a
hybrid approach to modeling a storage area network [9, 10]. This method uses
additional blocks with a neural network that tunes the internal model
parameters while a simulation is running, as described in [11]. This approach’s
critical advantage is a decreased requirement for detailed simulation and the
number of modeled parameters of real-world system components and, as a result,
a significant reduction in the intellectual cost of its development. The
package’s modular structure allows us to create a model of a real-world storage
system with a configurable number of components. Compared to other techniques,
parameter tuning does not require heavy-lifting changes within developing
service [12].
To discover failures in the time series distribution generated by the
simulator, we modified a change point detection algorithm that works in online
mode. The goal of the change-point detection is to discover differences in
time series distribution. This work uses an approach for failure detection in
time series data based on direct density ratio estimation via binary
classifiers [8].
## Simulator
### Internals
The simulator uses a Discrete Event Simulation (DES) [13] paradigm for
modeling storage infrastructure. In a broad sense, DES is used to simulate a
system as a discrete sequence of events in time. Each event happens in a
specific moment in time and traces a change of state in the system. Between
two consecutive events, no altering in the system is presumed to happen; thus,
the simulation time can directly move to the next event’s occurrence time. The
scheme of the process is shown in Figure 1.
Figure 1: The event handling loop is the central part responsible for time
movement in the simulator. The Master process creates the necessary logical
processes (Client1, IOBalancer, HDD_Write, etc.) and populates a Priority
Queue by collecting events from the modeled processes. The final part of the
implementation is running the event handling loop: it removes successive
elements from the queue, which is valid because the queue is already sorted
by time, and performs the associated actions.
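The loop described above can be sketched with Python's heapq module as the priority queue; the event names (HDD_Write, WriteDone) and timings here are illustrative, not taken from the simulator:

```python
import heapq

def run_des(initial_events, handlers):
    """Minimal discrete-event loop: pop the earliest event, advance the
    simulation clock to its time, and run its handler, which may schedule
    further events by returning new (time, name) tuples."""
    queue = list(initial_events)
    heapq.heapify(queue)            # priority queue keeps events time-sorted
    clock, trace = 0.0, []
    while queue:
        clock, name = heapq.heappop(queue)
        trace.append((clock, name))
        for event in handlers.get(name, lambda t: [])(clock):
            heapq.heappush(queue, event)
    return clock, trace

# A write request at t=1.0 whose completion event fires 2.5 time units later:
handlers = {"HDD_Write": lambda t: [(t + 2.5, "WriteDone")]}
final_time, trace = run_des([(1.0, "HDD_Write")], handlers)
```

Between two consecutive events nothing changes in the system, so the clock jumps directly from 1.0 to 3.5 rather than stepping through intermediate times.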
The simulator’s programming environment provides the functionality to set up a
model for specific computing environments, especially storage area networks.
The key area of interest is exploring the storage infrastructure’s behavior
under various stress-testing or utilization scenarios in the medium or long
term for monitoring failures of its components.
In the simulator, the load on a storage system can be represented by two
action types: read file from disk and write file to disk. Each file has
corresponding attributes, such as name, block size, and total size. Together
with the current load, these attributes determine the amount of time required
to perform the corresponding action. Three basic types of resources are
provided: CPU, network interface, and storage. Their representation is shown
in Figure 3 and an informative description is given in Table 1. By using
these basic blocks, real-world systems can be constructed, as shown in
Figure 2.
Figure 2: The example of the real storage system that can be modeled by using
basic blocks
Figure 3: Basic resource entities in the simulator
Table 1: Resource description

| Resource | Real-world entity | Parameters | Units | Anomaly type |
|---|---|---|---|---|
| CPU | Controller, server | Number of cores; core speed | Amount; Flops | Each component can suffer from performance degradation or total breakup |
| Link | Networking cables | Bandwidth; latency | Megabyte/sec; sec | |
| Storage | Cache, SSD, HDD | Size; write speed; read speed | Gigabyte; Megabyte/sec; Megabyte/sec | |
### Comparison with the real data
Data from a real-world storage system were used to validate the behavior of
the simulator. A similar write-load scenario was generated on the model
prototype, together with an intentional controller failure (turn-off). The
comparison is shown in Figure 4. As we can see, the simulator’s data can
qualitatively reflect the breakup of the components.
Figure 4: Comparison of the CPU load metrics between simulated (A) and real
data (B). The periods marked ‘Failure’ correspond to a storage processor being
offline
## Change point detection
Consider a $d$-dimensional time series described by a vector of observations
$x(t)\in R^{d}$ at time $t$. The sequence of observations at time $t$ with
length $k$ is defined as:
$X(t)=[x(t)^{T},x(t-1)^{T},\dots,x(t-k+1)^{T}]^{T}\in R^{kd}$
A sample of $n$ such sequences is defined as:
$\mathcal{X}(t)=\\{X(t),X(t-1),\dots,X(t-n+1)\\}$
It is implied that the observation distribution changes at time $t^{*}$. The
goal is to detect this change. The idea is to estimate a dissimilarity score
between a reference sample $\mathcal{X}_{rf}(t-n)$ and a test sample
$\mathcal{X}_{te}(t)$. The larger the dissimilarity, the more likely it is
that a change point occurs at time $t-n$.
In this work, we apply a CPD algorithm based on direct density ratio
estimation developed in [8]. The main idea is to estimate density ratio $w(X)$
between two probability distributions $P_{te}(X)$ and $P_{rf}(X)$ which
correspond to test and reference sets accordingly. For estimating $w(X)$,
different binary classifiers can be used, like decision trees, random forests,
SVM, etc. We use neural networks for this purpose. This network $f(X,\theta)$
is trained on the mini-batches with cross-entropy loss function
$L(\mathcal{X}(t-l),\mathcal{X}(t),\theta)$,
$L(\mathcal{X}(t-l),\mathcal{X}(t),\theta)=-\frac{1}{n}\sum_{X\in\mathcal{X}(t-l)}\log(1-f(X,\theta))-\frac{1}{n}\sum_{X\in\mathcal{X}(t)}\log
f(X,\theta),$
We use a dissimilarity score based on the Kullback-Leibler divergence,
$D(\mathcal{X}(t-l),\mathcal{X}(t))$. Following [14], we define this score as:
$D(\mathcal{X}(t-l),\mathcal{X}(t),\theta)=\frac{1}{n}\sum_{X\in\mathcal{X}(t-l)}\log\frac{1-f(X,\theta)}{f(X,\theta)}+\frac{1}{n}\sum_{X\in\mathcal{X}(t)}\log\frac{f(X,\theta)}{1-f(X,\theta)}.$
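As a sanity check on this score, here is a small numpy sketch that takes the classifier outputs $f(X,\theta)$ on the two mini-batches as given: when the classifier cannot separate the batches ($f\approx 0.5$ everywhere) the score vanishes, and when it separates them cleanly the score is large and positive.

```python
import numpy as np

def kl_dissimilarity(f_ref, f_test):
    """KL-based dissimilarity score of the text: first term averages the
    reference log-odds against class 1, second term the test log-odds for
    class 1."""
    f_ref, f_test = np.asarray(f_ref), np.asarray(f_test)
    return (np.mean(np.log((1 - f_ref) / f_ref))
            + np.mean(np.log(f_test / (1 - f_test))))

# No change: classifier outputs 0.5 on both batches -> score is zero
print(kl_dissimilarity([0.5, 0.5], [0.5, 0.5]))
# Clear change: classifier separates the batches -> score is large
print(kl_dissimilarity([0.1, 0.2], [0.9, 0.8]))
```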
According to [8], the training procedure is shown in Alg. 1. It consists of
the following steps performed in a loop: 1) taking the reference and test
mini-batches $\mathcal{X}(t-l)$ and $\mathcal{X}(t)$; 2) computing the
dissimilarity score and its running mean; 3) calculating the loss function
$L$; 4) applying gradients to the weights of the neural network.
Inputs: time series $\\{X(t)\\}_{t=k}^{T}$; $k$ – size of a combined vector
$X(t)$; $n$ – size of a mini-batch $\mathcal{X}(t)$; $l$ – lag size and $n\ll
l$; $f(X,\theta)$ – a neural network with weights $\theta$;
Initialization: $t\leftarrow k+n+l$;
while $t\leq T$ do
take mini-batches $\mathcal{X}(t-l)$ and $\mathcal{X}(t)$;
$d(t)\leftarrow D(\mathcal{X}(t-l),\mathcal{X}(t),\theta)$;
$\bar{d}(t)\leftarrow\bar{d}(t-n)+\frac{1}{l}(d(t)-d(t-l-n))$;
$loss(t,\theta)\leftarrow L(\mathcal{X}(t-l),\mathcal{X}(t),\theta)$;
$\theta\leftarrow\mathrm{Optimizer}(\mathrm{loss}(t,\theta))$;
$t\leftarrow t+n$;
end while
return $\\{\bar{d}(t)\\}_{t=1}^{T}$ – change-point detection score
Algorithm 1 Change-point detection algorithm.
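A self-contained sketch of this loop follows, substituting a small logistic classifier trained by gradient descent on the cross-entropy loss for the neural network $f(X,\theta)$; the window sizes, learning rate and synthetic data are illustrative choices, not the settings used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # clip to avoid overflow and keep the log-odds finite
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def cpd_scores(x, k=5, n=20, l=60, lr=0.5, steps=20):
    """Slide reference (lagged by l) and test mini-batches over the series,
    train a logistic classifier with the cross-entropy loss on each pair,
    and emit the KL-based score d(t)."""
    T = len(x)
    # combined vectors X(t) built from k consecutive observations
    X = np.stack([x[t - k + 1:t + 1] for t in range(k - 1, T)])
    w, b = np.zeros(k), 0.0          # classifier weights, kept across windows
    scores = []
    for t in range(l + n, len(X), n):
        ref, test = X[t - l - n:t - l], X[t - n:t]
        batch = np.vstack([ref, test])
        y = np.r_[np.zeros(n), np.ones(n)]     # 0 = reference, 1 = test
        for _ in range(steps):                 # cross-entropy gradient steps
            g = sigmoid(batch @ w + b) - y
            w -= lr * batch.T @ g / (2 * n)
            b -= lr * g.mean()
        f_ref, f_test = sigmoid(ref @ w + b), sigmoid(test @ w + b)
        d_t = (np.log((1 - f_ref) / f_ref).mean()
               + np.log(f_test / (1 - f_test)).mean())
        scores.append((t, d_t))
    return scores

# Series with a mean shift at index 300: the score should peak near it
x = np.r_[rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.0, 300)]
scores = cpd_scores(x)
d = dict(scores)
```

Before the change point the classifier cannot separate the two batches and the score stays near zero; once the test window crosses the shift, the score rises sharply.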
## Results
To check the change-point algorithm against the simulation data, four time-
series datasets were prepared: 1) the controller’s CPU load metric, 2) the
load balancer request time, 3) the data traffic to storage devices, and 4)
changes in used space. These time series are shown in the upper halves of
Figures 5, 6, 7 and 8.
As shown in the bottom halves of the figures above, the algorithm can identify
data points where the distribution changes. The red line on each plot is the
CPD score: the higher its value, the more confident the algorithm is that a
change point occurred at that timestamp.
Figure 5: Controller failure
Figure 6: IO balancer time series
Figure 7: Storage traffic
Figure 8: Storage used space
## Conclusion
The simulator for modeling storage infrastructure based on the event-driven
paradigm was presented. It allows researchers to try different I/O load
scenarios to test disk performance and model failures of its hardware
components. By providing large amounts of synthetic data of anomalies and time
series of a machine in various modes, the simulator can also be used as a
benchmark for comparing different change-point detection algorithms. In this
work, the density-ratio estimation CPD algorithm was successfully applied to
the simulator data.
This research was supported in part through computational resources of HPC
facilities at NRU HSE.
## References
* [1] Yang J and Feng-Bin Sun 1999 A comprehensive review of hard-disk drive reliability Annual Reliability and Maintainability. Symposium. 1999 Proceedings (Cat. No.99CH36283) pp 403–409
* [2] Strom B D, Lee S, Tyndall G W and Khurshudov A 2007 IEEE Transactions on Magnetics 43 3676–3684
* [3] Elerath J G 2000 Specifying reliability in the disk drive industry: No more mtbf’s Annual Reliability and Maintainability Symposium. 2000 Proceedings. International Symposium on Product Quality and Integrity (Cat. No.00CH37055) pp 194–199
* [4] Ganguly S, Consul A, Khan A, Bussone B, Richards J and Miguel A 2016 A practical approach to hard disk failure prediction in cloud platforms: Big data model for failure management in datacenters 2016 IEEE Second International Conference on Big Data Computing Service and Applications (BigDataService) pp 105–116
* [5] Aminikhanghahi S and Cook D J 2017 Knowledge and information systems 51 339–367
* [6] Kawahara Y and Sugiyama M 2009 Change-Point Detection in Time-Series Data by Direct Density-Ratio Estimation pp 389–400 (Preprint https://epubs.siam.org/doi/pdf/10.1137/1.9781611972795.34) URL https://epubs.siam.org/doi/abs/10.1137/1.9781611972795.34
* [7] Liu S, Yamada M, Collier N and Sugiyama M 2013 Neural Networks 43 72 – 83 ISSN 0893-6080 URL http://www.sciencedirect.com/science/article/pii/S0893608013000270
* [8] Hushchyn M, Arzymatov K and Derkach D 2020 Online neural networks for change-point detection (Preprint 2010.01388)
* [9] Karpov M, Arzymatov K, Belavin V, Sapronov A, Ustyuzhanin A and Nevolin A 2018 International Journal of Civil Engineering and Technology 9 220–226
* [10] Arzymatov K, Sapronov A, Belavin V, Gremyachikh L, Karpov M, Ustyuzhanin A, Tchoub I and Ikoev A 2020 PeerJ Computer Science 6 e271 ISSN 2376-5992 URL https://doi.org/10.7717/peerj-cs.271
* [11] Belavin V, Sapronov A, Arzymatov K, Karpov M, Nevolin A and Ustyuzhanin A 2018 Advances in Systems Science and Applications 18(4) 1–12
* [12] Mousavi S S, Schukat M and Howley E 2017 Lecture Notes in Networks and Systems 426–440 ISSN 2367-3389 URL http://dx.doi.org/10.1007/978-3-319-56991-832
* [13] Fishman G S 1978 Principles of discrete event simulation Wiley series on systems engineering and analysis (New York: Wiley) ISBN 9780471043959
* [14] Hushchyn M and Ustyuzhanin A 2020 Generalization of change-point detection in time series data based on direct density ratio estimation (Preprint 2001.06386)
# Hubs-biased resistance distances on graphs and networks
Ernesto Estrada1,2,3 and Delio Mugnolo4 1Institute of Mathematics and
Applications, University of Zaragoza, Pedro Cerbuna 12, Zaragoza 50009, Spain;
2ARAID Foundation, Government of Aragon, Spain. 3Institute for Cross-
Disciplinary Physics and Complex Systems (IFISC, UIB-CSIC), Campus Universitat
de les Illes Balears E-07122, Palma de Mallorca, Spain. 4Lehrgebiet Analysis,
Fakultät Mathematik und Informatik, FernUniversität in Hagen, D-58084 Hagen,
Germany;
###### Abstract
We define and study two new kinds of “effective resistances” based on hubs-
biased – hubs-repelling and hubs-attracting – models of navigating a
graph/network. We prove that these effective resistances are squared Euclidean
distances between the vertices of a graph. They can be expressed in terms of
the Moore–Penrose pseudoinverse of the hubs-biased Laplacian matrices of the
graph. We define the analogues of the Kirchhoff index of the graph based on
these resistance distances. We prove several results for the new resistance
distances and the Kirchhoff indices based on spectral properties of the
corresponding Laplacians. After an intensive computational search we
conjecture that the Kirchhoff index based on the hubs-repelling resistance
distance is not smaller than that based on the standard resistance distance,
and that the last is not smaller than the one based on the hubs-attracting
resistance distance. We also observe that in real-world brain and neural
systems the efficiency of standard random walk processes is as high as that of
hubs-attracting schemes. On the contrary, infrastructures and modular software
networks seem to be designed to be navigated by using their hubs.
AMS Subject Classification: 05C50; 05C82; 15A18; 47N50
###### keywords:
graph Laplacians; resistance distances; spectral properties; algebraic
connectivity; complex networks Corresponding author: Ernesto Estrada; email:
<EMAIL_ADDRESS>
## 1 Introduction
Random walk and diffusive models are ubiquitous in mathematics, physics,
biology and social sciences, in particular when the random walker moves
through the vertices and edges of a graph $G=\left(V,E\right)$ [35, 1, 12, 8,
36]. In this scenario a random walker at the vertex $j\in V$ of $G$ at time
$t$ can move to any of the nearest neighbors of $j$ with equal probability at
time $t+1$ [35]. That is, if as illustrated in Fig. 1.1(a) the vertex $j$ has
three nearest neighbors $\left\\{k,l,m\right\\}$ the random walker can move to
any of them with probability $p_{jm}=p_{jl}=p_{jk}=k_{j}^{-1}$, where $k_{j}$
is the degree of $j$. We can envisage situations in which the movement of
the random walker at a given position is facilitated by the large degree of
any of its nearest neighbors. Here we use a relaxed definition of the term
“hub”. Although this term is used in network theory for those nodes of
exceptionally large degree, we use it here in the following way: If there are
two connected vertices of different degree, we call a “hub” the one of larger
degree. This is formally defined later on in the paper.
Then, let us suppose that there are situations in which the probability that
the random walker moves to a nearest neighbor of $j$ at time $t+1$ increases
with the degree of the nearest neighbor. This is illustrated in Fig. 1.1(b)
where $k_{i}>k_{l}>k_{m}$, and consequently $p_{jm}<p_{jl}<p_{ji}$. We will
refer hereafter to this scenario as the “hubs-attracting” one. Another
possibility is that the random walker is repelled by large-degree vertices,
so that for $k_{i}>k_{l}>k_{m}$ we have $p_{jm}>p_{jl}>p_{ji}$, as
illustrated in Fig. 1.1(c). We will refer to this model as the “hubs-
repelling” one. These scenarios could be relevant in the context of real-world
complex networks where vertices represent the entities of the system and the
edges the interrelation between them [4, 42, 19]. An example of hubs-repelling
strategies of navigation are some of the diffusive processes in the brain
where there is a high energetic cost for navigating through the hubs of the
system [47, 46]. Hubs-attracting mechanisms could be exhibited, for instance,
by diffusive epidemic processes in which hubs are major attractors of the
disease and can be considered at high risk of contagion by spreaders in the
network [41].
Figure 1.1: Schematic illustration of the normal (a), hubs-attracting (b), and
hubs-repelling (c) schemes of a particle hopping on a network.
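To make the three hopping rules concrete, the following Python sketch normalizes weights $k_{w}^{-\alpha}$ over the neighbors of $j$, so that $\alpha=0$ gives the uniform walk, $\alpha=-1$ the hubs-attracting one, and $\alpha=1$ the hubs-repelling one. The degree values and this particular parametrization (which anticipates the conductances $c_{\alpha}$ of Section 3) are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Vertex j with neighbors m, l, i as in Fig. 1.1; the degrees are
# illustrative values chosen so that k_m < k_l < k_i.
neighbor_degrees = {"m": 1, "l": 2, "i": 4}

def hopping_probabilities(neighbor_degrees, alpha):
    """Probability of hopping to each neighbor, proportional to k_w**(-alpha):
    alpha = 0 uniform, alpha = -1 hubs-attracting, alpha = 1 hubs-repelling."""
    weights = np.array([k ** (-alpha) for k in neighbor_degrees.values()], float)
    weights /= weights.sum()
    return dict(zip(neighbor_degrees, weights))

p_unif = hopping_probabilities(neighbor_degrees, 0)    # p_jm = p_jl = p_ji = 1/3
p_attr = hopping_probabilities(neighbor_degrees, -1)   # p_jm < p_jl < p_ji
p_repe = hopping_probabilities(neighbor_degrees, 1)    # p_jm > p_jl > p_ji
```

With these toy degrees the attracting walk sends the particle to the largest-degree neighbor $i$ most often, and the repelling walk least often.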
From a mathematical perspective, one of the most important aspects of the
model of random walks on graphs is its connection with the graph Laplacian
matrix [37, 38, 29, 30, 45] and with the concept of resistance distance [17,
34, 43, 48, 28, 18]. In a simple graph the resistance distance is the
effective resistance between two vertices $v$ and $w$, which measures the
resistance of the total system when a voltage is connected across $v$ and $w$.
Klein and Randić [34] proved that the effective resistance is a squared
Euclidean distance between the two vertices of the graph, which can be
obtained from the Moore–Penrose pseudoinverse of the graph Laplacian. It is
also known that the commute time between two vertices $v$ and $w$ of a random
walker [17, 43, 28, 18], i.e., the number of steps of a random walker starting
at $v$, before arriving at $w$ and returning to $v$ again, is proportional to
the resistance distance between the two vertices.
Strategies for avoiding large/small degree vertices in random-walk processes
on graphs/networks have been proposed in the literature under the general
umbrella of ‘degree-biased random walks’ [9, 39, 27, 6]. However, in the
current work we go beyond the random walk formulation of the problem and
express it in terms of hubs-biased Laplacians [22, 26] and the corresponding
resistance distance matrices on graphs. Thus, we focus here on the algebraic
properties of these resistance distance matrices. We note in passing that the
current concept has nothing in common with the so-called “degree-resistance”
between a pair of vertices, which is nothing else than the resistance distance
multiplied by the difference of degrees of the vertices forming the
corresponding edge [31].
Due to the relation between resistance distance and commute time of random
walks on graphs, we study here the efficiency of hubs attracting/repelling
diffusive processes on graphs. In closing, in this work we define the hubs-
biased resistance distances between pairs of vertices of a simple graph and
study their main spectral properties. We also propose analogues of the
Kirchhoff index [34, 50, 44, 14], the semi-sum of all resistance distances in
the graph, for the hubs-biased resistances. We report here several bounds for
the two new kinds of resistance distances as well as for the corresponding
Kirchhoff indices. Finally, we study the commute time of the hubs-attracting
random walks and analyze their relative improvement over the standard one. We
observe that certain classes of real-world networks, such as brain/neuronal
networks and electronic circuits, have normal random walks as efficient as the
hubs-attracting one, while others, like infrastructural networks, can reduce
their average commuting times by 300% by using the hubs-attracting mechanism.
## 2 Preliminaries
In this article we consider simple weighted graphs. We impose the following
assumptions throughout.
###### Assumptions 2.1.
$G$ is a weighted graph with vertex set $V$ and edge set $E$. The underlying
unweighted graph $(V,E)$ is simple and finite: we denote by $n,m\in\mathbb{N}$
the number of vertices and edges, respectively. To avoid trivialities, we also
assume $n\geq 2$ and that no vertex is isolated. We also denote by
$\mathscr{C}$ its number of connected components.
Additionally, each edge is assigned a weight by means of a surjective mapping
$\varphi:E\rightarrow W$, with $W\subset(0,\infty)$.
In the following we use interchangeably the terms graphs and networks.
Let $A$ be the adjacency matrix of the (weighted) graph $G$ and let $k_{i}$
denote the degree of the vertex $i\in V$, i.e., the sum of the $i$th row or
column of $A$; or equivalently, in the unweighted case,
$k_{i}:=\\#\mathcal{N}_{i}$, where
$\mathcal{N}_{i}=\left\\{j\in V|\left(i,j\right)\in E\right\\}$ is
the set of all nearest neighbors of $i$. We will denote the minimum and
maximum degree by $\delta$ and $\varDelta$, respectively. Let $j$ be a node
such that $j\in{\mathcal{N}_{i}}$. Then, we say that $j$ is a “hub”,
more correctly a “local hub”, if it has the largest degree among all
vertices of ${\mathcal{N}_{i}}$. We will denote by $K$ the diagonal matrix of
vertex degrees. (In the case of weighted graphs the degree is often referred to
as the strength, but we will use the general term degree here in all cases.)
We use the following condensed notation across this paper. If $x_{\alpha}$ is
a number depending on an index $\alpha$ – in the following typically
$\alpha\in\left\\{-1,1\right\\}$ – then we will write $x_{\alpha}$ to
symbolize both $x_{-1}$ and $x_{1}$, depending on the choice of the index
$\alpha$. Let $\ell^{2}(V)$ be the finite-dimensional Hilbert space of
functions on $V$ with respect to the inner product
$\langle f,g\rangle=\sum_{v\in V}f(v)\overline{g(v)},\qquad
f,g\in\ell^{2}(V).$
The standard graph Laplacian is an operator in $\ell^{2}(V)$ which is defined
by
$\bigl{(}\mathscr{L}f\bigr{)}(v)\coloneqq\sum_{w\in V:\,(v,w)\in
E}{\varphi(v,w)}\bigl{(}f(v)-f(w)\bigr{)},\qquad f\in\ell^{2}(V),$ (2.1)
where $\varphi(v,w)\in W$ is the weight of the edge $(v,w)\in E$.
Finally, in the following ${1_{n}}$ will denote the all-ones column vector of
order $n$; $J_{n}$ the $n\times n$ all-ones matrix; and $I_{n}$ the identity
matrix of order $n$.
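As a concrete illustration of (2.1), the following Python sketch assembles the weighted Laplacian as a degree-diagonal matrix minus the weight matrix and checks two of its basic properties. The weighted 4-cycle and its edge weights are our own toy choices, not taken from the paper.

```python
import numpy as np

# Weighted 4-cycle as a toy example; the edge weights are arbitrary choices.
n = 4
weighted_edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 0.5, (3, 0): 1.5}

W = np.zeros((n, n))
for (v, w), phi in weighted_edges.items():
    W[v, w] = W[w, v] = phi          # symmetric weights varphi(v, w)

# (L f)(v) = sum_{w: (v,w) in E} varphi(v,w) (f(v) - f(w))  <=>  L = D - W
L = np.diag(W.sum(axis=1)) - W

assert np.allclose(L @ np.ones(n), 0)            # constants lie in the null space
assert np.allclose(L, L.T)                       # symmetric ...
assert np.all(np.linalg.eigvalsh(L) >= -1e-12)   # ... and positive semidefinite
```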
## 3 Hubs-biased Laplacians and their spectra
Here we introduce the concepts of hubs-biased Laplacians in the context of
resistive networks.
###### Definition 3.1.
A conductance function is a function $c:V\times V\rightarrow\mathbb{R}^{+}$
which respects adjacency between vertices, i.e., $\left(v,w\right)\in E$ if
and only if $c\left(v,w\right)>0$.
###### Definition 3.2.
The total conductance at a vertex $v$ is defined as
$\begin{split}c\left(v\right)&\coloneqq\sum_{\left(v,w\right)\in
E}c\left(v,w\right)\\\ &=\sum_{w\in V}c\left(v,w\right).\end{split}$ (3.1)
(The second equality in (3.1) holds in view of Definition 3.1.)
To motivate the following definition, let us now consider a diffusive particle
that does not necessarily hop to any nearest neighbor with the same
probability. More precisely, in a hubs-attracting model the hopping of a
diffusive particle from a vertex $i$ to a target neighboring vertex $j$ is
favored by a (comparatively) small degree of $i$ and a large degree of $j$
(“$j$ is a hub”), corresponding to a small ratio $\frac{k_{i}}{k_{j}}$. On the
other hand, the hopping from $i$ to $j$ is disfavored by a (comparatively)
large degree of $i$ and a (comparatively) small degree of $j$. Notice that
these conditions for the hopping from $i$ to $j$ are different from the ones
for the hopping from $j$ to $i$: hubs-attracting diffusion is not symmetric.
Symmetrically, hubs-repelling models are conceivable in which the hopping from
$i$ to $j$ would be favored by a comparatively large degree of $i$ and a
comparatively small degree of $j$, and disfavored by a comparatively small
degree of $i$ and a comparatively large degree of $j$.
We will capture this intuition by the following.
###### Definition 3.3.
Let $G$ be a graph satisfying the Assumptions 2.1 and let
$\alpha\in\left\\{-1,1\right\\}.$ The hubs-biased Laplacian corresponding to
$\alpha$ is the operator on $\ell^{2}(V)$ defined by
$\bigl{(}\mathscr{L}_{\alpha}f\bigr{)}(v)\coloneqq\sum_{w\in\mathcal{N}_{v}}c_{\alpha}\left(v,w\right)\bigl{(}f(v)-f(w)\bigr{)},\qquad
f\in\ell^{2}(V),$ (3.2)
where here and in the following
$c_{\alpha}\left(v,w\right):=\left(\dfrac{k_{v}}{k_{w}}\right)^{\alpha}.$
(3.3)
We call $\mathscr{L}_{\alpha}$ the hubs-repelling Laplacian if $\alpha=1$ and
hence $c_{\alpha}\left(v,w\right)=\dfrac{k_{v}}{k_{w}}$; or hubs-attracting
Laplacian if $\alpha=-1$ and hence
$c_{\alpha}\left(v,w\right)=\dfrac{k_{w}}{k_{v}}$.
###### Remark 3.4.
Actually, we could easily extend this definition by allowing for any
$\alpha\in{\mathbb{R}}$; for $\alpha=0$, corresponding to the unweighted case,
we would then recover the standard (discrete) Laplacian ${\mathscr{L}}_{0}$.
However, we will not explore this direction in this paper.
###### Remark 3.5.
In order to understand the main difference between the current approach and
some previous approaches introduced for the analysis of weighted directed
graphs we should point out the following. First, we analyze here unweighted
and undirected graphs, which are then transformed into a very specific kind of
weighted directed graphs. In [5, 49, 3, 7], among others, the authors focus on
weighted directed graphs. Then, they generate either asymmetric Laplacians
$L^{a}$ [5, 3] or symmetrized versions $L^{s}$ thereof [5, 7]. Yet another
approach was proposed by Young et al. [49] where the resistance distances are
calculated from a matrix $X=2Q^{T}{\Sigma}Q$ where
$\tilde{\mathscr{L}}{\Sigma}+{\Sigma}\tilde{\mathscr{L}}^{T}=I_{n-1}$,
$\tilde{\mathscr{L}}=QLQ^{T}$ and the matrix $Q$ obeys the following
conditions: $Q1_{n}=0$, $QQ^{T}=I_{n-1}$ and $Q^{T}Q=I_{n}-(1/n)J_{n}$. The
matrix $\tilde{\mathscr{L}}=Q{\mathscr{L}}Q^{T}$ is known as the reduced
Laplacian. In a further work, Fitch [25] demonstrated that the resistance
distances obtained through the matrix $X$ for any pair of vertices in any
connected, directed graph, are equal to the resistance distances obtained for
a certain symmetric, undirected Laplacian on the same set of nodes and
possibly admitting negative edge weights. As the hubs-biased Laplacian
matrices are non-symmetric in general, it is obvious that these two approaches
are not equivalent.
In the current study a different kind of matricial structure emerges, which is
neither the asymmetric cases previously considered nor a symmetric one. The
matrices $\mathscr{L}_{\alpha}$ correspond to the class of quasi-reciprocal
matrices (see [32]), which has not been previously used in the analysis of
graphs. An $n\times n$ matrix $M$ is called quasi-reciprocal if
$M_{ij}\neq 0\Rightarrow M_{ji}=M_{ij}^{-1},\forall i,j=1,2,\ldots,n.$ (3.4)
Therefore, due to this notable difference the approach developed here for the
resistance distance and related descriptions of undirected graphs are
substantially different from the ones previously analyzed for weighted
directed graphs [5, 49, 3, 7].
Let $(e_{v})_{v\in V}$ be the standard orthonormal basis of $\ell^{2}(V)$,
consisting of the vectors
$e_{v}(w)\coloneqq\begin{cases}1&\text{if $w=v$},\\\\[2.15277pt]
0&\text{otherwise}.\end{cases}$ (3.5)
Then, $\mathscr{L}_{\alpha}$ acts on the vectors $e_{v}$ as follows:
$(\mathscr{L}_{\alpha}e_{v})(w)=\begin{cases}c_{\alpha}(v)&\text{if
$w=v$},\\\\[2.15277pt] -c_{\alpha}\left(v,w\right)&\text{if $(v,w)\in
E$},\\\\[2.15277pt] 0&\text{otherwise},\end{cases}$ (3.6)
where
$c_{\alpha}(v)=\sum_{w\in\mathcal{N}_{v}}c_{\alpha}\left(v,w\right)$
(3.7)
is the total $\alpha$-conductance of the vertex $v$. Then, the hubs-biased
matrices can be expressed as
$\mathscr{L}_{\alpha}=\Xi_{\alpha}-K^{\alpha}AK^{-\alpha},$ (3.8)
where
$\begin{split}\Xi_{\alpha}&:=\operatorname{diag}\left(c_{\alpha}(v)\right)_{v\in
V},\\\ K&:=\operatorname{diag}\left(k_{v}\right)_{v\in V},\end{split}$ (3.9)
and $A$ is the unweighted adjacency matrix of the graph.
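A short Python sketch can make (3.8) concrete: it builds $\mathscr{L}_{\alpha}$ from the conductances (3.3), checks that this agrees with the explicit matrix products in (3.8), that constants lie in its null space, and that its nonzero off-diagonal part is quasi-reciprocal in the sense of Remark 3.5. The small "lollipop" graph is our own illustrative choice.

```python
import numpy as np

# Triangle {0,1,2} with a pendant vertex 3 attached to 2: a small
# non-regular toy example of our own choosing.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
k = A.sum(axis=1)                                # degrees (2, 2, 3, 1)
n = len(k)

def hubs_biased_laplacian(A, alpha):
    k = A.sum(axis=1)
    C = A * np.outer(k**alpha, k**(-alpha))      # c_alpha(v,w) = (k_v/k_w)**alpha
    return np.diag(C.sum(axis=1)) - C            # Xi_alpha - K^alpha A K^{-alpha}

for alpha in (-1, 1):
    L = hubs_biased_laplacian(A, alpha)
    # Same operator via the explicit matrix products of eq. (3.8):
    Ka = np.diag(k**alpha)
    Xi = np.diag((A * np.outer(k**alpha, k**(-alpha))).sum(axis=1))
    assert np.allclose(L, Xi - Ka @ A @ np.linalg.inv(Ka))
    # Row sums vanish, so constants are in the null space:
    assert np.allclose(L @ np.ones(n), 0)
    # Quasi-reciprocity of the nonzero off-diagonal entries:
    assert all(np.isclose(L[i, j] * L[j, i], 1.0)
               for i in range(n) for j in range(n) if i != j and A[i, j])
```

Note that for this non-regular graph $\mathscr{L}_{\alpha}$ is not symmetric: e.g., the entry for the edge $(2,3)$ is $-3$ in one direction and $-1/3$ in the other.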
Let us note an elementary but important fact concerning the trace of (any)
hubs-biased Laplacians.
###### Lemma 3.6.
There holds
$\operatorname{tr_{{}_{\,HB}}}\left(G\right):=\textnormal{tr}\left(\mathscr{L}_{1}\right)=\textnormal{tr}\left(\mathscr{L}_{-1}\right)=\sum_{v,w\in
V}a_{vw}\dfrac{k_{v}}{k_{w}}.$ (3.10)
###### Proof.
Let us, for $\alpha\in\\{-1,1\\}$, consider the definition of total
conductance and in view of (3.1),
$\textnormal{tr}\left(\mathscr{L}_{\alpha}\right)=\sum_{v\in
V}c_{\alpha}\left(v\right)=\sum_{v,w\in
V}a_{vw}\left(\dfrac{k_{v}}{k_{w}}\right)^{\alpha}.$ (3.11)
Now, because $a_{vw}=1$ if and only if $a_{wv}=1$ (and in this case both
addends $\dfrac{k_{v}}{k_{w}},\dfrac{k_{w}}{k_{v}}$ appear), we conclude that
$\textnormal{tr}\left(\mathscr{L}_{\alpha}\right)=\sum_{v\in
V}c_{\alpha}\left(v\right)=\sum_{v,w\in V}a_{vw}\dfrac{k_{v}}{k_{w}}:$ (3.12)
since the right-hand side is independent of $\alpha$, the claim is proved. ∎
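Lemma 3.6 is easy to check numerically; the Python sketch below uses the star $K_{1,3}$ (our own toy example), for which the three ordered center-leaf pairs contribute $3$ each and the three reversed pairs contribute $1/3$ each, giving a hubs-biased trace of $10$.

```python
import numpy as np

# Star K_{1,3}: center 0 (degree 3) joined to leaves 1, 2, 3 (degree 1).
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
k = A.sum(axis=1)

def hubs_biased_laplacian(A, alpha):
    k = A.sum(axis=1)
    C = A * np.outer(k**alpha, k**(-alpha))
    return np.diag(C.sum(axis=1)) - C

tr_repelling  = np.trace(hubs_biased_laplacian(A, 1))
tr_attracting = np.trace(hubs_biased_laplacian(A, -1))
tr_hb = (A * np.outer(k, 1 / k)).sum()   # sum_{v,w} a_vw k_v / k_w, eq. (3.10)

# Both traces coincide with the hubs-biased trace of eq. (3.10) ...
assert np.isclose(tr_repelling, tr_hb) and np.isclose(tr_attracting, tr_hb)
# ... which for K_{1,3} is 3 * (3 + 1/3) = 10.
assert np.isclose(tr_hb, 10.0)
```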
###### Remark 3.7.
A rather natural generalization of Definition 3.3 involves infinite graphs.
Let for a moment – unlike under the standing Assumptions 2.1! – the graph $G$
be allowed to have infinitely many vertices. If the degree sequence
$(k_{v})_{v\in V}$ is still bounded, then by (3.3) and (3.7) the sequence
$(c_{\alpha}(v))_{v\in V}$ is bounded, too; it is then easy to see that the
matrices $\Xi_{\alpha}$, $K^{\pm\alpha}$, and $A$ define bounded linear
operators on the (infinite dimensional!) space $\ell^{2}(V)$. We conclude that
for such infinite, uniformly locally bounded graphs $\mathscr{L}_{\alpha}$ is
well-defined as a bounded linear operator on $\ell^{2}(V)$, too. However, $A$
and hence $\mathscr{L}_{\alpha}$ may have absolutely continuous spectrum, rather
than a discrete set of eigenvalues as in the finite case: for this reason,
this setting is not convenient for our purposes; for example,
$\mathscr{L}_{\alpha}$ will not have finite trace.
The double-sided bound
$2m\dfrac{\delta}{\varDelta}\leq\operatorname{tr_{{}_{\,HB}}}\left(G\right)\leq
2m\dfrac{\varDelta}{\delta}$ (3.13)
on the hubs-biased trace immediately follows from (3.10), since $\delta\leq
k_{v}\leq\varDelta$ for all $v\in V$, and because $\sum_{v,w\in
V}a_{vw}=\sum_{v\in V}k_{v}=2m$ by the Handshaking Lemma. (We recall that
$\delta$ and $\varDelta$ denote the minimum and maximum degree of the vertices
of $G$, respectively, and $m$ is the number of edges of $G$.)
The estimates in (3.13) are rough, yet sharp as both inequalities become
equalities for complete graphs. (We should observe that if $G$ is complete,
then $\operatorname{tr_{{}_{\,HB}}}\left(G\right)$ also agrees with the trace
of the standard discrete Laplacian.)
We are able to provide a few improved estimates.
###### Lemma 3.8.
There holds
$\dfrac{1}{\varDelta}\sum_{v\in
V}k_{v}^{2}\leq\operatorname{tr_{{}_{\,HB}}}\left(G\right)\leq\dfrac{1}{\delta}\sum_{v\in
V}k_{v}^{2}.$ (3.14)
###### Proof.
The bounds follow directly from (3.10): since $\delta\leq k_{w}\leq\varDelta$
for all $w\in V$, each ratio satisfies $k_{v}/\varDelta\leq k_{v}/k_{w}\leq
k_{v}/\delta$, and moreover
$\sum_{w\in V}a_{vw}k_{v}=k_{v}\sum_{w\in V}a_{vw}=k_{v}^{2}\qquad\hbox{for
all $v\in V$}.$ (3.15)
Summing over $v\in V$ yields the claimed bounds. ∎
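Numerically, on the path $P_{4}$ (our own toy example, with degrees $1,2,2,1$) the estimates (3.14) are indeed sandwiched inside the rougher bounds (3.13):

```python
import numpy as np

# Path P_4: degrees (1, 2, 2, 1); m = 3 edges, delta = 1, Delta = 2.
A = np.zeros((4, 4))
for v, w in [(0, 1), (1, 2), (2, 3)]:
    A[v, w] = A[w, v] = 1.0
k = A.sum(axis=1)
m, delta, Delta = A.sum() / 2, k.min(), k.max()

tr_hb = (A * np.outer(k, 1 / k)).sum()   # eq. (3.10); here 7
sum_k2 = (k**2).sum()                    # first Zagreb index; here 10

# (3.13) is the outer pair, (3.14) the inner (sharper) pair:
assert 2*m*delta/Delta <= sum_k2/Delta <= tr_hb <= sum_k2/delta <= 2*m*Delta/delta
```

For this graph the chain of inequalities reads $3\leq 5\leq 7\leq 10\leq 12$.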
Because of the well-known identity
$\sum_{v\in V}k_{v}^{2}=\sum_{(v,w)\in E}\left(k_{v}+k_{w}\right),$
the estimates in (3.14) imply those in (3.13), but are clearly sharper if,
e.g., $G$ is bi-regular (recall that $G$ is bi-regular if $V$ can be
partitioned into $V_{1},V_{2}$ with $k_{v}\equiv k_{1}\in\mathbb{N}$ for all
$v\in V_{1}$ and $k_{w}\equiv k_{2}\in\mathbb{N}$ for all $w\in V_{2}$).
More explicit bounds in (3.14) can be obtained using known estimates on
$\sum_{v\in V}k_{v}^{2}$ and $\sum_{v\in V}k_{v}^{-1}$, the so-called (first)
Zagreb index and Randić index of $G$, respectively: we refer to [2] for a
comprehensive survey of results on this topic, including improving bounds for
special classes, like planar or triangle-free graphs, that might be of
interest in applications. In particular, we mention the following.
###### Corollary 3.9.
There holds
$\dfrac{4m^{2}}{n\varDelta}\leq\operatorname{tr_{{}_{\,HB}}}\left(G\right)\leq\dfrac{2m\left(2m+\left(n-1\right)\left(\varDelta-\delta\right)\right)}{\delta\left(n+\varDelta-\delta\right)}.$
(3.16)
The lower estimate becomes an equality if and only if $G$ is regular. The
upper estimate becomes an equality if and only if $G$ is a graph with $t$
vertices of degree $n-1$ and the remaining $n-t$ vertices forming an
independent set.
###### Proof.
The claimed bounds follow from the known estimates
$\dfrac{4m^{2}}{n}\leq\sum_{v\in
V}k_{v}^{2}\leq\dfrac{2m\left(2m+\left(n-1\right)\left(\varDelta-\delta\right)\right)}{n+\varDelta-\delta}:$
(3.17)
cf. [11, Remark 4 and Remark 5] and [13], where also the extremal graphs are
characterized. ∎
###### Remark 3.10.
(1) The lower estimate in (3.16) is significantly better than the lower
estimate in (3.13): indeed, the net effect is like replacing $\delta$ by the
average degree $\dfrac{2m}{n}$ in (3.13). The upper estimate in (3.16) is
better than the upper estimate in (3.13) if and only if
$2m+\left(n-1\right)\left(\varDelta-\delta\right)\leq\varDelta\left(n+\varDelta-\delta\right),$
(3.18)
which is sometimes the case, for instance for any regular graph, and sometimes
not, for instance for any path on more than 2 edges. The equality in (3.18)
holds for complete graphs.
(2) Further estimates on the hubs-biased trace may be obtained re-writing
(3.10) in alternative ways, including
$\sum_{v,w\in V}a_{vw}\dfrac{k_{v}}{k_{w}}=\sum_{v\in V}k_{v}\left(\sum_{w\in
V}\dfrac{a_{vw}}{k_{w}}\right)=\sum_{v\in
V}k_{v}\left(\sum_{w\in\mathcal{N}_{v}}\dfrac{1}{k_{w}}\right),$ (3.19)
or
$\sum_{v,w\in V}a_{vw}\dfrac{k_{v}}{k_{w}}=\sum_{v\in
V}\dfrac{1}{k_{v}}\left(\sum_{w\in V}a_{vw}k_{w}\right)=\sum_{v\in
V}\dfrac{1}{k_{v}}\left(\sum_{w\in\mathcal{N}_{v}}k_{w}\right)$ (3.20)
(we recall that $\mathcal{N}_{v}$ is the set of nearest neighbors of $v$).
Considering the minima $\delta_{\mathcal{N}_{v}}$ and maxima
$\varDelta_{\mathcal{N}_{v}}$ of the degree function in neighborhoods
$\mathcal{N}_{v}$ we can deduce from (3.19) and (3.20) the following sharper
estimates:
$\sum_{v\in
V}\dfrac{k_{v}^{2}}{\varDelta_{\mathcal{N}_{v}}}\leq\operatorname{tr_{{}_{\,HB}}}\left(G\right)\leq\sum_{v\in
V}\dfrac{k_{v}^{2}}{\delta_{\mathcal{N}_{v}}},$ (3.21)
$\sum_{v\in
V}\delta_{\mathcal{N}_{v}}\leq\operatorname{tr_{{}_{\,HB}}}\left(G\right)\leq\sum_{v\in
V}\varDelta_{\mathcal{N}_{v}},$ (3.22)
which shows, for instance, that
$\operatorname{tr_{{}_{\,HB}}}\left(G\right)=p^{2}+q^{2}$ for the complete
bipartite graph $K_{p,q}$, since both bounds in (3.22) then coincide.
(3) Invoking Titu’s Lemma we also obtain
$\sum_{w\in\mathcal{N}_{v}}\dfrac{1}{k_{w}}\geq\dfrac{\left(\sum_{w\in\mathcal{N}_{v}}1\right)^{2}}{\sum_{w\in\mathcal{N}_{v}}k_{w}}=\dfrac{k_{v}^{2}}{\sum_{w\in\mathcal{N}_{v}}k_{w}}.$
(3.23)
Because Titu’s Lemma is equivalent to the Cauchy–Schwarz inequality, the
previous inequality becomes an equality if and only if
$\left(1\right)_{w\in\mathcal{N}_{v}}$ and
$\left(k_{w}\right)_{w\in\mathcal{N}_{v}}$ are linearly dependent, i.e., if
and only if the degree function is constant on each neighborhood, which is the
case for instance in regular and bi-regular graphs. We finally deduce the
estimate
$\operatorname{tr_{{}_{\,HB}}}\left(G\right)\geq\sum_{v\in
V}\dfrac{k_{v}^{3}}{\sum_{w\in\mathcal{N}_{v}}k_{w}},$ (3.24)
which of course implies the lower estimate in (3.21).
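On a complete bipartite graph every neighborhood has constant degree, so the neighborhood bounds (3.21)-(3.22) and the Titu-type bound (3.24) all collapse to equalities. The Python check below (with $K_{2,3}$ as our own example) illustrates this, pinning the hubs-biased trace of $K_{p,q}$ down to $p^{2}+q^{2}$.

```python
import numpy as np

# Complete bipartite K_{2,3}: parts {0, 1} and {2, 3, 4}.
p, q = 2, 3
n = p + q
A = np.zeros((n, n))
A[:p, p:] = 1.0
A[p:, :p] = 1.0
k = A.sum(axis=1)                        # degrees: q on one part, p on the other
tr_hb = (A * np.outer(k, 1 / k)).sum()   # eq. (3.10)

nbr_min = np.array([k[A[v] > 0].min() for v in range(n)])
nbr_max = np.array([k[A[v] > 0].max() for v in range(n)])
nbr_sum = np.array([k[A[v] > 0].sum() for v in range(n)])

# (3.22): each neighborhood has constant degree, so both bounds coincide ...
assert np.isclose(nbr_min.sum(), tr_hb) and np.isclose(nbr_max.sum(), tr_hb)
# ... forcing tr_HB(K_{p,q}) = p^2 + q^2.
assert np.isclose(tr_hb, p**2 + q**2)
# (3.21) and the Titu-type bound (3.24) are also attained with equality here:
assert np.isclose((k**2 / nbr_max).sum(), tr_hb)
assert np.isclose((k**3 / nbr_sum).sum(), tr_hb)
```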
Let us now collect a few important properties of the hubs-biased Laplacian
matrices of any graphs satisfying our standing Assumptions 2.1.
###### Theorem 3.11.
Let $\alpha\in\\{-1,1\\}$. Then the hubs-biased Laplacian matrix
${\mathscr{L}}_{\alpha}$ enjoys the following properties:
(i) its eigenvalues are real;
(ii) it is positive semidefinite;
(iii) $\textnormal{{rank\,}}{\mathscr{L}}_{\alpha}=n-\mathscr{C}$;
(iv) it can be diagonalized as
${\mathscr{L}}_{\alpha}=\left(KU_{\alpha}\right)\Lambda_{\alpha}\left(KU_{\alpha}\right)^{-1}$,
where $\Xi_{\alpha}-A=U_{\alpha}\Lambda_{\alpha}U_{\alpha}^{-1}$ and $K$ is as
in (3.9).
###### Proof.
(i) Under the Assumptions 2.1, no vertex is isolated, hence $k_{i}\geq 1$ for
all $i$. Therefore, for any $\alpha\in\left\\{-1,1\right\\}$ the diagonal
matrix $K^{\alpha}$ is invertible and we observe that
$\begin{split}{\mathscr{L}}_{\alpha}&=\Xi_{\alpha}-K^{\alpha}AK^{-\alpha}\\\
&=K^{\alpha}\left(K^{-\alpha}\Xi_{\alpha}K^{\alpha}-A\right)K^{-\alpha}\\\
&=K^{\alpha}\left(\Xi_{\alpha}-A\right)K^{-\alpha},\end{split}$ (3.25)
since $\Xi_{\alpha}$ is diagonal, too. We conclude that
${\mathscr{L}}_{\alpha}$ is similar to the symmetric matrix
$\left(\Xi_{\alpha}-A\right)$, and so their eigenvalues are real.
(ii) Now let $x\in\ell^{2}(V)$ and $x\neq{0}$. Then, we can write
$\begin{split}x^{T}\left(\Xi_{\alpha}-A\right)x&=\sum_{\left(i,j\right)\in
E}\left(\left(k_{i}^{\alpha/2}k_{j}^{-\alpha/2}\right)x_{i}-\left(k_{j}^{\alpha/2}k_{i}^{-\alpha/2}\right)x_{j}\right)^{2}\end{split}\geq
0.$ (3.26)
Therefore, because $\Xi_{\alpha}-A$ and ${\mathscr{L}}_{\alpha}$ are similar,
we have that ${x}^{T}{\mathscr{L}}_{\alpha}{x}\geq 0$.
(iii) Let us now prove that the dimension of the null space of
${\mathscr{L}}_{\alpha}$ is $\mathscr{C}$. Let ${z}$ be a vector such that
${\mathscr{L}}_{\alpha}{z}=0$. By (3.25) the quadratic form in (3.26)
vanishes for $x=K^{-\alpha}z$, hence each summand there is zero, which forces
$z_{i}=z_{j}$ for every $\left(i,j\right)\in E$. Therefore ${z}$ takes the
same value on all vertices of the same connected component, which indicates
that the dimension of the null space is $\mathscr{C}$, and so
$\textnormal{{rank\,}}{\mathscr{L}}_{\alpha}=n-\mathscr{C}$.
(iv) Finally, we also have that because $\Xi_{\alpha}-A$ is symmetric we can
write it as: $\Xi_{\alpha}-A=U_{\alpha}\Lambda_{\alpha}U_{\alpha}^{-1}$. Thus,
${\mathscr{L}}_{\alpha}=\left(KU_{\alpha}\right)\Lambda_{\alpha}\left(KU_{\alpha}\right)^{-1}$
which indicates that all hubs-biased Laplacians are diagonalizable. ∎
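All four properties can be verified numerically; the Python sketch below uses a disconnected toy graph of our own choosing (a path $P_{3}$ plus a disjoint edge, so $n=5$ and $\mathscr{C}=2$).

```python
import numpy as np

# Path 0-1-2 plus the disjoint edge 3-4: n = 5 vertices, C = 2 components.
n, n_comp = 5, 2
A = np.zeros((n, n))
for v, w in [(0, 1), (1, 2), (3, 4)]:
    A[v, w] = A[w, v] = 1.0
k = A.sum(axis=1)

for alpha in (-1, 1):
    Cmat = A * np.outer(k**alpha, k**(-alpha))
    Xi = np.diag(Cmat.sum(axis=1))
    L = Xi - Cmat
    eig = np.linalg.eigvals(L)
    assert np.allclose(eig.imag, 0)                 # (i) real spectrum
    assert np.all(eig.real >= -1e-10)               # (ii) positive semidefinite
    assert np.linalg.matrix_rank(L) == n - n_comp   # (iii) rank = n - C
    # (iv) similarity (3.25): L_alpha = K^alpha (Xi_alpha - A) K^{-alpha}
    Ka = np.diag(k**alpha)
    assert np.allclose(L, Ka @ (Xi - A) @ np.linalg.inv(Ka))
```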
We know from Theorem 3.11 that all eigenvalues of any hubs-biased Laplacian
are real and nonnegative: we will denote them by
$\rho_{\alpha,1},\ldots,\rho_{\alpha,n}$ in ascending order, i.e.,
$0=\rho_{\alpha,1}\leq\ldots\leq\rho_{\alpha,n}.$
Let us conclude this section deducing some estimates on them. To avoid
trivialities, and in view of Theorem 3.11.(iii), in the remainder of this
section we always assume the graph $G$ to be connected.
###### Corollary 3.12.
Let $\alpha\in\left\\{-1,1\right\\}$. Then we have
$\rho_{\alpha,n}\geq\frac{4m^{2}}{n(n-1)\varDelta}$ (3.27)
and
$\left.\begin{array}[]{c}\dfrac{2m\left(2m+\left(n-1\right)\left(\varDelta-\delta\right)\right)}{\delta(n-1)\left(n+\varDelta-\delta\right)}\\\\[15.0pt]
\dfrac{2m\varDelta}{(n-1)\delta}\end{array}\right\\}\geq\rho_{\alpha,2}.$
(3.28)
We note in passing that on a $\left(k+1\right)$-star (i.e., a star with $k+1$
leaves, so that $n=k+2$, $m=k+1$, $\varDelta=k+1$ and $\delta=1$), the upper
bounds in (3.28) reduce to
$\left.\begin{array}[]{c}2+k\\\ 2k+2\end{array}\right\\}\geq\rho_{\alpha,2},$
(3.29)
which does not prevent $\rho_{\alpha,2}$ from tending to $+\infty$ as the
graph grows.
We also remark that these estimates are only really interesting for non-
regular graphs, because in regular graphs the hubs-biased Laplacians coincide
with the standard discrete Laplacian for which many different bounds are known
for its eigenvalues.
###### Proof.
Observe that $\dfrac{1}{n-1}\operatorname{tr_{{}_{\,HB}}}\left(G\right)$
is the arithmetic mean of all non-zero eigenvalues of
${\mathscr{L}}_{\alpha}$; in particular, the lowest non-zero eigenvalue
$\rho_{\alpha,2}$ cannot be larger than
$\dfrac{1}{n-1}\operatorname{tr_{{}_{\,HB}}}\left(G\right)$, while the largest
eigenvalue $\rho_{\alpha,n}$ cannot be smaller than
$\dfrac{1}{n-1}\operatorname{tr_{{}_{\,HB}}}\left(G\right)$. Taking into
account (3.13) and (3.16) we deduce the claimed estimates. ∎
###### Remark 3.13.
(1) The naive upper bound
$\rho_{\alpha,n}\leq 2\max_{v\in V}\left({\mathscr{L}}_{\alpha}\right)_{vv}$
(3.30)
on the largest eigenvalue of ${\mathscr{L}}_{\alpha}$ follows from Geršgorin’s
Theorem, since by (3.6) $\left({\mathscr{L}}_{\alpha}\right)_{vv}=\sum_{w\in
V}\left|\left({\mathscr{L}}_{\alpha}\right)_{vw}\right|$. Now,
$\begin{split}\left({\mathscr{L}}_{-1}\right)_{vv}&=\sum_{w\in\mathcal{N}_{v}}\dfrac{k_{w}}{k_{v}}\leq\dfrac{\varDelta_{\mathcal{N}_{v}}}{k_{v}}\sum_{w\in\mathcal{N}_{v}}1=\varDelta_{\mathcal{N}_{v}}\leq\varDelta,\\\
\left({\mathscr{L}}_{1}\right)_{vv}&=\sum_{w\in\mathcal{N}_{v}}\dfrac{k_{v}}{k_{w}}\leq\dfrac{k_{v}}{\delta}\sum_{w\in\mathcal{N}_{v}}1=\dfrac{k_{v}^{2}}{\delta}\leq\dfrac{\varDelta^{2}}{\delta},\end{split}$
(3.31)
which are both sharp for regular graphs. Plugging these expressions in (3.30)
yields
$\rho_{-1,n}\leq 2\varDelta$ (3.32)
and
$\rho_{1,n}\leq 2\dfrac{\varDelta^{2}}{\delta}.$ (3.33)
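On a strongly non-regular example such as the star $K_{1,4}$ (our own choice, with $\delta=1$ and $\varDelta=4$), the Geršgorin bounds (3.32) and (3.33) can be checked directly in Python:

```python
import numpy as np

# Star K_{1,4}: delta = 1, Delta = 4.
n = 5
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0
k = A.sum(axis=1)
delta, Delta = k.min(), k.max()

for alpha, bound in ((-1, 2 * Delta), (1, 2 * Delta**2 / delta)):
    C = A * np.outer(k**alpha, k**(-alpha))
    L = np.diag(C.sum(axis=1)) - C
    rho_max = np.linalg.eigvals(L).real.max()
    # (3.32): rho_{-1,n} <= 2 Delta;  (3.33): rho_{1,n} <= 2 Delta^2 / delta.
    assert rho_max <= bound + 1e-9
```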
(2) In order to show some lower bounds on $\rho_{\alpha,2}$, we remark that
the standard discrete Laplacian $\mathscr{L}$ and
${\mathscr{L}}_{\alpha}$ share the same null space, cf. the proof of Theorem
3.11.(iii). Then, we can quotient it out, thus studying the lowest eigenvalue
of $\mathscr{L}$ and ${\mathscr{L}}_{\alpha}$ on
$\mathbb{C}^{n}/\left\langle{\mathbf{1}}\right\rangle$, the space of vectors
orthogonal to the vector ${\mathbf{1}}=(1,\ldots,1)$. Taking a normalized
vector $f$ in this space one sees that
$\dfrac{\delta}{\varDelta}\left(\mathscr{L}f,f\right)\leq\left(\mathscr{L}_{\alpha}f,f\right).$
(3.34)
In particular, choosing $f$ to be an eigenfunction associated with the lowest
non-zero eigenvalue $\rho_{\alpha,2}$ of $\mathscr{L}_{\alpha}$ and applying
the Courant–Fischer min-max characterization of the eigenvalues of Hermitian
matrices we deduce
$\dfrac{\delta}{\varDelta}\rho_{2}\leq\rho_{\alpha,2},$ (3.35)
where $\rho_{2}$ is the second lowest eigenvalue of the discrete Laplacian,
i.e., the algebraic connectivity [16]. Alternatively, for $\alpha=-1$ we can
use
$\delta\left(\mathscr{L}_{\textnormal{norm}}f,f\right)\leq\left(\mathscr{L}_{\alpha}f,f\right),$
(3.36)
and deduce
$\delta\rho_{2,\textnormal{norm}}\leq\rho_{\alpha,2},$ (3.37)
where $\mathscr{L}_{\textnormal{norm}}\coloneqq K^{-1/2}\mathscr{L}K^{-1/2}$
is the normalized Laplacian. Now we can either apply explicit formulae for
$\rho_{2}$ and $\rho_{2,\textnormal{norm}}$ for classes of graphs or use
general estimates like
$\rho_{2}\geq 2\eta\left(1-\cos\dfrac{\pi}{n}\right),$ (3.38)
from [23] or
$\rho_{2,\textnormal{norm}}\geq\left\\{\begin{array}[]{c}\dfrac{1}{Dn},\\\
1-\cos\dfrac{\pi}{m},\end{array}\right.$ (3.39)
from [10, 33], respectively, where $\eta\geq 1$ is the edge connectivity and
$D$ is the diameter of $G$.
(3) Further sharp bounds on the Zagreb index are known, including [15, Theorem
2.3 and Theorem 2.6], immediately yielding
$\varDelta+\frac{(2m-\varDelta)^{2}}{\varDelta(n-1)}+\frac{2(n-2)(\varDelta_{2}-\delta)^{2}}{\varDelta(n-1)^{2}}\leq\operatorname{tr_{{}_{\,HB}}}(G)\leq\frac{(n+1)m-\varDelta(n-\varDelta)}{\delta}+\frac{2(m-\varDelta)^{2}}{\delta(n-2)},$
where $\varDelta_{2}$ denotes the second largest degree of $G$, the upper
bound holding under the additional assumption that $n\geq 3$. (A
characterization of the extremal graphs where equality is attained is
available, too, but is rather technical.)
Following the same proof as in Corollary 3.12 finally yields the bounds
$\rho_{\alpha,n}\geq\frac{\varDelta}{n-1}+\frac{(2m-\varDelta)^{2}}{\varDelta(n-1)^{2}}+\frac{2(n-2)(\varDelta_{2}-\delta)^{2}}{\varDelta(n-1)^{3}}$
and
$\frac{(n+1)m-\varDelta(n-\varDelta)}{\delta(n-1)}+\frac{2(m-\varDelta)^{2}}{\delta(n-2)(n-1)}\geq\rho_{\alpha,2}.$
## 4 Hubs-biased Resistance Distance
We adapt here some general definitions of resistive networks to the case of
hubs-biased systems, mainly following the classic formulations given in [17,
34, 18]. Let us consider $G$ as a resistive network in which every edge
$\left(v,w\right)\in E$ has edge resistance
$r_{\alpha}\left(v,w\right)\coloneqq c_{\alpha}^{-1}\left(v,w\right)$.
Let us consider a voltage source connected between the vertices $v$ and $w$,
and let $i\left(v,w\right)>0$ be the net current out of the source $v$ and
into the sink $w$, such that $i\left(v,w\right)=-i\left(w,v\right).$ Then,
according to Kirchhoff's first law we have that
$\sum_{w\in\mathcal{N}_{v}}i\left(v,w\right)=I$ if $v$ is a source,
$\sum_{w\in\mathcal{N}_{v}}i\left(v,w\right)=-I$ if $v$ is a sink, and zero
otherwise, where we have denoted by $I$ the net current flowing through the
whole network. The application of Kirchhoff's second law, namely that
$\sum_{\left(v,w\right)\in C}i\left(v,w\right)r_{\alpha}\left(v,w\right)=0$
where $C$ is a cycle with edges labeled in consecutive order, implies that a
potential $\mathscr{V}$ may be associated with any vertex $v$, such that for
all edges
$i\left(v,w\right)r_{\alpha}\left(v,w\right)=\mathscr{V}\left(v\right)-\mathscr{V}\left(w\right),$
(4.1)
which represents Ohm's law, and where $i$ and $\mathscr{V}$ depend on the
net current $I$ and on the pair of vertices where the voltage source has been
placed. Let us now define formally the hubs-biased effective resistance, which
is the resistance of the total system when a voltage source is connected
across a corresponding pair of vertices. Throughout this section we are still
adopting the notation in Section 3: in particular, $\alpha\in\\{-1,1\\}$.
###### Definition 4.1.
The hubs-biased effective resistance between the vertices $v$ and $w$ of $G$
is
$\Omega_{\alpha}\left(v,w\right):=\dfrac{\mathscr{V}\left(v\right)-\mathscr{V}\left(w\right)}{I}.$
(4.2)
We now prove the following result, showing that hubs-biased resistance between
any two vertices of $G$ is a squared Euclidean distance.
###### Lemma 4.2.
Let $v,w\in V$. Then for $\alpha\in\\{-1,1\\}$ the hubs-biased resistance
$\Omega_{\alpha}\left(v,w\right)$ is given by
$\Omega_{\alpha}\left(v,w\right)=\mathscr{L}_{\alpha}^{+}\left(v,v\right)+\mathscr{L}_{\alpha}^{+}\left(w,w\right)-\mathscr{L}_{\alpha}^{+}\left(v,w\right)-\mathscr{L}_{\alpha}^{+}\left(w,v\right),$
(4.3)
where $\mathscr{L}_{\alpha}^{+}$ stands for the Moore–Penrose pseudoinverse of
$\mathscr{L}_{\alpha}$.
###### Proof.
First, we will prove that
$\Omega_{\alpha}\left(v,w\right)=\left({e}_{v}-{e}_{w}\right)^{T}\mathscr{L}_{\alpha}^{+}\left({e}_{v}-{e}_{w}\right),$
(4.4)
where ${e}_{v}$ is the vector with all entries equal to zero except the one
corresponding to vertex $v$ which is equal to one. Using Kirchhoff's first
law together with Ohm's law we have
$\sum_{w\in\mathcal{N}_{v}}\dfrac{1}{r_{\alpha}\left(v,w\right)}\left(\mathscr{V}\left(v\right)-\mathscr{V}\left(w\right)\right)=\begin{cases}I\qquad&\hbox{if
$v$ is a source},\\\ -I\qquad&\hbox{if $v$ is a sink},\\\
0&\hbox{otherwise},\end{cases}$ (4.5)
which can also be written as
$c_{\alpha}\left(v\right)\mathscr{V}\left(v\right)-\sum_{w=1}^{n}\dfrac{1}{r_{\alpha}\left(v,w\right)}\mathscr{V}\left(w\right)=\begin{cases}I\qquad&\hbox{if
$v$ is a source},\\\ -I\qquad&\hbox{if $v$ is a sink},\\\
0&\hbox{otherwise}.\end{cases}$ (4.6)
Let us write it in matrix-vector form as
$\mathscr{L}_{\alpha}\mathscr{V}=I\left({e}_{v}-{e}_{w}\right).$ (4.7)
Due to the fact that the right-hand side of (4.7) is orthogonal to ${1}$, we
can solve (4.7) by means of the pseudoinverse $\mathscr{L}_{\alpha}^{+}$, obtaining
$\mathscr{V}\left(v\right)-\mathscr{V}\left(w\right)=\left({e}_{v}-{e}_{w}\right)^{T}\mathscr{{V}}=I\left({e}_{v}-{e}_{w}\right)^{T}\mathscr{L}_{\alpha}^{+}\left({e}_{v}-{e}_{w}\right).$
(4.8)
Then, using the definition of the effective resistance we have
$\Omega_{\alpha}\left(v,w\right)=\dfrac{\mathscr{V}\left(v\right)-\mathscr{V}\left(w\right)}{I}=\left({e}_{v}-{e}_{w}\right)^{T}\mathscr{L}_{\alpha}^{+}\left({e}_{v}-{e}_{w}\right).$
(4.9)
Now, because
$\left({e}_{v}-{e}_{w}\right)^{T}\mathscr{L}_{\alpha}^{+}\left({e}_{v}-{e}_{w}\right)=\mathscr{L}_{\alpha}^{+}\left(v,v\right)+\mathscr{L}_{\alpha}^{+}\left(w,w\right)-\mathscr{L}_{\alpha}^{+}\left(v,w\right)-\mathscr{L}_{\alpha}^{+}\left(w,v\right)$
it only remains to prove that this is a squared Euclidean distance. Let
$\mathscr{L}_{\alpha}=V_{\alpha}\Lambda_{\alpha}V_{\alpha}^{-1}$, where
$V_{\alpha}=KU_{\alpha}.$ Then
$\mathscr{L}_{\alpha}^{+}=V_{\alpha}\Lambda_{\alpha}^{+}V_{\alpha}^{-1},$ with
$\Lambda_{\alpha}^{+}$ being the Moore–Penrose pseudoinverse of the diagonal
matrix of eigenvalues of $\mathscr{L}_{\alpha},$ i.e., the diagonal matrix
whose $i$th entry is
$\Lambda_{\alpha}^{+}\left(i,i\right)=\begin{cases}0\quad&\hbox{if the $i$th
eigenvalue is $0$},\\\ \rho_{\alpha,i}^{-1}\quad&\hbox{if the $i$th eigenvalue
is $\neq 0$}.\end{cases}$
Let us write the right-hand side of (4.3) as
${v}_{\alpha}\Lambda_{\alpha}^{+}{v}_{\alpha}^{T}+{w}_{\alpha}\Lambda_{\alpha}^{+}{w}_{\alpha}^{T}-{v}_{\alpha}\Lambda_{\alpha}^{+}{w}_{\alpha}^{T}-{w}_{\alpha}\Lambda_{\alpha}^{+}{v}_{\alpha}^{T},$
(4.10)
where ${v}_{\alpha}$ and ${w}_{\alpha}$ are the rows of $V_{\alpha}$
corresponding to the vertices $v$ and $w$, respectively. Then, we have
$\begin{split}&\mathscr{L}_{\alpha}^{+}\left(v,v\right)+\mathscr{L}_{\alpha}^{+}\left(w,w\right)-\mathscr{L}_{\alpha}^{+}\left(v,w\right)-\mathscr{L}_{\alpha}^{+}\left(w,v\right)\\\
&={v}_{\alpha}\left(\Lambda_{\alpha}^{+}{v}_{\alpha}^{T}-\Lambda_{\alpha}^{+}{w}_{\alpha}^{T}\right)-{w}_{\alpha}\left(\Lambda_{\alpha}^{+}{v}_{\alpha}^{T}-\Lambda_{\alpha}^{+}{w}_{\alpha}^{T}\right)\\\
&=\left({v}_{\alpha}-{w}_{\alpha}\right)\Lambda_{\alpha}^{+}\left({v}_{\alpha}-{w}_{\alpha}\right)^{T}\\\
&=\left(\left({v}_{\alpha}-{w}_{\alpha}\right)\sqrt{\Lambda_{\alpha}^{+}}\right)\left(\left({v}_{\alpha}-{w}_{\alpha}\right)\sqrt{\Lambda_{\alpha}^{+}}\right)^{T}\\\
&=\left(\mathscr{V}_{\alpha}\left(v\right)-\mathscr{V}_{\alpha}\left(w\right)\right)\left(\mathscr{V}_{\alpha}\left(v\right)-\mathscr{V}_{\alpha}\left(w\right)\right)^{T}\\\
&=\left\|\mathscr{V}_{\alpha}\left(v\right)-\mathscr{V}_{\alpha}\left(w\right)\right\|^{2},\end{split}$
(4.11)
where
$\mathscr{V}_{\alpha}\left(v\right)={v}_{\alpha}\sqrt{\Lambda_{\alpha}^{+}}$
is the position vector of the vertex $v$ in the Euclidean space induced by the
hubs-repelling Laplacian. ∎
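As a concrete illustration of Eq. (4.9), the following Python sketch (ours, not from the paper) computes $\Omega\left(v,w\right)=\left({e}_{v}-{e}_{w}\right)^{T}\mathscr{L}^{+}\left({e}_{v}-{e}_{w}\right)$ for the 4-cycle $C_{4}$. Since $C_{4}$ is regular, the hubs-biased and standard Laplacians coincide (cf. Remark 4.12), so the symmetric-case identity $\mathscr{L}^{+}=\left(\mathscr{L}+J/n\right)^{-1}-J/n$ can be used; all helper names are ours.

```python
# Sketch: effective resistance via the Laplacian pseudoinverse, Eq. (4.9),
# on the 4-cycle C4 (regular, so hubs-biased = standard Laplacian).

def mat_inv(a):
    """Gauss-Jordan inverse of a small dense matrix (list of lists)."""
    n = len(a)
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for v, w in edges:
        L[v][w] -= 1.0
        L[w][v] -= 1.0
        L[v][v] += 1.0
        L[w][w] += 1.0
    return L

def resistance(L, v, w):
    n = len(L)
    shifted = [[L[i][j] + 1.0 / n for j in range(n)] for i in range(n)]
    Linv = mat_inv(shifted)                 # (L + J/n)^{-1}
    Lplus = [[Linv[i][j] - 1.0 / n for j in range(n)] for i in range(n)]
    return Lplus[v][v] + Lplus[w][w] - Lplus[v][w] - Lplus[w][v]

L = laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(round(resistance(L, 0, 1), 6))   # adjacent vertices: 1*3/(1+3) = 0.75
print(round(resistance(L, 0, 2), 6))   # opposite vertices: 2*2/(2+2) = 1.0
```

The printed values agree with the elementary series/parallel computation for a cycle of unit resistors.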
###### Corollary 4.3.
Let $\alpha\in\\{-1,1\\}$ and
$\mathscr{L}_{\alpha}=V_{\alpha}\Lambda_{\alpha}V_{\alpha}^{-1}$, where
$V_{\alpha}:=KU_{\alpha}=\left[{\psi}_{\alpha,1},{\psi}_{\alpha,2},\cdots,{\psi}_{\alpha,n}\right]$
and $\Lambda_{\alpha}:=\operatorname{diag}\left(\rho_{\alpha,k}\right)$. Then,
$\Omega_{\alpha}\left(u,w\right)=\sum_{k=2}^{n}\rho_{\alpha,k}^{-1}\left(\psi_{\alpha,k,u}-\psi_{\alpha,k,w}\right)^{2}.$
(4.12)
###### Proof.
It is easy to see from Lemma 4.2 that
$\begin{split}\Omega_{\alpha}\left(u,w\right)&=\sum_{k=2}^{n}\rho_{\alpha,k}^{-1}\psi_{\alpha,k,u}^{2}+\sum_{k=2}^{n}\rho_{\alpha,k}^{-1}\psi_{\alpha,k,w}^{2}-2\sum_{k=2}^{n}\rho_{\alpha,k}^{-1}\psi_{\alpha,k,u}\psi_{\alpha,k,w}\\\
&=\sum_{k=2}^{n}\rho_{\alpha,k}^{-1}\left(\psi_{\alpha,k,u}^{2}+\psi_{\alpha,k,w}^{2}-2\psi_{\alpha,k,u}\psi_{\alpha,k,w}\right)\\\
&=\sum_{k=2}^{n}\rho_{\alpha,k}^{-1}\left(\psi_{\alpha,k,u}-\psi_{\alpha,k,w}\right)^{2}.\end{split}$
(4.13)
∎
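The spectral formula (4.12) can be checked by hand on the 4-cycle $C_{4}$, whose standard Laplacian (which coincides with the hubs-biased one for this regular graph) has eigenvalues $0,2,2,4$ with well-known orthonormal eigenvectors. The sketch below (ours, not from the paper) hard-codes these eigenpairs:

```python
# Sketch: Corollary 4.3 evaluated with the known eigenpairs of the standard
# Laplacian of C4; the k = 1 pair (rho = 0, constant vector) drops out.
import math

s = 1 / math.sqrt(2)
eigs = [                      # (rho_k, psi_k) for k = 2..n
    (2.0, [s, 0.0, -s, 0.0]),
    (2.0, [0.0, s, 0.0, -s]),
    (4.0, [0.5, -0.5, 0.5, -0.5]),
]

def omega(u, w):
    return sum((psi[u] - psi[w]) ** 2 / rho for rho, psi in eigs)

print(round(omega(0, 1), 10))   # adjacent vertices of C4 -> 0.75
print(round(omega(0, 2), 10))   # opposite vertices -> 1.0
```

The values match the series/parallel resistances of a cycle of unit resistors, as they should.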
###### Corollary 4.4.
Let $\alpha\in\\{-1,1\\}$ and
$0=\rho_{\alpha,1}<\rho_{\alpha,2}\leq\cdots\leq\rho_{\alpha,n}$ be the
eigenvalues of $\mathscr{L}_{\alpha}$. Then,
$\dfrac{2}{\rho_{\alpha,n}}\leq\Omega_{\alpha}\left(u,w\right)\leq\dfrac{2}{\rho_{\alpha,2}}.$
(4.14)
###### Proof.
By Corollary 4.3, since $\rho_{\alpha,2}$ is the smallest nonzero eigenvalue
of $\mathscr{L}_{\alpha}$, the upper bound follows by replacing every
eigenvalue by $\rho_{\alpha,2}$ and using the facts that
$\sum_{k=1}^{n}\psi_{\alpha,k,u}^{2}=1$ and
$\sum_{k=1}^{n}\psi_{\alpha,k,u}\psi_{\alpha,k,w}=0$ for every $u\neq w$. The
lower bound is obtained similarly from the fact that $\rho_{\alpha,n}$ is the
largest eigenvalue of $\mathscr{L}_{\alpha}$. ∎
### 4.1 Hubs-biased Kirchhoff index
In full analogy with the definition of the so-called Kirchhoff index by Klein
and Randić [34] we define here the hubs-biased Kirchhoff indices of a graph.
###### Definition 4.5.
Let $\alpha\in\\{-1,1\\}$. The total hubs-biased resistance distance, or hubs-
biased Kirchhoff index, of a graph $G$ is defined as
$\mathcal{\mathscr{R}}_{\alpha}\left(G\right)=\sum_{v<w}\Omega_{\alpha}\left(v,w\right).$
(4.15)
###### Lemma 4.6.
Let $\alpha\in\\{-1,1\\}$ and
$0=\rho_{\alpha,1}<\rho_{\alpha,2}\leq\cdots\leq\rho_{\alpha,n}$ be the
eigenvalues of $\mathscr{L}_{\alpha}$. Then,
$\mathcal{\mathscr{R}}_{\alpha}\left(G\right)=n\sum_{k=2}^{n}\dfrac{1}{\rho_{\alpha,k}}.$
(4.16)
###### Proof.
Let us write the sum of the hubs-biased resistance distances as
$\begin{split}\dfrac{1}{2}\sum_{v,w\in V}\Omega_{\alpha}\left(v,w\right)&=\dfrac{1}{2}\left({1}^{T}\operatorname{diag}\left(\mathscr{L}_{\alpha}^{+}\right)\,{1}^{T}{1}+{1}^{T}{1}\left(\operatorname{diag}\left(\mathscr{L}_{\alpha}^{+}\right)\right)^{T}{1}-{1}^{T}\mathscr{L}_{\alpha}^{+}{1}-{1}^{T}\left(\mathscr{L}_{\alpha}^{+}\right)^{T}{1}\right)\\\
&=\dfrac{1}{2}\left(2n\,\textnormal{tr}\left(\mathscr{L}_{\alpha}^{+}\right)\right)\\\
&=n\sum_{k=2}^{n}\dfrac{1}{\rho_{\alpha,k}},\end{split}$
where $\textnormal{tr}\left(\mathscr{L}_{\alpha}^{+}\right)$ is the trace of
$\mathscr{L}_{\alpha}^{+}$, and the last two terms in the first line vanish
because $\mathscr{L}_{\alpha}^{+}{1}=0$. ∎
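A quick numerical sanity check of Lemma 4.6 (a sketch of ours, not from the paper): for the standard Laplacian of the path $P_{3}$, with eigenvalues $0,1,3$, the pairwise resistances are $1,1,2$ (unit resistors in series), so the Kirchhoff index must equal $4$; the same value is recovered as $n\,\textnormal{tr}\left(\mathscr{L}^{+}\right)$.

```python
# Sketch: R(G) = n * tr(L^+) = n * sum_k 1/rho_k for the path P3.

def mat_inv(a):
    """Gauss-Jordan inverse of a small dense matrix (list of lists)."""
    n = len(a)
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

n = 3
L = [[1.0, -1.0, 0.0],     # standard Laplacian of the path 0-1-2
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]
shifted = [[L[i][j] + 1.0 / n for j in range(n)] for i in range(n)]
Linv = mat_inv(shifted)
# tr(L^+) = tr((L + J/n)^{-1}) - 1, since L^+ = (L + J/n)^{-1} - J/n
trace_Lplus = sum(Linv[i][i] for i in range(n)) - 1.0
print(round(n * trace_Lplus, 10))        # n * tr(L^+) = 4.0
print(round(n * (1.0 + 1.0 / 3.0), 10))  # n * (1/1 + 1/3), rho = 1, 3 -> 4.0
```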
###### Corollary 4.7.
Let $\alpha\in\\{-1,1\\}$ and
$0=\rho_{\alpha,1}<\rho_{\alpha,2}\leq\cdots\leq\rho_{\alpha,n}$ be the
eigenvalues of $\mathscr{L}_{\alpha}$ for $G$ with $n\geq 2$. Then,
$\dfrac{n\left(n-1\right)}{\rho_{\alpha,n}}\leq\mathscr{R}_{\alpha}\left(G\right)\leq\dfrac{n\left(n-1\right)}{\rho_{\alpha,2}}.$
(4.17)
###### Lemma 4.8.
Let $\alpha\in\\{-1,1\\}$. Then
$\mathscr{R}_{\alpha}\left(G\right)\geq n-1,$
(4.18)
with equality if and only if $G=K_{n}$.
###### Proof.
Let $\mathcal{S}$ be the set of all column vectors $x$ such that $x\cdot x=1$
and $x\cdot{e}=0$. Then, it is known that
$\rho_{\alpha,2}=\min_{x\in\mathcal{S}}x^{T}\mathscr{L}_{\alpha}x.$ (4.19)
One can then show that the matrix
$\mathscr{\tilde{L}}_{\alpha}\coloneqq\mathscr{L}_{\alpha}-\rho_{\alpha,2}\left(I_{n}-n^{-1}J_{n}\right)$
is also positive semidefinite. Indeed, writing any vector as
$y=c_{1}{e}+c_{2}x$ with $x\in\mathcal{S}$, and using
$\mathscr{\tilde{L}}_{\alpha}{e}=0$, we have
$y^{T}\mathscr{\tilde{L}}_{\alpha}y=c_{2}^{2}x^{T}\mathscr{\tilde{L}}_{\alpha}x=c_{2}^{2}\left(x^{T}\mathscr{L}_{\alpha}x-\rho_{\alpha,2}\right)\geq
0.$ (4.20)
In particular, the diagonal entries of $\mathscr{\tilde{L}}_{\alpha}$ are
nonnegative, so that
$\min_{v}\mathscr{L}_{\alpha}\left(v,v\right)-\rho_{\alpha,2}\left(1-n^{-1}\right)\geq
0.$ (4.21)
We can then write,
$\rho_{\alpha,2}\leq\dfrac{n}{n-1}\min_{v}\mathscr{L}_{\alpha}\left(v,v\right).$
(4.22)
Now, because $\min_{v}\mathscr{L}_{\alpha}\left(v,v\right)$ cannot be larger
than $\varDelta/\delta\leq n-1$, we have $\rho_{\alpha,2}\leq n$, and then
using the result in Corollary 4.7 we obtain
$\mathscr{R}_{\alpha}\left(G\right)\geq n-1$.
The equality is attained only for the complete graph, where
$\rho_{\alpha,k}=n$ for all $k\geq 2$, which proves the result. ∎
We now obtain some bounds for the Kirchhoff index for $\alpha=1$ and
$\alpha=-1$, respectively. (As usual, in the following $n$ denotes the number
of vertices of $G$, while $\delta,\varDelta$ are its smallest and largest
degree, respectively.)
###### Lemma 4.9.
Let $\alpha=1$. Then
$\mathscr{R}_{\alpha=1}\left(G\right)\geq\dfrac{n\left(n-1\right)\delta}{\varDelta^{2}}.$
(4.23)
###### Proof.
We already know from Remark 3.13 that
$\rho_{\alpha=1,n}\leq\varDelta^{2}/\delta$. Indeed,
$c_{\alpha=1}(v)=k_{v}\sum_{j\in\eta_{v}}k_{j}^{-1}\leq\varDelta^{2}/\delta$,
and using Corollary 4.7 we obtain the result. ∎
###### Remark 4.10.
The four graphs with the largest value of
$\mathcal{\mathcal{\mathscr{R}}_{\mathnormal{\alpha=1}}}\left(G\right)$ among
all connected graphs with 8 vertices are illustrated in Fig. 4.1.
Figure 4.1: Graphs with the maximum values of
$\mathcal{\mathcal{\mathscr{R}}_{\mathnormal{\alpha=1}}}\left(G\right)$ among
all connected graphs with 8 vertices.
###### Lemma 4.11.
Let $\alpha=-1$. Then
$\mathscr{R}_{\alpha=-1}\left(G\right)\geq\dfrac{n\left(n-1\right)}{2\varDelta}.$
(4.24)
###### Proof.
Again by Remark 3.13, $\rho_{\alpha=-1,n}\leq 2\varDelta$. Indeed,
$\underset{i}{\max}\,c_{\alpha=-1}\left(i\right)=\underset{i}{\max}\,k_{i}^{-1}\sum_{v\in\eta_{i}}k_{v}$,
with the maximum attained at vertices $i$ of minimal degree $k_{i}=\delta$
whose $\delta$ neighbors $v\in\eta_{i}$ all have the maximum degree
$\varDelta$. The result then follows from Corollary 4.7. ∎
The four graphs with the largest value of
$\mathcal{\mathcal{\mathscr{R}}_{\mathnormal{\alpha=-1}}}\left(G\right)$ among
all connected graphs with 8 vertices are illustrated in Fig. 4.2.
Figure 4.2: Graphs with the maximum values of
$\mathcal{\mathcal{\mathscr{R}}_{\mathnormal{\alpha=-1}}}\left(G\right)$ among
all connected graphs with 8 vertices.
###### Remark 4.12.
In $d$-regular graphs,
$c_{\alpha=1}\left(v,w\right)=c_{\alpha=-1}\left(v,w\right)=1$ for all
$\left(v,w\right)\in E$. Thus,
$\Omega_{\alpha=1}\left(v,w\right)=\Omega_{\alpha=-1}\left(v,w\right)=\Omega\left(v,w\right)$
for all $\left(v,w\right)\in E$, and
$\mathscr{R}_{\alpha=1}\left(G\right)=\mathscr{R}_{\alpha=-1}\left(G\right)=\mathscr{R}_{\alpha=0}\left(G\right)$.
We then calculated the hubs-repelling
$\mathscr{R}_{\alpha=1}\left(G\right)$, hubs-attracting
$\mathscr{R}_{\alpha=-1}\left(G\right)$ and standard
$\mathscr{R}_{\alpha=0}\left(G\right)$ Kirchhoff indices for all connected
graphs with $5\leq n\leq 8$. For all of these more than 12,000 graphs we
observed that
$\mathscr{R}_{\alpha=1}\left(G\right)\geq\mathscr{R}_{\alpha=0}\left(G\right)\geq\mathscr{R}_{\alpha=-1}\left(G\right)$.
Finally we formulate the following conjecture for the Kirchhoff indices of
graphs.
###### Conjecture 4.13.
Let $\alpha\in\\{-1,1\\}$. Then
$\mathscr{R}_{\alpha=1}\left(G\right)\geq\mathscr{R}_{\alpha=0}\left(G\right)\geq\mathscr{R}_{\alpha=-1}\left(G\right),$
(4.25)
with equality if and only if $G$ is regular.
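The conjectured ordering can be probed numerically. In the sketch below (ours, not from the paper) the hubs-biased edge conductance is taken to be $\left(k_{v}/k_{w}\right)^{\alpha}$, an assumption consistent with the diagonal entries $c_{\alpha}\left(v\right)$ quoted in Lemmas 4.9 and 4.11; it exploits that $\mathscr{L}_{\alpha}$ is then similar, via a diagonal matrix, to a symmetric matrix $S_{\alpha}$ with entries $-1$ on edges and null vector $x_{v}=k_{v}^{-\alpha}$, so that $\textnormal{tr}\left(\mathscr{L}_{\alpha}^{+}\right)=\textnormal{tr}\left(S_{\alpha}^{+}\right)$.

```python
# Sketch: R_alpha = n * tr(S_alpha^+) for the star graph (hub + 3 leaves),
# with S_alpha^+ obtained as (S + x x^T)^{-1} - x x^T for the unit null
# vector x. Conductance convention (k_v/k_w)^alpha is an assumption.
import math

def mat_inv(a):
    """Gauss-Jordan inverse of a small dense matrix (list of lists)."""
    n = len(a)
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

def kirchhoff(edges, n, alpha):
    k = [0] * n
    for v, w in edges:
        k[v] += 1
        k[w] += 1
    S = [[0.0] * n for _ in range(n)]     # symmetric matrix similar to L_alpha
    for v, w in edges:
        S[v][w] = S[w][v] = -1.0
    for v in range(n):
        S[v][v] = sum((k[v] / k[w]) ** alpha
                      for w in range(n) if S[v][w] != 0.0)
    x = [kv ** (-alpha) for kv in k]      # null vector of S_alpha
    nx = math.sqrt(sum(xi * xi for xi in x))
    x = [xi / nx for xi in x]
    M = [[S[i][j] + x[i] * x[j] for j in range(n)] for i in range(n)]
    Minv = mat_inv(M)                     # S^+ = M^{-1} - x x^T
    return n * sum(Minv[i][i] - x[i] * x[i] for i in range(n))

star = [(0, 1), (0, 2), (0, 3)]           # hub 0 with three leaves
R1, R0, Rm1 = (kirchhoff(star, 4, a) for a in (1, 0, -1))
print(round(R1, 4), round(R0, 4), round(Rm1, 4))  # 24.4286 9.0 3.6667
assert R1 >= R0 >= Rm1                    # ordering of (4.25)
```

For this maximally heterogeneous graph the ordering of (4.25) holds strictly, with a large gap between the hubs-repelling and hubs-attracting indices.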
## 5 Computational results
### 5.1 Random-walks connection
Let us consider a “particle” performing a standard random walk through the
vertices and edges of any of the networks studied here. Random walks on
graphs [35] and networks [36] are among the most fundamental stochastic
processes, used to model diffusive processes, different kinds of
interactions, and opinions among humans and animals [36]. They can also be
used to extract information about the structure of networks, including the
detection of dense groups of entities in a network [36]. We therefore
consider a random walk on $G$, which represents the real-world network under
consideration. We start at a vertex $v_{0}$; if at the $r$th step we are at a
vertex $v_{r}$, we move to any neighbor of $v_{r}$ with probability
$k_{v_{r}}^{-1}$, where $k_{v_{r}}$ is the degree of the vertex $v_{r}$.
Clearly, the sequence of random vertices $\left(v_{t}:t=0,1,\ldots\right)$ is
a Markov chain [35].
An important quantity in the study of random walks on graphs is the access or
hitting time $\mathcal{H}\left(v,w\right)$, which is the expected number of
steps before vertex $w$ is visited, starting from vertex $v$ [35]. The sum
$\mathcal{C}\left(v,w\right)=\mathcal{H}\left(v,w\right)+\mathcal{H}\left(w,v\right)$
is called the commute time, i.e., the expected number of steps in a random
walk starting at $v$ before vertex $w$ is visited and the walker then returns
to vertex $v$ [35]. The connection between random walks and resistance
distances on graphs is then provided by the following result (see for
instance [1]), where, as usual, $c\left(v\right)$ denotes the conductance of
vertex $v$.
###### Lemma 5.1.
For any two vertices $v$ and $w$ in $G$, the commute time is
$\mathcal{C}\left(v,w\right)=\textnormal{vol}(G)\Omega\left(v,w\right).$ (5.1)
Here and in the following,
$\textnormal{vol}(G)=\sum\limits_{v=1}^{n}c\left(v\right)$ is the volume of
the graph. (Notice that if the graph is unweighted (the case formally
corresponding to $\alpha=0$) then by the Handshaking Lemma
$\textnormal{vol}(G)=2m$.) The “efficiency” of a standard random walk process
on $G$ can then be measured by
$\varepsilon\left(G\right)=1/\sum_{v,w}{\mathcal{C}}\left(v,w\right).$ (5.2)
That is, if a standard random walker on a graph uses small times to commute
between every pair of vertices in the graph, it is an efficient navigational
process. On the contrary, large commute times between pairs of vertices
reveal a very inefficient process. Obviously,
$\varepsilon\left(G\right)=1/\left(\textnormal{vol}(G)\sum_{v,w}\Omega\left(v,w\right)\right)$.
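Lemma 5.1 can be illustrated with a small Monte-Carlo experiment (a sketch of ours, not from the paper). For the path $P_{3}$ with vertices $0$-$1$-$2$ we have $\textnormal{vol}(G)=2m=4$ and $\Omega\left(0,2\right)=2$ (two unit resistors in series), so the commute time between the two end vertices should be $4\cdot 2=8$.

```python
# Sketch: Monte-Carlo check of C(v,w) = vol(G) * Omega(v,w) on the path P3.
import random

adj = {0: [1], 1: [0, 2], 2: [1]}

def hitting_time(src, dst, rng):
    """One realization of the number of steps to reach dst from src."""
    v, steps = src, 0
    while v != dst:
        v = rng.choice(adj[v])
        steps += 1
    return steps

rng = random.Random(7)
trials = 20000
mean_commute = sum(hitting_time(0, 2, rng) + hitting_time(2, 0, rng)
                   for _ in range(trials)) / trials
print(mean_commute)          # close to vol(G) * Omega(0,2) = 8
assert abs(mean_commute - 8.0) < 0.3
```

The exact hitting times here are $\mathcal{H}\left(0,2\right)=\mathcal{H}\left(2,0\right)=4$, which the sample mean reproduces to within statistical error.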
We now extend these concepts to hubs-biased random walks and their Kirchhoff
indices $\mathscr{R}\left(G\right)$,
$\mathscr{R}_{\alpha=-1}\left(G\right)$ and
$\mathscr{R}_{\alpha=1}\left(G\right)$. Following a similar reasoning as
before, we define the efficiencies of the hubs-biased random walks by:
$\varepsilon_{\alpha}\left(G\right)\coloneqq
1/\left(\textnormal{vol}_{\alpha}\sum_{v,w}{\Omega}_{\alpha}\left(v,w\right)\right),$
(5.3)
where $\textnormal{vol}_{\alpha}=\sum_{v}c_{\alpha}\left(v\right)$ is the
volume of the graph with the hubs-biased conductances. We are interested here
in the efficiency of the hubs-biased
random walk processes relative to the standard random walk. We propose to
measure these relative efficiencies by
$\mathscr{E}_{\alpha}\left(G\right)\coloneqq\dfrac{\varepsilon_{\alpha}\left(G\right)}{\varepsilon\left(G\right)}=\dfrac{\textnormal{vol}}{\textnormal{vol}_{\alpha}}\dfrac{\mathcal{\mathcal{\mathscr{R}}}\left(G\right)}{\mathcal{\mathcal{\mathscr{R}}_{\mathnormal{\alpha}}}\left(G\right)}.$
(5.4)
When $\mathscr{E}_{\alpha}\left(G\right)>1$ the hubs-biased random walk is
more efficient than the standard random walk. On the other hand, when
$\mathscr{E}_{\alpha}\left(G\right)<1$ the standard random walk is more
efficient than the hubs-biased one. When the efficiencies of both processes,
hubs-biased and standard, are similar we have
$\mathscr{E}_{\alpha}\left(G\right)\approx 1$.
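The following sketch (ours, not from the paper; the hubs-biased edge conductance $\left(k_{v}/k_{w}\right)^{\alpha}$ is an assumption consistent with the diagonal entries $c_{\alpha}\left(v\right)$ quoted in Lemmas 4.9 and 4.11) evaluates Eq. (5.4) for the star graph with one hub and three leaves:

```python
# Sketch: relative efficiencies E_alpha of Eq. (5.4) for the star graph,
# using R_alpha = n * tr(S_alpha^+), where S_alpha is the symmetric matrix
# similar to L_alpha with null vector x_v = k_v^(-alpha).
import math

def mat_inv(a):
    """Gauss-Jordan inverse of a small dense matrix (list of lists)."""
    n = len(a)
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

def degrees(edges, n):
    k = [0] * n
    for v, w in edges:
        k[v] += 1
        k[w] += 1
    return k

def kirchhoff(edges, n, alpha):
    k = degrees(edges, n)
    S = [[0.0] * n for _ in range(n)]
    for v, w in edges:
        S[v][w] = S[w][v] = -1.0
    for v in range(n):
        S[v][v] = sum((k[v] / k[w]) ** alpha
                      for w in range(n) if S[v][w] != 0.0)
    x = [kv ** (-alpha) for kv in k]
    nx = math.sqrt(sum(xi * xi for xi in x))
    x = [xi / nx for xi in x]
    Minv = mat_inv([[S[i][j] + x[i] * x[j] for j in range(n)]
                    for i in range(n)])
    return n * sum(Minv[i][i] - x[i] * x[i] for i in range(n))

def volume(edges, n, alpha):
    k = degrees(edges, n)
    both = set(edges) | {(w, v) for v, w in edges}   # both edge orientations
    return sum((k[v] / k[w]) ** alpha for v, w in both)

star, n = [(0, 1), (0, 2), (0, 3)], 4
R0, vol0 = kirchhoff(star, n, 0), volume(star, n, 0)
E = {a: (vol0 / volume(star, n, a)) * (R0 / kirchhoff(star, n, a))
     for a in (1, -1)}
print(round(E[-1], 4), round(E[1], 4))   # about 1.4727 and 0.2211
assert E[-1] > 1.0 > E[1]
```

Consistent with Section 5.2, on this highly heterogeneous graph the hubs-attracting walk beats the standard one, while the hubs-repelling walk is far less efficient.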
### 5.2 Efficiency in small graphs
We start by analyzing the 11,117 connected graphs with 8 vertices that we
studied previously. In Fig. 5.1 we illustrate the results graphically. As can
be seen:
* 1.
$\mathscr{E}_{\alpha=-1}\left(G\right)>1$ for 95.7% of the graphs considered
(10,640 out of 11,117), indicating that in the majority of graphs a hubs-
attracting random walk can be more efficient than the standard random walk
driven by the unweighted Laplacian, corresponding to $\alpha=0$;
* 2.
$\mathscr{E}_{\alpha=1}\left(G\right)>1$ only in 91 out of the 11,117 graphs,
which indicates that hubs-repelling random walks are more efficient than the
standard random walk only for very few graphs;
* 3.
All graphs for which $\mathscr{E}_{\alpha=1}\left(G\right)\geq 1$ also have
$\mathscr{E}_{\alpha=-1}\left(G\right)\geq 1$, with equality only for regular
graphs;
* 4.
Only 461 graphs (4.15% of all graphs considered) have simultaneously
$\mathscr{E}_{\alpha=1}\left(G\right)<1$ and
$\mathscr{E}_{\alpha=-1}\left(G\right)<1$. These are graphs for which the
standard random walk is more efficient than both hubs-biased random walks;
* 5.
Only 17 graphs have
$\mathscr{E}_{\alpha=1}\left(G\right)=\mathscr{E}_{\alpha=-1}\left(G\right)=1$.
These are precisely the 17 connected regular graphs with 8 vertices that exist.
Figure 5.1: Plot of the efficiency of hubs-biased random walks relative to the
standard one for all 11,117 connected graphs with 8 vertices.
These results give some interesting hints about the structural
characteristics that graphs must display in order to benefit from hubs-biased
processes. For instance, the fact that most graphs benefit from
hubs-attracting ($\alpha=-1$) random walks is explained as follows. A
standard random walk typically does not follow the shortest topological path
connecting a pair of non-adjacent vertices. However, the number of shortest
paths crossing a vertex increases with the degree of that vertex. For
instance, let $k_{v}$ and $t_{v}$ be the degree of a vertex $v$ and the
number of triangles incident to it. The number of shortest paths crossing $v$
is then $P\geq\dfrac{1}{2}k_{v}\left(k_{v}-1\right)-t_{v}$. Therefore, the
hubs-attracting strategy induces the random walker to navigate the network
using many of the shortest paths interconnecting pairs of vertices, which
decreases the commute times and increases the efficiency of the process. Most
networks benefit from this strategy.
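The counting argument above can be made concrete with a short sketch (ours, not from the paper): every pair of neighbors of $v$ that does not itself form an edge, i.e., does not close a triangle, is joined by a shortest path of length two through $v$.

```python
# Sketch: P >= k_v(k_v-1)/2 - t_v for a hypothetical small graph,
# here a triangle {0,1,2} plus a pendant vertex 3 attached to 0.
from itertools import combinations

edges = {(0, 1), (0, 2), (0, 3), (1, 2)}
adj = {v: set() for v in range(4)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

v = 0
k_v = len(adj[v])
# triangles at v = adjacent pairs of neighbors of v
t_v = sum(1 for a, b in combinations(adj[v], 2) if b in adj[a])
# non-adjacent neighbor pairs: their (length-2) shortest paths cross v
paths_through_v = sum(1 for a, b in combinations(adj[v], 2) if b not in adj[a])
bound = k_v * (k_v - 1) // 2 - t_v
print(paths_through_v, bound)   # 2 2
assert paths_through_v >= bound
```

Here $k_{v}=3$ and $t_{v}=1$, so the bound $3-1=2$ is attained by the two leaf pairs whose only shortest connection runs through the hub.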
The case of the hubs-repelling strategies is more subtle. To reveal the
details we illustrate in Fig. 5.2 the four graphs having the minimum
efficiency of a hubs-repelling ($\alpha=1$) random walk. As can be seen, all
these graphs have star-like structures, which are the graphs with the largest
possible degree heterogeneity [20, 21]. Therefore, in these graphs the use of
hubs-repelling strategies leaves the random walker trapped at small-degree
vertices without the possibility of visiting other vertices, because such
navigation would require crossing the hubs of the graph, a process which is
impeded by the hubs-repelling strategy.
Figure 5.2: Illustration of the graphs with the minimum values of
$\mathscr{E}_{\alpha=1}\left(G\right)$ among the 11,117 connected graphs with
8 vertices.
The previous analysis allows us to consider why so few graphs display
$\mathscr{E}_{\alpha=1}\left(G\right)>1$. Such graphs have to display very
small degree heterogeneity, but without being regular, as for regular graphs
$\mathscr{E}_{\alpha=1}\left(G\right)=1$. The four graphs with the highest
value of $\mathscr{E}_{\alpha=1}\left(G\right)$ among all connected graphs
with 8 vertices are illustrated in Fig. 5.3. As can be seen, these graphs
display “quasi-regular” structures but have at least one pendant vertex
connected to another low-degree vertex. In a hubs-repelling strategy this
relatively isolated (pendant) vertex has larger chances (than in a standard
random walk) of being visited by the random walker, who is escaping from the
vertices of larger degree.
Figure 5.3: Illustration of the graphs with the maximum values of
$\mathscr{E}_{\alpha=1}\left(G\right)$ among the 11,117 connected graphs with
8 vertices.
In closing, we have observed that hubs-attracting random walks are very
efficient in most graphs, because such processes increase the chances of
navigating the graph through its shortest paths. On the other hand,
hubs-repelling random walks are efficient only in those quasi-regular graphs
having some relatively isolated vertices, which can be visited by the
hubs-repelling walker with higher chances than in a standard random walk.
### 5.3 Efficiency in real-world networks
Here we study 59 real-world networks representing brain/neuronal systems,
electronic circuits, social systems, food webs, protein-protein interaction
networks (PIN), modular software, citation networks, transcription networks
and infrastructure networks. The description of all the networks is in the
Appendix of the book [19].
The results obtained here for the 59 real-world networks are summarized in
Table 1, where we report the average values of the previous indices for
groups of networks in different functional classes, i.e., brain networks,
electronic circuits, social networks, etc.
type | number | $\mathscr{\bar{E}}_{\alpha=1}\left(G\right)$ | std | $\mathscr{\bar{E}}_{\alpha=-1}\left(G\right)$ | std
---|---|---|---|---|---
brain | 3 | 0.6365 | 0.2552 | 0.9938 | 0.0489
circuits | 3 | 0.5187 | 0.0277 | 1.0662 | 0.0095
foodweb | 14 | 0.4134 | 0.2715 | 1.1247 | 0.2545
social | 12 | 0.3536 | 0.1917 | 1.0103 | 0.1812
citations | 7 | 0.2032 | 0.1758 | 0.8199 | 0.2769
PIN | 8 | 0.1385 | 0.0896 | 0.7855 | 0.1864
infrastructure | 4 | 0.0869 | 0.1439 | 0.4846 | 0.4638
software | 5 | 0.0712 | 0.0296 | 0.6148 | 0.1515
transcription | 3 | 0.0711 | 0.0710 | 0.5806 | 0.3218
Table 1: Average values of the relative efficiency of using a hubs-attracting
random walk on the graph with respect to the use of the standard random walk,
$\mathscr{\bar{E}}_{\alpha=-1}\left(G\right)$, for networks grouped in
different classes; the number of networks in each class is given in the
column labeled “number”. The same for a hubs-repelling random walk,
$\mathscr{\bar{E}}_{\alpha=1}\left(G\right)$. In both cases the standard
deviations of the samples of networks in each class are also reported.
The main observations from the analysis of these real-world networks are:
* 1.
$\mathscr{E}_{\alpha=-1}\left(G\right)>1$ for 50.8% of the networks
considered, indicating that only in half of the networks a hubs-attracting
random walk can be more efficient than the standard random walk;
* 2.
$\mathscr{E}_{\alpha=1}\left(G\right)>1$ in none of the networks, which
indicates that standard random walks are always more efficient than hubs-
repelling random walks in all these networks.
The first result is explained by the large variability of the degree
heterogeneity of real-world networks. In this case, only the networks with
skewed degree distributions benefit from the use of hubs-attracting random
walks, while those with more regular structures do not.
The second result indicates that there are no networks with such
quasi-regular structures in which some vertices are relatively isolated, like
the graphs displayed in Fig. 5.3. However, as usual, the devil is in the
details. The analysis of the results in Table 1 indicates that brain
networks, followed closely by electronic flip-flop circuits, are the networks
in which the use of hubs-repelling navigation strategies produces the highest
efficiency relative to standard random walks. This can also be read as
meaning that these brain networks have evolved in a way in which their
topologies guarantee random walk processes as efficient as the
hubs-attracting ones, without the necessity of navigating the brain using
such specific mechanisms. In addition, the use of hubs-repelling processes
does not significantly affect the average efficiency of brain networks, as
indicated by the value of $\mathscr{\bar{E}}_{\alpha=1}\left(G\right)$, which
is very close to one. This result indicates that if these networks have to
use a hubs-repelling strategy of navigation due to certain biological
constraints, they have topologies which are minimally affected, in terms of
efficiency, when using such strategies.
Finally, another remarkable result is that the efficiency of navigational
processes in infrastructural and modular software systems is more than ten
times higher when using standard random-walk approaches than when using
hubs-repelling strategies. These infrastructural networks seem to be wired to
be navigated by using their hubs, and avoiding them costs a lot in terms of
efficiency. This is clearly observed in many transportation networks, such as
air transportation networks, where the connection between pairs of airports
is realized through an intermediate major airport, i.e., a hub of the
network.
## 6 Conclusions
We have introduced the concept of hubs-biased resistance distance. These
Euclidean distances are based on graph Laplacians which consider the edges
$e=\left(v,w\right)$ of a graph weighted by the degrees of the vertices $v$
and $w$ in a double orientation of that edge. Therefore, the hubs-biased
Laplacian matrices are non-symmetric and reflect the capacity of a
graph/network to diffuse particles using hubs-attractive or hubs-repulsive
strategies. The corresponding hubs-biased resistance distances and Kirchhoff
indices can be seen as measures of the efficiencies of these
hubs-attracting/repelling random walks on graphs/networks. We have proved
several mathematical results for both the hubs-biased Laplacian matrices and
the corresponding resistances and Kirchhoff indices. Finally, we studied a
large number of real-world networks representing a variety of complex systems
in nature and society. All in all, we have seen that there are networks which
have evolved, or have been designed, to operate efficiently under
hubs-attracting strategies. Other networks, like brain ones, are almost
immune to the change of strategies, because the use of hubs-attracting
strategies improves very little the efficiency of a standard random walk, and
the efficiency of hubs-repelling strategies is not significantly different
from that of the classical random walk. Therefore, in such networks the use
of the standard random walk approach is an efficient strategy of navigation,
while infrastructural and modular software networks seem to be designed to be
navigated by using their hubs.
## Acknowledgment
The work of D.M. was supported by the Deutsche Forschungsgemeinschaft (Grant
397230547). E.E. thanks financial support from Ministerio de Ciencia,
Innovacion y Universidades, Spain for the grant PID2019-107603GB-I00 “Hubs-
repelling/attracting Laplacian operators and related dynamics on
graphs/networks”. This article is based upon work from COST Action 18232 MAT-
DYN-NET, supported by COST (European Cooperation in Science and Technology),
www.cost.eu.
## References
* [1] D. Aldous, and J. Fill, Reversible Markov Chains And Random Walks On Graphs. Book in preparation. Available at: https://www.stat.berkeley.edu/~aldous/RWG/book.pdf.
* [2] A. Ali, I. Gutman, E. Milovanović, and I. Milovanović, Sum of Powers of the Degrees of Graphs: Extremal Results and Bounds, MATCH Commun. Math. Comput. Chem. 80 (2018), 5–84.
* [3] M. Bianchi, J.L. Palacios, A. Torriero, and A.L. Wirkierman, Kirchhoffian indices for weighted digraphs. Discrete Applied Mathematics 255 (2019) 142–154.
* [4] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang, Complex networks: Structure and dynamics, Phys. Rep. 424 (2006), 175–308.
* [5] D. Boley, G. Ranjan, and Z. L. Zhang, Commute times for a directed graph using an asymmetric Laplacian. Lin. Algebra Appl. 435 (2011), 224–242.
* [6] M. Bonaventura, V. Nicosia, and V. Latora, Characteristic times of biased random walks on complex networks. Phys. Rev. E. 89 (2014), 012803.
* [7] Z.M. Boyd, N. Fraiman, J. Marzuola, P.J. Mucha, B. Osting, and J. Weare, A metric on directed graphs and Markov chains based on hitting probabilities. SIAM J. Math. Data Science 3 (2021), 467–493.
* [8] A. Bunde, J. Caro, J. Kärger, and G. Vogl (eds.), Diffusive Spreading in Nature, Technology and Society, Springer-Verlag, Berlin, 2017.
* [9] Z. Burda, J. Duda, J. M. Luck, and B. Waclaw, Localization of the maximum entropy random walk. Phys. Rev. Lett. 102 (2009), 160602.
* [10] F. R. K. Chung, Spectral Graph Theory. Vol. 92 of Reg. Conf. Series Math., Amer. Math. Soc., Providence (RI), 1997.
* [11] S. M. Cioabă, Sums of powers of the degrees of a graph. Discrete Math. 306 (2006), 1959–1964.
* [12] D. Coppersmith, U. Feige, and J. Shearer, Random walks on regular and irregular graphs. SIAM J. Discr. Math. 9 (1996), 301–308.
* [13] K. C. Das, Maximizing the sum of the squares of the degrees of a graph. Discrete Math. 285 (2004), 57–66.
* [14] K. C. Das, On the Kirchhoff index of graphs. Zeitsch. Naturforsch. A. 68 (2013), 531–538.
* [15] K. C. Das, K. Xu, and J. Nam, Zagreb indices of graphs. Front. Math. China 10 (2015), 567–582.
* [16] N. M. De Abreu, Old and new results on algebraic connectivity of graphs. Lin. Algebra Appl. 423 (2007), 53–73.
* [17] P. G. Doyle and J. L. Snell, Random Walks And Electric Networks. vol. 22 of Carus Math. Monogr., Math. Assoc. Amer., Washington (DC), 1984.
* [18] W. Ellens, F. M. Spieksma, P. Van Mieghem, A. Jamakovic, and R. E. Kooij, Effective graph resistance. Lin. Algebra Appl. 435 (2011), 2491–2506.
* [19] E. Estrada, The Structure Of Complex Networks: Theory And Applications, Oxford University Press, Oxford, 2012.
* [20] E. Estrada, Degree heterogeneity of graphs and networks. I. Interpretation and the “heterogeneity paradox”. J. Interdisc. Math. 22 (2019), 503–529.
* [21] E. Estrada, Degree heterogeneity of graphs and networks. II. Comparison with other indices. J. Interdisc. Math. 22 (2019), 711–735.
* [22] E. Estrada, ‘Hubs-repelling’ Laplacian and related diffusion on graphs/networks. Lin. Algebra Appl. 596 (2020), 256–280.
* [23] M. Fiedler, Algebraic connectivity of graphs, Czechoslovak Math. J. 23 (1973), 298–305.
* [24] M. Fiedler, Laplacian of graphs and algebraic connectivity, Banach Center Publications 25 (1989), 57–70.
* [25] K. Fitch, Effective resistance preserving directed graph symmetrization. SIAM J. Matrix Analysis Appl. 40 (2019), 49–65.
* [26] L. V. Gambuzza, M. Frasca, and E. Estrada, Hubs-attracting Laplacian and Related Synchronization on Networks. SIAM J. Appl. Dyn. Syst. 19 (2020), 1057–1079.
* [27] A. Gerbaud, K. Altisen, S. Devismes, and P. Lafourcade, Comparison of mean hitting times for a degree-biased random walk. Discr. Appl. Math. 19 (2014), 104–109.
* [28] A. Ghosh, S. Boyd, and A. Saberi, Minimizing effective resistance of a graph. SIAM Rev. 50 (2008), 37–66.
* [29] R. Grone, R. Merris, and V. S. Sunder, The Laplacian spectrum of a graph, SIAM J. Matrix Anal. Appl. 11 (1990), 218–238.
* [30] R. Grone and R. Merris, The Laplacian spectrum of a graph II, SIAM J. Matrix Anal. Appl. 7 (1994), 221–229.
* [31] I. Gutman, L. Feng, and G. Yu, Degree resistance distance of unicyclic graphs. Trans. Combinat. 1 (2012), 27–40.
* [32] P.T. Harker, Alternative modes of questioning in the analytic hierarchy process. Mathematical Modelling 9 (1987), 353–360.
* [33] J. B. Kennedy, P. Kurasov, G. Malenová, and D. Mugnolo, On the spectral gap of a quantum graph. Ann. Henri Poincaré 17 (2016), 2439–2473.
* [34] D. J. Klein and M. Randić, Resistance distance. J. Math. Chem. 12 (1993), 81–95.
# Chiral effective Lagrangian for excited heavy-light mesons from QCD
Qing-Sen Chen Center for Theoretical Physics, College of Physics, Jilin
University, Changchun 130012, China Hui-Feng Fu<EMAIL_ADDRESS>Center
for Theoretical Physics, College of Physics, Jilin University, Changchun
130012, China Yong-Liang Ma<EMAIL_ADDRESS>School of Fundamental Physics
and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS,
Hangzhou, 310024, China International Center for Theoretical Physics Asia-
Pacific (ICTP-AP) (Beijing/Hangzhou), UCAS, Beijing 100190, China Qing Wang
<EMAIL_ADDRESS>Department of Physics, Tsinghua University, Beijing
100084, China Center for High Energy Physics, Tsinghua University, Beijing
100084, China
###### Abstract
We derive the chiral effective Lagrangian for excited heavy-light mesons from
QCD under proper approximations. We focus on the chiral partners with
$j_{l}^{P}=\frac{3}{2}^{+}$ and $j_{l}^{P}=\frac{3}{2}^{-}$, which amount to the
($1^{+},2^{+}$) and ($1^{-},2^{-}$) states, respectively. The low energy
constants, including the masses of the chiral partners, are calculated. The
calculated spectrum of the excited mesons is found to be roughly consistent with
experimental data. In addition, our results indicate that the quantum numbers of
$B_{J}(5970)$ can be identified with $1^{-}$ or $2^{-}$.
## I Introduction
The heavy quark spin-flavor symmetry, which is exact in the
$m_{Q}\rightarrow\infty$ limit with $m_{Q}$ being the heavy quark mass, plays
an important role in hadronic systems containing one heavy quark/antiquark
Isgur and Wise (1990, 1989); Eichten and Hill (1990). Due to this symmetry,
the total angular momentum of a heavy-light meson has eigenvalues
$j_{\pm}=j_{l}\pm 1/2$ with $\vec{j}_{l}$ being the angular momentum of the
light degrees of freedom, and the corresponding eigenstates form a degenerate
doublet $(j_{-},j_{+})$ for each $j_{l}$. The angular momentum $\vec{j}_{l}$
can be decomposed as $\vec{j}_{l}=\vec{s}+\vec{l}$, where $\vec{s}$ is the
spin of light quark/antiquark and $\vec{l}$ is the orbital angular momentum.
For S-wave heavy-light mesons, $l=0$ and $j_{l}^{P}=\frac{1}{2}^{-}$
represents the degenerate doublet $(j^{P}_{-},j^{P}_{+})=(0^{-},1^{-})$. For
P-wave, $l=1$ and $j_{l}^{P}=\frac{1}{2}^{+}$ or $j_{l}^{P}=\frac{3}{2}^{+}$
represent two distinct doublets $(j^{P}_{-},j^{P}_{+})=(0^{+},1^{+})$ or
$(j^{P}_{-},j^{P}_{+})=(1^{+},2^{+})$, respectively. For D-wave, $l=2$ and
$j_{l}^{P}=\frac{3}{2}^{-}$ or $j_{l}^{P}=\frac{5}{2}^{-}$, and we have the two
distinct doublets $(j^{P}_{-},j^{P}_{+})=(1^{-},2^{-})$ and
$(j^{P}_{-},j^{P}_{+})=(2^{-},3^{-})$, respectively. A similar argument applies
to other excited states Yuan (1995).
For the light quarks, the chiral symmetry, which is dynamically (and
explicitly) broken, is essential. If the chiral symmetry were preserved, the
heavy-light mesons with the same $j_{l}$ but opposite parities would be
degenerate. We call them chiral partners. The mass splitting between the
chiral partners reflects the chiral symmetry breaking Nowak _et al._ (1993);
Bardeen and Hill (1994).
Since heavy-light mesons capture the features of both heavy quark symmetry and
chiral symmetry, it is ideal to study heavy-light meson phenomena using the
chiral Lagrangian incorporating heavy quark symmetry. Since 2000, a series of
studies on deriving the chiral effective Lagrangian from QCD has been
carried out in Refs. Wang _et al._ (2000a); Yang _et al._ (2002); Jiang _et
al._ (2010); Jiang and Wang (2010); Jiang _et al._ (2013, 2015); Wang and
Wang (2000); Wang _et al._ (2000b); Ren _et al._ (2017). The advantage of
this method is that it can establish the analytic relationships between the
low energy constants (LECs) of the chiral effective theory and QCD. Recently,
we used the same methodology to derive the chiral Lagrangian incorporating
heavy quark symmetry from QCD to study heavy-light mesons Chen _et al._
(2020a, b). In these works, we focused on the chiral partners
$j_{l}^{P}=\frac{1}{2}^{-}$ and $j_{l}^{P}=\frac{1}{2}^{+}$. The low energy
constants of the effective Lagrangian are expressed in terms of the light
quark self-energy which can be calculated by using Dyson-Schwinger equations
or lattice QCD. Numerical results of the low energy constants are globally in
agreement with the experimental data.
In recent years, more and more excited heavy-light mesons have been observed.
In the charm sector, many new excited states such as $D_{0}(2550)$,
$D^{\ast}_{J}(2680)$, $D(2740)$, $D^{*}_{2}(3000)$, $D_{J}(3000)$,
$D_{J}^{*}(3000)$, etc., have been announced by LHCb Aaij _et al._ (2013,
2016, 2020) and BABAR del Amo Sanchez _et al._ (2010). In the bottom sector,
new bottom states $B_{J}(5721)$, $B_{2}^{*}(5747)$, $B_{J}(5840)$ and
$B_{J}(5970)$ were observed by CDF Aaltonen _et al._ (2014) and LHCb Aaij
_et al._ (2015). The properties of these hadrons have attracted extensive
attention in recent years Godfrey and Moats (2016); Song _et al._ (2015);
Chen _et al._ (2017); Gupta and Upadhyay (2018); Kher _et al._ (2017);
Gandhi and Rai (2019); Gupta and Upadhyay (2019); Godfrey and Moats (2019).
Here, we extend our approach to the effective field theory of excited heavy-
light mesons with chiral partner structure Nowak and Zahed (1993). Special
interest is given to the states with quantum numbers
$j_{l}^{P}=\frac{3}{2}^{+}$ and $j_{l}^{P}=\frac{3}{2}^{-}$ to lay the
foundation for studies of heavy-light mesons with arbitrary spin.
The remaining part of this paper is organized as follows. In Sec. II, for
convenience, we give the general form of the chiral Lagrangian of the excited
heavy-light mesons. In Sec. III we derive the excited heavy-light meson
Lagrangian from QCD and determine the expression of low energy constants. The
numerical results calculated by using the quark self-energy obtained from a
typical Dyson-Schwinger equation and from lattice QCD are given in Sec. IV.
Section V is devoted to discussions. The expressions of the LECs with the
contribution from the renormalization factor of the quark wave function are given
in Appendix A.
## II chiral effective Lagrangian for excited heavy-light mesons
For the convenience of the following description, we present the chiral
effective Lagrangian for excited heavy-light meson doublets
$T^{\mu}=(1^{+},2^{+})$ and $R^{\mu}=(1^{-},2^{-})$ here. They correspond to
the $j_{l}^{P}=\frac{3}{2}^{+}$ in P-wave and $j_{l}^{P}=\frac{3}{2}^{-}$ in
D-wave, respectively. The excited heavy-light meson doublets $T^{\mu}$ and
$R^{\mu}$ can be expressed as Nowak and Zahed (1993)
$\displaystyle T^{\mu}(x)$ $\displaystyle=$
$\displaystyle\frac{1+\not{v}}{2}\left\\{P^{\ast\mu\nu}_{2^{+}}\gamma_{\nu}-\sqrt{\frac{3}{2}}P^{\nu}_{1^{+}}\gamma_{5}[g^{\mu}_{\nu}-\frac{1}{3}\gamma_{\nu}(\gamma^{\mu}-v^{\mu})]\right\\},$
$\displaystyle R^{\mu}(x)$ $\displaystyle=$
$\displaystyle\frac{1+\not{v}}{2}\left\\{P^{\ast\mu\nu}_{2^{-}}\gamma_{\nu}\gamma_{5}-\sqrt{\frac{3}{2}}P_{1^{-}}^{\nu}[g^{\mu}_{\nu}-\frac{1}{3}\gamma_{\nu}(\gamma^{\mu}+v^{\mu})]\right\\},$
where $(P^{\nu}_{1^{+}},P_{2^{+}}^{\ast\mu\nu})$ refer to
$J^{P}=(1^{+},2^{+})$ states, and $(P^{\nu}_{1^{-}},P_{2^{-}}^{\ast\mu\nu})$
refer to $J^{P}=(1^{-},2^{-})$ states, respectively. $v^{\mu}$ is the velocity
of an on-shell heavy quark, i.e. $p^{\mu}=m_{Q}v^{\mu}$ with $v^{2}=1$. Then
the chiral effective Lagrangian for excited heavy-light mesons can be written
as Kilian _et al._ (1992); Nowak and Zahed (1993)
$\displaystyle{\cal L}$ $\displaystyle=$ $\displaystyle{\cal L}_{T}+{\cal
L}_{R}+{\cal L}_{TR},$ (2)
where
$\displaystyle{\cal L}_{T}$ $\displaystyle=$ $\displaystyle{}-i{\rm
Tr}\left(\bar{T}^{\mu}v\cdot\nabla T_{\mu}\right)-g_{T}{\rm
Tr}\left(\bar{T}^{\mu}T_{\mu}\gamma^{\nu}\gamma_{5}A_{\nu}\right)$
$\displaystyle{}+m_{T}{\rm Tr}\left(\bar{T}^{\mu}T_{\mu}\right),$
$\displaystyle{\cal L}_{R}$ $\displaystyle=$ $\displaystyle{}-i{\rm
Tr}\left(\bar{R}^{\mu}v\cdot\nabla R_{\mu}\right)+g_{R}{\rm
Tr}\left(\bar{R}^{\mu}R_{\mu}\gamma^{\nu}\gamma_{5}A_{\nu}\right)$
$\displaystyle{}+m_{R}{\rm Tr}\left(\bar{R}^{\mu}R_{\mu}\right),$
$\displaystyle{\cal L}_{TR}$ $\displaystyle=$ $\displaystyle g_{TR}{\rm
Tr}\left(\bar{R}^{\mu}T_{\mu}\gamma^{\nu}\gamma_{5}A_{\nu}\right)+h.c..$ (3)
The covariant derivative $\nabla_{\mu}=\partial_{\mu}-iV_{\mu}$ with
$V_{\mu}=\frac{i}{2}\left(\Omega^{\dagger}\partial_{\mu}\Omega+\Omega\partial_{\mu}\Omega^{\dagger}\right)$,
and
$A_{\mu}=\frac{i}{2}\left(\Omega^{\dagger}\partial_{\mu}\Omega-\Omega\partial_{\mu}\Omega^{\dagger}\right)$.
The $\Omega$ field is related to the chiral field $U(x)=\exp(i\pi(x)/f_{\pi})$
through $U=\Omega^{2}$. The parameters $g_{T},g_{R},g_{TR},m_{T}$, and $m_{R}$
are the LECs of the Lagrangian. They are free parameters at the level of the
effective theory.
As mentioned previously, the states associated with $T^{\mu}$ and $R^{\mu}$
are called chiral partners, and their mass splitting arises from the chiral
symmetry breaking. In the $D$ meson family, $T^{\mu}$ may be associated with
$T^{\mu}=(1^{+},2^{+})=(D_{1}(2430),D_{2}^{*}(2460))$, and a possible
identification of the $R^{\mu}$ doublet may be
$R^{\mu}=(1^{-},2^{-})=(D_{J}^{\ast}(2600),D(2740))$, given that the states in
the $R^{\mu}$ doublet can decay to their chiral partners in the $T^{\mu}$
doublet and should therefore have broad widths. With this identification, the
spin-averaged masses of the chiral partners are Zyla _et al._ (2020); Aaij
_et al._ (2020)
$\displaystyle m_{T}\simeq~{}2448~{}{\rm MeV},\;\;\;m_{R}\simeq 2703~{}{\rm
MeV},$ (4)
which yields the mass splitting
$\displaystyle\Delta m$ $\displaystyle=$ $\displaystyle m_{R}-m_{T}\simeq
255~{}{\rm MeV}.$ (5)
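As a quick arithmetic check of the spin averages in Eqs. (4) and (5), one can apply the $(2J+1)$ weighting to approximate measured masses; the input values below are illustrative round numbers, not authoritative PDG averages:

```python
def spin_average(m_minus, j_minus, m_plus, j_plus):
    """(2J+1)-weighted spin average of a heavy-quark-symmetry doublet (MeV)."""
    w_m, w_p = 2 * j_minus + 1, 2 * j_plus + 1
    return (w_m * m_minus + w_p * m_plus) / (w_m + w_p)

# Approximate input masses in MeV (assumed, PDG-like, for illustration only).
m_T = spin_average(2427.0, 1, 2461.0, 2)   # (D_1(2430), D_2*(2460))
m_R = spin_average(2627.0, 1, 2747.0, 2)   # (D_J*(2600), D(2740))
print(round(m_T), round(m_R), round(m_R - m_T))  # 2448 2702 254
```

The results agree with Eqs. (4) and (5) to within a few MeV.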
In the $B$ meson family, $T^{\mu}$ may be associated with
$(B_{1}(5721),B_{2}^{*}(5747))$, so the spin-averaged mass is Zyla _et al._
(2020)
$\displaystyle m_{T}$ $\displaystyle\simeq$ $\displaystyle~{}5735~{}{\rm MeV}$
(6)
For the $(1^{-},2^{-})$ doublet, the situation is subtle since the quantum
numbers of the possible candidates in the PDG are not well determined. Based
on the mass splitting between the chiral partners in the charmed meson
sector, the states in the $R^{\mu}$ doublet should have masses
$\sim 6000~{}$MeV. This means that it is reasonable to identify the quantum
numbers of the state $B_{J}(5970)$ as $1^{-}$ or $2^{-}$ or maybe the $1^{-}$
and $2^{-}$ states have nearly degenerate masses.
## III chiral effective Lagrangian for excited heavy-light mesons from QCD
In this section, we follow our previous work Chen _et al._ (2020a) to derive
the chiral effective Lagrangian for excited heavy-light mesons with
$j_{l}^{P}=\frac{3}{2}^{\pm}$ and its low energy constants.
The generating functional of QCD with an external source $J(x)$ is
$\displaystyle Z[J]$ $\displaystyle=$
$\displaystyle\int\mathcal{D}q\mathcal{D}\bar{q}\mathcal{D}Q\mathcal{D}\bar{Q}\mathcal{D}G_{\mu}\varDelta_{F}(G_{\mu})$
$\displaystyle\times\exp\left\\{i\int
d^{4}x\left[{\mathcal{L}_{\mathrm{QCD}}(q,\bar{q},Q,\bar{Q},G_{\mu})+\bar{q}Jq}\right]\right\\},$
where $q(x),Q(x)$ and $G_{\mu}(x)$ are the light-quark, heavy-quark and gluon
fields, respectively.
By integrating in the chiral field $U(x)$ and the heavy-light meson fields,
and integrating out gluon fields and quark fields, we obtain the effective
action for the chiral effective theory as Chen _et al._ (2020a)
$\displaystyle S[U,\Pi_{2},\bar{\Pi}_{2}]$ $\displaystyle=$
$\displaystyle{}-iN_{c}\mathrm{Tr}\ln\left\\{\left(i\not{\partial}-\bar{\Sigma}\right)I_{1}+J_{\Omega}\right.$
(8)
$\displaystyle\left.{}\qquad\qquad\qquad+\left(i\not{\partial}+M\not{v}-M\right)I_{4}\right.$
$\displaystyle\left.{}\qquad\qquad\qquad-\Pi_{2}-\bar{\Pi}_{2}\right\\},$
where $\Pi_{2}$ and $\bar{\Pi}_{2}$ are the heavy-light meson field and its
conjugate, respectively. $M$ is the mass matrix of the heavy quark $Q$.
$\bar{\Sigma}$ is the self-energy of the light quark propagator.
$I_{1}=\mathrm{diag}(1,1,0)$ and $I_{4}=\mathrm{diag}(0,0,1)$ are matrices in
the flavor space. $J_{\Omega}$ is the chiral rotated external source. To
obtain this effective action from QCD, we have taken the chiral limit, the
heavy quark limit and the large $N_{c}$ limit. More details can be found in
the Appendix A of Ref. Chen _et al._ (2020a).
Since the introduced heavy-light meson field $\Pi_{2}$ is a bilocal field, we
need a suitable localization of the $\Pi_{2}$ field to get a local effective
Lagrangian. Here we follow the approach in Ref. Nowak and Zahed (1993), and
take the following localization conditions
$\displaystyle\Pi_{2}(x_{1},x_{2})$ $\displaystyle=$
$\displaystyle\Pi_{2}(x_{1})\delta(x_{1}-x_{2})+\Pi^{\mu}_{2}(x_{1})\overrightarrow{\partial}_{\mu}\delta(x_{1}-x_{2}),$
$\displaystyle\bar{\Pi}_{2}(x_{1},x_{2})$ $\displaystyle=$
$\displaystyle\bar{\Pi}_{2}(x_{1})\delta(x_{1}-x_{2})+\delta(x_{1}-x_{2})\overleftarrow{\partial}_{\mu}\bar{\Pi}^{\mu}_{2}(x_{1}).$
where
$\displaystyle\Pi_{2}(x)$ $\displaystyle=$ $\displaystyle H(x)+G(x),$
$\displaystyle\Pi^{\mu}_{2}(x)$ $\displaystyle=$ $\displaystyle
T^{\mu}(x)+R^{\mu}(x),$ (10)
Here, the fields $H$ and $G$ refer to $J^{P}=(0^{-},1^{-})$ states and
$J^{P}=(0^{+},1^{+})$ states, respectively, while the fields $T^{\mu}$ and
$R^{\mu}$ refer to $J^{P}=(1^{+},2^{+})$ states and $J^{P}=(1^{-},2^{-})$
states, respectively.
Since we are interested in the heavy-light meson doublets with
$j_{l}^{P}=\frac{3}{2}^{\pm}$ in this work, we focus on the chiral effective
Lagrangian for $\Pi_{2}^{\mu}$ alone. So the chiral effective action is
reduced to
$\displaystyle S[U,\Pi^{\mu}_{2},\bar{\Pi}^{\nu}_{2}]$ $\displaystyle=$
$\displaystyle{}-iN_{c}\mathrm{Tr}\ln\left\\{\left(i\not{\partial}-\bar{\Sigma}\right)I_{1}+J_{\Omega}\right.$
(11)
$\displaystyle\left.{}\qquad\qquad\qquad+\left(i\not{\partial}+M\not{v}-M\right)I_{4}\right.$
$\displaystyle\left.{}\qquad\qquad\qquad-\Pi^{\nu}_{2}\overrightarrow{\partial}_{\nu}-\overleftarrow{\partial}_{\mu}\bar{\Pi}^{\mu}_{2}\right\\}$
In order to obtain the chiral effective Lagrangian, we need to expand the action
$S[U,\Pi^{\mu}_{2},\bar{\Pi}^{\nu}_{2}]$ with respect to the fields $U$,
$\Pi^{\mu}_{2}$ and $\bar{\Pi}^{\nu}_{2}$. The kinetic terms of the excited
heavy-light meson fields $T^{\mu}$ (a similar equation holds for $R^{\mu}$)
arise from
$\displaystyle S_{2}$ $\displaystyle\equiv$
$\displaystyle\frac{\delta^{2}S}{\delta\bar{T}^{\mu}\delta
T^{\nu}}T^{\nu}\bar{T}^{\mu}$ $\displaystyle=$ $\displaystyle iN_{c}\int
d^{4}x_{1}d^{4}x_{2}$
$\displaystyle{}\times\mathrm{Tr}\Bigg{[}\left(i\not{\partial}-\bar{\Sigma}\right)^{-1}\delta(x_{2}-x_{1})\overleftarrow{\partial}_{\mu}\bar{T}^{\mu}(x_{1})$
$\displaystyle\qquad\qquad\times\left(iv\cdot\partial\right)^{-1}\overrightarrow{\partial}_{\nu}T^{\nu}(x_{2})\delta(x_{1}-x_{2})\Bigg{]},$
Taking the derivative expansion up to the first order, we obtain
$\displaystyle S_{2}$ $\displaystyle=$ $\displaystyle iN_{c}\int
d^{4}x\int\frac{d^{4}p}{(2\pi)^{4}}\left[-\frac{p^{2}-(v\cdot
p)^{2}}{p^{2}-\Sigma^{2}}+\frac{p^{2}\Sigma}{(p^{2}-\Sigma^{2})v\cdot
p}\right]\mathrm{Tr}\left[\bar{T}^{\mu}(x)T_{\mu}(x)\right]$ (13)
$\displaystyle{}~{}~{}~{}~{}+iN_{c}\int
d^{4}x\int\frac{d^{4}p}{(2\pi)^{4}}\left[-\frac{p^{2}}{(p^{2}-\Sigma^{2})v\cdot
p}-\frac{[2\Sigma+2\Sigma^{\prime}(p^{2}+\Sigma^{2})][p^{2}-(v\cdot
p)^{2}]}{(p^{2}-\Sigma^{2})^{2}}\right]\mathrm{Tr}\left[T_{\mu}(x)v\cdot\partial\bar{T}^{\mu}(x)\right]$
$\displaystyle{}~{}~{}~{}~{}+iN_{c}\int
d^{4}x\int\frac{d^{4}p}{(2\pi)^{4}}\left[-\frac{2\Sigma^{\prime}(p^{2}+\Sigma^{2})[p^{2}-(v\cdot
p)^{2}]}{(p^{2}-\Sigma^{2})^{2}}\right]\mathrm{Tr}\left[T_{\mu}(x)v\cdot
V_{\Omega}\bar{T}^{\mu}(x)\right].$
In the calculation, we have used the relation $T^{\mu}\not{v}=-T^{\mu}$. It is
easy to identify the mass term and the kinetic term in Eq. (13). In addition,
an interaction term between the $T^{\mu}$ fields and $V_{\Omega}$ fields also
appears in Eq. (13) because we have taken
$\bar{\Sigma}(x-y)=\Sigma(\nabla^{2})\delta(x-y)$ in the calculation to retain
the correct chiral transformation properties in the theory Yang _et al._
(2002).
The interaction term $S_{3}$ between the heavy-light meson field $T^{\mu}$ and
the light Goldstone boson fields can also be obtained by expanding the action
$S[U,\Pi^{\mu}_{2},\bar{\Pi}^{\nu}_{2}]$ with respect to the external source
$J_{\Omega}$, $\Pi^{\mu}_{2}$ and $\bar{\Pi}^{\nu}_{2}$ as
$\displaystyle S_{3}$ $\displaystyle\equiv$
$\displaystyle\frac{\delta^{3}S}{\delta J_{\Omega}\delta\bar{T}^{\mu}\delta
T^{\nu}}J_{\Omega}T^{\nu}\bar{T}^{\mu}$ (14) $\displaystyle=$
$\displaystyle{}-iN_{c}\int d^{4}x_{1}d^{4}x_{2}d^{4}x_{3}$
$\displaystyle\qquad{}\times\mathrm{Tr}\left[(i\not{\partial}-\Sigma)^{-1}\delta(x_{2}-x_{3})J_{\Omega}(x_{3})(i\not{\partial}-\Sigma)^{-1}\delta(x_{3}-x_{1})\overleftarrow{\partial}_{\mu}\bar{T}^{\mu}(x_{1})\left(iv\cdot\partial\right)^{-1}\delta(x_{1}-x_{2})\overrightarrow{\partial}_{\nu}T^{\nu}(x_{2})\right].$
Up to the first order of the derivative expansion, one obtains
$\displaystyle S_{3}$ $\displaystyle=$ $\displaystyle{}-iN_{c}\int
d^{4}x\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}(\Sigma^{2}+\frac{2}{3}p^{2})}{(p^{2}-\Sigma^{2})v\cdot
p}-\frac{2\Sigma[p^{2}-(v\cdot
p)^{2}]}{(p^{2}-\Sigma^{2})^{2}}\right]\mathrm{Tr}\left[\bar{T}^{\mu}(x)T_{\mu}(x)\gamma^{\nu}\gamma_{5}A_{\nu}\right]$
(15) $\displaystyle{}-iN_{c}\int
d^{4}x\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}}{(p^{2}-\Sigma^{2})v\cdot
p}+\frac{2\Sigma[p^{2}-(v\cdot
p)^{2}]}{(p^{2}-\Sigma^{2})^{2}}\right]\mathrm{Tr}\left[T_{\mu}(x)v\cdot
V_{\Omega}\bar{T}^{\mu}(x)\right].$
Summing up Eqs. (13) and (15), we obtain the expressions of the constants
$m_{T}$ and $g_{T}$ as
$\displaystyle m_{T}$ $\displaystyle=$
$\displaystyle\frac{iN_{c}}{Z_{T}}\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}-(v\cdot
p)^{2}}{p^{2}-\Sigma^{2}}-\frac{p^{2}\Sigma}{(p^{2}-\Sigma^{2})v\cdot
p}\right],$ $\displaystyle g_{T}$ $\displaystyle=$
$\displaystyle{}-\frac{iN_{c}}{Z_{T}}\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}(\Sigma^{2}+\frac{2}{3}p^{2})}{(p^{2}-\Sigma^{2})v\cdot
p}-\frac{2\Sigma[p^{2}-(v\cdot p)^{2}]}{(p^{2}-\Sigma^{2})^{2}}\right],$
with $Z_{T}$ being the wave function renormalization factor
$\displaystyle Z_{T}$ $\displaystyle=$ $\displaystyle
iN_{c}\int\frac{d^{4}p}{(2\pi)^{4}}\biggr{[}{}-\frac{p^{2}}{(p^{2}-\Sigma^{2})v\cdot
p}$
$\displaystyle{}\qquad\qquad\qquad-\frac{[2\Sigma+2\Sigma^{\prime}(p^{2}+\Sigma^{2})][p^{2}-(v\cdot
p)^{2}]}{(p^{2}-\Sigma^{2})^{2}}\biggr{]}.$
Following the same procedure, one can get the LECs associated with $R^{\mu}$.
The only difference in the calculation is the relation
$R^{\mu}\not{v}=R^{\mu}$, which induces the differences between the LECs of
$R^{\mu}$ and those of $T^{\mu}$. The LECs associated with $R^{\mu}$ read
$\displaystyle m_{R}$ $\displaystyle=$
$\displaystyle\frac{iN_{c}}{Z_{R}}\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}-(v\cdot
p)^{2}}{p^{2}-\Sigma^{2}}+\frac{p^{2}\Sigma}{(p^{2}-\Sigma^{2})v\cdot
p}\right],$ $\displaystyle g_{R}$ $\displaystyle=$
$\displaystyle{}-\frac{iN_{c}}{Z_{R}}\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}(\Sigma^{2}+\frac{2}{3}p^{2})}{(p^{2}-\Sigma^{2})v\cdot
p}-\frac{2\Sigma[p^{2}-(v\cdot p)^{2}]}{(p^{2}-\Sigma^{2})^{2}}\right],$
$\displaystyle Z_{R}$ $\displaystyle=$ $\displaystyle
iN_{c}\int\frac{d^{4}p}{(2\pi)^{4}}\biggr{[}{}-\frac{p^{2}}{(p^{2}-\Sigma^{2})v\cdot
p}$
$\displaystyle{}\qquad\qquad\qquad+\frac{[2\Sigma+2\Sigma^{\prime}(p^{2}+\Sigma^{2})][p^{2}-(v\cdot
p)^{2}]}{(p^{2}-\Sigma^{2})^{2}}\biggr{]}.$
We also obtain the coupling constant between the chiral partner fields
$T^{\mu}$ and $R^{\mu}$ as
$\displaystyle g_{TR}$ $\displaystyle=$
$\displaystyle{}-i\frac{N_{c}}{\sqrt{Z_{T}Z_{R}}}\int\frac{d^{4}p}{(2\pi)^{4}}\biggl{[}\frac{p^{2}(\Sigma^{2}+p^{2})}{(p^{2}-\Sigma^{2})^{2}v\cdot
p}\biggr{]}.$ (18)
In the above expressions, $\Sigma(-p^{2})$ stands for the self-energy of the
light quark, which is to be calculated from QCD. This makes explicit that we
have established a connection between the low energy constants of the chiral
effective theory and the Green functions of QCD.
## IV Numerical results and discussions
In order to obtain the explicit numerical values of the LECs, we first need
to obtain the light quark self-energy $\Sigma(-p^{2})$. As in our previous
works Chen _et al._ (2020a, b), we use two methods to determine
$\Sigma(-p^{2})$ for comparison, namely, the gap equation, i.e., the Dyson-
Schwinger equation for quarks, and the fittings from lattice QCD.
For the Dyson-Schwinger equation method, we use the differential form of the
gap equation Yang _et al._ (2002):
$\displaystyle\left(\frac{\alpha_{s}(x)}{x}\right)^{\prime}\Sigma(x)^{\prime\prime}-\left(\frac{\alpha_{s}(x)}{x}\right)^{\prime\prime}\Sigma(x)^{\prime}$
$\displaystyle\qquad\qquad\qquad\quad{}-\frac{3C_{2}(R)}{4\pi}\frac{x\Sigma(x)}{x+\Sigma^{2}(x)}\left(\frac{\alpha_{s}(x)}{x}\right)^{\prime
2}=0,$
where the boundary conditions are
$\displaystyle\Sigma^{\prime}(0)$ $\displaystyle=$
$\displaystyle{}-\frac{3C_{2}(R)\alpha_{s}(0)}{8\pi\Sigma(0)},$
$\displaystyle\Sigma(\Lambda^{\prime})$ $\displaystyle=$
$\displaystyle\frac{3C_{2}(R)\alpha_{s}(\Lambda^{\prime})}{4\pi\Lambda^{\prime}}\int^{\Lambda^{\prime}}_{0}dx\frac{x\Sigma(x)}{x+\Sigma^{2}(x)},$
(19)
with $\alpha_{s}$ being the running coupling constant of QCD.
$\Lambda^{\prime}$ is an ultraviolet cutoff regularizing the integral, which
is eventually taken to $\Lambda^{\prime}\rightarrow\infty$. To solve Eq.
(IV), we take a model description for $\alpha_{s}$ given in Ref. Dudal _et
al._ (2012)
$\alpha_{s}(p^{2})=\alpha_{0}p^{2}\frac{p^{2}+M^{2}}{p^{4}+(M^{2}+m^{2})p^{2}+\lambda^{4}}.$
(20)
For convenience, we call it Gribov-Zwanziger (G-Z) Model. The parameters are
$M^{2}=4.303~{}{\rm GeV}^{2},(M^{2}+m^{2})=0.526~{}{\rm GeV}^{2}$ and
$\lambda^{4}=0.4929~{}{\rm GeV}^{4}$ Dudal _et al._ (2012). $\alpha_{0}$
(denoted $a_{0}$ below) is a model parameter to be determined. Although the UV
behavior of the G-Z formalism is inconsistent with that of QCD, this should not
have a sizable impact on our results because the LECs are mostly controlled by
the low energy behavior.
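For a numerical sanity check of Eq. (20), the model coupling can be evaluated directly; the parameter values are those quoted above, and $\alpha_{0}=0.52$ anticipates the fit described below:

```python
def alpha_s(p2, a0=0.52, M2=4.303, M2_plus_m2=0.526, lam4=0.4929):
    """G-Z model running coupling of Eq. (20); p2 in GeV^2, parameters from
    Dudal et al. (2012) as quoted in the text."""
    return a0 * p2 * (p2 + M2) / (p2**2 + M2_plus_m2 * p2 + lam4)

print(alpha_s(0.0))            # vanishes at p^2 = 0
print(round(alpha_s(1.0), 3))  # finite in the infrared region
print(alpha_s(1e4))            # -> approaches a0 at large p^2 (no QCD running)
```

The last line illustrates the UV inconsistency mentioned above: the coupling tends to the constant $\alpha_{0}$ instead of running logarithmically to zero.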
Solving the gap equation, we obtain $\Sigma(-p^{2})$. Then the LECs are
calculated according to the expressions derived in Sec. III. It is clear that the
integrals of the LECs have a physical ultraviolet cutoff $\Lambda_{c}$ which
should be of the order of the chiral symmetry breaking scale and serves as
another parameter in our calculations. Since we are studying excited states,
the cutoff $\Lambda_{c}$ should take a value somewhat larger than 1 GeV. For the
excited states we are considering now, the energy of the light quark in the
chromoelectric field generated by the heavy quark could be up to $\sim
1.45$ GeV, so $\Lambda_{c}$ taking a value $\gtrsim 1.5$ GeV would be more
appropriate for the present situation. For the G-Z model, we take
$\Lambda_{c}=1.6$ GeV and determine the model parameter $a_{0}$ by requiring
that the calculated $f_{\pi}$ be consistent with the corresponding experimental
value; we find that $a_{0}=0.52$ reproduces the well-established value of
$f_{\pi}$. The results are listed in Table 1. For comparison, we also
list the numerical results obtained with different $a_{0}$’s in the table. To
give an intuitive impression, we draw the running coupling constant
$\alpha_{s}$ calculated with the G-Z formalism at $a_{0}=0.52$ in Fig. 1. The
light quark self-energy $\Sigma(-p^{2})$ solved by the gap equation (IV) is
shown in Fig. 2.
Table 1: The heavy-light meson masses and the coupling constants calculated from the G-Z Model with $\Lambda_{c}=1.6$ GeV.
$a_{0}$ | $\Sigma(0)$ (GeV) | $f_{\pi}$(GeV) | ${}-\langle\bar{\psi}\psi\rangle$ (GeV3) | $g_{R}$ | $g_{T}$ | $g_{TR}$ | $m_{R}$ (GeV) | $m_{T}$ (GeV) | $\Delta m$ (GeV)
---|---|---|---|---|---|---|---|---|---
0.50 | 0.158 | 0.073 | $(0.328)^{3}$ | 0.882 | -0.479 | -0.990 | 1.365 | 1.104 | 0.261
0.51 | 0.193 | 0.084 | $(0.350)^{3}$ | 0.936 | -0.442 | -0.987 | 1.411 | 1.091 | 0.321
0.52 | 0.226 | 0.093 | $(0.368)^{3}$ | 0.991 | -0.409 | -0.984 | 1.459 | 1.080 | 0.379
0.53 | 0.259 | 0.102 | $(0.385)^{3}$ | 1.047 | -0.377 | -0.981 | 1.510 | 1.071 | 0.439
0.54 | 0.291 | 0.109 | $(0.399)^{3}$ | 1.107 | -0.347 | -0.979 | 1.564 | 1.064 | 0.500
Since more and more excited states whose masses and quantum numbers have not
yet been confirmed are being observed experimentally, in this work we focus on
the masses of the chiral partners obtained in our method, which could
hopefully help identify some excited mesons. The mass splitting $\Delta m$ is a direct
indicator of the chiral symmetry breaking. In our calculations, it originates
entirely from the non-vanishing quark running mass, which
can be reflected by the quark condensate or the value of $\Sigma(0)$. From
Table 1, one can see that $\Delta m$ increases as the quark condensate or
$\Sigma(0)$ increases. At $a_{0}=0.52$, we find $\Delta m=379$ MeV which is a
bit larger than the expected value $\sim 255$ MeV as given in (4). This is not
surprising because on one hand the measured masses of the relevant excited
mesons are still not accurate, and on the other hand our results suffer
from uncertainties due to the neglect of the $1/m_{Q}$ contributions, etc.
The masses $m_{T}$ and $m_{R}$ listed in Table 1 are the “residue masses” with
the heavy quark mass rotated away. The physical masses (denoted as
$\tilde{m}_{T}$ and $\tilde{m}_{R}$) can be obtained by simply adding back the
heavy quark mass to $m_{T}$ and $m_{R}$. For the $D$ meson sector, using
$m_{c}\approx 1.27$ GeV Zyla _et al._ (2020), we obtain the physical masses
to be $\tilde{m}_{T}\approx 2.35$ GeV and $\tilde{m}_{R}\approx 2.73$ GeV with
$a_{0}=0.52$. Keeping in mind the large uncertainties from both the
theoretical side and the experimental side, these values are roughly
comparable to the corresponding experimental data (4). For the $B$ meson
family, using $m_{b}\approx 4.66$ GeV Zyla _et al._ (2020), we obtain the
physical masses to be $\tilde{m}_{T}\approx 5.74$ GeV and
$\tilde{m}_{R}\approx 6.12$ GeV. Comparing with the experimental data for the
bottom mesons in (6), the numerical result of $\tilde{m}_{T}$ is close to the
corresponding experimental value, and we expect the $(1^{-},2^{-})$ $B$ meson
states to appear at around 6.12 GeV. Indeed, the excited state $B_{J}(5970)$
reported in Refs. Aaltonen _et al._ (2014); Aaij _et al._ (2015) with the
mass $\sim 5970$ MeV could be a candidate for the $1^{-}$ or $2^{-}$ states.
Up to now, experiments have not yet determined the $J^{P}$ quantum numbers for
$B_{J}(5970)$. We expect this could be done in the near future.
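The physical-mass estimates above are simple sums of the residue masses (Table 1, $a_{0}=0.52$ row) and the heavy quark masses quoted in the text; a minimal check:

```python
# Residue masses from Table 1 at a0 = 0.52 (GeV), quark masses from the text.
m_T_res, m_R_res = 1.080, 1.459
m_c, m_b = 1.27, 4.66

for name, m_Q in (("D", m_c), ("B", m_b)):
    mT, mR = m_T_res + m_Q, m_R_res + m_Q   # physical mass = residue + m_Q
    print(f"{name}: m_T ~ {mT:.2f} GeV, m_R ~ {mR:.2f} GeV")
# D: m_T ~ 2.35 GeV, m_R ~ 2.73 GeV
# B: m_T ~ 5.74 GeV, m_R ~ 6.12 GeV
```

The $B$-sector value $\tilde{m}_{R}\approx 6.12$ GeV is the basis for the $B_{J}(5970)$ discussion above.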
Now, let us turn to the results using the quark self-energy fitted from
lattice QCD. We use the lattice fittings for the quark wave function
renormalization $Z(-p^{2})$ and the running quark mass $M(-p^{2})$ given in
Ref. Oliveira _et al._ (2019). The formulas for the LECs derived in Sec. III
can be directly extended to incorporate the contributions
from $Z(-p^{2})$. The relevant expressions are given in Appendix A. The
functions $Z(-p^{2})$ and $M(-p^{2})$ are also plotted in Fig. 2. The
numerical results of LECs calculated by using these lattice fittings are given
in Table 2. There are no free parameters in the functions $M(-p^{2})$ and
$Z(-p^{2})$. We display results for various values of the cutoff
$\Lambda_{c}$ for comparison.
Figure 1: Running coupling constant of the G-Z model with $a_{0}=0.52$.
Figure 2: The lattice fittings of $M(-p^{2})$ and $Z(-p^{2})$ given in Ref. Oliveira _et al._ (2019) and $\Sigma(-p^{2})$ from the gap equation with the G-Z Model.
Table 2: LECs calculated from the lattice fittings given in Ref. Oliveira _et al._ (2019).
$\Lambda_{c}$(GeV) | $M(0)$(GeV) | $f_{\pi}$ (GeV) | ${}-\langle\bar{\psi}\psi\rangle$ (GeV3) | $g_{R}$ | $g_{T}$ | $g_{TR}$ | $m_{R}$ (GeV) | $m_{T}$ (GeV) | $\Delta m$ (GeV)
---|---|---|---|---|---|---|---|---|---
1.4 | 0.344 | 0.095 | $(0.341)^{3}$ | 1.101 | $-0.398$ | $-1.031$ | 1.349 | 0.948 | 0.401
1.5 | 0.344 | 0.096 | $(0.351)^{3}$ | 1.054 | -0.432 | -1.041 | 1.405 | 1.021 | 0.384
1.6 | 0.344 | 0.096 | $(0.359)^{3}$ | 1.015 | -0.462 | -1.050 | 1.462 | 1.095 | 0.368
1.7 | 0.344 | 0.096 | $(0.367)^{3}$ | 0.983 | -0.488 | -1.058 | 1.522 | 1.169 | 0.352
1.8 | 0.344 | 0.096 | $(0.375)^{3}$ | 0.956 | -0.511 | -1.064 | 1.582 | 1.245 | 0.337
From Table 2, one can see that the numerical results of LECs are comparable to
the results from the G-Z model when $\Lambda_{c}=1.6$ GeV, which implies that
$Z(-p^{2})$ does not have significant impacts on the excited states. The
physical masses are $\tilde{m}_{T}\approx 2.37$ GeV and $\tilde{m}_{R}\approx
2.73$ GeV for excited $D$ mesons, and $\tilde{m}_{T}\approx 5.76$ GeV and
$\tilde{m}_{R}\approx 6.12$ GeV for excited $B$ mesons. The conclusions that
can be drawn from these results are the same as those from the G-Z model.
## V Conclusions
In this paper, we derived the chiral effective Lagrangian for excited heavy-
light mesons from QCD. We focused on the chiral partners with
$j_{l}^{P}=\frac{3}{2}^{+}$ and $j_{l}^{P}=\frac{3}{2}^{-}$, which amount to the
($1^{+},2^{+}$) and ($1^{-},2^{-}$) states, respectively. The low energy
constants are expressed as momentum integrals of the light quark self-energy,
which in turn is obtained by using the gap equation or lattice QCD. The
relevant LECs in the chiral Lagrangian are calculated, and the resulting masses
for the excited $D$ mesons are found to be roughly consistent with experimental
data. For the excited $B$ mesons, we obtain $\tilde{m}_{T}\approx 5.74$ GeV
and $\tilde{m}_{R}\approx 6.12$ GeV. We find that $\tilde{m}_{T}$ is
consistent with the masses of $(B_{1}(5721),B_{2}^{*}(5747))$, and the result of
$\tilde{m}_{R}$ suggests that $B_{J}(5970)$ could be a good candidate for the
states in the doublet $(1^{-},2^{-})$.
As more and more excited states of charmed mesons and bottom mesons are
discovered in experiments, it will be interesting and important to
extend our method to obtain the chiral effective Lagrangian of heavy-light
mesons with arbitrary spins. Since this extension is straightforward, we will
not go into details here.
Finally, we note that we have not discussed the chiral partner
structure of the heavy-light mesons including a strange quark. One of the
reasons is that the quark content of some of these mesons is still under
debate. For example, the mesons $D_{s0}(2317)$ and $D_{s1}(2460)$, which can be
arranged into the $G$ doublet with quantum numbers $G=(0^{+},1^{+})$, are
regarded as the chiral partners of $D$ and $D^{\ast}$, respectively, in the
$H$ doublet with quantum numbers $H=(0^{-},1^{-})$ Nowak _et al._ (2004). The
molecular state interpretation of these mesons and their bottom cousins cannot
be ruled out Faessler _et al._ (2007a, b, 2008). We leave the discussion of
these strange mesons for future work.
###### Acknowledgements.
The work of Y. L. M. was supported in part by National Science Foundation of
China (NSFC) under Grant No. 11875147 and No.11475071. H.-F. Fu was supported
by NSFC under Grant No.12047569. Q. W. was supported by the National Science
Foundation of China (NSFC) under Grant No. 11475092.
## Appendix A FORMULA FOR LECS WITH $Z(-p^{2})$
The quark propagator can be generally written as
$\displaystyle S(p)$ $\displaystyle=$ $\displaystyle\frac{i}{A(-p^{2})\not{p}\
-B(-p^{2})}$ $\displaystyle=$ $\displaystyle i\,Z(-p^{2})\,\frac{\not{p}\
+M(-p^{2})}{p^{2}-M^{2}(-p^{2})},$
where $Z(-p^{2})=1/A(-p^{2})$ stands for the quark wave function
renormalization and $M(-p^{2})=B(-p^{2})/A(-p^{2})$ is the renormalization
group invariant running quark mass. After a series of calculations, we can get
the LECs with $Z(-p^{2})$ as follows:
$\displaystyle m_{T}=\frac{iN_{c}}{Z_{T}}\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}-(v\cdot p)^{2}}{p^{2}-M^{2}}-\frac{p^{2}M}{(p^{2}-M^{2})\,v\cdot p}\right]Z,$
$\displaystyle g_{T}=-\frac{iN_{c}}{Z_{T}}\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}(M^{2}+\tfrac{2}{3}p^{2})}{(p^{2}-M^{2})^{2}\,v\cdot p}-\frac{2M[p^{2}-(v\cdot p)^{2}]}{(p^{2}-M^{2})^{2}}\right]Z^{2},$
$\displaystyle m_{R}=\frac{iN_{c}}{Z_{R}}\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}-(v\cdot p)^{2}}{p^{2}-M^{2}}+\frac{p^{2}M}{(p^{2}-M^{2})\,v\cdot p}\right]Z,$
$\displaystyle g_{R}=-\frac{iN_{c}}{Z_{R}}\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}(M^{2}+\tfrac{2}{3}p^{2})}{(p^{2}-M^{2})^{2}\,v\cdot p}+\frac{2M[p^{2}-(v\cdot p)^{2}]}{(p^{2}-M^{2})^{2}}\right]Z^{2},$
$\displaystyle Z_{T}=iN_{c}\int\frac{d^{4}p}{(2\pi)^{4}}\left\{\left[-\frac{p^{2}}{(p^{2}-M^{2})\,v\cdot p}-\frac{[2M+2ZM^{\prime}(p^{2}+M^{2})][p^{2}-(v\cdot p)^{2}]}{(p^{2}-M^{2})^{2}}\right]Z-\frac{2Z^{\prime}M}{p^{2}-M^{2}}\right\},$
$\displaystyle Z_{R}=iN_{c}\int\frac{d^{4}p}{(2\pi)^{4}}\left\{\left[-\frac{p^{2}}{(p^{2}-M^{2})\,v\cdot p}+\frac{[2M+2ZM^{\prime}(p^{2}+M^{2})][p^{2}-(v\cdot p)^{2}]}{(p^{2}-M^{2})^{2}}\right]Z+\frac{2Z^{\prime}M}{p^{2}-M^{2}}\right\},$
$\displaystyle g_{TR}=-\frac{iN_{c}}{\sqrt{Z_{T}Z_{R}}}\int\frac{d^{4}p}{(2\pi)^{4}}\left[\frac{p^{2}(M^{2}+p^{2})}{(p^{2}-M^{2})^{2}\,v\cdot p}\right]Z^{2}.$ (21)
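The propagator rewriting at the start of this appendix can be verified directly: multiplying $A(-p^{2})\not{p}-B(-p^{2})$ by $\not{p}+M$ with $M=B/A$ gives $A(p^{2}-M^{2})$. Below is a quick numerical check in an explicit Dirac representation; since $A$ and $B$ are evaluated at a fixed momentum, using illustrative constant values (an assumption of this sketch) suffices.

```python
import numpy as np

# Check: 1/(A*pslash - B) == Z*(pslash + M)/(p^2 - M^2), with Z = 1/A, M = B/A.
# The Dirac-basis gamma matrices are standard; the values of A, B and the
# momentum p are illustrative assumptions.
g0 = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)

def gamma_spatial(sigma):
    z = np.zeros((2, 2), dtype=complex)
    return np.block([[z, sigma], [-sigma, z]])

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gammas = [g0, gamma_spatial(s1), gamma_spatial(s2), gamma_spatial(s3)]
metric = np.array([1.0, -1.0, -1.0, -1.0])

p = np.array([1.3, 0.2, -0.4, 0.7])        # contravariant components p^mu
p2 = float(np.sum(metric * p**2))          # Minkowski p^2
pslash = sum(metric[mu] * p[mu] * gammas[mu] for mu in range(4))

A, B = 1.7, 0.9
Z, M = 1.0 / A, B / A
lhs = 1j * np.linalg.inv(A * pslash - B * np.eye(4))
rhs = 1j * Z * (pslash + M * np.eye(4)) / (p2 - M**2)
assert np.allclose(lhs, rhs)
```

The check works for any $p$ with $p^{2}\neq M^{2}$, since $(A\not{p}-B)(\not{p}+M)=A(p^{2}-M^{2})\,\mathbb{1}$ whenever $AM=B$.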
## References
* Isgur and Wise (1990) N. Isgur and M. B. Wise, Phys. Lett. B 237, 527 (1990).
* Isgur and Wise (1989) N. Isgur and M. B. Wise, Phys. Lett. B 232, 113 (1989).
* Eichten and Hill (1990) E. Eichten and B. R. Hill, Phys. Lett. B 234, 511 (1990).
* Yuan (1995) T. C. Yuan, Phys. Rev. D 51, 4830 (1995), arXiv:hep-ph/9407341 .
* Nowak _et al._ (1993) M. A. Nowak, M. Rho, and I. Zahed, Phys. Rev. D 48, 4370 (1993).
* Bardeen and Hill (1994) W. A. Bardeen and C. T. Hill, Phys. Rev. D 49, 409 (1994).
* Wang _et al._ (2000a) Q. Wang, Y.-P. Kuang, M. Xiao, and X.-L. Wang, Phys. Rev. D 61, 054011 (2000a).
* Yang _et al._ (2002) H. Yang, Q. Wang, Y.-P. Kuang, and Q. Lu, Phys. Rev. D 66, 014019 (2002).
* Jiang _et al._ (2010) S.-Z. Jiang, Y. Zhang, C. Li, and Q. Wang, Phys. Rev. D 81, 014001 (2010).
* Jiang and Wang (2010) S.-Z. Jiang and Q. Wang, Phys. Rev. D 81, 094037 (2010).
* Jiang _et al._ (2013) S.-Z. Jiang, Q. Wang, and Y. Zhang, Phys. Rev. D 87, 094014 (2013).
* Jiang _et al._ (2015) S.-Z. Jiang, Z.-L. Wei, Q.-S. Chen, and Q. Wang, Phys. Rev. D 92, 025014 (2015).
* Wang and Wang (2000) X.-L. Wang and Q. Wang, Commun. Theor. Phys. 34, 519 (2000).
* Wang _et al._ (2000b) X.-L. Wang, Z.-M. Wang, and Q. Wang, Commun. Theor. Phys. 34, 683 (2000b).
* Ren _et al._ (2017) K. Ren, H.-F. Fu, and Q. Wang, Phys. Rev. D 95, 074012 (2017).
* Chen _et al._ (2020a) Q.-S. Chen, H.-F. Fu, Y.-L. Ma, and Q. Wang, Phys. Rev. D 102, 034034 (2020a), arXiv:2001.06418 [hep-ph] .
* Chen _et al._ (2020b) Q.-S. Chen, H.-F. Fu, Y.-L. Ma, and Q. Wang, (2020b), arXiv:2012.03733 [hep-ph] .
* Aaij _et al._ (2013) R. Aaij _et al._ (LHCb), JHEP 09, 145 (2013), arXiv:1307.4556 [hep-ex] .
* Aaij _et al._ (2016) R. Aaij _et al._ (LHCb), Phys. Rev. D 94, 072001 (2016), arXiv:1608.01289 [hep-ex] .
* Aaij _et al._ (2020) R. Aaij _et al._ (LHCb), Phys. Rev. D 101, 032005 (2020), arXiv:1911.05957 [hep-ex] .
* del Amo Sanchez _et al._ (2010) P. del Amo Sanchez _et al._ (BaBar), Phys. Rev. D 82, 111101 (2010), arXiv:1009.2076 [hep-ex] .
* Aaltonen _et al._ (2014) T. A. Aaltonen _et al._ (CDF), Phys. Rev. D 90, 012013 (2014), arXiv:1309.5961 [hep-ex] .
* Aaij _et al._ (2015) R. Aaij _et al._ (LHCb), JHEP 04, 024 (2015), arXiv:1502.02638 [hep-ex] .
* Godfrey and Moats (2016) S. Godfrey and K. Moats, Phys. Rev. D 93, 034035 (2016), arXiv:1510.08305 [hep-ph] .
* Song _et al._ (2015) Q.-T. Song, D.-Y. Chen, X. Liu, and T. Matsuki, Phys. Rev. D 92, 074011 (2015), arXiv:1503.05728 [hep-ph] .
* Chen _et al._ (2017) H.-X. Chen, W. Chen, X. Liu, Y.-R. Liu, and S.-L. Zhu, Rept. Prog. Phys. 80, 076201 (2017), arXiv:1609.08928 [hep-ph] .
* Gupta and Upadhyay (2018) P. Gupta and A. Upadhyay, Phys. Rev. D 97, 014015 (2018), arXiv:1801.00404 [hep-ph] .
* Kher _et al._ (2017) V. Kher, N. Devlani, and A. K. Rai, Chin. Phys. C 41, 073101 (2017), arXiv:1704.00439 [hep-ph] .
* Gandhi and Rai (2019) K. Gandhi and A. K. Rai, (2019), arXiv:1911.06063 [hep-ph] .
* Gupta and Upadhyay (2019) P. Gupta and A. Upadhyay, Phys. Rev. D 99, 094043 (2019), arXiv:1803.03136 [hep-ph] .
* Godfrey and Moats (2019) S. Godfrey and K. Moats, Eur. Phys. J. A 55, 84 (2019), arXiv:1903.03886 [hep-ph] .
* Nowak and Zahed (1993) M. A. Nowak and I. Zahed, Phys. Rev. D 48, 356 (1993).
* Kilian _et al._ (1992) U. Kilian, J. G. Korner, and D. Pirjol, Phys. Lett. B 288, 360 (1992).
* Zyla _et al._ (2020) P. Zyla _et al._ (Particle Data Group), Prog. Theor. Exp. Phys. 2020, 083C01 (2020).
* Dudal _et al._ (2012) D. Dudal, O. Oliveira, and J. Rodriguez-Quintero, Phys. Rev. D 86, 105005 (2012).
* Oliveira _et al._ (2019) O. Oliveira, W. de Paula, T. Frederico, and J. de Melo, Eur. Phys. J. C 79, 116 (2019).
* Nowak _et al._ (2004) M. A. Nowak, M. Rho, and I. Zahed, Acta Phys. Polon. B 35, 2377 (2004), arXiv:hep-ph/0307102 .
* Faessler _et al._ (2007a) A. Faessler, T. Gutsche, V. E. Lyubovitskij, and Y.-L. Ma, Phys. Rev. D 76, 014005 (2007a), arXiv:0705.0254 [hep-ph] .
* Faessler _et al._ (2007b) A. Faessler, T. Gutsche, V. E. Lyubovitskij, and Y.-L. Ma, Phys. Rev. D 76, 114008 (2007b), arXiv:0709.3946 [hep-ph] .
* Faessler _et al._ (2008) A. Faessler, T. Gutsche, V. E. Lyubovitskij, and Y.-L. Ma, Phys. Rev. D 77, 114013 (2008), arXiv:0801.2232 [hep-ph] .
# Conditional Independence Testing in Hilbert Spaces with Applications to
Functional Data Analysis
Anton Rask Lundborg
University of Cambridge, UK
<EMAIL_ADDRESS>Part of this work was done while ARL was at the
University of Copenhagen. ARL was supported by the Cantab Capital Institute
for the Mathematics of Information. Rajen D. Shah
University of Cambridge, UK
<EMAIL_ADDRESS>RDS was supported by an EPSRC Programme Grant
EP/N031938/1 and an EPSRC First Grant EP/R013381/1. Jonas Peters
University of Copenhagen, Denmark
<EMAIL_ADDRESS>JP was supported by the Carlsberg Foundation and a
research grant (18968) from the VILLUM Foundation.
###### Abstract
We study the problem of testing the null hypothesis that $X$ and $Y$ are
conditionally independent given $Z$, where each of $X$, $Y$ and $Z$ may be
functional random variables. This generalises testing the significance of $X$
in a regression model of scalar response $Y$ on functional regressors $X$ and
$Z$. We show however that even in the idealised setting where additionally
$(X,Y,Z)$ has a Gaussian distribution, the power of any test cannot exceed its
size. Further modelling assumptions are needed and we argue that a convenient
way of specifying these assumptions is based on choosing methods for
regressing each of $X$ and $Y$ on $Z$. We propose a test statistic involving
inner products of the resulting residuals that is simple to compute and
calibrate: type I error is controlled uniformly when the in-sample prediction
errors are sufficiently small. We show this requirement is met by ridge
regression in functional linear model settings without requiring any eigen-
spacing conditions or lower bounds on the eigenvalues of the covariance of the
functional regressor. We apply our test in constructing confidence intervals
for truncation points in truncated functional linear models and testing for
edges in a functional graphical model for EEG data.
## 1 Introduction
In a variety of application areas, such as meteorology, neuroscience,
linguistics, and chemometrics, we observe samples containing random functions
[ullah2013applications, Ramsay2005]. The field of functional data analysis
(FDA) has a rich toolbox of methods for the study of such data. For instance,
there are a number of regression methods for different functional data types,
including linear function-on-scalar [Reiss2010], scalar-on-function
[hall2007methodology, Goldsmith2011, Shin2009, Reiss2007, Yuan2010,
delaigle2012methodology] and function-on-function [Ivanescu2015, Scheipl2015]
regression; there are also nonlinear and nonparametric variants [ferraty2006,
Ferraty2011, Fan2015, Yao2010], and versions able to handle potentially large
numbers of functional predictors [fan2015functional], to give a few examples;
see [Wang2016, Morris2015] for helpful reviews and a more extensive list of
relevant references. The availability of software packages for functional
regression methods, such as the R-packages refund [refund] and FDboost
[FDboost], allow practitioners to easily adopt the FDA framework for their
particular data.
One area of FDA that has received less attention is that of conditional
independence testing. Given random elements $X,Y,Z$, the conditional
independence $X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ formalises the
idea that $X$ contains no further information about $Y$ beyond that already
contained in $Z$. A precise definition is given in Section 1.2. Inferring
conditional independence from observed data is of central importance in causal
inference [Pearl2009, Spirtes2000, Peters2017], graphical modelling
[lauritzen1996, Koller2009] and variable selection. For example, consider the
linear scalar-on-function regression model
$Y=\int_{0}^{1}\theta_{X}(t)X(t)dt+\int_{0}^{1}\theta_{Z}(t)Z(t)dt+\varepsilon,$
(1)
where $X,Z$ are random covariate functions taking values in
$L^{2}([0,1],\mathbb{R})$, $\theta_{X},\theta_{Z}$ are unknown parameter
functions, $Y\in{\mathbb{R}}$ is a scalar response and
$\varepsilon\in\mathbb{R}$ satisfying
$\varepsilon\mbox{${}\perp\mkern-11.0mu\perp{}$}(X,Z)$ represents stochastic
error. In this model, conditional independence
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ is equivalent to
$\theta_{X}=0$, i.e., whether the functional predictor $X$ is significant.
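As a concrete illustration, model (1) can be simulated by discretising $[0,1]$ and approximating the $L^{2}$ inner products by Riemann sums; the grid size, coefficient functions and noise level below are illustrative assumptions.

```python
import numpy as np

# Discretised simulation of the scalar-on-function model (1). Under the null
# theta_X = 0, so X is not significant and X ⊥⊥ Y | Z holds by construction.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]

def l2_inner(f, g):
    """Riemann-sum approximation of the L^2([0,1]) inner product."""
    return float(np.sum(f * g) * dt)

def random_functions(n):
    """Crude smooth random functions: short random Fourier sine sums."""
    k = np.arange(1, 6)
    coefs = rng.normal(size=(n, 5)) / k
    return coefs @ np.sin(np.pi * np.outer(k, t))

theta_X = np.zeros_like(t)            # null hypothesis: theta_X = 0
theta_Z = np.sin(2.0 * np.pi * t)     # arbitrary illustrative coefficient

n = 200
X, Z = random_functions(n), random_functions(n)
eps = rng.normal(scale=0.1, size=n)
Y = np.array([l2_inner(theta_X, xi) + l2_inner(theta_Z, zi)
              for xi, zi in zip(X, Z)]) + eps
```

Replacing `theta_X` by a non-zero function moves the simulation into the alternative, where $X$ is a significant predictor.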
For nonlinear regression models, the conditional independence
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ still characterises whether $X$
is useful for predicting $Y$ given $Z$. Indeed, consider a more general
setting where $Y$ is a potentially infinite-dimensional response, and
$X_{1},\ldots,X_{p}$ are predictors, some or all of which may be functional.
Then a set of predictors $S\subseteq\\{1,\ldots,p\\}$ that contains all useful
information for predicting $Y$, that is such that
$Y\mbox{${}\perp\mkern-11.0mu\perp{}$}\\{X_{j}\\}_{j\notin
S}\,|\,\\{X_{j}\\}_{j\in S}$, is known as a Markov blanket of $Y$ in the
graphical modelling literature [pearl2014probabilistic, Sec. 3.2.1]. If
$Y\mbox{${}\not\\!\perp\mkern-11.0mu\perp{}$}X_{j}\,|\,\\{X_{k}\\}_{k\neq j}$,
then $j$ is contained in every Markov blanket, and under mild conditions
(e.g., the intersection property [Pearl2009, Peters2014jci]), the smallest
Markov blanket (sometimes called the Markov boundary) is unique and coincides
exactly with those variables $j$ satisfying this conditional dependence. This
set may thus be inferred by applying conditional independence tests.
Conditional independence tests may also be used to test for edge presence in
conditional independence graphs and are at the heart of several methods for
causal discovery [Spirtes2000, Peters2016jrssb].
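In the jointly Gaussian finite-dimensional case, these conditional (in)dependences can be read off population partial correlations, which are determined by the precision matrix. A small sketch, in which the structural equations are assumptions chosen purely for illustration:

```python
import numpy as np

# Markov boundary illustration: Y = X1 + X3 + e, with X2 independent of the
# rest (all standard Gaussian). For jointly Gaussian variables,
# Y ⊥⊥ Xj | rest  iff  the partial correlation of Y and Xj is zero.
# Population covariance of (Y, X1, X2, X3):
Sigma = np.array([[3.0, 1.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0, 1.0]])
P = np.linalg.inv(Sigma)                     # precision matrix

def partial_corr(i, j):
    """Partial correlation of components i and j given all the others."""
    return -P[i, j] / np.sqrt(P[i, i] * P[j, j])

# X2 lies outside the Markov boundary of Y; X1 and X3 lie inside it.
assert np.isclose(partial_corr(0, 2), 0.0)
assert abs(partial_corr(0, 1)) > 0.1 and abs(partial_corr(0, 3)) > 0.1
```

With sample data one would replace the population `Sigma` by an estimate and test whether each partial correlation is zero, recovering the Markov boundary $\\{1,3\\}$.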
Recent work [GCM] however has shown that in the setting where $X,Y$ and $Z$
are random vectors where $Z$ is absolutely continuous (i.e., has a density
with respect to Lebesgue measure), testing the conditional independence
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ is fundamentally hard in the
sense that any test for conditional independence must have power at most its
size. Intuitively, the reason for this is that given any test, there are
potentially highly complex joint distributions for the triple $(X,Y,Z)$ that
maintain conditional independence but yield rejection rates as high as for any
alternative distribution. Lipschitz constraints on the joint density, for
example, preclude the presence of such distributions [neykov2020minimax].
In the context of functional data however, the problem can be more severe, and
we show in this work that even in the idealised setting where $(X,Y,Z)$ are
jointly Gaussian in the functional linear regression model (1), testing for
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ is fundamentally impossible:
any test must have power at most its size. In other words, any test with power
$\beta$ at some alternative cannot hope to control type I error at level
$\alpha<\beta$ across the entirety of the null hypothesis, even if we are
willing to assume Gaussianity. Perhaps more surprisingly, this fundamental
problem persists even if additionally we allow ourselves to know the precise
null distribution of the infinite-dimensional $Z$.
Consequently, there is no general purpose conditional independence test even
for Gaussian functional data, and we must necessarily make some additional
modelling assumptions to proceed. We argue that this calls for
conditional independence tests whose suitability for any functional data
setting can be judged more easily.
Motivated by the Generalised Covariance Measure [GCM], we propose a simple
test we call the Generalised Hilbertian Covariance Measure (GHCM) that
involves regressing $X$ on $Z$ and $Y$ on $Z$ (each of which may be functional
or indeed collections of functions), and computing a test statistic formed
from inner products of pairs of residuals. We show that the validity of this
form of test relies primarily on the relatively weak requirement that the
regression procedures have sufficiently small in-sample prediction errors. We
thus aim to convert the problem of conditional independence testing into the
more familiar task of regression with functional data, for which well-
developed methods are readily available. These features mark out our test as
rather different from existing approaches for assessing conditional
independence in FDA, which we review in the following.
One approach to measuring conditional dependence with functional data is based
on the Gaussian graphical model. zhu2016bayesian propose a Bayesian approach
for learning a graphical model for jointly Gaussian multivariate functional
data. Qiao2019 and Zapata2019 study approaches based on generalisations of the
graphical Lasso [yuan2007model]. These latter methods do not aim to perform
statistical tests for conditional independence, but rather provide a point
estimate of the graph, for which the authors establish consistency results
valid in potentially high-dimensional settings.
As discussed earlier, conditional independence testing is related to
significance testing in regression models. There is however a paucity of
literature on formal significance tests for functional predictors. The R
implementation [refund] of the popular functional regression methodology of
Greven2017 produces $p$-values for the inclusion of a functional predictor
based on significance tests for generalised additive models developed in
wood2013p. These tests, whilst computationally efficient, do not however come
with formal uniform level control guarantees.
### 1.1 Our main contributions and organisation of the paper
#### It is impossible to test conditional independence with Gaussian
functional data.
In Section 2 we present our formal hardness result on conditional independence
testing for Gaussian functional data. The proof rests on a new result on the
maximum power attainable at any alternative when testing for conditional
independence with multivariate Gaussian data. The full technical details are
given in Section A of the supplementary material. As we cannot hope to have
level control uniformly over the entirety of the null of conditional
independence, it is important to establish, for any given test, subsets
$\tilde{\mathcal{P}}_{0}$ of null distributions $\mathcal{P}_{0}$ over which
we do have uniform level control.
#### We provide new tools allowing for the development of uniform results in
FDA.
Uniform results are scarce in functional data analysis; we develop the tools
for deriving such results in Section B of the supplementary material which
studies uniform convergence of Hilbertian and Banachian random variables.
#### Given sufficiently good methods for regressing each of $X$ and $Y$ on
$Z$, the GHCM can test conditional independence with certain uniform level
guarantees.
In Section 3 we describe our new GHCM testing framework for testing
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$, where each of $X$, $Y$ and $Z$
may be collections of functional and scalar variables. In Section 4 we show
that for the GHCM, an effective null hypothesis $\tilde{\mathcal{P}}_{0}$ may
be characterised as one where in addition to some tightness and moment
conditions, the conditional expectations ${\mathbb{E}}(X\,|\,Z)$ and
${\mathbb{E}}(Y\,|\,Z)$ can be estimated at sufficiently fast rates, such that
the product of the corresponding in-sample mean squared prediction errors
(MSPEs) decay faster than $1/n$ uniformly, where $n$ is the sample size. Note
that this does not contradict the hardness result: it is well known that there
do not exist regression methods with risk converging to zero uniformly over
all distributions for the data [gyorfi, Thm. 3.1]. Thus, the regression
methods must be chosen appropriately in order for the GHCM to perform well. In
Section 4.3 we show that a version of the GHCM incorporating sample-splitting
has uniform power against alternatives where the expected conditional
covariance operator ${\mathbb{E}}\\{\mathrm{Cov}(X,Y\,|\,Z)\\}$ has
Hilbert–Schmidt norm of order $n^{-1/2}$, and is thus rate-optimal.
#### The regression methods are only required to perform well on the observed
data.
The fact that control of the type I error of the GHCM depends on an in-sample
MSPE rather than a more conventional out-of-sample MSPE, has important
consequences. Whilst in-sample and out-of-sample errors may be considered
rather similar, in the context of function regression, they are substantially
different. We demonstrate in Section 4.4 that bounds on the former are
achievable under significantly weaker conditions than equivalent bounds on the
latter by considering ridge regression in the functional linear model. In
particular the required prediction error rates are satisfied over classes of
functional linear models where the eigenvalues of the covariance operator of
the functional regressor are dominated by a summable sequence; no additional
eigen-spacing conditions, or lower bounds on the decay of the eigenvalues are
needed, in contrast to existing results on out-of-sample error rates [cai2006,
hall2007methodology, Crambes2013].
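A discretised sketch of this setting is given below; the grid representation via principal-component scores, the eigenvalue decay and the ridge penalty are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

# Functional ridge regression sketch: Z has covariance eigenvalues decaying
# like 1/j^2 (dominated by a summable sequence), E(X | Z) = <beta, Z> is
# linear, and the conditional mean is estimated by ridge regression.
# All concrete choices (decay, penalty, noise level) are assumptions.
rng = np.random.default_rng(1)
n, d = 300, 50
scales = 1.0 / np.arange(1, d + 1)
Z = rng.normal(size=(n, d)) * scales          # principal-component scores
beta = 1.0 / np.arange(1, d + 1) ** 2
x = Z @ beta + 0.1 * rng.normal(size=n)

lam = 0.1
beta_hat = np.linalg.solve(Z.T @ Z + n * lam * np.eye(d), Z.T @ x)

# In-sample mean squared prediction error of the fitted conditional mean,
# the quantity whose decay governs the level guarantee of the GHCM:
in_sample_mspe = float(np.mean((Z @ beta_hat - Z @ beta) ** 2))
```

Note that `in_sample_mspe` compares predictions at the *observed* `Z` values; no out-of-sample evaluation, and hence no eigen-spacing or eigenvalue lower-bound condition, is involved.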
#### The GHCM has several uses.
Section 5 presents the results of numerical experiments on the GHCM. We study
the following use cases. (i) Testing for significance of functional predictors
in functional regression models. We are not aware of other approaches that
provide significance statements in functional regression models and come with
statistical guarantees. For example, in comparison to the $p$-values from pfr,
which are highly anti-conservative in challenging setups, the type I error of
the GHCM test is well-controlled (see Figure 1). (ii) Deriving confidence
intervals for truncation points in truncated functional linear models. We
demonstrate in Section 5.2 the use of the GHCM in the construction of a
confidence interval for the truncation point in a truncated functional linear
model, a problem which we show may be framed as one of testing certain
conditional independencies. (iii) Testing for edge presence in functional
graphical models. In Section 5.3, we use the GHCM to learn functional
graphical models for EEG data from a study on alcoholism.
We conclude with a discussion in Section 6 outlining potential follow-on work
and open problems. The supplementary material contains the proofs of all
results presented in the main text and some additional numerical experiments,
as well as the uniform convergence results mentioned above. An R-package ghcm
[ghcm] implementing the methodology is available on CRAN.
### 1.2 Preliminaries and notation
For three random elements $X$, $Y$ and $Z$ defined on the same probability
space $(\Omega,\mathcal{F},\mathbb{P})$ with values in measurable spaces
$(\mathcal{X},\mathcal{A})$, $(\mathcal{Y},\mathcal{G})$ and
$(\mathcal{Z},\mathcal{K})$ respectively, we say that $X$ is conditionally
independent of $Y$ given $Z$ and write
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ when
$\mathbb{E}(f(X)g(Y)\,|\,Z)\overset{a.s.}{=}\mathbb{E}(f(X)\,|\,Z)\mathbb{E}(g(Y)\,|\,Z)$
for all bounded and Borel measurable $f:\mathcal{X}\to\mathbb{R}$ and
$g:\mathcal{Y}\to\mathbb{R}$. Several equivalent definitions are given in
Constantinou2017. As with Euclidean variables, the interpretation of
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ is that ‘knowing $Z$ renders
$X$ irrelevant for predicting $Y$’ [lauritzen1996].
Throughout the paper we consider families of probability distributions
$\mathcal{P}$ of the triplet $(X,Y,Z)$, which we partition into the null
hypothesis $\mathcal{P}_{0}$ of those $P\in\mathcal{P}$ satisfying
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$, and set of alternatives
$\mathcal{Q}:=\mathcal{P}\setminus\mathcal{P}_{0}$ where the conditional
independence relation is violated. We consider data $(x_{i},y_{i},z_{i})$,
$i=1,\ldots,n$, consisting of i.i.d. copies of $(X,Y,Z)$, and write
$X^{(n)}:=(x_{i})_{i=1}^{n}$ and similarly for $Y^{(n)}$ and $Z^{(n)}$. We
apply to this data a test
$\psi_{n}:(\mathcal{X}\times\mathcal{Y}\times\mathcal{Z})^{n}\to\\{0,1\\}$,
with a value of $1$ indicating rejection. We will at times write
${\mathbb{E}}_{P}(\cdot)$ for expectations of random elements whose
distribution is determined by $P$, and similarly
${\mathbb{P}}_{P}(\cdot)={\mathbb{E}}_{P}(\mathbbm{1}_{\\{\cdot\\}})$. Thus,
the size of the test $\psi_{n}$ may be written as
$\sup_{P\in\mathcal{P}_{0}}{\mathbb{P}}_{P}(\psi_{n}=1)$.
We always take $\mathcal{X}=\mathcal{H}_{X}$ and $\mathcal{Y}=\mathcal{H}_{Y}$
for separable Hilbert spaces $\mathcal{H}_{X}$ and $\mathcal{H}_{Y}$ and write
$d_{X}$ and $d_{Y}$ for their dimensions, which may be $\infty$. When these
are finite-dimensional, as will typically be the case in practice, $X^{(n)}$
will be an $n\times d_{X}$ matrix and similarly for $Y^{(n)}$. Likewise, we
will take $\mathcal{Z}={\mathbb{R}}^{d_{Z}}$ in the finite-dimensional case
and then $Z^{(n)}\in{\mathbb{R}}^{n\times d_{Z}}$. However, in order for our
theoretical results to be relevant for settings where $d_{X}$ and $d_{Y}$ may
be arbitrarily large compared to $n$, our theory must also accommodate
infinite-dimensional settings, for which we introduce the following notation.
For $g$ and $h$ in a Hilbert space $\mathcal{H}$, we write $\langle
g,h\rangle$ for the inner product of $g$ and $h$ and $\|g\|$ for its norm;
note we suppress dependence of the norm and inner product on the Hilbert
space. The bounded linear operator on $\mathcal{H}$ given by $x\mapsto\langle
x,g\rangle h$ is the outer product of $g$ and $h$ and is denoted by $g\otimes
h$. A bounded linear operator $\mathscr{A}$ on $\mathcal{H}$ is compact if it
has a singular value decomposition, i.e., there exist two orthonormal bases
$(e_{1,k})_{k\in\mathbb{N}}$ and $(e_{2,k})_{k\in\mathbb{N}}$ of $\mathcal{H}$
and a non-increasing sequence $(\lambda_{k})_{k\in\mathbb{N}}$ of singular
values such that
$\mathscr{A}h=\sum_{k=1}^{\infty}\lambda_{k}(e_{1,k}\otimes
e_{2,k})h=\sum_{k=1}^{\infty}\lambda_{k}\langle e_{1,k},h\rangle e_{2,k}$
for all $h\in\mathcal{H}$. For a compact linear operator $\mathscr{A}$ as
above, we denote by $\lVert\mathscr{A}\rVert_{\textrm{op}}$,
$\lVert\mathscr{A}\rVert_{\textrm{HS}}$ and
$\lVert\mathscr{A}\rVert_{\textrm{TR}}$ the operator norm, Hilbert–Schmidt
norm and trace norm, respectively, of $\mathscr{A}$, which equal the
$\ell^{\infty}$, $\ell^{2}$ and $\ell^{1}$ norms, respectively, of the
sequence of singular values $(\lambda_{k})_{k\in\mathbb{N}}$.
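In finite dimensions these three norms are simply the $\ell^{\infty}$, $\ell^{2}$ and $\ell^{1}$ norms of the singular values, which can be seen directly (the particular matrix below is an assumed example):

```python
import numpy as np

# Operator, Hilbert–Schmidt and trace norms as l^inf, l^2 and l^1 norms of
# the singular values, illustrated on an assumed 2x2 matrix.
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
s = np.linalg.svd(A, compute_uv=False)     # non-increasing singular values
op_norm = float(s.max())                   # operator norm
hs_norm = float(np.sqrt(np.sum(s**2)))     # Hilbert–Schmidt norm
tr_norm = float(s.sum())                   # trace norm

# The Hilbert–Schmidt norm coincides with the Frobenius norm of A, and the
# three norms are always ordered: op_norm <= hs_norm <= tr_norm.
assert np.isclose(hs_norm, np.linalg.norm(A, "fro"))
assert op_norm <= hs_norm <= tr_norm
```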
A random variable on a separable Banach space $\mathcal{B}$ is a mapping
$X:\Omega\to\mathcal{B}$ defined on a probability space
$(\Omega,\mathcal{F},\mathbb{P})$ which is measurable with respect to the
Borel $\sigma$-algebra on $\mathcal{B}$, $\mathbb{B}(\mathcal{B})$. Integrals
with values in Hilbert or Banach spaces, including expectations, are Bochner
integrals throughout. For a random variable $X$ on Hilbert space
$\mathcal{H}$, we define the covariance operator of $X$ by
$\mathrm{Cov}(X):=\mathbb{E}\left[(X-\mathbb{E}(X))\otimes(X-\mathbb{E}(X))\right]=\mathbb{E}(X\otimes
X)-\mathbb{E}(X)\otimes\mathbb{E}(X)$
whenever $\mathbb{E}\lVert X\rVert^{2}<\infty$. For $h\in\mathcal{H}$ we thus
have
$\langle\mathrm{Cov}(X)h,h\rangle=\mathbb{E}\left(\langle X,h\rangle^{2}\right)-\mathbb{E}(\langle X,h\rangle)^{2}=\mathrm{Var}(\langle X,h\rangle).$
For another random variable $Y$ with $\mathbb{E}\lVert Y\rVert^{2}<\infty$, we
define the cross-covariance operator of $X$ and $Y$ by
$\mathrm{Cov}(X,Y):=\mathbb{E}\left[(X-\mathbb{E}(X))\otimes(Y-\mathbb{E}(Y))\right]=\mathbb{E}(X\otimes
Y)-\mathbb{E}(X)\otimes\mathbb{E}(Y).$
We define conditional variants of the covariance operator and cross-covariance
operator by replacing expectations with conditional expectations given a
$\sigma$-algebra or random variable.
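With discretised functional data stored as matrices, empirical versions of these operators reduce to plain matrix computations; the sample size and dimensions below are assumptions for illustration.

```python
import numpy as np

# Empirical covariance and cross-covariance operators for discretised
# functional data: functions are rows of X (n x d_X) and Y (n x d_Y), and the
# outer products in the definitions become d_X x d_X and d_X x d_Y matrices.
rng = np.random.default_rng(2)
n, d_X, d_Y = 500, 20, 30
X = rng.normal(size=(n, d_X))
Y = rng.normal(size=(n, d_Y))

Xc = X - X.mean(axis=0)                  # centre, as in the definitions
Yc = Y - Y.mean(axis=0)
cov_X = Xc.T @ Xc / n                    # empirical Cov(X): symmetric, PSD
cross = Xc.T @ Yc / n                    # empirical Cov(X, Y)
hs_norm = float(np.linalg.norm(cross))   # its Hilbert–Schmidt norm
```

The Hilbert–Schmidt norm of the (expected conditional) cross-covariance operator is exactly the quantity the GHCM's power guarantees are phrased in terms of.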
## 2 The hardness of conditional independence testing with Gaussian
functional data
In this section we present a negative result on the possibility of testing for
conditional independence with functional data in the idealised setting where
all variables are Gaussian. We take $\mathcal{P}$ to consist of distributions
of $(X,Y,Z)$ that are jointly Gaussian with injective covariance operator,
where $X$ and $Z$ take values in separable Hilbert spaces $\mathcal{H}_{X}$
and $\mathcal{H}_{Z}$ respectively with $\mathcal{H}_{Z}$ infinite-
dimensional, and $Y\in{\mathbb{R}}^{d_{Y}}$. We note that in the case where
$d_{Y}=1$ and $\mathcal{H}_{X}=\mathcal{H}_{Z}=L^{2}([0,1],\mathbb{R})$, each
$P\in\mathcal{P}$ admits a representation as a Gaussian scalar-on-function
linear model (1) where $Y$ is the scalar response, and functional covariates
$X,Z$ and error $\varepsilon$ are all jointly Gaussian with
$\varepsilon\mbox{${}\perp\mkern-11.0mu\perp{}$}(X,Z)$ (see Proposition 7 in
the supplementary material); the settings with $d_{Y}>1$ may be thought of
equivalently as multi-response versions of this.
For each $Q$ in the set of alternatives $\mathcal{Q}$, we further define
$\mathcal{P}_{0}^{Q}\subset\mathcal{P}_{0}$ by
$\mathcal{P}_{0}^{Q}:=\\{P\in\mathcal{P}_{0}:\text{ the marginal distribution
of }Z\text{ under }P\text{ and }Q\text{ is the same}\\}.$
Theorem 1 below shows that not only is it fundamentally hard to test the null
hypothesis of $\mathcal{P}_{0}$ against $\mathcal{Q}$ for all dataset sizes
$n$, but restricting to the null $\mathcal{P}_{0}^{Q}$ for $Q\in\mathcal{Q}$
presents an equally hard problem.
###### Theorem 1.
Given alternative $Q\in\mathcal{Q}$ and $n\in\mathbb{N}$, let $\psi_{n}$ be a
test for null hypothesis $\mathcal{P}_{0}^{Q}$ against $Q$. Then we have that
the power is at most the size:
${\mathbb{P}}_{Q}(\psi_{n}=1)\leq\sup_{P\in\mathcal{P}_{0}^{Q}}{\mathbb{P}}_{P}(\psi_{n}=1).$
An interpretation of this statement in the context of the functional linear
model is that regardless of the number of observations $n$, there is no non-
trivial test for the significance of the functional predictor $X$, even if the
marginal distribution of the additional infinite-dimensional predictor $Z$ is
known exactly. It is clear that the size of a test over $\mathcal{P}_{0}$ is
at least as large as that over the null $\mathcal{P}_{0}^{Q}$, so testing the
larger null is of course at least as hard.
It is known that testing conditional independence in simple multivariate
(finite-dimensional) settings is hard in the sense of Theorem 1 when the
conditioning variable is continuous. In such settings, restricting the null to
include only distributions with Lipschitz densities, for example, allows for
the existence of tests with power against large classes of the alternative.
The functional setting is, however, very different: simply removing pathological
distributions from the entire null of conditional independence does not make
the problem testable. Even with the parametric restriction of Gaussianity, the
null is still too large for the existence of non-trivial hypothesis tests.
Indeed, the starting point of our proof is a result due to Kraft1955 that the
hardness in the statement of Theorem 1 is equivalent to the $n$-fold product
$Q^{\otimes n}$ lying in the convex closure in total variation distance of the
set of $n$-fold products of distributions in $\mathcal{P}_{0}^{Q}$.
A consequence of Theorem 1 is that we need to make strong modelling
assumptions in order to test for conditional independence in the functional
data setting. Given the plethora of regression methods for functional data, we
argue that it can be convenient to frame these modelling assumptions in terms
of regression models for each of $X$ and $Y$ on $Z$, or more generally, in
terms of the performances of methods for these regressions. The remainder of
this paper is devoted to developing a family of conditional independence tests
whose validity rests primarily on the prediction errors of these regressions.
## 3 GHCM methodology
In this section we present the Generalised Hilbertian Covariance Measure
(GHCM) for testing conditional independence with functional data. To motivate
the approach we take, it will be helpful to first review the construction of
the Generalised Covariance Measure (GCM) developed in GCM for univariate $X$
and $Y$, which we do in the next section. In Section 3.2 we then define the
GHCM.
### 3.1 Motivation
Consider first therefore the case where $X$ and $Y$ are real-valued random
variables, and $Z$ is a random variable with values in some space
$\mathcal{Z}$. We can always write $X=f(Z)+\varepsilon$ where
$f(z):=\mathbb{E}(X\,|\,Z=z)$ and similarly $Y=g(Z)+\xi$ with
$g(z):=\mathbb{E}(Y\,|\,Z=z)$. The conditional covariance of $X$ and $Y$ given
$Z$,
$\mathrm{Cov}(X,Y\,|\,Z):=\mathbb{E}\left[\\{X-\mathbb{E}(X\,|\,Z)\\}\\{Y-\mathbb{E}(Y\,|\,Z)\\}\,|\,Z\right]={\mathbb{E}}(\varepsilon\xi\,|\,Z),$
has the property that $\mathrm{Cov}(X,Y\,|\,Z)=0$ and hence
${\mathbb{E}}(\varepsilon\xi)=0$ whenever
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$. The GCM forms an empirical
version of ${\mathbb{E}}(\varepsilon\xi)$ given data
$(x_{i},y_{i},z_{i})_{i=1}^{n}$ by first regressing each of $X^{(n)}$ and
$Y^{(n)}$ onto $Z^{(n)}$ to give estimates $\hat{f}$ and $\hat{g}$ of $f$ and
$g$ respectively. Using the corresponding residuals
$\hat{\varepsilon}_{i}:=x_{i}-\hat{f}(z_{i})$ and
$\hat{\xi}_{i}:=y_{i}-\hat{g}(z_{i})$, the product
$R_{i}:=\hat{\varepsilon}_{i}\hat{\xi}_{i}$ is computed for each
$i=1,\ldots,n$ and then averaged to give $\bar{R}:=\sum_{i=1}^{n}R_{i}/n$, an
estimate of ${\mathbb{E}}(\varepsilon\xi)$. The standard deviation of
$\bar{R}$ under the null $X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ may
also be estimated, and it can be shown [GCM, Thm 8] that under some
conditions, $\bar{R}$ divided by its estimated standard deviation converges
uniformly to a standard Gaussian distribution.
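The construction just described can be sketched in a few lines for scalar $X$ and $Y$; the choice of linear regressions and the simulated data-generating model are assumptions for illustration (the GCM allows any regression method).

```python
import numpy as np

# Sketch of the scalar GCM: regress X on Z and Y on Z, average the products
# of residuals, and normalise by the estimated standard deviation.
def gcm_statistic(x, y, Z):
    n = len(x)
    Zd = np.column_stack([np.ones(n), Z])                  # add an intercept
    eps = x - Zd @ np.linalg.lstsq(Zd, x, rcond=None)[0]   # residuals of X on Z
    xi = y - Zd @ np.linalg.lstsq(Zd, y, rcond=None)[0]    # residuals of Y on Z
    R = eps * xi
    return np.sqrt(n) * R.mean() / R.std()   # approximately N(0,1) under H0

rng = np.random.default_rng(3)
n = 2000
Z = rng.normal(size=n)
x = Z + rng.normal(size=n)         # X depends on Z only
y = 2.0 * Z + rng.normal(size=n)   # Y depends on Z only, so X ⊥⊥ Y | Z holds
T = gcm_statistic(x, y, Z)         # of moderate size under the null
```

Under an alternative where $Y$ depends directly on $X$ given $Z$, the residual products have non-zero mean and $|T|$ grows with $n$.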
This basic approach can be extended to the case where $X$ and $Y$ take values
in $\mathbb{R}^{d_{X}}$ and $\mathbb{R}^{d_{Y}}$ respectively, by considering
a multivariate conditional covariance,
$\mathrm{Cov}(X,Y\,|\,Z):=\mathbb{E}\left[\\{X-\mathbb{E}(X\,|\,Z)\\}\\{Y-\mathbb{E}(Y\,|\,Z)\\}^{\top}\,|\,Z\right]={\mathbb{E}}(\varepsilon\xi^{\top}\,|\,Z)\in{\mathbb{R}}^{d_{X}\times
d_{Y}}.$
This is a zero matrix when $X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$, and
hence ${\mathbb{E}}(\varepsilon\xi^{\top})=0$ under this null. Thus, $\bar{R}$
defined as before but where $R_{i}:=\hat{\varepsilon}_{i}\hat{\xi}_{i}^{\top}$
can form the basis of a test of conditional independence. There are several
ways to construct a final test statistic using
$\bar{R}\in{\mathbb{R}}^{d_{X}\times d_{Y}}$. The approach taken in GCM
involves taking the maximum absolute value of a version of $\bar{R}$ with each
entry divided by its estimated standard deviation. This, however, does not
generalise easily to the functional data setting we are interested in here; we
now outline an alternative that can be extended to handle functional data.
To motivate our approach, consider multiplying $\bar{R}$ by $\sqrt{n}$:
$\begin{split}\sqrt{n}\bar{R}&=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\hat{\varepsilon}_{i}\hat{\xi}_{i}^{\top}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(f(z_{i})-\hat{f}(z_{i})+\varepsilon_{i})(g(z_{i})-\hat{g}(z_{i})+\xi_{i})^{\top}\\\
&=\frac{1}{\sqrt{n}}\underbrace{\sum_{i=1}^{n}\varepsilon_{i}\xi_{i}^{\top}}_{U_{n}}+\underbrace{\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(f(z_{i})-\hat{f}(z_{i}))(g(z_{i})-\hat{g}(z_{i}))^{\top}}_{a_{n}}\\\
&\qquad+\underbrace{\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(f(z_{i})-\hat{f}(z_{i}))\xi_{i}^{\top}}_{b_{n}}+\underbrace{\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\varepsilon_{i}(g(z_{i})-\hat{g}(z_{i}))^{\top}}_{c_{n}}.\end{split}$
(2)
Observe that $U_{n}$ is a sum of i.i.d. terms and so the multivariate central
limit theorem dictates that $U_{n}/\sqrt{n}$ converges to a $d_{X}\times
d_{Y}$-dimensional Gaussian distribution. Applying the Frobenius norm
$\lVert\cdot\rVert_{F}$ to the $a_{n}$ term, we get by submultiplicativity and
the Cauchy–Schwarz inequality,
$\displaystyle\lVert a_{n}\rVert_{F}$
$\displaystyle\leq\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert_{2}\lVert g(z_{i})-\hat{g}(z_{i})\rVert_{2}$
$\displaystyle\leq\sqrt{n}\bigg{(}\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert_{2}^{2}\bigg{)}^{1/2}\bigg{(}\frac{1}{n}\sum_{i=1}^{n}\lVert
g(z_{i})-\hat{g}(z_{i})\rVert_{2}^{2}\bigg{)}^{1/2},$ (3)
where $\lVert\cdot\rVert_{2}$ denotes the Euclidean norm. The right-hand side
here is a product of in-sample mean squared prediction errors for each of the
regressions performed. Under the null of conditional independence, each term
of $b_{n}$ and $c_{n}$ is mean zero conditional on $(X^{(n)},Z^{(n)})$ and
$(Y^{(n)},Z^{(n)})$, respectively. Thus, so long as both of the regression
functions are estimated at a sufficiently fast rate, we can expect
$a_{n},b_{n},c_{n}$ to be small so the distribution of $\sqrt{n}\bar{R}$ can
be well-approximated by the Gaussian limiting distribution of
$U_{n}/\sqrt{n}$. As in the univariate setting, it is crucially the product of
the prediction errors in (3) that is required to be small, so each root mean
squared prediction error term can decay at relatively slow $o(n^{-1/4})$
rates.
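The decomposition (2) is an exact algebraic identity for any estimates $\hat{f}$ and $\hat{g}$, which can be checked numerically; the data-generating model and the deliberately imperfect regression estimates below are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_x, d_y = 200, 2, 3
z = rng.normal(size=n)
f = np.outer(z, rng.normal(size=d_x))         # true f(z_i), rows of (n, d_x)
g = np.outer(z, rng.normal(size=d_y))         # true g(z_i), rows of (n, d_y)
eps = rng.normal(size=(n, d_x))
xi = rng.normal(size=(n, d_y))
x, y = f + eps, g + xi

# Any estimates make the identity exact; take deliberately imperfect ones.
f_hat, g_hat = 0.9 * f, 1.1 * g
eps_hat, xi_hat = x - f_hat, y - g_hat

lhs = eps_hat.T @ xi_hat / np.sqrt(n)         # sqrt(n) * R_bar as a matrix
U = eps.T @ xi                                # U_n
a = (f - f_hat).T @ (g - g_hat) / np.sqrt(n)  # a_n
b = (f - f_hat).T @ xi / np.sqrt(n)           # b_n
c = eps.T @ (g - g_hat) / np.sqrt(n)          # c_n
rhs = U / np.sqrt(n) + a + b + c
```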
Unlike the univariate setting however, $\sqrt{n}\bar{R}$ is now a matrix and
hence we need to choose some sensible aggregator function
$t:\mathbb{R}^{d_{X}\times d_{Y}}\to\mathbb{R}$ such that we can threshold
$t(\sqrt{n}\bar{R})$ to yield a $p$-value. One option is as follows; we take a
different approach as the basis of the GHCM for reasons which will become
clear in the sequel. If we vectorise $\bar{R}$, i.e., view the matrix as a
$d_{X}d_{Y}$-dimensional vector, then under the assumptions required for the
above heuristic arguments to formally hold, $\sqrt{n}\textrm{Vec}(\bar{R})$
converges to a Gaussian with mean zero and some covariance matrix
$C\in{\mathbb{R}}^{d_{X}d_{Y}\times d_{X}d_{Y}}$ if
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$. Provided $C$ is invertible,
$\sqrt{n}C^{-1/2}\textrm{Vec}(\bar{R})$ therefore converges to a Gaussian with identity
covariance under the null and hence $\lVert
C^{-1/2}\sqrt{n}\textrm{Vec}(\bar{R})\rVert_{2}^{2}$ converges to a $\chi^{2}$-distribution
with $d_{X}d_{Y}$ degrees of freedom. Replacing $C$ with an estimate $\hat{C}$
then yields a test statistic from which we may derive a $p$-value.
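A sketch of this vectorise-and-whiten construction follows; for simplicity we take the residuals as directly observed independent noise (a null in which the regression step is already done), which is our simplification and not part of the method.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_x, d_y = 1000, 2, 2
# Placeholder residuals under the null (regression step omitted).
eps = rng.normal(size=(n, d_x))
xi = rng.normal(size=(n, d_y))

# Vectorised products R_i = Vec(eps_i xi_i^T) of length d_x * d_y.
R = np.einsum('ni,nj->nij', eps, xi).reshape(n, d_x * d_y)
R_bar = R.mean(axis=0)
C_hat = np.cov(R, rowvar=False)               # estimate of C
# Whitened squared norm: approximately chi^2 with d_x * d_y df under the null.
stat = n * R_bar @ np.linalg.solve(C_hat, R_bar)
```

Comparing `stat` with the $1-\alpha$ quantile of the $\chi^2_{d_Xd_Y}$ distribution gives the test.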
### 3.2 The GHCM
We now turn to the setting where $X$ and $Y$ take values in separable Hilbert
spaces $\mathcal{H}_{X}$ and $\mathcal{H}_{Y}$ respectively. These could for
example be $L^{2}([0,1],\mathbb{R})$, or ${\mathbb{R}}^{d_{X}}$ and
${\mathbb{R}}^{d_{Y}}$ respectively, but where $X$ and $Y$ are vectors of
function evaluations. The latter case, which we will henceforth refer to as
the finite-dimensional case, corresponds to how data would often be received
in practice with the observation vectors consisting of function evaluations on
fixed grids (which are not necessarily equally spaced). However, it is
important to recognise that the dimensions $d_{X}$ and $d_{Y}$ of the grids
may be arbitrarily large, and it is necessary for the methodology to
accommodate this; as we will see, the approach for the multivariate setting
described in the previous section does not satisfy this requirement whereas
our proposed GHCM will do so.
In some settings, our observed vectors of function evaluations will not be on
fixed grids, and the numbers of function evaluations may vary from observation
to observation. In Section 3.2.1 we set out a scheme to handle this case and
bring it within our framework here.
Similarly to the approach outlined in Section 3.1, we propose to first regress
each of $X^{(n)}$ and $Y^{(n)}$ onto $Z^{(n)}$ to give residuals
$\hat{\varepsilon}_{i}\in\mathcal{H}_{X}$, $\hat{\xi}_{i}\in\mathcal{H}_{Y}$
for $i=1,\ldots,n$. (In practice, these regressions could be performed by pfr
or pffr in the refund package [Goldsmith2011, Ivanescu2015] or boosting
[FDboost], for instance.) We centre the residuals, as these and other
functional regression methods do not always produce mean-centred residuals.
With these residuals we proceed as in the multivariate case outlined above but
replacing matrix outer products in the multivariate setting with outer
products in the Hilbertian sense, that is we define for $i=1,\ldots,n$,
$\displaystyle\mathscr{R}_{i}:=\hat{\varepsilon}_{i}\otimes\hat{\xi}_{i},\;\text{
and }\;\mathscr{T}_{n}:=\sqrt{n}\bar{\mathscr{R}}$ (4)
$\displaystyle\text{where
}\;\bar{\mathscr{R}}:=\frac{1}{n}\sum_{i=1}^{n}\mathscr{R}_{i}.$
We can show (see Theorem 2) that under the null, provided the analogous
prediction error terms in (3) decay sufficiently fast and additional
regularity conditions hold, $\mathscr{T}_{n}$ above converges uniformly to a
Gaussian distribution in the space of Hilbert–Schmidt operators. This comes as
a consequence of new results we prove on uniform convergence of Banachian
random variables. Moreover, the covariance operator of this limiting Gaussian
distribution can be estimated by the empirical covariance operator
$\hat{\mathscr{C}}:=\frac{1}{n-1}\sum_{i=1}^{n}(\mathscr{R}_{i}-\bar{\mathscr{R}})\otimes_{\textrm{HS}}(\mathscr{R}_{i}-\bar{\mathscr{R}})$
(5)
where $\otimes_{\textrm{HS}}$ denotes the outer product in the space of
Hilbert–Schmidt operators.
An analogous approach to that outlined above for the multivariate setting
would involve attempting to whiten this limiting distribution using the
square-root of the inverse of $\hat{\mathscr{C}}$. However, here we hit a
clear obstacle: even in the finite-dimensional setting, whenever
$d_{X}d_{Y}\geq n$, the inverse of $\hat{\mathscr{C}}$, or of $\hat{C}$ from the
previous section, cannot exist. Moreover, as indicated by bai1996effect, who
study the problem of testing whether a finite-dimensional Gaussian vector has
mean zero, even when the inverses do exist, the estimated inverse covariance
may not approximate its population level counterpart sufficiently well.
Instead, bai1996effect advocate using a test statistic based on the squared
$\ell_{2}$-norm of the Gaussian vector.
We take an analogous approach here, and use as our test statistic
$T_{n}:=\|\mathscr{T}_{n}\|_{\textrm{HS}}^{2}$ (6)
where $\lVert\cdot\rVert_{\textrm{HS}}$ denotes the Hilbert–Schmidt norm. A
further advantage of this test statistic is that it admits an alternative
representation given by
$T_{n}=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}\langle\hat{\varepsilon}_{i},\hat{\varepsilon}_{j}\rangle\langle\hat{\xi}_{i},\hat{\xi}_{j}\rangle;$
(7)
see Section C.1 for a derivation. Only inner products between residuals need
to be computed, and so in the finite-dimensional case with the standard inner
product, the computational burden is only $O(\max(d_{X},d_{Y})n^{2})$.
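The equality between (6) and the representation (7) is straightforward to verify numerically; the random "residuals" below are placeholders for the output of the two regressions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d_x, d_y = 50, 4, 6
eps = rng.normal(size=(n, d_x))               # placeholder residuals
xi = rng.normal(size=(n, d_y))

# Direct computation of (6): T_n = || sqrt(n) * mean of eps_i (x) xi_i ||^2.
T_op = np.sqrt(n) * (eps.T @ xi) / n          # matrix of the operator
T_direct = np.sum(T_op ** 2)                  # squared Hilbert-Schmidt norm

# Representation (7): only inner products between residuals are needed.
T_gram = ((eps @ eps.T) * (xi @ xi.T)).sum() / n
```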
As $\mathscr{T}_{n}$ has an asymptotic Gaussian distribution under the null
with an estimable covariance operator, we can deduce the asymptotic null
distribution of $T_{n}$ as a function of $\mathscr{T}_{n}$. This leads to the
$\alpha$-level test function $\psi_{n}$ given by
$\psi_{n}:=\mathbbm{1}_{\\{T_{n}\geq q_{\alpha}\\}}$ (8)
where $q_{\alpha}$ is the $1-\alpha$ quantile of a weighted sum
$\sum_{k=1}^{d}\lambda_{k}W_{k}$
of independent $\chi^{2}_{1}$ distributions $(W_{k})_{k=1}^{d}$ with weights
given by the $d$ non-zero eigenvalues $(\lambda_{k})_{k=1}^{d}$ of
$\hat{\mathscr{C}}$. Note that $d\leq\min(n-1,d_{X}d_{Y})$.
These eigenvalues may also be derived from inner products of the residuals:
they are equal to the eigenvalues of the $n\times n$ matrix
$\frac{1}{n-1}(\Gamma-J\Gamma-\Gamma J+J\Gamma J)$
where $J\in{\mathbb{R}}^{n\times n}$ is a matrix with all entries equal to
$1/n$, and $\Gamma\in{\mathbb{R}}^{n\times n}$ has $ij$th entry given by
$\Gamma_{ij}:=\langle\hat{\varepsilon}_{i},\hat{\varepsilon}_{j}\rangle\langle\hat{\xi}_{i},\hat{\xi}_{j}\rangle;$
(9)
see Section C.1 for a derivation. Thus, in the finite-dimensional case, the
computation of the eigenvalues requires $O(n^{2}\max(d_{X},d_{Y},n))$
operations. In typical usage therefore, the cost for computing the test
statistic given the residuals is dominated by the cost of performing the
initial regressions, particularly those corresponding to function-on-function
regression. Note that there are several schemes for approximating $q_{\alpha}$
[Imhof1961, Liu2009, Farebrother1984]; we use the approach of Imhof1961 as
implemented in the CompQuadForm package in R [CompQuadForm] in all of our
numerical experiments. We summarise the above construction of our test
function for the finite-dimensional case with the standard inner product in
Algorithm 1.
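The equivalence between the eigenvalues of $\hat{\mathscr{C}}$ and those of the centred $n\times n$ Gram matrix can be checked directly in the finite-dimensional case (random placeholder residuals again): the non-zero eigenvalues of the doubly centred Gram matrix coincide with those of the empirical covariance of the vectorised operators $\mathscr{R}_{i}$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d_x, d_y = 20, 3, 2
eps = rng.normal(size=(n, d_x))               # placeholder residuals
xi = rng.normal(size=(n, d_y))

Gamma = (eps @ eps.T) * (xi @ xi.T)           # Gram matrix (9)
J = np.full((n, n), 1.0 / n)
A = (Gamma - J @ Gamma - Gamma @ J + J @ Gamma @ J) / (n - 1)

# Covariance of the vectorised R_i (finite-dimensional version of (5)).
R = np.einsum('ni,nj->nij', eps, xi).reshape(n, d_x * d_y)
C_hat = np.cov(R, rowvar=False)               # divisor n - 1

# Leading eigenvalues of A match the eigenvalues of C_hat.
ev_A = np.sort(np.linalg.eigvalsh(A))[::-1][:d_x * d_y]
ev_C = np.sort(np.linalg.eigvalsh(C_hat))[::-1]
```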
In principle, different inner products may be chosen, to yield different test
functions. However, the theoretical properties of the test function rely on
the prediction errors of the regressions, measured in terms of the norm
corresponding to the inner product used, being small. In the common case where
the observed data are finite vectors of function evaluations, i.e., for each
$i=1,\ldots,n$, $x_{ik}=W_{X,i}(k/d_{X})$ for a function $W_{X,i}\in
L_{2}([0,1],{\mathbb{R}})$, and similarly for $y_{i}$, our default
recommendation is to use the standard inner product. The residuals,
$\hat{\varepsilon}_{i}\in{\mathbb{R}}^{d_{X}}$ and
$\hat{\xi}_{i}\in{\mathbb{R}}^{d_{Y}}$, would then similarly correspond to
underlying functional residuals via
$\hat{\varepsilon}_{ik}=W_{\hat{\varepsilon},i}(k/d_{X})$ for
$W_{\hat{\varepsilon},i}\in L_{2}([0,1],{\mathbb{R}})$, and similarly for
$\hat{\xi}_{i}$. We may compare the test function computed based on the
computed residuals $\hat{\varepsilon}_{i}$ and $\hat{\xi}_{i}$ with that which
would be obtained when replacing these with the underlying functions
$W_{\hat{\varepsilon},i}$ and $W_{\hat{\xi},i}$. As the test function depends
entirely on inner products between residuals, it suffices to compare
$\hat{\varepsilon}_{i}^{\top}\hat{\varepsilon}_{j}=\sum_{k=1}^{d_{X}}W_{\hat{\varepsilon},i}(k/d_{X})W_{\hat{\varepsilon},j}(k/d_{X})\qquad\text{and}\qquad\int_{0}^{1}W_{\hat{\varepsilon},i}(t)W_{\hat{\varepsilon},j}(t)\,\mathrm{d}t.$
(10)
We see that the LHS is $d_{X}$ times a Riemann sum approximation to the
integral on the RHS. The $p$-value computed is invariant to multiplicative
scaling of the test statistic, and so in the so-called densely observed case
where $d_{X}$ is large, the $p$-value from the finite-dimensional setting
would be a close approximation to that which would be obtained with the true
underlying functions.
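For instance, with the illustrative choices $W_{\hat{\varepsilon},i}(t)=\sin(2\pi t)$ and $W_{\hat{\varepsilon},j}(t)=t^{2}$, the rescaled discrete inner product matches the integral, here $-1/(2\pi)$, closely:

```python
import numpy as np

d = 1000
grid = np.arange(1, d + 1) / d                # evaluation points k/d
w_i = np.sin(2 * np.pi * grid)                # W_{eps,i} evaluated on grid
w_j = grid ** 2                               # W_{eps,j} evaluated on grid
lhs = w_i @ w_j                               # discrete inner product in (10)
riemann = lhs / d                             # rescaled Riemann sum
exact = -1.0 / (2 * np.pi)                    # int_0^1 t^2 sin(2 pi t) dt
```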
Other numerical integration schemes could be used to make the approximation
even more precise. However, the theory we present in Section 4 that guarantees
uniform asymptotic level control and power over certain classes of nulls and
alternatives applies directly to the finite-dimensional or infinite-
dimensional settings, and so there is no requirement that the approximation
error above is small. In particular, there is no strict requirement that the
residuals computed correspond to function evaluations on equally spaced grids.
However, in that case $\hat{\varepsilon}_{i}^{\top}\hat{\varepsilon}_{j}$ will
not necessarily approximate a scaled version of the RHS of (10), and an inner
product that maintains this approximation may be more desirable from a power
perspective.
1 input: $X^{(n)}\in{\mathbb{R}}^{n\times d_{X}}$, $Y^{(n)}\in{\mathbb{R}}^{n\times d_{Y}}$, $Z^{(n)}\in{\mathbb{R}}^{n\times d_{Z}}$ ;
2 options: regression methods for each of the regressions ;
3 begin
4 regress $X^{(n)}$ on $Z^{(n)}$ producing residuals $\hat{\varepsilon}_{i}\in\mathbb{R}^{d_{X}}$ for $i=1,\dots,n$ ;
5 regress $Y^{(n)}$ on $Z^{(n)}$ producing residuals $\hat{\xi}_{i}\in\mathbb{R}^{d_{Y}}$ for $i=1,\dots,n$ ;
6 construct $\Gamma\in{\mathbb{R}}^{n\times n}$ with entries $\Gamma_{ij}\leftarrow\hat{\varepsilon}_{i}^{\top}\hat{\varepsilon}_{j}\hat{\xi}_{i}^{\top}\hat{\xi}_{j}$ (or more generally via (9)) ;
7 compute the test statistic $T_{n}\leftarrow\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}\Gamma_{ij}$ ;
8 set $A\leftarrow\frac{1}{n-1}\left(\Gamma-J\Gamma-\Gamma J+J\Gamma J\right)$ where $J\in{\mathbb{R}}^{n\times n}$ has all entries equal to $1/n$ ;
9 compute the non-zero eigenvalues $\lambda_{1},\dots,\lambda_{d}$ of $A$ (there are at most $n-1$) ;
10 compute by numerical integration the $p$-value $p\leftarrow\mathbb{P}\left(\sum_{k=1}^{d}\lambda_{k}\zeta_{k}^{2}>T_{n}\right)$, where $\zeta_{1},\dots,\zeta_{d}$ are independent standard Gaussian variables ;
end
output: $p$-value $p$ ;
Algorithm 1 Generalised Hilbertian Covariance Measure (GHCM)
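A compact implementation of Algorithm 1 might look as follows. Two substitutions are ours: the weighted chi-squared tail probability of line 10 is approximated by Monte Carlo rather than Imhof-type numerical integration, and the simulated data and OLS regressions are illustrative choices.

```python
import numpy as np

def ghcm_pvalue(eps, xi, n_mc=20000, seed=0):
    """GHCM p-value from residual matrices eps (n x d_X) and xi (n x d_Y).

    The weighted chi-squared tail probability (line 10 of Algorithm 1) is
    approximated here by Monte Carlo rather than Imhof's method.
    """
    n = eps.shape[0]
    eps = eps - eps.mean(axis=0)              # centre the residuals
    xi = xi - xi.mean(axis=0)
    Gamma = (eps @ eps.T) * (xi @ xi.T)       # line 6
    T_n = Gamma.sum() / n                     # line 7
    J = np.full((n, n), 1.0 / n)
    A = (Gamma - J @ Gamma - Gamma @ J + J @ Gamma @ J) / (n - 1)  # line 8
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-10 * lam.max()]        # non-zero eigenvalues, line 9
    rng = np.random.default_rng(seed)
    chi = rng.chisquare(1, size=(n_mc, lam.size))
    return float(np.mean(chi @ lam > T_n))    # Monte Carlo version of line 10

# Null example: X and Y depend on Z but are conditionally independent.
rng = np.random.default_rng(5)
n = 200
z = rng.normal(size=(n, 1))
x = z + rng.normal(size=(n, 3))               # z broadcasts across columns
y = z + rng.normal(size=(n, 2))
eps_hat = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
xi_hat = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
p = ghcm_pvalue(eps_hat, xi_hat)
```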
In the following section we explain how, when the residuals
$\hat{\varepsilon}_{i}$ and $\hat{\xi}_{i}$ correspond to function evaluations
on different grids for each $i$, we can preprocess these to obtain residuals
corresponding to fixed grids, which may then be fed into our algorithm.
An R-package ghcm [ghcm] implementing the methodology is available on CRAN.
#### 3.2.1 Data observed on irregularly spaced grids of varying lengths
We now consider the case where
$\hat{\varepsilon}_{i}\in{\mathbb{R}}^{d_{X,i}}$ with its $k$th component
given by $\hat{\varepsilon}_{ik}=W_{\hat{\varepsilon},i}(t^{X}_{ik})$ for
$t^{X}_{ik}\in[0,1]$, and similarly for $\hat{\xi}_{i}$. Such residuals would
typically be output by regression methods when supplied with functional data
$x_{i}\in{\mathbb{R}}^{d_{X,i}}$ and $y_{i}\in{\mathbb{R}}^{d_{Y,i}}$
corresponding to function evaluations on grids $(t^{X}_{ik})_{k=1}^{d_{X,i}}$
and $(t^{Y}_{ik})_{k=1}^{d_{Y,i}}$ respectively.
In order to apply our GHCM methodology, we need to represent these residual
vectors by vectors of equal lengths corresponding to fixed grids. Our approach
is to construct for each $i$, natural cubic interpolating splines
$\hat{W}_{\hat{\varepsilon},i}$ and $\hat{W}_{\hat{\xi},i}$ corresponding to
$\hat{\varepsilon}_{i}$ and $\hat{\xi}_{i}$ respectively. We may compute the
inner product between these functions in $L_{2}([0,1],{\mathbb{R}})$ exactly
and efficiently as it is the integral of a piecewise polynomial with the
degree in each piece at most $6$. This gives us the entries of the matrix
$\Gamma$ (9) which we may then use in lines 7 and following in Algorithm 1.
Furthermore, Theorems 3 and 4 apply equally well to the setting considered
here provided the residuals are understood as the interpolating splines
described above, and the fitted regression functions are defined accordingly
as the difference between the observed functional responses these functional
residuals.
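A sketch of this preprocessing using scipy's natural cubic splines; the sine and cosine "residuals" are placeholders, and for brevity we approximate the $L^{2}$ inner product on a dense uniform grid rather than by the exact piecewise-polynomial quadrature described above.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(6)

# Two "residuals" observed on different irregular grids covering [0, 1].
t1 = np.sort(rng.uniform(0, 1, size=15))
t1[0], t1[-1] = 0.0, 1.0
t2 = np.sort(rng.uniform(0, 1, size=23))
t2[0], t2[-1] = 0.0, 1.0
s1 = CubicSpline(t1, np.sin(2 * np.pi * t1), bc_type='natural')
s2 = CubicSpline(t2, np.cos(2 * np.pi * t2), bc_type='natural')

# L^2([0,1]) inner product of the interpolating splines; the exact value
# for the underlying functions sin and cos is 0.
tt = np.linspace(0, 1, 5001)
inner = np.mean(s1(tt) * s2(tt))              # approx. int_0^1 s1(t)s2(t) dt
```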
## 4 Theoretical properties of the GHCM
In this section, we provide uniform level control guarantees for the GHCM, and
uniform power guarantees for a version incorporating sample-splitting; note
that we do not recommend the use of the latter in practice but consider it a
proxy for the GHCM that is more amenable to theoretical analysis in non-null
settings. Before presenting these results, we explain the importance of
uniform results in this context, and set out some notation relating to uniform
convergence.
### 4.1 Background on uniform convergence
In Section 2 we saw that even when $\mathcal{P}$ consists of Gaussian
distributions over
$\mathcal{H}_{X}\times{\mathbb{R}}^{d_{Y}}\times\mathcal{H}_{Z}$, we cannot
ensure that our test has both the desired size $\alpha$ over $\mathcal{P}_{0}$
and also non-trivial power properties against alternative distributions in
$\mathcal{Q}$. We also have the following related result.
###### Proposition 1.
Let $\mathcal{H}_{Z}$ be a separable Hilbert space with orthonormal basis
$(e_{k})_{k\in\mathbb{N}}$. Let $\mathcal{P}$ be the family of Gaussian
distributions for
$(X,Y,Z)\in{\mathbb{R}}\times{\mathbb{R}}\times\mathcal{H}_{Z}$ with injective
covariance operator and where
$(X,Y)\mbox{${}\perp\mkern-11.0mu\perp{}$}(Z_{r+1},Z_{r+2},\ldots)\,|\,Z_{1},\ldots,Z_{r}$
for some $r\in\mathbb{N}$ and $Z_{k}:=\langle e_{k},Z\rangle$ for all
$k\in\mathbb{N}$. Let $Q\in\mathcal{Q}$ and recall the definition of
$\mathcal{P}_{0}^{Q}$ from Section 2. Then, for any test $\psi_{n}$,
${\mathbb{P}}_{Q}(\psi_{n}=1)\leq\sup_{P\in\mathcal{P}_{0}^{Q}}{\mathbb{P}}_{P}(\psi_{n}=1).$
In other words, even if we know a basis $(e_{k})_{k\in\mathbb{N}}$ such that
in particular the conditional expectations ${\mathbb{E}}(X\,|\,Z)$ and
${\mathbb{E}}(Y\,|\,Z)$ are sparse in that they depend only on finitely many
components $Z_{1},\ldots,Z_{r}$ (with $r\in\mathbb{N}$ unknown), and the
marginal distribution of $Z$ is known exactly, there is still no non-trivial
test of conditional independence.
In this specialised setting, it is however possible to give a test of
conditional independence that will, for each _fixed_ null hypothesis
$P\in\mathcal{P}_{0}$, yield exact size control and power against all
alternatives $\mathcal{Q}$ for $n$ sufficiently large. These properties are
for example satisfied by the nominal $\alpha$-level $t$-test
$\psi_{n}^{\textrm{OLS}}$ for $Y$ in a linear model of $X$ on
$Y,Z_{1},\ldots,Z_{a(n)}$ and an intercept term, for some sequence $a(n)<n-1$
with $a(n)\to\infty$ and $n-a(n)\to\infty$ as $n\to\infty$. Indeed,
$\sup_{P\in\mathcal{P}_{0}}\lim_{n\to\infty}{\mathbb{P}}_{P}(\psi_{n}^{\textrm{OLS}}=1)=\alpha\qquad\text{
and
}\qquad\inf_{Q\in\mathcal{Q}}\lim_{n\to\infty}{\mathbb{P}}_{Q}(\psi_{n}^{\textrm{OLS}}=1)=1;$
(11)
see Section C.2 in the supplementary material for a derivation. This
illustrates the difference between pointwise asymptotic level control in the
left-hand side of (11), and uniform asymptotic level control given by
interchanging the limit and the supremum.
Our analysis instead focuses on proving that the GHCM asymptotically maintains
its level uniformly over a subset of the conditional independence null. In
order to state our results we first introduce some definitions and notation to
do with uniform stochastic convergence. Throughout the remainder of this
section we tacitly assume the existence of a measurable space
$(\Omega,\mathcal{F})$ whereupon all random quantities are defined. The
measurable space is equipped with a family of probability measures
$(\mathbb{P}_{P})_{P\in\mathcal{P}}$ such that the distribution of $(X,Y,Z)$
under $\mathbb{P}_{P}$ is $P$. For a subset $\mathcal{A}\subseteq\mathcal{P}$,
we say that a sequence of random variables $W_{n}$ _converges uniformly in
distribution to $W$ over $\mathcal{A}$_ and write
$W_{n}\underset{\mathcal{A}}{\overset{\mathcal{D}}{\rightrightarrows}}W\qquad\text{if}\;\;\;\lim_{n\to\infty}\sup_{P\in\mathcal{A}}d_{{\textrm{BL}}}(W_{n},W)=0,$
where $d_{\textrm{BL}}$ denotes the bounded Lipschitz metric. We say _$W_{n}$
converges uniformly in probability to $W$ over $\mathcal{A}$_ and write
$W_{n}\underset{\mathcal{A}}{\overset{P}{\rightrightarrows}}W\qquad\text{if
for any
}\varepsilon>0,\;\;\;\lim_{n\to\infty}\sup_{P\in\mathcal{A}}\mathbb{P}_{P}(\lVert
W_{n}-W\rVert\geq\varepsilon)=0.$
We sometimes omit the subscript $\mathcal{A}$ when it is clear from the
context. A full treatment of uniform stochastic convergence in a general
setting is given in Section B of the supplementary material. Throughout this
section we emphasise the dependence of many of the quantities in Section 3.1
on the distribution of $(X,Y,Z)$ with a subscript $P$, e.g. $f_{P}$,
$\varepsilon_{P}$ etc.
In Sections 4.2 and 4.3 we present general results on the size and power of
the GHCM. We take $\mathcal{P}$ to be the set of all distributions over
$\mathcal{H}_{X}\times\mathcal{H}_{Y}\times\mathcal{Z}$, and $\mathcal{P}_{0}$
to be the corresponding conditional independence null. We however show
properties of the GHCM under smaller sets of distributions
$\tilde{\mathcal{P}}\subset\mathcal{P}$ with corresponding null distributions
$\tilde{\mathcal{P}}_{0}\subset\mathcal{P}_{0}$, where in particular certain
conditions on the quality of the regression procedures on which the test is
based are met. In Section 4.4 we consider the special case where the
regressions of each of $X$ and $Y$ on $Z$ are given by functional linear
models and show that Tikhonov regularised regression can satisfy these
conditions. We note that throughout, the dimensions $d_{X}$ and $d_{Y}$ may be
finite or infinite.
### 4.2 Size of the test
In order to state our result on the size of the GHCM, we introduce the
following quantities. Let
$u_{P}(z):=\mathbb{E}_{P}\left(\lVert\varepsilon_{P}\rVert^{2}\,|\,Z=z\right),\quad
v_{P}(z):=\mathbb{E}_{P}\left(\lVert\xi_{P}\rVert^{2}\,|\,Z=z\right).$
We further define the in-sample unweighted and weighted mean squared
prediction errors of the regressions as follows:
$\displaystyle M_{n,P}^{f}$
$\displaystyle:=\frac{1}{n}\sum_{i=1}^{n}\left\lVert
f_{P}(z_{i})-\hat{f}^{(n)}(z_{i})\right\rVert^{2},$ $\displaystyle
M_{n,P}^{g}$ $\displaystyle:=\frac{1}{n}\sum_{i=1}^{n}\left\lVert
g_{P}(z_{i})-\hat{g}^{(n)}(z_{i})\right\rVert^{2},$ (12)
$\displaystyle\tilde{M}_{n,P}^{f}$
$\displaystyle:=\frac{1}{n}\sum_{i=1}^{n}\left\lVert
f_{P}(z_{i})-\hat{f}^{(n)}(z_{i})\right\rVert^{2}v_{P}(z_{i}),$
$\displaystyle\tilde{M}_{n,P}^{g}$
$\displaystyle:=\frac{1}{n}\sum_{i=1}^{n}\left\lVert
g_{P}(z_{i})-\hat{g}^{(n)}(z_{i})\right\rVert^{2}u_{P}(z_{i}).$ (13)
The result below shows that on a subset $\tilde{\mathcal{P}}_{0}$ of the null
distinguished primarily by the product of the prediction errors in (12) being
small, the operator-valued statistic $\mathscr{T}_{n}$ converges in
distribution uniformly to a mean zero Gaussian whose covariance can be
estimated consistently. We remark that prediction error quantities in (12) and
(13) are “in-sample” prediction errors, only reflecting the quality of
estimates of the conditional expectations $f$ and $g$ at the observed values
$z_{1},\ldots,z_{n}$.
###### Theorem 2.
Let $\tilde{\mathcal{P}}_{0}\subseteq\mathcal{P}_{0}$ be such that uniformly
over $\tilde{\mathcal{P}}_{0}$,
1. (i)
$nM_{n,P}^{f}M_{n,P}^{g}\overset{P}{\rightrightarrows}0$,
2. (ii)
$\tilde{M}_{n,P}^{f}\overset{P}{\rightrightarrows}0$,
$\tilde{M}_{n,P}^{g}\overset{P}{\rightrightarrows}0$,
3. (iii)
$\inf_{P\in\tilde{\mathcal{P}}_{0}}\mathbb{E}_{P}\left(\lVert\varepsilon_{P}\rVert^{2}\lVert\xi_{P}\rVert^{2}\right)>0$
and
$\sup_{P\in\tilde{\mathcal{P}}_{0}}\mathbb{E}_{P}\left(\lVert\varepsilon_{P}\rVert^{2+\eta}\lVert\xi_{P}\rVert^{2+\eta}\right)<\infty$
for some $\eta>0$, and
4. (iv)
for some orthonormal bases $(e_{X,i})_{i=1}^{d_{X}}$ and
$(e_{Y,j})_{j=1}^{d_{Y}}$ of $\mathcal{H}_{X}$ and $\mathcal{H}_{Y}$,
respectively, writing $\varepsilon_{P,i}:=\langle
e_{X,i},\varepsilon_{P}\rangle$ and $\xi_{P,j}:=\langle
e_{Y,j},\xi_{P}\rangle$, we have
$\lim_{K\to\infty}\sup_{P\in\tilde{\mathcal{P}}_{0}}\sum_{(i,j):i+j\geq
K}{\mathbb{E}}_{P}(\varepsilon_{P,i}^{2}\xi_{P,j}^{2})=0,$
where we interpret an empty sum as $0$.
Then uniformly over $\tilde{\mathcal{P}}_{0}$ we have
$\mathscr{T}_{n}{\overset{\mathcal{D}}{\rightrightarrows}}\mathcal{N}(0,\mathscr{C}_{P})\quad\text{and}\quad\lVert\hat{\mathscr{C}}-\mathscr{C}_{P}\rVert_{\textrm{TR}}{\overset{P}{\rightrightarrows}}0,$
where
$\mathscr{C}_{P}:={\mathbb{E}}\\{(\varepsilon_{P}\otimes\xi_{P})\otimes_{\textrm{HS}}(\varepsilon_{P}\otimes\xi_{P})\\}.$
Condition (i) is the most important requirement, and says that the regression
methods must perform sufficiently well, uniformly on
$\tilde{\mathcal{P}}_{0}$. It is satisfied if
$\sqrt{n}M_{n,P}^{f},\,\sqrt{n}M_{n,P}^{g}\overset{P}{\rightrightarrows}0$,
and so allows for relatively slow $o(n^{-1/2})$ rates for the mean squared
prediction errors. Moreover, if one regression yields a faster rate, the other
can go to zero more slowly. These properties are shared with the regular
generalised covariance measure and more generally doubly robust procedures
popular in the literature on causal inference and semiparametric statistics
[robins1995semiparametric, scharfstein1999adjusting, chernozhukov2018double].
Condition (ii) is much milder, and if the conditional variances $u_{P}$ and
$v_{P}$ are bounded almost surely, it is satisfied when simply
$M_{n,P}^{f},\,M_{n,P}^{g}\overset{P}{\rightrightarrows}0$. We note that
importantly, the regression methods are not required to extrapolate well
beyond the observed data. We show in Section 4.4 that when the regression
models are functional linear models and ridge regression is used for the
functional regressions, (i) and (ii) hold under much weaker conditions than
are typically required for out-of-sample prediction error guarantees in the
literature.
Conditions (iii) and (iv) imply that the family
$\\{\varepsilon_{P}\otimes\xi_{P}:P\in\tilde{\mathcal{P}}_{0}\\}$ is uniformly
tight. Similar tightness conditions are required in chen1998central in the
context of functional central limit theorems. Note that if $d_{X}$ and $d_{Y}$
are both finite, this condition is always satisfied.
The result below shows that the GHCM test $\psi_{n}$ (8) has type I error
control uniformly over $\tilde{\mathcal{P}}_{0}$ given in Theorem 2, provided
an additional assumption of non-degeneracy of the covariance operators is
satisfied.
###### Theorem 3.
Let $\tilde{\mathcal{P}}_{0}\subseteq\mathcal{P}_{0}$ satisfy the conditions
stated in Theorem 2, and in addition suppose
$\inf_{P\in\tilde{\mathcal{P}}_{0}}\lVert\mathscr{C}_{P}\rVert_{\textrm{op}}>0.$
(14)
Then for each $\alpha\in(0,1)$, the $\alpha$-level GHCM test $\psi_{n}$ (8)
satisfies
$\lim_{n\to\infty}\sup_{P\in\tilde{\mathcal{P}}_{0}}|\mathbb{P}_{P}(\psi_{n}=1)-\alpha|=0.$
(15)
### 4.3 Power of the test
We now study the power of the GHCM. It is not straightforward to analyse what
happens to the test statistic $T_{n}$ when the null hypothesis is false in the
setup we have considered so far. However, if we modify the test such that the
regression function estimates $\hat{f}$ and $\hat{g}$ are constructed using an
auxiliary dataset independent of the main data
$(x_{i},y_{i},z_{i})_{i=1}^{n}$, the behaviour of $T_{n}$ is more tractable.
Given a single sample, this could be achieved through sample splitting, and
cross-fitting [chernozhukov2018double] could be used to recover the loss in
efficiency from the split into smaller datasets. However, we do not recommend
such sample-splitting in practice here and view this as more of a technical
device that facilitates our theoretical analysis. As we require $\hat{f}$ and
$\hat{g}$ to satisfy (i) and (ii) of Theorem 2, these estimators would need to
perform well out of sample rather than just on the observed data, which is
typically a harder task.
Given that our test is based on an empirical version of
$\mathbb{E}(\mathrm{Cov}(X,Y\,|\,Z))={\mathbb{E}}(\varepsilon\otimes\xi)$, we
can only hope to have power against alternatives where this is non-zero. For
such alternatives however, we have positive power whenever the Hilbert–Schmidt
norm of the expected conditional covariance operator is at least $c/\sqrt{n}$
for a constant $c>0$, as the following result shows.
###### Theorem 4.
Consider a version of the GHCM test $\psi_{n}$ where $\hat{f}$ and $\hat{g}$
are constructed on independent auxiliary data. Let
$\tilde{\mathcal{P}}\subset\mathcal{P}$ be the set of distributions for
$(X,Y,Z)$ satisfying (i)-(iv) of Theorem 2 and (14) with $\tilde{\mathcal{P}}$
in place of $\tilde{\mathcal{P}}_{0}$. Then writing
$\mathscr{K}_{P}:=\mathbb{E}_{P}(\varepsilon_{P}\otimes\xi_{P})=\mathbb{E}_{P}(\mathrm{Cov}_{P}(X,Y\,|\,Z))$,
we have, uniformly over $\tilde{\mathcal{P}}$,
$\tilde{\mathscr{T}}_{n}:=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(\mathscr{R}_{i}-\mathscr{K}_{P})\overset{\mathcal{D}}{\rightrightarrows}\mathcal{N}(0,\mathscr{C}_{P})\qquad\text{and}\qquad\lVert\hat{\mathscr{C}}-\mathscr{C}_{P}\rVert_{\textrm{TR}}\overset{P}{\rightrightarrows}0.$
Furthermore, an $\alpha$-level GHCM test $\psi_{n}$ (constructed using
independent estimates $\hat{f}$ and $\hat{g}$) satisfies the following two
statements.
1. (i)
Redefining $\tilde{\mathcal{P}}_{0}=\tilde{\mathcal{P}}\cap\mathcal{P}_{0}$,
we have that (15) is satisfied, and so an $\alpha$-level GHCM test has size
converging to $\alpha$ uniformly over $\tilde{\mathcal{P}}_{0}$.
2. (ii)
For every $0<\alpha<\beta<1$ there exists $c>0$ and $N\in\mathbb{N}$ such that
for any $n\geq N$,
$\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(\psi_{n}=1)\geq\beta,$
where $\mathcal{Q}_{c,n}:=\\{P\in\tilde{\mathcal{P}}\ :\
\lVert\mathscr{K}_{P}\rVert_{\textrm{HS}}>c/\sqrt{n}\\}$.
In a setting where $X$, $Y$ and $Z$ are related by linear regression models,
we can write down $\|{\mathbb{E}}\mathrm{Cov}(X,Y\,|\,Z)\|_{\textrm{HS}}$ more
explicitly. Suppose $Z$, $\varepsilon$ and $\xi$ are independent random
variables in $L^{2}([0,1],\mathbb{R})$, with $X$ and $Y$ determined by
$\displaystyle X(t)$
$\displaystyle=\int\beta^{X}(s,t)Z(s)\,\mathrm{d}s+\varepsilon(t)$
$\displaystyle Y(t)$
$\displaystyle=\int\beta^{Y}(s,t)Z(s)\,\mathrm{d}s+\int\theta(s,t)X(s)\,\mathrm{d}s+\xi(t).$
Then ${\mathbb{E}}\mathrm{Cov}(X,Y\,|\,Z)$ is an integral operator with kernel
$\phi(s,t)=\int_{0}^{1}\theta(u,s)v(t,u)\,\mathrm{d}u,$
where $v(t,u)$ denotes the covariance function of $\varepsilon$. The
Hilbert–Schmidt norm $\|{\mathbb{E}}\mathrm{Cov}(X,Y\,|\,Z)\|_{\textrm{HS}}$
is then given by the $L^{2}([0,1]^{2},\mathbb{R})$-norm of $\phi$. We
investigate the empirical performance of the GHCM in such a setting in Section
5.1.2.
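On a discretisation grid the kernel composition reduces to a scaled matrix product. With the illustrative choices $\theta(u,s)=u+s$ and $v\equiv 1$ below (ours, purely for checking), $\phi(s,t)=s+1/2$ and the Hilbert–Schmidt norm is $\sqrt{13/12}$:

```python
import numpy as np

m = 400
u = (np.arange(m) + 0.5) / m                  # midpoint grid on [0, 1]
du = 1.0 / m

theta = u[:, None] + u[None, :]               # theta(u_a, s_b) = u_a + s_b
v = np.ones((m, m))                           # v(t_c, u_a) = 1

# phi(s, t) = int_0^1 theta(u, s) v(t, u) du as a scaled matrix product;
# for these choices phi(s, t) = s + 1/2.
Phi = theta.T @ v.T * du                      # Phi[b, c] ~ phi(s_b, t_c)
hs_norm = np.linalg.norm(Phi) * du            # L^2([0,1]^2) norm of phi
```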
### 4.4 GHCM using linear function-on-function ridge regression
Here we consider a special case of the general setup used in Sections 4.2 and
4.3 where we assume that $\mathcal{Z}$ is a Hilbert space $\mathcal{H}_{Z}$
and that, under the null of conditional independence, the Hilbertian $X$ and
$Y$ are related to Hilbertian $Z$ via linear models:
$\displaystyle X$ $\displaystyle=\mathscr{S}^{X}_{P}Z+\varepsilon_{P}$ (16)
$\displaystyle Y$ $\displaystyle=\mathscr{S}^{Y}_{P}Z+\xi_{P}.$ (17)
Here $\mathscr{S}^{X}_{P}$ is a Hilbert–Schmidt operator such that
$\mathscr{S}^{X}_{P}Z=f(Z):={\mathbb{E}}(X\,|\,Z)$, with analogous properties
holding for $\mathscr{S}^{Y}_{P}$, and it is assumed that ${\mathbb{E}}Z=0$.
If $X$, $Y$ and $Z$ are elements of $L^{2}([0,1],\mathbb{R})$, this is
equivalent to
$X(t)=\int_{0}^{1}\beta^{X}_{P}(s,t)Z(s)\,\mathrm{d}s+\varepsilon_{P}(t),$
(18)
where $\beta^{X}_{P}$ is a square-integrable function, and similarly for the
relationship between $Y$ and $Z$. Such functional response linear models have
been discussed by Ramsay2005, and studied by chiou2004functional,
yao2005functional, Crambes2013, for example. Benatia2017 propose a Tikhonov
regularised estimator analogous to ridge regression [hoerl1970ridge]; applied
to the regression model (16), this estimator takes the form
$\hat{\mathscr{S}}=\operatorname*{argmin}_{\mathscr{S}}\sum_{i=1}^{n}\lVert
x_{i}-\mathscr{S}(z_{i})\rVert^{2}+\gamma\lVert\mathscr{S}\rVert_{\textrm{HS}}^{2},$
(19)
where $\gamma>0$ is a tuning parameter.
We now consider a specific instance of the general GHCM framework using
regression estimates based on (19). Specifically, we form estimate
$\hat{\mathscr{S}}^{X}$ of $\mathscr{S}^{X}$ by solving the optimisation in
(19) with regularisation parameter
$\hat{\gamma}:=\operatorname*{argmin}_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{i=1}^{n}\min(\hat{\mu}_{i}/4,\gamma)+\frac{\gamma}{4}\right),$ (20)
where $\hat{\mu}_{1}\geq\hat{\mu}_{2}\geq\dots\geq\hat{\mu}_{n}\geq 0$ are the
ordered eigenvalues of the $n\times n$ matrix $K$ with $K_{ij}=\langle
z_{i},z_{j}\rangle/n$. We form the estimate $\hat{\mathscr{S}}^{Y}$ of
$\mathscr{S}^{Y}$ analogously, but with the $x_{i}$ replaced by $y_{i}$ in
(19). Note that in the degenerate case where $K=0$, so that $\hat{\gamma}$ does not exist,
we simply take $\hat{\mathscr{S}}^{X}$ and $\hat{\mathscr{S}}^{Y}$ to be the zero
operators, i.e., no regression is performed.
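To make the selection rule (20) concrete, the following sketch computes $\hat{\gamma}$ by a grid search over $\gamma$ given the eigenvalues of $K$. The toy data (discretised Brownian-motion paths standing in for the $z_i$) and the grid-search ranges are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy functional data: n discretised Brownian-motion paths standing in for
# the z_i; this choice is an assumption, not the paper's data.
n, m = 50, 100
Z = np.cumsum(rng.standard_normal((n, m)) / np.sqrt(m), axis=1)

# K_ij = <z_i, z_j>/n, with the L^2 inner product discretised as a mean.
K = Z @ Z.T / (n * m)
mu = np.clip(np.sort(np.linalg.eigvalsh(K))[::-1], 0.0, None)  # ordered eigenvalues

def objective(gamma):
    # The criterion in (20): (1/(gamma n)) sum_i min(mu_i/4, gamma) + gamma/4.
    return np.sum(np.minimum(mu / 4.0, gamma)) / (gamma * n) + gamma / 4.0

# Approximate the argmin over gamma > 0 with a log-spaced grid search.
grid = np.logspace(-6, 2, 2000)
gamma_hat = grid[np.argmin([objective(g) for g in grid])]
```

The objective blows up as $\gamma\to\infty$ and tends to a constant as $\gamma\downarrow 0$ when all eigenvalues are positive, so the grid search typically finds an interior minimum.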
The data-driven choice of $\hat{\gamma}$ above is motivated by an upper bound
on the in-sample MSPE of the estimators $\hat{\mathscr{S}}^{X}$ and
$\hat{\mathscr{S}}^{Y}$ (see Lemma 17 in the supplementary material) where we
have omitted some distribution-dependent factors of
$\lVert\mathscr{S}_{P}^{X}\rVert_{\textrm{HS}}^{2}$ or
$\lVert\mathscr{S}_{P}^{Y}\rVert^{2}_{\textrm{HS}}$ and a variance factor; a
similar strategy was used in an analysis of kernel ridge regression by Shah and Peters [2020]
which closely parallels ours here. This choice allows us to conduct the
theoretical analysis that we present below. In practice, other choices of
regularisation parameter, such as cross-validation-based approaches, may perform
even better, as could alternative methods that are not based on Tikhonov
regularisation.
In the following result, we take $\psi_{n}$ to be the $\alpha$-level GHCM test
(8) with estimated regression functions $\hat{f}$ and $\hat{g}$ yielding
fitted values given by
$\hat{f}(z_{i})=\hat{\mathscr{S}}^{X}z_{i}\qquad\text{and}\qquad\hat{g}(z_{i})=\hat{\mathscr{S}}^{Y}z_{i},\qquad\text{for
all }i=1,\ldots,n.$ (21)
Note that in the finite-dimensional setting where
$X^{(n)}\in{\mathbb{R}}^{n\times d_{X}}$ (which is also covered by the result
below), we have that the matrix of fitted values
$(\hat{f}(z_{i}))_{i=1}^{n}\in{\mathbb{R}}^{n\times d_{X}}$ is given by
$K(K+\gamma I)^{-1}X^{(n)},$
and similarly for the $Y^{(n)}$ regression.
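The kernel-form expression above can be checked against the primal ridge solution via the push-through identity $K(K+\gamma I)^{-1}X^{(n)} = Z(Z^{\top}Z/n+\gamma I)^{-1}Z^{\top}X^{(n)}/n$ for $K=ZZ^{\top}/n$. The toy dimensions in the sketch below are assumptions, and the scaling of $\gamma$ relative to (19) is a convention.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_z, d_x = 40, 3, 2                          # toy dimensions (assumed)
Z = rng.standard_normal((n, d_z))
X = Z @ rng.standard_normal((d_z, d_x)) + 0.1 * rng.standard_normal((n, d_x))

gamma = 0.1
K = Z @ Z.T / n                                 # K_ij = <z_i, z_j> / n

# Kernel form of the fitted values: K (K + gamma I)^{-1} X.
fitted_kernel = K @ np.linalg.solve(K + gamma * np.eye(n), X)

# Primal form: fitted values Z S^T with S^T = (Z^T Z/n + gamma I)^{-1} Z^T X / n,
# which agrees with the kernel form by the push-through identity.
S_T = np.linalg.solve(Z.T @ Z / n + gamma * np.eye(d_z), Z.T @ X / n)
fitted_primal = Z @ S_T

print(np.allclose(fitted_kernel, fitted_primal))  # True
```

The kernel form is the one used in practice, since it requires only inner products between the $z_i$ and so applies unchanged when $Z$ is infinite-dimensional.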
###### Theorem 5.
Let $\tilde{\mathcal{P}}_{0}\subset\mathcal{P}_{0}$ be such that (16) and (17)
are satisfied, and moreover (iii) and (iv) of Theorem 2 and (14) hold when
$\hat{f}$ and $\hat{g}$ are as in (21). Suppose further that
1. (i)
$\sup_{P\in\tilde{\mathcal{P}}_{0}}\max(\lVert\mathscr{S}^{X}_{P}\rVert_{\textrm{HS}},\lVert\mathscr{S}^{Y}_{P}\rVert_{\textrm{HS}})<\infty$,
2. (ii)
$\sup_{P\in\tilde{\mathcal{P}}_{0}}\max(u_{P}(Z),v_{P}(Z))<\infty$ almost
surely,
3. (iii)
$\sup_{P\in\tilde{\mathcal{P}}_{0}}\mathbb{E}\lVert Z\rVert^{2}<\infty$ and
$\lim_{\gamma\downarrow
0}\sup_{P\in\tilde{\mathcal{P}}_{0}}\sum_{k=1}^{\infty}\min(\mu_{k,P},\gamma)=0$
where $(\mu_{k,P})_{k\in\mathbb{N}}$ denote the ordered eigenvalues of the
covariance operator of $Z$ under $P$.
Then the $\alpha$-level GHCM test $\psi_{n}$ satisfies
$\lim_{n\to\infty}\sup_{P\in\tilde{\mathcal{P}}_{0}}|\mathbb{P}_{P}(\psi_{n}=1)-\alpha|=0.$
Condition (iii) is generally satisfied, by the dominated convergence theorem,
for any family $\tilde{\mathcal{P}}_{0}$ for which the sequences of eigenvalues
of the covariance operators are uniformly bounded above by a summable
sequence. As a simple example where all the remaining conditions of
Theorem 5 are satisfied, we may consider a family of distributions
$\tilde{\mathcal{P}}_{0}$ in which $Z$, $\varepsilon_{P}$ in (16) and $\xi_{P}$
in (17) are independent, and the latter two are Brownian motions with
variances $\sigma_{\varepsilon,P}^{2}$ and $\sigma_{\xi,P}^{2}$ respectively.
If the coefficient functions $\beta_{P}^{X}$ corresponding to $X$ in (18) are
in $L_{2}([0,1]^{2},\mathbb{R})$ with norms bounded above uniformly over
$P\in\tilde{\mathcal{P}}_{0}$, the analogous assumption holds for the coefficient
functions relating to $Y$, and $\sigma_{\varepsilon,P}^{2}$ and
$\sigma_{\xi,P}^{2}$ are bounded above and below uniformly, then
$\tilde{\mathcal{P}}_{0}$ satisfies all the requirements of Theorem 5.
The proof of Theorem 5 relies on Lemma 17 in Section C.5 of the supplementary
material, which gives a bound on the in-sample MSPE of ridge regression in
terms of the decay of the eigenvalues $\mu_{k,P}$, which may be of independent
interest. For example, we have that if these are dominated by an exponentially
decaying sequence, the in-sample MSPE is $o(\log n/n)$ as $n\to\infty$ (see
Corollary 2). This matches the out-of-sample MSPE bound obtained by
Crambes and Mas [2013] in the same setting as that described, but the out-of-sample
result additionally requires convexity and lower bounds on the decay of the
sequence of eigenvalues of the covariance operator, and stronger moment
assumptions on the norm of the predictor. Similarly, other related results
[e.g., Cai and Hall, 2006, Hall and Horowitz, 2007] require additional eigen-spacing
conditions in place of convexity, and upper and lower bounds on the decay of
the eigenvalues. Furthermore, while some of these bounds are uniform over
values of the linear coefficient operator for fixed distributions of the
predictors, our in-sample MSPE bound is uniform over both the coefficients and
distributions of the predictor. This illustrates how in-sample and out-of-
sample prediction are very different in the functional data setting, and
reliance on the former being small, as we have with the GHCM, is desirable due
to the weaker conditions needed to guarantee this.
## 5 Experiments
In this section we present the results of numerical experiments that
investigate the performance of our proposed GHCM methodology. We implement the
GHCM as described in Algorithm 1 with scalar-on-function and function-on-
function regressions performed using the pfr and pffr functions respectively
from the refund package [Goldsmith et al., 2020]. These are functional linear regression
methods which rely on fitting smoothers implemented in the mgcv package
[Wood, 2017]; we choose the tuning parameters for these smoothers (the dimension of
the basis expansions of the smooth terms) as per the standard guidance, increasing
them until a further increase does not decrease the deviance. In Section 5.3, we
study high-dimensional EEG data using the GHCM with regressions
performed using FDboost.
We note that, to the best of our knowledge, neither FDboost nor the regression
methods in refund come with prediction error bounds (such as the ones derived
in Section 4.4) that are required for obtaining formal guarantees for the
GHCM; nevertheless they are well-developed and well-used functional regression
methods and our aim here is to demonstrate empirically that they perform
suitably well in terms of prediction such that when used with the GHCM, type I
error is maintained across a variety of settings. In Section D of the
supplementary material, we include additional simulations that consider, among
others, settings with heavy-tailed errors, test the GHCM with FDboost in
further settings, and examine the local power of the GHCM.
### 5.1 Size and power simulation
In this section we examine the size and power properties of the GHCM when
testing the conditional independence
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$. We take $X,Z\in
L^{2}([0,1],\mathbb{R})$, and first consider the setting where $Y$ is scalar.
In Section 5.1.2 we present experiments for the case where $Y\in
L^{2}([0,1],\mathbb{R})$, so all variables are functional. All simulated
functional random variables are sampled on an equidistant grid of $[0,1]$ with
$100$ grid points.
#### 5.1.1 Scalar $Y$, functional $X$ and $Z$
Here we consider the setup where $Z$ is standard Brownian motion and $X$ and
$Y$ are related to $Z$ through the functional linear models
$\displaystyle X(t)$
$\displaystyle=\int_{0}^{1}\beta_{a}(s,t)Z(s)\,\mathrm{d}s+N_{X}(t),$ (22)
$\displaystyle Y$
$\displaystyle=\int_{0}^{1}\alpha_{a}(t)Z(t)\,\mathrm{d}t+N_{Y}.$ (23)
The variables $N_{X},N_{Y}$ and $Z$ are independent, with $N_{X}$ a Brownian
motion with variance $\sigma_{X}^{2}$ and $N_{Y}\sim\mathcal{N}(0,1)$, so
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$. The nonlinear coefficient
functions $\beta_{a}$ and $\alpha_{a}$ are given by
$\displaystyle\beta_{a}(s,t)=a\exp(-(st)^{2}/2)\sin(ast),\qquad\alpha_{a}(t)=\int_{0}^{1}\beta_{a}(s,t)\,\mathrm{d}s.$
(24)
We vary the parameters $\sigma_{X}\in\\{0.1,0.25,0.5,1\\}$ and
$a\in\\{2,6,12\\}$. We generate $n$ i.i.d. observations from each of the
$4\times 3=12$ models given by (22), (23), for sample sizes
$n\in\\{100,250,500,1000\\}$. Increasing $a$ or decreasing $\sigma_{X}$
increases the difficulty of the testing problem: for large $a$, $\beta_{a}$
oscillates more, making it harder to remove the dependence of $X$ on $Z$. A
smaller $\sigma_{X}$ makes $Y$ closer to the integral of $X$, and so increases
the marginal dependence of $X$ and $Y$.
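The data-generating process (22)–(24) can be sketched as follows on the 100-point grid used here; discretising the integrals as Riemann sums is the only added assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 100                          # sample size and grid points
t = (np.arange(m) + 1) / m               # equidistant grid on (0, 1]
a, sigma_X = 6.0, 0.25                   # one of the 12 parameter settings

def brownian_motion(size, scale=1.0):
    # Cumulative sums of scaled Gaussian increments on the grid.
    return scale * np.cumsum(rng.standard_normal((size, m)) / np.sqrt(m), axis=1)

st = np.outer(t, t)                      # st[i, j] = s_i * t_j
beta = a * np.exp(-st ** 2 / 2) * np.sin(a * st)   # beta_a(s, t) as in (24)
alpha = beta.mean(axis=0)                # alpha_a(t) = int_0^1 beta_a(s, t) ds

# Null model (22)-(23): X depends on Z, and Y depends on Z only.
Z = brownian_motion(n)
X = Z @ beta / m + brownian_motion(n, scale=sigma_X)      # X_i(t) on the grid
Y = (Z * alpha).sum(axis=1) / m + rng.standard_normal(n)  # scalar Y_i
```

Under this null, any dependence between $X$ and $Y$ is mediated entirely by $Z$, so a calibrated test should reject at roughly its nominal level.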
We apply the GHCM and compare the resulting tests to those corresponding to
the significance test for $X$ in a regression of $Y$ on $(X,Z)$ implemented in
pfr. The rejection rates of the two tests at the $5\%$ level, averaged over
$100$ simulation runs, can be seen in Figure 1. We see that the pfr test has
size greatly exceeding its level in the more challenging large $a$, small
$\sigma_{X}$ settings, with large values of $n$ exposing most clearly the
miscalibration of the test statistic. In these settings, $Y$ may be
approximated simply by the integral of $X$ reasonably well, and is also well-
approximated by the true regression function that features only $Z$.
Regularisation encourages pfr to fit a model where $X$ determines the
response, rather than $Z$, and the $p$-values reflect this. On the other hand,
the GHCM tests maintain reasonable type I error control across the settings
considered here.
Figure 1: Rejection rates in the various null settings considered in Section
5.1.1 for the nominal 5%-level pfr test (top) and GHCM test (bottom).
To investigate the power properties of the test, we simulate $Z$ as before
with $X$ also generated according to (22). We replace the regression model
(23) for $Y$ with
$Y=\int_{0}^{1}\alpha_{a}(t)Z(t)\,\mathrm{d}t+\int_{0}^{1}\frac{\alpha_{a}(t)}{a}X(t)\,\mathrm{d}t+N_{Y},$
(25)
where $N_{Y}\sim\mathcal{N}(0,1)$ as before. Note that the coefficient
function for $X$ oscillates more as $a$ increases. The rejection rates at the
$5\%$ level can be seen in Figure 2.
Figure 2: Rejection rates in the various alternative settings considered in
Section 5.1.1 (see (25)) for the nominal 5%-level pfr test (top) and GHCM test
(bottom).
While the two approaches perform similarly when $a=2$, the pfr test has higher
power in the more complex cases. However, as the results from the size
analysis in Figure 1 show, null cases are also rejected in the analogous
settings.
To illustrate the full distribution of $p$-values from the two methods under
the null and the alternative, we plot false positive rates and true positive
rates in each setting as a function of the chosen significance level of the
test $\alpha$. The full set of results can be seen in Section D of the
supplementary material and a plot for a subset of the simulations settings
where $n=500$ and $\sigma_{X}\in\\{0.1,0.25,0.5\\}$ is presented in Figure 3.
Figure 3: Rejection rates against significance level for the pfr (red) and
GHCM (green) tests under null (light) and alternative (dark) settings when
$n=500$.
We see that both tests distinguish null from alternative well in the cases
with $a$ small and $\sigma_{X}$ large. The $p$-values of the GHCM are close to
uniform in the settings considered, whereas the distribution of the pfr
$p$-values is heavily dependent on the particular null setting, illustrating
the difficulty with calibrating this test.
In Section D of the supplementary material we also present the results of two
additional sets of experiments. We repeat the experiments above using the
FDboost package for regressions in place of the refund package. We see that
the performance of the GHCM with FDboost is broadly similar to that displayed
in Figures 1 and 2, supporting our theoretical results which indicate that
provided the prediction errors of the regression methods used are sufficiently
small, the test will perform similarly.
We also consider the case where the noise is heavy-tailed. Specifically, we
present analogous plots for settings where $N_{Y}$ is $t$-distributed with
different degrees of freedom, $n=500$ and $\sigma_{X}=0.25$; the results are
similar to Figure 3, with the GHCM maintaining type I error control, and pfr
tending to be anti-conservative in the more challenging settings.
#### 5.1.2 Functional $X$, $Y$ and $Z$
In this section we modify the setup and consider functional $Y\in
L^{2}([0,1],\mathbb{R})$. We take $X$ and $Z$ as in Section 5.1.1 but in the
null settings we let
$Y(t)=\int_{0}^{1}\beta_{a}(s,t)Z(s)\,\mathrm{d}s+N_{Y}(t),$
where $N_{Y}$ is a standard Brownian motion. Note that this is a particularly
challenging setting to maintain type I error control as $X$ and $Y$ are then
highly correlated, and moreover the biases from regressing each of $X$ and $Y$
on $Z$ will tend to be in similar directions making the equivalent of the term
$a_{n}$ in (2) potentially large.
In the alternative settings, we take
$Y(t)=\int_{0}^{1}\beta_{a}(s,t)Z(s)\,\mathrm{d}s+\int_{0}^{1}\frac{\beta_{a}(s,t)}{a}X(s)\,\mathrm{d}s+N_{Y}(t)$
with $N_{Y}$ again being a standard Brownian motion.
The rejection rates at the $5\%$ level, averaged over $100$ simulation runs,
can be seen in Figure 4. We see that, as in the case where $Y\in\mathbb{R}$,
the GHCM maintains good type I error control in the settings considered, and
has power increasing with $n$ and $\sigma_{X}$ as expected. We note that a
comparison with the $p$-values from ff-terms in the pffr-function of the
refund package here does not seem helpful. In our experiments the
corresponding tests consistently reject in true null settings even for simple
models.
In Section D of the supplementary material we look at the subset of the
settings considered above with $n=500$ and $\sigma_{X}=0.25$ but where $X$ and
$Y$ are observed on irregular grids of varying lengths. We first
preprocess the residuals output by the regression method as described in
Section 3.2.1 and then apply the GHCM. We observe that the performance is
similar to that in the fixed grid setting, though the power is lower when the
average grid length is smaller, and type I error increases slightly above
nominal levels in the most challenging $a=12$ setting.
Figure 4: Rejection rates in the various null (top) and alternative (bottom)
settings considered in Section 5.1.2 for the nominal 5%-level GHCM test.
### 5.2 Confidence intervals for truncated linear models
In this section we consider an application of the GHCM in constructing a
confidence interval for the truncation point $\theta\in[0,1]$ in a truncated
functional linear model [Hall and Hooker, 2016]
$Y=\int_{0}^{\theta}\alpha(t)X(t)\,\mathrm{d}t+\varepsilon,$ (26)
where the predictor $X\in L^{2}([0,1],\mathbb{R})$, $Y\in{\mathbb{R}}$ is a
response and $\varepsilon\mbox{${}\perp\mkern-11.0mu\perp{}$}X$ is stochastic
noise. To frame this as a conditional independence testing problem, observe
that (26) implies that defining the null hypotheses
$H_{\tilde{\theta}}:\;\;\;Y\mbox{${}\perp\mkern-11.0mu\perp{}$}\\{X(t)\\}_{t>\tilde{\theta}}\,|\,\\{X(t)\\}_{t\leq\tilde{\theta}}$
(27)
for $\tilde{\theta}\in(0,1)$, we have that $H_{\tilde{\theta}}$ is true for
all $\tilde{\theta}$ satisfying $\theta\leq\tilde{\theta}<1$.
Given an $\alpha$-level conditional independence test $\psi$, we may thus form
a one-sided confidence interval for $\theta$ using
$\left[\inf\left\\{\tilde{\theta}\in(0,1)\,:\,\psi\text{ accepts null
}H_{\tilde{\theta}}\right\\},\,1\right].$ (28)
Indeed, with probability $1-\alpha$, $\psi$ will not reject the true null
$H_{\theta}$, and so with probability $1-\alpha$ the infimum above will be at
most $\theta$.
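The construction (28) can be sketched generically: given any routine that accepts or rejects $H_{\tilde{\theta}}$, scan a few coarse points and then bisect, as is done in the experiments below. The synthetic `accepts` oracle replacing an actual GHCM call is purely illustrative.

```python
def left_endpoint(accepts, n_coarse=5, tol=1e-3):
    # Coarse scan over equidistant interior points of (0, 1).
    coarse = [(k + 1) / (n_coarse + 1) for k in range(n_coarse)]
    accepted = [th for th in coarse if accepts(th)]
    if not accepted:
        return 1.0                        # every scanned null was rejected
    hi = min(accepted)                    # smallest accepted coarse point
    earlier = [th for th in coarse if th < hi]
    lo = max(earlier) if earlier else 0.0
    while hi - lo > tol:                  # bisection search between lo and hi
        mid = (lo + hi) / 2
        if accepts(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Stand-in oracle: accept H_theta_tilde iff theta_tilde >= theta (illustrative;
# a real implementation would call the GHCM test here).
theta = 0.275
endpoint = left_endpoint(lambda th: th >= theta)
print(endpoint)   # within tol of theta
```

In practice each call to `accepts` is a full GHCM test, so the bisection keeps the number of regressions small.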
To approximate (28) we initially consider the null hypothesis
$H_{\tilde{\theta}}$ at $5$ equidistant values of $\tilde{\theta}$ and then
employ a bisection search between the smallest of these points
$\tilde{\theta}$ at which $H_{\tilde{\theta}}$ is accepted by a 5%-level GHCM
test, and the point immediately before it (or $0$). We consider two instances of the
model (26) with $\theta=0.275,0.675$ and with $\alpha(t):=10(t+1)^{-1/3}$, $X$
a standard Brownian motion and $\varepsilon\sim\mathcal{N}(0,1)$. The
simulated functional variables are observed on an equidistant grid of $[0,1]$
with $121$ grid points. The results across $500$ simulations are given in
Figure 5. We see that the empirical coverage probabilities are close to the
nominal coverage of 95%.
Figure 5: Histograms of the left endpoints of 95% confidence intervals for
truncation points $\theta=0.275$ (left) and $\theta=0.675$ (right), given by
red vertical lines, in model (26) across $500$ simulations.
### 5.3 EEG data analysis
In this section we demonstrate the application of our GHCM methodology to the
problem of learning functional graphical models. In contrast to existing work
[Qiao et al., 2019, 2020], which typically assumes a Gaussian functional graphical
model and outputs a point estimate of the conditional independence graph, here
we are able to test for the presence of each edge, with type I error control
guaranteed for data generating processes where our regression methods perform
suitably well as indicated by Theorem 3.
We illustrate this on an EEG dataset from a study on alcoholism [Zhang et al., 1995,
Ingber, 1997, 1998]. The study participants were shown one of three visual
stimuli repeatedly, and simultaneous EEG activity was measured across $64$
channels over the course of $1$ second at $256$ measurements per second. While
the study included both a control group and an alcoholic group, we
restrict our analysis to the alcoholic group, consisting of $77$ subjects, and
further restrict ourselves to a single type of visual stimulus. We preprocess
the data as in Qiao et al. [2019], averaging across the repetitions of the experiment
for each subject and using an order-$96$ FIR filter implemented in the eegkit
R-package [Helwig, 2018] to filter the averaged curves at the $\alpha$ frequency
band (between $8$ and $12.5$ Hz). We thus obtain $64$ $\alpha$-filtered
frequency curves for each of the $77$ subjects.
Given the low number of observations compared to the $64$ functional
variables, there is not enough data to reject the null of edge absence even if
a true edge were to be present. We therefore aim for a coarser analysis by
grouping the variables by brain region and then further according to whether
the variable corresponded to the right or left hemispheres of the brain. This
yields disjoint groups $G_{1},\ldots,G_{24}$ comprising $52$ variables in
total after omitting reference channels and midline channels that could not
easily be classified as being in either hemisphere, that is,
$G_{1}\cup\ldots\cup G_{24}=\\{1,\ldots,52\\}$. We suppose the observed data
are i.i.d. copies of the functional variables $(X_{1},\ldots,X_{52})$, and then test
the null hypothesis
$X_{G_{j}}\mbox{${}\perp\mkern-11.0mu\perp{}$}X_{G_{k}}\,|\,\\{X_{G_{m}}:m\in\\{1,\ldots,24\\}\setminus\\{j,k\\}\\},$
(29)
for each $j,k\in\\{1,\ldots,24\\}$ with $j\neq k$; that is, we test for edge
presence in the conditional independence graph of the grouped variables. Here,
the conditional independence graph over the grouped variables is defined as an
undirected graph over $G_{1},\ldots,G_{24}$, in which the edge between $G_{j}$
and $G_{k}$, $j\neq k$ is missing if and only if (29) holds; that is,
rejection of the null in (29) for $k$ and $j$ indicates that the conditional
independence graph has an edge between $G_{k}$ and $G_{j}$.
To construct $p$-values for the null in (29) using the GHCM, we must regress
each of the functional variables $X_{l}$, $l\in G_{j}$, and $X_{r}$, $r\in G_{k}$,
onto the set of variables in the conditioning set. Since
these regressions involve large numbers of functional predictors, the
refund package is not suitable for performing them. Instead, we use the
FDboost package in R, which is well-suited to high-dimensional functional
regressions [Brockhaus et al., 2020]. We fit a concurrent functional model
[Ramsay and Silverman, 2005, Section 16] of the form
$X_{l}(t)=\sum_{m}\beta_{m}(t)X_{m}(t),$
where the sum runs over the indices $m$ of the variables in the conditioning set;
the inclusion of additional functional linear terms did not improve the fit.
We assessed the appropriateness of this regression method to data of the sort
studied here through simulations described in Section D of the supplement.
Figure 6: Network summarising the output of conditional independence tests for
each pair of groups. Only edges with $p$-values of less than 5% are shown with
thicker lines indicating smaller $p$-values.
Figure 6 summarises the results of GHCM applied to test the presence of each
edge in the conditional independence graph. We see that some of the brain
regions located close to each other appear to be connected, as one might
expect. Note that the network presented includes all edges that had a
$p$-value less than $5\%$. The edge PO-R—O-R has a Bonferroni-corrected
$p$-value of $0.0027$, and is the only edge yielding a corrected $p$-value
less than $5\%$. Applying the Benjamini–Hochberg procedure [Benjamini and Hochberg, 1995] to control
the false discovery rate at the 5% level selects this edge and also PO-L—O-L.
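For concreteness, a minimal sketch of the Benjamini–Hochberg step-up rule applied to a vector of edge $p$-values; the $p$-values below are made up for illustration.

```python
def benjamini_hochberg(pvals, q=0.05):
    # Step-up rule: find the largest rank k with p_(k) <= q * k / m and
    # reject the hypotheses with the k smallest p-values.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank                     # largest rank passing the threshold
    return sorted(order[:k])             # indices of rejected hypotheses

# Made-up p-values for illustration, not the edge p-values from the data.
pvals = [0.0001, 0.003, 0.04, 0.2, 0.6]
print(benjamini_hochberg(pvals))         # -> [0, 1]
```

Applied to the $\binom{24}{2}$ edge $p$-values, this yields the FDR-controlled edge selection described above.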
We may compare these results with those of Qiao et al. [2019] and Qiao et al. [2020], who study the
same dataset but consider the different problem of estimation of the
conditional independence graph rather than testing of edge presence as we do
here. We see that our results are broadly in line with their estimates: for
example, there are edges estimated between the groups represented by PO-R and
O-R (the group pair which yields the lowest $p$-value) even in some of their
sparsest estimated graphs.
## 6 Conclusion
Testing the conditional independence
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ has been shown to be a hard
problem in the setting where $X,Y,Z$ are all real-valued and $Z$ is absolutely
continuous with respect to Lebesgue measure [Shah and Peters, 2020]. This hardness takes a more
extreme form in the functional setting: even when $(X,Y,Z)$ are jointly
Gaussian with non-degenerate covariance and $Z$ and at most one of $X$ and $Y$
are infinite-dimensional, there is no non-trivial test of conditional
independence. This requires us to (i) understand the form of an ‘effective
null hypothesis’ for a given hypothesis test, and (ii) develop tests where
these effective nulls are somewhat interpretable so that domain knowledge can
more easily inform the choice of a conditional independence test to use on any
given dataset.
In order to address these two needs, we introduce here a new family of tests
for functional data and develop the necessary uniform convergence results to
understand the forms of null hypotheses that we can have type I error control
over. We see that for our proposed GHCM tests, error control is guaranteed
under conditions largely determined by the in-sample prediction error rate of
regressions upon which the test is based. Whilst in-sample and the more common
out-of-sample results share similarities in some settings, the lack of a need
to extrapolate beyond the data in the former leads to important differences
when regressing on functional data. In particular, no eigen-spacing conditions
or lower bounds on the eigenvalues of the covariance of the regressor are
required for the in-sample error to be controlled when ridge regression is
used. It would be interesting to investigate the in-sample MSPE properties of
other regression methods and understand whether such conditions can be avoided
more generally.
One attractive feature of the GHCM is that it only depends on inner products
between the residuals produced by the regression methods. An interesting
question is whether different inner products can be constructed to have power
against different sets of alternatives, by emphasising certain regions of the
function domains, for example.
Another direction which may be fruitful to pursue is to adapt the GHCM so that
it has power against alternatives where
${\mathbb{E}}\mathrm{Cov}(X,Y\,|\,Z)=0$. It is likely that further conditions
will be required of the regression methods than simply that their in-sample
prediction errors are small, and so some interpretability of the effective
null hypotheses, and indeed of their size compared to the full null of conditional
independence, will need to be sacrificed. There are, however, settings where the
severity of type I versus type II errors may be balanced such that this is an
attractive option.
It would also be interesting to investigate the hardness of conditional
independence in the setting where all of $X$, $Y$ and $Z$ are infinite-
dimensional. For our hardness result here, at least one of $X$ and $Y$ must be
finite-dimensional. It may be the case that requiring two infinite-dimensional
variables to be conditionally independent is such a strong condition that the
null is not prohibitively large compared to the entire space of Gaussian
measures, and so genuine control of the type I error while maintaining power
is in fact possible. Such a result, or indeed a proof that hardness persists,
would certainly be of interest.
## Acknowledgements
We thank Yoav Zemel, Alexander Aue, Sonja Greven and Fabian Scheipl for
helpful discussions.
## References
* Bai and Saranadasa [1996] Z. Bai and H. Saranadasa. Effect of high dimension: by an example of a two sample problem. _Statistica Sinica_ , pages 311–329, 1996.
* Benatia et al. [2017] D. Benatia, M. Carrasco, and J.-P. Florens. Functional linear regression with functional response. _Journal of Econometrics_ , 201(2):269–291, 2017\.
* Benjamini and Hochberg [1995] Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. _Journal of the Royal Statistical Society Series B_ , 57(1):289–300, 1995.
* Brockhaus et al. [2020] S. Brockhaus, D. Rügamer, and S. Greven. Boosting functional regression models with fdboost. _Journal of Statistical Software_ , 94(10):1–50, 2020.
* Cai and Hall [2006] T. T. Cai and P. Hall. Prediction in functional linear regression. _Annals of Statistics_ , 34(5):2159–2179, 2006.
* Chen and White [1998] X. Chen and H. White. Central limit and functional central limit theorems for Hilbert-valued dependent heterogeneous arrays with applications. _Econometric Theory_ , pages 260–284, 1998.
* Chernozhukov et al. [2018] V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, and J. Robins. Double/debiased machine learning for treatment and structural parameters. _The Econometrics Journal_ , 21(1):C1–C68, 2018\.
* Chiou et al. [2004] J.-M. Chiou, H.-G. Müller, and J.-L. Wang. Functional response models. _Statistica Sinica_ , pages 675–693, 2004.
* Constantinou and Dawid [2017] P. Constantinou and A. P. Dawid. Extended conditional independence and applications in causal inference. _Annals of Statistics_ , 45(6):2618–2653, 2017\.
* Crambes and Mas [2013] C. Crambes and A. Mas. Asymptotics of prediction in functional linear regression with functional outputs. _Bernoulli_ , 19(5B):2627–2651, 2013.
* Delaigle and Hall [2012] A. Delaigle and P. Hall. Methodology and theory for partial least squares applied to functional data. _Annals of Statistics_ , 40(1):322–352, 2012\.
* Duchesne and de Micheaux [2010] P. Duchesne and P. L. de Micheaux. Computing the distribution of quadratic forms: Further comparisons between the Liu–Tang–Zhang approximation and exact methods. _Computational Statistics and Data Analysis_ , 54:858–862, 2010.
* Fan et al. [2015] Y. Fan, G. M. James, and P. Radchenko. Functional additive regression. _Annals of Statistics_ , 43(5):2296–2325, 2015.
* Farebrother [1984] R. W. Farebrother. Algorithm AS 204: The distribution of a positive linear combination of chi-squared random variables. _Journal of the Royal Statistical Society Series C_ , 33(3):332–339, 1984.
* Ferraty and Vieu [2006] F. Ferraty and P. Vieu. _Nonparametric Functional Data Analysis: Theory and Practice_. Springer Series in Statistics. Springer New York, 2006.
* Ferraty et al. [2011] F. Ferraty, A. Laksaci, A. Tadj, and P. Vieu. Kernel regression with functional response. _Electronic Journal of Statistics_ , 5:159–171, 2011.
* Goldsmith et al. [2011] J. Goldsmith, J. Bobb, C. M. Crainiceanu, B. Caffo, and D. Reich. Penalized functional regression. _Journal of Computational and Graphical Statistics_ , 20(4):830–851, 2011.
* Goldsmith et al. [2020] J. Goldsmith, F. Scheipl, L. Huang, J. Wrobel, C. Di, J. Gellar, J. Harezlak, M. W. McLean, B. Swihart, L. Xiao, C. Crainiceanu, and P. T. Reiss. _refund: Regression with Functional Data_ , 2020. URL https://CRAN.R-project.org/package=refund. R-package version 0.1-22.
* Greven and Scheipl [2017] S. Greven and F. Scheipl. A general framework for functional regression modelling. _Statistical Modelling_ , 17(1-2):1–35, 2017\.
* Györfi et al. [2002] L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. _A Distribution-Free Theory of Nonparametric Regression_. Springer New York, 2002.
* Hall and Hooker [2016] P. Hall and G. Hooker. Truncated linear models for functional data. _Journal of the Royal Statistical Society Series B_ , 78(3):637–653, 2016.
* Hall and Horowitz [2007] P. Hall and J. L. Horowitz. Methodology and convergence rates for functional linear regression. _Annals of Statistics_ , 35(1):70–91, 2007\.
* Helwig [2018] N. E. Helwig. _eegkit: Toolkit for Electroencephalography Data_ , 2018. URL https://CRAN.R-project.org/package=eegkit. R-package version 1.0-4.
* Hoerl and Kennard [2000] A. E. Hoerl and R. W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. _Technometrics_ , 42(1):80–86, 2000.
* Imhof [1961] J. P. Imhof. Computing the distribution of quadratic forms in normal variables. _Biometrika_ , 48(3/4):419–426, 1961.
* Ingber [1997] L. Ingber. Statistical mechanics of neocortical interactions: Canonical momenta indicators of electroencephalography. _Phys. Rev. E_ , 55:4578–4593, 1997.
* Ingber [1998] L. Ingber. Statistical mechanics of neocortical interactions: Training and testing canonical momenta indicators of EEG. _Mathematical and Computer Modelling_ , 27(3):33–64, 1998.
* Ivanescu et al. [2015] A. E. Ivanescu, A.-M. Staicu, F. Scheipl, and S. Greven. Penalized function-on-function regression. _Computational Statistics_ , 30(2):539–568, 2015\.
* Koller and Friedman [2009] D. Koller and N. Friedman. _Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning_. The MIT Press, 2009.
* Kraft [1955] C. Kraft. _Some Conditions for Consistency and Uniform Consistency of Statistical Procedures_. University of California Press, 1955.
* Lauritzen [1996] S. Lauritzen. _Graphical Models_. Oxford Statistical Science Series. Clarendon Press, 1996.
* Liu et al. [2009] H. Liu, Y. Tang, and H. H. Zhang. A new chi-square approximation to the distribution of non-negative definite quadratic forms in non-central normal variables. _Computational Statistics & Data Analysis_, 53(4):853–856, 2009.
* Lundborg et al. [2021] A. R. Lundborg, R. D. Shah, and J. Peters. _ghcm: Functional Conditional Independence Testing with the GHCM_ , 2021. URL https://CRAN.R-project.org/package=ghcm. R-package version 2.1.0.
* Morris [2015] J. S. Morris. Functional regression. _Annual Review of Statistics and Its Application_ , 2(1):321–359, 2015.
* Neykov et al. [2020] M. Neykov, S. Balakrishnan, and L. Wasserman. Minimax optimal conditional independence testing. _arXiv preprint arXiv:2001.03039_ , 2020.
* Pearl [2009] J. Pearl. _Causality_. Cambridge University Press, 2009.
* Pearl [2014] J. Pearl. _Probabilistic reasoning in intelligent systems: networks of plausible inference_. Elsevier, 2014.
* Peters [2014] J. Peters. On the intersection property of conditional independence and its application to causal discovery. _Journal of Causal Inference_ , 3:97–108, 2014.
* Peters et al. [2016] J. Peters, P. Bühlmann, and N. Meinshausen. Causal inference using invariant prediction: identification and confidence intervals. _Journal of the Royal Statistical Society Series B_ , 78(5):947–1012, 2016.
* Peters et al. [2017] J. Peters, D. Janzing, and B. Schölkopf. _Elements of Causal Inference: Foundations and Learning Algorithms_. MIT Press, Cambridge, MA, USA, 2017.
* Qiao et al. [2019] X. Qiao, S. Guo, and G. M. James. Functional graphical models. _Journal of the American Statistical Association_ , 114(525):211–222, 2019.
* Qiao et al. [2020] X. Qiao, C. Qian, G. M. James, and S. Guo. Doubly functional graphical models in high dimensions. _Biometrika_ , 107(2):415–431, 2020.
* Ramsay and Silverman [2005] J. O. Ramsay and B. W. Silverman. _Functional Data Analysis_. Springer New York, 2005.
* Reiss and Ogden [2007] P. T. Reiss and R. T. Ogden. Functional principal component regression and functional partial least squares. _Journal of the American Statistical Association_ , 102(479):984–996, 2007.
* Reiss et al. [2010] P. T. Reiss, L. Huang, and M. Mennes. Fast function-on-scalar regression with penalized basis expansions. _The International Journal of Biostatistics_ , 6(1), 2010\.
* Robins and Rotnitzky [1995] J. M. Robins and A. Rotnitzky. Semiparametric efficiency in multivariate regression models with missing data. _Journal of the American Statistical Association_ , 90(429):122–129, 1995.
* Scharfstein et al. [1999] D. O. Scharfstein, A. Rotnitzky, and J. M. Robins. Adjusting for nonignorable drop-out using semiparametric nonresponse models. _Journal of the American Statistical Association_ , 94(448):1096–1120, 1999.
* Scheipl et al. [2015] F. Scheipl, A.-M. Staicu, and S. Greven. Functional additive mixed models. _Journal of Computational and Graphical Statistics_ , 24(2):477–501, 2015.
* Shah and Peters [2020] R. D. Shah and J. Peters. The hardness of conditional independence testing and the generalised covariance measure. _Annals of Statistics_ , 48(3):1514–1538, 2020\.
* Shin [2009] H. Shin. Partial functional linear regression. _Journal of Statistical Planning and Inference_ , 139(10):3405 – 3418, 2009.
* Spirtes et al. [2000] P. Spirtes, P. Scheines, C. Glymour, R. Scheines, S. Richard, D. Heckerman, C. Meek, G. Cooper, and T. Richardson. _Causation, Prediction, and Search_. Adaptive computation and machine learning. MIT Press, 2000.
* Ullah and Finch [2013] S. Ullah and C. F. Finch. Applications of functional data analysis: A systematic review. _BMC medical research methodology_ , 13(1):43, 2013.
* Wang et al. [2016] J.-L. Wang, J.-M. Chiou, and H.-G. Müller. Functional data analysis. _Annual Review of Statistics and Its Application_ , 3(1):257–295, 2016.
* Wood [2013] S. N. Wood. On p-values for smooth components of an extended generalized additive model. _Biometrika_ , 100(1):221–228, 2013.
* Wood [2017] S. N. Wood. _Generalized Additive Models_. Chapman and Hall/CRC, 2017.
* Yao and Müller [2010] F. Yao and H.-G. Müller. Functional quadratic regression. _Biometrika_ , 97(1):49–64, 2010.
* Yao et al. [2005] F. Yao, H.-G. Müller, and J.-L. Wang. Functional linear regression analysis for longitudinal data. _Annals of Statistics_ , pages 2873–2903, 2005.
* Yuan and Cai [2010] M. Yuan and T. T. Cai. A reproducing kernel hilbert space approach to functional linear regression. _Annals of Statistics_ , 38(6):3412–3444, 2010\.
* Yuan and Lin [2007] M. Yuan and Y. Lin. Model selection and estimation in the gaussian graphical model. _Biometrika_ , 94(1):19–35, 2007.
* Zapata et al. [2019] J. Zapata, S.-Y. Oh, and A. Petersen. Partial separability and functional graphical models for multivariate gaussian processes. _arXiv preprint arXiv:1910.03134_ , 2019.
* Zhang et al. [1995] X. L. Zhang, H. Begleiter, B. Porjesz, W. Wang, and A. Litke. Event related potentials during object recognition tasks. _Brain Research Bulletin_ , 38(6):531–538, 1995\.
* Zhu et al. [2016] H. Zhu, N. Strawn, and D. B. Dunson. Bayesian graphical models for multivariate functional data. _Journal of Machine Learning Research_ , 17(1):7157–7183, 2016.
## Supplementary material for ‘Conditional Independence Testing in Hilbert
Spaces with Applications to Functional Data Analysis’
Section A is a self-contained presentation of the theory and proofs of Section
2 in the paper. Section B contains much of the background on uniform
stochastic convergence that is used for the technical results of the paper.
This includes an account of previously established results for real-valued
random variables and new results for Hilbertian and Banachian random
variables. Section C contains the proofs of the results in Sections 3.2 and 4
in the paper. Section D contains some additional simulation results.
## Appendix A Hardness of functional Gaussian independence testing
In this section we provide the necessary background and prove the hardness
result in Section 2. We use the notation and terminology described in the
setup of Section 2 with the exception that $\mathcal{P}$, $\mathcal{P}_{0}$
and $\mathcal{Q}$ will consist of $n$ i.i.d. copies of jointly Gaussian
$(X,Y,Z)$ rather than a single copy. For a bounded linear operator
$\mathscr{A}$ on a Hilbert space $\mathcal{H}$, we let $\mathscr{A}^{*}$
denote the adjoint of $\mathscr{A}$. For two orthogonal subspaces
$\mathcal{A}$ and $\mathcal{B}$ of a Hilbert space $\mathcal{H}$, we write
$\mathcal{A}\oplus\mathcal{B}$ for the orthogonal direct sum of $\mathcal{A}$
and $\mathcal{B}$.
In Section A.1 we consider the setup of Section 2 in the specific case where
all the Hilbert spaces are finite-dimensional. We show that for any
$Q\in\mathcal{Q}$, sample size $n$ and $\varepsilon>0$, we can find a
sufficiently large dimension of $\mathcal{H}_{Z}$ such that any test of size
$\alpha$ over $\mathcal{P}_{0}^{Q}$ has power at most $\alpha+\varepsilon$
against any alternative. In Section A.2 we use this to prove Theorem 1. In
Section A.3 we review the theory of regular conditional probabilities and
conditional distributions of Hilbertian random variables and prove several
Hilbertian analogues of well-known multivariate Gaussian results. Sections A.1
and A.2 with the exception of Lemma 1 contain new material while Section A.3
is primarily a review of relatively well-known results.
### A.1 Power of finite-dimensional Gaussian conditional independence testing
Before we consider Gaussian conditional independence testing, we present the
following general result from Kraft [1955]. A summary is given in Le Cam [1973].
###### Lemma 1.
Let $\mathcal{P}$ and $\mathcal{Q}$ denote two families of probability
measures on some measurable space $(\mathcal{X},\mathcal{A})$ and assume that
both families are dominated by a $\sigma$-finite measure. Consider the problem
of testing the null hypothesis that the given data is from a distribution in
$\mathcal{P}$ against the alternative that the distribution is in
$\mathcal{Q}$. Let $d_{\textrm{TV}}$ denote the total variation distance and
$\widetilde{\mathcal{P}}$ and $\widetilde{\mathcal{Q}}$ the closed convex
hulls of $\mathcal{P}$ and $\mathcal{Q}$. Then
$\inf_{\psi:\mathcal{X}\to[0,1]}\sup_{P\in\mathcal{P},Q\in\mathcal{Q}}\left[\int\psi\,\mathrm{d}P+\int(1-\psi)\,\mathrm{d}Q\right]=1-\inf_{P\in\widetilde{\mathcal{P}},Q\in\widetilde{\mathcal{Q}}}d_{\textrm{TV}}(P,Q).$
An immediate consequence of this is that for any test $\psi$ that has size
$\alpha$ and power function $\beta:\mathcal{Q}\to[0,1]$,
$\beta(Q)=\int\psi\,\mathrm{d}Q$, we have
$\inf_{Q\in\mathcal{Q}}\beta(Q)\leq\alpha+\inf_{P\in\widetilde{\mathcal{P}},Q\in\widetilde{\mathcal{Q}}}d_{\textrm{TV}}(P,Q)\leq\alpha+\inf_{P\in\widetilde{\mathcal{P}},Q\in\mathcal{Q}}d_{\textrm{TV}}(P,Q).$
In most practical situations both $\mathcal{P}$ and $\mathcal{Q}$ will consist
of product measures on a product space corresponding to a situation where we
observe a sample of $n$ i.i.d. observations of some random variable. The
theorem states that a lower bound on the sum of the type I and type II error
probabilities of testing the null that data is from a distribution in
$\mathcal{P}$ against the alternative that the distribution is in
$\mathcal{Q}$ is given by $1$ minus the total variation distance between the
closed convex hulls of $\mathcal{P}$ and $\mathcal{Q}$. As a consequence we
see that the power of a test is upper bounded by the size plus the total
variation distance between the closed convex hull of $\mathcal{P}$ and
$\mathcal{Q}$.
In the remainder of this section we will consider the testing problem
described in Section 2 with $\mathcal{H}_{X}=\mathbb{R}^{d_{X}}$ and
$\mathcal{H}_{Z}=\mathbb{R}^{d_{Z}}$ for $d_{X},d_{Z}\in\mathbb{N}$. To
produce bounds on the power of a test in this setting, we will construct an
explicit TV-approximation to a family of particularly simple distributions in
$\mathcal{Q}$ using a distribution in the convex hull of the null
distributions. We will need the following upper bound on the total variation
distance between measures.
###### Lemma 2.
Let $P$ and $Q$ be probability measures where $P$ has density $f$ with respect
to $Q$. Then
$4d_{\mathrm{TV}}(P,Q)^{2}\leq\int f^{2}\,\mathrm{d}Q-1.$
###### Proof.
We may assume that the integral of $f^{2}$ with respect to $Q$ is finite,
otherwise the inequality is trivially valid. Then by Jensen’s inequality, we
get
$d_{\mathrm{TV}}(P,Q)^{2}=\frac{1}{4}\left(\int|f-1|\,\mathrm{d}Q\right)^{2}\leq\frac{1}{4}\int(f-1)^{2}\,\mathrm{d}Q=\frac{1}{4}\int
f^{2}\,\mathrm{d}Q-\frac{1}{4}.\qed$
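As a quick numerical illustration (not part of the original argument), one can check the bound for the one-dimensional Gaussian shift family $P=N(\mu,1)$, $Q=N(0,1)$, for which $\int f^{2}\,\mathrm{d}Q=e^{\mu^{2}}$ and $d_{\mathrm{TV}}(P,Q)=\operatorname{erf}(\mu/(2\sqrt{2}))$:

```python
import math

# Lemma 2 check for P = N(mu, 1), Q = N(0, 1): f = dP/dQ gives
# int f^2 dQ = exp(mu^2), while d_TV(P, Q) = Phi(mu/2) - Phi(-mu/2).
def tv(mu):
    return math.erf(abs(mu) / (2 * math.sqrt(2)))

def chi2_plus_one(mu):
    return math.exp(mu ** 2)

for mu in (0.01, 0.5, 1.0, 3.0):
    assert 4 * tv(mu) ** 2 <= chi2_plus_one(mu) - 1
```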
Using this bound and Lemma 1, we can show the following result.
###### Theorem 6.
Let $Q$ be a distribution consisting of $n$ i.i.d. copies of jointly Gaussian
$(X,Y,Z)$ on $(\mathbb{R},\mathbb{R},\mathbb{R}^{d})$ for some
$d\in\mathbb{N}$, where $X$ and $Y$ are standard Gaussian, $Z$ is mean zero
with identity covariance matrix, $\mathrm{Cov}(X,Z)=\mathrm{Cov}(Y,Z)=0$ and
$\mathrm{Cov}(X,Y)=\rho\in(0,1)$. Consider the testing problem described in
Section 2 with $\mathcal{H}_{X}=\mathbb{R}$ and
$\mathcal{H}_{Z}=\mathbb{R}^{d}$ and let $\psi$ be the test function of a size
$\alpha$ test over $\mathcal{P}_{0}^{Q}$. Writing $\beta$ for the power of
$\psi$ against $Q$, we have
$\beta\leq\alpha+\frac{1}{2}\sqrt{-1+(1+\rho)^{n}\sum_{k=0}^{d}\frac{\binom{d}{k}}{2^{d}(1+(3-4k/d)\rho)^{n}}}.$
In particular, for fixed $n$ the upper bound converges to $\alpha$ as $d$
increases.
###### Proof.
Let $\tau\in\\{-1,1\\}^{d}$ and let $P_{\tau}$ denote the Gaussian
distribution consisting of $n$ i.i.d. copies of jointly Gaussian $(X,Y,Z)$
where $X$ and $Y$ are standard Gaussian, $Z$ is mean zero with identity
covariance matrix, $\mathrm{Cov}(X,Y)=\rho$ and
$\mathrm{Cov}(X,Z)=\mathrm{Cov}(Y,Z)=\sqrt{\frac{\rho}{d}}\tau^{\top}$. For
every $\tau\in\\{-1,1\\}^{d}$, it is clear that
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ under $P_{\tau}$ and thus
forming
$P:=\frac{1}{2^{d}}\sum_{\tau\in\\{-1,1\\}^{d}}P_{\tau}$
we note that $P$ is in the closed convex hull of the set of null
distributions. Let $\Gamma_{\tau}$ and $\Gamma_{Q}$ denote the
$n(d+2)$-dimensional covariance matrices of the $n$ i.i.d. copies of $(X,Y,Z)$
under $P_{\tau}$ and $Q$ respectively. These are block-diagonal, and we let
$\Sigma_{\tau}$ and $\Sigma_{Q}$ respectively denote the matrices in the
diagonal, corresponding to the covariance of a single observation of $(X,Y,Z)$
under $P_{\tau}$ and $Q$. By standard manipulations of densities, the density
of $P$ with respect to $Q$ is simply the ratio of their respective densities
with respect to the Lebesgue measure. We have
$\Sigma_{\tau}=\begin{pmatrix}\begin{pmatrix}1&\rho\\\
\rho&1\end{pmatrix}&\sqrt{\frac{\rho}{d}}\begin{pmatrix}\tau^{\top}\\\
\tau^{\top}\end{pmatrix}\\\
\sqrt{\frac{\rho}{d}}\begin{pmatrix}\tau&\tau\end{pmatrix}&I_{d}\end{pmatrix}$
and, letting $I_{d}$ denote the $d$-dimensional identity matrix,
$\Sigma_{Q}=\begin{pmatrix}\begin{pmatrix}1&\rho\\\ \rho&1\end{pmatrix}&0\\\
0&I_{d}\end{pmatrix}.$
The determinant of $\Sigma_{Q}$ is $1-\rho^{2}$ by Laplace-expanding the first
row. Letting $J_{2}$ denote the $2$-dimensional matrix of ones, we have
$\det(\Sigma_{\tau})=\det(I_{d})\det\left(\begin{pmatrix}1&\rho\\\
\rho&1\end{pmatrix}-\rho J_{2}\right)=(1-\rho)^{2}$
by Schur’s formula. Defining $f$ to be the density of $P$ with respect to $Q$,
we see that
$f(v)=\frac{1}{2^{d}}\frac{(1+\rho)^{n/2}}{(1-\rho)^{n/2}}\sum_{\tau\in\\{-1,1\\}^{d}}\exp\left(-\frac{1}{2}v^{\top}(\Gamma_{\tau}^{-1}-\Gamma_{Q}^{-1})v\right)$
since the determinants of $\Gamma_{\tau}$ and $\Gamma_{Q}$ are the
determinants of $\Sigma_{\tau}$ and $\Sigma_{Q}$ to the $n$th power. From this
we get that
$\displaystyle\int
f^{2}\,\mathrm{d}Q=\frac{1}{2^{2d}}\frac{(1+\rho)^{n}}{(1-\rho)^{n}}\sum_{\tau,\tau^{\prime}\in\\{-1,1\\}^{d}}\int\exp\left(-\frac{1}{2}v^{\top}(\Gamma_{\tau}^{-1}+\Gamma_{\tau^{\prime}}^{-1}-2\Gamma_{Q}^{-1})v\right)\,\mathrm{d}Q(v)=$
$\displaystyle\frac{1}{2^{2d}}\frac{(1+\rho)^{n}}{(1-\rho)^{n}}\frac{1}{\sqrt{(2\pi)^{n(d+2)}(1-\rho^{2})^{n}}}\sum_{\tau,\tau^{\prime}\in\\{-1,1\\}^{d}}\int\exp\left(-\frac{1}{2}v^{\top}(\Gamma_{\tau}^{-1}+\Gamma_{\tau^{\prime}}^{-1}-\Gamma_{Q}^{-1})v\right)\,\mathrm{d}\lambda_{n(d+2)}(v),$
where $\lambda_{n(d+2)}$ denotes the $n(d+2)$-dimensional Lebesgue measure.
Each integral is the integral of an unnormalised Gaussian density in
$\mathbb{R}^{n(d+2)}$, and thus we can simplify further to get
$\displaystyle\int f^{2}\,\mathrm{d}Q$
$\displaystyle=\frac{1}{2^{2d}}\frac{(1+\rho)^{n}}{(1-\rho)^{n}}\frac{1}{(1-\rho^{2})^{n/2}}\sum_{\tau,\tau^{\prime}\in\\{-1,1\\}^{d}}\sqrt{\det\left[(\Gamma_{\tau}^{-1}+\Gamma_{\tau^{\prime}}^{-1}-\Gamma_{Q}^{-1})^{-1}\right]}$
$\displaystyle=\frac{1}{2^{2d}}\frac{(1+\rho)^{n}}{(1-\rho)^{n}}\frac{1}{(1-\rho^{2})^{n/2}}\sum_{\tau,\tau^{\prime}\in\\{-1,1\\}^{d}}\det(\Gamma_{\tau}^{-1}+\Gamma_{\tau^{\prime}}^{-1}-\Gamma_{Q}^{-1})^{-1/2}$
$\displaystyle=\frac{1}{2^{2d}}\frac{(1+\rho)^{n}}{(1-\rho)^{n}}\frac{1}{(1-\rho^{2})^{n/2}}\sum_{\tau,\tau^{\prime}\in\\{-1,1\\}^{d}}\det(\Sigma_{\tau}^{-1}+\Sigma_{\tau^{\prime}}^{-1}-\Sigma_{Q}^{-1})^{-n/2},$
by again using the block diagonal structure of $\Gamma_{Q}$ and the
$\Gamma_{\tau}$’s. Recall that for a symmetric block matrix (with $C$ and the Schur complement $A-B^{\top}C^{-1}B$ invertible)
$\begin{pmatrix}A&B^{\top}\\\
B&C\end{pmatrix}^{-1}=\begin{pmatrix}(A-B^{\top}C^{-1}B)^{-1}&-(A-B^{\top}C^{-1}B)^{-1}B^{\top}C^{-1}\\\
-C^{-1}B(A-B^{\top}C^{-1}B)^{-1}&C^{-1}+C^{-1}B(A-B^{\top}C^{-1}B)^{-1}B^{\top}C^{-1}\end{pmatrix}.$
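As a small numerical sanity check (illustrative matrices chosen here, not taken from the paper), the block-inverse formula can be verified with NumPy:

```python
import numpy as np

# verify the symmetric block-matrix inverse formula on arbitrary
# (illustrative) positive definite blocks A, C and a small off-block B
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.3], [0.3, 2.0]])
B = 0.3 * rng.normal(size=(3, 2))
C = 4.0 * np.eye(3)

S = np.block([[A, B.T], [B, C]])
Ci = np.linalg.inv(C)
schur = np.linalg.inv(A - B.T @ Ci @ B)  # (A - B^T C^{-1} B)^{-1}
blockinv = np.block([
    [schur, -schur @ B.T @ Ci],
    [-Ci @ B @ schur, Ci + Ci @ B @ schur @ B.T @ Ci],
])
assert np.allclose(blockinv, np.linalg.inv(S))
```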
Using this, we see that
$\Sigma_{Q}^{-1}=\begin{pmatrix}\frac{1}{1-\rho^{2}}\begin{pmatrix}1&-\rho\\\
-\rho&1\end{pmatrix}&0\\\ 0&I_{d}\end{pmatrix}$
and
$\Sigma_{\tau}^{-1}=\begin{pmatrix}\frac{1}{1-\rho}I_{2}&-\frac{1}{1-\rho}\sqrt{\frac{\rho}{d}}\begin{pmatrix}\tau^{\top}\\\
\tau^{\top}\end{pmatrix}\\\
-\frac{1}{1-\rho}\sqrt{\frac{\rho}{d}}\begin{pmatrix}\tau&\tau\end{pmatrix}&I_{d}+\frac{2\rho}{(1-\rho)d}\tau\tau^{\top}\end{pmatrix}.$
Further,
$\Sigma_{\tau}^{-1}+\Sigma_{\tau^{\prime}}^{-1}-\Sigma_{Q}^{-1}=\begin{pmatrix}A&B^{\top}\\\
B&C\end{pmatrix},$
where
$\displaystyle A$
$\displaystyle:=\frac{1}{1-\rho^{2}}\begin{pmatrix}2\rho+1&\rho\\\
\rho&2\rho+1\end{pmatrix}$ $\displaystyle B$
$\displaystyle:=-\frac{1}{1-\rho}\sqrt{\frac{\rho}{d}}\begin{pmatrix}\tau+\tau^{\prime}&\tau+\tau^{\prime}\end{pmatrix}$
$\displaystyle C$
$\displaystyle:=I_{d}+\frac{2\rho}{(1-\rho)d}(\tau\tau^{\top}+\tau^{\prime}\tau^{\prime\top}).$
We may once more use Schur’s formula for the determinant of a block matrix to
find that
$\det(\Sigma_{\tau}^{-1}+\Sigma_{\tau^{\prime}}^{-1}-\Sigma_{Q}^{-1})=\det(C)\det(A-B^{\top}C^{-1}B).$
Defining $V=\begin{pmatrix}\tau&\tau^{\prime}\end{pmatrix}$, we note that
$C=I_{d}+\frac{2\rho}{(1-\rho)d}VV^{\top}$ and defining further
$M:=I_{2}+\frac{2\rho}{(1-\rho)d}V^{\top}V=\frac{1}{d(1-\rho)}\begin{pmatrix}d(1+\rho)&2\rho\langle\tau,\tau^{\prime}\rangle\\\
2\rho\langle\tau,\tau^{\prime}\rangle&d(1+\rho)\end{pmatrix}$
the Weinstein–Aronszajn identity yields that
$\det(C)=\det(M)=\frac{(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)(d(1+\rho)-2\rho\langle\tau,\tau^{\prime}\rangle)}{d^{2}(1-\rho)^{2}}.$
The Woodbury matrix identity yields that
$C^{-1}=I_{d}-\frac{2\rho}{(1-\rho)d}VM^{-1}V^{\top}.$
Hence,
$\det(A-B^{\top}C^{-1}B)=\det\left(A-B^{\top}B+\frac{2\rho}{(1-\rho)d}B^{\top}VM^{-1}V^{\top}B\right).$
Now
$M^{-1}=\frac{(1-\rho)d}{(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)(d(1+\rho)-2\rho\langle\tau,\tau^{\prime}\rangle)}\begin{pmatrix}d(1+\rho)&-2\rho\langle\tau,\tau^{\prime}\rangle\\\
-2\rho\langle\tau,\tau^{\prime}\rangle&d(1+\rho)\end{pmatrix}$
and
$B^{\top}V=-\frac{1}{1-\rho}\sqrt{\frac{\rho}{d}}(d+\langle\tau,\tau^{\prime}\rangle)J_{2},$
where $J_{2}$ is the $2$-dimensional matrix of ones. Thus,
$\frac{2\rho}{(1-\rho)d}B^{\top}VM^{-1}V^{\top}B=\frac{2\rho^{2}(d+\langle\tau,\tau^{\prime}\rangle)^{2}}{(1-\rho)^{3}d^{2}}J_{2}M^{-1}J_{2}=\frac{4\rho^{2}(d+\langle\tau,\tau^{\prime}\rangle)^{2}}{(1-\rho)^{2}d(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)}J_{2}.$
Since
$B^{\top}B=\frac{2\rho}{(1-\rho)^{2}d}(d+\langle\tau,\tau^{\prime}\rangle)J_{2}$
we get that
$\displaystyle\det(A-B^{\top}C^{-1}B)=\det\left(A+\left(\frac{4\rho^{2}(d+\langle\tau,\tau^{\prime}\rangle)^{2}}{(1-\rho)^{2}d(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)}-\frac{2\rho}{(1-\rho)^{2}d}(d+\langle\tau,\tau^{\prime}\rangle)\right)J_{2}\right)$
$\displaystyle=\det\left(A-\frac{2\rho(d+\langle\tau,\tau^{\prime}\rangle)}{(1-\rho)(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)}J_{2}\right)$
$\displaystyle=\frac{\det\left((d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)\begin{pmatrix}2\rho+1&\rho\\\
\rho&2\rho+1\end{pmatrix}-2\rho(d+\langle\tau,\tau^{\prime}\rangle)(1+\rho)J_{2}\right)}{(1-\rho)^{2}(1+\rho)^{2}(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)^{2}}$
$\displaystyle=\frac{\det\begin{pmatrix}(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)\rho+(1+\rho)(1-\rho)d&(1+\rho)(1-\rho)d-(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)\\\
(1+\rho)(1-\rho)d-(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)&(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)\rho+(1+\rho)(1-\rho)d\end{pmatrix}}{(1-\rho)^{2}(1+\rho)^{2}(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)^{2}}$
$\displaystyle=\frac{(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)(1+\rho)(\rho-1)+2(1+\rho)^{2}(1-\rho)d}{(1-\rho)^{2}(1+\rho)^{2}(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)}$
$\displaystyle=\frac{d(1+\rho)-2\rho\langle\tau,\tau^{\prime}\rangle}{(1-\rho)(1+\rho)(d(1+\rho)+2\rho\langle\tau,\tau^{\prime}\rangle)}$
and thus
$\det(\Sigma_{\tau}^{-1}+\Sigma_{\tau^{\prime}}^{-1}-\Sigma_{Q}^{-1})=\frac{(d(1+\rho)-2\rho\langle\tau,\tau^{\prime}\rangle)^{2}}{d^{2}(1-\rho)^{3}(1+\rho)}.$
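The determinant formula just derived can be checked numerically for illustrative values of $d$ and $\rho$ and randomly chosen sign vectors (a sketch, not part of the proof):

```python
import numpy as np

d, rho = 5, 0.3
rng = np.random.default_rng(1)
tau = rng.choice([-1.0, 1.0], size=d)
taup = rng.choice([-1.0, 1.0], size=d)

def sigma_tau(t):
    # single-observation covariance under P_tau
    top = np.array([[1.0, rho], [rho, 1.0]])
    cross = np.sqrt(rho / d) * np.vstack([t, t])
    return np.block([[top, cross], [cross.T, np.eye(d)]])

sigma_q = np.block([[np.array([[1.0, rho], [rho, 1.0]]), np.zeros((2, d))],
                    [np.zeros((d, 2)), np.eye(d)]])

lhs = np.linalg.det(np.linalg.inv(sigma_tau(tau))
                    + np.linalg.inv(sigma_tau(taup))
                    - np.linalg.inv(sigma_q))
ip = tau @ taup
rhs = (d * (1 + rho) - 2 * rho * ip) ** 2 / (d ** 2 * (1 - rho) ** 3 * (1 + rho))
assert np.isclose(lhs, rhs)
```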
Returning to the integral of $f^{2}$ with respect to $Q$, we get that
$\displaystyle\int f^{2}\,\mathrm{d}Q$
$\displaystyle=\frac{1}{2^{2d}}\frac{(1+\rho)^{n}}{(1-\rho)^{n}}\frac{1}{(1-\rho^{2})^{n/2}}\sum_{\tau,\tau^{\prime}\in\\{-1,1\\}^{d}}\frac{d^{n}\sqrt{(1-\rho)^{3n}(1+\rho)^{n}}}{|d(1+\rho)-2\rho\langle\tau,\tau^{\prime}\rangle|^{n}}$
$\displaystyle=\frac{1}{2^{2d}}(1+\rho)^{n}\sum_{\tau,\tau^{\prime}\in\\{-1,1\\}^{d}}\frac{d^{n}}{|d(1+\rho)-2\rho\langle\tau,\tau^{\prime}\rangle|^{n}}.$
For $\tau,\tau^{\prime}\in\\{-1,1\\}^{d}$,
$\langle\tau,\tau^{\prime}\rangle=2k-d$ where $k$ is the number of indices
where $\tau_{i}=\tau_{i}^{\prime}$. Thus instead of summing over
$\tau,\tau^{\prime}\in\\{-1,1\\}^{d}$, we can count the number of
$(\tau,\tau^{\prime})$-pairs where $\tau$ and $\tau^{\prime}$ agree in exactly
$k$ positions. For each $\tau$, there are $\binom{d}{k}$ elements $\tau^{\prime}$ in
$\\{-1,1\\}^{d}$ agreeing with $\tau$ in exactly $k$ positions and there are $2^{d}$
different $\tau$’s, hence
$\displaystyle\int f^{2}\,\mathrm{d}Q$
$\displaystyle=\frac{1}{2^{2d}}(1+\rho)^{n}\sum_{k=0}^{d}\frac{d^{n}\binom{d}{k}2^{d}}{|d(1+\rho)-2\rho(2k-d)|^{n}}$
$\displaystyle=(1+\rho)^{n}\sum_{k=0}^{d}\frac{d^{n}\binom{d}{k}}{2^{d}(d+\rho(3d-4k))^{n}}=(1+\rho)^{n}\sum_{k=0}^{d}\frac{\binom{d}{k}}{2^{d}(1+\rho(3-4k/d))^{n}}.$
The result now follows from Proposition 2 and Lemma 1.
To see that, for each fixed $n$, the bound converges to $\alpha$ as $d$ increases, let
$W_{d}$ be a random variable with a binomial distribution with probability
parameter $1/2$ and with $d$ trials and note that
$\sum_{k=0}^{d}\frac{\binom{d}{k}}{2^{d}(1+\rho(3-4k/d))^{n}}=\mathbb{E}\left((1+\rho(3-4W_{d}/d))^{-n}\right).$
By the Strong Law of Large Numbers (SLLN), $W_{d}/d\overset{a.s.}{\to}1/2$ and
thus $(1+\rho(3-4W_{d}/d))^{-n}\overset{a.s.}{\to}(1+\rho)^{-n}$. Since
$(1+\rho(3-4W_{d}/d))^{-n}\leq(1-\rho)^{-n}$, we get by the bounded
convergence theorem that
$\lim_{d\to\infty}\mathbb{E}\left((1+\rho(3-4W_{d}/d))^{-n}\right)=\mathbb{E}\left((1+\rho)^{-n}\right)=(1+\rho)^{-n},$
and hence the upper bound on the power converges to $\alpha$. ∎
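The behaviour of the bound in Theorem 6 can also be illustrated numerically; the values of $\alpha$, $\rho$ and $n$ below are arbitrary choices:

```python
import math

# upper bound from Theorem 6 on the power of a size-alpha test
def power_bound(alpha, rho, n, d):
    s = sum(math.comb(d, k) / (2 ** d * (1 + (3 - 4 * k / d) * rho) ** n)
            for k in range(d + 1))
    return alpha + 0.5 * math.sqrt(max(0.0, -1 + (1 + rho) ** n * s))

# for fixed n and rho the bound shrinks toward alpha as d grows
bounds = [power_bound(0.05, 0.5, 10, d) for d in (10, 100, 1000)]
assert bounds[0] > bounds[1] > bounds[2]
```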
We can generalise the previous result to the situation where $X$ and $Y$ are
of arbitrary finite dimension.
###### Theorem 7.
Let $Q$ be a distribution consisting of $n$ i.i.d. copies of jointly Gaussian
$(X,Y,Z)$ on $(\mathbb{R}^{d_{X}},\mathbb{R}^{d_{Y}},\mathbb{R}^{d_{Z}})$ for
some $d_{X},d_{Y},d_{Z}\in\mathbb{N}$ where $X$, $Y$ and $Z$ are all mean zero
with identity covariance matrix, $\mathrm{Cov}(X,Z)=\mathrm{Cov}(Y,Z)=0$ and
$\mathrm{Cov}(X,Y)=R$ for some rectangular diagonal matrix $R$ with diagonal
entries $\rho_{1},\dots,\rho_{r}\in(0,1)$, where $r=\min(d_{X},d_{Y})$.
Consider the testing problem described in Section 2 with
$\mathcal{H}_{X}=\mathbb{R}^{d_{X}}$ and $\mathcal{H}_{Z}=\mathbb{R}^{d_{Z}}$
and let $\psi$ be the test function of a size $\alpha$ test over
$\mathcal{P}_{0}^{Q}$. Assume that $d_{Z}\geq r$ and let $d=\lfloor
d_{Z}/r\rfloor$. Letting $\beta$ denote the power of $\psi$ against $Q$, we
have
$\beta\leq\alpha+\frac{1}{2}\sqrt{-1+\prod_{i=1}^{r}\left((1+\rho_{i})^{n}\sum_{k=0}^{d}\frac{\binom{d}{k}}{2^{d}(1+(3-4k/d)\rho_{i})^{n}}\right)}.$
In particular for fixed $n$ the upper bound converges to $\alpha$ as $d_{Z}$
increases.
###### Proof.
Assume without loss of generality that $d_{X}\geq d_{Y}$. The proof follows a
similar idea to the proof of Theorem 6. In what follows we consider a
different ordering of the variables than the natural one given by $(X,Y,Z)$.
We consider $r+1$ blocks, where the first $r$ blocks are
$(X_{i},Y_{i},Z_{(i-1)d+1},\dots,Z_{id})$ for $i\in\\{1,\dots,r\\}$ and the
final block consists of the remaining components of $X$ and $Z$. When we
consider $n$ i.i.d. copies, we will again reorder the variables such that we
consider each block separately. As a consequence of doing this, the covariance
matrix of $n$ i.i.d. copies under $Q$, $\Xi_{Q}$, can be written as a block-
diagonal matrix with $r$ $n(d+2)\times n(d+2)$ blocks $\Gamma_{Q,i}$ and a
final identity matrix block. Each of the $\Gamma_{Q,i}$’s is again a block-
diagonal matrix consisting of $n$ identical blocks $\Sigma_{Q,i}$ of the form
$\Sigma_{Q,i}=\begin{pmatrix}\begin{pmatrix}1&\rho_{i}\\\
\rho_{i}&1\end{pmatrix}&0\\\ 0&I_{d}\end{pmatrix}.$
Let now $\mathcal{T}=(\\{-1,1\\}^{d})^{r}$ and for each
$\tau=(\tau_{1},\dots,\tau_{r})\in\mathcal{T}$ let $P_{\tau}$ denote the
Gaussian distribution consisting of $n$ i.i.d. copies of jointly Gaussian
$(X,Y,Z)$ where $X$, $Y$ and $Z$ are mean zero with identity covariance,
$\mathrm{Cov}(X,Y)=R$ and $\mathrm{Cov}(X,Z)=\mathrm{Cov}(Y,Z)=0$ except for
$\mathrm{Cov}(X_{i},(Z_{(i-1)d+1},\dots,Z_{id}))=\mathrm{Cov}(Y_{i},(Z_{(i-1)d+1},\dots,Z_{id}))=\sqrt{\frac{\rho_{i}}{d}}\tau_{i}^{\top}$
for $i\in\\{1,\dots,r\\}$. Arranging the random variables as before, the
covariance matrix of $n$ i.i.d. copies under $P_{\tau}$, $\Xi_{\tau}$, is a
block-diagonal matrix with $r$ $n(d+2)\times n(d+2)$ blocks $\Gamma_{\tau,i}$
and a final identity matrix block. Each of the $\Gamma_{\tau,i}$’s is again a
block-diagonal matrix consisting of $n$ identical blocks $\Sigma_{\tau,i}$ of
the form
$\Sigma_{\tau,i}=\begin{pmatrix}\begin{pmatrix}1&\rho_{i}\\\
\rho_{i}&1\end{pmatrix}&\sqrt{\frac{\rho_{i}}{d}}\tau_{i}^{\top}\\\
\sqrt{\frac{\rho_{i}}{d}}\tau_{i}&I_{d}\end{pmatrix}.$
Clearly $X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ under $P_{\tau}$ for
every $\tau\in\mathcal{T}$ and thus letting
$P:=\frac{1}{2^{dr}}\sum_{\tau\in\mathcal{T}}P_{\tau},$
we note that $P$ is in the closed convex hull of the null distributions.
Letting $f$ be the density of $P$ with respect to $Q$, we see that
$f(v)=\frac{1}{2^{dr}}\left(\prod_{i=1}^{r}\frac{1+\rho_{i}}{1-\rho_{i}}\right)^{n/2}\sum_{\tau\in\mathcal{T}}\exp\left(-\frac{1}{2}v^{\top}(\Xi_{\tau}^{-1}-\Xi_{Q}^{-1})v\right),$
since this is simply the ratio of their respective densities with respect to
the Lebesgue measure. We can now repeat the argument of the proof of Theorem 6
to obtain
$\int
f^{2}\,\mathrm{d}Q=\frac{1}{2^{2dr}}\prod_{i=1}^{r}\left(\frac{1+\rho_{i}}{(1-\rho_{i})\sqrt{1-\rho_{i}^{2}}}\right)^{n}\sum_{\tau,\tau^{\prime}\in\mathcal{T}}\left(\sqrt{\det(\Xi_{\tau}^{-1}+\Xi_{\tau^{\prime}}^{-1}-\Xi_{Q}^{-1})}\right)^{-1}.$
The determinant can be written as
$\det(\Xi_{\tau}^{-1}+\Xi_{\tau^{\prime}}^{-1}-\Xi_{Q}^{-1})=\prod_{i=1}^{r}\det(\Gamma_{\tau,i}^{-1}+\Gamma_{\tau^{\prime},i}^{-1}-\Gamma_{Q,i}^{-1})$
by the block-diagonal structure of the $\Xi$’s. In the proof of Theorem 6, we
derive that
$\det(\Gamma_{\tau,i}^{-1}+\Gamma_{\tau^{\prime},i}^{-1}-\Gamma_{Q,i}^{-1})=\left(\frac{(d(1+\rho_{i})-2\rho_{i}\langle\tau_{i},\tau_{i}^{\prime}\rangle)^{2}}{d^{2}(1+\rho_{i})(1-\rho_{i})^{3}}\right)^{n}.$
Therefore,
$\int
f^{2}\,\mathrm{d}Q=\frac{1}{2^{2dr}}\left(\prod_{j=1}^{r}(1+\rho_{j})^{n}\right)\sum_{\tau,\tau^{\prime}\in\mathcal{T}}\prod_{i=1}^{r}\frac{d^{n}}{|d(1+\rho_{i})-2\rho_{i}\langle\tau_{i},\tau_{i}^{\prime}\rangle|^{n}}.$
Since each factor of the second product only depends on the $i$th component of
$\tau$ and $\tau^{\prime}$, we can interchange the product and sum and apply
the same counting arguments as in Theorem 6 to get that
$\int
f^{2}\,\mathrm{d}Q=\prod_{i=1}^{r}\left((1+\rho_{i})^{n}\sum_{k=0}^{d}\frac{\binom{d}{k}}{2^{d}(1+(3-4k/d)\rho_{i})^{n}}\right)$
as desired. We can repeat the same SLLN-based limiting arguments as in Theorem
6 to show that as $d$ increases the integral will converge to $1$ and hence
the power is bounded by the size in the limit. ∎
Having shown that for each $n$ and $d$, we have an upper bound on the power of
a Gaussian conditional independence test against a simple alternative, we can
now show that this also holds for Gaussian conditional independence testing
problems against other $Q$.
###### Lemma 3.
Let $Q\in\mathcal{Q}$ be a distribution consisting of $n$ i.i.d. copies of
jointly Gaussian $(X,Y,Z)$ on
$(\mathbb{R}^{d_{X}},\mathbb{R}^{d_{Y}},\mathbb{R}^{d_{Z}})$ with non-singular
covariance for some $d_{X},d_{Y},d_{Z}\in\mathbb{N}$. Consider the testing
problem described in Section 2 with $\mathcal{H}_{X}=\mathbb{R}^{d_{X}}$ and
$\mathcal{H}_{Z}=\mathbb{R}^{d_{Z}}$ and let $\psi$ be the test function of a
size $\alpha$ test over $\mathcal{P}_{0}^{Q}$ with power $\beta$ against $Q$.
Then there exists a $d_{X}\times d_{Y}$-rectangular diagonal matrix $R$ with
diagonal entries $\rho_{1},\dots,\rho_{r}\in(0,1)$, a distribution $\tilde{Q}$
consisting of $n$ i.i.d. copies of jointly Gaussian
$(\tilde{X},\tilde{Y},\tilde{Z})$ where $\tilde{X}$, $\tilde{Y}$ and
$\tilde{Z}$ are all mean zero with identity covariance matrix,
$\mathrm{Cov}(\tilde{X},\tilde{Z})=\mathrm{Cov}(\tilde{Y},\tilde{Z})=0$ and
$\mathrm{Cov}(\tilde{X},\tilde{Y})=R$, and a size $\alpha$ test over
$\mathcal{P}_{0}^{\tilde{Q}}$ with power $\beta$ against $\tilde{Q}$.
###### Proof.
Let $\psi$ denote the test function of the test with power $\beta$ against $Q$
and $\mu$ and $\Sigma$ denote the mean and covariance matrix of $(X,Y,Z)$
under $Q$. We construct a new test with test function $\tilde{\psi}$ performed
by first applying a transformation $f$ to each sample of the data and then
applying $\psi$. The transformation
$f:\mathbb{R}^{d_{X}+d_{Y}+d_{Z}}\to\mathbb{R}^{d_{X}+d_{Y}+d_{Z}}$ is an
affine transformation given by $f(v)=Av+\mu$ where
$A=\begin{pmatrix}D&M\\\ 0&B\end{pmatrix}$
for a block-diagonal matrix $D$ consisting of a $d_{X}\times d_{X}$ matrix
$D_{X}$ and $d_{Y}\times d_{Y}$ matrix $D_{Y}$, a $(d_{X}+d_{Y})\times d_{Z}$
matrix $M$ and a full rank $d_{Z}\times d_{Z}$ matrix $B$.
Note first that such a transformation preserves conditional independence. Let
$(X^{0},Y^{0},Z^{0})$ be jointly Gaussian with
$X^{0}\mbox{${}\perp\mkern-11.0mu\perp{}$}Y^{0}\,|\,Z^{0}$, joint mean
$\mu^{0}$ and covariance matrix $\Sigma^{0}$. The distribution of
$(\check{X}_{0},\check{Y}_{0},\check{Z}_{0}):=f(X^{0},Y^{0},Z^{0})$ is again
Gaussian by the finite-dimensional version of Proposition 6 and has mean
$A\mu^{0}+\mu$ and covariance
$A\Sigma^{0}A^{\top}=\begin{pmatrix}D\Sigma_{XY}^{0}D^{\top}+M\Sigma_{Z,XY}^{0}D^{\top}+D\Sigma_{XY,Z}^{0}M^{\top}+M\Sigma_{Z}^{0}M^{\top}&D\Sigma_{XY,Z}^{0}B^{\top}+M\Sigma_{Z}^{0}B^{\top}\\\
B\Sigma_{Z,XY}^{0}D^{\top}+B\Sigma_{Z}^{0}M^{\top}&B\Sigma_{Z}^{0}B^{\top}\end{pmatrix},$
where $\Sigma_{XY}^{0}=\mathrm{Cov}((X^{0},Y^{0}))$,
$\Sigma_{XY,Z}^{0}=\Sigma_{Z,XY}^{0}=\mathrm{Cov}((X^{0},Y^{0}),Z^{0})$ and
$\Sigma_{Z}^{0}=\mathrm{Cov}(Z^{0})$. Using the finite-dimensional version of
Proposition 7, we get that the conditional distribution of
$(\check{X}_{0},\check{Y}_{0})$ given $\check{Z}_{0}$ is again Gaussian with
covariance matrix
$\displaystyle
D\Sigma_{XY}^{0}D^{\top}+M\Sigma_{Z,XY}^{0}D^{\top}+D\Sigma_{XY,Z}^{0}M^{\top}+M\Sigma_{Z}^{0}M^{\top}$
$\displaystyle\qquad-(D\Sigma_{XY,Z}^{0}B^{\top}+M\Sigma_{Z}^{0}B^{\top})(B\Sigma_{Z}^{0}B^{\top})^{-1}(B\Sigma_{Z,XY}^{0}D^{\top}+B\Sigma_{Z}^{0}M^{\top})$
$\displaystyle=$ $\displaystyle
D(\Sigma^{0}_{XY}-\Sigma_{XY,Z}^{0}(\Sigma_{Z}^{0})^{-1}\Sigma_{Z,XY}^{0})D^{\top}.$
The matrix $\Sigma^{0}_{XY}-\Sigma_{XY,Z}^{0}(\Sigma_{Z}^{0})^{-1}\Sigma_{Z,XY}^{0}$
is the conditional covariance matrix of $(X^{0},Y^{0})$ given $Z^{0}$ and is
block-diagonal since
$X^{0}\mbox{${}\perp\mkern-11.0mu\perp{}$}Y^{0}\,|\,Z^{0}$ by the multivariate
analogue of Proposition 5. By the same proposition, since $D$ is block-
diagonal, we see that the conditional covariance of
$(\check{X}_{0},\check{Y}_{0})$ given $\check{Z}_{0}$ is block-diagonal and
hence
$\check{X}_{0}\mbox{${}\perp\mkern-11.0mu\perp{}$}\check{Y}_{0}\,|\,\check{Z}_{0}$
as desired.
Let now
$\Sigma_{X|Z}^{-1/2}\Sigma_{XY\,|\,Z}\Sigma_{Y\,|\,Z}^{-1/2}=USV^{\top}$
be the singular-value decomposition of the normalised conditional covariance
of $X$ and $Y$ given $Z$ under $Q$. The normalisation ensures that $S$ is a
rectangular diagonal matrix with diagonal entries in the open unit interval.
If we let
$\displaystyle B$ $\displaystyle:=\Sigma_{Z}^{1/2},\quad
M:=\begin{pmatrix}\Sigma_{X,Z}\Sigma_{Z}^{-1/2}\\\
\Sigma_{Y,Z}\Sigma_{Z}^{-1/2}\end{pmatrix}$ $\displaystyle D$
$\displaystyle:=\begin{pmatrix}\Sigma_{X\,|\,Z}^{1/2}U&0\\\
0&\Sigma_{Y\,|\,Z}^{1/2}V\end{pmatrix},\quad R:=S$
and $(\check{X},\check{Y},\check{Z})=f((\tilde{X},\tilde{Y},\tilde{Z}))$ where
$(\tilde{X},\tilde{Y},\tilde{Z})\sim\tilde{Q}$, then Proposition 6 yields that
$(\check{X},\check{Y},\check{Z})\sim Q$ and hence when applying $\psi$, we
have power $\beta$ by assumption. Since $f$ also transforms a null
distribution with identity covariance into a null distribution where $Z$
has mean $\mu_{Z}$ and covariance $\Sigma_{Z}$, we have the desired result. ∎
### A.2 Hardness of infinite-dimensional Hilbertian Gaussian conditional
independence testing
In this section we consider the testing problem described in Section 2 with
$\mathcal{H}_{X}$ and $\mathcal{H}_{Z}$ infinite-dimensional and separable. We
will show that the testing problem against $Q$ is hard for any
$Q\in\mathcal{Q}$. In particular, this includes the typical functional data
setting where $\mathcal{H}_{Z}=L^{2}([0,1],\mathbb{R})$. It follows that the
Gaussian conditional independence problem is hard in the same settings when
the null distributions are not restricted to match the marginals of $Q$.
#### A.2.1 Preliminary results
In this section, we consider finite-dimensional $\mathcal{H}_{X}$ and
infinite-dimensional $\mathcal{H}_{Z}$. We will need a lemma using the theory
of conditional Hilbertian Gaussian distributions from Section A.3.
###### Lemma 4.
Let $(X,Y,Z)$ be jointly Gaussian on
$\mathbb{R}^{d_{X}}\times\mathbb{R}^{d_{Y}}\times\mathcal{H}$ and assume that
the covariance operator of $Z$ is injective. Then there exists a basis
$(e_{k})_{k\in\mathbb{N}}$ of $\mathcal{H}$ such that
$(X,Y)\mbox{${}\perp\mkern-11.0mu\perp{}$}Z_{d_{X}+d_{Y}+1},\ldots\,|\,Z_{1},\ldots,Z_{d_{X}+d_{Y}}$
where $Z_{k}:=\langle Z,e_{k}\rangle$.
###### Proof.
Note that $\mathbb{R}^{d_{X}}\times\mathbb{R}^{d_{Y}}\times\mathcal{H}_{Z}$ is
itself a Hilbert space and decompose it as
$(\mathbb{R}^{d_{X}}\times\mathbb{R}^{d_{Y}})\oplus\mathcal{H}_{Z}$. Let
$\mathscr{C}_{Z}:=\mathrm{Cov}(Z)$, $\mathscr{C}_{(X,Y)}:=\mathrm{Cov}((X,Y))$
(the covariance of the joint vector $(X,Y)$) and
$\mathscr{C}_{(X,Y),Z}:=\mathrm{Cov}((X,Y),Z)$. We can apply Proposition 7 to
see that $(X,Y)$ conditional on $Z$ is Gaussian with mean
$\mathscr{C}_{(X,Y),Z}\mathscr{C}_{Z}^{\dagger}Z$ and covariance operator
$\mathscr{C}_{(X,Y)}-\mathscr{C}_{(X,Y),Z}\mathscr{C}_{Z}^{\dagger}\mathscr{C}_{(X,Y),Z}^{*}$.
The operator $\mathscr{A}:=\mathscr{C}_{(X,Y),Z}\mathscr{C}_{Z}^{\dagger}$
maps from $\mathcal{H}$ to $\mathbb{R}^{d_{X}}\times\mathbb{R}^{d_{Y}}$ and
thus is at most a rank $d_{X}+d_{Y}$ operator. By Hsing2015 this implies that
the rank of $\mathscr{A}^{*}$ is also at most $d_{X}+d_{Y}$. Furthermore,
Hsing2015 yields that
$\mathcal{H}=\textrm{Ker}(\mathscr{A})\oplus\textrm{Im}(\mathscr{A}^{*})$.
Using this decomposition we can write
$Z=(Z_{\textrm{Ker}(\mathscr{A})},Z_{\textrm{Im}(\mathscr{A}^{*})})$ and note
that by construction
$\mathscr{A}Z=\mathscr{A}Z_{\textrm{Im}(\mathscr{A}^{*})}$, and thus the
conditional distribution of $(X,Y)$ given $Z$ depends only on
$Z_{\textrm{Im}(\mathscr{A}^{*})}$. In total, we have shown by Proposition 4
that
$(X,Y)\mbox{${}\perp\mkern-11.0mu\perp{}$}Z_{\textrm{Ker}(\mathscr{A})}\,|\,Z_{\textrm{Im}(\mathscr{A}^{*})}$.
Letting $r$ denote the rank of $\mathscr{A}^{*}$, if we start with a basis for
$\textrm{Im}(\mathscr{A}^{*})$ and append vectors to form a basis for
$\mathcal{H}$ using the Gram–Schmidt procedure, we get a basis where
$(X,Y)\mbox{${}\perp\mkern-11.0mu\perp{}$}Z_{r+1},\dots\,|\,Z_{1},\dots,Z_{r}.$
Since $r\leq d_{X}+d_{Y}$, the weak union property of conditional independence
yields
$(X,Y)\mbox{${}\perp\mkern-11.0mu\perp{}$}Z_{d_{X}+d_{Y}+1},\dots\,|\,Z_{1},\dots,Z_{d_{X}+d_{Y}},$
as desired. ∎
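The rank argument in the proof can be illustrated numerically by truncating $\mathcal{H}$ to a finite dimension; the dimensions and covariance below are illustrative assumptions, not part of the formal argument.

```python
import numpy as np

rng = np.random.default_rng(1)
d_xy, d_z = 3, 10                # finite truncation of H (for illustration only)
L = rng.standard_normal((d_xy + d_z, d_xy + d_z))
S = L @ L.T + np.eye(d_xy + d_z)              # joint covariance of ((X, Y), Z)
C_xyz, C_z = S[:d_xy, d_xy:], S[d_xy:, d_xy:]

A = C_xyz @ np.linalg.inv(C_z)                # finite-dimensional analogue of the operator A
assert np.linalg.matrix_rank(A) <= d_xy       # rank is at most d_X + d_Y

# The conditional mean A z depends on z only through its projection onto
# Im(A*), i.e. onto the row space of A.
P = np.linalg.pinv(A) @ A                     # orthogonal projector onto the row space
z = rng.standard_normal(d_z)
assert np.allclose(A @ z, A @ (P @ z))
```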
Using this lemma and Lemma 3 and Theorem 7 from the previous section, we can
prove the hardness result for finite-dimensional $\mathcal{H}_{X}$ and
$\mathcal{H}_{Y}$.
###### Theorem 8.
Let $Q\in\mathcal{Q}$ be a distribution consisting of $n$ i.i.d. copies of a
jointly Gaussian $(X,Y,Z)$ on
$(\mathbb{R}^{d_{X}},\mathbb{R}^{d_{Y}},\mathcal{H}_{Z})$ with injective covariance operator, for
$d_{X},d_{Y}\in\mathbb{N}$ and any infinite-dimensional and separable
$\mathcal{H}_{Z}$. Consider the testing problem described in Section 2 with
$\mathcal{H}_{X}=\mathbb{R}^{d_{X}}$ and $\mathcal{H}_{Z}$ as above and let
$\psi$ be the test function of a size $\alpha$ test over
$\mathcal{P}_{0}^{Q}$. Then $\psi$ has power at most $\alpha$ against $Q$.
###### Proof.
Assume for contradiction that $\psi$ is a test of size $\alpha$ over
$\mathcal{P}_{0}^{Q}$ with power $\alpha+\varepsilon$ for some
$\varepsilon>0$ against $Q$. Let $(X,Y,Z)$ be distributed as one of the $n$
i.i.d. copies constituting $Q$. By Lemma 4, we can express $Z$ in a basis
$(e_{k})_{k\in\mathbb{N}}$ such that defining $Z_{k}=\langle Z,e_{k}\rangle$,
we have
$(X,Y)\mbox{${}\perp\mkern-11.0mu\perp{}$}Z_{d_{X}+d_{Y}+1},\dots\,|\,Z_{1},\dots,Z_{d_{X}},Z_{d_{X}+1},\dots,Z_{d_{X}+d_{Y}}.$
By the weak union property of conditional independence, this implies that
$(X,Y)\mbox{${}\perp\mkern-11.0mu\perp{}$}Z_{d+1},\dots\,|\,Z_{1},\dots,Z_{d}$
for any $d\geq d_{X}+d_{Y}$.
Choose now an arbitrary $d\geq d_{X}+d_{Y}$ and let $\tilde{Q}$ denote the
distribution of $n$ i.i.d. copies of $(X,Y,Z_{1},\dots,Z_{d})$ under $Q$.
Consider the testing problem described in Section 2 with
$\mathcal{H}_{X}=\mathbb{R}^{d_{X}}$ and $\mathcal{H}_{Z}=\mathbb{R}^{d}$. We
can construct a test in this setting by defining new observations
$(\check{X},\check{Y},\check{Z})$ with values in
$(\mathbb{R}^{d_{X}},\mathbb{R}^{d_{Y}},\mathcal{H}_{Z})$ and applying $\psi$.
We form the new observations by setting $\check{X}:=\tilde{X}$,
$\check{Y}:=\tilde{Y}$ and
$\check{Z}:=(\tilde{Z}_{1},\dots,\tilde{Z}_{d},Z^{\circ}_{d+1},Z^{\circ}_{d+2},\dots)$,
where $Z^{\circ}_{d+1},Z^{\circ}_{d+2},\dots$ are sampled from the conditional
distribution $Z_{d+1},Z_{d+2},\dots\,|\,Z_{1}=\tilde{Z}_{1},\dots,Z_{d}=\tilde{Z}_{d}$. If the original sample is from a distribution in
$\mathcal{P}_{0}^{\tilde{Q}}$ then the modified sample will be from a null
distribution in $\mathcal{P}_{0}^{Q}$, thus the test has size $\alpha$ over
$\mathcal{P}_{0}^{\tilde{Q}}$. Similarly, if
$(\tilde{X},\tilde{Y},\tilde{Z})\sim\tilde{Q}$, the modified sample will have
distribution $Q$ and hence the test has power $\alpha+\varepsilon$ against
$\tilde{Q}$.
By Lemma 3 this implies the existence of a $d_{X}\times d_{Y}$ rectangular diagonal
matrix $R$ with diagonal entries in the open unit interval, a Gaussian
distribution $Q^{\prime}$ on
$(\mathbb{R}^{d_{X}},\mathbb{R}^{d_{Y}},\mathbb{R}^{d})$ where if
$(X^{\prime},Y^{\prime},Z^{\prime})\sim Q^{\prime}$, $X^{\prime}$,
$Y^{\prime}$ and $Z^{\prime}$ are mean zero with identity covariance matrix,
$\mathrm{Cov}(X^{\prime},Z^{\prime})=\mathrm{Cov}(Y^{\prime},Z^{\prime})=0$
and $\mathrm{Cov}(X^{\prime},Y^{\prime})=R$, and a test with size $\alpha$
over $\mathcal{P}_{0}^{Q^{\prime}}$ with power $\alpha+\varepsilon$ against
$Q^{\prime}$. Since $d$ was arbitrary, this contradicts Theorem 7. ∎
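The augmentation step in the proof, extending a $d$-dimensional observation by sampling the remaining coordinates from their conditional law, can be sketched in finite dimensions (a finite tail stands in for the infinite sequence; all dimensions and the covariance are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(2)
d, d_tail = 4, 6        # observed coordinates and a finite tail truncation (illustrative)
L = rng.standard_normal((d + d_tail, d + d_tail))
S = L @ L.T + np.eye(d + d_tail)              # covariance of (Z_1..Z_d, Z_{d+1}, ...)
S11, S12, S22 = S[:d, :d], S[:d, d:], S[d:, d:]

z_obs = rng.standard_normal(d)                # plays the role of (tilde Z_1, ..., tilde Z_d)
# Gaussian conditioning: sample the tail given the observed head.
mu_c = S12.T @ np.linalg.solve(S11, z_obs)
Sig_c = S22 - S12.T @ np.linalg.solve(S11, S12)
z_tail = rng.multivariate_normal(mu_c, Sig_c)

z_check = np.concatenate([z_obs, z_tail])     # the augmented observation (check Z)
assert z_check.shape == (d + d_tail,)
```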
#### A.2.2 Proofs of Theorem 1 and Proposition 1
In this section we prove Theorem 1 and Proposition 1. We do this by extending
the results from the previous section to the situation where at most one of
$X$ and $Y$ are infinite-dimensional.
###### Lemma 5.
Let $(X,Y,Z)$ be jointly Gaussian on
$\mathbb{R}^{d_{X}}\times\mathcal{H}_{Y}\times\mathcal{H}_{Z}$ and assume that
the covariance operator of $(Y,Z)$ is injective. Then there exists a basis
$(e_{k})_{k\in\mathbb{N}}$ of $\mathcal{H}_{Y}$ such that
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y_{d_{X}+1},\dots\,|\,Y_{1},\dots,Y_{d_{X}},Z$
where $Y_{k}:=\langle Y,e_{k}\rangle$.
###### Proof.
Note that $\mathbb{R}^{d_{X}}\times\mathcal{H}_{Y}\times\mathcal{H}_{Z}$ is
again a Hilbert space and decompose it as
$\mathbb{R}^{d_{X}}\oplus(\mathcal{H}_{Y}\times\mathcal{H}_{Z})$. Let
$\mathscr{C}_{(Y,Z)}:=\mathrm{Cov}((Y,Z))$ (the covariance of the joint vector
$(Y,Z)$), $\mathscr{C}_{X}:=\mathrm{Cov}(X)$ and
$\mathscr{C}_{X,(Y,Z)}:=\mathrm{Cov}(X,(Y,Z))$. We can apply Proposition 7 to
see that $X$ conditional on $(Y,Z)$ is Gaussian with mean
$\mathscr{C}_{X,(Y,Z)}\mathscr{C}_{(Y,Z)}^{\dagger}(Y,Z)$ and covariance
operator
$\mathscr{C}_{X}-\mathscr{C}_{X,(Y,Z)}\mathscr{C}_{(Y,Z)}^{\dagger}\mathscr{C}_{X,(Y,Z)}^{*}$.
The operator $\mathscr{A}=\mathscr{C}_{X,(Y,Z)}\mathscr{C}_{(Y,Z)}^{\dagger}$
maps from $\mathcal{H}_{Y}\times\mathcal{H}_{Z}$ to $\mathbb{R}^{d_{X}}$ and
thus is at most a rank $d_{X}$ operator. By Hsing2015 this implies that the
rank of $\mathscr{A}^{*}$ is also at most $d_{X}$. Furthermore, Hsing2015
yields that
$\mathcal{H}_{Y}\times\mathcal{H}_{Z}=\textrm{Ker}(\mathscr{A})\oplus\textrm{Im}(\mathscr{A}^{*})$.
Using this decomposition we can write
$(Y,Z)=((Y,Z)_{\textrm{Ker}(\mathscr{A})},(Y,Z)_{\textrm{Im}(\mathscr{A}^{*})})$
and note that by construction
$\mathscr{A}(Y,Z)=\mathscr{A}(Y,Z)_{\textrm{Im}(\mathscr{A}^{*})}$, and thus the
conditional distribution of $X$ given $(Y,Z)$ depends only on
$(Y,Z)_{\textrm{Im}(\mathscr{A}^{*})}$. In total, we have shown by Proposition
4 that
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}(Y,Z)_{\textrm{Ker}(\mathscr{A})}\,|\,(Y,Z)_{\textrm{Im}(\mathscr{A}^{*})}$
which implies by the weak union property of conditional independence that
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y_{\textrm{Ker}(\mathscr{A})}\,|\,Y_{\textrm{Im}(\mathscr{A}^{*})},Z$.
Any basis of $\textrm{Im}(\mathscr{A}^{*})$ will consist of at most $d_{X}$
elements. Forming the span of the $\mathcal{H}_{Y}$-components of the basis
vectors will yield a subspace of $\mathcal{H}_{Y}$ that contains the
projection onto $\mathcal{H}_{Y}$ of $\textrm{Im}(\mathscr{A}^{*})$. Thus,
letting $r$ denote the rank of $\mathscr{A}^{*}$, we can append vectors and
form a basis for $\mathcal{H}_{Y}$ using the Gram–Schmidt procedure to get a
basis where
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y_{r+1},\dots\,|\,Y_{1},\dots,Y_{r},Z.$
Since $r\leq d_{X}$, the weak union property of conditional independence
yields
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y_{d_{X}+1},\dots\,|\,Y_{1},\dots,Y_{d_{X}},Z$
as desired. ∎
We are now ready to prove Theorem 1.
###### Proof of Theorem 1.
Assume without loss of generality that $\mathcal{H}_{X}$ is finite-dimensional,
so that $\mathcal{H}_{X}$ is isomorphic to $\mathbb{R}^{d_{X}}$, where $d_{X}$
is the dimension of $\mathcal{H}_{X}$; we write
$\mathcal{H}_{X}=\mathbb{R}^{d_{X}}$ in what follows.
Assume for contradiction that $\psi$ is a test of size $\alpha$ over
$\mathcal{P}_{0}^{Q}$ with power $\alpha+\varepsilon$ for some
$\varepsilon>0$ against $Q$. Let $(X,Y,Z)$ be
distributed as one of the $n$ i.i.d. copies constituting $Q$. By Lemma 5 we
can express $Y$ in a basis $(e_{k})_{k\in\mathbb{N}}$ such that defining
$Y_{k}=\langle Y,e_{k}\rangle$, we have
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y_{d_{X}+1},\dots\,|\,Y_{1},\dots,Y_{d_{X}},Z.$
By the weak union property of conditional independence, this implies that
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y_{d+1},\dots\,|\,Y_{1},\dots,Y_{d},Z$
for any $d\geq d_{X}$.
Choose now an arbitrary $d\geq d_{X}$ and let $\tilde{Q}$ denote the
distribution of $n$ i.i.d. copies of $(X,Y_{1},\dots,Y_{d},Z)$ under $Q$.
Consider the testing problem described in Section 2 with
$\mathcal{H}_{X}=\mathbb{R}^{d_{X}}$, $\mathcal{H}_{Y}=\mathbb{R}^{d}$ and $\mathcal{H}_{Z}$ as above. We can
construct a test in this setting by defining new observations
$(\check{X},\check{Y},\check{Z})$ with values in
$(\mathbb{R}^{d_{X}},\mathcal{H}_{Y},\mathcal{H}_{Z})$ and applying $\psi$. We
form the new observations by setting $\check{X}:=\tilde{X}$,
$\check{Z}:=\tilde{Z}$ and
$\check{Y}:=(\tilde{Y}_{1},\dots,\tilde{Y}_{d},Y^{\circ}_{d+1},Y^{\circ}_{d+2},\dots)$,
where $Y^{\circ}_{d+1},Y^{\circ}_{d+2},\ldots$ are sampled from the
conditional distribution
$Y_{d+1},Y_{d+2},\dots\,|\,Y_{1}=\tilde{Y}_{1},\ldots,Y_{d}=\tilde{Y}_{d},Z=\tilde{Z}$.
If the original sample is from a distribution in
$\mathcal{P}_{0}^{\tilde{Q}}$, then the modified sample will be from a null
distribution in $\mathcal{P}_{0}^{Q}$, thus the test has size $\alpha$ over
$\mathcal{P}_{0}^{\tilde{Q}}$. Similarly, if
$(\tilde{X},\tilde{Y},\tilde{Z})\sim\tilde{Q}$, the modified sample will have
distribution $Q$ and hence the test has power $\alpha+\varepsilon$ against
$\tilde{Q}$, the distribution of $(X,Y_{1},\dots,Y_{d},Z)$. But this contradicts Theorem 8. ∎
A similar strategy can be employed to prove Proposition 1.
###### Proof of Proposition 1.
We can repeat the arguments of Theorem 8 and Theorem 1 without using Lemma 4
and Lemma 5 since we can use the basis $(e_{k})_{k\in\mathbb{N}}$ instead. ∎
### A.3 Auxiliary results about conditional distributions on Hilbert spaces
Let us first recall how to formally define a conditional distribution. We
follow Dudley2002 and Roenn-Nielsen2014.
###### Definition 1.
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space, let
$\mathcal{D}$ be a sub-$\sigma$-algebra of $\mathcal{F}$ and let
$\mathbb{P}_{|\mathcal{D}}$ denote the restriction of $\mathbb{P}$ to
$\mathcal{D}$. Let $X$ be a random variable defined on
$(\Omega,\mathcal{F},\mathbb{P})$ mapping into a measurable space
$(\mathcal{X},\mathcal{A})$. We say that a function
$P_{X\,|\,\mathcal{D}}:\mathcal{A}\times\Omega\to[0,1]$ is a _conditional
distribution for $X$ given $\mathcal{D}$_ if the following two conditions
hold.
1. (i)
For each $A\in\mathcal{A}$,
$P_{X\,|\,\mathcal{D}}(A,\cdot)=\mathbb{E}(\mathbbm{1}_{\{X\in
A\}}\,|\,\mathcal{D})=\mathbb{P}(X\in A\,|\,\mathcal{D})$
$\mathbb{P}_{|\mathcal{D}}$-a.s.
2. (ii)
For $\mathbb{P}_{|\mathcal{D}}$-almost every $\omega\in\Omega$,
$P_{X\,|\,\mathcal{D}}(\cdot,\omega)$ is a probability measure on
$(\mathcal{X},\mathcal{A})$.
We are mainly interested in conditioning on the value of some random variable
which leads to the following definition.
###### Definition 2.
Consider random variables $X$ and $Y$ defined on the probability space
$(\Omega,\mathcal{F},\mathbb{P})$ with values in the measurable spaces
$(\mathcal{X},\mathcal{A})$ and $(\mathcal{Y},\mathcal{G})$, respectively. We
say that a function $P_{Y\,|\,X}:\mathcal{G}\times\mathcal{X}\to[0,1]$ is a
_conditional distribution for $Y$ given $X$_ if the following conditions hold.
1. (i)
For each $x\in\mathcal{X}$, $P_{Y\,|\,X}(\cdot,x)$ is a probability measure on
$(\mathcal{Y},\mathcal{G})$.
2. (ii)
For each $G\in\mathcal{G}$, $P_{Y\,|\,X}(G,\cdot)$ is
$\mathcal{A}$-$\mathbb{B}$ measurable, where $\mathbb{B}$ denotes the Borel
$\sigma$-algebra on $\mathbb{R}$.
3. (iii)
For each $A\in\mathcal{A}$
$\mathbb{P}(X\in A,Y\in G)=\int_{(X\in
A)}P_{Y\,|\,X}(G,X(\omega))\,\mathrm{d}\mathbb{P}(\omega)=\int_{A}P_{Y\,|\,X}(G,x)\,\mathrm{d}X(\mathbb{P})(x),$
where $X(\mathbb{P})$ is the push-forward measure of $X$ under $\mathbb{P}$,
i.e. the measure on $(\mathcal{X},\mathcal{A})$ such that
$X(\mathbb{P})(A)=\mathbb{P}(X\in A)$ for $A\in\mathcal{A}$.
Informally, we write $Y\,|\,X$ for the conditional distribution of $Y$ given
$X$ and $Y\,|\,X=x$ for the measure $P_{Y\,|\,X}(\cdot,x)$. If a function
$Q:\mathcal{G}\times\mathcal{X}\to[0,1]$ only satisfies the first two
conditions, we say that $Q$ is a _$(\mathcal{X},\mathcal{A})$ -Markov kernel
on $(\mathcal{Y},\mathcal{G})$_.
The connection between the previous two definitions can be seen by viewing $X$
and $Y$ as random variables on the probability space
$(\mathcal{X}\times\mathcal{Y},\mathcal{A}\otimes\mathcal{G},(X,Y)(\mathbb{P}))$
where $(X,Y)(\mathbb{P})$ is the joint push-forward measure of $X$ and $Y$
under $\mathbb{P}$. If we then let $\mathcal{D}$ be the smallest
$\sigma$-algebra making the projection onto the $\mathcal{X}$-space
measurable, we see by letting
$P_{Y\,|\,\mathcal{D}}(G,(x,y))=P_{Y\,|\,X}(G,x)$ that $P_{Y\,|\,X}$ also
satisfies the conditions of the first definition. For more on this
perspective, see Dudley2002. It is non-trivial to show the existence of
conditional distributions, however, we do have the following result from
Dudley2002.
###### Lemma 6.
Consider random variables $X$ and $Y$ defined on the probability space
$(\Omega,\mathcal{F},\mathbb{P})$ with values in the measurable spaces
$(\mathcal{X},\mathcal{A})$ and $(\mathcal{Y},\mathcal{G})$ respectively. If
$\mathcal{X}$ and $\mathcal{Y}$ are Polish spaces and $\mathcal{A}$ and
$\mathcal{G}$ are their respective Borel $\sigma$-algebras then the
conditional distribution for $Y$ given $X$ exists.
We will consider real-valued and Hilbertian random variables in the following,
thus we are free to assume the existence of conditional distributions wherever
needed. Before we delve into the main preliminary results about Hilbertian
conditional distributions, we present some fundamental results from the theory
of regular conditional distributions. For measurable spaces
$(\mathcal{X},\mathcal{A})$ and $(\mathcal{Y},\mathcal{G})$, we let
$i_{x}:\mathcal{Y}\to\mathcal{X}\times\mathcal{Y}$ denote the inclusion map,
i.e. $i_{x}(y)=(x,y)$. This is a $\mathcal{G}$-$\mathcal{A}\otimes\mathcal{G}$
measurable mapping for each fixed $x$. The following four results are included
for completeness and can be found in Roenn-Nielsen2014. Unless otherwise
specified, for these results $X$, $Y$ and $Z$ are random variables on
measurable spaces $(\mathcal{X},\mathcal{A})$, $(\mathcal{Y},\mathcal{G})$ and
$(\mathcal{Z},\mathcal{K})$ respectively.
###### Lemma 7.
Let $Q$ be a $(\mathcal{X},\mathcal{A})$-Markov kernel on
$(\mathcal{Y},\mathcal{G})$ and let $\mathbb{B}$ denote the Borel
$\sigma$-algebra on $\mathbb{R}$. For each $C\in\mathcal{A}\otimes\mathcal{G}$
the map
$x\mapsto Q(i_{x}^{-1}(C),x)$
is $\mathcal{A}$-$\mathbb{B}$ measurable.
###### Proof.
Let
$\mathcal{D}=\{C\in\mathcal{A}\otimes\mathcal{G}\,|\,\text{$x\mapsto
Q(i_{x}^{-1}(C),x)$ is $\mathcal{A}$-$\mathbb{B}$ measurable}\}$
and consider a product set $A\times G\in\mathcal{A}\otimes\mathcal{G}$.
Clearly,
$i_{x}^{-1}(A\times G)=\begin{cases}\emptyset&\text{if $x\not\in A$}\\
G&\text{if $x\in A$}\end{cases}$
and therefore
$Q(i_{x}^{-1}(A\times G),x)=\begin{cases}0&\text{if $x\not\in A$}\\
Q(G,x)&\text{if $x\in A$}\end{cases}=\mathbbm{1}_{A}(x)Q(G,x).$
This is a product of two $\mathcal{A}$-$\mathbb{B}$ measurable functions and
is thus also $\mathcal{A}$-$\mathbb{B}$ measurable. This shows that
$\mathcal{D}$ contains all product sets and since the product sets are an
intersection-stable generator of $\mathcal{A}\otimes\mathcal{G}$, we are done
if we can show that $\mathcal{D}$ is a Dynkin class by schilling.
We have already shown that product sets are in $\mathcal{D}$ which includes
$\mathcal{X}\times\mathcal{Y}$. If $C_{1},C_{2}\in\mathcal{D}$ where
$C_{1}\subseteq C_{2}$ then clearly also $i_{x}^{-1}(C_{1})\subseteq
i_{x}^{-1}(C_{2})$ and further $i_{x}^{-1}(C_{2}\setminus
C_{1})=i_{x}^{-1}(C_{2})\setminus i_{x}^{-1}(C_{1})$. This implies that
$Q(i_{x}^{-1}(C_{2}\setminus
C_{1}),x)=Q(i_{x}^{-1}(C_{2}),x)-Q(i_{x}^{-1}(C_{1}),x)$
which is the difference of two $\mathcal{A}$-$\mathbb{B}$ measurable functions
and is thus also $\mathcal{A}$-$\mathbb{B}$ measurable. Hence, $C_{2}\setminus
C_{1}\in\mathcal{D}$. Finally, assume that $C_{1}\subseteq
C_{2}\subseteq\cdots$ is an increasing sequence of $\mathcal{D}$-sets.
Similarly to above we have $i_{x}^{-1}(C_{1})\subseteq
i_{x}^{-1}(C_{2})\subseteq\cdots$ and
$i_{x}^{-1}\left(\bigcup_{n=1}^{\infty}C_{n}\right)=\bigcup_{n=1}^{\infty}i_{x}^{-1}\left(C_{n}\right).$
Then
$Q\left(i_{x}^{-1}\left(\bigcup_{n=1}^{\infty}C_{n}\right),x\right)=Q\left(\bigcup_{n=1}^{\infty}i_{x}^{-1}\left(C_{n}\right),x\right)=\lim_{n\to\infty}Q(i_{x}^{-1}(C_{n}),x).$
The limit is $\mathcal{A}$-$\mathbb{B}$ measurable since each of the functions
$x\mapsto Q(i_{x}^{-1}(C_{n}),x)$ are measurable. Hence, $\mathcal{D}$ is a
Dynkin class, and we have the desired result. ∎
###### Proposition 2.
Let $\mu$ be a probability measure on $(\mathcal{X},\mathcal{A})$ and let $Q$
be a $(\mathcal{X},\mathcal{A})$-Markov kernel on $(\mathcal{Y},\mathcal{G})$.
There exists a uniquely determined probability measure $\lambda$ on
$(\mathcal{X}\times\mathcal{Y},\mathcal{A}\otimes\mathcal{G})$ satisfying
$\lambda(A\times G)=\int_{A}Q(G,x)\,\mathrm{d}\mu(x)$
for all $A\in\mathcal{A}$ and $G\in\mathcal{G}$. Furthermore, for
$C\in\mathcal{A}\otimes\mathcal{G}$
$\lambda(C)=\int Q(i_{x}^{-1}(C),x)\,\mathrm{d}\mu(x).$
###### Proof.
Uniqueness follows from schilling since $\lambda$ is determined on the product
sets which form an intersection-stable generator of
$\mathcal{A}\otimes\mathcal{G}$.
For existence, we show that $\lambda$ as defined for general
$C\in\mathcal{A}\otimes\mathcal{G}$ is a measure. The integrand is measurable
by Lemma 7 and since $Q$ is non-negative, the integral is well-defined with
values in $[0,\infty]$. Let $C_{1},C_{2},\dots$ be a sequence of disjoint sets
in $\mathcal{A}\otimes\mathcal{G}$. Then for each $x\in\mathcal{X}$ the sets
$i_{x}^{-1}(C_{1}),i_{x}^{-1}(C_{2}),\dots$ are disjoint as well. Hence,
$\displaystyle\lambda\left(\bigcup_{n=1}^{\infty}C_{n}\right)$
$\displaystyle=\int
Q\left(i_{x}^{-1}\left(\bigcup_{n=1}^{\infty}C_{n}\right),x\right)\,\mathrm{d}\mu(x)=\int\sum_{n=1}^{\infty}Q\left(i_{x}^{-1}(C_{n}),x\right)\,\mathrm{d}\mu(x)$
$\displaystyle=\sum_{n=1}^{\infty}\int
Q\left(i_{x}^{-1}(C_{n}),x\right)\,\mathrm{d}\mu(x)=\sum_{n=1}^{\infty}\lambda(C_{n})$
where the second equality uses that $Q(\cdot,x)$ is a measure and the third
uses monotone convergence to interchange integration and summation. Since also
$\lambda(\mathcal{X}\times\mathcal{Y})=\int
Q(i_{x}^{-1}(\mathcal{X}\times\mathcal{Y}),x)\,\mathrm{d}\mu(x)=\int
Q(\mathcal{Y},x)\,\mathrm{d}\mu(x)=\int 1\,\mathrm{d}\mu(x)=1,$
$\lambda$ is a probability measure, and it follows that
$\lambda(A\times G)=\int Q(i_{x}^{-1}(A\times
G),x)\,\mathrm{d}\mu(x)=\int_{A}Q(G,x)\,\mathrm{d}\mu(x)$
for all $A\in\mathcal{A}$ and $G\in\mathcal{G}$ as desired. ∎
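On finite spaces the construction of $\lambda$ from $\mu$ and the kernel $Q$ reduces to elementary arithmetic, which the following sketch illustrates (the state-space sizes and the randomly drawn measures are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(3)
nX, nY = 3, 4                                  # arbitrary finite state-space sizes
mu = rng.dirichlet(np.ones(nX))                # probability measure on X
Q = rng.dirichlet(np.ones(nY), size=nX)        # Markov kernel: row Q[x] is a measure on Y

lam = mu[:, None] * Q                          # lambda({x} x {y}) = mu(x) Q({y}, x)
assert np.isclose(lam.sum(), 1.0)              # lambda is a probability measure
assert np.allclose(lam.sum(axis=1), mu)        # its X-marginal is mu
assert np.allclose(lam / lam.sum(axis=1, keepdims=True), Q)   # conditionals recover Q
```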
###### Proposition 3.
Assume that $P_{Y\,|\,X}$ is the conditional distribution of $Y$ given $X$.
Let $(\mathcal{Z},\mathcal{K})$ be another measurable space and let
$\phi:\mathcal{X}\times\mathcal{Y}\to\mathcal{Z}$ be a measurable mapping.
Define $Z=\phi(X,Y)$. Then the conditional distribution of $Z$ given $X$
exists and for $K\in\mathcal{K}$ and $x\in\mathcal{X}$ is given by
$P_{Z\,|\,X}(K,x)=P_{Y\,|\,X}((\phi\circ i_{x})^{-1}(K),x).$
###### Proof.
Clearly $P_{Z\,|\,X}(\cdot,x)$ is a probability measure for every
$x\in\mathcal{X}$ and Lemma 7 yields that $P_{Z\,|\,X}(K,\cdot)$ is
$\mathcal{A}$-$\mathbb{B}$ measurable for every $K\in\mathcal{K}$. It remains
to show that $P_{Z\,|\,X}$ satisfies the third condition required to be the
conditional distribution of $Z$ given $X$. For $A\in\mathcal{A}$ and
$K\in\mathcal{K}$ we get that
$\mathbb{P}(X\in A,Z\in
K)=\mathbb{P}((X,Y)\in(A\times\mathcal{Y})\cap\phi^{-1}(K))$
and hence by Proposition 2, we get that
$\mathbb{P}(X\in A,Z\in K)=\int
P_{Y\,|\,X}(i_{x}^{-1}((A\times\mathcal{Y})\cap\phi^{-1}(K)),x)\,\mathrm{d}X(\mathbb{P})(x).$
Since
$i_{x}^{-1}((A\times\mathcal{Y})\cap\phi^{-1}(K))=\begin{cases}\emptyset&\text{if
$x\not\in A$}\\\ i_{x}^{-1}(\phi^{-1}(K))&\text{if $x\in A$}\end{cases},$
we get
$\mathbb{P}(X\in A,Z\in
K)=\int_{A}P_{Y\,|\,X}(i_{x}^{-1}(\phi^{-1}(K)),x)\,\mathrm{d}X(\mathbb{P})(x)=\int_{A}P_{Z\,|\,X}(K,x)\,\mathrm{d}X(\mathbb{P})(x),$
proving the desired result. ∎
###### Proposition 4.
Suppose that the conditional distribution $P_{Y\,|\,(X,Z)}$ of $Y$ given $(X,Z)$
has the structure
$P_{Y\,|\,(X,Z)}(G,(x,z))=Q(G,z)$
for some $Q:\mathcal{G}\times\mathcal{Z}\to[0,1]$ where for every $z\in\mathcal{Z}$,
$Q(\cdot,z)$ is a probability measure. Then $Q$ is a Markov kernel, $Q$ is the
conditional distribution of $Y$ given $Z$ and
$X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$.
###### Proof.
That $Q$ is a Markov kernel follows immediately from the fact that
$P_{Y\,|\,(X,Z)}$ is a Markov kernel. To see that $Q$ is the conditional
distribution of $Y$ given $Z$, note that defining
$\pi_{Z}:\mathcal{X}\times\mathcal{Z}\to\mathcal{Z}$ to be the projection onto
$\mathcal{Z}$, we get
$\displaystyle\mathbb{P}(Z\in K,Y\in G)$
$\displaystyle=\mathbb{P}((X,Z)\in\pi_{Z}^{-1}(K),Y\in
G)=\int_{\pi_{Z}^{-1}(K)}P_{Y\,|\,(X,Z)}(G,(x,z))\,\mathrm{d}(X,Z)(P)(x,z)$
$\displaystyle=\int_{\pi_{Z}^{-1}(K)}Q(G,\pi_{Z}(x,z))\,\mathrm{d}(X,Z)(P)(x,z)=\int_{K}Q(G,z)\,\mathrm{d}Z(P)(z),$
by viewing $Z(P)$ as the image measure of $(X,Z)(P)$ under $\pi_{Z}$ and
applying schilling. For every $G\in\mathcal{G}$, $Q(G,Z)$ is a version of the
conditional probability $\mathbb{P}(Y\in G\,|\,Z)=\mathbb{E}(1_{(Y\in
G)}\,|\,Z)$ since $Q(G,Z)$ is clearly measurable with respect to $\sigma(Z)$
and
$\int_{(Z\in K)}1_{(Y\in G)}\,\mathrm{d}P=\mathbb{P}(Z\in K,Y\in
G)=\int_{(Z\in K)}Q(G,Z)\,\mathrm{d}P.$
The same argument applies to show that $P_{Y\,|\,(X,Z)}(G,(X,Z))$ is a version
of $\mathbb{P}(Y\in G\,|\,X,Z)$. Hence, for every $G\in\mathcal{G}$
$\mathbb{P}(Y\in
G\,|\,Z)=Q(G,Z)=P_{Y\,|\,(X,Z)}(G,(X,Z))=\mathbb{P}(Y\in G\,|\,X,Z)$
and thus $X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$ as desired. ∎
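A finite-state sanity check of Proposition 4: if the kernel for $Y$ given $(X,Z)$ depends on $z$ only, then $p(x,y\,|\,z)$ factorises. The distributions below are randomly generated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
nX, nY, nZ = 2, 3, 2                           # arbitrary finite state-space sizes
pXZ = rng.dirichlet(np.ones(nX * nZ)).reshape(nX, nZ)   # joint law of (X, Z)
Q = rng.dirichlet(np.ones(nY), size=nZ)        # kernel for Y that depends on z only

# Joint law p(x, y, z) = p(x, z) Q(y | z).
p = pXZ[:, None, :] * Q.T[None, :, :]
pZ = p.sum(axis=(0, 1))
pXgZ = p.sum(axis=1) / pZ                      # p(x | z)
pYgZ = p.sum(axis=0) / pZ                      # p(y | z), which equals Q

for z in range(nZ):                            # p(x, y | z) = p(x | z) p(y | z)
    assert np.allclose(p[:, :, z] / pZ[z], np.outer(pXgZ[:, z], pYgZ[:, z]))
```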
With these results we are ready to start considering Hilbertian conditional
distributions.
###### Remark 1.
In the following we will repeatedly consider orthogonal decompositions of
Hilbert spaces. We write $\mathcal{H}=\mathcal{H}_{1}\oplus\mathcal{H}_{2}$ if
every $h\in\mathcal{H}$ can be written as $h=h_{1}+h_{2}$ where
$h_{1}\in\mathcal{H}_{1}$ and $h_{2}\in\mathcal{H}_{2}$ and
$\mathcal{H}_{1}\perp\mathcal{H}_{2}$. If an operator $\mathscr{A}$ is defined
on $\mathcal{H}$, the decomposition induces four operators: $\mathscr{A}_{11}$
and $\mathscr{A}_{21}$, the $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ components
of the restriction of $\mathscr{A}$ to $\mathcal{H}_{1}$ and similarly
$\mathscr{A}_{12}$ and $\mathscr{A}_{22}$, the $\mathcal{H}_{1}$ and
$\mathcal{H}_{2}$ components of the restriction of $\mathscr{A}$ to
$\mathcal{H}_{2}$. We can write $\mathscr{A}$ as the sum of these four
operators. If $X$ is a random variable on $\mathcal{H}$ and $\mathcal{H}_{1}$
and $\mathcal{H}_{2}$ are as above, we can similarly decompose $X$ into
$(X_{1},X_{2})$ where $X_{1}\in\mathcal{H}_{1}$ and $X_{2}\in\mathcal{H}_{2}$.
If $\mathscr{C}$ is the covariance operator of $X$, we can decompose it as
mentioned above and, in particular, we have
$\mathscr{C}_{11}=\mathrm{Cov}(X_{1})$, $\mathscr{C}_{22}=\mathrm{Cov}(X_{2})$
and $\mathscr{C}_{12}=\mathscr{C}_{21}^{*}=\mathrm{Cov}(X_{1},X_{2})$, where
$\mathscr{C}_{21}^{*}$ denotes the adjoint of $\mathscr{C}_{21}$. This is
analogous to the usual block matrix decomposition of the covariance matrix of
multivariate random variables.
We will need two results that are fundamental in the theory of the
multivariate Gaussian distribution.
###### Proposition 5.
Let $X$ be Gaussian on $\mathcal{H}$ and assume that
$\mathcal{H}=\mathcal{H}_{1}\oplus\mathcal{H}_{2}$. Define $(X_{1},X_{2})$ to
be the corresponding decomposition of $X$. Then
$X_{1}\mbox{${}\perp\mkern-11.0mu\perp{}$}X_{2}$ if and only if
$\mathrm{Cov}(X_{1},X_{2})=0$.
###### Proof.
We show that $\mathrm{Cov}(X_{1},X_{2})=0$ implies independence since the
other direction is trivial. We will use the approach of characteristic
functionals as described in detail in Vakhania1987. The characteristic
functional of a random variable (technically, the distribution of the random
variable) is the mapping defined on $\mathcal{H}$ where
$h\mapsto\mathbb{E}[\exp(i\langle X,h\rangle)]$. Vakhania1987 state that for
Gaussian $X$ with mean $\mu$ and covariance operator $\mathscr{C}$ the
characteristic functional is
$\phi_{X}(h)=\exp\left(i\langle\mu,h\rangle-\frac{1}{2}\langle\mathscr{C}h,h\rangle\right).$
Vakhania1987 state that $X_{1}$ and $X_{2}$ are independent if the
characteristic functional of $X$ factorises into the product of their
respective characteristic functionals. By the assumption that
$\mathscr{C}_{12}=\mathrm{Cov}(X_{1},X_{2})=0$, we can write the covariance as
$\mathscr{C}=\mathscr{C}_{1}+\mathscr{C}_{2}$ where $\mathscr{C}_{i}$ is the
covariance of $X_{i}$. The result then follows by factorising the
characteristic functional appropriately. ∎
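The factorisation of the characteristic functional used in the proof is exact for a block-diagonal covariance, as the following finite-dimensional sketch verifies at a single, arbitrarily chosen test point (the mean and covariance are hypothetical).

```python
import numpy as np

# phi(h) = exp(i<mu, h> - <Ch, h>/2); with block-diagonal C it factorises exactly.
mu = np.array([0.5, -1.0, 2.0, 0.0])
C = np.block([
    [np.array([[2.0, 0.5], [0.5, 1.0]]), np.zeros((2, 2))],
    [np.zeros((2, 2)), np.array([[3.0, -1.0], [-1.0, 2.0]])],
])

def phi(m, S, h):
    # Characteristic functional of a Gaussian with mean m and covariance S.
    return np.exp(1j * (m @ h) - 0.5 * (h @ S @ h))

h = np.array([0.3, -0.7, 1.1, 0.2])
h1 = np.concatenate([h[:2], np.zeros(2)])      # test point supported on H_1
h2 = np.concatenate([np.zeros(2), h[2:]])      # test point supported on H_2
assert np.isclose(phi(mu, C, h), phi(mu, C, h1) * phi(mu, C, h2))
```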
###### Proposition 6.
Let $X$ be Gaussian on $\mathcal{H}_{1}$ with mean $\mu$ and covariance
operator $\mathscr{C}$ and let $\mathscr{A}$ be a bounded linear operator from
$\mathcal{H}_{1}$ to $\mathcal{H}_{2}$ and $z\in\mathcal{H}_{2}$. Then
$Y=\mathscr{A}X+z$ is Gaussian on $\mathcal{H}_{2}$ with mean
$\mathscr{A}\mu+z$ and covariance operator
$\mathscr{A}\mathscr{C}\mathscr{A}^{*}$ where $\mathscr{A}^{*}$ is the adjoint
of $\mathscr{A}$.
###### Proof.
Throughout, we let $\langle\cdot,\cdot\rangle_{1}$ and
$\langle\cdot,\cdot\rangle_{2}$ denote the inner products of $\mathcal{H}_{1}$
and $\mathcal{H}_{2}$ respectively. By definition, for every
$h_{1}\in\mathcal{H}_{1}$, $\langle X,h_{1}\rangle$ is Gaussian on
$\mathbb{R}$. For every $h_{2}\in\mathcal{H}_{2}$ we have
$\langle Y,h_{2}\rangle_{2}=\langle\mathscr{A}X,h_{2}\rangle_{2}+\langle
z,h_{2}\rangle_{2}=\langle X,\mathscr{A}^{*}h_{2}\rangle_{1}+\langle
z,h_{2}\rangle_{2}$
thus $Y$ is also Gaussian. Using the interchangeability of the Bochner
integral and linear operators (see Hsing2015), we get the mean of $Y$
immediately. By noting that for any $h,k\in\mathcal{H}_{1}$, we have
$(\mathscr{A}h)\otimes k=\langle\mathscr{A}h,\cdot\rangle_{2}k=\langle
h,\mathscr{A}^{*}\cdot\rangle_{1}k=(h\otimes k)\mathscr{A}^{*},$
the covariance result then follows by the same argument as for the mean. ∎
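In finite dimensions the statement reduces to the familiar rule $\mathrm{Cov}(\mathscr{A}X+z)=\mathscr{A}\mathscr{C}\mathscr{A}^{\top}$, which a Monte Carlo sketch confirms; the matrices and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
d1, n = 3, 500_000                             # illustrative dimension and sample size
C = np.diag([1.0, 2.0, 3.0])                   # covariance of X
A = np.array([[1.0, 0.0, 2.0], [0.0, -1.0, 1.0]])
z = np.array([1.0, -1.0])

X = rng.multivariate_normal(np.zeros(d1), C, size=n)
Y = X @ A.T + z                                # Y = A X + z

# Empirical mean and covariance match A mu + z and A C A^T.
assert np.allclose(Y.mean(axis=0), z, atol=0.05)
assert np.allclose(np.cov(Y.T), A @ C @ A.T, atol=0.2)
```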
With these results we can now show that conditioning on an injective part of a
Gaussian distribution on a Hilbert space yields another Gaussian distribution
with mean and covariance given by the Hilbertian analogue of the well-known
Gaussian conditioning formula.
###### Proposition 7.
Let $X$ be mean zero Gaussian on $\mathcal{H}$ with covariance operator
$\mathscr{C}$ and assume that
$\mathcal{H}=\mathcal{H}_{1}\oplus\mathcal{H}_{2}$. Let $(X_{1},X_{2})$ denote
the corresponding decomposition of $X$. As discussed in Remark 1, we then set
$\mathscr{C}_{11}:=\mathrm{Cov}(X_{1})$,
$\mathscr{C}_{22}:=\mathrm{Cov}(X_{2})$ and
$\mathscr{C}_{12}=\mathscr{C}_{21}^{*}:=\mathrm{Cov}(X_{1},X_{2})$, where
$\mathscr{C}_{21}^{*}$ denotes the adjoint of $\mathscr{C}_{21}$. If
$\mathscr{C}_{22}$ is injective, i.e.
$\textrm{Ker}(\mathscr{C}_{22})=\{h\in\mathcal{H}_{2}\,|\,\mathscr{C}_{22}h=0\}=\{0\}$
then the conditional distribution of $X_{1}$ given $X_{2}$ is Gaussian on
$\mathcal{H}_{1}$ with
$\mathbb{E}(X_{1}\,|\,X_{2})=\mathscr{C}_{12}\mathscr{C}_{22}^{{\dagger}}X_{2}$
and
$\mathrm{Cov}(X_{1}\,|\,X_{2})=\mathscr{C}_{11}-\mathscr{C}_{12}\mathscr{C}_{22}^{{\dagger}}\mathscr{C}_{21},$
where $\mathscr{C}_{22}^{{\dagger}}$ is the generalised inverse (or
Moore–Penrose inverse) of $\mathscr{C}_{22}$.
###### Proof.
Define $Z:=X_{1}-\mathscr{C}_{12}\mathscr{C}_{22}^{{\dagger}}X_{2}$. Note that
since $(Z,X_{2})$ is a bounded linear transformation of $(X_{1},X_{2})$,
$(Z,X_{2})$ must be jointly Gaussian by Proposition 6. By Proposition 5, $Z$
and $X_{2}$ are independent if $\mathrm{Cov}(Z,X_{2})=0$. We calculate the
covariance and get
$\mathrm{Cov}(Z,X_{2})=\mathscr{C}_{12}-\mathscr{C}_{12}\mathscr{C}_{22}^{{\dagger}}\mathscr{C}_{22}=0$
by Hsing2015 since $\textrm{Ker}(\mathscr{C}_{22})=\{0\}$. This implies that the
conditional distribution of $Z$ given $X_{2}$ is simply the distribution of
$Z$. We can find the complete distribution of $Z$ by calculating the mean and
covariance of $Z$, since $Z$ is Gaussian. We get by Proposition 6,
$\mathbb{E}(Z)=\mathbb{E}(X_{1})-\mathscr{C}_{12}\mathscr{C}_{22}^{{\dagger}}\mathbb{E}(X_{2})=0$
and
$\mathrm{Cov}(Z)=\mathscr{C}_{11}-\mathscr{C}_{12}\mathscr{C}_{22}^{{\dagger}}\mathscr{C}_{21}.$
By Proposition 3, since we can write
$X_{1}=Z+\mathscr{C}_{12}\mathscr{C}_{22}^{{\dagger}}X_{2}$, the conditional
distribution of $X_{1}$ given $X_{2}$ is as desired. ∎
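The key step of the proof, that $Z:=X_{1}-\mathscr{C}_{12}\mathscr{C}_{22}^{\dagger}X_{2}$ is uncorrelated with $X_{2}$, can be checked directly with matrices in place of operators; the covariance below is a randomly generated, hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(4)
d1, d2 = 2, 3
L = rng.standard_normal((d1 + d2, d1 + d2))
C = L @ L.T + np.eye(d1 + d2)                  # hypothetical covariance of (X_1, X_2)
C11, C12, C22 = C[:d1, :d1], C[:d1, d1:], C[d1:, d1:]

B = C12 @ np.linalg.pinv(C22)                  # C_12 C_22^dagger
# Cov(Z, X_2) = C_12 - B C_22 vanishes when C_22 is injective.
assert np.allclose(C12 - B @ C22, 0)

cond_cov = C11 - B @ C12.T                     # Cov(X_1 | X_2) = C_11 - C_12 C_22^dagger C_21
assert np.all(np.linalg.eigvalsh(cond_cov) >= -1e-10)   # positive semi-definite
```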
## Appendix B Uniform convergence of random variables
In this section we develop some background theory that will be useful when
considering simultaneous convergence of sequences with varying distributions.
In particular, we are interested in the convergence of a sequence of random
variables $(X_{n})_{n\in\mathbb{N}}$ defined on a measurable space
$(\Omega,\mathcal{F})$ with a family of probability measures
$(\mathbb{P}_{\theta})_{\theta\in\Theta}$. For each $\theta\in\Theta$ the
distribution of $(X_{n})_{n\in\mathbb{N}}$ will change as the background
measure $\mathbb{P}_{\theta}$ changes. We are also interested in the
convergence of $\theta$-dependent functions of $X_{n}$ such as the conditional
expectation with respect to $\mathbb{P}_{\theta}$ of $X_{n}$ given a
sub-$\sigma$-algebra $\mathcal{D}$ of $\mathcal{F}$. To allow for such
considerations, the definitions given here will be more general than in
Section 4 and will allow for a family of random variables
$(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ to converge to a family of
random variables $(X_{\theta})_{\theta\in\Theta}$.
The material in this section extends the work of Kasy2019 and Bengs2019 to
Hilbertian and Banachian random variables and also adds further
characterisations of their central assumptions for families of real-valued
random variables.
Unless stated otherwise, we consider the following setup for the remainder of
this section. Let $(\Omega,\mathcal{F})$ be a measurable space,
$(\mathbb{P}_{\theta})_{\theta\in\Theta}$ a family of probability measures on
$(\Omega,\mathcal{F})$ where $\Theta$ is any set and
$(\mathcal{B},\mathbb{B}(\mathcal{B}))$ a separable Banach space with its
Borel $\sigma$-algebra. Let $(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$
and $(X_{\theta})_{\theta\in\Theta}$ be families of random variables defined
on $(\Omega,\mathcal{F})$ with values in $\mathcal{B}$. All additional random
variables are also defined on $(\Omega,\mathcal{F})$. We write
$\mathbb{E}_{\theta}$ for the expectation with respect to
$\mathbb{P}_{\theta}$.
###### Definition 3 (Uniform convergence of random variables).
1. (i)
We say that _$X_{n,\theta}$ converges uniformly in distribution over $\Theta$
to $X_{\theta}$ and write
$X_{n,\theta}\underset{\Theta}{\overset{\mathcal{D}}{\rightrightarrows}}X_{\theta}$_
if
$\lim_{n\to\infty}\sup_{\theta\in\Theta}d_{{\textrm{BL}}}^{\theta}(X_{n,\theta},X_{\theta})=0,$
where
$d_{\textrm{BL}}^{\theta}(X_{n,\theta},X_{\theta}):=\sup_{f\in{\textrm{BL}}_{1}}\left|\mathbb{E}_{\theta}(f(X_{n,\theta}))-\mathbb{E}_{\theta}(f(X_{\theta}))\right|,$
and ${\textrm{BL}}_{1}$ denotes the set of all functions
$f:\mathcal{B}\to[-1,1]$ that are Lipschitz with constant at most $1$. We
write $X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$ and
simply say that $X_{n,\theta}$ converges uniformly in distribution to
$X_{\theta}$ when $\Theta$ is clear from the context. When considering
collections of random variables that do not depend on $\theta$ except through
the measure on the domain of the random variables, we simply write
$X_{n}\overset{\mathcal{D}}{\rightrightarrows}X$.
2. (ii)
We say that _$X_{n,\theta}$ converges uniformly in probability over $\Theta$
to $X_{\theta}$ and write
${X_{n,\theta}\underset{\Theta}{\overset{P}{\rightrightarrows}}X_{\theta}}$_
if, for any $\epsilon>0$,
$\lim_{n\to\infty}\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
X_{n,\theta}-X_{\theta}\rVert\geq\epsilon)=0.$
We write $X_{n,\theta}\overset{P}{\rightrightarrows}X_{\theta}$ and simply say
that $X_{n,\theta}$ converges uniformly in probability to $X_{\theta}$ when
$\Theta$ is clear from the context. When considering collections of random
variables that do not depend on $\theta$ except through the measure on the
domain of the random variables, we simply write
$X_{n}\overset{P}{\rightrightarrows}X$.
Using a slight abuse of notation, we write
$X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}0$ and
$X_{n,\theta}\overset{P}{\rightrightarrows}0$ to mean that $X_{n,\theta}$
converges uniformly to the family of random variables $X_{\theta}$ that is
equal to $0$ for all $\omega\in\Omega$ and any $\theta\in\Theta$. Note that if
$(\mathbb{P}_{\theta})_{\theta\in\Theta}$ contains a single element, we
recover the standard definitions of convergence in distribution and
probability. We have the following helpful characterisations of the two modes
of uniform convergence.
###### Proposition 8.
1. (i)
$X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$ if and only
if for any sequence $(\theta_{n})_{n\in\mathbb{N}}\subset\Theta$
$\lim_{n\to\infty}d_{\textrm{BL}}^{\theta_{n}}(X_{n,\theta_{n}},X_{\theta_{n}})=0.$
2. (ii)
$X_{n,\theta}\overset{P}{\rightrightarrows}X_{\theta}$ if and only if for any
sequence $(\theta_{n})_{n\in\mathbb{N}}\subset\Theta$ and any $\varepsilon>0$
$\lim_{n\to\infty}\mathbb{P}_{\theta_{n}}(\lVert
X_{n,\theta_{n}}-X_{\theta_{n}}\rVert\geq\varepsilon)=0.$
###### Proof.
The proof given in Kasy2019 also works in the Banachian case. ∎
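The sequence characterisation makes the gap between pointwise and uniform convergence concrete. As a toy illustration, take the hypothetical tail function $t_{n}(\theta)=e^{-n\theta}$ on $\Theta=(0,1]$ as a stand-in for $\mathbb{P}_{\theta}(\lVert X_{n,\theta}-X_{\theta}\rVert\geq\varepsilon)$ (not derived from any particular family): for each fixed $\theta$ it vanishes, but along $\theta_{n}=1/n$ it stays at $e^{-1}$, so the supremum over $\Theta$ does not vanish.

```python
import math

def tail(n, theta):
    # Stand-in for P_theta(||X_n - X|| >= eps); decays in n for each fixed theta.
    return math.exp(-n * theta)

thetas = [10.0 ** (-k) for k in range(1, 8)]   # Theta = (0, 1], sampled near 0

# Pointwise: for each fixed theta the tail vanishes as n grows ...
pointwise = max(tail(1000, theta) for theta in [0.1, 0.5, 1.0])

# ... but the supremum over Theta stays near 1, so uniform convergence fails;
# the witnessing sequence theta_n = 1/n keeps the tail at e^{-1}.
sup_tail = max(tail(1000, theta) for theta in thetas)
print(pointwise, sup_tail)
```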
In the remainder of this section we derive various properties of uniform
convergence in probability and distribution that are analogous to the well-
known properties of non-uniform convergence. In particular, we first consider
a uniform version of the continuous mapping theorem which relies on stronger
versions of continuity.
###### Proposition 9.
Let $\psi:\mathcal{B}\to\tilde{\mathcal{B}}$ where $\tilde{\mathcal{B}}$ is
another separable Banach space.
1. (i)
If $X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$ and $\psi$
is Lipschitz-continuous then
$\psi(X_{n,\theta})\overset{\mathcal{D}}{\rightrightarrows}\psi(X_{\theta})$.
2. (ii)
If $X_{n,\theta}\overset{P}{\rightrightarrows}X_{\theta}$ and $\psi$ is
uniformly continuous then
$\psi(X_{n,\theta})\overset{P}{\rightrightarrows}\psi(X_{\theta})$.
###### Proof.
The proof in Kasy2019 also works in the Banachian case. ∎
In what follows we will investigate different alternative assumptions such
that continuity of $\psi$ suffices. One such assumption is tightness of the
family of pushforward measures
$(X_{\theta}(\mathbb{P}_{\theta}))_{\theta\in\Theta}$.
###### Definition 4.
Let $(\mu_{\theta})_{\theta\in\Theta}$ be a family of probability measures on
$\mathcal{B}$.
1. (i)
$(\mu_{\theta})_{\theta\in\Theta}$ is said to be _tight_ if for any
$\varepsilon>0$, there exists a compact set $K$ such that
$\sup_{\theta\in\Theta}\mu_{\theta}(K^{c})<\varepsilon$.
$(X_{\theta})_{\theta\in\Theta}$ is said to be _uniformly tight with respect
to $\Theta$_ if the family of pushforward measures
$(X_{\theta}(\mathbb{P}_{\theta}))_{\theta\in\Theta}$ is tight. If $\Theta$ is
clear from the context we simply say that $(X_{\theta})_{\theta\in\Theta}$ is
uniformly tight.
2. (ii)
$(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ is said to be _sequentially
tight with respect to $\Theta$_ if for any sequence
$(\theta_{n})_{n\in\mathbb{N}}\subset\Theta$ the sequence of pushforward
measures $(X_{n,\theta_{n}}(\mathbb{P}_{\theta_{n}}))_{n\in\mathbb{N}}$ is
tight. If $\Theta$ is clear from the context we simply say that
$(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ is sequentially tight.
3. (iii)
$(\mu_{\theta})_{\theta\in\Theta}$ is said to be _relatively compact_ if for
any sequence $(\theta_{n})_{n\in\mathbb{N}}$ there exists a subsequence
$(\theta_{k(n)})_{n\in\mathbb{N}}$, where $k:\mathbb{N}\to\mathbb{N}$ is
strictly increasing, such that $\mu_{\theta_{k(n)}}$ converges weakly to some
measure $\mu$, which is not necessarily in the family
$(\mu_{\theta})_{\theta\in\Theta}$.
Prokhorov’s theorem states that tightness implies relative compactness and
that they are equivalent on separable and complete metric spaces; in this
work, we therefore use the terms interchangeably since we only consider
separable Banach and Hilbert spaces. With a uniform tightness assumption, we
can perform continuous operations and preserve uniform convergence in
probability just as in the non-uniform setting.
###### Proposition 10.
Let $(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ and
$(X_{\theta})_{\theta\in\Theta}$ be random variables taking values in
$\mathcal{B}$. Assume that
$X_{n,\theta}\overset{P}{\rightrightarrows}X_{\theta}$ and $X_{\theta}$ is
uniformly tight. Then, for any continuous function
$\psi:\mathcal{B}\to\tilde{\mathcal{B}}$, where $\tilde{\mathcal{B}}$ is
another separable Banach space, we have
$\psi(X_{n,\theta})\overset{P}{\rightrightarrows}\psi(X_{\theta}).$
###### Proof.
Let $\epsilon>0$ be given. We need to show that
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert\psi(X_{n,\theta})-\psi(X_{\theta})\rVert\geq\epsilon)\to
0.$
As $X_{\theta}$ is uniformly tight, for $\eta>0$ there exists a compact set
$K$ such that
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}\not\in K)<\eta/2.$
By the Heine–Cantor theorem, $\psi$ is uniformly continuous on $K$, so
there exists $\delta>0$ such that $\lVert x-x^{\prime}\rVert<\delta$ implies
that $\lVert\psi(x)-\psi(x^{\prime})\rVert<\epsilon$. We thus have
$\displaystyle\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert\psi(X_{n,\theta})$
$\displaystyle-\psi(X_{\theta})\rVert\geq\epsilon)\leq\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}\not\in
K)+\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
X_{n,\theta}-X_{\theta}\rVert\geq\delta).$
By assumption, we can choose $N$ sufficiently large such that for all $n\geq
N$, the final term is less than $\eta/2$, resulting in the whole expression
being less than $\eta$. As $\eta$ was arbitrary, this proves the result. ∎
Bengs2019 make repeated use of an alternative assumption for many of their
results for real-valued random variables.
###### Definition 5.
A family of probability measures $(\mu_{\theta})_{\theta\in\Theta}$ is
_uniformly absolutely continuous with respect to the measure $\mu$_ if for any
$\varepsilon>0$, there exists $\delta>0$ such that for any Borel set $B$
$\mu(B)<\delta\Longrightarrow\sup_{\theta\in\Theta}\mu_{\theta}(B)<\varepsilon.$
A family of random variables $(X_{\theta})_{\theta\in\Theta}$ is _uniformly
absolutely continuous over $\Theta$ with respect to the measure $\mu$_ if the
family of pushforward measures
$(X_{\theta}(\mathbb{P}_{\theta}))_{\theta\in\Theta}$ is uniformly absolutely
continuous with respect to $\mu$. When $\Theta$ is clear from the context we
simply say that $X_{\theta}$ is uniformly absolutely continuous with respect
to $\mu$.
Uniform absolute continuity has previously been studied in other works such as
the ones by Bogachev2018 and Doob1994. An intuitive view of uniform absolute
continuity can be given when $\mu$ is a finite measure. In this case, we can
define a pseudometric $d_{\mu}$ on the Borel sets with
$d_{\mu}(A,B)=\mu(A\triangle B)$, where $A\triangle B$ is the symmetric
difference. Uniform absolute continuity is then uniform $d_{\mu}$-continuity
over $\theta$ of the collection of push-forward measures
$(X_{\theta}(\mathbb{P}_{\theta}))_{\theta\in\Theta}$ viewed as mappings from
the Borel sets into $\mathbb{R}$.
Another helpful perspective is in the case where for each $\theta$,
$X_{\theta}$ has a density $f_{\theta}$ with respect to a common measure
$\mu$. The following proposition shows that $X_{\theta}$ is uniformly
absolutely continuous with respect to $\mu$ if and only if for each $\theta$,
$X_{\theta}$ has a density $f_{\theta}$ with respect to $\mu$ and the family
of densities is uniformly integrable. A convenient sufficient condition for
uniform integrability is the existence of $r>0$ such that
$\sup_{\theta\in\Theta}\int f_{\theta}^{1+r}\,\mathrm{d}\mu<\infty$.
###### Proposition 11.
If $(X_{\theta})_{\theta\in\Theta}$ is uniformly absolutely continuous with
respect to $\mu$, then for each $\theta$ $X_{\theta}$ has a density
$f_{\theta}$ with respect to $\mu$ and the family
$(f_{\theta})_{\theta\in\Theta}$ is uniformly integrable with respect to
$\mu$. Conversely, if for each $\theta$ $X_{\theta}$ has a density
$f_{\theta}$ with respect to $\mu$ and the family
$(f_{\theta})_{\theta\in\Theta}$ is uniformly integrable then
$(X_{\theta})_{\theta\in\Theta}$ is uniformly absolutely continuous with
respect to $\mu$.
###### Proof.
For the first statement, note that by the Radon–Nikodym theorem, we need to
show that for each $\theta$, $\mu(B)=0$ implies that
$\mathbb{P}_{\theta}(X_{\theta}\in B)=0$ for every Borel measurable $B$. This
follows immediately from uniform absolute continuity, since $\mu(B)=0$ gives
$\mu(B)<\delta$ for every $\delta>0$; so does the uniform integrability of the
family $(f_{\theta})_{\theta\in\Theta}$. The second statement follows immediately
from the definitions of uniform integrability and uniform absolute continuity.
∎
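The sufficient condition $\sup_{\theta\in\Theta}\int f_{\theta}^{1+r}\,\mathrm{d}\mu<\infty$ can be checked numerically for a simple family. With $r=1$, $\mu$ the Lebesgue measure and the Gaussian location family $f_{\theta}$ the density of $N(\theta,1)$ (chosen purely for illustration), $\int f_{\theta}^{2}\,\mathrm{d}\lambda=1/(2\sqrt{\pi})$ for every $\theta$, so the supremum is finite:

```python
import numpy as np

def gauss_density(x, theta):
    return np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2 * np.pi)

x = np.linspace(-15, 15, 200001)
dx = x[1] - x[0]
# Riemann-sum approximation of  int f_theta^2 dlambda  over a theta grid.
vals = [np.sum(gauss_density(x, th) ** 2) * dx for th in np.linspace(-1, 1, 5)]
print(max(vals))  # each value is ~1/(2*sqrt(pi)) ~ 0.2821, independent of theta
```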
In Bengs2019 uniform absolute continuity is assumed with respect to a
probability measure. For uniformly tight Banachian random variables that are
uniformly absolutely continuous with respect to a $\sigma$-finite measure
$\mu$, we can show that the family is also uniformly absolutely continuous
with respect to any $\sigma$-finite measure $\nu$ such that $\mu$ has a
continuous density with respect to $\nu$.
###### Proposition 12.
Assume that $(X_{\theta})_{\theta\in\Theta}$ is uniformly tight and uniformly
absolutely continuous with respect to some $\sigma$-finite measure $\mu$. If
$\nu$ is another $\sigma$-finite measure dominating $\mu$ and there exists a
continuous Radon-Nikodym derivative of $\mu$ with respect to $\nu$, then
$(X_{\theta})_{\theta\in\Theta}$ is uniformly absolutely continuous with
respect to $\nu$.
###### Proof.
Let $\varepsilon>0$ be given. Because $(X_{\theta})_{\theta\in\Theta}$ is
uniformly tight, we can choose a compact set $K$, such that
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}\not\in
K)<\varepsilon/2.$
Then note that for any Borel measurable set $B$
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}\in
B)<\varepsilon/2+\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}\in B\cap
K).$
We thus need to find $\delta$ so that $\nu(B\cap K)<\delta$ implies
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}\in B\cap
K)<\varepsilon/2$. Letting $g$ denote the continuous Radon-Nikodym derivative
of $\mu$ with respect to $\nu$, we see that
$\mu(B\cap K)=\int_{B\cap K}g\,\mathrm{d}\nu\leq\left(\sup_{x\in
K}g(x)\right)\nu(B\cap K).$
The supremum is finite by the extreme value theorem for continuous functions
since $K$ is compact. If $\sup_{x\in K}g(x)>0$ choose $\delta^{\prime}$ from
the uniform absolute continuity of $(X_{\theta})_{\theta\in\Theta}$ with respect to $\mu$ matching
$\varepsilon/2$ and set $\delta=\delta^{\prime}/(\sup_{x\in K}g(x))$. Then for
all $B$ with $\nu(B)<\delta$, we have
$\delta>\nu(B)\geq\nu(B\cap K)\geq\frac{\mu(B\cap K)}{\sup_{x\in
K}g(x)}\Longrightarrow\mu(B\cap K)<\delta^{\prime}$
and thus
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}\in B\cap
K)<\varepsilon/2$
proving the result. If $\sup_{x\in K}g(x)=0$, any $\delta$ works, since
$\mu(B\cap K)=0$ implies
${\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}\in B\cap K)=0}$. ∎
A consequence of the above result is that uniform absolute continuity with
respect to the Lebesgue measure implies uniform absolute continuity with
respect to the standard Gaussian measure. This lets us immediately apply many
of the results of Bengs2019 such as Theorem 4.1, when we consider a uniformly
tight real-valued random variable that is uniformly absolutely continuous with
respect to the Lebesgue measure.
###### Corollary 1.
A uniformly tight family of real-valued random variables
$(X_{\theta})_{\theta\in\Theta}$ is uniformly absolutely continuous with
respect to the Lebesgue measure if and only if it is uniformly absolutely
continuous with respect to the standard Gaussian measure.
###### Proof.
The statement follows immediately by the equivalence of the standard Gaussian
measure and the Lebesgue measure, by the continuity of the Gaussian density
and its reciprocal, and Proposition 12. ∎
We will consider sums of real-valued random variables and thus need to
consider when such sums are uniformly absolutely continuous with respect to a
measure. It turns out that when the random variables are independent and one
of the families is uniformly absolutely continuous with respect to the
Lebesgue measure, the same is true for the family of sums.
###### Theorem 9.
Let $(X_{\theta})_{\theta\in\Theta}$ and $(Y_{\theta})_{\theta\in\Theta}$ be
two families of real-valued random variables such that for any
$\theta\in\Theta$, $X_{\theta}$ and $Y_{\theta}$ are independent under
$\mathbb{P}_{\theta}$.
Assume that $(X_{\theta})_{\theta\in\Theta}$ is uniformly absolutely
continuous with respect to the Lebesgue measure. Then
$(X_{\theta}+Y_{\theta})_{\theta\in\Theta}$ is uniformly absolutely continuous
with respect to the Lebesgue measure.
###### Proof.
Let $\varepsilon>0$ be given and let $\lambda$ denote the Lebesgue measure. We
need to find $\delta>0$ such that for any Borel measurable $B$ with
$\lambda(B)<\delta$, we have
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}+Y_{\theta}\in
B)<\varepsilon$. We can use the independence of $X_{\theta}$ and $Y_{\theta}$
to write the probability as a double-integral with respect to the pushforward
measures $X_{\theta}(\mathbb{P}_{\theta})$ and
$Y_{\theta}(\mathbb{P}_{\theta})$ as follows:
$\mathbb{P}_{\theta}(X_{\theta}+Y_{\theta}\in
B)=\int\mathbbm{1}_{B}(X_{\theta}(\omega)+Y_{\theta}(\omega))\,\mathrm{d}\mathbb{P}_{\theta}(\omega)=\int\int\mathbbm{1}_{B}(x+y)\,\mathrm{d}X_{\theta}(\mathbb{P}_{\theta})(x)\,\mathrm{d}Y_{\theta}(\mathbb{P}_{\theta})(y).$
Note that $\mathbbm{1}_{B}(x+y)=\mathbbm{1}_{B-y}(x)$ where
$B-y:=\\{b-y\,:\,b\in B\\}$ and that, by the translation invariance of the
Lebesgue measure, $\lambda(B)=\lambda(B-y)$. As $X_{\theta}$ is uniformly
absolutely continuous with respect to $\lambda$, there exists $\delta$ such
that if $\lambda(B)<\delta$ we have
$\displaystyle\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}+Y_{\theta}\in
B)$
$\displaystyle\leq\sup_{\theta\in\Theta}\int\left(\sup_{\theta\in\Theta}\int\mathbbm{1}_{B-y}(x)\,\mathrm{d}X_{\theta}(\mathbb{P}_{\theta})(x)\right)\,\mathrm{d}Y_{\theta}(\mathbb{P}_{\theta})(y)$
$\displaystyle<\sup_{\theta\in\Theta}\int\varepsilon\,\mathrm{d}Y_{\theta}(\mathbb{P}_{\theta})(y)=\varepsilon.$ ∎
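The mechanism behind Theorem 9 is that convolving with the law of $Y_{\theta}$ can only spread mass out, never concentrate it: the density of the sum is bounded by the supremum of the density of $X_{\theta}$. A discretised numpy sketch (the two densities are arbitrary illustrations: a bounded triangular density for $X$ and a sharp spike for $Y$):

```python
import numpy as np

dy = 0.01
y = np.arange(-5, 5, dy)
f_X = np.maximum(1 - np.abs(y), 0.0)        # triangular density, sup = 1
f_Y = np.where(np.abs(y) < 0.05, 1.0, 0.0)  # narrow spike ...
f_Y /= f_Y.sum() * dy                       # ... normalised to a density

# Density of X + Y is the convolution f_X * f_Y; its sup is at most sup f_X,
# even though sup f_Y is much larger.
f_sum = np.convolve(f_X, f_Y) * dy
print(f_X.max(), f_Y.max(), f_sum.max())
```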
Thus far, we have not discussed when we can expect uniform convergence in
distribution to imply uniform convergence of distribution functions. This is
exactly where we need an assumption of uniform absolute continuity. The
following result is a modified version of Bengs2019, where our condition
includes uniform convergence in $x$, rather than convergence for all $x$.
###### Proposition 13.
Let $(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ and
$(X_{\theta})_{\theta\in\Theta}$ be real-valued random variables. Assume that
$(X_{\theta})_{\theta\in\Theta}$ is uniformly absolutely continuous with
respect to a continuous probability measure $\mu$. Then
$X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$ if and only
if
$\lim_{n\to\infty}\sup_{x\in\mathbb{R}}\sup_{\theta\in\Theta}|\mathbb{P}_{\theta}(X_{n,\theta}\leq
x)-\mathbb{P}_{\theta}(X_{\theta}\leq x)|=0.$ (30)
###### Proof.
See Bengs2019 for a proof that
$X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$ if and only
if
$\lim_{n\to\infty}\sup_{\theta\in\Theta}|\mathbb{P}_{\theta}(X_{n,\theta}\leq
x)-\mathbb{P}_{\theta}(X_{\theta}\leq x)|=0$
for all $x\in\mathbb{R}$. To show that the convergence of distribution
functions is uniform, we proceed as follows. In view of the uniform absolute
continuity of $(X_{\theta})_{\theta\in\Theta}$ with respect to $\mu$, for all
$\varepsilon>0$ there exists $\delta>0$ such that for Borel measurable $B$
with $\mu(B)<\delta$, we have
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X_{\theta}\in B)<\varepsilon$. Let
$-\infty=x_{0}<x_{1}<\dots<x_{m}=\infty$ such that for all
$i\in\\{1,\dots,m\\}$, $0<\mu((x_{i-1},x_{i}])<\delta$. We can find such a
grid since $\mu$ is a continuous probability measure. For any $\theta$ and
$i\in\\{1,\dots,m\\}$, we thus have
$\mathbb{P}_{\theta}(X_{\theta}\leq x_{i})-\mathbb{P}_{\theta}(X_{\theta}\leq
x_{i-1})=\mathbb{P}_{\theta}(X_{\theta}\in(x_{i-1},x_{i}])<\varepsilon.$
For $x\in(x_{i-1},x_{i}]$,
$\displaystyle\sup_{\theta\in\Theta}\\{\mathbb{P}_{\theta}(X_{n,\theta}\leq
x)-\mathbb{P}_{\theta}(X_{\theta}\leq
x)\\}\leq\sup_{\theta\in\Theta}\\{\mathbb{P}_{\theta}(X_{n,\theta}\leq
x_{i})-\mathbb{P}_{\theta}(X_{\theta}\leq x_{i-1})\\}$
$\displaystyle\leq\sup_{\theta\in\Theta}\\{\mathbb{P}_{\theta}(X_{n,\theta}\leq
x_{i})-\mathbb{P}_{\theta}(X_{\theta}\leq
x_{i})\\}+\varepsilon\leq\sup_{\theta\in\Theta}|\mathbb{P}_{\theta}(X_{n,\theta}\leq
x_{i})-\mathbb{P}_{\theta}(X_{\theta}\leq x_{i})|+\varepsilon,$
and, similarly,
$\displaystyle\sup_{\theta\in\Theta}\\{\mathbb{P}_{\theta}(X_{\theta}\leq
x)-\mathbb{P}_{\theta}(X_{n,\theta}\leq
x)\\}\leq\sup_{\theta\in\Theta}\\{\mathbb{P}_{\theta}(X_{\theta}\leq
x_{i})-\mathbb{P}_{\theta}(X_{n,\theta}\leq x_{i-1})\\}$
$\displaystyle\leq\sup_{\theta\in\Theta}\\{\mathbb{P}_{\theta}(X_{\theta}\leq
x_{i-1})-\mathbb{P}_{\theta}(X_{n,\theta}\leq
x_{i-1})\\}+\varepsilon\leq\sup_{\theta\in\Theta}|\mathbb{P}_{\theta}(X_{\theta}\leq
x_{i-1})-\mathbb{P}_{\theta}(X_{n,\theta}\leq x_{i-1})|+\varepsilon.$
Thus,
$\sup_{x\in\mathbb{R}}\sup_{\theta\in\Theta}|\mathbb{P}_{\theta}(X_{n,\theta}\leq
x)-\mathbb{P}_{\theta}(X_{\theta}\leq
x)|\leq\sup_{i\in\\{0,\dots,m\\}}\sup_{\theta\in\Theta}|\mathbb{P}_{\theta}(X_{n,\theta}\leq
x_{i})-\mathbb{P}_{\theta}(X_{\theta}\leq x_{i})|+\varepsilon.$
The first term on the right-hand side goes to $0$ by assumption and
$\varepsilon$ was arbitrary, thus proving the uniform convergence. ∎
The final results of this section are uniform versions of Slutsky’s lemma, the
Weak Law of Large Numbers and the Central Limit Theorem. In the remaining
results uniform tightness will play a crucial role. It is a standard result
that if $(X_{n})_{n\in\mathbb{N}}$ converges in distribution to $X$ then
$(X_{n})_{n\in\mathbb{N}}$ is tight. We can show that analogously if
$X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$ and
$(X_{\theta})_{\theta\in\Theta}$ is uniformly tight then
$(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ is sequentially tight.
###### Proposition 14.
Assume that $(X_{\theta})_{\theta\in\Theta}$ is uniformly tight. If
$X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$ then
$(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ is sequentially tight.
###### Proof.
We prove the contrapositive statement. Assume that there exists a sequence
$(\theta_{n})_{n\in\mathbb{N}}\subset\Theta$ such that
$(X_{n,\theta_{n}}(\mathbb{P}_{\theta_{n}}))_{n\in\mathbb{N}}$ is not tight.
Let $Y_{n}$ be distributed as $X_{n,\theta_{n}}(\mathbb{P}_{\theta_{n}})$ and
$Z_{n}$ distributed as $X_{\theta_{n}}(\mathbb{P}_{\theta_{n}})$, both defined
on a probability space $(\Omega,\mathcal{F},\mathbb{P})$. Since
$(Y_{n})_{n\in\mathbb{N}}$ is
not tight, there exists a subsequence $(k(n))_{n\in\mathbb{N}}$ with
$k:\mathbb{N}\to\mathbb{N}$ strictly increasing such that any further
subsequence of $(Y_{k(n)})_{n\in\mathbb{N}}$ does not converge in
distribution. Since $(Z_{n})_{n\in\mathbb{N}}$ is tight, there exists a
strictly increasing $k^{\prime}:\mathbb{N}\to\mathbb{N}$ and a random variable
$Z$ such that writing $m=k\circ k^{\prime}$, we have
$d_{\textrm{BL}}(Z_{m(n)},Z)\to 0.$
However, since $Y_{k(n)}$ does not have a weakly convergent subsequence, we
have
$d_{\textrm{BL}}(Y_{m(n)},Z)\not\to 0.$
Thus, there exists $\varepsilon>0$ and a strictly increasing
$k^{\prime\prime}:\mathbb{N}\to\mathbb{N}$ such that writing $l=m\circ
k^{\prime\prime}$, we have for all $n$
$d_{\textrm{BL}}(Y_{l(n)},Z)\geq\varepsilon.$
Next choose $N$ such that for $n\geq N$ we have
$d_{\textrm{BL}}(Z_{l(n)},Z)<\varepsilon/2.$
Then by the reverse triangle inequality
$d_{\textrm{BL}}(Z_{l(n)},Y_{l(n)})\geq\left|d_{\textrm{BL}}(Z_{l(n)},Z)-d_{\textrm{BL}}(Z,Y_{l(n)})\right|\geq\varepsilon/2$
for all $n\geq N$. Since
$d_{\textrm{BL}}(Z_{l(n)},Y_{l(n)})=d_{\textrm{BL}}^{\theta_{l(n)}}\left(X_{l(n),\theta_{l(n)}},X_{\theta_{l(n)}}\right)$
by Proposition 8 we cannot have
$X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$ proving the
desired statement. ∎
The previous result will be required when proving the second part of the
upcoming uniform version of Slutsky’s lemma.
###### Proposition 15 (Uniform Slutsky’s lemma).
Let $(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$,
$(Y_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ and
$(X_{\theta})_{\theta\in\Theta}$ be Banachian random variables. Assume that
$X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$ and
$Y_{n,\theta}\overset{P}{\rightrightarrows}0$. Then, the following two
statements hold.
1. (i)
$X_{n,\theta}+Y_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$.
2. (ii)
If $(Y_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ is a family of real-
valued random variables and $(X_{\theta})_{\theta\in\Theta}$ is uniformly
tight, then $Y_{n,\theta}X_{n,\theta}\overset{P}{\rightrightarrows}0$.
###### Proof.
We first prove (i), for which we need to show that
$\sup_{\theta\in\Theta}d_{\textrm{BL}}^{\theta}(X_{n,\theta}+Y_{n,\theta},X_{\theta})\to
0$
as $n\to\infty$. We have for any $\theta$
$\displaystyle
d_{\textrm{BL}}^{\theta}(X_{n,\theta}+Y_{n,\theta},X_{\theta})\leq
d_{\textrm{BL}}^{\theta}(X_{n,\theta}+Y_{n,\theta},X_{n,\theta})+d_{\textrm{BL}}^{\theta}(X_{n,\theta},X_{\theta}),$
where the second term goes to $0$ uniformly by assumption. It remains to show
that the first term goes to $0$ uniformly. Now for $f\in{\textrm{BL}}_{1}$ we
have that for any $\varepsilon>0$ and any $x,y\in\mathcal{B}$, $\lVert
y\rVert<\varepsilon$ implies $|f(x+y)-f(x)|\leq\varepsilon$. Hence,
by using the triangle inequality for the expectation, partitioning the
integral and using the uniform continuity above, we get
$\displaystyle
d_{\textrm{BL}}^{\theta}(X_{n,\theta}+Y_{n,\theta},X_{n,\theta})\leq\varepsilon+\sup_{f\in{\textrm{BL}}_{1}}\mathbb{E}_{\theta}\left|[f(X_{n,\theta}+Y_{n,\theta})-f(X_{n,\theta})]\mathbbm{1}_{\\{\lVert
Y_{n,\theta}\rVert>\varepsilon\\}}\right|.$
We can again apply the triangle inequality and recall that $f$ is bounded by
$1$, yielding
$\sup_{\theta\in\Theta}\sup_{f\in{\textrm{BL}}_{1}}\mathbb{E}_{\theta}\left|[f(X_{n,\theta}+Y_{n,\theta})-f(X_{n,\theta})]\mathbbm{1}_{\\{\lVert
Y_{n,\theta}\rVert>\varepsilon\\}}\right|\leq
2\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
Y_{n,\theta}\rVert>\varepsilon),$
which goes to $0$ by assumption. Since $\varepsilon>0$ was arbitrary, we have
proven the desired result.
We now turn to the proof of (ii). We will apply Proposition 8 and show that
for any $(\theta_{n})_{n\in\mathbb{N}}\subseteq\Theta$ and any
$\varepsilon>0$,
$\mathbb{P}_{\theta_{n}}(\lVert
Y_{n,\theta_{n}}X_{n,\theta_{n}}\rVert\geq\varepsilon)\to 0$
as $n\to\infty$, which implies the desired result. Let $\delta>0$ be given. By
Proposition 14 there exists a compact set $K$ such that
$\sup_{n\in\mathbb{N}}\mathbb{P}_{\theta_{n}}(X_{n,\theta_{n}}\in
K^{c})\leq\delta/2.$
Since $K$ is compact, it is bounded and thus there exists $M>0$ such that
$\|x\|<M$ for all $x\in K$. By the uniform convergence in probability of
$Y_{n,\theta}$ to zero, we can find $N$ such that for all $n\geq N$,
$\mathbb{P}_{\theta_{n}}(|Y_{n,\theta_{n}}|\geq\varepsilon/M)<\delta/2.$
Putting things together, we get, for all $n\geq N$,
$\displaystyle\mathbb{P}_{\theta_{n}}(\lVert
X_{n,\theta_{n}}Y_{n,\theta_{n}}\rVert\geq\varepsilon)$
$\displaystyle\leq\mathbb{P}_{\theta_{n}}(\lVert
X_{n,\theta_{n}}Y_{n,\theta_{n}}\rVert\geq\varepsilon,X_{n,\theta_{n}}\in
K)+\mathbb{P}_{\theta_{n}}(X_{n,\theta_{n}}\in K^{c})$
$\displaystyle\leq\mathbb{P}_{\theta_{n}}(|Y_{n,\theta_{n}}|\geq\varepsilon/M)+\sup_{n\in\mathbb{N}}\mathbb{P}_{\theta_{n}}(X_{n,\theta_{n}}\in
K^{c})<\delta,$
proving the result. ∎
We will now consider the setting of uniform convergence of averages of i.i.d.
random variables, i.e. we assume that for each $\theta\in\Theta$ the sequence
$(X_{n,\theta})_{n\in\mathbb{N}}$ is i.i.d. and consider the convergence of
$1/n\sum_{i=1}^{n}X_{i,\theta}$. We first prove a small technical lemma and
then apply this lemma to prove an analogue of the Law of Large Numbers for
uniform convergence in probability for Hilbertian random variables.
###### Lemma 8.
Let $Y_{1},\dots,Y_{n}$ be independent, mean zero random variables taking
values in Hilbert space $\mathcal{H}$. Then
$\mathbb{E}\left(\left\lVert\sum_{i=1}^{n}Y_{i}\right\rVert^{2}\right)=\sum_{i=1}^{n}\mathbb{E}\lVert
Y_{i}\rVert^{2}.$
###### Proof.
Note first that
$\left\lVert\sum_{i=1}^{n}Y_{i}\right\rVert^{2}=\sum_{i=1}^{n}\sum_{j=1}^{n}\langle
Y_{i},Y_{j}\rangle.$
Let $(e_{k})_{k\in\mathbb{N}}$ denote an orthonormal basis of $\mathcal{H}$.
Then for $i\neq j$
$\mathbb{E}(\langle
Y_{i},Y_{j}\rangle)=\mathbb{E}\left(\sum_{k=1}^{\infty}\langle
Y_{i},e_{k}\rangle\langle
Y_{j},e_{k}\rangle\right)=\sum_{k=1}^{\infty}\mathbb{E}\left(\langle
Y_{i},e_{k}\rangle\langle
Y_{j},e_{k}\rangle\right)=\sum_{k=1}^{\infty}\mathbb{E}(\langle
Y_{i},e_{k}\rangle)\mathbb{E}(\langle Y_{j},e_{k}\rangle)$
where the interchange of expectation and sum uses Fubini's theorem and the
last equality uses independence. But $\mathbb{E}(\langle
Y_{i},e_{k}\rangle)=0$ for all $i$ and $k$ since the $Y_{i}$ have mean zero. ∎
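Lemma 8 can be verified exactly for a small discrete example by enumerating the product distribution (the two supports below are arbitrary mean-zero choices in $\mathbb{R}^{3}$):

```python
import numpy as np
from itertools import product

# Two independent, mean-zero discrete random vectors in R^3, P = 1/2 each point.
supp1 = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
supp2 = [np.array([0.0, 2.0, 1.0]), np.array([0.0, -2.0, -1.0])]

# E||Y1 + Y2||^2, enumerated exactly over the product distribution ...
lhs = sum(0.25 * np.sum((y1 + y2) ** 2) for y1, y2 in product(supp1, supp2))
# ... equals E||Y1||^2 + E||Y2||^2.
rhs = sum(0.5 * np.sum(y ** 2) for y in supp1 + supp2)
print(lhs, rhs)  # both equal 6.0
```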
###### Proposition 16.
Let $(X_{\theta})_{\theta\in\Theta}$ be Hilbertian random variables with
$\mathbb{E}_{\theta}(X_{\theta})=0$ for all $\theta\in\Theta$ and
$\sup_{\theta\in\Theta}\mathbb{E}_{\theta}(\lVert
X_{\theta}\rVert^{1+\eta})<C$ for some $C,\eta>0$. Let
$(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ be random variables such
that for $\theta\in\Theta$ $(X_{n,\theta})_{n\in\mathbb{N}}$ is i.i.d. with
the same distribution as $X_{\theta}$ under $\mathbb{P}_{\theta}$. Then
$\frac{1}{n}\sum_{i=1}^{n}X_{i,\theta}\overset{P}{\rightrightarrows}0.$
###### Proof.
We adapt the argument given in GCM. Defining
$S_{n,\theta}:=n^{-1}\sum_{i=1}^{n}X_{i,\theta}$, we need to show that for any
$\varepsilon>0$,
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
S_{n,\theta}\rVert\geq\varepsilon)\to 0$
as $n\to\infty$. To this end, we let $M>0$ and define
$X_{\theta}^{<}:=\mathbbm{1}_{\\{\lVert X_{\theta}\rVert\leq M\\}}X_{\theta}$
and $X_{\theta}^{>}:=\mathbbm{1}_{\\{\lVert X_{\theta}\rVert>M\\}}X_{\theta}$
and similarly $X_{i,\theta}^{<}$ and $X_{i,\theta}^{>}$ for $i\in\mathbb{N}$.
We also define $S_{n,\theta}^{<}:=n^{-1}\sum_{i=1}^{n}X_{i,\theta}^{<}$ and
$S_{n,\theta}^{>}:=n^{-1}\sum_{i=1}^{n}X_{i,\theta}^{>}$. Note first that
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
X_{\theta}\rVert>M)\leq\frac{\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\lVert
X_{\theta}\rVert^{1+\eta}}{M^{1+\eta}}\leq\frac{C}{M^{1+\eta}},$
hence choosing $M$ large, we can make $\mathbb{P}_{\theta}(\lVert
X_{\theta}\rVert>M)$ small uniformly in $\theta$. Combining this with the fact
that $\mathbb{E}(X_{\theta}^{<})=-\mathbb{E}(X_{\theta}^{>})$, we get
$\sup_{\theta\in\Theta}\lVert\mathbb{E}(X_{\theta}^{<})\rVert=\sup_{\theta\in\Theta}\lVert\mathbb{E}(X_{\theta}^{>})\rVert\leq\sup_{\theta\in\Theta}\mathbb{E}\lVert
X_{\theta}^{>}\rVert\leq\sup_{\theta\in\Theta}\left(\mathbb{E}\lVert
X_{\theta}\rVert^{1+\eta}\right)^{\frac{1}{1+\eta}}\mathbb{P}_{\theta}(\lVert
X_{\theta}\rVert>M)^{\frac{\eta}{1+\eta}}\leq
C^{\frac{1}{1+\eta}}\left(\frac{C}{M^{1+\eta}}\right)^{\frac{\eta}{1+\eta}}=\frac{C}{M^{\eta}},$ (31)
by Hölder’s inequality. This implies that choosing $M$ large we can ensure
that
$\sup_{\theta\in\Theta}\lVert\mathbb{E}(X_{\theta}^{<})\rVert<\varepsilon/3$
and for these $M$, we have
$\displaystyle\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
S_{n,\theta}\rVert>\varepsilon)$
$\displaystyle\leq\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
S_{n,\theta}^{<}\rVert>2\varepsilon/3)+\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
S_{n,\theta}^{>}\rVert>\varepsilon/3)$
$\displaystyle\leq\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
S_{n,\theta}^{<}-\mathbb{E}(X_{\theta}^{<})\rVert>\varepsilon/3)+\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
S_{n,\theta}^{>}\rVert>\varepsilon/3).$
By Markov’s inequality and the triangle inequality
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
S_{n,\theta}^{>}\rVert>\varepsilon/3)\leq\frac{3\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\lVert
X_{\theta}^{>}\rVert}{\varepsilon}$
which we have already shown in (31) is uniformly small when $M$ is
sufficiently large. Finally, by Markov’s inequality, the triangle inequality
and Lemma 8, we have
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
S_{n,\theta}^{<}-\mathbb{E}(X_{\theta}^{<})\rVert>\varepsilon/3)\leq\frac{\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\lVert
S_{n,\theta}^{<}-\mathbb{E}(X_{\theta}^{<})\rVert^{2}}{(\varepsilon/3)^{2}}\leq\frac{\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\lVert
X_{\theta}^{<}\rVert^{2}}{n(\varepsilon/3)^{2}}\leq\frac{9M^{2}}{n\varepsilon^{2}},$
hence choosing $n$ sufficiently large, we can control the final term. ∎
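A Monte Carlo sketch of Proposition 16 for a toy family satisfying the uniform moment bound (the $\theta$-scaled Rademacher family below is an arbitrary illustration, with $\mathbb{E}_{\theta}(X_{\theta})=0$ and $\sup_{\theta}\mathbb{E}_{\theta}\lVert X_{\theta}\rVert^{2}=1$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, reps = 2000, 0.2, 500
thetas = np.linspace(-1.0, 1.0, 9)

# Under P_theta, X is a theta-scaled Rademacher sign: mean zero, and the
# second moment theta^2 is bounded by 1 uniformly over the family.
worst = 0.0
for theta in thetas:
    x = theta * rng.choice([-1.0, 1.0], size=(reps, n))
    means = x.mean(axis=1)
    worst = max(worst, np.mean(np.abs(means) >= eps))
print(worst)  # Monte Carlo estimate of sup_theta P_theta(||S_n|| >= eps)
```

For these parameters Chebyshev's inequality already bounds the supremum by $1/(n\varepsilon^{2})$, so the estimate should be close to zero.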
We can extend the previous result to a special class of Banach spaces under an
additional tightness assumption. Recall that a Banach space $\mathcal{B}$ has
a _Schauder basis_ if there exists $(e_{k})_{k\in\mathbb{N}}$ such that for
every $v\in\mathcal{B}$ there exists a unique sequence of scalars
$(\alpha_{k})_{k\in\mathbb{N}}$ satisfying
$\left\lVert v-\sum_{k=1}^{K}\alpha_{k}e_{k}\right\rVert\to 0$
as $K\to\infty$.
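As a concrete toy illustration (not part of the development; the coefficient sequence below is ad hoc), in $\ell^{2}$ the coordinate vectors form such a basis and the partial-sum projections $P_{K}$ used in the next proof act by truncating coefficients:

```python
import numpy as np

# Toy model of l^2: represent v by (a long prefix of) its basis coefficients.
rng = np.random.default_rng(0)
alpha = rng.standard_normal(1000) / np.arange(1, 1001)  # square-summable coefficients

def P_K(v, K):
    """Partial-sum projection onto the span of the first K basis vectors."""
    out = np.zeros_like(v)
    out[:K] = v[:K]
    return out

# ||v - P_K v|| -> 0 as K grows, matching the definition of a Schauder basis.
tails = [np.linalg.norm(alpha - P_K(alpha, K)) for K in (10, 100, 1000)]
assert tails[0] > tails[1] > tails[2] == 0.0
```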
###### Proposition 17.
Let $(X_{\theta})_{\theta\in\Theta}$ be Banachian random variables taking
values in $\mathcal{B}$ with $\mathbb{E}_{\theta}(X_{\theta})=0$ for all
$\theta\in\Theta$ and $\sup_{\theta\in\Theta}\mathbb{E}_{\theta}(\lVert
X_{\theta}\rVert^{1+\eta})<C$ for some $C,\eta>0$. Let
$(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ be random variables such
that for $\theta\in\Theta$ $(X_{n,\theta})_{n\in\mathbb{N}}$ is i.i.d. with
the same distribution as $X_{\theta}$ under $\mathbb{P}_{\theta}$. Assume
further that $\mathcal{B}$ has a Schauder basis and that
$(X_{\theta})_{\theta\in\Theta}$ is uniformly tight. Then
$\frac{1}{n}\sum_{i=1}^{n}X_{i,\theta}\overset{P}{\rightrightarrows}0.$
###### Proof.
For $K\in\mathbb{N}$ let $P_{K}$ denote the canonical projection of
$v\in\mathcal{B}$ onto the first $K$ components of the Schauder basis, i.e.
the mapping
$v=\sum_{k=1}^{\infty}\alpha_{k}e_{k}\mapsto\sum_{k=1}^{K}\alpha_{k}e_{k}.$
This mapping is linear and satisfies that $\sup_{K\in\mathbb{N}}\lVert
P_{K}\rVert_{\textrm{op}}<\infty$ by [Li2017]. By the triangle inequality
$\mathbb{P}_{\theta}\left(\left\lVert\frac{1}{n}\sum_{i=1}^{n}X_{i,\theta}\right\rVert\geq\varepsilon\right)\leq\mathbb{P}_{\theta}\left(\left\lVert\frac{1}{n}\sum_{i=1}^{n}P_{K}X_{i,\theta}\right\rVert\geq\varepsilon/2\right)+\mathbb{P}_{\theta}\left(\left\lVert\frac{1}{n}\sum_{i=1}^{n}(X_{i,\theta}-P_{K}X_{i,\theta})\right\rVert\geq\varepsilon/2\right),$
hence it is sufficient to show that the first term converges to $0$ uniformly
as $n\to\infty$ for fixed $K$ and that the second term converges to $0$
uniformly as $K\to\infty$. By Proposition 16 the first term converges to $0$
for fixed $K$ since $(P_{K}X_{\theta})_{\theta\in\Theta}$ are concentrated on
a finite-dimensional subspace of $\mathcal{B}$ and since
$\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\left(\lVert
P_{K}X_{\theta}\rVert^{1+\eta}\right)\leq\lVert
P_{K}\rVert_{\textrm{op}}^{1+\eta}\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\left(\lVert
X_{\theta}\rVert^{1+\eta}\right)<\infty.$
It remains to show that when we choose $K$ large, the second term is small.
[Bogachev2018] characterises tightness of families of random variables on Banach
spaces with a Schauder basis. In particular, uniformly tight families satisfy
$\lim_{K\to\infty}\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
X_{\theta}-P_{K}X_{\theta}\rVert>\varepsilon)=0$ (32)
for every $\varepsilon>0$. Applying Markov’s inequality, partitioning the
integral, applying Hölder’s inequality and the triangle inequality yields that
for any $t>0$ and $\delta>0$,
$\displaystyle\sup_{\theta\in\Theta}\mathbb{P}_{\theta}\left(\left\lVert\frac{1}{n}\sum_{i=1}^{n}(X_{i,\theta}-P_{K}X_{i,\theta})\right\rVert\geq
t\right)\leq\frac{1}{t}\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\lVert
X_{\theta}-P_{K}X_{\theta}\rVert$
$\displaystyle\leq\frac{1}{t}\sup_{\theta\in\Theta}\left\\{\delta+\mathbb{E}_{\theta}\left(\lVert
X_{\theta}-P_{K}X_{\theta}\rVert\mathbbm{1}_{\\{\lVert
X_{\theta}-P_{K}X_{\theta}\rVert>\delta\\}}\right)\right\\}$
$\displaystyle\leq\frac{1}{t}\sup_{\theta\in\Theta}\left\\{\delta+\left(\mathbb{E}_{\theta}\lVert
X_{\theta}-P_{K}X_{\theta}\rVert^{1+\eta}\right)^{\frac{1}{1+\eta}}\left(\mathbb{P}_{\theta}(\lVert
X_{\theta}-P_{K}X_{\theta}\rVert>\delta)\right)^{\frac{\eta}{1+\eta}}\right\\}$
$\displaystyle\leq\frac{1}{t}\left\\{\delta+(1+\sup_{K\in\mathbb{N}}\lVert
P_{K}\rVert_{\textrm{op}})C\sup_{\theta\in\Theta}\left(\mathbb{P}_{\theta}(\lVert
X_{\theta}-P_{K}X_{\theta}\rVert>\delta)\right)^{\frac{\eta}{1+\eta}}\right\\}.$
By (32), we can choose $\delta$ and $K$ such that the upper bound is
arbitrarily small, hence we have shown the desired result. ∎
For the uniform central limit theorem, we only consider the Hilbertian case
since this is sufficient for our needs and avoids technical problems to do
with tightness and the regular (non-uniform) central limit theorem on Banach
spaces. We first give some sufficient conditions for uniform convergence in
distribution of Hilbertian random variables.
###### Proposition 18.
Let $(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ and
$(X_{\theta})_{\theta\in\Theta}$ be Hilbertian random variables. Assume that
1. (i)
for all $h\in\mathcal{H}$, $\langle
X_{n,\theta},h\rangle\overset{\mathcal{D}}{\rightrightarrows}\langle
X_{\theta},h\rangle$,
2. (ii)
$(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ is sequentially tight, and
3. (iii)
$(X_{\theta})_{\theta\in\Theta}$ is uniformly tight.
Then, $X_{n,\theta}\overset{\mathcal{D}}{\rightrightarrows}X_{\theta}$.
###### Proof.
Let $(\theta_{n})_{n\in\mathbb{N}}\subseteq\Theta$ and let $Y_{n}$ have
distribution $X_{n,\theta_{n}}(\mathbb{P}_{\theta_{n}})$ and $Z_{n}$ have
distribution $X_{\theta_{n}}(\mathbb{P}_{\theta_{n}})$, defined on a probability
space $(\Omega,\mathcal{F},\mathbb{P})$. Suppose for contradiction that
$d_{\textrm{BL}}^{\theta_{n}}(X_{n,\theta_{n}},X_{\theta_{n}})=d_{\textrm{BL}}(Y_{n},Z_{n})\not\to
0$
as $n\to\infty$. Then there exists a subsequence of $Y_{n}$ and $Z_{n}$ and an
$\varepsilon>0$ such that for all $n$
$d_{\textrm{BL}}(Y_{k(n)},Z_{k(n)})\geq\varepsilon,$
where $k:\mathbb{N}\to\mathbb{N}$ is a strictly increasing function. By
sequential tightness of $(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$,
there exists a subsequence of $(Y_{k(n)})_{n\in\mathbb{N}}$, represented by
the index function $m=k\circ k^{\prime}$ for a strictly increasing
$k^{\prime}:\mathbb{N}\to\mathbb{N}$ such that the subsequence
$(Y_{m(n)})_{n\in\mathbb{N}}$ converges weakly to some random variable $Y$. By
uniform tightness of $(X_{\theta})_{\theta\in\Theta}$ there exists a further subsequence of
$(Z_{m(n)})_{n\in\mathbb{N}}$, represented by the index function $l=m\circ
k^{\prime\prime}$ for a strictly increasing
$k^{\prime\prime}:\mathbb{N}\to\mathbb{N}$ such that
$(Z_{l(n)})_{n\in\mathbb{N}}$ converges weakly to some random variable $Z$.
Note that since the range of $l$ is a subset of the range of $m$,
$(Y_{l(n)})_{n\in\mathbb{N}}$ also converges to $Y$.
We intend to show that the distributions of $Z$ and $Y$ are equal. The
distribution of a Hilbertian random variable is completely determined by the
distributions of the linear functionals [Hsing2015, Theorem 7.1.2]. Indeed,
for any $h\in\mathcal{H}$ and any $n$,
$\displaystyle d_{\textrm{BL}}\left(\langle Y,h\rangle,\langle
Z,h\rangle\right)\leq d_{\textrm{BL}}\left(\langle Y,h\rangle,\langle
Y_{l(n)},h\rangle\right)+d_{\textrm{BL}}\left(\langle
Y_{l(n)},h\rangle,\langle
Z_{l(n)},h\rangle\right)+d_{\textrm{BL}}\left(\langle
Z_{l(n)},h\rangle,\langle Z,h\rangle\right).$
The first and third terms on the right-hand side go to zero by the weak convergence established above, and the middle term goes to zero by assumption (i). This shows that $Y$ and $Z$ are equal in distribution. Now,
$d_{\textrm{BL}}(Y_{l(n)},Z_{l(n)})\leq d_{\textrm{BL}}(Y_{l(n)},Z)+d_{\textrm{BL}}(Z,Z_{l(n)}),$
and since $Y$ and $Z$ have the same distribution, the first term equals $d_{\textrm{BL}}(Y_{l(n)},Y)\to 0$, while the second term goes to zero by weak convergence. Hence, we can choose $N$ making $l(N)$ large enough that the right-hand side is smaller
than $\varepsilon/2$. This is a contradiction since we chose $k$ such that
$d_{\textrm{BL}}(Y_{k(n)},Z_{k(n)})\geq\varepsilon$ for all $n\in\mathbb{N}$
but $(l(n))_{n\in\mathbb{N}}\subseteq(k(n))_{n\in\mathbb{N}}$. ∎
We can now prove a uniform central limit theorem in Hilbert spaces.
###### Proposition 19.
Let $(X_{\theta})_{\theta\in\Theta}$ be Hilbertian random variables with
$\mathbb{E}_{\theta}(X_{\theta})=0$ for all $\theta$ and
$\sup_{\theta\in\Theta}\mathbb{E}_{\theta}(\lVert
X_{\theta}\rVert^{2+\eta})<K$ for some $K,\eta>0$. Denote
$(\mathscr{C}_{\theta})_{\theta\in\Theta}$ the family of covariance operators
of $X_{\theta}$ under each $\mathbb{P}_{\theta}$, i.e.
$\mathscr{C}_{\theta}=\mathbb{E}_{\theta}(X_{\theta}\otimes X_{\theta})$. Let
$(X_{n,\theta})_{n\in\mathbb{N},\theta\in\Theta}$ be random variables such
that for $\theta\in\Theta$ $(X_{n,\theta})_{n\in\mathbb{N}}$ is i.i.d. with
the same distribution as $X_{\theta}$ under $\mathbb{P}_{\theta}$. Assume
further that for some orthonormal basis $(e_{k})_{k=1}^{\infty}$ of
$\mathcal{H}$
$\lim_{K\to\infty}\sup_{\theta\in\Theta}\sum_{k=K}^{\infty}\langle\mathscr{C}_{\theta}e_{k},e_{k}\rangle=0.$
(33)
Then
$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i,\theta}\overset{\mathcal{D}}{\rightrightarrows}Z$
where the distribution of $Z$ under $\mathbb{P}_{\theta}$ is
$\mathcal{N}(0,\mathscr{C}_{\theta})$.
###### Proof.
We intend to apply Proposition 18 and thus check the conditions. For the first
condition, let $h\in\mathcal{H}$ be given and let $Y_{n,\theta}=\langle
X_{n,\theta},h\rangle$ and let $Y$ be distributed as
$\langle\mathcal{N}(0,\mathscr{C}_{\theta}),h\rangle$ under
$\mathbb{P}_{\theta}$, i.e. as
$\mathcal{N}(0,\langle\mathscr{C}_{\theta}h,h\rangle)$. Note that
$\left\langle\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i,\theta},h\right\rangle=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}Y_{i,\theta}$
hence by Proposition 8 it is sufficient for the first condition that for any
$(\theta_{n})_{n\in\mathbb{N}}\subset\Theta$
$\lim_{n\to\infty}d_{\textrm{BL}}^{\theta_{n}}(Y_{n},Y)=0.$
Suppose for contradiction that there exists a sequence
$(\theta_{n})_{n\in\mathbb{N}}$ such that the limit does not equal $0$. Then
there exists an $\varepsilon>0$ and a strictly increasing function
$m:\mathbb{N}\to\mathbb{N}$ such that
$d_{\textrm{BL}}^{\theta_{m(n)}}(Y_{n},Y)\geq\varepsilon$
for any $n\in\mathbb{N}$. Denoting for $n\in\mathbb{N}$,
$\sigma^{2}_{\theta_{m(n)}}:=\langle\mathscr{C}_{\theta_{m(n)}}h,h\rangle$, we
note that the sequence
$\left(\sigma^{2}_{\theta_{m(n)}}\right)_{n\in\mathbb{N}}$ is bounded by
assumption and hence by the Bolzano–Weierstrass theorem it has a convergent
subsequence, i.e. there exists $\sigma^{2}\geq 0$ and a strictly increasing
$m^{\prime}:\mathbb{N}\to\mathbb{N}$ such that, letting $l=m\circ m^{\prime}$,
$\sigma^{2}_{\theta_{l(n)}}\to\sigma^{2}$. Letting $W$ denote a random
variable with distribution $\mathcal{N}(0,\sigma^{2})$ for any
$\mathbb{P}_{\theta}$, by Scheffé’s lemma this implies that
$\lim_{n\to\infty}d_{\textrm{BL}}^{\theta_{l(n)}}(Y,W)=0.$
Further, the Lindeberg–Feller theorem [Durrett2019, Theorem 3.4.10] yields
that
$\lim_{n\to\infty}d_{\textrm{BL}}^{\theta_{l(n)}}(Y_{n},W)=0,$
since Lyapunov’s condition is fulfilled by the uniform bound on the
$(2+\eta)$th moment of $X_{\theta}$. Because the range of $l$ is contained in
the range of $m$, this is a contradiction, hence the first condition is
fulfilled.
The third condition follows immediately from the assumption in (33) by
[Bogachev2018]. Define
$S_{n,\theta}:=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i,\theta}$ for
$n\in\mathbb{N}$. The second condition follows from the same assumption and
results by observing that $\mathbb{E}_{\theta}\lVert S_{n,\theta}\rVert^{2}$
is bounded by the same constant bounding $\mathbb{E}_{\theta}\lVert
X_{\theta}\rVert^{2}$ and that
$\mathrm{Cov}_{\theta}(S_{n,\theta})=\frac{1}{n}\sum_{i=1}^{n}\mathrm{Cov}_{\theta}(X_{i,\theta})=\mathscr{C}_{\theta}.$
This shows that the family of measures
$(X_{n,\theta}(\mathbb{P}_{\theta}))_{n\in\mathbb{N},\theta\in\Theta}$ is
tight which implies the second condition. ∎
## Appendix C Proofs of results in Sections 3.2 and 4
This section contains the proofs of all results in Sections 3.2 and 4 except
Proposition 1 which is proven in Section A.2.2. The proofs are self-contained,
but readers new to the field may find the following references helpful. For
general results about random variables on metric spaces (Slutsky’s theorem,
etc.) see [Billingsley1999]. For more specific results about Hilbertian random
variables, Bochner integrals and operators on Hilbert spaces, see [Hsing2015].
For existence and construction of conditional expectations on Hilbert spaces,
see [scalora1961]. In this section, we sometimes omit the subscript $P$ when it
is clear from the context.
### C.1 Derivation of (7)
We first prove a small lemma.
###### Lemma 9.
Let $x_{1},\dots,x_{n}$ be elements of a Hilbert space $\mathcal{H}$. Then
$\left\|\sum_{i=1}^{n}x_{i}\right\|^{2}=\sum_{i=1}^{n}\sum_{j=1}^{n}\langle
x_{i},x_{j}\rangle$
and the non-zero eigenvalues of the operator
$\mathscr{A}:=\sum_{i=1}^{n}x_{i}\otimes x_{i}$
equal the eigenvalues of the matrix $A$ with entries
$A_{ij}:=\langle x_{i},x_{j}\rangle.$
###### Proof.
The first claim is immediate, since
$\left\|\sum_{i=1}^{n}x_{i}\right\|^{2}=\left\langle\sum_{i=1}^{n}x_{i},\sum_{j=1}^{n}x_{j}\right\rangle=\sum_{i=1}^{n}\sum_{j=1}^{n}\langle
x_{i},x_{j}\rangle.$
For the second claim, note that we can write the operator $\mathscr{A}$ as
$\mathscr{B}^{*}\mathscr{B}$ where $\mathscr{B}:\mathcal{H}\to\mathbb{R}^{n}$
is an operator given by
$\mathscr{B}h=\begin{pmatrix}\langle x_{1},h\rangle\\\ \vdots\\\ \langle
x_{n},h\rangle\end{pmatrix}$
with adjoint $\mathscr{B}^{*}$ given by
$\mathscr{B}^{*}v=\sum_{i=1}^{n}v_{i}x_{i}.$
The result now follows since $A=\mathscr{B}\mathscr{B}^{*}$. ∎
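As a standalone numerical sanity check of the lemma (illustration only, with $\mathcal{H}=\mathbb{R}^{d}$ and generic vectors; all names below are ad hoc), one can compare the two spectra directly:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 10
X = rng.standard_normal((n, d))      # rows are x_1, ..., x_n in H = R^d

A_op = X.T @ X                       # the operator sum_i x_i (x) x_i on R^d
A_gram = X @ X.T                     # the Gram matrix A_ij = <x_i, x_j>

# First claim: ||sum_i x_i||^2 equals the sum of all pairwise inner products.
assert np.isclose(np.linalg.norm(X.sum(axis=0)) ** 2, A_gram.sum())

# Second claim: the non-zero eigenvalues of the operator are exactly the
# eigenvalues of the Gram matrix (the remaining d - n eigenvalues are 0).
ev_op = np.sort(np.linalg.eigvalsh(A_op))[-n:]
ev_gram = np.sort(np.linalg.eigvalsh(A_gram))
assert np.allclose(ev_op, ev_gram)
```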
Applying the first result of the lemma to the sequence
$\mathscr{R}_{i}/\sqrt{n}$ for $i=1,\dots,n$, viewed as Hilbert–Schmidt
operators from $\mathcal{H}_{X}$ to $\mathcal{H}_{Y}$, we get that
$T_{n}=\frac{1}{n}\left\|\sum_{i=1}^{n}\mathscr{R}_{i}\right\|^{2}=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}\langle\mathscr{R}_{i},\mathscr{R}_{j}\rangle=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}\langle\hat{\varepsilon}_{i},\hat{\varepsilon}_{j}\rangle\langle\hat{\xi}_{i},\hat{\xi}_{j}\rangle.$
Applying the second result of the lemma to the sequence
$(\mathscr{R}_{i}-\bar{\mathscr{R}})/\sqrt{n-1}$, we get that the eigenvalues
of
$\hat{\mathscr{C}}=\frac{1}{n-1}\sum_{i=1}^{n}(\mathscr{R}_{i}-\bar{\mathscr{R}})\otimes_{\textrm{HS}}(\mathscr{R}_{i}-\bar{\mathscr{R}})$
equal the eigenvalues of the matrix $A$ with entries
$A_{ij}:=\frac{1}{n-1}\langle\mathscr{R}_{i}-\bar{\mathscr{R}},\mathscr{R}_{j}-\bar{\mathscr{R}}\rangle.$
Using bilinearity of the inner product, we can expand and see that
$A=\Gamma-J\Gamma-\Gamma J+J\Gamma J$
as desired.
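Both identities of this subsection can be checked numerically with generic vectors standing in for the $\mathscr{R}_{i}$ (a sketch for illustration only, assuming the convention $\Gamma_{ij}=\langle\mathscr{R}_{i},\mathscr{R}_{j}\rangle/(n-1)$ and $J=\mathbbm{1}\mathbbm{1}^{\top}/n$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 6, 8
R = rng.standard_normal((n, d))            # rows stand in for R_1, ..., R_n

# T_n = ||sum_i R_i||^2 / n = (1/n) sum_ij <R_i, R_j>
assert np.isclose(np.linalg.norm(R.sum(axis=0)) ** 2 / n, (R @ R.T).sum() / n)

# The centred Gram matrix A equals Gamma - J Gamma - Gamma J + J Gamma J,
# with Gamma the scaled Gram matrix and J the all-ones matrix over n.
Gamma = R @ R.T / (n - 1)
J = np.ones((n, n)) / n
Rc = R - R.mean(axis=0)                    # R_i - R_bar
A = Rc @ Rc.T / (n - 1)
assert np.allclose(A, Gamma - J @ Gamma - Gamma @ J + J @ Gamma @ J)
```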
### C.2 Derivation of (11)
###### Proof.
Fix $n\geq 2$ and write $p:=1+a(n)$. Let
$(\tilde{x}_{i},\tilde{y}_{i},\tilde{z}_{i})_{i=1}^{n}$ denote mean-centred
observations, so e.g. $\tilde{z}_{i}=z_{i}-\sum_{j=1}^{n}z_{j}/n$, and let
$\tilde{X}^{(n)}=(\tilde{x}_{1},\ldots,\tilde{x}_{n})^{\top}\in{\mathbb{R}}^{n}$.
Let $W_{n}\in{\mathbb{R}}^{n\times p}$ be the design matrix with $i$th row
given by $(\tilde{y}_{i},\tilde{z}_{i1},\ldots,\tilde{z}_{ia(n)})$, and let
$\hat{\theta}_{n}\in{\mathbb{R}}$ be the first component of the coefficient
vector from regressing $\tilde{X}^{(n)}$ onto $W_{n}$, so
$\hat{\theta}_{n}:=\\{(W_{n}^{\top}W_{n})^{-1}W_{n}^{\top}\tilde{X}^{(n)}\\}_{1}$.
Further, let $P_{n}\in{\mathbb{R}}^{n\times n}$ be the orthogonal projection
onto the column space of $W_{n}$. Then
$\psi_{n}^{\textrm{OLS}}=\mathbbm{1}_{\\{|\hat{\theta}_{n}|\geq
t_{n-p-1}(\alpha/2)\hat{\sigma}_{W,n}\|(I-P_{n})\tilde{X}^{(n)}\|_{2}/\sqrt{n-p-1}\\}},$
where $t_{n-p-1}(\alpha/2)$ is the upper $\alpha/2$-point of a $t$ distribution
on $n-p-1$ degrees of freedom, and
$\hat{\sigma}_{W,n}^{2}:=\\{(W_{n}^{\top}W_{n})^{-1}\\}_{11}$. Fix
$Q\in\mathcal{Q}$; in the following we will suppress dependence on this for
notational simplicity. Then there exists $r\in\mathbb{N}$ such that
$\theta:=\frac{\mathrm{Cov}(Y,X\,|\,Z)}{\mathrm{Var}(X\,|\,Z)}=\frac{\mathrm{Cov}(Y,X\,|\,Z_{1},\ldots,Z_{r})}{\mathrm{Var}(X\,|\,Z_{1},\ldots,Z_{r})}>0,$
and so for $n$ such that $a(n)>r$,
$\hat{\theta}_{n}\,|\,W_{n}\sim\mathcal{N}(\theta,\sigma^{2}\hat{\sigma}_{W,n}^{2})$
where
$\sigma^{2}:=\mathrm{Var}(X\,|\,Y,Z)=\mathrm{Var}(X\,|\,Y,Z_{1},\ldots,Z_{r})>0.$
Note that
$\|(I-P_{n})\tilde{X}^{(n)}\|_{2}^{2}/\sigma^{2}\sim\chi^{2}_{n-p-1}$, and so
by the weak law of large numbers and the continuous mapping theorem,
$\|(I-P_{n})\tilde{X}^{(n)}\|_{2}/\sqrt{n-p-1}\stackrel{{\scriptstyle
P}}{{\to}}\sigma$. To show that ${\mathbb{P}}(\psi_{n}^{\textrm{OLS}}=1)\to
1$, it therefore suffices to show that
$\hat{\sigma}_{W,n}^{2}\stackrel{{\scriptstyle P}}{{\to}}0$.
Now writing $\Sigma_{n}=\mathrm{Cov}(Y,Z_{1},\ldots,Z_{a(n)})$, we have that
$W_{n}^{\top}W_{n}$ has a Wishart distribution on $n-1$ degrees of freedom:
$W_{n}^{\top}W_{n}\sim W_{p}(\Sigma_{n},n-1)$. Thus,
$(\Sigma_{n}^{-1})_{11}/\hat{\sigma}_{W,n}^{2}\sim\chi^{2}_{n-p}$ and
$(\Sigma_{n}^{-1})_{11}=\mathrm{Var}(Y\,|\,Z_{1},\ldots,Z_{r})^{-1}=\mathrm{Var}(Y\,|\,Z)^{-1}<\infty$, since $\mathrm{Cov}(Y,X\,|\,Z)\neq 0$ forces $\mathrm{Var}(Y\,|\,Z)>0$.
We therefore see that as $n\to\infty$ and hence $n-p\to\infty$, we have
$\hat{\sigma}_{W,n}^{2}\stackrel{{\scriptstyle P}}{{\to}}0$ as required. ∎
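For illustration, the construction of $\psi_{n}^{\textrm{OLS}}$ above can be sketched in a few lines; the data-generating mechanism, the sample size and all constants below are hypothetical choices, not taken from the text:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, a = 500, 5
p = 1 + a
Z = rng.standard_normal((n, a))
Y = Z[:, 0] + rng.standard_normal(n)
X = 0.5 * Y + Z[:, 0] + rng.standard_normal(n)   # Cov(X, Y | Z) != 0

# Design matrix with centred columns (Y, Z_1, ..., Z_a) and centred response.
W = np.column_stack([Y, Z])
W = W - W.mean(axis=0)
Xc = X - X.mean()

G_inv = np.linalg.inv(W.T @ W)
beta = G_inv @ W.T @ Xc
theta_hat = beta[0]                              # first regression coefficient
resid = Xc - W @ beta
se = np.sqrt(G_inv[0, 0]) * np.linalg.norm(resid) / np.sqrt(n - p - 1)

alpha = 0.05
psi = int(abs(theta_hat) >= stats.t.ppf(1 - alpha / 2, n - p - 1) * se)
assert psi == 1   # the conditional dependence is strong, so the test rejects
```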
### C.3 Proofs of results in Section 4.2
In this section we provide proofs of Theorems 3 and 2. The proofs rely heavily
on the theory developed in Section B.
#### C.3.1 Auxiliary lemmas
We first prove some auxiliary lemmas that will be needed for the upcoming
proofs.
###### Lemma 10.
Let $(X_{n})_{n\in\mathbb{N}}$ be a sequence of real-valued random variables
defined on $(\Omega,\mathcal{F})$ equipped with a family of probability
measures $(\mathbb{P}_{\theta})_{\theta\in\Theta}$. Let $X$ be another real-
valued random variable on the same space and let
$(\mathcal{F}_{n})_{n\in\mathbb{N}}$ be a sequence of sub-$\sigma$-algebras of
$\mathcal{F}$. If
$\mathbb{E}_{\theta}(|X_{n}|\,|\,\mathcal{F}_{n})\overset{P}{\rightrightarrows}0$
then $X_{n}\overset{P}{\rightrightarrows}0$.
###### Proof.
Let $\epsilon>0$ be given. By Markov’s inequality
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(|X_{n}|\geq\epsilon)\leq\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(|X_{n}|\wedge\epsilon\geq\epsilon)\leq\sup_{\theta\in\Theta}\frac{\mathbb{E}_{\theta}(|X_{n}|\wedge\epsilon)}{\epsilon}.$
We will be done if we can show that
$\sup_{\theta\in\Theta}\mathbb{E}_{\theta}(|X_{n}|\wedge\epsilon)\to 0$ as
$n\to\infty$. Note that by monotonicity of conditional expectations, for each
$\theta\in\Theta$ we have
$\mathbb{E}_{\theta}(|X_{n}|\wedge\epsilon\,|\,\mathcal{F}_{n})\leq\mathbb{E}_{\theta}(\epsilon\,|\,\mathcal{F}_{n})=\epsilon,$
and
$\mathbb{E}_{\theta}(|X_{n}|\wedge\epsilon\,|\,\mathcal{F}_{n})\leq\mathbb{E}_{\theta}(|X_{n}|\,|\,\mathcal{F}_{n}).$
Combining both of the above expressions, we get
$\mathbb{E}_{\theta}(|X_{n}|\wedge\epsilon\,|\,\mathcal{F}_{n})\leq\mathbb{E}_{\theta}(|X_{n}|\,|\,\mathcal{F}_{n})\wedge\epsilon.$
This lets us write by the tower property and monotonicity of integrals,
$\sup_{\theta\in\Theta}\mathbb{E}_{\theta}(|X_{n}|\wedge\epsilon)=\sup_{\theta\in\Theta}\mathbb{E}_{\theta}[\mathbb{E}_{\theta}(|X_{n}|\wedge\epsilon\,|\,\mathcal{F}_{n})]\leq\sup_{\theta\in\Theta}\mathbb{E}_{\theta}[\mathbb{E}_{\theta}(|X_{n}|\,|\,\mathcal{F}_{n})\wedge\epsilon].$
Let $Y_{n}:=\mathbb{E}_{\theta}(|X_{n}|\,|\,\mathcal{F}_{n})\wedge\epsilon$
and let $\delta>0$ be given. Then
$\displaystyle\sup_{\theta\in\Theta}\mathbb{E}_{\theta}(Y_{n})$
$\displaystyle\leq\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\left(Y_{n}\mathbbm{1}_{\\{Y_{n}<\delta/2\\}}\right)+\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\left(Y_{n}\mathbbm{1}_{\\{Y_{n}\geq\delta/2\\}}\right)$
$\displaystyle\leq\frac{\delta}{2}+\epsilon\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(Y_{n}\geq\delta/2).$
By assumption, for any $\eta>0$, we can choose $N\in\mathbb{N}$ so that for
all $n\geq N$,
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}\left(\mathbb{E}_{\theta}(|X_{n}|\,|\,\mathcal{F}_{n})\geq\delta/2\right)<\eta.$
Since $\\{Y_{n}\geq\delta/2\\}\subseteq\\{\mathbb{E}_{\theta}(|X_{n}|\,|\,\mathcal{F}_{n})\geq\delta/2\\}$, choosing $N$ corresponding to $\eta=\frac{\delta}{2\epsilon}$, we get
$\sup_{\theta\in\Theta}\mathbb{E}_{\theta}(Y_{n})<\delta,$
proving the desired result. ∎
###### Lemma 11.
Let $X$ and $Y$ be random variables defined on the probability space
$(\Omega,\mathcal{F},\mathbb{P})$ with values in a Hilbert space
$\mathcal{H}$. Let $\mathcal{D}$ be a sub-$\sigma$-algebra of $\mathcal{F}$ so
that $X$ is $\mathcal{D}$-measurable. Assume that $\mathbb{E}(\lVert
X\rVert)$, $\mathbb{E}(\lVert Y\rVert)$ and $\mathbb{E}(\lVert X\rVert\lVert
Y\rVert)$ all exist. Then
$\mathbb{E}(\langle X,Y\rangle\,|\,\mathcal{D})=\langle
X,\mathbb{E}(Y\,|\,\mathcal{D})\rangle.$
###### Proof.
To show the result, we need to show that $\langle
X,\mathbb{E}(Y\,|\,\mathcal{D})\rangle$ is $\mathcal{D}$-measurable and that
integrals over $\mathcal{D}$-sets of $\langle X,Y\rangle$ and $\langle
X,\mathbb{E}(Y\,|\,\mathcal{D})\rangle$ coincide. $\langle
X,\mathbb{E}(Y\,|\,\mathcal{D})\rangle$ is $\mathcal{D}$-measurable by
continuity of the inner product and the fact that $X$ and
$\mathbb{E}(Y\,|\,\mathcal{D})$ are $\mathcal{D}$-measurable by assumption and
definition, respectively. By expanding the inner product in an orthonormal
basis $(e_{k})_{k\in\mathbb{N}}$ of $\mathcal{H}$, we get
$\displaystyle\int_{D}\langle
X,Y\rangle\,\mathrm{d}\mathbb{P}=\int_{D}\sum_{k=1}^{\infty}\langle
X,e_{k}\rangle\langle
Y,e_{k}\rangle\,\mathrm{d}\mathbb{P}=\sum_{k=1}^{\infty}\int_{D}\mathbb{E}(\langle
X,e_{k}\rangle\langle Y,e_{k}\rangle\,|\,\mathcal{D})\,\mathrm{d}\mathbb{P}$
$\displaystyle=\sum_{k=1}^{\infty}\int_{D}\langle
X,e_{k}\rangle\langle\mathbb{E}(Y\,|\,\mathcal{D}),e_{k}\rangle\,\mathrm{d}\mathbb{P}=\int_{D}\sum_{k=1}^{\infty}\langle
X,e_{k}\rangle\langle\mathbb{E}(Y\,|\,\mathcal{D}),e_{k}\rangle\,\mathrm{d}\mathbb{P}=\int_{D}\langle
X,\mathbb{E}(Y\,|\,\mathcal{D})\rangle\,\mathrm{d}\mathbb{P},$
by using the interchangeability of sums and integrals and the property
$\mathbb{E}(\langle
Y,e_{i}\rangle\,|\,\mathcal{D})=\langle\mathbb{E}(Y\,|\,\mathcal{D}),e_{i}\rangle$
of conditional expectations on Hilbert spaces.
∎
###### Lemma 12.
Let $q$ denote the function that maps a self-adjoint, positive semidefinite,
trace-class operator $\mathscr{C}$ on a separable Hilbert space
$\mathcal{H}$, to the $1-\alpha$ quantile of the
$\lVert\mathcal{N}(0,\mathscr{C})\rVert^{2}$ distribution. Then $q$ is
continuous in trace norm and the restriction of $q$ to a bounded subset
$\mathcal{C}$ of covariance operators satisfying
$\lim_{K\to\infty}\sup_{\mathscr{C}\in\mathcal{C}}\sum_{k=K}^{\infty}\langle\mathscr{C}e_{k},e_{k}\rangle=0$
(34)
for some orthonormal basis $(e_{k})_{k=1}^{\infty}$ of $\mathcal{H}$, is
uniformly continuous in trace norm.
###### Proof.
Let $\mathscr{C}_{n}$ be a sequence of self-adjoint, positive semidefinite,
trace-class operators converging to $\mathscr{C}$ in trace norm. Then by
[Bogachev2018]
$\mathcal{N}(0,\mathscr{C}_{n})\overset{\mathcal{D}}{\to}\mathcal{N}(0,\mathscr{C})$
and by the continuous mapping theorem we have
$\lVert\mathcal{N}(0,\mathscr{C}_{n})\rVert^{2}\overset{\mathcal{D}}{\to}\lVert\mathcal{N}(0,\mathscr{C})\rVert^{2}$.
This implies the convergence of the quantile functions by the Portmanteau
theorem and [Vandervaart1998], and hence $q$ is continuous.
By the Heine–Cantor theorem, the restriction of $q$ to the closure of
$\mathcal{C}$ is uniformly continuous if $\mathcal{C}$ is relatively compact.
Restricting $q$ further to $\mathcal{C}$ preserves the uniform continuity.
[Bogachev2018] states that equation (34) exactly characterises the relatively
compact sets of trace class operators. ∎
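As a Monte Carlo illustration of the map $q$ (a sketch with a hypothetical finite-rank surrogate operator): $\lVert\mathcal{N}(0,\mathscr{C})\rVert^{2}$ has the law of $\sum_{k}\lambda_{k}\chi^{2}_{1}$ for the eigenvalues $\lambda_{k}$ of $\mathscr{C}$, and operators that are close in trace norm produce close quantiles:

```python
import numpy as np

def q_mc(lam, alpha=0.05, n_sim=200_000, seed=4):
    """Monte Carlo estimate of the 1 - alpha quantile of sum_k lam_k * chi^2_1."""
    rng = np.random.default_rng(seed)
    samples = (lam * rng.standard_normal((n_sim, len(lam))) ** 2).sum(axis=1)
    return np.quantile(samples, 1 - alpha)

lam = np.array([1.0, 0.5, 0.25, 0.125])   # eigenvalues of a toy covariance operator
q1 = q_mc(lam)
q2 = q_mc(lam * 1.001)                    # a small trace-norm perturbation
assert abs(q1 - q2) / q1 < 0.05           # nearby operators give nearby quantiles
```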
###### Lemma 13.
Let $\Theta\subseteq\mathbb{R}_{+}$ and let $(\mu_{\theta})_{\theta\in\Theta}$
be the family of probability distributions on $\mathbb{R}$ where for each
$\theta\in\Theta$, $\mu_{\theta}$ denotes the distribution of $\theta Z$ where
$Z\sim\chi^{2}_{1}$. If $\Theta$ is bounded away from $0$, the family is
uniformly absolutely continuous with respect to the Lebesgue measure
$\lambda$.
###### Proof.
Note that the density $f_{\theta}$ of $\mu_{\theta}$ with respect to the
Lebesgue measure is
$f_{\theta}(x)=\frac{1}{\sqrt{2\pi\theta}}\frac{e^{-\frac{1}{2\theta}x}}{\sqrt{x}}.$
We will apply Proposition 11 by showing that $\sup_{\theta\in\Theta}\int
f_{\theta}^{3/2}\,\mathrm{d}\lambda<\infty$, which is sufficient for uniform
integrability by [Bogachev2007]. We see that
$\int
f_{\theta}(x)^{3/2}\,\mathrm{d}\lambda=\frac{1}{\sqrt[4]{8\pi^{3}\theta^{3}}}\int\frac{e^{-\frac{3}{4\theta}x}}{\sqrt[4]{x^{3}}}\,\mathrm{d}\lambda,$
and we recognise the final integral as the unnormalised density of a
$\Gamma(1/4,3/(4\theta))$ random variable. Thus,
$\int
f_{\theta}(x)^{3/2}\,\mathrm{d}\lambda=\frac{1}{\sqrt[4]{8\pi^{3}\theta^{3}}}\frac{\Gamma(1/4)\sqrt[4]{4\theta}}{\sqrt[4]{3}}=\frac{\Gamma(1/4)}{\sqrt[4]{6\pi^{3}\theta^{2}}}.$
This is finite for all $\theta\in\Theta$ since $\Theta$ is bounded away from
zero, proving the desired result. ∎
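The closed form can be verified numerically (a standalone sanity check, not part of the proof):

```python
import numpy as np
from scipy import integrate, special

def f(x, theta):
    """Density of theta * chi^2_1 with respect to Lebesgue measure."""
    return np.exp(-x / (2 * theta)) / np.sqrt(2 * np.pi * theta * x)

errs = []
for theta in (0.5, 1.0, 2.0):
    g = lambda x: f(x, theta) ** 1.5
    # Split at 1 so QUADPACK handles the integrable x^{-3/4} singularity at 0.
    num = integrate.quad(g, 0, 1)[0] + integrate.quad(g, 1, np.inf)[0]
    closed = special.gamma(0.25) / (6 * np.pi**3 * theta**2) ** 0.25
    errs.append(abs(num - closed) / closed)
assert max(errs) < 1e-4
```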
###### Lemma 14.
Let $X$ be a real-valued, non-negative random variable that is uniformly tight
with respect to the index family $\Theta$ (see Definition 4) and uniformly
absolutely continuous with respect to the Lebesgue measure. Then so is
$\sqrt{X}$.
###### Proof.
Let $\epsilon>0$ be given and let $\lambda$ denote the Lebesgue measure. We
need to find $\delta>0$ such that for any Borel measurable $B$,
$\lambda(B)<\delta\Longrightarrow\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\sqrt{X}\in
B)<\epsilon.$
For each measurable $B$, we define $B^{2}:=\\{b^{2}\ :\ b\in B\\}$.
Then $\mathbb{P}_{\theta}(\sqrt{X}\in B)=\mathbb{P}_{\theta}(X\in B^{2})$ and
by the uniform tightness of $X$, we can find $M>0$ such that
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X\in
B^{2})\leq\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X\in
B^{2}\cap[0,M])+\epsilon/2.$
By the uniform absolute continuity of $X$ with respect to $\lambda$, we can
find $\delta^{\prime}$ such that $\lambda(B)<\delta^{\prime}$ implies
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X\in B)<\epsilon/2$. Note that for
any such $B$, by the regularity of the Lebesgue measure, we can find an open
set $U\supseteq B$ such that $\lambda(U\setminus
B)<\delta^{\prime}-\lambda(B)$. This implies that
$\lambda(U)<\delta^{\prime}$. For every open $U$, by [Carothers2000], we can
find a countable union of disjoint open intervals $(I_{j})_{j=1}^{\infty}$,
where $I_{j}=(a_{j},b_{j})$, such that $U=\bigcup_{j=1}^{\infty}I_{j}$. Note
that $U^{2}$ also covers $B^{2}$ since if $x\in B\subseteq U$, $x$ is in at least one of
the intervals $I_{j}$, and thus $x^{2}$ is in $I_{j}^{2}$. Combining these
observations, we get that
$\displaystyle\lambda(B^{2}\cap[0,M])$
$\displaystyle\leq\lambda(U^{2}\cap[0,M])=\sum_{j=1}^{\infty}\lambda(I_{j}^{2}\cap[0,M])=\sum_{j=1}^{\infty}(\min(M,b_{j}^{2})-a_{j}^{2})$
$\displaystyle=\sum_{j=1}^{\infty}(\min(\sqrt{M},b_{j})+a_{j})(\min(\sqrt{M},b_{j})-a_{j})\leq
2\sqrt{M}\sum_{j=1}^{\infty}(b_{j}-a_{j})<2\sqrt{M}\delta^{\prime}.$
Thus letting $\delta=\delta^{\prime}/(2\sqrt{M})$, we see that for all $B$
with $\lambda(B)<\delta$, we also have
$\lambda(B^{2}\cap[0,M])<\delta^{\prime}$, and hence
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(X\in B^{2}\cap[0,M])<\epsilon/2,$
proving the statement. ∎
###### Lemma 15.
Let $(X_{\theta})_{\theta\in\Theta}$ be Hilbertian random variables with
values in $\mathcal{H}$. Assume that $\mathbb{E}_{\theta}(X_{\theta})=0$ for
every $\theta\in\Theta$, that
$\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\lVert X_{\theta}\rVert^{2}<\infty$
and that there exists an orthonormal basis $(e_{k})_{k\in\mathbb{N}}$ of
$\mathcal{H}$ such that
$\lim_{K\to\infty}\sup_{\theta\in\Theta}\sum_{k=K}^{\infty}\mathbb{E}_{\theta}(\langle
X_{\theta},e_{k}\rangle^{2})=0.$
Then the family $(X_{\theta}\otimes X_{\theta})_{\theta\in\Theta}$ is
uniformly tight when viewed as random variables in the Banach space of trace-
class operators on $\mathcal{H}$.
###### Proof.
By [Fugarolas1983], $(e_{k}\otimes e_{j})_{(k,j)\in\mathbb{N}^{2}}$ is a Schauder
basis for the Banach space of trace-class operators on $\mathcal{H}$. Thus,
[Bogachev2018] yields that we need to show that
$\lim_{r\to\infty}\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
X_{\theta}\otimes X_{\theta}\rVert_{\textrm{TR}}>r)=0$
and for all $\varepsilon>0$
$\lim_{K\to\infty}\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert
X_{\theta}\otimes X_{\theta}-P_{K}(X_{\theta}\otimes
X_{\theta})\rVert_{\textrm{TR}}>\varepsilon)=0,$
where $P_{K}$ denotes the projection onto the $K$ first basis vectors in the
space of trace-class operators. An application of Markov’s inequality yields
immediately that
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}(\lVert X_{\theta}\otimes
X_{\theta}\rVert_{\textrm{TR}}>r)\leq\frac{\sup_{\theta\in\Theta}\mathbb{E}_{\theta}\lVert
X_{\theta}\rVert^{2}}{r}$
hence the first condition is satisfied by the assumed uniform upper bound on
$\mathbb{E}_{\theta}\lVert X_{\theta}\rVert^{2}$. For $K=m^{2}$,
$m\in\mathbb{N}$, note that
$\displaystyle\lVert X_{\theta}\otimes X_{\theta}-P_{K}(X_{\theta}\otimes
X_{\theta})\rVert_{\textrm{TR}}$
$\displaystyle=\left\lVert\left(\sum_{j=m}^{\infty}\langle
X_{\theta},e_{j}\rangle e_{j}\right)\otimes\left(\sum_{k=m}^{\infty}\langle
X_{\theta},e_{k}\rangle e_{k}\right)\right\rVert_{\textrm{TR}}$
$\displaystyle=\left\lVert\sum_{j=m}^{\infty}\langle X_{\theta},e_{j}\rangle
e_{j}\right\rVert^{2}=\sum_{j=m}^{\infty}\langle X_{\theta},e_{j}\rangle^{2},$
where the final equality is by Parseval’s identity. Using this, the second
condition is satisfied by assumption, since, by Markov’s inequality, for all
$\epsilon>0$,
$\sup_{\theta\in\Theta}\mathbb{P}_{\theta}\left(\sum_{j=m}^{\infty}\langle
X_{\theta},e_{j}\rangle^{2}>\epsilon\right)\leq\frac{\sup_{\theta\in\Theta}\sum_{j=m}^{\infty}\mathbb{E}_{\theta}\left(\langle
X_{\theta},e_{j}\rangle^{2}\right)}{\epsilon}.$
∎
#### C.3.2 Proof of Theorem 2
###### Proof.
Throughout the proof we omit the subscript $P$ from $\varepsilon$, $\xi$, $f$,
$g$.
##### Convergence of $\mathscr{T}_{n}$.
We have that
$\displaystyle\mathscr{T}_{n}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\mathscr{R}_{i}$
$\displaystyle=\underbrace{\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\varepsilon_{i}\otimes\xi_{i}}_{=:U_{n}}+\underbrace{\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(f(z_{i})-\hat{f}(z_{i}))\otimes(g(z_{i})-\hat{g}(z_{i}))}_{=:a_{n}}$
$\displaystyle+\underbrace{\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(f(z_{i})-\hat{f}(z_{i}))\otimes\xi_{i}}_{=:b_{n}}+\underbrace{\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\varepsilon_{i}\otimes(g(z_{i})-\hat{g}(z_{i}))}_{=:c_{n}}.$
Since
$\mathbb{E}(\varepsilon_{i}\otimes\xi_{i})=\mathbb{E}((X-\mathbb{E}(X\,|\,Z))\otimes(Y-\mathbb{E}(Y\,|\,Z)))=\mathbb{E}(\mathrm{Cov}(X,Y\,|\,Z))=0$
because $X\mbox{${}\perp\mkern-11.0mu\perp{}$}Y\,|\,Z$, Proposition 19 yields
that $U_{n}$ converges uniformly in distribution to the desired Gaussian over
$\tilde{\mathcal{P}_{0}}$. By Proposition 15, if $a_{n}$, $b_{n}$ and $c_{n}$
all converge to $0$ uniformly in probability, we will have shown the desired
result. We establish this by looking at the Hilbert–Schmidt norm of the
sequences, since uniform convergence of the norms to $0$ implies uniform
convergence of the sequences to $0$. For $a_{n}$, using properties of the
Hilbert–Schmidt norm and the Cauchy–Schwarz inequality yields
$\displaystyle\lVert a_{n}\rVert_{\textrm{HS}}$
$\displaystyle=\left\lVert\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(f(z_{i})-\hat{f}(z_{i}))\otimes(g(z_{i})-\hat{g}(z_{i}))\right\rVert_{\textrm{HS}}$
$\displaystyle\leq\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\lVert(f(z_{i})-\hat{f}(z_{i}))\otimes(g(z_{i})-\hat{g}(z_{i}))\rVert_{\textrm{HS}}$
$\displaystyle=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\lVert(f(z_{i})-\hat{f}(z_{i}))\rVert\lVert
g(z_{i})-\hat{g}(z_{i})\rVert$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum_{i=1}^{n}\lVert(f(z_{i})-\hat{f}(z_{i}))\rVert^{2}\sum_{i=1}^{n}\lVert
g(z_{i})-\hat{g}(z_{i})\rVert^{2}}=\sqrt{nM_{n,P}^{f}M_{n,P}^{g}}.$
By assumption $nM_{n,P}^{f}M_{n,P}^{g}\overset{P}{\rightrightarrows}0$ and
Proposition 10 yields that the same is true for
$\sqrt{nM_{n,P}^{f}M_{n,P}^{g}}$. This implies that $\lVert
a_{n}\rVert_{\textrm{HS}}\overset{P}{\rightrightarrows}0$ as desired.
To establish that $\lVert
b_{n}\rVert_{\textrm{HS}}\overset{P}{\rightrightarrows}0$, we will instead
show that the square of the Hilbert–Schmidt norm goes to $0$. This implies
that $\lVert b_{n}\rVert_{\textrm{HS}}\overset{P}{\rightrightarrows}0$ by the
same arguments about $x\mapsto\sqrt{x}$ as above. We will show that
$\mathbb{E}_{P}(\lVert
b_{n}\rVert_{\textrm{HS}}^{2}\,|\,X^{(n)},Z^{(n)})\overset{P}{\rightrightarrows}0$,
where $X^{(n)}=(x_{1},\dots,x_{n})$ and $Z^{(n)}=(z_{1},\dots,z_{n})$, which
then implies the desired result by Lemma 10. For every
$P\in\tilde{\mathcal{P}_{0}}$ we have
$\displaystyle\mathbb{E}_{P}(\lVert
b_{n}\rVert_{\textrm{HS}}^{2}\,|\,X^{(n)},Z^{(n)})=\frac{1}{n}\mathbb{E}_{P}\left(\left\lVert\sum_{i=1}^{n}(f(z_{i})-\hat{f}(z_{i}))\otimes\xi_{i}\right\rVert^{2}_{\textrm{HS}}\,|\,X^{(n)},Z^{(n)}\right)$
$\displaystyle=\frac{1}{n}\sum_{j=1}^{n}\sum_{i=1}^{n}\mathbb{E}_{P}\left(\langle(f(z_{i})-\hat{f}(z_{i}))\otimes\xi_{i},(f(z_{j})-\hat{f}(z_{j}))\otimes\xi_{j}\rangle_{\textrm{HS}}\,|\,X^{(n)},Z^{(n)}\right)$
$\displaystyle=\frac{1}{n}\sum_{j=1}^{n}\sum_{i=1}^{n}\mathbb{E}_{P}\left(\langle
f(z_{i})-\hat{f}(z_{i}),f(z_{j})-\hat{f}(z_{j})\rangle\langle\xi_{i},\xi_{j}\rangle\,|\,X^{(n)},Z^{(n)}\right)$
$\displaystyle=\frac{1}{n}\sum_{j=1}^{n}\sum_{i=1}^{n}\langle
f(z_{i})-\hat{f}(z_{i}),f(z_{j})-\hat{f}(z_{j})\rangle\mathbb{E}_{P}\left(\langle\xi_{i},\xi_{j}\rangle\,|\,X^{(n)},Z^{(n)}\right),$
(35)
where the penultimate equality uses the fact that for Hilbert–Schmidt
operators $\langle x_{1}\otimes y_{1},x_{2}\otimes
y_{2}\rangle_{\textrm{HS}}=\langle x_{1},x_{2}\rangle\langle
y_{1},y_{2}\rangle$. The final equality holds since the terms involving
$f(z_{i})-\hat{f}(z_{i})$ are measurable with respect to the $\sigma$-algebra
generated by $X^{(n)}$ and $Z^{(n)}$. The term $\langle\xi_{i},\xi_{j}\rangle$
depends only on $Z_{i}$ and $Z_{j}$ among the conditioning variables, so we can
omit the remaining variables from the conditioning expression. Recall that
$\xi_{i}=Y_{i}-\mathbb{E}_{P}(Y_{i}\,|\,Z_{i})$. For $i\neq j$, by using that
$\mathbb{E}_{P}(Y_{i}\,|\,Z_{i})=\mathbb{E}_{P}(Y_{i}\,|\,Z_{i},Z_{j})$ since
$Z_{j}$ is independent of $(Y_{i},Z_{i})$ and Lemma 11, we get
$\displaystyle\mathbb{E}_{P}\big{[}\langle\xi_{i},\xi_{j}\rangle\,|\,X^{(n)},Z^{(n)}\big{]}$
$\displaystyle=\mathbb{E}_{P}\big{[}\langle Y_{i},Y_{j}\rangle-\langle
Y_{i},\mathbb{E}_{P}(Y_{j}\,|\,Z_{j})\rangle-\langle\mathbb{E}_{P}(Y_{i}\,|\,Z_{i}),Y_{j}\rangle$
$\displaystyle\qquad+\langle\mathbb{E}_{P}(Y_{i}\,|\,Z_{i}),\mathbb{E}_{P}(Y_{j}\,|\,Z_{j})\rangle\,|\,Z_{i},Z_{j}\big{]}$
$\displaystyle=\mathbb{E}_{P}(\langle
Y_{i},Y_{j}\rangle\,|\,Z_{i},Z_{j})-\langle\mathbb{E}_{P}(Y_{i}\,|\,Z_{i},Z_{j}),\mathbb{E}_{P}(Y_{j}\,|\,Z_{i},Z_{j})\rangle.$
We will show that this is zero. By assumption
$(Y_{i},Z_{i})\mbox{${}\perp\mkern-11.0mu\perp{}$}(Y_{j},Z_{j})$, so applying
the usual laws of conditional independence, we get
$Y_{i}\mbox{${}\perp\mkern-11.0mu\perp{}$}Y_{j}\,|\,(Z_{i},Z_{j})$. Take now
some orthonormal basis for $\mathcal{H}_{Y}$, $(e_{k})_{k\in\mathbb{N}}$, and
expand $\langle Y_{i},Y_{j}\rangle$ to get
$\mathbb{E}_{P}(\langle
Y_{i},Y_{j}\rangle\,|\,Z_{i},Z_{j})=\mathbb{E}_{P}\left(\sum_{k=1}^{\infty}\langle
Y_{i},e_{k}\rangle\langle
Y_{j},e_{k}\rangle\,|\,Z_{i},Z_{j}\right)=\sum_{k=1}^{\infty}\mathbb{E}_{P}\left(\langle
Y_{i},e_{k}\rangle\langle Y_{j},e_{k}\rangle\,|\,Z_{i},Z_{j}\right).$
For all $k$, $\langle
Y_{i},e_{k}\rangle\mbox{${}\perp\mkern-11.0mu\perp{}$}\langle
Y_{j},e_{k}\rangle\,|\,(Z_{i},Z_{j})$, so $\mathbb{E}_{P}(\langle
Y_{i},e_{k}\rangle\langle Y_{j},e_{k}\rangle\,|\,Z_{i},Z_{j})$ factorises, and
we get
$\displaystyle\sum_{k=1}^{\infty}\mathbb{E}_{P}\left(\langle
Y_{i},e_{k}\rangle\langle
Y_{j},e_{k}\rangle\,|\,Z_{i},Z_{j}\right)=\sum_{k=1}^{\infty}\mathbb{E}_{P}(\langle
Y_{i},e_{k}\rangle\,|\,Z_{i},Z_{j})\mathbb{E}_{P}(\langle
Y_{j},e_{k}\rangle\,|\,Z_{i},Z_{j})$
$\displaystyle=\sum_{k=1}^{\infty}\langle\mathbb{E}_{P}(Y_{i}\,|\,Z_{i},Z_{j}),e_{k}\rangle\langle\mathbb{E}_{P}(Y_{j}\,|\,Z_{i},Z_{j}),e_{k}\rangle=\langle\mathbb{E}_{P}(Y_{i}\,|\,Z_{i},Z_{j}),\mathbb{E}_{P}(Y_{j}\,|\,Z_{i},Z_{j})\rangle,$
where the second last equality follows from $\mathbb{E}_{P}(\langle
Y,e_{k}\rangle\,|\,Z_{i},Z_{j})=\langle\mathbb{E}_{P}(Y\,|\,Z_{i},Z_{j}),e_{k}\rangle$
by Lemma 11. We can thus omit all terms from the sum in (35) where $i\neq j$
and get
$\displaystyle\mathbb{E}_{P}(\lVert
b_{n}\rVert_{\textrm{HS}}^{2}\,|\,X^{(n)},Z^{(n)})=\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert^{2}\mathbb{E}_{P}\left(\lVert\xi_{i}\rVert^{2}\,|\,Z_{i}\right)=\tilde{M}^{f}_{n,P}\overset{P}{\rightrightarrows}0,$
by assumption. An analogous argument can be repeated for $c_{n}$, thus proving
the desired result.
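The tensor-product identity used repeatedly above, $\langle x_{1}\otimes y_{1},x_{2}\otimes y_{2}\rangle_{\textrm{HS}}=\langle x_{1},x_{2}\rangle\langle y_{1},y_{2}\rangle$, reduces in finite dimensions to a statement about outer products under the trace inner product. A quick sanity check with arbitrary vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=5), rng.normal(size=5)
y1, y2 = rng.normal(size=3), rng.normal(size=3)

# in finite dimensions x (x) y is the outer product and
# <A, B>_HS = tr(A^T B) = elementwise sum of A * B
A, B = np.outer(x1, y1), np.outer(x2, y2)
lhs = np.sum(A * B)
rhs = (x1 @ x2) * (y1 @ y2)
assert np.isclose(lhs, rhs)
```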
##### Convergence of $\hat{\mathscr{C}}$.
For simplicity, we prove convergence where $\hat{\mathscr{C}}$ is instead
defined as the estimate where we divide by $n$ instead of $n-1$ since this
does not affect the asymptotics.
By the above and Proposition 15, since
$(\mathcal{N}(0,\mathscr{C}_{P}))_{P\in\tilde{\mathcal{P}}_{0}}$ is uniformly
tight by [Bogachev2018], we have
$\frac{1}{n}\sum_{i=1}^{n}\mathscr{R}_{i}=\frac{1}{\sqrt{n}}\mathscr{T}_{n}\overset{P}{\rightrightarrows}0.$
By Proposition 10, this implies that the second term in the definition of
$\hat{\mathscr{C}}$ converges to $0$ uniformly in probability since the
mapping
$(\mathscr{A},\mathscr{B})\mapsto\mathscr{A}\otimes_{\textrm{HS}}\mathscr{B}$
is continuous. It remains to show that the first term in the definition of
$\hat{\mathscr{C}}$ converges to $\mathscr{C}$. The proof is similar to the
proof of Theorem 6 in [GCM] and relies on expanding the first term
$\frac{1}{n}\sum_{i=1}^{n}\mathscr{R}_{i}\otimes_{\textrm{HS}}\mathscr{R}_{i}$
to yield
$\frac{1}{n}\sum_{i=1}^{n}[(f(z_{i})-\hat{f}(z_{i}))\otimes(g(z_{i})-\hat{g}(z_{i}))+(f(z_{i})-\hat{f}(z_{i}))\otimes\xi_{i}+\varepsilon_{i}\otimes(g(z_{i})-\hat{g}(z_{i}))+\varepsilon_{i}\otimes\xi_{i}]^{\otimes_{\textrm{HS}}2},$
where
$\mathscr{A}^{\otimes_{\textrm{HS}}2}=\mathscr{A}\otimes_{\textrm{HS}}\mathscr{A}$.
Expanding this even further yields 16 terms of which 15 go to zero. The non-
zero term is
$\textup{I}_{n}=\frac{1}{n}\sum_{i=1}^{n}(\varepsilon_{i}\otimes\xi_{i})^{\otimes_{\textrm{HS}}2}\overset{P}{\rightrightarrows}\mathbb{E}_{P}\left((\varepsilon_{i}\otimes\xi_{i})^{\otimes_{\textrm{HS}}2}\right)=\mathscr{C},$
by Proposition 17 and Lemma 15 and the assumed tightness condition. For the
remaining 15 terms, we will argue by taking trace norms and applying the
triangle inequality to reduce the number of cases. This leaves us with 8 terms
and 5 cases (by symmetry of $f$ and $\varepsilon$, $g$ and $\xi$) that we need
to argue converge to $0$ uniformly in probability.
The first case is
$\displaystyle\textup{II}_{n}$
$\displaystyle=\left\lVert\frac{1}{n}\sum_{i=1}^{n}[(f(z_{i})-\hat{f}(z_{i}))\otimes(g(z_{i})-\hat{g}(z_{i}))]^{\otimes_{\textrm{HS}}2}\right\rVert_{\textrm{TR}}$
$\displaystyle\leq\frac{1}{n}\sum_{i=1}^{n}\left\lVert[(f(z_{i})-\hat{f}(z_{i}))\otimes(g(z_{i})-\hat{g}(z_{i}))]^{\otimes_{\textrm{HS}}2}\right\rVert_{\textrm{TR}}$
$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\left\lVert(f(z_{i})-\hat{f}(z_{i}))\otimes(g(z_{i})-\hat{g}(z_{i}))\right\rVert_{\textrm{HS}}^{2}=\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert^{2}\lVert g(z_{i})-\hat{g}(z_{i})\rVert^{2}$
$\displaystyle\leq nM_{n,P}^{f}M_{n,P}^{g}\overset{P}{\rightrightarrows}0,$
where the final inequality uses that for positive sequences $\sum
a_{n}b_{n}\leq\sum a_{n}\sum b_{n}$, which can be seen by noting that every
term on the left-hand side also appears on the right-hand side. For the second
case we have, by applying the Cauchy–Schwarz inequality,
$\displaystyle\textup{III}_{n}$
$\displaystyle=\left\lVert\frac{1}{n}\sum_{i=1}^{n}[(f(z_{i})-\hat{f}(z_{i}))\otimes\xi_{i}]\otimes_{\textrm{HS}}[(g(z_{i})-\hat{g}(z_{i}))\otimes\varepsilon_{i}]\right\rVert_{\textrm{TR}}$
$\displaystyle\leq\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert\lVert
g(z_{i})-\hat{g}(z_{i})\rVert\lVert\varepsilon_{i}\rVert\lVert\xi_{i}\rVert$
$\displaystyle\leq\sqrt{\underbrace{\left(\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert^{2}\lVert
g(z_{i})-\hat{g}(z_{i})\rVert^{2}\right)}_{=:\tilde{a}_{n}}\underbrace{\left(\frac{1}{n}\sum_{i=1}^{n}\lVert\varepsilon_{i}\rVert^{2}\lVert\xi_{i}\rVert^{2}\right)}_{=:\tilde{U}_{n}}}.$
By Cauchy–Schwarz, we have $\tilde{a}_{n}\leq
nM_{n,P}^{f}M_{n,P}^{g}\overset{P}{\rightrightarrows}0$. We have
$\tilde{U}_{n}\overset{P}{\rightrightarrows}\lVert\mathscr{C}\rVert_{\textrm{TR}}$
by Proposition 16. The family
$(\lVert\mathscr{C}\rVert_{\textrm{TR}})_{P\in\tilde{\mathcal{P}}_{0}}$ is
uniformly tight by the assumption that
$\mathbb{E}(\lVert\varepsilon_{P}\rVert^{2+\eta}\lVert\xi_{P}\rVert^{2+\eta})$
is uniformly bounded, since this also yields a bound on
$\mathbb{E}(\lVert\varepsilon_{P}\rVert^{2}\lVert\xi_{P}\rVert^{2})=\lVert\mathscr{C}\rVert_{\textrm{TR}}$;
thus Proposition 10 yields that
$\sqrt{\tilde{a}_{n}\tilde{U}_{n}}\overset{P}{\rightrightarrows}0$.
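Cases II and III rest on the elementary fact that for nonnegative sequences $\sum_{i}a_{i}b_{i}\leq\sum_{i}a_{i}\sum_{i}b_{i}$, which rescaled gives $\tilde{a}_{n}\leq nM_{n,P}^{f}M_{n,P}^{g}$. A numerical sanity check with arbitrary nonnegative arrays (illustrative data only):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.random(100)  # stand-ins for the nonnegative terms ||f - fhat||^2
b = rng.random(100)  # stand-ins for the nonnegative terms ||g - ghat||^2
n = len(a)

# every term a_i * b_i on the left also appears in the product of sums
assert np.dot(a, b) <= a.sum() * b.sum() + 1e-12

# the bound a~_n <= n * M^f * M^g is the same inequality, rescaled by 1/n^2
a_tilde = np.dot(a, b) / n
assert a_tilde <= n * a.mean() * b.mean() + 1e-12
```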
The remaining three cases have an $f$ and a $g$ variant where the roles of $f$
and $g$, and of $\varepsilon$ and $\xi$, are swapped. We only show one variant of
each, since the arguments are identical. The $f$-variant of the third case is
$\textup{IV}_{n}^{f}=\left\lVert\frac{1}{n}\sum_{i=1}^{n}[(f(z_{i})-\hat{f}(z_{i}))\otimes\xi_{i}]^{\otimes_{\textrm{HS}}2}\right\rVert_{\textrm{TR}}\leq\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert^{2}\lVert\xi_{i}\rVert^{2}=:\tilde{b}_{n}.$
If we can show that
$\mathbb{E}(\tilde{b}_{n}\,|\,X^{(n)},Z^{(n)})\overset{P}{\rightrightarrows}0$,
we have that $\tilde{b}_{n}\overset{P}{\rightrightarrows}0$ by Lemma 10 and
hence $\textup{IV}_{n}^{f}\overset{P}{\rightrightarrows}0$. This holds since
$\mathbb{E}_{P}(\tilde{b}_{n}\,|\,X^{(n)},Z^{(n)})=\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert^{2}\mathbb{E}_{P}\left(\lVert\xi_{i}\rVert^{2}\,|\,X^{(n)},Z^{(n)}\right)=\tilde{M}^{f}_{n,P}\overset{P}{\rightrightarrows}0,$
by assumption.
The $f$-variant of the fourth case is, by applying the Cauchy–Schwarz
inequality,
$\displaystyle\textup{V}_{n}^{f}$
$\displaystyle=\left\lVert\frac{1}{n}\sum_{i=1}^{n}[(f(z_{i})-\hat{f}(z_{i}))\otimes(g(z_{i})-\hat{g}(z_{i}))]\otimes_{\textrm{HS}}[(f(z_{i})-\hat{f}(z_{i}))\otimes\xi_{i}]\right\rVert_{\textrm{TR}}$
$\displaystyle\leq\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert^{2}\lVert
g(z_{i})-\hat{g}(z_{i})\rVert\lVert\xi_{i}\rVert$
$\displaystyle\leq\sqrt{\underbrace{\left(\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert^{2}\lVert
g(z_{i})-\hat{g}(z_{i})\rVert^{2}\right)}_{\tilde{a}_{n}}\underbrace{\left(\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert^{2}\lVert\xi_{i}\rVert^{2}\right)}_{\tilde{b}_{n}}}.$
We saw above that $\tilde{a}_{n}\overset{P}{\rightrightarrows}0$ and
$\tilde{b}_{n}\overset{P}{\rightrightarrows}0$, hence by Proposition 10,
$\sqrt{\tilde{a}_{n}\tilde{b}_{n}}\overset{P}{\rightrightarrows}0$.
For the $f$-variant of the fifth and final case, we get, by applying the
Cauchy–Schwarz inequality again,
$\displaystyle\textup{VI}_{n}^{f}$
$\displaystyle=\left\lVert\frac{1}{n}\sum_{i=1}^{n}[(f(z_{i})-\hat{f}(z_{i}))\otimes\xi_{i}]\otimes_{\textrm{HS}}[\varepsilon_{i}\otimes\xi_{i}]\right\rVert_{\textrm{TR}}\leq\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert\lVert\varepsilon_{i}\rVert\lVert\xi_{i}\rVert^{2}$
$\displaystyle\leq\sqrt{\underbrace{\left(\frac{1}{n}\sum_{i=1}^{n}\lVert
f(z_{i})-\hat{f}(z_{i})\rVert^{2}\lVert\xi_{i}\rVert^{2}\right)}_{\tilde{b}_{n}}\underbrace{\left(\frac{1}{n}\sum_{i=1}^{n}\lVert\varepsilon_{i}\rVert^{2}\lVert\xi_{i}\rVert^{2}\right)}_{\tilde{U}_{n}}}.$
We can repeat the arguments used above yielding
$\sqrt{\tilde{a}_{n}\tilde{U}_{n}}\overset{P}{\rightrightarrows}0$ to show
that $\sqrt{\tilde{b}_{n}\tilde{U}_{n}}\overset{P}{\rightrightarrows}0$ hence
$\textup{VI}_{n}^{f}\overset{P}{\rightrightarrows}0$ as desired. ∎
#### C.3.3 Proof of Theorem 3
###### Proof.
Let $W$ be distributed as
$\lVert\mathcal{N}(0,\mathscr{C}_{P})\rVert_{\textrm{HS}}^{2}$ when the
background measure is $\mathbb{P}_{P}$. Recalling the notation from Lemma 12,
since
$\mathbb{P}_{P}(\psi_{n}=1)=\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))$
we need to show that
$\lim_{n\to\infty}\sup_{P\in\tilde{\mathcal{P}}_{0}}\left|\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))-\alpha\right|=0,$
which amounts to finding, for each $\epsilon>0$, an $N\in\mathbb{N}$, such
that for all $n\geq N$,
$\sup_{P\in\tilde{\mathcal{P}}_{0}}\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))<\alpha+\epsilon$
(36)
and
$\inf_{P\in\tilde{\mathcal{P}}_{0}}\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))>\alpha-\epsilon.$
(37)
To show (36), take $\delta>0$ (to be fixed later). If
$|q(\hat{\mathscr{C}})-q(\mathscr{C}_{P})|<\delta$ and
$T_{n}>q(\hat{\mathscr{C}})$, then $T_{n}>q(\mathscr{C}_{P})-\delta$, so
$\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))\leq\mathbb{P}_{P}(T_{n}>q(\mathscr{C}_{P})-\delta)+\mathbb{P}_{P}(|q(\hat{\mathscr{C}})-q(\mathscr{C}_{P})|\geq\delta).$
Taking suprema and rewriting, we get
$\displaystyle\sup_{P\in\tilde{\mathcal{P}}_{0}}\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))$
$\displaystyle\leq\overbrace{\sup_{P\in\tilde{\mathcal{P}}_{0}}[\mathbb{P}_{P}(T_{n}>q(\mathscr{C}_{P})-\delta)-\mathbb{P}_{P}(W>q(\mathscr{C}_{P})-\delta)]}^{=:\textup{I}_{n}}$
$\displaystyle+\underbrace{\sup_{P\in\tilde{\mathcal{P}}_{0}}[\mathbb{P}_{P}(W>q(\mathscr{C}_{P})-\delta)-\alpha]}_{=:\textup{II}_{n}}+\underbrace{\sup_{P\in\tilde{\mathcal{P}}_{0}}\mathbb{P}_{P}(|q(\hat{\mathscr{C}})-q(\mathscr{C}_{P})|\geq\delta)}_{=:\textup{III}_{n}}+\alpha.$
We seek to show that, if $n$ is sufficiently large, we can make each of the
terms $\textup{I}_{n}$, $\textup{II}_{n}$ and $\textup{III}_{n}$ less than
$\epsilon/3$, so that
$\sup_{P\in\tilde{\mathcal{P}}_{0}}\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))<\alpha+\epsilon,$
as desired.
We note first that
$\displaystyle|\textup{I}_{n}|$
$\displaystyle\leq\sup_{P\in\tilde{\mathcal{P}}_{0}}|\mathbb{P}_{P}(T_{n}^{1/2}>\\{q(\mathscr{C}_{P})-\delta\\}^{1/2})-\mathbb{P}_{P}(W^{1/2}>\\{q(\mathscr{C}_{P})-\delta\\}^{1/2})|$
(38)
$\displaystyle\leq\sup_{P\in\tilde{\mathcal{P}}_{0}}\sup_{x\in\mathbb{R}}|\mathbb{P}_{P}(T_{n}^{1/2}>x)-\mathbb{P}_{P}(W^{1/2}>x)|.$
For each $P\in\tilde{\mathcal{P}}_{0}$, $W$ has the same distribution as
$\sum_{k=1}^{\infty}\lambda_{k}^{P}V_{k}^{2},$
where $\lambda_{k}^{P}$ is the $k$th eigenvalue of $\mathscr{C}_{P}$ and
$(V_{k})_{k\in\mathbb{N}}$ is a sequence of independent standard Gaussian
random variables. We have assumed that the operator norm of
$(\mathscr{C}_{P})_{P\in\tilde{\mathcal{P}}_{0}}$ is bounded away from zero
which implies that $\lambda_{1}^{P}$ is bounded away from zero. Thus, the
family $(\lambda_{1}^{P}V_{1}^{2})_{P\in\tilde{\mathcal{P}}_{0}}$ is uniformly
absolutely continuous with respect to the Lebesgue measure by Lemma 13.
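Since the limit variable $W=\sum_{k}\lambda_{k}^{P}V_{k}^{2}$ is a weighted sum of independent $\chi^{2}_{1}$ variables, the quantile map $q$ can be approximated by direct simulation. A minimal Monte Carlo sketch (the eigenvalue sequence below is illustrative, not taken from any particular $P$):

```python
import numpy as np

rng = np.random.default_rng(3)

def quantile_weighted_chisq(eigvals, alpha=0.05, n_sim=200_000):
    """Monte Carlo 1 - alpha quantile of W = sum_k lambda_k * V_k^2."""
    V = rng.standard_normal((n_sim, len(eigvals)))
    W = (V ** 2) @ np.asarray(eigvals)
    return np.quantile(W, 1 - alpha)

# a rapidly decaying spectrum standing in for the eigenvalues of C_P
lam = 0.5 ** np.arange(10)
q_hat = quantile_weighted_chisq(lam)

# sanity check: with a single unit eigenvalue, W is chi-square with one
# degree of freedom, whose 0.95 quantile is about 3.841
q_chi1 = quantile_weighted_chisq([1.0])
assert abs(q_chi1 - 3.841) < 0.2
```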
Theorem 9 yields that $W$ is also uniformly absolutely continuous with respect
to the Lebesgue measure and Lemma 14 yields that the same is true for
$W^{1/2}$, since $W$ is uniformly tight by the assumed uniform bound on
$\mathbb{E}_{P}(\lVert\varepsilon_{P}\rVert^{2}\lVert\xi_{P}\rVert^{2})$.
Further, Corollary 1 yields that $W^{1/2}$ is also uniformly absolutely
continuous with respect to the standard Gaussian on $\mathbb{R}$. Proposition
9 and Theorem 2 yield
$T_{n}^{1/2}\overset{\mathcal{D}}{\rightrightarrows}W^{1/2}$,
since $\lVert\cdot\rVert_{\textrm{HS}}$ is Lipschitz. Finally, since we argued
that $W^{1/2}$ is uniformly absolutely continuous with respect to the standard
Gaussian on $\mathbb{R}$, Proposition 13 yields that we can make the bound in
(38) less than $\epsilon/3$ for $n$ sufficiently large.
For the $\textup{II}_{n}$ term, recall that
$\alpha=\mathbb{P}_{P}(W>q(\mathscr{C}_{P}))$, and thus
$\mathbb{P}_{P}(W>q(\mathscr{C}_{P})-\delta)-\alpha=\mathbb{P}_{P}(W\in[q(\mathscr{C}_{P})-\delta,q(\mathscr{C}_{P})]).$
By the uniform absolute continuity of $W$ with respect to the Lebesgue measure
$\lambda$, we may fix $\delta$ such that
$\sup_{P\in\tilde{\mathcal{P}}_{0}}\mathbb{P}_{P}(W\in B)<\epsilon/3$ whenever
$\lambda(B)<2\delta$. This implies that $\textup{II}_{n}<\epsilon/3$.
For the $\textup{III}_{n}$ term, Theorem 2 yields
$\hat{\mathscr{C}}\overset{P}{\rightrightarrows}\mathscr{C}_{P}$ and since
Lemma 12 yields that $q$ is uniformly continuous, Proposition 9 yields
$q(\hat{\mathscr{C}})\overset{P}{\rightrightarrows}q(\mathscr{C}_{P})$. Thus,
the third term is less than $\epsilon/3$ when $n$ is large enough.
To show (37), note first that, as before, if
$|q(\hat{\mathscr{C}})-q(\mathscr{C}_{P})|<\delta$ and
$T_{n}>q(\mathscr{C}_{P})+\delta$, then $T_{n}>q(\hat{\mathscr{C}})$ and hence
$\displaystyle\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))$
$\displaystyle\geq\mathbb{P}_{P}((T_{n}>q(\mathscr{C}_{P})+\delta)\cap(|q(\hat{\mathscr{C}})-q(\mathscr{C}_{P})|<\delta))$
(39)
$\displaystyle\geq\mathbb{P}_{P}(T_{n}>q(\mathscr{C}_{P})+\delta)-\mathbb{P}_{P}(|q(\hat{\mathscr{C}})-q(\mathscr{C}_{P})|\geq\delta).$
The final step uses that for any measurable sets $A$ and $B$,
$\mathbb{P}(A\cap B)=\mathbb{P}(A)+\mathbb{P}(B)-\mathbb{P}(A\cup
B)=\mathbb{P}(A)-\mathbb{P}(B^{c})+1-\mathbb{P}(A\cup
B)\geq\mathbb{P}(A)-\mathbb{P}(B^{c}).$
This lets us continue with arguments similar to those for (36), proving the
statement. ∎
### C.4 Proof of Theorem 4
###### Proof.
To argue that the modified GHCM satisfies (15), we can repeat the arguments of
Theorem 2 and Theorem 3 replacing conditioning on $X^{(n)}$ and $Z^{(n)}$ with
conditioning on $Z^{(n)}$ and $A$ and conditioning on $Y^{(n)}$ and $Z^{(n)}$
with conditioning on $Z^{(n)}$ and $A$.
For the first claim that
$\tilde{\mathscr{T}}_{n}\overset{\mathcal{D}}{\rightrightarrows}\mathcal{N}(0,\mathscr{C}_{P})$,
we can repeat the decomposition of the proof of Theorem 2 and write
$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(\mathscr{R}_{i}-\mathscr{K}_{P})=\underbrace{\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(\varepsilon_{i}\otimes\xi_{i}-\mathscr{K}_{P})}_{=:U_{n}}+a_{n}+b_{n}+c_{n},$
where $a_{n}$, $b_{n}$ and $c_{n}$ are as in the proof of Theorem 2. We have
$U_{n}\overset{\mathcal{D}}{\rightrightarrows}\mathcal{N}(0,\mathscr{C}_{P})$
over $\mathcal{Q}$ by Proposition 19, and
$a_{n}{\overset{P}{\rightrightarrows}}0$
over $\mathcal{Q}$ by the same argument as in the proof of Theorem 2. The
argument of the proof of Theorem 2 to show that
$b_{n}{\overset{P}{\rightrightarrows}}0$ and
$c_{n}{\overset{P}{\rightrightarrows}}0$ will also work here if we replace
conditioning as we did for the first claim.
For the second claim that
$\lVert\hat{\mathscr{C}}-\mathscr{C}\rVert_{\textrm{TR}}\overset{P}{\rightrightarrows}0$,
note that by the $\tilde{\mathscr{T}}_{n}$ result, Proposition 15 and
Proposition 10,
$\frac{1}{n}\sum_{i=1}^{n}\mathscr{R}_{i}=\frac{1}{\sqrt{n}}\cdot\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(\mathscr{R}_{i}-\mathscr{K}_{P})+\mathscr{K}_{P}\overset{P}{\rightrightarrows}_{\mathcal{Q}}\mathscr{K}_{P}.$
Hence, by Proposition 10,
$\left(\frac{1}{n}\sum_{i=1}^{n}\mathscr{R}_{i}\right)\otimes_{\textrm{HS}}\left(\frac{1}{n}\sum_{i=1}^{n}\mathscr{R}_{i}\right)\overset{P}{\rightrightarrows}_{\mathcal{Q}}\mathscr{K}_{P}\otimes_{\textrm{HS}}\mathscr{K}_{P},$
since the mapping
$(\mathscr{A},\mathscr{B})\mapsto\mathscr{A}\otimes_{\textrm{HS}}\mathscr{B}$
is continuous. We can now repeat the remaining arguments of the proof of
Theorem 2 while again replacing conditioning as we did in the proof of the
first claim to yield the desired result.
For the final claim that for large enough $n$ the GHCM has power greater than
$\beta$ over alternatives where
$\lVert\sqrt{n}\mathscr{K}_{P}\rVert_{\textrm{HS}}>c$, let $W$ be distributed
as $\lVert\mathcal{N}(0,\mathscr{C}_{P})\rVert_{\textrm{HS}}^{2}$ when the
background measure is $\mathbb{P}_{P}$ for $P\in\mathcal{Q}$. Let $q$ denote
the mapping that sends a covariance operator $\mathscr{C}$ to the $1-\alpha$
quantile of the distribution of
$\lVert\mathcal{N}(0,\mathscr{C})\rVert_{\textrm{HS}}^{2}$ as in Lemma 12. By
similar arguments as (39) in the proof of Theorem 3, we get that for any
$\delta>0$, $c>0$ and $n\in\mathbb{N}$,
$\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))\geq\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(T_{n}>q(\mathscr{C}_{P})+\delta)-\sup_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(|q(\hat{\mathscr{C}})-q(\mathscr{C}_{P})|\geq\delta).$
Defining
$\tilde{T}_{n}^{1/2}:=\lVert\tilde{\mathscr{T}}_{n}\rVert_{\textrm{HS}}$, by
the reverse triangle inequality
$T_{n}^{1/2}=\left\lVert\tilde{\mathscr{T}}_{n}+\sqrt{n}\mathscr{K}_{P}\right\rVert_{\textrm{HS}}\geq\left|\tilde{T}_{n}^{1/2}-\sqrt{n}\lVert\mathscr{K}_{P}\rVert_{\textrm{HS}}\right|\geq\sqrt{n}\lVert\mathscr{K}_{P}\rVert_{\textrm{HS}}-\tilde{T}_{n}^{1/2},$
and hence
$\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(T_{n}>q(\mathscr{C}_{P})+\delta)\geq\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(\sqrt{n}\lVert\mathscr{K}_{P}\rVert_{\textrm{HS}}-\tilde{T}_{n}^{1/2}>\\{q(\mathscr{C}_{P})+\delta\\}^{1/2}).$
Now since we are taking an infimum over a set where
$\sqrt{n}\lVert\mathscr{K}_{P}\rVert_{\textrm{HS}}>c$, we have
$\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(\sqrt{n}\lVert\mathscr{K}_{P}\rVert_{\textrm{HS}}-\tilde{T}_{n}^{1/2}>\\{q(\mathscr{C}_{P})+\delta\\}^{1/2})\geq\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(c-\tilde{T}_{n}^{1/2}>\\{q(\mathscr{C}_{P})+\delta\\}^{1/2}),$
and thus combining all the above yields
$\displaystyle\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))$
$\displaystyle\geq\overbrace{\inf_{P\in\mathcal{Q}_{c,n}}[\mathbb{P}_{P}(c-\tilde{T}_{n}^{1/2}>\\{q(\mathscr{C}_{P})+\delta\\}^{1/2})-\mathbb{P}_{P}(c-W^{1/2}>\\{q(\mathscr{C}_{P})+\delta\\}^{1/2})]}^{=:\textup{I}_{n}}$
$\displaystyle+\underbrace{\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(c-W^{1/2}>\\{q(\mathscr{C}_{P})+\delta\\}^{1/2})}_{=:\textup{II}_{n}}-\underbrace{\sup_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(|q(\hat{\mathscr{C}})-q(\mathscr{C}_{P})|\geq\delta)}_{=:\textup{III}_{n}}.$
If we can show that for $n$ sufficiently large we can make
$\textup{I}_{n}+\textup{II}_{n}-\textup{III}_{n}\geq\beta$, we will be done.
For the $\textup{I}_{n}$ term, we can write
$\textup{I}_{n}\geq-\sup_{P\in\mathcal{Q}_{c,n}}\sup_{x\in\mathbb{R}}|\mathbb{P}_{P}(\tilde{T}_{n}^{1/2}<x)-\mathbb{P}_{P}(W^{1/2}<x)|.$
By the first claim proven above and Proposition 9,
$\tilde{T}_{n}^{1/2}\overset{\mathcal{D}}{\rightrightarrows}W^{1/2}$. We can
therefore repeat the arguments used to deal with the $\textup{I}_{n}$ term in
the proof of Theorem 3 to see that for $n$ sufficiently large we have
$\textup{I}_{n}\geq-(1-\beta)/3$.
For the $\textup{II}_{n}$ term, we can write
$\textup{II}_{n}=1-\sup_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(W^{1/2}+\\{q(\mathscr{C}_{P})+\delta\\}^{1/2}\geq
c).$
Hence, by uniform tightness of
$(W^{1/2}+\\{q(\mathscr{C}_{P})+\delta\\}^{1/2})_{P\in\mathcal{Q}}$ we can
find $c$ such that
$\sup_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(W^{1/2}+\\{q(\mathscr{C}_{P})+\delta\\}^{1/2}\geq
c)<(1-\beta)/3$ which implies $\textup{II}_{n}>1-(1-\beta)/3$.
For the $\textup{III}_{n}$ term, we can repeat the arguments for the
$\textup{III}_{n}$ term in the proof of Theorem 3 to show that
$\textup{III}_{n}\to 0$. Hence, for sufficiently
large $n$, we have $\textup{III}_{n}<(1-\beta)/3$.
Putting things together, we have for $n$ sufficiently large that
$\inf_{P\in\mathcal{Q}_{c,n}}\mathbb{P}_{P}(T_{n}>q(\hat{\mathscr{C}}))\geq\beta.\qed$
### C.5 Proof of Theorem 5 and related results
We first prove a representer theorem [Kimeldorf1970, Schoelkopf2001] for
scalar-on-function regression, which we use to provide bounds on the in-sample
error of the Hilbertian linear model in Lemma 17.
###### Lemma 16.
Let $\mathcal{H}$ denote a Hilbert space with norm $\lVert\cdot\rVert$,
$x_{1},\dots,x_{n}\in\mathbb{R}$, $z_{1},\dots,z_{n}\in\mathcal{H}$ and
$\gamma>0$. Let $K$ be an $n\times n$ matrix where $K_{i,j}:=\langle
z_{i},z_{j}\rangle$ and let $x=(x_{1},\dots,x_{n})^{\top}\in\mathbb{R}^{n}$.
Then $\hat{\beta}$ minimises
$L_{1}(\beta)=\sum_{i=1}^{n}(x_{i}-\langle\beta,z_{i}\rangle)^{2}+\gamma\lVert\beta\rVert^{2}$
over $\beta\in\mathcal{H}$ if and only if
$\hat{\beta}=\sum_{i=1}^{n}\hat{\alpha}_{i}z_{i}$ and
$\hat{\alpha}=(\hat{\alpha}_{1},\dots,\hat{\alpha}_{n})^{\top}\in\mathbb{R}^{n}$
minimises
$L_{2}(\alpha)=\lVert x-K\alpha\rVert_{2}^{2}+\gamma\alpha^{\top}K\alpha$
over $\mathbb{R}^{n}$ where $\lVert\cdot\rVert_{2}$ denotes the standard
Euclidean norm on $\mathbb{R}^{n}$.
###### Proof.
Assume that $\hat{\beta}$ minimises $L_{1}$. Write $\hat{\beta}=u+v$ where
$u\in\mathcal{U}:=\textrm{span}(z_{1},\dots,z_{n})$ and
$v\in\mathcal{U}^{\perp}$. Since
$\langle\hat{\beta},z_{i}\rangle=\langle u,z_{i}\rangle,$
the first term of $L_{1}$ depends on $\hat{\beta}$ only through $u$. Also, by
Pythagoras’ theorem,
$\lVert\hat{\beta}\rVert^{2}=\|u\|^{2}+\|v\|^{2}\geq\lVert u\rVert^{2}.$
Thus, $v=0$ by optimality of $\hat{\beta}$, and so $\hat{\beta}$ can be
written
$\hat{\beta}=\sum_{i=1}^{n}\hat{\alpha}_{i}z_{i}$
for some $\hat{\alpha}\in{\mathbb{R}}^{n}$. But now that $\hat{\beta}$ is
known to have this form, it can be seen that
$\hat{\alpha}^{\top}K\hat{\alpha}=\lVert\hat{\beta}\rVert^{2}$ and
$\sum_{i=1}^{n}(x_{i}-\langle\hat{\beta},z_{i}\rangle)^{2}=\sum_{i=1}^{n}\left(x_{i}-\sum_{j=1}^{n}\hat{\alpha}_{j}\langle
z_{j},z_{i}\rangle\right)^{2}=\lVert x-K\hat{\alpha}\rVert_{2}^{2},$
hence $\hat{\alpha}$ minimises $L_{2}$.
Assume now that $\hat{\alpha}\in{\mathbb{R}}^{n}$ minimises $L_{2}$ and
$\hat{\beta}=\sum_{i=1}^{n}\hat{\alpha}_{i}z_{i}$. Clearly,
$L_{2}(\hat{\alpha})=L_{1}(\hat{\beta})$. For any
$\tilde{\beta}\in\mathcal{H}$, we can write
$\tilde{\beta}=\tilde{u}+\tilde{v}$ with $\tilde{u}\in\mathcal{U}$ and
$\tilde{v}\in\mathcal{U}^{\perp}$ as before. By similar arguments as above,
$L_{1}(\tilde{\beta})\geq L_{1}(\tilde{u}).$
Moreover, $\tilde{u}=\sum_{i=1}^{n}\tilde{\alpha}_{i}z_{i}$ for some
$\tilde{\alpha}\in\mathbb{R}^{n}$, hence by optimality of $\hat{\alpha}$, we have
$L_{1}(\tilde{u})=L_{2}(\tilde{\alpha})\geq
L_{2}(\hat{\alpha})=L_{1}(\hat{\beta}),$
proving that $\hat{\beta}$ minimises $L_{1}$ as desired. ∎
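A finite-dimensional sketch of Lemma 16, with $\mathcal{H}=\mathbb{R}^{d}$ and randomly generated illustrative data. The closed form $\hat{\alpha}=(K+\gamma I)^{-1}x$, which is not part of the lemma but is the standard consequence of setting $\nabla L_{2}=2K((K+\gamma I)\alpha-x)=0$, minimises $L_{2}$; the corresponding $\hat{\beta}=\sum_{i}\hat{\alpha}_{i}z_{i}$ then minimises $L_{1}$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 20, 30                # d plays the role of dim(H)
Z = rng.normal(size=(n, d))  # rows are z_1, ..., z_n
x = rng.normal(size=n)
gamma = 0.7

K = Z @ Z.T                          # K_ij = <z_i, z_j>
alpha_hat = np.linalg.solve(K + gamma * np.eye(n), x)
beta_hat = Z.T @ alpha_hat           # beta_hat = sum_i alpha_hat_i * z_i

def L1(beta):
    return np.sum((x - Z @ beta) ** 2) + gamma * beta @ beta

def L2(alpha):
    return np.sum((x - K @ alpha) ** 2) + gamma * alpha @ K @ alpha

# the two criteria agree on representer solutions ...
assert np.isclose(L1(beta_hat), L2(alpha_hat))
# ... and beta_hat beats arbitrary elements of H (random perturbations)
for _ in range(100):
    assert L1(beta_hat) <= L1(beta_hat + 0.1 * rng.normal(size=d)) + 1e-9
```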
###### Lemma 17.
Let $n\in\mathbb{N}$ be fixed. Consider the estimator $\hat{\mathscr{S}}$ (19)
in the Hilbertian linear model which is a function of
$x_{1},\dots,x_{n},z_{1},\dots,z_{n}$ and let $\sigma^{2}>0$ be such that
$\mathbb{E}(\lVert\varepsilon\rVert^{2}\,|\,Z)\leq\sigma^{2}$ almost surely.
Let $K$ be the $n\times n$ matrix where $K_{ij}:=\langle z_{i},z_{j}\rangle$
and let $(\hat{\mu}_{i})_{i=1}^{n}$ denote the eigenvalues of $K$. Then,
letting $Z^{(n)}:=(z_{1},\dots,z_{n})$,
$\frac{1}{n}\mathbb{E}\left(\sum_{i=1}^{n}\lVert\mathscr{S}(z_{i})-\hat{\mathscr{S}}(z_{i})\rVert^{2}\,|\,Z^{(n)}\right)\leq\frac{\sigma^{2}}{\gamma}\frac{1}{n}\sum_{i=1}^{n}\min(\hat{\mu}_{i}/4,\gamma)+\lVert\mathscr{S}\rVert_{\textrm{HS}}^{2}\frac{\gamma}{4n}$
(40)
almost surely.
###### Proof.
Let $(e_{k})_{k\in\mathbb{N}}$ denote a basis of $\mathcal{H}_{X}$ and write
$\langle\cdot,\cdot\rangle_{X}$ and $\langle\cdot,\cdot\rangle_{Z}$ for the
inner products and $\lVert\cdot\rVert_{X}$ and $\lVert\cdot\rVert_{Z}$ for the
norms on $\mathcal{H}_{X}$ and $\mathcal{H}_{Z}$, respectively. Then
$\displaystyle\sum_{i=1}^{n}\lVert\mathscr{S}(z_{i})-\hat{\mathscr{S}}(z_{i})\rVert_{X}^{2}$
$\displaystyle=\sum_{k=1}^{\infty}\sum_{i=1}^{n}(\langle\mathscr{S}(z_{i}),e_{k}\rangle_{X}-\langle\hat{\mathscr{S}}(z_{i}),e_{k}\rangle_{X})^{2}$
$\displaystyle=\sum_{k=1}^{\infty}\sum_{i=1}^{n}(\langle
z_{i},\mathscr{S}^{*}(e_{k})\rangle_{Z}-\langle
z_{i},\hat{\mathscr{S}}^{*}(e_{k})\rangle_{Z})^{2}$ (41)
and similarly we can rewrite the penalised square-error criterion in (19) as
$\sum_{i=1}^{n}\lVert
x_{i}-\tilde{\mathscr{S}}(z_{i})\rVert_{X}^{2}+\gamma\lVert\tilde{\mathscr{S}}\rVert_{\textrm{HS}}^{2}=\sum_{k=1}^{\infty}\left[\sum_{i=1}^{n}(\langle
x_{i},e_{k}\rangle_{X}-\langle
z_{i},\tilde{\mathscr{S}}^{*}(e_{k})\rangle_{Z})^{2}+\gamma\lVert\tilde{\mathscr{S}}e_{k}\rVert^{2}\right].$
Since each of the terms in square brackets can be chosen independently of each
other, we have
$\hat{\beta}_{k}:=\hat{\mathscr{S}}^{*}_{\gamma}(e_{k})=\operatorname*{argmin}_{\beta\in\mathcal{H}_{Z}}\sum_{i=1}^{n}(\langle
x_{i},e_{k}\rangle_{X}-\langle
z_{i},\beta\rangle_{Z})^{2}+\gamma\lVert\beta\rVert_{Z}^{2}.$
A bit of matrix calculus combined with Lemma 16 yields that
$(\langle z_{1},\hat{\beta}_{k}\rangle_{Z},\dots,\langle
z_{n},\hat{\beta}_{k}\rangle_{Z})^{\top}=K(K+\gamma I)^{-1}X^{(n)}_{k},$
where $I$ is the $n\times n$ identity matrix and $X^{(n)}_{k}:=(\langle
x_{1},e_{k}\rangle_{X},\dots,\langle x_{n},e_{k}\rangle_{X})^{\top}$. Defining
$\beta_{k}:=\mathscr{S}^{*}(e_{k})$, we can write $\beta_{k}=u_{k}+v_{k}$
where $u_{k}\in\mathcal{U}:=\textrm{span}(z_{1},\dots,z_{n})$ and
$v_{k}\in\mathcal{U}^{\perp}$. Writing $u_{k}=\sum_{j=1}^{n}\alpha_{k,j}z_{j}$
where $\alpha_{k}=(\alpha_{k,1},\dots,\alpha_{k,n})^{\top}\in\mathbb{R}^{n}$,
we have for $i\in\\{1,\dots,n\\}$,
$\langle z_{i},\beta_{k}\rangle_{Z}=\langle
z_{i},u_{k}\rangle_{Z}=\left\langle
z_{i},\sum_{j=1}^{n}\alpha_{k,j}z_{j}\right\rangle_{Z}=\sum_{j=1}^{n}\alpha_{k,j}\langle
z_{i},z_{j}\rangle_{Z}.$
This entails
$(\langle z_{1},\beta_{k}\rangle_{Z},\dots,\langle
z_{n},\beta_{k}\rangle_{Z})^{\top}=K\alpha_{k}.$
Let $K=UDU^{\top}$ be the eigendecomposition of $K$, where
$D_{ii}=\hat{\mu}_{i}$, and let $\theta_{k}:=U^{\top}K\alpha_{k}$. Let
$\varepsilon^{(n)}_{k}:=(\langle\varepsilon_{1},e_{k}\rangle_{X},\dots,\langle\varepsilon_{n},e_{k}\rangle_{X})^{\top}\in{\mathbb{R}}^{n}$
and note that $X^{(n)}_{k}=K\alpha_{k}+\varepsilon^{(n)}_{k}$. Letting
$\lVert\cdot\rVert_{2}$ denote the Euclidean norm, $n$ times the left-hand
side of equation (40) can now be written (using equation (41))
$\displaystyle\mathbb{E}\left[\sum_{k=1}^{\infty}\lVert K(K+\gamma
I)^{-1}(U\theta_{k}+\varepsilon^{(n)}_{k})-U\theta_{k}\rVert^{2}_{2}\,|\,Z^{(n)}\right]$
$\displaystyle=\mathbb{E}\left[\sum_{k=1}^{\infty}\lVert
DU^{\top}(UDU^{\top}+\gamma
I)^{-1}(U\theta_{k}+\varepsilon^{(n)}_{k})-\theta_{k}\rVert^{2}_{2}\,|\,Z^{(n)}\right]$
$\displaystyle=\mathbb{E}\left[\sum_{k=1}^{\infty}\lVert D(D+\gamma
I)^{-1}(\theta_{k}+U^{\top}\varepsilon^{(n)}_{k})-\theta_{k}\rVert_{2}^{2}\,|\,Z^{(n)}\right]$
$\displaystyle=\sum_{k=1}^{\infty}\lVert(D(D+\gamma
I)^{-1}-I)\theta_{k}\rVert_{2}^{2}+\mathbb{E}\left[\sum_{k=1}^{\infty}\lVert
D(D+\gamma
I)^{-1}U^{\top}\varepsilon^{(n)}_{k}\rVert_{2}^{2}\,|\,Z^{(n)}\right]$ (42)
where the final equality uses that the first term is a function of $Z^{(n)}$
and the conditional expectation of the cross term in the sum of squares is
$0$, since $\mathbb{E}(\varepsilon^{(n)}_{k}\,|\,Z^{(n)})=0$.
The second term of (42) may be simplified as follows:
$\displaystyle\mathbb{E}\left[\sum_{k=1}^{\infty}\lVert D(D+\gamma
I)^{-1}U^{\top}\varepsilon^{(n)}_{k}\rVert_{2}^{2}\,|\,Z^{(n)}\right]$
$\displaystyle=\mathbb{E}\left[\sum_{k=1}^{\infty}\textrm{tr}\left(D(D+\gamma
I)^{-1}U^{\top}\varepsilon^{(n)}_{k}(\varepsilon^{(n)}_{k})^{\top}UD(D+\gamma
I)^{-1}\right)\,|\,Z^{(n)}\right]$
$\displaystyle=\textrm{tr}\biggl{(}D(D+\gamma
I)^{-1}U^{\top}\underbrace{\mathbb{E}\left[\sum_{k=1}^{\infty}\varepsilon^{(n)}_{k}(\varepsilon^{(n)}_{k})^{\top}\,|\,Z^{(n)}\right]}_{\Sigma_{\varepsilon|Z}}UD(D+\gamma
I)^{-1}\biggr{)},$
where we have used that only $\varepsilon^{(n)}_{k}$ is not a function of
$Z^{(n)}$, together with linearity of the conditional expectation and of the
trace. Note that
$\Sigma_{\varepsilon\,|\,Z}$ is a diagonal matrix with $i$th diagonal entry
equal to
$\mathbb{E}\left[\sum_{k=1}^{\infty}\langle\varepsilon_{i},e_{k}\rangle_{X}^{2}\,|\,Z^{(n)}\right]=\mathbb{E}\left[\lVert\varepsilon_{i}\rVert_{X}^{2}\,|\,z_{i}\right],$
hence we can bound each diagonal term by $\sigma^{2}$ by assumption. This
implies that
$\displaystyle\textrm{tr}\biggl{(}D(D+\gamma
I)^{-1}U^{\top}\Sigma_{\varepsilon\,|\,Z}UD(D+\gamma I)^{-1}\biggr{)}$
$\displaystyle\leq\sigma^{2}\textrm{tr}\biggl{(}D(D+\gamma I)^{-1}D(D+\gamma
I)^{-1}\biggr{)}$
$\displaystyle=\sigma^{2}\sum_{i=1}^{n}\frac{\hat{\mu}_{i}^{2}}{(\hat{\mu}_{i}+\gamma)^{2}}.$
The first term of (42) can be dealt with by noting that
$\displaystyle\sum_{k=1}^{\infty}\lVert(D(D+\gamma
I)^{-1}-I)\theta_{k}\rVert_{2}^{2}$
$\displaystyle=\sum_{k=1}^{\infty}\sum_{i=1}^{n}\frac{\gamma^{2}\theta_{k,i}^{2}}{(\hat{\mu}_{i}+\gamma)^{2}}=\sum_{k=1}^{\infty}\sum_{i:\hat{\mu}_{i}>0}\frac{\gamma^{2}\theta_{k,i}^{2}}{(\hat{\mu}_{i}+\gamma)^{2}}=\sum_{k=1}^{\infty}\sum_{i:\hat{\mu}_{i}>0}\frac{\theta_{k,i}^{2}}{\hat{\mu}_{i}}\frac{\gamma^{2}\hat{\mu}_{i}}{(\hat{\mu}_{i}+\gamma)^{2}}$
$\displaystyle\leq\left(\max_{i\in\\{1,\dots,n\\}}\frac{\gamma^{2}\hat{\mu}_{i}}{(\hat{\mu}_{i}+\gamma)^{2}}\right)\sum_{k=1}^{\infty}\sum_{i:\hat{\mu}_{i}>0}\frac{\theta_{k,i}^{2}}{\hat{\mu}_{i}}\leq\frac{\gamma}{4}\sum_{k=1}^{\infty}\sum_{i:\hat{\mu}_{i}>0}\frac{\theta_{k,i}^{2}}{\hat{\mu}_{i}}.$
The second equality uses that
$\theta_{k}=U^{\top}K\alpha_{k}=DU^{\top}\alpha_{k}$, hence $\theta_{k,i}=0$
whenever $\hat{\mu}_{i}=0$, and the final inequality uses that
$ab^{2}/(a+b)^{2}\leq b/4$ for $a,b\geq 0$, which follows from $(a+b)^{2}\geq 4ab$. Let $D^{+}$ denote the generalised inverse of $D$,
i.e. $D_{ii}^{+}:=\hat{\mu}_{i}^{-1}\mathbbm{1}_{\hat{\mu}_{i}>0}$. Then
i.e. $D_{ii}^{+}:=\hat{\mu}_{i}^{-1}\mathbbm{1}_{\hat{\mu}_{i}>0}$. Then
$\displaystyle\sum_{i:\hat{\mu}_{i}>0}\frac{\theta_{k,i}^{2}}{\hat{\mu}_{i}}$
$\displaystyle=\lVert\sqrt{D^{+}}\theta_{k}\rVert_{2}^{2}=\alpha_{k}^{\top}KUD^{+}U^{\top}K\alpha_{k}=\alpha_{k}^{\top}UDD^{+}DU^{\top}\alpha_{k}=\alpha_{k}^{\top}K\alpha_{k}$
$\displaystyle=\lVert u_{k}\rVert_{Z}^{2}\leq\lVert u_{k}\rVert_{Z}^{2}+\lVert
v_{k}\rVert_{Z}^{2}=\lVert\beta_{k}\rVert_{Z}^{2}.$
Putting things together, we have
$\sum_{k=1}^{\infty}\lVert(D(D+\gamma
I)^{-1}-I)\theta_{k}\rVert_{2}^{2}\leq\frac{\gamma}{4}\sum_{k=1}^{\infty}\lVert\beta_{k}\rVert_{Z}^{2}=\frac{\gamma}{4}\lVert\mathscr{S}\rVert_{\textrm{HS}}^{2}.$
Hence,
$\frac{1}{n}\mathbb{E}\left(\sum_{i=1}^{n}\lVert\mathscr{S}(Z_{i})-\hat{\mathscr{S}}(Z_{i})\rVert_{Z}^{2}\,|\,Z^{(n)}\right)\leq\frac{\sigma^{2}}{n}\sum_{i=1}^{n}\frac{\hat{\mu}_{i}^{2}}{(\hat{\mu}_{i}+\gamma)^{2}}+\frac{\gamma}{4n}\lVert\mathscr{S}\rVert_{\textrm{HS}}^{2},$
and using that
$\frac{\hat{\mu}_{i}^{2}}{(\hat{\mu}_{i}+\gamma)^{2}}\leq\min(1,\hat{\mu}_{i}^{2}/(4\hat{\mu}_{i}\gamma))=\min(\hat{\mu}_{i}/4,\gamma)/\gamma,$
we have shown equation (40). ∎
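As a quick sanity check (our own illustration, not part of the paper), the elementary bound used in the last display can be verified numerically:

```python
# Numeric sanity check of the elementary bound
#   mu^2 / (mu + g)^2 <= min(mu/4, g) / g,
# which follows from (mu + g)^2 >= 4*mu*g; illustrative values only.
def lhs(mu, g):
    return mu ** 2 / (mu + g) ** 2

def rhs(mu, g):
    return min(mu / 4.0, g) / g

mus = [0.0, 1e-4, 0.01, 0.1, 1.0, 10.0, 100.0]
gammas = [1e-3, 0.05, 0.5, 2.0]
assert all(lhs(m, g) <= rhs(m, g) + 1e-12 for m in mus for g in gammas)
```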
To go from a conditional statement to an unconditional result, we first
require the following lemma.
###### Lemma 18.
Let $x_{1},\ldots,x_{n}$ be i.i.d. observations of a centred Hilbertian random
variable $X$ with $\mathbb{E}\lVert X\rVert^{2}<\infty$. Let $\mathscr{C}$ denote the
covariance operator of $X$ with eigen-expansion
$\mathscr{C}=\sum_{k=1}^{\infty}\mu_{k}e_{k}\otimes e_{k}$ (43)
for an orthonormal basis $(e_{k})_{k=1}^{\infty}$, and summable eigenvalues
$\mu_{1}\geq\mu_{2}\geq\cdots\geq 0$. Let the random matrix
$K\in{\mathbb{R}}^{n\times n}$ have entries given by $K_{ij}=\langle
x_{i},x_{j}\rangle$ and denote the eigenvalues of $K/n$ by
$\hat{\mu}_{1}\geq\hat{\mu}_{2}\geq\cdots\geq\hat{\mu}_{n}\geq 0$.
For all $r>0$,
${\mathbb{E}}\bigg{(}\sum_{k=1}^{n}\min(\hat{\mu}_{k},r)\bigg{)}\leq\sum_{k=1}^{\infty}\min(\mu_{k},r).$
###### Proof.
It suffices to show that given any $\epsilon>0$, we have
${\mathbb{E}}\bigg{(}\sum_{k=1}^{n}\min(\hat{\mu}_{k},r)\bigg{)}\leq\epsilon+\sum_{k=1}^{\infty}\min(\mu_{k},r).$
Now let $d$ be such that
$\sum_{k=d+1}^{\infty}\mu_{k}<\epsilon/n.$
Let $\Phi\in\mathbb{R}^{n\times d}$ have entries given by
$\Phi_{ij}:=\langle x_{i},e_{j}\rangle,$
such that
$(\Phi\Phi^{\top})_{ij}:=\sum_{k=1}^{d}\langle x_{i},e_{k}\rangle\langle
x_{j},e_{k}\rangle.$
From this, it is clear that
$(K-\Phi\Phi^{\top})_{ij}=\sum_{k=d+1}^{\infty}\langle
x_{i},e_{k}\rangle\langle x_{j},e_{k}\rangle.$
Thus for $v\in\mathbb{R}^{n}$
$v^{\top}(K-\Phi\Phi^{\top})v=\sum_{i=1}^{n}\sum_{j=1}^{n}v_{i}v_{j}\sum_{k=d+1}^{\infty}\langle
x_{i},e_{k}\rangle\langle
x_{j},e_{k}\rangle=\sum_{k=d+1}^{\infty}\left\langle\sum_{i=1}^{n}v_{i}x_{i},e_{k}\right\rangle^{2}\geq
0,$
showing that $K-\Phi\Phi^{\top}$ is positive semi-definite.
Next let $\mathbb{S}^{d}_{+}$ be the cone of positive semi-definite $d\times
d$ matrices, and for $A\in\mathbb{S}^{d}_{+}$ and $k=1,\ldots,d$, let
$\lambda_{k}(A)$ denote the $k$th largest eigenvalue. Let
$f:\mathbb{S}^{d}_{+}\to{\mathbb{R}}$ be given by
$f(A)=\sum_{k=1}^{d}\min(\lambda_{k}(A),r).$
By Weyl’s inequality, noting that the non-zero eigenvalues of
$\Phi^{\top}\Phi$ and $\Phi\Phi^{\top}$ coincide, we have for all $k$,
$\hat{\mu}_{k}\leq\lambda_{k}(\Phi^{\top}\Phi/n)+\lambda_{1}\bigl((K-\Phi\Phi^{\top})/n\bigr)$
and so
$\min(\hat{\mu}_{k},r)\leq\min(\lambda_{k}(\Phi^{\top}\Phi/n),r)+{\mathrm{tr}}(K-\Phi\Phi^{\top})/n.$
Thus,
${\mathbb{E}}\bigg{(}\sum_{k=1}^{n}\min(\hat{\mu}_{k},r)\bigg{)}\leq{\mathbb{E}}f(\Phi^{\top}\Phi/n)+{\mathbb{E}}{\mathrm{tr}}(K-\Phi\Phi^{\top}).$
(44)
Now by Fubini’s theorem,
${\mathbb{E}}{\mathrm{tr}}(K-\Phi\Phi^{\top})=\sum_{i=1}^{n}\sum_{k=d+1}^{\infty}{\mathbb{E}}(\langle
x_{i},e_{k}\rangle^{2})=n\sum_{k=d+1}^{\infty}\mu_{k}<\epsilon.$
We now claim that $f$ is concave, from which the result will follow. Indeed,
then by Jensen’s inequality, ${\mathbb{E}}f(\Phi^{\top}\Phi/n)\leq
f({\mathbb{E}}\Phi^{\top}\Phi/n)$ and
$\frac{1}{n}\big{(}{\mathbb{E}}\Phi^{\top}\Phi\big{)}_{kl}=\frac{1}{n}\sum_{i=1}^{n}{\mathbb{E}}(\langle
x_{i},e_{k}\rangle\langle x_{i},e_{l}\rangle)=\mu_{k}\mathbbm{1}_{\\{k=l\\}}.$
Thus,
$f({\mathbb{E}}\Phi^{\top}\Phi/n)=\sum_{k=1}^{d}\min(\mu_{k},r),$
and so returning to (44) we would have
${\mathbb{E}}\bigg{(}\sum_{k=1}^{n}\min(\hat{\mu}_{k},r)\bigg{)}\leq\epsilon+\sum_{k=1}^{\infty}\min(\mu_{k},r).$
We now show that $f$ is concave. Take $t\in(0,1)$ and
$A,B\in\mathbb{S}^{d}_{+}$. We will show that
$\sum_{k=1}^{d}(\lambda_{k}(tA+(1-t)B)-r)_{+}\leq\sum_{k=1}^{d}\\{t(\lambda_{k}(A)-r)_{+}+(1-t)(\lambda_{k}(B)-r)_{+}\\},$
(45)
where $(\cdot)_{+}$ denotes the positive part. This will prove concavity of
$f$ as
$\displaystyle\sum_{k=1}^{d}\lambda_{k}(tA+(1-t)B)$
$\displaystyle={\mathrm{tr}}(tA+(1-t)B)$
$\displaystyle=t{\mathrm{tr}}(A)+(1-t){\mathrm{tr}}(B)=\sum_{k=1}^{d}\\{t\lambda_{k}(A)+(1-t)\lambda_{k}(B)\\},$
so subtracting (45) yields $f(tA+(1-t)B)\geq tf(A)+(1-t)f(B)$ as desired.
Certainly (45) holds when $r\geq\lambda_{1}(tA+(1-t)B)$. Now by Lidskii’s
inequality, for each $j=1,\ldots,d$,
$\sum_{k=1}^{j}\lambda_{k}(tA+(1-t)B)\leq\sum_{k=1}^{j}\\{t\lambda_{k}(A)+(1-t)\lambda_{k}(B)\\}.$
(46)
For convenience, let us set $\lambda_{d+1}(tA+(1-t)B)=0$. Then for any
$j=1,\ldots,d$, if $\lambda_{j+1}(tA+(1-t)B)\leq r\leq\lambda_{j}(tA+(1-t)B)$,
we have
$\displaystyle\sum_{k=1}^{d}(\lambda_{k}(tA+(1-t)B)-r)_{+}$
$\displaystyle=\sum_{k=1}^{j}(\lambda_{k}(tA+(1-t)B)-r)$
$\displaystyle\leq\sum_{k=1}^{j}\\{t(\lambda_{k}(A)-r)+(1-t)(\lambda_{k}(B)-r)\\}$
$\displaystyle\leq\sum_{k=1}^{d}\\{t(\lambda_{k}(A)-r)_{+}+(1-t)(\lambda_{k}(B)-r)_{+}\\},$
using (46) for the first inequality. We thus have that (45) holds whatever the
value of $r$, and so $f$ is concave, which completes the proof. ∎
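The concavity of $f$ established above can also be checked numerically; the following sketch (illustrative, with randomly generated matrices, not from the paper) tests $f(tA+(1-t)B)\geq tf(A)+(1-t)f(B)$:

```python
import numpy as np

# Numerical check of the concavity of f(A) = sum_k min(lambda_k(A), r)
# on positive semi-definite matrices, as proved above (illustrative only).
def f(A, r):
    return np.minimum(np.linalg.eigvalsh(A), r).sum()

rng = np.random.default_rng(0)
d, r, t = 5, 0.7, 0.3
for _ in range(100):
    M1, M2 = rng.standard_normal((2, d, d))
    A, B = M1 @ M1.T, M2 @ M2.T  # random PSD matrices
    assert f(t * A + (1 - t) * B, r) >= t * f(A, r) + (1 - t) * f(B, r) - 1e-9
```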
Combining Lemma 17 and Lemma 18 now yields the following bound on our
regression estimator.
###### Lemma 19.
Let $\mathcal{P}$ consist of a family of distributions of
$(X,Z)\in\mathcal{H}_{X}\times\mathcal{H}_{Z}$ such that
$X=\mathscr{S}_{P}Z+\varepsilon_{P},$
where we assume that
$\sup_{P\in\mathcal{P}}\lVert\mathscr{S}_{P}\rVert_{\textrm{HS}}<C$ and
$\sup_{P\in\mathcal{P}}{\mathbb{E}}_{P}\lVert\varepsilon_{P}\rVert^{2}<\sigma^{2}$.
Suppose we are given $n$ i.i.d. observations $(x_{i},z_{i})_{i=1}^{n}$ of
$(X,Z)$ and denote by $(\mu_{k,P})_{k\in\mathbb{N}}$ the non-negative
eigenvalues of $\mathrm{Cov}_{P}(Z)$. Let $\hat{\mathscr{S}}_{\gamma}$
be the estimator in (19). We have for each $P\in\mathcal{P}$, that
$\frac{1}{n}\mathbb{E}_{P}\left(\sum_{i=1}^{n}\lVert\mathscr{S}_{P}(z_{i})-\hat{\mathscr{S}}_{\gamma}(z_{i})\rVert^{2}\right)\leq\frac{\sigma^{2}}{\gamma}\frac{1}{n}\sum_{k=1}^{\infty}\min(\mu_{k,P}/4,\gamma)+\lVert\mathscr{S}_{P}\rVert_{\textrm{HS}}^{2}\frac{\gamma}{4n}.$
(47)
Further, if we use $\hat{\gamma}$ as in (20), that is,
$\hat{\gamma}=\operatorname*{argmin}_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{k=1}^{n}\min(\hat{\mu}_{k}/4,\gamma)+\frac{\gamma}{4}\right),$
to produce an estimate $\hat{\mathscr{S}}:=\hat{\mathscr{S}}_{\hat{\gamma}}$
of $\mathscr{S}_{P}$, then
$\sup_{P\in\mathcal{P}}\mathbb{E}_{P}\left(\frac{1}{n}\sum_{i=1}^{n}\lVert\mathscr{S}_{P}(z_{i})-\hat{\mathscr{S}}(z_{i})\rVert^{2}\right)\leq\max(\sigma^{2},C)\sup_{P\in\mathcal{P}}\inf_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{k=1}^{\infty}\min(\mu_{k,P},\gamma)+\gamma\right).$ (48)
###### Proof.
Result (47) follows immediately from Lemmas 17 and 18. To show (48), we argue
as follows. Let $(e_{k})_{k\in\mathbb{N}}$ denote a basis of
$\mathcal{H}_{X}$. Then conditioning on $z_{1},\dots,z_{n}$ and applying
equation (40) in Lemma 17, we get that
$\displaystyle\sup_{P\in\mathcal{P}}\mathbb{E}_{P}\left(\frac{1}{n}\sum_{i=1}^{n}\lVert\mathscr{S}_{P}(z_{i})-\hat{\mathscr{S}}(z_{i})\rVert^{2}\right)\leq\sup_{P\in\mathcal{P}}\mathbb{E}_{P}\left(\frac{\sigma^{2}}{\hat{\gamma}}\frac{1}{n}\sum_{k=1}^{n}\min(\hat{\mu}_{k}/4,\hat{\gamma})+\lVert\mathscr{S}_{P}\rVert_{\textrm{HS}}^{2}\frac{\hat{\gamma}}{4}\right)$
$\displaystyle\leq\max(\sigma^{2},C)\sup_{P\in\mathcal{P}}\mathbb{E}_{P}\left[\min_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{k=1}^{n}\min(\hat{\mu}_{k}/4,\gamma)+\frac{\gamma}{4}\right)\right].$
Using the fact that the expectation of a minimum is at most the infimum of
the expectations, we get that
$\displaystyle\sup_{P\in\mathcal{P}}\mathbb{E}_{P}\left[\min_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{k=1}^{n}\min(\hat{\mu}_{k}/4,\gamma)+\frac{\gamma}{4}\right)\right]$
$\displaystyle\leq\sup_{P\in\mathcal{P}}\inf_{\gamma>0}\left[\mathbb{E}_{P}\left(\frac{1}{\gamma
n}\sum_{k=1}^{n}\min(\hat{\mu}_{k}/4,\gamma)+\frac{\gamma}{4}\right)\right]$
$\displaystyle\leq\sup_{P\in\mathcal{P}}\inf_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{k=1}^{\infty}\min(\mu_{k,P},\gamma)+\gamma\right),$
where the second inequality is due to Lemma 18. ∎
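In practice, the minimisation defining $\hat{\gamma}$ in (20) can be carried out over a grid of candidate values. The following sketch is our own illustration (the variable names and the grid are assumptions, not the authors' implementation):

```python
import numpy as np

# Grid-search sketch of the tuning-parameter choice in (20):
#   gamma_hat = argmin_{gamma>0} (1/(gamma*n)) * sum_k min(mu_hat_k/4, gamma) + gamma/4.
def criterion(gamma, mu_hat):
    n = len(mu_hat)
    return np.minimum(mu_hat / 4.0, gamma).sum() / (gamma * n) + gamma / 4.0

def gamma_hat(mu_hat, grid=None):
    if grid is None:
        grid = np.geomspace(1e-6, 1e2, 400)  # log-spaced candidate values
    return min(grid, key=lambda g: criterion(g, mu_hat))

mu_hat = np.exp(-0.5 * np.arange(1, 51))  # a made-up decaying spectrum
g = gamma_hat(mu_hat)
assert criterion(g, mu_hat) <= criterion(2.0 * g, mu_hat)
```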
Finally, we can prove Theorem 5.
###### Proof.
By Theorem 3 and the assumptions of the theorem, it is sufficient to show that
$\sup_{P\in\tilde{\mathcal{P}}_{0}}\sqrt{n}\mathbb{E}_{P}\left(\frac{1}{n}\sum_{i=1}^{n}\lVert\mathscr{S}^{X}_{P}(z_{i})-\hat{\mathscr{S}}(z_{i})\rVert^{2}\right)\to
0$ (49)
and similarly for the regression of $Y$ on $Z$. This can be seen by noting
that an application of Cauchy–Schwarz and Markov’s inequality yields that
$nM_{n,P}^{f}M_{n,P}^{g}\overset{P}{\rightrightarrows}0$ and, by the upper
bound on $u_{P}$ and $v_{P}$ in assumption (ii),
$\tilde{M}_{n,P}^{f}\overset{P}{\rightrightarrows}0$ and
$\tilde{M}_{n,P}^{g}\overset{P}{\rightrightarrows}0$.
Lemma 19 implies that it is sufficient to show that
$\sqrt{n}\sup_{P\in\tilde{\mathcal{P}}_{0}}\inf_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{k=1}^{\infty}\min(\mu_{k,P},\gamma)+\gamma\right)\to 0$
as $n\to\infty$ for (49) to hold. For each $P\in\tilde{\mathcal{P}}_{0}$, we
let $\phi_{P}:\mathbb{R}_{+}\to\mathbb{R}_{+}$ be given by
$\phi_{P}(\gamma)=\sum_{k=1}^{\infty}\min(\mu_{k,P},\gamma).$
By assumption (iii), $\lim_{\gamma\downarrow
0}\sup_{P\in\tilde{\mathcal{P}}_{0}}\phi_{P}(\gamma)=0$, hence for any
$\epsilon>0$ we can find $N\in\mathbb{N}$ such that for any $n\geq N$,
$\sup_{P\in\tilde{\mathcal{P}}_{0}}\sqrt{\phi_{P}\left(n^{-1/2}\right)}<\epsilon/2$.
Let $\gamma_{n,P}=n^{-1/2}\sqrt{\phi_{P}\left(n^{-1/2}\right)}$. Then,
$\displaystyle\sqrt{n}\sup_{P\in\tilde{\mathcal{P}}_{0}}\inf_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{k=1}^{\infty}\min(\mu_{k,P},\gamma)+\gamma\right)=\sup_{P\in\tilde{\mathcal{P}}_{0}}\inf_{\gamma>0}\left(\frac{\phi_{P}(\gamma)}{\gamma\sqrt{n}}+\sqrt{n}\gamma\right)$
$\displaystyle\leq\sup_{P\in\tilde{\mathcal{P}}_{0}}\left(\frac{\phi_{P}(\gamma_{n,P})}{\gamma_{n,P}\sqrt{n}}+\sqrt{n}\gamma_{n,P}\right)=\sup_{P\in\tilde{\mathcal{P}}_{0}}\left(\frac{\phi_{P}\left(n^{-1/2}\sqrt{\phi_{P}\left(n^{-1/2}\right)}\right)}{\sqrt{\phi_{P}\left(n^{-1/2}\right)}}+\sqrt{\phi_{P}\left(n^{-1/2}\right)}\right).$
Assuming that $\epsilon\leq 2$ and using that $\phi_{P}$ is increasing, we get
that for $n\geq N$,
$\displaystyle\sup_{P\in\tilde{\mathcal{P}}_{0}}\left(\frac{\phi_{P}\left(n^{-1/2}\sqrt{\phi_{P}\left(n^{-1/2}\right)}\right)}{\sqrt{\phi_{P}\left(n^{-1/2}\right)}}+\sqrt{\phi_{P}\left(n^{-1/2}\right)}\right)<\sup_{P\in\tilde{\mathcal{P}}_{0}}\left(\frac{\phi_{P}\left(n^{-1/2}\epsilon/2\right)}{\sqrt{\phi_{P}\left(n^{-1/2}\right)}}+\sqrt{\phi_{P}\left(n^{-1/2}\right)}\right)$
$\displaystyle<\sup_{P\in\tilde{\mathcal{P}}_{0}}2\sqrt{\phi_{P}\left(n^{-1/2}\right)}<\epsilon,$
proving the result. ∎
###### Corollary 2.
Consider the setup of Lemma 19 but with the additional assumption that for
some $a,b>0$, we have $\mu_{k,P}\leq ae^{-bk}$ for all $P\in\mathcal{P}$. Then
$\sup_{P\in\mathcal{P}}\mathbb{E}_{P}\left(\frac{1}{n}\sum_{i=1}^{n}\lVert\mathscr{S}_{P}(z_{i})-\hat{\mathscr{S}}(z_{i})\rVert^{2}\right)=o(\log
n/n).$
###### Proof.
By Lemma 19, it suffices to show that
$\sup_{P\in\mathcal{P}}\inf_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{k=1}^{\infty}\min(\mu_{k,P},\gamma)+\gamma\right)\leq\inf_{\gamma>0}\left(\frac{1}{\gamma
n}\sum_{k=1}^{\infty}\min(ae^{-bk},\gamma)+\gamma\right)=o(\log n/n).$
To that end, note that
$\frac{1}{\gamma
n}\sum_{k=1}^{\infty}\min(ae^{-bk},\gamma)+\gamma\leq-\frac{1}{nb}\log(\gamma/a)+\frac{1}{n\gamma}\int_{-\log(\gamma/a)/b}^{\infty}ae^{-xb}\,\mathrm{d}x+\gamma=-\frac{1}{nb}\log(\gamma/a)+\frac{1}{nb}+\gamma.$
The right-hand side is a strictly convex function of $\gamma$, so it is
minimised at the unique root $\gamma^{*}:=\frac{1}{nb}$ of its derivative,
yielding the minimum value
$\frac{1}{nb}(\log(anb)+2)=o(\log n/n).\qed$
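The closed-form minimum above can be checked numerically against a grid search (illustrative parameter values, not from the paper):

```python
import math

# Check that the closed-form value (log(a*n*b) + 2)/(n*b), attained at
# gamma* = 1/(n*b), upper-bounds the infimum over gamma of
#   (1/(gamma*n)) * sum_k min(a*exp(-b*k), gamma) + gamma.
def bound(gamma, a, b, n, kmax=200):
    s = sum(min(a * math.exp(-b * k), gamma) for k in range(1, kmax + 1))
    return s / (gamma * n) + gamma

a, b, n = 2.0, 1.0, 10_000
closed_form = (math.log(a * n * b) + 2) / (n * b)
grid_min = min(bound(10 ** (e / 25.0), a, b, n) for e in range(-150, 0))
assert grid_min <= closed_form + 1e-9
```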
## Appendix D Additional numerical results
Here we include additional results relating to the setups in Section 5.
Figures 7, 8 and 9 plot rejection rates against nominal significance levels
for pfr and the GHCM, for the setups described in Section 5.1.
Figure 10 plots rejection rates for a subset of null settings considered in
Section 5.1.1 but where the noise $N_{Y}$ in (25) is $t$-distributed.
Figure 11 plots rejection rates for a subset of null settings considered in
Section 5.1.1 but where instead of (25), the regression model for $Y$ is given
by
$Y=\int_{0}^{1}\alpha_{a}(t)Z(t)\mathrm{d}t+\sqrt{\frac{100}{n}}\int_{0}^{1}\frac{\alpha_{a}(t)}{a}X(t)\mathrm{d}t+N_{Y}.$
Note that when $n=100$, the model is identical to (25). In general however,
$\|{\mathbb{E}}\mathrm{Cov}(X,Y\,|\,Z)\|_{\textrm{HS}}$ scales with
$1/\sqrt{n}$ here, and so Theorem 4 suggests that the power should not change
much as $n$ varies. This is confirmed by our empirical results, where we
observe that the power remains largely unchanged as $n$ changes, suggesting in
particular that the GHCM has power against $1/\sqrt{n}$ alternatives.
Figure 12 plots rejection rates for the same settings considered in Section
5.1.1 but where we use the FDboost package for regressions instead of the
refund package. We use default tuning parameters for the regression; it is
possible that performance could improve with more careful tuning.
Figure 13 plots rejection rates for the same settings considered in Section
5.1.2 but where the $X$ and $Y$ curves are observed on an irregular grid with
points sampled independently and uniformly on $[0,1]$. We consider a sparse
grid of $4$ points as well as four unequal grid sizes sampled as the maximum
of $4$ and a Poisson random variable with mean in $\\{10,25,50,100\\}$.
Figure 14 plots rejection rates for a simulation based on the real data
analysis in Section 5.3. For each of the two edges that had
Benjamini–Hochberg-corrected $p$-values at most $5\%$ (O-L—PO-L and O-R—PO-R),
we created artificial datasets as follows. We added independent Brownian
motion noise to each of the estimated regression functions (note there were
regression functions estimated for each variable in each of the two groups)
thereby simulating a new $X$ and $Y$ conditional on the fixed $Z$. In these
simulated datasets, the null of conditional independence does hold, and so we
should expect the GHCM to deliver uniformly-distributed $p$-values. The
results using the GHCM as described in Section 5.3 and for varying standard
deviation $\sigma$ of the Brownian motion noise for one set of regressions
with the other set at $1$, are shown in Figure 14. We see that even in the low
$\sigma$ settings, which are expected to be the most challenging, the GHCM
maintains level control.
Figure 7: Rejection rates against significance level $\alpha$ for the pfr (red) and GHCM (green) tests under null (light) and alternative (dark) settings when $a=2$.

Figure 8: Rejection rates against significance level $\alpha$ for the pfr (red) and GHCM (green) tests under null (light) and alternative (dark) settings when $a=6$.

Figure 9: Rejection rates against significance level $\alpha$ for the pfr (red) and GHCM (green) tests under null (light) and alternative (dark) settings when $a=12$.

Figure 10: Rejection rates in a subset of the null settings considered in Section 5.1.1 for the nominal 5%-level pfr test (top) and GHCM test (bottom) where $\sigma_{X}=0.25$ and $n=500$ and the noise $N_{Y}$ in (25) is $t$-distributed with $\mathrm{df}$ degrees of freedom.

Figure 11: Rejection rates in a subset of the alternative settings considered in Section 5.1.1 for the nominal 5%-level GHCM test where $a=2$ and $\alpha_{a}$ has been replaced with $(100/n)^{-1/2}\alpha_{a}$.

Figure 12: Rejection rates in the setting of Section 5.1.1, replicating Figures 1 and 2, for the nominal 5%-level GHCM test using the FDboost package for regressions instead of the refund package.

Figure 13: Rejection rates in the setting of Section 5.1.2, replicating Figure 4, for the nominal 5%-level GHCM test where the $X$ and $Y$ curves are observed on irregular grids as described in the main text.

Figure 14: Rejection rates for nominal 5%-level GHCM tests in simulation settings based on the EEG data studied in Section 5.3; see the main text for further details.
# Siegel’s theorem via the Lawrence–Venkatesh method
Marc Paul Noordman Bernoulli Institute, University of Groningen, The
Netherlands
(Date: January 18, 2021)
###### Abstract.
In the recent paper [LV20], B. Lawrence and A. Venkatesh develop a method of
proving finiteness theorems in arithmetic geometry by studying the geometry of
families over a base variety. Their results include a new proof of both the
$S$-unit theorem and Faltings’ theorem, obtained by constructing and studying
suitable abelian-by-finite families over
$\mathbb{P}^{1}\setminus\\{0,1,\infty\\}$ and over an arbitrary curve of genus
$\geq 2$ respectively. In this paper, we apply this strategy to reprove
Siegel’s theorem: we construct an abelian-by-finite family on a punctured
elliptic curve to prove finiteness of $S$-integral points on elliptic curves.
###### Contents
1. 1 Introduction
3. 3 The Lawrence–Venkatesh framework
4. 4 An abelian-by-finite cover of $E\setminus\\{0\\}$
1. 4.1 The monodromy
2. 4.2 Siegel’s theorem
## 1\. Introduction
Let $K$ be a number field and $E$ an elliptic curve over $K$ given by a
Weierstrass equation $y^{2}=x^{3}+ax+b$. Let $S$ be any finite set of places
of $K$ including those where $a$ and $b$ have negative valuation and those
dividing the discriminant of this Weierstrass model. An $S$-integral point of
$E$ is then a solution $(x,y)$ of this Weierstrass equation with
$x,y\in\mathcal{O}_{K}[S^{-1}]$. Siegel’s theorem states that $E$ has only
finitely many $S$-integral points (see [Cor16, Chapter 3] for a modern
treatment of this theorem, including several proofs).
In the recent paper [LV20], B. Lawrence and A. Venkatesh develop a method of
proving finiteness theorems in arithmetic geometry, by studying the geometry
of families over a base and the associated complex-analytic and $p$-adic
period mappings. They apply this method to prove or reprove several results in
arithmetic geometry, including reproofs of the $S$-unit theorem (finiteness of
the set of $S$-integral points on $\mathbb{P}^{1}\setminus\\{0,1,\infty\\}$)
and of Faltings’ theorem (finiteness of the set of rational points on smooth
projective curves of genus $\geq 2$). This new approach has generated much
interest. In [LS20] this method is used to prove a Shafarevich theorem for
hypersurfaces of abelian varieties. Uniformity aspects of the
Lawrence–Venkatesh method are analyzed in [NX19], and effectivity aspects in
[BBB+19]. The latter work moreover compares the Lawrence–Venkatesh method to
the Kim–Chabauty approach to (effective) finiteness of rational points on
curves. Finally, we mention the work of [JL19], who show that in the context
of the Lawrence–Venkatesh method, one can often extend results about
finiteness or non-Zariski denseness of sets of points over number rings to the
same results for points over more general finitely generated rings.
The goal of the present paper is to show that Siegel’s theorem admits a proof
via the Lawrence–Venkatesh method as well. Note that since [LV20] already
handles the case of $\mathbb{P}^{1}\setminus\\{0,1,\infty\\}$ and of smooth
curves of genus $\geq 2$, this is the only remaining case left in dimension
1. (After uploading this work to arXiv, we have been made aware of H. Liu’s
thesis [Liu20], which also handles this case.) We construct an abelian-by-finite
family over a punctured elliptic curve and we show that it has the correct
properties for the Lawrence–Venkatesh method to succeed.
The paper consists of two parts. In the first section, we revisit briefly the
Lawrence–Venkatesh method. We formulate a theorem (Theorem 3.4) which
summarizes the result of this approach, in the case of an abelian-by-finite
family. This theorem may be considered a “black box”, in that one can apply
this theorem without knowing the intricate and sometimes technically demanding
techniques underlying [LV20]; as such we hope that articulating this theorem
explicitly may help increase the reach and usability of this new method. In
the second section, we show how to apply the Lawrence–Venkatesh method to
prove Siegel’s theorem. We construct a suitable abelian-by-finite family over a
punctured elliptic curve, and show that it satisfies the conditions of the
black box theorem. As is often the case, the main difficulty is showing that
the monodromy is large enough.
### Notation
In the remainder of this paper we will use the following notation throughout.
Let $K$ be a number field with ring of integers $\mathcal{O}_{K}$, and $S$ a
finite set of finite places of $K$. We will assume that $S$ contains all
places of $K$ that are ramified over the corresponding primes of $\mathbb{Q}$.
We denote by $\mathcal{O}_{K}[S^{-1}]$ the ring of $S$-integers of $K$, i.e.
the ring of elements of $K$ that have non-negative valuation for all
valuations $v$ on $K$ with $v\notin S$. For any finite place $v$, we choose
some Frobenius $\operatorname{Frob}_{v}\in\operatorname{Gal}(\overline{K}/K)$.
We denote by $K_{v}$ the $v$-adic completion of $K$. The unique continuous
extension of $\operatorname{Frob}_{v}$ to $K_{v}$ will also be denoted
$\operatorname{Frob}_{v}$. We will also fix an inclusion
$K\subseteq\mathbb{C}$.
Let $X/K$ be a smooth, geometrically connected, but not necessarily proper,
algebraic variety. We will assume that $X$ has a smooth model $\mathcal{X}$
over $\mathcal{O}_{K}[S^{-1}]$, and we will fix one. We are interested in
studying the set $\mathcal{X}(\mathcal{O}_{K}[S^{-1}])$ of $S$-integral points
of this model, which by abuse of notation we will just denote by
$X(\mathcal{O}_{K}[S^{-1}])$. Note that in general this set depends on the
choice of the model $\mathcal{X}$.
### Acknowledgments
I am grateful to Jaap Top for interesting conversations and useful comments on
this work.
### Addendum
Shortly after the first version of this work appeared on arXiv, A. Cadoret
kindly alerted me to the master thesis of H. Liu [Liu20], which contains,
along with a very well-written and detailed explanation of the
Lawrence–Venkatesh proof of Faltings’ theorem, also a proof of Siegel’s
theorem via the Lawrence–Venkatesh method, although via a differently
constructed abelian-by-finite family.
## 3\. The Lawrence–Venkatesh framework
In this section we explain the Lawrence–Venkatesh framework, in the case of an
abelian-by-finite family. To simplify matters, we will use a theorem by
Faltings ([Fal83]) which says that geometric étale cohomology of abelian
varieties is semisimple (the authors of [LV20] explicitly avoid using this
theorem in order to increase independence of their proof of Faltings’ theorem
from Faltings’ own proof). We start by recalling relevant definitions from
[LV20].
The following is Definition 5.1 in [LV20].
###### Definition 3.1.
An _abelian-by-finite family over $X$_ is a sequence of maps
$A\overset{f}{\to}X^{\prime}\overset{\pi}{\to}X$ where $A\to X^{\prime}$ is a
family of polarized abelian varieties and $X^{\prime}\to X$ is finite étale. A
_good model_ of such an abelian-by-finite family over
$\mathcal{O}_{K}[S^{-1}]$ is a sequence of maps
$\mathcal{A}\to\mathcal{X}^{\prime}\to\mathcal{X}$ of schemes over
$\mathcal{O}_{K}[S^{-1}]$, where again $\mathcal{A}\to\mathcal{X}^{\prime}$ is
a family of polarized abelian varieties and
$\mathcal{X}^{\prime}\to\mathcal{X}$ is finite étale, which give back the maps
$A\to X^{\prime}\to X$ after base change
$-\otimes_{\mathcal{O}_{K}[S^{-1}]}K$, and where $\mathcal{X}$ is the model of
$X$ that we fixed, and which moreover satisfies the following technical
conditions: the cohomology sheaves
$\mathbf{R}^{q}(\pi\\!\circ\\!f)_{*}\Omega^{p}_{\mathcal{A}/\mathcal{X}}$ and
de Rham sheaves
$\mathcal{H}^{q}=\mathbf{R}^{q}(\pi\\!\circ\\!f)_{*}\Omega^{\bullet}_{\mathcal{A}/\mathcal{X}}$
on $\mathcal{X}$ are locally free as $\mathcal{O}_{\mathcal{X}}$-modules, and
the Gauss–Manin connection extends to a morphism
$\mathcal{H}^{q}\to\mathcal{H}^{q}\otimes\Omega^{1}_{\mathcal{X}/\mathcal{O}_{K}[S^{-1}]}$
on $\mathcal{X}$.
By the usual arguments, any abelian-by-finite family over $K$ has a good model
over $\mathcal{O}_{K}[(S^{\prime})^{-1}]$ for some finite set
$S^{\prime}\supseteq S$ of places of $K$.
The second important definition from [LV20] we need is the notion of full
monodromy, which we now explain. Fix an abelian-by-finite family
$A\overset{f}{\to}X^{\prime}\overset{\pi}{\to}X.$
We denote by $g$ the relative dimension of $A$ over $X^{\prime}$. With respect
to the inclusion $K\subseteq\mathbb{C}$ we fixed, we get continuous maps
$A(\mathbb{C})\to X^{\prime}(\mathbb{C})\to X(\mathbb{C})$. Fix a point
$x_{0}\in X(\mathbb{C})$. The fiber $\pi^{-1}(x_{0})\subset
X^{\prime}(\mathbb{C})$ consists of $\deg\pi$ points, and we get a
decomposition
$H^{1}_{B}(A_{x_{0}}(\mathbb{C}),\mathbb{Q})=\bigoplus_{\pi(x^{\prime})=x_{0}}H^{1}_{B}(A_{x^{\prime}}(\mathbb{C}),\mathbb{Q}).$
Here $H^{1}_{B}$ denotes the first Betti cohomology (also known as singular
cohomology). The action of the fundamental group
$\pi_{1}(X(\mathbb{C}),x_{0})$ by monodromy on these spaces preserves this
decomposition (but not necessarily the individual factors). The symplectic
form $\omega$ induced by the polarization on $A$ is also preserved by this
action.
###### Definition 3.2.
Let
$\rho\colon\pi_{1}(X(\mathbb{C}),x_{0})\to\operatorname{GL}(H^{1}_{B}(A_{x_{0}}(\mathbb{C}),\mathbb{Q}))$
be the monodromy representation. The abelian-by-finite family has _full
monodromy_ if the Zariski closure of the image of $\rho$ contains
$\prod_{\pi(x^{\prime})=x_{0}}\operatorname{Sp}(H^{1}_{B}(A_{x^{\prime}}(\mathbb{C}),\mathbb{Q}),\omega)$.
In order to apply the Lawrence–Venkatesh method, we need to make sure that the
Frobenius action on the fibers of $\pi$ is large enough. In order to make this
precise, we make the following definition, which is a variation of [LV20,
Definition 5.2]
###### Definition 3.3.
Let $E$ be a finite $\operatorname{Gal}(\overline{K}/K)$-set and $v$ a place
of $K$ for which this action is unramified. Then the _$v$ -length of $E$_,
denoted $\operatorname{length}_{v}(E)$, is the average size of the orbits in
$E$ under the action of $\operatorname{Frob}_{v}$, i.e.
$\operatorname{length}_{v}(E)=\frac{\\#E}{\textrm{number of
$\operatorname{Frob}_{v}$-orbits}}.$
If $E$ is a finite $K$-scheme we write $\operatorname{length}_{v}(E)$ instead
of $\operatorname{length}_{v}(E(\overline{K}))$. Note that the $v$-length of
$E$ does not depend on the choice of the Frobenius element
$\operatorname{Frob}_{v}$.
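As a concrete illustration (with hypothetical orbits, not an example from the paper), the $v$-length is just the average orbit size:

```python
# Compute the v-length of a finite Galois set from its Frobenius orbits:
# length_v(E) = #E / (number of Frob_v-orbits), i.e. the average orbit size.
def v_length(orbits):
    return sum(len(o) for o in orbits) / len(orbits)

# A hypothetical 6-element set with Frobenius orbits of sizes 3, 2 and 1:
assert v_length([{0, 1, 2}, {3, 4}, {5}]) == 2.0
```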
The following theorem is a compact restatement of the Lawrence–Venkatesh
method. This theorem is not stated as such in [LV20], but all the components
of the proof are there. Therefore we will give a short sketch of the argument.
We stress that our argument uses the fact that the Galois representation on
étale cohomology of an abelian variety over a number field is semi-simple,
which is a deep theorem of Faltings [Fal83]. The authors of [LV20] avoid using
this theorem, both to keep their proof of Faltings’ theorem independent of
Faltings’ own proof, and to be able to apply the same methods to more general
families for which semisimplicity is not unconditionally known. We have also
incorporated the results of Bakker and Tsimerman [BT19], as explained in
[LV20, Section 9].
###### Theorem 3.4.
Let $T\subseteq X(\mathcal{O}_{K}[S^{-1}])$ be a subset of $S$-integral points
of $X$. Assume that there is an abelian-by-finite family
$A\overset{f}{\to}X^{\prime}\overset{\pi}{\to}X$, with
$\frac{1}{2}g(g+1)\deg(\pi)>\dim(X)$, satisfying the following properties:
(1) The family has full monodromy, and
(2) For every $x\in T$, there is a finite place $v$ of $K$ for which the Galois
action on $\pi^{-1}(x)$ is unramified and such that
$\operatorname{length}_{v}(\pi^{-1}(x))\geq\frac{4g^{2}\deg(\pi)}{\frac{1}{2}g(g+1)\deg(\pi)-\dim(X)}.$
Then $T$ is not Zariski dense in $X$.
###### Proof sketch.
After possibly increasing $S$, we can and will assume that the abelian-by-
finite family admits a good model over $\mathcal{O}_{K}[S^{-1}]$. Note also
that in the second condition, we can restrict to $v\notin S$: for an $x\in T$
and $v$ as in the second condition, Chebotarev’s density theorem implies that
there are infinitely many places $v^{\prime}$ of $K$ for which the Galois
action on $\pi^{-1}(x)$ is also unramified, and for which the actions of
$\operatorname{Frob}_{v}$ and $\operatorname{Frob}_{v^{\prime}}$ are conjugate
and hence give the same length. Moreover, the assumption that the abelian-by-
finite family has a good model over $\mathcal{O}_{K}[S^{-1}]$ implies that the
residue fields of the points in $\pi^{-1}(x)$ are field extensions of $K$
which are unramified away from $S$. It follows from the Hermite-Minkowski
theorem that there are only finitely many possibilities for $\pi^{-1}(x)$ as a
$\operatorname{Gal}(\overline{K}/K)$-set, where $x$ ranges over $T$.
Therefore, we can choose the places $v$ that occur in condition (2) to lie in
a fixed finite set of places $S^{\prime}$ with $S\cap S^{\prime}=\emptyset$.
We may therefore decompose $T=T_{1}\cup\cdots\cup T_{r}$ as a finite union of
subsets, where for each $T_{i}$ there is a single place $v\in S^{\prime}$ that
works for all $x\in T_{i}$. Since a finite union of non-Zariski dense sets is
not Zariski dense, we may replace $T$ by one of the $T_{i}$, and assume that
we have a fixed place $v\notin S$ that works for all points $x\in T$. Let us
fix such a place $v$.
Let $x\in T$. We have $\pi^{-1}(x)=\operatorname{Spec}E_{x}$ for a $K$-algebra
$E_{x}$ of degree $\deg\pi$. As in [LV20, Lemma 2.3], there are only finitely
many possibilities for the filtered $\varphi$-module
$V_{x}:=H_{\textrm{dR}}^{1}(A_{x}/K_{v})$ (as mentioned above, the étale
cohomologies are semisimple so we don’t need to bother with
semisimplification). Through the factorization
$A_{x,v}\to\operatorname{Spec}E_{x,v}\to\operatorname{Spec}K_{v},$
where $A_{x,v}:=A_{x}\otimes_{K}K_{v}$ and $E_{x,v}=E_{x}\otimes_{K}K_{v}$,
this filtered $\varphi$-module moreover has the structure of a free
$E_{x,v}$-module of rank $2g$. The Frobenius of $V_{x}$ is compatible with the
Frobenius of $E_{x,v}$, and the filtration of $V_{x}$ is given by a free and
saturated $E_{x,v}$-submodule, which is moreover Lagrangian with respect to
the symplectic form $\omega$ on $V_{x}$. For any point $x^{\prime}\in
X(K_{v})$ with $x\equiv x^{\prime}\mod v$, the same statements are true for
$E_{x^{\prime}}$ (where
$\pi^{-1}(x^{\prime})=\operatorname{Spec}E_{x^{\prime}}$) and
$V_{x^{\prime}}:=H_{\textrm{dR}}^{1}(A_{x^{\prime}}/K_{v})$. The Gauss-Manin
connection gives us canonical bijections
$E_{x,v}\overset{\sim}{\to}E_{x^{\prime}}$ and
$V_{x}\overset{\sim}{\to}V_{x^{\prime}}$. These bijections commute with all
the mentioned structure except for the filtration. Variation of this
filtration then gives the $p$-adic period map
$\Phi_{v}\colon U\to\mathcal{H}_{v}(K_{v})$
where $U=\\{x^{\prime}\in X(K_{v}):x^{\prime}\equiv x\pmod{v}\\}$ and
$\mathcal{H}_{v}=\operatorname{Res}_{K_{v}}^{E_{x,v}}\operatorname{LGr}(V_{x},\omega)$.
Here $\operatorname{LGr}(V_{x},\omega)$ is the Lagrangian Grassmannian
classifying free rank-$g$ $E_{x,v}$-submodules of $V_{x}$ on which $\omega$ is
trivial, and $\operatorname{Res}_{K_{v}}^{E_{x,v}}$ denotes Weil-restriction
from $E_{x,v}$ to $K_{v}$. So $\mathcal{H}_{v}$ is an algebraic variety over
$K_{v}$ whose $K_{v}$-points classify free rank-$g$ $E_{x,v}$-submodules of
$V_{x}$ on which $\omega$ is trivial. The Zariski dimension of
$\mathcal{H}_{v}$ is $\frac{1}{2}g(g+1)\cdot\deg(\pi)$. From [LV20, Lemma
3.3], the full monodromy assumption implies that the period map has Zariski-
dense image.
Let $\varphi$ be the Frobenius on $V_{x}$. We may write
$E_{x,v}=L_{1}\times\cdots\times L_{r}$ for unramified field extensions
$L_{i}/K_{v}$. Note that $r$ is the number of $\operatorname{Frob}_{v}$-orbits
on $\pi^{-1}(x)$, so that $\operatorname{length}_{v}(\pi^{-1}(x))=\deg\pi/r$.
Then the assumption on $\operatorname{length}_{v}(E_{x})$ means that
$r\leq\frac{\frac{1}{2}g(g+1)\deg(\pi)-\dim(X)}{4g^{2}}.$
The $E_{x,v}$-module structure on $V_{x}$ induces a decomposition
$V_{x}=\bigoplus_{i=1}^{r}V_{i}$, where $V_{i}=V_{x}\otimes_{E_{x,v}}L_{i}$.
Each $V_{i}$ is an $L_{i}$-vector space of dimension $2g$. We write
$Z(\varphi)=\\{f\colon V_{x}\to V_{x}\textrm{ $E_{x,v}$-linear, and }\varphi\circ
f=f\circ\varphi\\}.$
Then $Z(\varphi)$ is a $K_{v}$-vector space. By a variation of [LV20, Lemma
2.1] we have
$\dim_{K_{v}}(Z(\varphi))\leq\sum_{i=1}^{r}(\dim_{L_{i}}V_{i})^{2}=4g^{2}\cdot
r\leq\frac{1}{2}g(g+1)\deg(\pi)-\dim(X).$
Set $T^{\prime}=T\cap U$. As in [LV20, Section 3] it follows that
$\Phi_{v}(T^{\prime})$ is contained in an algebraic subset of
$\mathcal{H}_{v}$ of the form $\mathcal{Z}:=\bigcup_{i}Z(\varphi)\cdot h_{i}$,
for finitely many $h_{i}\in\mathcal{H}_{v}(K_{v})$. We have
$\dim\mathcal{Z}\leq\frac{1}{2}g(g+1)\deg(\pi)-\dim(X)$ and
$\dim\mathcal{H}_{v}=\frac{1}{2}g(g+1)\deg(\pi)$, so
$\operatorname{codim}_{\mathcal{H}_{v}}\mathcal{Z}\geq\dim(X).$
Applying [LV20, Lemma 9.3], we find that $\Phi_{v}^{-1}(\mathcal{Z})$, and
therefore $T^{\prime}$, is not Zariski dense in $X$. We do this for all of the
finitely many $v$-adic residue disks of $X(K_{v})$, and since the finite union
of non-Zariski dense sets is not Zariski dense, we conclude that $T$ is not
Zariski dense in $X$. ∎
## 4\. An abelian-by-finite cover of $E\setminus\\{0\\}$
In what follows, we fix an elliptic curve $E/K$ which has good reduction away
from $S$. Our goal is to construct an abelian-by-finite cover on
$E\setminus\\{0\\}$ that proves Siegel’s theorem: the set of $S$-integral
points of $E$ is finite.
In order to prove Siegel’s theorem we may increase $K$ and $S$ without loss of
generality (the set of $S$-integral points of $E$ will only increase if we do
so). Hence, we may and will assume in what follows that $S$ contains all
primes above $2$, that the $2$-torsion of $E$ is $K$-rational, and that the
model of $E$ that we fixed over $\mathcal{O}_{K}[S^{-1}]$ is given by the Weierstrass
equation
$y^{2}=x(x-1)(x-\lambda)$
for some $\lambda\in\mathcal{O}_{K}[S^{-1}]$. As in the introduction, we will
also continue to assume that $S$ contains all places of $K$ that are ramified
in the extension $K/\mathbb{Q}$.
To define the abelian-by-finite family over $E\setminus\\{0\\}$, we first
consider the family $A^{\prime}$ of elliptic curves over $E\setminus E[2]$
defined by the equation
$v^{2}=u(u-1)(u-x).$
Here $x$ is still the $x$-coordinate on $E$, viewed as a regular function on
$E\setminus E[2]$. In other words $A^{\prime}$ is the pull-back of the
Legendre family over $\mathbb{P}^{1}\setminus\\{0,1,\infty\\}$ along the
$x$-coordinate map $E\setminus
E[2]\to\mathbb{P}^{1}\setminus\\{0,1,\lambda,\infty\\}$. For every $m\geq 1$
we define the family $A_{m}$ over $E\setminus\\{0\\}$ as the composition of
the map $A^{\prime}\to E\setminus E[2]$, restricted to $E\setminus E[2^{m}]$,
with the multiplication-by-$2^{m}$ map $[2^{m}]\colon E\setminus E[2^{m}]\to
E\setminus\\{0\\}$. Geometrically, the fiber of $A_{m}$ over a geometric point
$e\in E\setminus\\{0\\}$ is the disjoint union of the elliptic curves
$v^{2}=u(u-1)(u-x^{\prime})$ where $x^{\prime}$ runs over the $x$-coordinates
of the $2^{2m}$ geometric points of $E$ mapping to $e$ under the $[2^{m}]$
map. The situation is depicted in the following diagram:
[Diagram: $A_{m}\overset{\pi}{\to}E\setminus E[2^{m}]\overset{[2^{m}]}{\to}E\setminus\\{0\\}$, where $A_{m}$ is the pull-back of the family $\mathrm{Legendre}$ over $\mathbb{P}^{1}\setminus D$ along the $x$-coordinate map on $E\setminus E[2^{m}]$.]
Here $D$ is the divisor of $x$-coordinates of points in $E[2^{m}]$ (including
$\infty$) and $\mathrm{Legendre}$ denotes the Legendre family
$v^{2}=u(u-1)(u-t)$. To prove Siegel’s theorem we will apply Theorem 3.4 to
the abelian-by-finite family $A_{m}\to E\setminus E[2^{m}]\to
E\setminus\\{0\\}$ for $m=3$.
### 4.1. The monodromy
Let $e_{0}\in E(K)\setminus\\{0\\}$ be some arbitrary base point. Let
$W_{m}=H^{1}_{B}((A_{m})_{e_{0}}(\mathbb{C}),\mathbb{Q})$ be the first Betti
cohomology of the fiber of $A_{m}\to E\setminus\\{0\\}$ over $e_{0}$.
Monodromy gives a representation
$\rho_{m}\colon\pi_{1}(E(\mathbb{C})\setminus\\{0\\},e_{0})\longrightarrow\operatorname{GL}(W_{m}).$
Our goal in this subsection is to study this representation. To ease the
notation, we will abuse notation in this subsection and identify varieties
with their $\mathbb{C}$-points, i.e. we will write $E$ instead of
$E(\mathbb{C})$.
Note that we have
$W_{m}=\bigoplus_{[2^{m}]e^{\prime}=e_{0}}H^{1}_{B}((A_{m})_{e^{\prime}},\mathbb{Q}),$
where the direct sum runs over all points $e^{\prime}\in E\setminus E[2^{m}]$
mapping to $e_{0}$ under $[2^{m}]$. Note that the fiber $(A_{m})_{e^{\prime}}$
is just the elliptic curve given by $v^{2}=u(u-1)(u-x^{\prime})$, where
$e^{\prime}=(x^{\prime},y^{\prime})$.
The following proposition says that the family $A_{m}\to E\setminus\\{0\\}$
has full monodromy.
###### Proposition 4.1.
The Zariski closure of the image of $\rho_{m}$ in $\operatorname{GL}(W_{m})$
contains
$\prod_{[2^{m}]e^{\prime}=e_{0}}\operatorname{SL}(H^{1}_{B}((A_{m})_{e^{\prime}},\mathbb{Q})).$
###### Proof.
Let $\Gamma$ be the Zariski closure of the image of $\rho_{m}$ in
$\operatorname{GL}(W_{m})$, and set
$\Delta=\Gamma\cap\prod_{[2^{m}]e^{\prime}=e_{0}}\operatorname{SL}(H^{1}_{B}((A_{m})_{e^{\prime}},\mathbb{Q})).$
We will use [LV20, Lemma 2.12], which says the following: if $\Delta$ projects
surjectively to each factor
$\operatorname{SL}(H^{1}_{B}((A_{m})_{e^{\prime}},\mathbb{Q}))$, and if for
each pair $e^{\prime}_{1}\neq e^{\prime}_{2}$ there is a $g\in\Delta$ which
projects to unipotent elements of
$\operatorname{SL}(H^{1}_{B}((A_{m})_{e^{\prime}_{1}},\mathbb{Q}))$ and
$\operatorname{SL}(H^{1}_{B}((A_{m})_{e^{\prime}_{2}},\mathbb{Q}))$ with fixed
spaces of different dimensions, then in fact
$\Delta=\prod_{[2^{m}]e^{\prime}=e_{0}}\operatorname{SL}(H^{1}_{B}((A_{m})_{e^{\prime}},\mathbb{Q}))$.
First we consider surjectivity of the projections. Let $e^{\prime}_{0}\in
E\setminus E[2^{m}]$ be a point with $[2^{m}]e^{\prime}_{0}=e_{0}$. Then the
étale map $[2^{m}]\colon E\setminus E[2^{m}]\to E\setminus\\{0\\}$ induces on
fundamental groups an injective map $\pi_{1}(E\setminus
E[2^{m}],e_{0}^{\prime})\to\pi_{1}(E\setminus\\{0\\},e_{0})$, and the
restriction of $\rho_{m}$ to $\pi_{1}(E\setminus E[2^{m}],e_{0}^{\prime})$
stabilizes the direct summand $H^{1}_{B}((A_{m})_{e^{\prime}_{0}},\mathbb{Q})$
of $W_{m}$. But we also can consider the $x$-coordinate map $x\colon
E\setminus E[2^{m}]\to\mathbb{P}^{1}\setminus D$, where $D$ is the set of
$x$-coordinates of the $2^{m}$-torsion points of $E$. This map is also étale,
and it induces an injective homomorphism $\pi_{1}(E\setminus
E[2^{m}],e_{0}^{\prime})\to\pi_{1}(\mathbb{P}^{1}\setminus D,x^{\prime})$. The
cover $A_{m}\to E\setminus E[2^{m}]$ is just the pull-back of the Legendre
family over $\mathbb{P}^{1}\setminus D$, and the Zariski closure of the image
of monodromy for the Legendre family is well-known to be
$\operatorname{SL}_{2}$ (see for example [CMSP17, Theorem 1.1.7] for an
explicit description of this monodromy). So the restriction of $\rho_{m}$ to
$\pi_{1}(E\setminus E[2^{m}],e_{0}^{\prime})$ has Zariski closure a finite-index
subgroup of $\operatorname{SL}_{2}$, which is therefore equal to
$\operatorname{SL}_{2}$ since $\operatorname{SL}_{2}$ is connected.
Now we show that there are enough elements acting unipotently on the various
summands with fixed spaces of different dimensions. Let $\gamma$ be a path in
$E\setminus\\{0\\}$ that goes from $e_{0}$ to a point close to the identity
$0\in E$, then circles around this point once, and goes back again to $e_{0}$
in the same way. A lift of this path to $E\setminus E[2^{m}]$ is a loop from a
point $e^{\prime}_{0}$ over $e_{0}$ going around a $2^{m}$-torsion point and
back to $e^{\prime}_{0}$. The family $A_{m}$ extends in a smooth way over all
$2^{m}$-torsion points except the points $(0,0)$, $(1,0)$ and $0$. Over these
points, the monodromy is unipotent: indeed, the local monodromy of the
Legendre family over $\mathbb{P}^{1}\setminus\\{0,1,\infty\\}$ is unipotent
around $0$ and $1$ and conjugate to $\begin{pmatrix}-1&1\\\
0&-1\end{pmatrix}$ around $\infty$, but the $x$-coordinate is locally 2-to-1
around these points. So we see that $\rho_{m}(\gamma)$ fixes each summand of the
decomposition
$W_{m}=\bigoplus_{[2^{m}]e^{\prime}=e_{0}}H^{1}_{B}((A_{m})_{e^{\prime}},\mathbb{Q})$,
and it acts in a non-trivial unipotent way on exactly three of the summands
and trivially on the other summands. The three summands on which
$\rho_{m}(\gamma)$ acts non-trivially depend on the choice of $\gamma$; more
specifically on the choice of the path from $e_{0}$ to the point close to $0$.
Now let $e_{1}^{\prime}$ and $e_{2}^{\prime}$ be two distinct points in the
fiber $[2^{m}]^{-1}(e_{0})$. We claim that we can choose the path $\gamma$ in
such a way that $\rho_{m}(\gamma)$ acts non-trivially on the summand corresponding
to the fiber over $e_{1}^{\prime}$ and trivially on the summand corresponding
to the fiber over $e_{2}^{\prime}$. For this, let
$d=e_{2}^{\prime}-e_{1}^{\prime}\in E$ be the difference. Then $d$ is
$2^{m}$-torsion and non-trivial. The set $\\{(0,0),(1,0),0\\}$ is not closed
under shifting by $d$. Pick a point $P\in\\{(0,0),(1,0),0\\}$ such that
$P+d\notin\\{(0,0),(1,0),0\\}$. Then choose a path $\gamma^{\prime}$ in
$E\setminus E[2^{m}]$ that starts at $e_{1}^{\prime}$, goes to a point close
to $P$, circles around $P$, and then goes back to $e_{1}^{\prime}$ the same
way. Set $\gamma=[2^{m}]\circ\gamma^{\prime}$. This is a loop in
$E\setminus\\{0\\}$ based at $e_{0}$. By construction, $\gamma^{\prime}$ is
the lift of $\gamma$ starting at $e_{1}^{\prime}$, and it loops around the bad
fiber over $P$, while $\gamma^{\prime}+d$ is the lift of $\gamma$ starting at
$e_{2}^{\prime}$. The latter loops around $P+d$, which is not a bad fiber.
Thus $\rho_{m}(\gamma)$ acts non-trivially and unipotently on the summand
corresponding to $e_{1}^{\prime}$ and trivially on the summand corresponding
to $e_{2}^{\prime}$. ∎
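The unipotence step in the proof above can be checked directly on matrices: since the $x$-coordinate is locally 2-to-1 around $\infty$, the induced local monodromy is the square of the matrix conjugate to the Legendre monodromy at $\infty$. A quick numerical sanity check (illustrative only):

```python
import numpy as np

# Local monodromy of the Legendre family around t = infinity is
# conjugate to this matrix (see the proof above).
M = np.array([[-1, 1],
              [0, -1]])

# The x-coordinate map is locally 2-to-1 around infinity, so the
# induced local monodromy is M squared.
M2 = M @ M
print(M2)  # [[ 1 -2], [ 0  1]]

# Unipotent: (M^2 - I)^2 = 0, i.e. both eigenvalues of M^2 equal 1.
N = M2 - np.eye(2, dtype=int)
assert np.all(N @ N == 0)
```

The fixed space of $M^{2}$ is one-dimensional, so such an element acts non-trivially unipotently, as used in the criterion of [LV20, Lemma 2.12].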
### 4.2. Siegel’s theorem
We can now prove Siegel’s theorem. We follow the argument in [LV20, Section
4].
First we note that we may assume, after enlarging $K$ if necessary, that the
$8$-torsion of $E$ is $K$-rational. Now let
$T=\\{e\in(E\setminus\\{0\\})(\mathcal{O}_{K}[S^{-1}]):e\notin 2E(K)\\}$
be the set of $S$-integral points of $E$ that are not divisible by $2$. We
claim that it is enough to prove that $T$ is finite. Indeed, let $k$ be the
largest integer such that $E(K)$ contains a point of order $2^{k}$. Then we
have
$(E\setminus\\{0\\})(\mathcal{O}_{K}[S^{-1}])\,\subseteq\,\bigcup_{j=0}^{k}[2^{j}](T).$
To see this, note that if $e$ is an $S$-integral point of $E$ and $e^{\prime}$
a rational point such that $e=2e^{\prime}$, then also $e^{\prime}$ is
$S$-integral (because $e^{\prime}$ does not reduce to the identity modulo any
place $v\notin S$). If an $S$-integral point $e$ of $E$ is not divisible by
$2^{k}$, then $e$ is in the right-hand side of the claimed inclusion.
Otherwise, write $e=2^{k}\cdot e^{\prime}$. Then by adjusting $e^{\prime}$ by
a $2^{k}$-torsion point if needed, we can ensure that $e^{\prime}\in T$, so
that $e\in[2^{k}](T)$.
In order to prove finiteness of $T$, we want to apply Theorem 3.4 to $T$ and
the abelian-by-finite family $A_{3}\to E\setminus E[8]\to E\setminus\\{0\\}$.
This will imply that $T$ is not Zariski-dense in $E$ and therefore finite. We
have already established that this family has full monodromy, so it remains to
verify that condition 2 of the theorem is fulfilled.
###### Lemma 4.2.
Let $t\in T$. There is a place $v\notin S$ of $K$ such that all
$\operatorname{Frob}_{v}$-orbits in $[8]^{-1}(t)(\overline{K})$ have length
$8$. In particular $\operatorname{length}_{v}([8]^{-1}(t))=8$.
###### Proof.
Write $[8]^{-1}(t)=\operatorname{Spec}E_{t}$. Then we have
$E_{t}=\prod_{i}L_{i}$, where the $L_{i}$ are finite field extensions of $K$
that are unramified away from $S$. In fact, the $L_{i}$ are the fields
obtained by adjoining the coordinates of points in $[8]^{-1}(t)$ to $K$. Since
the $8$-torsion of $E$ is $K$-rational, these fields are in fact the same, so
we have $E_{t}=L^{n}$ for some $n$.
To study the Galois action on the factors $L$ in this decomposition, let
$\sigma\in\operatorname{Gal}(\overline{K}/K)$. Then for each $t^{\prime}\in
E(\overline{K})$ with $8t^{\prime}=t$, we also have $8\sigma(t^{\prime})=t$.
Therefore we have $\sigma(t^{\prime})-t^{\prime}\in E[8](\overline{K})$. Using
that the $8$-torsion is $K$-rational, it is not hard to see that this element
$\sigma(t^{\prime})-t^{\prime}$ does not depend on the choice of $t^{\prime}$.
Thus we get a map $\operatorname{Gal}(\overline{K}/K)\to
E[8](K)\cong(\mathbb{Z}/8\mathbb{Z})^{2}$ that describes the Galois action on
the fiber over $t$. (This is of course just a well-known explicit description
of the Kummer map $E(K)\to
H^{1}(\operatorname{Gal}(\overline{K}/K),E[8])=\operatorname{Hom}(\operatorname{Gal}(\overline{K}/K),E[8](K))$,
where the last equality follows from the 8-torsion being $K$-rational.) The
image of $\operatorname{Gal}(\overline{K}/K)\to E[8](K)$ contains an element
of order 8: if it didn’t, then $\operatorname{Gal}(\overline{K}/K)$ would
stabilize the point $4t^{\prime}$, contradicting the assumption that $t$ is
not divisible by $2$ in $E(K)$. Thus, there is some
$\sigma\in\operatorname{Gal}(\overline{K}/K)$ which acts with order 8 on some
(hence each) factor $L$. By Chebotarev’s density theorem, there is a positive
density of places $v\notin S$ such that $\operatorname{Frob}_{v}$ acts in the
same way as $\sigma$ on $L$, and then $\operatorname{Frob}_{v}$ has orbits of
length $8$ on $[8]^{-1}(t)(\overline{K})$. ∎
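For the family used here ($g=1$, $\deg\pi=2^{2m}=64$ with $m=3$, $\dim X=1$), Lemma 4.2 gives exactly what condition (2) of Theorem 3.4 requires, since the threshold there equals $256/63<8$. The following sketch checks the numbers and models the Galois action on the fiber as translation by an order-$8$ element of $(\mathbb{Z}/8\mathbb{Z})^{2}$, as in the proof (illustrative; the particular translation element is a hypothetical choice):

```python
from fractions import Fraction

# Theorem 3.4 data for the family A_3 -> E \ E[8] -> E \ {0}.
g, deg_pi, dim_X = 1, 4**3, 1
assert Fraction(1, 2) * g * (g + 1) * deg_pi > dim_X          # 64 > 1
threshold = Fraction(4 * g**2 * deg_pi) / (
    Fraction(1, 2) * g * (g + 1) * deg_pi - dim_X)
# threshold = 256/63, a little over 4, so orbit length 8 suffices.
assert 8 >= threshold

# Model of the Galois action on [8]^{-1}(t): Frobenius translates the
# fiber (a coset of E[8]) by d = sigma(t') - t', an element of (Z/8Z)^2.
def orbit_lengths(d):
    pts = {(a, b) for a in range(8) for b in range(8)}
    lengths = []
    while pts:
        p = next(iter(pts))
        orbit = []
        while p not in orbit:
            orbit.append(p)
            p = ((p[0] + d[0]) % 8, (p[1] + d[1]) % 8)
        pts -= set(orbit)
        lengths.append(len(orbit))
    return lengths

# Translation by an element of order 8: every orbit has length 8.
assert all(l == 8 for l in orbit_lengths((1, 0)))
```

By contrast, a translation element of order $4$ (e.g. $(2,0)$) only yields orbits of length $4$, which is why the lemma needs an element of order $8$ in the image of the Kummer map.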
###### Corollary 4.3 (Siegel’s theorem).
The set $(E\setminus\\{0\\})(\mathcal{O}_{K}[S^{-1}])$ of $S$-integral points
of $E$ is finite.
###### Remark 4.4.
The proof of Siegel’s theorem presented in this section relies on Theorem 3.4,
which in turn relies on Faltings’ deep result that the Galois
representations attached to abelian varieties over number fields are
semisimple. However, for the purpose of proving Siegel’s theorem the
dependence on Faltings’ result can be removed, in the same way as for the
Lawrence–Venkatesh proof of the $S$-unit theorem, by applying [LV20, Lemma
4.4].
## References
* [BBB+19] Jennifer S. Balakrishnan, Alex J. Best, Francesca Bianchi, Brian Lawrence, J. Steffen Müller, Nicholas Triantafillou, and Jan Vonk. Two recent p-adic approaches towards the (effective) Mordell conjecture. arXiv e-prints, page arXiv:1910.12755, October 2019. arXiv:1910.12755.
* [BT19] Benjamin Bakker and Jacob Tsimerman. The Ax-Schanuel conjecture for variations of Hodge structures. Invent. Math., 217(1):77–94, 2019. doi:10.1007/s00222-019-00863-8.
* [CMSP17] James Carlson, Stefan Müller-Stach, and Chris Peters. Period Mappings and Period Domains. Cambridge University Press, Second edition, 2017.
* [Cor16] Pietro Corvaja. Integral points on algebraic varieties, volume 3 of Institute of Mathematical Sciences Lecture Notes. Hindustan Book Agency, New Delhi, 2016. An introduction to Diophantine geometry. doi:10.1007/978-981-10-2648-5.
* [Fal83] G. Faltings. Endlichkeitssätze für abelsche Varietäten über Zahlkörpern. Invent. Math., 73(3):349–366, 1983. doi:10.1007/BF01388432.
* [JL19] Ariyan Javanpeykar and Daniel Litt. Integral points on algebraic subvarieties of period domains: from number fields to finitely generated fields. arXiv e-prints, page arXiv:1907.13536, July 2019. arXiv:1907.13536.
* [Liu20] Haohao Liu. Lawrence–Venkatesh’s $p$-adic approach to Mordell’s conjecture. Master thesis, 2020. URL: http://home.ustc.edu.cn/~kyung/Lawrence-Venkatesh.pdf.
* [LS20] Brian Lawrence and Will Sawin. The Shafarevich conjecture for hypersurfaces in abelian varieties. arXiv e-prints, page arXiv:2004.09046, April 2020. arXiv:2004.09046.
* [LV20] Brian Lawrence and Akshay Venkatesh. Diophantine problems and $p$-adic period mappings. Invent. Math., 221(3):893–999, 2020. doi:10.1007/s00222-020-00966-7.
* [NX19] Brett Nasserden and Stanley Yao Xiao. Uniformity of fibres of period mappings and the $S$-unit equation. arXiv e-prints, page arXiv:1910.14122, October 2019. arXiv:1910.14122.
# Reference-less complex wavefields characterization with a high-resolution
wavefront sensor
Tengfei Wu Université de Paris, SPPIN – Saints-Pères Paris Institute for
Neurosciences, CNRS, 75006 Paris, France Sorbonne Université, CNRS, INSERM,
Institut de la Vision, 17 Rue Moreau, 75012 Paris, France Université de
Paris, 75006 Paris, France Pascal Berto Sorbonne Université, CNRS, INSERM,
Institut de la Vision, 17 Rue Moreau, 75012 Paris, France Université de
Paris, 75006 Paris, France Marc Guillon Université de Paris, SPPIN – Saints-
Pères Paris Institute for Neurosciences, CNRS, 75006 Paris, France Université
de Paris, 75006 Paris, France Institut Universitaire de France (IUF), Paris,
France
Wavefront sensing is a widely-used non-interferometric, single-shot, and
quantitative technique providing the spatial-phase of a beam. The phase is
obtained by integrating the measured wavefront gradient. Complex and random
wavefields intrinsically contain a high density of singular phase structures
(optical vortices) associated with non-conservative gradients making this
integration step especially delicate. Here, using a high-resolution wavefront
sensor, we demonstrate experimentally a systematic approach for achieving the
complete and quantitative reconstruction of complex wavefronts. Based on
Stokes’ theorem, we propose an image segmentation algorithm to provide an
accurate determination of the charge and location of optical vortices. This
technique is expected to benefit several fields requiring complex media
characterization.
Quantitative phase measurement of complex wavefields is at the basis of many
applications including bio-imaging [1, 2, 3], fibroscopy [4, 5], diffractive
tomography [6], astronomy [7], and more generally for imaging through
scattering media by phase conjugation [8, 9]. When a coherent reference beam
is available, complex wavefields may typically be measured using digital
holography [10, 11]. Alternatively, several techniques have been developed for
cases when no coherent reference beam is available, such as common-path
interferometers [12, 13, 14, 15, 16], point diffraction interferometry [17],
Shack-Hartmann wavefront sensors (WFS) [18], curvature sensing [19], light-
field imaging [20], phase diversity approaches [21], or phase retrieval
algorithms [22, 23, 24]. All these techniques have their own advantages and
try to optimize the compromise between ease of use and measurement
reliability. Among the latter techniques, WFS generally consist of a so-called
“Hartmann mask” simply placed before a camera and have the specific advantages
to be a robust single-shot technique offering a great ease of use. They have
thus been the most used systems to measure distorted wavefronts arising from
random refractive index inhomogeneities due to the atmosphere in astronomy, or
due to tissues in bio-imaging. The main limitation associated with Shack-
Hartmann WFS has long been the limited resolution associated with the low
maximum density of micro-lens arrays. The advent of high-resolution WFS,
either based on a modified Hartmann mask [25] or on a thin diffuser [26, 27,
28, 29], has represented an important step towards making WFS a promising
alternative for measuring complex spatial phase profiles [30]. However,
despite this potential, high-resolution WFS
have so far mostly been used for measuring smooth WF distortions, such as
optical aberrations [1, 2] (typically projected onto Zernike polynomials [31])
or optical-path-length profiles of thin and transparent biological samples
[25, 27]. In contrast, complex and scattered wavefields comprise a high
spatial density of phase singularities, namely optical vortices [32], whose
separation distance close to nucleation/annihilation centers can be
arbitrarily small [33, 34, 35]. These singular spiral phase structures are
associated with non-conservative gradient fields that are especially
challenging for WFS which, unlike interferometric methods, do not provide a
direct measurement of the phase, but only measure phase gradients ${\bf g}$
(_i.e._ the transverse component of the local wavevector). The problem of
phase-spirals integration has appeared since the early ages of adaptive optics
in astronomy [36]. In this context, it has been shown that neglecting “branch-
cuts” significantly degrades adaptive-optics performances [7]. The
identification of spiral phase structures in phase gradient maps ${\bf g}$
relies on their Helmholtz decomposition (HD) [37, 38]. According to
Helmholtz’s theorem [39], the vector field ${\bf g}$ can be split into an
irrotational (curl-free) component and a solenoidal (divergence-free or
rotational) component [40, 37, 7, 41]. Most of current integration techniques
for WFS basically consist in computing $\nabla\cdot{\bf g}$, so implicitly
canceling the solenoidal component of the vector field [42]. The vector
potential associated with the solenoidal component (so-called “branch-point”
potential [40, 37]) exhibits peaks at vortex locations, so allowing their
localization and reconstruction [7]. However, although WFS have demonstrated
their ability to detect and characterize optical vortices [36, 43, 44],
complete wavefront reconstruction by vortex-gradients integration has given
rise to many works considering only simple experimental cases involving a
single vortex [45, 46, 47], or more vortices but in numerical simulations [36,
40, 7, 48, 45]. We believe that the high density of singular phase structures
in complex wavefields has probably been the main obstacle to measuring
complex wavefronts experimentally with a WFS. In WF shaping experiments,
neglecting a single optical vortex is equivalent to adding a complementary
spiral phase mask, which has been described as yielding a two-dimensional
Hilbert transform of the field [49]. For complex speckled wavefields, such a
single spiral transform induces a major change in patterns, since it results in
an inversion of intensity contrasts [50, 51]. Here, we propose a robust and
systematic numerical data-processing technique and demonstrate the
experimental quantitative reconstruction of complex wavefronts containing up
to $133$ optical vortices with a high-resolution WFS based on a $1.3~{}{\rm
MP}$ camera [27]. The camera sensor is thus similar to those typically used by
regular WFS based on micro-lens arrays. Wavefront reconstruction was achieved
using a segmentation algorithm that optimizes the identification of nearby
optical vortices. According to Stokes’ theorem [40], segmentation
optimizes the computation areas to measure the charge of vortices, based on
the integration of the “branch-point” potential.
Fig. 1: Experimental vortex WF sensing. A phase profile containing optical
vortices (a) is addressed onto a spatial light modulator (SLM) (b) illuminated
by a colimated laser beam and imaged onto a high-resolution wavefront sensor
(WFS) with a telescope ($L_{1}$, $L_{2}$). Considering only the irrotational
component of the gradient field detected by the WFS leads to an erroneous WF
lacking optical vortices (c), unlike full Helmholtz decomposition (HD) of the
gradient field (d).
To summarize, the problem of rebuilding singular WFs is mostly threefold. First,
the phase-gradient map ${\bf g}$ measured by a WFS is a non-conservative
vector field defined over a multiply connected domain. Direct spatial
integration is thus not possible since the integral value depends on the line-
path. Second, optical vortices are associated with an infinite phase-gradient
${\bf g}$ at the singularity locations, which appears as critically
incompatible with WFS. Third, WFS may imprecisely measure strongly fluctuating
gradients around anisotropic vortices, leading to inaccurate vortex
characterization, in particular their spiral spectrum [52]. Such cases
especially appear in random wavefields, wherein pairs of vortices of opposite
charges are frequently close to one another [33]. This configuration results
in large phase gradients in between vortices and makes the computation of the
charges (the circulation of ${\bf g}$) especially delicate. The image
segmentation we propose more specifically addresses this latter challenge
[53].
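To make the charge computation concrete: the circulation of the phase gradient around a closed loop, divided by $2\pi$, yields the enclosed topological charge. A minimal numpy sketch on a synthetic single-vortex phase (hypothetical toy data, not the experimental fields):

```python
import numpy as np

# Synthetic singular phase: a unit-charge vortex at the array center
# (the +0.5 offset keeps the singularity off the pixel grid).
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] + 0.5
phi = np.arctan2(y, x)

def charge(phase, loop):
    """Topological charge enclosed by a closed pixel loop:
    (1/2pi) * sum of wrapped phase differences along the loop."""
    vals = np.array([phase[i, j] for i, j in loop])
    dphi = np.angle(np.exp(1j * np.diff(np.append(vals, vals[0]))))
    return int(round(dphi.sum() / (2 * np.pi)))

# Square loop of half-side r around the center, traversed once
# counterclockwise (in the (x, y) sense).
r, c = 10, n // 2
top = [(c - r, j) for j in range(c - r, c + r)]
right = [(i, c + r) for i in range(c - r, c + r)]
bottom = [(c + r, j) for j in range(c + r, c - r, -1)]
left = [(i, c - r) for i in range(c + r, c - r, -1)]
loop = top + right + bottom + left

print(charge(phi, loop))   # 1
print(charge(-phi, loop))  # -1
```

Wrapping each difference before summing is what makes the circulation well defined on the multiply connected domain; the segmentation step in the text serves to choose loops that enclose a single vortex even when opposite charges sit close together.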
By way of illustration, a phase pattern exhibiting optical vortices has been
designed (Fig. 1a) and addressed to a phase-only spatial light modulator (SLM)
(Hamamatsu, LCOS-X10468-01), illuminated by a spatially filtered, polarized,
and collimated laser beam (Fig. 1b). The SLM allows displaying patterns
exhibiting both smooth local WF distortions (such as lenses for the eyes and
the face contour for instance) as well as optical vortices of any topological
charge (left- and right-handed optical vortices at the tips of the curling
mustache). The phase-gradient map is detected with a custom-built high-
resolution WFS comprising $\simeq 2\times 10^{4}$ phase pixels and described
in Ref. [27]. The WFS is conjugated to the SLM with a Galilean telescope in a
$4-f$ configuration. The full Helmholtz decomposition (HD) of the gradient
vector-field detected by the WFS can be achieved according to:
${\bf g}=\nabla\varphi_{ir}+\nabla\times{\bf A}$ (1)
thus splitting apart the irrotational contribution of the regular phase
$\varphi_{ir}$ and the solenoidal contribution of a vector potential ${\bf
A}$. The sought-for complete phase profile $\varphi$, whose gradient-field is
${\bf g}$, can then be written as $\varphi=\varphi_{ir}+\varphi_{s}$. Typical
direct numerical integration [42] yields the regular WF shown in Fig. 1c,
missing phase singularities because the non-conservative (solenoidal)
contribution to the WF-gradient has been ignored. The singular phase
contribution $\varphi_{s}$ (or “hidden phase” [40]), is defined over a
multiply-connected domain, and satisfies $\nabla\varphi_{s}=\nabla\times{\bf
A}$. Solving this latter equation then allows proper reconstruction of the WF
(Fig. 1d).
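As a concrete illustration of such a split, here is a minimal numpy sketch of a spectral Helmholtz decomposition (a toy example assuming periodic boundaries and a conservative test field; the experimental pipeline described in the text additionally handles vortices and boundary terms):

```python
import numpy as np

def helmholtz_split(gx, gy, dx=1.0):
    """Split a 2D gradient field g into (phi_ir, A_z) by solving
    (d_x + i d_y)(phi_ir - i A_z) = g_x + i g_y in Fourier space.
    Assumes a square grid with periodic boundaries; toy sketch only."""
    n = gx.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="xy")
    denom = 1j * (kx + 1j * ky)
    denom[0, 0] = 1.0            # avoid 0/0; DC term is set to 0 below
    spec = np.fft.fft2(gx + 1j * gy) / denom
    spec[0, 0] = 0.0             # phases are defined up to a constant
    w = np.fft.ifft2(spec)       # w = phi_ir - i A_z
    return np.real(w), -np.imag(w)

# Test on a purely irrotational field g = grad(phi), phi = cos(x) sin(y):
# the solenoidal part A_z should vanish and phi_ir should recover phi.
n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="xy")
phi = np.cos(X) * np.sin(Y)
gx, gy = -np.sin(X) * np.sin(Y), np.cos(X) * np.cos(Y)
phi_ir, A_z = helmholtz_split(gx, gy, dx=2 * np.pi / n)
assert np.allclose(phi_ir - phi_ir.mean(), phi - phi.mean(), atol=1e-10)
assert np.allclose(A_z, 0, atol=1e-10)
```

For a field containing vortices, the same computation returns a non-zero $A_{z}$ whose Laplacian peaks at the vortex locations, which is the quantity segmented in Fig. 2.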
Fig. 2: Principle of full WF reconstruction. The divergence and the curl of
the phase gradient map ${\bf g}$ (a) are computed to extract the irrotational
phase $\varphi_{ir}$ (b) and the solenoidal phase $\varphi_{s}$ (c). Double
space integration of $\nabla\cdot{\bf g}$ yields the irrotational phase (a
parabola in (b)). The curl of ${\bf g}$ yields $-\Delta{\bf A}$ where ${\bf
A}$ is the potential vector of the Helmholtz decomposition. Image segmentation
and weighted centroid computation of the peaks in $-\Delta{\bf A}$ then allows
reconstructing a Dirac-like distribution whose convolution by a spiral phase
profile yields $\varphi_{s}$. The complete phase $\varphi$ is finally rebuilt
by summing the two components $\varphi_{ir}$ and $\varphi_{s}$ (d).
We now detail the steps allowing us to achieve rigorous HD (See also Supp.
Mat.). First, taking the curl of Eq. (1) shows that the vector potential
${\bf A}$ is a solution of the equation $\nabla\times{\bf
g}=\nabla(\nabla\cdot{\bf A})-\Delta{\bf A}$, where $\Delta=\nabla^{2}$
denotes the Laplacian operator. Determining the vector potential thus requires
fixing a gauge. The Coulomb gauge $\nabla\cdot{\bf A}=0$ is chosen here for
obvious convenience [40, 37, 7]. Since ${\bf g}$ is a two-dimensional vector
field (in say $x$, $y$ plane), we may then write ${\bf A}=A_{z}{\bf e_{z}}$
without loss of generality. Second, introducing the circular vector
$\boldsymbol{\sigma_{-}}={\bf e_{x}}-i{\bf e_{y}}$, simple manipulation of Eq.
(1) yields: ${\bf
g}\cdot\boldsymbol{\sigma_{-}}=g_{x}+ig_{y}=(\partial_{x}+i\partial_{y})(\varphi_{ir}-iA_{z})$.
The HD can then be efficiently achieved numerically thanks to a single
computation step:
$\varphi_{ir}-iA_{z}={\cal F}^{-1}\left\\{\frac{{\cal F}\left[{\bf
g}\cdot\boldsymbol{\sigma_{-}}\right]}{i{\bf
k}\cdot\boldsymbol{\sigma_{-}}}\right\\}$ (2)
where ${\bf k}$ stands for the two-dimensional coordinate vector in the
reciprocal Fourier space. The regular phase component $\varphi_{ir}$ is thus
recovered the same way as previously proposed [42, 54] (Fig. 2b). Third, for
complete HD over a bounded domain, the contribution of a so-called additional
“harmonic” (or translation) term ${\bf h}$ must be considered when the flow
through the boundary is not zero [39]. This translation term, accounting for a
global tip/tilt of the WF, is both curl-free and divergence-free. If
differentiation and integration are computed through discrete Fourier
transforms without care, the implicit periodic boundary conditions cancel out
this term. Here we thus
retrieve the term ${\bf h}$ by symmetrizing the gradient field ${\bf g}$ prior
to gradient integration (Eq. (2)) as performed in Ref. [54], which
conveniently includes ${\bf h}$ in the curl-free component
($\nabla\varphi_{ir}$) of the HD. The divergence-free component requires
further processing steps to obtain the singular phase pattern $\varphi_{s}$
from the potential-vector component $A_{z}$, as detailed hereafter.
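The single-step HD of Eq. (2) amounts to one spectral division. A minimal numerical sketch of this step (assuming NumPy, periodic boundary conditions, and the text's convention ${\bf g}\cdot\boldsymbol{\sigma_{-}}=g_{x}+ig_{y}$; the function name and the handling of the $k=0$ term are our choices):

```python
import numpy as np

def helmholtz_decompose(gx, gy, dx=1.0):
    # Single-step Fourier Helmholtz decomposition of a 2D gradient field,
    # following Eq. (2): F[phi_ir - i A_z] = F[g . sigma_-] / (i k . sigma_-).
    ny, nx = gx.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    denom = 1j * (KX + 1j * KY)      # i k . sigma_-  (text's convention)
    denom[0, 0] = 1.0                # avoid division by zero at k = 0
    spec = np.fft.fft2(gx + 1j * gy) / denom
    spec[0, 0] = 0.0                 # the mean phase (piston) is unrecoverable
    field = np.fft.ifft2(spec)
    phi_ir = np.real(field)          # irrotational (regular) phase
    A_z = -np.imag(field)            # solenoidal vector-potential component
    return phi_ir, A_z
```

For a purely irrotational input gradient, $A_{z}$ vanishes and $\varphi_{ir}$ is recovered up to an additive constant; for measured data the symmetrization of Ref. [54] would be applied to ${\bf g}$ first, as described above.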
Fig. 3: Reconstruction of phase spirals of various topological charges $n$
from $-10$ to $+10$. The vector potential ${\bf A}$ of a spiral phase of
topological charge $1$ is, in theory, the Green function of the Laplace equation.
In practice, $\Delta A_{z}$ exhibits peaks at the vortex locations (a) whose
sizes depend on the topological charge of the corresponding vortex (in (a),
$\Delta A_{z}$ has been slightly filtered for the sake of readability). After
segmentation, the integral computation of this peak yields the expected value
$2\pi n$ (with a $3\%$ precision) (b). Phase profiles are then rebuilt (c).
A vortex is essentially characterized by the circulation of the vector flow
around the singularity, namely its topological charge (or winding number)
defined from the circulation of ${\bf g}$ around the vortex. Applying Stokes’
theorem to the definition of $n$ yields:
$n=\frac{1}{2\pi}\displaystyle{\oint_{\cal C}}{\bf g}\cdot{\rm
d}\boldsymbol{\ell}=-\frac{1}{2\pi}\displaystyle{\int_{\cal S}}\Delta
A_{z}{\rm d}S$ (3)
Reducing the contour length ${\cal C}$ (and so the enclosed surface ${\cal
S}$) to zero, it appears that $-A_{z}/(2\pi n)$ is the Green function of the
two-dimensional Laplace equation. In theory, $-\Delta A_{z}/(2\pi n)$ is thus
a Dirac distribution, making it easy to identify optical vortices [40, 37, 44]
(see Fig. 2c). In principle, the corresponding sought-for singular phase
component $\varphi_{s}$ could then be simply obtained by convolving $-\Delta
A_{z}/(2\pi n)$ with a single $+1$ optical vortex. However, in practice,
rebuilding $\varphi_{s}$ this way is not possible. The main difficulty is that
the experimental $-\Delta A_{z}/(2\pi n)$ map is not a perfect Dirac
distribution (or a single-pixeled non-zero data map) for three main reasons:
first, experimental data are affected by noise, second, ${\bf g}$ is filtered
by the optical transfer function of the WFS and third, optical vortices are
associated with a vanishing intensity that compromises accurate gradient
measurement in their vicinity. As detailed in Ref. [27], the optical transfer
function of a WFS is especially limited by the non-overlapping condition,
which imposes a maximum magnitude for the eigenvalues of the Jacobi matrix of
${\bf g}$ (_i.e._ the Hessian matrix of $\varphi$). As a first consequence,
the large curvatures of the phase component $\varphi_{ir}$ (_i.e._ its second
derivatives) may be underestimated. As a second consequence, the diverging
magnitude of ${\bf g}$ (as $1/r$) prevents its proper estimation at close
distances $r$ from the optical vortex location, as well as the estimation of
the Hessian coefficients $\partial_{x}g_{y}$ and $\partial_{y}g_{x}$.
Therefore, the measurement of the vector potential $A_{z}$ is wrong in the
vicinity of the vortex center and the obtained peak is not single-pixeled
(Fig. 2c).
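The contour integral defining $n$ in Eq. (3) can be evaluated directly on a wrapped phase map by summing phase increments along a discrete circle. A sketch (assuming NumPy; the function name and sampling choices are ours):

```python
import numpy as np

def topological_charge(phase, cy, cx, radius=8, n_pts=200):
    # Winding number n = (1/2pi) * circulation of the phase gradient (Eq. (3)),
    # evaluated on a discrete circular contour around (cy, cx).
    t = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    ys = np.round(cy + radius * np.sin(t)).astype(int) % phase.shape[0]
    xs = np.round(cx + radius * np.cos(t)).astype(int) % phase.shape[1]
    vals = phase[ys, xs]
    dphi = np.diff(np.append(vals, vals[0]))
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap increments into (-pi, pi]
    return int(np.round(dphi.sum() / (2 * np.pi)))
```

The contour radius plays the role of the "large enough distance" discussed below: it must keep the sampled gradient within the WFS dynamic range while each per-sample phase increment stays below $\pi$.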
In experiments, the precise identification of the location and the charge of
optical vortices from the computed $\Delta A_{z}$-map thus demands an image
processing step. Notably, the circulation of ${\bf g}$ in Eq. (3) can yield an
accurate measure of the vortex charge provided that the contour ${\cal C}$
surrounds the vortex at a large enough distance, so that ${\bf g}$ is small
enough to be accurately measured by the WFS. Consequently, although the
estimate of $\Delta A_{z}$ is wrong in the vicinity of the vortex, the peak
integral achieved over a large enough surface ${\cal S}$ (enclosed by ${\cal
C}$) provides the proper charge, by Stokes’ theorem (Eq. (3)). Vortices
may be characterized by a simple peak detection algorithm, after
regularization of the $\Delta A_{z}$-map with a low-pass filter [38]. However,
this solution is not robust since it requires manually setting an integration
radius. Such an adjustment of the integration radius is all the more
impractical when considering two typical cases: either high densities of
optical vortices potentially located at arbitrarily small distances (see Fig.
4d), or high-order vortices, associated with charge-dependent peak-sizes in
the $\Delta A_{z}$-maps (see Fig. 3a). In contrast, the solution we propose
consists in performing an image segmentation that automatically optimizes the
integration surface by adjusting it to the size and density of peaks. The main
processing steps are summarized in Fig. 2c (See also Supp. Mat.). The
segmentation of the $\Delta A_{z}$-map is achieved in two substeps. First,
$|\Delta A_{z}|$ is filtered in order to avoid oversegmentation, with a
Gaussian filter $G$ whose waist is chosen equal to the WFS resolution [27]
($\simeq 8$ camera pixels). Second, a watershed operation (MATLAB®, _Image
Processing Toolbox_) is applied to $-G\ast|\Delta A_{z}|$. Notably, we
observed that segmentation of $-G\ast|\Delta A_{z}|$ rather than
$-|G\ast\Delta A_{z}|$ more efficiently cancels out pairs of vortices too
close to be resolved by the WFS. The resulting segments are thus structured
according to the extrema of $\Delta A_{z}$, the latter corresponding to
positive and negative vortex locations. Integration and weighted-centroid
computation
of the unfiltered $\Delta A_{z}$ image over each segment yields the charge and
the precise location of each vortex. To obtain $\varphi_{s}$, a Dirac-like
vortex map is rebuilt based on the result of the former step, and convolved by
a $+1$ spiral phase mask (see Fig. 2c). Finally, the complete phase
reconstruction $\varphi=\varphi_{ir}+\varphi_{s}$ is computed and wrapped
between $0$ and $2\pi$ (Fig. 2d). Even if the proposed image processing
algorithm includes a filtering operation, it is of a very different nature
from previously suggested regularization approaches [38], because the filter
size is set according to the WFS resolution (to avoid oversegmentation) and
not to the vortices to be characterized. Oversegmentation was observed to
degrade charge-measurement reliability and lengthen the processing time,
measured to be $0.54~{\rm s}$ on an Intel® Core i5-9400H CPU for a $1.3~{}{\rm MP}$
map at maximal vortex density.
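A stripped-down version of this processing chain can be sketched with SciPy, using Gaussian smoothing followed by connected-component labeling in place of the watershed; the filter width and threshold below are illustrative assumptions, not the values of the pipeline described above:

```python
import numpy as np
from scipy import ndimage

def locate_vortices(lap_Az, sigma=3.0, thresh_frac=0.2):
    # Simplified vortex identification: smooth |Delta A_z|, segment it into
    # connected components, then integrate Eq. (3) and compute the weighted
    # centroid over each segment (charge n = -sum(Delta A_z)/(2 pi), dS = 1 px).
    smooth = ndimage.gaussian_filter(np.abs(lap_Az), sigma)
    mask = smooth > thresh_frac * smooth.max()
    labels, n_seg = ndimage.label(mask)
    vortices = []
    for k in range(1, n_seg + 1):
        seg = labels == k
        charge = -lap_Az[seg].sum() / (2 * np.pi)
        cy, cx = ndimage.center_of_mass(np.abs(lap_Az) * seg)
        vortices.append((cy, cx, int(np.round(charge))))
    return vortices
```

Summing charge-weighted spiral phases centered on the returned locations then rebuilds $\varphi_{s}$, equivalently to convolving the Dirac-like map with a $+1$ spiral phase mask.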
Fig. 4: WF reconstruction of a complex wavefield. The phase pattern addressed
to the SLM (a) is retrieved thanks to the WFS (b) after removal of the first
few low-order aberrations (tip/tilt, defocus and astigmatism). The phase
difference in (c) clearly demonstrates the efficient wavefront
reconstructions, especially the non-conservative part. Remaining smooth
distortions in $\varphi_{ir}$ at the boundaries are partly attributed to
inaccurate phase modulation by the SLM. The (filtered) $\Delta A_{z}$ map
(d) allows identifying the $133$ vortices. The inset shows the optimized
segmentation of nearby vortices.
To demonstrate the flexibility of this approach to characterize and rebuild
optical vortices of any charge, we addressed phase spirals with charges
ranging from $-10$ to $+10$ (Fig. 3). Because of the diverging phase gradient
at the vortex center, the $\Delta A_{z}$ maps exhibit peaks whose widths
increase with the charge $n$ of the vortex (Fig. 3a). Nevertheless,
integration of $-\Delta A_{z}/(2\pi)$ over segmented regions yields $n$ within
a $3\%$ accuracy range (Fig. 3b). Except for specially designed optical vortex
beams carrying fractional charges [55], for beams propagating in free space,
the topological charge is typically an integer. Rounding the integral to the
closest integer value then allows an accurate reconstruction of the optical
vortices (Fig. 3c). Differences between the rebuilt phase profiles and the
perfect ones addressed to the SLM are due to the contribution of
$\varphi_{ir}$ arising from uniformity imperfections of the SLM.
Finally, we demonstrate the possibility to retrieve the phase of complex
random wavefields with our algorithm. Random wavefields contain a high density
of optical vortices of charge $+1$ and $-1$ [56, 32, 57]. These vortices
exhibit elliptical phase and intensity profiles along the azimuthal
coordinate. The non-uniform increase of the phase around the singular point
may then alter the ability to detect them if the phase-gradient magnitude is
locally too large. Furthermore, the separation distance between vortices may
be much smaller than the speckle grain size, especially when close to creation
or annihilation events of pairs of vortices [33, 58, 35]. Such a complex
wavefield was numerically generated by taking the Fourier transform of a
random phase map of finite aperture, and addressed to the SLM (Fig. 4a).
Despite the aforementioned specific difficulties, the WF could be efficiently
rebuilt (Fig. 4b). The accurate reconstruction can be visually appreciated by
considering $0$-$2\pi$ equiphase lines, easy to identify with a gray-level
colormap as abrupt white-to-black drops. The difference between the rebuilt
and the input phase profiles is shown in Fig. 4c demonstrating the almost
perfect reconstruction of the $133$ optical vortices. The six lowest order
Zernike aberration modes were removed (piston, tip/tilt, defocus,
astigmatisms) for better visualization. Again, differences mostly appear on
the $\varphi_{ir}$ contribution on the edges of the SLM, where the SLM
reliability degrades and where aberrations introduced by the relay telescope
are maximum. The dense experimental map of vortex locations and charges is
shown in Fig. 4d.
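Such a test wavefield can be generated in a few lines, as stated above: the Fourier transform of a random phase map restricted to a finite circular aperture (a sketch; array size, aperture radius, and seed are arbitrary choices):

```python
import numpy as np

def speckle_phase(n=256, aperture_radius=20, seed=0):
    # Random speckle wavefield: FT of a random phase map of finite aperture.
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[:n, :n] - n // 2
    pupil = (x * x + y * y <= aperture_radius**2).astype(complex)
    pupil = pupil * np.exp(2j * np.pi * rng.random((n, n)))
    field = np.fft.fft2(np.fft.ifftshift(pupil))
    return np.angle(field)  # wrapped phase, containing +1/-1 vortices
```

The resulting wrapped phase map is what is addressed to the SLM in Fig. 4a.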
Relying on a high-resolution WFS, we could thus propose a systematic and
robust approach to rebuild complex wavefields containing a high density of
optical vortices. The proposed method first consists in performing a HD of the
local wavevector field ${\bf g}$ measured by the WFS. Importantly, the
circulation of ${\bf g}/(2\pi)$ around vortices, computed via the surface
integral of $\nabla\times{\bf g}$ over large enough areas (by Stokes’
theorem), yields the topological charge of vortices. The systematic
reconstruction of the optical vortex map relies on an image segmentation that
optimizes surface integration and thus, vortex identification. The robustness
of phase-spiral-reconstructions further relies on the quantization prior about
the detected topological charges. Our method is applicable to any high-
resolution WFS or to any system requiring integration of the local wavevector
map. Full reconstruction of WFs with a WFS represents an important step toward
making WFSs efficient reference-less spatial-phase detectors and toward the
characterization of random wavefields, in particular with incoherent light
sources.
These developments are of interest for applications such as adaptive optics,
diffractive tomography, as well as beam shaping behind scattering and complex
media.
## Funding Information
This work was partially funded by the French Agence Nationale pour la
Recherche (SpeckleSTED ANR-18-CE42-0008-01), by the technology transfer office
SATT/Erganeo (project 520 and project 600) and by Region île de France (DIM
ELICIT, 3-DiPSI).
## Acknowledgments
The authors thank Jacques Boutet de Monvel and Pierre Bon for careful reading
of the manuscript, and Benoit Forget for stimulating discussions.
## References
* [1] Kai Wang, Wenzhi Sun, Christopher T. Richie, Brandon K. Harvey, Eric Betzig, and Na Ji. Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue. Nat. Commun., 6(1):1–6, jun 2015.
* [2] Kai Wang, Daniel E. Milkie, Ankur Saxena, Peter Engerer, Thomas Misgeld, Marianne E. Bronner, Jeff Mumm, and Eric Betzig. Rapid adaptive optical recovery of optimal resolution over large volumes. Nature Methods, 11(6):625–628, Jun 2014.
* [3] Molly A. May, Nicolas Barré, Kai K. Kummer, Michaela Kress, Monika Ritsch-Marte, and Alexander Jesacher. Fast holographic scattering compensation for deep tissue biological imaging. bioRxiv, 2021.
* [4] Youngwoon Choi, Changhyeong Yoon, Moonseok Kim, Taeseok Daniel Yang, Christopher Fang-Yen, Ramachandra R. Dasari, Kyoung Jin Lee, and Wonshik Choi. Scanner-free and wide-field endoscopic imaging by using a single multimode optical fiber. Phys. Rev. Lett., 109:203901, Nov 2012.
* [5] Damien Loterie, Salma Farahi, Ioannis Papadopoulos, Alexandre Goy, Demetri Psaltis, and Christophe Moser. Digital confocal microscopy through a multimode fiber. Opt. Express, 23(18):23845–23858, Sep 2015.
* [6] Matthieu Debailleul, Bertrand Simon, Vincent Georges, Olivier Haeberlé, and Vincent Lauer. Holographic microscopy and diffractive microtomography of transparent samples. Measurement Science and Technology, 19(7):074009, may 2008.
* [7] Glenn A Tyler. Reconstruction and assessment of the least-squares and slope discrepancy components of the phase. J. Opt. Soc. Am. A, 17(10):1828–1839, oct 2000.
* [8] Sébastien Popoff, Geoffroy Lerosey, Mathias Fink, Albert Claude Boccara, and Sylvain Gigan. Image transmission through an opaque material. Nature Communications, 1(1):81, Sep 2010.
* [9] Roarke Horstmeyer, Haowen Ruan, and Changhuei Yang. Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue. Nat Photon, 9(9):563–571, sep 2015.
* [10] Etienne Cuche, Frédéric Bevilacqua, and Christian Depeursinge. Digital holography for quantitative phase-contrast imaging. Opt. Lett., 24(5):291, 1999.
* [11] Tatsuki Tahara, Xiangyu Quan, Reo Otani, Yasuhiro Takaki, and Osamu Matoba. Digital holography and its multidimensional imaging applications: a review. Microscopy, 67(2):55–67, 02 2018.
* [12] Stefan Bernet, Alexander Jesacher, Severin Fürhapter, Christian Maurer, and Monika Ritsch-Marte. Quantitative imaging of complex samples by spiral phase contrast microscopy. Opt. Express, 14(9):3792–3805, May 2006.
* [13] Ivo M. Vellekoop, Meng Cui, and Changhuei Yang. Digital optical phase conjugation of fluorescence in turbid tissue. Applied Physics Letters, 101(8):081108, 2012.
* [14] Walter Harm, Clemens Roider, Alexander Jesacher, Stefan Bernet, and Monika Ritsch-Marte. Lensless imaging through thin diffusive media. Opt. Express, 22(18):22146–22156, Sep 2014.
* [15] Dushan N. Wadduwage, Vijay Raj Singh, Heejin Choi, Zahid Yaqoob, Hans Heemskerk, Paul Matsudaira, and Peter T. C. So. Near-common-path interferometer for imaging fourier-transform spectroscopy in wide-field microscopy. Optica, 4(5):546–556, May 2017.
* [16] Joseph Rosen, A. Vijayakumar, Manoj Kumar, Mani Ratnam Rai, Roy Kelner, Yuval Kashter, Angika Bulbul, and Saswata Mukherjee. Recent advances in self-interference incoherent digital holography. Adv. Opt. Photon., 11(1):1–66, Mar 2019.
* [17] James Notaras and Carl Paterson. Point-diffraction interferometer for atmospheric adaptive optics in strong scintillation. Optics Communications, 281(3):360–367, 2008.
* [18] R Shack and BC Platt. History and principles of Shack-Hartmann wavefront sensing. Journal of Refractive Surgery, 17(October 2001):573–577, 2001.
* [19] François Roddier. Curvature sensing and compensation: a new concept in adaptive optics. Appl. Opt., 27(7):1223–1225, Apr 1988.
* [20] Robert Prevedel, Young-Gyu Yoon, Maximilian Hoffmann, Nikita Pak, Gordon Wetzstein, Saul Kato, Tina Schrödel, Ramesh Raskar, Manuel Zimmer, Edward S. Boyden, and Alipasha Vaziri. Simultaneous whole-animal 3d imaging of neuronal activity using light-field microscopy. Nature Methods, 11(7):727–730, Jul 2014.
* [21] Manoj Kumar Sharma, Charu Gaur, Paramasivam Senthilkumaran, and Kedar Khare. Phase imaging using spiral-phase diversity. Appl. Opt., 54(13):3979–3985, May 2015.
* [22] J. R. Fienup. Phase retrieval algorithms: a comparison. Appl. Opt., 21(15):2758–2769, Aug 1982.
* [23] L. J. Allen, H. M. L. Faulkner, K. A. Nugent, M. P. Oxley, and D. Paganin. Phase retrieval from images in the presence of first-order vortices. Phys. Rev. E, 63:037602, Feb 2001.
* [24] Percival Almoro, Giancarlo Pedrini, and Wolfgang Osten. Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field. Appl. Opt., 45(34):8596, 2006.
* [25] Pierre Bon, Guillaume Maucort, Benoit Wattellier, and Serge Monneret. Quadriwave lateral shearing interferometry for quantitative phase microscopy of living cells. Opt. Express, 17(15):13080, 2009.
* [26] Sébastien Bérujon, Eric Ziegler, Roberto Cerbino, and Luca Peverini. Two-dimensional x-ray beam phase sensing. Phys. Rev. Lett., 108:158102, Apr 2012.
* [27] Pascal Berto, Hervé Rigneault, and Marc Guillon. Wavefront sensing with a thin diffuser. Opt. Lett., 42(24), 2017.
* [28] Congli Wang, Xiong Dun, Qiang Fu, and Wolfgang Heidrich. Ultra-high resolution coded wavefront sensor. Opt. Express, 25(12):13736–13746, Jun 2017.
* [29] Congli Wang, Qiang Fu, Xiong Dun, and Wolfgang Heidrich. Quantitative phase and intensity microscopy using snapshot white light wavefront sensing. Scientific Reports, 9(1):13795, Sep 2019.
* [30] M. Jang, C. Yang, and I. M. Vellekoop. Optical phase conjugation with less than a photon per degree of freedom. Physical review letters, 118(9):093902–093902, Mar 2017. 28306287[pmid].
* [31] Gordon D Love. Wave-front correction and production of Zernike modes with a liquid-crystal spatial light modulator. Appl. Opt., 36(7):1517–1524, mar 1997.
* [32] J F Nye and M V Berry. Dislocations in Wave Trains. Proc. Roy. Soc. A: Math., Phys. and Eng. Sci., 336(1605):165–190, 1974.
* [33] Natalya Shvartsman and Isaac Freund. Vortices in random wave fields: Nearest neighbor anticorrelations. Phys. Rev. Lett., 72:1008–1011, Feb 1994.
* [34] M V Berry and M R Dennis. Natural superoscillations in monochromatic waves inDdimensions. Journal of Physics A: Mathematical and Theoretical, 42(2):022003, dec 2008.
* [35] Marco Pascucci, Gilles Tessier, Valentina Emiliani, and Marc Guillon. Superresolution Imaging of Optical Vortices in a Speckle Pattern. Phys. Rev. Lett., 116(9):093904, mar 2016.
* [36] David L Fried and Jeffrey L Vaughn. Branch cuts in the phase function. Appl. Opt., 31(15):2865–2882, 1992.
* [37] Walter J Wild and Eric O Le Bigot. Rapid and robust detection of branch points from wave-front gradients. Opt. Lett., 24(4):190–192, 1999.
* [38] Valerii P. Aksenov and Olga V. Tikhomirova. Theory of singular-phase reconstruction for an optical speckle field in the turbulent atmosphere. J. Opt. Soc. Am. A, 19(2):345–355, Feb 2002.
* [39] Harsh Bhatia, Gregory Norgard, Valerio Pascucci, and Peer-Timo Bremer. The Helmholtz-Hodge Decomposition-A Survey. IEEE Trans. Vis Comput Graph, 19(8):1386–1404, 2013.
* [40] David L Fried. Branch point problem in adaptive optics. J. Opt. Soc. Am. A, 15(10):2759–2768, oct 1998.
* [41] Monika Bahl and P Senthilkumaran. Helmholtz Hodge decomposition of scalar optical fields. J. Opt. Soc. Am. A, 29(11):2421–2427, nov 2012.
* [42] Lei Huang, Mourad Idir, Chao Zuo, Konstantine Kaznatcheev, Lin Zhou, and Anand Asundi. Comparison of two-dimensional integration methods for shape reconstruction from gradient data. Optics and Lasers in Engineering, 64:1–11, 2015.
* [43] Kevin Murphy and Chris Dainty. Comparison of optical vortex detection methods for use with a Shack-Hartmann wavefront sensor. Opt. Express, 20(5):4988–5002, 2012.
* [44] Abolhasan Mobashery, Morteza Hajimahmoodzadeh, and Hamid Reza Fallah. Detection and characterization of an optical vortex by the branch point potential method: analytical and simulation results. Appl. Opt., 54(15):4732–4739, May 2015.
* [45] F A Starikov, G G Kochemasov, S M Kulikov, A N Manachinsky, N V Maslov, A V Ogorodnikov, S A Sukharev, V P Aksenov, I V Izmailov, F Yu. Kanev, V V Atuchin, and I S Soldatenkov. Wavefront reconstruction of an optical vortex by a Hartmann-Shack sensor. Opt. Lett., 32(16):2291–2293, 2007.
* [46] Kevin Murphy, Daniel Burke, Nicholas Devaney, and Chris Dainty. Experimental detection of optical vortices with a shack-hartmann wavefront sensor. Opt. Express, 18(15):15448–15460, Jul 2010.
* [47] Jia Luo, Hongxin Huang, Yoshinori Matsui, Haruyoshi Toyoda, Takashi Inoue, and Jian Bai. High-order optical vortex position detection using a Shack-Hartmann wavefront sensor. Opt. Express, 23(7):8706–8719, 2015.
* [48] Mingzhou Chen, Filippus S. Roux, and Jan C. Olivier. Detection of phase singularities with a shack-hartmann wavefront sensor. J. Opt. Soc. Am. A, 24(7):1994–2002, Jul 2007.
* [49] Kieran G Larkin, Donald J Bone, and Michael A Oldfield. Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform. J. Opt. Soc. Am. A, 18(8):1862–1870, 2001.
* [50] Jérôme Gateau, Hervé Rigneault, and Marc Guillon. Complementary Speckle Patterns: Deterministic Interchange of Intrinsic Vortices and Maxima through Scattering Media. Phys. Rev. Lett., 118(4):43903, jan 2017.
* [51] Jérôme Gateau, Ferdinand Claude, Gilles Tessier, and Marc Guillon. Topological transformations of speckles. Optica, 6(7):914–920, 2019.
* [52] Lluis Torner, Juan P. Torres, and Silvia Carrasco. Digital spiral imaging. Opt. Express, 13(3):873–881, Feb 2005.
* [53] Marc Guillon and Pascal Berto. Reconstruction of a wavefront of a light beam containing optical vortices, Aug. 2020. Patent application EP20305898.7.
* [54] Pierre Bon, Serge Monneret, and Benoit Wattellier. Noniterative boundary-artifact-free wavefront reconstruction from its derivatives. Appl. Opt., 51(23):5698, 2012.
* [55] I V Basistiy, V A Pas’ko, V V Slyusar, M S Soskin, and M V Vasnetsov. Synthesis and analysis of optical vortices with fractional topological charges. Journal of Optics A: Pure and Applied Optics, 6(5):S166–S169, apr 2004.
* [56] M S Longuet-Higgins. Reflection and Refraction at a Random Moving Surface. I. Pattern and Paths of Specular Points. J. Opt. Soc. Am., 50(9):838–844, sep 1960.
* [57] Isaac Freund. 1001 correlations in random wave fields. Waves in Random Media, 8(1):119–158, 1998.
* [58] J F Nye, J V Hajnal, and J H Hannay. Phase saddles and dislocations in two-dimensional waves such as the tides. Proc. Roy. Soc. A: Math., Phys. and Eng. Sci., 417(1852):7–20, 1988.
# Complementary Searches of Low Mass Non-Abelian Vector Dark Matter,
Dark Photon and Dark $Z^{\prime}$
Raymundo Ramos<EMAIL_ADDRESS>Institute of Physics, Academia
Sinica, Nangang, Taipei 11529, Taiwan Van Que Tran<EMAIL_ADDRESS>School
of Physics, Nanjing University, Nanjing 210093, China Tzu-Chiang Yuan
<EMAIL_ADDRESS>Institute of Physics, Academia Sinica, Nangang,
Taipei 11529, Taiwan
###### Abstract
We present a novel study of the non-abelian vector dark matter candidate
$W^{\prime}$ with a MeV$-$GeV low mass range, accompanied by a dark photon
$A^{\prime}$ and a dark $Z^{\prime}$ of similar masses in the context of a
simplified gauged two-Higgs-doublet model. The model is scrutinized by taking
into account various experimental constraints including dark photon searches,
electroweak precision data, relic density of dark matter together with its
direct and indirect searches, mono-jet and Higgs collider physics from LHC.
The viable parameter space of the model consistent with all experimental and
theoretical constraints is exhibited. While a dark $Z^{\prime}$ can be the
dominant contribution in the relic density due to resonant annihilation of
dark matter, a dark photon is crucial to dark matter direct detection. We
demonstrate that the parameter space can be further probed in the near future
by sub-GeV dark matter experiments like CDEX, NEWS-G and SuperCDMS.
Introduction.– Although mountains of experimental data, ranging from the
cosmic microwave background radiation at the largest scale to galaxy structure
formation at the local scale, now provide strong evidence for the existence of
dark matter (DM), its particle nature remains elusive. The
stability of massive DM is usually implemented in many particle physics models
beyond the standard model (SM) by imposing a discrete $Z_{2}$ symmetry in the
Lagrangian. The most studied DM candidate is the spin-1/2 lightest neutralino
in the minimal supersymmetric standard model with $R$-parity [2, 3, 4].
Recently a gauged two-Higgs-doublet model (G2HDM) based on an extended
electroweak gauge group ${\mathcal{G}}=SU(2)_{L}\times U(1)_{Y}\times
SU(2)_{H}\times U(1)_{X}$ was proposed [5], in which a hidden discrete $Z_{2}$
symmetry ($h$-parity) [6] arises naturally as an accidental remnant symmetry
rather than imposed ad hoc by hand. This discrete symmetry ensures the
stability of the DM candidate in G2HDM, which can be either a complex scalar
(in general a linear combination of various fields in the model as studied in
[6]) or a heavy neutrino $\nu^{H}$ or an extra gauge boson
$W^{\prime(p,m)}\left(W^{\prime m}=\left(W^{\prime p}\right)^{*}\right)$, all
of which have odd $h$-parity. Unlike the left-right symmetric model [7], the
$W^{\prime(p,m)}$ in G2HDM do not carry electric charge. Moreover the
$h$-parity ensures there is no tree level flavor changing neutral currents for
the SM sector in G2HDM [5].
The novel idea of G2HDM, as compared with many variants of general 2HDM [8],
is that the two Higgs doublets $H_{1}$ and $H_{2}$ of $SU(2)_{L}$ are grouped
into a fundamental representation of a new hidden $SU(2)_{H}$. Consistency
checks of the model were performed in [9, 10]. In this Letter, we will show
that the non-abelian gauge boson $W^{\prime(p,m)}$ associated with $SU(2)_{H}$
can be a viable DM as well. In particular we will focus on the low mass DM
scenario in the MeV$-$GeV range and its correlations with two other neutral
gauge bosons in $SU(2)_{H}\times U(1)_{X}$.
The G2HDM Model and its Mass Spectra.– We begin by simplifying the original
G2HDM [5] by removing the $SU(2)_{H}$ triplet scalar $\Delta_{H}$, since it is
not absolutely required for a realistic particle spectrum and its removal
significantly reduces the number of free parameters in the scalar potential.
The new heavy fermions $f^{\rm H}$ remain the same as before, as required by
anomaly cancellation. Details of the model can be found in [5, 9, 10].
The most general renormalizable Higgs potential invariant under the extended
gauge group $\mathcal{G}$ is
$\displaystyle V={}$ $\displaystyle-\mu^{2}_{H}\left(H^{\alpha i}H_{\alpha
i}\right)+\lambda_{H}\left(H^{\alpha i}H_{\alpha i}\right)^{2}$
$\displaystyle+\frac{1}{2}\lambda^{\prime}_{H}\epsilon_{\alpha\beta}\epsilon^{\gamma\delta}\left(H^{\alpha
i}H_{\gamma i}\right)\left(H^{\beta j}H_{\delta j}\right)$
$\displaystyle-\mu^{2}_{\Phi}\Phi_{H}^{\dagger}\Phi_{H}+\lambda_{\Phi}\left(\Phi_{H}^{\dagger}\Phi_{H}\right)^{2}$
(1)
$\displaystyle+\lambda_{H\Phi}\left(H^{\dagger}H\right)\left(\Phi_{H}^{\dagger}\Phi_{H}\right)+\lambda^{\prime}_{H\Phi}\left(H^{\dagger}\Phi_{H}\right)\left(\Phi_{H}^{\dagger}H\right),$
where ($\alpha$, $\beta$, $\gamma$, $\delta$) and ($i$, $j$) refer to the
$SU(2)_{H}$ and $SU(2)_{L}$ indices respectively, all of which run from one to
two, and $H^{\alpha i}=H^{*}_{\alpha i}$. $\Phi_{H}$ is a $SU(2)_{H}$ doublet.
To facilitate spontaneous symmetry breaking (SSB), we shift the fields in the
standard way
$\displaystyle H_{1}=\begin{pmatrix}H_{11}\\\
H_{12}\end{pmatrix}=\begin{pmatrix}G^{+}\\\
\frac{v+h}{\sqrt{2}}+i\frac{G^{0}}{\sqrt{2}}\end{pmatrix},\;\Phi_{H}=\begin{pmatrix}G_{H}^{p}\\\
\frac{v_{\Phi}+\phi_{2}}{\sqrt{2}}+i\frac{G_{H}^{0}}{\sqrt{2}}\end{pmatrix},$
where $v$ and $v_{\Phi}$ are the vacuum expectation values (VEV)s of $H_{1}$
and $\Phi_{H}$ fields respectively. $H_{2}=(H_{21},H_{22})^{\rm
T}=(H^{+},H_{2}^{0})^{\rm T}$ is the inert doublet in G2HDM and does not have
VEV. We will work in the ’t Hooft-Landau gauge.
The two $h$-parity even fields $h$ and $\phi_{2}$ mix by an angle $\theta_{1}$
satisfying $\tan
2\theta_{1}=\lambda_{H\Phi}vv_{\Phi}/(\lambda_{\Phi}v_{\Phi}^{2}-\lambda_{H}v^{2})$,
giving rise to two physical states $h_{1}$ and $h_{2}$. $h_{1}$ is identified
as the 125 GeV SM-like Higgs boson and $h_{2}$ as a heavier CP-even scalar
boson. Similarly, the two $h$-parity odd complex fields $G_{H}^{p}$ and
$H_{2}^{0*}$ mix and give rise to two physical states, $\tilde{G}^{p}_{H}$ and
$D$. $\tilde{G}^{p}_{H}$ is the massless Nambu-Goldstone boson absorbed by
$W^{\prime\,p}$, while $D$ is a massive dark complex scalar. The Goldstone
bosons $G^{0}$, $G^{\pm}$ and $G^{0}_{H}$ are massless. We note that
$h_{1,2}$, $G^{0}$, $G^{\pm}$ and $G^{0}_{H}$ are even under $h$-parity, while
$\tilde{G}^{p}_{H}$, $D$ and $H^{\pm}$ are odd [6]. The fact that $H^{\pm}$
has odd $h$-parity implies that the $H^{\pm}W^{\mp}\gamma$ and
$H^{\pm}W^{\mp}Z$ couplings are absent in G2HDM.
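For completeness, the quoted mixing angle follows from the CP-even mass matrix in the $(h,\phi_{2})$ basis obtained from Eq. (1) (a sketch; at the vacuum the $\lambda^{\prime}_{H}$ and $\lambda^{\prime}_{H\Phi}$ terms do not contribute to this block):

```latex
\mathcal{M}^{2}_{\rm even}=
\begin{pmatrix}
2\lambda_{H}v^{2} & \lambda_{H\Phi}\,v\,v_{\Phi}\\
\lambda_{H\Phi}\,v\,v_{\Phi} & 2\lambda_{\Phi}v_{\Phi}^{2}
\end{pmatrix},
\qquad
\tan 2\theta_{1}
=\frac{2\,(\mathcal{M}^{2}_{\rm even})_{12}}
{(\mathcal{M}^{2}_{\rm even})_{22}-(\mathcal{M}^{2}_{\rm even})_{11}}
=\frac{\lambda_{H\Phi}\,v\,v_{\Phi}}{\lambda_{\Phi}v_{\Phi}^{2}-\lambda_{H}v^{2}}.
```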
The $W^{\pm}$ gauge boson of $SU(2)_{L}$ remains the same as in SM with its
mass given by $m_{W}=gv/2$. The $SU(2)_{H}$ gauge boson $W^{\prime(p,m)}$
receives mass from $\langle H_{1}\rangle$ and $\langle\Phi_{H}\rangle$,
$m^{2}_{W^{\prime}}=\frac{1}{4}g^{2}_{H}\left(v^{2}+v^{2}_{\Phi}\right)\;.$
(2)
As aforementioned, $W^{\prime(p,m)}$ is odd under $h$-parity, while $W^{\pm}$
is even. If $W^{\prime(p,m)}$ is the lightest $h$-parity odd particle in the
model, it can be a stable DM candidate [11].
On the other hand, the SM neutral gauge bosons $B$ and $W^{3}$ can mix with
the new gauge bosons $W^{\prime 3}$ and $X$, all of which have even
$h$-parity. Together with the Stueckelberg mass parameter $M_{X}$ for the
abelian $U(1)_{X}$, SSB generates a $4\times 4$ neutral gauge boson mass
matrix. In the basis of $\left\\{B,W^{3},W^{\prime 3},X\right\\}~{}$[5, 10],
one can apply the weak-mixing rotation to the upper left $2\times 2$ block of
the mass matrix, resulting in a zero eigenvalue identified as the SM photon
and a $3\times 3$ sub-matrix which can be diagonalized by an orthogonal
rotation matrix ${\cal O}$
so that the physical states $(Z,\,Z^{\prime},\,A^{\prime})=(Z^{\rm
SM},\,W^{\prime 3},\,X)\cdot{\mathcal{O}}^{T}$. In this analysis, we arrange
the neutral gauge boson masses as $m_{A^{\prime}}<m_{Z^{\prime}}<m_{Z}$ with
$Z$ identified as the physical SM $Z$ boson with mass $91.1876\pm 0.0021$ GeV
[12]. Two lighter neutral vector bosons, namely a dark photon $A^{\prime}$ and a dark $Z^{\prime}$, are thus predicted.
The new gauge couplings $g_{H}$ and $g_{X}$ for $SU(2)_{H}$ and $U(1)_{X}$ are
expected to be much smaller than the SM $g$ and $g^{\prime}$ in order not to
jeopardize the electroweak precision data. The VEV $v_{\Phi}$ is also expected
to be larger than $v$ since all new heavy fermion masses are proportional to
it. Furthermore, since we want the hierarchy
$m_{A^{\prime}}<m_{Z^{\prime}}<m_{Z}$, we require $M_{X}<v$. The neutral gauge
boson masses can then be well approximated by [13]
$m_{Z}^{2}\approx m^{2}_{Z^{\rm SM}}\equiv\frac{1}{4}\left(g^{2}+g^{\prime
2}\right)v^{2}\,,\;\;m_{Z^{\prime}}^{2}\approx
m_{W^{\prime}}^{2}\left(1+\frac{4g_{X}^{2}}{g_{H}^{2}}\right)+M_{X}^{2}-m_{A^{\prime}}^{2}\;\;{\rm
and}\;\;m_{A^{\prime}}^{2}\approx
M_{X}^{2}\left(1+\frac{4g_{X}^{2}}{g_{H}^{2}}+\frac{M_{X}^{2}}{m_{W^{\prime}}^{2}}\right)^{-1}.$
(3)
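The mass relations in (2) and (3) can be checked numerically. The sketch below uses purely illustrative parameter values (not fitted benchmark points) and verifies the hierarchy stated in the text:

```python
import math

# Hypothetical inputs (illustrative, not fitted values):
g, gp = 0.652, 0.357           # SM SU(2)_L and U(1)_Y couplings
v = 246.0                      # SM VEV in GeV
g_H, g_X = 1e-3, 5e-4          # new gauge couplings, assumed small
v_Phi = 20e3                   # SU(2)_H VEV in GeV, taken >> v
M_X = 0.1                      # Stueckelberg mass in GeV, M_X < v

# Eq. (2): SU(2)_H gauge boson (DM) mass squared
m_Wp2 = 0.25 * g_H**2 * (v**2 + v_Phi**2)

# Eq. (3): approximate neutral gauge boson masses squared
m_Ap2 = M_X**2 / (1 + 4*g_X**2/g_H**2 + M_X**2/m_Wp2)
m_Zp2 = m_Wp2 * (1 + 4*g_X**2/g_H**2) + M_X**2 - m_Ap2
m_Z2 = 0.25 * (g**2 + gp**2) * v**2

m_Wp, m_Ap, m_Zp, m_Z = (math.sqrt(x) for x in (m_Wp2, m_Ap2, m_Zp2, m_Z2))
print(f"m_W' = {m_Wp:.3f} GeV, m_A' = {m_Ap:.3f} GeV, "
      f"m_Z' = {m_Zp:.3f} GeV, m_Z = {m_Z:.2f} GeV")

# Check m_A' < m_Z' < m_Z together with m_Z' >= m_W' and m_A' <= M_X
assert m_Ap < m_Zp < m_Z and m_Zp >= m_Wp and m_Ap <= M_X
```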
Thus the masses of the DM $W^{\prime(p,m)}$, dark photon $A^{\prime}$ and dark
$Z^{\prime}$ are entangled with each other and we have $m_{Z^{\prime}}\gtrsim
m_{W^{\prime}}$ and $m_{A^{\prime}}\lesssim M_{X}$. Since the couplings of
$A^{\prime}$ and $Z^{\prime}$ with the SM fermions are proportional to the
smaller couplings $g_{H}$ and/or $g_{X}$, the Drell-Yan type processes are
suppressed, which can explain the null results of extra neutral gauge boson searches at LEP.
It is useful to express the fundamental parameters in the scalar potential in
terms of the physical scalar boson masses. Indeed one can trade six model parameters for five physical squared masses and one mixing angle,
$\\{\lambda_{H},\lambda_{\Phi},\lambda_{H\Phi},\lambda^{\prime}_{H\Phi},\lambda^{\prime}_{H},v_{\Phi}\\}$
$\rightarrow\\{m^{2}_{h_{1}},m^{2}_{h_{2}},m^{2}_{D},m^{2}_{H^{\pm}},m^{2}_{W^{\prime}},\theta_{1}\\}.$
Detailed formulas for the mass spectra and these conversions will be presented
elsewhere [13]. The remaining free parameters of the model are $g_{H}$,
$g_{X}$, $M_{X}$ and $m_{f^{\rm H}}$.
Theoretical Constraints.– (a) Vacuum Stability: To make sure the scalar potential is bounded from below, we follow [9] and use copositivity conditions to obtain the following constraints
$\widetilde{\lambda}_{H}(\eta)\geq 0,\lambda_{\Phi}\geq 0$ and
$\widetilde{\lambda}_{H\Phi}(\xi)+2\sqrt{\widetilde{\lambda}_{H}(\eta)\lambda_{\Phi}}\geq
0,$ where
$\widetilde{\lambda}_{H}(\eta)\equiv\lambda_{H}+\eta\lambda^{\prime}_{H}$ and
${\widetilde{\lambda}_{H\Phi}}(\xi)\equiv\lambda_{H\Phi}+\xi\lambda^{\prime}_{H\Phi}$
with $0\leq\xi\leq 1$ and $-1\leq\eta\leq 0$. (b) Partial Wave Unitarity: We compute all the spinless $2\rightarrow 2$ scattering amplitudes induced by the quartic couplings in the scalar potential and require their magnitudes to be less than $8\pi$ [9]. (c) Electroweak Constraints: Following [10], we
implement all the relevant constraints from electroweak precision data on the
gauge sector.
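The copositivity conditions of item (a) can be scanned numerically over the stated ranges of $\eta$ and $\xi$. The following grid-scan sketch and its example couplings are illustrative assumptions, not values from the fit:

```python
import numpy as np

def vacuum_stable(lam_H, lam_Hp, lam_Phi, lam_HPhi, lam_HPhip, n=101):
    """Grid scan of the copositivity conditions over -1 <= eta <= 0
    and 0 <= xi <= 1 (a numerical sketch, not the analytic check)."""
    etas = np.linspace(-1.0, 0.0, n)
    xis = np.linspace(0.0, 1.0, n)
    lam_H_t = lam_H + etas * lam_Hp          # lambda~_H(eta)
    lam_HPhi_t = lam_HPhi + xis * lam_HPhip  # lambda~_HPhi(xi)
    if lam_Phi < 0 or np.any(lam_H_t < 0):
        return False
    # third condition must hold for every (eta, xi) pair
    cross = 2.0 * np.sqrt(lam_H_t[:, None] * lam_Phi)
    return bool(np.all(lam_HPhi_t[None, :] + cross >= 0))

# Purely illustrative couplings (not fitted values)
print(vacuum_stable(0.2, -0.1, 0.3, -0.05, 0.02))  # True: all conditions hold
print(vacuum_stable(0.1, 0.2, 0.3, -0.05, 0.02))   # False: lambda~_H(-1) < 0
```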
Dark Photon.– The light boson $A^{\prime}$ can be treated as a dark photon and
constraints from $A^{\prime}\rightarrow\bar{\ell}\ell\,(\ell=e,\mu)$ should be
applied. Dark photon experiments constrain the size of the coupling via a
parameter $\varepsilon_{\ell}$ that appears as [14]
$\Gamma\left(A^{\prime}\rightarrow\bar{\ell}\ell\right)=\frac{\alpha}{3}\varepsilon_{\ell}^{2}m_{A^{\prime}}\sqrt{1-\mu_{\ell}^{2}}\left(1+\frac{\mu_{\ell}^{2}}{2}\right),$
(4)
where $\mu_{\ell}=2m_{\ell}/m_{A^{\prime}}<1$. In the G2HDM, the parameter
$\varepsilon_{\ell}$ at tree level is given by
$\varepsilon_{\ell}=\frac{1}{2s_{W}c_{W}}\sqrt{\left(v^{A^{\prime}}_{\ell}\right)^{2}+\left(a^{A^{\prime}}_{\ell}\right)^{2}\left(\frac{1-\mu_{\ell}^{2}}{1+\mu_{\ell}^{2}/2}\right)}\;,$
(5)
where $v^{A^{\prime}}_{\ell}$ and $a^{A^{\prime}}_{\ell}$ are the vector and
axial couplings [10]. Since $Z^{\prime}$ is also expected to be light, the
dark photon experimental limits can also be applied as above with
$A^{\prime}\rightarrow Z^{\prime}$ in (4) and
$\\{v_{\ell}^{A^{\prime}},a_{\ell}^{A^{\prime}}\\}\rightarrow\\{v^{Z^{\prime}}_{\ell},a^{Z^{\prime}}_{\ell}\\}$
in (5). However, since $A^{\prime}$ is lighter it is expected to be more
strongly constrained. Typically, one finds $a^{A^{\prime}}_{\ell}\sim
10^{-3}v^{A^{\prime}}_{\ell}$ and $a^{Z^{\prime}}_{\ell}\sim
10^{-2}v^{Z^{\prime}}_{\ell}$ [13]. Since both
$v_{\ell}^{A^{\prime}(Z^{\prime})}$ and $a_{\ell}^{A^{\prime}(Z^{\prime})}$
have the same values for all charged leptons, $\varepsilon_{\ell}$ depends only weakly on $\ell$ through $\mu_{\ell}$ in both cases.
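Equations (4) and (5) can be combined into a short numerical sketch. The coupling values below are illustrative assumptions, with the axial coupling set to $10^{-3}$ of the vector one as quoted in the text, and $\sin^{2}\theta_{W}\approx 0.2312$:

```python
import math

alpha = 1.0 / 137.036   # fine-structure constant
s_W2 = 0.2312           # approximate sin^2(theta_W), an assumed input
s_W, c_W = math.sqrt(s_W2), math.sqrt(1 - s_W2)

def mu_l(m_l, m_Ap):
    return 2.0 * m_l / m_Ap

def eps_l(v_Ap, a_Ap, m_l, m_Ap):
    """Eq. (5): effective dark photon coupling epsilon_l."""
    mu2 = mu_l(m_l, m_Ap) ** 2
    return math.sqrt(v_Ap**2 + a_Ap**2 * (1 - mu2) / (1 + mu2 / 2)) / (2 * s_W * c_W)

def width_Ap_ll(eps, m_l, m_Ap):
    """Eq. (4): Gamma(A' -> l+ l-), in the same units as m_Ap."""
    mu2 = mu_l(m_l, m_Ap) ** 2
    return alpha / 3.0 * eps**2 * m_Ap * math.sqrt(1 - mu2) * (1 + mu2 / 2)

# Illustrative couplings (hypothetical values)
m_e, m_Ap = 0.000511, 0.05   # GeV
v_Ap = 1e-4                  # assumed vector coupling
a_Ap = 1e-3 * v_Ap           # axial coupling, per the text
eps = eps_l(v_Ap, a_Ap, m_e, m_Ap)
print(f"eps_e = {eps:.3e}, Gamma(A'->ee) = {width_Ap_ll(eps, m_e, m_Ap):.3e} GeV")
```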
Many experiments have reported stringent limits on $\varepsilon_{\ell}$ for low
mass $m_{A^{\prime}}$ [15, 16, 17, 18, 19, 20]. Current limits on
$\varepsilon_{\ell}$ for $m_{A^{\prime}}>1$ MeV are displayed on the top panel
of Fig. 10 in [14].
Higgs Constraints.– In this analysis, we take the mass of the observed Higgs boson as $m_{h_{1}}=125.10\pm 0.14$ GeV [12].
We consider two signal strengths of $h_{1}\rightarrow\gamma\gamma$ and
$h_{1}\rightarrow f\bar{f}$ from gluon fusion. Their current experimental
values are $\mu^{\gamma\gamma}_{\rm ggH}=0.96\pm 0.14$ [21] and
$\mu^{\tau\tau}_{\rm ggH}=1.05^{+0.53}_{-0.47}$ [22]. Besides the
contributions from the SM charged particles in the 1-loop process
$h_{1}\rightarrow\gamma\gamma$, we also include all charged $f^{\rm H}$ (with
$m_{f^{\rm H}}$ fixed at 3 TeV) and $H^{\pm}$ in G2HDM.
If $m_{h_{1}}>2\,m_{W^{\prime}}$, $h_{1}$ can decay invisibly into a pair of
$W^{\prime(p,m)}$ with invisible branching ratio ${\rm
BR}({h_{1}\rightarrow{\rm inv}})=\Gamma({h_{1}\rightarrow W^{\prime
p}W^{\prime m}})/\Gamma_{h_{1}}$. Recently, assuming that the Higgs boson
production cross section via vector boson fusion is comparable to the SM
prediction, ATLAS sets the limit ${\rm BR}({h_{1}\rightarrow{\rm inv}})<0.13$
at $95\%$ C.L. [23].
Dark Matter Constraints.–
Relic Density: The main DM annihilation channels in our model are to pairs of
SM fermions mediated by $Z$ and $Z^{\prime}$. Other annihilation channels are
also possible but are far more suppressed. First, there is the $A^{\prime}$ exchange diagram. The $A^{\prime}$ couplings to SM fermions are
suppressed by combinations of new gauge couplings in $v^{A^{\prime}}_{f}$ and
$a^{A^{\prime}}_{f}$. Similarly to the case of $A^{\prime}$, the $Z^{\prime}$
couplings to the SM fermions are suppressed by its own $v^{Z^{\prime}}_{f}$
and $a^{Z^{\prime}}_{f}$. However, it is possible to have $\sqrt{s}\approx
2m_{W^{\prime}}\approx m_{Z^{\prime}}$ resulting in an important contribution
from resonant annihilation. Secondly, we also have the $h_{1}$ and $h_{2}$
Higgs exchange diagrams. Their couplings to pairs of $W^{\prime(p,m)}$ and SM
fermions are suppressed by $g_{H}^{2}$ and light fermion masses $m_{q}/v$
respectively. Finally, $t$-channel diagram via exchange of $f^{\rm H}$ is
suppressed by $g_{H}^{2}$ and the mass of the new heavy fermion in the
propagator. The annihilation mediated by the $Z$ is also suppressed, by the mixing matrix element squared ${\mathcal{O}}_{21}^{2}$, which is required to be small mostly by measurements of the total decay width of the $Z$ and of the branching fraction for the invisible decay $Z\rightarrow W^{\prime p}W^{\prime m}$. As mentioned
above, the channel mediated by $Z^{\prime}$ is also suppressed by $g_{H}$ and
$g_{X}$ inside the couplings of $v^{Z^{\prime}}_{f}$ and $a^{Z^{\prime}}_{f}$.
However, these suppressions are not as severe as in other channels and when we
include the effects from $Z^{\prime}$ resonance it is possible to bring the
relic density to its observed value of $\Omega_{\rm DM}h^{2}=0.120\pm 0.001$
from Planck’s measurement [24].
Direct Detection: Through the small couplings of the DM candidate $W^{\prime(p,m)}$ with the SM-like states $h_{1}$ and $Z$ and with the new states $h_{2}$, $Z^{\prime}$ and $A^{\prime}$, all of which couple to the visible sector, DM can scatter against nucleons in the detectors used in direct detection experiments. In this case we have to consider the
elastic scattering between a DM particle and the quarks inside the nucleon.
The suppressions from $h_{1,2}$ (and $f^{\rm H}$) exchange work in the same
way as in the processes for relic density just described. Therefore, we are
only left with the processes mediated by $Z$, $Z^{\prime}$ and $A^{\prime}$ in
the $t$-channel. Usually, for direct detection processes the momentum exchange
$|{\bf q}|$ is rather small. This will result in amplitudes suppressed by the
inverse mass squared of the mediator meaning that the lighter states,
$Z^{\prime}$ and $A^{\prime}$, will be less suppressed than the $Z$. In the
approximation that $|{\bf q}|\ll m_{Z(i)}$, the interaction between
$W^{\prime}$ and light quark $q$ can be written as a contact interaction
$\displaystyle\mathcal{L}_{\rm CI-DD}$
$\displaystyle=\sum_{q}\sum_{i=2}^{3}\frac{g_{M}g_{H}\mathcal{O}_{2i}v^{Z(i)}_{q}}{2m_{Z(i)}^{2}}$
$\displaystyle\;\;\;\;\times\left(W^{\prime p\,\mu}\partial_{\nu}W^{\prime
m}_{\mu}-W^{\prime m\,\mu}\partial_{\nu}W^{\prime
p}_{\mu}\right)\bar{q}\gamma^{\nu}q\;,$ (6)
where $Z(2)\equiv Z^{\prime}$ and $Z(3)\equiv A^{\prime}$. It is worth noting
that, as light as the mediators $Z^{\prime}$ and $A^{\prime}$ are, we can
still integrate them out thanks to the comparatively small maximum momentum
transfer, $|{\bf q}|_{\max}$. The smallness of $|{\bf q}|_{\max}$ is also due
to $W^{\prime}$ being light. Consider $|{\bf q}|_{\max}=2\,v_{\rm
DM}\,m_{W^{\prime}}m_{A}/(m_{W^{\prime}}+m_{A})$ with $v_{\rm DM}=10^{-3}c$,
$m_{W^{\prime}}=0.5$ GeV and the target mass $m_{A}=131$ GeV or $40$ GeV for
xenon or argon target respectively. In both cases $|{\bf q}|_{\max}\sim O(1\
\text{MeV})$ while we expect $m_{A^{\prime}}\gtrsim O(10\ \text{MeV})$ due to
constraints on dark photon. Smaller $m_{W^{\prime}}$ results in even smaller
$|{\bf q}|_{\max}$. Furthermore, for the smaller axial couplings with the quarks, only the space components of $\gamma^{\nu}$ survive in the small momentum exchange limit, and these are suppressed by the $W^{\prime(p,m)}$ momentum through the derivatives $\nabla W^{\prime(p,m)}$ in (6) [26, 25].
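The momentum-transfer estimate above can be reproduced directly (natural units with $c=1$, masses in GeV):

```python
def q_max(m_dm, m_target, v_dm=1e-3):
    """Maximum momentum transfer |q|_max = 2 v_DM mu, with mu the
    reduced DM-target mass (natural units, c = 1; masses in GeV)."""
    mu = m_dm * m_target / (m_dm + m_target)
    return 2.0 * v_dm * mu

# Xenon and argon targets quoted in the text, m_W' = 0.5 GeV
for name, m_A in [("xenon", 131.0), ("argon", 40.0)]:
    q = q_max(0.5, m_A)
    print(f"{name}: |q|_max = {q*1e3:.3f} MeV")  # both ~ O(1 MeV)
```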
From (6), it is clear that the $A^{\prime}$ mediated
process is expected to dominate the cross section unless
$|\mathcal{O}_{23}/\mathcal{O}_{22}|<|m_{A^{\prime}}/m_{Z^{\prime}}|^{2}$. The
case where both mediators participate equally is expected to happen only
through fine tuning of masses and mixings. Therefore, we expect the cross
section with the nucleons to be mostly mediated by either $A^{\prime}$ or
$Z^{\prime}$. The elastic spin-independent (SI) cross section between
$W^{\prime(p,m)}$ and a nucleon, $N$, is given by [27]
$\displaystyle\sigma^{\rm SI}_{W^{\prime}N}$ $\displaystyle=\sigma^{\rm
SI}_{W^{\prime}p}\frac{\sum_{k}\eta_{k}\mu_{A_{k}}^{2}\left[Z_{\text{atom}}+(A_{k}-Z_{\text{atom}})f_{n}/f_{p}\right]^{2}}{\sum_{k}\eta_{k}\mu_{A_{k}}^{2}A_{k}^{2}}\;,$
(7) $\displaystyle\sigma^{\rm SI}_{W^{\prime}p}$
$\displaystyle=\frac{\mu_{p}^{2}g_{M}^{2}g_{H}^{2}\mathcal{O}_{2i}^{2}}{4\pi
m_{Z(i)}^{4}}f_{p}^{2}\;,\;\;\;\;\;(i=2\;{\rm or}\;3)$ (8)
where $\mu_{p}=m_{W^{\prime}}m_{p}/(m_{W^{\prime}}+m_{p})$ is the reduced DM-
proton mass, $\mu_{A_{k}}=m_{W^{\prime}}m_{A_{k}}/(m_{W^{\prime}}+m_{A_{k}})$
is the reduced DM-isotope nucleus mass, and $f_{p}$ and $f_{n}$ are effective
couplings of the DM with protons and neutrons, respectively. $Z_{\text{atom}}$
is the atomic number, and the isotope dependent variables $\eta_{k}$ and
$A_{k}$ are the abundance and mass number of the $k^{\text{th}}$ target
isotope, respectively. Direct detection experiments usually report the number
in (7) assuming isospin conservation, _i.e._ , $f_{p}=f_{n}$. In that case it
is straightforward to see that the ratio of the sums over isotopes reduces to
1 and $\sigma^{\rm SI}_{W^{\prime}N}=\sigma^{\rm SI}_{W^{\prime}p}$. However,
in our case the couplings between quarks, $u$ and $d$, and the gauge bosons,
$Z^{\prime}$ and $A^{\prime}$, are all different due to their distinct SM
charges leading to isospin violation (ISV), _i.e._ , $f_{p}\neq f_{n}$.
Following [27, 28], we can rescale the reported experimental limit,
$\sigma_{\text{limit}}\rightarrow\sigma_{\text{limit}}\times\sigma^{\rm
SI}_{W^{\prime}p}/\sigma^{\rm SI}_{W^{\prime}N}$ to account for ISV effects
and use it to limit $\sigma^{\rm SI}_{W^{\prime}p}$ as given by (8). This
rescaling depends on the mass of DM, the atomic numbers and the ratio
$f_{n}/f_{p}$, and, hence, will be different for different points in the
parameter space. To constrain $\sigma^{\rm SI}_{W^{\prime}p}$ we use the most
recent limits set by CRESST III [29], DarkSide-50 [30] and XENON1T [31].
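The ISV rescaling factor $\sigma^{\rm SI}_{W^{\prime}N}/\sigma^{\rm SI}_{W^{\prime}p}$ of Eq. (7) can be sketched as below. The xenon isotope list is an illustrative subset with approximate abundances, and each isotope mass is approximated by $A_{k}$ atomic mass units:

```python
def sigma_ratio(m_dm, isotopes, Z_atom, fn_over_fp):
    """Ratio sigma_SI_N / sigma_SI_p of Eq. (7) for a target whose
    isotopes are given as (abundance eta_k, mass number A_k) pairs.
    Isotope masses are approximated as A_k atomic mass units (a sketch)."""
    m_u = 0.9315  # GeV, atomic mass unit
    num = den = 0.0
    for eta, A in isotopes:
        m_A = A * m_u
        mu2 = (m_dm * m_A / (m_dm + m_A)) ** 2  # reduced-mass squared
        num += eta * mu2 * (Z_atom + (A - Z_atom) * fn_over_fp) ** 2
        den += eta * mu2 * A ** 2
    return num / den

# Illustrative subset of xenon isotopes (abundance, A); not the full table
xe = [(0.269, 132), (0.264, 129), (0.212, 131)]
print(sigma_ratio(0.5, xe, 54, 1.0))  # isospin conservation -> exactly 1
print(sigma_ratio(0.5, xe, 54, 0.8))  # ISV (f_n != f_p) changes the ratio
```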
Indirect Detection: Since DM annihilation into SM particles before freeze-out proceeds through the resonance of an otherwise suppressed channel, DM annihilation at the present time, with kinetic energies much lower than in the early Universe, falls off the resonance, resulting in a very low annihilation cross section. We have checked that the value of the total
annihilation cross section in G2HDM at the present time is of order $10^{-32}$
cm${}^{3}\cdot$s-1 or below, much lower than the canonical limits set for
various channels by Fermi-LAT data [32, 33].
Mono-jet: The event of an energetic jet with large missing transverse momentum has been searched for by the ATLAS [34, 35] and CMS [36] collaborations with null
results. In G2HDM, the process $pp\rightarrow W^{\prime p}W^{\prime m}j$ can
give rise to the mono-jet signal at the LHC. In the most sensitive signal
region with $E_{T}^{\rm miss}\in(700,800)$ GeV (the signal region EM7 in
[35]), we have checked that at a couple of benchmark points (BP1 and 2), the
mono-jet signals with $p_{T}^{\rm miss}>100$ GeV and precuts for jets
$p_{T}^{j}>30$ GeV and $|\eta_{j}|<2.8$, are at least two orders-of-magnitude
below the current $95\%$ C.L. exclusion limit on the production cross section.
Table 1: Ranges and values for the priors of the 8 free parameters. We fix $m_{f^{\rm H}}=3$ TeV.

Parameter [units] | Range
---|---
$m_{h_{2}},\,m_{D},\,m_{H^{\pm}}$ [TeV] | [0.3 , 10]
$\log_{10}(m_{W^{\prime}}/\text{GeV}),\,\log_{10}(M_{X}/\text{GeV})$ | [$-3$ , 2]
$\log_{10}(g_{H}),\,\log_{10}(g_{X})$ | [$-6$ , 0]
$\theta_{1}$ [rad] | [$-\pi/2$ , $\pi/2$]
Figure 1: The 1$\sigma$, 2$\sigma$ and 3$\sigma$ allowed regions projected on
the $(m_{W^{\prime}},g_{H})$ plane. Benchmark points used for mono-jet
analysis are also projected as indicated by the green down-triangle (BP1) and
red up-triangle (BP2).
Figure 2: The 1$\sigma$ and 2$\sigma$ allowed contours projected on the
$(m_{W^{\prime}},\sigma_{W^{\prime}p}^{\rm SI})$ plane (left) and the
$(m_{A^{\prime}},\varepsilon)$ plane (right). The experimental excluded
regions used are shown as solid colored regions. Projected experimental limits
are shown as dotted lines.
Results.– To sample the parameter space we use the affine invariant Markov
Chain Monte Carlo (MCMC) ensemble sampler emcee [37] which presents advantages
such as fast calculation of parameter distributions in multi-dimensions. The
initial priors and ranges of the free parameters used in our scan are
tabulated in Table 1.
The DM constraints from the relic density, direct detection and indirect
detection are calculated using micrOMEGAs [38] and a set of model files
generated by FeynRules [39]. For the invisible decay branching ratio of the
Higgs, we take advantage of the use of CalcHEP [40] within micrOMEGAs to
calculate the decay width along with the rest of the DM constraints just
mentioned.
All the points outside the theoretical constraints are simply rejected. By the
same token, the dark photon constraints are used to reject any parameter
combination of $\varepsilon_{e,\mu}$-$m_{A^{\prime}}$ or
$\varepsilon_{e,\mu}$-$m_{Z^{\prime}}$ located inside the currently excluded
regions. The rest of the constraints are summed into a total $\chi^{2}$ that
also includes relic density and direct detection SI cross section. In the case
of direct detection experiments, where a limit is reported at a 95% C.L. with
null-signal assumption, we use a $\chi_{\text{DD}}^{2}$ of the form
$\chi^{2}_{\text{DD}}=4.61\times\left(\sigma_{\text{theory}}/\sigma_{\text{limit}}\right)^{2}$,
where the factor 4.61 allows $\chi_{\text{DD}}^{2}=4.61$ when we are exactly
at the 95% C.L. in 2D. In mass ranges where more than one limit exists we take
the one with the largest $\chi_{\text{DD}}^{2}$. Note that, due to ISV, the
largest $\chi^{2}$ for direct detection may not correspond to the experiment
with the smallest cross section. Since direct detection limits are reported
assuming $f_{p}=f_{n}$ in (7), it is possible for ISV ($f_{p}\neq f_{n}$) to
produce some amount of cancellation or enhancement of the limits depending on
the atoms used in the detector. Computing the SI cross section in the manner
described earlier in the direct detection allows us to account for ISV and the
different atoms used in different experiments. For the Higgs invisible decay branching fraction, we use $\chi^{2}_{\text{inv}}=2.71\times\left({\rm BR}({h_{1}\rightarrow{\rm inv}})/0.13\right)^{2}$, where the factor 2.71 allows for $\chi^{2}_{\text{inv}}=2.71$ when our result is exactly at the reported 95% C.L. in 1D.
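The two one-sided $\chi^{2}$ penalties just described can be sketched as:

```python
def chi2_dd(sigma_theory, sigma_limits):
    """Direct detection chi^2 as described in the text: take the
    largest value over all experiments covering the mass point."""
    return max(4.61 * (sigma_theory / lim) ** 2 for lim in sigma_limits)

def chi2_inv(br_inv, br_limit=0.13):
    """Higgs invisible branching ratio chi^2 (95% C.L. limit in 1D)."""
    return 2.71 * (br_inv / br_limit) ** 2

# A point exactly at a 95% C.L. limit contributes 4.61 (2D) or 2.71 (1D)
print(chi2_dd(1e-42, [1e-42, 5e-42]))  # -> 4.61
print(chi2_inv(0.13))                  # -> 2.71
```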
Fig. 1 shows the allowed region projected on the $(m_{W^{\prime}},g_{H})$
plane where the band-shaped region is caused by the relation $\Omega_{\rm
DM}h^{2}\propto 1/\langle\sigma v\rangle$. Since $\sigma\propto
g_{H}^{2}m_{W^{\prime}}^{2}/s^{2}$, for $s\sim 4m_{W^{\prime}}^{2}$ we have
$g_{H}^{2}m_{W^{\prime}}^{2}/s^{2}\sim g_{H}^{2}/16m_{W^{\prime}}^{2}$
resulting in $\Omega_{\rm DM}h^{2}\propto m_{W^{\prime}}^{2}/g_{H}^{2}$. In
order to have a constant relic density, $m_{W^{\prime}}$ and $g_{H}$ have to
maintain a linear relation as displayed in the plot. Deviation from this band
results in a relic density lying outside Planck’s allowed range.
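The scaling argument can be made explicit with a toy calculation in which all prefactors are dropped (an illustrative sketch, not the actual relic computation):

```python
import math

def g_H_for_relic(m_wp, K):
    """Toy scaling: keeping Omega ~ m_wp^2 / g_H^2 = K fixed
    forces g_H proportional to m_wp (all prefactors dropped)."""
    return m_wp / math.sqrt(K)

K = 1e6  # arbitrary constant fixing the relic density in the toy model
m1, m2 = 0.1, 1.0
slope = (math.log10(g_H_for_relic(m2, K)) - math.log10(g_H_for_relic(m1, K))) \
        / (math.log10(m2) - math.log10(m1))
print(slope)  # slope ~ 1: a unit-slope band on the log-log axes of Fig. 1
```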
Note that the allowed region in Fig. 1 is bounded in its top-right and
bottom-left corners by direct detection (DD) and the constraint on dark
photon, respectively. We know that the direct detection cross section grows
with $g_{H}^{2}$ as seen in (8), therefore, it is expected to see it setting
an upper bound on $g_{H}$. The bottom-left region is disfavored by dark photon searches, mostly due to the constraint from $\nu$-CAL I, which limits $\varepsilon$ from below, as will be explained in the next figure.
In the left panel of Fig. 2 we show the allowed region projected on the
($m_{W^{\prime}}$, $\sigma_{W^{\prime}p}^{\rm SI}$) plane. The dark (light)
blue shaded zone represents the $1\sigma$ ($2\sigma$) allowed region. The
current DM direct detection measurements from CRESST III (green) [29],
DarkSide-50 (orange) [30] and XENON1T (brown) [31] constrain the DM mass to
remain below $\sim 2$ GeV. A small part of the $2\sigma$ allowed region lies
below the neutrino floor (light orange), where the coherent neutrino-nucleus
scattering would dominate over any DM signal. Additionally, we show that
experiments in the near future such as NEWS-G [41], SuperCDMS [42] and CDEX
[43] can further probe our allowed parameter space, in particular
$m_{W^{\prime}}$ can reach $\gtrsim 0.3$ GeV
with NEWS-G and $\sigma_{W^{\prime}p}^{\rm SI}$ can push down to about
$10^{-44}$ cm2 with SuperCDMS and CDEX.
The right panel in Fig. 2 shows the allowed region projected on the
($m_{A^{\prime}}$, $\varepsilon$) plane. Various experimental limits from dark
photon searches are displayed in color shaded zones including LHCb (green)
[15], BaBar (pink) [16], NA48 (purple) [17], NA64 (light brown) [18], E141
(magenta) [19] and $\nu$-CAL I (olive green) [20]. The dilepton searches at
the LHCb, BaBar and NA48 put upper limits of $\varepsilon\lesssim 10^{-3}$ for
$m_{A^{\prime}}\gtrsim 0.03$
GeV, especially LHCb which sets a strong limit on $\varepsilon$ at $0.2$ GeV
$<m_{A^{\prime}}<0.5$ GeV causing a concave region in the $2\sigma$ allowed
region at this mass range. We note that this concave region due to LHCb
corresponds to the right-tilted concave region at
$(m_{W^{\prime}},\sigma_{W^{\prime}p}^{\rm SI})\sim(1\,\rm{GeV},10^{-42}\,\rm{cm^{2}})$ in the left panel of the same
figure. Experimental probes of dark photon, dark $Z^{\prime}$ and dark matter
are thus correlated. The LHCb long lived dark photon search constraints [15]
are also shown by the two isolated green shaded islands.
On the other hand, the beam dump experiments NA64, E141 and $\nu$-CAL I close
the available space for smaller $\varepsilon$ and lighter $m_{A^{\prime}}$
setting lower bounds of $m_{A^{\prime}}>0.02$ GeV and
$\varepsilon\gtrsim 2\times 10^{-5}$. The lower limit on $\varepsilon$ for $m_{A^{\prime}}>0.05$ GeV is
due to the DM relic density measured by the Planck experiment.
Interestingly, our final allowed region is located in the gap between the
beam-dump and the collider based experiments, an area of special interest for
future dark photon searches. For instance, Belle-II [44] with a luminosity of
$50\ {\rm ab}^{-1}$ can probe $\varepsilon$ down to $2\times 10^{-4}$, the
next upgrade of NA64 [45] can cover $10^{-5}\lesssim\varepsilon\lesssim 10^{-3}$ and $m_{A^{\prime}}\lesssim 0.08$
GeV by reaching $\sim 5\times 10^{12}$ electrons-on-target (abbreviated by eot
in the figure) and Advanced WAKEfield Experiment (AWAKE) run 2 [46] can reach
$m_{A^{\prime}}$ up to $0.15$ GeV with $10^{16}$ electrons-on-target with an
energy of 50 GeV. These limits are shown explicitly in the right panel of Fig.
2. In the future, with access to high energy electron-proton colliders, AWAKE
may reach 1 TeV for the electrons, extending $m_{A^{\prime}}$ up to 0.6 GeV
[46].
Conclusions.– To summarize, we found that the simplified G2HDM developed in
this work provides a viable vector DM candidate $W^{\prime}$ with mass down to
$O(10^{-2})$ GeV. All the predictions in the model are in good agreement with
current observations. Importantly, both new vector states, $Z^{\prime}$ and
$A^{\prime}$, play key roles for DM observables. They are the portal
connecting the visible SM and hidden sectors via superweak interactions with
couplings $g_{H}$ and $g_{X}$ of size $O(10^{-5}-10^{-3})$. While the dark
$Z^{\prime}$ can be the dominant resonant contribution for DM relic density,
the dark photon is crucial for DM direct detection. Besides the possibility of
detecting a low mass $W^{\prime}$ in DM direct detection experiments, the dark
photon $A^{\prime}$ is predicted to be well positioned for future observations
that may reach $m_{A^{\prime}}\sim 0.1$ GeV. This work demonstrates that the
G2HDM is a successful and competitive dark matter model with diverse
exploration possibilities.
We conclude that experimental searches for low mass non-abelian $W^{\prime}$ DM, dark photon and dark $Z^{\prime}$ in the sub-GeV range are complementary to each other. Although our analysis is carried out within a specific model, we expect our results to be generic and of general interest.
This work is supported in part by the Ministry of Science and Technology of
Taiwan under Grant Nos. 108-2112-M-001-018 (TCY) and 108-2811-M-001-550 (RR)
and by National Natural Science Foundation of China under Grant Nos. 11775109
and U1738134 (VQT).
## References
* [2] A. H. Chamseddine, R. L. Arnowitt and P. Nath, Phys. Rev. Lett. 49, 970 (1982); id. Phys. Lett. B 121, 33-36 (1983).
* [3] H. P. Nilles, Phys. Rept. 110, 1-162 (1984).
* [4] G. Jungman, M. Kamionkowski and K. Griest, Phys. Rept. 267, 195-373 (1996).
* [5] W. C. Huang, Y. L. S. Tsai and T. C. Yuan, JHEP 1604, 019 (2016).
* [6] C. R. Chen et al. Phys. Rev. D 101, no.3, 035037 (2020).
* [7] R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980); id. Phys. Rev. D 23, 165 (1981).
* [8] G. C. Branco et al. Phys. Rept. 516, 1-102 (2012).
* [9] A. Arhrib et al. Phys. Rev. D 98, no.9, 095006 (2018).
* [10] C. T. Huang et al. JHEP 1909 (2019) 048.
* [11] For earlier suggestions of $W^{\prime}$ as DM candidate, see for instance, C. D. Carone and R. Ramos, Phys. Rev. D 88, 055020 (2013); B. Barman, S. Bhattacharya, S. K. Patra and J. Chakrabortty, JCAP 12, 021 (2017); B. Barman, S. Bhattacharya and M. Zakeri, JCAP 02, 029 (2020); T. Abe, M. Fujiwara, J. Hisano and K. Matsushita, JHEP 07, 136 (2020).
* [12] P. A. Zyla et al. [Particle Data Group], PTEP 2020, no.8, 083C01 (2020).
* [13] R. Ramos, T.-C. Yuan and V. Q. Tran, in preparation.
* [14] M. Fabbrichesi, E. Gabrielli and G. Lanfranchi, [arXiv:2005.01515 [hep-ph]].
* [15] R. Aaij et al. [LHCb], Phys. Rev. Lett. 124, no.4, 041801 (2020).
* [16] J. P. Lees et al. [BaBar], Phys. Rev. Lett. 113, no.20, 201801 (2014).
* [17] J. R. Batley et al. [NA48/2], Phys. Lett. B 746, 178-185 (2015).
* [18] D. Banerjee et al. [NA64], Phys. Rev. Lett. 120, no.23, 231802 (2018).
* [19] E. M. Riordan et al. Phys. Rev. Lett. 59, 755 (1987).
* [20] J. Blümlein and J. Brunner, Phys. Lett. B 701, 155-159 (2011); id. Phys. Lett. B 731, 320-326 (2014).
* [21] G. Aad et al. [ATLAS], Phys. Rev. D 101, no.1, 012002 (2020).
* [22] A. M. Sirunyan et al. [CMS], Eur. Phys. J. C 79, no.5, 421 (2019).
* [23] [ATLAS], ATLAS-CONF-2020-008.
* [24] N. Aghanim et al. [Planck], Astron. Astrophys. 641, A6 (2020).
* [25] M. Escudero, A. Berlin, D. Hooper and M. X. Lin, JCAP 12, 029 (2016)
* [26] G. Arcadi et al. Eur. Phys. J. C 78, no.3, 203 (2018).
* [27] J. L. Feng, J. Kumar, D. Marfatia and D. Sanford, Phys. Lett. B 703, 124-127 (2011).
* [28] C. E. Yaguna, Phys. Rev. D 95, no.5, 055015 (2017).
* [29] G. Angloher et al. [CRESST], Eur. Phys. J. C 77, no.9, 637 (2017).
* [30] P. Agnes et al. [DarkSide], Phys. Rev. Lett. 121, no.8, 081307 (2018).
* [31] E. Aprile et al. [XENON], Phys. Rev. Lett. 123, no.25, 251801 (2019).
* [32] M. Ackermann et al. [Fermi-LAT], Phys. Rev. Lett. 115, no.23, 231301 (2015).
* [33] A. Albert et al. [Fermi-LAT and DES], Astrophys. J. 834, no.2, 110 (2017).
* [34] M. Aaboud et al. [ATLAS], JHEP 01, 126 (2018).
* [35] [ATLAS], ATLAS-CONF-2020-048.
* [36] A. M. Sirunyan et al. [CMS], JHEP 07, 014 (2017).
* [37] D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publ. Astron. Soc. Pac. 125, 306-312 (2013).
* [38] G. Bélanger et al. Comput. Phys. Commun. 231, 173-186 (2018).
* [39] A. Alloul et al. Comput. Phys. Commun. 185, 2250-2300 (2014).
* [40] A. Belyaev, N. D. Christensen and A. Pukhov, Comput. Phys. Commun. 184, 1729-1769 (2013).
* [41] M. Battaglieri et al. [arXiv:1707.04591 [hep-ph]].
* [42] R. Agnese et al. [SuperCDMS], Phys. Rev. D 95, no.8, 082002 (2017).
* [43] H. Ma et al. [CDEX], J. Phys. Conf. Ser. 1342, no.1, 012067 (2020).
* [44] E. Kou et al. [Belle-II], PTEP 2019, no.12, 123C01 (2019) [erratum: PTEP 2020, no.2, 029201 (2020)].
* [45] D. Banerjee et al., CERN-SPSC-2018-004 (SPSC-P-348-ADD-2).
* [46] A. Caldwell et al. [arXiv:1812.11164 [physics.acc-ph]].
# Control technique for synchronization of selected nodes in directed networks
Bruno Ursino, Lucia Valentina Gambuzza, Vito Latora, Mattia Frasca∗ B. Ursino,
L.V. Gambuzza, and M. Frasca are with the Department of Electrical Electronic
and Computer Science Engineering, University of Catania, Catania, Italy. V.
Latora is with the School of Mathematical Sciences, Queen Mary University of
London, London E1 4NS, UK and with the Department of Physics and Astronomy,
University of Catania and INFN, Catania, Italy. This work was supported by the
Italian Ministry for Research and Education (MIUR) through Research Program
PRIN 2017 under Grant 2017CWMF93. ∗ Email<EMAIL_ADDRESS>
###### Abstract
In this Letter we propose a method to control a set of arbitrary nodes in a
directed network such that they follow a synchronous trajectory which is, in
general, not shared by the other units of the network. The problem is inspired
to those natural or artificial networks whose proper operating conditions are
associated to the presence of clusters of synchronous nodes. Our proposed
method is based on the introduction of distributed controllers that modify the
topology of the connections in order to generate outer symmetries in the nodes
to be controlled. An optimization problem for the selection of the
controllers, which includes as a special case the minimization of the number
of the links added or removed, is also formulated and an algorithm for its
solution is introduced.
###### Index Terms:
Network analysis and control; Control of networks.
## I Introduction
In the last few decades most of the works on synchronization control in
complex networks have focused on the problem of steering the network towards a
collective state shared by all the units. Such a synchronized state has been
obtained by means of techniques ranging from pinning control [1, 2] to
adaptive strategies [3], discontinuous coupling [4], stochastic broadcasting
[5] and impulsive control [6]. Other studies have focused on the control of a
more structured state where the units split into clusters of synchronized
nodes, and each one of these groups follows a different trajectory [7, 8, 9,
10, 11, 12].
In the above mentioned works, the control action is such that all network
nodes are forced to follow a given dynamical behavior. However, the number of
nodes and links can be very large in real-world systems, so that the question
of whether it is possible to control the state of only a subset of the network
units, disregarding the behavior of the other units, becomes of great
importance. Solving the problem can lead to potentially interesting
applications. Consider a team of mobile agents and the case in which a
particular task can be accomplished by a subset of the agents only. In such a
scenario, one could exploit the relationship between oscillator
synchronization and collective motion [13] and apply control techniques for
synchronizing a subset of nodes to recruit only a group of all the mobile
agents and coordinate them. In a different context, it is well known that
synchronization of the whole brain network is associated to pathological
states, whereas neural areas are actively synchronized to engage in functional
roles [14]. Our approach could provide control techniques supporting
neuromorphic engineering applications relying on the principles of neuronal
computation [15].
Recently, it has been argued that a subset of network nodes can be controlled
by adopting a distributed control paradigm whose formulation relies on the
notion of symmetries in a graph [16, 17]. The approach there presented is
restricted to undirected graphs, whereas here we propose a control technique
for the more general case of directed networks. We find that, in order to form
a synchronous cluster, the nodes to control must have the same set of
successors, and the common value of their out-degree has to be larger than a
threshold, which decreases when the coupling strength in the network is
increased. Both conditions can be matched by a proper design of controllers
adding to or removing links from the original network structure. The selection
of the controllers is addressed by formulating an optimization problem,
minimizing an objective function which accounts for the costs associated to
adding and/or removing links. We show that an exact solution to the problem
can be found, and we propose an algorithm to calculate it.
The rest of the paper is organized as follows: Sec. II contains the
preliminaries; the problem is formulated in Sec. III; a theorem illustrating
how to design the controllers is illustrated in Sec. IV; the optimization
problem and its solution are dealt with in Sec. V; an example of our approach
is provided in Sec. VI and the conclusions are drawn in Sec. VII.
## II Preliminaries
In this section we introduce notations and definitions used in the rest of the
paper [18]. A graph $\mathcal{G}=\left(\mathcal{V},\mathcal{E}\right)$
consists of a set of _vertices_ or _nodes_
$\mathcal{V}=\left\\{v_{1},...,v_{N}\right\\}$ and a set of _edges_ or _links_
$\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$. Network nodes are
equivalently indicated as $v_{i}$ or, shortly, as $i$. If
$\forall\left(v_{i},v_{j}\right)\in\mathcal{E}\Rightarrow\left(v_{j},v_{i}\right)\in\mathcal{E}$,
the graph is _undirected_ , otherwise it is _directed_. Only simple (i.e.,
containing no loops and no multiple edges) directed graphs are considered in
what follows. The set
$\mathcal{S}_{i}=\left\\{v_{j}\in\mathcal{V}|\left(v_{i},v_{j}\right)\in\mathcal{E}\right\\}$
is the set of _successors_ of $v_{i}$ (in undirected graphs $\mathcal{S}_{i}$
coincides with the set of neighbors).
The graph $\mathcal{G}$ can be described through the adjacency matrix
$\mathrm{A}$, an $N\times N$ matrix, with $N=|\mathcal{V}|$, whose elements
are $a_{ij}=1$ if $v_{j}\in\mathcal{S}_{i}$ and $a_{ij}=0$, otherwise. We
define the _out-degree_ of a node $i$ as the number of its successors,
$k_{i}=|\mathcal{S}_{i}|=\sum_{j=1}^{N}a_{ij}$. The Laplacian matrix,
$\mathcal{L}$, is defined as $\mathcal{L}=D-\mathrm{A}$, where
$D=diag\left\\{k_{1},...,k_{N}\right\\}$. Its elements are:
$\mathcal{L}_{ij}=k_{i}$, if $i=j$, $\mathcal{L}_{ij}=0$, if $i\neq j$ and
$v_{j}\notin\mathcal{S}_{i}$, and $\mathcal{L}_{ij}=-1$, if $i\neq j$ and
$v_{j}\in\mathcal{S}_{i}$. From the definition it immediately follows that
$\mathcal{L}1_{N}=0_{N,1}$ and, so, $0$ is an eigenvalue of the Laplacian
matrix with corresponding eigenvector $1_{N}$.
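The definitions above can be sketched in a few lines of code; the small directed graph below is a hypothetical example chosen only for illustration, not a network from the paper.

```python
# Build the adjacency matrix A and the Laplacian L = D - A of a directed graph,
# then check that every row of L sums to zero (1_N is the eigenvector of 0).

def laplacian(edges, n):
    """A[i][j] = 1 iff (v_i, v_j) is an edge, i.e. v_j is a successor of v_i."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
    L = [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(n)]
         for i in range(n)]
    return A, L

# A toy 4-node directed graph (illustrative only).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 0)]
A, L = laplacian(edges, 4)

row_sums = [sum(row) for row in L]
print(row_sums)  # [0, 0, 0, 0]
```

The diagonal entry `L[0][0]` equals the out-degree of node 0 (here 2), matching the definition $\mathcal{L}_{ii}=k_{i}$.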
While arguments based on network symmetries are used for controlling groups of
nodes in undirected networks [17], directed topologies require the notion of
_outer symmetrical nodes_ , here introduced. We define two nodes $v_{i}$ and
$v_{j}$ to be _outer symmetrical_ if $\mathcal{S}_{i}=\mathcal{S}_{j}$ and
$\left(v_{i},v_{j}\right)\notin\mathcal{E}$. This notion is more restrictive
than that of input equivalence given in [19] for networks including different
node dynamics and coupling functions. In [19] the _input set_ of a node
$v_{i}$ is defined as
$I(v_{i})=\left\\{e\in\mathcal{E}:e=\left(v_{i},v_{j}\right)\mbox{ for some
}v_{j}\in V\right\\}$. Two nodes $v_{i}$ and $v_{j}$ are called _input
equivalent_ if and only if there exists a bijection $\beta:I(v_{i})\rightarrow
I(v_{j})$ such that the _type of connection_ is preserved, that is the
coupling function is the same and the extremes of the edges have the same
dynamics. For networks of identical dynamical units and coupling functions, as
those considered in our work, input equivalent nodes are nodes with the same
out-degree. To be outer symmetrical, a further condition is required: outer
symmetrical nodes are input equivalent nodes where the bijection is the
identity. This property is fundamental for the control problem dealt with in
our paper.
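The outer-symmetry test is straightforward to state in code. The following sketch uses a hypothetical edge list (not from the paper); note that when $\mathcal{S}_{i}=\mathcal{S}_{j}$ in a simple graph, the absence of the edge $(v_{i},v_{j})$ already implies the absence of $(v_{j},v_{i})$, but the check below tests both directions for clarity.

```python
# Predicate for the outer-symmetry condition: same successor sets and no edge
# joining the two nodes (in either direction; see note above).

def successors(edges, i):
    return {j for (a, j) in edges if a == i}

def outer_symmetrical(edges, i, j):
    return (successors(edges, i) == successors(edges, j)
            and (i, j) not in edges and (j, i) not in edges)

# Illustrative graph: nodes 0 and 1 share successors {2, 3} and are unlinked.
edges = [(0, 2), (0, 3), (1, 2), (1, 3), (2, 0)]
print(outer_symmetrical(edges, 0, 1))  # True
print(outer_symmetrical(edges, 0, 2))  # False: different successor sets
```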
## III Problem formulation
Let us consider a directed network of $N$ identical, $n$-dimensional units
whose dynamics is given by
$\dot{x_{i}}=f(x_{i})-\sigma\sum_{j=1}^{N}\mathcal{L}_{ij}\mathrm{H}x_{j}+u_{i},\forall
i=1,...,N$ (1)
with $x_{i}=\left(x_{i1},x_{i2},...,x_{in}\right)^{T}$. Here,
$f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ is the uncoupled dynamics of each
unit, $H\in\mathbb{R}^{n\times n}$ is a constant matrix with elements taking
values in $\\{0,1\\}$ that represents inner coupling, i.e., it specifies the
components of the state vectors through which node $j$ is coupled to node $i$,
and $\sigma>0$ is the coupling strength. The terms $u_{i}$ represent the control actions
on the network. Equations (1) with $u_{i}=0$ are extensively used to model
diffusively coupled oscillators in biology, chemistry, physics and engineering
[20].
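A minimal numerical sketch of model (1) with $u_{i}=0$ is given below, using forward-Euler integration, a scalar toy dynamics $f(x)=-x$ and inner coupling $H=1$; these choices are illustrative assumptions, not the setup studied in the paper.

```python
# Forward-Euler integration of x_i' = f(x_i) - sigma * sum_j L_ij x_j with
# f(x) = -x (scalar, contracting toy dynamics) over a directed ring.

def simulate(L, x0, sigma=0.5, dt=0.01, steps=2000):
    x = list(x0)
    N = len(x)
    for _ in range(steps):
        coupling = [sum(L[i][j] * x[j] for j in range(N)) for i in range(N)]
        x = [x[i] + dt * (-x[i] - sigma * coupling[i]) for i in range(N)]
    return x

# Laplacian of the directed ring 0 -> 1 -> 2 -> 0 (out-degree 1 per node).
L = [[1, -1, 0], [0, 1, -1], [-1, 0, 1]]
x = simulate(L, [1.0, -0.5, 0.3])
print(all(abs(v) < 1e-6 for v in x))  # True: contracting dynamics damp all units
```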
Equations (1) can be rewritten in compact form as
$\dot{\mathbf{x}}=F(\mathbf{x})-\sigma\mathcal{L}\otimes\mathrm{H}\mathbf{x}+\mathbf{u}$
(2)
where $\mathbf{x}=\left[x_{1}^{T},x_{2}^{T},...,x_{N}^{T}\right]^{T}$,
$F(\mathbf{x})=\left[f^{T}(x_{1}),f^{T}(x_{2}),...,f^{T}(x_{N})\right]^{T}$
and $\mathbf{u}=\left[u_{1}^{T},u_{2}^{T},...,u_{N}^{T}\right]^{T}$. In the
following we use distributed controllers of the form
$u_{i}=-\sigma\sum_{j=1}^{N}\mathcal{L}^{\prime}_{ij}\mathrm{H}x_{j}$ (3)
where $\mathcal{L}^{\prime}$ is a matrix whose elements are $-1$, $0$ or $1$;
if $\mathcal{L}_{ij}=0$, setting $\mathcal{L}^{\prime}_{ij}=-1$ introduces a
link between two nodes, $i$ and $j$, not connected in the pristine network; on
the contrary, setting $\mathcal{L}^{\prime}_{ij}=1$ in correspondence of
$\mathcal{L}_{ij}=-1$ removes the existing edge $(v_{i},v_{j})$ in the
pristine network; finally, setting $\mathcal{L}^{\prime}_{ij}=0$ indicates no
addition or removal of links between $i$ and $j$. The diagonal elements of
$\mathcal{L}^{\prime}$ are $\mathcal{L}^{\prime}_{ii}=-\sum_{j=1,j\neq
i}^{N}\mathcal{L}^{\prime}_{ij}$. Notice that, even if $\mathcal{L}^{\prime}$
is not a Laplacian, the resulting matrix
$\mathcal{L}^{\prime\prime}=\mathcal{L}+\mathcal{L}^{\prime}$ (a matrix
representing the network formed by the original topology and the links added
or removed by the controllers) is instead a Laplacian.
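The semantics of $\mathcal{L}^{\prime}$ and the Laplacian property of $\mathcal{L}^{\prime\prime}=\mathcal{L}+\mathcal{L}^{\prime}$ can be checked on a toy network; the node indices and the added/removed links below are illustrative assumptions.

```python
# A controller matrix L' that adds the link (0, 3) and removes (0, 1) from a
# 4-node directed ring; the controlled matrix L'' = L + L' must again have
# zero row sums, i.e., be a Laplacian.

def laplacian(edges, n):
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
    return [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(n)]
            for i in range(n)]

n = 4
L = laplacian([(0, 1), (1, 2), (2, 3), (3, 0)], n)

# L': -1 adds a link, +1 removes one; the diagonal balances each row to zero.
Lp = [[0] * n for _ in range(n)]
Lp[0][3] = -1   # add edge (0, 3)
Lp[0][1] = 1    # remove edge (0, 1)
Lp[0][0] = -(Lp[0][1] + Lp[0][3])  # = 0 here: one addition, one removal

Lpp = [[L[i][j] + Lp[i][j] for j in range(n)] for i in range(n)]
print([sum(row) for row in Lpp])  # [0, 0, 0, 0]
```

Row 0 of `Lpp` is `[1, 0, 0, -1]`: node 0 now has the single successor 3 instead of 1, while its out-degree (the diagonal entry) is unchanged.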
The system with the controllers reads
$\dot{\mathbf{x}}=F(\mathbf{x})-\sigma\mathcal{L}^{\prime\prime}\otimes\mathrm{H}\mathbf{x}$
(4)
The problem tackled in this paper is twofold: i) given an arbitrary subset
$V_{n_{2}}$ of $n_{2}<N$ nodes, to determine a set of controllers $u_{i}$ with
$i=1,\ldots,N$ such that the nodes in $V_{n_{2}}$ synchronize with each other;
ii) to formulate an optimization problem for the selection of the controllers
$u_{i}$.
Without loss of generality, we relabel the network nodes so that the nodes to
control are indexed as $i=n_{1}+1,\ldots,N$, such that
$V_{n_{2}}=\left\\{v_{n_{1}+1},...,v_{N}\right\\}$. The objective of the
controllers is, therefore, to achieve a synchronous evolution of the type
$\begin{cases}x_{1}(t)=s_{1}(t)\\\ \vdots\\\ x_{n_{1}}(t)=s_{n_{1}}(t)\\\
x_{n_{1}+1}(t)=x_{n_{1}+2}(t)=...=x_{N}(t)=s(t),\quad
t\rightarrow+\infty\end{cases}$ (5)
In compact form the synchronous state is denoted as
$\mathbf{x}^{s}(t)=\left[s_{1}^{T}(t),...,s_{n_{1}}^{T}(t),s^{T}(t),...,s^{T}(t)\right]^{T}$.
In the most general case, the trajectories of the first $n_{1}$ nodes are
different from each other and from $s(t)$, that is, $s_{i}(t)\neq s_{j}(t)\neq
s(t)$ for $i,j=1,\ldots,n_{1}$, but eventually some of them may coincide or
converge to $s(t)$. In the next section, we demonstrate how to select the
controllers such that the state $\mathbf{x}^{s}(t)$ exists and is locally
exponentially stable, while, in the second part of the paper, we consider the
optimization problem.
## IV Design of the controllers
To achieve a stable synchronous state $\mathbf{x}^{s}(t)$ the controllers
$u_{i}$ as in Eq. (3) have to satisfy the conditions expressed by the
following theorem.
###### Theorem IV.1.
Consider the dynamical network (1) and the controllers (3) such that the
Laplacian $\mathcal{L}^{\prime\prime}$ satisfies the following conditions:
1. 1.
$\mathcal{L}^{\prime\prime}_{i_{1},j}=\mathcal{L}^{\prime\prime}_{i_{2},j}$
for $i_{1}=n_{1}+1,\ldots,N$, $i_{2}=n_{1}+1,\ldots,N$, $j=1,\ldots,N$ and
$j\neq i_{1}$, $j\neq i_{2}$;
2. 2.
$\mathcal{L}^{\prime\prime}_{i_{1},i_{2}}=0$ for $i_{1}=n_{1}+1,\ldots,N$,
$i_{2}=n_{1}+1,\ldots,N$ with $i_{1}\neq i_{2}$;
then, a synchronous behavior
$\mathbf{x}^{s}(t)=\left[s_{1}^{T}(t),...,s_{n_{1}}^{T}(t),s^{T}(t),...,s^{T}(t)\right]^{T}$
exists.
In addition, define $k_{i}=\mathcal{L}^{\prime\prime}_{i,i}$ (the out-degree of node $i$ in the controlled network) with
$i=n_{1}+1,\ldots,N$, and, since from hypothesis 1)
$k_{n_{1}+1}=\ldots=k_{N}$, define $k\triangleq k_{n_{1}+1}=\ldots=k_{N}$. If
1. 3.
there exists a diagonal matrix $\mathrm{L}>0$ and two constants
$\overline{q}>0$ and $\tau>0$ such that the following linear matrix inequality
(LMI) is satisfied $\forall q\geq\overline{q}$ and $t>0$:
$\left[Df(s(t))-qH\right]^{T}L+L\left[Df(s(t))-qH\right]\leq-\tau I_{n},$ (6)
where $Df(s(t))$ is the Jacobian of $f$ evaluated on $s(t)$;
2. 4.
$k$ is such that $k>\frac{\overline{q}}{\sigma}$;
then, the synchronous state is locally exponentially stable.
###### Proof.
Existence of the synchronous solution. Hypotheses 1) and 2) induce some
structural properties in the new network defined by the original topology and
the controller links. In particular, hypothesis 1) is equivalent to require
that each node in $V_{n_{2}}$ has the same set of successors, that is,
$\mathcal{S}^{\prime\prime}_{n_{1}+1}=\ldots=\mathcal{S}^{\prime\prime}_{N}$,
while hypothesis 2) requires that there are no links between any pair of nodes
in $V_{n_{2}}$, that is, $\forall v_{i},v_{j}\in
V_{n_{2}}\Rightarrow\left(v_{i},v_{j}\right)\notin\mathcal{E}$. Consequently,
selecting the controllers such that hypotheses 1) and 2) hold makes the nodes
in $V_{n_{2}}$ outer symmetrical.
In turn, this means that, with reference to the system in Eq. (4), if we
permute the nodes in $V_{n_{2}}$, the dynamical network does not change, and
the $n_{2}$ nodes have the same equation of motion. If the nodes in
$V_{n_{2}}$ start from the same initial conditions, then they remain
synchronized for $t>t_{0}$, and thus a synchronous solution
$\mathbf{x}^{s}(t)$ as in Eq. (5) exists.
Local exponential stability of the synchronous solution. To prove the
stability of $\mathbf{x}^{s}(t)$, we first prove that the synchronous solution
$\mathbf{x}^{s}(t)$ is locally exponentially stable if
$\dot{\zeta}=\left(Df-\sigma kH\right)\zeta$ (7)
is locally exponentially stable.
We first consider Eq. (4) and linearize it around $\mathbf{x}^{s}(t)$. We
define $\eta=\mathbf{x}-\mathbf{x}^{s}$ and calculate its dynamics as
$\dot{\eta}=DF\eta-\sigma\left(\mathcal{L}^{\prime\prime}\otimes\mathrm{H}\right)\eta$
(8)
Let us indicate as $Df_{i}$ the Jacobian of $f$ evaluated on
$\mathbf{x}^{s}_{i}(t)$. Taking into account Eq. (5), it follows that
$Df_{n_{1}+1}=...=Df_{N}\triangleq Df_{s}$ and, hence,
$DF=diag\left\\{Df_{1},...,Df_{n_{1}},Df_{s},...,Df_{s}\right\\}$.
From the structure of $\mathbf{x}^{s}$, it also follows that the synchronous
behavior is preserved for all variations belonging to the linear subspace
$\mathbb{P}$ generated by the column vectors of the following matrix
$\mathrm{M}_{s}=\begin{bmatrix}1&0&...&0&0\\\ 0&1&...&0&0\\\
\vdots&\vdots&\ddots&\vdots&\vdots\\\ 0&0&...&1&0\\\
0&0&...&0&\frac{1}{\sqrt{n_{2}}}\\\ \vdots&\vdots&\ddots&\vdots&\vdots\\\
0&0&...&0&\frac{1}{\sqrt{n_{2}}}\\\ \end{bmatrix}\otimes I_{n}$
Such variations in fact occur along the synchronization manifold where all the
last $n_{2}$ units have the same evolution. The column vectors of
$\mathrm{M}_{s}$ represent an orthonormal basis for the considered linear
subspace with $dim(\mathbb{P})=n(n_{1}+1)$. The remaining vectors in
$\mathbb{R}^{nN}\setminus\mathbb{P}$ represent transversal motions with
respect to the synchronization manifold.
An orthonormal basis for $\mathbb{R}^{nN}$ is built by considering a linear
vector space $\mathbb{O}$ of $dim(\mathbb{O})=n(n_{2}-1)$ that is orthogonal
to $\mathbb{P}$. All vectors of $\mathbb{R}^{nN}$ can be thus expressed as
linear combinations of vectors in $\mathbb{P}$ and vectors in $\mathbb{O}$,
that is, $\eta=\left(M\otimes I_{n}\right)\xi$ with
${M}=\begin{bmatrix}1&0&...&0&0&0&...&0\\\ 0&1&...&0&0&0&...&0\\\
\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\\
0&0&...&1&0&0&...&0\\\
0&0&...&0&\frac{1}{\sqrt{n_{2}}}&o_{n_{1}+1,1}&...&o_{n_{1}+1,n_{2}-1}\\\
0&0&...&0&\frac{1}{\sqrt{n_{2}}}&o_{n_{1}+2,1}&...&o_{n_{1}+2,n_{2}-1}\\\
\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\\
0&0&...&0&\frac{1}{\sqrt{n_{2}}}&o_{N,1}&...&o_{N,n_{2}-1}\\\ \end{bmatrix}$
(9)
For compactness, matrix ${M}$ is rewritten as
$M=\begin{bmatrix}I_{n_{1}}&0\\\ 0&R_{n_{2}}\\\ \end{bmatrix}$.
The evolution of the first $n(n_{1}+1)$ elements of $\xi$ is the evolution of
motions along the synchronization manifold, while the remaining elements of
$\xi$ are transversal to the synchronization manifold. As a consequence of
this, to prove the exponential stability of the synchronization manifold, we
have to prove that the evolution of the last $n(n_{2}-1)$ elements of vector
$\xi$ decays exponentially to $0$ as $t\rightarrow+\infty$.
We now apply the transformation $\xi=\left(M\otimes I_{n}\right)^{-1}\eta$ to
Eq. (8):
$\small\begin{array}[]{l}\dot{\xi}=\left(M^{-1}\otimes
I_{n}\right)DF\left(M\otimes
I_{n}\right)\xi-\sigma\left(M^{-1}\mathcal{L}^{\prime\prime}M\right)\otimes
H\xi\end{array}$ (10)
Straightforward calculations yield that $\left(M^{-1}\otimes
I_{n}\right)DF\left(M\otimes I_{n}\right)=DF$. Let us now focus on
$M^{-1}\mathcal{L}^{\prime\prime}M$. To calculate this term, we partition
$\mathcal{L}^{\prime\prime}$ in (4) as follows:
$\mathcal{L}^{\prime\prime}=\begin{bmatrix}A_{n_{1}\times
n_{1}}&B_{n_{1}\times n_{2}}\\\ C_{n_{2}\times n_{1}}&D_{n_{2}\times n_{2}}\\\
\end{bmatrix}$ (11)
From hypothesis 2), it follows that $D=kI_{n_{2}}$. Consider now the block
$C_{n_{2}\times n_{1}}$. From hypothesis 1) we have that
$\mathcal{L}^{\prime\prime}_{i_{1},j}=\mathcal{L}^{\prime\prime}_{i_{2},j},\forall
j\leq n_{1},\forall i_{1},i_{2}>n_{1}$. Denoting with $C_{i}$ the $i$-th row
of $C$ we obtain that $C_{i_{1}}=C_{i_{2}},\quad\forall
i_{1},i_{2}=1,...,n_{2}$.
Given that
$\begin{array}[]{c}M^{-1}\mathcal{L}^{\prime\prime}M=\begin{bmatrix}A&BR_{n_{2}}\\\
R_{n_{2}}^{T}C&R_{n_{2}}^{T}DR_{n_{2}}\\\ \end{bmatrix}\end{array}$ (12)
since $D=kI_{n_{2}}$ and all the rows in $C$ are equal, we can rewrite
$\mathcal{L}^{\prime\prime}$ as:
$\mathcal{L}^{\prime\prime}=\begin{bmatrix}A&B\\\
\begin{matrix}\begin{matrix}c_{1}\\\ c_{1}\\\ \vdots\\\
c_{1}\end{matrix}&...&\begin{matrix}c_{n_{1}}\\\ c_{n_{1}}\\\ \vdots\\\
c_{n_{1}}\end{matrix}\\\ \end{matrix}&\begin{matrix}k&0&...&0\\\ 0&k&...&0\\\
\vdots&\vdots&\ddots&\vdots\\\ 0&0&...&k\end{matrix}\end{bmatrix}$ (13)
where $c_{i}\in\\{-1,0\\}$, depending on whether node $v_{i}$ is or is not a
successor of the nodes in $V_{n_{2}}$.
Notice that the first row of $R_{n_{2}}^{T}$ is a vector parallel to
$\left[1,1,...,1\right]$ while the remaining ones are all orthogonal to it,
so:
$R_{n_{2}}^{T}C=\begin{bmatrix}c_{1}\sqrt{n_{2}}&...&c_{n_{1}}\sqrt{n_{2}}\\\
0&...&0\\\ \vdots&\ddots&\vdots\\\ 0&...&0\end{bmatrix}$ (14)
Moreover
$R_{n_{2}}^{T}DR_{n_{2}}=R_{n_{2}}^{T}kI_{n_{2}}R_{n_{2}}=kI_{n_{2}}=D$.
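Both block identities can be verified numerically on a toy instance; the sizes and entries below are assumed values, with $n_{2}=2$, $n_{1}=3$ and hence $R_{n_{2}}=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\end{bmatrix}$.

```python
# Check that, when C has identical rows, R^T C is nonzero only in its first
# row (with entries c_j * sqrt(n2)), as in Eq. (14).
import math

r = 1 / math.sqrt(2)
R = [[r, r], [r, -r]]               # orthonormal; first column is 1/sqrt(n2)
C = [[-1, 0, -1], [-1, 0, -1]]      # two identical rows [c_1, c_2, c_3]

# (R^T C)[i][j] = sum_k R[k][i] * C[k][j]
RtC = [[sum(R[k][i] * C[k][j] for k in range(2)) for j in range(3)]
       for i in range(2)]
print([round(v, 10) for v in RtC[0]])  # first row: c_j * sqrt(2)
print([round(v, 10) for v in RtC[1]])  # remaining rows: all zeros
```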
It follows that Eq. (10) becomes:
$\small\begin{array}[]{l}\dot{\xi}=\left[\begin{array}[]{cccc|ccc}Df_{1}&\cdots&0&0&0&\cdots&0\\\
\vdots&&\vdots&\vdots&\vdots&&\vdots\\\ 0&\cdots&Df_{n_{1}}&0&0&\cdots&0\\\
0&\cdots&0&Df_{s}&0&\cdots&0\\\ \hline\cr 0&\cdots&0&0&Df_{s}&\cdots&0\\\
\vdots&&\vdots&\vdots&\vdots&&\vdots\\\
0&\cdots&0&0&0&\cdots&Df_{s}\end{array}\right]\xi-\sigma\cdot\\\
\left[\begin{array}[]{ccc|cc}l_{11}\ldots&l_{1n_{1}}&l_{1n_{1}+1}&l_{1n_{1}+2}\ldots&l_{1N}\\\
\vdots&\vdots&\vdots&\vdots&\vdots\\\
l_{n_{1}1}\ldots&l_{n_{1}n_{1}}&l_{n_{1}n_{1}+1}&l_{n_{1}n_{1}+2}\ldots&l_{n_{1}N}\\\
c_{1}\sqrt{n_{2}}\ldots&c_{n_{1}}\sqrt{n_{2}}&k&0\ldots&0\\\ \hline\cr
0\ldots&0&0&k\ldots&0\\\ \vdots&\vdots&\vdots&\vdots&\vdots\\\
0\ldots&0&0&0\ldots&k\end{array}\right]\otimes H\xi\end{array}$ (15)
where the lines in the matrices suggest a partition highlighting the last
$n\left(n_{2}-1\right)$ elements of $\xi$.
The system describing the evolution of variations transversal to the
synchronization manifold is uncoupled from the rest of the equations and
composed of identical blocks, taking the following form:
$\dot{\zeta}=\left(Df_{s}-\sigma kH\right)\zeta,\quad\mbox{ with
}\zeta\in\mathbb{R}^{n}$ (16)
It only remains to prove the exponential stability of (16). By hypothesis 4)
we have that $k>\frac{\overline{q}}{\sigma}$, thus $k\sigma>\overline{q}$.
From the LMI (6) we can use the Lyapunov function $V=\zeta^{T}L\zeta$ to prove
exponential stability:
1. 1.
$V(0)=0$;
2. 2.
$V(\zeta)=\zeta^{T}L\zeta>0,\;\forall\zeta\neq 0$ because $L>0$;
3. 3.
$\dot{V}<0,\;\forall\zeta\neq 0$ in fact:
$\begin{array}[]{l}\frac{d}{dt}\left[\zeta^{T}L\zeta\right]=\dot{\zeta}^{T}L\zeta+\zeta^{T}L\dot{\zeta}\\\
=\zeta^{T}\left[\left(Df_{s}-\sigma kH\right)^{T}L+L\left(Df_{s}-\sigma
kH\right)\right]\zeta\\\ \leq-\tau\zeta^{T}\zeta<0.\end{array}$ (17)
∎
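A numerical sanity check of conditions 3) and 4) is easy in the scalar case $n=1$, with $Df=a$ constant, $H=1$ and $L=1$; all values below are illustrative assumptions. The LMI (6) then reads $2(a-q)\leq-\tau$ for all $q\geq\overline{q}$, and hypothesis 4) requires $\sigma k>\overline{q}$.

```python
# Toy check of the stability conditions and of the decay of the transversal
# dynamics (16): zeta' = (Df - sigma*k*H) zeta = (a - sigma*k) zeta.
a, q_bar, tau = 2.0, 3.0, 1.0
sigma, k = 1.0, 4                 # sigma * k = 4 > q_bar

assert 2 * (a - q_bar) <= -tau    # LMI at q = q_bar; the left side decreases in q
assert sigma * k > q_bar          # hypothesis 4)

# Forward-Euler integration of the transversal perturbation.
zeta, dt = 1.0, 0.01
for _ in range(2000):
    zeta += dt * (a - sigma * k) * zeta
print(abs(zeta) < 1e-10)  # True: the transversal perturbation decays to zero
```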
We note that, in the application of Theorem IV.1, we can first consider the
set formed by the union of the successors of the nodes to control. If the
cardinality of this set is greater than $\frac{\overline{q}}{\sigma}$, then we
can add links such that the successors of each node of $V_{n_{2}}$ are all the
elements of this set. Otherwise, one needs to expand this set by including
other nodes of the network. Interestingly, the choice of such nodes is totally
arbitrary and any node, not yet included in the set of successors, fits for
the purpose.
The upper bound of $k$ is the cardinality of $\mathcal{V}\setminus V_{n_{2}}$
and, since Theorem IV.1 requires that $k>\frac{\overline{q}}{\sigma}$, a
necessary condition for the application of the proposed technique is that
$\sigma>\frac{\overline{q}}{n_{1}}$: if this condition is not met, then, there
are not enough nodes in $\mathcal{V}\setminus V_{n_{2}}$ to which the nodes of
$V_{n_{2}}$ can be connected by the controllers.
## V Optimization
In this section we address the problem of optimizing the controllers with
respect to the cost of the links added or removed. Let $w_{ij}^{-}$
($w_{ij}^{+}$) be the cost associated to the removal of an existing link
(addition of a new link) between $i$ and $j$. These parameters account for a
general scenario where different links have different costs to change.
Formally, the following minimization problem is considered:
$\min\limits_{\mathbf{u}\in\mathcal{U}}\sum\limits_{\mathcal{L}^{\prime}_{i,j}=1}\mathcal{L}^{\prime}_{ij}w_{ij}^{-}+\sum\limits_{\mathcal{L}^{\prime}_{i,j}=-1}|\mathcal{L}^{\prime}_{ij}|w_{ij}^{+}$
(18)
where $\mathcal{U}$ is the set of controllers that satisfies Theorem IV.1 and,
thus, ensures the existence and stability of $\mathbf{x}^{s}(t)$. In the
special case when the costs are equal and unitary, i.e.,
$w_{ij}^{-}=w_{ij}^{+}=1$, the optimization problem reduces to
$\min\limits_{\mathbf{u}\in\mathcal{U}}\sum\limits_{i,j}|\mathcal{L}^{\prime}_{ij}|$
(19)
i.e., minimization of the number of links added or removed by the controllers.
Let $\bar{k}\triangleq\lceil\frac{\bar{q}}{\sigma}\rceil$. Theorem IV.1
requires that the nodes in $V_{n_{2}}$ have a number of successors greater
than or equal to $\bar{k}$, i.e., since $|\mathcal{S}^{\prime\prime}|=k$,
$k\geq\bar{k}$. The optimization problem is thus equivalent to determine the
nodes in $\mathcal{S}^{\prime\prime}$ which minimize the objective function
(18). Consider the set $\bar{\mathcal{S}}=\bigcup_{v_{i}\in
V_{n_{2}}}\mathcal{S}_{i}\setminus V_{n_{2}}$, containing the nodes that, in
the pristine network, are successors of at least one node of $V_{n_{2}}$ but
are not in $V_{n_{2}}$. Depending
on the cardinality of this set we can have two different scenarios: 1) if
$|\bar{\mathcal{S}}|<\overline{k}$, then, ${\mathcal{S}^{\prime\prime}}$ needs
to contain all the nodes in $\bar{\mathcal{S}}$ and some other nodes of the
set $V_{n_{1}}\setminus\bar{\mathcal{S}}$; 2) if
$\left|\bar{\mathcal{S}}\right|\geq\bar{k}$, then, one has to select
${\mathcal{S}^{\prime\prime}}\subset\bar{\mathcal{S}}$. In both cases, the
choice of the nodes in ${\mathcal{S}^{\prime\prime}}$ is accomplished taking
into account the costs associated to the network links.
First, note that, given $V_{n_{2}}$, to fulfill condition 2) of Theorem IV.1
the links between nodes in this set need to be removed. This yields a fixed
cost $\bar{c}=\sum\limits_{i,j\in V_{n_{2}}}a_{ij}w_{ij}^{-}$ such that
$\min\limits_{\mathbf{u}\in\mathcal{U}}\sum\limits_{\mathcal{L}^{\prime}_{i,j}=1}\mathcal{L}^{\prime}_{ij}w_{ij}^{-}+\sum\limits_{\mathcal{L}^{\prime}_{i,j}=-1}|\mathcal{L}^{\prime}_{ij}|w_{ij}^{+}\geq\bar{c}$.
Let $c_{i}^{+}$ be the cost to have node $i$ in ${\mathcal{S}^{\prime\prime}}$
and $c_{i}^{-}$ the cost of not including it in
${\mathcal{S}^{\prime\prime}}$. It follows that $c_{i}^{-}=\sum\limits_{j\in
V_{n_{2}}}a_{ij}w_{ij}^{-}$ and $c_{i}^{+}=\sum\limits_{j\in
V_{n_{2}}}(1-a_{ij})w_{ij}^{+}$. Once $c_{i}^{+}$ and $c_{i}^{-}$ are calculated,
we reformulate the optimization problem in terms of minimization of the
overall cost of the control:
$Cost=\sum\limits_{v_{i}\in\bar{\mathcal{S}}}\left[c_{i}^{+}x_{i}+c_{i}^{-}(1-x_{i})\right]+\bar{c}$,
where $x_{i}$ ($i=1,\ldots,N$) are decisional variables, such that $x_{i}=1$
if $v_{i}\in{\mathcal{S}^{\prime\prime}}$ and $x_{i}=0$ otherwise. The
optimization problem now reads:
$\begin{cases}\min\sum\limits_{v_{i}\in\bar{\mathcal{S}}}\left[c_{i}^{+}x_{i}+c_{i}^{-}(1-x_{i})\right]+\bar{c}\\\
\sum\limits_{v_{i}\in\bar{\mathcal{S}}}x_{i}\geq\bar{k}\end{cases}$ (20)
where the constraint $\sum\limits_{v_{i}\in\bar{\mathcal{S}}}x_{i}\geq\bar{k}$
guarantees that condition $k\geq\bar{k}$ holds. Since the overall cost can be
rewritten as
$Cost=\sum\limits_{v_{i}\in\bar{\mathcal{S}}}\left(c_{i}^{+}-c_{i}^{-}\right)x_{i}+\sum\limits_{v_{i}\in\bar{\mathcal{S}}}c_{i}^{-}+\bar{c}$
and the terms $\sum\limits_{v_{i}\in\bar{\mathcal{S}}}c_{i}^{-}$ and $\bar{c}$
do not depend on the variables $x_{i}$, the optimization problem becomes
$\begin{cases}\min\sum\limits_{v_{i}\in\bar{\mathcal{S}}}c_{i}x_{i}\\\
\sum\limits_{v_{i}\in\bar{\mathcal{S}}}x_{i}\geq\bar{k}\end{cases}$ (21)
where $c_{i}\triangleq c_{i}^{+}-c_{i}^{-}$.
This formulation prompts the following solution for the optimization problem.
Defining
$k^{\prime}\triangleq\left|\left\\{v_{i}\in\bar{\mathcal{S}}\;|\;c_{i}\leq
0\right\\}\right|$ and sorting the nodes in $\bar{\mathcal{S}}$ in ascending
order with respect to their cost $c_{i}$, we take
$k_{max}=\max\left\\{\bar{k},\;k^{\prime}\right\\}$ and assign $x_{i}=1$ to
the first $k_{max}$ nodes and $x_{i}=0$ to the remaining ones. The overall
cost to achieve synchronization of the nodes in the set $V_{n_{2}}$ is given
by
$Cost=\sum\limits_{v_{i}\in\bar{\mathcal{S}}}c_{i}^{-}+\sum\limits_{v_{i}\in{\mathcal{S}^{\prime\prime}}}c_{i}+\bar{c}$
(22)
Algorithm 1 is based on the above observations and returns the nodes belonging
to ${\mathcal{S}^{\prime\prime}}$. The inputs are $V_{n_{2}}$, $\bar{k}$ and
$A$ (the adjacency matrix of the network) and the outputs are the set
${\mathcal{S}^{\prime\prime}}$ and the overall cost.
Algorithm 1 Algorithm to select the nodes in ${\mathcal{S^{\prime\prime}}}$
0: $V_{n_{2}}$, $\bar{k}$ and $A$
0: ${\mathcal{S}^{\prime\prime}}$, overall cost $Cost$
Initialization:
1: Create $\bar{\mathcal{S}}=\bigcup_{v_{i}\in
V_{n_{2}}}\mathcal{S}_{i}\setminus V_{n_{2}}$ and determine its cardinality
$|\bar{\mathcal{S}}|$
Procedure:
2: Calculate $c_{i}^{-}$, $c_{i}^{+}$, $\bar{c}$ and
$c_{i}=c_{i}^{+}-c_{i}^{-}$
3: Sort the nodes in $\bar{\mathcal{S}}$ in ascending order of $c_{i}$ and set
$k^{\prime}=\left|\left\\{v_{i}\in\bar{\mathcal{S}}\;|\;c_{i}\leq
0\right\\}\right|$
4: Calculate $k_{max}=\max\left\\{\bar{k},\;k^{\prime}\right\\}$
5: if $|\bar{\mathcal{S}}|\geq\bar{k}$ then
6: return
$Cost=\bar{c}+\sum\limits_{v_{i}\in\bar{\mathcal{S}}}c_{i}^{-}+\sum\limits_{v_{i}\in{\mathcal{S}^{\prime\prime}}}c_{i}$
and build ${\mathcal{S}^{\prime\prime}}$ taking the first $k_{max}$ nodes in
$\bar{\mathcal{S}}$
7: else
8: Build ${\mathcal{S}^{\prime\prime}}$ taking all the elements of
$\bar{\mathcal{S}}$ and add nodes from $V_{n_{1}}\setminus\bar{\mathcal{S}}$,
in ascending order of $c_{i}$, until $|{\mathcal{S^{\prime\prime}}}|=\bar{k}$
9: end if
10: return $Cost$ and ${\mathcal{S}^{\prime\prime}}$
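Algorithm 1 can be sketched in Python for the unitary-cost case ($w_{ij}^{-}=w_{ij}^{+}=1$); the function name and the toy network are assumptions for illustration, while the inputs ($V_{n_{2}}$, $\bar{k}$, the adjacency matrix) follow the algorithm statement.

```python
# Unitary-cost version of Algorithm 1. A[i][j] = 1 iff v_j is a successor of v_i.

def select_successors(V_n2, k_bar, A):
    """Return (S'', overall cost) for synchronizing the nodes in V_n2."""
    N = len(A)
    V = set(V_n2)
    # S_bar: successors (outside V_n2) of at least one node of V_n2.
    S_bar = {j for i in V for j in range(N) if A[i][j]} - V
    # Fixed cost c_bar: links internal to V_n2 must be removed (condition 2).
    c_bar = sum(A[i][j] for i in V for j in V)
    # For each candidate u: c_minus = links from V_n2 to u, removed if u is
    # excluded; c_plus = links added if u is included; c = c_plus - c_minus.
    c_minus = {u: sum(A[i][u] for i in V) for u in range(N) if u not in V}
    c = {u: (len(V) - c_minus[u]) - c_minus[u] for u in c_minus}

    ranked = sorted(S_bar, key=lambda u: c[u])
    k_prime = sum(1 for u in S_bar if c[u] <= 0)
    if len(S_bar) >= k_bar:
        S2 = set(ranked[:max(k_bar, k_prime)])
    else:  # not enough successors: recruit the cheapest outside nodes
        extra = sorted((u for u in c if u not in S_bar), key=lambda u: c[u])
        S2 = S_bar | set(extra[:k_bar - len(S_bar)])
    cost = c_bar + sum(c_minus[u] for u in S_bar) + sum(c[u] for u in S2)
    return S2, cost

# Toy network: V_n2 = {0, 1}; edges 0->2, 0->3, 1->3 and the internal link 1->0.
A = [[0] * 6 for _ in range(6)]
for i, j in [(0, 2), (0, 3), (1, 3), (1, 0)]:
    A[i][j] = 1
S2, cost = select_successors([0, 1], 2, A)
print(S2, cost)  # {2, 3} 2: add 1->2, remove 1->0
```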
## VI Examples
We now discuss an example of how the proposed control works in the directed
network with $N=20$ nodes shown in Fig. 1. We refer to several cases,
corresponding to two distinct sets of nodes to control, $V_{n_{2}}$, two
values of $\bar{k}$, and different costs associated to the links. For each of
these cases, the controllers that satisfy Theorem IV.1 and are the result of
the optimization procedure of Sec. V are discussed; we will show that they
depend on the control goal, on the link costs and, through $\bar{k}$, on the
coupling coefficient.
More specifically, we first consider unitary costs for the links and
synchronization of two different triplets of nodes, i.e., either
$V_{n_{2}}=\\{1,4,16\\}$ or $V_{n_{2}}=\\{1,3,19\\}$, with two values of
$\bar{k}$, i.e., $\bar{k}=1$ and $\bar{k}=3$ (the value of $\bar{k}$ depends
on the node dynamics considered and the strength of the coupling, e.g., with
reference to Chua’s circuit as node dynamics and coupling of the type
$\mathrm{H}=\textrm{diag}\\{1,1,0\\}$, we have $\bar{q}=4.5$ [17], and
$\bar{k}=1$ for $\sigma=5$ while $\bar{k}=3$ for $\sigma=2$). This leads to
cases 1-4 in Table I. Case 5, instead, refers to a scenario where the costs
are not unitary.
Figure 1: Case study: a directed network of $N=20$ nodes. The spatial position
of the network nodes is used to define costs for link addition proportional to
the Euclidean distance of the nodes to be connected.
_Case 1: $V_{n_{2}}=\\{1,4,16\\}$, $\bar{k}=1$._ Here, we have that
$\bar{\mathcal{S}}=\\{3,6,8,10,15,20\\}$ and $|\bar{\mathcal{S}}|>\bar{k}$.
Following Algorithm 1, we find $k^{\prime}=2$, so that $k^{\prime}>\bar{k}$
and ${\mathcal{S}^{\prime\prime}}=\\{3,8\\}$. Synchronization of the nodes in
$V_{n_{2}}$ is achieved if two links are added to the original network, and
four links are removed.
_Case 2: $V_{n_{2}}=\\{1,4,16\\}$, $\bar{k}=3$._ Here again
$|\bar{\mathcal{S}}|>\bar{k}$, but $k^{\prime}<\bar{k}$. We get
${\mathcal{S}^{\prime\prime}}=\\{3,6,8\\}$, four links to add and three to
remove.
_Case 3: $V_{n_{2}}=\\{1,3,19\\}$, $\bar{k}=1$._ We have
$\bar{\mathcal{S}}=\\{4,10\\}$ and $|\bar{\mathcal{S}}|>\bar{k}$. In this
case, we obtain ${\mathcal{S}^{\prime\prime}}=\\{4\\}$, a single link to add
and three links to remove.
_Case 4: $V_{n_{2}}=\\{1,3,19\\}$, $\bar{k}=3$._ We have
$\left|\bar{\mathcal{S}}\right|<\bar{k}$ thus, following step 8 of the
algorithm, we need to add a node from $V_{n_{1}}\setminus\bar{\mathcal{S}}$,
i.e., a node that is not a successor of any of the nodes to be synchronized.
As the choice is completely arbitrary, we select node $6$. So,
${\mathcal{S}^{\prime\prime}}=\\{4,6,10\\}$. Control is attained by adding
seven links and removing three links.
_Case 5: $V_{n_{2}}=\\{1,4,16\\}$, $\bar{k}=3$, non-unitary costs._ For the
purpose of illustration, here we assume that the cost to add a link is
proportional to the distance between the two nodes, while removing links
always has a unitary cost. We consider the synchronization problem as in case
2. Here, the different costs yield a different result for
${\mathcal{S}^{\prime\prime}}$, i.e.,
${\mathcal{S}^{\prime\prime}}=\\{3,8,10\\}$. In this scenario, optimization
requires to include in ${\mathcal{S}^{\prime\prime}}$ node 10 rather than node
6.
TABLE I: Added and removed links obtained for different $V_{n_{2}}$ and $\bar{k}$ for the network in Fig. 1. Costs associated to links are considered unitary in all cases, except for case 5.

Case | $V_{n_{2}}$ | $\bar{k}$ | Added links | Removed links
---|---|---|---|---
1 | {1,4,16} | 1 | (1,8) (16,3) | (16,15) (16,20) (1,10) (4,6)
2 | {1,4,16} | 3 | (1,8) (16,3) (1,6) (16,6) | (16,15) (16,20) (1,10)
3 | {1,3,19} | 1 | (1,4) | (1,10) (1,3) (19,1)
4 | {1,3,19} | 3 | (1,6) (1,4) (3,10) (3,4) (3,6) (19,20) (19,6) | (19,1) (1,3) (3,19)
5* | {1,4,16} | 3 | (1,8) (4,10) (16,3) (16,10) | (4,6) (16,15) (16,20)
Finally, for case 2 we report the waveforms obtained by simulating the network
with control (Fig. 2). Chua’s circuits starting from random initial conditions
are considered (equations and parameters have been fixed as in [17]). Fig. 2
shows that the nodes in $V_{n_{2}}=\\{1,4,16\\}$ follow the same trajectory,
while the remaining units are not synchronized with them. Similar results are
obtained for the other scenarios.
Figure 2: Evolution of the first state variable for nodes in
$V_{n_{2}}=\\{1,4,16\\}$ (upper plot) and for all the network nodes (bottom
plot).
## VII Conclusions
In this work we have focused on the problem of controlling synchronization of
a group of nodes in directed networks. The nodes are all assumed to have the
same dynamics and, similarly, coupling is assumed to be fixed to the same
value along all the links of the network. The technique we propose is based on
the use of distributed controllers which add further links to the network or
remove some of the existing ones, creating a new network structure which has
to satisfy two topological conditions. The first condition refers to the fact
that, in the new network, merging the existing links and those of the
controllers, the nodes to control must be outer symmetrical, while the second
condition requires that the out-degree of these nodes has to be higher than a
threshold. Quite interestingly, the threshold depends on the dynamics of the
units and on the coupling strength, in such a way that a higher coupling
strength favors control as it requires a smaller out-degree. It is also worth
noticing that, when the out-degree needs to be increased to exceed the
threshold, this can be obtained by connecting to any of the remaining nodes of
the network.
The selection of the nodes forming the set of successors of the units to
control is carried out by considering an optimization problem and finding the
exact solution that minimizes the cost of the changes (i.e. link additions or
removals). In the case of unitary costs, the problem reduces to minimization
of the number of added or removed links, thereby defining a strategy for the
control of synchronization of a group of nodes in a directed network with
minimal topological changes.
## References
* [1] T. Chen, X. Liu, and W. Lu, “Pinning complex networks by a single controller,” _IEEE Trans. on Circ. Sys. I_ , vol. 54, no. 6, pp. 1317–1326, 2007.
* [2] W. Yu, G. Chen, and J. Lü, “On pinning synchronization of complex dynamical networks,” _Automatica_ , vol. 45, no. 2, pp. 429–435, 2009.
* [3] P. DeLellis, F. Garofalo _et al._ , “Novel decentralized adaptive strategies for the synchronization of complex networks,” _Automatica_ , vol. 45, no. 5, pp. 1312–1318, 2009.
* [4] M. Coraggio, P. De Lellis, S. J. Hogan, and M. di Bernardo, “Synchronization of networks of piecewise-smooth systems,” _IEEE Control Systems Letters_ , 2018.
* [5] R. Jeter, M. Porfiri, and I. Belykh, “Network synchronization through stochastic broadcasting,” _IEEE Control Systems Letters_ , vol. 2, no. 1, pp. 103–108, 2018.
* [6] Z.-H. Guan, Z.-W. Liu, G. Feng, and Y.-W. Wang, “Synchronization of complex dynamical networks with time-varying delays via impulsive distributed control,” _IEEE Trans. Circ. Sys. I_ , vol. 57, no. 8, pp. 2182–2195, 2010.
* [7] W. Wu, W. Zhou, and T. Chen, “Cluster synchronization of linearly coupled complex networks under pinning control,” _IEEE Trans. Circ. Syst. I_ , vol. 56, no. 4, pp. 829–839, 2009.
* [8] H. Su, Z. Rong, M. Z. Chen, X. Wang, G. Chen, and H. Wang, “Decentralized adaptive pinning control for cluster synchronization of complex dynamical networks,” _IEEE Transactions on Cybernetics_ , vol. 43, no. 1, pp. 394–399, 2013.
* [9] Q. Ma and J. Lu, “Cluster synchronization for directed complex dynamical networks via pinning control,” _Neurocomputing_ , vol. 101, pp. 354–360, 2013.
* [10] C. B. Yu, J. Qin, and H. Gao, “Cluster synchronization in directed networks of partial-state coupled linear systems under pinning control,” _Automatica_ , vol. 50, no. 9, pp. 2341–2349, 2014.
* [11] L. V. Gambuzza and M. Frasca, “A criterion for stability of cluster synchronization in networks with external equitable partitions,” _Automatica_ , vol. 100, pp. 212–218, 2019.
* [12] T. H. Lee, Q. Ma, S. Xu, and J. H. Park, “Pinning control for cluster synchronisation of complex dynamical networks with semi-markovian jump topology,” _Int. J. Control_ , vol. 88, no. 6, pp. 1223–1235, 2015.
* [13] D. A. Paley, N. E. Leonard, R. Sepulchre, D. Grunbaum, and J. K. Parrish, “Oscillator models and collective motion,” _IEEE Control Systems Magazine_ , vol. 27, no. 4, pp. 89–105, 2007.
* [14] R. T. Canolty, K. Ganguly, S. W. Kennerley, C. F. Cadieu, K. Koepsell, J. D. Wallis, and J. M. Carmena, “Oscillatory phase coupling coordinates anatomically dispersed functional cell assemblies,” _Proceedings of the National Academy of Sciences_ , vol. 107, no. 40, pp. 17 356–17 361, 2010.
* [15] G. Indiveri and T. K. Horiuchi, “Frontiers in neuromorphic engineering,” _Frontiers in neuroscience_ , vol. 5, p. 118, 2011.
* [16] V. Nicosia, M. Valencia, M. Chavez, A. Díaz-Guilera, and V. Latora, “Remote synchronization reveals network symmetries and functional modules,” _Phys. Rev. Lett._ , vol. 110, p. 174102, 2013.
* [17] L. V. Gambuzza, M. Frasca, and V. Latora, “Distributed control of synchronization of a group of network nodes,” _IEEE Trans. Automatic Control_ , vol. 64, no. 1, pp. 362–369, 2019.
* [18] V. Latora, V. Nicosia, and G. Russo, _Complex networks: principles, methods and applications_. Cambridge University Press, 2017.
* [19] M. Golubitsky and I. Stewart, “Nonlinear dynamics of networks: the groupoid formalism,” _Bulletin of the American Mathematical Society_ , vol. 43, no. 3, pp. 305–364, 2006.
* [20] A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno, and C. Zhou, “Synchronization in complex networks,” _Physics Reports_ , vol. 469, no. 3, pp. 93–153, 2008.
# A Simple Geometric Proof for the Benefit of Depth in ReLU Networks
Asaf Amrami1,2 & Yoav Goldberg1,2
1 Bar Ilan University
2 Allen Institute for Artificial Intelligence
###### Abstract
We present a simple proof for the benefit of depth in multi-layer feedforward
networks with rectified activation (“depth separation”). Specifically, we
present a sequence of classification problems indexed by $m$ such that (a) for
any fixed-depth rectified network there exists an $m$ above which classifying
problem $m$ correctly requires an exponential number of parameters (in $m$);
and (b) for any problem in the sequence, we present a concrete neural network
with linear depth (in $m$) and small constant width ($\leq 4$) that classifies
the problem with zero error.
The constructive proof is based on geometric arguments and a space folding
construction.
While stronger bounds and results exist, our proof uses substantially simpler
tools and techniques, and should be accessible to undergraduate students in
computer science and people with similar backgrounds.
## 1 Introduction
We present a simple, geometric proof of the benefit of depth in deep neural
networks.
We prove that there exists a set of functions indexed by $m$, each of which
can be efficiently represented by a depth-$m$ rectified MLP network requiring
$O(m)$ parameters. However, for any bounded-depth rectified MLP network, there
is a function $f_{m}$ in this set that cannot be represented by the network
unless it has an exponential number of parameters in $m$.
More formally, we will prove the following theorem:
###### Theorem 1 (Depth Separation).
There exists a sequence of functions
$f_{1},f_{2},...:\mathbb{R}^{2}\mapsto\\{-1,1\\}$ such that:
1. a
[Bounded-depth network is exponential in size] For any rectified MLP with
bounded depth $d$, solving problem $f_{m}$ requires a width of at least
$b^{m}$, where $b>1$ is a constant determined by the bounded depth $d$.
2. b
[Utility of depth] For any problem $f_{m}$ there is a rectified MLP with a
linear number of parameters in $m$ that solves $f_{m}$ perfectly. More
concretely, there exists a network with depth $m+2$ and layer width $\leq 4$
that perfectly represents $f_{m}$.
While this is not a novel result, a main characteristic of our proof is its
simplicity. In contrast to previous work, our proof uses only basic algebra,
geometry and simple combinatorial arguments. As such, it can be easily read
and understood by newcomers and practitioners, or taught in a self-contained
lecture in an undergraduate class, without requiring extensive background.
Tailoring to these crowds, our presentation style is more verbose than is
usual in papers of this kind, attempting to spell out all steps explicitly. We
also opted to trade generality for proof simplicity, remaining in input space
$\mathbb{R}^{2}$ rather than the more general $\mathbb{R}^{n}$, thus allowing
us to work with lines rather than hyperplanes. Beyond being easy to visualize,
it also results in somewhat simpler proofs of the different lemmas.
## 2 Related Work
The expressive power gained by depth in multi-layer perceptron (MLP) networks
is relatively well studied, with multiple works showing that deep MLPs can
represent functions that cannot be represented by similar but shallower
networks, unless those have a significantly larger number of units (Delalleau
& Bengio, 2011; Pascanu et al., 2013; Bianchini & Scarselli, 2014).
Telgarsky (2015; 2016) shows that network depth facilitates fast oscillations
in the network response function. Oscillations enabled by a linear growth in
depth are shown to require exponential growth in the number of units when
approximated well by a shallower network.
Eldan & Shamir (2016) study approximation of the unit sphere for a wide
family of activation functions. In their construction they show that a 3-layer
MLP could first compute the polynomial $x^{2}$ for each of the dimensions and
use the last layer to threshold their sum to model the unit sphere indicator.
They analytically show that the same approximation with a 2-layer network
requires exponential growth in width with precision.
Yarotsky (2017) and Safran & Shamir (2016) show that depth is useful for
approximating polynomials by ReLU MLPs; specifically, that $f(x)=x^{2}$ can be
efficiently approximated with network depth.
While results similar to one presented here could be derived by a combination
of the construction in Eldan & Shamir (2016) and the polynomial approximation
of Yarotsky (2017), we present a different (and to our taste, simpler) proof,
using a geometric argument and a bound on the number of response regions of
ReLU networks, without explicitly modeling the $x^{2}$ polynomial.
The ReLU MLP decision space was studied by Pascanu et al. (2013). They show
that the input space is sequentially refined by the ReLU and linear operations
of the network to form separated convex polytopes in the input space. They
call these regions _response regions_. They also establish a lower bound on
the maximal number of regions, a bound which is tightened by Montufar et al.
(2014); Raghu et al. (2017); Arora et al. (2016); Serra et al. (2017). We rely
on the notion of response region in our proof, while attempting to provide an
accessible explanation of it. Some of the lemmas we present are simplified
versions of results presented in these previous works.
## 3 Background
### 3.1 Linearity and piecewise linearity. Convexity.
A _linear function_ is a function of the form $f(x)=\mathbf{A}x+\mathbf{b}$.
For affine spaces (like the Euclidean space), this is also called an _affine
transformation_ of the input. In a _piecewise linear_ function the input space
is split into regions, and each region is associated with a linear function. A
composition of linear functions is linear. A composition of piecewise linear
functions is piecewise linear.
A $2d$ region is _convex_ iff, for any two points in the region, all points
on the line segment connecting the two points are also within the region. A
polygon with all internal angles $<180^{\circ}$ is a convex region.
### 3.2 ReLU MLP with d Layers
A ReLU MLP with $d$ layers parameterized by $\Theta$ is a multivariate
function defined as the composition:
$F(X;\Theta)=h^{out}\circ h_{d}^{A}\circ\sigma\circ
h_{d-1}^{A}\circ\sigma\ldots\circ h_{2}^{A}\circ\sigma\circ h_{1}^{A}(X)$
Where the $h^{A}_{i}$s are parameterized affine transformations; $\Theta$ is
the set of parameters in them; and $\sigma$ is the ReLU activation function: a
nonlinear element-wise activation function defined by
$\sigma(x)=\max\\{0,x\\}$.
We consider ReLU MLPs where all hidden layers have the same width $w$.111This
subsumes networks with layers of width $<w$, as these are equivalent to
width $w$ layers with zeroes in specific regions of the parameters. Without
loss of generality we define the last layer of network, $h^{out}$, as a
weighted sum over its inputs where a sum strictly greater than zero is mapped
to the $1$ class, and otherwise to the $-1$ class.222In common ML operations,
this simply means that $h^{out}$ multiplies by a vector and takes the sign of
the resulting scalar.
The combination of linear operations and the ReLU function results in a
piecewise linear function of the input $X$.
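As a concrete sketch of this definition (our own illustration, not code from the paper; the toy layer parameters below are arbitrary):

```python
import numpy as np

def relu(z):
    """Element-wise ReLU: sigma(x) = max(0, x)."""
    return np.maximum(z, 0.0)

def mlp_classify(x, layers, v):
    """A ReLU MLP: alternate affine maps and ReLU, then the sign layer h_out.

    layers: list of (A, b) affine parameters; v: output weight vector."""
    h = np.asarray(x, dtype=float)
    for A, b in layers:
        h = relu(A @ h + b)
    return 1 if v @ h > 0 else -1

# a toy width-2, depth-1 network (arbitrary parameters, for illustration)
layers = [(np.eye(2), np.zeros(2))]
v = np.array([1.0, 1.0])
print(mlp_classify([1.0, 1.0], layers, v))    # -> 1
print(mlp_classify([-1.0, -1.0], layers, v))  # -> -1
```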
### 3.3 ReLU MLP response regions
Piecewise linear activation functions such as ReLU split the input space into
convex regions of linear activation. This is asserted formally and visualized
in Hanin & Rolnick (2019). The ReLU function has two regions (“pieces”) of
linearity $x>0,x\leq 0$. Within each of these, linearity is maintained. The
sequential composition of affine transformations and the ReLU operations
created by the MLP layers, divides the input space into convex polytopes
(in $2D$, as we consider here, these are convex polygons). Within each such
polytope, the function behaves linearly. We call these polytopes _linear
response regions_.
The number of these linear response regions, and specifically the effect of
MLP depth on the maximal number of regions, was studied in multiple works
Montufar et al. (2014); Raghu et al. (2017); Arora et al. (2016); Serra et al.
(2017). We focus on the simpler case of 2-class classification ReLU MLP on the
Euclidean plane and denote the maximal number of response regions of a network
of $d$ layers each with $w$ units as $r(w,d)$.
Our presentation of the proof of lemma 3 gives more insight into response
regions.
### 3.4 Folding transformations
Montufar et al. (2014) present the concept of folding transformations and
their implementation with ReLUs. Looking at one or more layers as a function
$f:\mathbb{R}^{2}\to\mathbb{R}^{2}$, a folding transformation maps a part of
the input space to coincide with another. Subsequent operations on the
resulting space will apply to both parts, indifferently to their origin in
their initial position. As a simple example, consider a ReLU MLP of input
dimension 1. A simple folding two-layer transformation could easily model the
function $abs(x)=|x|$, mapping the negative input values to their positive
counterparts.333This is achieved by linearly mapping $x$ into the pair
$[x,-x]$, and then applying ReLU and summing: $abs(x)=ReLU(x)+ReLU(-x)$.
Afterwards, any operation in subsequent layers will apply to both the negative
values and positive values. This simple mechanism of “code reuse” is key to
our constructed deep network and its unit-efficiency. Intuitively, our
construction resembles children paper-cutting, where a sheet of paper is
folded multiple times, then cut with scissors. Unfolding the paper reveals a
complex pattern with distinctive symmetries. Tracing and cutting the same
pattern without any paper folding would require much more effort. Analogously,
we’ll show how deep networks could implement “folds” through their layers and
how ReLU operations, like scissor cuts, are mirrored through the symmetries
induced by these folds. Conversely, shallow networks, unable to “fold the
paper”, must make many more cuts — i.e. must have many more units in order to
create the very same pattern.444Malach & Shalev-Shwartz (2019) show a similar
construction using Fractals.
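The one-dimensional abs fold from the footnote can be checked in a few lines (a sketch; the function names are ours):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fold_abs(x):
    """abs(x) as a two-layer fold: x -> [x, -x] -> ReLU -> sum."""
    return relu(np.array([x, -x])).sum()

print(fold_abs(-3.5))  # -> 3.5
print(fold_abs(2.0))   # -> 2.0
```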
## 4 Main Proof
Figure 1: The problem family $f_{m}$ is characterized by regular polygons,
where polygon $P_{m}$ has $2^{m+1}$ edges.
### 4.1 The problems $f_{m}$
Let $P_{m}$ be a regular polygon with $2^{m+1}$ edges (Figure 1).555A regular
polygon is a polygon that is both equi-angular (whose angles are all equal)
and equilateral (whose edges are all equal). Without loss of generality,
$P_{m}$ is centered around the origin, bounded by the unit circle, and has a
vertex at $(0,1)$.666Any other regular polygon can be shifted, rotated and
scaled to satisfy these conditions using an affine transformation with $O(1)$
parameters.
The set of polygons $P_{1},P_{2},...$ approaches the unit circle as
$m\rightarrow\infty$. Let $f_{m}$ be the function with decision boundary
$P_{m}$:
$f_{m}(x)=\begin{cases}1&x\text{ is inside }P_{m}\\\
-1&\text{otherwise}\end{cases}$
Points within polygon $P_{m}$ are of class $1$, while other points are of
class $-1$.
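One way to evaluate the label function $f_{m}$ numerically (our own reference implementation using the polar geometry of a regular $2^{m+1}$-gon with a vertex at $(0,1)$; it is not part of the paper's construction):

```python
import numpy as np

def f_m(p, m):
    """Label a point p = (x, y) against the regular polygon P_m.

    P_m has n = 2^(m+1) vertices on the unit circle, one at (0, 1).
    A point is inside iff its projection onto the nearest apothem
    direction is at most the apothem length cos(pi/n)."""
    n = 2 ** (m + 1)
    r = np.hypot(*p)
    phi = np.arctan2(p[1], p[0])
    t = (phi - np.pi / 2) % (2 * np.pi / n)  # angle within one edge sector
    return 1 if r * np.cos(t - np.pi / n) <= np.cos(np.pi / n) else -1

print(f_m((0.0, 0.0), 3))   # -> 1  (origin is inside every P_m)
print(f_m((0.0, 0.99), 3))  # -> 1  (just below the vertex at (0, 1))
print(f_m((1.0, 1.0), 3))   # -> -1 (outside the unit circle)
```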
### 4.2 A bounded-depth network representing $f_{m}$ must be exponentially
wide.
We begin by proving part (a) of Theorem 1. We will use the following lemmas, with
proofs for the lemmas provided in the next section.
###### Lemma 1.
A rectified MLP is a piecewise linear function.
Proof: A linear layer followed by a rectifier is piecewise-linear. A
composition of piecewise linear functions is itself piecewise linear.
###### Lemma 2.
Modeling $f_{m}$ as a piecewise linear function requires at least $2^{m}$
response regions.
###### Lemma 3.
Rectified MLP with input in $\mathbb{R}^{2}$, with $d$ hidden layers and
maximal layer width of $w$, has at most $w^{2d}$ response regions.
Together, Lemma 2 and Lemma 3 show how network width $w$ behaves when the
problem grows more complex. To prove Theorem (1a), we need to show that $w$ is
exponential. Namely, we will show that there is a base $b>1$ such that $w\geq
b^{m}$. From Lemma 2, modeling $f_{m}$ requires $2^{m}$ response regions. From
Lemma 3, a network with depth $d$ has at most $w^{2d}$ regions. To model
$f_{m}$, we thus need $w^{2d}\geq 2^{m}$.
Taking the $2d$-th root of both sides777Both sides are strictly positive,
allowing taking roots and logarithms. we get $w\geq
2^{\frac{m}{2d}}=(2^{\frac{1}{2d}})^{m}$. Since the depth $d$ is constant,
denote $b=2^{\frac{1}{2d}}>1$, leading to $w\geq b^{m}$ as desired. This
concludes the proof of Theorem (1a). An alternative view of the same math,
which may be simpler for some readers: we can re-write $w$ as $2^{\log_{2}w}$,
leading to $(2^{\log_{2}w})^{2d}=2^{\log_{2}w\cdot 2d}\geq 2^{m}$, and hence
$\log_{2}w\cdot 2d\geq m$, where the logarithm on the left indicates that we
require exponential growth in $w$ to match $m$ for a fixed depth $d$ as $m$
grows.
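The bound can be sanity-checked numerically (illustrative only; the fixed depth $d=3$ is an arbitrary choice of ours):

```python
import math

d = 3                          # an arbitrary fixed depth
b = 2 ** (1 / (2 * d))         # the base from the proof; note b > 1
for m in range(1, 30):
    w = math.ceil(b ** m)      # smallest integer width allowed by the bound
    # such a width indeed yields at least 2^m response regions
    assert w ** (2 * d) >= 2 ** m
print(b)  # -> about 1.122
```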
### 4.3 Efficient depth-$m$ solution exists.
Lemma 3 provides a lower bound for the size of any zero error network. We now
turn to prove Theorem (1b) by showing how to construct a linear depth and
bounded width network. Our construction is based on folding transformations.
As discussed in §3.4, we construct the regular polygon decision boundary for
polygon $P_{m}$ through exploitation of symmetry and folding transformations.
Formally, our deep network operates as follows: first, it folds across both
the $X$ and $Y$ axes, mapping the input space into the first quadrant
$(x,y)\mapsto(|x|,|y|)$. It now has to deal only with the positive part of the
decision boundary. It then proceeds in steps, in which it first rotates the
space around the origin until the remaining decision boundary is symmetric
around the $X$ axis, and then folds around the $X$ axis, resulting in half the
previous decision boundary, in the first quadrant. This process continues
until the decision boundary is a single line, which can be trivially
separated. The first step cuts the number of edges in the decision boundary by
a factor of four, while each subsequent rotate + fold sequence further cuts
the number of polygon edges in half.
This process is depicted in Figure 2.
More formally, we require four types of transformations:
* •
$\textit{foldXY}(\begin{bmatrix}x_{0}\\\
x_{1}\end{bmatrix}):\mathbb{R}^{2}\to\mathbb{R}^{2}$ — initial mapping of
input to the first quadrant.
* •
$\textit{rotate}_{\Theta}(\begin{bmatrix}x_{0}\\\
x_{1}\end{bmatrix}):\mathbb{R}^{2}\to\mathbb{R}^{2}$ — clockwise rotation
around the origin by an angle of $\Theta$.
* •
$\textit{foldX}(\begin{bmatrix}x_{0}\\\
x_{1}\end{bmatrix}):\mathbb{R}^{2}\to\mathbb{R}^{2}$ — folding across the $X$
axis.
* •
$\textit{top}(\begin{bmatrix}x_{0}\\\
x_{1}\end{bmatrix}):\mathbb{R}^{2}\to\mathbb{R}^{1}$ — the final activation
layer.
These operations are realized in the network layers, using a combination of
linear matrix operations and ReLU activations. The _rotate_ operation is
simply a rotation matrix. Rotating by an angle of $\Theta$ is realized as:
$\textit{rotate}_{\Theta}(\begin{bmatrix}x_{0}\\\
x_{1}\end{bmatrix})=\left[\begin{array}[]{rr}cos(\Theta)&-sin(\Theta)\\\
sin(\Theta)&cos(\Theta)\\\ \end{array}\right]\begin{bmatrix}x_{0}\\\
x_{1}\end{bmatrix}$
The initial folding across both $X$ and $Y$ axes first transforms the input
$(x,y)$ to $(x,-x,y,-y)$ using a linear transformation. It then trims the
negative values using a ReLU, and sums the first two and last two coordinates
using another linear operation, resulting in:
$\textit{foldXY}(\begin{bmatrix}x_{0}\\\
x_{1}\end{bmatrix})=\left[\begin{array}[]{rrrr}1&1&0&0\\\ 0&0&1&1\\\
\end{array}\right]\sigma(\left[\begin{array}[]{rr}-1&0\\\ 1&0\\\ 0&1\\\
0&-1\\\ \end{array}\right]\begin{bmatrix}x_{0}\\\ x_{1}\end{bmatrix})$
Where $\sigma$ is the elementwise ReLU activation function. Folding across
the $X$ axis is similar, but as all $x$ values are guaranteed to be positive,
we do not need to consider $-x$.
$\textit{foldX}(\begin{bmatrix}x_{0}\\\
x_{1}\end{bmatrix})=\left[\begin{array}[]{rrr}1&0&0\\\ 0&1&1\\\
\end{array}\right]\sigma(\left[\begin{array}[]{rr}1&0\\\ 0&1\\\ 0&-1\\\
\end{array}\right]\begin{bmatrix}x_{0}\\\ x_{1}\end{bmatrix})$
Finally, the classification layer is:
$\textit{top}(\begin{bmatrix}x_{0}\\\
x_{1}\end{bmatrix})=\operatorname{sign}(a\cdot x_{0}+b\cdot x_{1}+c)$
Composing these operations, the constructed network for problem $f_{m}$ has
the form:
$\textit{f}_{\textit{MLP}}(x)=\textit{top}\circ\textit{foldX}\circ\textit{rotate}_{\pi/2^{m+1}}\circ\textit{foldX}\circ\ldots\circ\textit{rotate}_{\pi/8}\circ\textit{foldX}\circ\textit{rotate}_{\pi/4}\circ\textit{foldXY}$
Note that the angle of rotation is decreased by a factor of 2 in every
subsequent rotate. Each rotate and foldX pair folds the input space along a
symmetry axis and effectively reduces the problem by half. This
results in a foldXY operation followed by a sequence of $m$
$\textit{foldX}\circ\textit{rotate}$ operations, followed by top.
Marking a fold operation as $\mathbf{F}\sigma\mathbf{C}$ and a rotate
operation as $\mathbf{R}$, where $\mathbf{F,C,R}$ are matrices, the MLP
takes the form: $\mathbf{F\sigma CRF\sigma CRF\sigma CRF\ldots}$ where a
sequence $\mathbf{CRF}$ of matrix operations can be collapsed into a single
matrix $\mathbf{M}$. This brings us to the familiar MLP form that alternates
matrix multiplications and ReLU activations. Overall, the network has $m+1$
non-linear activations (from $m$ foldX operations and $1$ foldXY operation),
resulting in $m+1$ layers.
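The three building blocks can be written out and checked directly (our NumPy sketch; we take the rotation matrix in the orientation that turns column vectors clockwise, which is what the folding sequence requires):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def foldXY(p):
    """(x, y) -> (|x|, |y|): map any point into the first quadrant."""
    A = np.array([[-1., 0.], [1., 0.], [0., 1.], [0., -1.]])
    B = np.array([[1., 1., 0., 0.], [0., 0., 1., 1.]])
    return B @ relu(A @ p)

def foldX(p):
    """(x, y) -> (x, |y|) for x >= 0: fold across the X axis."""
    A = np.array([[1., 0.], [0., 1.], [0., -1.]])
    B = np.array([[1., 0., 0.], [0., 1., 1.]])
    return B @ relu(A @ p)

def rotate_cw(theta, p):
    """Rotation by theta, oriented so a positive angle is clockwise."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]]) @ p

# the basic identities
assert np.allclose(foldXY(np.array([-3., 2.])), [3., 2.])
assert np.allclose(foldX(np.array([3., -2.])), [3., 2.])

# a rotate + foldX pair merges points that are symmetric about the
# 45-degree axis of the first quadrant, halving the remaining problem
a = np.array([np.cos(np.radians(10)), np.sin(np.radians(10))])
b = np.array([np.cos(np.radians(80)), np.sin(np.radians(80))])
step = lambda p: foldX(rotate_cw(np.pi / 4, foldXY(p)))
assert np.allclose(step(a), step(b))
```

The last assertion is the “code reuse” property from §3.4: after one fold, subsequent layers act on both symmetric halves at once.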
Figure 2: Constructing $P_{3}$ using folding and rotation transformations.
The 3 blackened markers show how 3 points in the input space are transformed
during this process. (a-b) a foldXY operation maps all points to the first
quadrant. (c) the slice is rotated clockwise by 45°using a linear
transformation. (d-e) the bottom half is mapped into the first quadrant using
a foldX operation. (f) rotate by $45/2\degree$. (g-h) folding. final rotation
by $45/4\degree$ and a final linear decision boundary that correctly
classifies the three points.
The response regions produced by the constructed MLP and by a shallow network
are depicted in Figure 3.
Figure 3: (a) The response regions of the constructed solution for $P_{2}$.
(b) A shallow, one-layer MLP that solves $P_{2}$: such an MLP must model each
of the regular polygon edges separately.
## 5 Proofs of Lemmas
### 5.1 Lemma 2
Modeling $f_{m}$ as a piecewise linear function requires at least $2^{m}$
response regions.
_Proof:_ consider the polygon $P_{m}$, and let $MLP_{m}$ be a ReLU MLP
(piecewise-linear function) correctly classifying the problem. Let $V_{even}$
be the set of every _second_ vertex along a complete traversal of $P_{m}$. For
each vertex take an $\epsilon$ step away from the origin to create
$V_{\textit{even}}^{\prime}$ (see Figure 4a for an illustration). Each of the
points in $V_{\textit{even}}^{\prime}$ are strictly outside $P_{m}$ and
therefore should be classified as class $-1$.
The response regions produced by $\textit{MLP}_{m}$ are both convex and
linear. Let $p_{i}$, $p_{j}$ be two arbitrary points in
$V_{\textit{even}}^{\prime}$, $p_{i}\neq p_{j}$. We will show that $p_{i}$,
$p_{j}$ belong in different response regions. Assume by contradiction that
$p_{i},p_{j}$ are in the same response region. By convexity all points in a
straight line between $p_{i}$ and $p_{j}$ are also in the same response
region. Also, by linearity these points have activation values between those
of $p_{i}$ and $p_{j}$, and therefore should also be classified as class $-1$.
From the problem construction we know that lines between the even vertices of
$P_{m}$ cross the class boundary as demonstrated in Figure 4b. Therefore,
$p_{i}$ and $p_{j}$ must lie in different response regions. Since $p_{i}$ and
$p_{j}$ are arbitrary, $\textit{MLP}_{m}$’s number of response regions is at
least $|V_{\textit{even}}^{\prime}|=2^{m}$.
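A quick numeric check of the chord argument (illustrative; $m=3$ and $\epsilon=0.01$ are arbitrary choices): the midpoint of a chord between consecutive points of $V_{\textit{even}}^{\prime}$ points along the skipped vertex, where $P_{m}$'s boundary sits at radius 1, so a midpoint radius below 1 means the chord dips inside $P_{m}$.

```python
import numpy as np

m, eps = 3, 0.01
n = 2 ** (m + 1)                                   # polygon vertex count
angles = np.pi / 2 + 2 * np.pi * np.arange(n) / n  # one vertex at (0, 1)
outer = (1 + eps) * np.stack([np.cos(angles[::2]),
                              np.sin(angles[::2])], axis=1)  # V'_even
# midpoint of the chord joining the first two points of V'_even;
# its radius is (1 + eps) * cos(2*pi/n), in the skipped vertex's direction
mid = (outer[0] + outer[1]) / 2
assert np.linalg.norm(mid) < 1.0   # so the chord crosses into P_m
print(np.linalg.norm(mid))         # -> about 0.933
```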
Figure 4: Left: $V_{even}^{\prime}$ are created by taking every second vertex
of $P_{m}$ then moving them slightly such that they are strictly outside
$P_{m}$. Right: a chord $a$ in green connecting any two vertices of
$V^{\prime}_{even}$, must cross $P_{m}$. Had both of the chord's endpoints
been in the same response region, by convexity so would all points on $a$. By
linearity, the final network activation at $a$'s points would interpolate the
activations at $a$'s endpoints.
### 5.2 Lemma 3
Rectified MLP with input in $\mathbb{R}^{2}$, with $d$ hidden layers and
maximal layer width of $w$, has at most $w^{2d}=2^{2d\log_{2}w}$ response
regions.
_Proof:_ Raghu et al. (2017) prove a version of this lemma for input space
$\mathbb{R}^{n}$, which has at most $O(w^{nd})$ response regions. We show a
proof for the more restricted case of inputs in $\mathbb{R}^{2}$, in a similar
fashion. We first consider the bound for 1 hidden-layer networks, then extend
to $d$ layers. The first part of the proof follows classic and basic results
in computational geometry. The argument in the second part (the move from 1
to $d$ layers) is essentially the same as that of Raghu et al. (2017).
#### Number of regions in a line-arrangement of $n$ lines 888A _line-
arrangement of $n$ lines_ is simply a collection of $n$ lines on a plane,
which partitions the plane.
We start by showing that $r(n)$, the maximal number of regions in
$\mathbb{R}^{2}$ created by a line arrangement of $n$ lines, is $r(n)\leq
n^{2}$. This is based on a classic result from computational geometry
(Zaslavsky, 1975) which we include for completeness. Initially, the entire
space is a region. A single line divides the space in two, adding one
additional region. What happens as we add additional lines? The second line
intersects999We assume the added lines are not parallel to any previous line,
and do not cross an intersection of previous lines. It is easy to be convinced
that such cases will split the space into fewer regions. with the first, and
splits each of the previous regions in two, adding 2 more regions. The third
line intersects with both lines, dividing it into three sections. Each
section splits a region, adding 3 more regions. Continuing this way, the $i$th
line intersects $i-1$ lines, resulting in $i$ sections, each intersecting a
region and thus adding a region. Figure 5 illustrates this for the 4th line.
We get:
$r(n)=1+1+2+3+4+\ldots+n=1+\sum_{i=1}^{n}i=1+\frac{n(n+1)}{2}\leq
n^{2}\;(\text{for }n>2)$
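The counting argument above is easy to verify directly (a sketch):

```python
def max_regions(n):
    """Maximal regions created by n lines: 1 + n(n+1)/2 (Zaslavsky, 1975)."""
    return 1 + n * (n + 1) // 2

# rebuild the count line by line: the i-th line adds i regions
r = 1
for i in range(1, 11):
    r += i
assert r == max_regions(10) == 56

# and the bound used in the text: r(n) <= n^2 for n > 2
assert all(max_regions(n) <= n * n for n in range(3, 100))
```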
#### A 1 hidden-layer ReLU network is a line arrangement
Consider a network of the form $y=\mathbf{v}(\mathbf{A}x+\mathbf{b})$ where
the matrix $\mathbf{A}$ projects the input $x$ to $w$ dimensions, and the
vector $\mathbf{v}$ combines them into a weighted sum. This network is linear
over the entire input space: the output is linear in the input.101010We can
then set a linear classifier by setting a threshold on $y$, this will divide
the input space in 2, with a single line. When inserting a ReLU activation
function after the first layer: $y=\mathbf{v}\sigma(\mathbf{A}x+\mathbf{b})$
we get a 1-hidden layer ReLU network. For a network with a width $w$ hidden
layer ($\mathbf{A}\in\mathbb{R}^{w\times 2}$), we get $w$ linear equations,
$\mathbf{A}^{(i)}x+\mathbf{b}^{(i)}$ corresponding to $w$ piecewise linear
functions: each function has a section where it behaves according to its
corresponding equation (the “active” section), and a section where it is 0
(the “rectified” section). The input transitions between the active and the
rectified sections of function $i$ at the boundary given by
$\mathbf{A}^{(i)}x+\mathbf{b}^{(i)}=0$. Thus, each ReLU neuron corresponds to
a line that splits the input space into two: one input region where the neuron
is active, and one where it is rectified. Within each region, the behavior of
the neuron is linear. For a width $w$ network, we have $w$ such lines — a line
arrangement of $w$ lines. The arrangement splits the space into at most
$r(w)\leq w^{2}$ convex cells, where each cell corresponds to a set of active
neurons. Within each cell, the behavior of the input is linear. Such a cell is
called a _linear region_.
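The cell count can be probed empirically (our own sketch; the random layer parameters and the grid resolution are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
w = 4                                   # hidden width: w lines in the plane
A = rng.normal(size=(w, 2))
b = rng.normal(size=w)

# sample a grid and record which neurons are active at each point;
# each distinct on/off pattern corresponds to (at most) one linear region
xs = np.linspace(-3.0, 3.0, 200)
pts = np.array([(x, y) for x in xs for y in xs])
patterns = {tuple(row) for row in (pts @ A.T + b > 0)}
assert len(patterns) <= 1 + w * (w + 1) // 2   # at most r(4) = 11 regions
print(len(patterns))
```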
#### Additional Layers
(Raghu et al., 2017; Pascanu et al., 2013) Additional layers further split the
linear regions. Consider the network after $d-1$ layers, and a given linear
region $R$. Within $R$, the set of active neurons in layers $<d-1$ is
constant, and so within the region the next layer computes a linear function
of the input. As above, the ReLU activation then again gives $w$ line
equations, but this time these equations are only valid within $R$. The next
layer then splits $R$ into at most $r(w)\leq w^{2}$ regions.
Figure 5: By iteratively introducing lines we can count the maximal number of
regions created by $k$ lines. In general position, the 4th introduced line
(d, in green) will intersect its 3 predecessors in 3 different points. These
will create 4 sections, each splitting a region into two (red-blue), hence
adding 4 regions to the total count.
#### Max number of regions in deep networks
Raghu et al. (2017) Consider a network with two hidden layers of width $w$.
The first layer introduces at most $r(w)\leq w^{2}$ convex regions. As we saw
above, for the second layer each region can be split again into at most $r(w)$
regions, resulting in at most $w^{2}\cdot w^{2}=(w^{2})^{2}$ regions. Applying
this recursively, we get that the maximal number of regions in a depth-$d$,
width-$w$ ReLU MLP network satisfies the required bound: $r(w,d)\leq w^{2d}$.
Re-writing $w$ as $2^{\log_{2}w}$ we get: $r(w,d)\leq 2^{\log_{2}w\cdot 2d}$.
## 6 Conclusion
We present a depth separation proof for ReLU MLPs which is fully
self-contained and uses only basic mathematical concepts and proof techniques.
We believe this work has educational value and that newcomers could benefit
from its simplicity.
## References
* Arora et al. (2016) Raman Arora, Amitabh Basu, Poorya Mianjy, and Anirbit Mukherjee. Understanding deep neural networks with rectified linear units. _arXiv preprint arXiv:1611.01491_ , 2016.
* Bianchini & Scarselli (2014) Monica Bianchini and Franco Scarselli. On the complexity of shallow and deep neural network classifiers. In _ESANN_ , 2014.
* Delalleau & Bengio (2011) Olivier Delalleau and Yoshua Bengio. Shallow vs. deep sum-product networks. In _Advances in Neural Information Processing Systems_ , pp. 666–674, 2011.
* Eldan & Shamir (2016) Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In _Conference on learning theory_ , pp. 907–940, 2016.
* Hanin & Rolnick (2019) Boris Hanin and David Rolnick. Complexity of linear regions in deep networks. _arXiv preprint arXiv:1901.09021_ , 2019.
* Malach & Shalev-Shwartz (2019) Eran Malach and Shai Shalev-Shwartz. Is deeper better only when shallow is good? _arXiv preprint arXiv:1903.03488_ , 2019.
* Montufar et al. (2014) Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In _Advances in neural information processing systems_ , pp. 2924–2932, 2014.
* Pascanu et al. (2013) Razvan Pascanu, Guido Montufar, and Yoshua Bengio. On the number of response regions of deep feed forward networks with piece-wise linear activations. _arXiv preprint arXiv:1312.6098_ , 2013.
* Raghu et al. (2017) Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the expressive power of deep neural networks. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_ , pp. 2847–2854. JMLR. org, 2017.
* Safran & Shamir (2016) Itay Safran and Ohad Shamir. Depth separation in relu networks for approximating smooth non-linear functions. _CoRR_ , abs/1610.09887, 2016. URL http://arxiv.org/abs/1610.09887.
* Serra et al. (2017) Thiago Serra, Christian Tjandraatmadja, and Srikumar Ramalingam. Bounding and counting linear regions of deep neural networks. _arXiv preprint arXiv:1711.02114_ , 2017.
* Telgarsky (2015) Matus Telgarsky. Representation benefits of deep feedforward networks. _arXiv preprint arXiv:1509.08101_ , 2015.
* Telgarsky (2016) Matus Telgarsky. Benefits of depth in neural networks. _arXiv preprint arXiv:1602.04485_ , 2016.
* Yarotsky (2017) Dmitry Yarotsky. Error bounds for approximations with deep relu networks. _Neural Networks_ , 94:103–114, 2017.
* Zaslavsky (1975) Thomas Zaslavsky. _Facing up to Arrangements: Face-Count Formulas for Partitions of Space by Hyperplanes_ , volume 154. American Mathematical Soc., 1975.
# Fundamental Limits of
Demand-Private Coded Caching
Chinmay Gurjarpadhye, Jithin Ravi, Sneha Kamath, Bikash Kumar Dey, and Nikhil
Karamchandani J. Ravi has received funding from the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation
programme (Grant No. 714161). The work of B. K. Dey was supported in part by
the Bharti Centre for Communication in IIT Bombay. The work of N.
Karamchandani is supported in part by a Science and Engineering Research Board
(SERB) grant on “Content Caching and Delivery over Wireless Networks”. The
material in this paper was presented in part at the IEEE National Conference
on Communications, Kharagpur, India, February 2020, and will be presented in
part at the IEEE Information Theory Workshop, Riva del Garda, Italy, April
2021. C. Gurjarpadhye, B. K. Dey and N. Karamchandani are with Department of
Electrical Engineering, IIT Bombay, Mumbai, India. J. Ravi is with the Signal
Theory and Communications Department, Universidad Carlos III de Madrid, Spain,
and with the Gregorio Marañón Health Research Institute, Madrid, Spain. S.
Kamath is with Qualcomm, India (emails<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>nikhilk@ee.iitb.ac.in). Part of the work of J. Ravi and S. Kamath was done
when they were at IIT Bombay.
###### Abstract
We consider the coded caching problem with an additional privacy constraint
that a user should not get any information about the demands of the other
users. We first show that a demand-private scheme for $N$ files and $K$ users
can be obtained from a non-private scheme that serves only a subset of the
demands for the $N$ files and $NK$ users problem. We further use this fact to
construct a demand-private scheme for $N$ files and $K$ users from a
particular known non-private scheme for $N$ files and $NK-K+1$ users. It is
then demonstrated that, the memory-rate pair $(M,\min\\{N,K\\}(1-M/N))$, which
is achievable for non-private schemes with uncoded transmissions, is also
achievable under demand privacy. We further propose a scheme that improves on
these ideas by removing some redundant transmissions. The memory-rate trade-
off achieved using our schemes is shown to be within a multiplicative factor
of 3 from the optimal when $K<N$ and of 8 when $N\leq K$. Finally, we give the
exact memory-rate trade-off for demand-private coded caching problems with
$N\geq K=2$.
## I Introduction
In their seminal work [1, 2], Maddah-Ali and Niesen analyzed the fundamental
limits of caching in a noiseless broadcast network from an information-
theoretic perspective. A server has $N$ files of equal size. There are $K$
users, each equipped with a cache that can store $M$ files. In the _placement
phase_ , the cache of each user is populated with some functions of the files.
In the _delivery phase_ , each user requests one of the $N$ files, and the
server broadcasts a message to serve the demands of the users. The goal of the
_coded caching_ problem is to identify the minimum required _rate_ of
transmission from the server for any given cache size $M$. For this setup, [1]
proposed an achievability scheme and by comparing the achievable rate with an
information-theoretic lower bound on the optimal rate, demonstrated the scheme
to be _order optimal_ , i.e., the achievable rate is within a constant
multiplicative factor from the optimal for all system parameters $N,K,M$. The
works [3, 4, 5] mainly focused on obtaining improved achievable rates while
the works [6, 7] focused on improving the lower bounds. Different aspects of
the coded caching problem such as _subpacketization_ [8, 9, 10], _non-uniform
demands_ [11, 12, 13] and _asynchronous demands_ [14, 15] have been
investigated in the past. Fundamental limits of caching have also been studied
for some other network models, see for example [16, 17, 18]. We refer the
reader to [19] for a detailed survey.
The schemes proposed in [1, 2] for the coded caching problem exploited the
broadcast property of the network to reduce the rate of transmission. The fact
that this can lead to a coding gain has also been explored in the related
_index coding_ framework [20], where the users may have a subset of files as
side information and request one file from the server that they do not have
access to. While the broadcast property helps in achieving a coding gain under
such settings, it affects the security and privacy of users. Two types of
security/privacy issues have been studied in index coding. The works [21, 22]
addressed the problem of _file privacy_ where the constraint is that each user
should not get any information about any file other than the requested one.
The work [23] studied index coding with _demand privacy_ where each user
should not get any information about the identity of the file requested by
other users. Demand privacy is also studied in a different context called
_private information retrieval_ where a user downloads her file of interest
from one or more servers and does not want to reveal the identity of the
requested file to any server, see [24] for example.
File privacy for the coded caching problem was investigated in [25, 26]. In
particular, [25] considered the privacy of files against an eavesdropper who
has access to the broadcast link, while [26] studied the caching problem with
the constraint that each user should not get any information about any file
other than the requested one. In [26], a private scheme was proposed using
techniques from secret sharing, and the achievable rate was shown to be order
optimal.
In this paper, we consider the coded caching problem with an extra constraint
that each user should not learn any information about the demands of other
users. Coded caching under demand privacy was studied from an information-
theoretic framework in some recent works [27, 28, 29, 30, 31, 32]. The works
[27] and [28] (a preliminary version of this work) demonstrated that a demand-
private scheme for $N$ files and $K$ users can be obtained from a non-private
scheme for $N$ files and $NK$ users. The rate achievable using such schemes
was shown to be order optimal for all regimes except for the case when $K<N$
and $M<N/K$. A demand-private scheme using MDS codes was also proposed for
$M\geq N/2$ in [27]. In [29], the authors focused on obtaining demand-private
schemes that achieve a weaker privacy condition such that one user should not
get any information about the demand of another user, but may gain some
information about the demand vector. They mainly addressed the
subpacketization requirement for $N=K=2$ in [29] and extended their study to
more general cases in [32]. Demand privacy against colluding users was studied
for device-to-device networks in [31] where a trusted server helps
coordinate among the users to achieve a demand-private scheme. The case of
colluding users for the coded caching problem was considered in [30] where the
privacy condition was such that one user should not learn any information
about the demands of other users even if she is given all the files.
Now we briefly summarize the main contributions of this paper. We first show
that a demand-private scheme for $N$ files and $K$ users can be obtained from
a non-private scheme that serves only a subset of demands for $N$ files and
$NK$ users (Theorem 1). Our first achievability scheme, Scheme A, is built on
this fact. We then propose Scheme B which is based on the idea that permuting
broadcast symbols and not fully revealing the permutation function helps to
achieve demand privacy. Our third achievability scheme, Scheme C, combines the
ideas of Schemes A and B. Using these achievability schemes, we show the order
optimality for the case when $K<N$ and $M<N/K$, thus completing the order
optimality result for all regimes (this result was first shown in a
preliminary version [33] of this work). Finally, we characterize the exact
memory-rate trade-off under demand privacy for the case $N\geq K=2$. We detail
the contributions and describe the organization of the paper in the next
subsection.
### I-A Contributions and organization of the paper
The main contributions of this paper are the following.
1. 1.
Using the fact that a demand-private scheme for $N$ files and $K$ users can be
obtained from a non-private scheme that serves only a structured subset of
demands for $N$ files and $NK$ users, we propose Scheme A for demand-private
caching that uses the non-private scheme for $N$ files and $NK-K+1$ users from
[5] (which we refer to as the YMA scheme). This then implies that the memory-
rate pairs achievable by the YMA scheme for $N$ files and $NK-K+1$ users are
also achievable under demand privacy for $N$ files and $K$ users (Theorem 2 in
Section III-A).
2. 2.
In [1, Example 1], it was shown that the memory-rate pair
$(M,\min\\{N,K\\}(1-M/N))$ can be achieved for non-private schemes without any
coding in the placement phase or in the delivery phase. In Theorem 3 (Section
III-B), we show that this memory-rate pair $(M,\min\\{N,K\\}(1-M/N))$ is also
achievable under demand privacy. For $N\leq K$, the scheme (Scheme B) that
achieves this pair is trivial, while for $K<N$, the scheme is non-trivial.
3. 3.
We then propose a demand-private scheme (Scheme C) that builds on the ideas of
Schemes A and B. The memory-rate pairs achievable using Scheme C are given in
Theorem 4 (Section III-C). Using numerical computations, we demonstrate that,
for $K<N$, a combination of Schemes B and C outperforms Scheme A. In contrast,
Scheme A outperforms Schemes B and C for $N\leq K$.
4. 4.
The characterization of the exact memory-rate trade-off is known to be
difficult for non-private schemes. So, the order optimality of the achievable
rates is investigated. We show that the rates achievable using our schemes are
within a constant multiplicative gap of the optimal non-private rates (Theorem
5 in Section III-E) in all parameter regimes. In particular, we prove this for
$K<N$ and $M<N/K$, the regime that was left open in previous works. This gives
the order optimality result since the optimal rates under privacy is lower
bounded by the optimal non-private rates. This also implies that the optimal
private and non-private rates are always within a constant factor.
5. 5.
One class of instances for which we have the exact trade-off [1, 34] for non-
private schemes is when $K=2$ and $N\geq 2$. For this class, we characterize
the exact trade-off under demand privacy in Theorem 6 (Section III-F). Our
characterization shows that the exact trade-off region under demand privacy
for this class is strictly smaller than the one without privacy. To
characterize the exact trade-off, we give a converse bound that accounts for
the privacy constraints. To the best of our knowledge, this converse bound is
the first of its kind, and this is the first instance where it is
demonstrated that the optimal rates with privacy can be strictly larger than
the optimal rates without privacy.
The rest of the paper is organized as follows. In Section II, we give our
problem formulation. We present our results and briefly describe our proposed
schemes in Section III. All the proofs of our results can be found in Section
IV and the appendices.
### I-B Notations
We denote the set $\\{0,1,\ldots,N-1\\}$ by $[0:N-1]$, the cardinality of a
set $\cal{A}$ by $|\mbox{$\cal{A}$}|$, and the closed interval between two
real numbers $a$ and $b$ by $[a,b]$. For a positive integer $\ell$, if $\pi$
denotes a permutation of $[0:\ell-1]$, and
$Y=(Y_{0},Y_{1},\ldots,Y_{\ell-1})$, with abuse of notation, we define
$\pi(Y)=\left(Y_{\pi^{-1}(i)}\right)_{i\in[0:\ell-1]}$. We denote random
variables by upper case letters (e.g. $X$) and their alphabets by calligraphic
letters (e.g. $\cal{X}$). For a random variable/vector $B$, $len(B)$ denotes
$\log_{2}|\mbox{$\cal{B}$}|$.
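For concreteness, the permutation convention $\pi(Y)=\left(Y_{\pi^{-1}(i)}\right)_{i\in[0:\ell-1]}$ can be sketched as follows (a minimal illustration; the function name is ours):

```python
def apply_perm(pi, Y):
    """pi(Y)[i] = Y[pi^{-1}(i)]: the element at position j moves to position pi[j]."""
    out = [None] * len(Y)
    for j, i in enumerate(pi):  # pi is given as the list [pi(0), ..., pi(l-1)]
        out[i] = Y[j]
    return out

# pi = (0 -> 2, 1 -> 0, 2 -> 1), so pi^{-1}(0) = 1, pi^{-1}(1) = 2, pi^{-1}(2) = 0
assert apply_perm([2, 0, 1], ['Y0', 'Y1', 'Y2']) == ['Y1', 'Y2', 'Y0']
```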
## II Problem formulation and definitions
Consider one server connected to $K$ users through a noiseless broadcast link.
The server has access to $N$ independent files of $F$ bits each. These files
are denoted as $(W_{0},W_{1},\ldots,W_{N-1})$ and each file is uniformly
distributed in $\\{0,1\\}^{F}$. Each user has a cache of size $MF$ bits. The
coded caching problem has two phases: prefetching and delivery. In the
prefetching phase, the server places at most $MF$ bits in the cache of each
user. The cache content of user $k\in[0:K-1]$ is denoted by $Z_{k}$. In the
delivery phase, each user demands one of the $N$ files from the server and
this demand is conveyed secretly to the server. Let the demand of user $k$ be
denoted by $D_{k}\in[0:N-1]$. We define
$\bar{D}=(D_{0},D_{1},\ldots,D_{K-1})$. $\bar{D}$ is independent of the files
$W_{i},i\in[0:N-1]$ and caches $Z_{k},k\in[0:K-1]$, and is uniformly
distributed in $[0:N-1]^{K}$.
In the delivery phase, the server broadcasts a message $X$ to all the $K$
users such that user $k\in[0:K-1]$ can decode file $W_{D_{k}}$ using $X$ and
$Z_{k}$ (see Fig. 1). If message $X$ consists of $RF$ bits, then $R$ is said
to be the rate of transmission. In addition to the decodability of the
demanded file, demand-privacy imposes another constraint that the demands of
all other users should remain perfectly secret from each of the $K$ users. To
ensure demand-privacy, the server can share some randomness denoted by $S_{k}$
with user $k\in[0:K-1]$ in the prefetching phase. This shared randomness is of
negligible size and hence, it is not included in the memory size. We define
$S=(S_{0},S_{1},\ldots,S_{K-1})$. The server also has access to some private
randomness which we denote by $P$. The random variables
$S,P,\\{W_{i}|i\in[0:N-1]\\},\\{D_{k}|k\in[0:K-1]\\}$ are independent of each
other.
Figure 1: Demand-private coded caching model.
Non-private coded caching scheme: A non-private coded caching scheme consists
of the following.
Cache encoding functions: For $k\in[0:K-1]$, the cache encoding function for
the $k$-th user is a map
$\displaystyle C_{k}:{[0:2^{F}-1]}^{N}\rightarrow[0:2^{MF}-1],$ (1)
and the cache content $Z_{k}$ is given by $Z_{k}=C_{k}(\bar{W})$.
Broadcast transmission encoding function: The transmission encoding is a map
$\displaystyle
E:{[0:2^{F}-1]}^{N}\times\mbox{$\cal{D}$}_{0}\times\cdots\times\mbox{$\cal{D}$}_{K-1}\rightarrow[0:2^{RF}-1],$
(2)
and the transmitted message is given by $X=(E(\bar{W},\bar{D}),\bar{D})$.
Decoding functions: User $k$ uses a decoding function
$\displaystyle
G_{k}:\mbox{$\cal{D}$}_{0}\times\cdots\times\mbox{$\cal{D}$}_{K-1}\times[0:2^{RF}-1]\times[0:2^{MF}-1]\rightarrow[0:2^{F}-1].$
(3)
Let $\mbox{$\cal{C}$}=\\{C_{k}:k=0,\ldots,K-1\\}$ and
$\mbox{$\cal{G}$}=\\{G_{k}:k=0,\ldots,K-1\\}$. Then the triple
$(\mbox{$\cal{C}$},E,\mbox{$\cal{G}$})$ is called an $(N,K,M,R)$-non-private
scheme if it satisfies
$\displaystyle W_{D_{k}}=G_{k}(\bar{D},E(\bar{W},\bar{D}),C_{k}(\bar{W}))$ (4)
for all values of $\bar{D}$ and $\bar{W}$. A memory-rate pair $(M,R)$ is said
to be achievable for the $(N,K)$ coded caching problem if there exists an
$(N,K,M,R)$-non-private scheme for some $F$. The memory-rate trade-off
$R^{*}_{N,K}(M)$ for the non-private coded caching problem is defined as
$\displaystyle R^{*}_{N,K}(M)$ $\displaystyle=\inf\\{R:(M,R)\mbox{ is
achievable for $(N,K)$ coded caching problem}\\}.$ (5)
Private coded caching scheme: A private coded caching scheme consists of the
following.
Cache encoding functions: For $k\in[0:K-1]$, the cache encoding function for
the $k$-th user is given by
$\displaystyle
C_{k}:\mbox{$\cal{S}$}_{k}\times\mbox{$\cal{P}$}\times{[0:2^{F}-1]}^{N}\rightarrow[0:2^{MF}-1],$
(6)
and the cache content $Z_{k}$ is given by
$Z_{k}=(C_{k}(S_{k},P,\bar{W}),S_{k})$.
Broadcast transmission encoding function: The transmission encoding functions
are
$\displaystyle
E:{[0:2^{F}-1]}^{N}\times\mbox{$\cal{D}$}_{0}\times\cdots\times\mbox{$\cal{D}$}_{K-1}\times\mbox{$\cal{P}$}\times\mbox{$\cal{S}$}_{0}\times\cdots\times\mbox{$\cal{S}$}_{K-1}\rightarrow[0:2^{RF}-1],$
$\displaystyle
J:\mbox{$\cal{D}$}_{0}\times\cdots\times\mbox{$\cal{D}$}_{K-1}\times\mbox{$\cal{P}$}\times\mbox{$\cal{S}$}_{0}\times\cdots\times\mbox{$\cal{S}$}_{K-1}\rightarrow\mbox{$\cal{J}$}.$
The transmitted message $X$ is given by
$\displaystyle
X=\left(E(\bar{W},\bar{D},P,\bar{S}),J(\bar{D},P,\bar{S})\right).$
Here $\log_{2}|\mbox{$\cal{J}$}|$ is negligible compared to the file size
$F$. (The auxiliary transmission $J$ essentially captures any additional
transmission, beyond the main payload, that does not contribute any rate. Such
auxiliary transmissions of negligible rate are used even in non-private
schemes without being formally stated in most works. For example, the scheme
in [1] works only if the server additionally transmits the demand vector in
the delivery phase. We have chosen to formally define such auxiliary
transmissions here.)
Decoding functions: User $k$ has a decoding function
$\displaystyle
G_{k}:\mbox{$\cal{D}$}_{k}\times\mbox{$\cal{S}$}_{k}\times\mbox{$\cal{J}$}\times[0:2^{RF}-1]\times[0:2^{MF}-1]\rightarrow[0:2^{F}-1].$
(7)
Let $\mbox{$\cal{C}$}=\\{C_{k}:k=0,\ldots,K-1\\}$ and
$\mbox{$\cal{G}$}=\\{G_{k}:k=0,\ldots,K-1\\}$. The tuple
$(\mbox{$\cal{C}$},E,J,\mbox{$\cal{G}$})$ is called an $(N,K,M,R)$-private
scheme if it satisfies the following decoding and privacy conditions:
$\displaystyle
W_{D_{k}}=G_{k}\bigl{(}D_{k},S_{k},J(\bar{D},P,\bar{S}),E(\bar{W},\bar{D},P,\bar{S}),C_{k}(S_{k},P,\bar{W})\bigr{)},\quad\text{
for }k\in[0:K-1],$ $\displaystyle
I\left({\bar{D}_{\tilde{k}}};D_{k},S_{k},J(\bar{D},P,\bar{S}),E(\bar{W},\bar{D},P,\bar{S}),C_{k}(S_{k},P,\bar{W})\right)=0,\quad\text{ for }k\in[0:K-1],$
where $\bar{D}_{\tilde{k}}=(D_{0},\ldots,D_{k-1},D_{k+1},\ldots,D_{K-1})$. The
above conditions are respectively equivalent to
$\displaystyle H(W_{D_{k}}|Z_{k},X{},D_{k})$ $\displaystyle=0,\quad\text{ for
}k\in[0:K-1],$ (8) $\displaystyle I({\bar{D}_{\tilde{k}}};Z_{k},X{},D_{k})$
$\displaystyle=0,\quad\text{ for }k\in[0:K-1].$ (9)
A memory-rate pair $(M,R)$ is said to be achievable with demand privacy for
the $(N,K)$ coded caching problem if there exists an $(N,K,M,R)$-private
scheme for some $F$. The memory-rate trade-off with demand privacy is defined
as
$\displaystyle R^{*p}_{N,K}(M)$ $\displaystyle=\inf\\{R:(M,R)\mbox{ is
achievable with demand privacy for $(N,K)$ coded caching problem}\\}.$ (10)
###### Remark 1 (Different privacy metrics)
A weaker notion of privacy was considered in [29, 32] given by
$\displaystyle I(D_{i};Z_{k},D_{k},X)=0,\quad i\neq k.$ (11)
In words, the privacy condition in (11) requires that user $k\in[0:K-1]$
should not get any information about $D_{i},i\neq k$, but may have some
information about the demand vector. Note that a scheme that satisfies the
privacy condition (9) also satisfies (11). The model in [30] assumed that the
users can collude, and the following stronger notion of privacy metric was
considered
$\displaystyle
I(D_{[0:K-1]\setminus\mbox{$\cal{S}$}};Z_{\mbox{$\cal{S}$}},D_{\mbox{$\cal{S}$}},X|\bar{W})=0,\quad\forall\mbox{$\cal{S}$}\subseteq[0:K-1]$
(12)
where $D_{\mbox{$\cal{S}$}}$ and $Z_{\mbox{$\cal{S}$}}$ denote the demands and
the caches of users in $\cal{S}$, respectively. This stronger privacy metric
is also satisfied by our Scheme A described in Subsection III-A (see Remark
2). In contrast, Schemes B and C, described in Subsections III-B and III-C,
respectively, do not satisfy this stronger privacy metric (see Remark 3).
## III Results
In this section, we present our results that include our achievability
schemes, the tightness of the memory-rate pairs achievable using these schemes
and the exact trade-off for $N\geq K=2$. In Subsections III-A, III-B and
III-C, we discuss Schemes A, B and C, respectively and the memory-rate pairs
achievable using these schemes. We give a comparison of the memory-rate pairs
achievable using Schemes A, B and C in Subsection III-D. In particular, we
show that Scheme A outperforms Schemes B and C for $N\leq K$, while a
combination of Schemes B and C outperforms Scheme A for $K<N$. In Subsection
III-E, we discuss the tightness of the achievable memory-rate pairs, and show
the order optimality result for all regimes. Finally, we present the exact
memory-rate trade-off under demand privacy for the case $N\geq K=2$ in
Subsection III-F.
### III-A Scheme A
It was observed in [27, 28] that a demand-private scheme for $N$ files and $K$
users can be obtained using an existing non-private achievable scheme for $N$
files and $NK$ users as a blackbox. Here every user is associated with a stack
of $N$ virtual users in the non-private caching problem. For example, demand-
private schemes for $N=K=2$ are obtained from the non-private schemes for
$N=2$ and $K=4$. We next show that only certain types of demand vectors of the
non-private scheme are required in the private scheme. To this end, we define
this particular subset of demand vectors.
Consider a non-private coded caching problem with $N$ files and $NK$ users. A
demand vector $\bar{d}$ in this problem is an $NK$-length vector, where the
$j$-th component denotes the demand of user $j$. Then $\bar{d}$ can also be
represented as $K$ subvectors of length $N$ each, i.e.,
$\displaystyle\mbox{$\bar{d}$}=[\mbox{$\bar{d}$}^{(0)},\mbox{$\bar{d}$}^{(1)},\ldots,\mbox{$\bar{d}$}^{(K-1)}]$
(13)
where $\mbox{$\bar{d}$}^{(i)}\in[0:N-1]^{N}$ is an $N$-length vector for all
$i\in[0:K-1]$. We now define a _restricted demand subset_ $\cal{D}_{RS}$.
###### Definition 1 (Restricted Demand Subset $\cal{D}_{RS}$)
The restricted demand subset $\cal{D}_{RS}$ for an $(N,NK)$ coded caching
problem is the set of all $\bar{d}$ such that $\mbox{$\bar{d}$}^{(i)}$ is a
cyclic shift of the vector $(0,1,\ldots,N-1)$ for all $i=0,1,\ldots,K-1$.
Since $N$ cyclic shifts are possible for each $\mbox{$\bar{d}$}^{(i)}$, there
are a total of $N^{K}$ demand vectors in $\cal{D}_{RS}$.
For a given $\mbox{$\bar{d}$}\in\mbox{$\cal{D}_{RS}$}$ and $i\in[0:K-1]$, let
$c_{i}$ denote the number of right cyclic shifts of $(0,1,\ldots,N-1)$ needed
to get $\mbox{$\bar{d}$}^{(i)}$. Then,
$\mbox{$\bar{d}$}\in\mbox{$\cal{D}_{RS}$}$ is uniquely identified by the
vector $\bar{c}(\mbox{$\bar{d}$}):=(c_{0},\ldots,c_{K-1})$. For $N=2$ and
$NK=4$, the demands in $\cal{D}_{RS}$ and their corresponding
$\bar{c}(\bar{d}_{s})$ are given in Table I.
$D_{0}$ | $D_{1}$ | $D_{2}$ | $D_{3}$ | $\bar{c}(\bar{d}_{s})$
---|---|---|---|---
$0$ | $1$ | $0$ | $1$ | $(0,0)$
$0$ | $1$ | $1$ | $0$ | $(0,1)$
$1$ | $0$ | $0$ | $1$ | $(1,0)$
$1$ | $0$ | $1$ | $0$ | $(1,1)$
TABLE I: Restricted Demand Subset $\cal{D}_{RS}$ for $N=2$ and $NK=4$.
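The construction of $\cal{D}_{RS}$ can be sketched as follows (an illustrative enumeration; all names are ours). It reproduces Table I for $N=2$, $K=2$ and confirms the $N^{K}$ count:

```python
from itertools import product

def restricted_demands(N, K):
    """Enumerate D_RS: each length-N subvector is a cyclic shift of (0,...,N-1)."""
    base = list(range(N))
    shifts = [base[N - c:] + base[:N - c] for c in range(N)]  # c right cyclic shifts
    for cbar in product(range(N), repeat=K):  # cbar uniquely identifies d in D_RS
        d = tuple(x for c in cbar for x in shifts[c])
        yield tuple(cbar), d

drs = dict(restricted_demands(2, 2))
assert len(drs) == 2 ** 2            # N^K demand vectors in total
assert drs[(0, 0)] == (0, 1, 0, 1)   # rows of Table I
assert drs[(0, 1)] == (0, 1, 1, 0)
assert drs[(1, 0)] == (1, 0, 0, 1)
assert drs[(1, 1)] == (1, 0, 1, 0)
```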
A non-private scheme for an $(N,K)$ coded caching problem that serves all
demand vectors in a particular set $\mbox{$\cal{D}$}\subseteq[0:N-1]^{K}$ is
called a $\cal{D}$-non-private scheme. We have the following theorem.
###### Theorem 1
If there exists an $(N,NK,M,R)$ $\cal{D}_{RS}$-non-private scheme, then there
exists an $(N,K,M,R)$-private scheme.
The proof of Theorem 1 is given in Subsection IV-A. The proof follows by
showing a construction of an $(N,K,M,R)$-private scheme using an $(N,NK,M,R)$
$\cal{D}_{RS}$-non-private scheme as a blackbox. The following example shows a
construction of $(2,2,\frac{1}{3},\frac{4}{3})$-private scheme from
$(2,4,\frac{1}{3},\frac{4}{3})$-$\cal{D}_{RS}$-non-private scheme. This
particular non-private scheme is from [34]. It is important to note that the
memory-rate pair $(\frac{1}{3},\frac{4}{3})$ is not achievable for $N=2,K=4$
even without any privacy requirement. Thus, we observe that there exist memory-rate
pairs that are achievable for the $\cal{D}_{RS}$-non-private scheme, but not
achievable for the non-private scheme which serves all demands.
###### Example 1
We consider the demand-private coded caching problem for $N=2,K=2,M=1/3$. It
was shown in [34] that for memory $M=1/3$, the optimum non-private rate for
$N=2,K=4$ satisfies $R^{*}_{2,4}(1/3)>4/3$. Next we give a scheme which
achieves a rate $4/3$ under demand privacy for $N=2,K=2,M=1/3$. The other
known demand-private schemes also do not achieve $R=4/3$ for $N=2,K=2$. See
Fig. 5 for reference.
Let $A$ and $B$ denote the two files. We will now give a scheme which achieves
a rate $4/3$ for $M=1/3$ with $F=3l$ for some positive integer $l$. We denote
the 3 segments of $A$ and $B$ by $A_{1},A_{2},A_{3}$ and $B_{1},B_{2},B_{3}$
respectively, of $l$ bits each. First let us consider a $\cal{D}_{RS}$-non-
private scheme for $N=2$ and $K=4$ from [34]. Let $C_{i,j}(A,B),i,j=0,1$, as
shown in Table II, correspond to the cache content of user $2i+j$ in the
$\cal{D}_{RS}$-non-private scheme. The transmission $T_{(p,q)}(A,B),p,q=0,1$,
as given in Table III, is chosen for the demand
$\bar{d}\in\mbox{$\cal{D}_{RS}$}$ such that $(p,q)=\bar{c}(\bar{d})$. Using
Tables II and III, it is easy to verify that the non-private scheme satisfies
the decodability condition for demands in $\cal{D}_{RS}$. From this scheme, we
obtain a demand-private scheme for $N=2,K=2$ as follows.
Cache | Cache Content
---|---
$C_{0,0}(A,B)$ | $A_{1}\oplus B_{1}$
$C_{0,1}(A,B)$ | $A_{3}\oplus B_{3}$
$C_{1,0}(A,B)$ | $A_{2}\oplus B_{2}$
$C_{1,1}(A,B)$ | $A_{1}\oplus A_{2}\oplus A_{3}\oplus B_{1}\oplus B_{2}\oplus B_{3}$
TABLE II: Choices for the caches of user 0 and user 1.
$T_{(0,0)}(A,B)$ | $B_{1},B_{2},A_{3},A_{1}\oplus A_{2}\oplus A_{3}$
---|---
$T_{(0,1)}(A,B)$ | $A_{2},A_{3},B_{1},B_{1}\oplus B_{2}\oplus B_{3}$
$T_{(1,0)}(A,B)$ | $B_{2},B_{3},A_{1},A_{1}\oplus A_{2}\oplus A_{3}$
$T_{(1,1)}(A,B)$ | $A_{1},A_{2},B_{3},B_{1}\oplus B_{2}\oplus B_{3}$
TABLE III: Transmissions for $(2,2,\frac{1}{3},\frac{4}{3})$-private scheme.
Let the shared key $S_{k},k=0,1$ of user $k$ be a uniform binary random
variable. The cache encoding functions and the transmission encoding function
are denoted as
$\displaystyle C_{k}(S_{k},A,B)$ $\displaystyle=C_{k,S_{k}}(A,B)\text{ for
}k=0,1,$ $\displaystyle E(A,B,D_{0},D_{1},S_{0},S_{1})$
$\displaystyle=T_{(D_{0}\oplus S_{0},D_{1}\oplus S_{1})}(A,B).$
User $k$ chooses $C_{k,S_{k}}(A,B)$ given in Table II as the cache encoding
function. In the delivery phase, for given $(S_{0},S_{1})$ and
$(D_{0},D_{1})$, the server broadcasts $T_{(D_{0}\oplus S_{0},D_{1}\oplus
S_{1})}(A,B)$ as the main payload and $(D_{0}\oplus S_{0},D_{1}\oplus S_{1})$
as the auxiliary transmission. For such a transmission, the decodability
follows from the decodability of the chosen non-private scheme.
Further, the broadcast transmission will not reveal any information about the
demand of one user to the other user since each transmission
$T_{(p,q)}(A,B)$ occurs with the same probability for every demand vector
$(D_{0},D_{1})$, and $S_{i}$ acts as a one-time pad for $D_{i}$ for each $i=0,1$. Here, all the
transmissions consist of $4l$ bits (neglecting the 2 bits for $(D_{0}\oplus
S_{0},D_{1}\oplus S_{1})$). Since $F=3l$, this scheme achieves a rate $R=4/3$.
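The decodability asserted above can be checked mechanically. The following sketch (our own encoding of Tables II and III as XOR masks over the six segments; the helper names and segment length are illustrative) verifies that, for every demand pair and every realization of the shared keys, each user recovers its demanded file by elimination over GF(2):

```python
import secrets

L = 4  # bytes per segment (the paper's l; an arbitrary illustrative choice)

def bxor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

# Segment indices: 0..2 -> A1,A2,A3 and 3..5 -> B1,B2,B3.
segs = [secrets.token_bytes(L) for _ in range(6)]

def combo(mask):
    """XOR of the segments selected by the bit mask."""
    v = bytes(L)
    for i in range(6):
        if (mask >> i) & 1:
            v = bxor(v, segs[i])
    return v

# Caches C_{k,s} (Table II) and transmissions T_{(p,q)} (Table III) as bit masks.
CACHE = {(0, 0): 0b001001, (0, 1): 0b100100,
         (1, 0): 0b010010, (1, 1): 0b111111}
TRANS = {(0, 0): [0b001000, 0b010000, 0b000100, 0b000111],
         (0, 1): [0b000010, 0b000100, 0b001000, 0b111000],
         (1, 0): [0b010000, 0b100000, 0b000001, 0b000111],
         (1, 1): [0b000001, 0b000010, 0b100000, 0b111000]}

def decode(eqs, want):
    """Recover the segments in `want` from XOR equations by GF(2) elimination."""
    basis = {}  # leading bit -> (mask, value), invariant: value = combo(mask)
    for mask, val in eqs:
        while mask:
            lead = mask.bit_length() - 1
            if lead not in basis:
                basis[lead] = (mask, val)
                break
            bm, bv = basis[lead]
            mask, val = mask ^ bm, bxor(val, bv)
    out = []
    for i in want:
        m, v = 1 << i, bytes(L)
        while m:
            bm, bv = basis[m.bit_length() - 1]  # KeyError would mean undecodable
            m, v = m ^ bm, bxor(v, bv)
        out.append(v)
    return out

# Exhaustive check over all demands and shared keys.
for D0, D1, S0, S1 in [(a, b, c, d) for a in (0, 1) for b in (0, 1)
                       for c in (0, 1) for d in (0, 1)]:
    p, q = D0 ^ S0, D1 ^ S1  # one-time-padded transmission index
    payload = [(m, combo(m)) for m in TRANS[(p, q)]]
    for k, (Sk, Dk) in enumerate([(S0, D0), (S1, D1)]):
        cmask = CACHE[(k, Sk)]
        want = [0, 1, 2] if Dk == 0 else [3, 4, 5]
        assert decode([(cmask, combo(cmask))] + payload, want) == [segs[i] for i in want]
print("all 16 (demand, key) combinations decode correctly")
```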
Example 1 showed that there exists an $(M,R)$ pair that is not achievable for
$N$ files and $NK$ users, but it is achievable with demand privacy for $N$
files and $K$ users. This was possible because a $\cal{D}_{RS}$-non-private
scheme needs to serve only a subset of demands. Our Scheme A as described
later utilizes this fact, and obtains a general scheme for any parameters $N$
and $K$. Specifically, we show that a $\cal{D}_{RS}$-non-private scheme for
$N$ files and $NK$ users can be obtained from the non-private scheme given in
[5] for $N$ files and $NK-K+1$ users. The memory-rate pairs achievable using
Scheme A are presented in Theorem 2. We use the following lemma to prove
Theorem 2.
###### Lemma 1
For the $(M,R)$ pairs given by
$\displaystyle(M,R)=\left(\frac{Nr}{NK-K+1},\frac{{NK-K+1\choose
r+1}-{NK-K+1-N\choose r+1}}{{NK-K+1\choose r}}\right),\quad\text{ for
}r\in\\{0,1,\ldots,NK-K\\}$
which are achievable for the non-private coded caching problem with $N$ files
and $NK-K+1$ users by the YMA scheme [5], there exists an $(N,NK,M,R)$
$\cal{D}_{RS}$-non-private scheme.
The proof of Lemma 1 can be found in Subsection IV-B. The proof follows by
dividing $NK$ users in the $\cal{D}_{RS}$-non-private scheme into two groups
with the first group containing $K-1$ users and the second group containing
$NK-K+1$ users. Users in the second group follow the prefetching of the YMA
scheme while users in the first group follow coded prefetching. In particular,
the users in the first group follow the coded prefetching of Type III caching
discussed in [35]. In the delivery phase, for a given
$\bar{d}\in\mbox{$\cal{D}_{RS}$}$, the server chooses the transmission of the
YMA scheme corresponding to the demands of the second group of users. Due to the
special nature of the demand vectors in $\cal{D}_{RS}$, using this
transmission, the demands of all users in the first group can also be served.
Scheme A: Scheme A consists of two steps. In the first step, a
$\cal{D}_{RS}$-non-private scheme is obtained from the non-private YMA scheme
for $N$ files and $NK-K+1$ users. In the second step, an $(N,K,M,R)$-private
scheme is obtained using this $\cal{D}_{RS}$-non-private scheme as a blackbox.
Scheme A achieves the memory-rate pairs given in the following theorem.
###### Theorem 2
There exists an $(N,K,M,R)$-private scheme with the following memory-rate
pair:
$\displaystyle(M,R)=\left(\frac{Nr}{NK-K+1},\frac{{NK-K+1\choose r+1}-{NK-
K-N+1\choose r+1}}{{NK-K+1\choose r}}\right),\quad\mbox{ for
}r\in\\{0,\ldots,NK-K+1\\}.$ (14)
###### Proof.
The given memory-rate pair is achievable by the YMA scheme for $N$ files and
$NK-K+1$ users. So, the theorem follows from Theorem 1 and Lemma 1. ∎
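The memory-rate pairs in (14) can be evaluated directly (an illustrative computation; the function name is ours):

```python
from math import comb

def scheme_a_points(N, K):
    """Memory-rate pairs of Theorem 2: YMA points for N files and NK-K+1 users."""
    Kp = N * K - K + 1
    return [(N * r / Kp,
             (comb(Kp, r + 1) - comb(Kp - N, r + 1)) / comb(Kp, r))
            for r in range(Kp + 1)]

pts = scheme_a_points(2, 2)      # N = K = 2, so Kp = 3
assert pts[0] == (0.0, 2.0)      # M = 0: rate N, i.e. transmit every file
assert pts[1] == (2 / 3, 1.0)
assert pts[-1] == (2.0, 0.0)     # M = N: everything is cached
```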
###### Remark 2
If a private scheme is derived from a $\cal{D}_{RS}$-non-private scheme using
the construction described in the proof of Theorem 1, then it also satisfies
the stronger notion of privacy metric (12). This can be shown by replacing
${\bar{D}_{\tilde{k}}}$ by $D_{[0:K-1]\setminus\mbox{$\cal{S}$}}$, $D_{k}$ by
$D_{\mbox{$\cal{S}$}}$, and $Z_{k}$ by $Z_{\mbox{$\cal{S}$}}$ in the proof of
privacy that led to (26). Since Scheme A is obtained using a
$\cal{D}_{RS}$-non-private scheme as a blackbox, it also satisfies the
stronger privacy condition (12).
### III-B Scheme B
Now we describe Scheme B. For $N\leq K$, Scheme B is trivial, where the caches
of all users are populated with the same $M/N$ fraction of each file in the
placement phase. In the delivery phase, the uncached parts of all files are
transmitted. In this scheme, all users get all files, and the rate of
transmission is given by $N(1-M/N)=N-M$. Since the broadcast transmission is
independent of the demands, it clearly satisfies the privacy condition (9).
However, if the number of users is less than the number of files, then this
scheme is very wasteful in terms of rate. Next we give an outline of Scheme B
for $K<N$. It achieves a rate $K(1-M/N)$, which in this case improves on the
rate $N-M$.
For $K<N$, let us first consider the non-private scheme [1, Example 1] which
achieves rate $K(1-M/N)$. In this scheme, all users store the same $M/N$
fraction of each file in the placement phase. In the delivery phase, the
server transmits $K$ components where $i$-th component consists of the
uncached part of the file demanded by user $i$. However, this scheme does not
ensure demand privacy since if the $i$-th component is different from the
$j$-th component, $i\neq j$, then user $i$ learns that $D_{j}\neq D_{i}$. This
clearly violates demand privacy. For $K<N$, the placement phase in Scheme B is
the same as that for $N\leq K$. In the delivery phase, the server transmits
$K$ components without violating demand privacy. We next illustrate Scheme B
using an example with $N>2$ and $K=2$.
###### Example 2
Let us consider that there are two users and more than two files, i.e., $N>2$
and $K=2$. In the placement phase, each user stores $M/N$ fraction of each
file. In the delivery phase, first let us consider the case of $D_{0}\neq
D_{1}$. In this case, the server transmits two components which correspond to
the uncached parts of each demanded file. To achieve privacy, the position
where the uncached part of $W_{D_{0}}$ is placed, is selected from one out of
two possible choices uniformly at random. The uncached fraction of $W_{D_{1}}$
is placed in the other position. These positions are conveyed to each user in
the auxiliary transmissions. The random variable to convey the position to
user $0$ is XOR-ed with shared randomness $S_{0}$. Since $S_{0}$ is known only
to user $0$, it acts as a one-time pad. Similarly, $S_{1}$ helps in
protecting the privacy against user $0$. When $D_{0}=D_{1}$, the uncached part
of the file is placed in a position chosen randomly. The other position is
filled with random bits. Since one user does not have any information about
the component from which the other user’s demanded file is decoded, and since
the files are independent of the demands, this scheme preserves the demand
privacy.
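The privacy argument in Example 2 can be verified exhaustively on a toy instance (our simplification: $M=0$ and one-bit uncached parts; all names are ours). The check below confirms both decodability and that user 0's view has the same distribution whatever $D_{1}$ is:

```python
from itertools import product
from collections import Counter

# Toy instance of the idea in Example 2: K = 2 users, N = 3 files, and (our
# simplification) M = 0 with one-bit uncached parts w[0..2].

def broadcast(w, D0, D1, pos0, fill, S0, S1):
    """Server side: demanded parts in permuted positions; positions one-time-padded."""
    comp = [None, None]
    comp[pos0] = w[D0]
    if D0 != D1:
        pos1 = 1 - pos0
        comp[pos1] = w[D1]
    else:
        pos1 = pos0
        comp[1 - pos0] = fill  # random bit in the unused slot
    return (pos0 ^ S0, pos1 ^ S1, comp[0], comp[1])

views = {}  # (D0, D1) -> distribution of user 0's view
for D0, D1 in product(range(3), repeat=2):
    c = Counter()
    for w in product((0, 1), repeat=3):
        for pos0, fill, S0, S1 in product((0, 1), repeat=4):
            j0, j1, c0, c1 = broadcast(w, D0, D1, pos0, fill, S0, S1)
            # decodability: user k finds its part at the unpadded position
            assert [c0, c1][j0 ^ S0] == w[D0]
            assert [c0, c1][j1 ^ S1] == w[D1]
            c[(D0, S0, j0, j1, c0, c1)] += 1
    views[(D0, D1)] = c

# privacy against user 0: her view has the same distribution for every D1
for D0 in range(3):
    assert views[(D0, 0)] == views[(D0, 1)] == views[(D0, 2)]
print("decodable, and user 0's view is independent of D1")
```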
For $K<N$, Scheme B is a generalization of the scheme presented in Example 2.
Scheme B achieves the memory-rate pairs given in the following theorem.
###### Theorem 3
There exists an $(N,K,M,R)$-private scheme with the memory-rate pair
$(M,\min\\{N,K\\}(1-M/N))$.
Full description of Scheme B and the proof of Theorem 3 are provided in
Subsection IV-C.
### III-C Scheme C
For $K<N$, the broadcast in Scheme A contains symbols which are not necessary
for decoding, but are still broadcast to preserve privacy. In this section,
we propose Scheme C which gets rid of such redundant symbols using the idea of
permuting the broadcast symbols as in Scheme B, thus improving the memory-rate
trade-off. In Theorem 4, we give the memory-rate pairs achievable using Scheme
C.
###### Theorem 4
There exists an $(N,K,M,R)$-private scheme with the following memory-rate
pair:
$\displaystyle(M,R)=$ $\displaystyle\left(\frac{N\sum_{s=t}^{NK-1}{NK-1\choose
s-1}r^{NK-s-1}}{\sum_{s=t}^{NK-1}{NK\choose
s}r^{NK-s-1}},\frac{\sum_{s=t+1}^{NK}[{NK\choose s}-{NK-K\choose
s}]r^{NK-s}}{\sum_{s=t}^{NK-1}{NK\choose s}r^{NK-s-1}}\right),$
$\displaystyle\qquad\mbox{ for }t\in\\{1,\ldots,NK-1\\},\;r\in[1,N-1].$ (15)
Note that for the memory-rate pairs in Theorem 4, we have 2 free parameters
$t$ and $r$. By fixing the value of $r$, one can obtain a memory-rate curve by
varying the value of $t$. We have observed through numerical computations that
the memory-rate curve achieved for $r=r_{1}$ is better than that for $r=r_{2}$
if $r_{1}>r_{2}$ (see Fig. 2). The memory-rate curve for $r<N-1$, although
empirically suboptimal compared to $r=N-1$, is useful in showing the order
optimality result presented in Theorem 5.
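The pairs in Theorem 4 can be evaluated exactly. The sketch below (the function name is ours) transcribes (15) for integer values of $r$ and reproduces the point used in Example 3.

```python
from fractions import Fraction
from math import comb

def theorem4_pair(N, K, t, r):
    """Evaluate the (M, R) pair of (15) exactly for integer r."""
    denom = sum(comb(N * K, s) * r ** (N * K - s - 1) for s in range(t, N * K))
    M = Fraction(N * sum(comb(N * K - 1, s - 1) * r ** (N * K - s - 1)
                         for s in range(t, N * K)), denom)
    # comb(n, k) returns 0 when k > n, so the second binomial vanishes
    # for s > NK - K, exactly as intended in the rate expression.
    R = Fraction(sum((comb(N * K, s) - comb(N * K - K, s)) * r ** (N * K - s)
                     for s in range(t + 1, N * K + 1)), denom)
    return M, R

# The point used in Example 3 (N=3, K=2, t=3, r=2):
assert theorem4_pair(3, 2, 3, 2) == (Fraction(195, 116), Fraction(69, 116))
```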
###### Remark 3
For $K<N$, Scheme B does not satisfy the stronger privacy metric in (12).
Since Scheme C builds on the ideas of Scheme B, it also does not satisfy this
stronger privacy metric. The fact that Scheme B does not satisfy (12) can be
intuitively observed from Example 2. For $K=2$, the privacy condition (12) is
achieved if there is no leakage of privacy after one user gets to know all the
files. If one user has all the files, then she can easily verify that the part
of the broadcast that she has not used for decoding is some random bits or a
part of a file. Thus, she can infer some knowledge about the demand of the
other user in Scheme B. This was also observed in [30] (see [30, Example 1]).
Figure 2: Memory-rate pairs in Theorem 4 are plotted for different values of
$r$. The first figure is for $N=5,K=3$ and the second one is for $N=5,K=7$.
Next we illustrate Scheme C for $N=3,K=2$.
###### Example 3
Let us consider the demand-private coded caching problem for $N=3$ files and
$K=2$ users. By choosing $r=2$ and $t=3$ in the expression for memory in
Theorem 4, we get $M=\frac{195}{116}$. The same parameters give
$R=\frac{69}{116}$. Next we describe the scheme which achieves this memory-
rate pair with $F=116l$ for some positive integer $l$. We partition each file
$W_{i},i\in[0:2]$ into $\sum_{j=t}^{NK-1}{NK\choose j}=\sum_{j=3}^{5}{6\choose
j}=41$ segments of three different sizes. These segments are grouped into
three groups such that all segments in one group have the same size. The
segments are labelled by some subsets of $[0:NK-1]=[0:5]$. The segments of
$W_{i}$ are $W_{i,\mbox{$\cal{R}$}}$;
$\mbox{$\cal{R}$}\subset[0:5],|\mbox{$\cal{R}$}|=3,4,5$. These segments are of
different sizes, and these are grouped into 3 groups as
$\displaystyle\mbox{$\cal{T}$}^{i}_{5}$
$\displaystyle=(W_{i,\mbox{$\cal{R}$}})_{\mbox{$\cal{R}$}\subset[0:5],|\mbox{$\cal{R}$}|=5},$
$\displaystyle\mbox{$\cal{T}$}^{i}_{4}$
$\displaystyle=(W_{i,\mbox{$\cal{R}$}})_{\mbox{$\cal{R}$}\subset[0:5],|\mbox{$\cal{R}$}|=4},$
$\displaystyle\mbox{$\cal{T}$}^{i}_{3}$
$\displaystyle=(W_{i,\mbox{$\cal{R}$}})_{\mbox{$\cal{R}$}\subset[0:5],|\mbox{$\cal{R}$}|=3}.$
The size of segment $W_{i,\mbox{$\cal{R}$}},i\in[0:2]$ is chosen as follows:
$len(W_{i,\mbox{$\cal{R}$}})=\left\\{\begin{array}[]{lcl}l&\text{if
}|\mbox{$\cal{R}$}|=5\\\ \\\ rl=2l&\text{if }|\mbox{$\cal{R}$}|=4\\\ \\\
r^{2}l=4l&\text{if }|\mbox{$\cal{R}$}|=3.\\\ \end{array}\right.$
Thus, each segment in $\mbox{$\cal{T}$}^{i}_{5},\mbox{$\cal{T}$}^{i}_{4}$ and
$\mbox{$\cal{T}$}^{i}_{3}$ has respectively $l,2l$ and $4l$ bits. Then, for
all $i\in[0:2]$, we have
$\displaystyle len(W_{i})$
$\displaystyle=(|\mbox{$\cal{T}$}^{i}_{5}|+|\mbox{$\cal{T}$}^{i}_{4}|\times
r+|\mbox{$\cal{T}$}^{i}_{3}|\times r^{2})l$ $\displaystyle=(6+15\times
2+20\times 4)l$ $\displaystyle=116l.$
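The segment bookkeeping above can be checked mechanically; a small sketch with our own variable names:

```python
from math import comb

N, K, t, r, l = 3, 2, 3, 2, 1
# segment sizes in units of l: |R| = 5 -> l, |R| = 4 -> rl, |R| = 3 -> r^2 l
sizes = {j: r ** (N * K - 1 - j) * l for j in (3, 4, 5)}
num_segments = sum(comb(N * K, j) for j in (3, 4, 5))
file_len = sum(comb(N * K, j) * sizes[j] for j in (3, 4, 5))

assert sizes == {3: 4 * l, 4: 2 * l, 5: 1 * l}
assert num_segments == 41          # 20 + 15 + 6 segments per file
assert file_len == 116 * l         # matches len(W_i) = 116l above
```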
Caching: The cache content of user $k\in\\{0,1\\}$ is determined by the key
$S_{k},k=0,1$ which is shared only between the server and user $k$. Shared key
$S_{k},k=0,1$ is distributed as $S_{k}\sim unif\\{[0:N-1]\\}=unif\\{[0:2]\\}$.
The cache contents of each user are grouped into three parts. The
$j^{th},j=1,2,3$ part of user $k\in\\{0,1\\}$ is denoted by
$\mbox{$\cal{G}$}_{k,j}$ and is shown in Table IV. Thus, the number of bits
stored at one user is given by $3\left({5\choose 4}+2\times{5\choose
3}+4\times{5\choose 2}\right)l=195l$. Thus, we have $M=\frac{195}{116}$. Other
than $S_{k}$ the server also places some additional random keys of negligible
size in the cache of user $k\in\\{0,1\\}$. These will be used as keys for one-
time pad in the delivery phase.
$\mbox{$\cal{G}$}_{k,1}$ | $(W_{i,\mbox{$\cal{R}$}}|W_{i,\mbox{$\cal{R}$}}\in\mbox{$\cal{T}$}^{i}_{5}\mbox{ and }S_{k}+3k\in\mbox{$\cal{R}$})_{i=0,1,2}$
---|---
$\mbox{$\cal{G}$}_{k,2}$ | $(W_{i,\mbox{$\cal{R}$}}|W_{i,\mbox{$\cal{R}$}}\in\mbox{$\cal{T}$}^{i}_{4}\mbox{ and }S_{k}+3k\in\mbox{$\cal{R}$})_{i=0,1,2}$
$\mbox{$\cal{G}$}_{k,3}$ | $(W_{i,\mbox{$\cal{R}$}}|W_{i,\mbox{$\cal{R}$}}\in\mbox{$\cal{T}$}^{i}_{3}\mbox{ and }S_{k}+3k\in\mbox{$\cal{R}$})_{i=0,1,2}$
TABLE IV: Cache contents of user $k,k=0,1$.
Delivery: In the delivery phase, for given demands $(D_{0},D_{1})$, we first
construct an expanded demand vector $\bar{d}$ of length $6$ such that
$\bar{d}\in\mbox{$\cal{D}_{RS}$}$ defined in Definition 1. The vector
$\bar{d}$ is given by $\bar{d}=(\bar{d}^{(0)},\bar{d}^{(1)})$, where
$\bar{d}^{(k)},k=0,1$ is obtained by applying $S_{k}\ominus D_{k}$ right
cyclic shifts to the vector $(0,1,2)$, where $\ominus$ denotes modulo $3$
subtraction. That is, for $k=0,1$, $d_{i}^{(k)}=i-(S_{k}-D_{k})\mod 3$. Having
defined vector $\bar{d}$, we now define symbols $Y_{\mbox{$\cal{R}$}}$ for
$\mbox{$\cal{R}$}\subset[0:5]$ and $|\mbox{$\cal{R}$}|=4,5,6$ as follows:
$\displaystyle
Y_{\mbox{$\cal{R}$}}=\bigoplus_{u\in\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}\setminus\\{u\\}}$
where $d_{u}$ is the $(u+1)$-th item in $\bar{d}$. In particular, for
$\mbox{$\cal{R}$}=[0:5]$, we have
$\displaystyle
Y_{[0:5]}=\bigoplus_{u\in[0:5]}W_{d_{u},[0:5]\setminus\\{u\\}}.$
Symbol $Y_{[0:5]}$ as defined above is a part of the main payload in the
broadcast transmission which needs $l$ bits.
To give the other parts of the broadcast, we define symbols
$W_{\mbox{$\cal{R}$}}$ and $V_{\mbox{$\cal{R}$}}$ for
$\mbox{$\cal{R}$}\subset[0:5]$ and $|\mbox{$\cal{R}$}|=4,5$ as follows:
$\displaystyle W_{\mbox{$\cal{R}$}}=(W_{0,\mbox{$\cal{R}$}}\oplus
W_{1,\mbox{$\cal{R}$}},W_{1,\mbox{$\cal{R}$}}\oplus W_{2,\mbox{$\cal{R}$}})$
and
$\displaystyle V_{\mbox{$\cal{R}$}}$ $\displaystyle=Y_{\mbox{$\cal{R}$}}\oplus
W_{\mbox{$\cal{R}$}}.$
Note that for $|\mbox{$\cal{R}$}|=4$, $W_{\mbox{$\cal{R}$}}$ has two parts, each
of length $2l$ bits, and $Y_{\mbox{$\cal{R}$}}$ has a length of $4l$ bits. We
further define sets $V_{4}$ and $V_{5}$ as follows:
$\displaystyle
V_{4}=\\{V_{\mbox{$\cal{R}$}}|\mbox{$\cal{R}$}\cap\\{S_{0},S_{1}+3\\}\neq\phi,|\mbox{$\cal{R}$}|=4\\}$
and
$\displaystyle V_{5}$
$\displaystyle=\\{V_{\mbox{$\cal{R}$}}||\mbox{$\cal{R}$}|=5\\}.$
Observe that $V_{4}$ and $V_{5}$ contain 14 symbols each of size $4l$ bits and
6 symbols each of size $2l$ bits, respectively. The server picks permutation
functions $\pi_{4}(\cdot)$ and $\pi_{5}(\cdot)$ uniformly at random from the
symmetric groups (the symmetric group over a set is the group whose elements
are all the bijections from the set to itself, with composition of functions
as the group operation) of permutations of $[0:13]$ and $[0:5]$,
respectively, and broadcasts $\pi_{4}(V_{4})$ and $\pi_{5}(V_{5})$. The
server does not fully reveal these permutation functions to any of the
users. The position of any symbol $V_{\mbox{$\cal{R}$}}\in V_{i}$, $i=4,5$, in
$\pi_{i}(V_{i})$ is privately conveyed to user $k$ if and only if
$S_{k}+3k\in\mbox{$\cal{R}$}$. This private transmission of positions is
achieved using one-time pads whose keys are deployed in the caches of
respective users in the caching phase. The main payload of the broadcast
$X^{\prime}$ can be written as
$X^{\prime}=(X_{0},X_{1},X_{2})=(Y_{[0:5]},\pi_{4}(V_{4}),\pi_{5}(V_{5})).$
Thus, the total number of transmitted bits is
$(1+6\times 2+14\times 4)l=69l.$
So, the rate of transmission is $\frac{69}{116}.$ Note that $X^{\prime}$ is
only the main payload. Along with $X^{\prime}$, the server also broadcasts
some auxiliary transmission $J=(S_{0}\ominus D_{0},S_{1}\ominus
D_{1},J^{\prime})=(\bar{S}\ominus\bar{D},J^{\prime})$. Here, $J^{\prime}$
contains the positions of various symbols in $X_{1}$ and $X_{2}$ encoded using
one-time pad as discussed above. Thus, the complete broadcast transmission is
$X=(X^{\prime},J)$.
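The symbol counts behind the $69l$ transmitted bits can be verified by direct enumeration over all key values. This is our sketch; the permutations and auxiliary transmissions are ignored since they do not affect the count.

```python
from itertools import combinations

l = 1
for S0 in range(3):
    for S1 in range(3):
        special = {S0, S1 + 3}     # S_0 in [0:2], S_1 + 3 in [3:5]
        # V_4: 4-subsets of [0:5] that meet {S_0, S_1 + 3}; V_5: all 5-subsets
        V4 = [R for R in combinations(range(6), 4) if special & set(R)]
        V5 = list(combinations(range(6), 5))
        assert len(V4) == 14 and len(V5) == 6
        # Y_[0:5] (l bits) + 6 symbols of 2l bits + 14 symbols of 4l bits
        total_bits = 1 * l + len(V5) * 2 * l + len(V4) * 4 * l
        assert total_bits == 69 * l
```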
Remark: Here, note that $V_{5}$ contains all $V_{\mbox{$\cal{R}$}}$ with
$|\mbox{$\cal{R}$}|=5$. However, $V_{4}$ does not contain all
$V_{\mbox{$\cal{R}$}}$ with $|\mbox{$\cal{R}$}|=4$. For example, if $S_{0}=0$
and $S_{1}=0$, then $V_{4}$ does not contain $V_{\\{1,2,4,5\\}}$. This is similar
to avoiding some redundant transmissions in the leader-based YMA scheme [5]
compared to the scheme in [1]. This is the main reason for getting lower rates
using this scheme compared to the rates in Theorem 2.
Decoding: For user $k\in\\{0,1\\}$, let us first consider the recovery of
segments belonging to $\mbox{$\cal{T}$}^{D_{k}}_{i}$, $i=3,4$. This is done
using symbols from $X_{1}$ and $X_{2}$. All symbols
$W_{D_{k},\mbox{$\cal{R}$}}\in\mbox{$\cal{T}$}^{D_{k}}_{i}$ such that
$S_{k}+3k\in\mbox{$\cal{R}$}$ (all symbols in set $\mbox{$\cal{G}$}_{k,6-i}$)
are cached at user $k$. User $k$ decodes the remaining symbols in
$\mbox{$\cal{T}$}^{D_{k}}_{i}$, i.e., $W_{D_{k},\mbox{$\cal{R}$}}$ such that
$|\mbox{$\cal{R}$}|=i,S_{k}+3k\notin\mbox{$\cal{R}$}$ and
$\mbox{$\cal{R}$}\subset[0:5]$ as follows:
$\displaystyle\widehat{W}_{D_{k},\mbox{$\cal{R}$}}=V_{\mbox{$\cal{R}$}^{+}}\oplus
W_{\mbox{$\cal{R}$}^{+}}\oplus\left(\bigoplus_{u\in{\mbox{$\cal{R}$}}}W_{d_{u},\mbox{$\cal{R}$}^{+}\setminus\\{u\\}}\right)$
(16)
where $\mbox{$\cal{R}$}^{+}=\\{S_{k}+3k\\}\cup\mbox{$\cal{R}$}$. Here,
$V_{\mbox{$\cal{R}$}^{+}}$ is a part of $\pi_{i+1}(V_{i+1})$ and its position
in $\pi_{i+1}(V_{i+1})$ has been revealed to user $k$ since
$S_{k}+3k\in\mbox{$\cal{R}$}^{+}$. The symbols $W_{\mbox{$\cal{R}$}^{+}}$ and
$W_{d_{u},\mbox{$\cal{R}$}^{+}\setminus\\{u\\}}$ in (16) can be recovered from
her cache. Substituting for $V_{\mbox{$\cal{R}$}^{+}}$ in (16) yields
$\displaystyle\widehat{W}_{D_{k},\mbox{$\cal{R}$}}$
$\displaystyle=Y_{\mbox{$\cal{R}$}^{+}}\oplus W_{\mbox{$\cal{R}$}^{+}}\oplus
W_{\mbox{$\cal{R}$}^{+}}\oplus\left(\bigoplus_{u\in{\mbox{$\cal{R}$}}}W_{d_{u},\mbox{$\cal{R}$}^{+}\setminus\\{u\\}}\right)$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{R}$}^{+}}W_{d_{u},\mbox{$\cal{R}$}^{+}\setminus\\{u\\}}\oplus\left(\bigoplus_{u\in{\mbox{$\cal{R}$}}}W_{d_{u},\mbox{$\cal{R}$}^{+}\setminus\\{u\\}}\right)$
$\displaystyle=W_{d_{S_{k}+3k},\mbox{$\cal{R}$}}$
$\displaystyle=W_{D_{k},\mbox{$\cal{R}$}}.$ (17)
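The cancellation in (16)-(17) can be simulated with segments modelled as random integers (XOR plays the role of the bitwise XOR on bit strings). This is our sketch for user $k=0$; the mask $W_{\mathcal{R}^{+}}$ is omitted since it cancels exactly as in (17).

```python
import random
from itertools import combinations

random.seed(1)
S = [random.randrange(3) for _ in range(2)]   # shared keys S_0, S_1
D = [random.randrange(3) for _ in range(2)]   # demands D_0, D_1

# expanded demand vector: entry 3k+i equals (i - (S_k - D_k)) mod 3
d = [(i - (S[k] - D[k])) % 3 for k in (0, 1) for i in range(3)]

# model every segment W_{i,R} with |R| = 3 as a random 16-bit integer
W = {(i, R): random.getrandbits(16)
     for i in range(3) for R in combinations(range(6), 3)}

k = 0
star = S[k] + 3 * k                              # the index S_k + 3k
for R in combinations(range(6), 3):
    if star in R:
        continue                                 # these segments are cached
    Rp = tuple(sorted(R + (star,)))              # R^+ = R with S_k+3k added
    Y = 0                                        # the broadcast Y_{R^+}
    for u in Rp:
        Y ^= W[(d[u], tuple(x for x in Rp if x != u))]
    # user XORs out the terms with u in R; each such R^+ \ {u} still
    # contains star, so those segments sit in her cache
    decoded = Y
    for u in R:
        decoded ^= W[(d[u], tuple(x for x in Rp if x != u))]
    assert decoded == W[(D[k], R)]               # equation (17)
```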
Since user $k$ has all segments in $\mbox{$\cal{T}$}^{D_{k}}_{3}$ and
$\mbox{$\cal{T}$}^{D_{k}}_{4}$, we consider the recovery of symbols in
$\mbox{$\cal{T}$}^{D_{k}}_{5}$. In the first part $\mbox{$\cal{G}$}_{k,1}$ of
cache, user $k$ does not have one segment of $\mbox{$\cal{T}$}^{D_{k}}_{5}$,
namely $W_{D_{k},[0:5]\setminus\\{S_{k}+3k\\}}$ . User $k$ decodes this
segment as
$\displaystyle\widehat{W}_{D_{k},[0:5]\setminus\\{S_{k}+3k\\}}=Y_{[0:5]}\oplus\left(\bigoplus_{u\in{[0:5]\setminus\\{S_{k}+3k\\}}}W_{d_{u},[0:5]\setminus\\{u\\}}\right).$
Observe that $Y_{[0:5]}$ is broadcast by the server, while each symbol
$W_{d_{u},[0:5]\setminus\\{u\\}}$ is a part of $\mbox{$\cal{G}$}_{k,1}$, and
hence a part of the cache of user $k$. Thus, user $k$ can compute
$\widehat{W}_{D_{k},[0:5]\setminus\\{S_{k}+3k\\}}$. Using (17), it can be
shown that
$\widehat{W}_{D_{k},[0:5]\setminus\\{S_{k}+3k\\}}=W_{D_{k},[0:5]\setminus\\{S_{k}+3k\\}}$.
Thus, user $k$ can retrieve all symbols belonging to each of the three groups
of file $W_{D_{k}}$ and she can recover this file by concatenating these
symbols.
Privacy: To show the demand-privacy for user $k\in\\{0,1\\}$, we first define
$\tilde{k}=(k+1)$ mod $2$. Since $I(D_{\tilde{k}};Z_{k},D_{k})=0$, the privacy
condition $I(D_{\tilde{k}};X,Z_{k},D_{k})=0$ follows by showing that
$I(X;D_{\tilde{k}}|Z_{k},D_{k})=0$. To that end, we divide all symbols that
are a part of the main payload into two sets, $X^{\prime}_{k}$ and
$\tilde{X}^{\prime}_{k}$ which are defined as follows:
$\displaystyle X^{\prime}_{k}$
$\displaystyle=\\{Y_{[0:NK-1]}\\}\cup\\{V_{\mbox{$\cal{R}$}}|S_{k}+3k\in\mbox{$\cal{R}$},V_{\mbox{$\cal{R}$}}\in
X^{\prime}\\},$ $\displaystyle\tilde{X}^{\prime}_{k}$
$\displaystyle=X^{\prime}\setminus X^{\prime}_{k}.$
Note that the positions in $X^{\prime}$ of all symbols belonging to
$X^{\prime}_{k}$ are known to user $k$, while the positions of symbols belonging
to $\tilde{X}^{\prime}_{k}$ are not known. It can be shown that all symbols in
$\tilde{X}^{\prime}_{k}$ appear like a sequence of random bits to user $k$.
This is because for some set $\cal{R}$,
$\mbox{$\cal{R}$}\subset[0:5],|\mbox{$\cal{R}$}|=4,5$, the server broadcasts
$V_{\mbox{$\cal{R}$}}$ instead of $Y_{\mbox{$\cal{R}$}}$. The symbol
$W_{\mbox{$\cal{R}$}}$ essentially hides the message $Y_{\mbox{$\cal{R}$}}$
from all users that do not belong to set $\cal{R}$. Further, it can also be
shown that
$\displaystyle H(X^{\prime}_{k}|W_{D_{k}},Z_{k},\bar{S}\ominus\bar{D})=0.$
(18)
It is easy to see that $(W_{D_{k}},Z_{k},\bar{S}\ominus\bar{D})$ does not
reveal any information about $D_{\tilde{k}}$ which in combination with (18)
ensures privacy.
### III-D Comparison of our schemes
Figure 3: Comparison of different schemes for $N=15$ and $K=10$. The region
given by the lower convex envelope (LCE) of the points in Theorem 3 and
Theorem 4 is larger than the region given by the LCE of the points in Theorem
2.
Now we give a comparison of our schemes. From numerical simulations we observe
that a combination of Schemes B and C outperforms Scheme A for $K<N$, and
Scheme A outperforms both Schemes B and C for $N\leq K$. That is, for $K<N$,
the region given by the lower convex envelope (LCE) of the points in Theorem 3
and Theorem 4 is larger than the region given by the LCE of the points in
Theorem 2, whereas we observe the opposite for $N\leq K$, i.e., the region
given by the LCE of the points in Theorem 2 is larger than the region given
by the LCE of the points in Theorem 3 and Theorem 4. In Fig. 3, we plot the
memory-rate pairs achievable using our schemes along with the pairs
achievable using the MDS scheme in [27] for $N=15$ and $K=10$. In Fig. 4, we
give a comparison of the memory-rate pairs achievable using different schemes
for $N=10$ and $K=15$.
Figure 4: Comparison of different schemes for $N=10$ and $K=15$. The region
given by the lower convex envelope (LCE) of the points in Theorem 2 is larger
than the region given by the LCE of the points in Theorem 3 and Theorem 4.
### III-E Tightness of the achievable memory-rate pairs
Now we compare the memory-rate pairs achievable using our schemes with lower
bounds on the optimal rates for non-private schemes. Recall that for $N$
files, $K$ users and memory $M$, $R^{*p}_{N,K}(M)$ and $R^{*}_{N,K}(M)$ denote
the optimal private rate and non-private rate, respectively.
###### Theorem 5
Let $R^{A}_{N,K}(M)$ denote the LCE of the points in Theorem 2, and let
$R^{BC}_{N,K}(M)$ denote the LCE of the points in Theorem 3 and Theorem 4.
Then, we have
1. 1.
For $N\leq K$,
$\displaystyle\frac{R^{A}_{N,K}(M)}{R^{*}_{N,K}(M)}\leq\begin{cases}4&\text{
if }M\leq\left(1-\frac{N}{K}\right)\\\ 8&\text{ if
}\left(1-\frac{N}{K}\right)\leq M\leq\frac{N}{2}\\\ 2&\text{ if
}M\geq\frac{N}{2}.\end{cases}$ (19)
2. 2.
For $K<N$,
$\displaystyle\frac{R^{BC}_{N,K}(M)}{R^{*}_{N,K}(M)}$
$\displaystyle\leq\begin{cases}3&\text{ if }M<\frac{N}{2}\\\ 2&\text{ if
}M\geq\frac{N}{2}.\end{cases}$ (20)
3. 3.
For all $N$ and $K$, $R^{*p}_{N,K}(M)=R^{*}_{N,K}(M)$ if
$M\geq\frac{N(NK-K)}{NK-K+1}$.
Since $R^{A}_{N,K}(M)\geq R^{*p}_{N,K}(M)\geq R^{*}_{N,K}(M)$, the upper
bounds in (19) also hold for the ratios
$\frac{R^{A}_{N,K}(M)}{R^{*p}_{N,K}(M)}$ and
$\frac{R^{*p}_{N,K}(M)}{R^{*}_{N,K}(M)}$. Similarly, the upper bounds in (20)
also hold for the ratios $\frac{R^{BC}_{N,K}(M)}{R^{*p}_{N,K}(M)}$ and
$\frac{R^{*p}_{N,K}(M)}{R^{*}_{N,K}(M)}$.
The proof of Theorem 5 is presented in Subsection IV-E. Theorem 5 shows that a
combination of our schemes gives rates that are always within a constant
multiplicative factor from the optimal, i.e., the order optimality result is
shown for all cases. We note that the order optimality result was also
obtained in [27] for all regimes except for the case when $K<N$ and $M<N/K$.
The constant factors in (19) and also the factor $2$ for the regime $K<N,M\geq
N/2$ are obtained in [27]. In contrast, the constant factor $3$ in (20) for
the regime $K<N,M<N/2$ shows the order optimality for the case $K<N,M<N/K$,
and improves the previously known factor $4$ for the case $K<N,M\leq N/K<N/2$.
One natural question that arises in demand-private coded caching is how much
extra cost the system incurs due to the constraint of demand privacy. It follows from
Theorem 5 that the extra cost is always within a constant factor. However, we
note that the extra cost may not be a constant factor for all the regimes
under the stronger privacy condition (12). For example, when $K<N$ and $M=0$,
the optimal non-private rate is $K$. However, for this case, the optimal
private rate under the stronger privacy condition (12) is shown to be $N$ in
[30], whereas the optimal private rate under the privacy condition (9) is $K$
(Theorem 3). Such a difference in rates under these two notions of privacy
conditions also extends for very small memory regimes when $K<N$.
### III-F Exact trade-off for $N\geq K=2$
For $N=K=2$, the exact trade-off under no privacy was shown in [1]. Tian
characterized the exact trade-off under no privacy for $N>K=2$ in [34]. For
$N=K=2$, the non-private trade-off region is characterized by three lines.
Whereas, if $N>2,K=2$, then the non-private trade-off region is given by two
lines. We characterize the exact trade-off for $N\geq K=2$ under demand
privacy in the following theorem. The characterization shows that the exact
trade-off region of private schemes for $N\geq K=2$ is always given by three
lines.
Figure 5: The figure on the left gives the exact trade-off with demand privacy
for $N=K=2$ and the region given by other known schemes. The figure on the
right gives the exact trade-off with demand privacy for $N=2,3,4$ and $K=2$.
###### Theorem 6
1. 1.
Any memory-rate pair $(M,R)$ is achievable with demand privacy for $N=K=2$ if
and only if
$\displaystyle 2M+R\geq 2,\quad 3M+3R\geq 5,\quad M+2R\geq 2.$ (21)
2. 2.
Any memory-rate pair $(M,R)$ is achievable with demand privacy for $N>K=2$ if
and only if
$\displaystyle 3M+NR\geq 2N,\quad 3M+(N+1)R\geq 2N+1,\quad M+NR\geq N.$ (22)
Next we give some outlines of the achievability schemes and the converse to
obtain Theorem 6.
Outline of converse: Any $(M,R)$ pair that is achievable under no privacy
requirement needs to satisfy the first and third inequalities in (21) for
$N=K=2$ [1]. Similarly, for $N>K=2$, any $(M,R)$ pair satisfies the first and
third inequalities in (22) under no privacy [34]. Since any converse bound
with no privacy requirement is also a converse bound with privacy, to prove
the converse result we need to show only the second inequality in (22) and in
(21). Furthermore, observe that substituting $N=2$ in the second inequality
in (22) gives the second inequality in (21). So, to show the converse of
Theorem 6, we prove that for $N\geq K=2$, any $(M,R)$ pair under privacy
satisfies the second inequality in (22). Full proof of the converse can be
found in Subsection IV-F.
Outline of achievability: To show the achievability of the region in (21), we
use a particular non-private scheme from [34]. Using this particular non-
private scheme, we can show that for any $(M,R)$ pair in the region given by
(21), there exists a $(2,4,M,R)$-$\cal{D}_{RS}$-non-private scheme. Then, the
achievability follows from Theorem 1. Details can be found in Subsection IV-G.
To prove the achievability of the region given by (22) for $K=2$ and $N>2$,
we show the achievability of the two corner points $(\frac{N}{3},1)$ and
$(\frac{N^{2}}{2N-1},\frac{N-1}{2N-1})$. The corner points of the memory-rate
curve given by (22) are
$(0,2),(\frac{N}{3},1),(\frac{N^{2}}{2N-1},\frac{N-1}{2N-1})$ and $(N,0)$. The
achievability of the points $(0,2)$ and $(N,0)$ follows from Theorem 3. We
propose two schemes, Scheme D and Scheme E which achieve the pairs
$(\frac{N}{3},1)$ and $(\frac{N^{2}}{2N-1},\frac{N-1}{2N-1})$, respectively.
Scheme D achieves memory-rate pair $(\frac{N}{3},1)$ using uncoded prefetching
while Scheme E achieves memory-rate pair
$(\frac{N^{2}}{2N-1},\frac{N-1}{2N-1})$ using coded prefetching. In Example 4,
we describe Scheme D for $N=3,K=2,M=1$. Then in Example 5, we describe Scheme
E for $N=3,K=2,M=2/5$. General versions of both these schemes for $N>2$ and
$K=2$ are provided in Subsection IV-G.
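The corner points listed above can be checked against the boundary lines of the region in (22) with exact rational arithmetic; a small verification sketch (function name ours):

```python
from fractions import Fraction as F

def in_region_22(N, M, R):
    """Membership in the region of (22) (the N > K = 2 trade-off)."""
    return (3 * M + N * R >= 2 * N and
            3 * M + (N + 1) * R >= 2 * N + 1 and
            M + N * R >= N)

for N in (3, 4, 5):
    corners = [(F(0), F(2)), (F(N, 3), F(1)),
               (F(N * N, 2 * N - 1), F(N - 1, 2 * N - 1)), (F(N), F(0))]
    assert all(in_region_22(N, M, R) for M, R in corners)
    # each interior corner point lies on two of the boundary lines
    M, R = F(N, 3), F(1)
    assert 3 * M + N * R == 2 * N and 3 * M + (N + 1) * R == 2 * N + 1
    M, R = F(N * N, 2 * N - 1), F(N - 1, 2 * N - 1)
    assert 3 * M + (N + 1) * R == 2 * N + 1 and M + N * R == N
```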
###### Example 4
We describe Scheme D for $N=3$ and $K=2$ which achieves rate $1$ for
$M=\frac{N}{3}=1$. File $W_{i},i\in[0:2]$ is divided into 3 disjoint parts of
equal size, i.e., $W_{i}=(W_{i,0},W_{i,1},W_{i,2})$.
Caching: The server picks 2 independent permutations $\pi_{0}$ and $\pi_{1}$
uniformly at random from the symmetric group of permutations of [0:2]. The
server places $\pi_{0}(W_{0,0},W_{1,0},W_{2,0})$ and
$\pi_{1}(W_{0,1},W_{1,1},W_{2,1})$ in the caches of user 0 and user 1,
respectively. Each of the permutation functions $\pi_{0}$ and $\pi_{1}$ is
unknown to both the users. Some additional random bits are also shared with
each user in the caching phase.
Delivery: The server picks permutation $\pi_{2}$ uniformly at random from the
symmetric group of permutations of [0:2] which is independent of $\pi_{0}$,
$\pi_{1}$. The main payload $X^{\prime}$ is given by
$\displaystyle X^{\prime}=\begin{cases}\pi_{2}(W_{D_{0},1}\oplus
W_{D_{1},0},W_{D_{0},2},W_{D_{1},2})\qquad\qquad\qquad\text{if }D_{0}\neq
D_{1}\\\ \pi_{2}(W_{D_{0},1}\oplus W_{m,0},W_{D_{0},2},W_{D_{1},0}\oplus
W_{m,1})\qquad\quad\text{if }D_{0}=D_{1}\end{cases}$
where $m=(D_{0}+1)$ mod $3$. To enable decoding at each user, the server also
transmits some auxiliary transmission $J=(J_{1},J_{2},J_{3})$ of negligible
rate. Each $J_{j},j=1,2,3$ can be further divided into 2 parts, i.e.,
$J_{j}=(J_{j,0},J_{j,1})$, where $J_{j,k},k\in\\{0,1\\}$ is meant for user $k$
for all $j=1,2,3$. Using a one-time pad which uses the pre-shared random bits,
the server ensures that $J_{j,k}$ can be decoded only by user $k$ and it is
kept secret from the other user. These parts are used as follows:
1. 1.
$J_{1,k}$ conveys the position of $W_{D_{k},k}$ in user $k$’s cache.
2. 2.
$J_{2,k}$ gives the positions of the coded and uncoded parts of $X^{\prime}$
involving $W_{D_{k}}$ to user $k$. Specifically, $J_{2,k}$ reveals the
positions of $W_{D_{0},1}\oplus W_{D_{1},0}$ and $W_{D_{k},2}$ to user $k$
when $D_{0}\neq D_{1}$, and the positions of $W_{D_{k},\tilde{k}}\oplus
W_{m,k}$ and $W_{D_{k},2}$ when $D_{0}=D_{1}$, where $\tilde{k}=(k+1)$ mod 2.
3. 3.
$J_{3,k}$ discloses the position of $W_{D_{\tilde{k}},k}$ if $D_{0}\neq D_{1}$
and $W_{m,k}$ if $D_{0}=D_{1}$ in her cache to user $k$.
Decoding: User $k$ decodes $W_{D_{k}}$ as follows. $W_{D_{k},k}$ can be
obtained from the cache since she knows its position from $J_{1,k}$. User $k$
recovers $W_{D_{k},2}$ from the delivery since she knows its position in
$X^{\prime}$ from $J_{2,k}$. The remaining segment $W_{D_{k},\tilde{k}}$ is
available in coded form in $X^{\prime}$. The segment that
$W_{D_{k},\tilde{k}}$ is XOR-ed with is available in the cache of user $k$,
and its position in the cache is revealed by $J_{3,k}$. Thus, user $k$
retrieves all three segments of file $W_{D_{k}}$.
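The decoding steps above for the case $D_{0}\neq D_{1}$ can be simulated with segments modelled as random integers. This is our sketch: the permutations and one-time pads are skipped, and the positions that $J_{1,0}$, $J_{2,0}$, $J_{3,0}$ would reveal are used directly.

```python
import random

random.seed(2)
# segments W_{i,p}, p in {0,1,2}, as random 16-bit integers
Wseg = {(i, p): random.getrandbits(16) for i in range(3) for p in range(3)}
D0, D1 = 0, 1                                   # distinct demands

cache0 = {i: Wseg[(i, 0)] for i in range(3)}    # user 0 caches W_{i,0}
payload = {"coded": Wseg[(D0, 1)] ^ Wseg[(D1, 0)],   # W_{D0,1} xor W_{D1,0}
           "u0": Wseg[(D0, 2)], "u1": Wseg[(D1, 2)]}

got_0 = cache0[D0]                      # cache position known via J_{1,0}
got_2 = payload["u0"]                   # position in X' known via J_{2,0}
got_1 = payload["coded"] ^ cache0[D1]   # cache position of W_{D1,0} via J_{3,0}

# user 0 recovers all three segments of W_{D_0}
assert (got_0, got_1, got_2) == (Wseg[(D0, 0)], Wseg[(D0, 1)], Wseg[(D0, 2)])
```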
Privacy: Now we give an outline of how $D_{1}$ is kept secret from user 0.
From the transmission, we can observe that for both the cases, i.e.,
$D_{0}\neq D_{1}$ and $D_{0}=D_{1}$, user 0 receives $W_{D_{0},2}$ in the
uncoded form and $W_{D_{0},1}$ coded with another symbol. Also, in both the
cases, the remaining symbol appears like a sequence of $\frac{F}{3}$ random
bits to user 0, because it contains either $W_{D_{1},2}$ or $W_{m,1}$, to
which she does not have access. Thus, even though the structure of the
broadcast is different in the two cases, neither user can differentiate
between them.
Further, given $J_{1,0}$, any of the remaining 2 symbols can occupy the
remaining 2 positions in the cache with equal likelihood. Thus, although user
0 can use one of these symbols, i.e., the symbol XOR-ed with $W_{D_{0},1}$,
for decoding with the help of $J_{3,0}$, the identity of the symbol is not
known because $J_{3,0}$ only discloses the position of that symbol in user 0’s
cache. Due to the symmetry of the scheme, similar privacy arguments apply for
user 1.
###### Example 5
Now we describe Scheme E for $N=3$ and $K=2$ which achieves rate
$\frac{N-1}{2N-1}=\frac{2}{5}$, for $M=\frac{N^{2}}{2N-1}=\frac{9}{5}$. File
$W_{i},i\in[0:2]$ is partitioned into $2N-1=5$ parts of equal size, i.e.,
$W_{i}=(W_{i,1},W_{i,2},W_{i,3},W_{i,4},W_{i,5})$. We encode each file $W_{i}$
using a $(3N-2,2N-1)=(7,5)$ MDS code (i.e., each file is split into $5$ pieces
of size $\frac{F}{5}$ bits each, which are then encoded using a $(7,5)$ MDS
code such that each of the $7$ MDS-coded symbols has $\frac{F}{5}$ bits). Each file can
be reconstructed using any $5$ MDS symbols. The $7$ symbols of file $W_{i}$
are denoted by $F_{i,0},F_{i,0,0},F_{i,0,1},F_{i,1,0},F_{i,1,1},F_{i,2,0}$ and
$F_{i,2,1}$. Further, we define tuples
$\mbox{$\cal{L}$}_{0},\mbox{$\cal{L}$}_{1}$ and $\mbox{$\cal{L}$}_{2}$ as
follows:
$\mbox{$\cal{L}$}_{j}=(F_{i,0},F_{i,j,0},F_{i,j,1})_{i\in[0:2]},\quad\forall
j\in[0:2].\\\ $
Caching: The server picks 2 independent permutation functions $\pi_{0}$ and
$\pi_{1}$ uniformly at random from the symmetric group of permutations of
[0:8]. The server also picks a random number $U_{0}$ which is uniformly
distributed in $\\{0,1,2\\}$ and places $\pi_{0}(\mbox{$\cal{L}$}_{U_{0}})$ in
the cache of user 0. Similarly, the server picks random number $U_{1}$ which
is uniformly distributed in $\\{0,1,2\\}\backslash\\{U_{0}\\}$ and places
$\pi_{1}(\mbox{$\cal{L}$}_{U_{1}})$ in user 1’s cache. Similar to Scheme D,
each of the permutation functions $\pi_{0}$ and $\pi_{1}$ is unknown to
both the users. Unlike the permutation functions, $U_{k}$ is shared with user
$k\in\\{0,1\\}$ by placing it in her cache and thus kept secret from the other
user.
Delivery: The main payload, $X^{\prime}$ is given by
$X^{\prime}=\left\\{\begin{array}[]{lcl}(F_{D_{0},U_{1},0}\oplus
F_{D_{1},U_{0},0},F_{D_{0},U_{1},1}\oplus F_{D_{1},U_{0},1})&\text{if
}D_{0}\neq D_{1}\\\ \\\ (F_{D_{0},V,0}\oplus F_{m_{1},0},F_{D_{0},V,1}\oplus
F_{m_{2},0})&\text{if }D_{0}=D_{1}\end{array}\right.$
where $m_{t}=(D_{0}+t)$ mod $3$, and $V$ is the unique element of $[0:2]\backslash\\{U_{0},U_{1}\\}$.
To enable decoding at each user, the server also transmits some auxiliary
transmission $J=(J_{1},J_{2},J_{3})$ of negligible rate. Each $J_{j},j=1,2,3$
can be further divided into 2 parts, i.e., $J_{j}=(J_{j,0},J_{j,1})$, where
$J_{j,k},k\in\\{0,1\\}$ is meant for user $k$ for all $j=1,2,3$. Using a one-
time pad, the server ensures that $J_{j,k}$ can be decoded only by user $k$
and it is kept secret from the other user. These parts are used as follows:
1. 1.
$J_{1,k}$ conveys the positions of $F_{D_{k},0},F_{D_{k},U_{k},0}$ and
$F_{D_{k},U_{k},1}$ in user $k$’s cache.
2. 2.
$J_{2,k}$ discloses the positions of symbols $F_{D_{\tilde{k}},U_{k},0}$ and
$F_{D_{\tilde{k}},U_{k},1}$ in user $k$’s cache if $D_{0}\neq D_{1}$, where
$\tilde{k}=(k+1)$ mod 2. If $D_{0}=D_{1}$, then $J_{2,k}$ reveals the
positions of $F_{m_{1},0}$ and $F_{m_{2},0}$ in the cache for user $k$.
3. 3.
$J_{3,k}$ gives the value of the random variable $T_{k}$ which takes the value
$U_{\tilde{k}}$ if $D_{0}\neq D_{1}$, and $V$ if $D_{0}=D_{1}$.
Decoding: Let us consider the decoding of $W_{D_{k}}$ at user $k$. First, the
user retrieves $F_{D_{k},0},F_{D_{k},U_{k},0}$ and $F_{D_{k},U_{k},1}$
directly from the cache since their positions are obtained from $J_{1,k}$. The
positions of $F_{D_{\tilde{k}},U_{k},0}$ and $F_{D_{\tilde{k}},U_{k},1}$ when
$D_{0}\neq D_{1}$, and the positions of $F_{m_{1},0}$ and $F_{m_{2},0}$ when
$D_{0}=D_{1}$, are available through $J_{2,k}$. Thus, using $X^{\prime}$ user
$k$ can recover $F_{D_{k},U_{\tilde{k}},0}$ and $F_{D_{k},U_{\tilde{k}},1}$,
or $F_{D_{k},V,0}$ and $F_{D_{k},V,1}$, accordingly. Note that because user
$k$ does not know whether $D_{0}\neq D_{1}$ or $D_{0}=D_{1}$, she also does
not know whether she has recovered $F_{D_{k},U_{\tilde{k}},0}$ and
$F_{D_{k},U_{\tilde{k}},1}$ or $F_{D_{k},V,0}$ and $F_{D_{k},V,1}$. This
information is available through $J_{3,k}$ which gives the value of $T_{k}$.
Thus, user $k$ has knowledge of $F_{D_{k},T_{k},0}$ and $F_{D_{k},T_{k},1}$.
Since $T_{k}\neq U_{k}$, user $k$ has access to 5 distinct symbols of the MDS
code namely,
$F_{D_{k},0},F_{D_{k},U_{k},0},F_{D_{k},U_{k},1},F_{D_{k},T_{k},0}$ and
$F_{D_{k},T_{k},1}$. Thus, the file $W_{D_{k}}$ can be retrieved.
Privacy: Now we describe how $D_{1}$ remains private to user 0. It is
important to note that, from the knowledge of the tuple $(U_{0},D_{0},T_{0})$,
user 0 cannot find out the demand of user 1. For example, if
$(U_{0},D_{0},T_{0})=(0,0,1)$, $(U_{1},D_{1})$ can take 3 distinct values
namely, $(1,1)$, $(1,2)$ and $(2,0)$, all 3 possibilities being equally
likely, implying that even after knowing $(U_{0},D_{0},T_{0})$, $D_{1}$ can
take all 3 values with equal likelihood. Since $\pi_{0}$ is not shared with
user 0, and since the other auxiliary transmissions ($J_{1,0}$ and $J_{2,0}$)
reveal only the positions of the 5 relevant symbols in user 0’s cache, they
are independent of $D_{1}$ and can take any of the ${9\choose 5}$ possible
values depending on $\pi_{0}$. Hence, it is clear that they do not reveal the
demand of user 1. These arguments are crucial in ensuring the demand-privacy
of user 1 and similar arguments hold for the other user as well due to the
symmetry of this scheme.
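The conditional uniformity claimed above can be verified by brute-force enumeration; our sketch conditions on $(U_{0},D_{0},T_{0})=(0,0,1)$ exactly as in the text:

```python
from itertools import product

U0, D0, T0 = 0, 0, 1              # what user 0 observes
consistent = []
for U1, D1 in product(range(3), range(3)):
    if U1 == U0:
        continue                  # U_1 is drawn uniformly from [0:2] \ {U_0}
    V = ({0, 1, 2} - {U0, U1}).pop()
    t0 = U1 if D0 != D1 else V    # the value J_{3,0} would reveal as T_0
    if t0 == T0:
        consistent.append((U1, D1))

# exactly the three pairs from the text, each with prior (1/2)*(1/3),
# and they cover all three values of D_1
assert sorted(consistent) == [(1, 1), (1, 2), (2, 0)]
assert sorted(D1 for _, D1 in consistent) == [0, 1, 2]
```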
## IV Proofs
### IV-A Proof of Theorem 1
Let us consider any $(N,NK,M,R)$ $\cal{D}_{RS}$-non-private scheme. Let
$C_{k}^{(np)},k\in[0:NK-1]$ be the cache encoding functions, $E^{(np)}$ be the
broadcast encoding function, and $G_{k}^{(np)},k\in[0:NK-1]$ be the decoding
functions for the given $(N,NK,M,R)$ $\cal{D}_{RS}$-non-private scheme. We now
present a construction of an $(N,K,M,R)$-private scheme from the given
$(N,NK,M,R)$ $\cal{D}_{RS}$-non-private scheme.
Caching: For $k\in[0:K-1]$ and $S_{k}\in[0:N-1]$, the $k$-th user’s cache
encoding function is given by
$\displaystyle C_{k}(S_{k},\bar{W}):=C_{kN+S_{k}}^{(np)}(\bar{W}).$ (23)
The $k$-th user’s cache encoding function is taken to be the same as that of
the $S_{k}$-th user in the $k$-th stack in the corresponding $(N,NK)$ caching
problem. The cache content is given by $Z_{k}=(C_{k}(S_{k},\bar{W}),S_{k})$.
Delivery: To define the broadcast encoding, we need some new notation and
definitions. Let $\Psi:[0:N-1]^{N}\rightarrow[0:N-1]^{N}$ denote the cyclic
shift operator, such that
$\Psi(t_{1},t_{2},\ldots,t_{N})=(t_{N},t_{1},\ldots,t_{N-1})$. Let us denote a
vector $\mathbb{I}:=(0,1,\ldots,N-1)$. Let us also define
$\displaystyle\bar{S}\ominus\bar{D}:=(S_{0}\ominus D_{0},S_{1}\ominus
D_{1},\ldots,S_{K-1}\ominus D_{K-1})$
where $S_{k}\ominus D_{k}$ denotes the difference of $S_{k}$ and $D_{k}$
modulo $N$. For a given $\bar{D}\in[0:N-1]^{K}$, we define an expanded demand
vector for the non-private problem as:
$\displaystyle\bar{D}^{(np)}(\bar{D},\bar{S})=(\Psi^{S_{0}\ominus
D_{0}}(\mathbb{I}),\ldots,\Psi^{S_{K-1}\ominus D_{K-1}}(\mathbb{I}))$
where $\Psi^{i}$ denotes the $i$-times cyclic shift operator.
The broadcast encoding function for the $(N,K,M,R)$-private scheme is defined
by
$\displaystyle
E(\bar{W},\bar{D},\bar{S}):=E^{(np)}(\bar{W},\bar{D}^{(np)}(\bar{D},\bar{S})).$
(24)
Let us denote $X_{1}=E(\bar{W},\bar{D},\bar{S})$. In the private scheme, the
server transmits $X=(X_{1},\bar{S}\ominus\bar{D})$.
Decoding: User $k\in[0:K-1]$ uses the decoding function of the $(kN+S_{k})$-th
user in the non-private scheme, i.e.,
$\displaystyle G_{k}(D_{k},S_{k},\bar{S}\ominus\bar{D},X_{1},Z_{k})$
$\displaystyle:=G_{kN+S_{k}}^{(np)}(\bar{D}^{(np)}(\bar{D},\bar{S}),X_{1},Z_{k}).$
(25)
Here the decoder computes $\bar{D}^{(np)}(\bar{D},\bar{S})$ from
$\bar{S}\ominus\bar{D}$.
From (23), (24), and (25), it is clear that the decoder of the $k$-th user
outputs the same file requested by the $S_{k}$-th virtual user of the $k$-th
stack in the non-private scheme. The index of the output file is the
$(kN+S_{k})$-th component in $\bar{D}^{(np)}(\bar{D},\bar{S})$, i.e.,
$S_{k}\ominus(S_{k}\ominus D_{k})=D_{k}$. Thus, the $k$-th user recovers its
desired file.
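The reduction above can be checked mechanically. The following Python sketch (variable names are illustrative, not from the paper) builds the expanded demand vector $\bar{D}^{(np)}(\bar{D},\bar{S})$ of (24) from the cyclic-shift operator $\Psi$ and verifies that its $(kN+S_{k})$-th component is indeed $D_{k}$:

```python
import random

def cyclic_shift(vec, times):
    """Apply the cyclic shift Psi `times` times:
    Psi(t_1, ..., t_N) = (t_N, t_1, ..., t_{N-1})."""
    times %= len(vec)
    return vec[-times:] + vec[:-times] if times else vec[:]

def expanded_demand(D, S, N):
    """D^(np)(D, S): one cyclically shifted copy of (0, ..., N-1) per
    real user, shifted S_k - D_k times (cf. (24))."""
    identity = list(range(N))
    out = []
    for d, s in zip(D, S):
        out.extend(cyclic_shift(identity, (s - d) % N))
    return out

random.seed(0)
N, K = 3, 4
D = [random.randrange(N) for _ in range(K)]  # true demands
S = [random.randrange(N) for _ in range(K)]  # shared keys (one-time pads)
d_np = expanded_demand(D, S, N)

# User k decodes as the (kN + S_k)-th virtual user; that virtual user's
# demand must equal the real demand D_k.
for k in range(K):
    assert d_np[k * N + S[k]] == D[k]
```

Since $S_{k}$ is uniform and independent of $D_{k}$, the shift $S_{k}\ominus D_{k}$ alone reveals nothing about $D_{k}$; this is the one-time-pad idea formalized in the privacy proof.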
Proof of privacy: The proof of privacy essentially follows from the fact that
$S_{i}$ acts as a one-time pad for $D_{i}$, which prevents any user $j\neq i$
from getting any information about $D_{i}$. We now show that the derived
$(N,K,M,R)$-private scheme satisfies the privacy condition (9). First we show
that $I({\bar{D}_{\tilde{k}}};Z_{k},D_{k},X{}|\bar{W})=0$.
$\displaystyle I({\bar{D}_{\tilde{k}}};Z_{k},D_{k},X{}|\bar{W})$
$\displaystyle=H(Z_{k},D_{k},X{}|\bar{W})-H(Z_{k},D_{k},X{}|\bar{W},{\bar{D}_{\tilde{k}}})$
$\displaystyle\overset{(a)}{=}H(S_{k},D_{k},\bar{S}\ominus\bar{D},\bar{D}^{(np)}(\bar{D},\bar{S})|\bar{W})-H(S_{k},D_{k},\bar{S}\ominus\bar{D},\bar{D}^{(np)}(\bar{D},\bar{S})|\bar{W},{\bar{D}_{\tilde{k}}})$
$\displaystyle\overset{(b)}{=}H(S_{k},D_{k},\bar{S}\ominus\bar{D}|\bar{W})-H(S_{k},D_{k},\bar{S}\ominus\bar{D}|\bar{W},{\bar{D}_{\tilde{k}}})$
$\displaystyle\overset{(c)}{=}H(S_{k},D_{k},\bar{S}\ominus\bar{D})-H(S_{k},D_{k},\bar{S}\ominus\bar{D}|{\bar{D}_{\tilde{k}}})$
$\displaystyle\overset{(d)}{=}H(S_{k},D_{k},\bar{S}\ominus\bar{D})-H(S_{k},D_{k},\bar{S}\ominus\bar{D})$
$\displaystyle=0.$ (26)
Here, $(a)$ follows since
$X=(X_{1},\bar{S}\ominus\bar{D}),Z_{k}=(C_{k}(S_{k},\bar{W}),S_{k})$, and also
due to (24). In $(b)$, we used that
$H(\bar{D}^{(np)}(\bar{D},\bar{S})|\bar{S}\ominus\bar{D})=0$, and $(c)$
follows since $(S_{k},D_{k},\bar{S}\ominus\bar{D},{\bar{D}_{\tilde{k}}})$ is
independent of $\bar{W}$. We get $(d)$ since $S_{i}\ominus D_{i}$ is
independent of $D_{i}$ for all $i\in[0:K-1]$. Using the fact that demands and
files are independent, we get the following from (26)
$\displaystyle I({\bar{D}_{\tilde{k}}};Z_{k},D_{k},X{},\bar{W})$
$\displaystyle=I({\bar{D}_{\tilde{k}}};\bar{W})+I({\bar{D}_{\tilde{k}}};Z_{k},D_{k},X{}|\bar{W})$
$\displaystyle=0.$
This shows the derived scheme satisfies the privacy condition
$I({\bar{D}_{\tilde{k}}};Z_{k},D_{k},X{})=0$.
The cache size in the $(N,K,M,R)$-private scheme differs from that in the
$(N,NK,M,R)$ $\cal{D}_{RS}$-non-private scheme only by the size of the shared
key. For large enough file size $2^{F}$, this difference is negligible.
Furthermore, the rate of transmission in the $(N,K,M,R)$-private scheme is the
same as that of the $(N,NK,M,R)$ $\cal{D}_{RS}$-non-private scheme. This
proves Theorem 1.
### IV-B Proof of Lemma 1
Consider any
$\displaystyle(M,R)=\left(\frac{Nr}{NK-K+1},\frac{{NK-K+1\choose
r+1}-{NK-K+1-N\choose r+1}}{{NK-K+1\choose r}}\right),\quad\text{ for
}r\in\\{0,1,\ldots,NK-K\\}$
which is achievable for $N$ files and $NK-K+1$ users by the YMA scheme. We
will construct an $(N,NK,M,R)$ $\cal{D}_{RS}$-non-private scheme with these
$(M,R)$ pairs. We denote the set of $NK$ users by
$\mbox{$\cal{U}$}=\mbox{$\cal{U}$}_{1}\cup\mbox{$\cal{U}$}_{2}$, where
$\displaystyle\mbox{$\cal{U}$}_{1}:=\\{u^{\prime}_{1},u^{\prime}_{2},\ldots,u^{\prime}_{K-1}\\},$
and
$\displaystyle\mbox{$\cal{U}$}_{2}:=\\{u_{0},u_{1},\ldots,u_{NK-K}\\}.$
The users are partitioned into $K$ subsets/stacks
$\mbox{$\cal{K}$}_{i},i\in[0:K-1]$ as given below.
$\displaystyle\mbox{$\cal{K}$}_{0}$ $\displaystyle=\\{u_{j}|j\in[0:N-1]\\},$
$\displaystyle\mbox{$\cal{K}$}_{i}$ $\displaystyle=\\{u_{j}|i(N-1)+1\leq
j\leq(i+1)(N-1)\\}\cup\\{u^{\prime}_{i}\\}\quad\mbox{ for }1\leq i\leq K-1.$
The stacks of users are shown in Fig. 6. We denote the demand of
$u_{i}\in\mbox{$\cal{U}$}_{2}$ by $d_{i}$, and the demand of
$u^{\prime}_{i}\in\mbox{$\cal{U}$}_{1}$ by $d^{\prime}_{i}$. Similarly, we
denote the cache of $u_{i}\in\mbox{$\cal{U}$}_{2}$ by $Z_{i}$, and the cache
of user $u^{\prime}_{i}\in\mbox{$\cal{U}$}_{1}$ by $Z^{\prime}_{i}$. We
further define
$\displaystyle\mbox{$\cal{K}$}^{\prime}_{i}$
$\displaystyle:=\\{j|u_{j}\in\mbox{$\cal{K}$}_{i}\\}\quad\text{for
}i=0,1,2,\ldots,K-1.$ $\displaystyle\mbox{$\cal{V}$}_{i}$
$\displaystyle:=\mbox{$\cal{K}$}^{\prime}_{0}\cup\mbox{$\cal{K}$}^{\prime}_{i},\quad\text{for
}i=1,2,\ldots,K-1.$
Figure 6: The stacks of users are shown vertically. The users from
$\mbox{$\cal{U}$}_{2}$ are shown in black color, whereas those from
$\mbox{$\cal{U}$}_{1}$ are shown in red color.
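As a sanity check of this partition, the following Python sketch (with illustrative parameter values) uses the index ranges $i(N-1)+1\leq j\leq(i+1)(N-1)$ for the non-zero stacks and verifies that the sets $\mbox{$\cal{K}$}^{\prime}_{i}$ are disjoint, cover all of $[0:NK-K]$, and that each $\mbox{$\cal{V}$}_{i}$ contains $2N-1$ indices:

```python
N, K = 3, 4
# Indices of the U_2 users u_0, ..., u_{NK-K}.
U2 = set(range(N * K - K + 1))

# Stack index sets K'_i: stack 0 holds N indices; each other stack holds
# the N-1 indices i(N-1)+1, ..., (i+1)(N-1) (its N-th member, u'_i, is
# from U_1 and has no index in U_2).
Kp = {0: set(range(N))}
for i in range(1, K):
    Kp[i] = set(range(i * (N - 1) + 1, (i + 1) * (N - 1) + 1))

# The index sets are disjoint and together cover U_2.
assert sum(len(s) for s in Kp.values()) == len(U2)
assert set().union(*Kp.values()) == U2

# V_i = K'_0 ∪ K'_i has 2N - 1 indices.
V = {i: Kp[0] | Kp[i] for i in range(1, K)}
assert all(len(v) == 2 * N - 1 for v in V.values())
```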
Caching: The users in $\mbox{$\cal{U}$}_{2}$ have the same prefetching as that
of the users in the YMA scheme. Let $Z_{m}^{\text{YMA}},m=0,\ldots,NK-K$
denote the cache content of $m$-th user in the YMA scheme. The cache content
of user $u_{j}\in\mbox{$\cal{U}$}_{2}$ is given by
$\displaystyle Z_{j}=Z_{j}^{\text{YMA}}.$
To explain the prefetching of users in $\mbox{$\cal{U}$}_{1}$, let
$\displaystyle\mbox{$\cal{T}$}:=[0:NK-K].$
In the YMA scheme, each file is divided into ${NK-K+1\choose r}$ subfiles and
file $W_{i},i\in[0:N-1]$ is given by
$W_{i}=(W_{i,\mbox{$\cal{R}$}})_{\mbox{$\cal{R}$}\subset\mbox{$\cal{T}$},|\mbox{$\cal{R}$}|=r}.$
For $\mbox{$\cal{S}$}\subset\mbox{$\cal{T}$}$ such that
$|\mbox{$\cal{S}$}|=r-1$, we define
$\displaystyle
Z^{j}_{i,\mbox{$\cal{S}$}}:=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\setminus\mbox{$\cal{S}$}\cap\mbox{$\cal{V}$}_{i}}W_{j,\\{u\\}\cup\mbox{$\cal{S}$}},\text{
for }i\in[0:K-1]\setminus\\{0\\},\;j\in[0:N-1].$ (27)
The cache content of user $u^{\prime}_{i}\in\mbox{$\cal{U}$}_{1}$ is given by
$\displaystyle
Z^{\prime}_{i}=\left(Z^{j}_{i,\mbox{$\cal{S}$}}|\mbox{$\cal{S}$}\subset\mbox{$\cal{T}$}\setminus\\{0\\},|\mbox{$\cal{S}$}|=r-1\right)_{j\in[0:N-1]}.$
Since the size of one $Z^{j}_{i,\mbox{$\cal{S}$}}$ is $\frac{F}{{NK-K+1\choose
r}}$ and since there are ${NK-K\choose r-1}$ possible sets $\cal{S}$, the
number of bits stored at user $u^{\prime}_{i}$ is given by
$\displaystyle len(Z^{\prime}_{i})$ $\displaystyle=\frac{NF{NK-K\choose
r-1}}{{NK-K+1\choose r}}$ $\displaystyle=\frac{NFr}{NK-K+1}$
$\displaystyle=FM.$
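The binomial identity behind this computation, $N{NK-K\choose r-1}/{NK-K+1\choose r}=Nr/(NK-K+1)$, can be verified exactly; the quick Python check below runs over a grid of (hypothetical) parameters:

```python
from fractions import Fraction
from math import comb

# len(Z'_i) = N F C(NK-K, r-1) / C(NK-K+1, r) should equal M F with
# M = N r / (NK - K + 1); check exactly over a range of parameters.
for N in range(2, 6):
    for K in range(2, 6):
        n = N * K - K + 1          # number of users in the YMA scheme
        for r in range(1, n):      # r = 0 stores nothing and is trivial
            lhs = Fraction(N * comb(n - 1, r - 1), comb(n, r))
            assert lhs == Fraction(N * r, n)
```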
Delivery: For a given $\bar{d}\in\mbox{$\cal{D}_{RS}$}$, let
$\bar{d}_{1}=(d^{\prime}_{i})_{u^{\prime}_{i}\in\mbox{$\cal{U}$}_{1}}$ and
$\bar{d}_{2}=(d_{i})_{u_{i}\in\mbox{$\cal{U}$}_{2}}$. The server chooses the
transmission of the YMA scheme for $\mbox{$\cal{U}$}_{2}$ under demand
$\bar{d}_{2}$. The broadcast transmission is described using
$\displaystyle
Y_{\mbox{$\cal{R}$}}=\bigoplus_{u\in\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}\setminus\\{u\\}},\quad\mbox{$\cal{R}$}\subset\mbox{$\cal{T}$}\text{
such that }|\mbox{$\cal{R}$}|=r+1.$ (28)
The broadcast transmission $X$ is given by
$\displaystyle
X=\\{Y_{\mbox{$\cal{R}$}}|\mbox{$\cal{R}$}\cap\mbox{$\cal{K}$}^{\prime}_{0}\neq\phi\\}.$
Note that in the above broadcast transmission, the stack of users
$\mbox{$\cal{K}$}_{0}$ corresponds to the set of “leaders” described in the
YMA scheme since each file $W_{i},i\in[0:N-1]$ is demanded by exactly one user
in $\mbox{$\cal{K}$}_{0}$. The size of each symbol $Y_{\mbox{$\cal{R}$}}\in X$
is $\frac{F}{{NK-K+1\choose r}}$ and $X$ contains ${NK-K+1\choose
r+1}-{NK-K+1-N\choose r+1}$ such symbols. Thus,
$\displaystyle len(X)$ $\displaystyle=\frac{\left({NK-K+1\choose
r+1}-{NK-K+1-N\choose r+1}\right)F}{{NK-K+1\choose r}}$ $\displaystyle=RF.$
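The symbol count used here, namely that the number of $(r+1)$-subsets of $\mbox{$\cal{T}$}$ meeting the leader indices $[0:N-1]$ is ${NK-K+1\choose r+1}-{NK-K+1-N\choose r+1}$, can be confirmed by brute-force enumeration for small (illustrative) parameters:

```python
from itertools import combinations
from math import comb

N, K, r = 3, 3, 2                  # small illustrative parameters
n = N * K - K + 1                  # |T| = NK - K + 1 virtual users
leaders = set(range(N))            # K'_0: indices of the leader stack

# Count the (r+1)-subsets of T that intersect the leader indices.
count = sum(1 for R in combinations(range(n), r + 1) if leaders & set(R))
assert count == comb(n, r + 1) - comb(n - N, r + 1)
```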
Decoding: For all users in $\mbox{$\cal{U}$}_{2}$, the decodability follows
from the decodability of the YMA scheme. The decoding of users in
$\mbox{$\cal{U}$}_{1}$ is as follows.
###### Remark 4
From the YMA scheme, we know that all symbols $Y_{\mbox{$\cal{R}$}}$ such that
$\mbox{$\cal{R}$}\subset\mbox{$\cal{T}$}$ and $|\mbox{$\cal{R}$}|=r+1$ can be
recovered from $X$. Similarly, the following lemma states that, although not
all symbols $Z^{j}_{i,\mbox{$\cal{S}$}}$ given in (27) are part of the cache
of user $u^{\prime}_{i}\in\mbox{$\cal{U}$}_{1}$, each of these symbols can
still be recovered from the cache content $Z^{\prime}_{i}$.
###### Lemma 2
For $i\in[0:K-1]\setminus\\{0\\}$, all symbols $Z^{j}_{i,\mbox{$\cal{S}$}}$,
where $\mbox{$\cal{S}$}\subset\mbox{$\cal{T}$},\;|\mbox{$\cal{S}$}|=r-1$, and
$j\in[0:N-1]$, can be recovered from the cache content $Z^{\prime}_{i}$ of
user $u^{\prime}_{i}$.
###### Proof.
See Appendix A. ∎
Next we show how user $u^{\prime}_{i}\in\mbox{$\cal{U}$}_{1}$ obtains
$W_{d^{\prime}_{i},{\mbox{$\cal{R}$}}}$ for all $\cal{R}$ satisfying
$\mbox{$\cal{R}$}\subset\mbox{$\cal{T}$}$ and $|\mbox{$\cal{R}$}|=r$. Each
$W_{d^{\prime}_{i},{\mbox{$\cal{R}$}}}$ can be written as
$\displaystyle W_{d^{\prime}_{i},{\mbox{$\cal{R}$}}}$
$\displaystyle\overset{\mathrm{(a)}}{=}\bigoplus_{u\in\mbox{$\cal{V}$}_{i}}W_{d_{u},\mbox{$\cal{R}$}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}}$
$\displaystyle\overset{\mathrm{(b)}}{=}\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}\bigg{\\{}Y_{\\{u\\}\cup{\mbox{$\cal{R}$}}}\oplus\bigoplus_{t\in{\mbox{$\cal{R}$}}}W_{d_{t},\\{u\\}\cup{\mbox{$\cal{R}$}}\backslash\\{t\\}}\bigg{\\}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}Y_{\\{u\\}\cup{\mbox{$\cal{R}$}}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}\;\bigoplus_{t\in{\mbox{$\cal{R}$}}}W_{d_{t},\\{u\\}\cup{\mbox{$\cal{R}$}}\backslash\\{t\\}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}Y_{\\{u\\}\cup{\mbox{$\cal{R}$}}}\oplus\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}\;\bigoplus_{u\in{\mbox{$\cal{R}$}}}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}Y_{\\{u\\}\cup{\mbox{$\cal{R}$}}}$
$\displaystyle\quad\oplus\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}\;\bigg{\\{}\bigoplus_{u\in{\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}\oplus\bigoplus_{u\in{\mbox{$\cal{R}$}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}\bigg{\\}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}Y_{\\{u\\}\cup{\mbox{$\cal{R}$}}}\oplus\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}\;\bigoplus_{u\in{\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}$
$\displaystyle\quad\oplus\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}\;\bigoplus_{u\in{\mbox{$\cal{R}$}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}}\oplus\bigoplus_{u\in{\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}}\;\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}Y_{\\{u\\}\cup{\mbox{$\cal{R}$}}}$
$\displaystyle\quad\oplus\bigoplus_{u\in{\mbox{$\cal{R}$}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}}\;\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}\bigg{\\{}W_{d_{u},\mbox{$\cal{R}$}}\oplus\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}\bigg{\\}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}Y_{\\{u\\}\cup{\mbox{$\cal{R}$}}}$
$\displaystyle\quad\oplus\bigoplus_{u\in{\mbox{$\cal{R}$}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}}\;\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap(\mbox{$\cal{R}$}\backslash\\{u\\})}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}\bigg{\\{}\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap(\mbox{$\cal{R}$}\backslash\\{u\\})}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}\bigg{\\}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}Y_{\\{u\\}\cup{\mbox{$\cal{R}$}}}$
$\displaystyle\quad\oplus\bigoplus_{u\in{\mbox{$\cal{R}$}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}}\;\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap(\mbox{$\cal{R}$}\backslash\\{u\\})}W_{d_{u},\\{t\\}\cup{\mbox{$\cal{R}$}}\backslash\\{u\\}}$
$\displaystyle\overset{\mathrm{(c)}}{=}\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}Z^{d_{u}}_{i,\mbox{$\cal{R}$}\backslash\\{u\\}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}Y_{\\{u\\}\cup{\mbox{$\cal{R}$}}}\quad\oplus\bigoplus_{u\in{\mbox{$\cal{R}$}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}}}Z^{d_{u}}_{i,\mbox{$\cal{R}$}\backslash\\{u\\}}$
where $(a)$ follows since, due to the structure of demands in
$\cal{D}_{RS}$, every subfile
$W_{d_{u},\mbox{$\cal{R}$}},u\in\mbox{$\cal{V}$}_{i}$, except
$W_{d^{\prime}_{i},\mbox{$\cal{R}$}}$, appears twice in the summation on the
RHS of $(a)$ and hence cancels. Further,
$(b)$ follows from the definition of $Y_{\\{u\\}\cup\mbox{$\cal{R}$}}$. The
symbols in the first and third terms of $(c)$ can be obtained from the cache
of the user due to Lemma 2, and the symbols in the second term can be obtained
from the delivery part because of Remark 4. Hence, the decodability of user
$u^{\prime}_{i}\in\mbox{$\cal{U}$}_{1}$ follows. This completes the proof of
Lemma 1.
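The cancellation in step $(a)$ can be illustrated with a small numerical experiment. The sketch below (parameters and names are illustrative) models each subfile $W_{j,\mbox{$\cal{R}$}}$ as a random bit string and checks that XOR-ing over $\mbox{$\cal{V}$}_{i}$, where each stack demands a cyclic shift of $(0,\ldots,N-1)$, leaves exactly $W_{d^{\prime}_{i},\mbox{$\cal{R}$}}$:

```python
import random

random.seed(1)
N, F = 4, 16                       # N files, F-bit subfiles (toy sizes)
# One fixed subfile W_{j,R} per file j, modelled as a random bit string.
W = [random.getrandbits(F) for _ in range(N)]

def stack_demands(shift):
    """Demands of the N virtual users of one stack: a cyclic shift of
    (0, ..., N-1), so each file is demanded exactly once per stack."""
    return [(j - shift) % N for j in range(N)]

d0 = stack_demands(random.randrange(N))   # stack 0 (all U_2 users)
di = stack_demands(random.randrange(N))   # stack i (N-1 U_2 users + u'_i)
pos = random.randrange(N)                 # position of u'_i inside stack i
d_prime = di[pos]                         # demand of the U_1 user u'_i
di_u2 = di[:pos] + di[pos + 1:]           # demands of stack i's U_2 users

# XOR of W_{d_u, R} over u in V_i = K'_0 ∪ K'_i: every file except d'_i
# is demanded twice and cancels, leaving W_{d'_i, R}.
acc = 0
for dem in d0 + di_u2:
    acc ^= W[dem]
assert acc == W[d_prime]
```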
### IV-C Proof of Theorem 3
In the placement phase, the caches of all users are populated with the same
$M/N$ fraction of each file. Let each file $W_{i}$ be split into two parts:
cached part $W^{(c)}_{i}$ of length $FM/N$, and uncached part $W^{(u)}_{i}$ of
length $F(1-M/N)$. The cache contents of all the users are the same, and given
by $Z_{k}=(Z^{(0)},Z^{(1)},\ldots,Z^{(N-1)})$, where
$\displaystyle Z^{(i)}=W^{(c)}_{i},\quad\text{for }i=0,1,\ldots,N-1.$
To describe the delivery phase, we consider two cases:
Case 1: $N\leq K$
For $N\leq K$, the server broadcasts the remaining $(1-M/N)$ fraction of each
file. This scheme achieves privacy because the transmission does not depend on
the demands of the users.
Case 2: $K<N$
Let $D_{0},D_{1},\ldots,D_{K-1}$ be the demands of the users. The transmission
$X$ has two parts $(X^{\prime},J)$, where
$X^{\prime}=(X^{\prime}_{0},X^{\prime}_{1},\ldots,X^{\prime}_{K-1})$ is the
main payload, and $J$ is the auxiliary transmission of negligible rate which
helps each user find the corresponding decoding function. For each $i$,
$X^{\prime}_{i}$ is either $W^{(u)}_{D_{j}}$ for some $j$ or random bits of
the same length. In particular, the position of $W^{(u)}_{D_{j}}$ in
$X^{\prime}$ is denoted by a random variable $P_{j}\in[0:K-1]$. The random
variables $P_{0},P_{1},\ldots,P_{K-1}$ are defined inductively as
$\displaystyle P_{i}=$ $\displaystyle\begin{cases}P_{j}&\text{if
}D_{i}=D_{j}\text{ for some }j<i\\\ \sim
unif([0:K-1]\setminus\\{P_{0},P_{1},\ldots,P_{i-1}\\})&\text{if }D_{i}\neq
D_{j}\text{ for all }j<i.\end{cases}$
Note that each demanded (uncached) file is transmitted in only one component
of the transmission, so that a user cannot detect the same file (as its own
demand) being transmitted in another component and thereby infer that the
corresponding user has the same demand.
The keys $S_{0},S_{1},\ldots,S_{K-1}\in[0:K-1]$ are chosen i.i.d. and
uniformly at random. The transmission is then given by
$\displaystyle X^{\prime}_{j}=\begin{cases}W^{(u)}_{D_{i}}&\text{ if
}j=P_{i}\text{ for some }i\in[0:K-1]\\\ \sim
unif\left(\\{0,1\\}^{F(1-M/N)}\right)&\text{ otherwise}\end{cases}$
and
$\displaystyle
J=(P_{0}\oplus_{K}S_{0},P_{1}\oplus_{K}S_{1},\ldots,P_{K-1}\oplus_{K}S_{K-1})$
where $\oplus_{K}$ denotes the addition modulo $K$ operation. Since user $k$
knows $S_{k}$, it can find $P_{k}$ from $J$. It then can find
$X^{\prime}_{P_{k}}=W^{(u)}_{D_{k}}$, and thus
$W_{D_{k}}=(Z^{(D_{k})},X^{\prime}_{P_{k}})$.
Next we show that this scheme also satisfies the privacy condition. Let us
denote $Q_{i}=P_{i}\oplus_{K}S_{i}$ for brevity.
$\displaystyle I({\bar{D}_{\tilde{k}}};X,D_{k},Z_{k})$
$\displaystyle=I({\bar{D}_{\tilde{k}}};X^{\prime}_{0},\ldots,X^{\prime}_{K-1},Q_{0},Q_{1},\ldots,Q_{K-1},D_{k},S_{k},W^{(c)}_{0},\ldots,W^{(c)}_{N-1})$
$\displaystyle\overset{(a)}{=}I({\bar{D}_{\tilde{k}}};Q_{0},\ldots,Q_{K-1},D_{k},S_{k})$
$\displaystyle=I({\bar{D}_{\tilde{k}}};Q_{0},\ldots,Q_{k-1},Q_{k+1},\ldots,Q_{K-1},D_{k},S_{k},P_{k})$
$\displaystyle\overset{(b)}{=}0$
where $(a)$ follows because
$(X^{\prime}_{0},\ldots,X^{\prime}_{K-1},W^{(c)}_{0},\ldots,W^{(c)}_{N-1})$ is
uniformly distributed in $\\{0,1\\}^{MF+FK(1-M/N)}$, and is independent of
$({\bar{D}_{\tilde{k}}},Q_{0},\ldots,Q_{K-1},D_{k},S_{k})$, and $(b)$ follows
because all the random variables in the mutual information are independent. In
this scheme, the number of broadcast bits is $FK(1-M/N)$, as the number of
bits transmitted for communicating $J$ is negligible for large $F$. Thus, the
scheme achieves rate $K(1-M/N)$.
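The Case 2 delivery can be simulated end to end. The following Python sketch (toy parameters; names are not from the paper) draws the positions $P_{i}$ as described, pads them with the keys to form $J$, and checks that every user recovers its uncached part:

```python
import random

random.seed(2)
N, K, F = 5, 3, 8                  # K < N; F-bit uncached parts (toy sizes)
Wu = [random.getrandbits(F) for _ in range(N)]   # parts W^(u)_i

D = [random.randrange(N) for _ in range(K)]      # demands
S = [random.randrange(K) for _ in range(K)]      # shared keys

# Positions P_i: equal demands reuse the earlier position; a new demand
# draws a uniform position among the still-unused slots.
P = []
for i in range(K):
    prev = next((P[j] for j in range(i) if D[j] == D[i]), None)
    P.append(prev if prev is not None else
             random.choice([p for p in range(K) if p not in P]))

# Main payload X': demanded parts at their positions, random filler bits
# in every unused slot.
X = [random.getrandbits(F) for _ in range(K)]
for i in range(K):
    X[P[i]] = Wu[D[i]]

# Auxiliary transmission: positions one-time-padded with the keys mod K.
J = [(P[i] + S[i]) % K for i in range(K)]

# User k strips its key from J to find P_k, then reads off W^(u)_{D_k}.
for k in range(K):
    Pk = (J[k] - S[k]) % K
    assert X[Pk] == Wu[D[k]]
```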
### IV-D Proof of Theorem 4
First we explain the scheme that achieves the memory-rate pairs given in
Theorem 4. We then show that this scheme also preserves privacy.
For $t\in\\{1,2,\ldots,NK-1\\}$, we partition file $W_{i},i\in[0:N-1]$ into
$\sum_{l=t}^{NK-1}{NK\choose l}$ segments of $(NK-t)$ different sizes. These
segments are grouped into $(NK-t)$ groups such that all segments in the same
group have the same size. The segments are labelled by some subsets of
$[0:NK-1]$. The segments of $W_{i}$ are $W_{i,\mbox{$\cal{R}$}}$;
$\mbox{$\cal{R}$}\subset[0:NK-1],|\mbox{$\cal{R}$}|\in\\{t,t+1,\ldots,NK-1\\}$.
These $(NK-t)$ groups are given as
$\displaystyle\mbox{$\cal{T}$}_{|\mbox{$\cal{R}$}|}^{i}=\\{W_{i,\mbox{$\cal{R}$}}|\mbox{$\cal{R}$}\subset[0:NK-1]\\}\quad\text{
for }t\leq|\mbox{$\cal{R}$}|\leq NK-1.$
Thus, file $W_{i},i\in[0:N-1]$ is given as
$\displaystyle
W_{i}=\left(\mbox{$\cal{T}$}_{|\mbox{$\cal{R}$}|}^{i}\right)_{t\leq|\mbox{$\cal{R}$}|\leq
NK-1}.$
All elements of each file in one group have the same size, and elements of
different groups have different sizes. For
$|\mbox{$\cal{R}$}_{1}|<|\mbox{$\cal{R}$}_{2}|$, the size of an element in
$\mbox{$\cal{T}$}_{|\mbox{$\cal{R}$}_{1}|}^{i}$ is
$r^{|\mbox{$\cal{R}$}_{2}|-|\mbox{$\cal{R}$}_{1}|}$ times the size of an
element in $\mbox{$\cal{T}$}_{|\mbox{$\cal{R}$}_{2}|}^{i}$ for a parameter
$r\in[1,N-1]$. Hence, for $i\in[0:N-1]$ and
$\mbox{$\cal{R}$}\subset[0:NK-1],t\leq|\mbox{$\cal{R}$}|\leq NK-1$, we have
$len(W_{i,\mbox{$\cal{R}$}})=\frac{r^{NK-|\mbox{$\cal{R}$}|-1}}{\sum_{s=t}^{NK-1}{NK\choose
s}r^{NK-s-1}}F.$
The sizes of all these segments sum to $F$ since, for any $r>0$, we have
$\displaystyle len(W_{i})$
$\displaystyle=\sum_{|\mbox{$\cal{R}$}|=t}^{NK-1}\frac{{NK\choose|\mbox{$\cal{R}$}|}r^{NK-|\mbox{$\cal{R}$}|-1}}{\sum_{s=t}^{NK-1}{NK\choose
s}r^{NK-s-1}}F$
$\displaystyle=\frac{\sum_{|\mbox{$\cal{R}$}|=t}^{NK-1}{NK\choose|\mbox{$\cal{R}$}|}r^{NK-|\mbox{$\cal{R}$}|-1}}{\sum_{s=t}^{NK-1}{NK\choose
s}r^{NK-s-1}}F$ $\displaystyle=F.$
Caching: The cache content $Z_{k}$ of user $k\in[0:K-1]$ has two components:
the main load $Z^{\prime}_{k}$ and sub load $Z^{\prime\prime}_{k}$. The main
load $Z^{\prime}_{k}$ is grouped into $NK-t$ groups similar to the way we
partition the file. The groups are indexed by the cardinality of
$\mbox{$\cal{R}$}\subset[0:NK-1]$, where $t\leq|\mbox{$\cal{R}$}|\leq NK-1$.
The group indexed by $|\mbox{$\cal{R}$}|$ of user $k$ is denoted by
$\mbox{$\cal{G}$}_{k,|\mbox{$\cal{R}$}|}$. Its content is determined by random
variable $S_{k}\sim unif[0:N-1]$ which is shared between user $k$ and the
server, and it is given by
$\displaystyle\mbox{$\cal{G}$}_{k,|\mbox{$\cal{R}$}|}=\\{W_{i,\mbox{$\cal{R}$}}|W_{i,\mbox{$\cal{R}$}}\in\mbox{$\cal{T}$}_{|\mbox{$\cal{R}$}|}^{i}\text{
and }S_{k}+kN\in\mbox{$\cal{R}$}\\}_{i\in[0:N-1]}.$
Then the main load $Z^{\prime}_{k}$ is given by
$\displaystyle
Z^{\prime}_{k}:=(\mbox{$\cal{G}$}_{k,|\mbox{$\cal{R}$}|})_{t\leq|\mbox{$\cal{R}$}|\leq
NK-1}.$
Since there are $N{NK-1\choose|\mbox{$\cal{R}$}|-1}$ elements in
$\mbox{$\cal{G}$}_{k,|\mbox{$\cal{R}$}|}$, we obtain the size of the main load
as
$\displaystyle len(Z^{\prime}_{k})$
$\displaystyle=\sum_{|\mbox{$\cal{R}$}|=t}^{NK-1}|\mbox{$\cal{G}$}_{k,|\mbox{$\cal{R}$}|}|\frac{r^{NK-|\mbox{$\cal{R}$}|-1}F}{\sum_{s=t}^{NK-1}{NK\choose
s}r^{NK-s-1}}$ $\displaystyle=\frac{N\sum_{s=t}^{NK-1}{NK-1\choose
s-1}r^{NK-s-1}F}{\sum_{s=t}^{NK-1}{NK\choose s}r^{NK-s-1}}$
$\displaystyle=MF.$
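The count $|\mbox{$\cal{G}$}_{k,|\mbox{$\cal{R}$}|}|=N{NK-1\choose|\mbox{$\cal{R}$}|-1}$ used above (the subsets of a fixed size containing the index $S_{k}+kN$, once per file) can be confirmed by enumeration; the sketch below uses small, purely illustrative parameters:

```python
from itertools import combinations
from math import comb

# Toy parameters (illustrative): N files, K users, partition parameter t.
N, K, t = 2, 2, 1
NK = N * K
fixed = 1                      # plays the role of the index S_k + kN

# |G_{k,s}| = N * C(NK-1, s-1): for each of the N files, the subsets R of
# [0:NK-1] with |R| = s containing the fixed index number C(NK-1, s-1).
for s in range(t, NK):
    cnt = sum(1 for R in combinations(range(NK), s) if fixed in R)
    assert cnt == comb(NK - 1, s - 1)

# Plugging these counts into the segment sizes reproduces M.
r = 1                          # size-ratio parameter, r in [1, N-1]
M_num = N * sum(comb(NK - 1, s - 1) * r ** (NK - s - 1) for s in range(t, NK))
M_den = sum(comb(NK, s) * r ** (NK - s - 1) for s in range(t, NK))
# len(Z'_k) = (M_num / M_den) * F = M F
```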
Now we define the sub load $Z^{\prime\prime}_{k}$ which is of negligible size
compared to the file size $F$. To this end, we first define
$\mbox{$\cal{L}$}:=\\{kN+S_{k}|0\leq k\leq K-1\\},$
and
$\tau:=\\{\mbox{$\cal{R}$}|\mbox{$\cal{R}$}\subset[0:NK-1],\mbox{$\cal{R}$}\cap\mbox{$\cal{L}$}\neq\phi,\text{
and }t+1\leq|\mbox{$\cal{R}$}|\leq NK-1\\}.$
The server generates independent symbols $S^{\prime}_{\mbox{$\cal{R}$}}$ for
all $\mbox{$\cal{R}$}\in\tau$, where each $S^{\prime}_{\mbox{$\cal{R}$}}\sim
unif\\{[0:\kappa_{|\mbox{$\cal{R}$}|}-1]\\}$, with $\kappa_{s}$ defined as
$\displaystyle\kappa_{s}={NK\choose s}-{NK-K\choose s},\quad\mbox{ for
}s\in\\{t+1,t+2,\ldots,NK-1\\}.$
For all $\mbox{$\cal{R}$}\in\tau$, $S^{\prime}_{\mbox{$\cal{R}$}}$ is cached
at user $k$ if and only if $k\in\mbox{$\cal{R}$}$. Then, the sub load
$Z^{\prime\prime}_{k}$ is given by
$\displaystyle
Z^{\prime\prime}_{k}:=\left(\\{S^{\prime}_{\mbox{$\cal{R}$}}|\mbox{$\cal{R}$}\in\tau\text{
and }(kN+S_{k})\in\mbox{$\cal{R}$}\\},S_{k}\right).$ (29)
The cache content $Z_{k}$ is the concatenation of $Z^{\prime}_{k}$ and
$Z^{\prime\prime}_{k}$, i.e., $Z_{k}=(Z^{\prime}_{k},Z^{\prime\prime}_{k})$.
Delivery: For a given demand vector $(D_{0},\ldots,D_{K-1})$, the server first
constructs an expanded demand vector $\bar{d}$ of length $NK$. We write it as
$K$ vectors of length $N$ each, as follows:
$\displaystyle\bar{d}=\left[\bar{d}^{(0)},\bar{d}^{(1)},\ldots,\bar{d}^{(K-1)}\right]$
(30)
where $\bar{d}^{(k)},k\in[0:K-1]$ is the vector obtained by applying
$S_{k}\ominus D_{k}$ cyclic shifts to the vector $(0,1,\ldots,N-1)$. Here
$\ominus$ denotes modulo $N$ subtraction. That is, for $k\in[0:K-1]$,
$d_{i}^{(k)}=(i-(S_{k}-D_{k}))\mod N$. We also define
$\displaystyle\bar{S}\ominus\bar{D}:=\left(S_{0}\ominus D_{0},S_{1}\ominus
D_{1},\ldots,S_{K-1}\ominus D_{K-1}\right).$ (31)
To explain the broadcast transmission, we define symbols
$Y_{\mbox{$\cal{R}$}}$ for $\mbox{$\cal{R}$}\subset[0:NK-1]$ and
$t+1\leq|\mbox{$\cal{R}$}|\leq NK$ as follows:
$\displaystyle
Y_{\mbox{$\cal{R}$}}:=\bigoplus_{u\in\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}\setminus\\{u\\}}$
where $d_{u}$ is the $(u+1)$-th entry of $\bar{d}$. For all
$\mbox{$\cal{R}$}\in\tau$, we define symbols $W_{\mbox{$\cal{R}$}}$ as
$W_{\mbox{$\cal{R}$}}:=(W_{0,\mbox{$\cal{R}$}}\oplus
W_{1,\mbox{$\cal{R}$}},W_{1,\mbox{$\cal{R}$}}\oplus
W_{2,\mbox{$\cal{R}$}},\ldots,W_{N-2,\mbox{$\cal{R}$}}\oplus
W_{N-1,\mbox{$\cal{R}$}}).$ (32)
If the size of $W_{\mbox{$\cal{R}$}}$ is $F^{\prime}$ bits, we denote the
first $rF^{\prime}/(N-1)$ bits of $W_{\mbox{$\cal{R}$}}$ by
$W^{r}_{\mbox{$\cal{R}$}}$, where $r\in[1,N-1]$. Further, we also define
$\displaystyle V_{\mbox{$\cal{R}$}}$
$\displaystyle:=Y_{\mbox{$\cal{R}$}}\oplus
W^{r}_{\mbox{$\cal{R}$}},\quad\text{ for
}r\in[1,N-1],\enspace\mbox{$\cal{R}$}\in\tau,$ (33)
and
$\displaystyle V_{|\mbox{$\cal{R}$}|}$
$\displaystyle:=\\{V_{\mbox{$\cal{R}$}}|\mbox{$\cal{R}$}\in\tau\\}.$ (34)
Also, the set $V$ is defined as the concatenation of all the sets defined in (34),
i.e.,
$\displaystyle V$ $\displaystyle:=(V_{s})_{t+1\leq s\leq NK-1}.$
The server picks permutation functions
$\left(\pi_{t+1}(\cdot),\pi_{t+2}(\cdot),\ldots,\pi_{NK-1}(\cdot)\right)$,
where $\pi_{i}(\cdot)$ is picked uniformly at random from the symmetric group
of permutations of $[0:\kappa_{i}-1]$ for $i\in\\{t+1,t+2,\ldots,NK-1\\}$.
These permutation functions are not fully shared with any of the users. The
main payload is given by
$X^{\prime}=(X^{\prime}_{t+1},X^{\prime}_{t+2},\ldots,X^{\prime}_{NK-1},Y_{[0:NK-1]})=(\pi_{t+1}(V_{t+1}),\pi_{t+2}(V_{t+2}),\ldots,\pi_{NK-1}(V_{NK-1}),Y_{[0:NK-1]}).$
The transmission rate is calculated as follows. For
$t+1\leq|\mbox{$\cal{R}$}|\leq NK-1$, the server transmits
${NK\choose|\mbox{$\cal{R}$}|}-{NK-K\choose|\mbox{$\cal{R}$}|}$ symbols
$V_{\mbox{$\cal{R}$}}$, and it also transmits $Y_{[0:NK-1]}$.
Then, the total number of bits transmitted in the main payload is given by
$\displaystyle len(X^{\prime})$
$\displaystyle=\sum_{s=t+1}^{NK}\frac{[{NK\choose s}-{NK-K\choose
s}]r^{NK-s}F}{\sum_{s=t}^{NK-1}{NK\choose s}r^{NK-s-1}}$ $\displaystyle=RF.$
Along with $X^{\prime}$, the server also broadcasts some auxiliary
transmission $J$ of negligible rate, given by
$\displaystyle J$
$\displaystyle=(\\{S^{\prime}_{\mbox{$\cal{R}$}}\oplus\alpha_{|\mbox{$\cal{R}$}|,\mbox{$\cal{R}$}}|\mbox{$\cal{R}$}\in\tau\\},\bar{S}\ominus\bar{D})$
$\displaystyle=(J^{\prime},\bar{S}\ominus\bar{D}).$ (35)
Here, $\alpha_{|\mbox{$\cal{R}$}|,\mbox{$\cal{R}$}}$ denotes the position of
$V_{\mbox{$\cal{R}$}}$ in $\pi_{|\mbox{$\cal{R}$}|}(V_{|\mbox{$\cal{R}$}|})$
for $\mbox{$\cal{R}$}\in\tau$. The private keys ensure that the location of
any symbol $V_{\mbox{$\cal{R}$}}$ is shared with user $k$ if and only if
$S_{k}+kN\in\mbox{$\cal{R}$}$. For large file sizes, the size of the auxiliary
transmission is negligible. The broadcasted message, $X$ can thus be given as
$X=(X^{\prime},J)$.
Decoding: Now we explain how user $k\in[0:K-1]$ decodes the segments that are
missing from each group in her cache. We can observe that the group
$\mbox{$\cal{G}$}_{k,NK-1}$ in the cache of user $k$ has all the elements of
$\mbox{$\cal{T}$}_{NK-1}^{D_{k}}$ except one. This missing element
$W_{D_{k},[0:NK-1]\setminus\\{S_{k}+kN\\}}$ can be decoded as
$\displaystyle\widehat{W}_{D_{k},[0:NK-1]\setminus\\{S_{k}+kN\\}}=Y_{[0:NK-1]}\oplus\left(\bigoplus_{u\in{[0:NK-1]\setminus\\{S_{k}+kN\\}}}W_{d_{u},{[0:NK-1]}\setminus\\{u\\}}\right).$
Observe that $Y_{[0:NK-1]}$ is broadcasted by the server while each symbol
$W_{d_{u},[0:NK-1]\setminus\\{u\\}}$ is a part of $\mbox{$\cal{G}$}_{k,NK-1}$
and hence a part of the cache of user $k$. Thus, user $k$ can compute
$\widehat{W}_{D_{k},[0:NK-1]\setminus\\{S_{k}+kN\\}}$. It follows that
$\displaystyle\widehat{W}_{D_{k},[0:NK-1]\setminus\\{S_{k}+kN\\}}$
$\displaystyle=Y_{[0:NK-1]}\oplus\left(\bigoplus_{u\in{[0:NK-1]\setminus\\{S_{k}+kN\\}}}W_{d_{u},{[0:NK-1]}\setminus\\{u\\}}\right)$
$\displaystyle=\bigoplus_{u\in[0:NK-1]}W_{d_{u},[0:NK-1]\setminus\\{u\\}}\oplus\left(\bigoplus_{u\in{[0:NK-1]\setminus\\{S_{k}+kN\\}}}W_{d_{u},{[0:NK-1]}\setminus\\{u\\}}\right)$
$\displaystyle=W_{d_{S_{k}+kN},[0:NK-1]\setminus\\{S_{k}+kN\\}}$
$\displaystyle\overset{(a)}{=}W_{D_{k},[0:NK-1]\setminus\\{S_{k}+kN\\}}.$
Here $(a)$ follows because $d_{S_{k}+kN}=(S_{k}+kN-(S_{k}-D_{k}))$ mod $N$
$=D_{k}$. Now that user $k$ has all the segments in
$\mbox{$\cal{T}$}^{D_{k}}_{NK-1}$, we explain how user $k$ can obtain all
symbols in any set $\mbox{$\cal{T}$}^{D_{k}}_{j}$, where $t\leq j\leq NK-2$.
All symbols $W_{D_{k},\mbox{$\cal{R}$}}\in\mbox{$\cal{T}$}^{D_{k}}_{j}$ such
that $S_{k}+kN\in\mbox{$\cal{R}$}$ form the group $\mbox{$\cal{G}$}_{k,j}$ and
hence are a part of her cache. All the remaining symbols
$W_{D_{k},\mbox{$\cal{R}$}}\in\mbox{$\cal{T}$}^{D_{k}}_{j}$ satisfying
$S_{k}+kN\notin\mbox{$\cal{R}$}$ can be decoded by user $k$ as follows:
$\displaystyle\widehat{W}_{D_{k},{\mbox{$\cal{R}$}}}$
$\displaystyle=X^{\prime}_{{|\mbox{$\cal{R}$}^{+}|},t}\oplus
W^{r}_{\mbox{$\cal{R}$}^{+}}\oplus\left(\bigoplus_{u\in{\mbox{$\cal{R}$}}}W_{d_{u},\mbox{$\cal{R}$}^{+}\setminus\\{u\\}}\right)$
where $\mbox{$\cal{R}$}^{+}=\\{S_{k}+kN\\}\cup{\mbox{$\cal{R}$}}$,
$t=\alpha_{{|\mbox{$\cal{R}$}^{+}|},\mbox{$\cal{R}$}^{+}}$ and
$X^{\prime}_{{|\mbox{$\cal{R}$}^{+}|},t}$ denotes the symbol in the $t$-th
position of $X^{\prime}_{{|\mbox{$\cal{R}$}^{+}|}}$. Here,
$X^{\prime}_{{|\mbox{$\cal{R}$}^{+}|}}$ is a part of the broadcast. User $k$
can recover $t$ using the auxiliary transmission as
$t=S^{\prime}_{\mbox{$\cal{R}$}^{+}}\oplus(S^{\prime}_{\mbox{$\cal{R}$}^{+}}\oplus\alpha_{|\mbox{$\cal{R}$}^{+}|,\mbox{$\cal{R}$}^{+}})$
because $S^{\prime}_{\mbox{$\cal{R}$}^{+}}$ is part of her cache. All symbols
in the second and third terms can also be recovered from the cache of user
$k$. Thus, user $k$ can compute $\widehat{W}_{D_{k},{\mbox{$\cal{R}$}}}$.
In particular, we obtain
$\displaystyle\widehat{W}_{D_{k},{\mbox{$\cal{R}$}}}$
$\displaystyle=X^{\prime}_{{|\mbox{$\cal{R}$}^{+}|},t}\oplus
W^{r}_{\mbox{$\cal{R}$}^{+}}\oplus\left(\bigoplus_{u\in{\mbox{$\cal{R}$}}}W_{d_{u},\mbox{$\cal{R}$}^{+}\setminus\\{u\\}}\right)$
$\displaystyle=V_{\\{S_{k}+kN\\}\cup\mbox{$\cal{R}$}}\oplus
W^{r}_{\\{S_{k}+kN\\}\cup\mbox{$\cal{R}$}}\oplus\left(\bigoplus_{u\in{\mbox{$\cal{R}$}}}W_{d_{u},\\{S_{k}+kN\\}\cup{\mbox{$\cal{R}$}}\setminus\\{u\\}}\right)$
$\displaystyle=Y_{\\{S_{k}+kN\\}\cup\mbox{$\cal{R}$}}\oplus
W^{r}_{\\{S_{k}+kN\\}\cup\mbox{$\cal{R}$}}\oplus
W^{r}_{\\{S_{k}+kN\\}\cup\mbox{$\cal{R}$}}\oplus\left(\bigoplus_{u\in{\mbox{$\cal{R}$}}}W_{d_{u},\\{S_{k}+kN\\}\cup{\mbox{$\cal{R}$}}\setminus\\{u\\}}\right)$
$\displaystyle=\bigoplus_{u\in\\{S_{k}+kN\\}\cup{\mbox{$\cal{R}$}}}W_{d_{u},\\{S_{k}+kN\\}\cup{\mbox{$\cal{R}$}}\setminus\\{u\\}}\oplus\left(\bigoplus_{u\in{\mbox{$\cal{R}$}}}W_{d_{u},\\{S_{k}+kN\\}\cup{\mbox{$\cal{R}$}}\setminus\\{u\\}}\right)$
$\displaystyle=W_{D_{k},\mbox{$\cal{R}$}}$
which shows that user $k$ can recover all symbols in
$\mbox{$\cal{T}$}^{D_{k}}_{j}$ for $t\leq j\leq NK-1$. This completes the
proof for decodability.
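The first decoding step, recovering the missing segment of $\mbox{$\cal{T}$}^{D_{k}}_{NK-1}$ from $Y_{[0:NK-1]}$ and the cached segments, can be simulated directly. The sketch below (toy parameters, illustrative names) verifies that the XOR indeed collapses to $W_{D_{k},[0:NK-1]\setminus\\{S_{k}+kN\\}}$:

```python
import random

random.seed(3)
N, K, F = 2, 2, 8                  # toy parameters
NK = N * K

D = [random.randrange(N) for _ in range(K)]   # demands
S = [random.randrange(N) for _ in range(K)]   # shared keys

# Expanded demands: stack k is (0, ..., N-1) shifted by S_k - D_k, so
# that d_{S_k + kN} = D_k.
d = []
for k in range(K):
    d += [(j - (S[k] - D[k])) % N for j in range(N)]

# Subfiles W_{i, [0:NK-1]\{u}}, indexed by (file i, excluded index u).
W = {(i, u): random.getrandbits(F) for i in range(N) for u in range(NK)}

# Broadcast symbol Y_{[0:NK-1]}.
Y = 0
for u in range(NK):
    Y ^= W[(d[u], u)]

# User k holds W_{d_u, [0:NK-1]\{u}} for every u != S_k + kN (these lie
# in G_{k, NK-1}); XOR-ing them out of Y leaves the missing segment.
for k in range(K):
    held = S[k] + k * N
    acc = Y
    for u in range(NK):
        if u != held:
            acc ^= W[(d[u], u)]
    assert acc == W[(D[k], held)]
```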
Proof of privacy: We show that
$\displaystyle I(X;{\bar{D}_{\tilde{k}}}|Z_{k},D_{k})=0,\quad\forall
k\in[0:K-1]$ (36)
which implies the privacy condition
$I({\bar{D}_{\tilde{k}}};X,Z_{k},D_{k})=0$, since
$I({\bar{D}_{\tilde{k}}};Z_{k},D_{k})=0$. To show (36), we first define
$\displaystyle
B_{k}:=\\{\alpha_{|\mbox{$\cal{R}$}|,\mbox{$\cal{R}$}}|\mbox{$\cal{R}$}\subset[0:NK-1],kN+S_{k}\in\mbox{$\cal{R}$},t+1\leq|\mbox{$\cal{R}$}|\leq
NK-1\\}.$ (37)
We also divide $J^{\prime}$ given in (35) into two parts,
$J^{\prime}=(J^{\prime}_{k},\tilde{J^{\prime}}_{k})$, where $J^{\prime}_{k}$
is the part of $J^{\prime}$ that can be accessed by user $k$ while
$\tilde{J^{\prime}}_{k}$ is the remaining part. These are defined as follows:
$\displaystyle J^{\prime}_{k}$
$\displaystyle:=\\{S^{\prime}_{\mbox{$\cal{R}$}}\oplus\alpha_{|\mbox{$\cal{R}$}|,\mbox{$\cal{R}$}}|\mbox{$\cal{R}$}\in\tau,kN+S_{k}\in\mbox{$\cal{R}$}\\},$
$\displaystyle\tilde{J^{\prime}}_{k}$ $\displaystyle:=J^{\prime}\setminus
J^{\prime}_{k}.$
Then, we have
$\displaystyle I(X;{\bar{D}_{\tilde{k}}}|Z_{k},D_{k})$
$\displaystyle=I(X^{\prime},J;{\bar{D}_{\tilde{k}}}|Z_{k},D_{k})$
$\displaystyle=I(X^{\prime},J^{\prime}_{k},\tilde{J^{\prime}}_{k},\bar{S}\ominus\bar{D};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k})$
$\displaystyle\overset{(a)}{=}I(X^{\prime},\bar{S}\ominus\bar{D},J^{\prime}_{k},B_{k};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k})$
$\displaystyle\overset{(b)}{=}I(X^{\prime},\bar{S}\ominus\bar{D},B_{k};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k})$
$\displaystyle=I(\bar{S}\ominus\bar{D},B_{k};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k})+I(X^{\prime};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},\bar{S}\ominus\bar{D},B_{k})$
$\displaystyle\overset{(c)}{=}I(X^{\prime};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},\bar{S}\ominus\bar{D},B_{k})$
$\displaystyle=I(Y_{[0:NK-1]},\pi_{t+1}(V_{t+1}),\pi_{t+2}(V_{t+2}),\ldots,\pi_{NK-1}(V_{NK-1});{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D})$
$\displaystyle\overset{(d)}{=}I(Y_{[0:NK-1]},V;{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D}).$
(38)
Here, $(a)$ follows since $B_{k}$ is a function of $(Z_{k},J^{\prime}_{k})$
and $\tilde{J^{\prime}}_{k}$ is independent of all other random variables on
the RHS of $(a)$, and $(b)$ follows since $J^{\prime}_{k}$ is a function of
$(Z_{k},B_{k})$. Further, $(c)$ follows since $(\bar{S}\ominus\bar{D},B_{k})$
is independent of other random variables, and $(d)$ follows due to the fact
that the permutations
$\left(\pi_{t+1}(\cdot),\pi_{t+2}(\cdot),\ldots,\pi_{NK-1}(\cdot)\right)$ are
independent of all other random variables. Next we show that the RHS of (38)
is zero. To this end, we first divide the set $V$, defined in (34), into two
parts: the first part $X_{k}$ contains the symbols in $V$ whose positions are
known to user $k$, and the second part $\tilde{X}_{k}$ contains the remaining
symbols in $V$, i.e.,
$\displaystyle X_{k}$
$\displaystyle:=\\{V_{\mbox{$\cal{R}$}}|(kN+S_{k})\cap\mbox{$\cal{R}$}\neq\phi,\mbox{$\cal{R}$}\in\tau\\},$
$\displaystyle\tilde{X}_{k}$ $\displaystyle:=V\setminus X_{k}.$
The set $\tilde{X}_{k}$ can be further divided into groups labelled
$\tilde{X}_{k,|\mbox{$\cal{R}$}|}$, where $t+1\leq|\mbox{$\cal{R}$}|\leq
NK-1$, as follows:
$\tilde{X}_{k,|\mbox{$\cal{R}$}|}=\\{V_{\mbox{$\cal{R}$}}|(kN+S_{k})\cap\mbox{$\cal{R}$}=\phi,\mbox{$\cal{R}$}\in\tau\\}.$
Then, we get
$\displaystyle
I(Y_{[0:NK-1]},V;{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D})$
(39)
$\displaystyle=I(Y_{[0:NK-1]},X_{k},\tilde{X}_{k};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D})$
$\displaystyle=I(Y_{[0:NK-1]},X_{k};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D})+I(\tilde{X}_{k};{\bar{D}_{\tilde{k}}}|Y_{[0:NK-1]},X_{k},Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D}).$
$\displaystyle\overset{(a)}{=}I(Y_{[0:NK-1]},X_{k},W_{D_{k}};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D})+I(\tilde{X}_{k};{\bar{D}_{\tilde{k}}}|Y_{[0:NK-1]},X_{k},Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D},W_{D_{k}}).$
$\displaystyle\overset{(b)}{=}I(W_{D_{k}};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D})+I(\tilde{X}_{k};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D},W_{D_{k}}).$
$\displaystyle=I(W_{D_{k}};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D})+\sum_{i=t+1}^{NK-1}I(\tilde{X}_{k,i};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D},W_{D_{k}},\tilde{X}_{k,t+1},\ldots,\tilde{X}_{k,i-1})$
(40)
where in (40), we used $\tilde{X}_{k,i}=\phi$ for $i<t+1$. Here, $(a)$ follows
because we have seen in the decodability section that $W_{D_{k}}$ can be
recovered from $(\bar{S}\ominus\bar{D},Z_{k},X_{k},Y_{[0:NK-1]})$, and $(b)$
follows since each $V_{\mbox{$\cal{R}$}}\in X_{k}$ can be written as
$\displaystyle V_{\mbox{$\cal{R}$}}$ $\displaystyle=Y_{\mbox{$\cal{R}$}}\oplus
W^{r}_{\mbox{$\cal{R}$}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{R}$}}W_{d_{u},\mbox{$\cal{R}$}\backslash\\{u\\}}\oplus
W^{r}_{\mbox{$\cal{R}$}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{R}$}\backslash(S_{k}+kN)}W_{d_{u},\mbox{$\cal{R}$}\backslash\\{u\\}}\oplus
W_{d_{S_{k}+kN},\mbox{$\cal{R}$}\backslash\\{S_{k}+kN\\}}\oplus
W^{r}_{\mbox{$\cal{R}$}}.$
Here, the first and third terms can be recovered from $Z_{k}$ and the second
term is a part of $W_{D_{k}}$ since $d_{S_{k}+kN}=D_{k}$. Similarly, we have
$\displaystyle Y_{[0:NK-1]}$
$\displaystyle=\bigoplus_{u\in{[0:NK-1]}}W_{d_{u},{[0:NK-1]}\backslash\\{u\\}}$
$\displaystyle=\bigoplus_{u\in{[0:NK-1]\backslash(S_{k}+kN)}}W_{d_{u},{[0:NK-1]}\backslash\\{u\\}}\oplus
W_{d_{S_{k}+kN},[0:NK-1]\backslash\\{S_{k}+kN\\}}.$
Here, all symbols in the first term are a part of $Z_{k}$ while the second
term is a part of $W_{D_{k}}$ because $d_{S_{k}+kN}=D_{k}$. Thus,
$(X_{k},Y_{[0:NK-1]})$ is a function of
$(\bar{S}\ominus\bar{D},Z_{k},W_{D_{k}})$ which completes the argument for
$(b)$.
Next, we show that each term on the RHS of (40) is zero. First, we consider
the terms
$I(\tilde{X}_{k,i};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D},W_{D_{k}},\tilde{X}_{k,t+1},\ldots,\tilde{X}_{k,i-1})$
for $t+1\leq i\leq NK-1$. For simplicity of notation, we define set
$\tau_{k,i}$ as follows:
$\tau_{k,i}=\\{\mbox{$\cal{R}$}\in\tau,\mbox{$\cal{R}$}\cap(kN+S_{k})=\phi,|\mbox{$\cal{R}$}|=i\\}.$
For $k\in[0:K-1]$ and $t+1\leq i\leq NK-1$, we get
$\displaystyle
I(\tilde{X}_{k,i};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D},W_{D_{k}},\tilde{X}_{k,t+1},\ldots\tilde{X}_{k,i-1})$
$\displaystyle=I((V_{\mbox{$\cal{R}$}})_{\mbox{$\cal{R}$}\in\tau_{k,i}};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D},W_{D_{k}},\tilde{X}_{k,t+1},\ldots\tilde{X}_{k,i-1})$
$\displaystyle=I((Y_{\mbox{$\cal{R}$}}\oplus
W_{\mbox{$\cal{R}$}})_{\mbox{$\cal{R}$}\in\tau_{k,i}};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D},W_{D_{k}},\tilde{X}_{k,t+1},\ldots\tilde{X}_{k,i-1})$
$\displaystyle=I((Y_{\mbox{$\cal{R}$}}\oplus(W_{0,\mbox{$\cal{R}$}}\oplus
W_{1,\mbox{$\cal{R}$}},...,W_{N-2,\mbox{$\cal{R}$}}\oplus
W_{N-1,\mbox{$\cal{R}$}}))_{\mbox{$\cal{R}$}\in\tau_{k,i}};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D},W_{D_{k}},\tilde{X}_{k,t+1},\ldots\tilde{X}_{k,i-1})$
$\displaystyle=0.$ (41)
Here, (41) follows because each symbol $W_{n,\mbox{$\cal{R}$}}$,
$n\in[0:N-1]$, is non-overlapping with
$(\tilde{X}_{k,t+1},\ldots,\tilde{X}_{k,i-1},Y_{k,i})$ and because $W_{D_{k}}$
contains only one symbol in $W_{\mbox{$\cal{R}$}}$, namely
$W_{D_{k},\mbox{$\cal{R}$}}$. We can also see that the first term on the RHS
of (40) is zero, i.e.,
$\displaystyle
I(W_{D_{k}};{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D})=0,$
(42)
since $W_{D_{k}}$ is independent of ${\bar{D}_{\tilde{k}}}$. Thus, from (42)
and (41), we obtain
$\displaystyle
I(Y_{[0:NK-1]},V;{\bar{D}_{\tilde{k}}}|Z_{k},D_{k},B_{k},\bar{S}\ominus\bar{D})=0.$
This together with (38) implies (36).
### IV-E Proof of Theorem 5
To prove the theorem, we first give some notations and inequalities. For
parameter $r_{2}=\frac{KM}{N}$, let
$\displaystyle R^{\text{YMA}}_{N,K}\left(\frac{Nr_{2}}{K}\right)$
$\displaystyle=\frac{{K\choose r_{2}+1}-{K-\min(N,K)\choose
r_{2}+1}}{{K\choose r_{2}}},\quad\mbox{ for }r_{2}\in\\{0,1,\ldots,K\\},$ (43)
$\displaystyle R^{\text{MAN}}_{N,K}\left(M\right)$
$\displaystyle=K\left(1-\frac{M}{N}\right)\min\left(\frac{1}{1+\frac{KM}{N}},\frac{N}{K}\right),\quad\mbox{
for }M\in\\{0,N/K,2N/K,\ldots,N\\}.$ (44)
Furthermore, let $R^{\text{YMA, c}}_{N,K}(M)$ and $R^{\text{MAN, c}}_{N,K}(M)$
denote the lower convex envelopes of the points in (43) and (44), respectively.
Recall that $R^{*p}_{N,K}(M)$ and $R^{*}_{N,K}(M)$ denote the optimal rate
with privacy and without privacy as defined in (10) and (5), respectively.
Then we have the following inequalities which hold for all $M\geq 0$:
$\displaystyle
R^{*}_{N,K}(M)\stackrel{{\scriptstyle(a)}}{{\leq}}R^{*p}_{N,K}(M)\stackrel{{\scriptstyle(b)}}{{\leq}}R^{A}_{N,K}(M)=R^{\text{YMA,
c}}_{N,NK-K+1}(M)\leq R^{\text{YMA,
c}}_{N,NK}(M)\stackrel{{\scriptstyle(c)}}{{\leq}}R^{\text{MAN, c}}_{N,NK}(M)$
(45)
where $(a)$ follows from the fact that the optimal rate required with demand
privacy is at least as large as that without privacy, $(b)$ follows since any
achievable rate is lower-bounded by the optimal rate, and $(c)$ was shown in
[5].
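As a numerical sanity check of the chain (45) at the corner points, the two rate expressions in (43) and (44) can be evaluated directly. The sketch below is illustrative (the names `r_yma` and `r_man` are ours, not from the paper):

```python
from math import comb

def r_yma(N, K, r2):
    """YMA rate at the corner point M = N*r2/K, from (43)."""
    return (comb(K, r2 + 1) - comb(K - min(N, K), r2 + 1)) / comb(K, r2)

def r_man(N, K, M):
    """MAN rate at a corner memory M in {0, N/K, ..., N}, from (44)."""
    return K * (1 - M / N) * min(1 / (1 + K * M / N), N / K)

# YMA never exceeds MAN at common corner points, consistent with step (c) of (45).
for N in range(2, 6):
    for K in range(2, 8):
        for r2 in range(K + 1):
            assert r_yma(N, K, r2) <= r_man(N, K, N * r2 / K) + 1e-9
```

For instance, with $N=2$, $K=4$, $r_2=1$ (so $M=1/2$) this gives $R^{\text{YMA}}=1.25$ against $R^{\text{MAN}}=1.5$.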
#### IV-E1 Proof of Part 1), ($N\leq K$)
We first prove that
$\displaystyle\frac{R^{A}_{N,K}(M)}{R^{*}_{N,K}(M)}\leq\begin{cases}4&\text{
if }M\leq\left(1-\frac{N}{K}\right)\\\ 8&\text{
if }\left(1-\frac{N}{K}\right)\leq M\leq\frac{N}{2}.\end{cases}$ (46)
To this end, we show that
$\displaystyle\frac{R^{\text{MAN,
c}}_{N,NK}(M)}{R^{*}_{N,K}(M)}\leq\begin{cases}4&\text{ if
}M\leq\left(1-\frac{N}{K}\right)\\\ 8&\text{ if }\left(1-\frac{N}{K}\right)\leq
M\leq\frac{N}{2}.\end{cases}$ (47)
Then the result follows from (45). We first consider the ratio
$\frac{R^{\text{MAN}}_{N,NK}(M)}{R^{\text{MAN}}_{N,K}(M)}$ for
$M\in\\{0,N/K,2N/K,\ldots,N\\}$. We have
$\displaystyle\frac{R^{\text{MAN}}_{N,NK}(M)}{R^{\text{MAN}}_{N,K}(M)}=\frac{N\min\left(\frac{1}{1+KM},\frac{1}{K}\right)}{\min\left(\frac{1}{1+\frac{KM}{N}},\frac{N}{K}\right)},\quad
M\in\\{0,N/K,2N/K,\ldots,N\\}.$ (48)
We consider the following three cases.
Case 1: $M\in[0,1-\frac{N}{K}]$
We first find $\min\left(\frac{1}{1+KM},\frac{1}{K}\right)$ and
$\min\left(\frac{1}{1+\frac{KM}{N}},\frac{N}{K}\right)$.
$\displaystyle\frac{1}{1+KM}$ $\displaystyle\geq\frac{1}{1+K(1-N/K)}$
$\displaystyle=\frac{1}{K-N+1}$ $\displaystyle>\frac{1}{K},\quad\mbox{ for
}N>1.$
So, $\min\left(\frac{1}{1+KM},\frac{1}{K}\right)=\frac{1}{K}$. Further,
$\displaystyle\frac{1}{1+\frac{KM}{N}}$
$\displaystyle\geq\frac{1}{1+\frac{K}{N}(1-N/K)}$ $\displaystyle=\frac{N}{K}.$
Thus, $\min\left(\frac{1}{1+\frac{KM}{N}},\frac{N}{K}\right)=\frac{N}{K}$.
Hence, the ratio in (48) equals $1$.
Case 2: $M\in[1-\frac{N}{K},1-\frac{1}{K}]$
In this case, we get
$\displaystyle\min\left(\frac{1}{1+KM},\frac{1}{K}\right)=\frac{1}{K},$
and
$\displaystyle\min\left(\frac{1}{1+\frac{KM}{N}},\frac{N}{K}\right)=\frac{1}{1+\frac{KM}{N}}.$
Then from (48), it follows that
$\displaystyle\frac{R^{\text{MAN}}_{N,NK}(M)}{R^{\text{MAN}}_{N,K}(M)}$
$\displaystyle=\frac{N}{K}\left(1+\frac{KM}{N}\right)$
$\displaystyle=\frac{N}{K}+M$ $\displaystyle\leq 2$
where the last inequality follows since $\frac{N}{K}\leq 1$ and $M\leq 1$.
Case 3: $M\in[1-\frac{1}{K},N]$
In this case, we obtain
$\displaystyle\min\left(\frac{1}{1+KM},\frac{1}{K}\right)=\frac{1}{1+KM},\quad\mbox{
if }1-\frac{1}{K}\leq M\leq N$
and
$\displaystyle\min\left(\frac{1}{1+\frac{KM}{N}},\frac{N}{K}\right)=\frac{1}{1+\frac{KM}{N}},\quad\mbox{
if }1-\frac{1}{K}\leq M\leq N.$
Then from (48), we get the following
$\displaystyle\frac{R^{\text{MAN}}_{N,NK}(M)}{R^{\text{MAN}}_{N,K}(M)}$
$\displaystyle=\frac{N}{1+KM}\left(1+\frac{KM}{N}\right)$
$\displaystyle=\frac{N+KM}{1+KM}$ $\displaystyle=\frac{N-1}{1+KM}+1.$ (49)
Further,
$\displaystyle M\geq 1-\frac{1}{K}$ $\displaystyle\implies KM\geq K-1,$
$\displaystyle\implies KM\geq N-1\;(\text{since $K\geq N$}),$
$\displaystyle\implies 1+KM\geq N,$
$\displaystyle\implies\frac{N-1}{1+KM}\leq\frac{N-1}{N}\leq 1.$
Then, we obtain $\frac{R^{\text{MAN}}_{N,NK}(M)}{R^{\text{MAN}}_{N,K}(M)}\leq
2$ from (49).
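The three cases above can be checked numerically at the corner memory points by evaluating the ratio in (48) directly (an illustrative check; the name `man_ratio` is ours):

```python
def man_ratio(N, K, M):
    """Ratio R^MAN_{N,NK}(M) / R^MAN_{N,K}(M) from (48)."""
    num = N * min(1 / (1 + K * M), 1 / K)
    den = min(1 / (1 + K * M / N), N / K)
    return num / den

for N in range(2, 6):
    for K in range(N, 12):            # regime N <= K of Part 1)
        for i in range(K + 1):        # corner points M = i*N/K
            M = i * N / K
            r = man_ratio(N, K, M)
            assert r <= 2 + 1e-9                  # Cases 1-3
            if M <= 1 - N / K + 1e-12:
                assert abs(r - 1) < 1e-9          # Case 1
```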
Let $R^{\text{MAN, lin}}_{N,K}(M)$ denote the region obtained by linearly
interpolating the adjacent memory points given in (44). Similarly,
$R^{\text{MAN, lin}}_{N,NK}(M)$ denotes the linear interpolation of the points
$R^{\text{MAN}}_{N,NK}(M),M\in\\{0,N/K,2N/K,\ldots,N\\}$. Then, it follows
from the above three cases that
$\displaystyle\frac{R^{\text{MAN,
lin}}_{N,NK}(M)}{R^{\text{MAN, lin}}_{N,K}(M)}\leq\begin{cases}1&\text{ if
}M\leq\left(1-\frac{N}{K}\right)\\\ 2&\text{ if }\left(1-\frac{N}{K}\right)\leq
M\leq\frac{N}{2}.\end{cases}$ (50)
Next we need the following lemma.
###### Lemma 3
For $N\leq K$, the following holds:
$\displaystyle\frac{R^{\text{MAN, lin}}_{N,K}(M)}{R^{*}_{N,K}(M)}\leq
4,\quad\mbox{for }0\leq M\leq\frac{N}{2}.$ (51)
###### Proof.
See Appendix B. ∎
Since $R^{\text{MAN, c}}_{N,NK}(M)\leq R^{\text{MAN, lin}}_{N,NK}(M)$, (47)
follows from (50) and (51). This further implies (46).
Now it remains to prove
$\displaystyle\frac{R^{A}_{N,K}(M)}{R^{*}_{N,K}(M)}$ $\displaystyle\leq
2,\quad\text{ if }M\geq\frac{N}{2}.$
By substituting $r_{2}=\lfloor{NK/2}\rfloor$ in (43) for $N$ files and $NK$
users, we get
$\displaystyle
R^{\text{YMA}}_{N,NK}\left(\frac{\lfloor{NK/2}\rfloor}{K}\right)$
$\displaystyle\leq\frac{{NK\choose\lfloor{NK/2}\rfloor+1}}{{NK\choose\lfloor{NK/2}\rfloor}}$
$\displaystyle=\frac{NK-\lfloor{NK/2}\rfloor}{\lfloor{NK/2}\rfloor+1}$
$\displaystyle=\frac{NK+1}{\lfloor{NK/2}\rfloor+1}-1$
$\displaystyle\leq\frac{NK+1}{NK/2-1/2+1}-1$ $\displaystyle=1.$
Since $\frac{\lfloor{NK/2}\rfloor}{K}\leq\frac{N}{2}$, we have $R^{\text{YMA,
c}}_{N,NK}(N/2)\leq 1$. Also, $R^{\text{YMA, c}}_{N,NK}(N)=0$. Thus, for
$N/2\leq M\leq N$, it follows that
$R^{\text{YMA, c}}_{N,NK}(M)\leq 2\left(1-\frac{M}{N}\right).$ (52)
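The bound (52) can also be confirmed numerically at the corner points of the $(N,NK)$ system with memory at least $N/2$, reusing the expression in (43) (an illustrative check):

```python
from math import comb

def r_yma(N, K, r2):
    """YMA corner-point rate from (43)."""
    return (comb(K, r2 + 1) - comb(K - min(N, K), r2 + 1)) / comb(K, r2)

for N in range(2, 6):
    for K in range(2, 6):
        NK = N * K
        # corner memories for the (N, NK) system are M = r2/K, r2 in [0:NK]
        assert r_yma(N, NK, NK // 2) <= 1 + 1e-9      # the point used for (52)
        for r2 in range(NK // 2, NK + 1):
            M = r2 / K
            if M >= N / 2:
                assert r_yma(N, NK, r2) <= 2 * (1 - M / N) + 1e-9
```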
The cut-set bound on the rate without privacy gives that
$R^{*}_{N,K}(M)\geq\left(1-\frac{M}{N}\right).$ (53)
Dividing (52) by (53), we obtain
$\displaystyle\frac{R^{\text{YMA, c}}_{N,NK}(M)}{R^{*}_{N,K}(M)}\leq
2,\quad\text{ for }M\geq\frac{N}{2}.$
From (45), we have $R^{A}_{N,K}(M)\leq R^{\text{YMA, c}}_{N,NK}(M)$ which then
implies that
$\displaystyle\frac{R^{A}_{N,K}(M)}{R^{*}_{N,K}(M)}\leq 2,\quad\text{ for
}M\geq\frac{N}{2}.$
This completes the proof of Part 1).
#### IV-E2 Proof of Part 2), $K<N$
We denote the memory corresponding to parameters $r=r_{0}$ and $t=t_{0}$ in
(15) by $M_{r_{0},t_{0}}$. First we consider the memory regime $M\leq N/2$.
Substituting $t=1$ in (15), we get the achievability of the following
memory-rate pairs
$\displaystyle(M_{r,1},R)$
$\displaystyle=\left(\frac{N\sum_{s=1}^{NK-1}{NK-1\choose
s-1}r^{NK-s-1}}{\sum_{s=1}^{NK-1}{NK\choose
s}r^{NK-s-1}},\frac{\sum_{s=2}^{NK}[{NK\choose s}-{NK-K\choose
s}]r^{NK-s}}{\sum_{s=1}^{NK-1}{NK\choose s}r^{NK-s-1}}\right)$
$\displaystyle=\left(\frac{N\sum_{s=1}^{NK-1}{NK-1\choose
s-1}r^{NK-s}}{\sum_{s=1}^{NK-1}{NK\choose
s}r^{NK-s}},\frac{\sum_{s=2}^{NK}{NK\choose
s}r^{NK-s+1}-\sum_{s=2}^{NK}{NK-K\choose
s}r^{NK-s+1}}{\sum_{s=1}^{NK-1}{NK\choose s}r^{NK-s}}\right)$
$\displaystyle=\left(\frac{N\sum_{s=0}^{NK-2}{NK-1\choose
s}r^{NK-s-1}}{\sum_{s=1}^{NK-1}{NK\choose
s}r^{NK-s}},\frac{\sum_{s=2}^{NK}{NK\choose
s}r^{NK-s+1}-\sum_{s=2}^{NK}{NK-K\choose
s}r^{NK-s+1}}{\sum_{s=1}^{NK-1}{NK\choose s}r^{NK-s}}\right)$
$\displaystyle=\left(\frac{N(\sum_{s=0}^{NK-1}{NK-1\choose
s}r^{NK-s-1}-1)}{\sum_{s=0}^{NK}{NK\choose
s}r^{NK-s}-r^{NK}-1},\frac{\sum_{s=0}^{NK}{NK\choose
s}r^{NK-s+1}-\sum_{s=0}^{NK}{NK-K\choose
s}r^{NK-s+1}-Kr^{NK}}{\sum_{s=1}^{NK-1}{NK\choose s}r^{NK-s}}\right)$
$\displaystyle=\left(\frac{N((r+1)^{NK-1}-1)}{(r+1)^{NK}-r^{NK}-1},\frac{r((r+1)^{NK}-Kr^{NK-1}-(r+1)^{NK-K}r^{K})}{((r+1)^{NK}-r^{NK}-1)}\right).$
(54)
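The algebraic simplification in (54) can be verified with exact rational arithmetic; the snippet below (an illustrative check, not part of the proof) compares the binomial sums on the first line of (54) against the closed forms on its last line:

```python
from fractions import Fraction
from math import comb

def m_r1(N, K, r):
    """M_{r,1}: first line of (54), as a binomial sum."""
    NK = N * K
    num = N * sum(comb(NK - 1, s - 1) * r**(NK - s - 1) for s in range(1, NK))
    den = sum(comb(NK, s) * r**(NK - s - 1) for s in range(1, NK))
    return Fraction(num, den)

def r_r1(N, K, r):
    """R: first line of (54), as a binomial sum."""
    NK = N * K
    num = sum((comb(NK, s) - comb(NK - K, s)) * r**(NK - s) for s in range(2, NK + 1))
    den = sum(comb(NK, s) * r**(NK - s - 1) for s in range(1, NK))
    return Fraction(num, den)

for N in range(2, 5):
    for K in range(2, 5):
        for r in range(1, 4):
            NK = N * K
            # closed forms on the last line of (54)
            assert m_r1(N, K, r) == Fraction(N * ((r + 1)**(NK - 1) - 1),
                                             (r + 1)**NK - r**NK - 1)
            assert r_r1(N, K, r) == Fraction(
                r * ((r + 1)**NK - K * r**(NK - 1) - (r + 1)**(NK - K) * r**K),
                (r + 1)**NK - r**NK - 1)
```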
We first show that $M_{r,1}$ in (54) satisfies the following
$\displaystyle M_{r,1}$
$\displaystyle=\frac{N((r+1)^{NK-1}-1)}{(r+1)^{NK}-r^{NK}-1}$
$\displaystyle=\frac{N}{r+1}\left(1-\frac{r-r^{NK}}{(r+1)^{NK}-r^{NK}-1}\right)$
$\displaystyle\geq\frac{N}{r+1}$ (55)
where the last inequality follows since $r^{NK}\geq r$ for $r\geq 1$, so that
$\left(1-\frac{r-r^{NK}}{(r+1)^{NK}-r^{NK}-1}\right)\geq 1$. Using the fact
that all points on the line joining $(0,K)$ and $(M_{r,1},R)$ are also
achievable, for $M\leq M_{r,1}$ we get
$\displaystyle R^{BC}_{N,K}(M)$
$\displaystyle\leq\left(\frac{R-K}{M_{r,1}}\right)M+K$ (56)
$\displaystyle=\left(\frac{(r+1)^{NK}r-(r+1)^{NK-K}r^{K+1}-K(r+1)^{NK}+K}{N((r+1)^{NK-1}-1)}\right)M+K.$
(57)
Now we substitute $r=K/s-1$ for some integer $s$ in the interval
$[1,\lfloor{K/2}\rfloor]$. Note that $Ns/K=N/(r+1)\leq M_{r,1}$, where the
inequality follows from (55). Thus, (57) holds for $M=Ns/K$ and we obtain
$\displaystyle R^{BC}_{N,K}(Ns/K)$
$\displaystyle\leq\left(\frac{(\frac{K}{s})^{NK}\left(\frac{K}{s}-1\right)-(\frac{K}{s})^{NK-K}\left(\frac{K}{s}-1\right)^{K+1}-K(\frac{K}{s})^{NK}+K}{N((\frac{K}{s})^{NK-1}-1)}\right)\frac{Ns}{K}+K$
$\displaystyle=\left(\frac{K^{NK}(K-s)-K^{NK-K}(K-s)^{K+1}-sK^{NK+1}+Ks^{NK+1}}{N(K^{NK-1}s-s^{NK})}\right)\frac{N}{K}+K$
$\displaystyle=\left(\frac{K^{NK-1}(K-s)-K^{NK-K-1}(K-s)^{K+1}-sK^{NK}+s^{NK+1}}{(K^{NK-1}s-s^{NK})}\right)+K$
$\displaystyle=\left(\frac{K^{NK-1}(K-s)-K^{NK-K-1}(K-s)^{K+1}-Ks^{NK}+s^{NK+1}}{(K^{NK-1}s-s^{NK})}\right).$
(58)
Note that $R^{\text{YMA}}_{N,K}(Ns/K)=(K-s)/(s+1)$. Dividing (58) by
$R^{\text{YMA}}_{N,K}(Ns/K)$ yields
$\displaystyle\frac{R^{BC}_{N,K}(Ns/K)}{R^{\text{YMA}}_{N,K}(Ns/K)}$
$\displaystyle=(s+1)\left(\frac{K^{NK-1}(K-s)-K^{NK-K-1}(K-s)^{K+1}-Ks^{NK}+s^{NK+1}}{(K^{NK-1}s-s^{NK})(K-s)}\right)$
$\displaystyle=(s+1)\left(\frac{K^{NK-1}-K^{NK-K-1}(K-s)^{K}-s^{NK}}{(K^{NK-1}s-s^{NK})}\right)$
$\displaystyle\leq\frac{(s+1)}{s}\left(\frac{K^{NK-1}-K^{NK-K-1}(K-s)^{K}-s^{NK-1}}{(K^{NK-1}-s^{NK-1})}\right)$
$\displaystyle=\frac{(s+1)}{s}\left(1-\frac{K^{NK-K-1}(K-s)^{K}}{(K^{NK-1}-s^{NK-1})}\right)$
$\displaystyle\leq\frac{(s+1)}{s}\left(1-\frac{K^{NK-K-1}(K-s)^{K}}{K^{NK-1}}\right)$
$\displaystyle=\frac{(s+1)}{s}\left(1-\left(1-\frac{s}{K}\right)^{K}\right).$
(59)
Now we need to compute the maximum value of the expression on the RHS of (59)
for $K\geq 2$ and $1\leq s\leq\lfloor{K/2}\rfloor$. Note that both $s$ and $K$
are integers. For fixed $s$, any $K\geq 2s$ satisfies all constraints. Observe
that for fixed $s$ the function $(1-\frac{s}{K})^{K}$ is increasing in $K$.
Thus, the RHS of (59) is decreasing in $K$. Since we want to compute the
maximum, we substitute $K=2s$. Thus, it follows that
$\displaystyle\frac{R^{BC}_{N,K}(Ns/K)}{R^{\text{YMA}}_{N,K}(Ns/K)}$
$\displaystyle\leq\frac{(s+1)}{s}\left(1-\left(\frac{1}{2}\right)^{2s}\right).$
(60)
The expression on the RHS of (60) takes the value $3/2$ when $s=1$. For $s\geq
2$, $(s+1)/s\leq 3/2$. So the maximum is $3/2$, attained at $s=1,K=2$. Thus, we
have
$\frac{R^{BC}_{N,K}(Ns/K)}{R^{\text{YMA}}_{N,K}(Ns/K)}\leq 3/2\qquad\forall
s\in\\{1,2,\ldots,\lfloor{K/2}\rfloor\\}.$ (61)
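The chain of bounds leading to (61) can be double-checked by evaluating the exact expression in (58) with rational arithmetic (a sketch under the stated regime $N>K$, $1\leq s\leq\lfloor K/2\rfloor$; the function name is ours):

```python
from fractions import Fraction

def rbc_upper(N, K, s):
    """RHS of (58): upper bound on R^BC_{N,K}(Ns/K)."""
    NK = N * K
    num = (K**(NK - 1) * (K - s) - K**(NK - K - 1) * (K - s)**(K + 1)
           - K * s**NK + s**(NK + 1))
    den = K**(NK - 1) * s - s**NK
    return Fraction(num, den)

for K in range(2, 5):
    for N in range(K + 1, K + 3):        # Part 2) regime: N > K
        for s in range(1, K // 2 + 1):
            ryma = Fraction(K - s, s + 1)        # R^YMA_{N,K}(Ns/K)
            assert rbc_upper(N, K, s) / ryma <= Fraction(3, 2)
```

For example, with $N=3$, $K=2$, $s=1$ the bound in (58) evaluates to $23/31$, so the ratio is $46/31\approx 1.48\leq 3/2$.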
Substituting $t=1$ and $r=1$ in (15), we get the following memory-rate pair
$(M_{1,1},R)=\left(\frac{N}{2},\frac{2^{NK}-2^{NK-K}-K}{2^{NK}-2}\right).$
We know that $R^{\text{YMA}}_{N,K}(N/2)\geq K/(K+2)$. Thus,
$\displaystyle\frac{R^{BC}_{N,K}(N/2)}{R^{\text{YMA}}_{N,K}(N/2)}$
$\displaystyle\leq\frac{(K+2)(2^{NK}-2^{NK-K}-K)}{K(2^{NK}-2)}$
$\displaystyle\leq\frac{(K+2)}{K}\left(\frac{(2^{NK}-2^{NK-K}-2)}{(2^{NK}-2)}\right)$
$\displaystyle=\frac{(K+2)}{K}\left(1-\frac{2^{NK-K}}{(2^{NK}-2)}\right)$
$\displaystyle\leq\frac{(K+2)}{K}\left(1-\frac{1}{2^{K}}\right).$ (62)
The maximum value of the expression on the RHS of (62) is $3/2$ and is
attained at $K=2$. The analysis for computing this maximum is exactly the same
as the one we used for (60). Thus,
$\frac{R^{BC}_{N,K}(N/2)}{R^{\text{YMA}}_{N,K}(N/2)}\leq 3/2.$ (63)
It was shown in [5] that
$\frac{R^{\text{YMA}}_{N,K}(M)}{R^{*}_{N,K}(M)}\leq 2.$ (64)
For $M\leq N/2$, $R^{\text{YMA}}_{N,K}(M)$ is the linear interpolation of the
points $R^{\text{YMA}}_{N,K}(M^{\prime})$ where
$M^{\prime}\in\\{0,N/K,2N/K,\ldots,N/2\\}$. Thus, using (61), (63) and (64),
we conclude that,
$\frac{R^{BC}_{N,K}(M)}{R^{*p}_{N,K}(M)}\leq 3\quad\text{for }M\leq N/2.$ (65)
Now let us consider the memory regime $M\geq N/2$. All memory-rate points on
the line joining $(N/2,R^{BC}_{N,K}(N/2))$ and $(N,0)$ are achievable.
Moreover from (IV-E2), it is clear that $R^{BC}_{N,K}(N/2)\leq 1$. So,
$R^{BC}_{N,K}(M)\leq 2-\frac{2}{N}M,\quad\text{for }M\geq N/2.$ (66)
Using the cut-set bound, we have the following lower bound on the non-private rate
$R^{*}_{N,K}(M)\geq\left(1-\frac{M}{N}\right).$ (67)
Since $R^{*p}_{N,K}(M)\geq R^{*}_{N,K}(M)$, from (66) and (67), it follows
that
$\frac{R^{BC}_{N,K}(M)}{R^{*p}_{N,K}(M)}\leq 2,\quad\text{for }M\geq N/2.$
This completes the proof of Part 2).
#### IV-E3 Proof of Part 3)
On substituting $r=NK-K$ and $r=NK-K+1$ in (1) we get memory-rate trade-off
points $(\frac{N(NK-K)}{NK-K+1},\frac{1}{NK-K+1})$ and $(N,0)$, respectively.
Observe that both these points lie on the line given by (67). This shows that
$R^{*p}_{N,K}(M)=R^{*}_{N,K}(M)$ for $M\geq\frac{N(NK-K)}{NK-K+1}$.
### IV-F Proof of Converse for Theorem 6
As discussed in Section III, any $(M,R)$ pair that is achievable under no
privacy requirement needs to satisfy the first and third inequalities in (21)
for $N=K=2$ [1]. Similarly, for $N>2$ and $K=2$, any feasible $(M,R)$ pair
under no privacy constraint is required to satisfy the first and third
inequalities in (IV-G2) [34]. Substituting $N=2$ in the second inequality of
(IV-G2) gives us the second inequality of (21). Thus, we only need to prove
that the inequality $3M+(N+1)R\geq 2N+1$ holds for $N\geq 2$ and $K=2$ and we
give a proof for the same in this subsection. To show that any feasible
$(M,R)$ pair satisfies this inequality, we use the following lemma on some
conditional distributions.
###### Lemma 4
Let $\tilde{k}=(k+1)\mod 2$ for $k=0,1$. Then for all
$i\in[0:N-1],i^{\prime}\in[0:N-1]$ and $j\in[0:N-1]$ any demand-private scheme
for $K=2$ satisfies the following for user $k$, with $k\in\\{0,1\\}$ :
$\displaystyle(X,Z_{k},W_{j}|D_{k}=j)$
$\displaystyle\sim(X,Z_{k},W_{j}|D_{\tilde{k}}=i,D_{k}=j)$
$\displaystyle\sim(X,Z_{k},W_{j}|D_{\tilde{k}}=i^{\prime},D_{k}=j).$ (68)
###### Proof.
See Appendix C. ∎
Throughout this proof, for simplicity, we denote $(X|D_{0}=d_{0},D_{1}=d_{1})$
by $X_{d_{0},d_{1}}$. We also define
$W_{[0:N-1]}=(W_{0},W_{1},\ldots,W_{N-1})$, $j_{i}=(j\oplus i)$ mod $N$ and
$X^{\prime}_{j}=(X_{j,j_{1}},X_{j,j_{2}},\ldots,X_{j,j_{N-1}})$. Then, we have
$\displaystyle\sum_{j=0}^{N-1}(NH(Z_{0})+H(Z_{1})+2H(X_{j,j})+\sum_{i\neq
j}H(X_{j,i}))$
$\displaystyle\geq\sum_{j=0}^{N-1}(H(Z_{0},X_{j,j})+H(Z_{1},X_{j,j})+\sum_{i\neq
j}H(X_{j,i},Z_{0}))$
$\displaystyle\overset{\mathrm{(a)}}{=}\sum_{j=0}^{N-1}(H(Z_{0},X_{j,j},W_{j})+H(Z_{1},X_{j,j},W_{j})+\sum_{i\neq
j}H(X_{j,i},Z_{0},W_{j}))$
$\displaystyle=\sum_{j=0}^{N-1}(H(Z_{1},X_{j,j},W_{j})+H(X_{j,j}|Z_{0},W_{j})+\sum_{i\neq
j}H(X_{j,i}|Z_{0},W_{j})+NH(Z_{0},W_{j}))$
$\displaystyle\geq\sum_{j=0}^{N-1}(H(Z_{1},X_{j,j},W_{j})+H(X^{\prime}_{j},X_{j,j},Z_{0},W_{j})+(N-1)H(Z_{0},W_{j}))$
$\displaystyle=\sum_{j=0}^{N-1}(H(Z_{1}|X_{j,j},W_{j})+H(X^{\prime}_{j},Z_{0}|X_{j,j},W_{j})+(N-1)H(Z_{0},W_{j})+2H(X_{j,j},W_{j}))$
$\displaystyle\geq\sum_{j=0}^{N-1}(H(X^{\prime}_{j},Z_{0},Z_{1},X_{j,j},W_{j})+(N-1)H(Z_{0},W_{j})+H(X_{j,j},W_{j}))$
$\displaystyle\overset{\mathrm{(b)}}{=}\sum_{j=0}^{N-1}(H(X^{\prime}_{j},Z_{0},Z_{1},X_{j,j},W_{[0:N-1]})+(N-1)H(Z_{0},W_{j})+H(X_{j_{1},j},W_{j}))$
$\displaystyle\geq\sum_{j=0}^{N-1}(H(W_{[0:N-1]})+(N-2)H(Z_{0},W_{j})+H(Z_{0}|W_{j})+H(X_{j_{1},j}|W_{j})+2H(W_{j}))$
$\displaystyle\geq\sum_{j=0}^{N-1}(H(W_{[0:N-1]})+(N-2)H(Z_{0},W_{j})+H(Z_{0},X_{j_{1},j},W_{j})+H(W_{j}))$
$\displaystyle\overset{\mathrm{(c)}}{\geq}\sum_{j=0}^{N-1}(H(W_{[0:N-1]})+(N-2)H(Z_{0},W_{j})+H(Z_{0},W_{j},W_{j_{1}})+H(W_{j}))$
$\displaystyle=N(N+1)F+\sum_{j=0}^{N-1}(N-2)H(Z_{0},W_{j})+\sum_{j=0}^{N-1}H(Z_{0},W_{j},W_{j_{1}})$
$\displaystyle\overset{\mathrm{(d)}}{=}N(N+1)F+\sum_{j=0}^{N-1}\sum_{i=2}^{N-1}H(Z_{0},W_{j_{i}})+\sum_{j=0}^{N-1}H(Z_{0},W_{j},W_{j_{1}})$
$\displaystyle\geq
N(N+1)F+\sum_{j=0}^{N-1}(\sum_{i=2}^{N-1}H(W_{j_{i}}|Z_{0})+H(W_{j},W_{j_{1}}|Z_{0})+(N-1)H(Z_{0})))$
$\displaystyle\geq N(N+1)F+\sum_{j=0}^{N-1}(H(W_{[0:N-1]})+(N-2)H(Z_{0}))$
$\displaystyle=N(2N+1)F+N(N-2)H(Z_{0})$ (69)
where (a) and (c) follow from the decodability criteria; (b) follows from the
decodability criteria and Lemma 4; (d) follows by rearranging the terms of the
first summation and using the definition $j_{i}=(j\oplus i)$ mod $N$. Cancelling
out the common terms of $H(Z_{0})$ from both sides in (69), we obtain
$\displaystyle 2NH(Z_{0})+NH(Z_{1})+\sum_{j=0}^{N-1}(2H(X_{j,j})+\sum_{i\neq
j}H(X_{j,i}))\geq N(2N+1)F$
which implies that
$\displaystyle 3M+(N+1)R\geq 2N+1.$
This follows since $H(Z_{i})\leq MF$ and $H(X_{j,i})\leq RF$ for
$i\in[0:N-1]$, $j\in[0:N-1]$ by definition. The proof is thus complete.
### IV-G Proof of Achievability for Theorem 6
#### IV-G1 Achievability of the region in (21) for $N=K=2$
We show that any memory-rate pair $(M,R)$ satisfying the below inequalities is
achievable under demand privacy for $N=K=2$:
$\displaystyle 2M+R\geq 2,\quad 3M+3R\geq 5,\quad M+2R\geq 2.$
To this end, we use the concept of _demand type_ introduced in [34].
###### Definition 2 (Demand Types)
In the $(N,K)$-non-private coded caching problem, for a given demand vector
$\bar{d}$, let $t_{i}$ denote the number of users requesting file $i$, where
$i=0,\ldots,N-1$. The demand type of $\bar{d}$, denoted by $T(\bar{d})$, is
defined as the $N$-length vector $T(\bar{d}):=\bar{t}=(t_{0},\ldots,t_{N-1})$.
The type class of $\bar{t}$ is defined as
$\mbox{$\cal{D}$}_{\bar{t}}=\\{\mbox{$\bar{d}$}|T(\mbox{$\bar{d}$})=\bar{t}\\}$.
Clearly, the restricted demand subset $\cal{D}_{RS}$ defined in Definition 1
is a subset of the type class $(K,K,\ldots,K)$, i.e.,
$\displaystyle\mbox{$\cal{D}_{RS}$}\subseteq\mbox{$\cal{D}$}_{(K,K,\ldots,K)}.$
(70)
Indeed, for $\mbox{$\cal{D}$}_{1}\subset\mbox{$\cal{D}$}_{2}$, a
$\mbox{$\cal{D}$}_{2}$-non-private scheme is also a
$\mbox{$\cal{D}$}_{1}$-non-private scheme. Thus, we have the following
proposition.
###### Proposition 1
If there exists an $(N,NK,M,R)$ $\mbox{$\cal{D}$}_{(K,K,\ldots,K)}$-non-
private scheme, then there exists an $(N,K,M,R)$-private scheme.
###### Proof.
As mentioned before, we have
$\mbox{$\cal{D}_{RS}$}\subseteq\mbox{$\cal{D}$}_{(K,K,\ldots,K)}$. So, an
$(N,NK,M,R)$ $\mbox{$\cal{D}$}_{(K,K,\ldots,K)}$-non-private scheme is also an
$(N,NK,M,R)$ $\cal{D}_{RS}$-non-private scheme. Then, the proposition follows
from Theorem 1. ∎
It was shown in [34, Proposition 7] that the region given by (21) is
achievable for demand type $(2,2)$ in the $N=2,K=4$ coded caching problem
without demand privacy. So it follows from Proposition 1 that the same region
is achievable under demand privacy for $N=K=2$.
###### Remark 5
The corner points of the region in (21) are $(0,2)$,
$(\frac{1}{3},\frac{4}{3})$, $(\frac{4}{3},\frac{1}{3})$ and $(2,0)$. The
achievability of the pairs $(0,2)$ and $(2,0)$ for Type $(2,2)$ in $N=2,K=4$
non-private coded caching problem is trivial. The achievability of the pairs
$(\frac{1}{3},\frac{4}{3})$ and $(\frac{4}{3},\frac{1}{3})$ were shown in
[34]. In Example 1, we showed that using the non-private scheme that achieves
the memory-rate pair $(\frac{1}{3},\frac{4}{3})$, we can achieve the same pair
with demand privacy for $N=K=2$. Similarly, we can also achieve the pair
$(\frac{4}{3},\frac{1}{3})$. We further note that the pair
$(\frac{4}{3},\frac{1}{3})$ is also achievable by the MDS scheme in [27].
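The corner points quoted in the remark follow from intersecting adjacent boundary lines of (21); a quick rational-arithmetic check (illustrative, via Cramer's rule):

```python
from fractions import Fraction as F

def intersect(a1, b1, c1, a2, b2, c2):
    """Solve a1*M + b1*R = c1 and a2*M + b2*R = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    return F(c1 * b2 - c2 * b1, det), F(a1 * c2 - a2 * c1, det)

# 2M + R = 2 meets 3M + 3R = 5 at (1/3, 4/3)
assert intersect(2, 1, 2, 3, 3, 5) == (F(1, 3), F(4, 3))
# 3M + 3R = 5 meets M + 2R = 2 at (4/3, 1/3)
assert intersect(3, 3, 5, 1, 2, 2) == (F(4, 3), F(1, 3))
```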
#### IV-G2 Achievability of the region in (IV-G2) for $N>K=2$
We show that any rate-memory pair satisfying the below inequalities is
achievable under demand privacy for $N>2$ and $K=2$:
$\displaystyle 3M+NR\geq 2N,\quad 3M+(N+1)R\geq 2N+1,\quad M+NR\geq N.$
The corner points of this rate-memory curve are
$(0,2),(\frac{N}{3},1),(\frac{N^{2}}{2N-1},\frac{N-1}{2N-1})$ and $(N,0)$. The
achievability of $(0,2)$ and $(N,0)$ was shown in Theorem 3 while that of
$(\frac{N}{3},1)$ and $(\frac{N^{2}}{2N-1},\frac{N-1}{2N-1})$ is proved next.
The achievability of the entire region then follows by memory sharing.
Throughout this subsection, for simplicity we define $\tilde{k}=(k+1)$ mod 2,
where $k\in\\{0,1\\}$.
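Before the achievability arguments, one can verify symbolically that the two nontrivial corner points indeed lie on the stated boundary lines (an illustrative check in exact arithmetic):

```python
from fractions import Fraction as F

for N in range(3, 12):                       # regime N > K = 2
    M1, R1 = F(N, 3), F(1)
    assert 3 * M1 + N * R1 == 2 * N          # 3M + NR = 2N
    assert 3 * M1 + (N + 1) * R1 == 2 * N + 1
    M2, R2 = F(N * N, 2 * N - 1), F(N - 1, 2 * N - 1)
    assert 3 * M2 + (N + 1) * R2 == 2 * N + 1
    assert M2 + N * R2 == N                  # M + NR = N
```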
Achievability of $(\frac{N}{3},1)$: Now we describe Scheme D for $N>2$ files
and 2 users which generalizes the ideas discussed in Example 4. Scheme D
achieves rate 1 for memory $\frac{N}{3}$. We first give an outline of Scheme D
before describing it in detail.
In Scheme D, the server partitions each file into three symbols of equal size.
The first symbols of all files are cached at user 0 and the second symbols are
cached at user 1. So, each user has $N$ symbols of $F/3$ bits in her cache.
The server randomly permutes all these $N$ symbols before caching at each
user. Thus, the users do not know the position of each symbol in their own
cache. In the delivery phase, the server reveals the position of the symbol of
the demanded file that is available in her cache, through auxiliary
transmission. Thus, she needs two more symbols to recover the entire file,
which are obtained from the broadcast. The main payload of the broadcast
consists of three symbols each of size $F/3$ bits. Each user uses two out of
these three symbols to recover its demanded file in the two cases of
$D_{0}=D_{1}$ and $D_{0}\neq D_{1}$. Out of the two symbols that each user
uses to recover the file, one symbol is coded (XOR-ed with a symbol available
in the cache) and the other one is uncoded in both the cases. The remaining
symbol in the broadcast, which the user does not use for decoding, appears as
a sequence of random bits to the user. This symmetry helps in achieving
privacy. Next, we formally describe Scheme D.
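The caching, delivery and decoding steps of Scheme D can be prototyped in a few lines. The sketch below is our own illustrative simulation (not the paper's formal description): file parts are modelled as 32-bit integers, and the negligible-rate auxiliary transmissions are folded into the revealed indices; it checks decodability for both demand cases.

```python
import random

def scheme_d(N, D0, D1, seed=0):
    rng = random.Random(seed)
    # Each file W_i = (W_{i,0}, W_{i,1}, W_{i,2}); parts modelled as 32-bit ints.
    W = [[rng.getrandbits(32) for _ in range(3)] for _ in range(N)]
    # Caching: user k stores part k of every file, permuted by pi_k.
    pi = [rng.sample(range(N), N), rng.sample(range(N), N)]
    cache = [[0] * N for _ in range(2)]
    for k in range(2):
        for i in range(N):
            cache[k][pi[k][i]] = W[i][k]
    # Delivery: main payload X' and the indices carried by J_1, J_2, J_3.
    m = (D0 + 1) % N
    if D0 != D1:
        Xp = (W[D0][1] ^ W[D1][0], W[D0][2], W[D1][2])
        p = (D1, D0)                 # side-information file index p_k, as in (81)
        T = ((0, 1), (0, 2))         # (T_{k,1}, T_{k,2}) before pi_2, as in (75)
    else:
        Xp = (W[D0][1] ^ W[m][0], W[D0][2], W[D0][0] ^ W[m][1])
        p = (m, m)
        T = ((0, 1), (2, 1))
    pi2 = rng.sample(range(3), 3)
    Xpp = [0] * 3
    for i in range(3):
        Xpp[pi2[i]] = Xp[i]          # broadcast pi_2(X')
    # Decoding: user k learns pi_k(D_k), pi_k(p_k) and pi_2(T_{k,j}) from
    # J_1, J_3, J_2 respectively (the one-time pads S, P cancel out).
    decoded = []
    for k, Dk in enumerate((D0, D1)):
        own = cache[k][pi[k][Dk]]                 # W_{Dk,k}
        helper = cache[k][pi[k][p[k]]]            # cached side-information symbol
        coded = Xpp[pi2[T[k][0]]] ^ helper        # strips the XOR-ed cached part
        uncoded = Xpp[pi2[T[k][1]]]
        decoded.append([own, coded, uncoded] if k == 0 else [coded, own, uncoded])
    return W, decoded

for d0 in range(3):
    for d1 in range(3):
        W, dec = scheme_d(3, d0, d1, seed=7)
        assert dec[0] == W[d0] and dec[1] == W[d1]
```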
Caching: The server breaks each file $W_{i},i\in[0:N-1]$ into 3 disjoint parts
of equal size, i.e., $W_{i}=(W_{i,0},W_{i,1},W_{i,2})$. We define
$Z^{\prime}_{0}$ and $Z^{\prime}_{1}$ as follows:
$\displaystyle Z^{\prime}_{0}$ $\displaystyle:=(W_{i,0})_{i\in[0:N-1]}$
$\displaystyle Z^{\prime}_{1}$ $\displaystyle:=(W_{i,1})_{i\in[0:N-1]}.$
Let $\pi_{0}$ and $\pi_{1}$ be two permutation functions which are independent
and uniformly distributed in the symmetric group of permutations of $[0:N-1]$.
Further, for $k\in\\{0,1\\}$, let
$\displaystyle Z^{\prime\prime}_{k}$
$\displaystyle:=(Z^{\prime\prime}_{k,0},Z^{\prime\prime}_{k,1},\ldots,Z^{\prime\prime}_{k,N-1})$
$\displaystyle=\pi_{k}(Z^{\prime}_{k}).$
The server places $Z^{\prime\prime}_{k}$ in the cache of user $k\in\\{0,1\\}$
along with 4 symbols $(S_{k,1},S_{k,2},P_{k,1},P_{k,2})$ of negligible size,
where
$\displaystyle S_{k,j}\sim unif\\{[0:N-1]\\},$ $\displaystyle P_{k,j}\sim
unif\\{[0:2]\\},\quad\text{for }k\in\\{0,1\\},\;j=1,2.$
These 4 symbols are used in the delivery phase. Thus, the cache of user $k$,
$Z_{k}$ is given by
$Z_{k}=(Z^{\prime\prime}_{k},S_{k,1},S_{k,2},P_{k,1},P_{k,2}).$
Observe that $Z^{\prime}_{k}$ consists of $N$ symbols each containing
$\frac{F}{3}$ bits, which gives (here $o(F)$ denotes some function of $F$ such
that $\lim_{F\to\infty}\frac{o(F)}{F}=0$)
$\displaystyle len(Z_{k})$ $\displaystyle=\frac{NF}{3}+o(F)$
$\displaystyle=MF+o(F).$
Note that in this caching scheme, the server does not fully reveal the
permutation functions $\pi_{0}$ and $\pi_{1}$ to any user.
Delivery: To describe the delivery, we first define
$X^{\prime}:=\left\\{\begin{array}[]{lcl}(W_{D_{0},1}\oplus
W_{D_{1},0},W_{D_{0},2},W_{D_{1},2})&\text{if }D_{0}\neq D_{1}\\\ \\\
(W_{D_{0},1}\oplus W_{m,0},W_{D_{0},2},W_{D_{0},0}\oplus W_{m,1})&\text{if
}D_{0}=D_{1}\end{array}\right.$
where $m=(D_{0}+1)$ mod $N$. The server picks a permutation function $\pi_{2}$
uniformly at random from the symmetric group of permutations of $[0:2]$ and
includes $\pi_{2}(X^{\prime})$ in the transmission. The permutation $\pi_{2}$
is not fully revealed to any of the users. In addition to
$\pi_{2}(X^{\prime})$, to recover the demanded files, users need some more
information, which can be delivered with negligible rate. The entire broadcast
is given by
$\displaystyle X$ $\displaystyle=(\pi_{2}(X^{\prime}),J_{1},J_{2},J_{3})$
$\displaystyle=(X^{\prime\prime},J_{1},J_{2},J_{3})$
$\displaystyle=(X^{\prime\prime}_{0},X^{\prime\prime}_{1},X^{\prime\prime}_{2},J_{1},J_{2},J_{3}).$
Here, $(J_{1},J_{2},J_{3})$ are auxiliary transmissions which contain the
extra information. The auxiliary transmission $J_{1}$ is given by
$J_{1}=(J_{1,0},J_{1,1})$, where $J_{1,k}$ is defined as
$\displaystyle J_{1,k}:=S_{k,1}\oplus\pi_{k}(D_{k}).$ (71)
Recall that $\pi_{k}(D_{k})$ gives the position of $W_{D_{k},k}$ in
$Z^{\prime\prime}_{k}$ while $S_{k,1}$, $k\in\\{0,1\\}$ is a part of $Z_{k}$.
To define auxiliary transmission $J_{2}$, we first define random variables
$T_{k,j}$ for $k\in\\{0,1\\}$, $j\in\\{1,2\\}$ as follows:
$\displaystyle(T_{0,1},T_{0,2},T_{1,1},T_{1,2}):=\left\\{\begin{array}[]{lcl}(\pi_{2}(0),\pi_{2}(1),\pi_{2}(0),\pi_{2}(2))&\text{if
}D_{0}\neq D_{1}\\\ \\\ (\pi_{2}(0),\pi_{2}(1),\pi_{2}(2),\pi_{2}(1))&\text{if
}D_{0}=D_{1}.\end{array}\right.$ (75)
Note that $\pi_{2}(i)$ gives the position of the $i$-th symbol of $X^{\prime}$
in $\pi_{2}(X^{\prime})$. Auxiliary transmission $J_{2}$ is given by
$J_{2}=(J_{2,0},J_{2,1})=(J_{2,0,0},J_{2,0,1},J_{2,1,0},J_{2,1,1})$, where
$J_{2,k,j}$, $j\in\\{0,1\\}$ is defined as
$\displaystyle J_{2,k,j}$ $\displaystyle:=P_{k,j+1}\oplus T_{k,j+1}.$ (76)
Recall that symbols $P_{k,1}$ and $P_{k,2}$ are part of $Z_{k}$.
Auxiliary transmission $J_{3}$ is given by $J_{3}=(J_{3,0},J_{3,1})$, where
$J_{3,k}$ is defined as
$\displaystyle J_{3,k}:=S_{k,2}\oplus\pi_{k}(p_{k})$ (77)
and $p_{k}$ is given as
$\displaystyle p_{k}:=\left\\{\begin{array}[]{lcl}D_{\tilde{k}}&\text{if
}D_{0}\neq D_{1}\\\ \\\ m&\text{if }D_{0}=D_{1}.\end{array}\right.$ (81)
Recall that $m=(D_{0}+1)$ mod $N$, $\tilde{k}=(k+1)$ mod 2, and $S_{0,2}$ and
$S_{1,2}$ are part of $Z_{0}$ and $Z_{1}$, respectively.
Observe that $X^{\prime}$ contains 3 symbols, each of size $\frac{F}{3}$ bits,
which gives
$\displaystyle len(X)$ $\displaystyle=\frac{3F}{3}+o(F)$
$\displaystyle=RF+o(F).$
Decoding: We discuss the decoding of file $W_{D_{k}}$ for user $k=0,1$. User
$k$ first recovers $\pi_{k}(D_{k}),T_{k,1},T_{k,2}$ and $\pi_{k}(p_{k})$ from
$J_{1},J_{2}$ and $J_{3}$ as follows:
$\displaystyle J_{1,k}\oplus S_{k,1}=\pi_{k}(D_{k})\oplus S_{k,1}\oplus
S_{k,1}=\pi_{k}(D_{k})\qquad\text{(using \eqref{eq:s1J1})}$ $\displaystyle
J_{2,k,0}\oplus P_{k,1}=T_{k,1}\oplus P_{k,1}\oplus
P_{k,1}=T_{k,1}\qquad\qquad\text{(using \eqref{eq:s1J2})}$ $\displaystyle
J_{2,k,1}\oplus P_{k,2}=T_{k,2}\oplus P_{k,2}\oplus
P_{k,2}=T_{k,2}\qquad\qquad\text{(using \eqref{eq:s1J2})}$ $\displaystyle
J_{3,k}\oplus S_{k,2}=\pi_{k}(p_{k})\oplus S_{k,2}\oplus
S_{k,2}=\pi_{k}(p_{k})\qquad\enspace\text{(using \eqref{eq:s1J3})}.$
User $k$ decodes the 3 parts of $W_{D_{k}}$ as follows:
$\displaystyle\widehat{W}_{D_{k},k}=Z^{\prime\prime}_{k,\pi_{k}(D_{k})}$
$\displaystyle\widehat{W}_{D_{k},2}=X^{\prime\prime}_{T_{k,2}}$
$\displaystyle\widehat{W}_{D_{k},\tilde{k}}=X^{\prime\prime}_{T_{k,1}}\oplus
Z^{\prime\prime}_{k,\pi_{k}(p_{k})}.$
User $k$ can recover each of $\widehat{W}_{D_{k},k},\widehat{W}_{D_{k},2}$ and
$\widehat{W}_{D_{k},\tilde{k}}$, where $\tilde{k}=(k+1)$ mod 2, because she
has access to each symbol $X^{\prime\prime}_{i}$, $i\in[0:2]$ from the
broadcast while all symbols $Z^{\prime\prime}_{k,j}$, $j\in[0:N-1]$ are
available in her cache.
Observe that $\widehat{W}_{D_{k},k}={W}_{D_{k},k}$ and
$\widehat{W}_{D_{k},2}={W}_{D_{k},2}$ by definition of $\pi_{k}(D_{k})$ and
$T_{k,2}$, respectively. We show that
$\widehat{W}_{D_{k},\tilde{k}}=W_{D_{k},\tilde{k}}$ by considering the
following two cases:
Case 1: $D_{0}\neq D_{1}$
$\displaystyle\widehat{W}_{D_{k},\tilde{k}}$
$\displaystyle=X^{\prime\prime}_{T_{k,1}}\oplus
Z^{\prime\prime}_{k,\pi_{k}(p_{k})}$
$\displaystyle=X^{\prime\prime}_{T_{k,1}}\oplus
Z^{\prime\prime}_{k,\pi_{k}(D_{\tilde{k}})}\qquad\text{(using
\eqref{eq:impobsT})}$ $\displaystyle=W_{D_{k},\tilde{k}}\oplus
W_{D_{\tilde{k}},k}\oplus W_{D_{\tilde{k}},k}$
$\displaystyle=W_{D_{k},\tilde{k}}.$
Case 2: $D_{0}=D_{1}$
$\displaystyle\widehat{W}_{D_{k},\tilde{k}}$
$\displaystyle=X^{\prime\prime}_{T_{k,1}}\oplus
Z^{\prime\prime}_{k,\pi_{k}(p_{k})}$
$\displaystyle=X^{\prime\prime}_{T_{k,1}}\oplus
Z^{\prime\prime}_{k,\pi_{k}(m)}\qquad\text{(using \eqref{eq:impobsT})}$
$\displaystyle=W_{D_{k},\tilde{k}}\oplus W_{m,k}\oplus W_{m,k}$
$\displaystyle=W_{D_{k},\tilde{k}}.$
Having retrieved these 3 segments of $W_{D_{k}}$, user $k$ recovers
$W_{D_{k}}$ by concatenating $W_{D_{k},0},W_{D_{k},1}$ and $W_{D_{k},2}$ in
that order.
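The placement, delivery, and decoding of Scheme D can be checked end-to-end with a short simulation. The sketch below is illustrative, not part of the paper: file parts are modeled as random integers combined by bitwise XOR, the one-time pads on positions are implemented as modular additions (mod $N$ for $S_{k,j}$, mod 3 for $P_{k,j}$), and the names `scheme_d_demo`, `deliver`, and `decode` are our own.

```python
import random

def scheme_d_demo(N=4, part_bits=24, seed=0):
    rng = random.Random(seed)
    # Files W[i] = (W_{i,0}, W_{i,1}, W_{i,2}); parts modeled as part_bits-bit ints.
    W = [[rng.getrandbits(part_bits) for _ in range(3)] for _ in range(N)]

    # Caching: user k stores part k of every file, permuted by pi_k, so that
    # Z''_{k, pi_k(i)} = W_{i,k}; the pads S_{k,1..2}, P_{k,1..2} are also cached.
    pi = [rng.sample(range(N), N) for _ in range(2)]
    Zpp = [[None] * N for _ in range(2)]
    for k in range(2):
        for i in range(N):
            Zpp[k][pi[k][i]] = W[i][k]
    S = [[rng.randrange(N) for _ in range(2)] for _ in range(2)]
    P = [[rng.randrange(3) for _ in range(2)] for _ in range(2)]

    def deliver(D):
        m = (D[0] + 1) % N
        if D[0] != D[1]:   # X', T, and p as in the two cases of the text
            Xp = (W[D[0]][1] ^ W[D[1]][0], W[D[0]][2], W[D[1]][2])
            T, p = [(0, 1), (0, 2)], [D[1], D[0]]
        else:
            Xp = (W[D[0]][1] ^ W[m][0], W[D[0]][2], W[D[0]][0] ^ W[m][1])
            T, p = [(0, 1), (2, 1)], [m, m]
        pi2 = rng.sample(range(3), 3)            # pi2[i] = position of X'_i
        Xpp = [None] * 3
        for i in range(3):
            Xpp[pi2[i]] = Xp[i]
        J1 = [(S[k][0] + pi[k][D[k]]) % N for k in range(2)]                        # (71)
        J2 = [[(P[k][j] + pi2[T[k][j]]) % 3 for j in range(2)] for k in range(2)]   # (76)
        J3 = [(S[k][1] + pi[k][p[k]]) % N for k in range(2)]                        # (77)
        return Xpp, J1, J2, J3

    def decode(k, Xpp, J1, J2, J3):
        pos_Dk = (J1[k] - S[k][0]) % N           # pi_k(D_k)
        Tk1, Tk2 = (J2[k][0] - P[k][0]) % 3, (J2[k][1] - P[k][1]) % 3
        pos_pk = (J3[k] - S[k][1]) % N           # pi_k(p_k)
        parts = [None, None, Xpp[Tk2]]           # W_{D_k,2} directly from broadcast
        parts[k] = Zpp[k][pos_Dk]                # W_{D_k,k} from the cache
        parts[1 - k] = Xpp[Tk1] ^ Zpp[k][pos_pk]  # W_{D_k,~k} by XOR cancellation
        return parts

    for d0 in range(N):
        for d1 in range(N):
            Xpp, J1, J2, J3 = deliver((d0, d1))
            assert decode(0, Xpp, J1, J2, J3) == W[d0]
            assert decode(1, Xpp, J1, J2, J3) == W[d1]
    return True
```

Running `scheme_d_demo()` exercises every demand pair $(d_{0},d_{1})$, including both cases $D_{0}=D_{1}$ and $D_{0}\neq D_{1}$, and confirms that each user reconstructs all three parts of its demanded file.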
Proof of privacy: We now prove that Scheme D is demand-private for user
$k\in\\{0,1\\}$, i.e., (9) holds true for this scheme. Recall that $\tilde{k}$
is defined as $\tilde{k}=(k+1)$ mod 2. Then the following sequence of
equalities holds true:
$\displaystyle I(D_{\tilde{k}};X,Z_{k},D_{k})$
$\displaystyle=I(D_{\tilde{k}};X|Z_{k},D_{k})+I(D_{\tilde{k}};Z_{k}|D_{k})+I(D_{k};D_{\tilde{k}})$
$\displaystyle\overset{\mathrm{(a)}}{=}I(X;D_{\tilde{k}}|Z_{k},D_{k})$
$\displaystyle=I(\pi_{2}(X^{\prime}),J_{1},J_{2},J_{3};D_{\tilde{k}}|\pi_{k}(Z^{\prime}_{k}),S_{k,1},S_{k,2},P_{k,1},P_{k,2},D_{k})$
$\displaystyle\overset{\mathrm{(b)}}{=}I(J_{1},J_{2},J_{3};D_{\tilde{k}}|S_{k,1},S_{k,2},P_{k,1},P_{k,2},D_{k})$
$\displaystyle=I(J_{1,0},J_{1,1},J_{2,0},J_{2,1},J_{3,0},J_{3,1};D_{\tilde{k}}|S_{k,1},S_{k,2},P_{k,1},P_{k,2},D_{k})$
$\displaystyle\overset{\mathrm{(c)}}{=}I(J_{1,k},J_{2,k},J_{3,k};D_{\tilde{k}}|S_{k,1},S_{k,2},P_{k,1},P_{k,2},D_{k})$
$\displaystyle=I((S_{k,1}\oplus\pi_{k}(D_{k})),(P_{k,1}\oplus
T_{k,1},P_{k,2}\oplus
T_{k,2}),(S_{k,2}\oplus\pi_{k}(p_{k}));D_{\tilde{k}}|S_{k,1},S_{k,2},P_{k,1},P_{k,2},D_{k})$
$\displaystyle=I(\pi_{k}(D_{k}),T_{k,1},T_{k,2},\pi_{k}(p_{k});D_{\tilde{k}}|S_{k,1},S_{k,2},P_{k,1},P_{k,2},D_{k})$
$\displaystyle\overset{\mathrm{(d)}}{=}I(\pi_{k}(D_{k}),T_{k,1},T_{k,2},\pi_{k}(p_{k});D_{\tilde{k}}|D_{k})$
(82)
where (a) follows because $Z_{k}$ is independent of $(D_{0},D_{1})$ and also
$D_{0}$ and $D_{1}$ are independent; for (b) note that for any fixed value of
$(J_{1},J_{2},J_{3},D_{k},D_{\tilde{k}},S_{k,1},S_{k,2},P_{k,1},P_{k,2})$, we
have
$\displaystyle(X^{\prime},Z^{\prime}_{k}|J_{1},J_{2},J_{3},D_{k},D_{\tilde{k}},S_{k,1},S_{k,2},P_{k,1},P_{k,2})\sim
unif\\{0,1\\}^{\left(1+\frac{N}{3}\right)F}$
which holds because for both cases $D_{0}\neq D_{1}$ and $D_{0}=D_{1}$, the
symbols in $X^{\prime}$ and $Z_{k}$ are independent. Hence, $X^{\prime},Z_{k}$
and $(J_{1},J_{2},J_{3},D_{k},D_{\tilde{k}},S_{k,1},S_{k,2},P_{k,1},P_{k,2})$
are independent, which gives (b); (c) follows because
$(S_{\tilde{k},1},S_{\tilde{k},2},$ $P_{\tilde{k},1},P_{\tilde{k},2})$ which
are one-time pads for symbols in
$(J_{1,\tilde{k}},J_{2,\tilde{k}},J_{3,\tilde{k}})$ are independent of all
other random variables; (d) follows because
$(S_{k,1},S_{k,2},P_{k,1},P_{k,2})$ are independent of all other random
variables.
Now we show that the RHS of (82) is 0. From the definition in (75), we have
that, for all fixed values of $D_{0}$, $D_{1}$, $\pi_{0}$, $\pi_{1}$, and
$t_{1},t_{2}\in[0:2]$,
$\Pr(T_{k,1}=t_{1},T_{k,2}=t_{2}|D_{0},D_{1},\pi_{0},\pi_{1})=\Pr(T_{k,1}=t_{1},T_{k,2}=t_{2})=\left\\{\begin{array}[]{lr}\frac{1}{6},&\text{if
}t_{1}\neq t_{2}\\\ \\\ 0,&\text{if }t_{1}=t_{2}.\\\ \end{array}\right.$ (83)
Hence, $(T_{k,1},T_{k,2})$ is independent of $(D_{0},D_{1},\pi_{0},\pi_{1})$.
Also, by definition, $(\pi_{k}(D_{k}),\pi_{k}(p_{k}))$ is a
function of $(D_{0},D_{1},\pi_{k})$, which implies the independence of
$(T_{k,1},T_{k,2})$ and $(\pi_{k}(D_{k}),\pi_{k}(p_{k}),D_{0},D_{1})$. Then,
it follows from (82) that
$\displaystyle I(D_{\tilde{k}};X,Z_{k},D_{k})$
$\displaystyle=I(\pi_{k}(D_{k}),\pi_{k}(p_{k});D_{\tilde{k}}|D_{k})$
$\displaystyle=I(\pi_{k}(p_{k});D_{\tilde{k}}|D_{k},\pi_{k}(D_{k}))+I(\pi_{k}(D_{k});D_{\tilde{k}}|D_{k})$
$\displaystyle\overset{\mathrm{(a)}}{=}0$
where (a) follows because for any fixed value of
$(D_{k},D_{\tilde{k}},\pi_{k}(D_{k}))$, we have
$\displaystyle(\pi_{k}(p_{k})|D_{k},D_{\tilde{k}},\pi_{k}(D_{k}))\sim(\pi_{k}(p_{k})|D_{k},\pi_{k}(D_{k}))\sim
unif\\{[0:N-1]\setminus\\{\pi_{k}(D_{k})\\}\\}$
and
$\displaystyle(\pi_{k}(D_{k})|D_{k},D_{\tilde{k}})\sim(\pi_{k}(D_{k})|D_{k})\sim
unif\\{[0:N-1]\\}.$
This completes the proof of privacy.
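The key step, that $(T_{k,1},T_{k,2})$ is uniform over ordered pairs of distinct positions regardless of whether the demands coincide, can be verified by enumerating all six permutations $\pi_{2}$. The sketch below (with the illustrative helper name `t_pair_counts`) tallies the pairs defined in (75):

```python
from itertools import permutations

def t_pair_counts(k, equal_demands):
    # Tally (T_{k,1}, T_{k,2}) from (75) over all 6 equally likely choices of pi2.
    counts = {}
    for pi2 in permutations(range(3)):
        if equal_demands:
            T = [(pi2[0], pi2[1]), (pi2[2], pi2[1])]   # case D_0 = D_1
        else:
            T = [(pi2[0], pi2[1]), (pi2[0], pi2[2])]   # case D_0 != D_1
        counts[T[k]] = counts.get(T[k], 0) + 1
    return counts

# Each ordered pair of distinct positions occurs exactly once (probability 1/6),
# and equal pairs never occur -- matching (83) in both demand cases.
for k in (0, 1):
    for eq in (False, True):
        c = t_pair_counts(k, eq)
        assert len(c) == 6 and all(v == 1 for v in c.values())
        assert all(t1 != t2 for (t1, t2) in c)
```

Since the tallies are identical in the two cases, $(T_{k,1},T_{k,2})$ reveals nothing about whether the demands are equal.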
Achievability of $(\frac{N^{2}}{2N-1},\frac{N-1}{2N-1})$: Now we describe
Scheme E for $N>2$ files and 2 users which achieves rate $\frac{N-1}{2N-1}$
for memory $\frac{N^{2}}{2N-1}$. File $W_{i}$, $i\in[0:N-1]$ is partitioned
into $2N-1$ disjoint parts of equal size, i.e., $W_{i}$ is given by
$W_{i}=(W_{i,0},W_{i,1},\ldots,W_{i,2N-2})$. File $W_{i}$ is then encoded
using a $(3N-2,2N-1)$ MDS code such that each of $(3N-2)$ coded symbols has
$\frac{F}{2N-1}$ bits. Each file can be reconstructed using any $(2N-1)$ coded
symbols. One of the $(3N-2)$ coded symbols of file $W_{i}$ is denoted by
$F_{i,0}$ while the remaining $(3N-3)$ symbols are denoted by $F_{i,j,k}$,
where $j\in[0:2]$ and $k\in[0:N-2]$. Next we give an outline of Scheme E.
In Scheme E, $N$ out of $3N-2$ symbols of each file are cached at each user.
Out of these $N$ symbols, one is common with the other user and the remaining
$N-1$ are distinct from those of the other user. Similar to Scheme D,
the server randomly permutes these $N^{2}$ symbols before caching at each
user. The main payload of the broadcast consists of $N-1$ symbols. To decode
the demanded file, each user needs $2N-1$ symbols. The server reveals the
positions of the $N$ symbols of the requested file that are available in the
cache of each user through auxiliary transmissions. In both cases $D_{0}=D_{1}$
and $D_{0}\neq D_{1}$, the users obtain $N-1$ additional symbols from the
broadcast; this symmetry is crucial for preserving privacy.
We formally describe Scheme E next.
Caching: To give the cache contents of the users, we first define tuples
$\mbox{$\cal{L}$}_{0},\mbox{$\cal{L}$}_{1}$ and $\mbox{$\cal{L}$}_{2}$ as
follows:
$\mbox{$\cal{L}$}_{j}=(F_{i,0},F_{i,j,1},F_{i,j,2},\ldots,F_{i,j,N-1})_{i\in[0:N-1]},\quad\forall
j\in[0:2].$
The server randomly picks any 2 of $\mbox{$\cal{L}$}_{0},\mbox{$\cal{L}$}_{1}$
and $\mbox{$\cal{L}$}_{2}$ and places one of them in the cache of user 0 after
applying a random permutation, and places the other in the cache of user 1
after applying another independent permutation. To describe this process
formally, we first define
$\displaystyle U_{0}\sim unif\\{[0:2]\\}$ $\displaystyle U_{1}\sim
unif\\{[0:2]\backslash\\{U_{0}\\}\\}.$
Let $\pi_{0}$ and $\pi_{1}$ be two independent and uniformly distributed
permutation functions in the symmetric group of permutations of $[0:N^{2}-1]$.
Further, for $k\in\\{0,1\\}$, let
$\displaystyle Z^{\prime}_{k}$
$\displaystyle=(Z^{\prime}_{k,0},Z^{\prime}_{k,1},\ldots,Z^{\prime}_{k,N^{2}-1})=\pi_{k}(\mbox{$\cal{L}$}_{U_{k}}).$
(84)
The server places $Z^{\prime}_{k}$ and $U_{k}$ in the cache of user $k$ and
also the symbols $(S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k})$, where
$\displaystyle S_{k,j}$ $\displaystyle\sim unif\\{[0:N^{2}-1]\\},$
$\displaystyle P_{k}$ $\displaystyle\sim unif\\{[0:2]\\},\qquad\forall
k\in\\{0,1\\},\enspace j\in[0:2N-1]\backslash\\{0\\}.$
These symbols are used in the delivery phase. Thus, the cache of user $k$,
$Z_{k}$ is given by
$Z_{k}=(Z^{\prime}_{k},S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k},U_{k}).$
Note that $Z^{\prime}_{k}$ consists of $N$ coded symbols of each file, and each
symbol has $\frac{F}{2N-1}$ bits. Thus, we have
$\displaystyle len(Z_{k})$ $\displaystyle=\frac{N^{2}F}{2N-1}+o(F)$
$\displaystyle=MF+o(F).$
Delivery: To describe the delivery phase, we first define
$\displaystyle
X^{\prime}=(X^{\prime}_{0},X^{\prime}_{1},\ldots,X^{\prime}_{N-2})=\left\\{\begin{array}[]{lcl}(F_{D_{0},U_{1},t}\oplus
F_{D_{1},U_{0},t})_{t\in[0:N-1]\backslash\\{0\\}}&\text{if }D_{0}\neq D_{1}\\\
\\\ (F_{D_{0},V,t}\oplus F_{m_{t},0})_{t\in[0:N-1]\backslash\\{0\\}}&\text{if
}D_{0}=D_{1}\end{array}\right.$ (88)
where $m_{t}=(D_{0}+t)$ mod $N$, and $V$ is the unique element of $[0:2]\backslash\\{U_{0},U_{1}\\}$. The
transmitted message $X$ is given by
$X=(X^{\prime},J_{1},J_{2},J_{3})$
where $(J_{1},J_{2},J_{3})$ are the auxiliary transmissions. Next we describe
these auxiliary transmissions.
To describe $J_{1}$, we define
$\displaystyle C^{k}_{i,j}:=\pi_{k}(Ni+j),\quad\forall
i\in[0:N-1],\;j\in[0:N-1],\;k\in\\{0,1\\}.$ (89)
Thus, $C^{k}_{i,0}$ and $C^{k}_{i,j}$ respectively give the positions of
$F_{i,0}$ and $F_{i,U_{k},j}$ in $Z^{\prime}_{k}$. The auxiliary transmission
$J_{1}$ is given by $J_{1}=(J_{1,0},J_{1,1})$, where
$\displaystyle J_{1,k}$
$\displaystyle=(J_{1,k,j})_{j\in[0:N-1]}:=(S_{k,j+1}\oplus
C^{k}_{D_{k},j})_{j\in[0:N-1]},\quad k\in\\{0,1\\}.$ (90)
Here, $\oplus$ denotes addition modulo $N^{2}$ and also note that $S_{k,j+1}$
are part of $Z_{k}$.
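As a small sanity check of (89) and (90), the following sketch (illustrative names; pads implemented as addition modulo $N^{2}$) builds the position map $C^{k}_{i,j}=\pi_{k}(Ni+j)$, masks the positions of the demanded file's cached symbols, and recovers them from $J_{1,k}$:

```python
import random

N = 4
rng = random.Random(1)
pi_k = rng.sample(range(N * N), N * N)        # uniform permutation of [0:N^2-1]
C = lambda i, j: pi_k[N * i + j]              # C^k_{i,j} = pi_k(N*i + j), as in (89)
S = [rng.randrange(N * N) for _ in range(N)]  # pads S_{k,1}, ..., S_{k,N}
D_k = 2                                       # example demand

J1_k = [(S[j] + C(D_k, j)) % (N * N) for j in range(N)]     # (90)
recovered = [(J1_k[j] - S[j]) % (N * N) for j in range(N)]  # user k removes the pads
assert recovered == [C(D_k, j) for j in range(N)]
```

Without the pads $S_{k,j}$, the values $C^{k}_{D_{k},j}$ in $J_{1,k}$ look uniformly random to the other user, which is what the one-time-pad argument in the privacy proof exploits.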
Auxiliary transmission $J_{2}$ is defined as $J_{2}=(J_{2,0},J_{2,1})$, where
$\displaystyle J_{2,k}=(J_{2,k,j})_{j\in[0:N-2]}:=(S_{k,N+1+j}\oplus
H_{k,j+1})_{j\in[0:N-2]},\quad k\in\\{0,1\\}$ (91)
with $H_{k,j}\in[0:N^{2}-1]$ defined by
$\displaystyle
H_{k,j}:=\left\\{\begin{array}[]{lcl}C^{k}_{D_{\tilde{k}},j}&\text{if
}D_{0}\neq D_{1}\\\ \\\ C^{k}_{m_{j},0}&\text{if
}D_{0}=D_{1}.\end{array}\right.$ (95)
Note that, for $k\in\\{0,1\\}$ and $j\in[0:N-2]$, the pads
$S_{k,N+1+j}\in[0:N^{2}-1]$ are part of $Z_{k}$.
Finally, the auxiliary transmission $J_{3}$ is defined as
$J_{3}=(J_{3,0},J_{3,1})$, where
$\displaystyle J_{3,k}$ $\displaystyle:=(P_{k}\oplus T_{k}).$ (96)
Here, $P_{k}$ is a part of $Z_{k}$, and $(T_{0},T_{1})$ is defined as
$(T_{0},T_{1}):=\left\\{\begin{array}[]{lcl}(U_{1},U_{0})&\text{if }D_{0}\neq
D_{1}\\\ \\\ (V,V)&\text{if }D_{0}=D_{1}.\end{array}\right.$
Observe that the main payload $X^{\prime}$ consists of $(N-1)$ symbols of
$\frac{F}{2N-1}$ bits each. Thus, we have
$\displaystyle len(X)$ $\displaystyle=\frac{(N-1)F}{2N-1}+o(F)$
$\displaystyle=RF+o(F).$
Decoding: Now we describe the decoding of file $W_{D_{k}}$ at user
$k\in\\{0,1\\}$. For $i\in[0:N-1],j\in[0:N-1]\backslash\\{0\\}$, user $k$
recovers $(C^{k}_{D_{k},0},C^{k}_{D_{k},1},\ldots,C^{k}_{D_{k},N-1})$,
$(H_{k,1},H_{k,2},\ldots,H_{k,N-1})$ and $T_{k}$ from $J_{1},J_{2}$ and
$J_{3}$, respectively as follows:
$\displaystyle J_{1,k,i}\oplus S_{k,i+1}=C^{k}_{D_{k},i}\oplus S_{k,i+1}\oplus
S_{k,i+1}=C^{k}_{D_{k},i}\qquad\quad\enspace\text{(using~{}\eqref{eq:J1})}$
$\displaystyle J_{2,k,j-1}\oplus S_{k,N+j}=H_{k,j}\oplus S_{k,N+j}\oplus
S_{k,N+j}=H_{k,j}\qquad\text{(using~{}\eqref{eq:J2})}$ $\displaystyle
J_{3,k}\oplus P_{k}=T_{k}\oplus P_{k}\oplus
P_{k}=T_{k}\qquad\qquad\qquad\qquad\qquad\enspace\hskip
2.27621pt\text{(using~{}\eqref{eq:J3})}.$
The coded symbols of $W_{D_{k}}$, namely, $F_{D_{k},0}$ and
$F_{D_{k},U_{k},j},j\in[0:N-1]\setminus\\{0\\}$ are stored in the cache of
user $k$, but their positions are unknown to the user. Using
$(C^{k}_{D_{k},0},C^{k}_{D_{k},1},\ldots,C^{k}_{D_{k},N-1})$, user $k$ can
recover these symbols as
$\displaystyle\widehat{F}_{D_{k},0}$
$\displaystyle=Z^{\prime}_{k,C^{k}_{D_{k},0}},$
$\displaystyle\widehat{F}_{D_{k},U_{k},j}$
$\displaystyle=Z^{\prime}_{k,C^{k}_{D_{k},j}},\quad\text{for
}j\in[0:N-1]\backslash\\{0\\}.$
Observe that, by the definition of $C^{k}_{D_{k},0}$ and $C^{k}_{D_{k},j}$, we
get $\widehat{F}_{D_{k},0}={F}_{D_{k},0}$ and
$\widehat{F}_{D_{k},U_{k},j}={F}_{D_{k},U_{k},j}$. Now that user $k$ has
recovered $N$ coded symbols of $W_{D_{k}}$, we show how it recovers $(N-1)$
more symbols namely, $F_{D_{k},T_{k},j}$, $j\in[0:N-1]\setminus\\{0\\}$.
Symbol $F_{D_{k},T_{k},j}$ can be recovered from the main payload using
$(H_{k,1},H_{k,2},\ldots,H_{k,N-1})$ as follows:
$\widehat{F}_{D_{k},T_{k},j}=X^{\prime}_{j-1}\oplus
Z^{\prime}_{k,H_{k,j}},\quad\text{for }j\in[0:N-1]\backslash\\{0\\}.$
To show that $\widehat{F}_{D_{k},T_{k},j}={F}_{D_{k},T_{k},j}$, we consider
the following two cases:
Case 1: $D_{0}\neq D_{1}$
$\displaystyle\widehat{F}_{D_{k},T_{k},j}$
$\displaystyle=X^{\prime}_{j-1}\oplus Z^{\prime}_{k,H_{k,j}}$
$\displaystyle=F_{D_{0},U_{1},j}\oplus F_{D_{1},U_{0},j}\oplus
Z^{\prime}_{k,C^{k}_{D_{\tilde{k}},j}}\qquad\text{(using \eqref{eq:X'} and
\eqref{eq:defh})}$ $\displaystyle=F_{D_{0},U_{1},j}\oplus
F_{D_{1},U_{0},j}\oplus F_{D_{\tilde{k}},U_{k},j}\qquad\text{(using
\eqref{eq:pos} and \eqref{eq:cache})}$
$\displaystyle=F_{D_{k},U_{\tilde{k}},j}$ $\displaystyle=F_{D_{k},T_{k},j}.$
Case 2: $D_{0}=D_{1}$
$\displaystyle\widehat{F}_{D_{k},T_{k},j}$
$\displaystyle=X^{\prime}_{j-1}\oplus Z^{\prime}_{k,H_{k,j}}$
$\displaystyle=F_{D_{0},V,j}\oplus F_{m_{j},0}\oplus
Z^{\prime}_{k,C^{k}_{m_{j},0}}\qquad\text{(using \eqref{eq:X'} and
\eqref{eq:defh})}$ $\displaystyle=F_{D_{0},V,j}\oplus F_{m_{j},0}\oplus
F_{m_{j},0}\qquad\quad\text{(using \eqref{eq:pos} and \eqref{eq:cache})}$
$\displaystyle=F_{D_{0},V,j}$ $\displaystyle=F_{D_{k},T_{k},j}.$
Since $T_{k}\neq U_{k}$, user $k$ has retrieved $2N-1$ distinct symbols of the
MDS code. Using these, user $k$ can decode file $W_{D_{k}}$.
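The final step relies only on the MDS property: any $2N-1$ of the $3N-2$ coded symbols determine the file. A minimal sketch of such a code is given below, a Vandermonde (polynomial-evaluation) construction over a prime field, with symbols modeled as field elements rather than $\frac{F}{2N-1}$-bit strings; the function names and the field size `P` are our own choices, not from the paper.

```python
P = 257  # prime field size; valid while the number of coded symbols n <= P

def mds_encode(msg, n):
    # Evaluate the message polynomial sum_j msg[j] * x^j at points x = 0..n-1.
    m = len(msg)
    return [sum(msg[j] * pow(x, j, P) for j in range(m)) % P for x in range(n)]

def mds_decode(points, m):
    # Recover the m message symbols from any m (x, y) pairs by solving the
    # Vandermonde system with Gaussian elimination over GF(P).
    rows = [[pow(x, j, P) for j in range(m)] + [y % P] for x, y in points[:m]]
    for c in range(m):
        piv = next(r for r in range(c, m) if rows[r][c])
        rows[c], rows[piv] = rows[piv], rows[c]
        inv = pow(rows[c][c], P - 2, P)       # field inverse via Fermat's little theorem
        rows[c] = [v * inv % P for v in rows[c]]
        for r in range(m):
            if r != c and rows[r][c]:
                f = rows[r][c]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[c])]
    return [rows[j][m] for j in range(m)]

# N = 3 gives a (3N-2, 2N-1) = (7, 5) code; any 5 of the 7 symbols suffice.
msg = [7, 1, 4, 9, 2]
code = mds_encode(msg, 7)
subset = [(x, code[x]) for x in (6, 0, 3, 5, 1)]
assert mds_decode(subset, 5) == msg
```

Any subset of $2N-1$ evaluation points yields an invertible Vandermonde system, so decoding succeeds regardless of which symbols the user happens to hold.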
Proof of privacy: Now we prove that Scheme E is demand-private for user
$k=0,1$, i.e., (9) holds true for this scheme. Recall that $\tilde{k}$ is
defined as $\tilde{k}=(k+1)$ mod 2. Then the following sequence of equalities
holds true:
$\displaystyle I(D_{\tilde{k}};X,Z_{k},D_{k})$
$\displaystyle=I(D_{\tilde{k}};X|Z_{k},D_{k})+I(D_{\tilde{k}};Z_{k}|D_{k})+I(D_{k};D_{\tilde{k}})$
$\displaystyle\overset{\mathrm{(a)}}{=}I(X;D_{\tilde{k}}|Z_{k},D_{k})$
$\displaystyle=I(X^{\prime},J_{1},J_{2},J_{3};D_{\tilde{k}}|\pi_{k}(\mbox{$\cal{L}$}_{U_{k}}),S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k},U_{k},D_{k})$
$\displaystyle\overset{\mathrm{(b)}}{=}I(J_{1},J_{2},J_{3};D_{\tilde{k}}|S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k},U_{k},D_{k})$
$\displaystyle=I(J_{1,0},J_{1,1},J_{2,0},J_{2,1},J_{3,0},J_{3,1};D_{\tilde{k}}|S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k},U_{k},D_{k})$
$\displaystyle\overset{\mathrm{(c)}}{=}I(J_{1,k},J_{2,k},J_{3,k};D_{\tilde{k}}|S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k},U_{k},D_{k})$
$\displaystyle=I(C^{k}_{D_{k},0},C^{k}_{D_{k},1},\ldots,C^{k}_{D_{k},N-1},H_{k,1},H_{k,2},\ldots,H_{k,N-1},T_{k};D_{\tilde{k}}|S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k},U_{k},D_{k})$
$\displaystyle\overset{\mathrm{(d)}}{=}I(C^{k}_{D_{k},0},C^{k}_{D_{k},1},\ldots,C^{k}_{D_{k},N-1},H_{k,1},H_{k,2},\ldots,H_{k,N-1},T_{k};D_{\tilde{k}}|U_{k},D_{k})$
(97)
where (a) follows because $Z_{k}$ is independent of $(D_{0},D_{1})$ and also
$D_{0}$ and $D_{1}$ are independent; (b) follows because for any fixed value
of
$(J_{1},J_{2},J_{3},S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k},U_{k},D_{k},D_{\tilde{k}})$,
we have
$\displaystyle(X^{\prime},\mbox{$\cal{L}$}_{U_{k}}|J_{1},J_{2},J_{3},S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k},U_{k},D_{k},D_{\tilde{k}})\sim
unif\\{0,1\\}^{F\frac{(N^{2}+N-1)}{(2N-1)}};$
(c) follows because $(J_{1,\tilde{k}},J_{2,\tilde{k}},J_{3,\tilde{k}})$ are
encoded using one-time pads which are only available with user $\tilde{k}$;
(d) follows because $(S_{k,1},S_{k,2},\ldots,S_{k,2N-1},P_{k})$ are one-time
pads which are independent of all other random variables.
Next we show that the RHS of (97) is 0. To this end, we need the following.
Observe that, for any fixed tuple of distinct values of the positions $C^{k}_{D_{k},i}$ and $H_{k,j}$:
1. (i)
For $d_{0}\in[0:N-1],d_{1}\in[0:N-1]$, $d_{0}\neq d_{1}$ and any fixed values
of $(U_{0},U_{1})$,
$\displaystyle\Pr(C^{k}_{D_{k},0},C^{k}_{D_{k},1},\ldots,C^{k}_{D_{k},N-1},H_{k,1},H_{k,2},\ldots,H_{k,N-1}|D_{0}=d_{0},D_{1}=d_{1},U_{0},U_{1})$
$\displaystyle\overset{\mathrm{(e)}}{=}\Pr(C^{k}_{d_{k},0},C^{k}_{d_{k},1},\ldots,C^{k}_{d_{k},N-1},C^{k}_{d_{\tilde{k}},1},C^{k}_{d_{\tilde{k}},2},\ldots,C^{k}_{d_{\tilde{k}},N-1}|D_{0}=d_{0},D_{1}=d_{1},U_{0},U_{1})$
$\displaystyle\overset{\mathrm{(f)}}{=}\Pr(C^{k}_{d_{k},0},C^{k}_{d_{k},1},\ldots,C^{k}_{d_{k},N-1},C^{k}_{d_{\tilde{k}},1},C^{k}_{d_{\tilde{k}},2},\ldots,C^{k}_{d_{\tilde{k}},N-1})$
$\displaystyle=\frac{(N^{2}-2N+1)!}{(N^{2})!}.$ (98)
2. (ii)
For $d_{0}\in[0:N-1],d_{1}\in[0:N-1]$, $d_{0}=d_{1}$ and any fixed values of
$(U_{0},U_{1})$,
$\displaystyle\Pr(C^{k}_{D_{k},0},C^{k}_{D_{k},1},\ldots,C^{k}_{D_{k},N-1},H_{k,1},H_{k,2},\ldots,H_{k,N-1}|D_{0}=d_{0},D_{1}=d_{1},U_{0},U_{1})$
$\displaystyle\overset{\mathrm{(g)}}{=}\Pr(C^{k}_{d_{k},0},C^{k}_{d_{k},1},\ldots,C^{k}_{d_{k},N-1},C^{k}_{m_{1},0},C^{k}_{m_{2},0},\ldots,C^{k}_{m_{N-1},0}|D_{0}=d_{0},D_{1}=d_{1},U_{0},U_{1})$
$\displaystyle\overset{\mathrm{(h)}}{=}\Pr(C^{k}_{d_{k},0},C^{k}_{d_{k},1},\ldots,C^{k}_{d_{k},N-1},C^{k}_{m_{1},0},C^{k}_{m_{2},0},\ldots,C^{k}_{m_{N-1},0})$
$\displaystyle=\frac{(N^{2}-2N+1)!}{(N^{2})!}.$ (99)
Here, (e) and (g) follow from (95); (f) follows because
$(C^{k}_{d_{k},0},C^{k}_{d_{k},1},\ldots,C^{k}_{d_{k},N-1},C^{k}_{d_{\tilde{k}},1},C^{k}_{d_{\tilde{k}},2},\ldots,C^{k}_{d_{\tilde{k}},N-1})$
only depends on $\pi_{k}$ which is independent of $(D_{0},D_{1},U_{0},U_{1})$;
(h) follows for similar reasons as (f). Note that by definition $T_{k}$ is a
function of $(D_{0},D_{1},U_{0},U_{1})$.
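The common value $\frac{(N^{2}-2N+1)!}{(N^{2})!}$ in (98) and (99) is simply the probability that a uniform permutation of $[0:N^{2}-1]$ maps $2N-1$ prescribed positions to $2N-1$ prescribed distinct values. For $N=2$ this can be checked by brute force (the helper name is illustrative):

```python
from itertools import permutations
from math import factorial
from fractions import Fraction

def fixed_positions_prob(n, targets):
    # Pr that a uniform permutation pi of [0:n-1] satisfies pi[i] == v for
    # every constraint (i, v) in targets (distinct indices, distinct values).
    hits = sum(1 for pi in permutations(range(n))
               if all(pi[i] == v for i, v in targets.items()))
    return Fraction(hits, factorial(n))

# N = 2: constrain 2N-1 = 3 of the N^2 = 4 positions; expect 1!/4! = 1/24.
p = fixed_positions_prob(4, {0: 2, 1: 0, 3: 1})
assert p == Fraction(factorial(4 - 3), factorial(4)) == Fraction(1, 24)
```

Because the probability does not depend on which positions or values are prescribed, it is the same in both demand cases, which is what yields the independence claimed after (99).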
Now using (98) and (99), we conclude that
$(C^{k}_{D_{k},0},C^{k}_{D_{k},1},\ldots,C^{k}_{D_{k},N-1},H_{k,1},H_{k,2},\ldots,H_{k,N-1})$
is independent of $(D_{0},D_{1},T_{k},U_{k})$. Thus, it follows from (97) that
$\displaystyle
I(C^{k}_{D_{k},0},C^{k}_{D_{k},1},\ldots,C^{k}_{D_{k},N-1},H_{k,1},H_{k,2},\ldots,H_{k,N-1},T_{k};D_{\tilde{k}}|U_{k},D_{k})=I(T_{k};D_{\tilde{k}}|U_{k},D_{k}).$
(100)
For
$d_{0}\in[0:N-1],d_{1}\in[0:N-1],u_{k}\in[0:2],t\in[0:2]\setminus\\{u_{k}\\}$,
$\displaystyle\Pr(T_{k}=t|D_{0}=d_{0},D_{1}=d_{1},U_{k}=u_{k})=\begin{cases}\Pr(U_{\tilde{k}}=t|D_{0}=d_{0},D_{1}=d_{1},U_{k}=u_{k})=\frac{1}{2},\quad&\text{if
}d_{0}\neq d_{1}\\\ \\\
\Pr(V=t|D_{0}=d_{0},D_{1}=d_{1},U_{k}=u_{k})=\frac{1}{2},\quad&\text{if
}d_{0}=d_{1}.\end{cases}$ (101)
From (101), we obtain that $T_{k}$ is independent of
$(U_{k},D_{k},D_{\tilde{k}})$. It thus follows from (100) and (97) that
$\displaystyle
I(D_{\tilde{k}};X,Z_{k},D_{k})=I(T_{k};D_{\tilde{k}}|U_{k},D_{k})=0.$
This completes the proof for privacy.
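The distribution in (101) can likewise be checked by enumerating the joint of $(U_{0},U_{1})$, which is uniform over ordered pairs of distinct elements of $[0:2]$ (the helper name below is illustrative):

```python
from itertools import permutations

def tk_counts(k, equal_demands):
    # Tally (U_k, T_k) over the 6 equally likely values of (U_0, U_1).
    counts = {}
    for u0, u1 in permutations(range(3), 2):
        v = 3 - u0 - u1                      # the unique element outside {U_0, U_1}
        T = (v, v) if equal_demands else (u1, u0)
        key = ((u0, u1)[k], T[k])
        counts[key] = counts.get(key, 0) + 1
    return counts

# Given U_k = u, the value T_k is uniform over the two elements t != u in both
# demand cases, so T_k carries no information about the demands -- as in (101).
for k in (0, 1):
    for eq in (False, True):
        c = tk_counts(k, eq)
        assert len(c) == 6 and all(v == 1 for v in c.values())
        assert all(u != t for (u, t) in c)
```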
## Appendix A Proof of Lemma 2
To prove this lemma, we need to show that user $u^{\prime}_{i}$ can recover
all $Z^{i}_{j,\mbox{$\cal{S}$}}$ such that
$\mbox{$\cal{R}$}^{-}\subset\mbox{$\cal{T}$},|\mbox{$\cal{R}$}^{-}|=r-1$, and
$0\in\mbox{$\cal{S}$}$. For
$\mbox{$\cal{A}$}:=\mbox{$\cal{S}$}\setminus\\{0\\}$, it follows from the
definition that
$\displaystyle Z^{j}_{i,\mbox{$\cal{S}$}}$
$\displaystyle=\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{S}$}}W_{j,\mbox{$\cal{S}$}\cup\\{t\\}}$
$\displaystyle=\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{R}$}^{-}}W_{j,\mbox{$\cal{A}$}\cup\\{0,t\\}}.$
(102)
For
$t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{S}$}$,
we can write
$\displaystyle Z^{j}_{i,\mbox{$\cal{A}$}\cup\\{t\\}}$
$\displaystyle=\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap(\mbox{$\cal{A}$}\cup\\{t\\})}W_{j,\mbox{$\cal{A}$}\cup\\{t,u\\}}$
$\displaystyle\overset{(a)}{=}W_{j,\mbox{$\cal{A}$}\cup\\{0,t\\}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap(\mbox{$\cal{S}$}\cup\\{t\\})}W_{j,\mbox{$\cal{A}$}\cup\\{t,u\\}}$
where, $(a)$ follows because
$\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap(\mbox{$\cal{A}$}\cup\\{t\\})=\left(\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap(\mbox{$\cal{S}$}\cup\\{t\\})\right)\cup\\{0\\}$.
Thus, we have
$\displaystyle W_{j,\mbox{$\cal{A}$}\cup\\{0,t\\}}$
$\displaystyle=Z^{j}_{i,\mbox{$\cal{A}$}\cup\\{t\\}}\oplus\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap(\mbox{$\cal{S}$}\cup\\{t\\})}W_{j,\mbox{$\cal{A}$}\cup\\{t,u\\}}.$
Substituting the above expression of $W_{j,\mbox{$\cal{A}$}\cup\\{0,t\\}}$ in
(102), we get
$\displaystyle Z^{j}_{i,\mbox{$\cal{S}$}}$
$\displaystyle=\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{S}$}}Z^{j}_{i,\mbox{$\cal{A}$}\cup\\{t\\}}\oplus\bigoplus_{t\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap\mbox{$\cal{S}$}}\;\bigoplus_{u\in\mbox{$\cal{V}$}_{i}\backslash\mbox{$\cal{V}$}_{i}\cap(\mbox{$\cal{S}$}\cup\\{t\\})}W_{j,\mbox{$\cal{A}$}\cup\\{t,u\\}}.$
Observe that $Z^{j}_{i,\mbox{$\cal{A}$}\cup\\{t\\}}$ is cached at the user,
while the second term is zero because every term
$W_{j,\mbox{$\cal{A}$}\cup\\{t,u\\}}$ appears twice in the double summation.
This shows that $Z^{j}_{i,\mbox{$\cal{S}$}}$ can be recovered using only the
cache contents. This completes the proof of Lemma 2.
## Appendix B Proof of Lemma 3
To prove the lemma, we follow the proof of [6, Theorem 2], where a lower bound
on the optimal rate that is tighter than the cut-set bound was obtained. We
also use that $R^{\text{MAN,lin}}_{N,K}(M)$ is monotonically non-increasing
for all $M\geq 0$, which can be shown as follows. Let $g_{1}(M)=N-M$,
$g_{2}(M)=K(1-M/N)\frac{1}{1+\frac{KM}{N}}$ and
$t_{0}=\frac{KM}{N}\in\\{0,1,\ldots,K\\}$. It is easy to see that, for
$t^{\prime}_{0},t^{\prime\prime}_{0}\in\\{0,1,\ldots,K\\}$,
$\displaystyle g_{1}\left(\frac{t^{\prime}_{0}N}{K}\right)\leq
g_{1}\left(\frac{t^{\prime\prime}_{0}N}{K}\right),\quad\mbox{if
}t^{\prime}_{0}>t^{\prime\prime}_{0}.$ (103)
We also have
$\displaystyle g_{2}\left(\frac{t^{\prime}_{0}N}{K}\right)\leq
g_{2}\left(\frac{t^{\prime\prime}_{0}N}{K}\right),\quad\mbox{if
}t^{\prime}_{0}>t^{\prime\prime}_{0}$ (104)
since
$\displaystyle
g_{2}\left(\frac{t_{0}N}{K}\right)-g_{2}\left(\frac{(t_{0}+1)N}{K}\right)$
$\displaystyle=\frac{K-t_{0}}{1+t_{0}}-\frac{K-(t_{0}+1)}{2+t_{0}}$
$\displaystyle=\frac{K+1}{(1+t_{0})(2+t_{0})}$ $\displaystyle\geq 0.$
From (103) and (104), it follows that
$\displaystyle\min\left(g_{1}\left(\frac{t^{\prime}_{0}N}{K}\right),g_{2}\left(\frac{t^{\prime}_{0}N}{K}\right)\right)$
$\displaystyle\leq\min\left(g_{1}\left(\frac{t^{\prime\prime}_{0}N}{K}\right),g_{2}\left(\frac{t^{\prime\prime}_{0}N}{K}\right)\right),\quad\mbox{for
}t^{\prime}_{0}>t^{\prime\prime}_{0}.$ (105)
Since $R^{\text{MAN,lin}}_{N,K}(M)$ is the linear interpolation of
$\min\left(g_{1}\left(\frac{t_{0}N}{K}\right),g_{2}\left(\frac{t_{0}N}{K}\right)\right)$
for $t_{0}\in\\{0,1,\ldots,K\\}$, (105) implies that
$R^{\text{MAN,lin}}_{N,K}(M)$ is monotonically non-increasing in $M$.
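The telescoping identity and the resulting monotonicity can be confirmed numerically with exact arithmetic (a sketch with illustrative names):

```python
from fractions import Fraction

def check_monotone(N, K):
    # Corner-point values of g1 and g2 at M = t0*N/K, for t0 = 0..K.
    g1 = lambda t0: Fraction(N * (K - t0), K)   # N - t0*N/K
    g2 = lambda t0: Fraction(K - t0, 1 + t0)    # K(1 - M/N) / (1 + KM/N) at M = t0*N/K
    vals = [min(g1(t), g2(t)) for t in range(K + 1)]
    for t in range(K):
        # g2(t0) - g2(t0+1) = (K+1)/((1+t0)(2+t0)) >= 0, and g1 is also
        # non-increasing, so min(g1, g2) is non-increasing at the corner points.
        assert g2(t) - g2(t + 1) == Fraction(K + 1, (1 + t) * (2 + t))
        assert vals[t] >= vals[t + 1]
    return True

assert all(check_monotone(N, K) for N in range(1, 8) for K in range(1, 12))
```

Since linear interpolation preserves monotonicity between consecutive corner points, the check extends to all $M\geq 0$.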
Now we consider the two memory regions studied in the proof of [6, Theorem 2].
For $N\leq K$, the two regions are as follows:
Region I: $0\leq M\leq 1$: Since $R^{\text{MAN,lin}}_{N,K}(0)=N$ and
$R^{\text{MAN, lin}}_{N,K}(M)$ is monotonically non-increasing in $M$, we
get
$\displaystyle R^{\text{MAN, lin}}_{N,K}(M)\leq N.$
For this regime, it was shown in [6, Theorem 2] that $R^{*}_{N,K}(M)\geq N/4$.
Then, we have
$\displaystyle\frac{R^{\text{MAN, lin}}_{N,K}(M)}{R^{*}_{N,K}(M)}\leq
4,\quad\mbox{ for }0\leq M\leq 1.$
Region II: $1\leq M\leq N/2$: Let us define
$f_{1}(M):=\frac{N}{M}-\frac{1}{2}$. For $t_{0}\geq 1$ and
$\frac{Nt_{0}}{K}\leq M\leq\frac{N(t_{0}+1)}{K}$, it was shown in [6, Theorem 2]
that
$\displaystyle R^{\text{MAN,
lin}}_{N,K}\left(\frac{Nt_{0}}{K}\right)=\frac{K-t_{0}}{t_{0}+1}\leq f_{1}(M)$
(106)
and also that
$\displaystyle\frac{f_{1}(M)}{R^{*}_{N,K}(M)}\leq 4.$ (107)
Since $R^{\text{MAN, lin}}_{N,K}(M)$ is non-increasing, we get
$\displaystyle R^{\text{MAN, lin}}_{N,K}(M)\leq R^{\text{MAN,
lin}}_{N,K}\left(\frac{Nt_{0}}{K}\right).$ (108)
It thus follows from (106), (107) and (108) that
$\displaystyle\frac{R^{\text{MAN,lin}}_{N,K}(M)}{R^{*}_{N,K}(M)}\leq 4.$ (109)
This completes the proof of Lemma 3.
## Appendix C Proof of Lemma 4
We prove (68) for $k=1$. Other cases follow similarly. Any $(N,K,M,R)$-private
scheme satisfies that $I(D_{0};Z_{1},D_{1},X)=0$. Since
$H(W_{D_{1}}|X,Z_{1},D_{1})=0$, we have that
$I(D_{0};Z_{1},D_{1},X,W_{D_{1}})=0$. Then it follows that
$\displaystyle\Pr(D_{0}=i|X=x,Z_{1}=z^{\prime},W_{j}=w_{j},D_{1}=j)=\Pr(D_{0}=i^{\prime}|X=x,Z_{1}=z^{\prime},W_{j}=w_{j},D_{1}=j).$
Multiplying both sides by $\Pr(X=x,Z_{1}=z^{\prime},W_{j}=w_{j}|D_{1}=j)$
gives
$\displaystyle\Pr(D_{0}=i,X=x,Z_{1}=z^{\prime},W_{j}=w_{j}|D_{1}=j)=\Pr(D_{0}=i^{\prime},X=x,Z_{1}=z^{\prime},W_{j}=w_{j}|D_{1}=j).$
Then it follows that
$\displaystyle\Pr(D_{0}=i|D_{1}=j)\times\Pr(X=x,Z_{1}=z^{\prime},W_{j}=w_{j}|D_{0}=i,D_{1}=j)$
$\displaystyle=\Pr(D_{0}=i^{\prime}|D_{1}=j)\times\Pr(X=x,Z_{1}=z^{\prime},W_{j}=w_{j}|D_{0}=i^{\prime},D_{1}=j).$
Since the demands are equally likely and they are independent of each other,
we get
$\displaystyle\Pr(X=x,Z_{1}=z^{\prime},W_{j}=w_{j}|D_{0}=i,D_{1}=j)$
$\displaystyle=\Pr(X=x,Z_{1}=z^{\prime},W_{j}=w_{j}|D_{0}=i^{\prime},D_{1}=j).$
(110)
Further, we also have
$\displaystyle\Pr(X=x,Z_{1}=z^{\prime},W_{j}=w_{j}|D_{1}=j)$
$\displaystyle=\sum_{t=0}^{N-1}\Pr(D_{0}=t)\times\Pr(X=x,Z_{1}=z^{\prime},W_{j}=w_{j}|D_{0}=t,D_{1}=j).$
(111)
Eq. (110) and (111) together prove (68) for $k=1$.
## References
* [1] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” _IEEE Transactions on Information Theory_ , vol. 60, no. 5, pp. 2856–2867, May 2014.
* [2] ——, “Decentralized coded caching attains order-optimal memory-rate tradeoff,” _IEEE/ACM Transactions On Networking_ , vol. 23, no. 4, pp. 1029–1040, 2014.
* [3] M. Mohammadi Amiri and D. Gunduz, “Fundamental limits of coded caching: Improved delivery rate-cache capacity tradeoff,” _IEEE Transactions on Communications_ , vol. 65, no. 2, pp. 806–815, Feb 2017.
* [4] J. Gómez-Vilardebó, “Fundamental limits of caching: Improved rate-memory tradeoff with coded prefetching,” _IEEE Transactions on Communications_ , vol. 66, no. 10, pp. 4488–4497, Oct 2018.
* [5] Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “The exact rate-memory tradeoff for caching with uncoded prefetching,” _IEEE Transactions on Information Theory_ , vol. 64, no. 2, pp. 1281–1296, Feb. 2018.
* [6] H. Ghasemi and A. Ramamoorthy, “Improved lower bounds for coded caching,” _IEEE Transactions on Information Theory_ , vol. 63, no. 7, pp. 4388–4413, July 2017.
* [7] C. Wang, S. Saeedi Bidokhti, and M. Wigger, “Improved converses and gap results for coded caching,” _IEEE Transactions on Information Theory_ , vol. 64, no. 11, pp. 7051–7062, Nov 2018.
* [8] Q. Yan, M. Cheng, X. Tang, and Q. Chen, “On the placement delivery array design for centralized coded caching scheme,” _IEEE Transactions on Information Theory_ , vol. 63, no. 9, pp. 5821–5833, 2017.
Luc Dartois, Univ Paris Est Creteil, LACL, F-94010 Creteil, France (luc.dartois@lacl.fr, https://orcid.org/0000-0001-9974-1922)
Paul Gastin, Université Paris-Saclay, ENS Paris-Saclay, CNRS, LMF, 91190, Gif-sur-Yvette, France (paul.gastin@lsv.fr, https://orcid.org/0000-0002-1313-7722)
Shankara Narayanan Krishna, IIT Bombay, India (krishnas@cse.iitb.ac.in, https://orcid.org/0000-0003-0925-398X)
ACM subject classification: Theory of computation, Transducers
Supported by IRL ReLaX
# SD-Regular Transducer Expressions for Aperiodic Transformations
Luc Dartois Paul Gastin Shankara Narayanan Krishna
###### Abstract
FO transductions, aperiodic deterministic two-way transducers, as well as
aperiodic streaming string transducers are all equivalent models for first
order definable functions. In this paper, we solve the long-standing open
problem of expressions capturing first order definable functions, thereby
generalizing the seminal SF=AP (star free expressions = aperiodic languages)
result of Schützenberger. Our result also generalizes a lesser known
characterization by Schützenberger of aperiodic languages by SD-regular
expressions (SD=AP). We show that every first order definable function over
finite words captured by an aperiodic deterministic two-way transducer can be
described with an SD-regular transducer expression (SDRTE). An SDRTE is a
regular expression where Kleene stars are used in a restricted way: they can
appear only on aperiodic languages which are prefix codes of bounded
synchronization delay. SDRTEs are constructed from simple functions using the
combinators unambiguous sum (deterministic choice), Hadamard product, and
unambiguous versions of the Cauchy product and the $k$-chained Kleene-star,
where the star is restricted as mentioned. In order to construct an SDRTE
associated with an aperiodic deterministic two-way transducer, (i) we
concretize Schützenberger’s SD=AP result, by proving that aperiodic languages
are captured by SD-regular expressions which are unambiguous and stabilising;
(ii) by structural induction on the unambiguous, stabilising SD-regular
expressions describing the domain of the transducer, we construct SDRTEs.
Finally, we also look at various formalisms equivalent to SDRTEs which use
function composition, allowing us to trade the $k$-chained star for a 1-star.
###### keywords:
transducers, aperiodic functions, regular expressions, transition monoids.
## 1 Introduction
The seminal result of Kleene, which proves the equivalence of regular
expressions and regular languages, is among the cornerstones of formal
language theory. The Büchi, Elgot, Trakhtenbrot theorem which proved the
equivalence of regular languages with MSO definable languages, and the
equivalence of regular languages with the class of languages having a finite
syntactic monoid, established the synergy between machines, logic and algebra.
The fundamental correspondence between machines and logic at the language
level has been generalized to transformations by Engelfriet and Hoogeboom
[15], where regular transformations are defined by two way transducers (2DFTs)
as well as by the MSO transductions of Courcelle [9]. A generalization of
Kleene’s theorem to transformations can be found in [3], [4] and [11].
In [3], regular transformations were described using additive cost register
automata (ACRA) over finite words. ACRAs are a generalization of streaming
string transducers (SSTs) [1] which make a single left to right pass over the
input and use a finite set of variables over strings from the output alphabet.
ACRAs compute partial functions from finite words over a finite input alphabet
to a monoid $(\mathbb{D},+,0)$. The main contribution of [3] was to provide a
set of combinators, analogous to the operators used in regular expressions,
which help in forming combinator expressions computing the output of the ACRA
over finite words. The result of [3] was generalized to infinite words in
[11]. The proof technique in [11] is completely different from [3], and, being
algebraic, allows a uniform treatment of the result for transductions over
finite and infinite words. Subsequently, an alternative proof for the result
of [3] appeared in [4].
The class of star-free languages forms a strict subset of regular languages. In
1965, Schützenberger [21] proved his famous result that star-free languages
(SF) and languages having an aperiodic syntactic monoid (or aperiodic
languages AP) coincide over finite words (SF=AP). This equivalence gives an
effective characterization of star-free languages, since one can decide if a
syntactic monoid is aperiodic. This was followed by a result [18] of
McNaughton and Papert proving the equivalence of star-free languages with
counter-free automata as well as first order logic, thereby completing the
machine-logic-algebra connection once again. The generalization of this result
to transformations has been investigated in [8], [16], proving the equivalence
of aperiodic two-way transducers and FO transductions à la Courcelle for
finite and infinite words. The counterpart of regular expressions for
aperiodic languages are star-free expressions, which are obtained by using the
complementation operation instead of Kleene-star. The generalization of this
result to transformations has been an open problem, as mentioned in [3], [4]
and [11]. One of the main challenges in generalizing this result to
transformations is the difficulty in finding an analogue for the
complementation operation on sets in the setting of transformations.
Our Contributions. The following central problem remained open till now: Given
an aperiodic 2DFT $\mathcal{A}$, does there exist a class of expressions over
basic functions and regular combinators such that, one can effectively compute
from $\mathcal{A}$, an expression $E$ in this class, and conversely, such that
$[\\![\mathcal{A}]\\!](w)=[\\![E]\\!](w)$ for each
$w\in\mathsf{dom}(\mathcal{A})$? We solve this open problem, by providing a
characterization by means of expressions for aperiodic two way transducers. In
the following, we describe the main steps leading to the solution of the
problem.
Concretizing Schützenberger’s characterization. In 1973, Schützenberger [22]
presented a characterization of aperiodic languages in terms of rational
expressions where the star operation is restricted to prefix codes with
bounded synchronization delay and no complementation is used. This class of
languages is denoted by SD, and this result is known as SD=AP. To circumvent
the difficulty of using complementation in star-free expressions, we use this
SD=AP characterization of aperiodic languages by SD-expressions. An SD-
expression is a regular expression where the Kleene stars are restricted to
appear only on prefix codes of bounded synchronization delay. Our _first_
contribution is to concretize Schützenberger’s result to more specific SD-
expressions. We show that aperiodic languages can be captured by _unambiguous_
, _stabilising_ , SD-expressions. The _unambiguity_ of an expression refers to
the unique way in which it can be parsed, while _stabilising expressions_ are a
new notion introduced in this paper. Our concretization (Theorem 3.6) shows
that, given a morphism $\varphi$ from the free monoid $\Sigma^{*}$ to a finite
aperiodic monoid $M$, for each $s\in M$, $\varphi^{-1}(s)$ can be expressed by
an unambiguous, stabilising SD-expression. The two notions of _unambiguity_
and _stabilising_ help us to capture the runs of an aperiodic two way
transducer. These two notions will be described in detail in Section 3.
The Combinators. Our _second_ contribution is the definition of SD-regular
transducer expressions (SDRTE). These are built from basic constant functions
using combinators such as unambiguous sum, unambiguous Cauchy product,
Hadamard product. In addition, we use $k$-chained Kleene star $[L,C]^{k\star}$
(and its reverse) when the parsing language $L$ is restricted to be aperiodic
and a prefix code with bounded synchronisation delay. It should be noticed
that, contrary to the case of regular transducer expressions (RTE) which
define all regular functions, the 2-chained Kleene star $[L,C]^{2\star}$ does
not seem sufficient to define all aperiodic functions (see Section 4.7 as well
as Figure 2), and $k$-chained Kleene stars for arbitrary large $k$ seem
necessary to capture all aperiodic functions.
The semantics of an SDRTE $C$ is a partial function
$[\\![C]\\!]\colon\Sigma^{*}\to\Gamma^{*}$ with domain denoted
$\mathsf{dom}(C)$. For an aperiodic language $L\subseteq\Sigma^{*}$ and a word
$v\in\Gamma^{*}$, the SDRTE ${L}\triangleright{v}$ denotes the constant
function $[\\![{L}\triangleright{v}]\\!]$ with value $v$ and domain $L$. The
Hadamard product $C_{1}\odot C_{2}$ when applied to
$w\in\mathsf{dom}(C_{1})\cap\mathsf{dom}(C_{2})$ produces
$[\\![C_{1}]\\!](w)\cdot[\\![C_{2}]\\!](w)$. The unambiguous Cauchy product
$C_{1}\cdot C_{2}$ when applied on $w\in\Sigma^{*}$ produces
$[\\![C_{1}]\\!](u)\cdot[\\![C_{2}]\\!](v)$ if $w$ can be unambiguously
decomposed as $u\cdot v$, with $u\in\mathsf{dom}(C_{1})$ and
$v\in\mathsf{dom}(C_{2})$. The Kleene star $C^{*}$ is defined when
$L=\mathsf{dom}(C)$ is an aperiodic language which is a prefix code with
bounded synchronisation delay. Then $\mathsf{dom}(C^{*})=L^{*}$, and, for
$w=u_{1}u_{2}\cdots u_{n}$ with $u_{i}\in L$, we have
$[\\![C^{*}]\\!](w)=[\\![C]\\!](u_{1})[\\![C]\\!](u_{2})\cdots[\\![C]\\!](u_{n})$.
[State diagram of $\mathcal{A}$: states $q_{0},\ldots,q_{6}$, with transitions labelled by input letter, output word and head move.]
Figure 1: An aperiodic 2DFT $\cal{A}$ computing the partial function
$[\\![{\mathcal{A}}]\\!](\$a^{m_{1}}\\#a^{m_{2}}\$a^{m_{3}}\\#a^{m_{4}}\$\cdots
a^{m_{2k}}\$)=b^{m_{2}}a^{m_{1}}b^{m_{4}}a^{m_{3}}\cdots
b^{m_{2k}}a^{m_{2k-1}}$, for $k\geq 0$. The input alphabet is
$\Sigma=\\{a,\\#,\$\\}$ while the output alphabet is $\Gamma=\\{a,b\\}$.
As an example, consider the SDRTEs $C=C_{1}\cdot C_{2}$,
$C^{\prime}=C^{\prime}_{1}\cdot C^{\prime}_{2}$ and $D=C\odot C^{\prime}$ with
$C_{1}={(a^{*}\\#)}\triangleright{\varepsilon}$,
$C_{2}=({a}\triangleright{b})^{*}\cdot({\$}\triangleright{\varepsilon})$, and
$C^{\prime}_{1}=({a}\triangleright{a})^{*}\cdot({\\#}\triangleright{\varepsilon})$,
$C^{\prime}_{2}={(a^{*}\$)}\triangleright{\varepsilon}$. Then
$\mathsf{dom}(C_{1})=a^{*}\\#=\mathsf{dom}(C^{\prime}_{1})$,
$\mathsf{dom}(C_{2})=a^{*}\$=\mathsf{dom}(C^{\prime}_{2})$, and
$\mathsf{dom}(C)=a^{*}\\#a^{*}\$=\mathsf{dom}(C^{\prime})=\mathsf{dom}(D)$.
Further, $[\\![C_{1}]\\!](a^{m}\\#)=\varepsilon$,
$[\\![C_{2}]\\!](a^{n}\$)=b^{n}$, $[\\![C^{\prime}_{1}]\\!](a^{m}\\#)=a^{m}$,
and $[\\![C^{\prime}_{2}]\\!](a^{n}\$)=\varepsilon$. Also,
$[\\![D]\\!](a^{m}\\#a^{n}\$)=b^{n}a^{m}$. Notice that $\mathsf{dom}(D)$ is a
prefix code with synchronisation delay 1. Hence, we can define the SDRTE
$D^{*}$ which has domain the aperiodic language
$\mathsf{dom}(D^{*})=(a^{*}\\#a^{*}\$)^{*}$, and
$[\\![D^{*}]\\!](a^{2}\\#a^{3}\$a^{4}\\#a^{5}\$)=b^{3}a^{2}b^{5}a^{4}$. The
SDRTE $D^{\prime}=({\$}\triangleright{\varepsilon})\cdot D^{*}$ corresponds to
the aperiodic 2DFT $\mathcal{A}$ in Figure 1:
$[\\![\mathcal{A}]\\!]=[\\![D^{\prime}]\\!]$.
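These combinator semantics are easy to prototype. Below is a minimal Python sketch (the function names, the use of regular expressions for domains, and the greedy factorisation in `star` are our own illustration, not part of the formalism); it rebuilds the example $D^{\prime}=({\$}\triangleright{\varepsilon})\cdot D^{*}$ above.

```python
import re

def const(dom_re, v):
    """L ▷ v : the constant function with value v on words matching dom_re."""
    pat = re.compile(dom_re)
    return lambda w: v if pat.fullmatch(w) else None

def hadamard(c1, c2):
    """C1 ⊙ C2 : concatenate both outputs on the common domain."""
    return lambda w: (c1(w) + c2(w)
                      if c1(w) is not None and c2(w) is not None else None)

def cauchy(c1, c2):
    """Unambiguous Cauchy product C1 · C2 : defined iff exactly one split works."""
    def f(w):
        outs = [c1(w[:i]) + c2(w[i:]) for i in range(len(w) + 1)
                if c1(w[:i]) is not None and c2(w[i:]) is not None]
        return outs[0] if len(outs) == 1 else None
    return f

def star(c):
    """C* : greedy left-to-right factorisation; correct when dom(C) is a prefix code."""
    def f(w):
        out = ""
        while w:
            hits = [i for i in range(1, len(w) + 1) if c(w[:i]) is not None]
            if not hits:
                return None
            out, w = out + c(w[:hits[0]]), w[hits[0]:]  # prefix code: unique hit
        return out
    return f

# The running example: C, C', D = C ⊙ C' and D' = ($ ▷ ε) · D*
C = cauchy(const(r"a*#", ""), cauchy(star(const("a", "b")), const(r"\$", "")))
Cp = cauchy(cauchy(star(const("a", "a")), const("#", "")), const(r"a*\$", ""))
D = hadamard(C, Cp)
Dprime = cauchy(const(r"\$", ""), star(D))
```

On the example input, `Dprime` reproduces the output $b^{3}a^{2}b^{5}a^{4}$ computed above.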
$\textsf{SDRTE}\leftrightarrow$ Aperiodic 2DFT. Our _third_ and main
contribution solves the open problem by proving the effective equivalence
between aperiodic two way transducers and $\textsf{SDRTE}s$ over finite words:
###### Theorem 1.1.
(1) Given an SDRTE, we can effectively construct an equivalent aperiodic 2DFT.
(2) Given an aperiodic 2DFT, we can effectively construct an equivalent SDRTE.
The proof of (1) is by structural induction on the SDRTE. All cases except the
$k$-chained Kleene star are reasonably simple, and it is easy to see how to
construct the equivalent 2DFT. The case of the $k$-chained Kleene star is more
involved. We write $[L,C]^{k\star}$ as the composition of 3 aperiodic
functions $f_{1},f_{2},f_{3}$, where, (i) $f_{1}$ takes as input
$u_{1}u_{2}\cdots u_{n}\in L^{*}$ with $u_{i}\in L$ and produces as output
$\\#u_{1}\\#u_{2}\\#\cdots\\#u_{n}\\#$, (ii) $f_{2}$ takes
$\\#v_{1}\\#v_{2}\\#\cdots\\#v_{m}\\#$ with $v_{i}\in\Sigma^{*}$ as input, and
produces $\\#v_{1}\cdots v_{k}\\#v_{2}\cdots
v_{k+1}\\#\cdots\\#v_{m-k+1}\cdots v_{m}\\#$ as output, (iii) finally, $f_{3}$
takes $\\#w_{1}\\#w_{2}\\#\cdots\\#w_{\ell}\\#$ as input with
$w_{i}\in\Sigma^{*}$ and produces as output $f(w_{1})f(w_{2})\cdots
f(w_{\ell})$. We produce aperiodic 2DFTs for $f_{1},f_{2},f_{3}$, and compose
them, obtaining the required aperiodic 2DFT.
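The three-function decomposition can be sketched concretely. In the Python sketch below (the names `f1`, `f2`, `f3`, `split_greedy` and the string encoding are ours), `#` is the fresh separator and the unique $L$-factorisation exists because $L$ is a prefix code; we assume at least $k$ factors.

```python
def split_greedy(L):
    """Unique left-to-right factorisation over a finite prefix code L."""
    def split(w):
        out = []
        while w:
            u = next(u for u in L if w.startswith(u))  # prefix code: unique match
            out.append(u)
            w = w[len(u):]
        return out
    return split

def f1(w, split_L):
    """u1 u2 ... un in L*  ->  #u1#u2#...#un#"""
    return "#" + "#".join(split_L(w)) + "#"

def f2(s, k):
    """#v1#...#vm#  ->  #v1..vk#v2..v{k+1}#...#v{m-k+1}..vm#  (windows of k blocks)."""
    blocks = s.strip("#").split("#")
    windows = ["".join(blocks[i:i + k]) for i in range(len(blocks) - k + 1)]
    return "#" + "#".join(windows) + "#"

def f3(s, f):
    """Apply f to every #-delimited block and concatenate the outputs."""
    return "".join(f(b) for b in s.strip("#").split("#"))

def k_chained_star(L, f, k):
    """[L, C]^{k*} with C given as the function f on words."""
    split_L = split_greedy(L)
    return lambda w: f3(f2(f1(w, split_L), k), f)
```

For instance, with $L=\{ab,c\}$ (a prefix code), $f$ the reverse function and $k=2$, the input $abcab$ factors as $ab\cdot c\cdot ab$, the windows are $abc$ and $cab$, and the result is their reversals concatenated.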
The construction of SDRTE from an aperiodic 2DFT $\mathcal{A}$ is much more
involved, and is based on the transition monoid $\mathsf{TrM}$ of the 2DFT
$\mathcal{A}$. The translation of $\mathcal{A}$ to SDRTE is guided by an
unambiguous, stabilising, SD-regular expression induced by $\mathsf{TrM}$.
These expressions are obtained thanks to Theorem 3.6 applied to the canonical
morphism $\varphi\colon\Sigma^{*}\rightarrow\mathsf{TrM}$ where the transition
monoid $\mathsf{TrM}$ of $\mathcal{A}$ is aperiodic. This construction is
illustrated in detail via Examples 5.13, 5.18, 5.19 and 5.22.
Related Work. A natural operation on functions is that of composition. The
composition operation can be used in place of the chained-sum operator of [3],
and also in place of the unambiguous 2-chained iteration of [11], preserving
expressiveness. In yet another recent paper, [17] proposes simple functions
like copy, duplicate and reverse along with function composition to capture
regular word transductions.
A closely related paper to our work is [6], where first-order and regular list
functions were introduced. Using the basic functions reverse, append, co-
append, map, block on lists, and combining them with the function combinators
of disjoint union, map, pairing and composition, these were shown to be
equivalent (after a suitable encoding) to FO transductions à la Courcelle
(extendible to MSO transductions by adding to the basic functions, the prefix
multiplication operation on groups). [6] provides an equivalent
characterization (modulo an encoding) for FO transductions with basic list
functions and combinators.
Contrary to [6] where expressions crucially rely on function composition, we
focus on concatenation and iteration as first class combinators, in the spirit
of Kleene’s theorem and of Schützenberger’s characterisation AP=SD. We are
able to characterise 2DFTs with such SD-regular expressions without using
composition. Hence, our result is fully independent and complementary to the
work in [6]: both formalisms, SDRTEs and list functions are natural choices
for describing first order transductions. Our basic functions and combinators
are inspired from the back and forth traversal of a two way automaton, and the
restrictions on the usage of the Kleene star comes from the unambiguous,
stabilising nature of the expressions capturing the aperiodic domain of the
2DFT. We also study in Section 6 how composition may be used to simplify our
SDRTEs (Theorem 6.1). With composition, the $k$-chained Kleene star ($k>1$) is
no longer necessary, resulting in an equivalent formalism, namely, SDRTE where we
only use $1$-star. Yet another equivalent formalism is obtained by restricting
SDRTE to simple functions, unambiguous sum, Cauchy product and 1-star, but
adding the functions duplicate and reverse along with composition.
Structure of the paper. In Section 2, we introduce preliminary notions used
throughout the paper. In Section 3 we give a procedure to construct
complement-free expressions for aperiodic languages that suit our approach.
This is a generic result on languages, independent of two-way transducers.
Section 4 presents the combinators and the chain-star operators for our
characterization. The main theorem and the technical proofs, namely the
construction of SD-regular transducer expressions from an aperiodic two-way
transducer, are in Section 5.
[Two-state automaton diagrams; see Example 2.1.]
## 2 Preliminaries
### 2.1 Rational languages and monoids
We call a finite set $\Sigma$ an _alphabet_ and its elements _letters_. A
finite sequence of letters of $\Sigma$ is called a _word_ , and a set of words
is a _language_. The empty word is denoted $\varepsilon$, and we denote by
$\Sigma^{*}$ the set of all words over the alphabet $\Sigma$. More generally,
given any language $L\subseteq\Sigma^{*}$, we write $L^{*}$ for the _Kleene
star_ of $L$, i.e., the set of words which can be written as a (possibly
empty) sequence of words of $L$. Given a word $u$, we write $|u|$ for the
_length_ of $u$, i.e., its number of letters, and we denote by $u_{i}$ its
$i^{th}$ letter.
A _monoid_ $M$ is a set equipped with a binary associative law, usually
denoted $\cdot$ or omitted when clear from context, and a neutral element
$1_{M}$ for this law, meaning that for any $s\in M$, $1_{M}\cdot s=s\cdot
1_{M}=s$. The set of words $\Sigma^{*}$ can be seen as the free monoid
generated by $\Sigma$ using the concatenation of words as binary law. Given a
_morphism_ $\varphi:\Sigma^{*}\to M$, i.e., a function between monoids that
satisfies $\varphi(\varepsilon)=1_{M}$ and $\varphi(xy)=\varphi(x)\varphi(y)$
for any $x,y$, we say that $\varphi$ recognizes a language
$L\subseteq\Sigma^{*}$ if $M$ is finite and $L=\varphi^{-1}(P)$ for some
$P\subseteq M$. A monoid is called _aperiodic_ if there exists an integer $n$
such that for any element $s$ of $M$, $s^{n}=s^{n+1}$.
###### Example 2.1.
We define the monoids $\widetilde{U}_{n}$, for $n\geq 0$, as the set of
elements $\\{1,s_{1},\ldots,s_{n}\\}$, with $1$ being the neutral element, and
for any $1\leq i,j\leq n$, $s_{i}\cdot s_{j}=s_{i}$. Clearly,
$\widetilde{U}_{n}$ is aperiodic, actually idempotent, as $s_{i}\cdot
s_{i}=s_{i}$ for any $1\leq i\leq n$. For instance, the monoid
$\widetilde{U}_{2}$ is the transition monoid (defined below) of the automaton
below with $\varphi(a)=s_{1}$, $\varphi(b)=s_{2}$ and $\varphi(c)=1$.
_Rational_ languages are languages that can be described by rational
expressions, i.e., sets of words constructed from finite sets using the
operations of concatenation, union and Kleene star. It is well-known that
rational languages are equivalent to _regular_ languages, i.e., languages
accepted by finite automata, and to languages recognized by finite monoids
(and Monadic Second-order logic [7]). _Star-free_ rational expressions are
built from finite sets using the operations of concatenation, union and
complement (instead of Kleene star). They have the same expressive power as
finite aperiodic monoids [21] (as well as counter-free automata and first-
order logic [18]).
### 2.2 Two-way transducers
###### Definition 2.2 (Two-way transducer).
A _(deterministic) two-way transducer_ (2DFT) is a tuple
$\mathcal{A}=(Q,\Sigma,\Gamma,\delta,\gamma,q_{0},F)$ defined as follows:
* •
$Q$ is a finite set of _states_.
* •
$\Sigma$ and $\Gamma$ are the finite _input_ and _output alphabets_.
* •
$\delta:Q\times(\Sigma\uplus\\{{\vdash},{\dashv}\\})\to Q\times\\{-1,+1\\}$ is
the partial _transition function_. Contrary to one-way machines, the
transition function also outputs an integer, indicating the move of the
reading head. The alphabet is enriched with two new symbols ${\vdash}$ and
${\dashv}$, which are endmarkers that are added respectively at the beginning
and at the end of the input word, such that for all $q\in Q$, we have
${\delta(q,{\vdash})\in Q\times\\{+1\\}}$ (if defined), $\delta(q,{\dashv})\in
Q\times\\{-1\\}$ (if defined) and $\delta(q,{\dashv})$ is undefined for $q\in
F$.
* •
$\gamma:Q\times(\Sigma\uplus\\{{\vdash},{\dashv}\\})\to\Gamma^{*}$ is the
partial _production function_ with same domain as $\delta$.
* •
$q_{0}\in Q$ is the _initial state_.
* •
$F\subseteq Q$ is the set of final states.
A _configuration_ $c$ of $\mathcal{A}$ over an input word $w=w_{1}\cdots
w_{|w|}$ is simply a pair $(p,i)$ where $p\in Q$ is the current state and
$0\leq i\leq|w|+1$ is the position of the head on the input tape containing
${\vdash}w{\dashv}$. Two configurations $c=(p,i)$ and $c^{\prime}=(q,j)$ are
_successive_ if we have $\delta(p,w_{i})=(q,d)$ and $i+d=j$, with
$w_{0}={\vdash}$ and $w_{|w|+1}={\dashv}$. In this case, they produce an
output $v=\gamma(p,w_{i})$. Abusing notation, we will sometimes write
$\gamma(c)$ when the input word $w$ is clear. A run $\rho$ is a sequence of
successive configurations $c_{0}\cdots c_{n}$. The run $\rho$ is _initial_ if
$c_{0}=(q_{0},0)$ and is _final_ if $c_{n}=(q,|w|+1)$ for some $q\in F$. It is
_accepting_ if it is both initial and final.
The _output_ of a run $\rho=c_{0}\cdots c_{n}$ is the concatenation of the
output of the configurations, and will be denoted
$[\\![\rho]\\!]=\gamma(c_{0})\cdots\gamma(c_{n-1})$. Given a deterministic
two-way transducer $\mathcal{A}$ and an input word $w$, there is at most one
accepting run of $\mathcal{A}$ over ${\vdash}w{\dashv}$, which we will denote
$\rho(w)$. The output of $\mathcal{A}$ over $w$ is then
$[\\![\mathcal{A}]\\!](w)=[\\![\rho(w)]\\!]$. The _domain_ of $\mathcal{A}$ is
the set $\mathsf{dom}(\mathcal{A})$ of words $w$ such that there exists an
accepting run of $\mathcal{A}$ over $w$. Finally, the semantics of
$\mathcal{A}$ is the partial function
$[\\![\mathcal{A}]\\!]\colon\Sigma^{*}\to\Gamma^{*}$ defined on
$\mathsf{dom}(\mathcal{A})$ by $w\mapsto[\\![\mathcal{A}]\\!](w)$.
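Definition 2.2 and the run semantics above translate directly into a small simulator. The sketch below is our own illustration (the example transducer, which outputs the reverse of its input, is hand-made; it is not the transducer of Figure 1). We write `<` and `>` for the endmarkers $\vdash$ and $\dashv$.

```python
def run_2dft(delta, gamma, q0, F, w, fuel=10_000):
    """Simulate a deterministic two-way transducer on ⊢w⊣.
    delta: partial map (state, letter) -> (state, move); gamma: same domain -> output.
    Accept when delta is undefined at position |w|+1 (on ⊣) in a final state."""
    tape = "<" + w + ">"
    q, i, out = q0, 0, ""
    for _ in range(fuel):          # crude guard against non-terminating runs
        a = tape[i]
        if (q, a) not in delta:
            return out if (i == len(w) + 1 and q in F) else None
        out += gamma.get((q, a), "")
        q, d = delta[(q, a)]
        i += d
    return None

# A 3-state 2DFT over {a, b} computing the reverse of its input:
# p scans right to ⊣, r scans left emitting letters, s returns to ⊣ and accepts.
delta = {("p", "<"): ("p", +1), ("p", "a"): ("p", +1), ("p", "b"): ("p", +1),
         ("p", ">"): ("r", -1),
         ("r", "a"): ("r", -1), ("r", "b"): ("r", -1), ("r", "<"): ("s", +1),
         ("s", "a"): ("s", +1), ("s", "b"): ("s", +1)}
gamma = {k: "" for k in delta}
gamma[("r", "a")] = "a"
gamma[("r", "b")] = "b"
```

Note that the example respects the constraints of Definition 2.2: transitions on `<` move right, the transition on `>` from `p` moves left, and $\delta(s,{\dashv})$ is undefined for the final state `s`.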
Let $\rho=(p_{0},i_{0})\cdots(p_{n},i_{n})$ be a run over a nonempty word
$w\in\Sigma^{+}$ such that $1\leq i_{j}\leq|w|$ for all $0\leq j<n$. It is a
_left-right_ run if $i_{0}=1$ and $i_{n}=|w|+1$. If this is the case, we say
that $\rho$ is a $(\rightarrow,p_{0},p_{n})$-run. Similarly, it is a _left-
left_
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p_{0},p_{n})$-run
if $i_{0}=1$ and $i_{n}=0$. It is a _right-left_
$(\leftarrow,p_{0},p_{n})$-run if $i_{0}=|w|$ and $i_{n}=0$ and it is a
_right-right_
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p_{0},p_{n})$-run
if $i_{0}=|w|$ and $i_{n}=|w|+1$. Notice that if $|w|=1$, then left-right runs
and right-right runs coincide, also right-left runs and left-left runs
coincide.
###### Remark 2.3.
Given our semantics of two-way transducers, a run associates states to each
position, whereas the classical semantics of one-way automata keeps the states
between two positions. Then, if we consider a word $w=uv$ and a left-left run
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p,q)$
on $v$, we begin on the first position of $v$ in state $p$, and the state $q$
is reached at the end of the run on the last position of $u$. This allows for
easy sequential composition of partial runs when concatenating nonempty
words, as the end of a partial run is the start of the next one.
However, in order to keep our figures as readable as possible, we will
represent these states between words. A state $q$ between two words $u$ and
$v$ is to be placed on the first position of $v$ if it is the start of a run
going to the right, and on the last position of $u$ otherwise. For instance,
in Figure 4, state $q_{1}$ is on the first position of $u_{i+1}$ and state
$q_{3}$ is on the last position of $u_{i}$.
##### Transition monoid of a two-way automaton
Let $\mathcal{A}$ be a deterministic two-way automaton (2DFA) with set of
states $Q$. When computing the transition monoid of a two-way automaton, we
are interested in the behaviour of the partial runs, i.e., how these partial
runs can be concatenated. Thus we abstract a given $(d,p,q)$-run $\rho$ over a
word $w$ to a _step_
$(d,p,q)\in\\{\rightarrow,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\leftarrow\\}\times
Q^{2}$ and we say that $w$ realises the step $(d,p,q)$. The transition monoid
$\mathsf{TrM}$ of $\mathcal{A}$ is a subset of the powerset of steps:
$\mathsf{TrM}\subseteq\mathcal{P}(\\{\rightarrow,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\leftarrow\\}\times
Q^{2})$. The canonical surjective morphism
$\varphi\colon(\Sigma\uplus\\{{\vdash},{\dashv}\\})^{*}\to\mathsf{TrM}=\varphi((\Sigma\uplus\\{{\vdash},{\dashv}\\})^{*})$
is defined for a word $w\in(\Sigma\uplus\\{{\vdash},{\dashv}\\})^{*}$ as the
set of steps realised by $w$, i.e., $\varphi(w)=\\{(d,p,q)\mid\text{there is a
}(d,p,q)\text{-run on
}w\\}\subseteq\\{\rightarrow,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\leftarrow\\}\times
Q^{2}$. As an example, in Figure 1, we have
$\varphi(a\\#)=\\{(\rightarrow,q_{1},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{1},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{3},q_{3}),(\leftarrow,q_{3},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{4},q_{4}),(\rightarrow,q_{5},q_{6}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6})\\}\,.$
The unit of $\mathsf{TrM}$ is
$\mathbf{1}=\\{(\rightarrow,p,p),(\leftarrow,p,p)\mid p\in Q\\}$ and
$\varphi(\varepsilon)=\mathbf{1}$.
A 2DFA is _aperiodic_ if its transition monoid $\mathsf{TrM}$ is aperiodic.
Also, a 2DFT is aperiodic if its underlying input 2DFA is aperiodic.
When talking about a given step $(d,p,q)$ belonging to an element of
$\mathsf{TrM}$, we will sometimes forget $p$ and $q$ and talk about a
$d$-step, for
$d\in\\{\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\rightarrow,\leftarrow\\}$
if the states $p,q$ are clear from the context, or are immaterial for the
discussion. In this case we also refer to a step $(d,p,q)$ as a $d$-step
having $p$ as the starting state and $q$ as the final state.
## 3 Complement-free expressions for aperiodic languages
As the aim of the paper is to obtain rational expressions corresponding to
transformations computed by aperiodic two-way transducers, we cannot rely on
extending the classical (SF=AP) star-free characterization of aperiodic
languages, since the complement of a function is not a function. We solve this
problem by considering the SD=AP characterization of aperiodic languages,
namely prefix codes with bounded synchronisation delay, introduced by
Schützenberger [22].
A language $L$ is called a _code_ if for any word $u\in L^{*}$, there is a
unique decomposition $u=v_{1}\cdots v_{n}$ such that $v_{i}\in L$ for $1\leq
i\leq n$. For example, the language $W=\\{a,ab,ba,bba\\}$ is not a code: the
words $abba,aba\in W^{*}$ have decompositions $a\cdot bba=ab\cdot ba$ and
$a\cdot ba=ab\cdot a$ respectively. A _prefix code_ is a language $L$ such
that for any pair of words $u,v$, if $u,uv\in L$, then $v=\varepsilon$. $W$ is
not a prefix code, while $W_{1}=W\setminus\\{ab\\}$ and
$W_{2}=W\setminus\\{a\\}$ are prefix codes. Prefix codes play a particular
role in the sense that the unique decomposition can be obtained on the fly
while reading the word from left to right.
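Whether a finite language is a code can be decided with the classical Sardinas–Patterson algorithm, which tracks the sets of "dangling suffixes" left over by competing decompositions. A hedged Python sketch (ours), checked against $W$, $W_{1}$ and $W_{2}$ above:

```python
def residual(A, B):
    """A^{-1} B = { v : a·v ∈ B for some a ∈ A }."""
    return {b[len(a):] for a in A for b in B if b.startswith(a)}

def is_code(C):
    """Sardinas–Patterson: C is a code iff ε never appears as a dangling suffix."""
    C = set(C)
    S = residual(C, C) - {""}
    seen = set()
    while S:
        if "" in S:
            return False             # two decompositions meet: not a code
        if frozenset(S) in seen:
            return True              # cycle without ε: no ambiguity reachable
        seen.add(frozenset(S))
        S = residual(C, S) | residual(S, C)
    return True
```

On the examples above, $W=\{a,ab,ba,bba\}$ is rejected (witness $abba=a\cdot bba=ab\cdot ba$), while the prefix codes $W_{1}$ and $W_{2}$ are accepted.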
###### Definition 3.1.
Let $d$ be a positive integer. A _prefix_ code $C$ over an alphabet $\Sigma$
has a synchronisation delay $d$ (denoted $d$-SD) if for all
$u,v,w\in\Sigma^{*}$, $uvw\in C^{*}$ and $v\in C^{d}$ implies $uv\in C^{*}$
(hence also $w\in C^{*}$). An SD prefix code is a prefix code with a bounded
synchronisation delay.
As an example, consider the prefix code $C=\\{aa,ba\\}$ and the word
$ba(aa)^{d}\in C^{*}$. We have $ba(aa)^{d}=uvw$ with $u=b$, $v=(aa)^{d}\in
C^{d}$ and $w=a$. Since $uv\notin C^{*}$, the prefix code $C$ is not of
bounded synchronisation delay. Likewise, $C=\\{aa\\}$ is also not of bounded
synchronisation delay. On the other hand, the prefix code $C=\\{ba\\}$ is
1-SD.
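The $d$-SD condition of Definition 3.1 can be checked exhaustively on all words up to a given length. The brute-force sketch below (our own) finds a witness of the kind described above for $C=\{aa,ba\}$ and finds none for $C=\{ba\}$:

```python
from itertools import product

def in_star(C, w):
    """Membership in C* for a finite language C, by dynamic programming."""
    ok = [True] + [False] * len(w)
    for i in range(1, len(w) + 1):
        ok[i] = any(ok[i - len(u)] for u in C
                    if len(u) <= i and w[i - len(u):i] == u)
    return ok[len(w)]

def sd_violation(C, d, alphabet, max_len):
    """Search u, v, w with uvw ∈ C*, v ∈ C^d but uv ∉ C*; None if d-SD up to max_len."""
    Cd = {"".join(t) for t in product(C, repeat=d)}    # the set C^d
    for n in range(max_len + 1):
        for word in map("".join, product(alphabet, repeat=n)):
            if not in_star(C, word):
                continue
            for i in range(n + 1):
                for j in range(i, n + 1):
                    u, v = word[:i], word[i:j]
                    if v in Cd and not in_star(C, u + v):
                        return (u, v, word[j:])
    return None
```

Of course the search only refutes $d$-SD up to the chosen bound; it cannot prove unboundedness of the synchronisation delay.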
The syntax of regular expressions over the alphabet $\Sigma$ is given by the
grammar
$E::=\emptyset\mid\varepsilon\mid a\mid E\cup E\mid E\cdot E\mid E^{*}$
where $a\in\Sigma$. We say that an expression is $\varepsilon$-free (resp.
$\emptyset$-free) if it does not use $\varepsilon$ (resp. $\emptyset$) as
subexpressions. The semantics of a regular expression $E$ is a regular
language over $\Sigma^{*}$ denoted $\mathcal{L}(E)$.
An SD-regular expression is a regular expression where Kleene-stars are
restricted to SD prefix codes: If $E^{*}$ is a sub-expression then
$\mathcal{L}(E)$ is a prefix code with bounded synchronization delay. Thus,
the regular expression $(ba)^{*}$ is an SD-regular expression while $(aa)^{*}$
is not.
The relevance of SD-regular expressions comes from the fact that they are a
complement-free characterization of aperiodic languages.
###### Theorem 3.2.
[22] A language $L$ is recognized by an aperiodic monoid if, and only if,
there exists an SD-regular expression $E$ such that $L=\mathcal{L}(E)$.
Theorem 3.6 concretizes this result, and extends it to get more specific
expressions which are (i) _unambiguous_ , a property required for the regular
combinators expressing functions over words, and (ii) _stabilising_ , which is
a new notion introduced below that suits our need for characterizing runs of
aperiodic two-way transducers. Our proof technique follows the local divisor
technique, which was notably used by Diekert and Kufleitner to lift the result
of Schützenberger to infinite words [12, 13].
A regular expression $E$ is unambiguous if it satisfies the following:
1. 1.
for each subexpression $E_{1}\cup E_{2}$ we have
$\mathcal{L}(E_{1})\cap\mathcal{L}(E_{2})=\emptyset$,
2. 2.
for each subexpression $E_{1}\cdot E_{2}$, each word
$w\in\mathcal{L}(E_{1}\cdot E_{2})$ has a _unique_ factorisation $w=uv$ with
$u\in\mathcal{L}(E_{1})$ and $v\in\mathcal{L}(E_{2})$,
3. 3.
for each subexpression $E_{1}^{*}$, the language $\mathcal{L}(E_{1})$ is a
_code_ , i.e., each word $w\in\mathcal{L}(E_{1}^{*})$ has a _unique_
factorisation $w=v_{1}\cdots v_{n}$ with $v_{i}\in\mathcal{L}(E_{1})$ for
$1\leq i\leq n$.
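Condition 3, the code property, can be tested by brute force on a finite language: enumerate all factorisations of each word up to a length bound and check that none admits two. A small Python sketch (the alphabet $\\{a,b\\}$ and the length bound are assumptions of the example):

```python
from itertools import product

def factorisations(w, lang):
    """All ways of writing w as a concatenation of words from lang
    (a finite set of nonempty words)."""
    if w == "":
        return [[]]
    return [[c] + rest for c in lang if w.startswith(c)
            for rest in factorisations(w[len(c):], lang)]

def is_code(lang, alphabet="ab", max_len=8):
    """lang is a code if no word admits two factorisations; the search is
    bounded by max_len, so a True answer is only evidence."""
    words = ("".join(t) for m in range(1, max_len + 1)
             for t in product(alphabet, repeat=m))
    return all(len(factorisations(w, lang)) <= 1 for w in words)

print(is_code({"aa", "ba"}))      # True : a prefix code, hence a code
print(is_code({"a", "ab", "b"}))  # False: "ab" has two factorisations
```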
###### Definition 3.3.
Given an aperiodic monoid $M$ and $X\subseteq M$, we say that $X$ is
$n$-_stabilising_ if $xy=x$ for all $x\in X^{n}$ and $y\in X$. We say that $X$
is stabilising if it is $n$-stabilising for some $n\geq 1$.
Remark. Stabilisation generalizes aperiodicity in some sense. For
aperiodicity, we require $x^{n}=x^{n+1}$ for each element $x\in M$ and some
$n\in\mathbb{N}$, i.e., all _singleton_ subsets of $M$ should be stabilising.
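A minimal sketch of Definition 3.3 for a finite monoid given by its multiplication map; the encoding of $\widetilde{U}_{n}$ (with $1$ as $0$ and $s_{i}$ as $i$) is an assumption made for the example:

```python
from functools import reduce
from itertools import product

def is_n_stabilising(X, mul, n):
    """X is n-stabilising if x*y = x for every x in X^n and y in X."""
    Xn = {reduce(mul, t) for t in product(X, repeat=n)}
    return all(mul(x, y) == x for x in Xn for y in X)

# Assumed encoding of the monoid U~_n: 1 as 0 and s_i as i, with
# s_i s_j = s_i (the left factor absorbs) and 1 neutral.
def mul_U(a, b):
    return b if a == 0 else a

print(is_n_stabilising({1, 2, 3}, mul_U, 1))   # True: {s_1,s_2,s_3} is 1-stabilising
```

Note that a subset containing the neutral element $0$ is not 1-stabilising, since $0\cdot y=y$.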
###### Example 3.4.
Continuing Example 2.1, any subset
$X\subseteq\\{s_{1},\ldots,s_{n}\\}\subseteq\widetilde{U}_{n}$ is
1-stabilising.
As another example, consider the aperiodic 2DFT $\mathcal{A}$ in Figure 1, and
consider its transition monoid $\mathsf{TrM}$. Clearly, $\mathsf{TrM}$ is an
aperiodic monoid. Let $\varphi$ be the morphism from
$(\Sigma\uplus\\{{\vdash},{\dashv}\\})^{*}$ to $\mathsf{TrM}$. Consider the
subset $Z=\\{Y,Y^{2}\\}$ of $\mathsf{TrM}$ where $Y=\varphi(a\\#a\$)$:
$\displaystyle Y$
$\displaystyle=\\{(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{3},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{4},q_{4}),(\rightarrow,q_{5},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{0},q_{1}),(\leftarrow,q_{2},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{4},q_{5}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{6},q_{1})\\}$
$\displaystyle Y^{2}$
$\displaystyle=\\{(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{3},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{4},q_{4}),(\rightarrow,q_{5},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{0},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{2},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{4},q_{5}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{6},q_{1})\\}\,.$
It can be seen that $Y^{3}=Y^{2}$, hence $Z$ is 2-stabilising.
Let $\varphi\colon\Sigma^{*}\to M$ be a morphism. We say that a regular
expression $E$ is $\varphi$-_stabilising_ (or simply stabilising when
$\varphi$ is clear from the context) if for each subexpression $F^{*}$ of $E$,
the set $\varphi(\mathcal{L}(F))$ is stabilising.
Continuing Example 3.4, we can easily see that $\varphi(a)$ is idempotent and
we get $\varphi(a^{+}\\#a^{+}\$)=\\{Y\\}$. Since $Y^{3}=Y^{2}$, we deduce that
$(aa^{*}\\#aa^{*}\$)^{*}$ is a stabilising expression. Notice also that, by
definition, expressions without a Kleene-star are stabilising vacuously.
###### Example 3.5.
As a less trivial example illustrating stabilising expressions, consider
the 2DFT $\mathcal{A}$ in Figure 2, whose domain is the language
$b(a^{*}b)^{\geq 2}a^{*}$.
$s$$q_{0}$$q_{1}$$q_{2}$$q_{3}$$q_{4}$$q_{5}$$q_{6}$${\vdash}/\varepsilon,+1$$b/\varepsilon,+1$$a/\varepsilon,+1$$b/\varepsilon,+1$$a/\varepsilon,+1$$b/\varepsilon,+1$$a/a,+1$$b/b,-1$$a/\varepsilon,-1$$b/\varepsilon,-1$$a/\varepsilon,-1$$b/\varepsilon,-1$$a/\varepsilon,-1$$b/\varepsilon,+1$$a/a,+1$$b/b,+1$
Figure 2: For $u_{i}\in a^{*}b$, an aperiodic 2DFT $\cal{A}$ computing the
partial function $[\\![{\mathcal{A}}]\\!](bu_{1}u_{2}\cdots
u_{n}a^{k})=u_{3}u_{1}u_{4}u_{2}\cdots u_{n}u_{n-2}a^{k}$ if $n\geq 3$, and
$a^{k}$ if $n=2$. The domain is $b(a^{*}b)^{\geq 2}a^{*}$.
Consider $b(a^{*}b)^{\geq 3}\subseteq\mathsf{dom}(\mathcal{A})$. Note that
$a^{*}b$ is a prefix code with synchronisation delay 1. Let
$X=\varphi(a^{*}b)$, where $\varphi$ is the morphism from
$(\Sigma\uplus\\{{\vdash},{\dashv}\\})^{*}$ to $\mathsf{TrM}$. We will see
that $X$ stabilises.
* •
First, we have $X=\\{Y_{1},Y_{2}\\}$ where
$Y_{1}=\varphi(b)=\\{\\\
(\rightarrow,s,q_{0}),(\rightarrow,q_{0},q_{1}),(\rightarrow,q_{1},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{3},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{4},q_{5}),(\rightarrow,q_{5},q_{6}),(\rightarrow,q_{6},q_{0}),\\\
(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},s,q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{0},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{1},q_{2}),(\leftarrow,q_{2},q_{3}),(\leftarrow,q_{3},q_{4}),(\leftarrow,q_{4},q_{5}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{6},q_{0})\\}$
$Y_{2}=\varphi(a^{+}b)=\varphi(ab)=\\{\\\
(\rightarrow,q_{0},q_{1}),(\rightarrow,q_{1},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{3},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{4},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{5},q_{5}),(\rightarrow,q_{6},q_{0}),\\\
(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},s,q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{0},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{1},q_{2}),(\leftarrow,q_{2},q_{3}),(\leftarrow,q_{3},q_{4}),(\leftarrow,q_{4},q_{5}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{6},q_{0})\\}$
* •
Next, we can check that $X^{2}=\\{Y_{3},Y_{4}\\}$ where
$Y_{3}=Y_{1}Y_{1}=Y_{1}Y_{2}=\\{\\\
(\rightarrow,s,q_{1}),(\rightarrow,q_{0},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{3},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{4},q_{5}),(\rightarrow,q_{5},q_{0}),(\rightarrow,q_{6},q_{1}),\\\
(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},s,q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{0},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{1},q_{2}),(\leftarrow,q_{2},q_{4}),(\leftarrow,q_{3},q_{5}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{4},q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{6},q_{0})\\}$
$Y_{4}=Y_{2}Y_{1}=Y_{2}Y_{2}=\\{\\\
(\rightarrow,q_{0},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{3},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{4},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{5},q_{5}),(\rightarrow,q_{6},q_{1}),\\\
(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},s,q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{0},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{1},q_{2}),(\leftarrow,q_{2},q_{4}),(\leftarrow,q_{3},q_{5}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{4},q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{6},q_{0})\\}$
* •
Then, we have $X^{4}=\\{Z_{1},Z_{2}\\}$ where
$Z_{1}=Y_{3}Y_{3}=Y_{3}Y_{4}=\\{\\\
(\rightarrow,s,q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{0},q_{5}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{3},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{4},q_{5}),(\rightarrow,q_{5},q_{2}),(\rightarrow,q_{6},q_{2}),\\\
(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},s,q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{0},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{1},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{2},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{3},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{4},q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{6},q_{0})\\}$
$Z_{2}=Y_{4}Y_{3}=Y_{4}Y_{4}=\\{\\\
(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{0},q_{5}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{3},q_{3}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{4},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{5},q_{5}),(\rightarrow,q_{6},q_{2}),\\\
(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},s,q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{0},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{1},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{2},q_{2}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{3},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{4},q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{6},q_{0})\\}$
* •
Finally, we can easily check that $Z_{1}Y_{1}=Z_{1}Y_{2}=Z_{1}$ and
$Z_{2}Y_{1}=Z_{2}Y_{2}=Z_{2}$. Therefore, $X$ is 4-stabilising. Moreover,
${b(a^{*}b)^{\geq 3}}\subseteq\varphi^{-1}(Z_{1})$ and ${a^{+}b(a^{*}b)^{\geq
3}}\subseteq\varphi^{-1}(Z_{2})$.
Given a morphism $\varphi$ from $\Sigma^{*}$ to some aperiodic monoid $M$, our
goal is to build, for each language $\varphi^{-1}(s)$ with $s\in M$, an SD-
regular expression which is both _unambiguous_ and _stabilising_. The proof is
by induction on the monoid $M$ via the local divisor technique, similar to
Diekert and Kufleitner [12, 13, 14], and to Perrin and Pin [20, Chapter VIII,
Section 6.1], with the objective of obtaining stronger forms of SD-regular
expressions.
###### Theorem 3.6.
Given a morphism $\varphi$ from the free monoid $\Sigma^{*}$ to a finite
aperiodic monoid $M$, for each $s\in M$ there exists an unambiguous,
stabilising, SD-regular expression $E_{s}$ such that
$\mathcal{L}(E_{s})=\varphi^{-1}(s)$.
The proof of this theorem makes crucial use of _marked substitutions_ (see
[20]) that we define and study in the next section.
### 3.1 Marked substitutions
Let $A,B$ be finite alphabets. A map $\alpha\colon A\to\mathcal{P}(B^{*})$ is
called a _marked substitution_ if it satisfies the following two properties:
* •
There exists a partition $B=B_{1}\uplus B_{2}$ such that for all $a$ in $A$,
$\alpha(a)\subseteq B_{1}^{*}B_{2}$,
* •
For all $a_{1}$ and $a_{2}$ in $A$, $a_{1}\neq a_{2}$ implies
$\alpha(a_{1})\cap\alpha(a_{2})=\emptyset$.
A marked substitution $\alpha\colon A\to\mathcal{P}(B^{*})$ can be naturally
extended to words in $A^{*}$ using concatenation of languages, i.e., to a
morphism from the free monoid $A^{*}$ to
$(\mathcal{P}(B^{*}),\cdot,\\{\varepsilon\\})$. It is then further lifted to
languages $L\subseteq A^{*}$ by union: $\alpha(L)=\bigcup_{w\in L}\alpha(w)$.
###### Lemma 3.7 ([20] Chapter VIII, Proposition 6.2).
Let $\alpha\colon A\to\mathcal{P}(B^{*})$ be a marked substitution, and
$X\subseteq A^{+}$ be a prefix code with synchronisation delay $d$. Then
$Y=\alpha(X)\subseteq(B_{1}^{*}B_{2})^{+}$ is a prefix code with
synchronisation delay $d+1$.
###### Proof 3.8.
First, since $B_{1}$ and $B_{2}$ are disjoint, $B_{1}^{*}B_{2}\subseteq B^{*}$
is a prefix code. Hence, given a word
$w\in\alpha(A^{*})\subseteq(B_{1}^{*}B_{2})^{*}$, there exists a unique
decomposition $w=w_{1}\cdots w_{n}$ such that $w_{i}\in B_{1}^{*}B_{2}$ for
$1\leq i\leq n$. Now since images of different letters from $A$ are disjoint,
there exists at most one $a_{i}$ such that $w_{i}\in\alpha(a_{i})$. We deduce
that there is exactly one word $w^{\prime}\in A^{*}$ such that
$\alpha(w^{\prime})=w$. This word is denoted $\alpha^{-1}(w)$.
Now, we prove that $Y$ is a prefix code. Let
$v,w\in\alpha(A^{*})\subseteq(B_{1}^{*}B_{2})^{*}$ and assume that $v$ is a
prefix of $w$. Write $w=w_{1}\cdots w_{n}$ with $w_{i}\in B_{1}^{*}B_{2}$.
Since $v$ ends with a letter from $B_{2}$ we deduce that $v=w_{1}\cdots w_{i}$
for some $1\leq i\leq n$. Let $w^{\prime}=\alpha^{-1}(w)=a_{1}\cdots a_{n}$.
We have $v^{\prime}=\alpha^{-1}(v)=a_{1}\cdots a_{i}$. Now, if
$v,w\in\alpha(X)$ then we get $v^{\prime},w^{\prime}\in X$. Since $X$ is a
prefix code we get $i=n$. Hence $v=w$, proving that $Y$ is also a prefix code.
Finally, we prove that $Y$ has synchronisation delay $d+1$. Let $u,v,w$ in
$B^{*}$ such that $uvw\in Y^{*}$ and $v\in Y^{d+1}$. We need to prove that
$uv\in Y^{*}$. Since $v\in Y^{d+1}$, it can be written $v=v_{0}v_{1}\cdots
v_{d}$ with $v_{i}\in Y$ for $0\leq i\leq d$. Then, let us remark that
$\alpha(A)\subseteq B_{1}^{*}B_{2}$ is a prefix code with synchronisation
delay $1$. Since $uv_{0}\cdots v_{d}w\in Y^{*}\subseteq\alpha(A)^{*}$ and
$v_{0}\in Y\subseteq\alpha(A)^{+}$, we deduce that $uv_{0}$ belongs to
$\alpha(A)^{*}=\alpha(A^{*})$, as well as $v_{1}\cdots v_{d}$ and $w$. Let
$r=\alpha^{-1}(uv_{0})$, $s=\alpha^{-1}(v_{1}\cdots v_{d})$ and
$t=\alpha^{-1}(w)$. We have $rst=\alpha^{-1}(uvw)$ and since $uvw\in
Y^{*}=\alpha(X^{*})$, we deduce that $rst\in X^{*}$. Similarly, from
$v_{1}\cdots v_{d}\in Y^{d}=\alpha(X)^{d}$, we get $s\in X^{d}$. Now, $X$ has
synchronisation delay $d$. Therefore, $rs\in X^{*}$, meaning that
$uv=uv_{0}\cdots v_{d}\in\alpha(rs)\subseteq\alpha(X^{*})=Y^{*}$.
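The inverse $\alpha^{-1}$ used throughout the proof can be computed on the fly, exactly as for prefix codes: cut the word after each letter of $B_{2}$ and map every block back to its unique preimage letter. A Python sketch with an assumed toy substitution:

```python
def decode(w, alpha, B2):
    """Invert a marked substitution alpha: cut w after each letter of B2
    (each block lies in B1* B2, a prefix code), then map each block back
    to its unique preimage letter. Returns None if w is not in alpha(A*)."""
    inv = {v: a for a, vs in alpha.items() for v in vs}  # injective: images disjoint
    out, block = [], ""
    for ch in w:
        block += ch
        if ch in B2:
            if block not in inv:
                return None
            out.append(inv[block])
            block = ""
    return "".join(out) if block == "" else None

# Assumed toy substitution: A = {x, y}, B1 = {a}, B2 = {b},
# alpha(x) = {b}, alpha(y) = {ab, aab}.
alpha = {"x": {"b"}, "y": {"ab", "aab"}}
print(decode("abbaab", alpha, {"b"}))   # blocks ab|b|aab -> "yxy"
```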
Marked substitutions also preserve unambiguity of union, concatenation and
Kleene star.
###### Lemma 3.9.
Let $\alpha\colon A\to\mathcal{P}(B^{*})$ be a marked substitution and let
$L_{1},L_{2}\subseteq A^{*}$.
1. 1.
If the union $L_{1}\cup L_{2}$ is unambiguous then so is
$\alpha(L_{1})\cup\alpha(L_{2})$.
2. 2.
If the concatenation $L_{1}\cdot L_{2}$ is unambiguous then so is
$\alpha(L_{1})\cdot\alpha(L_{2})$.
3. 3.
If the Kleene star $L_{1}^{*}$ is unambiguous then so is $\alpha(L_{1})^{*}$.
###### Proof 3.10.
As stated in the previous proof, a marked substitution is one-to-one. We
denote by $\alpha^{-1}(w)$ the unique inverse of $w$, for $w$ in
$\alpha(A^{*})$.
If $w\in\alpha(L_{1})\cup\alpha(L_{2})$ then $\alpha^{-1}(w)\in L_{1}\cup
L_{2}$. This shows that unambiguity of union is preserved by $\alpha$.
Assume now that the concatenation $L_{1}\cdot L_{2}$ is unambiguous. Let
$w\in\alpha(L_{1})\cdot\alpha(L_{2})$ and consider its unique factorisation
$w=v_{1}b_{1}\cdots v_{n}b_{n}$ with $v_{i}\in B_{1}^{*}$ and $b_{i}\in
B_{2}$, and let $\alpha^{-1}(w)=a_{1}\cdots a_{n}$. Since
$\alpha(L_{1})\subseteq(B_{1}^{*}B_{2})^{*}$, a factorisation of
$w$ according to $\alpha(L_{1})\cdot\alpha(L_{2})$ must be of the form
$w=(v_{1}b_{1}\cdots v_{i}b_{i})\cdot(v_{i+1}b_{i+1}\cdots v_{n}b_{n})$ with
$a_{1}\cdots a_{i}\in L_{1}$ and $a_{i+1}\cdots a_{n}\in L_{2}$. From
unambiguity of the product $L_{1}\cdot L_{2}$ we deduce that such a
factorisation of $w$ is unique. Hence, the product
$\alpha(L_{1})\cdot\alpha(L_{2})$ is unambiguous.
We can prove similarly that $\alpha$ preserves unambiguity of Kleene stars.
We will be interested in marked substitutions that are defined by regular
expressions. A _regular marked substitution_ (RMS) is a map $\alpha\colon
A\to\mathsf{Reg}(B^{*})$ which assigns a regular expression $\alpha(a)$ over
$B$ to each letter $a\in A$ such that $\tilde{\alpha}\colon
A\to\mathcal{P}(B^{*})$ defined by $\tilde{\alpha}(a)=\mathcal{L}(\alpha(a))$
is a marked substitution.
Let $\alpha\colon A\to\mathsf{Reg}(B^{*})$ be an RMS and let $E$ be a regular
expression over $A$. We define $\alpha(E)=E[\alpha(a)/a,\forall a\in A]$ to be
the regular expression over $B$ obtained from $E$ by substituting each
occurrence of a letter $a\in A$ with the expression $\alpha(a)$. Notice that
$\alpha$ is compositional: $\alpha(E_{1}\cup
E_{2})=\alpha(E_{1})\cup\alpha(E_{2})$, $\alpha(E_{1}\cdot
E_{2})=\alpha(E_{1})\cdot\alpha(E_{2})$ and
$\alpha(E_{1}^{*})=\alpha(E_{1})^{*}$. In particular, we have
$\tilde{\alpha}(\mathcal{L}(E))=\mathcal{L}(\alpha(E))$.
Further, we say that a RMS $\alpha$ is _unambiguous_ (URMS) if $\alpha(a)$ is
unambiguous for each $a\in A$. Similarly, an RMS $\alpha$ is SD-regular
(SDRMS) if $\alpha(a)$ is an SD-regular expression for each $a\in A$. We
obtain:
###### Corollary 3.11.
Let $\alpha\colon A\to\mathsf{Reg}(B^{*})$ be an RMS and $E$ be a regular
expression over $A$.
1. 1.
If $\alpha$ and $E$ are SD-regular, then $\alpha(E)$ is SD-regular.
2. 2.
If $\alpha$ and $E$ are unambiguous, then $\alpha(E)$ is unambiguous.
###### Proof 3.12.
1\. Let $F^{*}$ be a subexpression of $\alpha(E)$. If $F^{*}$ is a
subexpression of some $\alpha(a)$ then, $\alpha(a)$ being SD-regular, we
obtain that $\mathcal{L}(F)$ is an SD prefix code. Otherwise, $F=\alpha(G)$
where $G^{*}$ is a subexpression of $E$. Since $E$ is SD-regular,
$\mathcal{L}(G)$ is an SD prefix code. By Lemma 3.7 we deduce that
$\mathcal{L}(F)=\tilde{\alpha}(\mathcal{L}(G))$ is an SD prefix code.
2\. First, we know that each $\alpha(a)$ is unambiguous. Next, a subexpression
of $\alpha(E)$ which is not a subexpression of some $\alpha(a)$ must be of the
form $\alpha(F)$ where $F$ is a subexpression of $E$. We conclude easily using
unambiguity of $E$ and Lemma 3.9.
### 3.2 Proof of Theorem 3.6
###### Proof 3.13.
We first consider the set of neutral letters, i.e., letters whose image is the
neutral element $1$ of $M$. To ease the proof, we first explain how to handle
them, and in the rest of the proof, focus on the case where we do not have
neutral letters.
Let $\varphi\colon\Sigma^{*}\to M$ be a morphism and
$\Sigma_{0}=\\{a\in\Sigma\mid\varphi(a)=1\\}$ be the set of neutral letters.
Further, let $\Sigma_{1}=\Sigma\setminus\Sigma_{0}$ and let
$\varphi_{1}\colon\Sigma_{1}^{*}\to M$ be the restriction of $\varphi$ to
$\Sigma_{1}^{*}$. Let $\alpha\colon\Sigma_{1}\to\mathsf{Reg}(\Sigma^{*})$ be
the regular marked substitution defined by $\alpha(a)=\Sigma_{0}^{*}a$.
Clearly, $\alpha$ is unambiguous and since $\Sigma_{0}$ is a 1-SD prefix code
we get that $\alpha$ is SD-regular. By Corollary 3.11 we deduce that $\alpha$
preserves unambiguity and also SD-expressions. It also preserves stabilising
expressions, i.e., if $E\in\mathsf{Reg}(\Sigma_{1}^{*})$ is
$\varphi_{1}$-stabilising then $\alpha(E)\in\mathsf{Reg}(\Sigma^{*})$ is
$\varphi$-stabilising. Indeed, $\varphi(\Sigma_{0})$ is 1-stabilising.
Further, if $G^{*}$ is a subexpression of $\alpha(E)$ different from
$\Sigma_{0}^{*}$ then there is a subexpression $F^{*}$ of $E$ such that
$G=\alpha(F)$. Hence, $\mathcal{L}(G)=\tilde{\alpha}(\mathcal{L}(F))$ and
$X=\varphi(\mathcal{L}(G))=\varphi_{1}(\mathcal{L}(F))$ is stabilising.
Now, suppose we have unambiguous, stabilising, SD-expressions $E_{s}$ for
$\varphi_{1}$ and each $s\in M$: $\mathcal{L}(E_{s})=\varphi_{1}^{-1}(s)$. We
deduce that $E^{\prime}_{s}=\alpha(E_{s})\cdot\Sigma_{0}^{*}$ is an
unambiguous, stabilising, SD-expression. Moreover, we have
$\mathcal{L}(E^{\prime}_{s})=\varphi^{-1}(s)$.
In the rest of the proof, we assume that the morphism $\varphi$ has no neutral
letters. The proof is by induction on the size of $M$, using a result from
Perrin and Pin [20, Chapter XI, Proposition 4.14] stating that if $\varphi$ is
a surjective morphism from $\Sigma^{*}$ to a finite aperiodic monoid $M$, then
one of the following cases holds:
1. 1.
$M$ is a cyclic monoid, meaning that $M$ is generated by a single element.
2. 2.
$M$ is isomorphic to $\widetilde{U}_{n}$ for some $n\geq 1$.
3. 3.
There is a partition $\Sigma=A\uplus B$ such that $\varphi(A^{*})$ and
$\varphi((A^{*}B)^{*})$ are proper submonoids of $M$.
We now treat the three cases above.
1. 1.
$M$ is a cyclic monoid. Then $M$ is of the form $\\{1,s,s^{2},\ldots,s^{n}\\}$
with $s^{i}s^{j}=s^{i+j}$ if $i+j\leq n$ and $s^{n}$ otherwise. Notice that
since we have no neutral letters, $\varphi^{-1}(1)=\\{\varepsilon\\}$. For
$1\leq i\leq n$, we denote by $\Sigma_{i}$ the set of letters whose image is
$s^{i}$. Now, we define inductively stabilising, unambiguous, SD-regular
expressions $E_{j}$ such that $\mathcal{L}(E_{j})=\varphi^{-1}(s^{j})$ for
$1\leq j\leq n$. Let $E_{1}=\Sigma_{1}$. Then, for $1<j<n$ we let
$E_{j}=\Sigma_{j}\cup\bigcup_{1\leq i<j}E_{i}\Sigma_{j-i}\,.$
Notice that expressions $E_{j}$ for $1\leq j<n$ are unambiguous and do not use
the Kleene star, hence are stabilising and SD. Finally, let
$E_{n}=\Big{(}\Sigma_{n}\cup\bigcup_{1\leq i,j<n\mid n\leq
i+j}E_{i}\Sigma_{j}\Big{)}\Sigma^{*}\,.$
Notice that the separation between the first part of $E_{n}$ and $\Sigma^{*}$
occurs at the first letter at which the image of the prefix reaches $s^{n}$. Then
as above, $E_{n}$ is unambiguous, and $\Sigma$ is an $n$-stabilising, 1-SD
prefix code. Moreover, $\mathcal{L}(E_{j})=\varphi^{-1}(s^{j})$ for $1\leq
j\leq n$, which concludes the proof in the case of cyclic monoids.
2. 2.
$M$ is isomorphic to $\widetilde{U}_{n}$ for some $n$. Then $M$ is of the form
$\\{1,s_{1},\ldots,s_{n}\\}$ where $s_{i}s_{j}=s_{i}$ for all $1\leq i,j\leq
n$. As above, since we have no neutral letters, we deduce that
$\varphi^{-1}(1)=\\{\varepsilon\\}$. We similarly define
$\Sigma_{i}=\varphi^{-1}(s_{i})\cap\Sigma$. Clearly,
$\varphi^{-1}(s_{i})=\mathcal{L}(\Sigma_{i}\Sigma^{*})$, and
$\Sigma_{i}\Sigma^{*}$ is an unambiguous, 1-stabilising, 1-SD regular
expression.
3. 3.
There is a partition $\Sigma=A\uplus B$ such that $M_{A}=\varphi(A^{*})$ and
$M_{B}=\varphi((A^{*}B)^{*})$ are proper submonoids of $M$. We set
$C=\varphi(A^{*}B)\subseteq M_{B}$ and view $C$ as a new alphabet. Note that
$C$ generates $M_{B}$. Finally, let $f\colon A^{*}\to M_{A}$ be the
restriction of $\varphi$ to $A$ and $g\colon C^{*}\to M_{B}$ be the evaluation
morphism defined by $g(c)=c$ for $c\in C$.
Then $f$ and $g$ are surjective morphisms to monoids $M_{A}$ and $M_{B}$ whose
size are smaller than $M$. We can thus use the induction hypothesis and get
unambiguous, stabilising, SD-expressions for elements of $M_{A}$ and $M_{B}$
with respect to $f$ and $g$ respectively. Given an element $s$ in $M_{A}$, we
get an unambiguous, $f$-stabilising, SD-expression $E_{s}$ over $A$ for
$f^{-1}(s)$. Similarly for an element $t$ in $M_{B}$, we get an unambiguous,
$g$-stabilising, SD-expression $F_{t}$ over the alphabet $C$ for $g^{-1}(t)$.
Notice that we have in particular expressions $E_{1}$ for $f^{-1}(1)$ and
$F_{1}$ for $g^{-1}(1)$.
To go back from expressions over $A,C$ to expressions over $\Sigma^{*}$, we
first define expressions $G_{c}$ for $\varphi^{-1}(c)\cap A^{*}B$, for each
$c\in C$:
$G_{c}=\bigcup\limits_{s\in M_{A},b\in B\mid s\varphi(b)=c}E_{s}\cdot b\,.$
Notice that $\alpha\colon C\to\mathsf{Reg}(\Sigma^{*})$ defined by
$\alpha(c)=G_{c}$ is a regular marked substitution with respect to the
partition $\Sigma=A\uplus B$. Indeed, $\alpha(c)\subseteq A^{*}B$ and
$\varphi(\alpha(c))=c$ which implies that the images are pairwise disjoint.
Moreover, each $G_{c}$ is unambiguous, $\varphi$-stabilising and SD-regular.
Let $t\in M_{B}$. By Corollary 3.11, $\alpha(F_{t})$ is unambiguous and SD-
regular. It is also $\varphi$-stabilising. Indeed, let $G^{*}$ be a
subexpression of $\alpha(F_{t})$. If $G^{*}$ is a subexpression of some
$\alpha(c)$ we get the result since $G_{c}$ is $\varphi$-stabilising.
Otherwise, there is a subexpression $F^{*}$ of $F_{t}$ such that
$G=\alpha(F)$. Hence, $\mathcal{L}(G)=\tilde{\alpha}(\mathcal{L}(F))$ and
$X=\varphi(\mathcal{L}(G))=g(\mathcal{L}(F))$ is stabilising.
For each $t\in M_{B}$, we have
$\mathcal{L}(\alpha(F_{t}))=\varphi^{-1}(t)\cap(A^{*}B)^{*}$. Finally, we need
to combine these elements to get an expression for any word over $\Sigma$.
Noticing that $\Sigma^{*}=(A^{*}B)^{*}A^{*}$, we define for $s\in M$
$E^{\prime}_{s}=\bigcup_{t\in M_{B},r\in M_{A},tr=s}\alpha(F_{t})\cdot
E_{r}\,.$
We can easily show that $E^{\prime}_{s}$ is unambiguous and satisfies
$\mathcal{L}(E^{\prime}_{s})=\varphi^{-1}(s)$. It is also clearly a
$\varphi$-stabilising SD-expression. Hence, we get the result.
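As an illustration of case 1 above, the image of a word under $\varphi$ in the cyclic monoid $\\{1,s,\ldots,s^{n}\\}$ can be computed by adding exponents and saturating at $n$; words whose capped sum reaches $n$ are exactly those in $\mathcal{L}(E_{n})$. A sketch with assumed letter weights:

```python
def phi(word, weight, n):
    """Image of a word in the cyclic monoid {1, s, ..., s^n}: exponents of
    the letters add up and saturate at n (aperiodicity: s^n * s = s^n)."""
    total = 0
    for ch in word:
        total = min(total + weight[ch], n)
    return total

# Assumed instance: phi(a) = s, phi(b) = s^2, with n = 3.
weight, n = {"a": 1, "b": 2}, 3
print(phi("a", weight, n), phi("aa", weight, n), phi("abba", weight, n))
# 1 2 3 : "abba" already saturates at s^3
```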
## 4 Combinator expressions
In this section, we present our combinators to compute first order definable
functions from finite words to finite words. The simpler combinators of
unambiguous concatenation and sum are similar to those in [3, 11], but we
differ in the equivalent of the Kleene-Star to match the aperiodicity that we
tackle.
### 4.1 Simple Functions
For each $v\in\Gamma^{*}$ we have a constant function $f_{v}$ defined by
$f_{v}(u)=v$ for all $u\in\Sigma^{*}$. Abusing notation, we simply denote the
constant function $f_{v}$ by $v$. We denote by
$\bot\colon\Sigma^{*}\to\Gamma^{*}$ the function with empty domain. These
atomic functions are the most simple ones.
### 4.2 Unambiguous sums
We will use two equivalent ways of defining a function by cases. First, the
if-then-else construct is given by $h={L}\,?\,{f}:{g}$ where
$f,g\colon\Sigma^{*}\to\Gamma^{*}$ are functions and $L\subseteq\Sigma^{*}$ is
a language. We have $\mathsf{dom}(h)=(\mathsf{dom}(f)\cap
L)\cup(\mathsf{dom}(g)\setminus L)$. Then, for $w\in\mathsf{dom}(h)$ we have
$h(w)=\begin{cases}f(w)&\text{if }w\in L\\\ g(w)&\text{otherwise.}\end{cases}$
We will often use this case definition with $L=\mathsf{dom}(f)$. To simplify
notations we define $f+g={\mathsf{dom}(f)}\,?\,{f}:{g}$. Note that
$\mathsf{dom}(f+g)=\mathsf{dom}(f)\cup\mathsf{dom}(g)$ but the sum is not
commutative and $g+f={\mathsf{dom}(g)}\,?\,{g}:{f}$. For
$w\in\mathsf{dom}(f)\cap\mathsf{dom}(g)$ we have $(f+g)(w)=f(w)$ and
$(g+f)(w)=g(w)$. When the domains of $f$ and $g$ are disjoint then $f+g$ and
$g+f$ are equivalent functions with domain
$\mathsf{dom}(f)\uplus\mathsf{dom}(g)$. In all cases the sum is associative
and the sum notation is particularly useful when applied to a sequence
$f_{1},\ldots,f_{n}$ of functions:
$\sum_{1\leq i\leq
n}f_{i}=f_{1}+\cdots+f_{n}={\mathsf{dom}(f_{1})}\,?\,{f_{1}}:{{\mathsf{dom}(f_{2})}\,?\,{f_{2}}:{\cdots{\mathsf{dom}(f_{n-1})}\,?\,{f_{n-1}}:{f_{n}}}}$
If the domains of the functions are pairwise disjoint then this sum is
associative and commutative.
Further, we let ${L}\triangleright{f}={L}\,?\,{f}:{\bot}$ the function $f$
restricted to $L\cap\mathsf{dom}(f)$. When $L=\\{w\\}$ is a singleton, we
simply write ${w}\triangleright{f}$.
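These case combinators are straightforward to realise on partial functions; a minimal sketch where `None` encodes "undefined" and $L$ is any set-like object supporting membership tests:

```python
def ite(L, f, g):
    """The if-then-else combinator L ? f : g. With None for 'undefined',
    dom(h) = (dom(f) /\\ L) u (dom(g) \\ L) holds automatically."""
    return lambda w: f(w) if w in L else g(w)

def plus(f, g):
    """f + g = dom(f) ? f : g: f takes priority on its own domain."""
    def h(w):
        fw = f(w)
        return fw if fw is not None else g(w)
    return h

def restrict(L, f):
    """L |> f = L ? f : bottom, the restriction of f to L /\\ dom(f)."""
    return lambda w: f(w) if w in L else None

# Assumed toy functions: f reverses words starting with 'a', g is constant.
f = lambda w: w[::-1] if w.startswith("a") else None
g = lambda w: "z"
h = plus(f, g)
print(h("ab"), h("b"))   # ba z
```

Note how `ite` matches the stated domain: for $w\in L\setminus\mathsf{dom}(f)$ the result is `None`, i.e. undefined.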
### 4.3 The Hadamard product
The Hadamard product of two functions $f,g\colon\Sigma^{*}\to\Gamma^{*}$ first
applies $f$ and then applies $g$. It is denoted by $f\odot g$. Its domain is
$\mathsf{dom}(f)\cap\mathsf{dom}(g)$ and $(f\odot g)(u)=f(u)g(u)$ for each
input word $u$ in its domain.
### 4.4 The unambiguous Cauchy product
Consider two functions $f,g\colon\Sigma^{*}\to\Gamma^{*}$. The unambiguous
Cauchy product of $f$ and $g$ is the function $f\cdot g$ whose domain is the
set of words $w\in\Sigma^{*}$ which admit a unique factorization $w=uv$ with
$u\in\mathsf{dom}(f)$ and $v\in\mathsf{dom}(g)$, and in this case, the
computed output is $f(u)g(v)$.
Contrary to the Hadamard product which reads its full input word $w$ twice,
first applying $f$ and then applying $g$, the Cauchy product _splits
unambiguously_ its input word $w$ as $uv$, applies $f$ on $u$ and then $g$ on
$v$.
Sometimes we may want to reverse the output and produce $g(v)f(u)$. This
_reversed_ Cauchy product can be defined using the Hadamard product as
$f\cdot_{r}g=(({\mathsf{dom}(f)}\triangleright{\varepsilon})\cdot
g)\odot(f\cdot({\mathsf{dom}(g)}\triangleright{\varepsilon}))$
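Both products can be sketched directly from their definitions, again with `None` for "undefined"; the Cauchy product enumerates all splits of its input and is defined only when exactly one succeeds:

```python
def hadamard(f, g):
    """(f (.) g)(u) = f(u) g(u); defined when both f(u) and g(u) are."""
    def h(u):
        fu, gu = f(u), g(u)
        return fu + gu if fu is not None and gu is not None else None
    return h

def cauchy(f, g, rev=False):
    """Unambiguous Cauchy product: defined iff w has exactly one split
    w = uv with u in dom(f) and v in dom(g); output f(u)g(v), or g(v)f(u)
    for the reversed product."""
    def h(w):
        splits = [(f(w[:i]), g(w[i:])) for i in range(len(w) + 1)
                  if f(w[:i]) is not None and g(w[i:]) is not None]
        if len(splits) != 1:
            return None
        fu, gv = splits[0]
        return gv + fu if rev else fu + gv
    return h

# Assumed toy functions: f defined on a*, g on b* (constant outputs).
f = lambda u: "F" if set(u) <= {"a"} else None
g = lambda v: "G" if set(v) <= {"b"} else None
print(cauchy(f, g)("aab"))   # unique split aa|b -> "FG"
print(cauchy(f, f)("a"))     # two splits (e|a and a|e) -> None
```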
### 4.5 The $k$-chained Kleene-star and its reverse
Let $L\subseteq\Sigma^{*}$ be a code, let $k\geq 1$ be a natural number and
let $f\colon\Sigma^{*}\to\Gamma^{*}$ be a partial function. We define the
$k$-chained Kleene-star $[L,f]^{k\star}\colon\Sigma^{*}\to\Gamma^{*}$ and its
reverse $[L,f]_{r}^{k\star}\colon\Sigma^{*}\to\Gamma^{*}$ as follows.
The domain of both these functions is contained in $L^{*}$, the set of words
having a (unique) factorization over the code $L$. Let $w\in L^{*}$ and
consider its unique factorization $w=u_{1}u_{2}\cdots u_{n}$ with $n\geq 0$
and $u_{i}\in L$ for all $1\leq i\leq n$. Then,
$w\in\mathsf{dom}([L,f]^{k\star})=\mathsf{dom}([L,f]_{r}^{k\star})$ if
$u_{i+1}\cdots u_{i+k}\in\mathsf{dom}(f)$ for all $0\leq i\leq n-k$ and in
this case we set
$\displaystyle[L,f]^{k\star}(w)$ $\displaystyle=f(u_{1}\cdots u_{k})\cdot
f(u_{2}\cdots u_{k+1})\cdots f(u_{n-k+1}\cdots u_{n})$
$\displaystyle[L,f]_{r}^{k\star}(w)$ $\displaystyle=f(u_{n-k+1}\cdots
u_{n})\cdots f(u_{2}\cdots u_{k+1})\cdot f(u_{1}\cdots u_{k})\,.$
Notice that when $n<k$, the right-hand side is an empty product and we get
$[L,f]^{k\star}(w)=\varepsilon$ and $[L,f]_{r}^{k\star}(w)=\varepsilon$. When
$k=1$ and $L=\mathsf{dom}(f)$ is a code then we simply write
$f^{\star}=[\mathsf{dom}(f),f]^{1\star}$ and
$f_{r}^{\star}=[\mathsf{dom}(f),f]_{r}^{1\star}$. We have
$\mathsf{dom}(f^{\star})=\mathsf{dom}(f_{r}^{\star})=L^{*}$.
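Since in our setting $L$ is a prefix code, the unique factorisation can be computed greedily (at each position at most one codeword is a prefix of the rest), after which the $k$-window products are immediate. A Python sketch with an assumed toy instance:

```python
def factorise(w, code):
    """Factorise w over a prefix code (a set of nonempty words): at each
    position at most one codeword is a prefix of the remainder.
    Returns the list of pieces, or None if w is not in L*."""
    pieces, i = [], 0
    while i < len(w):
        match = [c for c in code if w.startswith(c, i)]
        if len(match) != 1:
            return None
        pieces.append(match[0])
        i += len(match[0])
    return pieces

def kstar(code, f, k, reverse=False):
    """[L, f]^{k*} and its reverse: slide a window of k consecutive code
    pieces, apply f to each window, concatenate (empty product if n < k)."""
    def h(w):
        pieces = factorise(w, code)
        if pieces is None:
            return None
        outs = [f("".join(pieces[i:i + k]))
                for i in range(len(pieces) - k + 1)]
        if any(o is None for o in outs):
            return None
        return "".join(reversed(outs) if reverse else outs)
    return h

# Assumed toy instance: code {ab, c}, f reverses its input, k = 2.
h = kstar({"ab", "c"}, lambda u: u[::-1], 2)
print(h("abcab"))   # pieces ab|c|ab, windows "abc" and "cab" -> "cba"+"bac"
```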
The $k$-chained Kleene-star was also defined in [3, 11]; however, as we will
see below, we use it in a restricted way for aperiodic functions.
### 4.6 SD-regular transducer expressions (SDRTE)
SD-regular transducer expressions (SDRTEs) are obtained from classical regular
transducer expressions (RTEs) [3, 11] by restricting the $k$-chained Kleene-
star $[L,f]^{k\star}$ and its reverse $[L,f]_{r}^{k\star}$ to aperiodic
languages $L$ that are prefix codes of bounded synchronisation delay. The if-
then-else choice ${L}\,?\,{f}:{g}$ is also restricted to aperiodic languages
$L$. Hence, the syntax of SDRTEs is given by the grammar:
$C::=\bot\mid v\mid{L}\,?\,{C}:{C}\mid C\odot C\mid C\cdot
C\mid[L,C]^{k\star}\mid[L,C]_{r}^{k\star}$
where $v\in\Gamma^{*}$, and $L\subseteq\Sigma^{*}$ ranges over aperiodic
languages (or equivalently SD-regular expressions), which are also prefix
codes with bounded synchronisation delay for $[L,C]^{k\star}$ and
$[L,C]_{r}^{k\star}$.
The semantics of SDRTEs is defined inductively. $[\\![\bot]\\!]$ is the
function which is nowhere defined, $[\\![v]\\!]$ is the constant function such
that $[\\![v]\\!](u)=v$ for all $u\in\Sigma^{*}$, and the semantics of the
other combinators has been defined in the sections above.
As discussed in Section 4.2, we will use binary sums
$C+C^{\prime}={\mathsf{dom}(C)}\,?\,{C}:{C^{\prime}}$ and generalised sums
$\sum_{i}C_{i}$. Also, we use the abbreviation
${L}\triangleright{C}={L}\,?\,{C}:{\bot}$ and the reversed Cauchy product
$C\cdot_{r}C^{\prime}=(({\mathsf{dom}(C)}\triangleright{\varepsilon})\cdot
C^{\prime})\odot(C\cdot({\mathsf{dom}(C^{\prime})}\triangleright{\varepsilon}))$.
###### Lemma 4.1.
If $C$ is an SDRTE, then $\mathsf{dom}(C)$ is an aperiodic language.
###### Proof 4.2.
We prove the statement by induction on the syntax of SDRTEs. We recall that
aperiodic languages are closed under concatenation, union, intersection and
complement.
* •
$\mathsf{dom}(\bot)=\emptyset$ and $\mathsf{dom}(v)=\Sigma^{*}$ are aperiodic
languages.
* •
$C={L}\,?\,{C_{1}}:{C_{2}}$. By induction, $\mathsf{dom}(C_{1})$ and
$\mathsf{dom}(C_{2})$ are aperiodic. We have
$\mathsf{dom}(C)=(L\cap\mathsf{dom}(C_{1}))\cup(\mathsf{dom}(C_{2})\setminus
L)$, which is aperiodic thanks to the closure properties of aperiodic
languages.
* •
$C=C_{1}\odot C_{2}$. By induction, $\mathsf{dom}(C_{1})$ and
$\mathsf{dom}(C_{2})$ are aperiodic. We deduce that
$\mathsf{dom}(C)=\mathsf{dom}(C_{1})\cap\mathsf{dom}(C_{2})$ is aperiodic.
* •
$C=C_{1}\cdot C_{2}$. By induction, $L_{1}=\mathsf{dom}(C_{1})$ and
$L_{2}=\mathsf{dom}(C_{2})$ are aperiodic. We have
$\mathsf{dom}(C)\subseteq\mathsf{dom}(C_{1})\cdot\mathsf{dom}(C_{2})$.
However, $C$ is undefined on words having more than one decomposition. A word
which admits at least two decompositions can be written $uvw$ with
$v\neq\varepsilon$, $u,uv\in L_{1}$ and $vw,w\in L_{2}$. Let
$\varphi\colon\Sigma^{*}\to M$ be a morphism to a finite aperiodic monoid
recognising both $L_{1}$ and $L_{2}$. We have $L_{1}=\varphi^{-1}(P_{1})$ and
$L_{2}=\varphi^{-1}(P_{2})$ for some $P_{1},P_{2}\subseteq M$. The set $L_{3}$
of words having at least two decompositions is precisely
$L_{3}=\bigcup_{\begin{subarray}{c}r,s,t\mid r,rs\in P_{1}\wedge st,t\in
P_{2}\end{subarray}}\varphi^{-1}(r)(\varphi^{-1}(s)\setminus\\{\varepsilon\\})\varphi^{-1}(t)$
which is aperiodic. We deduce that $\mathsf{dom}(C)=(L_{1}\cdot
L_{2})\setminus L_{3}$ is aperiodic.
* •
$C=[L,C^{\prime}]^{k\star}$. By induction, $\mathsf{dom}(C^{\prime})$ is
aperiodic and by definition $L$ is an aperiodic SD prefix code. Hence $L^{*}$
is aperiodic. Notice that $\mathsf{dom}(C)\subseteq L^{*}$ but $C$ is
undefined on words $w=u_{1}\cdots u_{n}$ with $u_{i}\in L$ if there is a
factor $u_{i+1}\cdots u_{i+k}$ which is not in $\mathsf{dom}(C^{\prime})$. We
deduce that
$\mathsf{dom}(C)=L^{*}\setminus(L^{*}(L^{k}\setminus\mathsf{dom}(C^{\prime}))L^{*})$
which is aperiodic thanks to the closure properties given above.
* •
Notice that
$\mathsf{dom}([L,C^{\prime}]_{r}^{k\star})=\mathsf{dom}([L,C^{\prime}]^{k\star})$,
which is aperiodic as proved above.
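The Cauchy-product case of the proof can be checked on small finite instances with a brute-force Python sketch (names ours): it keeps exactly the words admitting a unique split, discarding the set $L_{3}$ of ambiguous words.

```python
def cauchy_dom(L1, L2, candidates):
    """dom(C1 . C2): words with exactly one split w = u v, u in L1, v in L2.
    Words with two or more splits (the set L3 of the proof) are rejected."""
    dom = set()
    for w in candidates:
        splits = [i for i in range(len(w) + 1)
                  if w[:i] in L1 and w[i:] in L2]
        if len(splits) == 1:
            dom.add(w)
    return dom
```

For example, with $L_{1}=\\{a,aa\\}$ and $L_{2}=\\{a,\varepsilon\\}$, the word $aa$ splits both as $a\cdot a$ and $aa\cdot\varepsilon$, so it is excluded from the domain.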
###### Proposition 4.3.
Given an SDRTE $C$ and a letter $a\in\Sigma$,
1. 1.
we can construct an SDRTE ${a}^{-1}{C}$ such that
$\mathsf{dom}({a}^{-1}{C})=a^{-1}\mathsf{dom}(C)$ and
$[\\![{a}^{-1}{C}]\\!](w)=[\\![C]\\!](aw)$ for all $w\in
a^{-1}\mathsf{dom}(C)$,
2. 2.
we can construct an SDRTE ${C}{a}^{-1}$ such that
$\mathsf{dom}({C}{a}^{-1})=\mathsf{dom}(C)a^{-1}$ and
$[\\![{C}{a}^{-1}]\\!](w)=[\\![C]\\!](wa)$ for all
$w\in\mathsf{dom}(C)a^{-1}$.
###### Proof 4.4.
We recall that aperiodic languages are closed under left and right quotients.
The proof is by structural induction on the given SDRTE $C$ over alphabet
$\Sigma$. We only construct below the SDRTEs for the left quotient. Formulas
for the right quotient can be obtained similarly. A point to note is that,
unlike the left quotient, the right quotient of a language might break its
prefix code property, which could be a problem if applied to a parsing
language $L$ used for $k$-star or its reverse. However, the quotient by a
letter only modifies the first or last copy of $L$, which can be decoupled so
that the remaining iterations are still performed with the same parsing
language $L$.
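For finite languages, the two quotients are straightforward; the following Python sketch (names ours) also illustrates why the right quotient can break the prefix-code property mentioned above: $\\{b,ab\\}$ is a prefix code, but its right quotient by $b$ is $\\{\varepsilon,a\\}$, which is not.

```python
def left_quotient(L, a):
    """a^{-1} L = { w : a w in L }."""
    return {w[1:] for w in L if w[:1] == a}

def right_quotient(L, a):
    """L a^{-1} = { w : w a in L }."""
    return {w[:-1] for w in L if w[-1:] == a}
```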
Basic cases.
We define ${a}^{-1}{\bot}=\bot$ and ${a}^{-1}{v}=v$ for $v\in\Gamma^{*}$.
If-then-else.
Let $C={L}\,?\,{C_{1}}:{C_{2}}$. We define
${a}^{-1}{C}={a^{-1}L}\,?\,{{a}^{-1}{C_{1}}}:{{a}^{-1}{C_{2}}}$.
Recall that $\mathsf{dom}(C)=(\mathsf{dom}(C_{1})\cap
L)\cup(\mathsf{dom}(C_{2})\setminus L)$. We deduce that
$\displaystyle a^{-1}\mathsf{dom}(C)$
$\displaystyle=((a^{-1}\mathsf{dom}(C_{1}))\cap(a^{-1}L))\cup((a^{-1}\mathsf{dom}(C_{2}))\setminus(a^{-1}L))$
$\displaystyle=\mathsf{dom}({a^{-1}L}\,?\,{{a}^{-1}{C_{1}}}:{{a}^{-1}{C_{2}}})$
Moreover, for $w\in a^{-1}\mathsf{dom}(C)$, we have
$\displaystyle[\\![C]\\!](aw)$
$\displaystyle=\begin{cases}[\\![C_{1}]\\!](aw)&\text{if }aw\in L\\\
[\\![C_{2}]\\!](aw)&\text{otherwise.}\end{cases}=\begin{cases}[\\![{a}^{-1}{C_{1}}]\\!](w)&\text{if
}w\in a^{-1}L\\\ [\\![{a}^{-1}{C_{2}}]\\!](w)&\text{otherwise.}\end{cases}$
$\displaystyle=[\\![{a^{-1}L}\,?\,{{a}^{-1}{C_{1}}}:{{a}^{-1}{C_{2}}}]\\!](w)$
Hadamard product.
Let $C=C_{1}\odot C_{2}$. We define
${a}^{-1}{C}={a}^{-1}{C_{1}}\odot{a}^{-1}{C_{2}}$.
Recall that $\mathsf{dom}(C)=\mathsf{dom}(C_{1})\cap\mathsf{dom}(C_{2})$. We
deduce that
$\displaystyle a^{-1}\mathsf{dom}(C)$
$\displaystyle=(a^{-1}\mathsf{dom}(C_{1}))\cap(a^{-1}\mathsf{dom}(C_{2}))=\mathsf{dom}({a}^{-1}{C_{1}}\odot{a}^{-1}{C_{2}})$
Moreover, for $w\in a^{-1}\mathsf{dom}(C)$, we have
$\displaystyle[\\![C]\\!](aw)$
$\displaystyle=[\\![C_{1}]\\!](aw)[\\![C_{2}]\\!](aw)=[\\![{a}^{-1}{C_{1}}]\\!](w)[\\![{a}^{-1}{C_{2}}]\\!](w)=[\\![{a}^{-1}{C_{1}}\odot{a}^{-1}{C_{2}}]\\!](w)$
Cauchy product.
Let $C=C_{1}\cdot C_{2}$. The SDRTE ${a}^{-1}{C}$ is the unambiguous sum of
two expressions depending on whether the letter $a$ is removed from $C_{1}$ or
from $C_{2}$. Hence, we let $C^{\prime}=({a}^{-1}{C_{1}})\cdot{C_{2}}$ and
$C^{\prime\prime}=({\varepsilon}\triangleright{[\\![C_{1}]\\!](\varepsilon)})\cdot({a}^{-1}{C_{2}})$.
Notice that $\mathsf{dom}(C^{\prime\prime})=\emptyset$ when
$\varepsilon\notin\mathsf{dom}(C_{1})$ (i.e.,
$[\\![C_{1}]\\!](\varepsilon)=\bot$). Now, we define
${a}^{-1}{C}={(a^{-1}\mathsf{dom}(C))}\triangleright{(C^{\prime}+C^{\prime\prime})}$.
Let $w\in a^{-1}\mathsf{dom}(C)$. Then $aw$ admits a unique factorization
$aw=uv$ with $u\in\mathsf{dom}(C_{1})$ and $v\in\mathsf{dom}(C_{2})$. There
are two exclusive cases.
If $u\neq\varepsilon$ then $u=au^{\prime}$ with $u^{\prime}\in
a^{-1}\mathsf{dom}(C_{1})$. The word $w$ admits a unique factorization
according to $\mathsf{dom}({a}^{-1}{C_{1}})\mathsf{dom}(C_{2})$ which is
$w=u^{\prime}v$. Hence, $w\in\mathsf{dom}(C^{\prime})$ and
$[\\![C]\\!](aw)=[\\![C_{1}]\\!](u)[\\![C_{2}]\\!](v)=[\\![{a}^{-1}{C_{1}}]\\!](u^{\prime})[\\![C_{2}]\\!](v)=[\\![C^{\prime}]\\!](w)\,.$
If $u=\varepsilon$ then $v=av^{\prime}$ and $v^{\prime}\in
a^{-1}\mathsf{dom}(C_{2})$. The word $w=v^{\prime}$ admits a unique
factorization according to
$\\{\varepsilon\\}\cdot\mathsf{dom}({a}^{-1}{C_{2}})$ which is
$w=\varepsilon\cdot w$. Hence, $w\in\mathsf{dom}(C^{\prime\prime})$ and
$[\\![C]\\!](aw)=[\\![C_{1}]\\!](\varepsilon)[\\![C_{2}]\\!](v)=[\\![C_{1}]\\!](\varepsilon)[\\![{a}^{-1}{C_{2}}]\\!](w)=[\\![C^{\prime\prime}]\\!](w)\,.$
We deduce that
$a^{-1}\mathsf{dom}(C)\subseteq\mathsf{dom}(C^{\prime})\cup\mathsf{dom}(C^{\prime\prime})=\mathsf{dom}(C^{\prime}+C^{\prime\prime})$
and $\mathsf{dom}(a^{-1}C)=a^{-1}\mathsf{dom}(C)$ as desired.
Finally, assume that
$w\in\mathsf{dom}(C^{\prime})\cap\mathsf{dom}(C^{\prime\prime})$. Then, $w$
admits two factorizations $w=u^{\prime}v=\varepsilon v^{\prime}$ with
$u^{\prime}\in\mathsf{dom}({a}^{-1}{C_{1}})$, $v\in\mathsf{dom}(C_{2})$,
$\varepsilon\in\mathsf{dom}(C_{1})$ and
$v^{\prime}\in\mathsf{dom}({a}^{-1}{C_{2}})$. We deduce that $aw$ admits two
distinct factorizations $aw=(au^{\prime})v=\varepsilon(av^{\prime})$ with
$au^{\prime},\varepsilon\in\mathsf{dom}(C_{1})$ and
$v,av^{\prime}\in\mathsf{dom}(C_{2})$. This is a contradiction with
$aw\in\mathsf{dom}(C)$.
We deduce that in both cases above, we have
$[\\![C]\\!](aw)=[\\![{(a^{-1}\mathsf{dom}(C))}\triangleright{(C^{\prime}+C^{\prime\prime})}]\\!](w)\,.$
$k$-star.
Let $L\subseteq\Sigma^{*}$ be an aperiodic prefix code with bounded
synchronisation delay and let $C$ be an SDRTE. Notice that, since $L$ is a code,
$\varepsilon\notin L$. Also, $a^{-1}\mathsf{dom}([L,C]^{k\star})\subseteq
a^{-1}L^{*}=(a^{-1}L)L^{*}$. Let $w\in a^{-1}L^{*}$. It admits a unique
factorization $w=u^{\prime}_{1}u_{2}\cdots u_{n}$ with
$u_{1}=au^{\prime}_{1}\in L$ and $u_{2},\ldots,u_{n}\in L$. The unique
factorization of $aw$ according to the code $L$ is $aw=u_{1}u_{2}\cdots
u_{n}$.
Now, by definition of the $k$-star, when $n<k$ we have
$[\\![[L,C]^{k\star}]\\!](aw)=\varepsilon$. Hence, we let
$C^{\prime}={\big{(}(a^{-1}L)\cdot
L^{<k-1}\big{)}}\triangleright{\varepsilon}$ so that in the case $n<k$ we get
$[\\![[L,C]^{k\star}]\\!](aw)=\varepsilon=[\\![C^{\prime}]\\!](w)\,.$
Next we assume that $n\geq k$. We define two SDRTEs:
$\displaystyle C^{\prime\prime}$ $\displaystyle=\big{(}{\big{(}(a^{-1}L)\cdot
L^{k-1}\big{)}}\triangleright{{a}^{-1}{C}}\big{)}\cdot({L^{*}}\triangleright{\varepsilon})$
$\displaystyle C^{\prime\prime\prime}$
$\displaystyle=({a^{-1}L}\triangleright{\varepsilon})\cdot[L,C]^{k\star}$
Notice that $L^{*}$ is aperiodic since $L$ is an aperiodic prefix code with
bounded synchronisation delay. Hence, $C^{\prime\prime}$ is indeed an SDRTE.
We get
$\displaystyle[\\![C^{\prime\prime}]\\!](w)$
$\displaystyle=[\\![{a}^{-1}{C}]\\!](u^{\prime}_{1}u_{2}\cdots
u_{k})=[\\![C]\\!](u_{1}u_{2}\cdots u_{k})$
$\displaystyle[\\![C^{\prime\prime\prime}]\\!](w)$
$\displaystyle=[\\![C]\\!](u_{2}\cdots u_{k+1})[\\![C]\\!](u_{3}\cdots
u_{k+2})\cdots[\\![C]\\!](u_{n-k+1}\cdots u_{n})$
$\displaystyle[\\![C^{\prime\prime}\odot C^{\prime\prime\prime}]\\!](w)$
$\displaystyle=[\\![[L,C]^{k\star}]\\!](aw)\,.$
Therefore, we define the SDRTE
${a}^{-1}{([L,C]^{k\star})}={(a^{-1}L^{*})}\triangleright{\big{(}C^{\prime}+(C^{\prime\prime}\odot
C^{\prime\prime\prime})\big{)}}\,.$
Notice that $\mathsf{dom}(C^{\prime})=a^{-1}L^{<k}$ and
$\mathsf{dom}(C^{\prime\prime}\odot C^{\prime\prime\prime})\subseteq
a^{-1}L^{\geq k}$ are disjoint.
Reverse $k$-star.
This case is similar. Let $L\subseteq\Sigma^{*}$ be an aperiodic prefix code
with bounded synchronisation delay and let $C$ be an SDRTE. We define
${a}^{-1}{([L,C]_{r}^{k\star})}={(a^{-1}L^{*})}\triangleright{\big{(}C^{\prime}+(C^{\prime\prime\prime\prime}\odot
C^{\prime\prime})\big{)}}$
where $C^{\prime},C^{\prime\prime}$ are as above and
$C^{\prime\prime\prime\prime}=({a^{-1}L}\triangleright{\varepsilon})\cdot[L,C]_{r}^{k\star}$.
###### Lemma 4.5.
Given an SDRTE $C$ over an alphabet $\Sigma$ and a sub-alphabet
$\Sigma^{\prime}\subseteq\Sigma$, we can construct an SDRTE $C^{\prime}$ _over
alphabet $\Sigma^{\prime}$_ such that
$\mathsf{dom}(C^{\prime})\subseteq\Sigma^{\prime*}$ and for any word $w$ in
$\Sigma^{\prime*}$, $[\\![C]\\!](w)=[\\![C^{\prime}]\\!](w)$.
###### Proof 4.6.
The proof itself is rather straightforward, and simply amounts to getting rid
of letters that do not appear in $\Sigma^{\prime}$. We first construct
$C^{\prime}$ by structural induction from $C$, and then prove that it is
indeed an SDRTE. Thus $C^{\prime}$ is defined as follows:
* •
if $C=\bot$ then $C^{\prime}=\bot$,
* •
if $C=v$ then $C^{\prime}=v$, with $\mathsf{dom}(v)=\Sigma^{\prime*}$ here
since $C^{\prime}$ is over $\Sigma^{\prime}$,
* •
if $C={L}\,?\,{C_{1}}:{C_{2}}$ then
$C^{\prime}={(L\cap\Sigma^{\prime*})}\,?\,{C^{\prime}_{1}}:{C^{\prime}_{2}}$,
* •
if $C=C_{1}\odot C_{2}$ then $C^{\prime}=C^{\prime}_{1}\odot C^{\prime}_{2}$,
* •
if $C=C_{1}\cdot C_{2}$ then $C^{\prime}=C^{\prime}_{1}\cdot C^{\prime}_{2}$,
* •
if $C=[L,C_{1}]^{k\star}$ then
$C^{\prime}=[L\cap\Sigma^{\prime*},C^{\prime}_{1}]^{k\star}$,
* •
if $C=[L,C_{1}]_{r}^{k\star}$ then
$C^{\prime}=[L\cap\Sigma^{\prime*},C^{\prime}_{1}]_{r}^{k\star}$.
To prove that $C^{\prime}$ is SD-regular, we construct, given an SD-expression
$E$ for $L$ over $\Sigma$, an SD-expression $E^{\prime}$ over
$\Sigma^{\prime}$ for $L\cap\Sigma^{\prime*}$. Again, the proof is an easy
structural induction:
* •
if $E=\emptyset$ then $E^{\prime}=\emptyset$,
* •
if $E=a\in\Sigma^{\prime}$ then $E^{\prime}=a$,
* •
if $E=a\in\Sigma\setminus\Sigma^{\prime}$ then $E^{\prime}=\emptyset$,
* •
if $E=E_{1}+E_{2}$ then $E^{\prime}=E^{\prime}_{1}+E^{\prime}_{2}$,
* •
if $E=E_{1}\cdot E_{2}$ then $E^{\prime}=E^{\prime}_{1}\cdot E^{\prime}_{2}$,
* •
if $E=E_{1}^{*}$ then $E^{\prime}=E_{1}^{\prime*}$.
We conclude by noting that being a prefix code with bounded synchronisation
delay is a property preserved by taking subsets; hence $E^{\prime}$ is an SD-
expression.
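The two inductions above can be sketched together in Python on a small expression datatype (tuples; the encoding is ours): letters outside $\Sigma^{\prime}$ become the empty expression, and everything else is rebuilt unchanged.

```python
def restrict(E, allowed):
    """Restrict an SD-expression to the sub-alphabet `allowed`.
    Expressions are tuples: ('empty',), ('sym', a),
    ('sum', E1, E2), ('cat', E1, E2), ('star', E1)."""
    tag = E[0]
    if tag == 'empty':
        return E
    if tag == 'sym':
        return E if E[1] in allowed else ('empty',)
    if tag in ('sum', 'cat'):
        return (tag, restrict(E[1], allowed), restrict(E[2], allowed))
    if tag == 'star':
        return ('star', restrict(E[1], allowed))
    raise ValueError(f"unknown tag {tag}")
```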
### 4.7 Can the 2-chained Kleene star suffice for all aperiodic functions?
It is known [11] that the 2-chained Kleene star can simulate the $k$-chained
Kleene-star for regular functions. However, we believe that, contrary to the
case of regular functions, the $k$-chained Kleene-star operator cannot be
simulated by the $2$-chained Kleene-star while preserving the aperiodicity of
the expression. The key idea is that, in order to simulate a $k$-chained
Kleene-star on a SD prefix code $L$ using a $2$-chained Kleene-star, one needs
to use $L^{\lceil k/2\rceil}$ as a parser. However, for any given prefix code
$L$, the language $L^{n}$ for $n>1$, while still a prefix code, is not of
bounded synchronisation delay (for the same reason that $\\{aa\\}$ is not:
for every $d$, taking $v=(aa)^{d}$, the word $ava$ belongs to $(aa)^{*}$ while
$av$ does not). Intuitively, parsing $L^{n}$ reduces to _counting_ factors of
$L$ _modulo $n$_, which is a classical example of non-aperiodicity.
As an example, consider the prefix code $L=(a+b)^{*}c$ which has
synchronisation delay $1$. Define a function $f$ with domain $L^{3}$ by
$f(u_{1}u_{2}u_{3})=u_{3}u_{1}$ when $u_{1},u_{2},u_{3}\in L$, which can be
written using combinators as
$\big{(}({L^{2}}\triangleright{\varepsilon})\cdot({L}\triangleright{id})\big{)}\odot\big{(}({L}\triangleright{id})\cdot({L^{2}}\triangleright{\varepsilon})\big{)}$.
The identity function $id$ can itself be written as
$({a}\triangleright{a}+{b}\triangleright{b}+{c}\triangleright{c})^{\star}$
(see also Figure 2, which depicts a simplification of the same function that
nevertheless remains inexpressible with the $2$-chained star). Then we
believe that the function $[L,f]^{3\star}$, which associates to a word
$u_{1}\cdots u_{n}\in L^{*}$ the word $u_{3}u_{1}u_{4}u_{2}\cdots
u_{n}u_{n-2}$ is not definable using only $2$-chained Kleene-stars. While not
a proof, the intuition behind this is that, in order to construct
$u_{i+1}u_{i-1}$, we need to highlight words from $L^{3}$. In order to do this
with a $2$-chained Kleene-star, it seems necessary to apply a chained star
with parser $L^{2}$, which is a prefix code but not of bounded synchronisation
delay. A similar argument would hold for any $[L,f]^{k\star}$, $k\geq 3$ with
a function $f(u_{1}u_{2}\cdots u_{k})=u_{k}u_{1}$.
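The conjectured function $[L,f]^{3\star}$ is easy to compute directly; the following Python sketch (names ours) implements it for $L=(a+b)^{*}c$ and $f(u_{1}u_{2}u_{3})=u_{3}u_{1}$.

```python
def shuffle3(w):
    """[L, f]^{3*} with f(u1 u2 u3) = u3 u1 over L = (a+b)*c:
    maps u1...un to u3u1 u4u2 ... un u_{n-2}."""
    us, cur = [], ""
    for ch in w:            # factorize over L: cut after each 'c'
        cur += ch
        if ch == "c":
            us.append(cur)
            cur = ""
    assert cur == "", "input is not in L*"
    return "".join(us[i + 2] + us[i] for i in range(len(us) - 2))
```

For instance, on the factors $ac,bc,c$ the output is $c\cdot ac$; the point of the discussion above is that no $2$-chained star over an aperiodic SD prefix code seems able to produce these overlapping triples.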
## 5 The Equivalence of SDRTE and Aperiodic 2DFT
In this section, we prove the main result of the paper, namely the equivalence
between SDRTE and aperiodic 2DFT stated in Theorem 1.1. The first direction,
given an SDRTE $C$, constructing an equivalent aperiodic 2DFT $\mathcal{A}$ is
given by Theorem 5.1, while Theorem 5.14 handles the converse.
### 5.1 Aperiodic 2DFTs for SD-regular transducer expressions
###### Theorem 5.1.
Given an SDRTE $C$, we can construct an equivalent aperiodic 2DFT
$\mathcal{A}$ with $[\\![C]\\!]=[\\![\mathcal{A}]\\!]$.
###### Proof 5.2.
We construct $\mathcal{A}$ by induction on the structure of the SDRTE $C$.
Where needed, we assume by induction that we have aperiodic transducers
$\mathcal{A}_{i}$ for the subexpressions $C_{i}$, $i\in\\{1,2\\}$. We also have
a deterministic and complete aperiodic automaton $A_{L}$ for any aperiodic
language $L$.
* •
$C=\bot$. Then $\mathcal{A}$ is a single state transducer with no final state
so that its domain is empty.
* •
$C=v$. Then $\mathcal{A}$ is a single state transducer which produces $v$ and
accepts any input word. Clearly, $\mathcal{A}$ is aperiodic.
* •
$C={L}\,?\,{C_{1}}:{C_{2}}$. The transducer $\mathcal{A}$ first reads its
input, simulating $A_{L}$. Upon reaching the end of the input word, it goes
back to the beginning of the word, and either executes $\mathcal{A}_{1}$ if
the word was accepted by $A_{L}$, or executes $\mathcal{A}_{2}$ otherwise.
Since every machine was aperiodic, so is $\mathcal{A}$.
* •
$C=C_{1}\odot C_{2}$. The transducer $\mathcal{A}$ does a first pass executing
$\mathcal{A}_{1}$, then resets to the beginning of the word and simulates
$\mathcal{A}_{2}$. Since both transducers are aperiodic, so is $\mathcal{A}$.
* •
$C=C_{1}\cdot C_{2}$. We express $\mathcal{A}$ as the composition of three
functions $f_{1},f_{2},f_{3}$, each aperiodic. Since aperiodic functions are
closed under composition, we get the result. The first function $f_{1}$
associates to each word $w\in\Sigma^{*}$ the word
$u_{1}\\#u_{2}\\#\cdots\\#u_{n}$, such that $w=u_{1}u_{2}\cdots u_{n}$ and for
any prefix $u$ of $w$, $u$ belongs to the domain of $C_{1}$ if, and only if,
$u=u_{1}\cdots u_{i}$ for some $1\leq i<n$. Notice that $u_{1}=\varepsilon$
iff $\varepsilon\in\mathsf{dom}(C_{1})$ and $u_{n}=\varepsilon$ iff
$w\in\mathsf{dom}(C_{1})$. The other $u_{i}$’s must be nonempty. The second
function $f_{2}$ takes as input a word in $(\Sigma\cup\\{\\#\\})^{*}$, reads
it from right to left, and suppresses all $\\#$ symbols except for the ones
whose corresponding suffix belongs to the domain of $C_{2}$. Then,
$f_{2}(f_{1}(w))$ contains exactly one $\\#$ symbol if and only if $w$ has a
unique factorisation $w=uv$ with $u\in\mathsf{dom}(C_{1})$ and
$v\in\mathsf{dom}(C_{2})$. In this case, $f_{2}(f_{1}(w))=u\\#v$.
Finally, the function $f_{3}$ has domain $\Sigma^{*}\\#\Sigma^{*}$ and first
executes $\mathcal{A}_{1}$ on the prefix of its input up to the $\\#$ symbol,
treating it as the right endmarker ${\dashv}$, and then executes
$\mathcal{A}_{2}$ on the second part, treating $\\#$ as the left endmarker
${\vdash}$.
The functions $f_{1}$ and $f_{2}$ can be realised by aperiodic transducers as
they only simulate automata for the aperiodic domains of $C_{1}$ and the
reverse of $C_{2}$ respectively, and the function $f_{3}$ executes
$\mathcal{A}_{1}$ and $\mathcal{A}_{2}$ one after the other, and hence is also
aperiodic.
* •
$C=[L,C_{1}]^{k\star}$ or $C=[L,C_{1}]_{r}^{k\star}$. Here
$L\subseteq\Sigma^{*}$ is an aperiodic language which is also a prefix code
with bounded synchronisation delay, and $k\geq 1$ is a natural number. Let
$f=[\\![C_{1}]\\!]\colon\Sigma^{*}\to\Gamma^{*}$ be the aperiodic function
defined by $C_{1}$. We write
$[L,f]^{k\star}={L^{<k}}\,?\,{({\Sigma^{*}}\triangleright{\varepsilon})}:{(f_{3}\circ
f_{2}\circ f_{1})}$, where $\varepsilon$ is the output produced when the input
has fewer than $k$ $L$-factors; otherwise the output is produced by the
composition of 3 aperiodic functions. As aperiodic functions are closed under
composition, this gives the result. The first one,
$f_{1}\colon\Sigma^{*}\to(\Sigma\cup\\{\\#\\})^{*}$ splits an input word $w\in
L^{*}$ according to the unique factorization $w=u_{1}u_{2}\cdots u_{n}$ with
$n\geq 0$ and $u_{i}\in L$ for all $1\leq i\leq n$ and inserts $\\#$ symbols:
$f_{1}(w)=\\#u_{1}\\#u_{2}\\#\cdots\\#u_{n}\\#$. The domain of $f_{1}$ is
$L^{*}$.
The second function $f_{2}$ constructs the sequence of $k$ factors. Its domain
is $\\#(\Sigma^{*}\\#)^{\geq k}$ and it is defined as follows, with
$u_{i}\in\Sigma^{*}$:
$f_{2}(\\#u_{1}\\#u_{2}\\#\cdots\\#u_{n}\\#)=\\#u_{1}u_{2}\cdots
u_{k}\\#u_{2}\cdots u_{k+1}\\#\cdots\\#u_{n-k+1}\cdots u_{n}\\#\,.$
Finally, the third function simply applies $f$ and erases the $\\#$ symbols:
$f_{3}(\\#v_{1}\\#v_{2}\\#\cdots\\#v_{m}\\#)=f(v_{1})f(v_{2})\cdots
f(v_{m})\,.$
In particular, $f_{3}(\\#)=\varepsilon$. We have
$\mathsf{dom}(f_{3})=\\#(\mathsf{dom}(f)\\#)^{*}$.
For the reverse iteration, we simply change the last function and use instead
$f_{4}(\\#v_{1}\\#v_{2}\\#\cdots\\#v_{m}\\#)=f(v_{m})\cdots
f(v_{2})f(v_{1})\,.$
Lemma 5.3 below proves that the functions $f_{i}$ for $i\leq 4$ are aperiodic.
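The middle and last stages of this decomposition can be sketched in Python (splitting on the separator; names ours), assuming no factor $u_{i}$ contains the symbol $\\#$:

```python
def f2(s, k):
    """#u1#u2#...#un#  ->  #(u1...uk)#(u2...u_{k+1})#...#(u_{n-k+1}...un)#,
    defined when there are at least k factors."""
    us = s.split("#")[1:-1]
    assert len(us) >= k, "input not in #(Sigma* #)^{>=k}"
    windows = ["".join(us[i:i + k]) for i in range(len(us) - k + 1)]
    return "#" + "#".join(windows) + "#"

def f3(s, f):
    """Apply f to every #-delimited factor and erase the # symbols."""
    return "".join(f(v) for v in s.split("#")[1:-1])
```

The reverse iteration $f_{4}$ would simply reverse the list of factors before concatenating the outputs.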
Figure 3: The transducer $T_{2}$ for $k=3$.
###### Lemma 5.3.
The functions $f_{1},f_{2},f_{3},f_{4}$ are realised by aperiodic 2DFTs.
###### Proof 5.4.
The function $f_{1}$. First, since $L$ is an aperiodic language which is a
prefix code with bounded synchronisation delay, $L^{*}$ is aperiodic. Let
$\mathcal{A}_{1}$ be an aperiodic deterministic automaton that recognizes
$L^{*}$. Let $w$ be a word in $L^{*}$ and $w=u_{1}\cdots u_{n}$ with $u_{i}\in
L$. Since $L$ is a code, this decomposition is unique. Notice that
$\varepsilon\notin L$. We claim that the run of $\mathcal{A}_{1}$ over $w$
reaches final states exactly at the end of each $u_{i}$. Should this hold,
then we can easily construct a (one-way) aperiodic transducer $T_{1}$
realising $f_{1}$ by simply simulating $\mathcal{A}_{1}$ and copying its
input, adding $\\#$ symbols each time $\mathcal{A}_{1}$ reaches a final state.
It remains to prove the claim. First, since $u_{1}\cdots u_{i}$ belongs to
$L^{*}$ for any $1\leq i\leq n$, $\mathcal{A}_{1}$ reaches a final state at the
end of each $u_{i}$. Conversely, suppose $\mathcal{A}_{1}$ reaches a
final state after reading some nonempty prefix $v$ of $w$. Then $v$ can be
written $u_{1}\cdots u_{i}u^{\prime}$ for some index $0\leq i<n$ and some
nonempty prefix $u^{\prime}$ of $u_{i+1}$. But since $\mathcal{A}_{1}$ reaches
a final state on $v$, we have $v\in L^{*}$. Hence, there is a unique
decomposition $v=v_{1}\cdots v_{m}$ with $v_{j}\in L$. Since $v=u_{1}\cdots
u_{i}u^{\prime}=v_{1}\cdots v_{m}$, either $u_{1}$ is a prefix of $v_{1}$ or
conversely. Since $L$ is a prefix code, and both $u_{1}$ and $v_{1}$ belong to
$L$, we obtain $u_{1}=v_{1}$. By induction, we get that $u_{j}=v_{j}$ for
$j\leq i$. Now, $u^{\prime}=v_{i+1}\cdots v_{m}$ is a nonempty prefix of
$u_{i+1}$. Using again that $L$ is a prefix code, we get $m=i+1$ and
$u^{\prime}=v_{i+1}=u_{i+1}$, which concludes the proof of the claim.
The function $f_{2}$. The domain of $f_{2}$ is the language
$K=\\#(\Sigma^{*}\\#)^{\geq k}$. We construct an aperiodic 2DFT $T_{2}$ for
$f_{2}$ (see Figure 3 for $T_{2}$ where $k=3$). Let
$T_{2}=(\\{-k,-k+1,\ldots,0,\ldots,k-1,k\\}\cup\\{end\\},\Sigma\cup\\{\\#\\},\Sigma\cup\\{\\#\\},\delta_{2},\gamma_{2},0,\\{end\\})$
be the 2DFT realising $f_{2}$. The transition function $\delta_{2}$ is defined
as:
* •
$\delta_{2}(0,{\vdash})=(0,+1)$,
* •
$\delta_{2}(i,a)=(i,+1)$ for $0<i\leq k$ and $a\in\Sigma$,
* •
$\delta_{2}(i,a)=(i,-1)$ for $-k<i<0$ and $a\in\Sigma$,
* •
$\delta_{2}(end,a)=\delta_{2}(end,\\#)=(-k,-1)$,
* •
$\delta_{2}(i,\\#)=(i+1,+1)$ for $0\leq i<k$,
* •
$\delta_{2}(k,\\#)=(end,+1)$,
* •
$\delta_{2}(i,\\#)=(i+1,-1)$ for $-k\leq i<-1$,
* •
$\delta_{2}(-1,\\#)=(1,+1)$.
The production function $\gamma_{2}$ is then simply $\gamma_{2}(i,a)=a$ for
$i>0$, $\gamma_{2}(i,\\#)=\\#$ for $i=0$ and $i=k$, and is set to
$\varepsilon$ for all other transitions.
The transducer $T_{2}$ works as follows: it reads forward, in the strictly
positive states, a factor of the input containing $k$ $\\#$ symbols, copying
it to the output. Upon reaching the $k$-th $\\#$ symbol, it goes to state
$end$ to check whether it was the last $\\#$ symbol; if not, it reads back the
last $k-1$ $\\#$ symbols and starts again.
Let us prove the aperiodicity of $T_{2}$. First, notice that the
$\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}}$ and
$\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}$
relations are always aperiodic, for any finite 2DFT. This is due to the fact
that if a
$\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}}$
step exists in some $u\neq\varepsilon$, it also appears in $uv$. So for
$(v^{n})_{n>0}$, the
$\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}}$ and
$\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}$
relations are monotone, and since we consider finite state machines, they
eventually stabilize. So we turn to traversal steps. These traversal steps
only depend on the number of $\\#$ symbols in the word, as well as the
starting and ending symbols. In particular, if a word $v$ has $k+1$ or more
$\\#$ symbols, the only traversals it realises are in
$\\{(\rightarrow,0,k),(\rightarrow,0,end),(\rightarrow,1,k),(\rightarrow,1,end)\\}$:
starting from $0$ is possible only if $v$ starts with $\\#$, and the target
state is $end$ if the last letter of $v$ is $\\#$, otherwise it is $k$.
both $(\rightarrow,0,end)$ and $(\rightarrow,1,end)$ are possible if
$v\in\\#(\Sigma^{*}\\#)^{\geq k}$. Similarly, both $(\rightarrow,0,k)$ and
$(\rightarrow,1,k)$ are possible if $v\in\\#(\Sigma^{*}\\#)^{\geq
k}\Sigma^{+}$. Then given any word $v\in(\Sigma\cup\\{\\#\\})^{+}$, both
$v^{k+1}$ and $v^{k+2}$ have either no $\\#$ symbol, or at least $k+1$ $\\#$
symbols; further they have the same starting and ending letters. Thus they
realise the same steps.
The function $f_{3}$. The goal of $f_{3}$ is to iteratively simulate $f$ on
each factor appearing between $\\#$ symbols. To this end, $T_{3}$ is defined
as the transducer $T$ realising $f$, with the exception that it reads the
$\\#$ symbols as endmarkers, and upon reaching a final state while reading a
$\\#$ symbol, it first checks whether the next symbol is ${\dashv}$, in which
case it ends the run, and otherwise simulates the move of $T$ reading
${\vdash}$ from the initial state. Note that $\\#$ being used for both endmarkers could generate
some non-determinism, however this can be avoided as the left endmarker can
only be reached while moving to the left, and symmetrically for the right
endmarker. Then we solve non-determinism by duplicating the problematic states
$q$ to states $q_{\ell}$ (where $\\#$ is seen as the left endmarker) and
$q_{r}$ (where $\\#$ is seen as the right endmarker), which can only be reached
while moving to the left or the right respectively.
We now turn to the aperiodicity of $T_{3}$. If the input word $v$ does not
contain any $\\#$ symbol, then the
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\rightarrow,\leftarrow)$-runs
of $v^{n}$ are the same as the ones in $T$, and since $T$ is aperiodic then we
get $\varphi(v^{n})=\varphi(v^{n+1})$ for some $n$, where $\varphi$ is the
syntactic morphism of $T$.
Otherwise, let us remark that, by design, once the reading head has gone to
the right of a given $\\#$ symbol, it never goes back to its left; moreover,
the behaviour of $T_{3}$ when crossing a $\\#$ symbol from left to right is
always the same, since it simulates the initial transition of $T$. So given a
word $v$
with at least one $\\#$ symbol, let $u_{1}$ and $u_{2}$ be the prefix and
suffix of $v$ up to the first and from the last $\\#$ respectively, i.e.,
$v=u_{1}\\#w_{1}\\#\cdots\\#w_{m}\\#u_{2}$ with $m\geq 0$ and
$u_{1},u_{2},w_{1},\ldots,w_{m}\in\Sigma^{*}$. Then there exists no
$\leftarrow$ traversal of $v^{n}$ for $n\geq 2$ since the reading head cannot
move from right to left of a $\\#$ symbol. The $\rightarrow$ traversals of
$v^{n}$, for $n\geq 2$, exist if and only if $u_{2}u_{1},w_{1},\ldots,w_{m}$
belong to the domain of $T$, and consist of all $(\rightarrow,p,q)$, where
$\varphi(u_{1})$ contains $(\rightarrow,p,f)$ for some final state $f$, and
$\varphi(u_{2})$ contains $(\rightarrow,\iota,q)$ where $\iota$ is the initial
state. These traversals are then the same for $v^{2}$ and $v^{3}$, which
concludes the proof of aperiodicity of $T_{3}$.
The function $f_{4}$. The transducer $T_{4}$ realising $f_{4}$ is similar to
$T_{3}$. The main difference is that it starts by reaching the end of the
word, then goes back to the previous $\\#$ symbol to simulate $T$. On reaching
the end of the run in $T$ (in a final state of $T$ when reading $\\#$) it
treats $\\#$ as ${\dashv}$ and then enters into a special state which moves
the reading head to the left, till the time it has finished reading two $\\#$
symbols, while outputting $\varepsilon$ all along. When it reads the second
$\\#$, it moves right entering the state $T$ would, on reading ${\vdash}$ from
its initial state, and continues simulating $T$. This goes on until it reaches
the start symbol ${\vdash}$, and then it goes to the final state of $T_{4}$
that only moves to the right outputting $\varepsilon$ all along until the end
of the input to ${\dashv}$.
The arguments for the aperiodicity of $T_{4}$ are similar to the ones for
$T_{3}$.
### 5.2 SD-regular transducer expressions for aperiodic 2DFTs
In this section, we show that the runs of an aperiodic 2DFT have a
“stabilising” property. This property crucially distinguishes aperiodic 2DFTs
from non-aperiodic ones, and we use it in our proof to obtain SDRTEs from
aperiodic 2DFTs. In the remainder of this section, we fix an aperiodic 2DFT
$\mathcal{A}=(Q,\Sigma,\Gamma,\delta,\gamma,q_{0},F)$. Let
$\varphi\colon(\Sigma\uplus\\{{\vdash},{\dashv}\\})^{*}\to\mathsf{TrM}$ be the
canonical surjective morphism to the transition monoid of $\mathcal{A}$.
#### 5.2.1 Stabilising runs in two-way automata
Consider a _code_ $L\subseteq\Sigma^{*}$ such that $X=\varphi(L)$ is
$k$-stabilising for some $k>0$. We will see that a run of $\mathcal{A}$ over a
word $w\in L^{*}$ has some nice properties. Intuitively, if it moves forward
through $k$ factors from $L$ then it never moves backward through more than
$k$ factors.
More precisely, let $w=u_{1}u_{2}\cdots u_{n}$ be the unique factorisation of
$w\in L^{*}$ with $u_{i}\in L$ for $1\leq i\leq n$. We assume that $n\geq k$.
We start with the easiest fact.
###### Lemma 5.5.
If
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p,q)\in\varphi(w)$
then the run of $\mathcal{A}$ over $w$ starting on the left in state $p$ only
visits the first $k$ factors $u_{1}\cdots u_{k}$ of $w$.
###### Proof 5.6.
Since $X$ is $k$-stabilising, we have $\varphi(w)=\varphi(u_{1}\cdots u_{k})$.
Hence,
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p,q)\in\varphi(u_{1}\cdots
u_{k})$ and the result follows since $\mathcal{A}$ is deterministic.
Notice that the right-right
($\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}$)
runs of $\mathcal{A}$ over $w$ need not visit only the last $k$ factors (see
Lemma 5.11 below). This is due to the fact that _stabilising_ is not a
symmetric notion.
Next, we consider the left-right runs of $\mathcal{A}$ over $w$.
###### Lemma 5.7.
Assume that $(\rightarrow,p,q)\in\varphi(w)$. Then the run $\rho$ of
$\mathcal{A}$ over $w$ starting on the left in state $p$ has the following
property, which we call $k$-forward-progressing: for each $1\leq i<n-k$, after
reaching the suffix $u_{i+k+1}\cdots u_{n}$ of $w$, the run $\rho$ will never
visit again the prefix $u_{1}\cdots u_{i}$. See Figure 4 for a non-example and
Figure 10 for an example.
###### Proof 5.8.
Towards a contradiction, assume that for some $1\leq i<n-k$, the run $\rho$
visits $u_{1}\cdots u_{i}$ after visiting $u_{i+k+1}\cdots u_{n}$ (see Figure
4). Then, there exists a subrun $\rho^{\prime}$ of $\rho$ making some
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{3})$-step
on $u_{i+1}\cdots u_{n}$ and visiting $u_{i+k+1}$ (in Figure 4 we have
$\rho^{\prime}=\rho_{2}\rho_{3}$). Hence
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{3})\in\varphi(u_{i+1}\cdots
u_{n})$ and by Lemma 5.5 we deduce that $\rho^{\prime}$ visits $u_{i+1}\cdots
u_{i+k}$ only, a contradiction.
Figure 4: A left-right run which is not $k$-forward-progressing
###### Lemma 5.9.
Assume that $(\leftarrow,p,q)\in\varphi(w)$. Then the run $\rho$ of
$\mathcal{A}$ over $w$ starting on the right in state $p$ has the following
property, which we call $k$-backward-progressing: for each $1\leq i<n-k$, after
reaching the prefix $u_{1}\cdots u_{i}$ of $w$, the run $\rho$ will never
visit again the suffix $u_{i+k+1}\cdots u_{n}$.
###### Proof 5.10.
This lemma is a consequence of Lemma 5.5. Indeed, consider any part of $\rho$
that visits $u_{i+1}$ again (in some state $q_{1}$) after visiting $u_{i}$,
for some $1\leq i<n-k$. As $\rho$ is a $\leftarrow$ run, it will later cross
from $u_{i+1}$ to $u_{i}$ (reaching some state $q_{3}$). This part of $\rho$
is then a
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{3})$-run
on $u_{i+1}\cdots u_{n}$. By Lemma 5.5, it does not visit
$u_{i+k+1}\cdots u_{n}$, which concludes the proof (see Figure 5 for a
non-example).
Figure 5: A right-left run which is not $k$-backward-progressing
###### Lemma 5.11.
Assume that
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p,q)\in\varphi(w)$
and let $\rho$ be the run of $\mathcal{A}$ over $w$ starting on the right in
state $p$. Then, either $\rho$ visits only the last $k$ factors
$u_{n-k+1}\cdots u_{n}$, or for some $1\leq i\leq n-k$ the run $\rho$ is the
concatenation $\rho_{1}\rho_{2}\rho_{3}$ of a $k$-backward-progressing run
$\rho_{1}$ over $u_{i+1}\cdots u_{n}$ followed by a run $\rho_{2}$ staying
inside some $u_{i}\cdots u_{i+k}$, followed by some $k$-forward-progressing
run $\rho_{3}$ over $u_{i+1}\cdots u_{n}$. See Figure 6.
###### Proof 5.12.
Assume that $\rho$ visits $u_{1}\cdots u_{n-k}$ and let $u_{i}$ ($1\leq i\leq
n-k$) be the left-most factor visited by $\rho$. We split $\rho$ in
$\rho_{1}\rho_{2}\rho_{3}$ (see Figure 6) where
Figure 6: A right-right run $\rho_{1}\rho_{2}\rho_{3}$ where $\rho_{1}$ is
$k$-backward-progressing, $\rho_{2}$ is local to $u_{i}\cdots u_{i+k}$ and
$\rho_{3}$ is $k$-forward-progressing.
* •
$\rho_{1}$ is the prefix of $\rho$, starting on the right of $w$ in state $p$
and going until the first time $\rho$ crosses from $u_{i+1}$ to $u_{i}$.
Hence, $\rho_{1}$ is a run over $u_{i+1}\cdots u_{n}$ starting on the right in
state $p$ and exiting on the left in some state $q_{1}$. We have
$(\leftarrow,p,q_{1})\in\varphi(u_{i+1}\cdots u_{n})$. By Lemma 5.9 we deduce
that $\rho_{1}$ is $k$-backward-progressing.
* •
Then, $\rho_{2}$ goes until the last crossing from $u_{i}$ to $u_{i+1}$.
* •
Finally, $\rho_{3}$ is the remaining suffix of $\rho$. Hence, $\rho_{3}$ is a
run over $u_{i+1}\cdots u_{n}$ starting on the left in some state $q_{2}$ and
exiting on the right in state $q$. We have
$(\rightarrow,q_{2},q)\in\varphi(u_{i+1}\cdots u_{n})$. By Lemma 5.7 we deduce
that $\rho_{3}$ is $k$-forward-progressing.
It remains to show that $\rho_{2}$ stays inside $u_{i}\cdots u_{i+k}$. Since
$u_{i}$ is the left-most factor visited by $\rho$, we already know that
$\rho_{2}$ does not visit $u_{1}\cdots u_{i-1}$. Similarly to Lemma 5.9, any
maximal subrun $\rho_{2}^{\prime}$ of $\rho_{2}$ that does not visit $u_{i}$
is a
$\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}}$ run
on $u_{i+1}\cdots u_{n}$ since $\rho_{2}$ starts and ends at the frontier
between $u_{i}$ and $u_{i+1}$. By Lemma 5.5, the subrun $\rho_{2}^{\prime}$
does not visit $u_{i+k+1}\cdots u_{n}$ and thus $\rho_{2}$ stays inside
$u_{i}\cdots u_{i+k}$.
###### Example 5.13.
We illustrate the stabilising runs of an aperiodic 2DFT using the aperiodic
2DFT $\mathcal{A}$ in Figure 2. Figure 7 depicts the run of $\mathcal{A}$ on
words in $b(a^{*}b)^{\geq 3}$. We use the set $Z_{1}$ computed in Example 3.5.
Notice that a run of $\mathcal{A}$ on such words is 4-forward-progressing, as
seen below. For each $w=u_{1}u_{2}\cdots u_{n}$ with $n>3$, $u_{1}=b$ and
$u_{i}\in a^{*}b$ for $2\leq i\leq n$, we have $\varphi(w)=Z_{1}$ and one can
see that
* •
each
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p,q)\in
Z_{1}$ is such that, whenever the run of $\mathcal{A}$ starts at the left of
$w$ in state $p$, it stays within $u_{1}\cdots u_{4}$ and never visits
$u_{5}\cdots u_{n}$ (as in Lemma 5.5).
* •
each $(\rightarrow,p,q)\in Z_{1}$ is such that, whenever the run of
$\mathcal{A}$ starts at the left of $w$ in state $p$ and reaches $u_{i+5}$,
for $i\geq 1$, it no longer visits any of $u_{1}\cdots u_{i}$
(4-forward-progressing as in Lemma 5.7).
* •
each
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p,q)\in
Z_{1}$ is such that, whenever the run of $\mathcal{A}$ starts at the right of
$w$ in state $p$, it never visits $u_{1}\cdots u_{n-4}$ (the easy case of
Lemma 5.11).
Figure 7: An accepting run on words in $b(a^{*}b)^{\geq 3}$. Bottom left:
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{2},q_{2})$.
#### 5.2.2 Computing SDRTE
In this section, we show how to construct SDRTEs which are equivalent to
aperiodic 2DFTs. Recall that
$\varphi\colon(\Sigma\uplus\\{{\vdash},{\dashv}\\})^{*}\to\mathsf{TrM}$ is the
canonical surjective morphism to the transition monoid of the 2DFT
$\mathcal{A}=(Q,\Sigma,\Gamma,\delta,\gamma,q_{0},F)$. Given a regular
expression $E$ and a monoid element $s\in\mathsf{TrM}$, we let
$\mathcal{L}(E,s)=\mathcal{L}(E)\cap\varphi^{-1}(s)$. The main construction of
this section is given by Theorem 5.14.
Recall that $\mathsf{TrM}$ represents the transition monoid of a 2DFT, and
consists of elements $\varphi(w)$ for all $w\in\Sigma^{*}$, where each
$\varphi(w)=\\{(d,p,q)\mid\text{there is a }(d,p,q)\text{-run on
}w\\}\subseteq\\{\rightarrow,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\leftarrow\\}\times
Q^{2}$. The elements of $\varphi(w)$ are called _steps_, since any run over $w$
is obtained by a sequence of such steps. When the states $p,q$ in a step
$(d,p,q)$ are clear from the context or immaterial for the discussion, we
simply refer to it as a $d$ step,
$d\in\\{\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\rightarrow,\leftarrow\\}$.
In a step $(d,p,q)$, we call $p$ the starting state and $q$ the final state.
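Concretely, $\varphi(w)$ can be computed by direct simulation: launch the deterministic automaton from every state at each border of $w$ and record on which side it exits. The following Python sketch is an illustration only, with a hypothetical encoding (not from the paper): $\delta$ is a dict mapping (state, letter) to (state, move), and the four traversal shapes $\rightarrow,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\leftarrow,\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}$ are written 'lr', 'll', 'rl', 'rr'.

```python
def steps(delta, states, w):
    """Compute the transition-monoid element phi(w) of a deterministic
    two-way automaton: all steps (d, p, q) with d in {'lr','ll','rl','rr'}
    encoding the four traversal shapes (left-right, left-left,
    right-left, right-right).

    delta: dict (state, letter) -> (state, move), move in {+1, -1};
    assumed total on the letters of w (a hypothetical encoding)."""
    result = set()
    for p in states:
        for side, start in (("l", 0), ("r", len(w) - 1)):
            q, pos, seen = p, start, set()
            while 0 <= pos < len(w) and (q, pos) not in seen:
                seen.add((q, pos))
                q, move = delta[(q, w[pos])]
                pos += move
            if pos < 0:                      # exited on the left
                result.add(("ll" if side == "l" else "rl", p, q))
            elif pos >= len(w):              # exited on the right
                result.add(("lr" if side == "l" else "rr", p, q))
            # otherwise (q, pos) repeated: the run loops, no step
    return result

# A toy deterministic two-way automaton on {a, b} (hypothetical example):
toy = {(0, "a"): (0, +1), (0, "b"): (1, -1),
       (1, "a"): (1, -1), (1, "b"): (0, +1)}
```

For instance, `steps(toy, [0, 1], "a")` is the set `{('lr',0,0), ('ll',1,1), ('rr',0,0), ('rl',1,1)}`; on the empty word the function returns the unit $\mathbf{1}$, consisting of all $(\rightarrow,p,p)$ and $(\leftarrow,p,p)$.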
###### Theorem 5.14.
Let $E$ be an unambiguous, stabilising, SD-regular expression over
$\Sigma\uplus\\{{\vdash},{\dashv}\\}$ and let $s\in\mathsf{TrM}$. For each
step
$x\in\\{\rightarrow,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\leftarrow\\}\times
Q^{2}$, we can construct an SDRTE $\mathsf{C}_{E,s}(x)$ such that:
1. 1.
$\mathsf{C}_{E,s}(x)=\bot$ when $x\notin s$, and otherwise
2. 2.
$\mathsf{dom}([\\![\mathsf{C}_{E,s}(x)]\\!])=\mathcal{L}(E,s)$ and for all
words $w\in\mathcal{L}(E,s)$, $[\\![\mathsf{C}_{E,s}(x)]\\!](w)$ is the output
produced by $\mathcal{A}$ running over $w$ according to step $x$.
When $w=\varepsilon$ and $s=\mathbf{1}=\varphi(\varepsilon)$ with
$x\in\mathbf{1}$, this means
$[\\![\mathsf{C}_{E,s}(x)]\\!](\varepsilon)=\varepsilon$.
###### Proof 5.15.
The construction is by structural induction on $E$.
##### Atomic expressions
We first define $\mathsf{C}_{E,s}(x)$ when $E$ is an atomic expression, i.e.,
$\emptyset$, $\varepsilon$ or $a$ for $a\in\Sigma$.
* •
$E=\emptyset$: we simply set $\mathsf{C}_{\emptyset,s}(x)=\bot$, which is the
nowhere defined function.
* •
$E=\varepsilon$: when $s=\mathbf{1}$ and $x\in s$ then we set
$\mathsf{C}_{\varepsilon,s}(x)={\varepsilon}\triangleright{\varepsilon}$ and
otherwise we set $\mathsf{C}_{\varepsilon,s}(x)=\bot$.
* •
$E=a\in\Sigma\uplus\\{{\vdash},{\dashv}\\}$: again, we set
$\mathsf{C}_{a,s}(x)=\bot$ if $s\neq\varphi(a)$ or $x\notin s$. Otherwise,
there are two cases. Either
$x\in\\{(\rightarrow,p,q),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p,q)\\}$
for some states $p,q$ such that $\delta(p,a)=(q,+1)$, or
$x\in\\{(\leftarrow,p,q),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p,q)\\}$
for some states $p,q$ with $\delta(p,a)=(q,-1)$. In both cases the output
produced is $\gamma(p,a)$ and we set
$\mathsf{C}_{a,s}(x)={a}\triangleright{\gamma(p,a)}$.
##### Disjoint union
If the expression is $E\cup F$ with $\mathcal{L}(E)$ and $\mathcal{L}(F)$
disjoint, then we simply set $\mathsf{C}_{E\cup
F,s}(x)=\mathsf{C}_{E,s}(x)+\mathsf{C}_{F,s}(x)$.
##### Unambiguous concatenation $E\cdot F$
Here, we suppose that we have SDRTEs for $\mathsf{C}_{E,s}(x)$ and
$\mathsf{C}_{F,s}(x)$ for all $s$ in $\mathsf{TrM}$ and all steps
$x\in\\{\rightarrow,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\leftarrow\\}\times
Q^{2}$. We show how to construct SDRTEs for $\mathsf{C}_{E\cdot F,s}(x)$,
assuming that the concatenation $\mathcal{L}(E)\cdot\mathcal{L}(F)$ is
unambiguous.
A word $w\in\mathcal{L}(E\cdot F)$ has a unique factorization $w=uv$ with
$u\in\mathcal{L}(E)$ and $v\in\mathcal{L}(F)$. Let $s=\varphi(u)$ and
$t=\varphi(v)$. A run $\rho$ over $w$ is obtained by stitching together runs
over $u$ and runs over $v$ as shown in Figure 8. In the left figure, the run
over $w$ follows step $x=(\rightarrow,p,q)$ starting on the left in state $p$
and exiting on the right in state $q$. The run $\rho$ splits as
$\rho_{0}\rho_{1}\rho_{2}\rho_{3}\rho_{4}\rho_{5}$ as shown in the figure. The
output of the initial part $\rho_{0}$ is computed by
$\mathsf{C}_{E,s}((\rightarrow,p,p_{1}))$ over $u$ and the output of the final
part $\rho_{5}$ is computed by $\mathsf{C}_{F,t}((\rightarrow,p_{5},q))$ over
$v$. We now focus on the internal part $\rho_{1}\rho_{2}\rho_{3}\rho_{4}$,
which consists of an alternating sequence of left-left runs over $v$ and
right-right runs over $u$. The corresponding sequence of steps
$x_{1}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p_{1},p_{2})\in
t$,
$x_{2}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p_{2},p_{3})\in
s$,
$x_{3}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p_{3},p_{4})\in
t$ and
$x_{4}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p_{4},p_{5})\in
s$ depends only on $s=\varphi(u)$ and $t=\varphi(v)$.
Figure 8: Decomposition of a $(\rightarrow,p,q)$-run and a
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p,q)$-run
over the product $w=uv$.
These internal zigzag runs will be frequently used when dealing with
concatenation or Kleene star. They alternate left-left
($\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}}$)
steps on the right word $v$ and right-right
($\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}$)
steps on the left word $u$. They may start with a
$\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}}$-step
or a
$\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}$-step.
The sequence of steps in a _maximal_ zigzag run is entirely determined by the
monoid elements $s=\varphi(u)$, $t=\varphi(v)$, the starting step
$d\in\\{\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}\\}$
and the starting state $p^{\prime}$ of step $d$. The final step of this
_maximal_ sequence is some
$d^{\prime}\in\\{\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}\\}$
and reaches some state $q^{\prime}$. We write
$Z_{s,t}(p^{\prime},d)=(d^{\prime},q^{\prime})$. For instance, on the left of
Figure 8 we get
$Z_{s,t}(p_{1},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p_{5})$
whereas on the right of Figure 8 we get
$Z_{s,t}(p_{1},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p_{4})$.
By convention, if the sequence of zigzag steps is empty then we define
$Z_{s,t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p)$
and
$Z_{s,t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p)$.
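The map $Z_{s,t}$ is computable from the two monoid elements alone: by determinism, the $\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}}$ steps of $t$ and the $\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}$ steps of $s$ are partial functions on states, which we simply iterate. Below is a minimal Python sketch (an illustration, not the paper's formalism), with steps encoded as triples $(d,p,q)$ where 'll' and 'rr' stand for $\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}}$ and $\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}$.

```python
def zigzag(s, t, p, d):
    """Z_{s,t}(p, d): follow the maximal alternating sequence of
    'll' steps (of t, on the right word) and 'rr' steps (of s, on the
    left word), starting in state p with a d-step, d in {'ll', 'rr'}.
    Returns (direction of last step, final state); None if the zigzag
    cycles, i.e. the run on uv diverges.  An empty sequence follows the
    convention Z(p,'ll') = ('rr', p) and Z(p,'rr') = ('ll', p)."""
    ll = {p0: q0 for (d0, p0, q0) in t if d0 == "ll"}  # steps on v
    rr = {p0: q0 for (d0, p0, q0) in s if d0 == "rr"}  # steps on u
    cur, want = p, d
    last = "rr" if d == "ll" else "ll"     # empty-sequence convention
    seen = set()
    while True:
        table = ll if want == "ll" else rr
        if cur not in table:
            return (last, cur)
        if (cur, want) in seen:
            return None                    # cycle: the run diverges
        seen.add((cur, want))
        cur, last = table[cur], want
        want = "rr" if want == "ll" else "ll"
```

With $s\supseteq\\{(\text{'rr'},p_{2},p_{3}),(\text{'rr'},p_{4},p_{5})\\}$ and $t\supseteq\\{(\text{'ll'},p_{1},p_{2}),(\text{'ll'},p_{3},p_{4})\\}$ as on the left of Figure 8, the call starting at $p_{1}$ with an 'll' step returns ('rr', $p_{5}$), matching $Z_{s,t}(p_{1},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p_{5})$.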
###### Lemma 5.16.
We use the above notation. We can construct SDRTEs
$\mathsf{ZC}_{E,s}^{F,t}(p,d)$ for $p\in Q$ and
$d\in\\{\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}\\}$
such that
$\mathsf{dom}([\\![\mathsf{ZC}_{E,s}^{F,t}(p,d)]\\!])=\mathcal{L}(E,s)\mathcal{L}(F,t)$
and for all $u\in\mathcal{L}(E,s)$ and $v\in\mathcal{L}(F,t)$ the value
$[\\![\mathsf{ZC}_{E,s}^{F,t}(p,d)]\\!](uv)$ is the output produced by the
_internal_ zigzag run of $\mathcal{A}$ over $(u,v)$ following the maximal
sequence of steps starting in state $p$ with a $d$-step.
###### Proof 5.17.
We first consider the case
$d={\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}}}$
and
$Z_{s,t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q)$
for some $q\in Q$ which is illustrated on the left of Figure 8. Since
$\mathcal{A}$ is deterministic, there is a unique maximal sequence of steps
(with $n\geq 0$, $p_{1}=p$ and $p_{2n+1}=q$):
$x_{1}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p_{1},p_{2})\in
t$,
$x_{2}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p_{2},p_{3})\in
s$, …,
$x_{2n-1}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p_{2n-1},p_{2n})\in
t$,
$x_{2n}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p_{2n},p_{2n+1})\in
s$. The zigzag run $\rho$ following this sequence of steps over $uv$ splits as
$\rho_{1}\rho_{2}\cdots\rho_{2n}$ where $\rho_{2i}$ is the unique run on $u$
following step $x_{2i}$ and $\rho_{2i+1}$ is the unique run on $v$ following
step $x_{2i+1}$. The output of these runs are given by
$[\\![\mathsf{C}_{E,s}(x_{2i})]\\!](u)$ and
$[\\![\mathsf{C}_{F,t}(x_{2i+1})]\\!](v)$. When $n=0$ the zigzag run $\rho$ is
empty and we simply set
$\mathsf{ZC}_{E,s}^{F,t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})={(\mathcal{L}(E,s)\mathcal{L}(F,t))}\triangleright{\varepsilon}$.
Assume now that $n>0$. The required SDRTE computing the output of $\rho$ can
be defined as
$\displaystyle\mathsf{ZC}_{E,s}^{F,t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})={}$
$\displaystyle\big{(}({\mathcal{L}(E,s)}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,t}(x_{1})\big{)}\odot\big{(}\mathsf{C}_{E,s}(x_{2})\cdot\mathsf{C}_{F,t}(x_{3})\big{)}\odot\cdots\odot$
$\displaystyle\big{(}\mathsf{C}_{E,s}(x_{2n-2})\cdot\mathsf{C}_{F,t}(x_{2n-1})\big{)}\odot\big{(}\mathsf{C}_{E,s}(x_{2n})\cdot({\mathcal{L}(F,t)}\triangleright{\varepsilon})\big{)}\,.$
Notice that each Cauchy product in this expression is unambiguous since the
product $\mathcal{L}(E)\cdot\mathcal{L}(F)$ is unambiguous.
The other cases can be handled similarly. For instance, when
$Z_{s,t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q)$
as on the right of Figure 8, the sequence of steps ends with
$x_{2n-1}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p_{2n-1},p_{2n})\in
t$ with $n>0$ and $p_{2n}=q$ and the zigzag run $\rho$ is
$\rho_{1}\rho_{2}\cdots\rho_{2n-1}$. The SDRTE
$\mathsf{ZC}_{E,s}^{F,t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
is given by
$\big{(}({\mathcal{L}(E,s)}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,t}(x_{1})\big{)}\odot\big{(}\mathsf{C}_{E,s}(x_{2})\cdot\mathsf{C}_{F,t}(x_{3})\big{)}\odot\cdots\odot\big{(}\mathsf{C}_{E,s}(x_{2n-2})\cdot\mathsf{C}_{F,t}(x_{2n-1})\big{)}\,.$
The situation is symmetric for
$\mathsf{ZC}_{E,s}^{F,t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})$:
the sequence starts with a right-right step
$x_{2}=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p_{2},p_{3})\in
s$ with $p=p_{2}$ and we obtain the SDRTE simply by removing the first factor
$\big{(}({\mathcal{L}(E,s)}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,t}(x_{1})\big{)}$
in the Hadamard products above.
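The SDRTEs built in this proof combine partial word functions using only the restriction ${L}\triangleright{\gamma}$, the unambiguous Cauchy product $\cdot$ and the Hadamard product $\odot$. Their semantics can be sketched on Python callables that return None outside their domain (an illustration of the semantics used here, not the paper's formal definition):

```python
def restrict(dom, out):
    """L > gamma : outputs the constant gamma on words of dom, else undefined."""
    return lambda w: out if w in dom else None

def cauchy(f, g):
    """Unambiguous Cauchy product f . g : on w = uv with f(u), g(v)
    defined, output f(u) g(v).  The expressions built in the proofs
    guarantee that at most one split works; otherwise undefined."""
    def h(w):
        outs = [f(w[:i]) + g(w[i:]) for i in range(len(w) + 1)
                if f(w[:i]) is not None and g(w[i:]) is not None]
        return outs[0] if len(outs) == 1 else None
    return h

def hadamard(f, g):
    """Hadamard product f (.) g : two passes over the same input,
    concatenating the two outputs."""
    def h(w):
        a, b = f(w), g(w)
        return a + b if a is not None and b is not None else None
    return h
```

For instance, `cauchy(restrict({"ab"}, "x"), restrict({"c"}, "y"))` maps "abc" to "xy" and is undefined elsewhere, while a Hadamard product concatenates the outputs of its two factors on the same word.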
We come back to the definition of the SDRTEs for $\mathsf{C}_{E\cdot F,r}(x)$
with $r\in\mathsf{TrM}$ and $x\in r$. As explained above, the output produced
by a run $\rho$ following step $x$ over a word $w=uv$ with
$u\in\mathcal{L}(E,s)$, $v\in\mathcal{L}(F,t)$ and $r=st$ consists of an
initial part, a zigzag internal part, and a final part. There are four cases
depending on the step $x$.
* •
$x=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p,q)$.
Either the run $\rho$ stays inside $u$ (zigzag part empty) or there is a
zigzag internal part starting with
$(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
such that $(\rightarrow,p,p^{\prime})\in\varphi(u)$ and ending with
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q^{\prime})$
such that $(\leftarrow,q^{\prime},q)\in\varphi(u)$. Thus we define the SDRTE
$\mathsf{C}_{E\cdot F,r}(x)$ as
$\displaystyle\sum_{st=r\mid x\in s}$
$\displaystyle\mathsf{C}_{E,s}(x)\cdot\big{(}{\mathcal{L}(F,t)}\triangleright{\varepsilon}\big{)}+{}$
$\displaystyle\sum_{\begin{subarray}{c}st=r,\,(p^{\prime},q^{\prime})\,\mid\\\
Z_{s,t}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q^{\prime})\end{subarray}}\big{(}\mathsf{C}_{E,s}((\rightarrow,p,p^{\prime}))\cdot({\mathcal{L}(F,t)}\triangleright{\varepsilon})\big{)}\odot\mathsf{ZC}_{E,s}^{F,t}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})\odot\big{(}\mathsf{C}_{E,s}((\leftarrow,q^{\prime},q))\cdot({\mathcal{L}(F,t)}\triangleright{\varepsilon})\big{)}$
Notice that all Cauchy products are unambiguous since the concatenation
$\mathcal{L}(E)\cdot\mathcal{L}(F)$ is unambiguous. The sums are also
unambiguous. Indeed, a word $w\in\mathcal{L}(E\cdot F,r)$ has a unique
factorization $w=uv$ with $u\in\mathcal{L}(E)$ and $v\in\mathcal{L}(F)$. Hence
$s=\varphi(u)$ and $t=\varphi(v)$ are uniquely determined and satisfy $st=r$.
Then, either $x\in s$ and $w$ is only in the domain of
$\mathsf{C}_{E,s}(x)\cdot\big{(}{\mathcal{L}(F,t)}\triangleright{\varepsilon}\big{)}$.
Or there is a unique $p^{\prime}$ with $(\rightarrow,p,p^{\prime})\in s$ and a
unique $q^{\prime}$ with
$Z_{s,t}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q^{\prime})$
and $(\leftarrow,q^{\prime},q)\in s$. Notice that if
$(\rightarrow,p,p^{\prime})\notin s$ then
$\mathsf{C}_{E,s}((\rightarrow,p,p^{\prime}))=\bot$ and similarly if
$(\leftarrow,q^{\prime},q)\notin s$. Hence we could have added the condition
$(\rightarrow,p,p^{\prime}),(\leftarrow,q^{\prime},q)\in s$ to the second sum,
but do not, to reduce clutter.
* •
$x=(\rightarrow,p,q)$. Here the run must cross from left to right. Thus we
define the SDRTE $\mathsf{C}_{E\cdot F,r}(x)$ as
$\sum_{\begin{subarray}{c}st=r,\,(p^{\prime},q^{\prime})\,\mid\\\
Z_{s,t}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q^{\prime})\end{subarray}}\big{(}\mathsf{C}_{E,s}((\rightarrow,p,p^{\prime}))\cdot({\mathcal{L}(F,t)}\triangleright{\varepsilon})\big{)}\odot\mathsf{ZC}_{E,s}^{F,t}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})\odot\big{(}({\mathcal{L}(E,s)}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,t}((\rightarrow,q^{\prime},q))\big{)}$
* •
$x=(\leftarrow,p,q)$. This case is similar. The SDRTE $\mathsf{C}_{E\cdot
F,r}(x)$ is
$\sum_{\begin{subarray}{c}st=r,\,(p^{\prime},q^{\prime})\,\mid\\\
Z_{s,t}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q^{\prime})\end{subarray}}\big{(}({\mathcal{L}(E,s)}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,t}((\leftarrow,p,p^{\prime}))\big{)}\odot\mathsf{ZC}_{E,s}^{F,t}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})\odot\big{(}\mathsf{C}_{E,s}((\leftarrow,q^{\prime},q))\cdot({\mathcal{L}(F,t)}\triangleright{\varepsilon})\big{)}$
* •
$x=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p,q)$.
Finally, for right-right runs, the SDRTE $\mathsf{C}_{E\cdot F,r}(x)$ is
$\displaystyle\sum_{st=r\mid x\in t}$
$\displaystyle({\mathcal{L}(E,s)}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,t}(x)+{}$
$\displaystyle\sum_{\begin{subarray}{c}st=r,\,(p^{\prime},q^{\prime})\,\mid\\\
Z_{s,t}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q^{\prime})\end{subarray}}\big{(}({\mathcal{L}(E,s)}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,t}((\leftarrow,p,p^{\prime}))\big{)}\odot\mathsf{ZC}_{E,s}^{F,t}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})\odot\big{(}({\mathcal{L}(E,s)}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,t}((\rightarrow,q^{\prime},q))\big{)}$
###### Example 5.18.
We go back to our running example of the aperiodic 2DFT $\mathcal{A}$ in
Figure 2 and illustrate the unambiguous concatenation. Consider $F=a^{+}b$,
$G=F^{2}$ and $E=F^{4}=G^{2}$. We know from Example 3.5 that
$\varphi(F)=Y_{2}$, $\varphi(G)=Y_{4}$ and $\varphi(E)=Z_{2}$. We compute
below some steps of $Y_{2}$, $Y_{4}$ and $Z_{2}$.
First, we look at some steps in $F$ for which the SDRTEs are obtained directly
by inspecting the automaton $\mathcal{A}$ in Figure 2 (we cannot give more
details here since we have not yet explained how to deal with the Kleene-plus,
so we rely on intuition for these steps).
$\displaystyle\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{0},q_{1}))$
$\displaystyle=({a^{+}b}\triangleright{\varepsilon})$
$\displaystyle\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{1},q_{2}))$
$\displaystyle=({a^{+}b}\triangleright{\varepsilon})$
$\displaystyle\mathsf{C}_{F,Y_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}))$
$\displaystyle=({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})$
$\displaystyle\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{6},q_{0}))$
$\displaystyle=({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})$
$\displaystyle\mathsf{C}_{F,Y_{2}}((\leftarrow,q_{3},q_{4}))$
$\displaystyle=({a^{+}b}\triangleright{\varepsilon})$
$\displaystyle\mathsf{C}_{F,Y_{2}}((\leftarrow,q_{4},q_{5}))$
$\displaystyle=({a^{+}b}\triangleright{\varepsilon})$
$\displaystyle\mathsf{C}_{F,Y_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}))$
$\displaystyle=({a^{+}b}\triangleright{\varepsilon})\,.$
Next, we compute some steps using the unambiguous concatenation $G=F\cdot F$.
We start with step $(\rightarrow,q_{0},q_{2})$ for which the zigzag part is
empty:
$\mathsf{ZC}_{F,Y_{2}}^{F,Y_{2}}(q_{1},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})={F^{2}}\triangleright{\varepsilon}$.
Hence, using the formula in the proof above, we get
$\displaystyle\mathsf{C}_{G,Y_{4}}((\rightarrow,q_{0},q_{2}))$
$\displaystyle=\big{(}\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{0},q_{1}))\cdot({F}\triangleright{\varepsilon})\big{)}\odot({F^{2}}\triangleright{\varepsilon})\odot\big{(}({F}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{1},q_{2}))\big{)}$
and after some simplifications
$\displaystyle\mathsf{C}_{G,Y_{4}}((\rightarrow,q_{0},q_{2}))$
$\displaystyle=\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{0},q_{1}))\cdot\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{1},q_{2}))=({(a^{+}b)^{2}}\triangleright{\varepsilon})\,.$
Similarly, we can compute the following steps
$\displaystyle\mathsf{C}_{G,Y_{4}}((\rightarrow,q_{6},q_{1}))$
$\displaystyle=\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{6},q_{0}))\cdot\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{0},q_{1}))=({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({a^{+}b}\triangleright{\varepsilon})\,.$
For step
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3})$,
the run only visits the first factor since
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3})\in
Y_{2}$:
$\displaystyle\mathsf{C}_{G,Y_{4}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}))$
$\displaystyle=\mathsf{C}_{F,Y_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}))\cdot({F}\triangleright{\varepsilon})=({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({a^{+}b}\triangleright{\varepsilon})\,.$
Now, for step
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4})$,
the zigzag part is reduced to step
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3})$
and we get:
$\displaystyle\mathsf{C}_{G,Y_{4}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4}))$
$\displaystyle=\big{(}\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{1},q_{2}))\cdot({F}\triangleright{\varepsilon})\big{)}\odot\big{(}({F}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,Y_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}))\big{)}$
$\displaystyle\hskip
56.9055pt{}\odot\big{(}\mathsf{C}_{F,Y_{2}}((\leftarrow,q_{3},q_{4}))\cdot({F}\triangleright{\varepsilon})\big{)}$
$\displaystyle=({a^{+}b}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\,.$
Similarly, we compute
$\displaystyle\mathsf{C}_{G,Y_{4}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{4},q_{0}))$
$\displaystyle=\big{(}({F}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,Y_{2}}((\leftarrow,q_{4},q_{5}))\big{)}\odot\big{(}\mathsf{C}_{F,Y_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}))\cdot({F}\triangleright{\varepsilon})\big{)}$
$\displaystyle\hskip
56.9055pt{}\odot\big{(}({F}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,Y_{2}}((\rightarrow,q_{6},q_{0}))\big{)}$
$\displaystyle=({a^{+}b}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\,.$
Figure 9: Illustration for Example 5.18.
Finally, we consider the unambiguous decomposition $E=G\cdot G$ in order to
compute $\mathsf{C}_{E,Z_{2}}(x)$ where $x=(\rightarrow,q_{6},q_{2})$. Notice
that
$(\rightarrow,q_{6},q_{1}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{4},q_{0}),(\rightarrow,q_{0},q_{2})\in
Y_{4}=\varphi(G)$. Hence,
$Z_{Y_{4},Y_{4}}(q_{1},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{0})$
and applying the formulas in the proof above we obtain
$\displaystyle\mathsf{C}_{E,Z_{2}}(x)$
$\displaystyle=\big{(}\mathsf{C}_{G,Y_{4}}((\rightarrow,q_{6},q_{1}))\cdot({G}\triangleright{\varepsilon})\big{)}\odot\mathsf{ZC}_{G,Y_{4}}^{G,Y_{4}}(q_{1},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})\odot\big{(}({G}\triangleright{\varepsilon})\cdot\mathsf{C}_{G,Y_{4}}((\rightarrow,q_{0},q_{2}))\big{)}$
$\displaystyle\mathsf{ZC}_{G,Y_{4}}^{G,Y_{4}}(q_{1},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
$\displaystyle=\big{(}({G}\triangleright{\varepsilon})\cdot\mathsf{C}_{G,Y_{4}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{1},q_{4}))\big{)}\odot\big{(}\mathsf{C}_{G,Y_{4}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{4},q_{0}))\cdot({G}\triangleright{\varepsilon})\big{)}\,.$
Putting everything together, and again after some simplifications, we get
$\displaystyle\mathsf{C}_{E,Z_{2}}(x)$
$\displaystyle=\big{(}({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{3}}\triangleright{\varepsilon})\big{)}\odot\big{(}({(a^{+}b)^{3}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\big{)}$
$\displaystyle\hskip
56.9055pt{}\odot\big{(}({a^{+}b}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{2}}\triangleright{\varepsilon})\big{)}\,.$
For instance, applying $\mathsf{C}_{E,Z_{2}}(x)$ to $w=aba^{2}ba^{3}ba^{4}b$
we obtain $aba^{4}ba^{2}b$.
##### SD-Kleene Star
The most interesting case is when $E=F^{*}$. Let
$L=\mathcal{L}(F)\subseteq\Sigma^{*}$. Since $E$ is a stabilising SD-regular
expression, $L$ is an aperiodic prefix code of bounded synchronisation delay,
and $X=\varphi(L)$ is $k$-stabilising for some $k>0$. Hence, we may apply the
results of Section 5.2.1.
By induction, we suppose that we have SDRTEs $\mathsf{C}_{F,s}(x)$ for all $s$
in $\mathsf{TrM}$ and steps $x$. Since $L=\mathcal{L}(F)$ is a code, for each
fixed $\ell>0$, the expression $F^{\ell}=F\cdot F\cdots F$ is an unambiguous
concatenation. Hence, from the proof above for the unambiguous concatenation,
we may also assume that we have SDRTEs $\mathsf{C}_{F^{\ell},s}(x)$ for all
$s\in\mathsf{TrM}$ and steps $x$. Similarly, we have SDRTEs for
$\mathsf{ZC}_{F^{k},s}^{F,t}(-)$ and $\mathsf{ZC}_{F,s}^{F^{k},t}(-)$. Notice
that $F^{0}$ is equivalent to $\varepsilon$ hence we have
$\mathsf{C}_{F^{0},s}(x)=\mathsf{C}_{\varepsilon,s}(x)$.
We show how to construct SDRTEs $\mathsf{C}_{E,s}(x)$ for $E=F^{*}$. There are
four cases, which are dealt with below, depending on the step $x$.
We fix some notation common to all four cases. Fix some
$w\in\mathcal{L}(E,s)=L^{*}\cap\varphi^{-1}(s)$ and let $w=u_{1}\cdots u_{n}$
be the unique factorization of $w$ with $n\geq 0$ and $u_{i}\in L$ for $1\leq
i\leq n$. For a step $x\in s$, we denote by $\rho$ the unique run of
$\mathcal{A}$ over $w$ following step $x$.
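Since $L$ is a prefix code, this factorization can be computed greedily: scanning $w$ left to right, the first completed codeword is forced, because no codeword of $L$ is a proper prefix of another. A minimal Python sketch of this (the function names and the membership predicate are ours, with $L=a^{+}b$ from the running example):

```python
def factorize(w, in_L):
    """Factor w over a prefix code L (given as a membership
    predicate): the first completed codeword is always forced,
    since no codeword is a proper prefix of another."""
    factors, start = [], 0
    for i in range(1, len(w) + 1):
        if in_L(w[start:i]):          # first codeword completed: cut here
            factors.append(w[start:i])
            start = i
    if start != len(w):
        raise ValueError("w is not in L*")
    return factors

# membership predicate for L = a^+b from the running example
in_aplusb = lambda v: len(v) >= 2 and v[-1] == "b" and set(v[:-1]) == {"a"}
print(factorize("abaabaaab", in_aplusb))   # ['ab', 'aab', 'aaab']
```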
* •
$x=(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},p,q)\in
s$
The easiest case is for left-left steps. If $n<k$ then the output of $\rho$ is
$[\\![\mathsf{C}_{F^{n},s}(x)]\\!](w)$. Notice that here,
$\mathsf{C}_{F^{0},s}(x)=\bot$ since $x\notin\mathbf{1}=\varphi(\varepsilon)$.
Now, if $n\geq k$ then, by Lemma 5.5, the run $\rho$ stays inside $u_{1}\cdots
u_{k}$. We deduce that the output of $\rho$ is
$[\\![\mathsf{C}_{F^{k},s}(x)]\\!](u_{1}\cdots u_{k})$. Therefore, we define
$\displaystyle\mathsf{C}_{E,s}(x)$
$\displaystyle=\Big{(}\sum_{n<k}\mathsf{C}_{F^{n},s}(x)\Big{)}+\Big{(}\mathsf{C}_{F^{k},s}(x)\cdot({F^{*}}\triangleright{\varepsilon})\Big{)}$
Notice that the sums are unambiguous since $L=\mathcal{L}(F)$ is a code. The
concatenation $F^{k}\cdot F^{*}$ is also unambiguous.
* •
$x=(\rightarrow,p,q)\in s$
We turn now to the more interesting left-right steps. Again, if $n<k$ then the
output of $\rho$ is $[\\![\mathsf{C}_{F^{n},s}(x)]\\!](w)$. Assume now that
$n\geq k$. We apply Lemma 5.7 to deduce that the run $\rho$ is $k$-forward-
progressing. See Figure 10 for a sample run which is $2$-forward-progressing.
We split $\rho$ in $\rho_{0}\rho_{1}\cdots\rho_{n-k}$ where $\rho_{0}$ is the
prefix of $\rho$ going until the first crossing from $u_{k}$ to $u_{k+1}$.
Then, $\rho_{1}$ goes until the first crossing from $u_{k+1}$ to $u_{k+2}$.
Continuing in the same way, for $1\leq i<n-k$, $\rho_{i}$ goes until the first
crossing from $u_{k+i}$ to $u_{k+i+1}$. Finally, $\rho_{n-k}$ is the remaining
suffix, going until the run exits from $w$ on the right. Since the run $\rho$
is $k$-forward progressing, we deduce that $\rho_{i}$ does not go back to
$u_{1}\cdots u_{i-1}$, hence it stays inside $u_{i}\cdots u_{i+k}$, starting
on the left of $u_{i+k}$ and exiting on the right of $u_{i+k}$.
Since $X=\varphi(L)$ is $k$-stabilising, we have $\varphi(u_{1}\cdots
u_{k+i})=\varphi(w)$ for all $0\leq i\leq n-k$. Now, $\rho_{0}\cdots\rho_{i}$
is a run on $u_{1}\cdots u_{k+i}$ starting on the left in state $p$ and
exiting on the right. Since $\mathcal{A}$ is deterministic and
$x=(\rightarrow,p,q)\in\varphi(w)=\varphi(u_{1}\cdots u_{k+i})$ we deduce that
$\rho_{i}$ exits on the right of $u_{k+i}$ in state $q$. In particular,
$\rho_{0}$ is a run on $u_{1}\cdots u_{k}$ starting on the left in state $p$
and exiting on the right in state $q$. Moreover, for each $1\leq i\leq n-k$,
$\rho_{i}$ is the concatenation of a zigzag internal run over $(u_{i}\cdots
u_{i+k-1},u_{i+k})$ starting with
$(q,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
ending with
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{i})=Z_{s^{\prime},s^{\prime\prime}}(q,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
where $s^{\prime}=\varphi(u_{i}\cdots u_{i+k-1})$,
$s^{\prime\prime}=\varphi(u_{i+k})$ and a $(\rightarrow,q_{i},q)$ run over
$u_{i+k}$.
Figure 10: A left-right run which is 2-forward-progressing.
Let $v_{i}$ be the output produced by $\rho_{i}$ for $0\leq i\leq n-k$. Then,
using Lemma 5.16, the productions $v_{i}$ with $0<i\leq n-k$ are given by the
SDRTE $f$ defined as
$\displaystyle
f=\sum_{\begin{subarray}{c}s^{\prime},s^{\prime\prime},q^{\prime}\,\mid\\\
(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q^{\prime})=Z_{s^{\prime},s^{\prime\prime}}(q,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})\end{subarray}}\mathsf{ZC}_{F^{k},s^{\prime}}^{F,s^{\prime\prime}}(q,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})\odot\big{(}({\mathcal{L}(F^{k},s^{\prime})}\triangleright{\varepsilon})\cdot\mathsf{C}_{F,s^{\prime\prime}}((\rightarrow,q^{\prime},q))\big{)}$
Then the product $v_{1}\cdots v_{n-k}$ is produced by the $(k+1)$-chained
Kleene-star $[L,f]^{(k+1)\star}(w)$. From the above discussion, we also deduce
that $v_{0}=[\\![\mathsf{C}_{F^{k},s}(x)]\\!](u_{1}\cdots u_{k})$. Therefore,
we define
$\displaystyle\mathsf{C}_{E,s}(x)$
$\displaystyle=\Big{(}\sum_{n<k}\mathsf{C}_{F^{n},s}(x)\Big{)}+\Big{(}\big{(}\mathsf{C}_{F^{k},s}(x)\cdot({F^{*}}\triangleright{\varepsilon})\big{)}\odot[F,f]^{(k+1)\star}\Big{)}$
* •
$x=(\leftarrow,p,q)\in s$
The case of right-left runs is almost symmetric to the case of left-right
runs. Again, if $n<k$ then the output of $\rho$ is
$[\\![\mathsf{C}_{F^{n},s}(x)]\\!](w)$. Assume now that $n\geq k$. We apply
Lemma 5.9 to deduce that the run $\rho$ is $k$-backward-progressing. As
illustrated in Figure 11, we split $\rho$ in
$\rho_{n-k+1}\cdots\rho_{2}\rho_{1}$ where $\rho_{n-k+1}$ is the prefix of
$\rho$ going until the first crossing from $u_{n-k+1}$ to $u_{n-k}$. Then,
$\rho_{n-k}$ goes until the first crossing from $u_{n-k}$ to $u_{n-k-1}$.
Continuing in the same way, for $n-k>i>1$, $\rho_{i}$ goes until the first
crossing from $u_{i}$ to $u_{i-1}$. Finally, $\rho_{1}$ is the remaining
suffix, going until the run exits from $u_{1}$ on the left. Since the run
$\rho$ is $k$-backward progressing, we deduce that, for $1\leq i\leq n-k$, the
run $\rho_{i}$ does not go back to $u_{i+k+1}\cdots u_{n}$. Hence it is the
concatenation of a zigzag internal run over $(u_{i},u_{i+1}\cdots u_{i+k})$,
starting with some
$(q_{i},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})$
and exiting with
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q^{\prime}_{i})=Z_{s^{\prime},s^{\prime\prime}}(q_{i},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})$
where $s^{\prime}=\varphi(u_{i})$, $s^{\prime\prime}=\varphi(u_{i+1}\cdots
u_{i+k})$, and a $(\leftarrow,q^{\prime}_{i},q_{i-1})$-run over $u_{i}$ (see
again Figure 11). Let $v_{i}$ be the output produced by $\rho_{i}$ for $1\leq
i\leq n-k+1$. The output produced by $\rho$ is $v=v_{n-k+1}\cdots v_{2}v_{1}$.
Now, the situation is slightly more complicated than for left-right runs where
we could prove that $q_{i}=q$ for each $i$. Instead, let us remark that
$q_{i}$ is the unique state (by determinism of $\mathcal{A}$) such that there
is a run over $u_{i+1}\cdots u_{n}$ following step $(\leftarrow,p,q_{i})$. But
since $X$ is $k$-stabilising, we know that $\varphi(u_{i+1}\cdots
u_{n})=\varphi(u_{i+1}\cdots u_{i+k})$. Then, given
$s^{\prime}=\varphi(u_{i})$ and $s^{\prime\prime}=\varphi(u_{i+1}\cdots
u_{i+k})$, we get that $q_{i}$ is the unique state such that
$(\leftarrow,p,q_{i})\in s^{\prime\prime}$ and $q_{i-1}$ the one such that
$(\leftarrow,p,q_{i-1})\in s^{\prime}s^{\prime\prime}$. Thus we define the
function $g$ generating the $v_{i}$ with $1\leq i\leq n-k$ by
$g=\sum_{\begin{subarray}{c}s^{\prime},s^{\prime\prime},p^{\prime},q^{\prime},q^{\prime\prime}\,\mid\,(\leftarrow,p,p^{\prime})\in
s^{\prime\prime},\\\
(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q^{\prime})=Z_{s^{\prime},s^{\prime\prime}}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}}),\,(\leftarrow,q^{\prime},q^{\prime\prime})\in
s^{\prime}\end{subarray}}\mathsf{ZC}_{F,s^{\prime}}^{F^{k},s^{\prime\prime}}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})\odot\Big{(}\mathsf{C}_{F,s^{\prime}}((\leftarrow,q^{\prime},q^{\prime\prime}))\cdot({\mathcal{L}(F^{k},s^{\prime\prime})}\triangleright{\varepsilon})\Big{)}$
Figure 11: A right-left run which is 2-backward-progressing.
Finally, a right-left run for $F^{*}$ is either a right-left run over $F^{n}$
for $n<k$, or the concatenation of a right-left run $\rho_{n-k+1}$ over the
$k$ rightmost iterations of $F$, and a sequence of runs $\rho_{i}$ with $1\leq
i\leq n-k$ as previously and whose outputs are computed by $g$. Therefore, we
define
$\mathsf{C}_{E,s}(x)=\Big{(}\sum_{n<k}\mathsf{C}_{F^{n},s}(x)\Big{)}+\Big{(}\sum_{\begin{subarray}{c}s=s^{\prime}s^{\prime\prime},\,p^{\prime}\,\mid\\\
(\leftarrow,p,p^{\prime})\in
s^{\prime\prime}\end{subarray}}({\mathcal{L}(F^{*},s^{\prime})}\triangleright{\varepsilon})\cdot\mathsf{C}_{F^{k},s^{\prime\prime}}((\leftarrow,p,p^{\prime}))\Big{)}\odot[F,g]_{r}^{(k+1)\star}$
* •
$x=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p,q)\in
s$
We finally deal with right-right runs, thus completing the case of $E=F^{*}$.
If $n<k$, then the output of $\rho$ is $[\\![\mathsf{C}_{F^{n},s}(x)]\\!](w)$.
Assume now that $n\geq k$. If $\rho$ only visits the last $k$ factors, then
$w$ can be decomposed as $uv$ where $u\in\mathcal{L}(F^{*},s^{\prime})$ and
$v\in\mathcal{L}(F^{k},s^{\prime\prime})$ with $s=s^{\prime}s^{\prime\prime}$
and
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p,q)\in
s^{\prime\prime}$. The output of $\rho$ is
$[\\![({\mathcal{L}(F^{*},s^{\prime})}\triangleright{\varepsilon})\cdot\mathsf{C}_{F^{k},s^{\prime\prime}}(x)]\\!](w)$.
Otherwise, by Lemma 5.11, we know that there is a _unique_ integer $1\leq
i\leq n-k$ such that $\rho$ only visits $u_{i}\cdots u_{n}$, and is the
concatenation of 3 runs $\rho_{1},\rho_{2}$ and $\rho_{3}$, where $\rho_{1}$
is a $k$-backward-progressing run over $u_{i+1}\cdots u_{n}$, $\rho_{2}$ is a
zigzag internal run over $(u_{i},u_{i+1}\cdots u_{i+k})$ and $\rho_{3}$ is a
$k$-forward-progressing run over $u_{i+1}\cdots u_{n}$, as depicted in Figure
6.
Let $r=\varphi(u_{1}\cdots u_{i-1})$, $s^{\prime}=\varphi(u_{i})$ and
$s^{\prime\prime}=\varphi(u_{i+1}\cdots u_{i+k})$. Since $i$ is uniquely
determined, the tuple $(r,s^{\prime},s^{\prime\prime})$ is also unique. Notice
that $s^{\prime\prime}=\varphi(u_{i+1}\cdots u_{n})$ since $X$ is
$k$-stabilising and $s^{\prime\prime}\in X^{k}$, hence we have
$s=rs^{\prime}s^{\prime\prime}$. Moreover, the starting and ending states of
$\rho_{2}$ are the _unique_ states $p^{\prime},q^{\prime}$ such that
$(\leftarrow,p,p^{\prime})\in s^{\prime\prime}$,
$Z_{s^{\prime},s^{\prime\prime}}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q^{\prime})$
and $(\rightarrow,q^{\prime},q)\in s^{\prime\prime}$. Once the tuple
$(r,s^{\prime},s^{\prime\prime},p^{\prime},q^{\prime})$ is fixed, using the
previous points, we have SDRTEs computing the outputs of $\rho_{1}$ and
$\rho_{3}$:
$[\\![\mathsf{C}_{E,s^{\prime\prime}}((\leftarrow,p,p^{\prime}))]\\!](u_{i+1}\cdots
u_{n})\hskip
28.45274pt[\\![\mathsf{C}_{E,s^{\prime\prime}}((\rightarrow,q^{\prime},q))]\\!](u_{i+1}\cdots
u_{n})$
and from Lemma 5.16, the output of $\rho_{2}$ is
$[\\![\mathsf{ZC}_{F,s^{\prime}}^{F^{k},s^{\prime\prime}}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})]\\!](u_{i}u_{i+1}\cdots
u_{i+k})$. This leads to the following SDRTE:
$\displaystyle
h=\sum_{\begin{subarray}{c}r,s^{\prime},s^{\prime\prime},p^{\prime},q^{\prime}\,\mid\\\
s=rs^{\prime}s^{\prime\prime},\,(\leftarrow,p,p^{\prime})\in
s^{\prime\prime}\\\
Z_{s^{\prime},s^{\prime\prime}}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q^{\prime})\\\
(\rightarrow,q^{\prime},q)\in
s^{\prime\prime}\end{subarray}}({\mathcal{L}(F^{*},r)}\triangleright{\varepsilon})\cdot\Big{(}$
$\displaystyle\big{(}{\mathcal{L}(F,s^{\prime})}\triangleright{\varepsilon}\big{)}\cdot\mathsf{C}_{E,s^{\prime\prime}}((\leftarrow,p,p^{\prime}))$
$\displaystyle\odot\mathsf{ZC}_{F,s^{\prime}}^{F^{k},s^{\prime\prime}}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})\cdot({F^{*}}\triangleright{\varepsilon})$
$\displaystyle\odot({\mathcal{L}(F,s^{\prime})}\triangleright{\varepsilon})\cdot\mathsf{C}_{E,s^{\prime\prime}}((\rightarrow,q^{\prime},q))\Big{)}$
Notice that if $r,s^{\prime},s^{\prime\prime},p^{\prime},q^{\prime}$ satisfy
the conditions of the sum, then there is a unique index $i$ with
$\varphi(u_{i})=s^{\prime}$ and $\varphi(u_{i+1}\cdots
u_{n})=s^{\prime\prime}$: there is a $(\leftarrow,p,p^{\prime})$-run on
$u_{i+1}\cdots u_{n}$ but not on $u_{i}\cdots u_{n}$. Hence all the Cauchy
products in $h$ are unambiguous and uniquely decompose $w$ in $u_{1}\cdots
u_{i-1}$ matched by $\mathcal{L}(F^{*},r)$, $u_{i}$ matched by
$\mathcal{L}(F,s^{\prime})$ and $u_{i+1}\cdots u_{n}$ matched by
$\mathsf{C}_{E,s^{\prime\prime}}((\leftarrow,p,p^{\prime}))$ and
$\mathsf{C}_{E,s^{\prime\prime}}((\rightarrow,q^{\prime},q))$. Also,
$u_{i}u_{i+1}\cdots u_{i+k}$ is matched by
$\mathsf{ZC}_{F,s^{\prime}}^{F^{k},s^{\prime\prime}}(p^{\prime},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}})$
and $n\geq i+k$.
Finally, for the right-right step
$x=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},p,q)$
we define
$\displaystyle\mathsf{C}_{E,s}(x)=$
$\displaystyle\Big{(}\sum_{n<k}\mathsf{C}_{F^{n},s}(x)\Big{)}+\Big{(}\sum_{s=s^{\prime}s^{\prime\prime}}({\mathcal{L}(F^{*},s^{\prime})}\triangleright{\varepsilon})\cdot\mathsf{C}_{F^{k},s^{\prime\prime}}(x)\Big{)}+h\,.$
The sums above are unambiguous. Indeed, the first case happens exactly when
$w\in F^{n}$ for $n<k$. The second happens exactly when $n\geq k$ and the
right-right run does not visit $u_{n-k}$. Otherwise, the third case happens.
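Operationally, the $(k{+}1)$-chained star used in the left-right and right-left cases slides a window of $k+1$ consecutive $L$-factors over the input and concatenates the outputs of $f$ on the successive windows: left to right for $[L,f]^{(k+1)\star}$, right to left for the reverse star $[L,f]_{r}^{(k+1)\star}$. A small Python sketch of this semantics (names are ours; the input is given directly as its list of $L$-factors, and since the $n<k$ cases are handled separately in the construction, the sketch simply returns $\varepsilon$ when the input is shorter than one window):

```python
def chained_star(factors, f, m):
    """[L, f]^{m*}: apply f to each window of m consecutive
    L-factors, concatenating the outputs left to right."""
    return "".join(f(factors[i:i + m])
                   for i in range(len(factors) - m + 1))

def chained_star_rev(factors, f, m):
    """Reverse star [L, f]_r^{m*}: same windows, outputs
    concatenated from the last window to the first."""
    return "".join(f(factors[i:i + m])
                   for i in range(len(factors) - m, -1, -1))

last = lambda window: window[-1]    # f copying the last factor of its window
print(chained_star(["ab", "aab", "aaab"], last, 2))      # aabaaab
print(chained_star_rev(["ab", "aab", "aaab"], last, 2))  # aaabaab
```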
###### Example 5.19.
Let us continue with our example in Figure 2. Here we illustrate computing an
SDRTE for some $E=F^{*}$ and some left-right step. We consider $F=a^{+}b$ so
that $\varphi(F)=Y_{2}$ as computed in Example 3.5, where we also computed
$Z_{2}=Y_{2}^{4}$. Consider $x=(\rightarrow,q_{6},q_{2})\in Z_{2}$. We explain
how to compute $\mathsf{C}_{E,Z_{2}}(x)$. Since $\varphi(F)=Y_{2}$ is
4-stabilising, the runs following step $x$ over words in
$\mathcal{L}(E,Z_{2})=(a^{+}b)^{\geq 4}$ are 4-forward-progressing, and the
construction in the proof above uses a 5-chained star. (Actually, the runs over step $x$ are 3-forward-progressing, so we could simplify the example below by using a 4-chained star only; we nevertheless use a 5-star in order to follow the construction described in the proof above.)
Let $w=u_{1}u_{2}\cdots u_{n}$ with $n\geq 4$ and $u_{1},u_{2},\ldots,u_{n}\in
a^{+}b$. As can be seen on Figure 12 (with $n=8$), the 4-forward-progressing
run $\rho$ over $w$ following step $x=(\rightarrow,q_{6},q_{2})$ splits as
$\rho=\rho_{0}\cdots\rho_{4}$ where $\rho_{0}$ is the prefix from $u_{1}$ till
first crossing from $u_{4}$ to $u_{5}$, $\rho_{1}$ is the part from $u_{5}$
till the first crossing from $u_{5}$ to $u_{6}$, $\rho_{2}$ is the part of the
run from $u_{6}$ till the first crossing from $u_{6}$ to $u_{7}$, $\rho_{3}$
is the part of the run from $u_{7}$ till the first crossing from $u_{7}$ to
$u_{8}$, and $\rho_{4}$ is the part from $u_{8}$ till exiting at the right of
$u_{8}$.
We obtain
$\displaystyle\mathsf{C}_{(a^{+}b)^{*},Z_{2}}((\rightarrow,q_{6},q_{2}))$
$\displaystyle=(\mathsf{C}_{(a^{+}b)^{4},Z_{2}}((\rightarrow,q_{6},q_{2}))\cdot({(a^{+}b)^{*}}\triangleright{\varepsilon}))\odot[a^{+}b,f]^{5\star}$
where $\displaystyle f$
$\displaystyle=\mathsf{ZC}_{(a^{+}b)^{4},Z_{2}}^{a^{+}b,Y_{2}}(q_{2},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})\odot\Big{(}({(a^{+}b)^{4}}\triangleright{\varepsilon})\cdot\mathsf{C}_{a^{+}b,Y_{2}}((\rightarrow,q_{1},q_{2}))\Big{)}$
$\displaystyle\mathsf{ZC}_{(a^{+}b)^{4},Z_{2}}^{a^{+}b,Y_{2}}(q_{2},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
$\displaystyle=\Big{(}({(a^{+}b)^{4}}\triangleright{\varepsilon})\cdot\mathsf{C}_{a^{+}b,Y_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}))\Big{)}\odot\Big{(}\mathsf{C}_{(a^{+}b)^{4},Z_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{3},q_{1}))\cdot({a^{+}b}\triangleright{\varepsilon})\Big{)}$
and the expressions below were computed in Example 5.18
$\displaystyle\mathsf{C}_{a^{+}b,Y_{2}}((\rightarrow,q_{1},q_{2}))$
$\displaystyle={a^{+}b}\triangleright{\varepsilon}$
$\displaystyle\mathsf{C}_{a^{+}b,Y_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{2},q_{3}))$
$\displaystyle=({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})$
$\displaystyle\mathsf{C}_{(a^{+}b)^{4},Z_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{3},q_{1}))$
$\displaystyle=({(a^{+}b)^{2}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({a^{+}b}\triangleright{\varepsilon})$
$\displaystyle\mathsf{C}_{(a^{+}b)^{4},Z_{2}}((\rightarrow,q_{6},q_{2}))$
$\displaystyle=\Big{(}({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{2}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\Big{)}\odot{}$
$\displaystyle\qquad\Big{(}({a^{+}b}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{2}}\triangleright{\varepsilon})\Big{)}$
Figure 12: Illustration for Example 5.19
After simplifications, we obtain
$\displaystyle f$
$\displaystyle=\mathsf{ZC}_{(a^{+}b)^{4},Z_{2}}^{a^{+}b,Y_{2}}(q_{2},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
$\displaystyle=\Big{(}({(a^{+}b)^{4}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\Big{)}\odot\Big{(}({(a^{+}b)^{2}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{2}}\triangleright{\varepsilon})\Big{)}\,.$
For instance, consider $w=u_{1}u_{2}\cdots u_{8}$ with $u_{i}=a^{i}b$. Then,
$\displaystyle f(u_{1}\cdots u_{5})$ $\displaystyle=a^{5}ba^{3}b$
$\displaystyle f(u_{2}\cdots u_{6})$ $\displaystyle=a^{6}ba^{4}b$
$\displaystyle f(u_{3}\cdots u_{7})$ $\displaystyle=a^{7}ba^{5}b$
$\displaystyle f(u_{4}\cdots u_{8})$ $\displaystyle=a^{8}ba^{6}b$
$\displaystyle[\\![\mathsf{C}_{(a^{+}b)^{4},Z_{2}}((\rightarrow,q_{6},q_{2}))]\\!](u_{1}\cdots
u_{4})$ $\displaystyle=aba^{4}ba^{2}b$
$\displaystyle[\\![\mathsf{C}_{(a^{+}b)^{*},Z_{2}}((\rightarrow,q_{6},q_{2}))]\\!](u_{1}\cdots
u_{8})$
$\displaystyle=aba^{4}ba^{2}ba^{5}ba^{3}ba^{6}ba^{4}ba^{7}ba^{5}ba^{8}ba^{6}b\,.$
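The simplified $f$ can be evaluated mechanically: on a window of five factors $u_{i}\cdots u_{i+4}$ of $a^{+}b$, the first Hadamard factor erases the first four factors and copies the last one, while the second erases two factors, copies one, and erases the remaining two, so $f(u_{i}\cdots u_{i+4})=u_{i+4}\,u_{i+2}$. A small Python check of the first two values above (names are ours):

```python
def f(window):
    """The simplified f above: on five consecutive a^+b factors,
    copy the last factor, then the middle (third) one."""
    u1, u2, u3, u4, u5 = window
    return u5 + u3

u = {i: "a" * i + "b" for i in range(1, 9)}    # u_i = a^i b
print(f([u[j] for j in range(1, 6)]))   # f(u_1...u_5) = a^5 b a^3 b
print(f([u[j] for j in range(2, 7)]))   # f(u_2...u_6) = a^6 b a^4 b
```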
We conclude the section by showing how to construct SDRTEs equivalent to
2DFTs.
###### Theorem 5.20.
Let $\mathcal{A}=(Q,\Sigma,\Gamma,\delta,\gamma,q_{0},F)$ be an aperiodic
2DFT. We can construct an equivalent SDRTE $C_{\mathcal{A}}$ over alphabet
$\Sigma$ with
$\mathsf{dom}([\\![C_{\mathcal{A}}]\\!])=\mathsf{dom}([\\![\mathcal{A}]\\!])$
and $[\\![\mathcal{A}]\\!](w)=[\\![C_{\mathcal{A}}]\\!](w)$ for all
$w\in\mathsf{dom}([\\![\mathcal{A}]\\!])$.
###### Proof 5.21.
We first construct below an SDRTE $C^{\prime}_{\mathcal{A}}$ with
$\mathsf{dom}([\\![C^{\prime}_{\mathcal{A}}]\\!])={\vdash}\mathsf{dom}([\\![\mathcal{A}]\\!]){\dashv}$
and such that
$[\\![\mathcal{A}]\\!](w)=[\\![C^{\prime}_{\mathcal{A}}]\\!]({\vdash}w{\dashv})$
for all $w\in\mathsf{dom}([\\![\mathcal{A}]\\!])$. Then, we obtain
$C^{\prime\prime}_{\mathcal{A}}$ using Proposition 4.3 by
$C^{\prime\prime}_{\mathcal{A}}={({{\vdash}}^{-1}{C^{\prime}_{\mathcal{A}}})}{{\dashv}}^{-1}\,.$
Finally, we get rid of lingering endmarkers in
$C^{\prime\prime}_{\mathcal{A}}$ using Lemma 4.5 to obtain $C_{\mathcal{A}}$
as the projection of $C^{\prime\prime}_{\mathcal{A}}$ on $\Sigma^{*}$.
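The two quotient steps have a simple semantic reading: $({\vdash}^{-1}C)(w)=[\\![C]\\!]({\vdash}w)$, and symmetrically on the right. A Python sketch of this semantics (the actual construction of Proposition 4.3 operates syntactically on expressions; the names and the stand-in function below are ours):

```python
def lquot(marker, f):
    """Semantics of the left quotient marker^{-1} f: run f on the
    input with the left endmarker prepended."""
    return lambda w: f(marker + w)

def rquot(f, marker):
    """Semantics of the right quotient f marker^{-1}."""
    return lambda w: f(w + marker)

# C'' = (|-^{-1} C') -|^{-1} : C''(w) = C'(|- w -|)
Cprime = lambda w: w.upper()            # stand-in for C'_A
Cpp = rquot(lquot("<", Cprime), ">")
print(Cpp("ab"))   # <AB>  -- equals Cprime("<ab>")
```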
Let $\varphi\colon(\Sigma\uplus\\{{\vdash},{\dashv}\\})^{*}\to\mathsf{TrM}$ be
the canonical surjective morphism to the transition monoid of $\mathcal{A}$.
Since $\mathcal{A}$ is aperiodic, the monoid $\mathsf{TrM}$ is also aperiodic.
We can apply Theorem 3.6 to the restriction of $\varphi$ to $\Sigma^{*}$: for
each $s\in\mathsf{TrM}$, we get an unambiguous, stabilising, SD-regular
expression $E_{s}$ with $\mathcal{L}(E_{s})=\varphi^{-1}(s)\cap\Sigma^{*}$.
Let $E={\vdash}\cdot(\bigcup_{s\in\mathsf{TrM}}E_{s})$ which is an
unambiguous, stabilising, SD-regular expression with
$\mathcal{L}(E)={\vdash}\Sigma^{*}$. Applying Theorem 5.14, for each monoid
element $s\in\mathsf{TrM}$ and each step
$x\in\\{\rightarrow,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},\leftarrow\\}\times
Q^{2}$, we construct the corresponding SDRTE $\mathsf{C}_{E,s}(x)$. We also
apply Lemma 5.16 and construct for each state $p\in Q$ an SDRTE
$\mathsf{ZC}_{E,s}^{{\dashv},t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
where $t=\varphi({\dashv})$.
Figure 13: Removing end markers. On the left, a non-trivial zigzag until reaching a final state $q$; on the right, an empty zigzag. $q_{0}$ is the initial state and $q\in F$.
Finally, we define
$C^{\prime}_{\mathcal{A}}=\sum_{\begin{subarray}{c}s,p,q\mid q\in F\\\
(\rightarrow,q_{0},p)\in s\\\
Z_{s,t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q)\end{subarray}}\big{(}\mathsf{C}_{E,s}((\rightarrow,q_{0},p))\cdot({{\dashv}}\triangleright{\varepsilon})\big{)}\odot\mathsf{ZC}_{E,s}^{{\dashv},t}(p,\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
See also Figure 13 illustrating $C^{\prime}_{\mathcal{A}}$. We can easily check
that $C^{\prime}_{\mathcal{A}}$ satisfies the requirements stated above.
###### Example 5.22.
We complete the series of examples by giving an SDRTE equivalent to the
transducer $\mathcal{A}$ of Figure 2 on words in $E_{1}=b(a^{+}b)^{\geq
4}\subseteq\mathsf{dom}(\mathcal{A})$. Notice that by Example 3.5, we have
$\varphi(E_{1})=Z_{1}=Y_{1}Z_{2}$. We first compute
$\mathsf{C}_{E_{1},Z_{1}}((\rightarrow,s,q_{2}))$. We use the unambiguous
product $E_{1}=b\cdot E_{2}$ with $E_{2}=(a^{+}b)^{\geq 4}$. From Example 3.5,
we have
$(\rightarrow,s,q_{0}),(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6})\in
Y_{1}=\varphi(b)$ and
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{0},q_{5}),(\rightarrow,q_{6},q_{2})\in
Z_{2}=\varphi(E_{2})$. We deduce that in the product, the zigzag part consists
of the two steps
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{0},q_{5})$
and
$(\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6})$.
Therefore we obtain:
$\displaystyle\mathsf{C}_{E_{1},Z_{1}}((\rightarrow,s,q_{2}))$
$\displaystyle=\big{(}\mathsf{C}_{b,Y_{1}}((\rightarrow,s,q_{0}))\cdot({E_{2}}\triangleright{\varepsilon})\big{)}\odot\mathsf{ZC}_{b,Y_{1}}^{E_{2},Z_{2}}(q_{0},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})\odot\big{(}({b}\triangleright{\varepsilon})\cdot\mathsf{C}_{E_{2},Z_{2}}((\rightarrow,q_{6},q_{2}))\big{)}$
$\displaystyle\mathsf{ZC}_{b,Y_{1}}^{E_{2},Z_{2}}(q_{0},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})$
$\displaystyle=\big{(}({b}\triangleright{\varepsilon})\cdot\mathsf{C}_{E_{2},Z_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{0},q_{5}))\big{)}\odot\big{(}\mathsf{C}_{b,Y_{1}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}))\cdot({E_{2}}\triangleright{\varepsilon})\big{)}\,.$
Since
$\mathsf{C}_{b,Y_{1}}((\rightarrow,s,q_{0}))=\mathsf{C}_{b,Y_{1}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{90.0}{$\curvearrowleft$}}},q_{5},q_{6}))={b}\triangleright{\varepsilon}$,
we obtain after simplifications
$\displaystyle\mathsf{C}_{E_{1},Z_{1}}((\rightarrow,s,q_{2}))$
$\displaystyle=\big{(}({b}\triangleright{\varepsilon})\cdot\mathsf{C}_{E_{2},Z_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{0},q_{5}))\big{)}\odot\big{(}({b}\triangleright{\varepsilon})\cdot\mathsf{C}_{E_{2},Z_{2}}((\rightarrow,q_{6},q_{2}))\big{)}\,.$
Let $E=(a^{+}b)^{*}$ as in Example 5.19. Since
$\mathcal{L}(E_{2},Z_{2})=\mathcal{L}(E,Z_{2})$ we have
$\mathsf{C}_{E_{2},Z_{2}}(x)=\mathsf{C}_{E,Z_{2}}(x)$ for all steps $x$.
Recall that in Example 5.19 we have computed
$\mathsf{C}_{E,Z_{2}}((\rightarrow,q_{6},q_{2}))$. Moreover,
$\mathsf{C}_{E_{2},Z_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{0},q_{5}))=\mathsf{C}_{(a^{+}b)^{4},Z_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{0},q_{5}))\cdot({E}\triangleright{\varepsilon})$
and a computation similar to Example 5.18 gives
$\displaystyle\mathsf{C}_{(a^{+}b)^{4},Z_{2}}((\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}},q_{0},q_{5}))$
$\displaystyle=({(a^{+}b)^{2}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({a^{+}b}\triangleright{\varepsilon})\,.$
Finally, with the notations of Example 5.19, we obtain
$\displaystyle\mathsf{C}_{E_{1},Z_{1}}((\rightarrow,s,q_{2}))={}\phantom{\odot}$
$\displaystyle\big{(}({b(a^{+}b)^{2}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{+}}\triangleright{\varepsilon})\big{)}$
$\displaystyle{}\odot{}$
$\displaystyle\big{(}({b}\triangleright{\varepsilon})\cdot\mathsf{C}_{E,Z_{2}}((\rightarrow,q_{6},q_{2}))\big{)}$
$\displaystyle={}\phantom{\odot}$
$\displaystyle\big{(}({b(a^{+}b)^{2}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{+}}\triangleright{\varepsilon})\big{)}$
$\displaystyle{}\odot{}$
$\displaystyle\big{(}({b}\triangleright{\varepsilon})\cdot\mathsf{C}_{(a^{+}b)^{4},Z_{2}}((\rightarrow,q_{6},q_{2}))\cdot({(a^{+}b)^{*}}\triangleright{\varepsilon})\big{)}$
$\displaystyle{}\odot{}$
$\displaystyle\big{(}({b}\triangleright{\varepsilon})\cdot[a^{+}b,f]^{5\star}\big{)}$
$\displaystyle={}\phantom{\odot}$
$\displaystyle\big{(}({b(a^{+}b)^{2}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{+}}\triangleright{\varepsilon})\big{)}$
$\displaystyle{}\odot{}$
$\displaystyle\big{(}({b}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{2}}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{*}}\triangleright{\varepsilon})\big{)}$
$\displaystyle{}\odot{}$
$\displaystyle\big{(}({ba^{+}b}\triangleright{\varepsilon})\cdot({a}\triangleright{a})^{+}\cdot({b}\triangleright{b})\cdot({(a^{+}b)^{\geq
2}}\triangleright{\varepsilon})\big{)}$ $\displaystyle{}\odot{}$
$\displaystyle\big{(}({b}\triangleright{\varepsilon})\cdot[a^{+}b,f]^{5\star}\big{)}\,.$
Note that for our example, the application of Theorem 5.20 is particularly simple, since ${\vdash}$ is visited only once, at state $s$, and there are no transitions defined on ${\dashv}$, i.e., $\varphi({\dashv})=\emptyset$. So for
$E^{\prime}={\vdash}E_{1}$, we have
$(\rightarrow,s,q_{2})\in\varphi(E^{\prime})$ and
$\mathsf{ZC}_{E^{\prime},\varphi(E^{\prime})}^{{\dashv},\emptyset}(q_{2},\mathrel{\leavevmode{\rotatebox[origin={c}]{-90.0}{$\curvearrowright$}}})=\varepsilon$
(zigzag part empty as seen in the right of Figure 13). Thus, an expression
equivalent to $\mathcal{A}$ on words in $E_{1}$ effectively boils down to
$\mathsf{C}_{E_{1},Z_{1}}((\rightarrow,s,q_{2}))$. For the sake of simplicity,
we stop the example here and do not provide the expression on the full domain
$\mathsf{dom}(\mathcal{A})=b(a^{*}b)^{\geq 2}a^{*}$. ∎
## 6 Adding composition
Since SDRTEs are used to define functions over words, it seems natural to
consider the composition of functions, as it is an easy-to-understand but
powerful operator. In this section, we discuss other formalisms using
composition as a basic operator, and having the same expressive power as
SDRTEs.
Theorem 1.1 gives the equivalence between SDRTEs and aperiodic two-way
transducers, the latter being known to be closed under composition. Hence,
adding composition to SDRTEs does not add expressiveness, while allowing for
easier modelling of transformations.
Moreover, we prove that, should we add composition of functions, then we can
replace the $k$-chained star operator and its reverse by the simpler $1$-star
$[L,f]^{1\star}$ and its reverse, which in particular are one-way (left-to-right or right-to-left) operators when $f$ is also one-way.
Finally, we prove that we can furthermore get rid of the reverse operators as
well as the Hadamard product by adding two basic functions: _reverse_ and
_duplicate_. The reverse function is straightforward as it reverses its input.
The duplicate function, parameterised by a symbol, say $\\#$, duplicates its input, inserting $\\#$ between the two copies: $\mathsf{dup}_{\\#}(u)=u\\#u$.
###### Theorem 6.1.
The following families of expressions have the same expressive power:
1. SDRTEs,
2. SDRTEs with composition of functions,
3. SDRTEs with composition and chained star restricted to $1$,
4. Expressions with simple functions, unambiguous sum, Cauchy product, $1$-star, duplicate, reverse and composition.
###### Proof 6.2.
It is trivial that $3\subseteq 2$, as 3 is obtained as a syntactic restriction of 2. Although it is not needed in our proof, note that $1\subseteq 2$ holds for the same reason. Now, thanks to Theorem 1.1, we know that SDRTEs are equivalent to aperiodic two-way transducers, which are closed under composition. Hence, composition does not add expressive power and we have $2\subseteq 1$.
To prove that $4\subseteq 3$, we simply have to show that the duplicate and reverse functions can be expressed with SDRTEs using only the $1$-star operator and its reverse. The duplicate function relies on the Hadamard product and is given by the expression:
$\mathsf{dup}_{\\#}=(\mathsf{id}_{\Sigma^{*}}\cdot({\varepsilon}\triangleright{\\#}))\odot\mathsf{id}_{\Sigma^{*}}$
where $\mathsf{id}_{\Sigma^{*}}$ is the identity function and can be written
as $[\Sigma,\mathsf{id}_{\Sigma}]^{1\star}$ where
$\mathsf{id}_{\Sigma}=\sum_{a\in\Sigma}{a}\triangleright{a}$. The reverse
function is also easy to define using the $1$-star reverse:
$\mathsf{rev}=[\Sigma,\mathsf{id}_{\Sigma}]_{r}^{1\star}$
To prove the last inclusion $1\subseteq 4$, we need to express the Hadamard product and the (reverse) $k$-chained star using duplicate, reverse and composition.
The Hadamard product $f\odot g$ is easy to define using $\mathsf{dup}_{\\#}$
where $\\#$ is a fresh marker:
$f\odot g=(f\cdot({\\#}\triangleright{\varepsilon})\cdot
g)\circ\mathsf{dup}_{\\#}\,.$
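This identity has a direct operational reading: duplicate the input around the fresh marker $\\#$, then let the Cauchy product route the first copy to $f$ and the second to $g$; since $\\#$ does not occur in $\Sigma$, the split is unambiguous. A minimal Python sketch (names are ours; inputs assumed $\\#$-free):

```python
def dup(u, marker="#"):
    """dup_#(u) = u # u."""
    return u + marker + u

def hadamard(f, g, marker="#"):
    """f (.) g as (f . (#->eps) . g) o dup_#: split the duplicated
    word at the marker, feed one copy to f and the other to g."""
    def h(w):
        left, _, right = dup(w, marker).partition(marker)
        return f(left) + g(right)
    return h

ident = lambda u: u
rev = lambda u: u[::-1]
print(hadamard(ident, rev)("abc"))   # abccba
```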
We now show how to reduce $k$-star to $1$-star using duplicate and
composition. The proof is by induction on $k$. When $k=1$ there is nothing to
do. Assume that $k>1$. We show how to express $[L,f]^{k\star}$ using
$(k-1)$-star, $1$-star and duplicate. The main idea is to use composition to
mark each factor in $L$ in order to duplicate them, then use a $(k-1)$-star to
have a reach of $k$ factors of $L$ (with some redundant information), and
lastly use composition to prune the input to a form suitable to finally apply
$f$.
More formally, let $\\#$ and $\$$ be two fresh markers and define
$f_{1}=[L,\mathsf{dup}_{\$}\cdot({\varepsilon}\triangleright{\\#})]^{1\star}$
with domain $L^{*}$ and, when applied to a word $u=u_{1}\cdots u_{n}$ with
$u_{i}\in L$, produces $u_{1}\$u_{1}\\#u_{2}\$u_{2}\\#u_{3}\$u_{3}\\#\cdots
u_{n-1}\$u_{n-1}\\#u_{n}\$u_{n}\\#\in\\{\varepsilon\\}\cup\Sigma^{*}\$(\Sigma^{*}\\#\Sigma^{*}\$)^{*}\Sigma^{*}\\#$.
Notice that $\Sigma^{*}\\#\Sigma^{*}\$$ is a $1$-SD prefix code and that
taking $k-1$ consecutive factors from this language allows us to have a reach
of $k$ factors of $L$. Then we define the function $g$
$g=\big{(}\mathsf{id}_{\Sigma^{*}}\cdot({\\#\Sigma^{*}\$}\triangleright{\varepsilon})\big{)}^{k-2}\cdot\big{(}\mathsf{id}_{\Sigma^{*}}\cdot({\\#}\triangleright{\varepsilon})\cdot\mathsf{id}_{\Sigma^{*}}\cdot({\$}\triangleright{\varepsilon})\big{)}$
with domain $(\Sigma^{*}\\#\Sigma^{*}\$)^{k-1}$ and, when applied to a word
$v_{1}\\#v^{\prime}_{1}\$v_{2}\\#v^{\prime}_{2}\$\cdots
v_{k-1}\\#v^{\prime}_{k-1}\$$, produces $v_{1}v_{2}\cdots
v_{k-1}v^{\prime}_{k-1}$. In particular,
$g(u_{i+1}\\#u_{i+2}\$u_{i+2}\\#u_{i+3}\$\cdots
u_{i+k-1}\\#u_{i+k}\$)=u_{i+1}\cdots u_{i+k}$. Finally, we have
$[L,f]^{k\star}=\big{(}({\varepsilon}\triangleright{\varepsilon})+({\Sigma^{*}\$}\triangleright{\varepsilon})\cdot[\Sigma^{*}\\#\Sigma^{*}\$,f\circ
g]^{(k-1)\star}\cdot({\Sigma^{*}\\#}\triangleright{\varepsilon})\big{)}\circ
f_{1}\,.$
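The reduction can be checked on concrete strings. The sketch below (with invented Python names; it is a model of the semantics, not the paper's syntax) implements the direct sliding-window meaning of $[L,f]^{k\star}$, the marking function $f_1$, and the right-hand side of the reduction for $k=2$, for a sample prefix code $L=\{ab,c\}$:

```python
# Model of the k-star reduction for k = 2, on a concrete prefix code.
# All names (factorize, k_star, f1, reduced_2_star) are illustrative.

def factorize(u, L):
    """Unique factorization of u into words of the prefix code L (or None)."""
    out = []
    while u:
        for w in L:
            if u.startswith(w):
                out.append(w)
                u = u[len(w):]
                break
        else:
            return None
    return out

def k_star(u, L, f, k):
    """Direct semantics of [L, f]^{k*}: f applied to each window of k
    consecutive factors, outputs concatenated; empty if fewer than k factors."""
    fs = factorize(u, L)
    if fs is None or len(fs) < k:
        return ""
    return "".join(f("".join(fs[i:i + k])) for i in range(len(fs) - k + 1))

def f1(u, L):
    """[L, dup_$ . (eps -> #)]^{1*}: mark and duplicate each factor.
    Assumes u is in L*."""
    return "".join(w + "$" + w + "#" for w in factorize(u, L))

def reduced_2_star(u, L, f):
    """RHS of the reduction for k = 2: erase the leading Sigma*$ block,
    apply f o g on each Sigma*#Sigma*$ factor (g(v # v' $) = v v'),
    and erase the trailing Sigma*# block."""
    s = f1(u, L)
    if not s:
        return ""
    s = s[s.index("$") + 1:]               # ({Sigma*$} -> eps) on the prefix
    out = []
    while "#" in s and "$" in s[s.index("#"):]:
        v, rest = s.split("#", 1)
        vp, s = rest.split("$", 1)
        out.append(f(v + vp))              # f o g on one Sigma*#Sigma*$ factor
    return "".join(out)                    # remaining Sigma*# block is erased
```

On `u = "abcab"` with factors $ab\cdot c\cdot ab$, both sides produce $f(abc)\,f(cab)$, as the theorem predicts.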
The reverse $k$-star $[L,f]_{r}^{k\star}$ cannot be expressed simply by
composing reverse with $k$-star: reverse applies to the whole input, whereas
the reverse $k$-star swaps the order of the applications of $f$ while leaving
the function $f$ itself untouched. To express it, we instead reverse a
$k$-star applied not to $f$, but to the reverse of $f$. Each application of
$f$ is then reversed twice, and thus preserved. Formally, we have:
$[L,f]_{r}^{k\star}=\mathsf{rev}\circ[L,\mathsf{rev}\circ f]^{k\star}$
## 7 Conclusion
We conclude with some interesting avenues for future work, arising from the
open questions based on this paper.
We begin with complexity questions, and then move on to other directions for
future work. The complexity of our procedure, especially when going from the
declarative language SDRTE to the computing machine 2DFT, is open. This part
relies heavily on the composition of 2DFTs which incurs at least one
exponential blowup in the state space. A possibility to reduce the complexity
incurred during composition is to obtain a _reversible_ 2DFT (2RFT) for each
of the intermediate functions used in the composition. 2RFTs are a class of 2DFTs
which are both deterministic and co-deterministic, and were introduced in
[10], where it is proved that the composition of 2RFTs results in a 2RFT with
polynomially many states in the number of states of the input transducers.
Provided that the composition of 2RFTs preserves aperiodicity, if we could
produce 2RFTs in our procedures in Section 5.1, then we would construct a 2RFT
which is polynomial in the size of the SDRTE. Another open question is the
efficiency of evaluation, i.e., given an SDRTE and an input word, what is the
time complexity of obtaining the corresponding output. This is crucial for an
implementation, along the lines of DReX [2].
Yet another direction is to extend our result to transformations over infinite
words. While Perrin [19] generalized the SF=AP result of Schützenberger to
infinite words in the mid-1980s, Diekert and Kufleitner [12, 13] generalized
Schützenberger’s SD=AP result to infinite words. One could use this SD=AP over
infinite words and check how to adapt our proof to the setting of
transformations over infinite words. Finally, a long-standing open problem in
the theory of transformations is to decide whether a function given by a 2DFT
is realizable by an aperiodic one. This question has been solved in the
one-way case, or in the case when we have _origin_ information [5], but the
general case remains open. We believe that our characterisation of stabilising
runs provided in Section 5.2.1 could lead to a forbidden-pattern criterion to
decide this question.
## References
* [1] Rajeev Alur and Pavol Černý. Expressiveness of streaming string transducers. In 30th International Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2010, volume 8 of LIPIcs. Leibniz Int. Proc. Inform., pages 1–12. Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2010.
* [2] Rajeev Alur, Loris D’Antoni, and Mukund Raghothaman. Drex: A declarative language for efficiently evaluating regular string transformations. In Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2015, Mumbai, India, January 15-17, 2015, pages 125–137, 2015. doi:10.1145/2676726.2676981.
* [3] Rajeev Alur, Adam Freilich, and Mukund Raghothaman. Regular combinators for string transformations. In Thomas A. Henzinger and Dale Miller, editors, Joint Meeting of the 23rd EACSL Annual Conference on Computer Science Logic (CSL) and the 29th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), CSL-LICS ’14, Vienna, Austria, July 14 - 18, 2014, pages 9:1–9:10. ACM, 2014.
* [4] Nicolas Baudru and Pierre-Alain Reynier. From two-way transducers to regular function expressions. In Mizuho Hoshi and Shinnosuke Seki, editors, 22nd International Conference on Developments in Language Theory, DLT 2018, volume 11088 of Lecture Notes in Computer Science, pages 96–108. Springer, 2018.
* [5] Mikolaj Bojanczyk. Transducers with origin information. In Automata, Languages, and Programming - 41st International Colloquium, ICALP 2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part II, pages 26–37, 2014. doi:10.1007/978-3-662-43951-7_3.
* [6] Mikolaj Bojanczyk, Laure Daviaud, and Shankara Narayanan Krishna. Regular and first-order list functions. In Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2018, Oxford, UK, July 09-12, 2018, pages 125–134, 2018. doi:10.1145/3209108.3209163.
* [7] J. Richard Büchi. Weak second-order arithmetic and finite automata. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 6:66–92, 1960.
* [8] Olivier Carton and Luc Dartois. Aperiodic two-way transducers and fo-transductions. In Stephan Kreutzer, editor, 24th EACSL Annual Conference on Computer Science Logic, CSL 2015, September 7-10, 2015, Berlin, Germany, volume 41 of LIPIcs, pages 160–174. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2015. doi:10.4230/LIPIcs.CSL.2015.160.
* [9] Bruno Courcelle. Monadic second-order definable graph transductions: a survey [see MR1251992 (94f:68009)]. Theoret. Comput. Sci., 126(1):53–75, 1994. Seventeenth Colloquium on Trees in Algebra and Programming (CAAP ’92) and European Symposium on Programming (ESOP) (Rennes, 1992). URL: http://dx.doi.org/10.1016/0304-3975(94)90268-2, doi:10.1016/0304-3975(94)90268-2.
* [10] Luc Dartois, Paulin Fournier, Ismaël Jecker, and Nathan Lhote. On reversible transducers. In Ioannis Chatzigiannakis, Piotr Indyk, Fabian Kuhn, and Anca Muscholl, editors, 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, July 10-14, 2017, Warsaw, Poland, volume 80 of LIPIcs, pages 113:1–113:12. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPIcs.ICALP.2017.113.
* [11] Vrunda Dave, Paul Gastin, and Shankara Narayanan Krishna. Regular Transducer Expressions for Regular Transformations. In Martin Hofmann, Anuj Dawar, and Erich Grädel, editors, Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic In Computer Science (LICS’18), pages 315–324, Oxford, UK, July 2018. ACM Press.
* [12] Volker Diekert and Manfred Kufleitner. Bounded synchronization delay in omega-rational expressions. In Computer Science - Theory and Applications - 7th International Computer Science Symposium in Russia, CSR 2012, Nizhny Novgorod, Russia, July 3-7, 2012. Proceedings, pages 89–98, 2012. doi:10.1007/978-3-642-30642-6_10.
* [13] Volker Diekert and Manfred Kufleitner. Omega-rational expressions with bounded synchronization delay. Theory of Computing Systems, 56(4):686–696, 2015.
* [14] Volker Diekert and Manfred Kufleitner. A survey on the local divisor technique. Theoretical Computer Science, 610:13–23, Jan 2016.
* [15] Joost Engelfriet and Hendrik Jan Hoogeboom. MSO definable string transductions and two-way finite-state transducers. ACM Trans. Comput. Log., 2(2):216–254, 2001. URL: http://dx.doi.org/10.1145/371316.371512, doi:10.1145/371316.371512.
* [16] Emmanuel Filiot, Shankara Narayanan Krishna, and Ashutosh Trivedi. First-order definable string transformations. In Venkatesh Raman and S. P. Suresh, editors, 34th International Conference on Foundation of Software Technology and Theoretical Computer Science, FSTTCS 2014, December 15-17, 2014, New Delhi, India, volume 29 of LIPIcs, pages 147–159. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2014. URL: http://dx.doi.org/10.4230/LIPIcs.FSTTCS.2014.147, doi:10.4230/LIPIcs.FSTTCS.2014.147.
* [17] Paul Gastin. Modular descriptions of regular functions. In Algebraic Informatics - 8th International Conference, CAI 2019, Niš, Serbia, June 30 - July 4, 2019, Proceedings, pages 3–9, 2019. doi:10.1007/978-3-030-21363-3_1.
* [18] Robert McNaughton and Seymour Papert. Counter-Free Automata. The MIT Press, Cambridge, Mass., 1971.
* [19] Dominique Perrin. Recent results on automata and infinite words. In Mathematical Foundations of Computer Science 1984, Praha, Czechoslovakia, September 3-7, 1984, Proceedings, pages 134–148, 1984. doi:10.1007/BFb0030294.
* [20] Dominique Perrin and Jean-Eric Pin. Infinite Words: Automata, Semigroups, Logic and Games, volume 141. Elsevier, 2004.
* [21] Marcel Paul Schützenberger. On finite monoids having only trivial subgroups. Information and Control, 8(2):190–194, 1965.
* [22] Marcel-Paul Schützenberger. Sur certaines opérations de fermeture dans les langages rationnels. In Symposia Mathematica, Vol. XV (Convegno di Informatica Teorica, INDAM, Roma, 1973), pages 245–253. Academic Press, 1975.
# Performance of a low gain avalanche detector in a medical linac and
characterisation of the beam profile
T. Isidori P. McCavana B. McClean R. McNulty N. Minafra N. Raab L. Rock
C. Royon Department of Physics & Astronomy, University of Kansas, Lawrence,
KS 66045, USA St. Luke’s Hospital, Rathgar, Dublin 6, Ireland School of
Physics, University College Dublin, Belfield, Dublin 4, Ireland Beacon
Hospital, Sandyford, Dublin 18, Ireland
###### Abstract
Low gain avalanche detectors can measure charged particle fluences with high
speed and spatial precision, and are a promising technology for radiation
monitoring and dosimetry. A detector has been tested in a medical linac where
single particles were observed with a time resolution of 50 ps. The integrated
response is similar to that of a standard ionisation chamber but with a spatial
precision twenty times finer, and a temporal precision over 100 million times
better, with the capability to measure the charge deposited by a single linac
pulse. The unprecedented resolving power allows the structure of the $\sim
3\,\mu$s linac pulses to be viewed and the 350 ps sub-pulses in the train to
be observed.
††journal: Physics in Medicine and Biology
## 1 Introduction
Dosimetry is essential in radiotherapy for understanding linac performance and
for monitoring the dose delivered to the patient. Especially for small-field
dosimetry, a high spatial precision is needed to distinguish between
irradiation of the tumour and the surrounding healthy tissue and is
particularly important in intensity-modulated radiation therapy or microbeam
radiation therapy [1], where micron precision is desirable. Time performance
becomes important in dynamic environments when either the target is in motion
or the dose delivered varies with time. With the advent of FLASH radiotherapy
and the ability to deliver ultra-high radiation doses in fractions of a
second [2], there is a need for improved monitoring techniques [3]. Detailed
studies of the dosimetry and an understanding of the radiobiological
mechanisms of Flash treatments would be greatly assisted by new detectors with
high temporal and spatial resolution.
The standard monitoring tool in a clinical environment is the ionisation
chamber, which has a spatial precision of a few mm and a response time of
about a second. Combining several devices into a 2D array can allow the
radiation profile to be mapped out, and studies have been performed using
commercial devices [4, 5], although these are limited in resolution and have
significant dead-areas.
The technology of choice for high spatial precision is silicon diodes, with
dimensions from a few mm down to $50\,\mu$m, which can achieve resolutions as
fine as about $5\,\mu$m (for a recent review, see [6]). Individual diodes can be
combined to create 2D arrays that have better resolution and granularity than
ion-chamber arrays: an example is the commercial SRS MapCHECK device
consisting of 1013 diodes that cover a square of side 7.7 cm [7]. Higher
spatial precision and much greater granularity can be obtained by designing
the diode array from scratch as strips or pixel implants in a single wafer of
silicon. Such silicon-strip or pixel detectors have routinely been used in the
field of particle physics over the last 30 years, and are capable of detecting
individual charged particles in the dense high-radiation environment of the
Large Hadron Collider (LHC) [8, 9, 10], with an almost 100% efficiency. These
detectors have now become the baseline choice for many commercial activities
[11, 12] and have found several applications in dosimetry (e.g. [13, 14, 15,
16]), due to their high spatial and temporal resolutions. Small pixel sizes
give near-photographic resolution: the Medipix device [17], an array of
256x256 pixels, each $55\,\mu$m square, can be used for X-ray imaging as well
as charged particle detection.
One drawback of silicon devices for clinical settings has been their radiation
hardness. However, significant advances have been made in the last fifteen
years, driven by the necessity of operating in the intense radiation
environment of the LHC. Today, such detectors can be manufactured capable of
withstanding up to $5\times 10^{15}$ 1 MeV neutrons per cm2 (about 200 kGy)
[18, 19].
Time resolved measurements in radiotherapy have generally been limited to
collection times in the ms range. A typical measurement time would be 0.25 s
for scanned beam profiles and depth doses in water tanks. The requirement for
finer resolved measurements arrived with the advent of dynamic dose delivery
where the gantry and MLC leaves move continuously during a treatment. The 2D
arrays developed to analyse these doses include the Delta4 diode phantom [20],
which with the use of a trigger pulse, can collect individual pulse doses over
tens of $\mu$s. Current interest in FLASH therapy has led to further
refinement with investigations into the individual $\mu$s pulses. Recently,
scintillator fibres were shown to achieve sub-$\mu$s resolution for X-rays
from a clinical Linac [21] using Cerenkov light from a fused silica cylinder
to view the $\mu$s pulse in a FLASH beam, though this suffered from a ringing
artifact. Diamond detectors are also capable of ns resolution of X-ray pulses
[22].
The signal collection time for typical silicon strip or pixel detectors is
about 10 ns, which when readout, for example by the Timepix4 chip [23], allows
the leading edge to be determined with a resolution of 200 ps. The fastest
time resolution for silicon sensors, however, is obtained with low gain
avalanche detectors (LGAD), which, married with specialised electronics, can
record charged particles with a typical precision of 30 ps.
This paper describes the performance of an LGAD, of dimensions 2.9$\times$0.5 mm2,
exposed to an electron beam of an Elekta linac. This is a radiation-hard
sensor designed to measure the timing difference (30 ps) between protons at
the LHC, which are 1 cm apart and travelling essentially at the speed of
light. The small dimensions of the device and the fast response mean that it
can measure fluence rates of up to 100 million particles per square millimeter
per second, and is thus capable of detecting single electrons in the linac
pulse. This allows the time-structure of the pulse, usually approximated as a
square wave, to be investigated in detail. Understanding the temporal
deposition of dose is of interest to radiobiologists as well as to machine
engineers in understanding the operation of a linac.
The apparatus used in these studies is described below. Section 2.1 gives an
overview of the linear accelerator, while the LGAD is described in Section 2.2
and the methodology is outlined in Section 2.3. The data analysis and results
are presented in Section 3, and conclusions are given in Section 4.
## 2 Apparatus and methodology
### 2.1 The Elekta linac
Figure 1: Idealised profile of linac pulse
An Elekta Precise (Elekta AB, Sweden) dual modality linear accelerator at
Saint Luke’s Hospital, Dublin, was used in these studies. This has the
capability to produce photon beams of energy 6 MV with dose rates up to 600
MU/min and electron beams with energies between 4 and 18 MeV and dose rates of
600 MU/min. The linac was decommissioned for clinical use several years ago
but remains on a comprehensive maintenance and quality assurance program,
including dose calibration, which allows its use for research purposes. The
research linac is also equipped with a multileaf collimator (MLC), an
electronic portal imaging device (EPID) and a range of applicators for
electron beam collimation. The accelerator operates at a pulse repetition
frequency of 400 Hz for photon beams and 200 Hz for electron beams, with each
pulse being about 3.2$\,\mu$s long. Each of these pulse sequences
contains thousands of 30 ps sub-pulses separated by 350 ps. The frequency of
this fine structure is 2.856 GHz. An idealised schematic of the pulse is given
in Fig. 1.
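For orientation, these numbers are mutually consistent: 350 ps is the period of the 2.856 GHz RF, and a 3.2 $\mu$s macro-pulse then contains roughly nine thousand sub-pulses. A quick check (our own arithmetic on the values quoted above; the sub-pulse count is not a figure stated in the text):

```python
# Consistency check of the quoted pulse parameters.
pulse_length_s = 3.2e-6          # macro-pulse length, ~3.2 us
rf_frequency_hz = 2.856e9        # fine-structure (RF) frequency

sub_pulse_spacing_ps = 1e12 / rf_frequency_hz    # period of the RF, in ps
n_sub_pulses = pulse_length_s * rf_frequency_hz  # sub-pulses per macro-pulse

print(f"{sub_pulse_spacing_ps:.0f} ps spacing, "
      f"{n_sub_pulses:.0f} sub-pulses per macro-pulse")
```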
The dose was measured with a standard radiotherapy ion chamber. A PTW Semiflex
Ionisation Chamber 31010 was operated at +400 V. It consists of two layers of
material (0.55 mm PMMA and 0.15 mm graphite) that encapsulate the sensitive
volume of the detector. The total active bulk is a cylinder of radius 2.75 mm
and height 6.5 mm. A PTW Tandem dual-channel electrometer was used to read
out the ion chamber with 10 fA resolution. Its minimum measuring interval of
10 ms represents the bottleneck for the time required for every acquisition.
The dead time between consecutive measurements is determined by the time
(typically several seconds) needed for resetting the device.
### 2.2 The low gain avalanche detector
LGADs are considered one of the most promising technologies currently
available for fast and precise measurements of charged particles in high-rate
environments. The energy deposited by a particle passing through the sensor
contributes to the creation of free charges (electrons and holes) inside the
active volume that drift towards the electrodes once an external electric
field is applied. The motion of the charges generates a current that is
amplified by a thin multiplication layer (hence the name LGAD), which is
further amplified and shaped by the read-out electronics.
The typical gain of an LGAD is in the range 5-20, significantly lower than that
of an Avalanche Photo Diode (APD), whose gain can be up to 1000. The lower
number of free charges generated per unit of deposited energy reduces the
collection time, which in turn reduces the dead time after
the avalanche process. In addition, the sensor has a lower dark current.
Thanks to these properties, it is possible to design a thin ($\sim$ tens of
microns) sensor that produces a fast, low-noise signal.
Figure 2: Left: The KU-board mounted on the horizontal rail of an empty PTW
3D scanning water tank at St. Luke’s Hospital, Dublin. Right: A close-up of
the LGAD sensor aligned and glued to the KU-board. Only a single pixel
(outlined in red) was used for these tests.
The device used in the studies is displayed in Fig. 2 and consists of an LGAD
sensor mounted on a bespoke read-out board, developed at the University of
Kansas (KU). Only one of the pixels on the sensor was used with an active area
of 2.9$\times$0.5 mm2. The sensor was originally designed to operate inside
experiments at the Large Hadron Collider [24].
The KU read-out board [25] is designed to host the sensor and the full
amplification chain and is shown in Fig. 2. This compact and robust Printed
Circuit Board (PCB) is designed to be easily configurable for use in different
applications. It consists of 8 identical two-stage trans-impedance amplifiers
and a 20x20 mm2 pad that provides bias to the sensor up to $\sim$500 V. Sensor
pads can be wire bonded to one of the amplifiers. Due to the high input
impedance, the input capacitance has a large impact on the output signal;
therefore, the sensor is placed only a few millimeters from the amplifier (to
reduce the capacitance of the wire) and very high speed SiGe transistors are
used (to reduce parasitic capacitance). Previous tests with a similar
configuration [26], using minimum ionizing particles, showed a typical rise
time of the order of $\sim$ 600 ps, while the amplitude of the signal depends
on the sensor properties, i.e. capacitance and gain.
The amplifier on the KU-board reads the current generated by the sensor,
integrated on the capacitance of the sensor itself. The input impedance of the
amplifier directly affects the integration time. In fact, a higher input
impedance gives a better signal-to-noise ratio (SNR) but a slower signal.
However, a high impedance also requires more time to restore the baseline by
removing the charge from the sensor. This presents a potential problem as
subsequent signals will add to the charge already present and the amplifier
will soon saturate. The choice of the input impedance leads to a compromise
between higher rates and SNR. In a similar way, since LGADs have a gain layer,
higher gains lead to more charge collected and a higher SNR but a lower
maximum rate.
The operating voltage for the detector was characterised during a previous
test using a 180 GeV pion beam inside the LHC North Area Facility at CERN
[25]. The gain of the device depends strongly on the applied electric field. A
bias voltage too low will lead to inefficiencies in pulse shaping while, with
a bias voltage too high, the sensor will generate too much charge during the
passage of the particle, limiting the maximum rate of operation. At high
voltages (see Fig. 13 in [25]), the detector achieves a time precision better
than 25 ps. However, for the present studies, the detector was operated at a
bias voltage of 150 V, in order to give both good SNR and a fast response of
the detector. At this working point, the expected time precision is about 50
ps.
### 2.3 LGAD measurements performed in the linac beam
A 6 MeV electron beam was used without an electron applicator and with the
primary collimators set to 3x3 cm2. A Neodymium N40 permanent magnet was
placed on the shadow tray, 12 cm below the collimator faceplate, to bend the
beam so that it spread along the horizontal axis. This isolates the charged
and neutral components of the beam, separates electrons with different
momenta, and reduces the contamination of bremsstrahlung photons. The LGAD was
mounted on the horizontal rail of an empty PTW 3D scanning water tank so its
position could be changed remotely. The sensor was aligned vertically using
the in-room positioning lasers and horizontally using the linac’s light-field
crosshair. The moving support was used to scan the LGAD along the diverged
beam and its output was recorded as a function of position relative to the
source of radiation. Fig. 3 provides a simplified sketch of the setup used
during the tests.
Figure 3: The sketch shows the configuration used during the studies of the
electron beam. At the top is the linac head. The electron beam was deflected
by the Neodymium permanent magnet that was placed at the exit of the multi-
leaf collimator (MLC). The KU-board was mounted on a moving support to scan
the profile of the beam and connected to the oscilloscope. The thyratron
provided the trigger to the digitizer in the oscilloscope and the data were
recorded on a computer.
The electrical current induced by a particle crossing the sensor was read out
by the two-stage amplification circuit, producing a signal with a rise time of
$\sim 600$ ps and a total width of $5-10$ ns. The signals were digitized using
an Agilent™ DSO8104A Infiniium oscilloscope with a bandwidth of 1 GHz and a
sampling rate of 4 GSa/s.
For each position of the sensor, the response of the detector to 200 pulses of
the linac was recorded. Fig. 4 shows a typical signal acquired by the
oscilloscope (in yellow) during one pulse. The acquisition was triggered by a
signal from the thyratron in the linac (in purple) corresponding to the
electron injection inside the first acceleration stage. The data were
subsequently analysed offline.
Figure 4: Typical response of the LGAD (yellow) to one linac pulse as
recorded by the oscilloscope. The horizontal axis records time: the distance
between the two red vertical scale-lines is 2.34$\,\mu$s. The vertical axis
records voltage: the distance between the two red horizontal scale-lines is 2
V. The purple trace is the signal from the thyratron that provided the trigger
to start the acquisition.
## 3 Experimental results
The data taken were analysed in three steps. First, the total charge collected
by the LGAD per linac pulse was compared to that measured by an ion chamber.
Second, an algorithm was developed to identify individual charged particles
traversing the detector, with a time precision of about 50 ps. Third, this
precise time discrimination was used to investigate the structure of the linac
pulse.
### 3.1 Integrated charge collected by LGAD
Fig. 4 shows the data obtained during one linac pulse. The beam is present
from roughly the left-most red marker. The data were corrected for the DC
offset, which was measured by averaging data from 0.5 to 1.5$\,\mu$s before
the pulse, while the RMS of the data in this region defined the intrinsic
noise, $\sigma_{\rm noise}$. The total charge collected during each linac
pulse was found by integrating the signal.
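The offset correction and integration can be sketched as follows. This is an illustrative implementation with invented names, assuming a uniformly sampled waveform with $t=0$ at the start of the pulse; the pre-pulse window is the 0.5-1.5 $\mu$s region quoted above.

```python
import numpy as np

def integrated_signal(v, t):
    """DC-correct a waveform using the quiet region 0.5-1.5 us before the
    pulse (t = 0 at the start of the pulse), then integrate the corrected
    signal. Returns the integral and the baseline RMS (sigma_noise)."""
    pre = (t >= -1.5e-6) & (t < -0.5e-6)   # pre-pulse window
    baseline = v[pre].mean()               # DC offset
    sigma_noise = v[pre].std()             # intrinsic noise
    corrected = v - baseline
    dt = t[1] - t[0]                       # uniform sampling assumed
    return corrected.sum() * dt, sigma_noise
```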
The detector was moved laterally away from the central beam axis in 1 cm steps
and at each position the LGAD response was recorded for 200 linac pulses as
was the response of an ion chamber (described in Sec. 2.1) placed at the same
location. The average and RMS of the integrated signal in the LGAD are plotted
in Fig. 5(left) and compared to the ion-chamber response at all locations
except those on the beam-axis. A linear response is observed over most of the
range, which is fit with a straight line. The agreement between the two
detectors shows that the LGAD gives equivalent results to the ion chamber, but
with a time resolution of 3$\,\mu$s, rather than seconds.
Using the fit results, the LGAD response is scaled to that of the ion chamber.
Fig. 5(right) shows the average and RMS of the integrated charge in the LGAD
compared to the ion chamber as a function of distance from the axis. As
expected, the responses are broadly similar for charged particles (the region
$\geq 2$ cm from the beam) since both detectors have a highly efficient
response to the passage of charged particles. Differences here can be
attributed to small non-linearities in the read-out response [25] as a
function of deposited charge.
The principal difference between the LGAD and the ion chamber occurs under the
(undeflected) beam position where there is a large flux of low-energy photons
[27]. Differences in the responses of silicon detectors and ion chambers to
photons have previously been observed: sometimes a larger signal is observed
by the silicon and sometimes a smaller one [28]. Since neither the LGAD nor
the ion chamber directly detects photons, the response depends on how photons
convert, both in the detectors and their housings. A complete understanding of
the LGAD response to photons requires a full simulation of both the detector
and the board on which it is mounted. In this paper, we limit further
discussion to its response to charged particles.
The variation in the LGAD response, as reflected by the RMS, is due to the
number of charged particles crossing the detector during a single linac pulse
and the response of the electronics to variations in their temporal
distribution.
Figure 5: Left: Correlation between the charge in the ion chamber and the
average integrated signal in the LGAD measured for each linac pulse. The red
line shows the result of a straight-line fit. Right: Charge measured with the
ion chamber and average integrated signal recorded in the LGAD per linac pulse
as a function of distance from the beam axis. The error bars indicate the RMS
of the signals obtained over 200 linac pulses.
### 3.2 Single particle counting
The ion chamber response time is typically a few seconds and requires several
hundred linac pulses to obtain a measurement. In contrast, the LGAD
measurements presented above are for a single pulse. However, even greater time
precision is possible by identifying each charged particle that traverses the
detector. Fig. 6 shows details of the output signal for a single triggered
event. In the zoomed display the signals resulting from charged particles
crossing the detector are clear, and indicated in the figure by red markers.
Figure 6: Top: Part of a typical LGAD signal obtained during one linac pulse;
the abscissa shows the time elapsed since the trigger. Bottom: Zoom on the
selected area (highlighted in red). The width of the green band corresponds to
five times the noise. Isolated excursions from the baseline are indicated by
red triangles and correspond to candidate charged particles passing through
the silicon sensor, as identified by the algorithm.
An automated algorithm was implemented to identify particles, and was designed
such that it could be applied in real-time to give an instantaneous response
in a clinical setting. It proceeds in the following steps.
1. The data are sent through a low-pass filter in order to reduce fluctuations due to noise.
2. For each sample, a baseline is defined using the average of the channels in the time interval 2-4.5 ns before it. This interval was chosen by inspection of the signal pulses (Fig. 6). A typical signal takes 1.2 ns (5 samples) to go from the baseline to the maximum, after which the signal slowly decays. To account for statistical fluctuations in identifying the maximum channel, the window to define the baseline starts 8 samples before the putative maximum. To account for statistical fluctuations in the baseline level, the calculation is performed over ten samples.
3. A search is made for candidate channels greater than $5\sigma_{\rm noise}$ above the baseline.
4. The candidate channel is required to be the highest channel inside a window of $\pm 3$ ns. This time interval (corresponding to $\pm 12$ samples) is chosen by inspection of isolated signals, and is the typical time after which the signal is 50% of the value at the peak.
The time-stamp for an individual particle crossing the detector, $T$, was
taken as the time at which the leading edge of the signal crossed a threshold
that was set to be 60% of the distance from the baseline to the peak. As
discussed in Sec. 2.2, with the operating conditions chosen, the LGAD has a
time precision of about 50 ps.
Due to the high signal-to-noise of the LGAD, this algorithm is efficient for
particles isolated in time. However, when the time spacing between consecutive
particles crossing the detector becomes significantly lower than the width of
the signal pulse (around 10 ns), the identification efficiency decreases. For
overlapping signals (e.g. between 4010-4020 ns in Fig. 6), the reconstruction
algorithm can fail to identify all particles.
Figure 7: Left: Comparison between the charge measured by the ion chamber and
the average number of particles per pulse measured by the LGAD, as a function
of detector position. Right: Correlation between the ion chamber and LGAD,
before and after an efficiency correction for high fluence rates. The error
bars indicate the RMS on the number of particles detected over 200 linac
pulses.
The average number of particles found in a single linac cycle as a function of
detector position is shown in the left panel of Fig. 7 and compared to the ion
chamber results. It can be seen that the shapes are similar, although at high
fluence rates, there is an inefficiency due to the intrinsic limit to the
algorithm described above. The inefficiency occurs when two particles pass
through the detector in quick succession such that the signals overlap. The
identification algorithm will always reject at least one when they are closer
than 3 ns, while when they are separated by more than 10 ns, the algorithm is
fully efficient. On average, the algorithm fails when two particles pass
through the detector within 6.5 ns of each other. For a Poisson beam, the
probability that a given particle is not followed by another within 6.5 ns is
$\exp(-6.5\mu)$, where $\mu$ is the rate of particles going
through the detector in GHz, and this defines an efficiency correction. Fig. 7
(right panel) shows the correspondence between the charge recorded by the ion
chamber and the number of particles observed in the LGAD, with and without
this efficiency correction. Before the correction, the response can be
modelled with a second-order polynomial, while after, a linear response is
observed.
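Inverting this efficiency correction requires a little care, because the rate $\mu$ depends on the true (corrected) count. A fixed-point sketch, where only the 6.5 ns loss window comes from the text and the pulse length and scheme are our assumptions:

```python
import math

def correct_counts(n_obs, pulse_ns=2850.0, dead_ns=6.5, iters=50):
    """Invert the pile-up inefficiency n_obs = n_true * exp(-dead_ns * mu),
    where mu = n_true / pulse_ns is the particle rate in GHz.

    pulse_ns and the fixed-point iteration are illustrative assumptions;
    dead_ns = 6.5 ns is the average loss window quoted in the text.
    """
    n_true = n_obs
    for _ in range(iters):
        mu = n_true / pulse_ns              # rate in GHz (particles per ns)
        n_true = n_obs * math.exp(dead_ns * mu)
    return n_true
```

The iteration converges only while the observed rate is below the intrinsic maximum of the curve $n_{\rm true}\,e^{-\mathrm{dead}\cdot\mu}$, consistent with the algorithm's limit at high fluence rates.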
The method of counting particles gives equivalent results to integrating the
charge but now the detector is operating as a single-quantum detector: so long
as the rate of charged particles passing through the detector is less than 100
MHz, it can resolve single electrons with a time resolution of about 50 ps,
corresponding to the precision with which the leading edge can be measured.
### 3.3 Characterisation of the linac beam
The excellent time resolution and linearity of the LGAD allow the dose
delivered by the linac to be studied as a function of time within the pulse.
The number of particles recorded at all detector positions was summed and
plotted as a function of time. This allows the temporal profile of the beam
intensity to be seen, and since the relative dose delivered scales with the
charged particle flux, this identifies the fluctuations in the delivered dose
during the linac pulse. The result is shown in Fig. 8. It approximates to the
idealised square pulse shown in Fig. 1 but the precision of the LGAD allows a
clear sub-structure to be observed. Furthermore, the width of the pulse is
seen to be smaller than nominal at $2.85\pm 0.01\,\mu$s.
Figure 8: Particle flux measured by the LGAD, summed over all positions, as a
function of time on a linear (left) and log (right) scale.
Another feature that can be investigated is the train of sub-pulses that make
up the microsecond pulse (see Fig. 1). The linac operates with a radio-
frequency of 2.856 GHz so the sub-pulses (each expected to have a width of 30
ps) are separated by 350 ps. Given a resolution of about 50 ps on the time-
stamp for each particle crossing the detector, the sub-pulses should be
visible so long as all other timing parameters within the linac are well
aligned and have small jitter. Unfortunately, a large timing uncertainty was
found between the thyratron trigger signal and the arrival of the linac beam
(shown as the purple and yellow traces in Fig. 4).
In the absence of a precise trigger and to reduce potential decoherence over
several hundreds of ns, the sub-pulse structure was searched for using the
distribution of the difference between the time-stamps for consecutive
particles crossing the detector, $\Delta T$, which is shown in Fig. 9 (left).
For Poisson-distributed events, this distribution should be an exponential.
However, this idealised shape is modified due to the particle-finding
algorithm that introduces inefficiencies for $\Delta T<10$ ns, while the
requirement for a local maximum within $\pm 3$ ns implies no two particles
have $\Delta T$ below this value. To search for periodic modulation in this plot, the
data were binned in steps of 10 ps and the autocorrelation function,
$R(\tau)=\frac{1}{N-\tau}\sum_{i=1}^{N-\tau}y_{i}y_{i+\tau}$ was calculated,
where $y_{i}$ is the number of entries in bin $i$. The coarse-grained
structure of $\Delta T$ results in a roughly exponential shape to $R(\tau)$,
superimposed on top of which is a fine-grained sinusoidal pattern. Fig. 9
(right) shows $R$ as a function of $\tau$, after the coarse-grained structure
has been subtracted. The periodic structure is fitted with a modulated sine
function for which the period is determined to be $346\pm 3$ ps, consistent
with the operating frequency of the linac of 2856 MHz. Thus, in addition to
observing the large-scale $\sim 3\,\mu$s-wide structure of the linac pulses
that approximate to, but are not precisely square waves, the detector is
capable of resolving the individual 350 ps-wide sub-pulses in the beam.
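The autocorrelation defined above is straightforward to compute from the binned $\Delta T$ histogram; a minimal sketch:

```python
import numpy as np

def autocorrelation(y, max_lag):
    """R(tau) = (1/(N - tau)) * sum_i y_i * y_{i+tau}, as defined in the text.

    `y` holds the number of entries in each 10 ps bin of the Delta-T
    histogram; returns R for tau = 1 .. max_lag (in bins)."""
    n = len(y)
    return np.array([np.dot(y[:n - t], y[t:]) / (n - t)
                     for t in range(1, max_lag + 1)])
```

For a histogram modulated with period $k$ bins, $R(\tau)$ peaks at multiples of $k$; with 10 ps bins, the 350 ps sub-pulse spacing appears as a peak near $\tau = 35$.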
Figure 9: Distribution of the time between consecutive particles in the beam.
The left plot shows the full distribution. The right plot shows the
autocorrelation fitted with a modulated sine function.
## 4 Conclusions
The results of the beam test at Saint Luke’s hospital, Dublin provide a proof
of concept for the use of new generation fast detectors for medical
applications. Our results for charged particles show a similar response to
standard ion chamber measurements but with an active area a factor twenty
smaller and a time resolution a factor one hundred million better. Using this
new detector we are able to observe the time profile of the charged particles
in the linac beam and detect the pulse modulation due to the RF. We believe
this is the first time either structure has been measured with a particle
detector.
Studies of single particle interactions in high rate medical accelerators
cannot be performed with standard diagnostic tools. An LGAD read out with the
front-end electronics developed at the University of Kansas shows
unprecedented performance in beam monitoring for radiotherapy machines,
and the same strategy of data taking and analysis can be applied for hadron
therapy machines. The much shorter timescale of operation of LGADs suggests a
promising alternative for precise dose measurements and has particular
applicability when a rapid response is required, e.g. for fast profile
measurements or scanning of moving beam profiles.
LGAD technology is still in its infancy and many improvements are to be
expected. More sophisticated electronics should make it possible to reduce the
pulse-length by a factor of ten and this will allow an order of magnitude
improvement in the single-particle rates that it can measure. LGAD sensors and
the associated electronics currently only operate with a few pads or strips,
with current implementations up to 16 channels. However, the next generation
of particle physics experiments have plans for arrays of these sensors read-
out using a front-end ASIC [29], and there are already prototypes of sensors
with 512 pads of $1.3\times 1.3$ mm$^{2}$ [9].
As the field of medical physics is currently moving towards increasing the
amount of radiation delivered per unit of time with the development of
initiatives like Flash-Radiotherapy linear accelerators, the introduction of
fast detectors becomes important: LGADs represent a promising option for beam
monitoring and dosimetry as they combine excellent spatial and temporal
precision.
## References
* [1] M.A. Grotzer, E. Schültke, E. Bräuer-Krisch, and J.A. Laissue. Phys Med, 31(6):564, 2015. doi:10.1016/j.ejmp.2015.02.011.
* [2] M. Lempart, B. Blad, G. Adrian, S. Bäck, T. Knöös, C. Ceberg, and K. Petersson. Radiother. Oncol., 139:40, 2019. doi:10.1016/j.radonc.2019.01.031.
* [3] M.R. Ashraf, M. Rahman, R. Zhang, B.B. Williams, D.J. Gladstone, B.W. Pogue, and P. Bruza. Frontiers in Physics, 8:328, 2020. doi:10.3389/fphy.2020.00328.
* [4] S. Alashrah, S. Kandaiya, S.Y. Yong, and S.K. Cheng. Nucl. Instrum. Meth. A, 619(1):181, 2010. doi:10.1016/j.nima.2009.10.125.
* [5] Akbar Anvari, Seyed Mahmoud Reza Aghamiri, Seyed Rabie Mahdavi, and Parham Alaei. Journal of Radiotherapy in Practice, 14(2):194, 2015. doi:10.1017/S1460396914000442.
* [6] Anatoly B Rosenfeld, Giordano Biasi, Marco Petasecca, Michael L F Lerch, Giulio Villani, and Vladimir Feygelman. Physics in Medicine & Biology, 65(16):16TR01, 2020. doi:10.1088/1361-6560/aba163.
* [7] Saeed Ahmed, Geoffrey Zhang, Eduardo G. Moros, and Vladimir Feygelman. Journal of Applied Clinical Medical Physics, 20(10):13, 2019. doi:10.1002/acm2.12696.
* [8] E. Bossini. Instruments, 2(4):21, 2018. doi:10.3390/instruments2040021.
* [9] CMS collaboration. Technical Report CERN-LHCC-2019-003. CMS-TDR-020, 2019. Available from: https://cds.cern.ch/record/2667167.
* [10] ATLAS Collaboration. Technical Report CERN-LHCC-2018-023. LHCC-P-012, 2018. Available from: https://cds.cern.ch/record/2623663.
* [11] A.S. Tremsin and J.V. Vallerga. Radiation Measurements, 130:106228, 2020. doi:10.1016/j.radmeas.2019.106228.
* [12] S. Procz, C. Avila, J. Fey, G. Roque, M. Schuetz, and E. Hamann. Radiation Measurements, 127:106104, 2019. doi:10.1016/j.radmeas.2019.04.007.
* [13] S Manolopoulos, C Wojnecki, R Hugtenburg, M A Jaafar Sidek, G Chalmers, G Heyes, and S Green. Physics in Medicine and Biology, 54(3):485, 2009. doi:10.1088/0031-9155/54/3/002.
* [14] A. Bocci, M.A. Cortés-Giraldo, M.I. Gallardo, J.M. Espino, R. Arráns, M.A.G. Alvarez, Z. Abou-Haidar, J.M. Quesada, A. Pérez Vega-Leal, and F.J. Pérez Nieto. Nucl. Instrum. Meth. A, 673:98, 2012. doi:10.1016/j.nima.2012.01.018.
* [15] Francesca Bisello, David Menichelli, Monica Scaringella, Cinzia Talamonti, Margherita Zani, Marta Bucciolini, and Mara Bruzzi. Nucl. Instrum. Meth. A, 796:85, 2015. doi:10.1016/j.nima.2015.04.064.
* [16] Sarah J. Alnaghy et al. Medical Physics, 47(4):1920, 2020. doi:10.1002/mp.14016.
* [17] Anatoly Rosenfeld, Marco Silari, and Michael Campbell. Radiation Measurements, 139:106483, 2020. doi:10.1016/j.radmeas.2020.106483.
* [18] M. Moll. IEEE Transactions on Nuclear Science, 65(8):1561, 2018. doi:10.1109/TNS.2018.2819506.
* [19] M. Ferrero et al. Nucl. Instrum. Meth. A, 919:16, 2019. doi:10.1016/j.nima.2018.11.121.
* [20] R. Sadagopan, J.A. Bencomo, R.L. Martin, G. Nilsson, T. Matzen, and P.A. Balter. J Appl Clin Med Phys., 10(2):104, 2009. doi:10.1120/jacmp.v10i2.2928.
* [21] Vincent Favaudon, Jean-Michel Lentz, Sophie Heinrich, Annalisa Patriarca, Ludovic de Marzi, Charles Fouillade, and Marie Dutreix. Nucl. Instrum. Meth. A, 944:162537, 2019. doi:10.1016/j.nima.2019.162537.
* [22] L.Y. Liu, X.P. Ouyang, J.F. Zhang, P. Jin, and C.L. Su. Diamond and Related Materials, 73:248, 2017. doi:10.1016/j.diamond.2016.10.002.
* [23] R. Ballabriga, M. Campbell, and X. Llopart. Radiation Measurements, 136:106271, 2020. doi:10.1016/j.radmeas.2020.106271.
* [24] Albrow et al. Technical Report CERN-LHCC-2014-021, TOTEM-TDR-003, CMS-TDR-13, 2014. Available from: http://cds.cern.ch/record/1753795.
* [25] N. Minafra, H. Al Ghoul, R. Arcidiacono, N. Cartiglia, L. Forthomme, R. Mulargia, M. Obertino, and C. Royon. Nucl. Instrum. Meth. A, 867:88, 2017. doi:10.1016/j.nima.2017.04.032.
* [26] M. Berretti et al. JINST, 12(03):P03024, 2017. doi:10.1088/1748-0221/12/03/P03024.
* [27] Jun Deng, Steve B Jiang, Todd Pawlicki, Jinsheng Li, and C-M Ma. Physics in Medicine and Biology, 46(5):1429, 2001. doi:10.1088/0031-9155/46/5/308.
* [28] C McKerracher and D I Thwaites. Physics in Medicine and Biology, 44(9):2143, 1999. doi:10.1088/0031-9155/44/9/303.
* [29] N. Cartiglia et al. Nucl. Instrum. Meth. A, 979:164383, 2020. doi:10.1016/j.nima.2020.164383.
# Bias reduction as a remedy to the consequences of infinite estimates in
Poisson and Tobit regression
Susanne Köll<EMAIL_ADDRESS>, Ioannis Kosmidis<EMAIL_ADDRESS>,
Christian Kleiber<EMAIL_ADDRESS>, Achim Zeileis<EMAIL_ADDRESS>
Faculty of Economics and Statistics, Universität Innsbruck, Austria;
Department of Statistics, University of Warwick & The Alan Turing Institute,
United Kingdom; Faculty of Business and Economics, Universität Basel,
Switzerland
###### Abstract
Data separation is a well-studied phenomenon that can cause problems in the
estimation and inference from binary response models. Complete or quasi-
complete separation occurs when there is a combination of regressors in the
model whose value can perfectly predict one or both outcomes. In such cases,
and such cases only, the maximum likelihood estimates and the corresponding
standard errors are infinite. It is less widely known that the same can happen
in further microeconometric models. One of the few works in the area is Santos
Silva and Tenreyro (2010) who note that the finiteness of the maximum
likelihood estimates in Poisson regression depends on the data configuration
and propose a strategy to detect and overcome the consequences of data
separation. However, their approach can lead to notable bias on the parameter
estimates when the regressors are correlated. We illustrate how bias-reducing
adjustments to the maximum likelihood score equations can overcome the
consequences of separation in Poisson and Tobit regression models.
###### keywords:
Bias reduction , Data separation , Shrinkage
###### JEL:
C13, C24, C25, C52
## 1 Sources of separation in regression models
Suppose that the non-negative random variable $y_{i}$ has a distribution with
a point mass at zero. (The discussion here extends to the case where the
support of the response is bounded below or above: if the lower boundary is a
constant $\underline{b}\neq 0$, we can use $y_{i}-\underline{b}$; similarly,
if the upper boundary is $\overline{b}$, we can use $\overline{b}-y_{i}$.)
Suppose that the distribution function of
$y_{i}$ is $F(\cdot;\mu_{i},\phi)$ ($i=1,\dots,n$), where the scalar parameter
$\mu_{i}$ is a centrality measure (e.g., the mean), and the parameter $\phi$
represents higher-order characteristics of the distribution (e.g.,
dispersion).
A regression model can be formulated as
$\displaystyle y_{i}$ $\displaystyle\sim$ $\displaystyle
F(\cdot;\mu_{i},\phi)\,,$ (1) $\displaystyle\mu_{i}$ $\displaystyle=$
$\displaystyle h(x_{i}^{\top}\beta)\quad(i=1,\ldots,n)\,,$ (2)
where $x_{i}$ is a vector of regressors with $\mathop{\rm dim}(x_{i})=p$,
which is observed along with $y_{i}$, and $h(\cdot)$ is a monotonically
increasing function that links $\mu_{i}$ to $x_{i}$ and a parameter vector
$\beta$. The model specification in (1) and (2) covers a range of models,
including models for binary, multinomial, ordinal, and count data; models
for limited dependent variables such as the Tobit model and its extensions;
and zero-inflated and two-part or hurdle models.
The existence of a point mass at zero implies that
$f(0;\mu_{i},\phi)=F(0;\mu_{i},\phi)$, where $f(\cdot;\mu_{i},\phi)$ is the
density or probability mass function corresponding to $F(\cdot;\mu_{i},\phi)$.
The simplest, and arguably most often encountered, occurrence of data separation in
practice is when there is a regressor $x_{i,k}\in\\{0,1\\}$ such that
$y_{i}=0$ for all $i\in\\{1,\ldots,n\\}$ with $x_{i,k}=1$. Assuming that
$y_{1},\ldots,y_{n}$ are independent conditionally on $x_{1},\ldots,x_{n}$,
the log-likelihood $\ell(\beta,\phi)$ for the model defined by (1) and (2) can
be decomposed as
$\displaystyle\ell(\beta,\phi)$ $\displaystyle=$
$\displaystyle\sum_{x_{i,k}=0}\log
f(y_{i};h(x_{i,-k}^{\top}\beta_{-k}),\phi)\,+$ (3)
$\displaystyle\sum_{x_{i,k}=1}\log
F(0;h(x_{i,-k}^{\top}\beta_{-k}+\beta_{k}),\phi)\,,$ (4)
where $a_{-k}$ indicates the sub-vector formed from a vector $a$ after
omitting its $k$-th component.
Term (3) is exactly the log-likelihood without the $k$-th regressor and based
only on the observations with $x_{i,k}=0$. Under the extra assumption that
$F(0;\mu_{i},\phi)$ is monotonically decreasing in $\mu_{i}$ (which is true,
for example, in Poisson and Tobit regression models), $\beta_{k}$ will diverge
to $-\infty$ during maximization, so that term (4) achieves its maximum value
of $0$. The maximization of term (3) with respect to $\beta_{-k}$ then yields
the maximum likelihood (ML) estimate $\hat{\beta}_{-k}$. So, the ML
estimate of $\beta_{-k}$ will be the same as the ML estimate obtained by
maximizing the log-likelihood without the $k$-th regressor over the subset of
observations with $x_{i,k}=0$.
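This divergence can be checked numerically with a toy data set (illustrative values, not from the paper): for the censored observations the likelihood contribution keeps increasing as the coefficient of the separating dummy goes to $-\infty$.

```python
import math

# Toy data with separation: y = 0 whenever the dummy x_k = 1.
y  = [3, 1, 2, 0, 0, 0]
xk = [0, 0, 0, 1, 1, 1]

def loglik(b0, bk):
    """Poisson log-likelihood (up to log y! constants) with mean exp(b0 + bk*xk)."""
    ll = 0.0
    for yi, xi in zip(y, xk):
        eta = b0 + bk * xi
        ll += yi * eta - math.exp(eta)
    return ll

# The log-likelihood increases monotonically as bk -> -infinity: no finite
# ML estimate exists for bk.
vals = [loglik(0.69, bk) for bk in (0.0, -2.0, -5.0, -10.0)]
assert all(a < b for a, b in zip(vals, vals[1:]))
```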
As Santos Silva and Tenreyro (2010) show for Poisson regression, the same
situation can occur more generally, when separation occurs for a certain
linear combination of regressors. Our discussion above extends their
considerations beyond log-link models and Poisson regression.
## 2 Estimating regression models with separated data
Albert and Anderson (1984) showed that infinite estimates in multinomial
logistic regression occur if and only if there is data separation. Since then,
the consequences of infinite estimates for estimation and inference have been
well-studied for binomial and multinomial responses.
A popular remedy in the statistics literature is to replace the ML estimator
with shrinkage estimators that are guaranteed to take finite values (see, for
example, Gelman et al., 2008, on the use of shrinkage priors in the estimation of
binary regression models). Probably the most widely used estimator of this kind comes
from the solution of the bias-reducing adjusted score equations in Firth
(1993) (see, for example, Heinze and Schemper 2002 and Zorn 2005 for
accessible detailed accounts), which guarantee estimators with smaller
asymptotic bias than what the ML estimator typically has (Firth, 1993;
Kosmidis and Firth, 2009).
In contrast, the majority of methods that have been put forward in the
econometrics literature are typically based on omitting the regressors that
are responsible for the infinite estimates. Such practice can be problematic
as we discuss in the following sections.
### 2.1 Omitting regressors affected by separation
Santos Silva and Tenreyro (2010) show that the regressors responsible for
separation in Poisson models can be easily identified by running a least
squares regression on the non-boundary observations and checking for perfect
collinearity among the regressors. The same strategy is also applicable for
Tobit regression models.
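A sketch of this collinearity check, in our own linear-algebra formulation (null space of the model matrix restricted to the non-boundary observations), not the authors' code:

```python
import numpy as np

def separating_directions(X, y):
    """Sketch of the Santos Silva-Tenreyro check: restrict the model
    matrix to the non-boundary observations (y > 0) and look for
    perfect collinearity there. Returns a basis of the null space of
    the restricted matrix; its nonzero entries flag the regressors
    involved in separation (empty array: no separation detected)."""
    Xp = X[y > 0]
    _, s, vt = np.linalg.svd(Xp)
    tol = max(Xp.shape) * np.finfo(float).eps * (s[0] if s.size else 1.0)
    rank = int((s > tol).sum())
    return vt[rank:]   # rows span the null space of Xp
```

For the simple case of Section 1 (a dummy $x_{k}$ with $y_{i}=0$ whenever $x_{i,k}=1$), the column of $x_{k}$ is identically zero on the restricted sample, and the returned basis vector loads only on that regressor.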
Having identified the collinear regressors associated with separation, Santos
Silva and Tenreyro (2010) propose to simply omit those and re-estimate the
model using the _full_ data set with all $n$ observations. The same strategy
is also adopted in Cameron and Trivedi (2013, Chapter 6.2), who suggest
dropping the separating regressor from the binary model part of a count data
hurdle model.
However, this strategy only leads to consistent estimates if the omitted
regressors are, in fact, not relevant, or were constructed to specifically
indicate a zero response (e.g., in the artificial data set used in the
illustrations of Santos Silva and Tenreyro, 2011). In contrast, when a highly
informative regressor is omitted, separation will be replaced by a systematic
misspecification of the model (Heinze and Schemper, 2002; Zorn, 2005). In that
situation, consistent estimates can be obtained by not only omitting the
regressor but also the observations responsible for separation, i.e.,
considering only the first term in the likelihood decomposition of Section 1
and dropping the second.
### 2.2 Bias reduction
Kosmidis and Firth (2020) have formally shown that, in logit regression models
with full-rank model matrix, the bias-reduced (BR) estimators coming from the
adjusted score equations in Firth (1993) (i) always take finite values and (ii)
shrink towards zero in the direction of maximizing the Fisher information
about the parameters. There are also strong empirical findings that the
finiteness of the BR estimator extends beyond logit models.
A desirable feature of the bias-reducing adjustments to the score functions is
that they are asymptotically dominated by the score functions. As a result,
inference that relies on the BR estimates (Wald tests, information criteria,
etc.) can be performed as usual by simply using the BR estimates in place of
the ML estimates. This makes BR estimation a rather attractive alternative
approach for dealing with separation, without omitting regressors.
While bias reduction is a well-established remedy for data separation in
binary regression models, it is less well known that it is effective also in
more general settings such as generalized nonlinear models (Kosmidis and
Firth, 2009), and, as illustrated here, the models in Section 1.
## 3 Illustration
Similarly to Santos Silva and Tenreyro (2011), we consider models with
intercept $x_{i,1}=1$ and regressors $x_{i,2}$ and $x_{i,3}$ $(i=1,\ldots,n)$.
The values for $x_{i,2}$ are generated from a uniform distribution as
$x_{i,2}\sim\mathcal{U}(-1,1)$. The values for $x_{i,3}$ are, then, generated
from Bernoulli distributions as $x_{i,3}\sim\mathcal{B}(\pi)$ if $x_{i,2}>0$
and $x_{i,3}\sim\mathcal{B}(1-\pi)$ otherwise, in order to allow for
correlation between the two regressors.
The responses for the Poisson model are generated from (1) using
$h(x_{i}^{\top}\beta)=\exp(x_{i}^{\top}\beta)$ and the Poisson distribution
for $F$ (with known dispersion $\phi=1$). The responses for the Tobit model
are generated from a latent normal distribution
$\mathcal{N}(x_{i}^{\top}\beta,\phi)$ with variance $\phi=2$ and subsequent
censoring by setting all negative responses to $0$.
For illustration purposes, we generate a single artificial data set involving
$n=100$ regressor values with $\pi=0.25$, and Poisson and Tobit responses
using $\beta_{1}=1$, $\beta_{2}=1$ and $\beta_{3}=-10$. In both cases,
separation occurs due to the extreme value for the coefficient of $x_{i,3}$.
In the Appendix, we carry out a thorough simulation study with $10{,}000$ data
sets for a range of combinations of $n$ and $\pi$, and with $\beta_{3}=-3$ so that
separation occurs with smaller probability.
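The data-generating process above can be sketched as follows (the function name and seed are ours):

```python
import numpy as np

def make_data(n=100, pi=0.25, beta=(1.0, 1.0, -10.0), phi=2.0, seed=1):
    """Sketch of the data-generating process described in the text."""
    rng = np.random.default_rng(seed)
    x2 = rng.uniform(-1.0, 1.0, n)
    # Bernoulli regressor whose success probability depends on the sign of
    # x2, inducing correlation between the two regressors.
    p = np.where(x2 > 0, pi, 1.0 - pi)
    x3 = (rng.uniform(size=n) < p).astype(float)
    X = np.column_stack([np.ones(n), x2, x3])
    eta = X @ np.asarray(beta)
    y_pois = rng.poisson(np.exp(eta))          # Poisson responses (phi = 1)
    latent = rng.normal(eta, np.sqrt(phi))     # latent normal for the Tobit
    y_tobit = np.maximum(latent, 0.0)          # censor negative responses at 0
    return X, y_pois, y_tobit
```

With $\beta_{3}=-10$, the mean for observations with $x_{i,3}=1$ is of order $e^{-9}$, so those responses are (essentially always) zero and separation occurs.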
We estimate the models from the artificial data using ML and BR estimation
using all $n=100$ observations, and ML estimation of the reduced model after
omitting $x_{i,3}$ either by using just the subset of the data set with
$x_{i,2}=0$ (ML/sub), or all $n=100$ observations as proposed by Santos Silva
and Tenreyro (2010) (ML/SST).
The bias-reducing adjusted score equations for the Poisson regression are
$\sum_{i=1}^{n}(y_{i}+h_{i}/2-\mu_{i})x_{i}=0_{p}$, where $0_{p}$ is a
$p$-vector of zeros and $h_{i}=x_{i}^{\top}(X^{\top}WX)^{-1}x_{i}\mu_{i}$ with
$W=\mathop{\rm diag}\\{\mu_{1},\ldots,\mu_{n}\\}$ (Firth, 1992). They are solved
with the brglm_fit method from the R package _brglm2_ (Kosmidis, 2020). For
the Tobit model we derived the adjusted score equations along with an
implementation in the R package _brtobit_ (Köll et al., 2021). The derivations
are tedious but not complicated and are provided in the Appendix.
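A minimal quasi-Fisher-scoring sketch for the adjusted score equations of the Poisson case is given below, in pure NumPy; the actual implementation in _brglm2_ differs in its stopping rules and numerics.

```python
import numpy as np

def br_poisson(X, y, iters=100, tol=1e-8):
    """Sketch of bias-reduced Poisson regression (Firth, 1992):
    quasi-Fisher scoring on the adjusted score equations
    sum_i (y_i + h_i/2 - mu_i) x_i = 0, with hat values
    h_i = mu_i * x_i' (X'WX)^{-1} x_i and W = diag(mu)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        XtWX = X.T @ (X * mu[:, None])
        # Hat values: h_i = mu_i * x_i' (X'WX)^{-1} x_i.
        h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(XtWX), X) * mu
        score = X.T @ (y + h / 2.0 - mu)
        step = np.linalg.solve(XtWX, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

As a sanity check, in the intercept-only model the hat values equal $1/n$, so the BR estimate of the mean is $\bar{y}+1/(2n)$, which the iteration reproduces.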
Table 1: Comparison of different approaches when dealing with separation in a Poisson model. N is the number of observations used.

| | ML | BR | ML/sub | ML/SST |
|---|---|---|---|---|
| (Intercept) | $0.951$ | $0.958$ | $0.951$ | $0.350$ |
| | $(0.100)$ | $(0.099)$ | $(0.100)$ | $(0.096)$ |
| x2 | $1.011$ | $1.006$ | $1.011$ | $1.662$ |
| | $(0.158)$ | $(0.157)$ | $(0.158)$ | $(0.144)$ |
| x3 | $-20.907$ | $-5.174$ | | |
| | $(2242.463)$ | $(1.416)$ | | |
| Log-likelihood | $-107.364$ | $-107.869$ | $-107.364$ | $-169.028$ |
| N | $100$ | $100$ | $55$ | $100$ |
Table 2: Comparison of different approaches when dealing with separation in a Tobit model. N is the number of observations used.

| | ML | BR | ML/sub | ML/SST |
|---|---|---|---|---|
| (Intercept) | $1.135$ | $1.142$ | $1.135$ | $-0.125$ |
| | $(0.208)$ | $(0.210)$ | $(0.208)$ | $(0.251)$ |
| x2 | $0.719$ | $0.705$ | $0.719$ | $2.074$ |
| | $(0.364)$ | $(0.359)$ | $(0.364)$ | $(0.404)$ |
| x3 | $-11.238$ | $-4.218$ | | |
| | $(60452.270)$ | $(0.891)$ | | |
| (Variance) | $1.912$ | $1.970$ | $1.912$ | $3.440$ |
| | $(0.422)$ | $(0.434)$ | $(0.422)$ | $(0.795)$ |
| Log-likelihood | $-87.633$ | $-88.101$ | $-87.633$ | $-118.935$ |
| N | $100$ | $100$ | $55$ | $100$ |
Tables 1 and 2 show the results from estimating the Poisson and Tobit models,
respectively, with the four different strategies. The following remarks can be
made:
* 1.
Standard ML estimation using all observations leads to a large estimate of
$\beta_{3}$ with even larger standard error. As a result, a standard Wald test
results in no evidence against the hypothesis that $x_{3}$ should not be in
the model, despite the fact that using $\beta_{3}=-10$ when generating the
data makes $x_{3}$ perhaps the most influential regressor. (The estimates for
$\beta_{3}$ and the corresponding standard errors are formally infinite; the
displayed finite values are the result of stopping the iterations early
according to the convergence criteria used during maximization of the
likelihood. Stricter convergence criteria will result in estimates and
standard errors that diverge further.)
* 2.
The ML/sub strategy, i.e., estimating the model without $x_{3}$ using only the
observations with $x_{i,3}=0$, yields exactly the same estimates as ML,
because it maximizes only the first term of the likelihood decomposition of
Section 1, with the second term set to zero.
* 3.
Compared to ML and ML/sub, BR has the advantage of returning a finite estimate
and standard error for $\beta_{3}$. Hence a Wald test can be directly used to
examine the evidence against $\beta_{3}=0$. The other parameter estimates and the log-likelihood
are close to ML. Similarly to binary response models, bias reduction here
slightly shrinks the parameter estimates of $\beta_{2}$ and $\beta_{3}$
towards zero.
* 4.
Finally, the estimates from ML/SST, where regressor $x_{3}$ is omitted and all
observations are used, appear to be far from the values we used to generate
the data. This is due to the fact that $x_{3}$ is not only highly informative
but also correlated with $x_{2}$.
Moreover, the simulation experiments in the Appendix provide evidence that the
BR estimates are always finite, and result in Wald-type intervals with better
coverage.
## References
* Albert and Anderson (1984) Albert, A., Anderson, J.A., 1984\. On the existence of maximum likelihood estimates in logistic regression models. Biometrika 71, 1–10. doi:10.1093/biomet/71.1.1.
* Amemiya (1973) Amemiya, T., 1973. Regression analysis when the dependent variable is truncated normal. Econometrica 41, 997–1016. doi:10.2307/1914031.
* Cameron and Trivedi (2013) Cameron, A.C., Trivedi, P.K., 2013\. Regression Analysis of Count Data. 2nd ed., Cambridge University Press, New York.
* Firth (1992) Firth, D., 1992. Bias reduction, the Jeffreys prior and GLIM, in: Fahrmeir, L., Francis, B., Gilchrist, R., Tutz, G. (Eds.), Advances in GLIM and Statistical Modelling: Proceedings of the GLIM 92 Conference, Munich, Springer, New York. pp. 91–100.
* Firth (1993) Firth, D., 1993. Bias reduction of maximum likelihood estimates. Biometrika 80, 27–38. doi:10.1093/biomet/80.1.27.
* Gelman et al. (2008) Gelman, A., Jakulin, A., Pittau, M.G., Su, Y.S., 2008\. A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics 2, 1360–1383. doi:10.1214/08-aoas191.
* Gourieroux (2000) Gourieroux, C., 2000. Econometrics of Qualitative Dependent Variables. Cambridge University Press, Cambridge.
* Heinze and Schemper (2002) Heinze, G., Schemper, M., 2002\. A solution to the problem of separation in logistic regression. Statistics in Medicine 21, 2409–2419. doi:10.1002/sim.1047.
* Köll et al. (2021) Köll, S., Kosmidis, I., Kleiber, C., Zeileis, A., 2021\. _brtobit_ : Bias-Reduced Tobit Regression Models. URL: https://R-Forge.R-project.org/projects/topmodels/. R package version 0.1-1/r1146.
* Kosmidis (2020) Kosmidis, I., 2020. _brglm2_ : Bias Reduction in Generalized Linear Models. URL: https://CRAN.R-project.org/package=brglm2. R package version 0.6.2.
* Kosmidis and Firth (2009) Kosmidis, I., Firth, D., 2009\. Bias reduction in exponential family nonlinear models. Biometrika 96, 793–804. doi:10.1093/biomet/asp055.
* Kosmidis and Firth (2010) Kosmidis, I., Firth, D., 2010\. A generic algorithm for reducing bias in parametric estimation. Electronic Journal of Statistics 4, 1097–1112. doi:10.1214/10-ejs579.
* Kosmidis and Firth (2020) Kosmidis, I., Firth, D., 2020\. Jeffreys-prior penalty, finiteness and shrinkage in binomial-response generalized linear models. Biometrika doi:10.1093/biomet/asaa052.
* R Core Team (2020) R Core Team, 2020. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. Vienna. URL: https://www.R-project.org/.
* Santos Silva and Tenreyro (2010) Santos Silva, J.M.C., Tenreyro, S., 2010\. On the existence of the maximum likelihood estimates in Poisson regression. Economics Letters 107, 310–312. doi:10.1016/j.econlet.2010.02.020.
* Santos Silva and Tenreyro (2011) Santos Silva, J.M.C., Tenreyro, S., 2011\. Poisson: Some convergence issues. Stata Journal 11, 207–212. doi:10.1177/1536867x1101100203.
* Zorn (2005) Zorn, C., 2005. A solution to separation in binary response models. Political Analysis 13, 157–170. doi:10.1093/pan/mpi009.
## Appendix A Bias-reducing adjusted score functions for Tobit regression
The Tobit model is one of the classic models of microeconometrics. Fundamental
results were obtained by Amemiya (1973). A detailed account of basic
properties is available in, e.g., Gourieroux (2000). Here we provide the
building blocks for bias-reduced estimation of the Tobit model.
Denote by $\ell(\theta)$ the log-likelihood function for a Tobit regression
model with full-rank, $n\times p$ model matrix $X$ with rows the $p$-vectors
$x_{1},\ldots,x_{n}$, and a $(p+1)$-vector of parameters
$\theta=(\beta^{\top},\phi)^{\top}$ with regression parameters $\beta$ and
variance $\phi$. Then,
$\ell(\theta)=\sum_{i=1}^{n}[(1-d_{i})\log(1-F_{i})-d_{i}(\log\phi)/2-d_{i}(y_{i}-\eta_{i})^{2}/(2\phi)]$,
where $d_{i}=1$ if $y_{i}>0$ and $d_{i}=0$ if $y_{i}\leq 0$,
$\eta_{i}=x_{i}^{\top}\beta$, and $F_{i}$ is the standard normal distribution
function at $\eta_{i}/\sqrt{\phi}$. The score vector is
$s(\theta)=\nabla\ell(\theta)=\begin{bmatrix}s_{\beta}(\theta)\\\
s_{\phi}(\theta)\end{bmatrix}=\begin{bmatrix}\displaystyle\sum_{i=1}^{n}\left\\{\frac{(d_{i}-1)\lambda_{i}}{\sqrt{\phi}}+\frac{d_{i}(y_{i}-\eta_{i})}{\phi}\right\\}x_{i}\\\
\displaystyle\sum_{i=1}^{n}\left\\{\frac{(1-d_{i})\lambda_{i}\eta_{i}}{2\phi^{3/2}}-\frac{d_{i}}{2\phi}+\frac{d_{i}(y_{i}-\eta_{i})^{2}}{2\phi^{2}}\right\\}\end{bmatrix}\,,$
where $\lambda_{i}=f_{i}/(1-F_{i})$ and $f_{i}$ is the density function of the
standard normal distribution at $\eta_{i}/\sqrt{\phi}$.
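The log-likelihood and score above can be implemented directly and the score checked against a numerical gradient. A stdlib-only sketch; we include the $2\pi$ constant in the log-likelihood, which does not affect the score:

```python
import math

def phi(z):   # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):   # standard normal distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tobit_loglik(beta, var, X, y):
    """Tobit log-likelihood as given above (with the 2*pi constant)."""
    ll = 0.0
    for xi, yi in zip(X, y):
        eta = sum(b * x for b, x in zip(beta, xi))
        if yi > 0:   # uncensored: normal density
            ll += -0.5 * math.log(2.0 * math.pi * var) \
                  - (yi - eta) ** 2 / (2.0 * var)
        else:        # censored at zero: log(1 - F_i)
            ll += math.log(1.0 - Phi(eta / math.sqrt(var)))
    return ll

def tobit_score(beta, var, X, y):
    """Score (d/d beta, d/d phi) matching the expressions above."""
    p = len(beta)
    s = [0.0] * (p + 1)
    for xi, yi in zip(X, y):
        eta = sum(b * x for b, x in zip(beta, xi))
        if yi > 0:
            for j in range(p):
                s[j] += (yi - eta) / var * xi[j]
            s[p] += -0.5 / var + (yi - eta) ** 2 / (2.0 * var ** 2)
        else:
            lam = phi(eta / math.sqrt(var)) / (1.0 - Phi(eta / math.sqrt(var)))
            for j in range(p):
                s[j] += -lam / math.sqrt(var) * xi[j]
            s[p] += lam * eta / (2.0 * var ** 1.5)
    return s
```

Comparing each component of `tobit_score` with a central finite difference of `tobit_loglik` is a useful check before deriving the higher-order quantities below.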
The observed information matrix, $j(\theta)=-\nabla\nabla^{\top}\ell(\theta)$,
has the form
$j(\theta)=\begin{bmatrix}j_{\beta\beta}(\theta)&j_{\beta\phi}(\theta)\\\
j_{\phi\beta}(\theta)&j_{\phi\phi}(\theta)\end{bmatrix}\,,$
where, setting $\nu_{i}=f_{i}/(1-F_{i})^{2}$,
$\displaystyle j_{\beta\beta}(\theta)$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\frac{\nu_{i}(d_{i}-1)}{\sqrt{\phi}}\left\\{\frac{f_{i}}{\sqrt{\phi}}-\frac{(1-F_{i})\eta_{i}}{\phi}\right\\}-\frac{d_{i}}{\phi}\right]x_{i}x_{i}^{\top}\,,$
$\displaystyle j_{\beta\phi}(\theta)$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\frac{\nu_{i}(d_{i}-1)}{2\phi^{3/2}}\left\\{\frac{(1-F_{i})\eta_{i}^{2}}{\phi}-1+F_{i}-\frac{\eta_{i}f_{i}}{\sqrt{\phi}}\right\\}-\frac{d_{i}(y_{i}-\eta_{i})}{\phi^{2}}\right]x_{i}\,,$
$\displaystyle j_{\phi\beta}(\theta)$ $\displaystyle=$ $\displaystyle
j_{\beta\phi}(\theta)^{\top}\,,$ $\displaystyle j_{\phi\phi}(\theta)$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\frac{\nu_{i}(1-d_{i})}{4\phi^{5/2}}\left\\{\frac{(1-F_{i})\eta_{i}^{3}}{\phi}-3(1-F_{i})\eta_{i}-\frac{\eta_{i}^{2}f_{i}}{\sqrt{\phi}}\right\\}+\frac{d_{i}}{2\phi^{2}}-\frac{d_{i}(y_{i}-\eta_{i})^{2}}{\phi^{3}}\right]\,.$
As shown in Kosmidis and Firth (2010), a BR estimator for $\theta$ results as
the solution of the adjusted score equations $s(\theta)+A(\theta)=0_{p+1}$,
where the vector $A(\theta)$ has $t$-th component
$A_{t}(\theta)=\operatorname{\mathrm{tr}}[\\{i(\theta)\\}^{-1}\\{P_{t}(\theta)+Q_{t}(\theta)\\}]/2$
$(t=1,\ldots,p+1)$. In the above expression,
$Q_{t}(\theta)=-\operatorname{\mathrm{E}}(j(\theta)s_{t}(\theta))$ and
$P_{t}(\theta)=\operatorname{\mathrm{E}}(s(\theta)s^{\top}(\theta)s_{t}(\theta))$,
where $i(\theta)=\operatorname{\mathrm{E}}(j(\theta))$ is the expected
information matrix. The R package _brtobit_ implements $i(\theta)$,
$Q_{t}(\theta)$, and $P_{t}(\theta)$, and solves the bias-reducing adjusted
score equations for general Tobit regressions using the quasi Fisher-scoring
scheme proposed in Kosmidis and Firth (2010).
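The quasi Fisher-scoring update has the form $\theta^{(k+1)}=\theta^{(k)}+\{i(\theta^{(k)})\}^{-1}\{s(\theta^{(k)})+A(\theta^{(k)})\}$. As a deliberately small illustration of this scheme (not the Tobit updates, and not the package's code), the sketch below solves the adjusted score equation for an intercept-only logistic model, where Firth's adjustment reduces to $A(\beta)=1/2-\mu$ and the solution is known in closed form, $\hat{\mu}=(y+1/2)/(n+1)$.

```python
import math

def expit(b):
    return 1.0 / (1.0 + math.exp(-b))

def br_logistic_intercept(y, n, beta=0.0, tol=1e-10, max_iter=200):
    """Quasi Fisher scoring for s(beta) + A(beta) = 0 in an
    intercept-only logistic model with y successes out of n trials."""
    for _ in range(max_iter):
        mu = expit(beta)
        s = y - n * mu               # score
        A = 0.5 - mu                 # Firth adjustment (hat values 1/n)
        i = n * mu * (1.0 - mu)      # expected information
        step = (s + A) / i
        beta += step
        if abs(step) < tol:
            break
    return beta

# With y = 0 the ML estimate is minus infinity; the BR estimate is finite.
beta_hat = br_logistic_intercept(y=0, n=10)
```

The iterates converge to $\mathrm{logit}\{(y+1/2)/(n+1)\}$, illustrating how the adjustment keeps the estimate finite even on a boundary sample.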
The matrices $i(\theta)$, $Q_{t}(\theta)$ and $P_{t}(\theta)$ have the same
block structure as $j(\theta)$ and, directly by their definition, closed-form
expressions for their blocks result by taking expectations of the appropriate
products of blocks of $s(\theta)$ and $j(\theta)$. By direct inspection of the
expressions for $s(\theta)$ and $j(\theta)$, the required expectations result
by noting that $\operatorname{\mathrm{E}}(d_{i}^{m})=F_{i}$,
$\operatorname{\mathrm{E}}((1-d_{i})^{m})=1-F_{i}$,
$\operatorname{\mathrm{E}}(d_{i}^{m}(1-d_{i})^{l})=0$,
$\operatorname{\mathrm{E}}(d_{i}^{m}(1-d_{i})^{l}(y_{i}-\eta_{i})^{k})=0$, and
by computing $\operatorname{\mathrm{E}}(d_{i}^{m}(y_{i}-\eta_{i})^{l})$
$(k,l,m=1,\ldots,6)$. For the latter expression, note that
$\operatorname{\mathrm{E}}(d_{i}^{m}(y_{i}-\eta_{i})^{l})=F_{i}\operatorname{\mathrm{E}}((y_{i}-\eta_{i})^{l}\mid
y_{i}>0)$, and that some algebra gives
$\displaystyle\operatorname{\mathrm{E}}(y_{i}-\eta_{i}\mid y_{i}>0)$
$\displaystyle=$ $\displaystyle\sqrt{\phi}\xi_{i}\,,$
$\displaystyle\operatorname{\mathrm{E}}((y_{i}-\eta_{i})^{2}\mid y_{i}>0)$
$\displaystyle=$ $\displaystyle\phi-\sqrt{\phi}\eta_{i}\xi_{i}\,,$
$\displaystyle\operatorname{\mathrm{E}}((y_{i}-\eta_{i})^{3}\mid y_{i}>0)$
$\displaystyle=$ $\displaystyle\sqrt{\phi}\xi_{i}(\eta_{i}^{2}+2\phi)\,,$
$\displaystyle\operatorname{\mathrm{E}}((y_{i}-\eta_{i})^{4}\mid y_{i}>0)$
$\displaystyle=$ $\displaystyle
3\phi^{2}-\eta_{i}^{3}\sqrt{\phi}\xi_{i}-3\phi^{3/2}\eta_{i}\xi_{i}\,,$
$\displaystyle\operatorname{\mathrm{E}}((y_{i}-\eta_{i})^{5}\mid y_{i}>0)$
$\displaystyle=$
$\displaystyle\sqrt{\phi}\eta_{i}^{4}\xi_{i}+4\phi^{3/2}\xi_{i}(\eta_{i}^{2}+2\phi)\,,$
$\displaystyle\operatorname{\mathrm{E}}((y_{i}-\eta_{i})^{6}\mid y_{i}>0)$
$\displaystyle=$
$\displaystyle-\eta_{i}\sqrt{\phi}\xi_{i}(\eta_{i}^{4}+5\eta_{i}^{2}\phi+15\phi^{2})+15\phi^{3}\,,$
where $\xi_{i}=f_{i}/F_{i}$. The expected information,
$i(\theta)=\begin{bmatrix}\operatorname{\mathrm{E}}(j_{\beta\beta}(\theta))&\operatorname{\mathrm{E}}(j_{\beta\phi}(\theta))\\\
\operatorname{\mathrm{E}}(j_{\phi\beta}(\theta))&\operatorname{\mathrm{E}}(j_{\phi\phi}(\theta))\end{bmatrix},$
has elements
$\displaystyle\operatorname{\mathrm{E}}(j_{\beta\beta}(\theta))$
$\displaystyle=$
$\displaystyle-\frac{1}{\phi}\sum_{i=1}^{n}\left\\{\frac{\eta_{i}f_{i}}{\sqrt{\phi}}-\lambda_{i}f_{i}-F_{i}\right\\}x_{i}x_{i}^{\top}\,,$
$\displaystyle\operatorname{\mathrm{E}}(j_{\beta\phi}(\theta))$
$\displaystyle=$
$\displaystyle\frac{1}{2\phi^{3/2}}\sum_{i=1}^{n}f_{i}\left\\{\frac{\eta_{i}^{2}}{\phi}+1-\lambda_{i}\frac{\eta_{i}}{\sqrt{\phi}}\right\\}x_{i}^{\top}\,,$
$\displaystyle\operatorname{\mathrm{E}}(j_{\phi\beta}(\theta))$
$\displaystyle=$
$\displaystyle\operatorname{\mathrm{E}}(j_{\beta\phi}(\theta))^{\top}\,,$
$\displaystyle\operatorname{\mathrm{E}}(j_{\phi\phi}(\theta))$
$\displaystyle=$
$\displaystyle-\frac{1}{4\phi^{2}}\sum_{i=1}^{n}\left\\{f_{i}\frac{\eta_{i}^{3}}{\phi^{3/2}}+f_{i}\frac{\eta_{i}}{\sqrt{\phi}}-\lambda_{i}f_{i}\frac{\eta_{i}^{2}}{\phi}-2F_{i}\right\\}.$
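The truncated-moment formulas above can be double-checked numerically. The sketch below compares the first two against a midpoint-rule integral for illustrative values of $\eta_{i}$ and $\phi$ (the values chosen are arbitrary).

```python
import math

def std_pdf(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def std_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_moment(eta, phi, l, n=20000):
    """Numeric E((y - eta)^l | y > 0) for y ~ N(eta, phi), midpoint rule."""
    s = math.sqrt(phi)
    lo, hi = 0.0, eta + 12.0 * s       # mass beyond 12 sd is negligible
    h = (hi - lo) / n
    num = den = 0.0
    for k in range(n):
        y = lo + (k + 0.5) * h
        w = std_pdf((y - eta) / s) / s * h
        num += (y - eta) ** l * w
        den += w
    return num / den

# closed forms from the text, with xi = f / F evaluated at eta / sqrt(phi)
eta, phi = 0.7, 1.3                    # illustrative values
z = eta / math.sqrt(phi)
xi = std_pdf(z) / std_cdf(z)
m1 = math.sqrt(phi) * xi               # E(y - eta | y > 0)
m2 = phi - math.sqrt(phi) * eta * xi   # E((y - eta)^2 | y > 0)
```

Both comparisons agree to high accuracy, as expected from the standard truncated-normal moment identities.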
Furthermore, for $t\in\\{1,\ldots,p\\}$,
$Q_{t}(\theta)=-\begin{bmatrix}\operatorname{\mathrm{E}}(j_{\beta\beta}s_{\beta_{t}})&\operatorname{\mathrm{E}}(j_{\beta\phi}s_{\beta_{t}})\\\
\operatorname{\mathrm{E}}(j_{\beta\phi}s_{\beta_{t}})^{\top}&\operatorname{\mathrm{E}}(j_{\phi\phi}s_{\beta_{t}})\end{bmatrix}\quad\text{and}\quad
P_{t}(\theta)=\begin{bmatrix}\operatorname{\mathrm{E}}(s_{\beta}s_{\beta}^{\top}s_{\beta_{t}})&\operatorname{\mathrm{E}}(s_{\beta}s_{\phi}s_{\beta_{t}})\\\
\operatorname{\mathrm{E}}(s_{\beta}s_{\phi}s_{\beta_{t}})^{\top}&\operatorname{\mathrm{E}}(s_{\phi}s_{\phi}s_{\beta_{t}})\end{bmatrix}\,,$
and for $t=p+1$,
$Q_{p+1}(\theta)=-\begin{bmatrix}\operatorname{\mathrm{E}}(j_{\beta\beta}s_{\phi})&\operatorname{\mathrm{E}}(j_{\beta\phi}s_{\phi})\\\
\operatorname{\mathrm{E}}(j_{\beta\phi}s_{\phi})^{\top}&\operatorname{\mathrm{E}}(j_{\phi\phi}s_{\phi})\end{bmatrix}\quad\text{and}\quad
P_{p+1}(\theta)=\begin{bmatrix}\operatorname{\mathrm{E}}(s_{\beta}s_{\beta}^{\top}s_{\phi})&\operatorname{\mathrm{E}}(s_{\beta}s_{\phi}s_{\phi})\\\
\operatorname{\mathrm{E}}(s_{\beta}s_{\phi}s_{\phi})^{\top}&\operatorname{\mathrm{E}}(s_{\phi}s_{\phi}s_{\phi})\end{bmatrix},$
where
$\displaystyle\operatorname{\mathrm{E}}(j_{\beta\beta}s_{\beta_{t}})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[-\frac{f_{i}}{\phi^{3/2}}\left(\lambda_{i}^{2}-\frac{\lambda_{i}\eta_{i}}{\sqrt{\phi}}-1\right)\right]x_{i}x_{i}^{\top}x_{i,t}\,,$
$\displaystyle\operatorname{\mathrm{E}}(j_{\beta\phi}s_{\beta_{t}})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\frac{1}{2\phi^{2}}\lambda_{i}f_{i}\left\\{-\frac{\eta_{i}^{2}}{\phi}+1+\lambda_{i}\frac{\eta_{i}}{\sqrt{\phi}}\right\\}+\frac{1}{\phi^{2}}\left\\{F_{i}-\frac{\eta_{i}f_{i}}{\sqrt{\phi}}\right\\}\right]x_{i}^{\top}x_{i,t}\,,$
$\displaystyle\operatorname{\mathrm{E}}(j_{\phi\phi}s_{\beta_{t}})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\frac{1}{\phi^{5/2}}\left[\lambda_{i}\frac{f_{i}\eta_{i}}{4\sqrt{\phi}}\left\\{\frac{\eta_{i}^{2}}{\phi}-3-\lambda_{i}\frac{\eta_{i}}{\sqrt{\phi}}\right\\}+\frac{f_{i}\eta_{i}^{2}}{\phi}+\frac{3f_{i}}{2}\right]x_{i,t}\,,$
$\displaystyle\operatorname{\mathrm{E}}(j_{\beta\beta}s_{\phi})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\frac{f_{i}^{2}\eta_{i}}{2\phi^{5/2}(1-F_{i})}\left\\{\lambda_{i}-\frac{\eta_{i}}{\sqrt{\phi}}\right\\}-\frac{\eta_{i}f_{i}}{2\phi^{5/2}}\right]x_{i}x_{i}^{\top}\,,$
$\displaystyle\operatorname{\mathrm{E}}(j_{\beta\phi}s_{\phi})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\lambda_{i}\frac{f_{i}\eta_{i}}{4\phi^{3}}\left\\{\frac{\eta_{i}^{2}}{\phi}-1-\lambda_{i}\frac{\eta_{i}}{\sqrt{\phi}}\right\\}+\frac{f_{i}}{2\phi^{5/2}}\left\\{1+\frac{\eta_{i}^{2}}{\phi}\right\\}\right]x_{i}^{\top}\,,$
$\displaystyle\operatorname{\mathrm{E}}(j_{\phi\phi}s_{\phi})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\lambda_{i}\frac{\eta_{i}^{2}}{8\phi^{4}}\left\\{-\frac{\eta_{i}^{2}f_{i}}{\phi}+3f_{i}+\lambda_{i}\frac{f_{i}\eta_{i}}{\sqrt{\phi}}\right\\}+\frac{F_{i}}{\phi^{3}}-\frac{3\eta_{i}f_{i}}{4\phi^{7/2}}-\frac{f_{i}\eta_{i}^{3}}{2\phi^{9/2}}\right]\,,$
$\displaystyle\operatorname{\mathrm{E}}(s_{\beta}s_{\beta}^{\top}s_{\beta_{t}})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[-\lambda_{i}^{2}\frac{f_{i}}{\phi^{3/2}}+\frac{f_{i}}{\phi^{5/2}}\left\\{\eta_{i}^{2}+2\phi\right\\}\right]x_{i}x_{i}^{\top}x_{i,t}\,,$
$\displaystyle\operatorname{\mathrm{E}}(s_{\beta}s_{\beta}^{\top}s_{\phi})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\frac{\eta_{i}f_{i}}{2\phi^{5/2}}\left\\{\lambda_{i}^{2}-2-\frac{\eta_{i}^{2}}{\phi}\right\\}+\frac{F_{i}}{\phi^{2}}\right]x_{i}x_{i}^{\top}\,,$
$\displaystyle\operatorname{\mathrm{E}}(s_{\beta}s_{\phi}s_{\beta_{t}})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\frac{f_{i}\eta_{i}}{2\phi^{5/2}}\left\\{\lambda_{i}^{2}-2\right\\}+\frac{F_{i}}{\phi^{2}}-\frac{f_{i}\eta_{i}^{3}}{2\phi^{7/2}}\right]x_{i}x_{i,t}\,,$
$\displaystyle\operatorname{\mathrm{E}}(s_{\beta}s_{\phi}s_{\phi})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\frac{f_{i}\eta_{i}^{2}}{2\phi^{7/2}}\left\\{-\lambda_{i}^{2}\frac{1}{2}+1\right\\}+\frac{f_{i}}{4\phi^{5/2}}\left\\{5+\frac{\eta_{i}^{4}}{\phi^{2}}\right\\}\right]x_{i}\,,$
$\displaystyle\operatorname{\mathrm{E}}(s_{\phi}s_{\phi}s_{\beta_{t}})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[-\frac{f_{i}\eta_{i}^{2}}{4\phi^{7/2}}\left\\{\lambda_{i}^{2}-2-\frac{\eta_{i}^{2}}{\phi}\right\\}+\frac{5f_{i}}{4\phi^{5/2}}\right]x_{i,t}\,,$
$\displaystyle\operatorname{\mathrm{E}}(s_{\phi}s_{\phi}s_{\phi})$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\left[\frac{f_{i}\eta_{i}^{3}}{8\phi^{9/2}}\left\\{\lambda_{i}^{2}-2-\frac{\eta_{i}^{2}}{\phi}\right\\}+\frac{F_{i}}{\phi^{3}}-\frac{9f_{i}\eta_{i}}{8\phi^{7/2}}\right].$
## Appendix B Simulation
The aim of the simulation experiment is to compare the performance of the BR
and ML estimator in count and limited dependent variable models with varying
probabilities of infinite ML estimates. The comparison here is in terms of
bias, variance, and empirical coverage of nominally $95\%$ Wald-type
confidence intervals based on the asymptotic normality of the estimators. Our
results were obtained using R 4.0.3 (R Core Team, 2020). Random variables were
generated using the default methods for the relevant distributions, which in
turn rely on uniform random numbers obtained by the Mersenne Twister,
currently R’s default generator.
The same data generating process as in Section 3 of the main paper is
considered, with the coefficient of the binary regressor $x_{2}$ set to the
less extreme value $\beta_{3}=-3$. The amount of correlation between $x_{2}$
and $x_{3}$ varies with $\pi\in\\{0,1/8,1/4,3/8,1/2\\}$ so that increasing the
value of $\pi$ leads to decreasing the probability of infinite estimates. The
sample sizes we consider are $n\in\\{25,50,100,200,400\\}$. For each
combination of $\pi$ and $n$, $10{,}000$ independent samples are simulated,
and the parameters of the Poisson and Tobit regression models in Section 3 are
estimated using maximum likelihood and bias reduction. The estimates are then
used to compute simulation-based estimates of the bias, variance, and coverage
probability for $\beta_{3}$.
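Concretely, the summaries computed from the replicates amount to the following calculations (a hypothetical helper sketch, not the study's actual code):

```python
from statistics import mean, variance

def summarise(estimates, std_errors, true_value, z=1.96):
    """Simulation-based bias, variance, and empirical coverage of
    nominally 95% Wald-type intervals over replicate estimates."""
    bias = mean(estimates) - true_value
    var = variance(estimates)
    covered = [abs(est - true_value) <= z * se
               for est, se in zip(estimates, std_errors)]
    return bias, var, sum(covered) / len(covered)
```

For the ML estimator, the same computations are applied only to the replicates classified as finite.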
For the ML estimator, the bias, variance, and coverage probabilities are
computed conditionally on the finiteness of the ML estimates. We classify an
ML estimate as infinite if the corresponding estimated standard error exceeds
$20$. In effect, we are assuming that if the standard error exceeds $20$, the
Fisher scoring iteration for ML stopped while moving along an asymptote of the
log-likelihood surface, hence at a point where the inverse negative Hessian
has at least one very large diagonal element. The heuristic value $20$ is
conservative even for $n=25$. This has been verified through a pilot
simulation study to estimate the variance of the reduced-bias estimator, which
has the same asymptotic distribution as the ML estimator. No convergence
issues were encountered and the maximum estimated standard error of the
reduced-bias estimators across simulation settings, parameters, and sample
sizes was $8.3$ for Tobit and $5.5$ for Poisson regression.
For BR estimation, the estimates appear always to be finite, so we estimate
biases, variances and coverage probabilities both conditionally on the
finiteness of the ML estimates and unconditionally. We note here that a direct
comparison of conditional and unconditional summaries is not formally valid,
but it becomes increasingly informative as the probability of infinite
estimates decreases.
Figures 1, 2, 3, and 4 show the estimated probability that the ML and BR
estimates of $\beta_{3}$ are infinite, the estimated bias, the estimated
variance, and the estimated coverage probability of $95\%$ Wald-type
confidence intervals, respectively, for the Poisson model. Figures 5, 6, 7,
and 8 show the corresponding results for the Tobit model.
The results for Poisson and Tobit regression lead to similar insights:
* 1.
Bias reduction via adjusted score functions always yields finite estimates.
* 2.
The BR estimator has bias close to zero even for small sample sizes.
* 3.
Wald-type confidence intervals based on BR estimates have good coverage
properties.
* 4.
The variances of the BR and ML estimator get closer to each other and closer
to zero as $n$ increases. This is exactly what the theory suggests because the
score functions asymptotically dominate the bias-reducing adjustments.
Figure 1: Probability of infinite estimates for $\beta_{3}$ (Poisson).
Figure 2: Bias of estimates for $\beta_{3}$ (Poisson).
Figure 3: Variance of estimates for $\beta_{3}$ (Poisson).
Figure 4: Coverage of $95\%$ Wald-type confidence intervals for $\beta_{3}$
(Poisson).
Figure 5: Probability of infinite estimates for $\beta_{3}$ (Tobit).
Figure 6: Bias of estimates for $\beta_{3}$ (Tobit).
Figure 7: Variance of estimates for $\beta_{3}$ (Tobit).
Figure 8: Coverage of $95\%$ Wald-type confidence intervals for $\beta_{3}$
(Tobit).
# Designing a mobile game to generate player data — lessons learned
William Wallis, William Kavanagh, Alice Miller & Tim Storer
Department of Computing Science
University of Glasgow
Scotland
email<EMAIL_ADDRESS>
###### ABSTRACT
User-friendly tools have lowered the barriers to high-quality game design to
the point where researchers without development experience can release
their own games. However, there is no established best-practice as few games
have been produced for research purposes. Having developed a mobile game
without the guidance of similar projects, we realised the need to share our
experience so future researchers have a path to follow. Research into game
balancing and system simulation required an experimental case study, which
inspired the creation of _“RPGLite”_ , a multiplayer mobile game. In creating
RPGLite with no development expertise we learned a series of lessons about
effective amateur game development for research purposes. In this paper we
reflect on the entire development process and present these lessons.
KEYWORDS
Design, Prototyping, Datasets, Mobile Game Design
## INTRODUCTION
Procuring datasets to validate theoretical research findings can be difficult.
Industrial sources for this data rarely have aligned interests with the
researchers who require them. Academic datasets can often be over-specialised
to the domain of the team who originally released them. While there is a
requirement for more generally applicable and well-documented datasets for
academic use, simple modern tooling now allows researchers to develop
their own systems to obtain well-scoped datasets under their own control.
Having developed techniques for generating synthetic gameplay data, we
required an appropriate dataset for comparison, so we developed a mobile game
to create it
generate the data we required cheaply and easily. Since its release in April
2020, RPGLite111RPGLite is available from https://rpglite.app/ has been more
successful than we anticipated, populating a substantial dataset which will be
of great value to our own research, and, we believe, to the wider research
community.
In this paper we detail our experience of creating RPGLite, including its
planning, implementation, testing and deployment. The motivation for sharing
this experience report was our own frustration at having nothing similar to
support us when we embarked on this project. In providing reflections upon the
successes and failures of our approach, we hope that this paper can guide
other researchers considering a similar project.
The key contributions of this paper include: (i) Descriptions of the lessons
learned in developing a mobile game for research purposes; (ii) An outline of
how a similar application can be released with no funding or development
experience, and; (iii) A frank discussion of the mistakes made and an analysis
of how we could have produced a richer source of data more efficiently.
In the following section we describe why we needed to generate this dataset
rather than use data from an already existing game. We go on to discuss the
design of the game and the resulting implementation. In the body of the paper
we present the four key lessons learned from the experience: how we came to
learn them and how future researchers can use them to assist their development
processes. Finally we summarise our contribution and detail future work which
will follow as a result of our data collection from RPGLite.
## MOTIVATION
In recent research we have developed an approach which uses model checking to
analyse the balance and metagame development of a game. We refer to this
approach as Chained Strategy Generation (CSG) (Kavanagh et al. 2019). We use
the PRISM model checker (Kwiatkowska et al. 2011) and the PRISM-Games
extension (Kwiatkowska et al. 2018), a probabilistic engine for analysis of
various Markov models, including Discrete Time Markov Chains (DTMCs), which
are purely probabilistic, and Markov Decision Processes (MDPs) which also
involve non-deterministic choice. PRISM allows us to specify quantitative
properties such as “what is the probability that event $e$ happens?” (for
DTMCs), or “for all possible sequences of choices, what is the greatest
probability that event $e$ happens?” (for MDPs). In our approach we define a
model representing our game and use PRISM to determine player strategies of
interest (here a strategy corresponds to a sequence of choices). For example,
to determine a strategy for player 1 that corresponds to the best probability
of winning, we check the property “what is the maximum likelihood that player
1 wins the game?”. As well as returning this maximum probability, the model
checker also allows us to extract the player strategy that achieves it. This
process is known as strategy synthesis (Kwiatkowska and Parker 2016) and is
used systematically throughout the CSG process in which we examine how
strategies evolve over time as players adopt optimal strategies. Model
checking is computationally expensive and so this approach would not be
suitable for a more elaborate game, for example, where multiple objects have
highly-precise 3D positions. Although PRISM has been used to verify soundness
properties in simple 2D games (Rezin et al. 2018), most modern games are too
complex to be modelled accurately in this way without overly compromising
abstractions. For model checking to be feasible on current hardware, a system
must have no more than roughly $10^{10}$ states.
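PRISM performs this analysis exactly over a formal model. Purely as a rough, self-contained illustration of what "the maximum probability that player 1 wins" means, the sketch below runs value iteration on an invented two-character skirmish in which the maximizing player moves first and the opponent chooses actions uniformly at random; all numbers are made up, and this does not reflect PRISM's algorithms or RPGLite's rules.

```python
def max_win_probability(hp=2, sweeps=200):
    """Value iteration for player 1's maximum win probability in a tiny
    turn-based game; the opponent plays uniformly at random."""
    actions = [(0.8, 1), (0.5, 2)]       # (hit probability, damage)
    states = [(h1, h2, t)
              for h1 in range(1, hp + 1)
              for h2 in range(1, hp + 1)
              for t in (1, 2)]           # t = whose turn it is
    V = {s: 0.0 for s in states}

    def val(h1, h2, t):
        if h2 <= 0:
            return 1.0                   # player 1 has won
        if h1 <= 0:
            return 0.0                   # player 1 has lost
        return V[(h1, h2, t)]

    for _ in range(sweeps):
        for (h1, h2, t) in states:
            if t == 1:                   # player 1 picks the best action
                V[(h1, h2, t)] = max(
                    p * val(h1, h2 - d, 2) + (1 - p) * val(h1, h2, 2)
                    for p, d in actions)
            else:                        # opponent acts at random
                V[(h1, h2, t)] = sum(
                    p * val(h1 - d, h2, 1) + (1 - p) * val(h1, h2, 1)
                    for p, d in actions) / len(actions)
    return V[(hp, hp, 1)]
```

The iteration converges because every playthrough terminates with probability one; the first mover's optimal value exceeds one half in this symmetric setup.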
In order to demonstrate that the outcomes of CSG represent a realistic
evolution of a game, we required data from a real game that was elaborate
enough to require considered decision making from players without being so
complex as to prevent the use of model checking. We also required a system
where player data could be compared for multiple configurations to allow a
comparison of their respective balance. Finding a pre-existing game satisfying
both of these requirements seemed unlikely, so we extended an existing case
study to something more akin to a real game, with a view to developing it as a
mobile application from which to collect data. This way we could perform CSG
analysis (on models) to find candidate configurations that promised
theoretical levels of balance, before releasing the game and testing how they
performed in practice.
Having control of the game, and the data it allowed us to collect, put us in
the unique position of being able to design a system to generate data that was
not only useful to us (in terms of our game balance research), but also in its
own right in the context of system modelling. It is our intention to publish
our dataset in full in the near future. Real-world datasets of sufficiently
well specified systems are rare, so we will specify our system and the nature
of the data it produces in as much detail as possible — however this is
outside the scope of this paper.
## DESIGN DETAILS & REQUIREMENTS
RPGLite, the game, is defined by its _rules_ , _mechanics_ and
_configuration_. We present these here. In later sections, “RPGLite” is used
solely to refer to RPGLite, the application.
### Rules
RPGLite is a two-player, turn-based game in which each player chooses a pair
of unique characters from a pool of eight. Each character has a unique action
and three attributes: health, accuracy and damage. Some have additional
attributes described by their action. On their turn, a player chooses the
action of one of their alive characters and targets an opposing character with
their action. That action will succeed or fail based on the acting character’s
accuracy value. Players can choose to skip on their turn or to forfeit the
game at any time. A coin is flipped to decide which player goes first, and the
winner is the player who first reduces both opposing characters' health values
to 0.
### Mechanics
The mechanics of RPGLite are encapsulated in the eight characters and their
actions:
Knight:
targets a single opponent;
Archer:
targets up to two opponents;
Healer:
targets a single opponent and heals a damaged ally or themselves;
Rogue:
targets a single opponent and does additional damage to heavily damaged
targets;
Wizard:
targets a single opponent and stuns them, preventing their action from being
used on their subsequent turn;
Barbarian:
targets a single opponent and does additional damage when heavily damaged
themselves;
Monk:
targets a single opponent and continues their turn until a target is missed;
and
Gunner:
targets a single opponent and does some damage even on failed actions.
The additional attributes needed to describe the characters fully are the heal
value of the Healer, the heavily damaged value for the Rogue (the execute
range), the heavily damaged value for the Barbarian (the rage threshold), the
increased damage value for the Barbarian (the rage damage) and the miss damage
(graze) for the Gunner.
### Configuration
In total there are 29 attributes for the characters in RPGLite. A
configuration for RPGLite is a set of values for each attribute. These
attributes are the parameters we tune in an attempt to balance the game. The
application was released with a configuration which we suspected of being
balanced based on automated analysis. After a significant number of games were
played, the application was updated with a new configuration (dubbed “season
two”), with the aim of maintaining player interest. The new configuration had
altered attributes for seven of the characters, for example the Healer’s
health value decreased from 10 to 9 and their accuracy increased from 0.85 to
0.9. Only the Wizard remained the same between configurations.
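A configuration can be thought of as a record of attribute values per character. In the sketch below, the Healer's season-two health (9) and accuracy (0.9) come from the text; every other number, and the structure itself, is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    health: int
    accuracy: float
    damage: int
    extra: dict = field(default_factory=dict)  # action-specific attributes

# Two entries of a hypothetical configuration.
season_two = {
    "Healer": Character(health=9, accuracy=0.9, damage=2,
                        extra={"heal": 3}),
    "Rogue": Character(health=8, accuracy=0.85, damage=3,
                       extra={"execute_range": 4}),
}
```

Tuning the game then amounts to searching over such records of attribute values.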
## THE FINAL PRODUCT
As context for the reflections in the rest of this paper, it is necessary to
describe the application that was actually built. This finished product is a
combination of implementation details and the design decisions that lead to
their implementation.
### Design
RPGLite was designed to be simple to understand and play, so as to keep
players interested and reduce barriers to entry. On logging in, players are
presented with five “slots” for games, which can have a number of states:
* •
Unused, waiting for a game to be made
* •
Added to a queue of players waiting for a random match to be made
* •
In an active game,
* –
Waiting for a move to be made by the player
* –
Waiting for an opponent to make a move
Figure 1: Screenshots of the released application showing a player’s home
screen (left) and a game in progress (right)
On starting a new game, players choose two unique characters from a set of
eight and are presented with cards representing their chosen pair opposite
their opponent's. Animations of character cards are used to indicate
whether the player can make a move, as well as an on-screen prompt. The
application was designed to be as frictionless as possible to use, although,
as discussed in lesson 4, we found some users were still confused and we
simplified the design further via iteration.
Additional features were implemented specifically to encourage player
retention. As can be seen in fig. 2, peripheral systems around the core game,
such as medals for players to earn and leaderboards to climb, were intended to
give players goals to achieve and a reason to stay invested in the game.
As players were technically experiment participants, it was necessary to have
them “sign” an ethics-approved consent form and receive an information
sheet. We implemented this by requiring players to scroll through a panel
containing their consent form and information sheet on registration, and
explicitly tick boxes confirming that they consented to all necessary parts
and were over 15 years old.
### Implementation
RPGLite was implemented as a mobile game written in Unity. We chose Unity for
its ability to compile the same project to both iOS and Android and its
active community with numerous video tutorials for beginners. RPGLite made
requests to a public-facing REST API, written in Python 3 and run on
university-provided servers with a firewall under the control of the
institution’s IT services. This server initially handed data processed in the
client to the database to avoid a direct connection (and the risk of exposing
the database publicly), but became a larger aspect of the engineering as
discussed in lesson 3. The project collected and stored its data within a
MongoDB database also hosted locally within the university.
## LESSON 1: RESIST TEMPTATION
At many points in the development process, we found it difficult to constrain
the feature set of the end product. The unbounded nature of the project led to
additional features being implemented as development became a “labour of
love”. These delayed the delivery of the game, and few new ideas were actually
discarded. Only some of these features were beneficial to the player
experience. An illustrative example is the comparison of two such “peripheral
systems”: the leaderboard and players’ profiles. Players load the leaderboard
roughly three times as often as their profiles and, anecdotally, they are a
central component of player retention. An equal amount of effort was spent on
each. During development it was impossible to know how often a feature would
be used in practice.
Figure 2: Screenshots of peripheral systems implemented towards the end of
development. The leaderboard screen (_left_) shows a player’s skill points
compared to all others. The player profile screen (_right_) displays usage
data for all characters and the medals earned.
The ideas that came to us during development were sometimes essential to the
project’s success, and to resist all of these would have resulted in a poorer
product. The danger we identified in our own endeavours was a desire to
implement these ideas _for their own sake_ , and not for their benefit to our
end product. New ideas must be abandoned where their benefit does not outweigh
the additional time they would demand. An agile development approach is the
best in these scenarios, where requirements naturally change over time.
Much like implementing new features, we found that the refinement of existing
features risked an emotional investment. Existing design components, such as
colour schemes and layouts of minor UI elements, were constantly changed prior
to release. We found the adage, “don’t let the perfect be the enemy of the
good”, useful in such moments.
We struggled to resist temptation because of our inexperience with app
development, our lack of a thorough plan, and the fact that we were co-
developing and therefore reluctant to shoot down each other's ideas. For other
developers in similar situations to our own, we recommend a more structured
approach. Firstly, a project should have a plan produced at its inception,
which is maintained throughout the development process. Second, we suggest
adding to this plan a “margin”; a block of unallocated time at the end of the
project that can be spent on developing new ideas. As development progresses,
this margin can be “spent” on new ideas or refinements to existing design
elements. This facilitates necessary discussions by framing them within the
context of a shared resource.
## LESSON 2: EMPLOY AVAILABLE RESEARCH NETWORKS
Advertising is a major cost of app development; new users are expensive. With
no money for player recruitment we were forced to promote the application in a
similar way to other research experiments within a university context, through
participant calls in mailing lists and departmental announcements. Beyond this
we sought out opportunities for free publicity from within our research
community. We found that there is an appetite for open data and by encouraging
people to play our game “for science” our promotions were better received. We
anticipated undergraduate students would make up the majority of our users.
However, while promotions targeted at undergraduates introduced a large number
of users, those users tended to only complete a few games before stopping.
Since our research investigates how players learn over time, we needed
high player retention to allow users time to “learn” the system. We observed
that retention was highest within players who had a vested interest in us or
the research itself, or when the game was adopted by users from a social
clique.
Figure 3: The rate of user acquisition in the weeks following RPGLite’s
release. Important events are also marked: promotion of the application
through the Scottish International Game Developers Association branch, an
email to Computing Science undergraduates, the date from which UK citizens
were told to stay inside if at all possible, the time of a major update to the
game and an email to all Science and Engineering undergraduates at the
University of Glasgow.
In comparing events that we expected would increase player numbers with their
effects on new users and games played (a suitable measure of data generation)
in fig. 3, the difference in retention between the recruited groups is
pronounced. Over
half of our users failed to successfully complete a single game, and several
users installed the app without registering an account. We are fortunate
enough to know the chair of the Scottish branch of the International Game
Developers Association (IGDA) who kindly shared an advert for the game. The
increase in the speed of game completions accompanying the influx of new users
from his involvement shows that those players were valuable data generators.
The figure also shows that the large intake of undergraduate students from
Science and Engineering only caused a brief uptake in activity, which quickly
dissipated. We believe this is due to either the lack of a relationship with
us as the developers or of interest in games research. We also assumed that a
large update might increase activity, but found that not to be the case. A
single large update changing the configuration of the game, adding seasonal
leaderboards and improving existing features had no noticeable effect on the
number of games completed. The extent to which our data comes from a small
subset of users is shown in fig. 4.
Figure 4: The number of users to have played at least a given number of games.
Throughout development we sought advice from those around us with relevant
experience. Many of our university colleagues had been involved in various
aspects of application development and deployment, and advised us throughout.
For example, a web designer gave advice on UX design and a gamification
researcher suggested various incentivisation systems. We also relied heavily
on our department’s IT services team for support in deploying the middleware
server and administrative staff for promoting the app once it had been
released. Application development is multifaceted and the support of our peers
was important in areas where our skills were insufficient.
Without the extensive use of the research communities we belong to, RPGLite
would have been an inferior application, producing a less rich dataset. There
are numerous skills required to develop a system that people will use
willingly. Engaging peers early in the process and being clear in your aims
will highlight the areas in which you need support. Where user retention is
important your research community is vital, as they already have a connection
to you which will see them invested in the project from the outset. Your
individual network is unlikely to be enough to generate a significant dataset,
so we recommend engaging colleagues to advertise on your behalf. RPGLite never
sought to compete with professionally developed games, but through our various
communities we managed to generate enough interest for a steady player base.
## LESSON 3: THE SMALLER THE CLIENT, THE BETTER
The one aspect of RPGLite’s implementation that we most regret is the amount
of game logic being delivered to players in the mobile client rather than the
server. There are many reasons for this, the main one being that the server
could be replaced immediately if a bug were to be found. This is in contrast
to compiling, re-installing and re-testing attempts to fix the given bug, were
it to reside in the client. Being able to fix bugs server-side allowed more
rapid iteration, especially when their origins were not yet understood.
The need for moving logic out of our client became apparent after we had
pushed production code to app stores and had real players taking part in our
experiment. A particularly dedicated player discovered a bug where, after
playing enough games, characters that had been unlocked through repeated play
would become locked again and could no longer be accessed. Had this bug been
in the server, the issue could have been fixed, and a new version deployed in
seconds that lightweight clients could connect to. With our larger client,
this required testing in Unity, testing on-device (to ensure that there
weren’t platform-specific bugs), and deployment to app stores for approval and
distribution. This process took days, even though the bug was trivial to fix.
Large clients also risk introducing a duplication of code when paired with a
secure server. To validate game logic computed by a client, servers must
replicate much of the processing the client previously performed, to verify
that a malicious user hasn’t supplied corrupted game states. This process
requires the implementation of game logic within the server. As a result, a
secure server must include game logic regardless of whether the client does.
This means spending time, an already scarce resource, on duplicated code. This
is another reason we recommend developing a lightweight client, leaving the
majority of computation to a larger server.
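The validation pattern described above can be sketched as follows. This is a hedged illustration in Python, not RPGLite's actual code; the names (`GameState`, `apply_move`, `server_accept`) and the game rules are invented for the example:

```python
# Hypothetical sketch: a secure server re-derives the game state itself
# rather than trusting a state uploaded by the client. This is the
# duplicated logic discussed above: apply_move must exist on both sides.

from dataclasses import dataclass, field

@dataclass
class GameState:
    hp: dict = field(default_factory=dict)   # character -> hit points
    turn: str = "p1"

def apply_move(state: GameState, move: dict) -> GameState:
    """Shared game logic: the same rules the client runs locally."""
    target = move["target"]
    new_hp = dict(state.hp)
    new_hp[target] = max(0, new_hp[target] - move["damage"])
    next_turn = "p2" if state.turn == "p1" else "p1"
    return GameState(hp=new_hp, turn=next_turn)

def server_accept(state: GameState, move: dict, claimed: GameState) -> GameState:
    """Recompute the move server-side and reject mismatching client claims."""
    recomputed = apply_move(state, move)
    if recomputed != claimed:
        raise ValueError("client state rejected: possible tampering")
    return recomputed
```

A lightweight client would instead send only the move and let the server compute the resulting state, so `apply_move` need exist in one place only.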
When we realised that we had produced a large client, we made efforts to move
to a more server-centric design. For example, we considered sending push
notifications via APIs directly written into our client. However, the
flexibility and control of implementing this server-side caused us to move our
notification code to the server. After this, we implemented much of our
peripheral systems logic in the server, including the leaderboard, medal
logic, password reset, and much of the matchmaking systems.
Overall, we found that areas where the client was lightweight allowed more
rapid prototyping and bugfixing. We recommend other projects be constructed
with a small client for these reasons, as well as avoiding duplication of code
and a reduction in application size by limiting client-side dependencies.
## LESSON 4: TEST EARLY, TEST OFTEN
The best source of feedback and advice we received was from the shared
document we circulated alongside our two private test releases. We
specifically chose friends and colleagues who knew us well enough to be able
to have honest discussions on the weaker aspects of the application. We
carried out the testing by sharing Android application packages with Android
users and inviting iOS users to participate in private beta testing via
Apple’s TestFlight system. We were able to implement the majority of the
suggestions made, many of which have become central components in the final
game. This stage highlighted the importance of push notifications and
streamlining the user experience. Specifically, our test users found that they
would often forget to check whether they had moves to make. Before testing we
had investigated the feasibility of implementing push notifications, but were
unsure if they were worth the time to develop. Following testing feedback, we
made this a priority.
Figure 5: Evolution of the Barbarian card artwork throughout the design
process from initial prototype (left), to internal testing version (centre)
and current version (right)
The user interface, colour scheme and card art of the final application are a
result of feedback from our test users. As shown in fig. 5, the cards went
through a series of designs. Responding to test feedback that character cards
were too complicated, the final designs were significantly simpler. We also
received specific advice, such as blacking out the action description of a
stunned character to make it clear that they could not act. Having an ongoing
dialogue throughout development with invested parties meant that we could
rapidly pivot to accommodate their suggestions.
From analysis of our test data we discovered a gap between the data we were
collecting and possible useful information we could capture. Specifically, we
realised we could log user interactions with the application, noting actions
they performed, when they performed them and what the result was if any (for
example, “a user searched for another by their username and found they had no
free game slots”). This idea was a result of realising that even amongst our
dozen test users, there were distinct styles of interacting with the
application. We thought that classifying these interaction styles would be of
interest.
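The interaction logging described above might look like the following minimal sketch; the record schema is our guess, not the actual RPGLite format:

```python
# Illustrative interaction log: each record notes who acted, what they
# did, when, and the result if any, mirroring the example in the text.

import json
import time

def log_interaction(log, user, action, result=None):
    """Append one timestamped interaction record to an in-memory log."""
    log.append({
        "user": user,
        "action": action,
        "result": result,
        "ts": time.time(),
    })

log = []
log_interaction(log, "u123", "search_user", result="no_free_slots")
record = json.dumps(log[0])  # records serialise directly for storage
```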
Testing allowed us to identify areas in both the application and the dataset
that were lacking. We would encourage future researchers to get early versions
of their applications into the hands of testers multiple times before
finalising their system. There were many improvements made to RPGLite
specifically because we had others test it, and could assess it across a suite
of target devices. We structured the format of the feedback we received from
testers in our shared document by grouping requested feedback under specific
headings and directing them to features in which we lacked confidence. This
helped to scaffold the insightful conversations amongst our test users, and we
strongly recommend others make an effort to facilitate a similar dialogue.
## CONCLUSION
In releasing RPGLite we learned several lessons about the realities of mobile
game development within research. We have outlined our key insights and hope
that these will be helpful to researchers developing similar tools. To
summarise, the lessons that we learned are: to beware of scope creep and
lengthy feature refinement; to utilise one's research community for their
expertise and willingness to contribute; to structure the application to
permit rapid bug-fixing and to avoid duplication of code; and to test as
soon as you have a workable build and to continue testing up until release.
Pausing our research to develop a mobile game was an atypical activity. We
hope that these observations are helpful to other researchers developing
similar projects. If they are, we encourage them to document the methodologies
they follow for building data-generating games for the benefit of others
engaged in similar projects, and the lessons they learned doing so.
## FUTURE WORK
This paper details the experience of developing a mobile game for data
collection. The next stage of our research is the processing and analysis of
this dataset. We intend to explore many research questions using it, with some
pertaining to the dataset itself and analysis of optimal play, and others, to
the accurate simulation of RPGLite players.
We will release the full dataset collected by RPGLite alongside the code
constituting the game client and server in a future publication. This will
include collections of all players, all games played, and all interactions
recorded within the application. In addition, this dataset will include
complete information about the games played such as moves made, characters
chosen, and other details used in our own research. These collections include
all the attributes we envisaged as being useful to future research. For
example, a player document includes their username, played/won counts for each
character, other players they have lost games against, skill points, and more.
We intend to omit only sensitive details, as all collected data is anonymised,
and users have indicated through our registration process their consent for
collected data to be disseminated through the academic community in the spirit
of open science.
### System Simulation
Datasets sourced from sufficiently scaled and well-detailed systems are rare.
Some are made available for academic use (Van Dongen 2015),
but available data typically originates from large industrial systems lacking
public specification for competitive reasons, or from well-scaled systems
which lack the supporting detail to be useful. We are therefore interested in
taking small datasets from systems of a manageable size, and producing
supplementary synthetic data which appears plausibly realistic. We believe
this data can be produced by an application of aspect-oriented programming
(Wallis and Storer 2018a). A small, naive simulation of behaviour is modified
via applied aspects to introduce errors and improvements. We are in the
process of developing simulations of RPGLite play and aspects to improve the
simulation’s realism. We aim to verify that this produces plausibly realistic
synthetic datasets by comparison with RPGLite’s empirically sourced data.
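As a rough illustration of the aspect idea (not the implementation from Wallis and Storer 2018a), a naive simulated player can be wrapped, without modifying it, by an "aspect" that injects human-like mistakes; the names and the error model here are ours:

```python
# Sketch: a baseline policy is left untouched; an aspect wraps it and
# perturbs its behaviour with a tunable error rate, the kind of
# parameter a genetic algorithm could later fit to real data.

import random

def naive_player(state):
    """Baseline simulation: always plays the first legal move."""
    return state["legal_moves"][0]

def mistake_aspect(policy, error_rate, rng=random):
    """Aspect: with probability error_rate, replace the chosen move
    with a random legal one, injecting human-like slips."""
    def wrapped(state):
        move = policy(state)
        if rng.random() < error_rate:
            return rng.choice(state["legal_moves"])
        return move
    return wrapped

fuzzed_player = mistake_aspect(naive_player, error_rate=0.1)
```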
Assuming this work is successful, we intend to show that aspects can “fit”
themselves to real-world data. We expect these to produce datasets with
optimal similarity to empirical counterparts via the application of genetic
algorithms on their parameters (Wallis and Storer 2018b). A corollary of this
approach would be that, in addition to highly realistic simulations, aspect
parameters would then describe the nature of real-world agents. This process
could then be used as a lens through which to analyse actual behaviour,
weighing various influences by their importance.
### Game Development and Player Analysis
As described in the motivation, we have developed tools that use model
checking to assess game balance without gameplay data. RPGLite was originally
intended solely to verify this process with both quantitative analysis and
qualitative player feedback. Based on the findings of our model checking
analysis, we believe both of the configurations released for RPGLite are
balanced, but one is “more balanced” than the other. Calculating the extent of
this and comparing our metagame predictions to what was observed in the
dataset when players explored RPGLite will measure the validity of our
approach.
RPGLite is a bounded system that can be model checked, which allows for highly
specific analysis of player actions. We can calculate the cost of any move
made in the game as the difference between the player’s subsequent probability
of winning and their probability having chosen the best move available. By
comparing the costs of the moves a player makes over time we can measure their
rate of learning without considering their opponents. The effect of having
definitive measures of player mistakes for gameplay analysis is a research
area which is of great interest to us. This could help us answer questions
about the situations in which players make mistakes and what causes them.
Beyond game research, this could potentially lead to aiding the design of
systems which aim to minimise human interaction errors.
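The move-cost measure described above can be written as cost(s, a) = max_a' P_win(s, a') - P_win(s, a). A minimal sketch, assuming the win probabilities for each available move are supplied (in the paper's setting they would come from the model-checking analysis):

```python
# Illustrative move-cost computation: the cost of a chosen move is the
# gap between the win probability of the best available move and that
# of the move actually played. The probability tables here are made up.

def move_cost(win_prob: dict, chosen: str) -> float:
    """win_prob maps each available move to the player's win probability
    after making it; cost is the best move's probability minus the
    chosen move's."""
    best = max(win_prob.values())
    return best - win_prob[chosen]

def mean_cost(per_turn: list) -> float:
    """Average cost over (win_prob, chosen) turn records: a crude proxy
    for a player's mistake rate, independent of their opponents."""
    costs = [move_cost(p, c) for p, c in per_turn]
    return sum(costs) / len(costs)
```

Tracking `mean_cost` over a player's successive games would give the learning-rate measure described above.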
## ACKNOWLEDGEMENTS
We could not have released RPGLite without significant input from our
colleagues and friends. In particular, we would like to acknowledge Ellen
Wallace, Marta Araldo, Justin Nichol, Craig Reilly, Adam Elsbury, Alistair
Morrison, Chris McGlashan, Frances Cooper, Brian McKenna, and our test
players. The work was partly supported by Obashi Technologies.
## References
* Kavanagh et al. (2019) Kavanagh W.J.; Miller A.; Norman G.; and Andrei O., 2019. _Balancing Turn-Based Games with Chained Strategy Generation_. _IEEE Transactions on Games_.
* Kwiatkowska et al. (2018) Kwiatkowska M.; Parker D.; and Wiltsche C., 2018. _PRISM-games: verification and strategy synthesis for stochastic multi-player games with multiple objectives_. _STTT_ , 20, no. 2, 195–210.
* Kwiatkowska et al. (2011) Kwiatkowska M.Z.; Norman G.; and Parker D., 2011. _PRISM 4.0: Verification of Probabilistic Real-time Systems_. In _Proc. Int. Conf. Computer Aided Verification (CAV’11)_. Springer, vol. 6806, 585–591.
* Kwiatkowska and Parker (2016) Kwiatkowska M.Z. and Parker D., 2016. _Automated Verification and Strategy Synthesis for Probabilistic Systems_. In _Proceedings of Automated Technology for Verification and Analysis (ATVA’16)_. Springer, 5–52.
* Rezin et al. (2018) Rezin R.; Afanasyev I.; Mazzara M.; and Rivera V., 2018. _Model checking in multiplayer games development_. In _2018 IEEE 32nd International Conference on Advanced Information Networking and Applications (AINA)_. IEEE, 826–833.
* Van Dongen (2015) Van Dongen B.F., 2015. _BPI Challenge 2015_. doi:10.4121/UUID:31A308EF-C844-48DA-948C-305D167A0EC1. URL https://data.4tu.nl/repository/uuid:31a308ef-c844-48da-948c-305d167a0ec1.
* Wallis and Storer (2018a) Wallis T. and Storer T., 2018a. _Modelling realistic user behaviour in information systems simulations as fuzzing aspects_. In _International Conference on Advanced Information Systems Engineering_. Springer, 254–268.
* Wallis and Storer (2018b) Wallis T. and Storer T., 2018b. _Process Fuzzing as an Approach to Genetic Programming._ In _SICSA ReaLX_.
# Provably Constant-time Planning and Replanning for Real-time Grasping
Objects off a Conveyor Belt
Fahad Islam11affiliationmark: and Oren Salzman22affiliationmark: and Aditya
Agarwal11affiliationmark: and Maxim Likhachev11affiliationmark:
11affiliationmark: The Robotics Institute, Carnegie Mellon University
22affiliationmark: Technion-Israel Institute of Technology
<EMAIL_ADDRESS>
###### Abstract
In warehouse and manufacturing environments, manipulation platforms are
frequently deployed at conveyor belts to perform pick and place tasks. Because
objects on the conveyor belts are moving, robots have limited time to pick
them up. This brings the requirement for fast and reliable motion planners
that could provide provable real-time planning guarantees, which existing
algorithms lack. Besides planning efficiency, the success of
manipulation tasks relies heavily on the accuracy of the perception system
which is often noisy, especially if the target objects are perceived from a
distance. For fast moving conveyor belts, the robot cannot wait for a perfect
estimate before it starts executing its motion. In order to be able to reach
the object in time, it must start moving early on (relying on the initial
noisy estimates) and adjust its motion on-the-fly in response to the pose
updates from perception. We propose a planning framework that meets these
requirements by providing provable constant-time planning and replanning
guarantees. To this end, we first introduce and formalize a new class of
algorithms called _Constant-Time Motion Planning algorithms (CTMP)_ that
guarantee to plan in constant time and within a user-defined time bound. We
then present our planning framework for grasping objects off a conveyor belt
as an instance of the CTMP class of algorithms. We present it, give its
analytical properties and show experimental analysis both in simulation and on
a real robot.
###### keywords:
Motion Planning, Automation, Manipulation
## 1 Introduction
Conveyor belts are widely used in automated distribution, warehousing, as well
as for manufacturing and production facilities. Robotic manipulators are now
being deployed extensively at conveyor belts for
automation and faster operations Zhang et al. (2018). In order to maintain a
high-distribution throughput, manipulators must pick up moving objects without
having to stop the conveyor for every grasp. In this work, we consider the
problem of motion planning for grasping moving objects off a conveyor. An
object in motion imposes a requirement that it should be picked up in a short
window of time. The motion planner for the arm, therefore, must compute a path
within a bounded time frame to be able to successfully perform this task.
Fig. 1: A scene demonstrating the PR2 robot picking up a moving object (sugar
box) off a conveyor belt.
Manipulation relies on high quality detection and localization of moving
objects. When the object first enters the robot’s field of view, the initial
perception estimates of the object’s pose are often inaccurate. Consider the
example of an object (sugar box) moving along the conveyor towards the robot
in Fig. 1, shown through an image sequence as captured by the robot’s Kinect
camera in Fig. 2. The plot in Fig. 2 shows the variation of the error between
the filtered input point cloud and a point cloud computed from the predicted
pose from our Iterative Closest Point (ICP)-based perception strategy Islam et
al. (2019) as the object gets closer to the camera. We observe that the error
decreases as the object moves closer, indicating that the point clouds overlap
more closely due to more accurate pose estimates closer to the camera.
However, if the robot waits too long to get an accurate estimate of the object
pose, the delay in starting plan execution could cause the robot to miss the
object. The likelihood of this occurring grows as the speed of the conveyor
increases. Therefore, the robot should start executing a plan computed for the
initial pose and as it gets better estimates, it should repeatedly replan for
the new goals. However, for every replanning query, the time window for the
pickup shrinks. This makes the planner’s job difficult to support real-time
planning.
Furthermore, the planning problem is challenging because the motion planner
has to account for the dynamic object and thus plan with time as one of the
planning dimensions. It should generate a valid trajectory that avoids
collision with the environment around it and also with the target object to
ensure that it does not damage or topple it during the grasp. Avoiding
collisions with the object requires precise geometric collision checking
between the object geometry and the geometry of the manipulator. The robot
arms also have kinodynamic constraints such as torque and velocity limits that
the motion planner may have to account for while computing the plans,
especially when the robots must move at high speeds. The resulting complexity
of the planning problem makes it infeasible to plan online for this task.
Motivated by these challenges, we propose an algorithm that leverages offline
preprocessing to provide bounds on the planning time when the planner is
invoked online. Our key insight is that in our domain the manipulation task is
highly repetitive. Even for different object poses, the computed paths are
quite similar and can be efficiently reused to speed up online planning. Based
on this insight, we derive a method that precomputes a representative set of
paths with some auxiliary datastructures offline and uses them online in a way
that provides _constant-time planning guarantee_. We present it as an instance
of a new class of algorithms which we call Constant-time Motion Planning
(CTMP) algorithms. To the best of our knowledge, our approach is the first to
provide constant-time planning guarantee on generating motions for indefinite
horizons, i.e., all the way to the goal.
We experimentally show that constant-time planning and replanning capability
is necessary for a successful conveyor pickup task. Specifically if we only
perform one-time planning, that is planning only once either for the very
first and potentially inaccurate pose estimate or for a delayed but accurate
pose estimate of the object, the robot frequently fails to pick the object.
Fig. 2: (2(a)) Depiction of an object moving along a conveyor towards the
robot. (2(b)) Pose error as a function of the distance from the conveyor’s
start. Specifically we use ADD-S error Hinterstoisser et al. (2012).
### 1.1 Statement of Contributions
We make the following contributions in this work:
1. 1.
We develop a provably constant-time planning and replanning framework for
grasping moving objects off a conveyor belt.
2. 2.
We prove that the algorithm is complete and is guaranteed to run in constant
time and within a user specified time bound.
3. 3.
We provide experimental analysis of our approach in simulation as well as on a
physical robotic system.
4. 4.
We introduce and formalize a new class of algorithms called Constant-Time
Motion Planning (CTMP) algorithms and show that the proposed approach for
grasping objects off the conveyor is an instance of CTMP class of algorithms.
5. 5.
We develop a kinodynamic motion planner to account for the dynamic constraints
of the robot including joint torque and velocity limits.
This article is in continuation of our previous work presented in Islam et al.
(2020); contributions 4 and 5, specifically, are the extensions. In
addition to these extensions, we provide space complexity analysis of our
approach and report detailed preprocessing statistics of our experiments to
highlight the improvement over the brute force method. We also remove one of
the assumptions of Islam et al. (2020) which says that the environment remains
static up to a certain time in execution.
## 2 Related work
### 2.1 Motion planning for conveyor pickup task
Existing work on picking moving objects has focused on different aspects of
the problem ranging from closed-loop controls, servoing to object perception
and pose estimation, motion planning and others Zhang et al. (2018); Allen et
al. (1993); Han et al. (2019); Stogl et al. (2017). Here, we focus on motion-
planning related work. Time-configuration space representation was introduced
to avoid moving obstacles Fraichard (1993); Cefalo et al. (2013); Yang et al.
(2018). Specifically in Yang et al. (2018), a bidirectional sampling-based
method with a time-configuration space representation was used to plan motions
in dynamic environments to pickup moving objects. While their method showed
real-time performance in complex tasks, it used fully-specified goals; namely
knowing the time at which the object should be picked, which weakens the
completeness guarantee. Furthermore, their method is probabilistically complete
and therefore, does not offer constant-time behavior. Graph-search based
approaches have also been used for the motion-planning problem Menon et al.
(2014); Cowley et al. (2013). The former uses a kinodynamic motion planner to
smoothly pick up moving objects i.e., without an impactful contact. A
heuristic search-based motion planner that plans with dynamics and could
generate high-quality trajectories with respect to the time of execution was
used. While this planner provides strong guarantees on the solution quality,
it is not real-time and thus cannot be used online. The latter work
demonstrated online real-time planning capability. The approach plans to a
pregrasp pose with pure kinematic planning and relies on Cartesian-space
controllers to perform the pick up. The usage of the Cartesian controller
limits the types of objects that the robot can grasp. It also provided no
guarantee to be able to generate a plan in time to execute.
### 2.2 Preprocessing-based planning
Preprocessing-based motion planners often prove beneficial for real-time
planning. They analyse the configuration space offline to generate some
auxiliary information that can be used online to speed up planning. Probably
the best-known example is the Probabilistic Roadmap Method (PRM) Kavraki et al.
(1996) which precomputes a roadmap that can answer any query by connecting the
start and goal configurations to the roadmap and then searching the roadmap.
PRMs are fast to query yet they do not provide constant-time guarantees. To
account for dynamic environments, several PRM method-based extensions have
been proposed Yoshida and Kanehiro (2011); Belghith et al. (2006). However,
these methods often require computationally expensive repairing operations
which cause additional overheads.
A provably constant-time planner was recently proposed in Islam et al. (2019).
Given a start state and a goal region, it precomputes a compressed set of
paths that can be utilised online to plan to any goal within the goal region
in bounded time. As we will see, our approach, while algorithmically different,
draws some inspiration from this work. Both of the above two methods (Kavraki
et al. (1996); Islam et al. (2019)) mainly target pure kinematic planning and
thus they cannot be used for the conveyor-planning problem which is dynamic in
nature.
Another family of preprocessing-based planners utilises previous experiences
to speed up the search Berenson et al. (2012); Coleman et al. (2015); Phillips
et al. (2012). Experience graphs Phillips et al. (2012), provide speed up in
planning times for repetitive tasks by trying to reuse previous experiences.
These methods are also augmented with sparsification techniques (see e.g.,
Dobson and Bekris (2014); Salzman et al. (2014)) to reduce the memory
footprint of the algorithm. Unfortunately, none of the mentioned algorithms
provide fixed planning-time guarantees that we strive for in our application.
### 2.3 Online replanning and real time planning
The conveyor-planning problem can be modelled as a Moving Target Search
problem (MTS) which is a widely-studied topic in the graph search-based
planning literature Ishida and Korf (1991, 1995); Koenig et al. (2007); Sun et
al. (2010). These approaches interleave planning and execution incrementally
and update the heuristic values of the state space to improve the distance
estimates to the moving target. Unfortunately, in high-dimensional planning
problems, this process is computationally expensive which is why these
approaches are typically used for two-dimensional grid problems such as those
encountered in video games. More generally, real-time planning is widely
considered in the search community (see, e.g., Koenig and Likhachev (2006);
Koenig and Sun (2009); Korf (1990)). However, as mentioned, these works are
typically applicable to low-dimensional search spaces.
## 3 Constant-Time Motion Planning (CTMP) for Indefinite Horizons
Before we formally define our motion-planning problem, we introduce a new
_class_ of algorithms that we call _Constant-Time Motion Planning algorithms
(CTMP)_ that are specially designed for repetitive robotic tasks. We will see
that the algorithm we propose for real-time grasping objects off a conveyor
belt is an instance of a CTMP algorithm. However, we believe that their
definition is of independent interest regardless of the specific problem
instance that is considered. While limited-horizon planners like real-time
heuristic search-based planners also plan in constant-time, this paper
specifically addresses the indefinite horizon planning problem, that is to
plan all the way to the goal Koenig and Likhachev (2006); Koenig and Sun
(2009); Korf (1990).
### 3.1 CTMP Problem Definition
We consider the problem setting where the robot has a fixed start state
$s_{\textrm{start}}$ and a set of goals $G$, the representation and
specification of which are domain dependent. The algorithm is queried for a
pair ($s_{\textrm{start}},g$) where $g\in G$ and attempts to compute a
feasible path from $s_{\textrm{start}}$ to $g$ within a (small) constant time.
###### Definition 1 (CTMP Algorithm).
Let ALG be a motion-planning algorithm, $T_{\textrm{bound}}$ a user-controlled
time bound and $T_{\textrm{const}}<T_{\textrm{bound}}$ a small time constant
whose value is independent of the size and complexity of the motion-planning
problem. ALG is said to be a _CTMP algorithm_ if it is guaranteed to answer
any motion planning query within $T_{\textrm{bound}}$.
The input time $T_{\textrm{bound}}$ is a tunable parameter that can be used to
trade off query time with the memory footprint and preprocessing efforts of
the algorithm. It is important to note that an algorithm that returns
NO_PATH for any query is CTMP. Thus, unless endowed with additional
properties CTMP is a weak algorithmic property. We now define one such
property, namely CTMP-Completeness, that makes a CTMP algorithm interesting
and useful.
### 3.2 CTMP-Completeness
We assume a CTMP algorithm has access to a regular motion-planning algorithm
$\mathcal{P}$ (that is not necessarily CTMP). We introduce a new notion of
completeness for CTMP algorithms. Prior to that, we need to define some
preliminaries.
###### Definition 2 (Reachability).
A goal $g\in G$ is said to be reachable from a state $s_{\textrm{start}}$ if
$\mathcal{P}$ can find a path to it within a (sufficiently large) time
bound $T_{\textrm{$\mathcal{P}$}}$.
###### Definition 3 (Goal Coverage).
A reachable goal $g\in G$ is said to be covered by the CTMP algorithm for a
state $s_{\textrm{start}}$ if (in the query phase) it can plan from
$s_{\textrm{start}}$ to $g$ while satisfying Def. 1.
We are now equipped to define CTMP-Completeness.
###### Definition 4 (CTMP-Completeness).
An algorithm is said to be CTMP-complete if it covers all the reachable goals
$g\in G$ for $s_{\textrm{start}}$.
A CTMP algorithm reduces the planning-time bound from $T_{\mathcal{P}}$ to
$T_{\textrm{bound}}$, where $T_{\textrm{bound}}\ll
T_{\textrm{$\mathcal{P}$}}$, while still guaranteeing a success rate no worse
than what $\mathcal{P}$ would provide with a time bound of $T_{\mathcal{P}}$.
A CTMP-complete algorithm distinguishes from a CTMP algorithm for it must
_cover_ all _reachable_ goals in $G$ for $s_{\textrm{start}}$ in the
preprocessing phase, so that in the query phase, it can find a plan to any
reachable goal within $T_{\textrm{bound}}$ time. A CTMP algorithm which is not
CTMP-complete, however, may return failure even if the queried goal is
reachable.
Note that the notion of CTMP-completeness is decoupled from the completeness
guarantee (by conventional definition) of the underlying motion planner
$\mathcal{P}$, hence we can view a CTMP algorithm as a meta planner. The CTMP
algorithm only provides guarantees for the subset of $G$ which is reachable
and hence its completeness properties differ from the completeness properties
of $\mathcal{P}$. However, the size of the reachable set of goals is largely
dependent on the performance of $\mathcal{P}$.
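A minimal illustration of the CTMP pattern defined above (a toy construction of ours, not the paper's algorithm): an expensive planner $\mathcal{P}$ is run offline over every goal in $G$, and the online query reduces to a constant-time table lookup, returning NO_PATH for uncovered goals:

```python
# Toy CTMP meta-planner: the preprocessing phase may be arbitrarily
# slow, but the query phase is an O(1) dictionary lookup whose cost is
# independent of the problem's size, matching Definition 1.

NO_PATH = None

class ToyCTMP:
    def __init__(self, planner, start, goals):
        # Preprocessing: cover every goal that planner (i.e. P) can reach.
        self._paths = {}
        for g in goals:
            path = planner(start, g)      # P may fail within its budget
            if path is not None:
                self._paths[g] = path

    def query(self, goal):
        # Query phase: constant-time lookup; uncovered goals get NO_PATH.
        return self._paths.get(goal, NO_PATH)
```

Because every goal reachable by the underlying planner is covered during preprocessing, this toy is CTMP-complete in the sense of Definition 4 with respect to that planner; its completeness is inherited entirely from $\mathcal{P}$, mirroring the meta-planner view above.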
## 4 CTMP for Grasping Objects off A Conveyor Belt—Problem Setup
Our system is comprised of a robot manipulator, a conveyor belt moving at some
known velocity, a set of known objects $\mathcal{O}$ that need to be grasped
and a perception system that is able to estimate the type of object and its
location on the conveyor belt. Here, we assume that the geometric models of
the target objects are known a priori and we are given a set of feasible grasps
for each object $o\in\mathcal{O}$.
Given a pose $g$ of an object $o\in\mathcal{O}$, our task is to plan the
motion of the robot such that it grasps $o$ from the conveyor belt at some
future time. Unfortunately, the perception system may give inaccurate object
poses. Thus, the pose $g$ will be updated by the perception system as the
robot is executing its motion. To allow for the robot to move towards the
updated pose in real time, we introduce the additional requirement that
planning should be done within a user-specified time bound
$T_{\textrm{bound}}$. For ease of exposition, when we say that we plan to a
pose $g$ of $o$ that is given by the perception system, we mean that we plan
the motion of the robot such that it will be able to pick $o$ from the
conveyor belt at some future time. This is explained in detail in Sec. 6.2.1
and in Fig. 9.
We denote by $G^{\textrm{full}}$ the discrete set of initial object poses on
the conveyor belt that the perception system can perceive. Finally, we assume
that the robot has an initial state $s_{\textrm{home}}$ corresponding to the
time $t=0$ from which it starts planning to grasp any object.
Roughly speaking, the objective is to enable planning and replanning to any
goal pose $g\in G^{\textrm{full}}$ in bounded time $T_{\textrm{bound}}$
regardless of the robot’s current state. Specifically, we aim to develop a
CTMP-complete algorithm, so that for any state $s$ that the system can be in
and every goal pose $g\in G^{\textrm{full}}$ reachable from $s$ that the
perception system reports, $g$ is covered by $s$.
We assume that the pose estimation error of the perception system is bounded
by a distance $\varepsilon$. We also specify a replan cutoff time
$t=t_{\textrm{rc}}$ after which the planner does not replan and continues to
execute the last planned path until the goal is reached. The time
$t_{\textrm{rc}}$ can be chosen based on when the perception system is
expected to send an accurate pose estimate, after which no further replanning
is needed.
## 5 Algorithmic framework
Our approach for constant-time planning relies on a _preprocessing_ stage that
allows us to efficiently compute paths to any goal in a _query_ stage. Before we
describe our approach, we start by describing a naïve method that solves the
aforementioned problem but requires a prohibitive amount of memory. This can
be seen as a warmup before describing our algorithm which exhibits the same
traits but does so in a memory-efficient manner.
### 5.1 Straw man approach
(a)
(b)
Fig. 3: The figures show paths discretized from timesteps $t_{0}$ to
$t_{\textrm{rc}}$ with steps of size $\delta_{t}$. (3(a)) At $t_{0}$, the
algorithm computes $n_{\rm goal}$ paths, from $s_{\textrm{home}}$ to every
$g\in G^{\textrm{full}}$. (3(b)) At $t_{1}=\delta_{t}$, the algorithm computes
$n_{\rm goal}^{2}$ paths, from all $n_{\rm goal}$ replanable states at $t_{1}$
to every $g\in G^{\textrm{full}}$ (here we only show paths from three states).
Thus, the number of paths increases exponentially at every timestep.
We first compute from $s_{\textrm{home}}$ a path $\pi_{g}$ to every reachable
$g\in G^{\textrm{full}}$. These paths can be stored in a lookup (hash) table
which can be queried in constant time $T_{\textrm{const}}$ (assuming perfect
hashing Czech et al. (1997)). For the straw man approach, since there is no
utility in having a $T_{\textrm{bound}}$ larger than $T_{\textrm{const}}$, as
it only performs lookup operations at query time, we have
$T_{\textrm{bound}}=T_{\textrm{const}}$. By storing paths to all the goals,
every goal is covered by $s_{\textrm{home}}$ and this allows us to start
executing a path once the perception system gives its initial pose estimate.
However, we need to account for pose update while executing $\pi_{g}$. This
only needs to be done up until time $t_{\textrm{rc}}$, since no future
improved estimates are expected from the perception system. Thus, we
discretize each path uniformly with resolution $\delta_{t}$. We call all
states that are less than $t_{\textrm{rc}}$ time from $s_{\textrm{home}}$
_replanable states_.
Next, for every replanable state along each path $\pi_{g}$, we compute a new
path to all goals. This ensures that all goals are covered by all
replanable states. Namely, it allows the robot to immediately start executing a new
path once the goal location is updated by the perception system.
Unfortunately, the perception system may update the goal location more than
once. Thus, this process needs to be performed recursively for the new paths
as well.
The outcome of the preprocessing stage is a set of precomputed collision-free
paths starting at states that are at most $t_{\textrm{rc}}$ from
$s_{\textrm{home}}$ and end at goal states. The paths are stored in a lookup
table $\mathcal{M}:S\times
G^{\textrm{full}}\rightarrow\\{\pi_{1},\pi_{2},\ldots\\}$ that can be queried
in $T_{\textrm{bound}}=T_{\textrm{const}}$ time to find a path from any given
$s\in S$ to $g\in G^{\textrm{full}}$.
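As an illustration, the straw man lookup table $\mathcal{M}$ can be sketched as a plain hash map keyed by (state, goal) pairs. The state and goal labels, the `plan` callback, and the path representation below are illustrative placeholders, not the paper's actual data structures.

```python
# Minimal sketch of the straw-man lookup table M: (state, goal) -> path.
# All names here are hypothetical stand-ins for the paper's structures.

def build_lookup(replanable_states, goals, plan):
    """Precompute a path from every replanable state to every goal (offline)."""
    table = {}
    for s in replanable_states:
        for g in goals:
            path = plan(s, g)        # full motion-planning call, done offline
            if path is not None:     # unreachable goals are simply absent
                table[(s, g)] = path
    return table

def query(table, s, g):
    """Constant-time query: a single hash lookup, no planning."""
    return table.get((s, g))         # None signals an uncovered goal

# Toy example: states and goals are labels, a "path" is just its endpoints.
states = ["s_home", "s1", "s2"]
goals = ["g1", "g2"]
M = build_lookup(states, goals, lambda s, g: [s, g])
print(query(M, "s1", "g2"))  # ['s1', 'g2']
```

The constant query time comes entirely from hashing; the prohibitive cost is hidden in `build_lookup`, which enumerates every (state, goal) pair.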
In the query stage we obtain an estimation $g_{1}$ of the goal pose by the
perception system. The algorithm then retrieves the path
$\pi_{1}(s_{\textrm{home}},g_{1})$ (from $s_{\textrm{home}}$ to $g_{1}$) from
$\mathcal{M}$ and the robot starts executing
$\pi_{1}(s_{\textrm{home}},g_{1})$. For every new estimation $g_{i}$ of the
goal pose obtained from the perception system while the system is executing
path $\pi_{i-1}(s_{i-1},g_{i-1})$, the algorithm retrieves from $\mathcal{M}$
the path $\pi_{i}(s_{i},g_{i})$ from the nearest state $s_{i}$ along
$\pi_{i-1}(s_{i-1},g_{i-1})$ that is at least $T_{\textrm{bound}}$ away from
$s_{i-1}$. The robot then starts executing $\pi_{i}(s_{i},g_{i})$ once it
reaches $s_{i}$. Hence, the straw man algorithm is trivially a CTMP-complete
algorithm.
Clearly, every possible goal is covered for every possible configuration that
the robot might be in during execution by this brute-force approach; however,
it requires a massive amount of memory and preprocessing time. Let $n_{\rm
goal}=|G^{\textrm{full}}|$ be the number of goals and $\ell$ be the number of
states between $s_{\textrm{home}}$ and the state that is $t_{\textrm{rc}}$
time away. This approach requires precomputing and storing $O(n_{\rm
goal}^{\ell})$ paths which is clearly infeasible (see Fig. 3). In the next
sections, we show how we can dramatically reduce the memory footprint of the
approach without compromising on the system’s capabilities.
### 5.2 Algorithmic approach
While the straw man algorithm presented allows for planning to any goal pose
$g\in G^{\textrm{full}}$ within $T_{\textrm{const}}$ time, its memory
footprint is prohibitively large. We suggest to reduce the memory footprint by
building on the observation that many paths to close-by goals traverse very
similar parts of the configuration space. Instead of generating a plan
strictly within $T_{\textrm{const}}$ time, our approach trades off
preprocessing efforts with the bound on the planning time and guarantees to
generate a solution within the user-specified time bound
$T_{\textrm{bound}}(>T_{\textrm{const}})$.
The key idea of our approach is that instead of computing (and storing) paths
to all reachable goals in $G^{\textrm{full}}$, we compute a relatively small
subset of so-called “root paths” that can be reused in such a way that we can
still cover $G^{\textrm{full}}$ fully. Namely, at query time, we can reuse
these paths to plan to any $g\in G^{\textrm{full}}$ within
$T_{\textrm{bound}}$. The idea is illustrated in Fig. 5.
First, we compute a set of root paths $\\{\Pi_{1},\ldots,\Pi_{k}\\}$ from
$s_{\textrm{home}}$ to cover $G^{\textrm{full}}$ by $s_{\textrm{home}}$ (here
we will have $k\ll n_{\rm goal}$). Next, for all replanable states along
these root paths, the algorithm recursively computes additional root paths so
that their reachable goals are also covered. During this process, additional
root paths are computed only when the already existing set of root paths does
not provide enough guidance to the search to cover $G^{\textrm{full}}$, i.e.,
to be able to compute a path to any $g\in G^{\textrm{full}}$ within
$T_{\textrm{bound}}$. The remainder of this section formalizes these ideas.
### 5.3 Algorithmic building blocks
We start by introducing the algorithmic building blocks that we use.
Specifically, we start by describing the motion planner $\mathcal{P}$ that is
used to compute the root paths and then continue to describe how they can be
used as _experiences_ to efficiently compute paths to other goals. We
implemented two types of motion planners. One operates in a time-configuration
space and the other is a kinodynamic motion planner that plans with additional
state dimensions of joint velocities in order to satisfy the kinodynamic
constraints of the robot. While the former planning framework is simpler and
works successfully on a physical robot in our experiments, the latter might be
more desirable for operating a robot closer to its maximum performance limits.
We use a heuristic search-based planning approach with motion primitives (see,
e.g., Cohen et al. (2010, 2011); Likhachev and Ferguson (2009)) as it allows
for deterministic planning time which is key in our domain. Moreover, such
planners can easily handle under-defined goals as we have in our setting—we
define a goal as a grasp pose for the goal object. The grasp pose for a target
object $o$ is manually selected.
#### 5.3.1 Time-Configuration Motion Planner
State space and graph construction. We define a state $s$ as a pair $(q,t)$
where $q=(\theta_{1},...,\theta_{n})$ is a configuration represented by the
joint angles for an $n$-DOF robot arm (in our setting $n=7$) and $t$ is the
time associated with $q$. Given a state $s$ we define two types of motion
primitives which are short kinodynamically-feasible motions that the robot can
execute.
The first type of motion primitives are predefined primitives. These are small
individual joint movements in either direction as well as _wait_ actions.
These primitives have non-uniform resolution. For each joint, we define two
motion primitives of distance $\pm$4∘. In addition, for the first four of the
seven robot joints we define two additional primitives each, of distance
$\pm$7∘. We only allow moving one joint at a time which makes it a total of 23
predefined primitives. For each motion primitive, we compute its duration by
using a nominal constant velocity profile for the joint that is moved.
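The predefined primitive set described above can be enumerated in a few lines. This is a sketch under the stated counts (two $\pm 4°$ moves per joint, two extra $\pm 7°$ moves for the first four joints, one wait action); the tuple encoding and the nominal velocity used for durations are illustrative assumptions.

```python
# Sketch of the predefined motion-primitive set: single-joint moves plus
# a wait action, with durations from a nominal constant joint velocity.

def predefined_primitives(n_joints=7, coarse_joints=4, nominal_vel=30.0):
    """Return (kind, joint, delta_deg, duration_s) tuples; encoding is illustrative."""
    prims = [("wait", None, 0.0, 0.1)]            # wait duration is an assumption
    for j in range(n_joints):                     # +-4 deg for every joint
        for delta in (+4.0, -4.0):
            prims.append(("move", j, delta, abs(delta) / nominal_vel))
    for j in range(coarse_joints):                # extra +-7 deg for first four
        for delta in (+7.0, -7.0):
            prims.append(("move", j, delta, abs(delta) / nominal_vel))
    return prims

print(len(predefined_primitives()))  # 23
```

The count matches the text: $7\times 2 + 4\times 2 + 1 = 23$ primitives.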
The second type of primitives are dynamic primitives. They are generated by
the search only at the states that represent the arm configurations where the
end effector is close to the object. These primitives correspond to the actual
grasping of the object while it is moving. The dynamic primitives are
generated by using a Jacobian pseudo inverse-based control law similar to what
Menon et al. (2014) used. The desired velocity of the end effector is computed
such that it minimizes the distance to the grasp pose. Once the
gripper encloses the object, it moves along with the object until the gripper
is closed. Some examples of the dynamic primitives are shown in Fig. 4. During
the search, all motion primitives are checked for validity with respect to
collision and joint limits.
Fig. 4: The figures show the dynamic motion primitive for two different
initial poses of the sugar box. (Only the gripper poses are visualized.) The
sugar box shown at the rear of the conveyor belt in the examples depicts its
initial pose.
Heuristic Function. The search is guided by an efficient and fast-to-compute
heuristic function which in our case has two components. The first component
drives the search to intercept the object at the right time and the second
component guides the search to correct the orientation of the end effector as
it approaches the object. Mathematically, our heuristic function is given by
$h(s,g)=\max(\Delta t(s,g),\lambda\cdot\textsc{AngleDiff}(s,g)).$ (1)
Here, $\Delta t(s,g)$ is the expected time to intercept the object which can
be analytically computed from the positions of the target object and the end-
effector, the velocity of the target object and the speed of the end-effector.
We pick a nominal speed for the end-effector to solve the problem.
AngleDiff($s,g$) gives the magnitude of angular difference between the end-
effector’s current pose and the grasp pose. The coefficient $\lambda$ is used
as a weighting factor.
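Eq. (1) can be sketched as follows. The fixed-point intercept-time estimate, the yaw-only angle difference, and the default speed and weight are simplified assumptions standing in for the analytic computations described in the text.

```python
import math

# Illustrative sketch of the heuristic in Eq. (1):
#   h(s, g) = max(dt(s, g), lam * AngleDiff(s, g)).

def intercept_time(ee_pos, obj_pos, obj_vel, ee_speed, iters=20):
    """Fixed-point estimate of the time at which an end effector moving
    at a nominal speed can reach the moving object."""
    t = 0.0
    for _ in range(iters):
        future = [p + v * t for p, v in zip(obj_pos, obj_vel)]
        t = math.dist(ee_pos, future) / ee_speed
    return t

def angle_diff(ee_yaw, grasp_yaw):
    """Magnitude of the angular difference, wrapped to [0, pi]."""
    d = abs(ee_yaw - grasp_yaw) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def heuristic(ee_pos, ee_yaw, obj_pos, obj_vel, grasp_yaw,
              ee_speed=0.5, lam=1.0):
    dt = intercept_time(ee_pos, obj_pos, obj_vel, ee_speed)
    return max(dt, lam * angle_diff(ee_yaw, grasp_yaw))
```

For a stationary object one meter away with a nominal speed of 1 m/s and a matching orientation, the heuristic reduces to the one-second intercept time.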
#### 5.3.2 Kinodynamic Motion Planner
The time-configuration motion planner does not guarantee that the plan, when
transformed into a trajectory, will satisfy the robot's torque limits. To this end,
we extend the previous motion planner framework to be able to plan within the
kinodynamic constraints of the robot, while still being able to handle the
conveyor speed of $0.2\,$m/s. In this section we describe the modified state
space and the heuristic function for the kinodynamic motion planner.
State space and graph construction. We modify the state space to include joint
velocities. A state $s$ is a tuple $(q,\dot{q},t)$ where
$\dot{q}=(\dot{\theta}_{1},...,\dot{\theta}_{n})$ represents the velocities of
each joint. The dimensionality of the state space hence becomes 15 in our
experiments. Similar to $q$, the velocities $\dot{q}$ are also discretized.
We modify the set of predefined motion primitives. The primitives are
specified as magnitudes of accelerations for each joint. Specifically, to
generate a successor of a state, for the desired accelerations (specified in
the primitive), first we use inverse dynamics to compute the required torques.
Second, we integrate using the forward dynamics for a fixed time step $dt$ to
get the resulting state using Runge-Kutta integration. We use the Orocos
Kinematics and Dynamics Library (KDL) to solve for forward/inverse dynamics
and the integration111KDL: https://orocos.org/kdl.html. Let a motion primitive
be specified by a vector of accelerations $\ddot{q}$ of size $n$. The two
steps to compute the primitive are
$\displaystyle\tau=\textsc{ComputeInverseDynamics}(s,\ddot{q})$ $\displaystyle
s^{\prime}=\textsc{Integrate}(s,\tau,dt)$
Since the robot dynamics is a function of the state $s$ of the robot, these
primitives need to be computed on-the-fly during search. In addition to
performing kinematic and collision checks to verify motion validity, we
discard successors for which $\tau$ exceeds the robot’s torque or velocity
limits.
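The two-step recipe (inverse dynamics, then forward integration with limit checks) can be sketched on a toy 1-DOF joint with inertia $I$, so that $\tau = I\ddot{q}$. The real system uses KDL's full 7-DOF dynamics and Runge-Kutta integration; the linear model, constant-acceleration integration, and limit values below are illustrative assumptions.

```python
# Toy sketch of generating one kinodynamic successor for a 1-DOF joint.

def compute_inverse_dynamics(inertia, qddot):
    return inertia * qddot            # toy model: no gravity/Coriolis terms

def integrate(q, qdot, qddot, dt):
    # exact integration under constant acceleration (stand-in for RK)
    return q + qdot * dt + 0.5 * qddot * dt * dt, qdot + qddot * dt

def successor(state, qddot, dt=0.1, inertia=1.0,
              tau_limit=10.0, vel_limit=2.0):
    q, qdot, t = state
    tau = compute_inverse_dynamics(inertia, qddot)
    if abs(tau) > tau_limit:
        return None                   # primitive violates torque limits
    q_new, qdot_new = integrate(q, qdot, qddot, dt)
    if abs(qdot_new) > vel_limit:
        return None                   # primitive violates velocity limits
    return (q_new, qdot_new, t + dt)
```

Because the dynamics depend on the current state, such successors must be generated on the fly during the search rather than precomputed.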
In our experiments, we use 6 motion primitives for each of the 7 joints. These
are accelerations $\pm$(4, 8, 12) $\deg/s^{2}$. We only accelerate or
decelerate one joint at a time under these acceleration profiles, thus
resulting in 42 primitives. In addition to these primitives, we use a
“coasting” primitive that assigns zero acceleration to all the joints.
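The acceleration-primitive set can be enumerated directly from the stated magnitudes; the vector encoding below is an illustrative assumption.

```python
# Sketch of the kinodynamic primitive set: accelerations of
# +-4, +-8, +-12 deg/s^2 applied to one joint at a time (6 x 7 = 42),
# plus a zero-acceleration "coasting" primitive.

def kinodynamic_primitives(n_joints=7, mags=(4.0, 8.0, 12.0)):
    prims = [tuple(0.0 for _ in range(n_joints))]   # coasting
    for j in range(n_joints):
        for m in mags:
            for sign in (+1.0, -1.0):
                qddot = [0.0] * n_joints
                qddot[j] = sign * m
                prims.append(tuple(qddot))
    return prims

print(len(kinodynamic_primitives()))  # 43
```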
Heuristic Function. The heuristic function we used for the time-configuration
space planner gives no guidance for the velocity profiling which is crucial in
the case of kinodynamic planning. We, therefore, modify the heuristic in Eq. 1
by introducing an additional term $\Delta\dot{x}$ that guides the search with
respect to the velocity profile.
$\Delta\dot{x}(s)=\|\mathbf{\dot{x^{o}}-\dot{x^{e}}}(s)\|.$ (2)
Namely, $\Delta\dot{x}$ is the magnitude of the difference of the target
object’s velocity $\mathbf{\dot{x^{o}}}$ and the robot end-effector’s velocity
$\mathbf{\dot{x^{e}}}$ at state $s$ in 3D. $\mathbf{\dot{x^{e}}}$ is computed
using forward velocity kinematics.
The new heuristic function is given by
$h(s,g)=\max(\Delta t(s,g),\lambda_{1}\cdot\textsc{AngleDiff}(s,g)+\lambda_{2}\cdot\Delta\dot{x}(s)),$
(3)
where $\lambda_{1}$ and $\lambda_{2}$ are weighting factors. Intuitively,
this additional term guides the search to match the end-effector velocity with
$o$’s velocity as the end-effector approaches the object. This increases the
likelihood of generating a dynamic primitive that satisfies the kinodynamic
constraints of the robot.
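Eq. (3) combines the previous terms with the velocity mismatch of Eq. (2). The sketch below takes the intercept time and angle difference as precomputed inputs; the weights and toy velocities are illustrative assumptions.

```python
import math

# Illustrative sketch of the modified heuristic in Eq. (3):
#   h(s,g) = max(dt(s,g), lam1 * AngleDiff(s,g) + lam2 * d_xdot(s)),
# where d_xdot is the norm of the difference between the object's and
# the end effector's 3D velocities (Eq. (2)).

def velocity_mismatch(obj_vel, ee_vel):
    return math.dist(obj_vel, ee_vel)   # || xdot_o - xdot_e ||

def heuristic_kd(dt, angle_diff, obj_vel, ee_vel, lam1=1.0, lam2=1.0):
    return max(dt, lam1 * angle_diff + lam2 * velocity_mismatch(obj_vel, ee_vel))

# As the end effector approaches the object, matching its velocity
# drives the second term to zero and the intercept time dominates.
print(heuristic_kd(0.5, 0.0, (0.2, 0.0, 0.0), (0.2, 0.0, 0.0)))  # 0.5
```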
#### 5.3.3 Graph Search
The states and the transitions implicitly define a graph $\mathcal{G}=(S,E)$
where $S$ is the set of all states and $E$ is the set of all transitions
defined by the motion primitives. We use Weighted A* (wA*) Pohl (1970) to find
a path in $\mathcal{G}$ from a given state $s$ to a goal $g$. wA* is a
suboptimal heuristic search algorithm that allows a tradeoff between
optimality and greediness by inflating the heuristic function $h$ by a given
weight $w$. The cost of an edge is the time of its traversal.
#### 5.3.4 Planning with Experience Reuse
We now show how the previously computed paths, which we call root paths, can be
reused as experiences in our framework. Given a heuristic function $h$, we
define for a root path $\Pi$ and a goal $g\in G^{\textrm{full}}$ the
_shortcut_ state $s_{\textrm{sc}}(\Pi,g)$ as the state on the path $\Pi$ that
is closest to $g$ with respect to $h$. Namely,
$s_{\textrm{sc}}(\Pi,g):=\operatorname*{arg\,min}\limits_{s_{i}\in\Pi}h(s_{i},g).$
(4)
Now, when searching for a path to a goal $g\in G^{\textrm{full}}$ using root
path $\Pi$ as an experience, we add $s_{\textrm{sc}}(\Pi,g)$ as a successor
for any state along $\Pi$ (subject to the constraint that the path along $\Pi$
to $s_{\textrm{sc}}$ is collision free). In this manner the search reuses
previous experience to quickly reach a state close to $g$.
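Eq. (4) is a direct arg-min over the states of the root path. In the toy example below, states are 1-D points and $h$ is a distance lookup — illustrative stand-ins for the planner's states and heuristic.

```python
# Sketch of Eq. (4): the shortcut state on a root path Pi is the state
# minimizing the heuristic distance to the goal g.

def shortcut_state(root_path, g, h):
    return min(root_path, key=lambda s: h(s, g))

# Toy example: states on the path are 1-D points, h is distance to g.
path = [0.0, 2.0, 5.0, 9.0]
print(shortcut_state(path, g=4.0, h=lambda s, g: abs(s - g)))  # 5.0
```

During the search, this state is offered as an extra successor (subject to the collision-free constraint on the segment of $\Pi$ leading to it), letting the search jump close to $g$ in one step.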
### 5.4 Algorithmic details
We are finally ready to describe our algorithm, starting with the
preprocessing phase and then the query phase.
#### 5.4.1 Preprocessing
(a)
(b)
(c)
Fig. 5: First step of the preprocessing stage. (5(a)) A goal $g_{1}$ is
sampled and the root path $\Pi_{1}$ is computed between $s_{\textrm{home}}$
and $g_{1}$. (5(b)) The set $G_{1}\subset G^{\textrm{full}}$ of all states
that can use $\Pi_{1}$ as an experience is computed and associated with
$\Pi_{1}$. (5(c)) The goal region covered by four root paths from
$s_{\textrm{home}}$ after the first step of the preprocessing stage
terminates.
Our preprocessing stage starts by sampling a goal $g_{1}\in G^{\textrm{full}}$
and computing a root path $\Pi_{1}$ from $s_{\textrm{home}}$ to $g_{1}$. We
then associate with $\Pi_{1}$ the set of goals $G_{1}\subset
G^{\textrm{full}}$ such that $\Pi_{1}$ can be used as an experience in
reaching any $g_{j}\in G_{1}$ within $T_{\textrm{bound}}$.222To account for
the lower-bound time $T_{\textrm{const}}$ that is required for the query phase,
which is consumed by operations such as hash table lookups, the time
$T_{\textrm{const}}$ is subtracted from $T_{\textrm{bound}}$ for the
experience-based planner, to ensure that the overall query time is bounded by
$T_{\textrm{bound}}$. We use a conservative estimate of $T_{\textrm{const}}$
in our experiments. Additionally, the experience-based planner is constrained
to reuse the root path up to $t_{\textrm{rc}}$. Thus, all goals in $G_{1}$ are
covered by $s_{\textrm{home}}$. We then repeat this process but instead of
sampling a goal from $G^{\textrm{full}}$, we sample from
$G^{\textrm{full}}\setminus G_{1}$, thereby removing covered goals from
$G^{\textrm{full}}$ in every iteration. At the end of this step, we obtain a
set of root paths. Each root path $\Pi_{i}$ is associated with a goal set
$G_{i}\subseteq G^{\textrm{full}}$ such that (i) $\Pi_{i}$ can be used as an
experience for planning to any $g_{j}\in G_{i}$ in $T_{\textrm{bound}}$ and
(ii)
$\bigcup_{i}G_{i}=\textsc{Reachable}(s_{\textrm{home}},G^{\textrm{full}})$
(i.e., all reachable goals for $s_{\textrm{home}}$ in $G^{\textrm{full}}$).
Alg. 1 details this step (when called with arguments
($s_{\textrm{home}},G^{\textrm{full}}$)). It also returns a set of unreachable
goals that are left uncovered. The process is illustrated in Fig. 5.
Algorithm 1 Plan Root Paths
1:procedure PlanRootPaths($s_{\textrm{start}},G^{\textrm{uncov}}$)
2: $\Psi_{s_{\textrm{start}}}\leftarrow\emptyset$ $\triangleright$ a list of
pairs ($\Pi_{i},G_{i})$
3: $G^{\textrm{uncov}}_{s_{\textrm{start}}}\leftarrow\emptyset$; $i=0$
4: while $G^{\textrm{uncov}}\neq\emptyset$ do $\triangleright$ until all
reachable goals are covered
5: $g_{i}\leftarrow$SampleGoal($G^{\textrm{uncov}}$)
6: $G^{\textrm{uncov}}\leftarrow G^{\textrm{uncov}}\setminus\\{g_{i}\\}$
7: if $\Pi_{i}\leftarrow$ PlanRootPath($s_{\textrm{start}},g_{i}$) then
$\triangleright$ planner succeeded
8: $G_{i}\leftarrow\\{g_{i}\\}$ $\triangleright$ goals reachable
9: for each $g_{j}\in G^{\textrm{uncov}}$ do
10: if
$\pi_{j}\leftarrow$PlanPathWithExperience($s_{\textrm{start}},g_{j},\Pi_{i}$)
then
11: $G_{i}\leftarrow G_{i}\cup\\{g_{j}\\}$
12: $G^{\textrm{uncov}}\leftarrow G^{\textrm{uncov}}\setminus\\{g_{j}\\}$
13:
$\Psi_{s_{\textrm{start}}}\leftarrow\Psi_{s_{\textrm{start}}}\cup\\{(\Pi_{i},G_{i})\\}$;
$i\leftarrow i+1$
14: else
15: $G^{\textrm{uncov}}_{s_{\textrm{start}}}\leftarrow
G^{\textrm{uncov}}_{s_{\textrm{start}}}\cup\\{g_{i}\\}$ $\triangleright$ goals
unreachable
16: return $\Psi_{s_{\textrm{start}}},G^{\textrm{uncov}}_{s_{\textrm{start}}}$
Algorithm 2 Preprocess
1:procedure
TryLatching($s,\Psi_{s_{\textrm{home}}},G^{\textrm{uncov}},G^{\textrm{cov}}$)
2: for each $(\Pi_{i},G_{i})\in\Psi_{s_{\textrm{home}}}$ do
3: if CanLatch($s,\Pi_{i}$) then
4: $G^{\textrm{uncov}}\leftarrow G^{\textrm{uncov}}\setminus G_{i}$
5: $G^{\textrm{cov}}\leftarrow G^{\textrm{cov}}\cup G_{i}$
6: return $G^{\textrm{uncov}},G^{\textrm{cov}}$
7:procedure
Preprocess($s_{\textrm{start}},G^{\textrm{uncov}},G^{\textrm{cov}}$)
8:
$\Psi_{s_{\textrm{start}}},G^{\textrm{uncov}}_{s_{\textrm{start}}}\leftarrow$
PlanRootPaths($s_{\textrm{start}},G^{\textrm{uncov}}$)
9: if $s_{\textrm{start}}=s_{\textrm{home}}$ then
$\Psi_{s_{\textrm{home}}}=\Psi_{s_{\textrm{start}}}$
10: $G^{\textrm{cov}}_{s_{\textrm{start}}}\leftarrow
G^{\textrm{cov}}\cup(G^{\textrm{uncov}}\setminus
G^{\textrm{uncov}}_{s_{\textrm{start}}})$
11: if $t(s_{\textrm{start}})\leq t_{\textrm{rc}}$ then
12: for each $(\Pi_{i},G_{i})\in\Psi_{s_{\textrm{start}}}$ do
13: $G_{i}^{\textrm{cov}}\leftarrow G_{i}$; $G_{i}^{\textrm{uncov}}\leftarrow
G^{\textrm{cov}}_{s_{\textrm{start}}}\setminus G_{i}$;
14: for each $s\in\Pi_{i}$ (from last to first) do $\triangleright$ states up
to $t_{\textrm{rc}}$
15: $G^{\textrm{uncov}}_{i},G^{\textrm{cov}}_{i}\leftarrow$
TryLatching($s,\Psi_{s_{\textrm{home}}},G^{\textrm{uncov}}_{i},G^{\textrm{cov}}_{i}$)
16: if $G_{i}^{\textrm{uncov}}=\emptyset$ then
17: break
18: $G_{i}^{\textrm{uncov}},G_{i}^{\textrm{cov}}\leftarrow$
Preprocess($s,G_{i}^{\textrm{uncov}},G_{i}^{\textrm{cov}}$)
19: if $G_{i}^{\textrm{uncov}}=\emptyset$ then
20: break
21: return
$G^{\textrm{uncov}}_{s_{\textrm{start}}},G^{\textrm{cov}}_{s_{\textrm{start}}}$
So far we explained the algorithm for one-time planning when the robot is at
$s_{\textrm{home}}$ ($t=0$); we now need to allow for efficient replanning from
any state $s$ between $t=0$ and $t_{\textrm{rc}}$. In order to do so, we
iterate through all the states on these root paths and add additional root
paths so that these states also cover their respective reachable goals. This
has to be done recursively since newly-added paths generate new states which
the robot may have to replan from. The complete process is detailed in Alg. 2.
The Preprocess procedure takes in a state $s_{\textrm{start}}$, the goal
region that it has to cover $G^{\textrm{uncov}}$ and region that it already
has covered $G^{\textrm{cov}}$. Initially Preprocess is called with arguments
($s_{\textrm{home}},G^{\textrm{full}},\emptyset$) and it runs recursively
until no state is left with uncovered reachable goals.
At a high level, the algorithm iterates through each root path $\Pi_{i}$ (loop
at line 12) and for each state $s\in\Pi_{i}$ (loop at line 14) the algorithm
calls itself recursively (line 18). The algorithm terminates when all states
cover their reachable goals. The pseudocode in blue constitutes an additional
optimization step which we call “latching” and is explained later in Sec.
5.4.3.
In order to minimise the required computation, the algorithm leverages two key
observations:
1. O1
If a goal is not reachable from a state $s\in\Pi$, it is not reachable from
any of the states after it on $\Pi$.
2. O2
If a goal is covered by a state $s\in\Pi$, it is also covered by all states
preceding it on $\Pi$.
O1 is an assumption that we make about the planner $\mathcal{P}$. We use O1 to
initialize the uncovered set for any state; instead of attempting to cover the
entire $G^{\textrm{full}}$ for each replanable state $s$, the algorithm only
attempts to cover the goals that could be reachable from $s$, thereby saving
computation. O2 is used by iterating backwards on each root path (loop at line
14) and for each state on the root path only considering the goals that are
left uncovered by the states that appear on the path after it.
Specifically, O2 is used to maintain a single set of uncovered goals
$G^{\textrm{uncov}}_{i}$ for all states that appear on $\Pi_{i}$, instead of
keeping an individual set for each state; the goals that each $s\in\Pi_{i}$
covers in an iteration of loop 14 are removed from $G^{\textrm{uncov}}_{i}$.
O1 is used to initialize $G^{\textrm{uncov}}_{i}$ (in line 13): it is
initialized not with the entire $G^{\textrm{full}}$ but with the set of goals
covered by $s_{\textrm{start}}$. $G_{i}$ is excluded since it is already
covered via $\Pi_{i}$. The iteration completes either when all goals in
$G^{\textrm{uncov}}_{i}$ are covered (line 19) or when the loop backtracks to
$s_{\textrm{start}}$. The process is illustrated in Fig. 6.
Thus, as the outcome of the preprocessing stage, a map $\mathcal{M}:S\times
G^{\textrm{full}}\rightarrow\\{\Pi_{1},\Pi_{2},\ldots\\}$ is constructed that
can be looked up to find which root path can be used as an experience to plan
to a goal $g$ from a state $s$ within $T_{\textrm{bound}}$.
(a)
(b)
(c)
(d)
Fig. 6: Preprocess loop for $\Pi_{1}$ without latching. (6(a)) Initially the
state $s$ covers $G_{1}$ via $\Pi_{1}$. (6(b)) New root paths are computed
from $s$ to cover remaining uncovered region. (6(c)) This process is repeated
by backtracking along the root path. (6(d)) Outcome of a preprocessing step
for one path: $G^{\textrm{full}}$ is covered either by using $\Pi_{1}$ as an
experience or by using newly-computed root paths.
(a)
(b)
(c)
(d)
Fig. 7: Preprocess loop for $\Pi_{1}$ with latching. (7(a)) The algorithm
starts by trying to latch on to every other root path; for successful latches,
the corresponding goals are removed from uncovered region. (7(b)) New root
paths are computed from $s$ to cover remaining uncovered region. (7(c)) This
process is repeated by backtracking along the root path. (7(d)) Outcome of a
preprocessing step: $G^{\textrm{full}}$ is covered either by using $\Pi_{1}$
as an experience, latching on to $\Pi_{2},\Pi_{3}$ or $\Pi_{4}$ (at different
time steps) or by using newly-computed root paths.
#### 5.4.2 Query
Alg. 3 describes the query phase of our algorithm. Again, the lines in blue
correspond to the blue pseudocode in Alg. 2 for the additional optimization
step which is explained in Sec. 5.4.3. Assume that the robot was at a state
$s_{\textrm{curr}}$ while executing a path $\pi_{\rm curr}$ when it receives a
pose update $g$ from the perception system. Alg. 3 will be called for the
first state $s_{\textrm{start}}$ that is $T_{\textrm{bound}}$ ahead of
$s_{\textrm{curr}}$ along $\pi_{\rm curr}$, allowing the algorithm to return a
plan before the robot reaches $s_{\textrm{start}}$.
Alg. 2 assures that either the first state on $\pi_{\rm curr}$ covers $g$ or
there exists one state on $\pi_{\rm curr}$ between $s_{\textrm{start}}$ and
the state at $t_{\textrm{rc}}$ that covers $g$. The algorithm first checks if
the first state on $\pi_{\rm curr}$ covers $g$ or not. If it covers $g$ then
the corresponding root path is used to find the new path $\pi_{\rm new}$.
Otherwise, it iterates over each $s\in\pi_{\rm curr}$ backwards (similar to
Alg. 2) between $s_{\textrm{start}}$ and the state at $t_{\textrm{rc}}$ and
finds the one that covers $g$ by querying $\mathcal{M}$. Once found, the
corresponding root path $\Pi_{\rm next}$ is used as an experience to plan the
path $\pi_{\rm next}$ from $s$ to $g$. Finally the paths $\pi_{\rm curr}$ and
$\pi_{\rm next}$ are merged together with $s$ being the transitioning state to
return the final path $\pi$.
Algorithm 3 Query
Inputs: $\mathcal{M},s_{\textrm{home}}$
1:procedure PlanPathByLatching($s,g,\pi_{\textrm{curr}}$)
2: if $\Pi_{\textrm{home}}\leftarrow\mathcal{M}(s_{\textrm{home}},g)$ exists
then $\triangleright$ lookup root path
3: if CanLatch($s,\Pi_{\textrm{home}}$) then
4:
$\pi_{\textrm{home}}\leftarrow$PlanPathWithExperience($s_{\textrm{home}},g,\Pi_{\textrm{home}}$)
5: $\pi\leftarrow$
MergePathsByLatching($\pi_{\textrm{curr}},\pi_{\textrm{home}},s$)
6: return $\pi$
7: return failure
8:procedure Query($g,\pi_{\textrm{curr}},s_{\textrm{start}}$)
9: $s\leftarrow\pi_{\textrm{curr}}[0]$ $\triangleright$ first state on
$\pi_{\textrm{curr}}$
10: if $\Pi_{\textrm{curr}}\leftarrow$ $\mathcal{M}(s,g)$ exists then
$\triangleright$ lookup root path
11:
$\pi_{\textrm{new}}\leftarrow$PlanPathWithExperience($s,g,\Pi_{\textrm{curr}}$)
12: return $\pi_{\textrm{new}}$
13: for each $s\in\pi_{\textrm{curr}}$ (from last to $s_{\textrm{start}}$) do
$\triangleright$ states up to $t_{\textrm{rc}}$
14: if $\Pi_{\textrm{next}}\leftarrow$ $\mathcal{M}(s,g)$ exists then
$\triangleright$ lookup root path
15:
$\pi_{\textrm{next}}\leftarrow$PlanPathWithExperience($s,g,\Pi_{\textrm{next}}$)
16: $\pi\leftarrow$ MergePaths($\pi_{\textrm{curr}},\pi_{\textrm{next}},s$)
17: return $\pi$
18: if $\pi\leftarrow$PlanPathByLatching($s,g,\pi_{\textrm{curr}}$) successful
then
19: return $\pi$
20: return failure $\triangleright$ goal is not reachable
#### 5.4.3 Latching: Reusing Root Paths
We introduce an additional step called “Latching” to minimise the number of
root paths computed in Alg. 2. With latching, the algorithm tries to reuse
previously-computed root paths as much as possible using special motion
primitives that allow transitions from one root path to another.
For the time-configuration motion planner, the primitive is computed from a
state $s\in\Pi_{i}$ to $s^{\prime}\in\Pi_{j}$ such that
$t(s^{\prime})=t(s)+\delta_{t}$ by simple linear interpolation while ensuring
the feasibility of the motion. Specifically, given the nominal joint
velocities of the robot, if $s^{\prime}$ can be reached from $s$ in time
$\delta_{t}$, while respecting the kinematic and collision constraints, then
the transition is allowed.
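The time-configuration latching check can be sketched as a per-joint feasibility test: the transition is allowed if each joint can cover the required displacement within $\delta_{t}$ at its nominal velocity. Collision checking is omitted, and the units and limit values are illustrative assumptions.

```python
# Sketch of the time-configuration latching check: a transition from a
# state s on Pi_i to s' on Pi_j with t(s') = t(s) + delta_t is allowed
# if every joint can reach its target angle within delta_t at its
# nominal velocity (collision checking omitted here).

def can_latch(q_from, q_to, delta_t, nominal_vel):
    """q_from, q_to: joint angles (deg); nominal_vel: deg/s per joint."""
    return all(abs(b - a) <= v * delta_t
               for a, b, v in zip(q_from, q_to, nominal_vel))

q1 = (0.0, 10.0, 20.0)
q2 = (2.0, 12.0, 25.0)
print(can_latch(q1, q2, delta_t=1.0, nominal_vel=(5.0, 5.0, 5.0)))  # True
```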
Clearly, this approach does not ensure that the torque limits are respected.
Therefore, for the kinodynamic planner, we interpolate by fitting a cubic
polynomial from the state $s\in\Pi_{i}$ to $s^{\prime}\in\Pi_{j}$ that
satisfies the boundary conditions. We then ensure motion validity by checking
the joint velocity and torque limits of each state along the interpolated
trajectory.
In Alg. 2, before calling the Preprocess procedure for a state, the algorithm
removes the set of goals that can be covered via latching, thereby reducing
the number of goals that need to be covered by the Preprocess procedure.
Correspondingly, in Alg. 3, an additional procedure is called to check if the
path can be found via latching. These additions in the two pseudocodes are
shown in blue. An iteration of the complete algorithm with latching is
illustrated in Fig 7.
### 5.5 Theoretical Analysis
#### 5.5.1 CTMP Properties
###### Lemma 1.
For any state $s_{\textrm{start}}$ of the robot during execution, provided
$t(s_{\textrm{start}})\leq t_{\textrm{rc}}$, the algorithm is CTMP.
###### Proof.
We prove this by showing that the query stage (Alg. 3) returns within
$T_{\textrm{bound}}$ time and has a constant-time complexity. The number of
times the algorithm queries $\mathcal{M}$, which is an $O(1)$ operation
assuming perfect hashing, is bounded by $O(l)$ where
$l=t_{\textrm{rc}}/\delta_{t}$ is the maximum number of time steps from $t=0$
to $t_{\textrm{rc}}$. The number of times the algorithm attempts to latch on
to a root path (namely, a call to CanLatch, which is a constant-time operation)
is also bounded by $l$. As $l$ is constant for a fixed time cutoff
$t_{\textrm{rc}}$, the execution time of the aforementioned operations
constitutes $T_{\textrm{const}}$ (constant time). Finally, Alg. 3 calls the
Plan method only once. Since the execution time of Plan is bounded by
$T_{\textrm{bound}}-T_{\textrm{const}}$, the overall execution time of
Alg. 3 is $T_{\textrm{bound}}$. Hence it is a CTMP algorithm. ∎
###### Lemma 2.
For any state $s_{\textrm{start}}$ of the robot during execution, provided
$t(s_{\textrm{start}})\leq t_{\textrm{rc}}$, the algorithm is CTMP-complete.
###### Proof.
In order to prove this we need to show that for any $s_{\textrm{start}}$ that
the system can be at, (1) if a goal $g$ is reachable from $s_{\textrm{start}}$,
it is _covered_ by $s_{\textrm{start}}$ in Alg. 2, and (2) if $g$ is covered by
$s_{\textrm{start}}$, Alg. 3 is guaranteed to return a path within
$T_{\textrm{bound}}$ time.
Alg. 2 starts by computing a set of root paths from $s_{\textrm{home}}$ that
ensures that it covers all of its reachable goals. It then iterates over all
states on these paths and adds additional root paths ensuring that these
states also cover their reachable goals. It does so recursively until no state
$s_{\textrm{start}}$ before $t_{\textrm{rc}}$ is left with any uncovered
reachable $g$.
Alg. 2 covers $g$ via at least one state between $s_{\textrm{start}}$ and the
state at $t_{\textrm{rc}}$ (inclusive) (loop at line 14). In the query phase,
Alg. 3 iterates through all states between $s_{\textrm{start}}$ and the state
at $t_{\textrm{rc}}$ (inclusive) to identify the one that covers $g$ (loop at
line 13). Since $g$ is covered by at least one of these states by Alg. 2,
Alg. 3 is guaranteed to find a path from $s_{\textrm{start}}$ to $g$.
Moreover, from Lemma 1 we have that the path is returned within
$T_{\textrm{bound}}$ time. Hence the algorithm is CTMP-complete.
∎
#### 5.5.2 Space Complexity
###### Lemma 3.
Let $n_{\Pi}$ be the maximum number of root paths needed to cover a goal
region for a given state $s_{\textrm{start}}$ and $\ell^{\prime}$ be the
number of discretized time steps at which the algorithm computes root paths,
then the algorithm requires $O(n_{\Pi}^{\ell^{\prime}})$ space.
###### Proof.
First, the algorithm stores $n_{\Pi}$ paths (worst case) from
$s_{\textrm{home}}$ to $G^{\textrm{full}}$, consuming $O(n_{\Pi})$ space. It
then computes root paths starting with the states at $t_{\textrm{rc}}$ and
iterating backwards to previous time steps (see Alg. 2 line 14). The loop
terminates when no more uncovered goals are left. If $\ell^{\prime}-1$ is the
maximum number of iterations of the loop at line 14 that Alg. 2 undergoes,
then it requires $O(n_{\Pi}^{\ell^{\prime}})$ space. ∎
Note that Alg. 2 reduces the space complexity of the naive approach (Sec. 5)
primarily by (1) compressing the number of paths required to cover a goal
region from $n_{\rm goal}$ to $n_{\Pi}$ by planning with experience reuse and
(2) compressing the number of time steps for which the algorithm requires
preprocessing from $\ell$ to $\ell^{\prime}$ by leveraging O2.
Additionally, the latching feature allows the algorithm to further reduce the
required number of paths by a significant amount.
Pertaining to the structure of our problem, the algorithm works under the
assumption that $n_{\Pi}\ll n_{\text{goal}}$ and $\ell^{\prime}\ll\ell$. To
demonstrate the magnitude of compression, in preprocessing for the
time-configuration planner (see Table 2(a)), the algorithm covers 7197 goals with
only nine root paths for $s_{\textrm{home}}$ and only computes root paths for
a single time step $t=0$ (i.e., $\ell^{\prime}=1$) out of a total of eight
time steps up to $t_{\textrm{rc}}$.
## 6 Evaluation
We evaluated our algorithm in simulation and on a real robot. The conveyor
speed that we used for all of our results was 0.2$m/s$. We used Willow
Garage’s PR2 robot in our experiments using its 7-DOF arm. For the real robot
experiments we tested with the time-configuration planner whereas for the
simulated results we tested both for the time-configuration and kinodynamic
planners. The additional time dimension made the planning problem
eight-dimensional. The experiments can be viewed at https://youtu.be/iLVPBWxa5b8.
| | CTMP $T_{b}$=0.2 | wA* $T_{b}$=0.5 | wA* $T_{b}$=1.0 | wA* $T_{b}$=2.0 | E-Graph $T_{b}$=0.5 | E-Graph $T_{b}$=1.0 | E-Graph $T_{b}$=2.0 | RRT $T_{b}$=0.5 | RRT $T_{b}$=1.0 | RRT $T_{b}$=2.0 |
|---|---|---|---|---|---|---|---|---|---|---|
| Pickup success [%] | 100 | 0.0 | 0.0 | 18.0 | 0.0 | 0.0 | 80.0 | 0.0 | 0.0 | 18.0 |
| Planning success [%] | 100 | 4.0 | 17.0 | 19.0 | 31.0 | 80.0 | 90.0 | 12.0 | 9.0 | 13.0 |
| Planning time [s] | 0.085 | 0.433 | 0.628 | 0.824 | 0.283 | 0.419 | 0.311 | 0.279 | 0.252 | 0.197 |
| Planning cycles | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Path cost [s] | 9.49 | 8.19 | 8.28 | 7.60 | 8.54 | 8.22 | 7.90 | 9.68 | 8.96 | 8.04 |
(a) Simulation results—Time-configuration Planner.
| | CTMP $T_{b}$=0.25 | wA* $T_{b}$=0.5 | wA* $T_{b}$=1.0 | wA* $T_{b}$=2.0 | E-Graph $T_{b}$=0.5 | E-Graph $T_{b}$=1.0 | E-Graph $T_{b}$=2.0 |
|---|---|---|---|---|---|---|---|
| Pickup success [%] | 100 | 0.0 | 0.0 | 18.0 | 0.0 | 0.0 | 24.0 |
| Planning success [%] | 100 | 14.0 | 27.0 | 44.0 | 35.0 | 22.0 | 38.0 |
| Planning time [s] | 0.122 | 0.325 | 0.732 | 1.45 | 0.1593 | 0.405 | 0.413 |
| Planning cycles | 3 | 2 | 2 | 2 | 2 | 2 | 2 |
| Path cost [s] | 9.82 | 8.74 | 9.15 | 9.39 | 8.63 | 8.72 | 8.15 |
(b) Simulation results—Kinodynamic Planner.
Table 1: Simulation results averaged over 50 experiments. Here $T_{b}$ denotes
the (possibly arbitrary) time bound that the algorithm uses. Note that for our
method $T_{b}=T_{\textrm{bound}}$ is the time bound within which the algorithm
is ensured to compute a plan. The CTMP algorithm uses a fixed $T_{b}$ (0.2$s$).
For the baselines, since their planning times are unbounded, we test them with
$T_{b}=$ 0.5, 1.0 and 2.0$s$. Our CTMP approach shows a task and planning
success rate of 100$\%$ for both planners. Among the baselines, E-Graph
shows the best performance; however, its performance drops significantly for
the kinodynamic planning problem.
### 6.1 Experimental setup
#### 6.1.1 Sense-plan-act cycle
As object $o$ moves along the conveyor belt, we use the Brute Force ICP pose
estimation baseline proposed in Narayanan and Likhachev (2016) to obtain its
3-DoF pose for each captured input point cloud. We follow the classical
sense-plan-act cycle as depicted in Fig. 8. Specifically, the perception system
captures an image (point cloud) of the object $o$ at time $t_{\textrm{img}}$
followed by a period of duration $T_{\textrm{perception}}$ in which the
perception system estimates the pose of $o$. At time
$t_{\textrm{msg}}=t_{\textrm{img}}+T_{\textrm{perception}}$, planning starts
for a period of $T_{\textrm{planning}}$ which is guaranteed to be less than
$T_{\textrm{bound}}$. Thus, at $t_{\textrm{plan}}=t_{\textrm{msg}}+T_{\rm
planning}$ the planner waits for an additional duration of $T_{\rm
wait}=T_{\textrm{bound}}-T_{\rm planning}$. Finally, at
$t_{\textrm{exec}}=t_{\textrm{plan}}+T_{\rm wait}$, the robot starts executing
the plan. Note that the goal $g$ that the planner plans for is not for the
object pose at $t_{\textrm{img}}$ but its forward projection in time to
$t_{\textrm{exec}}$ to account for $T_{\textrm{perception}}$ and
$T_{\textrm{bound}}$. While executing the plan, if we obtain an updated pose
estimate, the execution is preempted and the cycle repeats.
Fig. 8: Timeline of the sense-plan-act cycle.
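The timeline above reduces to a few additions; a small sketch (names are ours) makes explicit that the execution start time depends only on $t_{\textrm{img}}$, $T_{\textrm{perception}}$ and $T_{\textrm{bound}}$, never on the actual planning time:

```python
def sense_plan_act_timeline(t_img, T_perception, T_planning, T_bound):
    """Compute the key instants of one sense-plan-act cycle.
    T_planning must not exceed T_bound (guaranteed by the CTMP planner)."""
    assert T_planning <= T_bound
    t_msg = t_img + T_perception     # pose estimate becomes available
    t_plan = t_msg + T_planning      # plan is ready
    T_wait = T_bound - T_planning    # idle until the fixed deadline
    t_exec = t_plan + T_wait         # execution always starts here
    return t_msg, t_plan, t_exec
```

Because $t_{\textrm{exec}} = t_{\textrm{img}} + T_{\textrm{perception}} + T_{\textrm{bound}}$ regardless of $T_{\textrm{planning}}$, the goal can be forward-projected to $t_{\textrm{exec}}$ before planning even starts.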
#### 6.1.2 Goal region specification
To define the set of all goal poses $G^{\textrm{full}}$, we need to detail our
system setup, depicted in Fig. 9. The conveyor belt moves along the $x$-axis
from left to right. We pick a fixed $x$-value, termed $x_{\textrm{exec}}$;
when the incoming object $o$ reaches $x_{\textrm{exec}}$ according to the
perception information, we start execution.
Recall that a pose of an object $o$ is a three dimensional point
$(x,y,\theta)$ corresponding to the $(x,y)$ location of $o$ and to its
orientation (yaw angle) along the conveyor belt. $G^{\textrm{full}}$ contains
a fine discretization of all possible $x,y$ and $\theta$ values in
$[x_{\textrm{exec}}-2\varepsilon,x_{\textrm{exec}}+2\varepsilon]$. We select
$G^{\textrm{full}}$ such that $\varepsilon=$2.5$cm$, making the goal region
10$cm$ long along the $x$-axis. Its dimension along the $y$-axis is 20$cm$,
equal to the width of the conveyor belt. The discretization is 1.0$cm$ in $x$
and $y$, and 10$\deg$ in $\theta$.
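Enumerating this goal region is a straightforward triple loop. The sketch below (names are ours; units in meters and degrees) uses cell centers, which yields $10\times 20\times 36=7200$ goals, consistent with the 7197 covered plus 3 unreachable goals reported in Table 2(a):

```python
import itertools

def goal_region(x_exec, eps=0.025, belt_width=0.20, dx=0.01, dtheta_deg=10):
    """Enumerate G^full: a grid over (x, y, theta) with x in
    [x_exec - 2*eps, x_exec + 2*eps], y spanning the belt width, and
    yaw discretized every dtheta_deg degrees."""
    nx = int(round(4 * eps / dx))        # 10 cells over the 10 cm span
    ny = int(round(belt_width / dx))     # 20 cells across the belt
    xs = [x_exec - 2 * eps + (i + 0.5) * dx for i in range(nx)]
    ys = [-belt_width / 2 + (j + 0.5) * dx for j in range(ny)]
    thetas = list(range(0, 360, dtheta_deg))   # 36 yaw bins
    return list(itertools.product(xs, ys, thetas))
```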
In the example depicted in Fig. 9, the thick and the thin solid rectangles
show the ground truth and estimated poses, respectively, at two time instances
in the lifetime of the object. The first plan is generated for the pose shown
at $x_{\textrm{exec}}$. During execution, the robot receives an improved
estimate and has to replan for it. At this point we back project this new
estimate in time using the known speed of the conveyor and the time duration
between the two estimates. This back-projected pose (shown as the dotted
rectangle) is then picked as the new goal for replanning. Recall that under
our assumption about the pose error in perception being $\varepsilon$, the
back projected pose will always lie inside $G^{\textrm{full}}$.
Fig. 9: A depiction of $G^{\textrm{full}}$-specification on a conveyor belt
(overhead view) and perception noise handling.
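The back-projection step is a one-line computation, assuming the object moves purely along the $x$-axis at the known conveyor speed (a sketch; names are ours):

```python
def back_project(pose, conveyor_speed, dt):
    """Back-project a pose estimate along the belt's x-axis by the time
    elapsed since the previous estimate, so it can serve as the new
    replanning goal. pose = (x, y, theta); motion is purely in +x."""
    x, y, theta = pose
    return (x - conveyor_speed * dt, y, theta)
```

Under the assumption that the perception error is bounded by $\varepsilon$, this back-projected pose stays inside $G^{\textrm{full}}$.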
### 6.2 Results
| State time | Num. of states | Unreachable goals | Covered goals | Covered via root paths | Covered via latching | Num. of root paths | Num. of states | Latching tries | Latching failures | Processing time |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 3 | 7197 | 7197 | 0 | 9 | 1641 | 0 | 0 | 2534 |
| 3.5 | 9 | 3 | 7197 | 0 | 7197 | 0 | 0 | 8 | 0 | 0.01 |
(a) Preprocessing statistics—Time-configuration Planner
| State time | Num. of states | Unreachable goals | Covered goals | Covered via root paths | Covered via latching | Num. of root paths | Num. of states | Latching tries | Latching failures | Processing time |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 16 | 7184 | 7184 | 0 | 18 | 3120 | 0 | 0 | 25237.7 |
| 3.0 | 16 | 16 | 7184 | 0 | 7184 | 0 | 0 | 13.2 | 0 | 1062 |
| 3.5 | 18 | 38.2 | 7161.8 | 444.6 | 6717.2 | 11.7 | 2203.4 | 18 | 2.4 | 0.02 |
(b) Preprocessing statistics—Kinodynamic Planner
Table 2: Preprocessing statistics— We report the preprocessing statistics for
the robot states from which the algorithm either computes a latching primitive
or computes new root paths for replanning. The statistics are indexed based on
the time step of the states. For each time step, we give the number of states
at that time step and the average values for each preprocessing metric over
all the states at that time stamp.
#### 6.2.1 Real-robot experiments
To show the necessity of real-time replanning in response to perception
updates, we performed three types of experiments: using our approach to replan
every time a new object pose estimate arrives, single-shot planning based on
the first object pose estimate, and single-shot planning using the late (more
accurate) pose estimate. For each set of experiments, we determined the pickup
success rate for grasping the moving object (sugar box) off the conveyor belt.
addition, we report on the perception system’s success rate by observing the
overlap between the point cloud of the object’s 3D model transformed by the
predicted pose (that was used for planning) and the filtered input point cloud
containing points belonging to the object. A high (low) overlap corresponds to
an accurate (inaccurate) pose estimate. We use the same strategy to determine
the range for which the perception system’s estimates are accurate and use it
to determine the time for the best-pose planning. Further, for each method, we
determine the pickup success rate given that the perception system’s estimate
was or was not accurate.
The experimental results are shown in Table 3. Our method achieves the highest
overall pickup success rate on the robot by a large margin, indicating the
importance of continuous replanning with multiple pose estimates. First-pose
planning has the lowest overall success rate due to the inaccuracy of pose
estimates when the object is far from the robot’s camera. Best-pose planning
performs better overall than the first pose strategy, since it uses accurate
pose estimates, received when the object is close to the robot. However it
often fails even when perception is accurate, since a large number of goals
are unreachable due to limited time remaining to grasp the object when it is
closer to the robot. A demonstration of our approach is given in Fig. 10.
| | Success rate | Accuracy of perception [%] | Success rate (Accurate perception) | Success rate (Inaccurate perception) |
|---|---|---|---|---|
| CTMP | 69.23 | 42.31 | 83.33 | 57.14 |
| First-pose Planning | 16.00 | 24.00 | 66.67 | 0.00 |
| Best-pose Planning | 34.61 | 34.62 | 55.56 | 23.53 |
Table 3: Real-robot experiments. Success rate for the three experiments (Our
method (CTMP), First-pose planning and Best-pose planning) averaged over 50
trials.
Fig. 10: Snapshots from a real robot experiment. (10(a)) The robot receives
the first pose estimate from the perception system, generates the first plan
within $T_{\textrm{bound}}$ and starts execution. (10(b)) The robot receives
the second pose estimate with a pose correction of distance 3cm and replans to
the new goal. (10(c)) The robot receives the third and last pose estimate with
no further correction and hence continues to follow the previous plan. (10(d))
and (10(e)) The robot executes the dynamic motion primitive to reach the grasp
pose and glide along with the object. (10(f)) The robot lifts up the sugar box
from the conveyor.
#### 6.2.2 Simulation experiments
We simulated the real world scenario to evaluate our method against other
baselines. We compared our method with wA* Pohl (1970), E-graph Phillips et
al. (2012) and RRT LaValle (1998). For wA* and E-graph we use the same graph
representation as our method. For E-graph we precompute five paths to
randomly-selected goals in $G^{\textrm{full}}$. We adapt the RRT algorithm to
account for the under-defined goals. To do so, we sample pre-grasp poses along
the conveyor and compute IK solutions for them to get a set of goal
configurations for goal biasing. When a newly-added node falls within a
threshold distance from the object, we use the same dynamic primitive that we
use in the search-based methods to add the final grasping maneuver. If the
primitive succeeds, we return success. We also allow wait actions at the pre-
grasp locations.
For any planner to be used in our system, we need to endow it with a (possibly
arbitrary) planning time bound to compute the future location of the object
from which the new execution will start. If the planner fails to generate the
plan within this time, the robot fails to react to that pose update and such
cases are recorded as planning failures. We label a run as a pickup success if
the planner successfully replans once after the object crosses the 1.0m mark.
The mark is the mean of the accurate perception range that was determined
experimentally and used in the robot experiments as described in Section
6.2.1. The key takeaway from our experiments (Table 1) is that having a known
time bound on the query time is vital to the success of the conveyor pickup
task.
Our method shows the highest pickup success rate, planning success rate
(success rate over all planning queries) and an order of magnitude lower
planning times compared to the other methods. We tested the other methods with
several different time bounds. After our approach, E-Graph performed the best.
RRT suffers from the fact that the goal is under-defined, and sampling-based
planners typically require a goal bias in the configuration space.
Another important highlight of the experiments is the number of planning
cycles over the lifetime of an object. While the other approaches could replan
at most twice, our method was able to replan thrice due to fast planning
times.
#### 6.2.3 Preprocessing
The statistics of the preprocessing stage (i.e. running Alg. 2) are shown in
Table 2. The offline time bound $T_{\mathcal{P}}$ used in all of our
experiments was 10$s$. In all experiments, we used $t_{\textrm{rc}}=$3.5$s$ and
$\delta_{t}=$0.5$s$. For the time-configuration planner the preprocessing took
2,534$s$. Only nine root paths are computed to cover 7,197 goals (three goals
being unreachable and hence uncovered). For the states at
$t=t_{\textrm{rc}}($3.5$s)$, there were no latching failures and therefore,
the algorithm terminates without preprocessing for earlier time stamps. For
the kinodynamic planner, the preprocessing takes about 7 hours. The dynamic
constraints cause latching failures and therefore the algorithm requires
more preprocessing effort. Due to latching failures at $t=3.5s$, it needs to
compute root paths for some of the states at this time step. Finally, it
covers all the uncovered goals for states at $t=3.0s$ via latching and finishes
preprocessing.
## 7 Conclusion and Discussion
To summarize, we developed a provably constant-time planning and replanning
algorithm that can be used to grasp fast moving objects off conveyor belts and
evaluated it in simulation and in the real world on the PR2 robot. Through
this work, we advocate the need for algorithms that guarantee (small)
constant-time planning for time-critical applications, such as the conveyor
pickup task, which are often encountered in warehouse and manufacturing
environments. To this end we introduce and formalize a new class of algorithms
called CTMP algorithms.
An interesting future research direction could be to leverage a roadmap-based
representation instead of storing individual paths in a way that the algorithm
remains CTMP-complete, namely it maintains the constant-time planning guarantee.
On the more practical side, a useful extension could be to parallelize the
preprocessing computation over multiple CPU cores. One obvious way of doing so
is within the Alg. 1. The required uncovered goal region $G^{\textrm{uncov}}$
can be divided over multiple threads.
Another useful extension to our CTMP approach is to be able to handle multiple
objects simultaneously coming on the conveyor belt. This setting is common for
a sorting task when multiple robot arms work at the same conveyor and they
have to pick up one object while avoiding collisions with the others.
This makes the problem more challenging since the planner has to consider more
than one dynamic object in the scene.
This work was supported by the ONR grant N00014-18-1-2775 and the ARL grant
W911NF-18-2-0218 as part of the A2I2 program. In addition, it was partially
supported by the Israeli Ministry of Science & Technology grant No. 102583 and
by Grant No. 1018193 from the United States-Israel Binational Science
Foundation (BSF) and by Grant No. 2019703 from the United States National
Science Foundation (NSF).
The authors would like to thank Andrew Dornbush for providing support for the
Search-Based Motion Planning Library (https://github.com/aurone/smpl) which is
used in our implementation. The authors would also like to thank Ellis Ratner
for fruitful discussions and Anirudh Vemula for helping out in restoring the
PR2 robot which is used in our experiments.
## References
* Allen et al. (1993) Allen PK, Timcenko A, Yoshimi B and Michelman P (1993) Automated tracking and grasping of a moving object with a robotic hand-eye system. _TRA_ 9(2): 152–165.
* Belghith et al. (2006) Belghith K, Kabanza F, Hartman L and Nkambou R (2006) Anytime dynamic path-planning with flexible probabilistic roadmaps. In: _Proceedings of the IEEE International Conference on Robotics and Automation, 2006. ICRA 2006._ IEEE, pp. 2372–2377.
* Berenson et al. (2012) Berenson D, Abbeel P and Goldberg K (2012) A robot path planning framework that learns from experience. In: _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_. pp. 3671–3678.
* Cefalo et al. (2013) Cefalo M, Oriolo G and Vendittelli M (2013) Task-constrained motion planning with moving obstacles. In: _2013 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, pp. 5758–5763.
* Cohen et al. (2010) Cohen BJ, Chitta S and Likhachev M (2010) Search-based planning for manipulation with motion primitives. In: _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_. pp. 2902–2908.
* Cohen et al. (2011) Cohen BJ, Subramania G, Chitta S and Likhachev M (2011) Planning for Manipulation with Adaptive Motion Primitives. In: _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_. pp. 5478–5485.
* Coleman et al. (2015) Coleman D, Şucan IA, Moll M, Okada K and Correll N (2015) Experience-based planning with sparse roadmap spanners. In: _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_. pp. 900–905.
* Cowley et al. (2013) Cowley A, Cohen B, Marshall W, Taylor CJ and Likhachev M (2013) Perception and motion planning for pick-and-place of dynamic objects. In: _Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. pp. 816–823.
* Czech et al. (1997) Czech ZJ, Havas G and Majewski BS (1997) Perfect hashing. _Theoretical Computer Science_ 182(1-2): 1–143.
* Dobson and Bekris (2014) Dobson A and Bekris KE (2014) Sparse roadmap spanners for asymptotically near-optimal motion planning. _The International Journal of Robotics Research (IJRR)_ 33(1): 18–47.
* Fraichard (1993) Fraichard T (1993) Dynamic trajectory planning with dynamic constraints: A’state-time space’approach. In: _IROS_ , volume 2. pp. 1393–1400.
* Han et al. (2019) Han SD, Feng SW and Yu J (2019) Toward Fast and Optimal Robotic Pick-and-Place on a Moving Conveyor. _IEEE Robotics and Automation Letters (RA-L)_ 5(2): 446–453.
* Hinterstoisser et al. (2012) Hinterstoisser S, Lepetit V, Ilic S, Holzer S, Bradski G, Konolige K and Navab N (2012) Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes. In: _Proceedings of the Asian Conference on Computer Vision_. pp. 548–562.
* Ishida and Korf (1991) Ishida T and Korf RE (1991) Moving Target Search. In: _Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)_ , volume 91. pp. 204–210.
* Ishida and Korf (1995) Ishida T and Korf RE (1995) Moving-target search: A real-time search for changing goals. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 17(6): 609–619.
* Islam et al. (2020) Islam F, Salzman O, Agarwal A and Likhachev M (2020) Provably Constant-time Planning and Replanning for Real-time Grasping Objects off a Conveyor Belt. In: _Proceedings of Robotics: Science and Systems (RSS)_.
* Islam et al. (2019) Islam F, Salzman O and Likhachev M (2019) Provable Indefinite-Horizon Real-Time Planning for Repetitive Tasks. In: _Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS)_. pp. 716–724.
* Kavraki et al. (1996) Kavraki LE, Svestka P, Latombe JC and Overmars MH (1996) Probabilistic roadmaps for path planning in high-dimensional configuration spaces. _IEEE Robotics and Automation Letters (RA-L)_ 12(4): 566–580.
* Koenig and Likhachev (2006) Koenig S and Likhachev M (2006) Real-time adaptive A*. In: _Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)_. pp. 281–288.
* Koenig et al. (2007) Koenig S, Likhachev M and Sun X (2007) Speeding up moving-target search. In: _Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)_. pp. 1–8.
* Koenig and Sun (2009) Koenig S and Sun X (2009) Comparing real-time and incremental heuristic search for real-time situated agents. _Autonomous Agents and Multi-Agent Systems_ 18(3): 313–341.
* Korf (1990) Korf RE (1990) Real-time heuristic search. _Artificial intelligence_ 42(2-3): 189–211.
* LaValle (1998) LaValle SM (1998) Rapidly-exploring random trees: A new tool for path planning .
* Likhachev and Ferguson (2009) Likhachev M and Ferguson D (2009) Planning Long Dynamically Feasible Maneuvers for Autonomous Vehicles. _The International Journal of Robotics Research (IJRR)_ 28(8): 933–945.
* Menon et al. (2014) Menon A, Cohen B and Likhachev M (2014) Motion planning for smooth pickup of moving objects. In: _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_. pp. 453–460.
* Narayanan and Likhachev (2016) Narayanan V and Likhachev M (2016) PERCH: Perception via search for multi-object recognition and localization. In: _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_. pp. 5052–5059.
* Phillips et al. (2012) Phillips M, Cohen BJ, Chitta S and Likhachev M (2012) E-Graphs: Bootstrapping Planning with Experience Graphs. In: _Proceedings of Robotics: Science and Systems (RSS)_.
* Pohl (1970) Pohl I (1970) Heuristic search viewed as path finding in a graph. _Artificial intelligence_ 1(3-4): 193–204.
* Salzman et al. (2014) Salzman O, Shaharabani D, Agarwal PK and Halperin D (2014) Sparsification of motion-planning roadmaps by edge contraction. _The International Journal of Robotics Research (IJRR)_ 33(14): 1711–1725.
* Stogl et al. (2017) Stogl D, Zumkeller D, Navarro SE, Heilig A and Hein B (2017) Tracking, reconstruction and grasping of unknown rotationally symmetrical objects from a conveyor belt. In: _IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)_. IEEE, pp. 1–8.
* Sun et al. (2010) Sun X, Yeoh W and Koenig S (2010) Moving target D* lite. In: _Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)_. pp. 67–74.
* Yang et al. (2018) Yang Y, Merkt W, Ivan V and Vijayakumar S (2018) Planning in time-configuration space for efficient pick-and-place in non-static environments with temporal constraints. In: _2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)_. IEEE, pp. 1–9.
* Yoshida and Kanehiro (2011) Yoshida E and Kanehiro F (2011) Reactive robot motion using path replanning and deformation. In: _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, pp. 5456–5462.
* Zhang et al. (2018) Zhang Y, Li L, Ripperger M, Nicho J, Veeraraghavan M and Fumagalli A (2018) Gilbreth: A conveyor-belt based pick-and-sort industrial robotics application. In: _International Conference on Robotic Computing (IRC)_. pp. 17–24.
# Revisiting the Auction Algorithm for Weighted Bipartite Perfect Matchings
Megha Khosla and Avishek Anand
(L3S Research Center, Hannover, Germany
<EMAIL_ADDRESS>
###### Abstract
We study the classical _weighted perfect matching problem_ for bipartite
graphs, sometimes referred to as the _assignment problem_: given a
weighted bipartite graph $G=(U\cup V,E)$ with weights
$w:E\rightarrow\mathcal{R}$, we are interested in finding the maximum matching
in $G$ with minimum/maximum weight. In this work we present a new and
arguably _simpler_ analysis of one of the earliest techniques developed for
solving the assignment problem, namely the _auction algorithm_. Using our
analysis technique we present tighter and improved bounds on the runtime
complexity of finding an approximate minimum weight perfect matching in
$k$-left-regular sparse bipartite graphs.
## 1 Introduction
Let $G=(U\cup V,E)$ be a bipartite graph on $n$ vertices and $m$ edges and let
$w:E\rightarrow\mathcal{R}$ be a weight function on edges. A perfect matching
of $G$ is a subset $M\subseteq E$ of the edges such that for every node $v\in
U\cup V$ there is exactly one incident edge $e\in M$. The weight of a matching
$M$ is given by the sum of the weights of its edges, i.e. $w(M):=\sum_{e\in
M}w(e)$. The minimum weight bipartite perfect matching problem (MWPM) is to
find, for a given bipartite graph $G$ and weight function $w$, a perfect
matching of minimum weight; this is sometimes also referred to as the
_assignment problem_. WLOG we can assume that $|U|\leq|V|=n$.
In this work we present a novel and simple analysis of the _auction_ algorithm
originally proposed by Bertsekas [7] for solving the assignment problem. The
auction algorithm resembles a competitive bidding process whereby unassigned
persons (nodes on the left set $U$) bid simultaneously for objects (nodes on
the right set $V$), thereby raising their prices. On obtaining all bids, the
object $v\in V$ is assigned to the highest bidder.
There is a cost $w(u,v)$ for matching person $u$ with object $v$ and we want
to assign persons to objects so as to minimize the overall cost (for a
minimization objective). Let $E$ be the set of all pairs $(u,v)$ that can be
matched. A typical iteration of the auction algorithm consists of a bidding
and an assignment phase. To start off, each object $v$ is initialized with
some initial price $L(v)$. The bidding and assignment phases are as follows:
Bidding Phase: Let $I$ be the set of unassigned persons. Each person $u\in I$
finds an object $\bar{v}_{u}$ which minimizes the cost $w(u,v)+L(v)$,
that is,
$\bar{v}_{u}\,=\,\arg\min_{v:(u,v)\in E}\,w(u,v)+L(v)$
and computes a _bidding increment_ $\gamma_{u}$ for some parameter
$\varepsilon>0$
$\gamma_{u}=w^{2}_{u}-w^{1}_{u}+\varepsilon,$
where
$w^{1}_{u}=\min_{v:(u,v)\,\in\,E}w(u,v)+L(v)$
is the best object value and
$w^{2}_{u}=\min_{v:(u,v)\,\in\,E,\,v\neq\bar{v}_{u}}w(u,v)+L(v)$
is the second best object value.
Assignment Phase: Note that an object $v$ can be the best object for multiple
persons $u\in I$. In such a case, $v$ is assigned to the person with the
_highest bid_ and its price is raised by that bid, i.e.,
$L(v)=L(v)+\max_{u\in I:v=\bar{v}_{u}}\gamma_{u}.$
The person that was assigned to $v$ at the beginning of the iteration (if any)
becomes unassigned. The algorithm continues with a sequence of iterations
until all persons have an assigned object.
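Under the assumptions of a complete bipartite graph and zero initial prices, the Gauss-Seidel variant of these two phases (a single bidder per iteration, as analyzed later in the paper) can be sketched as:

```python
def auction_min_matching(n, w, eps):
    """Gauss-Seidel auction for an approximately minimum-weight perfect
    matching on a complete bipartite graph (a sketch of the algorithm
    described above, not the paper's exact pseudocode). w[u][v] is the
    cost of matching person u with object v; prices start at 0. The
    returned matching has weight at most OPT + n*eps."""
    L = [0.0] * n                 # object prices
    owner = [None] * n            # owner[v]: person currently holding v
    unassigned = list(range(n))   # persons without an object
    while unassigned:
        u = unassigned.pop()
        # Bidding phase: best and second-best objects for person u.
        best = sorted(range(n), key=lambda v: w[u][v] + L[v])
        v1 = best[0]
        v2 = best[1] if n > 1 else best[0]
        gamma = (w[u][v2] + L[v2]) - (w[u][v1] + L[v1]) + eps
        # Assignment phase: u takes v1 and raises its price; the
        # previous owner of v1 (if any) becomes unassigned again.
        L[v1] += gamma
        if owner[v1] is not None:
            unassigned.append(owner[v1])
        owner[v1] = u
    return owner
```

On a sparse graph, the `sorted` scan over all $n$ objects would be restricted to the neighbors of $u$.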
The auction algorithm finds an approximate solution for the maximum/minimum
weight perfect matching which is bounded by $OPT+n\varepsilon$, which we also
refer to as the _$\varepsilon$-optimal_ solution. Let $w^{max}$ and $w^{min}$
denote the maximum and minimum edge weights respectively. The worst case
running time for complete bipartite graphs is
$O({n^{2}w^{max}\over\varepsilon})$. In order
to improve the running time a procedure called _$\varepsilon$ -scaling_ is
employed. It essentially consists of executing the algorithm several times
starting with a large value of $\varepsilon$ and successively reducing
$\varepsilon$ after each run. The procedure terminates typically when the
ultimate value of $\varepsilon$ is less than some critical value (for example,
$1/n$, for integral weights). For integral weights, it has be shown that the
worst-case running time of the auction algorithm for finding optimal solutions
using $\varepsilon$-scaling is $O(nm\log(nw^{max}))$ [5, 6]. However for the
asymmetric problem where number of persons is less than the number of objects,
the prices would need to be initialized by $0$ and $\varepsilon$-scaling
cannot be used out of the box.
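The $\varepsilon$-scaling schedule itself is a simple outer loop. The sketch below (names and the reduction factor are ours) takes the inner auction routine as a callable; a full implementation would additionally carry the prices over between rounds to warm-start each run:

```python
def eps_scaling(run_auction, w_max, eps_final, theta=4.0):
    """Generic epsilon-scaling driver. run_auction(eps) is any routine
    returning an eps-optimal assignment (e.g. the auction algorithm).
    Starting from eps = w_max, eps shrinks by a factor theta each round
    until it falls below the critical value eps_final (e.g. 1/n for
    integral weights)."""
    eps = float(w_max)
    result = run_auction(eps)
    while eps >= eps_final:
        eps /= theta
        result = run_auction(eps)
    return result
```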
In order to benefit from $\varepsilon$-scaling, a _reverse_ auction method is
used (in addition to the regular auction) in which the objects also compete
for persons by offering discounts [4].
Subsequently the auction method was extended to solve the classical linear
network flow problem and many of its special classes. In particular, [10] and
[2] propose an extension to the minimum cost network flow problem using
$\varepsilon$-relaxation. Auction algorithms for transportation problems [3]
and shortest paths [8] have also been proposed. We refer the interested reader
to [9] for a more comprehensive discussion on auction algorithms.
In this paper we focus on the sequential version of the auction algorithm, in
which a single unassigned person bids at every iteration. This version is also
known
as the _Gauss-Seidel_ version because of its similarity with Gauss-Seidel
methods for solving systems of nonlinear equations. Moreover, we restrict
ourselves to its application to finding minimum weight perfect matchings in
bipartite graphs. Our analysis technique is inspired by the LSA method [23]
which is used to construct large hash tables and to find maximum matchings in
unweighted bipartite graphs. In fact, the original motivation was to extend
the label based technique in LSA for solving the weighted version of the
matching problem in bipartite graphs. Though our proposed algorithm later
turned out to be a version of the auction algorithm, our analysis technique
allows for a simpler interpretation of prices (labels) as a function of
_shortest unweighted paths_ in the underlying bipartite graphs. This in turn
helps us to bound the runtime of the weighted version in terms of the shortest
unweighted paths in the underlying graphs (which also provides a bound on the
runtime of LSA) with weight range ($w^{max}-w^{min}$) in the multiplicative
factor.
In particular, we use the main result in [23] to show that the worst case
runtime of the auction algorithm for sparse $k$-left regular bipartite graphs is no
more than $O(n\cdot{w^{max}-w^{min}\over\varepsilon})$ with high probability.
For complete bipartite graphs, we prove a slightly better runtime bound of
$O(n^{2}\cdot{w^{max}-w^{min}\over\varepsilon})$, which is an improvement over
the previous bound when both $w^{max}$ and $w^{min}$ are large.
### 1.1 More on Related Work
The first polynomial time algorithm for the assignment problem, the so-called
Hungarian method, was given by [25, 26]; implementations of this algorithm had
a running time of $O(n^{3})$, which is still optimal for dense graphs. Edmonds
and Karp [14] and Tomizawa [33] independently observed that the assignment
problem is reducible to computing single-source shortest paths on non-negative
weights. Further, Fredman and Tarjan [16] showed that using Fibonacci heaps,
$n$ executions of Dijkstra’s [11] shortest path algorithm take $O(mn+n^{2}\log
n)$ time. On integer weighted graphs this algorithm can be implemented
slightly faster, in $O(mn+n^{2}\log\log n)$ time [20, 32] or $O(mn)$ time
(randomized) [1, 31], independent of the maximum edge weight. Gabow and Tarjan
[18] gave a scaling algorithm for the assignment problem running in
$O(m\sqrt{n}\log(nw^{max}))$ time for integral weights that is based on the
Hungarian method. Orlin and Ahuja [28], using the auction approach, and
Goldberg and Kennedy [19] obtained the same time bound. Recently the new
scaling algorithm by Duan et al. [12] matches the same bound for weighted
matchings in general graphs. Sankowski takes an algebraic approach by
proposing a randomized algorithm [30] that solves the assignment problem
using fast matrix multiplications in $O(w^{max}n^{\omega})$ time with high
probability, where $\omega$ is the exponent of square matrix multiplication.
Improved runtime bounds have also been achieved by reducing maximum weighted
matching to maximum cardinality matching problems (see [22, 21, 29] and
references therein).
Another area of related work concerns algorithms for maximum
weighted matchings (MWM). It is important to note that though the two problems
MWM and MWPM are reducible to each other, the reductions do not work for their
approximate versions as the approximation may compromise perfection. Duan and
Pettie [13] presented a $(1-\varepsilon)$-MWM algorithm that runs in
$O(m\varepsilon^{-1}\log(\varepsilon^{-1}))$ on general graphs and
$O(m\varepsilon^{-1}\cdot\min\\{\log\varepsilon^{-1},\log(w^{max})\\})$ time
on integer-weighted general graphs. These results cannot be compared with the
bound that we provide for random sparse bipartite graphs as we require that
the matching returned should also be perfect. Please refer to [13] for more
details on other works for finding approximate maximum weight matching.
### 1.2 Our Contribution
We present a novel analysis of the auction algorithm for finding approximate
minimum weight perfect matchings in bipartite graphs for arbitrary weights.
Our approach is inspired by the _local search allocation_ (LSA) method [23]
used to construct large hash tables and finding maximum matchings in
unweighted bipartite graphs. Using our analysis we can easily provide a
better runtime bound for random sparse $k$-left regular bipartite graphs.
In a random $k$-left regular bipartite graph, each vertex in the left set
chooses $k$ neighbors from the right set independently and uniformly at
random. From [27, 15, 17] we know that for such graphs, for all $k\geq 3$,
there exists a threshold density $c^{*}_{k}$ (which is computable; for example,
for $k=3$ it is close to $0.91$) such that when $|U|/|V|<c^{*}_{k}$, there exists a
left perfect matching in $G$ with high probability, otherwise this is not the
case. We will show that for random $k$-left regular bipartite graphs obeying
this threshold condition, one can find an $\varepsilon$-optimal minimum
weight perfect matching in near linear time. In case a perfect matching does
not exist, we will show that with appropriate stopping criteria, the algorithm
will stop and return an approximate solution for minimum weight maximum
matching (though we do not analyse the approximation guarantee in detail). We
note that there exists no restriction on the weights and they can be
arbitrary. For complete bipartite graphs the runtime bound is
$O(n^{2}\cdot{w^{max}-w^{min}\over\varepsilon})$. We now state the main
result.
###### Theorem 1.
Let $G=(U\cup V,E)$ be a weighted bipartite graph with $|U|\leq|V|=n$ and
arbitrary weights. Let $OPT_{G}$ be the weight of the optimal minimum weight
maximum matching in $G$. Then for an arbitrary parameter $\varepsilon>0$ the
auction algorithm returns a maximum matching of weight at most
$OPT_{G}+n\varepsilon$. The worst case runtime for sparse and complete
bipartite graphs is as follows.
1. 1.
Sparse Graphs: For $k\geq 3$, let $G=(U\cup V,E)$ be such that each vertex in
$U$ chooses $k$ neighbors in $V$ independently and uniformly at random. In
addition let $|U|<c^{*}_{k}n$. The worst case runtime in case a perfect
matching exists is $O(n\cdot{w^{max}-w^{min}\over\varepsilon})$ with
probability $1-o(1)$.
2. 2.
Complete Graphs: For complete bipartite graphs the runtime bound is
$O(n^{2}\cdot{w^{max}-w^{min}\over\varepsilon})$.
The main idea behind the proof for the runtime bounds is to show that the
prices (which we will refer to as labels in the subsequent section) when
initialized with zero are increasing by at least $\varepsilon$ and are bounded
in terms of shortest distances in the underlying graph and the weight range.
We will then use the result from [23] which provides a linear bound in
expectation and with high probability for the sum of the shortest distances in
the special class of sparse bipartite graphs as considered in Theorem 1. For
the approximation guarantee, we will show that no alternating path allows for
a decrease in the existing matching weight by more than $n\varepsilon$. Note
that we do not provide any improved bound on approximation quality of the
solution in this paper. We start by giving a detailed description of the
algorithm in the next section.
## 2 The Auction Algorithm for Perfect Weighted Matchings
### 2.1 Notations.
Throughout the paper we use the following notations unless stated otherwise.
We denote the set of integers $\\{1,2,\ldots,n\\}$ by $[n]$. Let $G=(U\cup
V;E)$ denote a weighted bipartite graph where $|U|\leq|V|=n$. For any $e\in
E$, $w(e)$ denotes the weight on edge $e$. In addition let $w^{max}$ and
$w^{min}$ denote the maximum and minimum weight on any edge respectively. For
any vertex $u\in U$ we refer to set of neighbors of $u$ in $V$ as $N(u)$. For
any $v\in V$, $L(v)$ denotes the label of $v$. Let $M$ be the set of matched
edges in the optimal matching. For $u\in U$ and $v\in V$, we say that an edge
$e=(u,v)$ is assigned to $v$ if $e\in M$. We call a vertex $v\in V$ _free_ if
and only if no edge is assigned to it. The optimal minimum weight for a
perfect matching in $G$ is then equal to $\sum_{e\in M}w(e)$ and is denoted by
$OPT_{G}$.
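As a concrete, hypothetical instance of this notation (not taken from the paper), the following Python sketch encodes a small weighted bipartite graph together with the derived quantities $N(u)$, $w^{max}$, $w^{min}$, a left-perfect matching $M$, and the free vertices:

```python
# Hypothetical instance encoding the notation of this section:
# G = (U ∪ V, E) with |U| <= |V| = n and a weight w(e) per edge.
U = [0, 1]
V = ['a', 'b', 'c']
w = {(0, 'a'): 4.0, (0, 'b'): 2.5, (1, 'b'): 1.0, (1, 'c'): 3.0}

N = {u: [v for v in V if (u, v) in w] for u in U}      # neighbourhoods N(u)
wmax, wmin = max(w.values()), min(w.values())          # weight range

M = {(0, 'b'), (1, 'c')}                               # a left-perfect matching
weight_M = sum(w[e] for e in M)                        # its total weight
free = [v for v in V if all((u, v) not in M for u in U)]  # free vertices
# N[0] == ['a', 'b'], (wmax, wmin) == (4.0, 1.0), free == ['a'], weight_M == 5.5
```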
### 2.2 The Algorithm and its Analysis
We first explain the auction algorithm in detail with respect to our
interpretation. Initially we are provided with all vertices from the right set
$V$. We assign labels to all vertices in $V$ and initialize them to $0$. The
vertices of the left set $U$ appear one by one together with the incident
edges and their weights. An edge $e=(u,v)$ is assigned to vertex $v\in V$ such
that the sum $L(v)+w(e)$ is minimum for all edges incident on $u$. We refer to
the rule for choosing a candidate edge as the _choice rule_. The original
algorithm aimed only at finding perfect matchings. In case a perfect matching
does not exist, the algorithm might enter an endless loop. We will argue in
the next section that the label values are bounded and therefore resolve this
issue by comparing the minimum label against the maximum possible value of
${n\over 2}(w^{max}-w^{min}+\varepsilon)$. In case for some vertex $u$ all its
neighbors have labels greater than the maximum value, $u$ is discarded and
never matched.
Suppose that the sum $L(v^{\prime})+w(e^{\prime})$, with
$e^{\prime}=(u,v^{\prime})$, is minimal over all vertices $v^{\prime}\in
N(u)\backslash\\{v\\}$. For some $\varepsilon>0$, the label of $v$ after its
new assignment is updated as follows.
$L(v)=L(v^{\prime})+w(e^{\prime})-w(e)+\varepsilon.$
We call the above rule the _update rule_.
In case $v$ is not free and was already assigned another edge
$(u^{\prime},v)$, the edge $(u^{\prime},v)$ is moved out of the matching and
the process is repeated for vertex $u^{\prime}$.
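As a toy illustration of the two rules (a sketch with made-up numbers, not the paper's code): a person $u$ with two neighbours, both labelled $0$, bids on the cheaper object and raises its label to the second-best offer minus its own weight plus $\varepsilon$.

```python
def bid(labels, weights, eps):
    """One bid of person u: the choice rule picks the object v minimising
    L(v) + w(u, v); the update rule raises L(v) to the second-best offer
    minus w(u, v) plus eps. Assumes u has at least two neighbours;
    labels and weights are dicts keyed by object."""
    v = min(labels, key=lambda x: labels[x] + weights[x])      # choice rule
    second = min(labels[x] + weights[x] for x in labels if x != v)
    return v, second - weights[v] + eps                        # update rule

# u sees v1 (label 0, weight 5) and v2 (label 0, weight 8):
v, new_label = bid({'v1': 0, 'v2': 0}, {'v1': 5, 'v2': 8}, eps=1)
# v == 'v1' and new_label == (0 + 8) - 5 + 1 == 4
```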
Let $\mathbf{L}=\\{L(v_{1}),\ldots,L(v_{n})\\}$ and
$\mathbf{T}=\\{T(v_{1}),\ldots,T(v_{n})\\}$ where $L(v_{i})$ denotes the label
of vertex $v_{i}$ and $T(v_{i})$ denotes the vertex matched to $v_{i}$. We
initialize $\mathbf{L}$ with all $0$s, i.e., all vertices are free. The
auction algorithm is described in Algorithm 1, which calls Procedure 2 to
match an arbitrary vertex from the left set when it appears.
Algorithm 1 Auction algorithm ($U,V,E$)
1: for all $v\in V$ do
2: Set $L(v)=0$
3: Set $T(v)=\emptyset$
4: for all $u\in U$ do
5: CALL MatchVertex ($u,\mathbf{L},\mathbf{T}$)
Procedure 2 MatchVertex ($u,\mathbf{L},\mathbf{T}$)
1: Choose $v\in N(u)$ such that $L(v)+w(u,v)$ is the minimum $\rhd$Choice Rule
2: if $L(v)>{n\over 2}(w^{max}-w^{min}+\varepsilon)$ then
3: RETURN
4: $L(v)\leftarrow\min{(L(v^{\prime})+w(u,v^{\prime})|v^{\prime}\in
N(u)\setminus\\{v\\})}-w(u,v)+\varepsilon$ $\rhd$Update Rule
5: if $(T(v)\neq\emptyset)$ then
6: $y\leftarrow T(v)$ $\rhd$Move that moves an edge out of matching
7: $T(v)\leftarrow u$ $\rhd$Move that assigns a new edge or matches a new
vertex
8: $\mathbf{CALL}$ MatchVertex($y,\mathbf{L},\mathbf{T}$)
9: else
10: $T(v)\leftarrow u$ $\rhd$Move that assigns an edge or matches a vertex
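The pseudocode above can be sketched in Python as follows. This is an illustrative reimplementation, not the authors' code: the tail recursion of Procedure 2 is unrolled into a loop, and the vertex names are assumed hashable and comparable.

```python
import math

def auction_matching(U, V, adj, eps):
    """Sketch of Algorithm 1 / Procedure 2.

    adj[u] is a dict {v: w(u, v)} of u's neighbours in V; eps > 0.
    Returns (T, L): T[v] is the left vertex matched to v (None if v is free)
    and L[v] is the final label of v.
    """
    n = len(V)
    L = {v: 0.0 for v in V}                   # labels, initialised to 0
    T = {v: None for v in V}                  # T[v] = matched left vertex
    wmax = max(w for u in U for w in adj[u].values())
    wmin = min(w for u in U for w in adj[u].values())
    cap = (n / 2) * (wmax - wmin + eps)       # maximum admissible label

    for u in U:
        current = u                           # Procedure 2, recursion unrolled
        while current is not None:
            # Choice rule: v minimising L(v) + w(current, v).
            offers = sorted((L[v] + w, v) for v, w in adj[current].items())
            v = offers[0][1]
            if L[v] > cap:                    # label too large: discard vertex
                break
            # Update rule: second-best offer minus w(current, v) plus eps.
            second = offers[1][0] if len(offers) > 1 else math.inf
            L[v] = second - adj[current][v] + eps
            current, T[v] = T[v], current     # displace the previous owner
    return T, L
```

For instance, `auction_matching([0, 1], ['a', 'b'], {0: {'a': 1, 'b': 3}, 1: {'a': 2, 'b': 1}}, 0.1)` matches $0$ to $a$ and $1$ to $b$, the minimum weight perfect matching of this toy instance.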
We observe that if the auction algorithm does not enter an endless loop in
Procedure 2, it returns a matched vertex for each of the vertices in $U$.
This implies that the auction algorithm will find a left perfect matching if
it terminates.
In the next section we will prove that the algorithm does not run in endless
loops, i.e., that it terminates, by showing that (1) the labels are
non-decreasing and (2) the labels are bounded by a maximum value.
### 2.3 Bounding the Labels
We need some additional notation. In what follows a _move_ denotes either
assigning an edge to a free vertex or replacing a previously assigned edge.
Let $P$ be the total number of moves performed by the algorithm. For $p\in[P]$
we use $L_{p}(v)$ to denote the label of vertex $v$ at the end of the $p$th
move. Let $\mathcal{M}_{p}$ denote the set of matched edges at the end of the
$p$th move.
Suppose an edge $(u,v)$ is assigned to a vertex $v\in V$ in some move $p$. The
choice and the update rules can then be rewritten as follows.
$\displaystyle\textbf{Choice Rule : }L_{p-1}(v)+w(u,v)\leq\min_{v^{\prime}\in
N(u)\setminus\\{v\\}}(L_{p-1}(v^{\prime})+w(u,v^{\prime})).$ (1)
$\displaystyle\textbf{Update Rule : }L_{p}(v)=\min_{v^{\prime}\in
N(u)\setminus\\{v\\}}(L_{p-1}(v^{\prime})+w(u,v^{\prime}))-w(u,v)+\varepsilon.$
(2)
We will first show that the label of exactly one vertex increases by at least
$\varepsilon$ at the end of a move. The labels of other vertices remain
unchanged. Even though this fact is obvious from the algorithm description, we
prove it here for completeness.
###### Proposition 1.
For any $p\in[P]$ there exists exactly one vertex $v\in V$ such that
$L_{p}(v)\geq L_{p-1}(v)+\varepsilon$ and for all other $v^{\prime}\in
V\backslash\\{v\\}$, $L_{p}(v^{\prime})=L_{p-1}(v^{\prime})$.
###### Proof.
By the definition of a move, the label of exactly one vertex is altered at the
end of a move. Let $v$ be assigned an edge $(u,v)$ in some move $p\in[P]$.
Therefore labels of all vertices except $v$ remain unchanged at the end of the
$p$th move, i.e.,
$\forall v^{\prime}\in
V\backslash\\{v\\}:L_{p}(v^{\prime})=L_{p-1}(v^{\prime})$
For vertex $v$ the new label at the end of the $p$th move is defined by (2)
(the update rule) as
$L_{p}(v)=\min_{v^{\prime}\in
N(u)\backslash\\{v\\}}(L_{p-1}(v^{\prime})+w(u,v^{\prime}))-w(u,v)+\varepsilon,$
which combined with (1) gives $L_{p}(v)\geq L_{p-1}(v)+\varepsilon$, thereby
concluding the proof.
∎
In the following proposition we bound the label of any vertex with respect to
the labels of the other neighbors of its matched vertex and the corresponding
edge weights. We will need this to bound the maximum label of any vertex at
the end of the $(P-1)$th move.
###### Proposition 2.
For all $p\in[P]$ and all $(u,v)\in\mathcal{M}_{p}$, the following holds.
$L_{p}(v)\leq\min_{v^{\prime}\in
N(u)\setminus\\{v\\}}(L_{p}(v^{\prime})+w(u,v^{\prime}))-w(u,v)+\varepsilon.$
###### Proof.
Suppose that in some move $p\in[P]$ an edge $(u,v)$ is placed in the matched
set $\mathcal{M}_{p}$. Note that the labels of all vertices $v^{\prime}\in
V\setminus\\{v\\}$ remain unchanged in the $p$th move. By the update
rule we obtain
$\displaystyle L_{p}(v)=$ $\displaystyle\min_{v^{\prime}\in
N(u)\backslash\\{v\\}}(L_{p-1}(v^{\prime})+w(u,v^{\prime}))-w(u,v)+\varepsilon$
$\displaystyle\leq$ $\displaystyle\min_{v^{\prime}\in
N(u)\backslash\\{v\\}}(L_{p}(v^{\prime})+w(u,v^{\prime}))-w(u,v)+\varepsilon.$
The last inequality holds since $L_{p}(v^{\prime})\geq L_{p-1}(v^{\prime})$ for
all $v^{\prime}$ and $p$ by Proposition 1. For any other edge
$(u^{\prime},v^{\prime})\in\mathcal{M}_{p}$ that was last assigned in some
move $p^{\prime}<p$ the following holds by update rule
$\displaystyle L_{p^{\prime}}(v^{\prime})=\min_{v^{\prime\prime}\in
N(u^{\prime})\backslash\\{v^{\prime}\\}}(L_{p^{\prime}-1}(v^{\prime\prime})+w(u^{\prime},v^{\prime\prime}))-w(u^{\prime},v^{\prime})+\varepsilon$
$\displaystyle\leq\min_{v^{\prime\prime}\in
N(u^{\prime})\backslash\\{v^{\prime}\\}}(L_{p}(v^{\prime\prime})+w(u^{\prime},v^{\prime\prime}))-w(u^{\prime},v^{\prime})+\varepsilon$
The last inequality holds as $L_{p}(v^{\prime\prime})\geq
L_{p^{\prime}-1}(v^{\prime\prime})$ for all $v^{\prime\prime}$ and all
$p>p^{\prime}-1$. Also, as $v^{\prime}$ was not updated after the
$p^{\prime}$th move, $L_{p^{\prime}}(v^{\prime})=L_{p}(v^{\prime})$. We
therefore obtain
$L_{p}(v^{\prime})\leq\min_{v^{\prime\prime}\in
N(u^{\prime})\backslash\\{v^{\prime}\\}}(L_{p}(v^{\prime\prime})+w(u^{\prime},v^{\prime\prime}))-w(u^{\prime},v^{\prime})+\varepsilon,$
thereby completing the proof.
∎
#### 2.3.1 Maximum Label and the Runtime
We want to prove a bound on the labels of the vertices in terms of their
shortest distances to the set of free vertices at the end of the $(P-1)$th
move. We start by considering an ordered alternating path of unmatched and
matched edges. Using Proposition 2 we will exploit the relationship between
(a) the label of the last vertex, (b) the label of the first vertex, and (c)
the edge weights in this ordered path.
For an even $2\leq t\leq 2n-4$, let
$B_{t}=(v_{1},u_{2},v_{2},\cdots,u_{{t\over 2}+1},v_{{t\over 2}+1})$ denote an
alternating path of unmatched and matched edges at the end of some move $p$.
We assume that the first edge in the path is an unmatched edge. Let
$\mathcal{M}_{p}(B_{t})$ and $\mathcal{M}^{\prime}_{p}(B_{t})$ denote the set
of matched and unmatched edges in $B_{t}$ at the end of some $p$th move.
###### Lemma 1.
For all $t\leq 2n-4$ and all paths $B_{t}$ as defined above, the following
holds.
$L_{p}(v_{{t\over 2}+1})\leq
L_{p}(v_{1})+\sum_{e\in\mathcal{M}^{\prime}_{p}(B_{t})}w(e)-\sum_{e\in\mathcal{M}_{p}(B_{t})}w(e)+{t\varepsilon\over
2}$
###### Proof.
We will prove the lemma by induction on the path length $t$. For the case
$t=2$ we have $B_{2}=(v_{1},u_{2},v_{2})$, where
$(v_{1},u_{2})\in\mathcal{M}^{\prime}_{p}(B_{2})$ and
$(u_{2},v_{2})\in\mathcal{M}_{p}(B_{2})$. By Proposition 2 we obtain
$L_{p}(v_{2})\leq L_{p}(v_{1})+w(v_{1},u_{2})-w(u_{2},v_{2})+\varepsilon.$
Clearly the lemma follows for path length $2$. Now assume that the lemma is
true for path length $t=2t^{\prime}$, i.e.,
$L_{p}(v_{t^{\prime}+1})\leq
L_{p}(v_{1})+\sum_{e^{\prime}\in\mathcal{M}^{\prime}_{p}(B_{2t^{\prime}})}w(e^{\prime})-\sum_{e\in\mathcal{M}_{p}(B_{2t^{\prime}})}w(e)+{t^{\prime}\varepsilon}$
We will now prove the lemma for paths of length $2t^{\prime}+2$. Consider the
last vertex $v_{t^{\prime}+2}$ in $B_{2t^{\prime}+2}$. As it is matched to
$u_{t^{\prime}+2}$, by Proposition 2 we obtain
$L_{p}(v_{t^{\prime}+2})\leq
L_{p}(v_{t^{\prime}+1})+w(u_{t^{\prime}+2},v_{t^{\prime}+1})-w(u_{t^{\prime}+2},v_{t^{\prime}+2})+\varepsilon.$
Combining the above inequality with the induction hypothesis we obtain
$L_{p}(v_{t^{\prime}+2})\leq
L_{p}(v_{1})+\sum_{e^{\prime}\in\mathcal{M}^{\prime}_{p}(B_{2t^{\prime}+2})}w(e^{\prime})-\sum_{e\in\mathcal{M}_{p}(B_{2t^{\prime}+2})}w(e)+(t^{\prime}+1)\varepsilon,$
hence completing the proof. ∎
We obtain the following corollary about the labels at the end of the
$(P-1)$th move.
###### Corollary 1.
Let $d_{P-1}(v)$ denote the length of the shortest unweighted path to any free
vertex in $G$ after move $P-1$ is completed. Then for all $v\in V$,
$L_{P-1}(v)\leq{d_{P-1}(v)\over 2}(w^{max}-w^{min}+\varepsilon)$.
###### Proof.
We assume that the given bipartite graph is connected; otherwise we run the
algorithm on each connected component. We note that at the end of the
$(P-1)$th move, there is at least one free vertex in $V$, and the label of any
free vertex is $0$. For any $v\in V$, applying Lemma 1 to its shortest
alternating path to some free vertex and bounding the weights of unmatched and
matched edges on this path by $w^{max}$ and $w^{min}$ respectively, we obtain
the desired result, i.e.,
$L_{P-1}(v)\leq 0+{d_{P-1}(v)\over 2}(w^{max}-w^{min}+\varepsilon)$
∎
Before we prove the runtime bounds we will describe the main result from [23]
which we use to bound the distance values $d_{P-1}(v)$. Khosla [23] considers
the problem of assigning $m$ items to $n$ locations, such that each of the $m$
items chooses $k$ locations independently and uniformly at random. Each item
needs to be assigned to one of its $k$ choices. It is easy to see that such an
assignment instance represents a $k$-left regular bipartite graph and a valid
assignment corresponds to a left-perfect matching (where all vertices from the
left set have been matched).
A label-based approach, LSA, is used to find such a perfect matching for the
case $m<c^{*}_{k}n$, where $c^{*}_{k}$ is the _threshold density_ (known
before from [27, 15, 17]). The fact that $m<c^{*}_{k}n$ ensures that a left perfect
matching exists with probability $1-o(1)$. The runtime of the algorithm is
bounded by the sum of labels at the end of the algorithm which are in turn
bounded by the shortest distances to the set of free vertices. Note that
$c^{*}_{k}<1$, so there always exists at least one free vertex at the end of
the algorithm. The distances are in turn bounded using some structural
properties of the corresponding graphs. She shows that the sum of shortest
distances to the set of free vertices is bounded by $O(n)$ with probability
$1-o(1)$. In a subsequent extension Anand and Khosla [24] show that the
result also holds in expectation. We state here their main result adjusted to
the terminology and its requirement in this paper.
###### Theorem 2.
For $k\geq 3$, let $G=(U\cup V,E)$ be such that each vertex in $U$ chooses $k$
neighbors in $V$ independently and uniformly at random. In addition let
$|U|<c^{*}_{k}|V|$. Then for some $\delta>0$, $\sum_{v\in V}d_{P-1}(v)=O(n)$
with probability $1-n^{-\delta}$ and in expectation.
We refer the reader to [23, 24] for a complete proof of Theorem 2. We are now
ready to prove the runtime bounds of the auction algorithm.
###### Proof of runtime bounds (Theorem 1).
We recall that by Proposition 1 each move increases some label by at least
$\varepsilon$, hence $P\leq{\sum_{v\in
V}L_{P-1}(v)\over\varepsilon}+1.$ From Corollary 1 and Theorem
2 we conclude that for sparse random $k$-left regular bipartite graphs
$\sum_{v\in V}L_{P-1}(v)=O(n\cdot(w^{max}-w^{min}+\varepsilon))$ with high
probability. Further, $k$
comparisons are required, in each move, to find the best and the second best
vertices. From these two observations we can conclude that the worst case
bound is $O(nk\cdot{w^{max}-w^{min}+\varepsilon\over\varepsilon})$ with high
probability.
For complete bipartite graphs, note that $d_{P-1}(v)\leq 2$ for all $v\in V$
(for the last free vertex, it is $0$). Note that in each move $n$ comparisons
are made which gives us the worst case bound of
$O\left(n^{2}\cdot{w^{max}-w^{min}\over\varepsilon}\right)$. ∎
It is easy to see that for arbitrary bipartite graphs and also for the case
where the perfect matching does not exist, the worst case runtime is bounded
by $O(n^{2}d\cdot{w^{max}-w^{min}+\varepsilon\over\varepsilon})$, where $d$ is
the maximum degree of the vertices in $U$. We note that in practice the
maximum label value will be much lower than what we estimate here as we assume
all unmatched edges to be with the largest weight and all matched edges to be
of smallest weight. Also, what appears like the worst case for the runtime
analysis, i.e., where each unmatched edge has weight $w^{max}$ and each of the
matched edge has weight $w^{min}$ and when the weight ranges are large, is in
fact a considerably easy case for the algorithm. The parameter $\varepsilon$
will not have much of a role to play in this case, since each update will
increase the label by a large value, close to $w^{max}-w^{min}$.
For completeness, we prove in the following section that the algorithm outputs
an $\varepsilon$-optimal solution for the case where a perfect matching
exists. We do not go into the detailed analysis for the case where a perfect
matching does not exist because of the, previously stated, bad worst case
runtime bound. We believe that a closer analysis will lead to the same
approximation guarantee for this case too. In the future, we hope to improve
the runtime bound by using a scaling-type approach, in which the maximum
possible value of the label is initially set to a very small value. This would
lead to discarding most of the vertices and would result in a small matching
size. In each scale we would increase the maximum value by some factor and
improve on the matching size obtained from the previous scale.
### 2.4 An $\varepsilon$-optimal solution
Using Lemma 1 it is now easy to show that the auction algorithm outputs a near
optimal minimum weight perfect matching for all classes of bipartite graphs
and arbitrary weights, provided a perfect matching exists.
###### Lemma 2.
For any $\varepsilon>0$ and given bipartite graph $G$, the auction algorithm
outputs a perfect matching with weight at most $OPT_{G}+n\varepsilon$.
###### Proof.
Let us assume that the auction algorithm does not output an optimal answer.
First consider the case where $|U|<|V|=n$, so that there is a free vertex
$v\in V$
at the end of the last move. As in Lemma 1 let
$B_{t}=(v,u_{1},v_{1},u_{2},v_{2},\cdots,u_{t},v_{t})$ be an augmenting path
of length $2t$ where $\mathcal{M}_{P}(B_{t})$ and
$\mathcal{M}^{\prime}_{P}(B_{t})$ denote the set of matched and unmatched
edges in $B_{t}$. By Lemma 1 we obtain
$L_{P}(v_{t})\leq
L_{P}(v)+\sum_{e\in\mathcal{M}^{\prime}_{P}(B_{t})}w(e)-\sum_{e\in\mathcal{M}_{P}(B_{t})}w(e)+t\varepsilon.$
As $L_{P}(v)=0$ and $L_{P}(v_{t})>0$ we obtain
$\sum_{e\in\mathcal{M}_{P}(B_{t})}w(e)<\sum_{e\in\mathcal{M}^{\prime}_{P}(B_{t})}w(e)+t\varepsilon$
(3)
We next consider an augmenting cycle $C_{2t}$ of length $2t$ such that
$C_{2t}=(u_{1},v_{1},u_{2},v_{2},\cdots,u_{t},v_{t},u_{1})$
where for all $i\in[t]$, $(u_{i},v_{i})\in\mathcal{M}_{P}$. WLOG we assume
that vertex $v_{t}$ was matched later than all other right set vertices in
$C_{2t}$ in some move, say $p$. Let
$B_{2t-4}=(v_{1},u_{2},v_{2},\cdots,u_{t-1},v_{t-1})$. Now by Lemma 1 we
obtain
$L_{P}(v_{t-1})\leq
L_{P}(v_{1})+\sum_{e\in\mathcal{M}_{P}(B_{2t-4})}w(e)-\sum_{e\in\mathcal{M}^{\prime}_{P}(B_{2t-4})}w(e)+(t-2)\varepsilon$
As $v_{1}$ and $v_{t-1}$ by assumption were not updated after the $(p-1)$th
move we can write the above inequality as
$L_{p-1}(v_{t-1})\leq
L_{p-1}(v_{1})+\sum_{e\in\mathcal{M}_{P}(B_{2t-4})}w(e)-\sum_{e\in\mathcal{M}^{\prime}_{P}(B_{2t-4})}w(e)+(t-2)\varepsilon,$
because $L_{P}(v_{1})=L_{p-1}(v_{1})$ and $L_{P}(v_{t-1})=L_{p-1}(v_{t-1}).$
By the choice rule we have
$\displaystyle L_{p-1}(v_{t})+w(u_{t},v_{t})\leq
L_{p-1}(v_{t-1})+w(u_{t},v_{t-1})$
$\displaystyle\stackrel{{\scriptstyle}}{{\leq}}L_{p-1}(v_{1})+\sum_{i=1}^{t-1}w(u_{i+1},v_{i})-\sum_{i=2}^{t-1}w(u_{i},v_{i})+(t-2)\varepsilon$
$\displaystyle\leq
L_{p-1}(v_{t})+w(u_{1},v_{t})-w(u_{1},v_{1})+\sum_{i=1}^{t-1}w(u_{i+1},v_{i})-\sum_{i=2}^{t-1}w(u_{i},v_{i})+(t-1)\varepsilon$
$\displaystyle\implies$
$\displaystyle\sum_{i=1}^{t}w(u_{i},v_{i})\leq\sum_{i=2}^{t}w(u_{i},v_{i-1})+w(u_{1},v_{t})+(t-1)\varepsilon$
The proof is completed by adding (3) and the final inequality above for all
vertex disjoint augmenting paths and cycles. ∎
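The guarantee of Lemma 2 can be checked numerically on small complete instances. The sketch below (assumed instance data, not from the paper) reimplements the auction on a complete bipartite graph, computes the exact optimum by brute force, and asserts the $OPT_{G}+n\varepsilon$ bound:

```python
import itertools
import random

def auction(n, w, eps):
    """Auction on a complete bipartite graph given by an n x n weight matrix
    (n >= 2); returns the weight of the perfect matching it finds."""
    L = [0.0] * n                    # labels of right vertices
    T = [None] * n                   # T[v] = left vertex matched to v
    for u in range(n):
        cur = u
        while cur is not None:
            offers = sorted((L[v] + w[cur][v], v) for v in range(n))
            v = offers[0][1]                        # choice rule
            L[v] = offers[1][0] - w[cur][v] + eps   # update rule
            cur, T[v] = T[v], cur                   # displace previous owner
    return sum(w[T[v]][v] for v in range(n))

def brute_force(n, w):
    """Exact optimum by enumerating all n! perfect matchings."""
    return min(sum(w[u][p[u]] for u in range(n))
               for p in itertools.permutations(range(n)))

random.seed(0)
n, eps = 5, 0.05
w = [[random.uniform(0, 10) for _ in range(n)] for _ in range(n)]
opt = brute_force(n, w)
# The returned matching is perfect, so its weight is between OPT and OPT + n*eps.
assert opt <= auction(n, w, eps) <= opt + n * eps
```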
## References
* [1] Arne Andersson, Torben Hagerup, Stefan Nilsson, and Rajeev Raman. Sorting in linear time. In Proceedings of the twenty-seventh annual ACM symposium on Theory of computing, pages 427–436. ACM, 1995.
* [2] D. P. Bertsekas. Distributed relaxation methods for linear network flow problems. In Decision and Control, 1986 25th IEEE Conference on, volume 25, pages 2101–2106. IEEE, 1986.
* [3] D. P. Bertsekas and D. A. Castañon. The auction algorithm for the transportation problem. Annals of Operations Research, 20(1):67–96, 1989.
* [4] D. P. Bertsekas, D.A. Castañon, and Haralampos Tsaknakis. Reverse auction and the solution of inequality constrained assignment problems. SIAM Journal on Optimization, 3(2):268–297, 1993.
* [5] D. P. Bertsekas and J. Eckstein. Dual coordinate step methods for linear network flow problems. Mathematical Programming, 42(1):203–243, 1988.
* [6] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and distributed computation: numerical methods, volume 23. 1989\.
* [7] D.P. Bertsekas. A new algorithm for the assignment problem. Mathematical Programming, 21(1):152–171, 1981.
* [8] D.P. Bertsekas. An auction algorithm for shortest paths. SIAM Journal on Optimization, 1(4):425–447, 1991.
* [9] D.P. Bertsekas. Auction algorithms for network flow problems: A tutorial introduction. Computational Optimization and Applications, 1(1):7–66, 1992.
* [10] D.P. Bertsekas and J. Eckstein. Distributed asynchronous relaxation methods for linear network flow problems. IFAC Proceedings Volumes, 20(5):103 – 114, 1987. 10th Triennial IFAC Congress on Automatic Control - 1987 Volume VII, Munich, Germany, 27-31 July.
* [11] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische mathematik, 1(1):269–271, 1959.
* [12] R. Duan, S. Pettie, and Hsin-Hao Su. Scaling algorithms for weighted matching in general graphs. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’17, pages 781–800, 2017.
* [13] Ran Duan and Seth Pettie. Linear-time approximation for maximum weight matching. Journal of the ACM (JACM), 61(1):1, 2014.
* [14] Jack Edmonds and Richard M Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM (JACM), 19(2):248–264, 1972.
* [15] N. Fountoulakis and K. Panagiotou. Sharp load thresholds for cuckoo hashing. Random Structures & Algorithms, 41(3):306–333, 2012.
* [16] Michael L Fredman and Robert Endre Tarjan. Fibonacci heaps and their uses in improved network optimization algorithms. Journal of the ACM (JACM), 34(3):596–615, 1987.
* [17] A. Frieze and P. Melsted. Maximum matchings in random bipartite graphs and the space utilization of cuckoo hash tables. Random Structures & Algorithms, 41(3):334–364, 2012.
* [18] H. N. Gabow and R. E. Tarjan. Faster scaling algorithms for network problems. SIAM Journal on Computing, 18(5):1013–1036, 1989.
* [19] Andrew V Goldberg and Robert Kennedy. Global price updates help. SIAM Journal on Discrete Mathematics, 10(4):551–572, 1997.
* [20] Yijie Han. Deterministic sorting in o (n log log n) time and linear space. In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, pages 602–608. ACM, 2002.
* [21] Chien-Chung Huang and Telikepalli K. Efficient algorithms for maximum weight matchings in general graphs with small edge weights. In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, pages 1400–1412, 2012.
* [22] Ming-Yang Kao, Tak-Wah Lam, Wing-Kin Sung, and Hing-Fung Ting. A decomposition theorem for maximum weight bipartite matchings. SIAM Journal on Computing, 31(1):18–26, 2001.
* [23] Megha Khosla. Balls into bins made faster. In European Symposium on Algorithms, pages 601–612. Springer, 2013\.
* [24] Megha Khosla and Avishek Anand. A faster algorithm for cuckoo insertion and bipartite matching in large graphs. Algorithmica, 81(9):3707–3724, 2019.
* [25] Harold W Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83–97, 1955.
* [26] Harold W Kuhn. Variants of the hungarian method for assignment problems. Naval Research Logistics Quarterly, 3(4):253–258, 1956.
* [27] M. Lelarge. A new approach to the orientation of random hypergraphs. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’12, pages 251–264, 2012.
* [28] James B Orlin and Ravindra K Ahuja. New scaling algorithms for the assignment and minimum mean cycle problems. Mathematical programming, 54(1-3):41–56, 1992.
* [29] S. Pettie. A simple reduction from maximum weight matching to maximum cardinality matching. Information Processing Letters, 112(23):893–898, 2012.
* [30] Piotr Sankowski. Maximum weight bipartite matching in matrix multiplication time. Theoretical Computer Science, 410(44):4480–4488, 2009.
* [31] Mikkel Thorup. Equivalence between priority queues and sorting. In Foundations of Computer Science, 2002. Proceedings. The 43rd Annual IEEE Symposium on, pages 125–134. IEEE, 2002.
* [32] Mikkel Thorup. Integer priority queues with decrease key in constant time and the single source shortest paths problem. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, pages 149–158. ACM, 2003.
* [33] N Tomizawa. On some techniques useful for solution of transportation network problems. Networks, 1(2):173–194, 1971.
# Maximizing Approximately $k$-Submodular Functions††thanks: Partially
supported by NSFC Project 11771365.
Leqian Zheng, Hau Chan, Grigorios Loukides, Minming Li. City University of
Hong Kong, Hong Kong SAR<EMAIL_ADDRESS>; University of Nebraska-Lincoln,
USA<EMAIL_ADDRESS>; King’s College London, UK<EMAIL_ADDRESS>; City
University of Hong Kong, Hong Kong SAR, and City University of Hong Kong
Shenzhen Research Institute, Shenzhen, P. R. China<EMAIL_ADDRESS>
###### Abstract
We introduce the problem of maximizing _approximately_ $k$-submodular
functions subject to size constraints. In this problem, one seeks to select
$k$-disjoint subsets of a ground set with bounded total size or individual
sizes, and maximum utility, given by a function that is “close” to being
$k$-submodular. The problem finds applications in tasks such as sensor
placement, where one wishes to install $k$ types of sensors whose measurements
are noisy, and influence maximization, where one seeks to advertise $k$ topics
to users of a social network whose level of influence is uncertain. To deal
with the problem, we first provide two natural definitions for approximately
$k$-submodular functions and establish a hierarchical relationship between
them. Next, we show that simple greedy algorithms offer approximation
guarantees for different types of size constraints. Last, we demonstrate
experimentally that the greedy algorithms are effective in sensor placement
and influence maximization problems.
## 1 Introduction
A $k$-submodular function is a natural generalization of a submodular function
to $k$ arguments or dimensions [15]. Recently, there has been an increased
theoretical and algorithmic interest in studying the problem of maximizing
(monotone) _$k$ -submodular functions_, with [21] or without [15, 25, 16] size
constraints. The problem finds applications in sensor placement [21],
influence maximization [21], coupled feature selection [24], and network cut
capacity optimization [16]. In these applications, one wishes to select $k$
_disjoint_ subsets from a ground set that maximize a given function with $k$
arguments subject to an upper bound on the total size of the selected subsets
or the size of each individual selected subset [21]. Such a function of
interest often has a _diminishing returns property_ with respect to each
subset when fixing the other $k-1$ subsets [15].
More formally, let $V=\\{1,...,n\\}=[n]$ be a finite ground set of $n$
elements and $(k+1)^{V}=\\{(X_{1},X_{2},\dots,X_{k})\mid X_{i}\subseteq
V~\forall i\in[k],\ X_{i}\cap X_{j}=\emptyset~\forall i\neq j\\}$ be the set of $k$-disjoint
subsets. A function $f:(k+1)^{V}\to\mathbb{R}^{+}$ is $k$-submodular if and
only if, for any $\boldsymbol{x},\boldsymbol{y}\in(k+1)^{V}$,
$\displaystyle
f(\boldsymbol{x})+f(\boldsymbol{y})\geq{}f(\boldsymbol{x}\sqcap\boldsymbol{y})+f(\boldsymbol{x}\sqcup\boldsymbol{y}),$
where
$\boldsymbol{x}\sqcap\boldsymbol{y}=(X_{1}\cap{}Y_{1},X_{2}\cap{}Y_{2},\ldots,X_{k}\cap{}Y_{k})$
and
$\displaystyle\boldsymbol{x}\sqcup\boldsymbol{y}=(X_{1}\cup{}Y_{1}\setminus(\bigcup_{i\neq{1}}X_{i}\cup{}Y_{i}),\ldots,X_{k}\cup{}Y_{k}\setminus(\bigcup_{i\neq{k}}X_{i}\cup{}Y_{i})).$
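The lattice operations $\sqcap$ and $\sqcup$ can be checked concretely over the label-vector view of $(k+1)^{V}$ (each element carries a label in $\\{0,1,\dots,k\\}$, with $0$ meaning "unassigned"). The sketch below uses $f(\boldsymbol{x})=|supp(\boldsymbol{x})|$, a simple monotone $k$-submodular function, and exhaustively verifies the defining inequality; the encoding and the toy $f$ are illustration choices, not from the paper:

```python
from itertools import product

def sqcap(x, y):
    # componentwise intersection: keep a label only where x and y agree
    return tuple(xi if xi == yi else 0 for xi, yi in zip(x, y))

def sqcup(x, y):
    # componentwise union, dropping elements claimed by two different labels
    out = []
    for xi, yi in zip(x, y):
        labels = {l for l in (xi, yi) if l != 0}
        out.append(labels.pop() if len(labels) == 1 else 0)
    return tuple(out)

def size(x):
    # f(x) = |supp(x)|, a monotone k-submodular function
    return sum(1 for xi in x if xi != 0)

# exhaustively verify f(x) + f(y) >= f(x ⊓ y) + f(x ⊔ y) for k = 2, n = 4
k, n = 2, 4
for x in product(range(k + 1), repeat=n):
    for y in product(range(k + 1), repeat=n):
        assert size(x) + size(y) >= size(sqcap(x, y)) + size(sqcup(x, y))
```

Note that an element assigned different labels by $\boldsymbol{x}$ and $\boldsymbol{y}$ disappears from $\boldsymbol{x}\sqcup\boldsymbol{y}$ entirely, which is exactly the $\setminus\bigcup_{i\neq j}$ term in the definition.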
A $k$-submodular function $f$ is _monotone_ if and only if, for any
$\boldsymbol{x},\boldsymbol{y}\in(k+1)^{V}$ such that
${\boldsymbol{x}}\preceq{\boldsymbol{y}}$ (i.e., $X_{i}\subseteq Y_{i},$
$\forall i\in[k]$), $f(\boldsymbol{x})\leq f(\boldsymbol{y})$. The problem of
maximizing a monotone $k$-submodular function $f$ subject to the _total size_
(TS) and _individual size_ (IS) constraints [21] is
$\displaystyle\max_{\boldsymbol{x}\in(k+1)^{V}:|\cup_{i\in[k]}X_{i}|\leq
B}f(\boldsymbol{x})~{}~{}\text{ and
}~{}~{}\max_{\boldsymbol{x}\in(k+1)^{V}:|X_{i}|\leq B_{i}\;\forall
i\in[k]}f(\boldsymbol{x}),$
respectively, for some positive integers $B\in\mathbb{Z}^{+}$ and
$B_{i}\in\mathbb{Z}^{+}$ for all $i\in[k]$. While recent results [21] show
that the above two problems can be well-approximated using greedy algorithms
(with an approximation ratio of $\frac{1}{2}$ and $\frac{1}{3}$ for total size
and individual size constraints, respectively) under the value oracle model,
not much is known when the function is not entirely $k$-submodular, which can
result from the oracle being inaccurate or the function itself not being
$k$-submodular by default (see examples and applications below). In fact,
often there is no access to the exact value of the function but only to noisy
values of it. In this paper, we initiate the study of the above maximization
problems for non-$k$-submodular functions and pose the following questions:
> Q1. _How to define an approximately $k$-submodular function?_
>
> Q2. _What approximation guarantees can be obtained when maximizing such a
> function under total size or individual size constraints?_
| | $\varepsilon$-AS: $F$’s Solution | $\varepsilon$-AS: $f$’s Solution | $\varepsilon$-ADR: $F$’s Solution |
|---|---|---|---|
| $k=1$ | $\frac{1}{1+\frac{4B\varepsilon}{(1-\varepsilon)^{2}}}\left(1-\left(\frac{1-\varepsilon}{1+\varepsilon}\right)^{2B}\left(1-\frac{1}{B}\right)^{B}\right)$ [14] | $\frac{1-\varepsilon}{1+\varepsilon}\left(1-\frac{1}{e}\right)$ | $1-e^{-\frac{1-\varepsilon}{1+\varepsilon}}$ |
| $k\geq 2$, TS | $\frac{(1-\varepsilon)^{2}}{2(1-\varepsilon+\varepsilon B)(1+\varepsilon)}$ | $\frac{1-\varepsilon}{2(1+\varepsilon)}$ | $\frac{1-\varepsilon}{2}$ |
| $k\geq 2$, IS | $\frac{(1-\varepsilon)^{2}}{(3-3\varepsilon+2\varepsilon B)(1+\varepsilon)}$ | $\frac{1-\varepsilon}{3(1+\varepsilon)}$ | $\frac{1-\varepsilon}{3+\varepsilon}$ |
Table 1: Approximation ratios of the greedy algorithms of [21] for maximizing
$\varepsilon$-approximately $k$-submodular ($\varepsilon$-AS) and
$\varepsilon$-approximately diminishing returns ($\varepsilon$-ADR) function
$F$ under a total size (TS) constraint $B$, or individual size (IS)
constraints $B_{1},...,B_{k}$ with $B=\sum_{i\in[k]}B_{i}$, when they are
applied to $F$ or to $f$. An algorithm has approximation ratio $\alpha\leq 1$
if and only if it returns a (feasible) solution $\boldsymbol{x}\in{(k+1)^{V}}$
such that $f(\boldsymbol{x})\geq\alpha f(\boldsymbol{x}^{*})$, where
$\boldsymbol{x}^{*}$ is an optimal solution.
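For intuition about the magnitudes in Table 1, the closed-form ratios for $k\geq 2$ can be evaluated directly. The snippet below (sample values of $\varepsilon$ and $B$ chosen arbitrarily for illustration) confirms that each $\varepsilon$-ADR ratio dominates its $\varepsilon$-AS counterpart and that the $\varepsilon$-AS ratios recover $\frac{1}{2}$ (TS) and $\frac{1}{3}$ (IS) as $\varepsilon\to 0$:

```python
def as_ts(eps, B):   # ε-AS, total size, F's solution
    return (1 - eps) ** 2 / (2 * (1 - eps + eps * B) * (1 + eps))

def as_is(eps, B):   # ε-AS, individual size, F's solution
    return (1 - eps) ** 2 / ((3 - 3 * eps + 2 * eps * B) * (1 + eps))

def adr_ts(eps):     # ε-ADR, total size
    return (1 - eps) / 2

def adr_is(eps):     # ε-ADR, individual size
    return (1 - eps) / (3 + eps)

eps, B = 0.05, 10
# the ε-ADR ratios dominate the corresponding ε-AS ratios
assert adr_ts(eps) >= as_ts(eps, B) and adr_is(eps) >= as_is(eps, B)
# as ε → 0, the ε-AS ratios approach 1/2 (TS) and 1/3 (IS)
assert abs(as_ts(1e-9, B) - 0.5) < 1e-6 and abs(as_is(1e-9, B) - 1 / 3) < 1e-6
```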
Applications. Answering Q1 and Q2 could be important in optimization, machine
learning, and beyond. Let us first consider the case of $k=1$. In many
applications, a function $F$ is _approximately submodular_ rather than
submodular (i.e., $(1-\varepsilon)f(x)\leq F(x)\leq(1+\varepsilon)f(x)$, for
an $\varepsilon>0$, where $f$ is a monotone submodular function) [14, 13, 23].
These applications include subset selection which is fundamental in areas such
as (sequential) document summarization, sensor placement, and influence
maximization [21, 9]. For example, in sensor placement, the objective is to
select a subset of good sensors and the approximation comes from sensors
producing noisy values due to hardware issues, environmental effects, and
imprecision in measurement [10, 2]. In influence maximization, the objective
is to select a subset of good users to start a viral marketing campaign over a
social network and the approximation comes from our uncertainty about the
level of influence of specific users in the social network [17, 11, 26]. Other
applications are PMAC learning [5, 4, 1], where the objective is to learn a
submodular function, and sketching [3], where the objective is to find a good
representation of a submodular function of polynomial size.
In many of these applications, one often needs to select $k$ disjoint
subsets of a given maximum total size or given individual sizes, instead of a single subset,
which gives rise to size-constrained $k$-submodular function maximization
[21]. This is the case for sensor placement where $k$ types of sensors need to
be placed in disjoint subsets of locations, or influence maximization where
$k$ viral marketing campaigns, each starting from a different subset of users,
need to be performed simultaneously over the same social network. Again, a
function may not be exactly $k$-submodular, due to noise in sensor
measurements and uncertainty in user influence levels. Thus, by answering
question Q1, we could more accurately model the quality of solutions in these
applications and, by answering Q2, we could obtain solutions of guaranteed
quality.
Our Contributions. We address Q1 by introducing two natural definitions of an
approximately $k$-submodular function. To the best of our knowledge,
approximately $k$-submodular functions have not been defined or studied before
for general $k>1$. Namely, we define a function $F:(k+1)^{V}\to\mathbb{R}^{+}$
as $\varepsilon$-_approximately $k$-submodular_ ($\varepsilon$-AS) or
_$\varepsilon$ -approximately diminishing returns_ ($\varepsilon$-ADR) for
some small $\varepsilon>0$, if and only if there exists a monotone
$k$-submodular function $f$ such that for any $\boldsymbol{x}\in{(k+1)^{V}}$,
$u\in V$ with $u\not\in\bigcup_{l\in[k]}X_{l}$, and $i\in[k]$,
$\displaystyle\varepsilon\text{-AS}:$
$\displaystyle~{}(1-\varepsilon)f(\boldsymbol{x})\leq
F(\boldsymbol{x})\leq(1+\varepsilon)f(\boldsymbol{x})\text{ or }$
$\displaystyle\varepsilon\text{-ADR}:$
$\displaystyle~{}(1-\varepsilon)\Delta_{u,i}f(\boldsymbol{x})\leq{}\Delta_{u,i}F(\boldsymbol{x})\leq{}(1+\varepsilon)\Delta_{u,i}f(\boldsymbol{x}),$
where
$\Delta_{u,i}f({\boldsymbol{x}})=f(X_{1},\ldots,X_{i-1},X_{i}\cup\\{u\\},X_{i+1},\ldots,X_{k})-f(X_{1},\ldots,X_{k})$
and $\Delta_{u,i}F(\boldsymbol{x})$ is defined similarly. Our $\varepsilon$-AS
definition generalizes the approximately submodular definition of [14] from
$k=1$ to $k\geq 1$ dimensions. The $\varepsilon$-ADR definition is related to
the _marginal gain_ of the functions $F$ and $f$ and implies
$a$-submodularity when $k=1$ [12]. As we will show, an $\varepsilon$-ADR
function $F$ is also $\varepsilon$-AS. However, the converse is not true.
Thus, we establish a novel approximately $k$-submodular hierarchy for $k\geq
1$.
We address Q2 by considering the maximization problems on a function $F$ that
is $\varepsilon$-AS or $\varepsilon$-ADR, subject to the total size (TS) or
individual size (IS) constraint. We show that, when applying the simple greedy
algorithms of [21] for TS and IS, we obtain approximation guarantees that
depend on $\varepsilon$. Table 1 provides an overview of our results. Note
that there are two cases with respect to an $\varepsilon$-AS $F$ and $f$ where
we can apply greedy algorithms to $F$ or $f$ (when $f$ is known). A surprising
observation is that we can derive better approximation ratios by applying the
greedy algorithms to $f$ and using the solutions to approximate $F$. When only
$F$ is known (e.g., through a noisy oracle, an approximately learned
submodular function, or a sketch), we provide approximation ratios of the
greedy algorithms being applied to $F$. When the function $F$ is
$\varepsilon$-ADR, we provide better approximation guarantees, compared to
when $F$ is $\varepsilon$-AS.
We conduct experiments on two real datasets to evaluate the effectiveness of
the greedy algorithms in a $k$-type sensor placement problem and a $k$-topic
influence maximization problem for our setting. We provide three approximately
$k$-submodular function generation/noise methods for generating function
values. Our experimental results are consistent with the theoretical findings
of the derived approximation ratios and showcase the impact of the type of
noise on the quality of the solutions.
Organization. Section 2 discusses related work. Section 3 provides some
preliminaries and establishes a strong relationship between $\varepsilon$-AS
and $\varepsilon$-ADR. Sections 4 and 5 consider the approximation
ratios of the greedy algorithms for maximizing an $\varepsilon$-AS or an
$\varepsilon$-ADR function for $k=1$ and $k>1$, respectively. Section 6
discusses the approximation ratios of $f$’s greedy solutions to $F$. Section 7
presents experimental results. In Section 8, we conclude and present
extensions.
## 2 Related Work.
The works of [13, 23] and [14] consider the problem of maximizing an
approximately submodular function (e.g., $\varepsilon$-AS definition with
$k=1$) subject to constraints under stochastic errors (e.g., $\varepsilon$)
drawn i.i.d. from some distributions and sub-constant (bounded) errors,
respectively. Our paper focuses on the latter setting with $k\geq 1$ and
considers the performance of greedy algorithms with respect to errors (e.g., a
fixed $\varepsilon$). In particular, [14] shows that, for some
$\varepsilon=\frac{1}{n^{1/2-\beta}}$ for $\beta>0$, no algorithm can obtain
any constant approximation ratio using polynomially many queries of the value
oracle. The same lower bound can be applied to our setting. However, when
$\varepsilon$ is sufficiently small with respect to the size constraint (e.g.,
$\varepsilon=\frac{\delta}{B}$), the standard greedy algorithm [19] provides
an (almost) constant approximation ratio of $(1-1/e-O(\delta))$.
Under the stochastic noise assumptions, [13] shows that, when the size
constraint $B\in\Omega(\log\log n)$ is sufficiently large, there is an
algorithm that achieves $(1-1/e)$ approximation ratio with high probability
(w.h.p.) and also shows some impossibility results for the existence of
randomized algorithms with a good approximation ratio using a polynomial
number of queries w.h.p. The work of [23] shows that good approximation ratios
are attainable (e.g., an approximation ratio of $(1-1/e)$) w.h.p. and/or in
expectation for arbitrary size constraint $B$ using greedy algorithms (with or
without randomization) under different size constraints. Additionally, [23]
considers general matroid constraints and provides an approximation ratio that
depends on the matroid constraints. Another work [22] considers the
approximate function maximization problem (with $f$ that is not necessarily
submodular) under the cardinality constraint and derives several approximation
ratios that depend on the submodularity ratio [8], $\varepsilon$, and $B$
using the greedy algorithm, as well as an algorithm based on the Pareto
Optimization for Subset Selection (POSS) strategy.
A recently published work [20] explores the problem of maximizing approximate
$k$-submodular functions in the streaming context, where the elements of $V$
are scanned at most once, under the total size constraint. The approximate
notion, introduced independently, is that of our $\varepsilon$-AS. However,
[20] aims to optimize $f$ instead of $F$, which is the main focus of our work,
and it does not consider $\varepsilon$-ADR nor individual size constraints.
## 3 Preliminaries
We denote the vector of $k$ empty subsets by ${\bf
0}=(X_{1}=\\{\\},\ldots,X_{k}=\\{\\})$. Without loss of generality, we assume
that the functions $F$ and $f$ are normalized such that
$F(\textbf{0})=f(\textbf{0})=0$. We define $k$-submodular functions as those
that are monotone and have a diminishing returns property in each dimension
[25].
###### Definition 3.1 ($k$-submodular function [25])
A function $f:(k+1)^{V}\rightarrow\mathbb{R}$ is $k$-_submodular_ if and only
if: (a) $\Delta_{u,i}f({\boldsymbol{x}})\geq\Delta_{u,i}f({\boldsymbol{y}})$,
for all ${\boldsymbol{x}},{\boldsymbol{y}}\in(k+1)^{V}$ with
${\boldsymbol{x}}\preceq{\boldsymbol{y}}$, $u\notin\cup_{\ell\in[k]}Y_{\ell}$,
and $i\in[k]$, and (b)
$\Delta_{u,i}f({\boldsymbol{x}})+\Delta_{u,j}f({\boldsymbol{x}})\geq 0$, for
any ${\boldsymbol{x}}\in(k+1)^{V}$, $u\notin\cup_{\ell\in[k]}X_{\ell}$, and
$i,j\in[k]$ with $i\neq j$.
Part (a) of Definition 3.1 is known as the _diminishing returns_ property and
part (b) as _pairwise monotonicity_. We assume that the function is monotone,
and this implies pairwise monotonicity directly. Moreover, $k$-submodular
functions are _orthant submodular_ [25].
For consistency, we use the following notations as used by [21]. Namely, with
a slight abuse of notation, we associate each
$\boldsymbol{x}=(X_{1},\ldots,X_{k})\in(k+1)^{V}$ with
$\boldsymbol{x}\in\\{0,1,\ldots,k\\}^{V}$ where
$X_{i}=\\{e\in{}V\mid\boldsymbol{x}(e)=i\\}$ for $i\in[k]$. We define the
_support_ or _size_ of $\boldsymbol{x}\in(k+1)^{V}$ as
$supp(\boldsymbol{x})=\\{e\in{}V\mid\boldsymbol{x}(e)\neq{0}\\}$. Similarly,
we define $supp_{i}(\boldsymbol{x})=\\{e\in{}V\mid{}\boldsymbol{x}(e)=i\\}$.
In the following, we show that when the function $f$ is $\varepsilon$-ADR, the
function is also $\varepsilon$-AS. The proof is in Appendix A.
###### Theorem 3.1
If $F$ is $\varepsilon$-approximately diminishing returns, then $F$ is
$\varepsilon$-approximately $k$-submodular.
The converse is not true (as shown below). One could attempt to upper bound
$\Delta_{u,i}F(\boldsymbol{x})\leq(1+\varepsilon)f(X_{1},...,X_{i}\cup\\{u\\},...,X_{k})-(1-\varepsilon)f(\boldsymbol{x})=(1+\varepsilon)\Delta_{u,i}f(\boldsymbol{x})+2\varepsilon
f(\boldsymbol{x})$ using the definition of $\varepsilon$-AS for each term.
However, the resultant function does not appear to be $\varepsilon$-ADR.
###### Theorem 3.2
If $F$ is $\varepsilon$-approximately $k$-submodular, then $F$ is not
necessarily $\varepsilon$-approximately diminishing returns.
* Proof.
To prove the statement, we construct a function $F$ that is $\varepsilon$-AS
but not $\varepsilon$-ADR for $k=1$. Let $V=\\{e_{1},e_{2}\\}$. By the
$\varepsilon$-AS definition, we have that
$(1-\varepsilon)f(\boldsymbol{x})\leq
F(\boldsymbol{x})\leq(1+\varepsilon)f(\boldsymbol{x})$ for any
$\boldsymbol{x}\in{(k+1)^{V}}$. We define $F$ partially as follows. Let
$F(\\{e_{1}\\})=(1+\varepsilon)f(\\{e_{1}\\})$ and
$F(\\{e_{1},e_{2}\\})=(1-\varepsilon)f(\\{e_{1},e_{2}\\})$. Consider the
marginal gain of adding $e_{2}$ to the set $\\{e_{1}\\}$. We have that
$\Delta_{e_{2}}F(\\{e_{1}\\})=(1-\varepsilon)f(\\{e_{1},e_{2}\\})-(1+\varepsilon)f(\\{e_{1}\\})<(1-\varepsilon)\left[f\left(\\{e_{1},e_{2}\\}\right)-f\left(\\{e_{1}\\}\right)\right]=(1-\varepsilon)\Delta_{e_{2}}f(\\{e_{1}\\})$,
where the first equality holds due to our construction, the strict inequality
is due to $\varepsilon>0$ and $f(\\{e_{1}\\})>0$, and the last equality is the
definition of the marginal gain. Thus $F$ violates the lower bound in the
$\varepsilon$-ADR definition.
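The construction above can be instantiated with numbers. Taking $f=|supp|$ on $V=\\{e_{1},e_{2}\\}$ with $k=1$ (so every marginal gain of $f$ equals $1$), stretching $F$ at $\\{e_{1}\\}$ and shrinking it at $\\{e_{1},e_{2}\\}$ makes the marginal gain of $F$ fall below the $(1-\varepsilon)$ lower bound that $\varepsilon$-ADR requires; the concrete values of $\varepsilon$ and $f$ below are chosen for illustration only:

```python
eps = 0.1
f_e1, f_e1e2 = 1.0, 2.0          # f = |supp| on V = {e1, e2}, k = 1
F_e1 = (1 + eps) * f_e1          # F({e1})     at the ε-AS upper bound
F_e1e2 = (1 - eps) * f_e1e2      # F({e1, e2}) at the ε-AS lower bound
gain_F = F_e1e2 - F_e1           # Δ_{e2} F({e1})
gain_f = f_e1e2 - f_e1           # Δ_{e2} f({e1})
assert gain_F < (1 - eps) * gain_f   # the ε-ADR lower bound fails
```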
## 4 Approximately $k$-Submodular Function Maximization: $k=1$
When $k=1$, $k$-submodular functions coincide with the standard submodular
functions [19]. Thus, the total size and individual size constraints are the
same. We are interested in the maximization problem with a size constraint
with parameter $B$. For an $\varepsilon$-AS $F$, [14] proves the following
approximation ratio when applying the greedy algorithm [19], which iteratively
adds into $F$ an element that achieves the highest marginal gain.
###### Corollary 4.1 (Theorem 5 [14])
Suppose $F$ is $\varepsilon$-approximately submodular. The greedy algorithm
provides an approximation ratio of
$\frac{1}{1+\frac{4B\varepsilon}{(1-\varepsilon)^{2}}}\left(1-\left(\frac{1-\varepsilon}{1+\varepsilon}\right)^{2B}\left(1-\frac{1}{B}\right)^{B}\right)$
for the size constrained maximization problem.
When $F$ is $\varepsilon$-ADR, we obtain the following approximation ratio
using its connection to the notions of $a$-submodularity, weak submodularity,
and the submodularity ratio (see, e.g., [12, 8, 6]).
###### Theorem 4.1
Suppose $F$ is $\varepsilon$-approximately diminishing returns. The greedy
algorithm provides an approximation ratio of $(1-e^{-a})$ for the size
constrained maximization problem where $a=(1-\varepsilon)/(1+\varepsilon)$.
* Proof.
We note that if $F$ is $\varepsilon\text{-ADR}$, then $F$ is $a$-submodular
[12] via the fact that
$\varepsilon\text{-ADR}:(1-\varepsilon)\Delta_{u,i}f(\boldsymbol{x})\leq{}\Delta_{u,i}F(\boldsymbol{x})\leq{}(1+\varepsilon)\Delta_{u,i}f(\boldsymbol{x}),$
which implies
$\Delta_{u,i}F(\boldsymbol{y})(1-\varepsilon)/(1+\varepsilon)\leq(1-\varepsilon)\Delta_{u,i}f(\boldsymbol{y})\leq(1-\varepsilon)\Delta_{u,i}f(\boldsymbol{x})\leq\Delta_{u,i}F(\boldsymbol{x})$
for any $\boldsymbol{x}\preceq\boldsymbol{y}$, $u\not\in\bigcup_{l\in[k]}Y_{l}$ and
$i\in[k]$ with $a=(1-\varepsilon)/(1+\varepsilon)$ (i.e., a function, say $g$,
is $a$-submodular if and only if $\Delta_{u,i}g(\boldsymbol{x})\geq
a\Delta_{u,i}g(\boldsymbol{y})$ for any $\boldsymbol{x}\preceq\boldsymbol{y}$
[12]). As any $a$-submodular function is weakly submodular (Proposition 8 of
[12]), the greedy algorithm provides an approximation ratio of $(1-e^{-a})$ [8, 6].
## 5 Approximately $k$-Submodular Function Maximization: $k>1$
In this section, we consider the problems of maximizing approximately
$k$-submodular functions under the $\varepsilon$-AS and $\varepsilon$-ADR
definitions subject to the total size,
$\max\limits_{\boldsymbol{x}:|supp(\boldsymbol{x})|\leq{B}}F(\boldsymbol{x})$,
or individual size constraints,
$\max\limits_{\boldsymbol{x}:|supp_{i}(\boldsymbol{x})|\leq{B_{i}},\forall{}i\in[k]}F(\boldsymbol{x})$,
for some function $F$ and $B,B_{1},...,B_{k}\in\mathbb{Z}^{+}$. We show that
the greedy algorithms [21] $k$-Greedy-TS (see Algorithm 1) and $k$-Greedy-IS
(see Algorithm 2) provide (asymptotically tight as $\varepsilon\to 0$ for TS)
approximation ratios to function $F$. Algorithm 1 and Algorithm 2 essentially
add a single element with the highest marginal gain to one of the $k$ subsets
at each iteration without violating the TS and IS constraints, respectively.
Algorithm 1 and Algorithm 2 require $O(knB)$ and
$O(kn\sum_{i\in[k]}B_{i})$ function evaluations, respectively.
Input: an $\varepsilon$-approximately $k$-submodular function
$F:(k+1)^{V}\mapsto\mathbb{R}^{+}$ and $B\in\mathbb{Z}^{+}$.
Output: a vector $\boldsymbol{x}$ with $|supp(\boldsymbol{x})|=B$.
$\boldsymbol{x}\leftarrow\boldsymbol{0}$;
for $j=1$ to $B$ do
$\quad(e,i)\leftarrow{}\operatorname*{arg\,max}_{e\in{}V\setminus{supp(\boldsymbol{x})},i\in[k]}\Delta_{e,i}F(\boldsymbol{x})$;
$\quad\boldsymbol{x}(e)\leftarrow{}i$;
Algorithm 1 $k$-Greedy-TS (Total Size)
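As a sketch, $k$-Greedy-TS translates directly into a few lines of Python, assuming $F$ is given as a callable on label tuples (entry $e$ holds the label of element $e$, with $0$ meaning unassigned); this encoding and the toy $F$ below are our illustration choices, not the paper's C++ implementation:

```python
def k_greedy_ts(F, n, k, B):
    """Greedily add the (element, label) pair with the largest marginal
    gain of F until B elements are assigned (total size constraint)."""
    x = [0] * n
    for _ in range(B):
        base = F(tuple(x))
        best = None
        for e in range(n):
            if x[e] != 0:
                continue                  # element e is already assigned
            for i in range(1, k + 1):
                x[e] = i
                gain = F(tuple(x)) - base  # Δ_{e,i} F(x)
                x[e] = 0
                if best is None or gain > best[0]:
                    best = (gain, e, i)
        _, e, i = best
        x[e] = i
    return tuple(x)

# toy run: F(x) = |supp(x)|, so any B assigned elements are optimal
F = lambda x: sum(1 for v in x if v != 0)
sol = k_greedy_ts(F, n=5, k=2, B=3)
assert F(sol) == 3
```

The nested loop mirrors the arg max in Algorithm 1 and makes the $O(knB)$ evaluation count visible.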
Input: an $\varepsilon$-approximately $k$-submodular function
$F:(k+1)^{V}\mapsto\mathbb{R}^{+}$ and $B_{1},\cdots,B_{k}\in\mathbb{Z}^{+}$.
Output: a vector $\boldsymbol{x}$ with $|supp_{i}(\boldsymbol{x})|=B_{i}$
$\forall i\in[k]$.
$\boldsymbol{x}\leftarrow\boldsymbol{0}$; $I\leftarrow[k]$;
while $I\neq\emptyset$ do
$\quad(e,i)\leftarrow{}\operatorname*{arg\,max}_{e\in{}V\setminus{}supp(\boldsymbol{x}),i\in I}\Delta_{e,i}F(\boldsymbol{x})$;
$\quad\boldsymbol{x}(e)\leftarrow{}i$;
$\quad$if $|supp_{i}(\boldsymbol{x})|=B_{i}$ then $I\leftarrow{}I\setminus\\{i\\}$;
Algorithm 2 $k$-Greedy-IS (Individual Size)
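$k$-Greedy-IS admits an equally short sketch under the same label-tuple convention (entry $e$ holds the label of element $e$, $0$ meaning unassigned); the toy $F$ is for illustration, and the sketch assumes $n\geq\sum_{i}B_{i}$ so every budget can be filled:

```python
def k_greedy_is(F, n, k, budgets):
    """Greedy for individual size constraints: label i leaves the candidate
    set I once its budget budgets[i-1] is exhausted.
    Assumes n >= sum(budgets)."""
    x = [0] * n
    I = set(range(1, k + 1))
    while I:
        base = F(tuple(x))
        best = None
        for e in range(n):
            if x[e] != 0:
                continue                   # element e is already assigned
            for i in sorted(I):            # only labels with budget left
                x[e] = i
                gain = F(tuple(x)) - base  # Δ_{e,i} F(x)
                x[e] = 0
                if best is None or gain > best[0]:
                    best = (gain, e, i)
        _, e, i = best
        x[e] = i
        if sum(1 for v in x if v == i) == budgets[i - 1]:
            I.discard(i)                   # budget of label i is exhausted
    return tuple(x)

F = lambda x: sum(1 for v in x if v != 0)
sol = k_greedy_is(F, n=6, k=2, budgets=[2, 1])
assert sol.count(1) == 2 and sol.count(2) == 1
```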
The proof techniques in this section use similar ideas from [21]. However, the
proofs in [21] do not apply trivially and directly without our carefully
designed lemmas and appropriate derivations.
### 5.1 Maximizing $\varepsilon$-AS and $\varepsilon$-ADR Functions with the
TS Constraint
We consider the problem of maximizing $\varepsilon$-AS or $\varepsilon$-ADR
function $F$ subject to the total size constraint $B$ using Algorithm 1 on
function $F$. We use the following notations as in [21]. Let
$\boldsymbol{x}^{(j)}$ be the solution after the $j$-th iteration of Algorithm
1. For each $j$, we let $(e^{(j)},i^{(j)})\in{}V\times[k]$ be the selected
pair. Let
$\boldsymbol{o}\in\operatorname*{arg\,max}\limits_{\boldsymbol{x}:|supp(\boldsymbol{x})|\leq{B}}F(\boldsymbol{x})$
be an optimal solution.
Our goal is to compare the greedy solution $\boldsymbol{x}$ and an optimal
solution $\boldsymbol{o}$. To begin, we define
$\boldsymbol{o}^{(0)}=\boldsymbol{o},\boldsymbol{o}^{(\frac{1}{2})},\boldsymbol{o}^{(1)},\cdots,\boldsymbol{o}^{(B)}$
iteratively. Let
$S^{(j)}=supp(\boldsymbol{o}^{(j-1)})\setminus{}supp(\boldsymbol{x}^{(j-1)})$
with $\boldsymbol{x}^{(0)}=\boldsymbol{0}$. We set $o^{(j)}$ to be an
arbitrary element in $S^{(j)}$ if $e^{(j)}\notin{}S^{(j)}$, and set
$o^{(j)}=e^{(j)}$ otherwise. We construct $\boldsymbol{o}^{(j-\frac{1}{2})}$
from $\boldsymbol{o}^{(j-1)}$ by assigning $0$ to the $o^{(j)}$-th element.
Next, we define $\boldsymbol{o}^{(j)}$ from $\boldsymbol{o}^{(j-\frac{1}{2})}$
by assigning $i^{(j)}$ to the $e^{(j)}$-th element. As such, we have
$|supp(\boldsymbol{o}^{(j)})|=B$ for all $j\in[B]$ and
$\boldsymbol{o}^{(B)}=\boldsymbol{x}^{(B)}=\boldsymbol{x}$. Finally, we note
that $\boldsymbol{x}^{(j-1)}\preceq\boldsymbol{o}^{(j-\frac{1}{2})}$ for every
$j\in[B]$.
#### 5.1.1 $\varepsilon$-AS Functions with the TS Constraint
###### Theorem 5.1
Suppose $F$ is $\varepsilon$-approximately $k$-submodular. The $k$-Greedy-TS
algorithm provides an approximation ratio of
$\frac{(1-\varepsilon)^{2}}{2(1-\varepsilon+\varepsilon{}B)(1+\varepsilon)}$
for the total size constrained maximization problem.
To prove the above theorem, we first need to prove the following key lemma
(see Appendix B).
###### Lemma 5.1
For any $j\in[B]$,
$\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x}^{(j)})-f(\boldsymbol{x}^{(j-1)})\geq{}f(\boldsymbol{o}^{(j-1)})-f(\boldsymbol{o}^{(j)})$.
* Proof.
[Proof of Theorem 5.1] We have
$\displaystyle
f(\boldsymbol{o})-f(\boldsymbol{x})=\sum_{j\in[B]}\left(f(\boldsymbol{o}^{(j-1)})-f(\boldsymbol{o}^{(j)})\right)$
$\displaystyle\leq\sum_{j\in[B]}\left(\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x}^{(j)})-f(\boldsymbol{x}^{(j-1)})\right)$
$\displaystyle=\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x})+\sum_{j=1}^{B-1}\left(\frac{2\varepsilon}{1-\varepsilon}f(\boldsymbol{x}^{(j)})\right)$
$\displaystyle\leq\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x})+\sum_{j=1}^{B-1}\left(\frac{2\varepsilon}{1-\varepsilon}f(\boldsymbol{x})\right)=\frac{1-\varepsilon+2\varepsilon{}B}{1-\varepsilon}f(\boldsymbol{x}),$
where the first inequality is by Lemma 5.1. Thus,
$f(\boldsymbol{x})\geq\frac{1-\varepsilon}{2-2\varepsilon+2\varepsilon{}B}f(\boldsymbol{o})$.
We have
$F(\boldsymbol{x})\geq\frac{(1-\varepsilon)^{2}}{2(1-\varepsilon+\varepsilon{}B)(1+\varepsilon)}F(\boldsymbol{o})$
after applying the $\varepsilon$-AS definition.
#### 5.1.2 $\varepsilon$-ADR Functions with the TS Constraint
###### Lemma 5.2
For any $j\in[B]$, it holds that
$\frac{1}{1-\varepsilon}\left[F(\boldsymbol{x}^{(j)})-F(\boldsymbol{x}^{(j-1)})\right]\geq{}\frac{1}{1+\varepsilon}\left[F(\boldsymbol{o}^{(j-1)})-F(\boldsymbol{o}^{(j)})\right]$.
* Proof.
See Appendix B.
###### Theorem 5.2
Suppose $F$ is $\varepsilon$-approximately diminishing returns. The
$k$-Greedy-TS algorithm provides an approximation ratio of
$\frac{1-\varepsilon}{2}$ for the total size constrained maximization problem.
* Proof.
We have
$\displaystyle F(\boldsymbol{o})-F(\boldsymbol{x})$
$\displaystyle=\sum_{j\in[B]}\left[F(\boldsymbol{o}^{(j-1)})-F(\boldsymbol{o}^{(j)})\right]$
$\displaystyle\leq{}\frac{1+\varepsilon}{1-\varepsilon}\sum_{j\in[B]}\left[F(\boldsymbol{x}^{(j)})-F(\boldsymbol{x}^{(j-1)})\right]\leq\frac{1+\varepsilon}{1-\varepsilon}F(\boldsymbol{x}),$
where the first inequality is due to Lemma 5.2. Thus, we have
$F(\boldsymbol{x})\geq{}\frac{1-\varepsilon}{2}F(\boldsymbol{o})$.
### 5.2 Maximizing $\varepsilon$-AS and $\varepsilon$-ADR Functions with the
IS Constraints
We consider the problem of maximizing an $\varepsilon$-AS or $\varepsilon$-ADR
function $F$ subject to the individual size constraints $B_{1},...,B_{k}$
using Algorithm 2 on the function $F$. In the individual size constraints
maximization problem, we are given $B_{1},...,B_{k}$ restricting the maximum
number of elements one can select for each subset. We define
$B=\sum_{j\in[k]}B_{j}$. We simply state our main results here (see Appendix
B.1 for key lemmas and proofs).
###### Theorem 5.3
Suppose $F$ is $\varepsilon$-approximately $k$-submodular. The $k$-Greedy-IS
algorithm provides an approximation ratio of
$\frac{(1-\varepsilon)^{2}}{(3-3\varepsilon+2\varepsilon{}B)(1+\varepsilon)}$
for the individual size constrained maximization problem.
###### Theorem 5.4
Suppose $F$ is $\varepsilon$-approximately diminishing returns. The
$k$-Greedy-IS algorithm provides an approximation ratio of
$\frac{1-\varepsilon}{3+\varepsilon}$ for the individual size constrained
maximization problem.
## 6 Improved Greedy Approximation Ratios When $f$ is Known
In this section, we consider the case where $f$ is a known monotone function
that can be constructed directly. We investigate the question of whether we
can use such information to obtain alternative, possibly better, approximation
ratios for maximizing an $\varepsilon$-AS or $\varepsilon$-ADR function $F$
subject to the total size or individual size constraints. We answer the
question affirmatively via the following result (see Appendix C for details).
###### Theorem 6.1
Let $f$ be a $k$-submodular function and $F$ be an $\varepsilon$-approximately
$k$-submodular function that is bounded by $f$. If there is an algorithm that
provides an approximation ratio of $\alpha$ for maximizing $f$ subject to
constraint $\mathbb{X}$, then the same solution yields an approximation ratio
of $\frac{1-\varepsilon}{1+\varepsilon}\alpha$ for maximizing $F$ subject to
constraint $\mathbb{X}$.
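The idea behind Theorem 6.1 is a short sandwiching argument. Writing $\boldsymbol{x}_{f}$ for the algorithm's solution on $f$, $\boldsymbol{o}_{f}$ for an optimal solution for $f$, and $\boldsymbol{o}_{F}$ for an optimal solution for $F$ (all feasible for $\mathbb{X}$; the notation here is ours), the chain below sketches it under the $\varepsilon$-AS bounds:

```latex
F(\boldsymbol{x}_{f})
  \;\geq\; (1-\varepsilon)\, f(\boldsymbol{x}_{f})              % lower ε-AS bound
  \;\geq\; (1-\varepsilon)\,\alpha\, f(\boldsymbol{o}_{f})      % α-guarantee on f
  \;\geq\; (1-\varepsilon)\,\alpha\, f(\boldsymbol{o}_{F})      % o_f is optimal for f
  \;\geq\; \frac{1-\varepsilon}{1+\varepsilon}\,\alpha\, F(\boldsymbol{o}_{F}). % upper ε-AS bound
```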
This theorem implies the following: (1) By applying the greedy algorithm [19]
on a submodular function $f$ subject to a size constraint, we obtain a
solution providing an approximation ratio of
$\frac{1-\varepsilon}{1+\varepsilon}(1-\frac{1}{e})$ to the size constrained
maximization problem of an approximately submodular function $F$. (2) By
applying $k$-Greedy-TS on a $k$-submodular function $f$ subject to a total
size constraint, we obtain a solution providing an approximation ratio of
$\frac{1-\varepsilon}{2(1+\varepsilon)}$ to the total size constrained
maximization problem of an $\varepsilon$-AS function $F$. (3) By applying
$k$-Greedy-IS on a $k$-submodular function $f$ subject to individual size
constraints, we obtain a solution providing an approximation ratio of
$\frac{1-\varepsilon}{3(1+\varepsilon)}$ to the individual size constrained
maximization problem of an $\varepsilon$-AS function $F$. Since any
$\varepsilon$-ADR function $F$ is also $\varepsilon$-AS, these three results
apply to it immediately.
## 7 Experiments
We evaluate the real-world performance of the algorithms for maximizing $F$
directly, or indirectly through the use of a bounding $k$-submodular function
$f$, by applying them to a variant of the $k$-sensor placement problem with IS
constraints and a variant of the $k$-topic influence maximization problem with
the TS constraint. In both problems, we used $\varepsilon$-AS $k$-submodular
functions. Appendix D contains additional experiments with $\varepsilon$-AS
and with $\varepsilon$-ADR $k$-submodular functions, whose results are similar
to those reported here.
### 7.1 Experimental Setup
In our experiments, we consider an $\varepsilon$-AS function $F$ and generate
the value of $F$ according to the bounding $k$-submodular function $f$, which
is given explicitly in our application domains. In particular, for each
$\boldsymbol{x}\in(k+1)^{V}$, the value of $F$ of $\boldsymbol{x}$ should be
generated such that $(1-\varepsilon)f(\boldsymbol{x})\leq
F(\boldsymbol{x})\leq(1+\varepsilon)f(\boldsymbol{x})$. When $f$ is known, we
can compute the value of $f$ for a given $\boldsymbol{x}$ directly and then
generate the value of $F$ according to some predefined generation methods. To
highlight the performance or solution quality of $F$ and $f$ under the greedy
algorithms, we considered the following three types of generation methods. The
functions $F$ and $f$ will be defined later for the appropriate domains.
Adversarial Generation (AG). To stress-test the greedy algorithms applied to
$f$, we identify the worst-case values of $F$ where, when applying these
algorithms, $f$’s solution obtains a better approximation ratio than the
solution generated when applying the greedy algorithms to $F$. We first run a
greedy algorithm on $f$ and obtain its solution $\boldsymbol{x_{f}}$. We let
$F(\boldsymbol{x_{f}})=(1+\varepsilon)f(\boldsymbol{x_{f}})$ which yields
higher weight to $f$’s solution. For the remaining $\boldsymbol{x}$, we let
$F(\boldsymbol{x})=\xi(\boldsymbol{x})\cdot f(\boldsymbol{x})$ where
$\xi(\boldsymbol{x})$ is selected uniformly at random in $[1-\varepsilon,1]$.
Max and Mean Generation (MaxG and MeanG). Our goal is to consider a more
structured error/noise generation setting where each (selected) element
contributes some uncertainty to the value of $F$. In MaxG,
$F(\boldsymbol{x})=\xi(\boldsymbol{x})\cdot f(\boldsymbol{x})$, where
$\xi(\boldsymbol{x})=\max_{x\in{supp(\boldsymbol{x})}}\xi(x)$ and
$\xi(x)\in[1-\varepsilon,1]$. Thus, we weigh $f$ with the maximum value of
noise over the elements of ${\boldsymbol{x}}$. In MeanG,
$\xi({\boldsymbol{x}})=\frac{\sum_{x\in
supp(\boldsymbol{x})}\xi(x)}{|supp(\boldsymbol{x})|}$ and
$\xi(x)\in[1-\varepsilon,1]$. Thus, we weigh $f$ with the expected value of
noise over ${\boldsymbol{x}}$.
Clearly, $F$ is $\varepsilon$-AS. It is also not $k$-submodular under any of the
three generation methods (see Appendix D.1).
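The MaxG and MeanG schemes can be sketched as thin wrappers around $f$: the per-element noise range $[1-\varepsilon,1]$ follows the text, while the callable interface over label tuples and the seeding are implementation choices of this sketch:

```python
import random

def make_noisy(f, n, eps, mode, seed=0):
    """Wrap a k-submodular f into an ε-AS F using per-element noise
    xi(e) drawn uniformly from [1 - eps, 1] (MaxG / MeanG)."""
    rng = random.Random(seed)
    xi = [rng.uniform(1 - eps, 1.0) for _ in range(n)]

    def F(x):
        supp = [e for e in range(n) if x[e] != 0]
        if not supp:
            return 0.0                       # keep F(0) = f(0) = 0
        if mode == "max":
            w = max(xi[e] for e in supp)     # MaxG: maximum noise on supp(x)
        else:
            w = sum(xi[e] for e in supp) / len(supp)  # MeanG: mean noise
        return w * f(x)

    return F

f = lambda x: sum(1 for v in x if v != 0)
F = make_noisy(f, n=5, eps=0.2, mode="max")
x = (1, 0, 2, 0, 0)
# since the weight lies in [1 - eps, 1], the ε-AS bounds hold
assert (1 - 0.2) * f(x) <= F(x) <= (1 + 0.2) * f(x)
```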
We implemented $k$-Greedy-TS, $k$-Greedy-IS, as well as baselines (details
below) in C++ and executed them on an Intel Cascade Lake CPU @ 2.6 GHz with 48 GB
RAM. These algorithms employed the lazy evaluation technique [18] (i.e., we
maintain an upper bound on the gain of inserting each element in each
dimension w.r.t. $F$ or $f$, to efficiently select the element in each
iteration). We report results of the quality of these algorithms with respect
to function $F$ (specifically, the mean and standard deviation of results for
$10$ different runs of value generations; each with a different seed). We do
not report runtime because the choice of function $F$ or $f$ did not
substantially affect the runtime of the algorithms (for efficiency results of
the algorithms see [21]). Our source code and the datasets that we used are
available at: https://github.com/55199789/approx_kSubmodular.git.
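The lazy evaluation technique [18] can be sketched as follows for a total size budget $B$ (a simplified Python illustration, not our C++ implementation; the interface is ours, with `x` a dict mapping elements to dimensions and `F` monotone):

```python
import heapq

def lazy_greedy_ts(V, k, B, F):
    """Greedy with lazy evaluation [18] under a total size constraint B:
    keep an upper bound on the gain of each (element, dimension) pair in a
    max-heap and re-evaluate only the pair currently on top."""
    x = {e: 0 for e in V}                          # 0 = not selected
    val = F(x)
    # heap of (-upper_bound_on_gain, element, dimension); +inf bounds force
    # one fresh evaluation per pair before it can be selected
    heap = [(-float("inf"), e, i) for e in V for i in range(1, k + 1)]
    heapq.heapify(heap)
    for _ in range(min(B, len(V))):
        while heap:
            neg_ub, e, i = heapq.heappop(heap)
            if x[e] != 0:
                continue                           # e already assigned
            x[e] = i
            gain = F(x) - val                      # exact marginal gain
            x[e] = 0
            # under monotone submodularity gains only shrink, so if the
            # fresh gain still beats the best remaining bound, take (e, i)
            if not heap or gain >= -heap[0][0]:
                x[e] = i
                val += gain
                break
            heapq.heappush(heap, (-gain, e, i))
    return x
```

For approximately submodular $F$ the staleness argument is only approximate, so this is a heuristic speed-up rather than an exact equivalence to plain greedy.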
### 7.2 Sensor Placement with Approximately $k$-Submodular Functions and the
IS Constraints
The objective is to install a sufficiently large number of sensors of $k$
types into locations, so that each sensor is installed in a single location
and all installed sensors together collect measurements of low uncertainty. We
first define the entropy of a vector ${\boldsymbol{x}}$ of sensors, following
[21]. Let $\Omega=\\{X^{u}_{i}\\}_{i\in[k],u\in V}$ be the set of random
variables for each sensor type $i\in[k]$ and each location $u\in V$. Each
$X^{u}_{i}$ is the random variable representing the measurement collected from
a sensor of type $i$ that is installed at location $u$. Thus,
$X_{i}=\\{X^{u}_{i}\\}\subseteq\Omega$ is the set representing the
measurements for all locations at which a sensor of type $i\in[k]$ is
installed. The entropy of a vector
${\boldsymbol{x}}=(X_{1},\ldots,X_{k})\in(k+1)^{V}$ is given by the monotone
$k$-submodular function
$H({\boldsymbol{x}})=H(\cup_{i\in[k]}X_{i})=-\sum_{\mathbf{s}\in\text{dom}~{}\cup_{i\in[k]}X_{i}}Pr[\mathbf{s}]\cdot\log
Pr[\mathbf{s}]$, where $\text{dom}~{}\cup_{i\in[k]}X_{i}$ is the domain of
$\cup_{i\in[k]}X_{i}$ [21].
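An empirical version of this entropy can be computed from discretized measurement samples as sketched below (illustrative; `samples` is an assumed table of joint readings, not part of the formal definition above):

```python
import math
from collections import Counter

def entropy_of_vector(samples, x):
    """Empirical entropy H(x) of the joint measurements of installed sensors.
    `samples` is a list of dicts mapping (type, location) -> discretized value;
    `x` maps each location to an installed sensor type in {0, ..., k} (0 = none)."""
    installed = [(i, u) for u, i in x.items() if i != 0]
    if not installed:
        return 0.0
    counts = Counter(tuple(s[key] for key in installed) for s in samples)
    n = len(samples)
    # H = -sum_s Pr[s] log Pr[s] over the empirical joint distribution
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```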
The work of [21] considered the problem of maximizing $H({\boldsymbol{x}})$
subject to individual size constraints for ${\boldsymbol{x}}$, using
$H({\boldsymbol{x}})$ to capture the uncertainty of measurements. Sensor
measurements often have random noise due to hardware issues, environmental
effects, and imprecision in measurement [10], and weighted entropy functions
are used to capture such errors [2]. As such, we consider a noisy variant of
the problem of [21], where the uncertainty of measurements is captured by a
weighted function $\tilde{H}({\boldsymbol{x}})=\xi(\boldsymbol{x})\cdot
H({\boldsymbol{x}})$ and $\xi()$ is generated by our AG, MaxG, or MeanG
generation method.
Figure 1: $\tilde{H}$ in AG setting vs: (a) number of dimensions $k$, (b)
individual size threshold $b$, (c,d) $\varepsilon$.
Figure 2: $\tilde{H}$ in MeanG setting vs: (a) number of dimensions $k$, (b)
individual size threshold $b$, (c,d) $\varepsilon$.
Figure 3: $\tilde{H}$ in MaxG setting vs: (a) number of dimensions $k$, (b)
individual size threshold $b$, (c,d) $\varepsilon$.
Algorithms. We first apply $k$-Greedy-IS using $H$ as $f$ and then using
$\tilde{H}$ as $F$, and we compare their solutions in terms of $\tilde{H}$. We
refer to these algorithms as $Gr$-$H$ and $Gr$-$\tilde{H}$, respectively. Since
no existing algorithm addresses our problem, we compare against a baseline, Random
(referred to as $R$), which outputs as a solution a vector ${\boldsymbol{x}}$
with $B_{i}$ randomly selected elements in each dimension. Random was also
used in [21]. In our experiments, each $B_{i}$ has the same value $b$. We
configured the algorithms with $k\in\\{1,2,3\\}$,
$\varepsilon\in\\{0,0.1,\ldots,1\\}$, and $B_{i}\in\\{1,3,\ldots,15\\}$.
Unless otherwise stated, $k=1$, $\varepsilon=0.3$, and $b=9$. Other parameter
settings showed similar behaviors.
Dataset. We used the Intel Lab dataset which is available at
http://db.csail.mit.edu/labdata/labdata.html and is preprocessed as in [21].
The dataset is a log of approximately 2.3 million values that are collected
from 54 sensors installed in 54 locations in the Intel Berkeley research lab.
There are three types of sensors. Sensors of type 1, 2, and 3 collect
temperature, humidity, and light values, respectively. $F$ or $f$ take as
argument a vector: (1) $\boldsymbol{x}=(X_{1})$ of sensors of type 1, when
$k=1$; (2) $\boldsymbol{x}=(X_{1},X_{2})$ of sensors of type 1 and of type 2,
when $k=2$, or (3) $\boldsymbol{x}=(X_{1},X_{2},X_{3})$ of sensors of type 1
and of type 2 and of type 3, when $k=3$.
Figure 4: $\tilde{I}$ for varying $k$ in: (a) AG, (b) MeanG, and (c) MaxG
setting.
Results. Fig. 1 shows that, in the AG setting, $Gr$-$H$ substantially
outperformed $Gr$-$\tilde{H}$ across all $k$, $b$, and $\varepsilon$ values.
This can be explained by the fact that, in the AG setting, $Gr$-$H$ is favored
by the construction of $F$, because $F$ is based on its solution
$\boldsymbol{x_{f}}$. Both $Gr$-$H$ and $Gr$-$\tilde{H}$ outperformed $R$ (the
latter by a smaller margin due to the adversarial noise construction). This
happened even when $\varepsilon=1$, the case in which they do not offer
approximation guarantees. The performance of $Gr$-$\tilde{H}$ suffers as
$\varepsilon$ goes to 1, due to the uniform range $[1-\varepsilon,1]$ used to
generate $F$’s values.
Figs. 2 and 3 show that, in the MeanG and MaxG settings respectively, $Gr$-$\tilde{H}$
outperformed $Gr$-$H$ in almost all tested cases on average, and especially
for larger $b$ and $\varepsilon$. This is because the noise is more structured
and suggests that $Gr$-$\tilde{H}$ may be a practical algorithm (e.g., in
applications where the maximum or expected noise of sensors is taken as an
aggregate of the noise of sensors). We also observe that $Gr$-$\tilde{H}$ has
a larger performance gain and less variability (low standard deviation bars)
over $Gr$-$H$ under MeanG than under MaxG.
### 7.3 Influence Maximization with Approximately $k$-Submodular Functions
and the TS Constraint
The objective is to select a sufficiently large number of users in a social
network who would influence the largest expected number of users in the social
network through word-of-mouth effects. The selected users are called _seeds_.
To measure influence, we adapt the $k$-IC influence diffusion model proposed
in [21]. In the $k$-IC model, $k$ different topics spread through a social
network independently. At $t=0$, there is a vector
$\boldsymbol{x}=(X_{1},\ldots,X_{k})$ of seeds who are influenced. Each $u$ in
$X_{i}$, $i\in[k]$, is influenced about topic $i$ and has a single chance to
influence its out-neighbor $v$, if $v$ is not already influenced. The node $v$
is influenced at $t=1$ by $u$ on topic $i$ with probability $p_{u,v}^{i}$.
Similarly, $v$ may be influenced at $t=1$ on topic $i$ by any of its other
in-neighbors that are seeds. When $v$ becomes influenced, it stays influenced and has a
single chance to influence each of its out-neighbors that is not already
influenced. The process proceeds until no new nodes are influenced. The
expected number of influenced users (_spread_) is
$I(\boldsymbol{x})=\mathrm{E}[|\cup_{i\in[k]}A_{i}(X_{i})|]$, where
$A_{i}(X_{i})$ is a random variable representing the set of users influenced
about topic $i$ through $X_{i}$.
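The spread $I(\boldsymbol{x})$ under the $k$-IC model can be estimated by Monte Carlo simulation, as sketched below (an illustrative implementation under our own encoding assumptions: `graph[u]` lists out-neighbors, `p[(u, v, i)]` is the probability that $u$ influences $v$ on topic $i$, and `seeds[i]` is the seed set $X_i$):

```python
import random

def kic_spread(graph, p, seeds, trials=1000, seed=0):
    """Monte Carlo estimate of the k-IC spread I(x): topics cascade
    independently, and a node counts once even if reached on several topics."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        influenced = set()
        for i, X_i in seeds.items():
            active, frontier = set(X_i), list(X_i)
            while frontier:
                u = frontier.pop()
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p.get((u, v, i), 0.0):
                        active.add(v)       # v gets its single chance later
                        frontier.append(v)
            influenced |= active            # union over topics
        total += len(influenced)
    return total / trials
```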
Our adapted $k$-IC model differs from the $k$-IC model in that we measure
spread by $\tilde{I}(\boldsymbol{x})=\xi(\boldsymbol{x})\cdot
I({\boldsymbol{x}})$, instead of $I(\boldsymbol{x})$, where $\xi()$ is the
noise function in the AG, MaxG, or MeanG setting. The noise models empirical
evidence that the spread may be non-submodular and difficult to quantify
accurately. This happens because users perceive information diffused by many in-
neighbors as already known and less interesting, in which case the noise depends
on the data [11]. It also happens because the combined influence of subsets of
influenced in-neighbors of a node affects the influence probability of the node
and hence the spread, in which case the noise depends on the subset of
influenced in-neighbors of a user [26].
$\tilde{I}({\boldsymbol{x}})$ is an $\varepsilon$-AS function, since
$(1-\varepsilon)\cdot
I({\boldsymbol{x}})\leq\tilde{I}({\boldsymbol{x}})\leq(1+\varepsilon)\cdot
I({\boldsymbol{x}})$ and $I(\boldsymbol{x})$ is monotone $k$-submodular [21].
Algorithms. We first apply $k$-Greedy-TS using $I$ as $f$ and then using
$\tilde{I}$ as $F$, and we compare their solutions in terms of $\tilde{I}$. We
refer to them as $Gr$-$I$ and $Gr$-$\tilde{I}$. We also evaluate $Gr$-$I$ and
$Gr$-$\tilde{I}$ against two baselines, also used in [21]: (1) Random (R),
which outputs a random vector ${\boldsymbol{x}}$ of size $B$, and (2) Degree
(D), which sorts all nodes in decreasing order based on their out-degree and
then assigns each of them to a random topic (dimension). We simulated the
influence process based on Monte Carlo simulation as in [21]. We configured
the algorithms with $k\in\\{2,4,\ldots,10\\}$,
$\varepsilon\in\\{0,0.1,\ldots,1\\}$, and $B\in\\{5,10,\ldots,100\\}$. By
default, $k=8$, $\varepsilon=0.3$, and $B=75$.
Dataset. We used the Digg social news dataset that is available at
http://www.isi.edu/~lerman/downloads/digg2009.html, following the setup of
[21]. The dataset consists of a graph and a log of user votes for stories.
Each node represents a user and each edge $(u,v)$ represents that user $u$ can
watch the activity of node $v$. The edge probabilities $p_{u,v}^{i}$ for each
edge $(u,v)$ and topic $i$ were obtained from [21].
Results. The results in Fig. 4 are similar to those of Section 7.2. That is,
in the AG setting $Gr$-$I$ outperformed $Gr$-$\tilde{I}$ (see Fig. 4(a)). This
is because $F$ is based on the solution of $Gr$-$I$ and thus this algorithm is
favored over $Gr$-$\tilde{I}$. On the other hand, in the MaxG and MeanG
setting $Gr$-$\tilde{I}$ outperformed $Gr$-$I$ in all tested cases (see Figs.
4(b) and 4(c)). This is because the noise is more structured and suggests that
$Gr$-$\tilde{I}$ may be a practical algorithm, when the noise is structured
and not adversarially chosen. In all tested cases, as expected, both $Gr$-$I$
and $Gr$-$\tilde{I}$ outperformed $R$ and $D$. We observed similar trends,
when we varied the parameters $B$ and $\varepsilon$ (see Appendix D.2).
## 8 Conclusion and Discussion
In this paper, we show that simple greedy algorithms can obtain reasonable
approximation ratios for an $\varepsilon$-AS or $\varepsilon$-ADR function $F$
subject to total size and individual size constraints. The analysis (i.e.,
proofs of Theorem 5.3 and Theorem 5.4) for the individual size constraint can
be extended to capture a _group_ size constraint. Let $G_{1},...,G_{m}$ be a
partition of $\\{1,...,k\\}$ and $B_{1},...,B_{m}$ be some positive integer
numbers. The maximization problem of an $\varepsilon$-AS or $\varepsilon$-ADR
function $F$ subject to the group size constraint is defined to be
$\max_{\boldsymbol{x}\in(k+1)^{V}:\sum_{j\in
G_{i}}|supp_{j}(\boldsymbol{x})|\leq B_{i}\;\forall
i\in[m]}F(\boldsymbol{x})$, where the total size of all subsets within a
group $G_{i}$ is at most $B_{i}$. The same approximation ratios from Theorem
5.3 and Theorem 5.4 can be obtained for $\varepsilon$-AS and $\varepsilon$-ADR
function $F$, respectively, using a modified greedy algorithm (similar to
Algorithm 2) where the condition in line 5 can be changed to account for the
group constraint.
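The modified feasibility condition for the group size constraint can be sketched as follows (an illustrative check, with our own encoding: `x` maps elements to dimensions, `groups[i]` is $G_i$, and `budgets[i]` is $B_i$):

```python
def satisfies_group_constraint(x, groups, budgets):
    """Check the group size constraint: for each group G_i of dimensions,
    the number of elements assigned to dimensions in G_i is at most B_i."""
    for G, B in zip(groups, budgets):
        if sum(1 for d in x.values() if d in G) > B:
            return False
    return True
```

In a modified greedy algorithm, a candidate pair $(e,i)$ is considered only if assigning $e$ to dimension $i$ keeps this check true.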
Our definitions of $\varepsilon$-AS and $\varepsilon$-ADR use the lower-bound
constant $(1-\varepsilon)$ and the upper-bound constant $(1+\varepsilon)$.
Similar approximation ratio results can be derived when replacing
$(1-\varepsilon)$ and $(1+\varepsilon)$ with some lower-bound constant $a$ and
upper-bound constant $b$, respectively, where $0<a\leq b$. The approximation
ratios will depend on $a$ and $b$ and can be obtained following the same proof ideas.
Finally, it would be interesting to evaluate our algorithms using datasets
that are inherently noisy.
## References
* [1] Vitaly Feldman, Pravesh Kothari, and Jan Vondrák. Representation, approximation and learning of submodular functions using low-rank decision trees. In COLT, volume 30, pages 711–740, 2013.
* [2] Darryl K. Ahner. A normalized weighted entropy measure for sensor allocation within simulations. In WSC, pages 1753–1763, 2009.
* [3] Ashwinkumar Badanidiyuru, Shahar Dobzinski, Hu Fu, Robert Kleinberg, Noam Nisan, and Tim Roughgarden. Sketching valuation functions. In SODA, pages 1025–1035, 2012.
* [4] Maria Florina Balcan, Florin Constantin, Satoru Iwata, and Lei Wang. Learning valuation functions. In COLT, volume 23, pages 4.1–4.24, 2012.
* [5] Maria-Florina Balcan and Nicholas J.A. Harvey. Learning submodular functions. In STOC, pages 793–802, 2011.
* [6] Andrew An Bian, Joachim M. Buhmann, Andreas Krause, and Sebastian Tschiatschek. Guarantees for greedy maximization of non-submodular functions with applications. In ICML, volume 70, pages 498–507, 2017.
* [7] Gerard Cornuejols, Marshall L. Fisher, and George L. Nemhauser. Location of bank accounts to optimize float: An analytic study of exact and approximate algorithms. Management Science, 23(8):789–810, 1977.
* [8] Abhimanyu Das and David Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. In ICML, pages 1057–1064, 2011.
* [9] Ehsan Elhamifar. Sequential facility location: Approximate submodularity and greedy algorithm. In ICML, pages 1784–1793, 2019.
* [10] Eiman Elnahrawy and Badri Nath. Cleaning and querying noisy sensors. In WSNA, pages 78–87, 2003.
* [11] Shanshan Feng, Xuefeng Chen, Gao Cong, Yifeng Zeng, Yeow Meng Chee, and Yanping Xiang. Influence maximization with novelty decay in social networks. In AAAI, pages 37–43, 2014.
* [12] Marwa El Halabi, Francis Bach, and Volkan Cevher. Combinatorial penalties: Which structures are preserved by convex relaxations? In AISTATS, volume 84, pages 1551–1560, 2018.
* [13] Avinatan Hassidim and Yaron Singer. Submodular optimization under noise. In COLT, volume 65, pages 1069–1122, 2017.
* [14] Thibaut Horel and Yaron Singer. Maximization of approximately submodular functions. In NIPS, pages 3045–3053. Curran Associates, Inc., 2016.
* [15] Anna Huber and Vladimir Kolmogorov. Towards minimizing k-submodular functions. In COCOA, pages 451–462, 2012.
* [16] Satoru Iwata, Shin-ichi Tanigawa, and Yuichi Yoshida. Improved approximation algorithms for k-submodular function maximization. In SODA, pages 404–413, 2016.
* [17] Qiang Li, Wei Chen, Xiaoming Sun, and Jialin Zhang. Influence maximization with $\varepsilon$-almost submodular threshold functions. In NIPS, pages 3801–3811. 2017.
* [18] Michel Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques, pages 234–243, Berlin, Heidelberg, 1978.
* [19] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265–294, 1978.
* [20] Lan Nguyen and My T. Thai. Streaming k-submodular maximization under noise subject to size constraint. In ICML, volume 119 of PMLR, pages 7338–7347. PMLR, 2020.
* [21] Naoto Ohsaka and Yuichi Yoshida. Monotone k-submodular function maximization with size constraints. In NIPS, pages 694–702, 2015.
* [22] Chao Qian, Jing-Cheng Shi, Yang Yu, Ke Tang, and Zhi-Hua Zhou. Subset selection under noise. In NIPS, pages 3560–3570. 2017.
* [23] Yaron Singer and Avinatan Hassidim. Optimization for approximate submodularity. In NeurIPS, pages 396–407. 2018.
* [24] Ajit Singh, Andrew Guillory, and Jeff Bilmes. On bisubmodular maximization. volume 22 of PMLR, pages 1055–1063, 2012.
* [25] Justin Ward and Stanislav Živný. Maximizing k-submodular functions and beyond. ACM Trans. Algorithms, 12(4):47:1–47:26, 2016.
* [26] Jianming Zhu, Junlei Zhu, Smita Ghosh, Weili Wu, and Jing Yuan. Social influence maximization in hypergraph in social networks. IEEE TNSE, 6(4):801–811, 2019.
## A Proofs in Section 3
* Proof.
[Proof of Theorem 3.1] To prove the theorem, it is sufficient to show that the
claim holds for any $\boldsymbol{x}\in(k+1)^{V}$. For any
$\boldsymbol{x}=(X_{1},...,X_{k})\in(k+1)^{V}$, for each $i\in[k]$, we order
the elements of $X_{i}$ such that
$X_{i}=\\{e_{i1},e_{i2},...,e_{i|X_{i}|}\\}$. It follows that
$\displaystyle F(\boldsymbol{x})$ $\displaystyle=F(X_{1},\ldots,X_{k})$
$\displaystyle=\sum_{i\in[k]}\sum_{j=1}^{|X_{i}|}\Delta_{e_{ij},i}F(X_{1},...,X_{i}\setminus\\{e_{i1},...,e_{ij}\\},...,X_{k}),$
where the equality is from adding/subtracting common terms. Since $F$ is
$\varepsilon$-ADR, we have
$\displaystyle(1-\varepsilon)f(\boldsymbol{x})$
$\displaystyle=(1-\varepsilon)\cdot\sum_{i\in[k]}\sum_{j=1}^{|X_{i}|}\Delta_{e_{ij},i}f(X_{1},...,X_{i}\setminus\\{e_{i1},...,e_{ij}\\},...,X_{k})$
$\displaystyle\leq F(\boldsymbol{x})$
$\displaystyle\leq(1+\varepsilon)\cdot\sum_{i\in[k]}\sum_{j=1}^{|X_{i}|}\Delta_{e_{ij},i}f(X_{1},...,X_{i}\setminus\\{e_{i1},...,e_{ij}\\},...,X_{k})$
$\displaystyle\leq(1+\varepsilon)f(\boldsymbol{x}),$
where the inequality is due to the application of $\varepsilon$-ADR definition
to each summation term. Thus, we have shown that $F$ is $\varepsilon$-AS under
the same $k$-submodular function $f$.
## B Proofs in Section 5
* Proof.
[Proof of Lemma 5.1] First note that
$\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})\geq{}\Delta_{o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)})}F(\boldsymbol{x}^{(j-1)})$
as $(e^{(j)},i^{(j)})$ is selected by the greedy algorithm, which must provide
as much marginal gain as $(o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)}))$. By using
the definition of $\varepsilon$-AS on the above inequality, we have that
$(1+\varepsilon)f(\boldsymbol{x}^{(j)})\geq(1-\varepsilon)f\left(\left(X^{(j-1)}_{1},\cdots,X^{(j-1)}_{\boldsymbol{o}^{(j-1)}(o^{(j)})}\cup\\{o^{(j)}\\},\cdots,X^{(j-1)}_{k}\right)\right)$.
By submodularity and $\boldsymbol{x}^{(j-1)}\preceq\boldsymbol{o}^{(j-1)}$, we
have that
$\Delta_{o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)})}f(\boldsymbol{x}^{(j-1)})\geq{}\Delta_{o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)})}f(\boldsymbol{o}^{(j-\frac{1}{2})})=f(\boldsymbol{o}^{(j-1)})-f(\boldsymbol{o}^{(j-\frac{1}{2})})\geq
f(\boldsymbol{o}^{(j-1)})-f(\boldsymbol{o}^{(j)})$. Our result follows
immediately after combining the above inequalities.
* Proof.
[Proof of Lemma 5.2] First note that
$\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})\geq{}\Delta_{o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)})}F(\boldsymbol{x}^{(j-1)})$
as $(e^{(j)},i^{(j)})$ is selected by the greedy algorithm, which must provide
as much marginal gain as $(o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)}))$. By
submodularity and $\boldsymbol{x}^{(j-1)}\preceq\boldsymbol{o}^{(j-1)}$, we
have
$\Delta_{o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)})}f(\boldsymbol{x}^{(j-1)})\geq\Delta_{o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)})}f(\boldsymbol{o}^{(j-\frac{1}{2})})\geq\frac{1}{1+\varepsilon}\Delta_{o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)})}F(\boldsymbol{o}^{(j-\frac{1}{2})})$
(the last inequality is by the $\varepsilon$-ADR definition) and
$\Delta_{o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)})}F(\boldsymbol{o}^{(j-\frac{1}{2})})-\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})})=F(\boldsymbol{o}^{(j-1)})-F(\boldsymbol{o}^{(j)})$.
It follows that
$\frac{1}{1-\varepsilon}\Delta_{o^{(j)},\boldsymbol{o}^{(j-1)}(o^{(j)})}F(\boldsymbol{x}^{(j-1)})\geq{}\frac{1}{1+\varepsilon}\left[F(\boldsymbol{o}^{(j-1)})-F(\boldsymbol{o}^{(j)})\right]$
by the definition of $\varepsilon$-ADR. Our claim follows immediately from the
last inequalities.
### B.1 Maximizing $\varepsilon$-AS and $\varepsilon$-ADR Functions with the
Individual Size Constraints
In this subsection, we consider the problem of maximizing $\varepsilon$-AS or
$\varepsilon$-ADR function $F$ subject to the individual size constraint
$B_{1},...,B_{k}$ using Algorithm 2 on the function $F$. Recall that in the
individual size constraint maximization problem, we are given
$B_{1},...,B_{k}$ restricting the maximum number of elements one can select
for each subset. We define $B=\sum_{j\in[k]}B_{j}$. We use the following
notations as in [21] and the same notations $e^{(j)}$, $i^{(j)}$, and
$\boldsymbol{x}^{(j)}$ from Section 5.1.
As before, we iteratively define
$\boldsymbol{o}^{(0)}=\boldsymbol{o},\boldsymbol{o}^{(1)},\cdots,\boldsymbol{o}^{(B)}$
as follows. For each $j\in[B]$, we let
$S^{(j)}_{i}=supp_{i}(\boldsymbol{o}^{(j-1)})\setminus{}supp_{i}(\boldsymbol{x}^{(j-1)})$.
We consider the following two cases.
* C1:
Suppose there exists $i^{\prime}\neq i^{(j)}$ such that
$e^{(j)}\in{}S_{i^{\prime}}^{(j)}$. In this case, we set $o^{(j)}$ to be an
arbitrary element in $S_{i^{(j)}}^{(j)}$. We construct
$\boldsymbol{o}^{(j-\frac{1}{2})}$ from $\boldsymbol{o}^{(j-1)}$ by assigning
$0$ to the $e^{(j)}$-th element and the $o^{(j)}$-th element. Then we
construct $\boldsymbol{o}^{(j)}$ from $\boldsymbol{o}^{(j-\frac{1}{2})}$ by
assigning $i^{(j)}$ to the $e^{(j)}$-th element and $i^{\prime}$ to the
$o^{(j)}$-th element. We may use $\boldsymbol{o}^{(j-\frac{1}{2})}_{(e,i)}$ to
denote $\boldsymbol{o}^{(j-\frac{1}{2})}$ with $i$ assigned to the $e$-th
element.
* C2:
Suppose, for any $i^{\prime}\neq i^{(j)}$, we have
$e^{(j)}\notin{}S_{i^{\prime}}^{(j)}$. In this case, we let $o^{(j)}=e^{(j)}$
if $e^{(j)}\in S_{i^{(j)}}^{(j)}$, and let $o^{(j)}$ be an arbitrary element
in $S_{i^{(j)}}^{(j)}$ otherwise. We construct
$\boldsymbol{o}^{(j-\frac{1}{2})}$ from $\boldsymbol{o}^{(j-1)}$ by assigning
$0$ to the $o^{(j)}$-th element. We then construct $\boldsymbol{o}^{(j)}$ from
$\boldsymbol{o}^{(j-\frac{1}{2})}$ by assigning $i^{(j)}$ to the $e^{(j)}$-th
element.
By construction, we have $|supp_{i}(\boldsymbol{o}^{(j)})|=B_{i}$ for each
$i\in[k]$ and $j\in\\{0,1,\cdots,B\\}$. Moreover,
$\boldsymbol{x}^{(j-1)}\preceq\boldsymbol{o}^{(j-\frac{1}{2})}$ for each
$j\in[B]$.
We first consider $\varepsilon$-AS Functions with the Individual Size
Constraints in Section B.1.1. Then, we consider $\varepsilon$-ADR functions
with the Individual Size constraints in Section B.1.2.
#### B.1.1 $\varepsilon$-AS Functions with the Individual Size Constraints
###### Lemma B.1
For any $j\in[B]$,
$2\left[\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x}^{(j)})-f(\boldsymbol{x}^{(j-1)})\right]\geq{}f(\boldsymbol{o}^{(j-1)})-f(\boldsymbol{o}^{(j)})$.
* Proof.
[Proof of Lemma B.1] The arguments for Case 1 and Case 2 are similar. We begin
with Case 1.
Case 1. First note that
$\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})\geq{}\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})$
and
$\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})\geq{}\Delta_{e^{(j)},i^{\prime}}F(\boldsymbol{x}^{(j-1)})$
because $(e^{(j)},i^{(j)})$ is selected by the greedy algorithm.
It follows that $(1+\varepsilon)f(\boldsymbol{x}^{(j)})\geq\\\
(1-\varepsilon)f\left(\left(X^{(j-1)}_{1},\cdots,X^{(j-1)}_{i^{(j)}}\cup\\{o^{(j)}\\},\cdots,X^{(j-1)}_{k}\right)\right)$
and $(1+\varepsilon)f(\boldsymbol{x}^{(j)})\geq\\\
(1-\varepsilon)f\left(\left(X^{(j-1)}_{1},\cdots,X^{(j-1)}_{i^{\prime}}\cup\\{e^{(j)}\\},\cdots,X^{(j-1)}_{k}\right)\right)$
by using the definition of approximate submodularity. Since
$\boldsymbol{x}^{(j-1)}\preceq\boldsymbol{o}^{(j-\frac{1}{2})}\preceq\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})}$,
we have that
$\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{x}^{(j-1)})\geq\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})})$,
$\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{x}^{(j-1)})\geq\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})})$,
and
$\Delta_{e^{(j)},i^{\prime}}f(\boldsymbol{x}^{(j-1)})\geq\Delta_{e^{(j)},i^{\prime}}f(\boldsymbol{o}^{(j-\frac{1}{2})})$
from orthant submodularity. From the above inequalities, we have that
$\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x}^{(j)})-f(\boldsymbol{x}^{(j-1)})$
is greater than or equal to
$\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})})$,
$\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})})$,
and $\Delta_{e^{(j)},i^{\prime}}f(\boldsymbol{o}^{(j-\frac{1}{2})})$. As a
result, we have
$\displaystyle f(\boldsymbol{o}^{(j-1)})-f(\boldsymbol{o}^{(j)})$
$\displaystyle=\Delta_{e^{(j)},i^{\prime}}f(\boldsymbol{o}^{(j-\frac{1}{2})})-\Delta_{o^{(j)},i^{\prime}}f(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{(j)})})$
$\displaystyle+\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})})-\Delta_{e^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})})$
$\displaystyle\leq\Delta_{e^{(j)},i^{\prime}}f(\boldsymbol{o}^{(j-\frac{1}{2})})+\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})})$
$\displaystyle\leq
2\left[\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x}^{(j)})-f(\boldsymbol{x}^{(j-1)})\right].$
Case 2. The argument follows similarly as Case 1 where one can show
$\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x}^{(j)})-f(\boldsymbol{x}^{(j-1)})$
is greater than or equal to
$\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})})$. Therefore, we
have
$\displaystyle f(\boldsymbol{o}^{(j-1)})-f(\boldsymbol{o}^{(j)})$
$\displaystyle=\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})})-\Delta_{e^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})})$
$\displaystyle\leq\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})})$
$\displaystyle\leq
2\left[\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x}^{(j)})-f(\boldsymbol{x}^{(j-1)})\right].$
* Proof.
[Proof of Theorem 5.3] We have
$\displaystyle f(\boldsymbol{o})-f(\boldsymbol{x})$
$\displaystyle=\sum_{j\in[B]}\left(f(\boldsymbol{o}^{(j-1)})-f(\boldsymbol{o}^{(j)})\right)$
$\displaystyle\leq
2\sum_{j\in[B]}\left(\frac{1+\varepsilon}{1-\varepsilon}f(\boldsymbol{x}^{(j)})-f(\boldsymbol{x}^{(j-1)})\right)
$\displaystyle\leq\frac{2-2\varepsilon+2\varepsilon{}B}{1-\varepsilon}f(\boldsymbol{x}),$
where the first inequality is due to Lemma B.1. Hence, we have
$F(\boldsymbol{x})\geq\frac{(1-\varepsilon)^{2}}{(3-3\varepsilon+2\varepsilon{}B)(1+\varepsilon)}F(\boldsymbol{o})$.
#### B.1.2 $\varepsilon$-ADR Functions with the Individual Size Constraints
###### Lemma B.2
For any $j\in[B]$,
$2\left[F(\boldsymbol{x}^{(j)})-F(\boldsymbol{x}^{(j-1)})\right]\geq\frac{1-\varepsilon}{1+\varepsilon}\left[F(\boldsymbol{o}^{(j-1)})-F(\boldsymbol{o}^{(j)})\right]$.
* Proof.
[Proof of Lemma B.2] The arguments for Case 1 and Case 2 are similar. We begin
with Case 1.
Case 1. First note that since $(e^{(j)},i^{(j)})$ is selected by the greedy
algorithm, we have
$\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})\geq{}\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})$
and
$\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})\geq{}\Delta_{e^{(j)},i^{\prime}}F(\boldsymbol{x}^{(j-1)})$.
Since
$\boldsymbol{x}^{(j-1)}\preceq\boldsymbol{o}^{(j-\frac{1}{2})}\preceq\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})}$,
we have that
$\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{x}^{(j-1)})\geq\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})})\geq\frac{1}{1+\varepsilon}\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})})$,
$\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{x}^{(j-1)})\geq\Delta_{o^{(j)},i^{(j)}}f(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})})\geq\frac{1}{1+\varepsilon}\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})})$,
and
$\Delta_{e^{(j)},i^{\prime}}f(\boldsymbol{x}^{(j-1)})\geq\Delta_{e^{(j)},i^{\prime}}f(\boldsymbol{o}^{(j-\frac{1}{2})})\geq\frac{1}{1+\varepsilon}\Delta_{e^{(j)},i^{\prime}}F(\boldsymbol{o}^{(j-\frac{1}{2})})$
from orthant submodularity.
Therefore, we obtain that
$\frac{1}{1-\varepsilon}\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})$ is
greater than or equal to
$\frac{1}{1+\varepsilon}\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})})$,
$\frac{1}{1+\varepsilon}\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})})$,
and
$\frac{1}{1+\varepsilon}\Delta_{e^{(j)},i^{\prime}}F(\boldsymbol{o}^{(j-\frac{1}{2})})$
by the definition of $\varepsilon$-ADR.
It follows that
$\displaystyle\frac{1}{1+\varepsilon}\left[F(\boldsymbol{o}^{(j-1)})-F(\boldsymbol{o}^{(j)})\right]$
$\displaystyle=$
$\displaystyle\frac{1}{1+\varepsilon}\left[\Delta_{e^{(j)},i^{\prime}}F(\boldsymbol{o}^{(j-\frac{1}{2})})-\Delta_{o^{(j)},i^{\prime}}F(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{(j)})})\right.$
$\displaystyle~{}~{}\quad\quad\quad\left.+\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})})-\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})})\right]$
$\displaystyle\leq$
$\displaystyle\frac{1}{1+\varepsilon}\left[\Delta_{e^{(j)},i^{\prime}}F(\boldsymbol{o}^{(j-\frac{1}{2})})+\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})}_{(e^{(j)},i^{\prime})})\right]$
$\displaystyle\leq$
$\displaystyle\frac{2}{1-\varepsilon}\left[F(\boldsymbol{x}^{(j)})-F(\boldsymbol{x}^{(j-1)})\right].$
Case 2. The argument follows similarly as Case 1 where one can show
$\frac{1}{1-\varepsilon}\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{x}^{(j-1)})$ is
greater than or equal to
$\frac{1}{1+\varepsilon}\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})})$.
Therefore, we have
$\displaystyle\frac{1}{1+\varepsilon}\left[F(\boldsymbol{o}^{(j-1)})-F(\boldsymbol{o}^{(j)})\right]$
$\displaystyle=$
$\displaystyle\frac{1}{1+\varepsilon}\left[\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})})-\Delta_{e^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})})\right]$
$\displaystyle\leq$
$\displaystyle\frac{1}{1+\varepsilon}\Delta_{o^{(j)},i^{(j)}}F(\boldsymbol{o}^{(j-\frac{1}{2})})$
$\displaystyle\leq$
$\displaystyle\frac{2}{1-\varepsilon}\left[F(\boldsymbol{x}^{(j)})-F(\boldsymbol{x}^{(j-1)})\right].$
* Proof.
[Proof of Theorem 5.4] We have
$\displaystyle F(\boldsymbol{o})-F(\boldsymbol{x})$
$\displaystyle=\sum_{j\in[B]}\left[F(\boldsymbol{o}^{(j-1)})-F(\boldsymbol{o}^{(j)})\right]$
$\displaystyle\leq\frac{2(1+\varepsilon)}{1-\varepsilon}\sum_{j\in[B]}\left[F(\boldsymbol{x}^{(j)})-F(\boldsymbol{x}^{(j-1)})\right]$
$\displaystyle\leq\frac{2(1+\varepsilon)}{1-\varepsilon}F(\boldsymbol{x})$
where the first inequality is due to Lemma B.2. Hence, we have
$F(\boldsymbol{x})\geq{}\frac{1-\varepsilon}{3+\varepsilon}F(\boldsymbol{o})$.
## C Improved Greedy Approximation Ratios When $f$ is Known: Additional
material
We start by restating the theorem presented in Section 6.
###### Theorem C.1
Let $f$ be a $k$-submodular function and $F$ be an $\varepsilon$-approximately
$k$-submodular function that is bounded by $f$. If there is an algorithm that
provides an approximation ratio of $\alpha$ for maximizing $f$ subject to
constraint $\mathbb{X}$, then the same solution yields an approximation ratio
of $\frac{1-\varepsilon}{1+\varepsilon}\alpha$ for maximizing $F$ subject to
constraint $\mathbb{X}$.
* Proof.
Let $\boldsymbol{o}_{f}$ and $\boldsymbol{o}_{F}$ be the optimal solutions of
$f$ and $F$, respectively, subject to constraint $\mathbb{X}$. Let
$\boldsymbol{x}_{f}$ be a solution of $f$ returned by an algorithm with an
approximation ratio of $\alpha$. We have
$\displaystyle\frac{1}{1-\varepsilon}F(\boldsymbol{x}_{f})\geq
f(\boldsymbol{x}_{f})\geq\alpha f(\boldsymbol{o}_{f})\geq\alpha
f(\boldsymbol{o}_{F})\geq\frac{1}{1+\varepsilon}\alpha F(\boldsymbol{o}_{F}),$
where the first inequality is by applying the definition of $\varepsilon$-AS,
the second inequality is by the definition of approximation ratios, the third
inequality is by replacing $\boldsymbol{o}_{f}$ with a less optimal solution
$\boldsymbol{o}_{F}$, and the last inequality is by the definition of
$\varepsilon$-AS.
The above theorem provides a set of results for our settings.
###### Corollary C.1
Suppose $F$ is approximately submodular. By applying the greedy algorithm on
$f$ subject to a size constraint, we obtain a solution providing an
approximation ratio of $\frac{1-\varepsilon}{1+\varepsilon}(1-\frac{1}{e})$ to
the size constrained maximization problem of $F$.
Corollary C.1 follows from the known approximation ratio of the greedy algorithm for monotone submodular functions [7, 19]. Compared to the $\varepsilon$-AS result of [14], Corollary C.1 yields a better approximation ratio when
$B\geq\frac{1}{4(e-1)\varepsilon}+\frac{1}{2}$:
this condition implies
$B\geq\frac{1}{4(1-1/e)}\left(\frac{1}{\varepsilon}-\varepsilon-\left(1-\frac{1}{e}\right)\left(\frac{1}{\varepsilon}-2+\varepsilon\right)\right)$,
i.e.,
$B\geq\frac{1-\varepsilon^{2}-(1-1/e)(1-\varepsilon)^{2}}{4(1-1/e)\varepsilon}$,
and hence
$\frac{1}{1+\frac{4B\varepsilon}{(1-\varepsilon)^{2}}}\leq\frac{1-\varepsilon}{1+\varepsilon}\left(1-\frac{1}{e}\right)$.
Specifically, we obtain the corollaries of Section 6, which we copy below for convenience.
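The threshold comparison can be checked numerically; the sketch below verifies, for an illustrative $\varepsilon$ (the function names are ours), that once $B$ exceeds $\frac{1}{4(e-1)\varepsilon}+\frac{1}{2}$ the ratio of Corollary C.1 dominates the $\varepsilon$-AS ratio of [14].

```python
import math

def ratio_direct(B, eps):
    # Ratio of [14] obtained by running greedy on F directly.
    return 1.0 / (1.0 + 4.0 * B * eps / (1.0 - eps) ** 2)

def ratio_via_f(eps):
    # Corollary C.1: greedy on f, evaluated as a solution for F.
    return (1.0 - eps) / (1.0 + eps) * (1.0 - 1.0 / math.e)

eps = 0.05
B_threshold = 1.0 / (4.0 * (math.e - 1.0) * eps) + 0.5
for B in (math.ceil(B_threshold), 2 * math.ceil(B_threshold)):
    assert ratio_direct(B, eps) <= ratio_via_f(eps)
```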
###### Corollary C.2
Suppose $F$ is approximately $k$-submodular. By applying the $k$-Greedy-TS
algorithm on $f$ subject to a total size constraint, we obtain a solution
providing an approximation ratio of $\frac{1-\varepsilon}{2(1+\varepsilon)}$
to the total size constrained maximization problem of $F$.
###### Corollary C.3
Suppose $F$ is approximately $k$-submodular. By applying the $k$-Greedy-IS
algorithm on $f$ subject to an individual size constraint, we obtain a
solution providing an approximation ratio of
$\frac{1-\varepsilon}{3(1+\varepsilon)}$ to the individual size constrained
maximization problem of $F$.
The above corollaries follow from the results of [21] where one can obtain
approximation ratios of $\frac{1}{2}$ and $\frac{1}{3}$ for total size and
individual size constraints, respectively, for maximizing monotone
$k$-submodular functions. It turns out that, for any value of $\varepsilon$,
we can derive better theoretical guarantees using the greedy solutions
from $f$ according to the above corollaries for $\varepsilon$-AS $F$. For $F$
that is $\varepsilon$-ADR, since it is also $\varepsilon$-AS, the above
results apply immediately. However, the above results provide a weaker
guarantee than applying greedy algorithms on $F$ directly.
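For concreteness, the total-size greedy referenced in Corollary C.2 can be sketched as follows: in each of $B$ rounds, assign the (element, type) pair with the largest marginal gain. This is a sketch of the kind of procedure $k$-Greedy-TS describes, run here on a toy modular (hence $k$-submodular) objective with illustrative weights.

```python
def k_greedy_ts(f, ground, k, B):
    """Total-size greedy sketch: B rounds, each adding the (element, type)
    pair with the largest marginal gain over the current assignment."""
    x = {}  # partial assignment: element -> type in {0, ..., k-1}
    for _ in range(B):
        best, best_gain = None, float("-inf")
        for e in ground:
            if e in x:
                continue
            for i in range(k):
                gain = f({**x, e: i}) - f(x)
                if gain > best_gain:
                    best, best_gain = (e, i), gain
        if best is None or best_gain <= 0:
            break
        x[best[0]] = best[1]
    return x

# Toy modular objective: per-(element, type) weights (illustrative).
w = {"a": [3.0, 1.0], "b": [1.0, 2.0], "c": [0.5, 0.5]}

def f(x):
    return sum(w[e][i] for e, i in x.items())

sol = k_greedy_ts(f, set(w), k=2, B=2)  # assigns a -> type 0, b -> type 1
```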
## D Additional Experiment Details
### D.1 Non-submodularity
To see that the function $F$ constructed in the AG setting is not $k$-submodular,
consider $k=1$, $\xi(u)=1-\varepsilon$, $\xi(v)=1$, $\xi(\\{u,v\\})=1$, as well as
$f(\\{u,v\\})-f(\\{v\\})=f(\\{u\\})$ for some $u,v$. It follows that
$F(\\{u\\})-F(\mathbf{0})<F(\\{u,v\\})-F(\\{v\\})\Leftrightarrow(1-\varepsilon)f(\\{u\\})=(1-\varepsilon)(f(\\{u,v\\})-f(\\{v\\}))<f(\\{u,v\\})-f(\\{v\\})=F(\\{u,v\\})-F(\\{v\\})$
for $\varepsilon>0$.
To see that $F(\boldsymbol{x})=\max_{x\in{supp(\boldsymbol{x})}}\xi(x)\cdot
f(\boldsymbol{x})$ constructed in the MaxG setting is not $k$-submodular,
consider $k=1$ and two elements $\\{u,v\\}$, $\xi(u)=1-\varepsilon$,
$\xi(v)=1$, and $f(\\{u,v\\})-f(\\{v\\})=f(\\{u\\})$ for some $u,v$. It
follows that
$F(\\{u\\})-F(\mathbf{0})<F(\\{u,v\\})-F(\\{v\\})\Leftrightarrow(1-\varepsilon)f(\\{u\\})<f(\\{u,v\\})-f(\\{v\\})$
for $\varepsilon>0$.
To see that $F(\boldsymbol{x})=\xi(\boldsymbol{x})\cdot f(\boldsymbol{x})$
with $\xi({\boldsymbol{x}})=\frac{\sum_{x\in
supp(\boldsymbol{x})}\xi(x)}{|supp(\boldsymbol{x})|}$, which is constructed in
the MeanG setting, is not $k$-submodular, consider $k=1$,
$\xi(u)=1-\varepsilon$, and $\xi(v)=1$, as well as
$f(\\{u,v\\})=\frac{3}{2}f(\\{u\\})$ and $f(\\{u,v\\})-f(\\{v\\})=f(\\{u\\})$
for some $u,v$. It follows that
$F(\\{u\\})-F(\mathbf{0})<F(\\{u,v\\})-F(\\{v\\})\Leftrightarrow(1-\varepsilon)f(\\{u\\})<(1-\frac{3}{4}\varepsilon)f(\\{u\\})=f(\\{u\\})-\frac{3\varepsilon}{4}f(\\{u\\})=f(\\{u,v\\})-f(\\{v\\})-\frac{\varepsilon}{2}f(\\{u,v\\})=\frac{2-\varepsilon}{2}f(\\{u,v\\})-f(\\{v\\})=F(\\{u,v\\})-F(\\{v\\})$
for $\varepsilon>0$.
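These counterexamples can be instantiated numerically. The sketch below checks the MaxG case with a modular $f$ (the concrete values are illustrative): the marginal gain of adding $u$ increases after $v$ is added, violating $k$-submodularity for $k=1$.

```python
EPS = 0.1
xi = {"u": 1 - EPS, "v": 1.0}
f = {frozenset(): 0.0, frozenset("u"): 1.0,
     frozenset("v"): 1.0, frozenset("uv"): 2.0}  # modular: f({u,v}) - f({v}) = f({u})

def F(S):
    """MaxG construction: scale f by the largest xi over the support."""
    S = frozenset(S)
    scale = max((xi[x] for x in S), default=1.0)
    return scale * f[S]

early_gain = F("u") - F("")   # (1 - EPS) * f({u}) = 0.9
late_gain = F("uv") - F("v")  # f({u,v}) - f({v})  = 1.0
assert early_gain < late_gain  # marginal gains increase: F is not submodular
```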
### D.2 Influence Maximization with Approximately $k$-Submodular Functions
and the TS Constraint
In the main paper, we considered Influence Maximization with the impact of
$k$. Here, we present results in Figs. 5, 6 and 7 for the same problem with
the impact of $B$ and $\varepsilon$. The trends are similar to those for
parameter $k$ reported in the paper; $Gr$-$I$ outperformed $Gr$-$\tilde{I}$ in
the AG setting, and the opposite happened in the MeanG and MaxG settings.
(a)
(b)
Figure 5: $\tilde{I}$ in AG setting vs: (a) $B$, (b) $\varepsilon$.
(a)
(b)
Figure 6: $\tilde{I}$ in MeanG setting vs: (a) $B$, (b) $\varepsilon$.
(a)
(b)
Figure 7: $\tilde{I}$ in MaxG setting vs: (a) $B$, (b) $\varepsilon$.
### D.3 Sensor Placement with Approximately $k$-Submodular Functions and the
TS Constraint
In the main paper, we considered Sensor Placement with IS constraints. Here,
we present results for the same problem with the TS constraint in Figs. 8, 9
and 10. As can be seen, the results are qualitatively similar to those for the
problem with IS constraints. That is, $Gr$-$H$ outperformed $Gr$-$\tilde{H}$
in the AG setting, and the opposite happened in the MeanG and MaxG settings.
(a)
(b)
(c)
Figure 8: $\tilde{H}$ in AG setting vs: (a) $k$, (b) $B$, (c) $\varepsilon$.
(a)
(b)
(c)
Figure 9: $\tilde{H}$ in MeanG setting vs: (a) $k$, (b) $B$, (c)
$\varepsilon$.
(a)
(b)
(c)
Figure 10: $\tilde{H}$ in MaxG setting vs: (a) $k$, (b) $B$, (c)
$\varepsilon$.
### D.4 Sensor Placement and Influence Maximization with Approximately
Diminishing Returns $k$-Submodular Functions
We constructed $\varepsilon$-ADR functions for each setting, as follows.
In the AG setting, we ran $k$-Greedy-TS on $f$ and set
$\Delta_{e,i}F(\boldsymbol{x_{f}})=(1+\varepsilon)\Delta_{e,i}f(\boldsymbol{x_{f}})$
for its solution $\boldsymbol{x_{f}}$. For any other
$\boldsymbol{x}\neq\boldsymbol{x_{f}}$, we selected $\xi(\boldsymbol{x})$
uniformly at random in $[1-\varepsilon,1]$ and set
$\Delta_{e,i}F(\boldsymbol{x})=\xi(\boldsymbol{x})\cdot\Delta_{e,i}f(\boldsymbol{x})$,
for each $(e,i)$. Then, we summed up
$\max_{(e,i)}\Delta_{e,i}F(\boldsymbol{x})$ in each iteration to obtain
$F(\boldsymbol{x})$.
In the MaxG setting, we set
$\Delta_{e,i}F(\boldsymbol{x})=\xi(\boldsymbol{x},e)\cdot\Delta_{e,i}f(\boldsymbol{x})$,
where $\xi(\boldsymbol{x},e)=\max(\xi(e),\max_{x\in
supp(\boldsymbol{x})}\xi({x}))$ and $\xi(x)\in[1-\varepsilon,1]$. We then
summed up $\max_{(e,i)}\Delta_{e,i}F(\boldsymbol{x})$ in each iteration to
obtain $F(\boldsymbol{x})$. Similarly, in the MeanG setting, we used
$\xi(\boldsymbol{x},e)=\frac{\xi(e)+\sum_{x\in
supp(\boldsymbol{x})}\xi(x)}{|supp(\boldsymbol{x})|+1}$, where
$\xi(x)\in[1-\varepsilon,1]$.
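To make these constructions concrete, the coefficients $\xi(\boldsymbol{x},e)$ used in the MaxG and MeanG settings can be computed as below (a sketch; the per-element values $\xi(x)\in[1-\varepsilon,1]$ are illustrative).

```python
def xi_maxg(xi, supp, e):
    # MaxG: the largest of xi(e) and the xi-values already in the support.
    return max([xi[e]] + [xi[x] for x in supp])

def xi_meang(xi, supp, e):
    # MeanG: the average of xi(e) and the xi-values in the support.
    return (xi[e] + sum(xi[x] for x in supp)) / (len(supp) + 1)

xi = {"u": 0.9, "v": 1.0, "w": 0.95}
assert xi_maxg(xi, ["u"], "v") == 1.0
assert abs(xi_meang(xi, ["u", "v"], "w") - (0.9 + 1.0 + 0.95) / 3) < 1e-12
```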
#### D.4.1 Non-submodularity
To see that the function $F$ constructed in the AG setting is not
$k$-submodular, consider $k=1$, $\xi(\\{v\\},u)=1$,
$\xi(\mathbf{0},u)=1-\varepsilon$, as well as
$f(\\{u,v\\})-f(\\{v\\})=f(\\{u\\})$ for some $u,v$. It follows that
$F(\\{u\\})-F(\mathbf{0})<F(\\{u,v\\})-F(\\{v\\})\Leftrightarrow\Delta_{u}F(\mathbf{0})=(1-\varepsilon)\Delta_{u}f(\mathbf{0})<f(\\{u,v\\})-f(\\{v\\})=\Delta_{u}F(\\{v\\})=F(\\{u,v\\})-F(\\{v\\})$
for $\varepsilon>0$.
To see that the $F$ function constructed in the MaxG setting is not
$k$-submodular, consider $k=1$ and two elements $\\{u,v\\}$,
$\xi(u)=1-\varepsilon$, $\xi(v)=1$, and $f(\\{u,v\\})-f(\\{v\\})=f(\\{u\\})$
for some $u,v$. It follows that
$F(\\{u\\})-F(\mathbf{0})<F(\\{u,v\\})-F(\\{v\\})\Leftrightarrow\Delta_{u}F(\mathbf{0})=(1-\varepsilon)\Delta_{u}f(\mathbf{0})<f(\\{u,v\\})-f(\\{v\\})=\Delta_{u}f(\\{v\\})=F(\\{u,v\\})-F(\\{v\\})$
for $\varepsilon>0$.
To see that the $F$ function constructed in the MeanG setting is not
$k$-submodular, consider $k=1$, $\xi(u)=1-\varepsilon$, and $\xi(v)=1$, as
well as $\Delta_{u}f(\\{v\\})=\Delta_{u}f(\mathbf{0})$ for some $u,v$. It
follows that
$F(\\{u\\})-F(\mathbf{0})<F(\\{u,v\\})-F(\\{v\\})\Leftrightarrow\Delta_{u}F(\mathbf{0})=(1-\varepsilon)\Delta_{u}f(\mathbf{0})<\frac{2-\varepsilon}{2}\Delta_{u}f(\mathbf{0})=\frac{2-\varepsilon}{2}\Delta_{u}f(\\{v\\})=\Delta_{u}F(\\{v\\})=F(\\{u,v\\})-F(\\{v\\})$
for $\varepsilon>0$.
We first considered sensor placement with the TS constraint, but with an
$\varepsilon$-ADR instead of an $\varepsilon$-AS function. $k$-Greedy-TS using
$H$ as $f$ is denoted with $Gr$-$H$-$ADR$, and $k$-Greedy-TS using $\tilde{H}$
as $F$ is denoted with $Gr$-$\tilde{H}$-$ADR$. The results in Figs. 11, 12,
and 13 are analogous to those for the $\varepsilon$-AS function in Section D.3
and confirm our analysis in Section 6.
(a)
(b)
(c)
Figure 11: $\tilde{H}$ in AG setting vs: (a) $k$, (b) $B$, (c) $\varepsilon$.
(a)
(b)
(c)
Figure 12: $\tilde{H}$ in MeanG setting vs: (a) $k$, (b) $B$, (c)
$\varepsilon$.
(a)
(b)
(c)
Figure 13: $\tilde{H}$ in MaxG setting vs: (a) $k$, (b) $B$, (c)
$\varepsilon$.
We then considered sensor placement with IS constraints using $k$-Greedy-IS.
$k$-Greedy-IS using $H$ as $f$ is denoted with $Gr$-$H$-$ADR$, and $k$-Greedy-
IS using $\tilde{H}$ as $F$ is denoted with $Gr$-$\tilde{H}$-$ADR$. As
expected from the analysis in Section 6, the results in Figs. 14, 15, and 16
are similar to those in Figs. 11, 12, and 13.
(a)
(b)
(c)
Figure 14: $\tilde{H}$ in AG setting vs: (a) $k$, (b) $b$, (c) $\varepsilon$.
(a)
(b)
(c)
Figure 15: $\tilde{H}$ in MeanG setting vs: (a) $k$, (b) $b$, (c)
$\varepsilon$.
(a)
(b)
(c)
Figure 16: $\tilde{H}$ in MaxG setting vs: (a) $k$, (b) $b$, (c)
$\varepsilon$.
Last, we considered influence maximization with the TS constraint and an
$\varepsilon$-ADR function. $k$-Greedy-TS using $I$ as $f$ is denoted with
$Gr$-$I$-$ADR$, and $k$-Greedy-TS using $\tilde{I}$ as $F$ is denoted with
$Gr$-$\tilde{I}$-$ADR$. The obtained results are similar to those for the case
of $\varepsilon$-AS function (see Section 7.3 and Appendix D.2).
(a)
(b)
(c)
Figure 17: $\tilde{I}$ in AG setting vs: (a) $k$, (b) $B$, (c) $\varepsilon$.
(a)
(b)
(c)
Figure 18: $\tilde{I}$ in MeanG setting vs: (a) $k$, (b) $B$, (c) $\varepsilon$.
(a)
(b)
(c)
Figure 19: $\tilde{I}$ in MaxG setting vs: (a) $k$, (b) $B$, (c) $\varepsilon$.
|
§ NOTATION
We establish our notational conventions in this paper. When possible, we have tried to keep notation consistent with <cit.>.
* $\X=G/K$ will denote a symmetric space of noncompact type. $G$ is assumed to be the connected component of the isometry group of $\X$, and $K$ is a maximal compact subgroup of $G$, see Section <ref>.
* We let $p,q,r,c$ denote points or curves in $\X$. We let $g,h,u,a$ denote elements or curves in $G$. An element or curve in $K$ may be denoted by $k$.
* The Lie algebra of $G$ is denoted $\mfg$. The Lie algebra of $K$ is denoted $\mfk$. When a point $p$ is given, $K$ is the stabilizer of $p$ in $G$. Usually $U,V,W,X,Y,Z$ will denote elements of $\mfg$.
* The orbit map $\orb_p \colon G \to \X$, given by $\orb_p(g)=gp$, has differential $\evp \colon \mfg \to T_p \X$ at the identity, see Section <ref>.
* The Cartan decomposition induced by $p \in \X$ is $\mfg = \mfk \oplus \mfp$. It corresponds to a Cartan involution $\vartheta_p \colon \mfg \to \mfg$, see Section <ref>.
* The Killing form on $\mfg$ is denoted $B$. Each point $p \in \X$ induces an inner product $B_p$ on $\mfg$ defined by $B_p(X,Y)=-B(\vartheta_pX,Y)$, see Section <ref>.
* We assume that the Riemannian metric $\langle\cdot ,\cdot \rangle$ on $\X$ is the one induced by the Killing form, see Equation <ref>.
* The sectional curvature $\kappa$ of $\X$ has image $[-\kappa_0^2,0]$, see Section <ref>.
* A maximal abelian subspace of $\mfp$ will be denoted $\mfa$. The associated restricted roots are denoted $\Lambda \subset \mfa^\ast$. A choice of simple roots is denoted $\Delta$, see Section <ref>.
* Each maximal abelian subspace $\mfa$ has an action by the Weyl group and decomposition into Euclidean Weyl chambers denoted $V$, see Section <ref>.
* There is a vector-valued distance function $\vec{d}\colon \X \times \X \to V_{mod}$ with image the model Euclidean Weyl chamber, see Equation <ref>. In <cit.> this map is denoted $\Delta$, and they let $\Delta$ denote the model Euclidean Weyl chamber we call $V_{mod}$. In this paper, $\Delta$ denotes a choice of simple roots.
* A spherical Weyl chamber $\sigma$ corresponds to a set of simple roots $\Delta$. For a face $\tau$ of $\sigma$ we have
$$ \Delta_\tau = \{ \alpha \in \Delta \mid \alpha(\tau) = 0 \}, \quad \Delta_\tau^+ = \{ \alpha \in \Delta \mid \alpha(\interior \tau)>0 \} ,$$
see Equations <ref>. We have
$$ \tau = \sigma \cap \bigcap_{\alpha \in \Delta_\tau} \ker \alpha, \quad \interior_\tau \sigma = \{ X \in \sigma \mid \forall \alpha \in \Delta_\tau^+, \alpha(X) >0 \}, \quad \partial_\tau \sigma = \sigma \cap \bigcup_{\alpha \in \Delta_\tau^+} \ker \alpha .$$
* The visual boundary of $\X$ is denoted $\partial \X$, see Section <ref>. We let $\tau, \sigma$ denote a spherical simplex/chamber in $\mfa$ or an ideal simplex/chamber in $\partial \X$.
* There is a type projection
$ \theta \colon \partial \X \to \sigma_{mod}$
with image the model ideal Weyl chamber, see Section <ref>.
* A face of $\sigma_{mod}$ is called a model simplex and denoted $\tau_{mod}$. There is a decomposition $\sigma_{mod}= \interior_{\tau_{mod}} \sigma_{mod} \sqcup \partial_{\tau_{mod}} \sigma_{mod}$, see Section <ref>.
* We define $(\alpha_0,\tau)$-regular and $(\alpha_0,\tau)$-spanning vectors and geodesics in Section <ref>. We extend that definition to ideal points in Section <ref>.
* We write $\Flagt$ for the set of ideal simplices in $\partial \X$ of type $\tau_{mod}$, see Section <ref>.
* We define Weyl cones $V(x,\st(\tau),\alpha_0),V(x,\ost(\tau))$ and Weyl sectors $V(x,\tau)$ in Section <ref>.
* We describe the generalized Iwasawa decomposition $G=N_\tau A_\tau K$ in Section <ref>.
* A parallel set is denoted $P(\tau_-,\tau_+)$ for opposite simplices $\tau_-,\tau_+ \in \Flagt$. A horocycle is denoted $H(p,\tau)$, see Section <ref>. A diamond is denoted $\diamondsuit(p,q)$ and a truncated diamond is denoted $\diamondsuit_{\alpha_0}(p,q)$, see Section <ref>.
* For $p \in \X$ and $x,y \in \overline{X} \setminus \{p\}$, $\angle_p(x,y)$ denotes the Riemannian angle at $p$ between $x$ and $y$. For $\eta,\eta' \in \partial \X$, $\angle_{Tits}(\eta,\eta')$ denotes their Tits angle. If $px$ and $py$ are $\tau_{mod}$-regular and $\tau,\tau' \in \Flagt$ then $\angle_p^\zeta(\tau,\tau'),\angle_p^\zeta(\tau,y),\angle_p(\zeta(\tau),\zeta(py))$ denote the $\zeta$-angles, see Section <ref>.
* A $(c_1,c_2,c_3,c_4)$-quasigeodesic is a sequence $(x_n)$ (possibly finite, infinite, or biinfinite) in $\X$ such that
$$ \frac{1}{c_1} \abs{N}-c_2 \le d(x_n,x_{n+N}) \le \abs{N}c_3+c_4 .$$
A quasigeodesic is $(\alpha_0,\tau_{mod},D)$-Morse if for all $x_n,x_m$ there exists a diamond $\diamondsuit_{\alpha_0}(p,q)$ such that $d(p,x_n),d(q,x_m) \le D$ and for all $n \le i \le m$, $d(x_i,\diamondsuit) \le D$, see Section <ref>.
§ BACKGROUND ON SYMMETRIC SPACES
We begin with some background on the structure of symmetric spaces of noncompact type. Experts on symmetric spaces can skip this section, but should note that we assume the metric is induced by the Killing form (see Equation <ref>), quantify the regularity of geodesics in Definition <ref>, and define the $\zeta$-angle in Definition <ref>. For detailed references on symmetric spaces see <cit.>.
A symmetric space is a connected Riemannian manifold $\X$ such that for each point $p \in \X$, there exists a geodesic symmetry $S_p\colon \X \to \X$, an isometry fixing $p$ whose differential at $p$ is $(\dd{S_p})_p = -\id_{T_p\X}$. A symmetric space is necessarily complete with transitive isometry group. If $\X$ is nonpositively curved, it is simply connected. Simply connected Riemannian manifolds admit a de Rham decomposition into metric factors. If $\X$ is a nonpositively curved symmetric space with no Euclidean de Rham factors, we say $\X$ is a symmetric space of noncompact type. Throughout the paper, $\X$ refers to any fixed symmetric space of noncompact type.
The isometry group of $\X$ is a semisimple Lie group, and we let $G$ be the identity component of the isometry group. For each point $p \in \X$, the stabilizer $K=G_p = \{ g \in G \mid gp=p \}$ is a maximal compact subgroup of $G$. Hence $\X$ is diffeomorphic to $G/K$ by the orbit-stabilizer theorem for Lie groups and homogeneous spaces. We write $\mfg$ for the Lie algebra of left-invariant vector fields on $G$.
A Killing vector field on a Riemannian manifold is a vector field whose induced flow is by isometries. There is a natural linear isomorphism from $\mfg$ to the space of Killing vector fields on $\X$ by defining for $X \in \mfg$ the vector field $X^\ast$ given by
\begin{equation}\label{killing field} X^\ast_p \coloneqq \eval{ \dv{t} e^{tX} p }_{t=0} . \end{equation}
The Lie bracket of two Killing vector fields is again a Killing vector field, but the map $X \mapsto X^\ast$ is a Lie algebra anti-homomorphism: $[X,Y]^\ast = -[X^\ast,Y^\ast]$.
§.§ Cartan decomposition
Each point $p \in \X$ induces a Cartan decomposition in the following way. The geodesic symmetry $S_p \colon \X \to \X$ induces an involution of $G$ by
$$ g \mapsto S_p \circ g \circ S_p .$$
The differential is a Lie algebra involution $\vartheta_p \colon \mfg \to \mfg$, so we may write
$$ \mfg = \mfk \oplus \mfp $$
where $\mfk = \{ X \in \mfg \mid \vartheta_pX=X \}$ and $\mfp = \{ X \in \mfg \mid \vartheta_p X = -X \}$. Since $\vartheta_p$ preserves brackets, we have
$$ [\mfk,\mfk] \subset \mfk, \quad [\mfk,\mfp] \subset \mfp, \quad [\mfp, \mfp] \subset \mfk .$$
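These bracket relations can be checked concretely. For $\mfg=\mathfrak{sl}(2,\R)$ with Cartan involution $\vartheta(X)=-X^{T}$, one has $\mfk=\mathfrak{so}(2)$ (the skew-symmetric part) and $\mfp$ the symmetric traceless matrices; the following numerical sketch verifies the three inclusions on random elements.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2():
    X = rng.standard_normal((2, 2))
    return X - (np.trace(X) / 2) * np.eye(2)  # project to traceless

def k_part(X):  # +1 eigenspace of theta(X) = -X^T: skew-symmetric part
    return (X - X.T) / 2

def p_part(X):  # -1 eigenspace: symmetric part
    return (X + X.T) / 2

def bracket(X, Y):
    return X @ Y - Y @ X

K1, K2 = k_part(random_sl2()), k_part(random_sl2())
P1, P2 = p_part(random_sl2()), p_part(random_sl2())

assert np.allclose(p_part(bracket(K1, K2)), 0)  # [k, k] subset of k
assert np.allclose(k_part(bracket(K1, P1)), 0)  # [k, p] subset of p
assert np.allclose(p_part(bracket(P1, P2)), 0)  # [p, p] subset of k
```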
We denote the orbit map $g \mapsto gp$ by $\orb_p\colon G \to \X$. The differential $(\dd{\orb_p})_1 \colon \mfg \to T_p \X$ has kernel precisely $\mfk$. Moreover, $\mfk$ is the Lie algebra of $K=G_p$. The restriction $(\dd{\orb_p})_1 \colon \mfp \to T_p\X$ is a vector space isomorphism. For any $X \in \mfg$, $(\dd{\orb_p})_1 X = X^\ast_p \eqqcolon \evp X$, see Equation <ref>, so we use the less cumbersome notation $\evp=(\dd{\orb_p})_1 \colon \mfg \to T_p\X$ throughout the paper (read as “evaluation at $p$”).
Let $B$ denote the Killing form on $\mfg$ and let $\langle \cdot,\cdot \rangle$ denote the Riemannian metric on $\X$. We will assume that for all $X,Y \in \mfp$,
\begin{equation}\label{metric is Killing form}
B(X,Y) = \langle \mathrm{ev}_p X ,\mathrm{ev}_pY \rangle_p ,
\end{equation}
i.e. that the Riemannian metric on $\X$ is induced by the Killing form. Any other $G$-invariant Riemannian metric on $\X$ differs from this one only by scaling by a global constant on each de Rham factor of $\X$.
Under the identification of $\mfp$ with $T_p\X$, the Riemannian exponential map $\mfp \to \X$ is given by $X \mapsto e^X p$. In particular, the constant speed geodesics at $p$ are given by $c(t)=e^{tX}p$ for $X \in \mfp$.
The point $p \in \X$ induces an inner product $B_p$ on $\mfg$ defined by
\begin{equation}\label{Bp definition}
B_p(X,Y)\coloneqq -B(\vartheta_p X,Y) .
\end{equation}
On $\mfp$, $B_p$ is just the restriction of the Killing form $B$, and we have required that the identification of $(\mfp,B)$ with $(T_p\X, \langle,\rangle)$ is an isometry. On $\mfk$, $B_p$ is the negative of the restriction of $B$ to $\mfk$. Since $\mfk$ and $\mfp$ are $B$-orthogonal, it follows that $B_p$ is an inner product on $\mfg$. For each $X \in \mfp$, $\ad X$ is symmetric with respect to $B_p$ on $\mfg$, and likewise for each $Y \in \mfk$, $\ad Y$ is skew-symmetric.
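The (skew-)symmetry of $\ad X$ with respect to $B_p$ can also be verified numerically. For $\mathfrak{sl}(n,\R)$ the Killing form is $B(X,Y)=2n\operatorname{tr}(XY)$ and $\vartheta_p X=-X^{T}$, so $B_p(X,Y)=2n\operatorname{tr}(X^{T}Y)$; the sketch below checks both statements on random elements.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def traceless(X):
    return X - (np.trace(X) / n) * np.eye(n)

def Bp(U, V):
    # B_p(U, V) = -B(theta U, V) = 2n tr(U^T V) on sl(n, R)
    return 2 * n * np.trace(U.T @ V)

def bracket(X, Y):
    return X @ Y - Y @ X

U = traceless(rng.standard_normal((n, n)))
V = traceless(rng.standard_normal((n, n)))
S = traceless(rng.standard_normal((n, n)))
X = (S + S.T) / 2  # X in p (symmetric):      ad X is B_p-symmetric
Y = (S - S.T) / 2  # Y in k (skew-symmetric): ad Y is B_p-skew

assert np.isclose(Bp(bracket(X, U), V), Bp(U, bracket(X, V)))
assert np.isclose(Bp(bracket(Y, U), V), -Bp(U, bracket(Y, V)))
```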
§.§ Restricted root space decomposition
Let $\mfa$ be a maximal abelian subspace of $\mfp$. Via the adjoint action, $\mfa$ is a commuting vector space of diagonalizable linear transformations on $\mfg$. Therefore $\mfg$ admits a common diagonalization called the restricted root space decomposition. For each $\alpha \in \mfa^\ast$, define
$$ \mfg_\alpha = \{ X \in \mfg \mid \forall A \in \mfa, \ad A (X) = \alpha(A) X \}. $$
We obtain a collection of roots
$$ \Lambda = \{ \alpha \in \mfa^\ast \setminus \{0\} \mid \mfg_\alpha \ne 0 \} $$
corresponding to the nonzero root spaces. The restricted root space decomposition is then
$$ \mfg = \mfg_0 \oplus \bigoplus_{\alpha \in \Lambda} \mfg_\alpha .$$
For each root $\alpha \in \Lambda$, define the coroot $H_\alpha \in \mfa$ by $\alpha(A)=B(H_\alpha,A)$ for all $A \in \mfa$. This induces an inner product, also denoted $B$, on $\mfa^\ast$ by defining $B(\alpha,\beta)\coloneqq B(H_\alpha,H_\beta)$. The set $\Lambda$ forms a root system[Note that this definition of root system is slightly different from the definition that appears in the study of, say, complex semisimple Lie algebras. There, one assumes that the only multiples of a root $\alpha$ appearing in $\Lambda$ are $\pm \alpha$. This assumption does not hold for restricted roots of symmetric spaces in general; for example, it fails in complex hyperbolic space.] in $(\mfa^\ast,B)$, see <cit.>. The restricted root space decomposition is $B_p$-orthogonal. A subset $\Lambda^+$ of the roots is positive if for every $\alpha \in \Lambda$, exactly one of $\alpha,-\alpha$ is contained in $\Lambda^+$ and for any $\alpha,\beta \in \Lambda^+$ such that $\alpha +\beta$ is a root, we have $\alpha+\beta \in \Lambda^+$.
The Cartan involution restricts to an isomorphism $\vartheta_p\colon \mfg_\alpha \to \mfg_{-\alpha} $ for each $\alpha \in \Lambda \cup \{0\}$. Thus we have
$$ \mathfrak{p}_\alpha\coloneqq\mathfrak{p} \cap (\mathfrak{g}_\alpha \oplus \mathfrak{g}_{-\alpha}) = (\id - \vartheta_p)\mathfrak{g}_\alpha = (\id -\vartheta_p)\mathfrak{g}_{-\alpha} ,$$
$$ \mathfrak{k}_\alpha\coloneqq\mathfrak{k} \cap (\mathfrak{g}_\alpha \oplus \mathfrak{g}_{-\alpha}) = (\id + \vartheta_p)\mathfrak{g}_\alpha = (\id +\vartheta_p)\mathfrak{g}_{-\alpha} .$$
Note that $\mathfrak{p}_\alpha = \mathfrak{p}_{-\alpha}$ and likewise $\mathfrak{k}_\alpha = \mathfrak{k}_{-\alpha}$, so for $\Lambda^+$ a set of positive roots, we have the decomposition
$$\mathfrak{g} = \mathfrak{a} \oplus \mathfrak{k}_0 \oplus \bigoplus_{\alpha \in \Lambda^+} \mathfrak{p}_\alpha \oplus \bigoplus_{\alpha \in \Lambda^+} \mathfrak{k}_\alpha $$
which is both $B_p$-orthogonal and $B$-orthogonal. Some authors use the notation $\mathfrak{m}=\mfk_0$.
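The restricted root space decomposition can be computed explicitly for $\mathfrak{sl}(3,\R)$: with $\mfa$ the traceless diagonal matrices, the off-diagonal matrix units $E_{ij}$ span the root spaces, with roots $\alpha_{ij}(A)=a_i-a_j$. The following sketch verifies this eigenvector property.

```python
import numpy as np

n = 3
A = np.diag([2.0, -0.5, -1.5])  # a regular element of the maximal abelian a

def bracket(X, Y):
    return X @ Y - Y @ X

# Each off-diagonal matrix unit E_ij is a simultaneous eigenvector of
# ad(A) with eigenvalue alpha_ij(A) = a_i - a_j.
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        E = np.zeros((n, n))
        E[i, j] = 1.0
        assert np.allclose(bracket(A, E), (A[i, i] - A[j, j]) * E)
```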
§.§ Curvature and copies of $\HH^2$
The curvature tensor $R$ of $\X$ may be defined using the Levi-Civita connection $\nabla$ by
$$ R(u,v) = \nabla_u \nabla_v - \nabla_v \nabla_u - \nabla_{[u,v]}, $$
for vector fields $u,v$ on $\X$. In a symmetric space there is a particularly nice formula for the curvature tensor. Our convention is that the sectional curvature spanned by orthonormal unit vectors $u,v$ is
$$ \kappa(u \wedge v) = \langle R(u,v)v,u \rangle. $$
Let $X,Y,Z \in \mfp$ and write $X^\ast,Y^\ast,Z^\ast$ for the corresponding Killing vector fields on $\X$. Then
$$ (R(X^\ast,Y^\ast)Z^\ast)_p = -\evp[[X,Y],Z] .$$
This formula allows us to work directly with the sectional curvature using the structure of the Lie algebra. Let $X \in \mfa$ and $Y \in \mathfrak{p}$ be orthogonal unit vectors. We may write $Y=Y_0+\sum_{\alpha \in \Lambda^+ } Y_\alpha$ where $Y_0 \in \mfa$ and each $Y_\alpha \in \mathfrak{p}_\alpha$, and recall that this decomposition is $B$-orthogonal, so we have the lower curvature bound
\begin{align*} & \kappa(X^\ast_p \wedge Y^\ast_p) = B(-[[X,Y],Y],X) = B([X,Y],[X,Y]) = -B([X,[X,Y]],Y) = \\ & -\sum_{\alpha \in \Lambda^+} B(\alpha(X)^2Y_\alpha,Y) = -\sum_{\alpha,\beta \in \Lambda^+} \alpha(X)^2 B(Y_\alpha,Y_\beta) = -\sum_{\alpha \in \Lambda^+} \alpha(X)^2 B(Y_\alpha,Y_\alpha) \ge - \kappa_0^2
\end{align*}
where $\kappa_0$ is defined to be the maximum of $\{\alpha(X)\mid \alpha \in \Lambda, X \in \mfa, \abs{X}=1 \}$. In general, we have $\kappa_0 \le 1$, as we now explain. Since $\alpha(X)$ is maximized in the direction of the coroot $H_\alpha$, we have
$$ \kappa_0 = \alpha\left(\frac{H_\alpha}{\abs{ H_\alpha}}\right) = \lvert H_\alpha \rvert $$
for some $\alpha$. By <cit.>, we have for $A,A' \in \mfa$ that $B(A,A') = \sum_{\beta \in \Lambda} (\dim \mfg_\beta) \beta(A)\beta(A') $, so
$$ 1= B\left(\frac{H_\alpha}{\abs{ H_\alpha}},\frac{H_\alpha}{\abs{ H_\alpha}}\right) = \sum_{\beta \in \Lambda} \left( \dim \mathfrak{g}_\beta \right)\beta\left(\frac{H_\alpha}{\abs{ H_\alpha}}\right)^2 \ge \alpha\left(\frac{H_\alpha}{\abs{ H_\alpha}}\right)^2 =\kappa_0^2 .$$
In particular, under this normalization where the symmetric space inherits its metric from the Killing form, the sectional curvature is always bounded between $0$ and $-1$.
In $\mathfrak{sl}(d,\R)$, each root $\alpha$ has $\abs{H_\alpha}=\frac{1}{\sqrt{d}}$, so we have $\kappa_0 = \frac{1}{\sqrt{d}}$ and the associated symmetric space has lower curvature bound $-\frac{1}{d}$.
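The value $\kappa_0=\frac{1}{\sqrt{d}}$ for $\mathfrak{sl}(d,\R)$ can be checked directly using the Killing form $B(X,Y)=2d\operatorname{tr}(XY)$; the sketch below computes the coroot of $\alpha_{12}$ and its norm.

```python
import numpy as np

d = 4

def B(X, Y):
    # Killing form of sl(d, R): B(X, Y) = 2d tr(XY)
    return 2 * d * np.trace(X @ Y)

# Coroot of alpha_12(A) = a_1 - a_2: the H in a with B(H, A) = alpha_12(A).
H = np.zeros((d, d))
H[0, 0], H[1, 1] = 1.0 / (2 * d), -1.0 / (2 * d)

# Check the defining property on a random traceless diagonal A.
a = np.random.default_rng(2).standard_normal(d)
a -= a.mean()
A = np.diag(a)
assert np.isclose(B(H, A), a[0] - a[1])

# |H_alpha| = 1/sqrt(d), so kappa_0 = 1/sqrt(d) and the curvature bound is -1/d.
assert np.isclose(np.sqrt(B(H, H)), 1.0 / np.sqrt(d))
```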
In Section <ref> we will need to know the curvature of copies of the hyperbolic plane in $\X$. These correspond to copies of $\sl2r$ in $\mfg$. Let $\alpha \in \Lambda$ and $X_\alpha \in \mfg_\alpha$ such that $B_p(X_\alpha,X_\alpha)=\frac{2}{\abs{H_\alpha}^2}$. Set $\tau_\alpha \coloneqq \frac{2}{\abs{H_\alpha}^2}H_\alpha$ so that $\alpha(\tau_\alpha)=2$. Set $Y_\alpha \coloneqq -\vartheta_pX_\alpha \in \mfg_{-\alpha}$. Then
$$ [\tau_\alpha,X_\alpha] = 2 X_\alpha, \quad [\tau_\alpha,Y_\alpha] = -2Y_\alpha, \quad \text{and} \quad [X_\alpha,Y_\alpha]=\tau_\alpha, $$
where the last equality follows from considering $B([X_\alpha,Y_\alpha],A)$ for $A \in \mfa = \R H_\alpha \oplus \ker \alpha$. Then $\vartheta_p (X_\alpha + Y_\alpha) = \vartheta_p X_\alpha - \vartheta_p^2 X_\alpha = -(Y_\alpha + X_\alpha)$, so $X_\alpha + Y_\alpha \in \mfp$ and $\abs{X_\alpha + Y_\alpha}^2 = \pnorm{X_\alpha}^2+ \pnorm{Y_\alpha}^2 = \frac{4}{\abs{H_\alpha}^2}$. So $\frac{\abs{H_\alpha}}{2}(X_\alpha+Y_\alpha)$ and $\frac{H_\alpha}{\abs{H_\alpha}}$ are orthonormal unit vectors in $\mfp$, and
$$ \kappa \left( \frac{\abs{H_\alpha}}{2}(X_\alpha+Y_\alpha) \wedge \frac{H_\alpha}{\abs{H_\alpha}} \right) = - \alpha \left( \frac{H_\alpha}{\abs{H_\alpha}}\right)^2 \abs{ \frac{\abs{H_\alpha}}{2}(X_\alpha+Y_\alpha)}^2 = -\frac{\abs{H_\alpha}^4}{\abs{H_\alpha}^2} \frac{\abs{H_\alpha}^2}{4} \frac{4}{\abs{H_\alpha}^2} = -\abs{H_\alpha}^2 $$
by the formula above.
In the symmetric space associated to $\mathfrak{sl}(d,\R)$, the root spaces $\mfg_\alpha$ are one-dimensional, so the subalgebra $\mathfrak{sl}(2,\R)_\alpha$ spanned by $X_\alpha,Y_\alpha,\tau_\alpha$ is uniquely determined by $\alpha$ and we denote it by $\sl2r_\alpha$. The image of $\R H_\alpha \oplus \mfp_\alpha$ under the Riemannian exponential map at $p$ is a totally geodesic submanifold $\HH^2_\alpha$ isometric to the hyperbolic plane of curvature $-\frac{1}{d}$.
§.§ Weyl chambers and the Weyl group
In this section we describe Weyl faces as subsets of maximal abelian subspaces $\mathfrak{a} \subset \mathfrak{p}$. In Section <ref> we will define Weyl faces as subsets of the visual boundary $\partial \X$, and explain how the definitions relate.
Let $\Lambda$ be the roots of a restricted root space decomposition of a maximal abelian subspace $\mathfrak{a}$ of $\mathfrak{p}$. For each $\alpha \in \Lambda \subset \mathfrak{a}^\ast$, the kernel of $\alpha$ is called a wall, and a component $C$ of the complement of the union of the walls is called an open Euclidean Weyl chamber; $C$ is open in $\mathfrak{a}$. A vector $X \in \mathfrak{a}$ is called regular if it lies in an open Euclidean Weyl chamber and singular otherwise. The closure $V$ of an open Euclidean Weyl chamber is a closed Euclidean Weyl chamber; $V$ is closed in $\mathfrak{p}$.
For a closed Weyl chamber $V$ there is an associated set of positive roots
$$ \Lambda^+ \coloneqq \{ \alpha \in \Lambda \mid \forall v \in V, \alpha(v) \ge 0 \} $$
and simple roots $\Delta$, i.e. those elements of $\Lambda^+$ which cannot be written as a sum of two elements of $\Lambda^+$, see <cit.>.
We may define
$$ N_K(\mfa)\coloneqq\{ k \in K \mid \Ad(k)(\mfa)=\mfa \},\quad Z_K(\mfa)\coloneqq\{k \in K \mid \forall A \in \mfa, \Ad(k)(A)=A \}.$$
Since the adjoint action preserves the Killing form, $N_K(\mfa)$ acts by isometries on $\mfa$ with kernel $Z_K(\mfa)$. We call the image of this action the Weyl group. For each reflection $r_\alpha$ in a wall, it is possible to find a $k \in K$ whose action on $\mfa$ agrees with $r_\alpha$ <cit.>. It is well-known that the Weyl group acts simply transitively on the set of Weyl chambers, which implies it is generated by the reflections in the walls of a chosen Weyl chamber. It is convenient for us to show this fact in Proposition <ref>, since the same techniques provide Corollary <ref>.
[Figure: The walls of a maximal flat in $\SL(3,\R)/\SO(3)$.]
[Figure: The walls of a maximal flat in $\SL(4,\R)/\SO(4)$.]
The Riemannian exponential map identifies maximal abelian subspaces in $\mathfrak{p}$ isometrically with maximal flats through $p$. So we can also refer to open/closed Euclidean Weyl chambers in $\X$ as the images of those in some $\mathfrak{a}$ under this identification. For every $X \in \mathfrak{p}$, there exists a maximal abelian subspace $\mathfrak{a}$ containing $X$, and in $\mathfrak{a}$, there exists some closed Euclidean Weyl chamber $V$ containing $X$.
§.§ A Morse function on flag manifolds
In this subsection, we show that the vector-valued distance function $\vec{d}$ on $\X$ (denoted $d_\Delta$ in <cit.>, see Definition <ref>) is well-defined, and give part of a proof of Theorem <ref>, an important part of the structure theory of symmetric spaces. Along the way we prove the $\vec{d}$-triangle inequality <cit.>, and provide an estimate on the Hessian of a certain Morse function defined on flag manifolds embedded in $\mfp$, see Proposition <ref> and Corollary <ref>.
We will use the following proposition. For $A \in \mfp$, let $\mfe_A$ be the intersection of all maximal abelian subspaces containing $A$.
Let $p \in \X$ with Cartan decomposition $\mfg = \mfk \oplus \mfp$, and let $k\in K$ and $A \in \mfp$. If $\Ad(k)(A)=A$ then for all $E \in \mathfrak{e}_A$ we have $\Ad(k)(E)=E$.
Note that there is a typo in Eberlein: the word “maximal" is omitted in the definition of $\mfe_A$. The proof of Proposition <ref> relies on passing to the compact real form of $\mfg^\mathbb{C}$.
In this section, a flag manifold is the orbit of a vector $Z \in \mfp$ under the adjoint action of $K=\Stab_G(p)$. The following proposition is essentially a standard part of the theory of symmetric spaces; however, we will need to extract a specific estimate, recorded in Corollary <ref>, in order to prove Lemma <ref>.
Let $X,Z \in \mathfrak{p}$ be unit vectors. Define
$$ f\colon K \to \mathbb{R}, \quad f(k)\coloneqq B(X,\Ad(k)Z).$$
* If $k$ is a critical point for $f$, then $\Ad(k)Z$ commutes with $X$.
* If $k$ is a local maximum for $f$, then $\Ad(k)Z$ lies in a common closed Weyl chamber with $X$.
* If $X$ is regular then the function $B(X,\cdot)\colon \Ad(K)Z \to \mathbb{R}$ is Morse and has a unique local maximum.
* If $X$ is regular then the distance function $d(X,\cdot)\colon \Ad(K)Z \to \mathbb{R}$ has a unique local minimum.
Note that $f$ is the composition of the orbit map $K \to \Ad(K)Z$ with the map $B(X,\cdot)\colon \Ad(K)Z \to \mathbb{R}$.
1. Let $Y \in \mfk$, viewed as a left-invariant vector field on $K$. If $k$ is a critical point for $f$, then
\begin{align*}
0 = & \d f_k(Y)= \eval{ \dv{t} f(ke^{tY}) }_{t=0} = \eval{ \dv{t} B(X,\Ad(ke^{tY})Z)}_{t=0} \\
& = B(X,\Ad(k)(\ad(Y)(Z))) = B(X,[Y',Z'])=B([Z',X],Y')
\end{align*}
where we write $Y'=\Ad(k)Y$ and $Z'=\Ad(k)Z$. Since $Y'$ is an arbitrary element of $\mfk$, $[X,Z'] \in \mfk$, and $B$ is negative definite on $\mfk$, we can conclude that $[X,Z']=0$, which is the claim.
2. At a critical point $k$ for $f$, the Hessian of $f$ at $k$ is a symmetric bilinear form on $T_kK$ determined by
$$ \Hess(f)(v,v)_k = (f \circ c)''(0) $$
for any curve $c$ with $c(0)=k$ and $c'(0)=v$. Let $Y \in \mfk$, viewed as a left-invariant vector field on $K$, and choose $c(t)=ke^{tY}$. To compute the Hessian of $f$ we only need to compute
\begin{align*}
\derivtwo{t} f(ke^{tY}) \atzero{t} & = \deriv{t} B(X,\Ad(ke^{tY})(\ad(Y)(Z))) \atzero{t} = B(X,\Ad(k)([Y,[Y,Z]])) = B(X,[Y',[Y',Z']]) \\
&= B([X,Y'],[Y',Z'])
= B([Z',[X,Y']],Y') = B(\ad(Z')\ad(X)(Y'),Y') = B(T Y',Y')
\end{align*}
where we write $T=\ad(Z')\circ \ad(X)$ as a linear transformation on $\mfk$. At a critical point, $X$ and $Z'$ commute by part 1, so we can choose a maximal abelian subspace $\mfa$ containing both of them, and then consider the corresponding restricted root space decomposition. For $Y_\alpha \in \mfk_\alpha$,
$$ T Y_\alpha = \alpha(Z') \alpha(X) Y_\alpha $$
so the transformation $T$ has the eigenvalue $\alpha(Z')\alpha(X)$ on its eigenspace $\mfk_\alpha$ and acts as $0$ on $\mfk_0$. Since we assumed $k$ is a local maximum for $f$, we have
$$ 0 \ge \eval{ \derivtwo{t} f(ke^{tY}) }_{t=0} = B(T Y',Y') $$
for all $Y \in \mfk$, so for each $\alpha \in \Lambda$, $\alpha(Z')\alpha(X) \ge 0$, and therefore $X$ and $Z'$ lie in a common closed Weyl chamber.
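Parts 1 and 2 can be checked numerically in a toy case. The sketch below (an illustration only: it takes $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{R})$, $K=\mathrm{SO}(2)$ acting by conjugation on the symmetric traceless matrices $\mathfrak{p}$, and uses $B(A,B)=\operatorname{tr}(AB)$ as a stand-in for the Killing form on $\mathfrak{p}$) locates the maximum of $f$ and verifies that the maximizer commutes with $X$ and lands in the chamber of $X$:

```python
import numpy as np

# Toy check in sl(2,R): K = SO(2) acts by conjugation on
# p = {symmetric traceless 2x2 matrices}, with B(A,B) = tr(AB)
# standing in (up to scale) for the Killing form restricted to p.
X = np.diag([1.0, -1.0]) / np.sqrt(2)                 # unit regular vector in p
Z = np.array([[0.0, 1.0], [1.0, 0.0]]) / np.sqrt(2)   # another unit vector in p

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def f(t):
    """f(k) = B(X, Ad(k)Z) along k = rot(t)."""
    k = rot(t)
    return np.trace(X @ (k @ Z @ k.T))

ts = np.linspace(0.0, np.pi, 20001)
vals = np.array([f(t) for t in ts])
t_star = ts[np.argmax(vals)]
AdZ = rot(t_star) @ Z @ rot(t_star).T

# Part 1: at the maximum, Ad(k)Z commutes with X.
assert np.linalg.norm(X @ AdZ - AdZ @ X) < 1e-3

# Part 2: the maximizer lies in the closed chamber of X;
# here the chamber of X is {diag(a,-a) : a >= 0}, so Ad(k)Z is ~ X.
assert np.linalg.norm(AdZ - X) < 1e-3
```

In this rank-one picture $f(t)=-\sin 2t$, so the unique maximum on $[0,\pi)$ is attained where the rotated vector has been carried onto $X$, as the proposition predicts.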
[Figure: the intersection $\Ad(K)Z \cap \mfa$ — the points $\Ad(k_1)Z,\dots,\Ad(k_6)Z$ distributed among the Weyl chambers of $\mfa$, shown together with a regular vector $X$.]
3. We may assume that $Z$ is a critical point of $f$ by precomposing $f$ with a left translation of $K$. The differential $(\dd{\orb_Z})_1\colon \mathfrak{k} \to T_Z \Ad(K)Z$ is given by $-\ad Z$ and has kernel $\mfk_Z = Z_{\mfk}(Z) = \{ W \in \mfk \mid [W,Z]=0\}$ with orthogonal complement $\mfk^Z=\bigoplus_{\alpha \in \Lambda : \alpha(Z)>0}\mfk_\alpha $. Then $k$ is a critical point for $f$ if and only if $Z(k)=\Ad(k)Z$ is a critical point for $B(X,\cdot)$. The Hessians satisfy
$$ \Hess(B(X,\cdot))((\dd{\orb_Z})_k U, (\dd{\orb_Z})_k V)_{\Ad(k)Z} = \Hess(f)(U,V)_k ,$$
so by the calculation above the critical points are nondegenerate, occur at $\Ad(k)Z$ when $[\Ad(k)Z,X]=0$, and have index the number of positive signs in the collection $\alpha(X)\alpha(\Ad(k)Z)$ (weighted by $\dim \mfk_\alpha$) as $\alpha$ ranges over the roots with $\alpha(Z)>0$. These products can all be nonnegative only when $\Ad(k)Z$ lies in the closed Weyl chamber containing $X$.
For uniqueness, observe that any two maximizers $Z',Z''$ lie in the closed Weyl chamber containing $X$, and suppose $\Ad(k)(Z')=Z''$. The adjoint action takes walls to walls so $\Ad(k)$ preserves the facet spanned by $Z',Z''$ and hence fixes its soul (i.e. its center of mass) <cit.>. By Proposition <ref>, $\Ad(k)$ fixes each point of the face, and in particular $Z'=Z''$.
4. Since $(\mfp,B)$ is a Euclidean space,
$$ d_{\mfp}(X,Y)^2 = B(X-Y,X-Y) = B(X,X)+B(Y,Y) - 2 B(X,Y) $$
so if $X,Y$ are unit vectors in $\mfp$
$$ d_{\mfp}(X,Y)^2 = 2(1-B(X,Y))$$
and the distance function $d_{\mfp}(X,\cdot)$ is minimized when $B(X,\cdot)$ is maximized. Then by part 3, the distance function is uniquely minimized at the unique $\Ad(k)Z$ in the closed Weyl chamber containing $X$.
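The identity $d_{\mfp}(X,Y)^2 = 2(1-B(X,Y))$ for unit vectors is just the Euclidean law of cosines; a quick numerical sanity check, with the standard inner product on $\mathbb{R}^n$ standing in for the positive definite form $B$ on $\mfp$:

```python
import numpy as np

# Check d(X,Y)^2 = 2(1 - <X,Y>) for random unit vectors in R^n,
# with the standard inner product standing in for B on p.
rng = np.random.default_rng(0)
for _ in range(100):
    X = rng.normal(size=6); X /= np.linalg.norm(X)
    Y = rng.normal(size=6); Y /= np.linalg.norm(Y)
    lhs = np.linalg.norm(X - Y) ** 2
    rhs = 2.0 * (1.0 - X @ Y)
    assert abs(lhs - rhs) < 1e-12
```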
The next two results are part of the standard theory of symmetric spaces. Since we have already proven Proposition <ref> it is convenient to give the proofs.
Every $K$-orbit in the unit sphere $S(\mfp)$ intersects each closed spherical Weyl chamber exactly once.
Let $X$ be a regular vector in a chosen Weyl chamber. The $K$-orbit of a unit vector $Z$ is compact and therefore the function $d_{\mfp}(X,\cdot)$ has a global minimum on $\Ad(K)Z$. But that function has a unique local minimum which must lie in the chosen closed Weyl chamber.
For a point $p \in \X$, maximal abelian subspace $\mfa \subset \mfp$ and closed Euclidean Weyl chamber $V \subset \mfa$, we call $(p,\mfa,V)$ a point-chamber triple.
For any two point-chamber triples $(p,\mathfrak{a},V),(p',\mathfrak{a}',V')$ there exists an isometry $g \in G$ taking $(p,\mathfrak{a},V)$ to $(p',\mathfrak{a}',V')$. If $g$ stabilizes $(p,\mfa,V)$, then it acts trivially on it.
The group $G$ acts transitively on $\X$, so we may assume that $p'=p$ and then show that an element of $K=\Stab_G(p)$ takes $(\mathfrak{a},V)$ to $(\mathfrak{a}',V')$. Choose any regular unit vectors $X\in V$, $Z \in V'$. Then Proposition <ref> implies there is an element $k \in K$ such that $\Ad(k)Z$ is in the same open Weyl chamber as $X$. Regular vectors lie in unique Weyl chambers in unique maximal abelian subspaces, so $\Ad(k)\mathfrak{a}'=\mathfrak{a}$ and $\Ad(k)V'=V$.
If $g$ fixes $p$ and stabilizes $(\mfa,V)$, then it acts trivially on $V$ by Corollary <ref>.
The above isometry is not necessarily unique. For example, consider hyperbolic space $\mathbb{H}^n, n\ge 3$. There a Euclidean Weyl chamber is just a geodesic ray, which has infinite pointwise stabilizer. However the action on $V$ is unique.
As a corollary, we may define the vector-valued distance function
\begin{equation}\label{vector valued distance}
\vec{d}\colon \X \times \X \to (\X \times \X) /G \eqqcolon V_{mod}
\end{equation}
to have range a model closed Euclidean Weyl chamber. One could think of $V_{mod}$ as some preferred Euclidean Weyl chamber, but it is better to think of it as an abstract Euclidean cone with no reference to a preferred basepoint, flat or Weyl chamber in $\X$. There is an “opposition involution” $\iota\colon V_{mod} \to V_{mod}$ induced by any geodesic symmetry $S_p$. On a model pointed flat $\mfa_{mod}$, the composition of $-\id$ with the longest element of the Weyl group restricts to $\iota$ on the model positive chamber $V_{mod}$. Note that $\iota \vec{d}(p,q) = \vec{d}(q,p)$.
The triangle inequality implies that for any $p,p',q,q'$ in a metric space,
$$ \abs{d(p,q)-d(p',q')} \le d(p,p')+d(q,q').$$
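This scalar inequality is elementary; a quick check with random points in Euclidean space (one metric space among many):

```python
import numpy as np

# Verify |d(p,q) - d(p',q')| <= d(p,p') + d(q,q') for random points in R^3.
rng = np.random.default_rng(1)
d = lambda a, b: np.linalg.norm(a - b)
for _ in range(1000):
    p, p2, q, q2 = rng.normal(size=(4, 3))
    assert abs(d(p, q) - d(p2, q2)) <= d(p, p2) + d(q, q2) + 1e-12
```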
The next result is the “vector-valued triangle inequality” for symmetric spaces.
For points $p,p',q,q'$ in $\X$,
$$ \lvert \vec{d}(p,q)-\vec{d}(p',q') \rvert \le d(p,p')+d(q,q').$$
In a moment we will use the proposition to prove that for any $p,q,q'$ in $\X$,
\begin{equation}\label{vector valued reverse triangle inequality}
\lvert \vec{d}(p,q)-\vec{d}(p,q') \rvert \le d(q,q') ,
\end{equation}
from which the general inequality follows easily:
\begin{align*}
\lvert \vec{d}(p,q)-\vec{d}(p',q') \rvert & = \lvert \vec{d}(p,q)-\vec{d}(p,q') +\vec{d}(p,q')-\vec{d}(p',q')\rvert \\
& \le \lvert \vec{d}(p,q)-\vec{d}(p,q') \rvert +\lvert \iota \vec{d}(q',p)-\iota \vec{d}(q',p')\rvert \le d(q,q') + d(p,p').
\end{align*}
To prove <ref>, let $X,Z \in \mfp$ such that $e^Xp=q$ and $e^Zp=q'$. Choose a closed Weyl chamber $V$ containing $X$ and the unique $Z'$ in the $K$-orbit of $Z$ in that Weyl chamber. The map $\vec{d}(p,e^{(\cdot)}p)\colon V\to V_{mod}$ is an isometry. Note that $k\mapsto B(X,\Ad(k)Z)$ is maximized when $k\mapsto B(X,\Ad(k)Z)/\lvert X\rvert \lvert Z\rvert$ is maximized, so by Proposition <ref>
\begin{align*}
\lvert \vec{d}(p,q)-\vec{d}(p,q') \rvert^2 & = \lvert X-Z' \rvert^2 = \lvert X \rvert^2 + \lvert Z' \rvert^2 - 2 \langle X,Z' \rangle \\
& \le \lvert X \rvert^2 +\lvert Z \rvert^2 - 2 \langle X,Z \rangle = d_{\mfp} (X,Z)^2 \le d(q,q')^2
\end{align*}
since the Riemannian exponential map is distance non-decreasing by the nonpositive curvature of $\X$.
§.§ Regularity in maximal abelian subspaces
A spherical Weyl chamber is the intersection of a Euclidean Weyl chamber with the unit sphere $S$ in $\mathfrak{a}$. A spherical Weyl chamber $\sigma$ is a spherical simplex, and each of its faces $\tau$ is called a Weyl face. Each Euclidean Weyl face is an intersection of walls of $\mathfrak{a}$, and each spherical Weyl face is such an intersection with $S$ as well. The interior of a face $\interior(\tau)$ is obtained by removing its proper faces; the interiors of faces are called open simplices. The unit sphere $S$ is a disjoint union of the open simplices. If $\tau$ is the smallest simplex containing a unit vector $X$ in its interior, we say that $\tau$ is spanned by $X$ and $X$ is $\tau$-spanning.
We will quantify the regularity of tangent vectors using a parameter $\alpha_0>0$. We will show in Proposition <ref> that our definition of regularity is equivalent to the definition in <cit.>. A similar definition appears in <cit.>.
Let $p \in \X$, let $\sigma$ be a closed spherical Weyl chamber and let $\tau$ be a face of $\sigma$. Consider the corresponding maximal abelian subspace $\mfa$ in $\mfp$ and set of simple roots $\Delta$. We define
\begin{equation}\label{delta tau}
\Delta_\tau = \{ \alpha \in \Delta \mid \alpha(\tau) = 0 \}, \quad \Delta_\tau^+ = \{ \alpha \in \Delta \mid \alpha(\interior \tau)>0 \}.
\end{equation}
A vector $X \in \mfa$ is called $(\alpha_0,\tau)$-regular if for each $\alpha \in \Delta_\tau^+, \alpha(X) \ge \alpha_0 \abs{X}$. A geodesic $c$ at $p$ is called $(\alpha_0,\tau)$-regular if $c'(0) = \evp X$ for an $(\alpha_0,\tau)$-regular vector $X \in \mfa$.
It is immediate from the definition that $X$ is $(\alpha_0,\sigma)$-regular for some $\alpha_0 >0$ and $\sigma$ if and only if $X$ is regular.
We define
\begin{equation}\label{lambda tau}
\Lambda_\tau \coloneqq \{ \alpha \in \Lambda \mid \alpha(\tau) = 0 \}, \quad \Lambda_\tau^+ \coloneqq \{ \alpha \in \Lambda \mid \alpha(\interior \tau)>0 \}
\end{equation}
Observe that $X$ is $(\alpha_0,\tau)$-regular if and only if for each root $\alpha \in \Lambda_\tau^+$ we have $\alpha(X)\ge\alpha_0$.
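As a concrete illustration, the regularity test from the definition can be evaluated in a planar $A_2$ picture (the root coordinates below are illustrative choices, not canonical):

```python
import numpy as np

# (alpha_0, sigma)-regularity in a planar A2 picture: two simple roots,
# chamber V = {alpha1 >= 0, alpha2 >= 0}.  Coordinates are illustrative.
alpha1 = np.array([1.0, 0.0])
alpha2 = np.array([-0.5, np.sqrt(3) / 2.0])

def is_regular(X, alpha0):
    """(alpha0, sigma)-regular: every simple root is >= alpha0 * |X| on X."""
    return all(a @ X >= alpha0 * np.linalg.norm(X) for a in (alpha1, alpha2))

X_interior = np.array([1.0, 1.0])   # well inside the chamber
X_on_wall = np.array([0.0, 1.0])    # lies on the wall ker(alpha1)
assert is_regular(X_interior, 0.2)
assert not is_regular(X_on_wall, 0.2)
```

Shrinking $\alpha_0$ toward $0$ enlarges the set of $(\alpha_0,\sigma)$-regular vectors toward the full open chamber.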
The signed distance from a vector $A \in \mfa$ to the wall $\ker \alpha$ is $\alpha(A)/\abs{\alpha} \ge \alpha(A)/\kappa_0$.
A unit vector $X$ is $(\alpha_0,\tau)$-spanning if it is $\tau$-spanning and $(\alpha_0,\tau)$-regular.
We may now record a mild extension of Proposition <ref> which will appear in Lemma <ref>.
Suppose $X\in \mfp$ is an $(\alpha_0,\tau)$-regular unit vector and $Z \in \mfp$ is a $(\zeta_0,\tau)$-spanning unit vector. Then $Z$ is the unique maximum of $B(X,\cdot)\colon \Ad(K)Z \to \R$, and for all $U,V \in T_Z \Ad(K)Z$,
$$ \abs{\Hess(B(X,\cdot))(U,V)_Z} \ge \alpha_0\zeta_0 \abs{B_p(U,V)} .$$
The proof of Proposition <ref> goes through in this setting, requiring only the following observation: if $X$ is $\tau$-regular and lies in a spherical Weyl chamber $\sigma$, then $\tau$ is a face of $\sigma$. If $U,V \in T_Z \Ad(K)Z$ correspond to $U',V' \in \mfk^\tau$ under the identification $T_Z \Ad(K)Z = \mfk^\tau$, we showed that $\Hess(B(X,\cdot))(U,V)_Z = B( \ad(Z) \ad(X) U',V')$.
§.§ The visual boundary $\partial \X$
We say two unit speed geodesic rays $c_1, c_2$ are asymptotic if there exists a constant $D>0$ such that
$$ d(c_1(t),c_2(t)) \le D $$
for all $t \ge 0$. The asymptote relation is an equivalence relation on unit-speed geodesic rays and the set of asymptote classes is called the visual boundary of $\X$ and denoted by $\partial \X$. There is a natural topology on $\partial \X$ called the cone topology, where for each point $p \in \X$ the map $S(T_p \X) \to \partial \X$ (which takes a unit tangent vector to the geodesic ray with that derivative) is a homeomorphism. In fact the cone topology extends to $\overline{\X}\coloneqq\X \cup \partial \X$, yielding a space homeomorphic to a unit ball of the same dimension as $\X$.
If $c_1$ and $c_2$ are asymptotic geodesic rays then for all $t \ge 0$,
$$ d(c_1(t),c_2(t)) \le d(c_1(0),c_2(0)) .$$
The left hand side is convex <cit.> and bounded above, hence (weakly) decreasing.
We have a natural action of $G$ on $\partial \X$: $g[c]= [g \circ c]$. For $\eta \in \partial \X$, we denote the stabilizer
$$ G_\eta \coloneqq \{ g \in G \mid g\eta =\eta \} $$
and call $G_\eta$ the parabolic subgroup fixing $\eta$. (Note that in <cit.> and <cit.>, $G$ itself is a parabolic subgroup, but in this paper a parabolic subgroup is automatically a proper subgroup.) When $\eta$ is regular, $G_\eta$ is a minimal parabolic subgroup of $G$ (sometimes called a Borel subgroup).
Let $\eta,\eta'$ be ideal points in $\partial \X$, represented by the geodesics $c(t)=e^{tX}p$ and $c'(t)=e^{tY}q$. Then since $G$ is transitive on point-chamber triples, we can find $g \in G$ such that $g q=p$ and $\Ad(g)Y$ lies in a (closed) Euclidean Weyl chamber in common with $X$. In particular, every $G$-orbit in $\partial \X$ intersects every spherical Weyl chamber exactly once.
Each unit sphere $S(\mfp)$ has the structure of a simplicial complex compatible with the action of $G$. By Theorem <ref> this simplicial structure passes to $\partial \X$, which is in fact a thick spherical building whose apartments are the ideal boundaries of maximal flats. In <cit.> the spherical building structure on $\partial \X$ is used to describe the regularity of geodesic rays. We have used the restricted roots to define regularity and will show the notions are equivalent in Proposition <ref>. When we need to distinguish between simplices in $S(\mfp)$ and simplices in $\partial \X$ we call the former spherical and the latter ideal. Compared to a spherical simplex, an ideal simplex lacks the data of a basepoint $p \in \X$.
Define the type map to be
$$ \theta\colon \partial \X \to \partial \X /G \eqqcolon \sigma_{mod} $$
with range the model ideal Weyl chamber. The opposition involution $\iota\colon V_{mod} \to V_{mod}$ induces an opposition involution $\iota\colon \sigma_{mod} \to \sigma_{mod}$, see the discussion after Equation <ref> in the previous subsection. The faces of $\sigma_{mod}$ are called model simplices. For a model simplex $\tau_{mod} \subset \sigma_{mod}$, we define the flag manifold $\Flagt$ to be the set of simplices $\tau$ in $\partial \X$ such that $\theta(\tau)=\tau_{mod}$. If ideal points $\eta,\eta'$ span the same simplex $\tau$, then they correspond to the same parabolic subgroup, so we define $G_\tau\coloneqq G_\eta$. A model simplex corresponds to the conjugacy class of a parabolic subgroup of $G$.
§.§ Regularity for ideal points
[Figure: $(\alpha_0,\tau_{mod})$-regularity for various choices of $\tau_{mod}$ — three copies of a model simplex with vertices $\tau_1,\tau_2,\tau_3$, each with a shaded region of $(\alpha_0,\tau_{mod})$-regular directions for a different choice of face.]
Theorem <ref> implies that “model roots” are well-defined: if $g \in G$ takes the point-chamber triple $(p,\mfa,V)$ to $(p',\mfa',V')$ and takes the simplex $\tau \subset \partial V$ to $\tau' \subset \partial V'$, it also takes $\Delta_\tau$ to ${\Delta'}_{\tau'}$ and $\Delta_\tau^+$ to ${\Delta'}_{\tau'}^+$, where $\Delta$ is the set of simple roots in $\mfa^\ast$ corresponding to $V$ and $\Delta'$ is the set of simple roots in ${\mfa'}^\ast$ corresponding to $V'$.
An ideal point $\eta \in \partial \X$ is called $(\alpha_0,\tau)$-regular if every geodesic in its asymptote class is $(\alpha_0,\tau)$-regular. As soon as one representative of an ideal point is $(\alpha_0,\tau)$-regular, every representative is. A vector, geodesic, or ideal point is $(\alpha_0,\tau_{mod})$-regular if it is $(\alpha_0,\tau)$-regular for some simplex $\tau$ of type $\tau_{mod}$.
The open star of a simplex $\tau$, denoted $\ost(\tau)$, is the union of open simplices $\nu$ whose closures intersect $\tau$. Equivalently, it is the collection of $\tau$-regular points in $\partial X$. For a model simplex, $\interior_{\tau_{mod}}(\sigma_{mod})$ is the collection of $\tau_{mod}$-regular ideal points in $\sigma_{mod}$. Equivalently, it is $\sigma_{mod} \setminus \bigcup_{\alpha \in \Delta_\tau^+} \ker \alpha $.[ In <cit.> the notation $\ost(\tau_{mod})$ was used for what is called $\interior_{\tau_{mod}}(\sigma_{mod})$ here and in <cit.>.] We have
$$ \tau = \sigma \cap \bigcap_{\alpha \in \Delta_\tau} \ker \alpha, \quad \interior_\tau \sigma = \{ \eta \in \sigma \mid \forall \alpha \in \Delta_\tau^+, \alpha(\eta) >0 \}, \quad \partial_\tau \sigma = \sigma \cap \bigcup_{\alpha \in \Delta_\tau^+} \ker \alpha .$$
There is a decomposition $\sigma_{mod}= \interior_{\tau_{mod}} \sigma_{mod} \sqcup \partial_{\tau_{mod}} \sigma_{mod}$.
We call the set of $(\alpha_0,\tau)$-regular points the “$\alpha_0$-star of $\tau$.” We define the closed cone on the $\alpha_0$-star of $\tau$
$$ V(p,\st(\tau),\alpha_0) \coloneqq \{ c_{px}(t) \mid t \in \left[ 0,\infty \right), x \text{ is }
(\alpha_0,\tau)\text{-regular} \} $$
the cone on the open star of $\tau$
$$ V(p,\ost(\tau)) \coloneqq \{ c_{px}(t) \mid t \in \left[ 0,\infty \right), x \text{ is }
\tau\text{-regular} \} $$
and the Euclidean Weyl sector
$$ V(p,\tau) \coloneqq \{ c_{px}(t) \mid t \in \left[ 0,\infty \right), x \text{ is }
\tau\text{-spanning} \} .$$
It follows from Lemma <ref> that the Hausdorff distance between $V(p,\st(\tau),\alpha_0)$ and $V(q,\st(\tau),\alpha_0)$ is bounded above by $d(p,q)$, and the same holds for the open cones $V(p,\ost(\tau))$ and $V(q,\ost(\tau))$ and for the Weyl sectors $V(p,\tau), V(q,\tau)$.
We now describe the notion of regularity used in <cit.> and show it is equivalent to our definition. We always work with respect to a fixed type $\tau_{mod}$. A subset $\Theta \subset \sigma_{mod}$ is called $\tau_{mod}$-Weyl convex if its symmetrization $W_{\tau_{mod}} \Theta \subset a_{mod}$ is a convex subset of the model apartment $a_{mod}$. Here we think of the Weyl group $W$ as acting on the visual boundary $a_{mod}$ of a model flat $\mfa_{mod}$ with distinguished Weyl chamber $\sigma_{mod}$ and $W_{\tau_{mod}}$ is the subgroup of $W$ stabilizing the simplex $\tau_{mod}$. One then quantifies $\tau_{mod}$-regular ideal points by fixing an auxiliary compact $\tau_{mod}$-Weyl convex subset $\Theta$ of $\interior_{\tau_{mod}}(\sigma_{mod}) \subset \sigma_{mod}$.
An ideal point $\eta$ is $\Theta$-regular if $\theta(\eta)\in \Theta$. It is easy to see that the notions of $\Theta$-regularity and $(\alpha_0,\tau_{mod})$-regularity are equivalent.
Let $\Delta_{\tau_{mod}} \subset \Delta$ be the model simple roots corresponding to a simplex $\tau_{mod} \subset \sigma_{mod}$. Then
* If $\Theta$ is a compact subset of $\interior_{\tau_{mod}}(\sigma_{mod})$ then every $\Theta$-regular ideal point is $(\alpha_0,\tau_{mod})$-regular for $\alpha_0 = \min_{\alpha \in \Delta_{\tau_{mod}}^+} \alpha (\Theta)$.
* Every $(\alpha_0,\tau_{mod})$-regular ideal point is $\Theta$-regular for $\Theta= \{ \zeta \in \sigma_{mod} \mid \forall \alpha \in \Delta_{\tau_{mod}}^+, \alpha(\zeta) \ge \alpha_0 \} $.
We first prove $1$. Since $\Theta$ is a compact subset of $\sigma_{mod} \setminus \bigcup_{\alpha \in \Delta_{\tau_{mod}}^+} \ker \alpha $, the quantity $ \min \{ \alpha(\zeta) \mid \alpha \in \Delta_{\tau_{mod}}^+, \zeta \in \Theta \}$
exists and is positive.
We now prove $2$. The subset $\Theta= \{\zeta \in \sigma_{mod} \mid \forall \alpha \in \Delta_{\tau_{mod}}^+, \alpha(\zeta) \ge \alpha_0 \}$ has symmetrization $W_{\tau_{mod}}\Theta= \{\xi \in a_{mod} \mid \forall \alpha \in \Delta_{\tau_{mod}}^+, \alpha(\xi) \ge \alpha_0 \}$ which is an intersection of finitely many half-spaces together with the unit sphere, so it is compact and convex. Furthermore $\Theta = \sigma_{mod} \cap W_{\tau_{mod}} \Theta$ is a compact subset of $\interior_{\tau_{mod}}(\sigma_{mod}) \cap \sigma_{mod}$.
§.§ Generalized Iwasawa decomposition
Let $p$ be a point in $\X$, $\tau \in \Flagt$ and let $X \in \mfp$ be $\tau$-spanning. Choose a Cartan subspace $X \in \mfa \subset \mfp$, with restricted roots $\Lambda$ and a choice of simple roots $\Delta$ associated to $\sigma \supset \tau$. Recalling the notation in <ref> following Definition <ref> we define
* $\mfa_\tau = Z(X) \cap \mfp = \{ Y \in \mfp \mid [X,Y]=0 \}$ and $A_\tau = \exp(\mfa_\tau)$. Note that $\mfa_\tau$ and $A_\tau$ depend on $p$.
* The (nilpotent) horocyclic subalgebra $\mfn_\tau = \bigoplus_{\alpha \in \Lambda_\tau^+} \mfg_\alpha$ and the (unipotent) horocyclic subgroup $N_\tau = \exp( \mfn_\tau)$.
* The generalized Iwasawa decomposition of $\mfg$ is $ \mfg = \mfk \oplus \mfa_\tau \oplus \mfn_\tau .$
* The generalized Iwasawa decomposition of $G$ is $G = K A_\tau N_\tau = N_\tau A_\tau K$. The indicated decomposition is unique.
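In the classical special case $\tau=\sigma$ with $G=\mathrm{SL}(2,\mathbb{R})$, the Iwasawa decomposition $G=KAN$ is just Gram–Schmidt. The following numerical sketch illustrates only this special case (it is not the generalized decomposition itself):

```python
import numpy as np

# Iwasawa decomposition g = k a n in SL(2,R) via QR (Gram-Schmidt).
g = np.array([[2.0, 1.0], [0.5, 1.0]])
g /= np.sqrt(np.linalg.det(g))           # normalize so det(g) = 1

q, r = np.linalg.qr(g)                   # g = q r, r upper triangular
# Fix signs so the diagonal of r is positive; then k lies in SO(2) = K.
s = np.sign(np.diag(r))
k = q * s                                # flip columns of q
r = (r.T * s).T                          # flip rows of r
a = np.diag(np.diag(r))                  # positive diagonal part: A
n = np.diag(1.0 / np.diag(r)) @ r        # unipotent upper triangular: N

assert np.allclose(k @ a @ n, g)
assert np.allclose(k.T @ k, np.eye(2))                       # k orthogonal
assert np.allclose(np.tril(n, -1), 0) and np.allclose(np.diag(n), 1)
```

Since $\det g = 1$ and $\det a > 0$, the orthogonal factor automatically has determinant $1$, i.e. $k \in \mathrm{SO}(2)$.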
Note that our notation differs from <cit.>, where $N_\tau$ denotes the full horocyclic subgroup at $\tau$ and $A_\tau$ is the group of translations of the flat factor of the parallel set defined by $p$ and $\tau$, see Section <ref>. In our notation, $N_\tau$ is the unipotent radical of the parabolic subgroup $G_\tau$, see <cit.>.
§.§ Antipodal simplices, parallel sets and horocycles
We say a pair of points $\xi,\eta$ in $\partial \X$ are antipodal if there exists a geodesic $c$ with $c(-\infty)=\xi$ and $c(+\infty)=\eta$. Equivalently, $\xi,\eta$ are antipodal if there exists a geodesic symmetry $S_p$ taking $\xi$ to $\eta$.
A pair of simplices $\tau_\pm$ are antipodal if there exists some $p \in \X$ such that $S_p \tau_- =\tau _+$, or equivalently if there exists a geodesic $c$ with $c(-\infty) \in \interior (\tau_ -)$ and $c(+\infty) \in \interior (\tau_+)$. If a model simplex $\tau_{mod}$ is $\iota$-invariant then every simplex $\tau$ of type $\tau_{mod}$ has the same type as any of its antipodes.
For antipodal simplices $\tau_\pm$, the parallel set $P(\tau_-,\tau_+)$ is the union of (images of) geodesics $c$ with $c(-\infty) \in \tau_-$ and $c(+\infty) \in \tau_+$. Given one such geodesic $c$, we may alternatively define $P(\tau_-,\tau_+)=P(c)$ to be the union of geodesics parallel to $c$, or equivalently to be the union of maximal flats containing $c$. Antipodal $\tau_{mod}$-regular points $\xi,\eta$ lie in the boundary of a unique parallel set $P=P(\tau(\xi),\tau(\eta))$, where $\tau(\xi)$ (resp. $\tau(\eta)$) is the unique simplex of type $\tau_{mod}$ in some/every Weyl chamber containing $\xi$ (resp. $\eta$). We say that $P(\tau_-,\tau_+)$ joins $\tau_-$ and $\tau_+$. The parallel set joining a pair of antipodal Weyl chambers is a maximal flat.
The horocycle centered at $\tau \in \Flag(\tau_{mod})$ through $p \in \X$ is denoted $H(p,\tau)$ and is defined to be the orbit $N_\tau \cdot p$. For any $p \in \X$ and $\hat{\tau}$ antipodal to $\tau$, the horocycle $H(p,\tau)$ intersects the parallel set $P(\hat{\tau},\tau)$ in exactly one point. A horocycle is the union of basepoints of strongly asymptotic Weyl sectors/geodesic rays <cit.>.
§.§ The $\zeta$-angle and Tits angle
We follow <cit.> in defining the $\zeta$-angle between two simplices at a point $p \in \X$. For fixed $p \in \X$ and $\zeta$, the $\zeta$-angle provides a metric on $\Flagt$ by viewing it as embedded in the tangent space at $p$ and restricting the angle metric $\angle_p$ to the vectors of type $\zeta$. The $\zeta$-angle also makes sense for $\tau_{mod}$-regular directions by projecting to $\Flagt$. To make this definition, we first fix the auxiliary data of a $(\zeta_0,\tau_{mod})$-spanning $\iota$-invariant model ideal point $\zeta=\zeta_{mod} \in \interior (\tau_{mod})$.
* For a simplex $\tau \in \Flag(\tau_{mod})$ let $\zeta(\tau)$ denote the unique point in $\interior (\tau)$ of type $\zeta$.
* For a $\tau_{mod}$-regular ideal point $\xi \in \partial \X$, let $\zeta(\xi)=\zeta(\tau(\xi))$ where $\tau(\xi)$ is the simplex spanned by $\xi$.
* Let $p \in \X$, let $\tau,\tau'$ be simplices in $\Flag(\tau_{mod})$ and let $x,y \in \overline{\X}$ with $px$ and $py$ $\tau_{mod}$-regular. The $\zeta$-angle is given by
\begin{align*}
\angle_p^\zeta(\tau,\tau') & \coloneqq \angle_p(\zeta(\tau),\zeta(\tau')), \\
\angle_p^\zeta(\tau,y) & \coloneqq \angle_p(\zeta(\tau),\zeta(py)), \\
\angle_p^\zeta(x,y) & \coloneqq \angle_p(\zeta(px),\zeta(py)). \\
\end{align*}
Note there is a typo in the definition of $\zeta$-angle in <cit.>.
[Figure: $\zeta \in \sigma_{mod}$ — the model chamber $\sigma_{mod}$ with the fixed interior point $\zeta$ marked.]
[Figure: the $\zeta$-angle between $X$ and $Y$ — two chambers $\sigma(X)$ and $\sigma(Y)$ based at a common point, with the type-$\zeta$ directions $\zeta(X)$ and $\zeta(Y)$; the $\zeta$-angle is the angle between $\zeta(X)$ and $\zeta(Y)$.]
For $\xi,\eta \in \partial \X$, the Tits angle is
$$ \angle_{Tits}(\xi,\eta) \coloneqq \sup_{p\in \X} \angle_p(\xi,\eta) .$$
Ideal points $\xi,\eta$ are antipodal if and only if their Tits angle is $\pi$. For $p \in \X$, $\xi,\eta\in \partial \X$, the equality $\angle_p(\xi,\eta) = \angle_{Tits}(\xi,\eta)$ holds if and only if there is a maximal flat $F$ containing $p$ with $\xi,\eta \in \partial F$ and moreover for any $\xi,\eta \in \partial \X$, there exists some maximal flat $F$ with $\xi,\eta \in \partial F$ <cit.>.
For simplices $\tau,\tau'$ in $\Flagt$, we may define
$$ \angle_{Tits}^\zeta(\tau,\tau') \coloneqq \angle_{Tits}(\zeta(\tau),\zeta(\tau') ).$$
There are only finitely many possible Tits angles between ideal points of fixed type. Therefore, there exists a bound $\varepsilon(\zeta_{mod})$ such that if $\angle_{Tits}^\zeta(\tau,\tau') > \pi - \varepsilon(\zeta_{mod})$ then $\tau$ and $\tau'$ are antipodal, as observed in <cit.>. By Remark <ref>, we have
$$ \sin(\frac{1}{2} \varepsilon(\zeta_{mod}) ) = \min_{\alpha \in \Lambda_{\tau_{mod}}^+} \ \frac{\alpha(\zeta_{mod})}{\abs{\alpha}} \ge \frac{\zeta_0}{\kappa_0} .$$
By the definition of Tits angle, the same holds if the $\zeta$-angle at any point is strictly within $\varepsilon(\zeta_{mod})$ of $\pi$: the inequality
$$ \angle_{Tits}^\zeta(\tau,\tau') \ge \angle_p^\zeta(\tau,\tau') > \pi - \varepsilon(\zeta_{mod}) $$
implies that $\tau$ and $\tau'$ are antipodal. Since $\zeta_0 \le \kappa_0 < 2 \kappa_0$ we have
$$ \sin \frac{1}{2} \frac{\zeta_0^2}{\kappa_0^2} \le \frac{1}{2} \frac{\zeta_0^2}{\kappa_0^2} < \frac{\zeta_0}{\kappa_0} \le \sin \frac{1}{2} \varepsilon(\zeta_{mod}) ,$$
and we obtain the estimate $\frac{\zeta_0^2}{\kappa_0^2} < \varepsilon(\zeta_{mod})$. We record this observation in the following lemma.
If the inequality $\angle_p^\zeta(\tau_-,\tau_+) \ge \pi - \frac{\zeta_0^2}{\kappa_0^2}$ holds for some $p \in \X$ then $\tau_-$ is antipodal to $\tau_+$. In other words, $\frac{\zeta_0^2}{\kappa_0^2} <\varepsilon(\zeta_{mod})$.
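The chain of inequalities behind the lemma can be sanity-checked numerically for a few admissible pairs $0 < \zeta_0 \le \kappa_0$ (sample values only):

```python
import numpy as np

# Check sin(z^2 / (2 k^2)) <= z^2 / (2 k^2) < z / k for 0 < z <= k.
for z, k in [(0.1, 1.0), (0.5, 0.7), (0.9, 0.9)]:
    x = 0.5 * z**2 / k**2
    assert np.sin(x) <= x       # sin is bounded by its argument on [0, inf)
    assert x < z / k            # since z <= k < 2k
```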
# Modeling ice crystal growth using the lattice Boltzmann method
Q. Tan S.A. Hosseini<EMAIL_ADDRESS>A. Seidel-Morgenstern D.
Thévenin H. Lorenz Laboratory of Fluid Dynamics and Technical Flows,
University of Magdeburg “Otto von Guericke”, D-39106 Magdeburg, Germany
Department of Mechanical and Process Engineering, ETH Zürich, 8092 Zürich,
Switzerland Max Planck Institute for Dynamics of Complex Technical Systems
(MPI DCTS), 39106 Magdeburg, Germany
###### Abstract
Given the multitude of growth habits, pronounced sensitivity to ambient
conditions and wide range of scales involved, snowflake crystals are one of
the most challenging systems to model. The present work focuses on the
development and validation of a coupled flow/species/phase solver based on the
lattice Boltzmann method. It is first shown that the model is able to
correctly capture species and phase growth coupling. Furthermore, through a
study of crystal growth subject to ventilation effects, it is shown that the
model correctly captures hydrodynamics-induced asymmetrical growth. The
validated solver is then used to model snowflake growth under different
ambient conditions with respect to humidity and temperature in the plate-
growth regime section of the Nakaya diagram. The resulting crystal habits are
compared to both numerical and experimental reference data available in the
literature. The overall agreement with experimental data shows that the
proposed algorithm correctly captures both the crystal shape and the onset of
primary and secondary branching instabilities. As a final part of the study
the effects of forced convection on snowflake growth are studied. It is shown,
in agreement with observations in the literature, that under such condition
the crystal exhibits non-symmetrical growth. The non-uniform humidity around
the crystal due to forced convection can even result in the coexistence of
different growth modes on different sides of the same crystal.
###### keywords:
phase-field models, Lattice Boltzmann method, anisotropic snowflake growth,
Ventilation effects.
###### MSC:
[2010] 00-01, 99-00
## 1 Introduction
Ice crystals and their growth are of interest in many fields such as
environmental sciences, agriculture and industries such as aviation (which
faces de-icing issues). Due to the wide spectrum of habits they take on, they have
been the topic of scientific research for decades [1, 2, 3]. Johannes Kepler
was the first person to explore the growth mechanisms of snowflakes in 1611,
and attempted to explain the possible origins of snow crystal symmetry [4].
With the development of photography in the late 19th century, Wilson Bentley
[5] collected thousands of snow crystal images, published in 1931 [6]. In 1938, Ukichiro
Nakaya and his co-workers [7, 8, 9] conducted comprehensive experimental
studies of ice crystal growth to determine the relationship between growth
conditions and crystal shape. The different growth modes and associated habits
as a function of temperature and supersaturation (humidity) are illustrated in
Fig. 1.
Figure 1: The snow crystal morphology diagram, showing the morphology of ice
crystals growing from water vapor in air at 1 bar as a function of temperature
and supersaturation. This figure is taken from [10, 11].
Overall, based on the dominant growth direction, snow-flakes can be classified
as pertaining to one of these two categories: plates, or columns/needles. The
growth mode is essentially dictated by the temperature, while growth rate, and
the associated instabilities, are affected by both temperature and humidity.
As observed in Fig. 1, crystal habits alternate between flat and columnar.
Transitions between the different modes, i.e. plate and column, occur at
around -5, -10 and -20 °C. Furthermore, higher humidity, and thus a higher
supersaturation as the crystallization driving force, contributes to more
pronounced instability effects and therefore more complex shapes. Despite the
efforts by Ukichiro Nakaya [9] and other researchers, much of the
phenomenology behind the growth of snowflakes still remains uncertain [12, 13,
14, 15]. In 1958, Mason [16] suggested that the basic habit is determined by
the surface diffusion rates. Later, Nelson and Knight [17] suggested that
layer nucleation rates influence the morphology of snow crystals. Libbrecht
[14] suggested that the different crystal habits and instabilities appearing
during growth are direct consequences of two main competing mechanisms: vapor
diffusion in air and kinetics of the water molecules attachment to the crystal
surface [8, 18, 19]. The geometrically ordered potential field on the crystal
surface dictates the growth into faceted structures (resulting from the
molecules’ arrangement within the crystal lattice), while diffusion
contributes to growth instabilities that produce dendritic branching and
associated complex structures. Attempts to further explain the growth
mechanism of snowflakes are still ongoing.
While most of the previous research was focused on experimental studies of ice
growth, widespread effort has also been put on developing mathematical models
and numerical tools. Microscopic models and simulations such as Molecular
Dynamic simulations [20] have been conducted to analyze fundamental processes
such as surface adsorption and diffusion. While physically sound, these
simulations are limited in time and space and are not efficient for
simulations of a full snowflake (especially at later stages, when instabilities
appear and the crystal grows in size). So-called mesoscopic formulations are an
interesting alternative. The cellular automaton of Gravner and Griffeath [21]
is an illustration of such approaches. While providing spectacular results in
predicting faceted growth, such models lack established connections with
physical processes and parameters [22]. Furthermore, reliable models
introducing additional physics such as interaction with a flow field are yet
to be developed [23]. At the macroscopic level, crystal growth is modeled
using either sharp or diffuse interface formulations. Various snowflake
morphologies were simulated by Barrett et al. [24] with a sharp interface
model. However, only small supersaturations could be considered because of the
numerical cost of the interface parametrization [25]. In recent years, the
phase-field model [26], a type of diffuse interface formulation, has become
one of the most popular methods for the simulation of crystal growth. The
phase-field model is a powerful tool to simulate interface development in the
crystallization process as the model does not require explicit tracking of the
interface via front-tracking algorithms. In addition, the non-linear partial
differential equations are obtained from the principles of non-equilibrium
thermodynamics making the interface dynamics consistent without the need for
explicit boundary treatments. Over the past several decades, a few successful
attempts have been reported to model faceted snowflake growth [27, 28]. Comparisons
with experimental data were rather promising.
The lattice Boltzmann (LB) method, developed over the past decades, has become
a popular alternative to classical solvers for the Navier-Stokes equations
[29]. It has since been extended to a
variety of applications and flow regimes such as multi-phase flows [30], flows
in porous media, turbulence and multi-component flows [31, 32, 33]. It has
also been used, in combination with classical solvers for the solid phase, to
simulate crystal growth [23, 34, 35, 36, 37, 38, 39, 40]. While initially
developed as a discrete solver for the Boltzmann equation in the hydrodynamic
regime, it has also widely been used as a solver for different types of
parabolic partial differential equations via appropriate parametrization of
the equilibrium state and the collision-streaming operators. In that same
spirit, a number of works have proposed lattice Boltzmann-based formulations
to solve the phase-field evolution equation.
Younsi et al. [41] modified the standard equations and equilibrium
distribution function to introduce anisotropic surface tension and growth rate
effects. The proposed model was successfully used to simulate anisotropic and
dendritic growth. While readily applied to generic systems, these models have
rarely been used to simulate realistic systems such as snowflake growth.
The aim of the present work is to present a purely lattice Boltzmann-based model able to correctly
capture not only different growth modes of snowflakes in the platelet regime,
but also effects caused by the surrounding convective field. For this purpose,
after an overview of the approach proposed to model crystal growth, it will
first be validated against generic test-cases well-documented in the
literature (both with and without ventilation effects). The solver is then
used to model the growth of a single ice crystal under different conditions
regarding temperature and supersaturation. The obtained shapes are validated
(qualitatively) against both experimental and numerical data available in the
literature. It will also be shown that the growth modes are in agreement with
those predicted in the Nakaya diagram. Finally, the effect of fluid flow on
dendritic growth is studied. Interestingly, rarely studied growth habits are
observed in the simulations; for example, triangular shapes appear only when
crystals are exposed to a background convection.
## 2 Theoretical background
While influenced by both temperature and humidity (referred to as
supersaturation in what follows), given the more pronounced effect of the
latter on faceted growth and dendritic instabilities, the temperature field
will be considered to be uniform during the present study. As such, the phase-
field growth dynamics will involve two coupled equations (apart from the flow
field solver). One will describe the morphology of the snowflakes through a
phase parameter $\phi$; the second one is an advection/diffusion-type equation
for the supersaturation field $U$.
### 2.1 Diffuse-interface formulation: governing equations
The phase-field kinetics are described by a parameter $\phi$, equal to +1 in
the solid (ice) and -1 in the fluid (vapor) phase, and by
the reduced supersaturation of water vapor defined by
$U=(c-c_{sat}^{S})/c_{sat}^{S}$, where $c_{sat}^{S}(T)$ is the saturation
number density of vapor of ice at temperature $T$. The space/time evolution
equations are written as [27, 41, 42, 43]:
$\tau_{0}a_{s}^{2}(\textbf{n})\frac{\partial\phi}{\partial t}=\\\
W_{0}^{2}\bm{\nabla}\cdot\left(a_{s}^{2}(\textbf{n})\bm{\nabla}\phi\right)+W_{0}^{2}\bm{\nabla}\cdot\left(|\bm{\nabla}\phi|^{2}\frac{\partial[a_{s}(\textbf{n})^{2}]}{\partial\bm{\nabla}\phi}\right)\\\
+(\phi-\phi^{3})+\lambda U(1-\phi^{2})^{2},$ (1)
and:
$\frac{\partial U}{\partial
t}+\left(\frac{1-\phi}{2}\right)\bm{u}\cdot\bm{\nabla}U=D\bm{\nabla}\cdot\left(q(\phi)\bm{\nabla}U\right)-\frac{L_{sat}}{2}\frac{\partial\phi}{\partial
t},$ (2)
where for the later analysis we define $\tau=\tau_{0}a_{s}^{2}(\textbf{n})$. Here,
$\tau_{0}$ denotes the characteristic time and $W_{0}$ denotes the
characteristic width of the diffuse interfaces. In Eq. 1, the coefficient
$\lambda=\frac{15L^{2}}{16Hc_{p}T_{m}}$ describes the strength of the coupling
between the phase-field and the species field. $1/\lambda$ is a dimensionless
measure of the barrier height $H$ of the double-well potential. $L$ denotes
the latent heat of melting. The specific heat capacity $c_{p}$ is assumed
to be the same in the two phases (symmetric model).
$\textbf{n}=-\frac{\bm{\nabla}\phi}{\left|\bm{\nabla}\phi\right|}$ is the unit
vector normal to the crystal interface, pointing from the solid to the fluid phase.
$a_{s}(\textbf{n})$ is the surface tension anisotropy function which here, in
the context of the snowflake growth studies, is defined as:
$a_{s}(\textbf{n})=1+\epsilon_{xy}\cos(6\theta),$ (3)
where $\theta=\arctan(n_{y}/n_{x})$ and $\epsilon_{xy}$ is a numerical
parameter characterizing the anisotropy strength. The second and third terms
on the right-hand side of Eq. 1 control the anisotropic growth of dendrites in
snowflakes.
$\phi-\phi^{3}$ is the derivative of the double-well potential. The last term
in Eq. 1 is a source term accounting for the coupling between the
supersaturation $U$ and the order parameter $\phi$. $(1-\phi^{2})^{2}$ is an
interpolation function minimizing the bulk potential at $\phi=\pm 1$.
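The interface normal and the anisotropy function of Eq. 3 can be evaluated pointwise from the gradient of $\phi$. A minimal NumPy sketch; the regularization threshold and the isotropic fallback away from the interface are implementation choices, not prescriptions from the paper:

```python
import numpy as np

def anisotropy(grad_x, grad_y, eps_xy=0.05):
    """Six-fold surface-tension anisotropy a_s(n) of Eq. (3).

    grad_x, grad_y: components of grad(phi). The interface normal is
    n = -grad(phi)/|grad(phi)| and theta = arctan2(n_y, n_x).
    eps_xy is the anisotropy strength (0.05 in this paper).
    """
    norm = np.hypot(grad_x, grad_y)
    nx = -grad_x / np.maximum(norm, 1e-12)
    ny = -grad_y / np.maximum(norm, 1e-12)
    theta = np.arctan2(ny, nx)
    a_s = 1.0 + eps_xy * np.cos(6.0 * theta)
    # Away from the interface (|grad phi| ~ 0) fall back to the isotropic value
    return np.where(norm > 1e-12, a_s, 1.0)

# A normal along +x (theta = 0) maximizes a_s:
print(anisotropy(np.array([-1.0]), np.array([0.0])))  # -> [1.05]
```

The same routine would be reused wherever $a_{s}^{2}(\bm{n})$ enters Eqs. 1, 24 and 28.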
In Eq. 2, $\bm{u}$ denotes the fluid local velocity while $q(\phi)=(1-\phi)$
is a function canceling out diffusion within the solid. As such, vapor
transport is assumed to take place only within the fluid phase (one-sided
model). The parameter $D$ is the diffusion coefficient of water vapor in air,
whose value is given in Section 3.2.1. $L_{sat}$ describes the absorption rate
of water vapor at the interface of the snow crystals.
The thermal capillary length is defined as:
$d_{0}=\frac{\gamma_{0}T_{M}c_{p}}{L^{2}},$ (4)
where the excess isotropic free energy of the solid-liquid interface
$\gamma_{0}$ is given by:
$\gamma_{0}=IW_{0}H,$ (5)
where $W_{0}$ is the interface thickness, $H$ is the barrier height of the
double-well potential and $I=2\sqrt{2}/3$. Hence, the matched asymptotic
expansions provide a relation between the phase-field and sharp-interface
parameters given by:
$d_{0}=a_{1}\frac{W_{0}}{\lambda}.$ (6)
The expression for the kinetic coefficient is:
$\beta=a_{1}\left(\frac{\tau_{0}}{W_{0}\lambda}-a_{2}\frac{W_{0}}{D}\right)$
(7)
where $a_{2}$ is a constant and depends on the choice of the function
$g(\phi)$. In this paper, the standard form
$g(\phi)=\phi-2\phi^{3}/3+\phi^{5}/5$ will be used, for which
$a_{1}\approx 0.8839$ and $a_{2}\approx 0.6267$. These relations make it
possible to choose phase-field parameters for prescribed values of the
capillary length (surface energy) and the interface mobility (interface
kinetic coefficient). Note that the interface width $W_{0}$ is a parameter
that can be freely chosen in this formulation; the asymptotic analysis remains
valid as long as $W_{0}$ remains much smaller than any length scale present in
the sharp-interface solution of the considered problem [26, 44, 45].
It follows from Eq. (7) that $\beta$ can be made to vanish provided that $\tau$
is chosen to be:
$\tau=\tau_{0}a_{s}^{2}(\bm{n}),$ (8)
with:
$\tau_{0}=a_{2}\lambda\frac{W_{0}^{2}}{D}.$ (9)
Converged results should be independent of $\lambda$.
Decreasing $\lambda$ corresponds physically to decreasing the interface width
while increasing at the same time the height of the double-well potential so
as to keep the surface energy and hence $d_{0}$ fixed.
The physical dimensions of $W_{0}$, $\tau_{0}$ and $\lambda$ are respectively
$[W_{0}]\equiv[\mathcal{L}]$, $[\tau_{0}]\equiv[\mathcal{T}]$,
$[\lambda]\equiv[-]$, $[D]\equiv[\mathcal{L}^{2}/\mathcal{T}]$ where
$[\mathcal{L}]$ indicates the length dimension and $[\mathcal{T}]$ indicates
the time dimension.
### 2.2 Lattice Boltzmann formulation
#### 2.2.1 Flow field solver
The target system of equations describing the flow field behavior (i.e. the
incompressible Navier-Stokes and continuity equations) is modeled using the
classical isothermal lattice Boltzmann formulation consisting of the
now-famous stream-collide operators:
$f_{\alpha}\left(\bm{x}+\bm{c}_{\alpha}\delta_{t},t+\delta_{t}\right)-f_{\alpha}\left(\bm{x},t\right)=\delta_{t}\Omega_{\alpha}\left(\bm{x},t\right)+\delta_{t}\bm{F},$
(10)
where $\bm{F}$ is the external force. Here, the external force $\bm{F}$
modeling interaction with the solid phase is given as [43]:
$\bm{F}=-\frac{h\eta_{f}(1+\phi)^{2}(1-\phi)\bm{u}}{4W_{0}^{2}}$ (11)
where $h$ is a dimensionless constant, here $h=2.757$. Since there is no fluid
motion inside the solid crystal, the fluid velocity $\bm{u}$ is updated as:
$\bm{u^{*}}=\frac{(1-\phi)}{2}\bm{u},$ (12)
where the updated fluid velocity $\bm{u^{*}}$ is taken into the momentum
equation. The above friction term acts as a distributed momentum sink that
gradually forces the liquid velocity to zero as $\phi\rightarrow 1$. The
collision operator $\Omega_{\alpha}$ follows the linear Bhatnagar-Gross-Krook
approximation:
$\Omega_{\alpha}=\frac{1}{\tau}\left[f^{(eq)}_{\alpha}-f_{\alpha}\right],$
(13)
where $f_{\alpha}^{(eq)}$ is the
discrete isothermal equilibrium distribution function (EDF) defined as:
$f_{\alpha}^{(eq)}=\rho\sum_{i}\frac{1}{2c_{s}^{2}}a^{(eq)}_{i}(\bm{u}):\mathcal{H}_{i}(\bm{c}_{\alpha}),$
(14)
where $a^{(eq)}_{i}$ and $\mathcal{H}_{i}(\bm{c}_{\alpha})$ are the
corresponding multivariate Hermite coefficients and polynomials of order $i$,
with $c_{s}$ the lattice sound speed corresponding to the speed of sound at
the stencil reference temperature [46]. Further information on the expansion
along with detailed expressions of the equilibrium distribution functions can
be found in [47, 48, 49]. The
relaxation time $\tau$ is tied to the fluid kinematic viscosity as:
$\tau=\frac{\nu}{c_{s}^{2}}+\frac{\delta_{t}}{2}.$ (15)
It must be noted that conserved variables, i.e. density and momentum are
defined as moments of the discrete distribution function:
$\rho=\sum_{\alpha}f_{\alpha},$ (16)
$\rho\bm{u}=\sum_{\alpha}\bm{c}_{\alpha}f_{\alpha}.$ (17)
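To make the stream-collide update of Eqs. (10), (13), (16) and (17) concrete, the following is a minimal D2Q9 BGK sketch on a periodic box without forcing. It uses the standard second-order polynomial equilibrium, equivalent to the truncated Hermite expansion of Eq. (14) at this order; the grid size, relaxation time and rest-state initialization are illustrative choices, not values from the paper:

```python
import numpy as np

# D2Q9 lattice: velocities, weights, sound speed cs^2 = 1/3
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
CS2 = 1.0 / 3.0

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution, one value per direction."""
    cu = (C[:, 0, None, None]*ux + C[:, 1, None, None]*uy) / CS2
    usq = (ux**2 + uy**2) / (2*CS2)
    return W[:, None, None] * rho * (1 + cu + 0.5*cu**2 - usq)

def collide_and_stream(f, tau):
    """One BGK step of Eq. (10) without the forcing term."""
    rho = f.sum(axis=0)                               # Eq. (16)
    ux = (C[:, 0, None, None]*f).sum(axis=0) / rho    # Eq. (17)
    uy = (C[:, 1, None, None]*f).sum(axis=0) / rho
    f_post = f + (equilibrium(rho, ux, uy) - f) / tau # Eq. (13)
    # Periodic streaming along each lattice direction
    for a in range(9):
        f_post[a] = np.roll(f_post[a], shift=(C[a, 0], C[a, 1]), axis=(0, 1))
    return f_post

# A uniform rest state is a fixed point: density stays exactly 1
f = equilibrium(np.ones((8, 8)), np.zeros((8, 8)), np.zeros((8, 8)))
f = collide_and_stream(f, tau=0.8)
print(np.allclose(f.sum(axis=0), 1.0))  # -> True
```

The friction force of Eq. (11) would enter as the additional $\delta_{t}\bm{F}$ contribution in Eq. (10); it is omitted here to keep the sketch minimal.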
#### 2.2.2 Advection-diffusion-reaction solver for supersaturation field
The space/time evolution equation of the water vapour supersaturation field is
modeled using an advection-diffusion-reaction lattice Boltzmann formulation
defined as [50]:
$g_{\alpha}\left(\bm{x}+\bm{c}_{\alpha}\delta_{t},t+\delta_{t}\right)-g_{\alpha}\left(\bm{x},t\right)=\delta_{t}\Omega_{\alpha}\left(\bm{x},t\right)+\delta_{t}\dot{\omega}_{\alpha},$
(18)
where $\dot{\omega}_{\alpha}$ is the source term defined as:
$\dot{\omega}_{\alpha}=-w_{\alpha}\frac{L_{sat}}{2}\frac{\partial\phi}{\partial
t}.$ (19)
The collision operator $\Omega_{\alpha}$ for the supersaturation field is:
$\Omega_{\alpha}=\frac{1}{\tau_{g}}\left[g^{(eq)}_{\alpha}-g_{\alpha}\right],$
(20)
where $g_{\alpha}^{(eq)}$ is the equilibrium distribution function defined as:
$g^{(eq)}_{\alpha}=\omega_{\alpha}U\left[1+\frac{\bm{c}_{\alpha}\cdot\bm{u}}{c_{s}^{2}}\right].$
(21)
The supersaturation is computed locally as the zeroth-order moment of
$g_{\alpha}$:
$U=\sum_{\alpha}g_{\alpha},$ (22)
and the relaxation coefficient $\tau_{g}$ is tied to the diffusion coefficient
of water vapour in air [51]:
$\tau_{g}=\frac{Dq(\phi)}{c_{s}^{2}}+\frac{\delta_{t}}{2}.$ (23)
#### 2.2.3 Solver for phase-field equation
The phase-field equation is modeled using a modified lattice Boltzmann scheme
defined as [52, 53]:
$a_{s}^{2}(\bm{n})h_{\alpha}(\bm{x}+\bm{c}_{\alpha}\delta x,t+\delta t)=\\\
h_{\alpha}(\bm{x},t)-\left(1-a_{s}^{2}(\bm{n})\right)h_{\alpha}(\bm{x}+\bm{c}_{\alpha}\delta
x,t)-\\\
\frac{1}{\eta_{\phi}(\bm{x},t)}\left[h_{\alpha}(\bm{x},t)-h_{\alpha}^{eq}(\bm{x},t)\right]+\omega_{\alpha}Q_{\phi}(\bm{x},t)\frac{\delta
t}{\tau_{0}},$ (24)
where the scalar function $Q_{\phi}$ is the source term of the phase-field
equation defined by:
$Q_{\phi}=(\phi-\phi^{3})+\lambda U(1-\phi^{2})^{2},$ (25)
while the equilibrium distribution function $h_{\alpha}^{eq}$ is defined as:
$h_{\alpha}^{eq}=\omega_{\alpha}\left(\phi-\frac{1}{c_{s}^{2}}\bm{c}_{\alpha}\cdot\frac{W_{0}^{2}}{\tau_{0}}|\bm{\nabla}\phi|^{2}\frac{\partial[a_{s}(\bm{n})^{2}]}{\partial\bm{\nabla}\phi}\frac{\delta
t}{\delta x}\right).$ (26)
The local value of the order parameter $\phi$ is calculated as:
$\phi=\sum_{\alpha}h_{\alpha}.$ (27)
The relaxation time $\eta_{\phi}$ is a function of position and time and must
be updated at each time-step as:
$\eta_{\phi}=\frac{1}{c_{s}^{2}}a_{s}^{2}(\bm{n})\frac{W_{0}^{2}}{\tau_{0}}+\frac{\delta_{t}}{2}.$
(28)
## 3 Simulations and numerical studies
For the sake of clarity, this part is organized into two different
subsections. In the first subsection results from simulations of generic
systems, intended as validation are presented and discussed. The results for
snowflakes in the plate regime are presented in the second subsection.
### 3.1 Validation studies
#### 3.1.1 Effect of directional derivatives of gradients in LB scheme
In order to calculate the derivatives of $a_{s}(\bm{n})$ with respect to
$\partial_{x}\phi$ and $\partial_{y}\phi$ involved in the second term in Eq.
1, different choices are available. Overall, one can either evaluate the
gradient using classical finite-difference approximations, e.g. the central
difference second-order formulation:
$\partial_{x}\phi\simeq(\phi_{i+1,j}-\phi_{i-1,j})/2\delta x$ and
$\partial_{y}\phi\simeq(\phi_{i,j+1}-\phi_{i,j-1})/2\delta x$, where $i$ and
$j$ are the indexes of the coordinates $x$ and $y$, or the method based on the
directional derivatives with higher-order isotropy [54]. The isotropic finite-
difference approximations to the first-order derivative can be computed as:
$\nabla\phi=\frac{1}{c_{s}^{2}}\sum_{\alpha=0}^{Q}w_{\alpha}\left(|c_{\alpha}|^{2}\right)\phi\left(x+c_{\alpha}\right)c_{\alpha}$
(29)
where $w_{\alpha}\left(|c_{\alpha}|^{2}\right)$ are the weights associated to
each layer of neighboring nodes maximizing isotropy [55]. These weights are
summarized in Table 1. Given the importance of this directional derivative in
the growth dynamics of the solid phase, the choice of the approximation is
briefly discussed here.
Table 1: Weights for 4th, 6th and 8th order isotropic tensors in two dimensions [56, 55]

Tensor | $w(1)$ | $w(2)$ | $w(3)$ | $w(4)$ | $w(5)$ | $w(6)$ | $w(7)$ | $w(8)$
---|---|---|---|---|---|---|---|---
E4 | 1/3 | 1/12 | - | - | - | - | - | -
E6 | 4/15 | 1/10 | - | 1/120 | - | - | - | -
E8 | 4/21 | 4/45 | - | 1/60 | 2/315 | - | - | 1/5040
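A sketch of the isotropic gradient of Eq. (29) with the E4 weights of Table 1, assuming $\delta x=1$ and normalizing so that linear fields are differentiated exactly (the prefactor convention varies in the literature, so the normalization here is an implementation choice):

```python
import numpy as np

# D2Q9 neighbor set and 4th-order isotropic weights (E4: w(1)=1/3, w(2)=1/12)
C = np.array([[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W_E4 = np.array([1/3]*4 + [1/12]*4)

def iso_gradient(phi):
    """Isotropic finite-difference gradient of a 2-D periodic field (Eq. 29).

    With these E4 weights, sum_a w_a c_ax^2 = 1, so no extra prefactor is
    needed for delta_x = 1: linear fields are differentiated exactly."""
    gx = np.zeros_like(phi)
    gy = np.zeros_like(phi)
    for (cx, cy), w in zip(C, W_E4):
        shifted = np.roll(phi, shift=(-cx, -cy), axis=(0, 1))  # phi(x + c_a)
        gx += w * cx * shifted
        gy += w * cy * shifted
    return gx, gy

# Exact on a linear ramp (checked away from the periodic wrap):
x, y = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
gx, gy = iso_gradient(2.0*x + 3.0*y)
print(gx[8, 8], gy[8, 8])  # interior values are 2.0 and 3.0 (up to round-off)
```

Replacing `W_E4` by the E6 or E8 weights (with their extended neighbor sets) raises the isotropy order in the same way.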
Figure 2: The grid points identifying the set of velocity fields for a 2-D case
from [56, 55]. With reference to the weights reported in Table 1,
different degrees of isotropy can be achieved: fourth order [up to
$\omega$(2)], sixth order [up to $\omega$(4)], eighth order [up to
$\omega$(8)].
To that end we consider a generic growing crystal with hexagonal symmetry
driven by the temperature field. For all simulations a domain of size
$800\times 800$, with $\delta_{x}=\delta_{y}=0.01$ and $\delta_{t}=1.5\times
10^{-5}$ is considered. Furthermore, $\tau_{0}=1.5625\times 10^{-4}$,
$W_{0}=0.0125$, $\kappa=1$, the undercooling is $\Delta=0.3$, $\lambda=10$,
$\varepsilon_{s}=0.05$. The circular seed is initialized at the center of the
domain $\bm{X_{c}}=(400,400)$ with a radius of $R_{s}=10$ lattice units.
Figure 3: Iso-contours of $\phi=0$ at time $t=1\times 10^{5}\delta_{t}$. Black
circles are reference results from [41].
The solid boundaries, $\phi=0$, as obtained using different approximations at
$t=10^{5}\delta_{t}$ are shown in Fig. 3. It can be seen that results
obtained with a second-order finite difference method (E2) do not match the
reference well: depending on their orientation relative to the main axes, the
dendrites differ slightly from each other. When the gradients are calculated
with higher-order isotropic formulations, these non-physical anisotropic
effects are considerably reduced. As such, a minimum isotropy of order 4, i.e.
the E4 stencil, is necessary for a six-tip crystal simulation.
#### 3.1.2 Validation for crystal growth
To validate the model and subsequent implementation, we use the generic system
with four-fold symmetry studied in [41, 53]. To provide both qualitative and
quantitative proof, we compare the shape of the dendrites and the evolution of
the tip velocity.
Initially, a circular seed of radius $R_{s}=10\delta_{x}$ is placed at the
centre of the square domain. The interface thickness $W_{0}$ and the
characteristic time $\tau_{0}$ are $W_{0}=\tau_{0}=1$. The grid-size is set to
$\delta_{x}/W_{0}=0.4$ [26] while the time-step size is $\delta_{t}=0.008$.
The capillary length $d_{0}$ and the kinetic coefficient $\beta$ are given by
[26]: $d_{0}=a_{1}W_{0}/\lambda$ and $\beta=a_{1}(\tau_{0}/\lambda
W_{0}-a_{2}W_{0}/\kappa)$ where $a_{1}=0.8839$ and $a_{2}=0.6267$. In this
benchmark, we choose the parameters $\kappa=4$,
$\lambda=\frac{\tau_{0}\kappa}{a_{2}W_{0}^{2}}=6.3826$ with $\beta=0$. The
anisotropic strength is $\epsilon_{s}=0.05$. The initial supersaturations are
$U_{0}=0.3$ and $0.55$. For $U_{0}=0.3$, we use a grid of size $1000^{2}$
nodes while for the latter the domain size is reduced to $500^{2}$ nodes. The
growth velocity of the crystal tips $\tilde{\bm{V_{p}}}$ is made dimensionless
as $\tilde{\bm{V_{p}}}=\bm{V_{p}}d_{0}/D$. The non-dimensional position
$\tilde{\bm{x}}$ is defined as ($\tilde{\bm{x}}=\bm{x}/\delta x$) and the
reduced time $T$ as ($T=t/\tau_{0}$).
Figure 4: (left) $\phi=0$ iso-contours for $U_{0}=0.3$ at $t=1.3\times
10^{5}\delta_{t}$. Red dots are from the present study while dashed black
lines are from [53]. (right) Dimensionless tip velocity $V_{p}$ as a function
of time (in units of $\tau_{0}$) for (lower curve/symbols) $U_{0}=0.3$ and
(upper curve/symbols) $U_{0}=0.55$. The red circles are from the present
study, while plain black lines are extracted from [53].
The obtained results are shown in Fig. 4. As shown there, the data obtained
from the present work closely follows those reported in [53].
#### 3.1.3 Validation of flow/solid coupling
In order to validate the coupling between the flow field and other solvers, we
model the 2-D case presented in [43, 42], and solved there via an adaptive
finite-elements solver. The computational domain is a box of length $L=204.8$,
the grid-size $\delta_{x}=0.4$ and the time-step unit $\delta_{t}=0.008$. We
also set the interface thickness to $W_{0}=1$, the characteristic time to
$\tau_{0}=1$, the coupling strength to $\lambda=6.383$ and the anisotropy
strength to $\epsilon_{s}=0.05$. The kinematic viscosity of the flow field is
$\mu_{f}=92.4$. The flow enters from the left side of the box with a fixed
inlet velocity $u_{x}=1.0$. The outlet boundary is set to zero-gradient, while
the top and bottom sides of the box are set to periodic.
The evolution of all dendrite tips without and with flow is plotted in Fig.
5.
Figure 5: Computed phase-field contours from the dendritic growth in 2-D
(left) without and (right) with flow at two different times. Red and blue
symbols are results at $t=72$ and $104$ while black dashed lines are
corresponding reference data.
It can be observed that the phase-field contours are in excellent agreement
with the reference. To better illustrate the interaction of the flow field
with the growing solid, the streamlines and species concentration field at two
different times are shown in Fig. 6. As expected, one can readily see the non-
isotropic distribution of the concentration field around the growing seed
caused by the flow field. The incoming velocity induces a higher concentration
gradient around the tip facing the incoming flow, causing it to grow faster
than its counterpart in the opposite direction.
Figure 6: (top row) Velocity field streamlines and (bottom row) concentration
fields at two different times: (left) $T=72$ and (right) $104$.
### 3.2 Faceted and dendritic snowflake growth
In this subsection, we first study the plate growth regime of snowflakes as a
function of the supersaturation to showcase the ability of the proposed model
to capture the wide variety of habits exhibited by snowflakes. Then, we
briefly look at the effect of forced convection on the growth of snowflakes.
#### 3.2.1 Ice crystal habit in thin-plate regime as a function of
temperature and supersaturation
As stated earlier, the focus of the present study is on the plate growth
regime. According to the Nakaya diagram, the widest variety of habits and
instabilities in this regime can be observed at $-16^{\circ}$C where the
saturated vapor density of ice is around
$\rho_{sat}^{I}=1.125\hbox{g}/\hbox{m}^{3}$ and the saturated vapor density of
water is around $\rho_{sat}^{W}=1.53\hbox{g}/\hbox{m}^{3}$ [53]. The excess
density over vapor-water equilibrium, $\Delta\rho=\rho-\rho_{sat}^{W}$, is
shown on the y-axis of Fig. 1, where $\rho_{sat}^{W}$ is the saturation
vapor density of water. The supersaturation is given by
$U=(c-c_{sat}^{I})/c_{sat}^{I}$, where $c_{sat}^{I}$ is the saturation number
density of vapor over ice at temperature $T$. Using $\rho=m_{H_{2}O}c$, where
$m_{H_{2}O}$ is the mass of a water molecule, the supersaturation can be
written as $U=(\Delta\rho+\rho_{sat}^{W}-\rho_{sat}^{I})/\rho_{sat}^{I}$, and
different snowflake morphologies are obtained by changing the initial excess vapor
density $\Delta\rho$. According to [57], the melting temperature of the snow
crystals is $T_{m}=276.9\hbox{K}$, heat capacity $c_{p}=4.23\times
10^{6}\hbox{J}\hbox{m}^{-3}\hbox{K}^{-1}$, diffusion coefficient of the vapor
$D=1.17\times 10^{-7}\hbox{m}^{2}\hbox{s}^{-1}$ (which appears in Eq. 2),
surface tension $\gamma=2.845\times 10^{-2}\hbox{J}\hbox{m}^{-2}$, and latent
heat $L=1.12\times 10^{8}\hbox{J}\hbox{m}^{-3}$. The capillary length is
computed to be $d_{0}\simeq 2$ nm. The coupling parameter is set to
$\lambda=3$ [58].
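With the saturated vapor densities quoted above, the mapping from excess density $\Delta\rho$ to supersaturation $U$ can be checked directly; this short sketch reproduces the $U_{0}$ values listed in Table 2:

```python
RHO_SAT_I = 1.125  # g/m^3, saturated vapor density over ice at -16 C
RHO_SAT_W = 1.53   # g/m^3, saturated vapor density over water at -16 C

def supersaturation(delta_rho):
    """U = (delta_rho + rho_sat^W - rho_sat^I) / rho_sat^I."""
    return (delta_rho + RHO_SAT_W - RHO_SAT_I) / RHO_SAT_I

# Excess densities from Table 2 recover the listed U_0 values:
for d_rho in (0.045, 0.1575, 0.27, 0.495):
    print(round(supersaturation(d_rho), 2))  # 0.4, 0.5, 0.6, 0.8
```

Conversely, the initial densities $\rho_{0}$ in Table 2 satisfy $\rho_{0}=\rho_{sat}^{W}+\Delta\rho^{W}$, consistent with the definition above.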
The simulation parameters are chosen as: $W_{0}=1.25\delta_{x}$ and
$\tau_{0}=20\delta_{t}$. The simulations were conducted in a 400$\times$400
box. The anisotropy strength was set to $\varepsilon_{xy}=0.05$. To produce
different snowflake morphologies, the following parameters were varied: the
initial supersaturation $U_{0}$, depletion rate $L_{sat}$, initial density
$\rho_{0}$ and the excess density $\Delta\rho^{W}$. The corresponding values
are listed in Table 2.
Table 2: Parameters for different morphologies of snowflakes

Nr. | shapes | $U_{0}[-]$ | $L_{sat}[-]$ | $\rho_{0}[\hbox{g}/\hbox{m}^{3}]$ | $\Delta\rho^{W}[\hbox{g}/\hbox{m}^{3}]$
---|---|---|---|---|---
1 | solid plate | 0.4 | 2.0 | 1.575 | 0.045
2 | stellar plate I | 0.5 | 1.8 | 1.6875 | 0.1575
3 | sectored plate | 0.5 | 1.6 | 1.6875 | 0.1575
4 | stars | 0.6 | 1.0 | 1.8 | 0.27
5 | fern dendrite | 0.8 | 1.6 | 2.025 | 0.495
6 | fernlike stellar | 0.7 | 1.6 | 1.9125 | 0.383
7 | stellar plate II | 0.5 | 1.2 | 1.6875 | 0.1575
The results obtained for the conditions listed in Table 2 are shown in
Fig. 7 along with corresponding numerical and experimental data from [27] and
[10].
Figure 7: The morphologies are numbered 1 to 7 (from left to right), as listed
in Table 2. Comparison between (top) real snowflake photographs taken from
Libbrecht’s experiments [10], (middle) our phase-field simulations, and
(bottom) the simulations from [27].
The primary habit of the crystal (six-fold symmetry) is dictated by the
anisotropy function (and the microscopic crystallographic structure). At lower
supersaturation values where the adsorption rate is slow, the surface
diffusion process characteristic time is smaller and therefore dominates over
surface adsorption. More explicitly it means that the adsorbed water molecules
have enough time to propagate on the crystal surface and find the points with
the lowest potential (dictated by the molecules’ arrangement in the crystal
lattice). Furthermore, given the low growth rate and gradients, the surface is
not subject to branching instabilities. As the supersaturation goes up, the
larger adsorption rate at the sharper parts of the interface (regions with the
highest curvatures and consequently highest surface area) result in the
formation of six thick branches (usually referred to as primary branches). In
the lower supersaturation regimes these primary branches have a faceted
structure following the symmetry of the crystal. As the concentration goes
further up, the branches get thinner and rougher (the straight faces tend to
disappear); this eventually produces secondary instabilities and branches
going towards a somewhat fractal structure. All the obtained crystal habits,
are in excellent agreement with not only numerical simulations from [27] but
also experimental data from [10]. Further comparing the different crystal
habits to Nakaya’s diagram, it can be concluded that the proposed model
correctly predicts the behavior of the crystal in the platelet regime. The
next part will focus on the effects of ventilation on the evolution of the
crystal habit.
#### 3.2.2 Ventilation effect on snowflakes
Growth of snowflakes under forced convection is a topic of interest, as for
example falling flakes are usually subject to ventilation. The dynamics of
snowflake growth under ventilation effects are not very well-documented.
Libbrecht and Arnold [59] proposed an aerodynamic model to explain the growth
and appearance of new habits such as triangular snow crystals, observed both
in nature and in laboratory settings [60]. Furthermore, the anisotropy induced by
the flow field can, when pronounced, cause different regions on the surface of
the crystal to grow in different regimes. The present section will focus on
recovery of these effects and qualitative validation with experimental
observations.
For the first configuration, the domain has a size of $1600\times 1600$ grid points,
the initial seed radius is $R=25\delta x$ and the initial vapor density
$\rho_{0}=1.364$g/m3 (supersaturation $U_{0}=0.2$). The grid size is set to
$\delta_{x}=4.8\times 10^{-6}$ m while the time-step is $\delta_{t}=2\times
10^{-7}$ s. The coupling strength is $\lambda=3$. The flow blows from the
bottom to the top along the $y$-axis. The inlet velocity is set to
$\bm{u}_{y}=0.12\hbox{m}/\hbox{s}$ and the kinematic viscosity to
$\nu_{f}=1.152\times 10^{-6}\hbox{m}^{2}/\hbox{s}$. The outlet is modeled
using a Neumann condition, while along the $x$-axis periodic boundary
conditions are used. The resulting snowflake morphology is shown in Fig. 8.
Figure 8: The morphology of the asymmetrical hexagonal snowflakes at $t=7500$
in units of $\tau_{0}$. Top left: experimental image, top right: our
simulation, and bottom: associated supersaturation field.
It can be clearly observed that the crystal growth on the side facing the
incoming flow is higher than its neighboring sides. Furthermore, its opposite
side is growing slower than its neighbour. Both of these non-symmetrical
growth rates push the habit towards, first a non-symmetrical hexagon and then
a triangular shape, in agreement with experimental observations.
Figure 9: (left) morphology of the snowflakes with ventilation effects and
(right) velocity field streamlines and supersaturation fields at (from top to
bottom) $t$ = 0, 4000 and 8000 in units of $\tau_{0}$.
To further put the effect of hydrodynamics into perspective we also consider
another test-case with the same configuration as the previous one, however
with a larger initial supersaturation. The initial vapor density is set to
$\rho_{f}=1.445\hbox{g}/\hbox{m}^{3}$ (supersaturation $U_{0}=0.3$). The inlet
velocity is set to $\bm{u}_{y}=0.24\hbox{m}/\hbox{s}$. The evolution of the
crystal, stream-lines and supersaturation fields at different time-steps are
illustrated in Fig. 9.
As observed in these figures, the natural anisotropy in supersaturation around
the crystal is further accentuated by the formation of two re-circulation
zones. Due to the presence of these flow structures, the growth rate on the
top half of the crystal is slowed down and brought into the regular hexagonal
habit regime. The lower half, however, is subject to larger concentration
gradients (further amplified by the incoming convective flux) and therefore
exhibits primary branching instabilities. As the system progresses further in
time, the bottom-facing branches distinguish themselves from the side-
branches, as we observe secondary instability effects on them, along with a
much faster growth rate on the main branches. These observations are in clear
agreement with expectations from fluid dynamics. Going into the details of the
crystal structure and supersaturation fields, we can also see that secondary
branching instabilities are not present at the bases of the down-facing
primary branches. That is explained by the fact that they are in a flow
stagnation zone (due to the built-up pressure in this closed area), where the
supersaturation is almost fully depleted. All of these effects are in
qualitative agreement with experimental observations reported in the
literature [34].
## 4 Conclusions and perspectives
In this study, we presented a lattice Boltzmann model for the simulation of
snowflakes. Throughout the many generic test-cases, it has been shown that
this model closely matches results from other solvers for the system of
macroscopic partial differential equations governing this system. It was
further shown that the proposed formulation is
able to capture the different crystal habits in the plate growth regime. The
final crystal habits were in good agreement with experimental data and the
Nakaya diagram predictions [9]. To go one step further, the model was also
used to look at the possible effects of forced convection on the growth
dynamics and resulting asymmetrical shapes.
Given the promising results obtained using this model, future work will focus
on the extension of this study to the long-prismatic growth regime to cover
the entire spectrum of habits exhibited by snowflakes. Effects from local
variations in temperature will also be added to the model to have a better
image of the mechanisms behind the growth of snowflakes [61].
## 5 Acknowledgement
Q.T. would like to acknowledge the financial support by the EU-program ERDF
(European Regional Development Fund) within the Research Center for Dynamic
Systems (CDS). S.A.H. would like to acknowledge the financial support of the
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in TRR 287
(Project-ID 422037413), and thank K. G. Libbrecht for allowing the authors to
use images from his ice crystal growth experiments.
## References
* [1] F. Frank, Snow crystals, Contemporary Physics 23 (1) (1982) 3–22.
* [2] B. J. Mason, Snow crystals, natural and man made, Contemporary Physics 33 (4) (1992) 227–243.
* [3] P. K. Wang, Shape and microdynamics of ice particles and their effects in cirrus clouds, Advances in Geophysics 45 (2002) 1–258.
* [4] I. Shafranovskii, Kepler’s Crystallographic ideas and his tract “the six-cornered snowflake”, Vistas in Astronomy 18 (1975) 861–876.
* [5] F. E. Chickering, Cloud Crystals: A Snow-flake Album, Appleton, 1864.
* [6] W. Bentley, W. Humphreys, Snow crystals, McGraw-Hill Book Company.
* [7] U. Nakaya, Snow crystals: Natural and Artificial, Harvard University Press, 2013.
* [8] T. Gonda, S. Nakahara, Dendritic ice crystals with faceted tip growing from the vapor phase, Journal of Crystal Growth 173 (1-2) (1997) 189–193.
* [9] U. Nakaya, The formation of ice crystals, Compendium of Meteorology (1951) 207–220.
* [10] K. G. Libbrecht, Snowcrystals.com (1999).
URL http://www.snowcrystal.com
* [11] U. Nakaya, Snow crystals, natural and artificial, Harvard University Press (1954) 510.
* [12] K. G. Libbrecht, The physics of snow crystals, Reports on Progress in Physics 68 (4) (2005) 855.
* [13] K. G. Libbrecht, Morphogenesis on ice: The physics of snow crystals, Engineering and Science 64 (1) (2001) 10–19.
* [14] K. G. Libbrecht, Physical dynamics of ice crystal growth, Annual Review of Materials Research 47 (2017) 271–295.
* [15] G. Ross, The snowflake: Winter’s Secret Beauty, American Scientist 92 (2) (2004) 181–182.
* [16] B. Mason, The growth of ice crystals from the vapour and the melt, Advances in Physics 7 (26) (1958) 235–253.
* [17] J. Nelson, C. Knight, Snow crystal habit changes explained by layer nucleation, Journal of the Atmospheric Sciences 55 (8) (1998) 1452–1465.
* [18] J. Nelson, Growth mechanisms to explain the primary and secondary habits of snow crystals, Philosophical Magazine A 81 (10) (2001) 2337–2373.
* [19] K. G. Libbrecht, M. E. Rickerby, Measurements of surface attachment kinetics for faceted ice crystal growth, Journal of Crystal Growth 377 (2013) 1–8.
* [20] C. Magono, C. Lee, Meteorological classification of natural snow crystals, Journal of the Faculty of Science 2 (4) (1966) 321–335.
* [21] J. Gravner, D. Griffeath, Modeling snow-crystal growth: A three-dimensional mesoscopic approach, Physical Review E 79 (1) (2009) 011601.
* [22] J. G. Kelly, E. C. Boyer, Physical improvements to a mesoscopic cellular automaton model for three-dimensional snow crystal growth, Crystal Growth & Design 14 (3) (2014) 1392–1405.
* [23] D. Chatterjee, S. Chakraborty, A hybrid lattice Boltzmann model for solid–liquid phase transition in presence of fluid flow, Physics Letters A 351 (4-5) (2006) 359–367.
* [24] J. W. Barrett, H. Garcke, R. Nürnberg, Numerical computations of faceted pattern formation in snow crystal growth, Physical Review E 86 (1) (2012) 011604.
* [25] I. Singer-Loginova, H. M. Singer, The phase field technique for modeling multiphase materials, Reports on Progress in Physics 71 (10) (2008) 106501.
* [26] A. Karma, W. J. Rappel, Quantitative phase-field modeling of dendritic growth in two and three dimensions, Physical Review E 57 (4) (1998) 4323.
* [27] G. Demange, H. Zapolsky, R. Patte, M. Brunel, A phase field model for snow crystal growth in three dimensions, NPJ Computational Materials 3 (1) (2017) 1–7.
* [28] G. Demange, H. Zapolsky, R. Patte, M. Brunel, Growth kinetics and morphology of snowflakes in supersaturated atmosphere using a three-dimensional phase-field model, Physical Review E 96 (2) (2017) 022803.
* [29] G. D. Doolen, Lattice Boltzmann methods for fluid flows, Annual Review of Fluid Mechanics 30 (1) (1998) 329–364.
* [30] H. W. Zheng, C. Shu, Y. T. Chew, A lattice Boltzmann model for multiphase flows with large density ratio, Journal of Computational Physics 218 (1) (2006) 353–371.
* [31] S. A. Hosseini, N. Darabiha, D. Thévenin, Mass-conserving advection–diffusion lattice Boltzmann model for multi-species reacting flows, Physica A: Statistical Mechanics and its Applications 499 (2018) 40–57.
* [32] S. A. Hosseini, H. Safari, N. Darabiha, D. Thévenin, M. Krafczyk, Hybrid lattice Boltzmann-finite difference model for low Mach number combustion simulation, Combustion and Flame 209 (2019) 394–404.
* [33] S. A. Hosseini, A. Abdelsamie, N. Darabiha, D. Thévenin, Low-Mach hybrid lattice Boltzmann-finite difference solver for combustion in complex flows, Physics of Fluids 32 (7) (2020) 077105.
* [34] D. Medvedev, K. Kassner, Lattice Boltzmann scheme for crystal growth in external flows, Physical Review E 72 (5) (2005) 056703.
* [35] I. Rasin, W. Miller, S. Succi, Phase-field lattice kinetic scheme for the numerical simulation of dendritic growth, Physical Review E 72 (6) (2005) 066705.
* [36] W. Miller, I. Rasin, S. Succi, Lattice Boltzmann phase-field modelling of binary-alloy solidification, Physica A: Statistical Mechanics and its Applications 362 (1) (2006) 78–83.
* [37] D. Medvedev, T. Fischaleck, K. Kassner, Influence of external flows on crystal growth: Numerical investigation, Physical Review E 74 (3) (2006) 031606.
* [38] C. Huber, A. Parmigiani, B. Chopard, M. Manga, O. Bachmann, Lattice Boltzmann model for melting with natural convection, International Journal of Heat and Fluid Flow 29 (5) (2008) 1469–1480.
* [39] D. Sun, M. Zhu, S. Pan, D. Raabe, Lattice Boltzmann modeling of dendritic growth in a forced melt convection, Acta Materialia 57 (6) (2009) 1755–1767.
* [40] G. Lin, J. Bao, Z. Xu, A three-dimensional phase field model coupled with a lattice kinetics solver for modeling crystal growth in furnaces with accelerated crucible rotation and traveling magnetic field, Computers & Fluids 103 (2014) 204–214.
* [41] A. Younsi, A. Cartalade, On anisotropy function in crystal growth simulations using lattice boltzmann equation, Journal of Computational Physics 325 (2016) 1–21.
* [42] J. H. Jeong, N. Goldenfeld, J. A. Dantzig, Phase field model for three-dimensional dendritic growth with fluid flow, Physical Review E 64 (4) (2001) 041602.
* [43] C. Beckermann, H. J. Diepers, I. Steinbach, A. Karma, X. Tong, Modeling melt convection in phase-field simulations of solidification, Journal of Computational Physics 154 (2) (1999) 468–496.
* [44] B. Echebarria, R. Folch, A. Karma, M. Plapp, Quantitative phase-field model of alloy solidification, Physical Review E 70 (6) (2004) 061604.
* [45] R. Folch, M. Plapp, Publisher’s note: Quantitative phase-field modeling of two-phase growth [phys. rev. e 72, 011602 (2005)], Physical Review E 72 (2) (2005) 029903.
* [46] S. A. Hosseini, N. Darabiha, D. Thévenin, Compressibility in lattice Boltzmann on standard stencils: effects of deviation from reference temperature, Philosophical Transactions of the Royal Society A 378 (2175) (2020) 20190399.
* [47] X. Shan, X. F. Yuan, H. Chen, Kinetic theory representation of hydrodynamics: a way beyond the Navier–Stokes equation, Journal of Fluid Mechanics 550 (2006) 413–441.
* [48] S. A. Hosseini, C. Coreixas, N. Darabiha, D. Thévenin, Extensive analysis of the lattice Boltzmann method on shifted stencils, Physical Review E 100 (6) (2019) 063301.
* [49] S. A. Hosseini, Development of a lattice Boltzmann-based numerical method for the simulation of reacting flows, Ph.D. thesis, Université Paris-Saclay & Otto-von-Guericke university (2020).
* [50] S. Ponce D., S. Chen, G. D. Doolen, Lattice Boltzmann computations for reaction-diffusion equations, The Journal of Chemical Physics 98 (2) (1993) 1514–1523.
* [51] C. Huber, B. Chopard, M. Manga, A lattice Boltzmann model for coupled diffusion, Journal of Computational Physics 229 (20) (2010) 7956–7976.
* [52] S. D. Walsh, M. O. Saar, Macroscale lattice-Boltzmann methods for low Peclet number solute and heat transport in heterogeneous porous media, Water Resources Research 46 (7).
* [53] A. Cartalade, A. Younsi, M. Plapp, Lattice Boltzmann simulations of 3D crystal growth: Numerical schemes for a phase-field model with anti-trapping current, Computers & Mathematics with Applications 71 (9) (2016) 1784–1798.
* [54] L. Chen, Q. Kang, Y. Mu, Y. He, W. Tao, A critical review of the pseudopotential multiphase lattice Boltzmann model: Methods and applications, International Journal of Heat and Mass Transfer 76 (2014) 210–236.
* [55] X. Shan, Analysis and reduction of the spurious current in a class of multiphase lattice Boltzmann models, Physical Review E 73 (4) (2006) 047701.
* [56] M. Sbragaglia, R. Benzi, L. Biferale, S. Succi, K. Sugiyama, F. Toschi, Generalized lattice Boltzmann method with multirange pseudopotential, Physical Review E 75 (2) (2007) 026702.
* [57] I. Yoshizaki, T. Ishikawa, S. Adachi, E. Yokoyama, Y. Furukawa, Precise measurements of dendrite growth of ice crystals in microgravity, Microgravity Science and Technology 24 (4) (2012) 245–253.
* [58] J. C. Ramirez, C. Beckermann, A. Karma, H. J. Diepers, Phase-field modeling of binary alloy solidification with coupled heat and solute diffusion, Physical Review E 69 (5) (2004) 051607.
* [59] K. G. Libbrecht, Physically derived rules for simulating faceted crystal growth using cellular automata, ArXiv Condensed Matter: Materials Science.
* [60] K. G. Libbrecht, H. M. Arnold, Aerodynamic stability and the growth of triangular snow crystals, ArXiv Condensed Matter: Materials Science.
* [61] S. A. Hosseini, N. Darabiha, D. Thévenin, Lattice Boltzmann advection-diffusion model for conjugate heat transfer in heterogeneous media, International Journal of Heat and Mass Transfer 132 (2019) 906–919.
# Steiner Configurations ideals: containment and colouring
Edoardo Ballico Dipartimento di Matematica
Via Sommarive, 14 - 38123 Povo (TN), Italy<EMAIL_ADDRESS>,
Giuseppe Favacchio Dipartimento di Matematica e Informatica
Viale A. Doria, 6 - 95100 - Catania, Italy<EMAIL_ADDRESS>www.dmi.unict.it/gfavacchio, Elena Guardo Dipartimento di Matematica e
Informatica
Viale A. Doria, 6 - 95100 - Catania, Italy<EMAIL_ADDRESS>www.dmi.unict.it/guardo, Lorenzo Milazzo${\dagger}$ Dipartimento di
Matematica e Informatica
Viale A. Doria, 6 - 95100 - Catania, Italy<EMAIL_ADDRESS>www.dmi.unict.it/milazzo and Abu Chackalamannil Thomas Department of
Mathematics,
Tulane University, New Orleans, LA, U.S.A 70118<EMAIL_ADDRESS>
###### Abstract.
Given a homogeneous ideal $I\subseteq k[x_{0},\dots,x_{n}]$, the Containment
problem studies the relation between symbolic and regular powers of $I$, that
is, it asks for which pair $m,r\in\mathbb{N}$, $I^{(m)}\subseteq I^{r}$ holds.
In recent years, several conjectures have been posed on this problem,
creating an active area of current interest and ongoing investigation. In
this paper, we investigate the Stable Harbourne Conjecture and the Stable
Harbourne–Huneke Conjecture, and we show that they hold for the defining
ideal of a Complement of a Steiner configuration of points in
$\mathbb{P}^{n}_{k}$. We can also show that the ideal of a Complement of a
Steiner Configuration of points has expected resurgence, that is, its
resurgence is strictly less than its big height, and it also satisfies
Chudnovsky and Demailly’s Conjectures. Moreover, given a hypergraph $H$, we
also study the relation between its colourability and the failure of the
containment problem for the cover ideal associated to $H$. We apply these
results in the case that $H$ is a Steiner System.
###### Key words and phrases:
monomial ideals; ideals of points; symbolic powers of ideals; Waldschmidt
constant; Steiner systems
###### 2010 Mathematics Subject Classification:
13F55, 13F20,14G50, 51E10,94B27
${\dagger}$ Deceased, March 4, 2019.
Last updated: Jan 15, 2021
## 1\. Introduction
In this paper we continue the study of Steiner configurations of points and
their invariants, such as Hilbert Function, Betti numbers, Waldschmidt
constant, regularity, resurgence found in [3]. We will focus on the
Containment problem and we will show that the Stable Harbourne Conjecture and
the Stable Harbourne–Huneke Conjecture hold for the defining ideal of a
Complement of a Steiner configuration of points in
$\mathbb{P}^{n}_{k}:=\mathbb{P}^{n}$. As pointed out in Remarks 2.5 and 2.6 in
[3] in the language of Algebraic Geometry/Commutative Algebra, Steiner
configurations of points and their Complement are special subsets of star
configurations.
First, we give an overview on the Containment problem to introduce the related
conjectures. Then we devote Section 2 to recall notation, definitions and
known results for a Steiner configuration of points and its Complement that we
will use to prove the results of this paper. Let $I$ be a homogeneous ideal in
the standard graded polynomial ring $R:=k[x_{0},\ldots,x_{n}]$, where $k$ is a
field. Given an integer $m$, we denote by $I^{m}$ the regular power of the
ideal $I$. The $m$-th symbolic power of $I$ is defined as
$I^{(m)}=\bigcap_{\mathfrak{p}\in Ass(I)}(I^{m}R_{\mathfrak{p}}\cap R)$
where $Ass(I)$ denotes the set of associated primes of $I$ and
$R_{\mathfrak{p}}$ is the localization of $R$ at a prime ideal
${\mathfrak{p}}$.
If $I$ is a radical ideal (this includes for instance squarefree monomial
ideals and ideals of finite sets of points) then
$I^{(m)}=\bigcap_{\mathfrak{p}\in Ass(I)}{\mathfrak{p}}^{m}.$
Symbolic powers of ideals play a significant role in the famous Zariski-Nagata
Theorem (see [48, 54]). If $R$ is a polynomial ring over an algebraically
closed field $k$, then $I^{(m)}$ consists precisely of those functions which
vanish on the algebraic variety defined by $I$ with multiplicity at least $m$.
It is easy to show from the definition that $I^{r}\subseteq I^{(m)}$ if and
only if $r\geq m$. The reverse inclusion $I^{(m)}\subseteq I^{r}$ motivates
the following question.
###### Question 1.1.
(Containment problem) Given a homogeneous ideal $I\subseteq
k[x_{0},\dots,x_{n}]$, for which pairs $m,r\in\mathbb{N}$, does
$I^{(m)}\subseteq I^{r}$ hold?
One of the initial works introducing Question 1.1 is [51]. The problem is
still open in general and in the last couple of decades it has been extensively
studied for several classes of ideals, in particular for ideals defining
finite sets of points in projective and multiprojective spaces, see [8, 9, 17,
13, 24, 25, 26, 35, 36, 37, 46] just to cite some among all the known results.
Containment results are useful in giving lower bounds on the degrees of nonzero
homogeneous forms vanishing on a finite set of points with fixed multiplicities.
It is of great interest to study the ideals of fat points. Given distinct
points $P_{1},\dots,P_{s}\in\mathbb{P}^{n}$ and nonnegative integers $m_{i}$
(not all $0$), let $Z=m_{1}P_{1}+\dots+m_{s}P_{s}$ denote the scheme (called a
fat point scheme) defined by the ideal
$I_{Z}=\cap_{i=1}^{s}(I_{P_{i}}^{m_{i}})\subseteq k[\mathbb{P}^{n}]$, where
$I_{P_{i}}$ is the ideal generated by all homogeneous polynomials vanishing at
$P_{i}$. Symbolic powers of $I_{Z}$ take the form
$I_{Z}^{(m)}=I_{mZ}=\cap_{i=1}^{s}I_{P_{i}}^{mm_{i}}$. We say that $Z$ is
reduced if $I_{Z}$ is a radical ideal.
The Containment problem also helps us to bound certain useful invariants like
Waldschmidt constant, $\widehat{\alpha}(I)$, of an ideal $I$ defined as
$\widehat{\alpha}(I)=\lim_{m\to\infty}\frac{\alpha(I^{(m)})}{m},$
where $\alpha(I)$ is the minimum integer $d$ such that $I_{d}\neq(0)$, that
is, it is the least degree of a minimal generator of $I$. This limit exists
and was first defined by Waldschmidt [53] for ideals of finite sets of points
in the context of complex analysis; specifically, in our language, the problem
was to determine the minimal degree of a hypersurface that passed through a
collection of points with prescribed multiplicities.
The following slightly different version of Question 1.1 was introduced in [39].
Recall that the big height of an ideal $I$ refers to the maximum of all the
heights of its associated prime ideals.
###### Conjecture 1.2.
Let $Z\subset\mathbb{P}^{n}$ be a fat point scheme and $I:=I_{Z}$ the ideal
defining $Z$. Let $\mathcal{M}=(x_{0},\dots,x_{n})$ be the graded maximal
ideal. Then $I^{(rn)}\subseteq\mathcal{M}^{r(n-1)}I^{r}$ holds for all $r>0$.
B. Harbourne conjectured in [4]:
###### Conjecture 1.3.
Given a nonzero, proper, homogeneous, radical ideal $I\subseteq
k[x_{0},\dots,x_{n}]$ with big height $h$,
$I^{(hr-h+1)}\subseteq I^{r}$
for all $r\geq 1.$
A counterexample to the above conjecture was initially found in [23].
A celebrated result of [24, 41, 46] is shown in the next theorem.
###### Theorem 1.4.
Let $R$ be a regular ring and $I$ a radical ideal in $R$. Then for all
$n\in\mathbb{N}$,
$I^{(hn)}\subseteq I^{n},$
whenever $h$ is the big height of $I$.
One could hope to sharpen the containment by reducing the symbolic power on
the left hand side by a constant or increasing the ordinary power on the right
hand side by a fixed constant. This motivates us to look at stable versions of
Conjectures 2.1 and 4.1 in [39], respectively.
###### Conjecture 1.5.
(Stable Harbourne Conjecture) Given a nonzero, proper, homogeneous, radical
ideal $I\subseteq k[x_{0},\dots,x_{n}]$ with big height $h$, then
$I^{(hr-h+1)}\subseteq I^{r}$
for all $r\gg 0.$
###### Conjecture 1.6.
(Stable Harbourne–Huneke Conjecture) Let $I\subseteq k[x_{0},\dots,x_{n}]$ be
a homogeneous radical ideal of big height $h$. Let
$\mathcal{M}=(x_{0},\dots,x_{n})$ be the graded maximal ideal. Then for $r\gg
0$,
1. (1)
$I^{(hr)}\subseteq\mathcal{M}^{r(h-1)}I^{r}$
2. (2)
$I^{(hr-h+1)}\subseteq\mathcal{M}^{(r-1)(h-1)}I^{r}$.
In the study of the least degree of a minimal generator of an ideal
$I$, Chudnovsky made the following conjecture:
###### Conjecture 1.7.
(Chudnovsky’s Conjecture). Suppose that $k$ is an algebraically closed field
of characteristic $0$. Let $I$ be the defining ideal of a set of points
$X\subseteq\mathbb{P}^{n}_{k}$. Then, for all $h>1$,
$\frac{\alpha(I^{(h)})}{h}\geq\frac{\alpha(I)+n-1}{n}.$
A generalization of Chudnovsky’s Conjecture is the following:
###### Conjecture 1.8.
(Demailly’s Conjecture). Suppose that $k$ is an algebraically closed field of
characteristic $0$. Let $I$ be the defining ideal of a set of points
$X\subseteq\mathbb{P}^{n}_{k}$ and let $m\in\mathbb{N}$ be any integer. Then,
for all $h>1$,
$\frac{\alpha(I^{(h)})}{h}\geq\frac{\alpha(I^{(m)})+n-1}{m+n-1}.$
Two recent preprints, [5, 6], focus on the Containment problem and related
conjectures. In the first one, the authors show that Chudnovsky’s Conjecture
holds for sufficiently many general points and to prove it they show that one
of the containments conjectured by Harbourne and Huneke holds eventually,
meaning for large powers (see Theorem 4.6 in [5]). They also show other
related results, for example, that general sets of points have expected
resurgence and thus satisfy the Stable Harbourne Conjecture.
In the second preprint, the authors show that Demailly’s Conjecture (which is
a generalization of Chudnovsky’s) also holds for sufficiently many general
points, for star configurations (in general, not just points) and for generic
determinantal ideals.
In this paper we prove that the Stable Harbourne Conjecture and the Stable
Harbourne–Huneke Conjecture hold for ideals defining the Complement of a
Steiner Configuration of points in $\mathbb{P}^{n}$ that are special subsets
of star configurations and, hence, far from being general. We will give more
details in Section 3.
We remark that the least degree of a minimal generator of the ideal defining
the Complement of a Steiner Configuration of points in $\mathbb{P}^{n}$ is
strictly less than the least degree of a minimal generator of the ideal of a
star configurations (see Theorem 2.6 and also Proposition 2.9 in [29]). So, it
is worth investigating whether the Containment problem and its related
conjectures hold for the Complement of a Steiner Configuration of points in
$\mathbb{P}^{n}$.
In [3] the authors constructed a squarefree monomial ideal $J$ associated to a
set $X$ of points in $\mathbb{P}^{n}$ constructed from the Complement of a
Steiner system. The ideal $I_{X}$ defining the Complement of a Steiner system
is not a monomial ideal. But the authors proved that the symbolic powers of
$I_{X}$ and $J$ share the same homological invariants (see Proposition 3.6 in
[3]). This was possible because $J$ is the Stanley-Reisner ideal of a matroid,
so its symbolic powers define an arithmetically Cohen-Macaulay (ACM for short)
scheme which gives, after proper hyperplane sections, the scheme of fat points
supported on $X$. But we point out that the regular powers of $J$ are not
necessarily ACM anymore and we cannot relate them to squarefree monomial
ideals. Thus, the homological invariants of the regular powers of $J$ are not
immediately correlated to that of $I_{X}$.
In [17] the authors proved that Chudnovsky’s Conjecture, Harbourne’s
Conjecture and the Harbourne–Huneke containment conjectures hold for
squarefree monomial ideals.
As previously remarked, since the ideal $I_{X}$ defining the Complement of a
Steiner system is not a squarefree monomial ideal, we cannot recover the
Stable Harbourne Conjecture and the Stable Harbourne–Huneke Conjecture using
[17].
We also point out that the two preprints [5, 6] above do not compute the
Waldschmidt constant exactly for any class of ideals; they study lower bounds
for it. Since in [3] the authors found the exact value of the Waldschmidt
constant for the Complement of a Steiner configuration of points, Chudnovsky’s
and Demailly’s Conjectures easily follow for our class of ideals (see Section 3).
For other results on this topic we can also see [27, 47].
Another tool used to measure the non-containment between symbolic and ordinary
powers of ideals is the resurgence $\rho(I)$ of an ideal $I$, introduced in
[9], which measures how small the ratio $m/r$ can be while still guaranteeing
$I^{(m)}\subseteq I^{r}$.
###### Definition 1.9.
Let $I$ be a nonzero, proper ideal in a commutative ring $R$, the resurgence
of the ideal $I$ is given by
$\rho(I)=\sup\left\\{\frac{m}{r}\quad|\quad I^{(m)}\nsubseteq I^{r}\right\\}.$
It always satisfies $\rho(I)\geq 1$. The groundbreaking results of [24, 41,
46] show that $\rho(I)\leq h$, where $h$ is the big height of the radical
ideal $I$. This motivates us to ask whether $\rho(I)$ can be strictly less
than the big height of $I$, and what some of the interesting consequences are.
Although there are few cases where the resurgence has been computed, in
general, it is extremely difficult to estimate the exact value for $\rho(I)$.
The reader can look at [22] for the first examples where the resurgence and
the asymptotic resurgence are not equal. An asymptotic version of the
resurgence was introduced in the paper [35].
###### Definition 1.10.
For a nonzero, proper homogeneous ideal $I\subseteq k[x_{0},\dots,x_{n}]$, the
asymptotic resurgence $\rho_{a}(I)$ is defined as follows:
$\rho_{a}(I)=\sup\left\\{\frac{m}{r}\quad|\quad I^{(mt)}\nsubseteq
I^{rt},\quad\text{for all}\quad t\gg 0\right\\}.$
It is clear from the definition that $1\leq\rho_{a}(I)\leq\rho(I)$. As pointed
out in [40], DiPasquale, Francisco, Mermin and Schweig showed that
$\rho_{a}(I)=\sup\\{m/r:~{}I^{(m)}\nsubseteq\overline{I^{r}}\\}$, where
$\overline{I^{r}}$ is the integral closure of $I^{r}$ (see also [20],
Corollary 4.14).
In this paper we study the containment properties of the ideal defining a
Complement of a Steiner configuration of points in $\mathbb{P}^{n}$. Section 2
is devoted to recall notation, definitions and known results from [3] that we
will use in the next sections. The main result of Section 3 is Theorem 3.5
where we prove that an ideal defining the Complement of a Steiner
Configuration of points in $\mathbb{P}^{n}$ satisfies both the Stable
Harbourne Conjecture and the Stable Harbourne–Huneke Conjecture. In Lemma 3.2,
we give a criterion for when the resurgence number can be computed in a finite
number of steps. This result improves the bounds found in Corollary 4.8 in
[3]. We also point out that Lemma 3.2 is similar to results from [19, 20]. As
a consequence, in Corollary 3.6 we show that the ideal of a Complement of a
Steiner Configuration of points has expected resurgence, that is, its
resurgence is strictly less than its big height (see [34] for the first
definition). Moreover, using Theorem 2.6, Corollaries 2.7 and 2.8, we show
that the ideal of a Complement of a Steiner Configuration of points satisfies
Chudnovsky and Demailly’s Conjectures (see Corollary 2.8, Corollary 3.7 and
Corollary 3.8).
Finally, in Section 4, given a hypergraph ${H}$, we also study the relation
between its colourability and the failure of the Containment problem for the
cover ideal associated to $H$. The ideas come from the paper [28] where the
authors start to study the natural one-to-one correspondence between
squarefree monomial ideals and finite simple hypergraphs via the cover ideal
construction.
There exists an extensive literature on the subject of colourings, both from
the Design Theory and the Algebraic Geometry/Commutative Algebra points of
view. Among them, we use [10, 28, 38, 31] as reference texts for preliminaries
on hypergraph theory and associated primes, and for an algebraic method to
compute the chromatic number, respectively.
Most of the existing papers are devoted to the case of weak colourings (or
vertex colourings), i.e. colourings where the colours are assigned to the
elements in such a way that no hyperedge is monochromatic (i.e. no hyperedge
has all its elements assigned the same colour). The reader can see [10] or
Chapter 3 in [31] for other different types of colouring a hypergraph, such as
strong vertex colouring, vertex equicolouring, good colouring of $H$.
In this paper we use weak colourings to get results on the Containment
problem, since they are the ones commonly used in Combinatorial Commutative
Algebra. The main result of this section is Theorem 4.8, which more generally
predicts the failure of the containment for squarefree monomial ideals based
on the definition of coverability (see Definition 4.4). We apply
these results in the case that $H$ is a Steiner System.
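To make the colourability/containment link concrete, consider the smallest standard instance (an illustrative Python sketch using the elementary membership criteria for monomial ideals; the triangle example is classical and not taken verbatim from this paper): the cover ideal of a triangle is $J=(xy,xz,yz)$, with minimal primes $(x,y)$, $(x,z)$, $(y,z)$. Since the triangle is not $2$-colourable, the containment $J^{(2)}\subseteq J^{2}$ fails, with witness $xyz$.

```python
from itertools import combinations_with_replacement

# Monomials in k[x, y, z] are represented by exponent vectors (a, b, c).
gens_J = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]  # J = (xy, xz, yz), cover ideal of a triangle
min_primes = [(0, 1), (0, 2), (1, 2)]       # variable indices of (x,y), (x,z), (y,z)

def in_monomial_ideal(m, gens):
    # a monomial lies in a monomial ideal iff some generator divides it
    return any(all(m[i] >= g[i] for i in range(3)) for g in gens)

def in_symbolic_power(m, k):
    # for a squarefree monomial ideal, I^(k) is the intersection of P^k over the
    # minimal primes; x^a y^b z^c lies in (x_i, x_j)^k iff m[i] + m[j] >= k
    return all(m[i] + m[j] >= k for (i, j) in min_primes)

def power_gens(gens, r):
    # generators of J^r: all products of r generators
    return [tuple(sum(g[i] for g in combo) for i in range(3))
            for combo in combinations_with_replacement(gens, r)]

xyz = (1, 1, 1)
assert in_symbolic_power(xyz, 2)                          # xyz lies in J^(2)
assert not in_monomial_ideal(xyz, power_gens(gens_J, 2))  # but not in J^2
print("witness to J^(2) not contained in J^2: xyz")
```

Here $xyz\in J^{(2)}\setminus J^{2}$ mirrors the fact that the triangle has chromatic number $3$, the prototype of the correspondence studied in Section 4.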
We end the paper by recalling some open questions posed in [3] that are still
under investigation, and by posing new ones as possible further research
problems.
## 2\. Notation, definitions and known results for Ideals of a Steiner
configuration of points and its Complement
In this section we recall the main results from [3], where the authors studied
the homological properties of ideals constructed from Steiner systems,
especially in the zero dimensional case of $\mathbb{P}^{n}$.
A Steiner system $(V,B)$ of type $S(t,n,v)$ is a collection $B$ of $n$-subsets
(blocks) of a $v$-set $V$ such that each $t$-subset of $V$ is contained in a
unique block in $B$. The elements in $V$ are called vertices or points and
those of $B$ are called blocks. In particular, a Steiner triple system of
order $v$, $STS(v)$, is a collection of triples ($3$-subsets) of $V$, such
that each unordered pair of elements is contained in precisely one block, and
a Steiner quadruple system of order $v$, $SQS(v)$, is a collection of
quadruples (4-subsets) of $V$ such that each triple is found in precisely one
block.
The existence of a Steiner system strongly depends on the parameters
$(t,n,v)$. If a Steiner system $(V,B)$ of type $S(t,n,v)$ exists, then
$|B|=\frac{{\binom{v}{t}}}{{\binom{n}{t}}}.$
We use [14] and [16] as main references for all the background on design
theory.
We recall the best-known example.
###### Example 2.1.
One of the simplest and best-known examples of a Steiner system is the Fano
Plane. It is unique up to isomorphism and it is a Steiner system $S(2,3,7)$
with block set
$B:=\\{\\{1,2,3\\},\\{3,4,5\\},\\{3,6,7\\},\\{1,4,7\\},\\{2,4,6\\},\\{2,5,7\\},\\{1,5,6\\}\\}.$
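As a sanity check on this example (an illustrative Python sketch, not part of the original text), one can verify that these seven blocks satisfy the defining property of an $S(2,3,7)$, and that their number agrees with the counting formula $|B|={\binom{v}{t}}/{\binom{n}{t}}$ above:

```python
from itertools import combinations
from math import comb

# Blocks of the Fano plane from Example 2.1
blocks = [{1, 2, 3}, {3, 4, 5}, {3, 6, 7}, {1, 4, 7}, {2, 4, 6}, {2, 5, 7}, {1, 5, 6}]
V = set(range(1, 8))
t, n, v = 2, 3, 7

# Steiner property: every t-subset of V lies in exactly one block
for pair in combinations(V, t):
    assert sum(1 for b in blocks if set(pair) <= b) == 1

# Block count agrees with |B| = C(v,t)/C(n,t)
assert len(blocks) == comb(v, t) // comb(n, t) == 7
print("S(2,3,7) verified:", len(blocks), "blocks")
```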
For the ease of the reader, we recall some definitions and results from [3].
Let $V:=\\{1,\ldots,v\\}$ and let $\mathcal{H}:=\\{H_{1},\ldots,H_{v}\\}$ be a
collection of distinct hyperplanes of $\mathbb{P}^{n}$, where $n\leq v$. Say
$H_{j}$ is defined by the linear form $\ell_{j}$ for $j=1,\dots,v$. Assume that
any $n$ hyperplanes in $\mathcal{H}$ meet properly, i.e., they meet in a
point. There is a natural way to associate a point in $\mathbb{P}^{n}$ to a
subset of $n$ elements of $V$. For
$\sigma:=\\{\sigma_{1},\ldots,\sigma_{n}\\}\subseteq V$, we denote by
$P_{\mathcal{H},\sigma}$ the point obtained by intersecting the hyperplanes
$H_{\sigma_{1}},\ldots,H_{\sigma_{n}}.$ Then, the ideal
$I_{P_{\mathcal{H},\sigma}}=(\ell_{\sigma_{1}},\ldots,\ell_{\sigma_{n}})\subseteq
k[\mathbb{P}^{n}]$ is the vanishing ideal of the point
$P_{\mathcal{H},\sigma}$.
###### Definition 2.2.
Let $Y$ be a collection of subsets of $V$ containing $n$ elements, and
$\mathcal{H}$ a set of hyperplanes meeting properly. We define the following
set of points in $\mathbb{P}^{n}$ with respect to $\mathcal{H}$
$X_{\mathcal{H},Y}:=\bigcup_{\sigma\in Y}~{}P_{\mathcal{H},\sigma}$
and its defining ideal
$I_{X_{\mathcal{H},Y}}:=\bigcap_{\sigma\in Y}~{}I_{P_{\mathcal{H},\sigma}}.$
Denoting by $C_{(n,v)}$ the set of all subsets of $V$ with $n$ elements, the
above definition applied to a Steiner system gives us two different sets of
points.
###### Definition 2.3.
Let $(V,B)$ be a Steiner system of type $S(t,n,v)$ with $t<n\leq v$. We
associate to $B$ the following set of points in $\mathbb{P}^{n}$
$X_{\mathcal{H},B}:=\bigcup_{\sigma\in B}~{}P_{\mathcal{H},\sigma}$
and its defining ideal
$I_{X_{\mathcal{H},B}}:=\bigcap_{\sigma\in B}~{}I_{P_{\mathcal{H},\sigma}}.$
We call $X_{\mathcal{H},B}$ the Steiner configuration of points associated to
the Steiner system $(V,B)$ of type $S(t,n,v)$ with respect to $\mathcal{H}$
(or just $X_{B}$ if there is no ambiguity).
###### Definition 2.4.
Let $(V,B)$ be a Steiner system of type $S(t,n,v)$ with $t<n\leq v$. We
associate to $C_{(n,v)}\setminus B$ the following set of points in
$\mathbb{P}^{n}$
$X_{\mathcal{H},C_{(n,v)}\setminus B}:=\bigcup_{\sigma\in C_{(n,v)}\setminus
B}~{}P_{\mathcal{H},\sigma}$
and its defining ideal
$I_{X_{\mathcal{H},C_{(n,v)}\setminus B}}:=\bigcap_{\sigma\in
C_{(n,v)}\setminus B}~{}I_{P_{\mathcal{H},\sigma}}.$
We call $X_{\mathcal{H},C_{(n,v)}\setminus B}$ the Complement of a Steiner
configuration of points with respect to $\mathcal{H}$ (or C-Steiner
$X_{\text{C}}$ if there is no ambiguity).
As pointed out in Remarks 2.5 and 2.6 of [3], a Steiner configuration of points
and its Complement are subschemes of a star configuration of ${\binom{v}{n}}$
points in $\mathbb{P}^{n}$ (see [11, 12, 29, 30] just to cite some references
on star configurations).
We also have
$\deg X_{\mathcal{H},C_{(n,v)}\setminus
B}={\binom{v}{n}}-|B|={\binom{v}{n}}-\dfrac{{\binom{v}{t}}}{{\binom{n}{t}}}.$
We recall the best-known construction of a Steiner configuration of points and
its Complement.
###### Example 2.5.
Consider the Steiner configuration associated to $(V,B)$ of type $S(2,3,7)$ as
in Example 2.1. Take $\mathcal{H}:=\\{H_{1},\ldots,H_{7}\\}$ a collection of
$7$ distinct hyperplanes $H_{i}$ in $\mathbb{P}^{3}$ defined by a linear form
$\ell_{i}$ for $i=1,\dots,7$, respectively, with the property that any $3$ of
them meet in a point $P_{\mathcal{H},\sigma}=H_{\sigma_{1}}\cap
H_{\sigma_{2}}\cap H_{\sigma_{3}}$, where
$\sigma=\\{\sigma_{1},\sigma_{2},\sigma_{3}\\}\in B$. We get that
$X_{\mathcal{H},C_{(3,7)}}$ is a star configuration of ${\binom{7}{3}}=35$
points in $\mathbb{P}^{3}$, $X_{\mathcal{H},B}:=\cup_{\sigma\in
B}~{}\\{P_{\mathcal{H},\sigma}\\}$ is a Steiner configuration consisting of
$7$ points in $\mathbb{P}^{3}$ and $X_{\mathcal{H},C_{(3,7)}\setminus B}$ is a
C-Steiner configuration consisting of ${\binom{7}{3}}-7=28$ points in
$\mathbb{P}^{3}.$ Their defining ideals are respectively,
$I_{X_{\mathcal{H},B}}:=\cap_{\sigma\in
B}~{}I_{P_{\mathcal{H},\sigma}}\textrm{ and
}I_{X_{\mathcal{H},C_{(3,7)}\setminus B}}:=\cap_{\sigma\in C_{(3,7)}\setminus
B}~{}I_{P_{\mathcal{H},\sigma}}.$
In [3] the authors associated a squarefree monomial ideal $J$ to the set
$X_{\mathcal{H},C}$ of points in $\mathbb{P}^{n}$ arising from the Complement
of a Steiner system. The ideal $I_{X_{\mathcal{H},C}}$ defining the
Complement of a Steiner system is not a monomial ideal, but the authors proved
that the symbolic powers of $I_{X_{\mathcal{H},C}}$ and $J$ share the same
homological invariants (see Proposition 3.6 in [3]).
The following results give, respectively, the least degree of a minimal
generator, the regularity, and the Waldschmidt constant of an ideal defining
the Complement of a Steiner configuration of points.
###### Theorem 2.6 ([3], Theorem 3.9).
Let $(V,B)$ be a Steiner system of type $S(t,n,v)$. Set
$I_{X_{\mathcal{H},C}}:=I_{X_{C}}$ the ideal defining the Complement of the
Steiner configuration of points associated to $S(t,n,v)$. Then
* i)
$\alpha(I_{X_{C}})=v-n$;
* ii)
$\alpha(I_{X_{C}}^{(q)})=v-n+q$ for $2\leq q<n$;
* iii)
$\alpha(I_{X_{C}}^{(m)})=\alpha(I_{X_{C}}^{(q)})+pv$, where $m=pn+q$ and
$0\leq q<n$ and $\alpha(I_{X_{C}}^{(n)})=\alpha(I_{X_{C}}^{(0)})+v=v$.
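The piecewise formula in Theorem 2.6 is easy to misread, so a small helper (the name `alpha_symbolic` is ours) may be useful; it encodes items i)–iii), with $\alpha(I_{X_C}^{(0)})=0$ and $\alpha(I_{X_C}^{(1)})=\alpha(I_{X_C})=v-n$:

```python
def alpha_symbolic(m, n, v):
    """alpha of the m-th symbolic power of the C-Steiner ideal for S(t, n, v),
    following Theorem 2.6: write m = p*n + q with 0 <= q < n."""
    p, q = divmod(m, n)
    if q == 0:
        base = 0            # I^{(0)} is the whole ring
    elif q == 1:
        base = v - n        # item i): alpha(I) = v - n
    else:
        base = v - n + q    # item ii)
    return base + p * v     # item iii)

# Sanity checks for S(2, 3, 7), i.e. n = 3, v = 7:
print(alpha_symbolic(1, 3, 7))  # alpha(I) = 4
print(alpha_symbolic(3, 3, 7))  # alpha(I^{(n)}) = v = 7
```

In particular the helper reproduces the special case $\alpha(I^{(nr)})=rv$ used later in Proposition 3.4.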
###### Corollary 2.7 ([3], Corollary 4.2).
Let $\text{reg}(I_{X_{C}})$ be the regularity of a Complement of a Steiner
configuration. Then $\text{reg}(I_{X_{C}})=\alpha(I_{X_{C}})+1=v-n+1$.
###### Corollary 2.8 ([3], Corollary 3.12).
If $(V,B)$ is a Steiner system of type $S(t,n,v)$, then the Waldschmidt
constant of the ideal of its Complement is $\widehat{\alpha}(I_{X_{C}})=\frac{v}{n}$.
## 3\. Asymptotic resurgence and Stable Harbourne Conjecture
Containment problems have been of interest among commutative algebraists and
algebraic geometers. In the last decade, several conjectures related to this
problem have been posed, creating an active area of current interest and
ongoing investigation.
A celebrated result of [24, 41, 46] is that $I^{(hn)}\subseteq I^{n}$ for all
$n\in\mathbb{N}$, whenever $h$ is the big height of $I$. One could hope to
sharpen the containment by reducing the symbolic power on the left hand side
by a constant or increasing the ordinary power on the right hand side by a
fixed constant. This motivates us to look at the Stable Harbourne Conjecture
and the Stable Harbourne – Huneke Conjecture and study which class of ideals
satisfies them. Here, we prove that the ideal defining a Complement of a
Steiner configurations of points satisfies both conjectures. We need to recall
some known results.
In [33], Conjecture 1.5 is shown to hold
1. (1)
if there exists $k>0$ such that $I^{(hk-h)}\subseteq I^{k}$;
2. (2)
if $I^{(hk-h+1)}\subseteq I^{k}$ for some $k$ and $I^{(r+h)}\subseteq
II^{(r)}$ for all $r\geq k$;
3. (3)
if the resurgence satisfies $\rho(I)<h$.
In particular, condition (2) gives a criterion for the Stable Harbourne
Conjecture (SHC for short) to hold. Namely, let $I$ be a radical ideal of big
height $h$ such that $I^{(k+h)}\subseteq II^{(k)}$ for all $k\geq 1$, and fix
integers $C$ and $m$ such that $I^{(hm-C)}\subseteq I^{m}$ holds. Then for all
$q\geq m$, we have
$I^{(hq-C)}=I^{(h(q-m)+hm-h+h-C)}\subseteq II^{(h(q-m-1)+hm-C)}\subseteq
I^{q-m}I^{(hm-C)}\subseteq I^{q-m}I^{m}=I^{q},$
that is, $I^{(hq-C)}\subseteq I^{q}$.
###### Theorem 3.1 (Theorem 2.5, [33]).
Let $R$ be a regular ring containing a field, and let $I$ be a radical ideal
in $R$ with big height $h$. If $I^{(h(m-1))}\subseteq I^{m}$ for some $m\geq
2$, then $I^{(h(k-1))}\subseteq I^{k}$ for all $k\gg 0$ (indeed for all $k\geq
hm$).
We have learned that Harbourne, Kettinger, Zimmitti in [40] and DiPasquale,
Drabkin in [19] proved independently that $\rho_{a}(I)<h$ if and only if
$\rho(I)<h$. As pointed out in [19], Remark 2.3, the next result is similar to
Proposition 4.1.3 of Denkert’s thesis [18], to Lemma 4.12 in [20], and to
Proposition 2.2 in [19].
For the ease of the reader, we adapt the proof in our case.
###### Lemma 3.2.
Let $I\subseteq k[x_{0},\dots,x_{n}]$ be a homogeneous radical ideal of big
height $h$ such that $\rho(I)>\rho_{a}(I)$. Suppose we have the
equality
$\rho_{a}(I)=\frac{hr_{1}-h}{r_{1}}$
for some $r_{1}>0$. Then $\rho(I)$ can be computed by taking the maximum of
finitely many $\frac{s}{r}$ with $I^{(s)}\nsubseteq I^{r}$.
###### Proof.
Using the Briançon–Skoda Theorem ([52], Corollary 13.3.4), we have that
$\overline{I^{r+n}}\subseteq I^{r}$, where $n+1$ is the number of variables in
the polynomial ring and $\overline{I}$ denotes the integral closure of the
ideal $I$. If $s,r\in\mathbb{N}$ are such that $I^{(s)}\nsubseteq I^{r}$, then
$I^{(s)}\nsubseteq\overline{I^{r+n}}$. Using [20], Lemma 4.12, we get
$\frac{s}{r+n}<h(1-1/r_{1})=\rho_{a}(I)$, that is,
$\frac{s}{r}<(1+n/r)h(1-1/r_{1}).$
Since $\rho(I)>\rho_{a}(I)$, applying [19], Proposition 2.2, there exist
$s_{0},r_{0}$ such that $I^{(s_{0})}\nsubseteq I^{r_{0}}$ and
$\rho(I)\geq\frac{s_{0}}{r_{0}}\geq(1+\frac{n}{r})h(1-\frac{1}{r_{1}}),$
solving for $r$ gives us the inequality
$r\geq\frac{n}{\frac{s_{0}/r_{0}}{h(1-1/r_{1})}-1},$
so whenever $r\geq\frac{n}{\frac{s_{0}/r_{0}}{h(1-1/r_{1})}-1}$ and $s$ is
such that $I^{(s)}\nsubseteq I^{r}$, we have
$\frac{s}{r}<\frac{s_{0}}{r_{0}}$.
Hence, it suffices to look at
$r\leq\frac{n}{\frac{s_{0}/r_{0}}{h(1-1/r_{1})}-1}$
and $s\leq(r+n)h(1-\frac{1}{r_{1}})$. ∎
###### Corollary 3.3.
If the resurgence can be computed by taking the maximum of finitely many
ratios of the form $\frac{m}{r}$ for which $I^{(m)}\nsubseteq I^{r}$, then
$\rho(I)<h$.
###### Proof.
Each ratio $\frac{m}{r}$ with $I^{(m)}\nsubseteq I^{r}$ satisfies
$\frac{m}{r}<h$: indeed, if $m\geq hr$, then $I^{(m)}\subseteq
I^{(hr)}\subseteq I^{r}$ by [24, 41, 46], a contradiction. The maximum of
finitely many such ratios is therefore strictly less than $h$, hence
$\rho(I)<h$. ∎
The next proposition shows that Conjecture 3.1 in [7] holds for the Complement
of a Steiner Configuration of points.
###### Proposition 3.4.
Let $I\subset k[\mathbb{P}^{n}]$ be an ideal defining a Complement of a
Steiner Configuration of points and let $\mathcal{M}=(x_{0},\dots,x_{n})$ be
the homogeneous maximal ideal. Then $I^{(nr)}\subseteq\mathcal{M}^{rn}I^{r}$
holds for all $r\in\mathbb{N}.$
###### Proof.
From Theorem 2.6, we have $\alpha(I^{(nr)})=rv$. From [3], Corollary 4.7, we
have $\omega(I^{r})=\alpha(I^{r})=r(v-n)$, where $\omega(I^{r})$ denotes the
maximal degree of a minimal generator of $I^{r}$. Since
$I^{(nr)}\subseteq I^{r}$ for all $r\geq 1$, every homogeneous element of
$I^{(nr)}$ lies in $I^{r}$, and
$\alpha(I^{(nr)})-\omega(I^{r})=rv-r(v-n)=rn$. Hence the difference between
the degree of any nonzero homogeneous polynomial in $I^{(nr)}$ and the degree
of any minimal generator of $I^{r}$ is at least $rn$, so
$I^{(nr)}\subseteq\mathcal{M}^{rn}I^{r}$, and the conclusion follows. ∎
We prove the main result of this section:
###### Theorem 3.5.
Let $I\subseteq k[x_{0},\dots,x_{n}]$ be the ideal defining the Complement of
a Steiner Configuration of points in $\mathbb{P}^{n}_{k}$. Then $I$ satisfies
1. (1)
Stable Harbourne–Huneke Conjecture;
2. (2)
Stable Harbourne Conjecture.
###### Proof.
(1) Consider the Steiner Configuration of points $S(t,n,v)$ in
$\mathbb{P}_{k}^{n}$ and $I:=I_{C}$ the ideal defining its Complement.
Using Theorem 2.6, (iii), we have $\alpha(I^{(n(r-1))})=(r-1)v$. Using
Corollary 2.7, and choosing $r\gg 0$ such that
$(r-1)v\geq r\cdot\operatorname{reg}(I)=r(v-n+1)$
we get $I^{(n(r-1))}\subseteq I^{r}.$ Moreover, since
$\alpha(I^{(n(r-1))})-\alpha(I^{r})=v(r-1)-r(v-n)=rn-v$, we get
$I^{(n(r-1))}\subseteq\mathcal{M}^{rn-v}I^{r}.$ Using Euler’s Formula, we get
$\begin{array}[]{rcl}I^{(n(r-1)+1)}&\subseteq&\mathcal{M}I^{(n(r-1))}\subseteq\mathcal{M}^{rn-v+1}I^{r}=\mathcal{M}^{rn-
v-n+n+1}I^{r}=\mathcal{M}^{rn-n-(v-n)+1}I^{r}\\\ &\subseteq&\mathcal{M}^{rn-
n-r+1}I^{r}=\mathcal{M}^{(r-1)(n-1)}I^{r}.\end{array}$
(2) By part (1), we have the containment $I^{(n(m-1))}\subseteq I^{m}$ for
$m\gg 0$. Write $k=nm+t$ with $0\leq t<n$ and $m\gg 0$. From [44], we have
$I^{(ns+a_{1}+\cdots+a_{s})}\subseteq
I^{(a_{1}+1)}I^{(a_{2}+1)}\cdots I^{(a_{s}+1)}$. Letting $s=n+t$,
$a_{1}=a_{2}=\dots=a_{n}=nm-n-1$ and $a_{n+1}=\dots=a_{n+t}=0$, and using
$I^{(1)}=I$, we obtain
$I^{(n(n+t)+n(nm-n-1))}=I^{(n^{2}m+nt-n)}=I^{(nk-n)}\subseteq\left(I^{(n(m-1))}\right)^{n}I^{t}\subseteq\left(I^{m}\right)^{n}I^{t}=I^{nm+t}=I^{k}.$
Hence $I^{(nk-n)}\subseteq I^{k}$ for $k\gg 0$. ∎
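The exponent bookkeeping in the proof of Theorem 3.5 is easy to get wrong; the following throwaway check (our own, not part of the original argument) confirms that the choice $s=n+t$, $a_{1}=\cdots=a_{n}=nm-n-1$, $a_{n+1}=\cdots=a_{s}=0$ turns the symbolic exponent $ns+a_{1}+\cdots+a_{s}$ from [44] into exactly $nk-n$ for $k=nm+t$:

```python
def symbolic_exponent(n, m, t):
    """ns + a_1 + ... + a_s for s = n + t, with n copies of
    a_i = n*m - n - 1 and t copies of a_i = 0."""
    s = n + t
    return n * s + n * (n * m - n - 1)

# The exponent should equal n*k - n with k = n*m + t, for all 0 <= t < n.
for n in range(2, 8):
    for m in range(2, 8):
        for t in range(n):
            k = n * m + t
            assert symbolic_exponent(n, m, t) == n * k - n
print("exponent identity verified")
```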
As a consequence, we can show that the ideal of a Complement of a Steiner
Configuration of points has expected resurgence, that is, its resurgence is
strictly less than its big height (see [34]).
###### Corollary 3.6.
Let $I\subseteq k[x_{0},\dots,x_{n}]$ be the ideal defining the Complement of
a Steiner Configuration of points in $\mathbb{P}^{n}_{k}$. Then, $\rho(I)<n.$
###### Proof.
From Theorem 3.5, we have $\rho_{a}(I)<n$. Note that $\rho_{a}(I)\leq\rho(I)$.
If $\rho_{a}(I)=\rho(I)$, then clearly $\rho(I)<n$. On the other hand, if
$\rho_{a}(I)<\rho(I)$, then from Lemma 3.2 and Corollary 3.3, we conclude that
$\rho(I)<n.$ ∎
We give an alternative proof of Chudnovsky’s Conjecture:
###### Corollary 3.7.
Let $I\subseteq k[x_{0},\dots,x_{n}]$ be the ideal defining the Complement of
a Steiner Configuration of points in $\mathbb{P}^{n}_{k}$. Then Chudnovsky’s
Conjecture holds for $I$.
###### Proof.
From Theorem 2.6, item i), we have $\alpha(I)=v-n$, and from Corollary 2.8,
$\widehat{\alpha}(I)=\frac{v}{n}$. Then
$\widehat{\alpha}(I)\geq\frac{\alpha(I)+n-1}{n}\Leftrightarrow\frac{v}{n}\geq\frac{v-1}{n}.$
∎
###### Corollary 3.8.
Let $I\subseteq k[x_{0},\dots,x_{n}]$ be the ideal defining the Complement of
a Steiner Configuration of points in $\mathbb{P}^{n}_{k}$. Then Demailly’s
Conjecture holds for $I$.
###### Proof.
From Theorem 2.6, for $m=pn+q$ with $2\leq q<n$ we have
$\alpha(I^{(m)})=pv+\alpha(I^{(q)})=pv+v-n+q$. From Corollary 2.8, we have
that $\widehat{\alpha}(I)=\frac{v}{n}$. Hence, whenever
1. (1)
$m=pn+q$, with $2\leq q<n$, we have
$\frac{\alpha(I^{(m)})+n-1}{m+n-1}=\frac{pv+v-n+q+n-1}{pn+q+n-1}=\frac{(p+1)v+q-1}{(p+1)n+q-1}\leq\frac{v}{n}=\widehat{\alpha}(I)$
2. (2)
$q=1$ and $m=np+1$, we have
$\frac{\alpha(I^{(m)})+n-1}{m+n-1}=\frac{pv+v-n+n-1}{pn+1+n-1}=\frac{(p+1)v-1}{(p+1)n}<\frac{v}{n}=\widehat{\alpha}(I)$
3. (3)
$q=0$ and $m=np$, we have
$\frac{\alpha(I^{(m)})+n-1}{m+n-1}=\frac{pv+n-1}{pn+n-1}<\frac{v}{n}=\widehat{\alpha}(I).$
∎
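The three case-by-case inequalities in the proof above can be double-checked numerically; the sketch below (assuming the $\alpha(I^{(m)})$ formula of Theorem 2.6, with helper names of our own) verifies Demailly's bound $\frac{\alpha(I^{(m)})+n-1}{m+n-1}\leq\widehat{\alpha}(I)=\frac{v}{n}$ over a range of parameters, using exact rational arithmetic:

```python
from fractions import Fraction

def alpha_sym(m, n, v):
    # alpha(I^{(m)}) per Theorem 2.6, writing m = p*n + q with 0 <= q < n
    p, q = divmod(m, n)
    base = 0 if q == 0 else (v - n if q == 1 else v - n + q)
    return base + p * v

# Check Demailly's inequality for several admissible (n, v) and many m.
for n in range(2, 6):
    for v in range(n + 1, 15):
        for m in range(1, 60):
            lhs = Fraction(alpha_sym(m, n, v) + n - 1, m + n - 1)
            assert lhs <= Fraction(v, n), (n, v, m)
print("Demailly's bound holds on all tested parameters")
```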
###### Remark 3.9.
Chudnovsky’s Conjecture can also be deduced from Proposition 3.4. We have
$I^{(nr)}\subseteq\mathcal{M}^{rn}I^{r}$, which gives the inequality
$\alpha(I^{(nr)})\geq rn+r\alpha(I)\geq rn+r\alpha(I)-r$. Dividing by
$nr$ and letting $r\rightarrow\infty$ gives
$\widehat{\alpha}(I)\geq\frac{\alpha(I)+n-1}{n}.$
## 4\. Containment and colouring
In this section we focus on the relation between the colourability of a
hypergraph $H$ and the failure of the containment problem for the cover
ideal associated to $H$. We then apply these results in the case where $H$ is
a Steiner system. There exists an extensive literature on the subject of
colourings, both from the Design Theory and the Algebraic
Geometry/Commutative Algebra points of view; among the many available texts,
we mainly refer to [10, 28, 31, 38].
Most of the existing papers are devoted to the case of weak colourings (or
vertex colourings), i.e. colourings where the colours are assigned to the
elements in such a way that no hyperedge is monochromatic (i.e. no hyperedge
has all its elements assigned the same colour). The reader can see [10] or
Chapter 3 in [31] for other types of colourings of a hypergraph, such as
strong vertex colouring, vertex equicolouring, and good colouring of $H$.
In this paper we use weak colourings to get results on the Containment
problem.
We first recall some known definitions and results from [10] or [31], Chapter
2.
A hypergraph is a pair $H=(V,E)$, where $V=\\{x_{1},\dots,x_{n}\\}$ is a
finite nonempty set whose $n$ elements are called vertices and
$E=\\{e_{i}\\}_{i\in I}$ ($I$ a set of indices) is a family of subsets of $V$,
called edges, or otherwise hyperedges, such that $e\neq\emptyset$ for all
$e\in E$ and $\cup_{e\in E}\;e=V$.
A colouring of a hypergraph $H=(V,E)$ is a surjective mapping $c:V\rightarrow
C$ where $C$ is the set of colours. When $|C|=m$, then a proper $m$-colouring
of a hypergraph $H=(V,E)$ is a mapping $c:V\rightarrow\\{1,2,\dots,m\\}$ for
which every edge $e\in E$ has at least two vertices of different colours.
As for graphs, proper colourings generate partitions of the vertex set into a
number of stable (independent) non-empty subsets called colour classes, with
as many classes as the number of colours actually used.
Thus, we use an equivalent definition from [28, 38], common in Combinatorial
Algebraic Geometry/Commutative Algebra research:
###### Definition 4.1.
Let $H=(V,E)$ be a hypergraph. An $m$-colouring of $H$ is any partition of
$V=U_{1}\cup\cdots\cup U_{m}$ into $m$ disjoint sets such that for every $e\in
E$ we have $e\not\subseteq U_{j}$ for all $j=1,\dots,m$. The $U_{j}$’s are
called the colour classes.
The chromatic number of $H$, denoted by $\chi(H)$, is the minimum $m$ such
that $H$ has an $m$-colouring.
###### Definition 4.2.
A hypergraph $H=(V,E)$ is $m$-colourable if there exists a proper
$m$-colouring, i.e., if $\chi(H)\leq m$.
###### Definition 4.3.
We say $H$ is $m$-chromatic if it is $m$-colourable but not
$(m-1)$-colourable.
When $\chi(H)\leq 2$, the hypergraph $H$ is called bicolourable. (In parts of
the literature the term ‘bipartite’ is also used.)
###### Definition 4.4.
Let $H:=(V,E)$ be a hypergraph. For an integer $c,$ we say that $H$ is
$c$-coverable if there exists a partition $U_{1},U_{2},\dots,U_{c}$ of $V$
such that $e\cap U_{i}\neq\emptyset$ for each $i=1,\ldots,c$ and for each
$e\in E.$
###### Remark 4.5.
Note that, as an immediate consequence of the above definitions, if $H$ is
$c$-coverable, $c>1$, then $H$ is $c$-colourable.
###### Example 4.6.
Set $V:=\\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6},x_{7}\\}$. Let $H$ be the set
of blocks of an $STS(7)$
(4.1)
$H:=\\{\\{x_{1},x_{2},x_{3}\\},\\{x_{1},x_{4},x_{5}\\},\\{x_{1},x_{6},x_{7}\\},\\{x_{2},x_{4},x_{6}\\},\\{x_{2},x_{5},x_{7}\\},\\{x_{3},x_{4},x_{7}\\},\\{x_{3},x_{5},x_{6}\\}\\}.$
Take, for instance, the partition
$\\{x_{1},x_{2},x_{5}\\},\\{x_{3},x_{4},x_{6}\\},\\{x_{7}\\}$ (see Figure 1).
Then $H$ is 3-colourable, but it is not 3-coverable.
Notice also that no proper colouring of $H$ with two colours exists, so
$\chi(H)=3$.
Figure 1. The three colour classes of a $STS(7)$.
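Both claims in Example 4.6 are finite checks (there are only $2^{7}$ and $3^{7}$ colour assignments), so they can be verified by brute force. The sketch below (our own verification, not part of the original text) confirms that this $STS(7)$ admits a proper 3-colouring but no proper 2-colouring, hence $\chi(H)=3$, and that it is not 3-coverable:

```python
from itertools import product

# Blocks of the STS(7) in (4.1), with x_i encoded as i.
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
          {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

def assignments(num_colours):
    # every way to assign num_colours colours to the 7 vertices
    return product(range(num_colours), repeat=7)

def proper(c):
    # proper (weak) colouring: no block is monochromatic
    return all(len({c[i - 1] for i in b}) >= 2 for b in blocks)

def covers(c, num_colours):
    # num_colours-cover: every block meets every colour class
    return all({c[i - 1] for i in b} == set(range(num_colours)) for b in blocks)

print(any(proper(c) for c in assignments(3)))      # True:  3-colourable
print(any(proper(c) for c in assignments(2)))      # False: chi(H) = 3
print(any(covers(c, 3) for c in assignments(3)))   # False: not 3-coverable
```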
###### Remark 4.7.
We refer the reader to Section 3.5 in [31] to see examples of different types
of colourings that give different chromatic numbers for the same $H$. In
particular, in Example 8 of [31], the strong vertex colouring of $H$ as in
(4.1) gives $\chi(H)=7$ (recall that a mapping $c$ is a strong colouring of
vertices of $H$ if for all $e\in E$ it is $|c(e)|=|e|$).
For a non empty hypergraph $H$, i.e. $H\subset 2^{V}$, we define the ideal
$J_{H}:=\bigcap_{\sigma\in H}\mathfrak{p}_{\sigma}\subseteq k[V]$
called the cover ideal of $H,$ where for a subset of $V$,
$\sigma:=\\{{i_{1}},{i_{2}},\ldots,{i_{n}}\\}\subseteq V,$ the ideal
$\mathfrak{p}_{\sigma}:=(x_{i_{1}},x_{i_{2}},\ldots,x_{i_{n}})\subseteq k[V]$
denotes the prime ideal generated by the variables indexed by $\sigma$.
For a hypergraph $H=(V,B)$, we denote by $\tau(H):=\min_{b\in B}\\{|b|\\}.$
We study some properties of the cover ideal of $B$. The following results
show a relation between the coverability of a hypergraph $H$ and the failure
of the Containment problem.
###### Theorem 4.8.
Let $H=(V,B)$ be a hypergraph. If $H$ is not $d$-coverable then
$J_{H}^{(\tau(H))}\not\subseteq J_{H}^{d}.$
###### Proof.
We put $\tau:=\tau(H)$ and $w:=x_{1}\cdot x_{2}\cdots x_{n}$. In order to
prove the statement it is enough to show that $w\in J_{H}^{(\tau)}$ but
$w\notin J_{H}^{d}.$ For each $b\in B$ the ideal $\mathfrak{p}_{b}$ has height
$|b|\geq\tau$, so the monomial $w_{b}:=\prod_{x_{u}\in b}x_{u}$ lies in
$(\mathfrak{p}_{b})^{|b|}\subseteq(\mathfrak{p}_{b})^{\tau}$. Since $w_{b}$
divides $w$, we get $w\in\mathfrak{p}_{b}^{\tau}$ for each $b\in B$, and hence
$w\in J_{H}^{(\tau)}.$ By contradiction, assume $w\in J_{H}^{d}.$ Then there
exist $w_{1},\ldots,w_{d}\in J_{H}$ such that $w=w_{1}\cdots w_{d}.$ Setting
$U_{j}:=\\{x_{u}\in V\ |\ x_{u}\ \text{divides}\ w_{j}\\}$, the sets
$U_{1},\ldots,U_{d}$ form a partition of $V$. For each $b\in B$ and each $j$
we have $w_{j}\in\mathfrak{p}_{b}$, and therefore $U_{j}\cap b\neq\emptyset$
for $j=1,\ldots,d.$ Thus $U_{1},\ldots,U_{d}$ is a $d$-cover, contradicting
that $H$ is not $d$-coverable. ∎
Recall that an $m$-colouring of $(V,B)$ is called an $m$-bicolouring if the
vertices of each $b\in B$ are coloured with exactly two colours. A Steiner
Triple System $(V,B)$ admitting an $m$-bicolouring is $m$-bicolourable. Thus,
in a bicolouring of a Steiner Triple System $(V,B)$, every triple has two
elements in one colour class and one in another class, so there are no
monochromatic triples nor polychromatic triples (i.e. triples receiving three
colours). For a deep investigation of the colouring properties of Steiner
Triple Systems the reader can see [15].
As a consequence, we get a failure of the containment for the cover ideals
associated to Steiner Triple Systems $(V,B)$ of type $S(2,3,v).$
###### Proposition 4.9.
If $v>3$ and $S(2,3,v)=(V,B)$ is a Steiner Triple System, then
$J_{B}^{(3)}\not\subseteq J_{B}^{2}$.
###### Proof.
It is enough to show that $B$ is not 2-coverable. Assume by contradiction
$V=U_{1}\cup U_{2}.$ By definition, for each $\\{i,j,k\\}\in B$ we have
$\\{i,j,k\\}\cap U_{1}\neq\emptyset$ and $\\{i,j,k\\}\cap U_{2}\neq\emptyset$.
This means that every triple meets both colour classes, i.e., $S(2,3,v)$
admits a proper 2-colouring (indeed a 2-bicolouring), contradicting a
well-known fact about Steiner Triple Systems with $v>3$; see [50]. ∎
We end the paper showing the failure of the containment for the cover ideals
associated to Steiner Systems.
###### Proposition 4.10.
Let $\mathcal{S}=(V,B)$ be a Steiner System with parameters $S(t,t+a,v)$ where
$1\leq a\leq t-2$ and $v>(a+1)t$. Then, $J_{B}^{(t+a)}\not\subseteq
J_{B}^{t}.$
###### Proof.
Note that, from Theorem 4.8, it is enough to show that $B$ is not
$t$-coverable. Assume by contradiction there is a partition of $V$ into $t$
colour classes, $V=C_{1}\cup\cdots\cup C_{t}$, such that $b\cap
C_{j}\neq\emptyset$ for each $j=1,\ldots,t$ and each block $b\in B.$
We denote by $c_{j}$ the number of elements in $C_{j}$ for $j=1,\ldots,t$.
Note that $c_{j}\leq a+1$. Indeed, if $i_{1},i_{2},\ldots,i_{a+2}\in C_{j}$
are distinct elements, then a block containing
$i_{1},i_{2},\ldots,i_{a+2}$ (which exists since $a+2\leq t$) has at most
$(t+a)-(a+2)=t-2$ further elements and so cannot intersect all $t$ colour
classes. This implies $v=c_{1}+\cdots+c_{t}\leq(a+1)t$, contradicting
$v>(a+1)t$. ∎
The next example shows that Theorem 4.8 does not characterize the failure of
the containment.
###### Example 4.11.
Let $B$ denote the blocks of a Steiner quadruple system $SQS({8})=S(3,4,8)$ on
the vertex set $V=\\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6},x_{7},x_{8}\\}$,
$\begin{array}[]{rrl}B:=\\{&\\{x_{1},x_{2},x_{3},x_{4}\\},\\{x_{1},x_{2},x_{5},x_{6}\\},\\{x_{1},x_{2},x_{7},x_{8}\\},\\{x_{1},x_{3},x_{5},x_{7}\\},\\{x_{1},x_{3},x_{6},x_{8}\\},&\\\
&\\{x_{1},x_{4},x_{5},x_{8}\\},\\{x_{1},x_{4},x_{6},x_{7}\\},\\{x_{2},x_{3},x_{5},x_{8}\\},\\{x_{2},x_{3},x_{6},x_{7}\\},\\{x_{2},x_{4},x_{5},x_{7}\\},&\\\
&\\{x_{2},x_{4},x_{6},x_{8}\\},\\{x_{3},x_{4},x_{5},x_{6}\\},\\{x_{3},x_{4},x_{7},x_{8}\\},\\{x_{5},x_{6},x_{7},x_{8}\\}&\\}.\end{array}$
From Proposition 4.10, $B$ is not 3-coverable. Therefore Theorem 4.8 ensures
$J_{B}^{(4)}\not\subseteq J_{B}^{3}.$ However, one can check that, for
instance, $x_{1}x_{2}\cdots x_{7}\in J_{B}^{(3)}\setminus J_{B}^{2},$ so the
failure of the containment $J_{B}^{(3)}\subseteq J_{B}^{2}$ cannot be
deduced from Theorem 4.8.
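For cover ideals, both memberships in this example reduce to finite combinatorial checks: a squarefree monomial lies in $J_{B}^{(k)}=\bigcap_{b}\mathfrak{p}_{b}^{k}$ iff its support meets every block in at least $k$ vertices, and (being squarefree) it lies in $J_{B}^{2}$ iff its support contains two disjoint vertex covers of $B$. The sketch below (our own verification) checks the witness $x_{1}x_{2}\cdots x_{7}$:

```python
from itertools import combinations

# Blocks of the SQS(8) above, with x_i encoded as i.
blocks = [{1, 2, 3, 4}, {1, 2, 5, 6}, {1, 2, 7, 8}, {1, 3, 5, 7},
          {1, 3, 6, 8}, {1, 4, 5, 8}, {1, 4, 6, 7}, {2, 3, 5, 8},
          {2, 3, 6, 7}, {2, 4, 5, 7}, {2, 4, 6, 8}, {3, 4, 5, 6},
          {3, 4, 7, 8}, {5, 6, 7, 8}]

support = set(range(1, 8))   # support of w = x_1 x_2 ... x_7

def is_cover(S):
    # S is a vertex cover iff it meets every block
    return all(b & S for b in blocks)

# w in J^{(3)}: the degree of w in the variables of each block is >= 3
in_symbolic_3 = all(len(b & support) >= 3 for b in blocks)

# w in J^2 (w squarefree): support splits into two disjoint covers
in_ordinary_2 = any(is_cover(set(c)) and is_cover(support - set(c))
                    for r in range(1, len(support))
                    for c in combinations(support, r))

print(in_symbolic_3, in_ordinary_2)  # expect: True False
```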
## 5\. Conclusions
Several conjectures have been posed on the Containment problem, creating an
active area of current interests and ongoing investigations. In this paper, we
show that the Stable Harbourne Conjecture and the Stable Harbourne – Huneke
Conjecture hold for the defining ideal of a Complement of a Steiner
configuration of points in $\mathbb{P}^{n}_{k}$. Moreover, given a hypergraph
$H$, we also study the relation between its colourability and the failure of
the Containment problem for the cover ideal associated to $H$. We wish to
continue the study of Steiner configurations of points and their Complements,
since they are special subsets of star configurations whose Hilbert function
is the same as that of a set of generic points, while geometrically they are
far from being generic.
We end this section recalling some open questions that are still under
investigation and posing new ones.
We recall from [3] that, from a combinatorial point of view, two Steiner
systems having the same parameters could have very different properties, and
such differences affect the homological invariants. Using experiments with [1]
and [32], we ask:
###### Question 5.1.
Let $(V,B)$ be a Steiner system of type $S(t,n,v)$, and $X_{\mathcal{H},B}$
the associated Steiner configuration of points. Assume that the hyperplanes in
$\mathcal{H}$ are chosen generically. Do the Hilbert function and the graded
Betti numbers of $X_{\mathcal{H},B}$ only depend on $t,n,v$?
###### Question 5.2.
Let $(V,B)$ be a Steiner system of type $S(t,n,v)$, and $X_{\mathcal{H},B}$
the associated Steiner configuration of points. Assume that the hyperplanes in
$\mathcal{H}$ are chosen generically. Are the Hilbert function and the graded
Betti numbers of $X_{\mathcal{H},B}$ generic with respect to the Hilbert
function? (i.e. the same as a set of $|B|$ generic points in
$\mathbb{P}^{n}$?)
Given a hypergraph ${H}$, we also study the relation between its colourability
and the failure of the containment problem for the cover ideal associated to
$H$. We suggest the following:
###### Question 5.3.
Can different types of colourings of a hypergraph give different answers to
the Containment problem and related conjectures?
We also thank one of the referees for pointing out [42, 43], where the author
studies graph partitioning (fragmentation criteria), which has many fields of
application in engineering and the applied sciences, such as applied
chemistry and physics, computer science and automation, and electronics and
telecommunication. See [2, 21, 49] just to cite some of them.
###### Question 5.4.
Can different types of colourings of hypergraphs also give different answers
to fragmentation criteria?
###### Acknowledgments 5.5.
This paper represents the second part of a project started in April 2018 when
E. Guardo and L. Milazzo met together with the idea of making an
interdisciplinary project between their own fields of research, such as
Combinatorics and Commutative Algebra/Algebraic Geometry. L. Milazzo died in
2019 but all the authors recognize his contribution in Section 4. We thank M.
Gionfriddo and S. Milici for encouraging us to continue this project. We also
thank Zs. Tuza for his useful comments in the revised version of the paper.
Favacchio and Guardo’s work has been supported by the Università degli Studi
di Catania, ”Piano della Ricerca 2020/2022 Linea di intervento 2”. The first
three authors are members of GNSAGA of INdAM (Italy). The computer softwares
CoCoA [1] and Macaulay2 [32] were indispensable for all the computations we
made.
The authors thank all the referees for the useful suggestions/comments that
improved the first version of the paper.
## References
* [1] Abbott J., Bigatti A. M., Robbiano L. CoCoA: a system for doing Computations in Commutative Algebra. Available at http://cocoa.dima.unige.it
* [2] Balaban A. T., Solved and Unsolved Problems in Chemical Graph Theory, Ann. Discrete Math. 1993 55, 109–126.
* [3] Ballico E., Favacchio G., Guardo E., Milazzo L. Steiner systems and configurations of points. Des. Codes Cryptogr. 2020, https://doi.org/10.1007/s10623-020-00815-x.
* [4] Bauer T., Di Rocco S., Harbourne B., Kapustka M., Knutsen A., Syzdek W., Szemberg T. A primer on Seshadri Constants, In Interactions of Classical and Numerical Algebraic Geometry; Contemp. Math., 496; 2009; pp 33–70.
* [5] Bisui S., Grifo E., Há H. T., Nguyên T.T. Chudnovsky’s Conjecture and the stable Harbourne–Huneke containment Preprint 2020 https://arxiv.org/abs/2004.11213
* [6] Bisui S., Grifo E., Há H. T., Nguyên T.T. Demailly’s Conjecture and the Containment problem Preprint 2020 https://arxiv.org/pdf/2009.05022.pdf
* [7] Bocci C., Cooper S., Harbourne B. Containment results for ideals of various configurations of points in $\mathbb{P}^{N}$, J. Pure Appl. Algebra 2014, 218 (1), 65–75.
* [8] Bocci C., Harbourne B. Comparing powers and symbolic powers of ideals. J. Algebraic Geom. 2010, 19 (3), 399–417.
* [9] Bocci C., Harbourne B. The resurgence of ideal of points and the containment problem. Proc. Amer. Math. Soc. 2010, 138, 1175–1190.
* [10] Bujtas Cs., Tuza Zs., Voloshin V. Hypergraphs colouring. In Topics in chromatic graph theory, Chapter 11; L. W. Beineke, R. J. Wilson; Cambridge University press, 2015; pp 230–254.
* [11] Carlini E., Catalisano M. V., Guardo E., Van Tuyl A., Hadamard Star Configurations, Rocky Mountain J. Math. 2019,49 (2), 419–432.
* [12] Carlini E., Guardo E., Van Tuyl A. Star configurations on generic hypersurfaces, J. Algebra 2014, 407, 1–20.
* [13] Carlini E., Há H. T., Harbourne B., Van Tuyl A. Ideals of Powers and Powers of Ideals. In Lecture Notes of the Unione Matematica Italiana; Springer, Cham., 2020; pp. 1–161
* [14] Colbourn C.J., Dinitz J. H. Handbook of combinatorial designs. In Handbook of combinatorial designs, Second Edition, CRC press; 2006, pp. 1–984.
* [15] Colbourn C.J., Dinitz J. H., Rosa A. Bicoloring Steiner Triple Systems, Electron. J. Combin. 1999, 6, Article Number R25, 1–16.
* [16] Colbourn C.J., Rosa A. Triple systems. Oxford University Press, 1999.
* [17] Cooper S. M., Embree R. J., Há H. T., Hoefel A. H. Symbolic powers of monomial ideals. Proc. Edinb. Math. Soc.(2) 2017, 60 (1), 39–55.
* [18] Denkert A. Results on containments and resurgences, with a focus on ideals of points in the plane. PhD Thesis 2013.
* [19] DiPasquale M., Drabkin B. On Resurgence via Asymptotic Resurgence arXiv:2003.06980.
* [20] DiPasquale M., Francisco C., Mermin J., Schweig J. Asymptotic Resurgence via integral closures Trans. Amer. Math. Soc. 2019, 372, 6655–6676.
* [21] Diudea M. V., Topan M., Graovac A., Layer Matrices of Walk Degrees, J. Chem. Inf. Comput. Sci. 1994, 34, 1072–1078.
* [22] Dumnicki M., Harbourne B., Nagel U., Seceleanu A., Szemberg T., Tutaj-Gasińska H. Resurgences for ideals of special point configurations in $\mathbb{P}^{N}$ coming from hyperplane arrangements. J. Algebra 2015, 443, 383–394.
* [23] Dumnicki M., Szemberg T., Tutaj-Gasińska H. Counterexamples to the containment $I^{(3)}\subseteq I^{2}$ containment. J. Algebra 2013, 393, 24–29.
* [24] Ein L., Lazarsfeld R., Smith K.E. Uniform bounds and symbolic powers on smooth varieties. Invent. Math. 2001, 144, 241–252.
* [25] Favacchio G., Guardo E. The Minimal Free Resolution of Fat Almost Complete Intersections in $\mathbb{P}^{1}\times\mathbb{P}^{1}$. Canad. J. Math. 2017, 69 (6), 1274–1291.
* [26] Favacchio G., Guardo E. On the Betti numbers of three fat points in $\mathbb{P}^{1}\times\mathbb{P}^{1}$ , J. Korean Math. Soc. 2019, 56 (3), 751–766, doi.org/10.4134/JKMS.j180385
* [27] Fouli L., Mantero P., Xie Y. Chudnovsky’s conjecture for very general points in $\mathbb{P}^{n}$. J. Algebra 2018, 498, 211–227.
* [28] Francisco C., Há H. T., Van Tuyl A. Colorings of hypergraphs, perfect graphs, and associated primes of powers of monomial ideals, J. Algebra 2011, 331 (1), 224–242.
* [29] Geramita A., Harbourne B., Migliore J. Star configurations in $\mathbb{P}^{n}$. J. Algebra 2013, 376, 279–299.
* [30] Geramita A., Harbourne B., Migliore J., Nagel U. Matroid configurations and symbolic powers of their ideals. Trans. Amer. Math. Soc. 2017, 369 (10), 7049–7066.
* [31] Gionfriddo M., Milazzo L., Voloshin V. Hypergraphs and Designs In Nova Science Pub Inc, New York, NY, USA, 2015; pp. 1–169.
* [32] Grayson D.R, Stillman M. E. Macaulay 2, a software system for research in algebraic geometry.
* [33] Grifo E. A stable version of Harbourne’s Conjecture and the containment problem for space monomial curves, J. Pure Appl. Algebra 2020, 224 (12), Article 106435.
* [34] Grifo E., Huneke C., Mukundan V. Expected Resurgence and symbolic powers of Ideals, J. Lond. Math. Soc. 2020, 102, 453–469.
* [35] Guardo E., Harbourne B., Van Tuyl A. Asymptotic resurgence for ideals of positive dimensional subschemes in projective space, Adv. Math. 2013, 246, 114–127.
* [36] Guardo E., Harbourne B., Van Tuyl A. Symbolic powers versus regular powers of ideals of general points in $\mathbb{P}^{1}\times\mathbb{P}^{1}$, Canad. J. Math. 2013, 65 (4), 823–842.
* [37] Guardo E., Harbourne B., Van Tuyl A. Fat lines in $\mathbb{P}^{3}$: powers versus symbolic powers, J. Algebra 2013, 390, 221–230.
* [38] Há H.T., Van Tuyl A. Monomial ideals, edge ideals of hypergraphs, and their minimal graded free resolutions, J. Algebraic Combin. 2008, 27 (2), 215–245.
* [39] Harbourne B., Huneke C. Are symbolic powers highly evolved? J. Ramanujan Math. Soc. 2013, 28 (3), 311–330.
* [40] Harbourne B., Kettinger J., Zimmitti F. Extreme values of the resurgence for homogeneous ideals in polynomial rings, https://arxiv.org/pdf/2005.05282.pdf.
* [41] Hochster M., Huneke C. Comparison of Symbolic and ordinary powers of ideals, Invent. Math. 2002, 147, 349–369.
* [42] Jäntschi L. Graph Theory. 1. Fragmentation of Structural Graphs. Leonardo EI J. Pract. Technol. 2002, 1(1), 19–36.
* [43] Jäntschi L. Graph Theory. 2. Vertex Descriptors and Graph Coloring. Leonardo EI J. Pract. Technol. 2002, 1(1), 37–52.
* [44] Johnson R. Containing Symbolic powers in Regular Rings, Comm. Algebra 2014, 42, 3552–3557.
* [45] Lin K.N., McCullough J. Hypergraphs and regularity of square-free monomial ideals. Internat. J. Algebra Comput. 2013, 23 (7), 1573–1590.
* [46] Ma L., Schwede K. Perfectoid multiplier test ideals in regular rings and bounds on symbolic powers. Invent. Math. 2018, 214, 913–955.
* [47] Mantero P. The structure and free resolutions of the symbolic powers of star configurations of hypersurfaces. 2019 arXiv:1907.08172.
* [48] Nagata M. Local rings. Interscience, 1962; pp. 1–234.
* [49] Randić M., Design of Molecules with Desired Properties (A Molecular Similarity Approach to Property Optimization), In Concept and Applications of Molecular Similarity, Ed. A. M. Johnson, G. M. Maggiora, John Wiley & Sons, 5 , 1990, pp. 77–145.
* [50] Rosa A. Steiner triple systems and their chromatic number, Acta Fac. Rer. Nat. Univer. Comen. Math. 1970, 24, 159–174.
* [51] Swanson I. Linear equivalence of ideal topologies Math. Z. 2000, 234, 755–775.
* [52] Swanson I., Huneke C. Integral Closure of Ideals, Rings and Modules, In London Math. Soc. Lecture Note Ser. 336, Cambridge University Press, 2006; pp. 1–463.
* [53] Waldschmidt M. Propriétés arithmétiques de fonctions de plusieurs variables. II. In Séminaire P. Lelong (Analyse), 1975/76, Lecture Notes Math. 578, Springer, 1977; pp. 108–135.
* [54] Zariski O. A fundamental lemma from the theory of holomorphic functions on an algebraic variety, Ann. Mat. Pura Appl. 1949, (4),29, 187-–198.
# HarDNet-MSEG: A Simple Encoder-Decoder Polyp Segmentation Neural Network
that Achieves over 0.9 Mean Dice and 86 FPS
Chien-Hsiang Huang Hung-Yu Wu Youn-Long Lin
###### Abstract
We propose a new convolution neural network called HarDNet-MSEG for polyp
segmentation. It achieves the SOTA in both accuracy and inference speed on
five popular datasets (Kvasir-SEG[22], CVC-ColonDB[19], EndoScene[31], ETIS-
Larib Polyp DB[30] and CVC-Clinic DB[5]). For Kvasir-SEG, HarDNet-MSEG
delivers 0.904 mean Dice running at 86.7 FPS on a GeForce RTX 2080 Ti GPU
(shown in Figure 1). It consists of a backbone and a decoder. The backbone
is a low memory traffic CNN called HarDNet68[7], which has been successfully
applied to various CV tasks including image classification, object detection,
multi-object tracking, and semantic segmentation. The decoder part is
inspired by the Cascaded Partial Decoder[32], known for fast and accurate
salient object detection. We have evaluated HarDNet-MSEG using those five
popular datasets. The code and all experiment details are available at Github.
https://github.com/james128333/HarDNet-MSEG
Figure 1: Mean Dice accuracy vs frame rate running on a GeForce RTX 2080 Ti
GPU as reported in [20](blue) and [13](orange). HarDNet-MSEG is faster and
more accurate than the SOTA (U-Net[ResNet34] and PraNet).
## 1 Introduction
The incidence of colorectal cancer (CRC) has been ranked third in the world
for many years, so preventing CRC is an important global issue. Studies have
pointed out that 95% of CRC cases arise from colorectal adenomatous polyps,
and resecting these polyps can greatly reduce the incidence of CRC. It is
therefore very important to have colonoscopies on a regular basis, together
with early intervention and treatment.
At present, the best way to prevent CRC is to take regular colonoscopies and
undergo polyp resection when needed. With the emergence and popularization of
painless colonoscopy, public acceptance of the examination is increasing.
However, polyp detection used to be performed manually by endoscopists, which
is a time-consuming task that depends heavily on the doctor's experience and
ability. Early segmentation methods[2, 19, 4] extract features such as color
and patterns and then use a classifier to distinguish polyps from their
surroundings, but they still suffer from a high miss rate. Because each polyp
differs in position, size, and color, segmenting polyps automatically and
accurately is very difficult.
In recent years, CNNs have grown rapidly, with breakthroughs across a wide
range of imaging tasks. Polyp segmentation has also benefited[6, 1]. For this
task, the FCN[27, 6, 1], U-Net[28, 29], U-Net++[24, 36], DoubleU-Net[21], and
ResUNet[23, 33, 20] families achieve good results compared with the early
methods. Most polyps can be segmented well, but problems remain, such as
imprecise boundaries, missed small polyps, and fragmented predictions in
large regions. Moreover, the inference time of these networks is usually
long, and training is relatively time-consuming.
We propose HarDNet-MSEG based on the backbone of HarDNet68[7]. With a simple
encoder-decoder[3] architecture, it achieves excellent accuracy and efficient
inference time for related benchmarks such as Kvasir-SEG, CVC-ColonDB, etc.
## 2 Related work
Since the emergence of LeNet[25] in 1998, CNN has grown rapidly and has been
used in different computer vision fields. Among them, the task of image
segmentation is widely used in medical imaging.
In 2015, Long et al. first introduced fully convolutional networks (FCN)[27]
for the task of image segmentation: an end-to-end trained convolutional
neural network that classifies each pixel in an image. Since then,
convolutional neural networks have flourished in the field of image
segmentation. In the same year, U-Net[29], introduced at MICCAI, became
widely used in medical imaging. Through a fairly symmetrical U-shaped
encoder-decoder architecture, combined with skip connections at different
scales to integrate deep and shallow features, it has become a baseline
network architecture for most medical semantic segmentation tasks. Later,
U-Net++[24, 36] extended the original U-shaped architecture with more skip
connections and convolutions to achieve deep layer aggregation[34],
mitigating the loss of edge information and small objects caused by
down-sampling and up-sampling in deep networks.
In recent years, using a better CNN backbone, or introducing additional
modules such as spatial pyramid pooling[16] and attention modules[8, 15], has
achieved very good results in medical semantic segmentation. Examples of the
former include ResUNet[33, 20], ResUNet++[23], and DoubleU-Net[21]:
integrating a stronger CNN backbone into a U-shaped structure gives the
network stronger recognition capability, a larger receptive field, and
multi-scale information integration. The second approach inserts additional
modules: DoubleU-Net[21] uses ASPP[9] between the encoder and the decoder,
which helps handle different object scales and improves accuracy, while
PraNet[13] adds an RFB[26] module to the skip connections to capture more
visual information from features at different scales. Attention has also been
widely adopted in computer vision, especially for semantic segmentation,
which requires detailed edge information at the pixel level; examples include
PraNet[13], PolypSeg[35], and ABC-Net[14]. After adding different context
modules, they all obtain good results in medical image segmentation. Context
modules such as the Spatial Attention Module[10] and Channel Attention
Module[10] reduce inference speed, but they are very effective at improving
accuracy and making boundary segmentation more precise.
Our proposed HarDNet-MSEG uses HarDNet[7] as the backbone in an
encoder-decoder architecture. It achieves state-of-the-art accuracy on
CVC-ColonDB, EndoScene, ETIS-Larib Polyp DB, CVC-ClinicDB, and Kvasir-SEG
while maintaining efficient inference speed. In addition, we tried adding
extra modules such as RFB, ASPP, and attention to our network architecture to
further improve accuracy.
Figure 2: HarDNet-MSEG overview. The encoder part consists of HarDNet68, and
the decoder part uses a partial decoder. Figure 3: HarDNet Block overview.
## 3 HarDNet-MSEG
Figure 2 depicts the architecture of our proposed HarDNet-MSEG. It consists of
an encoder backbone and a decoder.
### 3.1 Backbone : HarDNet
HarDNet[7] improves the original dense block of DenseNet[18], as illustrated
in Figure 3. Considering the impact of memory traffic on model design, it
reduces shortcuts to increase inference speed, while widening the channels of
the key layers to make up for the loss of accuracy. It also uses a small
number of Conv1x1 layers to increase computational density.
Through this design, it achieves a 30% inference-time reduction compared with
DenseNet[18] and ResNet[17] while reaching higher accuracy on ImageNet[12].
Moreover, FC-HarDNet70[7] reaches the state of the art in image segmentation
on the Cityscapes dataset[11]. We therefore use HarDNet68 as the backbone for
colorectal polyp semantic segmentation.
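The key to HarDNet's low memory traffic is its sparse, "harmonic" shortcut pattern. The following minimal sketch reflects our reading of the connection rule in [7] (the actual HarDNet68 block also applies per-layer channel-width multipliers, omitted here): layer k receives input from layer k - 2^n whenever 2^n divides k, so the number of shortcuts grows only logarithmically instead of linearly as in DenseNet.

```python
def hard_links(layer):
    """Earlier layers feeding `layer` under the harmonic connection rule
    of HarDNet[7] as we read it: layer k takes input from layer k - 2**n
    whenever 2**n divides k. DenseNet would instead connect a layer to ALL
    earlier layers, so this pattern sharply reduces memory traffic."""
    links, n = [], 0
    while 2 ** n <= layer:
        if layer % (2 ** n) == 0:
            links.append(layer - 2 ** n)
        n += 1
    return links

# Example: layer 4 reads from layers 3, 2, and 0; layer 6 from 5 and 4.
```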
### 3.2 Cascaded Partial Decoder
Many well-known medical image segmentation networks are modified from U-Net,
and our design initially went in this direction as well. However, to balance
inference time and performance, we do not use HarDBlocks (HarDBlk) in the
decoder, unlike FC-HarDNet. We instead follow the Cascaded Partial
Decoder[32], which observed that shallow features have high resolution and
consume computing resources, while deep features can represent the spatial
details of shallow features relatively well. We therefore discard the shallow
features and spend more computation on the deeper features. At the same time,
feature maps at different scales are aggregated by adding appropriate
convolutions and skip connections.
Figure 4: RFB Module overview.
#### 3.2.1 RFB Module
Figure 4 shows a Receptive Field Block[26]. It strengthens the deep features
learned by a lightweight CNN backbone: multiple branches with different
kernel sizes and dilated convolution layers generate features with different
receptive fields, and a 1x1 convolution then merges them into the final
representation. Following [32], we add this module to the skip connections to
enlarge the receptive field of the feature maps at each resolution.
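A minimal PyTorch sketch of such an RFB-style block follows; the branch count, widths, and dilation rates here are illustrative placeholders, not the exact configuration used in HarDNet-MSEG or [26].

```python
import torch
import torch.nn as nn

class RFBLite(nn.Module):
    """Sketch of an RFB-style block: parallel branches with different
    kernel sizes and dilation rates, merged by a 1x1 convolution.
    Paddings are chosen so every branch preserves spatial size."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch0 = nn.Conv2d(in_ch, out_ch, 1)
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1),
            nn.Conv2d(out_ch, out_ch, 3, padding=3, dilation=3))
        self.branch2 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1),
            nn.Conv2d(out_ch, out_ch, 5, padding=2),
            nn.Conv2d(out_ch, out_ch, 3, padding=5, dilation=5))
        self.merge = nn.Conv2d(3 * out_ch, out_ch, 1)

    def forward(self, x):
        feats = [self.branch0(x), self.branch1(x), self.branch2(x)]
        return self.merge(torch.cat(feats, dim=1))
```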
#### 3.2.2 Dense Aggregation
We perform aggregation by element-wise multiplication, as shown in Figure 5.
After up-sampling to a common scale, each feature map is multiplied with the
input feature map of the corresponding scale.
Figure 5: Aggregation Module overview.
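A hedged sketch of this aggregation step; the channel-matching convolutions used in the full network are omitted.

```python
import torch
import torch.nn.functional as F

def aggregate(deep, shallow):
    """Sketch of the aggregation in Figure 5: up-sample the deeper
    (lower-resolution) feature map to the shallower map's scale, then
    combine the two by element-wise multiplication."""
    up = F.interpolate(deep, size=shallow.shape[2:],
                       mode="bilinear", align_corners=False)
    return up * shallow
```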
## 4 Experiments
We used the training data from [13] and [20] because both works have
excellent performance in polyp segmentation. Their training data and training
methods differ, so to reduce confounding factors we follow each article's
training method in turn and then compare accuracy and inference speed with
the other models.
Method | mIoU | mDice | F2-score | Precision | Recall | Overall Acc. | FPS
---|---|---|---|---|---|---|---
U-Net | 0.471 | 0.597 | 0.598 | 0.672 | 0.617 | 0.894 | 11
ResUNet | 0.572 | 0.690 | 0.699 | 0.745 | 0.725 | 0.917 | 15
ResUNet++ | 0.613 | 0.714 | 0.720 | 0.784 | 0.742 | 0.917 | 7
FCN8 | 0.737 | 0.831 | 0.825 | 0.882 | 0.835 | 0.952 | 25
HRNet | 0.759 | 0.845 | 0.847 | 0.878 | 0.859 | 0.952 | 12
DoubleUNet | 0.733 | 0.813 | 0.820 | 0.861 | 0.840 | 0.949 | 7.5
PSPNet | 0.744 | 0.841 | 0.831 | 0.890 | 0.836 | 0.953 | 17
DeepLabv3+[ResNet50] | 0.776 | 0.857 | 0.855 | 0.891 | 0.8616 | 0.961 | 28
DeepLabv3+[ResNet101] | 0.786 | 0.864 | 0.857 | 0.906 | 0.859 | 0.961 | 17
U-Net[ResNet34] | 0.810 | 0.876 | 0.862 | 0.944 | 0.860 | 0.968 | 35
HarDNet-MSEG | 0.848 | 0.904 | 0.915 | 0.907 | 0.923 | 0.969 | 86.7
Table 1: Quantitative results on the Kvasir dataset (training/testing split:
880/120), showing different metrics and inference speed evaluated on a
GeForce RTX 2080 Ti GPU. The other models' scores are taken from [20].
### 4.1 Dataset
We use the datasets proposed in the two papers mentioned earlier, namely
Kvasir-SEG, CVC-ColonDB, EndoScene, ETIS-Larib Polyp DB, and CVC-Clinic DB,
and make a detailed comparison with other SOTA models on these datasets.
### 4.2 Training setting and policy
The two articles split the training data differently, so we ran a separate
experiment for each training setting; the details are explained below.
For [20], 880 images of Kvasir-SEG are used for training and the other 120
images for testing, with augmentations such as random rotation, horizontal
flip, and vertical flip. Our training input size is 512x512. We train our
model with the SGD optimizer for 100 epochs with the learning rate set to
1e-2. The results compared with [20] are in Table 1: HarDNet-MSEG achieves
the best accuracy on most metrics, and its inference speed is much faster
than the others.
In [13], 1450 training images without any augmentation are used, comprising
900 images from Kvasir-SEG and 550 images from CVC-ClinicDB, and the testing
set consists of the five datasets mentioned above. Our training input size is
312x312. We train our model with the Adam optimizer for 100 epochs with the
learning rate set to 1e-4. The quantitative results on each of the five
datasets are shown in Table 2 (Kvasir-SEG) and Table 3 (ETIS, CVC-ClinicDB,
CVC-ColonDB, and EndoScene). We achieve the best mean Dice and mIoU on every
dataset, with the fastest inference speed (88 FPS).
Method | mDice | mIoU | wfm | Sm | MAE | maxEm | FPS
---|---|---|---|---|---|---|---
U-Net | 0.818 | 0.746 | 0.794 | 0.858 | 0.055 | 0.893 | 53
U-Net++ | 0.821 | 0.743 | 0.808 | 0.862 | 0.048 | 0.910 | 25
ResUNet-mod | 0.791 | n/a | n/a | n/a | n/a | n/a | n/a
ResUNet++ | 0.813 | 0.793 | n/a | n/a | n/a | n/a | n/a
SFA | 0.723 | 0.611 | 0.67 | 0.782 | 0.075 | 0.849 | 40
PraNet | 0.898 | 0.840 | 0.885 | 0.915 | 0.030 | 0.948 | 66
HarDNet-MSEG | 0.912 | 0.857 | 0.903 | 0.923 | 0.025 | 0.958 | 88
Table 2: Quantitative results on Kvasir, compared with the SOTA, using the
same training script as the released code of PraNet. Inference speed is
measured at 312x312 resolution on a GeForce RTX 2080 Ti GPU.
| ClinicDB | ColonDB | ETIS | CVC-T
---|---|---|---|---
| mDice | mIoU | mDice | mIoU | mDice | mIoU | mDice | mIoU
U-Net | 0.823 | 0.755 | 0.512 | 0.444 | 0.398 | 0.335 | 0.71 | 0.627
U-Net++ | 0.794 | 0.729 | 0.483 | 0.410 | 0.401 | 0.344 | 0.707 | 0.624
ResUNet-mod | 0.779 | n/a | n/a | n/a | n/a | n/a | n/a | n/a
ResUNet++ | 0.796 | 0.796 | n/a | n/a | n/a | n/a | n/a | n/a
SFA | 0.700 | 0.607 | 0.469 | 0.347 | 0.297 | 0.217 | 0.467 | 0.329
PraNet | 0.899 | 0.849 | 0.709 | 0.640 | 0.628 | 0.567 | 0.871 | 0.797
HarDNet-MSEG | 0.932 | 0.882 | 0.731 | 0.660 | 0.677 | 0.613 | 0.887 | 0.821
Table 3: More results on CVC-ClinicDB, CVC-ColonDB, ETIS, and CVC-T, compared
with the SOTA. CVC-T is the testing set of EndoScene.
### 4.3 Metrics
Mean Dice $=\frac{2\cdot tp}{2\cdot tp+fp+fn}$, mIoU $=\frac{tp}{tp+fp+fn}$,
Recall $=\frac{tp}{tp+fn}$, Precision $=\frac{tp}{tp+fp}$,
F2 $=\frac{5\cdot p\cdot r}{4\cdot p+r}$, and Overall Acc. $=\frac{tp+tn}{tp+tn+fp+fn}$,
where $tp$, $tn$, $fp$, and $fn$ are the true/false positive/negative pixel
counts, and $p$ and $r$ denote precision and recall.
We mainly use the metrics adopted by Kvasir's official website as the basis
for comparison, namely mean Dice and mean IoU, but we also report the other
metrics used in the two articles so that our advantages can be shown more
clearly.
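As a sanity check, the pixel-count formulas above can be computed directly from a pair of flattened binary masks; this is a minimal illustration, not the evaluation script we used.

```python
def segmentation_scores(pred, target):
    """Mean Dice, mIoU, precision, recall, and F2 for two flat binary
    masks (sequences of 0/1), following the formulas above."""
    tp = sum(1 for p, t in zip(pred, target) if p and t)
    fp = sum(1 for p, t in zip(pred, target) if p and not t)
    fn = sum(1 for p, t in zip(pred, target) if not p and t)
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f2 = 5 * precision * recall / (4 * precision + recall)
    return dice, iou, precision, recall, f2

# Example: one shared positive pixel, one false positive, one miss:
# dice = 2/4 = 0.5, iou = 1/3, f2 = 0.5.
```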
### 4.4 Training and Inference Environment setting:
To demonstrate our advantage in speed, we compare with other well-known
models. The platform we use for evaluation is listed below:
Intel i9-9900k CPU, GeForce RTX 2080 Ti GPU, PyTorch 1.6, and CUDA 10.2
## 5 Conclusion
HarDNet-MSEG achieves the SOTA on all five challenging datasets. It is the
only network achieving over 0.90 mean Dice on Kvasir-SEG (0.912 in the
comparison with [13] and 0.904 in the comparison with [20]), and it is 1.3
times faster than PraNet and more than 2 times faster than the other models.
We achieve this with a simple encoder-decoder architecture without any of the
attention modules used in [13] and [14]. Figure 6 shows some inference
results on Kvasir-SEG: our model produces better boundaries and more accurate
predictions.
Again, this shows that HarDNet[7] is an effective and efficient backbone not
only for classification and detection but also for medical image
segmentation. We hope this study helps push the frontier of medical imaging
and contributes to the application of CNNs in this field.
Figure 6: Inference results of Kvasir-SEG.
## Acknowledgements
This research is supported in part by a grant from the Ministry of Science
and Technology (MOST) of Taiwan. We thank the National Center for
High-performance Computing (NCHC) for providing computational and storage
resources; without them this research would not have been possible. We would
also like to thank Mr. Ping Chao for many fruitful discussions.
## References
* [1] Mojtaba Akbari, Majid Mohrekesh, Ebrahim Nasr-Esfahani, SM Reza Soroushmehr, Nader Karimi, Shadrokh Samavi, and Kayvan Najarian. Polyp segmentation in colonoscopy images using fully convolutional network. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 69–72. IEEE, 2018.
* [2] Mamonov A.V., Figueiredo I.N., Figueiredo P.N., and Tsai Y.H.R. Automated polyp detection in colon capsule endoscopy. IEEE TMI, 33(7):1488–1502, 2014.
* [3] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence, 39(12):2481–2495, 2017.
* [4] Seung-Hwan Bae and Kuk-Jin Yoon. Polyp detection via imbalanced learning and discriminative feature learning. IEEE TMI 34(11), 2379–2393, 2015.
* [5] Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Debora Gil, Cristina Rodríguez, and Fernando Vilariño. Wm-dova maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Computerized Medical Imaging and Graphics, 43:99–111, 2015.
* [6] Patrick Brandao, Evangelos Mazomenos, Gastone Ciuti, Renato Caliò, Federico Bianchi, Arianna Menciassi, Paolo Dario, Anastasios Koulaouzidis, Alberto Arezzo, and Danail Stoyanov. Fully convolutional neural networks for polyp segmentation in colonoscopy. In Medical Imaging 2017: Computer-Aided Diagnosis, volume 10134, page 101340F. International Society for Optics and Photonics, 2017.
* [7] Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, and Youn-Long Lin. Hardnet: A low memory traffic network. In Proceedings of the IEEE International Conference on Computer Vision, pages 3552–3561, 2019.
* [8] Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, and Alan L Yuille. Attention to scale: Scale-aware semantic image segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3640–3649, 2016.
* [9] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pages 801–818, 2018.
* [10] Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. Sca-cnn: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5659–5667, 2017.
* [11] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213–3223, 2016.
* [12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* [13] Deng-Ping Fan, Ge-Peng Ji, Tao Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, and Ling Shao. Pranet: Parallel reverse attention network for polyp segmentation. MICCAI, 2020.
* [14] Yuqi Fang, Delong Zhu, Jianhua Yao, Yixuan Yuan, and Kai-yu Tong. Abc-net: Area-boundary constraint network with dynamical feature selection for colorectal polyp segmentation. IEEE Sensors Journal, 2020.
* [15] Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3146–3154, 2019.
* [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE transactions on pattern analysis and machine intelligence, 37(9):1904–1916, 2015.
* [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [18] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
* [19] Tajbakhsh N., Gurudu S.R., and Liang J. Automated polyp detection in colonoscopy videos using shape and context information. IEEE TMI, 35(2):630–644, 2015.
* [20] Debesh Jha, Sharib Ali, Håvard D Johansen, Dag D Johansen, Jens Rittscher, Michael A Riegler, and Pål Halvorsen. Real-time polyp detection, localisation and segmentation in colonoscopy using deep learning. arXiv preprint arXiv:2011.07631, 2020.
* [21] Debesh Jha, Michael A Riegler, Dag Johansen, Pål Halvorsen, and Håvard D Johansen. Doubleu-net: A deep convolutional neural network for medical image segmentation. arXiv preprint arXiv:2006.04868, 2020.
* [22] Debesh Jha, Pia H Smedsrud, Michael A Riegler, Pål Halvorsen, Thomas de Lange, Dag Johansen, and Håvard D Johansen. Kvasir-seg: A segmented polyp dataset. In International Conference on Multimedia Modeling, pages 451–462. Springer, 2020.
* [23] Debesh Jha, Pia H Smedsrud, Michael A Riegler, Dag Johansen, Thomas De Lange, Pål Halvorsen, and Håvard D Johansen. Resunet++: An advanced architecture for medical image segmentation. In 2019 IEEE International Symposium on Multimedia (ISM), pages 225–2255. IEEE, 2019.
* [24] Nicolas Boutry Le Duy Huynh. A u-net++ with pre-trained efficientnet backbone for segmentation of diseases and artifacts in endoscopy images and videos. ISBI 2020, 2020.
* [25] Yann LeCun et al. Lenet-5, convolutional neural networks. URL: http://yann.lecun.com/exdb/lenet, 20(5):14, 2015.
* [26] Songtao Liu, Di Huang, et al. Receptive field block net for accurate and fast object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 385–400, 2018.
* [27] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431–3440, 2015.
* [28] Ahmed Mohammed, Sule Yildirim, Ivar Farup, Marius Pedersen, and Øistein Hovde. Y-net: A deep convolutional neural network for polyp detection. arXiv preprint arXiv:1806.01907, 2018.
* [29] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
* [30] Juan Silva, Aymeric Histace, Olivier Romain, Xavier Dray, and Bertrand Granado. Toward embedded detection of polyps in wce images for early diagnosis of colorectal cancer. International Journal of Computer Assisted Radiology and Surgery, 9(2):283–293, 2014.
* [31] David Vázquez, Jorge Bernal, F Javier Sánchez, Gloria Fernández-Esparrach, Antonio M López, Adriana Romero, Michal Drozdzal, and Aaron Courville. A benchmark for endoluminal scene segmentation of colonoscopy images. Journal of healthcare engineering, 2017, 2017.
* [32] Zhe Wu, Li Su, and Qingming Huang. Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3907–3916, 2019.
* [33] Xiaofei Yang, Xutao Li, Yunming Ye, Raymond YK Lau, Xiaofeng Zhang, and Xiaohui Huang. Road detection and centerline extraction via deep recurrent convolutional neural network u-net. IEEE Transactions on Geoscience and Remote Sensing, 57(9):7209–7220, 2019.
* [34] Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. Deep layer aggregation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2403–2412, 2018.
* [35] Jiafu Zhong, Wei Wang, Huisi Wu, Zhenkun Wen, and Jing Qin. Polypseg: An efficient context-aware network for polyp segmentation from colonoscopy videos. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 285–294. Springer, 2020.
* [36] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3–11. Springer, 2018.
###### Abstract
This paper reviews the theoretical and practical principles of the broadcast
approach to communication over state-dependent channels and networks in which
the transmitters have access to only the probabilistic description of the
time-varying states while remaining oblivious to their instantaneous
realizations. When the temporal variations are frequent enough, an effective
long-term strategy is adapting the transmission strategies to the system’s
ergodic behavior. However, when the variations are infrequent, their temporal
average can deviate significantly from the channel’s ergodic mode, rendering a
lack of instantaneous performance guarantees. To circumvent a lack of short-
term guarantees, the broadcast approach provides principles for designing
transmission schemes that benefit from both short- and long-term performance
guarantees. This paper provides an overview of how to apply the broadcast
approach to various channels and network models under various operational
constraints.
###### keywords:
Broadcast, broadcast channel, channel state information, degradedness,
interference channel, multiple-access channel, networks, relay, slow-fading,
superposition coding.
The Broadcast Approach in Communication Networks
Ali Tajer, Avi Steiner, Shlomo Shamai (Shitz)
###### Contents
1. 1 Motivation and Overview
1. 1.1 What is the Broadcast Approach?
2. 1.2 Degradedness & Superposition Coding
3. 1.3 Application to Multimedia Communication
2. 2 Variable-to-fixed Channel Coding
1. 2.1 Broadcast Approach in Wireless Channels
2. 2.2 Relevance to the Broadcast Channel
3. 2.3 The SISO Broadcast Approach - Preliminaries
4. 2.4 The MIMO Broadcast Approach
1. 2.4.1 Weak Supermajorization
2. 2.4.2 Relation to Capacity
3. 2.4.3 The MIMO Broadcast Approach Derivation
4. 2.4.4 Degraded Message Sets
5. 2.5 On Queuing and Multilayer Coding
1. 2.5.1 Queue Model – Zero-padding Queue
2. 2.5.2 Delay Bounds for a Finite Level Code Layering
3. 2.5.3 Delay bounds for Continuum Broadcasting
6. 2.6 Delay Constraints
1. 2.6.1 Mixed Delay Constraints
2. 2.6.2 Broadcasting with Mixed Delay Constraints
3. 2.6.3 Parallel MIMO Two-state Fading Channel
4. 2.6.4 Capacity of Degraded Gaussian Broadcast Product Channels
5. 2.6.5 Extended Degraded Gaussian Broadcast Product Channels
6. 2.6.6 Broadcast Encoding Scheme
7. 2.7 Broadcast Approach via Dirty Paper Coding
3. 3 The Multiple Access Channel
1. 3.1 Overview
2. 3.2 Network Model
1. 3.2.1 Discrete Channel Model
2. 3.2.2 Continuous Channel Model
3. 3.3 Degradedness and Optimal Rate Splitting
4. 3.4 MAC without CSIT – Continuous Channels
5. 3.5 MAC without CSIT – Two-state Channels: Adapting Streams to the Single-user Channels
6. 3.6 MAC without CSIT – Two-state Channels: State-dependent Layering
7. 3.7 MAC without CSIT – Multi-state Channels: State-dependent Layering
8. 3.8 MAC with Local CSIT – Two-state Channels: Fixed Layering
9. 3.9 MAC with Local CSIT – Two-State Channels: State-dependent Layering
10. 3.10 MAC with Local CSIT – Multi-state Channels: State-dependent Layering
4. 4 The Interference Channel
1. 4.1 Overview
2. 4.2 Broadcast Approach in the Interference Channel – Preliminaries
3. 4.3 Two-user Interference Channel without CSIT
1. 4.3.1 Successive Decoding: Two-state Channel
2. 4.3.2 Successive Decoding: $\ell$-state Channel
3. 4.3.3 Average Achievable Rate Region
4. 4.3.4 Sum-rate Gap Analysis
4. 4.4 $N$-user Interference Channel without CSIT
5. 4.5 Two-user Interference Channel with Partial CSIT
1. 4.5.1 Two-user Interference Channel with Partial CSIT – Scenario 1
2. 4.5.2 Two-user Interference Channel with Partial CSIT – Scenario 2
5. 5 Relay Channels
1. 5.1 Overview
2. 5.2 A Two-Hop Network
1. 5.2.1 Upper Bounds
2. 5.2.2 DF Strategies
3. 5.2.3 Continuous Broadcasting DF Strategies
4. 5.2.4 AF Relaying
5. 5.2.5 AQF Relay and Continuum Broadcasting
3. 5.3 Cooperation Techniques of Two Co-located Users
1. 5.3.1 Lower and Upper Bounds
2. 5.3.2 Naive AF Cooperation
3. 5.3.3 AF with Separate Preprocessing
4. 5.3.4 Multi-Session AF with Separate Preprocessing
5. 5.3.5 Multi-Session Wyner-Ziv CF
4. 5.4 Transmit Cooperation Techniques
1. 5.4.1 Single-layer Sequential Decode-and-Forward (SDF)
2. 5.4.2 Continuous Broadcasting
3. 5.4.3 Two Layer SDF - Successive Decoding
5. 5.5 Diamond Channel
1. 5.5.1 Decode-and-Forward
2. 5.5.2 Amplify-and-Forward
6. 5.6 Multi-relay Networks
1. 5.6.1 Oblivious Relays
2. 5.6.2 Oblivious Agents
7. 5.7 Occasionally Available Relays
6. 6 Communications Networks
1. 6.1 Overview
2. 6.2 Multi-User MAC Broadcasting with Linear Detection
1. 6.2.1 Channel Model
2. 6.2.2 Strongest Users Detection - Overview and Bounds
3. 6.2.3 Broadcast Approach with Strongest Users Detection - (NO SIC)
4. 6.2.4 SIC Broadcast Approach Upper Bound
5. 6.2.5 Broadcast Approach with Iterative SIC
3. 6.3 The Broadcast Approach for Source-Channel Coding
1. 6.3.1 SR with Finite Layer Coding
2. 6.3.2 The Continuous SR-Broadcasting
4. 6.4 The Information Bottleneck Channel
1. 6.4.1 Uncertainty of Bottleneck Capacity
5. 6.5 Transmitters with Energy Harvesting
1. 6.5.1 Optimal Power Allocation Densities
2. 6.5.2 Optimal Power Allocation over Time
3. 6.5.3 Grouping the Constraints
4. 6.5.4 Dominant Constraints
5. 6.5.5 Optimality of Algorithm 1
7. 7 Outlook
8. A Constants of Theorem 3
9. B Corner Points in Figure 16
## 1 Motivation and Overview
### 1.1 What is the Broadcast Approach?
The information- and communication-theoretic models of a communication
channel are generally specified by the probabilistic description of the
relationship between the channel's input and output. The output, in turn,
depends on the channel input and the channel's state process. The channel's
probabilistic description changes over time in various domains, rendering a
time-varying channel state process; examples include mobile wireless
communications, storage systems, and digital fingerprinting, all of which
have time-varying communication media. Reliable communication generally
necessitates transmitting an encoded message over multiple channel uses.
Therefore, temporal fluctuations in channel states can cause a significant
impediment to sustaining reliable communications. When channel states are
known to the transmitters, the encoders can be guided to adjust the
transmission rates in response to the changes in the channel’s actual states.
When a transmitter is informed of the channel state (e.g., via side
information or feedback), it can adopt variable-length channel coding, the
fundamental performance limits of which are well-investigated [Burnashev;
Tchamkerten; Shayevitz; PPV:ISIT2010; Tyagi].
While desirable, informing the transmitters of the time-varying state process
can be practically prohibitive in a wide range of existing or emerging
communications technologies. In such circumstances, while the encoders cannot
adapt their transmissions to channel states, there is still the possibility of
adapting the decoders to the channel states. The information-theoretic limits
of communication over such state-dependent channels, when the transmitters
have access only to the statistical description of the channel state process,
are studied broadly under the notion of variable-rate channel coding
[Verdu10variable-ratechannel]. When the temporal variations are frequent
enough, an effective long-term strategy is adapting the transmission
strategies to the system’s ergodic behavior. However, when these variations
are infrequent, their temporal average can deviate significantly from the
channel’s ergodic mode, rendering the ergodic metrics (e.g., ergodic capacity)
unreliable performance targets.
State-dependent channels appear in various forms in communication systems. A
prevalent example is mobile wireless channels, which undergo fading
processes. Fading induces time-varying states for the channel, resulting in
uncertainty about the network's state at all transmitter and receiver sites
[SH98]. Other examples include opportunistic scheduling, in which the
transmitter adjusts encoding and transmission based on a quality-of-service
metric that depends on the state of the channel [TelatarOpp; Sharif; asadi],
e.g., signal-to-noise ratio, latency, and throughput; opportunistic spectrum
access (across time, space, and frequency); and cognitive radio
communication, in which the quality of communication relies on access to the
spectrum resources [Zhao; Tanab].
This survey paper focuses primarily on the fading process in different network
models and the mechanisms for circumventing transmitters’ lack of information
about random fading processes. Nevertheless, most techniques that we will
review can be adjusted to cater to other forms of state-dependent channels as
well.
When wireless channels undergo fading, a useful convention to circumvent
uncertainties about the fading process is establishing training sessions to
estimate channel states. Such sessions should repeat periodically commensurate
to how frequently the states vary. Depending on the multiplexing mode in a
communication channel, the training sessions are either bidirectional (e.g.,
in frequency-division multiplexing systems) or they are unidirectional and
ensued by feedback sessions (e.g., in time-division multiplexing systems).
While effective in delivering the channel state to the receiver sites, both
mechanisms face various challenges for delivering the same information to the
transmitters. For instance, establishing channels in both directions is not
always feasible, and even when it is, feedback communication incurs additional
costs and imposes additional latency. Such impediments are further exacerbated
as the size of a network grows.
When the probabilistic model of the process is known, an alternative approach
to channel training and estimation is hedging against the random fluctuations.
When the fluctuations are rapid enough, an effective long-term strategy is
adapting the transmission strategies to the system’s ergodic behavior. A
widely-used instance of this is the ergodic capacity as a reliable
transmission rate for a channel that undergoes a fast-fading process. On the
other hand, when the fluctuations occur in time blocks, which is often the
case, an effective strategy is the outage strategy, aiming to meet a target
reliability with a pre-specified probabilistic guarantee. An example of an
outage strategy is adopting the notion of outage capacity, which evaluates the
likelihood of reliable communication at a fixed transmission rate OZ98 . When
the actual channel realization can sustain the rate, the transmission is
carried out successfully; and otherwise, it fails, and no message is decoded
SH98 ; OZ98 . The notions of outage and delay-limited capacities are studied
extensively for various networks, including the multiple access channel (c.f.
HanlyTse ; LiJindalGoldsmith ; narasimhan ; Haghi ; DasNarayan ; jafar and
references therein).
While the ergodic and outage approaches provide long-term probabilistic
performance guarantees, they lack instantaneous guarantees. That is, each
communication session faces a chance of complete failure. For instance, when
the channel’s instantaneous realization does not sustain a rate equal to the
ergodic or outage capacity, the entire communication session over that channel
will be lost. To circumvent a lack of short-term guarantees, the broadcast
approach provides principles for designing transmission schemes that benefit
from both short- and long-term performance guarantees. In information-
theoretic terms, the broadcast approach is called variable-to-fixed channel
coding Verdu10variable-ratechannel .
### 1.2 Degradedness & Superposition Coding
The broadcast approach ensures a minimum level of successful communication,
even when the channels are in their weakest states. In this approach, any
channel realization is viewed as a broadcast receiver, rendering an equivalent
network consisting of several receivers. Each receiver is designated to a
specific channel realization, and it is degraded with respect to a subset of
other channels. Designing a broadcast approach for a channel model has the
following two pivotal elements.
1- Degradedness in channel realizations: The first step in specifying a
broadcast approach for a given channel pertains to designating a notion of
degradedness that facilitates rank-ordering different realizations of a
channel based on their relative strengths. The premise for assigning such
degradedness is that if communication is successful in a specific realization,
it will also be successful in all realizations considered stronger. For
instance, in a single-user single-antenna wireless channel that undergoes a
flat-fading process, the fading gain can be a natural degradedness metric. In
this channel, as the channel gain increases, the channel becomes stronger.
Adopting a proper degradedness metric hinges on the channel model. While it
can emerge naturally for some channels (e.g., single-user flat-fading), in
general, selecting a degradedness metric is rather heuristic, if possible at
all. For instance, in the multiple access channel, the sum-rate capacity can
be used as a metric to designate degradedness, while in the interference
channel, comparing different network realizations, in general, is not well-
defined.
2- Degradedness in message sets: Parallel to degradedness in channel
realization, in some systems, we might have a natural notion of degradedness
in the message sets as well. Specifically, in some communication scenarios
(e.g., video communication), the messages can be naturally divided into
multiple ordered layers that incrementally specify the entire message. In such
systems, the first layer conveys the baseline information (e.g., the lowest
quality version of a video); the second layer provides additional information
that incrementally refines the baseline information (e.g., refining video
quality), and so on. Such a message structure specifies a natural way of
ordering the information layers, which should also be used by the receiver to
retrieve the messages successfully. Specifically, the receiver starts by
decoding the baseline (lowest-ranked) layer, followed by the second layer, and
so on. While some messages have inherent degradedness structures (e.g.,
audio/video signals), that is not the case in general. When facing messages
without an inherent degradedness structure, a transmitter can still split a
message into multiple, independently generated information layers. The
decoders, which are not constrained by decoding the layers in any specific
order, will decode as many layers as they afford based on the actual channel
realization.
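The layer-by-layer decoding just described can be sketched in a few lines. In the sketch below, a receiver decodes ordered layers, baseline first, until its channel realization can no longer support the next layer; the layer rates and channel capacities are purely illustrative placeholders, not values tied to any particular system:

```python
# Sketch: a receiver decodes ordered layers (baseline first) until the
# channel realization can no longer support the next layer. The rates
# and capacities below are hypothetical placeholders.

def decodable_layers(layer_rates, channel_capacity):
    """Count how many ordered layers fit within the realized capacity."""
    total, count = 0.0, 0
    for rate in layer_rates:
        if total + rate > channel_capacity:
            break
        total += rate
        count += 1
    return count

layers = [0.5, 0.4, 0.3, 0.2]            # nats/use, baseline layer first
print(decodable_layers(layers, 1.0))      # weak channel realization
print(decodable_layers(layers, 1.5))      # strong channel realization
```

A stronger realization supports a larger cumulative rate and hence more decoded layers, which is the behavior the broadcast approach exploits.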
In a communication system, in general, the states of degradedness in channel
realizations and degradedness in message sets can vary independently.
Subsequently, designing a broadcast approach for a communication system hinges
on its channel and message degradedness status. By leveraging the intuitions
from the known theories on the broadcast channel, we briefly comment on
different combinations of the degradedness states.
* •
Degraded message sets. A message set with an inherent degradedness structure
enforces a prescribed decoding order for the receiver.
* –
Degraded channels. When there is a natural notion of degradedness among
channel realizations (e.g., in the single-user single-antenna flat-fading
channel), we can designate one message to each channel realization such that
the messages are rank-ordered in the same way that their associated channels
are ordered. At the receiver side, based on the actual realization of the
channel, the receiver decodes the messages designated to the weaker channels,
e.g., in the weakest channel realization, the receiver decodes only the
lowest-ranked message, and in the second weakest realization, it decodes the
two lowest-ranked messages, and so on. Communication over a parallel Gaussian
channel is an example in which one might face degradedness both in the channel
and the message Kfir:IZS2020 .
* –
General channels. When lacking a natural notion of channel degradedness (e.g.,
in the single-user multi-antenna channel or the interference channel), we
generally adopt an effective (even though imperfect) approach to rank order
channel realizations. These orders will be used to prescribe an order
according to which the messages will be decoded. The broadcast approach in
such settings mimics the Körner-Marton coding approach for broadcast
transmission with degraded message sets KornaerMarton77 . This approach is
known to be optimal for a two-user broadcast channel with a degraded set of
messages, while the optimal strategy for the general broadcast approach is an
open problem despite the significant recent advances, e.g., NairGamal09 .
* •
General message sets. Without an inherent degradedness structure in the
message, we have more freedom to generate the message set and associate the
messages to different channel realizations. In general, each receiver has the
freedom to decode any desired set of messages in any desired order. The
single-user multi-antenna channel is an important example in which such an
approach works effectively ShitzSteiner03 . In this setting, while the channel
is not degraded in general, different channel realizations are ordered based
on the singular values of the channel matrix, which implies an order in
channel capacities. In this setting, it is noteworthy that the specific choice
of ordering the channels and assigning the set of messages decoded in each
realization induces degradedness in the message set.
Built based on these two principles, and following the broadcast approach to
compound channels CO72 , the notion of broadcast strategy for slowly fading
single-user channel was initially introduced for effective single-user
communication Shitz97broadcast .
### 1.3 Application to Multimedia Communication
The broadcast approach has a wide range of applications that involve
successive and incremental retrieval of information sources. Representative
examples include image compression and video coding systems, which can be
naturally integrated with the successive refinement techniques bergergibson ;
WWZ . Specifically, the broadcast approach’s underlying premise is to allow
the receivers to decode the messages only partially, as much as the channels’
actual instantaneous realizations allow. This is especially relevant in
audio/video broadcast systems, in which even partially decoding the messages
still renders signals that are aurally or visually interpretable or
recognizable. In these systems, a transmitter is often oblivious to the
instantaneous realization of the channels, and the quality of its channel
shapes the quality of the audio or video signal recovered. This is also the
principle widely used in communication with successive refinement, in which a
message is split into multiple layers. A baseline layer carries the minimal
content that allows decoding an acceptable message. The subsequent layers
successively and progressively add more details to the message, refining its
content and quality. This approach enables digitally achieving a key feature
of analog audio and video transmission: the quality of communication is a
direct function of the channel quality, while there is no channel state
information at the transmitter.
In this review paper, we start by reviewing the core ideas in designing a
broadcast approach in the single-user wireless channel in Section 2. In this
section, we address both single-antenna and multi-antenna systems under
various transmission constraints. Next, we provide an overview of the
applications to the multiple access channel in Section 3. This section
discusses settings in which transmitters are either entirely or partially
oblivious to the channel states. Sections 4 and 5 will be focused on the
interference channel and the relay channel, respectively. A wide range of
network settings will be discussed in Section 6, and finally, Section 7
provides a perspective on the possible directions for extending the theory and
applications of the broadcast approach.
## 2 Variable-to-fixed Channel Coding
As pointed out earlier, the broadcast approach is, in essence, a variable-to-
fixed channel coding Verdu10variable-ratechannel for a state-dependent
channel, where the state realization is known only at the receiver. While
being oblivious to the channel realizations, the transmitter has access to the
probabilistic description of the channel. The key idea underpinning the
broadcast approach is splitting the transmitted message into multiple
independent layers and providing the receiver with the flexibility to decode
as many layers as it affords, depending on the channel’s actual state. While
the concept is general and can be applied to a wide range of state-dependent
channels, in this paper we focus on wireless channels.
### 2.1 Broadcast Approach in Wireless Channels
In wireless communications, the channels undergo random fading processes. In
these systems, the channel state corresponds to a fading gain, and the channel
state statistical description is characterized by the probability model of the
fading process ShitzSteiner03 ; Shitz97broadcast ; AsSh08_1 ; SH98 . The
relative duration of the channel’s coherence time to the system’s latency
requirement specifies the channel’s fading condition. Specifically, slow
(fast) fading arises when the channel’s coherence time is large (small)
relative to the system’s latency requirement. In particular, slowly fading
channels commonly arise when a mobile front-end moves slowly relative to the
data transmission rate. Such a model is especially apt in modern communication
systems with high spectral efficiency and data rates.
In systems with slowly-fading channels, a receiver can estimate the channel
fading coefficients with high accuracy. This motivates considering the
instantaneous and perfect availability of the channel state information (CSI)
at the receiver sites. On the other hand, acquiring such CSI at the
transmitter sites (CSIT) can be either impossible, due to the lack of a
backward channel from a receiver to its respective transmitter; prohibitive,
due to the extensive costs associated with backward communication; or
unhelpful, due to a mismatch between the stringent latency constraints and the
frequency of backward communication. Hence, in these circumstances, properly
circumventing the lack of perfect CSIT plays a pivotal role in designing
effective communication schemes.
Capitalizing on the system’s ergodic behavior (e.g., setting the transmission
rate to the ergodic capacity of a channel) effectively addresses the lack of
CSIT SH98 . However, this is viable only when the transmission is not facing
any delay constraints, and the system is allowed to have sufficiently long
transmission blocks (relative to the fading dynamics). In particular, in a
highly dynamic channel environment, stringent delay constraints imply that a
transmission block, despite being long enough to sustain reliable
communication OZ98 , is considerably shorter than the dynamics of the slow
fading process. To quantify the quality of communication in such
circumstances, the notion of capacity versus outage was introduced and
discussed in OZ98 and SH98 (and references therein). A fundamental
assumption in these systems is that the fading process variations throughout
the transmission block are negligible. In an outage strategy, the transmission
rate is fixed, and the information is reliably retrieved by the receiver when
the instantaneous channel realizations allow. Otherwise, communication fails
(an outage event). In such systems, the term outage capacity refers to the
maximal achievable average rate. It can also be cast as the capacity of an
appropriately defined compound channel SH98 . The main shortcoming of the
outage approach to designing transmissions is the possibility of outage events,
which translates into a potentially significant loss in spectral efficiency.
The broadcast approach aims to avoid outage events while the transmitters
remain oblivious to the state of their channels. In this approach, reliable
transmission rates are adapted to the actual channel conditions without
providing feedback from the receiver to the transmitter. This approach’s
origins are discussed in Cover’s original paper CO72 , which suggests using a
broadcast approach for the compound channel. Since the slowly-fading channel
can be viewed as a compound channel with the channel realization as the
parameter of the compound channel, transmission over these channels can be
naturally viewed and analyzed from the perspective of the broadcast approach.
This strategy is useful in various applications, and in particular, it is in
line with the successive refinement source coding approach of
successiveCover1991 and the subsequent studies in RI99 ; NgTian07 ; Tian08 ;
Ng09 ; NgTian12 . Specifically, the underlying premise is that the more the
provided information rate, the less average distortion evident in the
reconstructed source.
An example of successive refinement source coding is image compression, in
which a coarse description is available first and the image quality is then
gradually refined through successive improvements of the description. An
application example is progressive JPEG encoding, where additional coded
layers serve to refine the image quality. In the broadcast approach, the
transmitter sends layered coded information, and, viewing the receiver as a
continuum of ordered users, the maximum number of layers successively decoded
is dictated by the fading channel realization. Thus, the channel realization
influences the received quality of the data. The broadcast approach has a
practical appeal in voice communication cellular systems, where a layered
voice coding is possible. Service quality, subsequently, depends on the
channel realization. This facilitates using coding to achieve the basic
features of analog communications, that is, the better the channel, the better
the performance, e.g., the measured signal-to-noise ratio (SNR) or the
received minimum mean-squared error (MMSE). All this is viable, while the
transmitters are unaware of channel realizations. Other applications can be
found in DuhamelKieffer09 . The problem of layered coding suggests unequal
error protection on the transmitted data, which was studied in (TR96, , and
references therein). A related subject is the priority encoding transmission
(PET). The study in BO99 shows that sending hierarchically-organized messages
over lossy packet-based networks can be analyzed via the broadcast erasure
channel with a degraded message set, using the information spectrum approach
AL96 . Finally, we remark that the study in Woyach extends the notion to
settings in which the probabilistic model is unknown to the transmitter.
### 2.2 Relevance to the Broadcast Channel
Since the broadcast approach’s foundations hinge on those of the broadcast
channel, we provide a brief overview of the pertinent literature on the
broadcast channel, which was first explored by Cover CO72 ; Cover . In a
broadcast channel, a single transmission is directed to a number of receivers,
each enjoying possibly different channel conditions, reflected in their
received SNRs. The Gaussian broadcast channel with a single transmit antenna
coincides with the classical physically degraded Gaussian broadcast channel,
whose capacity region is well known (see Cover for the deterministic case and
TS02 ; li98_1 ; li98_2 for the composite or ergodic cases). For multiple
transmit antennas, the Gaussian broadcast channel is, in general, a non-
degraded broadcast channel, for which the capacity region with a general
message set is not fully known VI02 ; SV02 ; KR02 ; CA01_1 ; YU01 , and it
cannot be reduced to an equivalent set of parallel degraded broadcast
channels, as studied in GA80 ; TS02 ; li98_1 ; li98_2 . In the special case of
individual messages without common broadcasting, the capacity region in the
multi-antenna setting was characterized in Weingarten06 .
Broadcasting to a single user essentially means broadcasting common information.
Information-theoretic results and challenges for broadcasting a common source
are discussed in SE03 , and in light of endless information, data transmission
is termed streaming in FED01 . One interpretation of single-user broadcasting
is hierarchical broadcasting using multi-level coding (MLC) SC97 ; SC98 ;
SC99 . The study in SC98 demonstrates the spectral efficiency of MLC with
hierarchical demodulation in an additive white Gaussian noise (AWGN) channel
and a fading channel. The study in SA96 examines the fading interleaved
channel with one bit of side information about the fading process. The
broadcast approach is adapted to decode different rates for channels taking
these two distinct states (determined by whether the SNR is above or below a
threshold value). Since the channel is memoryless, the average rate, given by
the mutual information $I(y,\hat{s};x)$ (where $x$ is the channel
input, $y$ is the channel output, and $\hat{s}$ is the partial state
information), is achievable. The broadcast approach, by contrast, appears
ill-suited here, where the channel states are assumed to be independent and
identically distributed (i.i.d.).
Finally, the study in Takesh01 considers a superposition coding scheme to
achieve higher transmission rates in the slowly-fading channel. This study
adopts the broadcast approach for the single-input single-output (SISO)
channel with a finite number of receivers. The number of receivers is the
number of coded layers. It is evident from Takesh01 that for the SISO
channel, a few levels of coded layering closely approximates the optimal
strategy employing transmission of infinite code layers.
### 2.3 The SISO Broadcast Approach - Preliminaries
In this section, we elaborate on the original broadcast approach, first
presented in Shitz97broadcast , and we provide the derivation of the
expressions related to the broadcast approach concept, an optimal power
distribution, and the associated average achievable rates under different
system constraints. We start by providing a canonical channel model for the
single-user single-antenna system. The fading parameter realization can be
interpreted as an index (possibly continuous), which designates the SNR at the
receiver of interest. This model also serves as the basis for other channel
models discussed in the rest of the paper. Specifically, consider the channel
model:
$\displaystyle y\;=\;hx\;+\;n~{},$ (1)
where $x$ is the transmitted complex symbol, $y$ is the received symbol, and
$n$ accounts for the AWGN with zero mean and unit variance denoted by
${\mathcal{CN}}(0,1)$. Constant $h$ represents the fading coefficient. For
each realization of $h$, there is an achievable rate. We are interested in the
average achievable rate for various independent transmission blocks. Thus we
present the results in terms of average performance, averaged over the
distribution of $h$.
Information-theoretic considerations for this simple model were discussed in
(OZ98, , and references therein), as a special case of the multi-path setting.
With the value of $h$ known to the transmitter, and with a short-term power
constraint (excluding power optimization across different blocks), the reliable
rate averaged over many block realizations is given by
$\displaystyle C_{\rm erg}={\mathbb{E}}_{s}[\log(1+sP)]\ ,$ (2)
where $s\triangleq|h|^{2}$ is the random fading power. The normalized SNR,
following the channel model definition (1), is denoted by
$P={\mathbb{E}}[|x|^{2}]$, where ${\mathbb{E}}$ stands for the expectation
operator (when a subscript is added, it specifies the random variable with
respect to which the expectation is taken).
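As a quick numerical illustration of (2), assuming a Rayleigh fading model in which $s$ is unit-mean exponential (the example worked out later in (18)) and an arbitrary SNR $P$, a Monte Carlo estimate of $C_{\rm erg}$ can be cross-checked against direct quadrature:

```python
# Monte Carlo vs. quadrature evaluation of C_erg = E_s[log(1 + sP)] of
# Eq. (2), assuming Rayleigh fading (s ~ Exp(1)) and an illustrative P.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
P = 4.0
s = rng.exponential(1.0, size=1_000_000)
c_mc = float(np.mean(np.log1p(s * P)))
c_quad, _ = quad(lambda u: np.exp(-u) * np.log1p(u * P), 0, np.inf)
print(c_mc, c_quad)   # the two estimates should agree closely
```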
Figure 1: (a) A SISO channel with a fading parameter $h$. (b) The equivalent
SISO broadcast channel model. For a channel realization $h^{(j)}$, only
receivers indexed up to $j$ can decode their fractional rate
$\,\textnormal{d}R$.
The SISO channel defined in (1) is illustrated in Fig. 1(a), and its
associated broadcast channel is depicted in Fig. 1(b). This figure also
illustrates the broadcast approach, according to which the transmitter sends
an infinite number of coded information layers. The receiver is equivalent to
a continuum of ordered users, each decoding a coded layer if channel
realization allows. In general, the number of coded layers (and respectively,
receivers) depends on the cardinality of the fading power random variable
(RV). Specifically, in a Gaussian fading channel, a continuum of coded layers
is required. Predetermined ordering is achieved due to the degraded nature of
the Gaussian SISO channel Cover . Each of the users has to decode a fractional
rate, denoted by $\,\textnormal{d}R$ in Fig. 1(b). The fractional rates
$\,\textnormal{d}R$ of the different users are not equal but depend on their
receiver index. For some fading realization $h^{(j)}$, only the continuum of
receivers up to receiver $j$ can decode their fractional rates
$\,\textnormal{d}R$. The first receiver decodes only its own
$\,\textnormal{d}R$; the second first decodes the interfering
$\,\textnormal{d}R$ (the information intended for the first user) and then
decodes its own $\,\textnormal{d}R$. Finally, receiver $j$ decodes all fractional
interferences up to layer $j-1$, and then decodes its information layer
$\,\textnormal{d}R$. Hence the total achievable rate for a realization
$h^{(j)}$ is the integral of $\,\textnormal{d}R$ over all receivers up to $j$.
This model is the general case of coded layering. The broadcast approach in
Shitz97broadcast with a finite number of code layers, also termed
superposition coding, is presented in Takesh01 . In finite-level code
layering, only a finite set of ordered receivers is required. This approach
has lower decoding complexity; however, it is sub-optimal relative to the full
broadcast approach.
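A minimal sketch of finite-level layering: for a channel with only two fading-power states, a two-layer superposition code can be evaluated and its power split optimized by grid search. All numerical values below (power, states, probabilities) are illustrative assumptions, not parameters from the referenced studies:

```python
# Two-layer superposition over a two-state fading channel. Layer 1
# (power a*P) targets the weak state and is decoded treating layer 2 as
# noise; layer 2 (power (1-a)*P) is decoded in the strong state after
# stripping layer 1. States and probabilities are illustrative.
import numpy as np

P = 10.0
s_weak, s_strong = 0.3, 2.0
p_strong = 0.5                       # probability of the strong state

def avg_rate(a):
    r1 = np.log1p(s_weak * a * P / (1 + s_weak * (1 - a) * P))
    r2 = np.log1p(s_strong * (1 - a) * P)
    return r1 + p_strong * r2        # layer 1 is decoded in both states

alphas = np.linspace(0.0, 1.0, 1001)
best = alphas[np.argmax([avg_rate(a) for a in alphas])]
print(best, avg_rate(best))
```

An interior power split typically beats both extremes (all power to one layer), which is the gain of layering over single-layer outage coding.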
Next, assume that the fading power RV $S$ is continuous. Then for some channel
realization $h^{(j)}$ of Fig. 1(b), with a fading power $s^{(j)}$, the
designated reliably conveyed information rate is denoted by $R(s^{(j)})$. We
now drop the superscript $j$, and refer to $s$ as the realization of the
fading power RV $S$. As illustrated, the transmitter views the fading channel
as a degraded Gaussian broadcast channel Cover with a continuum of receivers,
each experiencing a different effective receive SNR specified by $s\cdot P$.
The total transmitted power $P$ is also the SNR as the fading and additive
noise are normalized according to (1). The term $s$ is, therefore, interpreted
as a continuous index. By noting that $\log(1+x)\approx x$ for small enough
$x>0$, the incremental differential rate is given by
$\displaystyle\,\textnormal{d}R(s)=\log\left(1+\frac{s\rho(s)\,\textnormal{d}s}{1+sI(s)}\right)=\frac{s\rho(s)\,\textnormal{d}s}{1+sI(s)}\
,$ (3)
where $\rho(s)\,\textnormal{d}s$ is the transmit power associated with a layer
parameterized by $s$, intended for receiver $s$, which also designates the
transmit power distribution. The right-hand-side equality is justified in V90
. Information streams intended for receivers indexed by $u>s$ are undetectable
and are treated as additional interfering noise, denoted by $I(s)$. The
interference for a fading power $s$ is
$\displaystyle I(s)=\int\limits_{s}^{\infty}\rho(u)\,\textnormal{d}u\ ,$ (4)
which is also a monotonically decreasing function of $s$. The total
transmitted power is the overall collected power assigned to all layers, i.e.,
$\displaystyle P=\int\limits_{0}^{\infty}\rho(u)\,\textnormal{d}u=I(0)\ .$ (5)
As mentioned earlier, the total achievable rate for a fading realization $s$
is an integration of the fractional rates over all receivers with successful
layer decoding capability, rendering
$\displaystyle R(s)=\int_{0}^{s}\frac{u\rho(u)\,\textnormal{d}u}{1+uI(u)}\ .$
(6)
The average rate is achieved with sufficiently many transmission blocks, each
viewing an independent fading realization. Therefore, the total rate averaged
over all fading realizations is
$\displaystyle R_{\rm
bs}~{}=~{}\int\limits_{0}^{\infty}\,\textnormal{d}u~{}f(u)R(u)~{}=~{}\int_{0}^{\infty}\,\textnormal{d}u(1-F(u))\frac{u\rho(u)}{1+uI(u)}\
,$ (7)
where $f(u)$ is the probability density function (PDF) of the fading
power, and
$\displaystyle F(u)=\int\limits_{0}^{u}\,\textnormal{d}af(a)\ ,$ (8)
is the corresponding cumulative distribution function (CDF).
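Equations (3)-(8) can be evaluated numerically for any candidate power allocation. The sketch below assumes Rayleigh fading and spreads the power uniformly over a hypothetical layer range, a generally suboptimal choice used only for illustration:

```python
# Evaluate the average layered rate of Eq. (7) for an arbitrary
# (generally suboptimal) power allocation: rho(u) spreads the power P
# uniformly over a hypothetical layer range [a, b], with Rayleigh
# fading so that 1 - F(u) = exp(-u). All values are illustrative.
import math
from scipy.integrate import quad

P, a, b = 5.0, 0.2, 2.0

def rho(u):                           # uniform layer power density
    return P / (b - a) if a <= u <= b else 0.0

def I(u):                             # residual interference, Eq. (4)
    return P * min(max((b - u) / (b - a), 0.0), 1.0)

def integrand(u):                     # (1 - F(u)) u rho(u) / (1 + u I(u))
    return math.exp(-u) * u * rho(u) / (1 + u * I(u))

r_bs, _ = quad(integrand, a, b)
print(r_bs)
```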
Optimizing $R_{\rm bs}$ with respect to the power distribution $\rho(s)$ (or
equivalently with respect to $I(u)$, where $u\geq 0$) under the power
constraint $P$ (5) is of interest and can in certain cases be found by solving
the associated constrained Euler equation GF91 . We turn back to the
expression in (7), corresponding to $s_{\rm th}=0$, and explicitly write the
posed optimization problem
$\displaystyle R_{\rm
bs,max}~{}=~{}\max\limits_{I(u)}\int_{0}^{\infty}\,\textnormal{d}u(1-F(u))\frac{u\rho(u)}{1+uI(u)}\
,$ (9)
where we maximize $R_{\rm bs}$ (7) over the residual interference function
$I(u)$. For an extremum function $I(x)$, the variation of the functional (9)
is zero GF91 , corresponding to a proper Euler equation, which yields the
extremal solution for $I(x)$. Let us first present the functional of (9)
subject to maximization
$\displaystyle
S(x,I(x),I^{\prime}(x))=(1-F(x))\frac{-xI^{\prime}(x)}{1+xI(x)}\ .$ (10)
The necessary condition for a maximum of the integral of
$S(x,I(x),I^{\prime}(x))$ over $x$ is a zero variation of the functional
(GF91, , (Theorem 2, Section 3.2)). Correspondingly, the Euler equation is
given by
$\displaystyle
S_{I}-\frac{\,\textnormal{d}}{\,\textnormal{d}x}S_{I^{\prime}}=0\ ,$ (11)
where
$\displaystyle S_{I}$
$\displaystyle=(1-F(x))\frac{x^{2}I^{\prime}(x)}{(1+xI(x))^{2}}\ ,$ (12)
$\displaystyle S_{I^{\prime}}$ $\displaystyle=(1-F(x))\frac{-x}{1+xI(x)}\ ,$
(13) $\displaystyle\frac{\,\textnormal{d}}{\,\textnormal{d}x}S_{I^{\prime}}$
$\displaystyle=\frac{xf(x)}{1+xI(x)}+(1-F(x))\frac{x^{2}I^{\prime}(x)-1}{(1+xI(x))^{2}}\
.$ (14)
These relationships reduce the differential equation (11) to an equation
linear in $I(x)$, providing the following closed-form solution
$\displaystyle I(x)=\left\\{\begin{array}[]{cl}\frac{1-F(x)-x\cdot
f(x)}{x^{2}f(x)}&~{}x_{0}\leq x\leq x_{1}\\\ 0&~{}else\end{array}\right.~{},$
(17)
where $x_{0}$ is determined by $I(x_{0})=P$, and $x_{1}$ by $I(x_{1})=0$. All
the analyses are also valid for the single-input multiple-output (SIMO) and
multiple-input single-output (MISO) channels as long as the channels are
degraded regardless of the number of receive antennas in SIMO or transmit
antennas in MISO. The number of transmit or receive antennas only affects the
fading power distribution CDF. As an example, consider a SISO Rayleigh flat
fading channel for which the fading power $S$ has an exponential distribution
with PDF
$\displaystyle f(u)=e^{-u}\ ,\qquad\mbox{and}\qquad
F(u)=1-e^{-u},~{}~{}~{}u\geq 0\ .$ (18)
The optimal transmitter power distribution that maximizes $R_{\rm bs}$ in (9)
is specified by substituting $f(u)$ and $F(u)$ from (18) into (17), resulting
in
$\displaystyle\rho(s)=-\frac{\,\textnormal{d}}{\,\textnormal{d}s}I(s)=\left\\{\begin{array}[]{cl}\frac{2}{s^{3}}-\frac{1}{s^{2}}\
,&s_{0}\leq s\leq s_{1}\\\ &\\\ 0\ ,&\mbox{else}\end{array}\right.\ .$ (22)
Constant $s_{0}$ is determined by solving $I(s_{0})=P$, and it is given by
$\displaystyle s_{0}=\frac{2}{1+\sqrt{1+4P}}\ .$ (23)
Similarly, $s_{1}$ can be found by solving $I(s_{1})=0$, which indicates
$s_{1}=1$. The corresponding rate $R(s)$ using (6) is
$\displaystyle R(s)=\left\\{\begin{array}[]{cl}0&,~{}0\leq s\leq s_{0}\\\ &\\\
2\ln(\frac{s}{s_{0}})-(s-s_{0})&,~{}s_{0}\leq s\leq 1\\\ &\\\
-2\ln(s_{0})-(1-s_{0})&,~{}s\geq 1\end{array}\right.\ ,$ (29)
and following (7), the associated total average rate is
$\displaystyle R_{\rm bs}=2E_{i}(s_{0})-2E_{i}(1)-(e^{-s_{0}}-e^{-1})\ ,$ (30)
where
$\displaystyle
E_{i}(x)=\int\limits_{x}^{\infty}\frac{e^{-t}}{t}\;\,\textnormal{d}t,~{}~{}~{}x\geq
0$ (31)
is the exponential integral function. The limiting behavior of $R_{\rm bs}$ is
found to be
$\displaystyle R_{\rm
bs}\thickapprox\left\\{\begin{array}[]{cl}\ln\frac{P}{9.256}&,~{}P\rightarrow\infty\\\
&\\\ \frac{1}{e}P&,~{}P\rightarrow 0\end{array}\right.\ .$ (35)
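As an illustrative numerical check of the closed-form quantities above, the following stdlib-only Python sketch (the function names, quadrature scheme, and step counts are our own choices, not from the references) computes $s_{0}$ from (23) and $R_{\rm bs}$ from (30), evaluating the exponential integral (31) by a simple trapezoid rule:

```python
import math

def E1(x, steps=200_000, tail=60.0):
    # Exponential integral E_i(x) = int_x^inf e^{-t}/t dt, as in (31),
    # via a composite trapezoid rule on [x, x + tail]; the truncated
    # tail beyond x + tail is negligible (below e^{-60}).
    a, b = x, x + tail
    h = (b - a) / steps
    total = 0.5 * (math.exp(-a) / a + math.exp(-b) / b)
    for i in range(1, steps):
        t = a + i * h
        total += math.exp(-t) / t
    return total * h

def siso_broadcast_rate(P):
    # s0 from (23) and the total average rate R_bs from (30),
    # for the SISO Rayleigh fading channel with SNR P.
    s0 = 2.0 / (1.0 + math.sqrt(1.0 + 4.0 * P))
    r_bs = 2.0 * E1(s0) - 2.0 * E1(1.0) - (math.exp(-s0) - math.exp(-1.0))
    return s0, r_bs

P = 10.0
s0, r_bs = siso_broadcast_rate(P)
# s0 satisfies I(s0) = P, where I(s) = (1 - s)/s^2 follows from
# substituting (18) into (17):
print(abs((1.0 - s0) / s0**2 - P) < 1e-9)   # True
print(0.0 < r_bs < math.log(1.0 + P))       # True: below the Gaussian bound
```

The first check confirms that the closed-form $s_{0}$ in (23) indeed solves $I(s_{0})=P$; the second confirms that the resulting expected rate stays below the Gaussian-channel upper bound.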
The ergodic capacity in this case is given by OZ98 ,
$\displaystyle C_{\rm erg}=e^{1/P}\cdot
E_{i}(\frac{1}{P})\thickapprox\left\\{\begin{array}[]{cl}\ln\frac{P}{1.78}&,~{}P\rightarrow\infty\\\
&\\\ P&,~{}P\rightarrow 0\end{array}\right.\ .$ (39)
The average achievable rate of the standard outage approach depends on the
outage probability $P_{\rm out}={\mathbb{P}}\\{s\leq s_{\rm
th}\\}=1-e^{-s_{\rm th}}$. Thus, the achievable outage rate is given by
$\displaystyle R_{\rm o}(s_{\rm th})=e^{-s_{\rm th}}\log(1+s_{\rm th}P)\ ,$
(40)
where $R_{\rm o}(s_{\rm th})$ is the average achievable rate of a single
layered code for a parameter $s_{\rm th}$. That is, a rate of $\log(1+s_{\rm
th}P)$ is achieved when the fading power realization is greater than $s_{\rm
th}$, with probability $e^{-s_{\rm th}}$. The outage capacity is obtained by
maximizing the achievable outage average rate (40) with respect to the
outage probability (or, equivalently, the fading power threshold $s_{\rm th}$).
This yields the outage capacity
$\displaystyle R_{\rm o,max}=e^{-s_{\rm th,opt}}\log(1+s_{\rm th,opt}P)\ ,$
(41)
where $s_{\rm th,opt}$ solves the equation
$\displaystyle\log(1+s_{\rm th,opt}P)=\frac{P}{1+s_{\rm th,opt}P}\ ,$ (42)
and it can be expressed in closed-form as
$\displaystyle s_{\rm th,opt}=\frac{P-W_{L}(P)}{W_{L}(P)\cdot P}\ ,$ (43)
where $W_{L}(P)$ is the Lambert-W function, also known as the Omega function,
which is the inverse of the function $f(W)=We^{W}$. Subsequently, the outage
capacity is given by AvestimehrTse07
$\displaystyle R_{\rm
o,max}=e^{-(P-W_{L}(P))/W_{L}(P)/P}\cdot\log\left(P/W_{L}(P)\right)\thickapprox\left\\{\begin{array}[]{cl}\ln\frac{P}{W_{L}(P)}&,~{}P\rightarrow\infty\\\
&\\\ \frac{1}{e}P&,~{}P\rightarrow 0\end{array}\right.\ .$ (47)
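The threshold in (43) can be evaluated without a special-function library. The sketch below (a stdlib-only illustration; the Newton iteration for the Lambert-W function and the numeric SNR value are our own choices) computes $s_{\rm th,opt}$ from (43) and verifies that it satisfies the optimality condition (42):

```python
import math

def lambert_w(p, tol=1e-12):
    # Principal-branch Lambert-W: solve w * exp(w) = p for p > 0
    # by Newton's method (illustrative stdlib-only implementation).
    w = math.log(1.0 + p)
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - p) / (e * (1.0 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

def s_th_opt(P):
    # Optimal fading-power threshold from (43): s = (P - W_L(P)) / (W_L(P) * P).
    w = lambert_w(P)
    return (P - w) / (w * P)

P = 5.0
s = s_th_opt(P)
# s solves the fixed-point condition (42):
print(abs(math.log(1.0 + s * P) - P / (1.0 + s * P)) < 1e-9)  # True
```

The verification rests on the identity $W_{L}(P)e^{W_{L}(P)}=P$, which gives $1+s_{\rm th,opt}P=P/W_{L}(P)$ and hence $\ln(1+s_{\rm th,opt}P)=W_{L}(P)=P/(1+s_{\rm th,opt}P)$.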
The study in BustinSh15 provides an interesting interpretation for the basics
of the broadcast approach Shitz97broadcast from the I-MMSE perspective.
When a transmitter has full CSI and transmits at a fixed power $P$, the
transmission rate can be adapted to channel state, and single-layer
transmission can achieve the ergodic capacity. When variability in
transmission power is allowed, and we face an average power constraint, a
water-filling approach can be used. This facilitates adapting the transmission
power and rate to the fading state, which is advantageous in terms of the
expected rate. However, when lacking the perfect CSIT, the SISO broadcast
approach can be optimized as studied in AsSh08 . In this approach, the CSI is
quantized by the receiver and fed back to the transmitter. This allows for
short latency, and the optimized achievable expected rate can be characterized
as a function of the CSI accuracy.
The studies in AsSh08_2 ; ShenLiuFitz08 investigate various multi-layer
encoding hybrid automatic repeat request (HARQ) schemes AsSh08_4 . The
motivation for extending the conventional HARQ schemes to multi-layer coding
is to achieve high throughput efficiency with low latency. The study in
AsSh08_2 focuses on finite-level coding with incremental redundancy HARQ,
where every coded layer supports incremental redundancy coding. The multi-
layer bounds were investigated through continuous broadcasting by defining
different broadcasting protocols that coherently combine HARQ and broadcasting
incremental redundancy HARQ. An optimal power distribution has not been obtained for
continuous broadcasting. However, it was observed that even with a sub-optimal
broadcasting power distribution, significantly high gains of $\sim 3$ dB over
an outage approach could be achieved for low and moderate SNRs in the long-
term static channel model, with latency as short as two blocks. In the long-
term static channel model, the channel is assumed to remain in the same fading
state within the HARQ session. This is especially interesting as the
conventional broadcast approach (without HARQ), has only marginal gains over
the outage approach for low SNRs. The retransmission protocol of AsSh08_2 is
also an interesting approach, which uses retransmissions for sending new
information at a rate matched to the broadcasting feedback from the first
transmission. The optimal broadcasting power distribution for outage approach
retransmission was fully characterized in AsSh08_2 , and numerical results
showed that it is the most efficient scheme for high SNRs, and at the same
time, it closely approximates the broadcasting incremental redundancy-HARQ for
low SNRs. However, in broadcasting incremental redundancy HARQ, only sub-
optimal power distributions were used and finding the broadcasting optimal
power distribution is still an open problem. It may also turn out that the
broadcasting incremental redundancy HARQ with an optimal power distribution
has more gains over the outage approach retransmission scheme.
Figure 2: SISO broadcast achievable average rate $R_{\rm bs}$, outage capacity
$R_{\rm o}$, ergodic capacity $C_{\rm erg}$ and Gaussian channel upper bound
$C_{G}$ versus SNR.
Next, we present the results on the achievable rates for the single-user SISO
Rayleigh flat fading channel under the broadcast approach. Figure 2
demonstrates the SISO broadcast achievable average rate $R_{\rm bs}$ (30),
outage capacity $R_{\rm o}$ (41), the ergodic capacity $C_{\rm erg}$ (39)
upper bound, and the Gaussian capacity $C_{G}=\log(1+P)$ as a reference.
Clearly, $R_{\rm bs}>R_{\rm o}$ as the latter is achieved by substituting
$\rho(s)$ with $P\delta(s-s_{\rm th,opt})$ in lieu of the optimized $\rho(s)$
in (6). Outage capacity is equivalent to optimized single-layer coding rather
than the optimized continuum of code layers in the broadcast approach. This
difference is more pronounced at high SNRs. Such a comparison of the
single-level and two-level achievable rates is presented in
Takesh01 . This comparison shows that two-level code layering is already very
close to the optimum $R_{\rm bs}$. The ergodic capacity in the general SIMO
case, with $N$ receive antennas, is given by (T99, (9)):
$\displaystyle C_{\rm
erg}=\frac{1}{\Gamma(N)}\int_{0}^{\infty}\,\textnormal{d}x\log(1+P\cdot
x)x^{N-1}e^{-x}\ ,$ (48)
where $\Gamma$ denotes the Gamma function. The probability density of the
total fading power for $N$ receive antennas, is given by T99
$\displaystyle f(\lambda)={\rm const}(N)\cdot\lambda^{N-1}e^{-\lambda}\ ,$
(49)
where ${\rm const}(N)$ is a normalization constant.
### 2.4 The MIMO Broadcast Approach
Next, we review the multiple-input multiple-output (MIMO) channel. MIMO
channels, in general, are non-degraded broadcast channels. The MIMO capacity
region is known for multiple users with private messages Weingarten06 , and
for two users with a common message GengNair14 . A complete characterization
of the broadcast approach requires the full solution of the most general MIMO
broadcast channel with a general degraded message set, which is not yet
available. Hence, suboptimal ranking procedures are studied. Broadcasting with
degraded message sets is not only unknown in general channels, but also, it is
unknown for MIMO channels Chong14 ; Chong18 . Various approaches to
transmitting degraded message set with sub-optimal ranking at the receiver are
studied in ShitzSteiner03 ; AsSh04 ; BustinPaySh13 . The ranking of channel
matrices (as opposed to a vector in a SIMO case) can be achieved via
supermajorization ranking of the singular values of $HH^{\sf H}$. The
variational problem for deriving the optimal power distribution for the MIMO
broadcast strategy is characterized in ShitzSteiner03 , but seems not to lend
itself to closed-form expressions. Thus a sub-optimal solution using
majorization is considered and demonstrated for the Rayleigh fading channel.
We adopt the broadcast approach described earlier for the SISO and SIMO
channels, in which the receivers opt to detect the highest possible rate based
on the actual realization of the propagation matrix $H$ not available to the
transmitter. In short, as $H$ improves, it sustains higher reliable rates.
This is because the MIMO setting is equivalent to the general broadcast
channel (from the perspective of infinite layer coding), rather than a
degraded broadcast channel as in the single-input case. In the sequel, we
demonstrate a broadcast approach suited for this MIMO scenario. The approach
suggests an ordering of the receivers based on supermajorization of singular
values of the channel norm matrix. Consider the following flat fading MIMO
channel with $M$ transmit antennas and $N$ receive antennas:
${\bf y}\;=\;H{\bf x}\;+\;{\bf n}\,,$ (50)
where ${\bf x}$ is the input $(M\times 1)$ vector, ${\bf n}$ is the $(N\times
1)$ noise vector with complex Gaussian i.i.d. ${\mathcal{CN}}(0,1)$ elements.
The propagation matrix $(N\times M)$ is designated by $H$ and also possesses
complex Gaussian i.i.d. ${\mathcal{CN}}(0,1)$ elements. The received $(N\times
1)$ vector is denoted by ${\bf y}$. We adhere to the non-ergodic case, where
$H$ is fixed throughout the code word transmission. We assume that the
receiver is aware of $H$ while the transmitter is not. The total transmit
power constraint is $P$, i.e., ${\mathbb{E}}[\textrm{tr}\\{{\bf x}{\bf x}^{\sf
H}\\}]\leq P$.
#### 2.4.1 Weak Supermajorization
First, we introduce some partial ordering relations based on classical theory
of majorization MO79 . Let
$\hbox{\boldmath$\alpha$}=\\{\alpha_{i}\\},\,\hbox{\boldmath$\beta$}=\\{\beta_{i}\\}$
be two sequences of length $K$. Let $\\{\alpha_{(i)}\\}\,,\\{\beta_{(i)}\\}$
be the increasing ordered permutations of the sequences, i.e.,
$\displaystyle\alpha_{(1)}$
$\displaystyle\leq\alpha_{(2)}\,\dotsb\leq\alpha_{(K)}\ ,$ (51)
$\displaystyle\beta_{(1)}$
$\displaystyle\leq\beta_{(2)}\,\dotsb\leq\beta_{(K)}\,.$ (52)
We say that $\alpha$ is weakly supermajorized by $\beta$, written
$\hbox{\boldmath$\alpha$}\prec^{w}\hbox{\boldmath$\beta$}$, if
$\sum\limits_{i=1}^{k}\,\alpha_{(i)}\geq\sum\limits_{i=1}^{k}\,\beta_{(i)}\;,\quad
k=1\,\dotsc\,,K\,.$ (53)
Then, the relation $\hbox{\boldmath$\alpha$}\prec^{w}\hbox{\boldmath$\beta$}$
implies that MO79
$\sum\limits_{i=1}^{K}\,\phi(\alpha_{i})\leq\displaystyle\sum_{i=1}^{K}\,\phi(\beta_{i})\,,$
(54)
for all continuous decreasing convex functions $\phi(\cdot)$.
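As an illustrative check of the ordering (53) and its consequence (54), the following stdlib-only Python sketch (the sequence values and the choice $\phi(x)=e^{-x}$ are our own examples) verifies both for a pair of sequences that are not elementwise ordered:

```python
import math

def weakly_supermajorized(alpha, beta):
    # alpha \prec^w beta per (53): every partial sum of the increasingly
    # ordered alpha dominates the corresponding partial sum of beta.
    sum_a = sum_b = 0.0
    for a, b in zip(sorted(alpha), sorted(beta)):
        sum_a += a
        sum_b += b
        if sum_a < sum_b:
            return False
    return True

alpha, beta = [3.0, 3.5], [1.0, 4.0]
print(weakly_supermajorized(alpha, beta))  # True: 3 >= 1 and 6.5 >= 5

# Consequence (54), checked for the continuous decreasing convex
# function phi(x) = exp(-x):
print(sum(math.exp(-a) for a in alpha) <= sum(math.exp(-b) for b in beta))  # True
```

Note that $\alpha=(3,3.5)$ does not dominate $\beta=(1,4)$ elementwise ($3.5<4$), yet the partial-sum condition (53) holds and the functional inequality (54) follows.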
#### 2.4.2 Relation to Capacity
Next, consider the received signal in (50), where the undetectable code layers
are explicitly stated as
$\displaystyle{\bf y}=H({\bf x}_{S}+{\bf x}_{I})+{\bf n}\,,$ (55)
where ${\bf x}_{S}$ and ${\bf x}_{I}$ are decodable information and residual
interference Gaussian vectors, respectively. Their average norms are denoted
by $P_{S}$ and $P_{I}$, respectively, and the total transmit power
$P=P_{I}+P_{S}$. ${\bf n}$ is an i.i.d. Gaussian complex vector with unit
variance per component. The mutual information between ${\bf x}_{S}$ and ${\bf
y}$ is given by
$\displaystyle I({\bf y};{\bf x}_{S})$ $\displaystyle=I({\bf y};{\bf
x}_{S},{\bf x}_{I})-I({\bf y};{\bf x}_{I}|{\bf x}_{S})$ (56)
$\displaystyle=\log\det\,\left(I+\displaystyle\frac{P_{S}+P_{I}}{M}HH^{{\sf
H}}\right)-\log\det\,\left(I+\displaystyle\frac{P_{I}}{M}\,HH^{{\sf
H}}\right)$ (57)
$\displaystyle=\sum\limits_{k=1}^{J}\,\log\,\left(1+\displaystyle\frac{P_{S}\lambda_{k}}{1+P_{I}\lambda_{k}}\right)$
(58) $\displaystyle\triangleq C(\hbox{\boldmath$\lambda$};P_{S},P_{I})\,.$
(59)
Parameters $\\{\lambda_{k}\\}$ for $k=1\,\dots\,J$, where
$J\triangleq\min(N,M)$, designate the singular values (or eigenvalues) of the
matrix $\frac{1}{M}\,H^{{\sf H}}H$ for $M\leq N$, or $\frac{1}{M}\,HH^{{\sf
H}}$ for $N\leq M$ T99 . Finally, if
$\hbox{\boldmath$\lambda$}\prec^{w}\hbox{\boldmath$\delta$}$, we have
$C(\hbox{\boldmath$\lambda$};P_{S},P_{I})\geq
C(\hbox{\boldmath$\delta$};P_{S},P_{I})\,.$ (60)
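The monotonicity (60) follows from (54) because $-\log\bigl(1+P_{S}x/(1+P_{I}x)\bigr)$ is a continuous decreasing convex function of $x\geq 0$. A small stdlib-only Python sketch (eigenvalues and powers are our own illustrative numbers) makes this concrete:

```python
import math

def layered_capacity(lams, P_S, P_I):
    # C(lambda; P_S, P_I) from (58): rate of the decodable layers when
    # the residual layers of total power P_I act as interference.
    return sum(math.log(1.0 + P_S * lam / (1.0 + P_I * lam)) for lam in lams)

# Here lam \prec^w delta in the sense of (53) (3 >= 1 and 6.5 >= 5),
# even though lam does not dominate delta elementwise (3.5 < 4).
lam, delta = [3.0, 3.5], [1.0, 4.0]
# By (60), the weakly supermajorizing channel supports at least the same rate:
print(layered_capacity(lam, 2.0, 1.0) >= layered_capacity(delta, 2.0, 1.0))  # True
```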
#### 2.4.3 The MIMO Broadcast Approach Derivation
We discuss the MIMO channel broadcast approach via supermajorization layering
for the simple case of $M=N=2$. The signal ${\bf x}$ is composed of a layered
double indexed data stream with indices denoted by $u$ and $v$. We refer to
layer ordering by columns bottom-up, where $u$ and $v$ are described as a pair
of indices taking integer values within the prescribed region. This is only
for demonstration purposes, as indices $u$ and $v$ are continuous singular
values of $\frac{1}{2}HH^{\sf H}$. Say $u$ and $v$ are associated with the
minimal eigenvalue $\lambda_{2}$ and the sum of eigenvalues
$\lambda_{2}+\lambda_{1}$, respectively. Evidently, $u\geq 0,\,v\geq 2u$. Say
that $\lambda_{2},\,\lambda_{1}$ take on the set of integer values
$\\{0,1,2,3,4\\}$, then the layered system is described by $(u,v)$ in the
order: $(0,0),\,(0,1),\,(0,2),\,(0,3),\,(0,4),\,(1,2),\,(1,3)$,
$(1,4),\,(2,4)$. The actual ordering of the layers is in fact immaterial: as
will be shown, decoding is not done successively as in the SISO case
Shitz97broadcast , but rather according to what is decodable adhering to
partial ordering.
We envisage all possible realizations of $H$ and order them by
$u=\lambda_{2}$, $v=\lambda_{2}+\lambda_{1}$ where $\lambda_{2}$ and
$\lambda_{1}$ are, respectively, the minimal and maximal eigenvalues of
$\frac{1}{2}\,HH^{{\sf H}}$ (a $2\times 2$ matrix in our case).
Supermajorization ordering dictates that all streams decodable for realization
$H$ will be decodable for realization $H^{\prime}$ as long as
$\lambda_{2}^{{}^{\prime}}\geq\lambda_{2},\,~{}~{}\lambda_{2}^{{}^{\prime}}+\lambda_{1}^{{}^{\prime}}>\lambda_{2}+\lambda_{1}\,.$
(61)
Thus, we visualize all possible realizations of $H$ as channels referring to
different users in a broadcast setting, and we investigate the associated
rates of the users, which we have ranked as in Section 2.4.1, via a degraded
ordering. It is evident that the current approach specifies an achievable rate
region, but by no means is it claimed to be optimal. In fact, it even has some
inherent limitations.
Let $u=\lambda_{2}$ and $v=\lambda_{1}$ be the eigenvalues of
$\frac{1}{2}HH^{\sf H}$ for some channel realization such that $v\geq u\geq
0$. Let $\rho(u,v)\,\,\textnormal{d}u\,\textnormal{d}v$ be the power
associated with the information stream indexed by $(u,v)$ where $v\geq u$, and
featuring the incremental rate $\,\textnormal{d}^{2}R(u,v)$. Again, for a
given $u$ and $v$, all rates associated with the indices $(a,b)\,,\;a\leq u$,
$b\leq v$ can be decoded, as $(\lambda_{2},\,\lambda_{1})$ is supermajorized
by $(\lambda_{2}=a,\,\lambda_{1}=b)$. A natural optimization problem, in
parallel to that posed and solved for the single dimensional case, is to
optimize the power density $\rho(u,v)$, or the related interference pattern
$I(u,v)$ maximizing the average rate, under the power constraint $I(0,0)=P$.
Let $I(u,v)$ designate the residual interference at $(u,v)$. Hence,
$I(u,v)=P-\int\limits_{0}^{u}\,\textnormal{d}a\int\limits_{a}^{v}\,\,\textnormal{d}b\,\rho(a,b)\
.$ (62)
The associated incremental rate $\,\textnormal{d}^{2}R(u,v)$, based on (3) and
(56), is then given by
$\displaystyle\,\textnormal{d}^{2}R(u,v)$
$\displaystyle=\log\,\left(1+\frac{u\rho(u,v)\,\,\textnormal{d}u\,\textnormal{d}v}{1+uI(u,v)}\right)+\log\,\left(1+\frac{v\rho(u,v)\,\,\textnormal{d}u\,\textnormal{d}v}{1+vI(u,v)}\right)$
(63)
$\displaystyle=\,\frac{u\rho(u,v)\,\,\textnormal{d}u\,\textnormal{d}v}{1+uI(u,v)}+\frac{v\rho(u,v)\,\,\textnormal{d}u\,\textnormal{d}v}{1+vI(u,v)}\
.$ (64)
The power density is the second order derivative of the residual interference
function (62), i.e.,
$\rho(u,v)=-\frac{\partial^{2}}{\partial u\partial v}I(u,v)\triangleq I_{uv}\
,$ (65)
and the incremental rate may be expressed as
$\,\textnormal{d}^{2}R(u,v,I,I_{uv})=-\displaystyle\frac{uI_{uv}(u,v)\,\textnormal{d}u\,\textnormal{d}v}{1+uI(u,v)}-\displaystyle\frac{vI_{uv}(u,v)\,\textnormal{d}u\,\textnormal{d}v}{1+vI(u,v)}\
.$ (66)
The accumulated reliable rate decoded at $(u,v)$ is
$R(u,v)=\int\limits_{0}^{u}\int\limits_{a}^{v}\,\,\textnormal{d}^{2}R(a,b)\ .$
(67)
The expected rate, averaged over various channel realizations, is then given
by
$R_{\rm
ave}=\int\limits_{0}^{\infty}\,\int\limits_{0}^{\infty}\,f(u,v)\,R(u,v)\,\textnormal{d}u\,\textnormal{d}v\
,$ (68)
where $f(u,v)$ designates the joint PDF of the ordered eigenvalues of
$\frac{1}{2}HH^{\sf H}$, random variables $u$ and $v$. For a Gaussian $H$ with
i.i.d. components, the joint density function of $\lambda_{2},\,\lambda_{1}$
is given by T99
$f_{\lambda_{2},\lambda_{1}}(u,v)=16\,e^{-2v-2u}(v-u)^{2},v\geq u\geq 0\ .$
(69)
The optimal expected rate is a product of an optimal selection of the power
distribution $\rho(u,v)$. Specifying the power distribution uniquely specifies
the residual interference function $I(u,v)$ (62) and (65). Hence, optimizing
$R_{\rm ave}$ can instead be carried out with respect to the $I(u,v)$, i.e.,
$\displaystyle R_{\rm
ave}^{\max}=\max\limits_{I(u,v)}\int\limits_{0}^{\infty}\,\textnormal{d}a\int\limits_{0}^{\infty}\,\textnormal{d}bf(a,b)\int\limits_{0}^{a}\,\textnormal{d}u\int\limits_{u}^{b}\,\textnormal{d}vR_{F}(u,v,I,I_{uv})\
,$ (70)
where $f(a,b)$ is defined in (69), and we have set
$R_{F}(u,v,I,I_{uv})\triangleq\frac{\,\textnormal{d}^{2}R(u,v,I,I_{uv})}{\,\textnormal{d}u\,\textnormal{d}v}$
from (66), which depends on the interference function $I(u,v)$ and the power
density $I_{uv}(u,v)$ from (62) and (65), respectively. Maximizing $R_{\rm
ave}$ with respect to the functional $I(u,v)$ is a variational problem
(ShitzSteiner03, Appendix A). Consequently, the optimization problem may be
stated in the form of a partial differential equation (PDE),
$\displaystyle S_{I}+\frac{\partial^{2}}{\partial u\partial v}S_{I_{uv}}=0\ ,$ (71)
where
$\displaystyle S(a,b,I,I_{ab})\triangleq\left(1+F(a,b)-F(a)-F(b)\right)\cdot
R_{F}(a,b,I,I_{ab})\ ,$ (72)
and $S_{I}$ is the partial derivative with respect to the function $I(u,v)$,
$S_{I_{uv}}$ is the partial derivative with respect to the function $I_{uv}$,
and $I_{uv}$ is the second-order partial derivative of $I(u,v)$ with respect
to $u$ and $v$. The necessary condition for the extremum is given in
(ShitzSteiner03, Appendix A) in terms of a non-linear second-order PDE and
does not appear to have a straightforward analytical solution. Therefore, we
demonstrate a single-dimension approximation to the optimal solution. This
approximation approach is called the 1-D approximation, and it is developed
for the $2\times 2$ channel, i.e., two transmit and two receive antennas. It
suggests breaking the mutual dependency of the optimal power distribution
$\rho(a,b)$ by requiring $\rho(a,b)=\rho(a)\rho(b)$. Such a representation
bears two independent solutions, obtained from solving the optimal SISO
broadcast strategy. Another sub-optimal solution could be obtained based on a
finite-level code layering, as suggested in Takesh01 for the SISO scheme.
Accordingly, a single layer (outage) coding with and without employing
majorization ranking at the receiver is suggested by ShitzSteiner03 . A two-
layer coded scheme for the $2\times 2$ channel is also studied and compared
with the outage approach in ShitzSteiner03 . Another sub-optimal approach to
the MIMO channel involves modeling the MIMO channel as a multiple-access
channel (MAC), where each antenna transmits an independent stream
ShitzSteiner03 . In a MAC approach for the MIMO channel, instead of performing
joint encoding for all transmit antennas, each antenna has an independent
encoder. Thus the receiver views a MAC. When each encoder performs layered
coding, we essentially get a MAC-broadcast strategy. This approach was first
presented in SH00 for the multiple-access channel, employing the broadcast
approach at the receiver. The advantage of this approach is that each
transmitter views an equivalent degraded broadcast channel, and the results of
the SISO broadcast strategy may be directly used.
#### 2.4.4 Degraded Message Sets
Next, we briefly outline the formulation of the general MIMO broadcasting with
degraded message sets. The key step for addressing the continuous broadcast
approach for MIMO channels with degraded message sets involves decoupling the
layering index and the channel state. In many previous studies on the
continuous broadcast approach (e.g., ShitzSteiner03 ; Tian08 ;
SteinerShamai2007 ) the layering index is associated with the channel fading
gain. However, for the MIMO case with degraded message set, it is proposed
that the continuous layering indices are associated with only the power
allocation and layer rates.
Consider the MIMO channel model in (50). The source transmits layered messages
with a power density distribution function $\rho(s)$, where $s\in[0,\infty)$.
The first transmitted message is associated with $s=0$, and can be considered
as a common message for all receivers. The next layer indexed by
$\,\textnormal{d}s$, cannot be decoded by the first user, but it is a common
message for all other users. The capacity of the channel in (50) for a given
channel state is the mutual information given by
$\displaystyle I({\bf y};{\bf x})=\log\det\left(I+\frac{P}{M}HH^{\sf
H}\right)\ ,$ (73)
which can also be expressed using the eigenvalues of $\frac{1}{M}HH^{\sf H}$
T99 ,
$\displaystyle I({\bf y};{\bf
x})=\sum\limits_{k=1}^{K}\log\left(1+P\lambda_{k}\right)\ ,$ (74)
where $K=\min(M,N)$ is the degree of freedom of the MIMO channel, and
$\\{\lambda_{k}\\}_{k=1}^{K}$ are the eigenvalues of $\frac{1}{M}HH^{\sf H}$.
The singular value decomposition (SVD) of $\frac{1}{M}HH^{\sf H}$ is $U\Lambda
V^{\sf H}$, where $U$ and $V$ are unitary matrices and $\Lambda$ is a $K\times K$
diagonal matrix of the singular values of $\frac{1}{M}HH^{\sf H}$. The equivalent
receive signal of (50) multiplied by $H$ is ${\bf y}^{\prime}=U\Lambda V^{\sf
H}{\bf x}+{\bf n}^{\prime}$, and multiplying the received signal by $U^{\sf
H}$ creates a parallel channel $U^{\sf H}{\bf y}^{\prime}=\Lambda{\bf
x}^{\prime}+{\bf n}^{\prime\prime}$, where ${\bf x}^{\prime}=V{\bf x}$. This
makes the channel of (50) an effective parallel channel when ${\bf
x}^{\prime}$ is transmitted. However, $V$ is known at the receiver, and
therefore the transmitter does not have to perform any precoding, and layering
can be performed with respect to singular values distribution of
$\frac{1}{M}HH^{\sf H}$. The fractional achievable rate for a power allocation
$\rho(s)\,\textnormal{d}s$, and under successive decoding, is given by
$\displaystyle\sum\limits_{k=1}^{K}\log\left(1+\frac{\lambda_{k}\rho(s)\,\textnormal{d}s}{1+\lambda_{k}I(s)}\right)=\sum\limits_{k=1}^{K}\frac{\lambda_{k}\rho(s)\,\textnormal{d}s}{1+\lambda_{k}I(s)}\
,$ (75)
where $I(s)$ is the residual layering power. $I(s)$ serves as interference for
decoding layer $s$. The relationship between power density distribution and
the residual interference is
$\rho(s)=-\frac{\,\textnormal{d}I(s)}{\,\textnormal{d}s}$. It is achievable
for the set of eigenvalues $\\{\lambda_{k}\\}_{k=1}^{K}$ such that
$\displaystyle\,\textnormal{d}R(s)\leq\sum\limits_{k=1}^{K}\frac{\lambda_{k}\rho(s)\,\textnormal{d}s}{1+\lambda_{k}I(s)}\triangleq\,\textnormal{d}I_{K}(\lambda_{1},...,\lambda_{K},s)\
.$ (76)
Feasibility of successive decoding here results from the fact that the
function $\,\textnormal{d}I_{K}(\lambda_{1},...,\lambda_{K},s)$ is an
increasing function of $\lambda_{k}$, $\forall~{}k\in\\{1,\dots,K\\}$. Define a
fractional rate allocation function $r(s)$, such that
$r(s)\rho(s)=\,\textnormal{d}R(s)$. The cumulative rate achievable for a layer
index $s$ is simply
$\displaystyle R(s)=\int\limits_{0}^{s}r(u)\rho(u)\,\textnormal{d}u\ .$ (77)
The probability of achieving $R(s)$ is given by
$\displaystyle
F^{c}(s)={\mathbb{P}}\left(r(s)\leq\sum\limits_{k=1}^{K}\frac{\lambda_{k}}{1+\lambda_{k}I(s)}\right)\
,$ (78)
where $F^{c}(s)$ is the complementary CDF of the layering index $s$, i.e.,
$F^{c}(s)=1-F(s)$. The expected broadcasting rate is then
$\displaystyle R_{\rm
bs}=\int\limits_{0}^{\infty}\,\textnormal{d}s(1-F(s))r(s)\rho(s)=\int\limits_{0}^{\infty}\,\textnormal{d}s{\mathbb{P}}\left(r(s)\leq\sum\limits_{k=1}^{K}\frac{\lambda_{k}}{1+\lambda_{k}I(s)}\right)r(s)\rho(s)\
.$ (79)
We focus now on the case of $K=2$, i.e., $\min(M,N)=2$. In this case the
fractional rate $r(s)$ is decodable if
$\displaystyle
r(s)\leq\frac{\lambda_{1}}{1+\lambda_{1}I(s)}+\frac{\lambda_{2}}{1+\lambda_{2}I(s)}\
.$ (80)
In an alternative formulation, for a given $\lambda_{1}$, the values of
$\lambda_{2}$ for which $r(s)$ can be reliably decoded are given by
$\displaystyle\lambda_{2}\geq\frac{r(s)+r(s)\lambda_{1}I(s)-\lambda_{1}}{1+(2\lambda_{1}-r(s))I(s)-r(s)\lambda_{1}I^{2}(s)}\triangleq
G(\lambda_{1},s,I,r)\ ,$ (81)
where the inequality holds only for $G(\lambda_{1},s,I,r)\geq 0$. An
alternative representation of the decoding probability of layer $s$ is thus
$\displaystyle F^{c}(s)$ $\displaystyle={\mathbb{P}}\left(\lambda_{2}\geq
G(\lambda_{1},s,I,r)\right)$ (82)
$\displaystyle=\int\limits_{0}^{\infty}\,\textnormal{d}u\int\limits_{G(u,s,I,r)}^{\infty}\,\textnormal{d}vf_{\lambda_{1},\lambda_{2}}(u,v)\cdot\textbf{1}\left[G(u,s,I,r)\geq
0\right]$ (83)
$\displaystyle=\int\limits_{0}^{\infty}\,\textnormal{d}u\left(f_{\lambda_{1}}(u)-Q_{\lambda_{1},\lambda_{2}}(u,G(u,s,I,r))\right)\cdot\textbf{1}\left[G(u,s,I,r)\geq
0\right]\ ,$ (84)
where $\textbf{1}(x)$ is the indicator function, and
$\displaystyle
f_{\lambda_{1},\lambda_{2}}(u,v)=\frac{\partial^{2}F_{\lambda_{1},\lambda_{2}}(u,v)}{\partial
u\partial v}\ ,$ (85)
is the joint PDF of $(\lambda_{1},\lambda_{2})$, and
$\displaystyle Q_{\lambda_{1},\lambda_{2}}(u,v)\triangleq\frac{\partial
F_{\lambda_{1},\lambda_{2}}(u,v)}{\partial u}\ .$ (86)
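The threshold (81) can be verified algebraically: it is obtained by solving (80) with equality for $\lambda_{2}$. The following stdlib-only Python sketch checks this (the numeric values of $\lambda_{1}$, $r=r(s)$ and $I=I(s)$ at some layer $s$ are arbitrary illustrations):

```python
def G(lam1, r, I):
    # Threshold (81) on lambda_2 above which the fractional rate r = r(s)
    # is decodable, for a given lambda_1 and residual interference I = I(s).
    num = r + r * lam1 * I - lam1
    den = 1.0 + (2.0 * lam1 - r) * I - r * lam1 * I ** 2
    return num / den

# At lambda_2 = G(...), condition (80) should hold with equality:
lam1, I, r = 0.8, 0.5, 1.2
lam2 = G(lam1, r, I)
rhs = lam1 / (1.0 + lam1 * I) + lam2 / (1.0 + lam2 * I)
print(abs(rhs - r) < 1e-12)  # True
```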
The expected rate for a general layering function $r(s)$ and a layering power
allocation function $I(s)$ is given by
$\displaystyle R_{\rm
bs}=\int\limits_{0}^{\infty}\,\textnormal{d}sr(s)\rho(s)\cdot\int\limits_{0}^{\infty}\,\textnormal{d}u\left[f_{\lambda_{1}}(u)-Q_{\lambda_{1},\lambda_{2}}(u,G(u,s,I,r))\right]\cdot\textbf{1}\left[G(u,s,I,r)\geq
0\right]\ .$ (87)
Clearly, the optimization problem for expected broadcasting rate maximization
is given by
$\displaystyle R_{\rm bs,opt}=\max\limits_{r(s)\geq
0,~{}I(s),~{}\textrm{s.t.}~{}I(0)=P,~{}\rho(s)\geq
0}\int\limits_{0}^{\infty}\,\textnormal{d}s\;J(s,I,I^{\prime},r)\ ,$ (88)
where the integrand functional $J(s,I,I^{\prime},r)$ is given by
$\displaystyle
J(s,I,I^{\prime},r)=r(s)\rho(s)\int\limits_{0}^{\infty}\,\textnormal{d}u\left[f_{\lambda_{1}}(u)-Q_{\lambda_{1},\lambda_{2}}(u,G(u,s,I,r))\right]\cdot\textbf{1}\left[G(u,s,I,r)\geq
0\right]\ .$ (89)
The necessary conditions for extremum are given by the Euler equations GF91
$\displaystyle J_{r}=0\ ,$ (90) $\displaystyle J_{I}-\frac{\partial}{\partial
s}J_{I^{\prime}}=0\ ,$ (91)
where $J_{r}$ is the partial derivative of $J$ with respect to $r(s)$. The
extremum condition for $r(s)$ in (90) can be expressed as follows:
$\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}u\left\\{\left[f_{\lambda_{1}}(u)-Q_{\lambda_{1},\lambda_{2}}(u,G)\right]\cdot\left(\frac{1}{r(s)}\textbf{1}\left[G\geq
0\right]+\delta(G)\right)-f_{\lambda_{1},\lambda_{2}}(u,G)\frac{\partial}{\partial
s}G\cdot\textbf{1}\left[G\geq 0\right]\right\\}=0\ ,$ (92)
where for brevity, $G(u,s,I,r)$ is replaced by $G$, and $\delta(x)$ is the
Dirac delta function. The extremum conditions as stated in (90) and (91) do
not lend themselves to closed-form analytical solutions even for $K=2$,
and characterizing them remains an open problem for future research.
### 2.5 On Queuing and Multilayer Coding
Classical information theory generally assumes an infinitely long queue of
data ready for transmission, which is motivated by maximizing communication
throughput (Shannon capacity). In network theory, on the other hand, the input
data is usually a random process that controls writing to a buffer (serving as
a queue), and the readout from this buffer is another random process. In these
settings, the design goal of transmission concentrates on minimizing the queue
delay for the input data. However, designing the data queue and transmission
algorithm cannot be decoupled in the presence of stringent delay constraints
on input data transmission. This is because the objective is no longer only
maximizing the throughput. This conceptual difference between network theory
and information theory can be overcome by posing a common optimization problem
and jointly minimizing the delay of a random input process under a power
(rate) control constraint. This becomes a cross-layer optimization problem
involving the joint optimization of two layers of the seven-layer open systems
interconnection (OSI) model. Other fundamental gaps between network theory and
information theory are covered in detail in TG95 ; HAJEK98 ; GALLAGER85 ;
YooLiuShamai2012 .
Queuing and channel coding for a block fading channel with transmit CSI only,
for a single user, is discussed in IDO01 . In this section, we first consider
optimizing rate and power allocation for a single layer code transmission. For
this scheme, the outage capacity OZ98 maximizes the achievable throughput.
Rate and power are optimized jointly to minimize the overall delay. The delay
is measured from the arrival of a packet at the queue until successfully
decoded, including, if needed, retransmission due to outage events.
Figure 3: A schematic communication system with a queue buffer followed by a
wireless transmitter.
The study in AsSh10 considers a cross-layer system optimization approach for
a single-server queue followed by a multi-layer wireless channel encoder, as
depicted in Fig. 3. The main focus is on minimizing the average delay of a
packet measured from entering the queue until successful service completion.
#### 2.5.1 Queue Model – Zero-padding Queue
Next we consider the zero-padding queue model described in AsSh10 . It is
assumed that the transmission is performed every time the queue is not empty.
If the available queue data is less than a packet size, a frame can be
generated with zero-padding to have a valid frame for the channel encoder. We
define the queuing time as the time from arrival to completion of service, and
the waiting time as the time measured from arrival until initially being
served. The queue’s waiting time analysis can be done at embedded points: the
beginning of every time slot. The packet arrival process at
each slot is deterministic, with rate denoted by $\lambda$ (bits/channel use).
The queue waiting time can be measured directly based on the queue size, as
stated by Little’s theorem WOLFF89 , by normalizing the queue size by the
inverse of the input rate $\lambda$. Notice that Little’s theorem relates the
average waiting time and the average queue size, not their instantaneous
counterparts. The following equation defines the queue size:
$\displaystyle
Q_{n+1}=\left\\{\begin{array}[]{ll}N\lambda_{n+1}+Q_{n}-NR_{n}&N\lambda_{n+1}+Q_{n}-NR_{n}\geq
0\\\ &\\\ 0&{\rm otherwise}\end{array}\ ,\right.$ (96)
where $N$ is the number of channel uses between slots, which is also the block
length, and $\lambda_{n+1}$ is a deterministic queue input rate $\lambda$. It
is noteworthy that in a single-layer coding, $R_{n}$ is a fixed $R$ with
probability $p$, and it is 0 with probability $1-p$. This waiting time
equation is also analyzed in (IDO02, chapter 5) for a single-layer coding
approach and a deterministic arrival process, where tight bounds on the
expected waiting time are obtained. For simplicity, by normalization of the
queue size by the block-length $N$, the Lindley equation is obtained
kleirock_v2 :
$\displaystyle\widetilde{q}_{n+1}=\left\\{\begin{array}[]{ll}\widetilde{q}_{n}+\lambda_{n+1}-R_{n}&~{}\widetilde{q}_{n}+\lambda_{n+1}-R_{n}\geq
0\\\ &\\\ 0&~{}\widetilde{q}_{n}+\lambda_{n+1}-R_{n}<0\end{array}\ ,\right.$
(100)
where $\widetilde{q}_{n}$ is now the queue size in units of blocks of data
corresponding to $N$ arrivals to the queue. In an outage approach, we have
$R_{n}=R$ with probability $p$, and $R_{n}=0$ with a complementary probability
$1-p$, which is also the outage probability. For the rest of the analysis, the
queue equations will be normalized following (100). For completeness of the
definitions, we specify the queuing time equation, which captures the overall
system delay for the zero-padding queue. The overall delay must always account
for the additional service time beyond the queue’s waiting time. The
normalized queue size is the waiting time equivalent, i.e.,
$\displaystyle
q_{n+1}=\left\\{\begin{array}[]{ll}q_{n}+\frac{\lambda_{n+1}}{\lambda}-\frac{R_{n}}{\lambda}\
,&~{}q_{n}-\frac{R_{n}}{\lambda}\geq 0\\\ &\\\ \frac{\lambda_{n+1}}{\lambda}\
,&otherwise\end{array}\ ,\right.$ (104)
where $q_{n}$ is a normalized queue size at a renewal slot $n$. In a single-
layer coding approach, it is possible to analyze the queue delay by adopting
the standard M/G/1 queue model. The input random process of an M/G/1 model
follows a Poisson process, and its service distribution is another general
random process. In an outage approach, a geometrically distributed random
variable characterizes the time between services. To use the M/G/1 model,
an important assumption on the system model is made: input arrives in blocks
that have the same length as the coded transmission blocks. That is, the queue
equation is normalized to the data block size of its corresponding
transmission. The number of arrivals is measured in block units, and the input
process has a rate of $\lambda_{\rm norm}$.
Requiring the arrival blocks to equal the transmitted blocks in size is a
limiting constraint, since a change of transmission rate implies a change in
input block size. Therefore, the M/G/1 queue model is not adopted in AsSh10 ,
and in the following, we use the zero-padding queue model as described
earlier.
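The normalized Lindley recursion (100) is straightforward to simulate. The sketch below, a minimal illustration rather than the analysis of AsSh10, estimates the average normalized queue size for single-layer coding under outage, where $R_{n}=R$ with probability $p$ and $R_{n}=0$ otherwise; the rate, arrival, and probability values are hypothetical.

```python
import random

def avg_queue_single_layer(lam, R, p, n_slots=200_000, seed=1):
    """Simulate the normalized Lindley recursion (100):
    q_{n+1} = max(q_n + lam - R_n, 0), where R_n = R with probability p
    (successful decoding) and R_n = 0 with probability 1 - p (outage)."""
    rng = random.Random(seed)
    q, acc = 0.0, 0.0
    for _ in range(n_slots):
        r = R if rng.random() < p else 0.0
        q = max(q + lam - r, 0.0)
        acc += q
    return acc / n_slots

# Stable regime: the average service rate p*R = 0.8 exceeds the arrival rate.
q_avg = avg_queue_single_layer(lam=0.4, R=1.0, p=0.8)
```

When $\lambda<pR$ the recursion is stable and the time-averaged queue size converges; when $\lambda\geq pR$ the queue grows without bound.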
#### 2.5.2 Delay Bounds for a Finite Level Code Layering
We consider here $K$-layer coding and describe the Lindley equation
WOLFF89 . The queue update equation is given by
$\displaystyle
w_{n+1}=\left\\{\begin{array}[]{ll}w_{n}+x_{n}&~{}w_{n}+x_{n}\geq 0\\\
0&~{}w_{n}+x_{n}<0\end{array}\ ,\right.$ (107)
where $x_{n}$ is the update random variable, which depends on the number of
code layers. Its value represents the difference between the queue input
$\lambda$ and the number of layers successfully decoded, i.e.,
$\displaystyle x_{n}\triangleq\lambda-\sum\limits_{i=1}^{K}\nu_{i,n}R_{i}\ .$
(108)
Random variables $\\{\nu_{i,n}\\}_{i=1}^{K}$ are associated with the outage
probability as a function of the layer index. The corresponding fading power
thresholds are denoted by $\\{s_{{\rm th},i}\\}_{i=1}^{K}$. Random variables
$\\{\nu_{i,n}\\}_{i=1}^{K}$ are related to the fading thresholds as follows
$\displaystyle\nu_{i,n}=\left\\{\begin{array}[]{ll}1&s_{{\rm th},i}\leq
s_{n}\leq s_{{\rm th},i+1}\\\ &\\\ 0&\textrm{otherwise}\end{array}\right.\ ,$
(112)
where $s_{n}$ is the fading power realization at the $n^{\rm th}$ time-slot,
and $s_{{\rm th},K+1}=\infty$. Every random variable $\nu_{i,n}$ has a
probability of being 1, denoted by $p_{K-i+1}$. Note that the outage probability
is
$\displaystyle\overline{p}=1-\sum\limits_{i=1}^{K}p_{i}\ ,$ (113)
in which $\overline{p}$ represents the probability that all layers cannot be
decoded. Computing the CDF of the queue size at these embedded points requires
the distribution of the update variable at every time instant. In this
setting, the probability density $\,\textnormal{d}F_{X}(x)$ of $X$ in (108) is given by
$\displaystyle\,\textnormal{d}F_{X}(x)=\sum\limits_{i=1}^{K}p_{i}\delta\left(x-(\lambda-\sum_{j=1}^{K-i+1}R_{j})\right)+\overline{p}\delta(x-\lambda)\
,$ (114)
where $p_{i}={\mathbb{P}}\\{s_{{\rm th},i}\leq s_{n}\leq s_{{\rm th},i+1}\\}$
for $i\in\\{1,\dots,K\\}$ and $s_{{\rm th},K+1}=\infty$. The next theorem
discussed in detail in (AsSh10, , Appendix B) establishes upper and lower
bounds on ${\mathbb{E}}[w_{K}]$. [AsSh10 ] For $K$-layer coding, the
expected queue size is upper and lower bounded by
$\displaystyle{\mathbb{E}}[w_{K}]\geq\frac{(\Re_{K}-\lambda)(\sum\limits_{i=1}^{K}p_{i}\Re_{K-i+1}-\lambda)-(\Re_{K}-\lambda)^{2}+\sum\limits_{i=1}^{K}p_{i}(\Re_{K}-\Re_{K-i+1})^{2}+\overline{p}\Re_{K}^{2}}{2(\sum\limits_{i=1}^{K}p_{i}\Re_{K-i+1}-\lambda)}\
,$ (115)
and
$\displaystyle{\mathbb{E}}[w_{K}]\leq\frac{2(\Re_{K}-\lambda)(\sum\limits_{i=1}^{K}p_{i}\Re_{K-i+1}-\lambda)-(\Re_{K}-\lambda)^{2}+\sum\limits_{i=1}^{K}p_{i}(\Re_{K}-\Re_{K-i+1})^{2}+\overline{p}\Re_{K}^{2}}{2(\sum\limits_{i=1}^{K}p_{i}\Re_{K-i+1}-\lambda)}\
,$ (116)
where $\Re_{V}\triangleq\sum_{j=1}^{V}R_{j}$. The variance of the achievable
rate random variable $\sigma^{2}_{R_{\rm KL}}$ is given by
$\displaystyle\begin{array}[]{lll}\sigma^{2}_{R_{\rm
KL}}&\triangleq&\sum\limits_{i=1}^{K}p_{i}\Re_{K-i+1}^{2}-(R_{\rm KL,av})^{2}\
,\end{array}$ (118)
where
$\displaystyle R_{\rm KL,av}\triangleq\sum\limits_{i=1}^{K}p_{i}\Re_{K-i+1}\
.$ (119)
Queue expected size and expected delay for $K$-layer coding are upper bounded
by
$\displaystyle{\mathbb{E}}[w_{\rm KL}]\leq\frac{\sigma^{2}_{R_{KL}}}{2(R_{\rm
KL,av}-\lambda)}-(1-\frac{\lambda}{R_{\rm
KL,av}})\frac{\sigma^{2}_{R_{KL}}}{2R_{\rm KL,av}}\ ,$ (120)
and the expected delay is upper bounded by
$\displaystyle{\mathbb{E}}[w_{\lambda,\rm
KL}]\leq\frac{\sigma^{2}_{R_{KL}}}{2\lambda(R_{\rm
KL,av}-\lambda)}-(1-\frac{\lambda}{R_{\rm
KL,av}})\frac{\sigma^{2}_{R_{KL}}}{2R_{\rm KL,av}\lambda}\ ,$ (121)
where $\sigma^{2}_{R_{\rm KL}}$ and $R_{\rm KL,av}$ are given by (118) and
(119) respectively.
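The bounds (115), (116), (118), and (120) can be evaluated directly from the layer rates and decoding probabilities. The sketch below does so for a hypothetical two-layer configuration, chosen only so that the stability condition $R_{\rm KL,av}>\lambda$ holds.

```python
def klayer_queue_bounds(p, R, lam):
    """Evaluate the expected-queue-size bounds for K-layer coding.
    p[i-1] = p_i is the probability of decoding exactly layers 1..K-i+1,
    and R[j-1] = R_j is the rate of layer j."""
    K = len(R)
    Re = [0.0] * (K + 1)                 # Re[V] = R_1 + ... + R_V
    for j in range(1, K + 1):
        Re[j] = Re[j - 1] + R[j - 1]
    pbar = 1.0 - sum(p)                  # outage probability (113)
    R_av = sum(p[i] * Re[K - i] for i in range(K))    # sum_i p_i Re_{K-i+1}
    disp = sum(p[i] * (Re[K] - Re[K - i]) ** 2 for i in range(K)) \
        + pbar * Re[K] ** 2              # dispersion term in (115)-(116)
    d = 2.0 * (R_av - lam)
    lower = ((Re[K] - lam) * (R_av - lam) - (Re[K] - lam) ** 2 + disp) / d
    upper = (2.0 * (Re[K] - lam) * (R_av - lam)
             - (Re[K] - lam) ** 2 + disp) / d
    var = sum(p[i] * Re[K - i] ** 2 for i in range(K)) - R_av ** 2   # (118)
    upper120 = var / d - (1.0 - lam / R_av) * var / (2.0 * R_av)     # (120)
    return lower, upper, upper120

lo, up, up120 = klayer_queue_bounds(p=[0.3, 0.5], R=[0.6, 0.6], lam=0.4)
```

Note that the upper bound (116) always exceeds the lower bound (115) by $(\Re_{K}-\lambda)/2$, which is nonnegative whenever the queue is stable.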
#### 2.5.3 Delay bounds for Continuum Broadcasting
A continuous broadcasting approach is considered in this section. In this
approach, the transmitter also sends multi-layer coded data. Unlike $K$-layer
coding, the layering is a continuous function of the channel fading gain
parameter. The number of layers is not limited, and an incremental rate with a
differential power allocation is associated with every layer. The differential
per layer rate is
$\,\textnormal{d}R(s)=\frac{s\rho(s)\,\textnormal{d}s}{1+sI(s)}$ and
$\rho(s)\,\textnormal{d}s$ is the transmit power of a layer $s$. This also
determines the transmission power distribution per layer V90 . The residual
interference for a fading power $s$ is
$I(s)=\int_{s}^{\infty}\rho(u)\,\textnormal{d}u$ (4). The total achievable
rate for a fading gain realization $s$ is
$R(s)=\int_{0}^{s}\frac{u\rho(u)\,\textnormal{d}u}{1+uI(u)}$ (6). It is
possible to extend the $K$-layer coding bounds shown above to this continuous
broadcast setting. The bounds in (115) and (116) could be used for
broadcasting after performing the following modifications:
1. 1.
The number of layers is unlimited, that is $K\rightarrow\infty$.
2. 2.
Since the layering is continuous, every layer $i$ is associated with a fading
gain parameter $s$. Every rate $R_{i}$ is associated with a differential rate
$\,\textnormal{d}R(s)$ specified in (3).
3. 3.
The cumulative rate $\Re_{K}$ should be replaced by
$\displaystyle R_{T}=\int\limits_{0}^{\infty}\,\textnormal{d}R(s)\ .$ (122)
4. 4.
The sum $\sum\limits_{i=1}^{K}p_{i}\Re_{K-i+1}$ is actually the average rate,
and it becomes $R_{\rm bs}$ (7) in the continuum case.
5. 5.
Finally, the finite-level coding expression
$\sum\limits_{i=1}^{K}p_{i}(\Re_{K}-\Re_{K-i+1})^{2}+\overline{p}\Re_{K}^{2}$
becomes
$\displaystyle R^{2}_{\rm d,bs}$
$\displaystyle\triangleq\int\limits_{0}^{\infty}\,\textnormal{d}uf(u)\left[R_{T}-\int\limits_{0}^{u}\,\textnormal{d}R(s)\right]^{2}$
(123)
$\displaystyle=\int\limits_{0}^{\infty}\,\textnormal{d}uf(u)\left[\int\limits_{u}^{\infty}\,\textnormal{d}R(s)\right]^{2}$
(124)
$\displaystyle=2\int\limits_{0}^{\infty}\,\textnormal{d}uF(u)\,\textnormal{d}R(u)\int\limits_{u}^{\infty}\,\textnormal{d}R(s)\
,$ (125)
in the continuous case, where $\,\textnormal{d}R(u)$ and $R(u)$ are specified
in (3) and (6), respectively.
The queue average size for a continuous code layering is upper and lower
bounded by
$\displaystyle{\mathbb{E}}[w_{\rm
bs}]\geq\frac{R_{T}-\lambda}{2}+\frac{R^{2}_{\rm
d,bs}-(R_{T}-\lambda)^{2}}{2(R_{\rm bs}-\lambda)}\ ,$ (126)
$\displaystyle{\mathbb{E}}[w_{\rm bs}]\leq(R_{T}-\lambda)+\frac{R^{2}_{\rm
d,bs}-(R_{T}-\lambda)^{2}}{2(R_{\rm bs}-\lambda)}\ ,$ (127)
and the average delay is lower and upper bounded by
$\displaystyle{\mathbb{E}}[w_{\lambda,\rm
bs}]\geq\frac{R_{T}-\lambda}{2\lambda}+\frac{R^{2}_{\rm
d,bs}-(R_{T}-\lambda)^{2}}{2\lambda(R_{\rm bs}-\lambda)}\ ,$ (128)
$\displaystyle{\mathbb{E}}[w_{\lambda,\rm
bs}]\leq\frac{R_{T}-\lambda}{\lambda}+\frac{R^{2}_{\rm
d,bs}-(R_{T}-\lambda)^{2}}{2\lambda(R_{\rm bs}-\lambda)}\ ,$ (129)
where $R_{\rm bs}$, $R_{T}$, and $R^{2}_{\rm d,bs}$ are specified in (7),
(122), and (123) respectively. The variance of the achievable rate random
variable $\sigma^{2}_{R_{\rm bs}}$ is given by
$\displaystyle\sigma^{2}_{R_{\rm bs}}$
$\displaystyle\triangleq\int\limits_{0}^{\infty}\,\textnormal{d}uf(u)\left[R(u)\right]^{2}-R_{\rm
bs}^{2}$ (130)
$\displaystyle=\int\limits_{0}^{\infty}\,\textnormal{d}uf(u)\left[\int\limits_{0}^{u}\,\textnormal{d}R(s)\right]^{2}-R_{\rm
bs}^{2}$ (131)
$\displaystyle=2\int\limits_{0}^{\infty}\,\textnormal{d}u(1-F(u))\,\textnormal{d}R(u)\int\limits_{0}^{u}\,\textnormal{d}R(s)-R_{\rm
bs}^{2}$ (132)
$\displaystyle=2\int\limits_{0}^{\infty}\,\textnormal{d}u(1-F(u))\,\textnormal{d}R(u)R(u)-R_{\rm
bs}^{2}\ .$ (133)
The queue average size for a continuous code layering is upper bounded by
$\displaystyle{\mathbb{E}}[w_{\rm bs}]\leq\frac{\sigma^{2}_{R_{\rm
bs}}}{2(R_{\rm bs}-\lambda)}-(1-\frac{\lambda}{R_{\rm
bs}})\frac{\sigma^{2}_{R_{\rm bs}}}{2R_{\rm bs}}\ ,$ (134)
and the average delay is upper bounded by
$\displaystyle{\mathbb{E}}[w_{\lambda,\rm bs}]\leq\frac{\sigma^{2}_{R_{\rm
bs}}}{2\lambda(R_{\rm bs}-\lambda)}-(1-\frac{\lambda}{R_{\rm
bs}})\frac{\sigma^{2}_{R_{\rm bs}}}{2R_{\rm bs}\lambda}\ ,$ (135)
where $R_{\rm bs}$ and $\sigma^{2}_{R_{\rm bs}}$ are given by (7) and (130),
respectively.
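These continuum expressions are simple to evaluate by numerical integration once a power density $\rho(s)$ is fixed. The sketch below assumes Rayleigh fading ($f(u)=e^{-u}$) and a hypothetical exponential profile $\rho(s)=Pe^{-s}$, which satisfies $\int_{0}^{\infty}\rho(u)\,\textnormal{d}u=P$; it is an illustration only, not the optimal allocation.

```python
import math

def continuum_delay_bounds(P, lam, s_max=30.0, n=60_000):
    """Numerically evaluate R_T (122), R_bs, R^2_{d,bs} (123), and the
    average-queue-size bounds (126)-(127) for Rayleigh fading with the
    hypothetical layering power density rho(s) = P * exp(-s)."""
    ds = s_max / n
    rates = []
    R_T = 0.0
    for i in range(n):                    # first pass: dR(s) and R_T
        s = (i + 0.5) * ds
        I = P * math.exp(-s)              # residual interference I(s)
        rho = P * math.exp(-s)
        dR = s * rho * ds / (1.0 + s * I)
        rates.append(dR)
        R_T += dR
    R = R_bs = Rd2 = 0.0
    for i in range(n):                    # second pass: R_bs and R^2_{d,bs}
        s = (i + 0.5) * ds
        f = math.exp(-s)                  # Rayleigh pdf
        R += rates[i]                     # cumulative rate R(s)
        R_bs += f * R * ds
        Rd2 += f * (R_T - R) ** 2 * ds
    lower = (R_T - lam) / 2.0 + (Rd2 - (R_T - lam) ** 2) / (2.0 * (R_bs - lam))
    upper = (R_T - lam) + (Rd2 - (R_T - lam) ** 2) / (2.0 * (R_bs - lam))
    return R_T, R_bs, lower, upper

R_T, R_bs, lo, up = continuum_delay_bounds(P=10.0, lam=0.2)
```

The gap between (127) and (126) is exactly $(R_{T}-\lambda)/2$, so the two bounds are consistent whenever $R_{\rm bs}>\lambda$.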
For minimizing the expected delay in the continuous layering case, it is
required to obtain the optimal $\rho(s)$ (4) which minimizes the average queue
size upper bound. As in multi-layer coding, an analytic solution is not
available and remains an open problem for further research. However, even
numerical optimization is not straightforward here, since the optimization
variable is a continuous function. Moreover, the target functional in the
optimization problem for continuous layering does not have a localization
property GF91 . A functional with the localization property can be written as
an integral of a single target function. Our functional contains a ratio of
integrals and a further product of integrals, which cannot be converted into
an integral over a single target function. Such a functional is referred to as
a nonlocal functional in GF91 . In such cases, it is preferable to look for an
approximate representation of the nonlocal functional, which has the
localization property. Alternatively, approximate target functions with
reduced degrees of freedom may be optimized.
An interesting observation from the numerical results of AsSh10 is that when
considering delay as a performance measure, code layering could give
noticeable performance gains in terms of delay, which are more impressive than
those associated with throughput. This makes layering more attractive when
communicating under stringent delay constraints.
Analytic resource allocation optimization for delay minimization, under the
simple queue model in AsSh10 , remains an open problem for further research.
In general, when layering is adopted at the transmitter, in conjunction with
successive decoding at the receiver, the first layer is decoded earlier than
other layers, and it has the shortest service time. Accounting for a different
service delay per layer, the basic queue size update equation (the Lindley
equation) should be modified accordingly. The analysis of the broadcast
approach with a per layer queue is a subject for further research. The queue
model which was used in AsSh10 is a zero-padding queue. In this model, the
frame size is kept fixed every transmission, and if the queue is nearly empty,
the transmission includes zero-padded bits on top of queue data. Optimizing
the transmission strategy as a function of the queue size, such that no zero-
padding is required, can further increase layering efficiency and minimize the
expected delay. This is a possible direction for further research.
### 2.6 Delay Constraints
There are various aspects in which delay constraints in communications may
impact the system design. Stringent delay constraints might not allow
capturing the channel’s ergodic distribution, and may thus benefit from a
broadcast approach, while relaxed delay constraints may allow transmission of
long codewords that capture the channel ergodicity. When there is a mixture of
delay requirements on data using the same physical transmission resources,
interesting coded transmission schemes can be considered. This is studied in
CohenSteinerShamai12 as discussed in the next subsections and also widely covered
in Nikbakht2019 ; Nikbakht_2020 . Another aspect is decoding multiple
independent blocks, as considered in YEH01 , and studied by its equivalent
channel setting, which is the parallel MIMO channel Kfir:IZS2020 and
discussed in detail in the next subsections.
#### 2.6.1 Mixed Delay Constraints
The work in CohenSteinerShamai12 considers the problem in which
delay-constrained (DC) and non-delay-constrained (NDC) streams are transmitted
over a SISO channel with no CSIT, adhering to the broadcast approach for the
DC stream. The DC stream comprises layers that have to be decoded within a
short period of a single transmission block. The NDC stream comprises layers
that may be encoded over multiple blocks and decoded after the complete
codeword is received, potentially observing the channel ergodicity. Three
overall approaches are suggested in CohenSteinerShamai12 , trying to maximize
the expected sum rate. Their achievable rate regions over DC and NDC are
examined. A DC stream is always decoded in the presence of an NDC stream,
which is treated as interference. However, before decoding an NDC stream, the
decodable DC layers can be removed, allowing NDC decoding at the highest
signal-to-interference-plus-noise ratio (SINR). A closed-form solution of the
sum-rate maximization problem can be derived for the outage and broadcast DC
stream in parallel to a single NDC layer. When NDC transmission is also
composed of multi-layers, the optimization problem of the expected sum-rate
becomes much more complicated.
The joint strategy of accessing both DC and NDC parts on a single channel uses
a two-level block nesting. Every $L$ samples define a block for the DC stream,
while the NDC stream is encoded over $K$ such blocks, consisting of $L\cdot K$
samples. The NDC block is called a super block. $L$ is large enough for
reliable communication for the DC part, but it is much shorter than the
dynamics of the slow fading process. $K$ is large enough to enable the
empirical distribution of the fading coefficient to be similar to the real
one. Two independent streams of information are encoded. The _DC stream_ is
decoded at the completion of each block at the decoder, at a rate dependent
upon the realization of the channel fading coefficient for that block. The
_NDC stream_ is decoded only at the completion of the super block. All of the
following proposed schemes assume superposition coding, equivalent to symbol-
wise additivity of the DC and NDC code letters. Denote by $w^{L}$ the
$L$-length codeword for the DC code for each block, and $z^{KL}$ the
$KL$-length codeword for the NDC code for each super block. Define one super
block as
$\displaystyle y_{k,i}$
$\displaystyle=\sqrt{s_{k}}\cdot(w_{k,i}+z_{k,i})+n_{k,i}\quad,\qquad\mbox{for}\;\;\;i=1,2,\dotsc,L\
,\quad\quad k=1,2,\dotsc,K\ ,$ (136)
where the double sub-index $\\{k,i\\}$ is equivalent to the time index
$(k-1)\cdot L+i$. Note that the slow-fading nature of the channel was used by defining
$s_{k,i}=s_{k}$. This scheme reflects a power constraint of the form
${\mathbb{E}}[\left|w_{k,i}+z_{k,i}\right|^{2}]\leq P$. Define $R_{\rm DC}(s)$
as the _achievable rate for a fading power realization $s$_ per block. The
_total expected DC rate over all fading power realizations_ is given by
$\displaystyle R_{\rm DC}$ $\displaystyle=\int_{0}^{\infty}{f_{S}(u)R_{\rm
DC}(u)\,\textnormal{d}u}\ .$ (137)
Let $R_{\rm NDC}$ designate the _rate of the NDC part_ , which experiences
enough such realizations throughout communication. When relaxing the stringent
delay constraint, coding over sufficiently large blocks achieves _ergodic
capacity_ , denoted by $C_{\rm
erg}={\mathbb{E}}_{S}[{\log\left(1+SP\right)}]$. Clearly, for any coding
scheme $R_{\rm DC}+R_{\rm NDC}\leq C_{\rm erg}$.
#### 2.6.2 Broadcasting with Mixed Delay Constraints
The superposition of DC and NDC is employed by allocating a fixed amount of
power per stream. Define the _DC relative power portion_ as $\beta\in[0,1]$,
that is $\beta\cdot P$ is the power allocated for the DC stream and the rest
$(1-\beta)\cdot P$ for the NDC stream. The DC part uses the broadcast
approach. During decoding of the DC part, the NDC is treated as additional
interference since during the decoding of each DC block the NDC codeword
cannot be completely received, and thus cannot be decoded nor reconstructed to
assist the DC decoding. The NDC decoder is informed of all DC decoded layers
per DC codeword, and it cancels out the decoded part from the corresponding
NDC block, maximizing its SINR for NDC decoding. By designing the two encoders
as described earlier, both the DC and NDC parts effectively communicate over a
flat fading channel with additive Gaussian noise. The effective noise for each
part consists of the white channel noise along with the codewords, from either
part, that have not yet been decoded.
The DC encoder uses superposition of an infinite number of layers, ordered
using channel fading realization $s$ in a manner that forms a degraded
broadcast channel. Per DC message, the transmitted codeword of length $L$ is
given by
$\displaystyle w^{L}(m_{1},m_{2},\dotsc,m_{\infty})$
$\displaystyle=\sum_{j=1}^{\infty}w_{j}^{L}(m_{j})\ .$ (138)
Designate $\rho(s)$ to be the _DC layering power distribution_ , which will be
optimized later on; each layer’s communication scheme targets a Gaussian
channel whose fading gain is treated as known to both sides. The NDC encoder
sends a single message through a block of length $L\cdot K$. By random coding
over a Gaussian channel, the codewords can be generated. A total of $e^{L\cdot
K\cdot R_{\rm NDC}}$ codewords can be used, where the channel rate $R_{\rm
NDC}$ relies on the optimized channel power $\rho(s)$ as well.
The decoders are activated in order. First, the DC decoder works on every
$L$-block and by _successive decoding_ can reveal as many layers as the
channel permits. It is similar to the classic broadcast approach, except all
layers suffer from an undecodable (at this stage) interference. All DC
decoders’ outputs are fed to the NDC decoder, which works after $K$ such
blocks. After removal of the decodable DC codewords of all blocks, the NDC
part is decoded with a minimal residual interference, where the interference
includes only the undecoded DC layers. Calculating the DC rate in the presence
of NDC is a direct extension of ShitzSteiner03 , which is a special case for
$\beta=1$. Define the _DC interference for a fading power $s$_ as $I(s)$,
implying
$\displaystyle I(s)$
$\displaystyle=\int_{s}^{\infty}{\rho(u)\,\textnormal{d}u}\
,\qquad\mbox{and}\qquad\rho(s)=-{\frac{\,\textnormal{d}}{\,\textnormal{d}s}I(s)}\
.$ (139)
For a channel fading realization $s$, the undecodable layers act as additional
noise. The power distribution is restricted to the total DC allocated power
$I(0)=\int_{0}^{\infty}{\rho(u)\,\textnormal{d}u}=\beta P\ ,$
with $0\leq\beta\leq 1$.
[Achievable Expected DC Rate CohenSteinerShamai12 ] Any total expected DC rate
$R_{\rm DC}$, which is averaged over all fading realizations, that satisfies
$\displaystyle R_{\rm DC}$
$\displaystyle\leq\int_{u=0}^{\infty}{(1-F_{S}(u))\frac{u\rho(u)}{1+uI(u)+(1-\beta)Pu}\,\textnormal{d}u}\
,$ (140)
is achievable.
[Achievable Expected NDC Rate CohenSteinerShamai12 ] Any total expected NDC
rate $R_{\rm NDC}$, which is averaged over all fading realizations, that
satisfies
$\displaystyle R_{\rm NDC}$
$\displaystyle\leq\int_{0}^{\infty}{f_{S}(u)\log\left(1+\frac{(1-\beta)Pu}{1+uI(u)}\right)\,\textnormal{d}u}\
,$ (141)
is achievable. It is possible to derive the optimal power allocation for DC
layering that maximizes the sum rate $R_{\rm DC}+R_{\rm NDC}$, as stated in
(140) and (141), respectively. It is a function that depends on $I(s)$
according to (139). Specifically, the optimization problem is
$\displaystyle I^{*}(s)$
$\displaystyle=\operatornamewithlimits{argmax}_{I(s)}\left\\{R_{\rm DC}+R_{\rm
NDC}\right\\}\qquad\mbox{s.t. }\quad I(0)=\beta P\ ,\quad\mbox{and}\quad
I(\infty)=0\ .$ (142)
The outage approach is a simple special case of layering, where a single DC
coded layer is used. In this case, the power distribution $I(s)$ is explicitly
given by
$\displaystyle I(s)$ $\displaystyle=\begin{cases}\beta P&\text{if }0\leq s\leq
s_{\rm th}\\\ 0&\text{if }s>s_{\rm th}\end{cases}\ ,$ (143)
$\displaystyle\rho(s)$ $\displaystyle=\beta P\cdot\delta(s-s_{\rm th})\ ,$
(144)
where $\delta$ is the Dirac function and $s_{\rm th}$ is a parameter set prior
to the communication. Constant $s_{\rm th}$ may be interpreted as the fading
gain threshold for single layer coding. The advantages of this approach are
low implementation complexity and ease of analysis. The disadvantage is its
sub-optimality. The outage approach is designed for a channel with fixed
fading of $s_{\rm th}$. On the one hand, if $s\geq s_{\rm th}$, the message
can be transmitted error-free at a rate adjusted for $s_{\rm th}$. On the
other hand, if $s<s_{\rm th}$, the specific transmission is useless.
[Joint Optimality by Outage DC CohenSteinerShamai12 ] The maximizer $I_{o}(s)$
of (142), subject to the form in (143), is specified by $s^{*}_{\rm
th}$, which can be found as the solution to
$\displaystyle f_{S}(s_{\rm th}^{*})\log\left(1+\beta Ps_{\rm
th}^{*}\right)=(1-F_{S}(s_{\rm th}^{*}))\frac{\beta P}{(1+Ps_{\rm
th}^{*})(1+(1-\beta)Ps_{\rm th}^{*})}\ .$ (145)
The optimal expected DC outage rate and the _optimal expected NDC outage rate_
, which together maximize the sum rate are
$\displaystyle R_{\rm DC,o}$ $\displaystyle=\left(1-F_{S}(s_{\rm
th}^{*})\right)\log\left(1+\frac{\beta Ps_{\rm th}^{*}}{1+(1-\beta)Ps_{\rm
th}^{*}}\right)\ ,$ (146) $\displaystyle R_{\rm NDC,o}$
$\displaystyle=\int_{0}^{s_{\rm
th}^{*}}{f_{S}(u)\log\left(1+\frac{(1-\beta)Pu}{1+\beta
Pu}\right)\,\textnormal{d}u}+\>\int_{s_{\rm
th}^{*}}^{\infty}{f_{S}(u)\log\left(1+(1-\beta)Pu\right)\,\textnormal{d}u}\ .$
(147)
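The threshold equation (145) is a one-dimensional root-finding problem: for Rayleigh fading ($f_{S}(u)=e^{-u}$, $1-F_{S}(u)=e^{-u}$) the density cancels on both sides, and the left-hand side increases while the right-hand side decreases in $s_{\rm th}$, so bisection applies. The sketch below uses hypothetical values $\beta=0.5$, $P=10$, and verifies that the resulting outage rates (146)-(147) respect $R_{\rm DC}+R_{\rm NDC}\leq C_{\rm erg}$.

```python
import math

def outage_threshold(beta, P, tol=1e-10):
    """Solve (145) for Rayleigh fading: the factor exp(-s) cancels,
    leaving log(1 + beta*P*s) = beta*P / ((1 + P*s)(1 + (1-beta)*P*s))."""
    def g(s):
        return math.log1p(beta * P * s) \
            - beta * P / ((1.0 + P * s) * (1.0 + (1.0 - beta) * P * s))
    a, b = 1e-9, 100.0          # g(a) < 0 < g(b)
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(m) < 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

def outage_rates(beta, P, u_max=50.0, n=200_000):
    """Expected DC rate (146), NDC rate (147), and the ergodic capacity,
    via numerical integration over the Rayleigh density exp(-u)."""
    s = outage_threshold(beta, P)
    r_dc = math.exp(-s) * math.log1p(beta * P * s
                                     / (1.0 + (1.0 - beta) * P * s))
    du = u_max / n
    r_ndc = c_erg = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        f = math.exp(-u)
        if u <= s:
            r_ndc += f * math.log1p((1.0 - beta) * P * u
                                    / (1.0 + beta * P * u)) * du
        else:
            r_ndc += f * math.log1p((1.0 - beta) * P * u) * du
        c_erg += f * math.log1p(P * u) * du
    return r_dc, r_ndc, c_erg

r_dc, r_ndc, c_erg = outage_rates(beta=0.5, P=10.0)
```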
The maximizer of (142) can be derived analytically by developing an Euler equation
in a similar way to ShitzSteiner03 . This is done by enlarging the class of
admissible functions $I(s)$ (as opposed to the outage approach) to be
continuously differentiable and to satisfy the boundary conditions $I(0)=\beta
P$ and $I(\infty)=0$.
[Joint Optimality by Broadcast DC ] The maximizer $I_{\rm bs}(s)$ of (142),
when considering all continuously differentiable functions satisfying the
boundary conditions, is $I_{\rm bs}(s)={[\tilde{I}(s)]}_{0}^{\beta P}$, where
$\displaystyle\tilde{I}(x)$
$\displaystyle=\frac{1}{x}\left(\frac{-b(x)+\sqrt{b^{2}(x)-4a(x)c(x)}}{2a(x)}-1\right)\
,$ (148) $\displaystyle a(x)$ $\displaystyle=xf_{S}(x)\ ,$ (149)
$\displaystyle b(x)$ $\displaystyle=2(1-\beta)Pf_{S}(x)x^{2}-(1-F_{S}(x))\ ,$
(150) $\displaystyle c(x)$ $\displaystyle=(1-\beta)^{2}P^{2}f_{S}(x)x^{3}\ .$
(151)
The associated rates $R_{\rm DC,bs}$ and $R_{\rm NDC,bs}$ can be achieved by
substituting it in (140) and (141). The square root in (148) can impose a
finite-length domain for $\tilde{I}(s)$, which can result in a discontinuity
of $I(s)$. This situation is addressed by including a Dirac delta in
$\rho(s)$, which can be interpreted as a superposition of single-layer coding
and continuous layering. Figure 4 shows the relation of $R_{\rm DC}+R_{\rm
NDC}$ for the joint outage approach and the joint broadcast approach, for
selected values of $\beta$. The total expected sum-rate is the sum of the DC
rate and the NDC rate. As may be observed, if $\beta\leq 0.9$, then the
ergodic capacity can be nearly achieved at high SNR.
Figure 4: Total rate for several $\beta$ values and the ergodic capacity vs.
the SNR $P$, for the flat Rayleigh channel.
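The closed form (148)-(151) is simple to evaluate numerically. As a sanity check, for $\beta=1$ (all power to the DC stream) the coefficients reduce to $b=-(1-F_{S})$ and $c=0$, and for Rayleigh fading (148) collapses to the classical single-user solution $\tilde{I}(x)=(1-x)/x^{2}$. The sketch below assumes Rayleigh fading, $f_{S}(x)=e^{-x}$.

```python
import math

def I_tilde(x, beta, P):
    """Evaluate (148) with the coefficients (149)-(151) specialized to
    Rayleigh fading: f_S(x) = exp(-x) and 1 - F_S(x) = exp(-x)."""
    f = math.exp(-x)
    surv = math.exp(-x)                              # 1 - F_S(x)
    a = x * f                                        # (149)
    b = 2.0 * (1.0 - beta) * P * f * x * x - surv    # (150)
    c = (1.0 - beta) ** 2 * P ** 2 * f * x ** 3      # (151)
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return 0.0                   # outside the support of (148)
    t = (-b + math.sqrt(disc)) / (2.0 * a) - 1.0
    return min(max(t / x, 0.0), beta * P)            # clip to [0, beta*P]

# beta = 1 reproduces (1 - x)/x^2 inside its support:
val = I_tilde(0.5, beta=1.0, P=10.0)
```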
#### 2.6.3 Parallel MIMO Two-state Fading Channel
Broadcasting over MIMO channels is still an open problem, and only sub-optimal
achievable schemes are known ShitzSteiner03 . The work in Kfir:IZS2020
considers a two-state parallel MIMO channel, which is equivalent to a SISO
two-state channel, where decoding can be done over multiple consecutive
blocks, as studied in YEH01 . The work in Kfir:IZS2020 considers the slow
(block) fading parallel MIMO channel T99 , where channel state is known at the
receiver only. Under this channel model, the transmitter may adopt a broadcast
approach ShitzSteiner03 , which can optimize the expected transmission rate
under no transmitter CSI; this setting is essentially characterized by
_variable-to-fixed coding_ Verdu10variable-ratechannel .
The study in GA80 composes two degraded broadcast channels CO72 ;
Cover1998CommentsBroadcast into a three-user setup: an encoder with two
outputs, each driving a dual-output broadcast channel; two decoders, where
each is fed by one less-noisy broadcast channel output and one more-noisy
output of the other channel (called unmatched). This channel is referred to as
_degraded broadcast product channel_. For the AWGN case, the capacity region
(private and common rates) of this channel was derived GA80 . In Kfir:IZS2020
, the MIMO setting for the broadcast approach is revisited, with new tools
that differ from those in ShitzSteiner03 ; SteinerShamai2007 . This is by
analyzing the finite-state parallel MIMO channel, where the capacity region in
GA80 is used to address the multi-layering optimization problem for
maximizing the expected rate of a two-state fading zohdy2019broadcast ; YEH01
; Tajer18 parallel MIMO channel.
#### 2.6.4 Capacity of Degraded Gaussian Broadcast Product Channels
Consider the model introduced in GA80 , which is a two-receiver discrete
memoryless degraded product broadcast channel. The Gaussian case was addressed
as a special case. A single transmitter encodes two $n$-length codewords
consisting of a common message $w_{0}\in\\{1,...,2^{nR_{0}}\\}$ to be decoded
by both users, and two private messages $w_{\rm BA}\in\\{1,...,2^{nR_{\rm
BA}}\\}$ and $w_{\rm AB}\in\\{1,...,2^{nR_{\rm AB}}\\}$, one for each of the
two decoding users. A single function encodes these three messages into two
codewords, where each undergoes parallel degraded broadcast sub-channels
$\left\\{\begin{aligned} y_{1}&=x_{1}+n_{11}\\\
z_{1}&=y_{1}+n_{12}\end{aligned}\
,\right.\quad\quad\mbox{and}\qquad\left\\{\begin{aligned}
z_{2}&=x_{2}+n_{21}\\\ y_{2}&=z_{2}+n_{22}\end{aligned}\ ,\right.$ (152)
where $n_{11},n_{21}\sim\mathcal{CN}(0,\nu_{b}^{-1})$ and
$n_{12},n_{22}\sim\mathcal{CN}(0,\nu_{a}^{-1}-\nu_{b}^{-1})$. As depicted in
the bold and red parts of Fig. 5, two users (namely $AB$ and $BA$) receive
both common and private messages from the transmitter and independently decode
the messages. This is an unmatched setting, as $y_{1}$ is less noisy than
$z_{1}$, and $z_{2}$ is less noisy than $y_{2}$. Hence, each of the users has
one less-noisy channel output alongside another, which is the noisier output
of the other sub-channel. Following (GA80, , Theorem 2), which covers this
case, and exploiting the symmetry of equal power allocation to both
sub-channels, the optimal allocation is expected to assign an equal common
rate to every user (state). Denoting $\bar{\alpha}=1-\alpha$, the capacity region
$(R_{0},R_{\rm BA},R_{\rm AB})$ is
$\displaystyle R_{0}\leq\log\left(1+\tfrac{\nu_{a}\alpha
P}{1+\nu_{a}\bar{\alpha}P}\right)+\log\left(1+\tfrac{\nu_{b}\alpha
P}{1+\nu_{b}\bar{\alpha}P}\right)\ ,$ (153) $\displaystyle R_{0}+R_{\rm
BA}=R_{0}+R_{\rm AB}\leq\log\\!\left(\mkern-1.5mu1+\tfrac{\nu_{a}\alpha
P}{1+\nu_{a}\bar{\alpha}P}\mkern-1.5mu\right)\\!\\!+\\!\log(1+\nu_{b}P)\ ,$
(154) $\displaystyle R_{0}+R_{\rm BA}+R_{\rm
AB}\leq\log\left(1+\nu_{b}P\right)+\log\left(1+\tfrac{\nu_{a}\alpha
P}{1+\nu_{a}\bar{\alpha}P}\right)+\log\left(1+\nu_{b}\bar{\alpha}P\right).$
(155)
Figure 5: Encoding-decoding scheme of the 2 receiver Gaussian degraded product
broadcast channel with users: AA, AB, BA, BB
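Membership of a rate triple $(R_{0},R_{\rm BA},R_{\rm AB})$ in the region (153)-(155) reduces to checking three log inequalities. The sketch below is a minimal illustration with hypothetical channel parameters; note that (154) additionally ties $R_{\rm BA}=R_{\rm AB}$ in the symmetric allocation, which the example rates respect.

```python
import math

def in_region(R0, Rba, Rab, P, nu_a, nu_b, alpha, eps=1e-12):
    """Check (R0, R_BA, R_AB) against the capacity region (153)-(155)
    of the symmetric degraded Gaussian broadcast product channel."""
    abar = 1.0 - alpha
    la = math.log1p(nu_a * alpha * P / (1.0 + nu_a * abar * P))
    lb = math.log1p(nu_b * alpha * P / (1.0 + nu_b * abar * P))
    c153 = la + lb                                     # bound (153)
    c154 = la + math.log1p(nu_b * P)                   # bound (154)
    c155 = math.log1p(nu_b * P) + la + math.log1p(nu_b * abar * P)  # (155)
    return (R0 <= c153 + eps
            and R0 + Rba <= c154 + eps
            and R0 + Rab <= c154 + eps
            and R0 + Rba + Rab <= c155 + eps)

ok = in_region(0.5, 0.2, 0.2, P=10.0, nu_a=0.5, nu_b=1.0, alpha=0.6)
```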
#### 2.6.5 Extended Degraded Gaussian Broadcast Product Channels
The classical product channel is extended by introducing two dual-input
receivers in addition to the original two. The first gets the two more noisy
channel outputs $(z_{1},y_{2})$, whereas the second receives the two less
noisy outputs $(z_{2},y_{1})$. To support this, two messages $w_{\rm AA}$ and
$w_{\rm BB}$ are added. The total two $n$-length codewords are the
superposition of three codewords by independent encoders as follows
$(\textbf{X}_{1},\textbf{X}_{2})=f_{\rm AA}(w_{\rm AA})+f_{\rm
cr}(w_{0},w_{\rm BA},w_{\rm AB})+f_{\rm BB}(w_{\rm BB})$, where subscript
${\rm cr}$ stands for crossed states ($(A,B)$ or $(B,A)$). See Fig. 5 for an
illustration.
Stream AA is decoded first, regardless of whether the others can be decoded
(this is done by treating all the other streams as interference). Then, both
streams AB and BA, including their common stream subscripted $0$ can be
decoded after removing the AA impact from their decoder inputs (treating the
BB stream as interference). Finally, removing all the above decoded streams
allows decoding stream BB. From (155), we have
$\displaystyle R_{\rm AA}\leq 2\log\left(1+\tfrac{\alpha_{\rm
AA}P}{\nu_{a}^{-1}+\bar{\alpha}_{\rm AA}P}\right)\ ,$ (156) $\displaystyle
R_{\rm AA}+R_{0}\leq 2\log\left(1+\tfrac{\alpha_{\rm
AA}P}{\nu_{a}^{-1}+\bar{\alpha}_{\rm
AA}P}\right)+\log\\!\left(1+\tfrac{\alpha\alpha_{\rm
cr}P}{\nu_{b}^{-1}+(\bar{\alpha}\alpha_{\rm cr}+\alpha_{\rm
BB})P}\\!\right)+\log\\!\left(1+\tfrac{\alpha\alpha_{\rm
cr}P}{\nu_{a}^{-1}+(\bar{\alpha}\alpha_{\rm cr}+\alpha_{\rm BB})P}\\!\right)\
,$ (157) $\displaystyle R_{\rm AA}+R_{0}+R_{\rm BA}=R_{\rm AA}+R_{0}+R_{\rm
AB}\ $ $\displaystyle\qquad\qquad\qquad\leq 2\log\left(1+\tfrac{\alpha_{\rm
AA}P}{\nu_{a}^{-1}+\bar{\alpha}_{\rm
AA}P}\right)+\log\left(1+\tfrac{\alpha\alpha_{\rm
cr}P}{\nu_{a}^{-1}+(\bar{\alpha}\alpha_{\rm cr}+\alpha_{\rm
BB})P}\right)+\log\left(1+\tfrac{\alpha_{\rm cr}P}{\nu_{b}^{-1}+\alpha_{\rm
BB}P}\right)\ ,$ (158) $\displaystyle R_{\rm AA}+R_{0}+R_{\rm BA}+R_{\rm AB}$
$\displaystyle\qquad\qquad\qquad\leq 2\log\left(1+\tfrac{\alpha_{\rm
AA}P}{\nu_{a}^{-1}+\bar{\alpha}_{\rm
AA}P}\right)+\log\left(1+\tfrac{\alpha_{\rm cr}P}{\nu_{b}^{-1}+\alpha_{\rm
BB}P}\right)$
$\displaystyle\qquad\qquad\qquad+\log\left(1+\tfrac{\alpha\alpha_{\rm
cr}P}{\nu_{a}^{-1}+(\bar{\alpha}\alpha_{\rm cr}+\alpha_{\rm
BB})P}\right)+\log\left(1+\tfrac{\bar{\alpha}\alpha_{\rm
cr}P}{\nu_{b}^{-1}+\alpha_{\rm BB}P}\right)\ ,$ (159) $\displaystyle R_{\rm
AA}+R_{0}+R_{\rm BA}+R_{\rm AB}+R_{\rm BB}$
$\displaystyle\qquad\qquad\qquad\leq 2\log\left(1+\tfrac{\alpha_{\rm
AA}P}{\nu_{a}^{-1}+\bar{\alpha}_{\rm
AA}P}\right)+\log\left(1+\tfrac{\alpha_{\rm cr}P}{\nu_{b}^{-1}+\alpha_{\rm
BB}P}\right)$
$\displaystyle\qquad\qquad\qquad+\log\left(1+\tfrac{\alpha\alpha_{\rm
cr}P}{\nu_{a}^{-1}+(\bar{\alpha}\alpha_{\rm cr}+\alpha_{\rm
BB})P}\right)+\log\left(1+\tfrac{\bar{\alpha}\alpha_{\rm
cr}P}{\nu_{b}^{-1}+\alpha_{\rm BB}P}\right)+2\log\left(1+\tfrac{\alpha_{\rm
BB}P}{\nu_{b}^{-1}}\right)\ ,$ (160)
where $\alpha_{\rm AA},\alpha_{\rm cr},\alpha_{\rm BB}\in[0,1]$ are the
relative power allocations of the subscripted streams, satisfying $\alpha_{\rm
AA}+\alpha_{\rm cr}+\alpha_{\rm BB}=1$, and $\alpha\in[0,1]$ is the single
user private power allocation within the unmatched channel.
#### 2.6.6 Broadcast Encoding Scheme
Adding a message splitter at the transmitter and a channel state-dependent
message multiplexer at the receiver enriches the set of achievable schemes.
Figure 6 illustrates the encoding and decoding schemes. During decoding, the
four possible channel
states $\mathbf{S}=(S_{1},S_{2})$ impose different decoding capabilities. If
$\mathbf{S}=\rm(A,A)$, then $g_{\rm AA}(\cdot)$ can reconstruct $w_{\rm AA}$
to achieve a total rate of $R_{\rm AA}$. For $\mathbf{S}=\rm(B,A)$, $g_{\rm
BA}(\cdot)$ is capable of reconstructing three messages $(w_{\rm
AA},w_{0},w_{\rm BA})$ with sum rate of $R_{\rm AA}+R_{0}+R_{\rm BA}$.
Similarly for $\mathbf{S}=\rm(A,B)$, $g_{\rm AB}(\cdot)$ reconstructs $(w_{\rm
AA},w_{0},w_{\rm AB})$ with sum rate $R_{\rm AA}+R_{0}+R_{\rm AB}$. When both
channels are permissive $\mathbf{S}=\rm(B,B)$, all five messages $(w_{\rm
AA},w_{0},w_{\rm BA},w_{\rm AB},w_{\rm BB})$ are reconstructed at $g_{\rm
BB}(\cdot)$ under the rate $R_{\rm AA}+R_{0}+R_{\rm BA}+R_{\rm AB}+R_{\rm
BB}$. Recall that a single-user transmission is of interest here, thus the
expected rate of the parallel channel at hand can be expressed by
$\displaystyle R_{\text{ave}}$ $\displaystyle=P_{A}^{2}R_{\rm
AA}+P_{A}P_{B}(R_{\rm AA}+R_{0}+R_{\rm AB})$
$\displaystyle\quad+P_{B}P_{A}(R_{\rm AA}+R_{0}+R_{\rm BA})$
$\displaystyle\quad+P_{B}^{2}(R_{\rm AA}+R_{0}+R_{\rm BA}+R_{\rm AB}+R_{\rm
BB})\ .$ (161)
Using (160), and since both channels have identical statistics leading to
$R_{\rm AB}=R_{\rm BA}$, the achievable average rate is
$\displaystyle R_{\text{ave}}$
$\displaystyle=2(P_{A}+P_{B})^{2}\log\left(1+\nu_{a}P\right)+R_{0}(1-\alpha_{\rm
AA})$ $\displaystyle\quad+R_{1}(1-\alpha_{\rm AA}-\alpha\alpha_{\rm
cr})+R_{2}(1-\alpha_{\rm AA}-\alpha_{\rm cr})\ ,$ (162)
where the new notations are
$\displaystyle R_{0}(\alpha_{0})$
$\displaystyle=[(P_{A}+P_{B})^{2}-P_{A}^{2}]\log(1+\nu_{b}\alpha_{0}P)-[(P_{A}+P_{B})^{2}+P_{A}^{2}]\log(1+\nu_{a}\alpha_{0}P)\
,$ (163) $\displaystyle R_{1}(\alpha_{1})$
$\displaystyle=P_{B}^{2}\log(1+\nu_{b}\alpha_{1}P)-[(P_{A}+P_{B})^{2}-P_{A}^{2}]\log(1+\nu_{a}\alpha_{1}P)\
,$ (164) $\displaystyle R_{2}(\alpha_{2})$
$\displaystyle=-2P_{A}P_{B}\log(1+\nu_{b}\alpha_{2}P)\ ,$ (165)
and the arguments satisfy $\alpha_{0}=1-\alpha_{\rm AA}$,
$\alpha_{1}=1-\alpha_{\rm AA}-\alpha\alpha_{\rm cr}$, and
$\alpha_{2}=1-\alpha_{\rm AA}-\alpha_{\rm cr}=\alpha_{\rm BB}$. Note that
$R_{0}(\alpha_{0})$ and $R_{1}(\alpha_{1})$ are not obliged to be positive, as
they can be negative for some scenarios, and $R_{2}(\alpha_{2})$ is
non-positive by definition. Denoting the domain $D^{\prime}$ of valid power
allocation vectors $\mathbold{\alpha}^{\prime}=[\alpha,\alpha_{\rm
AA},\alpha_{\rm cr},\alpha_{\rm BB}]^{T}\in[0,1]^{4}$ and the operator
$[x]_{+}=\max\\{0,x\\}$ yields the following proposition.
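As a numeric companion, the average rate (162) with the shorthand terms (163)-(165) can be evaluated directly (a sketch; log base 2 and the function names are assumptions, not from the source):

```python
import math

def R0(a0, PA, PB, nu_a, nu_b, P):
    # (163)
    s = (PA + PB) ** 2
    return (s - PA**2) * math.log2(1 + nu_b * a0 * P) \
         - (s + PA**2) * math.log2(1 + nu_a * a0 * P)

def R1(a1, PA, PB, nu_a, nu_b, P):
    # (164)
    s = (PA + PB) ** 2
    return PB**2 * math.log2(1 + nu_b * a1 * P) \
         - (s - PA**2) * math.log2(1 + nu_a * a1 * P)

def R2(a2, PA, PB, nu_b, P):
    # (165): non-positive by construction
    return -2 * PA * PB * math.log2(1 + nu_b * a2 * P)

def r_ave(alpha, a_AA, a_cr, PA, PB, nu_a, nu_b, P):
    """Average rate (162) for the allocation (alpha, a_AA, a_cr)."""
    base = 2 * (PA + PB) ** 2 * math.log2(1 + nu_a * P)
    return base \
        + R0(1 - a_AA, PA, PB, nu_a, nu_b, P) \
        + R1(1 - a_AA - alpha * a_cr, PA, PB, nu_a, nu_b, P) \
        + R2(1 - a_AA - a_cr, PA, PB, nu_b, P)
```

Setting $\alpha_{\rm AA}=1$ zeroes all three correction terms and recovers the base rate, which serves as a quick sanity check.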
Figure 6: Encoding and decoding scheme of the broadcast approach for the
two-receiver Gaussian degraded product broadcast channel.
The maximal sum rate of the symmetric two parallel two state channel over all
power allocations is
$\displaystyle\max_{\mathbold{\alpha}^{\prime}\in
D^{\prime}}R_{\textup{ave}}(\mathbold{\alpha}^{\prime})$
$\displaystyle=2(P_{A}+P_{B})^{2}\log(1+\nu_{a}P)+\\!\max_{0\leq\alpha_{\rm
AA}\leq 1}\\!\\!\left\\{R_{0}(1-\alpha_{\rm
AA})+R_{1}(\alpha_{1}^{\textup{opt}}(\alpha_{\rm AA}))\right\\}\ ,$ (166)
where
$\displaystyle\alpha_{1}^{\textup{opt}}(\alpha_{\rm AA})$
$\displaystyle=\max\\{0,\min\\{1-\alpha_{\rm AA},\alpha_{1}^{*}\\}\\}\ ,$
(167) $\displaystyle\alpha_{1}^{*}$
$\displaystyle=\frac{P_{B}^{2}\nu_{b}-[(P_{A}+P_{B})^{2}-P_{A}^{2}]\nu_{a}}{[(P_{A}+P_{B})^{2}-P_{A}^{2}-P_{B}^{2}]\nu_{a}\nu_{b}P}\
,$ (168)
and the latter solves
$\frac{\partial}{\partial\alpha_{1}}R_{1}(\alpha_{1}^{*})=0$.
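The stationary point (168) can be verified directly; a minimal sketch (log base 2, hypothetical function names):

```python
import math

def R1(a1, PA, PB, nu_a, nu_b, P):
    """The shorthand term (164)."""
    c = (PA + PB) ** 2 - PA**2
    return PB**2 * math.log2(1 + nu_b * a1 * P) - c * math.log2(1 + nu_a * a1 * P)

def alpha1_star(PA, PB, nu_a, nu_b, P):
    """Closed-form stationary point (168) of R1."""
    c = (PA + PB) ** 2 - PA**2
    return (PB**2 * nu_b - c * nu_a) / ((c - PB**2) * nu_a * nu_b * P)
```

Note that $c-P_{B}^{2}=(P_{A}+P_{B})^{2}-P_{A}^{2}-P_{B}^{2}=2P_{A}P_{B}$, which is exactly the denominator factor appearing later in (170).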
The optimal power allocation for the state $(B,B)$ is $\alpha_{\rm
BB}^{\textup{opt}}=0$. This is true for any set of parameters
$\nu_{a},\nu_{b},P_{A},P_{B}$, even if $P_{B}\rightarrow 1$ and
$\nu_{b}\gg\nu_{a}$. Inherently, a penalty occurs when trying to exploit the
double permissive state. Under the optimal power allocation,
$\alpha^{\textup{opt}}(\alpha_{\rm
AA})=1-\alpha_{1}^{\textup{opt}}(\alpha_{\rm AA})/(1-\alpha_{\rm AA})$. This
removes a degree of freedom in the optimization problem.
Using these corollaries, and the notation
$\mathbold{\alpha}^{\prime}=[\alpha,\alpha_{\rm AA},\alpha_{\rm
cr},\alpha_{\rm BB}]^{T}$ instead of
$\mathbold{\alpha}=[\alpha_{0},\alpha_{1},\alpha_{2}]^{T}$, we have the
following theorem. The maximal sum rate of the symmetric two-parallel two-
state channel over all allocations $\mathbold{\alpha}^{\prime}\in D^{\prime}$
is
$\displaystyle R_{\textup{ave}}^{\textup{opt}}$
$\displaystyle=2(P_{A}+P_{B})^{2}\log(1+\nu_{a}P)+\smashoperator[l]{\max_{0\leq\alpha_{\rm
AA}\leq 1}^{}}\\!\\!\left\\{\mkern-1.5muR_{0}(1\\!-\\!\alpha_{\rm
AA})\\!+\\!R_{1}((1\\!-\\!\alpha_{\rm
AA})\\!\cdot\\!(1\\!-\\!\alpha^{\textup{opt}}(\alpha_{\rm AA})))\\!\right\\}\
,$ (169)
where
$\displaystyle\alpha^{\textup{opt}}(\alpha_{\rm AA})$
$\displaystyle=\left[\min\left\\{1,1-\tfrac{P_{B}^{2}\nu_{b}-[(P_{A}+P_{B})^{2}-P_{A}^{2}]\nu_{a}}{2P_{A}\cdot
P_{B}\cdot\nu_{a}\nu_{b}P(1-\alpha_{\rm AA})}\right\\}\right]_{+}\ .$ (170)
Denoting the argument of the maximization as $\alpha_{\rm AA}^{\textup{opt}}$,
the optimal power allocation vector is
$\mathbold{\alpha}^{\prime\textup{opt}}=[\alpha^{\textup{opt}}(\alpha_{\rm
AA}),\alpha_{\rm AA}^{\textup{opt}},1-\alpha_{\rm
AA}^{\textup{opt}},0]^{\top}.$
From Proposition 2.6.6 and by setting $\alpha_{1}=1-\alpha_{\rm
AA}-\alpha\alpha_{\rm cr}=(1-\alpha_{\rm AA})(1-\alpha)$, it can be observed
that the optimal allocation for state BB is $\alpha_{\rm BB}=0$. To evaluate
the advantage of jointly optimizing $\alpha_{\rm AA}$ and $\alpha$, the
following sub-optimal schemes are compared: a) independent broadcasting; b)
privately broadcasting; and c) only common broadcasting. A scheme for which
the encoder disjointly encodes different messages into each single channel of
the parallel channel using the broadcast approach over the fading channel is
referred to as independent broadcasting. The broadcast approach for the fading SISO
channel relies on two main operations: superposition coding by layering at the
transmitter; and successive interference cancellation at the receiver. The
maximal expected sum rate of the symmetric two parallel two state channel
under independent broadcasting is
$\displaystyle R^{\textup{ind-bc,opt}}_{\textup{ave}}$
$\displaystyle=2(P_{A}+P_{B})\log\left(\tfrac{1+\nu_{a}P}{1+\nu_{a}(1-\alpha^{\textup{ind-
bc,opt}})P}\right)+2P_{B}\log\left(1+\nu_{b}(1-\alpha^{\textup{ind-
bc,opt}})P\right),$
where
$\alpha^{\textup{ind-bc,opt}}=\left[\min\left\\{1,1-\tfrac{P_{B}\nu_{b}-(P_{A}+P_{B})\nu_{a}}{P_{A}\nu_{a}\nu_{b}P}\right\\}\right]_{+}.$
(171)
A scheme for which no power is allocated for the common stream in the
$\rm(B,A)$ and $\rm(A,B)$ states (message $w_{0}$) is called privately
broadcasting. This scheme is equivalent to setting $\alpha=0$ in Theorem
2.6.6, thus allocating encoding power from the common stream ($R_{0}=0$) to
the other streams $R_{\rm AA},R_{\rm AB},R_{\rm BA}$ and $R_{\rm BB}$, which
achieves optimality for
$\alpha_{\rm AA}^{\textup{prv-
bc,opt}}=\left[\min\left\\{1,1-\tfrac{[P_{B}-P_{A}]\nu_{b}-[P_{B}+P_{A}]\nu_{a}}{2P_{A}\nu_{a}\nu_{b}P}\right\\}\right]_{+}.$
(172)
A scheme for which all of the crossed state power is allocated only to the
common stream (message $w_{0}$) and no power is allocated to the private
messages (no allocation for messages $w_{\rm AB}$ and $w_{\rm BA}$) is called
_only common broadcasting_. This scheme is equivalent to setting $\alpha=1$ in
Theorem 2.6.6, thus allocating encoding power from the private streams
($R_{\rm AB}=R_{\rm BA}=0$) to the other streams $R_{\rm AA},R_{0}$ and
$R_{\rm BB}$, which achieves optimality for
$\alpha_{\rm AA}^{\textup{cmn-
bc,opt}}=\left[\min\\!\left\\{\\!1,\\!1\\!-\\!\tfrac{[(P_{A}\\!+\\!P_{B})^{2}\\!-\\!P_{A}^{2}]\nu_{b}\\!-\\![(P_{A}\\!+\\!P_{B})^{2}\\!+\\!P_{A}^{2}]\nu_{a}}{2P_{A}^{2}\nu_{a}\nu_{b}P}\\!\right\\}\right]_{+}.$
(173)
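The three closed-form allocations (171)-(173) share the clipping pattern $[\min\\{1,\cdot\\}]_{+}$; a sketch (parameter names assumed):

```python
def clipped(x):
    """[min{1, x}]_+ as used in (171)-(173)."""
    return max(0.0, min(1.0, x))

def alpha_ind_bc(PA, PB, nu_a, nu_b, P):
    # (171): independent broadcasting on each sub-channel
    return clipped(1 - (PB * nu_b - (PA + PB) * nu_a) / (PA * nu_a * nu_b * P))

def alpha_prv_bc(PA, PB, nu_a, nu_b, P):
    # (172): privately broadcasting (alpha = 0, no common stream)
    return clipped(1 - ((PB - PA) * nu_b - (PB + PA) * nu_a)
                   / (2 * PA * nu_a * nu_b * P))

def alpha_cmn_bc(PA, PB, nu_a, nu_b, P):
    # (173): only common broadcasting (alpha = 1, no private crossed streams)
    s = (PA + PB) ** 2
    return clipped(1 - ((s - PA**2) * nu_b - (s + PA**2) * nu_a)
                   / (2 * PA**2 * nu_a * nu_b * P))
```

Comparing the average rates achieved under these three allocations against the jointly optimized rate of Theorem 2.6.6 quantifies the loss incurred by each restriction.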
The result in Theorem 2.6.6 differs from the one presented in YEH01 for the
two-parallel two-state channel. In YEH01, only common information is
transmitted in the crossed states $\rm(A,B)$ and $\rm(B,A)$; equation (39) of
YEH01 states, without justification, that only a common rate is used for these
states. It is further claimed there that this is an expected-rate upper bound
for some power allocation. The result in (171) proves that broadcasting common
information only, i.e., $\alpha=1$, is sub-optimal and does not yield the
maximal average rate.
### 2.7 Broadcast Approach via Dirty Paper Coding
We conclude this section by noting the relevance of dirty paper coding (DPC)
to the broadcast approaches discussed. Even though the central focus of the
broadcast approaches discussed is superposition coding, all these approaches
can be revisited by instead adopting dirty paper coding. Information layers
generated by a broadcast approach interfere with one another with the key
property that the interference is known to the transmitter. DPC enables
effective transmission when the transmitted signal is corrupted by
interference (and noise in general) terms that are known to the transmitter.
This is facilitated via precoding the transmitted signal by accounting for and
canceling the interference.
DPC plays a pivotal role in the broadcast channel. It is an optimal (capacity-
achieving) scheme for the multi-antenna Gaussian broadcast channel
Weingarten06 ; CA01_1 with general message sets and effective, in the form of
binning, for the general broadcast channel with degraded message sets
NairGamal09 . To discuss the application of the DPC in the broadcast approach,
consider the single-user channel with a two-state fading process, that is for
the model in (1) we have $h\in\\{h_{w},h_{s}\\}$ where $|h_{w}|<|h_{s}|$,
rendering the following two models in these two states:
$\displaystyle y_{w}\;$ $\displaystyle=\;h_{w}x\;+\;n_{w}\ ,$ (174)
$\displaystyle y_{s}\;$ $\displaystyle=\;h_{s}x\;+\;n_{s}\ ,$ (175)
which can be also considered a broadcast channel with two receivers with
channels $h_{w}$ and $h_{s}$. The broadcast region for this channel can be
achieved by both superposition coding and DPC. When the noise terms have
standard Gaussian distribution, the capacity region is characterized by all
pairs
$\displaystyle R_{w}\;$
$\displaystyle\leq\;\frac{1}{2}\log\left(1+\frac{\alpha
P|h_{w}|^{2}}{1+(1-\alpha)P|h_{w}|^{2}}\right)\ ,$ (176) $\displaystyle
R_{s}\;$
$\displaystyle\leq\;\frac{1}{2}\log\left(1+(1-\alpha)P|h_{s}|^{2}\right)\
,$ (177)
over all $\alpha\in[0,1]$. This capacity region is achievable by superposition
coding of two information layers $x_{w}$ and $x_{s}$ with rates $R_{w}$ and
$R_{s}$ to transmit $x=x_{w}+x_{s}$. The same region can be achieved by DPC,
where $x_{w}$ is generated and decoded as done in superposition coding, and
$x_{s}$ is designed by treating $x_{w}$ as the interference known to the
transmitter non-causally. It is noteworthy that in the original design of DPC
in Costa1983, the non-causally known interference term is modeled as additive
Gaussian noise. However, as shown in CohenISIT2012 , the interference term can
be any sequence, like a Gaussian codeword, and still achieve the same capacity
region.
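The boundary of the region (176)-(177) can be traced by sweeping $\alpha$; a sketch (log base 2, hypothetical names), valid for superposition coding and DPC alike since both achieve the same region:

```python
import math

def boundary_point(alpha, P, hw, hs):
    """Rate pair on the boundary of (176)-(177) for a power split alpha."""
    gw, gs = abs(hw) ** 2, abs(hs) ** 2
    # (176): the weak-channel layer sees the strong layer as interference
    Rw = 0.5 * math.log2(1 + alpha * P * gw / (1 + (1 - alpha) * P * gw))
    # (177): the strong-channel layer is decoded interference-free
    Rs = 0.5 * math.log2(1 + (1 - alpha) * P * gs)
    return Rw, Rs

# Sweeping alpha in [0, 1] traces the achievable tradeoff
points = [boundary_point(a / 10, 10.0, 0.5, 2.0) for a in range(11)]
```

At the endpoints, $\alpha=0$ devotes all power to the strong layer ($R_{w}=0$) and $\alpha=1$ to the weak layer ($R_{s}=0$).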
The operational difference between superposition coding and DPC at the
receiver side is that under superposition coding, at the stronger receiver, the
layers $x_{w}$ and $x_{s}$ have to be decoded sequentially, while under DPC the
two layers can be decoded in parallel. This observation alludes to an
operational advantage of DPC over superposition coding: while both achieve the
capacity region, DPC imposes a shorter decoding latency.
## 3 The Multiple Access Channel
### 3.1 Overview
As discussed in detail in Section 2, CSI uncertainties result in degradation
in communication reliability. Such degradations can be further exacerbated as
we transition to multiuser networks consisting of a larger number of
simultaneously communicating users. Irrespective of multiuser channel models,
a common realistic assumption is that slowly varying channels can be estimated
by the receivers with high fidelity, providing the receivers with the CSI.
Acquiring the CSI by the transmitters can be further facilitated via feedback
from the receivers. However, feedback communication is often infeasible or
incurs additional communication and delay costs, which increase significantly
as the number of transmitters and receivers grows in the network.
This section focuses on the multi-access channel, consisting of multiple users
with independent messages communicating with a common receiver. The channels
undergo slow fading processes. Similar to the setting considered in Section 2,
it is assumed that the receivers can acquire the CSI with high fidelity (e.g.,
through training sessions). While the receiver has perfect and instantaneous
access to the states of all channels, the transmitters are either entirely or
partially oblivious to the CSI, rendering settings in which the transmitters
face CSI uncertainty. The information-theoretic limits of the MAC when all the
transmitters and receivers have complete CSI are well-investigated ahlsewde ;
KimElGamal2011 ; SH98 . Furthermore, there is a rich literature on the MAC’s
information-theoretic limits under varying degrees of availability of
instantaneous CSIT. Representative studies on the capacity region include the
impact of degraded CSIT CemalSteiberg , quantized and asymmetric CSIT
SenComoYuksel , asymmetric delayed CSIT BasherShiraziPermuter , non-causal
asymmetric partial CSIT SenAlajajiYuksel , and symmetric noisy CSIT
SenAlajajiYiikselComo:2013 . Bounds on the capacity region of the memoryless
MAC in which the CSIT is made available to a different encoder in a causal
manner are characterized in LapidothSteinberg:2013 . Counterpart results are
characterized for the case of common CSI at all transmitters in
LapidothSteinberg:2013com , which are also extended in LiSimeoneYener:2013 to
address the case in which the encoder compresses previously transmitted
symbols and the previous states. The study in kotagiri2008multiaccess
provides an inner bound on the capacity region of the discrete and Gaussian
memoryless two-user MAC in which the CSI is made available to one of the
encoders non-causally. An inner bound on the capacity of the Gaussian MAC is
derived in lapidoth2010multiple when both encoders are aware of the CSI in a
strictly causal manner. The capacity region of a cooperative MAC with partial
CSIT is characterized in PermuterShamaiSomekh:2011 . The capacity region of
the multi-user Gaussian MAC in which each interference state is known to only
one transmitter is characterized within a constant gap in Wang:2012 . A two-
user generalized MAC with correlated states and non-causally known CSIT is
studied in EmadiKhormujiSkoglundAref:2014 . In
MonemizadehBahmaniHodtaniSeyedin:2014 , a two-user Gaussian double-dirty
compound MAC with partial CSIT is studied. The capacity regions of a MAC with
full and distributed CSIT are analyzed in sreekumar2015distributed . A two-
user cooperative MAC with correlated states and partial CSIT is analyzed in
EmadiZamanighomiAref:2012 . The study in PermuterWeissmanChen:2009
characterizes inner and upper bounds on the capacity region of a finite-state
MAC with feedback.
Despite the rich literature on the MAC with full CSIT, when the transmitters
can acquire only the probability distribution of the fading channel state,
without any instantaneous CSIT, the performance limits are not fully known.
The broadcast approach is investigated for the two-user MAC with no CSIT in
SH00 ; Minero:ISIT07 ; Tajer18 ; KazemiTajer2017 ; Zou13 ; zohdy2019broadcast
. Specifically, the
effectiveness of a broadcast strategy for multiuser channels is investigated
in SH00 ; Minero:ISIT07 ; Tajer18 for the settings in which the transmitters
are oblivious to all channels, and in Zou13 ; zohdy2019broadcast for the
settings in which each transmitter is oblivious to only channels linking other
users to the receiver. Specifically, when the transmitters are oblivious to
all channels, the approaches in SH00 and Minero:ISIT07 adopt the broadcast
strategy designed for single-user channels and directly apply it to the MAC.
As a result, each transmitter generates a number of information streams, each
adapted to a specific realization of the direct channel linking the
transmitter to the receiver. The study in Tajer18 takes a different approach
based on the premise that the contribution of each user to the overall
performance of the multiple access channel not only depends on the direct
channel linking this user to the receiver, but it is also influenced by the
relative qualities of the other users’ channels. Hence, it proposes a strategy
in which the information streams are generated and adapted to the channel’s
combined state resulting from incorporating all individual channel states. The
setting in which the transmitters have only local CSIT, that is, each
transmitter has the CSI of its direct channel to the receiver while being
unaware of the states of other users’ channels, is studied in Zou13 ;
zohdy2019broadcast . Medium access without transmitter coordination is studied
in Cao2007 .
The remainder of this section is organized as follows. This section focuses
primarily on the two-user MAC, for which we provide a model in Section 3.2. We
start by discussing the settings in which the transmitters have access to only
the statistical model of the channel, and they are oblivious to the channel
model in Section 3.4 with an emphasis on continuous channel models. Next, we
focus on the setting in which the receiver has full CSI, and the transmitters
have only the statistical model of the CSI and review two broadcast approaches
in Sections 3.5 and 3.6. The focus of these two subsections is two-state
discrete channel models. Their generalization to multi-state channels will be
discussed in Section 3.7. Finally, we will review two broadcast approach
solutions for settings with local (partial) CSIT in Sections 3.8 and 3.9. The
focus of these two subsections is on the two-state discrete channel models,
and their generalization to the multi-state models is discussed in Section
3.10.
Figure 7: Equivalent degraded broadcast channel corresponding to a two user
four state multiple access channel with channel gains ${s}_{1}$ and ${s}_{2}$.
### 3.2 Network Model
Consider a two-user multiple access channel, in which two independent users
transmit independent messages to a common receiver via a discrete-time
Gaussian multiple-access fading channel. All the users are equipped with one
antenna, and the random channel coefficients are statistically independent.
The fading process is assumed to remain unchanged during each transmission
cycle and can change to independent states afterward. The users are subject to
an average transmission power constraint $P$. By defining $x_{i}$ as the
signal of transmitter $i\in\\{1,2\\}$ and $h_{i}$ as the coefficient of the
channel linking transmitter $i\in\\{1,2\\}$ to the receiver, the received
signal is
$y=h_{1}x_{1}+h_{2}x_{2}+n\ ,$ (178)
where $n$ accounts for the AWGN with mean zero and variance 1. We consider
both continuous and discrete models for the channel.
#### 3.2.1 Discrete Channel Model
Each of the channels, independently of the other one, can be in one of the
finite distinct states. We denote the number of states by $\ell\in\mathbb{N}$
and denote the distinct values that $h_{1}$ and $h_{2}$ can take by
$\\{\sqrt{s_{m}}:m\in\\{1,\dots,\ell\\}\\}$. Hence the multiple access channel
can be in one of the combined $\ell^{2}$ states. By leveraging the broadcast
approach (c.f. Shitz97broadcast ; ShitzSteiner03 , and Minero:ISIT07 ), the
communication model in (178) can be equivalently presented by a broadcast
network that has two inputs $x_{1}$ and $x_{2}$ and $\ell^{2}$ outputs, each
corresponding to one possible combination of the channels $h_{1}$ and
$h_{2}$. We denote the output
corresponding to the combination $h_{1}=\sqrt{{s}_{m}}$ and
$h_{2}=\sqrt{{s}_{n}}$ by
$y_{mn}=\sqrt{{s}_{m}}x_{1}+\sqrt{{s}_{n}}x_{2}+n_{mn}\ ,$ (179)
where $n_{mn}$ is a standard Gaussian random variable for all
$m,n\in\\{1,\dots,\ell\\}$. Figure 7 depicts this network for the case of the
two-state channels ($\ell=2$). Without loss of generality and for the
convenience in notations, we assume that channel gains
$\\{{s}_{m}:m\in\\{1,\dots,\ell\\}\\}$ take real positive values and are
ordered in ascending order, i.e., $0<{s}_{1}<{s}_{2}<\dots<{s}_{\ell}$. We
define $p_{mn}$ as the probability of the state
$(h_{1},h_{2})=(\sqrt{{s}_{m}},\sqrt{{s}_{n}})$. Accordingly, we also define
$q_{m}=\sum_{n=1}^{\ell}p_{mn}$ and $p_{n}=\sum_{m=1}^{\ell}p_{mn}$. We will
focus throughout the section on the case of symmetric average transmission
power constraints, i.e., $P_{1}=P_{2}=P$, whereas the generalization to the
case of asymmetric power constraints is straightforward.
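The bookkeeping of the discrete model, i.e., the $\ell^{2}$ equivalent outputs (179) and the marginals $q_{m}$ and $p_{n}$, can be sketched as follows (the joint probabilities here are an arbitrary illustrative example):

```python
# Hypothetical two-state example (ell = 2): joint probabilities p[m][n] of the
# state (h1, h2) = (sqrt(s_m), sqrt(s_n)); rows index the state of h1.
p = [[0.25, 0.25],
     [0.25, 0.25]]
ell = len(p)

# Marginals q_m = sum_n p_{mn} and p_n = sum_m p_{mn}
q = [sum(p[m][n] for n in range(ell)) for m in range(ell)]
pn = [sum(p[m][n] for m in range(ell)) for n in range(ell)]

def output(m, n, x1, x2, s, noise):
    """Equivalent broadcast-network output y_{mn} of (179)."""
    return (s[m] ** 0.5) * x1 + (s[n] ** 0.5) * x2 + noise
```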
#### 3.2.2 Continuous Channel Model
In the continuous channel model, the fading coefficients $h_{1}$ and $h_{2}$
take a continuum of values that follow known statistical models. These
statistical models are known to the transmitter and receiver. We denote the
fading powers by $s_{1}=|h_{1}|^{2}$ and $s_{2}=|h_{2}|^{2}$. Depending on
channel realizations, denote the channel output when the channel gains are
$s_{1}$ and $s_{2}$ by
$y_{s_{1}s_{2}}=\sqrt{{s}_{1}}x_{1}+\sqrt{{s}_{2}}x_{2}+n_{s_{1}s_{2}}\ .$
(180)
Throughout this section, we use the notation
$C(x,y)=\frac{1}{2}\log_{2}\big{(}1+\frac{x}{y+\frac{1}{P}}\big{)}$.
We review settings in which the transmitters are either fully oblivious to all
channels or have local CSIT. That is, transmitter 1 (respectively, 2) knows
channel $h_{1}$ ($h_{2}$) while being unaware of channel $h_{2}$ ($h_{1}$). We refer
to this model by L-CSIT, and similarly to the N-CSIT setting, we characterize
an achievable rate region for it.
### 3.3 Degradedness and Optimal Rate Splitting
The broadcast approach’s hallmark is designating an order of degradedness
among different network realizations based on their qualities. Designating
degradedness in the single-user single-antenna channel arises naturally, as
discussed in Section 2. However, for the multiuser networks, there is no
natural notion of degradedness, and any ordering approach will at least bear
some level of heuristics. In the broadcast approaches that we discuss in this
section for the MAC, we use the capacity region of the multiple access
channels under different network states. Based on this notion of degradedness,
once one of the channels improves, the associated capacity region expands,
alluding to the possibility of sustaining higher reliable rates.
### 3.4 MAC without CSIT – Continuous Channels
We start by discussing the canonical Gaussian multiple-access channel in which
the channels undergo a continuous fading model in (180). This is the setting
that is primarily investigated in SH00 . To formalize this approach, we define
$R_{i}(s)$ as the reliably communicated information rate of transmitter $i$
at fading level $s$. Similarly to the single-user channel, we define
$\rho_{i}(s)$ as the power assigned to the infinitesimal information layer of
transmitter $i$ corresponding to fading power $s$. Accordingly, we define the
interference terms
$\displaystyle I_{i}(s)=\int_{s}^{\infty}\rho_{i}(u)\;\,\textnormal{d}u\ .$
(181)
When the channels’ fading powers are $s_{1}$ and $s_{2}$, we define ${\sf
SNR}_{i}(s_{1},s_{2})$ as the effective SNR of transmitter $i$. These SNR
terms satisfy
$\displaystyle{\sf SNR}_{1}(s_{1},s_{2})=\frac{s_{1}}{1+s_{2}I_{2}({\sf
SNR}_{2}(s_{1},s_{2}))}\ ,\qquad\mbox{and}\qquad{\sf
SNR}_{2}(s_{1},s_{2})=\frac{s_{2}}{1+s_{1}I_{1}({\sf SNR}_{1}(s_{1},s_{2}))}\
.$ (182)
Hence, corresponding to this channel combination, the rate that transmitter
$i$ can sustain reliably is
$\displaystyle R_{i}(s_{1},s_{2})=\int_{0}^{{\sf
SNR}_{i}(s_{1},s_{2})}\frac{-u\,\textnormal{d}I_{i}(u)}{1+uI_{i}(u)}\ ,$ (183)
and subsequently, the expected rate of transmitter $i$ is
$\displaystyle\bar{R}_{i}=\mathbb{E}[R_{i}(s_{1},s_{2})]=\int_{0}^{\infty}\left(1-F_{i}(u)\right)\frac{-u\,\textnormal{d}I_{i}(u)}{1+uI_{i}(u)}\
,$ (184)
where $F_{i}$ denotes the CDF of ${\sf SNR}_{i}(s_{1},s_{2})$. Any resource
allocation or optimization problem over the average rates $\bar{R}_{1}$ and
$\bar{R}_{2}$ consists in determining the power allocation functions
$\rho_{i}(s)$. For instance, finding the transmission policy that yields the
maximum average rate $\bar{R}_{1}+\bar{R}_{2}$ boils down to designing
functions $\rho_{1}$ and $\rho_{2}$. The same formulation can be readily
generalized to the $K$-user MAC, in which we designate a power allocation
function to each transmitter, accordingly define the interference functions,
the achievable rates for each specific channel realization, and the average
rate of each user.
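The coupled equations (182) define the effective SNRs only implicitly; for a given pair of residual-interference functions $I_{1},I_{2}$ they can be solved by fixed-point iteration. A sketch with an assumed toy layering (uniform $\rho$ on $[0,U]$; names are illustrative, not from the source):

```python
def I_factory(P, U):
    """Residual interference I(s) = integral_s^inf rho(u) du for the assumed
    uniform layering rho(u) = P/U on [0, U]; I(0) = P recovers total power."""
    return lambda s: P * max(0.0, 1.0 - s / U)

def effective_snrs(s1, s2, I1, I2, iters=500):
    """Iterate the coupled equations (182) to a fixed point (Gauss-Seidel)."""
    snr1, snr2 = s1, s2
    for _ in range(iters):
        snr1 = s1 / (1.0 + s2 * I2(snr2))
        snr2 = s2 / (1.0 + s1 * I1(snr1))
    return snr1, snr2
```

For this toy allocation and moderate parameters the map is contractive, so the iteration settles quickly; the resulting pair can then be plugged into (183) to obtain the sustained rates.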
### 3.5 MAC without CSIT – Two-state Channels: Adapting Streams to the
Single-user Channels
We continue by reviewing finite-state multi-access channels. This setting is
first investigated in Minero:ISIT07 for the two-state discrete channel model.
As suggested in Minero:ISIT07 , one can readily adopt the single-user strategy
of Shitz97broadcast and split the information stream of a transmitter into
two streams, each corresponding to one fading state, and encode them
independently. Recalling the canonical model in (180), let us refer to the
channel with the fading gains $s_{1}$ and $s_{2}$ as weak and strong channels,
respectively (we will use this strong versus weak dichotomous model
throughout Section 3). The two encoded information streams are subsequently
superimposed and transmitted over the channel. One of the streams, denoted by
$W_{1}$, is always decoded by the receiver, while the second stream, denoted
by $W_{2}$, is decoded only when the channel is strong.
This strategy is adopted and directly applied to the multiple access channel
in Minero:ISIT07 . Specifically, it generates two coded information streams
per transmitter, where the streams of user $i\in\\{1,2\\}$ are denoted by
$\\{W^{i}_{1},W^{i}_{2}\\}$. Based on the channels’ actual realizations, a
combination of these streams is successively decoded by the receiver. In the
first stage, the baseline streams $W^{1}_{1}$ and $W^{2}_{1}$, which
constitute the minimum amount of guaranteed information, are decoded.
Additionally, when the channel between transmitter $i$ and the receiver, i.e.,
$h_{i}$ is strong, in the second stage information stream $W^{i}_{2}$ is also
decoded. Table 1 depicts the decoding sequence corresponding to each of the
four possible channel combinations.
Table 1: Successive decoding order when adapting the layers to the single-user channels.
$(h_{1}^{2},h_{2}^{2})$ | Decoding stage 1 | Decoding stage 2
---|---|---
$({s}_{1},{s}_{1})$ | $W^{1}_{1},W^{2}_{1}$ |
$({s}_{2},{s}_{1})$ | $W^{1}_{1},W^{2}_{1}$ | $W^{1}_{2}$
$({s}_{1},{s}_{2})$ | $W^{1}_{1},W^{2}_{1}$ | $W^{2}_{2}$
$({s}_{2},{s}_{2})$ | $W^{1}_{1},W^{2}_{1}$ | $W^{1}_{2},W^{2}_{2}$
Figure 8: Equivalent network when adapting the layers to the single-user
channels (no CSIT).
Based on the codebook assignment and decoding specified in Table 1, the
equivalent multiuser network is depicted in Fig. 8. The performance limits on
the rates are characterized by delineating the interplay among the rates of
the four codebooks $\\{W_{1}^{1},W_{1}^{2},W_{2}^{1},W_{2}^{2}\\}$. We denote
the rate of codebook $W_{i}^{j}$ by $R(W_{i}^{j})$. There are two ways of
grouping these rates and assessing the interplay among different groups. One
approach would be analyzing the interplay between the rate of the codebooks
adapted to the weak channels and the codebooks’ rate adapted to the strong
channels. The second approach would be analyzing the interplay between the
rates of the two users. In a symmetric case and in the face of CSIT
uncertainty, a natural choice will be the former approach. For this purpose,
define $R_{w}=R(W_{1}^{1})+R(W_{1}^{2})$ and $R_{s}=R(W_{2}^{1})+R(W_{2}^{2})$ as the rates
of the codebooks adapted to the weak and strong channels, respectively. The
study in Minero:ISIT07 characterizes the capacity region of the pair
$(R_{w},R_{s})$ achievable in the Gaussian channel, where it is shown that
superposition coding is the optimal coding strategy. The capacity region of
this channel is specified in the following theorem.
[Minero:ISIT07 ] The $(R_{w},R_{s})$ capacity region for the channel depicted
in Fig. 8 is given by the set of all rates satisfying
$\displaystyle R_{w}$ $\displaystyle\leq C(2s_{1}(1-\beta)\;,\;2s_{1}\beta)\
,$ (185) $\displaystyle R_{s}$ $\displaystyle\leq C(2s_{2}\;,\;0)\ ,$ (186)
for all $\beta\in[0,1]$.
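The region of this theorem can be traced with the shorthand $C(x,y)$ defined in Section 3.2; a sketch mirroring (185)-(186) as stated (parameter values are hypothetical):

```python
import math

def C(x, y, P):
    """Notation C(x, y) = 1/2 log2(1 + x / (y + 1/P)) from Section 3.2."""
    return 0.5 * math.log2(1.0 + x / (y + 1.0 / P))

def region_point(beta, s1, s2, P):
    """Rate pair of (185)-(186) for a split beta in [0, 1]."""
    Rw = C(2 * s1 * (1 - beta), 2 * s1 * beta, P)  # (185)
    Rs = C(2 * s2, 0.0, P)                         # (186)
    return Rw, Rs
```

Sweeping $\beta$ shows the weak-adapted rate $R_{w}$ shrinking as more power interferes with the baseline streams.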
### 3.6 MAC without CSIT – Two-state Channels: State-dependent Layering
In the approach of Section 3.5, each transmitter adapts its transmission to
its direct link to the receiver without regard for the channel linking the
other transmitter to the receiver. However, the contribution of user
$i\in\\{1,2\\}$ to a network-wide performance metric (e.g., sum-rate capacity)
depends not only on the quality of the channel $h_{i}$, but also on the
quality of the channel of the other user. This motivates adapting the
transmission scheme of each transmitter to the MAC’s combined state instead of
the individual channels. As investigated in KazemiTajer2017 ; Tajer18
adapting to the network state can be facilitated by assigning more information
streams to each transmitter and adapting them to the combined effect of both
channels. Designing and assigning more than two information streams to each
transmitter allows for a finer resolution in successive decoding, which in
turn expands the capacity region characterized in Minero:ISIT07 .
To review the encoding and decoding scheme as well as the attendant rate
regions, we start by focusing on the two-state discrete channel model. This
setting furnishes the context to highlight the differences between the
streaming and successive decoding strategies in this section and those
investigated in Section 3.5. By leveraging the intuition gained, the general
multi-state discrete channel model will be discussed in Section 3.7.
Figure 9: Streaming and codebook assignments by user 1 and user 2.
In the approach that adapts the transmissions to the combined network states,
each transmitter splits its message into four streams corresponding to the
four possible combinations of the two channels. These codebooks for
transmitter $i\in\\{1,2\\}$ are denoted by
$\\{W^{i}_{11},W^{i}_{12},W^{i}_{21},W^{i}_{22}\\}$, where the information
stream $W^{i}_{uv}$ is associated with the channel realization in which the
channel gain of user $i$ is ${s}_{v}$, and the channel gain of the other user
is ${s}_{u}$. These stream assignments are demonstrated in Fig. 9. The initial
streams $\\{W^{1}_{11},W^{2}_{11}\\}$ account for the minimum amount of
guaranteed information, which are adapted to the channel combination
$(h^{2}_{1},h^{2}_{2})=({s}_{1},{s}_{1})$ and they should be decoded by all
four possible channel combinations. When at least one of the channels is
strong, the remaining codebooks are grouped and adapted to different channel
realizations according to the assignments described in Fig. 9. Specifically:
* •
The second group of streams $\\{W^{1}_{12},W^{2}_{21}\\}$ are reserved to be
decoded in addition to $\\{W^{1}_{11},W^{2}_{11}\\}$ when $h_{1}$ is strong,
while $h_{2}$ is still weak.
* •
Alternatively, when $h_{1}$ is weak and $h_{2}$ is strong, the third
group of streams, i.e., $\\{W^{1}_{21},W^{2}_{12}\\}$, is decoded instead.
* •
Finally, when both channels are strong, in addition to all the previous
streams, the fourth group $\\{W^{1}_{22},W^{2}_{22}\\}$ is also decoded.
Figure 10: Equivalent network when adapting the layers to the MAC (no CSIT).
The order in which the codebooks are successively decoded in different network
states is presented in Table 2. Based on this successive decoding order,
channel gain state $({s}_{1},{s}_{1})$ is degraded with respect to all other
states (i.e., the capacity region of the MAC corresponding to receiver
$y_{11}$ is strictly smaller than those of the other three receivers), while
$({s}_{1},{s}_{2})$ and $({s}_{2},{s}_{1})$ are degraded with respect to
$({s}_{2},{s}_{2})$. Clearly, the codebook assignment and successive decoding
approach presented in Table 2 subsumes the one proposed in Minero:ISIT07
presented in Table 1. In particular, Table 1 can be recovered as a special
case of Table 2 by setting the rates of the streams
$\\{W^{1}_{21},W^{2}_{21},W^{1}_{22},W^{2}_{22}\\}$ to zero. The codebook
assignment and decoding order discussed leads to the equivalent multiuser
network with two inputs $\\{x_{1},x_{2}\\}$ and four outputs
$\\{y_{11},y_{12},y_{21},y_{22}\\}$, as depicted in Fig. 10. Each receiver is
designated to decode a pre-specified set of codebooks.
Table 2: Successive decoding order of the streams adapted to the MAC $(h^{2}_{1},h^{2}_{2})$ | stage 1 | stage 2 | stage 3
---|---|---|---
$({s}_{1},{s}_{1})$ | $W^{1}_{11},W^{2}_{11}$ | |
$({s}_{2},{s}_{1})$ | $W^{1}_{11},W^{2}_{11}$ | $W^{1}_{12},W^{2}_{21}$ |
$({s}_{1},{s}_{2})$ | $W^{1}_{11},W^{2}_{11}$ | $W^{1}_{21},W^{2}_{12}$ |
$({s}_{2},{s}_{2})$ | $W^{1}_{11},W^{2}_{11}$ | $W^{1}_{12},W^{2}_{12},W^{1}_{21},W^{2}_{21}$ | $W^{1}_{22},W^{2}_{22}$
Next, we delineate the region of all achievable rates $R^{i}_{uv}$ for
$i,u,v\in\\{1,2\\}$, where $R^{i}_{uv}$ accounts for the rate of codebook
$W^{i}_{uv}$. Define $\beta^{i}_{uv}\in[0,1]$ as the fraction of the power
that transmitter $i$ allocates to stream $W^{i}_{uv}$ for $u\in\\{1,2\\}$ and
$v\in\\{1,2\\}$, where we clearly have
$\sum_{u=1}^{2}\sum_{v=1}^{2}\beta^{i}_{uv}=1$. For notational simplicity,
and in order to place the emphasis on the interplay among the rates
of different information streams, we focus on a symmetric setting in which the
corresponding streams of the two users have identical rates, i.e., the rates
$R^{1}_{uv}$ and $R^{2}_{uv}$ of the information streams $W^{1}_{uv}$ and
$W^{2}_{uv}$ coincide, and their common value is denoted by
$R_{uv}=R^{1}_{uv}=R^{2}_{uv}$.
[Tajer18 ] The achievable rate region of the rates
$(R_{11},R_{12},R_{21},R_{22})$ for the channel depicted in Fig. 10 is the set
of all rates satisfying:
$\displaystyle R_{11}\;$ $\displaystyle\;\leq\;r_{11}$ (187) $\displaystyle
R_{12}\;$ $\displaystyle\;\leq\;r_{12}$ (188) $\displaystyle R_{21}\;$
$\displaystyle\;\leq\;r_{21}$ (189) $\displaystyle R_{12}+R_{21}\;$
$\displaystyle\leq\;r_{1}$ (190) $\displaystyle 2R_{12}+R_{21}\;$
$\displaystyle\leq\;r_{12}^{\prime}$ (191) $\displaystyle R_{12}+2R_{21}\;$
$\displaystyle\;\leq\;r_{21}^{\prime}$ (192) $\displaystyle R_{22}\;$
$\displaystyle\;\leq r_{22}\ ,$ (193)
over all possible power allocation factors $\beta^{i}_{uv}\in[0,1]$ such that
$\sum_{u=1}^{2}\sum_{v=1}^{2}\beta^{i}_{uv}=1$, where by setting
$\bar{\beta}_{uv}=1-\beta_{uv}$ we have defined
$\displaystyle r_{11}$
$\displaystyle=\min\Big{\\{}\frac{1}{2}\;C\big{(}2s_{1}\beta_{11},2s_{1}\bar{\beta}_{11}\big{)}\;,\;C\big{(}s_{1}\beta_{11},(s_{1}+s_{2})\bar{\beta}_{11}\big{)}\Big{\\}}\
,$ (194) $\displaystyle r_{12}$
$\displaystyle=\min\Big{\\{}\frac{1}{2}\;C\big{(}2s_{2}\beta_{12},2s_{2}\beta_{22}\big{)}\;,\;C\big{(}s_{2}\beta_{12},s_{1}(\beta_{12}+\beta_{22})+s_{2}(\beta_{21}+\beta_{22})\big{)}\Big{\\}}\
,$ (195) $\displaystyle r_{21}$
$\displaystyle=\min\Big{\\{}\frac{1}{2}\;C\big{(}2s_{2}\beta_{21},2s_{2}\beta_{22}\big{)}\;,C\big{(}s_{1}\beta_{21},s_{1}(\beta_{12}+\beta_{22})+s_{2}(\beta_{21}+\beta_{22})\big{)}\;\Big{\\}}\
,$ (196) $\displaystyle r_{1}$
$\displaystyle=\min\Big{\\{}\frac{1}{2}\;C\big{(}2s_{2}(\beta_{12}+\beta_{21}),2s_{2}\beta_{22}\big{)},C\big{(}s_{1}\beta_{21}+s_{2}\beta_{12},s_{1}(\beta_{12}+\beta_{22})+s_{2}(\beta_{21}+\beta_{22})\big{)}\Big{\\}}\
,$ $\displaystyle r_{12}^{\prime}$
$\displaystyle=\;C\big{(}s_{2}(2\beta_{12}+\beta_{21})\;,\;2s_{2}\beta_{22}\big{)}\
,$ (197) $\displaystyle r_{21}^{\prime}$
$\displaystyle=\;C\big{(}s_{2}(\beta_{12}+2\beta_{21})\;,\;2s_{2}\beta_{22}\big{)}\
,$ (198) $\displaystyle r_{22}$
$\displaystyle=\;\frac{1}{2}C\big{(}2s_{2}\beta_{22}\;,\;0\big{)}\ .$ (199)
###### Proof.
The proof follows from the structure of the rate-splitting approach presented
in Fig. 9 and the decoding strategy presented in Table 2. The detailed proof
is provided in (Tajer18, , Appendix B). ∎
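The rate constraints above can be evaluated directly for any power split. The sketch below implements (187)–(199) under the assumed convention $C(x,y)=\log_{2}(1+x/(1+y))$ and reads the gains in (195) as $s_{1},s_{2}$ (the printed $\alpha$'s appear to be a transcription artifact); the feasibility check and the equal-split example are illustrative, not part of the source.

```python
import math

def C(x, y):
    # Assumed Gaussian rate convention.
    return math.log2(1.0 + x / (1.0 + y))

def stream_rate_bounds(s1, s2, b11, b12, b21, b22):
    """Evaluate the bounds r_11, ..., r_22 of (194)-(199) for a symmetric power split."""
    assert abs(b11 + b12 + b21 + b22 - 1.0) < 1e-9
    nb11 = 1.0 - b11  # \bar{beta}_{11}
    cross = s1 * (b12 + b22) + s2 * (b21 + b22)  # interference term shared by (195)-(196)
    r11 = min(0.5 * C(2 * s1 * b11, 2 * s1 * nb11), C(s1 * b11, (s1 + s2) * nb11))
    r12 = min(0.5 * C(2 * s2 * b12, 2 * s2 * b22), C(s2 * b12, cross))
    r21 = min(0.5 * C(2 * s2 * b21, 2 * s2 * b22), C(s1 * b21, cross))
    r1 = min(0.5 * C(2 * s2 * (b12 + b21), 2 * s2 * b22),
             C(s1 * b21 + s2 * b12, cross))
    r12p = C(s2 * (2 * b12 + b21), 2 * s2 * b22)
    r21p = C(s2 * (b12 + 2 * b21), 2 * s2 * b22)
    r22 = 0.5 * C(2 * s2 * b22, 0.0)
    return dict(r11=r11, r12=r12, r21=r21, r1=r1, r12p=r12p, r21p=r21p, r22=r22)

def feasible(R11, R12, R21, R22, b):
    """Check candidate stream rates against the constraints (187)-(193)."""
    return (R11 <= b["r11"] and R12 <= b["r12"] and R21 <= b["r21"]
            and R12 + R21 <= b["r1"] and 2 * R12 + R21 <= b["r12p"]
            and R12 + 2 * R21 <= b["r21p"] and R22 <= b["r22"])

# Equal power split across the four streams, purely as an example.
bounds = stream_rate_bounds(s1=2.5, s2=10.0, b11=0.25, b12=0.25, b21=0.25, b22=0.25)
```

Sweeping the split $(\beta_{11},\beta_{12},\beta_{21},\beta_{22})$ over the simplex and collecting the feasible rate tuples traces out the achievable region reported in Fig. 11.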
In order to compare the achievable rate region in Theorem 2 and the capacity
region presented in Theorem 8, we group the information streams in the way
that they are ordered and decoded in Minero:ISIT07 . Specifically, the streams
$\\{W^{1}_{21},W^{2}_{21},W^{1}_{22},W^{2}_{22}\\}$ are allocated zero power.
Information streams $W_{11}^{1}$ and $W_{11}^{2}$ are adapted to the weak
channels, and the information streams $W_{12}^{1}$ and $W_{12}^{2}$ are
reserved to be decoded when one or both channels are strong. Information
streams adapted to the strong channels are grouped and their rates are
aggregated, and those adapted to the weak channels are likewise grouped and
their rates aggregated. Based on this, the region presented in Theorem 2 can be
used to form the sum-rates $R_{w}=(R^{1}_{11}+R^{2}_{11})$ and
$R_{s}=(R^{1}_{12}+R^{2}_{12})$.
[Tajer18 ] By setting the power allocated to streams
$\\{W^{1}_{21},W^{2}_{21},W^{1}_{22},W^{2}_{22}\\}$ to zero, the achievable
rate region characterized by Theorem 2 reduces to the following region, which
coincides with the capacity region characterized in Minero:ISIT07 .
$\displaystyle R_{w}\;$
$\displaystyle\;\leq\min\\{a_{3},a_{6},a_{9},a_{4}+a_{8}\\}\ ,$ (200)
$\displaystyle\mbox{and}\quad R_{s}\;$ $\displaystyle\;\leq
C\left({{s}_{2}\beta^{1}_{12}+{s}_{2}\beta^{2}_{12}\;,\;0}\right)\ ,$ (201)
where we have defined
$\displaystyle a_{3}=$ $\displaystyle
C\left(s_{1}(\beta^{1}_{11}+\beta^{2}_{11}),s_{1}(\bar{\beta}^{1}_{11}+\bar{\beta}^{2}_{11})\right)\
,$ (202) $\displaystyle a_{4}=$ $\displaystyle
C\left(s_{1}\beta^{1}_{11}\;,\;s_{1}\bar{\beta}^{1}_{11}+s_{2}\bar{\beta}^{2}_{11}\right)\
,$ (203) $\displaystyle a_{6}=$ $\displaystyle
C\left(s_{1}\beta^{1}_{11}+s_{2}\beta^{2}_{11}\;,\;s_{1}\bar{\beta}^{1}_{11}+s_{2}\bar{\beta}^{2}_{11}\right)\
,$ (204) $\displaystyle a_{8}=$ $\displaystyle
C\left(s_{1}\beta^{2}_{11}\;,\;s_{2}\bar{\beta}^{1}_{11}+s_{1}\bar{\beta}^{2}_{11}\right)\
,$ (205) $\displaystyle a_{9}=$ $\displaystyle
C\left(s_{2}\beta^{1}_{11}+s_{1}\beta^{2}_{11}\;,\;s_{2}\bar{\beta}^{1}_{11}+s_{1}\bar{\beta}^{2}_{11}\right)\
.$ (206)
###### Proof.
See (Tajer18, , Appendix D). ∎
Figure 11: Comparison of the capacity region in Section 3.5 and achievable
rate region in Section 3.6 demonstrating the trade-off between $R_{s}$ and
$R_{w}$, and $\bar{R}_{s}$ and $\bar{R}_{w}$. Transmission SNR is 10, and the
channel gains are $(\sqrt{{s}_{1}},\sqrt{{s}_{2}})=(0.5,1)$.
Figure 11 quantifies and compares the achievable rate regions characterized in
the two preceding theorems with the capacity region characterized in Theorem 8.
These regions capture the interplay among the rates of the individual
codebooks, while the capacity region of Theorem 8 characterizes the trade-off
between the sum-rates of the information streams adapted to the weak and
strong channels. To have a common ground for comparison, the results of these
theorems can be recast in terms of the codebooks associated with the weak and
strong channel states. Recall that earlier we
defined the sum-rates
$\displaystyle R_{w}=R^{1}_{11}+R^{2}_{11}\ ,\quad\mbox{and}\quad
R_{s}=R^{1}_{12}+R^{2}_{12}\ .$ (207)
Accordingly, for the coding scheme (Table 2) we define
$\displaystyle\bar{R}_{w}$
$\displaystyle=R^{1}_{11}+R^{2}_{11}+R_{21}^{1}+R_{21}^{2}+R_{12}^{1}+R_{12}^{2}\
,$ (208) $\displaystyle\quad\mbox{and}\quad\bar{R}_{s}$
$\displaystyle=R^{1}_{22}+R^{2}_{22}\ .$ (209)
Based on these definitions, Fig. 11 demonstrates the regions described by
$(R_{w},R_{s})$ and $(\bar{R}_{w},\bar{R}_{s})$, in which the transmission SNR
is 10, the channel coefficients are $(\sqrt{{s}_{1}},\sqrt{{s}_{2}})=(0.5,1)$,
and the regions are optimized over all possible power allocation ratios. The
numerical evaluation in Fig. 11 depicts that the achievable rate region in
Theorem 2 subsumes that of Theorem 8, and the gap between the two regions
diminishes as the rates of the information layers adapted to the strong
channels increase, i.e., as $R_{s}$ and $\bar{R}_{s}$
increase. Next, in order to assess the tightness of the achievable rate
regions, we present an outer bound on the capacity region of the network in
Fig. 10.
Figure 12: Comparison of the capacity region in Section 3.5 and achievable
rate region and outer bounds in Section 3.6.
[Tajer18 ] An outer bound for the capacity region of the rates
$(R_{11},R_{12},R_{21},R_{22})$ for the channel depicted in Fig. 10 is the set
of all rates satisfying:
$\displaystyle
R_{11}\leq\frac{1}{2}a_{3}\;,\;R_{12}\leq\frac{1}{2}a_{24}\;,\;R_{21}\leq\frac{1}{2}a_{27}\;,\;R_{22}\leq
r_{22}\ ,$
where we have defined
$\displaystyle a_{24}$
$\displaystyle=C\left({s_{2}\beta^{1}_{12}+s_{2}\beta^{2}_{12}\;,\;s_{2}\beta^{1}_{22}+s_{2}\beta^{2}_{22}}\right)\
,$ (210) $\displaystyle a_{27}$
$\displaystyle=C\left({s_{2}\beta^{1}_{21}+s_{2}\beta^{2}_{21}\;,\;s_{2}\beta^{1}_{22}+s_{2}\beta^{2}_{22}}\right)\
,$ (211) $\displaystyle r_{22}$
$\displaystyle=\;\frac{1}{2}C\big{(}2s_{2}\beta_{22}\;,\;0\big{)}\ .$ (212)
Figure 12 compares the outer bound specified in Theorem 12 and the achievable
rate region presented in Theorem 2 for SNR values 1 and 5, and the choice of
$(\sqrt{s_{1}},\sqrt{s_{2}})=(0.5,1)$. Corresponding to each SNR, this figure
illustrates the capacity region obtained in Theorem 8, as well as the
achievable rate region and the outer bound reviewed in this section.
Next, we evaluate the average rate, a relevant long-term measure capturing the
expected rate over a large number of transmission cycles, where each cycle
undergoes an independent fading realization. Consider a symmetric channel, in
which the corresponding information streams are allocated identical power and
have the same rate, and set $R_{uv}=R^{1}_{uv}=R^{2}_{uv}$ for
$u,v\in\\{1,2\\}$. Also, consider a symmetric distribution for $h_{1}$ and
$h_{2}$ such that
$\mathbb{P}(h_{1}^{2}={s}_{i})=\mathbb{P}(h_{2}^{2}={s}_{i})$ for
$i\in\\{1,2\\}$, and define
$p=\mathbb{P}(h_{1}^{2}={s}_{1})=\mathbb{P}(h_{2}^{2}={s}_{1})$. By leveraging
the stochastic model of the fading process, the average rate is
$\displaystyle R_{\rm ave}=2[R_{11}+(1-p)(R_{12}+R_{21})+(1-p)^{2}R_{22}]\ .$
(213)
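As a sanity check on (213), the average sum-rate is a weighted combination of the stream rates, with weights given by the probabilities that each group of streams is decodable; a minimal sketch:

```python
def average_rate(p, R11, R12, R21, R22):
    """Average sum-rate of (213), where p is the probability of a weak channel.

    R11 is always decoded; R12 and R21 are each decoded when the relevant
    channel is strong (probability 1 - p); R22 requires both channels strong.
    The leading factor 2 accounts for the two symmetric users.
    """
    return 2.0 * (R11 + (1 - p) * (R12 + R21) + (1 - p) ** 2 * R22)
```

At $p=0$ every stream is decoded and the expression collapses to twice the full sum of stream rates, which is why the region comparison of Fig. 11 matches the average rate in that limit, as noted below.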
Figure 13 depicts the variations of the average sum-rate versus $p$ for
different values of ${s}_{1}$. The observations from this figure confirm that
the gain becomes more pronounced as $p$ decreases. It is noteworthy that the
results of Fig. 11 validate the observation from Fig. 13 that the improvement
in the average rate is significant when the probability of encountering a weak
channel state is low, since the rate distribution considered in the achievable
rate region comparison coincides with the average rate when the probability of
observing ${s}_{1}$ is zero.
Figure 13: Average sum-rate versus $p$ for different values of ${s}_{1}$
(${s}_{2}=1$ and SNR=5).
### 3.7 MAC without CSIT – Multi-state Channels: State-dependent Layering
The idea of adapting the transmission to the combined network states can be
extended to devise codebook assignment and decoding strategy schemes for the
general multiple-state channel. Similarly to the two-state channel, in the
$\ell$-state channel model, $\ell^{2}$ codebooks are assigned to each
transmitter. Hence, corresponding to the combined channel state
$(h_{1}^{2},h_{2}^{2})=({s}_{q},{s}_{p})$ codebook $W^{1}_{pq}$ is assigned to
transmitter 1 and codebook $W^{2}_{qp}$ is assigned to transmitter 2. By
following the same line of analysis as in the two-state channel, the network
state $(h_{1}^{2},h_{2}^{2})=({s}_{1},{s}_{1})$ can be readily verified to be
degraded with respect to states $({s}_{1},{s}_{2})$, $({s}_{2},{s}_{1})$, and
$({s}_{2},{s}_{2})$ when ${s}_{2}>{s}_{1}$. Additionally, channel combinations
$({s}_{1},{s}_{2})$ and $({s}_{2},{s}_{1})$ are also degraded with respect to
state $({s}_{2},{s}_{2})$. When a particular transmitter’s channel becomes
stronger while the interfering channel remains constant, the receiver can
afford to decode additional codebooks of that transmitter. Similarly, when a
transmitter’s own channel remains constant while the interfering channel
becomes stronger, the receiver can decode additional layers. This can be
facilitated by decoding and removing the interfering transmitter’s message,
after which the remaining layers experience reduced interference. Based on these observations, by
ordering the different realizations of $h_{1}$ and $h_{2}$ in the ascending
order and determining their relative degradedness, a successive decoding
strategy is illustrated in Table 3. In this table $A_{p,q}$ denotes the cell
in the $p^{\rm th}$ row and the $q^{\rm th}$ column, and it specifies the set
of codebooks $\mathcal{U}_{pq}$ to be decoded when the combined channel state
is $(h_{1}^{2},h_{2}^{2})=({s}_{q},{s}_{p})$. In this table, the codebooks set
to be decoded in each possible combined state is recursively related to the
codebooks decoded in the weaker channels. Specifically, the state
corresponding to $A_{p-1,q-1}$ is degraded with respect to states $A_{p,q-1}$
and $A_{p-1,q}$. Therefore, in the state $A_{p,q}$, the receiver decodes all
streams from states $A_{p-1,q-1}$ (included in ${\cal U}_{p-1,q-1}$),
$A_{p,q-1}$ (included in ${\cal U}_{p,q-1}$), and $A_{p-1,q}$ (included in
${\cal U}_{p-1,q}$). Subsequently, these are followed by decoding one
additional stream from each user denoted by $W^{1}_{pq}$ and $W^{2}_{qp}$.
When both channel coefficients have the strongest possible realizations, all
the streams from both users will be decoded at the receiver.
Table 3: Successive decoding order for the $\ell$-state MAC. In each cell, the sets $\mathcal{U}$ list the previously decoded codebooks, followed by the two newly decoded streams. $h^{2}_{2}\;\backslash\;h^{2}_{1}$ | ${s}_{1}$ | ${s}_{2}$ | $\cdots$ | ${s}_{q}$ | $\cdots$ | ${s}_{\ell}$
---|---|---|---|---|---|---
${s}_{1}$ | $W^{1}_{11}$ , $W^{2}_{11}$ | $\mathcal{U}_{11}$ ; $W_{12}^{1}$ , $W_{21}^{2}$ | $\cdots$ | $\cdots$ | $\cdots$ | $\mathcal{U}_{1(\ell-1)}$ ; $W_{1\ell}^{1}$ , $W_{\ell 1}^{2}$
${s}_{2}$ | $\mathcal{U}_{11}$ ; $W_{21}^{1}$ , $W_{12}^{2}$ | $\mathcal{U}_{11}$ , $\mathcal{U}_{12}$ , $\mathcal{U}_{21}$ ; $W_{22}^{1}$ , $W_{22}^{2}$ | $\cdots$ | $\cdots$ | $\cdots$ | $\mathcal{U}_{1(\ell-1)}$ , $\mathcal{U}_{2(\ell-1)}$ , $\mathcal{U}_{1\ell}$ ; $W_{2\ell}^{1}$ , $W_{\ell 2}^{2}$
$\vdots$ | | | | | |
${s}_{p}$ | $\cdots$ | $\cdots$ | $\cdots$ | $\mathcal{U}_{(p-1)(q-1)}$ , $\mathcal{U}_{p(q-1)}$ , $\mathcal{U}_{(p-1)q}$ ; $W_{pq}^{1}$ , $W_{qp}^{2}$ | $\cdots$ | $\cdots$
$\vdots$ | | | | | |
${s}_{\ell}$ | $\mathcal{U}_{(\ell-1)1}$ ; $W_{\ell 1}^{1}$ , $W_{1\ell}^{2}$ | $\mathcal{U}_{(\ell-1)1}$ , $\mathcal{U}_{\ell 1}$ , $\mathcal{U}_{(\ell-1)2}$ ; $W_{\ell 2}^{1}$ , $W_{2\ell}^{2}$ | $\cdots$ | $\cdots$ | $\cdots$ | $\mathcal{U}_{(\ell-1)(\ell-1)}$ , $\mathcal{U}_{\ell(\ell-1)}$ , $\mathcal{U}_{(\ell-1)\ell}$ ; $W_{\ell\ell}^{1}$ , $W_{\ell\ell}^{2}$
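The recursive structure described above (each set $\mathcal{U}_{pq}$ is the union of its weaker neighbors plus one new stream per user) can be sketched as follows; the string labels for codebooks are illustrative and assume single-digit state indices.

```python
def decode_sets(ell):
    """Build U[(p, q)]: the codebooks decoded in state (h1^2, h2^2) = (s_q, s_p).

    U_{pq} = U_{p-1,q-1} | U_{p,q-1} | U_{p-1,q} | {W^1_{pq}, W^2_{qp}},
    with out-of-range predecessors treated as empty sets.
    """
    U = {}
    for p in range(1, ell + 1):
        for q in range(1, ell + 1):
            s = set()
            for (a, b) in [(p - 1, q - 1), (p, q - 1), (p - 1, q)]:
                if a >= 1 and b >= 1:
                    s |= U[(a, b)]
            # One new stream per user in each state (labels assume ell <= 9).
            s |= {f"W1_{p}{q}", f"W2_{q}{p}"}
            U[(p, q)] = s
    return U

U = decode_sets(2)
```

In the strongest state $(s_{\ell},s_{\ell})$ all $2\ell^{2}$ codebooks of the two users are decoded, matching the last cell of Table 3.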
Next, the rate region achieved by this scheme is presented in Theorem 3 for the general
multi-state channel. It can be verified that the region characterized by
Theorem 2 is subsumed by this general rate region. Similarly to the two-state
channel settings, define $R^{i}_{uv}$ as the rate of codebook $W^{i}_{uv}$ for
$i\in\\{1,2\\}$ and $u,v\in\\{1,\dots,\ell\\}$. Furthermore, define
$\beta_{uv}\in[0,1]$ as the fraction of the power allocated to the codebook
$W^{i}_{uv}$, where $\sum_{u=1}^{\ell}\sum_{v=1}^{\ell}\beta_{uv}=1$. For
notational simplicity and to emphasize the interplay among the rates, we
focus on the symmetric case in which $R_{uv}=R^{1}_{uv}=R^{2}_{uv}$. [Tajer18
] A region of simultaneously achievable rates
$\\{R_{uv}:u<v\;\;\mbox{and}\;\;u,v\in\\{1,\dots,\ell\\}\\}$
for an $\ell$-state two-user multiple access channel is characterized as the
set of all rates satisfying:
$\displaystyle R_{uv}$
$\displaystyle\leq\min\left\\{b_{1}(u,v),b_{2}(u,v),\frac{b_{3}(u,v)}{2}\right\\}$
(214) $\displaystyle R_{vu}$
$\displaystyle\leq\min\left\\{b_{4}(u,v),\frac{b_{5}(u,v)}{2}\right\\}$ (215)
$\displaystyle R_{uv}+R_{vu}$
$\displaystyle\leq\min\left\\{b_{6}(u,v),b_{7}(u,v),\frac{b_{8}(u,v)}{2}\right\\}$
(216) $\displaystyle 2R_{uv}+R_{vu}$ $\displaystyle\leq b_{9}(u,v)$ (217)
$\displaystyle R_{uv}+2R_{vu}$ $\displaystyle\leq b_{10}(u,v)$ (218)
$\displaystyle R_{uu}$
$\displaystyle\leq\min\left\\{b_{11}(u),\frac{b_{12}(u)}{2}\right\\}\ ,$ (219)
where constants $\\{b_{i}:i\in\\{1,\dots,12\\}\\}$ are specified in Appendix
A.
### 3.8 MAC with Local CSIT – Two-state Channels: Fixed Layering
Next, we turn to the setting in which the transmitters have local CSI.
Specifically, each channel randomly takes one of a finite number of states,
and each transmitter only knows the state of its direct channel to the
receiver perfectly, along with the probability distribution of the state of
the other transmitter’s channel. This model was first studied in Zou13 , in
which a single-user broadcast approach is directly applied to the MAC. In this
approach, each transmitter generates two coded layers, where each layer is
adapted to one of the states of the channel linking the other transmitter to
its receiver. This transmission approach is followed by successive decoding at
the receiver in which there exists a pre-specified order of decoding of the
information layers.
This scheme assigns codebooks based on channels’ strengths such that it
reserves one additional information layer as the channel state gets stronger.
In this scheme, the number of transmitted layers and the decoding order are
fixed and independent of the actual channel state. In the two-state channel
model, when a transmitter $i$ experiences the channel state ${s}_{m}$, it
splits its message into two information layers via two independent codebooks
denoted by $T^{i}_{m1}$ and $T^{i}_{m2}$. The rate of layer $T^{i}_{m1}$ is
adapted to the weak channel state of the other user while the rate of layer
$T^{i}_{m2}$ is adapted to the strong channel state. Thus, each transmitter
encodes its information stream by two layers and adapts the power distribution
between them according to its channel state. Subsequently, the receiver
implements a successive decoding scheme according to which it decodes one
layer from transmitter $1$ followed by one layer from transmitter $2$, and
then the remaining layer of transmitter $1$, and finally the remaining layer
of transmitter $2$. This order is pre-fixed and is used in all channel states.
This scheme is summarized in Table 4.
Table 4: Successive decoding scheme in Zou13 $(h_{1},h_{2})$ | stage 1 | stage 2 | stage 3 | stage 4
---|---|---|---|---
$(s_{1},s_{1})$ | $T^{1}_{11}$ | $T^{2}_{11}$ | $T^{1}_{12}$ | $T^{2}_{12}$
$(s_{1},s_{2})$ | $T^{1}_{11}$ | $T^{2}_{21}$ | $T^{1}_{12}$ | $T^{2}_{22}$
$(s_{2},s_{1})$ | $T^{1}_{21}$ | $T^{2}_{11}$ | $T^{1}_{22}$ | $T^{2}_{12}$
$(s_{2},s_{2})$ | $T^{1}_{21}$ | $T^{2}_{21}$ | $T^{1}_{22}$ | $T^{2}_{22}$
The following theorem characterizes an outer bound on the average rate region.
For this purpose, define $R_{i}(h_{1},h_{2})$ as the rate of transmitter $i$
for the state pair $(h_{1},h_{2})$. Accordingly, define
$\bar{R}_{i}=\mathbb{E}_{h_{1},h_{2}}[R_{i}(h_{1},h_{2})]$ as the average rate
of transmitter $i$, where the expected value is with respect to the
distributions of $h_{1}$ and $h_{2}$; here $q_{i}$ denotes the probability
that channel $h_{i}$ is in the weak state, i.e.,
$q_{i}=\mathbb{P}(h_{i}^{2}={s}_{1})$. [Zou13 ] When the transmitters have
local CSIT, an outer bound on the expected capacity region contains rates
$(\bar{R}_{1},\bar{R}_{2})$ satisfying
$\displaystyle\bar{R}_{1}$ $\displaystyle\leq
q_{1}C(s_{1},0)+(1-q_{1})C(s_{2},0)$ (220) $\displaystyle\bar{R}_{2}$
$\displaystyle\leq q_{2}C(s_{1},0)+(1-q_{2})C(s_{2},0)$ (221)
$\displaystyle\bar{R}_{1}+\bar{R}_{2}$ $\displaystyle\leq
q_{1}q_{2}C(2s_{1},0)+(q_{1}+q_{2}-2q_{1}q_{2})C(s_{1}+s_{2},0)+(1-q_{1})(1-q_{2})C(2s_{2},0)\
.$ (222)
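Under the assumed convention $C(x,y)=\log_{2}(1+x/(1+y))$, the bound above is straightforward to evaluate. In the sketch below, $q_{i}$ is read as the probability that channel $i$ is weak, an interpretation inferred from the structure of (220)–(222), and the second per-user bound is applied to $\bar{R}_{2}$ by symmetry.

```python
import math

def C(x, y):
    # Assumed Gaussian rate convention.
    return math.log2(1.0 + x / (1.0 + y))

def local_csit_outer_bound(s1, s2, q1, q2):
    """Return the outer bounds of (220)-(222): per-user bounds and a sum-rate bound.

    q1, q2 are read as the probabilities that channels 1 and 2 are weak.
    """
    b1 = q1 * C(s1, 0) + (1 - q1) * C(s2, 0)          # (220)
    b2 = q2 * C(s1, 0) + (1 - q2) * C(s2, 0)          # (221)
    bsum = (q1 * q2 * C(2 * s1, 0)                     # both weak
            + (q1 + q2 - 2 * q1 * q2) * C(s1 + s2, 0)  # exactly one strong
            + (1 - q1) * (1 - q2) * C(2 * s2, 0))      # both strong, (222)
    return b1, b2, bsum
```

Each term of the sum-rate bound is the full-CSI sum capacity of the corresponding state, weighted by that state's probability, which is why this quantity upper-bounds any scheme with only local CSIT.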
Figure 14: Equivalent network for the two-user MAC (local CSIT).
### 3.9 MAC with Local CSIT – Two-State Channels: State-dependent Layering
Next, we present another scheme for the MAC with local CSIT that generalizes
the scheme of Section 3.8 via adapting information layering to the combined
states of the channel. The underlying motivation guiding this generalization
is that we need to account for both the direct and interfering roles that each
transmitter plays. Hence, the transmission rates of different layers should be
adapted to the combined state of the entire network. The major difference
between this approach and that in Section 3.8 is that this scheme relies on
the local CSI available to the individual transmitters, such that
each transmitter adapts its layers and their associated rates to the
instantaneous state of the channel. This facilitates opportunistically
sustaining higher rates.
Figure 15: Layering and codebook assignments.
State-dependent Layering. In this approach, each transmitter, depending on the
instantaneous state of the local CSI available to it, splits its message into
independent information layers. Formally, when transmitter $i\in\\{1,2\\}$ is
in the weak state, it encodes its message by only one layer, which we denote
by $U^{i}_{11}$. Conversely, when transmitter $i\in\\{1,2\\}$ is in
the strong state, it divides its message into two information layers, which we
denote by $U^{i}_{12}$, and $U^{i}_{22}$. Hence, transmitter $i$ adapts the
codebook $U^{i}_{12}$ (or $U^{i}_{22}$) to the state in which the other
transmitter experiences a weak (or strong) channel. A summary of the layering
scheme and the assignment of the codebooks to different network states is
provided in Fig. 15. In this table, the cell associated with the state
$({s}_{m},{s}_{n})$ for $m,n\in\\{1,2\\}$ specifies the codebook adapted to
this state.
Decoding Scheme. A successive decoding scheme is designed based on the premise
that as the combined channel state becomes stronger, more layers are decoded.
Based on this, the total number of codebooks decoded increases as one of
the two channels becomes stronger. In this decoding scheme, the combination of
codebooks decoded in different states is as follows (it is summarized in
Table 5):
* •
State $({s}_{1},{s}_{1})$: In this state, both transmitters experience weak
states, and they generate codebooks $\\{U^{1}_{11},U^{2}_{11}\\}$ according to
Fig. 15. In this state, the receiver jointly decodes the baseline layers
$U^{1}_{11}$ and $U^{2}_{11}$.
* •
State $({s}_{2},{s}_{1})$: When the channel of transmitter 1 is strong and the
channel of transmitter 2 is weak, three codebooks
$\\{U^{1}_{12},U^{1}_{22},U^{2}_{11}\\}$ are generated and transmitted. As
specified by Table 5, the receiver jointly decodes
$\\{U^{1}_{12},U^{2}_{11}\\}$. This is followed by decoding the remaining
codebook, i.e., $U^{1}_{22}$.
* •
State $({s}_{1},{s}_{2})$: In this state, codebook generation and decoding are
similar to those in the state $({s}_{2},{s}_{1})$, except that the roles of
transmitters 1 and 2 are interchanged.
* •
State $({s}_{2},{s}_{2})$: Finally, when both transmitters experience strong
channels, the receiver decodes four codebooks in the order specified by the
last row of Table 5. Specifically, the receiver first jointly decodes the
baseline layers $\\{U^{1}_{12},U^{2}_{12}\\}$, followed by jointly decoding
the remaining codebooks $\\{U^{1}_{22},U^{2}_{22}\\}$.
$(h^{2}_{1},h^{2}_{2})$ | stage 1 | stage 2
---|---|---
$({s}_{1},{s}_{1})$ | $U^{1}_{11},U^{2}_{11}$ |
$({s}_{2},{s}_{1})$ | $U^{1}_{12},U^{2}_{11}$ | $U^{1}_{22}$
$({s}_{1},{s}_{2})$ | $U^{1}_{11},U^{2}_{12}$ | $U^{2}_{22}$
$({s}_{2},{s}_{2})$ | $U^{1}_{12},U^{2}_{12}$ | $U^{1}_{22},U^{2}_{22}$
Table 5: Decoding scheme
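The state-dependent decoding order of Table 5 amounts to a small lookup table; a sketch with illustrative string labels:

```python
# Successive decoding stages per combined state (h1^2, h2^2), following Table 5.
DECODING_ORDER = {
    ("s1", "s1"): [["U1_11", "U2_11"]],
    ("s2", "s1"): [["U1_12", "U2_11"], ["U1_22"]],
    ("s1", "s2"): [["U1_11", "U2_12"], ["U2_22"]],
    ("s2", "s2"): [["U1_12", "U2_12"], ["U1_22", "U2_22"]],
}

def decoded_codebooks(state):
    """Flatten the stages into the full list of codebooks decoded in a state."""
    return [cb for stage in DECODING_ORDER[state] for cb in stage]
```

Unlike the fixed four-stage order of Table 4, both the number of stages and the codebooks decoded per stage vary with the combined state, which is what enables the state-dependent power optimization discussed next.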
Compared to the setting without any CSIT at the transmitter (i.e., the setting
discussed in Section 3.6), the key difference is that the transmitters have
distinct transmission strategies when they are experiencing different channel
states. Specifically, each transmitter dynamically chooses its layering scheme
based on the instantaneous channel state known to it. Furthermore, the major
difference with the scheme of Section 3.8 is that this scheme adapts the
number of encoded layers proportionately to the strength of the combined
channel state. Such adaptation of the number of encoded layers results in two
advantages. The first one is that adapting the number of layers leads to
overall fewer information layers to be generated and transmitted. This, in
turn, results in decoding overall fewer codebooks and reduced decoding
complexity. The second advantage pertains to providing the receiver with the
flexibility to vary the decoding order according to the combined channel
state. This allows for a higher degree of freedom in optimizing power
allocation, and subsequently, larger achievable rate regions. In support of
these observations, the numerical evaluations in Fig. 17 show that the
achievable rate region subsumes that of Section 3.8. Furthermore, as the number of channel
states increases, the sum-rate gap between these two schemes becomes more
significant. Finally, depending on the actual channel state, the scheme in
this section decodes between $2$ and $\frac{\ell(\ell+1)}{2}$ codebooks,
whereas the scheme of Section 3.8 always decodes $\ell^{2}$ codebooks.
It is noteworthy that when the channel states in the two-state channel model of
Fig. 14 are ${s}_{1}=0$ and ${s}_{2}=1$, this model simplifies to the
two-user random access channel investigated in Section 3.5. In this special
case, reserving one codebook to be decoded exclusively in each of the
interference-free states, i.e., $({s}_{1},{s}_{2})$ and $({s}_{2},{s}_{1})$,
enlarges the achievable rate region. Hence, it is beneficial in this special
case to treat codebooks $(U^{1}_{22},U^{2}_{22})$ as interference whenever
both users are active, i.e., when the channel state is $({s}_{2},{s}_{2})$. In
general, however, when the channel gain $s_{1}$ is non-zero, i.e.,
${s}_{1}>0$, reserving two codebooks to be decoded exclusively in these two
channel states limits the average achievable rate region.
Achievable Rate Region. Next, we provide an inner bound on the average
capacity region. Recall that the average rate of transmitter $i$ is denoted by
$\bar{R}_{i}=\mathbb{E}_{h_{1},h_{2}}[R_{i}(h_{1},h_{2})]$, where the
expectation is with respect to the random variables $h_{1}$ and $h_{2}$.
Hence, the average capacity region is the convex hull of all simultaneously
achievable average rates $(\bar{R}_{1},\bar{R}_{2})$. Furthermore, we define
$\beta^{k}_{ij}\in[0,1]$ as the ratio of the total power $P$ assigned to
information layer $U^{k}_{ij}$, where we have
$\sum_{i=1}^{j}\beta^{k}_{ij}=1$
for all $j,k\in\\{1,2\\}$. The next theorem characterizes an average
achievable rate region. [zohdy2019broadcast ] For the codebook assignment in
Fig. 15, and the decoding scheme in Table 5, for any given set of power
allocation factors $\\{\beta^{k}_{ij}\\}$, the average achievable rate region
$\\{\bar{R}_{1},\bar{R}_{2}\\}$ is the set of all rates that satisfy
$\displaystyle\bar{R}_{1}$ $\displaystyle\leq
q_{1}C\left({s}_{1},{s}_{2}\beta^{2}_{22}\right)+q_{2}\left(C\left({s}_{2}\beta^{1}_{12},{s}_{2}\beta^{1}_{22}+{s}_{2}\beta^{2}_{22}\right)\\!+\\!C\left({s}_{2}\beta^{1}_{22},0\right)\right),$
(223) $\displaystyle\bar{R}_{2}$ $\displaystyle\leq
p_{1}C({s}_{1},{s}_{2}\beta^{1}_{22})+p_{2}\left(C\left({s}_{2}\beta^{2}_{12},{s}_{2}\beta^{1}_{22}+{s}_{2}\beta^{2}_{22}\right)\\!+\\!C\left({s}_{2}\beta^{2}_{22},0\right)\right),$
(224) $\displaystyle\bar{R}_{1}+\bar{R}_{2}$ $\displaystyle\leq
q_{1}p_{1}C\left(2{s}_{1},0\right)$
$\displaystyle\quad+q_{1}p_{2}C\left({s}_{1}+{s}_{2}\beta^{2}_{12}+{s}_{2}\beta^{2}_{22},0\right)$
$\displaystyle\quad+q_{2}p_{1}C\left({s}_{1}+{s}_{2}\beta^{1}_{12}+{s}_{2}\beta^{1}_{22},0\right)$
$\displaystyle\quad+q_{2}p_{2}C\left({s}_{2}\beta^{1}_{12}+{s}_{2}\beta^{2}_{12}+{s}_{2}\beta^{1}_{22}+{s}_{2}\beta^{2}_{22},0\right).$
(225)
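The bounds (223)–(225) can be evaluated directly for a given power split. The sketch below assumes the convention $C(x,y)=\log_{2}(1+x/(1+y))$ and reads $q=(q_{1},q_{2})$ and $p=(p_{1},p_{2})$ as the state probabilities of the channels of transmitters 1 and 2; the parameter names are illustrative.

```python
import math

def C(x, y):
    # Assumed Gaussian rate convention.
    return math.log2(1.0 + x / (1.0 + y))

def average_rate_bounds(s1, s2, q, p, beta1, beta2):
    """Evaluate the average-rate bounds (223)-(225).

    beta_k = (b12, b22) is the strong-state power split of transmitter k,
    with b12 + b22 = 1 (the weak state uses a single full-power layer).
    """
    q1, q2 = q
    p1, p2 = p
    b1_12, b1_22 = beta1
    b2_12, b2_22 = beta2
    R1 = (q1 * C(s1, s2 * b2_22)
          + q2 * (C(s2 * b1_12, s2 * b1_22 + s2 * b2_22) + C(s2 * b1_22, 0)))
    R2 = (p1 * C(s1, s2 * b1_22)
          + p2 * (C(s2 * b2_12, s2 * b1_22 + s2 * b2_22) + C(s2 * b2_22, 0)))
    Rsum = (q1 * p1 * C(2 * s1, 0)
            + q1 * p2 * C(s1 + s2 * (b2_12 + b2_22), 0)
            + q2 * p1 * C(s1 + s2 * (b1_12 + b1_22), 0)
            + q2 * p2 * C(s2 * (b1_12 + b2_12 + b1_22 + b2_22), 0))
    return R1, R2, Rsum
```

Optimizing these bounds over the splits $(\beta^{k}_{12},\beta^{k}_{22})$ yields the average achievable region compared against the outer bounds in Fig. 16.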
Achieving the average rate region specified in this theorem requires decoding
the codebooks in the order specified by Table 5. Specifically, the receiver
adopts a multi-stage decoding scheme in which it decodes at most
two codebooks per stage. This decoding scheme continues until all the codebooks from
both transmitters are decoded. Even though limiting the number of codebooks to
be decoded at each stage is expected to result in a reduced rate region, it
can be readily verified that the rate region that is achieved by employing a
fully joint decoding scheme can be recovered via time-sharing among the
average achievable rates corresponding to all possible decoding orders in each
channel state.
Outer Bound. Next, we provide outer bounds on the average capacity region, and
we compare them with the achievable rate region specified by Theorem 3.9.
Outer bound 1: The first outer bound is the average capacity region
corresponding to the two-user MAC in which the transmitters have complete
access to the CSI knopp1995information . This region is specified by OTVYZO in
Fig. 16.
Outer bound 2: The second outer bound is the average capacity region of the
two-user MAC with local CSI at transmitter $1$ and full CSI at transmitter
$2$. Outer bound 2 is formally characterized in the following theorem.
[zohdy2019broadcast ] For the two-user MAC with local CSI at transmitter 1 and
full CSI at transmitter 2, the average capacity region is the set of all
average rates enclosed by the region OTUWYZO shown in Fig. 16, where the
corner points are specified in Appendix B.
Figure 16: Outer bounds on the average achievable rate region.
For the case of available local CSI at transmitter $1$ and full CSI at
transmitter $2$, it can be shown that deploying the discussed layering scheme
at transmitter $1$ (with local CSIT) achieves the average sum-rate capacity of
Outer bound 1. This is formalized in the following theorem.
[zohdy2019broadcast ] With local CSI at transmitter 1 and full CSI at
transmitter 2, an average achievable rate region is the region OTUXYZO shown
in Fig. 16. The average capacity region is achieved along TU and YZ, and the
sum-rate capacity is achieved on XY. The corner points are specified in
Appendix B.
Figure 16 illustrates the relative positions of the inner and outer bounds on the average capacity region. Specifically, the region OTVYZO is the average capacity region of a two-user MAC with full CSI at each transmitter, which serves as Outer Bound 1 specified earlier. This region encompasses Outer Bound 2, denoted by OTUWYZO. Segments $\sf TU$ and $\sf XYZ$ of the boundary of the achievable region coincide with the boundary of Outer Bound 1, i.e., with the average capacity region of the two-user MAC with full CSIT.
Figure 17: Average rate regions for $\ell=2$.
Figure 17 demonstrates the average rate region for the two-state channel. For
this region we have $P=P_{1}=P_{2}=10$ dB, and select the channel gains as
${s}_{1}=0.25$ and ${s}_{2}=1$. Accordingly, the channel probability
parameters are set to $q_{1}=p_{1}=0.5$. The main observation is that the average achievable rate region coincides with the average rate region achieved when the receiver adopts joint decoding. It can be shown that when the transmitters have local CSIT, it is possible to achieve an average sum-rate close to Outer bound 1, and that the average sum-rate capacity can be achieved asymptotically in the low and high power regimes. This observation is formalized in the next theorem. [zohdy2019broadcast ] By adopting the
codebook assignment presented and setting
$\beta^{1}_{22}=\beta^{2}_{22}=\frac{{s}_{1}}{{s}_{2}}$, the sum-rate capacity
of a two-user MAC with full CSIT is achievable asymptotically as $P\rightarrow
0$ or $P\rightarrow\infty$.
### 3.10 MAC with Local CSIT – Multi-state Channels: State-dependent Layering
In this section, we generalize the encoding and decoding strategy of Section
3.9 to the general $\ell$-state channel. When the channels have $\ell$
possible states, each transmitter is allocated $\ell$ different sets of
codebooks, one corresponding to each channel state. Specifically,
corresponding to channel state ${s}_{m}$ for $m\in\\{1,\dots,\ell\\}$,
transmitter $i$ encodes its message via $m$ information layers generated
according to independent codebooks. This set of codebooks is denoted by ${\cal
W}^{i}_{m}=\\{U^{i}_{1m},\dots,U^{i}_{mm}\\}$.
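The state-dependent codebook allocation described above is purely combinatorial and can be sketched as follows (the label strings mirror the paper's $U^{i}_{mn}$ notation; the function itself is illustrative):

```python
def codebooks(i, ell):
    """Codebook sets W^i_m = {U^i_{1m}, ..., U^i_{mm}} for transmitter i:
    the state s_m is served by m independently generated layers."""
    return {m: [f"U^{i}_{{{k}{m}}}" for k in range(1, m + 1)]
            for m in range(1, ell + 1)}
```

For an $\ell$-state channel this allocates $1+2+\dots+\ell=\ell(\ell+1)/2$ codebooks per transmitter in total.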
Table 6: Successive decoding stages for $\ell$-state MAC with local CSIT. In each cell, the entry before the slash lists the codebooks decoded from transmitter 1 (marked blue in the original), and the entry after the slash those decoded from transmitter 2 (marked red).
$h_{2}\backslash h_{1}$ | ${s}_{1}$ | ${s}_{2}$ | $\cdots$ | ${s}_{q}$ | $\cdots$ | ${s}_{\ell}$
---|---|---|---|---|---|---
${s}_{1}$ | $U^{1}_{11}$ / $U^{2}_{11}$ | $U^{1}_{12},U^{1}_{22}$ / ${\cal V}^{2}_{11}$ | $\cdots$ | $\cdot$ | $\cdots$ | $U^{1}_{1\ell},\dots,U^{1}_{\ell\ell}$ / ${\cal V}^{2}_{(\ell-1)1}$
${s}_{2}$ | ${\cal V}^{1}_{11}$ / $U^{2}_{12},U^{2}_{22}$ | ${\cal V}^{1}_{12}$ / ${\cal V}^{2}_{21}$ | $\cdots$ | $\cdot$ | $\cdots$ | ${\cal V}^{1}_{1\ell}$ / ${\cal V}^{2}_{2(\ell-1)}$
$\cdot$ | $\cdot$ | $\cdot$ | $\cdots$ | $\cdot$ | $\cdots$ | $\cdot$
${s}_{p}$ | $\cdot$ | $\cdot$ | $\cdots$ | ${\cal V}^{1}_{(p-1)q}$ / ${\cal V}^{2}_{p(q-1)}$ | $\cdots$ | $\cdot$
$\cdot$ | $\cdot$ | $\cdot$ | $\cdots$ | $\cdot$ | $\cdots$ | $\cdot$
${s}_{\ell}$ | ${\cal V}^{1}_{(\ell-1)1}$ / $U^{2}_{1\ell},\dots,U^{2}_{\ell\ell}$ | ${\cal V}^{1}_{(\ell-1)2}$ / ${\cal V}^{2}_{1\ell}$ | $\cdots$ | $\cdot$ | $\cdots$ | ${\cal V}^{1}_{(\ell-1)\ell}$ / ${\cal V}^{2}_{(\ell-1)\ell}$
Table 6 specifies the designation of the codebooks to the different combined channel states. In this table, the channel states are ordered in ascending order. In particular, as transmitter 1's channel varies, the combined channel state $({s}_{q},{s}_{p})$ precedes all channel states $({s}_{k},{s}_{p})$ for all $k>q$. Similarly, for transmitter $2$, channel state $({s}_{q},{s}_{p})$ precedes channel state $({s}_{q},{s}_{k})$ for every $k>p$. Furthermore, according to this approach, when user $i$’s channel becomes stronger, the receiver decodes additional codebooks. The sequence of decoding the codebooks, as shown in Table 6, is specified in three steps:
1. 1.
State $({s}_{1},{s}_{1})$: Start with the weakest channel combination
$({s}_{1},{s}_{1})$, and reserve the baseline codebooks
$U^{1}_{11},U^{2}_{11}$ to be the only codebooks to be decoded in this state.
Define ${\cal V}^{i}_{11}=\\{U^{i}_{11}\\}$ as the set of codebooks that the
receiver decodes from transmitter $i$ when the channel state is
$({s}_{1},{s}_{1})$.
2. 2.
States $({s}_{1},{s}_{q})$ and $({s}_{q},{s}_{1})$: Next, construct the first
row of the table. For this purpose, define ${\cal V}^{2}_{1q}$ as the set of
the codebooks that the receiver decodes from transmitter $2$, when the channel
state is $({s}_{1},{s}_{q})$. Based on this, the set of codebooks in each
state can be specified recursively. Specifically, in the state
$({s}_{1},{s}_{q})$, decode what has been decoded in the preceding state
$({s}_{1},{s}_{q-1})$, i.e., the set of codebooks ${\cal V}^{2}_{1(q-1)}$,
plus new codebooks $\\{U^{1}_{1q},\dots,U^{1}_{qq}\\}$. Then, construct the
first column of the table in a similar fashion, except that the roles of
transmitter 1 and 2 are swapped.
3. 3.
States $({s}_{q},{s}_{p})$ for $p,q>1$: Denoting the set of codebooks that the receiver decodes from transmitter $i$ in the state $({s}_{q},{s}_{p})$ by ${\cal V}^{i}_{qp}$, the codebooks decoded in this state are related to the ones decoded in the two preceding states. Specifically, in state $({s}_{q},{s}_{p})$, decode codebooks ${\cal V}^{1}_{(p-1)q}$ and ${\cal V}^{2}_{p(q-1)}$. For example, for $\ell=3$, the codebooks decoded in $({s}_{2},{s}_{3})$ include those decoded for transmitter $1$ in state $({s}_{2},{s}_{2})$ along with those decoded for transmitter $2$ in channel state $({s}_{1},{s}_{3})$.
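One plausible rendering of the three construction steps is the following recursion on label sets. The convention that the first index tracks transmitter 1's state and the second transmitter 2's state is an assumption made for this sketch; only the recursive structure (axes seeded with fresh layers, interior cells formed as unions of the two preceding cells) is taken from the text.

```python
def decoding_sets(ell):
    """V[i][(q, p)]: set of codebook labels the receiver decodes from
    transmitter i in combined state (s_q, s_p) -- a sketch of the recursion,
    with the (q, p) indexing convention assumed as described above."""
    U = lambda i, k, m: f"U^{i}_{{{k}{m}}}"
    # Step 1: baseline state (s_1, s_1) decodes only U^1_{11}, U^2_{11}.
    V = {1: {(1, 1): {U(1, 1, 1)}}, 2: {(1, 1): {U(2, 1, 1)}}}
    # Step 2: along the axes, raising transmitter 1's state to s_q adds its
    # layers U^1_{1q},...,U^1_{qq}; symmetrically for transmitter 2.
    for q in range(2, ell + 1):
        V[1][(q, 1)] = V[1][(q - 1, 1)] | {U(1, k, q) for k in range(1, q + 1)}
        V[2][(q, 1)] = set(V[2][(q - 1, 1)])
        V[2][(1, q)] = V[2][(1, q - 1)] | {U(2, k, q) for k in range(1, q + 1)}
        V[1][(1, q)] = set(V[1][(1, q - 1)])
    # Step 3: interior states inherit the union of the two preceding states.
    for q in range(2, ell + 1):
        for p in range(2, ell + 1):
            for i in (1, 2):
                V[i][(q, p)] = V[i][(q, p - 1)] | V[i][(q - 1, p)]
    return V
```

The recursion makes the monotonicity explicit: a stronger combined state never drops a codebook that a weaker state decodes.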
The decoding order in the general case is similar to the one used for $\ell=2$ in Table 5. In particular, in channel state $({s}_{q},{s}_{p})$, the receiver successively decodes $q$ codebooks from transmitter 1 along with $p$ codebooks from transmitter 2. The set of decodable codebooks in channel state $({s}_{q},{s}_{p})$ is related to the set of codebooks decoded for transmitter $2$ in state $({s}_{q-1},{s}_{p})$ and those decoded for transmitter $1$ in state $({s}_{q},{s}_{p-1})$. The average achievable rate region for the codebook assignment and decoding strategy presented in this section is summarized in the following theorem. Similar to the two-state channel case, define $\beta^{i}_{mn}\in[0,1]$ as the fraction of power allocated to the codebook $U^{i}_{mn}$, such that $\sum_{m=1}^{n}\beta^{i}_{mn}=1$ for all $n\in\\{1,\dots,\ell\\}$.
[zohdy2019broadcast ] For the codebook assignment in this section and the
decoding scheme in Table 6, for any given set of power allocation factors
$\\{\beta^{i}_{mn}\\}$, the average achievable rate region
$\\{\bar{R}_{1},\bar{R}_{2}\\}$ for the $\ell$-state channel is the set of all
rates that satisfy
$\displaystyle\bar{R}_{1}\leq\mathbb{E}[r_{1}(n,m)]\ ,$ (226)
$\displaystyle\bar{R}_{2}\leq\mathbb{E}[r_{2}(n,m)]\ ,$ (227)
$\displaystyle\bar{R}_{1}+\bar{R}_{2}\leq\mathbb{E}[\min\\{r_{3}(n,m),r_{4}(n,m)\\}]\ ,$ (228)
where the functions $\\{r_{1}(n,m),\dots,r_{4}(n,m)\\}$, for all $m,n\in\\{1,\dots,\ell\\}$, are defined as follows:
$\displaystyle r_{1}(n,m)=\min_{m}\sum^{\ell}_{j=1}c_{1}(j,m)+c_{3}(j,n,m)\,,$ (229)
$\displaystyle r_{2}(n,m)=\min_{m}\sum^{\ell}_{j=1}c_{2}(j,m)+c_{4}(j,n,m)\,,$ (230)
$\displaystyle r_{3}(n,m)=\sum_{\forall k<m}c_{5}(m)+c_{7}(m,n)+c_{9}(k,m,n)\,,$ (231)
$\displaystyle r_{4}(n,m)=\sum_{\forall k<m}c_{5}(m)+c_{6}(m,n)+c_{8}(k,m,n)\ ,$ (232)
where
$\displaystyle c_{1}(j,m)=C(s_{j}\beta^{1}_{jj},s_{m}C_{2}(j,m)),\quad\forall j\in\\{1,\dots,\ell\\},m\in\\{j,\dots,\ell\\}\,,$ (233)
$\displaystyle c_{2}(j,m)=C(s_{j}\beta^{2}_{jj},s_{m}C_{1}(j,m)),\quad\forall j\in\\{1,\dots,\ell\\}\,,$ (234)
$\displaystyle c_{3}(j,n,m)=C(s_{n}\beta^{1}_{jn},s_{n}C_{1}(j,n)+s_{j}C_{2}(j,j)),\quad\forall n\in\\{j+1,\dots,\ell\\},m\in\\{j,\dots,\ell\\}\,,$ (235)
$\displaystyle c_{4}(j,n,m)=C(s_{n}\beta^{2}_{jn},s_{j}C_{1}(j,m)+s_{n}C_{2}(j,n))\ ,\quad\forall n\in\\{j+1,\dots,\ell\\}\,,$ (236)
$\displaystyle c_{5}(m)=C(s_{m}\beta^{1}_{mm}+s_{n}\beta^{2}_{mm}),\quad\forall m\in\\{1,\dots,\ell\\}\ ,$ (237)
$\displaystyle c_{6}(m,n)=C(s_{m}\beta^{1}_{mm}+s_{n}\beta^{2}_{mn},s_{n}C_{2}(m,n)),\quad\forall m<n,\forall n\in\\{m+1,\dots,\ell\\}\,,$ (238)
$\displaystyle c_{7}(m,n)=C(s_{n}\beta^{1}_{mn}+s_{m}\beta^{2}_{mm},s_{n}C_{1}(m,n)),\quad\forall m<n,\forall n\in\\{m+1,\dots,\ell\\}\,,$ (239)
$\displaystyle c_{8}(k,m,n)=C\left(s_{m}\beta^{1}_{km}+s_{n}\beta^{2}_{kn},s_{m}C_{1}(k,m)+s_{n}C_{2}(k,n)\right),\quad\forall k<m,\forall n\in\\{m,\dots,\ell\\}\,,$ (240)
$\displaystyle c_{9}(k,m,n)=C\left(s_{n}\beta^{1}_{kn}+s_{m}\beta^{2}_{km},s_{n}C_{1}(k,n)+s_{m}C_{2}(k,m)\right),\quad\forall k<m,\forall n\in\\{m,\dots,\ell\\}\,,$ (241)
and we have defined $C_{1}(m,n)=1-\sum_{i=1}^{m}\beta^{1}_{in}$ and $C_{2}(m,n)=1-\sum_{i=1}^{m}\beta^{2}_{in}$, for all $m<n$ and $n\in\\{m+1,\dots,\ell\\}$.
Figure 18: Average rate regions for $\ell=3$.
Figure 18 demonstrates the average rate region for the three-state channel, in
which the channel gains are ${s}_{1}=0.04$, ${s}_{2}=0.25$, ${s}_{3}=1$, and
channel probability parameters $q_{1}=0.3,q_{2}=0.4$ for transmitter $1$, and
$p_{1}=0.6,p_{2}=0.1$ for transmitter $2$. Furthermore, the region in Theorem
16 is evaluated in Fig. 19. Specifically, the average achievable rate region
OTUXYZ specified in Fig. 16 is evaluated for three scenarios
$\hat{\cal{S}}_{1},\hat{\cal{S}}_{2},\hat{\cal{S}}_{3}$. In all three
scenarios, the average power constraint is set to $10$ dB, i.e.,
$P_{1}=P_{2}=P=10$ dB, and the channel states are $({s}_{1},{s}_{2})=(0.3,1)$.
Evaluations are carried out for the symmetric setting $\hat{\cal{S}}_{1}$ with the probability distribution $q_{1}=p_{1}=0.5$, and the asymmetric cases $\hat{\cal{S}}_{2},\hat{\cal{S}}_{3}$ with probability distributions $q_{1}=0.2,p_{1}=0.8$ and $q_{1}=0.4,p_{1}=0.5$. This figure illustrates that the average capacity region of the two-user MAC with full CSIT can be partially achieved when only one user has full CSIT.
Figure 19: Average rate regions in Theorem 16.
## 4 The Interference Channel
### 4.1 Overview
In this section, we turn the focus to the interference channel as a key
building block in interference-limited wireless networks. In this channel,
multiple transmitters communicate with their designated receivers, imposing interference on one another. The design and analysis of interference management schemes have a rich literature. Irrespective of their discrepancies, the existing approaches often rely on the accurate availability of the CSIT and CSIR. We discuss how the broadcast approach can be viewed as a distributed interference management scheme, rendering a practical approach for effective communication in the interference channel in the face of unknown CSIT.
While the literature on assessing the communication reliability limits of the
interference channel and the attendant interference management schemes is
rich, a significant focus is on the channels with perfect availability of the
CSIT at all transmitters. Representative known results in the asymptotically high SNR regime include the degrees-of-freedom (DoF) region achievable by interference alignment Jafar:IT2008 ; maddah2008communication . In the non-asymptotic SNR regime, of particular note is the achievable rate region due to Han-Kobayashi (HK) han1981new ; chong2008han , which is shown to achieve rates
within one bit of the capacity region for the Gaussian interference channel
etkin2008gaussian . While unknown in its general form, the capacity region is
known in special cases, including the strong interference channel
carleial1975case ; sato1981capacity , the discrete additive degraded
interference channel benzel1979capacity , certain classes of the deterministic
interference channel gamal1982capacity ; cadambe2009capacity ;
chong2007capacity ; bresler2008two , and opportunistic communication under
bursty interference, which is a form of the broadcast approach and is studied
under different assumptions on the non-causal availability of the CSI at the
transmitters and receivers Villacres2018 . Some other examples of the interference channel and the broadcast approach are found in YiSun20 ; YiSum20TCOM . There are extensive studies on circumventing the challenges associated with the analysis of, and optimal resource allocation over, the HK region wang2014sliding ; bandemer2015optimal ; tuan2017superposition ; yagi2011multi ; zhao2012maximum ; geng2015optimalityJ ; Tajer:IT2016 ; YiSun20 . A more detailed and thorough
overview of these can be found in KimElGamal2011 .
Interference management without CSIT has also been the subject of intense
studies more recently, with more focus on the high SNR regime. Representative
studies in the high SNR regime include characterizing the DoF region for the
two-user multi-antenna interference channel in huang2012degrees ;
zhu2011degrees ; vaze2012degree ; gou2011degrees ; shin2016mimo ;
jeon2017degrees ; morales2019degrees ; blind interference alignment in
jafar2012blind ; lu2013blind ; lu2014blind ; jafar2010exploiting ;
gou2011aiming ; wang2011improved ; akoum2012data ; wang2014degrees ;
castanheira2017retrospective ; chen2017blind ; interference management via
leveraging network topologies in jafar2013topological ;
naderializadeh2014interference ; and ergodic interference channels in
morales2014blind ; yang2015degrees ; Akhlaghi:CL2011 . In the non-asymptotic
SNR regime, the studies are more limited, and they include analysis on the
capacity region of the erasure interference channel in vahid2017binary ;
zhu2016layered ; the compound interference channel in raja2009two ; ergodic
capacity for the Z-interference channel in zhu2011ergodic ; ergodic capacity
of the strong and very strong interference channels in lin2016ergodic ;
lin2019stochastic ; and approximate capacity region for the fast-fading
channels sebastian2015rate ; sebastian2018approximate .
In this section, conducive to relieving the dependency on full CSIT, we discuss how the broadcast approach can be viewed as a distributed interference management solution for circumventing the lack of CSIT in the multiuser interference channel. One significant intuition provided by the HK scheme is that even with full CSIT, layering and superposition coding are necessary.
Built upon this intuition, the broadcast approach is a natural evolution of
the HK scheme. We focus on the two-user and finite-state Gaussian interference
channel to convey the key ideas in rate-splitting, codebook assignments, and
decoding schemes. The remainder of this section is organized as follows. This
section focuses primarily on the two-user Gaussian interference channel, for
which we provide a model in Section 4.2. We start by discussing the setting in
which the receiver has full CSI, and the transmitters have only the
statistical model of the CSI and review the application of the broadcast
approach in this setting in Section 4.3 for the two-user channel and in
Section 4.4 for the multiuser channel. Finally, we review the interference channel with local CSIT in Section 4.5. Under the setting with
local CSIT, we consider two scenarios in which each transmitter either knows
the level of the interference that their respective receiver experiences, or
the level of interference they impose on the unintended receiver. We discuss
how the broadcast approach can be designed for each of these two scenarios.
### 4.2 Broadcast Approach in the Interference Channel – Preliminaries
Consider the two-user slowly-fading Gaussian interference channel, in which
the coefficient of the channel connecting transmitter $i$ to receiver $j$ is
denoted by $h^{*}_{ij}$ for $i,j\in\\{1,2\\}$. We refer to $h^{*}_{ii}$ and
$h^{*}_{ij}$ as the direct and cross channel coefficients, respectively,
$\forall\;i\neq j$. The signal received by receiver $i$ is denoted by
$\displaystyle y^{*}_{i}$
$\displaystyle=h^{*}_{ii}\;x^{*}_{i}\;+\;h^{*}_{ij}\;x^{*}_{j}\;+\;n^{*}_{i}\
,$ (242)
where $x^{*}_{i}$ denotes the signal transmitted by transmitter $i$, and
$n^{*}_{i}$ accounts for the AWGN distributed according to
$\mathcal{N}\left(0,N_{i}\right)$. The transmitted symbol $x^{*}_{i}$ is
subject to the average power constraint $P^{*}_{i}$, i.e., ${\mathbb
E}[|x_{i}^{*}|^{2}]\leq P^{*}_{i}$. Each channel is assumed to follow a block
fading model in which the channel coefficients remain constant for the
duration of a transmission block of length $n$, and randomly change to another
state afterward. We consider an $\ell$-state channel model in which each
channel coefficient $h_{ij}^{*}$ randomly and independently of the rest of the
channels takes one of the $\ell$ possible states
$\\{\sqrt{s_{i}}:i\in\\{1,\dots,\ell\\}\\}$. Without loss of generality, we
assume that $0<s_{1}<\dots<s_{\ell}<+\infty$. The $\ell$-state interference
channel in (242) gives rise to an interference channel with $\ell^{2}$
different states. The entire channel states are assumed to be fully known to
the receivers while being unknown to the transmitters. A statistically
equivalent form of the $\ell$-state interference channel in (242) is the
standard interference channel model given by carleial1978interference ;
sason2004achievableJ
$\displaystyle y_{1}=x_{1}\;+\;\sqrt{a_{1}}x_{2}\;+\;n_{1}\ ,\qquad\mbox{and}\qquad y_{2}=\sqrt{a_{2}}x_{1}\;+\;x_{2}\;+\;n_{2}\ ,$ (243)
where the inputs satisfy ${\mathbb E}[|x_{i}|^{2}]\leq P_{i}$, and we have defined
$\displaystyle a_{1}=\left(\frac{h^{*}_{12}}{h^{*}_{22}}\right)^{2}\cdot\frac{N_{2}}{N_{1}}\ ,\qquad a_{2}=\left(\frac{h^{*}_{21}}{h^{*}_{11}}\right)^{2}\cdot\frac{N_{1}}{N_{2}}\ ,\qquad\mbox{and}\qquad P_{i}=\frac{(h^{*}_{ii})^{2}}{N_{i}}\cdot P^{*}_{i}\ .$ (244)
The terms $n_{1}$ and $n_{2}$ are additive noise terms distributed according to ${\cal N}(0,1)$. The equivalence between (242) and (243) can be established by setting
$\displaystyle y_{i}=\frac{y^{*}_{i}}{\sqrt{N_{i}}}\ ,\qquad x_{i}=\frac{h^{*}_{ii}}{\sqrt{N_{i}}}\;x^{*}_{i}\ ,\qquad n_{i}=\frac{n^{*}_{i}}{\sqrt{N_{i}}}\ .$ (245)
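The change of variables in (244)–(245) can be verified numerically; the sketch below applies the normalization to arbitrary example values and asserts that the standard form (243) holds exactly (positive channel coefficients are assumed so that the square roots are well defined):

```python
def to_standard_form(h, N, x_star, n_star):
    """Map one realization of the physical model (242) into the standard
    form (243) via the normalization (245); h is the 2x2 matrix of channel
    coefficients h*_{ij} and N holds the noise variances N_i."""
    # Physical-model outputs, per (242).
    y_star = [h[0][0] * x_star[0] + h[0][1] * x_star[1] + n_star[0],
              h[1][0] * x_star[0] + h[1][1] * x_star[1] + n_star[1]]
    # Normalized cross gains, per (244).
    a1 = (h[0][1] / h[1][1]) ** 2 * N[1] / N[0]
    a2 = (h[1][0] / h[0][0]) ** 2 * N[0] / N[1]
    # Change of variables, per (245).
    y = [y_star[i] / N[i] ** 0.5 for i in range(2)]
    x = [h[i][i] / N[i] ** 0.5 * x_star[i] for i in range(2)]
    n = [n_star[i] / N[i] ** 0.5 for i in range(2)]
    # The standard form (243) must hold exactly (up to float rounding).
    assert abs(y[0] - (x[0] + a1 ** 0.5 * x[1] + n[0])) < 1e-9
    assert abs(y[1] - (a2 ** 0.5 * x[0] + x[1] + n[1])) < 1e-9
    return a1, a2
```

The internal assertions encode exactly the substitution argument in the text: after scaling by $1/\sqrt{N_{i}}$, the cross terms collapse to $\sqrt{a_{1}}$ and $\sqrt{a_{2}}$.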
Channel gains $a_{1}$ and $a_{2}$ are statistically independent, inheriting
their independence from that of the channel coefficients. By invoking the
normalization in (244), it can be readily verified that the cross channel
gains $a_{i}$ take one of $K=\ell(\ell-1)+1$ possible states, which we denote
by $\\{\beta_{1},\dots,\beta_{K}\\}$. Without loss of generality we assume
they are in the ascending order. For the two-state channel, the cross channel
gain takes one of the three states
$\beta_{1}=\frac{s_{1}}{s_{2}},\beta_{2}=1$, and
$\beta_{3}=\frac{1}{\beta_{1}}$. Hence, the state of the network is specified
by two cross links, rendering $K^{2}$ states for the network. We say that the
network is in the state $(\beta_{s},\beta_{t})$ when
$(a_{1},a_{2})=(\beta_{s},\beta_{t})$. To distinguish different states, in the
network state $(\beta_{s},\beta_{t})$, we denote the outputs by
$\displaystyle y^{s}_{1}$
$\displaystyle=x_{1}\;+\;\sqrt{\beta_{s}}\;x_{2}\;+\;n_{1}\
,\qquad\mbox{and}\qquad y^{t}_{2}=\sqrt{\beta_{t}}\;x_{1}\;+\;x_{2}\;+\;n_{2}\
.$ (246)
Hence, this interference channel can be equivalently presented as a network
with two transmitters and $K^{2}$ receiver pairs, where each receiver pair
corresponds to one possible channel state. In the case of the symmetric
interference channel, we have $a_{1}=a_{2}$, and the number of possible channel combinations reduces to $K$, rendering an equivalent network with two transmitters and $2K$ receivers. Figure 20 depicts such a symmetric network
for the two-state channel. Finally, we define
$\displaystyle q^{s}_{1}={\mathbb P}(a_{1}=\beta_{s})\qquad\mbox{and}\qquad
q^{s}_{2}={\mathbb P}(a_{2}=\beta_{s})\ .$ (247)
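The claim that the cross gains take $K=\ell(\ell-1)+1$ values can be checked by direct enumeration (a sketch; it assumes the pairwise ratios $s_{p}/s_{q}$ are all distinct, as in the examples used here):

```python
def cross_gain_states(s):
    """Possible normalized cross gains a_i for channel gains
    s[0] < ... < s[-1]: the distinct ratios s_p/s_q for p != q, plus 1,
    giving K = ell*(ell-1) + 1 states when all pairwise ratios differ."""
    ratios = {sp / sq for sp in s for sq in s if sp != sq} | {1.0}
    return sorted(ratios)
```

For the two-state channel this reproduces the three states $\beta_{1}=s_{1}/s_{2}$, $\beta_{2}=1$, and $\beta_{3}=1/\beta_{1}$ listed above.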
Fig. 20: Equivalent network for the symmetric Gaussian interference channel
($\ell=2$).
### 4.3 Two-user Interference Channel without CSIT
Effective interference management in the interference channel hinges on how a
transmitter can balance the two opposing roles that it has as both an
information source and an interferer. Striking such a balance requires
designating a proper notion of degradedness according to which different
realizations of the network can be distinguished and ordered. Hence,
specifying an order of degradedness plays a central role in assigning
codebooks and designing the decoding schemes. We adopt the same notion of
degradedness that was used for the MAC with proper modifications.
When each channel has $\ell$ possible states, the cross channels take one of $K=\ell(\ell-1)+1$ possible states. Hence, by adopting the broadcast approach, this two-user interference channel becomes equivalent to a multiuser network consisting
of two transmitters and $K^{2}$ receivers. The transmitters and each of these
receivers form a MAC, in which the receiver is interested in decoding as many
information layers as possible. To this end and by following the same line of
arguments we had for the MAC, we use the capacity region of the individual
MACs to designate degradedness among distinct network states.
The network model in (246) is equivalent to a collection of MACs. The MAC
associated with receiver $y^{s}_{i}$, for $s<k$, is degraded with respect to
the MAC associated with the receiver $y_{i}^{k}$. Hence, receiver $y_{i}^{k}$
can successfully decode all the information layers that are decoded by the
receivers $\\{y^{1}_{i},\dots,y^{s}_{i}\\}$. Driven by this approach to
designating degradedness, each transmitter splits its message into multiple
independent codebooks, where each is adapted to one combined state of the
network and intended to be decoded by specific receivers.
At receiver $y^{k}_{i}$, decoding every additional layer from transmitter $i$ directly increases the achievable rate. In parallel, decoding each additional layer from the interfering transmitter indirectly increases the achievable rate by canceling a part of the interfering signal. Driven by these two observations, transmitter $i$ breaks its message into $2K$ layers denoted by $\\{V_{i}^{k},U_{i}^{k}\\}_{k=1}^{K}$, each serving a specific purpose.
Recall that in the canonical model in (246), the direct channels remain
unchanged and only the cross channels have varying states. Hence, each of the
$2K$ layers of each transmitter is designated to a specific cross channel
state and receiver.
* •
Transmitter 1 (or 2) reserves the information layer $V_{1}^{k}$ (or
$V_{2}^{k}$) for adapting it to the channel from transmitter $1$ (or 2) to the
unintended receiver $y_{2}^{k}$ (or $y_{1}^{k}$). Based on this designation,
the intended receivers $\\{y_{1}^{k}\\}_{k=1}^{K}$ (or
$\\{y_{2}^{k}\\}_{k=1}^{K}$) will decode all codebooks
$\\{V_{1}^{k}\\}_{k=1}^{K}$ (or $\\{V_{2}^{k}\\}_{k=1}^{K}$), and the non-
intended receivers $\\{y^{k}_{2}\\}_{k=1}^{K}$ (or
$\\{y^{k}_{1}\\}_{k=1}^{K}$) will be decoding a subset of these codebooks. The
selection of the subsets depends on the channel strengths of the receivers,
such that the non-intended receiver $y_{2}^{k}$ (or $y_{1}^{k}$) decodes only
codebooks $\\{V_{1}^{s}\\}_{s=1}^{k}$ (or $\\{V_{2}^{s}\\}_{s=1}^{k}$).
* •
Transmitter 1 (or 2) reserves the layer $U_{1}^{k}$ (or $U_{2}^{k}$) for
adapting it to the channel from transmitter $2$ (or 1) to the intended
receiver $y_{1}^{k}$ (or $y_{2}^{k}$). Based on this designation, the
unintended receivers $\\{y_{2}^{k}\\}_{k=1}^{K}$ (or
$\\{y_{1}^{k}\\}_{k=1}^{K}$) will not decode any of the codebooks
$\\{U_{1}^{k}\\}_{k=1}^{K}$ (or $\\{U_{2}^{k}\\}_{k=1}^{K}$), and the intended
receivers $\\{y_{1}^{k}\\}_{k=1}^{K}$ (or $\\{y_{2}^{k}\\}_{k=1}^{K}$) will be
decoding a subset of these codebooks. The selection of these subsets depends
on the channel strengths of the receivers, such that the intended receiver
$y_{1}^{k}$ (or $y_{2}^{k}$) decodes only the codebooks
$\\{U_{1}^{s}\\}_{s=1}^{k}$ (or $\\{U_{2}^{s}\\}_{s=1}^{k}$).
Figure 21 specifies how the codebooks are assigned to transmitter 1, as well as the set of codebooks decoded by each of the three receivers $\\{y_{1}^{k}\\}_{k=1}^{3}$ associated with transmitter 1.
Fig. 21: Codebook assignments at transmitter 1 in the two-state channel.
#### 4.3.1 Successive Decoding: Two-state Channel
We review a successive decoding scheme for the two-state channel. This scheme will then be generalized in Section 4.3.2. In this decoding scheme, each codebook will be decoded by a number of receivers. Therefore, the rate of each codebook will be limited by its most degraded channel state. The codebooks that are not decoded by a receiver will be treated as Gaussian noise. These codebooks impose interference on the receiver, which compromises the achievable rate at the receiver. This observation guides the design of a successive decoding scheme that dynamically identifies (i) the set of the receivers that decode a given codebook, and (ii) the order by which the codebooks are successively decoded by each receiver.
For formalizing this decoding strategy, denote the set of receivers that
decode codebook $V_{i}^{k}$ by $\mathcal{V}_{i}^{k}$, and denote the set of
the receivers that decode $U_{i}^{k}$ by $\mathcal{U}_{i}^{k}$. Therefore, we
have
$\displaystyle\mathcal{V}_{1}^{k}=\\{y_{1}^{s}\\}_{s=1}^{3}\cup\\{y_{2}^{s}\\}_{s=k}^{3}\
,\qquad\mathcal{V}_{2}^{k}=\\{y_{2}^{s}\\}_{s=1}^{3}\cup\\{y_{1}^{s}\\}_{s=k}^{3}\
,\qquad\mbox{and}\qquad{\cal U}_{i}^{k}=\\{y_{i}^{s}\\}_{s=k}^{3}\ .$ (248)
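The sets in (248) can be transcribed directly (receiver labels are strings of the form $y_{i}^{s}$; the helper name is ours):

```python
def decoder_sets_two_state(K=3):
    """Sets of receivers that decode each codebook, transcribed from (248)
    for the two-state channel (K = 3)."""
    y = lambda i, s: f"y_{i}^{s}"
    # V_i^k is decoded by all intended receivers and by the unintended
    # receivers whose cross-gain state index is at least k.
    V1 = {k: {y(1, s) for s in range(1, K + 1)} |
             {y(2, s) for s in range(k, K + 1)} for k in range(1, K + 1)}
    V2 = {k: {y(2, s) for s in range(1, K + 1)} |
             {y(1, s) for s in range(k, K + 1)} for k in range(1, K + 1)}
    # U_i^k is decoded only by the intended receivers with state index >= k.
    U = {(i, k): {y(i, s) for s in range(k, K + 1)}
         for i in (1, 2) for k in range(1, K + 1)}
    return V1, V2, U
```

This makes the degradedness ordering explicit: raising $k$ only shrinks the decoding sets.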
The order of successively decoding the codebooks at receiver $y_{i}^{k}$ is
specified in Table 7.
Table 7: Successive decoding order at the receivers.
Receiver | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 | Stage 6 | Stage 7 | Stage 8 | Stage 9
---|---|---|---|---|---|---|---|---|---
$y_{i}^{1}$ | $V_{i}^{1}$ | $V_{j}^{1}$ | $V_{i}^{2}$ | $V_{i}^{3}$ | $U_{i}^{1}$ | | | |
$y_{i}^{2}$ | $V_{i}^{1}$ | $V_{j}^{1}$ | $V_{i}^{2}$ | $V_{j}^{2}$ | $V_{i}^{3}$ | $U_{i}^{1}$ | $U_{i}^{2}$ | |
$y_{i}^{3}$ | $V_{j}^{1}$ | $V_{i}^{1}$ | $V_{j}^{2}$ | $V_{i}^{2}$ | $V_{j}^{3}$ | $V_{i}^{3}$ | $U_{i}^{1}$ | $U_{i}^{2}$ | $U_{i}^{3}$
#### 4.3.2 Successive Decoding: $\ell$-state Channel
In this section, we generalize the successive decoding scheme to the general
multi-state channels. Similarly to (248) we define
$\displaystyle\mathcal{V}_{1}^{k}=\\{y_{1}^{s}\\}_{s=1}^{K}\cup\\{y_{2}^{s}\\}_{s=k}^{K}\
,\quad\
\mathcal{V}_{2}^{k}=\\{y_{2}^{s}\\}_{s=1}^{K}\cup\\{y_{1}^{s}\\}_{s=k}^{K}\
,\quad\mbox{and}\quad{\cal U}_{i}^{k}=\\{y_{i}^{s}\\}_{s=k}^{K}\ .$ (249)
Each of the two receivers decodes a set of the codebooks. The choice of the
set depends on the channel states. Specifically, when the network state is
$(\beta_{q},\beta_{p})$, receiver 1 decodes $K+q$ codebooks from transmitter 1
and $q$ codebooks from transmitter 2. These codebooks are decoded
successively in two stages in the following order:
* •
Receiver 1 – stage 1 (Codebooks $\\{V^{s}_{i}\\}_{s=1}^{q}$): Receiver 1
decodes one information layer from each transmitter in an alternating manner
until all codebooks $\\{V^{s}_{1}\\}_{s=1}^{q}$ and
$\\{V^{s}_{2}\\}_{s=1}^{q}$ are decoded. The first layer to be decoded in this
stage depends on the state $\beta_{q}$. If $\beta_{q}<1$, the receiver starts
by decoding codebook $V^{1}_{1}$ from transmitter 1, then it decodes the
respective layer $V^{1}_{2}$ from transmitter 2, and continues alternating
between the two transmitters. Otherwise, if $\beta_{q}>1$, receiver 1 first
decodes $V^{1}_{2}$ from the interfering transmitter 2, followed by
$V^{1}_{1}$ from transmitter 1, and continues alternating. By the end of stage
1, receiver 1 has decoded $q$ codebooks from each transmitter.
* •
Receiver 1 – stage 2 (Codebooks $\\{V^{s}_{1}\\}_{s=q+1}^{K}$ &
$\\{U^{s}_{1}\\}_{s=1}^{q}$): In stage 2, receiver 1 carries on decoding
layers $\\{V^{s}_{1}\\}_{s=q+1}^{K}$ from transmitter 1, in an ascending order
of the index $s$. Finally, receiver 1 decodes layers
$\\{U^{s}_{1}\\}_{s=1}^{q}$ specially adapted to receivers
$\\{y^{s}_{1}\\}_{s=1}^{q}$, in an ascending order of index $s$. By the end of stage 2, receiver 1 has decoded $K$ additional codebooks from its intended transmitter 1.
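The two-stage order at receiver 1 can be sketched as follows (the list of label strings is illustrative; `beta_q_lt_1` encodes whether $\beta_{q}<1$):

```python
def rx1_decoding_order(q, K, beta_q_lt_1):
    """Order in which receiver 1 decodes codebooks in network state
    (beta_q, beta_p). Stage 1 alternates between the transmitters; stage 2
    finishes receiver 1's own layers."""
    order = []
    # Stage 1: alternate V^s_1 and V^s_2 for s = 1..q; which transmitter
    # goes first depends on whether beta_q < 1.
    first, second = (1, 2) if beta_q_lt_1 else (2, 1)
    for s in range(1, q + 1):
        order += [f"V^{s}_{first}", f"V^{s}_{second}"]
    # Stage 2: remaining own layers V^s_1 (s = q+1..K), then U^s_1 (s = 1..q).
    order += [f"V^{s}_1" for s in range(q + 1, K + 1)]
    order += [f"U^{s}_1" for s in range(1, q + 1)]
    return order
```

The total count, $K+q$ codebooks from transmitter 1 plus $q$ from transmitter 2, matches the count stated above.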
The decoding scheme at receiver 2 follows the same structure by swapping the
roles of the two transmitters. The set of codebooks decoded by receiver $i$ in
channel state $(\beta_{q},\beta_{p})$ is partly defined by the set of
codebooks decoded by receiver $i$ and the set decoded by receiver $j$ in state
$(\beta_{q-1},\beta_{p-1})$. The decoding scheme is summarized in Table 8. In
this table, the channels are ordered in the ascending order such that at
receiver 1, state $(\beta_{q},\beta_{p})$ precedes all channel states
$(\beta_{k},\beta_{p})$ for all $k>q$. Similarly, at receiver $2$, state
$(\beta_{q},\beta_{p})$ precedes network state $(\beta_{q},\beta_{k})$, for
every $k>p$. Furthermore, according to this approach, when the cross channel
of receiver $i$ becomes stronger, receiver $i$ decodes additional codebooks
from both transmitters. In particular, in Table 8, every cell contains the
codebooks decoded in the combined channel state $(\beta_{q},\beta_{p})$, distinguishing the codebooks decoded by receiver 1 from those decoded by receiver 2. To further highlight the relationship between the
decodable codebooks in different states, we denote by ${\cal C}^{k}_{i}$ the
set of codebooks decoded by the receiver $i$ when $a_{i}=\beta_{k}$.
Table 8: Successive decoding for the $\ell$-state channel. [Summary of the color-coded table:] in combined channel state $(\beta_{q},\beta_{p})$ with $q,p>1$, receiver 1 decodes $\mathcal{C}^{q-1}_{1},U^{q}_{1},V^{q}_{2}$ and receiver 2 decodes $\mathcal{C}^{p-1}_{2},U^{p}_{2},V^{p}_{1}$; the base set $\mathcal{C}^{1}_{i}$ consists of $\\{V^{s}_{i}\\}_{s=1}^{K}$, $U^{1}_{i}$, and $V^{1}_{j}$, $j\neq i$.
#### 4.3.3 Average Achievable Rate Region
In this section, we provide an overview of the average achievable rate region.
The average rates of the users are specified by the rates of codebooks
$\\{V_{i}^{k},U_{i}^{k}\\}_{k=1}^{K},$ for $i\in\\{1,2\\}$. These rates should
satisfy all the constraints imposed by the different receivers in order for them
to successfully decode all their designated codebooks. Hence, the rate of each
codebook is bounded by the smallest rate achievable across the receivers that
decode it. To formalize the rate
regions, define $R(A)$ as the rate of codebook
$A\in\\{V_{i}^{k},U_{i}^{k}:\forall i,k\\}$, and define $\gamma(A)$ as the
fraction of the power $P_{i}$ allocated to the codebook
$A\in\\{V_{i}^{k},U_{i}^{k}:\forall i,k\\}$. Accordingly, define $R_{i}(s,t)$
as the total achievable rate of user $i$, when the network is in the state
$(\beta_{s},\beta_{t})$. Finally, denote the average achievable rate at
receiver $i$ by $\bar{R}_{i}=\mathbb{E}[R_{i}(\beta_{s},\beta_{t})]$, where
the expectation is taken with respect to the probabilistic model of the
channel. Note that the transmitters collectively have $4K$ codebooks.
Corresponding to the set ${\cal S}\subseteq\mathbb{R}_{+}^{4K}$, define the
rate region ${\cal R}_{\rm in}({\cal S})$ as the set of all average rate
combinations $(\bar{R}_{1},\bar{R}_{2})$ such that $R(A)\in{\cal S}$ for all
$A\in\\{V_{i}^{k},U_{i}^{k}:\forall i,k\\}$, i.e.,
$\displaystyle{\cal R}_{\rm in}({\cal
S})=\left\\{(\bar{R}_{1},\bar{R}_{2})\;:\;R(A)\in{\cal S},\quad\forall
A\in\\{V_{i}^{k},U_{i}^{k}:\forall i,k\\}\right\\}\ .$ (250)
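As a small numerical illustration of the averaging step $\bar{R}_{i}=\mathbb{E}[R_{i}(s,t)]$, the sketch below computes $\bar{R}_{1}$ from a table of per-state rates and the state probabilities, assuming the two cross-link states are independent; all numerical values are hypothetical placeholders, not values from this section.

```python
# Average achievable rate: R1_bar = E[R_1(s, t)], with the expectation taken
# over the joint distribution of the two cross-channel states (assumed
# independent here).
q1 = [0.5, 0.3, 0.2]   # hypothetical P(a_1 = beta_s)
q2 = [0.3, 0.3, 0.4]   # hypothetical P(a_2 = beta_t)

# Hypothetical per-state rates R_1(s, t) of user 1 (rows: s, columns: t).
R1 = [[1.0, 1.2, 1.4],
      [0.8, 1.0, 1.2],
      [0.6, 0.8, 1.0]]

R1_bar = sum(q1[s] * q2[t] * R1[s][t]
             for s in range(len(q1)) for t in range(len(q2)))
print(round(R1_bar, 4))  # -> 1.08
```

The same loop, applied to the rates $R_{2}(s,t)$, yields $\bar{R}_{2}$.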
Furthermore, corresponding to each receiver $y^{k}_{i}$ and codebook
$A\in\\{U^{k}_{i},V^{k}_{i}:\forall i,k\\}$ that should be decoded by
$y^{k}_{i}$, define $R^{k}_{i}(A)$ as the maximum rate that can be sustained for
codebook $A$ while remaining decodable by $y^{k}_{i}$. Accordingly, for user $i$,
and corresponding to $s,t\in\\{1,\dots,K\\}$ define the rates
$\displaystyle r_{i}(s,t)$
$\displaystyle=\sum_{k=t+1}^{K}R^{s}_{i}(V^{k}_{i})+\sum_{k=1}^{s}R^{k}_{i}(U^{k}_{i})+\underset{\ell\in\\{1,2\\}}{\min}\sum_{k=1}^{t}R^{k}_{\ell}(V^{k}_{i})\
,$ (251)
where the rates $R^{s}_{i}(A)$ are defined as follows. Recalling that
$\gamma(A)$ denotes the fraction of $P_{i}$ allocated to
$A\in\\{U^{k}_{i},V^{k}_{i}:\forall k\\}$, set
$\displaystyle\Gamma_{v}(i,k)=\sum_{j=1}^{k}\gamma(V^{j}_{i})\,,\qquad\mbox{and}\qquad\Gamma_{u}(i,k)=\sum_{j=1}^{k}\gamma(U^{j}_{i})\,.$
(252)
Based on these definitions, if the codebook $V_{i}^{k}$ is decoded by the
receiver $y^{s}_{i}$, then we have
$\displaystyle\mbox{If $\beta_{s}\leq 1$}\,,\qquad\qquad R^{s}_{i}(V^{k}_{i})$
$\displaystyle=C\left(\gamma(V^{k}_{i})P_{i},(1-\Gamma_{v}(i,k))P_{i}+\beta_{k}(1-\Gamma_{v}(j,s-1))P_{j}\right)\,,$
(253) $\displaystyle\mbox{If $\beta_{s}>1$}\,,\qquad\qquad
R^{s}_{i}(V^{k}_{i})$
$\displaystyle=C\left(\gamma(V^{k}_{i})P_{i},(1-\Gamma_{v}(i,k))P_{i}+\beta_{k}(1-\Gamma_{v}(j,s))P_{j}\right)\,.$
(254)
Similarly, if the codebook $V_{i}^{k}$ is decoded by the receiver $y^{s}_{j}$,
then we have
$\displaystyle\mbox{If $\beta_{s}\leq 1$}\,,\qquad\qquad R^{s}_{j}(V^{k}_{i})$
$\displaystyle=C\left(\beta_{s}\gamma(V^{k}_{i})P_{i},\beta_{k}(1-\Gamma_{v}(i,k))P_{i}+(1-\Gamma_{v}(j,s))P_{j}\right)\,,$
(255) $\displaystyle\mbox{If $\beta_{s}>1$}\,,\qquad\qquad
R^{s}_{j}(V^{k}_{i})$
$\displaystyle=C\left(\beta_{s}\gamma(V^{k}_{i})P_{i},\beta_{k}(1-\Gamma_{v}(i,k))P_{i}+(1-\Gamma_{v}(j,s-1))P_{j}\right)\,.$
(256)
Finally, when codebook $U_{i}^{k}$ is decoded by the receiver $y^{s}_{i}$,
then we have
$\displaystyle R^{s}_{i}(U^{k}_{i})$
$\displaystyle=C\left(\gamma(U^{k}_{i})P_{i},\beta_{k}(1-\Gamma_{v}(j,s))P_{j}+(1-\Gamma_{v}(i,K)-\Gamma_{u}(i,s))P_{i}\right)\,.$
(257)
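The rate expressions in (253) and (254) are straightforward to prototype. The sketch below assumes the Gaussian rate function $C(S,I)=\log_{2}(1+S/(1+I))$ (an assumption about the unstated definition of $C(\cdot,\cdot)$) together with a hypothetical uniform power split; the $\Gamma$ accumulators follow (252).

```python
import math

def C(S, I):
    # Assumed Gaussian rate function: log2(1 + S / (1 + I)).
    return math.log2(1.0 + S / (1.0 + I))

K = 3
P = [10.0, 10.0]                 # P_1, P_2
beta = [0.5, 1.0, 2.0]           # beta_1 <= ... <= beta_K (hypothetical)
# Hypothetical uniform power split over the 2K layers of each transmitter.
gv = [[1.0 / (2 * K)] * K for _ in range(2)]   # gamma(V_i^k)
gu = [[1.0 / (2 * K)] * K for _ in range(2)]   # gamma(U_i^k)

def Gv(i, k):
    # Gamma_v(i, k) = sum_{j <= k} gamma(V_i^j), eq. (252); Gv(i, 0) = 0.
    return sum(gv[i][:k])

def R_V_intended(i, k, s):
    # R^s_i(V_i^k): codebook V_i^k decoded at intended receiver y^s_i,
    # eqs. (253)-(254); the residual interference from transmitter j depends
    # on whether beta_s <= 1 (index s-1) or beta_s > 1 (index s).
    j = 1 - i
    m = s - 1 if beta[s - 1] <= 1 else s
    return C(gv[i][k - 1] * P[i],
             (1 - Gv(i, k)) * P[i] + beta[k - 1] * (1 - Gv(j, m)) * P[j])

print(round(R_V_intended(0, 1, 1), 3))
```

The companion expressions (255) and (256) for the unintended receiver follow the same pattern with the roles of $P_{i}$ and $P_{j}$ swapped and the direct term scaled by $\beta_{s}$.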
Theorem 4.3.3 (zohdytajersh:2020). The average achievable rate region via sequential
decoding is specified by
$\displaystyle{\cal R}_{\rm in}^{\rm
seq}=\left\\{(\bar{R}_{1},\bar{R}_{2})\;:\;R_{i}(s,t)\leq
r_{i}(s,t),\quad\forall i\in\\{1,2\\},\;s,t\in\\{1,\dots,K\\}\right\\}\ .$ (258)
Figure 22: Average rate region ($\ell=2$). Figure 23: Average rate region
($\ell=3$).
The average achievable rate region characterized in Theorem 4.3.3 is
illustrated in Figures 22 and 23. Figure 22 shows a general asymmetric setting
with the following parameters: $P_{1}=P_{2}=10$ dB,
$(q_{1}^{1},q_{1}^{2},q_{1}^{3})=(0.5,0.3,0.2)$,
$(q_{2}^{1},q_{2}^{2},q_{2}^{3})=(0.3,0.3,0.4)$, and the weak channel state at
each receiver is given by $\beta_{1}=0.5$. Similarly, for $\ell=3$, Fig. 23
evaluates the average rate region for an asymmetric channel with
$(q_{i}^{1},q_{i}^{2},q_{i}^{3},q_{i}^{4},q_{i}^{5},q_{i}^{6},q_{i}^{7})=(0.1,0.1,0.1,0.2,0.1,0.1,0.3)$,
where the weak channel states at receivers 1 and 2 are set as
$(\beta_{1},\beta_{2},\beta_{3})=(0.1,0.2,0.9)$ and
$(\beta_{1},\beta_{2},\beta_{3})=(0.1,0.4,0.7)$, respectively. Comparing these
figures indicates that increasing the number of states reduces the gap between
the two regions (broadcast approach and HK).
#### 4.3.4 Sum-rate Gap Analysis
Next, we provide an upper bound on the gap between the average achievable sum-
rate and the average sum-rate capacity of the interference channel in which the
transmitters have full CSI. To this end, we use the existing results on
the gap between the sum-rate achievable by the HK scheme and the sum-rate
capacity. To present the main ideas and for simplicity of notation, we
present the results for the symmetric setting, i.e., when $a_{1}=a_{2}=a$,
symmetric average power constraints, i.e., when $P_{1}=P_{2}=P$, and symmetric
probabilistic models for the channels, i.e., when
$q_{1}^{s}=q_{2}^{s}=q^{s}=\frac{1}{3}$. Generalization of the results to the
non-symmetric settings is straightforward.
In the symmetric setting, the channel in (246) simplifies to either a weak
interference channel (when $a=\beta_{1}$) or a strong interference channel
(when $a\in\\{\beta_{2},\beta_{3}\\}$). To assess the average sum-rate gap, we
start by analyzing the gap in the weak and strong interference regimes
separately. The average of these gaps provides the average sum-rate gap.
Throughout this discussion, we set $\beta=\beta_{3}=\frac{1}{\beta_{1}}$.
* •
Weak interference: In the weak interference regime, the capacity with full
CSIT is unknown. In this regime, in order to quantify the gap of interest,
we first evaluate the gap of the sum-rate achieved by the scheme of Section
4.3.2 to the sum-rate achieved by the HK scheme. By using this gap in
conjunction with the known results on the gap between the sum-rate of HK and
the sum-rate capacity, we provide an upper bound on the average sum-rate gap
of interest.
* •
Strong interference: In the strong interference regime, the sum-rate capacity
with full CSIT is known. It can be characterized by evaluating the sum-rate of
the intersection of two capacity regions corresponding to two multiple access
channels formed by the transmitters and each of the receivers sato1981capacity
.
By quantifying the two gaps above, and then aggregating them based on the
probabilistic model of the channel, the following theorem establishes upper
bounds on the gap between the average sum-rate achieved by the approach in
Section 4.3.1 and the sum-rate capacity. The gap is characterized in two
distinct regimes of transmission power, denoted by ${\rm G}_{1}$ and ${\rm
G}_{2}$. Define $\bar{R}_{\rm sum}({\rm G}_{i})$ as the minimum average sum-
rate achievable under region ${\rm G}_{i}$, and denote the average sum-rate
capacity with full CSIT by $C_{\rm sum}({\rm G}_{i})$. Finally, define the gap
$\Delta(G_{i})=C_{\rm sum}({\rm G}_{i})-\bar{R}_{\rm sum}({\rm G}_{i})$.
Theorem 4.3.4 (zohdytajersh:2020). The average sum-rate achievable by the broadcast approach
in Fig. 21 and Table 7 has the following gap with the sum-rate capacity of the
symmetric Gaussian interference channel with full CSIT:
1. (i)
For $P\in{\rm
G}_{1}=\left(0,\beta\right)\cup\left(\beta(\beta^{2}+\beta-1),+\infty\right)$
we have
$\displaystyle\Delta({\rm G}_{1})$
$\displaystyle\leq\frac{1}{3}\left[1+\log\left(\frac{1+P(1+\beta)}{1+P(1+\frac{1}{\beta})}\right)+\frac{1}{2}\log(2+\beta)\right]\
.$ (259)
2. (ii)
For $P\in{\rm G}_{2}=\left[\beta,\beta(\beta^{2}+\beta-1)\right]$ we have
$\displaystyle\Delta({\rm
G}_{2})\leq\frac{1}{3}\left[\log\frac{4}{3}+\log\left(\frac{1+P(1+\beta)}{1+P/\beta+\beta}\right)+3\log\frac{(2+\beta)^{2}}{1+2\beta}\right]\
.$ (260)
Further analyzing the result in this theorem shows that the gap in the high
SNR regime is upper bounded by a constant (for fixed channel gains).
Corollary (zohdytajersh:2020). For any fixed network model (i.e., fixed $\beta$), when
$P$ is sufficiently large, the gap between the sum-rate capacity of the
symmetric Gaussian interference channel with full CSIT and the average sum-
rate achievable by the broadcast approach in Fig. 21 and Table 7 is upper
bounded by
$\displaystyle\Delta$
$\displaystyle\leq\frac{1}{6}\log\left(8\beta^{2}\cdot\frac{\beta+2}{\beta+1}\right)\
.$ (261)
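The bounds in (259)-(261) are easy to evaluate numerically. The sketch below does so for hypothetical values of $\beta$ and $P$, assuming base-2 logarithms for all rate quantities.

```python
import math

def gap_G1(P, beta):
    # Bound on Delta(G1), eq. (259).
    return (1.0 / 3) * (1
            + math.log2((1 + P * (1 + beta)) / (1 + P * (1 + 1 / beta)))
            + 0.5 * math.log2(2 + beta))

def gap_G2(P, beta):
    # Bound on Delta(G2), eq. (260).
    return (1.0 / 3) * (math.log2(4.0 / 3)
            + math.log2((1 + P * (1 + beta)) / (1 + P / beta + beta))
            + 3 * math.log2((2 + beta) ** 2 / (1 + 2 * beta)))

def gap_high_snr(beta):
    # High-SNR bound on Delta, eq. (261).
    return (1.0 / 6) * math.log2(8 * beta ** 2 * (beta + 2) / (beta + 1))

beta = 2.0
print(round(gap_high_snr(beta), 3))  # -> 0.903
```

For instance, with $\beta=2$ the high-SNR bound evaluates to $\frac{1}{6}\log_{2}(128/3)\approx 0.9$ bits.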
Figure 24: Sum-rate gap versus power.
Figures 24 and 25 compare the maximum average sum-rate with that of the sum-
rate of the HK scheme. In both schemes, the average sum-rate is maximized over
all possible power allocation schemes over different codebooks. Figure 24
depicts the gap between the two methods normalized by the sum-rate of HK. This
signifies the relative sum-rate loss that can be attributed, for the most
part, to the lack of CSI at the transmitters. It is observed that the
relative loss with respect to HK peaks in the moderate power regime, while
in the small and large power regimes, it diminishes. For the evaluations
in Fig. 24, a three-state channel with a symmetric channel probability model
($q^{s}_{1}=q^{s}_{2}=q^{s}$) is considered, in which
$(q^{1},q^{2},q^{3})=(0.3,0.6,0.1)$. The results follow the same trend for
different values of $\beta_{1}$. Figure 25 evaluates the bounds on the sum-
rate gaps presented in Theorem 4.3.4. The three plots in this figure
correspond to those in Fig. 24. Specifically, the plots in Fig. 25 depict the
$\Delta({\rm G}_{i})$ normalized by the HK sum-rate. This figure shows that
the bound on the gap becomes tighter as the power increases.
Figure 25: Bound on sum-rate gap versus power.
### 4.4 $N$-user Interference Channel without CSIT
Finally, we provide a brief overview of a representative approach to
generalizing the approaches discussed thus far to the general $N$-user
channel. To this end, consider a generalization of (243) to the $N$-user
channel, in which for user $m\in\\{1,\dots,N\\}$ we have
$\displaystyle y_{m}$ $\displaystyle=x_{m}\;+\;\sum_{i\neq
m}\sqrt{a_{mi}}x_{i}\;+\;n_{m}\ .$ (262)
Each of the channel coefficients $a_{mi}$ takes one of the $L$ possible states
$\\{\beta_{1},\dots,\beta_{L}\\}$, ordered in ascending order. The state of
the network will be specified by the cross-link states, rendering an equivalent
network with $N$ transmitters and $NL^{N-1}$ receivers. In the case of the
symmetric interference channel, i.e., $a_{mi}=a$, the equivalent network
reduces to $N$ transmitters and $NL$ receivers.
Each transmitter, to balance its impacts on the intended and unintended
receivers, performs rate-splitting and adapts each information layer to one
combined state of the network designated to be decoded by a specific group of
receivers. In the equivalent network, each transmitter and its associated
$S=L^{N-1}$ receivers form a MAC. The critical stage in specifying the
broadcast approach is adopting a notion of degradedness among the different
MACs. Lacking a natural choice, we define an aggregate strength for
receiver $y^{m}$ as
$\displaystyle\theta_{m}=\sum_{i\neq m}a_{mi}\ .$ (263)
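Assuming each receiver in the equivalent network is identified by the tuple of its incoming cross gains, the aggregate strength in (263) and the induced receiver ordering can be sketched as follows; the gain values are hypothetical.

```python
from itertools import product

N = 3                       # number of users
betas = [0.5, 2.0]          # L = 2 possible cross-gain states (hypothetical)

# Each of the S = L^(N-1) receivers associated with user m is identified by
# the tuple of its incoming cross gains (a_{mi})_{i != m}.
states = list(product(betas, repeat=N - 1))

# Aggregate strength theta_m = sum over i != m of a_{mi}, eq. (263).
theta = {s: sum(s) for s in states}

# Sort receivers y_m^1, ..., y_m^S in ascending order of theta.
ordering = sorted(states, key=lambda s: theta[s])
print(ordering[0], ordering[-1])  # -> (0.5, 0.5) (2.0, 2.0)
```

The weakest receiver has all cross gains at the lowest state and the strongest has them all at the highest, matching the intended degradedness ordering.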
This metric is used to sort the $S$ receivers associated with $y^{m}$ in the
equivalent network. Denote these receivers by
$\\{y_{m}^{1},\dots,y_{m}^{S}\\}$, sorted in ascending order
such that for $s<k$, the $\theta_{m}$ value associated with
receiver $y_{m}^{s}$ is smaller than that of receiver
$y_{m}^{k}$. Hence, for $s<k$, the multiple access channel at receiver
$y^{s}_{m}$ is degraded with respect to the channel at $y_{m}^{k}$. Therefore,
receiver $y_{m}^{k}$ can successfully decode all the layers adapted to the
channels with the designated receivers $\\{y^{1}_{m},\dots,y^{s}_{m}\\}$. At
receiver $y^{k}_{m}$, every layer decoded from transmitter $m$ directly
increases the achievable rate, whereas every layer decoded from any other
transmitter indirectly increases the achievable rate by canceling a part of
the interfering signal. Hence, similarly to the two-user channel, transmitter
$m$ splits its message into $2S$ layers, denoted by
$\\{V_{m}^{k},U_{m}^{k}\\}_{k=1}^{S}$, with the following designations and
objectives:
* •
Transmitter $m$ adapts layer $V_{m}^{k}$ to the state of the channels linking
all other transmitters to the unintended receivers
$\\{y_{1}^{k},\dots,y_{N}^{k}\\}\backslash\\{y^{k}_{m}\\}$: while the
intended receivers $\\{y_{m}^{k}\\}_{k=1}^{S}$ will be decoding all codebooks
$\\{V_{m}^{k}\\}_{k=1}^{S}$, the non-intended receivers
$\\{y_{1}^{k},\dots,y_{N}^{k}\\}\backslash\\{y^{k}_{m}\\}$ decode a subset of
these codebooks depending on their channel strengths. More specifically, a
non-intended receiver $y_{i}^{k}$ decodes only the codebooks
$\\{V_{m}^{s}\\}_{s=1}^{k}$.
* •
Transmitter $m$ adapts the layer $U_{m}^{k}$ to the state of the channels
linking all other transmitters to the intended receiver $y_{m}^{k}$: while the
unintended receivers
$\\{y_{1}^{k},\dots,y_{N}^{k}\\}\backslash\\{y^{k}_{m}\\}$ will not be
decoding any of the codebooks $\\{U_{m}^{k}\\}_{k=1}^{S}$, the intended
receivers $\\{y_{m}^{k}\\}_{k=1}^{S}$ decode a subset of these codebooks
depending on their channel strengths. More specifically, the intended receiver
$y_{m}^{k}$ decodes only the codebooks $\\{U_{m}^{s}\\}_{s=1}^{k}$.
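The two adaptation rules above amount to a simple membership rule: an intended receiver with state index $k$ collects all $V$ layers of its transmitter plus its first $k$ $U$ layers, while an unintended receiver collects only the first $k$ $V$ layers. A minimal sketch, with codebook labels encoded as strings:

```python
S = 4   # layers per codebook family (S = L^(N-1) in the text)

def decoded_codebooks(m, i, k):
    # Codebooks of transmitter m decoded at receiver y_i^k.
    if i == m:   # intended receiver: all V layers plus U layers up to k
        return (["V_%d^%d" % (m, s) for s in range(1, S + 1)]
                + ["U_%d^%d" % (m, s) for s in range(1, k + 1)])
    else:        # unintended receiver: only the first k V layers, no U layers
        return ["V_%d^%d" % (m, s) for s in range(1, k + 1)]

print(len(decoded_codebooks(1, 1, 2)), len(decoded_codebooks(1, 2, 2)))  # -> 6 2
```

A stronger cross channel (larger $k$) thus strictly enlarges the set of interfering codebooks an unintended receiver can strip off.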
As the number of users $N$ increases, the total number of codebooks per
transmitter, $2S=2L^{N-1}$, grows exponentially with the number of users. This
renders the complexity of joint decoding prohibitive. A viable
remedy is adopting opportunistic successive decoding schemes (e.g.,
Tajer:IT2016 ; Gong:TCOM2012 ), which are low-complexity schemes in which each
receiver can dynamically identify an optimal set of codebooks to decode.
### 4.5 Two-user Interference Channel with Partial CSIT
We now turn our attention to settings in which the transmitters have partial
information about the overall network state. Specifically, based on the model in (243), we
consider two separate scenarios in which each transmitter either knows the
interference that it causes to the unintended receiver, or the interference
that its intended receiver experiences. More specifically, in Scenario 1,
transmitter $i$ knows the channel state $a_{j}$ for $j\neq i$, while being
unaware of $a_{i}$. In contrast, in Scenario 2, transmitter $i$ knows the
channel state $a_{i}$ while being unaware of $a_{j}$ for $j\neq i$. These two
scenarios and their associated broadcast approaches are discussed next.
#### 4.5.1 Two-user Interference Channel with Partial CSIT – Scenario 1
State-dependent Adaptive Layering. In this setting, each transmitter controls
the interference that it imposes by leveraging the partially known CSI.
Concurrently, each transmitter adapts one layer to every possible channel
state at its intended receiver, overcoming the partial uncertainty about the
other transmitter’s interfering link. Based on these observations, transmitter
$i$ splits its information stream into a certain set of codebooks depending on
the state of its outgoing cross channel. We denote the set of codebooks
transmitted by user $i$ when $a_{j}=\beta_{k}$ by ${\cal C}_{i}^{k}$.
Each set ${\cal C}_{i}^{k}$ consists of $K+1$ codebooks given by
$\displaystyle{\cal
C}_{i}^{k}=\\{V_{i}^{k},U^{k}_{i}(1),\dots,U^{k}_{i}(K)\\}\,.$ (264)
The layers denoted by $V^{k}_{i}$ are adapted to be successfully decoded by
both receivers in all network states. Additionally, layers
$U^{k}_{i}(s),\forall s\in\\{1,\dots,K\\}$ are specifically adapted to be
opportunistically decoded by the intended receiver $y_{i}^{s}$ only. In
particular, when transmitter $i$ knows the state of its outgoing channel to be
$\beta_{k}$, it splits its information stream into $K+1$ layers specified in
(264) where
* •
layer $V^{k}_{1}$ ($V^{k}_{2}$) is adapted to the cross channel state at the
unintended receiver $y^{k}_{2}$ ($y^{k}_{1}$); and
* •
Layer $U^{k}_{1}(s)$ ($U^{k}_{2}(s)$) is adapted to the cross channel state
at the intended receiver $y_{1}^{s}$ ($y_{2}^{s}$), for $s\in\\{1,\dots,K\\}$.
Note that the sets $\\{{\cal C}_{i}^{k}\\}_{i,k}$ contain the same number of
codebooks corresponding to each user $i$ and outgoing channel state
$\beta_{k}$. However, the motivation behind using $K$ different sets is that
the power allocation among the layers in each set is distinct. Given that
transmitter $i$ adapts the layers $V^{k}_{i}$ to each cross channel state at
the unintended receiver, it directly imposes a constraint on the fraction of
power allocated to the remaining layers $\\{U^{k}_{i}(s)\\}$.
Successive Decoding. Each codebook will be decoded by multiple receivers in
the equivalent network formed by different receivers associated with different
network states. Hence, the rate of each codebook will be constrained by the
most degraded channel state in which it is decoded. Furthermore, any undecoded layer at a
particular receiver imposes interference, which degrades the achievable rate
at that receiver. Motivated by these premises, a simple successive decoding
scheme can be designed that specifies (i) the set of receivers at which each
layer is decoded, and (ii) the successive decoding order at each
receiver.
In network state $(\beta_{s},\beta_{t})$, user $1$ transmits the superposition
of all the layers in the set ${\cal
C}_{1}^{s}=\\{V_{1}^{s},U_{1}^{s}(1),\dots,U_{1}^{s}(K)\\}$, while user $2$
transmits the layers in the set ${\cal
C}_{2}^{t}=\\{V_{2}^{t},U_{2}^{t}(1),\dots,U_{2}^{t}(K)\\}$. Accordingly,
layers $V_{1}^{s}$ and $V_{2}^{t}$ are decoded by both receivers $y_{1}^{s}$
and $y_{2}^{t}$. Further, a subset of the layers $\\{U_{i}^{s}(t)\\}$ from
user $i$ is opportunistically decoded by the intended receiver only, depending
on the interfering channel state $\beta_{t}$ from transmitter $j\neq i$. For
state $(\beta_{s},\beta_{t})$, we summarize the decoding order at each
receiver $y_{i}^{s}$ as follows:
* •
Receiver $y_{1}^{s}$: First, it decodes one layer from the unintended
transmitter, $V_{2}^{t}$, and removes it from the received signal. Second, it
decodes the baseline layer from its intended transmitter, $V_{1}^{s}$. Finally,
depending on the network state $(\beta_{s},\beta_{t})$, it successively
decodes all the layers $\\{U_{1}^{s}(1),\dots,U_{1}^{s}(t)\\}$.
* •
Receiver $y_{2}^{t}$: First, it decodes one layer from the unintended
transmitter, $V_{1}^{s}$, and removes it from the received signal. Second, it
decodes the baseline layer from its intended transmitter, $V_{2}^{t}$. Finally,
depending on the network state $(\beta_{s},\beta_{t})$, it successively
decodes all the layers $\\{U_{2}^{t}(1),\dots,U_{2}^{t}(s)\\}$.
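The decoding orders described in the two bullets above can be captured by a short routine that, for a network state $(\beta_{s},\beta_{t})$, lists what each receiver decodes; this is only a bookkeeping sketch of the described scheme, with codebook labels encoded as strings and states indexed from 1.

```python
def decoding_order_rx1(s, t):
    # Receiver y_1^s in state (beta_s, beta_t): interferer's base layer V_2^t,
    # own base layer V_1^s, then U_1^s(1), ..., U_1^s(t) successively.
    return (["V_2^%d" % t, "V_1^%d" % s]
            + ["U_1^%d(%d)" % (s, k) for k in range(1, t + 1)])

def decoding_order_rx2(s, t):
    # Receiver y_2^t, the mirror image: V_1^s, V_2^t, then U_2^t(1..s).
    return (["V_1^%d" % s, "V_2^%d" % t]
            + ["U_2^%d(%d)" % (t, k) for k in range(1, s + 1)])

print(decoding_order_rx1(2, 3))
```

Note how the number of opportunistically decoded $U$ layers at each receiver grows with the state index of the other link, mirroring the adaptive layering rule.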
#### 4.5.2 Two-user Interference Channel with Partial CSIT – Scenario 2
State-dependent Adaptive Layering. In contrast to Scenario 1, in this
scenario, transmitter $i$ knows $a_{i}$, and it is oblivious to the other
channel. Lacking knowledge of the extent of interference that it causes,
transmitter $i$ adapts multiple layers with different rates such that the
unintended receiver opportunistically decodes and removes a part of the
interference according to the actual state of the channel. Simultaneously,
transmitter $i$ adapts the encoding rate of a single layer to be decoded only
by its intended receiver based on the actual state of channel $a_{i}$. Based
on this vision, transmitter $i$ splits its information stream into a distinct
set of codebooks corresponding to each state of the cross channel at its
intended receiver. We denote the set of codebooks transmitted by user $i$ when
$a_{i}=\beta_{k}$ by ${\cal D}_{i}^{k}$. Each set ${\cal D}_{i}^{k}$
consists of $K+1$ codebooks given by
$\displaystyle{\cal
D}_{i}^{k}=\\{V_{i}^{k}(1),\dots,V^{k}_{i}(K),U^{k}_{i}\\}\,.$ (265)
The layers denoted by $\\{V^{k}_{i}(s)\\}_{s=1}^{K}$ are adapted to be fully
decoded by receiver $i$ and partially decoded by receiver $j$, depending on
the actual network state. In contrast, layers $U^{k}_{i}$ are specifically
adapted to be decoded by the intended receiver $y_{i}^{k}$ only. In
particular, when transmitter $i$ knows the state of the interfering link to
its intended receiver to be $\beta_{k}$, it splits its information stream into
$K+1$ encoded layers specified in (265) where
* •
Layer $V^{k}_{1}(s)$ ($V^{k}_{2}(s)$) is adapted to the cross channel state at
the unintended receiver $y^{s}_{2}$ ($y^{s}_{1}$), for $s\in\\{1,\dots,K\\}$;
and
* •
Layer $U^{k}_{1}$ ($U^{k}_{2}$) is adapted to the cross channel state at the
intended receiver $y_{1}^{k}$ ($y_{2}^{k}$).
Similarly to the layering approach in Scenario 1, the sets ${\cal D}_{i}^{k}$
contain an equal number of codebooks for each $i\in\\{1,2\\}$ and
$k\in\\{1,\dots,K\\}$. Nevertheless, power allocation schemes among the layers
in each set are distinct.
Successive Decoding. Given that each codebook will opportunistically be
decoded by multiple receivers, its maximum achievable rate is constrained by
the most degraded network state in which it is decoded. Similarly to
Scenario 1, a successive decoding scheme is devised that specifies (i) the set
of receivers at which each layer is decoded, and (ii) the successive decoding
order at each receiver.
In network state $(\beta_{s},\beta_{t})$, user $1$ transmits the superposition
of all the layers in the set ${\cal
D}_{1}^{s}=\\{V_{1}^{s}(1),\dots,V_{1}^{s}(K),U_{1}^{s}\\}$, while user $2$
transmits the layers in the set ${\cal
D}_{2}^{t}=\\{V_{2}^{t}(1),\dots,V_{2}^{t}(K),U_{2}^{t}\\}$. Layers
$U_{1}^{s}$ and $U_{2}^{t}$ are decoded by the intended receivers $y_{1}^{s}$
and $y_{2}^{t}$, respectively. On the other hand, a subset of the layers
$\\{V_{i}^{s}(k)\\}$ from user $i$ is opportunistically decoded by the
unintended receiver $j$, depending on the interfering channel state. In network state
$(\beta_{s},\beta_{t})$, we summarize the decoding order at each receiver
$y_{i}^{s}$ as follows:
* •
Receiver $y_{1}^{s}$: First, it decodes one layer from the interfering signal
$V^{t}_{2}(1)$. Afterwards, it decodes one layer from the intended signal
$V^{s}_{1}(1)$. This receiver continues the decoding process in an alternating
manner until codebooks $\\{V^{t}_{2}(j)\\}_{j=1}^{s}$ from transmitter $2$ and
codebooks $\\{V^{s}_{1}(j)\\}_{j=1}^{K}$ from the intended
transmitter $1$ are successfully decoded. Finally, the remaining layer of
the intended message, $U^{s}_{1}$, is decoded.
* •
Receiver $y_{2}^{t}$: First, it decodes one layer from the interfering signal
$V^{s}_{1}(1)$. Afterwards, it decodes one layer from the intended signal
$V^{t}_{2}(1)$. This receiver continues the decoding process in an alternating
manner until codebooks $\\{V^{s}_{1}(j)\\}_{j=1}^{t}$ from transmitter $1$ and
codebooks $\\{V^{t}_{2}(j)\\}_{j=1}^{K}$ from the intended
transmitter $2$ are successfully decoded. Finally, the remaining layer of
the intended message, $U^{t}_{2}$, is decoded.
## 5 Relay Channels
### 5.1 Overview
In this section, we extend the discussions to two-hop networks, in which a
source and a destination communicate while being assisted by relay nodes. In
line with the key assumption throughout this paper (i.e., no CSIT), we assume
that the transmitter and the relay(s) are oblivious to instantaneous
realizations of their outgoing channels. Such settings are especially relevant
when a source communicates with a remote destination, and a relay terminal is
occasionally present near the source but without the source’s knowledge Katz05
; Katz06 ; Katz07 ; Katz09 .
We start the discussion with a two-hop network model in Section 5.2, where the
source and destination communicate through a relay node (no direct source-
destination communication). In this section, we review various decode-and-
forward (DF), amplify-and-forward (AF), quantize-and-forward (QF), and
amplify-and-quantize-and-forward (AQF) relaying schemes and characterize their
attendant average communication rates, based on the results in AsSh06TwoHop .
The work in AsAsSh07 considers the problem of communication between a single
remote transmitter and a destined user while being helped by co-located users.
This problem is discussed in more detail in Section 5.3. In a dynamic wireless
network where a source terminal communicates with a destination, it is worth
considering oblivious relaying strategies of a relay near the source
transmitter in the presence of other users in the network BraginskiyAsSh12 ,
as discussed in Section 5.4. In Section 5.5, we review the broadcast
transmission schemes of the diamond channel investigated in Zamani14 .
Motivated by addressing the distributed nature and delay sensitivity of modern
communication systems, the study in SimoneSh09 investigates a network
consisting of a source-destination pair, the communication between which is
assisted by multiple relays. This setting is reviewed in Section 5.6. Finally,
motivated by the fact that in practical wireless networks, it is often
difficult for each user to keep track of the relay nodes, in Section 5.7, we
review the settings in which the relays are available only occasionally.
### 5.2 A Two-Hop Network
Let us consider a two-hop relay fading channel AsSh06TwoHop , where the
transmitter and receiver communicate through an intermediate network node that
serves as a relay. Various relaying protocols and broadcasting strategies are
considered. For example, DF relaying, a simple relay with a single packet
buffer, which cannot reschedule retransmissions, is first studied. DF relaying
limitations in various cases give rise to consideration of other relaying
techniques, such as AF, where a maximal broadcasting achievable rate is
analytically derived. A QF relay, coupled with a single-level code at the
source, uses codebooks matched to the received signal power and performs
optimal quantization. This is simplified by an AQF relay, which performs
scaling, and single codebook quantization on the input. As discussed later in
this section, it is observed that the latter may be throughput-optimal on the
relay-destination link while maintaining a lower coding complexity compared
with the QF setting.
The work in Baghani16 concerns two-hop transmissions over relay-assisted
block fading channels, assuming there is no direct link between the
transmission ends and the communication is carried out by a relay. Various
relaying strategies are considered in combination with the multi-layer coded
transmission. The study in Akhlaghi19 optimizes the power allocation for
relaying strategies in a similar two-way relay setting. The work in Attia14
considers a two-layer transmission and optimizes the power allocation for a DF
relay.
In a DF JNL04 scheme, the relay decodes the received source message,
re-encodes it, and forwards the resulting signal to the destination. Note that,
since the relay must perfectly decode the source message, the achievable rates
are bounded by the capacity of the channel between the source and the relay. A
non-regenerative relay has a different coding scheme than the source, and it
can improve, for example, the reliability of the relay-destination
transmission. The work in DENIZ05 compares two DF protocols assuming
knowledge of channel gains at the transmitter and adhering to delay-limited
capacity. Further work on user cooperation to increase diversity gains, using
DF cooperation techniques over a Rayleigh fading channel is found in YUKSEL04
.
In BOYER04 , different types of AF relay settings are studied and general
expressions for the aggregate SNR at the destination are derived for a varying
number of relaying nodes. This study is motivated by previous observations
that AF relays can sometimes approach or exceed the performance of their DF
counterparts JNL04 .
A QF relay implementation is considered in Katz06 , and it is shown to be
superior to DF and AF in terms of average throughput in the presence of a
direct link and a known channel gain on the relay-destination link, which
models cooperation between two co-located users. Practical compress-and-forward (CF)
code design was presented in LIU05 for the half-duplex relay channel. The
quantization in Katz06 ; LIU05 is of Wyner-Ziv (WZ) quantization type WYNER76
, which refers to the relay quantizing its received observation of the source
symbol while relying on the side information that is available at the
destination receiver, from the direct link. In a two-hop relay setting, the
receiver has no additional side information, and thus the quantization applied
at the relay is a standard quantization of a noisy Gaussian source BERGER71 .
Consider the following SISO channel:
$\displaystyle\textbf{y}_{r}\;=\;h_{s}\textbf{x}_{s}\;+\;\textbf{n}_{s}~{},$
(266)
where $\textbf{y}_{r}$ is a received vector of length $N$ at the relay, which
is also the transmission block length, $\textbf{x}_{s}$ is the transmitted
vector, and $\textbf{n}_{s}$ is the additive noise vector, with elements that
are complex Gaussian i.i.d. with zero mean and unit variance denoted
${\mathcal{CN}}(0,1)$. $h_{s}$ is the (scalar) fading coefficient, assumed to
be perfectly known at the relay and the destination receivers only. The source
transmitter has no CSI. The power constraint at the source is given by
$P_{s}={\mathbb{E}}[|x_{s}|^{2}]$. The channel between the relay and the
destination is described by
$\displaystyle\textbf{y}_{d}\;=\;h_{r}\textbf{x}_{r}\;+\;\textbf{n}_{r}~{},$
(267)
where $\textbf{y}_{d}$ is a received vector of length $N$ at the destination
receiver, and $\textbf{x}_{r}$ is the relay transmitted vector.
$\textbf{n}_{r}$ is the additive noise vector, with elements that are complex
Gaussian i.i.d. with zero mean and unit variance denoted by
${\mathcal{CN}}(0,1)$, and $h_{r}$ is the (scalar) fading coefficient. The
fading coefficients $h_{s}$ and $h_{r}$ are assumed to be perfectly known at
the destination receivers only. The relay transmitter does not possess
$h_{r}$. The power constraint at the relay is given by
$P_{r}={\mathbb{E}}[|x_{r}|^{2}]$.
It is assumed that the relay operates in a full-duplex mode by receiving and
transmitting on different frequency bands, realizing a two-hop network.
Furthermore, the relay is not capable of buffering data. In the DF protocols,
the relay must immediately forward all successfully decoded data. Layers
that were not decoded on the path from source to destination must be
rescheduled for retransmission at the source. If the relay had packet
scheduling capabilities, the DF protocols could be improved by letting the
relay perform retransmission of layers that are not decoded at the
destination. However, this calls for distributed scheduling control, which
highly complicates the system and is beyond the scope of this subsection.
#### 5.2.1 Upper Bounds
A full CSI (FCSI) upper bound is derived for a hypothetical case that both
source and relay have perfect CSI of all links, and the source always
transmits in the maximal achievable rate over this relay channel. This
achievable rate is the minimal rate determined by the fading gain realizations
on both links. It is generally expressed by
$\displaystyle C_{\rm
FCSI}={\mathbb{E}}_{s_{s},s_{r}}[\log(1+\min(P_{s}s_{s},P_{r}s_{r}))]\ ,$
(268)
where $s_{s}=|h_{s}|^{2}$, and $s_{r}=|h_{r}|^{2}$. By explicitly extracting
the expectation in (268) we get
$\displaystyle C_{\rm FCSI}$
$\displaystyle=\int\limits_{0}^{\infty}\,\textnormal{d}\nu\int\limits_{0}^{\infty}\,\textnormal{d}\mu
f(\nu)f(\mu)\log(1+\min(P_{s}\nu,P_{r}\mu))$ (269)
$\displaystyle=\int\limits_{0}^{\infty}\,\textnormal{d}\nu\int\limits_{\frac{P_{s}}{P_{r}}\nu}^{\infty}\,\textnormal{d}\mu
f(\nu)f(\mu)\log(1+P_{s}\nu)$ (270)
$\displaystyle\qquad+\int\limits_{0}^{\infty}\,\textnormal{d}\nu
f(\nu)\int\limits_{0}^{\frac{P_{s}}{P_{r}}\nu}\,\textnormal{d}\mu
f(\mu)\log(1+P_{r}\mu)$ (271)
$\displaystyle=\int\limits_{0}^{\infty}\,\textnormal{d}\nu
f(\nu)(1-F(\frac{P_{s}}{P_{r}}\nu))\log(1+P_{s}\nu)$ (272)
$\displaystyle\qquad+\frac{P_{s}}{P_{r}}\int\limits_{0}^{\infty}\,\textnormal{d}\nu(1-F(\nu))f(\frac{P_{s}}{P_{r}}\nu)\log(1+P_{s}\nu)\
,$ (273)
where $f(x)$ and $F(x)$ are the PDF and CDF of the fading gain, respectively.
For a Rayleigh fading channel, the FCSI upper bound is given by
$\displaystyle C_{\rm FCSI}$
$\displaystyle=(1+\frac{P_{s}}{P_{r}})\int\limits_{0}^{\infty}\,\textnormal{d}\nu
e^{-(1+\frac{P_{s}}{P_{r}})\nu}\log(1+P_{s}\nu)$ (274)
$\displaystyle=e^{\frac{P_{s}+P_{r}}{P_{r}P_{s}}}E_{1}\left(\frac{P_{s}+P_{r}}{P_{r}P_{s}}\right)\
,$ (275)
where $E_{1}(x)$ is the exponential integral function
$E_{1}(x)\triangleq\int_{x}^{\infty}\,\textnormal{d}t\frac{e^{-t}}{t}$ for
$x\geq 0$ Abramowitz65 . The ergodic cut-set upper bound is the minimum of the
average achievable rates on the two links (source-relay and relay-
destination). This is specified by
$\displaystyle C_{\rm
erg}=\min\\{{\mathbb{E}}_{s_{s}}[\log(1+P_{s}s_{s})]\;,\;{\mathbb{E}}_{s_{r}}[\log(1+P_{r}s_{r})]\\}\
.$ (276)
For Rayleigh fading channels, and similar fading gain distribution functions
$f(x)$ for the two links, the ergodic upper bound simplifies to
$\displaystyle C_{\rm erg}=\int\limits_{0}^{\infty}\,\textnormal{d}\nu
e^{-\nu}\log(1+P\nu)\ ,~{}~{}~{}~{}\textrm{s.t.}\qquad P=\min(P_{s},P_{r})\ ,$
(277)
which is justified by the monotonicity of the ergodic capacity as a function of
$P$. A tighter upper bound on the broadcast strategy is the broadcasting cut-
set bound. This is the minimum average broadcasting rate achievable on each of
the links separately. It is specified by
$\displaystyle{R_{\rm bs-
cutset}}=\min\left\\{\int\limits_{0}^{\infty}\,\textnormal{d}u~{}f_{\mu}(u)R_{\mu}(u)\;,~{}\int\limits_{0}^{\infty}\,\textnormal{d}u~{}f_{\nu}(u)R_{\nu}(u)\right\\}\
,$ (278)
where $f_{\nu}(u)$ and $f_{\mu}(u)$ are the PDFs of the source-relay and relay
destination fading gains, respectively, and $R(u)$ is the broadcasting
achievable rate for a fading gain $u$. For a Rayleigh fading channel with
similar distribution on both links, the cut-set bound is given by
(ShitzSteiner03, , equation (18)) as
$\displaystyle R_{\rm bs-cutset}=2E_{1}(s_{0})-2E_{1}(1)-(e^{-s_{0}}-e^{-1})\
,$ (279)
where
$\displaystyle s_{0}=\frac{2}{1+\sqrt{1+4\min(P_{s},P_{r})}}\ .$ (280)
The broadcasting cut-set bound (279) may be achieved if the relay is allowed
to delay its data and reschedule retransmissions independently. Furthermore,
the relay has to inform the source how many layers were decoded for every
block. We do not assume such feedback is available. The only feedback, in our
channel model, is from destination to source indicating the number of
successfully decoded layers.
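For the Rayleigh case, all three bounds reduce to expressions in the exponential integral and can be evaluated directly. The sketch below does so (function names are illustrative, rates are in nats, and the ergodic bound uses the standard identity $\int_0^\infty e^{-\nu}\log(1+P\nu)\,\textnormal{d}\nu=e^{1/P}E_1(1/P)$):

```python
import numpy as np
from scipy.special import exp1  # E1(x)

def c_fcsi(P_s, P_r):
    """FCSI upper bound, eq. (275), Rayleigh fading on both hops."""
    a = (P_s + P_r) / (P_r * P_s)
    return np.exp(a) * exp1(a)

def c_erg(P_s, P_r):
    """Ergodic cut-set bound, eq. (277): exp(1/P) * E1(1/P), P = min(P_s, P_r)."""
    P = min(P_s, P_r)
    return np.exp(1.0 / P) * exp1(1.0 / P)

def r_bs_cutset(P_s, P_r):
    """Broadcasting cut-set bound, eqs. (279)-(280)."""
    s0 = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * min(P_s, P_r)))
    return 2.0 * exp1(s0) - 2.0 * exp1(1.0) - (np.exp(-s0) - np.exp(-1.0))
```

For $P_s=P_r=10$, the three bounds order as expected: the broadcasting cut-set bound is the tightest, followed by the FCSI bound and then the ergodic cut-set bound.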
#### 5.2.2 DF Strategies
Consider first the simple DF relaying for an outage approach with single-level
coding at source and relay. In single-level coding, the code rate from the
source transmitter to the relay is determined by the fading gain threshold
selected. For a power threshold $s_{s}$, the code rate is
$R=\log\left(1+P_{s}s_{s}\right)$, and this same rate is transmitted from the
relay to the destination with power $P_{r}$, thus
$R=\log\left(1+P_{r}s_{r}\right)$, and $s_{r}=\frac{P_{s}}{P_{r}}s_{s}$.
Hence, the average achievable rate from the source to the destination is
$\displaystyle R_{1,\rm ave}$
$\displaystyle={\mathbb{P}}(\nu>s_{s}){\mathbb{P}}(\mu>s_{r})\log(1+P_{s}s_{s})$
(281) $\displaystyle=(1-F_{\nu}(s_{s}))(1-F_{\mu}(s_{r}))\log(1+P_{s}s_{s})\
,$ (282)
where $\nu$, $\mu$ are the fading gain random variables, $F_{\nu}(x)$,
$F_{\mu}(x)$ are the corresponding CDFs, and $P_{s}$ is the source
transmission power. For a Rayleigh fading channel, with
$F_{\mu}(x)=F_{\nu}(x)=1-e^{-x}$, the average rate is given by
$\displaystyle R_{1,\rm
ave}=e^{-s_{s}}e^{-\frac{P_{s}}{P_{r}}s_{s}}\log(1+P_{s}s_{s})\ ,$ (283)
and the maximal achievable rate is thus
$\displaystyle
R_{1L}=\max\limits_{s_{s}}e^{-s_{s}}e^{-\frac{P_{s}}{P_{r}}s_{s}}\log(1+P_{s}s_{s})\
.$ (284)
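The scalar maximization in (284) is straightforward numerically, for example (a sketch; the search bracket is an assumption):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def r_1l(P_s, P_r):
    """Single-level DF rate, eq. (284): maximize
    exp(-s) * exp(-(P_s/P_r) * s) * log(1 + P_s * s) over the threshold s."""
    obj = lambda s: -np.exp(-s * (1.0 + P_s / P_r)) * np.log1p(P_s * s)
    res = minimize_scalar(obj, bounds=(1e-6, 10.0), method="bounded")
    return -res.fun, res.x  # (maximal average rate, optimal threshold s_s)
```

For $P_s=P_r=10$ this yields roughly $0.76$ nats at $s_s\approx 0.28$, well below the upper bounds of the previous subsection.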
Let the source perform two-level coding, while the relay decodes as many
layers as possible, depending on the fading realization. If successful in
decoding both layers, it transmits a single-level code at a rate that is the
sum of source rates. If only one layer was decoded successfully at the relay,
it encodes it into a different single-level code, which is equal in rate to
the first level of the source channel code. This gives higher flexibility in
decoding of a single layer at the destination receiver when the channel
conditions on the source-relay link allow only one layer detection at the
relay. The channel code rate at the source is given by
$\displaystyle R_{1}^{s}$
$\displaystyle=\log(1+P_{s}s_{s,1})-\log(1+(1-\alpha_{s})P_{s}s_{s,1})\ ,$
(285) $\displaystyle R_{2}^{s}$
$\displaystyle=\log(1+(1-\alpha_{s})P_{s}s_{s,2})\ ,$ (286)
where $0\leq\alpha_{s}\leq 1$, $s_{s,1}$ and $s_{s,2}$ are the fading gain
thresholds implicitly specifying the layering rates. The rates of the single-
level code at the relay are then given by
$\displaystyle R_{1}^{r}$ $\displaystyle=\log(1+P_{r}s_{r,1})\
,\quad\textrm{s.t.}\quad R_{1}^{r}=R_{1}^{s}\ ,$ (287) $\displaystyle
R_{2}^{r}$ $\displaystyle=\log(1+P_{r}s_{r,2})\ ,\quad\textrm{s.t.}\quad
R_{2}^{r}=R_{1}^{s}+R_{2}^{s}\ ,$ (288)
where $s_{r,1}$ and $s_{r,2}$ are determined from the rate equalities on the
right-hand sides of (287) and (288). The overall average rate is then
$\displaystyle{R_{2-1,\rm ave}}$
$\displaystyle=\max\limits_{s_{s,1},s_{s,2},\alpha_{s}}{\mathbb{P}}(s_{s,1}\leq\nu<s_{s,2})P(\mu>s_{r,1})R_{1}^{s}$
$\displaystyle\qquad\qquad+{\mathbb{P}}(\nu>s_{s,2})P(\mu>s_{r,2})(R_{1}^{s}+R_{2}^{s})$
(289)
$\displaystyle=\max\limits_{s_{s,1},s_{s,2},\alpha_{s}}\left(F_{\nu}(s_{s,2})-F_{\nu}(s_{s,1})\right)(1-F_{\mu}(s_{r,1}))R_{1}^{s}$
$\displaystyle\qquad\qquad+\left(1-F_{\nu}(s_{s,2})\right)(1-F_{\mu}(s_{r,2}))(R_{1}^{s}+R_{2}^{s})\
,$ (290)
where $\nu$ is the fading gain RV on the source-relay link, and $\mu$ is the
RV on the relay destination link. This approach outperforms single-level
coding at the source and two-level coding at the relay, described in the
previous subsection. The main reason for this difference is that the outage
approach described here adapts to the source-relay channel conditions. That
is, the outage rate from the relay to the destination is equal to the decoded
rate and depends on the number of successfully decoded layers (287). However,
when considering the opposite approach (source: outage, relay: two-level), the
outage rate is fixed for all channel conditions, and if the relay fails in its
decoding, nothing is transmitted to the destination.
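A coarse grid search over the thresholds $(s_{s,1},s_{s,2})$ and the power split $\alpha_s$ is enough to evaluate (290) numerically; the sketch below assumes Rayleigh fading on both links, and the grid ranges are illustrative assumptions:

```python
import numpy as np

def r_2_1_ave(P_s, P_r, grid=30):
    """Two-level coding at the source with adaptive single-level relaying,
    eqs. (285)-(290), Rayleigh fading on both links (coarse grid search)."""
    best = 0.0
    s_grid = np.linspace(0.05, 3.0, grid)   # fading-gain thresholds (assumed range)
    a_grid = np.linspace(0.05, 0.95, grid)  # power split alpha_s (assumed range)
    for s1 in s_grid:
        for s2 in s_grid:
            if s2 <= s1:
                continue
            for a in a_grid:
                R1 = np.log1p(P_s * s1) - np.log1p((1 - a) * P_s * s1)  # (285)
                R2 = np.log1p((1 - a) * P_s * s2)                       # (286)
                sr1 = np.expm1(R1) / P_r        # R1 = log(1 + P_r s_{r,1}), (287)
                sr2 = np.expm1(R1 + R2) / P_r   # R1 + R2 = log(1 + P_r s_{r,2}), (288)
                rate = ((np.exp(-s1) - np.exp(-s2)) * np.exp(-sr1) * R1
                        + np.exp(-s2) * np.exp(-sr2) * (R1 + R2))       # (290)
                best = max(best, rate)
    return best
```

The result exceeds the single-level rate $R_{1L}$ of (284), since the single-level scheme is recovered as the degenerate corner of the search space.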
#### 5.2.3 Continuous Broadcasting DF Strategies
Coding Scheme I – Source: Outage & Relay: Continuum Broadcasting. In this
coding scheme, the source transmitter performs single-level coding. Whenever
channel conditions allow decoding at the relay, it performs continuum
broadcasting, as described in the previous subsection. Thus, the received rate
at the destination depends on the instantaneous channel fading gain
realization on the relay-destination link. Clearly, a necessary condition for
receiving something at the destination is that channel conditions on the
source-relay link will allow decoding. The source transmission rate is given
by
$\displaystyle R_{1}^{s}=\log(1+P_{s}s_{s})\ ,$ (291)
and the corresponding achievable rate at the destination is given by
$\displaystyle
R^{r}(\nu)=\int_{0}^{\nu}\frac{u\rho_{r}(u)\,\textnormal{d}u}{1+uI_{r}(u)}\ ,$
(292)
where $I_{r}(\nu)$ is the residual interference distribution function and its
boundary conditions are stated in (4)-(5). The total rate transmitted in the
broadcasting link is equal to the single-level code rate of the source-relay
link, that is
$\displaystyle
R_{1}^{s}=\int\limits_{0}^{\infty}\frac{u\rho_{r}(u)\,\textnormal{d}u}{1+uI_{r}(u)}\
.$ (293)
The above condition in (293) states a constraint on the optimization of the
average rate. The average rate expression, considering the transmission scheme
on the two links is
$\displaystyle R_{\rm ave}$
$\displaystyle={\mathbb{P}}(\nu>s_{s})\int\limits_{0}^{\infty}\,\textnormal{d}xf_{\mu}(x)\int\limits_{0}^{x}\frac{u\rho_{r}(u)\,\textnormal{d}u}{1+uI_{r}(u)}$
(294)
$\displaystyle=(1-F_{\nu}(s_{s}))\int\limits_{0}^{\infty}\,\textnormal{d}x(1-F_{\mu}(x))\frac{x\rho_{r}(x)}{1+xI_{r}(x)}\
,$ (295)
where we have used the integration-by-parts rule. The average rate maximization
problem can now be posed as
$\displaystyle\displaystyle{R_{1-\rm bs,\rm
ave}}=\left\\{\begin{array}[]{ll}\max\limits_{s_{s},I_{r}(\nu)}&\displaystyle(1-F_{\nu}(s_{s}))\int\limits_{0}^{\infty}\,\textnormal{d}x(1-F_{\mu}(x))\frac{x\rho_{r}(x)}{1+xI_{r}(x)}\\\
\textrm{s.t.}&\displaystyle\int\limits_{0}^{\infty}\frac{u\rho_{r}(u)du}{1+uI_{r}(u)}=\log(1+P_{s}s_{s})\end{array}\right.\
.$ (298)
As a first step in solving the maximal average rate, the residual interference
distribution $I_{r}(\nu)$ is found for every $s_{s}$. That is
$\displaystyle{R_{1-\rm bs}(s_{r})}$
$\displaystyle=\left\\{\begin{array}[]{ll}\max\limits_{I_{r}(\nu)}&\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}x(1-F_{\mu}(x))\frac{x\rho_{r}(x)}{1+xI_{r}(x)}\\\
\textrm{s.t.}&R_{1}^{s}=\displaystyle\int\limits_{0}^{\infty}\frac{u\rho_{r}(u)\,\textnormal{d}u}{1+uI_{r}(u)}\\\
\end{array}\right.$ (301)
$\displaystyle=\left\\{\begin{array}[]{ll}\max\limits_{I_{r}(\nu)}&\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}xG_{1}(x,I_{r}(x),I_{r}^{\prime}(x))\\\
\textrm{s.t.}&R_{1}^{s}=\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}xG_{2}(x,I_{r}(x),I_{r}^{\prime}(x))\end{array}\right.\
,$ (304)
where $I_{r}^{\prime}(x)=\frac{\,\textnormal{d}I_{r}(x)}{\,\textnormal{d}x}$.
The necessary condition for an extremum in (301), subject to the subsidiary
condition, is generally stated as GF91
$\displaystyle G_{1,I_{r}}+\lambda
G_{2,I_{r}}-\frac{d}{\,\textnormal{d}x}\left(G_{1,I_{r}^{\prime}}+\lambda
G_{2,I_{r}^{\prime}}\right)~{}=~{}0\ ,$ (305)
where $G_{1,I_{r}}$ is the derivative of $G_{1}$ with respect to $I_{r}$, and
$G_{1,I_{r}^{\prime}}$ is the derivative of $G_{1}$ with respect to
$I_{r}^{\prime}$. The scalar $\lambda$ is also known as a Lagrange multiplier,
and it is determined from the subsidiary condition in (301). The substitution
of $S_{I_{r}}\triangleq G_{1,I_{r}}+\lambda G_{2,I_{r}}$, and
$S_{I_{r}^{\prime}}\triangleq G_{1,I_{r}^{\prime}}+\lambda
G_{2,I_{r}^{\prime}}$ by using (301) results in
$\displaystyle S_{I_{r}}$
$\displaystyle=\frac{x^{2}I_{r}^{\prime}\left(1-F_{\mu}+\lambda\right)}{(1+xI_{r})^{2}}\
,$ (306) $\displaystyle S_{I_{r}^{\prime}}$
$\displaystyle=\frac{-x\left(1-F_{\mu}+\lambda\right)}{1+xI_{r}}\ ,$ (307)
$\displaystyle\frac{\,\textnormal{d}S_{I_{r}^{\prime}}}{\,\textnormal{d}x}$
$\displaystyle=\frac{(x^{2}I_{r}^{\prime}-1)\left(1-F_{\mu}+\lambda\right)}{(1+xI_{r})^{2}}\
.$ (308)
Substituting the expressions in (306)-(308) into the extremum condition in
(305) yields a general solution for the residual interference, as a function
of the Lagrange multiplier $\lambda$. This is summarized in the following
proposition. The relay broadcasting residual power distribution function
$I_{r}(x)$ that maximizes the expected rate over the two-hop wireless fading
channel (266)-(267) is given by
$\displaystyle I_{r}(x)=\left\\{\begin{array}[]{ll}P&0\leq x\leq x_{0}\\\ &\\\
\frac{1-F_{\mu}(x)+\lambda-xf_{\mu}(x)}{f_{\mu}(x)x^{2}}&x_{0}\leq x\leq
x_{1}\\\ &\\\ 0&x\geq x_{1}\end{array}\right.\ ,$ (314)
where $x_{0}$ and $x_{1}$ are determined from the boundary conditions
$I_{r}(x_{0})=P$ and $I_{r}(x_{1})=0$, respectively. The scalar $\lambda$ is
determined from the subsidiary condition in (301). When considering a Rayleigh
flat fading channel for the relay destination link, i.e.,
$F_{\mu}(x)=1-\exp(-x)$, the residual interference distribution gets the
following form
$\displaystyle
I_{r}(x)=\frac{\lambda}{e^{-x}x^{2}}+\frac{1}{x^{2}}-\frac{1}{x}\
,\qquad\textrm{for}\quad x_{0}\leq x\leq x_{1}\ ,$ (315)
and the condition $I_{r}(x_{1})=0$ provides
$\displaystyle x_{1}=1-W_{L}(-\lambda e)\ ,$ (316)
where $W_{L}(x)$ is the Lambert W-function, also called the omega function,
and it is the inverse of the function $f(W)=We^{W}$. Interestingly, the
subsidiary condition with (315) as the solution for $I_{r}(x)$ yields a
simplified expression
$\displaystyle R_{T}$
$\displaystyle=\int\limits_{x_{0}}^{x_{1}}\frac{u\rho_{r}(u)\,\textnormal{d}u}{1+uI_{r}(u)}$
(317) $\displaystyle=2\log(x_{1})-x_{1}-(2\log(x_{0})-x_{0})$ (318)
$\displaystyle=2\log(1-W_{L}(-\lambda e))-1+W_{L}(-\lambda e)$ (319)
$\displaystyle-2\log(x_{0})+x_{0}\ ,$ (320)
where (316) is used for substitution of $x_{1}$. Using the subsidiary
condition (301), i.e., $R_{T}=R_{1}^{s}$, the solution for $x_{0}$ as a
function of $\lambda$ is
$\displaystyle x_{0}=$ $\displaystyle-2W_{L}\left(-0.5e^{\log(1-W_{L}(-\lambda
e))-0.5+0.5W_{L}(-\lambda e)-0.5R_{1}^{s}}\right)\ .$ (321)
Finally, $\lambda$ is obtained by numerically solving the nonlinear equation
$I_{r}(x_{0})=P$, so that all boundary conditions are satisfied. The
maximal rate $R_{1-\rm bs,\rm ave}$ is then obtained by searching numerically
over $s_{s}$ and evaluating $R_{1-\rm bs,\rm ave}$ for all $s_{s}$ in the
search.
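The Lambert W step in (316), and the root structure of (315), can be checked directly with a standard implementation. In the sketch below the value of $\lambda$ is an illustrative assumption; both real branches of $W_{L}$ (which exist whenever $-\lambda e\in(-1/e,0)$, i.e., $\lambda\leq e^{-2}$) are verified to be roots of $I_{r}(x_{1})=0$:

```python
import numpy as np
from scipy.special import lambertw

def x1_of(lam, branch=0):
    """Zero-interference threshold x_1 of eq. (316): x_1 = 1 - W_L(-lam * e)."""
    return 1.0 - lambertw(-lam * np.e, branch).real

lam = 0.05  # illustrative Lagrange multiplier value (assumption, lam <= exp(-2))
for branch in (0, -1):
    x1 = x1_of(lam, branch)
    # both branches are roots of lam * exp(x1) + 1 - x1 = 0, i.e. I_r(x1) = 0 in (315)
    print(branch, x1, lam * np.exp(x1) + 1.0 - x1)
```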
Coding Scheme II – Source: Continuum Broadcasting, Relay: Outage. In this
coding scheme, the source transmitter performs continuum broadcasting, as
described in the previous subsection. The relay encodes the successfully
decoded layers into a single-level block code. Thus, the rate of each
transmission from the relay depends on the number of layers decoded. For a
fading gain realization $\nu$ on the source-relay link the decodable rate at
the relay is
$\displaystyle
R^{s}(\nu)=\int_{0}^{\nu}\frac{u\rho_{s}(u)\,\textnormal{d}u}{1+uI_{s}(u)}\ .$
(322)
This is also the rate to be transmitted in a single-level coding approach,
yielding
$\displaystyle R_{1}^{r}(\nu)=\log(1+P_{r}s_{r}(\nu))\ ,$ (323)
where $s_{r}(\nu)$ is the fading gain threshold for decoding at the
destination. In order to ensure equal source and relay transmission rates, it
is required that $R_{1}^{r}(\nu)=R^{s}(\nu)$. The average rate is then given
by
$\displaystyle{R_{\rm bs-1,ave}}$
$\displaystyle=\max\limits_{I_{s}(x)}\int\limits_{0}^{\infty}\,\textnormal{d}x{\mathbb{P}}(\mu\geq
s_{r}(x))f_{\nu}(x)R^{s}(x)$ (324)
$\displaystyle=\max\limits_{I_{s}(x)}\int\limits_{0}^{\infty}\,\textnormal{d}x(1-F_{\mu}(s_{r}(x)))f_{\nu}(x)\int_{0}^{x}\frac{u\rho_{s}(u)\,\textnormal{d}u}{1+uI_{s}(u)}$
(325)
$\displaystyle=\max\limits_{I_{s}(x)}\int\limits_{0}^{\infty}\,\textnormal{d}xe^{-x}e^{-s_{r}(x)}\int_{0}^{x}\frac{u\rho_{s}(u)\,\textnormal{d}u}{1+uI_{s}(u)}\
,$ (326)
where a Rayleigh fading distribution is assumed in the last equality, and
$\displaystyle
s_{r}(\nu)=\frac{1}{P_{r}}\left(\exp\left({\int\limits_{0}^{\nu}\,\textnormal{d}x\frac{x\rho_{s}(x)}{1+xI_{s}(x)}}\right)-1\right)\
.$ (327)
As may be noticed from (327), the functional subject to optimization in (324)
does not have a localization property GF91 , and thus, it does not have a
standard Euler-Lagrange equation for an extremum condition.
Coding Scheme III – Source and Relay: Continuous Broadcasting. In this scheme,
both source and relay perform the optimal continuum broadcasting. The source
transmitter encodes a continuum of layered codes. The relay decodes up to the
maximal decodable layer. Then it retransmits the data in a continuum multi-
layer code matched to the rate that has been decoded last. In this scheme, the
source encoder has a single power distribution function, which depends only on
a single fading gain parameter. The relay uses a power distribution that
depends on the two fading gains on the source-relay and the relay-destination
links.
In general, the source channel code rate as a function of the fading gain is
the same one specified in (322). The rate achievable at the destination is
then given by
$\displaystyle
R^{s}(\nu,\mu)=\int_{0}^{\mu}\frac{u\rho_{r}(\nu,u)\,\textnormal{d}u}{1+uI_{r}(\nu,u)}\
.$ (328)
The maximal average rate is then specified by
$\displaystyle{R_{\rm bs-
bs,ave}}=\left\\{\begin{array}[]{ll}\max\limits_{I_{s}(\nu),I_{r}(\nu,\mu)}&\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}x\int\limits_{0}^{\infty}\,\textnormal{d}yf_{\nu}(x)f_{\mu}(y)\int\limits_{0}^{y}\frac{u\rho_{r}(x,u)du}{1+uI_{r}(x,u)}\\\
\textrm{s.t.
}&\displaystyle\int\limits_{0}^{\infty}\frac{u\rho_{r}(x,u)\,\textnormal{d}u}{1+uI_{r}(x,u)}\leq
R^{s}(x)\end{array}\right.\ ,$ (331)
which may be simplified into
$\displaystyle{R_{\rm bs-
bs,ave}}=\left\\{\begin{array}[]{ll}\max\limits_{I_{s}(\nu),I_{r}(\nu,\mu)}&\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}x\int\limits_{0}^{\infty}\,\textnormal{d}yf_{\nu}(x)(1-F_{\mu}(y))\frac{y\rho_{r}(x,y)}{1+yI_{r}(x,y)}\\\
\textrm{s.t.
}&\displaystyle\int\limits_{0}^{\infty}\frac{u\rho_{r}(x,u)\,\textnormal{d}u}{1+uI_{r}(x,u)}\leq
R^{s}(x)\end{array}\right.\ .$ (334)
In order to present an Euler-Lagrange equation here, the subsidiary condition
in (334) still has to be brought to a functional form, and then it could be
solved with the aid of the Lagrange multipliers.
#### 5.2.4 AF Relaying
Consider a relay that cannot decode and encode the data, but rather it can
only amplify the input signal. The channel model to consider here is the same
one specified in (266)-(267). However, it may be assumed that the relay can
estimate the input signal power and amplify the signal (without distortion) by
a factor that ensures maximal transmission $P_{r}$ from the relay. In such a
case, the amplification coefficient is given by
$\displaystyle\gamma=\sqrt{\frac{P_{r}}{P_{s}|h_{s}|^{2}+1}}\ .$ (335)
The equivalent received signal at the destination can be specified by
$\displaystyle\textbf{y}^{\prime}_{d}=\frac{\gamma
h_{r}h_{s}}{\sqrt{\gamma^{2}|h_{r}|^{2}+1}}\textbf{x}_{s}+\textbf{n}^{\prime}_{r}~{}\
,$ (336)
where $\textbf{n}^{\prime}_{r}\sim{\mathcal{CN}}(0,1)$ and the original source
signal is multiplied by a factor, which represents an equivalent fading
coefficient with power
$\displaystyle s_{\rm
b}=\frac{\gamma^{2}s_{r}s_{s}}{\gamma^{2}s_{r}+1}=\frac{P_{r}s_{r}s_{s}}{P_{r}s_{r}+P_{s}s_{s}+1}~{}\
,$ (337)
where $s_{r}=|h_{r}|^{2}$ and $s_{s}=|h_{s}|^{2}$, and we have used the
amplification factor definition from (335) for explicitly stating the
equivalent fading gain. The CDF of the equivalent fading gain $s_{b}$ is then
given by
$\displaystyle F_{s_{\rm
b}}(x)={\mathbb{P}}(s_{b}<x)=\int\int\limits_{\mathcal{R}}\,\textnormal{d}x_{s}\,\textnormal{d}x_{r}f_{s_{s}}(x_{s})f_{s_{r}}(x_{r})\
,$ (338)
where
${\mathcal{R}}=\left\\{x_{r},x_{s}\in[0,\infty)\;\Big{|}\;\frac{P_{r}x_{r}x_{s}}{P_{r}x_{r}+P_{s}x_{s}+1}\leq
x\right\\}\ .$
When assuming a Rayleigh fading channel, i.e., $f_{s_{r}}(x_{r})=e^{-x_{r}}$
and $f_{s_{s}}(x_{s})=e^{-x_{s}}$, we have
$\displaystyle F_{s_{\rm b}}(x)$
$\displaystyle=1-\int\limits_{\frac{P_{s}}{P_{r}}x}^{\infty}\,\textnormal{d}x_{r}\int\limits_{\frac{x(1+P_{r}x_{r})}{x_{r}P_{r}-xP_{s}}}^{\infty}\,\textnormal{d}x_{s}e^{-x_{s}}e^{-x_{r}}$
(339)
$\displaystyle=1-\int\limits_{\frac{P_{s}}{P_{r}}x}^{\infty}\,\textnormal{d}x_{r}e^{-x_{r}-{\frac{x(1+P_{r}x_{r})}{x_{r}P_{r}-xP_{s}}}}\
,$ (340)
which does not lend itself to a closed-form expression. In a broadcast
approach, the transmitter performs continuous code layering, matched to the
equivalent single fading gain RV. Using the equivalent channel model (336) and
using the results of ShitzSteiner03 , the average received rate is given by
$\displaystyle R_{\rm
bs,AF,ave}=\max\limits_{I(x)}\int\limits_{0}^{\infty}\left(1-F_{s_{\rm
b}}(x)\right)\frac{-xI^{\prime}(x)}{1+xI(x)}\ ,$ (341)
where the optimal residual interference distribution $I_{\rm opt}(x)$ is given
by ShitzSteiner03
$\displaystyle I_{\rm opt}(x)=\left\\{\begin{array}[]{ll}P&0\leq x\leq
x_{0}\\\ \frac{1-F_{s_{\rm b}}(x)-xf_{s_{\rm b}}(x)}{f_{s_{\rm
b}}(x)x^{2}}&x_{0}\leq x\leq x_{1}\\\ 0&x\geq x_{1}\end{array}\right.\ ,$
(345)
where $x_{0}$ and $x_{1}$ are determined from the boundary conditions
$I_{\rm opt}(x_{0})=P_{s}$ and $I_{\rm opt}(x_{1})=0$, respectively. The maximal
achievable rate is provided by the following proposition. The maximal
achievable expected rate of a two-hop AF-relay network is explicitly given by
$\displaystyle{R_{\rm
bs,AF,ave}}=\int\limits_{x_{0}}^{x_{1}}\,\textnormal{d}x\left[\frac{2(1-F_{s_{\rm
b}}(x))}{x}+\frac{(1-F_{s_{\rm b}}(x))f_{s_{\rm b}}^{\prime}(x)}{f_{s_{\rm
b}}(x)}\right]\ ,$ (346)
where the CDF $F_{s_{\rm b}}(x)$ is specified in (339), and thus the
corresponding PDF is given by
$\displaystyle f_{s_{\rm b}}(x)=\frac{d}{\,\textnormal{d}x}F_{s_{\rm
b}}(x)=\int\limits_{\frac{P_{s}}{P_{r}}x}^{\infty}\,\textnormal{d}x_{r}\frac{P_{r}x_{r}(1+P_{r}x_{r})}{(P_{s}x-P_{r}x_{r})^{2}}e^{-x_{r}-{\frac{x(1+P_{r}x_{r})}{x_{r}P_{r}-xP_{s}}}}\
.$ (347)
Finally, $R_{\rm bs,AF,ave}$ (346) can be obtained via a numerical
integration.
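Although (340) has no closed form, it is a one-dimensional integral and is easily evaluated by quadrature, which is all that (346)-(347) require; a sketch (the function name is illustrative):

```python
import numpy as np
from scipy.integrate import quad

def F_sb(x, P_s, P_r):
    """CDF of the equivalent AF fading gain s_b, eq. (340),
    Rayleigh fading on both links."""
    if x <= 0.0:
        return 0.0
    def integrand(x_r):
        denom = x_r * P_r - x * P_s
        if denom <= 0.0:  # below the lower integration limit the tail is empty
            return 0.0
        return np.exp(-x_r - x * (1.0 + P_r * x_r) / denom)
    tail, _ = quad(integrand, (P_s / P_r) * x, np.inf)  # P(s_b > x)
    return 1.0 - tail
```

The PDF (347) can be computed the same way, after which (346) is a final one-dimensional quadrature over $[x_{0},x_{1}]$.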
#### 5.2.5 AQF Relay and Continuum Broadcasting
Now, let the source encoder perform continuum layering, and the relay, as
before, amplifies its input signal, quantizes it with average distortion $D$,
optimally in an MSE sense. The destination tries to first decode the quantized
signal $u_{q}$. Upon successful decoding, it decodes the multi-level code up
to the highest layer possible, depending on the fading gain on the source
relay link. In this setting, we consider single-level quantization. In
broadcasting, it may be assumed that part of the original signal cannot be
decoded. Therefore, it is modeled as additive Gaussian noise. The quantized
signal, after suitable amplification and forward channel conversion, as a
function of the source data, is given by
$\displaystyle u_{q}=\beta\gamma h_{s}x_{s,s}+\beta\gamma
h_{s}x_{s,I}+\beta\gamma n_{s}+n_{q}^{\prime}\ ,$ (348)
where $n_{q}^{\prime}$ is the equivalent quantization noise distributed
according to ${\mathcal{CN}}(0,\beta D)$, $\beta=1-\frac{D}{P_{r}}$,
$\gamma=\sqrt{\frac{P_{r}}{P_{s}\nu_{s}+1}}$ with $\nu_{s}=|h_{s}|^{2}$, and
$x_{s,I}$ represents the residual interference in the decoded signal. Consider
a power distribution $\rho(\nu_{s})$, which is the source power distribution as
a function of the fading gain. Then the incremental rate associated with a
fading $\nu_{s}$ is
$\displaystyle\,\textnormal{d}R(\nu_{s})=\frac{\gamma^{2}\nu_{s}\rho(\nu_{s})\beta^{2}\,\textnormal{d}\nu_{s}}{\gamma^{2}+\beta
D+\gamma^{2}\nu_{s}I(\nu_{s})\beta^{2}}\ ,$ (349)
which, after substituting $\gamma$ and some algebra, simplifies to
$\displaystyle\,\textnormal{d}R(\nu_{s})=\frac{\nu_{s}\rho(\nu_{s})\,\textnormal{d}\nu_{s}}{1+D_{\beta}+\nu_{s}(I(\nu_{s})+P_{s}D_{\beta})}\
,$ (350)
where $D_{\beta}\triangleq\frac{D/P_{r}}{1-D/P_{r}}$. Thus, the average
rate attainable, when $u_{q}$ is successfully decoded, is
$\displaystyle R_{\rm ave}$
$\displaystyle=\int\limits_{0}^{\infty}\,\textnormal{d}\nu_{s}f(\nu_{s})\int\limits_{0}^{\nu_{s}}\,\textnormal{d}R(u)$
(351)
$\displaystyle=\int\limits_{0}^{\infty}(1-F(\nu_{s}))\frac{\nu_{s}\rho(\nu_{s})\,\textnormal{d}\nu_{s}}{1+D_{\beta}+\nu_{s}(I(\nu_{s})+P_{s}D_{\beta})}$
(352)
$\displaystyle=\int\limits_{0}^{\infty}(1-F(\nu_{s}))\frac{\nu_{s}\rho_{N}(\nu_{s})\,\textnormal{d}\nu_{s}}{1+\nu_{s}I_{N}(\nu_{s})}\
,$ (353)
where the first equality is obtained via integration by parts. The
following relationships follow from the definitions of the normalized power
distribution and residual interference:
$\displaystyle\rho_{N}(\nu_{s})$
$\displaystyle\triangleq\frac{\rho(\nu_{s})}{1+D_{\beta}}\ ,$ (354)
$\displaystyle I_{N}(\nu_{s})$
$\displaystyle\triangleq\frac{I(\nu_{s})+D_{\beta}P_{s}}{1+D_{\beta}}\ ,$
(355)
that satisfy $\rho_{N}(\nu_{s})=-I_{N}^{\prime}(\nu_{s})$. For a given average
distortion $D$, $D_{\beta}$ is also explicitly determined, and the maximal
average rate $R_{\rm ave}$ is achieved for
$\displaystyle\rho_{N}(\nu_{s})$
$\displaystyle=\frac{2}{\nu_{s}^{3}}-\frac{1}{\nu_{s}^{2}}\ ,$ (356)
$\displaystyle I_{N}(\nu_{s})$
$\displaystyle=\frac{1}{\nu_{s}^{2}}-\frac{1}{\nu_{s}}\ ,$ (357)
on the range of $\nu_{s}\in[\nu_{0},\nu_{1}]$, where the boundary conditions
are $I(\nu_{0})=P_{s}$ and $I(\nu_{1})=0$, i.e., $I_{N}(\nu_{0})=P_{s}$ and
$I_{N}(\nu_{1})=\frac{D_{\beta}P_{s}}{1+D_{\beta}}$. Thus, the range of the
optimal solution is
$\displaystyle\nu_{0}$ $\displaystyle=\frac{2}{1+\sqrt{1+4P_{s}}}\ ,$ (358)
$\displaystyle\nu_{1}$
$\displaystyle=\frac{2}{1+\sqrt{1+4\frac{P_{s}D_{\beta}}{1+D_{\beta}}}}\ .$
(359)
This rate is attainable only when the compressed signal may be decoded at the
destination. Otherwise, an outage event occurs, and nothing can be restored
from the original signal. Evidently, the event of outage depends only on the
relay-destination link. Hence, the average achievable rate for the broadcast-
amplify-quantize (BAQ) approach is formalized in the next proposition.
In the system model described by (266)-(267), with $\nu_{s}$ known to relay
and destination, and $\nu_{r}$ known to destination only, the maximal average
attainable rate in a BAQ scheme is specified by
$\displaystyle R_{\rm BAQ,ave}=\max\limits_{D}~{}\overline{P}_{\rm
out}\cdot\int\limits_{0}^{\infty}(1-F(\nu_{s}))\frac{\nu_{s}\rho_{N}(\nu_{s})\,\textnormal{d}\nu_{s}}{1+\nu_{s}I_{N}(\nu_{s})}\
,$ (360)
where the complementary outage probability is
$\displaystyle\overline{P}_{\rm
out}={\mathbb{P}}\left(\log\frac{P_{r}}{D}\leq\log(1+\nu_{r}P_{r})\right)\ .$
(361)
The complementary outage probability for a Rayleigh fading channel reduces
(361) into $\overline{P}_{\rm out}=e^{-\frac{1}{D}+\frac{1}{P_{r}}}$.
Computing $R_{\rm BAQ,ave}$ can be directly pursued, while optimizing the
selection of the average distortion $D$, and directly computing the average
rate for every $D$.
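Since (356)-(357) give $1+\nu_{s}I_{N}(\nu_{s})=1/\nu_{s}$ on $[\nu_{0},\nu_{1}]$, the integral in (360) reduces to $\int_{\nu_{0}}^{\nu_{1}}e^{-\nu}(2/\nu-1)\,\textnormal{d}\nu$, and the optimization over $D$ becomes a scalar search. A sketch of this computation (the closed-form reduction is worked out here, not taken from the text):

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import minimize_scalar

def r_baq(D, P_s, P_r):
    """Average BAQ rate (360) for a given quantization distortion D,
    Rayleigh fading on both links (sketch; 0 < D < P_r assumed)."""
    D_beta = (D / P_r) / (1.0 - D / P_r)                      # below (350)
    nu0 = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * P_s))              # (358)
    nu1 = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * P_s * D_beta / (1.0 + D_beta)))  # (359)
    p_out_bar = np.exp(-1.0 / D + 1.0 / P_r)                  # below (361)
    # with (356)-(357), the integrand is exp(-nu) * (2/nu - 1) on [nu0, nu1]:
    rate = 2.0 * (exp1(nu0) - exp1(nu1)) - (np.exp(-nu0) - np.exp(-nu1))
    return p_out_bar * rate

# maximize over the distortion D, as in (360):
P_s = P_r = 10.0
res = minimize_scalar(lambda D: -r_baq(D, P_s, P_r),
                      bounds=(1e-3, P_r - 1e-3), method="bounded")
D_opt, R_baq_max = res.x, -res.fun
```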
Figure 26: Achievable average rates, for $P_{r}=P_{s}$, and for various
relaying protocols and broadcasting strategies.
Figure 26 demonstrates the maximal attainable expected rates for the various
relaying protocols. The numerical results correspond to Rayleigh fading
channels on both source-relay and relay-destination links. A comparison of all
relaying protocols (DF, AF, QF, AQF) is provided for equal SNR on both links,
i.e., $P_{r}=P_{s}$. As may be noticed, the broadcasting for AF relay has the
highest throughput gains for high SNRs. The AQF scheme with two levels of
refinement at the relay, shows only a small gain in the overall expected
throughput. This questions the possible benefits of higher levels of
successive refinement at the relay, when the source performs only single-level
coding.
### 5.3 Cooperation Techniques of Two Co-located Users
The work in AsAsSh07 considers the problem of communication between a single
remote transmitter and a destination while being helped by co-located users,
over an independent block Rayleigh-fading channel, as depicted in Fig. 27. The
co-location of the users enables cooperation, allowing a higher communication
rate from the transmitter to the destination. The transmitter has no CSI,
while receivers have access to perfect CSI. Under this channel model,
cooperation between co-located users for a transmitter using a broadcast
approach achieves higher expected rates. This is directly explained by the
fundamentals of the broadcast approach, where the better the channel quality,
the more layers that can be successfully decoded. The cooperation between the
users is performed over AWGN channels, under a relay power constraint with
unlimited bandwidth. Three cooperation techniques are considered: AF, CF, and
DF. For the case of a relaxed decoding delay constraint, these techniques are
extended by the broadcast approach. The high efficiency is obtained from
multi-session cooperation as each session allows decoding more layers.
Interestingly, closed-form expressions for infinitely many AF sessions and
recursive expressions for the more complex CF can be derived.
Figure 27: A schematic diagram of a source transmitter and two co-located
users - destination and a helping node, performing multi-session cooperation.
The first cooperation strategy is based on AF relaying by the helping user to
the destination user over the cooperation link, with the following variations.
1. 1.
Naive AF \- A helping node scales its input and relays it to the destined
user, who jointly decodes the relay signal and the direct link signal.
2. Separate preprocessing AF - A more efficient form of single-session AF is a
separate preprocessing approach in which the co-located users exchange the
values of the estimated fading gains, and each individually decodes the layers
up to the smallest fading gain. The helping user removes the decoded common
information from its received signal and performs AF on the residual signal to
the destination user.
3. Multi-session AF - Separate preprocessing is performed repeatedly, followed
by transmission of cooperation information at both helper and destination
nodes (on orthogonal channels). The preprocessing stage includes individual
decoding of the received information from the direct link and previous
cooperation sessions. Along the cooperation sessions, transmission of the next
block already takes place; hence multi-session cooperation introduces
additional decoding delay without any impact on the throughput. For this
purpose, multiple parallel cooperation channels are assumed. For incorporating
practical constraints on the multi-session approach, the total power of
multi-session cooperation is restricted to $P_{r}$. This is identical to the
power constraint in single-session cooperation.
In the limit of infinitely many sessions, the multi-session cooperation
channel capacity is $C_{\rm coop}=P_{r}$. The other cooperation schemes (naive
AF, and separate preprocessing AF) cannot efficiently use unlimited bandwidth.
Single-session wide-band AF means duplicating the AF signal while
proportionally reducing its power. This has no gain over narrow-band
cooperation. Therefore a narrow-band cooperation channel is used for these two
schemes, with $C_{\rm coop}=\log(1+P_{r})$. Another set of cooperative
strategies based on the WZ WYNER76 CF relaying are:
1. Naive CF - A helping node performs WZ-CF over the cooperation link. The
destination informs the relay of its fading gain realization prior to the WZ
compression. The destination performs optimal joint decoding of the WZ
compressed signal forwarded over the cooperation link, and its own copy of the
signal from the direct link.
2. Separate preprocessing CF - Each user decodes independently up to the
highest common decodable layer. Then WZ-CF cooperation takes place on the
residual signal.
3. Multi-session CF - Multi-session cooperation, as described for AF, is
carried out in conjunction with successive refinement WZ SM04 CF relaying.
To analyze these models, consider the following SIMO channel
$\displaystyle\textbf{y}_{i}=h_{i}\textbf{x}_{s}+\textbf{n}_{i}\
,~{}~{}~{}~{}~{}i\in\\{1,2\\}\ ,$ (362)
where $\textbf{y}_{i}$ is a received vector by user $i$, with length $L$,
which is the transmission block length. The length $L$ is assumed to be
sufficiently large that transmission rates close to the mutual information can
be reliably decoded. $\textbf{x}_{s}$ is the original source transmitted vector,
and $\textbf{n}_{i}$ is the additive noise vector, with complex Gaussian
i.i.d. zero-mean and unit variance ${\mathcal{CN}}(0,1)$, and $h_{i}$ is the
(scalar) fading coefficient, which is perfectly known at the $i^{\rm th}$ receiver.
The fading $h_{i}$ is distributed according to the Rayleigh distribution
$h_{i}\sim{\mathcal{CN}}(0,1)$, and it remains constant for the duration of
every transmission block (adhering to a block fading channel). It means that
the two users have equal average SNR, which is realistic due to their
colocation. Nevertheless, the results may be extended to the case of unequal
average SNRs in a straightforward way. Receivers being co-located may also
suggest channel realization correlation ($h_{1}$ and $h_{2}$). In the case of
such correlation, the cooperation gains are expected to be smaller since even
the joint decoding channel capacity decreases. We assume, for simplicity of
analysis, fully independent fading channel realizations. The cooperation links
between the users are modeled by AWGN channels as follows:
$\displaystyle\textbf{y}_{2,1}^{(k)}$
$\displaystyle=\textbf{x}_{1}^{(k)}+\textbf{w}_{1}^{(k)}\ ,$ (363)
$\displaystyle\textbf{y}_{1,2}^{(k)}$
$\displaystyle=\textbf{x}_{2}^{(k)}+\textbf{w}_{2}^{(k)}\ ,$ (364)
where $\textbf{y}_{2,1}^{(k)}$ is the length $L$ helper’s received cooperation
vector from the destination ($i=1$), on the $k^{\rm th}$ cooperation link, and
vice versa for $\textbf{y}_{1,2}^{(k)}$. $\textbf{x}_{i}^{(k)}$ is the
cooperation signal from user $i$, on the $k^{\rm th}$ cooperation link, and
$\textbf{w}_{i}$ is the noise vector with i.i.d. elements distributed
according to ${\mathcal{CN}}(0,1)$. In single-session cooperation $k=1$, and
the power of $x_{i}^{(1)}$ is limited by
${\mathbb{E}}\left[|x_{i}^{(1)}|^{2}\right]\leq P_{r}$ (for $i=1,2$). In a
$K$-session cooperation there are $K$ orthogonal cooperation channels
available for each user with a total power constraint $P_{r}$. The power
constraint here is specified by
$\displaystyle{\mathbb{E}}\left[\left(\sum\limits_{k=1}^{K}|x_{i}^{(k)}|^{2}\right)\right]\leq
P_{r}\ .$ (365)
Hence, $K$ is also the bandwidth expansion that results from the multi-session
cooperation. It is assumed that reception of the next block can be performed
while transmitting cooperation messages of previous blocks, that is, the
receiver is full-duplex. Cooperation is interference-free, as orthogonal channels
are assumed for this purpose. Naturally, the link capacity of a single-session
narrow-band cooperation over the AWGN channel defined in (363) is given by
$C_{\rm coop,NB}=\log(1+P_{r})$.
In the limit of $K\rightarrow\infty$ with a power constraint for multi-session
cooperation, the cooperation link capacity is given by
$C_{\rm
coop,WB}=\int\limits_{0}^{\infty}\,\textnormal{d}R(s)=\int\limits_{0}^{\infty}\log(1+\rho(s)ds)=\int\limits_{0}^{\infty}\rho(s)\,\textnormal{d}s=P_{r}\
,$ (366)
where the fractional rate of a session $s$ is
$\,\textnormal{d}R(s)=\log(1+\rho(s)\,\textnormal{d}s)$. The fractional power at the $s^{\rm th}$ session is
$\rho(s)$. The multi-session power constraint implies
$\int\limits_{0}^{\infty}\rho(s)\,\textnormal{d}s=P_{r}\ ,$
which justifies the last equality in (366).
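As a numerical illustration of the wide-band limit in (366), the sketch below (with an even power split across the $K$ orthogonal sub-channels, assumed purely for concreteness; the limiting capacity itself is independent of the power allocation density) shows $K\log(1+P_{r}/K)$ approaching $P_{r}$, while a single narrow-band session is capped at $\log(1+P_{r})$:

```python
import math

def coop_capacity(P_r, K):
    """Total rate (nats) of K orthogonal cooperation sub-channels when
    the power P_r is split evenly among them: K * log(1 + P_r / K)."""
    return K * math.log(1.0 + P_r / K)

P_r = 10.0  # relay power constraint, linear scale

# Single-session narrow-band link: C_coop,NB = log(1 + P_r).
C_nb = coop_capacity(P_r, 1)

# As K grows, the wide-band multi-session capacity approaches P_r, the
# limit in (366): log(1 + rho ds) ~ rho ds for vanishing per-channel power.
for K in (1, 10, 100, 10000):
    print(K, coop_capacity(P_r, K))
```

The gap $P_{r}-\log(1+P_{r})$ is exactly the advantage the multi-session schemes exploit over narrow-band cooperation.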
#### 5.3.1 Lower and Upper Bounds
To evaluate the benefit of cooperation among receivers in a fading channel
following the model described in (362)-(363), we provide upper and lower
bounds on relevant figures of merit. There are three types of bounds relevant
to our channel model. The first is the outage capacity, which is the ultimate
average rate achievable using a single-level code (without multi-layer
coding). The second one is the achievable broadcasting rate, which refers to a
scheme using a continuous broadcast approach. The last one is the ergodic
capacity, which gives the ultimate upper bound on average rates by averaging
maximal rates obtained with full transmitter CSI.
The lower bounds are obtained by considering no cooperation, that is, a single
transmitter-receiver pair with no cooperating user. Therefore, all lower
bounds are simple SISO fading channel capacities. Upper bounds refer to the
case where a co-located helping node is available, and the two users can
perform optimal joint decoding of their received signals. In all cases, the
bounds relate to a Gaussian block fading channel, adhering to (362)-(363).
Outage lower bound. The single-layer coding expected rate is
$\displaystyle R_{\rm outage,LB}=\max\limits_{u_{\rm th}>0}\left\\{(1-F(u_{\rm
th}))\log(1+u_{\rm th}P_{s})\right\\}\ ,$ (367)
where the optimal threshold $u_{\rm th}$ that maximizes (367) is given by
$u_{\rm th,opt}=\frac{P_{s}-W(P_{s})}{W(P_{s})P_{s}}$. The function $W(x)$ is
the Lambert-W function.
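The optimal threshold below (367) can be checked numerically. The following sketch implements the Lambert-W function by Newton's method (a small stand-in for a library routine such as scipy.special.lambertw) and verifies that the stated $u_{\rm th,opt}$ indeed maximizes the outage objective for Rayleigh fading, where $F(u)=1-e^{-u}$:

```python
import math

def lambert_w(x, iters=50):
    """Principal branch of the Lambert-W function, w*exp(w) = x for x > 0,
    via Newton's method starting from log(1+x) (>= W(x), so monotone)."""
    w = math.log(1.0 + x)
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (1.0 + w))
    return w

def outage_rate(u_th, P_s):
    """Objective of (367) for Rayleigh fading, F(u) = 1 - exp(-u):
    (1 - F(u_th)) * log(1 + u_th * P_s)."""
    return math.exp(-u_th) * math.log(1.0 + u_th * P_s)

P_s = 100.0  # 20 dB source SNR, chosen for illustration
w_opt = lambert_w(P_s)
u_opt = (P_s - w_opt) / (w_opt * P_s)  # threshold stated below (367)
R_outage = outage_rate(u_opt, P_s)
```

Perturbing the threshold in either direction lowers the rate, confirming the stationarity condition $\log(1+uP_{s})=P_{s}/(1+uP_{s})$ behind the closed form.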
Broadcasting lower bound. This bound is based on a SISO block fading channel,
with receive CSI. The maximal expected broadcasting rate ShitzSteiner03 , for
a Rayleigh fading channel is
$\displaystyle R_{\rm bs,LB}=e^{-1}-e^{-s_{0}}+2E_{1}(s_{0})-2E_{1}(1)\ ,$
(368)
where $s_{0}=2/(1+\sqrt{1+4P_{s}})$, and $E_{1}(x)$ is the exponential
integral function.
Ergodic lower bound. Ergodic capacity of a general SIMO channel with $m$
receiver antennas is T99
$\displaystyle C_{\rm
erg}(m)=\int\limits_{0}^{\infty}u^{m-1}e^{-u}\log(1+P_{s}u)\,\textnormal{d}u,~{}~{}~{}m\in\mathbb{N}\
,$ (369)
which simplifies for a SISO channel into
$\displaystyle C_{\rm erg,LB}=C_{\rm erg}(1)=e^{1/P_{s}}E_{1}(1/P_{s})\ .$
(370)
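The closed forms (370) and (372) can be cross-checked against a direct quadrature of (369); in particular, they imply $C_{\rm erg}(2)=1+(1-1/P_{s})\,C_{\rm erg}(1)$. A minimal sketch (midpoint rule, with the truncation point chosen so the exponential tail is negligible):

```python
import math

def c_erg(m, P_s, n=100000, u_max=60.0):
    """Midpoint-rule evaluation of the SIMO ergodic capacity (369):
    C_erg(m) = int_0^inf u^{m-1} e^{-u} log(1 + P_s u) du."""
    du = u_max / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        total += u ** (m - 1) * math.exp(-u) * math.log(1.0 + P_s * u) * du
    return total

P_s = 10.0
C1 = c_erg(1, P_s)  # SISO ergodic lower bound, closed form (370)
C2 = c_erg(2, P_s)  # two-antenna ergodic upper bound, closed form (372)
```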
Outage upper bound. The bound for fully cooperating users is derived similarly
to (367), with $F_{\rm UB}(u)$ as the fading-gain distribution function.
Broadcasting upper bound. The broadcasting upper bound is a two receive
antenna block fading channel. The expected broadcasting rate for a Rayleigh
fading channel ShitzSteiner03 is
$\displaystyle R_{\rm bs,UB}=$ $\displaystyle
s_{1}e^{-s_{1}}-e^{-s_{1}}-3E_{1}(s_{1})-(s_{0}e^{-s_{0}}-e^{-s_{0}}-3E_{1}(s_{0}))\
,$ (371)
where $s_{0}$ and $s_{1}$ are determined by the boundary conditions $I_{\rm
UB}(s_{0})=P_{s}$ and $I_{\rm UB}(s_{1})=0$, respectively. The residual
interference $I_{\rm UB}(x)$ is given by $I_{\rm UB}(x)=(1+x-x^{2})/x^{3}$.
Ergodic upper bound. The ergodic bound for a two-receive-antenna SIMO fading
channel is $C_{\rm erg}(2)$ in (369),
$\displaystyle C_{\rm erg,UB}$ $\displaystyle=C_{\rm erg}(2)=1+\left(1-\frac{1}{P_{s}}\right)e^{1/P_{s}}E_{1}(1/P_{s})\ .$ (372)
Figure 28 exemplifies the upper and lower bounds for two cooperating users.
Single-session cut-set upper bound. Another upper bound considered is the
classical cut-set bound of the relay channel Cover . This bound may be useful
for single-session cooperation, where the capacity of the cooperation link is
rather small. Using the relay channel definitions in (362)-(363), and assuming
a single cooperation session $K=1$, the cut-set bound for a Rayleigh fading
channel is given by
$\displaystyle C_{\rm cut-set}$
$\displaystyle=\sup\limits_{p(x_{s}),p(x_{2})}\\!\\!\\!\\!\\!\min\Big{\\{}I(x_{s};y_{1}|h_{1})+I(x_{2};y_{1,2})\;,\;I(x_{s};y_{1},y_{2}|h_{1},h_{2})\Big{\\}}$
(373) $\displaystyle=\min\Big{\\{}C_{\rm erg}(1)+C_{\rm coop}\;,~{}C_{\rm
erg}(2)\Big{\\}}\ ,$ (374)
where the ergodic capacity $C_{\rm erg}(m)$ is given by (369), and the terms
$C_{\rm erg}(1)$ and $C_{\rm erg}(2)$ are specified in (370) and (372),
respectively. The cut-set bound is tight only when $C_{\rm erg}(1)+C_{\rm
coop}\leq C_{\rm erg}(2)$, since otherwise the cut-set bound coincides with
the ergodic upper bound $C_{\rm erg,UB}$ in (372).
Figure 28: Ranges of the average rates for both the outage and broadcast
approaches over the cooperation channel, calculated for either a
single-antenna user (LB) or a two-antenna user (UB). The corresponding rate
range for an ergodic channel, from (370) and (372), is also given for
comparison.
#### 5.3.2 Naive AF Cooperation
In this AF strategy, the helping node scales its input signal to the relaying
power $P_{r}$, and relays the signal to the destination user. The received
signal at the destination, after AF relaying, is
$\displaystyle\textbf{y}_{b}=\left[\begin{array}[]{c}\textbf{y}^{(1)}_{1,2}\\\
\textbf{y}_{1}\end{array}\right]=\left[\begin{array}[]{c}\alpha
h_{2}\textbf{x}_{s}+\alpha\textbf{n}_{2}+\textbf{w}_{2}\\\
h_{1}\textbf{x}_{s}+\textbf{n}_{1}\end{array}\right]=\left[\begin{array}[]{c}(\sqrt{\beta}\textbf{x}_{s}+\widetilde{\textbf{w}}_{2})\cdot\sqrt{1+\alpha^{2}}\\\
h_{1}\textbf{x}_{s}+\textbf{n}_{1}\end{array}\right]\ ,$ (381)
where $\textbf{y}_{b}$ is the signal to be decoded at the destination, and
$\alpha$ scales the relay output to $P_{r}$. Hence,
$\alpha=\sqrt{\frac{P_{r}}{1+P_{s}s_{2}}}$, and $s_{i}=|h_{i}|^{2}$. The
normalized noise vector $\widetilde{\textbf{w}}_{2}$ has i.i.d. elements
distributed ${\mathcal{CN}}(0,1)$. Hence, the normalized signal gain after the
scaling of user $i=2$ is
$\displaystyle\beta=\frac{P_{r}s_{2}}{1+P_{s}s_{2}+P_{r}}.$ (382)
The achievable rate as a function of the channel fading gains is given by the
following mutual information
$\displaystyle
I(x_{s};\textbf{y}_{b}|h_{1},h_{2})=\log(1+P_{s}s_{b})=\log\left(1+P_{s}\left(s_{1}+\frac{P_{r}s_{2}}{1+P_{s}s_{2}+P_{r}}\right)\right),$
(383)
where $s_{b}=s_{1}+\beta$, and therefore, the equivalent fading $s_{b}$ is the
broadcasting variable. The CDF of $s_{b}$ ShitzSteiner03 is
$\displaystyle F_{s_{\rm b}}(x)={\mathbb{P}}(s_{b}\leq
x)=\int\limits_{0}^{\infty}\,\textnormal{d}uf_{s_{1}}(u)\int\limits_{0}^{\max\left(0,x-\frac{P_{r}u}{1+P_{s}u+P_{r}}\right)}\,\textnormal{d}vf_{s_{2}}(v)\
,$ (384)
where $f_{s_{i}}(u)$ is the PDF of $s_{i}$. The CDF of $s_{b}$, for a Rayleigh
fading channel, is
$\displaystyle F_{s_{\rm b}}(x)=\begin{cases}\begin{array}[]{ll}0&x\leq 0\\\
1-e^{-\frac{(1+P_{r})x}{P_{r}-P_{s}x}}-\\!\\!\\!\\!\\!\displaystyle\int\limits_{0}^{\frac{(1+P_{r})x}{P_{r}-P_{s}x}}\,\textnormal{d}u\cdot
e^{-u-x+\frac{P_{r}u}{1+P_{s}u+P_{r}}}&0\leq x<\frac{P_{r}}{P_{s}}\\\
1-\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}u\cdot
e^{-u-x+\frac{P_{r}u}{1+P_{s}u+P_{r}}}&x\geq\frac{P_{r}}{P_{s}}\end{array}\
.\end{cases}$ (385)
The corresponding PDF $f_{s_{\rm b}}(x)$ is given by
$\displaystyle f_{s_{\rm b}}(x)=\begin{cases}0&x\leq 0\\\
\displaystyle\int\limits_{0}^{\frac{(1+P_{r})x}{P_{r}-P_{s}x}}\,\textnormal{d}u\cdot
e^{-u-x+\frac{P_{r}u}{1+P_{s}u+P_{r}}}&0\leq x<\frac{P_{r}}{P_{s}}\\\
\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}u\cdot
e^{-u-x+\frac{P_{r}u}{1+P_{s}u+P_{r}}}&x\geq\frac{P_{r}}{P_{s}}\end{cases}$
(386)
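The equivalent-fading law (384)-(385) can be sanity-checked by Monte Carlo simulation. The sketch below draws $s_{1},s_{2}\sim{\rm Exp}(1)$, forms $s_{b}=s_{1}+\beta(s_{2})$ with $\beta$ from (382), and compares the empirical CDF against a direct numerical evaluation of (384) (the values $P_{s}=P_{r}=10$ are chosen purely for illustration):

```python
import math, random

P_s, P_r = 10.0, 10.0  # illustrative source and relay powers

def beta(s2):
    """Normalized relay-path gain (382): P_r*s2 / (1 + P_s*s2 + P_r)."""
    return P_r * s2 / (1.0 + P_s * s2 + P_r)

def cdf_sb(x, n=20000, u_max=40.0):
    """CDF of s_b = s1 + beta(s2) for s1, s2 ~ Exp(1), eq. (384),
    evaluated by midpoint-rule integration over s2."""
    du = u_max / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        thr = max(0.0, x - beta(u))
        total += math.exp(-u) * (1.0 - math.exp(-thr)) * du
    return total

random.seed(1)
samples = [random.expovariate(1.0) + beta(random.expovariate(1.0))
           for _ in range(200000)]
x = 0.5
empirical = sum(s <= x for s in samples) / len(samples)
```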
This provides the single-layer and broadcasting expected rates for the naive
AF. The transmitter performs broadcasting optimized for the fading-gain random
variable $s_{b}$ in (384). The maximal expected rate is expressed as follows
$\displaystyle R_{\rm
bs,ave}=\max\limits_{I(u)}\int\limits_{0}^{\infty}\,\textnormal{d}u(1-F_{s_{\rm
b}}(u))\frac{\rho(u)u}{1+I(u)u}\ ,$ (387)
where $F_{s_{\rm b}}(u)$ is the fading gain CDF, $\rho(u)$ is the layering power density, and $I(u)$ is the residual interference function. The optimal power distribution,
which maximizes the broadcasting achievable expected rate for naive AF
cooperation is given by
$\displaystyle I_{\rm NAF}(u)=\left\\{\begin{array}[]{ll}P_{s}&u<u_{0}\\\
\frac{1-F_{s_{\rm b}}(u)-u\cdot f_{s_{\rm b}}(u)}{u^{2}f_{s_{\rm
b}}(u)}&u_{0}\leq u\leq u_{1}\\\ 0&u>u_{1}\end{array}\right.\ ,$ (391)
where $u_{0}$ and $u_{1}$ are obtained from the boundary conditions $I_{\rm
NAF}(u_{0})=P_{s}$ and $I_{\rm NAF}(u_{1})=0$, respectively. The equivalent
fading gain distribution $F_{s_{\rm b}}(x)$ and $f_{s_{\rm b}}(x)$ are
specified in (385) and (386), respectively. The broadcasting gain is compared
to the single-layer coding under the same fading gain distribution. Using the
equivalent SISO channel model, which is governed by $s_{\rm b}$ with CDF
$F_{s_{\rm b}}(u)$ in (385), the optimal power allocation for naive-AF can be
specified following (391). Note that $I_{\rm NAF}(u)$ is non-increasing,
starting from $P_{s}$ at $u=0$. The average rate is explicitly given by
$\displaystyle R_{\rm
NAF}=\\!\\!\int\limits_{0}^{\infty}\\!\,\textnormal{d}x\left[\frac{2(1-F_{s_{\rm
b}}(x))}{x}+\frac{(1-F_{s_{\rm b}}(x))f_{s_{\rm b}}^{\prime}(x)}{f_{s_{\rm
b}}(x)}\right].$ (392)
The first derivative of the PDF of $s_{b}$ is denoted by $f_{s_{\rm
b}}^{\prime}(x)$.
#### 5.3.3 AF with Separate Preprocessing
In this section, every node tries to decode independently as many layers as
possible. Then both users exchange the index of the highest layer successfully
decoded. Every node re-encodes the decoded data of each layer up to the lowest
common index and removes it from the original received signal. The helping
node scales the result to power $P_{r}$ and relays it over the cooperation
link to the destination. This improves on the naive AF, as the cooperation is
more efficient, though it requires the helping node to be aware of the source
codebook and be able to decode its transmission. Like the naive AF, this is a
single-session $K=1$ cooperation. The received signal at the helping node can
be expressed as
$\displaystyle\boldsymbol{y}_{2}=h_{2}(\boldsymbol{x}_{s,D}+\boldsymbol{x}_{s,I})+\boldsymbol{n}_{2},$
(393)
where $\boldsymbol{x}_{s,D}$ is the part of the source data successfully
independently decoded by helping node $i=2$. The coded layers not decoded
independently $\boldsymbol{x}_{s,I}$ are actually the residual interference
signal.
When $s_{1}\geq s_{2}$, the decoded data in $\boldsymbol{x}_{s,D}$ include
layers up to $s_{2}$. This is reflected in the residual interference power
$I(s)$, where $s$ is the equivalent fading gain. The residual signals at both sides
(before a cooperation session) are
$\displaystyle\boldsymbol{y}_{1,I}=h_{1}\boldsymbol{x}_{s,I(s_{2})}+\boldsymbol{n}_{1}\
,$ (394)
$\displaystyle\boldsymbol{y}_{2,I}=h_{2}\boldsymbol{x}_{s,I(s_{2})}+\boldsymbol{n}_{2}\
.$ (395)
It may be shown, similarly to the naive AF derivation, that the equivalent
fading gain after AF relaying of $\boldsymbol{y}_{2,I}$ is given by (396). Generally speaking, the helping node
removes only common information from its input signal and relays the scaled
residual signal to the destination. The destination user receives a relayed
residual signal, containing only its undecoded layers when $s_{2}\geq s_{1}$.
Otherwise, the helping node transmits its scaled residual interference,
including some layers that could be independently decoded by the destination.
The equivalent fading gain observed by the destination and its distribution
are stated in the following proposition.
In an AF with separate preprocessing cooperation strategy, with a single
cooperation session $K=1$ with a limited power $P_{r}$, the highest decodable
layer is associated with an equivalent fading gain determined by
$\displaystyle
s_{a}=s_{1}+\frac{P_{r}s_{2}}{1+s_{2}\cdot\max(I(s_{1}),I(s_{2}))+P_{r}}\ ,$
(396)
with the following CDF for a Rayleigh fading channel
$\displaystyle
F_{s_{a}}(x)=\int\limits_{0}^{\phi_{1}^{-1}(x)}[\exp{(-2u)}-\exp\left(-u-\phi_{2}(u)\right)-\exp\left(-u-\phi_{3}(u)\right)]\,\textnormal{d}u\
,$ (397)
where
$\displaystyle\phi_{1}(u)$ $\displaystyle=u+\frac{uP_{r}}{1+uI(u)+P_{r}}\ ,$
(398) $\displaystyle\phi_{2}(u)$
$\displaystyle=\max\left(u,x-\frac{uP_{r}}{1+uI(u)+P_{r}}\right)\ ,$ (399)
$\displaystyle\phi_{3}(u)$ $\displaystyle=\max\left(u,\phi_{4}(x-u)\right)\ ,$
(400) $\displaystyle\phi_{4}(x-u)$
$\displaystyle=\begin{cases}\frac{(1+P_{r})(x-u)}{P_{r}-I(u)(x-u)}&P_{r}-I(u)(x-u)>0\\\
\infty&P_{r}-I(u)(x-u)\leq 0\end{cases}\ .$ (401)
Additional details are available in AsAsSh07 .
#### 5.3.4 Multi-Session AF with Separate Preprocessing
Next, we discuss $K$ multi-session AF with separate preprocessing per session.
In the limit of $K=\infty$, the total power allocated over all cooperation
sessions associated with the decoding of a transmitted codeword is $P_{r}$.
approach, common layers are subtracted before every AF session by both users.
After every AF relaying, each node attempts to decode more layers using all
received AF signals so far and its own received signal. It should be
emphasized that the multi-session is performed over parallel orthogonal
channels in such a way that the source transmission is block-wise continuous.
For example, during the $k^{\rm th}$ cooperation session of the first
transmitted block (from the source), the first cooperation session for the
$(k-1)^{\rm th}$ transmitted block takes place. As the overall multi-session power is
limited to $P_{r}$, at every block epoch, the total power of $P_{r}$ is used.
As parallel channels are used for cooperation, with infinitesimal power
$\rho(s)$ allocated per channel, this wide-band cooperation link’s capacity is
the capacity of the corresponding parallel channel. The power allocation is
$\int_{0}^{\infty}\rho(s)\,\textnormal{d}s=P_{r}$ under the constraint of
$P_{r}$. The fractional rate per sub-band is then
$\,\textnormal{d}R(s)=\log(1+\rho(s)\,\textnormal{d}s)=\rho(s)\,\textnormal{d}s$,
V90 . Therefore, the average capacity of this wide-band cooperation link,
regardless of the actual power allocation density, is $C_{\rm coop}=P_{r}$
(366). Notice that we use AF, which cannot effectively use such capacity
increase in single-session cooperation ($P_{r}>\log(1+P_{r})$). This capacity
is available in two directions: relay-destination and destination-relay. It is
required that information is exchanged in both directions; otherwise,
multi-session cooperation becomes inefficient, and unidirectional
transmission, from the relay to the destination only, does not gain from multi-session relaying.
In the case of unlimited sessions, the scalar equivalent fading gain can be
derived for a given broadcasting power allocation $I(s)$.
In a multi-session AF ($K\rightarrow\infty$, cooperation power constraint
$P_{r}$) with separate preprocessing cooperation strategy, the highest
decodable layer is associated with an equivalent fading gain determined by
$s_{\rm ms}=\left\\{\begin{array}[]{ll}s_{a}^{*}&s_{1}\geq s_{2}\\\
s_{b}^{*}&s_{1}<s_{2}\\\ \end{array}\ ,\right.$ (402)
where $s_{\rm b}^{*}$ is the solution of
$\int_{s_{2}}^{s_{b}^{*}}\frac{s_{1}}{(s_{1}+s_{2}-\sigma)^{2}}[1+s_{1}I(\sigma)]\,\textnormal{d}\sigma=P_{r}\
,$ (403)
and by using $s_{\rm b}^{*}$,
$s_{a}^{*}=s_{1}+s_{2}\cdot\frac{Z(s_{b}^{*})}{1+Z(s_{b}^{*})}\ ,$ (404)
where
$Z(s)=\int_{s_{2}}^{s}\frac{1+s_{1}I(\sigma)}{(1+s_{2}I(\sigma))}\frac{s_{1}}{(s_{1}+s_{2}-\sigma)}\,\textnormal{d}\sigma\
.$ (405)
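Equation (403) defines $s_{b}^{*}$ only implicitly, but its left side is zero at $s_{b}^{*}=s_{2}$ and diverges as $s_{b}^{*}\rightarrow s_{1}+s_{2}$, so a unique root exists and bisection applies for any given power allocation $I(\sigma)$. The sketch below does exactly that, and checks the solver on a toy constant profile $I(\sigma)=I_{0}$ (an illustrative assumption only, not an optimized allocation), for which the integral has a closed form:

```python
import math

def lhs(t, s1, s2, I, n=20000):
    """Left side of (403): int_{s2}^{t} s1/(s1+s2-sig)^2 (1+s1*I(sig)) dsig,
    by the midpoint rule (valid for s2 < t < s1+s2)."""
    d = (t - s2) / n
    total = 0.0
    for i in range(n):
        sig = s2 + (i + 0.5) * d
        total += s1 / (s1 + s2 - sig) ** 2 * (1.0 + s1 * I(sig)) * d
    return total

def solve_sb(s1, s2, P_r, I, tol=1e-10):
    """Bisection for s_b^* in (s2, s1+s2): the integral is 0 at s2 and
    diverges toward s1+s2, so a root exists for any P_r > 0."""
    lo, hi = s2, s1 + s2 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lhs(mid, s1, s2, I) < P_r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy check with constant I(sig) = I0, for which (403) integrates to
# s1*(1+s1*I0)*(1/(s1+s2-t) - 1/s1) and can be inverted in closed form.
s1, s2, P_r, I0 = 0.5, 1.0, 10.0, 0.2  # note s1 < s2, the (403) branch
sb_num = solve_sb(s1, s2, P_r, lambda sig: I0)
sb_closed = s1 + s2 - 1.0 / (1.0 / s1 + P_r / (s1 * (1.0 + s1 * I0)))
```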
Similarly, achievable rates are obtained for naive CF and CF with separate
preprocessing AsAsSh07 .
#### 5.3.5 Multi-Session Wyner-Ziv CF
In this cooperation scheme, both nodes can quantize and compress their
received signals and exchange the result via a cooperation session. The
compression is performed by the WZ WYNER76 algorithm using side information
at the decoder. For this to be performed, several definitions are required.
Notice that each WZ compression step can use all information collected in the
previous sessions in the form of side information. Define
$\boldsymbol{\hat{y}}_{1}^{(k)}=\boldsymbol{y}_{1}+\boldsymbol{n}_{c,1}^{(k)}\
,$
where $\boldsymbol{n}_{c,1}^{(k)}$ is independent of $\boldsymbol{y}_{1}$, as
the compressed signal that is transmitted from $i=1$ to the co-located user
$i=2$. We refer the reader to SM04 for an overview of successive Wyner-Ziv coding. Here,
we deal with the case where the message that is transmitted in each session
has better side information than the previous session since more layers are
decoded. Furthermore, the second session can use the information sent by all
the previous sessions in order to improve performance. Since the power that is
used by each session is a control parameter, rather than a fixed parameter,
the use of an auxiliary variable that is transmitted during a session, but
decoded only at the next session is superfluous (due to the better side
information, declared as $V$ in SM04 ). Next, using SM04 , the following
Markov chain is defined, where unlike SM04 , we are interested in independent
averaged distortion, rather than plain averaged distortion. The main feature
here is that the compression noise $\boldsymbol{n}_{c,i}^{(k)}$ should
decrease from iteration to iteration, ending up with a sequence of degraded
channels $\boldsymbol{\hat{y}}_{i}^{(k)}$, following the Markov chain:
$\displaystyle\boldsymbol{y}_{2}-\boldsymbol{x}_{s}-\boldsymbol{y}_{1}-\boldsymbol{\hat{y}}_{1}^{(k)}-\boldsymbol{\hat{y}}_{1}^{(k-1)}-\dots-\boldsymbol{\hat{y}}_{1}^{(1)}\
,$ (406)
$\displaystyle\boldsymbol{y}_{1}-\boldsymbol{x}_{s}-\boldsymbol{y}_{2}-\boldsymbol{\hat{y}}_{2}^{(k)}-\boldsymbol{\hat{y}}_{2}^{(k-1)}-\dots-\boldsymbol{\hat{y}}_{2}^{(1)}\
.$ (407)
The equivalent fading gains after every iteration of the multi-session
cooperation are stated in the following proposition. The achievable rate in
the multi-session with separate preprocessing and successive refinement WZ is
given in a recursive form for the $k^{\rm th}$ session,
$\displaystyle R_{WZ}^{(k)}={\mathbb{E}}_{s_{\rm ms}^{(k)}}\left[\log(1+s_{\rm
ms}^{(k)}P_{s})\right]\ ,$ (408)
where
$s_{\rm ms}^{(k)}=\left\\{\begin{array}[]{ll}s_{a}^{(k)}&s_{1}\geq s_{2}\\\
s_{b}^{(k)}&s_{1}<s_{2}\\\ \end{array}\right.\ ,$ (409)
and
$\displaystyle s_{a}^{(k)}=s_{1}+\frac{s_{2}}{1+(\sigma_{2}^{(k)})^{2}}\ ,$
(410) $\displaystyle s_{b}^{(k)}=s_{2}+\frac{s_{1}}{1+(\sigma_{1}^{(k)})^{2}}\
,$ (411)
and
$\left(\sigma_{j}^{(k)}\right)^{2}=\left(\sigma_{j}^{(k-1)}\right)^{2}\frac{1+s_{j}I(s^{(k-1)})+s_{3-j}I(s^{(k-1)})}{(1+s_{3-j}I(s^{(k-1)}))\left[1+\delta_{j}^{(k)}\left(1+\left(\sigma_{j}^{(k-1)}\right)^{2}\right)\right]+s_{j}I(s^{(k-1)})(1+\delta_{j}^{(k)})}\
,$ (412)
where the recursion (412) holds for $j=1,2$, and $\delta_{j}^{(k)}$ is the
fractional power assigned to user $j$ for the $k^{\rm th}$ cooperation
session.
Figure 29: Broadcast approach: average rates of naive AF, AF with separate
preprocessing, multi-session AF, and narrow-band (NB) naive CF, compared to
the upper and lower bounds, as a function of the channel quality ratio
$\frac{P_{r}}{P_{s}}$ ($P_{s}=20$ dB).
Figure 29 compares the variation of the average rates versus the cooperation
link quality ($P_{r}/P_{s}$) achieved by the naive AF, separate preprocessing
AF, multi-session AF, and narrow-band naive CF. It is observed that the gains
of separate preprocessing AF over the naive approach increase with decreasing
$P_{r}/P_{s}$. For $P_{s}=20$ dB, both approaches achieve gains over the
outage upper bound for $P_{r}/P_{s}\geq 0$ dB. For moderate to high $P_{s}$
and $P_{r}$, the multi-session AF approximates the broadcasting upper bound.
The naive CF outperforms all other approaches and approximates the
broadcasting upper bound over an even wider range of $P_{r}$ values.
### 5.4 Transmit Cooperation Techniques
Relaying strategies of a relay close to the source transmitter are considered
in BraginskiyAsSh12 . The source-relay channel is assumed to be a fixed gain
AWGN due to their colocation, while the source-destination and the relay-
destination channels are subject to a block flat Rayleigh fading. A perfect
CSI is assumed only at the respective receivers. With the expected throughput
as a performance measure, BraginskiyAsSh12 incorporates a two-layer broadcast
approach into a cooperative strategy based on the DF scheme, referred to as
SDF. The SDF strategy’s achievable rate expressions are derived under the
broadcast approach for multiple settings, including single-user MISO and the
general relay setting using a successive decoding technique, both numerically
and analytically.
The system consists of a source terminal $s$ communicating with a destination
receiver, denoted by $d$. The multi-terminal network may consist of a helping
terminal $r$. The helping terminal is occasionally present, and when
available, it is near the source. However, the source is not aware of the
relay’s existence or availability. This model is motivated by the nature of
wireless sensor networks. In such networks, numerous sensors intended to
gather some information from the environment are deployed over a limited area.
The sensors usually transmit information to a control point, which may have
high processing capabilities. The dense deployment, along with autonomous
functionality required from each sensor, leads to the concepts of collocation
and obliviousness of cooperation among sensors.
The information is transmitted over a shared wireless medium where
transmission received by the destination is subject to block flat Rayleigh
fading. The fading coefficients between the source and the destination denoted
by $h_{s}$, and between the relay and the destination denoted by $h_{r}$, are
modeled by two independent zero-mean unit variance complex Gaussian RVs and
are assumed to remain constant over a transmission block of $N$ symbols, with
$N$ large enough to allow Shannon theoretic arguments to hold. Since the
source and the relay are physically collocated, the channel gain between the
two is assumed to be $\sqrt{Q}e^{j\theta}$, where $Q$ is a fixed power gain
(known to all), and $\theta$ is a random phase, uniformly distributed over
$\left[-\pi,\pi\right)$, which is assumed fixed during a transmission block of
$N$ symbols and independent from one block to the next.
During the transmission period of one fading block, the relay (if one exists)
can assist the source in relaying the message to the destination. Unaware of
the relay’s presence, the source assumes that in the worst case, it is the
only active transmitter, optimizing its transmission for the SISO channel.
When the relay exists, the received signals at the relay and the destination
at time $n$, $n=1,\dots,N$, are modeled by
$\displaystyle y_{r}\left(n\right)$
$\displaystyle=\sqrt{Q}x_{s}\left(n\right)+n_{r}\left(n\right)\ ,$ (413)
$\displaystyle y_{d}\left(n\right)$
$\displaystyle=h_{s}x_{s}\left(n\right)+h_{r}x_{r}\left(n\right)+n_{d}\left(n\right)\
,$ (414)
where $y_{r}(n)$ and $y_{d}(n)$ are the received signals at the relay and
destination, respectively. The signals $x_{s}(n)$ and $x_{r}(n)$ designate the
source and relay transmitted signals, respectively. The AWGN samples are
denoted by $n_{r}(n),n_{d}(n)$ and they are distributed as
${\mathcal{CN}}(0,1)$. Without a helping relay, the received signal at the
destination is given by
$y_{d}\left(n\right)=h_{s}x_{s}\left(n\right)+n_{d}\left(n\right).$ (415)
For brevity, the fading gains are denoted by $\nu_{s}=\left|h_{s}\right|^{2}$
and $\nu_{r}=\left|h_{r}\right|^{2}$, each of which is exponentially
distributed with unit mean.
#### 5.4.1 Single-layer Sequential Decode-and-Forward (SDF)
In the SDF strategy Katz05 ; Katz09 , the source transmits a single layer
coded signal at the rate $R$. The relay (if present) remains silent while
trying to decode the single-layer message. Once it can decode the message
(after accumulating enough mutual information), it starts transmitting the
message, acting as a second transmit antenna. If it is unable to decode the
message before the block ends, it remains silent throughout the block, and no
further cooperation occurs. The term sequential decode-and-forward emphasizes
that the relay first decodes the entire message and only then starts sending
its codeword. The mutual information at the relay is
$\log\left(1+P_{s}Q\right)$, which means that a relay will decode a rate $R$
message for $R\leq\log\left(1+P_{s}Q\right)$. Define $\varepsilon$ as the
fractional time within the transmission block which the relay uses to decode
the message, i.e.,
$\varepsilon\buildrel\Delta\over{=}\min\left(1,\frac{R}{\log\left(1+P_{s}Q\right)}\right),\bar{\varepsilon}=1-\varepsilon$.
The expected throughput for a Rayleigh fading channel is expressed by
$R_{\rm ave}^{\rm
SDF}=R\cdot\left\\{\begin{array}[]{ll}{e^{-\frac{e^{R}-1}{P_{s}}}+\displaystyle\int_{0}^{\frac{e^{R}-1}{P_{s}}}\exp\left(-\frac{e^{\frac{R-\varepsilon\log\left(1+\nu
P_{s}\right)}{\bar{\varepsilon}}}-1-\nu
P_{s}}{P_{r}}\right)\exp({-\nu})\,\textnormal{d}\nu}&{R\leq\log\left(1+P_{s}Q\right)}\\\
e^{-\frac{e^{R}-1}{P_{s}}}&{R>\log\left(1+P_{s}Q\right)}\end{array}\right.\
.$ (416)
The expected throughput $R_{\rm ave}^{\rm SDF}$ for
$R>\log\left(1+P_{s}Q\right)$ is also equal to the achievable rate without a
relay, which serves as the oblivious cooperation lower bound.
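The expected throughput (416) can be verified by simulation. The sketch below evaluates the integral in (416) numerically and compares it against a Monte Carlo estimate in which decoding succeeds when $\varepsilon\log(1+\nu_{s}P_{s})+\bar{\varepsilon}\log(1+\nu_{s}P_{s}+\nu_{r}P_{r})\geq R$ -- the accumulated-mutual-information condition underlying (416), with received powers adding (no coherent combining) during the cooperative fraction of the block:

```python
import math, random

def sdf_throughput_mc(R, P_s, P_r, Q, n=200000, seed=7):
    """Monte Carlo estimate of the SDF expected throughput (416). The relay
    decodes iff R <= log(1+P_s*Q); it then transmits over the last (1-eps)
    fraction of the block, acting as a second antenna."""
    random.seed(seed)
    cap = math.log(1.0 + P_s * Q)
    eps = min(1.0, R / cap)
    ok = 0
    for _ in range(n):
        vs = random.expovariate(1.0)
        vr = random.expovariate(1.0)
        if R > cap:  # relay never decodes: direct link only
            mi = math.log(1.0 + vs * P_s)
        else:
            mi = eps * math.log(1.0 + vs * P_s) \
                 + (1.0 - eps) * math.log(1.0 + vs * P_s + vr * P_r)
        ok += mi >= R
    return R * ok / n

def sdf_throughput_formula(R, P_s, P_r, Q, n=20000):
    """Direct numerical evaluation of (416) for R < log(1+P_s*Q)."""
    cap = math.log(1.0 + P_s * Q)
    assert R < cap
    eps = R / cap
    hi = (math.exp(R) - 1.0) / P_s
    d = hi / n
    total = math.exp(-hi)  # success via the direct link alone
    for i in range(n):
        v = (i + 0.5) * d
        need = (math.exp((R - eps * math.log(1.0 + v * P_s)) / (1.0 - eps))
                - 1.0 - v * P_s) / P_r
        total += math.exp(-max(0.0, need)) * math.exp(-v) * d
    return R * total

R, P_s, P_r, Q = 2.0, 10.0, 10.0, 5.0  # illustrative operating point
```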
#### 5.4.2 Continuous Broadcasting
Consider the problem of oblivious relaying where the transmitter performs
continuous layering. It is assumed that when the relay exists, it first
decodes the entire message from the source and then starts its transmission.
Under a collocation assumption, the relay decoding time may be negligible
compared to the transmission block duration. This setting of negligible relay
decoding delay is called informed SDF. The informed SDF protocol assumes that
the helping relay, when available, is informed of the transmitted packets in
advance. Thus, when a relay is available, it helps throughout the transmission
block.
Denote the power density at the transmitter by $\rho_{s}\left(s\right)$ and
its corresponding residual interference function by $I_{s}\left(s\right)$,
where $I_{s}\left(s_{0}\right)=P_{s}$ and $I_{s}\left(s_{1}\right)=0$. The
layering power density at the relay is denoted by $\rho_{r}\left(s\right)$.
The relay residual interference function $I_{r}\left(s\right)$ that maximizes
the expected throughput in the presence of a helping relay is the subject of
optimization. The relay power constraint is $I_{r}\left(s_{0}\right)=P_{r}$.
As the optimization problem does not lend itself to a closed-form solution for
a general power distribution $I_{r}(s)$, a suboptimal $I_{r}\left(s\right)$ is
proposed. Consider a relay power distribution of the form
$I_{r}\left(s\right)=\frac{P_{r}}{P_{s}}I_{s}\left(s\right)\ .$ (417)
With the power distribution in (417), the optimization reduces to a
single-variable functional that can be analyzed analytically via the
calculus of variations. Any general selection of $I_{s}(s)$ and $I_{r}(s)$
requires optimizing two functionals, and does not seem to have a closed-form
analytical solution. This general problem remains a subject for further
research. Under the power allocation in (417) the equivalent fading gain of
the combined source and relay signals takes the form of $s_{\rm
eq}\triangleq\nu_{s}+\frac{P_{r}}{P_{s}}\nu_{r}$. The CDF of $s_{\rm eq}$ is
thus
$F_{s_{\rm eq}}\left(s\right)=\left\{\begin{array}[]{ll}1-e^{-s}-se^{-s}&a=1\\ 1+\frac{e^{-s}}{a-1}+\frac{ae^{-\frac{s}{a}}}{1-a}&\textrm{otherwise}\end{array}\right.\ ,$ (418)
where $a\triangleq\frac{P_{r}}{P_{s}}$. It is clear that the expected
throughput may be directly computed, as $I_{s}(s)$ is the source optimal power
allocation ShitzSteiner03 , and the relay uses the mentioned
$I_{r}\left(s\right)=\frac{P_{r}}{P_{s}}I_{s}\left(s\right)$. We call this
setting relay broadcasting. In order to evaluate the oblivious cooperation
gain, the achievable expected throughput can be compared to the $2\times 1$
MISO setting, where a single source with two antennas transmits using a
continuous layering coded signal. This serves as a tight upper bound and is
termed MISO broadcasting.
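The CDF in (418) can be checked against a Monte Carlo simulation of $s_{\rm eq}=\nu_{s}+a\,\nu_{r}$ with i.i.d. unit-mean exponential fading gains; the value of $a$ and the test point below are arbitrary choices.

```python
import math
import random

def F_seq(s, a):
    """CDF of s_eq = nu_s + a * nu_r per (418), unit-mean exponential gains."""
    if abs(a - 1.0) < 1e-9:
        return 1.0 - math.exp(-s) - s * math.exp(-s)
    return 1.0 + math.exp(-s) / (a - 1.0) + a * math.exp(-s / a) / (1.0 - a)

random.seed(7)
a, s0, n = 0.5, 1.2, 400_000
hits = sum(random.expovariate(1.0) + a * random.expovariate(1.0) <= s0
           for _ in range(n))
empirical = hits / n  # should closely match F_seq(s0, a)
```

For $a=1/2$ the formula collapses to the hypoexponential CDF $1-2e^{-s}+e^{-2s}$, which the simulation reproduces to within Monte Carlo error.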
#### 5.4.3 Two Layer SDF - Successive Decoding
Previous subsections presented achievable rates for the single-layer and for
the continuous broadcasting approaches. This subsection focuses on a practical
layering approach, involving only two coded layers. Two coded layers are
incorporated within the SDF schemes, and achievable rates are studied. Lower
and upper bounds are derived first, and then achievable rates are formulated.
More details are available in BraginskiyAsSh12 . The general problem can be
formulated by considering a transmitter using a two-layer coding approach with
per-layer powers $\alpha P_{s}$ and $\bar{\alpha}P_{s}$, where
$\bar{\alpha}\triangleq 1-\alpha$. Accordingly, the rate per layer is defined
by
$\displaystyle R_{1}$ $\displaystyle=\log\left(1+\frac{\eta_{1}\alpha
P_{s}}{1+\eta_{1}\bar{\alpha}P_{s}}\right)\ ,$ (419) $\displaystyle R_{2}$
$\displaystyle=\log\left(1+\eta_{2}\bar{\alpha}P_{s}\right)\ ,$ (420)
where $\eta_{1}<\eta_{2}$, and $\eta_{i}$ can be interpreted as the equivalent
fading gain threshold for reliable decoding of the $i^{\rm th}$ layer. In
oblivious relaying, the source
power allocation per layer, defined by $\alpha$, is set such that the expected
throughput is maximized without a relay. When a helping relay is available,
the source keeps using the power allocation $\alpha$, while the relay
allocates $\beta P_{r}$ and $\bar{\beta}P_{r}$ for the first and second layer,
respectively. Under SDF relaying, the relay has to decode the message before
transmitting it. The relay fractional decoding time, $\varepsilon^{i}_{r}$ of
the $i^{\rm th}$ layer, is
$\displaystyle\varepsilon_{r}^{1}$
$\displaystyle\triangleq\min\left(1,\frac{R_{1}}{\log\left(1+\frac{Q\alpha
P_{s}}{1+Q\bar{\alpha}P_{s}}\right)}\right)\ ,$ (421)
$\displaystyle\varepsilon_{r}^{2}$ $\displaystyle\triangleq\
\min\left(1,\max\left(\varepsilon_{r}^{1},\frac{R_{2}}{\log\left(1+Q\bar{\alpha}P_{s}\right)}\right)\right)\
,$ (422)
where $\varepsilon_{r}^{i}$ specifies the fractional time for the relay to
gain sufficient mutual information to decode the $i^{\rm th}$ layer. Note that
due to successive decoding, the second layer decoding cannot be shorter than
its preceding layer. Using the fractional decoding times, it is required to
derive the mutual information at the destination for each of the layers. When
the relay requires more time to decode the second layer, it may already begin
transmitting, allocating all its power $P_{r}$ to the first layer. Only once the second
layer decoding is complete does the relay begin transmitting using $\beta
P_{r}$ and $\bar{\beta}P_{r}$ allocated power per layer. The mutual
information for decoding the first layer is given by
$\displaystyle I^{\rm SDF,1}$
$\displaystyle=\varepsilon_{r}^{1}\log\left(1+\frac{\nu_{s}\alpha
P_{s}}{1+\nu_{s}\bar{\alpha}P_{s}}\right)+\left(\varepsilon_{r}^{2}-\varepsilon_{r}^{1}\right)\log\left(1+\frac{\nu_{s}\alpha
P_{s}+\nu_{r}P_{r}}{1+\nu_{s}\bar{\alpha}P_{s}}\right)$ (423)
$\displaystyle\qquad+\left(1-\varepsilon_{r}^{2}\right)\log\left(1+\frac{\nu_{s}\alpha
P_{s}+\nu_{r}\beta
P_{r}}{1+\nu_{s}\bar{\alpha}P_{s}+\nu_{r}\bar{\beta}P_{r}}\right)\ ,$ (424)
where $\nu_{s}$ and $\nu_{r}$ are the fading gain realizations of the source-
destination and the relay-destination links, respectively. The coefficients
$\varepsilon_{r}^{1},\varepsilon_{r}^{2}$ are the relative time for the relay
to gain sufficient mutual information to decode the first layer and second
layer, respectively. The mutual information associated with the second layer
is
$I^{\rm
SDF,2}=\varepsilon_{r}^{2}\log\left(1+\nu_{s}\bar{\alpha}P_{s}\right)+\left(1-\varepsilon_{r}^{2}\right)\log\left(1+\nu_{s}\bar{\alpha}P_{s}+\nu_{r}\bar{\beta}P_{r}\right)\
.$ (425)
The expected throughput achievable at the destination, with a helping relay,
can be computed by using (419)-(425), to obtain
$R_{\rm ave}^{\rm BSDF}=R_{1}\cdot{\mathbb{P}}\left[\left(I^{\rm SDF,1}>R_{1}\right)\cap\left(I^{\rm SDF,2}<R_{2}\right)\right]+\left(R_{1}+R_{2}\right)\cdot{\mathbb{P}}\left[\left(I^{\rm SDF,1}>R_{1}\right)\cap\left(I^{\rm SDF,2}>R_{2}\right)\right]\ ,$ (426)
which can be maximized over $\alpha,\beta,\eta_{1},\eta_{2}$. We assume that
$\varepsilon_{r}^{1}=\varepsilon_{r}^{2}$, implying a simplex relay, i.e., the
relay transmits only after completing the decoding of both layers.
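The fractional decoding times in (421)-(422) are easy to compute directly; the helper below assumes natural logarithms, and all parameter values are hypothetical.

```python
import math

def relay_decoding_fractions(R1, R2, alpha, Ps, Q):
    """Fractional decoding times (421)-(422) under successive decoding."""
    ab = 1.0 - alpha
    c1 = math.log(1.0 + Q * alpha * Ps / (1.0 + Q * ab * Ps))  # relay rate, layer 1
    c2 = math.log(1.0 + Q * ab * Ps)                            # relay rate, layer 2
    eps1 = min(1.0, R1 / c1)
    eps2 = min(1.0, max(eps1, R2 / c2))  # layer 2 cannot finish before layer 1
    return eps1, eps2

e1, e2 = relay_decoding_fractions(R1=0.2, R2=2.5, alpha=0.3, Ps=10.0, Q=5.0)
```

Here the second layer dominates, so $\varepsilon_{r}^{2}>\varepsilon_{r}^{1}$; with a small enough $R_{2}$ the `max` would instead clamp $\varepsilon_{r}^{2}$ to $\varepsilon_{r}^{1}$, reflecting successive decoding.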
A lower bound for the achievable rate of oblivious relaying is considered
here. In an oblivious setting, the maximal expected throughput without a
helping relay is called the direct transmission rate. This rate serves as a
lower bound on the achievable rates for the relay channel. The oblivious
relaying lower bound, i.e., the single-user direct-transmission expected
throughput, is
$R_{\rm ave}^{\rm
BSU}=R_{1}{\mathbb{P}}\left(\eta_{1}<\nu_{s}<\eta_{2}\right)+\left(R_{1}+R_{2}\right){\mathbb{P}}\left[\left(\nu_{s}>\eta_{1}\right)\cap\left(\nu_{s}>\eta_{2}\right)\right]\
,$ (427)
where $R_{1},~{}R_{2}$ are the two layers' rate allocations, and
$\eta_{1},\eta_{2}$ are the fading gain thresholds for decoding the first and
second layer, respectively. The expected rate $R_{\rm ave}^{\rm BSU}$ can
be optimized over $\alpha,\eta_{1},\eta_{2}$ to maximize (427), and provide a
tight lower bound. In an oblivious setting, it remains to optimize the relay
layering power allocation, i.e., $\beta$, to maximize $R_{\rm ave}^{\rm BSDF}$
in (426).
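To illustrate the lower-bound optimization, the sketch below grid-searches (427) over $(\alpha,\eta_{1},\eta_{2})$ for Rayleigh fading ($\nu_{s}$ unit-mean exponential), using the two-layer rates (419)-(420) with natural logarithms. The grid ranges and the value of $P_{s}$ are arbitrary choices.

```python
import math

def two_layer_direct_rate(alpha, eta1, eta2, Ps):
    """Expected throughput (427) with rates (419)-(420), Rayleigh fading."""
    ab = 1.0 - alpha
    R1 = math.log(1.0 + eta1 * alpha * Ps / (1.0 + eta1 * ab * Ps))
    R2 = math.log(1.0 + eta2 * ab * Ps)
    # P(eta1 < nu_s < eta2) and P(nu_s > eta2) for a unit-mean exponential
    return R1 * (math.exp(-eta1) - math.exp(-eta2)) + (R1 + R2) * math.exp(-eta2)

def best_two_layer(Ps, grid=30):
    """Coarse grid search over alpha in (0,1) and 0 < eta1 < eta2 <= 3."""
    best_rate, best_arg = 0.0, None
    for i in range(1, grid):
        alpha = i / grid
        for j in range(1, grid):
            eta1 = 3.0 * j / grid
            for k in range(j + 1, grid + 1):
                eta2 = 3.0 * k / grid
                r = two_layer_direct_rate(alpha, eta1, eta2, Ps)
                if r > best_rate:
                    best_rate, best_arg = r, (alpha, eta1, eta2)
    return best_rate, best_arg

best_rate, (alpha_opt, eta1_opt, eta2_opt) = best_two_layer(Ps=10.0)
```

A finer grid, or a local refinement around the grid optimum, tightens the bound further; the same search structure applies to optimizing $\beta$ in (426).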
The MISO achievable rates serve as upper bounds, reflecting full cooperation
among transmitters. As the relay and source might have different power
allocations, it is required to study the problem of MISO layering with
individual power constraints per antenna. Consider first a sub-optimal
approach where the same fractional power allocation per layer is used per
antenna. In our setting this means $\alpha=\beta$ in (423)-(425), i.e., the
first layer power allocation of the source and the relay is $\alpha P_{s}$ and
$\alpha P_{r}$, respectively. The expected rate then, similarly to (426),
becomes
${R_{\rm ave}^{\rm
BMISO}=}R_{1}{\mathbb{P}}\left[\left(\log\left(\frac{1+Y}{1+\bar{\alpha}Y}\right)>R_{1}\right)\cap\left(\log\left(1+\bar{\alpha}Y\right)<R_{2}\right)\right]\\\
+\left(R_{1}+R_{2}\right){\mathbb{P}}\left[\left(\log\left(\frac{1+Y}{1+\bar{\alpha}Y}\right)>R_{1}\right)\cap\left(\log\left(1+\bar{\alpha}Y\right)>R_{2}\right)\right]\
,$ (428)
where $Y\triangleq P_{s}\nu_{s}+P_{r}\nu_{r}$. For a Rayleigh fading channel
the CDF of $Y$ is given by
$F_{Y}\left(u\right)=\left\{\begin{array}[]{ll}1-\frac{1}{P_{r}-P_{s}}\left(P_{r}e^{-\frac{u}{P_{r}}}-P_{s}e^{-\frac{u}{P_{s}}}\right)&P_{s}\neq P_{r}\\ 1-\left(1+\frac{u}{P_{s}}\right)e^{-\frac{u}{P_{s}}}&P_{s}=P_{r}\end{array}\right.\ .$ (429)
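Since $Y=P_{s}\,s_{\rm eq}$ with $a=P_{r}/P_{s}$, the distribution of $Y$ must satisfy $F_{Y}(u)=F_{s_{\rm eq}}(u/P_{s})$. The snippet below confirms this numerically, restating (418) for self-containment and writing both functions in proper CDF form (vanishing at zero, tending to one); the parameter values are arbitrary.

```python
import math

def F_seq(s, a):
    """CDF of s_eq = nu_s + a * nu_r, per (418)."""
    if abs(a - 1.0) < 1e-9:
        return 1.0 - math.exp(-s) - s * math.exp(-s)
    return 1.0 + math.exp(-s) / (a - 1.0) + a * math.exp(-s / a) / (1.0 - a)

def F_Y(u, Ps, Pr):
    """CDF of Y = Ps*nu_s + Pr*nu_r for unit-mean exponential gains."""
    if abs(Ps - Pr) < 1e-9:
        return 1.0 - (1.0 + u / Ps) * math.exp(-u / Ps)
    return 1.0 - (Pr * math.exp(-u / Pr) - Ps * math.exp(-u / Ps)) / (Pr - Ps)

# Y = Ps * s_eq with a = Pr/Ps, so the two CDFs must agree after rescaling.
Ps, Pr = 10.0, 4.0
checks = [abs(F_Y(u, Ps, Pr) - F_seq(u / Ps, Pr / Ps)) for u in (1.0, 5.0, 20.0)]
```

The agreement is exact up to floating-point error, since the two closed forms are algebraically identical under the rescaling.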
Now, consider the more general setting for the MISO layering problem, where
source and relay layering power distribution is not necessarily equal, i.e.,
$\alpha\neq\beta$. The following result, derived via explicit evaluation of
the decoding probabilities, quantifies the average throughput achievable by
letting the relay use an independent power allocation.
[BraginskiyAsSh12 ] For a $2\times 1$ MISO, a channel model described by (413)
and independent predetermined power allocation coefficients $\alpha,\beta$,
the average throughput is given by
$R_{\rm ave}^{\rm BVMISO}=\left\{\begin{array}[]{ll}R_{1}\frac{ke^{-\eta_{1}}}{k-1}+R_{2}\left[\frac{e^{-\nu_{s_{2}}-k\left(\eta_{1}-\nu_{s_{2}}\right)}}{k-1}+\frac{ne^{-\eta_{2}}-e^{-\nu_{s_{2}}-n\left(\eta_{2}-\nu_{s_{2}}\right)}}{n-1}\right]&1-e^{R_{1}}\bar{\beta}\leq 0\\ R_{1}\frac{ke^{-\eta_{1}}-e^{-k\eta_{1}}}{k-1}+R_{2}\left[\frac{e^{-\nu_{s_{1}}-k\left(\eta_{1}-\nu_{s_{1}}\right)}-e^{-k\eta_{1}}}{k-1}+\frac{ne^{-\eta_{2}}-e^{-\nu_{s_{1}}-n\left(\eta_{2}-\nu_{s_{1}}\right)}}{n-1}\right]&1-e^{R_{1}}\bar{\beta}>0\end{array}\right.\ ,$ (430)
where
$n\buildrel\Delta\over{=}\frac{\bar{\alpha}P_{s}}{\bar{\beta}P_{r}},~{}k\buildrel\Delta\over{=}\frac{\alpha
P_{s}}{\left(\beta+\eta_{1}P_{s}\left(\beta-\alpha\right)\right)P_{r}}$, and
where
$\displaystyle\nu_{s_{1}}$
$\displaystyle\triangleq\left\{\begin{array}[]{ll}0&\frac{\bar{\alpha}\eta_{2}}{\bar{\beta}}>\frac{\alpha\eta_{1}}{\beta+\eta_{1}P_{s}\left(\beta-\alpha\right)}\\ \displaystyle\frac{-\alpha\eta_{1}\bar{\beta}+\bar{\alpha}\eta_{2}\left(\beta+\eta_{1}P_{s}\left(\beta-\alpha\right)\right)}{\bar{\alpha}\left(\beta+\eta_{1}P_{s}\left(\beta-\alpha\right)\right)-\alpha\bar{\beta}}&\mbox{\rm otherwise}\end{array}\right.\ ,$ (435) $\displaystyle\nu_{s_{2}}$
$\displaystyle\triangleq\frac{-\alpha\eta_{1}\bar{\beta}+\bar{\alpha}\eta_{2}\left(\beta+\eta_{1}P_{s}\left(\beta-\alpha\right)\right)}{\bar{\alpha}\left(\beta+\eta_{1}P_{s}\left(\beta-\alpha\right)\right)-\alpha\bar{\beta}}\ .$ (436)
It is evident from the above proposition that the relay’s power allocation has
a crucial effect on the achievable rate, and a powerful relay does not
guarantee a high achievable rate unless equipped with appropriate power
allocation. For an equal layering power allocation, i.e., $\alpha=\beta$,
(430) reduces to (428) as $\nu_{s_{1}}=0$. A step in determining an optimal
power allocation for the MISO is taken in the following proposition, which
establishes the optimal power allocation for an asymptotic source power and a
constant ratio of source to relay powers.
[BraginskiyAsSh12 ] For a $2\times 1$ MISO setting satisfying
$P_{s}\to\infty$, $P_{r}\to\infty$, and $\frac{P_{s}}{P_{r}}=c$ under the
channel model described by (413), the equal power allocation is optimal.
### 5.5 Diamond Channel
Next, we review the two-hop transmission from a source to a destination via
two parallel full-duplex relays, which is investigated in Zamani14 .
Similarly to the general theme of this paper, the transmitter and the relays
are oblivious to their forward links to their next hops, while being aware of
their backward channel from the previous one.
Figure 30: The diamond channel.
As shown in Fig. 30, the transmitter sends a signal $x$, and it is received by
both relay nodes. The signal received by relay $i\in\\{1,2\\}$ is given by
$\displaystyle y_{r_{i}}\;=\;h_{r_{i}}x\;+\;n_{i}\ ,$ (437)
where $h_{r_{i}}$ follows a Rayleigh fading process and $n_{i}$ accounts for
the AWGN. The signal received by the receiver from the concurrent
transmissions by the relays is
$\displaystyle y\;=\;h_{1}x_{r_{1}}\;+\;h_{2}x_{r_{2}}\;+\;n\ ,$ (438)
where $h_{i}$ follows a Rayleigh fading process and $n$ is the AWGN. The
relays can be in the half- or full-duplex modes. Accordingly, the channel
gains are defined as $s_{i}=|h_{i}|^{2}$ and $s_{r_{i}}=|h_{r_{i}}|^{2}$.
A relevant metric to assess the broadcast approach’s performance is the
average rate that can be sustained reliably between the source and the
destination, maximized over all possible allocations of power across different
information layers at the transmitter and the relays. Each overall channel
realization is the combination of the realizations of four distinct and
independently varying channels. A relevant notion of degradedness in the
channel can be specified based on the source-destination rate that the channel
can support. Based on this, channel realizations are rank-ordered based on the
aggregate rate they support. The transmitter designates one layer per
realization, and the receiver at each channel realization decodes all the
layers designated to that realization and those designated to the weaker ones.
This strategy is next reviewed under different relaying strategies.
#### 5.5.1 Decode-and-Forward
In this scheme, a transmitter generates $K$ information layers denoted by
$\\{x_{1},\dots,x_{K}\\}$, which are adapted to $K$ discrete channel gains.
The first baseline layer is designed to be decoded by the relays when the gain
of the channels linking the source to the relays is at least $a_{1}$, i.e.,
relay $i$ decodes $x_{1}$ when $s_{r_{i}}\geq a_{1}$. Similarly, in general,
layer $k$ is designed to be decoded by the relays when $s_{r_{i}}\geq a_{k}$.
The fraction of the power allocated to layer $k\in\\{1,\dots,K\\}$ is
denoted by $\gamma_{k}$. The incremental rate allocated to layer $k$ is
$\displaystyle R_{k}=\log\left(1+\frac{\gamma_{k}a_{k}}{1+\sum_{j=k+1}^{K}\gamma_{j}a_{k}}\right)\ .$ (439)
Each relay starts decoding the information layers from the baseline layer 1 up
to the layer that its actual channel realization affords. This results in the
two relay nodes decoding a different number of information layers. Denote the
number of layers decoded by relay 1 and relay 2 by $M_{1}$ and $M_{2}$,
respectively. Relay $i$ then superimposes all the $M_{i}$ information layers
and allocates $\alpha_{ij}$ fraction of its power to layer
$j\in\\{1,\dots,M_{i}\\}$, with the rest being allocated power 0, i.e.,
$\alpha_{ij}=0$ for $j\in\\{M_{i}+1,\dots,K\\}$. Hence, the message
transmitted by relay $i$ is
$\displaystyle x_{r_{i}}\;=\;\sum_{j=1}^{K}\sqrt{\alpha_{ij}}x_{j}\ .$ (440)
Since each relay is oblivious to the channel and power distribution of the
other relay, due to the symmetry involved, it is assumed that the power
distributions are the same at both relays. It is shown in (Zamani14, ,
Theorem 2) that if power distribution across the layers is identical in both
relays, then the relay signals must be uncorrelated in order to achieve the
maximum expected rate. Hence, each relay’s code construction is based on
assuming a similar power distribution for the other relay. The two relays
adopt a transmission scheme that mimics the space-time block codes,
implemented in a distributed way. Specifically, consider the time-slotted
transmission in which the signal transmitted by relay $i\in\\{1,2\\}$ at time
$t$ is denoted by $X_{i}(t)$. At time $t$, relay 1 transmits
$\sum_{j=1}^{K}\sqrt{\alpha_{1j}}x_{j}(t)$ and relay 2 transmits
$\sum_{j=1}^{K}\sqrt{\alpha_{2j}}x_{j}(t+1)$. Subsequently, at time $t+1$,
relay 1 transmits $-\sum_{j=1}^{K}\sqrt{\alpha_{1j}}x^{*}_{j}(t+1)$ and
relay 2 transmits $\sum_{j=1}^{K}\sqrt{\alpha_{2j}}x^{*}_{j}(t)$. Hence,
the received signal at the destination is
$\displaystyle\left[\begin{array}[]{c}y(t)\\\
-y^{*}(t+1)\end{array}\right]=\sum_{i=1}^{K}\left[\begin{array}[]{cc}h_{1}\sqrt{\alpha_{1i}}&h_{2}\sqrt{\alpha_{2i}}\\\
-h_{2}^{*}\sqrt{\alpha_{2i}}&h_{1}^{*}\sqrt{\alpha_{1i}}\end{array}\right]\left[\begin{array}[]{c}x_{i}(t)\\\
x_{i}(t+1)\end{array}\right]+\left[\begin{array}[]{c}n(t)\\\
-n^{*}(t+1)\end{array}\right]\ .$ (449)
By capitalizing on this structure, the destination decouples the received
signal into two parallel streams by multiplying the received vector on the
left-hand side of (449) from the left by
$\displaystyle\left[\begin{array}[]{cc}h_{1}^{*}\sqrt{\alpha_{1i}}&-h_{2}\sqrt{\alpha_{2i}}\\\
h_{2}^{*}\sqrt{\alpha_{2i}}&h_{1}\sqrt{\alpha_{1i}}\end{array}\right]\ .$
(452)
Based on this approach, the interference power imposed when decoding layer $i$
by the destination is
$\displaystyle
I_{i}=(s_{1}\alpha_{1i}+s_{2}\alpha_{2i})\sum_{j=i+1}^{K}(s_{1}\alpha_{1j}+s_{2}\alpha_{2j})\
.$ (453)
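The decoupling behind (453) is easy to verify numerically: left-multiplying the per-layer channel matrix in (449) by the matrix in (452) yields $(s_{1}\alpha_{1i}+s_{2}\alpha_{2i})$ times the identity. The channel values and power fractions below are arbitrary.

```python
import math

# arbitrary complex channel gains and per-layer power fractions
h1, h2 = complex(0.8, 0.3), complex(-0.4, 1.1)
a1i, a2i = 0.6, 0.3
sa, sb = math.sqrt(a1i), math.sqrt(a2i)

# per-layer channel matrix from (449) and the decoupling matrix from (452)
H = [[h1 * sa, h2 * sb],
     [-h2.conjugate() * sb, h1.conjugate() * sa]]
M = [[h1.conjugate() * sa, -h2 * sb],
     [h2.conjugate() * sb, h1 * sa]]

# 2x2 matrix product P = M @ H
P = [[sum(M[r][t] * H[t][c] for t in range(2)) for c in range(2)]
     for r in range(2)]

gain = abs(h1) ** 2 * a1i + abs(h2) ** 2 * a2i  # s1*alpha_1i + s2*alpha_2i
```

The off-diagonal entries of `P` vanish and both diagonal entries equal `gain`, which is exactly the per-layer scaling of both signal and interference appearing in (453).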
Therefore, the probability of successfully decoding layer $k$ at the
destination is
$\displaystyle{\sf P}_{k}\;=\;{\mathbb{P}}\left(\frac{s_{1}\alpha_{1k}+s_{2}\alpha_{2k}}{1+\sum_{j=k+1}^{K}(s_{1}\alpha_{1j}+s_{2}\alpha_{2j})}\geq\frac{\gamma_{k}a_{k}}{1+\sum_{j=k+1}^{K}\gamma_{j}a_{k}}\right)\ .$ (454)
Hence, the expected achievable rate is
$\displaystyle R_{\rm ave}=\sum_{k=1}^{K}{\sf P}_{k}R_{k}=\sum_{k=1}^{K}{\sf P}_{k}\log\left(1+\frac{\gamma_{k}a_{k}}{1+\sum_{j=k+1}^{K}\gamma_{j}a_{k}}\right)\ .$ (455)
An optimal allocation of power across different layers can be found by
maximizing the average sum-rate. A toy example showing the details and some
steps involved is available in Zamani14 .
#### 5.5.2 Amplify-and-Forward
In this relaying mode, only the destination node decodes the information
layers and the relay nodes only amplify what they receive. To enable coherent
decoding at the destination, the relays deploy a distributed space-time code
along with a threshold-based ON/OFF power scheme, which is known
to improve the performance of AF relaying. In this scheme, relay $i$ will
remain silent if the channel gain $s_{r_{i}}$ is smaller than a pre-specified
threshold $a_{\rm th}$. Otherwise, each relay completes its transmission in
two consecutive time slots. In time $t$, relay $1$ transmits
$c_{1}y_{r_{1}}(t)$ and relay 2 transmits $c_{2}y_{r_{2}}(t+1)$. In time slot
$t+1$, relay 1 transmits $-c_{1}y^{*}_{r_{1}}(t+1)$ and relay 2 transmits
$c_{2}y^{*}_{r_{2}}(t)$, where coefficients $c_{1}$ and $c_{2}$ are selected
properly to satisfy the power constraints. At the destination, the received
vector
$\displaystyle\left[\begin{array}[]{c}y(t)\\\ -y^{*}(t+1)\end{array}\right]$
(458)
is multiplied by
$\displaystyle\left[\begin{array}[]{cc}h_{r_{1}}h_{1}c_{1}&h_{r_{2}}h_{2}c_{2}\\\
-h^{*}_{r_{2}}h^{*}_{2}c_{2}&h^{*}_{r_{1}}h^{*}_{1}c_{1}\\\
\end{array}\right]^{\sf H}\ ,$ (461)
transforming the channel into two parallel channels, yielding the following
mutual information between the transmitter and the receiver:
$\displaystyle
I(x;y)=\log\left(1+\frac{s_{r_{1}}s_{1}c_{1}^{2}+s_{r_{2}}s_{2}c_{2}^{2}}{1+s_{1}c_{1}^{2}+s_{2}c_{2}^{2}}\cdot
P\right)\ .$ (462)
Hence, the average rate of this channel can be found by averaging $I(x;y)$
over the distributions of all the channel gains involved, and the maximum
average rate follows by optimizing over the scheme's parameters, such as the
threshold $a_{\rm th}$.
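A direct Monte Carlo average of (462) over the four Rayleigh fading gains, including the ON/OFF silencing of a relay whose source-relay gain is below $a_{\rm th}$, sketches this last step. The amplification coefficients and threshold are hypothetical, and natural logarithms are used.

```python
import math
import random

def af_average_rate(P, c1, c2, a_th, n=100_000, seed=3):
    """Monte Carlo average of the AF mutual information (462).

    A relay stays silent (its terms are dropped) when its source-relay
    gain falls below the threshold a_th. All parameters are hypothetical.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        sr1, sr2 = rng.expovariate(1.0), rng.expovariate(1.0)  # source-relay gains
        s1, s2 = rng.expovariate(1.0), rng.expovariate(1.0)    # relay-destination gains
        g1 = c1 ** 2 if sr1 >= a_th else 0.0  # relay 1 ON/OFF
        g2 = c2 ** 2 if sr2 >= a_th else 0.0  # relay 2 ON/OFF
        num = sr1 * s1 * g1 + sr2 * s2 * g2
        den = 1.0 + s1 * g1 + s2 * g2
        total += math.log(1.0 + num / den * P)
    return total / n

rate = af_average_rate(P=10.0, c1=1.0, c2=1.0, a_th=0.5)
```

Sweeping `a_th` (and the coefficients `c1`, `c2` subject to the power constraints) over a grid then approximates the maximum average rate.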
### 5.6 Multi-relay Networks
Motivated by addressing the distributed nature and delay sensitivity of modern
communication systems, the study in SimoneSh09 investigates a network
consisting of a source-destination pair, the communication between which is
assisted by $M_{T}$ relays. The source is connected to the relays via a
broadcast channel, and the relays have orthogonal channels to the destination.
There is no direct link between the transmitter and the receiver. The signal
received by relay $i\in\\{1,\dots,M_{T}\\}$ during time slot
$t\in\\{1,\dots,n\\}$ is given by
$\displaystyle y_{r_{i}}(t)=x(t)+n_{i}(t)\ ,$ (463)
where $x(t)$ and $n_{i}(t)$ represent the transmitted signal by the source and
the AWGN. The channel between each relay and the destination has a finite
capacity denoted by $C$. Furthermore, the relays will have a non-ergodic
failure profile, and it is assumed that at any given time, a random number of
relays denoted by $M\in\\{M_{0},\dots,M_{T}\\}$ are functioning, while
communication by the rest is erased for the entire duration of a transmission.
$M_{0}$ denotes the minimum number of relays that are functioning at any given
time, and define $\boldsymbol{p}=[p_{M_{0}},\dots,p_{M_{T}}]$ as the
probability mass function of $M$.
The success/erasure model of the relay-destination links provides a context
for defining degradedness among different realizations. Specifically, a
realization that has $m$ functioning relay-destination links will be
considered degraded with respect to the one with $n>m$ functioning links.
Based on this, the transmitter splits its messages into $M_{T}-M_{0}+1$
independently generated information layers $\\{W_{M_{0}},\dots,W_{M_{T}}\\}$
with rates $\\{R_{M_{0}},\dots,R_{M_{T}}\\}$. When there are $M=m$ active
relay-destination links, the destination decodes information layers
$\\{W_{M_{0}},\dots,W_{m}\\}$, rendering a total rate of
$R^{T}_{m}=\sum_{i=M_{0}}^{m}R_{i}$. Subsequently, the average rate in the channel
is
$\displaystyle R_{\rm ave}=\sum_{m=M_{0}}^{M_{T}}p_{m}R^{T}_{m}\ .$ (464)
Two distinct settings will be discussed next: the oblivious relays setting, in
which the relays are oblivious to the codebooks used by the source, and the
non-oblivious relays setting, in which the relays are informed about the
codebooks used by the source.
#### 5.6.1 Oblivious Relays
In this setting, a relay performs a stochastic mapping from the message set to
a codeword. This stochastic mapping depends on a random key $F\in\cal F$ that
is revealed to the destination, but it is unknown to the relays. By
appropriately choosing a probabilistic model for $F$, it is possible to model
a scenario in which the signal transmitted by the source is i.i.d. over the
codeword elements. At each relay, being oblivious to the codebook $F$, the
relay maps its received sequence to an index in the set
$\\{1,\dots,2^{nC}\\}$. Finally, the destination uses the relays’ indexes, the
knowledge of the codebook $F$, and the actual number of active relay-
destination links to decode the layers associated with that number of active
links. By restricting the input to be Gaussian, it is shown
in SimoneSh09 that the average capacity of the channel is upper bounded by
$\displaystyle C_{\rm ave}\leq\max\sum_{m=M_{0}}^{M_{T}}p_{m}\sum_{i=M_{0}}^{m}R_{i}\ ,$ (465)
where
$\displaystyle
R_{m}=\frac{1}{2}\log\left(1+\frac{m\beta_{m}P}{1+m\sigma^{2}_{m}+mP\sum_{k=m+1}^{M_{T}}\beta_{k}}\right)\
.$ (466)
The maximization in (465) is taken with respect to parameters $\beta_{m}\geq
0$, which satisfy
$\displaystyle\sum_{m=M_{0}}^{M_{T}}\beta_{m}=1\ ,$ (467)
and $\sigma^{2}_{m}$ is defined as
$\displaystyle\sigma^{2}_{m}=\left(\frac{1}{m}+P\right)\left(2^{2mC}-1\right)^{-1}\
.$ (468)
Motivated by the structure of this upper bound, SimoneSh09 proposes a
broadcast approach and single-description compression at the relays. In this
approach, each relay sends over the relay-destination link a single index
(description), which is a function of its received signal. The
compression/decompression scheme is inspired by the technique used in ishwar
for robust distributed source coding in a CEO problem. The technique works by
performing random binning at the agents, as is standard in distributed
compression. Moreover, the test channel (i.e., equivalent compression noise)
and binning rate are selected so that the receiver can recover with high
probability the compressed signals on the $M$ active links irrespective of the
realized value of $M$, as long as $M\geq M_{0}$ (as guaranteed by
assumption). In other words, the design of the compression scheme targets the
worst-case scenario of $M=M_{0}$. Notice that, should more than $M_{0}$ links
be active ($M>M_{0}$), the corresponding compressed signals would also be
recoverable at the receiver, since, by the design of the binning rate, any
subset of $M_{0}$ descriptions can be decompressed ishwar . After
decompression is performed, the receiver uses all the $M$ signals obtained
from the relays to decode the codewords up to the $M^{\rm th}$ layer (that is,
the layers with rates $R_{m}$ with $M_{0}\leq m\leq M$). Under this
transmission scheme, the achievable layer rates satisfy
$\displaystyle
R_{m}\leq\frac{1}{2}\log\left(1+\frac{m\beta_{m}P}{1+\sigma^{2}+mP\sum_{k=m+1}^{M_{T}}\beta_{k}}\right)\
,$ (469)
where $\sigma^{2}$ satisfies
$\displaystyle\frac{1}{2}\log\left[\left(1+\frac{M_{0}P}{1+\sigma^{2}}\right)^{\frac{1}{M_{0}}}\left(1+\frac{1}{\sigma^{2}}\right)\right]\leq
C\ .$ (470)
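Since the left-hand side of (470) is strictly decreasing in $\sigma^{2}$, the tightest compression noise can be found by a simple one-dimensional search. The sketch below solves (470) with equality, assuming natural logarithms and hypothetical parameter values.

```python
import math

def bound_lhs(s2, M0, P):
    """Left-hand side of (470) as a function of sigma^2 (natural logs)."""
    return 0.5 * math.log((1.0 + M0 * P / (1.0 + s2)) ** (1.0 / M0)
                          * (1.0 + 1.0 / s2))

def compression_noise(M0, P, C, lo=1e-9, hi=1e9, iters=200):
    """Smallest sigma^2 satisfying (470), found by geometric bisection."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # bisect in log-scale over a wide bracket
        if bound_lhs(mid, M0, P) > C:
            lo = mid  # constraint violated: need more compression noise
        else:
            hi = mid
    return hi

s2 = compression_noise(M0=2, P=10.0, C=2.0)
```

Plugging the resulting $\sigma^{2}$ into (469) gives the per-layer rate constraints; as expected, a larger link capacity $C$ yields a smaller compression noise and hence higher layer rates.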
This broadcast approach can be further developed to couple the broadcast
coding approach with multi-description, rather than single-description,
compression at the relays. The idea follows the work in chenberger , which
focused on the CEO problem. Accordingly, each relay shares the $nC$ bits it
can convey to the destination between multiple descriptions of the received
signal to the decoder. The basic idea is that different descriptions are
designed to be recoverable only if certain connectivity conditions are met
(that is, if the number of functioning links $M$ is sufficiently large). This
adds flexibility and robustness to the compression strategy. To simplify the
presentation, the analysis is focused on the two-agent case ($M_{T}=2$). In
this approach, the two relays send two descriptions: a basic one to be used at
the destination in case the number of active links turns out to be $M=1$ and a
refined one that will be used only if $M=2$. In this setting, the achievable
layer rates satisfy
$\displaystyle R_{1}$ $\displaystyle\leq\frac{1}{2}\log\left(1+\frac{\beta
P}{1+(1-\beta)P+\sigma_{1}^{2}+\sigma_{2}^{2}}\right)\ ,$ (471) $\displaystyle
R_{2}$
$\displaystyle\leq\frac{1}{2}\log\left(1+\frac{2(1-\beta)P}{1+\sigma_{2}^{2}}\right)\
,$ (472)
with any power allocation factor $\beta$ and $\sigma^{2}_{1}$ and
$\sigma^{2}_{2}$ that satisfy
$\displaystyle\frac{1}{2}\log\left(1+\frac{P+1}{\sigma_{1}^{2}+\sigma_{2}^{2}}\right)+\frac{1}{4}\log\left(\frac{(\sigma_{1}^{2}+\sigma_{2}^{2})^{2}(2P+\sigma^{2}_{2}+1)(\sigma_{2}^{2}+1)}{(2P+\sigma_{1}^{2}+\sigma_{2}^{2}+1)(\sigma_{1}^{2}+\sigma_{2}^{2}+1)\sigma_{2}^{4}}\right)\leq
C\ .$ (473)
#### 5.6.2 Non-Oblivious Agents
Next, we briefly review the model in which the agents are informed about the
codebook used at the source. As shown in SimoneSh09 , the average capacity for
this setting is upper bounded by
$\displaystyle C_{\rm
ave}\leq\sum_{m=M_{0}}^{M_{T}}p_{m}\min\left\\{\frac{1}{2}\log(1+mP)\;,\;mC\right\\}\
.$ (474)
This result follows directly from cut-set arguments, where the first term in
the $\min$ follows by considering the cut between source and relays, and the
second depends on the cut from relays to the destination.
As for an achievable strategy, a generalization of the single-description
approach for the setting of the oblivious relay can be constructed in a
straightforward way. In this scheme, the source uses broadcast coding with
Gaussian codebooks. However, on top of the $M_{T}-M_{0}+1$ layers considered
earlier, the source superimposes a further layer carrying a common message,
denoted by $W_{0}$, with rate $R_{0}$, to be decoded by all relays and then
forwarded to the destination. The destination can recover this message at all
times, that is, as long as the number of active links $M$
satisfies $M\geq M_{0}$. For this purpose, each agent reserves a rate of
$R_{0}/M_{0}$ on its outgoing link to send an index computed as a random function
of the decoded $W_{0}$. It can be easily seen that, even though the agents are
unaware of which links are currently active, the receiver will be able to
recover $W_{0}$ with vanishing probability of error as $n\rightarrow\infty$.
The extra layer carrying $W_{0}$ is decoded first by the agents and canceled,
and the rest of coding/decoding takes place as for the broadcast approach with
a single-description scheme, with the caveat that now the remaining link
capacity to forward compression indices is $C-R_{0}/M_{0}$. Under this scheme, the
average rate that can be achieved is given by
$\displaystyle R_{M_{0}}$ $\displaystyle\leq\tilde{R}_{M_{0}}+R_{0}\ ,$ (475)
$\displaystyle R_{m}$
$\displaystyle\leq\frac{1}{2}\log\left(1+\frac{m\beta_{m}P}{1+\sigma^{2}+mP\sum_{k=m+1}^{M_{T}}\beta_{k}}\right)\qquad\mbox{for}\quad m=M_{0}+1,\dots,M_{T}\ ,$ (476)
where
$\displaystyle
R_{0}=\frac{1}{2}\log\left(1+\frac{\beta_{0}P}{1+(1-\beta_{0})P}\right)\ ,$
(477)
and $\sigma^{2}$ satisfies
$\displaystyle\frac{1}{2}\log\left[\left(1+\frac{M_{0}P(1-\beta_{0})}{1+\sigma^{2}}\right)^{\frac{1}{M_{0}}}\left(1+\frac{1}{\sigma^{2}}\right)\right]\leq
C-\frac{R_{0}}{M_{0}}\ .$ (478)
### 5.7 Occasionally Available Relays
Finally, we consider the impact of uncertainty in the network topology on
transmission. This is motivated by the fact that in practical wireless
networks, it is often difficult for each user to keep track of neighboring
terminals, potentially assisting in the transmission of its information. This
is especially pronounced in high-mobility networks. One immediate implication
of this setting is in the IEEE 802.11 WLAN protocol using occasional relay
terminals is explored. Mobile users that are far away from an access point can
suffer from low uplink rates. Occasional relaying terminals between the mobile
users and the access point receive the transmitted packets and relay them to
the access point. When relays do not exist, then the direct links are used,
albeit at a lower rate.
This setting is studied in Katz09 , which considers communication between a
source and a destination where occasionally there might be a relay node in
close proximity to the source, assisting it without the source’s knowledge (i.e.,
the source is oblivious to the existence of the relay node). The destination,
on the other hand, is aware of the existence of the relay node. When the relay
exists, the source-relay channel is considered to be of a constant quality
(due to the proximity), and the source-destination and relay-destination
channels undergo block fading. All channels are known only to their associated
receivers, and they are otherwise unknown to other nodes.
Hence, in this setting, the transmitter’s uncertainty is due to a combination
of channel uncertainty and relay existence uncertainty. Furthermore, the
combination of these factors can be used for adopting a natural notion of
channel degradedness. Specifically, we can use the throughput of the channel
as a metric based on which different channel realizations and relay existence
scenarios can be rank-ordered. By leveraging this notion of degradedness, the
transmitter generates the codebooks, one corresponding to each possible
realization, followed by superposition coding for transmission. At the
destination node, the receiver uses the information about the actual
realization of the channel and the relay’s existence and decodes all the
codebooks assigned to this realization and all the weaker ones, treating the
rest as noise.
## 6 Communications Networks
### 6.1 Overview
Previous sections discussed point-to-point communication, the MAC, the
interference channel, and the relay channel. This section considers a broader
span of communication networks with multiple communicating nodes and different
cooperation levels. Only a limited number of examples are covered in detail,
and an outlook of additional relevant problems is provided in Section 7.
We review the application of the broadcast approach to four different aspects
of modern communication networks. First, we focus on cellular communication.
Specifically, the case of uplink communications is studied in AsLupuKatzSh12 , where the broadcast approach is analyzed in conjunction with multiuser detection for randomly spread direct sequence (DS) code-division multiple access (CDMA). This is discussed in more detail in Section 6.2. In networks, it is often required to minimize the distortion of the source information rather than to maximize the expected rate. For fading channels, combining the broadcast approach with successive refinement source coding allows minimizing the expected distortion. This aspect is discussed in
Section 6.3. Successive refinement combined with the broadcast approach also extends beyond this basic setting, and was recently used for a multi-user downlink with layered cooperation among users KimPark20 . The broadcast
approach for the information bottleneck channel is studied in
AsSh20_Bottleneck ; SteinerShamai2020 ; steinerSh2020bottleneck , and it is discussed in Section 6.4. Finally, the design of the broadcast approach for
transmitters with harvested energy is discussed in Section 6.5. There are
indeed many additional network-related works that are worth noting but cannot be reviewed in detail in this section, such as those in Liang14 ; Liang12 ;
Tulino09 ; Simone09 ; park2013multilayer ; ParkSimeoneSahinShamai2014 ;
Park_2019 ; SimoneSomek11 ; ZouLiang15 ; ZouLiang18 ; Karaksik13 ; ROY_6613623
; Huleihel15 .
### 6.2 Multi-User MAC Broadcasting with Linear Detection
A cellular system where macrocells are overlaid with femtocells is studied in
SimoneSh10 . Each femtocell is served by a home base station that is connected
to the macrocell base station via an unreliable network access link, such as a
digital subscriber line (DSL) followed by the Internet. A scenario with a
single macrocell and a single femtocell is considered first, and it is then
extended to include multiple macrocells and femtocells, both with standard
single-cell processing and multicell processing (or network MIMO). Two main
issues are addressed for the uplink channel: (i) interference management
between femto and macrocells; and (ii) robustness to uncertainties on the
quality of the femtocell access link. The problem is formulated in
information-theoretic terms, and inner and outer bounds are derived to
achievable per-cell sum-rates for outdoor and home users. Overall, the
analysis lends evidence to the performance advantages of sophisticated
interference management techniques, based on joint decoding and relaying, and
of robust coding strategies via the broadcast approach.
The work in AsLupuKatzSh12 considers the problem of multiuser detection for
randomly spread DS-CDMA over flat fading channels. The analysis focuses on the
case of many users and large spreading sequences such that their ratio, which
is the system load, is kept fixed. The spectral efficiency of practical linear detectors, such as the matched filter and the decorrelator, employing successive interference cancellation (SIC) at the receiver, is derived. This is used to
extend the notion of the strongest users detectors for SIC receivers. The
strongest users detectors system design relies on an outage approach where
each user transmits in a single layer (fixed rate), and only users
experiencing good channel conditions may be reliably decoded, while the other
users are not decoded. In AsLupuKatzSh12 , iterative SIC decoding is studied,
and it is shown that for equal power users, the optimal rate allocation, for
maximizing the expected spectral efficiency, is equal rates for all users.
This outage approach analysis is extended for a multilayer coding broadcast
approach per user. The expected sum-rate, under iterative decoding with linear
multiuser detectors, is optimized, and the optimal layering power distribution
is obtained. For small system loads, the achievable spectral efficiency with
the continuous broadcast approach and a linear matched filter detector
exhibits significant gains over the single-layer coding approach.
Multiuser wireless communication systems using CDMA have been studied and
implemented in recent years. The results on the asymptotic distribution of
singular values of certain random matrices allowed the analysis of randomly
spread direct sequence CDMA TSEHANLY99 ; VerduShamai99 ; ShamaiVerdu01 . In
those multiple access channels, random and independent signature waveforms are
assigned to the network subscribers.
In VerduShamai99 , the sum-rate capacity per chip was analyzed for a non-
fading channel, the number of users $K$ is taken to the limit
($K\rightarrow\infty$), and the spreading sequence length $N$ is also large
($N\rightarrow\infty$). The system load, which is also the number of users per chip, is kept fixed, i.e.,
$\displaystyle\beta=\frac{K}{N}\ .$ (479)
The main conclusions from the results in VerduShamai99 are that for low
$\beta$, the linear multiuser detectors (e.g., decorrelator and linear MMSE
detectors) have near-optimal spectral efficiency. For any $\beta$ and
$E_{b}/N_{0}$, the matched-filter multiuser detector is far from optimal. The
spectral efficiency of the linear detectors, except for the matched filter,
grows unbounded with $E_{b}/N_{0}$, for a given $\beta$. The work in
ShamaiVerdu01 extended these results to the case where every user experiences
a flat fading channel. The sum-rate capacity is an ergodic capacity, which is
achievable for fast fading channels, where every transmitted block experiences
sufficiently many fading realizations to approximate ergodicity. Otherwise, a
framework of outage capacity may better characterize the expected performance.
The channel model with slow fading, where a fading remains fixed throughout a
transmission block, is considered in shamaiZaidel02 , where an outage
probability is equivalent to the fraction of undecoded users, providing a
framework for strongest users detection. In this work, it is assumed that all
users transmit at equal rates and equal power, regardless of their individual
fading realizations. In such a case, the receiver can no longer guarantee
reliable decoding for all active users. In this case, the receiver ranks all
active users by their received powers and decodes the transmissions of the
largest number of users, for which decoding is successful. The system design
can be done such that a fraction of undecodable users (FUU) is defined, and
this dictates the fixed rate to be used by all active users. The total
achievable sum-rate is referred to as the outage capacity. The FUU can be
optimized such that the average sum rate is maximized.
In AsLupuKatzSh12 , the sum-rate capacity of linear detectors with SIC
receivers is studied for different types of detectors. Different approaches to
rate allocation and multi-stage decoders with SIC are considered.
Interestingly, it turns out that with iterative SIC decoding, equal rate
allocation achieves the highest average spectral efficiency. In iterative
decoding, the receiver decodes as many users as possible and performs SIC
every iteration. The effective system load is reduced after every SIC
iteration, increasing the multiuser detector efficiency. Thus, more users with
worse channel conditions can be decoded. Moreover, by letting every user
employ multi-layer coding, the expected spectral efficiency may increase
further.
The multi-access channel combined with the broadcast approach ShitzSteiner03
in its continuous layering form was first analyzed in SH00 . Some MAC outage
approaches and MIMO multi-layering schemes were studied in SteinerShamai2007 .
In Minero:ISIT07 , a simple two state multi-access channel with two users is
studied, where it is shown that superposition coding is optimal, and the sum-
rate capacity per layer is derived. A random-access (non-fading) channel is
also a special case of the MAC. Achievable rates over this channel are studied
with superposition coding in Minero:ISIT07 ; GOLDSMITH04 . An alternative
practical approach is to use variable-rate coding over the MAC
CAIRE04_VARIABLERATE .
The main results of AsLupuKatzSh12 may be summarized as follows:
1. formulation of ergodic bounds for systems with random spreading DS-CDMA over fading channels, employing SIC receivers;
2. derivation of the expected spectral efficiency achievable with equal rate allocation per user, and iterative SIC decoding. It is also shown that equal rate allocation maximizes the expected spectral efficiency;
3. derivation of the expected spectral efficiency for the case of multi-layer coding taken to the limit of many layers (continuous broadcast approach);
4. analysis of a multi-layer coding where parallel decoders are used, without employing SIC;
5. analysis of a more complicated setting, including a multi-layer coded transmission with iterative SIC decoding. It is shown that, like in the single-layer case, the expected spectral efficiency is maximized for equal rate allocation per user. Furthermore, the optimal layering power allocation function, which maximizes the expected spectral efficiency, is obtained for the matched-filter and decorrelator detectors. The case of broadcasting with MMSE and optimal detectors under iterative SIC decoding remains an open problem.
#### 6.2.1 Channel Model
We describe the channel model and the basic assumptions. Consider the
following system model
$\displaystyle\mathbf{y}=\mathbf{V}\mathbf{H}\mathbf{x}+\mathbf{n}\ ,$ (480)
where $\mathbf{x}=[x_{1},...,x_{K}]$ is a vector of length $K$. An individual
term $x_{k}$ is a sample of a layered coded signal of the $k^{\rm th}$ user,
and $\\{x_{k}\\}$ are i.i.d. and distributed according to
${\mathcal{CN}}(0,P)$, where $P$ sets the power constraint per user.
$\mathbf{V}$ is an $[N\times K]$ signature matrix with elements i.i.d.
distributed according to $v_{i,j}\sim{\mathcal{CN}}(0,\frac{1}{N})$, and
$\mathbf{n}$ is, without loss of generality, a normalized AWGN vector
$\mathbf{n}\sim{\mathcal{CN}}(0,I_{N})$. The channel matrix $\mathbf{H}$ is a
diagonal matrix $\mathbf{H}=\textrm{diag}(h_{1},h_{2},...,h_{K})$ of fading
gains. The empirical distribution of $\\{s_{k}\\}\triangleq\\{|h_{k}|^{2}\\}$
converges almost surely to a distribution $Q(s)$ such that
${\mathbb{E}}_{Q}[s]=1$. The channel matrix $\mathbf{H}$ remains fixed
throughout a transmission block, which corresponds to a slowly fading channel
model. Note that, since the additive noise is normalized, we have $\textrm{\rm SNR}=P$.
The energy per bit to noise spectral density ratio is used for evaluating the
spectral efficiency and for comparing different strategies, and it is defined
as
$\displaystyle\frac{E_{b}}{N_{0}}=\frac{\beta}{R_{\rm sum}}\;\textrm{\rm SNR}\
,$ (481)
where $R_{\rm sum}$ is the total spectral efficiency, i.e., the sum-rate in
bits per second per Hertz. The system load $\beta$ is defined in (479).
#### 6.2.2 Strongest Users Detection - Overview and Bounds
Motivated by practical considerations, the decoding of strongest users on
block fading channels is studied in shamaiZaidel02 . This study assumes that
all users transmit at equal rates and equal powers, regardless of their
individual fading realizations. In such a case, the receiver can no longer
guarantee the reliable decoding of all active users. Thus, the receiver ranks
all active users by their received powers and decodes the transmissions of the
largest number of users, for which decoding is successful. The system design
can be optimized to a fixed FUU, which dictates the rate to be used by all
active users. The maximal achievable sum-rate is referred to as the outage
capacity. It is obtained by optimizing the FUU such that the average sum rate
is maximized.
The ergodic spectral efficiency for the fading CDMA channel model in (480) is
given by ShamaiVerdu01
$\displaystyle C_{\rm erg}(\beta,\textrm{\rm
SNR})=\beta{\mathbb{E}}_{s}\left[\log(1+s\eta(\beta)\textrm{\rm SNR})\right]\
,$ (482)
where $\eta(\beta)$ is the multiuser detector efficiency, which depends on the detector type (e.g., matched filter, decorrelator, MMSE) and is a function of the system load $\beta$ and the SNR. The expectation is taken with respect to
the fading gain distribution $Q(s)$. For the completeness of this
presentation, the multiuser detector efficiency is specified for each relevant
detector. The detector efficiency of a matched filter is ShamaiVerdu01
$\displaystyle\eta_{\rm mf}(\beta)=\frac{1}{1+\beta\textrm{\rm SNR}}\ .$ (483)
The detector efficiency of a decorrelator receiver is
$\displaystyle\eta_{\rm dec}(\beta)=\max\\{0,1-\beta\\}\ ,$ (484)
and for an MMSE detector, $\eta_{\rm mmse}(\beta)$ satisfies the following
equation
$\displaystyle\eta_{\rm mmse}(\beta)+\beta E_{s}\left[\frac{s\eta_{\rm
mmse}(\beta)\textrm{\rm SNR}}{1+s\eta_{\rm mmse}(\beta)\textrm{\rm
SNR}}\right]=1\ .$ (485)
The expectation here is taken with respect to the fading gain distribution
$Q(s)$. For a Rayleigh fading channel, the expectation is explicitly expressed
as
$\displaystyle{\mathbb{E}}_{s}\left[\frac{s\eta_{\rm mmse}\textrm{\rm
SNR}}{1+s\eta_{\rm mmse}\textrm{\rm
SNR}}\right]=1-\frac{E_{1}\left(\frac{1}{\eta_{\rm mmse}\textrm{\rm
SNR}}\right)}{\eta_{\rm mmse}\textrm{\rm SNR}}\exp\left(\frac{1}{\eta_{\rm
mmse}\textrm{\rm SNR}}\right)\ ,$ (486)
where $E_{1}(x)$ is the exponential integral function.
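The three detector efficiencies can be evaluated numerically. The following Python sketch (our own illustration, not code from AsLupuKatzSh12 ) implements Eqs. (483)-(485), assuming Rayleigh fading ($s\sim\textrm{Exp}(1)$) and evaluating the expectation in (485) by quadrature, with a damped fixed-point iteration for the MMSE case:

```python
import numpy as np

def eta_mf(beta, snr):
    """Matched-filter detector efficiency, Eq. (483)."""
    return 1.0 / (1.0 + beta * snr)

def eta_dec(beta):
    """Decorrelator detector efficiency, Eq. (484)."""
    return max(0.0, 1.0 - beta)

def eta_mmse(beta, snr, n_iter=500):
    """MMSE detector efficiency from the fixed point of Eq. (485),
    for Rayleigh fading, i.e., s ~ Exp(1).  The expectation is taken
    by Riemann-sum quadrature; damping ensures convergence."""
    s = np.linspace(1e-6, 50.0, 100001)   # fading-power grid
    ds = s[1] - s[0]
    w = np.exp(-s) * ds                   # Exp(1) density weights
    eta = 1.0
    for _ in range(n_iter):
        x = s * eta * snr
        expectation = np.sum(w * x / (1.0 + x))
        eta = 0.5 * eta + 0.5 * (1.0 - beta * expectation)
    return eta
```

As a sanity check, $\eta_{\rm mf}\leq\eta_{\rm mmse}\leq 1$ should hold, since MMSE detection dominates the matched filter.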
Upper Bound. It is well-known that the optimum multiuser detector capacity is
also equal to the ergodic successive decoding sum-rate capacity with an MMSE
detector, according to the mutual information chain rule Cover . Thus the
ergodic capacity, obtained with an optimum detector, can be expressed by the
ergodic SIC MMSE detection capacity ShamaiVerdu01
$\displaystyle C_{\rm opt}(\beta,\textrm{\rm
SNR})={\mathbb{E}}_{s}\left[\int\limits_{0}^{\beta}\log\left(1+s\cdot\eta_{\rm
mmse}(z)\cdot\textrm{\rm SNR}\right)~{}\,\textnormal{d}z\right]\ .$ (487)
Figure 31: Schematic description of a parallel multiuser decoder, without SIC.
Strongest users detection. This refers to the practical case where all users transmit at a fixed rate via single-layer coding. The appropriate channel model
here is the block fading channel, where a fixed fading realization throughout
the block for each user is observed. Thus all users, experiencing fading gains
smaller than a threshold $s_{\rm th}$, will not be reliably decoded. This is
demonstrated in Fig. 31, where a fraction of users, corresponding to $Q(s_{\rm
th})$, is in an outage, and all other users are reliably decoded. The average
achievable sum-rate for outage decoding is given by shamaiZaidel02
$\displaystyle C_{\rm out}(\beta,\textrm{\rm SNR})=\beta(1-Q\left(s_{\rm
th}\right)){\log(1+s_{\rm th}\eta(\beta)\textrm{\rm SNR})}\ ,$ (488)
where $Q(s_{\rm th})$ is the probability of outage corresponding to the
fraction of users that cannot be reliably decoded. The multiuser detector
efficiency $\eta(\beta)$ is specified in equations (483)-(485) for the
underlying linear detectors.
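For a concrete illustration (our own numerical sketch, not taken from shamaiZaidel02 ), the outage sum-rate of Eq. (488) can be maximized over the threshold $s_{\rm th}$ by a simple grid search. Here we assume Rayleigh fading, $Q(s)=1-e^{-s}$, a matched-filter front end, and rates in nats; the parameter values are illustrative.

```python
import numpy as np

def outage_sum_rate(s_th, beta, snr, eta):
    """Average outage sum-rate, Eq. (488), for Rayleigh fading,
    where 1 - Q(s_th) = exp(-s_th).  Rates are in nats/s/Hz."""
    return beta * np.exp(-s_th) * np.log1p(s_th * eta * snr)

beta, snr = 0.2, 10.0
eta = 1.0 / (1.0 + beta * snr)            # matched filter, Eq. (483)
s_grid = np.linspace(1e-4, 10.0, 100001)  # candidate thresholds
rates = outage_sum_rate(s_grid, beta, snr, eta)
s_opt = s_grid[np.argmax(rates)]          # optimal decoding threshold
c_out = rates.max()                       # outage capacity
```

Raising $s_{\rm th}$ increases the per-user rate but puts more users in outage; the optimal threshold balances the two effects.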
In parallel decoding schemes, the decoding latency may be small. However,
there is an inherent spectral efficiency loss due to decoding every user in
the presence of interference from all other users. An SIC decoder attempts
decoding the users one by one, where after every successful decoding, a
reconstructed signal associated with the decoded user is subtracted from the
input signal. The procedure continues until the last user is decoded. Consider
the case that each user transmits over a fading channel. Such a channel model
was studied in ShamaiVerdu01 , where the detectors considered were an optimal
detector, MMSE, decorrelator, and MF. The derivations in AsLupuKatzSh12
extend the results for the SIC receiver strategy.
For a given system load $\beta$, the ergodic sum-rate is specified in (482).
This sum-rate is an upper bound since its achievability requires fast feedback
from the receiver to all users. With SIC decoding, the ergodic sum-rate is
given by
$\displaystyle C_{\rm SIC,erg}(\beta,\textrm{\rm
SNR})={\mathbb{E}}_{s}\left[\int\limits_{0}^{\beta}\,\textnormal{d}z~{}{\log\left(1+s\cdot\eta\left(z\right)\textrm{\rm
SNR}\right)}\right]\ .$ (489)
The ergodic sum-rate for an MF-SIC detector is derived along the same lines as for the non-fading case, yielding
$C_{\rm SIC,MF}(\beta,\textrm{\rm
SNR})={\mathbb{E}}_{s}\left[\left(s+\beta+\frac{1}{\textrm{\rm
SNR}}\right)\log(1+\textrm{\rm SNR}(s+\beta))\right.\\\
-\left.(\frac{1}{\textrm{\rm SNR}}+s)\log(1+s\textrm{\rm
SNR})-\left(\beta+\frac{1}{\textrm{\rm SNR}}\right)\log(1+\beta\textrm{\rm
SNR})\right]\ .$ (490)
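As a consistency check (our own, assuming Rayleigh fading and rates in nats), the closed form in (490) can be verified against a direct numerical evaluation of the double integral (489) with the matched-filter efficiency of (483):

```python
import numpy as np

beta, snr = 0.5, 10.0
s = np.linspace(1e-6, 50.0, 2001)   # fading grid, s ~ Exp(1)
ds = s[1] - s[0]
w = np.exp(-s) * ds                 # quadrature weights for E_s[.]

# Closed form, Eq. (490).
f = ((s + beta + 1/snr) * np.log1p(snr * (s + beta))
     - (1/snr + s) * np.log1p(s * snr)
     - (beta + 1/snr) * np.log1p(beta * snr))
c_closed = np.sum(w * f)

# Direct double integral, Eq. (489), with eta_mf(z) = 1/(1 + z*SNR).
z = np.linspace(0.0, beta, 2001)
dz = z[1] - z[0]
eta = 1.0 / (1.0 + z * snr)
inner = np.log1p(np.outer(s, eta) * snr).sum(axis=1) * dz  # int over z
c_direct = np.sum(w * inner)
# The two evaluations agree up to quadrature error.
```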
The sum-rate capacity, for an SIC decorrelator detector, is also available as
a function of the fading gain distribution
$C_{\rm SIC,Dec}(\beta,\textrm{\rm
SNR})={\mathbb{E}}_{s}\left[\left(1+\frac{1}{s\textrm{\rm
SNR}}\right)\log(1+s\textrm{\rm SNR})\right.\\\
\left.-\beta~{}-\left(1-\beta+\frac{1}{s\textrm{\rm
SNR}}\right)\log(1+s\textrm{\rm SNR}(1-\beta))~{}\right]\ .$ (491)
For an MMSE detector, the sum-capacity cannot be given in closed form, and
it is computed using $\eta(z)$ given in (486), plugged into the ergodic
capacity expression in (489).
#### 6.2.3 Broadcast Approach with Strongest Users Detection - (NO SIC)
If we let every user transmit a continuously layered coded signal, then the
number of decodable layers per user directly depends on the experienced fading
level. Consider here parallel decoding, where the receiver decodes all users
in parallel up to the highest reliably decoded layer. Thus the achievable
rate, averaged over all possible fading realizations, is given by
$\displaystyle R_{\rm bs}(\beta,\textrm{\rm SNR})$
$\displaystyle=\lim\limits_{K,N,J\rightarrow\infty,~{}\frac{K}{N}\rightarrow\beta,~{}J(s)/K\rightarrow
q(s)}\int\limits_{0}^{\infty}\,\textnormal{d}s\frac{J(s)}{N}\int\limits_{0}^{s}\,\textnormal{d}u\frac{u\eta\
\rho(u)}{1+u\eta I(u)}\ ,$ (492)
where $J(s)$ is the number of decoded users at fading level $s$, and where the
broadcasting rate, derived in (6), is modified here by the detector efficiency
$\eta$. The expected sum rate is simplified into
$\displaystyle R_{\rm bs}(\beta,\textrm{\rm
SNR})=\int\limits_{0}^{\infty}\,\textnormal{d}s~{}q(s)\int\limits_{0}^{s}\,\textnormal{d}u\frac{u\eta\rho(u)}{1+u\eta
I(u)}=\int\limits_{0}^{\infty}\,\textnormal{d}s(1-Q(s))\frac{s\eta\rho(s)}{1+s\eta
I(s)}\ .$ (493)
It can be shown that the optimal power distribution, which maximizes $R_{\rm
bs}(\beta,\textrm{\rm SNR})$, is like in (17), where the detector efficiency
$\eta$ scales the power distribution
$\displaystyle I(x)=\left\\{\begin{array}[]{cl}\textrm{\rm SNR}&~{}x<x_{0}\\\
\frac{1-Q(x)-x\cdot q(x)}{x^{2}q(x)\eta}&~{}x_{0}\leq x\leq x_{1}\\\
0&~{}\mbox{else}\end{array}\right.~{},$ (497)
where $x_{0}$ is determined by $I(x_{0})=\textrm{\rm SNR}$, and $x_{1}$ by
$I(x_{1})=0$.
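To make (497) concrete, consider Rayleigh fading, $q(x)=e^{-x}$, for which the middle branch simplifies to $(1-x)/(x^{2}\eta)$ and the upper boundary is $x_{1}=1$. The sketch below (our own illustration; the parameter values are assumptions) solves for $x_{0}$ by bisection:

```python
def power_profile(snr, eta):
    """Optimal residual-power profile I(x) of Eq. (497), specialized
    to Rayleigh fading q(x) = exp(-x), where the middle branch becomes
    (1 - x)/(x^2 * eta) and the upper boundary is x1 = 1."""
    # Bisection for x0 solving (1 - x0)/(x0^2 * eta) = SNR on (0, 1).
    lo, hi = 1e-9, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (1.0 - mid) / (mid**2 * eta) > snr:
            lo = mid        # I(mid) still above SNR: move right
        else:
            hi = mid
    x0 = 0.5 * (lo + hi)

    def I(x):
        if x < x0:
            return snr
        if x <= 1.0:
            return (1.0 - x) / (x**2 * eta)
        return 0.0

    return x0, I
```

For example, with a matched filter at $\beta=0.2$ and SNR $=10$ (so $\eta=1/3$), the profile decreases continuously from SNR at $x_{0}$ to zero at $x_{1}=1$.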
#### 6.2.4 SIC Broadcast Approach Upper Bound
In order to characterize an achievable rate via layering, the power
distribution for layering should be optimized for every subset of users, and
their corresponding residual interference must be accounted for in the stages
of the SIC, as described above for the outage case. Such an analytical
analysis for the broadcast approach seems to be intractable. Therefore, an
upper bound significantly tighter than the ergodic upper bound is provided.
The upper bound of the broadcast approach is simply the broadcast approach
combined with SIC, where optimal layering is performed for every subset of
users. It is assumed that at any decoding stage, there is no residual
interference from previous SIC stages. Although interference from undecoded
layers of early stages does exist, this assumption allows full derivation and
optimization of a continuous broadcast approach. Under this simplified
setting, the layering sum-rate with SIC is given by
$\displaystyle C_{\rm SIC,BS}(\beta,\textrm{\rm
SNR})=\int\limits_{0}^{\beta}\,\textnormal{d}z\int\limits_{0}^{\infty}\,\textnormal{d}s~{}(1-Q(s))\frac{s\eta(z)\rho(s)}{1+s\eta(z)I(s)}\
,$ (498)
where the inner integral is the average achievable rate for a given system
load $z$. The maximization of this average rate is given in (493), with an
optimal power distribution specified in (497). For a Rayleigh fading channel,
this maximal average rate can be expressed more explicitly as
$\displaystyle C_{\rm SIC,BS}(\beta,\textrm{\rm
SNR})=\beta(e^{-1}-2E_{1}(1))+\int\limits_{0}^{\beta}\,\textnormal{d}z\left(2E_{1}(S_{0}(z))-e^{-S_{0}(z)}\right)\
,$ (499)
where $S_{0}(z)=2/\left(1+\sqrt{1+4\textrm{\rm SNR}\eta(z)}\right)$. Since this broadcasting upper bound does not provide an achievable expected rate, the analysis of continuous broadcasting that follows assumes equal rates with iterative decoding.
#### 6.2.5 Broadcast Approach with Iterative SIC
Figure 32: Schematic description of an iterative successive interference
cancellation multiuser decoder for multi-layer coded transmission with $M$
layers. Every iteration includes multiple sub-iterations of iterative-SIC
decoding per code layer.
A broadcast approach is employed by all transmitting users, where all users
transmit with the same rate and layering power distribution. The decoder
applies iterative SIC decoding. The main idea in the decoding scheme is to
apply an iterative SIC per layer. The decoding process is illustrated in Fig.
32. Every iteration includes $M$ stages of iterative SIC decoding, where $M$
is the number of coded layers. The first stage attempts iterative SIC decoding
of the first layer of all users. Then the next stage performs iterative SIC
decoding for the group of users for which decoding of the first layer was
successful. This continues until the last layer decoding is done. Then the
second iteration continues similarly.
The continuous layering characterizes the highest achievable sum-rate for the
broadcast approach, i.e., the number of layers is unlimited. Every layer is
associated with a fractional rate and power allocation, as described in the
broadcast approach overview. The achievable rates and overall performance
strongly depend on the transmission scheme and the decoding strategy. The
decoding strategy which is adopted here is the multi-round iterative decoding.
The maximal achievable average rate can be expressed by the following
optimization problem
$\displaystyle R_{\rm sum,bs}=\max\limits_{I}~{}R_{\rm sum,bs}(I)\ ,$ (500)
where the achievable continuous layering rate $R_{\rm sum,bs}(I)$ is given by
$\displaystyle R_{\rm
sum,bs}(I)=\beta\int\limits_{0}^{\infty}~{}\,\textnormal{d}s(1-Q(s))\frac{s\eta(G)\rho(s)}{1+s\eta(G)I(s)}\triangleq\int\limits_{0}^{\infty}\,\textnormal{d}sJ(s,I,I^{\prime})\
,$ (501)
where $G$ corresponds to the remaining layers per user, which induce the
mutual interference
$\displaystyle G\triangleq\frac{\beta}{\textrm{\rm
SNR}}\int\limits_{0}^{\infty}Q(s)\rho(s)\,\textnormal{d}s\triangleq\int\limits_{0}^{\infty}\,\textnormal{d}sZ(s,I,I^{\prime})\
,$ (502)
where $I(s)=\int_{s}^{\infty}\,\textnormal{d}u~{}\rho(u)$.
The optimization of (501) with respect to the residual interference constraint
in (502) can be solved by fixing the interference parameter $G$ to an
arbitrary value such that $0<G\leq\beta$. For such $G$, the optimization in
(501) is a standard variational problem with a residual interference
constraint on top of the power constraint $I(0)=\textrm{\rm SNR}$. The
optimization problem is, therefore
$\displaystyle\max\limits_{I}~{}\int\limits_{0}^{\infty}\,\textnormal{d}sJ(s,I,I^{\prime})\qquad\textrm{s.t.}\qquad
G\geq\int\limits_{0}^{\infty}\,\textnormal{d}sZ(s,I,I^{\prime})\ .$ (503)
We can write the Lagrangian form
$\displaystyle
L=\int\limits_{0}^{\infty}\,\textnormal{d}sJ(s,I,I^{\prime})~{}+~{}\lambda\left(G-\int\limits_{0}^{\infty}\,\textnormal{d}sZ(s,I,I^{\prime})\right)\
.$ (504)
The Euler-Lagrange condition for extremum can be derived, and the optimal
layering power distribution can be expressed in closed form, as summarized in the next proposition.
[AsLupuKatzSh12 ] The optimal power distribution, which maximizes the expected
sum-rate of a continuous broadcast approach (503), with matched-filter
multiuser detection and iterative SIC decoding, is achieved from
$\displaystyle I(s)=\left\\{\begin{array}[]{ll}\textrm{\rm SNR}&s<s_{0}\\\
\dfrac{-\textrm{\rm SNR}+\sqrt{\textrm{\rm
SNR}^{2}~{}+~{}\dfrac{4\lambda\beta(1-Q(s))\textrm{\rm
SNR}}{\eta(G)s^{2}Q^{\prime}(s)}}}{2\lambda\beta}-\dfrac{1}{s\eta(G)}&s_{0}\leq
s\leq s_{1}\\\ 0&s>s_{1}\end{array}\right.\ ,$ (508)
where $s_{1}$ is the smallest fading gain for which $I(s_{1})=0$, and the left
boundary condition on $s_{0}$ satisfies $I(s_{0})=\textrm{\rm SNR}$. The
Lagrangian multiplier $\lambda$ is obtained by an equality for the residual
interference constraint (502), as specified by
$\displaystyle\int\limits_{s_{0}}^{s_{1}}Q(s)I^{\prime}(s)~{}\,\textnormal{d}s=-G\frac{\textrm{\rm
SNR}}{\beta}\ .$ (509)
The decoding algorithm for a decorrelator multiuser detector is similar. In
the continuous setting, the detector efficiency is updated according to the
number of users for which all layers are decoded. This is the reason the upper boundary of the power distribution is subject to optimization. The solution
is obtained by solving the corresponding variable endpoint variational
optimization problem.
It is assumed here that the optimal solution for the power distribution lies
on a single continuous interval $[s_{0},s_{1}]$. The extension to multiple
continuous intervals may be done as in Tian08 . The average achievable rate
with a decorrelator detector, in its general form, is
$R_{\rm
bs,decorr}=\beta\int\limits_{s_{a}^{+}}^{s_{b}^{-}}\,\textnormal{d}s(1-Q(s))\frac{s\rho(s)\eta\left(\beta
Q(s_{b})\right)}{1+sI(s)\eta\left(\beta
Q(s_{b})\right)}+(1-Q(s_{a}))R_{0}(s_{a})+(1-Q(s_{b}))R_{1}(s_{b})\ ,$ (510)
where $\eta(x)=1-x$, $I(s_{0})=\textrm{\rm SNR}$, $I(s_{1})=0$, and the rate
of the first layer is
$\displaystyle R_{0}(s_{a})=\beta\log\left(1+\frac{s_{a}\eta\left(\beta
Q(s_{b})\right)(\textrm{\rm SNR}-I(s_{a}^{+}))}{1+s_{a}\eta\left(\beta
Q(s_{b})\right)I(s_{a}^{+})}\right)\ ,$ (511)
where $I(s_{a}^{-})=\textrm{\rm SNR}$, and $I(s_{a}^{+})$ is the remaining
power allocation for the continuous and last layers. The last layer is
allocated
$\displaystyle R_{1}(s_{b})=\beta\log\left(1+s_{b}\eta\left(\beta
Q(s_{b})\right)\left(I(s_{b}^{-})-I(s_{b}^{+})\right)\right)\ ,$ (512)
where $I(s_{b}^{+})=0$. Thus the discontinuities in $I(s)$ can occur at $s_{a}$ and $s_{b}$. Define the functional subject to optimization, from (510), by
$\displaystyle
G(s_{b},s,I,I^{\prime})=\beta(1-Q(s))\frac{-sI^{\prime}(s)\eta\left(\beta
Q(s_{b})\right)}{1+sI(s)\eta\left(\beta Q(s_{b})\right)}\ .$ (513)
The following variable-endpoint variational optimization problem is then solved:
$\displaystyle R_{\rm
bs,decorr}=\left\\{\begin{array}[]{ll}\max\limits_{s_{a},s_{b},I}&\int\limits_{s_{a}^{+}}^{s_{b}^{-}}\,\textnormal{d}sG(s_{b},s,I,I^{\prime})+(1-Q(s_{a}))R_{0}(s_{a})+(1-Q(s_{b}))R_{1}(s_{b})\\\
{\rm s.t.}&I(s_{a}^{-})=\textrm{\rm SNR}\\\ &I(s_{b}^{+})=0\end{array}\right.\
.$ (517)
The optimal power distribution is formulated in the next proposition. The
expected sum-rate for continuous layering per user, with a decorrelator
multiuser detector and iterative SIC decoding, is given by
$R_{\rm
bs,decorr}=\beta\int\limits_{s_{a}^{+}}^{s_{b}^{-}}\,\textnormal{d}s(1-Q(s))\frac{-sI^{\prime}_{\rm
opt}(s)\eta\left(\beta Q(s_{b})\right)}{1+sI_{\rm opt}(s)\eta\left(\beta
Q(s_{b})\right)}+(1-Q(s_{a}))R_{0}(s_{a})+(1-Q(s_{b}))R_{1}(s_{b})\ ,$ (518)
where $\eta(x)=1-x$, and the optimal layering power distribution is given by
$\displaystyle I_{\rm opt}(s)=\left\\{\begin{array}[]{cl}\textrm{\rm
SNR}&~{}s\leq s_{a}^{-}\\\ \frac{1-Q(s)-s\cdot
Q^{\prime}(s)}{s^{2}Q^{\prime}(s)\eta(\beta Q(s_{b}))}&~{}s_{a}^{+}\leq s\leq
s_{b}^{-}\\\ 0&~{}else\end{array}\right.\ ,$ (522)
and the interval for continuous layering satisfies
$\int\limits_{s_{a}^{+}}^{s_{b}^{-}}\,\textnormal{d}s\frac{\partial
G(s_{b},s,I_{\rm opt},I^{\prime}_{\rm opt})}{\partial
s_{b}}+G(s_{b},s_{b},I_{\rm opt},I^{\prime}_{\rm opt})\\\
+\frac{\partial}{\partial
s_{b}}\left[(1-Q(s_{b}))R_{1}(s_{b})+(1-Q(s_{a}))R_{0}(s_{a})\right]=0\ ,$
(523)
and
$\displaystyle-G(s_{b},s=s_{a},I_{\rm opt},I^{\prime}_{\rm
opt})+\frac{\partial}{\partial s_{a}}\left((1-Q(s_{a}))R_{0}(s_{a})\right)=0\
.$ (524)
From the numerical results in AsLupuKatzSh12 , and Fig. 33, the broadcast
approach with multi-round iterative SIC decoding offers a significant spectral
efficiency gain over the single-layer coding strategies. The gain is
especially noticeable for the lower system loads. Interestingly, for
$\beta\leq 0.2$, it can be noticed that the spectral efficiency of the
broadcast approach exceeds the MF single-layer ergodic bound at high
$E_{b}/N_{0}$ (Fig. 33). For a single user setting, the ergodic bound is
always an upper bound which cannot be exceeded. However, in our multiuser
setting, an MF detector is used for the ergodic bound, and the MF detection is
information lossy. In the broadcast approach, the MF detection is performed
over and over for every layer according to the iterative decoding scheme.
Hence, the broadcast approach with iterative decoding may outperform the
optimum single-layer coding with transmitter channel side information (ergodic
bound). Generally speaking, this result should not be limited to small system
loads. Any non-zero slope of the broadcast approach is sufficient for
exceeding the MF single-layer ergodic bound. However, the crossing level will
be at very high $E_{b}/N_{0}$ values. Figure 34 demonstrates the achievable
rates for a Rayleigh fading channel with a decorrelator based multiuser
detection, with different transmission and decoding strategies.
Figure 33: Expected sum-rate for a Rayleigh fading channel, with different
transmission and decoding strategies, based on a matched filter multiuser
detector ($\beta=0.2$). Figure 34: Expected sum-rate throughput for a
Rayleigh fading channel, with different transmission and decoding strategies,
based on a decorrelator multiuser detector ($\beta=0.2$).
It can be concluded that unequal transmission rate assignment is a practical
strategy, as the base-station, aware of all its users, can take care of rate
allocation. This is in contrast to ergodic bounds, where the users must
transmit at rates matching their experienced fading realizations (which is an
impractical assumption). In SIC with the strongest users and single-iteration
detection, the subsets are ordered regardless of the instantaneous channel
realizations. Therefore, such an assignment can be done once for every new
subscriber. It was shown in this work that for single-iteration decoding,
unequal rate allocation maximizes the spectral efficiency since the decoding
order is fixed. However, for iterative SIC decoding, the decoding order is no
longer fixed, and it is shown that equal rate allocation maximizes the
expected sum-rate.
It is worth noting that systems employing decorrelator detection can
significantly gain from using SIC at system loads close to $\beta=1$. For such
system loads, single-user detection is interference-limited, and therefore the achievable rate can be vanishingly small. With SIC, in contrast, only the users decoded first transmit at low rates. Gradually, the effective system
load for decoding is reduced, and higher spectral efficiency can be achieved
for other users, resulting in higher sum-rates.
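This interference-limited behavior can be checked numerically (our own sketch, assuming Rayleigh fading, SNR $=10$, and rates in nats): at $\beta=1$ the decorrelator efficiency $\eta_{\rm dec}$ vanishes, so the ergodic rate (482) is zero, while the SIC sum-rate (491) remains strictly positive.

```python
import numpy as np

s = np.linspace(1e-6, 50.0, 100001)  # fading grid, s ~ Exp(1)
ds = s[1] - s[0]
w = np.exp(-s) * ds                  # quadrature weights for E_s[.]
snr = 10.0

def c_erg_dec(beta):
    """Ergodic decorrelator sum-rate without SIC, Eqs. (482), (484)."""
    eta = max(0.0, 1.0 - beta)
    return beta * np.sum(w * np.log1p(s * eta * snr))

def c_sic_dec(beta):
    """Decorrelator sum-rate with SIC, Eq. (491)."""
    g = ((1 + 1/(s * snr)) * np.log1p(s * snr) - beta
         - (1 - beta + 1/(s * snr)) * np.log1p(s * snr * (1 - beta)))
    return np.sum(w * g)
```

At full load the non-SIC rate collapses while the SIC receiver still delivers a substantial sum-rate, which is precisely the gain noted above.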
The single-layer analysis was extended to a multi-layer coding broadcast
approach per user. The expected sum-rate, under iterative decoding with linear
multiuser detectors, is optimized, and the optimal power distribution is
obtained (for a decorrelator and an MF detector). The achievable spectral
efficiency for a linear matched filter detector shows significant gains over
the single-layer coding approach. The interesting observation here is that the
expected spectral efficiency exceeds the single-layer ergodic sum-capacity.
The ergodic bound assumes that every user transmits at a rate matched to its
decoding stage and channel realization. For a single-user setting, the ergodic
bound is always an upper bound for the broadcast approach. However, in our
multiuser setting, an MF detector is used for the ergodic bound, and the MF
detection is information lossy. In the broadcast approach, the MF detection is
performed over and over for every layer according to the iterative decoding
scheme. Therefore, the broadcast approach can provide spectral efficiencies
exceeding those of single-layer coding with channel side information, when an
MF detector is used.
### 6.3 The Broadcast Approach for Source-Channel Coding
In networks, it is often required to minimize the distortion of the source
information rather than to maximize the expected rate. This broadcast approach
is useful in a variety of applications, and it matches the successive
refinement (SR) source coding approach Koshelev:80 ; successiveCover1991 ;
RI99 and later works NgTian07 ; Tian08 ; Ng09 ; NgTian12 . That is,
the more information rate is provided, the less average distortion is evident
in the reconstructed source. On a wireless fading channel, in order to
minimize the expected distortion at the receiver, it is essential to find the
optimal power allocation in the broadcast strategy and this is indeed our
focus in this section. This cross-layer design approach was, in fact, already
suggested in ShitzSteiner03 .
The broadcast-SR approach makes it possible to achieve via coding the basic
features of analog communications, that is, the better the channel, the better
the performance (measured, say, by the received SNR (MMSE)). Furthermore, this
is achieved without the transmitter knowing the channel state realization; see
applications as referenced in the book DuhamelKieffer09 .
The initial effort on this problem was made in Sesia:05 , where the broadcast
strategy coupled with SR source coding was compared with several other
schemes. The optimization problem was formulated by discretizing the
continuous fading states, and an algorithm was devised when the source coding
layers are assumed to have the same rate. This algorithm, however, does not
directly yield the optimal power allocation when the fading states are
discrete and pre-specified, nor does it give a closed-form solution for the
continuous case. This problem is also considered in EtemadiJafarkhani:06 ,
which provides an iterative algorithm by separating the optimization problem
into two sub-problems. The study in Ng09 provides a recursive algorithm to
compute the optimal power allocation for $M$ fading states, with worst-case
complexity of $O(2^{M})$. Furthermore, by directly taking the limit of the
optimal solution for the discrete case, a solution was given for the
continuous case optimal power allocation, under the assumption that the
optimal power allocation is concentrated in a single interval. Similar
problems were considered in CaireNarayanan:05 ; Gunduz:06 in the high SNR
regime from the perspective of distortion exponent. Successive refinement,
combined with the broadcast approach, extends beyond the basic setting and was
also used in KimPark20 .
The work in Tian08 proposes a new algorithm that can compute in linear time,
i.e., of $O(M)$ complexity, the optimal power allocation for the case with $M$
discrete fading states. Furthermore, it provides a derivation of the
continuous case optimal power allocation solution by the classical variational
method GF91 . Both the algorithm and the derivation rely on an alternative
representation of the Gaussian broadcast channel capacity, which appeared in
TS02 . The dual problem of minimizing power consumption subject to a given
expected distortion constraint is also discussed.
#### 6.3.1 SR with Finite Layer Coding
Finite-layer coding can be matched to a finite number of fading states: the
$M$ possible power gains, in increasing order $s_{1}<s_{2}<...<s_{M}$, are
distributed according to a probability mass function $p_{i}$ such that
$\sum_{i=1}^{M}p_{i}=1$. The transmitter has an average power constraint $P$,
and if power $P_{i}$ is allocated to the $i^{\rm th}$ layer in the broadcast
strategy, the $i^{\rm th}$ layer channel rate $R_{i}$ is given by
$\displaystyle
R_{i}=\frac{1}{2}\log\left(1+\frac{s_{i}P_{i}}{1+s_{i}\sum_{j=i+1}^{M}P_{j}}\right)=\frac{1}{2}\log\left(1+\frac{P_{i}}{1/s_{i}+\sum_{j=i+1}^{M}P_{j}}\right)\
.$ (525)
From the second expression in (525), the equivalence to broadcast on a set of
channels with different noise variances is clear. Let $n_{i}\triangleq
1/s_{i}$, which implies $n_{1}>n_{2}>...>n_{M}$ are the equivalent noise powers
on the channels. The layers corresponding to smaller values of $s_{i}$ (and
larger values of $n_{i}$) will be referred to as the lower layers, which is
consistent with the intuition that they are used to transmit the more
protected base layers of the SR source coding. Since the Gaussian source is
successively refinable successiveCover1991 , the receiver with power gain
$s_{i}$ can thus reconstruct the source within distortion
$\displaystyle D_{i}=\exp\left(-2b\sum_{j=1}^{i}R_{j}\right)\ ,$ (526)
where $b$ is the bandwidth expansion coefficient. Combining (525) and (526),
the problem we wish to solve is essentially the following minimization over
the power allocation $(P_{1},P_{2},...,P_{M})$:
$\displaystyle\left\\{\begin{array}[]{ll}\min&\sum_{i=1}^{M}p_{i}\left(\prod_{j=1}^{i}\left(1+\frac{P_{j}}{1/s_{j}+\sum_{k=j+1}^{M}P_{k}}\right)\right)^{-b}\\\
{\rm s.t.}&P_{i}\geq 0,\quad i=1,\dots,M\\\
&\displaystyle\sum_{i=1}^{M}P_{i}\leq P\end{array}\right.\ .$ (530)
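The objective in (530) is straightforward to evaluate for a candidate power allocation. The following Python sketch (with illustrative values of the gains $s_i$, probabilities $p_i$, and bandwidth expansion $b$ that are assumptions, not taken from the text) computes the layer rates of (525) and the resulting expected distortion of (526):

```python
import math

def expected_distortion(P_alloc, s, p, b):
    """Expected distortion of (530): layer rates follow (525),
    per-state distortions follow (526). P_alloc, s, p are length-M
    lists (layer powers, gains in increasing order, probabilities)."""
    D_bar = 0.0
    cum_rate = 0.0
    for i in range(len(P_alloc)):
        interference = sum(P_alloc[i + 1:])          # higher layers, not yet decoded
        R_i = 0.5 * math.log(1.0 + P_alloc[i] / (1.0 / s[i] + interference))
        cum_rate += R_i                              # sum_{j<=i} R_j
        D_i = math.exp(-2.0 * b * cum_rate)          # distortion at fading state i
        D_bar += p[i] * D_i
    return D_bar

# Illustrative numbers (not from the text): three fading states, total power 1.
s = [0.5, 1.0, 2.0]
p = [0.3, 0.4, 0.3]
print(expected_distortion([0.5, 0.3, 0.2], s, p, b=1.0))
```

Such an evaluator is the elementary building block for a direct numerical search over allocations, or for checking the output of the linear-time algorithm of Tian08 .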
When the fading state is continuous, the density of the power gain
distribution is then given by $f(s)$, which is assumed to be continuous and
differentiable almost everywhere. In this case, the goal is then to find a
power allocation density function $P(s)$, or its cumulative function, which
minimizes the expected distortion, see more details in Tian08 .
#### 6.3.2 The Continuous SR-Broadcasting
We next turn our attention to the case of a continuum of layers, which is, in
fact, the case considered in ShitzSteiner03 . To facilitate understanding, we
first provide a less technical derivation under the assumption that the
optimal power allocation concentrates on a single interval of the power gain
range, and show that this is indeed true for some probability density function
$f(s)$. This simple derivation provides important intuitions for the general
case, based on which a more general derivation is then given and some
properties of the solution are subsequently discussed. For simplicity, we
first assume $f(s)$ has support on the entire non-negative real line
$[0,\infty)$. Later it is shown that this assumption can be relaxed. The
optimization problem can be reformulated as follows. Define
$\displaystyle I(i)=\exp\left(\sum_{j=1}^{i}2R_{j}\right)\ .$ (531)
We take the number of layers to infinity and the constraint becomes an
integral equation, where we convert back to the power gain $s$ instead of
noise power $n$, and it is clear we can replace the inequality by equality
without loss of optimality
$\displaystyle\int\limits_{0}^{\infty}I(s)\frac{1}{s^{2}}\,\textnormal{d}s=\int\limits_{0}^{\infty}\frac{\exp\left(2R(s)\right)}{s^{2}}\,\textnormal{d}s=P\
,$ (532)
where $R(s)$ is the cumulative rate associated with a fading gain $s$. The
term to be optimized is given by
$\displaystyle\overline{D}(I)=\int\limits_{0}^{\infty}f(s)\exp(-2bR(s))\,\textnormal{d}s=\int\limits_{0}^{\infty}\frac{f(s)}{I(s)^{b}}\,\textnormal{d}s\
.$ (533)
Note the additional condition that $I(s)$ has to be monotonically non-
decreasing, and that it should satisfy the boundary condition $I(0)=1$. Ignoring
the positivity constraint $I^{\prime}(s)\geq 0$ for now, take
$\displaystyle J(s,I,I^{\prime})=\frac{f(s)}{I^{b}(s)},\quad
G(s,I,I^{\prime})=\frac{I(s)}{s^{2}}\ .$ (534)
Hence the optimization problem can be written as
$\displaystyle\min$
$\displaystyle\quad\int_{0}^{\infty}J(s,I,I^{\prime})\,\textnormal{d}s\quad{\rm
s.t.}\quad\int_{0}^{\infty}G(s,I,I^{\prime})ds=P\ .$ (535)
Next, we assume there is a unique interval $[s_{1},s_{2}]$ for which power
allocation is non-zero. Under this assumption, the objective function reduces
to
$\displaystyle\overline{D}(I)=\int_{0}^{\infty}J(s,I,I^{\prime})\,\textnormal{d}s=\int_{s_{1}}^{s_{2}}\frac{f(s)}{I(s)^{b}}\,\textnormal{d}s+F(s_{1})+\frac{1-F(s_{2})}{I(s_{2})^{b}}\
,$ (536)
where $F(s)$ is the CDF of the fading gain random variable, i.e.,
$F(s)=\int_{0}^{s}f(r)\,\textnormal{d}r$, and the constraint becomes
$\displaystyle
P(I)=\int_{0}^{\infty}G(s,I,I^{\prime})ds=\int_{s_{1}}^{s_{2}}I(s)\frac{1}{s^{2}}\,\textnormal{d}s+\frac{I(s_{2})}{s_{2}}-\frac{1}{s_{1}}=P\
.$ (537)
Then we can write the Lagrangian form $L(I)=\overline{D}(I)+\lambda(P(I)-P)$.
To find the extremal solution, we consider an increment $q(s)$, and the
increment of the Lagrangian functional is given by $\Delta(q)=L(I+q)-L(I)$. By
taking an increment $q(s)$ with $q(s_{1})=q(s_{2})=0$ as well as $q(s)=0$ for
$s\notin[s_{1},s_{2}]$, then the Euler-Lagrange equation (pages 42-50 in GF91
) requires
$\displaystyle J_{I}+\lambda
G_{I}-\frac{\,\textnormal{d}}{\,\textnormal{d}s}[J_{I^{\prime}}+\lambda
G_{I^{\prime}}]=0\ ,$ (538)
with
$\displaystyle J_{I}=\frac{-bf(s)}{I^{b+1}(s)}\ ,\quad
G_{I}=\frac{1}{s^{2}},\quad J_{I^{\prime}}=G_{I^{\prime}}=0\ ,$ (539)
which further simplifies to
$\displaystyle I(s)=\left(\frac{bf(s)s^{2}}{\lambda}\right)^{1/(b+1)}\ .$
(540)
At this point, it is clear that for $I^{\prime}(s)\geq 0$ to be true, which is
necessary for $I(s)$ to be a valid solution, $f(s)s^{2}$ should have non-
negative derivative in any interval such that (540) holds. In fact, for any
interval that a positive rate is allocated to, $f(s)s^{2}$ should have
strictly positive derivative such that $I(s)$ is strictly increasing. If there
is only one interval over the support of $f(s)$ where $f(s)s^{2}$ has strictly
positive derivative, then the single interval solution assumption is indeed
true. Now, since $q(s_{2})$ can be arbitrary, at this variable end (pages
25-29 in GF91 ) a necessary condition for an extremum is
$\displaystyle\frac{-b(1-F(s_{2}))}{I(s_{2})^{b+1}}+\lambda\frac{1}{s_{2}}=0\
,$ (541)
which gives
$\displaystyle\lambda=\frac{bs_{2}(1-F(s_{2}))}{I(s_{2})^{b+1}}\ .$ (542)
Since $I(s_{1})=1$, the expression (540) for $I(s)$ gives
$\lambda=bf(s_{1})s^{2}_{1}$; combining this with (542) gives one boundary condition
$\displaystyle 1-F(s_{2})=f(s_{2})s_{2}\ .$ (543)
The lower bound $s_{1}$ is determined by the power constraint, from which we
have
$\displaystyle\int_{s_{1}}^{\infty}\frac{I(s)}{s^{2}}\,\textnormal{d}s=\int_{s_{1}}^{s_{2}}\left(\frac{f(s)}{f(s_{1})s^{2}_{1}}\right)^{1/(b+1)}s^{-2b/(b+1)}\,\textnormal{d}s+\frac{1}{s_{2}}\left(\frac{f(s_{2})s_{2}^{2}}{f(s_{1})s^{2}_{1}}\right)^{1/(b+1)}=P+\frac{1}{s_{1}}\
,$ (544)
where in the second equality we split the integral into two parts at
$s=s_{2}$. Hence, the unique extremal solution is
$\displaystyle
I(s)=\left(\frac{f(s)s^{2}}{f(s_{1})s^{2}_{1}}\right)^{1/(b+1)}\ ,$ (545)
with the boundary conditions specified by (543) and (544). To find the
corresponding power allocation, define
$T(s)=\int_{s}^{\infty}P(r)\,\textnormal{d}r$. We derive from (545) that
$\displaystyle T(s)$
$\displaystyle=\int\limits_{s}^{\infty}\frac{I(r)}{I(s)}\,\textnormal{d}r-\frac{1}{s}=\left(\frac{f(s_{2}){s^{2}_{2}}}{f(s)s^{2}}\right)^{1/(b+1)}\frac{1}{s_{2}}+\int_{s}^{s_{2}}\left(\frac{f(r)r^{2}}{f(s)s^{2}}\right)^{1/(b+1)}\frac{1}{r^{2}}\,\textnormal{d}r-\frac{1}{s}\
.$ (546)
Through some basic calculation, it can be shown that this is the same solution
as that in Ng09 . Thus, the limit of the optimal solution of the discrete case
in Ng09 indeed converges to the extremal solution derived through the
classical variational method. Furthermore, the variational method derivation
directly asserts that $f(s)s^{2}$ has a non-negative derivative for any
positive power allocation interval. This condition was, however, lacking in
the derivation in Ng09 .
We consider the average achievable distortion for a SISO Rayleigh fading
channel, with CDF $F(s)=1-\exp\left(-\frac{s}{\overline{s}}\right)$, where
$\overline{s}$ is the expected fading gain power. For this distribution, the
optimal power allocation is continuous over a single interval $[s_{1},s_{2}]$
and zero outside it. This can be immediately observed from $f(s)s^{2}$ by
taking its first derivative
$\displaystyle\frac{\,\textnormal{d}}{\,\textnormal{d}s}f(s)s^{2}=\left(\frac{2s}{\overline{s}}-\frac{s^{2}}{\overline{s}^{2}}\right)\exp\left(-\frac{s}{\overline{s}}\right)\
,$ (547)
where $\frac{\,\textnormal{d}}{\,\textnormal{d}s}f(s)s^{2}\geq 0$ on a single
interval $s\in[0,2\overline{s}]$. Then the upper bound
$s_{2}\in[0,2\overline{s}]$ is determined by (543), which reduces to
$\displaystyle\exp\left(-\frac{s_{2}}{\overline{s}}\right)=\frac{s_{2}}{\overline{s}}\exp\left(-\frac{s_{2}}{\overline{s}}\right)\
,$ (548)
yielding $s_{2}=\overline{s}$. Solving (544) gives the other boundary value
$s_{1}$, denoted by $s_{1,\rm opt}$; the condition (544) does not lead to an
analytical expression, but can be solved numerically. Then, the general
expression for $I(s)$ for the Rayleigh fading channel is given by
$\displaystyle I(s)=\left\\{\begin{array}[]{ll}1&s\leq s_{1,\rm opt}\\\
\left(\frac{s^{2}}{s^{2}_{1,\rm opt}}\exp{\left(-\frac{s-s_{1,\rm
opt}}{\overline{s}}\right)}\right)^{1/(b+1)}&s_{1,\rm
opt}<s\leq\overline{s}\\\ \left(\frac{\overline{s}^{2}}{s^{2}_{1,\rm
opt}}\exp{\left(-\frac{\overline{s}-s_{1,\rm
opt}}{\overline{s}}\right)}\right)^{1/(b+1)}&s>\overline{s}\end{array}\right..$
(552)
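For the Rayleigh example, the boundary $s_{2}=\overline{s}$ is explicit, while (544) must be solved numerically for $s_{1}$. A minimal Python sketch (the values of $\overline{s}$, $b$, and $P$ are assumed examples; simple trapezoidal quadrature and bisection stand in for a proper solver) is:

```python
import math

s_bar, b, P = 1.0, 0.5, 2.0                  # assumed example values
f = lambda s: math.exp(-s / s_bar) / s_bar   # Rayleigh power-gain PDF
s2 = s_bar                                   # boundary (548): s_2 = s_bar

def power_needed(s1, n=2000):
    """Left side of (544) minus 1/s1: the total power the allocation
    consumes when the active interval starts at s1 (equals 0 at s1 = s2)."""
    c = (f(s1) * s1 ** 2) ** (-1.0 / (b + 1))
    h = (s2 - s1) / n
    grid = [s1 + k * h for k in range(n + 1)]
    vals = [c * f(s) ** (1.0 / (b + 1)) * s ** (-2.0 * b / (b + 1)) for s in grid]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))   # trapezoid rule
    tail = c * (f(s2) * s2 ** 2) ** (1.0 / (b + 1)) / s2
    return integral + tail - 1.0 / s1

# Bisection on s1: power_needed decreases toward 0 as s1 grows to s2.
lo, hi = 1e-3 * s_bar, s2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if power_needed(mid) > P:
        lo = mid
    else:
        hi = mid
s1_opt = 0.5 * (lo + hi)
I = lambda s: (f(s) * s ** 2 / (f(s1_opt) * s1_opt ** 2)) ** (1.0 / (b + 1))
print(s1_opt, I(s2))
```

The resulting $I(s)$ is (545) restricted to $[s_{1,\rm opt},\overline{s}]$, i.e., the middle branch of (552).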
Figure 35: Minimal average distortion, a comparison of outage approach and
broadcast approach, for bandwidth expansions $b=0.5,1,2$.
In Fig. 35, the average distortion bounds for Rayleigh fading channels are
demonstrated for three different bandwidth expansion values ($b=0.5,1,2$).
For every bandwidth expansion, the minimal average distortion
of the outage approach and broadcast approach are compared. It can be noticed
that the smaller $b$ is, the larger is the broadcast gain, which can be
defined as the SNR gain of the broadcast approach over the outage approach for
the same average distortion value. Thus, the benefit of the broadcast approach
compared to the outage directly depends on the system design parameter $b$.
### 6.4 The Information Bottleneck Channel
An interesting setting is the information bottleneck channel, the objective of
which is the efficient transmission of data over a wireless block fading
channel that is connected to a limited capacity reliable link. This setting is
known as the bottleneck channel AsSh20_Bottleneck . Two main broadcast
approaches are considered for the bottleneck channel in AsSh20_Bottleneck .
The first is an oblivious approach, where the sampled noisy observations are
compressed and transmitted over the bottleneck channel without having any
knowledge of the original information codebook. This is compared to a decode-
forward (non-oblivious) approach, where the sampled noisy data is decoded, and
whatever is successfully decoded is reliably transmitted over the bottleneck
channel. This work is extended for an uncertain bottleneck channel capacity
setting in SteinerShamai2020 , where the transmitter is not aware of the
available backhaul capacity per transmission and knows only its distribution.
In both settings, it is possible to analytically describe the optimal
continuous layering power distribution that maximizes the average achievable
rate in closed-form expressions. The topic is covered in more detail in
steinerSh2020bottleneck .
The Gaussian bottleneck problem is depicted in Fig. 36. Consider a Markov
chain of a random variable triplet $x-y-z$, related according to
$y=h\cdot x+n\ ,$ (553)
where $x$ and $n$ are independent, with $n\sim{\cal N}(0,1)$. The fading gain
$s=|h|^{2}$ is fixed per transmission block. The SNR is $P\cdot s$, where $P$
is the transmission power, ${\mathbb{E}}[x^{2}]=P$. The distribution of the
fading gain $s$ is known to the transmitter and the receiver, as in the
broadcast approach ShitzSteiner03 discussed earlier. The output $z$ of the
bottleneck channel is
a compressed version of the received signal $y$ under the bottleneck channel
capacity $C$ constraint. The optimization problem can be formalized as
$\max\limits_{{\mathbb{P}}(z|y),{\mathbb{P}}(x)~{}{\rm s.t.}I(y;z)\leq
C}~{}I(x;z)\ .$ (554)
If $x$ is Gaussian, then it is clear Tishby2004 ; tishby99information that
$y-z$ is also a Gaussian channel. Therefore, the maximization result of (554)
is
$C_{\rm
Obliv}=I(x;z)=\frac{1}{2}\log\left(\frac{1+P|h|^{2}}{1+P|h|^{2}\cdot\exp(-2C)}\right)\
,$ (555)
which is a direct result of the rate-distortion approach. The output of the
relay may be represented by quantizing its input $y$,
$z=y+m\ ,$ (556)
where $P|h|^{2}+1$ is the variance of $y$ in the channel model (553). The
variance of the quantization noise $m$ is obtained from $I(z;y)=C$, i.e.,
${\mathbb{E}}[m^{2}]=\frac{P|h|^{2}+1}{\exp(2C)-1}\ .$ (557)
The problem at hand is the reliable transmission rate from $x$ to the
destination with an oblivious relay that uses the bottleneck channel to send
compressed versions of its input, without knowledge of the transmitter
codebook.
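The equivalence between (555) and the quantization model (556)-(557) can be checked numerically; the sketch below (with assumed example values for $P$, $s$, and $C$; rates in nats) compares the two expressions:

```python
import math

def c_obliv(P, s, C):
    """Oblivious bottleneck rate (555), with s = |h|^2."""
    return 0.5 * math.log((1.0 + P * s) / (1.0 + P * s * math.exp(-2.0 * C)))

def c_quantized(P, s, C):
    """Same rate via the quantization model (556)-(557):
    z = y + m with E[m^2] = (P*s + 1)/(exp(2C) - 1)."""
    var_m = (P * s + 1.0) / (math.exp(2.0 * C) - 1.0)
    return 0.5 * math.log(1.0 + P * s / (1.0 + var_m))

print(c_obliv(10.0, 0.7, 2.0), c_quantized(10.0, 0.7, 2.0))
```

Both expressions also stay below the bottleneck capacity $C$ itself, consistent with the compression interpretation.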
For a DF non-oblivious relay, the relay can decode its input and then send the
decoded data under bandwidth limitation $C$ over the bottleneck channel $y-z$.
Hence, the minimum of two capacities provides the achievable transmission rate
$C_{\rm DF}=\min\left\\{\frac{1}{2}\log(1+P|h|^{2}),C\right\\}\ .$ (558)
An alternative setting that generalizes the current model is a variable
availability of the bottleneck capacity, which is common in cellular uplink
transmission. This can be due to changing traffic loads over time on the
network ROY_6613623 . This means that the relay-destination bottleneck channel
capacity $C$ is a random variable. The source transmitter knows its
distribution; however, like in the wireless fading channel, feedback to the
transmitter is not available due to the capacity variability dynamics. If the
relay perfectly knows, per received codeword, the bottleneck currently
available capacity, it can adapt its data rate. However, if the relay has no
access to the capacity per codeword, it may use successive refinement source
coding Tian08 matched to the capacity distribution.
Figure 36: Information bottleneck fading channel system model block diagram.
Consider a fading wireless link to $y$, where $s=|h|^{2}$ is a unit variance
block fading gain. It is assumed to change independently between codewords and
remains fixed over a single codeword. The channel model of $z$ is expressed by
its block fading gain as
$z=\sqrt{{\rm FPR}_{\rm eq}}x+n\ ,$ (559)
where $n$ is a unit variance Gaussian noise. The equivalent fading gain ${\rm
FPR}_{\rm eq}$ is given by
${\rm FPR}_{\rm eq}=\frac{s(1-\exp(-2C))}{1+s\cdot P\cdot\exp(-2C)}\ ,$ (560)
which is directly obtained from (557). It may be observed that ${\rm FPR}_{\rm
eq}$ is finite for $s\geq 0$, and at the limit of $s\rightarrow\infty$ becomes
$\lim\limits_{s\rightarrow\infty}{\rm FPR}_{\rm eq}=(\exp(2C)-1)/P\ ,$ (561)
and the ergodic capacity of the bottleneck fading channel is
$\displaystyle C_{\rm Obliv,Erg}$
$\displaystyle={\mathbb{E}}_{s}\left[\frac{1}{2}\log(1+P\cdot{\rm FPR}_{\rm
eq})\right]$ (562)
$\displaystyle={\mathbb{E}}_{s}\left[\frac{1}{2}\log\left(1+\frac{s\cdot
P\cdot(1-\exp(-2C))}{1+s\cdot P\cdot\exp(-2C)}\right)\right]\ .$ (563)
The continuous broadcasting approach solution is rather straightforward here.
The channel model here can be expressed by equivalent fading gain $\nu={\rm
FPR}_{\rm eq}$ from (560), which depends on the bottleneck channel capacity
$C$ and the distribution of the channel fading gain $s$. In this bottleneck
channel with oblivious relaying, the broadcast approach is optimized for a
fading distribution $F_{\nu}(u)$ of (560). The optimal power distribution can
then be derived directly. Clearly, for a high bottleneck channel capacity,
$C\rightarrow\infty$, ${\rm FPR}_{\rm eq}\rightarrow s$.
A DF relay (non-oblivious approach) can decode the received signal $y$, and
reliably convey to the destination the decoded data under capacity limit $C$.
An ergodic upper bound of the bottleneck fading channel, $C_{\rm DF,Erg}$, is
not achievable for a block fading channel, as the transmitter has no CSI. It
is beneficial to transmit a multi-layer coded signal for this channel model.
The DF non-oblivious ergodic capacity is expressed as
$C_{\rm
DF,Erg}={\mathbb{E}}_{s}\left[\min\left\\{C,~{}\frac{1}{2}\log(1+sP)\right\\}\right]\
,$ (564)
where a single fading realization is assumed per transmission and decoding of
a single codeword, for the slowly fading channel.
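The two ergodic rates can be compared by a short Monte Carlo sketch; here the fading gain is taken as unit-mean exponential (Rayleigh fading), and $P$ and $C$ are assumed example values:

```python
import math
import random

P, C = 10.0, 2.0                    # assumed example SNR and bottleneck capacity
random.seed(0)
samples = [random.expovariate(1.0) for _ in range(200000)]  # s = |h|^2 ~ Exp(1)

# Oblivious ergodic rate (563)
obliv = [0.5 * math.log(1.0 + s * P * (1.0 - math.exp(-2.0 * C))
                        / (1.0 + s * P * math.exp(-2.0 * C))) for s in samples]
c_obliv_erg = sum(obliv) / len(obliv)

# Non-oblivious DF ergodic rate (564)
df = [min(C, 0.5 * math.log(1.0 + s * P)) for s in samples]
c_df_erg = sum(df) / len(df)

print(c_obliv_erg, c_df_erg)
```

As expected, the oblivious rate is dominated realization-by-realization by the DF rate, since $C_{\rm Obliv}$ in (555) is upper-bounded both by $\frac{1}{2}\log(1+Ps)$ and by $C$.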
The continuous broadcasting approach for the non-oblivious DF approach can be
optimized in the following way. A transmitted signal $x$ is multi-layer coded
in a continuum of layers. The received signal $y$ is decoded layer-by-layer in
a successive decoding manner. All the successfully decoded layers with a total
rate up to $C$, the bottleneck channel capacity, can be reliably conveyed over
the bottleneck channel. The optimization goal is to maximize the average
transmitted rate over the bottleneck channel in this block fading channel
model. We formulate here the optimization of power density distribution
function $\rho_{\rm opt}(u)$ so that the average transmission rate is
maximized under the bottleneck channel capacity constraint.
For the non-oblivious block fading bottleneck channel, the total _expected
average achievable rate of the broadcast approach_ is obtained by the
following residual power distribution function
$\displaystyle I_{\rm
opt}(u)=\left\\{\begin{array}[]{ll}\arg\max\limits_{I(u)}&\frac{1}{2}\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}u(1-F_{s}(u))\frac{\rho(u)u}{1+I(u)u}\\\
\mbox{\rm
s.t.}&\displaystyle\int\limits_{0}^{\infty}\,\textnormal{d}u\frac{\rho(u)u}{1+I(u)u}\leq
C\ .\end{array}\right.\ ,$ (567)
where $F_{s}(u)$ is the CDF of the fading gain random variable, and $C$ is the
bottleneck channel capacity. The optimal power allocation $I_{\rm opt}(u)$ is
given by
$\displaystyle I_{\rm opt}(u)=\left\\{\begin{array}[]{ll}P&u<u_{0}\\\
\frac{1-F_{s}(u)+\lambda_{\rm opt}-u\cdot f_{s}(u)}{u^{2}f_{s}(u)}&u_{0}\leq
u\leq u_{1}\\\ 0&u>u_{1}\end{array}\right.\ ,$ (571)
where $\lambda_{\rm opt}\geq 0$ is a Lagrange multiplier obtained from the
boundary condition $I_{\rm opt}(u_{1})=0$ in (571), namely
$\displaystyle\lambda_{\rm opt}=u_{1}\cdot f_{s}(u_{1})-1+F_{s}(u_{1})\ ,$
(572)
and for any $\lambda_{\rm opt}>0$,
$\displaystyle u_{1}^{2}\cdot f_{s}(u_{1})=\exp(2C)\cdot u_{0}^{2}\cdot
f_{s}(u_{0})\ .$ (573)
Figure 37: Oblivious vs. Non-Oblivious single layer coding and broadcast
approach compared to the ergodic capacity, for bottleneck channel capacity of
$C=4$ [Nats/Channel Use].
Figure 37 demonstrates the achievable rates with a non-oblivious approach as
compared to an oblivious approach for a bottleneck channel capacity $C=4$
(Nats/Channel use). It can be observed here that in the high SNR region, the
gain of the broadcast approach compared to single-layer coding is higher with
a non-oblivious approach.
#### 6.4.1 Uncertainty of Bottleneck Capacity
A common case in cellular uplink is a variable availability of backhaul
capacity. This may be the result of variable loads on the network over time.
Traffic congestion of internet data may lead to changing the availability
levels of the backhaul ROY_6613623 . On the bottleneck channel, this means
that the relay-destination link capacity $C$ is a random variable. It may be
assumed that the transmitter is aware of the average capacity and its
distribution. However, like the wireless fading channel, the capacity
variability dynamics may not allow time for feedback to the transmitter. The
following subsection considers the case that the relay is fully aware of the
current bottleneck available capacity for each received codeword.
Consider a bottleneck channel with discrete capacity levels represented by $N$
random capacity values $\\{C_{i}\\}_{i=1}^{N}$, such that $C_{1}\leq
C_{2}\leq\cdots\leq C_{N}$ with corresponding probabilities
$\\{p_{b,i}\\}_{i=1}^{N}$, such that $p_{b,i}\geq 0$ and
$\sum_{i=1}^{N}p_{b,i}=1$. The average capacity of the bottleneck channel is
$\displaystyle C_{\rm ave}=\sum_{i=1}^{N}p_{b,i}C_{i}\ .$ (574)
The broadcast approach can be derived here for an oblivious relay setting and
under an equivalent fading gain distribution. Since the transmitter is not
aware of the bottleneck capacity per codeword, and only knows its
distribution, the following optimization flow is used for the continuous
broadcast approach optimization. The combined equivalent channel viewed by the
transmitter is
${\rm FPR}_{\rm eq}(s,C_{b})=\frac{s(1-\exp(-2C_{b}))}{1+s\cdot
P\cdot\exp(-2C_{b})}\ ,\quad\mbox{and}\quad s=|h|^{2}\ .$ (575)
The continuous broadcast approach is optimized for a fading distribution
$F_{\mu}(u)$, where $\mu={\rm FPR}_{\rm eq}(s,C_{b})$ is the equivalent
channel gain depending on the fading gain realization $s$, and bottleneck
channel capacity $C_{b}$ available per codeword. The CDF of this fading gain
is
$F_{\mu}(u)=\sum_{i=1}^{N}p_{b,i}F_{s}\left(\frac{u}{1-(1+Pu)\exp(-2C_{i})}\right)\
.$ (576)
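The mixture CDF in (576) is simple to evaluate by inverting (575) per capacity level; the Python sketch below uses an assumed Rayleigh $F_{s}$ and illustrative capacity levels and probabilities (not from the text):

```python
import math

def F_s(u):
    """Rayleigh power-gain CDF with unit mean (assumed example)."""
    return 1.0 - math.exp(-max(u, 0.0))

def F_mu(u, P, caps, probs):
    """Equivalent-gain CDF (576) for discrete bottleneck capacities C_i.
    The map u -> s inverts (575): s = u / (1 - (1 + P*u) * exp(-2*C_i))."""
    total = 0.0
    for C_i, p_i in zip(caps, probs):
        denom = 1.0 - (1.0 + P * u) * math.exp(-2.0 * C_i)
        if denom <= 0.0:
            total += p_i          # u beyond the range reachable under C_i, cf. (561)
        else:
            total += p_i * F_s(u / denom)
    return total

print(F_mu(1.0, 5.0, [0.5, 1.0, 2.0], [0.2, 0.5, 0.3]))
```

The branch for a non-positive denominator reflects (561): under capacity $C_{i}$ the equivalent gain cannot exceed $(\exp(2C_{i})-1)/P$, so the corresponding mixture component saturates at one.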
The main result here is expressed in the following proposition: the power
distribution, which maximizes the expected rate over the oblivious bottleneck
channel, is
$\displaystyle I(x)=\left\\{\begin{array}[]{cl}\frac{1-F_{\mu}(x)-x\cdot
f_{\mu}(x)}{x^{2}f_{\mu}(x)}&,~{}x_{0}\leq x\leq x_{1}\\\ 0&,~{}\mbox{\rm
else}\end{array}\right.\ ,$ (579)
where $x_{0}$ is determined by $I(x_{0})=P$, and $x_{1}$ by $I(x_{1})=0$.
Furthermore, the broadcasting rate is expressed as a function of the ${\rm
FPR}_{\rm eq}$ distribution $F_{\mu}(u)$
$\displaystyle R_{\rm opt}(s)=\left\\{\begin{array}[]{ll}0&s<x_{0}\\\
\log(s/x_{0})+\frac{1}{2}\log\left(\frac{f_{\mu}(s)}{f_{\mu}(x_{0})}\right)&x_{0}\leq
s\leq x_{1}\\\
\log(x_{1}/x_{0})+\frac{1}{2}\log\left(\frac{f_{\mu}(x_{1})}{f_{\mu}(x_{0})}\right)&s>x_{1}\end{array}\right.\
.$ (583)
The derivation of this optimization is based on the analysis in ShitzSteiner03
for characterizing the power distribution under an equivalent channel model
that includes the relayed signal after compression to a rate which matches the
bottleneck channel capacity. The channel model for the relayed signal $z$ can
be expressed by its block fading gain, under an oblivious approach.
Specifically,
$z=\sqrt{{\rm FPR}_{\rm eq}}\cdot x+n\ ,$ (584)
where $n$ is a unit variance Gaussian noise, and ${\rm FPR}_{\rm eq}(s,C_{b})$
is specified in (575). More details can be found in steinerSh2020bottleneck .
It is interesting to note here that although the relay can perform successive
refinement source coding matched to the backhaul capacity distribution, this
does not help and cannot increase the expected achievable rate when the relay
is informed of the available capacity per codeword.
An interesting problem arises when the wireless channel is fast fading, and
bottleneck channel capacity is random. That is, the fading $h$ (553) changes
independently (i.i.d.) for every channel use. For long codewords, the ergodic
nature of the channel can be utilized per transmitted codeword. Evidently,
under a non-oblivious DF relay, the relay decodes the transmission, and then
whatever possible is conveyed through the backhaul. The interesting part is
the oblivious processing. Here the relay should also convey the fading
realizations $h$ and the received signal $y$ to the destination in the best
possible way. Hence, $h$ plays the role of the source to be conveyed with successive
refinement. Furthermore, note that even if $y$ is fully provided to the
destination, an unavailable fading realization vector $h$ makes the capacity
behave as $\log\log({\sf SNR})$, as in the i.i.d. channel with unavailable
fading at the transmitter and receiver. This problem is analyzed for a known
bottleneck capacity in CaireShamai18 .
### 6.5 Transmitters with Energy Harvesting
As the last model in this section, we review the channel model of
zohdytajerharvesting in which the transmitter relies on an exogenous energy
harvesting unit as its only source of energy. Energy harvesting has been
evolving rapidly as a promising alternative to systems with lifetime-limited
batteries. Communication systems empowered by energy harvesting units rely on
ambient sources, which facilitate potentially perpetual sources of power
lu2015wireless2 ; lu2015wireless ; panatik2016energy ; jabbar2010rf .
Specifically, the recent advances in both the theory and implementation of
energy harvesting circuitry have facilitated the growth in various wireless
domains, e.g., ad-hoc networks huang2013spatial , wireless body networks
zhang2010energy , wireless sensor networks nishimoto2010prototype , and radio
frequency identification systems sudevalayam2011energy , which constitute the
main technologies that the IoT relies on.
By relying on harvested energy, the transmitter faces two sources of
randomness due to the fading and energy arrival processes. The transmitter
knows only the statistical descriptions of these processes while remaining
oblivious to the actual realizations of both. We review the optimal
distribution of power across information layers and over time in order to
maximize the average rate that can be reliably sustained. An interesting
observation is that allocation of power across layers and over time can be
decoupled into two independent power allocation tasks, one specifying the
allocation over time, and the second one optimizing the available power at any
given time across different layers. Furthermore, both sub-problems can be
solved optimally (under proper assumptions on the fading process).
To lay the context, consider transmission over a slowly-fading Gaussian
channel. The channel undergoes block fading, where the fading gain is constant
over a block of $n$ channel uses and changes independently across blocks. The
block length $n$ is assumed to be sufficiently long such that under the given
delay constraints (finite transmission duration), one codeword can be reliably
transmitted to the receiver. The input-output relationship across $B$ fading
blocks is given by
$y_{b}^{i}=h_{b}\cdot x_{b}^{i}+n_{b}^{i}\ ,\;\forall\;i\in\\{1,\dots,n\\}\
,\;b\in\\{1,\dots,B\\}\ ,$ (585)
where $x_{b}^{i}$ and $y_{b}^{i}$ are the transmitted and received symbols at
time $i$ in block $b$, $h_{b}$ is the fading coefficient in block $b$, and
$n_{b}^{i}$ accounts for the AWGN distributed according to ${\cal
N}_{\mathbb{C}}(0,1)$. Denote the channel gains by $s_{b}=|h_{b}|^{2}$, for
$b\in\\{1,\dots,B\\}$, and denote the CDF of $s_{b}$, known to the
transmitter, by $F_{b}:\mathbb{R}_{+}\rightarrow[0,1]$. Accordingly, denote
the associated PDF by $f_{b}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$.
Finally, set $p_{b}^{i}=\mathbb{E}[|x_{b}^{i}|^{2}]$ as the transmission power
at time $i$ in block $b$, and define $p_{b}$ as the aggregate power used in
block $b$, i.e.,
$\displaystyle
p_{b}=\sum_{i=1}^{n}\mathbb{E}[|x_{b}^{i}|^{2}]=\sum_{i=1}^{n}p_{b}^{i}\ .$
(586)
Let $g^{i}_{b}$ denote the amount of energy harvested during time slot $i$ of
block $b$. Accordingly, corresponding to each block $b$ define the vectors
$\boldsymbol{p}_{b}=[p_{b}^{1},\dots,p_{b}^{n}]^{T}$ and
$\boldsymbol{g}_{b}=[g_{b}^{1},\dots,g_{b}^{n}]^{T}$. The transmitter is
equipped with a battery whose capacity order-dominates that of the amount of
harvested energy. This induces a set of power consumption constraints
according to which the amount of energy consumption up to each time instant
cannot exceed the harvested energy up to that point. Specifically, by defining
$\displaystyle\mathbbl{1}_{i}=[\underset{i}{\underbrace{1,\dots,1}},\underset{n-i}{\underbrace{0,\dots,0}}]^{T}\
,\quad\forall i\in\\{1,\dots,n\\}\ ,$ (587)
corresponding to each pair $b\in\\{1,\dots,B\\}$ and $i\in\\{1,\dots,n\\}$
$\sum_{j=1}^{b-1}\mathbbl{1}^{T}_{n}\cdot\boldsymbol{p}_{j}\;+\;\mathbbl{1}^{T}_{i}\cdot\boldsymbol{p}_{b}\;\leq\;\sum_{j=1}^{b-1}\mathbbl{1}^{T}_{n}\cdot\boldsymbol{g}_{j}\;+\;\mathbbl{1}^{T}_{i}\cdot\boldsymbol{g}_{b}\
.$ (588)
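The causality constraints in (588) amount to comparing running sums of consumed and harvested energy at every slot of every block; a minimal Python check (with toy numbers, not taken from the text) is:

```python
def feasible(powers, harvests):
    """Check the energy causality constraints (588): cumulative consumed
    energy never exceeds cumulative harvested energy, at every slot i of
    every block b. `powers` and `harvests` are B x n nested lists."""
    consumed = harvested = 0.0
    for p_b, g_b in zip(powers, harvests):
        for p, g in zip(p_b, g_b):
            consumed += p
            harvested += g
            if consumed > harvested + 1e-12:
                return False
    return True

# Toy example (assumed numbers): two blocks of three slots each.
powers   = [[1.0, 0.5, 0.5], [2.0, 0.0, 1.0]]
harvests = [[2.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
print(feasible(powers, harvests))
```

Note that, exactly as in (588), feasibility is a per-prefix condition: a schedule can fail at an early slot even when the total harvested energy over the horizon exceeds the total consumed energy.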
Based on this approach, when the aggregate transmission power over any
transmission block is $p$ and the actual channel gain during that block is
$s=|h|^{2}$, define $\rho(p,s)$ as the density of the power allocated to the
information layer indexed by $s$. Hence, the amount of power allocated to
realization $s$ is $\rho(p,s)\,\textnormal{d}s$, and the amount of
interference power imposed on the receiver designated to the channel
realization with gain $s$ is
$I\left(p,s\right)=\int_{s}^{\infty}\rho\left(p,u\right)\,\textnormal{d}u\ .$
(589)
To satisfy the power constraint for the aggregate power split across different
layers, the following condition must be satisfied.
$\displaystyle I(p,0)=p\ .$ (590)
Based on such power allocation and interference terms, the average rate over
all possible fading realizations within one transmission block is
$\displaystyle{R_{b}}(p_{b})$
$\displaystyle=\int_{0}^{\infty}[1-F_{b}(s)]\;\dfrac{s\cdot\rho_{b}\left(p_{b},s\right)}{1+s\cdot
I_{b}\left(p_{b},s\right)}\;\,\textnormal{d}s\ .$ (591)
Sum-rate optimization is constrained by the energy availability constraints
in (588) and the aggregate power allocation constraint $I_{b}(p_{b},0)=p_{b}$.
Hence, the optimal allocation of power across information layers and over time
is the solution to the following problem, which involves a stochastic
guarantee on meeting the power constraints.
$\displaystyle{\cal
R}=\left\\{\begin{array}[]{lll}\max\limits_{\\{\boldsymbol{p}_{b}\\},\\{\rho_{b}(p_{b},s)\\}}&\sum\limits_{b=1}^{B}{R_{b}}(p_{b})&\\\
\quad{\rm
s.t.}&{\mathbb{P}}\left(\sum\limits_{j=1}^{b-1}\mathbbl{1}^{T}_{n}\cdot\boldsymbol{p}_{j}\;+\;\mathbbl{1}^{T}_{i}\cdot\boldsymbol{p}_{b}\;\leq\;\sum\limits_{j=1}^{b-1}\mathbbl{1}^{T}_{n}\cdot\boldsymbol{g}_{j}\;+\;\mathbbl{1}^{T}_{i}\cdot\boldsymbol{g}_{b}\right)\geq\eta\,,&\forall\;b,i\\\
&I_{b}(p_{b},0)=p_{b}\,,&\forall\;b\\\ &\boldsymbol{p}_{b}\succeq
0\,,&\forall\;b\end{array}\right.\ .$ (596)
#### 6.5.1 Optimal Power Allocation Densities
Based on (596), for any given set of power allocation terms
$\\{\boldsymbol{p}_{b}:b\in\\{1,\dots,B\\}\\}$, the set of optimal densities
can be found as the solution to
$\displaystyle{\cal
P}(\boldsymbol{p}_{1},\dots,\boldsymbol{p}_{B})=\left\\{\begin{array}[]{lll}\\!\\!\\!\underset{\\{\rho_{b}(p_{b},s)\\}}{\max}&\\!\\!\\!\sum\limits_{b=1}^{B}{R_{b}}(p_{b})&\\\
\quad{\rm
s.t.}&\\!\\!\\!I_{b}(p_{b},0)=p_{b}\,,&\forall\;b\end{array}\right..$ (599)
By noting the expansions of $I_{b}(p,s)$ and ${R_{b}}(p)$ in (589) and (591),
respectively, we have
$\displaystyle{R_{b}}(p)=-\int_{0}^{\infty}[1-F_{b}(s)]\;\cdot\dfrac{s\cdot\frac{\partial
I_{b}(p,s)}{\partial s}}{1+s\cdot I_{b}\left(p,s\right)}\;\,\textnormal{d}s\
.$ (600)
Based on this characterization, for a given power allocation over time
$\\{\boldsymbol{p}_{b}:b\in\\{1,\dots,B\\}\\}$, we have
$I_{b}(p,s)=\begin{cases}\dfrac{1-F_{b}(s)}{s^{2}f_{b}(s)}-\dfrac{1}{s}\
,&\quad\ell_{b}\leq s\leq u_{b}\\\ 0\ ,&\quad\mbox{\rm otherwise}\end{cases}\
,$ (601)
where $\ell_{b}$ and $u_{b}$ can be determined uniquely by solving
$\displaystyle I_{b}(p_{b},\ell_{b})\;=\;p_{b}\qquad\mbox{and}\qquad
I_{b}(p_{b},u_{b})\;=\;0\ .$ (602)
The analysis directly follows the same line of arguments as in the setting
without an energy harvesting transmitter. Based on the characterization of
interference residual functions $\\{I_{b}:b\in\\{1,\dots,B\\}\\}$, the optimal
rate over the fading block $b\in\\{1,\dots,B\\}$ at the fading state $s$ is
$\displaystyle R_{b}(p_{b},s)\;=\;\begin{cases}0&\mbox{\rm for}\quad
s<\ell_{b}\\\ \ln\frac{s^{2}f_{b}(s)}{\ell_{b}^{2}f_{b}(\ell_{b})}&\mbox{\rm
for}\quad\ell_{b}\leq s\leq u_{b}\\\
\ln\frac{u_{b}^{2}f_{b}(u_{b})}{\ell_{b}^{2}f_{b}(\ell_{b})}&\mbox{\rm
for}\quad u_{b}\leq s\end{cases}\ .$ (603)
Subsequently, the average transmission rate over the fading block $b$ with
aggregate power $p_{b}$ is
$\displaystyle
R_{b}(p_{b})\;=\;\ln\frac{u_{b}^{2}f_{b}(u_{b})}{\ell_{b}^{2}f_{b}(\ell_{b})}-\int_{\ell_{b}}^{u_{b}}\left[\frac{2}{s}+\frac{f^{\prime}_{b}(s)}{f_{b}(s)}\right]\cdot
F_{b}(s)\;\,\textnormal{d}s\ .$ (604)
This expression can be used to show the interesting property that, for any
continuous CDF, $R_{b}(p_{b})$ is non-decreasing and strictly concave in
$p_{b}$ zohdytajerharvesting .
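This monotonicity and concavity can be checked numerically from (604). The sketch below again assumes exponential fading (an illustrative assumption), for which $\ell_{b}(p)=(\sqrt{1+4p}-1)/(2p)$, $u_{b}=1$, and $f'_{b}/f_{b}=-1$; the integral is evaluated by a composite trapezoidal rule.

```python
import numpy as np

def rate_exp(p, m=2000):
    # (604) specialized to exponential fading F(s) = 1 - exp(-s),
    # f'(s)/f(s) = -1, ell(p) = (sqrt(1+4p) - 1)/(2p), u = 1.
    ell = (np.sqrt(1.0 + 4.0 * p) - 1.0) / (2.0 * p)
    boundary = np.log(np.exp(-1.0) / (ell**2 * np.exp(-ell)))
    s = np.linspace(ell, 1.0, m)
    integrand = (2.0 / s - 1.0) * (1.0 - np.exp(-s))
    ds = s[1] - s[0]
    integral = ds * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return boundary - integral
```

Evaluating $R_{b}$ on an equally spaced power grid shows strictly increasing values with diminishing increments, consistent with the claimed monotonicity and strict concavity.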
#### 6.5.2 Optimal Power Allocation over Time
Next, based on the given allocation of power across information layers and
leveraging the key properties of $R_{b}(p_{b})$, i.e., concavity and being
non-decreasing, optimal power distribution over time can be delineated. For
this purpose, we present the solution to the following problem studied in
tajersequentialopt , which is a more general problem that subsumes ${\cal R}$
as a special case.
$\displaystyle{\cal
Q}(\boldsymbol{\gamma})=\left\\{\begin{array}[]{lll}\underset{\\{p_{b}\\}}{\max}&\sum\limits_{b=1}^{B}w_{b}(p_{b})&\\\
{\rm s.t.}&\sum\limits_{i=1}^{b}p_{i}\leq\gamma_{b}&\forall\;b\\\ &p_{b}\geq
0&\forall\;b\end{array}\right.\ ,$ (608)
where $\boldsymbol{\gamma}=[\gamma_{1},\dots,\gamma_{B}]$ and
$w_{b}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ is strictly concave and non-
decreasing in $p_{b}$. Based on the expressions for ${R_{b}}(p)$, the sum-rate
over block $b$ depends on the power vector $\boldsymbol{p}_{b}$ only through
its sum $p_{b}$, defined in (586). This implies that instead of enforcing the
energy availability constraints in (588), we can equivalently enforce a
constraint only on the aggregate power in each block. Hence, by defining
$\gamma_{b}=\sum_{i=1}^{b}\mathbbl{1}^{T}_{n}\cdot\boldsymbol{g}_{i}\ ,$ (609)
the linear constraints in ${\cal R}$ can be equivalently stated as the linear
constraint in ${\cal Q}$. The detailed steps of solving problem (608)
analytically and the attendant performance guarantees are discussed in detail
in tajersequentialopt , a summary of which is provided next.
In order to facilitate different steps in the analysis, define the following
auxiliary problem, solving which is instrumental to characterizing the
properties of interest. Corresponding to each pair $i$ and $j$ such that
$1\leq i<j\leq B$ define
$\displaystyle{\cal Q}_{i\rightarrow
j}(\boldsymbol{\gamma})=\left\\{\begin{array}[]{lll}\underset{\\{p_{b}\\}}{\max}&\sum\limits_{b=i}^{j}w_{b}(p_{b})&\\\
{\rm s.t.}&\sum\limits_{b=i}^{j}p_{b}=\gamma_{j}-\gamma_{i}\\\ &p_{b}\geq
0,\qquad\forall\;b\in\\{i,\dots,j\\}\end{array}\right.\\!\\!\\!\\!,$ (613)
which has a unique globally optimal solution since the utility function is
strictly concave.
Algorithm 1 - Computing $\boldsymbol{p}$
1: set $\gamma_{b}$ according to (609) $\forall\;b\in\\{1,\dots,B\\}$
2: initialize $d=0$ and $u_{0}=0$
3: while $u_{d}\;\leq\;B-1$
4: $\quad d\leftarrow d+1$
5: $\quad$ set ${\cal A}_{d}=\\{u_{d-1}+1,\dots,B\\}$
6: $\quad$ for $b\in{\cal A}_{d}$
7: $\quad\quad$ set $\boldsymbol{y}^{d,b}$ as the solution to ${\cal Q}_{u_{d-1}\rightarrow b}(\boldsymbol{\gamma})$
8: $\quad\quad$ set $q^{d,b}=\min\left\\{\frac{{\rm d}w_{i}}{{\rm d}y}(y^{d,b}_{i}):i\in\\{u_{d-1}+1,\dots,b\\}\right\\}$
9: $\quad$ end for
10: $\quad u_{d}=\arg\underset{b\in{\cal A}_{d}}{\max}\;\;q^{d,b}$ (if not unique, select the smallest)
11: $\quad v_{d}=\underset{b\in{\cal A}_{d}}{\max}\;\;q^{d,b}$
12: $\quad\boldsymbol{z}^{d}=\boldsymbol{y}^{d,u_{d}}$
13: end while
14: for $i\in\\{1,\dots,d\\}$
15: $\quad$ for $b\in{\cal D}_{i}=\\{u_{i-1}+1,\dots,u_{i}\\}$
16: $\quad\quad p_{b}\;=\;z_{b}^{i}$
17: $\quad$ end for
18: end for
#### 6.5.3 Grouping the Constraints
The auxiliary term $\tilde{\boldsymbol{p}}$ has a pivotal role in establishing
the properties of $\boldsymbol{p}$. Corresponding to $\boldsymbol{p}$, we
define the auxiliary vector $\tilde{\boldsymbol{p}}$ by slightly modifying
Algorithm 1: specifically, line 1 is modified to initialize the values of
$\gamma_{b}$ according to $\gamma_{b}=\sum_{\ell=1}^{b}p_{\ell}$. This
modified version of Algorithm 1 successively partitions the set of constraints
$\\{\sum_{\ell=1}^{b}p_{\ell}\;\leq\;\gamma_{b}\\}$ into $d$ disjoint subsets
of constraints. Specifically, it returns time indices
$u_{0}<u_{1}<\dots<u_{d}<B$, and partitions the set $\\{1,\dots,B\\}$ into $d$
disjoint sets
$\displaystyle{\cal D}_{i}=\\{u_{i-1}+1,\dots,u_{i}\\}\ ,\qquad\mbox{for}\quad
i\in\\{1,\dots,d\\}\ .$ (614)
Furthermore, this algorithm computes the metrics
$\\{v_{i}:i\in\\{1,\dots,d\\}\\}$ and assigns $v_{i}$ to the set ${\cal
D}_{i}$. Once these sets are known, solving ${\cal Q}$ reduces to solving a
collection of smaller problems in the form of ${\cal Q}_{u_{i-1}\rightarrow
u_{i}}(\boldsymbol{\gamma})$ defined in (613). The properties of
$\tilde{\boldsymbol{p}}$ are formalized next. Given $\boldsymbol{p}$ as the
optimal solution to ${\cal Q}$, vector $\tilde{\boldsymbol{p}}$ satisfies all
the constraints of ${\cal Q}$. Furthermore, the vector
$\tilde{\boldsymbol{p}}$ satisfies
$\sum_{b=1}^{B}w_{b}(\tilde{p}_{b})\geq\sum_{b=1}^{B}w_{b}(p_{b})$ and the
equality holds if and only if $\boldsymbol{p}=\tilde{\boldsymbol{p}}$. This
establishes the optimality of $\tilde{\boldsymbol{p}}$ generated by modifying
Algorithm 1. If $\boldsymbol{p}$ is the optimal solution to the problem ${\cal
Q}$, then $\tilde{\boldsymbol{p}}$ generated by modifying Algorithm 1 is also
optimal. Uniqueness of $\boldsymbol{p}$ indicates
$\tilde{\boldsymbol{p}}=\boldsymbol{p}$.
#### 6.5.4 Dominant Constraints
By leveraging the results in the previous subsection, which partition the set
of constraints into a collection of $d$ disjoint constraint sets, additional
properties for these sets of constraints can be concluded. Specifically, in
each of the given $d$ sets, it can be shown that one constraint holds with
equality, which we refer to as the dominant constraint. These $d$ dominant
constraints are the only constraints needed to characterize the optimal
solution $\boldsymbol{p}$. This property is formalized in the following
theorem.
Under the optimal solution $\boldsymbol{p}$, all the inequality constraints
with indices included in $\\{u_{m}:m\in\\{1,\dots,d\\}\\}$ hold with equality.
Furthermore, the sequence $\\{v_{1},v_{2},\dots,v_{d}\\}$ is strictly
decreasing.
We remark that the set of indices $\\{u_{i}:i\in\\{1,\dots,d\\}\\}$ and
measures $\\{v_{i}:i\in\\{1,\dots,d\\}\\}$ have significant physical meanings
in power allocation. The elements of $\\{u_{i}:i\in\\{1,\dots,d\\}\\}$ specify
the time instances at which all the resources arrived by that time instance
are entirely consumed. Furthermore, the second part of Lemma 6.5.4 establishes
a connection among the derivative measures $q^{d,b}$ and $v_{d}$ defined in
Algorithm 1. In particular, the measures $\\{v_{i}:i\in\\{1,\dots,d\\}\\}$ are
the derivatives of the utility functions at the optimal solution
$\boldsymbol{p}$ over time.
#### 6.5.5 Optimality of Algorithm 1
So far we have shown that if we modify Algorithm 1 such that instead of
initializing the terms $\gamma_{b}$ as defined in (609) we initialize them
based on $\boldsymbol{p}$, then the output will be in fact the optimal
solution $\boldsymbol{p}$. Next we show that initializing Algorithm 1 with
either $\gamma_{b}=\sum_{\ell=1}^{b}p_{\ell}$ or according to (609) yields the
same output. The underlying insight is that closer scrutiny of Algorithm 1
shows that this algorithm depends on $\boldsymbol{p}$ primarily for
determining the metrics $\\{v_{i}:i\in\\{1,\dots,d\\}\\}$ and their associated
constraint indices $\\{u_{i}:i\in\\{1,\dots,d\\}\\}$. By invoking the result
of Lemma 6.5.4, we next show that for determining the sets
$\\{v_{i}:i\in\\{1,\dots,d\\}\\}$ and $\\{u_{i}:i\in\\{1,\dots,d\\}\\}$,
alternatively, we can also initialize $\gamma_{b}$ according to (609), based
on which we can show that the outcome of Algorithm 1 will be, in fact, the
optimal solution $\boldsymbol{p}$. This observation is formalized in the
following theorem. By setting $\gamma_{b}$ according to (609), Algorithm 1
generates the optimal solution of ${\cal Q}$.
## 7 Outlook
We conclude this survey by providing an outlook for some of the key open or
uninvestigated research directions.
Single-user MIMO channel. Designing an optimal broadcast approach for the
general MIMO channel is still an open problem since the MIMO channel is a non-
degraded broadcast channel Chong14 ; Chong18 . Its capacity region is known
for multiple users with private messages Weingarten06 , and for two users with
a common message GengNair14 . However, a complete characterization of the
broadcast approach requires the full solution of the most general MIMO
broadcast channel with a degraded message set KornaerMarton77 , which is not
yet available (infinite number of realizations, for $H$ with Gaussian
components), and hence suboptimal ranking procedures were considered. Various
degraded message sets and transmission schemes with sub-optimal ranking at the
receiver are studied in ShitzSteiner03 ; AsSh04 ; BustinPaySh13 . Formulation
of the general MIMO broadcasting with degraded message sets and the
optimization of the layering power distribution, which maximizes the expected
rate, is stated in (90)-(92). This optimization problem does not lend itself
to a closed-form solution and remains an open problem for future research. The
framework analyzed in romero2020rate , which uses rate-splitting and binning,
may be useful for the general broadcast problem with degraded message sets. It
is shown in GohariNair2020 that a tight upper bound might be obtained for the
two-user broadcast channel by adding an auxiliary receiver. Generalizing this
work for multiple users may provide an efficient tool for obtaining outer
bounds in general and on the MIMO broadcast approach.
The capacity region of a compound multiple-antenna broadcast channel is
characterized under a particular degradedness order of users in Weingarten09 .
The channel considered there has two users, where each user has a finite set
of possible realizations. This again suggests that there is much room for
further research to fully characterize the broadcast approach for the general
MIMO channel. The majority of contributions discussed so far have considered
Gaussian distribution for transmitted signals. It may be of interest to apply
the broadcast approach to finite-input signals WuXiao2018 , or even binary-
input channels GengNair2013 . This facilitates analyzing more practical
settings and, in turn, obtaining tighter achievable bounds with the broadcast
approach.
Binary-dirty paper coding. DPC has a pivotal role in Gaussian broadcast
transmissions. Owing to its optimality for some settings (e.g., MIMO broadcast
channel Weingarten06 ), an interesting research direction is investigating the
performance or operation gains (e.g., rate and latency) of using DPC instead
of superposition coding in the settings discussed in sections 2-6. From a
broader perspective, binning techniques facilitate DPC to be effective beyond
Gaussian channels. In particular, Marton’s general capacity region relies on
the basic elements of binning Marton1979 , in the context of which the
classical Gelfand-Pinsker GelfandPinsker80 strategy can be interpreted as a
vertex point GelfandPinsker80 . The Gelfand-Pinsker strategy in the Gaussian
domain becomes DPC Costa1983 ; GelfandPinsker80 . The study in Somekh
addresses both binning and superposition coding aspects in a unified
framework. Furthermore, this study also investigates mismatched decoding,
which can account for the imperfect availability of the CSI at the receivers.
It is also noteworthy that throughout the paper, we primarily focused on the
notion of physically degraded channels and rank-ordering them based on their
degradedness. Nevertheless, it is important to investigate less restrictive
settings, such as less-noisy channels KornerMarton1975 ; ElGamal1979 ;
KimElGamal2011 .
Secrecy. When considering the broadcast approach, it is natural to look also
at secrecy in communications. Such an approach not only involves determining
which decoded messages depend on the channel state, but it also involves
determining those that are required to be kept secret Liang12 ; Liang14 ;
ZouLiang15 ; Hyadi . This can be designed as part of the multi-layer broadcast
approach.
Latency. There are various aspects in which delay constraints in
communications may impact the system design, some of which were discussed in
Section 2. There exists significant room for incorporating fixed-to-variable
channel coding and variable-to-variable channel coding in the broadcast
approach. In a way, this is a combination of variable-to-fixed coding
(broadcast approach) and fixed-to-variable coding (that is, Fountain-like
schemes). For example, some applications allow decoding following multiple
independent transmission blocks, as considered in YEH01 , and studied by its
equivalent channel setting, which is the MIMO parallel channel Kfir:IZS2020 .
Queuing theory can be used to analyze the expected achievable latency, as in
AsSh10 . An interesting observation is that layering often offers higher
latency gains than throughput gains. The problem of resource allocation for
delay minimization, even under a simple queue model as in AsSh10 , remains an
open problem for further research. Similarly, a generalization of the queue
model with parallel queues associated with multiple streams, each with a
different arrival random process and a different delay constraint, is an
important direction to investigate.
Connection to I-MMSE. It is well-known that the scalar additive Gaussian noise
channel has the single crossing point property for the MMSE in the estimate of
the input given the channel output. This property also provides an
alternative proof to the capacity region of the scalar two-user Gaussian
broadcast channel Guo13 . This observation is extended to the vector Gaussian
channel BustinPaySh13 via information-theoretic properties on the mutual
information, using the I-MMSE relationship, a fundamental connection between
estimation theory and information theory shown in Guo13 . An interesting
future direction is investigating the impact of I-MMSE relation on the
broadcast approach.
Information Bottleneck. Another interesting setting is the information
bottleneck channel. In this channel model, a wireless block fading channel is
connected to a reliable channel with limited capacity, referred to as the
bottleneck channel AsSh20_Bottleneck ; SteinerShamai2020 . In these studies,
it is assumed that the transmitted signal is Gaussian, which made it possible
to describe the optimal continuous layering power distribution in closed-form
expressions. Extensions beyond Gaussian have both practical and theoretical
significance.
One may consider the bottleneck channel setting, as depicted in Fig. 36, where
the transmitted signal is not necessarily Gaussian. Define the random variable
triplet $x-y-z$ that forms a Markov chain, where $x$ and $y$ are related
according to (553), i.e., $y=x+n$, with $x$ and $n$ being independent random
variables and $n\sim N(0,1)$ real Gaussian with unit variance. The
distribution of the transmitted signal $x$ is subject to optimization, with
SNR$=P\cdot s$, where $P={\mathbb{E}}[x^{2}]$ is the transmission power. The
bottleneck channel output $z$
is a compressed version of $y$ adhering to a limited capacity of the
bottleneck channel $C$, i.e., $I(y;z)\leq C$. It is of interest to maximize
$I(x;z)$, with a maximizing probability that is not necessarily Gaussian, see
for example SAND08 . It is conjectured in Zaidi20 that the optimal
distribution maximizing $I(x;z)$ is discrete. The bottleneck channel may also
consist of multiple independent relays connected through digital links to the
destination, creating a distributed bottleneck. This setting is the CEO
problem with logarithmic loss Aguerri18IZS ; Ugur2018 , and under this
setting, the broadcast approach for multi-access channels Tajer18 becomes
very beneficial. With other loss functions, e.g., MMSE, the problem falls
within source quality via broadcasting. Hence, the distributed bottleneck can
also be viewed as source-channel coding problems with a distortion performance
measure, as discussed in Section 6.3. A model with two relays, known as the
diamond channel, is also interesting and relevant. In the oblivious non-fading
case, the optimal transmission and relay compression, together with joint
decompression at the receiver, are known and characterized in sholomoDiamond19
. For the non-oblivious diamond channel, only upper bounds Michaelsholomo19 ,
and achievable rates of the type discussed in Urbanke19 are available. It may
also be interesting to consider the setting of recent work ZouhairBataineh20
and extend it to the case that no CSIT is available and consider a broadcast
approach strategy for each user. Another possible direction is extending
Karaksik13 to scenarios in which the variable backhaul links capacities
$\\{C_{i}\\}$ are available only at the destination. Adapting the broadcast
MIMO approach for the vector bottleneck channel ShitzSteiner03 ; 6875354 is
another important direction.
Implementation. The actual implementation of the broadcast approach, in
general, is a rich topic for further research. Evidently, as it is mainly
associated with layered decoding, this can be done by a variety of advanced
coding and modulation techniques such as the low-density parity-check (LDPC)
codes and turbo codes. The work in Gong:TCOM2011 considers LDPC
implementation in conjunction with rate-splitting (no CSIT) in the
interference channel, and Barak2008 provides bounds on LDPC codes over an
erasure channel with variable erasures. Polar codes can be directly adopted
for implementing the broadcast approach as their decoding is based on
successive cancellations, and hence they naturally fit in the broadcast
approach. Its efficiency has been demonstrated in the general broadcast
channel Golea15 ; Mondelli14 , and further its ability to work on general
channels without adapting the transmitter to the actual channel Mondelli17
demonstrates the special features that are central to the broadcast approach.
Furthermore, its applicability to multiple description coding Bhatt17 makes it
a natural candidate that can be used for implementing joint source-channel
coding via a broadcast approach. Polar codes may also be used to practically
address the variable-to-variable rate channel coding, as it is suitable for
variable-to-fixed channel coding as well as fixed-to-variable channel coding,
as demonstrated in Li2016 for rateless codes. Power allocation across
different information layers in special cases is investigated in Boyle , and
there is room for further generalizing the results.
Finite blocklength. This paper focuses primarily on the asymptotically long
transmission blocks. It is also essential to analyze the broadcast approach in
the non-asymptotic block length regime. In such regimes, one could compromise
the distribution of rates (asymptotic regime) with second-order descriptions,
or even random coding error exponents, as there is a tradeoff between the
error exponent rate of a finite block and the maximum rate. The practical
aspects of communication under stringent finite blocklength constraints are
discussed in Mary .
Identification via channels. The identification problem introduced in
ahlswede1989 is another case of a state-dependent channel. Its objective is
communicating messages over a channel to select a group of messages at the
receiver. This is in contrast to Shannon’s formulation in which the objective
is selecting one message. Many of the challenges pertinent to state-dependent
channels and the lack of CSIT that appear in Shannon’s formulation are
relevant for the identification problem as well. Recent studies on the
identification via channels without the CSIT include pereg .
Mixed-delay constraints. One major challenge in modern communication systems
is heterogeneity in data type and their different attendant constraints. One
such constraint pertains to latency, where different data types and streams
can face various delay constraints. The broadcast approach investigated for
addressing mixed-delay constraints in the single-user channel
CohenSteinerShamai12 , can be further extended to address this problem in more
complex settings (e.g., soft handoff in cellular systems Nikbakht_2020 and
C-RAN uplink Nikbakht2019 ) while facing the lack of CSIT and in the context
of fixed-to-variable channel coding Verdu10variable-ratechannel and fountain
codes Qureshi .
Source coding. Another application is source coding with successive refinement
where side information at the receiver (Wyner-Ziv) can be different, e.g.,
another communications link that might provide information and its quality is
not known at the transmitter Kaspi94 . Another possible extension is the
combination of successive refinements and broadcast approach Tian08 .
Caching. In cooperative communication, it is common that relay stations
perform data caching ParkSimeone16 ; KarasikSimeone18 , and the transmitter
has no information about what is being cached. This random aspect of the
amount and location (for multiple users) of caching might play an interesting
role in a broadcast approach for such a system.
Algebraic structured codes. The information-theoretic analyses of the networks
reviewed in this paper generally are based on unstructured code design. In
parallel to unstructured codes, there is rich literature on the structured
design of codes with a wide range of applications to multi-terminal
communication (e.g., multiple access and interference channels) and
distributed source coding. A thorough recent overview of algebraic codes is
available in Pradhan:FnT2021 .
Networking. All different settings and scenarios discussed in this article
play important roles in communication networks. As a network’s size and
complexity grow, the users cannot be all provided with the complete and
instantaneous state of the networks. Specifically, in the future wireless
systems (e.g., 6G), cell-based hierarchical network architectures will be
dispensed with Giordani . In such networks, acquiring the CSI at the
transmitters will be impossible, in which case the broadcast approach will be
effective in circumventing the lack of CSIT. Furthermore, network coding can
be incorporated in the broadcast approach, as it can account for latency,
general wireless impediments (e.g., fading), and various network models, e.g.,
the relay, broadcast, interference, and multiple-access channels Lee2010 .
Finally, we highlight that the broadcast approach’s hallmark is that it
enables communication systems to adapt their key communication performance
metrics (e.g., data rate, service latency, and message distortion) to the
actual realizations of the communication channels. Such a feature is
especially important as the size, scale, and complexity of the communication
systems grow, rendering the instantaneous acquisition of channel realizations
at the transmitters costly, if not prohibitive altogether. Adapting
communication to unknown channels is an inherent property of communication
systems in the pre-digital (analog) era, facilitating the mainstream adoption
of broadcasting technologies for distributing audio and video contents. The
broadcast technology instates this property in digital communication systems
as well.
The work of A. Tajer has been supported in part by the U.S. National Science
Foundation under the grant ECCS-1933107. The work of S. Shamai has been
supported by the European Union’s Horizon 2020 Research And Innovation
Programme, grant agreement no. 694630, and the WIN consortium via the Israel
minister of economy and science.
The following abbreviations are used in this manuscript:
AF | amplify-and-forward
---|---
AQF | hybrid amplify-quantize-and-forward
AWGN | additive white Gaussian noise
BCC | broadcasting coherent combining
BIR | broadcasting incremental redundancy
CC | coherent combining
CDF | cumulative distribution function
CDMA | code-division multiple access
CF | compress-and-forward
CSI | channel state information
CSIT | channel state information at the transmitter
DC | delay constrained
DF | decode-and-forward
DoF | degrees of freedom
DS | direct-sequence
DSL | digital subscriber line
FCSI | full CSI
FUU | fraction of undecodable users
HARQ | hybrid automatic retransmission request
HK | Han-Kobayashi
IR | incremental redundancy
LTSC | Long-term static channel
MAC | multi-access channel
MF | matched filter
MIMO | multiple input multiple output
MISO | multiple input single output
MLC | multi-level coding
MMSE | minimum mean squared-error
NDC | non-delay constrained
OAR | outage approach retransmission
PDF | probability density function
PET | priority encoding transmission
QF | quantize-and-forward
RV | random variable
SDF | sequential decode and forward
SIC | successive interference cancellation
SINR | signal to interference and noise ratio
SISO | single input single output
SIMO | single input multiple output
SNR | signal-to-noise ratio
SR | successive refinement
## Appendix A Constants of Theorem 3
$\displaystyle b_{1}(u,v)=\min_{j\in
J_{1}}\left\\{C\big{(}{s_{v}\beta_{uv}\;,\;s_{j}B_{1}(j,u,v)+s_{v}B_{2}(j,u,v)}\big{)}\right\\}\
,$ $\displaystyle
b_{2}(u,v)=C\big{(}{s_{v}\beta_{uv}\;,\;(s_{v}+s_{\ell})B_{3}(u,v)}\big{)}\ ,$
(615) $\displaystyle
b_{3}(u,v)=C\big{(}{2s_{v}\beta_{uv}\;,\;2s_{v}B_{3}(u,v)}\big{)}\ ,$ (616)
$\displaystyle
b_{4}(u,v)=C\big{(}{s_{u}\beta_{vu}\;,\;s_{\ell}B_{4}(u,v)+s_{u}B_{5}(u,v)}\big{)}\
,$ (617) $\displaystyle
b_{5}(u,v)=C\big{(}2s_{v}\beta_{vu}\;,\;2s_{v}B_{3}(u,v)\big{)}\ ,$ (618)
$\displaystyle b_{6}(u,v)=\min_{(j,k)\in
J_{2}}\\{C\big{(}s_{j}\beta_{vu}+s_{k}\beta_{uv}\;,\;s_{j}B_{6}(k,u,v)+s_{k}B_{7}(k,u,v)\big{)}\\}\
,$ $\displaystyle
b_{7}(u,v)=C\big{(}{s_{v}(\beta_{uv}+\beta_{vu}),(s_{v}+s_{\ell})B_{3}(u,v)}\big{)}\
,$ (619) $\displaystyle
b_{8}(u,v)=C\big{(}{2s_{v}(\beta_{uv}+\beta_{vu})\;,\;2s_{v}B_{3}(u,v)}\big{)}\
,$ (620) $\displaystyle b_{9}(u,v)=\min_{(j,k)\in
J_{3}}\\{C\big{(}s_{j}(\beta_{uv}+\beta_{vu})+s_{k}\beta_{uv}\;,\;(s_{j}+s_{k})B_{3}(u,v)\big{)}\\}\
,$ $\displaystyle b_{10}(u,v)=\min_{j,k\in
J_{3}}\\{C\big{(}s_{j}(\beta_{uv}+\beta_{vu})+s_{k}\beta_{vu}\;,\;(s_{j}+s_{k})B_{3}(u,v))\big{)}\\}\
,$ $\displaystyle
b_{11}(u)=C\big{(}{s_{u}\beta_{uu}\;,\;(s_{u}+s_{\ell})B_{8}(u,u)}\big{)}\ ,$
(621) $\displaystyle
b_{12}(u)=C\big{(}{2s_{u}\beta_{uu}\;,\;2s_{u}B_{8}(u,u)}\big{)}\ ,$ (622)
and
$\displaystyle B_{1}(j,u,v)$
$\displaystyle=1-\sum_{n=1}^{j}\sum_{m=1}^{v-1}\beta_{mn}-\sum_{n=1}^{u}\beta_{vn}\
,$ (623) $\displaystyle B_{2}(j,u,v)$
$\displaystyle=1-\sum_{n=1}^{v-1}\sum_{m=1}^{j}\beta_{mn}-\sum_{n=1}^{u}\beta_{nv}\
,$ (624) $\displaystyle B_{3}(u,v)$
$\displaystyle=1-\sum_{n=1}^{v-1}\sum_{m=1}^{v-1}\beta_{mn}-\sum_{n=1}^{u}\beta_{vn}-\sum_{n=1}^{u}\beta_{nv}\
,$ (625) $\displaystyle B_{4}(u,v)$
$\displaystyle=1-\sum_{n=1}^{v-1}\sum_{m=1}^{u}\beta_{mn}-\sum_{n=1}^{u}\beta_{nv}\
,$ (626) $\displaystyle B_{5}(u,v)$
$\displaystyle=1-\sum_{n=1}^{u}\sum_{m=1}^{v-1}\beta_{mn}-\sum_{n=1}^{u}\beta_{vn}\
,$ (627) $\displaystyle B_{6}(j,u,v)$
$\displaystyle=1-\sum_{n=1}^{j}\sum_{m=1}^{v-1}\beta_{mn}-\sum_{n=1}^{u}\beta_{vn}\
,$ (628) $\displaystyle B_{7}(j,u,v)$
$\displaystyle=1-\sum_{n=1}^{j}\sum_{m=1}^{v-1}\beta_{nm}-\sum_{n=1}^{u}\beta_{nv}$
(629) $\displaystyle B_{8}(u,v)$
$\displaystyle=1-\sum_{n=1}^{u}\sum_{m=1}^{v}\beta_{mn}\ .$ (630)
## Appendix B Corner Points in Figure 16
The coordinates of the corner points of Fig. 16 are specified as follows
$\displaystyle{\sf T}:(0,b_{1}),\quad{\sf U}:(b_{2},b_{1}),\quad{\sf
V}:(b_{7},b_{1}),\quad{\sf W}:(b_{3},b_{4}),$ $\displaystyle{\sf
X}:(f_{1},f_{2}),\quad{\sf Y}:(b_{5},b_{6}),\quad{\sf Z}:(b_{5},0),$ (631)
where we have defined
$\displaystyle b_{1}$ $\displaystyle=p_{1}\;C(s_{1},0)+p_{2}\;C(s_{2},0)\,,$ (632)
$\displaystyle b_{2}$ $\displaystyle=q_{1}\;C\left(s_{1},s_{2}\right)+q_{2}\;C\left(s_{2},s_{2}\right)\,,$ (633)
$\displaystyle b_{3}$ $\displaystyle=q_{1}\;\rho_{i^{*}}+q_{2}\;\hat{\rho}_{j^{*}}\,,$ (634)
$\displaystyle b_{4}$ $\displaystyle=p_{1}\;\mu_{i^{*}}+p_{2}\;\hat{\mu}_{j^{*}}\,,$ (635)
$\displaystyle b_{5}$ $\displaystyle=q_{1}\;C(s_{1},0)+q_{2}\;C(s_{2},0)\,,$ (636)
$\displaystyle b_{6}$ $\displaystyle=p_{11}\;C\left(s_{1},s_{1}\right)+p_{12}\;C\left(s_{2},s_{1}\right)+p_{21}\;C\left(s_{1},s_{2}\right)+p_{22}\;C\left(s_{2},s_{2}\right)\,,$ (637)
$\displaystyle b_{7}$ $\displaystyle=p_{11}\;C\left(s_{1},s_{1}\right)+p_{21}\;C\left(s_{2},s_{1}\right)+p_{12}\;C\left(s_{1},s_{2}\right)+p_{22}\;C\left(s_{2},s_{2}\right)\,,$ (638)
$\displaystyle f_{1}$ $\displaystyle=q_{1}\;C\left(s_{1},0\right)+q_{2}\left[C\left(s_{2}\beta^{1}_{12},s_{1}+s_{2}\beta^{1}_{22}\right)+C(s_{2}\beta^{1}_{22},0)\right]\,,$ (639)
$\displaystyle f_{2}$ $\displaystyle=p_{11}\;C(2s_{1},0)+(p_{12}+p_{21})\;C(s_{1}+s_{2},0)+p_{22}\;C(2s_{2},0)-f_{1}\,,$ (640)
and we have defined $i^{*}=\arg\max_{i}\mu_{i}$ and $j^{*}=\arg\max_{j}\hat{\mu}_{j}$ for
$\displaystyle\mu_{1}$ $\displaystyle=p_{1}\;C(s_{1},0)+p_{2}\left[C(s_{1}+s_{2},2s_{1})+C(s_{1},0)\right]\,,$ (641)
$\displaystyle\mu_{2}$ $\displaystyle=p_{1}\left[C(2s_{1},s_{1}+s_{2})+C(s_{2},0)\right]+p_{2}\;C(s_{2},0)\,,$ (642)
$\displaystyle\hat{\mu}_{1}$ $\displaystyle=p_{1}\;C(s_{1},0)+p_{2}\left[C(2s_{2},s_{1}+s_{2})+C(s_{1},0)\right]\,,$ (643)
$\displaystyle\hat{\mu}_{2}$ $\displaystyle=p_{1}\left[C(s_{1}+s_{2},2s_{2})+C(s_{2},0)\right]+p_{2}\;C(s_{2},0)\,,$ (644)
$\displaystyle\rho_{1}$ $\displaystyle=C(s_{1},s_{1})\,,$ (645)
$\displaystyle\rho_{2}$ $\displaystyle=C(s_{1},s_{2})\,,$
$\displaystyle\hat{\rho}_{1}$ $\displaystyle=C(s_{2},s_{1})\,,$ (646)
$\displaystyle\hat{\rho}_{2}$ $\displaystyle=C(s_{2},s_{2})\,.$ (647)
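The corner points are simple linear combinations of evaluations of the rate function $C(\cdot,\cdot)$, with ${\sf W}$ additionally requiring the maximizing indices $i^{*},j^{*}$. A minimal sketch of the computation of $b_{1},\dots,b_{5}$, assuming the illustrative form $C(x,y)=\log_{2}\bigl(1+x/(1+y)\bigr)$ (power $x$ decoded against interference $y$ plus unit noise; this specific form, and all parameter values, are assumptions for illustration, not taken from the text):

```python
import math

def C(x, y):
    # Assumed Gaussian-type rate: power x decoded against interference y.
    return math.log2(1 + x / (1 + y))

def corner_points(p, q, s):
    # p = (p1, p2), q = (q1, q2), s = (s1, s2): illustrative weights and powers.
    p1, p2 = p
    q1, q2 = q
    s1, s2 = s
    # Eqs. (641)-(644): the mu / mu-hat candidates.
    mu = [p1 * C(s1, 0) + p2 * (C(s1 + s2, 2 * s1) + C(s1, 0)),
          p1 * (C(2 * s1, s1 + s2) + C(s2, 0)) + p2 * C(s2, 0)]
    mu_hat = [p1 * C(s1, 0) + p2 * (C(2 * s2, s1 + s2) + C(s1, 0)),
              p1 * (C(s1 + s2, 2 * s2) + C(s2, 0)) + p2 * C(s2, 0)]
    # Eqs. (645)-(647): the rho / rho-hat values.
    rho = [C(s1, s1), C(s1, s2)]
    rho_hat = [C(s2, s1), C(s2, s2)]
    # i* = argmax_i mu_i, j* = argmax_j mu-hat_j.
    i_star = max(range(2), key=lambda i: mu[i])
    j_star = max(range(2), key=lambda j: mu_hat[j])
    return {
        "b1": p1 * C(s1, 0) + p2 * C(s2, 0),            # Eq. (632)
        "b2": q1 * C(s1, s2) + q2 * C(s2, s2),          # Eq. (633)
        "b3": q1 * rho[i_star] + q2 * rho_hat[j_star],  # Eq. (634)
        "b4": p1 * mu[i_star] + p2 * mu_hat[j_star],    # Eq. (635)
        "b5": q1 * C(s1, 0) + q2 * C(s2, 0),            # Eq. (636)
    }

pts = corner_points((0.5, 0.5), (0.5, 0.5), (1.0, 2.0))
print(pts)
```

With equal weights $p=q$, Eqs. (632) and (636) coincide ($b_{1}=b_{5}$), and $b_{2}<b_{1}$ since decoding against interference $s_{2}>0$ can only reduce the rate; $b_{6},b_{7},f_{1},f_{2}$ follow the same pattern from Eqs. (637)-(640) once the $p_{ij}$ and $\beta^{1}_{ij}$ are fixed.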
* (165) P.-H. Lin, E. A. Jorswieck, and R. F. Schaefer, “On ergodic fading Gaussian interference channels with statistical CSIT,” in _Proc. IEEE Information Theory Workshop_ , Cambridge, UK, September 2016, pp. 454–458.
* (166) P.-H. Lin, E. A. Jorswieck, C. R. Janda, M. Mittelbach, and R. F. Schaefer, “On stochastic orders and fading Gaussian multi-user channels with statistical CSIT,” in _Proc. IEEE International Symposium on Information Theory_ , Paris, France, June 2019, pp. 1497–1501.
* (167) J. Sebastian, C. Karakus, S. Diggavi, and I.-H. Wang, “Rate splitting is approximately optimal for fading Gaussian interference channels,” in _Proc. Annual Allerton Conference on Communication, Control and Computing_ , Monticello, IL, September 2015, pp. 315–321.
* (168) J. Sebastian, C. Karakus, and S. Diggavi, “Approximate capacity of fast fading interference channels with no instantaneous CSIT,” _IEEE Transactions on Communications_ , vol. 66, no. 12, pp. 6015 – 6027, December 2018.
* (169) A. Carleial, “Interference channels,” _IEEE Transactions on Information Theory_ , vol. 24, no. 1, pp. 60–70, January 1978.
* (170) I. Sason, “On achievable rate regions for the Gaussian interference channel,” _IEEE Transactions on Information Theory_ , vol. 50, no. 6, pp. 1345–1356, June 2004.
* (171) M. Zohdy, A. Tajer, and S. Shamai (Shitz), “Distributed interference management: A broadcast approach,” _IEEE Transactions on Communications_ , vol. 69, no. 1, pp. 149–163, January 2021.
* (172) C. Gong, O. Abu-Ella, A. Tajer, and X. Wang, “Constrained group decoder for interference channels,” _Journal of Communications, Special Issue on Future Directions in Computing and Networking_ , vol. 7, no. 5, pp. 382–390, May 2012.
* (173) M. Katz and S. Shamai (Shitz), “Transmitting to colocated users in wireless ad hoc and sensor networks,” _IEEE Transactions on Information Theory_ , vol. 51, no. 10, pp. 3540–3563, October 2005.
* (174) ——, “Relaying protocols for two colocated users,” _IEEE Transactions on Information Theory_ , vol. 52, no. 6, pp. 2329–2344, June 2006.
* (175) ——, “On the outage probability of a multiple-input single-output communication link,” _IEEE Transactions on Wireless Communications_ , vol. 6, no. 11, pp. 4120–4128, June 2007.
* (176) ——, “Cooperative schemes for a source and an occasional nearby relay in wireless networks,” _IEEE Transactions on Information Theory_ , vol. 55, no. 11, pp. 5138–5160, November 2009.
* (177) A. Steiner and S. Shamai (Shitz), “Single-user broadcasting protocols over a two-hop relay fading channel,” _IEEE Transactions on Information Theory_ , vol. 52, no. 11, pp. 4821–4838, November 2006.
* (178) A. Steiner, A. Sanderovich, and S. Shamai (Shitz), “Broadcast cooperation strategies for two colocated users,” _IEEE Transactions on Information Theory_ , vol. 53, no. 10, pp. 3394–3412, October 2007.
* (179) E. Braginskiy, A. Steiner, and S. Shamai (Shitz), “Oblivious sequential decode and forward cooperative strategies for the wireless relay channel,” _IEEE Transactions on Communications_ , vol. 60, no. 11, pp. 3228–3238, November 2012.
* (180) M. Zamani and A. K. Khandani, “Broadcast approaches to the diamond channel,” _IEEE Transactions on Information Theory_ , vol. 60, no. 1, pp. 623–642, January 2014.
* (181) O. Simeone, O. Somekh, E. Erkip, H. V. Poor, and S. Shamai (Shitz), “A broadcast approach to robust communications over unreliable multi-relay networks,” in _Proc. IEEE Information Theory and Applications Workshop_ , San Diego, CA, February 2009, pp. 334–340.
* (182) M. Baghani, S. Akhlaghi, and V. Golzadeh, “Average achievable rate of broadcast strategy in relay-assisted block fading channels,” _IET Communications_ , vol. 10, no. 3, pp. 346–355, March 2016.
* (183) S. Akhlaghi and S. A. Khodam Hoseini, “Power allocation strategies in block-fading two-way relay networks,” _Journal of Communication Engineering_ , vol. 8, no. 2, pp. 313–324, 2019.
* (184) M. A. Attia, M. Shaqfeh, K. Seddik, and H. Alnuweiri, “Power optimization for layered transmission over decode-and-forward relay channels,” in _Proc. IEEE International Wireless Communications and Mobile Computing Conference_ , Nicosia, Cyprus, August 2014, pp. 594–599.
* (185) J. N. Laneman, D. N. C. Tse, and G. Wornell, “Cooperative diversity in wireless networks: Efficient protocols and outage behavior,” _IEEE Transactions on Information Theory_ , vol. 50, no. 12, pp. 3062–3080, December 2004.
* (186) D. Gunduz and E. Erkip, “Opportunistic cooperation and power control strategies for delay-limited capacity,” in _Proc. Conference on Information Sciences and Systems_ , Baltimore, MD, March 2005.
* (187) M. Yuksel and E. Erkip, “Diversity gains and clustering in wireless relaying,” in _Proc. IEEE International Symposium on Information Theory_ , Chicago, IL, July 2004.
* (188) J. Boyer, D. D. Falconer, and H. Yanikomeroglu, “On the aggregate SNR of amplified relaying channels,” in _Proc. IEEE Global Communications Conference_ , Dallas, TX, December 2004, pp. 3394–3398.
* (189) Z. Liu, V. Stankovic, and Z. Xiong, “Practical compress-and-forward code design for the half-duplex relay channel,” in _Proc. IEEE Conference on Information Sciences and Systems_ , Baltimore, MD, March 2005.
* (190) A. Wyner and J. Ziv, “The rate-distortion function for source coding with side information at the decoder,” _IEEE Transactions on Information Theory_ , vol. 22, no. 1, pp. 1–10, 1976.
* (191) T. Berger, _Rate Distortion Theory, A Mathematical Basis for Data Compression_. Englewood Cliffs, New Jersey: Prentice-Hall, 1971.
* (192) M. Abramowitz and I. Stegun (Eds.), _Handbook of Mathematical Functions_. National Bureau of Standards, 1964; re-issued by Dover Publications, New York, 1965.
* (193) Y. Steinberg and N. Merhav, “On successive refinement for the Wyner-Ziv problem,” _IEEE Transactions on Information Theory_ , vol. 50, no. 8, pp. 1636–1654, August 2004.
* (194) P. Ishwar, R. Puri, K. Ramchandran, and S. S. Pradhan, “On rate-constrained distributed estimation in unreliable sensor networks,” _IEEE Journal on Selected Areas in Communications_ , vol. 34, no. 4, pp. 765–775, April 2005.
* (195) J. Chen and T. Berger, “Robust distributed source coding,” _IEEE Transactions on Information Theory_ , vol. 54, no. 8, pp. 3385–3398, August 2008\.
* (196) A. Steiner, V. Lupu, U. Katz, and S. Shamai (Shitz), “The spectral efficiency of successive cancellation with linear multiuser detection for randomly spread CDMA,” _IEEE Transactions on Information Theory_ , vol. 58, no. 5, pp. 2850–2873, May 2012.
* (197) J. Kim and S. Park, “Broadcast coding and successive refinement for layered UE cooperation in multi-user downlink,” _IEEE Wireless Communications Letters_ , vol. 9, no. 6, pp. 893–896, June 2020.
* (198) A. Steiner and S. Shamai (Shitz), “Broadcast approach for the information bottleneck channel,” in _Proc. IEEE International Conference on Microwaves, Antennas, Communications and Electronic Systems_ , Tel Aviv, Israel, November 2019.
* (199) ——, “Broadcast approach under information bottleneck capacity uncertainty,” in _Proc. IEEE Information Theory and Applications Workshop_ , San Diego, CA, February 2020.
* (200) A. Steiner and S. Shamai (Shitz), “Broadcast approach for the information bottleneck channel,” _IEEE Transactions on Communications_ , 2021.
* (201) Y. Liang, L. Lai, H. V. Poor, and S. Shamai (Shitz), “A broadcast approach for fading wiretap channels,” _IEEE Transactions on Information Theory_ , vol. 60, no. 2, pp. 842–858, February 2014.
* (202) ——, “An improved broadcast approach for fading wiretap channels,” in _Proc. IEEE International Symposium on Information Theory_ , Cambridge, MA, July 2012.
* (203) A. Tulino, G. Caire, and S. Shamai (Shitz), “Broadcast approach for the sparse-input random-sampled MIMO Gaussian channel,” in _Proc. IEEE International Symposium on Information Theory_ , Honolulu, HI, July 2014, pp. 621–625.
* (204) O. Simeone, O. Somekh, E. Erkip, H. V. Poor, and S. Shamai (Shitz), “Multirelay channel with non-ergodic link failures,” in _Proc. IEEE Information Theory Workshop on Networking and Information Theory_ , Volos, Greece, June 2009, pp. 52–56.
* (205) S.-H. Park, O. Simeone, O. Sahin, and S. Shamai (Shitz), “Multi-layer transmission and hybrid relaying for relay channels with multiple out-of-band relays,” Tech. Rep., 2013.
* (206) S. H. Park, O. Simeone, O. Sahin, and S. Shamai (Shitz), “Robust layered transmission and compression for distributed uplink reception in cloud radio access networks,” _IEEE Transactions on Vehicular Technology_ , vol. 63, no. 1, pp. 204–216, January 2014.
* (207) S.-H. Park, O. Simeone, and S. Shamai (Shitz), “Robust baseband compression against congestion in packet-based fronthaul networks using multiple description coding,” _Entropy_ , vol. 21, no. 4, p. 433, April 2019.
* (208) O. Simeone, O. Somekh, E. Erkip, H. V. Poor, and S. Shamai (Shitz), “Robust communication via decentralized processing with unreliable backhaul links,” _IEEE Transactions on Information Theory_ , vol. 57, no. 7, pp. 4187–4201, July 2011.
* (209) S. Zou, Y. Liang, L. Lai, H. V. Poor, and S. Shamai (Shitz), “Broadcast Networks With Layered Decoding and Layered Secrecy: Theory and Applications,” _Proceedings of the IEEE_ , vol. 103, no. 10, pp. 1841–1856, October 2015.
* (210) ——, “Degraded broadcast channel with secrecy outside a bounded range,” _IEEE Transactions on Information Theory_ , vol. 64, no. 3, pp. 2104–2120, March 2018.
* (211) R. Karasik, O. Simeone, and S. Shamai (Shitz), “Robust uplink communications over fading channels with variable backhaul connectivity,” in _Proc. IEEE International Symposium on Information Theory_ , July 2013, pp. 1172–1176.
* (212) ——, “Robust uplink communications over fading channels with variable backhaul connectivity,” _IEEE Transactions on Wireless Communications_ , vol. 12, no. 11, pp. 5788–5799, November 2013.
* (213) W. Huleihel, N. Merhav, and S. Shamai (Shitz), “On compressive sensing in coding problems: A rigorous approach,” _IEEE Transactions on Information Theory_ , vol. 61, no. 10, pp. 5727–5744, October 2015.
* (214) O. Simeone, E. Erkip, and S. Shamai (Shitz), “Robust transmission and interference management for femtocells with unreliable network access,” _IEEE Journal on Selected Areas in Communications_ , vol. 28, no. 9, pp. 1469–1478, December 2010.
* (215) D. N. C. Tse and S. Hanly, “Linear multiuser receivers: Effective interference, effective bandwidth and user capacity,” _IEEE Transactions on Information Theory_ , vol. 45, no. 2, pp. 641–657, March 1999\.
* (216) S. Verdú and S. Shamai (Shitz), “Spectral efficiency of CDMA with random spreading,” _IEEE Transactions on Information Theory_ , vol. 45, no. 2, pp. 622–640, March 1999.
* (217) S. Shamai (Shitz) and S. Verdú, “The impact of frequency-flat fading on the spectral efficiency of CDMA,” _IEEE Transactions on Information Theory_ , vol. 47, no. 4, pp. 1302–1327, May 2001.
* (218) S. Shamai (Shitz), B. Zaidel, and S. Verdú, “Strongest-users-only detectors for randomly spread CDMA,” in _Proc. IEEE International Symposium on Information Theory_ , Sorrento, Italy, June 2002.
* (219) M. Medard, J. Huang, A. J. Goldsmith, S. P. Meyn, and T. P. Coleman, “Capacity of time-slotted ALOHA packetized multiple-access systems over the AWGN channel,” _IEEE Transactions on Wireless Communications on Wireless communications_ , vol. 3, no. 2, pp. 486–499, March 2004.
* (220) G. Caire, D. Tuninetti, and S. Verdú, “Variable-rate coding for slowly fading Gaussian multiple-access channels,” _IEEE Transactions on Information Theory_ , vol. 50, no. 10, pp. 2271–2292, October 2004.
* (221) V. N. Koshelev, “Hierarchical coding of discrete sources,” _Problemy Peredachi Informatsii_ , vol. 16, no. 3, pp. 31–49, 1980.
* (222) S. Sesia, G. Caire, and G. Vivier, “Lossy transmission over slow-fading AWGN channels: a comparison of progressive and superposition and hybrid approaches,” in _Proc. IEEE International Symposium on Information Theory_ , Adelaide, Australia, September 2005, pp. 224–228.
* (223) F. Etemadi and H. Jafarkhani, “Optimal layered transmission over quasi-static fading channels,” in _Proc. IEEE International Symposium on Information Theory_ , Seattle, WA, July 2006, pp. 1051–1055.
* (224) G. Caire and K. Narayanan, “On the distortion SNR exponent of hybrid digital-analog space-time coding,” _IEEE Transactions on Information Theory_ , vol. 53, no. 8, pp. 2867–2878, August 2007.
* (225) D. Gunduz and E. Erkip, “Source and channel coding for quasi-static fading channels,” in _Proc. Asilomar Conference on Signals, Systems, and Computers_ , Pacific Grove, CA, November 2005, pp. 18–22.
* (226) G. Chechik, A. Globerson, N. Tishby, and Y. Weiss, “Information bottleneck for Gaussian variables,” in _Proc. Advances in Neural Information Processing Systems_ , S. Thrun, L. K. Saul, and B. Schölkopf, Eds., Vancouver, Canada, December 2004, pp. 1213–1220.
* (227) N. Tishby, F. C. Pereira, and W. Bialek, “The information bottleneck method,” in _Proc. Annual Allerton Conference on Communication, Control and Computing_ , Monticello, IL, September 1999, pp. 368–377.
* (228) G. Caire, S. Shamai (Shitz), A. Tulino, S. Verdü, and C. Yapar, “Information bottleneck for an oblivious relay with channel state information: The scalar case,” in _Proc. IEEE International Conference on the Science of Electrical Engineering_ , Eilat, Israel, November 2018.
* (229) M. Zohdy and A. Tajer, “Broadcast approach for the single-user energy harvesting channel,” _IEEE Transactions on Communications_ , vol. 67, no. 5, pp. 3192 – 3204, May 2019.
* (230) X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “Wireless networks with RF energy harvesting: A contemporary survey,” _IEEE Communications Surveys & Tutorials_, vol. 17, no. 2, pp. 757–789, November 2015.
* (231) X. Lu, D. Niyato, P. Wang, D. I. Kim, and Z. Han, “Wireless charger networking for mobile devices: Fundamentals, standards, and applications,” _IEEE Wireless Communications_ , vol. 22, no. 2, pp. 126–135, April 2015.
* (232) K. Z. Panatik, K. Kamardin, S. A. Shariff, S. S. Yuhaniz, N. A. Ahmad, O. M. Yusop, and S. Ismail, “Energy harvesting in wireless sensor networks: A survey,” in _Proc. IEEE International Symposium on Telecommunication Technologies_ , Kuala Lumpur, Malaysia, May 2016, pp. 53–58.
* (233) H. Jabbar, Y. S. Song, and T. T. Jeong, “RF energy harvesting system and circuits for charging of mobile devices,” _IEEE Transactions on Consumer Electronics_ , vol. 56, no. 1, pp. 247–253, March 2010.
* (234) K. Huang, “Spatial throughput of mobile ad hoc networks powered by energy harvesting,” _IEEE Transactions on Information Theory_ , vol. 59, no. 11, pp. 7597–7612, March 2013.
* (235) X. Zhang, H. Jiang, L. Zhang, C. Zhang, Z. Wang, and X. Chen, “An energy-efficient ASIC for wireless body sensor networks in medical applications,” _IEEE Transactions on Biomedical Circuits and Systems_ , vol. 4, no. 1, pp. 11–18, November 2010.
* (236) H. Nishimoto, Y. Kawahara, and T. Asami, “Prototype implementation of ambient RF energy harvesting wireless sensor networks,” in _Proc. IEEE Sensors_ , Kona, HI, January 2010, pp. 1282–1287.
* (237) S. Sudevalayam and P. Kulkarni, “Energy harvesting sensor nodes: Survey and implications,” _IEEE Communications Surveys & Tutorials_, vol. 13, no. 3, pp. 443–461, July 2011.
* (238) A. Tajer, M. Zohdy, and K. Alnajjar, “Resource allocation under sequential resource access,” _IEEE Transactions on Communications_ , vol. 66, no. 11, pp. 5608 – 5620, November 2018.
* (239) H. Romero and M. K. Varanasi, “Rate splitting, superposition coding and binning for groupcasting over the broadcast channel: A general framework,” 2020\.
* (240) A. Gohari and C. Nair, “New outer bounds for the two-receiver broadcast channel,” in _Proc. IEEE International Symposium on Information Theory_ , Los Angeles, CA, June 2020, pp. 1492–1497.
* (241) H. Weingarten, T. Liu, S. Shamai (Shitz), Y. Steinberg, and P. Viswanath, “The capacity region of the degraded multiple-input multiple-output compound broadcast channel,” _IEEE Transactions on Information Theory_ , vol. 55, no. 11, pp. 5011–5023, November 2009.
* (242) Y. Wu, C. Xiao, Z. Ding, X. Gao, and S. Jin, “A Survey on MIMO Transmission With Finite Input Signals: Technical Challenges, Advances, and Future Trends,” _Proceedings of the IEEE_ , vol. 106, no. 10, pp. 1779–1833, October 2018.
* (243) Y. Geng, C. Nair, S. Shamai (Shitz), and Z. V. Wang, “On broadcast channels with binary inputs and symmetric outputs,” _IEEE Transactions on Information Theory_ , vol. 59, no. 11, pp. 6980–6989, November 2013.
* (244) K. Marton, “A coding theorem for the discrete memoryless broadcast channel,” _IEEE Transactions on Information Theory_ , vol. 25, no. 3, pp. 306–311, May 1979.
* (245) S. I. Gelfand and M. S. Pinsker, “Coding for channel with random parameters,” _Problems of Control Theory_ , vol. 9, no. 1, pp. 19–31, 1980.
* (246) A. Somekh-Baruch, “On achievable rates and error exponents for channels with mismatched decoding,” _IEEE Transactions on Information Theory_ , vol. 61, no. 2, pp. 727–740, February 2015.
* (247) J. Körner and K. Marton, “A source network problem involving the comparison of two channels,” _Transactions on Colloquium Information Theory_ , August 1975.
* (248) A. E. Gamal, “The capacity of a class of broadcast channels,” _IEEE Transactions on Information Theory_ , vol. 25, no. 2, pp. 166–169, May 1979.
* (249) A. Hyadi, Z. Rezki, and M.-S. Alouini, “An overview of physical layer security in wireless communication systems with CSIT uncertainty,” _IEEE Access_ , vol. 4, pp. 6121 – 6132, September 2016.
* (250) D. Guo, S. Shamai (Shitz), and S. Verdú, “The interplay between information and estimation measures,” _Foundations and Trends in Signal Processing_ , vol. 6, no. 4, pp. 243–429, November 2013.
* (251) A. Sanderovich, S. Shamai (Shitz), Y. Steinberg, and G. Kramer, “Communication via decentralized processing,” _IEEE Transactions on Information Theory_ , vol. 54, no. 7, pp. 3008–3023, July 2008.
* (252) A. Zaidi and I. Estella-Aguerri, “On the information bottleneck problems: Models, connections, applications and information theoretic views,” _Entropy_ , vol. 22, p. 151, 01 2020.
* (253) I. E. Aguerri and A. Zaidi, “Distributed information bottleneck method for discrete and Gaussian sources,” in _Proc. IEEE International Zurich Seminar on Information and Communication_ , Zurich, Switzerland, February 2018\.
* (254) Y. Ugur, I. E. Aguerri, and A. Zaidi, “Vector Gaussian CEO problem under logarithmic loss,” in _Proc. IEEE Information Theory Workshop_ , Guangzhou, China, November 2018.
* (255) I. E. Aguerri, A. Zaidi, G. Caire, and S. Shamai (Shitz), “On the capacity of cloud radio access networks with oblivious relaying,” in _Proc. IEEE International Symposium on Information Theory_ , June 2017, pp. 2068–2072.
* (256) X. Wu, A. Ozgur, M. Peleg, and S. Shamai (Shitz), “New upper bounds on the capacity of primitive diamond relay channels,” in _Proc. IEEE Information Theory Workshop_ , Gotland, Sweden, August 2019.
* (257) M. Mondelli, S. H. Hassani, and R. Urbanke, “A new coding paradigm for the primitive relay channel,” _Algorithms_ , vol. 12, no. 10, p. 218, October 2019.
* (258) Z. Al-qudah, M. Al Bataineh, and A. Musa, “A novel multiple access diamond channel model,” _International Journal of Communication Systems_ , vol. 33, no. 17, November 2020.
* (259) A. Winkelbauer, S. Farthofer, and G. Matz, “The rate-information trade-off for Gaussian vector channels,” in _Proc. IEEE International Symposium on Information Theory_ , Honolulu, HI, June 2014, pp. 2849–2853.
* (260) C. Gong, A. Tajer, and X. Wang, “Interference channels with partial group decoding,” _IEEE Transactions on Communications_ , vol. 59, no. 11, pp. 3059 – 3071, November 2011.
* (261) O. Barak, U. Erez, and D. Burshtein, “Bounds on rates of LDPC codes for BEC with varying erasure rate,” in _Proc. IEEE International Symposium on Information Theory_ , Toronto, Canada, July 2008, pp. 1133–1137.
* (262) N. Goela, E. Abbe, and M. Gastpar, “Polar codes for broadcast channels,” _IEEE Transactions on Information Theory_ , vol. 61, no. 2, pp. 758–782, February 2015.
* (263) M. Mondelli, S. H. Hassani, R. Urbanke, and I. Sason, “Achieving marton’s region for broadcast channels using polar codes,” in _Proc. IEEE International Symposium on Information Theory_ , Honolulu, HI, July 2014, pp. 306–310.
* (264) M. Mondelli, S. H. Hassani, I. Maric, D. Hui, and S. Hong, “Capacity-achieving rate-compatible polar codes for general channels,” in _Proc. IEEE Wireless Communications and Networking Conference Workshops_ , San Francisco, CA, March 2017.
* (265) A. Bhatt, N. Ghaddar, and L. Wang, “Polar coding for multiple descriptions using monotone chain rules,” in _Proc. Annual Allerton Conference on Communication, Control and Computing_ , Monticello, IL, September 2017, pp. 565–571.
* (266) B. Li, D. N. C. Tse, K. Chen, and H. Shen, “Capacity-achieving rateless polar codes,” in _Proc. IEEE International Symposium on Information Theory_ , Barcelona, Spain, July 2016.
* (267) B. D. Boyle, J. M. Walsh, and S. Weber, “Channel dependent adaptive modulation and coding without channel state information at the transmitter,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal ProcessingProcessing_ , Vancouver, Canada, May 2013.
* (268) P. Mary, J.-M. Gorce, A. Unsal, and H. V. Poor, “Finite blocklength information theory: What is the practical impact on wireless communications?” in _Proc. IEEE Global Communications Conference (Workshops)_ , Washington, DC, December 2016.
* (269) R. Ahlswede and G. Dueck, “Identification via channels,” _IEEE Transactions on Information Theory_ , vol. 35, no. 1, pp. 15–29, January 1989\.
* (270) U. Pereg, H. Boche, and C. Deppe, “Deterministic identification over fading channels,” _arXiv_ , 2020.
* (271) J. Qureshi, C. Heng Foh, and J. Cai, “Primer and recent developments on fountain codes,” _Recent Advances in Communications and Networking Technology_ , vol. 2, no. 1, pp. 2–11, July 2013.
* (272) A. H. Kaspi, “Rate-distortion function when side-information may be present at the decoder,” _IEEE Transactions on Information Theory_ , vol. 40, no. 6, pp. 2031–2034, June 1994.
* (273) S. Park, O. Simeone, and S. Shamai, “Joint optimization of cloud and edge processing for fog radio access networks,” in _Proc. IEEE International Symposium on Information Theory_ , Barcelona, Spain, July 2016.
* (274) R. Karasik, O. Simeone, and S. Shamai, “Fundamental latency limits for D2D-aided content delivery in fog wireless networks,” in _Proc. IEEE International Symposium on Information Theory_ , Vail, CO, June 2018.
* (275) S. S. Pradhan, A. Padakandla, and F. Shirani, “An Algebraic and Probabilistic Framework for Network Information Theory,” _Foundations and Trend in Communications and Information Theory_ , vol. 18, no. 2, pp. 173–379, 2020.
* (276) M. Giordani, M. Polese, M. Mezzavilla, S. Rangan, and M. Zorzi, “Toward 6G networks: Use cases and technologies,” _IEEE Communications Magazine_ , vol. 58, no. 3, pp. 55–61, March 2020.
* (277) H.-N. Lee, S.-Y. Chung, C. Fragouli, and Z.-H. Mao, “Network coding for wireless networks,” _EURASIP Journal on Wireless Communications and Networking_ , vol. 2010.
|
monthyeardate[],
# Formal FT-based Cause-Consequence Reliability Analysis using Theorem
Proving
Mohamed Abdelghany and Sofiène Tahar
Department of Electrical and Computer Engineering,
Concordia University, Montréal, QC, Canada
<EMAIL_ADDRESS>
TECHNICAL REPORT
(January 2021)
###### Abstract
The Cause-Consequence Diagram (CCD) is widely used as a deductive safety analysis
technique for decision-making at the critical-system design stage. This
approach models the causes of subsystem failures in a highly-critical system
and their potential consequences using the Fault Tree (FT) and Event Tree (ET)
methods, which are well-known dependability modeling techniques. Paper-and-
pencil-based approaches and simulation tools, such as the Monte-Carlo
approach, are commonly used to carry out CCD analysis, but lack the ability to
rigorously verify essential system reliability properties. In this work, we
propose to use formal techniques based on theorem proving for the formal
modeling and step-by-step analysis of CCDs, replacing error-prone informal
reasoning with mathematical proofs and thereby overcoming the inaccuracies of
simulation-based analysis. In particular, we use the HOL4 theorem prover, which is a
computer-based mathematical reasoning tool. To this end, we developed a
formalization of CCDs in Higher-Order Logic (HOL), based on the algebraic
approach, using HOL4. We demonstrate the practical effectiveness of the
proposed CCD formalization by performing the formal reliability analysis of
the IEEE 39-bus electrical power network. Also, we formally determine the
Forced Outage Rate ($\mathcal{FOR}$) of the power generation units and the
network reliability index, i.e., System Average Interruption Duration Index
($\mathcal{SAIDI}$). To assess the accuracy of our proposed approach, we
compare our results with those obtained with MATLAB Monte-Carlo Simulation
(MCS) as well as other state-of-the-art approaches for subsystem-level
reliability analysis.
Keywords— Cause-Consequence Diagram, Event Tree, Fault Tree, Reliability
Analysis, Safety, Formal Methods, Theorem Proving, HOL4, Monte-Carlo, FMECA,
Electrical Power Network, FOR, SAIDI.
## 1 Introduction
Nowadays, in many safety-critical systems, which are prevalent, e.g., in smart
grids [1] and the automotive industry [2], a catastrophic accident may happen
due to the coincidence of sudden events and/or failures of specific subsystem
components. Such undesirable accidents may result in loss of profits and
sometimes severe fatalities. Therefore, the central question in many critical
systems, where safety is of the utmost importance, is to identify the possible
consequences for the entire system given that one or more components could
fail at the subsystem level. For that purpose, the main task for safety design
engineers is to perform a detailed Cause-Consequence Diagram (CCD) [3]
reliability analysis for identifying the subsystem events that prevent the
entire system from functioning as desired. This approach models the causes of
component failures and their consequences on the entire system using Fault
Tree (FT) [4] and Event Tree (ET) [5] dependability modeling techniques.
FTs mainly provide a graphical model for analyzing the factors causing a
system failure upon their occurrence. FTs are generally classified into two
categories: Static Fault Trees (SFT) and Dynamic Fault Trees (DFT) [6]. SFTs
and DFTs allow safety-analysts to capture the static/dynamic failure
characteristics of systems in a very effective manner using logic-gates, such
as OR, AND, NOT, Priority-AND (PAND) and SPare (SP) [4]. However, the FT
technique is incapable of identifying the possible consequences resulting from
an undesirable failure on the entire system. ETs, in turn, provide risk
analysis with all possible system-level operating states, i.e., success and
failure, exactly one of which can occur in any given scenario [5]. However, in
isolation each of these modeling techniques is limited to analyzing either the
causes of a critical-system failure (FTs) or the cascading dependencies of
system-level components (ETs).
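As a rough illustration of the probabilistic semantics behind static FT gates, the failure probability of an OR gate and an AND gate can be computed from the probabilities of their inputs, assuming statistically independent basic events. The probabilities below are hypothetical, chosen only for the example; they do not come from the case study in this report.

```python
# Illustrative evaluation of static fault tree (SFT) gates, assuming
# statistically independent basic events with hypothetical probabilities.

def or_gate(probs):
    """An OR gate fails if at least one input fails: 1 - prod(1 - p_i)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(probs):
    """An AND gate fails only if all inputs fail: prod(p_i)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# A small SFT whose top event is (A AND B) OR C:
p_top = or_gate([and_gate([0.1, 0.2]), 0.05])
print(round(p_top, 4))  # 0.069
```

The same compositional idea, expressed as generic verified expressions rather than floating-point arithmetic, underlies the algebraic FT formalization used later in this report.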
There exist some techniques that have been developed for subsystem-level
reliability analysis of safety-critical systems. For instance, Papadopoulos et
al. in [7] have developed a software tool called HiP-HOPS (Hierarchically
Performed Hazard Origin & Propagation Studies) [8] for subsystem-level failure
analysis to overcome classical manual failure analysis of complex systems and
prevent human errors. HiP-HOPS can automatically generate the subsystem-level
FT and perform Failure Modes, Effects, and Criticality Analysis (FMECA) from a
given system model, where each system component is associated with its failure
rate or failure probability [7]. Currently, HiP-HOPS lacks the modeling of
multi-state system components and also cannot provide generic mathematical
expressions that can be used to predict the reliability of a critical-system
based on any probabilistic distribution [9]. Similarly, Jahanian in [10] has
proposed a new technique called Failure Mode Reasoning (FMR) for identifying
and quantifying the failure modes for safety-critical systems at the subsystem
level. However, according to Jahanian [11], the soundness of the FMR approach
needs to be proven mathematically.
On the other hand, CCD analysis typically uses FTs to analyze failures at the
subsystem or component level combined with an ET diagram to integrate their
cascading failure dependencies at the system level. CCDs are categorized into
two general methods for the ET linking process with the FTs [12]: (1) Small ET
diagram and large subsystem-level FT; (2) Large ET diagram and small
subsystem-level FT. The former one with small ET and large subsystem-level FT
is the most commonly used for the probabilistic safety assessment of
industrial applications (e.g., in [13]). There are four main steps involved in
the CCD analysis [14]: (1) Component failure events: identify the causes of
each component failure associated with their different modes of operations;
(2) Construction of a complete CCD: construct a CCD model using its basic
blocks, i.e., Decision box, Consequence path and Consequence box; (3)
Reduction: removal of unnecessary decision boxes based on the system
functional behavior to obtain a minimal CCD; and lastly (4) Probabilistic
analysis: evaluating the probabilities of CCD paths describing the occurrence
of a sequence of events.
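Step (4) can be sketched numerically: the probability of one CCD consequence path is the product, over the decision boxes along that path, of the probability of the outcome taken at each box (the failure probability from the box's FT for a "NO" branch, its complement for a "YES" branch), assuming independent subsystem failures. The function and probabilities below are hypothetical illustrations, not the formalization developed in this report.

```python
# Sketch of CCD step (4), probabilistic analysis: the probability of a
# consequence path is the product of its decision-box outcome probabilities.
# The per-box failure probabilities would come from subsystem-level FTs;
# here they are hypothetical, and boxes are assumed independent.

def path_probability(path):
    """path: list of (p_fail, outcome) pairs, outcome in {'success', 'failure'}."""
    prob = 1.0
    for p_fail, outcome in path:
        prob *= p_fail if outcome == 'failure' else (1.0 - p_fail)
    return prob

# Two decision boxes: subsystem 1 operates (p_fail = 0.1),
# then subsystem 2 fails (p_fail = 0.2):
print(round(path_probability([(0.1, 'success'), (0.2, 'failure')]), 3))  # 0.18
```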
Traditionally, CCD subsystem-level reliability analysis is carried out by
using paper-and-pencil-based approaches to analyze safety-critical systems,
such as high-integrity protection systems (HIPS) [14] and nuclear power plants
[15], or using computer simulation tools based on Monte-Carlo approach, as in
[16]. A major limitation in both of the above approaches is the possibility of
introducing inaccuracies in the CCD analysis either due to human fallibility
or the approximation errors due to numerical methods and pseudo-random numbers
in the simulation tools. Moreover, simulation tools do not provide the
mathematical expressions that can be used to predict the reliability of a
given system based on any probabilistic distributions and failure rates.
A safer way is to replace the error-prone informal reasoning of CCD analysis
with formal, generic mathematical proofs, as recommended by safety standards
such as IEC 61850 [17], EN 50128 [18] and ISO 26262 [19]. In this work, we
propose to use formal techniques based on theorem proving for the formal
CCD-based reliability analysis of safety-critical systems, which gives us the
ability to obtain a verified subsystem-level failure/operating
consequence expression. Theorem proving is a formal verification technique
[20], which is used for conducting the proof of mathematical theorems based on
a computerized proof tool. In particular, we use HOL4 [21], which is an
interactive theorem prover with the ability of verifying a wide range of
mathematical expressions constructed in higher-order logic (HOL). For this
purpose, we endeavor to formalize the above-mentioned four steps of CCD
analysis using HOL4 proof assistant. To demonstrate the practical
effectiveness of the proposed CCD formalization, we conduct the formal CCD
analysis of an IEEE 39-bus electrical power network system. Subsequently, we
formally determine a commonly used metric, namely Forced Outage Rate
($\mathcal{FOR}$), which determines the capacity outage or unavailability of
the power generation units [22]. Also, we evaluate the System Average
Interruption Duration Index ($\mathcal{SAIDI}$), which describes the average
duration of interruptions for each customer in a power network [22].
The main contributions of the work we describe in this report can be
summarized as follows:
* $\bullet$
Formalization of the CCD basic constructors, such as Decision box, Consequence
path and Consequence box, that can be used to build an arbitrary level of CCDs
* $\bullet$
Enabling the formal reduction of CCDs that can remove unnecessary decision
boxes from a given CCD model, a feature not available in other existing
approaches
* $\bullet$
Providing reasoning support for the formal probabilistic analysis of scalable
CCD consequence paths with newly proposed mathematical formulations
* $\bullet$
Application on a real-world IEEE 39-bus electrical power network system and
verification of its reliability indexes $\mathcal{FOR}$ and $\mathcal{SAIDI}$
* $\bullet$
Development of a Standard ML (SML) function that can numerically compute
reliability values from the verified expressions of $\mathcal{FOR}$ and
$\mathcal{SAIDI}$
* $\bullet$
Comparison of our formal CCD reliability assessment with the corresponding
results obtained from MATLAB MCS and other notable approaches
The rest of the report is organized as follows: In Section 2, we present the
related literature review. In Section 3, we describe the preliminaries to
facilitate the understanding of the rest of the report. Section 4 presents the
proposed formalization of CCD and its formal probabilistic properties. In
Section 5, we describe the formal CCD analysis of an electrical network system
and the evaluation of its reliability indices $\mathcal{FOR}$ and
$\mathcal{SAIDI}$. Lastly, Section 6 concludes the report.
## 2 Related Work
Only a few works have previously considered using formal techniques [20] to
model and analyze CCDs. For instance, Ortmeier et al. in [23] developed a
framework for Deductive Cause-Consequence Analysis (DCCA) using the SMV model
checker [24] to verify the CCD proof obligations. However, according to the
authors [23], showing the completeness of DCCA is problematic because the
number of proof obligations grows exponentially for complex systems, requiring
cumbersome proof efforts. To overcome the above-mentioned limitations, a
more practical way is to verify generic mathematical formulations that can
perform $\mathcal{N}$-level CCD reliability analysis for real-world systems
within a sound environment. Higher-Order-Logic (HOL) [25] is a good candidate
formalism for achieving this goal.
Prior to our work, there were two notable projects for building frameworks to
formally analyze dependability models using HOL4 theorem proving [21]. For
instance, HOL4 has been previously used by Ahmad et al. in [26] to formalize
SFTs. The SFT formalization includes a new datatype consisting of AND, OR and
NOT FT gates [4] to analyze the factors causing a static system failure.
Furthermore, Elderhalli et al. in [27] had formalized DFTs in the HOL4 theorem
prover, which can be used to conduct formal dynamic failure analysis.
Similarly, we have defined in [28] a new EVENT_TREE datatype to model and
analyze all possible system-level success and failure relationships. However,
each of these formalizations targets either the static/dynamic failure of a
system or the cascading dependencies of system-level components, but not both.
In contrast, CCDs have the superior capability to
use SFTs/DFTs for analyzing the static/dynamic failures at the subsystem level
and analyze their cascading dependencies at the system-level using ETs. For
that purpose, in this work, we provide new formulations that can model
mathematically the graphical diagrams of CCDs and perform the subsystem-level
reliability analysis of highly-critical systems. Moreover, our proposed new
mathematics provides the modeling of multi-state system components and is
based on any given probabilistic distribution and failure rates, which makes
our proposed work the first of its kind. In order to check the correctness of
the proposed equations, we verified them within the sound environment of HOL4.
## 3 Preliminaries
In this section, we briefly summarize the fundamentals of the HOL4 theorem
proving approach and existing FT and ET formalizations in HOL4 to facilitate
the reader’s understanding of the rest of the report.
### 3.1 HOL4 Theorem Proving
Theorem proving [20] is one of the formal verification techniques that use a
computerized proof system for conducting the proof of mathematical theorems.
HOL4 [21] is an interactive theorem prover, which is capable of verifying a
wide range of safety-critical systems as well as mathematical expressions
constructed in HOL. In general, given a safety-critical system to be formally
analyzed, we first model its structure mathematically, then using the HOL4
theorem prover, several properties of the system can be verified based on this
mathematical model. The main characteristic of the HOL4 theorem prover is that
its core consists only of four axioms and eight inference rules. Any further
proof or theorem should be formally verified based on these axioms and rules
or based on previously proven theorems. This ensures the soundness of the
system model analysis, i.e., no wrong proof goal can be proved. Moreover,
since the system properties are proven mathematically within HOL4, no
approximation is involved in the analysis results. These features make HOL4
suitable for carrying out the CCD-based reliability analysis of safety-
critical systems that require sound verification results. Table 1 provides the
HOL4 symbols and functions that we will use in this report.
Table 1: HOL4 Symbols and Functions

HOL4 Symbol | Standard | Meaning
---|---|---
{x $|$ P(x)} | {$\lambda x$. $P(x)$} | Set of all $x$ such that $P(x)$
h :: L | $cons$ | Add an element $h$ to a list L
MAP ($\lambda$x. f(x)) X | x $\in$ X $\to$ ($\lambda$x. f) | Function that maps each element x in the list X to f(x)
$\mathrm{L}_{1}$ $++$ $\mathrm{L}_{2}$ | $append$ | Joins lists $\mathrm{L}_{1}$ and $\mathrm{L}_{2}$ together
### 3.2 Probability Theory in HOL4
A measure space is defined mathematically as a triple ($\Omega$, $\Sigma$,
$\mu$), where $\Omega$ represents the sample space, $\Sigma$ represents a
$\sigma$-algebra of subsets of $\Omega$, and $\mu$ represents a measure with
the domain $\Sigma$. A probability space is a measure space ($\Omega$,
$\Sigma$, Pr), where $\Omega$ is the complete sample space, $\Sigma$ is
the corresponding event space containing all the events of interest, and Pr is
a probability measure that assigns measure 1 to the sample space, i.e.,
Pr($\Omega$) = 1. The HOL4 theorem prover has
a rich library of probabilities, including the functions p_space, events, and
prob. Given a probability space p, these functions return the corresponding
$\Omega$, $\Sigma$, and Pr, respectively. The Cumulative Distribution Function
(CDF) is defined as the probability of the event where a random variable X has
a value less than or equal to a value t, i.e., $\mathcal{P}(X\leq t)$. This
definition has been formalized in HOL4 as [29]:
$\vdash$ CDF p X t = distribution p X {y | y $\leq$ t}
where the function distribution takes three inputs: (i) probability space p;
(ii) random variable X; and (iii) set of real numbers, then returns the
probability of the variable X acquiring all the values of the given set in
probability space p.
### 3.3 FT Formalization
Fault Tree (FT) analysis [4] is one of the commonly used reliability
assessment techniques for safety-critical systems. It mainly provides a schematic
diagram for analyzing undesired top events, which can cause complete system
failure upon their occurrence. An FT model is represented by logic-gates, like
OR, AND and NOT, where an OR gate models the failure of the output if any of
the input failure events occurs alone, while an AND gate models the failure of
the output if all of the input failure events occur at the same time, and
lastly a NOT gate models the complement of the input failure event. Ahmad et
al. [26] presented the FT formalization by defining a new datatype gate, in
HOL4 as:
Hol_datatype gate = AND of (gate list) |
OR of (gate list) |
NOT of (gate) |
atomic of (event)
The FT constructors AND and OR are recursive functions on gate-typed lists,
while the FT constructor NOT operates on a gate-type variable. A semantic
function is then defined over the gate datatype that can yield an FT diagram
as:
Definition 1:
$\vdash$ FTree p (atomic X) = X $\wedge$
FTree p (OR (h::t)) = FTree p h $\cup$ FTree p (OR t) $\wedge$
FTree p (AND (h::t)) = FTree p h $\cap$ FTree p (AND t) $\wedge$
FTree p (NOT X) = p_space p DIFF FTree p X
The function FTree takes an event X, identified by a type constructor atomic,
and returns the given event X. If the function FTree takes a list of type
gate, identified by a type constructor OR, then it returns the union of all
elements after applying the function FTree on each element of the given list.
Similarly, if the function FTree takes a list of type gate, identified by a
type constructor AND, then it performs the intersection of all elements after
applying the function FTree on each element of the given list. For the NOT
type constructor, the function FTree returns the complement of the failure
event obtained from the function FTree.
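The set-theoretic semantics of Definition 1 can be mirrored by a small Python sketch, treating events as finite sets of outcomes. This is an informal illustration with our own helper names, not part of the HOL4 development:

```python
# Informal set-based mirror of Definition 1: an FT gate maps a list of
# failure events (sets of outcomes) to the output failure event.
def ft_or(events):
    """OR gate: output fails if any input failure event occurs (union)."""
    out = set()
    for e in events:
        out |= e
    return out

def ft_and(events):
    """AND gate: output fails only if all inputs fail (intersection)."""
    out = set(events[0])
    for e in events[1:]:
        out &= e
    return out

def ft_not(space, event):
    """NOT gate: complement of the failure event in the sample space."""
    return space - event

space = {1, 2, 3, 4}
A, B = {1, 2}, {2, 3}
print(ft_or([A, B]))     # union of A and B
print(ft_and([A, B]))    # intersection of A and B
print(ft_not(space, A))  # complement of A
```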
Table 2: FT HOL4 Probabilistic Theorems

FT Gate | Probabilistic Theorem
---|---
AND | prob p (FTree p (AND $\mathrm{F}_{\mathcal{N}}$)) = $\prod$ (PROB_LIST p $\mathrm{F}_{\mathcal{N}}$)
OR | prob p (FTree p (OR $\mathrm{F}_{\mathcal{N}}$)) = 1 - $\prod$ (PROB_LIST p (COMPL_LIST p $\mathrm{F}_{\mathcal{N}}$))
The formal verification in HOL4 for the failure probabilistic expressions of
the above-mentioned FT gates is presented in Table 2 [26]. These expressions
are verified under the following constraints: (a) $F_{\mathcal{N}}$ $\in$
events p ensures that all associated failure events in the given list
$F_{\mathcal{N}}$ are drawn from the events space p; (b) prob_space p ensures
that p is a valid probability space; and lastly (c) MUTUAL_INDEP p
$F_{\mathcal{N}}$ ensures the independence of the failure events in the given
list $F_{\mathcal{N}}$. The function $\prod$ takes a list and returns the
product of the list elements while the function PROB_LIST returns a list of
probabilities associated with the elements of the list. The function
COMPL_LIST returns the complement of the given list elements.
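Under the mutual-independence assumption, the two verified expressions in Table 2 reduce to elementary products, as the following Python sketch illustrates (plain floats stand in for the formal prob function; function names are ours):

```python
from math import prod

def prob_and(failure_probs):
    """AND gate: product of the input failure probabilities."""
    return prod(failure_probs)

def prob_or(failure_probs):
    """OR gate: 1 minus the product of the complement probabilities."""
    return 1 - prod(1 - q for q in failure_probs)

ps = [0.1, 0.2]
print(prob_and(ps))  # 0.1 * 0.2
print(prob_or(ps))   # 1 - 0.9 * 0.8
```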
### 3.4 ET Formalization
Event Tree (ET) [5] analysis is a widely used technique to enumerate all
possible combinations of system-level components failure and success states in
the form of a tree structure. An ET diagram starts by an initiating event
called Node and then all possible scenarios of an event that can occur in the
system are drawn as Branches. ETs were formally modeled by using a new
recursive datatype EVENT_TREE, in HOL4 as [28]:
Hol_datatype EVENT_TREE = ATOMIC of (event) |
NODE of (EVENT_TREE list) |
BRANCH of (event) (EVENT_TREE list)
The type constructors NODE and BRANCH are recursive functions on EVENT_TREE-
typed lists. A semantic function is then defined over the EVENT_TREE datatype
that can yield a corresponding ET diagram as:
Definition 2:
$\vdash$ ETREE (ATOMIC X) = X $\wedge$
ETREE (NODE (h::L)) = ETREE h $\cup$ (ETREE (NODE L)) $\wedge$
ETREE (BRANCH X (h::L)) = X $\cap$ (ETREE h $\cup$ ETREE (BRANCH X L))
Table 3: ET HOL4 Probabilistic Theorems

ET Constructor | Probabilistic Theorem
---|---
NODE | prob p (ETREE (NODE $\mathcal{X}_{\mathcal{N}}$)) = $\sum_{\mathcal{P}}$ p $\mathcal{X}_{\mathcal{N}}$
BRANCH | prob p (ETREE (BRANCH Y $\mathcal{Z}_{\mathcal{N}}$)) = (prob p Y) $\times$ $\sum_{\mathcal{P}}$ p $\mathcal{Z}_{\mathcal{N}}$
The function ETREE takes an event X, identified by a type constructor ATOMIC
and returns the event X. If the function ETREE takes a list of type
EVENT_TREE, identified by a type constructor NODE, then it returns the union
of all elements after applying the function ETREE on each element of the list.
Similarly, if the function ETREE takes an event X and a list of type
EVENT_TREE, identified by a type constructor BRANCH, then it performs the
intersection of the event X with the union of the head of the list after
applying the function ETREE and the recursive call of the BRANCH constructor.
For the formal probabilistic assessment of each path occurrence in the ET
diagram, HOL4 probabilistic properties for NODE and BRANCH ET constructors are
presented in Table 3 [28]. These expressions are formally verified under the
same FT constraints, i.e., $\mathcal{X}_{\mathcal{N}}$ $\in$ events p,
prob_space p and MUTUAL_INDEP p $\mathcal{X}_{\mathcal{N}}$. The function
$\sum_{\mathcal{P}}$ is defined to sum the probabilities of events for a list.
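For disjoint ET paths, the NODE and BRANCH properties in Table 3 reduce to simple sums and products, sketched below in Python (illustrative helpers only, with floats standing in for the formal prob function):

```python
def prob_node(path_probs):
    """NODE: probability of the union of disjoint ET paths (a sum)."""
    return sum(path_probs)

def prob_branch(p_event, path_probs):
    """BRANCH: conditioning event probability times the sum of its paths."""
    return p_event * sum(path_probs)

print(prob_node([0.2, 0.3]))         # 0.2 + 0.3
print(prob_branch(0.5, [0.2, 0.3]))  # 0.5 * (0.2 + 0.3)
```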
## 4 Cause-Consequence Diagrams
The Cause-Consequence Diagram (CCD) [15] has been developed to analyze the causes
of undesired subsystem failure events, using FT analysis, and from these
events obtain all possible consequences on the entire system, using ET
analysis [30]. The description of the CCD basic constructors is illustrated
in Table 4 [14]. CCD analysis is mainly divided into two categories [31]: (1)
Type I that combines SFT and ET, as shown in Fig. 1 and Table 5 [12]; and (2)
Type II that combines DFT and ET without shared events in different
subsystems, as shown in Fig. 2 and Table 6 [12]. In this analysis, we focus on
the CCD-based reliability analysis at the subsystem level of Type I.
Table 4: CCD Symbols and Functions

CCD Symbol | Function
---|---
| Decision Box: represents the functionality of a component. (1) NO Box:
describes the component or subsystem failure behavior. A FT of the component
is connected to this box that can be used to determine the failure probability
($\mathcal{P}_{F}$) (2) YES Box: represents the correct functioning of the
component or reliability, which can be calculated by simply taking the
complement of the failure probability determined in the NO Box, i.e., 1 -
$\mathcal{P}_{F}$
| Consequence Path: models the next possible consequence scenarios due to a
particular event
| Consequence Box: models the outcome event due to a particular sequence of
events
Figure 1: CCD Analysis Type A

Table 5: SFT Symbols and Functions

SFT Symbol | Function
---|---
| AND Gate: models the failure of the output if all of the input failure
events, i.e., A and B, occur at the same time (simultaneously)
| OR Gate: models the failure of the output if any of the input failure
events, i.e., C or D, occurs alone
Figure 2: CCD Analysis Type B

Table 6: DFT Symbols and Functions

DFT Symbol | Function
---|---
| Priority-AND (PAND) Gate: models the dynamic behavior of failing the output
when all input events occur in a sequence, i.e., A then B
| SPare (SP) Gate: models the dynamic behavior of activating the spare input
D after the failure of the main input C
Figure 3: Overview of CCD Analysis
Fig. 3 depicts the overview of the four steps of CCD analysis [3]: (1)
Components failure events: identify the causes of the undesired failure events
for each subsystem/component in the safety-critical system; (2) Construction
of a complete CCD: draw a complete system CCD model using its basic
constructors considering that the order of components should follow the
temporal action of the system; (3) CCD model reduction: remove the unnecessary
decision boxes in the system to obtain its minimal CCD representing the actual
functional behavior of the system; and (4) CCD probabilistic analysis:
evaluate the probabilities of all CCD consequence paths. The paths in a CCD
represent the likelihood of specific sequences of events that can occur in a
system, where only one such scenario can occur at a time [30]. This implies that
all consequences in a CCD are disjoint (mutually exclusive) [14]. Assuming
that all events associated with the decision boxes in a CCD model are mutually
independent, then the CCD paths probabilities can be quantified as follows
[15]:
1. 1.
Evaluate the probabilities of each outgoing branch stemming from a decision
box, i.e., quantifying the associated FT models
2. 2.
Compute the probability of each consequence path by multiplying the individual
probabilities of all events associated with the decision boxes
3. 3.
Determine the probability of a particular consequence box by summing the
probabilities of all consequence paths ending with that consequence event
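These three steps can be sketched in Python, assuming mutually independent decision-box events and disjoint consequence paths (the helper functions and numbers below are illustrative, not part of the HOL4 development):

```python
from math import prod

def path_probability(box_probs):
    """Step 2: multiply the branch probabilities along one consequence path."""
    return prod(box_probs)

def consequence_probability(paths):
    """Step 3: sum the (disjoint) path probabilities ending in one consequence."""
    return sum(path_probability(p) for p in paths)

# Two paths lead to the same consequence box (hypothetical branch values):
#   path 1: NO on box A (0.1), then YES on box B (0.7)
#   path 2: YES on box A (0.9)
print(consequence_probability([[0.1, 0.7], [0.9]]))  # 0.1*0.7 + 0.9
```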
As an example, consider a Motor Control Centre (MCC) [32] consisting of three
components Relay, Timer and Fuse, as shown in Fig. 4. The MCC is designed to
control an Induction Motor (IM) and let it run for a specific period of time
then stops. The IM power circuit is energized by the closure of the Relay
Contacts (Rc), as shown in Fig. 4. Rc closes after the user presses the Start
button, which energizes the Relay (R) and, at the same time, an ON-delay Timer (T).
The Timer opens its contacts (Tc) after a specific period of time t, and
consequently the IM stops. If the IM is overloaded beyond its design limit, then
the Fuse (F) melts and protects both the MCC and the IM from damage. Assume that each
component in the MCC has two operational states, i.e., operating or failing.
The four steps of a CCD-based reliability analysis described by Andrews et al.
[14] are as follows [30]:
Figure 4: Schematic of an Example MCC
1. 1.
Components failure events: Assign a FT to each component in the MCC, i.e.,
$\mathrm{FT}_{Relay}$, $\mathrm{FT}_{Timer}$, $\mathrm{FT}_{Fuse}$.
2. 2.
Construction of a complete CCD: Construct a complete CCD model of the IM
control operation, as shown in Fig. 5. For instance, if the condition of the
first decision box is either satisfied or not, i.e., YES or NO, then the next
system components are considered in order, i.e., Timer and Fuse, respectively.
Each consequence in the CCD ends with either motor stops (MS) or motor runs
(MR).
3. 3.
CCD model reduction: Apply the reduction process on the obtained complete CCD
model. For instance, if the condition of the first decision box (Relay
Contacts Open) is satisfied, i.e., YES box, then the IM stops regardless of
the status of the rest of the components, as shown in Fig. 6. Similarly, if
the condition of the second decision box (Timer Contacts Open) is satisfied,
then the IM stops. So, Fig. 6 represents the minimal CCD for the IM control
operation.
Figure 5: Complete CCD Model of the MCC

Figure 6: Reduced CCD Model of the MCC
4. 4.
CCD probabilistic analysis: The probabilities of the two consequence boxes MS
and MR in Fig. 6 can be expressed mathematically as:
$\mathcal{P}(Consequence\_Box_{MS}) = \mathcal{P}(Relay_{S}) + \mathcal{P}(Relay_{F})\times\mathcal{P}(Timer_{S}) + \mathcal{P}(Relay_{F})\times\mathcal{P}(Timer_{F})\times\mathcal{P}(Fuse_{S})$ (1)

$\mathcal{P}(Consequence\_Box_{MR}) = \mathcal{P}(Relay_{F})\times\mathcal{P}(Timer_{F})\times\mathcal{P}(Fuse_{F})$ (2)
where $\mathcal{P}(\mathcal{X}_{F})$ is the unreliability function or the
probability of failure for a component $\mathcal{X}$, i.e.,
$\mathrm{FT}_{\mathcal{X}}$ model, and $\mathcal{P}(\mathcal{X}_{S})$ is the
reliability function or the probability of operating, i.e., the complement of
the $\mathrm{FT}_{\mathcal{X}}$ model.
In the next section, we present, in detail, the formalization of CCDs in the
HOL4 theorem prover to analyze the failures at the subsystem level of a given
safety-critical complex system and determine all their possible cascading
dependencies of complete/partial reliability and failure events that can occur
at the system level.
### 4.1 Formal CCD Modeling
We start the formalization of CCDs by formally modeling their basic symbols, as
described in Table 4, in HOL4 as follows:
Definition 3:
$\vdash$ DEC_BOX p X Y = if X = 1 then FST Y else if X = 0 then SND Y else
p_space p
where Y is an ordered pair (FST Y, SND Y) representing the reliability and
unreliability functions in a decision box, respectively. The condition X = 1
represents the YES Box while X = 0 represents the NO Box. If X is neither 1
nor 0, for instance, X = 2, this represents the irrelevance of the decision
box, which returns the probability space p to be used in the reduction process
of CCDs.
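The selection logic of Definition 3 can be mirrored informally in Python, with events as sets and the probability space as the full set of outcomes (an illustration only; the function name is ours):

```python
def dec_box(p_space, x, pair):
    """Mirror of DEC_BOX: 1 selects the YES (reliability) event,
    0 selects the NO (failure) event, and any other value returns
    the whole space, marking the box as irrelevant for reduction."""
    yes_event, no_event = pair
    if x == 1:
        return yes_event
    if x == 0:
        return no_event
    return p_space

space = {1, 2, 3, 4}
fail = {1, 2}
ok = space - fail
print(dec_box(space, 1, (ok, fail)))  # YES event
print(dec_box(space, 2, (ok, fail)))  # whole space (box reduced away)
```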
Secondly, we define the CCD Consequence path by recursively applying the
BRANCH ET constructor on a given $\mathcal{N}$ list of decision boxes
($\mathrm{\small{\texttt{DEC\\_BOX}}}_{\mathcal{N}}$) using the HOL4 recursive
function FOLDL as:
Definition 4:
$\vdash$ CONSEQ_PATH p
($\mathrm{\small{\texttt{DEC\\_BOX}}}_{1}$::$\mathrm{\small{\texttt{DEC\\_BOX}}}_{\mathcal{N}}$)
=
FOLDL ($\lambda$a b. ETREE (BRANCH a b))
$\mathrm{\small{\texttt{DEC\\_BOX}}}_{1}$
$\mathrm{\small{\texttt{DEC\\_BOX}}}_{\mathcal{N}}$
Finally, we define the CCD Consequence box by mapping the function CONSEQ_PATH
over a list of consequence paths using the HOL4 function MAP, then applying the
NODE ET constructor:
Definition 5:
$\vdash$ CONSEQ_BOX p $\mathrm{L}_{\mathcal{M}}$ = ETREE (NODE (MAP
($\lambda$a. CONSEQ_PATH p a) $\mathrm{L}_{\mathcal{M}}$))
Using the above definitions, we can construct a complete CCD model (Step 2 in
Fig. 3) for the MCC system shown in Fig. 5, in HOL4 as:
$\vdash$ MCC_COMPLETE_CCD $\mathrm{FT}_{R}$ $\mathrm{FT}_{T}$
$\mathrm{FT}_{F}$ =
CONSEQ_BOX p
[[DEC_BOX p 1 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)];
[DEC_BOX p 1 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)];
[DEC_BOX p 1 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)]]
In CCD analysis [30], Step 3 in Fig. 3 is used to model the accurate
functional behavior of a system, in the sense that irrelevant decision boxes
should be removed from the complete CCD model. Accordingly, the actual CCD model
of the MCC system after reduction, as shown in Fig. 6, can be obtained by
assigning X a value that is neither 1 nor 0, for instance X = 2, which marks the
decision box as irrelevant, in HOL4 as:
$\vdash$ MCC_REDUCED_CCD $\mathrm{FT}_{R}$ $\mathrm{FT}_{T}$ $\mathrm{FT}_{F}$
=
CONSEQ_BOX p
[[DEC_BOX p 1 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 2
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 2
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 2
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)]]
Also, we can formally verify the above reduced CCD model of the MCC system, in
HOL4 as:
$\vdash$ MCC_REDUCED_CCD $\mathrm{FT}_{R}$ $\mathrm{FT}_{T}$ $\mathrm{FT}_{F}$
=
CONSEQ_BOX p
[[DEC_BOX p 1 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 1
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{R}}$,$\mathrm{FT}_{R}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{T}}$,$\mathrm{FT}_{T}$);DEC_BOX p 0
($\overline{\mathrm{FT}_{F}}$,$\mathrm{FT}_{F}$)]]
where $\overline{\mathrm{FT}_{X}}$ for a component X is the complement of
$\mathrm{FT}_{X}$.
### 4.2 Formal CCD Analysis
An important step in the CCD analysis is to determine the probability of each
consequence path occurrence in the CCD [14]. For that purpose, we formally
verify the following CCD generic probabilistic properties, in HOL4 as follows:
Property 1: The probability of a consequence path for one decision box
assigned a generic FT model, i.e., OR or AND, as shown in Fig. 7, verified
under the assumptions described in Table 2, as follows:
Theorem 1:
$\vdash$ prob_space p $\wedge$ $F_{\mathcal{N}}$ $\in$ events p $\wedge$
MUTUAL_INDEP p $F_{\mathcal{N}}$ $\Rightarrow$
prob p
(CONSEQ_PATH p [DEC_BOX p X (FTree p (NOT (OR
$\mathrm{F}_{\mathcal{N}}$)),FTree p (OR $\mathrm{F}_{\mathcal{N}}$))]) =
if X = 1 then $\prod$ (PROB_LIST p (COMPL_LIST p $\mathrm{F}_{\mathcal{N}}$))
else if X = 0 then 1 - $\prod$ (PROB_LIST p (COMPL_LIST p
$\mathrm{F}_{\mathcal{N}}$)) else 1
Figure 7: Decision Boxes with FT Gates
For example, consider a system X consisting of two components $\mathrm{C}_{1}$
and $\mathrm{C}_{2}$. Assuming that the failure of either one of them causes the
system failure, i.e., $\mathrm{C}_{1F}$ or $\mathrm{C}_{2F}$, we can formally
model the FT of the system ($\mathrm{FT}_{system}$), in HOL4 as:
$\vdash$ $\mathrm{FT}_{system}$ p $\mathrm{C}_{1F}$ $\mathrm{C}_{2F}$ = FTree
p (OR [$\mathrm{C}_{1F}$;$\mathrm{C}_{2F}$])
Using Theorem 1, we can obtain the probabilities of the YES/NO outcomes of a
decision box connected to the above FT model, respectively, in HOL4 as:
$\vdash$ prob p (CONSEQ_PATH p [DEC_BOX p 1
($\overline{\mathrm{FT}_{system}}$,$\mathrm{FT}_{system}$)]) =
(1 - prob p $\mathrm{C}_{1F}$) $\times$ (1 - prob p $\mathrm{C}_{2F}$)
$\vdash$ prob p (CONSEQ_PATH p [DEC_BOX p 0
($\overline{\mathrm{FT}_{system}}$,$\mathrm{FT}_{system}$)]) =
1 - (1 - prob p $\mathrm{C}_{1F}$) $\times$ (1 - prob p $\mathrm{C}_{2F}$)
Theorem 2:
$\vdash$ prob_space p $\wedge$ $F_{\mathcal{N}}$ $\in$ events p $\wedge$
MUTUAL_INDEP p $F_{\mathcal{N}}$ $\Rightarrow$
prob p
(CONSEQ_PATH p
[DEC_BOX p X (FTree p (NOT (AND $\mathrm{F}_{\mathcal{N}}$)),FTree p (AND
$\mathrm{F}_{\mathcal{N}}$))]) =
if X = 1 then 1 - $\prod$ (PROB_LIST p $\mathrm{F}_{\mathcal{N}}$)
else if X = 0 then $\prod$ (PROB_LIST p $\mathrm{F}_{\mathcal{N}}$) else 1
For instance, in the above example, assume that the system fails only if both
components fail simultaneously, i.e., $\mathrm{C}_{1F}$ and
$\mathrm{C}_{2F}$. We can formally model the FT of the system, in HOL4 as:
$\vdash$ $\mathrm{FT}_{system}$ p $\mathrm{C}_{1F}$ $\mathrm{C}_{2F}$ = FTree
p (AND[$\mathrm{C}_{1F}$;$\mathrm{C}_{2F}$])
Figure 8: Two-level Decision Boxes for CCD Analysis
Using Theorem 2, we can obtain the probabilities of the YES/NO outcomes of a
decision box connected to the above FT model, respectively, in HOL4 as:
$\vdash$ prob p (CONSEQ_PATH p [DEC_BOX p 1
($\overline{\mathrm{FT}_{system}}$,$\mathrm{FT}_{system}$)]) =
1 - prob p $\mathrm{C}_{1F}$ $\times$ prob p $\mathrm{C}_{2F}$
$\vdash$ prob p (CONSEQ_PATH p [DEC_BOX p 0
($\overline{\mathrm{FT}_{system}}$,$\mathrm{FT}_{system}$)]) =
prob p $\mathrm{C}_{1F}$ $\times$ prob p $\mathrm{C}_{2F}$
Property 2: The probability of two-level decision boxes assigned to a CCD path
with all combinations of FT gates (AND-OR/OR-AND, AND-AND and OR-OR), as
shown in Fig. 8. Each combination has 4 possible operating scenarios that can
occur (0-0, 0-1, 1-0 and 1-1) and 2 other possible reduction scenarios that
may be required in Step 3 (0-2 and 1-2), which represent the removal of the
decision box Y from the path. The basic idea is to select different
combinations of decision boxes to achieve the desired system behavior and also
to select a reduction value ($>$ 1) to remove irrelevant decision boxes.
These probabilistic expressions can be formally verified in HOL4 as:
Theorem 3:
$\vdash$ prob_space p $\wedge$ ($\forall$y. y $\in$
($\mathrm{F}_{\mathcal{N}}$++$\mathrm{F}_{\mathcal{M}}$) $\Rightarrow$ y $\in$
events p) $\wedge$
MUTUAL_INDEP p ($\mathrm{F}_{\mathcal{N}}$++$\mathrm{F}_{\mathcal{M}}$)
$\Rightarrow$
prob p (CONSEQ_PATH p
[DEC_BOX p X (FTree p (NOT (AND $\mathrm{F}_{\mathcal{N}}$)),FTree p (AND
$\mathrm{F}_{\mathcal{N}}$));
DEC_BOX p Y (FTree p (NOT (OR $\mathrm{F}_{\mathcal{M}}$)),FTree p (OR
$\mathrm{F}_{\mathcal{M}}$))]) =
if X = 0 $\wedge$ Y = 0 then
$\prod$ (PROB_LIST p $\mathrm{F}_{\mathcal{N}}$) $\times$ (1 - $\prod$
(PROB_LIST p (COMPL_LIST p $\mathrm{F}_{\mathcal{M}}$)))
else if X = 0 $\wedge$ Y = 1 then
$\prod$ (PROB_LIST p $\mathrm{F}_{\mathcal{N}}$) $\times$ $\prod$ (PROB_LIST p
(COMPL_LIST p $\mathrm{F}_{\mathcal{M}}$))
else if X = 1 $\wedge$ Y = 0 then
(1 - $\prod$ (PROB_LIST p $\mathrm{F}_{\mathcal{N}}$)) $\times$ (1 - $\prod$
(PROB_LIST p (COMPL_LIST p $\mathrm{F}_{\mathcal{M}}$)))
else if X = 1 $\wedge$ Y = 1 then
(1 - $\prod$ (PROB_LIST p $\mathrm{F}_{\mathcal{N}}$)) $\times$ $\prod$
(PROB_LIST p (COMPL_LIST p $\mathrm{F}_{\mathcal{M}}$))
else if X = 0 $\wedge$ Y = 2 then $\prod$ (PROB_LIST p
$\mathrm{F}_{\mathcal{N}}$)
else if X = 1 $\wedge$ Y = 2 then (1 - $\prod$ (PROB_LIST p
$\mathrm{F}_{\mathcal{N}}$)) else 1
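The case split in Theorem 3 can be checked numerically with a Python sketch of the AND-OR path probability. This is our own illustrative generalization, in which a reduced box (value 2) contributes a factor of 1, with floats standing in for the formal prob function:

```python
from math import prod

def and_or_path(x, y, fn, fm):
    """Probability of a two-level path: an AND-gated decision box (value x)
    followed by an OR-gated decision box (value y); value 2 marks a box
    removed by reduction, contributing a neutral factor of 1."""
    p_and = prod(fn)                    # AND-gate failure probability
    p_or = 1 - prod(1 - q for q in fm)  # OR-gate failure probability
    px = {0: p_and, 1: 1 - p_and}.get(x, 1.0)
    py = {0: p_or, 1: 1 - p_or}.get(y, 1.0)
    return px * py

fn, fm = [0.1, 0.2], [0.3]
print(and_or_path(0, 0, fn, fm))  # p_and * p_or
print(and_or_path(1, 2, fn, fm))  # 1 - p_and (box Y reduced)
```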
Theorem 4:
$\vdash$ prob p (CONSEQ_PATH p
[DEC_BOX p X (FTree p (NOT (AND $\mathrm{F}_{\mathcal{N}}$)),FTree p (AND
$\mathrm{F}_{\mathcal{N}}$));
DEC_BOX p Y (FTree p (NOT (AND $\mathrm{F}_{\mathcal{M}}$)),FTree p (AND
$\mathrm{F}_{\mathcal{M}}$))]) =
if X = 0 $\wedge$ Y = 0 then
$\prod$ (PROB_LIST p $\mathrm{F}_{\mathcal{N}}$) $\times$ $\prod$ (PROB_LIST p
$\mathrm{F}_{\mathcal{M}}$)
else if X = 0 $\wedge$ Y = 1 then
$\prod$ (PROB_LIST p $\mathrm{F}_{\mathcal{N}}$) $\times$ (1 - $\prod$
(PROB_LIST p $\mathrm{F}_{\mathcal{M}}$))
⋮
else if X = 1 $\wedge$ Y = 2 then (1 - $\prod$ (PROB_LIST p
$\mathrm{F}_{\mathcal{N}}$)) else 1
Theorem 5:
$\vdash$ prob p (CONSEQ_PATH p
[DEC_BOX p X (FTree p (NOT (OR $\mathrm{F}_{\mathcal{N}}$)),FTree p (OR
$\mathrm{F}_{\mathcal{N}}$));
DEC_BOX p Y (FTree p (NOT (OR $\mathrm{F}_{\mathcal{M}}$)),FTree p (OR
$\mathrm{F}_{\mathcal{M}}$))]) =
if X = 0 $\wedge$ Y = 0 then
(1 - $\prod$ (PROB_LIST p (COMPL_LIST p $\mathrm{F}_{\mathcal{N}}$))) $\times$
(1 - $\prod$ (PROB_LIST p (COMPL_LIST p $\mathrm{F}_{\mathcal{M}}$)))
else if X = 0 $\wedge$ Y = 1 then
(1 - $\prod$ (PROB_LIST p (COMPL_LIST p $\mathrm{F}_{\mathcal{N}}$))) $\times$
$\prod$ (PROB_LIST p (COMPL_LIST p $\mathrm{F}_{\mathcal{M}}$))
⋮
else if X = 1 $\wedge$ Y = 2 then $\prod$ (PROB_LIST p (COMPL_LIST p
$\mathrm{F}_{\mathcal{N}}$)) else 1
Property 3: A generic probabilistic property for a consequence path consisting
of complex four-level decision boxes associated with different combinations of
FTs, each consisting of $\mathcal{N}$ components (AND-OR-AND-OR / OR-AND-OR-AND /
AND-AND-OR-OR / OR-OR-AND-AND). Such a path has 16 possible operating scenarios
and 14 further reduction possibilities, as shown in Fig. 9, and is verified in
HOL4 as:
Theorem 6:
$\vdash$ Let
$\small{\texttt{W}}_{\mathrm{F}}$ = $\prod$ (PROB_LIST p
$\mathrm{F}_{\mathcal{N}}$);
$\overline{\small{\texttt{W}}}$ = 1 - $\small{\texttt{W}}_{\mathrm{F}}$;
$\small{\texttt{X}}_{\mathrm{F}}$ = 1 - $\prod$ (PROB_LIST p (COMPL_LIST p
$\mathrm{F}_{\mathcal{K}}$)); $\overline{\small{\texttt{X}}}$ = 1 -
$\small{\texttt{X}}_{\mathrm{F}}$;
$\small{\texttt{Y}}_{\mathrm{F}}$ = $\prod$ (PROB_LIST p
$\mathrm{F}_{\mathcal{M}}$);
$\overline{\small{\texttt{Y}}}$ = 1 - $\small{\texttt{Y}}_{\mathrm{F}}$;
$\small{\texttt{Z}}_{\mathrm{F}}$ = 1 - $\prod$ (PROB_LIST p (COMPL_LIST p
$\mathrm{F}_{\mathcal{J}}$)); $\overline{\small{\texttt{Z}}}$ = 1 -
$\small{\texttt{Z}}_{\mathrm{F}}$
in
prob p
(CONSEQ_PATH p
[DEC_BOX p W (FTree p (NOT (AND $\mathrm{F}_{\mathcal{N}}$)),FTree p (AND
$\mathrm{F}_{\mathcal{N}}$));
DEC_BOX p X (FTree p (NOT (OR $\mathrm{F}_{\mathcal{K}}$)),FTree p (OR
$\mathrm{F}_{\mathcal{K}}$));
DEC_BOX p Y (FTree p (NOT (AND $\mathrm{F}_{\mathcal{M}}$)),FTree p (AND
$\mathrm{F}_{\mathcal{M}}$));
DEC_BOX p Z (FTree p (NOT (OR $\mathrm{F}_{\mathcal{J}}$)),FTree p (OR
$\mathrm{F}_{\mathcal{J}}$))]) =
if W = 0 $\wedge$ X = 0 $\wedge$ Y = 0 $\wedge$ Z = 0
then $\small{\texttt{W}}_{\mathrm{F}}$ $\times$
$\small{\texttt{X}}_{\mathrm{F}}$ $\times$ $\small{\texttt{Y}}_{\mathrm{F}}$
$\times$ $\small{\texttt{Z}}_{\mathrm{F}}$
else if W = 0 $\wedge$ X = 0 $\wedge$ Y = 0 $\wedge$ Z = 1
then $\small{\texttt{W}}_{\mathrm{F}}$ $\times$
$\small{\texttt{X}}_{\mathrm{F}}$ $\times$ $\small{\texttt{Y}}_{\mathrm{F}}$
$\times$ $\overline{\small{\texttt{Z}}}$
else if W = 0 $\wedge$ X = 0 $\wedge$ Y = 1 $\wedge$ Z = 0
then $\small{\texttt{W}}_{\mathrm{F}}$ $\times$
$\small{\texttt{X}}_{\mathrm{F}}$ $\times$ $\overline{\small{\texttt{Y}}}$
$\times$ $\small{\texttt{Z}}_{\mathrm{F}}$
⋮
else if W = 1 $\wedge$ X = 1 $\wedge$ Y = 2 $\wedge$ Z = 2
then $\overline{\small{\texttt{W}}}$ $\times$ $\overline{\small{\texttt{X}}}$
else if W = 1 $\wedge$ X = 2 $\wedge$ Y = 2 $\wedge$ Z = 2
then $\overline{\small{\texttt{W}}}$ else 1
Figure 9: Four-level Decision Boxes for CCD Analysis
For complex systems consisting of $\mathcal{N}$-level decision boxes, where
each decision box is associated with an AND/OR gate consisting of an arbitrary
list of failure events, we define three types A, B and C of possible CCD
outcomes, as shown in Fig. 10, with a new proposed mathematics as:
Figure 10: Generic $\mathcal{N}$-level CCD Analysis
Property 4 (N Decision Boxes of Type A): The probability of $n$ decision boxes
assigned to a consequence path corresponding to $n$ subsystems, where all
decision boxes are associated with FT AND models consisting of arbitrary lists
of $k$ events, can be expressed mathematically at a specific time $t$ for three
cases as:
(A1) All outcomes of the $n$ decision boxes are NO
$\mathcal{F}_{A1}(t)=\prod\limits^{n}_{i=1}\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t)$
(3)
(A2) All outcomes of the $n$ decision boxes are YES
$\mathcal{F}_{A2}(t)=\prod\limits^{n}_{i=1}(1-\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t))$
(4)
(A3) The outcomes of $m$ decision boxes are NO and the outcomes of the
remaining $p$ decision boxes are YES
$\mathcal{F}_{A3}(t)=\Bigg(\prod\limits^{m}_{i=1}\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t)\Bigg)\times\Bigg(\prod\limits^{p}_{i=1}(1-\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t))\Bigg)$
(5)
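As a quick numerical sanity check of Eqs. 3-5 (a sketch only; the Python names below are ours and are not part of the HOL4 development), each subsystem is an AND gate whose failure probability is the product of its independent component failure probabilities:

```python
from math import prod

def ft_and(event_probs):
    """Failure probability of an FT AND model over independent events."""
    return prod(event_probs)

def f_a1(subsystems):
    """Eq. 3: every decision box answers NO (every AND gate fails)."""
    return prod(ft_and(ss) for ss in subsystems)

def f_a2(subsystems):
    """Eq. 4: every decision box answers YES (every AND gate survives)."""
    return prod(1 - ft_and(ss) for ss in subsystems)

def f_a3(no_subsystems, yes_subsystems):
    """Eq. 5: m boxes answer NO and the remaining p boxes answer YES."""
    return f_a1(no_subsystems) * f_a2(yes_subsystems)
```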
To verify the correctness of the above-proposed new safety analysis
mathematical formulations in the HOL4 theorem prover, we define two generic
CCD functions $\mathcal{SS}^{YES}_{AND}$ and $\mathcal{SS}^{NO}_{AND}$ that
recursively generate the YES and NO outcomes of the function FTree,
identified by the gate constructors AND and NOT, for a given arbitrary list of
all subsystem failure events (SSN), respectively, in HOL4 as:
Definition 6:
$\vdash$ $\mathcal{SS}^{YES}_{AND}$ p (SS1::SSN) = FTree p (NOT (AND
SS1))::$\mathcal{SS}^{YES}_{AND}$ p SSN
Definition 7:
$\vdash$ $\mathcal{SS}^{NO}_{AND}$ p (SS1::SSN) = FTree p (AND
SS1)::$\mathcal{SS}^{NO}_{AND}$ p SSN
Using the above-defined functions, we can verify three scalable
probabilistic properties corresponding to the above-mentioned safety
equations Eq. 3, Eq. 4, and Eq. 5, respectively, in HOL4 as:
Theorem 7:
$\vdash$ prob p (CONSEQ_PATH p ($\mathcal{SS}^{NO}_{AND}$ p SSN)) =
$\prod$ (MAP ($\lambda$ a. $\prod$ (PROB_LIST p a)) SSN)
Theorem 8:
$\vdash$ prob p (CONSEQ_PATH p ($\mathcal{SS}^{YES}_{AND}$ p SSN)) =
$\prod$ (MAP ($\lambda$ b. (1 - $\prod$ (PROB_LIST p b))) SSN)
Theorem 9:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{NO}_{AND}$ p SSm);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{AND}$ p SSp)]) =
$\bigg{(}\prod$ (MAP ($\lambda$ a. $\prod$ (PROB_LIST p a)) SSm)$\bigg{)}$
$\times$
$\bigg{(}\prod$ (MAP ($\lambda$ b. 1 - $\prod$ (PROB_LIST p b)) SSp)$\bigg{)}$
Property 5 (N Decision Boxes of Type B): The probabilistic assessment of $n$
decision boxes assigned to a CCD consequence path, where all decision boxes
are associated with generic FT OR models consisting of arbitrary lists of $k$
events, can be expressed mathematically for three cases:
(B1) All outcomes of the $n$ decision boxes are NO
$\mathcal{F}_{B1}(t)=\prod\limits^{n}_{i=1}(1-\prod\limits^{k}_{j=2}(1-{\mathcal{F}}_{ij}(t)))$
(6)
(B2) All outcomes of the $n$ decision boxes are YES
$\mathcal{F}_{B2}(t)=\prod\limits^{n}_{i=1}\prod\limits^{k}_{j=2}(1-{\mathcal{F}}_{ij}(t))$
(7)
(B3) The outcomes of $m$ decision boxes are NO and the outcomes of $p$
decision boxes are YES
$\mathcal{F}_{B3}(t)=\Bigg(\prod\limits^{m}_{i=1}(1-\prod\limits^{k}_{j=2}(1-{\mathcal{F}}_{ij}(t)))\Bigg)\times\Bigg(\prod\limits^{p}_{i=1}\prod\limits^{k}_{j=2}(1-{\mathcal{F}}_{ij}(t))\Bigg)$
(8)
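The Type B cases of Eqs. 6-8 differ from Type A only in the gate: an OR gate fails unless every event is absent. A minimal Python sketch (our own names; independence assumed):

```python
from math import prod

def ft_or(event_probs):
    """Failure probability of an FT OR model: 1 minus the probability
    that every independent event is absent."""
    return 1 - prod(1 - q for q in event_probs)

def f_b1(subsystems):
    """Eq. 6: every decision box answers NO (every OR gate fails)."""
    return prod(ft_or(ss) for ss in subsystems)

def f_b2(subsystems):
    """Eq. 7: every decision box answers YES (every OR gate survives)."""
    return prod(1 - ft_or(ss) for ss in subsystems)

def f_b3(no_subsystems, yes_subsystems):
    """Eq. 8: m boxes answer NO and p boxes answer YES."""
    return f_b1(no_subsystems) * f_b2(yes_subsystems)
```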
To verify the correctness of the above-proposed new CCD mathematical formulas
in HOL4, we define two generic functions $\mathcal{SS}^{YES}_{OR}$ and
$\mathcal{SS}^{NO}_{OR}$ that recursively generate the YES and NO outcomes of
the function FTree, identified by the gate constructors OR and NOT, for a
given list of subsystem events.
Definition 8:
$\vdash$ $\mathcal{SS}^{YES}_{OR}$ p (SS1::SSN) = FTree p (NOT (OR
SS1))::$\mathcal{SS}^{YES}_{OR}$ p SSN
Definition 9:
$\vdash$ $\mathcal{SS}^{NO}_{OR}$ p (SS1::SSN) = FTree p (OR
SS1)::$\mathcal{SS}^{NO}_{OR}$ p SSN
Using the above-defined functions, we can formally verify three scalable
probabilistic properties corresponding to Eq. 6, Eq. 7, and Eq. 8,
respectively, in HOL4 as:
Theorem 10:
$\vdash$ prob p (CONSEQ_PATH p ($\mathcal{SS}^{NO}_{OR}$ p SSN)) =
$\prod$
(MAP
($\lambda$ a.
(1 - $\prod$ (PROB_LIST p (compl_list p a)))) SSN)
Theorem 11:
$\vdash$ prob p (CONSEQ_PATH p ($\mathcal{SS}^{YES}_{OR}$ p SSN)) =
$\prod$
(MAP
($\lambda$ b.
$\prod$ (PROB_LIST p (compl_list p b))) SSN)
Theorem 12:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{NO}_{OR}$ p SSm);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{OR}$ p SSp)]) =
$\prod$
(MAP
($\lambda$ a.
(1 - $\prod$ (PROB_LIST p (compl_list p a)))) SSm)
$\times$ $\prod$
(MAP
($\lambda$ b.
$\prod$ (PROB_LIST p (compl_list p b))) SSp)
Property 6 (N Decision Boxes of Type C): The probabilistic assessment of $n$
decision boxes assigned to a consequence path for a very complex system, where
$m$ decision boxes are associated with generic FT AND models consisting
of $k$ events, while the other $p$ decision boxes are associated with generic FT
OR models consisting of $z$ events, as shown in Fig. 10, is expressed
mathematically for nine cases as:
(C1) All outcomes of the $m$ and $p$ decision boxes are NO.
$\mathcal{F}_{C1}(t)=\Bigg(\prod\limits^{m}_{i=1}\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t)\Bigg)\times\Bigg(\prod\limits^{p}_{i=1}(1-\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t)))\Bigg)$
(9)
Theorem 13:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{NO}_{AND}$ p SSm);
CONSEQ_PATH p ($\mathcal{SS}^{NO}_{OR}$ p SSp)]) =
$\prod$ (MAP ($\lambda$ a. $\prod$ (PROB_LIST p a)) SSm)
$\times$ $\prod$
(MAP
($\lambda$ b.
(1 - $\prod$ (PROB_LIST p (compl_list p b)))) SSp)
(C2) All outcomes of the $m$ and $p$ decision boxes are YES.
$\mathcal{F}_{C2}(t)=\Bigg(\prod\limits^{m}_{i=1}(1-\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t))\Bigg)\times\Bigg(\prod\limits^{p}_{i=1}\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t))\Bigg)$
(10)
Theorem 14:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{YES}_{AND}$ p SSm);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{OR}$ p SSp)]) =
$\prod$ (MAP ($\lambda$ a. 1 - $\prod$ (PROB_LIST p a)) SSm)
$\times$ $\prod$
(MAP
($\lambda$ b.
$\prod$ (PROB_LIST p (compl_list p b))) SSp)
(C3) All outcomes of the $m$ decision boxes are NO and all outcomes of the $p$
decision boxes are YES.
$\mathcal{F}_{C3}(t)=\Bigg(\prod\limits^{m}_{i=1}\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t)\Bigg)\times\Bigg(\prod\limits^{p}_{i=1}\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t))\Bigg)$
(11)
Theorem 15:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{NO}_{AND}$ p SSm);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{OR}$ p SSp)]) =
$\prod$ (MAP ($\lambda$ a. $\prod$ (PROB_LIST p a)) SSm)
$\times$ $\prod$
(MAP
($\lambda$ b.
$\prod$ (PROB_LIST p (compl_list p b))) SSp)
(C4) All outcomes of the $m$ decision boxes are YES and all outcomes of the $p$
decision boxes are NO.
$\mathcal{F}_{C4}(t)=\Bigg(\prod\limits^{m}_{i=1}(1-\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t))\Bigg)\times\Bigg(\prod\limits^{p}_{i=1}(1-\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t)))\Bigg)$
(12)
Theorem 16:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{YES}_{AND}$ p SSm);
CONSEQ_PATH p ($\mathcal{SS}^{NO}_{OR}$ p SSp)]) =
$\prod$ (MAP ($\lambda$ a. 1 - $\prod$ (PROB_LIST p a)) SSm)
$\times$ $\prod$
(MAP
($\lambda$ b.
(1 - $\prod$ (PROB_LIST p (compl_list p b)))) SSp)
(C5) The outcomes of $s$ out of the $m$ decision boxes are NO, the outcomes of
the other $u$ decision boxes are YES, and all outcomes of the $p$ decision
boxes are NO.
$\mathcal{F}_{C5}(t)=\Bigg(\prod\limits^{s}_{i=1}\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t)\Bigg)\times\Bigg(\prod\limits^{u}_{i=1}(1-\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t))\Bigg)\times\Bigg(\prod\limits^{p}_{i=1}(1-\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t)))\Bigg)$
(13)
Theorem 17:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{NO}_{AND}$ p SSs);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{AND}$ p SSu);
CONSEQ_PATH p ($\mathcal{SS}^{NO}_{OR}$ p SSp)]) =
$\prod$ (MAP ($\lambda$ a. $\prod$ (PROB_LIST p a)) SSs)
$\times$ $\prod$ (MAP ($\lambda$ b. 1 - $\prod$ (PROB_LIST p b)) SSu)
$\times$ $\prod$
(MAP
($\lambda$ c.
(1 - $\prod$ (PROB_LIST p (compl_list p c)))) SSp)
(C6) The outcomes of $s$ out of the $m$ decision boxes are NO, the outcomes of
the other $u$ decision boxes are YES, and all outcomes of the $p$ decision
boxes are YES.
$\mathcal{F}_{C6}(t)=\Bigg(\prod\limits^{s}_{i=1}\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t)\Bigg)\times\Bigg(\prod\limits^{u}_{i=1}(1-\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t))\Bigg)\times\Bigg(\prod\limits^{p}_{i=1}\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t))\Bigg)$
(14)
Theorem 18:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{NO}_{AND}$ p SSs);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{AND}$ p SSu);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{OR}$ p SSp)]) =
$\prod$ (MAP ($\lambda$ a. $\prod$ (PROB_LIST p a)) SSs)
$\times$ $\prod$ (MAP ($\lambda$ b. 1 - $\prod$ (PROB_LIST p b)) SSu)
$\times$ $\prod$
(MAP
($\lambda$ c.
$\prod$ (PROB_LIST p (compl_list p c))) SSp)
(C7) The outcomes of $s$ out of the $p$ decision boxes are NO, the outcomes of
the other $u$ decision boxes are YES, and all outcomes of the $m$ decision
boxes are NO.
$\mathcal{F}_{C7}(t)=\Bigg(\prod\limits^{m}_{i=1}\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t)\Bigg)\times\Bigg(\prod\limits^{u}_{i=1}\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t))\Bigg)\times\Bigg(\prod\limits^{s}_{i=1}(1-\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t)))\Bigg)$
(15)
Theorem 19:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{NO}_{AND}$ p SSm);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{OR}$ p SSu);
CONSEQ_PATH p ($\mathcal{SS}^{NO}_{OR}$ p SSs)]) =
$\prod$ (MAP ($\lambda$ a. $\prod$ (PROB_LIST p a)) SSm)
$\times$ $\prod$
(MAP
($\lambda$ b.
$\prod$ (PROB_LIST p (compl_list p b))) SSu)
$\times$ $\prod$
(MAP
($\lambda$ c.
(1 - $\prod$ (PROB_LIST p (compl_list p c)))) SSs)
(C8) The outcomes of $s$ out of the $p$ decision boxes are NO, the outcomes of
the other $u$ decision boxes are YES, and all outcomes of the $m$ decision
boxes are YES.
$\mathcal{F}_{C8}(t)=\Bigg(\prod\limits^{m}_{i=1}(1-\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t))\Bigg)\times\Bigg(\prod\limits^{u}_{i=1}\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t))\Bigg)\times\Bigg(\prod\limits^{s}_{i=1}(1-\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t)))\Bigg)$
(16)
Theorem 20:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{YES}_{AND}$ p SSm);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{OR}$ p SSu);
CONSEQ_PATH p ($\mathcal{SS}^{NO}_{OR}$ p SSs)]) =
$\prod$ (MAP ($\lambda$ a. 1 - $\prod$ (PROB_LIST p a)) SSm)
$\times$ $\prod$
(MAP
($\lambda$ b.
$\prod$ (PROB_LIST p (compl_list p b))) SSu)
$\times$ $\prod$
(MAP
($\lambda$ c.
(1 - $\prod$ (PROB_LIST p (compl_list p c)))) SSs)
(C9) The outcomes of $s$ out of the $m$ decision boxes are NO, the outcomes of
the other $u$ decision boxes are YES, the outcomes of $v$ out of the $p$
decision boxes are NO, and the outcomes of the other $w$ decision boxes are
YES.
$\begin{split}\mathcal{F}_{C9}(t)&=\Bigg(\prod\limits^{s}_{i=1}\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t)\Bigg)\times\Bigg(\prod\limits^{v}_{i=1}(1-\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t)))\Bigg)\\
&\quad\times\Bigg(\prod\limits^{u}_{i=1}(1-\prod\limits^{k}_{j=2}{\mathcal{F}}_{ij}(t))\Bigg)\times\Bigg(\prod\limits^{w}_{i=1}\prod\limits^{z}_{j=2}(1-{\mathcal{F}}_{ij}(t))\Bigg)\end{split}$
(17)
Theorem 21:
$\vdash$ prob p
(CONSEQ_PATH p
[CONSEQ_PATH p ($\mathcal{SS}^{NO}_{AND}$ p SSs);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{AND}$ p SSu);
CONSEQ_PATH p ($\mathcal{SS}^{NO}_{OR}$ p SSv);
CONSEQ_PATH p ($\mathcal{SS}^{YES}_{OR}$ p SSw)]) =
$\prod$ (MAP ($\lambda$ a. $\prod$ (PROB_LIST p a)) SSs)
$\times$ $\prod$ (MAP ($\lambda$ b. 1 - $\prod$ (PROB_LIST p b)) SSu)
$\times$ $\prod$
(MAP
($\lambda$ c.
(1 - $\prod$ (PROB_LIST p (compl_list p c)))) SSv)
$\times$ $\prod$
(MAP
($\lambda$ d.
$\prod$ (PROB_LIST p (compl_list p d))) SSw)
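All nine C-cases are instances of one pattern: partition the decision boxes into AND-NO, AND-YES, OR-NO, and OR-YES groups and multiply the group products. A hedged Python sketch of the most general case C9 (Eq. 17), with our own helper names and independence assumed:

```python
from math import prod

def ft_and(probs):
    """AND gate failure probability over independent events."""
    return prod(probs)

def ft_or(probs):
    """OR gate failure probability: fails unless all events are absent."""
    return 1 - prod(1 - q for q in probs)

def f_c9(and_no, and_yes, or_no, or_yes):
    """Eq. 17: s AND boxes answer NO, u AND boxes answer YES,
    v OR boxes answer NO, and w OR boxes answer YES."""
    return (prod(ft_and(ss) for ss in and_no)
            * prod(1 - ft_and(ss) for ss in and_yes)
            * prod(ft_or(ss) for ss in or_no)
            * prod(1 - ft_or(ss) for ss in or_yes))
```

Setting any group to the empty list recovers the simpler cases; for example, `f_c9(ss_m, [], ss_p, [])` is case C1 (Eq. 9).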
By verifying all the above-mentioned theorems in HOL4, we have thus
demonstrated the completeness of our proposed formal approach and thereby
solved the scalability problem of CCD analysis for any large and complex
engineering system at the subsystem level [33].
Property 7: A generic probabilistic expression of a CONSEQ_BOX for the
occurrence of a certain event in the entire system, given as the sum of the
individual probabilities of all $\mathcal{M}$ CONSEQ_PATHs ending with that
event:
Theorem 22:
$\vdash$ Let
CONSEQ_PATHS $\mathrm{L}_{\mathcal{M}}$ = MAP ($\lambda$a. CONSEQ_PATH p a)
$\mathrm{L}_{\mathcal{M}}$
in
prob_space p $\wedge$ MUTUAL_INDEP p $\mathrm{L}_{\mathcal{M}}$ $\wedge$
disjoint (CONSEQ_PATHS $\mathrm{L}_{\mathcal{M}}$) $\wedge$ ALL_DISTINCT
(CONSEQ_PATHS $\mathrm{L}_{\mathcal{M}}$) $\Rightarrow$
prob p (CONSEQ_BOX p $\mathrm{L}_{\mathcal{M}}$) = $\sum$ (PROB_LIST p
(CONSEQ_PATHS $\mathrm{L}_{\mathcal{M}}$))
where the HOL4 function disjoint ensures that each pair of elements in a given
list is mutually exclusive, while the function ALL_DISTINCT ensures that each
pair is distinct. The function $\sum$ sums the probabilities of a given list
of events. Note that all the above-mentioned new CCD formulations have been
formally verified in HOL4; the proof script amounts to about 16,000 lines of
HOL4 code, which can be downloaded from [33]. With some basic know-how of
HOL4, this code can also be extended to perform dynamic failure analysis of
subsystems with dependencies using DFTs, such as PAND and SP, i.e., CCD
reliability analysis of Type II (see Fig. 2).
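Theorem 22 expresses the elementary fact that probabilities of mutually exclusive, distinct paths add, while independent box outcomes multiply along each path. A minimal sketch, assuming each path is given as the list of its boxes' outcome probabilities (names are illustrative):

```python
from math import prod

def conseq_path_prob(outcome_probs):
    """Probability of one consequence path: product of its independent
    decision-box outcome probabilities (the MUTUAL_INDEP hypothesis)."""
    return prod(outcome_probs)

def conseq_box_prob(paths):
    """Probability of a consequence box: sum over its mutually exclusive,
    distinct paths (the disjoint and ALL_DISTINCT hypotheses)."""
    return sum(conseq_path_prob(path) for path in paths)
```

For a single box with failure probability q, the two paths [q] and [1 - q] are exhaustive and their probabilities sum to 1.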
To illustrate the applicability of our proposed approach, in the next section,
we present the formal CCD step-analysis of the standard IEEE 39-bus electrical
power network and verify its reliability indexes ($\mathcal{FOR}$ and
$\mathcal{SAIDI}$), which are commonly used as reliability indicators by
electric power utilities.
## 5 Electrical Power 39-bus Network System
An electrical power network is an interconnected grid for delivering
electricity from producers to customers. The power network system consists of
three main zones [1]: (i) generating stations that produce electric power;
(ii) transmission lines that carry power from sources to loads; and (iii)
distribution lines that connect individual consumers. Due to the complex and
integrated nature of the power network, failures in any zone of the system can
cause widespread catastrophic disruption of supply [1]. Therefore, a rigorous
formal cause-consequence analysis of the grid is essential in order to reduce
the risk of a blackout and enable back-up decisions [34]. For power
network safety assessment, reliability engineers have been dividing the power
network into three main hierarchical levels [12]: (a) generation systems; (b)
composite generation and transmission (or bulk power) systems; and (c)
distribution systems. We can use our proposed CCD formalization for the formal
modeling and analysis of any hierarchical level in the power network. In this
case study, we focus on the generation part only, i.e., hierarchical level I.
Also, we can evaluate the Forced Outage Rate ($\mathcal{FOR}$) for the
generation stations, which is defined as the probability that a unit is
unavailable to produce power due to unexpected equipment failure [34].
Additionally, we can determine the System Average Interruption Duration Index
($\mathcal{SAIDI}$), which is used to indicate the average duration for each
customer served to experience a sustained outage. $\mathcal{SAIDI}$ is defined
as the sum of all customer interruption durations (the probability of load
failures ↯ multiplied by the mean-time-to-repair of the failures and by the
number of customers affected by these failures) over the total number of
customers served [34]:
$\mathcal{SAIDI}=\frac{\sum{\mathcal{P}}(\mathcal{X}_{\mbox{\Lightning}})\times\mathrm{MTTR}_{\mathcal{X}}\times\mathrm{CN}_{\mathcal{X}}}{\sum\mathrm{CN}_{\mathcal{X}}}$
(18)
where $\mathrm{CN}_{\mathcal{X}}$ is the number of customers for a certain
location $\mathcal{X}$ while $\mathrm{MTTR}_{\mathcal{X}}$ is the mean-time-
to-repair the failure that occurred at $\mathcal{X}$. We formally define a
function $\sum\nolimits^{T}_{\mbox{\Lightning}}$ in HOL4 to sum all customer
interruption durations. Also, we formally define a generic function
$\mathcal{SAIDI}$ by dividing the output of
$\sum\nolimits^{T}_{\mbox{\Lightning}}$ over the total number of customers
served, in HOL4 as:
Definition 10:
$\vdash$ $\sum\nolimits^{T}_{\mbox{\Lightning}}$
(L::$\mathrm{\small{\texttt{L}}}_{\mathcal{M}}$)
(MTTR::$\mathrm{\small{\texttt{MTTR}}}_{\mathcal{M}}$)
(CN::$\mathrm{\small{\texttt{CN}}}_{\mathcal{M}}$) p =
prob p (CONSEQ_BOX p $\mathrm{\small{\texttt{L}}}_{\mathcal{M}}$) $\times$
MTTR $\times$ CN + $\sum\nolimits^{T}_{\mbox{\Lightning}}$
$\mathrm{\small{\texttt{L}}}_{\mathcal{M}}$
$\mathrm{\small{\texttt{MTTR}}}_{\mathcal{M}}$
$\mathrm{\small{\texttt{CN}}}_{\mathcal{M}}$ p
Definition 11:
$\vdash$ $\mathcal{SAIDI}$ $\mathrm{\small{\texttt{L}}}_{\mathcal{M}}$
$\mathrm{\small{\texttt{MTTR}}}_{\mathcal{M}}$
$\mathrm{\small{\texttt{CN}}}_{\mathcal{M}}$ p =
$\dfrac{\sum\nolimits^{T}_{\mbox{\Lightning}}\;\mathrm{\small{\texttt{L}}}_{\mathcal{M}}\;\mathrm{\small{\texttt{MTTR}}}_{\mathcal{M}}\;\mathrm{\small{\texttt{CN}}}_{\mathcal{M}}\;p}{\sum\mathrm{\small{\texttt{CN}}}_{\mathcal{M}}}$
where $\mathrm{\small{\texttt{L}}}_{\mathcal{M}}$ is the list of CCD paths,
$\mathrm{\small{\texttt{MTTR}}}_{\mathcal{M}}$ is the list of meantime to
repairs, and $\mathrm{\small{\texttt{CN}}}_{\mathcal{M}}$ is the list of
customer numbers. The function $\sum\nolimits^{T}_{\mbox{\Lightning}}$
(Definition 10) models the numerator of Eq. 18, which is the sum of all
customer interruption durations at different locations in the electrical power
grid. Each probability of failure is obtained by evaluating a CONSEQ_BOX
consisting of a list of $\mathcal{M}$ CONSEQ_PATH, which cause that failure.
Definition 11 represents the division of output of Definition 10 over the
total number of customers at all those locations as described in Eq. 18.
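Definitions 10 and 11 together compute Eq. 18; the recursion of Definition 10 is simply a sum over the loads. A Python sketch (the function name and argument names are ours):

```python
def saidi(load_failure_probs, mttrs, customer_counts):
    """Eq. 18: sum of customer interruption durations, i.e. P(failure) *
    MTTR * CN per load, divided by the total number of customers served."""
    interruption = sum(p * mttr * cn
                       for p, mttr, cn
                       in zip(load_failure_probs, mttrs, customer_counts))
    return interruption / sum(customer_counts)
```

For instance, two loads with failure probabilities 0.1 and 0.2, repair times 10 and 20 hours, and 100 and 300 customers give (0.1·10·100 + 0.2·20·300)/400 = 3.25 hours per customer.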
Consider a standard IEEE 39-bus electrical power network test system
consisting of 10 generators (G), 12 substations (S/S), 39 Buses (Bus), and 34
transmission lines (TL), as shown in Fig. 11 [35]. The generators G1-G10 are
assumed to be of two types: (i) solar photo-voltaic (PV) power plants G1-G5;
and (ii) steam power plants G6-G10. Using the Optimal Power Flow (OPF)
optimization [36], we can determine the flow of electricity from generators to
consumers in the power network. Typically, we are only interested in
evaluating the duration of certain failure events for specific loads in the
grid. For instance, consider the failure of load A, which according to the OPF
is supplied from G9 and G5 only, as shown in Fig. 11; the failure of one or
both of these power plants will lead to a partial or a complete blackout at
that load, respectively. We assume that the failure of two consecutive power
plants causes a complete blackout of the load. Hence, considering the
disruption of only one supply generator, the different partial failures for
loads A, B, C and D, as shown in Fig. 11, can be obtained by observing
different failures in the power network as:
1. a.
$\mathcal{P}(\mathrm{Load}_{A\mbox{\Lightning}})=(1-\mathcal{FOR}_{G_{9}})\times\mathcal{FOR}_{G_{5}}+\mathcal{FOR}_{G_{9}}\times(1-\mathcal{FOR}_{G_{5}})$
2. b.
$\mathcal{P}(\mathrm{Load}_{B\mbox{\Lightning}})=(1-\mathcal{FOR}_{G_{7}})\times\mathcal{FOR}_{G_{9}}+\mathcal{FOR}_{G_{7}}\times(1-\mathcal{FOR}_{G_{9}})$
3. c.
$\mathcal{P}(\mathrm{Load}_{C\mbox{\Lightning}})=(1-\mathcal{FOR}_{G_{1}})\times\mathcal{FOR}_{G_{2}}+\mathcal{FOR}_{G_{1}}\times(1-\mathcal{FOR}_{G_{2}})$
4. d.
$\begin{aligned}\mathcal{P}(\mathrm{Load}_{D\mbox{\Lightning}})&=(1-\mathcal{FOR}_{G_{6}})\times(1-\mathcal{FOR}_{G_{3}})\times(1-\mathcal{FOR}_{G_{8}})\times\mathcal{FOR}_{G_{4}}\\
&\quad+(1-\mathcal{FOR}_{G_{6}})\times(1-\mathcal{FOR}_{G_{3}})\times\mathcal{FOR}_{G_{8}}\times(1-\mathcal{FOR}_{G_{4}})\\
&\quad+(1-\mathcal{FOR}_{G_{6}})\times\mathcal{FOR}_{G_{3}}\times(1-\mathcal{FOR}_{G_{8}})\times(1-\mathcal{FOR}_{G_{4}})\\
&\quad+\mathcal{FOR}_{G_{6}}\times(1-\mathcal{FOR}_{G_{3}})\times(1-\mathcal{FOR}_{G_{8}})\times(1-\mathcal{FOR}_{G_{4}})\end{aligned}$
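Each of the expressions a-d above is an "exactly one supplying plant fails" probability built from the plants' $\mathcal{FOR}$ values. A generic sketch (the function name is ours; the FOR values passed in are placeholders):

```python
from math import prod

def exactly_one_fails(fors):
    """Probability that exactly one of the supplying plants fails while
    all the others remain available (cases a-d above)."""
    return sum(f_i * prod(1 - f for j, f in enumerate(fors) if j != i)
               for i, f_i in enumerate(fors))
```

For Load A this reduces to (1 - FOR_G9)·FOR_G5 + FOR_G9·(1 - FOR_G5); Load D is the same pattern over its four supplying generators.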
Therefore, the assessment of $\mathcal{SAIDI}$ for the Grid (G) shown in Fig.
11, including an evaluation for the $\mathcal{FOR}$ of all its power plants,
can be written mathematically as:
$\mathcal{SAIDI}_{G}=\frac{\mathcal{P}(\mathrm{Load}_{A\mbox{\Lightning}})\times\mathrm{MTTR}_{\mathrm{Load}_{A}}\times\mathrm{CN}_{\mathrm{Load}_{A}}+\dots}{\mathrm{CN}_{\mathrm{Load}_{A}}+\mathrm{CN}_{\mathrm{Load}_{B}}+\mathrm{CN}_{\mathrm{Load}_{C}}+\mathrm{CN}_{\mathrm{Load}_{D}}}$
(19)
Figure 11: IEEE 39-bus Electrical Power Network [35]
### 5.1 Formal CCD Analysis in HOL4
We can apply our four steps of CCD formalization to verify the expression of
$\mathcal{SAIDI}$ in terms of the power plant generator components, in HOL4
as:
Step 1 (Component failure events):
The schematic FT models of a typical PV power plant consisting of 2 solar
farms [37] and of a steam power plant consisting of 3 generators [34] are
shown in Fig. 12 and Fig. 13, respectively. Using the formal FT modeling, we
can formally define the FT models of both plants in HOL4 as:
Figure 12: FT Model of a PV Power Plant
Figure 13: FT Model of a Steam Power Plant
Definition 12:
$\vdash$ $\mathrm{FT}_{PV}$ p [LF1;LF2] [DC_DC1;DC_DC2] [SA1;SA2]
[DC_AC1;DC_AC2] =
FTree p (OR [OR [LF1;DC_DC1;DC_AC1;SA1]; OR [LF2;DC_DC2;DC_AC2;SA2]])
Definition 13:
$\vdash$ $\mathrm{FT}_{STEAM}$ p [BO1;BO2;BO3] [TA1;TA2;TA3] =
FTree p (AND [AND [BO1;TA1]; AND [BO2;TA2]; AND [BO3;TA3]])
Steps 2 and 3 (Construction of a CCD and Reduction):
We construct a complete formal CCD for all loads in our case study (Fig. 11),
i.e., A, B, C, and D, and then remove the irrelevant decision boxes according
to the functional behavior of the electrical power network. For instance, the
CCD models for loads A and D, shown in Fig. 14, can be defined in HOL4 as:
Definition 14:
$\vdash$ CCD_LOAD_A =
CONSEQ_BOX p
[[DEC_BOX p 1 ($\overline{\mathrm{FT}_{STEAM}}$,$\mathrm{FT}_{STEAM}$);DEC_BOX
p 1 ($\overline{\mathrm{FT}_{PV}}$,$\mathrm{FT}_{PV}$)];
[DEC_BOX p 1 ($\overline{\mathrm{FT}_{STEAM}}$,$\mathrm{FT}_{STEAM}$);DEC_BOX
p 0 ($\overline{\mathrm{FT}_{PV}}$,$\mathrm{FT}_{PV}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{STEAM}}$,$\mathrm{FT}_{STEAM}$);DEC_BOX
p 1 ($\overline{\mathrm{FT}_{PV}}$,$\mathrm{FT}_{PV}$)];
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{STEAM}}$,$\mathrm{FT}_{STEAM}$);DEC_BOX
p 0 ($\overline{\mathrm{FT}_{PV}}$,$\mathrm{FT}_{PV}$)]]
Definition 15:
$\vdash$ CCD_LOAD_D =
CONSEQ_BOX p
[[DEC_BOX p 1 ($\overline{\mathrm{FT}_{STEAM}}$,$\mathrm{FT}_{STEAM}$);DEC_BOX
p 1 ($\overline{\mathrm{FT}_{PV}}$,$\mathrm{FT}_{PV}$);
DEC_BOX p 1 ($\overline{\mathrm{FT}_{STEAM}}$,$\mathrm{FT}_{STEAM}$);DEC_BOX p
1 ($\overline{\mathrm{FT}_{PV}}$,$\mathrm{FT}_{PV}$)];
[DEC_BOX p 1 ($\overline{\mathrm{FT}_{STEAM}}$,$\mathrm{FT}_{STEAM}$);DEC_BOX
p 1 ($\overline{\mathrm{FT}_{PV}}$,$\mathrm{FT}_{PV}$);
DEC_BOX p 1 ($\overline{\mathrm{FT}_{STEAM}}$,$\mathrm{FT}_{STEAM}$);DEC_BOX p
0 ($\overline{\mathrm{FT}_{PV}}$,$\mathrm{FT}_{PV}$)];
⋮
[DEC_BOX p 0 ($\overline{\mathrm{FT}_{STEAM}}$,$\mathrm{FT}_{STEAM}$);DEC_BOX
p 0 ($\overline{\mathrm{FT}_{PV}}$,$\mathrm{FT}_{PV}$)]]
Figure 14: CCD Analysis of Loads A and D
Step 4 (Probabilistic analysis):
We can use our proposed formal approach to express subsystem-level
failure/reliability probabilistic expressions of electrical power grids, which
enables us to analyze cascading dependencies across many subsystem levels,
based on any probabilistic distribution. In this work, we assume that the
failure of each component is exponentially distributed (i.e., CDF p X t = 1
$-$ $\mathrm{e}^{(-\lambda_{X}t)}$, where $\lambda_{X}$ is the failure rate of
the variable X and t is the time index).
#### 5.1.1 $\mathcal{FOR}$ Analysis
Using Definitions 12 and 13 with the assumption that the failure states of
components are exponentially distributed, we can formally specify the
probabilistic $\mathcal{FOR}$ expression for both PV and steam power plants,
in HOL4 as:
Definition 16:
$\vdash$ $\mathcal{FOR}_{PV}$ p [LF1;LF2] [DC_DC1;DC_DC2] [SA1;SA2]
[DC_AC1;DC_AC2] =
prob p ($\mathrm{FT}_{PV}$ p ($\downarrow$ [LF1;LF2]) ($\downarrow$
[DC_DC1;DC_DC2])
($\downarrow$ [SA1;SA2]) ($\downarrow$ [DC_AC1;DC_AC2]))
Definition 17:
$\vdash$ $\mathcal{FOR}_{STEAM}$ p [BO1;BO2;BO3] [TA1;TA2;TA3] =
prob p ($\mathrm{FT}_{STEAM}$ p ($\downarrow$ [BO1;BO2;BO3]) ($\downarrow$
[TA1;TA2;TA3]))
where the function $\downarrow$ takes a list of $\mathcal{N}$ components and
assigns an exponential failing event to each component in the list.
We can formally verify the above expressions of $\mathcal{FOR}_{PV}$ and
$\mathcal{FOR}_{STEAM}$ in HOL4 as:
Theorem 23:
$\vdash$ $\mathcal{FOR}_{PV}$ p [LF1;LF2] [DC_DC1;DC_DC2] [SA1;SA2]
[DC_AC1;DC_AC2] =
$1-\mathrm{e}^{(-\lambda_{LF1}t)}$ $\times$ $\mathrm{e}^{(-\lambda_{LF2}t)}$
$\times$ $\mathrm{e}^{(-\lambda_{DC\\_DC1}t)}$ $\times$
$\mathrm{e}^{(-\lambda_{DC\\_DC2}t)}$ $\times$
$\mathrm{e}^{(-\lambda_{SA1}t)}$ $\times$ $\mathrm{e}^{(-\lambda_{SA2}t)}$
$\times$
$\mathrm{e}^{(-\lambda_{DC\\_AC1}t)}$ $\times$
$\mathrm{e}^{(-\lambda_{DC\\_AC2}t)}$
Theorem 24:
$\vdash$ $\mathcal{FOR}_{STEAM}$ p [BO1;BO2;BO3] [TA1;TA2;TA3] =
$(1-\mathrm{e}^{(-\lambda_{BO1}t)})$ $\times$
$(1-\mathrm{e}^{(-\lambda_{BO2}t)})$ $\times$
$(1-\mathrm{e}^{(-\lambda_{BO3}t)})$ $\times$
$(1-\mathrm{e}^{(-\lambda_{TA1}t)})$ $\times$
$(1-\mathrm{e}^{(-\lambda_{TA2}t)})$ $\times$
$(1-\mathrm{e}^{(-\lambda_{TA3}t)})$
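Theorems 23 and 24 can be evaluated numerically once the failure rates are fixed. A sketch assuming exponentially distributed component failures, as above (the function names are ours, not part of the HOL4 development):

```python
from math import exp, prod

def for_pv(rates, t):
    """Theorem 23: the PV plant FT is an OR over all component failures,
    so FOR_PV = 1 - product of component reliabilities e^(-lambda*t)."""
    return 1 - prod(exp(-lam * t) for lam in rates)

def for_steam(rates, t):
    """Theorem 24: the steam plant FT is an AND over component failures,
    so FOR_STEAM = product of component unreliabilities 1 - e^(-lambda*t)."""
    return prod(1 - exp(-lam * t) for lam in rates)
```

At t = 0 both expressions are 0, as expected, and for_pv grows toward 1 much faster than for_steam for the same rates, reflecting the OR vs. AND structure.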
#### 5.1.2 $\mathcal{SAIDI}$ Analysis
Using Theorems 1-24 with the assumption that the failure states of components
are exponentially distributed, we can formally verify $\mathcal{SAIDI}_{G}$
(Eq. 19), in HOL4 as:
Theorem 25:
$\vdash$ $\mathcal{SAIDI}$
[[CONSEQ_PATH p
[DEC_BOX p 1
(FTree p (NOT ($\mathrm{FT}_{STEAM}$ p ($\downarrow$ [BO1;BO2;BO3])
($\downarrow$ [TA1;TA2;TA3]))),
$\mathrm{FT}_{STEAM}$ p ($\downarrow$ [BO1;BO2;BO3]) ($\downarrow$
[TA1;TA2;TA3]));
DEC_BOX p 0
(FTree p (NOT ($\mathrm{FT}_{PV}$ p ($\downarrow$ [LF1;LF2]) ($\downarrow$
[DC_DC1;DC_DC2])
($\downarrow$ [SA1;SA2]) ($\downarrow$ [DC_AC1;DC_AC2]))),
$\mathrm{FT}_{PV}$ p ($\downarrow$ [LF1;LF2]) ($\downarrow$ [DC_DC1;DC_DC2])
($\downarrow$ [SA1;SA2]) ($\downarrow$ [DC_AC1;DC_AC2]))];
[DEC_BOX p 0
(FTree p (NOT ($\mathrm{FT}_{STEAM}$ p ($\downarrow$ [BO1;BO2;BO3])
($\downarrow$ [TA1;TA2;TA3]))),
$\mathrm{FT}_{STEAM}$ p ($\downarrow$ [BO1;BO2;BO3]) ($\downarrow$
[TA1;TA2;TA3]));
DEC_BOX p 1
(FTree p (NOT ($\mathrm{FT}_{PV}$ p ($\downarrow$ [LF1;LF2]) ($\downarrow$
[DC_DC1;DC_DC2])
($\downarrow$ [SA1;SA2]) ($\downarrow$ [DC_AC1;DC_AC2]))),
$\mathrm{FT}_{PV}$ p ($\downarrow$ [LF1;LF2]) ($\downarrow$ [DC_DC1;DC_DC2])
($\downarrow$ [SA1;SA2]) ($\downarrow$ [DC_AC1;DC_AC2]))]];
…]
[MTTR_LoadA;MTTR_LoadB;MTTR_LoadC;MTTR_LoadD]
[CN_LoadA; CN_LoadB; CN_LoadC; CN_LoadD] p =
$\frac{\begin{aligned}
&\Big(\big(\big(1-(1-\mathrm{e}^{-\lambda_{BO1}t})\times(1-\mathrm{e}^{-\lambda_{BO2}t})\times(1-\mathrm{e}^{-\lambda_{BO3}t})\times\\
&\qquad\quad(1-\mathrm{e}^{-\lambda_{TA1}t})\times(1-\mathrm{e}^{-\lambda_{TA2}t})\times(1-\mathrm{e}^{-\lambda_{TA3}t})\big)\times\\
&\quad(1-\mathrm{e}^{-\lambda_{LF1}t}\times\mathrm{e}^{-\lambda_{LF2}t}\times\mathrm{e}^{-\lambda_{DC\\_DC1}t}\times\mathrm{e}^{-\lambda_{DC\\_DC2}t}\times\\
&\qquad\quad\mathrm{e}^{-\lambda_{DC\\_AC1}t}\times\mathrm{e}^{-\lambda_{DC\\_AC2}t}\times\mathrm{e}^{-\lambda_{SA1}t}\times\mathrm{e}^{-\lambda_{SA2}t})+\\
&\quad(1-\mathrm{e}^{-\lambda_{BO1}t})\times(1-\mathrm{e}^{-\lambda_{BO2}t})\times(1-\mathrm{e}^{-\lambda_{BO3}t})\times\\
&\quad(1-\mathrm{e}^{-\lambda_{TA1}t})\times(1-\mathrm{e}^{-\lambda_{TA2}t})\times(1-\mathrm{e}^{-\lambda_{TA3}t})\times\\
&\quad\mathrm{e}^{-\lambda_{LF1}t}\times\mathrm{e}^{-\lambda_{LF2}t}\times\mathrm{e}^{-\lambda_{DC\\_DC1}t}\times\mathrm{e}^{-\lambda_{DC\\_DC2}t}\times\\
&\quad\mathrm{e}^{-\lambda_{DC\\_AC1}t}\times\mathrm{e}^{-\lambda_{DC\\_AC2}t}\times\mathrm{e}^{-\lambda_{SA1}t}\times\mathrm{e}^{-\lambda_{SA2}t}\big)\times\\
&\quad\mathrm{MTTR\\_LoadA}\times\mathrm{CN\\_LoadA}+\dots\Big)\end{aligned}}{\mathrm{CN\\_LoadA}+\mathrm{CN\\_LoadB}+\mathrm{CN\\_LoadC}+\mathrm{CN\\_LoadD}}$
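The verified theorem aggregates, for each load point, the probability of the CCD path combinations that interrupt that load, weighted by its mean repair duration and customer count, and normalizes by the total number of customers (Eq. 19). A minimal sketch of this aggregation, where the per-load interruption probabilities are hypothetical placeholders (the theorem derives the real ones from the CCD paths of Fig. 11):

```python
def saidi(p_interrupt, mttr_hours, customers):
    """SAIDI as in Eq. 19: sum_i P_i * MTTR_i * CN_i / sum_i CN_i.

    p_interrupt : per-load interruption probabilities over the study period
    mttr_hours  : mean time to repair per interruption, per load point
    customers   : number of customers per load point
    """
    num = sum(p * m * c for p, m, c in zip(p_interrupt, mttr_hours, customers))
    return num / sum(customers)

# MTTR and CN values from Section 5.2; the interruption probabilities
# below are placeholders chosen only to exercise the function.
mttr = [12, 20, 15, 10]
cn = [500, 1800, 900, 2500]
p = [0.45, 0.45, 0.45, 0.45]  # hypothetical
print(saidi(p, mttr, cn))
```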
To further facilitate the exploitation of our proposed approach by power grid
reliability engineers, we defined Standard Meta Language (SML) functions
[33] that can numerically evaluate the above-verified expressions of
$\mathcal{FOR}_{PV}$, $\mathcal{FOR}_{STEAM}$, and $\mathcal{SAIDI}$.
Subsequently, we compared our results with a MATLAB CCD algorithm based on
Monte-Carlo Simulation (MCS) and with other existing subsystem-level
reliability analysis techniques, such as HiP-HOPS and FMR, to ensure the
accuracy of our computations; the comparison is presented in the next section.
### 5.2 Experimental Results and Discussion
The failure rates of the power plant components
$\lambda_{\mathrm{BO}}$, $\lambda_{\mathrm{TA}}$, $\lambda_{\mathrm{LF}}$,
$\lambda_{\mathrm{DC\\_DC}}$, $\lambda_{\mathrm{DC\\_AC}}$ and
$\lambda_{\mathrm{SA}}$ are taken to be 0.91, 0.84, 0.96, 0.67, 0.22, and 0.56 per year
[38], respectively. We further assume that $\mathrm{MTTR}_{\mathrm{Load}_{A}}$,
$\mathrm{MTTR}_{\mathrm{Load}_{B}}$, $\mathrm{MTTR}_{\mathrm{Load}_{C}}$, and
$\mathrm{MTTR}_{\mathrm{Load}_{D}}$ are 12, 20, 15, and 10 hours/interruption
[39], and that $\mathrm{CN}_{\mathrm{Load}_{A}}$, $\mathrm{CN}_{\mathrm{Load}_{B}}$,
$\mathrm{CN}_{\mathrm{Load}_{C}}$, and $\mathrm{CN}_{\mathrm{Load}_{D}}$ are
500, 1800, 900, and 2500 customers, respectively. The reliability study is
undertaken for 1 year, i.e., t = 8760 hours. Based on these data, we can
evaluate $\mathcal{FOR}$ and $\mathcal{SAIDI}$ for the electrical power
network (Fig. 11) using the following techniques:
1. 1.
Our proposed SML functions to evaluate the verified expressions of
$\mathcal{FOR}_{PV}$, $\mathcal{FOR}_{STEAM}$, and $\mathcal{SAIDI}$ in HOL4
(Theorems 23-25), as shown in Fig. 15.
Figure 15: SML Functions: $\mathcal{FOR}$ and $\mathcal{SAIDI}$ Results
2. 2.
MATLAB MCS-based toolbox that uses a random-based algorithm to obtain
$\mathcal{FOR}$ and $\mathcal{SAIDI}$ for the electrical grid. The steps
followed in this technique are as follows [40]:
* •
Read the values of failure rate $\lambda$ in f/hours and repair time r in
hours for each component
* •
Generate a random number U
* •
Calculate the predicted next Time to Fail (TTF) and Time to Repair (TTR) from
the equations
$TTF=\frac{-\ln{U}}{\lambda},\qquad TTR=\frac{-\ln{U}}{r}$ (20)
* •
Repeat the above iterative process till the number of iterations exceeds 1e5
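The iterative loop above can be sketched as follows, assuming exponentially distributed failure and repair times. Here the repair draw is written as TTR = -r ln U, the inverse-transform form for an exponential whose mean is the repair time r; the single component and its parameters are illustrative, and this is only the sampling core, not the full $\mathcal{FOR}$/$\mathcal{SAIDI}$ pipeline of [40]:

```python
import math
import random

def mcs_unavailability(components, horizon_hours, runs=10**5, seed=0):
    """Crude sequential Monte-Carlo estimate of component unavailability.

    components: dict name -> (failure rate per hour, mean repair time in hours).
    For each run, alternate TTF = -ln(U)/lambda and TTR = -r*ln(U) draws
    (inverse-transform sampling of exponential variates) and accumulate the
    fraction of the horizon spent under repair.
    """
    rng = random.Random(seed)
    down = {name: 0.0 for name in components}
    for _ in range(runs):
        for name, (lam, r) in components.items():
            t = 0.0
            while t < horizon_hours:
                t += -math.log(rng.random()) / lam      # time to fail
                if t >= horizon_hours:
                    break
                ttr = -r * math.log(rng.random())       # time to repair
                down[name] += min(ttr, horizon_hours - t)
                t += ttr
    return {name: d / (runs * horizon_hours) for name, d in down.items()}

# Illustrative single component: failure rate 0.91/year, 12 h mean repair.
est = mcs_unavailability({"BO1": (0.91 / 8760, 12.0)},
                         horizon_hours=8760, runs=2000)
print(est["BO1"])  # close to 0.91*12/8760, up to sampling noise
```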
Based on the above-mentioned MCS steps, we obtain different $\mathcal{FOR}$ and
$\mathcal{SAIDI}$ results on every run of the algorithm, depending on the
generated random numbers, with a tolerance error between 4% and 9%. Table 7
therefore presents the best-estimated $\mathcal{FOR}$ and $\mathcal{SAIDI}$
results obtained in MATLAB using the MCS approach, i.e., those with the least
errors. Subsequently, we take the mean of all the obtained $\mathcal{FOR}$ and
$\mathcal{SAIDI}$ results for the power grid.
Table 7: MATLAB MCS: $\mathcal{FOR}$ and $\mathcal{SAIDI}$ Results Run | $\mathcal{FOR}_{PV}$ | $\mathcal{FOR}_{STEAM}$ | $\mathcal{SAIDI}$
---|---|---|---
1 | 88.55e-2 | 36.18e-3 | 5.8023
2 | 107.19e-2 | 40.03e-3 | 6.5045
3 | 93.52e-2 | 36.35e-3 | 6.0222
4 | 95.24e-2 | 38.66e-3 | 6.3960
5 | 110.17e-2 | 43.03e-3 | 7.0495
Average | 98.93e-2 | 38.85e-3 | 6.3549
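The Average row of Table 7 is simply the arithmetic mean of the five runs; a quick check:

```python
# Per-run MATLAB MCS results from Table 7.
runs = {
    "FOR_PV":    [88.55e-2, 107.19e-2, 93.52e-2, 95.24e-2, 110.17e-2],
    "FOR_STEAM": [36.18e-3, 40.03e-3, 36.35e-3, 38.66e-3, 43.03e-3],
    "SAIDI":     [5.8023, 6.5045, 6.0222, 6.3960, 7.0495],
}
avg = {k: sum(v) / len(v) for k, v in runs.items()}
print(avg)  # FOR_PV ~ 98.93e-2, FOR_STEAM ~ 38.85e-3, SAIDI ~ 6.3549
```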
3. 3.
The Failure Mode Reasoning (FMR) approach, which identifies all the failure
modes of safety-critical system inputs that can result in an undesired state
at its output. The FMR process consists of four main stages [10]:
1. (a)
Composition: Failure mode variables are defined and a set of logical
implication statements is generated that express local failure modes.
2. (b)
Substitution: Local statements are combined to create a single global
implication statement between the critical system's inputs and outputs.
3. (c)
Simplification: The complex formula is simplified by trimming off any
redundant statements.
4. (d)
Calculation: The probability of failure is evaluated using the component
failure rates.
Based on the above-mentioned FMR procedures, we can express the component-
level failure analysis of the PV power plant (Fig. 13) as:
$(\hat{o}=\dot{f})\Rightarrow(\hat{x_{1}}=\dot{f}\lor\hat{x_{2}}=\dot{f})$
(21)
The above equation means that if the output $o$ is False by fault, then at
least one of the inputs to the OR gate, i.e., $x_{1}$ or $x_{2}$, must be
False by fault. We now need to determine what can cause $\hat{x_{1}}=\dot{f}$
and $\hat{x_{2}}=\dot{f}$. Similar to Eq. 6, we can write:
$(\hat{x_{1}}=\dot{f})\Rightarrow(\hat{x_{3}}=\dot{f}\lor\hat{x_{4}}=\dot{f}\lor\hat{x_{5}}=\dot{f}\lor\hat{x_{6}}=\dot{f})$ (22)
$(\hat{x_{2}}=\dot{f})\Rightarrow(\hat{x_{7}}=\dot{f}\lor\hat{x_{8}}=\dot{f}\lor\hat{x_{9}}=\dot{f}\lor\hat{x_{10}}=\dot{f})$ (23)
where $x_{3}$, $x_{4}$, $x_{5}$, $x_{6}$, $x_{7}$, $x_{8}$, $x_{9}$, $x_{10}$
are $LF_{1}$, $DC\\_DC_{1}$, $DC\\_AC_{1}$, $SA_{1}$, $LF_{2}$, $DC\\_DC_{2}$,
$DC\\_AC_{2}$, $SA_{2}$, respectively. Similarly, we can express the
component-level failure analysis of the steam power plant (Fig. 13) as:
$(\hat{o}=\dot{f})\Rightarrow(\hat{x_{11}}=\dot{f}\wedge\hat{x_{12}}=\dot{f}\wedge\hat{x_{13}}=\dot{f})$ (24)
$(\hat{x_{11}}=\dot{f})\Rightarrow(\hat{x_{14}}=\dot{f}\wedge\hat{x_{15}}=\dot{f})$ (25)
$(\hat{x_{12}}=\dot{f})\Rightarrow(\hat{x_{16}}=\dot{f}\wedge\hat{x_{17}}=\dot{f})$ (26)
$(\hat{x_{13}}=\dot{f})\Rightarrow(\hat{x_{18}}=\dot{f}\wedge\hat{x_{19}}=\dot{f})$ (27)
where $x_{14}$, $x_{15}$, $x_{16}$, $x_{17}$, $x_{18}$, $x_{19}$, are
$BO_{1}$, $TA_{1}$, $BO_{2}$, $TA_{2}$, $BO_{3}$, $TA_{3}$, respectively.
Table 8 shows the results of $\mathcal{FOR}_{PV}$, $\mathcal{FOR}_{STEAM}$,
and $\mathcal{SAIDI}$ based on FMR analysis using the assumed failure rates of
the power plant components.
Table 8: FMR: $\mathcal{FOR}$ and $\mathcal{SAIDI}$ Results $\mathcal{FOR}_{PV}$ | $\mathcal{FOR}_{STEAM}$ | $\mathcal{SAIDI}$
---|---|---
99.19e-2 | 38.87e-3 | 6.3728
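As a numerical cross-check on Table 8, the series/parallel structure encoded by Eqs. 21-27 can be evaluated directly, assuming exponentially distributed failures, t = 1 year, and the failure rates of Section 5.2 (a sketch, not the HOL4 development):

```python
import math

t = 1.0  # years
rate = {"BO": 0.91, "TA": 0.84, "LF": 0.96,
        "DC_DC": 0.67, "DC_AC": 0.22, "SA": 0.56}  # per year [38]
fail = {k: 1.0 - math.exp(-lam * t) for k, lam in rate.items()}

# PV plant (Eqs. 21-23): the output fails if any of the 8 series
# components (2 each of LF, DC_DC, DC_AC, SA) fails.
for_pv = 1.0 - math.prod(
    (1.0 - fail[k]) ** 2 for k in ("LF", "DC_DC", "DC_AC", "SA"))

# Steam plant (Eqs. 24-27): the output fails only if all three parallel
# boiler/turbine strings fail (each string fails when BO and TA are both down).
for_steam = (fail["BO"] * fail["TA"]) ** 3

print(for_pv, for_steam)
```

This reproduces $\mathcal{FOR}_{PV}\approx$ 99.19e-2 and gives $\mathcal{FOR}_{STEAM}\approx$ 39e-3, in good agreement with the FMR values above.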
According to Jahanian et al. [11], the soundness of the obtained FMR equations
(Eq. 21 to Eq. 27) needs to be proven mathematically.
Figure 16: HiP-HOPS: PV Plant FMECA Analysis
4. 4.
The HiP-HOPS software for failure analysis, which can perform FMECA analysis
by given architectural blocks that hierarchically describe a safety-critical
system at the subsystem level. Fig. 16 and Fig. 17 depict the FMECA analysis
of the PV and steam power plants using the HiP-HOPS software, respectively.
The probabilistic results of $\mathcal{FOR}_{PV}$, $\mathcal{FOR}_{STEAM}$,
and $\mathcal{SAIDI}$ based on HiP-HOPS analysis are equivalent to the FMR
analysis results presented in Table 8.
Figure 17: HiP-HOPS: Steam Plant FMECA Analysis
It can be observed that the $\mathcal{SAIDI}$ results obtained from our formal
HOL4 analysis are approximately equal to the corresponding ones calculated
using the FMR and HiP-HOPS approaches. On the other hand, the MATLAB MCS-based
approach uses a random-based algorithm and therefore estimates different
results of $\mathcal{FOR}$ and $\mathcal{SAIDI}$ on every run, with errors
between 4% and 9%. This clearly demonstrates that our analysis not only
provides the correct results but does so with formally proven reliability
expressions (Theorems 23-25), in contrast to simulation tools, i.e., it
ensures the soundness of subsystem-level reliability analysis. By performing
the formal CCD step-analysis of a real-world 39-bus electrical power network,
we demonstrated the practical effectiveness of the proposed CCD formalization
in HOL4, which will help design engineers to meet the desired quality
requirements. Also, our proposed formal approach can be used to analyze
larger-scale CCD models of other complex electrical power system
applications, such as smart grids [1].
## 6 Conclusions
In this work, we developed a formal approach for Cause-Consequence Diagrams
(CCD), which enables safety engineers to perform $\mathcal{N}$-level CCD
analysis of safety-critical systems within the sound environment of the HOL4
theorem prover. Our proposed approach provides new CCD mathematical
formulations, whose correctness was verified in the HOL4 theorem prover.
These formulations are capable of performing CCD analysis of multi-state
system components based on any given probabilistic distribution and failure
rates. These features are not available in any other existing approach for
subsystem-level reliability analysis. The proposed formalization is limited
to performing CCD-based reliability analysis at the subsystem level,
integrating static dependability analysis. However, this formalization is
generic and can be extended to perform dynamic failure analysis of
subsystems in which no dependencies exist across different subsystems. We
demonstrated the practical effectiveness of the proposed CCD
formalization by performing the formal CCD step-analysis of a standard IEEE
39-bus electrical power network system and also formally verified the power
plants Force Outage Rate ($\mathcal{FOR}$) and the System Average Interruption
Duration Index ($\mathcal{SAIDI}$). Eventually, we compared the
$\mathcal{FOR}$ and $\mathcal{SAIDI}$ results obtained from our formal CCD-
based reliability analysis with the corresponding ones using MATLAB based on
Monte-Carlo Simulation (MCS), the HiP-HOPS software tool, and the Failure Mode
Reasoning (FMR) approach. As future work, we plan to integrate Reliability
Block Diagrams (RBDs) [41] as reliability functions in the CCD analysis, which
will enable us to analyze hierarchical systems with different component
success configurations, based on our CCD formalization in the HOL4 theorem
prover.
## References
* [1] X. Fang, S. Misra, G. Xue, and D. Yang, “Smart Grid—The New and Improved Power Grid: A Survey,” _IEEE Communications Surveys & Tutorials_, vol. 14, no. 4, pp. 944–980, 2011.
* [2] M. Rahman, “Power Electronics and Drive Applications for the Automotive Industry,” in _Conference on Power Electronics Systems and Applications_. IEEE, 2004, pp. 156–164.
* [3] J. D. Andrews and L. M. Ridley, “Reliability of Sequential Systems Using the Cause—Consequence Diagram Method,” _Part E: Journal of Process Mechanical Engineering_ , vol. 215, no. 3, pp. 207–220, 2001.
* [4] M. Towhidnejad, D. R. Wallace, and A. M. Gallo, “Fault Tree Analysis for Software Design,” in _NASA Goddard Software Engineering Workshop_ , 2002, pp. 24–29.
* [5] I. A. Papazoglou, “Mathematical Foundations of Event Trees,” _Reliability Engineering $\&$ System Safety_, vol. 61, no. 3, pp. 169–183, 1998.
* [6] O. Bäckström, Y. Butkova, H. Hermanns, J. Krčál, and P. Krčál, “Effective Static and Dynamic Fault Tree Analysis,” in _Computer Safety, Reliability, and Security_ , ser. LNCS, vol. 9922. Springer, 2016, pp. 266–280.
* [7] Y. Papadopoulos, M. Walker, D. Parker, E. Rüde, R. Hamann, A. Uhlig, U. Grätz, and R. Lien, “Engineering Failure Analysis and Design Optimisation with HiP-HOPS,” _Engineering Failure Analysis_ , vol. 18, no. 2, pp. 590–608, 2011.
* [8] HiP-HOPS, 2020. [Online]. Available: https://hip-hops.co.uk/
* [9] S. Kabir, K. Aslansefat, I. Sorokos, Y. Papadopoulos, and Y. Gheraibia, “A Conceptual Framework to Incorporate Complex Basic Events in HiP-HOPS,” in _Model-Based Safety and Assessment_ , ser. LNCS, vol. 11842. Springer, 2019, pp. 109–124.
* [10] H. Jahanian, “Failure Mode Reasoning,” in _International Conference on System Reliability and Safety_. IEEE, 2019, pp. 295–303.
* [11] H. Jahanian, D. Parker, M. Zeller, A. McIver, and Y. Papadopoulos, “Failure Mode Reasoning in Model Based Safety Analysis,” 2020. [Online]. Available: https://arxiv.org/abs/2005.06279
* [12] M. Čepin, _Assessment of Power System Reliability: Methods and Applications_. Springer Science & Business Media, 2011.
* [13] T. Liu, J. Tong, and J. Zhao, “Probabilistic Risk Assessment Framework Development for Nuclear Power Plant,” in _International Conference on Industrial Engineering and Engineering Management_. IEEE, 2008, pp. 1330–1334.
* [14] J. D. Andrews and L. M. Ridley, “Application of the Cause-Consequence Diagram Method to Static Systems,” _Reliability Engineering & System Safety_, vol. 75, no. 1, pp. 47–58, 2002.
* [15] L. M. Ridley, “Dependency Modelling Using Fault-Tree and Cause-Consequence Analysis,” Ph.D. dissertation, Loughborough University, UK, 2000.
* [16] M. Bevilacqua, M. Braglia, and R. Gabbrielli, “Monte Carlo Simulation Approach for a Modified FMECA in a Power Plant,” _Quality and Reliability Engineering International_ , vol. 16, no. 4, pp. 313–324, 2000.
* [17] R. E. Mackiewicz, “Overview of IEC 61850 and Benefits,” in _Power Engineering Society General Meeting_. IEEE, 2006, pp. 623–630.
* [18] B. Gallina, E. Gómez-Martínez, and C. B. Earle, “Deriving Safety Case Fragments for Assessing MBASafe’s Compliance with EN 50128,” in _Conference on Software Process Improvement and Capability Determination_. Springer, 2016, pp. 3–16.
* [19] R. Palin, D. Ward, I. Habli, and R. Rivett, “ISO 26262 Safety Cases: Compliance and Assurance,” in _Conference on System Safety_ , 2011, pp. 1–6.
* [20] O. Hasan and S. Tahar, “Formal verification methods,” in _Encyclopedia of Information Science and Technology, Third Edition_. IGI Global, 2015, pp. 7162–7170.
* [21] HOL Theorem Prover, 2020. [Online]. Available: https://hol-theorem-prover.org
* [22] J. J. Grainger and W. D. Stevenson, _Power System Analysis_. McGraw-Hill, 2003.
* [23] F. Ortmeier, W. Reif, and G. Schellhorn, “Deductive Cause-Consequence Analysis,” _IFAC Proceedings Volumes_ , vol. 38, no. 1, pp. 62–67, 2005.
* [24] SMV, 2020. [Online]. Available: http://www.cs.cmu.edu/~modelcheck/smv.html
* [25] D. Miller and G. Nadathur, _Programming with higher-Order Logic_. Cambridge University Press, 2012.
* [26] W. Ahmad and O. Hasan, “Towards Formal Fault Tree Analysis Using Theorem Proving,” in _Intelligent Computer Mathematics_ , ser. LNCS, vol. 9150. Springer, 2015, pp. 39–54.
* [27] Y. Elderhalli, O. Hasan, and S. Tahar, “A Methodology for the Formal Verification of Dynamic Fault Trees using HOL Theorem Proving,” _IEEE Access_ , vol. 7, pp. 136 176–136 192, 2019.
* [28] M. Abdelghany, W. Ahmad, and S. Tahar, “A Formally Verified HOL4 Algebra for Event Trees,” 2020. [Online]. Available: http://arxiv.org/abs/2004.14384
* [29] O. Hasan, N. Abbasi, B. Akbarpour, S. Tahar, and R. Akbarpour, “Formal Reasoning About Expectation Properties for Continuous Random Variables,” in _Formal Methods_ , ser. LNCS, vol. 5850. Springer, 2009, pp. 435–450.
* [30] G. Vyzaite, S. Dunnett, and J. Andrews, “Cause-Consequence Analysis of Non-Repairable Phased Missions,” _Reliability Engineering & System Safety_, vol. 91, no. 4, pp. 398–406, 2006.
* [31] H. Xu and J. Dugan, “Combining Dynamic Fault Trees and Event Trees for Probabilistic Risk Assessment,” in _Symposium Reliability and Maintainability_. IEEE, 2004, pp. 214–219.
* [32] L. R. Olsen, J. A. Kay, and M. Van Krey, “Enhanced Safety Features in Motor Control Centers and Drives for Diagnostics and Troubleshooting,” in _IAS Electrical Safety_. IEEE, 2015, pp. 1–9.
* [33] M. Abdelghany, “Cause-Consequence Diagrams Formalization in HOL4,” 2020. [Online]. Available: https://github.com/hvg-concordia/CCD
* [34] R. N. Allan, _Reliability Evaluation of Power Systems_. Springer Science & Business Media, 2013.
* [35] G. Bhatt and S. Affljulla, “Analysis of Large Scale PV Penetration Impact on IEEE 39-Bus Power System,” in _Riga Technical University Conference on Power and Electrical Engineering_. IEEE, 2017, pp. 1–6.
* [36] D. Gan, R. J. Thomas, and R. D. Zimmerman, “Stability-Constrained Optimal Power Flow,” _IEEE Transactions on Power Systems_ , vol. 15, no. 2, pp. 535–540, 2000.
* [37] A. Alferidi and R. Karki, “Development of Probabilistic Reliability Models of Photo-Voltaic System Topologies for System Adequacy Evaluation,” _Applied Sciences_ , vol. 7, no. 2, p. 176, 2017.
* [38] W. Li _et al._ , _Reliability Assessment of Electric Power Systems Using Monte Carlo Methods_. Springer Science & Business Media, 2013.
* [39] G. J. Anders and A. Vaccaro, _Innovations in Power Systems Reliability_. Springer, 2011.
* [40] A. K. Pradhan, S. K. Kar, P. Dash _et al._ , “Implementation of Monte Carlo Simulation to the Distribution Network for Its Reliability Assessment,” in _Innovation in Electrical Power Engineering, Communication, and Computing Technology_. Springer, 2020, pp. 219–228.
* [41] W. Ahmed, O. Hasan, and S. Tahar, “Formalization of Reliability Block Diagrams in Higher-Order Logic,” _Journal of Applied Logic_ , vol. 18, pp. 19–41, 2016.
In Memory of Maryam Mirzakhani (1977-2017)
# There Are No Odd Perfect Numbers
HOOSHANG SAEID–NIA<EMAIL_ADDRESS>Social Page: www.instagram.com/h.s.nia
###### Abstract.
While even perfect numbers are completely characterized, the existence or
otherwise of odd perfect numbers is an open problem. We address that problem
and prove that if a natural number is odd, then it’s not perfect.
###### Key words and phrases:
perfect numbers, odd perfect, Mersenne primes
###### 2010 Mathematics Subject Classification:
Primary 11N25, Secondary 11Y50.
## 1\. introduction
The following definition underlies one of the most ancient open problems in number theory:
###### Definition.
(Perfect Number) A natural number $n$ is said to be perfect if the sum of all
its (positive) divisors, including $n$ itself, is equal to $2n$.
(1.1) $\sum_{d|n}d=2n.$
Example: 6, 28, 496, and 8128 are the first few perfect numbers. $\lhd$
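A direct search confirms these values (a quick brute-force sketch):

```python
def sigma(n):
    """Sum of the positive divisors of n, including n itself."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

perfect = [n for n in range(1, 10000) if sigma(n) == 2 * n]
print(perfect)  # [6, 28, 496, 8128]
```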
All even perfect numbers are completely determined [1] by the following
theorem:
###### Theorem.
(Even Perfect Numbers)
A) Euclid (300 B.C.): If $2^{p}-1$ is prime, then $n=2^{p-1}(2^{p}-1)$ is a
perfect number (Elements, Book IX, Proposition 36, as cited in [1]).
B) Euler (1707–1783): If $n$ is an even perfect number, then the converse of
part (A) also holds, i.e., $n$ is of Euclid's form. [1]
The ancient open problem is whether any odd perfect numbers exist.
Section 2 of this inquiry is a concise review of the preliminaries. In Section
3 we offer a proof that odd perfect numbers do not exist.
A good account of previous work on this topic can be found, for example in [2]
and [3].
## 2\. Preliminaries
It’s understood that 1 is not a perfect number: the sum of the divisors of 1
is 1, which is not two times 1. Therefore $n>1$, and it can be uniquely
factorized as
$n=\prod_{i=1}^{m}p_{i}^{a_{i}}\quad(p_{i}\ \text{odd prime},\ a_{i}\ \text{positive}).$
The sum of divisors of $n$ is usually denoted by $\sigma(n)$. For a prime
power,
(2.1) $\sigma(p^{a})=\sum_{j=0}^{a}p^{j}=\frac{p^{a+1}-1}{p-1}=p\,\sigma(p^{a-1})+1,\quad[a\geq 1,\ \sigma(1)=1].$
$\sigma$ is a multiplicative function [1]. So for every $n$ in general, it can
be written as:
(2.2) $\sigma(n)=\prod_{i=1}^{m}\sigma(p_{i}^{a_{i}}),$
Therefore, using (1.1), our main equation is
(2.3)
$\prod_{i=1}^{m}\sigma(p_{i}^{a_{i}})=2\prod_{i=1}^{m}p_{i}^{a_{i}}\quad(p_{i}\neq 2,\ a_{i}>0,\ m\geq 1).$
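Both (2.1) and (2.2) are easy to sanity-check numerically; a brute-force sketch:

```python
def sigma(n):
    """Brute-force sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Multiplicativity (2.2): divisor sums of coprime factors multiply.
assert sigma(4) * sigma(3) == sigma(12)  # 7 * 4 == 28
# Prime-power identity (2.1): sigma(p^a) = p * sigma(p^(a-1)) + 1.
p, a = 3, 4
assert sigma(p**a) == p * sigma(p**(a - 1)) + 1
print(sigma(12))  # 28
```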
To solve the problem one must either exhibit an explicit odd number with this
property, or – as we do in the next section – prove that this is impossible.
## 3\. Odd perfect numbers don’t exist
###### Proposition.
If a natural number is odd, then it’s not a perfect number.
###### Proof.
We argue by contradiction. Let $n>1$ be an odd number and assume, contrary to
the statement above, that $n$ is perfect. Hence, 2.3 holds. For $m=1,2$ the
cases are easily handled ([1]). For $m\geq 3$, we begin with the basic
observation that exactly one $\sigma(p_{i}^{a_{i}})$ must be even, and it is
of the form $2S$ with $S$ odd. We may assume that $i=1$, relabeling the
primes if necessary.
(3.1) $\sigma(p_{1}^{a_{1}})=2S$
$\sigma(p_{1}^{a_{1}})$ is a divisor of $2n$, and it’s coprime to
$p_{1}^{a_{1}}$. So,
(3.2) $S=\prod_{i=2}^{m}p_{i}^{b_{i}},$
in which $0\leq b_{i}\leq a_{i}$, with at least one $b_{i}>0$. We define
(3.3) $G=\prod_{i=2}^{m}p_{i}^{c_{i}},$
so that $b_{i}+c_{i}=a_{i}$. Now we have
(3.4) $\prod_{i=2}^{m}p_{i}^{a_{i}}=SG,$
and
(3.5) $\prod_{i=2}^{m}\sigma(p_{i}^{a_{i}})=p_{1}^{a_{1}}G.$
Now we regroup the divisors of $n$ and add them in three batches, as follows:
(3.6)
$p_{1}\sigma(p_{1}^{a_{1}-1})+p_{1}^{a_{1}}G+\prod_{i=1}^{m}p_{i}\sigma(p_{i}^{a_{i}-1})=2n.$
The first summand, $p_{1}\sigma(p_{1}^{a_{1}-1})$, is the sum of those
divisors that consist of one or more $p_{1}$ and nothing else. In other words,
$p_{1}\sigma(p_{1}^{a_{1}-1})=p_{1}+p_{1}^{2}+\ldots+p_{1}^{a_{1}}$. The
second term, $p_{1}^{a_{1}}G$, was already introduced as
$\prod_{i=2}^{m}\sigma(p_{i}^{a_{i}})$ in equation 3.5. It contains all of
those divisors that contain no $p_{1}$; in particular, $1$ belongs to this
category. The third term, $\prod_{i=1}^{m}p_{i}\sigma(p_{i}^{a_{i}-1})$, is,
as one expects, the rest: it comprises those divisors that contain at least
one $p_{1}$ together with some other prime or primes.
On the other hand, we can write:
(3.7) $p_{1}^{a_{1}}G+p_{1}^{a_{1}+1}\sigma(p_{1}^{a_{1}-1})G=2n.$
This can be obtained by writing
$\sigma(p_{1}^{a_{1}})\prod_{i=2}^{m}\sigma(p_{i}^{a_{i}})=2n$, then using 3.5
and the fact that $\sigma(p_{1}^{a_{1}})=p_{1}\sigma(p_{1}^{a_{1}-1})+1$,
(which was mentioned earlier, in the previous section, as 2.1) to get
$[p_{1}\sigma(p_{1}^{a_{1}-1})+1]p_{1}^{a_{1}}G=2n$.
Finally, we compare 3.6 and 3.7 to get:
(3.8)
$p_{1}\sigma(p_{1}^{a_{1}-1})+\prod_{i=1}^{m}p_{i}\sigma(p_{i}^{a_{i}-1})=p_{1}^{a_{1}+1}\sigma(p_{1}^{a_{1}-1})G.$
(We eliminated $2n$ between them, then canceled the common $p_{1}^{a_{1}}G$
term which appeared on both sides.) Now we divide through by
$p_{1}\sigma(p_{1}^{a_{1}-1})$, to get:
(3.9)
$1+\prod_{i=2}^{m}p_{i}\sigma(p_{i}^{a_{i}-1})=p_{1}^{a_{1}}G=\prod_{i=2}^{m}\sigma(p_{i}^{a_{i}})=\prod_{i=2}^{m}[1+p_{i}\sigma(p_{i}^{a_{i}-1})]$
(Here, 2.1 was used again.) The equality between
$1+\prod_{i=2}^{m}p_{i}\sigma(p_{i}^{a_{i}-1})$ and
$\prod_{i=2}^{m}[1+p_{i}\sigma(p_{i}^{a_{i}-1})]$ implies that some positive
terms of the latter expression must vanish, unless $m=2$. We already knew
that all perfect numbers with $m=2$ are even, so we assumed $m\geq 3$. One
way or another (positive terms being zero, $n$ being even, or $m=2$), from
these contradictions we conclude that an odd number $n$ cannot be perfect.
This completes the proof. ∎
## 4\. Conclusion
We proved that perfect numbers are always even, and therefore always built
from Mersenne primes. Whether or not the set of Mersenne primes is infinite
is another interesting open problem [3].
## References
* [1] John Voight, _Perfect Numbers: An Elementary Introduction_ , http://math.dartmouth.edu/~jvoight/notes/perfelem.pdf
* [2] Oliver Knill, _The oldest open problem in mathematics_ , NEU Math Circle, 2007. http://www.math.harvard.edu/~knill/seminars/perfect/handout.pdf
* [3] Richard K. Guy, _Unsolved Problems in Number Theory_ , Springer, 2004.
# Fuzzy Dark Matter and the 21 cm Power Spectrum
Dana Jones1, Skyler Palatnick1, Richard Chen1, Angus Beane2, Adam Lidz1
dhjones<EMAIL_ADDRESS>
###### Abstract
We model the 21 cm power spectrum across the Cosmic Dawn and the Epoch of
Reionization (EoR) in fuzzy dark matter (FDM) cosmologies. The suppression of
small mass halos in FDM models leads to a delay in the onset redshift of these
epochs relative to cold dark matter (CDM) scenarios. This strongly impacts the
21 cm power spectrum and its redshift evolution. The 21 cm power spectrum at a
given stage – i.e., compared at fixed average brightness temperature but
varying redshift – of the EoR/Cosmic Dawn process is also modified: in
general, the amplitude of 21 cm fluctuations is boosted by the enhanced bias
factor of galaxy hosting halos in FDM. We forecast the prospects for
discriminating between CDM and FDM with upcoming power spectrum measurements
from HERA, accounting for degeneracies between astrophysical parameters and
dark matter properties. If FDM constitutes the entirety of the dark matter and
the FDM particle mass is $10^{-21}$ eV, HERA can determine the mass to within
20% at $2\sigma$ confidence.
###### Subject headings:
cosmology: theory – intergalactic medium – large scale structure of universe
1 Department of Physics & Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104, USA; 2 Center for Astrophysics — Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
## 1\. Introduction
In spite of decades of effort, the particle properties of dark matter remain
mysterious. One well-motivated possibility is that the dark matter consists of
elementary particles with weak-scale interaction cross sections and particle
masses (i.e., masses of order 100 GeV or thereabouts); these particles were
produced thermally in the early universe and are non-relativistic during
structure formation, behaving as cold dark matter (CDM). However, direct
detection experiments, collider searches, and indirect methods have yet to
make convincing detections and have placed increasingly stringent bounds on
these weakly-interacting massive particle (WIMP) candidates (see e.g. the
review by Arcadi et al. 2018). Although regions of parameter space remain
unconstrained, and hence it is still feasible that WIMPs make up the entirety
of the dark matter, the recent limits have further motivated the study of
alternative possibilities.
Among these, an intriguing case is that of fuzzy dark matter (FDM; Hu et al.,
2000). In FDM the dark matter consists of extremely light scalar particles
with masses of order $m_{\mathrm{FDM}}\sim 10^{-22}$ eV. This possibility is
well-motivated by the ubiquitous presence of ultralight scalar fields in
theories beyond the standard model of particle physics, while the present day
dark matter abundance may be naturally obtained for this general mass range
(Hui et al., 2017). Furthermore, FDM has distinctive astrophysical signatures
that may allow one to confirm or rule out its presence. In particular, the
small particle mass in FDM gives rise to macroscopic de Broglie wavelengths,
which can be $\sim$ kpc in scale depending on the particle velocity, and this
leads to a host of interesting astrophysical consequences. In general, FDM
preserves the well-established success of CDM on large-scales, while providing
different predictions on small-scales (Hui et al., 2017).
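To make the quoted scale concrete, the reduced de Broglie wavelength $\lambda/(2\pi)=\hbar/(mv)$ can be evaluated directly; the particle mass and velocity below are illustrative choices within the ranges quoted above:

```python
hbar = 1.054571817e-34      # J s
eV = 1.602176634e-19        # J
c = 2.99792458e8            # m/s
kpc = 3.0857e19             # m

m_fdm = 1e-22 * eV / c**2   # kg; illustrative FDM particle mass
v = 1e4                     # m/s; ~10 km/s halo velocity (illustrative)

lam_bar = hbar / (m_fdm * v)   # reduced de Broglie wavelength, lambda/(2*pi)
print(lam_bar / kpc)           # ~ 1.9 kpc
```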
One consequence of the macroscopic de Broglie wavelengths in FDM is that the
power spectrum of initial density fluctuations is truncated on small scales
(Hu et al., 2000). This strongly suppresses the abundance of small mass dark
matter halos relative to the case of CDM. This should, in turn, lead to delays
in the earliest phases of galaxy formation; in CDM small mass halos collapse
first and galaxies form as gas subsequently falls into the dark matter
potential wells, cools, and fragments to form stars (Gunn et al., 1978; White
& Rees, 1978). In FDM this process is delayed until halos above the
suppression mass start to collapse. A promising way of testing FDM is
therefore to study the Epoch of Reionization (EoR) and Cosmic Dawn eras when
the first galaxies form, emit ultraviolet light, and gradually photoionize and
heat the surrounding intergalactic medium (IGM) (Hu et al., 2000; Bozek et
al., 2015; Lidz & Hui, 2018).
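The small-scale truncation can be approximated by the fitting formula of Hu et al. (2000), $T_{\rm FDM}(k)\approx\cos(x^{3})/(1+x^{8})$ with $x=1.61\,m_{22}^{1/18}\,k/k_{J,\rm eq}$ and $k_{J,\rm eq}\approx 9\,m_{22}^{1/2}\,{\rm Mpc}^{-1}$, where $m_{22}=m_{\rm FDM}/10^{-22}\,{\rm eV}$. A sketch locating the "half-mode" scale, defined here as the wavenumber where this transfer function drops to one half:

```python
import math

def t_fdm(k, m22=1.0):
    """Hu et al. (2000) fitting formula for the FDM transfer function
    relative to CDM; k in Mpc^-1, m22 = m_FDM / 1e-22 eV."""
    k_jeq = 9.0 * math.sqrt(m22)
    x = 1.61 * m22**(1.0 / 18.0) * k / k_jeq
    return math.cos(x**3) / (1.0 + x**8)

def half_mode_k(m22=1.0):
    """Bisection for the scale where T_FDM first drops to 1/2."""
    lo, hi = 1e-3, 9.0 * math.sqrt(m22) / 1.61  # T ~ 1 at lo, well below 1/2 at hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if t_fdm(mid, m22) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(half_mode_k(1.0))  # a few Mpc^-1 for m_FDM = 1e-22 eV
```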
One of the most exciting ways to probe the EoR and Cosmic Dawn eras is via the
redshifted 21 cm line (Furlanetto et al., 2006; Pritchard & Loeb, 2012).
First, as early sources of radiation turn on, a background of Ly-$\alpha$
photons builds up and couples the spin temperature of the 21 cm line to the
gas temperature. At this time the gas temperature is expected to be less than
the cosmic microwave background (CMB) temperature, and the 21 cm signal should
be observable in absorption relative to the CMB. Subsequently, early sources
of X-ray emission raise the gas temperature above the CMB temperature. These
processes are expected to occur before most of the IGM is reionized and these
earliest phases – before the bulk of reionization – are hence referred to as
the “Cosmic Dawn”.111In some parts of the literature, “Cosmic Dawn” and
reionization are treated as synonymous. Here we prefer a 21 cm-centric
definition in which Cosmic Dawn refers to the Ly-$\alpha$ coupling and X-ray
heating phases, while we use the EoR to denote subsequent stages of
reionization after X-ray heating is complete. Finally, ionized regions around
the first luminous sources gradually grow, merge, and eventually fill
essentially the entire volume of the universe during the EoR. The overall
timing of this process, and its statistical properties, should reveal the
nature of the first luminous objects and also provide a powerful test of dark
matter properties, with FDM having a potentially dramatic impact.
In fact, the EDGES collaboration recently reported evidence of a feature in
the sky-averaged radio spectrum which they interpreted as a signature of 21 cm
absorption at $z\sim 15-20$ (Bowman et al., 2018). Taken at face value, this
detection of the global average 21 cm signal implies an early start to
structure formation and that a Ly-$\alpha$ background was already established
by $z\sim 15-20$. This, in turn, leads to tight limits on the possibility that
FDM makes up the entirety of the dark matter (Schneider, 2018; Lidz & Hui,
2018; Nebrin et al., 2019). However, the EDGES signal has puzzling features
(see e.g. Mirocha & Furlanetto 2018; Lidz & Hui 2018; Schauer et al. 2019;
Fialkov & Barkana 2019; Reis et al. 2020). Moreover, the global average 21 cm
signal is a challenging measurement and a number of works have pointed out
concerns with the EDGES analysis (e.g. Hills et al. 2018; Singh & Subrahmanyan
2019; Sims & Pober 2020).
In addition to the global average 21 cm signal, it may be possible to measure
spatial fluctuations in the 21 cm signal across the sky and as a function of
frequency. Indeed, ongoing and upcoming projects aim to measure the power
spectrum of 21 cm fluctuations, eventually spanning the entire redshift range
of the EoR, the Cosmic Dawn, and ultimately the preceding dark ages
(Furlanetto et al., 2006). These measurements have a different set of
systematic concerns than the global 21 cm experiments, and offer a potentially
richer data set to exploit. Especially exciting in this regard is the HERA
survey, which is underway, and is forecasted to measure the 21 cm power
spectrum at high statistical significance across a broad range of redshifts
(DeBoer et al., 2017). In the future, the SKA (Dewdney et al., 2009) should
provide even more precise measurements.
The goal of this paper is to model the 21 cm fluctuations during the EoR and
Cosmic Dawn in FDM, to characterize the differences with CDM models, and to
forecast the prospects for detecting or constraining FDM with HERA, while
exploring degeneracies with some of the uncertain astrophysical parameters
involved. We use the publicly available 21cmFAST code (Mesinger et al., 2011)
to model reionization and Cosmic Dawn, and a Fisher matrix formalism to
forecast the constraining power of HERA. In §2 we describe our models. §3
provides a qualitative description of the impact of FDM on the redshifted 21
cm signal, while §4 quantifies the sensitivity of HERA and its prospects for
discriminating between CDM and FDM. These results are sharpened in §5, where
we present full Fisher matrix forecasts. Finally, we conclude and discuss
possible future directions in §6.
In considering the 21 cm power spectrum in FDM, this study has some overlap
with earlier work by Sitwell et al. (2014); Muñoz et al. (2020); Nebrin et al.
(2019) who investigated the 21 cm power spectrum in FDM and/or the related
case of warm dark matter (WDM) models. Although WDM and FDM are physically
very different models for the dark matter, they each lead to a suppression in
the power spectrum of initial density fluctuations and delay
reionization/Cosmic Dawn. We focus on the FDM case here, but translate our
results into WDM constraints in the Conclusion. Our independent analysis
includes full Fisher forecasts for HERA and furthers the discussion in Sitwell
et al. (2014) and Nebrin et al. (2019) which did not include such forecasts.
The more recent work by Muñoz et al. (2020) does include Fisher forecasts for
HERA measurements, and is broadly consistent with our work, although these
authors adopt a slightly different approach and emphasis. Throughout we assume
the following cosmological parameters, based on Planck 2015 constraints and
consistent with Planck 2018 results (Aghanim et al., 2018):
$(\Omega_{m},\Omega_{b}h^{2},\Omega_{\Lambda},h,\sigma_{8},n_{s})=(0.308,0.02226,0.691,0.678,0.815,0.968)$,
where these have their usual meanings and $\sigma_{8}$ and $n_{s}$ describe the
linear power spectrum in the CDM case.
## 2\. Reionization and Cosmic Dawn Models
Here we briefly describe the 21 cm signal and the simulations used to model
Cosmic Dawn/the EoR in CDM and FDM. The 21 cm brightness temperature contrast
of a neutral hydrogen cloud, at co-moving spatial position $\bm{x}$, relative
to the cosmic microwave background (CMB) is given by (Madau et al., 1997):
$T_{21}(\bm{x})=T_{0}x_{\rm{HI}}(\bm{x})\left[\frac{T_{s}(\bm{x})-T_{\gamma}}{T_{s}(\bm{x})}\right]\left[1+\delta_{\rho}({\bm{x}})\right]\text{.}$
(1)
Here $T_{0}$ is a normalization constant, $T_{0}=28{\rm
mK}\left[(1+z)/10\right]^{1/2}$, $x_{\rm{HI}}$ is the neutral fraction of
hydrogen, $T_{s}$ is the spin temperature of the 21 cm transition,
$T_{\gamma}$ is the temperature of the radio background (which we assume
throughout is dominated by the CMB), and $1+\delta_{\rho}$ is the gas density
in units of the cosmic mean. The gas density fluctuations are assumed to trace
the overall matter density variations on the scales of interest. Note that we
model spin temperature fluctuations in our analysis, as these produce strong
spatial variations during the Cosmic Dawn era, and so $T_{s}$ is a function of
spatial position in Eq 1. All of the quantities here generally evolve strongly
with redshift, but the $z$ dependence is suppressed in the above equation for
brevity. For simplicity, we ignore the impact of peculiar velocities
throughout this work (see e.g. Mao et al. 2012).
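As a concrete illustration, Eq 1 can be evaluated directly. The following is a minimal sketch (not part of 21cmFAST; the function name is ours, and the radio background is assumed to be the CMB) returning the brightness temperature contrast in mK:

```python
import numpy as np

def t21_brightness(x_hi, t_spin, delta_rho, z, t_gamma=None):
    """21 cm brightness temperature contrast of Eq 1, in mK.

    x_hi      : neutral hydrogen fraction
    t_spin    : spin temperature [K]
    delta_rho : fractional gas overdensity
    z         : redshift
    t_gamma   : radio background temperature [K]; defaults to the CMB
    """
    if t_gamma is None:
        t_gamma = 2.725 * (1.0 + z)           # CMB temperature at redshift z
    t0 = 28.0 * np.sqrt((1.0 + z) / 10.0)     # normalization T_0 [mK]
    return t0 * x_hi * (t_spin - t_gamma) / t_spin * (1.0 + delta_rho)
```

In the limit $T_s \gg T_\gamma$ the signal saturates at $T_0 x_{\rm HI}(1+\delta_\rho)$, while gas colder than the CMB appears in absorption ($T_{21}<0$), as discussed throughout the text.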
The main observable of interest for our current study is the power spectrum of
21 cm brightness temperature fluctuations, $P_{21}(k)$. This power spectrum is
defined by
$\langle
T_{21}(\bm{k})T_{21}(\bm{k^{\prime}})\rangle=(2\pi)^{3}\delta_{D}(\bm{k}+\bm{k^{\prime}})P_{21}(k)\text{,}$
(2)
where $\delta_{D}$ denotes a Dirac delta function. We generally work with the
related quantity, $\Delta^{2}_{21}(k)=k^{3}P_{21}(k)/(2\pi^{2})$, which gives
the variance of the 21 cm brightness temperature fluctuations per $\rm{ln}(k)$
with our Fourier convention. Throughout, we describe $\Delta^{2}_{21}(k)$ in
units of $\rm{mK}^{2}$.
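The conversion between $P_{21}(k)$ and $\Delta^{2}_{21}(k)$ is a one-liner; a minimal sketch (function name is ours):

```python
import numpy as np

def delta2_21(k, p21):
    """Variance per ln(k): Delta^2_21 = k^3 P_21(k) / (2 pi^2).

    With k in Mpc^-1 and P_21 in mK^2 Mpc^3, the result is in mK^2.
    """
    return k**3 * p21 / (2.0 * np.pi**2)
```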
### 2.1. 21cmFAST simulations
In order to model the Cosmic Dawn and reionization, we make use of the
21cmFAST code (Mesinger et al., 2011), specifically version 1.3. 21cmFAST
produces “semi-numerical” realizations of the reionization process (Zahn et
al., 2006; Mesinger & Furlanetto, 2007), based on an excursion-set formalism
(Bond et al., 1991) for reionization (Furlanetto et al., 2004). The code also
includes an approximate treatment of Ly-$\alpha$ background photons,
responsible for coupling the spin temperature of the 21 cm line to the gas
temperature, and of X-ray heating.
The simulations employed in our study span 300 Mpc co-moving on a side, and
the density, ionization, and 21 cm fields are produced on a $256^{3}$ grid.
Each 21cmFAST model is characterized by several “astrophysical parameters”
describing the properties of the ionizing sources, and their production of UV
and X-ray photons. First, $\zeta$ is a parameter describing the efficiency of
ionizing photon production, which we set to $\zeta=20$. Second, $T_{\rm vir}$
is the minimum virial temperature of galaxy hosting dark matter halos. We
adopt $T_{\rm vir}=2\times 10^{4}$ K for our fiducial model. Third, the
maximum smoothing scale adopted in generating the ionization field is taken to
be $R_{\rm max}=50$ co-moving Mpc. We neglect any redshift dependence in these
parameters and in those that follow. The model star formation efficiency –
i.e., the fraction of halo baryons that are converted into stars in galaxy
hosting halos – adopted is $f_{\star}=0.05$. The UV photon emissivity follows
the Pop-II case described in Barkana & Loeb (2005); Mesinger et al. (2011).
In terms of X-ray heating, our fiducial model assumes that $\zeta_{X}=2\times
10^{56}$ X-ray photons are produced per solar mass incorporated in stars. The
X-ray emission follows a power law spectrum (i.e., the specific luminosity is
$L_{\nu}\propto\nu^{-\alpha_{X}}$ with spectral index $\alpha_{X}=1.2$ above a
threshold frequency of $h\nu_{\rm min,X}=0.5$ keV). For further discussion
regarding these parameters, we refer the reader to Mesinger et al. (2011).
Finally, our 21cmFAST runs adopt the inhomogeneous recombination model of
Sobacchi & Mesinger (2014). Note that in §5 we vary many of these parameters
($\zeta$, $T_{\rm vir}$, $f_{\star}$, and $\zeta_{X}$) in performing our Fisher
matrix forecasts, to account for parameter degeneracies.
Figure 1.— A comparison between the reionization history in our fiducial CDM
and FDM models and current observational constraints. The magenta colored
points show the volume-averaged neutral hydrogen fraction versus redshift in
our CDM model with $\zeta=20$, $T_{\rm vir}=2\times 10^{4}$ K, while the blue
points show the neutral fraction evolution for an FDM model with $m_{\rm
FDM}=1\times 10^{-21}$ eV and the same $\zeta$, $T_{\rm vir}$. The data points
show observational bounds on the reionization history, with $1\sigma$ error
estimates, compiled from the current literature.
Fig 1 compares our fiducial CDM model with current observational constraints
on the ionization history, as inferred from: measurements of possible damping
wing features in two $z\gtrsim 7$ quasars (Davies et al., 2018), observations
of the redshift evolution of the fraction of photometrically selected Lyman-
break galaxies that emit prominent Ly-$\alpha$ lines (Schenker et al., 2014;
Mason et al., 2019; Hoag et al., 2019), and measurements of the dark pixel
fraction in the Ly-$\alpha$ and Ly-$\beta$ forests towards background quasars
(McGreer et al., 2015). We further compare these measurements with a fiducial
FDM model (described in the next sub-section) of particle mass $m_{\rm
FDM}=1\times 10^{-21}$ eV and identical $\zeta$, and $T_{\rm vir}$ to our CDM
case. Overall, Fig 1 illustrates broad consistency between each fiducial model
and this compilation of current observational constraints. Note that our
objective here is not to precisely match the current data, but merely to
ensure that our baseline models are reasonable enough to reliably forecast the
prospects for upcoming 21 cm observations.
We can further compare these models with CMB measurements from the Planck
satellite, which constrain the probability that CMB photons scatter off of
free electrons produced during and after reionization. The Planck 2018
measurement of the electron scattering optical depth (specifically their
combined TT, TE, EE, lowE, lensing + BAO constraint) is $\tau_{e}=0.0561\pm
0.0071$, where the error bars are $1\sigma$ confidence intervals (Aghanim et
al., 2018). Our CDM and FDM models yield $\tau_{e}=0.0675$ and $0.0580$,
consistent with the Planck 2018 measurement at $1.6\sigma$ and $0.27\sigma$,
respectively.
Finally, it is worth commenting on how our fiducial models compare with the
EDGES results (Bowman et al., 2018), which suggest a deep 21 cm absorption
signal starting at redshifts as high as $z\sim 20$. In our fiducial CDM model,
the minimum absorption depth is reached at $z\sim 18$, while this is delayed
until $z\sim 15$ in FDM. The redshift of the absorption dip in the CDM case is
close to that of the EDGES measurement, although the depth and shape of this
feature are quite different than observed (e.g. Bowman et al. 2018; Mirocha &
Furlanetto 2018). Note, however, that our fiducial model assumes a star-
formation efficiency of $f_{\star}=0.05$ which is larger than suggested by UV
luminosity function measurements and abundance matching constraints near
$z\sim 8$ (see e.g. Mirocha & Furlanetto 2018; Lidz & Hui 2018). Adopting a
lower star-formation efficiency would delay the onset of Cosmic Dawn and push
the redshift of the absorption feature to lower redshifts.
### 2.2. Modeling FDM with 21cmFAST
In order to model FDM with 21cmFAST, we adopt the approximation that FDM
suppresses the initial power spectrum of density fluctuations on small scales,
but we ignore the subsequent impact of FDM on the dynamics of structure
formation. This is likely a good approximation for our application,
essentially because the FDM Jeans mass drops with decreasing redshift (see
Lidz & Hui 2018 for further discussion and e.g. Schive et al. 2014; Li et al.
2019 for simulation runs that follow the dynamical impact of so-called quantum
pressure).
With this simplification, we need only to modify the transfer function used in
21cmFAST in generating initial conditions. This in turn reduces the variance
of the (linearly-extrapolated) density field at small smoothing scales, which
suppresses the halo collapse fraction and thereby impacts the 21cmFAST
excursion-set based modeling of the Cosmic Dawn and EoR.
We use the FDM transfer function from Hu et al. (2000). Given the FDM particle
mass in our model, $m_{\rm{FDM}}$, this may be written as:
$\frac{P_{\rm FDM}(k)}{P_{\rm CDM}(k)}=\left[\frac{\cos\left(x^{3}(k)\right)}{1+x^{8}(k)}\right]^{2}\text{,}$ (3)
where $x(k)=1.61\left[m_{\rm FDM}/10^{-22}\,{\rm eV}\right]^{1/18}k/k_{\rm
J,eq}$ and $k_{\rm J,eq}$ is the FDM Jeans wavenumber at matter-radiation
equality, $k_{\rm J,eq}=9.11\,{\rm
Mpc}^{-1}\left[m_{\rm FDM}/10^{-22}\,{\rm eV}\right]^{1/2}$ (Hu et al., 2000).
This specifies the linear FDM power spectrum, $P_{\rm FDM}(k)$, in terms of
the CDM one, $P_{\rm CDM}(k)$.
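This suppression factor is straightforward to evaluate numerically. A minimal sketch of Eq 3 (the function name is ours):

```python
import numpy as np

def fdm_power_suppression(k, m_fdm_ev):
    """Ratio P_FDM(k)/P_CDM(k) from the Hu et al. (2000) fit (Eq 3).

    k        : co-moving wavenumber [Mpc^-1]
    m_fdm_ev : FDM particle mass [eV]
    """
    # FDM Jeans wavenumber at matter-radiation equality [Mpc^-1]
    k_jeq = 9.11 * np.sqrt(m_fdm_ev / 1e-22)
    x = 1.61 * (m_fdm_ev / 1e-22) ** (1.0 / 18.0) * k / k_jeq
    return (np.cos(x**3) / (1.0 + x**8)) ** 2
```

The ratio approaches unity on large scales and falls steeply (with oscillations from the cosine) for $k \gg k_{\rm J,eq}$; heavier particle masses push the cutoff to smaller scales.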
The cutoff in the initial conditions, described by $k_{\rm J,eq}$, leads to a
suppression in the halo mass function at small masses. It is useful to further
describe this suppression by a characteristic halo mass scale, $M_{1/2}$. This
is defined as the mass corresponding to the wavenumber, $k_{1/2}$, at which
the linear FDM power spectrum is reduced by a factor of two relative to the
CDM one with $M_{1/2}=\frac{4\pi\rho_{M}}{3}\left(\pi/k_{1/2}\right)^{3}$.
Here $\rho_{M}$ is the mean co-moving matter density. Numerically, the
suppression mass is (Hui et al., 2017):
$\displaystyle M_{1/2}$ $\displaystyle=2.51\times
10^{9}M_{\odot}\left(\frac{1\times 10^{-21}{\rm eV}}{m_{\rm
FDM}}\right)^{4/3}$
$\displaystyle\times\left(\frac{\Omega_{m}}{0.308}\right)\left(\frac{h}{0.678}\right)^{1/2}\text{.}$
(4)
This mass scale is potentially larger than the characteristic host halo masses
of the early generations of galaxies which formed during the Cosmic Dawn and
the EoR, at least in CDM cosmological models. For example, the halo mass
corresponding to a virial temperature of $T=10^{4}{\rm K}$ – at which point
primordial gas can cool by atomic line emission, fragment, and form stars – is
$M=1.1\times 10^{8}M_{\odot}$ at $z=8$ (e.g. Lidz & Hui 2018). (For further
reference, the minimum galaxy hosting halo mass in our fiducial model with
$T_{\rm vir}=2\times 10^{4}$ K is $M=3.1\times 10^{8}M_{\odot}$ at $z=8$.)
Therefore the suppression of small-mass halos in FDM may delay the formation
of the first galaxies relative to the CDM case, especially if galaxies are
able to form efficiently in small mass CDM halos.
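Eq 4 translates an FDM particle mass directly into this suppression scale; a minimal sketch (function name is ours, with our fiducial cosmology as defaults):

```python
import numpy as np

def m_half(m_fdm_ev, omega_m=0.308, h=0.678):
    """FDM half-suppression mass M_1/2 [solar masses], Eq 4 (Hui et al. 2017)."""
    return (2.51e9 * (1e-21 / m_fdm_ev) ** (4.0 / 3.0)
            * (omega_m / 0.308) * np.sqrt(h / 0.678))
```

Note the steep $m_{\rm FDM}^{-4/3}$ scaling: lighter particles suppress progressively more massive halos, which is why 21 cm measurements sensitive to small galaxy host halos constrain the particle mass from below.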
## 3\. The Impact of FDM on Cosmic Dawn and the EoR
Figure 2.— An overview of the results presented in this paper. Top left: A
slice through the simulated 21 cm brightness temperature in our fiducial CDM
model at $z=12.51$. The slice is 1.2 co-moving Mpc thick, and $300$ co-moving
Mpc on a side. Top right: The corresponding slice through our fiducial FDM
model, with $m_{\rm FDM}=10^{-21}{\rm eV}$. The same random seeds and large-
scale modes are adopted in the initial conditions for each simulation, and so
a side-by-side comparison is warranted. While the 21 cm signal is observable
in emission (red) across much of the simulation slice in the CDM model, most
of the corresponding FDM model is still seen in absorption (blue). Note that
although the 21 cm signal in the FDM case reaches brightness temperatures as
low as $T_{21}=-140{\rm mK}$, the color bar is symmetric around zero and
saturates at $T_{21}=-30{\rm mK}$. The partial absorption signal is a
consequence of the delay in structure formation in FDM. Bottom: The full
redshift evolution of the 21 cm power spectra at $k=0.2$ Mpc$^{-1}$ in CDM and FDM.
The redshift evolution in FDM lags that in CDM, and the power spectra differ
by large factors at several redshifts. The shaded regions give error bar
forecasts for upcoming HERA observations (assuming the moderate foreground
avoidance scenario, see §4), illustrating that the two example models can be
distinguished at high statistical significance.
### 3.1. Summary
Before going into more detail, Fig 2 provides a brief overview of what
follows. The top panel of the figure contrasts example slices through our
fiducial CDM and FDM models (with an FDM particle mass of $m_{\rm
FDM}=10^{-21}{\rm eV}$) at $z=12.51$. This particular redshift is chosen
because it highlights some of the qualitative differences that arise. In the
CDM model, early X-ray heating has already succeeded in raising the gas
temperature much above the CMB temperature across a significant fraction of
the simulation volume at this redshift, and so the 21 cm signal is observable
in emission across much of the slice shown (Eq 1). For example, the average
brightness temperature across the simulation volume is $-1.30$ mK and $38\%$
of the simulation volume has been heated above the CMB temperature at this
redshift ($T_{\gamma}=37$ K).
Furthermore, reionization is underway in this model with a volume-averaged
ionization fraction of $\langle x_{i}\rangle=0.052$. In contrast, the
suppression of small mass halos in the FDM model leads to much of the gas
being seen in absorption relative to the CMB. Although early sources of
Ly-$\alpha$ photons have managed to couple the spin temperature to the gas
temperature globally in this model, much of the gas is still cooler than the
CMB temperature and little of it is ionized. The average brightness
temperature in the FDM model is $-56.8$ mK and just $1.1\%$ of the simulation
volume has gas kinetic temperature above the CMB temperature. In FDM, X-ray
emitting sources have formed only around prominent overdensities and have
heated just the relatively nearby gas above the CMB temperature (red regions),
while most of the gas is cooler than the CMB (blue regions). Since the 21 cm
brightness temperature is proportional to $1-(T_{\gamma}/T_{s})$ (Eq 1), the
overall contrast in the 21 cm brightness temperature data cube is quite strong
during stages of the Cosmic Dawn in which some of the gas is in absorption and
some in emission.
The bottom panel of Fig 2 gives a more quantitative summary, showing the full
redshift evolution of the 21 cm power spectrum in CDM and FDM at an example
wavenumber of $k=0.2$ Mpc$^{-1}$. As anticipated earlier, the FDM power spectrum
evolution is delayed relative to the CDM one. One consequence of this is that
the FDM power spectrum greatly exceeds the CDM one at certain redshifts. For
example, near the redshift of the slices in the upper panel ($z=12.51$) the
FDM power is enhanced relative to the CDM one by a factor of $\sim 5$. This
occurs because much of the FDM volume at this redshift is in absorption, which
leads to a larger contrast in the 21 cm brightness temperature than in CDM,
where much of the gas is in emission. On the other hand, at some redshifts the
CDM fluctuations exceed those in FDM: for example, at $z\sim 20-25$ the CDM
power spectrum is much larger than in FDM because of the earlier Ly-$\alpha$ coupling
in CDM. We discuss the different redshift evolution in these models further in
what follows. The shaded regions show the expected error bars from HERA
assuming the moderate foreground contamination model from Pober et al. (2014b)
(see §4), demonstrating that these two example scenarios may be distinguished
at high statistical significance, as we will quantify further subsequently. As
discussed in §4, the larger signal in the Cosmic Dawn epoch partly compensates
for the increased thermal noise in the measurements, and so the model power
spectra are potentially detectable at redshifts as large as $z\sim 15$ (see
also e.g. Ewall-Wice et al. 2016).
Figure 3.— Brightness temperature evolution during Cosmic Dawn in our fiducial
CDM and FDM models. Left hand panels: The CDM model at $z=18.69$, $z=15.80$,
and $z=12.51$. Right hand panels: The FDM model at the same redshifts. The
slice thickness and area are identical to those in Fig 2, but here the color
bar spans a larger range in brightness temperature. Note also
that the bottom panel shows an identical redshift to the top panel of Fig 2:
they differ in appearance only because of the larger color bar range here.
### 3.2. Cosmic Dawn
With this preview of the results to follow, we now systematically explore the
full evolution of Cosmic Dawn and the EoR in our fiducial CDM and FDM
scenarios. The earliest phases of the Cosmic Dawn era involve the formation of
the first stars, galaxies, and accreting black holes and their emission of
ultraviolet (UV) photons. Some of these photons redshift into Lyman series
resonances, and couple the spin temperature to the kinetic gas temperature via
the Wouthuysen-Field effect (Wouthuysen, 1952; Field, 1958); this coupling
requires on the order of one Lyman-alpha photon for every ten hydrogen atoms
(Chen & Miralda-Escude, 2004; Lidz & Hui, 2018). This is expected to occur
before the gas has been heated above the CMB temperature (e.g. Pritchard &
Loeb 2012), and so the gas is cool and observable in 21 cm absorption during
these early phases just after Ly-$\alpha$ coupling is achieved. In our
fiducial CDM model, the upper left hand panel of Fig 3 shows that the 21 cm
spin temperature is well-coupled to the gas temperature throughout most of the
IGM volume in this simulation slice, at $z=18.69$. At the same redshift in
FDM, the Wouthuysen-Field coupling is incomplete and so less of the simulation
volume reaches the low brightness temperature seen in the CDM case. In the
middle panel at $z=15.80$, the FDM model resembles the CDM scenario in the top
panel (at $z=18.69$): the spin temperature is now well-coupled to the gas
temperature across much of the simulation volume, and the FDM model shows a
strong absorption signal. In the CDM model, early X-ray heating has started to
boost the gas temperature to much above the CMB temperature in overdense
regions, which are hence visible in emission (red regions). Finally, the
bottom panel is identical to the top panel of the summary figure (Fig 2),
although we adopt a different color bar here. As discussed earlier, X-ray
heating is well underway in the CDM case but the FDM model shows mostly
absorption.
Figure 4.— Brightness temperature evolution during the EoR in our fiducial CDM
and FDM models. This is identical to Fig 3, but illustrates the evolution at
redshifts of $z=11.00$, $z=8.65$, and $z=6.76$. The volume-averaged ionization
fraction is given in each panel. The figure illustrates the delay in the EoR
in FDM relative to CDM: the CDM model has a larger ionized fraction and
somewhat bigger ionized bubbles.
### 3.3. EoR
Fig 4 displays slices through the simulation at slightly lower redshifts. The
top panel shows each model at $z=11.00$: here FDM still shows a combination of
21 cm absorption/emission against the CMB, while the gas is everywhere heated
above the CMB temperature in the CDM case. The middle and bottom panel
illustrate how the EoR is more advanced in the case of CDM than FDM at
redshifts $z=8.65$ and $z=6.76$. In terms of the volume-averaged ionization
fraction, $\langle x_{i}\rangle=0.109$ and $0.049$ at $z=11.00$, $\langle
x_{i}\rangle=0.326$ and $0.200$ at $z=8.65$, and $\langle x_{i}\rangle=0.733$
and $0.567$ at $z=6.76$ in CDM and FDM, respectively. Naturally, the ionized
regions are larger in CDM, mainly because the bubbles have had longer to grow
and merge in this model. The absence of small-mass halos in FDM also acts to
enlarge its ionized regions. These figures serve to qualitatively illustrate
the delay in structure formation in FDM and the impact on the resulting 21 cm
brightness temperature fluctuations.
### 3.4. Power Spectra and the Impact of FDM on Spatial Structure
Figure 5.— The 21 cm power spectra in the CDM and FDM models for six different
example redshifts, two per panel. The solid lines show FDM models, while the
dashed lines are for CDM. Top panel: CDM and FDM power spectra at both
$z=20.31$ and $z=17.55$. Middle panel: The power spectra in each model are
displayed at $z=15.15$ and $z=12.78$. Bottom panel: CDM and FDM power spectra
at $z=8.28$ and $z=6.31$. The shaded regions give error bar forecasts for
upcoming HERA observations. At moderate scales, $k\sim 0.2$ Mpc$^{-1}$, the two
models can be distinguished at high statistical significance across a range of
redshifts.
A more quantitative comparison is given in Fig 5. In contrast to Fig 2, which
shows the power spectra as a function of redshift at one particular
wavenumber, this figure presents the full scale dependence at six example
redshifts, spanning the Cosmic Dawn and the EoR. These power spectra differ
strikingly in shape and amplitude at many redshifts. For example, consider
first the $z=15.15$ case in the middle panel of Fig 5 (blue curves). Here, the
CDM power spectrum exceeds that of FDM by a factor of $\sim 50$ on the largest
scales shown (near $k\sim 0.03$ Mpc$^{-1}$, comparable to the fundamental mode of
our simulation box). On the other hand, at higher wavenumber, $k\gtrsim 0.3$
Mpc$^{-1}$, the FDM model has more power than the CDM case at this redshift. The
striking difference between the shape of the 21 cm power spectra in these
models bodes well for distinguishing them with upcoming observations.
The top, highest redshift, panel compares the power spectra in both models at
$z=17.55$ and $z=20.31$. The difference between the power spectra at these
redshifts owes to the earlier Wouthuysen-Field effect coupling epoch in CDM.
This leads to larger fluctuations in CDM across all scales shown at $z=20.31$,
while FDM has larger fluctuations at $z=17.55$ for $k\lesssim 0.3$ Mpc$^{-1}$. The
fluctuations are larger in FDM at the lower redshift because some regions of
the universe in FDM are well-coupled and give deep 21 cm absorption, while
other areas are close to the CMB temperature. This gives a larger contrast
than the case of CDM where the spin temperature is well-coupled to the gas
temperature across most of the simulation volume. Since the Wouthuysen-Field
fluctuations are coherent on large scales, the excess power in FDM is
concentrated at low $k$.
The bottom panel of Fig 5 shows the 21 cm power spectra during the EoR.
Initially, as illustrated by the $z=8.28$ case, the CDM fluctuations
exceed the FDM ones at $k\lesssim 0.5$ Mpc$^{-1}$: this is a consequence of the
larger ionized regions in the CDM model. However, by $z=6.31$ the situation
has reversed and the fluctuations are larger in FDM. This
occurs because reionization is largely complete in CDM at this redshift, and
the fluctuations are small since little neutral hydrogen remains, while
reionization is less progressed in FDM.
Fig 5 also includes HERA error bar forecasts in the moderate foreground
removal scenario (see §4). This illustrates that the models differ by more
than the anticipated errors over a fairly broad range of scales and redshifts.
Overall, the most valuable wavenumbers are in the intermediate range between
roughly $k\sim 0.15-0.5$ Mpc$^{-1}$: the measurements on the largest scales are
limited by foreground avoidance and sample variance, while the power at
high-$k$ is swamped by thermal noise. As the thermal noise drops with
increasing frequency (decreasing redshift), higher-$k$ modes become
accessible. In terms of redshift, these forecasts suggest that HERA can
discriminate between these models at high significance from the end of the EoR
at $z\sim 5-6$ out to $z\sim 15$, with the error bars decreasing towards low
redshift. Greater sensitivity would be required to detect the models at still
higher redshifts (such as those in the top panel of Fig 5).
Figure 6.— The full redshift evolution of the 21 cm power spectra at $k=0.25$
Mpc$^{-1}$, $k=0.66$ Mpc$^{-1}$, and $k=2.6$ Mpc$^{-1}$ in CDM and FDM. This is similar to
the bottom panel of Fig 2, but here we show three different example
wavenumbers.
Fig 6 shows the redshift evolution in further detail for three example
wavenumbers. This reinforces the trends seen in Fig 2 and Fig 5 and
illustrates the effects of the delay in structure formation in FDM at finer
redshift sampling than in Fig 5.
Figure 7.— Brightness temperature in our fiducial CDM and FDM models at the
same stage of the EoR, yet different redshifts. This figure is similar to Fig
3 and 4 except the CDM and FDM models along each row are shown at fixed
volume-averaged ionization fraction, $\langle x_{i}\rangle$, yet different
redshifts. Top row: Each model is shown at $\langle x_{i}\rangle=0.25$ (
$z=9.24$ in CDM and $z=8.28$ for FDM). Middle row: Each model is given at
$\langle x_{i}\rangle=0.50$. This occurs at $z=7.74$ in CDM and $z=7.08$ in
FDM. Bottom row: Here the ionization fraction in each model is $\langle
x_{i}\rangle=0.75$, at $z=6.76$ in CDM and $z=6.17$ in FDM.
While the different redshift evolution in CDM and FDM is interesting, note
that the overall timing of reionization depends also on uncertain parameters
such as the ionizing efficiency, $\zeta$, and the minimum virial temperature
of galaxy hosting dark matter halos, $T_{\rm vir}$. It is also, therefore,
interesting to contrast the 21 cm brightness temperature fluctuations in CDM
and FDM models at the same stage of the Cosmic Dawn/EoR, yet different
redshifts. This helps to understand how much of the effect of FDM is an
overall delay in structure formation, and how much FDM impacts the overall
spatial structure of Cosmic Dawn and the EoR.
For example, Fig 7 contrasts the CDM and FDM models during the EoR with slices
drawn from each of $\langle x_{i}\rangle=0.25,0.50$, and $0.75$. Although the
effect is subtle, the ionized regions are slightly larger at fixed ionization
fraction in FDM than in CDM (see also Nebrin et al. 2019). This result is seen
because the size distribution of the ionized regions is sensitive to the
clustering of the ionizing sources (e.g. McQuinn et al. 2007): the ionized
regions are larger at a given stage of the EoR in cases where the ionizing
sources lie in more massive – and hence more highly biased – dark matter
halos. Since small mass halos are missing in FDM, this therefore leads to
slightly larger ionized regions in FDM than CDM, at least for cases where the
minimum galaxy host halo mass is smaller than the FDM suppression mass (Eq
4). The larger ionized regions in FDM tend to boost the large-scale
amplitude of the 21 cm power spectrum (at a given $\langle x_{i}\rangle$) in
FDM relative to the CDM case. Hence, FDM modifies both the timing of
reionization as well as the spatial structure of the 21 cm field. Although we
do not illustrate it explicitly here, analogous effects also occur during the
earlier Cosmic Dawn phases. That is, when we compare CDM and FDM at fixed
average brightness temperature (yet differing redshifts), the greater source
clustering in FDM enhances the large-scale 21 cm power spectra relative to
CDM.
## 4\. HERA and Upcoming 21 cm Power Spectrum Measurements
HERA is a radio interferometer, under development in the Karoo desert of South
Africa, designed to detect the 21 cm signal from the EoR (DeBoer et al.,
2017), and potentially the Cosmic Dawn (Ewall-Wice et al., 2016), at high
statistical significance. Readers already familiar with HERA may wish to skip
to the second to last paragraph of this section. When complete, HERA will
consist of 350 antenna dishes, each 14 meters in diameter, with 320 of these
in a close-packed hexagonal configuration, along with 30 outrigger antennas at
longer baselines. The close-packed hexagonal configuration provides a highly
redundant sampling of baselines, with many identical copies of the same
antenna separations; this helps achieve high 21 cm power spectrum sensitivity
while facilitating instrumental calibration (Dillon & Parsons, 2016).
Ultimately, the instrument will observe a broad frequency range from 50-225
MHz, corresponding to redshifted 21 cm radiation from $z\sim 5-27$. The array
always points towards the zenith, but the interferometer operates as a drift-
scan telescope, accumulating sky coverage as the sky revolves overhead owing
to the rotation of the Earth.
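The quoted redshift range follows directly from the redshifting of the 21 cm line (rest-frame frequency $\approx 1420.4$ MHz); a quick sketch of the conversion:

```python
def z_of_nu(nu_mhz):
    """Redshift of 21 cm emission observed at frequency nu_mhz [MHz]."""
    nu21 = 1420.405751768  # rest-frame 21 cm frequency [MHz]
    return nu21 / nu_mhz - 1.0
```

The HERA band edges of 50 and 225 MHz then map to $z\approx 27$ and $z\approx 5.3$, matching the range quoted above.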
In order to quantify the prospects for HERA measurements of the 21 cm power
spectrum, and its ability to discriminate between CDM and FDM models, we make
use of the open-source Python package 21cmSense (Pober et al., 2013, 2014b).
In brief (see e.g. Pober et al. 2014b; Liu & Shaw 2020 for more details), the
21cmSense code accounts for the detailed layout of the HERA antennas, gridding
the measurements into cells in the $uv$ plane, where $u$ and $v$ describe the
physical separation between a pair of antenna dishes in units of observed
wavelength. The size of each $uv$ cell is set by the diameter of the HERA
dishes and is of order $D/\lambda_{\rm obs}$ on a side, where $D$ is the dish
diameter. Further, each cell has a width in $\eta$, where $\eta$ is the
Fourier counterpart to frequency, set by the frequency bandwidth of the
measurement, $B$. The code calculates the observing time per $uv$ cell
accounting for the rotation of the Earth which causes baselines to move across
$uv$ cells over the course of a day. The interferometric $uv$ cells sample
Fourier modes of transverse wavenumber, $\bm{k}_{\perp}=2\pi{\bf b}/X(z)$,
where ${\bf b}$ is a baseline vector in the $uv$ plane and $X(z)$ is
the co-moving angular diameter distance to the 21 cm redshift at the central
frequency across the bandwidth of interest. The $\eta$ dimension maps to the
line-of-sight wavenumber component, $k_{\parallel}$ (see, e.g., Eq 40 of Liu &
Shaw 2020).
After determining the total observing time $t(\bm{k})$ for each
$u,v,\eta$ cell, the variance of the power spectrum estimate in a cell is
given by (Pober et al., 2014b; Ewall-Wice et al., 2016):
$\sigma_{P}^{2}(\bm{k})=\left[X^{2}Y\frac{\Omega^{\prime}}{2t(\bm{k})}T_{\rm
sys}^{2}+P_{21}(k)\right]^{2},$ (5)
where $X$ is the co-moving angular diameter distance, and $Y$ is a redshift-
dependent factor that converts between frequency intervals and co-moving
line-of-sight distance (e.g. Eq 41 of Liu & Shaw 2020). The quantity
$\Omega^{\prime}$ is a factor related to the solid angle of the primary beam.
Specifically, it is the integral of the primary beam squared over solid angle
divided by the solid-angle integral of the primary beam (Parsons et al.,
2014), while $T_{\rm sys}$ is the sum of the HERA receiver temperature and the
sky temperature; the 21cmSense code assumes a receiver temperature of $100$
K and a sky temperature of $T=60\,\mathrm{K}\,(\nu/300\mathrm{MHz})^{-2.55}$.
The $P_{21}(k)$ term accounts for sample variance under the Gaussian error
approximation, and is determined by our 21cmFAST model under consideration.
Finally, in order to estimate the variance across different $k$-bins,
21cmSense combines the errors from Eq 5 over contributing $k$ cells in inverse
quadrature.
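The per-cell error of Eq 5 and the inverse-quadrature bin combination can be sketched as follows. The function names and all numerical values are illustrative assumptions, not the 21cmSense API; only the structure (thermal-noise term plus sample variance, then $1/\sigma_{\rm bin}^2=\sum_i 1/\sigma_i^2$) follows the text.

```python
import numpy as np

def sigma_P(X, Y, Omega_eff, t_k, T_sys, P21):
    """Per-cell power spectrum error (square root of Eq 5):
    thermal noise term plus sample variance."""
    return X**2 * Y * Omega_eff / (2.0 * t_k) * T_sys**2 + P21

def bin_error(sigmas):
    """Combine per-cell errors within a k-bin in inverse quadrature."""
    sigmas = np.asarray(sigmas, dtype=float)
    return 1.0 / np.sqrt(np.sum(1.0 / sigmas**2))

# Four identical cells beat a single cell's error down by a factor of 2:
cells = [sigma_P(9000.0, 17.0, 0.1, 50.0, 500.0, 10.0)] * 4
print(bin_error(cells) / cells[0])  # ~0.5
```

Note that the sample-variance term $P_{21}(k)$ does not integrate down with observing time, so it sets an error floor in well-measured cells.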
In forecasting the HERA sensitivity, we assume 1080 total hours of observing
time, and that power spectra are simultaneously measured from individual
bandwidths of frequency extent $8$ MHz across the entire observing range from
$z\sim 5-27$. It may be unrealistic to assume that simultaneous measurements
are feasible across this large frequency band (Ewall-Wice et al., 2016), but
this should nevertheless provide a useful, if optimistic, forecast. Note also
that redshifted 21 cm observations between $z\sim 12-15$ lie within the FM
radio band, where radio-frequency interference mitigation may be especially
challenging (Ewall-Wice et al., 2016). We quantify how the expected signal to
noise ratio varies with redshift, and so determine which observed frequencies
across this broad range may be most valuable.
Foreground contamination is a serious concern for redshifted 21 cm
measurements (see e.g. the review by Liu & Shaw 2020): the foreground emission
from sources including galactic synchrotron radiation, free-free emission, and
extra-galactic point sources is many orders of magnitude brighter than the
redshifted 21 cm signal. Nevertheless, the foregrounds are expected to be
spectrally smooth while the 21 cm signal has a great deal of spectral
structure, and this distinction holds promise for separating the signal from
the foregrounds. Specifically, spectrally smooth foregrounds will strongly
contaminate low $k_{\parallel}$ modes, while higher $k_{\parallel}$ modes may
be robustly measurable. Accounting, however, for the frequency dependence of
the instrumental response leads to a mode-mixing effect in which some high
$k_{\parallel}$ modes are also corrupted by foregrounds. Still, the corrupted
modes should mostly occupy a wedge-shaped region in the
$k_{\parallel}-k_{\perp}$ plane referred to as “the foreground wedge” in Pober
et al. (2014a). A promising strategy is then to simply excise Fourier modes
within the foreground wedge and make use only of regions in k-space outside of
this wedge. In practice, the precise form of the foreground wedge is uncertain
owing to (see Liu & Shaw 2020 for further discussion): the unknown
$k_{\parallel}$ dependence of the foreground emission, the impact of
calibration errors, and instrumental artifacts, with some effects potentially
leaking power outside of the wedge entirely.
To roughly quantify the uncertain impact of foreground contamination, we
follow the three separate treatments of the foreground wedge discussed in
Pober et al. (2014b) and included in the 21cmSense code, termed the
“pessimistic, moderate, and optimistic” foreground scenarios. These cases are
only briefly summarized here; we refer the reader to the original paper for
further details. In these scenarios, the “horizon wedge” describes a line with
$k_{\parallel}=Ck_{\perp}$ (where $C$ is a redshift dependent number), below
which a population of spatially unclustered radio sources at the horizon, with
a frequency-independent emission spectrum, will contaminate measurements (see
e.g. Eq 166 in Liu & Shaw 2020 and the associated discussion). Above this
line, such sources produce no contamination. In Pober et al. (2014b)’s
moderate case, the wedge is assumed to extend to $\Delta k_{\parallel}=0.1\,h\,{\rm Mpc}^{-1}$
beyond the horizon wedge limit. In the optimistic case, the angular
scale defining the wedge (which determines $C$) is assumed to be set by the
FWHM of the primary beam of HERA, rather than the horizon scale. Finally, in
the pessimistic case the horizon wedge is assumed, but only instantaneously
redundant baselines, or baselines that measure the same Fourier component of
the sky brightness distribution (Marthi & Chengalur 2013), are added
coherently.
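The wedge excision described above amounts to a simple cut in the $k_\parallel$-$k_\perp$ plane. Below is an illustrative sketch; the slope $C$ and buffer values are assumptions for the example, and the function is ours, not the 21cmSense implementation.

```python
def wedge_mask(k_perp, k_par, C, buffer=0.0):
    """True where a (k_perp, k_par) cell lies OUTSIDE the foreground wedge.

    C is the redshift-dependent wedge slope (horizon or primary-beam scale,
    depending on the scenario); buffer is the extra Delta k_par added beyond
    the wedge line in the 'moderate' scenario. Names and values illustrative.
    """
    return k_par > C * k_perp + buffer

# Moderate scenario: horizon slope (C ~ 3 at some redshift, assumed) plus a
# 0.1 h/Mpc buffer. Modes above the line survive; those below are excised.
print(wedge_mask(0.05, 0.30, C=3.0, buffer=0.1))  # above the wedge: kept
print(wedge_mask(0.05, 0.20, C=3.0, buffer=0.1))  # inside the wedge: excised
```

The optimistic scenario corresponds to a shallower $C$ (set by the primary beam FWHM), so more low-$k_\parallel$ modes survive the cut.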
The resulting power spectrum sensitivities in the moderate foreground case are
shown at several example redshifts in the figures of the previous section
(Figs 2 and 5), illustrating the usual high significance forecasts for HERA
power spectrum measurements (e.g. DeBoer et al. 2017).
Figure 8.— The statistical significance at which HERA can distinguish between
our fiducial CDM and FDM models as a function of redshift. The particle mass
in the FDM model is $m_{\rm FDM}=10^{-21}$ eV. The green, blue, and red curves
show the optimistic, moderate, and pessimistic foreground wedge models (see
text), respectively. In the absence of parameter degeneracies, the two models
may be discriminated at $\geq 5-\sigma$ significance for each foreground
treatment – as shown by regions above the grey band – across a range of
redshifts.
To gain further insight into the prospects for discriminating between CDM and
FDM with HERA 21 cm power spectrum measurements, we calculate $\Delta\chi^{2}$
between our two fiducial models as a function of redshift, assuming that the
CDM case is the true underlying model. Encouragingly, as illustrated in Fig 8,
these two models may be discriminated at high statistical significance across
a range of redshifts for all foreground contamination scenarios. Formally, in
the optimistic case the two models may be distinguished at more than
$100-\sigma$, with this level of discriminating power achievable in multiple
independent redshift bins. This is encouraging, especially considering that
our fiducial FDM model has a particle mass of $m_{\rm FDM}=10^{-21}$ eV,
fairly comparable to current limits from the Ly-$\alpha$ forest (Irsic et al.,
2017), although more stringent limits were found recently from Ly-$\alpha$
data by Rogers & Peiris (2020). In any case, HERA measurements may provide an
independent and potentially powerful constraint on FDM, although this
statement depends somewhat on the impact of degeneracies with astrophysical
parameters, as studied in the next section.
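The $\Delta\chi^2$ statistic used here is the standard Gaussian-error sum over $k$-bins and redshifts. A minimal sketch with toy numbers (the power spectrum values and errors below are invented for illustration):

```python
import numpy as np

def delta_chi2(P_true, P_alt, sigma):
    """Delta chi^2 between two model power spectra over a set of bins,
    assuming Gaussian errors sigma and that P_true is the underlying model."""
    P_true, P_alt, sigma = map(np.asarray, (P_true, P_alt, sigma))
    return np.sum((P_true - P_alt)**2 / sigma**2)

# Toy example in mK^2: three k-bins where the models differ at roughly
# 2, 3, and 1 sigma respectively.
dchi2 = delta_chi2([10.0, 20.0, 15.0], [8.0, 14.0, 14.0], [1.0, 2.0, 1.0])
print(dchi2)  # 4 + 9 + 1 = 14
```

In the paper's application, the sum runs over all $k$-bins within each 8 MHz redshift bin, with $\sigma$ from 21cmSense and $P_{\rm true}$, $P_{\rm alt}$ the CDM and FDM spectra.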
Fig 8 also reveals interesting trends in the constraining power versus
redshift. In all cases, the strongest differences between the two models
(relative to the HERA error bars) occur towards the end of reionization, but
$\Delta\chi^{2}$ also shows prominent peaks near $z\sim 12.5$. This redshift
dependence arises because the sky background, dominated by galactic
synchrotron emission, scales as $\nu^{-2.6}$ and so the noise power spectrum –
which is quadratic in the sky temperature – scales strongly with redshift. On
the other hand, the signal power spectrum and the difference between models is
actually larger at high redshift during the Cosmic Dawn (see §3 and Fig 5),
which partly compensates for the enhanced noise. Intuition for the bumpy
structure in Fig 8 can be gleaned from Fig 2: FDM is mostly a delayed version
of the CDM case, and at some redshifts the power spectra differ greatly in
magnitude while at others they happen to be nearer in amplitude. These
differences lead to corresponding structure in the $\Delta\chi^{2}$ curves,
although the exact location of these bumps will change for different
reionization models.
## 5\. Fisher Matrix Forecasts
In order to make more quantitative forecasts, however, we need to account for
parameter degeneracies. We accomplish this using the Fisher matrix formalism.
Specifically, we consider a five-dimensional parameter space described by a
vector ${\bf q}$ with components $(q_{\zeta},q_{\rm Tvir},q_{\rm
f\star},q_{\rm\zeta X},q_{mFDM})$. The parameters here describe the fractional
difference between each quantity and our fiducial model (e.g. Ewall-Wice et
al. 2016): e.g. $q_{\rm Tvir}=(T_{\rm vir}-T_{\rm vir,fid})/T_{\rm vir,fid}$.
As discussed previously in §2, $\zeta$ is an ionizing efficiency parameter,
$T_{\rm vir}$ is the minimum virial temperature of galaxy hosting dark matter
halos, $f_{\star}$ is the star formation efficiency, $\zeta_{X}$ is an X-ray
heating efficiency parameter, and $m_{\rm FDM}$ is the FDM particle mass. For
simplicity, we assume the astrophysical parameters are redshift independent;
we comment further on this assumption in what follows. As discussed earlier,
our fiducial parameter set is: $(\zeta=20,T_{\rm vir}=2\times 10^{4}{\rm
K},f_{\star}=0.05,\zeta_{X}=2\times 10^{56}\,M_{\odot}^{-1},m_{\rm
FDM}=10^{-21}{\rm eV})$. Note that the fiducial model here is an FDM one,
rather than a CDM case, since this facilitates the Fisher matrix
computations: a fiducial CDM case would involve an effectively
infinite FDM mass. This problem could be avoided by adopting the inverse FDM
mass as the model parameter rather than the mass itself. However, this leads
to asymmetric errors and violates the Fisher formalism’s assumption of a
quadratic expansion in the log-likelihood around the fiducial parameter
values. Although techniques have been proposed in the context of warm dark
matter models to circumvent these issues (Markovic et al., 2011), we avoid
them here by simply assuming an FDM case as our fiducial model. This is
sufficient for our goals of understanding the impact of parameter degeneracies
and the overall ability of the HERA data to constrain FDM mass.
The Fisher matrix may be written as:
$F_{ij}=\sum_{k,z}\frac{\partial\Delta^{2}_{21}(k,z)}{\partial q_{i}}\frac{\partial\Delta^{2}_{21}(k,z)}{\partial q_{j}}\frac{1}{{\rm var}[\Delta^{2}_{21}(k,z)]}\text{,}$ (6)
where the sum runs over the full range of redshift and wavenumber bins, and
the power spectrum variance is computed using 21cmSense as described in the
previous section. The wavenumber bins and redshift bins, separated by the
$B=8$ MHz bandwidth of each power spectrum measurement, are approximated as
independent. The resulting parameter constraint forecasts are obtained by
computing the inverse of the Fisher matrix. We compute the derivatives with
respect to the various parameters in Eq 6 using two-sided numerical
derivatives with a step-size of $5\%$ in each parameter. We find nearly
identical results using one-sided derivatives with the same step size.
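The Fisher matrix of Eq 6, built from two-sided numerical derivatives with a 5% step, can be sketched as below. The `model` function stands in for the full 21cmFAST power spectrum evaluation (hypothetical interface); the toy linear model at the end is purely to check the machinery, since its derivatives are known exactly.

```python
import numpy as np

def fisher_matrix(model, q0, var, step=0.05):
    """Fisher matrix via two-sided numerical derivatives (Eq 6).

    model(q) returns the power spectrum flattened over all (k, z) bins;
    var is the per-bin variance from the sensitivity calculation; the
    parameters q are fractional offsets, so step=0.05 is the 5% step
    quoted in the text.
    """
    q0 = np.asarray(q0, dtype=float)
    n = len(q0)
    derivs = []
    for i in range(n):
        dq = np.zeros(n)
        dq[i] = step
        derivs.append((model(q0 + dq) - model(q0 - dq)) / (2.0 * step))
    D = np.array(derivs)                    # D[i] = dP/dq_i over all bins
    return D @ np.diag(1.0 / var) @ D.T    # F_ij = sum_bins dP_i dP_j / var

# Sanity check on a toy linear model P(q) = A q with unit variances:
# the Fisher matrix should equal A^T A exactly.
A = np.array([[1.0, 2.0], [0.5, -1.0]])
F = fisher_matrix(lambda q: A @ q, np.zeros(2), var=np.ones(2))
print(F)
```

For a linear model the two-sided difference is exact, so this check isolates bookkeeping errors; for the real, nonlinear power spectrum the 5% step introduces truncation error, which the text controls by comparing against one-sided derivatives.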
Figure 9.— Derivatives of the 21 cm power spectrum with respect to various
parameters. These derivatives enter into the Fisher matrix computations of Eq
6 and show how the power spectrum depends on parameters. For illustration, the
results are shown at several example redshifts as a function of $k$. Figure
10.— Identical to Figure 9 but at higher redshifts.
It is instructive to first examine the derivatives with respect to the five
parameters. Fig 9 shows derivatives at several example redshifts as a function
of wavenumber. We focus our attention on how the derivatives with respect to
FDM mass compare with the other parameter derivatives. During the EoR (at
$z\lesssim 10$ in our fiducial model), the derivatives with respect to FDM
mass and $q_{\zeta}$ share the same sign and have a similar scale dependence.
This is expected because increasing the FDM particle mass reduces the halo
abundance suppression effect from FDM. This lessens the resulting delay in
structure formation and in the timing of the EoR, while increasing the
ionizing efficiency $\zeta$ has a similar effect. In other words, we expect
the error bars for these two parameters to show a negative correlation since
one can compensate for the reduced suppression from raising the FDM mass by
decreasing the ionizing efficiency. In practice, however, allowing further
parameters to vary impacts this degeneracy direction as discussed further
below.
The degeneracy with $\zeta$ can in principle be broken by observations at
higher redshift. During the Cosmic Dawn, when only a very small fraction of
the IGM volume is ionized, the ionizing efficiency is not by itself an
important parameter. In these earlier epochs, the X-ray heating and star-
formation efficiency parameters are instead important. Although we generally
expect there to be some relationship between the ionizing and star-formation
efficiencies, we treat these as independent parameters since the ionizing
efficiency depends additionally on the escape fraction of ionizing photons,
for example. At high redshifts, the star-formation efficiency in our model
plays a key role in determining the onset of Wouthuysen-Field coupling.
Therefore, at higher redshifts, the derivatives with respect to FDM should be
compared to those with respect to $q_{f\star}$ and $q_{\zeta x}$, while there
is a different and much weaker dependence on $q_{\zeta}$ (as illustrated by
the two highest redshift panels in Fig 9). Furthermore, $q_{f\star}$ impacts
mostly higher redshifts than $q_{\zeta x}$ since the Wouthuysen-Field coupling
precedes X-ray heating in our fiducial model. This is shown explicitly in Fig
10. The differing trends with redshift imply that HERA and other 21 cm surveys
can help break degeneracies between FDM mass and astrophysical parameters by
measuring the full redshift evolution of the signal, especially if this can be
done over a relatively broad range in wavenumber. The one caveat here is that
we have assumed the various astrophysical parameters are redshift independent:
allowing redshift evolution in these parameters would naturally weaken our
forecasts on FDM mass. We suspect, however, that this is not a strong effect
for plausible smooth and monotonic redshift variations in these parameters.
The other important astrophysical parameter at play is $T_{\rm vir}$ which
sets the minimum mass of galaxy hosting dark matter halos in our model.
Increasing the virial temperature suppresses the abundance of galaxy hosting
halos. This effect resembles decreasing the FDM mass, and so we anticipate
positively correlated errors on virial temperature and FDM mass. This
degeneracy is reflected by the opposite signs, yet similar shape, of the
derivatives with respect to $q_{\rm Tvir}$ and $q_{\rm mFDM}$ in Fig 9 and Fig
10. Although this is an important degeneracy, note that the FDM suppression
mass in our fiducial model is almost an order of magnitude larger than the
mass associated with our fiducial value of the virial temperature (see Eq 2.2
and the discussion in §2.2). Therefore, sharp constraints on FDM mass are
still expected in our fiducial model.
Figure 11.— Parameter constraint forecasts for each of the three different
foreground contamination scenarios for HERA. The optimistic, moderate, and
pessimistic cases are shown in green, blue, and red, respectively. The
ellipses span 2-$\sigma$ confidence intervals.
Fig 11 shows the resulting parameter constraint forecasts for HERA
observations in each of the pessimistic, moderate, and optimistic foreground
contamination scenarios (§4). These results sum over the full redshift range
spanned by HERA ($z=5-27$) and over all wavenumbers. The bottom-line
constraint on the FDM particle mass is given by the 1D likelihoods
(marginalized over the other parameters) in the bottom-right hand panel.
Encouragingly, we forecast very tight constraints on the FDM mass for all
foreground treatments: HERA should determine the FDM mass to within 4.8%, 20%,
and 26% in the optimistic, moderate, and pessimistic scenarios, respectively,
at 2-$\sigma$ confidence. That is, we expect a strong detection of FDM and a
tight constraint on the FDM particle mass. These numbers assume our fiducial
FDM mass of $m_{FDM}=10^{-21}$ eV as the true underlying model. If we had
instead assumed CDM as the fiducial model, the tight constraints shown here
suggest that the resulting lower bound on the FDM particle mass would lie
significantly above $10^{-21}$ eV.
As anticipated earlier, there are fairly strong parameter degeneracies between
FDM mass and other astrophysical parameters. The most prominent one is with
the minimum virial temperature of galaxy hosting dark matter halos. The strong
positive correlation between these parameters results because increasing the
FDM mass lessens the delay in the EoR from FDM, which can be counteracted by
increasing $T_{\rm vir}$. The slightly less strong degeneracy seen in the
$q_{\zeta}-q_{mFDM}$ plane is naively surprising, since we expect the errors
on these parameters to be negatively correlated. This occurs, however, because
$\zeta$ and $m_{FDM}$ are not the only parameters in the problem. For example,
an increase in $\zeta$ can be compensated by boosting $T_{\rm vir}$ which then
requires a counteracting increase in $m_{FDM}$. Indeed, if we fix all of the
other nuisance parameters to their fiducial values, the degeneracy direction
in the $q_{\zeta}-q_{mFDM}$ plane flips: in this case, these quantities show
negative error correlations as naively expected. The degeneracies between the
FDM particle mass and the star-formation efficiency and X-ray heating
parameters are less strong. This mainly results because the error bars on
HERA’s power spectrum measurements during the EoR are much smaller than during
the Cosmic Dawn (see §4), while the star-formation efficiency and X-ray
heating parameters mostly impact the Cosmic Dawn and not the EoR.
Figure 12.— Constraint forecasts for HERA including only redshifts $z\leq 10$
in the Fisher matrix calculations. That is, these forecasts only include
redshifts during the EoR. We adopt the moderate foreground removal scenario.
The ellipses enclose $2-\sigma$ confidence regions. Figure 13.— Constraint
forecasts for HERA including only redshifts $z\geq 10$ in the Fisher matrix
calculations. Here we include only Cosmic Dawn redshifts. We adopt the
moderate foreground removal scenario. The ellipses enclose $2-\sigma$
confidence regions.
Figs 12 and 13 further illustrate this by showing, respectively, the parameter
constraints from the EoR and Cosmic Dawn alone. These results are shown for
the moderate foreground case. The EoR calculations consider $z\leq 10$ while
the Cosmic Dawn ones adopt $z\geq 10$. The FDM mass constraints from the EoR
alone are a factor of $\sim 5$ tighter than those from the Cosmic Dawn alone.
Although the statistical precision of the Cosmic Dawn constraints is formally
much weaker, it is still appealing to constrain FDM mass with HERA power
spectrum measurements in this era. For one, somewhat different physics is
involved in this period and so it provides a potential cross-check on the EoR
constraints. Second, it probes the earliest stages of star and galaxy
formation where FDM has an especially strong impact. Finally, although we do
not consider this combination here, one can potentially combine the power
spectrum constraints with global 21 cm measurements: current global 21 cm
experiments already have the sensitivity to detect the Cosmic Dawn if
systematic concerns can be mitigated (e.g. Bowman et al. 2018; Lidz & Hui
2018; Schneider 2018; Nebrin et al. 2019; Muñoz et al. 2020).
Here we can also briefly discuss how our results compare with those in the
related work of Muñoz et al. (2020). Both studies find that future HERA
measurements should provide cutting-edge constraints on FDM. The most
important difference between our study and this earlier work is that Muñoz et
al. (2020) adopt a rather different set of fiducial astrophysical parameters.
Specifically, in that study, they allow efficient star formation in molecular
cooling halos with masses of order $\sim 10^{6}-10^{7}M_{\odot}$, although
they also include a model for dissociating Lyman-Werner band feedback which
partly regulates star formation in these halos. If star formation is indeed
efficient in these small halos, the relative streaming velocity between dark
matter and baryons is an important effect (Tseliakhovich & Hirata, 2010): this
leads to spatial variations in the halo-collapse fraction which in turn
enhances the Cosmic Dawn era 21 cm fluctuation signal (see also e.g. Fialkov
et al. 2013). In our model, on the other hand, we assume that star-formation
is inefficient in these small mass halos, and that Ly-$\alpha$ coupling, X-ray
heating, and reionization are accomplished entirely by stars forming in higher
mass atomic cooling halos. In this case, the relative streaming velocity
effect is a fairly minor one (Tseliakhovich & Hirata, 2010) and neglected
here.
Their scenario hence leads to a stronger Cosmic Dawn 21 cm fluctuation signal,
and so they arrive at more optimistic conclusions regarding the detectability
of this era with HERA. In fact, their analysis considers only the Cosmic Dawn
era signal, and not the EoR. In our model, we have seen that the constraints
on FDM mass from the EoR are much stronger than those from the Cosmic Dawn
(see Figures 12 and 13). Their scenario requires strong evolution in the star
formation efficiency towards high redshift and low halo mass (e.g. Mirocha &
Furlanetto 2018). Moreover, this case may sit uncomfortably with the low
electron scattering optical depths – which bound the star formation efficiency
in such halos (e.g. Visbal et al. 2015; Miranda et al. 2017) – suggested by
Planck 2018 measurements (Aghanim et al., 2018). Hopefully, upcoming 21 cm
measurements will help determine empirically which fiducial model here is more
reliable. In any case, these upcoming surveys should provide interesting FDM
constraints.
Finally, it is interesting to note that allowing FDM mass as a free parameter
significantly degrades the constraints on the astrophysical parameters.
Specifically, our forecast errors on $T_{\rm vir}$, $\zeta$, $f_{\star}$, and
$\zeta_{x}$ shrink if we fix the FDM particle mass to its fiducial value,
rather than letting it vary freely. The improvement is at the factor-of-several
level for $T_{\rm vir}$ and $\zeta$. Indeed, our error forecasts on these
parameters are larger than in previous work (e.g. Ewall-Wice et al. 2016). We
find very similar results to this earlier study, however, if we instead fix
the FDM mass.
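The marginalized-versus-fixed comparison above follows directly from standard Fisher practice: marginalized 1-$\sigma$ errors are $\sqrt{(F^{-1})_{ii}}$, while fixing a parameter means deleting its row and column before inverting, which for a single remaining parameter reduces to $1/\sqrt{F_{ii}}$. A sketch with an invented, correlated 2x2 Fisher matrix:

```python
import numpy as np

# Toy 2x2 Fisher matrix with a strong off-diagonal (degenerate) term.
# Numbers are illustrative, not from the paper's actual forecast.
F = np.array([[4.0, 1.8],
              [1.8, 1.0]])

marginalized = np.sqrt(np.diag(np.linalg.inv(F)))  # each parameter free
fixed = 1.0 / np.sqrt(np.diag(F))                  # other parameter held fixed

print(marginalized)
print(fixed)
```

The marginalized errors always equal or exceed the fixed-parameter ones, and the gap grows with the degeneracy, which is exactly why freeing the FDM mass degrades the $T_{\rm vir}$ and $\zeta$ constraints.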
## 6\. Conclusions
We modeled the impact of FDM on the 21 cm power spectrum and forecasted the
expected constraints on FDM mass from upcoming HERA measurements. The
suppression in the abundance of small mass halos leads to a delay in the
Cosmic Dawn and the EoR and strongly impacts the power spectrum of 21 cm
fluctuations, even for FDM models with $m_{FDM}\sim 10^{-21}$ eV that remain
challenging to constrain by other means. In addition, FDM modifies the spatial
structure of the 21 cm signal at a given stage of the EoR and Cosmic Dawn.
This occurs because of the small mass halo suppression in FDM; the ionizing
sources are hence more highly-biased tracers of the matter power spectrum in
FDM than CDM.
We further characterized degeneracies between the effects of FDM and uncertain
astrophysical parameters. The most important one is with the minimum virial
temperature of galaxy hosting halos. In our fiducial model ($T_{\rm
vir}=2\times 10^{4}$ K and $m_{FDM}=10^{-21}$ eV), however, the FDM
suppression mass is larger than the minimum mass of galaxy hosting halos and
so sharp constraints on FDM are still expected. In the future, measurements of
the UV luminosity function with e.g. the James Webb Space Telescope may reveal
a turn-over or flattening at low luminosities. The precise shape and redshift
dependence of the faint end of the luminosity function may then help in
separating out the effects of the minimum virial temperature and the FDM mass,
especially when the UV luminosity function measurements are combined with
redshifted 21 cm observations.
Assuming an FDM model with $m_{FDM}=10^{-21}$ eV, we forecast a strong
detection in upcoming HERA 21 cm observations and a tight 20% determination of
the FDM particle mass (at 2-$\sigma$ confidence). On the other hand, if CDM is
the true model, we expect to strongly rule out a case with $m_{FDM}=10^{-21}$
eV. These constraints depend on the ability of future 21 cm surveys to
mitigate challenging foreground contamination systematics, but strong limits
appear feasible even in the pessimistic foreground contamination scenario of
Pober et al. (2014b). Furthermore, we have only considered the power spectrum
in this study, but more information about the 21 cm field should be contained
in higher order statistics (e.g., the bispectrum; Majumdar et al., 2018),
potentially allowing even tighter constraints.
It is also interesting to note that our study has implications for warm dark
matter (WDM) particle candidates. Although the transfer function in WDM has a
different shape than in FDM, we can roughly translate our FDM constraints into
WDM forecasts by finding the WDM particle mass that matches the suppression
mass of Eq. 2.2 in FDM (see e.g. Hui et al. 2017; Lidz & Hui 2018). In the
case of thermal relic WDM, this translation gives $m_{\rm WDM}=2.6\,{\rm
keV}\left[m_{\rm FDM}/(10^{-21}{\rm eV})\right]^{0.4}$ (e.g. Lidz & Hui 2018).
Thus our fiducial FDM model roughly matches the suppression in a thermal relic
WDM model with a mass of 2.6 keV. If the true model is WDM with this mass, HERA
should deliver a fractional error on the WDM mass of $\sigma_{\rm
m_{WDM}}/{\rm m_{WDM}}\sim 0.4\,\sigma_{\rm m_{FDM}}/{\rm m_{FDM}}$. In the
moderate foreground removal scenario, for instance, this implies an $8\%$
constraint on the WDM particle mass.
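The FDM-to-WDM translation quoted above, and the accompanying error propagation, are easy to verify numerically (the function names are ours; the mapping itself is the one from the text, following Lidz & Hui 2018):

```python
def m_wdm_kev(m_fdm_ev):
    """Thermal relic WDM mass [keV] matching the FDM suppression scale:
    m_WDM = 2.6 keV * (m_FDM / 1e-21 eV)^0.4."""
    return 2.6 * (m_fdm_ev / 1e-21) ** 0.4

def wdm_frac_err(fdm_frac_err):
    """Error propagation through the power law:
    sigma_mWDM / m_WDM = 0.4 * sigma_mFDM / m_FDM."""
    return 0.4 * fdm_frac_err

print(m_wdm_kev(1e-21))    # 2.6 keV for the fiducial FDM mass
print(wdm_frac_err(0.20))  # moderate scenario: 20% on FDM -> 8% on WDM
```

The factor 0.4 in the error propagation is just the power-law index of the mass mapping, which is how the moderate-foreground 20% FDM constraint becomes an 8% WDM constraint.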
Although there are uncertainties in modeling early star and galaxy formation
and the resulting 21 cm signal, the redshifted 21 cm signal provides a
uniquely powerful constraint on the timing of some of the earliest phases of
structure formation and this gives an appealing handle on FDM models. It would
be hard to reconcile FDM, or any other model in which the initial density
power spectrum is suppressed on small scales, with an early start to the
Cosmic Dawn and the EoR, as might be revealed via upcoming HERA measurements.
The HERA observations can be combined with independent methods to convincingly
rule-out or detect FDM, including: analyses of the Lyman-alpha forest (Irsic
et al., 2017), post-reionization 21 cm intensity mapping surveys (Bauer et
al., 2020), measurements of UV luminosity functions (Bozek et al., 2015), sub-
structure lensing data (Dalal & Kochanek, 2002), studies of ultra-faint dwarf
galaxies (Marsh & Niemeyer, 2019), the potential imprint of FDM on tidal
streams in our galaxy (Dalal et al., 2020), and using the black hole super-
radiance phenomenon (Davoudiasl & Denton, 2019).
## Acknowledgements
AL acknowledges support, in part, through NASA ATP grant 80NSSC20K0497. We
thank the anonymous referee for helpful comments on the manuscript.
## References
* Aghanim et al. (2018) Aghanim, N., et al. 2018, 1807.06209
* Arcadi et al. (2018) Arcadi, G., Dutra, M., Ghosh, P., Lindner, M., Mambrini, Y., Pierre, M., Profumo, S., & Queiroz, F. S. 2018, Eur. Phys. J. C, 78, 203, 1703.07364
* Barkana & Loeb (2005) Barkana, R., & Loeb, A. 2005, Astrophys. J., 626, 1, astro-ph/0410129
* Bauer et al. (2020) Bauer, J. B., Marsh, D. J. E., Hložek, R., Padmanabhan, H., & Laguë, A. 2020, Mon. Not. Roy. Astron. Soc., 500, 3162, 2003.09655
* Bond et al. (1991) Bond, J. R., Cole, S., Efstathiou, G., & Kaiser, N. 1991, ApJ, 379, 440
* Bowman et al. (2018) Bowman, J. D., Rogers, A. E. E., Monsalve, R. A., Mozdzen, T. J., & Mahesh, N. 2018, Nature, 555, 67
* Bozek et al. (2015) Bozek, B., Marsh, D. J. E., Silk, J., & Wyse, R. F. G. 2015, Mon. Not. Roy. Astron. Soc., 450, 209, 1409.3544
* Chen & Miralda-Escude (2004) Chen, X.-L., & Miralda-Escude, J. 2004, Astrophys. J., 602, 1, astro-ph/0303395
* Dalal et al. (2020) Dalal, N., Bovy, J., Hui, L., & Li, X. 2020, 2011.13141
* Dalal & Kochanek (2002) Dalal, N., & Kochanek, C. 2002, Astrophys. J., 572, 25, astro-ph/0111456
* Davies et al. (2018) Davies, F. B., et al. 2018, Astrophys. J., 864, 142, 1802.06066
* Davoudiasl & Denton (2019) Davoudiasl, H., & Denton, P. B. 2019, Phys. Rev. Lett., 123, 021102
* DeBoer et al. (2017) DeBoer, D. R., et al. 2017, Publ. Astron. Soc. Pac., 129, 045001, 1606.07473
* Dewdney et al. (2009) Dewdney, P. E., Hall, P. J., Schilizzi, R. T., & Lazio, T. J. L. W. 2009, IEEE Proceedings, 97, 1482
* Dillon & Parsons (2016) Dillon, J. S., & Parsons, A. R. 2016, Astrophys. J., 826, 181, 1602.06259
* Ewall-Wice et al. (2016) Ewall-Wice, A., Hewitt, J., Mesinger, A., Dillon, J. S., Liu, A., & Pober, J. 2016, Mon. Not. Roy. Astron. Soc., 458, 2710, 1511.04101
* Fialkov & Barkana (2019) Fialkov, A., & Barkana, R. 2019, Mon. Not. Roy. Astron. Soc., 486, 1763, 1902.02438
* Fialkov et al. (2013) Fialkov, A., Barkana, R., Visbal, E., Tseliakhovich, D., & Hirata, C. M. 2013, MNRAS, 432, 2909, 1212.0513
* Field (1958) Field, G. B. 1958, Proceedings of the IRE, 46, 240
* Furlanetto et al. (2006) Furlanetto, S., Oh, S. P., & Briggs, F. 2006, Phys. Rept., 433, 181, astro-ph/0608032
* Furlanetto et al. (2004) Furlanetto, S., Zaldarriaga, M., & Hernquist, L. 2004, Astrophys. J., 613, 1, astro-ph/0403697
* Gunn et al. (1978) Gunn, J. E., Lee, B. W., Lerche, I., Schramm, D. N., & Steigman, G. 1978, ApJ, 223, 1015
* Hills et al. (2018) Hills, R., Kulkarni, G., Meerburg, P. D., & Puchwein, E. 2018, Nature, 564, E32, 1805.01421
* Hoag et al. (2019) Hoag, A. et al. 2019, ApJ, 878, 12, 1901.09001
* Hu et al. (2000) Hu, W., Barkana, R., & Gruzinov, A. 2000, Phys. Rev. Lett., 85, 1158, astro-ph/0003365
* Hui et al. (2017) Hui, L., Ostriker, J. P., Tremaine, S., & Witten, E. 2017, Phys. Rev., D95, 043541, 1610.08297
* Irsic et al. (2017) Irsic, V., Viel, M., Haehnelt, M. G., Bolton, J. S., & Becker, G. D. 2017, Phys. Rev. Lett., 119, 031302, 1703.04683
* Li et al. (2019) Li, X., Hui, L., & Bryan, G. L. 2019, Phys. Rev. D, 99, 063509, 1810.01915
* Lidz & Hui (2018) Lidz, A., & Hui, L. 2018, Phys. Rev. D, 98, 023011, 1805.01253
* Liu & Shaw (2020) Liu, A., & Shaw, J. R. 2020, Publ. Astron. Soc. Pac., 132, 062001, 1907.08211
* Madau et al. (1997) Madau, P., Meiksin, A., & Rees, M. J. 1997, Astrophys. J., 475, 429, astro-ph/9608010
* Majumdar et al. (2018) Majumdar, S., Pritchard, J. R., Mondal, R., Watkinson, C. A., Bharadwaj, S., & Mellema, G. 2018, MNRAS, 476, 4007, 1708.08458
* Mao et al. (2012) Mao, Y., Shapiro, P. R., Mellema, G., Iliev, I. T., Koda, J., & Ahn, K. 2012, MNRAS, 422, 926, 1104.2094
* Markovic et al. (2011) Markovic, K., Bridle, S., Slosar, A., & Weller, J. 2011, Journal of Cosmology and Astroparticle Physics, 2011, 022, 1009.0218
* Marsh & Niemeyer (2019) Marsh, D. J. E., & Niemeyer, J. C. 2019, Phys. Rev. Lett., 123, 051103, 1810.08543
* Marthi & Chengalur (2013) Marthi, V. R., & Chengalur, J. 2013, MNRAS, 437, 524
* Mason et al. (2019) Mason, C. A., et al. 2019, Mon. Not. Roy. Astron. Soc., 485, 3947, 1901.11045
* McGreer et al. (2015) McGreer, I., Mesinger, A., & D’Odorico, V. 2015, Mon. Not. Roy. Astron. Soc., 447, 499, 1411.5375
* McQuinn et al. (2007) McQuinn, M., Lidz, A., Zahn, O., Dutta, S., Hernquist, L., & Zaldarriaga, M. 2007, Mon. Not. Roy. Astron. Soc., 377, 1043, astro-ph/0610094
* Mesinger & Furlanetto (2007) Mesinger, A., & Furlanetto, S. 2007, Astrophys. J., 669, 663, 0704.0946
* Mesinger et al. (2011) Mesinger, A., Furlanetto, S., & Cen, R. 2011, MNRAS, 411, 955, 1003.3878
* Miranda et al. (2017) Miranda, V., Lidz, A., Heinrich, C. H., & Hu, W. 2017, Mon. Not. Roy. Astron. Soc., 467, 4050, 1610.00691
* Mirocha & Furlanetto (2018) Mirocha, J., & Furlanetto, S. R. 2018, 1803.03272
* Muñoz et al. (2020) Muñoz, J. B., Dvorkin, C., & Cyr-Racine, F.-Y. 2020, Phys. Rev. D, 101, 063526, 1911.11144
* Nebrin et al. (2019) Nebrin, O., Ghara, R., & Mellema, G. 2019, JCAP, 04, 051, 1812.09760
* Parsons et al. (2014) Parsons, A. R. et al. 2014, ApJ, 788, 106, 1304.4991
* Pober et al. (2014a) Pober, J. C. et al. 2014a, The Astrophysical Journal, 782, 66
* Pober et al. (2013) Pober, J. C. et al. 2013, AJ, 145, 65, 1210.2413
* Pober et al. (2014b) Pober, J. C., et al. 2014b, Astrophys. J., 782, 66, 1310.7031
* Pritchard & Loeb (2012) Pritchard, J. R., & Loeb, A. 2012, Reports on Progress in Physics, 75, 086901, 1109.6012
* Reis et al. (2020) Reis, I., Fialkov, A., & Barkana, R. 2020, Mon. Not. Roy. Astron. Soc., 499, 5993, 2008.04315
* Rogers & Peiris (2020) Rogers, K. K., & Peiris, H. V. 2020, 2007.12705
* Schauer et al. (2019) Schauer, A. T. P., Liu, B., & Bromm, V. 2019, Astrophys. J. Lett., 877, L5, 1901.03344
* Schenker et al. (2014) Schenker, M. A., Ellis, R. S., Konidaris, N. P., & Stark, D. P. 2014, Astrophys. J., 795, 20, 1404.4632
* Schive et al. (2014) Schive, H.-Y., Chiueh, T., & Broadhurst, T. 2014, Nature Phys., 10, 496, 1406.6586
* Schneider (2018) Schneider, A. 2018, 1805.00021
* Sims & Pober (2020) Sims, P. H., & Pober, J. C. 2020, Mon. Not. Roy. Astron. Soc., 492, 22, 1910.03165
* Singh & Subrahmanyan (2019) Singh, S., & Subrahmanyan, R. 2019, ApJ, 880, 26, 1903.04540
* Sitwell et al. (2014) Sitwell, M., Mesinger, A., Ma, Y.-Z., & Sigurdson, K. 2014, MNRAS, 438, 2664, 1310.0029
* Sobacchi & Mesinger (2014) Sobacchi, E., & Mesinger, A. 2014, Mon. Not. Roy. Astron. Soc., 440, 1662, 1402.2298
* Tseliakhovich & Hirata (2010) Tseliakhovich, D., & Hirata, C. 2010, Phys. Rev. D, 82, 083520, 1005.2416
* Visbal et al. (2015) Visbal, E., Haiman, Z., & Bryan, G. L. 2015, Mon. Not. Roy. Astron. Soc., 453, 4456, 1505.06359
* White & Rees (1978) White, S. D. M., & Rees, M. J. 1978, MNRAS, 183, 341
* Wouthuysen (1952) Wouthuysen, S. A. 1952, AJ, 57, 31
* Zahn et al. (2006) Zahn, O., Lidz, A., McQuinn, M., Dutta, S., Hernquist, L., Zaldarriaga, M., & Furlanetto, S. R. 2006, Astrophys. J., 654, 12, astro-ph/0604177
|
, and
# Phase transitions and noise sensitivity on the Poisson space via stopping sets and decision trees
Günter Last (Institute for Stochastics, Karlsruhe Institute of Technology, Karlsruhe), Giovanni Peccati (Department of Mathematics (DMATH), Luxembourg University, Luxembourg), and D. Yogeshwaran (Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, Bangalore).
###### Abstract
Proofs of sharp phase transition and noise sensitivity in percolation have
been significantly simplified by the use of randomized algorithms, via the
OSSS inequality (proved by O’Donnell, Saks, Schramm and Servedio in [60]) and
the Schramm-Steif inequality for the Fourier-Walsh coefficients of functions
defined on the Boolean hypercube (see [70]). In this article, we prove
intrinsic versions of the OSSS and Schramm-Steif inequalities for functionals
of a general Poisson process, and apply these new estimates to deduce
sufficient conditions — expressed in terms of randomized stopping sets —
yielding sharp phase transitions, quantitative noise sensitivity, exceptional
times and bounds on critical windows for monotonic Boolean Poisson functions.
Our analysis is based on a new general definition of ‘stopping set’, not
requiring any topological property for the underlying measurable space, as
well as on the new concept of a ‘continuous-time decision tree’, for which we
establish several fundamental properties. We apply our findings to the
$k$-percolation of the Poisson Boolean model and to the Poisson-based confetti
percolation with bounded random grains. In these two models, we reduce the
proof of sharp phase transitions for percolation, and of noise sensitivity for
crossing events, to the construction of suitable randomized stopping sets and
the computation of one-arm probabilities at criticality. This enables us to
settle an open problem suggested by Ahlberg, Tassion and Teixeira [4] (a
special case of which was conjectured earlier by Ahlberg, Broman, Griffiths
and Morris [2]) on noise sensitivity of crossing events for the planar Poisson
Boolean model with random balls whose radius distribution has finite
$(2+\alpha)$-moments and also show the same for planar confetti percolation
model with bounded random balls. We also prove that critical probability is
$1/2$ for the planar confetti percolation model with bounded, $\pi/2$-rotation
invariant and reflection invariant random grains. Such a result was
conjectured by Benjamini and Schramm [10] in the case of fixed balls and
proved by Müller [56], Hirsch [40] and Ghosh and Roy [31] in the case of
balls, boxes and random boxes, respectively; our results contain all previous
findings as special cases.
###### keywords:
[class=MSC2020] 60D05, 82B43, 60G55, 60J25, 60H99, 68W20
###### keywords:
Poisson process, functional inequalities, stopping sets, noise sensitivity, dynamical percolation, sharp phase-transition
###### Contents
1. 1 Introduction
1. 1.1 Overview and motivation
2. 1.2 A closer look at our main bounds
3. 1.3 Overview of applications
4. 1.4 Organization of the paper
5. 1.5 Some preliminaries on Poisson processes
2. I The OSSS Inequality for Poisson functionals
1. 2 Continuous time decision trees
2. 3 The OSSS inequality for Poisson functionals
3. 4 Variants of the OSSS inequality
3. II Quantitative Noise Sensitivity
1. 5 Schramm-Steif inequalities on the Poisson space
2. 6 Quantitative noise sensitivity
1. 6.1 The Ornstein-Uhlenbeck semigroup
2. 6.2 Noise sensitivity
3. 6.3 Noise stability
3. 7 Exceptional times and critical windows
4. III Applications to Continuum percolation models
1. 8 $k$-Percolation in the Poisson Boolean model
2. 9 Confetti Percolation
3. 10 Noise sensitivity for the planar Boolean model with unbounded balls
4. 11 Further applications
5. A Graph-measurable mappings and stopping sets
6. B Non-attainable stopping sets
### 1 Introduction
#### 1.1 Overview and motivation
Functional inequalities for mappings on the discrete hypercubes
$\\{0,1\\}^{n}$ and $\\{-1,1\\}^{n}$, $n\in\\{1,\ldots,\infty\\}$, (typically
endowed with some product measure) play a pivotal role in many applications,
ranging from the mathematical theory of voting and social choice, to
computational complexity and percolation on lattices; see e.g. [29, 58, 59,
76]. These estimates – examples of which are the discrete Poincaré and
logarithmic Sobolev inequalities (see e.g. [29, Theorem 1.13] and [76, Lemma
8.15]) and Talagrand’s $L^{1}$-$L^{2}$ bound [18, 73] – are analytic in nature
and can be canonically framed in the language of Markov semigroups, see e.g.
[18, 57].
More than a decade ago, two striking estimates — one for the variance of
$f:\\{0,1\\}^{n}\to\\{0,1\\}$ and the other for the Fourier-Walsh coefficients
of $f:\\{-1,1\\}^{n}\to\\{-1,1\\}$ — were proved by O’Donnell, Saks, Schramm
and Servedio [60, Theorem 3.1] and Schramm and Steif [70, Theorem 1.8],
respectively. The former inequality is known as the OSSS inequality and we
shall refer to the latter as the Schramm-Steif inequality. Both the OSSS and
Schramm-Steif estimates require the existence of a randomized algorithm
determining the function $f$ (although this assumption can be relaxed in the
Schramm-Steif case, see Section 1.2 below), in such a way that the upper
bounds are expressed in terms of revealment probabilities, i.e., of the
probability that a bit/coordinate of the input is revealed by the algorithm.
While the Schramm-Steif inequality was motivated by quantitative noise
sensitivity for discrete percolation models, the OSSS inequality was
established in the context of decision tree complexity. More recently,
references [23, 25, 24] have pioneered the use of the OSSS inequality to
derive simple proofs of sharp phase transitions in percolation.
The aim of this paper is to establish bounds analogous to the OSSS and
Schramm-Steif inequalities for random variables that depend on Poisson
processes defined on abstract measurable spaces, see Theorem 3.2, Theorem 5.1
and Corollary 5.3 below. As demonstrated by the applications discussed in the
forthcoming Section 1.3, our estimates are perfectly tailored for studying
phase transitions and noise sensitivity in models of continuum percolation
(see e.g. [15, 37, 51]), and will allow us to settle some open problems from
[2, 4, 10].
One new theoretical insight developed in our work is that, in the context of
functionals of random point measures, the role of randomized algorithms for
Boolean inputs is naturally played by randomized stopping sets and continuous-
time decision trees (CTDTs). Although the notion of ‘stopping set’ is
classical in stochastic geometry (see e.g. [9, 54, 65, 81, 82]), in the
sections to follow we choose to adopt a more general definition of such an
object, that will allow us to directly work with Borel spaces not necessarily
verifying specific topological requirements. To the best of our knowledge,
such a general theory of stopping sets (which is presented in a mostly self-
contained way in Appendix A and has a clear independent interest) is developed
here for the first time. The notion of CTDT is also new, and will be studied
from scratch in Section 2 and Appendix B.
It is important to observe that the proofs of Theorem 3.2 and Theorem 5.1
below do not rely on semigroup techniques or on discretization schemes, but
are rather based on the use of the Markov property of stopping sets – as
proved in Theorem A.6 – combined with the classical characterization of
Poisson processes as completely independent random measures. In particular,
for now it does not seem possible to deduce our main findings from classical
functional inequalities on the Poisson space, such as those proved in [1, 48,
49, 61, 57, 78]. More to the point, when compared with the tools available for
mappings on Boolean hypercubes, the relevance of our estimates is amplified by
the fact that $L^{1}$-$L^{2}$ inequalities on the Poisson space only hold
under very restrictive assumptions, see [57]. Also, even if $L^{1}$-$L^{2}$
inequalities on the Poisson space were available in full generality, it would
not be straightforward to deduce from them sharp phase-transitions (see the
discussion below (2.12) in [21]). Remarkably, our analysis will also show that
multiple Wiener-Itô integrals and associated chaotic decompositions (see [47,
49, 62]) play a role similar to Fourier-Walsh expansions (associated with
mappings on hypercubes) for establishing quantitative noise sensitivity in
various contexts, see Section 6. Our method of proof should eventually be
contrasted with the original proof of the discrete OSSS inequality given in
[60], which is rather based on martingale estimates reminiscent of the
arguments leading to the discrete Poincaré inequality.
#### 1.2 A closer look at our main bounds
In order to motivate the reader, we now provide a more formal discussion of
some of the new estimates proved in this paper. We refer to Section 1.5 for a
rigorous definition of the objects appearing below.
Let $\eta$ be a Poisson process on a Borel space ${\mathbb{X}}$ (i.e.,
${\mathbb{X}}$ is a measurable space with a Borel-measurable bijection to a
Borel subset of $[0,1]$ with a measurable inverse), with a locally finite
intensity measure $\lambda$, and consider a random variable of the form
$f(\eta)$, where $f$ is some measurable mapping and
$\mathbb{E}[f(\eta)^{2}]<\infty$. In this case, the variance of $f(\eta)$ can
be bounded by using the classical Poincaré inequality stated in [49, Theorem
18.7], according to which
$\displaystyle\operatorname{{\mathbb{V}ar}}[f(\eta)]\leq\int\mathbb{E}[|D_{x}f(\eta)|^{2}]\,\lambda(dx),$
(1.1)
where, for $x\in{\mathbb{X}}$, the mapping $D_{x}f$ is the add-one cost
operator given by $D_{x}f(\eta):=f(\eta+\delta_{x})-f(\eta)$, with
$\delta_{x}$ the Dirac mass at $x$, and $\operatorname{{\mathbb{V}ar}}$
indicates the variance. The estimate (1.1) is an infinite-dimensional
counterpart to the well-known Efron-Stein inequality (see e.g. [17, Chapter
3]), stating that, if $F(X)$ is a square-integrable functional of a finite
vector of independent random variables $X=(X_{1},...,X_{n})$, then
$\operatorname{{\mathbb{V}ar}}(F(X))\leq\frac{1}{2}\sum_{i=1}^{n}\mathbb{E}[(F(X)-F(X^{i}))^{2}],$
(1.2)
where $X^{i}$ is the vector obtained from $X$ by re-randomizing the $i$th
coordinate $X_{i}$. Note that, if $f,F$ in the above discussion take values in
$\\{-1,1\\}$, then
$\operatorname{{\mathbb{V}ar}}(f(\eta))=\mathbb{E}[|f(\eta)-f(\eta^{\prime})|]$
and $\operatorname{{\mathbb{V}ar}}(F(X))=\mathbb{E}[|F(X)-F(X^{\prime})|]$,
where $\eta^{\prime}$ and $X^{\prime}$ are independent copies of $\eta$ and
$X$, respectively, and also
$\mathbb{E}[|D_{x}f(\eta)|^{2}]=4\mathbb{P}(f(\eta)\neq f(\eta+\delta_{x}))$
and $\mathbb{E}[(F(X)-F(X^{i}))^{2}]=4\mathbb{P}(F(X)\neq F(X^{i}))$. It is
easily checked that, if $X$ takes values in $\\{-1,1\\}^{n}$, then the latter
probability coincides (up to a factor) with the usual notion of influence of
the $i$th coordinate on $F$, as defined e.g. in [29, Section 1.3] (see also
Remark 1.1 below). Albeit fundamentally useful in many situations (see again
[17, Chapter 3], as well as [38] for a representative geometric application),
the estimates (1.1)–(1.2) are in general suboptimal, and not sufficiently
tight for several applications. See [57] (and Example 1 below) for several
illustrations of this point.
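To make the Efron-Stein bound (1.2) concrete, the following Monte Carlo sketch (an illustration of ours, not part of the paper's argument; function name and parameters are hypothetical) estimates both sides of (1.2) for the toy functional $F(X)=\max(X_{1},...,X_{n})$ with i.i.d. Uniform$(0,1)$ coordinates.

```python
import random

def efron_stein_check(n=5, trials=20000, seed=0):
    """Monte Carlo estimate of both sides of the Efron-Stein inequality
    Var(F(X)) <= (1/2) sum_i E[(F(X) - F(X^i))^2]
    for F(X) = max(X_1, ..., X_n), X_i i.i.d. Uniform(0, 1)."""
    rng = random.Random(seed)
    vals, bound_terms = [], []
    for _ in range(trials):
        x = [rng.random() for _ in range(n)]
        fx = max(x)
        vals.append(fx)
        s = 0.0
        for i in range(n):
            y = list(x)
            y[i] = rng.random()          # re-randomize the i-th coordinate
            s += (fx - max(y)) ** 2
        bound_terms.append(0.5 * s)
    mean = sum(vals) / trials
    var = sum((v - mean) ** 2 for v in vals) / trials
    bound = sum(bound_terms) / trials
    return var, bound
```

For $n=5$ the true variance is $5/252\approx 0.020$, while the right-hand side of (1.2) comes out strictly larger (roughly $0.03$), illustrating the slack in the Efron-Stein bound for this functional.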
Now consider a stopping set $Z(\eta)$. This notion, which is formally defined
in Appendix A, indicates that $Z(\eta)$ is a random subset of ${\mathbb{X}}$,
satisfying some adequate measurability conditions and such that $Z(\eta)$ only
depends on the restriction $\eta_{Z}$ of $\eta$ to $Z(\eta)$. We say that a
mapping $f(\cdot)$ is determined by $Z(\eta)$ if $f(\eta)$ only depends on
$\eta_{Z}$. In Theorem 3.2 and Corollary 5.3 below, we will prove the
following two estimates (1.3) and (1.5), allowing one to control the
fluctuations of a random variable $f(\eta)$ such that $f$ is determined by
$Z$.
1. (a)
Assume that the measure $\lambda$ is diffuse and that the stopping set
$Z(\eta)$ is the output of a sufficiently regular continuous-time decision
tree (CTDT), that is, of a sequence of increasing stopping sets
$\\{Z_{t}(\eta):t\geq 0\\}$ such that $Z_{t}(\eta)\uparrow Z(\eta)$ (see
Section 2). Then, if the random variable $f(\eta)$ is integrable and $f$ is
determined by $Z(\eta)$,
$\displaystyle\mathbb{E}[|f(\eta)-f(\eta^{\prime})|]\leq 2\int\mathbb{P}(x\in
Z)\mathbb{E}[|D_{x}f(\eta)|]\,\lambda(dx).$ (1.3)
Relation (1.3) is an infinite-dimensional counterpart to [60, Theorem 3.1],
implying the following bound: if $F(X)$ is an integrable functional of a
vector of independent random variables and $F$ is determined by a randomized
algorithm $T$, then
$\mathbb{E}[|F(X)-F(X^{\prime})|]\leq\sum_{i=1}^{n}\mathbb{P}(i\mbox{ is
revealed by}\,\,T)\,\mathbb{E}[|F(X)-F(X^{i})|],$ (1.4)
where we have used the previously introduced notation, and the event inside
the probability corresponds to the event that the coordinate $i$ is visited by
$T$. In the applications developed in Sections 8 and 9 we will show that (1.3)
and its generalizations can be used to prove sharp thresholds in continuum
percolation models. In Appendix B, we will also show that not every stopping
set $Z(\eta)$ can be represented as the output of a regular CTDT; as discussed
below, the extension of (1.3) to general stopping sets is an open problem.
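As a sanity check on the discrete bound (1.4), consider the toy case $F(X)=X_{1}\vee\cdots\vee X_{n}$ with i.i.d. Bernoulli$(p)$ bits and the greedy algorithm that reveals bits left to right, stopping at the first $1$. The sketch below (our illustration; the names are not from the paper) computes both sides of (1.4) in closed form.

```python
def osss_or_example(n=4, p=0.3):
    """Both sides of the OSSS bound (1.4) for F = OR of n Bernoulli(p) bits,
    with the algorithm revealing bits left to right until it sees a 1.
    Returns (lhs, rhs) = (E|F(X)-F(X')|, sum_i P(i revealed) E|F(X)-F(X^i)|)."""
    q = 1 - (1 - p) ** n                    # P(F = 1)
    lhs = 2 * q * (1 - q)                   # E|F(X) - F(X')| for a 0/1-valued F
    # Re-randomizing bit i changes F iff all the other bits are 0 and the
    # fresh bit differs from the old one: probability (1-p)^(n-1) * 2p(1-p).
    delta = 2 * p * (1 - p) ** n
    rhs = sum((1 - p) ** (i - 1) * delta for i in range(1, n + 1))
    return lhs, rhs
```

A short computation shows that the two sides agree exactly here: $\sum_{i}(1-p)^{i-1}=(1-(1-p)^{n})/p$, so the greedy algorithm saturates (1.4) for the OR function.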
2. (b)
Consider a stopping set $Z(\eta)$ (not necessarily generated by a CTDT), and
assume that $f(\eta)$ is square-integrable and $f$ is determined by $Z(\eta)$.
Then, if $u_{k}(\eta)$ denotes the $k$th summand in the Wiener-Itô chaotic
decomposition of $f(\eta)$ ($k=1,2,...$) (see (1.10) for the definition),
$\displaystyle\mathbb{E}[u_{k}(\eta)^{2}]\leq
k\left[\sup_{x\in{\mathbb{X}}}\mathbb{P}(x\in
Z(\eta))\right]\times\operatorname{{\mathbb{V}ar}}(f(\eta)).$ (1.5)
The bound (1.5) is a counterpart to [60, Theorem A.1] (of which [70, Theorem
1.8] is a special case), yielding the following: if $F(X)$ is a square-
integrable functional of a vector of independent random variables and $F$ is
determined by a discrete stopping set $S$, then, denoting by $F_{k}$ the $k$th
component in the Hoeffding-ANOVA decomposition of $F$ ($k=1,...,n$) (see e.g.
[71, Chapter 5]),
$\operatorname{{\mathbb{V}ar}}(F_{k}(X))\leq
k\left[\max_{i=1,...,n}\mathbb{P}(i\in
S)\right]\times\operatorname{{\mathbb{V}ar}}(F(X)).$ (1.6)
Observe that, if the random set $S$ is the collection of all coordinates of
$X$ revealed by a randomized algorithm, then $\mathbb{P}(i\in
S)=\mathbb{P}(i\mbox{ is revealed})$, but such an additional feature is not
required for (1.6) to hold. In Sections 8, Section 9 and Section 10, we will
show that (1.5) and its extensions are useful for establishing noise
sensitivity in continuum percolation. The crucial technical step for deriving
the original Schramm-Steif inequality (1.6) is a bound on the conditional
expectations of degenerate $U$-statistics (see [60, formula (12)]). In order
to prove (1.5), we will derive an analogous bound for multiple Wiener-Itô
integrals (Theorem 5.1) which is of independent interest.
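The Fourier-Walsh / Hoeffding-ANOVA levels appearing in (1.6) can be computed by brute force on small hypercubes. The following sketch (an illustration of the decomposition, not of the paper's proof; names are ours) computes the weights $W_{k}=\sum_{|S|=k}\hat{F}(S)^{2}$ of the $k$-th level of $F:\\{-1,1\\}^{n}\to{\mathbb{R}}$ under the uniform measure, and checks Parseval's identity for the majority function on three bits.

```python
from itertools import combinations, product
from math import prod

def walsh_chaos_weights(F, n):
    """Weights W_k = sum_{|S|=k} hatF(S)^2 of the k-th Fourier-Walsh level of
    F: {-1,1}^n -> R under the uniform measure, where
    hatF(S) = 2^{-n} sum_x F(x) prod_{i in S} x_i."""
    cube = list(product([-1, 1], repeat=n))
    weights = [0.0] * (n + 1)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            coef = sum(F(x) * prod(x[i] for i in S) for x in cube) / len(cube)
            weights[k] += coef ** 2
    return weights

# Majority of three bits: all the Fourier weight sits on levels 1 and 3,
# with W_1 = 3/4 and W_3 = 1/4, and sum_k W_k = E[F^2] = 1 (Parseval).
maj3 = lambda x: 1 if sum(x) > 0 else -1
w = walsh_chaos_weights(maj3, 3)
```

Replacing `maj3` by any other Boolean function yields its chaos profile, and (1.6) bounds each $W_{k}$ with $k\geq 1$ by $k\,\max_{i}\mathbb{P}(i\in S)\cdot\operatorname{{\mathbb{V}ar}}(F(X))$.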
###### Remark 1.1.
Let $F:\\{-1,1\\}^{n}\to\\{-1,1\\}$, and let $X=(X_{1},...,X_{n})$ be a
collection of i.i.d. Rademacher random variables with parameter $p$. One
remarkable consequence of [60] (see Corollary 1.2 therein) is the general
bound
${\bf Inf}_{max}(F)\geq\frac{\operatorname{{\mathbb{V}ar}}{F(X)}}{\Delta(T)},$
(1.7)
where
${\bf Inf}_{max}(F):=\max_{i=1,...,n}{\bf
Inf}_{i}(F):=4p(1-p)\max_{i=1,...,n}\mathbb{P}[F(X)\neq F(X^{(i)})],$
$X^{(i)}$ is the element of $\\{-1,1\\}^{n}$ obtained by switching the sign of
the $i$th coordinate of $X$, and $\Delta(T)$ is the average number of
coordinates visited by $T$. As mentioned above, the quantity ${\bf
Inf}_{i}(F)$ is defined as the influence of the $i$th coordinate on $F$. One
can derive an estimate with the same flavour starting from (1.3). Consider
indeed $f(\eta)$ with values in $\\{-1,1\\}$ and determined by the output $Z$
of a CTDT, and set
${\bf Inf}_{max}(f):=2\sup_{x\in{\mathbb{X}}}\,\mathbb{P}(f(\eta)\neq
f(\eta+\delta_{x})).$
Then, one infers from (1.3) that
$2{\bf
Inf}_{max}(f)\geq\frac{\operatorname{{\mathbb{V}ar}}(f(\eta))}{\mathbb{E}[\lambda(Z)]},$
(1.8)
where we have applied a Fubini argument. The full analogy with (1.7) is
obtained from (A.11), yielding the identity
$\mathbb{E}[\lambda(Z)]=\mathbb{E}[\eta(Z)]$, that is, the denominator on the
right-hand side of (1.8) coincides with the average number of Poisson points
that are revealed by the stopping set $Z$.
We will now provide an outline of the applications developed in Sections 8, 9
and 10.
#### 1.3 Overview of applications
In Sections 8, 9 and 10, we will develop three applications of our abstract
results to continuum percolations models, namely, to $k$-percolation in the
Poisson Boolean model (see e.g. [13] and the references therein), to confetti
percolation (see [37, 51, 15, 10, 4]) and to the planar Poisson Boolean model
with unbounded grains (see [4]). These are by no means exhaustive, and more
potential applications are described in Section 11.
General considerations. We first observe that the finite-dimensional nature of
the original OSSS and Schramm-Steif inequalities has not been — in general — a
major obstacle for applying them to random geometric models based on infinite-
dimensional inputs. Indeed, using suitable discretization schemes and some
extra technical work, these estimates have been successfully applied to
geometric models based on stationary Euclidean Poisson point processes, see
[2, 3, 25, 24, 26, 31, 46]. This fact notwithstanding, the applications
developed in the present paper indicate that our intrinsic approach is a
valuable new tool for establishing quantitative noise sensitivity, existence
of exceptional times and sharp phase transitions in (possibly dynamical)
continuum percolation models, under minimal assumptions and by using proof
strategies that only require one to prove non-degeneracy of some class of
functionals and decay of arm probabilities. Another important point
is that our estimates allow one to prove quantitative noise sensitivity in
percolation models for which discretization techniques would be difficult to
apply. One typical example of this situation is the confetti percolation
discussed in Section 9, whose dynamical nature makes discretization procedures
particularly challenging to implement (see e.g. [31]), and a similar remark
also applies for the Poisson Boolean model with unbounded grains as in Section
10. In general, while the discretization approach has been widely applied to
the detection of sharp thresholds, it has been exploited less intensively in
the proof of noise sensitivity (see [3] for some distinguished examples). We
believe that our intrinsic approach will also be helpful when dealing with
more general Poissonian percolation models such as those suggested in Section
11, as well as in the study of Poisson-Voronoi tessellations on the hyperbolic
plane [11] or the Poisson cylinder model [75].
Description of the models. The $k$-percolation model arises by placing i.i.d.
bounded grains (shapes) at each point in the support of a stationary Poisson
point process on ${\mathbb{R}}^{d},\,d\geq 2$ with intensity $\gamma$. The
$k$-covered region is the region covered by at least $k$ of these grains and
$k$-percolation refers to existence of an unbounded connected component in the
$k$-covered region. If $k=1$, we obtain the standard Poisson Boolean model.
The confetti percolation model is defined in terms of a space-time Poisson
point process. The Poisson points are independently coloured either in black
or white, with probability $p$ and $1-p$, respectively. The black and white
points carry i.i.d. bounded grains as well and can have different
distributions. A spatial location $x\in{\mathbb{R}}^{d}$ is covered in black
or white depending on the colour of the first grain that falls on $x$. The
black region is the region of black points and confetti percolation refers to
existence of an infinite component in the black region. Apart from
boundedness, we require that the grains are not too thin, i.e., that they
contain some ball with positive probability. We give more details on these two
models, as well as some further background literature, in the corresponding
sections. The varying parameter in the $k$-percolation model is $\gamma$,
whereas it is $p$ in the confetti percolation model.
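To fix ideas, here is a minimal sketch (ours, with hypothetical names) of the basic geometric primitive behind the $k$-percolation model: deciding whether a spatial location is covered by at least $k$ of the grains, here taken to be Euclidean balls.

```python
import math

def k_covered(point, centers, radii, k):
    """True iff `point` lies in at least k of the balls B(centers[j], radii[j]);
    the k-covered region of the Boolean model is the set of such points
    (k = 1 recovers the occupied region of the standard Boolean model)."""
    hits = sum(1 for c, r in zip(centers, radii) if math.dist(point, c) <= r)
    return hits >= k

# Two overlapping balls near the origin and one ball far away:
centers = [(0.0, 0.0), (0.1, 0.0), (2.0, 2.0)]
radii = [0.5, 0.5, 0.5]
```

In the actual model the centers form a stationary Poisson process of intensity $\gamma$ and the radii are i.i.d.; the function above then describes the pointwise coverage test defining the $k$-covered region.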
Overview of results: phase transitions. In both models, we first prove a sharp
phase transition in arbitrary dimensions (Theorems 8.2 and 9.1). This involves
showing exponential decay for the radius of the component containing the
origin in the sub-critical regime and the mean-field lower bound for the
radius in the super-critical regime. The proof uses the Poisson OSSS variance
inequality (1.3) and the necessary CTDTs are constructed via a technical
adaptation of the ideas of Duminil-Copin, Raoufi and Tassion [23] to the
continuum setting.
We now describe some consequences of the sharp phase transition result derived
above. In the $k$-percolation model, we show equality of the usual critical
intensity and of the critical intensities defined in terms of the expected
volume of the component containing the origin and of the non-triviality of
certain crossing probabilities introduced by Gouéré in [34] (see Theorem 8.1).
In the confetti percolation model, we prove that the critical parameter is
$1/2$ in a very general planar confetti percolation model in the general
setting of bounded grains satisfying $\pi/2$-rotation invariance and
invariance under reflection by coordinates (see Theorem 9.3). This result was
conjectured by Benjamini and Schramm [10, Problem 5] for the case of
deterministic balls. It was first proven in the case of deterministic boxes by
Hirsch [40] and later by Müller [56] in the case of deterministic balls and
then by Ghosh and Roy [31] in the case of bounded random boxes.
Overview of results: noise sensitivity. The Schramm-Steif inequality (1.6)
arose in the study of noise sensitivity for lattice percolation models.
Informally, noise sensitivity of a sequence of functions $f_{n}$ refers to the
phenomenon that under a small perturbation of the random input, the functions
become asymptotically independent or de-correlated. Apart from the natural
motivation coming from the study of random structures under perturbations,
there are varied reasons to study noise sensitivity coming from statistical
physics and computer science (see [28, Section 1.2], [29, Chapter 12]). By
considering the perturbation as induced by a Markov process, one may naturally
relate noise sensitivity to the study of functions of a dynamically evolving
random input. In percolation theory, these are known as dynamical percolation
models (see [36, 72]) where one studies percolation models such that the
random input is varied dynamically. In the case of Poisson-based models, we
consider the dynamics induced by the spatial birth-death process, i.e., the
existing points are deleted after exponential lifetimes and new points are
born independently and stay for exponential lifetimes. The corresponding
perturbation is to delete a small fraction of existing points and compensate
it by adding a small fraction of new points independently. From the Poisson
Schramm-Steif inequality (1.5) and Wiener-Itô chaos decomposition (1.10), we
can easily obtain that a sequence of $\\{0,1\\}$-valued functions $f_{n}$ is
noise sensitive if there exists a corresponding sequence of randomized
stopping sets $Z_{n}$ determining $f_{n}$ such that the supremum of the
revealment probabilities $\sup_{x\in{\mathbb{X}}}\mathbb{P}(x\in Z_{n})$
vanishes asymptotically. Quantitative noise sensitivity can be used to prove
existence of exceptional times in dynamical models as well as to give bounds
on the critical window for phase transitions.
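The $\varepsilon$-perturbation just described (delete each point independently with probability $\varepsilon$ and add an independent Poisson($\varepsilon\lambda$) cloud of fresh points) preserves the Poisson marginal, by standard thinning and superposition. The sketch below (a Monte Carlo illustration with names of our choosing) checks this on the total count of a process on $[0,1]$.

```python
import math
import random

def sample_poisson(mean, rng):
    """Poisson sample via Knuth's multiplication method (fine for small means)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def perturbed_counts(lam=20.0, eps=0.3, trials=4000, seed=1):
    """Empirical means of N = eta([0,1]) and of N_eps for the eps-perturbed
    process (independent thinning plus independent Poisson(eps*lam) rebirths);
    both should be close to lam, since the marginal law is preserved."""
    rng = random.Random(seed)
    m0 = m1 = 0.0
    for _ in range(trials):
        n = sample_poisson(lam, rng)
        kept = sum(1 for _ in range(n) if rng.random() > eps)  # survivors
        fresh = sample_poisson(eps * lam, rng)                 # new births
        m0 += n
        m1 += kept + fresh
    return m0 / trials, m1 / trials
```

Running the dynamics for time $t$ with unit death rate corresponds to $\varepsilon=1-e^{-t}$; noise sensitivity asks whether $f_{n}(\eta)$ and $f_{n}(\eta_{\varepsilon})$ de-correlate as $n\to\infty$ for fixed $\varepsilon>0$.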
In the two continuum percolation models ($k$-percolation in the Poisson
Boolean model and confetti percolation), we take $f_{n}$ to be the indicator
functions that there is a crossing of large boxes (say $[0,\kappa
n]\times[0,n]^{d-1}$ for $\kappa>0$) along the $x$-axis by occupied components
at criticality. We prove that noise sensitivity and exceptional times in
dynamical percolation for the above crossing events follow from the non-
degeneracy of the crossing probabilities and decay of one-arm probabilities
(Theorems 8.3 and 9.5). This helps us to settle a conjecture by Ahlberg,
Broman, Griffiths and Morris [2, Conjecture 9.1] regarding noise sensitivity
of the critical planar Poisson Boolean model with bounded random balls
straightforwardly using the recent one-arm probability estimates in Ahlberg,
Tassion and Teixeira [4], and also easily yields noise sensitivity of the
critical planar confetti percolation model where grains are balls with bounded
random radii (see Corollaries 8.5 and 9.7). Further, we also prove quantitative
noise sensitivity of the planar Poisson Boolean model with random balls whose
radius distribution satisfies a finite $(2+\alpha)$-moment condition (Theorem
10.1). This answers an open question in Ahlberg, Tassion and Teixeira [4]. (As
indicated in [2], a version of this result can in principle be obtained by
combining the BKS randomized algorithm method from [12] with the estimates of
[4] – but it is not clear that our near-optimal moment condition on the random
radii is preserved by this alternative strategy). Less quantitatively, we also
prove noise sensitivity of the critical planar confetti percolation with black
and white particles both having the same $\pi/2$-rotation invariant and
reflection invariant bounded grain distribution (Corollary 9.8).
#### 1.4 Organization of the paper
The paper is composed of three parts. Part I contains basic notions about
CTDTs (Section 2), the statement and proof of the OSSS inequality for Poisson
functionals (Section 3), and several useful variants (Section 4). Part II
focusses on the Schramm-Steif inequality (Section 5), its consequences for
quantitative noise sensitivity and noise stability (Section 6) and their
applications to exceptional times and sharp phase transition (Section 7). Part
III presents applications to $k$-percolation (Section 8), confetti percolation
(Section 9), noise sensitivity for the planar Boolean model with unbounded
balls (Section 10) and a discussion of further applications (Section 11).
Appendix A contains a general theory of stopping sets, that is used throughout
the paper, under minimal topological assumptions. Appendix B is devoted to
zero-one laws for CTDTs and non-attainable stopping sets.
The next Subsection 1.5 contains some basic definitions and facts about
Poisson processes. The reader is referred to the monographs [45, 49] for a
comprehensive discussion; see also [47].
#### 1.5 Some preliminaries on Poisson processes
Throughout the paper, we let $({\mathbb{X}},{\mathcal{X}})$ denote a Borel
space. We fix an increasing sequence $\\{B_{n}:n\geq 1\\}$ of sets in
$\mathcal{X}$ satisfying $B_{n}\uparrow{\mathbb{X}}$ and define
${\mathcal{X}}_{0}\subset{\mathcal{X}}$ as the system of all sets of the form
$B\cap B_{n}$ for some $B\in{\mathcal{X}}$ and $n\in{\mathbb{N}}$. (This is an
example of localizing ring as defined in [45].) Note that
$\sigma(\mathcal{X}_{0})={\mathcal{X}}$. A measure $\nu$ on
$({\mathbb{X}},{\mathcal{X}})$ which is finite on ${\mathcal{X}}_{0}$ is
called locally finite (note that the property of being locally finite depends
on the choice of the sequence $\\{B_{n}:n\geq 1\\}$).
Write ${\mathbb{N}}_{0}:=\\{0,1,2,...\\}$ and let
${\mathbf{N}}\equiv{\mathbf{N}}({\mathbb{X}})$ be the space of all measures on
${\mathbb{X}}$ which are ${\mathbb{N}}_{0}$-valued on ${\mathcal{X}}_{0}$ and
let ${\mathcal{N}}$ denote the smallest $\sigma$-field such that
$\mu\mapsto\mu(B)$ is measurable for all $B\in{\mathcal{X}}$. A point process
is a random element $\eta$ of ${\mathbf{N}}$, defined over some fixed
probability space $(\Omega,{\mathcal{F}},\mathbb{P})$. The intensity measure
of $\eta$ is the measure $\mathbb{E}[\eta]$ defined by
$\mathbb{E}[\eta](B):=\mathbb{E}[\eta(B)]$, $B\in{\mathcal{X}}$. By our
assumption on ${\mathbb{X}}$ every point process $\eta$ is proper, that is
$\eta=\sum_{n=1}^{\eta(\mathbb{X})}\delta_{Y_{n}}$, where $\delta_{y}$ is the
Dirac mass at $y$, and $\\{Y_{n}:n\geq 1\\}$ is a collection of random
elements with values in $\mathbb{X}$; see [49, Section 6.1] for more details.
Given a measure $\nu$ on ${\mathbb{X}}$ and $B\in\mathcal{X}$, we write
$\nu_{B}$ to indicate the trace measure $A\mapsto\nu(A\cap B)$. For
$\mu\in{\mathbf{N}}$ and $A\in{\mathcal{X}}$, we write $\mu\subset A$ if
$\mu(A^{c})=0$.
Fix a locally finite measure $\lambda$ on ${\mathbb{X}}$. By construction of
${\mathcal{X}}_{0}$, $\lambda$ is automatically $\sigma$-finite. (Sometimes it
is convenient to start with a $\sigma$-finite measure $\lambda$ and to choose
the $B_{n}$ such that $\lambda(B_{n})<\infty$.) A Poisson process with
intensity measure $\lambda$ is a point process $\eta$ enjoying the following
two properties: (i) for every $B\in\mathcal{X}$, the random variable $\eta(B)$
has a Poisson distribution with parameter $\lambda(B)$, and (ii) given
$m\in{\mathbb{N}}$ and disjoint sets $B_{1},...,B_{m}\in\mathcal{X}$, the
random variables $\eta(B_{1}),...,\eta(B_{m})$ are stochastically independent.
Property (ii) is often described as $\eta$ being completely independent or,
equivalently, independently scattered.
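Properties (i) and (ii) are easy to probe numerically. The sketch below (illustrative only; names are ours) simulates a homogeneous Poisson process on $[0,1]$ via exponential spacings and checks that the counts in the disjoint sets $[0,0.4)$ and $[0.4,1]$ have the Poisson means predicted by (i) and are empirically uncorrelated, as complete independence (ii) demands.

```python
import random

def poisson_counts_disjoint(lam=10.0, split=0.4, trials=6000, seed=2):
    """Simulates a homogeneous Poisson(lam) process eta on [0,1] and returns the
    empirical means of N1 = eta([0, split)), N2 = eta([split, 1]) and their
    empirical covariance. Property (i) gives E[N1] = lam*split and
    E[N2] = lam*(1-split); property (ii) gives Cov(N1, N2) = 0."""
    rng = random.Random(seed)
    s1 = s2 = s12 = 0.0
    for _ in range(trials):
        # points of eta: arrival times of a rate-lam renewal process with
        # exponential gaps, truncated at 1
        t, pts = 0.0, []
        while True:
            t += rng.expovariate(lam)
            if t > 1.0:
                break
            pts.append(t)
        n1 = sum(1 for x in pts if x < split)
        n2 = len(pts) - n1
        s1 += n1
        s2 += n2
        s12 += n1 * n2
    m1, m2 = s1 / trials, s2 / trials
    cov = s12 / trials - m1 * m2
    return m1, m2, cov
```

The same experiment with a non-Poisson point process (e.g. a fixed number of i.i.d. uniform points) produces visibly negative covariance, which is one way to see that complete independence genuinely characterizes the Poisson case.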
It is a well-known fact (see e.g. [49, Chapter 2]) that, if $\lambda$ is non-
atomic, then $\mathbb{P}$-a.s. every point in the support of $\eta$ is charged
with mass 1. In other words this means that
$\mathbb{P}(\eta\in{\mathbf{N}}_{s})=1$, where ${\mathbf{N}}_{s}$ is the set
of all $\mu\in{\mathbf{N}}$ with $\mu(\\{x\\})\leq 1$ for each
$x\in{\mathbb{X}}$. Given $\mu\in{\mathbf{N}}$ and $n\geq 1$, we denote by
$\mu^{(n)}$ the $n$th factorial measure associated with $\mu$, as defined in
[49, Chapter 4]. If $\mu\in{\mathbf{N}}_{s}$ then $\mu^{(n)}$ is the measure
on $({\mathbb{X}}^{n},\mathcal{X}^{n})$ obtained by removing from the support
of $\mu^{n}$ every point $(x_{1},...,x_{n})$ such that $x_{i}=x_{j}$ for some
$i\neq j$ (with $\mu^{(1)}=\mu$). We will often make use of the following
(multivariate) Mecke formula (see [49, Theorem 4.4]). If $\eta$ is a Poisson
process with intensity $\lambda$ then, for all $n\geq 1$ and all measurable
mappings $f\colon{\mathbb{X}}^{n}\times{\mathbf{N}}\to[0,\infty)$,
$\displaystyle\mathbb{E}$
$\displaystyle\left[\int_{{\mathbb{X}}^{n}}f(x_{1},...,x_{n},\eta)\,\eta^{(n)}(dx_{1},...,dx_{n})\right]$
(1.9)
$\displaystyle\qquad\qquad=\mathbb{E}\left[\int_{{\mathbb{X}}^{n}}f(x_{1},...,x_{n},\eta+\delta_{x_{1}}+\cdots+\delta_{x_{n}})\,\lambda^{n}(dx_{1},...,dx_{n})\right],$
where $\mathbb{E}$ denotes expectation with respect to $\mathbb{P}$. A further
crucial fact is that every measurable function
$f\colon{\mathbf{N}}\to{\mathbb{R}}$ such that
$\mathbb{E}[f(\eta)^{2}]<\infty$ admits a unique chaotic decomposition of the
form
$f(\eta)=\sum_{k=0}^{\infty}I_{k}(u_{k}),$ (1.10)
where $I_{0}(u_{0})=u_{0}:=\mathbb{E}[f(\eta)]$, $I_{k}$ indicates a multiple
Wiener-Itô integral of order $k$ with respect to the compensated measure
$\hat{\eta}:=\eta-\lambda$, the kernels $u_{k}\in L^{2}(\lambda^{k})$ ($k\geq
1$) are $\lambda^{k}$-a.e. symmetric, and the series converges in
$L^{2}(\mathbb{P})$. Multiple Wiener-Itô integrals are the exact equivalent of
homogeneous sums in the framework of random variables depending on an
independent Rademacher sequence (or, more generally, of Hoeffding-ANOVA
decompositions for functions of independent random vectors). We refer the
reader to [49, Chapters 12 and 18] and [62, Chapter 5] for more details. For
future use, we record here the classical orthonormality relation, valid for
all $k,m\geq 0$ and all pairs of a.e. symmetric kernels $u\in
L^{2}(\lambda^{k})$, $v\in L^{2}(\lambda^{m})$ (with the obvious
identification $L^{2}(\lambda^{0})={\mathbb{R}}$):
$\mathbb{E}[I_{k}(u)I_{m}(v)]={\mathbf{1}}\\{k=m\\}\,k!\,\langle
u,v\rangle_{L^{2}(\lambda^{k})}.$ (1.11)
Recall from [49, (18.20)] that, if $f:{\mathbf{N}}\to{\mathbb{R}}$ is as in
(1.10), then
$u_{k}(x_{1},\ldots,x_{k}):=\frac{1}{k!}\mathbb{E}[D^{k}_{x_{1},\ldots,x_{k}}f(\eta)],$
(1.12)
where $D^{k}$ is the iterated difference operator defined as
$D^{k}_{x_{1},\ldots,x_{k}}f(\eta):=D_{x_{1}}(D^{k-1}_{x_{2},\ldots,x_{k}}f(\eta))$
with $D_{x}$ being the add-one cost operator defined below (1.1). Applying
(1.10), (1.11) and (1.12) yields the relation
$\mathbb{E}[f(\eta)^{2}]=\sum_{k=0}^{\infty}k!\|u_{k}\|^{2}_{L^{2}(\lambda^{k})}=\mathbb{E}[f(\eta)]^{2}+\sum_{k=1}^{\infty}\int_{{\mathbb{X}}^{k}}\frac{1}{k!}\mathbb{E}[D^{k}_{x_{1},\ldots,x_{k}}f(\eta)]^{2}\lambda(\mathrm{d}x_{1})\ldots\lambda(\mathrm{d}x_{k}).$
(1.13)
Formula (1.13) will be used in Section 5.
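Both the Mecke equation (1.9) (with $n=1$) and the moment identity (1.13) admit simple Monte Carlo sanity checks. The sketch below is a toy setting not taken from the text: it assumes a homogeneous Poisson process on $[0,1]$ with intensity $3\cdot$Lebesgue and the hypothetical test functions $f(x,\mu)=x\,\mu([0,1])$ for (1.9) and $f(\mu)=\mu([0,1])$ for (1.13); in the latter case $D_{x}f\equiv 1$ and all higher-order differences vanish, so (1.13) reduces to the fact that a Poisson variable has variance equal to its mean.

```python
import math
import random

def sample_points(rate, rng):
    # N ~ Poisson(rate), then N i.i.d. uniform points on [0,1]
    u, n = rng.random(), 0
    pmf = cdf = math.exp(-rate)
    while cdf < u:
        n += 1
        pmf *= rate / n
        cdf += pmf
    return [rng.random() for _ in range(n)]

rng = random.Random(1)
rate, trials = 3.0, 200_000

mecke_lhs = 0.0      # E[ sum_{x in eta} x * eta([0,1]) ]
second_moment = 0.0  # E[ eta([0,1])^2 ]
for _ in range(trials):
    pts = sample_points(rate, rng)
    n = len(pts)
    mecke_lhs += sum(x * n for x in pts)
    second_moment += n * n
mecke_lhs /= trials
second_moment /= trials

# Mecke (n = 1) with f(x, mu) = x * mu([0,1]):
# RHS = rate * int_0^1 x * E[eta([0,1]) + 1] dx = rate * (rate + 1) / 2
mecke_rhs = rate * (rate + 1) / 2

# (1.13) with f(mu) = mu([0,1]): D_x f = 1, D^k f = 0 for k >= 2, hence
# E[f^2] = E[f]^2 + rate * 1^2
predicted_second_moment = rate ** 2 + rate
```

With `rate = 3.0` the closed-form values are $6$ and $12$, and the empirical averages should agree with them up to Monte Carlo error.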
## Part I The OSSS Inequality for Poisson functionals
Throughout this part, we work in the general setting of Section 1.5. We fix a
Poisson process $\eta$ with a locally finite intensity measure $\lambda$.
### 2 Continuous time decision trees
The following key definition relies on the concept of a stopping set,
discussed in Appendix A.
A family $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ of stopping sets is called a
continuous-time decision tree (CTDT) if $Z_{t}\in{\mathcal{X}}_{0}$ for
$t\in{\mathbb{R}}_{+}$, $\mathbb{E}[\lambda(Z_{0})]=0$ and if the following
properties are satisfied:
$\displaystyle Z_{s}$ $\displaystyle\subset Z_{t},\quad s\leq t,$ (2.1)
$\displaystyle Z_{t}$ $\displaystyle=\bigcap_{s>t}Z_{s},\quad
t\in{\mathbb{R}}_{+}.$ (2.2)
If $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ is a CTDT, we then define
$\displaystyle Z_{\infty}$
$\displaystyle:=\bigcup_{t\in{\mathbb{R}}_{+}}Z_{t},$ (2.3) $\displaystyle
Z_{t-}$ $\displaystyle:=\bigcup_{s<t}Z_{s},\quad t\in{\mathbb{R}}_{+},$ (2.4)
as well as $Z_{0-}:=\emptyset$. As monotone unions of stopping sets,
$Z_{\infty}$ and $Z_{t-}$ are also stopping sets; see Theorem A.6. Note that
$Z_{\infty}$ is not required to be $\mathcal{X}_{0}$-valued. The
aim of this section is to prove some technical results for CTDTs, laying the
ground for the main OSSS-type estimate stated in Theorem 3.2.
Given a CTDT $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ we define
$\displaystyle t(x,\mu):=\inf\\{t\in{\mathbb{R}}_{+}:x\in Z_{t}(\mu)\\},\quad
x\in{\mathbb{X}},\,\mu\in{\mathbf{N}},$ (2.5)
where $\inf\emptyset:=\infty$. Observe that, by (2.1) and (2.2) we have
$\displaystyle x\in Z_{t(x,\mu)}(\mu)\setminus
Z_{t(x,\mu)-}(\mu),\quad\text{if $t(x,\mu)<\infty$}.$ (2.6)
Furthermore we can replace ${\mathbb{R}}_{+}$ on the right-hand side of (2.5)
by the non-negative rational numbers. Hence we obtain from Proposition A.9
that
$\displaystyle
t(x,\eta)=t(x,\eta+\delta_{x}),\quad\mathbb{P}\text{-a.s.},\,\lambda\text{-a.e.\
$x$}.$ (2.7)
The following result is crucial.
###### Proposition 2.1.
Suppose that $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ is a CTDT and let $\eta$ and
$\eta^{\prime}$ be two independent Poisson processes with the same intensity
measure $\lambda$. Let
$g\colon[0,\infty]\times{\mathbf{N}}\to{\mathbb{R}}_{+}$ be measurable and
$x\in{\mathbb{X}}$. Then
$\displaystyle\mathbb{E}[g(t(x,\eta),\eta_{{\mathbb{X}}\setminus
Z_{t(x,\eta)}(\eta)}+\eta^{\prime}_{Z_{t(x,\eta)}(\eta)})]=\mathbb{E}[g(t(x,\eta),\eta^{\prime})].$
The proof of Proposition 2.1 requires the following (purely deterministic)
lemma.
###### Lemma 2.2.
Suppose that $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ is a CTDT. Then, $\mu\mapsto
Z_{x}(\mu):=Z_{t(x,\mu)}(\mu)$ is a stopping set for each $x\in{\mathbb{X}}$.
Moreover, for each $x\in{\mathbb{X}}$ there exist ${\mathcal{X}}_{0}$-valued
stopping sets $Z_{n}^{\prime}$, $n\in{\mathbb{N}}$, such that
$Z_{n}^{\prime}\uparrow Z_{x}$.
###### Proof.
We first need to settle measurability. It follows from the right-continuity
(2.2) and the graph-measurability of $Z_{t}$ that
$(y,t,\mu)\mapsto{\mathbf{1}}\\{y\in Z_{t}(\mu)\\}$ is measurable. Moreover,
for each $s\in{\mathbb{R}}_{+}$ and all
$(x,\mu)\in{\mathbb{X}}\times{\mathbf{N}}$ we have that $t(x,\mu)\leq s$ if
and only if $x\in Z_{s}(\mu)$. Therefore $(x,\mu)\mapsto t(x,\mu)$ is
measurable, and consequently $(x,y,\mu)\mapsto{\mathbf{1}}\\{y\in
Z_{t(x,\mu)}(\mu)\\}$ is a measurable mapping. Let us now fix
$x\in{\mathbb{X}}$. For $\mu\in{\mathbf{N}}$ we use the shorthand notation
$Z(\mu):=Z_{t(x,\mu)}(\mu)$. To prove that $Z$ is a stopping set, we check
(A.1). Let $\mu,\psi\in{\mathbf{N}}$ be such that $\psi(Z(\mu))=0$. We assert
that
$\displaystyle t(x,\mu_{Z(\mu)}+\psi)=t(x,\mu).$ (2.8)
Assume that $t(x,\mu)=\infty$ (implying $Z(\mu)=Z_{\infty}(\mu)$), which is
equivalent to $x\notin Z_{\infty}(\mu)$. By (A.1) this means that $x\notin
Z_{\infty}(\mu_{Z_{\infty}(\mu)}+\psi)$, so that
$t(x,\mu_{Z(\mu)}+\psi)=\infty$. Assume now that $t:=t(x,\mu)<\infty$ and
abbreviate $\nu:=\mu_{Z(\mu)}+\psi$. Since $x\in
Z_{t}(\mu)=Z_{t}(\mu_{Z_{t}(\mu)}+\psi)=Z_{t}(\nu)$ we observe that
$t(x,\nu)\leq t$. Assume now that $t>0$. Then,
$\displaystyle x\notin
Z_{t-}(\mu)=\bigcup_{s<t}Z_{s}(\mu)=\bigcup_{s<t}Z_{s}(\mu_{Z_{t}(\mu)}+\psi)=Z_{t-}(\nu).$
It follows that $t(x,\mu)=t(x,\nu)$, from which one deduces (2.8) and
consequently the following chain of equalities:
$\displaystyle Z(\mu_{Z(\mu)}+\psi)$
$\displaystyle=Z_{t(x,\mu_{Z(\mu)}+\psi)}(\mu_{Z(\mu)}+\psi)=Z_{t(x,\mu)}(\mu_{Z(\mu)}+\psi)$
$\displaystyle=Z_{t(x,\mu)}(\mu_{Z_{t(x,\mu)}(\mu)}+\psi).$
Using (A.1) once again we obtain that
$\displaystyle Z(\mu_{Z(\mu)}+\psi)=Z_{t(x,\mu)}(\mu)=Z(\mu).$
This proves that $Z$ is a stopping set. To prove the second assertion we take
$n\in{\mathbb{N}}$ and define
$\displaystyle t_{n}(x,\mu):=\inf\\{t\in{\mathbb{R}}_{+}:x\in Z_{t\wedge
n}(\mu)\\},\quad x\in{\mathbb{X}},\,\mu\in{\mathbf{N}}.$
Applying the previous result to the CTDT $\\{Z_{t\wedge
n}:t\in{\mathbb{R}}_{+}\\}$ we see that
$\displaystyle Z_{n}^{\prime}(\mu):=Z_{t_{n}(x,\mu)\wedge
n}(\mu),\quad\mu\in{\mathbf{N}},$
defines a stopping set $Z_{n}^{\prime}$ with values in ${\mathcal{X}}_{0}$. By
definition, we have that $t_{n}(x,\mu)=t(x,\mu)$ if $t(x,\mu)\leq n$ and
$t_{n}(x,\mu)=\infty$ otherwise. As a consequence, we obtain that
$t_{n}(x,\mu)\wedge n=t(x,\mu)\wedge n$ and therefore $Z_{n}^{\prime}\uparrow
Z$, as required. ∎
###### Proof of Proposition 2.1.
By Lemma 2.2, $\mu\mapsto Z(\mu):=Z_{t(x,\mu)}(\mu)$ is a stopping set
satisfying the assumptions of Theorem A.6. Using the independence of $\eta$
and $\eta^{\prime}$, one infers that
$\displaystyle\mathbb{E}\big{[}g(t(x,\eta),\eta_{{\mathbb{X}}\setminus
Z(\eta)}+\eta^{\prime}_{Z(\eta)})\big{]}=\int\mathbb{E}[g(t(x,\eta_{Z(\eta)}),\eta_{{\mathbb{X}}\setminus
Z(\eta)}+\mu_{Z(\eta_{Z(\eta)})})]\,\Pi_{\lambda}(d\mu),$ (2.9)
where $\Pi_{\lambda}$ denotes the distribution of $\eta$ and where we have
also used (2.8) and (A.1) with $\psi=0$. By virtue of (A.6), the right-hand
side of (2.9) equals
$\displaystyle\iint\mathbb{E}[g(t(x,\eta_{Z(\eta)}),\nu_{{\mathbb{X}}\setminus
Z(\eta)}+\mu_{Z(\eta_{Z(\eta)})})]$
$\displaystyle\,\Pi_{\lambda}(d\nu)\,\Pi_{\lambda}(d\mu)$
$\displaystyle=\int\mathbb{E}[g(t(x,\eta_{Z(\eta)}),\mu)]\,\Pi_{\lambda}(d\mu),$
where the equality comes from the complete independence property of a Poisson
process (and again (A.2)). By (2.8), the last term in the previous equality
coincides with the right-hand side of the asserted identity. ∎
The next statement is used in the proof of our main estimates.
###### Proposition 2.3.
Suppose that $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ is a CTDT. Let $\eta$ and
$\eta^{\prime}$ be two independent Poisson processes with intensity measure
$\lambda$. Let $m\in{\mathbb{N}}$ and
$g\colon{\mathbf{N}}^{m+1}\to{\mathbb{R}}_{+}$ be measurable. Then, for all
$t_{1},\ldots,t_{m}\in[0,\infty]$ with $t_{1}<\cdots<t_{m}$,
$\displaystyle\mathbb{E}[g(\eta_{Z_{t_{1}}(\eta)},\ldots,\eta_{Z_{t_{m}}(\eta)},\eta_{{\mathbb{X}}\setminus
Z_{t_{m}}(\eta)})]=\mathbb{E}[g(\eta_{Z_{t_{1}}(\eta)},\ldots,\eta_{Z_{t_{m}}(\eta)},\eta^{\prime}_{{\mathbb{X}}\setminus
Z_{t_{m}}(\eta)})].$ (2.10)
###### Proof.
We apply Theorem A.6 (or equivalently (A.9)) with $Z=Z_{t_{m}}$. By (A.1) we
have for each $i\in\\{1,\ldots,m\\}$ that
$Z_{t_{i}}(\eta)=Z_{t_{i}}(\eta_{Z_{t_{m}}})$. The result follows. ∎
### 3 The OSSS inequality for Poisson functionals
The main result of the section is an OSSS inequality for Poisson functionals,
stated in the forthcoming Theorem 3.2.
We will now list a set of assumptions and notational conventions. We fix a
locally finite measure $\lambda$ on ${\mathbb{X}}$, and assume moreover that
$\lambda$ is diffuse.
1. (a)
Let $\eta$ and $\eta^{\prime}$ be independent Poisson processes on
${\mathbb{X}}$ with intensity measure $\lambda$.
2. (b)
Let $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ be a CTDT. As before, set
$Z_{\infty}:=\cup_{t\in{\mathbb{R}}_{+}}Z_{t}$ and write $Z:=Z_{\infty}$. For
$t\in[0,\infty]$, we use the notation
$\displaystyle\zeta_{t}\equiv\zeta_{t}(\eta,\eta^{\prime}):=\eta_{Z(\eta)\setminus
Z_{t}(\eta)}+\eta^{\prime}_{Z_{t}(\eta)}+\eta^{\prime}_{{\mathbb{X}}\setminus
Z(\eta)};$ (3.1)
observe that, $\mathbb{P}$-a.s., $\zeta_{0}=\eta_{Z}+\eta^{\prime}_{{\mathbb{X}}\setminus Z}$.
3. (c)
Consider a mapping $f\colon{\mathbf{N}}\to{\mathbb{R}}$ such that
$\mathbb{E}|f(\eta)|<\infty$ and $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ is a CTDT
determining $f$, meaning that
$\displaystyle f(\mu)=f(\mu_{Z_{\infty}(\mu)}),\quad\mu\in{\mathbf{N}},$ (3.2)
and moreover
$\displaystyle
f(\zeta_{t})\overset{\mathbb{P}}{\longrightarrow}f(\eta^{\prime}),\quad\mbox{as}\,\,t\to\infty.$
(3.3)
4. (d)
Assume that
$\displaystyle\lambda(Z_{t}(\mu)\setminus
Z_{t-}(\mu))=0,\quad\mu\in{\mathbf{N}},\,\,t\in{\mathbb{R}}_{+}$ (3.4)
and
$\displaystyle\mathbb{P}(\text{$\eta(Z_{t}(\eta)\setminus Z_{t-}(\eta))\leq 1$
for all $t\in{\mathbb{R}}_{+}$})=1.$ (3.5)
###### Remark 3.1.
If $Z$ is ${\mathcal{X}}_{0}$-valued, then assumption (3.3) holds for any
function $f$. Indeed, in this case we have that $\zeta_{t}\to\zeta_{\infty}$
in the discrete topology. For analogous reasons, such an assumption also holds
if $f$ is supported by a set in ${\mathcal{X}}_{0}$.
A CTDT verifying (3.4) is said to be $\lambda$-continuous. In order to
simplify the notation, in the discussion to follow we will sometimes (but not
always) write $Z_{t}$, $Z$, …, instead of $Z_{t}(\eta),Z(\eta)$, and so on.
The next statement is one of the main results of the paper.
###### Theorem 3.2 (OSSS inequality for Poisson functionals).
Let assumptions (a)–(d) above prevail. Then,
$\displaystyle\mathbb{E}[|f(\eta)-f(\eta^{\prime})|]\leq 2\int\mathbb{P}(x\in
Z(\eta))\mathbb{E}[|D_{x}f(\eta)|]\,\lambda(dx),$ (3.6)
where $D_{x}f(\eta)=f(\eta+\delta_{x})-f(\eta)$ is the previously introduced
add-one cost operator at $x$.
A discrete-time version of the above theorem, which provides some insight into
this inequality, can be found in the first arXiv version of the paper.
###### Proof.
A direct application of (A.1) yields $Z(\eta_{Z}+\eta^{\prime}_{Z^{c}})=Z$. By
virtue of (3.2), (A.3), (3.3) and $\mathbb{E}[\lambda(Z_{0})]=0$, we have that
$f(\zeta_{0})=f(\eta)$, $\mathbb{P}$-a.s., and also
$\displaystyle
f(\eta)-f(\zeta_{t})\stackrel{{\scriptstyle\mathbb{P}}}{{\longrightarrow}}f(\eta)-f(\eta^{\prime})=f(\zeta_{0})-f(\zeta_{\infty}),\quad
t\to\infty.$ (3.7)
Since the sets $Z_{t}$ are ${\mathcal{X}}_{0}$-valued and $\eta,\eta^{\prime}$
are locally finite, it follows from (2.1) and (2.2) that the function
$t\mapsto\zeta_{t}$ is right-continuous (w.r.t. the discrete metric) and that
it has left-hand limits on $(0,\infty)$, given by
$\displaystyle\zeta_{t-}=\eta_{Z(\eta)\setminus
Z_{t-}(\eta)}+\eta^{\prime}_{Z_{t-}(\eta)}+\eta^{\prime}_{{\mathbb{X}}\setminus
Z(\eta)}.$ (3.8)
Since $\zeta_{t-}\neq\zeta_{t}$ implies that $\eta(Z_{t}(\eta)\setminus
Z_{t-}(\eta))+\eta^{\prime}(Z_{t}(\eta)\setminus Z_{t-}(\eta))>0$ we see that
the set $\\{t>0:\zeta_{t-}\neq\zeta_{t}\\}$ is contained in
$\displaystyle
A:=\\{t(x,\eta):x\in\operatorname{supp}\eta\cup\operatorname{supp}\eta^{\prime},t(x,\eta)<\infty\\},$
where we have used the notation (2.5). The set $A$ is locally finite and
$t\mapsto\zeta_{t}$ is constant on any connected component of
$[0,\infty)\setminus A$. Using (3.7), we infer that
$f(\zeta_{0})-f(\zeta_{\infty})$ is the limit in probability, as $T\to\infty$,
of $f(\zeta_{0})-f(\zeta_{T})=-\sum_{t\leq
T}\big{(}f(\zeta_{t})-f(\zeta_{t-})\big{)}$. Selecting a sequence
$T_{n}\to\infty$ such that the above convergence takes place $\mathbb{P}$-a.s.
and applying the triangle inequality yields the bound
$\displaystyle\mathbb{E}[|f(\zeta_{0})-f(\zeta_{\infty})|]\leq\mathbb{E}\bigg{[}\sum_{t\in
A}|f(\zeta_{t})-f(\zeta_{t-})|\bigg{]}\leq J+J^{\prime},$ (3.9)
where
$\displaystyle J$
$\displaystyle:=\mathbb{E}\bigg{[}\int{\mathbf{1}}\\{t(x,\eta)<\infty\\}{\mathbf{1}}\\{\eta(Z_{t(x,\eta)}(\eta)\setminus
Z_{t(x,\eta)-}(\eta))=1\\}|f(\zeta_{t(x,\eta)})-f(\zeta_{t(x,\eta)-})|\,\eta(dx)\bigg{]},$
$\displaystyle J^{\prime}$
$\displaystyle:=\mathbb{E}\bigg{[}\int{\mathbf{1}}\\{t(x,\eta)<\infty\\}{\mathbf{1}}\\{\eta(Z_{t(x,\eta)}(\eta)\setminus
Z_{t(x,\eta)-}(\eta))=0\\}|f(\zeta_{t(x,\eta)})-f(\zeta_{t(x,\eta)-})|\,\eta^{\prime}(dx)\bigg{]}$
and where we have used (3.5) in the expression of $J$. By independence of
$\eta$ and $\eta^{\prime}$ and the Mecke equation (1.9),
$\displaystyle J=\int\mathbb{E}[{\mathbf{1}}\\{t(x,\eta_{x})<\infty\\}$
$\displaystyle{\mathbf{1}}\\{\eta_{x}(Z_{t(x,\eta)}(\eta_{x})\setminus
Z_{t(x,\eta)-}(\eta_{x}))=1\\}$
$\displaystyle\times|f(\zeta_{t(x,\eta)}(\eta_{x},\eta^{\prime}))-f(\zeta_{t(x,\eta_{x})-}(\eta_{x},\eta^{\prime}))|]\,\lambda(dx),$
where $\mu_{x}:=\mu+\delta_{x}$ for $\mu\in{\mathbf{N}}$ and
$x\in{\mathbb{X}}$ and where we have used (2.7). Similarly,
$\displaystyle J^{\prime}=\int\mathbb{E}[{\mathbf{1}}\\{t(x,\eta)<\infty,$
$\displaystyle\eta(Z_{t(x,\eta)}(\eta)\setminus Z_{t(x,\eta)-}(\eta))=0\\}$
$\displaystyle\times|f(\zeta_{t(x,\eta)}(\eta,\eta^{\prime}_{x}))-f(\zeta_{t(x,\eta)-}(\eta,\eta^{\prime}_{x}))|]\,\lambda(dx).$
Fix $x\in{\mathbb{X}}$. By (3.4) and the independence of $\eta$ and
$\eta^{\prime}$ we have that, $\mathbb{P}$-a.s.,
$\displaystyle\eta^{\prime}(Z_{t(x,\eta)}(\eta)\setminus
Z_{t(x,\eta)-}(\eta))=\eta^{\prime}(Z_{t(x,\eta)}(\eta_{x})\setminus
Z_{t(x,\eta)-}(\eta_{x}))=0\quad\text{if $t(x,\eta)<\infty$}.$ (3.10)
If $\eta_{x}(Z_{t(x,\eta)}(\eta_{x})\setminus Z_{t(x,\eta)-}(\eta_{x}))=1$
then (2.6) shows that $\eta(Z_{t(x,\eta)}(\eta_{x})\setminus
Z_{t(x,\eta)-}(\eta_{x}))=0$. Therefore we obtain from (3.10) that,
$\mathbb{P}$-a.s.,
$\displaystyle\zeta_{t(x,\eta)-}(\eta_{x},\eta^{\prime})$
$\displaystyle=\zeta_{t(x,\eta)}(\eta_{x},\eta^{\prime})+\delta_{x}\quad\text{if
$t(x,\eta)<\infty$}.$
By (2.6),
$\displaystyle\zeta_{t(x,\eta)}(\eta,\eta^{\prime}_{x})$
$\displaystyle=\zeta_{t(x,\eta)}(\eta,\eta^{\prime})+\delta_{x}.$
Furthermore, if $t(x,\eta)<\infty$ we obtain from (3.10) that
$\displaystyle\zeta_{t(x,\eta)-}(\eta,\eta^{\prime}_{x})$
$\displaystyle=\zeta_{t(x,\eta)}(\eta,\eta^{\prime}).$
Therefore,
$\displaystyle\max(J,J^{\prime})$
$\displaystyle\leq\mathbb{E}\bigg{[}\int{\mathbf{1}}\\{t(x,\eta_{x})<\infty\\}|f(\zeta_{t(x,\eta)}(\eta_{x},\eta^{\prime})+\delta_{x})-f(\zeta_{t(x,\eta)}(\eta_{x},\eta^{\prime}))|\,\lambda(dx)\bigg{]}.$
To finish the proof we use the identity
$\displaystyle\mathbb{P}(\zeta_{t(x,\eta)}\in\cdot\mid t(x,\eta))$
$\displaystyle=\mathbb{P}(\eta\in\cdot),\quad\mathbb{P}\text{-a.s.\ on
$\\{t(x,\eta)<\infty\\}$},$ (3.11)
which holds for all $x\in{\mathbb{X}}$. To see this, we fix $x\in{\mathbb{X}}$
and consider a measurable function
$g\colon{\mathbb{R}}_{+}\times{\mathbf{N}}\to{\mathbb{R}}_{+}$. Let
$\eta^{\prime\prime}$ be a copy of $\eta$, independent of
$(\eta,\eta^{\prime})$ and set $T:=t(x,\eta)$. Since $\eta$ is independently
scattered, we have that
$\displaystyle\mathbb{E}[{\mathbf{1}}\\{T<\infty\\}g(T,\zeta_{T})]=\mathbb{E}[{\mathbf{1}}\\{T<\infty\\}g(T,\eta_{Z(\eta)\setminus
Z_{T}(\eta)}+\eta^{\prime}_{Z_{T}(\eta)}+\eta^{\prime\prime}_{{\mathbb{X}}\setminus
Z(\eta)})].$
By Proposition 2.3 the conditional distribution of
$\eta_{{\mathbb{X}}\setminus Z}$ given $\\{\eta_{Z_{t}}:t\in[0,\infty]\\}$
is that of $\eta^{\prime\prime}_{{\mathbb{X}}\setminus Z(\eta)}$. By the
continuity properties of a CTDT, $(T,\eta_{Z(\eta)\setminus
Z_{T}(\eta)},Z_{T}(\eta))$ is measurable with respect to
$\sigma\\{\eta_{Z_{t}}:{t\in[0,\infty]}\\}$. As a consequence,
$\displaystyle\mathbb{E}[{\mathbf{1}}\\{T<\infty\\}g(T,\zeta_{T})]=\mathbb{E}[{\mathbf{1}}\\{T<\infty\\}g(T,\eta_{{\mathbb{X}}\setminus
Z_{T}(\eta)}+\eta^{\prime}_{Z_{T}(\eta)})].$
In view of Proposition 2.1 this implies (3.11). From (3.11) we obtain by
conditioning that
$\displaystyle\max(J,J^{\prime})\leq\int\mathbb{P}(t(x,\eta)<\infty)\mathbb{E}[|f(\eta+\delta_{x})-f(\eta)|]\,\lambda(dx).$
Since $t(x,\eta)<\infty$ if and only if $x\in Z_{\infty}$, the asserted
inequality (3.6) now follows from (3.7) and (3.9). ∎
###### Remark 3.3.
Assume that ${\mathbb{X}}$ is a Borel subset of a complete separable metric
space with metric $\rho$ equipped with the Borel $\sigma$-field
${\mathcal{X}}$. Let $\lambda$ be a diffuse measure on ${\mathbb{X}}$ which is
finite on (metrically) bounded Borel sets. Fix $x_{0}\in{\mathbb{X}}$ and let
$Z_{t}:=\\{x\in{\mathbb{X}}:\rho(x_{0},x)\leq t\\}$, $t\geq 0$. Then
$\\{Z_{t}:t\geq 0\\}$ is a CTDT which does not depend on $\mu\in{\mathbf{N}}$.
If, moreover, $\lambda(\partial Z_{t})=0$, for each $t\geq 0$, then it is easy
to see that (3.4) and (3.5) hold. If $f\colon{\mathbf{N}}\to{\mathbb{R}}$
satisfies $\mathbb{E}[|f(\eta)|]<\infty$ and $f(\eta_{{\mathbb{X}}\setminus
Z_{t}}+\eta^{\prime}_{Z_{t}})\overset{\mathbb{P}}{\longrightarrow}f(\eta)$ as
$t\to\infty$ (which is always true if $\lambda({\mathbb{X}})<\infty$), then
Theorem 3.2 implies that
$\displaystyle\mathbb{E}[|f(\eta)-f(\eta^{\prime})|]\leq
2\int\mathbb{E}[|D_{x}f(\eta)|]\,\lambda(dx).$
In fact, the proof is much simpler in this case.
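As a closed-form consistency check (not taken from the text), one can evaluate both sides of the last display for the hypothetical indicator functional $f(\mu)={\mathbf{1}}\\{\mu(W)=0\\}$ with $a:=\lambda(W)<\infty$: then $f(\eta)$ and $f(\eta^{\prime})$ are i.i.d. Bernoulli$(e^{-a})$, so $\mathbb{E}[|f(\eta)-f(\eta^{\prime})|]=2e^{-a}(1-e^{-a})$, while $|D_{x}f|={\mathbf{1}}\\{x\in W\\}f$ makes the right-hand side equal to $2ae^{-a}$; the inequality then amounts to $1-e^{-a}\leq a$.

```python
import math

def lhs(a):
    # E|f(eta) - f(eta')| for f = 1{eta(W) = 0} with lambda(W) = a:
    # f(eta) and f(eta') are i.i.d. Bernoulli(e^{-a})
    p = math.exp(-a)
    return 2 * p * (1 - p)

def rhs(a):
    # 2 * int_W E|D_x f| lambda(dx), using |D_x f| = 1{x in W} f
    return 2 * a * math.exp(-a)

# the bound lhs <= rhs should hold for every finite intensity mass a
checks = [(a, lhs(a), rhs(a)) for a in (0.1, 0.5, 1.0, 5.0, 20.0)]
```

The comparison also shows the bound is tight to first order as $a\to 0$, since both sides behave like $2a$.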
###### Remark 3.4.
We will prove in Appendix B that there exist stopping sets $Z$ that are not
$\lambda$-attainable, that is, such that $Z\neq\cup_{t\geq 0}Z_{t}$ for any
$\lambda$-continuous CTDT $\\{Z_{t}\\}$. Whether the OSSS inequality (3.6)
continues to hold for non-attainable stopping sets is a challenging problem
that we leave open for future research.
###### Remark 3.5.
It might be desirable to start the CTDT with a given non-trivial stopping set
$Z_{0}$. So assume that $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ has all properties
of a CTDT except for $\mathbb{E}[\lambda(Z_{0})]=0$. Let the function $f$
satisfy the assumptions of Theorem 3.2 and let $\eta^{\prime}$ be an
independent copy of $\eta$. We assert that
$\displaystyle\mathbb{E}[|f(\eta)-f(\eta^{\prime})|]$
$\displaystyle\leq\mathbb{E}[|f(\eta)-f(\eta_{Z\setminus
Z_{0}}+\eta^{\prime}_{Z_{0}})|]$ $\displaystyle\qquad+2\int\mathbb{P}(x\in
Z(\eta)\setminus Z_{0}(\eta))\mathbb{E}[|D_{x}f(\eta)|]\,\lambda(dx).$ (3.12)
To see this, we define $\zeta_{t}$, $t\in[0,\infty]$, by (3.1) and set
$\zeta_{0-}:=\eta_{Z}+\eta_{{\mathbb{X}}\setminus Z}$. Then
$\displaystyle\mathbb{E}[|f(\eta)-f(\eta^{\prime})|]=\mathbb{E}[|f(\zeta_{0-})-f(\zeta_{\infty})|]\leq\mathbb{E}[|f(\zeta_{0-})-f(\zeta_{0})|]+\mathbb{E}[|f(\zeta_{0})-f(\zeta_{\infty})|].$
Here the second term can be treated as before, while the first term can be
seen to equal $\mathbb{E}[|f(\eta_{Z\setminus
Z_{0}}+\eta^{\prime}_{Z_{0}})-f(\eta)|]$ (further details on this point are
left to the reader).
The Poincaré inequality is sharp for linear functionals of $\eta$. The
following example shows that the OSSS inequality is sharp also for other
functionals.
###### Example 1.
Assume that ${\mathbb{X}}={\mathbb{R}}^{d}$ and that $\lambda$ is the Lebesgue
measure $\lambda_{d}$. Let $W\subset{\mathbb{R}}^{d}$ be compact and
$x_{0}\in{\mathbb{R}}^{d}$. Define $\tau\colon{\mathbf{N}}\to[0,\infty]$ by
$\displaystyle\tau(\mu):=\inf\\{s\geq 0:\mu(W\cap B(x_{0},s))>0\\},$
where $B(x_{0},t)$ is the ball with radius $t$ centred at $x_{0}$. We assert
that
$\displaystyle Z_{t}(\mu):=B(x_{0},\tau(\mu)\wedge t),\quad
t\in{\mathbb{R}}_{+},\,\mu\in{\mathbf{N}},$
is a CTDT. To see this we first note that, given $\mu\in{\mathbf{N}}$ and
$s\in{\mathbb{R}}_{+}$, the conditions $\tau(\mu)>s$ and
$\mu(B(x_{0},s)\cap W)=0$ are equivalent. In particular $\tau$ is measurable
so that $Z_{t}$ is graph-measurable for each $t\in{\mathbb{R}}_{+}$. To check
(A.1) for $Z=Z_{t}$ we need to show that
$\displaystyle\tau(\mu)\wedge t=\tau(\mu_{B(x_{0},\tau(\mu)\wedge
t)}+\psi)\wedge t,\quad\mu\in{\mathbf{N}}$
provided that $\psi(B(x_{0},\tau(\mu)\wedge t))=0$. This can be directly
checked. Define $f\colon{\mathbf{N}}\to{\mathbb{R}}$ by
$f(\mu):={\mathbf{1}}\\{\mu(W)=0\\}$. Standard arguments yield that
$\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ and $f$ satisfy the assumptions of Theorem
3.2. We note that
$\displaystyle f(\mu+\delta_{x})-f(\mu)$
$\displaystyle={\mathbf{1}}\\{\mu(W)=0,x\notin
W\\}-{\mathbf{1}}\\{\mu(W)=0\\}$ (3.13) $\displaystyle=-{\mathbf{1}}\\{x\in
W,\mu(W)=0\\}=-{\mathbf{1}}\\{x\in W\\}f(\mu).$
Further we have $x\in Z_{\infty}(\mu)$ iff $\mu(W\cap
B^{0}(x_{0},\|x-x_{0}\|))=0$, where $B^{0}(x_{0},t)$ is the interior of
$B(x_{0},t)$. Therefore (3.6) means that
$\displaystyle\operatorname{{\mathbb{V}ar}}[f(\eta)]\leq
e^{-\lambda_{d}(W)}\int_{W}\exp[-\lambda_{d}(W\cap
B(x_{0},\|x-x_{0}\|))]\,dx.$ (3.14)
To compare this with the exact variance
$e^{-\lambda_{d}(W)}\big{(}1-e^{-\lambda_{d}(W)}\big{)}$ of $f(\eta)$ we
assume that $W-x_{0}$ is isotropic, that is, invariant under rotations. In
fact, we can then assume $x_{0}=0$ without loss of generality. Let
$I\subset{\mathbb{R}}_{+}$ be a Borel set such that $x\in W$ iff $\|x\|\in$
I$. Using polar coordinates, we have for each $x\in{\mathbb{R}}^{d}$ that
$\displaystyle\lambda_{d}(W\cap B(0,\|x\|))=d\kappa_{d}\int{\mathbf{1}}\\{s\in
I,s\leq\|x\|\\}s^{d-1}\,ds=\int{\mathbf{1}}\\{s\leq\|x\|\\}\,\nu(ds)$
where $\kappa_{d}:=\lambda_{d}(B(0,1))$ is the volume of the unit ball and
$\nu$ is the measure on ${\mathbb{R}}_{+}$ given by
$\nu(ds):=d\kappa_{d}{\mathbf{1}}\\{s\in I\\}s^{d-1}\,ds$. Using polar
coordinates once again we see that the right-hand side of (3.14) equals
$\displaystyle e^{-\lambda_{d}(W)}\int
e^{-\nu([0,r])}\,\nu(dr)=e^{-\lambda_{d}(W)}\big{(}1-e^{-\nu({\mathbb{R}}_{+})}\big{)}=e^{-\lambda_{d}(W)}\big{(}1-e^{-\lambda_{d}(W)}\big{)},$
where we have used a well-known formula of Lebesgue–Stieltjes calculus, see
e.g. [49, Proposition A.31]. Hence (3.6) is sharp in this case. In view of
(3.13), one sees immediately that the right-hand side of the Poincaré
inequality (1.1) equals $\lambda_{d}(W)e^{-\lambda_{d}(W)}$, which is seen to
be suboptimal with respect to the exact variance by letting
$\lambda_{d}(W)\to\infty$. It is also interesting to notice that, since
$D_{x}f(\mu)=-{\mathbf{1}}\\{x\in W\\}f(\mu)\leq 0$ and therefore
$D_{y}D_{x}f(\mu)={\mathbf{1}}\\{x,y\in W\\}f(\mu)\geq 0$, then the variance
of $f(\eta)$ can be bounded by using the restricted $L^{1}$-$L^{2}$ inequality
proved in [57, Theorem 1.6]. Such a result yields indeed that
$\operatorname{{\mathbb{V}ar}}[f(\eta)]\leq\frac{1}{2}\int_{W}\frac{\mathbb{E}[(D_{x}f(\eta))^{2}]}{1+\log\frac{1}{\mathbb{E}[|D_{x}f(\eta)|]^{1/2}}}\lambda_{d}(dx)=\left(1\\!-\\!\frac{2}{2\\!+\\!\lambda_{d}(W)}\right)e^{-\lambda_{d}(W)},$
an estimate that also improves – albeit not as sharply – the Poincaré
inequality.
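The sharpness computation can also be reproduced numerically. The following sketch fixes the hypothetical choices $d=1$, $x_{0}=0$ and $W=[-r,r]$, so that $\lambda_{1}(W)=2r$ and $\lambda_{1}(W\cap B(0,|x|))=2|x|$ on $W$, and evaluates the right-hand side of (3.14) by midpoint quadrature, comparing it with the exact variance $e^{-2r}(1-e^{-2r})$ of $f(\eta)={\mathbf{1}}\\{\eta(W)=0\\}$.

```python
import math

r = 0.7            # W = [-r, r] in dimension d = 1, centred at x0 = 0
n = 100_000        # number of midpoint-quadrature cells
h = 2 * r / n

# right-hand side of (3.14):
# e^{-lambda_1(W)} * int_W exp[-lambda_1(W ∩ B(0,|x|))] dx, with the
# inner measure equal to 2|x| for |x| <= r
rhs = 0.0
for i in range(n):
    x = -r + (i + 0.5) * h
    rhs += math.exp(-2 * abs(x)) * h
rhs *= math.exp(-2 * r)

# exact variance of f(eta) = 1{eta(W) = 0}: Var = p(1 - p) with p = e^{-2r}
exact = math.exp(-2 * r) * (1 - math.exp(-2 * r))
```

Up to quadrature error the two quantities coincide, matching the exact identity derived above via Lebesgue–Stieltjes calculus.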
### 4 Variants of the OSSS inequality
In this section, we present some variants and extensions of the OSSS
inequality for Poisson functionals stated in Theorem 3.2. Firstly, in the
spirit of [60, Theorem B.1], we extend Theorem 3.2 to two functions and to the
case of a randomized CTDT (Corollary 4.1 below). The two function version has
recently been used by [41] to derive critical exponent inequalities for random
cluster models on transitive graphs. We will then present a variant of the
OSSS inequality for marked Poisson point processes (see Theorem 4.2) – a
variant of our main findings that will be very useful for the applications
in Part III. For the rest of the section, $\eta$ is a Poisson process on
${\mathbb{X}}$, with a locally finite and diffuse intensity measure $\lambda$.
Let $(\mathbb{Y},\mathcal{Y})$ be a measurable space and suppose that $Z^{y}$
is, for $\mathbb{P}(Y\in\cdot)$-a.e. $y$, a stopping set, such that the
mapping $(\mu,x,y)\mapsto{\mathbf{1}}\\{x\in Z^{y}(\mu)\\}$ is (jointly)
measurable on ${\mathbf{N}}\times{\mathbb{X}}\times{\mathbb{Y}}$. If $Y$ is an
independent $\mathbb{Y}$-valued random element, then $Z^{Y}$ is called a
randomized stopping set. If $\\{Z_{t}^{y}:t\in{\mathbb{R}}_{+}\\}$ is a CTDT
for all $y\in{\mathbb{Y}}$ such that the above measurability properties are
satisfied, then $\\{Z_{t}^{Y}:t\in{\mathbb{R}}_{+}\\}$ is called a randomized
continuous time decision tree (randomized CTDT). In this case, we say that
$\\{Z_{t}^{Y}:t\in{\mathbb{R}}_{+}\\}$ is a randomized CTDT determining
$f\colon{\mathbf{N}}\to{\mathbb{R}}$ if $\\{Z_{t}^{y}:t\in{\mathbb{R}}_{+}\\}$
is a CTDT determining $f$ for all $y\in{\mathbb{Y}}$, where the property of
being a CTDT determining a given function $f$ corresponds to the two
properties (3.2)–(3.3) introduced above.
###### Corollary 4.1.
Let $f\colon{\mathbf{N}}\to[-1,1]$ and $g\colon{\mathbf{N}}\to{\mathbb{R}}$ be
measurable and such that $\mathbb{E}[|g(\eta)|]<\infty$. Let $Y$ be an
independent $\mathbb{Y}$-valued random element. Let
$\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}=\\{Z_{t}^{Y}:t\in{\mathbb{R}}_{+}\\}$ be a
randomized CTDT determining $f$ such that, for $\mathbb{P}(Y\in\cdot)$-a.e.
$y\in{\mathbb{Y}}$, $\\{Z_{t}^{y}\\}$ satisfies (3.4) and (3.5). Let $g$ also
satisfy (3.3). Then,
$\displaystyle|\operatorname{{\mathbb{C}ov}}(f,g)|\leq 2\int\mathbb{P}(x\in
Z(\eta))\mathbb{E}[|D_{x}g(\eta)|]\,\lambda(dx),$ (4.1)
where $Z(\eta):=Z_{\infty}(\eta)=Z^{Y}_{\infty}(\eta)$, and therefore
$\displaystyle\operatorname{{\mathbb{V}ar}}(f)\leq 2\int\mathbb{P}(x\in
Z(\eta))\mathbb{E}[|D_{x}f(\eta)|]\,\lambda(dx).$ (4.2)
###### Proof.
We first observe that the specific form of the two inequalities in the
statement implies that it suffices to prove them for a generic CTDT
determining $f$ and satisfying (3.4) and (3.5); one can then apply the
obtained estimate to $\\{Z_{t}^{y}\\}$, for $\mathbb{P}(Y\in\cdot)$-a.e. $y$,
and take expectations with respect to $Y$. Thus, we will now prove the
statement for a CTDT $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ determining $f$ and
such that (3.4) and (3.5) are verified. Let $\eta^{\prime}$ be an independent
copy of $\eta$. We will again use the definition of $\zeta_{t}$ given in
(3.1). Since $f(\eta)=f(\zeta_{0})$ (as $\\{Z_{t}\\}$ is a CTDT determining $f$),
$\zeta_{0}\overset{d}{=}\eta$ and $\zeta_{\infty}=\eta^{\prime}$, we have that
$f(\eta)g(\eta)\overset{d}{=}f(\zeta_{0})g(\zeta_{0}),\,f(\eta)g(\eta^{\prime})=f(\zeta_{0})g(\zeta_{\infty}).$
Thus we obtain that
$|\operatorname{{\mathbb{C}ov}}(f,g)|=|\mathbb{E}[f(\eta)(g(\eta)-g(\eta^{\prime}))]|=|\mathbb{E}[f(\zeta_{0})(g(\zeta_{0})-g(\zeta_{\infty}))]|\leq\mathbb{E}[|g(\zeta_{0})-g(\zeta_{\infty})|],$
where we have used that $|f|\leq 1$ in the last inequality. Observe that
$\mathbb{E}[|g(\zeta_{0})-g(\zeta_{\infty})|]$ is the same as the LHS of (3.9)
with $g$ replacing $f$. Now, the rest of the proof follows the same lines as
that of Theorem 3.2 once it is observed that, after having established (3.9)
in such a proof using the fact that $g$ also satisfies (3.3), the fact that
$\\{Z_{t}\\}$ is a CTDT determining $f$ is never used, and only the stopping
set properties of $Z_{t}$ are exploited. ∎
We stress that – as opposed to the main OSSS inequality (3.6) – relation (4.2)
is derived under the assumption that $|f|\leq 1$, and is consequently not
homogeneous in $f$.
We will now generalize Theorem 3.2 to marked Poisson processes. We let
$({\mathbb{M}},\mathcal{M})$ be another Borel space and consider the product
${\mathbb{X}}\times{\mathbb{M}}$. As in Subsection 1.5 we take
$B_{n}\in{\mathcal{X}}$, $n\in{\mathbb{N}}$, satisfying
$B_{n}\uparrow{\mathbb{X}}$. However, this time we define the localizing
subring of ${\mathcal{X}}\otimes\mathcal{M}$ as the system of all sets of the
form $B\cap(B_{n}\times{\mathbb{M}})$ for some
$B\in{\mathcal{X}}\otimes\mathcal{M}$ and $n\in{\mathbb{N}}$. A measure $\nu$
on $({\mathbb{X}}\times{\mathbb{M}},{\mathcal{X}}\otimes\mathcal{M})$ which is
finite on this localizing subring is called locally finite. In that case
$\nu(B_{n}\times{\mathbb{M}})<\infty$ for each $n\in{\mathbb{N}}$.
Let $\lambda$ be a locally finite measure on ${\mathbb{X}}\times{\mathbb{M}}$
such that $\bar{\lambda}:=\lambda(\cdot\times{\mathbb{M}})$ is diffuse and let
$\eta$ be a Poisson process with intensity measure $\lambda$. According to
Remark A.7, in such a framework a stopping set (on ${\mathbb{X}}$) is a graph-
measurable mapping
$Z\colon{\mathbf{N}}({\mathbb{X}}\times{\mathbb{M}})\to{\mathcal{X}}$
satisfying (A.1). A CTDT on ${\mathbb{X}}$ is a family
$\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ of stopping sets on ${\mathbb{X}}$ having
the properties (2.1)–(2.2); we stress once again that, in the present
framework, each $Z_{t}$ is a $\mathcal{X}$-valued mapping defined on
${\mathbf{N}}({\mathbb{X}}\times{\mathbb{M}})$. A CTDT
$\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ is said to determine a given measurable
mapping $f\colon{\mathbf{N}}({\mathbb{X}}\times{\mathbb{M}})\to{\mathbb{R}}$
if
$\displaystyle
f(\mu)=f(\mu_{Z(\mu)\times{\mathbb{M}}}),\quad\mu\in{\mathbf{N}}({\mathbb{X}}\times{\mathbb{M}}),$
(4.3)
where $Z:=Z_{\infty}$, and relation (3.3) holds, with $Z_{t}(\eta)$ and
$Z(\eta)=Z_{\infty}(\eta)$ replaced by $Z_{t}(\eta)\times{\mathbb{M}}$ and
$Z_{\infty}(\eta)\times{\mathbb{M}}=Z(\eta)\times{\mathbb{M}}$, respectively.
Let $\bar{\eta}:=\eta(\cdot\times{\mathbb{M}})$. In the next theorem we shall
assume that
$\displaystyle\bar{\lambda}(Z_{t}(\mu)\setminus
Z_{t-}(\mu))=0,\quad\mu\in{\mathbf{N}},\,t\in{\mathbb{R}}_{+}$ (4.4)
and
$\displaystyle\mathbb{P}(\text{$\bar{\eta}(Z_{t}(\eta)\setminus
Z_{t-}(\eta))\leq 1$ for all $t\in{\mathbb{R}}_{+}$})=1.$ (4.5)
###### Theorem 4.2 (OSSS inequality for marked processes).
Let $f\colon{\mathbf{N}}({\mathbb{X}}\times{\mathbb{M}})\to{\mathbb{R}}$ be
measurable such that $\mathbb{E}[|f(\eta)|]<\infty$. Let $Y$ be an independent
$\mathbb{Y}$-valued random element. Let $\\{Z_{t}\\}=\\{Z_{t}^{Y}\\}$ be a
randomized CTDT determining $f$ such that for $\mathbb{P}(Y\in\cdot)$-a.e.
$y\in{\mathbb{Y}}$, $\\{Z_{t}^{y}\\}$ satisfies (4.4) and (4.5). Let
$\eta^{\prime}$ be an independent copy of $\eta$. Then
$\displaystyle\mathbb{E}[|f(\eta)-f(\eta^{\prime})|]\leq 2\int\mathbb{P}(x\in
Z(\eta))\mathbb{E}[|D_{(x,w)}f(\eta)|]\,\lambda(d(x,w)).$ (4.6)
###### Proof.
As with the proof of Corollary 4.1, we will prove the result for a
deterministic CTDT. In particular, we will show that
$\tilde{Z}_{t}=Z_{t}\times{\mathbb{M}},\,t\geq 0,$ is a CTDT on
${\mathbb{X}}\times{\mathbb{M}}$ satisfying the assumptions of Theorem 3.2.
The stopping set properties are immediate, see Remark A.8. Further, it is easy
to check that $\tilde{Z}_{t}$ satisfies (2.1) and (2.2) (as a collection of
set-valued mappings with values in $\mathcal{X}\otimes\mathcal{M}$), since
$Z_{t}$ satisfies the same conditions (as a collection of mappings with values
in $\mathcal{X}$). Further, by the definition of $\tilde{Z}_{t}$, conditions
(4.4) and (4.5) imply (3.4) and (3.5), respectively. Also, (3.3) is assumed.
Thus $\tilde{Z}_{t}$ is a CTDT satisfying the assumptions of Theorem 3.2, and
moreover $\mathbb{P}(x\in Z(\eta))=\mathbb{P}((x,w)\in\tilde{Z}(\eta))$ for any
$w\in{\mathbb{M}}$, so that (4.6) follows from (3.6). ∎
## Part II Quantitative Noise Sensitivity
Throughout this part, we adopt the general framework and notation for Poisson
processes and stopping sets introduced in Section 1.5 and Appendix A,
respectively. In particular, we denote by $\eta$ a Poisson process on
${\mathbb{X}}$ with locally finite (and not necessarily diffuse) intensity
$\lambda$.
### 5 Schramm-Steif inequalities on the Poisson space
The next result is a crucial finding of the paper: it provides an upper bound
on the variance of conditional expectations of multiple integrals, where the
conditioning is with respect to the restriction of $\eta$ to a randomized
stopping set, a notion that has been introduced in Section 4. For the sake of
conciseness, the proof of Theorem 5.1 will only be provided in the case where
$Z$ is non-randomized. The general case follows from an averaging argument
analogous to the one used in the proof of Corollary 4.1 (to simplify the
notation we also suppress the dependence on the independent random element
$Y$). A similar strategy is adopted (sometimes tacitly) elsewhere in Part II;
see e.g. the proof of Corollary 5.2 below.
In the subsequent discussion, we will mostly focus on (possibly randomized)
stopping sets $Z$ verifying the assumption that there exists an increasing
sequence of stopping sets $\\{Z_{k}:k\geq 1\\}$ such that
$\mathbb{P}(\eta(Z_{k}(\eta))<\infty)=1,\quad k\geq 1,\qquad\text{and}\qquad
Z=\bigcup_{k\geq 1}Z_{k}.$ (5.1)
We stress that, in the case of randomized stopping sets, we implicitly assume
that the sets $Z_{k}=Z_{k}^{Y}$ are such that the randomization $Y$ is the
same for every $k$. The reason for requiring (5.1) is that, according to
Theorem A.6, such a condition is sufficient for the Markov property (A.9) to
hold. As already done in other parts of the paper, given a (randomized)
stopping set $Z$ we will often write $Z$ for $Z(\eta)$ (resp. $Z^{Y}(\eta)$).
###### Theorem 5.1.
Let $Z$ be a randomized stopping set verifying (5.1). Let $k\in{\mathbb{N}}$
and $u\in L^{2}(\lambda^{k})$ be $\lambda^{k}$-a.e. symmetric. Then,
$\displaystyle\mathbb{E}[\mathbb{E}[I_{k}(u)\mid\eta_{Z}]^{2}]\leq k!\int
u(x_{1},\ldots,x_{k})^{2}\,\mathbb{P}(\\{x_{1},\ldots,x_{k}\\}\cap
Z(\eta)\neq\emptyset)\,\lambda^{k}(\mathrm{d}(x_{1},\ldots,x_{k})),$
where $\eta_{Z}:=\eta_{Z(\eta)}$ denotes the restriction of $\eta$ to
$Z(\eta)$.
###### Proof.
As announced, in the proof we assume that $Z$ is non-randomized. For every
$k\geq 1$, define $\mathcal{S}_{k}$ to be the subset of
$L^{2}(\lambda^{k})\cap L^{1}(\lambda^{k})$ composed of symmetric bounded
kernels $u$, such that the support of $u$ is contained in a set of the type
$A\times\cdots\times A$, with $A\in\mathcal{X}_{0}$. By virtue of the local
finiteness of $\lambda$ and of the fact that
$\sigma({\mathcal{X}}_{0})={\mathcal{X}}$, one has that, for every
$\lambda^{k}$-a.e. symmetric $u\in L^{2}(\lambda^{k})$, there exists a sequence
$\\{u_{n}:n\geq 1\\}\subset\mathcal{S}_{k}$ such that $u_{n}\to u$ in
$L^{2}(\lambda^{k})$.
Since both sides of the inequality in the statement are trivially continuous
in $u$ (because of (1.11) and the contractive properties of conditional
expectations), it follows that it is enough to prove the desired estimate for
$u\in\mathcal{S}_{k}$, in which case one has also that
$\displaystyle\int\bigg{(}\int
u(x_{1},\ldots,x_{k})\lambda^{k-i}(\mathrm{d}(x_{i+1},\ldots,x_{k}))\bigg{)}^{2}\,\lambda^{i}(\mathrm{d}(x_{1},\ldots,x_{i}))<\infty,\quad
i\in\\{1,\ldots,k\\}.$ (5.2)
For $k\in{\mathbb{N}}_{0}$, $B\in{\mathcal{X}}$ and $\mu\in{\mathbf{N}}$ we
define
$\displaystyle
I_{k,B,\mu}(u):=\sum^{k}_{i=0}(-1)^{k-i}\binom{k}{i}\mu^{(i)}_{B}\otimes\lambda^{k-i}_{B}(u),$
(5.3)
where $\mu^{(i)}$ is the $i$-th factorial measure of $\mu$ and
$\mu^{(i)}_{B}:=(\mu_{B})^{(i)}$ is the $i$-th factorial measure of the
restriction $\mu_{B}$ of $\mu$ to $B$. (We adopt the standard conventions that
$\mu^{(0)}(c)=\lambda^{0}(c)=c$ for all $c\in{\mathbb{R}}$,
$\mu^{(0)}\otimes\lambda^{k}:=\lambda^{k}$ and
$\mu^{(k)}\otimes\lambda^{0}:=\mu^{(k)}$.) Since $u\in\mathcal{S}_{k}$ we have
from the multivariate Mecke formula (see [49, Section 4.2]) that
$I_{k,B,\mu}(u)$ is well-defined and finite for
$\mathbb{P}(\eta\in\cdot)$-a.e. $\mu\in{\mathbf{N}}$. By [49, Proposition
12.9] we have $\mathbb{P}$-a.s. the pathwise identity
$\displaystyle I_{k}(u)=I_{k,{\mathbb{X}},\eta}(u).$ (5.4)
We wish to exploit the conditional variance formula
$\displaystyle\mathbb{E}\big{[}\mathbb{E}[I_{k}(u)\mid\eta_{Z}]^{2}\big{]}=\mathbb{E}\big{[}I_{k}(u)^{2}\big{]}-\mathbb{E}[\operatorname{{\mathbb{V}ar}}[I_{k}(u)\mid\eta_{Z}]],$
(5.5)
which is a direct consequence of the law of the total expectation. To deal
with the conditional variance of the right-hand side of (5.5), we use Theorem
A.6 together with (5.1) to infer that the conditional distribution of
$I_{k}(u)$ given $\eta_{Z}=\mu$ coincides (for
$\mathbb{P}(\eta_{Z}\in\cdot)$-a.e. $\mu$) with that of
$\displaystyle J:=\sum^{k}_{i=0}(-1)^{k-i}\binom{k}{i}\big{(}(\mu+\eta_{Z(\mu)^{c}})^{(i)}\otimes\lambda^{k-i}\big{)}(u).$
We stress that, since $u\in\mathcal{S}_{k}$, such a quantity is well-defined
for $\mathbb{P}(\eta_{Z}\in\cdot)$-a.e. $\mu$. Writing
$\lambda=\lambda_{Z(\mu)}+\lambda_{Z(\mu)^{c}}$ we deduce from a simple
calculation that
$\displaystyle J=\sum^{k}_{i=0}\binom{k}{i}I_{i,Z(\mu)^{c},\eta}(I_{k-i,Z(\mu),\mu}(u)),$
where the inner (deterministic) multiple integral refers to the arguments of
$u$ with index in $\\{i+1,\ldots,k\\}$ (with the first $i$ variables fixed)
and the outer (stochastic) integral is performed with respect to the remaining
variables. We observe that, since $u\in\mathcal{S}_{k}$, one has that, for
$\mathbb{P}(\eta_{Z}\in\cdot)$-a.e. $\mu$,
$\int_{({\mathbb{X}}\setminus
Z(\mu))^{i}}\big{(}I_{k-i,Z(\mu),\mu}(u_{(x_{1},\ldots,x_{i})})\big{)}^{2}\,\lambda^{i}(\mathrm{d}(x_{1},\ldots,x_{i}))<\infty,\quad
i=1,...,k,$
where $u_{(x_{1},\ldots,x_{i})}$ denotes the function
$(x_{i+1},\ldots,x_{k})\mapsto u(x_{1},\ldots,x_{k})$. By virtue of (5.2) we
can now apply the orthogonality relations (1.11) with $\eta$ replaced by
$\eta_{Z(\mu)^{c}}$ to obtain for $\mathbb{P}(\eta_{Z}\in\cdot)$-a.e. $\mu$
$\displaystyle\operatorname{{\mathbb{V}ar}}[I_{k}(u)\mid\eta_{Z}=\mu]=\sum^{k}_{i=1}\binom{k}{i}^{2}i!\int_{({\mathbb{X}}\setminus
Z(\mu))^{i}}\big{(}I_{k-i,Z(\mu),\mu}(u_{(x_{1},\ldots,x_{i})})\big{)}^{2}\,\lambda^{i}(\mathrm{d}(x_{1},\ldots,x_{i})).$
Dropping all summands except the $k$-th and integrating w.r.t. the
distribution of $\eta$ we get
$\displaystyle\mathbb{E}[\operatorname{{\mathbb{V}ar}}[I_{k}(u)\mid\eta_{Z}]]\geq
k!\,\mathbb{E}\bigg{[}\int_{({\mathbb{X}}\setminus
Z(\eta))^{k}}u(x_{1},\ldots,x_{k})^{2}\,\lambda^{k}(\mathrm{d}(x_{1},\ldots,x_{k}))\bigg{]}.$
Inserting this into (5.5) we obtain that
$\displaystyle\mathbb{E}\big{[}\mathbb{E}[I_{k}(u)\mid\eta_{Z}]^{2}\big{]}\leq k!\,\mathbb{E}\bigg{[}\int u(x_{1},\ldots,x_{k})^{2}\big{(}1-{\mathbf{1}}\\{x_{1}\notin Z(\eta),\ldots,x_{k}\notin Z(\eta)\\}\big{)}\,\lambda^{k}(\mathrm{d}(x_{1},\ldots,x_{k}))\bigg{]},$
and the assertion follows, since $1-{\mathbf{1}}\\{x_{1}\notin
Z(\eta),\ldots,x_{k}\notin
Z(\eta)\\}={\mathbf{1}}\\{\\{x_{1},\ldots,x_{k}\\}\cap
Z(\eta)\neq\emptyset\\}$. ∎
By inspection of the proof of Theorem 5.1, one sees that our arguments only
exploit the fact that $Z$ verifies the Markov property (A.9). Note also that
one may choose $Z\equiv{\mathbb{X}}$ in Theorem 5.1; in this case the
inequality reduces to an equality, namely the case $k=m$ and $u=v$ of the
isometry property (1.11).
###### Corollary 5.2.
Consider measurable mappings $f,g:{\mathbf{N}}\to{\mathbb{R}}$ such that
$\mathbb{E}[f(\eta)^{2}],\,\mathbb{E}[g(\eta)^{2}]<\infty$. We denote by
$\\{u_{k}:k=0,1,...\\}$ the sequence of kernels of the Wiener-Itô chaos
expansion of $f(\eta)$ (see (1.10)), and by $\\{v_{k}:k=0,1,...\\}$ the
kernels of the decomposition of $g$. We assume that $f$ is determined by a
randomized stopping set $Z$ verifying (5.1). Then, for all $k\geq 1$, we have
that
$\displaystyle(\mathbb{E}[I_{k}(u_{k})I_{k}(v_{k})])^{2}\leq k!\,\mathbb{E}[f(\eta)^{2}]\int\mathbb{P}(\\{x_{1},\ldots,x_{k}\\}\cap Z(\eta)\neq\emptyset)v_{k}^{2}(x_{1},\ldots,x_{k})\,\lambda^{k}(\mathrm{d}(x_{1},\ldots,x_{k})).$
(5.6)
###### Proof of Corollary 5.2.
Fix $k\geq 1$, and recall from (A.2) that $f(\eta)$ is
$\sigma(\eta_{Z})$-measurable. The orthonormality properties of multiple
integrals (1.11) and the Cauchy–Schwarz inequality yield that
$\displaystyle(\mathbb{E}[I_{k}(u_{k})I_{k}(v_{k})])^{2}$
$\displaystyle=(\mathbb{E}[f(\eta)I_{k}(v_{k})])^{2}=(\mathbb{E}[f(\eta)\,\mathbb{E}[I_{k}(v_{k})\mid\eta_{Z}]])^{2}$
$\displaystyle\leq\mathbb{E}[f(\eta)^{2}]\times\mathbb{E}[\mathbb{E}[I_{k}(v_{k})\mid\eta_{Z}]^{2}].$
An application of Theorem 5.1 implies the result. ∎
We will now obtain an analogue of the Schramm-Steif quantitative noise
sensitivity theorem (see [70, Theorem 1.8]). Several applications of this
result will be illustrated in the subsequent sections.
###### Corollary 5.3 (Schramm-Steif inequality for Poisson functionals).
Consider a measurable and square-integrable $f:{\mathbf{N}}\to{\mathbb{R}}$ as
in (1.10), and assume that $f$ is determined by a randomized stopping set $Z$
verifying (5.1). Then, for all $k\geq 1$,
$\mathbb{E}[I_{k}(u_{k})^{2}]\leq k\delta(Z)\,\mathbb{E}[f(\eta)^{2}],$ (5.7)
where $\delta(Z):=\sup_{x\in{\mathbb{X}}}\mathbb{P}(x\in Z(\eta))$.
###### Proof.
Setting $g=f$ in (5.6) and using the union bound for
$\mathbb{P}(\\{x_{1},\ldots,x_{k}\\}\cap Z(\eta)\neq\emptyset)$, we derive that
$\int\mathbb{P}(\\{x_{1},\ldots,x_{k}\\}\cap
Z(\eta)\neq\emptyset)u_{k}^{2}(x_{1},\ldots,x_{k})\,\lambda^{k}(\mathrm{d}(x_{1},\ldots,x_{k}))\leq
k\delta(Z)\|u_{k}\|^{2}_{L^{2}(\lambda^{k})}.$
The proof is completed using the orthonormality relation (1.11) and Corollary
5.2. ∎
To conclude the section, we will now extend the above OSSS-type inequality to
more general functions, albeit with the square root of the revealment
probability. The corollary below was once again motivated by a similar result
in the discrete case, whose proof was shared with us by Hugo Vanneuville. Some
other consequences of the above inequality can be found in the first arXiv
version of this article.
###### Corollary 5.4.
Consider a measurable and square-integrable function
$f:{\mathbf{N}}\to{\mathbb{R}}$ and assume that $f$ is determined by a
randomized stopping set $Z$ verifying (5.1). Then,
$\operatorname{{\mathbb{V}ar}}[f(\eta)]\leq 3\sqrt{\delta(Z)}\int_{{\mathbb{X}}}\mathbb{E}[|D_{x}f(\eta)|^{2}]\lambda(\mathrm{d}x).$
Deriving the above corollary with $\delta(Z)$ in place of $\sqrt{\delta(Z)}$
remains a challenge: it would give an $L^{2}$ version of the OSSS inequality
involving randomized stopping sets instead of randomized CTDTs. In view of
Remark 3.4, such an $L^{2}$ version would significantly enlarge the scope of
applications of the OSSS inequality.
###### Proof.
We can assume $\delta(Z)\in(0,1)$. Using (1.11)–(1.13) and Corollary 5.3, we
deduce that, for all $m\geq 1$,
$\displaystyle\int_{{\mathbb{X}}}\mathbb{E}[|D_{x_{1}}f(\eta)|^{2}]\lambda(\mathrm{d}x_{1})=\sum_{k=1}^{\infty}\int_{{\mathbb{X}}^{k}}\frac{1}{(k-1)!}\mathbb{E}[D^{k}_{x_{1},\ldots,x_{k}}f(\eta)]^{2}\lambda(\mathrm{d}x_{1})\lambda(\mathrm{d}x_{2})\ldots\lambda(\mathrm{d}x_{k})$
$\displaystyle\geq
m\sum_{k=m}^{\infty}\int_{{\mathbb{X}}^{k}}\frac{1}{k!}\mathbb{E}[D^{k}_{x_{1},\ldots,x_{k}}f(\eta)]^{2}\lambda(\mathrm{d}x_{1})\lambda(\mathrm{d}x_{2})\ldots\lambda(\mathrm{d}x_{k})$
$\displaystyle=m\left(\operatorname{{\mathbb{V}ar}}[f(\eta)]-\sum_{k=1}^{m-1}\int_{{\mathbb{X}}^{k}}\frac{1}{k!}\mathbb{E}[D^{k}_{x_{1},\ldots,x_{k}}f(\eta)]^{2}\lambda(\mathrm{d}x_{1})\lambda(\mathrm{d}x_{2})\ldots\lambda(\mathrm{d}x_{k})\right)$
$\displaystyle\geq
m\left(\operatorname{{\mathbb{V}ar}}[f(\eta)]-\delta(Z)\sum_{k=1}^{m-1}k\operatorname{{\mathbb{V}ar}}[f(\eta)]\right)$
$\displaystyle\geq
m\operatorname{{\mathbb{V}ar}}[f(\eta)]\left(1-\frac{m^{2}\delta(Z)}{2}\right).$
Selecting $m=\lfloor\delta(Z)^{-1/2}\rfloor$ (so that $m^{2}\delta(Z)\leq 1$
and $m\geq(1-\sqrt{\delta(Z)})/\sqrt{\delta(Z)}$) and exploiting the usual
Poincaré inequality yields the bound
$\operatorname{{\mathbb{V}ar}}[f(\eta)]\leq
2\min\left\\{\frac{1}{2}\,;\,\frac{\sqrt{\delta(Z)}}{1-\sqrt{\delta(Z)}}\right\\}\int_{{\mathbb{X}}}\mathbb{E}[|D_{x}f(\eta)|^{2}]\lambda(\mathrm{d}x),$
and the conclusion follows, since
$2\min\\{1/2\,;\,\sqrt{\delta}/(1-\sqrt{\delta})\\}\leq 3\sqrt{\delta}$ for
every $\delta\in(0,1)$ (consider separately the cases $\sqrt{\delta}\leq 1/3$
and $\sqrt{\delta}>1/3$). ∎
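The elementary step at the end of the proof, namely $2\min\\{1/2,\sqrt{\delta}/(1-\sqrt{\delta})\\}\leq 3\sqrt{\delta}$ for all $\delta\in(0,1)$, can also be checked numerically; the following Python sketch (identifiers are ours) scans a grid of revealment values:

```python
import math

def intermediate_constant(d):
    """Constant in the penultimate bound of Corollary 5.4 at revealment d."""
    s = math.sqrt(d)
    return 2.0 * min(0.5, s / (1.0 - s))

# the final constant 3*sqrt(d) dominates on the whole open interval (0, 1)
grid = [i / 1000 for i in range(1, 1000)]
print(all(intermediate_constant(d) <= 3.0 * math.sqrt(d) + 1e-12 for d in grid))
```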
### 6 Quantitative noise sensitivity
As indicated before, Corollary 5.3 has direct applications to noise
sensitivity, that one can use to derive further results on exceptional times
and bounds on critical windows. We will develop these themes in this section
and the next. In order to formally define the notion of noise sensitivity, we
need to introduce a collection of resampling procedures for the Poisson point
process $\eta$, that can then be used to define the Ornstein-Uhlenbeck
semigroup.
#### 6.1 The Ornstein-Uhlenbeck semigroup
For every $t\geq 0$ we define $\eta^{t}$ to be the Poisson process with
intensity measure $\lambda$ obtained by independently deleting each point in
the support of $\eta$ with probability $1-e^{-t}$, and then adding an
independent Poisson process with intensity measure $(1-e^{-t})\,\lambda$. For
$f\colon{\mathbf{N}}\to{\mathbb{R}}$ such that $\mathbb{E}[|f(\eta)|]<\infty$,
define the operator $T_{t}$ as
$T_{t}f(\eta):=\mathbb{E}[f(\eta^{t})\mid\eta].$ (6.1)
Setting $T_{\infty}f:=\mathbb{E}[f(\eta)]$, one sees immediately that the
operators $P_{s}:=T_{-\log s}$, $s\in(0,1]$, coincide with those defined in
[49, Section 20.1]. Now assume that $f\colon{\mathbf{N}}\to{\mathbb{R}}$ is
such
that $\mathbb{E}[f(\eta)^{2}]<\infty$ and that $f$ admits the chaotic
representation (1.10). In this case, according to Mehler’s formula (see [48,
formula (3.13)] or [47, formula (80)]) one has that, for each $t\geq 0$,
$\displaystyle
T_{t}f(\eta)=\mathbb{E}[f(\eta^{t})\mid\eta]=\sum_{k=0}^{\infty}e^{-kt}I_{k}(u_{k}),$
(6.2)
in such a way that the family of operators $\\{T_{t}:t\geq 0\\}$ (restricted
to $L^{2}(\mathbb{P})$) coincides with the classical Ornstein-Uhlenbeck
semigroup associated with $\eta$; see e.g. [47, Section 7] for a full
discussion.
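The resampling dynamics defining $\eta^{t}$ is straightforward to simulate. The sketch below (Python; a homogeneous process on the unit square is used as a stand-in for $({\mathbb{X}},\lambda)$, and all identifiers are ours) implements one resampling step and checks empirically that the marginal law of $\eta^{t}$ is again Poisson with the same intensity:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson(rate, rng):
    """Homogeneous Poisson process on the unit square with the given rate."""
    n = rng.poisson(rate)
    return rng.random((n, 2))

def resample(points, rate, t, rng):
    """One Ornstein-Uhlenbeck step: keep each point of the configuration
    independently with probability e^{-t}, then superpose an independent
    Poisson process with intensity (1 - e^{-t}) * rate."""
    keep = rng.random(len(points)) < np.exp(-t)
    fresh = sample_poisson((1.0 - np.exp(-t)) * rate, rng)
    return np.vstack([points[keep], fresh])

# the marginal law is preserved: eta^t is again Poisson with the same rate
rate, t, reps = 50.0, 0.7, 2000
counts = [len(resample(sample_poisson(rate, rng), rate, t, rng)) for _ in range(reps)]
print(abs(np.mean(counts) - rate) < 1.0)
```

Thinning a Poisson process and superposing an independent one both yield Poisson processes, which is why the pair $(\eta,\eta^{t})$ is exchangeable with Poisson marginals.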
###### Remark 6.1 (Markov process representation).
In the sections to follow, we will sometimes need to realise the resampled
processes $\\{\eta^{t}:t\geq 0\\}$ as a Markov process with values in
${\mathbf{N}}$. To do so, we recall from [47, Proposition 4 and Section 7]
that the infinitesimal generator $L$ of the semigroup $\\{T_{t}:t\geq 0\\}$ is
explicitly given as
$\displaystyle
Lf(\eta):=\int(f(\eta-\delta_{x})-f(\eta))\,\eta(dx)+\int(f(\eta+\delta_{x})-f(\eta))\,\lambda(dx)$
for all $f:{\bf N}\to{\mathbb{R}}$ verifying suitable integrability
assumptions. This is the generator of a free birth and death process on
${\mathbf{N}}$, with $\mathbb{P}(\eta\in\cdot)$ as its stationary measure. If
$\lambda$ is finite, then it is straightforward to construct a Markov process
$\\{\tilde{\eta}^{t}:t\geq 0\\}$ with generator $L$. Indeed, start with some
arbitrary initial configuration $\mu\in{\mathbf{N}}$ and attach independent
unit rate exponential life times to each point of $\mu$ (multiplicities have
to be taken into account). At the end of its lifetime, the point is removed.
Independently, new points are born at rate $\lambda({\mathbb{X}})$, with
locations distributed according to the normalized measure
$\lambda(\cdot)/\lambda({\mathbb{X}})$, and having independent unit rate
exponential life times as well. The arising Markov process
$\\{\tilde{\eta}^{t}:t\geq 0\\}$ is right-continuous and has left-hand limits
(càdlàg) with respect to the discrete topology. If
$\lambda({\mathbb{X}})=\infty$ we can partition ${\mathbb{X}}$ into sets of
finite $\lambda$-measure and paste together independent birth and death
processes. This procedure yields eventually a homogeneous Markov process
$\\{\tilde{\eta}^{t}:t\geq 0\\}$ that has càdlàg paths with respect to the
topology of weak convergence of point measures; we refer the reader to [64,
79] for more details. What is of importance for us is that, in the case
$\tilde{\eta}^{0}=\eta$, one has that
$\mathbb{E}[f(\tilde{\eta}^{t})\mid\tilde{\eta}^{0}]=\mathbb{E}[f(\tilde{\eta}^{t})\mid\eta]=T_{t}f(\eta)$,
as one can verify by a direct computation. From now on, we will refer to the
formula $T_{t}f(\eta)=\mathbb{E}[f(\tilde{\eta}^{t})\mid\eta]$, $t\geq 0$, as
the Markov process representation of the Ornstein-Uhlenbeck semigroup. When
adopting the Markov process representation of $\\{T_{t}\\}$ we will write
$\tilde{\eta}^{t}=\eta^{t}$, by a slight abuse of notation.
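For finite $\lambda$, the population size of the free birth-and-death process of this remark is itself a Markov chain: each existing point dies at rate $1$ and births occur at rate $\lambda({\mathbb{X}})$. A minimal Gillespie-type simulation (Python; a sketch with our own naming, not part of the construction above) illustrates that the Poisson law is stationary for the population size:

```python
import numpy as np

rng = np.random.default_rng(1)

def birth_death(n0, lam_total, t_max, rng):
    """Free birth-and-death process on the number of points: each of the
    current points dies at rate 1, and new points are born at rate lam_total.
    Returns the population size at time t_max (Gillespie simulation)."""
    n, t = n0, 0.0
    while True:
        total_rate = n + lam_total
        t += rng.exponential(1.0 / total_rate)
        if t > t_max:
            return n
        if rng.random() < n / total_rate:
            n -= 1  # a uniformly chosen point reaches the end of its lifetime
        else:
            n += 1  # a new point is born

# Poisson(lam_total) is stationary for the population size
lam_total, reps = 20.0, 3000
out = [birth_death(rng.poisson(lam_total), lam_total, 1.0, rng) for _ in range(reps)]
print(abs(np.mean(out) - lam_total) < 0.6)
```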
#### 6.2 Noise sensitivity
With the above notation at hand, we will now present the central definition of
the section.
###### Definition 6.1.
Let $f_{n}:{\mathbf{N}}\to{\mathbb{R}},\,n\geq 1$ be a sequence of measurable
mappings such that $\mathbb{E}[f_{n}(\eta)^{2}]<\infty$, for each $n$. The
sequence $\\{f_{n}\\}$ is said to be noise sensitive if, for every $t>0$,
$\mathbb{E}[f_{n}(\eta)f_{n}(\eta^{t})]-\mathbb{E}[f_{n}(\eta)]^{2}\to 0,\quad
n\to\infty.$
In order to detect noise sensitivity, it is often useful to exploit the fact
that, if $\mathbb{E}[f(\eta)^{2}]<\infty$ and $f$ admits the chaos expansion
(1.10), then (6.2) and (1.11) imply that
$\mathbb{E}[f(\eta)f(\eta^{t})]=\sum_{k=1}^{\infty}e^{-kt}\mathbb{E}[I_{k}(u_{k})^{2}]+\mathbb{E}[f(\eta)]^{2}.$
(6.3)
Relation (6.3) is the pivotal element in the proof of the next statement.
###### Proposition 6.2.
Let $f_{n}:{\mathbf{N}}\to{\mathbb{R}}$, $n\geq 1$, be a sequence of
measurable mappings such that
$\sup_{n}\mathbb{E}[f_{n}(\eta)^{2}]:=C<\infty,$ (6.4)
and denote by $u^{(n)}_{k}$ the $k$th kernel in the chaotic expansion (1.10)
of $f_{n}(\eta)$.
1. 1.
The sequence $\\{f_{n}:n\geq 1\\}$ is noise sensitive if and only if
$\lim_{n\to\infty}\mathbb{E}\big{[}I_{k}\big{(}u^{(n)}_{k}\big{)}^{2}\big{]}=0,\quad\,k\in{\mathbb{N}}.$
(6.5)
2. 2.
For every nonnegative sequence $\\{t_{n}:n\geq 1\\}$, the following double
implication holds:
$\operatorname{{\mathbb{C}ov}}[f_{n}(\eta),f_{n}(\eta^{t_{n}})]\to
0,\,\,\mbox{as $n\to\infty$,}$ (6.6)
if and only if
$\displaystyle\mathbb{E}[f_{n}(\eta^{t_{n}})\mid\eta]-\mathbb{E}[f_{n}(\eta)]\overset{L^{2}(\mathbb{P})}{\longrightarrow}0,\,\,\mbox{as
$n\to\infty$}.$ (6.7)
###### Proof.
The fact that noise sensitivity implies (6.5) is a direct consequence of the
relation
$\mathbb{E}[f_{n}(\eta)f_{n}(\eta^{t})]-\mathbb{E}[f_{n}(\eta)]^{2}\geq
e^{-kt}\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}]$, which follows in turn from (6.3).
On the other hand, if (6.5) is in order one can use again (6.3) together with
the bound
$\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}]\leq\mathbb{E}[f_{n}(\eta)^{2}]\leq C$
(valid for all $k,n$), and infer noise sensitivity by dominated convergence.
This proves Part 1. Part 2 is deduced from (6.4) and from the two relations
$\operatorname{{\mathbb{C}ov}}[f_{n}(\eta),f_{n}(\eta^{t_{n}})]=\sum_{k=1}^{\infty}e^{-kt_{n}}\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}]\geq\sum_{k=1}^{\infty}e^{-2kt_{n}}\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}]=\operatorname{{\mathbb{V}ar}}[\mathbb{E}[f_{n}(\eta^{t_{n}})\mid\eta]],$
and
$\operatorname{{\mathbb{C}ov}}[f_{n}(\eta),f_{n}(\eta^{t_{n}})]=\mathbb{E}\Big{[}\big{(}f_{n}(\eta)-\mathbb{E}[f_{n}(\eta)]\big{)}\big{(}\mathbb{E}[f_{n}(\eta^{t_{n}})\mid\eta]-\mathbb{E}[f_{n}(\eta)]\big{)}\Big{]}.$
∎
Combining Part 1 of Proposition 6.2 with Corollary 5.3, we deduce the
following criterion for noise sensitivity.
###### Proposition 6.3.
Let $f_{n}:{\mathbf{N}}\to{\mathbb{R}}$, $n\geq 1$, be a sequence of
measurable mappings satisfying the assumptions of Proposition 6.2. Assume that
there exist randomized stopping sets $Z_{n}$, $n\geq 1$, such that, for every
$n$, $Z_{n}$ verifies (5.1) and determines $f_{n}$, and moreover
$\delta_{n}:=\delta(Z_{n}):=\sup_{x\in{\mathbb{X}}}\mathbb{P}(x\in
Z_{n})\longrightarrow 0,\quad\mbox{as $n\to\infty$}.$
Then, $\\{f_{n}:n\geq 1\\}$ is noise sensitive.
One can actually prove a more quantitative version of the previous result,
which will be very useful in the forthcoming sections. Indeed, using (6.3) and
Corollary 5.3, one derives the estimate
$\operatorname{{\mathbb{C}ov}}[f_{n}(\eta),f_{n}(\eta^{t})]\leq C\,\delta_{n}\frac{e^{-t}}{(1-e^{-t})^{2}},$
(6.8)
from which we deduce that
$\operatorname{{\mathbb{C}ov}}[f_{n}(\eta),f_{n}(\eta^{t_{n}})]\to
0,\,\,\mbox{as $n\to\infty$,}$ (6.9)
for any nonnegative bounded sequence $\\{t_{n}\\}$ such that
$\delta_{n}=o((1-e^{-t_{n}})^{2})$, as $n\to\infty$.
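The estimate (6.8) rests on the identity $\sum_{k\geq 1}ke^{-kt}=e^{-t}/(1-e^{-t})^{2}$, obtained by differentiating the geometric series. A quick numerical confirmation (Python; the helper name is ours):

```python
import math

def weighted_tail(t, terms=10_000):
    """Partial sum of sum_{k>=1} k * exp(-k t), the factor behind (6.8)."""
    return sum(k * math.exp(-k * t) for k in range(1, terms + 1))

checks = []
for t in (0.3, 1.0, 2.5):
    closed_form = math.exp(-t) / (1.0 - math.exp(-t)) ** 2
    checks.append(abs(weighted_tail(t) - closed_form) < 1e-9)
print(all(checks))
```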
#### 6.3 Noise stability
We now define noise stability, and then prove a statement containing a
criterion for it expressed in terms of chaos expansion (analogous to [29,
Proposition 4.3]), as well as a quantitative noise stability estimate similar
to [29, Proposition 6.1.2].
###### Definition 6.4.
Let $f_{n}:{\mathbf{N}}\to\\{0,1\\}$ be a sequence of measurable mappings. The
sequence $\\{f_{n}\\}$ is said to be noise stable if
$\lim_{t\to 0}\lim_{n\to\infty}\mathbb{P}[f_{n}(\eta)\neq f_{n}(\eta^{t})]=0,$
where $\eta^{t}$ has been defined in Section 6.1.
We recall that, if a sequence of mappings $f_{n}$, $n\geq 1$, takes values in
a bounded set, then the random variables $f_{n}(\eta)$ admit a chaos
decomposition: as before, we will denote by $u_{k}^{(n)}$ the $k$th kernel in
the chaotic decomposition (1.10) of $f_{n}(\eta)$. The following statement
contains the announced criterion for noise stability, as well as an extension
to mappings with values in $\\{-1,1\\}$.
###### Proposition 6.5.
* (a)
Let $f_{n}:{\mathbf{N}}\to\\{0,1\\}$ be a sequence of measurable mappings.
Then, the sequence $\\{f_{n}\\}$ is noise stable if and only if, for every
$\epsilon>0$, there exists $m$ such that for all $n$,
$\sum_{k=m}^{\infty}\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}]<\epsilon.$ (6.10)
* (b)
Let $f_{n}:{\mathbf{N}}\to\\{-1,1\\}$ be a sequence of measurable mappings. If
$\\{t_{n}:n\geq 1\\}\subset\mathbb{R}_{+}$ is a sequence such that
$t_{n}\int\mathbb{E}[|D_{x}f_{n}(\eta)|^{2}]\lambda(\mathrm{d}x)\to 0$ as
$n\to\infty$, then
$\lim_{n\to\infty}\mathbb{P}(f_{n}(\eta)\neq f_{n}(\eta^{t_{n}}))=0.$
###### Proof.
Using the covariance formula (6.3) one infers that
$\displaystyle\mathbb{P}[f_{n}(\eta)\neq f_{n}(\eta^{t})]$
$\displaystyle=\mathbb{E}[f_{n}(\eta)(1-f_{n}(\eta^{t}))]+\mathbb{E}[(1-f_{n}(\eta))f_{n}(\eta^{t})]$
$\displaystyle=2(\mathbb{E}[f_{n}(\eta)^{2}]-\mathbb{E}[f_{n}(\eta)f_{n}(\eta^{t})])$
$\displaystyle=2\sum_{k=1}^{\infty}(1-e^{-kt})\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}],$
which yields immediately Part (a). To prove Part (b), we first observe that,
since each $f_{n}$ takes values in $\\{-1,1\\}$, then
$\sum_{k=0}^{\infty}\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}]=1,$
that is: for every $n$, the weights
$\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}],\,\,k\geq 0$, constitute a probability
distribution on $\\{0,1,\ldots\\}$. Combining the covariance formula (6.3)
with Jensen’s inequality, we deduce that, for each $t\geq 0$,
$\mathbb{E}[f_{n}(\eta)f_{n}(\eta^{t})]=\sum_{k=0}^{\infty}e^{-kt}\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}]\geq\exp\\{-t\sum_{k=0}^{\infty}k\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}]\\}.$
Since
$\sum_{k=0}^{\infty}k\mathbb{E}[I_{k}(u^{(n)}_{k})^{2}]=\int\mathbb{E}[|D_{x}f_{n}|^{2}]\lambda(\mathrm{d}x),$
we eventually obtain the lower bound
$\mathbb{E}[f_{n}(\eta)f_{n}(\eta^{t})]\geq\exp\left\\{-t\int\mathbb{E}[|D_{x}f_{n}|^{2}]\lambda(\mathrm{d}x)\right\\},$
from which the desired conclusion follows at once. ∎
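The Jensen step in the proof of Part (b), i.e. $\sum_{k}e^{-kt}p_{k}\geq\exp\\{-t\sum_{k}kp_{k}\\}$ for any probability vector $\\{p_{k}\\}$ (by convexity of $x\mapsto e^{-tx}$), can be probed numerically; the Python sketch below (identifiers are ours) tests it on random weight vectors:

```python
import math
import random

random.seed(3)

def jensen_gap(t, probs):
    """sum_k p_k e^{-k t} minus exp(-t * sum_k k p_k); nonnegative by Jensen."""
    lhs = sum(p * math.exp(-k * t) for k, p in enumerate(probs))
    rhs = math.exp(-t * sum(k * p for k, p in enumerate(probs)))
    return lhs - rhs

ok = True
for _ in range(100):
    w = [random.random() for _ in range(10)]
    total = sum(w)
    ok = ok and jensen_gap(0.8, [x / total for x in w]) >= -1e-12
print(ok)
```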
From the noise sensitivity and noise stability criteria above and the
covariance formula (6.3), it is easy to see that if $\\{f_{n}\\}$ is both noise
sensitive and noise stable, then $f_{n}$ is asymptotically degenerate, i.e.,
$\operatorname{{\mathbb{V}ar}}[f_{n}(\eta)]\to 0$ as $n\to\infty$.
### 7 Exceptional times and critical windows
Throughout this section, we adopt the Markov process representation of the
Ornstein-Uhlenbeck semigroup $\\{T_{t}\\}$, as put forward in Remark 6.1.
We say that a sequence of Boolean functions $\\{f_{n}:n\geq 1\\}$,
$f_{n}:{\mathbf{N}}\to\\{0,1\\}$ is non-degenerate if, for some
$\gamma_{0}\in(0,1)$,
$\gamma_{0}\leq\mathbb{E}[f_{n}(\eta)]\leq
1-\gamma_{0},\quad\,n\in{\mathbb{N}}.$ (7.1)
We say that a function $f\colon{\mathbf{N}}\to{\mathbb{R}}$ is jump regular
(with respect to $\\{\eta^{t}:t\geq 0\\}$) if the mapping $t\mapsto
f(\eta^{t})$ is almost surely piecewise constant, that is, on any compact
interval such a mapping equals a finite linear combination of indicators of
bounded intervals of the form $[a,b)$. We will abuse notation and denote the
left-hand limit of $t\mapsto f(\eta^{t})$ at $t>0$ by $f(\eta^{t-})$, even
though $\eta^{t-}$ may not be well defined. Observe that if
$\lambda({\mathbb{X}})<\infty$ and $f\colon{\mathbf{N}}\to\\{0,1\\}$, then
$t\mapsto f(\eta^{t})$ is almost surely piecewise constant, because of the
right-continuity of $\eta^{t}$ with respect to the discrete topology; this is
sufficient to verify jump regularity in many applications. Given a sequence of
Boolean jump regular functions $f_{n}\colon{\mathbf{N}}\to\\{0,1\\}$,
$n\in{\mathbb{N}}$, we define the sets of exceptional times as
$S_{n}:=\\{t>0:f_{n}(\eta^{t})\neq f_{n}(\eta^{t-})\\},\quad
n\in{\mathbb{N}}.$
One naturally expects that, if the sequence $\\{f_{n}:n\geq 1\\}$ is noise
sensitive, then $|S_{n}|\to\infty$ in some sense. Such an intuition is
formally confirmed in the next statement, where we show that a non-degenerate
and noise sensitive Boolean function has an infinite set of exceptional times
even in small time intervals. This was proved in the specific case of critical
percolation in [12, Lemma 5.1], but the proof can be adapted to a more general
setting (see [29, Theorem 1.23]). The proof of the next result follows closely
the strategy developed in the aforementioned references.
###### Theorem 7.1 (Existence of exceptional times).
Let $f_{n}\colon{\mathbf{N}}\to\\{0,1\\}$, $n\in{\mathbb{N}}$, be a sequence
of jump regular and non-degenerate Boolean functions. Further, assume that
there exists a sequence $c_{n}\in(0,1)$, $n\geq 1$, such that, for every
$\beta\in(0,1)$, the sequence $b_{n}=\beta\,c_{n}$, $n\geq 1$, is such that
$\displaystyle\operatorname{{\mathbb{C}ov}}[f_{n}(\eta),f_{n}(\eta^{b_{n}})]\to
0,\,\,\mbox{as $n\to\infty$.}$ (7.2)
Then, $|S_{n}\cap[0,c_{n}]|\overset{\mathbb{P}}{\longrightarrow}\infty$.
###### Proof.
We claim that it suffices to show that for all $0\leq a<b\leq 1$,
$\lim_{n\to\infty}\mathbb{P}(S_{n}\cap[ac_{n},bc_{n}]=\emptyset)=0.$ (7.3)
To see this, assume (7.3) is true, and fix an arbitrary integer $M\geq 1$ and
$\alpha>0$. Partitioning $[0,c_{n}]$ into $2M$ intervals of length
$c_{n}/(2M)$, we can choose $N$ large enough in order to have that, for all
$n\geq N$,
$\mathbb{P}\left(S_{n}\cap\left[0,\frac{c_{n}}{2M}\right]=\emptyset\right)\leq\alpha/2.$
Set
$X_{n}:=\sum_{l=1}^{2M}{\mathbf{1}}\\{S_{n}\cap[\frac{(l-1)c_{n}}{2M},\frac{lc_{n}}{2M}]=\emptyset\\}$
and observe that $\mathbb{E}[X_{n}]\leq\alpha M$ by the homogeneous Markov
property of $\eta^{t}$. Using Markov’s inequality,
$\mathbb{P}(|S_{n}\cap[0,c_{n}]|\leq M)\leq\mathbb{P}(X_{n}\geq M)\leq\alpha.$
As $M,\alpha$ are arbitrary, the previous estimate completes the proof of the
theorem assuming (7.3). We will now prove (7.3) for arbitrary $0\leq a<b\leq
1$ that we fix until the end of the proof. For $\epsilon\in(0,1)$ and
$n\in{\mathbb{N}}$, we set
$W_{n,\epsilon}=\\{\mu\in{\mathbf{N}}:\mathbb{P}(f_{n}(\eta^{\epsilon})=1\mid\eta=\mu)\in[0,\gamma_{0}/2]\cup[1-\gamma_{0}/2,1]\\},$
where $\gamma_{0}$ is as in (7.1). Select an arbitrary $\gamma>0$, let
$k\in{\mathbb{N}}$ and define $b_{n}:=(b-a)c_{n}/k$. By the non-degeneracy
condition and part 2 of Proposition 6.2, one can find a large
$n_{0}\in{\mathbb{N}}$ such that for all $n\geq n_{0}$, $\mathbb{P}(\eta\in
W_{n,b_{n}})\leq\gamma_{0}\gamma$; from now on, we consider that $n\geq
n_{0}$. Now take $t_{0},\ldots,t_{k}$ such that
$ac_{n}=t_{0}<t_{1}<\cdots<t_{k}=bc_{n}$ and $t_{i}-t_{i-1}=b_{n}$. For
each $i\in\\{1,\ldots,k\\}$, we have that
$\displaystyle\mathbb{P}(S_{n}\cap(t_{0},t_{i}]=\emptyset)\leq\mathbb{P}\big{(}\eta^{t_{i-1}}\in W_{n,b_{n}}\big{)}+\mathbb{P}\big{(}\eta^{t_{i-1}}\notin W_{n,b_{n}},S_{n}\cap(t_{0},t_{i}]=\emptyset\big{)}\leq\gamma_{0}\gamma+\mathbb{P}\big{(}S_{n}\cap(t_{0},t_{i-1}]=\emptyset,\eta^{t_{i-1}}\notin W_{n,b_{n}},f_{n}(\eta^{t_{i-1}})=f_{n}(\eta^{t_{i}})\big{)},$
where we have used that $f_{n}$ is jump regular. Using the Markov property of
$\eta^{t}$, we obtain that
$\displaystyle\mathbb{P}(S_{n}\cap(t_{0},t_{i}]=\emptyset)\leq\gamma_{0}\gamma+\mathbb{E}\big{[}{\mathbf{1}}\\{S_{n}\cap(t_{0},t_{i-1}]=\emptyset,\eta^{t_{i-1}}\notin W_{n,b_{n}}\\}\mathbb{P}(f_{n}(\eta^{t_{i}})=f_{n}(\eta^{t_{i-1}})\mid\eta^{t_{i-1}})\big{]}.$
By definition of $W_{n,b_{n}}$ (and since $f_{n}$ is Boolean) we have that, if
$\eta^{t_{i-1}}\notin W_{n,b_{n}}$,
$\displaystyle\mathbb{P}(f_{n}(\eta^{t_{i}})=f_{n}(\eta^{t_{i-1}})\mid\eta^{t_{i-1}})\leq 1-\gamma_{0}/2.$
This yields the bound
$\displaystyle\mathbb{P}(S_{n}\cap(t_{0},t_{i}]=\emptyset)\leq\gamma_{0}\gamma+(1-\gamma_{0}/2)\mathbb{P}(S_{n}\cap(t_{0},t_{i-1}]=\emptyset),$
and an induction argument allows one to infer that
$\displaystyle\mathbb{P}(S_{n}\cap(ac_{n},bc_{n}]=\emptyset)\leq\gamma_{0}\gamma\sum^{k}_{i=0}(1-\gamma_{0}/2)^{i}+(1-\gamma_{0}/2)^{k}\leq 2\gamma+(1-\gamma_{0}/2)^{k},$
where the second inequality follows from summing the geometric series,
$\gamma_{0}\gamma\sum^{\infty}_{i=0}(1-\gamma_{0}/2)^{i}=2\gamma$. Letting
$k\to\infty$, we obtain that, for all $n\geq n_{0}$,
$\mathbb{P}(S_{n}\cap(ac_{n},bc_{n}]=\emptyset)\leq 2\gamma$; since $\gamma>0$
is arbitrary, (7.3) is proved. ∎
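As a toy illustration of exceptional times, take $\lambda({\mathbb{X}})<\infty$, run the birth-and-death dynamics of Remark 6.1, and track the Boolean function $f(\mu)={\mathbf{1}}\\{\mu({\mathbb{X}})\geq m\\}$, which is jump regular in this setting. The Python sketch below (our own naming; an illustration, not part of the proofs) counts the flip times of $f$ along the dynamics:

```python
import numpy as np

rng = np.random.default_rng(5)

def count_flips(lam_total, m, t_max, rng):
    """Number of times t in (0, t_max] at which f(mu) = 1{mu(X) >= m} flips
    along the stationary birth-and-death dynamics started from Poisson."""
    n = rng.poisson(lam_total)
    t, flips, state = 0.0, 0, n >= m
    while True:
        total = n + lam_total  # death rate n, birth rate lam_total
        t += rng.exponential(1.0 / total)
        if t > t_max:
            return flips
        n += -1 if rng.random() < n / total else 1
        if (n >= m) != state:
            state, flips = not state, flips + 1

# near the "critical" threshold m = lam_total, flips occur frequently
print(count_flips(30.0, 30, 5.0, rng) > 0)
```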
Combining Theorem 7.1 with (6.9), one deduces the following easily applicable
corollary.
###### Corollary 7.2.
Let $f_{n}\colon{\mathbf{N}}\to\\{0,1\\},n\geq 1$, be a sequence of jump
regular and non-degenerate mappings, and assume that there exist randomized
stopping sets $Z_{n}$, $n\geq 1$, such that $Z_{n}$ determines $f_{n}(\eta)$
and $Z_{n}$ verifies (5.1) for every $n$, and moreover
$\displaystyle\delta_{n}:=\delta(Z_{n}):=\sup_{x\in{\mathbb{X}}}\mathbb{P}(x\in
Z_{n})\longrightarrow 0,\quad\,\mbox{as $n\to\infty$}.$
Then, for any sequence $c_{n}\in(0,1)$ such that
$\delta_{n}=o(c_{n}^{2})$, we have that
$|S_{n}\cap[0,c_{n}]|\overset{\mathbb{P}}{\longrightarrow}\infty$.
Following the ideas in the proof of [30, Theorem 1.10], we use the above
result to bound the critical window for the phase transition of a monotonic
Boolean function. To formulate the result smoothly, it is convenient to
introduce, for any $\sigma$-finite measure $\nu$ on ${\mathbb{X}}$, a Poisson
process $\eta_{\nu}$ with that intensity measure. We shall use this notation
only for measures $\nu$ that are multiples of the intensity measure $\lambda$.
###### Theorem 7.3 (Bounds on critical window).
Let $f_{n}\colon{\mathbf{N}}\to\\{0,1\\},n\geq 1$, be a sequence of
increasing, jump regular and non-degenerate mappings. Assume that there exist
randomized stopping sets $Z_{n}$, $n\geq 1$, such that $Z_{n}$ determines
$f_{n}(\eta)$ and $Z_{n}$ verifies (5.1) for every $n$, and moreover
$\displaystyle\delta_{n}:=\delta(Z_{n}):=\sup_{x\in{\mathbb{X}}}\mathbb{P}(x\in
Z_{n})\to 0,\quad\mbox{as $n\to\infty$}.$ (7.4)
Then, for any sequence $c_{n}\in(0,1)$ such that
$\delta_{n}=o(c_{n}^{2})$, we have that
$\mathbb{E}[f_{n}(\eta_{(1+c_{n})\lambda})]\to 1$ and
$\mathbb{E}[f_{n}(\eta_{(1-c_{n})\lambda})]\to 0$, as $n\to\infty$.
We remark that the proof only uses the fact that
$|S_{n}\cap[0,c_{n}]|\overset{\mathbb{P}}{\longrightarrow}\infty$ where
$S_{n}$ is the set of exceptional times.
###### Proof.
Firstly, from Corollary 7.2, we have that
$\displaystyle\mathbb{P}(f_{n}(\eta^{t})=1\,\mbox{for some
$t\in[0,c_{n}]$})\to 1\,\mbox{as $n\to\infty$}.$ (7.5)
For $t\geq 0$, let $\eta^{\prime}_{t}$ be the Poisson process of points born
before time $t$. This process is independent of $\eta$ and has intensity
measure $t\lambda$. Then $\zeta_{t}:=\eta+\eta^{\prime}_{t}$ is a Poisson
process with intensity measure $(1+t)\lambda$. By construction of $\eta^{t}$
and $\zeta_{t}$, we have $\eta^{t}\leq\zeta_{t}\leq\zeta_{c_{n}}$ whenever
$t\leq c_{n}$. By the monotonicity of $f_{n}$ and (7.5) this implies
$\mathbb{E}[f_{n}(\zeta_{c_{n}})]\to 1$ as $n\to\infty$ and hence the first
assertion. Again, from Corollary 7.2, we have that
$\displaystyle\mathbb{P}(f_{n}(\eta^{t})=0\,\mbox{for some
$t\in[0,c_{n}]$})\to 1\,\mbox{as $n\to\infty$}.$ (7.6)
For $t\geq 0$, let $\zeta^{\prime}_{t}$ be the point process of points from
$\eta$ (counting multiplicities) which are still alive at time $t$. Since the
lifetimes of points are exponential, $\zeta^{\prime}_{t}$ is a Poisson process
with intensity measure $e^{-t}\lambda$ and also trivially by construction
$\zeta^{\prime}_{c_{n}}\leq\zeta^{\prime}_{t}\leq\eta^{t}$ whenever $t\leq
c_{n}$. Thus, by the monotonicity of $f_{n}$ and (7.6), we obtain that
$\mathbb{E}[f_{n}(\eta_{e^{-c_{n}}\lambda})]=\mathbb{E}[f_{n}(\zeta^{\prime}_{c_{n}})]\to
0$ as $n\to\infty$. Now, the second assertion follows by first observing that
$1-t\leq e^{-t}$ for $t\geq 0$ and then using the monotonicity of $f_{n}$
along with the thinning property of the Poisson process. ∎
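The proof rests on two standard Poisson operations: superposition (adding the independent births $\eta^{\prime}_{t}$ raises the intensity from $\lambda$ to $(1+t)\lambda$) and thinning (keeping only the points still alive at time $t$, each surviving independently with probability $e^{-t}$, lowers it to $e^{-t}\lambda$). A minimal numerical sketch of both facts, using Poisson counts on a fixed window with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, reps = 2.0, 0.4, 200_000  # window intensity, time, Monte Carlo size

# Superposition: counts of eta plus the independent births eta'_t; the sum of
# independent Poisson(lam) and Poisson(t*lam) counts is Poisson((1+t)*lam).
counts = rng.poisson(lam, reps) + rng.poisson(t * lam, reps)
assert abs(counts.mean() - (1 + t) * lam) < 0.05

# Thinning: of Poisson(lam) points, each is still alive at time t
# independently with probability exp(-t) (exponential lifetimes), so the
# survivor count is Poisson(exp(-t)*lam).
survivors = rng.binomial(rng.poisson(lam, reps), np.exp(-t))
assert abs(survivors.mean() - np.exp(-t) * lam) < 0.05
print(counts.mean(), survivors.mean())
```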
We now prove a strengthening of the above result when the randomized stopping
sets $Z_{n}$ above can be attained by a CTDT as in Theorem 3.2. This proof
proceeds similarly to the Friedgut-Kalai sharp threshold theorem (see [29,
Theorem 3.5]). This refinement was pointed out to us by Stephen Muirhead.
###### Theorem 7.4 (Bounds on critical window).
Let $f_{n}\colon{\mathbf{N}}\to\\{0,1\\},n\geq 1$, be as in Theorem 7.3.
Assume for each $n\in{\mathbb{N}}$ that $f_{n}$ is determined by a randomized
stopping set $Z_{n}$, satisfying the assumptions of Theorem 3.2. Assume that
(7.4) holds. Then, for any sequence $c_{n}\in(0,1)$ such that
$\delta_{n}=o(c_{n})$, we have that
$\mathbb{E}[f_{n}(\eta_{(1+c_{n})\lambda})]\to 1$ and
$\mathbb{E}[f_{n}(\eta_{(1-c_{n})\lambda})]\to 0$, as $n\to\infty$.
We shall actually prove a stronger quantitative version than the one stated.
###### Proof.
Set $p_{n}(1+t):=\mathbb{E}[f_{n}(\eta_{(1+t)\lambda})]$ for
$t\in[-1,\infty)$. Let $\epsilon>0$. We will show that, for $n$ large enough,
$\mbox{if }\,\,p_{n}(1)\geq\gamma_{0}\,\mbox{ then }\,p_{n}(1+t_{n})\geq
1-\epsilon\,\mbox{ for
}\,t_{n}=\frac{-\delta_{n}\log(\gamma_{0})}{\epsilon+\delta_{n}\log(\gamma_{0})}.$
(7.7)
This suffices to show that $\mathbb{E}[f_{n}(\eta_{(1+c_{n})\lambda})]\to 1$
for any sequence $c_{n}\in(0,1)$ such that $\delta_{n}=o(c_{n})$. Applying
the same argument to $1-p_{n}(1-t)$ will yield the second claim in the
theorem. For given $\epsilon>0$ let $n$ be large enough such that $t_{n}\geq
0$ in (7.7). This is possible because $\delta_{n}\to 0$ as $n\to\infty$. Using
the Russo-Margulis formula for Poisson functionals (see [49, Theorem 19.4]),
monotonicity of $f_{n}$ and the Poisson-OSSS inequality (Theorem 3.2), we
derive that for $t\geq 0$,
$\displaystyle\frac{\mathrm{d}p_{n}(1+t)}{\mathrm{d}t}$
$\displaystyle=\int_{{\mathbb{X}}}\mathbb{E}[D_{x}f_{n}(\eta_{(1+t)\lambda})]\lambda(\mathrm{d}x)$
$\displaystyle=\int_{{\mathbb{X}}}\mathbb{E}[|D_{x}f_{n}(\eta_{(1+t)\lambda})|]\lambda(\mathrm{d}x)$
$\displaystyle\geq\frac{1}{(1+t)\delta_{n}}\operatorname{{\mathbb{V}ar}}(f_{n}(\eta_{(1+t)\lambda}))$
$\displaystyle=\frac{1}{(1+t)\delta_{n}}p_{n}(1+t)(1-p_{n}(1+t)).$
Suppose that $p_{n}(1)\geq\gamma_{0}$ and $p_{n}(1+t_{n})<1-\epsilon$ for
$t_{n}$ as defined in (7.7). Since $f_{n}$ is monotonic, we have that
$p_{n}(1+t)<1-\epsilon$ for all $t\leq t_{n}$ and hence using this in the
above differential inequality, we obtain that for all $t\in[0,t_{n}]$,
$\frac{\mathrm{d}\log
p_{n}(1+t)}{\mathrm{d}t}\geq\frac{\epsilon}{(1+t_{n})\delta_{n}}.$
Exploiting the fact that $p_{n}(1+t)\geq p_{n}(1)\geq\gamma_{0}$, one infers
that
$\log
p_{n}(1+t_{n})\geq\log\gamma_{0}+\frac{\epsilon\,t_{n}}{(1+t_{n})\delta_{n}}=0,$
which implies $p_{n}(1+t_{n})\geq 1$, a contradiction. So, we
have that $p_{n}(1+t_{n})\geq 1-\epsilon$ as required by (7.7). This completes
the proof of (7.7) and hence the theorem as well. ∎
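The specific value of $t_{n}$ in (7.7) is chosen so that the integrated differential inequality exactly cancels the $\log\gamma_{0}$ term. A short numerical check of this algebra (with illustrative values of $\gamma_{0}$, $\epsilon$ and $\delta_{n}$, chosen only to make $t_{n}\geq 0$):

```python
import math

gamma0, eps, delta = 0.3, 0.05, 1e-3  # illustrative values, delta small

# t_n as defined in (7.7); nonnegative once delta is small enough
t_n = -delta * math.log(gamma0) / (eps + delta * math.log(gamma0))
assert t_n >= 0

# log(gamma0) + eps * t_n / ((1 + t_n) * delta) vanishes identically in t_n
lhs = math.log(gamma0) + eps * t_n / ((1 + t_n) * delta)
assert abs(lhs) < 1e-9
print(t_n, lhs)
```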
## Part III Applications to Continuum percolation models
In this part, we fix $d\geq 2,k\geq 1$ and $r>0$. Let $\eta=\\{X_{i}:i\geq
1\\}$ be the stationary Poisson point process in ${\mathbb{R}}^{d}$ of
intensity $\gamma>0$ (identified as usual with its support). We write
${\mathcal{K}}_{r}$ to indicate the space of non-empty compact subsets of
$B_{r}(0)$ equipped with the Hausdorff distance, and we let
${\mathcal{K}}^{b}_{r}:=\\{B_{s}(0):s\in[0,r]\\}$ denote the subset of
${\mathcal{K}}_{r}$ consisting of balls centred at the origin. Let $\mathbb{Q}$ be
a probability measure on compact sets of ${\mathbb{R}}^{d}$ satisfying
$\mathbb{Q}({\mathcal{K}}_{r})=1\,\,\mbox{and}\,\,\mathbb{Q}\\{K:B_{r_{0}}(0)\subset
K\\}>0,$ (8.0)
for two constants $r,r_{0}>0$. The two assumptions, of compact support and of
containing small balls, ensure non-triviality of the percolation phase
transition in our models. The former is a strong assumption, and in the case of unbounded
grains, some of the results could change depending on the distribution of
grain sizes; see [4, 25] for results in the case of balls with unbounded
radii. In Section 10 alone, we shall work with unbounded balls and this
already indicates that the first assumption can be removed with some
additional work under suitable moment assumptions on the size of the grains.
### 8 $k$-Percolation in the Poisson Boolean model
We will now consider the $k$-percolation model. We denote the marked point
process by $\tilde{\eta}=\\{(X_{i},M_{i})\\}_{i\geq 1}$ where $M_{i},i\geq 1$
are i.i.d. compact sets (or also referred to as grains) distributed according
to $\mathbb{Q}$. We shall use symbol $M_{i}$’s for random grains and $K$ for
deterministic grains. The $k$-covered or $k$-occupied region of the Poisson
Boolean model on $\tilde{\eta}$ is defined as
$\mathcal{O}(\gamma):=\mathcal{O}_{k}(\tilde{\eta})=\bigcup_{1\leq
i_{1}<\ldots<i_{k}<\infty}(X_{i_{1}}+M_{i_{1}})\cap\ldots\cap(X_{i_{k}}+M_{i_{k}})$
(8.1)
Apart from being a natural extension of the usual continuum percolation,
$k$-percolation can also be seen as percolation of $(k-1)$-faces in the random
Čech complex on $\eta$ with i.i.d. grains distributed according to
$\mathbb{Q}$ (see [13, Remark 3.8]).
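To make the definition (8.1) concrete, the following sketch (illustrative only: $d=2$, deterministic radius-$r$ grains on the unit square, and coverage evaluated on a grid rather than exactly — none of these choices come from the paper) marks the $k$-covered region as the set of locations lying in at least $k$ translated grains:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, r, k = 50.0, 0.1, 2  # intensity, grain radius, coverage level k

n_pts = rng.poisson(gamma)        # Poisson number of germs X_i in [0,1]^2
pts = rng.random((n_pts, 2))

# Coverage count on a grid: a location lies in O_k iff it is covered by at
# least k of the balls X_i + B_r(0), which matches the union over k-fold
# intersections in (8.1) for deterministic ball grains.
xs = np.linspace(0, 1, 200)
gx, gy = np.meshgrid(xs, xs)
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
cover = (np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2) <= r).sum(axis=1)

k_covered = cover >= k
assert k_covered.sum() <= (cover >= 1).sum()  # O_k is contained in O_1
print("fraction of grid that is k-covered:", k_covered.mean())
```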
We have suppressed the dependence on $k,\mathbb{Q}$ in the above definition as
$\mathbb{Q}$ and $k$ will remain fixed but $\gamma$ will vary. If $k=1$, this
is the classical Boolean model. The study of continuum percolation was
initiated by Gilbert [32], who considered the Poisson Boolean model with balls
of fixed radius; much later, it was extended to random radii with unbounded
support by Hall [37]. We refer the reader to the monograph of Meester and Roy
[51] as well as that of Bollobás and Riordan [15] for detailed accounts of
continuum percolation.
We now define the percolation events and the corresponding probabilities. For
connected subsets $A,B,R\subset{\mathbb{R}}^{d}$ such that $A\cup B\subset R$,
we define
$\displaystyle\\{A\stackrel{{\scriptstyle R}}{{\leftrightarrow}}B\\}$
$\displaystyle:=\\{\mbox{there exists a path in $\mathcal{O}(\gamma)\cap R$
from $x$ to $y$ for some $x\in A,y\in B$}\\},$
$\displaystyle\\{A\leftrightarrow B\\}$
$\displaystyle:=\\{A\stackrel{{\scriptstyle{\mathbb{R}}^{d}}}{{\leftrightarrow}}B\\},$
$\displaystyle\\{0\leftrightarrow\infty\\}$ $\displaystyle:=\\{\mbox{there is
an unbounded path from $0$ in $\mathcal{O}(\gamma)$}\\},$
$\displaystyle\theta_{s}(\gamma)$
$\displaystyle:=\mathbb{P}(0\leftrightarrow\partial B_{s}(0)),s>0,$
$\displaystyle\theta(\gamma)$
$\displaystyle:=\mathbb{P}(0\leftrightarrow\infty),$ $\displaystyle
Arm_{s,t}(\gamma)$ $\displaystyle:=\\{B^{\infty}_{s}(0)\leftrightarrow\partial
B^{\infty}_{t}(0)\\},\,\,0<s<t,$ $\displaystyle Cross_{s,t}(\gamma)$
$\displaystyle:=\\{\\{0\\}\times[0,t]^{d-1}\stackrel{{\scriptstyle[0,s]\times[0,t]^{d-1}}}{{\leftrightarrow}}\\{s\\}\times[0,t]^{d-1}\\},\,\,s,t>0,$
(8.2)
where in the fourth and fifth definitions we have replaced the singleton set
$\\{0\\}$ by $0$ for convenience, and $B^{\infty}_{s}(x)=x\oplus[-s,+s]^{d}$ is
the $\ell_{\infty}$-ball of side-length $2s$ at $x$, where $\oplus$ denotes the
Minkowski sum of sets. $Arm_{s,t}(\gamma)$ is the usual one-arm event and
$Cross_{\kappa t,t}(\gamma)$ is the crossing event. We say that the origin
$k$-percolates if $0\leftrightarrow\infty$ holds and $\theta(\gamma)$ is the
percolation probability.
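The event $\\{0\leftrightarrow\partial B_{s}(0)\\}$ can be estimated by Monte Carlo: for $k=1$ and deterministic radius-$r$ balls, two balls lie in the same occupied component iff their centres are within $2r$, so connectivity reduces to a union-find on the overlap graph. A simplified sketch in $d=2$ (the window, parameter values, and the reduction used here are our illustrative assumptions, not part of the paper's arguments):

```python
import numpy as np

def one_arm_indicator(rng, gamma, r, s):
    """One sample of 1{0 <-> boundary of B_s(0)} for k = 1, radius-r balls."""
    side = s + 2 * r                       # window [-side, side]^2 suffices
    n = int(rng.poisson(gamma * (2 * side) ** 2))
    pts = rng.uniform(-side, side, size=(n, 2))

    parent = list(range(n))
    def find(i):                           # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Balls share an occupied component iff their centres are within 2r.
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    for i in range(n):
        for j in range(i + 1, n):
            if d2[i, j] <= (2 * r) ** 2:
                parent[find(i)] = find(j)

    norms = np.linalg.norm(pts, axis=1)
    at_0 = {find(i) for i in range(n) if norms[i] <= r}     # balls covering 0
    far = {find(i) for i in range(n) if norms[i] >= s - r}  # balls reaching dB_s
    return int(bool(at_0 & far))

rng = np.random.default_rng(2)
theta_hat = np.mean([one_arm_indicator(rng, 1.2, 0.5, 3.0) for _ in range(200)])
print("Monte Carlo estimate of theta_s(gamma):", theta_hat)
assert 0.0 <= theta_hat <= 1.0
```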
We now define the critical intensity for $k$-percolation. Since the event
$\\{0\leftrightarrow\partial B_{n}(0)\\}$ is decreasing in $n$, we have that
$\theta(\gamma)=\lim_{n\to\infty}\theta_{n}(\gamma).$
The critical intensity of the model is defined as
$\gamma_{c}:=\gamma_{c}(\mathbb{Q})=\inf\\{\gamma\geq 0:\theta(\gamma)>0\\}.$
(8.3)
The above definition is justified because $\theta(\gamma)$ is monotonically
increasing in $\gamma$. Further, let $C_{0}$ be the connected component
containing $0$ in $\mathcal{O}(\gamma)$ if $0\in\mathcal{O}(\gamma)$, and
$C_{0}=\emptyset$ otherwise. We define two other critical intensities related to
percolation of $\mathcal{O}(\gamma)$ :
$\displaystyle\hat{\gamma}_{c}:=\sup\\{\gamma\geq
0:\mathbb{E}[|C_{0}|]<\infty\\},\tilde{\gamma}_{c}:=\inf\\{\gamma\geq
0:\inf_{s>0}\mathbb{P}(B_{s}(0)\leftrightarrow\partial B_{2s}(0))>0\\},$ (8.4)
where $|\cdot|$ denotes the Lebesgue measure of a set. Again, the definitions are
justified by the monotonicity in $\gamma$ of the respective quantities. We
have that $\hat{\gamma}_{c}\leq\gamma_{c}$ (using the first equality in (8.5))
and that $\tilde{\gamma}_{c}\leq\gamma_{c}$ follows via a simple monotonicity
argument. While $\hat{\gamma}_{c}$ is a natural notion of critical intensity,
$\tilde{\gamma}_{c}$ is often very useful in initiating renormalization
arguments and was introduced in [34].
Our first main result is that the three critical intensities are equal.
###### Theorem 8.1 (Equality of critical intensities).
Let $\mathcal{O}(\gamma)$ be the $k$-covered region of the Poisson Boolean
model as defined in (8.1) with grain distribution $\mathbb{Q}$ satisfying
the assumptions in (8.0). Then we have that
$\gamma_{c}=\hat{\gamma}_{c}=\tilde{\gamma}_{c}\in(0,\infty)$.
Similar to the Poisson Boolean model, one can associate various other critical
densities using box crossing events, diameter, number of grains in $C_{0}$
(see [51, Sections 3.4 and 3.5]). When $\mathbb{Q}$ is supported on
deterministic bounded balls, equality of some of these critical intensities
is shown in [51, Theorems 3.4 and 3.5] for $k=1$. In [25, Theorem 1.2], this
was extended to the case of unbounded balls with the radius distribution
having a finite moment of order $5d-3$. The above theorem for $k=1$ also follows
straightforwardly from the results of [80, Theorem 3.1]. Similar to [80,
Theorem 3.1], we prove an exponential decay bound for $\theta_{n}(\gamma)$ to
deduce equality of critical intensities. One advantage of our proof is
that it also easily yields a mean-field lower bound in the supercritical
regime.
###### Theorem 8.2 (Sharp threshold for $k$-percolation).
Let $\mathcal{O}(\gamma)$ be the $k$-covered region of the Poisson Boolean
model as defined in (8.1) with grain distribution $\mathbb{Q}$ satisfying
the assumptions in (8.0). Then the critical intensity $\gamma_{c}$ defined in
(8.3) is non-degenerate (i.e., $\gamma_{c}\in(0,\infty)$) and the following
statements hold.
1. (i)
For all $\gamma<\gamma_{c}$, there exists a constant
$\alpha(\gamma)\in(0,\infty)$ such that
$\theta_{s}(\gamma)\leq\exp\\{-\alpha(\gamma)s\\}$ for all $s>0$.
2. (ii)
For any $b>\gamma_{c}$, there exists a constant $\alpha(b)\in(0,\infty)$
such that for all $\gamma\in(\gamma_{c},b)$,
$\theta(\gamma)\geq\alpha(b)(\gamma-\gamma_{c})$.
The above theorem for $k=1$ was also proven in [80, Theorem 3.1] generalizing
the result of [51, Lemma 3.3] which was for the case of $\mathbb{Q}$ supported
on balls. Again, for the case of unbounded balls whose radius distribution has
a finite moment of order $5d-3$, part (ii) was shown to hold in [25, Theorem 1.2].
Further, if the radius distribution has finite exponential moments, part (i)
of the above theorem was shown in [25, Theorem 1.4]. It was also shown in [25,
Theorem 1.5] that radius distributions with slower tail decay can exhibit
different behaviour in the subcritical regime.
Traditionally, proofs of sharp phase transition in percolation models have
relied upon adaptations of Menshikov’s arguments [52, 53]. For example, see
Meester and Roy [51, Sections 3.4 and 3.5] for the sharp phase transition in
subcritical Poisson Boolean models ($k=1$ in our $k$-percolation model). There
is also an independent proof by Aizenman and Barsky [6]. Another, simpler
proof that emerged in recent years is that of Duminil-Copin and Tassion [20].
Their proof establishes the sharp phase transition with respect to a new critical
intensity, defined via the existence of a set containing the origin such
that the expected number of paths exiting the set is strictly bounded above by
$1$. This was adapted to the Poisson Boolean model by Ziesche [80]. Another
proof of sharp phase transition using randomized algorithms was pioneered in
[23] and also applied to continuum models in [24] and [25] via suitable
discretization. Our proof technique based on CTDTs was inspired by these works
and possibly enables a much easier execution of the approach initiated in [23]
for continuum percolation models. Also, observe that the critical intensities
$\gamma_{c}$ are increasing in $k$, and possibly even strictly increasing, so
Theorems 8.1 and 8.2 do not follow from the corresponding results for
$k=1$.
We postpone the proof of Theorem 8.2 to the end of the section and now show
how the proof of Theorem 8.1 follows from Theorem 8.2.
###### Proof of Theorem 8.1.
Observe that by Fubini’s theorem, we have from Theorem 8.2(i) for
$\gamma<\gamma_{c}$
$\mathbb{E}[|C_{0}|]=\int_{{\mathbb{R}}^{d}}\mathbb{P}(0\leftrightarrow
x)\mathrm{d}x\leq\int_{{\mathbb{R}}^{d}}\exp\\{-c_{\gamma}|x|\\}\mathrm{d}x<\infty.$
(8.5)
This with the trivial bound yields that $\hat{\gamma}_{c}=\gamma_{c}$.
For the second part, consider $s>10$ and choose $l_{s}\in{\mathbb{N}}$ and
points $x_{1},\ldots,x_{l_{s}}\in\partial B_{s}(0)$ with $\partial
B_{s}(0)\subset\cup_{i=1}^{l_{s}}B_{1}(x_{i})$. Note that $l_{s}$ can be
chosen such that $l_{s}\leq cs^{d-1}$ for some $c>0$ not depending on $s$. For
$t>0$, define $A_{t}(x):=\\{x\leftrightarrow\partial
B_{2t}(x)\\}\cap\\{\partial B_{2t}(x)\subset\mathcal{O}(\gamma)\\}$, i.e., $x$
is connected to $\partial B_{2t}(x)$ and $\partial B_{2t}(x)$ is contained in
the occupied region (hence in a single component). Observe that $A_{t}(x)$ is an increasing
event for all $t>0$ and also by assumption (8.0), we have that
$\mathbb{P}(A_{t}(x))=\mathbb{P}(A_{t}(0))\geq\mathbb{P}(B_{2t}(0)\subset\mathcal{O}(\gamma))>0$
for all $t>0$. Now, using the union bound, isotropy of the Poisson point
process, monotonicity and positivity of $A_{1/2}(x_{1})$, the Harris-FKG
inequality [49, Theorem 20.4] and the bound on $l_{s}$, we can derive that
$\displaystyle\mathbb{P}(B_{s}(0)\leftrightarrow\partial B_{2s}(0))$
$\displaystyle\leq\mathbb{P}(\cup_{i=1}^{l_{s}}\\{B_{1}(x_{i})\leftrightarrow\partial
B_{2s}(0)\\})\leq\sum_{i=1}^{l_{s}}\mathbb{P}(B_{1}(x_{i})\leftrightarrow\partial
B_{2s}(0))$ $\displaystyle\leq
l_{s}\mathbb{P}(B_{1}(x_{1})\leftrightarrow\partial B_{2s}(0))$
$\displaystyle\leq
l_{s}\mathbb{P}(A_{1/2}(x_{1}))^{-1}\mathbb{P}(\\{B_{1}(x_{1})\leftrightarrow\partial
B_{2s}(0)\\}\cap A_{1/2}(x_{1}))$ $\displaystyle\leq
l_{s}\mathbb{P}(A_{1/2}(0))^{-1}\mathbb{P}(x_{1}\leftrightarrow\partial
B_{2s}(0))\leq\mathbb{P}(A_{1/2}(0))^{-1}l_{s}\theta_{s}(\gamma)$
$\displaystyle\leq c\mathbb{P}(A_{1/2}(0))^{-1}s^{d-1}\theta_{s}(\gamma).$
Now using Theorem 8.2(i) again, we obtain that for all $\gamma<\gamma_{c}$
$\limsup_{s\to\infty}\mathbb{P}(B_{s}(0)\leftrightarrow\partial B_{2s}(0))\leq
c\mathbb{P}(A_{1/2}(0))^{-1}\lim_{s\to\infty}s^{d-1}\theta_{s}(\gamma)=0.$
Thus we have that $\gamma_{c}\leq\tilde{\gamma}_{c}$ and combined with the
trivial inequality in the other direction, this completes the proof. ∎
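The covering bound $l_{s}\leq cs^{d-1}$ used in the proof is elementary; in $d=2$, for example, $\lceil\pi s\rceil$ equally spaced unit balls centred on $\partial B_{s}(0)$ already cover the circle. A small numerical check of this construction (a sketch for $d=2$ only, with arbitrary test radii):

```python
import math

for s in [10.5, 50.0, 200.0]:
    m = math.ceil(math.pi * s)   # number of equally spaced centres on dB_s(0)
    # The farthest a circle point can be from its nearest centre is the chord
    # subtending half the angular spacing: 2 s sin(pi / (2 m)), which is <= 1
    # because m >= pi * s implies sin(pi / (2 m)) <= 1 / (2 s).
    worst = 2 * s * math.sin(math.pi / (2 * m))
    assert worst <= 1.0          # unit balls at the centres cover the circle
    assert m <= 4 * s            # hence l_s <= c * s^{d-1} with d = 2, c = 4
print("covering check passed")
```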
We now give a simple criterion for verifying noise sensitivity and the
existence of infinitely many exceptional times, as well as for determining the
critical window, in the above models.
###### Theorem 8.3 (Noise Sensitivity Criteria).
Let $\mathcal{O}(\gamma)$ be the $k$-covered region of the Poisson Boolean
model as defined in (8.1) with grain distribution $\mathbb{Q}$ satisfying
the assumptions in (8.0). Let $\kappa>0$ be given. Let
$f_{n}:=f_{n}(\gamma)=\mathbf{1}\\{Cross_{\kappa n,n}(\gamma)\\}$ where
$Cross_{\kappa n,n}(\gamma)$ is the crossing event defined in (8.2). Assume
that $\\{f_{n}(\gamma_{c})\\}_{n\geq 1}$ is non-degenerate as in (7.1) and
$\mathbb{P}(Arm_{r,s}(\gamma_{c}))\to 0$ as $s\to\infty$. Then we have the
following:
1. (i)
$\\{f_{n}(\gamma_{c})\\}_{n\geq 1}$ is noise sensitive.
2. (ii)
$\\{f_{n}(\gamma_{c})\\}_{n\geq 1}$ has an infinite set of exceptional times
in $[0,1]$, i.e., $|S_{n}\cap[0,1]|\overset{\mathbb{P}}{\longrightarrow}\infty$,
where the set of exceptional times $S_{n}$ is as defined below (7.1).
3. (iii)
Further, assume that there exists $c_{arm}>0$ such that for all $s>2r$, the
following holds:
$\mathbb{P}(Arm_{r,s}(\gamma_{c}))\leq Cs^{-c_{arm}}.$
Let $c_{n}=n^{-\frac{c_{arm}}{2}+\epsilon}$ for some $\epsilon>0$. Then, we
have that $\mathbb{E}[f_{n}((1-c_{n})\gamma_{c})]\to 0$ and
$\mathbb{E}[f_{n}((1+c_{n})\gamma_{c})]\to 1.$
Before presenting a corollary of the above to the standard planar Poisson
Boolean model, we make some remarks about the theorem.
###### Remark 8.4.
1. 1.
Using Theorem 7.4 one can actually prove that the conclusion of Theorem
8.3-(iii) continues to hold for sequences of the type
$c_{n}=n^{-c_{arm}+\epsilon}$, with $\epsilon>0$. In order to obtain such an
improvement, one has to prove that the randomized stopping set $Z_{n}$
determining $f_{n}$, as appearing in the proof, can be attained by a CTDT
satisfying the assumptions in Theorem 4.2. This fact can be checked by a
recursive construction similar to the one performed at the end of the
forthcoming proof of Theorem 8.2. Details are omitted.
2. 2.
We can use $\theta_{s}(\gamma_{c})\to 0$ as $s\to\infty$ in the proof instead
of $\mathbb{P}(Arm_{r,s}(\gamma_{c}))\to 0$ as $s\to\infty$ and in item (iii),
we can use that $\theta_{s}(\gamma_{c})\leq Cs^{-c_{arm}}.$ We can argue this
as follows: set $A^{\infty}_{r}(x):=\\{x\leftrightarrow\partial
B^{\infty}_{r}(x)\\}\cap\\{\partial
B^{\infty}_{r}(x)\subset\mathcal{O}(\gamma)\\}$ and note that for all $x$, we
have that $\mathbb{P}(A^{\infty}_{r}(x))=\mathbb{P}(A^{\infty}_{r}(0))>0$.
Since $A^{\infty}_{r}(x)$ and $\\{B^{\infty}_{r}(0)\leftrightarrow\partial B^{\infty}_{s}(0)\\}$ are increasing
events, by the Harris-FKG inequality for the Poisson point process [49, Theorem 20.4],
we derive that
$\displaystyle\mathbb{P}(Arm_{r,s}(\gamma_{c}))$
$\displaystyle\leq\mathbb{P}(A^{\infty}_{r}(0))^{-1}\mathbb{P}(A^{\infty}_{r}(0)\cap\\{B^{\infty}_{r}(0)\leftrightarrow\partial
B^{\infty}_{s}(0)\\})$
$\displaystyle\leq\mathbb{P}(A^{\infty}_{r}(0))^{-1}\mathbb{P}(0\leftrightarrow\partial
B^{\infty}_{s}(0))$
$\displaystyle\leq\mathbb{P}(A^{\infty}_{r}(0))^{-1}\mathbb{P}(0\leftrightarrow\partial
B_{s}(0))$
$\displaystyle\leq\mathbb{P}(A^{\infty}_{r}(0))^{-1}\theta_{s}(\gamma_{c}).$
3. 3.
The above theorem reduces the proof of noise sensitivity, exceptional
times and sharp phase transition for crossing events at criticality in the
$k$-percolation model to showing non-degeneracy of the crossing events and
bounds on one-arm probabilities or non-percolation at criticality. Showing
these properties is a separate percolation-theoretic question, often
model-specific; in the planar case, they have been achieved in some models via
RSW-type estimates (see [66, 67, 7, 74, 4, 42, 46]). See Corollary 8.5 below
for a case in which the above estimates are known.
4. 4.
Another case in which it is known that $\theta_{s}(\gamma_{c})\to 0$ as
$s\to\infty$ is for $k=1$, deterministic balls and large $d$ (see [39, Section
6]).
5. 5.
It was suggested in [2, Section 8] that one may use the Schramm-Steif
quantitative noise sensitivity result on the Boolean hypercube together with
discretization to obtain noise sensitivity exponents. As illustrated in the
above theorem, the Poisson analogue of Schramm-Steif inequality (Corollary 5.3
that we apply via Proposition 6.3) helps to achieve these goals by using
stopping sets and without resorting to discretization. Furthermore, this makes
the results more easily applicable and also gives a transparent way to
quantify the noise sensitivity exponents in terms of the exponent $c_{arm}$ in
arm-event probabilities.
For the case of the planar Poisson Boolean model (i.e., $k=1,d=2$) with grains
supported on balls centred at the origin, the assumptions of the above theorem
follow immediately from [4, Theorems 1.1(ii) and 1.3(i)] and hence we derive
the following corollary easily from Theorem 8.3.
###### Corollary 8.5.
Consider the $1$-covered region $\mathcal{O}_{1}(\tilde{\eta})$ of the Poisson
Boolean model on ${\mathbb{R}}^{2}$ (i.e., $k=1,d=2$ in Theorem 8.3) as
defined in (8.1) with grain distribution $\mathbb{Q}$ supported on
${\mathcal{K}}^{b}_{r}$ and satisfying the assumptions in (8.0). Then, for
$f_{n}:=f_{n}(\gamma_{c})$ as defined in Theorem 8.3, the conclusions of
Theorem 8.3 hold.
The above corollary settles [2, Conjecture 9.1] and our Theorem 8.3 gives
percolation theoretic criteria to prove [2, Conjecture 9.2 and Question 1].
See the discussion at the end of Section 1.3 for a comparison with the approach
of [2].
###### Proof of Theorem 8.3.
We shall assume $\gamma=\gamma_{c}$ and fix a $\kappa>0$ in our proof. Let
$R_{n}=[0,\kappa n]\times[0,n]^{d-1}$ be the rectangle to be crossed. Our
proof proceeds by constructing a randomized stopping set
$\tilde{Z}_{n}:=Z_{n}\times{\mathcal{K}}_{r}\subset
R^{+}_{n}\times{\mathcal{K}}_{r},n\geq 10r$ with $R^{+}_{n}=R_{n}\oplus
B_{r}(0)$ as follows: for every $s\in(0,\kappa n)$, we shall construct a
stopping set $\tilde{Z}_{n}^{s}=Z_{n}^{s}\times{\mathcal{K}}_{r}$ such that
for all $x=(x_{1},\ldots,x_{d})\in R_{n}$,
$\displaystyle\mathbb{P}((x,K)\in\tilde{Z}_{n}^{s})$
$\displaystyle=\mathbb{P}(x\in Z_{n}^{s})$
$\displaystyle\leq\mathbf{1}\\{|s-x_{1}|\leq
r\\}+\mathbb{P}(Arm_{r,|s-x_{1}|}(\gamma_{c}))\mathbf{1}\\{|s-x_{1}|>r\\}$
$\displaystyle\leq\mathbb{P}(Arm_{r,|s-x_{1}|}(\gamma_{c})),$ (8.6)
where the last inequality is by setting the convention that
$\mathbb{P}(Arm_{r,s}(\gamma_{c}))=1$ for $s\leq r$. Now, we choose our randomized
stopping set by first choosing $Y$ uniformly at random in $(0,\kappa n)$ and
then setting
$Z_{n}=Z_{n}^{Y},\tilde{Z}_{n}=\tilde{Z}_{n}^{Y}=Z_{n}^{Y}\times{\mathcal{K}}_{r}$.
Thus, we obtain that
$\displaystyle\mathbb{P}((x,K)\in\tilde{Z}_{n})$
$\displaystyle=\mathbb{P}(x\in Z_{n})\leq(\kappa n)^{-1}\int_{0}^{\kappa
n}\mathbb{P}(Arm_{r,|s-x_{1}|}(\gamma_{c}))\mathrm{d}s$ $\displaystyle\leq
2(\kappa n)^{-1}\int_{0}^{\kappa
n}\mathbb{P}(Arm_{r,s}(\gamma_{c}))\mathrm{d}s.$
Thus, we derive that
$\delta_{n}:=\delta(\tilde{Z}_{n})\leq 2(\kappa n)^{-1}\int_{0}^{\kappa
n}\mathbb{P}(Arm_{r,s}(\gamma_{c}))\mathrm{d}s$
and since $\mathbb{P}(Arm_{r,s}(\gamma_{c}))\to 0$ as $s\to\infty$, we have
that $\delta_{n}\to 0$ by l’Hôpital’s rule. Note that $f_{n}$ is jump regular,
as the intensity measure of $\tilde{\eta}\cap((R_{n}\oplus
B_{r}(0))\times{\mathcal{K}}_{r})$ is finite, and $f_{n}$ is also monotonic.
Now all the three items in the theorem follow from Proposition 6.3, Corollary
7.2 and Theorem 7.3 respectively.
Thus, we are left with construction of a stopping set $\tilde{Z}_{n}^{s}$ as
above or equivalently $Z_{n}^{s}$ with revealment probability as in (8.6).
Given $s\in(0,\kappa n)$, let $L=\\{s\\}\times[0,n]^{d-1}$ and let $S$ be the
union of the connected components of $\mathcal{O}(\tilde{\eta})\cap R_{n}$
that intersect $L$. Note that $S=\emptyset$ if no such components exist.
Define $Z_{n}^{s}:=(S\cup L)\oplus B_{r}(0)$.
Now we show that $Z_{n}^{s}$ is a stopping set in ${\mathbb{R}}^{d}$ as in
Remark A.7. Let $x=(x_{1},\ldots,x_{d})\in{\mathbb{R}}^{d}$. If
$B_{r}(x)\cap L\neq\emptyset$ (or equivalently $|x_{1}-s|\leq r$), then $x\in
Z_{n}^{s}$. Otherwise, suppose $x\in Z_{n}^{s}$ and $B_{r}(x)\cap L=\emptyset$, i.e.,
$|x_{1}-s|>r$. Then we have that $B_{r}(x)\cap S\neq\emptyset$, which is
equivalent to $B_{r}(x)\stackrel{{\scriptstyle R_{n}}}{{\leftrightarrow}}L$.
Thus, we derive that
$\mathbf{1}\\{x\in Z_{n}^{s}\\}=\mathbf{1}\\{B_{r}(x)\cap
L\neq\emptyset\\}+\mathbf{1}\\{B_{r}(x)\cap
L=\emptyset\\}\mathbf{1}\\{B_{r}(x)\stackrel{{\scriptstyle
R_{n}}}{{\leftrightarrow}}L\\}.$ (8.7)
The above identity yields graph-measurability of $Z_{n}^{s}$. Observe that $S$
as defined above is, written more explicitly, $S(\tilde{\eta})$. Set
$S^{\prime}(\tilde{\eta})=S(\tilde{\eta})\oplus B_{r}(0)$. Then
$S(\tilde{\eta})=S(\tilde{\eta}_{S^{\prime}\times{\mathcal{K}}_{r}})$ because
$(x,K)\in(S^{\prime})^{c}\times{\mathcal{K}}_{r}$ implies that $(x+K)\cap
S=\emptyset$. Thus,
$S^{\prime}(\tilde{\eta})=S^{\prime}(\tilde{\eta}_{S^{\prime}\times{\mathcal{K}}_{r}})$.
In the same way, we can also deduce that
$S^{\prime}(\tilde{\eta}_{S^{\prime}\times{\mathcal{K}}_{r}})=S^{\prime}(\tilde{\eta}_{S^{\prime}\times{\mathcal{K}}_{r}}+\psi_{(S^{\prime})^{c}\times{\mathcal{K}}_{r}})$
for any $\psi\in{\mathbf{N}}({\mathbb{R}}^{d}\times{\mathcal{K}}_{r})$. This
verifies (A.1) and shows that $Z_{n}^{s}$ is a stopping set as in Remark A.7.
We obtain from the above arguments and Remark A.8 that
$\tilde{Z}_{n}^{s}=Z_{n}^{s}\times{\mathcal{K}}_{r}$ is a stopping set in
${\mathbb{R}}^{d}\times{\mathcal{K}}_{r}$ as required. Further, it is easy to
see that $\tilde{Z}_{n}^{s}=Z_{n}^{s}\times{\mathcal{K}}_{r}$ determines
$f_{n}$ as well. Now, we will derive bounds on the revealment probability of
$Z_{n}^{s}$. For $x\in{\mathbb{R}}^{d}$ with $B_{r}(x)\cap L=\emptyset$, using (8.7), we
have that
$\mathbb{P}(x\in Z_{n}^{s})\leq\mathbb{P}(B_{r}(x)\cap
S\neq\emptyset)\leq\mathbb{P}(B_{r}(x)\stackrel{{\scriptstyle
R_{n}}}{{\leftrightarrow}}L)\leq\mathbb{P}(Arm_{r,|s-x_{1}|}(\gamma_{c})).$
Thus, we have shown (8.6) and the proof is complete. ∎
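The final step of the proof averages the one-arm bound over the uniformly chosen line position $Y$, giving $\delta_{n}\leq 2(\kappa n)^{-1}\int_{0}^{\kappa n}\mathbb{P}(Arm_{r,s}(\gamma_{c}))\,\mathrm{d}s$. Under the polynomial decay assumed in item (iii) of Theorem 8.3, this Cesàro average inherits the same polynomial order. A numerical sketch (the constants $C$, $c_{arm}$, $\kappa$ below are hypothetical, not from the paper):

```python
import math

# Hypothetical one-arm bound P(Arm_{r,s}) <= min(1, C * s^{-c_arm}).
C, c_arm, kappa = 2.0, 0.5, 1.0

def arm_prob(s):
    return min(1.0, C * s ** (-c_arm)) if s > 0 else 1.0

def revealment_bound(n, steps=200_000):
    """Midpoint rule for 2 (kappa n)^{-1} * integral_0^{kappa n} P(Arm) ds."""
    h = kappa * n / steps
    return (2.0 / (kappa * n)) * sum(arm_prob((i + 0.5) * h) for i in range(steps)) * h

d1, d2 = revealment_bound(10_000), revealment_bound(40_000)
# Averaging an s^{-c_arm} tail gives delta_n of order n^{-c_arm}: quadrupling
# n should multiply the bound by roughly 4^{-c_arm} = 1/2 here.
assert d2 < d1
assert abs(d2 / d1 - 4 ** (-c_arm)) < 0.05
print(d1, d2, d2 / d1)
```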
We now conclude the section with the proof of Theorem 8.2.
###### Proof of Theorem 8.2.
Let $\mathbb{Q}_{s}:=\delta_{B_{s}(0)}$ denote the probability distribution on
grains supported on a deterministic ball of radius $s$. By the scaling
property of the Poisson process, $k$-percolation in the Poisson Boolean model
with intensity $\gamma$ and grain distribution $\mathbb{Q}_{s}$ is equivalent
to $k$-percolation in the Poisson Boolean model with intensity $1$ and grain
distribution $\mathbb{Q}_{\gamma^{1/d}s}$. Now, using [13, Corollary 1.3] and
the above scaling property, we have that
$\gamma_{c}(\mathbb{Q}_{s})\in(0,\infty)$ for any $s\in(0,\infty)$. Since
$\mathbb{Q}$ satisfies assumption (8.0), we have that
$\gamma_{c}(\mathbb{Q})\geq\gamma_{c}(\mathbb{Q}_{r})>0$. Further, let
$\beta=\mathbb{Q}\\{K:B_{r_{0}}(0)\subset K\\}$ and by assumption (8.0),
$\beta>0$. Let $\mathcal{O}^{\prime}$ be the $k$-occupied region defined on a
Poisson point process with intensity $\beta\gamma$ and grain distribution
$\mathbb{Q}_{r_{0}}$. By the Poisson thinning property [49, Theorem 5.8], we
have that $\mathcal{O}^{\prime}\subset\mathcal{O}(\gamma)$ and hence
$\gamma_{c}(\mathbb{Q})\leq\beta^{-1}\gamma_{c}(\mathbb{Q}_{r_{0}})<\infty$. Thus
$\gamma_{c}(\mathbb{Q})\in[\gamma_{c}(\mathbb{Q}_{r}),\beta^{-1}\gamma_{c}(\mathbb{Q}_{r_{0}})]\subset(0,\infty)$,
and we have shown the non-degeneracy of $\gamma_{c}(\mathbb{Q})$.
Further, $\mathcal{O}_{k}(\tilde{\eta})\subset\mathcal{O}_{1}(\tilde{\eta})$
for all $k\geq 1$ and hence by the exponential decay result for
$\mathcal{O}_{1}(\tilde{\eta})$ in [80, Theorem 3.1], we have that there
exists $a>0$ such that $\theta_{s}(\gamma)$ decays exponentially for
$\gamma<a$. Trivially $a<\gamma_{c}$ and we shall fix such an $a$ in the rest
of the proof.
Assume $n\geq 10r$ and choose $b\in(\gamma_{c},\infty)$. Similar to the
proof of Theorem 8.3, we will define a randomized stopping set $Z_{n}$ in
${\mathbb{R}}^{d}$ that determines $\mathbf{1}\\{0\leftrightarrow\partial
B_{n}(0)\\}$ such that, for some constant $C_{a}$ and all
$\gamma\in[a,b]$, the following bound on the revealment probability holds:
$\delta_{n}=\max_{x\in{\mathbb{R}}^{d}}\mathbb{P}(x\in Z_{n})\leq
C_{a}n^{-1}\int_{0}^{n}\theta_{s}(\gamma)\mathrm{d}s.$ (8.8)
Further, we will show that $Z_{n}$ can be constructed via randomized CTDT as
in Theorem 4.2. Then using Theorem 4.2, monotonicity of
$\mathbf{1}\\{0\leftrightarrow\partial B_{n}(0)\\}$ and the Russo-Margulis
formula for Poisson functionals (see [49, Theorem 19.4]), we derive that
$\displaystyle\theta_{n}(\gamma)(1-\theta_{n}(\gamma))$
$\displaystyle\leq\gamma\int_{{\mathbb{R}}^{d}\times{\mathcal{K}}_{r}}\mathbb{P}(x\in
Z_{n})\mathbb{E}[D_{(x,K)}\mathbf{1}\\{0\leftrightarrow\partial
B_{n}(0)\\}]\,\mathbb{Q}(dK)\,\mathrm{d}x$
$\displaystyle\leq\gamma\delta_{n}\int_{B_{n+2r}(0)\times{\mathcal{K}}_{r}}\mathbb{E}[D_{(x,K)}\mathbf{1}\\{0\leftrightarrow\partial
B_{n}(0)\\}]\,\mathbb{Q}(dK)\,\mathrm{d}x$
$\displaystyle=\gamma\delta_{n}\frac{\mathrm{d}\theta_{n}(\gamma)}{\mathrm{d}\gamma}=\gamma\delta_{n}\theta^{\prime}_{n}(\gamma).$
Thus, using the bound for $\delta_{n}$, we obtain for $\gamma\in[a,b]$ the
differential inequality
$\theta^{\prime}_{n}(\gamma)\geq\frac{n}{C_{a}b\int_{0}^{n}\theta_{s}(\gamma)\mathrm{d}s}\theta_{n}(\gamma)(1-\theta_{n}(\gamma))\geq\frac{n}{C_{a}b\int_{0}^{n}\theta_{s}(\gamma)\mathrm{d}s}\theta_{n}(\gamma)(1-\theta_{1}(b)),$
where we have used that $\theta_{s}(\gamma)$ is increasing in $\gamma$ and
decreasing in $s$. Now using a straightforward variant of [23, Lemma 3.1] (see
also [25, Proof of Theorem 1.2]), we obtain that there exists
$\gamma_{1}\in[a,b]$ such that
* •
For $\gamma\in[a,\gamma_{1})$,
$\theta_{n}(\gamma)\leq\exp\\{-\alpha(\gamma)n\\}$ for all $n\geq 1$ and some
constant $\alpha(\gamma)\in(0,\infty)$.
* •
For $\gamma\in[\gamma_{1},b]$, there exists a constant
$\alpha(b)\in(0,\infty)$ such that
$\theta(\gamma)\geq\alpha(b)(\gamma-\gamma_{1})$.
Since $a<\gamma_{c}<b$ by construction, and since the previous properties imply
that $\theta(\gamma)=0$ for $\gamma\in(a,\gamma_{1})$ and $\theta(\gamma)>0$
for $\gamma\in(\gamma_{1},b)$, we conclude that, necessarily,
$\gamma_{1}=\gamma_{c}$. Thus, both claims in the theorem follow.
All that remains to complete the proof is to show that $Z_{n}$ can be
constructed via a randomized CTDT satisfying suitable assumptions. First, we
will describe the randomized stopping set $Z_{n}$, then show that it satisfies
the required revealment probability and finally construct it as a randomized
CTDT satisfying the necessary assumptions.
Fix $s\in(0,n)$. We now define $Z^{s}_{n}$. Let $S$ be the union of connected
components of $\mathcal{O}(\gamma)\cap B_{n}(0)$ that intersect $\partial
B_{s}(0)$. Note that $S=\emptyset$ if there is no connected component of
$\mathcal{O}(\gamma)\cap B_{n}(0)$ that intersects $\partial B_{s}(0)$.
Observe that the event $\\{0\leftrightarrow\partial B_{n}(0)\\}$ is determined
by $\tilde{\eta}\cap(B_{n+r}(0)\times{\mathcal{K}}_{r})$ and hence we restrict
to this set. Define $Z^{s}_{n}=(S\cup\partial B_{s}(0))\oplus B_{2r}(0)$ and
$A_{r}(x):=\\{x\leftrightarrow\partial B_{2r}(x)\\}\cap\\{\partial
B_{2r}(x)\subset\mathcal{O}(\gamma)\\}$ similar to $A^{\infty}_{r}(x)$ defined
in Remark 8.4(i). Further, reasoning as in Remark 8.4(i), we multiply and
divide by $\mathbb{P}(A_{r}(x))$ in the second term, then use the Harris-FKG
inequality and stationarity of the Poisson point process to derive that
$\displaystyle\mathbb{P}(x\in Z^{s}_{n})$
$\displaystyle\leq\mathbf{1}\\{||x|-s|\leq
2r\\}+\mathbf{1}\\{||x|-s|>2r\\}\mathbb{P}(B_{2r}(x)\leftrightarrow\partial
B_{s}(0))$ $\displaystyle\leq\mathbf{1}\\{||x|-s|\leq
2r\\}+\mathbf{1}\\{||x|-s|>2r\\}\mathbb{P}(A_{r}(0))^{-1}\mathbb{P}(x\leftrightarrow\partial
B_{s}(0))$ $\displaystyle\leq\mathbf{1}\\{||x|-s|\leq
2r\\}+\mathbf{1}\\{||x|-s|>2r\\}\mathbb{P}(A_{r}(0))^{-1}\mathbb{P}(x\leftrightarrow\partial
B_{|s-|x||}(x)).$ (8.9)
Set $C^{\prime}_{\gamma}=\mathbb{P}(A_{r}(0))$; since $C^{\prime}_{\gamma}$
is increasing in $\gamma$, we have $\inf_{\gamma\geq
a}C^{\prime}_{\gamma}=C^{\prime}_{a}>0$. Also, $\inf_{\gamma\geq
a}\inf_{s\leq 2r}\theta_{s}(\gamma)\geq\theta_{2r}(a)>0$, and so from (8.9) we
obtain that, for some constant $C_{a}$,
$\displaystyle\mathbb{P}(x\in Z_{n}^{s})$
$\displaystyle\leq\mathbf{1}\\{||x|-s|\leq
2r\\}\theta_{2r}(a)^{-1}\theta_{|s-|x||}(\gamma)+\mathbf{1}\\{||x|-s|>2r\\}(C^{\prime}_{a})^{-1}\theta_{|s-|x||}(\gamma)$
$\displaystyle\leq C_{a}\theta_{|s-|x||}(\gamma).$
Thus, choosing $Y$ uniformly at random in $(0,n)$ and setting
$Z_{n}=Z_{n}^{Y}$, we obtain that
$\mathbb{P}(x\in Z_{n})\leq
C_{a}n^{-1}\int_{0}^{n}\theta_{|s-|x||}(\gamma)\mathrm{d}s\leq
2C_{a}n^{-1}\int_{0}^{n}\theta_{s}(\gamma)\mathrm{d}s.$
This shows that our randomized stopping set satisfies the revealment bound
required in (8.8). Now, we only need to show that it can be constructed as a
randomized CTDT satisfying the assumptions in Theorem 4.2.
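The averaging step behind (8.8) uses the elementary change-of-variables fact that, for a decreasing profile $\theta$ and any $|x|\leq n$, $\int_{0}^{n}\theta(|s-|x||)\,\mathrm{d}s\leq 2\int_{0}^{n}\theta(s)\,\mathrm{d}s$. The following Python sketch checks this numerically; the profile $\theta(s)=e^{-s}$ is only an illustrative stand-in for the model's one-arm probability $\theta_{s}(\gamma)$.

```python
# Numerical check of the averaging bound used for the revealment:
#   int_0^n theta(|s - x|) ds = int_0^x theta(u) du + int_0^{n-x} theta(u) du
#                            <= 2 * int_0^n theta(u) du
# for a decreasing profile theta. Here theta(s) = exp(-s) is an
# illustrative stand-in for the one-arm probability theta_s(gamma).
import math

def riemann(f, a, b, steps=50000):
    # midpoint Riemann sum of f over [a, b]
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

theta = lambda s: math.exp(-s)
n = 10.0
for x in [0.0, 2.5, 5.0, 9.9]:
    lhs = riemann(lambda s: theta(abs(s - x)), 0.0, n)
    split = riemann(theta, 0.0, x) + riemann(theta, 0.0, n - x)
    rhs = 2.0 * riemann(theta, 0.0, n)
    assert abs(lhs - split) < 1e-5   # substitution u = |s - x|
    assert lhs <= rhs + 1e-9
print("averaging bound verified")
```

Randomizing the radius $s$ uniformly over $(0,n)$, as in the definition of $Z_{n}$, then turns the left-hand side into the average $n^{-1}\int_{0}^{n}\theta_{|s-|x||}(\gamma)\,\mathrm{d}s$ appearing in the revealment bound.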
We will first describe a sequence of stopping sets in ${\mathbb{R}}^{d}$ and
then describe how to build a CTDT on ${\mathbb{R}}^{d}$ from the same. Let
$s\in(0,n)$.
* •
Set $S_{0}=\emptyset,E_{0}=\partial B_{s}(0)$.
* •
Set $E_{1}=(S_{0}\cup E_{0})\oplus B_{2r}(0)$ and $S_{1}$ is the union of
components of
$\mathcal{O}_{k}(\tilde{\eta}\cap(E_{1}\times{\mathcal{K}}_{r}))\cap
B_{n+r}(0)$ intersecting $E_{0}$.
* •
Given $(S_{i},E_{i})$ for all $1\leq i\leq m\in{\mathbb{N}}$, we construct
$(S_{m+1},E_{m+1})$ as follows:
* –
If $S_{m}=S_{m-1}$, then the algorithm terminates. In this case,
$\mathbf{1}\\{0\leftrightarrow\partial B_{n}(0)\\}=1$ if there is a path in
$S_{m}$ from $0$ to $\partial B_{n}(0)$, and
$\mathbf{1}\\{0\leftrightarrow\partial B_{n}(0)\\}=0$ otherwise. Further, set
$E_{\infty}=E_{m}$.
* –
If $S_{m}\neq S_{m-1}$, set $E_{m+1}=(S_{m}\cup E_{0})\oplus B_{2r}(0)$ and
let $S_{m+1}$ be the union of components of
$\mathcal{O}_{k}(\tilde{\eta}\cap(E_{m+1}\times{\mathcal{K}}_{r}))\cap
B_{n+r}(0)$ intersecting $S_{m}$.
Observe that if the algorithm terminates after $m$ steps, then $S_{m}=S$,
where $S$ is the union of connected components of $\mathcal{O}(\gamma)\cap
B_{n+r}(0)$ that intersect $\partial B_{s}(0)$ (and $S=\emptyset$ if no such
component exists). Thus, by the definition of $E_{\infty}$, we also obtain
that $E_{\infty}=(S\cup\partial B_{s}(0))\oplus B_{2r}(0)$.
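To make the $(S_{m},E_{m})$ dynamics concrete, here is a toy Python sketch of the same fixed-point iteration for a finite configuration of ball grains; the ball-grain simplification and the helper names are our illustrative assumptions, not the paper's general construction. Each round reveals only the grains lying within the $2r$-enlargement of the current cluster, and the loop stops exactly when $S_{m}=S_{m-1}$.

```python
# Toy analogue of the (S_m, E_m) exploration: grains are planar balls of
# radius r centred at the given points (an illustrative simplification of
# the general grains). Starting from the grains meeting the seed sphere
# of radius s, each round reveals the grains whose centre lies in
# E_{m+1} = (S_m union seed) + B_{2r}(0), and stops when S_m = S_{m-1}.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def explore(centres, r, s, n):
    """Return (cluster, connected): cluster is the index set of grains
    attached to the sphere of radius s; connected indicates 0 <-> dB_n(0)."""
    norm = lambda p: math.hypot(p[0], p[1])
    S = {i for i, c in enumerate(centres) if abs(norm(c) - s) <= r}
    while True:
        new = {j for j in range(len(centres)) if j not in S
               and any(dist(centres[j], centres[i]) <= 2 * r for i in S)}
        if not new:           # S_m = S_{m-1}: the algorithm terminates
            break
        S |= new
    covers_origin = any(norm(centres[i]) <= r for i in S)
    reaches_shell = any(abs(norm(centres[i]) - n) <= r for i in S)
    return S, covers_origin and reaches_shell

# a chain of overlapping balls from the origin out to radius 6 is revealed,
# while the isolated grain at (0, 5) is never touched by the exploration
chain = [(0.0, 0.0), (1.5, 0.0), (3.0, 0.0), (4.5, 0.0), (6.0, 0.0), (0.0, 5.0)]
cluster, connected = explore(chain, r=1.0, s=2.0, n=6.0)
```

The key point mirrored here is that the final cluster equals the full cluster of the seed sphere, yet each round only queries grains near the current frontier, which is what keeps the revealment probability of any fixed point small.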
Now we construct the CTDT. For $t\in(m,m+1)$ such that $S_{m}\neq S_{m-1}$,
set $Z^{s}_{n,t}=(S_{m}\cup\partial B_{s}(0))\oplus B_{2r(t-m)}(0)$. Thus, we
have that $Z^{s}_{n}=Z^{s}_{n,\infty}=E_{\infty}$ as required. The graph
measurability and stopping set property (see Remark A.7) of $Z^{s}_{n,t},t\geq
0$ can be argued as in the proof of Theorem 8.3. We now verify the other
properties of the CTDT.
Observe that for all $t\in(m,m+1]$ and any $\epsilon\in(0,t-m)$,
$Z^{s}_{n,t}\setminus
Z^{s}_{n,t-\epsilon}\subset\\{x\in{\mathbb{R}}^{d}:d(x,S_{m}\cup\partial
B_{s}(0))\in(2r(t-\epsilon),2r(t-m)]\\}.$
Also note that $Z^{s}_{n,0}=\partial B_{s}(0)$ has zero measure; using the
above observation, the remaining properties of a CTDT (namely (2.1), (2.2))
can be verified for $Z^{s}_{n,t}$. Next, (4.4) holds trivially by the above
observation and since the intensity measure of $\tilde{\eta}$ is
$\gamma\,\mathrm{d}x\,\mathbb{Q}(\mathrm{d}K)$, a diffuse measure. Finally,
because the intensity measure of $\tilde{\eta}$ is diffuse and by the
observation on $Z^{s}_{n,t}\setminus Z^{s}_{n,t-\epsilon}$, we have that
$\\{|X_{i}|\\}_{i\geq 1}$ is a simple point process, which verifies (4.5).
Since $f_{n}(\tilde{\eta})$ is a function of
$\tilde{\eta}\cap(B_{n+2r}(0)\times{\mathcal{K}}_{r})$, assumption
(3.3) holds because of Remark 3.1.
This completes the proof as we have constructed a CTDT satisfying the
assumptions of Theorem 4.2 and having revealment probability as given in
(8.8). ∎
### 9 Confetti Percolation
The confetti percolation model has its origins in the dead leaves model
introduced by Matheron [50]. Various geometric properties of the dead leaves
model have been studied. For example, see [69, 44, 14]. The percolation-
theoretic version of the model called confetti percolation was introduced by
Benjamini and Schramm in [10]. Since then, this model has been investigated in
many works [40, 56, 4, 31]. Although the dead leaves model is defined with
general grains and in arbitrary dimensions, studies of the confetti
percolation model have focused on random balls or squares (i.e.,
$\ell_{\infty}$ balls) in two dimensions. We shall consider the general
framework of arbitrary grains in arbitrary dimensions and prove a sharp phase
transition therein.
Consider a Poisson point process $\tilde{\eta}$ on
$\tilde{{\mathbb{X}}}={\mathbb{R}}^{d}\times{\mathbb{R}}_{+}\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}\times\\{0,1\\}$
with intensity measure
$\tilde{\gamma}(\mathrm{d}(x,t,K_{1},K_{2},a))=\mathrm{d}x\times\mathrm{d}t\times\mathbb{Q}_{1}(\mathrm{d}K_{1})\times\mathbb{Q}_{2}(\mathrm{d}K_{2})\times\nu_{p}(a)$
where
$x\in{\mathbb{R}}^{d},t\in{\mathbb{R}}_{+},K_{1},K_{2}\in{\mathcal{K}}_{r},a\in\\{0,1\\}$
and $\nu_{p}=p\delta_{0}+(1-p)\delta_{1}$ is the Bernoulli distribution on
$\\{0,1\\}$ with mass $p$ at $0$. The interpretation is that $x$ denotes the
location of the particle, $t$ its arrival time, and $a$ determines the colour
of the particle (black if $a=0$ and white otherwise); $K_{1}$ is the grain
attached to black particles and $K_{2}$ is the grain attached to white
particles. The grain falls at $x$ at time $t$, with its colour determined by
$a$ and the grain chosen accordingly. Each point in space is coloured
according to the first grain that covers it, and the question of interest is
percolation of the black region. We now define this more formally using the
framework of [40, Section 2]. We assume throughout this section that
$\mathbb{Q}_{1},\mathbb{Q}_{2}$ satisfy the assumptions in (8.0).
We define the black region as
$\mathcal{O}_{p}:=\mathcal{O}_{p}(\tilde{\eta})=\mathcal{O}_{p}(\mathbb{Q}_{1},\mathbb{Q}_{2})=\bigcup_{(x,t,K_{1},K_{2},0)\in\tilde{\eta}}(x+K_{1})\setminus\\{\bigcup_{(x^{\prime},t^{\prime},K^{\prime}_{1},K^{\prime}_{2},1)\in\tilde{\eta},t^{\prime}<t}(x^{\prime}+K^{\prime}_{2})\\}.$
(9.1)
The white region $\mathcal{V}_{p}$ can be defined analogously; alternatively,
since $\tilde{\eta}$ is a space-time Poisson point process and
$\mathbb{Q}_{1},\mathbb{Q}_{2}$ satisfy the assumptions in (8.0), every point
has to be coloured black or white, and hence
$\mathcal{V}_{p}={\mathbb{R}}^{d}\setminus\mathcal{O}_{p}$ is the white
region.
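The first-arrival colouring behind (9.1) can be sketched in a few lines of Python; here grains are taken to be balls of radii $r_{1}$ (black) and $r_{2}$ (white), a simplifying assumption standing in for general $\mathbb{Q}_{1},\mathbb{Q}_{2}$-distributed grains.

```python
# Dead-leaves / confetti colouring: among all particles (x, t, r1, r2, a)
# whose grain covers the query point z, the one with the smallest arrival
# time t decides the colour (black if a = 0, white if a = 1). Ball grains
# of radii r1, r2 are an illustrative stand-in for general grains.
import math

def colour_at(z, particles):
    covering = []
    for (x, t, r1, r2, a) in particles:
        radius = r1 if a == 0 else r2   # black grain K1, white grain K2
        if math.dist(z, x) <= radius:
            covering.append((t, a))
    if not covering:
        return None                      # z not yet covered by any grain
    _, a = min(covering)                 # earliest arrival wins
    return "black" if a == 0 else "white"

# the white particle arrives at t = 1.0, before the black one at t = 2.0,
# so the origin is white even though both grains cover it
particles = [((0.5, 0.0), 2.0, 1.0, 1.0, 0), ((0.0, 0.2), 1.0, 1.0, 1.0, 1)]
```

Adding an even earlier black particle covering the origin flips its colour to black, matching the rule in (9.1) that a point belongs to $\mathcal{O}_{p}$ exactly when its first covering grain is black.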
We now define the percolation events and the corresponding probabilities as in
(8.2). For connected subsets $A,B,R\subset{\mathbb{R}}^{d}$ such that $A\cup
B\subset R$, define
$\displaystyle\\{A\overset{R}{\longleftrightarrow}B\\}$
$\displaystyle:=\\{\mbox{there exists a path in $\mathcal{O}_{p}\cap R$ from
$x$ to $y$ for some $x\in A,y\in B$}\\},$
$\displaystyle\\{A\longleftrightarrow B\\}$
$\displaystyle:=\\{A\overset{{\mathbb{R}}^{d}}{\longleftrightarrow}B\\},$
$\displaystyle\\{0\longleftrightarrow\infty\\}$ $\displaystyle:=\\{\mbox{there
is an unbounded path from $0$ in $\mathcal{O}_{p}$}\\}$
$\displaystyle\theta_{s}(p)$
$\displaystyle:=\theta_{s}(p,\mathbb{Q}_{1},\mathbb{Q}_{2})=\mathbb{P}(0\longleftrightarrow\partial
B_{s}(0)),s>0$ $\displaystyle\theta(p)$
$\displaystyle:=\theta(p,\mathbb{Q}_{1},\mathbb{Q}_{2})=\mathbb{P}(0\longleftrightarrow\infty)$
$\displaystyle Arm_{s,t}(p)$
$\displaystyle:=\\{B^{\infty}_{s}(0)\longleftrightarrow\partial
B^{\infty}_{t}(0)\\},\,\,0<s<t,$ $\displaystyle Cross_{s,t}(p)$
$\displaystyle:=\\{\\{0\\}\times[0,t]^{d-1}\overset{[0,s]\times[0,t]^{d-1}}{\longleftrightarrow}\\{s\\}\times[0,t]^{d-1}\\},\,\,s,t>0.$
(9.2)
Again, by monotonicity (in $n$) of $\theta_{n}(p)$, we have that
$\theta(p)=\lim_{n\to\infty}\theta_{n}(p)$. We suppress the dependence on
$\mathbb{Q}_{1},\mathbb{Q}_{2}$ as they are often fixed. Analogously to
$\mathcal{O}_{p}$, we can define the above events with respect to
$\mathcal{V}_{p}$; in this case, we denote the events and probabilities with a
$*$ superscript, i.e.,
$\longleftrightarrow^{*},\theta^{*}_{s}(p),\theta^{*}(p),Arm^{*}_{s,t}(p),Cross^{*}_{s,t}(p)$
and so on.
Throughout this section, we have chosen the intensity to be $1$ but instead we
could have also chosen the intensity to be $\gamma$ for some
$\gamma\in(0,\infty)$ i.e.,
$\tilde{\gamma}(\mathrm{d}(x,t,K_{1},K_{2},a))=\gamma\mathrm{d}x\times\mathrm{d}t\times\mathbb{Q}_{1}(\mathrm{d}K_{1})\times\mathbb{Q}_{2}(\mathrm{d}K_{2})\times\nu_{p}(a)$.
Because of the scale-invariance of the Poisson point process in the time
direction, the probabilities defined above do not depend on the value of
$\gamma$ and hence we have chosen $\gamma=1$ for convenience.
A measurable mapping $f:{\mathbf{N}}(\tilde{{\mathbb{X}}})\to{\mathbb{R}}$ is
said to be black-increasing if for all $\mu\in{\mathbf{N}}$ and
$x^{\prime}=(x,t,K_{1},K_{2})\in{\mathbb{R}}^{d}\times{\mathbb{R}}_{+}\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}$,
we have that
$f(\mu+\delta_{(x^{\prime},1)})\leq f(\mu)\leq
f(\mu+\delta_{(x^{\prime},0)}).$ (9.3)
As usual an event is said to be black-increasing if its indicator function is.
Trivially, if $f$ is a black-increasing function,
$\mathbb{E}[f(\tilde{\eta})]$ is an increasing function in $p$. Since
$\\{0\longleftrightarrow\infty\\}$ is a black-increasing event, $\theta(p)$ is
increasing in $p$ and $\theta^{*}(p)$ is decreasing in $p$. Hence, we can
define the critical probabilities as
$\displaystyle p_{c}$
$\displaystyle:=p_{c}(\mathbb{Q}_{1},\mathbb{Q}_{2})=\inf\\{p:\theta(p,\mathbb{Q}_{1},\mathbb{Q}_{2})>0\\},$
$\displaystyle p_{c}^{*}$
$\displaystyle:=p^{*}_{c}(\mathbb{Q}_{1},\mathbb{Q}_{2})=\sup\\{p:\theta^{*}(p,\mathbb{Q}_{1},\mathbb{Q}_{2})>0\\}.$
(9.4)
Our first theorem shows a sharp phase transition for $\theta_{n}(p)$.
###### Theorem 9.1 (Sharp threshold for confetti percolation).
Consider the black occupied region $\mathcal{O}_{p}$ in the confetti
percolation model as defined in (9.1) with grain distributions
$\mathbb{Q}_{1},\mathbb{Q}_{2}$ satisfying assumptions in (8.0). Then, for
$p_{c}$ as defined above, the following statements hold.
1. 1.
For all $p<p_{c}$, there exists a constant $\alpha(p)\in(0,\infty)$ such that
$\theta_{n}(p)\leq\exp\\{-\alpha(p)n\\}$ for all $n\geq 1$.
2. 2.
For all $p^{\prime}<1$, there exists a constant $\alpha\in(0,\infty)$ such
that for all $p\in[p_{c},p^{\prime})$, we have that
$\theta_{n}(p)\geq\alpha(p-p_{c})$ for all $n\geq 1$.
###### Remark 9.2.
1. 1.
Note that we do not say anything about non-triviality of $p_{c}$, i.e.,
$0<p_{c}<1$. Though we would expect this to hold under the assumptions of
Theorem 9.1, this is beyond the scope of our project. In the planar case
(i.e., $d=2$), non-triviality is known in the case when
$\mathbb{Q}_{1},\mathbb{Q}_{2}$ are supported on balls (see [4, Theorem 8.3]).
For the general case, one may use a suitable coupling argument similar to that
in Theorem 8.2.
2. 2.
The above theorem in the case of $d=2$ and when
$\mathbb{Q}_{1},\mathbb{Q}_{2}$ are supported on boxes was shown in [31,
Proposition 1.1]. The proof therein uses the discrete OSSS inequality, whereas
we use the continuum version, which enables us to provide a simpler proof
that holds in greater generality. It was remarked in [31, Remark 1.1] that
their method can be adapted to other shapes, but our proof allows one to
treat different shapes at once without making any shape-specific argument.
The same remark applies to Theorem 9.3 as well.
###### Proof.
We shall prove the sharp threshold result using the same proof strategy as in
Theorem 8.2 but with some additional technicalities due to the non-compactness
of $\tilde{{\mathbb{X}}}$.
We fix a large $n$ and $\epsilon\in(0,1/2)$. Suppose we show that for
$p\in[\epsilon,1-\epsilon]$,
$\frac{\mathrm{d}\theta_{n}(p)}{\mathrm{d}p}\geq\frac{b_{r}(\epsilon)n}{2\int_{0}^{n}\theta_{s}(p)\mathrm{d}s}\theta_{n}(p)(1-\theta_{n}(p)),$
(9.5)
where $b_{r}(\epsilon)>0$ will be defined explicitly soon. Now observe that
$1-\theta_{n}(p)\geq
1-\theta_{n}(1-\epsilon)\geq\mathbb{P}(B_{1}(0)\subset\mathcal{V}_{1-\epsilon})>0,$
i.e., there cannot be a path from $0$ to $\partial B_{n}(0)$ if the unit ball
around $0$ is covered by the white region. Thus, we obtain from (9.5) and the
above inequality that
$\frac{\mathrm{d}\theta_{n}(p)}{\mathrm{d}p}\geq\frac{b_{r}(\epsilon)n}{2\int_{0}^{n}\theta_{s}(p)\mathrm{d}s}\theta_{n}(p)(1-\theta_{n}(1-\epsilon)).$
Now using [24, Lemma 1], we obtain that there exists
$p_{0}:=p_{0}(\epsilon)\in[\epsilon,1-\epsilon]$ such that
1. 1.
For all $p\in[\epsilon,p_{0})$, there exists a constant
$\alpha(p)\in(0,\infty)$ such that $\theta_{n}(p)\leq\exp\\{-\alpha(p)n\\}$
for all $n\geq 1$.
2. 2.
There exists a constant $\alpha(\epsilon)\in(0,\infty)$ such that for all
$p\in(p_{0},1-\epsilon]$, $\theta_{n}(p)\geq\alpha(\epsilon)(p-p_{0})$ for all
$n\geq 1$.
By the definition of $p_{c}$, we have that $\theta(p)=0$ for $p<p_{c}$ and
$\theta(p)>0$ for $p>p_{c}$; combined with the two statements above, this
forces $p_{0}=p_{c}$ and completes the proof of the two statements in the
theorem.
Now, we are left to prove (9.5). Fix $s\in(0,n)$. For $h>0$, let
$\tilde{\eta}_{h}=\tilde{\eta}\cap\tilde{{\mathbb{X}}}_{h}$ where
$\tilde{{\mathbb{X}}}_{h}={\mathbb{R}}^{d}\times[0,h]\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}\times\\{0,1\\}$.
When we want to refer to the events in (9.2) but with respect to
$\mathcal{O}_{p}(\tilde{\eta}_{h})$, we shall use $\longleftrightarrow^{h}$
for the existence of corresponding paths in
$\mathcal{O}_{p}(\tilde{\eta}_{h})$ and
$\theta_{s}^{h},\theta^{h}$ for the corresponding percolation probabilities.
We define
$f^{h}_{n}:=\mathbf{1}\\{0\longleftrightarrow^{h}\partial
B_{n}(0)\\},f_{n}:=\mathbf{1}\\{0\longleftrightarrow\partial B_{n}(0)\\}.$
Observe that
$\theta^{h}_{n}(p)=\mathbb{E}[f_{n}^{h}],\theta_{n}(p)=\mathbb{E}[f_{n}].$ Our
proof strategy now is to first use Theorem 4.2 for $f_{n}^{h}$ and derive a
version of the differential inequality (9.5) for $\theta_{n}^{h}$. Then we
will complete the proof by showing that all the terms in the differential
inequality (9.5) are well-approximated by the corresponding truncated versions
for $f_{n}^{h}$. Since $f_{n}^{h}$ depends on the point process only in a
compact set, we can use ideas similar to those in the proof of Theorem 8.2 to
derive the differential inequality (9.5) for $\theta_{n}^{h}$.
Now, we construct a CTDT satisfying assumptions of Theorem 4.2 for
$f_{n}^{h}$. Let $S$ be the union of connected components of
$\mathcal{O}_{p}(\tilde{\eta}_{h})$ that intersect $\partial B_{s}(0)$. Note
that $S=\emptyset$ if no such component exists. Define
$Z_{n}^{s}:=(S\cup\partial B_{s}(0))\oplus B_{2r}(0)$. We skip the
construction of $Z_{n}^{s}$ via a CTDT, as it is similar to that in Theorem
8.2. We note that $Z_{n}^{s}$ determines $f_{n}^{h}$ and, further, that it is
a stopping set as in Remark A.7. Now, again as in the derivation of (8.9),
using the Harris-FKG inequality (see [31, Definition 2.1 and below]) and
translation invariance of the Poisson point process, we obtain that for
$x\in{\mathbb{R}}^{d}$,
$\displaystyle\mathbb{P}(x\in Z_{n}^{s})$
$\displaystyle\leq\mathbb{P}(B_{2r}(x)\cap(S\cup\partial
B_{s}(0))\neq\emptyset)$ $\displaystyle=\mathbf{1}\\{||x|-s|\leq
2r\\}+\mathbf{1}\\{||x|-s|>2r\\}\mathbb{P}(B_{2r}(x)\longleftrightarrow^{h}\partial
B_{s}(0))$ $\displaystyle\leq\mathbf{1}\\{||x|-s|\leq
2r\\}+(b^{h}_{r}(p))^{-1}\mathbf{1}\\{||x|-s|>2r\\}\mathbb{P}(x\longleftrightarrow^{h}\partial
B_{s}(0))$ $\displaystyle\leq\mathbf{1}\\{||x|-s|\leq
2r\\}+(b^{h}_{r}(p))^{-1}\mathbf{1}\\{||x|-s|>2r\\}\theta^{h}_{|s-|x||}(p),$
(9.6)
where $b^{h}_{r}(p)=\mathbb{P}(\\{0\longleftrightarrow^{h}\partial
B_{2r}(0)\\}\cap\\{\partial
B_{2r}(0)\subset\mathcal{O}_{p}(\tilde{\eta}_{h})\\})$ and the use of the
Harris-FKG inequality is justified as the events are black-increasing as
defined in (9.3). Further, $b^{h}_{r}(p)\geq
b^{h}_{r}(\epsilon)\geq\mathbb{P}(B_{2r}(0)\subset\mathcal{O}_{\epsilon}(\tilde{\eta}_{h}))$,
and the positivity of the latter guarantees the positivity of
$b^{h}_{r}(\epsilon)$. Now we randomize over $s$ as before, i.e., set
$Z_{n}:=Z_{n}^{Y}$ where $Y$ is a uniform $(0,n)$-valued random variable. Then
we obtain that
$\displaystyle\delta_{n}$ $\displaystyle:=\sup_{x\in
B_{n+2r}(0)}\mathbb{P}(x\in
Z_{n})\leq(b^{h}_{r}(\epsilon))^{-1}n^{-1}\sup_{x\in
B_{n+2r}(0)}\int_{0}^{n}\theta^{h}_{|s-|x||}(p)\mathrm{d}s$ $\displaystyle\leq
2(b^{h}_{r}(\epsilon))^{-1}n^{-1}\int_{0}^{n}\theta^{h}_{s}(p)\mathrm{d}s,$
where the first inequality follows from (9.6) and from arguments analogous
to those in the proof of Theorem 8.2.
Having constructed a suitable CTDT, we now derive the differential inequality
for $\theta_{n}^{h}$. By $x^{\prime}$ we denote $(x,t,K_{1},K_{2})$. Using
that $f^{h}_{n}$ is black-increasing as in (9.3) and $f^{h}_{n}\in\\{0,1\\}$,
we have that
$|D_{(x^{\prime},0)}f^{h}_{n}|=f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})-f^{h}_{n}(\tilde{\eta})\leq
f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})-f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},1)})\quad\mbox{and}\quad|D_{(x^{\prime},1)}f^{h}_{n}|\leq
f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})-f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},1)}).$
Let $\mathbb{X}$ be such that
$\tilde{{\mathbb{X}}}={\mathbb{X}}\times\\{0,1\\}$. Thus from Theorem 4.2, the
definition of $\delta_{n}$, the above inequalities and a version of the Russo-
Margulis formula in [49, Exercise 19.8], we derive that
$\displaystyle\theta^{h}_{n}(p)(1-\theta^{h}_{n}(p))$
$\displaystyle\leq\delta_{n}\left(p\int_{{\mathbb{X}}}\mathbb{E}[|D_{(x^{\prime},0)}f^{h}_{n}(\tilde{\eta})|]\mathbb{Q}_{1}(\mathrm{d}K_{1})\mathbb{Q}_{2}(\mathrm{d}K_{2})\mathrm{d}t\mathrm{d}x\right.$
$\displaystyle\quad+\left.(1-p)\int_{{\mathbb{X}}}\mathbb{E}[|D_{(x^{\prime},1)}f^{h}_{n}(\tilde{\eta})|]\mathbb{Q}_{1}(\mathrm{d}K_{1})\mathbb{Q}_{2}(\mathrm{d}K_{2})\mathrm{d}t\mathrm{d}x\right)$
$\displaystyle\leq\delta_{n}\int_{{\mathbb{X}}}\mathbb{E}[f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})-f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},1)})]\mathbb{Q}_{1}(\mathrm{d}K_{1})\mathbb{Q}_{2}(\mathrm{d}K_{2})\mathrm{d}t\mathrm{d}x$
$\displaystyle=\delta_{n}\frac{\mathrm{d}\theta^{h}_{n}(p)}{\mathrm{d}p}\leq\left(2(b^{h}_{r}(\epsilon))^{-1}n^{-1}\int_{0}^{n}\theta^{h}_{s}(p)\mathrm{d}s\right)\frac{\mathrm{d}\theta^{h}_{n}(p)}{\mathrm{d}p}.$
(9.7)
Now, we will complete the proof by showing that the terms in (9.7) approximate
those of (9.5). Define
$b_{r}(p):=\mathbb{P}(\\{0\longleftrightarrow\partial
B_{2r}(0)\\}\cap\\{\partial B_{2r}(0)\subset\mathcal{O}_{p}\\}).$
By the same reasoning as in the forthcoming derivation of (9.8), we have that
for every $s>0$ and $p\in[\epsilon,1-\epsilon]$,
$\theta^{h}_{s}(p)\uparrow\theta_{s}(p),\quad b^{h}_{r}(p)\uparrow b_{r}(p).$
Thus, to show that the differential inequality (9.7) converges to the
differential inequality (9.5), it remains to show that $\theta_{s}(p)$ is
differentiable in $p$ and that the derivatives
$\frac{\mathrm{d}\theta^{h}_{n}(p)}{\mathrm{d}p}$ converge as $h\to\infty$. We
prove both at once by showing that the derivatives
$\frac{\mathrm{d}\theta^{h}_{n}(p)}{\mathrm{d}p}$ converge uniformly as
$h\to\infty$.
Set
${\mathbb{X}}_{h}:={\mathbb{R}}^{d}\times[0,h]\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}$
and
${\mathbb{X}}^{\prime}:={\mathbb{R}}^{d}\times(h,\infty)\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}$.
Define
$\displaystyle g_{n}^{h}(p)$
$\displaystyle:=\int_{{\mathbb{X}}_{h}}\mathbb{E}[f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})-f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},1)})]\mathbb{Q}_{1}(\mathrm{d}K_{1})\mathbb{Q}_{2}(\mathrm{d}K_{2})\mathrm{d}t\mathrm{d}x$
$\displaystyle g_{n}(p)$
$\displaystyle:=\int_{{\mathbb{X}}}\mathbb{E}[f_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})-f_{n}(\tilde{\eta}+\delta_{(x^{\prime},1)})]\mathbb{Q}_{1}(\mathrm{d}K_{1})\mathbb{Q}_{2}(\mathrm{d}K_{2})\mathrm{d}t\mathrm{d}x,$
By the Russo-Margulis formula [49, Exercise 19.8],
$\frac{\mathrm{d}\theta^{h}_{n}(p)}{\mathrm{d}p}=g^{h}_{n}(p)$. Hence to show
uniform convergence of the derivatives, it is enough to show uniform
convergence of $g^{h}_{n}(p)$ to $g_{n}(p)$ for $p\in[\epsilon,1-\epsilon]$.
Now, using the fact that $f_{n}^{h}$ depends only on $\tilde{\eta}_{h}$, we
derive that
$\displaystyle|g_{n}(p)-g_{n}^{h}(p)|$
$\displaystyle\leq\int_{{\mathbb{X}}^{\prime}}\mathbb{E}[f_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})-f_{n}(\tilde{\eta}+\delta_{(x^{\prime},1)})]\mathbb{Q}_{1}(\mathrm{d}K_{1})\mathbb{Q}_{2}(\mathrm{d}K_{2})\mathrm{d}t\mathrm{d}x$
$\displaystyle\quad+\int_{{\mathbb{X}}_{h}}\mathbb{E}[|f_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})-f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})|]\mathbb{Q}_{1}(\mathrm{d}K_{1})\mathbb{Q}_{2}(\mathrm{d}K_{2})\mathrm{d}t\mathrm{d}x$
$\displaystyle\quad+\int_{{\mathbb{X}}_{h}}\mathbb{E}[|f^{h}_{n}(\tilde{\eta}+\delta_{(x^{\prime},1)})-f_{n}(\tilde{\eta}+\delta_{(x^{\prime},1)})|]\mathbb{Q}_{1}(\mathrm{d}K_{1})\mathbb{Q}_{2}(\mathrm{d}K_{2})\mathrm{d}t\mathrm{d}x.$
We subdivide $B_{n+2r}(0)$ into small cubes $R_{n1},\ldots,R_{nk_{a}}$ of
side-length $a$, with $a$ small enough that the diameter of each cube is at
most $r_{0}/4$, where $r_{0}$ is as in assumption (8.0). Set
${\mathcal{K}}^{\prime}_{r_{0}}:=\\{K\in{\mathcal{K}}_{r}:B_{r_{0}}(0)\subset
K\\}$ and $\beta_{i}=\mathbb{Q}_{i}({\mathcal{K}}^{\prime}_{r_{0}})$, $i=1,2$.
Note that $\beta_{i}>0$ for $i=1,2$ because of assumption (8.0).
Our key observation is that if, for some $t$ and all $i$,
$\tilde{\eta}(R_{ni}\times[0,t)\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})\neq
0$, then
$\mathcal{O}_{p}(\tilde{\eta}+\delta_{(x,t^{\prime},K_{1},K_{2},0)})\cap
B_{n}(0)=\mathcal{O}_{p}(\tilde{\eta}+\delta_{(x,t^{\prime},K_{1},K_{2},1)})\cap
B_{n}(0)$ for $t^{\prime}>t,x\in B_{n+2r}(0)$ and for all $p$: in this case,
every point of $B_{n}(0)$ is already covered by a grain arriving before time
$t$, so a particle arriving after $t$ cannot change the colouring in
$B_{n}(0)$. From this observation, a union bound over the $R_{ni}$’s and the
fact that
$\tilde{\eta}(R_{ni}\times[0,t)\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})$
is a Poisson random variable with mean $a^{d}\beta_{1}\beta_{2}t$, we derive
that for $x^{\prime}=(x,t,K_{1},K_{2})$,
$\mathbb{E}[f_{n}(\tilde{\eta}+\delta_{(x^{\prime},0)})-f_{n}(\tilde{\eta}+\delta_{(x^{\prime},1)})]\leq
k_{a}\exp\\{-a^{d}\beta_{1}\beta_{2}t\\}.$
Similarly, if for some $h$ and all $i$,
$\tilde{\eta}_{h}(R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})\neq
0$, then $\mathcal{O}_{p}(\tilde{\eta}_{h})\cap
B_{n}(0)=\mathcal{O}_{p}(\tilde{\eta})\cap B_{n}(0)$ and
$\mathcal{O}_{p}(\tilde{\eta}_{h}+\delta_{(x,t,K_{1},K_{2},a)})\cap
B_{n}(0)=\mathcal{O}_{p}(\tilde{\eta}+\delta_{(x,t,K_{1},K_{2},a)})\cap
B_{n}(0)$ for $t\leq h,a\in\\{0,1\\},x\in B_{n+2r}(0)$ and for all $p$.
Arguing as above, the latter two integrands are bounded by
$k_{a}\exp\\{-a^{d}\beta_{1}\beta_{2}h\\}$. Thus substituting these bounds in
the above inequality and observing that by the definition of
$f_{n},f_{n}^{h}$, one can restrict to $x\in B_{n+2r}(0)$, we obtain that
$|g_{n}(p)-g_{n}^{h}(p)|\leq
k_{a}|B_{n+2r}(0)|\left(\int_{h}^{\infty}\exp\\{-a^{d}\beta_{1}\beta_{2}t\\}\mathrm{d}t+2\exp\\{-a^{d}\beta_{1}\beta_{2}h\\}\right).$
(9.8)
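The quantitative ingredient in these bounds is elementary: if each of $k_{a}$ disjoint cubes independently receives a Poisson number of suitable points with mean $m=a^{d}\beta_{1}\beta_{2}t$, the probability that some cube receives none is $1-(1-e^{-m})^{k_{a}}\leq k_{a}e^{-m}$. A short Python check of this union bound (the parameter values are arbitrary illustrations):

```python
# Exact probability that at least one of k independent Poisson(m) counts
# is zero, compared with the union bound k * exp(-m) used above.
import math

def p_some_empty(k, m):
    return 1.0 - (1.0 - math.exp(-m)) ** k

for k in [10, 100, 1000]:
    for m in [0.5, 2.0, 8.0]:
        exact = p_some_empty(k, m)
        bound = k * math.exp(-m)
        assert exact <= bound + 1e-12, (k, m, exact, bound)
print("union bound verified")
```

For fixed $a$ (hence fixed $k_{a}$), the bound decays exponentially in $t$, which is exactly what makes the tail integral over $(h,\infty)$ in the estimate of $|g_{n}(p)-g_{n}^{h}(p)|$ vanish as $h\to\infty$.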
From the above bounds, we obtain uniform convergence of $g^{h}_{n}(p)$ to
$g_{n}(p)$ for $p\in[\epsilon,1-\epsilon]$ and, as argued below (9.7), this
completes the proof of (9.5) and hence that of the theorem as well. ∎
A powerful consequence of the above sharp phase transition result is the exact
determination of the critical probabilities in the planar case when
$\mathbb{Q}_{1}=\mathbb{Q}_{2}=\mathbb{Q}$. By considerations of self-duality,
it was conjectured that $p_{c}=p^{*}_{c}=\frac{1}{2}$ in the case of fixed
balls by Benjamini and Schramm [10, Problem 5]. It was proven in the case of
fixed boxes by Hirsch [40] and then in the case of fixed balls by Müller [56].
Recently, it was proven in the case of random boxes by Ghosh and Roy [31]. Our
next result will include all of the above as special cases.
###### Theorem 9.3 (Critical probability for planar confetti percolation).
Consider the black occupied region $\mathcal{O}_{p}$ in the planar (i.e.,
$d=2$) confetti percolation model as defined in (9.1) with grain distribution
$\mathbb{Q}_{1}=\mathbb{Q}_{2}=\mathbb{Q}$ satisfying assumptions in (8.0).
Further, we assume that $\mathbb{Q}$ is invariant under $\pi/2$-rotations and
reflections at the coordinate axes. Then we have that
$p_{c}(\mathbb{Q},\mathbb{Q})=p_{c}^{*}(\mathbb{Q},\mathbb{Q})=\frac{1}{2}$.
Further, under a certain transitivity condition, it was shown in [31, Theorem
1.2] for the planar case that
$p_{c}(\mathbb{Q}_{1},\mathbb{Q}_{2})=\frac{|B_{1}|}{|B_{1}|+|B_{2}|}$ when
$\mathbb{Q}_{1}=\delta_{B_{1}}$ and $\mathbb{Q}_{2}=\delta_{B_{2}}$ for
$B_{1},B_{2}\in{\mathcal{K}}_{r}$, where $|\cdot|$ denotes the Lebesgue
volume. It is not yet known whether the transitivity condition always holds.
###### Proof.
Using the observation
$\mathcal{V}_{p}(\mathbb{Q}_{1},\mathbb{Q}_{2})\overset{d}{=}\mathcal{O}_{1-p}(\mathbb{Q}_{2},\mathbb{Q}_{1})$,
we obtain that
$p^{*}_{c}(\mathbb{Q}_{1},\mathbb{Q}_{2})=1-p_{c}(\mathbb{Q}_{2},\mathbb{Q}_{1})$
and thus $p^{*}_{c}(\mathbb{Q},\mathbb{Q})+p_{c}(\mathbb{Q},\mathbb{Q})=1$
when $\mathbb{Q}_{1}=\mathbb{Q}_{2}=\mathbb{Q}$. Hence, to prove the theorem
it suffices to show that
$p_{c}(\mathbb{Q},\mathbb{Q})=p_{c}^{*}(\mathbb{Q},\mathbb{Q})$.
Now from the sharp phase transition result (Theorem 9.1), we can obtain that
$\mathbb{P}(Cross_{2n,2n}(p))\overset{n\to\infty}{\to}0\,\,\mbox{for}\,\,p<p_{c}\,\,\mbox{and}\,\,\mathbb{P}(Cross_{2n,2n}(p))\overset{n\to\infty}{\to}1\,\,\mbox{for}\,\,p>p_{c},\,\,$
(9.9)
and similarly for $\mathbb{P}(Cross^{*}_{2n,2n}(p))$ with respect to
$p_{c}^{*}$. We will now conclude the proof of the theorem assuming (9.9) and
later prove the same. By planarity and the definition of the model, we have
that there exists a left-right crossing of $\mathcal{O}_{p}$ iff there exists
no top-down crossing of $\mathcal{V}_{p}$. From this observation and
$\pi/2$-rotation invariance of $\mathcal{O}_{p}$ and $\mathcal{V}_{p}$, we
obtain that $\mathbb{P}(Cross_{2n,2n}(p))=1-\mathbb{P}(Cross^{*}_{2n,2n}(p))$.
Thus, we also have that
$\mathbb{P}(Cross_{2n,2n}(p))\overset{n\to\infty}{\to}0\,\,\mbox{for}\,\,p<p^{*}_{c}\,\,\mbox{and}\,\,\mathbb{P}(Cross_{2n,2n}(p))\overset{n\to\infty}{\to}1\,\,\mbox{for}\,\,p>p^{*}_{c},$
and so $p_{c}=p_{c}^{*}$, as required to complete the proof of the theorem.
We now prove (9.9) for $\mathbb{P}(Cross_{2n,2n}(p))$; the proof for
$\mathbb{P}(Cross^{*}_{2n,2n}(p))$ follows from the same arguments. For the
first statement, observe that a left-right crossing in $[0,2n]^{2}$ has to
pass through one of the $n$ unit balls $B_{1}(n,2i+1),i=0,1,\ldots,n-1$, and
hence there is a path from one of these balls to $\\{2n\\}\times[0,2n]$. As in
the proof of $\tilde{\gamma}_{c}=\gamma_{c}$ in Theorem 8.1, by a union bound,
stationarity of the Poisson process and reasoning as in the derivation of
(9.6), we obtain that
$\displaystyle\mathbb{P}(Cross_{2n,2n}(p))$
$\displaystyle\leq\mathbb{P}(\cup_{i=0}^{n-1}\\{B_{1}(n,2i+1)\longleftrightarrow\\{2n\\}\times[0,2n]\\})$
$\displaystyle\leq\sum_{i=0}^{n-1}\mathbb{P}(B_{1}(n,2i+1)\longleftrightarrow\partial
B_{n}(n,2i+1))$ $\displaystyle\leq
n\mathbb{P}(B_{1}(0)\longleftrightarrow\partial B_{n}(0))\leq
b_{1/2}(p)^{-1}n\theta_{n}(p),$
where $b_{r}(p)$ is as defined in the proof of Theorem 9.1. Now, by the
exponential decay in Theorem 9.1(1), the RHS above converges to $0$ as
$n\to\infty$ for $p<p_{c}$. For the second statement, we shall follow Zhang’s
argument (see [22, Proposition 4.1] and [40, Proposition 3]). For Zhang’s
argument, it suffices that $\mathcal{O}_{p}$ is invariant under translations
of ${\mathbb{R}}^{d}$ and rotation by $\pi/2$, that $\mathcal{O}_{p}$ is
ergodic with respect to translations of ${\mathbb{R}}^{d}$, and that
black-increasing events are positively correlated. We now sketch Zhang’s
argument. Let $n\geq 2k\geq 1$. Any $\mathcal{O}_{p}$-path from
$B^{\infty}_{k}$ to $\partial B^{\infty}_{n}$ ends in one of the four sides of
$\partial B^{\infty}_{n}$.
Now, by $\pi/2$-rotation invariance of $\mathcal{O}_{p}$ and square-root trick
[74, Proposition 4.1], we have that
$\displaystyle\mathbb{P}(\mbox{there is a $\mathcal{O}_{p}$-path from
$B^{\infty}_{k}$ to the left of $\partial B^{\infty}_{n}$})$
$\displaystyle\geq 1-\mathbb{P}(B^{\infty}_{k}\nleftrightarrow\partial
B_{n}^{\infty})^{1/4}$ $\displaystyle\geq
1-\mathbb{P}(B^{\infty}_{k}\nleftrightarrow\infty)^{1/4}.$
Let $A_{n,k}$ denote the event that $B^{\prime}_{n,k}:=(n,n)+B^{\infty}_{k}$
is connected to the left of $[0,2n]^{2}$ and the right of $[0,2n]^{2}$ via
paths in $[0,2n]^{2}$. Then from the above bound and positive correlation of
black-increasing events, we derive that
$\mathbb{P}(A_{n,k})\geq
1-2\mathbb{P}(B^{\infty}_{k}\nleftrightarrow\infty)^{1/4}.$
Observe that $A_{n,k}\setminus Cross_{2n,2n}(p)\subset A^{\prime}_{n,k},$
where the latter is defined as
$A^{\prime}_{n,k}:=\\{\mbox{there are two distinct components in
$\mathcal{O}_{p}$ that intersect $B^{\prime}_{n,k}$ and
$\partial[0,2n]^{2}$}\\}.$
Further, $\cap_{n\geq 1}A^{\prime}_{n,k}\subset\\{\mbox{there are two distinct
infinite components in $\mathcal{O}_{p}$}\\}$ and the latter event has zero
probability by uniqueness of the infinite component of $\mathcal{O}_{p}$ (see
Remark 9.4). Thus, we obtain that
$\lim_{n\to\infty}\mathbb{P}(Cross_{2n,2n}(p))=\lim_{n\to\infty}\mathbb{P}(A_{n,k})\geq
1-2\mathbb{P}(B^{\infty}_{k}\nleftrightarrow\infty)^{1/4}.$
By ergodicity and since $\theta(p)>0$ for $p>p_{c}$, we have that
$\mathbb{P}(\mbox{$\mathcal{O}_{p}$ has an infinite component})=1$, and so
$\mathbb{P}(B^{\infty}_{k}\longleftrightarrow\infty)\to 1$ as $k\to\infty$.
Substituting this in the above bound, we obtain the second statement in (9.9),
thus completing the proof of (9.9). ∎
###### Remark 9.4.
We do not provide a detailed proof of the uniqueness of the infinite occupied component in the confetti percolation model. To prove this claim, one can follow the arguments in [27]. The uniqueness of the infinite component of $\mathcal{O}_{p}$ follows from ergodicity (with respect to translations of ${\mathbb{R}}^{d}$), invariance under $\pi/2$-rotations and reflections at the coordinate axes, positive association of black-increasing events and the asymptotic independence property of $\tilde{\eta}$ (see [40, Section 3.2]).
We now give a simple criterion for verifying noise sensitivity and the existence of exceptional times in the confetti percolation model, analogous to the one for $k$-percolation in the Poisson Boolean model.
###### Theorem 9.5 (Noise sensitivity criteria).
Consider the occupied region $\mathcal{O}_{p}$ in the confetti percolation
model defined in (9.1) with grain distributions
$\mathbb{Q}_{1},\mathbb{Q}_{2}$ satisfying the assumptions in (8.0). Let
$f_{n}:=f_{n}(p)=\mathbf{1}\\{Cross_{\kappa n,n}(p)\\}$ for $\kappa>0$ where
$Cross_{\kappa n,n}(p)$ is the crossing event defined in (9.2). Assume that
$\\{f_{n}(p_{c})\\}_{n\geq 1}$ is non-degenerate as in (7.1) and
$\mathbb{P}(Arm_{r,s}(p_{c}))\to 0$ as $s\to\infty$. Then we have the
following.
1. (i)
$\\{f_{n}(p_{c})\\}_{n\geq 1}$ is noise sensitive.
2. (ii)
$\\{f_{n}(p_{c})\\}_{n\geq 1}$ has an infinite set of exceptional times in $[0,1]$, i.e., $|S_{n}\cap[0,1]|\overset{\mathbb{P}}{\longrightarrow}\infty$, where the set of exceptional times $S_{n}$ is defined below (7.1).
3. (iii)
Assume that there exist $C<\infty$ and $c_{arm}>0$ such that for all $s>2r$ the following holds:
$\mathbb{P}(Arm_{r,s}(p_{c}))\leq Cs^{-c_{arm}}.$
Then, $\mathbb{P}(Cross_{\kappa n,n}((1+c_{n})p_{c}))\to 1$ and
$\mathbb{P}(Cross_{\kappa n,n}((1-c_{n})p_{c}))\to 0$ where
$c_{n}=n^{-\frac{c_{arm}}{2}+\epsilon}$ for some $\epsilon>0$.
###### Remark 9.6.
1. 1.
Similarly to what was observed in Remark 8.4-1, one can use Theorem 7.4 to
show that the conclusion of Point (iii) in the previous statement continues to
hold for sequences of the type $c_{n}=n^{-c_{arm}+\epsilon}$.
2. 2.
Reasoning as in Remark 8.4(i), using the Harris-FKG inequality and
monotonicity of the events, we can derive that
$\mathbb{P}(Arm_{r,s}(p_{c}))\leq b_{r}^{-1}\theta_{s-r}(p_{c})$ where $b_{r}$
is as defined below (9.6). This will be useful in Corollary 9.8.
###### Proof.
As with the proof of Theorem 8.3, we construct a suitable randomized stopping
set and then appeal to our general results in Proposition 6.3 and Corollary
7.2 to prove the first two statements. The third statement does not follow directly from Theorem 7.3 but requires an adaptation of the techniques therein.
We fix $\kappa>0$. Let $R_{n}=[0,\kappa n]\times[0,n]^{d-1}$ be the rectangle. Given $s\in(0,\kappa n)$, let $L=\\{s\\}\times[0,n]^{d-1}$ and let $S$ be the union of the connected components of $\mathcal{O}_{p}\cap R_{n}$ that intersect $L$. Again, if no such component exists, we set $S=\emptyset$. Define
$Z_{n}^{s}:=(S\cup L)\oplus B_{r}(0)$. By arguments as in the proof of Theorem
8.3 and Remark A.8, we have that
$\tilde{Z}_{n}^{s}=Z_{n}^{s}\times{\mathbb{R}}_{+}\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}\times\\{0,1\\}$
is a stopping set and determines $f_{n}$. As usual, we choose our randomized stopping set $Z_{n}$ by first choosing $Y\in(0,\kappa n)$ uniformly at random and then setting $\tilde{Z}_{n}:=\tilde{Z}_{n}^{Y}$. As in Theorem 8.3, we derive the following bound on the revealment probability:
$\delta_{n}:=\sup_{\tilde{x}=(x,t,K_{1},K_{2},a)\in\tilde{\mathbb{X}}}\mathbb{P}(\tilde{x}\in\tilde{Z}_{n})\leq
2(\kappa n)^{-1}\int_{0}^{\kappa n}\mathbb{P}(Arm_{r,s}(p_{c}))\mathrm{d}s.$
Thus, the proof of the first statement of our theorem is complete using our assumption on the decay of arm probabilities and Proposition 6.3. The second statement will then follow from Corollary 7.2, provided we can show jump-regularity of $f_{n}$; we do so at the end of the proof.
Now, we will prove the third statement. Set $c_{n}=n^{-c_{arm}/2+\epsilon}$
for some $\epsilon>0$. Observe that, by our assumption on the decay of arm probabilities in the third statement, Corollary 7.2 and the jump-regularity of $f_{n}$ (to be proved below), we have that
$|S_{n}\cap[0,c_{n}]|\overset{\mathbb{P}}{\longrightarrow}\infty.$ (9.10)
Abusing notation, we will let $\tilde{\eta}$ be the Poisson point process on
$R^{+}_{n}\times{\mathbb{R}}_{+}\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}\times\\{0,1\\}$
with $p=p_{c}$ and $R_{n}^{+}=R_{n}\oplus B_{2r}(0)$. Let $\tilde{\eta}^{t}$ be the Markov process driven by the Ornstein-Uhlenbeck semigroup as in Section 6.1, with the same stationary distribution as $\tilde{\eta}$ and with $\tilde{\eta}^{0}=\tilde{\eta}$. To make explicit the dependence on the point
process, we set $f^{t}_{n}(p_{c}):=f_{n}(\tilde{\eta}^{t})$ to denote the
left-right crossing event of $R_{n}$ in
$\mathcal{O}_{p_{c}}(\tilde{\eta}^{t})$, i.e., $Cross_{\kappa n,n}(p_{c})$ for
$\mathcal{O}_{p_{c}}(\tilde{\eta}^{t})$. Now, using (9.10), we have that
$\displaystyle\mathbb{P}(f^{t}_{n}(p_{c})=1\,\mbox{for some
$t\in[0,c_{n}]$})\to 1\,\mbox{as $n\to\infty$}.$ (9.11)
For $t\geq 0$, let $\tilde{\eta}^{t}_{+}$ be the Poisson process of points
born before time $t$ and $\tilde{\eta}^{t}_{-}\subset\tilde{\eta}$ be the
Poisson process of points that have been removed by time $t$. Let
$\tilde{\eta}^{t}_{b+}\subset\tilde{\eta}^{t}_{+}$ be the subset of ‘black
grains’ i.e.,
$\tilde{\eta}^{t}_{b+}=\tilde{\eta}^{t}_{+}\cap(R^{+}_{n}\times{\mathbb{R}}_{+}\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}\times\\{0\\})$.
Similarly, let $\tilde{\eta}^{t}_{w-}\subset\tilde{\eta}^{t}_{-}$ denote the subset of ‘white grains’ that were removed by time $t$. As the process
$\tilde{\eta}^{t}_{+}$ is independent of $\tilde{\eta}$, so is the process
$\tilde{\eta}^{t}_{b+}$ and the latter process has intensity measure
$p_{c}t\tilde{\gamma}$, where, recall, $\tilde{\gamma}$ is the intensity measure of the Poisson point process $\tilde{\eta}$ introduced in the previous paragraph.
Then $\zeta^{t}:=\tilde{\eta}-\tilde{\eta}^{t}_{w-}+\tilde{\eta}^{t}_{b+}$ is
a Poisson process with intensity measure
$(p_{c}+e^{-t}(1-p_{c})+tp_{c})\tilde{\gamma}$ and the intensity measure of
the ‘black grains’ is $(1+t)p_{c}\tilde{\gamma}$. So, the probability of a
grain being black is $\frac{(1+t)p_{c}}{p_{c}+e^{-t}(1-p_{c})+tp_{c}}$. We let
$\tilde{\eta}^{t}_{b},\tilde{\eta}^{t}_{w}$ denote respectively the black and
white grains in $\tilde{\eta}^{t}$. Similarly, we use the notation
$\zeta^{t}_{b},\zeta^{t}_{w}$. By construction of $\tilde{\eta}^{t}$ and
$\zeta^{t}$, the ‘black grains’ are never removed in $\zeta^{t}$ while they
may be removed in $\tilde{\eta}^{t}$ depending on their lifetimes. Also, ‘white grains’ are removed in $\zeta^{t}$ and never added, while new white grains may be added in $\tilde{\eta}^{t}$. Thus, $\tilde{\eta}^{t}_{b}\subset\zeta^{t}_{b}\subset\zeta^{c_{n}}_{b}$ and $\tilde{\eta}^{t}_{w}\supset\zeta^{t}_{w}\supset\zeta^{c_{n}}_{w}$ for $t\leq c_{n}$. Since $f_{n}$ is monotonic (9.3), $f_{n}(\tilde{\eta}^{t})\leq f_{n}(\zeta^{t})\leq f_{n}(\zeta^{c_{n}})$ for $t\in[0,c_{n}]$. Now, by (9.11), we
have that $\mathbb{E}[f_{n}(\zeta^{c_{n}})]\to 1$ as $n\to\infty$. Since the
probability of a grain being black in $\zeta^{c_{n}}$ is at most
$(1+c_{n})p_{c}$, again by monotonicity, we have that
$\mathbb{P}(Cross_{\kappa n,n}((1+c_{n})p_{c}))\to 1$ as $n\to\infty$. Adapting the arguments in Theorem 7.3 as we did above, we obtain the second part of the statement as well.
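The intensity computation for $\zeta^{t}$ used above can be checked term by term; this is a sketch, using only that the lifetimes are i.i.d. exponential($1$) and that new points are born at rate $\tilde{\gamma}$ per unit time:

```latex
% White grains of \tilde\eta have intensity (1-p_c)\tilde\gamma; each is
% removed by time t with probability 1 - e^{-t} (exponential(1) lifetimes),
% so \tilde\eta^t_{w-} has intensity (1-p_c)(1-e^{-t})\tilde\gamma.
% Black grains born in [0,t] have intensity p_c t \tilde\gamma. Hence
\zeta^{t}=\tilde{\eta}-\tilde{\eta}^{t}_{w-}+\tilde{\eta}^{t}_{b+}
\quad\text{has intensity}\quad
\big[1-(1-p_{c})(1-e^{-t})+tp_{c}\big]\tilde{\gamma}
=\big[p_{c}+e^{-t}(1-p_{c})+tp_{c}\big]\tilde{\gamma}.
% Its black grains (those of \tilde\eta plus the newly born ones, which are
% never removed) have intensity p_c\tilde\gamma + t p_c\tilde\gamma
% = (1+t)p_c\tilde\gamma, giving the stated probability
\frac{(1+t)p_{c}}{p_{c}+e^{-t}(1-p_{c})+tp_{c}}
\quad\text{that a grain of }\zeta^{t}\text{ is black}.
```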
Only jump-regularity remains to be proven to complete the proof of the theorem. Unlike in the $k$-percolation case, it is not true that $\tilde{\eta}\cap(R_{n}^{+}\times{\mathbb{R}}_{+}\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}\times\\{0,1\\})$ has a finite intensity measure for any $n$. Nevertheless, we can still show that $f_{n}$ is well-approximated by functionals that depend on $\tilde{\eta}$ restricted to a compact set and are hence jump-regular.
Let $\tilde{\eta}_{h}:=\tilde{\eta}\cap{\mathbb{X}}_{h}$ where
${\mathbb{X}}_{h}:=R^{+}_{n}\times[0,h]\times{\mathcal{K}}_{r}\times{\mathcal{K}}_{r}\times\\{0,1\\}$
i.e., $\tilde{\eta}_{h}$ is the Poisson process restricted to heights at most $h$, for some $h>0$. Note that $\tilde{\eta}^{t}_{h}:=\tilde{\eta}^{t}\cap{\mathbb{X}}_{h}$ is the Markov process driven by the Ornstein-Uhlenbeck semigroup with the same stationary distribution as $\tilde{\eta}_{h}$. Analogous to
$f^{t}_{n}(p_{c}):=f_{n}(\tilde{\eta}^{t})$, define
$f^{t}_{n,h}(p_{c}):=f_{n}(\tilde{\eta}^{t}_{h})$ to be the indicator of the
crossing event $Cross_{\kappa n,n}(p_{c})$ in
$\mathcal{O}_{p}(\tilde{\eta}_{h}^{t})$ for $h,t>0$. Since
$\tilde{\gamma}({\mathbb{X}}_{h})<\infty$, $f^{t}_{n,h}(p_{c})$ is jump-
regular as it depends only on the Poisson process inside the compact set
${\mathbb{X}}_{h}$. Suppose we show that
$\lim_{h\to\infty}\sup_{t\in[0,1]}|f^{t}_{n}(p_{c})-f^{t}_{n,h}(p_{c})|\overset{p}{=}0.$
(9.12)
Then, we claim that $f_{n}^{t}(p_{c})$ is also jump-regular. We argue this as follows. Let $S_{n}$ be the set of exceptional times of $f_{n}^{t}$ restricted to $[0,1]$ (see the definition of exceptional times in Section 7). Similarly, let $S_{n,h}$ be the set of exceptional times of $f_{n,h}^{t}$ restricted to $[0,1]$. To show jump-regularity of $f_{n}^{t}(p_{c})$, it suffices to show that $\mathbb{P}(|S_{n}|<\infty)=1$, as $f_{n}^{t}(p_{c})$ is $\\{0,1\\}$-valued. Fix $M,h>0$. Using that $S_{n}=S_{n,h}$ if $f^{t}_{n}\equiv f^{t}_{n,h}$ for $t\in[0,1]$, and that $f^{t}_{n},f^{t}_{n,h}$ are $\\{0,1\\}$-valued, we derive that
$\displaystyle\mathbb{P}(|S_{n}|\geq M)$
$\displaystyle\leq\mathbb{P}(|S_{n,h}|\geq M)+\mathbb{P}(\sup_{t\in[0,1]}|f^{t}_{n}(p_{c})-f^{t}_{n,h}(p_{c})|\neq 0)$
$\displaystyle=\mathbb{P}(|S_{n,h}|\geq M)+\mathbb{P}(\sup_{t\in[0,1]}|f^{t}_{n}(p_{c})-f^{t}_{n,h}(p_{c})|\geq 1).$
Now, letting $M\to\infty$ first and then $h\to\infty$, and using the jump-regularity of $f^{t}_{n,h}(p_{c})$ together with (9.12), we obtain that $\lim_{M\to\infty}\mathbb{P}(|S_{n}|\geq M)=0$. Thus, $\mathbb{P}(|S_{n}|<\infty)=1$ and hence $f_{n}^{t}$ is jump-regular.
We are now left to prove (9.12). We subdivide $R^{+}_{n}$ into small cubes $R_{n1},\ldots,R_{nk_{a}}$ of side-length $a$, with $a$ small enough that the diameter of each cube is at most $r_{0}/4$, where $r_{0}$ is as in assumption (8.0). Set
${\mathcal{K}}^{\prime}_{r_{0}}:=\\{K\in{\mathcal{K}}_{r}:B_{r_{0}}(0)\subset
K\\}$. Our key observation is that if for some $t$ and all $i$,
$\tilde{\eta}^{t}_{h}(R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})\neq
0$, then
$\mathcal{O}_{p}(\tilde{\eta}^{t}_{h})=\mathcal{O}_{p}(\tilde{\eta}^{t})$ and
so $f_{n}^{t}(p_{c})=f_{n,h}^{t}(p_{c})$. Thus, we have that
$\\{\sup_{t\in[0,1]}|f^{t}_{n}(p_{c})-f^{t}_{n,h}(p_{c})|\neq
0\\}\subset\bigcup_{i=1}^{k_{a}}\bigcup_{t\in[0,1]}\\{\tilde{\eta}^{t}_{h}(R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})=0\\}.$
(9.13)
We now show that the probability of the latter event can be made small by
choosing a large $h$. By stationarity of the Poisson point process
$\tilde{\eta}_{h}$, it suffices to show that for each $i\leq k_{a}$,
$\lim_{h\to\infty}\mathbb{P}(\bigcup_{t\in[0,1]}\\{\tilde{\eta}^{t}_{h}(R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})=0\\})=0.$
(9.14)
Let us fix an $i\leq k_{a}$ to prove (9.14). Set
$N_{h}:=\tilde{\eta}_{h}(R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})$.
Note that $N_{h}$ is Poisson distributed with mean $a^{d}\beta_{1}\beta_{2}h$
where $\beta_{i}=\mathbb{Q}_{i}({\mathcal{K}}^{\prime}_{r_{0}}),i=1,2$. By
assumption (8.0), $\beta_{i}>0,i=1,2$. By the definition of the Markov process
$\tilde{\eta}^{t}_{h}$, if
$\tilde{\eta}^{t}_{h}(R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})=0$
for some $t\in[0,1]$, then either
$\tilde{\eta}_{h}(R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})=0$
or the lifetime of every particle in $\tilde{\eta}_{h}\cap R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\}$ is at most $1$. Since the lifetimes are all i.i.d. exponential($1$) random variables and $N_{h}$ is a Poisson random variable, we can now bound the required probability:
$\displaystyle\mathbb{P}(\bigcup_{t\in[0,1]}\\{\tilde{\eta}^{t}_{h}(R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})=0\\})$
$\displaystyle\leq\mathbb{P}(\tilde{\eta}_{h}(R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\})=0)$
$\displaystyle\quad+\mathbb{P}(\mbox{life-time of every particle in
$\tilde{\eta}_{h}\cap
R_{ni}\times[0,h]\times{\mathcal{K}}^{\prime}_{r_{0}}\times{\mathcal{K}}^{\prime}_{r_{0}}\times\\{0,1\\}$
is at most $1$})$
$\displaystyle=\exp\\{-a^{d}\beta_{1}\beta_{2}h\\}+\mathbb{E}[(1-e^{-1})^{N_{h}}]$
$\displaystyle=\exp\\{-a^{d}\beta_{1}\beta_{2}h\\}+\exp\\{-a^{d}\beta_{1}\beta_{2}he^{-1}\\}.$
(9.15)
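The last equality in (9.15) is the probability generating function of a Poisson random variable; as a sketch:

```latex
% For N ~ Poisson(\lambda), one has E[z^N] = e^{\lambda(z-1)}. With
% \lambda = a^d \beta_1 \beta_2 h and z = 1 - e^{-1} (the probability that
% an exponential(1) lifetime is at most 1),
\mathbb{E}\big[(1-e^{-1})^{N_{h}}\big]
=\exp\big\{a^{d}\beta_{1}\beta_{2}h\,\big((1-e^{-1})-1\big)\big\}
=\exp\big\{-a^{d}\beta_{1}\beta_{2}h\,e^{-1}\big\}.
```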
Thus letting $h\to\infty$, we obtain (9.14) which gives (9.12) via (9.13).
This completes the proof of jump-regularity of $f_{n}^{t}(p_{c})$ and hence
the proof of the theorem. ∎
Combining the above theorem with [4, Theorem 8.3], we get the following corollary in the case of the planar confetti percolation model with grains supported on balls centred at the origin. Recall that ${\mathcal{K}}_{r}^{b}$ is the space of all balls of radius at most $r$ centred at the origin (see above (8.0)).
###### Corollary 9.7.
Consider the occupied region $\mathcal{O}_{p}$ in the planar confetti
percolation model (i.e., $d=2$) as defined in (9.1) with grain distributions
$\mathbb{Q}_{1},\mathbb{Q}_{2}$ satisfying assumption (8.0) and, further, supported on ${\mathcal{K}}_{r}^{b}$. Then, for $f_{n}:=f_{n}(p_{c})$ as defined in Theorem 9.5, the conclusions of Theorem 9.5 hold.
We now give a second application to noise sensitivity of confetti percolation using the arguments in Theorem 9.3. This applies to more general grain distributions than balls but is less quantitative.
###### Corollary 9.8.
Consider the occupied region $\mathcal{O}_{p}$ in the planar confetti
percolation model (i.e., $d=2$) as defined in (9.1) with grain distributions
$\mathbb{Q}_{1}=\mathbb{Q}_{2}=\mathbb{Q}$ satisfying assumption (8.0).
Further, assume that $\mathbb{Q}$ is invariant under $\pi/2$-rotations and
reflections at the coordinate axes. Then, for $f_{n}:=f_{n}(1/2)=\mathbf{1}\\{Cross_{n,n}(1/2)\\}$, the first two conclusions of Theorem 9.5 hold.
###### Proof.
Observe that by planar duality, a left-right crossing of $[0,n]^{2}$ by
$\mathcal{O}_{p}$ exists iff there is no top-down crossing of $[0,n]^{2}$ by
$\mathcal{V}_{p}$. Further, using the $\pi/2$-rotation invariance of $\mathbb{Q}$, we have that $\mathbb{P}(Cross_{n,n}(p))=1-\mathbb{P}(Cross^{*}_{n,n}(p))$ for
all $p\in[0,1]$. By self-duality at $p=1/2$, we obtain that
$\mathbb{P}(Cross_{n,n}(1/2))=\mathbb{P}(Cross^{*}_{n,n}(1/2))=1/2$ for all
$n\geq 1$. Thus, $f_{n}$ is non-degenerate as in (7.1).
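The self-duality step can be written out in one line; this sketch uses only that black and white grains are exchangeable at $p=1/2$, so that occupied and vacant crossings are equiprobable:

```latex
% Duality: a left-right occupied crossing of [0,n]^2 exists iff there is no
% top-down vacant crossing, and at p = 1/2 the occupied and vacant crossing
% events are equiprobable by colour exchangeability, hence
\mathbb{P}(Cross_{n,n}(1/2))
=1-\mathbb{P}(Cross^{*}_{n,n}(1/2))
=1-\mathbb{P}(Cross_{n,n}(1/2))
\;\Longrightarrow\;
\mathbb{P}(Cross_{n,n}(1/2))=\tfrac{1}{2}.
```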
As remarked below Theorem 9.5, we have that
$\mathbb{P}(Arm_{r,s}(1/2))\leq b_{r}^{-1}\theta_{s-r}(1/2),$
where $b_{r}$ is as defined below (9.6) and $s>2r$. From Zhang’s argument as
elaborated in the proof of Theorem 9.3, we obtain that if $\theta(1/2)>0$ then
$\mathbb{P}(Cross_{n,n}(1/2))\to 1$ as $n\to\infty$. Since
$\mathbb{P}(Cross_{n,n}(1/2))=1/2$ for all $n\geq 1$, we have that
$\theta(1/2)=0$. Thus, we also obtain that $\lim_{s\to\infty}\theta_{s-r}(1/2)=0$, thereby verifying the assumptions of Theorem 9.5. ∎
### 10 Noise sensitivity for the planar Boolean model with unbounded balls
While the previous two sections studied sharp phase transitions and noise sensitivity for some percolation models with a bounded range of dependence, we will now show that our noise sensitivity results can be extended to a model with an unbounded range of dependence. We will focus on a very specific model, the planar Poisson Boolean model, as many explicit estimates are known here; this best illustrates the usefulness of our results as well as the proof ideas.
We shall adopt the framework of [4]. Let
$\tilde{\eta}:=\\{(X_{i},R_{i})\\}_{i\geq
1}\subset{\mathbb{R}}^{2}\times{\mathbb{R}}_{+}$ be a Poisson point process of
intensity $\gamma\,\mathrm{d}x\,\mathbb{Q}(\mathrm{d}r)$, where $\mathbb{Q}$ is a probability distribution on ${\mathbb{R}}_{+}$, called the radii distribution, such that for some $\alpha>0$,
$\int_{0}^{\infty}r^{2+\alpha}\mathbb{Q}(\mathrm{d}r)\in(0,\infty).$ (10.1)
In other words, $\\{X_{i}\\}_{i\geq 1}$ is a stationary Poisson point process
of intensity $\gamma$ and the points are equipped with i.i.d. marks
$R_{i},i\geq 1$. The occupied region of the planar Poisson Boolean model is
defined as
$\mathcal{O}(\gamma):=\mathcal{O}(\tilde{\eta})=\bigcup_{i\geq
1}B_{R_{i}}(X_{i}).$ (10.2)
Similarly, we will use other notation from Section 8. In particular, one can define a critical intensity $\gamma_{c}$ as in (8.3). It was shown in [37] that (10.1) with $\alpha=0$ is necessary for $\gamma_{c}>0$ and, much later, [33] showed that (10.1) with $\alpha=0$ is also sufficient for $\gamma_{c}>0$.
Thus, using a standard coupling with the bounded radius case, one can obtain
that $\gamma_{c}\in(0,\infty)$ under condition (10.1) with $\alpha=0$; see for
example, [4, Theorem 1.1]. We have assumed $\alpha>0$ as the more quantitative
estimates on arm probabilities in [4] require this assumption.
Our main theorem in this section yields quantitative noise sensitivity,
existence of exceptional times and bounds on critical window for crossing
probabilities in the planar Poisson Boolean model as defined above. This was
suggested as an open problem in [4].
###### Theorem 10.1 (Noise sensitivity for planar Poisson Boolean model).
Let $\tilde{\eta}$ be the Poisson point process as defined above with
$\gamma=\gamma_{c}$ and $\mathcal{O}(\gamma)$ be the occupied region of the
Poisson Boolean model as defined in (10.2) with the radii distribution
$\mathbb{Q}$ satisfying (10.1). Let $\kappa>0$ be given. Let
$f_{n}(\gamma_{c})=f_{n}(\tilde{\eta}):=\mathbf{1}\\{Cross_{\kappa
n,n}(\tilde{\eta})\\}$ where $Cross_{\kappa n,n}(\tilde{\eta})$ is the
crossing event defined in (8.2). Then the following statements hold.
1. (i)
$\\{f_{n}(\gamma_{c})\\}_{n\geq 1}$ is noise sensitive.
2. (ii)
$\\{f_{n}(\gamma_{c})\\}_{n\geq 1}$ has an infinite set of exceptional times in $[0,1]$, i.e., $|S_{n}\cap[0,1]|\overset{\mathbb{P}}{\longrightarrow}\infty$, where the set of exceptional times $S_{n}$ is as defined below (7.1).
3. (iii)
There exists $b>0$ such that $\mathbb{E}[f_{n}((1+n^{-b})\gamma_{c})]\to 1$
and $\mathbb{E}[f_{n}((1-n^{-b})\gamma_{c})]\to 0$ as $n\to\infty$.
The last statement can be used to provide an alternative proof of [4, Theorem 5.1], which shows sharpness of the phase transition of crossing probabilities via discretization, an inequality of Talagrand [73] and the Russo-Margulis formula.
###### Proof.
The proof of the theorem relies on our exact estimates for the case of bounded
balls in Theorem 8.3 and the following observation relating noise sensitivity
of a sequence of functions to its approximation. Suppose that
$f_{n},f^{\prime}_{n}:{\mathbf{N}}\to[0,1],n\geq 1$ are two sequences of
functions. Then, using the triangle inequality, we have that
$|\operatorname{{\mathbb{C}ov}}[f_{n}(\tilde{\eta}^{t}),f_{n}(\tilde{\eta})]-\operatorname{{\mathbb{C}ov}}[f^{\prime}_{n}(\tilde{\eta}^{t}),f^{\prime}_{n}(\tilde{\eta})]|\leq
4\mathbb{P}(f_{n}(\tilde{\eta})\neq f^{\prime}_{n}(\tilde{\eta})),$ (10.3)
where $\tilde{\eta}^{t},t\geq 0$ is the Markov process defined in Section 6.1. This is the Markov process driven by the Ornstein-Uhlenbeck semigroup with the same stationary distribution as $\tilde{\eta}$ and with $\tilde{\eta}^{0}=\tilde{\eta}$.
Choose a sequence $r_{n}=n^{1-\epsilon}$, $n\geq 1$, for an $\epsilon<\frac{\alpha}{2+\alpha}$. Define $\tilde{\eta}^{\prime}_{n}:=\\{(X_{i},R_{i})\in\tilde{\eta}:R_{i}\leq r_{n}\\}$ to be the restricted Poisson point process and set $f^{\prime}_{n}(\tilde{\eta}):=\mathbf{1}\\{Cross_{\kappa n,n}(\tilde{\eta}^{\prime}_{n})\\}.$ Because of (10.3), the proof of the
theorem is complete if we show that $f^{\prime}_{n}$ is noise sensitive and
that
$\lim_{n\to\infty}\mathbb{P}(f_{n}(\tilde{\eta})\neq f^{\prime}_{n}(\tilde{\eta}))=0.$ (10.4)
Firstly, we show that $f^{\prime}_{n}(\tilde{\eta})$ is noise sensitive.
Following the proof of Theorem 8.3, we have that there exists a randomized
stopping set $Z^{\prime}_{n}$ determining $f^{\prime}_{n}(\tilde{\eta})$ and
satisfying the assumptions of Proposition 5.3 such that
$\delta^{\prime}_{n}:=\delta(Z^{\prime}_{n})\leq\frac{2}{\kappa
n}\int_{0}^{\kappa
n}\mathbb{P}(Arm_{r_{n},s}(\tilde{\eta}^{\prime}_{n}))\mathrm{d}s,$
where $Arm_{r,s}(\cdot)$ is the one-arm crossing event as defined in (8.2).
Since the one-arm crossing event is increasing and further using [4, Corollary
4.4], we obtain that for some $c_{arm}>0$,
$\delta^{\prime}_{n}\leq\frac{2}{\kappa n}\int_{0}^{\kappa
n}\mathbb{P}(Arm_{r_{n},s}(\tilde{\eta}))\mathrm{d}s\leq\frac{C}{\kappa
n}\left(2r_{n}+\int_{2r_{n}}^{\kappa
n}\Big{(}\frac{r_{n}}{s}\Big{)}^{c_{arm}}\mathrm{d}s\right)\to 0,$
where the convergence is due to the choice of $r_{n}$. Thus $f^{\prime}_{n}$
is noise sensitive by Proposition 5.3. Now we will show (10.4).
Observe that by definition,
$\mathbb{P}(f_{n}(\tilde{\eta})\neq
f^{\prime}_{n}(\tilde{\eta}))\leq\mathbb{P}(\exists\,i\in{\mathbb{N}},R_{i}\geq
r_{n}\,\,\mbox{and}\,\,B_{R_{i}}(X_{i})\cap[0,\kappa n]^{2}\neq\emptyset).$
(10.5)
From arguments similar to that in [4, (2.13)], we have that for some constant
$C$,
$\displaystyle\mathbb{P}(\exists\,i\in{\mathbb{N}},R_{i}\geq
r_{n}\,\,\mbox{and}\,\,B_{R_{i}}(X_{i})\cap[0,\kappa n]^{2}\neq\emptyset)$
$\displaystyle\leq\mathbb{P}(\tilde{\eta}([0,\kappa
n+r_{n}]^{2}\times[r_{n},\infty))\neq 0)$
$\displaystyle\quad+\mathbb{P}(\exists\,i\in{\mathbb{N}},X_{i}\notin[0,\kappa
n+r_{n}]^{2}\,\,\mbox{and}\,\,B_{R_{i}}(X_{i})\cap[0,\kappa
n]^{2}\neq\emptyset)$ $\displaystyle\leq C\gamma_{c}\Big{(}1+\frac{\kappa
n+1}{r_{n}}\Big{)}^{2}\left(r_{n}^{2}\mathbb{Q}((r_{n},\infty))+\int_{r_{n}}^{\infty}r^{2}\mathbb{Q}(\mathrm{d}r)\right)$
$\displaystyle\leq 2C\gamma_{c}\Big{(}1+\frac{\kappa
n+1}{r_{n}}\Big{)}^{2}r_{n}^{-\alpha}\int_{r_{n}}^{\infty}r^{2+\alpha}\mathbb{Q}(\mathrm{d}r)\leq
8C\gamma_{c}n^{-\alpha+\epsilon(2+\alpha)}\int_{r_{n}}^{\infty}r^{2+\alpha}\mathbb{Q}(\mathrm{d}r)\to
0,$ (10.6)
where in the last inequality we have again used the choice of $r_{n}$, and the convergence follows from the choice of $\epsilon$. Thus, using (10.5), we have proven (10.4), and hence the proof of the first statement is complete via (10.3). Further, using the above bounds and (6.8) for the covariance of $f^{\prime}_{n}$, we obtain that
$\operatorname{{\mathbb{C}ov}}[f_{n}(\tilde{\eta}^{t}),f_{n}(\tilde{\eta})]\leq
8C\gamma_{c}n^{-\alpha+\epsilon(2+\alpha)}\int_{r_{n}}^{\infty}r^{2+\alpha}\mathbb{Q}(\mathrm{d}r)+C\delta^{\prime}_{n}\frac{e^{-t}}{(1-e^{-t})^{2}}.$
(10.7)
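The two moment bounds used in the last lines of (10.6) above come from a Markov-type truncation, and the final exponent can be checked from the choice $r_{n}=n^{1-\epsilon}$; as a sketch:

```latex
% Truncation: for r >= r_n one has r^2 <= r_n^{-\alpha} r^{2+\alpha}, so
r_{n}^{2}\,\mathbb{Q}((r_{n},\infty))
\leq r_{n}^{-\alpha}\int_{r_{n}}^{\infty}r^{2+\alpha}\,\mathbb{Q}(\mathrm{d}r),
\qquad
\int_{r_{n}}^{\infty}r^{2}\,\mathbb{Q}(\mathrm{d}r)
\leq r_{n}^{-\alpha}\int_{r_{n}}^{\infty}r^{2+\alpha}\,\mathbb{Q}(\mathrm{d}r).
% Exponent check: with r_n = n^{1-\epsilon},
\Big(1+\frac{\kappa n+1}{r_{n}}\Big)^{2}r_{n}^{-\alpha}
\;\asymp\;n^{2\epsilon}\,n^{-\alpha(1-\epsilon)}
\;=\;n^{-\alpha+\epsilon(2+\alpha)},
% which tends to 0 precisely because \epsilon < \alpha/(2+\alpha).
```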
The second and third statements follow from (10.7), the explicit bounds for $\delta^{\prime}_{n}$ and Theorems 7.1 and 7.3 (see the remark below the theorem), provided we show jump-regularity of $f_{n}$. We will do so now using a strategy similar to that in the proof of Theorem 9.5.
Fix $h>0$. Let $\tilde{\eta}_{h}:=\tilde{\eta}\cap([0,\kappa
n+h]^{2}\times[0,h])$ be the truncated Poisson process and
$g_{n,h}(\tilde{\eta})=\mathbf{1}\\{Cross_{\kappa n,n}(\tilde{\eta}_{h})\\}$
denote the indicator of the crossing event of the window $[0,\kappa
n]\times[0,n]$ by $\mathcal{O}(\tilde{\eta}_{h})$. Since $\tilde{\eta}_{h}$
has finite intensity measure, $g_{n,h}$ is jump-regular. As argued in Theorem
9.5 (see (9.12) and below), the proof of jump-regularity of $f_{n}$ is
complete if we show that
$\lim_{h\to\infty}\sup_{t\in[0,1]}|f_{n}(\tilde{\eta}^{t})-g_{n,h}(\tilde{\eta}^{t})|\overset{p}{=}0.$
(10.8)
Let $\tilde{\eta}_{+}$ denote the new points born in the time interval $[0,1]$ in the Markov process $\tilde{\eta}^{t}$. By the definition of the Markov
process, $\tilde{\eta}_{+}$ is independent of $\tilde{\eta}$ and has intensity
measure $\gamma_{c}\,\mathrm{d}x\,\mathbb{Q}(\mathrm{d}r)$ i.e.,
$\tilde{\eta}_{+}\overset{d}{=}\tilde{\eta}$. Using this and the definitions
of $f_{n},g_{n,h}$, we derive that
$\displaystyle\mathbb{P}(\sup_{t\in[0,1]}|f_{n}(\tilde{\eta}^{t})-g_{n,h}(\tilde{\eta}^{t})|\geq
1)$ $\displaystyle\leq\mathbb{P}(\exists
t\in[0,1],\tilde{\eta}^{t}(\\{(x,r):r>h,B_{r}(x)\cap[0,\kappa
n]^{2}\neq\emptyset\\})\neq 0)$
$\displaystyle\leq\mathbb{P}(\tilde{\eta}(\\{(x,r):r>h,B_{r}(x)\cap[0,\kappa
n]^{2}\neq\emptyset\\})\neq 0)$
$\displaystyle\quad+\mathbb{P}(\tilde{\eta}_{+}(\\{(x,r):r>h,B_{r}(x)\cap[0,\kappa
n]^{2}\neq\emptyset\\})\neq 0)$ $\displaystyle\leq
2\mathbb{P}(\tilde{\eta}(\\{(x,r):r>h,B_{r}(x)\cap[0,\kappa
n]^{2}\neq\emptyset\\})\neq 0)$ $\displaystyle\leq
4C\gamma_{c}\Big{(}1+\frac{\kappa
n+1}{h}\Big{)}^{2}\int_{h}^{\infty}r^{2}\mathbb{Q}(\mathrm{d}r)$
where in the last inequality we have used the bounds derived in (10.6), with $h$ in place of $r_{n}$. The required convergence in (10.8) follows from this last bound, which completes the proof of jump-regularity of $f_{n}$ and hence the proof of the theorem. ∎
The $(2+\alpha)$-moment condition (10.1) can be replaced by a $(d+\alpha)$-moment condition in $d$ dimensions, and while most of the above proof would go through, the arm probability estimates of [4, Corollary 4.4] are not known in this generality; this is the main obstacle to extending the theorem to higher dimensions.
### 11 Further applications
We now briefly indicate some further possible applications of our main
theorems without going into details.
Vacant-set percolation: Our proof methods can also be adapted to show a sharp phase transition for $\mathcal{O}(\gamma)^{c}$, i.e., the region covered by at most $(k-1)$ balls. If $k=0$, this is the vacancy region in the
usual Boolean model and one can find sharp phase transition results for vacant
region of the Boolean model with random radii (including the case of unbounded
support in the planar case) in [4, 5, 63, 25]. Further, phase transition
results for the Boolean model with random radii with unbounded support have
been proven in [33, 35, 25]. A challenging question would be to extend the
results of [25] to Poisson Boolean model with general grains and possibly to
$k$-percolation model.
Voronoi Percolation: The results of the previous section on confetti
percolation can also be shown to hold for the Voronoi percolation model on the
Poisson process. These results were proven in [23, 3] by suitable discretization and application of the OSSS and Schramm-Steif inequalities, respectively. Our construction of stopping sets and proof methodology in the previous section can be extended to the case of Voronoi percolation as well. This will yield a sharp phase transition, as well as noise sensitivity and exceptional times at criticality, more directly from the results of [74].
Further, it should be possible to obtain analogues of Corollary 8.5 for more
general Voronoi percolation models as defined in [4, Section 8] and [31,
Section 1.2]. The necessary estimates for non-degeneracy of crossing events
and decay of arm-probabilities were established in [4, Theorem 8.1].
Level set percolation: Percolation of level sets of Poissonian shot-noise
random fields has been another model that has received some attention (see
[55, 8, 16]). More recently, in [46] RSW-type estimates, non-degeneracy of
crossing events, decay of arm-probabilities and sharp phase transition were
proven. In particular, [46, Proposition 4.2] is proved using the discrete OSSS inequality, and our continuum analogue of the OSSS inequality can again be used to avoid discretization. Further, one would expect to deduce noise sensitivity
and exceptional times for crossings in this model at criticality analogous to
Corollary 8.5.
Other continuum models: One can also apply our methods to prove, more straightforwardly, a sharp phase transition in the random connection model with bounded edges and in the Miller-Abrahams random resistor network with lower-bounded conductances (see [26]). Another model where it would be interesting to apply our results is the Poisson stick model [67]. We had mentioned that $k$-percolation in the Boolean model corresponds to face percolation in the Čech complex. One can also analogously define percolation of faces in the Vietoris-Rips complex built on the Poisson point process. This is the same as clique percolation on the
random graph induced by the Boolean model. Clique percolation was introduced
in [19]. See [43] for the above two models as well as two other models
describing connectivity of faces. Percolation in all these models can be
studied using our results.
### Appendices
In the forthcoming Appendices A & B, we consider the general setup of Section
1.5. In particular, $({\mathbb{X}},{\mathcal{X}})$ denotes a Borel space
endowed with a localizing ring ${\mathcal{X}}_{0}$ defined in terms of an
increasing sequence $\\{B_{n}:n\geq 1\\}$; see Subsection 1.5. Given a locally
finite measure $\lambda$, we write $\Pi_{\lambda}$ to indicate the law of a
Poisson process $\eta$ on ${\mathbb{X}}$ such that
$\lambda=\mathbb{E}[\eta(\cdot)]$.
#### A Graph-measurable mappings and stopping sets
A mapping $Z\colon{\mathbf{N}}\to{\mathcal{X}}$ is called graph-measurable
(see [54]) if $(x,\mu)\mapsto{\mathbf{1}}\\{x\in Z(\mu)\\}$ is a measurable
mapping on ${\mathbb{X}}\times{\mathbf{N}}$. Note that we do not equip
${\mathcal{X}}$ with a $\sigma$-field and that the mapping $Z$ is not assumed
to be a measurable mapping in any sense. All that is required in our paper is
the following measurability property.
###### Lemma A.1.
Suppose that $Z\colon{\mathbf{N}}\to{\mathcal{X}}$ is graph-measurable. Then,
writing $Z(\mu)^{c}={\mathbb{X}}\backslash Z(\mu)$, the mapping
${\mathbf{N}}\times{\mathbf{N}}\rightarrow{\mathbf{N}}\times{\mathbf{N}}\times{\mathbf{N}}\times{\mathbf{N}}:(\nu,\mu)\mapsto(\nu_{Z(\mu)},\mu_{Z(\mu)},\nu_{Z(\mu)^{c}},\mu_{Z(\mu)^{c}})$
is measurable.
###### Proof.
It suffices to prove that the first and third components of the above mapping
are measurable. The proof proceeds as that of Lemma 115-(i) in [45]. Let
$B\in{\mathcal{X}}$. We need to show that $(\nu,\mu)\mapsto\nu_{Z(\mu)}(B)$ is
measurable. By definition of a localizing ring and monotone convergence we may
assume that $B\in{\mathcal{X}}_{0}$. By the monotone class theorem (using
Dynkin systems) it can be shown that the mapping
$\displaystyle(\nu,\mu)\mapsto\int{\mathbf{1}}\\{(x,\mu)\in A,\,x\in
B\\}\,\nu(dx)$
is measurable for each $A\in{\mathcal{X}}\otimes{\mathcal{N}}$. In particular,
writing $H(\mu)$ for either $Z(\mu)$ or $Z(\mu)^{c}$ and applying the graph-
measurability of $Z$ (and therefore of $Z^{c}$), one deduces that
$\int{\mathbf{1}}\\{x\in H(\mu),\,x\in B\\}\,\nu(dx)$ is a measurable function
of $(\nu,\mu)$, as asserted. ∎
We say that a graph-measurable mapping $Z\colon{\mathbf{N}}\to{\mathcal{X}}$ is a stopping set if
$\displaystyle
Z(\mu)=Z(\mu_{Z(\mu)}+\psi_{Z(\mu)^{c}}),\quad\mu,\psi\in{\mathbf{N}}.$ (A.1)
In particular, a stopping set satisfies
$\displaystyle Z(\mu)=Z(\mu_{Z(\mu)}),\quad\mu\in{\mathbf{N}},$ (A.2)
so that $(x,\mu)\mapsto{\mathbf{1}}\\{x\in Z(\mu)\\}$ is
${\mathcal{X}}\otimes{\mathcal{N}}_{Z}$-measurable, where ${\mathcal{N}}_{Z}$
is the $\sigma$-field on ${\mathbf{N}}$ generated by $\mu\mapsto\mu_{Z(\mu)}$.
Observe that, if $Z^{\prime}$ is another stopping set such that $Z(\mu)\subset
Z^{\prime}(\mu)$ for every $\mu\in{\mathbf{N}}$, then (A.2) implies that
$\mathcal{N}_{Z}\subset\mathcal{N}_{Z^{\prime}}$.
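To make (A.1) and (A.2) concrete, the following minimal Python sketch (our illustration, not taken from the paper) realizes a stopping set on ${\mathbb{X}}=[0,\infty)$: configurations are finite lists of points, and $Z(\mu)$ is the interval from $0$ up to the $k$-th smallest point of $\mu$ (all of the half-line when $\mu$ has fewer than $k$ points). Both defining properties are then checked on random configurations.

```python
import random

def Z(mu, k=3):
    """Stopping set: [0, k-th smallest point of mu], encoded by its right
    endpoint (infinity when mu has fewer than k points)."""
    pts = sorted(mu)
    return pts[k - 1] if len(pts) >= k else float("inf")

def restrict(mu, T, inside=True):
    """Restriction of the configuration mu to [0, T] (or to its complement)."""
    return [x for x in mu if (x <= T) == inside]

random.seed(0)
for _ in range(1000):
    mu = [random.uniform(0, 10) for _ in range(random.randrange(8))]
    psi = [random.uniform(0, 10) for _ in range(random.randrange(8))]
    T = Z(mu)
    # (A.1): Z(mu) = Z(mu restricted to Z(mu) + psi restricted to Z(mu)^c)
    assert Z(restrict(mu, T) + restrict(psi, T, inside=False)) == Z(mu)
    # (A.2): Z(mu) = Z(mu restricted to Z(mu))
    assert Z(restrict(mu, T)) == Z(mu)
```

Adding or deleting points strictly beyond the $k$-th one leaves $Z$ unchanged, which is exactly what (A.1) expresses.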
We say that a measurable function $f\colon{\mathbf{N}}\to{\mathbb{R}}$ is
determined by a stopping set $Z$ if $f(\mu)=f(\mu_{Z(\mu)})$ for each
$\mu\in{\mathbf{N}}$. This means that $f$ is ${\mathcal{N}}_{Z}$-measurable.
In this case we even have that
$\displaystyle
f(\mu)=f(\mu_{Z(\mu)}+\psi_{Z(\mu)^{c}}),\quad\mu,\psi\in{\mathbf{N}}.$ (A.3)
This follows by applying the determination property of $f$ with $\mu$ replaced
by $\mu_{Z(\mu)}+\psi_{Z(\mu)^{c}}$ and then (A.1).
In contrast to the classical theory of stopping sets (see e.g. [81]) we do not
assume that $Z$ takes its values in the space of closed (or compact)
subsets of a locally compact Polish space. But if this is the case and if $Z$
is measurable with respect to the Borel $\sigma$-field generated by the Fell
topology, then [9, Proposition A.1] shows that (A.1) is equivalent to the
standard definition of a stopping set, at least in the case of simple point
measures. Moreover, in this case the $\sigma$-field ${\mathcal{N}}_{Z}$
coincides with the classical stopped $\sigma$-field, given as the system of
all sets $A\in{\mathcal{N}}$ such that $A\cap\\{Z\subset
K\\}\in{\mathcal{N}}_{K}$ for each compact $K\subset{\mathbb{X}}$. Our
definition of a stopping set seems to be new in this generality. As already
observed, it is very convenient for the applications developed in our work,
since it allows one to avoid several restrictive (and technical) measurability
and topological assumptions that typically emerge when using the classical
theory.
We will need the following simple lemma.
###### Lemma A.2.
Suppose that $Z$ is a stopping set and let $\mu,\psi\in{\mathbf{N}}$. Then
$\mu_{Z(\mu)}=\psi$ if and only if $\mu_{Z(\psi)}=\psi$. In this case
$Z(\mu)=Z(\psi)$.
###### Proof.
If $\mu_{Z(\mu)}=\psi$, then (A.1) yields $Z(\mu)=Z(\psi)$ and in particular
$\mu_{Z(\psi)}=\psi$. If, conversely, the latter holds, we have that $\psi$ is
supported by $Z(\psi)$ and $\mu_{Z(\psi)}=\psi_{Z(\psi)}$. Applying (A.1)
(with $\psi$ in place of $\mu$) we obtain that
$\displaystyle
Z(\psi)=Z(\psi_{Z(\psi)}+\mu_{Z(\psi)^{c}})=Z(\mu_{Z(\psi)}+\mu_{Z(\psi)^{c}})=Z(\mu).$
This finishes the proof.∎
We now fix a Poisson process $\eta$ on ${\mathbb{X}}$ with locally finite
intensity $\lambda$. The following result is well-known for classical stopping
sets and can be attributed to [68, Theorem 4]; see also [81, 82]. In our
Poisson setting (and still adopting the classical definition of stopping sets
from [81]) it can be derived from Lemma A.3 in [9] and the multivariate Mecke
equation (1.9). Our proof shows that such an approach applies to the more
general definition of a stopping set adopted in this paper.
###### Theorem A.3.
Suppose that $Z$ is a stopping set and let $A\in{\mathcal{N}}$. Then
$\displaystyle\mathbb{P}(\eta_{Z(\eta)^{c}}\in
A\mid\eta_{Z(\eta)})=\Pi_{\lambda_{Z(\eta)^{c}}}(A),\quad\mathbb{P}\text{-a.s.\
on $\\{\eta(Z(\eta))<\infty\\}$}.$ (A.4)
###### Proof.
Let $g\colon{\mathbf{N}}\times{\mathbf{N}}\to{\mathbb{R}}_{+}$ be measurable
and $k\in{\mathbb{N}}$. Given $x_{1},\dots,x_{k}\in{\mathbb{X}}$ we write
$\mathbf{x}=(x_{1},\dots,x_{k})$ and
$\delta_{\mathbf{x}}:=\delta_{x_{1}}+\cdots+\delta_{x_{k}}$. We have that (see
e.g. equation (4.19) in [49])
$\displaystyle
I:=\mathbb{E}[{\mathbf{1}}\\{\eta(Z(\eta))=k\\}g(\eta_{Z(\eta)},\eta_{Z(\eta)^{c}})]=\frac{1}{k!}\mathbb{E}\bigg{[}\int
g(\delta_{\mathbf{x}},\eta_{Z(\eta)^{c}}){\mathbf{1}}\\{\eta_{Z(\eta)}=\delta_{\mathbf{x}}\\}\,\eta^{(k)}(\mathrm{d}\mathbf{x})\bigg{]}.$
By Lemma A.2 we have that $\eta_{Z(\eta)}=\delta_{\mathbf{x}}$ iff
$\eta_{Z(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}$ in which case
$Z(\eta)=Z(\delta_{\mathbf{x}})$. Therefore,
$\displaystyle I$ $\displaystyle=\frac{1}{k!}\mathbb{E}\bigg{[}\int
g(\delta_{\mathbf{x}},\eta_{Z(\delta_{\mathbf{x}})^{c}}){\mathbf{1}}\\{\eta_{Z(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}\\}\,\eta^{(k)}(\mathrm{d}\mathbf{x})\bigg{]}$
$\displaystyle=\frac{1}{k!}\mathbb{E}\bigg{[}\int
g(\delta_{\mathbf{x}},(\eta+\delta_{\mathbf{x}})_{Z(\delta_{\mathbf{x}})^{c}}){\mathbf{1}}\\{(\eta+\delta_{\mathbf{x}})_{Z(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}\\}\,\lambda^{k}(\mathrm{d}\mathbf{x})\bigg{]},$
where we have used the multivariate Mecke equation (1.9); the required
measurability of the mappings
$(\mu,\mathbf{x})\mapsto{\mathbf{1}}\\{\mu=\delta_{\mathbf{x}}\\}$,
$\mathbf{x}\mapsto\delta_{\mathbf{x}}$ and
$(\mu,\mathbf{x})\mapsto\mu_{Z(\delta_{\mathbf{x}})^{c}}$ follows from the
Borel property of ${\mathbb{X}}$ and Lemma A.1. Since
$(\eta+\delta_{\mathbf{x}})_{Z(\delta_{\mathbf{x}})}=\delta_{\mathbf{x}}$ iff
$\mathbf{x}\in Z(\delta_{\mathbf{x}})$ and $\eta(Z(\delta_{\mathbf{x}}))=0$ we
can use the complete independence property of $\eta$ (and Fubini’s theorem) to
obtain that
$\displaystyle I=\frac{1}{k!}\mathbb{E}\bigg{[}\iint
g(\delta_{\mathbf{x}},\psi){\mathbf{1}}\\{\mathbf{x}\in
Z(\delta_{\mathbf{x}}),\eta(Z(\delta_{\mathbf{x}}))=0\\}\,\Pi_{\lambda_{Z(\delta_{\mathbf{x}})^{c}}}(\mathrm{d}\psi)\,\lambda^{k}(\mathrm{d}\mathbf{x})\bigg{]}.$
(A.5)
Reversing the preceding arguments yields
$\displaystyle
I=\mathbb{E}\bigg{[}\int{\mathbf{1}}\\{\eta(Z(\eta))=k\\}g(\eta_{Z(\eta)},\psi)\,\Pi_{\lambda_{Z(\eta)^{c}}}(\mathrm{d}\psi)\bigg{]}.$
A simplified version of the preceding proof shows that this remains true for
$k=0$. Therefore
$\displaystyle\mathbb{E}[{\mathbf{1}}\\{\eta(Z(\eta))<\infty\\}g(\eta_{Z(\eta)},\eta_{Z(\eta)^{c}})]=\mathbb{E}\bigg{[}\int{\mathbf{1}}\\{\eta(Z(\eta))<\infty\\}g(\eta_{Z(\eta)},\psi)\,\Pi_{\lambda_{Z(\eta)^{c}}}(\mathrm{d}\psi)\bigg{]}.$
Taking $g$ of product form, this yields (A.4), provided that
$\Pi_{\lambda_{Z(\eta)^{c}}}(A)$ depends measurably on $\eta_{Z(\eta)}$. Since
$Z(\eta)=Z(\eta_{Z(\eta)})$ this follows from the measurability of
$\mu\mapsto\Pi_{\lambda_{Z(\mu)^{c}}}(A)$, a fact that can be verified as in
the proof of [49, Lemma 13.4]. Indeed, it suffices to take
$A:=\\{\psi\in{\mathbf{N}}:\psi(C_{1})=m_{1},\dots,\psi(C_{n})=m_{n}\\},$
where $C_{1},\ldots,C_{n}\in{\mathcal{X}}$ are pairwise disjoint and
$m_{1},\ldots,m_{n}\in{\mathbb{N}}_{0}$. Then
$\Pi_{\lambda_{Z(\mu)^{c}}}(A)=\prod^{n}_{i=1}\frac{g_{i}(\mu)^{m_{i}}}{m_{i}!}\exp[-g_{i}(\mu)],$
where $g_{i}(\mu):=\int{\mathbf{1}}\\{x\in C_{i},\,x\in
Z(\mu)^{c}\\}\,\lambda(dx)$. Since $(x,\mu)\mapsto{\mathbf{1}}\\{x\notin
Z(\mu)\\}$ is measurable, the functions $g_{i}$ are measurable. ∎
###### Remark A.4.
The proof of Theorem A.3 shows that the full stopping set property (A.1) is
not required to conclude (A.4). We only need that
$Z\colon{\mathbf{N}}\to{\mathcal{X}}$ is graph-measurable and that the
following holds for all $\mu,\psi\in{\mathbf{N}}$ with
$\psi({\mathbb{X}})<\infty$. We have $\mu_{Z(\mu)}=\psi$ if and only if
$\mu_{Z(\psi)}=\psi$. In this case $Z(\mu)=Z(\psi)$.
It is worth mentioning that (A.5) yields a formula for the distribution of
$\eta_{Z(\eta)}$. For a constant mapping $Z$ this reduces to an elementary
property of a Poisson process.
###### Proposition A.5.
Suppose that $Z$ is a stopping set. Then
$\displaystyle\mathbb{P}(\eta(Z(\eta))<\infty,\eta_{Z(\eta)}\in\cdot)=\sum^{\infty}_{k=0}\frac{1}{k!}\int{\mathbf{1}}\\{\delta_{\mathbf{x}}\in\cdot\\}{\mathbf{1}}\\{\mathbf{x}\in
Z(\delta_{\mathbf{x}})\\}\exp[-\lambda(Z(\delta_{\mathbf{x}}))]\,\lambda^{k}(\mathrm{d}\mathbf{x}),$
where the summand for $k=0$ is interpreted as ${\mathbf{1}}\\{{\bf
0}\in\cdot\\}\exp[-\lambda(Z({\bf 0}))]$, with ${\bf 0}$ standing for the zero
measure.
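As a sanity check (ours, not part of the paper), the formula of Proposition A.5 can be tested by simulation for the hypothetical first-point stopping set $Z(\mu)=[0,T_{1}(\mu)]$ on a window $[0,L]$ with unit intensity (and $Z(\mu)=[0,L]$ when $\mu$ is empty): the $k=1$ summand predicts that the single retained atom $T_{1}$ has density $e^{-x}$ on $[0,L]$, while the $k=0$ summand assigns mass $e^{-L}$ to the zero measure.

```python
import math
import random

random.seed(1)
L = 3.0  # window [0, L], unit-intensity Poisson process

def sample_points():
    """One realization of a unit-rate Poisson process on [0, L] via i.i.d. gaps."""
    pts, t = [], random.expovariate(1.0)
    while t <= L:
        pts.append(t)
        t += random.expovariate(1.0)
    return pts

# For Z(mu) = [0, first point of mu], eta restricted to Z(eta) is a single
# atom at T1 with density exp(-x) on [0, L] (k = 1 summand), and the zero
# measure with probability exp(-L) (k = 0 summand).
n = 20000
empty = first_le_1 = 0
for _ in range(n):
    pts = sample_points()
    if not pts:
        empty += 1
    elif pts[0] <= 1.0:
        first_le_1 += 1

assert abs(empty / n - math.exp(-L)) < 0.01          # P(zero measure) = e^{-L}
assert abs(first_le_1 / n - (1 - math.exp(-1.0))) < 0.02  # P(T1 <= 1)
```

The tolerances are generous relative to the Monte Carlo error at this sample size.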
Next we extend Theorem A.3 so as to allow for $\eta(Z(\eta))=\infty$.
###### Theorem A.6.
Suppose that $Z_{1},Z_{2},\ldots$ are stopping sets such that $Z_{n}\uparrow
Z$ for some $Z\colon{\mathbf{N}}\to{\mathcal{X}}$. Then $Z$ is a stopping set.
If, moreover, $\mathbb{P}(\eta(Z_{n}(\eta))<\infty)=1$ for each
$n\in{\mathbb{N}}$ then we have for each $A\in\mathcal{N}$ that
$\displaystyle\mathbb{P}(\eta_{Z(\eta)^{c}}\in
A\mid\eta_{Z(\eta)})=\Pi_{\lambda_{Z(\eta)^{c}}}(A),\quad\mathbb{P}\text{-a.s.}$
(A.6)
###### Proof.
Graph-measurability of $Z$ is obvious. To check (A.1) we take
$\mu,\psi\in{\mathbf{N}}$ such that $\psi(Z(\mu))=0$. Then
$\displaystyle Z(\mu_{Z(\mu)}+\psi)$
$\displaystyle=\bigcup^{\infty}_{n=1}Z_{n}(\mu_{Z(\mu)}+\psi)=\bigcup^{\infty}_{n=1}Z_{n}(\mu_{Z_{n}(\mu)}+\mu_{Z(\mu)\setminus
Z_{n}(\mu)}+\psi)$
$\displaystyle=\bigcup^{\infty}_{n=1}Z_{n}(\mu_{Z(\mu)})=Z(\mu_{Z(\mu)}),$
Taking $\psi=\mu_{Z(\mu)^{c}}$ in this identity gives in addition that
$Z(\mu_{Z(\mu)})=Z(\mu)$, concluding the proof of (A.1).
To prove (A.6) we need to show for each bounded and measurable
$f\colon{\mathbf{N}}\times{\mathbf{N}}\to{\mathbb{R}}_{+}$ that
$\displaystyle\mathbb{E}[f(\eta_{Z(\eta)},\eta_{{\mathbb{X}}\setminus
Z(\eta)})]=\mathbb{E}\bigg{[}\int f(\eta_{Z(\eta)},\mu_{{\mathbb{X}}\setminus
Z(\eta)})\,\Pi_{\lambda}(d\mu)\bigg{]}.$ (A.7)
By a monotone class argument we can assume that there exists
$B\in{\mathcal{X}}_{0}$ such that $f(\mu,\nu)=f(\mu_{B},\nu_{B})$ for all
$\mu,\nu\in{\mathbf{N}}$. It follows from (A.4) (and a monotone class
argument) that
$\displaystyle\mathbb{E}[f(\eta_{Z_{n}(\eta)},\eta_{{\mathbb{X}}\setminus
Z_{n}(\eta)})]=\mathbb{E}\bigg{[}\int
f(\eta_{Z_{n}(\eta)},\mu_{{\mathbb{X}}\setminus
Z_{n}(\eta)})\,\Pi_{\lambda}(d\mu)\bigg{]}.$ (A.8)
Since $\eta(B)<\infty$ we have that
$f(\eta_{Z_{n}(\eta)},\eta_{{\mathbb{X}}\setminus Z_{n}(\eta)})\to
f(\eta_{Z(\eta)},\eta_{{\mathbb{X}}\setminus Z(\eta)})$ with respect to the
discrete topology. By bounded convergence the left-hand side of (A.8) tends to
the left-hand side of (A.7). For the right-hand sides we note that bounded
convergence implies for each $\nu\in{\mathbf{N}}$ that
$\displaystyle\lim_{n\to\infty}\int
f(\nu_{Z_{n}(\nu)},\mu_{{\mathbb{X}}\setminus
Z_{n}(\nu)})\,\Pi_{\lambda}(d\mu)=\int
f(\nu_{Z(\nu)},\mu_{{\mathbb{X}}\setminus Z(\nu)})\,\Pi_{\lambda}(d\mu).$
Again by bounded convergence the right-hand side of (A.8) tends to the right-
hand side of (A.7). ∎
Property (A.6) is sometimes referred to as the Markov property of $\eta$ (see
again [68, Theorem 4] and [82]). To express it in a different way, we let
$\eta^{\prime}$ be an independent copy of $\eta$. Then (A.6) is equivalent to
$\displaystyle\eta\overset{d}{=}\eta_{Z(\eta)}+\eta^{\prime}_{{\mathbb{X}}\backslash
Z(\eta)},$ (A.9)
where the equality in distribution is in the sense of point processes.
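The identity (A.9) lends itself to a direct simulation. The sketch below (our illustration, using the assumed first-point stopping set $Z(\mu)=[0,T_{1}(\mu)]$ on a window $[0,L]$) keeps a unit-rate Poisson sample on $Z(\eta)$ and replaces it by a fresh independent sample outside; by (A.9) the result should again be a unit-rate Poisson process, which is checked through its count distribution.

```python
import random

random.seed(2)
L = 3.0
n = 20000

def poisson_points(a, b):
    """Unit-rate Poisson process on (a, b] via exponential gaps."""
    pts, t = [], a + random.expovariate(1.0)
    while t <= b:
        pts.append(t)
        t += random.expovariate(1.0)
    return pts

def resample_outside_Z(eta):
    """Realize eta_{Z(eta)} + eta'_{X \\ Z(eta)} for the first-point
    stopping set Z(mu) = [0, first point of mu] (= [0, L] if mu is empty)."""
    if not eta:
        return []                        # Z(eta) = [0, L]: nothing to resample
    t1 = eta[0]
    return [t1] + poisson_points(t1, L)  # keep eta on [0, t1], fresh copy beyond

orig = [len(poisson_points(0.0, L)) for _ in range(n)]
res = [len(resample_outside_Z(poisson_points(0.0, L))) for _ in range(n)]

# (A.9): both total counts should be Poisson(L) in distribution.
assert abs(sum(orig) / n - L) < 0.1
assert abs(sum(res) / n - L) < 0.1
assert abs(orig.count(0) / n - res.count(0) / n) < 0.02
```

Stronger distributional comparisons (e.g. of the full count histograms) behave the same way; the mean and void-probability checks above are just the simplest summaries.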
###### Remark A.7.
The preceding results can be generalized as follows. Suppose that
$({\mathbb{M}},\mathcal{M})$ is another Borel space. Consider the product
${\mathbb{X}}\times{\mathbb{M}}$ equipped with the product $\sigma$-field and
some localizing ring. For each measure $\nu$ on
${\mathbb{X}}\times{\mathbb{M}}$ and each $B\in\mathcal{X}$ we denote by
$\nu_{B}:=\nu_{B\times{\mathbb{M}}}$ the restriction of $\nu$ to
$B\times{\mathbb{M}}$. We say that a graph-measurable mapping
$\tilde{Z}\colon{\mathbf{N}}({\mathbb{X}}\times{\mathbb{M}})\to{\mathcal{X}}$
is a stopping set on ${\mathbb{X}}$ if (A.1) holds. Then Theorems A.3 and A.6
remain true in an obviously modified form, with essentially unchanged proofs.
###### Remark A.8.
Let $\tilde{Z}$ be a stopping set as in Remark A.7. Then
$Z:=\tilde{Z}\times{\mathbb{M}}$ is graph-measurable and a stopping set on
$({\mathbb{X}}\times{\mathbb{M}},{\mathcal{X}}\otimes\mathcal{M})$, in the
sense of (A.1).
A weaker version of the next result (corresponding to equation (A.12) below)
was proved in [81, Theorem 3]; see also [65, formula (3.3)].
###### Proposition A.9.
Suppose that $Z,Z_{1},Z_{2},\ldots$ are stopping sets such that
$\mathbb{P}(\eta(Z_{n}(\eta))<\infty)=1$ for each $n\in{\mathbb{N}}$ and
$Z_{n}\uparrow Z$. Then
$\displaystyle{\mathbf{1}}\\{x\in Z(\eta)\\}={\mathbf{1}}\\{x\in
Z(\eta+\delta_{x})\\},\quad\mathbb{P}\text{-a.s.},\,\lambda\text{-a.e.\ $x$}.$
(A.10)
###### Proof.
Let $B\in{\mathcal{X}}_{0}$. Then
$\displaystyle\mathbb{E}[\eta(B\cap Z(\eta))]$
$\displaystyle=\lambda(B)-\mathbb{E}[\mathbb{E}[\eta(B\cap
Z(\eta)^{c})\mid\eta_{Z(\eta)}]]$
$\displaystyle=\lambda(B)-\mathbb{E}\bigg{[}\int\mu(B)\,\Pi_{\lambda_{Z(\eta)^{c}}}(d\mu)\bigg{]}$
$\displaystyle=\lambda(B)-\mathbb{E}[\lambda_{Z(\eta)^{c}}(B)]=\mathbb{E}[\lambda(B\cap
Z(\eta))],$ (A.11)
where we have used (A.6) for the second identity. On the other hand we obtain
from the Mecke equation (1.9) that
$\displaystyle\mathbb{E}[\eta(B\cap
Z(\eta))]=\mathbb{E}\bigg{[}\int{\mathbf{1}}\\{x\in B\\}{\mathbf{1}}\\{x\in
Z(\eta+\delta_{x})\\}\,\lambda(dx)\bigg{]}.$
Therefore,
$\displaystyle\int{\mathbf{1}}\\{x\in B\\}\mathbb{P}(x\in
Z(\eta))\,\lambda(dx)=\int{\mathbf{1}}\\{x\in B\\}\mathbb{P}(x\in
Z(\eta+\delta_{x}))\,\lambda(dx),$
and since $B$ is arbitrary this shows that
$\displaystyle\mathbb{P}(x\in Z(\eta))=\mathbb{P}(x\in
Z(\eta+\delta_{x})),\quad\lambda\text{-a.e.\ $x$}.$ (A.12)
Take $\mu\in{\mathbf{N}}$ and $x\in{\mathbb{X}}$ such that $x\notin
Z(\mu+\delta_{x})$. By (A.1),
$\displaystyle
Z(\mu+\delta_{x})=Z((\mu+\delta_{x})_{Z(\mu+\delta_{x})}+\mu_{Z(\mu+\delta_{x})^{c}})=Z(\mu_{Z(\mu+\delta_{x})}+\mu_{Z(\mu+\delta_{x})^{c}})=Z(\mu),$
so that $x\notin Z(\mu)$. Therefore $\\{x\in Z(\eta)\\}\subset\\{x\in
Z(\eta+\delta_{x})\\}$ and (A.12) implies the asserted result (A.10). ∎
#### B Non-attainable stopping sets
For the rest of the section, we fix a locally finite measure $\lambda$ on
$({\mathbb{X}},{\mathcal{X}})$ and denote by $\eta$ a Poisson process on
${\mathbb{X}}$ with intensity $\lambda$. Recall from Remark 3.4 that a
stopping set $Z$ is said to be $\lambda$-attainable if $Z=\cup_{t}Z_{t}$ for
some $\lambda$-continuous (that is, verifying (3.4)) CTDT $\\{Z_{t}\\}$. The
fact that the OSSS inequality (3.6) has only been proved for
$\lambda$-attainable stopping sets raises the question of whether there exist
stopping sets $Z$ that are not $\lambda$-attainable (and therefore for which
the validity of the OSSS inequality is not established). The principal aim of
this section is to answer this question in the affirmative by proving the next
statement.
###### Proposition B.1 (Existence of non-attainable stopping sets).
Let $Z$ be a stopping set such that $\lambda(Z(\mu))>0$ for every
$\mu\in{\mathbf{N}}$. Assume that there exists a measurable partition
$\\{C_{i}:i=1,...,m\\}$ of ${\mathbb{X}}$ such that, for every $i$,
$\mathbb{P}(\lambda(C_{i}\cap Z(\eta))=0)>0$. Then, there is no
$\lambda$-continuous CTDT $\\{Z_{t}\\}$ such that $Z=\cup_{t}Z_{t}$.
###### Example 2.
The following example is inspired by the discussion contained in [77, Example
1.28], that was brought to our attention by Laurin Koehler-Schindler. Let
$\lambda({\mathbb{X}})\in(0,\infty)$ and assume that ${\mathbb{X}}$ equals the
disjoint union of three measurable sets $C_{1},C_{2},C_{3}$ such that
$\lambda(C_{i})>0$, $i=1,2,3$. For $\mu\in{\mathbf{N}}$, set
$X_{i}(\mu):={\mathbf{1}}\\{\mu(C_{i})>0\\}$ for $i=1,2,3$. We define
$\displaystyle Z(\mu):=\begin{cases}C_{1}\cup C_{2}\cup
C_{3}={\mathbb{X}},&\text{if $X_{1}(\mu)=X_{2}(\mu)=X_{3}(\mu)$},\\\ C_{1}\cup
C_{2},&\text{if $X_{1}(\mu)=1,X_{2}(\mu)=0$},\\\ C_{1}\cup C_{3},&\text{if
$X_{1}(\mu)=0,X_{3}(\mu)=1$},\\\ C_{2}\cup C_{3},&\text{if
$X_{2}(\mu)=1,X_{3}(\mu)=0$}.\end{cases}$
Then, it is easily seen that $Z$ is a stopping set such that
$\lambda(Z(\mu))>0$ for every $\mu\in{\mathbf{N}}$, and
$\mathbb{P}(\lambda(C_{i}\cap Z(\eta))=0)>0$ for every $i=1,2,3$.
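Since the set $Z(\mu)$ of Example 2 depends on $\mu$ only through the indicators $X_{i}(\mu)$, the stopping property (A.1) can be verified by brute force over all pairs of indicator configurations; the following Python sketch (ours, for illustration) does exactly that.

```python
from itertools import product

def Z(x):
    """Stopping set of Example 2 as a subset of {1, 2, 3}; x = (X1, X2, X3)."""
    x1, x2, x3 = x
    if x1 == x2 == x3:
        return frozenset({1, 2, 3})
    if x1 == 1 and x2 == 0:
        return frozenset({1, 2})
    if x1 == 0 and x3 == 1:
        return frozenset({1, 3})
    return frozenset({2, 3})             # remaining case: x2 == 1, x3 == 0

def restrict(x, S):
    """Indicators of the configuration restricted to the union of C_i, i in S."""
    return tuple(xi if i + 1 in S else 0 for i, xi in enumerate(x))

# Check (A.1) over all 8 x 8 pairs of indicator configurations (mu, psi):
# the indicators of mu_{Z(mu)} + psi_{Z(mu)^c} are the coordinate-wise maxima
# of the two restricted indicator triples, since their supports are disjoint.
for mu, psi in product(product((0, 1), repeat=3), repeat=2):
    S = Z(mu)
    combined = tuple(
        max(a, b)
        for a, b in zip(restrict(mu, S), restrict(psi, frozenset({1, 2, 3}) - S))
    )
    assert Z(combined) == Z(mu)          # (A.1) holds for every (mu, psi)
```

The exhaustive loop also confirms that the four cases in the definition of $Z$ cover all eight indicator configurations consistently.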
The proof of Proposition B.1 is based on a non-trivial zero-one law for CTDTs,
stated in the forthcoming Theorem B.3. We first prove an ancillary result.
###### Lemma B.2.
Suppose that $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ is a $\lambda$-continuous
CTDT. Define the mapping $\tau\colon{\mathbf{N}}\to[0,\infty]$ by
$\displaystyle\tau(\mu):=\inf\\{t\geq
0:\lambda(Z_{t}(\mu))>0\\},\quad\mu\in{\mathbf{N}},$ (B.13)
with the usual convention $\inf\emptyset:=+\infty$. Then, writing
$Z_{\infty}(\mu):=\cup_{t}Z_{t}(\mu)$, one has that $\mu\mapsto
Z^{u}(\mu):=Z_{\tau(\mu)+u}(\mu)$ is a stopping set for every $u\geq 0$.
Moreover, one has necessarily that $\lambda(Z^{0}(\mu))=0$, for every
$\mu\in{\mathbf{N}}$.
###### Proof.
Given $\mu\in{\mathbf{N}}$ and $t\geq 0$ we have $\tau(\mu)<t$ if and only if
$\lambda(Z_{t}(\mu))>0$. Hence, $\tau$ is measurable and $Z^{u}$ is graph-
measurable since, for every $(\mu,x)\in{\mathbf{N}}\times{\mathbb{X}}$, the
quantity ${\bf 1}\\{x\in Z^{u}(\mu)\\}$ is the limit of
$\displaystyle{\bf 1}\\{\tau(\mu)\\!=\\!+\infty,\,x\in
Z_{\infty}\\}\\!+\\!{\bf 1}\\{\tau(\mu)\\!=\\!0,\,x\in
Z_{u}\\}\\!+\\!\sum_{k=0}^{\infty}{\bf
1}\left\\{\tau(\mu)\\!\in\\!\left(\frac{k}{n},\frac{k+1}{n}\right],\,x\in
Z_{\frac{k+1}{n}+u}\right\\},$
as $n\to\infty$, where we have used (2.2). It is also immediately checked that
$\lambda(Z^{0}(\mu))=0$, because of the $\lambda$-continuity property (3.4).
Now take $u\geq 0$ and $\mu,\psi\in{\mathbf{N}}$ such that
$\psi(Z^{u}(\mu))=0$; we claim that
$\displaystyle\tau(\mu)=\tau(\mu_{Z^{u}(\mu)}+\psi).$ (B.14)
Reasoning as in the first part of the proof of Theorem A.6, one sees that
(B.14) is true whenever $\tau(\mu)=+\infty$. Now assume $\tau(\mu)<\infty$; we
will prove (B.14) for $u=0$, from which the general case follows at once.
Abbreviate $s:=\tau(\mu)$, in such a way that $Z_{s}(\mu)=Z^{0}(\mu)$. Since
$\lambda(Z_{s}(\mu))=0$ and $Z_{s}(\mu)=Z_{s}(\mu_{Z_{s}(\mu)}+\psi)$ we obtain
that $\tau(\mu_{Z^{0}(\mu)}+\psi)\geq s$. On the other hand, we have for each
$t>s$ that $\lambda(Z_{t}(\mu))>0$. Since the $Z_{t}$ are
${\mathcal{X}}_{0}$-valued and $\mu,\psi$ are locally finite, we can exploit
the right-continuity (2.2) of a CTDT to find a $t>s$ satisfying
$\displaystyle\mu(Z_{t}(\mu)\setminus Z_{s}(\mu))=\psi(Z_{t}(\mu)\setminus
Z_{s}(\mu))=0.$
But then, $\psi(Z_{t}(\mu))=0$ and
$\displaystyle\lambda(Z_{t}(\mu_{Z_{s}(\mu)}+\psi))=\lambda(Z_{t}(\mu_{Z_{t}(\mu)}+\psi))=\lambda(Z_{t}(\mu))>0,$
which shows that $\tau(\mu_{Z^{0}(\mu)}+\psi)\leq t$ for every $t>s$ such that
$t-s$ is sufficiently small, yielding (B.14). To conclude, set
$\mu^{\prime}:=\mu_{Z^{u}(\mu)}+\psi$, with $\psi(Z^{u}(\mu))=0$. By (B.14) we
have $\tau(\mu^{\prime})=\tau(\mu)$. Therefore,
$\displaystyle
Z^{u}(\mu_{Z^{u}(\mu)}+\psi)=Z_{\tau(\mu^{\prime})+u}(\mu^{\prime})=Z_{\tau(\mu)+u}(\mu^{\prime})=Z_{\tau(\mu)+u}(\mu_{Z^{u}(\mu)}+\psi)=Z_{\tau(\mu)+u}(\mu)=Z^{u}(\mu).$
Hence, $Z^{u}$ is a stopping set. ∎
The next statement is the most important result of the section.
###### Theorem B.3 (Zero-one laws for CTDT).
Let $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$ be a $\lambda$-continuous CTDT, and
define the mapping $\tau(\cdot)$ as in (B.13). Assume that $\tau(\mu)<\infty$
for every $\mu\in{\mathbf{N}}$ and write
$\mathcal{G}_{u}:=\sigma(\eta_{Z^{u}(\eta)}),\quad u\geq 0,$ (B.15)
where the stopping sets $Z^{u}$ are defined in the statement of Lemma B.2.
Then $\mathcal{G}_{u}\subset\mathcal{G}_{v}$ for every $u<v$. Moreover,
$\eta_{Z^{0}(\eta)}$ is equal to the zero measure with probability one, and
the $\sigma$-field
$\mathcal{G}_{0+}:=\bigcap_{u>0}\mathcal{G}_{u},$
is $\mathbb{P}$-trivial, that is: for all $B\in\mathcal{G}_{0+}$,
$\mathbb{P}(B)\in\\{0,1\\}$.
###### Proof.
The first part of the statement follows immediately from Lemma A.1 and the
fact that, for every $u<v$, $Z^{u}$ and $Z^{v}$ are stopping sets such that
$Z^{u}(\mu)\subset Z^{v}(\mu)$ for all $\mu\in{\mathbf{N}}$. The fact that
$\eta_{Z^{0}(\eta)}$ equals the zero measure ${\bf 0}$, $\mathbb{P}$-a.s., is a direct
consequence of Proposition A.5 and of the fact that $\lambda(Z^{0}({\bf
0}))=0$, by virtue of Lemma B.2. To show the triviality of $\mathcal{G}_{0+}$,
it is enough to show that, for every bounded $\sigma(\eta)$-measurable random
variable $Y$,
$\mathbb{E}[Y\,|\,\mathcal{G}_{0+}]=\mathbb{E}[Y],\quad\mathbb{P}\mbox{\rm-a.s.}\;$
(B.16)
indeed, if (B.16) holds then, for every $B\in\mathcal{G}_{0+}$, one has
that ${\bf 1}_{B}=\mathbb{P}(B)$, $\mathbb{P}$-a.s., from which the triviality
follows. By a monotone class argument, it is sufficient to prove (B.16) for
every $Y=\exp\\{i\int fd\eta\\}$, where $f$ is a bounded deterministic kernel,
whose support is contained in some $A\in{\mathcal{X}}_{0}$. Now select a
sequence $\epsilon_{n}\downarrow 0$. By the backwards martingale convergence
theorem, one has that
$\mathbb{E}[Y\,|\,\mathcal{G}_{0+}]=\lim_{n\to\infty}\mathbb{E}[Y\,|\,\mathcal{G}_{\epsilon_{n}}],\quad\mathbb{P}\mbox{\rm-a.s.}.$
Exploiting the Markov property (A.6) one has also that, for $\epsilon>0$,
$\mathbb{E}[Y\,|\,\mathcal{G}_{\epsilon}]=\exp\left\\{i\int_{Z^{\epsilon}(\eta)}fd\eta\right\\}\times\int\exp\left\\{i\int_{{\mathbb{X}}\backslash
Z^{\epsilon}(\eta)}fd\mu\right\\}\Pi_{\lambda}(d\mu),$
and the right-hand side of the previous equality converges $\mathbb{P}$-a.s.
to
$\int\exp\left\\{i\int_{{\mathbb{X}}}fd\mu\right\\}\Pi_{\lambda}(d\mu)=\mathbb{E}[Y],$
as $\epsilon\downarrow 0$, since $Z^{\epsilon}(\mu)\downarrow Z^{0}(\mu)$, for
all $\mu\in{\mathbf{N}}$, and
$\int_{Z^{0}(\eta)}fd\eta=0=\lambda(Z^{0}(\eta))$, $\mathbb{P}$-a.s.. ∎
###### Proof of Proposition B.1.
The proof is by contradiction. Assume that $\\{Z_{t}:t\in{\mathbb{R}}_{+}\\}$
is a $\lambda$-continuous CTDT such that $Z=\cup_{t}Z_{t}$, and observe that
the assumptions on $Z$ imply that $\tau(\mu)<\infty$ for every
$\mu\in{\mathbf{N}}$. For every $\mu\in{\mathbf{N}}$, we define
$\mathcal{U}(\mu)=\\{A\in\mathcal{X}:\lambda(A\cap
Z^{\epsilon}(\mu))>0,\,\,\forall\epsilon>0\\}.$
It is readily checked that, for every $D\in\mathcal{X}$, the event
$\\{D\in\mathcal{U}(\eta)\\}$ is in $\mathcal{G}_{0+}$, and therefore
$\mathbb{P}[D\in\mathcal{U}(\eta)]\in\\{0,1\\}$, by virtue of Theorem B.3. By
the definition of $\tau(\eta)$, there exists at least one index
$i\in\\{1,...,m\\}$ such that $\mathbb{P}[C_{i}\in\mathcal{U}(\eta)]>0$, and
therefore $\mathbb{P}[C_{i}\in\mathcal{U}(\eta)]=1$. But this is absurd, since
then
$0=\mathbb{P}[C_{i}\notin\mathcal{U}(\eta)]\geq\mathbb{P}[\lambda(C_{i}\cap
Z(\eta))=0]>0,$
where we have used the assumptions on $C_{i}$ in the statement. ∎
Example 2 gives explicitly a stopping set that is not $\lambda$-attainable by
any CTDT. However, one may wonder whether there are stopping sets that are not
$\lambda$-attainable by any randomized CTDT as defined in Section 4. We will
now show that this is true for the stopping set in Example 2.
###### Example 3.
Let $Z$ be the stopping set defined in Example 2. Suppose that
$\\{Z_{t}=Z_{t}^{Y}:t\in{\mathbb{R}}_{+}\\}$ is a $\lambda$-continuous
randomized CTDT for an independent random variable $Y$ such that
$Z=\cup_{t}Z_{t}$. Define $\mathcal{U}(\eta)=\mathcal{U}^{Y}(\eta)$ as in the
proof of Proposition B.1. Without loss of generality assume that
$\mathbb{P}(C_{1}\in\mathcal{U}(\eta))>0$.
Observe that $\lambda(Z^{0}(\eta))=0$ for $Z^{0}$ as defined in Lemma B.2, and
so, as in the proof of Theorem B.3, we have that $\eta_{Z^{0}(\eta)}={\bf 0}$,
$\mathbb{P}$-a.s. Thus, we derive that, $\mathbb{P}$-a.s.,
$\displaystyle\mathbb{P}(\eta(C_{2})>0,\eta(C_{3})=0\,|\,\eta_{Z^{0}(\eta)})$
$\displaystyle=\mathbb{P}(\eta(C_{2}\setminus
Z^{0}(\eta))>0,\eta(C_{3}\setminus Z^{0}(\eta))=0\,|\,\eta_{Z^{0}(\eta)})$
$\displaystyle=(1-e^{-\lambda(C_{2})})e^{-\lambda(C_{3})}=:\kappa$ (B.17)
where in deriving the second equality, we have used the strong Markov property
(A.6) and the complete independence property of the Poisson process as well as
that $Z^{0}(\eta)$ is a stopping set with $\lambda(Z^{0}(\eta))=0$. Now, for
any $\epsilon>0$, using (B.17), we derive that
$\displaystyle 0$
$\displaystyle<\kappa\mathbb{P}(C_{1}\in\mathcal{U}(\eta))\leq\kappa\mathbb{P}(C_{1}\cap
Z^{\epsilon}(\eta)\neq\emptyset)$
$\displaystyle\leq\mathbb{E}[\mathbb{P}(C_{1}\cap
Z^{\epsilon}(\eta)\neq\emptyset\,|\,\eta_{Z^{0}(\eta)})\mathbb{P}(\eta(C_{2}\setminus
Z^{0}(\eta))>0,\eta(C_{3}\setminus Z^{0}(\eta))=0\,|\,\eta_{Z^{0}(\eta)})]$
$\displaystyle=\mathbb{E}[\mathbb{P}(C_{1}\cap
Z^{\epsilon}(\eta)\neq\emptyset,\eta(C_{2})>0,\eta(C_{3})=0\,|\,\eta_{Z^{0}(\eta)})]$
$\displaystyle=\mathbb{P}(C_{1}\cap
Z^{\epsilon}(\eta)\neq\emptyset,\eta(C_{2})>0,\eta(C_{3})=0)$
$\displaystyle\leq\mathbb{P}(C_{1}\cap Z\neq\emptyset,Z=C_{2}\cup C_{3})=0,$
where in the first equality we have again used the strong Markov property
(A.6) and the complete independence property of the Poisson process, as well
as the fact that $Z^{0}(\eta)$ is a stopping set. Thus, $Z$ cannot be realized
by a randomized CTDT either.
#### Acknowledgments
The authors thank Sergei Zuyev for sharing an earlier
version of his continuum noise sensitivity notes. We are thankful to Raphael
Lachièze-Rey for discussions regarding CTDT and suggesting some simplified
CTDT constructions. We are thankful to Laurin Koehler-Schindler, Stephen
Muirhead, Vincent Tassion and Hugo Vanneuville for some comments regarding the
discrete OSSS and Schramm-Steif inequalities. We thank Srikanth K. Iyer for
discussing and pointing out percolation theory literature related to RSW-type
estimates and comments on a preliminary draft.
DY’s research was supported by DST-INSPIRE faculty award and CPDA from the
Indian Statistical Institute. GP’s research was supported by the FNR grant
FoRGES (R-AGR3376-10) at Luxembourg University. The work significantly
benefitted from reciprocal visits to our respective institutions and we wish
to thank all three institutions for hosting us and supporting our visits.
### References
* [1] R. Adamczak, B. Polaczyk, M. Strzelecki. Modified log-Sobolev inequalities, Beckner inequalities and moment estimates. J. Func. Anal., 282(7). 2022.
* [2] D. Ahlberg, E. Broman, S. Griffiths and R. Morris. Noise sensitivity in continuum percolation. Isr. J. Math., 201(2), 847–899. 2014.
* [3] D. Ahlberg and R. Baldasso. Noise sensitivity and Voronoi percolation. Elec. J. Prob., 23, 2018.
* [4] D. Ahlberg, V. Tassion, and A. Teixeira. Sharpness of the phase transition for continuum percolation in $\mathbb{R}^{2}$. Prob. Th. Rel. Fields, 172(1-2), 525–581. 2018.
* [5] D. Ahlberg, V. Tassion and A. Teixeira. Existence of an unbounded vacant set for subcritical continuum percolation. Elec. Comm. Prob., 23, 2018.
* [6] M. Aizenman and D. J. Barsky. Sharpness of the phase transition in percolation models. Comm. Math. Phy., 108(3), 489–526. 1987.
* [7] K. S. Alexander. The RSW theorem for continuum percolation and the CLT for Euclidean minimal spanning trees. Ann. Appl. Prob., 6(2), 466–494. 1996.
* [8] K. S. Alexander. Boundedness of level lines for two-dimensional random fields. Ann. Prob., 24(4),1653–1674. 1996.
* [9] V. Baumstark and G. Last. Gamma distributions for stationary Poisson flat processes. Adv. Appl. Prob., 41, 911–939, 2009.
* [10] I. Benjamini. and O. Schramm. Exceptional planes of percolation. Prob. Th. Rel. Fields, 111(4), 551–564. 1998.
* [11] I. Benjamini and O. Schramm. Percolation in the hyperbolic plane. J. Amer. Math. Soc., 14(2), 487–507. 2001.
* [12] I. Benjamini, G. Kalai and O. Schramm. Noise sensitivity of Boolean functions and applications to percolation. Publ. Math. de l’IHES, 90(1), 5–43, 1999.
* [13] B. Błaszczyszyn and D. Yogeshwaran. Clustering and percolation of point processes. Elec. J. Prob., 18(72), 2013.
* [14] C. Bordenave, Y. Gousseau and F. Roueff. The dead leaves model: a general tessellation modelling occlusion. Adv. Appl. Prob., 38(1), 31–46. 2006.
* [15] B. Bollobás and O. Riordan. Percolation. Cambridge University Press. 2006.
* [16] E. Broman and R. Meester. Phase transition and uniqueness of level set percolation. J. Stat. Phys., 167(6), 1376–1400. 2017.
* [17] S. Boucheron, G. Lugosi, and P. Massart. Concentration inequalities. Oxford University Press, Oxford, 2013.
* [18] D. Cordero-Erausquin and M. Ledoux. Hypercontractive measures, Talagrand’s inequality, and influences. In : Geometric aspects of functional analysis. 169–189, Springer, Berlin, Heidelberg. 2012.
* [19] I. Derényi, G. Palla, and T. Vicsek. Clique percolation in random networks. Phy. Rev. Let., 94(16). 2005.
* [20] H. Duminil-Copin and V. Tassion. A new proof of the sharpness of the phase transition for Bernoulli percolation and the Ising model. Comm. Math. Phy., 343(2), 725–745. 2016.
* [21] H. Duminil-Copin. Lectures on the Ising and Potts models on the hypercubic lattice. In: M. T. Barlow and G. Slade (eds.) Random Graphs, Phase Transitions, and the Gaussian Free Field. 35–161. Springer, Cham. 2017.
* [22] H. Duminil-Copin. Introduction to Bernoulli percolation. Lecture Notes. Available at https://www.ihes.fr/~duminil/publi/2017percolation.pdf, 2018.
* [23] H. Duminil-Copin, A. Raoufi and V. Tassion. Sharp phase transition for the random-cluster and Potts models via decision trees. Ann. Math., 189(1) 75–99. 2019.
* [24] H. Duminil-Copin, A. Raoufi and V. Tassion. Exponential decay of connection probabilities for subcritical Voronoi percolation in $\mathbb{R}^{d}$. Prob. Th. Rel. Fields, 173(1-2), 479–490. 2019.
* [25] H. Duminil-Copin, A. Raoufi and V. Tassion. Subcritical phase of $d$-dimensional Poisson-Boolean percolation and its vacant set. Ann. Henri Leb., 3, 677–700. 2020.
* [26] A. Faggionato and H. A. Mimun. Connection probabilities in Poisson random graphs with uniformly bounded edges. Lat. Am. J. Probab. Math. Stat., 16, 463–486. 2019.
* [27] A. Gandolfi, M. S. Keane and C. M. Newman. Uniqueness of the infinite component in a random graph with applications to percolation and spin glasses. Prob. Th. Rel. Fields, 92(4), 511–527. 1992
* [28] C. Garban. Oded Schramm’s contributions to noise sensitivity. Ann. Prob., 39(5), 1702–1767. 2011.
* [29] C. Garban and J. E. Steif. Noise Sensitivity of Boolean Functions and Percolation. Cambridge University Press, 2014.
* [30] C. Garban and H. Vanneuville. Bargmann-Fock percolation is noise sensitive. Elec. J. Prob., 25, 1-20. 2020.
* [31] P.P. Ghosh and R. Roy. Criticality and covered area fraction in confetti and Voronoi percolation. J. Stat. Phy., 186 (1), 1-26. 2022.
* [32] E.N. Gilbert. Random plane networks. J. Soc. Ind. Appl. Math., 9(4), 533–543. 1961.
* [33] J-B. Gouéré. Subcritical regimes in the Poisson Boolean model of continuum percolation. Ann. Prob. 36(4) 1209–20. 2008.
* [34] J-B. Gouéré. Percolation in a multiscale Boolean model. Lat. Am. J. Probab. Math. Stat., 11-1. 2014.
* [35] J-B. Gouéré and M. Théret. Equivalence of some subcritical properties in continuum percolation. Bern., 25(4B),3714–3733, 2019.
* [36] O. Häggström, Y. Peres and J. E. Steif. Dynamical percolation. Ann. l’Institut Henri Poincaré, Prob. et Stat., 33(4), 497–528. 1997.
* [37] P. Hall. On continuum percolation. Ann. Prob., 13(4), 1250–1266. 1985.
* [38] M. Heveling and M. Reitzner. Poisson-Voronoi approximation. Ann. Appl. Prob. 19(2), 719–736, 2019.
* [39] M. Heydenreich, R. van der Hofstad, G. Last and K. Matzke. Lace expansion and mean-field behavior for the random connection model. arXiv:1908.11356. 2019.
* [40] C. Hirsch. A Harris‐Kesten theorem for confetti percolation. Rand. Struct. Alg., 47(2), 361–385. 2015.
* [41] T. Hutchcroft. New critical exponent inequalities for percolation and the random cluster model. Prob. Math. Phy., 1(1), 147–165. 2020.
* [42] S. K. Iyer and S. K. Jhawar. Phase transitions and percolation at criticality in planar enhanced random connection models. Elec. J. Prob., 26, 1-23. 2021.
* [43] S. K. Iyer and D. Yogeshwaran. Thresholds for vanishing of ‘Isolated’ faces in random Čech and Vietoris–Rips complexes. Ann. l’Institut Henri Poincaré, Prob. et Stat., 56(3), 1869–1897. 2020.
* [44] D. Jeulin. Dead leaves models: from space tessellation to random functions. In Proc. of the Symposium on the Advances in the Theory and Applications of Random Sets (Fontainebleau, 9-11 October 1996), D. Jeulin (ed), World Scientific Publishing Company. 137–156. 1997.
* [45] O. Kallenberg. Random Measures, Theory and Applications. Springer, Cham. 2017.
* [46] R. Lachièze-Rey and S. Muirhead. Percolation of the excursion sets of planar symmetric shot noise fields. Stoch. Proc. Applns., 147, 175-209. 2022.
* [47] G. Last. Stochastic analysis for Poisson processes. In: G. Peccati and M. Reitzner (eds.) Stochastic Analysis for Poisson Point Processes. Springer, Milan, 1–36. 2016.
* [48] G. Last, G. Peccati and M. Schulte. Normal approximation on Poisson spaces: Mehler’s formula, second order Poincaré inequalities and stabilization. Prob. Th. Rel. Fields, 165(3-4), 667–723. 2016.
* [49] G. Last and M. Penrose. Lectures on the Poisson Process. Cambridge University Press., Cambridge, 2017.
* [50] G. Matheron. Schéma booléen séquentiel de partitions aléatoires. Note géostatistique, 89, Centre de Morphologie Mathématique, Fontainebleau. 1968. Available from http://cg.ensmp.fr/bibliotheque/cgi-bin/public/bibli_index.cgi
* [51] R. Meester and R. Roy. Continuum percolation. Camridge University Press., Cambridge, (Vol. 119). 1996.
* [52] M. V. Menshikov. Coincidence of critical points in percolation problems. In Soviet Mathematics Doklady 33, 856–859. 1986.
* [53] M. V. Menshikov, S. A. Molchanov and A. F. Sidorenko. Percolation theory and some applications. Itogi Nauki i Tekhniki. (Series of Probability Theory, Mathematical Statistics, Theoretical Cybernetics), 24, 53–110. 1986.
* [54] I. Molchanov. Theory of Random Sets. Springer, London, 2005.
* [55] S.A. Molchanov and A.K. Stepanov. Percolation in random fields. II. Theor. Math. Phys., 55(3), 592–-599. 1983.
* [56] T. Müller. The critical probability for confetti percolation equals 1/2. Rand. Struct. Alg., 50(4), 679–697. 2017.
* [57] I. Nourdin, G. Peccati and X. Yang. Restricted hypercontractivity on the Poisson space. Proc. Amer. Math. Soc., 148(8), 3617-3632. 2020.
* [58] R. O’Donnell. Analysis of Boolean functions. Cambridge University Press, 2014.
* [59] R. O’Donnell. Social choice, computational complexity, Gaussian geometry, and Boolean functions. Proceedings of the International Congress of Mathematicians – Seoul 2014. Vol. IV, 633–-658, Kyung Moon Sa, Seoul, 2014.
* [60] R. O’Donnell, M. Saks, O. Schramm and R. Servedio. Every decision tree has an influential variable. FOCS, IEEE, 31–39, 2005.
* [61] G. Peccati and M. Reitzner, eds. Stochastic analysis for Poisson point processes: Malliavin calculus, Wiener-Itô chaos expansions and stochastic geometry. Vol. 7. Springer, 2016.
* [62] G. Peccati and M.S. Taqqu. Wiener chaos: moments, cumulants and diagrams. Springer, 2010
* [63] M. D. Penrose. Non-triviality of the vacancy phase transition for the Boolean model. Elec. Comm. Prob., 23. 2018.
* [64] C. Preston. Spatial birth-and-death processes. In Proceedings of the 40th Session of the International Statistical Institute (Warsaw, 1975), 2, 371–-391. 1975.
* [65] N. Privault. Laplace transform identities for the volume of stopping sets based on Poisson point processes. Adv. Appl. Prob., 47(4), 919–933. 2015.
* [66] R. Roy. The Russo-Seymour-Welsh theorem and the equality of critical densities and the ”dual” critical densities for continuum percolation on $\mathbb{R}^{2}$. Ann. Prob., 18(4), 1563–75. 1990.
* [67] R. Roy. Percolation of Poisson sticks on the plane. Prob. Th. Rel. Fields, 89(4), 503–517. 1991.
* [68] Yu. A. Rozanov. Markov random fields. Springer, New York, 1982.
* [69] J. Serra. Image Analysis and Mathematical Morphology. English version revised by Noel Cressie.Academic Press, Inc. London, 1982.
* [70] O. Schramm and J. E. Steif. Quantitative noise sensitivity and exceptional times for percolation. Ann. Math., 619–672. 2010.
* [71] R.J. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, New York, 1980.
* [72] J. E. Steif. A survey of dynamical percolation. In Fractal geometry and stochastics IV, Birkhäuser Basel, 145–174. 2009.
* [73] M. Talagrand. On Russo’s approximate zero-one law. Ann. Prob.. 22(3), 1576–1587, 1994
* [74] V. Tassion. Crossing probabilities for Voronoi percolation. Ann. Prob., 44(5), 3385–3398. 2016.
* [75] J. Tykesson and D. Windisch. Percolation in the vacant set of Poisson cylinders. Prob. Th. Rel. Fields, 154(1-2), 165–191. 2012
* [76] R. van Handel. Probability in High Dimension. Lecture Notes. available at https://web.math.princeton.edu/~rvan/APC550.pdf 2016\.
* [77] W. Werner and E. Powell. Lectures on the Gaussian free field. arXiv:2004.04720. 2020
* [78] L. Wu. A new modified logarithmic Sobolev inequality for Poisson point processes and several applications. Prob. Th. Rel. Fields, 118(3), 427–438, 2000
* [79] A. Xia. Stein’s method and Poisson process approximation. In An introduction to Stein’s method, Eds. A. D. Barbour and L. H. Y. Chen, Vol. 4. World Scientific, 115–181. 2005.
* [80] S. Ziesche. Sharpness of the phase transition and lower bounds for the critical intensity in continuum percolation on $\mathbb{R}^{d}$. Ann. l’Institut Henri Poincaré, Prob. et Stat., 54(2), 866–878. 2018.
* [81] S. Zuyev. Stopping sets: gamma-type results and hitting properties. Adv. Appl. Prob., 31(2), 355–366, 1999.
* [82] S. Zuyev. Strong Markov property of Poisson processes and Slivnyak formula. Lect. Notes in Stat., 185, 77–84. 2006.
|
# Refined blowup analysis and nonexistence of Type II blowups for an energy
critical nonlinear heat equation
Kelei Wang† †School of Mathematics and Statistics
Wuhan University
Wuhan 430072, China<EMAIL_ADDRESS>and Juncheng Wei∗ ∗Department of
Mathematics
University of British Columbia
Vancouver, B.C., Canada, V6T 1Z2<EMAIL_ADDRESS>
###### Abstract.
We consider the energy critical semilinear heat equation
$\left\\{\begin{aligned} &\partial_{t}u-\Delta u=|u|^{\frac{4}{n-2}}u&\mbox{in
}\mathbb{R}^{n}\times(0,T),\\\ &u(x,0)=u_{0}(x),\end{aligned}\right.$
where $n\geq 3$, $u_{0}\in L^{\infty}(\mathbb{R}^{n})$, and
$T\in\mathbb{R}^{+}$ is the first blow up time. We prove that if $n\geq 7$ and
$u_{0}\geq 0$, then any blowup must be of Type I, i.e.,
$\|u(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{n})}\leq C(T-t)^{-\frac{1}{p-1}}.$
A similar result holds for bounded convex domains. The proof relies on a
reverse inner-outer gluing mechanism and delicate analysis of bubbling
behavior (bubbling tower/cluster).
###### Key words and phrases:
Nonlinear heat equation; critical exponent; blow up; bubbling analysis.
###### 2020 Mathematics Subject Classification:
35B33; 35B44; 35K58.
K. Wang’s research was supported by the National Natural Science Foundation of
China (No. 11871381).J. Wei’s research is partially supported by NSERC of
Canada.
###### Contents
1. 1 Introduction
1. 1.1 Cauchy problem
2. 1.2 Outline of proof
3. 1.3 Cauchy-Dirichlet problem
4. 1.4 List of notations and conventions used throughout the paper.
2. I Energy concentration behavior
1. 2 Setting
1. 2.1 List of notations and conventions used in this part.
2. 3 Monotonicity formula and $\varepsilon$-regularity
3. 4 Defect measures
1. 4.1 Definition and basic properties
2. 4.2 Definition of the mean curvature
3. 4.3 Limiting monotonicity formula
4. 5 Tangent flow analysis, I
5. 6 The case $m$ is not an integer
6. 7 Partial regularity of suitable weak solutions
7. 8 The case $m$ is an integer
8. 9 The case $p=(n+2)/(n-2)$
1. 9.1 A local quantization result
2. 9.2 Proof of Theorem 9.1
3. 9.3 Further properties of defect measures
3. II Energy concentration with only one bubble
1. 10 Setting
1. 10.1 List of notations and conventions used in this part
2. 11 Blow up profile
3. 12 Orthogonal and inner-outer decompositions
1. 12.1 Orthogonal decomposition
2. 12.2 Inner-outer decomposition
4. 13 Inner problem
5. 14 Outer problem
1. 14.1 Decomposition of the outer equation
2. 14.2 Estimates for the outer equation
3. 14.3 Estimate of $\mathcal{O}(t)$
6. 15 A Harnack inequality for $\lambda$
7. 16 Inner problem and outer problem again
8. 17 Improved estimates on $\phi$
1. 17.1 Uniform $L^{\infty}$ bound
2. 17.2 Gradient estimates
3. 17.3 Estimate of time derivative
9. 18 Linearization of Pohozaev identity
1. 18.1 The left hand side
2. 18.2 The right hand side
10. 19 A weak form of Schoen’s Harnack inequality
11. 20 A conditional exclusion of bubble clustering
1. 20.1 Construction of Green function
2. 20.2 Local Pohozaev invariants
12. A Linearization estimates around bubbles
13. B Estimates on some integrals
4. III Energy concentration in the general case
1. 3 Setting
2. 4 Preliminaries
3. 5 Bubble tree construction
4. 6 A remark on the Lipschitz hypothesis in Part II
5. 7 Exclusion of bubble towering
5. IV Analysis of first time singularity
1. 8 Setting
2. 9 Some integral estimates
3. 10 Tangent flow analysis, II
4. 11 No coexistence of Type I and Type II blow ups
5. 12 Exclusion of bubble clustering
6. 13 Exclusion of blow up with only one bubble
7. 14 Proof of main results
1. 14.1 Proof of Theorem 8.1
2. 14.2 Cauchy-Dirichlet problems
3. 14.3 Cauchy problem
4. 14.4 Energy collapsing and the blow up of $L^{p+1}$ norm
### 1\. Introduction
In this paper we consider the blowup problem for the nonlinear heat equation
$\partial_{t}u-\Delta u=|u|^{p-1}u$ (1.1)
where $p>1$.
#### 1.1. Cauchy problem
We first consider the Cauchy problem
$\left\\{\begin{aligned} &\partial_{t}u-\Delta u=|u|^{p-1}u&\mbox{in
}\mathbb{R}^{n}\times(0,T),\\\ &u(x,0)=u_{0}(x).\end{aligned}\right.$ (1.2)
Here $u_{0}\in L^{\infty}(\mathbb{R}^{n})$, and $T\in\mathbb{R}^{+}$ is the
first blow up time, that is, $u\in L^{\infty}(\mathbb{R}^{n}\times[0,t])$ for
any $t<T$, while
$\lim_{t\to T}\|u(t)\|_{L^{\infty}(\mathbb{R}^{n})}=+\infty.$
Problem (1.2) is one of the classical nonlinear parabolic equations and has
been studied extensively in recent decades. See the monograph by Quittner and
Souplet [75] for background and further references. Following the seminal
work of Fujita [33], it is well-known that finite time blow-up must occur when
the initial datum is sufficiently large (in some suitable sense). An important
and fundamental question is the classification of the blowup near the first
blowup time $T$. Since the equation (1.1) is invariant under the scaling
$u^{\lambda}(x,t):=\lambda^{\frac{2}{p-1}}u(\lambda x,\lambda^{2}t),$ (1.3)
the blow up is divided into two types: it is Type I if there exists a constant
$C$ such that for any $t<T$,
$\|u(\cdot,t)\|_{L^{\infty}(\mathbb{R}^{n})}\leq C(T-t)^{-\frac{1}{p-1}},$
(1.4)
otherwise it is called Type II.
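Both points can be checked directly (a short verification, not in the original): the scaling (1.3) maps solutions to solutions, and the rate in (1.4) is the one dictated by the spatially homogeneous solution of (1.1).

```latex
% u^\lambda solves (1.1): since 2p/(p-1) = 2/(p-1) + 2,
\partial_{t}u^{\lambda}-\Delta u^{\lambda}
  =\lambda^{\frac{2}{p-1}+2}\,(\partial_{t}u-\Delta u)(\lambda x,\lambda^{2}t)
  =\lambda^{\frac{2p}{p-1}}\,\big(|u|^{p-1}u\big)(\lambda x,\lambda^{2}t)
  =|u^{\lambda}|^{p-1}u^{\lambda}.
% The ODE u' = u^p blowing up at time T is solved by
u(t)=\kappa(T-t)^{-\frac{1}{p-1}},\qquad\kappa=(p-1)^{-\frac{1}{p-1}},
% which saturates the Type I bound (1.4).
```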
When $p$ is subcritical (i.e. $1<p<+\infty$ for $n=1,2$, or
$1<p<\frac{n+2}{n-2}$ for $n\geq 3$), in classical works of Giga-Kohn [37] (in
the case of $u_{0}\geq 0$) and Giga-Matsui-Sasayama [39] (in the case of sign-
changing $u_{0}$), it is shown that any blow up is of Type I.
In contrast, less is known about the energy critical case, i.e. $n\geq 3$ and
$p=\frac{n+2}{n-2}$, see e.g. [75, Open Problem 2.8 in Appendix I]. In this
paper we establish the following
###### Theorem 1.1.
If $n\geq 7$, $p=\frac{n+2}{n-2}$ and $u_{0}\geq 0$, then any blow up to (1.2)
is of Type I.
Some remarks on Theorem 1.1 are in order. First we remark that the dimension
restriction in Theorem 1.1 is optimal: when $n\leq 6$ and $p=\frac{n+2}{n-2}$,
Type II blowup solutions to (1.2) do exist. This was first predicted, and
proved formally via the method of matched asymptotics, in the pioneering work
of Filippas-Herrero-Velázquez [30]. The first rigorous construction of Type II
blowup solutions in the radial setting was given by Schweyer in [79] for
$n=4$. (For the nonradial setting and multiple blowups in dimension $n=4$ we
refer to [23].)
For the remaining dimensions rigorous construction of Type II blowups are
established recently: for $n=3$, see [22], and for $n=5$, we refer to [19,
41]. For $n=6$, see Harada [42]. We should mention that when
$p=\frac{n+2}{n-2}$ all the type II blowup solutions to (1.2) constructed so
far are sign-changing. It is an open question if there are Type II blowups for
positive solutions in low dimensions $n=3,4,5,6$.
Second we remark that the exponent restriction in Theorem 1.1 is also optimal:
when $p>\frac{n+2}{n-2}$ many types of Type II blowup solutions have been
found. The first example was discovered in the radial setting by Herrero and
Velázquez in [43] for $p>p_{JL}$, where $p_{JL}$ is the Joseph-Lundgren
exponent,
$p_{JL}=\begin{cases}1+\frac{4}{n-4-2\sqrt{n-1}}&\text{if $n\geq 11$}\\\
+\infty,&\text{if $n\leq 10$}.\\\ \end{cases}$
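As a quick numerical sanity check of these exponent thresholds (the helper names `p_critical` and `p_JL` below are ours, not from the references), one can verify that the intermediate regime $\frac{n+2}{n-2}<p<p_{JL}$ discussed below is nonempty for $n\geq 11$:

```python
import math

def p_critical(n):
    """Sobolev critical exponent (n + 2)/(n - 2), for n >= 3."""
    return (n + 2) / (n - 2)

def p_JL(n):
    """Joseph-Lundgren exponent; finite only for n >= 11."""
    if n <= 10:
        return math.inf
    return 1 + 4 / (n - 4 - 2 * math.sqrt(n - 1))

# the intermediate regime (n+2)/(n-2) < p < p_JL is nonempty for n >= 11
for n in range(11, 16):
    assert p_critical(n) < p_JL(n)
```

For instance $p_{JL}(11)\approx 6.92$, far above the critical exponent $13/9$.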
See also Mizoguchi [69] for the case of a ball, Seki [80] for the case of
$p=p_{JL}$, and Collot [10] for the case of general domains with the
restriction that $p>p_{JL}$ and $p$ is an odd integer. A new and interesting
anisotropic Type II blow-up solution was also constructed recently for
$p>p_{JL}(n-1)$, $n\geq 14$, by Collot, Merle and Raphael in [13]. In the
intermediate exponent regime $\frac{n+2}{n-2}<p<p_{JL}$, Matano and Merle [63,
64] proved that no type II blow-up is present for radial solutions (under some
extra technical conditions). However, Type II blow-up also exists in this
intermediate regime: in [21], the authors constructed a non-radial Type II
blow-up solution in the Matano-Merle range
$p=\frac{n+1}{n-3}\in(\frac{n+2}{n-2},p_{JL}(n))$. Another type of non-radial
Type II blow-up, with shrinking spheres, was recently found for $n\geq 5$,
$p=4$ in [18].
Theorem 1.1 is the first instance of classification in the critical exponent
case for general initial datum. In the pioneering work of Filippas-Herrero-
Velázquez [30] it is shown that, when $n\geq 3$, the blowup is Type I if the
initial datum is positive and radially decreasing. For initial datum with low
energy, we mention that in [11], Collot, Merle and Raphael proved a result
similar to Theorem 1.1 under the condition that
$\|u_{0}-W\|_{\dot{H}^{1}(\mathbb{R}^{n})}\ll 1$, where $W$ is a positive
Aubin-Talenti solution satisfying (1.9), i.e. a positive steady state of
(1.2). More precisely, assuming that
$\|u_{0}-W\|_{\dot{H}^{1}(\mathbb{R}^{n})}\ll 1$ and $n\geq 7$, they proved a
trichotomy for solutions of (1.2): each solution either dissipates to zero,
approaches a rescaled Aubin-Talenti solution, or blows up in finite time with
the blow-up of Type I. Note that Theorem 1.1 assumes neither a decay condition
nor an energy condition.
#### 1.2. Outline of proof
The proof of Theorem 1.1 consists of the following five steps.
1. (1)
First we perform tangent flow analysis at a possible blow up point, see Part
IV. This is in the same spirit as Giga-Kohn [36, 37, 38], but we rewrite it in
a form more familiar in geometric measure theory, namely the tangent
cone/blowing up/tangent flow analysis used in the study of minimal surfaces,
mean curvature flows and many other geometric variational problems.
2. (2)
Bubbles may appear during this blow up procedure. A general theory on the
energy concentration phenomenon in (1.1) is then developed in Part I, where we
mainly follow the treatment (on harmonic map heat flows) in Lin-Wang [55, 56,
57], see also their monograph [58].
3. (3)
We then perform a refined analysis of this bubbling phenomena, first in the
case of one bubble (_multiplicity one case_), see Part II. Here we mainly use
_the inner-outer gluing mechanism_ developed by the second author with Davila,
del Pino and Musso in [15], [16], [19], [20], [73], [22] (see also [17] for a
survey);
4. (4)
In Part III, we combine the analysis in Part I and Part II to establish the
refined analysis in the general case (_higher multiplicity_), where in
particular we will exclude the formation of bubble towers in this energy
concentration phenomenon. Arguments used here are motivated by those used in
the study of Yamabe problems through pioneering work of Schoen [78], including
secondary blow ups, construction of Green functions, and next order expansions
of Pohozaev identities; see for example Schoen [77, 78], Khuri-Marques-Schoen
[46], Li-Zhu [50], Li-Zhang [48].
5. (5)
Finally, the refined analysis in Part III and Part II are applied to the first
time singularity problem. Results in Part III are used to exclude blow ups
with bubble clustering, while results in Part II are used to exclude blow ups
with only one bubble.
#### 1.3. Cauchy-Dirichlet problem
Our method can also be applied to the initial-boundary value problem
$\left\\{\begin{aligned} &\partial_{t}u-\Delta u=|u|^{p-1}u&\mbox{in
}\Omega^{T}:=\Omega\times(0,T),\\\ &u=0&\mbox{on
}\partial\Omega\times(0,T),\\\ &u(x,0)=u_{0}(x).\end{aligned}\right.$ (1.5)
Here $\Omega\subset\mathbb{R}^{n}$ is a bounded domain with $C^{2}$ boundary,
$u_{0}\in L^{q}(\Omega)$ for some $q\geq n(p-1)/2$. (The exponent
$\frac{n(p-1)}{2}$ is optimal. See [70].) By [75, Theorem 15.2] or [7] and
[88], there exists a $T>0$ such that $u\in
L^{\infty}(\overline{\Omega}\times(0,t])$ for any $t<T$. Here we assume $u$
blows up in finite time, and $T$ is the first blow up time, that is,
$\lim_{t\to T}\|u(\cdot,t)\|_{L^{\infty}(\Omega)}=+\infty.$
For this Cauchy-Dirichlet problem we prove
###### Theorem 1.2.
If $n\geq 7$, $p=\frac{n+2}{n-2}$ and $u_{0}\geq 0$, the first time
singularity in the interior must be Type I, that is, for any
$\Omega^{\prime}\Subset\Omega$, there exists a constant $C$ such that
$\|u(\cdot,t)\|_{L^{\infty}(\Omega^{\prime})}\leq
C(T-t)^{-\frac{1}{p-1}},\quad\forall 0<t<T.$
The proof of this theorem is similar to the one for Theorem 1.1. It is also
generally believed that there is no boundary blow up. This, however, is only
known in some special cases, e.g. when $\Omega$ is convex (cf. [38, Theorem
5.3], [40]).
Once we have this Type I blow up bound, it will be interesting to know whether
the set of blow up points enjoys further regularity, and whether the blow up
profiles satisfy uniform, refined estimates as in the subcritical case, see for
example Liu [60], Filippas-Kohn [31], Filippas-Liu [32], Velázquez [83, 84,
85], Merle-Zaag [65, 66, 67, 68], Zaag [92, 93, 94] and Kammerer-Merle-Zaag
[29].
Our proof also implies that the energy collapses and the $L^{p+1}$ norm blows
up at $T$, cf. Giga [35], Zaag [91] and Du [27].
###### Corollary 1.3.
Under the assumptions of Theorem 1.2, if there exists an interior blow up
point, then
$\lim_{t\to T}\int_{\Omega}\left[\frac{1}{2}|\nabla
u(x,t)|^{2}-\frac{1}{p+1}u(x,t)^{p+1}\right]dx=-\infty,$ (1.6)
and
$\lim_{t\to T}\int_{\Omega}u(x,t)^{p+1}dx=+\infty.$ (1.7)
In fact, we prove that
$\lim_{t\to T}\int_{0}^{t}\int_{\Omega}|\partial_{t}u(x,s)|^{2}dxds=+\infty.$
(1.8)
Then (1.6) follows from the standard energy identity for $u$. Local versions
of (1.8) and (1.6) also hold for the solution of the Cauchy problem (1.2).
We do not know whether the $L^{2}(\Omega)$ norm of $\nabla u(t)$ blows up as
$t\to T$. We conjecture that the blow up must be complete, i.e. the solution
cannot be extended beyond $T$ as a weak solution, cf. Baras-Cohen [3],
Galaktionov-Vázquez [34], Martel [62] and [75, Section 27].
#### 1.4. List of notations and conventions used throughout the paper.
* •
The open ball in $\mathbb{R}^{n}$ is denoted by $B_{r}(x)$, and by $B_{r}$ if
the center is the origin $0$.
* •
The parabolic cylinder is $Q_{r}(x,t):=B_{r}(x)\times(t-r^{2},t+r^{2})$, the
forward parabolic cylinder is $Q_{r}^{+}(x,t):=B_{r}(x)\times(t,t+r^{2})$, and
the backward parabolic cylinder is
$Q_{r}^{-}(x,t):=B_{r}(x)\times(t-r^{2},t)$. If the center is the origin
$(0,0)$, it will not be written down explicitly.
* •
The parabolic distance is
$\mbox{dist}_{P}\left((x,t),(y,s)\right):=\max\\{|x-y|,\sqrt{|t-s|}\\}.$
Hölder and Lipschitz continuous functions with respect to this distance are
defined as usual.
* •
Given a domain $\Omega\subset\mathbb{R}^{n}$, $H^{1}(\Omega)$ is the Sobolev
space endowed with the norm
$\left[\int_{\Omega}\left(|\nabla u|^{2}+u^{2}\right)dx\right]^{1/2}.$
Given an interval $\mathcal{I}\subset\mathbb{R}$, we use $L^{2}_{t}H^{1}_{x}$
to denote the space $L^{2}(\mathcal{I};H^{1})$.
* •
A bubble is an entire solution of the stationary equation
$-\Delta W=|W|^{\frac{4}{n-2}}W\quad\mbox{in }\mathbb{R}^{n}$ (1.9)
with finite energy
$\int_{\mathbb{R}^{n}}|\nabla
W|^{2}=\int_{\mathbb{R}^{n}}|W|^{\frac{2n}{n-2}}<+\infty.$
By the translational and scaling invariance, if $W$ is a bubble, so is
$W_{\xi,\lambda}(x):=\lambda^{\frac{n-2}{2}}W\left(\frac{x-\xi}{\lambda}\right),\quad\xi\in\mathbb{R}^{n},~{}~{}\lambda\in\mathbb{R}^{+}.$
* •
By Caffarelli-Gidas-Spruck [8], all entire positive solutions to the
stationary equation (1.9) are given by Aubin-Talenti bubbles
$W_{\xi,\lambda}(x):=\left(\frac{\lambda}{\lambda^{2}+\frac{|x-\xi|^{2}}{n(n-2)}}\right)^{\frac{n-2}{2}},\quad\quad\quad\lambda>0,\quad\xi\in\mathbb{R}^{n}.$
(1.10)
Their energy is finite and always equal to
$\Lambda:=\int_{\mathbb{R}^{n}}|\nabla
W_{\xi,\lambda}|^{2}=\int_{\mathbb{R}^{n}}W_{\xi,\lambda}^{p+1}.$ (1.11)
For simplicity, denote $W:=W_{0,1}$.
* •
We use $C$ (large) and $c$ (small) to denote universal constants, which may
change from line to line. Given two quantities $A$ and $B$, if $A\leq
CB$ for some universal constant $C$, we simply write $A\lesssim B$. If the
constant $C$ depends on a quantity $D$, it will be written as $C(D)$ or
$\lesssim_{D}$.
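As a sanity check of formula (1.10) (the code and its helper `W` are purely illustrative, not part of the paper), one can verify numerically that the radial Aubin-Talenti profile solves $-\Delta W=W^{\frac{n+2}{n-2}}$:

```python
n = 7                      # dimension, as in Theorem 1.1
p = (n + 2) / (n - 2)      # energy critical exponent

def W(r):
    """Radial Aubin-Talenti profile (1.10) with lambda = 1, xi = 0."""
    return (1 / (1 + r**2 / (n * (n - 2)))) ** ((n - 2) / 2)

# check -Delta W = W^p at a sample radius, using the radial Laplacian
# Delta W = W'' + (n-1)/r * W' and central finite differences
r, h = 1.0, 1e-4
Wpp = (W(r + h) - 2 * W(r) + W(r - h)) / h**2
Wp = (W(r + h) - W(r - h)) / (2 * h)
lap = Wpp + (n - 1) / r * Wp
assert abs(-lap - W(r) ** p) < 1e-6 * W(r) ** p
```

The same check passes for any $n\geq 3$ and any sample radius.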
## Part I Energy concentration behavior
### 2\. Setting
In this part we assume that $p>1$ (not necessarily the critical exponent) and
allow the solutions to be sign-changing. Here we study the energy
concentration behavior of the nonlinear heat equation (1.1).
First we define what we mean by a solution.
###### Definition 2.1 (Suitable weak solution).
A function $u$ is a suitable weak solution of (1.1) in $Q_{1}$, if
$\partial_{t}u,\nabla u\in L^{2}(Q_{1})$, $u\in L^{p+1}(Q_{1})$, and
* •
$u$ satisfies (1.1) in the weak sense, that is, for any $\eta\in
C_{0}^{\infty}(Q_{1})$,
$\int_{Q_{1}}\left[\partial_{t}u\eta+\nabla
u\cdot\nabla\eta-|u|^{p-1}u\eta\right]=0;$ (2.1)
* •
$u$ satisfies the localized energy inequality: for any $\eta\in
C_{0}^{\infty}(Q_{1})$,
$\int_{Q_{1}}\left[\left(\frac{|\nabla
u|^{2}}{2}-\frac{|u|^{p+1}}{p+1}\right)\partial_{t}\eta^{2}-|\partial_{t}u|^{2}\eta^{2}-2\eta\partial_{t}u\nabla
u\cdot\nabla\eta\right]\geq 0;$ (2.2)
* •
$u$ satisfies the stationary condition: for any $Y\in
C_{0}^{\infty}(Q_{1},\mathbb{R}^{n})$,
$\int_{Q_{1}}\left[\left(\frac{|\nabla
u|^{2}}{2}-\frac{|u|^{p+1}}{p+1}\right)\mbox{div}Y-DY(\nabla u,\nabla
u)+\partial_{t}u\nabla u\cdot Y\right]=0.$ (2.3)
A smooth solution satisfies all of these conditions. (In fact, (2.2) then
becomes an equality, which is just the standard energy identity.) But a
suitable weak solution need not be smooth everywhere.
###### Definition 2.2.
Given a weak solution $u$ of (1.1), a point $(x,t)$ is a regular point of $u$
if there exists an $r>0$ such that $u\in C^{\infty}(Q_{r}(x,t))$, otherwise it
is a singular point of $u$. The corresponding sets are denoted by
$\mathcal{R}(u)$ and $\mathcal{S}(u)$.
By definition, $\mathcal{R}(u)$ is open and $\mathcal{S}(u)$ is closed.
In this part, $u_{i}$ denotes a sequence of suitable weak solutions to (1.1)
in $Q_{1}$, satisfying a uniform energy bound
$\limsup_{i\to+\infty}\int_{Q_{1}}\left[|\nabla
u_{i}|^{2}+|u_{i}|^{p+1}+|\partial_{t}u_{i}|^{2}\right]dxdt<+\infty.$ (2.4)
The integral bound on $\partial_{t}u_{i}$ in (2.4) can be deduced by
substituting the bounds on the first two integrands into (2.2), if we shrink
the domain $Q_{1}$ a little.
We will state the results about the energy concentration behavior after
introducing some necessary notations. The main results in this part are
1. (1)
Theorem 4.2, where we establish basic properties about this energy
concentration behavior;
2. (2)
Theorem 6.1, concerning the case where $\frac{2(p+1)}{p-1}$ is not an integer,
in which we prove a strong convergence result;
3. (3)
Theorem 8.1, concerning the case where $\frac{2(p+1)}{p-1}$ is an integer, in
which we show that the limiting problem is a generalized Brakke flow;
4. (4)
Theorem 9.1, which is about the energy quantization result in the critical
case $p=\frac{n+2}{n-2}$.
Our treatment in this part mainly follows the work of Lin and Wang [55], [56],
[57]. See also their monograph [58].
Many problems about this energy concentration behavior remain open, such as
the energy quantization result in the general case (cf. Lin-Rivière [53] for
the corresponding results for harmonic maps, and Naber-Valtorta [74] for
Yang-Mills fields), and a Brakke type regularity result for the limiting
problem (cf. Brakke [6], Kasai-Tonegawa [45]). But we will content ourselves
with such a preliminary analysis of this energy concentration phenomenon,
because the main goal of this part is to provide a setting for later parts.
#### 2.1. List of notations and conventions used in this part.
* •
For any $\lambda>0$ and each set $A\subset\mathbb{R}^{n}\times\mathbb{R}$, we
use $\lambda A$ to denote the parabolic scaling of $A$, which is the set
$\left\\{(\lambda x,\lambda^{2}t):(x,t)\in A\right\\}.$
* •
Given a set $A\subset\mathbb{R}^{n}\times\mathbb{R}$, its time slices are
$A_{t}:=A\cap\left(\mathbb{R}^{n}\times\\{t\\}\right)$.
* •
Denote $m:=\frac{2(p+1)}{p-1}$. Note that $m>2$, and $p=\frac{m+2}{m-2}$. If
$m$ is an integer, then $p$ is the Sobolev critical exponent in dimension $m$.
* •
For $s\geq 0$, $\mathcal{P}^{s}$ denotes the $s$-dimensional parabolic
Hausdorff measure. The dimension of a subset of
$\mathbb{R}^{n}\times\mathbb{R}$ is always understood to be the parabolic
Hausdorff dimension.
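For the record, the identity $p=\frac{m+2}{m-2}$ is a one-line computation:

```latex
m-2=\frac{2(p+1)-2(p-1)}{p-1}=\frac{4}{p-1},\qquad
m+2=\frac{2(p+1)+2(p-1)}{p-1}=\frac{4p}{p-1},\qquad
\frac{m+2}{m-2}=p.
```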
### 3\. Monotonicity formula and $\varepsilon$-regularity
In this section, $u$ denotes a fixed suitable weak solution of (1.1) in
$Q_{1}$. Here we recall the monotonicity formula and $\varepsilon$-regularity
theorems. We also derive a Morrey space estimate, which will be used below in
Section 5 to perform the tangent flow analysis. Our use of Morrey space
estimates follows closely Chou-Du-Zheng [9] and Du [25, 27], which is
different from the ones used in Blatt-Struwe [5] and Souplet [81].
Fix a function $\psi\in C^{\infty}_{0}(B_{1})$ such that $0\leq\psi\leq 1$,
$\psi\equiv 1$ in $B_{3/4}$ and $|\nabla\psi|+|\nabla^{2}\psi|\leq 100$. For
$t>0$ and $x\in\mathbb{R}^{n}$, let
$G(x,t):=\left(4\pi t\right)^{-\frac{n}{2}}e^{-\frac{|x|^{2}}{4t}}$
be the standard heat kernel on $\mathbb{R}^{n}$.
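As a quick check (illustrative code, not part of the paper), one can verify numerically that $G$ solves the heat equation $\partial_{t}G=\Delta G$, using the radial form of the Laplacian:

```python
import math

n = 3  # any dimension works; we take n = 3 for the check

def G(r, t):
    """Standard heat kernel on R^n as a function of r = |x| and t > 0."""
    return (4 * math.pi * t) ** (-n / 2) * math.exp(-r**2 / (4 * t))

# verify dG/dt = G'' + (n-1)/r * G' at a sample point, by finite differences
r, t, h = 1.0, 1.0, 1e-4
Gt = (G(r, t + h) - G(r, t - h)) / (2 * h)
Grr = (G(r + h, t) - 2 * G(r, t) + G(r - h, t)) / h**2
Gr = (G(r + h, t) - G(r - h, t)) / (2 * h)
lap = Grr + (n - 1) / r * Gr
assert abs(Gt - lap) < 1e-6
```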
Take a large constant $C$ and a small constant $c$. For each $(x,t)\in
B_{3/4}\times(-3/4,1]$ and $s\in(0,1/4)$, define
$\Theta_{s}(x,t;u):=s^{\frac{p+1}{p-1}}\int_{B_{1}}\left[\frac{|\nabla
u(y,t-s)|^{2}}{2}-\frac{|u(y,t-s)|^{p+1}}{p+1}\right]G(y-x,s)\psi(y)^{2}dy+\frac{1}{2(p-1)}s^{\frac{2}{p-1}}\int_{B_{1}}u(y,t-s)^{2}G(y-x,s)\psi(y)^{2}dy+Ce^{-cs^{-1}}.$
###### Remark 3.1.
Because $u$ and $\nabla u$ are only integrable in space-time, rigorously we
should integrate one more time in $s$, e.g. to consider the quantity
$\overline{\Theta}_{s}(x,t;u):=s^{-1}\int_{s}^{2s}\Theta_{\tau}(x,t;u)d\tau.$
However, to simplify notations we will not use this quantity.
The following is a localized version of the monotonicity formula of Giga and
Kohn, see [36, 38] and [39].
###### Proposition 3.2 (Localized monotonicity formula).
If $C$ is universally large and $c$ is universally small, then for any
$(x,t)\in B_{3/4}\times(-3/4,1]$ and $0<s_{1}<s_{2}<1/4$,
$\Theta_{s_{2}}(x,t)-\Theta_{s_{1}}(x,t)\geq\int_{s_{1}}^{s_{2}}\tau^{\frac{2}{p-1}-1}\int_{B_{1}}\Big{|}(t-\tau)\partial_{t}u(y,t-\tau)+\frac{u(y,t-\tau)}{p-1}+\frac{y}{2}\cdot\nabla
u(y,t-\tau)\Big{|}^{2}G(y-x,\tau)\psi(y)^{2}dyd\tau.$
This almost monotonicity allows us to define
$\Theta(x,t):=\lim_{s\to 0^{+}}\Theta_{s}(x,t).$
For each $s>0$, from the definition we see
$s^{-1}\int_{s}^{2s}\Theta_{\tau}(x,t)d\tau$ is a continuous function of
$(x,t)$. Combining this fact with the monotonicity formula, we get
###### Lemma 3.3.
$\Theta(x,t)$ is upper semi-continuous in $(x,t)$.
The following Morrey space bound is essentially Giga-Kohn’s [37, Proposition
2.2] or [38, Proposition 3.1].
###### Proposition 3.4.
For any $(x,t)\in Q_{1/2}$ and $r\in(0,1/4)$,
$r^{m-2-n}\int_{Q_{r}^{-}(x,t-r^{2})}\left(|\nabla
u|^{2}+|u|^{p+1}\right)+r^{m-n}\int_{Q_{r}^{-}(x,t-r^{2})}|\partial_{t}u|^{2}\leq
C\max\left\\{\Theta_{4r^{2}}(x,t),\Theta_{4r^{2}}(x,t)^{\frac{2}{p+1}}\right\\}.$
(3.1)
Combining this estimate with the monotonicity formula, we obtain a Morrey
space estimate of $u$.
###### Corollary 3.5 (Morrey space bound).
There exists a constant $M$, depending only on $\int_{Q_{1}}\left[|\nabla
u|^{2}+|u|^{p+1}\right]$, such that for any $(x,t)\in Q_{1/2}$ and
$r\in(0,1/4)$,
$r^{m-2-n}\int_{Q_{r}(x,t)}\left(|\nabla
u|^{2}+|u|^{p+1}\right)+r^{m-n}\int_{Q_{r}(x,t)}|\partial_{t}u|^{2}\leq M.$
(3.2)
These Morrey space estimates are invariant under the scaling (1.3).
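Indeed, for the gradient term (with cylinders centered at the origin, say), writing $u^{\lambda}$ as in (1.3) and substituting $(y,s)=(\lambda x,\lambda^{2}t)$ gives

```latex
r^{m-2-n}\int_{Q_{r}}|\nabla u^{\lambda}|^{2}\,dx\,dt
  =r^{m-2-n}\lambda^{\frac{4}{p-1}+2}\lambda^{-n-2}\int_{Q_{\lambda r}}|\nabla u|^{2}\,dy\,ds
  =(\lambda r)^{m-2-n}\int_{Q_{\lambda r}}|\nabla u|^{2}\,dy\,ds,
```

since $\frac{4}{p-1}+2=m$; the $|u|^{p+1}$ and $|\partial_{t}u|^{2}$ terms scale in the same way.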
Combining (3.1) and (3.2), we also get
###### Corollary 3.6 (Change of base point).
For any $\varepsilon>0$, there exist
$0<\delta(\varepsilon)\ll\theta(\varepsilon)\ll 1$ and $C(\varepsilon)$ so
that the following holds. For any $(x,t)\in Q_{1/2}$, $r<1/4$ and $(y,s)\in
Q_{\delta r}(x,t)$,
$\Theta_{\theta r^{2}}(y,s)\leq
C\max\left\\{\Theta_{r^{2}}(x,t),\Theta_{r^{2}}(x,t)^{\frac{2}{p+1}}\right\\}+\varepsilon.$
Next we recall two standard $\varepsilon$-regularity theorems. The first one
is a reformulation of Du [27, Theorem 3.1].
###### Theorem 3.7 ($\varepsilon$-regularity I).
There exist three universal constants $\varepsilon_{\ast}$, $\delta_{\ast}$
and $c_{\ast}$ so that the following holds. If
$\Theta_{r^{2}}(x,t)\leq\varepsilon_{\ast},$
then
$\sup_{Q_{\delta_{\ast}r}(x,t)}|u|\leq c_{\ast}r^{-\frac{2}{p-1}}.$
Once we have an $L^{\infty}$ bound on $u$, higher order regularity follows by
applying standard parabolic estimates to (1.1). As a consequence, we get
###### Corollary 3.8.
$\mathcal{R}(u)=\\{\Theta=0\\}$ and
$\mathcal{S}(u)=\\{\Theta\geq\varepsilon_{\ast}\\}$.
The second $\varepsilon$-regularity theorem is Chou-Du-Zheng [9, Theorem 2]
(see also Du [27, Proposition 4.1]).
###### Theorem 3.9 ($\varepsilon$-regularity II).
After decreasing $\varepsilon_{\ast}$, $\delta_{\ast}$, and enlarging
$c_{\ast}$ further, the following holds. There exists a constant
$\theta_{\ast}>0$ such that, if
$\int_{Q_{r}^{-}(x,t-\theta_{\ast}r^{2})}\left(|\nabla
u|^{2}+|u|^{p+1}\right)\leq\varepsilon_{\ast}r^{n+2-m},$
then
$\sup_{Q_{\delta_{\ast}r}(x,t)}|u|\leq c_{\ast}r^{-\frac{2}{p-1}}.$
A standard covering argument then gives
###### Corollary 3.10.
If $u$ is a suitable weak solution of (1.1) in $Q_{1}$, then
$\mathcal{P}^{n+2-m}\left(\mathcal{S}(u)\right)=0.$
In particular, $\mbox{dim}\left(\mathcal{S}(u)\right)\leq n+2-m$.
### 4\. Defect measures
From now on until Section 9, we will be concerned with the energy
concentration behavior in (1.1). In this section we define the defect measure
and prove some basic properties about it.
#### 4.1. Definition and basic properties
By (2.4), after passing to a subsequence, we may assume $u_{i}$ converges
weakly to a limit $u_{\infty}$ in $L^{p+1}(Q_{1})$, and that $\nabla u_{i}$
and $\partial_{t}u_{i}$ converge weakly to $\nabla u_{\infty}$ and
$\partial_{t}u_{\infty}$, respectively, in $L^{2}(Q_{1})$.
By the Sobolev embedding theorem and an interpolation argument, for any $1\leq
q<p+1$, $u_{i}$ converges to $u_{\infty}$ strongly in $L^{q}_{loc}(Q_{1})$. As
a consequence, $u_{\infty}$ is a weak solution of (1.1).
There exist three Radon measures $\mu_{1},\mu_{2}$ and $\nu$ such that
$\left\\{\begin{aligned}
&|u_{i}|^{p+1}dxdt\rightharpoonup|u_{\infty}|^{p+1}dxdt+\mu_{1},\\\ &|\nabla
u_{i}|^{2}dxdt\rightharpoonup|\nabla u_{\infty}|^{2}dxdt+\mu_{2},\\\
&|\partial_{t}u_{i}|^{2}dxdt\rightharpoonup|\partial_{t}u_{\infty}|^{2}dxdt+\nu.\end{aligned}\right.$
These measures, called _defect measures_ , characterize the failure of
corresponding strong convergence.
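A minimal numerical illustration of how a defect measure forms (the code is ours, and it uses the stationary bubbles of (1.10) rather than genuine solutions of (1.1)): the rescaled bubbles $W_{0,\lambda}$ have constant Dirichlet energy $\Lambda$, yet as $\lambda\to 0$ this energy concentrates near the origin, so that $|\nabla W_{0,\lambda}|^{2}dx\rightharpoonup\Lambda\delta_{0}$ while $W_{0,\lambda}\rightharpoonup 0$:

```python
import numpy as np

n = 7
omega6 = 16 * np.pi**3 / 15  # surface area of the unit sphere S^6 in R^7

def Wprime(r):
    # derivative of the lambda = 1 profile W(r) = (1 + r^2/(n(n-2)))^{-(n-2)/2}
    return -(r / n) * (1 + r**2 / (n * (n - 2))) ** (-n / 2)

def trapezoid(y, x):
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def dirichlet_energy(lam, r):
    # radial integral of |grad W_lam|^2, W_lam(x) = lam^{-(n-2)/2} W(x/lam)
    g = lam ** (-n) * Wprime(r / lam) ** 2
    return omega6 * trapezoid(g * r ** (n - 1), r)

r = np.geomspace(1e-6, 1e4, 40000)
E1 = dirichlet_energy(1.0, r)
E2 = dirichlet_energy(0.2, r)
assert abs(E1 - E2) < 1e-3 * E1  # the Dirichlet energy is scale invariant

# as lam -> 0, essentially all of the energy sits in the fixed ball B_{1/2}
frac = dirichlet_energy(0.01, r[r < 0.5]) / E1
assert frac > 0.99
```

The same mechanism (with solutions of (1.1) in place of stationary bubbles) is what the measures $\mu$ and $\nu$ record.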
We also let
$\nabla u_{i}\otimes\nabla u_{i}dxdt\rightharpoonup\nabla
u_{\infty}\otimes\nabla u_{\infty}dxdt+\mathcal{T}d\mu_{2},$
where $\mathcal{T}$ is a matrix-valued $\mu_{2}$-measurable function.
Furthermore, $\mathcal{T}$ is symmetric and positive semi-definite, and its
trace equals $1$ $\mu_{2}$-a.e.
###### Lemma 4.1 (Energy partition).
$\mu_{1}=\mu_{2}$.
###### Proof.
For any $\eta\in C_{0}^{\infty}(Q_{1})$, multiplying the equation (1.1) by
$u_{i}\eta^{2}$ and integrating by parts, we obtain
$\int_{Q_{1}}\left[-u_{i}^{2}\eta\partial_{t}\eta+|\nabla
u_{i}|^{2}\eta^{2}-|u_{i}|^{p+1}\eta^{2}+2\eta u_{i}\nabla
u_{i}\cdot\nabla\eta\right]=0.$ (4.1)
Letting $i\to\infty$, and noting that $u_{\infty}$ also satisfies (4.1), we
obtain
$\int_{Q_{1}}\eta^{2}d\left(\mu_{1}-\mu_{2}\right)=0.$
Since this holds for any $\eta\in C_{0}^{\infty}(Q_{1})$, $\mu_{1}=\mu_{2}$ as
Radon measures. ∎
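For the reader's convenience, the bookkeeping in this limit is the following: the terms $-u_{i}^{2}\eta\partial_{t}\eta$ and $2\eta u_{i}\nabla u_{i}\cdot\nabla\eta$ converge to their counterparts for $u_{\infty}$ (strong $L^{2}_{loc}$ convergence of $u_{i}$ paired with weak $L^{2}$ convergence of $\nabla u_{i}$), while the two quadratic terms pick up exactly the defect measures:
$\int_{Q_{1}}|\nabla u_{i}|^{2}\eta^{2}\to\int_{Q_{1}}|\nabla u_{\infty}|^{2}\eta^{2}+\int_{Q_{1}}\eta^{2}d\mu_{2},\quad\quad\int_{Q_{1}}|u_{i}|^{p+1}\eta^{2}\to\int_{Q_{1}}|u_{\infty}|^{p+1}\eta^{2}+\int_{Q_{1}}\eta^{2}d\mu_{1}.$
Subtracting (4.1) for $u_{\infty}$ then leaves only the difference of the two defect terms.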
By this lemma, we can write $\mu_{1}$ and $\mu_{2}$ just as $\mu$. Denote
$\Sigma:=\mbox{spt}(\mu)$, and define
$\Sigma^{\ast}:=\left\\{(x,t):~{}~{}\forall
r>0,~{}~{}\limsup_{i\to\infty}r^{m-2-n}\int_{Q_{r}^{-}(x,t)}\left(|\nabla
u_{i}|^{2}+|u_{i}|^{p+1}\right)\geq\varepsilon_{\ast}/2\right\\}$
to be _the blow up locus_.
Now we state some basic properties of this energy concentration phenomenon.
###### Theorem 4.2.
Suppose $u_{i}$ is a sequence of suitable weak solutions of (1.1) in $Q_{1}$,
satisfying (2.4). Define $\mu,\nu$, $\Sigma$ and $\Sigma^{\ast}$ as above.
Then the following holds.
1. (1)
(Morrey space bound) For any $(x,t)\in Q_{1/2}$ and $r\in(0,1/4)$,
$\left\\{\begin{aligned} &\mu(Q_{r}(x,t))+\int_{Q_{r}(x,t)}\left(|\nabla
u_{\infty}|^{2}+|u_{\infty}|^{p+1}\right)\leq Mr^{n+2-m},\\\
&\nu(Q_{r}(x,t))+\int_{Q_{r}(x,t)}|\partial_{t}u_{\infty}|^{2}\leq
Mr^{n-m}.\end{aligned}\right.$ (4.2)
Here $M$ is the constant in Corollary 3.5.
2. (2)
The blow up locus $\Sigma^{\ast}$ is closed.
3. (3)
(Smooth convergence) $u_{i}$ converges to $u_{\infty}$ in
$C_{loc}^{\infty}(Q_{1}\setminus\Sigma^{\ast})$. As a consequence,
* •
$u_{\infty}\in C^{\infty}(Q_{1}\setminus\Sigma^{\ast})$, that is,
$\mathcal{S}(u_{\infty})\subset\Sigma^{\ast}$.
* •
$\Sigma\subset\Sigma^{\ast}$ and $\mbox{spt}(\nu)\subset\Sigma^{\ast}$.
4. (4)
(Measure estimate of the blow up locus) For any $(x,t)\in\Sigma^{\ast}\cap
Q_{1/2}$ and $0<r<1/2$,
$\frac{\varepsilon_{\ast}}{C}r^{n+2-m}\leq\mathcal{P}^{n+2-m}(\Sigma^{\ast}\cap
Q_{r}(x,t))\leq\frac{C}{\varepsilon_{\ast}}r^{n+2-m}.$
5. (5)
(Lower density bound) For $\mu$-a.e. $(x,t)\in\Sigma\cap Q_{1/2}$ and
$r\in(0,1/2)$,
$\mu(Q_{r}(x,t))\geq\mu(Q_{r/2}^{-}(x,t-\theta_{\ast}r^{2}))\geq\varepsilon_{\ast}r^{n+2-m}.$
(4.3)
6. (6)
$\mathcal{P}^{n+2-m}\left(\Sigma^{\ast}\setminus\Sigma\right)=0$.
7. (7)
There exists a measurable function $\theta$ on $\Sigma$ such that
$\mu=\theta(x,t)\mathcal{P}^{n+2-m}\lfloor_{\Sigma}.$
Moreover,
$\frac{\varepsilon_{\ast}}{C}\leq\theta\leq
C\quad\quad\mathcal{P}^{n+2-m}-\mbox{a.e. in}~{}~{}\Sigma.$
Before presenting the proof, we first note the following Federer-Ziemer type
result [28]. It can be proved by a Vitali covering argument.
###### Lemma 4.3.
1. (1)
For $\mathcal{P}^{n+2-m}$ a.e. $(x,t)\in Q_{1/2}$,
$\lim_{r\to 0}r^{m-2-n}\int_{Q_{r}(x,t)}\left(|\nabla
u_{\infty}|^{2}+|u_{\infty}|^{p+1}\right)=0.$
2. (2)
For $\mathcal{P}^{n-m}$ a.e. $(x,t)\in Q_{1/2}$,
$\lim_{r\to 0}r^{m-n}\int_{Q_{r}(x,t)}|\partial_{t}u_{\infty}|^{2}=0.$
###### Proof of Theorem 4.2.
1. (1)
This Morrey space bound follows directly by passing to the limit in (3.1) (for
$u_{i}$).
2. (2)
By definition, for any $(x,t)\notin\Sigma^{\ast}$, there exists an $r>0$ such
that for all $i$ large,
$\int_{Q_{r}^{-}(x,t)}\left(|\nabla
u_{i}|^{2}+|u_{i}|^{p+1}\right)<\varepsilon_{\ast}r^{n+2-m}.$
By Theorem 3.9 and standard parabolic regularity theory, $u_{i}$ is uniformly
bounded in $C^{k}(Q_{\delta_{\ast}r}(x,t))$ for each $k\in\mathbb{N}$. By the
weak convergence of $u_{i}$ and the Arzelà-Ascoli theorem, $u_{i}$ converges to
$u_{\infty}$ in $C^{\infty}(Q_{\delta_{\ast}r}(x,t))$.
3. (3)
This follows directly from the previous point.
4. (4)
This follows from a standard Vitali type covering argument, by utilizing the
upper density bound in (4.2) and the lower density bound coming from the
definition of $\Sigma^{\ast}$, that is, for any $(x,t)\in\Sigma^{\ast}\cap
Q_{1/2}$ and $0<r<1/2$,
$\mu(Q_{r}(x,t))+\int_{Q_{r}(x,t)}\left(|\nabla
u_{\infty}|^{2}+|u_{\infty}|^{p+1}\right)\geq\frac{\varepsilon_{\ast}}{2}r^{n+2-m}.$
(4.4)
5. (5)
This follows by combining (4.4) and Lemma 4.3.
6. (6)
This follows from a Vitali type covering argument, by using (4.4) and Lemma
4.3.
7. (7)
This follows from the differentiation theorem for measures, and an application
of (4.2) and (4.3). ∎
#### 4.2. Definition of the mean curvature
In this subsection we define a mean curvature type term for defect measures.
###### Lemma 4.4.
There exists an $\mathbb{R}^{n}$ valued function $\mathbf{H}\in
L^{2}(Q_{1},d\mu)$ such that
$\partial_{t}u_{i}\nabla u_{i}dxdt\rightharpoonup\partial_{t}u_{\infty}\nabla
u_{\infty}dxdt+\frac{1}{m}\mathbf{H}d\mu$
weakly as Radon measures.
###### Proof.
By the Cauchy-Schwarz inequality, the quantities
$\int_{Q_{1}}|\partial_{t}u_{i}||\nabla
u_{i}|\leq\left(\int_{Q_{1}}|\partial_{t}u_{i}|^{2}\right)^{1/2}\left(\int_{Q_{1}}|\nabla
u_{i}|^{2}\right)^{1/2}$ (4.5)
are bounded as $i\to+\infty$. Therefore we may assume
$\partial_{t}u_{i}\nabla u_{i}dxdt\rightharpoonup\xi\quad\mbox{weakly as
vector valued Radon measures},$
where $\xi$ is an $\mathbb{R}^{n}$-valued Radon measure.
Take the Radon-Nikodym decomposition of $\xi$ with respect to the Lebesgue
measure, $\xi=\xi^{a}+\xi^{s}$, where $\xi^{a}$ is the absolutely continuous
part and $\xi^{s}$ is the singular part.
By Point (3) in Theorem 4.2,
$\xi=\partial_{t}u_{\infty}\nabla
u_{\infty}dxdt\quad\mbox{outside}~{}~{}\Sigma^{\ast}.$
In view of Point (4) in Theorem 4.2, this is just the absolutely continuous
part $\xi^{a}$. In other words,
$\xi^{a}=\partial_{t}u_{\infty}\nabla u_{\infty}dxdt.$ (4.6)
On the other hand, we can also define
$\mathbf{H}_{i}:=\begin{cases}\frac{\partial_{t}u_{i}\nabla u_{i}}{|\nabla
u_{i}|^{2}},&\mbox{if }|\nabla u_{i}|\neq 0\\\
0,&\mbox{otherwise}.\end{cases}$
It satisfies
$\int_{Q_{1}}|\mathbf{H}_{i}|^{2}|\nabla
u_{i}|^{2}dxdt\leq\int_{Q_{1}}|\partial_{t}u_{i}|^{2}dxdt.$
By Hutchinson [44], we may assume $(\mathbf{H}_{i},|\nabla u_{i}|^{2}dxdt)$
converges to $(\widetilde{\mathbf{H}},|\nabla u_{\infty}|^{2}dxdt+\mu)$ weakly
as measure-function pairs. By the Fatou lemma,
$\int_{Q_{1}}|\widetilde{\mathbf{H}}|^{2}|\nabla
u_{\infty}|^{2}dxdt+\int_{Q_{1}}|\widetilde{\mathbf{H}}|^{2}d\mu<+\infty.$
(4.7)
Since $\xi$ is the weak limit of $\partial_{t}u_{i}\nabla u_{i}dxdt$, we have
$\xi=\widetilde{\mathbf{H}}\left(|\nabla u_{\infty}|^{2}dxdt+\mu\right).$
(4.8)
By (4.6),
$\widetilde{\mathbf{H}}=\begin{cases}\frac{\partial_{t}u_{\infty}\nabla
u_{\infty}}{|\nabla u_{\infty}|^{2}},&\mbox{if }|\nabla u_{\infty}|\neq 0\\\
0,&\mbox{otherwise},\end{cases}$
a.e. with respect to the Lebesgue measure in $Q_{1}$. (In fact, this holds
everywhere in $Q_{1}\setminus\Sigma^{\ast}$.) Hence in view of (4.8), we get
$\xi^{s}=\widetilde{\mathbf{H}}d\mu.$
In particular, $\widetilde{\mathbf{H}}$ is the Radon-Nikodym derivative
$d\xi^{s}/d\mu$. The proof is complete by defining
$\mathbf{H}:=m\widetilde{\mathbf{H}}$. ∎
Similar to (4.5), we obtain
###### Corollary 4.5.
For each $Q_{r}(x,t)\subset Q_{1}$,
$\frac{1}{m^{2}}\int_{Q_{r}(x,t)}|\mathbf{H}|^{2}d\mu\leq\nu(Q_{r}(x,t))\mu(Q_{r}(x,t)).$
A more precise estimate will be given in Lemma 8.3 below.
Passing to the limit in (2.2) gives the limiting energy inequality:
$\displaystyle 0$ $\displaystyle\leq$
$\displaystyle\int_{Q_{1}}\left[\left(\frac{|\nabla
u_{\infty}|^{2}}{2}-\frac{|u_{\infty}|^{p+1}}{p+1}\right)\partial_{t}\eta^{2}-|\partial_{t}u_{\infty}|^{2}\eta^{2}-2\eta\partial_{t}u_{\infty}\nabla
u_{\infty}\cdot\nabla\eta\right]$ (4.9) $\displaystyle+$
$\displaystyle\frac{1}{m}\int_{Q_{1}}\partial_{t}\eta^{2}d\mu-\int_{Q_{1}}\eta^{2}d\nu-\frac{1}{m}\int_{Q_{1}}\nabla\eta^{2}\cdot\mathbf{H}d\mu.$
Passing to the limit in (2.3) gives the limiting stationary condition: for any
$Y\in C_{0}^{\infty}(Q_{1},\mathbb{R}^{n})$,
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\int_{Q_{1}}\left[\left(\frac{|\nabla
u_{\infty}|^{2}}{2}-\frac{|u_{\infty}|^{p+1}}{p+1}\right)\mbox{div}Y-DY(\nabla
u_{\infty},\nabla u_{\infty})+\partial_{t}u_{\infty}\nabla u_{\infty}\cdot
Y\right]$ (4.10) $\displaystyle+$
$\displaystyle\frac{1}{m}\int_{Q_{1}}\left[\left(I-m\mathcal{T}\right)\cdot
DY+\mathbf{H}\cdot Y\right]d\mu.$
#### 4.3. Limiting monotonicity formula
In this section we establish the monotonicity formula for the limit
$(u_{\infty},\mu)$. For this purpose, we need first to define the time slices
of $\mu$.
###### Lemma 4.6.
There exists a family of Radon measures $\mu_{t}$ on $B_{1}$ (defined for a.e.
$t\in(-1,1)$) such that
$\mu=\mu_{t}dt.$ (4.11)
###### Proof.
Denote the projection onto the time axis by $\pi$. For each $r<1$, let
$\mu^{r}$ be the restriction of $\mu$ to $B_{r}\times(-1,1)$.
Take a function $\eta_{r}\in C_{0}^{\infty}(B_{1})$, $0\leq\eta_{r}\leq 1$ and
$\eta_{r}\equiv 1$ in $B_{r}$. The limiting energy inequality (4.9) implies
that
$\int_{B_{1}}\left(\frac{1}{2}|\nabla
u_{\infty}(x,t)|^{2}-\frac{1}{p+1}|u_{\infty}(x,t)|^{p+1}\right)\eta_{r}(x)^{2}dx+\frac{1}{m}\pi_{\sharp}\left(\eta_{r}\mu\right)$
is a BV function on $(-1,1)$. Hence there exists a function
$\overline{e}_{r}(t)\in L^{1}(-1,1)$ such that
$\pi_{\sharp}\left(\eta_{r}\mu\right)=\overline{e}_{r}(t)dt.$
Because
$0\leq\pi_{\sharp}\mu^{r}\leq\pi_{\sharp}\left(\eta_{r}\mu\right),$
we find another function $e_{r}(t)\in L^{1}(-1,1)$ such that
$\pi_{\sharp}\mu^{r}=e_{r}(t)dt.$
By the disintegration theorem, there exists a family of probability measures
$\overline{\mu}_{t}$ on $B_{1}$ such that
$\mu^{r}=\overline{\mu}_{t}d\left(\pi_{\sharp}\mu^{r}\right).$
By defining
$\mu_{t}:=e_{r}(t)\overline{\mu}_{t},$
we get (4.11). ∎
###### Remark 4.7.
Unlike harmonic map heat flows, because the energy density for (1.1) is sign-
changing, we do not know if
$\left[\frac{1}{2}|\nabla
u_{\infty}(x,t)|^{2}-\frac{1}{p+1}|u_{\infty}(x,t)|^{p+1}\right]dx+\mu_{t}$
is a well-defined measure _for all $t$_. Similarly, we also do not know if
there is an estimate on the Hausdorff measure of $\Sigma^{\ast}_{t}$.
Define $\Theta_{s}(x,t;u_{\infty},\mu)$ to be
$\displaystyle s^{\frac{p+1}{p-1}}\int_{B_{1}}\left[\frac{|\nabla
u_{\infty}(y,t-s)|^{2}}{2}-\frac{|u_{\infty}(y,t-s)|^{p+1}}{p+1}\right]G(y-x,s)\psi(y)^{2}dy$
$\displaystyle+$
$\displaystyle\frac{1}{2(p-1)}s^{\frac{2}{p-1}}\int_{B_{1}}u_{\infty}(y,t-s)^{2}G(y-x,s)\psi(y)^{2}dy$
$\displaystyle+$
$\displaystyle\frac{1}{m}s^{\frac{p+1}{p-1}}\int_{\mathbb{R}^{n}}G(y-x,s)\psi(y)^{2}d\mu_{t-s}(y)+Ce^{-cs^{-1}}.$
As in Remark 3.1, rigorously we should integrate one more time in $s$.
Passing to the limit in the monotonicity formula for $u_{i}$, we obtain
###### Proposition 4.8 (Limiting, localized monotonicity formula).
For any $(x,t)\in Q_{1/2}$ and a.a. $0<s_{1}<s_{2}<1/4$,
$\displaystyle\Theta_{s_{2}}(x,t;u_{\infty},\mu)-\Theta_{s_{1}}(x,t;u_{\infty},\mu)$
$\displaystyle\geq$
$\displaystyle\int_{s_{1}}^{s_{2}}\tau^{\frac{2}{p-1}-1}\int_{B_{1}}\Big{|}(t-\tau)\partial_{t}u_{\infty}(y,t-\tau)+\frac{u_{\infty}(y,t-\tau)}{p-1}+\frac{y}{2}\cdot\nabla
u_{\infty}(y,t-\tau)\Big{|}^{2}$
$\displaystyle\quad\quad\quad\quad\quad\quad\times
G(y-x,\tau)\psi(y)^{2}dyd\tau$ $\displaystyle+$
$\displaystyle\int_{s_{1}}^{s_{2}}\int_{B_{1}}\tau^{\frac{2}{p-1}-1}(t-\tau)^{2}G(y-x,\tau)\psi(y)^{2}d\nu(y,\tau)$
$\displaystyle+$
$\displaystyle\int_{s_{1}}^{s_{2}}\int_{B_{1}}\tau^{\frac{2}{p-1}-1}\frac{|y|^{2}}{4}G(y-x,\tau)\psi(y)^{2}d\mu(y,\tau)$
$\displaystyle+$
$\displaystyle\frac{1}{m}\int_{s_{1}}^{s_{2}}\int_{B_{1}}\tau^{\frac{2}{p-1}-1}(t-\tau)y\cdot\mathbf{H}(y,\tau)G(y-x,\tau)\psi(y)^{2}d\mu(y,\tau).$
### 5\. Tangent flow analysis, I
In this section we perform the tangent flow analysis for $(u_{\infty},\mu)$.
For any $(x,t)\in\Sigma^{\ast}$ and a sequence $\lambda_{i}\to 0$, define the
blowing up sequence
$\left\\{\begin{aligned}
&u_{\infty}^{\lambda_{i}}(y,s):=\lambda_{i}^{\frac{2}{p-1}}u_{\infty}(x+\lambda_{i}y,t+\lambda_{i}^{2}s),\\\
&\mu^{\lambda_{i}}(A):=\lambda_{i}^{m-2-n}\mu(\lambda_{i}A),\quad\nu^{\lambda_{i}}(A):=\lambda_{i}^{m-n}\nu(\lambda_{i}A),\quad\mbox{for
any}~{}~{}A\subset\mathbb{R}^{n}\times\mathbb{R}.\end{aligned}\right.$
By scaling (4.2), we see that, with the same constant $M$ as in Corollary 3.5,
for any $Q_{R}\subset\mathbb{R}^{n}\times\mathbb{R}$, we have
$\left\\{\begin{aligned} &\mu^{\lambda_{i}}(Q_{R})+\int_{Q_{R}}\left(|\nabla
u^{\lambda_{i}}_{\infty}|^{2}+|u_{\infty}^{\lambda_{i}}|^{p+1}\right)\leq
MR^{n+2-m},\\\
&\nu^{\lambda_{i}}(Q_{R})+\int_{Q_{R}}|\partial_{t}u_{\infty}^{\lambda_{i}}|^{2}\leq
MR^{n-m}.\end{aligned}\right.$ (5.1)
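The bounds (5.1) are just (4.2) rewritten in the blown up variables; for instance, for the measure part,
$\mu^{\lambda_{i}}(Q_{R})=\lambda_{i}^{m-2-n}\mu\left(Q_{\lambda_{i}R}(x,t)\right)\leq\lambda_{i}^{m-2-n}M\left(\lambda_{i}R\right)^{n+2-m}=MR^{n+2-m},$
and the terms $|\nabla u_{\infty}^{\lambda_{i}}|^{2}$ and $|u_{\infty}^{\lambda_{i}}|^{p+1}$ scale with the same exponent $\lambda_{i}^{m-2-n}$ (recall $m-2=\frac{4}{p-1}$); the second line of (5.1) is analogous with $n-m$ in place of $n+2-m$.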
Therefore there exists a subsequence (still denoted by $\lambda_{i}$) such
that $u_{\infty}^{\lambda_{i}}\rightharpoonup u_{\infty}^{0}$ weakly in
$L^{2}_{t}H^{1}_{x,loc}(\mathbb{R}^{n}\times\mathbb{R})\cap
L^{p+1}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$, and
$\left\\{\begin{aligned} &|\nabla
u_{\infty}^{\lambda_{i}}|^{2}dxdt+\mu^{\lambda_{i}}\rightharpoonup|\nabla
u_{\infty}^{0}|^{2}dxdt+\mu^{0},\\\
&|\partial_{t}u_{\infty}^{\lambda_{i}}|^{2}dxdt+\nu^{\lambda_{i}}\rightharpoonup|\partial_{t}u_{\infty}^{0}|^{2}dxdt+\nu^{0}\end{aligned}\right.$
weakly as Radon measures on any compact set of
$\mathbb{R}^{n}\times\mathbb{R}$.
###### Remark 5.1.
By Lemma 4.3, we can avoid a set of zero $\mathcal{P}^{n+2-m}$ measure so that
$\lim_{r\to 0}\left[r^{m-2-n}\int_{Q_{r}(x,t)}\left(|\nabla
u_{\infty}|^{2}+|u_{\infty}|^{p+1}\right)+r^{m-n}\int_{Q_{r}(x,t)}|\partial_{t}u_{\infty}|^{2}\right]=0.$
(5.2)
Under this assumption, $u_{\infty}^{\lambda_{i}}$ converges to $0$ strongly in
$L^{2}_{t}H^{1}_{x,loc}(\mathbb{R}^{n}\times\mathbb{R})\cap
L^{p+1}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$,
$\partial_{t}u_{\infty}^{\lambda_{i}}$ converges to $0$ strongly in
$L^{2}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$, so
$\mu^{\lambda_{i}}\rightharpoonup\mu^{0},\quad\nu^{\lambda_{i}}\rightharpoonup\nu^{0}.$
This special choice will be used in Section 6.
Because $\mu$ is the defect measure coming from $u_{i}$, we can take a further
subsequence of $u_{i}$ so that, by defining
$u_{i}^{\lambda_{i}}(y,s):=\lambda_{i}^{\frac{2}{p-1}}u_{i}\left(x+\lambda_{i}y,t+\lambda_{i}^{2}s\right),$
we have
$\left\\{\begin{aligned} &|\nabla
u_{i}^{\lambda_{i}}|^{2}dyds\rightharpoonup|\nabla
u_{\infty}^{0}|^{2}dxdt+\mu^{0},\\\
&|\partial_{t}u_{i}^{\lambda_{i}}|^{2}dyds\rightharpoonup|\partial_{t}u_{\infty}^{0}|^{2}dxdt+\nu^{0}.\end{aligned}\right.$
As a consequence, all results in Section 4 hold for $u_{\infty}^{0}$,
$\mu^{0}$ and $\nu^{0}$. In particular,
1. (1)
by Lemma 4.6, there exists a family of Radon measures $\mu^{0}_{t}$ on
$\mathbb{R}^{n}$ (for a.a. $t\in\mathbb{R}$) such that
$\mu^{0}=\mu^{0}_{t}dt;$ (5.3)
2. (2)
there exists an $\mathbf{H}^{0}\in
L^{2}_{loc}(\mathbb{R}^{n}\times\mathbb{R},d\mu^{0})$ such that
$\partial_{t}u_{i}^{\lambda_{i}}\nabla
u_{i}^{\lambda_{i}}dxdt\rightharpoonup\partial_{t}u_{\infty}^{0}\nabla
u_{\infty}^{0}dxdt+\frac{1}{m}\mathbf{H}^{0}d\mu^{0};$
3. (3)
for any $(x,t)\in\mathbb{R}^{n}\times\mathbb{R}$ and $s>0$, we define
$\Theta_{s}(x,t;u_{\infty}^{0},\mu^{0})$ to be
$\displaystyle s^{\frac{p+1}{p-1}}\int_{\mathbb{R}^{n}}\left[\frac{|\nabla
u_{\infty}^{0}(y,t-s)|^{2}}{2}-\frac{|u_{\infty}^{0}(y,t-s)|^{p+1}}{p+1}\right]G(y-x,s)dy$
$\displaystyle+$
$\displaystyle\frac{1}{2(p-1)}s^{\frac{2}{p-1}}\int_{\mathbb{R}^{n}}u_{\infty}^{0}(y,t-s)^{2}G(y-x,s)dy+\frac{1}{m}s^{\frac{p+1}{p-1}}\int_{\mathbb{R}^{n}}G(y-x,s)d\mu_{t-s}^{0}(y),$
which is still non-decreasing in $s>0$.
For any $(x,t)\in\mathbb{R}^{n}\times\mathbb{R}$, we still define
$\Theta(x,t;u_{\infty}^{0},\mu^{0}):=\lim_{s\to
0}\Theta_{s}(x,t;u_{\infty}^{0},\mu^{0}).$
By the scaling invariance of $\Theta$, we obtain
$\Theta_{s}(0,0;u_{\infty}^{0},\mu^{0})=\Theta(x,t;u_{\infty},\mu),\quad\forall
s>0.$ (5.4)
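Indeed, ignoring the cutoff $\psi$ and the exponentially small error term, (5.4) follows from the chain
$\Theta_{s}(0,0;u_{\infty}^{0},\mu^{0})=\lim_{i\to\infty}\Theta_{s}(0,0;u_{\infty}^{\lambda_{i}},\mu^{\lambda_{i}})=\lim_{i\to\infty}\Theta_{\lambda_{i}^{2}s}(x,t;u_{\infty},\mu)=\Theta(x,t;u_{\infty},\mu),$
where the middle equality is the scaling invariance of $\Theta$, and the last limit exists by the monotonicity in Proposition 4.8.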
Then an application of Proposition 4.8 gives
###### Lemma 5.2.
1. (1)
The function $u_{\infty}^{0}$ is backwardly self-similar in the sense that
$u_{\infty}^{0}(\lambda
x,\lambda^{2}t)=\lambda^{-\frac{2}{p-1}}u_{\infty}^{0}(x,t),\quad\forall(x,t)\in\mathbb{R}^{n}\times\mathbb{R}^{-},~{}~{}\lambda>0.$
2. (2)
The measure $\mu^{0}$ is backwardly self-similar in the sense that
$\mu^{0}(\lambda
A)=\lambda^{n+2-m}\mu^{0}(A),\quad\forall\lambda>0,~{}~{}~{}A\subset\mathbb{R}^{n}\times\mathbb{R}^{-}.$
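Point (1) of Lemma 5.2 can be read off from Proposition 4.8 together with (5.4): since $\Theta_{s}(0,0;u_{\infty}^{0},\mu^{0})$ is constant in $s$, the first nonnegative term on the right-hand side of the monotonicity formula must vanish, so for a.e. $(y,t)\in\mathbb{R}^{n}\times\mathbb{R}^{-}$,
$t\partial_{t}u_{\infty}^{0}(y,t)+\frac{u_{\infty}^{0}(y,t)}{p-1}+\frac{y}{2}\cdot\nabla u_{\infty}^{0}(y,t)=0.$
Up to a factor $2$, this is $\frac{d}{d\lambda}\Big{|}_{\lambda=1}\left[\lambda^{\frac{2}{p-1}}u_{\infty}^{0}(\lambda y,\lambda^{2}t)\right]=0$, which integrates along the scaling orbits to the backward self-similarity of $u_{\infty}^{0}$.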
By (5.4), following the proof of Lin-Wang [58, Lemma 8.3.3], we obtain
###### Lemma 5.3.
For any $(x,t)\in\mathbb{R}^{n}\times\mathbb{R}$,
$\Theta(x,t;u_{\infty}^{0},\mu^{0})\leq\Theta(0,0;u_{\infty}^{0},\mu^{0}).$
Moreover, if the equality is attained at $(x,t)$, then $u_{\infty}^{0}$ and
$\mu^{0}$ are translational invariant in the $(x,t)$-direction.
By this lemma,
$\mathcal{L}(u_{\infty}^{0},\mu^{0}):=\left\\{(x,t):~{}~{}\Theta(x,t;u_{\infty}^{0},\mu^{0})=\Theta(0,0;u_{\infty}^{0},\mu^{0})\right\\}$
is a linear subspace of $\mathbb{R}^{n}\times\mathbb{R}$, which is called the
invariant subspace of $(u_{\infty}^{0},\mu^{0})$.
###### Definition 5.4.
The invariant dimension of $(u_{\infty}^{0},\mu^{0})$ is
$\begin{cases}k+2,&\mbox{if
}\mathcal{L}(u_{\infty}^{0},\mu^{0})=\mathbb{R}^{k}\times\mathbb{R},\\\
k,&\mbox{otherwise}.\end{cases}$
Using these notations we can define a stratification of $\Sigma^{\ast}$, and
give a dimension estimate on these stratifications as in White [89], see also
[58, Section 8.3].
### 6\. The case $m$ is not an integer
In this section, we use Marstrand's theorem ([61], see also [59, Theorem
1.3.12]) to study the case when $m$ is not an integer. This is similar to the
elliptic case studied in Du [26] and by the authors [86]. The main result in
this case is
###### Theorem 6.1.
Suppose $m$ is not an integer. Then under the assumptions of Theorem 4.2,
$u_{i}$ converges strongly to $u_{\infty}$ in $L^{p+1}_{loc}(Q_{1})$ and
$\nabla u_{i}$ converges strongly to $\nabla u_{\infty}$ in $L^{2}_{loc}(Q_{1})$.
In other words, $\mu=0$. As a consequence, $u_{\infty}$ is a suitable weak
solution of (1.1) in $Q_{1}$.
Here we do not claim the strong convergence of $\partial_{t}u_{i}$.
To prove this theorem, we use the tangent flow analysis of the previous
section, but perform it only at a point $(x,t)$ satisfying the condition (5.2).
Hence by Remark 5.1, $u_{\infty}^{0}=0$ and
$\mu^{\lambda}\rightharpoonup\mu^{0},\quad\nu^{\lambda}\rightharpoonup\nu^{0}.$
By (4.3) in Theorem 4.2, $\mu^{0}$ is not identically zero in
$\mathbb{R}^{n}\times\mathbb{R}^{-}$. Then by the self-similarity of
$\mu^{0}$, there exists a point $(x_{0},-1)\in\mbox{spt}(\mu^{0})$. We blow up
$\mu^{0}$ again at $(x_{0},-1)$ as in Section 5, producing a tangent measure
$\mu^{1}$ to $\mu^{0}$ at this point.
###### Lemma 6.2.
The measure $\mu^{1}$ is static, that is,
$\partial_{t}\mu^{1}=0\quad\mbox{in the distributional sense}.$
###### Proof.
By Lemma 5.2 and the scaling invariance of $\Theta_{s}$, for any $\lambda>0$,
$\Theta_{\lambda^{2}s}(\lambda
x_{0},-\lambda^{2};\mu^{0})=\Theta_{s}(x_{0},-1;\mu^{0}).$
Letting $s\to 0$, we obtain
$\Theta(\lambda x_{0},-\lambda^{2};\mu^{0})=\Theta(x_{0},-1;\mu^{0}).$
After blowing up to $\mu^{1}$, this equality implies that
$\Theta(0,t;\mu^{1})=\Theta(0,0;\mu^{1}),\quad\forall t\in\mathbb{R}.$
By Lemma 5.3, $\mu^{1}$ is invariant under translations in the time direction.
∎
###### Corollary 6.3.
$\nu^{1}=0$ and $\mathbf{H}^{1}=0$ $\mu^{1}$-a.e.
###### Proof.
This follows from the energy identity for $\mu^{1}$,
$\frac{d}{dt}\int_{\mathbb{R}^{n}}\eta d\mu^{1}_{t}=-m\int\eta
d\nu^{1}-\int_{\mathbb{R}^{n}}\nabla\eta\cdot\mathbf{H}^{1}d\mu^{1}_{t}.$
Note that $\mathbf{H}^{1}$ can be controlled by $\nu^{1}$ as in Corollary 4.5.
∎
By the previous lemma, we can view $\mu^{1}$ as a Radon measure on
$\mathbb{R}^{n}$. Now the stationary condition (4.10) reads as
$\int_{\mathbb{R}^{n}}\left(I-m\mathcal{T}\right)\cdot
DYd\mu^{1}=0,\quad\forall Y\in C_{0}^{\infty}(\mathbb{R}^{n},\mathbb{R}^{n}).$
(6.1)
Following Moser [72], this is called a stationary measure. Similar to [72,
Lemma 2.1], we have
###### Lemma 6.4.
For any $x\in\mathbb{R}^{n}$,
$\Theta_{r}(x;\mu^{1}):=r^{m-n}\mu^{1}(B_{r}(x))$
is non-decreasing in $r>0$.
As a consequence,
$\Theta(x;\mu^{1}):=\lim_{r\to 0}r^{m-n}\mu^{1}(B_{r}(x))$
exists. By (4.2), (4.3) and Lemma 4.3,
$\frac{\varepsilon_{\ast}}{2}\leq\Theta(x;\mu^{1})\leq
C,\quad\mu^{1}-\mbox{a.e. in}~{}~{}\mathbb{R}^{n}.$
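Recall the form of Marstrand's theorem used here ([61], see also [59, Theorem 1.3.12]): if a nontrivial Radon measure $\sigma$ on $\mathbb{R}^{n}$ satisfies
$0<\lim_{r\to 0}\frac{\sigma(B_{r}(x))}{r^{s}}<+\infty\quad\quad\sigma\mbox{-a.e.},$
then $s$ must be a nonnegative integer. We apply this to $\sigma=\mu^{1}$ with $s=n-m$: the limit exists by Lemma 6.4, and the density bounds above guarantee that it is positive and finite $\mu^{1}$-a.e.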
By Marstrand's theorem, $m$ must be an integer. In other words, if $m$ is not
an integer, then $\mu^{1}$, and consequently $\mu^{0}$ and $\mu$, must be
trivial. This finishes the proof of Theorem 6.1.
### 7\. Partial regularity of suitable weak solutions
In this section, as an application of Theorem 6.1, we prove the following
partial regularity result for a fixed suitable weak solution. It is almost
the same as the one obtained in [9], with a small improvement.
###### Theorem 7.1.
Suppose $u$ is a suitable weak solution of (1.1) in $Q_{1}$. Then
* •
If $1<p<\frac{n+2}{n-2}$, $u$ is smooth, that is, $\mathcal{S}(u)=\emptyset$.
* •
If there exists an integer $k$ with $3\leq k<n$ such that
$\frac{k+3}{k-1}<p<\frac{k+2}{k-2}$, then
$\mbox{dim}_{\mathcal{P}}(\mathcal{S}(u))\leq n-k+2$.
* •
If $m$ is an integer, then $\mathcal{P}^{n-m+2}(\mathcal{S}(u))=0$.
We have already obtained a dimension bound on the singular set
$\mathcal{S}(u)$ in Corollary 3.10. Here we need only to improve this bound
when $m$ is not an integer. This follows from a standard dimension reduction
argument, cf. [54], [86].
Suppose $(x,t)\in\mathcal{S}(u)\cap Q_{1/2}$. By Theorem 3.9, for any
$r\in(0,1/2)$,
$\int_{Q_{r}^{-}(x,t)}\left(|\nabla
u|^{2}+|u|^{p+1}\right)\geq\varepsilon_{\ast}r^{n+2-m}.$ (7.1)
For $\lambda\to 0$, define the blowing up sequence
$u^{\lambda}(y,s):=\lambda^{\frac{2}{p-1}}u(x+\lambda y,t+\lambda^{2}s).$
As in Section 5, we can take a subsequence so that $u^{\lambda}\rightharpoonup
u^{0}$ weakly in $L^{2}_{t}H^{1}_{x,loc}(\mathbb{R}^{n}\times\mathbb{R})\cap
L^{p+1}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$, and
$\left\\{\begin{aligned} &|\nabla u^{\lambda}|^{2}dxdt\rightharpoonup|\nabla
u^{0}|^{2}dxdt+\mu^{0},\\\
&|\partial_{t}u^{\lambda}|^{2}dxdt\rightharpoonup|\partial_{t}u^{0}|^{2}dxdt+\nu^{0}\end{aligned}\right.$
weakly as Radon measures on any compact set of
$\mathbb{R}^{n}\times\mathbb{R}$.
Furthermore, because $m$ is not an integer, Theorem 6.1 implies that
$\mu^{0}=0$ and $u^{\lambda}$ in fact converges strongly. Because tangent
flows are always nontrivial (by (7.1)), $u^{0}\neq 0$.
By Lemma 5.2, $u^{0}$ is backwardly self-similar. For these solutions, the
following Liouville theorem was established by Giga-Kohn in [36].
###### Theorem 7.2.
When $p\leq(n+2)/(n-2)$, if $u$ is a backward self-similar solution of (1.1)
in $\mathbb{R}^{n}\times\mathbb{R}^{-}$, satisfying
$\sup_{x\in\mathbb{R}^{n}}\int_{Q_{1}^{-}(x,-1)}\left(|\nabla
u|^{2}+|u|^{p+1}\right)<+\infty,$ (7.2)
then either $u\equiv 0$, or
$u\equiv\pm(p-1)^{-\frac{1}{p-1}}(-t)^{-\frac{1}{p-1}}$.
Here the original $L^{\infty}(\mathbb{R}^{n})$ assumption in [36, Theorem 1]
is replaced by the integral one (7.2). Under this condition, we are still able
to carry out the same computation as in [36] to deduce the Pohozaev identity
[36, Proposition 1]. For $u^{0}$, the condition (7.2) follows by scaling (3.2)
and passing to the limit.
By this theorem, if $p<(n+2)/(n-2)$ is subcritical, because $u^{0}$ also
satisfies
$\int_{Q_{1}}|u^{0}|^{p+1}<+\infty,$
we must have $u^{0}=0$. This contradiction implies that there is no singular
point of $u$. In other words, $u$ is smooth.
If $p>(n+2)/(n-2)$ is supercritical, by Point (3) in Theorem 4.2, the strong
convergence of $u^{\lambda}$ implies that if $\lambda$ is small enough, then
$\lambda^{-1}\left(\mathcal{S}(u)-(x,t)\right)$ is contained in a small
neighborhood of $\mathcal{S}(u^{0})$. Since $u^{0}$ is backwardly self-similar
(see Lemma 5.2), applying White’s stratification theorem in [89], we conclude
the proof of Theorem 7.1.
### 8\. The case $m$ is an integer
In this section we continue the analysis of energy concentration behavior, now
under the assumption that $m$ is an integer. For this case we prove
###### Theorem 8.1.
If $m$ is an integer, then $(u_{\infty},\mu)$ is a generalized Brakke’s flow
in the following sense: For any $\varphi\in
C_{0}^{\infty}(B_{1};\mathbb{R}^{+})$ and $-1<t_{1}<t_{2}<1$,
$\displaystyle\left[\int_{B_{1}}\left(\frac{1}{2}|\nabla
u_{\infty}(t_{2})|^{2}-\frac{1}{p+1}|u_{\infty}(t_{2})|^{p+1}\right)\varphi
dx+\frac{1}{m}\int_{B_{1}}\varphi d\mu_{t_{2}}\right]$
$\displaystyle-\left[\int_{B_{1}}\left(\frac{1}{2}|\nabla
u_{\infty}(t_{1})|^{2}-\frac{1}{p+1}|u_{\infty}(t_{1})|^{p+1}\right)\varphi
dx+\frac{1}{m}\int_{B_{1}}\varphi d\mu_{t_{1}}\right]$ $\displaystyle\leq$
$\displaystyle-\int_{t_{1}}^{t_{2}}\int_{B_{1}}\left[|\partial_{t}u_{\infty}|^{2}\varphi+\partial_{t}u_{\infty}\nabla
u_{\infty}\cdot\nabla\varphi\right]dxdt$
$\displaystyle-\frac{1}{m}\int_{t_{1}}^{t_{2}}\int_{\Sigma_{t}}\left[|\mathbf{H}_{t}|^{2}\varphi-\nabla\varphi\cdot\mathbf{H}_{t}\right]d\mu_{t}dt.$
By Lemma 4.6, for a.e. $t\in(-1,1)$ and any $Y\in
C_{0}^{\infty}(B_{1},\mathbb{R}^{n})$,
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\int_{B_{1}}\left[\left(\frac{|\nabla
u_{\infty}(x,t)|^{2}}{2}-\frac{|u_{\infty}(x,t)|^{p+1}}{p+1}\right)\mbox{div}Y-DY\left(\nabla
u_{\infty}(x,t),\nabla u_{\infty}(x,t)\right)\right]dx$ $\displaystyle+$
$\displaystyle\int_{B_{1}}\partial_{t}u_{\infty}(x,t)\nabla
u_{\infty}(x,t)\cdot Ydx$ $\displaystyle+$
$\displaystyle\int_{B_{1}}\left[\left(\frac{1}{m}I-\mathcal{T}(x,t)\right)\cdot
DY+\frac{1}{m}\mathbf{H}(x,t)\cdot Y\right]d\mu_{t}(x).$
In view of the lower density bound in (4.3), as in [1] or [52] (see also [72]
or [2, Proposition 3.1]), we deduce that $\mu_{t}$ is countably
$(n-m)$-rectifiable. In other words, $\Sigma_{t}$ is countably
$(n-m)$-rectifiable, and
$I-m\mathcal{T}=T_{x}\Sigma_{t},\quad\mathcal{H}^{n-m}~{}\mbox{a.e.
on}~{}\Sigma_{t},$ (8.3)
where $T_{x}\Sigma_{t}$ is the weak tangent space (identified with the
projection map onto it) of $\Sigma_{t}$ at $x$.
Similar to Lin-Wang [58, Lemma 9.2.2], we get
###### Lemma 8.2.
For a.e. $t\in(-1,1)$,
$\mathbf{H}(x,t)\bot
T_{x}\Sigma_{t},\quad\mbox{for}~{}\mathcal{H}^{n-m}~{}\mbox{a.e.
}~{}x\in\Sigma_{t}.$
Similar to Lin-Wang [58, Lemma 9.2.7], we also get
###### Lemma 8.3.
For any $\eta\in C_{0}^{\infty}(Q_{1})$,
$\int_{Q_{1}}\eta^{2}|\mathbf{H}(x,t)|^{2}d\mu_{t}(x)dt\leq\frac{1}{m}\int_{Q_{1}}\eta^{2}d\nu.$
Plugging this estimate into (4.9), we obtain the inequality in Theorem 8.1,
which completes the proof of Theorem 8.1.
### 9\. The case $p=(n+2)/(n-2)$
In this section we assume $p=(n+2)/(n-2)$ is the Sobolev critical exponent,
and all of the solutions $u_{i}$ are _smooth_. Note that now
$m=2(p+1)/(p-1)=n$. The main result of this section is about the quantization
of energy.
###### Theorem 9.1.
For any $(x,t)\in\Sigma$, there exist finitely many bubbles $W^{k}$,
$k=1,\cdots,N$, such that
$\Theta(x,t)=\frac{1}{n\left(4\pi\right)^{n/2}}\sum_{k=1}^{N}\int_{\mathbb{R}^{n}}|\nabla
W^{k}|^{2}.$
Furthermore, if all solutions are positive, then there exists an
$N\in\mathbb{N}$ such that
$\Theta(x,t)=N\frac{\Lambda}{n\left(4\pi\right)^{n/2}}.$
We first prove a local version of this theorem in Subsection 9.1, and then
prove the theorem itself in Subsection 9.2. Along the way, some further
properties of defect measures will be established in Subsections 9.2 and 9.3;
in particular, the special case of positive solutions, which will be used in
Part II, Part III and Part IV, will be discussed.
#### 9.1. A local quantization result
In this subsection we prove the following
###### Lemma 9.2.
Given a constant $M>0$, suppose a sequence of smooth solutions $u_{i}$ to
(1.1) in $Q_{1}$ satisfies
$\left\\{\begin{aligned} &|\nabla u_{i}|^{2}dxdt\rightharpoonup
M\delta_{0}\otimes dt,\\\ &|u_{i}|^{p+1}dxdt\rightharpoonup M\delta_{0}\otimes
dt,\\\ &\int_{Q_{1}}|\partial_{t}u_{i}|^{2}dxdt\to 0.\end{aligned}\right.$
Then there exist finitely many bubbles $W^{k}$ such that
$M=\sum_{k}\int_{\mathbb{R}^{n}}|\nabla W^{k}|^{2}dx.$
First let us present some immediate consequences of the assumptions in this
lemma.
* •
An application of the $\varepsilon$-regularity theorem (Theorem 3.9) implies
that
$u_{i}\to
0\quad\mbox{in}~{}C^{\infty}_{loc}\left((B_{1}\setminus\\{0\\})\times(-1,1)\right).$
(9.1)
* •
By the $L^{2}(Q_{1})$ bound on $\partial_{t}u_{i}$, we deduce that $u_{i}\to
0$ in $C_{loc}(-1,1;L^{2}_{loc}(B_{1}))$.
* •
By the Fatou lemma,
$\int_{-1}^{1}\lim_{i\to+\infty}\left(\int_{B_{1}}|\partial_{t}u_{i}(x,t)|^{2}dx\right)dt\leq\lim_{i\to+\infty}\int_{Q_{1}}|\partial_{t}u_{i}|^{2}dxdt=0.$
Hence for a.e. $t\in(-1,1)$,
$\lim_{i\to+\infty}\int_{B_{1}}|\partial_{t}u_{i}(x,t)|^{2}dx=0.$ (9.2)
The following lemma describes the energy concentration behavior for a.e. time
slice of $u_{i}$, under the assumptions in Lemma 9.2.
###### Lemma 9.3.
For a.e. $t\in(-1,1)$,
$\left\\{\begin{aligned} &|\nabla u_{i}(x,t)|^{2}dx\rightharpoonup
M\delta_{0},\\\ &|u_{i}(x,t)|^{p+1}dx\rightharpoonup
M\delta_{0}.\end{aligned}\right.$
###### Proof.
For any $\eta\in C_{0}^{\infty}(B_{1})$, by the energy identity for $u_{i}$,
the function
$E_{\eta,i}(t):=\int_{B_{1}}\left(\frac{|\nabla
u_{i}(x,t)|^{2}}{2}-\frac{|u_{i}(x,t)|^{p+1}}{p+1}\right)\eta(x)^{2}dx$
is uniformly bounded in $BV_{loc}(-1,1)$. After passing to a subsequence, they
converge in $L^{1}_{loc}(-1,1)$ and a.e. in $(-1,1)$ to the limit
$(M/n)\eta(0)^{2}$.
Another consequence of this uniform BV bound is that the $E_{\eta,i}$ are
uniformly bounded on any compact set of $(-1,1)$. If (9.2) holds at $t$, then
$\int_{B_{1}}\left(|\nabla
u_{i}(t)|^{2}-|u_{i}(t)|^{p+1}\right)\eta^{2}=-\int_{B_{1}}\left[\partial_{t}u_{i}(t)u_{i}(t)\eta^{2}+2\eta
u_{i}(t)\nabla u_{i}(t)\cdot\nabla\eta\right]$
are also bounded as $i\to+\infty$. Therefore
$\limsup_{i\to+\infty}\int_{B_{1}}\left[|\nabla
u_{i}(t)|^{2}+|u_{i}(t)|^{p+1}\right]dx<+\infty.$ (9.3)
Because $u_{i}(t)\to 0$ in $L^{2}(B_{1})$, with the help of (9.1), we find a
nonnegative constant $M(t)$ such that
$|\nabla u_{i}(x,t)|^{2}dx\rightharpoonup
M(t)\delta_{0},\quad|u_{i}(x,t)|^{p+1}dx\rightharpoonup M(t)\delta_{0}.$
Then using the above a.e. convergence of $E_{\eta,i}(t)$, and letting $\eta$
vary in $C_{0}^{\infty}(B_{1})$, we deduce that $M(t)=M$. ∎
For those $t$ satisfying (9.2), we have the uniform $H^{1}(B_{1})$ bound
(9.3); hence by Struwe's global compactness theorem ([82]), the following
bubble tree convergence holds for $u_{i}(t)$.
###### Proposition 9.4 (Bubble tree convergence).
There exist $N(t)$ points $\xi_{ik}^{\ast}(t)$, positive constants
$\lambda_{ik}^{\ast}(t)$, $k=1,\cdots,N(t)$, all converging to $0$ as
$i\to+\infty$, and $N(t)$ bubbles $W^{k}$, such that
$u_{i}(x,t)=\sum_{k=1}^{N(t)}W^{k}_{\xi_{ik}^{\ast}(t),\lambda_{ik}^{\ast}(t)}(x)+o_{i}(1),$
where $o_{i}(1)$ are measured in $H^{1}(B_{1})$.
As a consequence,
$\int_{B_{1}}|\nabla
u_{i}(x,t)|^{2}dx=\sum_{k=1}^{N(t)}\int_{\mathbb{R}^{n}}|\nabla
W^{k}|^{2}+o_{i}(1).$ (9.4)
###### Remark 9.5.
If $u_{i}$ are positive, then all bubbles constructed in this proposition are
positive, see [24, Section 3.2]. In view of the Liouville theorem of
Caffarelli-Gidas-Spruck [8], we can take all of these $W^{k}$ to be the
standard Aubin-Talenti bubble $W$. As a consequence, (9.4) reads as
$\int_{B_{1}}|\nabla u_{i}(x,t)|^{2}dx=N(t)\Lambda+o_{i}(1).$ (9.5)
###### Remark 9.6.
Concerning the bubble tree convergence, there are two phenomena that we will
investigate more closely later.
Bubble towering:
Two bubbles at $\xi_{ik}^{\ast}(t)$ and $\xi_{i\ell}^{\ast}(t)$ are towering
if
$\limsup_{i\to+\infty}\frac{|\xi_{ik}^{\ast}(t)-\xi_{i\ell}^{\ast}(t)|}{\max\\{\lambda_{ik}^{\ast}(t),\lambda_{i\ell}^{\ast}(t)\\}}<+\infty.$
In this case, either
$\frac{\lambda_{ik}^{\ast}(t)}{\lambda_{i\ell}^{\ast}(t)}\to+\infty$ or
$\frac{\lambda_{i\ell}^{\ast}(t)}{\lambda_{ik}^{\ast}(t)}\to+\infty$. Hence
these two bubbles are located at almost the same point (with respect to the
bubble scales), but the height of one bubble is far larger than the other
one’s.
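As a hedged illustration (the specific scales below are hypothetical, chosen only to exhibit the phenomenon), towering occurs for instance when two bubbles sit at exactly the same point while their scales are of different orders:

```latex
\xi_{ik}^{\ast}(t)=\xi_{i\ell}^{\ast}(t)=0,\qquad
\lambda_{ik}^{\ast}(t)=i^{-1},\qquad
\lambda_{i\ell}^{\ast}(t)=i^{-2},
```

so the distance quotient in the definition vanishes identically (in particular its $\limsup$ is finite), while $\lambda_{ik}^{\ast}(t)/\lambda_{i\ell}^{\ast}(t)=i\to+\infty$.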
Bubble clustering:
If for some $k\neq\ell$,
$\lim_{i\to+\infty}|\xi_{ik}^{\ast}(t)-\xi_{i\ell}^{\ast}(t)|=0$
but
$\lim_{i\to+\infty}\frac{|\xi_{ik}^{\ast}(t)-\xi_{i\ell}^{\ast}(t)|}{\max\\{\lambda_{ik}^{\ast}(t),\lambda_{i\ell}^{\ast}(t)\\}}=+\infty,$
we say these bubbles are clustering.
Using the terminology introduced in the study of the Yamabe problem (see
Schoen [77]), if there is no bubble clustering and towering, the blow up is
called _isolated and simple_.
Combining Lemma 9.3 with Proposition 9.4, we conclude the proof of Lemma 9.2.
Furthermore, if all solutions are positive, by Remark 9.5, there exists an
$N\in\mathbb{N}$ such that
$M=N\Lambda,\quad\mbox{and}\quad N(t)=N\quad\mbox{a.e. in }~{}~{}(-1,1).$
(9.6)
#### 9.2. Proof of Theorem 9.1
Under the critical exponent assumption, the tangent flow analysis in Section 5
can give more information.
First, since $m=n$, the blowing up sequence is defined simply by
$\nu^{\lambda}(A):=\nu\left((x,t)+\lambda A\right),\quad\forall
A\subset\mathbb{R}^{n}\times\mathbb{R}.$
In the same way, for any $R>0$,
$\int_{Q_{R}}|\partial_{t}u_{\infty}^{\lambda}|^{2}=\int_{Q_{\lambda
R}}|\partial_{t}u_{\infty}|^{2}\to 0\quad\mbox{as}~{}\lambda\to 0.$ (9.7)
Therefore by defining
$Q:=\lim_{\lambda\to 0}\nu\left(Q_{\lambda}(x,t)\right),$ (9.8)
we get
$\nu^{0}=Q\delta_{(0,0)}.$ (9.9)
There is an atom of $\nu$ at $(x,t)$ if and only if $Q>0$.
By Corollary 4.5, we find that
$\mathbf{H}^{0}d\mu=\mathbf{H}^{0}(0,0)\delta_{(0,0)}.$ (9.10)
Recall that $u_{\infty}$ is backwardly self-similar (Lemma 5.2). By combining
Theorem 7.2 with (9.7), we deduce that
$u_{\infty}^{0}\equiv 0\quad\mbox{in}~{}~{}\mathbb{R}^{n}\times\mathbb{R}.$
Hence in view of (9.9) and (9.10), the energy identity (4.9) for
$(u_{\infty}^{0},\mu^{0})$ reads as
$\frac{1}{m}\int_{Q_{1}}\partial_{t}\eta
d\mu^{0}=Q\eta(0,0)+\frac{1}{m}\nabla\eta(0,0)\cdot\mathbf{H}^{0}(0,0),\quad\forall\eta\in
C_{0}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}).$ (9.11)
In particular,
$\partial_{t}\mu^{0}=0\quad\mbox{in the distributional sense in
}~{}\left(\mathbb{R}^{n}\times\mathbb{R}\right)\setminus\\{(0,0)\\}.$
Combining this fact with the backward self-similarity of $\mu^{0}$ (see Lemma
5.2), we deduce that there exists a constant $M\geq Q$ such that
$\mu^{0}=M\delta_{0}\otimes
dt\lfloor_{\mathbb{R}^{-}}+\left(M-Q\right)\delta_{0}\otimes
dt\lfloor_{\mathbb{R}^{+}}.$ (9.12)
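To indicate schematically how (9.12) arises (a sketch only; we do not track the normalizing factor $\frac{1}{m}$ from (9.11)): since $\partial_{t}\mu^{0}=0$ away from $(0,0)$, the time slices $\mu^{0}_{t}$ are constant in $t$ on each of $(-\infty,0)$ and $(0,+\infty)$, and the backward self-similarity forces each slice to be a point mass at the origin. Testing (9.11) with $\eta=\eta(t)$ concentrated near $t=0$ then identifies the jump of the slice masses across $t=0$ with the atom:

```latex
\mu^{0}_{t}=M\delta_{0}\ \ (t<0),\qquad
\mu^{0}_{t}=(M-Q)\delta_{0}\ \ (t>0),\qquad
M-(M-Q)=Q.
```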
By choosing $\eta(x,t)=\varphi(x)\psi(t)x$ in (9.11), where $\varphi\in
C_{0}^{\infty}(\mathbb{R}^{n})$ and $\psi\in C_{0}^{\infty}(\mathbb{R})$, we
also deduce that
$\mathbf{H}^{0}(0,0)=0.$ (9.13)
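The computation behind (9.13) can be spelled out componentwise, taking $\eta(x,t)=\varphi(x)\psi(t)x_{j}$ in (9.11) for each $j=1,\cdots,n$. Since $\mu^{0}$ in (9.12) is supported on $\\{x=0\\}$, where $x_{j}=0$,

```latex
\frac{1}{m}\int\varphi(x)\psi'(t)\,x_{j}\,d\mu^{0}=0,\qquad
\eta(0,0)=0,\qquad
\nabla\eta(0,0)=\varphi(0)\psi(0)\,e_{j},
```

so (9.11) reduces to $0=\frac{1}{m}\varphi(0)\psi(0)\mathbf{H}^{0}_{j}(0,0)$; choosing $\varphi(0)\psi(0)\neq 0$ gives $\mathbf{H}^{0}_{j}(0,0)=0$ for every $j$.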
Now we come to the proof of Theorem 9.1.
###### Proof.
As in Section 5, there exist two sequences of solutions to (1.1) (with
$p=(n+2)/(n-2)$) satisfying the assumptions in Lemma 9.2 in $Q_{1}(0,-2)$ and
$Q_{1}(0,2)$. Therefore there exist two groups of finitely many bubbles,
$\\{W^{k}\\}$ and $\\{W^{\ell}\\}$, such that
$M=\sum_{k}\int_{\mathbb{R}^{n}}|\nabla W^{k}|^{2}dx,\quad
M-Q=\sum_{\ell}\int_{\mathbb{R}^{n}}|\nabla W^{\ell}|^{2}dx.$ (9.14)
Furthermore, if all solutions are positive, then there exist
$N_{1},N_{2}\in\mathbb{N}$ such that
$M=N_{1}\Lambda,\quad M-Q=N_{2}\Lambda.$ (9.15)
By the weak convergence of $|\nabla
u_{\infty}^{\lambda}|^{2}dxdt+\mu^{\lambda}$ etc., we get
$\displaystyle\Theta(x,t;u_{\infty},\mu)$ $\displaystyle=$
$\displaystyle\lim_{\lambda\to
0}\int_{1}^{2}\Theta_{\lambda^{2}s}(x,t;u_{\infty},\mu)ds$ $\displaystyle=$
$\displaystyle\lim_{\lambda\to
0}\int_{1}^{2}\Theta_{s}(0,0;u_{\infty}^{\lambda},\mu^{\lambda})ds$
$\displaystyle=$
$\displaystyle\frac{1}{n}\int_{1}^{2}s^{\frac{n}{2}}\left[\int_{\mathbb{R}^{n}}G(y,1)d\mu^{0}_{-s}(y)\right]ds$
$\displaystyle=$ $\displaystyle\frac{M}{n\left(4\pi\right)^{n/2}}.$
Substituting (9.14) into this equality, we conclude the proof. ∎
A consequence of Theorem 9.1 is the following relation between $\Theta$ and
$\theta$.
###### Corollary 9.7.
For $\mu$-a.e. $(x,t)$,
$\Theta(x,t;u_{\infty},\mu)=\frac{1}{n\left(4\pi\right)^{\frac{n}{2}}}\theta(x,t),$
and consequently, there exist finitely many bubbles $W^{k}$, $k=1,\cdots,N$,
such that
$\theta(x,t)=\sum_{k=1}^{N}\int_{\mathbb{R}^{n}}|\nabla W^{k}|^{2}.$
Moreover, if all solutions are positive, there exist an $N(x,t)\in\mathbb{N}$
such that
$\theta(x,t)=N(x,t)\Lambda\quad\mu-a.e..$
###### Proof.
For those $(x,t)$ satisfying the condition (5.2), we have
$\theta(x,t)=\lim_{\lambda\to
0}\frac{\mu\left(Q_{\lambda}(x,t)\right)}{2\lambda^{2}}=\frac{1}{2}\mu^{0}(Q_{1}).$
We can also assume there is no atom of $\nu$ at $(x,t)$, by avoiding an at
most countable set. Then by the above structure theory of tangent flows and Theorem
9.1, we get
$\mu^{0}(Q_{1})=2M=2n\left(4\pi\right)^{\frac{n}{2}}\Theta(x,t).\qed$
#### 9.3. Further properties of defect measures
Some consequences follow from the above structure result on tangent flows. The
first one is
###### Lemma 9.8.
For each $t$, $\Sigma_{t}^{\ast}$ is isolated.
###### Proof.
Assume on the contrary that there exists a sequence of points
$(x_{j},t)\in\Sigma_{t}^{\ast}$ converging to a limit point $(x,t)$. Because
$\Sigma^{\ast}$ is closed, $(x,t)\in\Sigma^{\ast}$. Denote
$\lambda_{j}:=|x_{j}-x|\to 0.$
Define the blow up sequence $(u_{\infty}^{\lambda_{j}},\mu^{\lambda_{j}})$
with respect to the base point $(x,t)$ as before. Assume without loss of
generality that
$\frac{x_{j}-x}{\lambda_{j}}\to x_{\infty}\in\partial B_{1}.$
By the upper semi-continuity of $\Theta$, we get
$\Theta(x_{\infty},0;\mu^{0})\geq\limsup_{j\to+\infty}\Theta(x_{j},t;u_{\infty},\mu)\geq\varepsilon_{\ast}.$
(9.16)
On the other hand, the tangent flow analysis shows that
$\mbox{spt}(\mu^{0})=\\{0\\}\times\mathbb{R}$. This fact, together with the
$\varepsilon$-regularity theorem and the fact that $u_{\infty}^{0}=0$, implies
that
* •
$u_{\infty}^{\lambda_{j}}\to 0$ in
$C^{\infty}_{loc}\left((\mathbb{R}^{n}\setminus\\{0\\})\times\mathbb{R}\right)$,
* •
and for all $\lambda_{j}$ small, $\mbox{spt}(\mu^{\lambda_{j}})$ is contained
in $B_{1/2}\times\mathbb{R}$.
Therefore,
$\Theta(x_{\infty},0;\mu^{0})=\lim_{j\to+\infty}\Theta\left(\frac{x_{j}-x}{\lambda_{j}},0;u_{\infty}^{\lambda_{j}},\mu^{\lambda_{j}}\right)=0.$
This contradicts (9.16). In other words, there is no convergent sequence of
distinct points in $\Sigma_{t}^{\ast}$. ∎
The next one is about the form of the stress-energy tensor $\mathcal{T}$.
###### Lemma 9.9.
$\mathcal{T}=I/n$ $\mu$-a.e..
###### Proof.
This is (8.3) in this special case. Here we briefly explain a direct proof.
First, because $\mathcal{T}$ is $\mu$-measurable, it is approximately
continuous $\mu$-a.e.. Choose a point $(x,t)$ at which $\mathcal{T}$ is
approximately continuous and Lemma 4.3 holds. Take a sequence
$\lambda\to 0$ and define the tangent flow at this point as before, denoted by
$\mu^{0}$. Let
$\mathcal{T}^{\lambda}(y,s):=\mathcal{T}(x+\lambda y,t+\lambda^{2}s).$
By the approximate continuity of $\mathcal{T}$ at $(x,t)$, we see
$\mathcal{T}^{\lambda}d\mu^{\lambda}\rightharpoonup\mathcal{T}(x,t)d\mu^{0}.$
Then we obtain the stationary condition for the tangent flow,
$\int_{\mathbb{R}^{n}\times\mathbb{R}}\left[I-n\mathcal{T}(x,t)\right]\cdot
DYd\mu^{0}=0,\quad\forall Y\in
C_{0}^{\infty}(\mathbb{R}^{n}\times\mathbb{R},\mathbb{R}^{n}).$
In view of the form of $\mu^{0}$ in (9.12), substituting suitable $Y$ as test
functions into this identity we deduce that $\mathcal{T}(x,t)=I/n$. ∎
The following lemma is a rigorous statement that zero dimensional objects
should have zero mean curvature.
###### Lemma 9.10.
$\mathbf{H}=0$ $\mu$-a.e..
###### Proof.
We will show that, for a.e. $t\in(-1,1)$,
$\mathbf{H}=0\quad\mu_{t}-\mbox{everywhere}.$ (9.17)
For a.e. $t$,
$\int_{B_{1}}\left[|\nabla
u_{\infty}(x,t)|^{2}+|u_{\infty}(x,t)|^{p+1}+|\partial_{t}u_{\infty}(x,t)|^{2}\right]dx<+\infty,$
(9.18)
and by Lemma 4.6, $\mu_{t}$ is a well defined Radon measure.
By the form of $\mathcal{T}$, the stationary condition (4.10) now reads as
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\int_{Q_{1}}\left[\left(\frac{|\nabla
u_{\infty}|^{2}}{2}-\frac{|u_{\infty}|^{p+1}}{p+1}\right)\mbox{div}Y-DY(\nabla
u_{\infty},\nabla u_{\infty})+\partial_{t}u_{\infty}\nabla u_{\infty}\cdot
Y\right]$ (9.19) $\displaystyle+$ $\displaystyle\frac{1}{m}\int_{Q_{1}}\
\mathbf{H}\cdot Yd\mu,\quad\quad\forall Y\in
C_{0}^{\infty}(Q_{1},\mathbb{R}^{n}).$
Hence for a.e. $t$, we have the stationary condition
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\int_{B_{1}}\left[\left(\frac{|\nabla
u_{\infty}|^{2}}{2}-\frac{|u_{\infty}|^{p+1}}{p+1}\right)\mbox{div}X-DX(\nabla
u_{\infty},\nabla u_{\infty})+\partial_{t}u_{\infty}\nabla u_{\infty}\cdot
X\right]$ (9.20) $\displaystyle+$ $\displaystyle\frac{1}{m}\int_{B_{1}}\
\mathbf{H}\cdot Xd\mu_{t},\quad\quad\forall X\in
C_{0}^{\infty}(B_{1},\mathbb{R}^{n}).$
Now we claim that
Claim. $u_{\infty}(t)\in W^{2,2}_{loc}(B_{1})\cap L^{2p}_{loc}(B_{1})$.
Once we have this claim, by integration by parts, we can show that
$u_{\infty}(t)$ satisfies the stationary condition
$0=\int_{B_{1}}\left[\left(\frac{|\nabla
u_{\infty}|^{2}}{2}-\frac{|u_{\infty}|^{p+1}}{p+1}\right)\mbox{div}X-DX(\nabla
u_{\infty},\nabla u_{\infty})+\partial_{t}u_{\infty}\nabla u_{\infty}\cdot
X\right].$
Combining this identity with (9.20) we get
$\int_{B_{1}}\ \mathbf{H}\cdot Xd\mu_{t}=0,\quad\quad\forall X\in
C_{0}^{\infty}(B_{1},\mathbb{R}^{n}),$
from which (9.17) follows.
Proof of the claim. By Lemma 9.8, $\Sigma_{t}^{\ast}$ is isolated. Since
$u_{\infty}$ is smooth outside $\Sigma^{\ast}$, we only need to show that for
each $\xi\in\Sigma_{t}^{\ast}$, there exists a ball $B_{r}(\xi)$ such that
$u_{\infty}(t)\in W^{2,2}(B_{r}(\xi))\cap L^{2p}(B_{r}(\xi))$.
We take a sufficiently small $\sigma$ and choose this ball so that
$\int_{B_{r}(\xi)}|u_{\infty}(t)|^{\frac{2n}{n-2}}\leq\sigma.$
Take a standard cut-off function $\eta$ in $B_{r}(\xi)$ such that $\eta\equiv
1$ in $B_{r/2}(\xi)$. A direct calculation gives
$-\Delta\left(u_{\infty}(t)\eta\right)=|u_{\infty}(t)|^{\frac{4}{n-2}}u_{\infty}(t)\eta+f_{\eta},$
where $f_{\eta}\in L^{2}(B_{r}(\xi))$. Then by the $W^{2,2}$ estimate for the
Laplace operator and an application of the Hölder inequality, if $\sigma$ is
sufficiently small, we obtain
$\|u_{\infty}(t)\eta\|_{W^{2,2}(B_{r}(\xi))}\lesssim\|u_{\infty}(t)\|_{L^{2}(B_{r}(\xi))}+\|f_{\eta}\|_{L^{2}(B_{r}(\xi))}.$
The $L^{2p}$ estimate on $u_{\infty}(t)$ follows by combining this estimate
with the equation for $u_{\infty}(t)$. The proof of the claim is complete. ∎
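The absorption step hidden in the last estimate may be sketched as follows (a sketch under our assumption $n\geq 5$, so that $W^{2,2}\hookrightarrow L^{\frac{2n}{n-4}}$; lower dimensions are easier). For the nonlinear term $u_{\infty}(t)^{p}=|u_{\infty}(t)|^{\frac{4}{n-2}}u_{\infty}(t)$, the Hölder inequality and the smallness of $\sigma$ give

```latex
\big\||u_{\infty}(t)|^{\frac{4}{n-2}}u_{\infty}(t)\eta\big\|_{L^{2}(B_{r}(\xi))}
\leq\Big(\int_{B_{r}(\xi)}|u_{\infty}(t)|^{\frac{2n}{n-2}}\Big)^{\frac{2}{n}}
\big\|u_{\infty}(t)\eta\big\|_{L^{\frac{2n}{n-4}}(B_{r}(\xi))}
\leq C\sigma^{\frac{2}{n}}\big\|u_{\infty}(t)\eta\big\|_{W^{2,2}(B_{r}(\xi))},
```

so for $\sigma$ small the contribution of the nonlinearity can be absorbed into the left hand side of the $W^{2,2}$ estimate; this is precisely where the smallness of $\sigma$ is used.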
###### Corollary 9.11.
$u_{\infty}$ satisfies the stationary condition (2.3).
We do not know if the energy inequality (4.9) can be decoupled in the same
way.
Finally, for positive solutions, we note the following fact as a consequence
of (9.15):
* •
either $Q=0$, which implies that there is no atom at $(x,t)$,
* •
or $Q\geq\Lambda$, which implies that there is an atom in $\nu$ at $(x,t)$ and
its mass is at least $\Lambda$.
In conclusion, we get
###### Lemma 9.12.
If all solutions are positive, then the mass of each atom in $\nu$ is at least
$\Lambda$. Consequently, there are at most finitely many atoms in $\nu$.
The atoms of $\nu$ correspond to the singular points of $\Sigma$.
Microscopically, such an atom comes from scalings of a connecting orbit (or a
terrace of connecting orbits)
$\left\\{\begin{aligned} &\partial_{t}u-\Delta
u=u^{p},\quad\mbox{in}~{}~{}\mathbb{R}^{n}\times\mathbb{R},\\\
&\int_{\mathbb{R}^{n}}\left[\frac{1}{2}|\nabla
u|^{2}-\frac{1}{p+1}u^{p+1}\right]\Bigg{|}_{+\infty}^{-\infty}=Q.\end{aligned}\right.$
However, we do not know if $\Sigma$ is smooth outside this singular set.
## Part II Energy concentration with only one bubble
### 10\. Setting
In this part, $p=(n+2)/(n-2)$ is the Sobolev critical exponent, $u_{i}$
denotes a sequence of _smooth, positive solutions_ of (1.1) in $Q_{1}$,
satisfying the following three assumptions:
(II.a) Weak limit:
$u_{i}$ converges weakly to $u_{\infty}$ in $L^{p+1}(Q_{1})$, and $\nabla
u_{i}$ converges weakly to $\nabla u_{\infty}$ in $L^{2}(Q_{1})$. Here
$u_{\infty}$ is a _smooth solution_ of (1.1) in $Q_{1}$.
(II.b) Energy concentration set:
weakly as Radon measures
$\left\\{\begin{aligned} &|\nabla u_{i}|^{2}dxdt\rightharpoonup|\nabla
u_{\infty}|^{2}dxdt+\Lambda\delta_{0}\otimes dt,\\\
&u_{i}^{p+1}dxdt\rightharpoonup u_{\infty}^{p+1}dxdt+\Lambda\delta_{0}\otimes
dt.\end{aligned}\right.$
(II.c) Convergence of time derivatives:
as $i\to\infty$, $\partial_{t}u_{i}$ converges to $\partial_{t}u_{\infty}$
strongly in $L^{2}(Q_{1})$.
The assumption (II.b) says there is only one bubble. From the above
assumptions, it is also seen that $u_{i}$ converges to $u_{\infty}$ in
$C(-1,1;L^{2}(B_{1}))$.
The main result of this part is the following theorem, which can be viewed as
a weak form of Schoen’s Harnack inequality for the Yamabe problem (see [77],
[47]). As in the Yamabe problem, this will be used to prove that there is no
bubble towering.
###### Theorem 10.1.
Under the above assumptions, we must have $u_{\infty}\equiv 0$.
In the following, we also assume there exists a constant $L$ such that for all
$i$,
$\sup_{-1<t<1}\int_{B_{1}}|\partial_{t}u_{i}(x,t)|^{2}dx\leq L.$ (10.1)
This assumption is in fact a consequence of (II.a-II.c), see Section 6 in Part
III for the proof.
The proof of Theorem 10.1 uses mainly a reverse version of _the inner-outer
gluing mechanism_.
Figure 1. Inner-outer gluing mechanism
1. (1)
In Section 11, we describe the blow up profile of $u_{i}$, i.e. the form of
bubbles when the energy concentration phenomenon appears, see Proposition 11.1.
This is the main order term of $u_{i}$, and it provides us with the starting
point for the decomposition in the next step.
2. (2)
In Section 12, we take two decompositions of $u_{i}$: the first one is an
orthogonal decomposition, where we decompose $u_{i}$ into a standard bubble
(which is the main order term) and an error function (which is the next order
term); the second one is the inner-outer decomposition, where we divide the
error function into two further parts: the first one (the inner part) is
localized near the bubble, and the second one (the outer part) lives on the
original scale.
3. (3)
In Section 13, we establish an estimate for the inner problem, where we mainly
use the nondegeneracy of the bubble (see Section A). Roughly speaking, this
estimate reads as
$\mathcal{I}\leq A\mathcal{O}+\mbox{higher order terms from scaling parameters
etc.},$
where $\mathcal{I}$ is a quantity measuring the inner component, $\mathcal{O}$
is a quantity measuring the outer component, and $A$ is a constant.
4. (4)
In Section 14, we establish an estimate for the outer problem. This estimate
reads as
$\displaystyle\mathcal{O}$ $\displaystyle\leq$ $\displaystyle
B\mathcal{I}+\mbox{effect from initial-boundary value}$ $\displaystyle+$
$\displaystyle\mbox{higher order terms from scaling parameters etc.},$
where $B$ is a constant.
The _inner-outer gluing mechanism_ works thanks to the fact that
$AB<1.$
This follows from a fast decay estimate away from the bubble domains, where we
mainly rely on a Gaussian bound on heat kernels associated to a parabolic
operator with a small Hardy term, see Moschini-Tesei [71].
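Schematically (writing $h_{1},h_{2}$ for the higher order terms and $b$ for the initial-boundary contribution; these symbols are ours, introduced only for this sketch), the two estimates combine as a contraction:

```latex
\mathcal{I}\leq A\mathcal{O}+h_{1},\quad
\mathcal{O}\leq B\mathcal{I}+b+h_{2}
\ \Longrightarrow\
\mathcal{I}\leq\frac{A(b+h_{2})+h_{1}}{1-AB},\quad
\mathcal{O}\leq\frac{b+h_{2}+Bh_{1}}{1-AB},
```

and it is exactly the condition $AB<1$ that allows the two inequalities to be closed simultaneously.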
5. (5)
In Section 15, we combine these two estimates on the inner and outer problems
to establish a Harnack inequality for the scaling parameter. This gives a
uniform in time control on the height of bubbles.
6. (6)
In Section 16, by the estimate in Section 15, we improve the estimates on
inner and outer problems.
7. (7)
In Section 17, we further improve the estimates on the error function to
optimal ones, such as a uniform $L^{\infty}$ estimate, a first order gradient
Hölder estimate and a Schauder estimate.
8. (8)
In Section 18, with the help of these estimates on the error function, we are
able to linearize the Pohozaev identity. Indeed, a Pohozaev identity holds for
$u$, and $u$ is approximated at main order by the bubble, which also satisfies
a Pohozaev identity; hence the next order term in the Pohozaev identity of $u$
gives further information.
9. (9)
In Section 19, we use this linearization of Pohozaev identity to establish a
weak form of Schoen’s Harnack inequality, which also finishes the proof of
Theorem 10.1.
10. (10)
In Section 20, we use this weak form of Schoen’s Harnack inequality to exclude
a special case of bubble clustering, where we use the same idea of
constructing Green functions as in the study of the Yamabe problem. This
result will be used in Section 12 of Part IV.
#### 10.1. List of notations and conventions used in this part
* •
Given $\theta\in[0,1)$, $\alpha\geq 0$ and $k\in\mathbb{N}$, let
$\mathcal{X}_{\alpha}^{k+\theta}$ be the space of functions $\phi\in
C^{k,\theta}(\mathbb{R}^{n})$ with the weighted $C^{k,\theta}$ norm
$\|\phi\|_{\alpha,k+\theta}:=\sup_{x\in\mathbb{R}^{n}}\sum_{\ell=0}^{k-1}\left(1+|x|\right)^{\ell+\alpha}|\nabla^{\ell}\phi(x)|+\sup_{x\in\mathbb{R}^{n}}\left(1+|x|\right)^{k+\alpha}\|\nabla^{k}\phi\|_{C^{\theta}(B_{1}(x))}.$
If $\theta=0$ and $k=0$, this space is written as $\mathcal{X}_{\alpha}$.
* •
Throughout this part, an even function $\eta\in C_{0}^{\infty}(-2,2)$ will be
fixed, which satisfies $\eta\equiv 1$ in $(-1,1)$ and
$|\eta^{\prime}|+|\eta^{\prime\prime}|\leq 10$. For any $R>0$, denote
$\eta_{R}(y):=\eta\left(\frac{|y|}{R}\right).$
* •
$\alpha:=\frac{n-2}{2}$.
* •
$\bar{p}:=\min\\{p,2\\}$.
* •
Two large constants $K\gg L\gg 1$ will be chosen in Section 12.
* •
In this part, unless otherwise stated, it is always assumed that $n\geq 7$,
which implies $\alpha>2$.
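For orientation, here is an example of membership in the weighted spaces $\mathcal{X}_{\alpha}^{k+\theta}$ defined above (with the standard Aubin–Talenti normalization; the precise constant plays no role here). The bubble

```latex
W(x)=\big(n(n-2)\big)^{\frac{n-2}{4}}\left(1+|x|^{2}\right)^{-\frac{n-2}{2}}
\quad\mbox{satisfies}\quad
|\nabla^{\ell}W(x)|\lesssim\left(1+|x|\right)^{-(n-2)-\ell},
```

so $\|W\|_{n-2,k+\theta}<+\infty$, i.e. $W\in\mathcal{X}_{n-2}^{k+\theta}$ for every $k$ and $\theta$.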
### 11\. Blow up profile
In this section we only assume $n\geq 3$. Here we prove
###### Proposition 11.1 (Blow up profile).
For any $t\in[-81/100,81/100]$, there exists a unique maximum point of
$u_{i}(\cdot,t)$ in the interior of $B_{1}(0)$. Denote this point by
$\xi_{i}^{\ast}(t)$ and let
$\lambda_{i}^{\ast}(t):=u_{i}(\xi_{i}^{\ast}(t),t)^{-\frac{2}{n-2}}$.
As $i\to+\infty$,
$\lambda_{i}^{\ast}(t)\to 0,\quad\xi_{i}^{\ast}(t)\to 0,\quad\mbox{uniformly
in }C([-81/100,81/100]),$
and the function
$u_{i}^{t}(y,s):=\lambda_{i}^{\ast}(t)^{\frac{n-2}{2}}u_{i}\left(\xi_{i}^{\ast}(t)+\lambda_{i}^{\ast}(t)y,t+\lambda_{i}^{\ast}(t)^{2}s\right),$
converges to $W(y)$ in $C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
Let us first present some preliminary results, which are needed for the proof
of this proposition. The first one is a direct consequence of Theorem 3.9
(with the help of (II.b)).
###### Corollary 11.2.
As $i\to+\infty$, $u_{i}$ are uniformly bounded in
$C^{\infty}_{loc}\left(Q_{1}\setminus\Sigma\right)$.
We also note that there is no concentration for $\partial_{t}u_{i}$.
###### Lemma 11.3.
For any $\sigma>0$, there exists an $r(\sigma)>0$ such that for any $(x,t)\in
Q_{9/10}$,
$\limsup_{i\to+\infty}\int_{Q_{r(\sigma)}(x,t)}|\partial_{t}u_{i}|^{2}<\sigma.$
###### Proof.
Assume on the contrary that there exist a $\sigma>0$, a sequence of points
$(x_{i},t_{i})\in Q_{9/10}$ and a sequence of $r_{i}\to 0$ such that
$\int_{Q_{r_{i}}(x_{i},t_{i})}|\partial_{t}u_{i}|^{2}\geq\sigma.$ (11.1)
Without loss of generality, assume $(x_{i},t_{i})\to(x_{\infty},t_{\infty})$.
For any $r>0$ fixed, by (II.c), if $i$ is large enough,
$\int_{Q_{r}(x_{\infty},t_{\infty})}|\partial_{t}u_{\infty}|^{2}\geq\int_{Q_{r_{i}}(x_{i},t_{i})}|\partial_{t}u_{i}|^{2}\geq\sigma.$
This is a contradiction with the fact that $\partial_{t}u_{\infty}\in
L^{2}(Q_{1})$. ∎
Next is a result about the energy concentration behavior of each time slice
$u_{i}(t)$.
###### Lemma 11.4.
For any $\delta>0$, there exists an $r(\delta)\in(0,1)$ such that for any
$t\in[-81/100,81/100]$,
$\limsup_{i\to+\infty}\left(\left|\int_{B_{r(\delta)}}|\nabla
u_{i}(x,t)|^{2}dx-\Lambda\right|+\left|\int_{B_{r(\delta)}}u_{i}(x,t)^{p+1}dx-\Lambda\right|\right)<\delta.$
(11.2)
###### Proof.
Take a sufficiently small $\sigma>0$, and then choose $r(\sigma)$ according to
Lemma 11.3. Without loss of generality, assume $t=0$. By the scaling
invariance of energy, in the following we need only to prove the corresponding
result for the rescaling of $u_{i}$,
$\widetilde{u}_{i}(x,t):=r(\sigma)^{\frac{n-2}{2}}u_{i}\left(r(\sigma)x,r(\sigma)^{2}t\right).$
However, in order not to complicate notations, we still use $u_{i}$ to denote
$\widetilde{u}_{i}$. In particular, now $u_{i}$ also satisfies
$\int_{Q_{1}}|\partial_{t}u_{i}|^{2}\leq\sigma.$ (11.3)
For each $r\in(0,1)$, recall that $\eta_{r}(x):=\eta(|x|/r)$ is a standard
cut-off function in $B_{2r}$. Because $u_{i}$ is smooth, we have the standard
localized energy identity
$\displaystyle\frac{d}{dt}\int_{B_{1}}\left[\frac{1}{2}|\nabla
u_{i}(x,t)|^{2}-\frac{1}{p+1}u_{i}(x,t)^{p+1}\right]\eta_{r}(x)^{2}dx$
$\displaystyle=$
$\displaystyle-\int_{B_{1}}|\partial_{t}u_{i}(x,t)|^{2}\eta_{r}(x)^{2}dx+2\int_{B_{1}}\eta_{r}(x)\partial_{t}u_{i}(x,t)\nabla
u_{i}(x,t)\cdot\nabla\eta_{r}(x)dx.$
Integrating this identity and applying Cauchy-Schwarz inequality, we get
$\displaystyle\mbox{osc}_{t\in(-1,1)}\int_{B_{1}}\left[\frac{1}{2}|\nabla
u_{i}(x,t)|^{2}-\frac{1}{p+1}u_{i}(x,t)^{p+1}\right]\eta_{r}(x)^{2}$
$\displaystyle\lesssim$
$\displaystyle\left(\int_{Q_{1}}|\partial_{t}u_{i}|^{2}\right)^{1/2}$ (11.5)
$\displaystyle\lesssim$ $\displaystyle\sigma^{1/2},$
where we have used (11.3).
By (11.5), (II.b) and the smoothness of $u_{\infty}$, we get
$\displaystyle\int_{B_{1}}\left[\frac{1}{2}|\nabla
u_{i}(x,t)|^{2}-\frac{1}{p+1}u_{i}(x,t)^{p+1}\right]\eta_{r}(x)^{2}dx$
$\displaystyle=$
$\displaystyle\frac{\Lambda}{n}+O\left(\sigma^{1/2}\right)+O\left(r^{n}\right)+o_{i}(1),\quad\quad\quad\forall
t\in(-1,1).$
Next, for each $t\in(-1,1)$, multiplying (1.1) by $u_{i}(t)\eta_{r}^{2}$ and
integrating on $B_{1}$, we get
$\displaystyle\int_{B_{1}}\left[|\nabla
u_{i}(x,t)|^{2}-u_{i}(x,t)^{p+1}\right]\eta_{r}(x)^{2}dx$ $\displaystyle=$
$\displaystyle\int_{B_{1}}\left[\partial_{t}u_{i}(x,t)u_{i}(x,t)\eta_{r}(x)^{2}-2\eta_{r}(x)u_{i}(x,t)\nabla\eta_{r}(x)\cdot\nabla
u_{i}(x,t)\right]dx.$
Concerning the right hand side of this equation, the following estimates hold.
* •
By Cauchy-Schwarz inequality and (10.1), we have
$\displaystyle\left|\int_{B_{1}}\partial_{t}u_{i}(x,t)u_{i}(x,t)\eta_{r}(x)^{2}dx\right|$
$\displaystyle\lesssim$
$\displaystyle\left(\int_{B_{r}}\partial_{t}u_{i}(x,t)^{2}dx\right)^{\frac{1}{2}}\left(\int_{B_{r}}u_{i}(x,t)^{2}dx\right)^{\frac{1}{2}}$
$\displaystyle\lesssim$ $\displaystyle
L^{1/2}\left[\left(\int_{B_{r}}u_{\infty}(x,t)^{2}dx\right)^{\frac{1}{2}}+o_{i}(1)\right],$
where in the second inequality we have used the Lipschitz assumption (10.1)
and the uniform convergence of $u_{i}(t)$ in $C_{loc}(-1,1;L^{2}(B_{1}))$.
* •
Still by the uniform convergence of $u_{i}(t)$ in
$C_{loc}(-1,1;L^{2}(B_{1}))$, we obtain
$\displaystyle\int_{B_{1}}\eta_{r}(x)u_{i}(x,t)\nabla\eta_{r}(x)\cdot\nabla
u_{i}(x,t)dx$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\int_{B_{1}}u_{i}(x,t)^{2}\Delta\eta_{r}(x)^{2}dx$
$\displaystyle=$
$\displaystyle-\frac{1}{2}\int_{B_{1}}u_{\infty}(x,t)^{2}\Delta\eta_{r}(x)^{2}dx+o_{i}(1).$
Substituting these two estimates into (11) and noting the smoothness of
$u_{\infty}$, we get
$\int_{B_{1}}\left[|\nabla
u_{i}(x,t)|^{2}-u_{i}(x,t)^{p+1}\right]\eta_{r}(x)^{2}dx=o_{r}(1)+o_{i}(1).$
(11.8)
Plugging this relation into (11), we obtain
$\left\\{\begin{aligned} &\int_{B_{1}}|\nabla
u_{i}(x,t)|^{2}\eta_{r}(x)^{2}=\Lambda+O\left(\sigma^{1/2}\right)+o_{r}(1)+o_{i}(1),\\\
&\int_{B_{1}}u_{i}(x,t)^{p+1}\eta_{r}(x)^{2}=\Lambda+O\left(\sigma^{1/2}\right)+o_{r}(1)+o_{i}(1).\end{aligned}\right.$
(11.9)
Then (11.2) follows by noting that $\eta_{r/2}\leq\chi_{B_{r}}\leq\eta_{r}$ and
choosing a suitable $r$. ∎
Some consequences follow directly from this lemma.
1. (1)
There exists a constant $C(u_{\infty},L)$ such that
$\int_{B_{4/5}}\left[|\nabla u_{i}(t)|^{2}+u_{i}(t)^{p+1}\right]\leq
C(u_{\infty},L),\quad\forall t\in[-81/100,81/100].$ (11.10)
2. (2)
By the convergence of $u_{i}(t)$ in $C^{\infty}_{loc}(B_{1}\setminus\\{0\\})$,
we deduce that for any $t\in[-81/100,81/100]$,
$|\nabla u_{i}(t)|^{2}dx\rightharpoonup|\nabla
u_{\infty}|^{2}dx+\Lambda\delta_{0},\quad u_{i}(t)^{p+1}dx\rightharpoonup
u_{\infty}^{p+1}dx+\Lambda\delta_{0}.$ (11.11)
Equation (11.11) implies that for any $t\in[-81/100,81/100]$,
$\sup_{B_{1}}u_{i}(x,t)\to+\infty\quad\mbox{as}\quad i\to+\infty.$
Because $u_{i}(t)$ are bounded in $C_{loc}(B_{1}\setminus\\{0\\})$, this
supremum must be attained at an interior maximum point, and it is attained
near $0$. Take one such maximum
point $\xi_{i}^{\ast}(t)$, and define $\lambda_{i}^{\ast}(t)$, $u_{i}^{t}$ as
in the statement of Proposition 11.1. By definition,
$u_{i}^{t}(0,0)=\max_{y\in B_{\lambda_{i}^{\ast}(t)^{-1}/2}}u_{i}^{t}(y,0)=1.$
(11.12)
Scaling (10.1) gives
$\lim_{i\to+\infty}\int_{B_{R}}\partial_{t}u_{i}^{t}(y,s)^{2}dy=0,\quad\mbox{for
any }R>0,~{}~{}~{}s\in\mathbb{R}.$ (11.13)
By scaling (11.2), we see for any $R>0$ and $s\in\mathbb{R}$,
$\left\\{\begin{aligned} &\limsup_{i\to+\infty}\int_{B_{R}}|\nabla
u_{i}^{t}(y,s)|^{2}dy\leq\Lambda,\\\
&\limsup_{i\to+\infty}\int_{B_{R}}u_{i}^{t}(y,s)^{p+1}dy\leq\Lambda.\end{aligned}\right.$
(11.14)
Then by Struwe’s global compactness theorem (see [82]), we deduce that for any
$s\in\mathbb{R}$, $u_{i}^{t}(\cdot,s)$ converges strongly in
$W^{1,2}_{loc}(\mathbb{R}^{n})$ and $L^{p+1}_{loc}(\mathbb{R}^{n})$. Denote
its limit by $u_{\infty}^{t}$. By (11.13) and (11.14), $u_{\infty}^{t}$ is a
nonnegative solution of (1.9). In view of its $H^{1}$ regularity, it must be
smooth. Hence for any $(y,s)\in\mathbb{R}^{n}\times\mathbb{R}$, there exists
an $r>0$ such that
$\int_{Q_{r}^{-}(y,s)}\left[|\nabla
u_{\infty}^{t}|^{2}+\left(u_{\infty}^{t}\right)^{p+1}\right]\leq\frac{\varepsilon_{\ast}}{2}r^{2}.$
By the above strong convergence of $u_{i}^{t}$, we get
$\limsup_{i\to+\infty}\int_{Q_{r}^{-}(y,s)}\left[|\nabla
u_{i}^{t}|^{2}+\left(u_{i}^{t}\right)^{p+1}\right]<\varepsilon_{\ast}r^{2}.$
Therefore Theorem 3.7 is applicable, which implies that $u_{i}^{t}$ are
uniformly bounded in $C^{\infty}(Q_{\delta_{\ast}r}(y,s))$. In conclusion,
$u_{i}^{t}$ are uniformly bounded in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$. Hence it converges to
$u_{\infty}^{t}$ in $C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$. Then
we can let $i\to+\infty$ in (11.12), which leads to
$u_{\infty}^{t}(0)=\max_{y\in\mathbb{R}^{n}}u_{\infty}^{t}(y)=1.$
By the Liouville theorem of Caffarelli-Gidas-Spruck [8], we get
$u_{\infty}^{t}\equiv W\quad\mbox{in }\mathbb{R}^{n}.$
Next we give a scaling invariant estimate for $u_{i}$ away from
$\xi_{i}^{\ast}(t)$.
###### Lemma 11.5.
For any $R$ large enough, there exist two constants $\sigma(R)\ll 1$ and
$C(R)>0$ such that, outside $B_{R\lambda_{i}^{\ast}(t)}(\xi_{i}^{\ast}(t))$ we
have
$\left\\{\begin{aligned}
&u_{i}(x,t)\leq\sigma(R)|x-\xi_{i}^{\ast}(t)|^{-\frac{n-2}{2}}+C(R),\\\
&|\nabla u_{i}(x,t)|\leq\sigma(R)|x-\xi_{i}^{\ast}(t)|^{-\frac{n}{2}}+C(R),\\\
&|\nabla^{2}u_{i}(x,t)|+|\partial_{t}u_{i}(x,t)|\leq\sigma(R)|x-\xi_{i}^{\ast}(t)|^{-\frac{n+2}{2}}+C(R).\end{aligned}\right.$
(11.15)
###### Proof.
For any $\delta>0$, by the smooth convergence of $u_{i}^{t}$ obtained above,
there exists an $R(\delta)\gg 1$ such that for any $t\in[-81/100,81/100]$,
$\left\\{\begin{aligned}
&\int_{B_{R(\delta)\lambda_{i}^{\ast}(t)}(\xi_{i}^{\ast}(t))}|\nabla
u_{i}(x,t)|^{2}\geq\Lambda-\delta/4,\\\
&\int_{B_{R(\delta)\lambda_{i}^{\ast}(t)}(\xi_{i}^{\ast}(t))}u_{i}(x,t)^{p+1}\geq\Lambda-\delta/4.\end{aligned}\right.$
(11.16)
By Lemma 11.4, there exists an $r(\delta)\ll 1$ such that for any
$t\in[-81/100,81/100]$,
$\left\\{\begin{aligned} &\int_{B_{r(\delta)}}|\nabla
u_{i}(x,t)|^{2}\leq\Lambda+\delta/4,\\\
&\int_{B_{r(\delta)}}u_{i}(x,t)^{p+1}\leq\Lambda+\delta/4.\end{aligned}\right.$
(11.17)
Combining (11.16) with (11.17), we get
$\sup_{|t|\leq 81/100}\int_{B_{r(\delta)}\setminus
B_{R(\delta)\lambda_{i}^{\ast}(t)}(\xi_{i}^{\ast}(t))}\left[|\nabla
u_{i}(x,t)|^{2}+u_{i}(x,t)^{p+1}\right]dx\leq\delta.$ (11.18)
By Corollary 11.2, there exists a constant $C(\delta)$ such that
$u_{i}(x,t)+|\nabla
u_{i}(x,t)|+|\nabla^{2}u_{i}(x,t)|+|\partial_{t}u_{i}(x,t)|\leq
C(\delta),\quad\mbox{if}~{}x\notin B_{r(\delta)}(\xi_{i}^{\ast}(t)).$
It remains to prove (11.15) when $x\in B_{r(\delta)}\setminus
B_{R(\delta)\lambda_{i}^{\ast}(t)}(\xi_{i}^{\ast}(t))$. We argue by
contradiction, so assume there exists a sequence $(x_{i},t_{i})$, with
$t_{i}\in[-81/100,81/100]$ and $x_{i}\in B_{r(\delta)}(0)\setminus
B_{R(\delta)\lambda_{i}^{\ast}(t_{i})}(\xi_{i}^{\ast}(t_{i}))$, but for some
$\sigma$ (to be determined below),
$u_{i}(x_{i},t_{i})\geq\sigma|x_{i}-\xi_{i}^{\ast}(t_{i})|^{-\frac{n-2}{2}}.$
(11.19)
Let $r_{i}:=|x_{i}-\xi_{i}^{\ast}(t_{i})|$ and
$\widetilde{u}_{i}(y,s):=r_{i}^{\frac{n-2}{2}}u_{i}\left(\xi_{i}^{\ast}(t_{i})+r_{i}y,t_{i}+r_{i}^{2}s\right).$
Denote
$\widetilde{x}_{i}:=\frac{x_{i}-\xi_{i}^{\ast}(t_{i})}{r_{i}}.$
It lies on $\partial B_{1}$. Assume it converges to a limit point
$\widetilde{x}_{\infty}\in\partial B_{1}$.
Scaling (11.16) and (11.2) leads to
$\left\\{\begin{aligned}
&\int_{B_{R(\delta)^{-1}}}|\nabla\widetilde{u}_{i}(y,0)|^{2}\geq\Lambda-\delta/4,\\\
&\int_{B_{R(\delta)^{-1}}}\widetilde{u}_{i}(y,0)^{p+1}\geq\Lambda-\delta/4\end{aligned}\right.$
(11.20)
and for any $R>0$ and $s\in\mathbb{R}$,
$\left\\{\begin{aligned}
&\int_{B_{R}(0)}|\nabla\widetilde{u}_{i}(y,s)|^{2}\leq\Lambda+\delta/4,\\\
&\int_{B_{R}(0)}\widetilde{u}_{i}(y,s)^{p+1}\leq\Lambda+\delta/4.\end{aligned}\right.$
(11.21)
Then by noting that for any $s$, $\partial_{s}\widetilde{u}_{i}(\cdot,s)$
converges to $0$ strongly in $L^{2}_{loc}(\mathbb{R}^{n})$, we can apply Lemma
11.4 to find a $\rho(\delta)>0$ such that for any $s\in[-1,1]$,
$\left|\int_{B_{\rho(\delta)}}|\nabla\widetilde{u}_{i}(y,s)|^{2}dy-\Lambda\right|+\left|\int_{B_{\rho(\delta)}}\widetilde{u}_{i}(y,s)^{p+1}dy-\Lambda\right|<\delta.$
Combining this estimate with (11.21), we get
$\int_{-1}^{0}\int_{B_{1/2}(\widetilde{x}_{i})}\left(|\nabla\widetilde{u}_{i}|^{2}+\widetilde{u}_{i}^{p+1}\right)\leq
C\delta.$
By choosing $C\delta<\varepsilon_{\ast}$, we can apply Theorem 3.7 to deduce
that
$\widetilde{u}_{i}(\widetilde{x}_{i},0)+|\nabla\widetilde{u}_{i}(\widetilde{x}_{i},0)|+|\nabla^{2}\widetilde{u}_{i}(\widetilde{x}_{i},0)|+|\partial_{s}\widetilde{u}_{i}(\widetilde{x}_{i},0)|\leq
c(\delta),$
where $c(\delta)$ is a small constant depending on $\delta$. Scaling back to
$u_{i}$ we get a contradiction with (11.19). In other words, (11.15) must hold
in $B_{r(\delta)}\setminus
B_{R(\delta)\lambda_{i}^{\ast}(t)}(\xi_{i}^{\ast}(t))$. ∎
Combining Proposition 11.1 and Lemma 11.5, we get
###### Corollary 11.6.
For each $t\in[-81/100,81/100]$, $\xi_{i}^{\ast}(t)$ is the unique maximum
point of $u_{i}(t)$ in $B_{1}$.
This completes the proof of Proposition 11.1.
### 12\. Orthogonal and inner-outer decompositions
For simplicity of notations, from here to Section 16 we are concerned with a
fixed solution $u_{i}$ with sufficiently large index $i$, and denote it by
$u$. In this section we define the orthogonal decomposition and inner-outer
decomposition for $u$.
#### 12.1. Orthogonal decomposition
From now on, until Section 19, a constant $K\gg 1$ will be fixed. In the
following we will use the notations for bubbles and for kernels of their
linearized equations, $W_{\xi,\lambda}$ and $Z_{i,\xi,\lambda}$, see Appendix
A.
###### Proposition 12.1 (Orthogonal condition).
For any $t\in[-81/100,81/100]$, there exists a unique
$(a(t),\xi(t),\lambda(t))\in\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{+}$
with
$\frac{|\xi(t)-\xi^{\ast}(t)|}{\lambda(t)}+\Big{|}\frac{\lambda(t)}{\lambda^{\ast}(t)}-1\Big{|}+\Big{|}\frac{a(t)}{\lambda(t)}\Big{|}=o(1),$
(12.1)
such that for each $i=0,\cdots,n+1$,
$\int_{B_{1}}\left[u(x,t)-W_{\xi(t),\lambda(t)}(x)-a(t)Z_{0,\xi(t),\lambda(t)}(x)\right]\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)dx=0.$
(12.2)
###### Proof.
For these $t$, set
$\widetilde{u}(y):=\lambda^{\ast}(t)^{\frac{n-2}{2}}u(\xi^{\ast}(t)+\lambda^{\ast}(t)y,t).$
Define a smooth map $F$ from $B_{1/2}^{n+2}\subset\mathbb{R}^{n+2}$ into
$\mathbb{R}^{n+2}$ as
$F(\xi,\lambda,a):=\left(\int_{\mathbb{R}^{n}}\left[\widetilde{u}(y)-W_{\xi,1+\lambda}(y)-aZ_{0,\xi,1+\lambda}(y)\right]\eta_{K}\left(\frac{y-\xi}{1+\lambda}\right)Z_{i,\xi,1+\lambda}(y)dy\right)_{i=0}^{n+1}.$
The task is reduced to finding a solution of $F(\xi,\lambda,a)=0$.
By Proposition 11.1, $|F(0,0,0)|\ll 1$. By a direct calculation and applying
Proposition 11.1 once again, we find a fixed small constant $\rho_{\ast}$ such
that, for any $(\xi,\lambda,a)\in B_{\rho_{\ast}}^{n+2}$, $DF(\xi,\lambda,a)$
is diagonally dominant, and hence invertible. Then the inverse function
theorem implies the existence and uniqueness of a solution of $F=0$ in
$B_{\rho_{\ast}}^{n+2}$. ∎
In the above, in order to show that $DF(\xi,\lambda,a)$ is diagonally
dominant, we need $n\geq 5$ to make sure $Z_{n+1}\in L^{2}(\mathbb{R}^{n})$,
and we also need the $n\geq 7$ assumption because at first we only know that
$u$ decays like $|y|^{-(n-2)/2}$. (This term appears in the integral
containing $\nabla\eta$.)
For later use, we record a Lipschitz bound on the parameters
$(a(t),\xi(t),\lambda(t))$.
###### Lemma 12.2.
For any $t\in[-81/100,81/100]$,
$\left|a^{\prime}(t)\right|+\left|\xi^{\prime}(t)\right|+\left|\lambda^{\prime}(t)\right|\lesssim\left(\int_{B_{1}}\partial_{t}u(x,t)^{2}dx\right)^{1/2}.$
(12.3)
###### Proof.
For each $i=1,\cdots,n$, because $Z_{i}\eta_{K}$ is orthogonal to $Z_{0}$ in
$L^{2}(\mathbb{R}^{n})$, the orthogonal condition (12.2) reads as
$\int_{B_{1}}\left[u(x,t)-W_{\xi(t),\lambda(t)}(x)\right]\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)dx=0.$
Differentiating this equality in $t$, by the $L^{2}$ orthogonal relations
between different $Z_{i}$, we obtain
$\displaystyle\left[\int_{B_{1}}\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)^{2}dx\right]\xi_{i}^{\prime}(t)$
$\displaystyle=$
$\displaystyle\int_{B_{1}}\partial_{t}u(x,t)\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)dx$
$\displaystyle+$
$\displaystyle\int_{B_{1}}\left[u(x,t)-W_{\xi(t),\lambda(t)}(x)\right]\frac{\partial}{\partial
t}\left[\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)\right]dx.$
First, by the definition of $Z_{i,\xi(t),\lambda(t)}$, we have
$\int_{B_{1}}\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)^{2}dx=\int_{\mathbb{R}^{n}}\eta_{K}(y)Z_{i}(y)^{2}dy\geq
c.$
Next, by the Cauchy-Schwarz inequality and (10.1), we get
$\displaystyle\left|\int_{B_{1}}\partial_{t}u(x,t)\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)dx\right|$
$\displaystyle\leq$
$\displaystyle\left(\int_{B_{1}}\partial_{t}u(x,t)^{2}dx\right)^{1/2}\left(\int_{B_{1}}\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)^{2}Z_{i,\xi(t),\lambda(t)}(x)^{2}dx\right)^{1/2}$
$\displaystyle\lesssim$
$\displaystyle\left(\int_{B_{1}}\partial_{t}u(x,t)^{2}dx\right)^{1/2}.$
Finally, because in $B_{K\lambda(t)}(\xi(t))$,
$\left\\{\begin{aligned}
&\left|u(x,t)-W_{\xi(t),\lambda(t)}(x)\right|=o(1)\lambda(t)^{-\frac{n-2}{2}}\left(1+\frac{|x-\xi(t)|}{\lambda(t)}\right)^{-\frac{n-2}{2}},\\\
&\left|\frac{\partial}{\partial
t}\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)\right|\lesssim\frac{\left|\xi^{\prime}(t)\right|+\left|\lambda^{\prime}(t)\right|}{\lambda(t)^{\frac{n+2}{2}}}\left(1+\frac{|x-\xi(t)|}{\lambda(t)}\right)^{1-n},\\\
&\left|\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)\frac{\partial}{\partial
t}Z_{i,\xi(t),\lambda(t)}(x)\right|\lesssim\frac{\left|\xi^{\prime}(t)\right|+\left|\lambda^{\prime}(t)\right|}{\lambda(t)^{\frac{n+2}{2}}}\left(1+\frac{|x-\xi(t)|}{\lambda(t)}\right)^{1-n},\end{aligned}\right.$
we get
$\displaystyle\left|\int_{B_{1}}\left[u(x,t)-W_{\xi(t),\lambda(t)}(x)\right]\frac{\partial}{\partial
t}\left[\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)\right]dx\right|$
$\displaystyle\ll$
$\displaystyle\lambda(t)^{-n}\left(\left|\xi^{\prime}(t)\right|+\left|\lambda^{\prime}(t)\right|\right)\int_{B_{1}}\left(1+\frac{|x-\xi(t)|}{\lambda(t)}\right)^{2-\frac{3n}{2}}$
$\displaystyle\lesssim$
$\displaystyle\left|\xi^{\prime}(t)\right|+\left|\lambda^{\prime}(t)\right|.$
Plugging these three estimates into the differentiated identity above, we obtain
$|\xi^{\prime}(t)|\lesssim\left(\int_{B_{1}}\partial_{t}u(x,t)^{2}dx\right)^{1/2}+o\left(\left|\xi^{\prime}(t)\right|+\left|\lambda^{\prime}(t)\right|\right).$
(12.5)
In the same way, we get
$|\lambda^{\prime}(t)|\lesssim\left(\int_{B_{1}}\partial_{t}u(x,t)^{2}dx\right)^{1/2}+o\left(\left|\xi^{\prime}(t)\right|+\left|\lambda^{\prime}(t)\right|+\left|a^{\prime}(t)\right|\right)$
(12.6)
and
$|a^{\prime}(t)|\lesssim\left(\int_{B_{1}}\partial_{t}u(x,t)^{2}dx\right)^{1/2}+o\left(\left|\xi^{\prime}(t)\right|+\left|\lambda^{\prime}(t)\right|+\left|a^{\prime}(t)\right|\right).$
(12.7)
Adding (12.5)-(12.7) together and absorbing the $o(\cdot)$ terms into the left hand side, we get (12.3). ∎
#### 12.2. Inner-outer decomposition
In the following we denote the error function
$\phi(x,t):=u(x,t)-W_{\xi(t),\lambda(t)}(x)-a(t)Z_{0,\xi(t),\lambda(t)}(x),$
and set
$W_{\ast}(x,t):=W_{\xi(t),\lambda(t)}(x),\quad
Z_{i,\ast}(x,t):=Z_{i,\xi(t),\lambda(t)}(x)$
and $Z_{\ast}:=(Z_{0,\ast},Z_{1,\ast},\cdots,Z_{n+1,\ast})$.
Combining Lemma 11.5 and Proposition 12.1, we obtain
###### Proposition 12.3.
In $B_{1}$,
$\left\\{\begin{aligned}
&|\phi(x,t)|=o\left(\left(\lambda(t)+|x-\xi_{\ast}(t)|\right)^{-\frac{n-2}{2}}\right)+O(1),\\\
&|\nabla\phi(x,t)|=o\left(\left(\lambda(t)+|x-\xi_{\ast}(t)|\right)^{-\frac{n}{2}}\right)+O(1),\\\
&|\nabla^{2}\phi(x,t)|+|\phi_{t}(x,t)|=o\left(\left(\lambda(t)+|x-\xi_{\ast}(t)|\right)^{-\frac{n+2}{2}}\right)+O(1).\end{aligned}\right.$
The error function $\phi$ satisfies
$\partial_{t}\phi-\Delta\phi=pW_{\ast}^{p-1}\phi+\mathcal{N}+\left(-a^{\prime}+\mu_{0}\frac{a}{\lambda^{2}},\xi^{\prime},\lambda^{\prime}\right)\cdot
Z_{\ast}-a\partial_{t}Z_{0,\ast},$ (12.8)
where ${}^{\prime}$ denotes differentiation in $t$, and the nonlinear term
is defined by
$\mathcal{N}:=\left(W_{\ast}+\phi+aZ_{0,\ast}\right)^{p}-W_{\ast}^{p}-pW_{\ast}^{p-1}\left(\phi+aZ_{0,\ast}\right).$
Now take a further decomposition of $\phi$, _the inner-outer decomposition_ ,
as follows. Keep $K$ as the large constant used in Proposition 12.1. Take
another constant $L$ satisfying $1\ll L\ll K$. Denote
$\eta_{in}(x,t):=\eta\left(\frac{x-\xi(t)}{K\lambda(t)}\right),\quad\eta_{out}(x,t):=\eta\left(\frac{x-\xi(t)}{L\lambda(t)}\right).$
(12.9)
Set
$\phi_{in}(x,t):=\phi(x,t)\eta_{in}(x,t),\quad\phi_{out}(x,t):=\phi(x,t)\left[1-\eta_{out}(x,t)\right].$
For later use, we introduce two quantities:
$\mathcal{I}(t):=\left[L\lambda(t)\right]^{\alpha}\sup_{B_{2L\lambda(t)}(\xi(t))\setminus
B_{L\lambda(t)}(\xi(t))}|\phi|+\left[L\lambda(t)\right]^{1+\alpha}\sup_{B_{2L\lambda(t)}(\xi(t))\setminus
B_{L\lambda(t)}(\xi(t))}|\nabla\phi|$
and
$\mathcal{O}(t):=\left[K\lambda(t)\right]^{\alpha}\sup_{B_{2K\lambda(t)}(\xi(t))\setminus
B_{K\lambda(t)}(\xi(t))}|\phi|+\left[K\lambda(t)\right]^{1+\alpha}\sup_{B_{2K\lambda(t)}(\xi(t))\setminus
B_{K\lambda(t)}(\xi(t))}|\nabla\phi|.$
By Proposition 12.3, for any $t\in[-81/100,81/100]$,
$\mathcal{I}(t)+\mathcal{O}(t)\ll 1.$
Starting from this smallness, we will improve it to an explicit bound in the
following sections.
### 13\. Inner problem
In this section we give a $C^{1,\theta}$ estimate on the inner component
$\phi_{in}$; see Proposition 13.4.
Define a new coordinate system around $\xi(t)$ by
$y:=\frac{x-\xi(t)}{\lambda(t)},\quad\quad\tau=\tau(t).$
Here the new time variable $\tau$ is determined by the relation
$\tau^{\prime}(t)=\lambda(t)^{-2},\quad\quad\tau(0)=0.$
Because there is a one-to-one correspondence between $\tau$ and $t$, in the
following we will not distinguish between them. For example, we will write
$\mathcal{O}(\tau)$ instead of $\mathcal{O}(t(\tau))$.
It is convenient to write $\phi$ in these new coordinates. By denoting
$\varphi(y,\tau):=\lambda(t)^{\frac{n-2}{2}}\phi(x,t),$
we get
$\displaystyle\partial_{\tau}\varphi-\Delta\varphi$ $\displaystyle=$
$\displaystyle
pW^{p-1}\varphi+\left(-\frac{\dot{a}}{\lambda}+\mu_{0}\frac{a}{\lambda},\frac{\dot{\xi}}{\lambda},\frac{\dot{\lambda}}{\lambda}\right)\cdot
Z$ $\displaystyle+$
$\displaystyle\mathcal{N}+\frac{\dot{\xi}}{\lambda}\cdot\nabla\varphi+\frac{\dot{\lambda}}{\lambda}\left(y\cdot\nabla\varphi+\frac{n-2}{2}\varphi\right)$
$\displaystyle+$
$\displaystyle\frac{a}{\lambda}\left[\frac{\dot{\xi}}{\lambda}\cdot\nabla
Z_{0}+\frac{\dot{\lambda}}{\lambda}\left(y\cdot\nabla
Z_{0}+\frac{n}{2}Z_{0}\right)\right].$
Here the dot denotes differentiation in $\tau$, and, abusing notation, we
set
$\mathcal{N}:=\left(W+\varphi+\frac{a}{\lambda}Z_{0}\right)^{p}-W^{p}-pW^{p-1}\left(\varphi+\frac{a}{\lambda}Z_{0}\right).$
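As a sketch of where the drift terms come from (using only the relations $y=\frac{x-\xi(t)}{\lambda(t)}$ and $\tau^{\prime}(t)=\lambda(t)^{-2}$, so that $\lambda^{\prime}(t)=\dot{\lambda}\lambda^{-2}$ and $\xi^{\prime}(t)=\dot{\xi}\lambda^{-2}$): writing $\phi(x,t)=\lambda(t)^{-\frac{n-2}{2}}\varphi(y,\tau)$, the chain rule gives
$\partial_{t}\phi=\lambda^{-\frac{n+2}{2}}\left[\partial_{\tau}\varphi-\frac{\dot{\lambda}}{\lambda}\left(\frac{n-2}{2}\varphi+y\cdot\nabla\varphi\right)-\frac{\dot{\xi}}{\lambda}\cdot\nabla\varphi\right],\quad\quad\Delta_{x}\phi=\lambda^{-\frac{n+2}{2}}\Delta_{y}\varphi.$
Since $W_{\ast}^{p-1}=\lambda^{-2}W^{p-1}$, multiplying (12.8) by $\lambda^{\frac{n+2}{2}}$ and moving the drift terms to the right hand side yields the equation above.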
We will need the following pointwise estimate on $\mathcal{N}$. Recall that
$\bar{p}=\min\\{p,2\\}$.
###### Lemma 13.1.
The following pointwise inequality holds:
$|\mathcal{N}|\lesssim|\varphi|^{\bar{p}}+\Big{|}\frac{a}{\lambda}\Big{|}^{\bar{p}}Z_{0}^{\bar{p}}.$
###### Proof.
There exists a $\vartheta\in[0,1]$ such that
$\displaystyle\left(W+\varphi+\frac{a}{\lambda}Z_{0}\right)^{p}-W^{p}-pW^{p-1}\left(\varphi+\frac{a}{\lambda}Z_{0}\right)$
$\displaystyle=$ $\displaystyle
p\left\\{\left[W+\vartheta\left(\varphi+\frac{a}{\lambda}Z_{0}\right)\right]^{p-1}-W^{p-1}\right\\}\left(\varphi+\frac{a}{\lambda}Z_{0}\right).$
Since both $W$ and $W+\varphi+\frac{a}{\lambda}Z_{0}=u$ are bounded by a
universal constant, and $\varphi+\frac{a}{\lambda}Z_{0}$ is small, by
considering the two cases $W\geq|\varphi+\frac{a}{\lambda}Z_{0}|$ and
$W<|\varphi+\frac{a}{\lambda}Z_{0}|$ separately, we get
$|\mathcal{N}|\lesssim\Big{|}\varphi+\frac{a}{\lambda}Z_{0}\Big{|}^{\bar{p}}\lesssim|\varphi|^{\bar{p}}+\Big{|}\frac{a}{\lambda}\Big{|}^{\bar{p}}Z_{0}^{\bar{p}}.\qed$
By scaling the estimates in Proposition 12.3, we obtain
###### Proposition 13.2.
For each $\tau$,
$K^{\alpha}\sup_{y\in
B_{2K}}\left[|\varphi(y,\tau)|+K|\nabla\varphi(y,\tau)|+K^{2}\left(|\nabla^{2}\varphi(y,\tau)|+|\partial_{\tau}\varphi(y,\tau)|\right)\right]=o(1).$
In $(y,\tau)$ coordinates, the inner error function $\phi_{in}$ is
$\varphi_{K}(y,\tau):=\varphi(y,\tau)\eta_{K}(y).$
For each $\tau$, the support of $\varphi_{K}(\cdot,\tau)$ is contained in
$B_{K}$. By (12.2), for each $i=0,\cdots,n+1$,
$\int_{\mathbb{R}^{n}}\varphi_{K}(y,\tau)Z_{i}(y)dy=0.$ (13.2)
The equation for $\varphi_{K}$ is
$\partial_{\tau}\varphi_{K}-\Delta_{y}\varphi_{K}=pW^{p-1}\varphi_{K}+\lambda^{-1}\left(-\dot{a}+\mu_{0}a,\dot{\xi},\dot{\lambda}\right)\cdot
Z+E_{K}.$ (13.3)
Here
$\displaystyle E_{K}$ $\displaystyle:=$
$\displaystyle-2\nabla\varphi\cdot\nabla\eta_{K}-\varphi\Delta\eta_{K}+\mathcal{N}\eta_{K}$
$\displaystyle+$
$\displaystyle\lambda^{-1}\left(-\dot{a}+\mu_{0}a,\dot{\xi},\dot{\lambda}\right)\cdot
Z\left(\eta_{K}-1\right)$ $\displaystyle+$
$\displaystyle\lambda^{-1}\left[\dot{\lambda}\left(\frac{n-2}{2}\varphi+y\cdot\nabla_{y}\varphi\right)+\dot{\xi}\cdot\nabla_{y}\varphi\right]\eta_{K}$
$\displaystyle+$
$\displaystyle\frac{a}{\lambda}\left[\frac{\dot{\xi}}{\lambda}\cdot\nabla
Z_{0}+\frac{\dot{\lambda}}{\lambda}\left(y\cdot\nabla
Z_{0}+\frac{n}{2}Z_{0}\right)\right]\eta_{K}.$
The following estimate holds for $E_{K}$.
###### Lemma 13.3.
For any $\tau$, we have
$\displaystyle\|E_{K}(\tau)\|_{2+\alpha}$ $\displaystyle\lesssim$
$\displaystyle\mathcal{O}(\tau)+K^{2+\alpha-\bar{p}\alpha}\|\varphi_{K}(\tau)\|_{\alpha}^{\bar{p}}+\Big{|}\frac{a(\tau)}{\lambda(\tau)}\Big{|}^{\bar{p}}$
$\displaystyle+$ $\displaystyle
K^{3+\alpha-n}\left|\frac{\dot{\xi}(\tau)}{\lambda(\tau)}\right|+K^{4+\alpha-n}\left|\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}\right|+e^{-cK}\Big{|}\frac{\dot{a}(\tau)-\mu_{0}a(\tau)}{\lambda(\tau)}\Big{|}.$
###### Proof.
(1)
Because $\alpha=(n-2)/2$, by the definition of $\mathcal{O}$, we have
$K^{2+\alpha}\sup_{y}\left|2\nabla\varphi(y,\tau)\cdot\nabla\eta_{K}(y)+\varphi(y,\tau)\Delta\eta_{K}(y)\right|\lesssim\mathcal{O}(\tau).$
(2)
By Lemma 13.1, we obtain
$\displaystyle|\mathcal{N}\eta_{K}|$ $\displaystyle\lesssim$
$\displaystyle|\varphi|^{\bar{p}}\eta_{K}+\left|\frac{a}{\lambda}\right|^{\bar{p}}Z_{0}^{\bar{p}}$
$\displaystyle=$
$\displaystyle|\varphi|^{\bar{p}}\left(\eta_{K}-\eta_{K}^{\bar{p}}\right)+|\varphi_{K}|^{\bar{p}}+\left|\frac{a}{\lambda}\right|^{\bar{p}}Z_{0}^{\bar{p}}.$
First, because $\mathcal{O}(\tau)\ll 1$,
$\sup_{y\in\mathbb{R}^{n}}K^{2+\alpha}|\varphi(y,\tau)|^{\bar{p}}\left[\eta_{K}(y)-\eta_{K}(y)^{\bar{p}}\right]\lesssim
K^{2+\alpha}\mathcal{O}(\tau)^{\bar{p}}\ll\mathcal{O}(\tau).$
Secondly, because the support of $\varphi_{K}$ is contained in $B_{K}$,
$\sup_{y\in\mathbb{R}^{n}}\left(1+|y|\right)^{2+\alpha}|\varphi_{K}(y,\tau)|^{\bar{p}}\lesssim
K^{2+\alpha-\bar{p}\alpha}\|\varphi_{K}(\tau)\|_{\alpha}^{\bar{p}}.$
Finally, by the exponential decay of $Z_{0}$ at infinity,
$\sup_{y\in\mathbb{R}^{n}}\left(1+|y|\right)^{2+\alpha}\left|\frac{a(\tau)}{\lambda(\tau)}\right|^{\bar{p}}|Z_{0}(y)|^{\bar{p}}\lesssim\left|\frac{a(\tau)}{\lambda(\tau)}\right|^{\bar{p}}.$
(3)
For $i=1,\cdots,n$, $Z_{i}(y)=O\left(|y|^{1-n}\right)$. Hence
$\sup_{y\in\mathbb{R}^{n}}\left(1+|y|\right)^{2+\alpha}\left|\frac{\dot{\xi}(\tau)}{\lambda(\tau)}\cdot
Z(y)\left[\eta_{K}(y)-1\right]\right|\lesssim
K^{3+\alpha-n}\left|\frac{\dot{\xi}(\tau)}{\lambda(\tau)}\right|.$
(4)
As in the previous case, because $Z_{n+1}(y)=O\left(|y|^{2-n}\right)$, we get
$\sup_{y\in\mathbb{R}^{n}}\left(1+|y|\right)^{2+\alpha}\left|\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}Z_{n+1}(y)\left[\eta_{K}(y)-1\right]\right|\lesssim
K^{4+\alpha-n}\left|\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}\right|.$
(5)
By the exponential decay of $Z_{0}$, we get
$\sup_{y\in\mathbb{R}^{n}}\left(1+|y|\right)^{2+\alpha}\left|\frac{\dot{a}(\tau)-\mu_{0}a(\tau)}{\lambda(\tau)}Z_{0}(y)\left[\eta_{K}(y)-1\right]\right|\lesssim
e^{-cK}\frac{\left|\dot{a}(\tau)-\mu_{0}a(\tau)\right|}{\lambda(\tau)}.$
(6)
By Proposition 13.2, we have
$\sup_{y\in\mathbb{R}^{n}}\left(1+|y|\right)^{2+\alpha}\left|\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}\left[\frac{n-2}{2}\varphi(y)+y\cdot\nabla_{y}\varphi(y)\right]\eta_{K}(y)\right|\ll
K^{4+\alpha-n}\frac{|\dot{\lambda}(\tau)|}{\lambda(\tau)}.$
(7)
As in the previous case,
$\displaystyle\sup_{y\in\mathbb{R}^{n}}\left(1+|y|\right)^{2+\alpha}\left|\frac{a(\tau)}{\lambda(\tau)}\left[\frac{\dot{\xi}(\tau)}{\lambda(\tau)}\cdot\nabla
Z_{0}(y)+\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}\left(y\cdot\nabla
Z_{0}(y)+\frac{n}{2}Z_{0}(y)\right)\right]\eta_{K}(y)\right|$
$\displaystyle\ll$ $\displaystyle
K^{4+\alpha-n}\left(\frac{|\dot{\xi}(\tau)|}{\lambda(\tau)}+\frac{|\dot{\lambda}(\tau)|}{\lambda(\tau)}\right).$
Putting these estimates together we finish the proof. ∎
Because $n\geq 7$ and $\alpha=(n-2)/2$, we have
$4+\alpha-n=3-\frac{n}{2}<0.$
Furthermore, since terms such as $\|\varphi_{K}(\tau)\|_{\alpha}$
are small, this lemma can be restated as
$\displaystyle\|E_{K}(\tau)\|_{2+\alpha}$ $\displaystyle\lesssim$
$\displaystyle o\left(\|\varphi_{K}(\tau)\|_{\alpha}+\Big{|}\frac{\dot{\xi}(\tau)}{\lambda(\tau)}\Big{|}+\Big{|}\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}\Big{|}+\Big{|}\frac{a(\tau)}{\lambda(\tau)}\Big{|}+\Big{|}\frac{\dot{a}(\tau)-\mu_{0}a(\tau)}{\lambda(\tau)}\Big{|}\right)+\mathcal{O}(\tau).$
(13.4)
In particular, the main order term in $E_{K}$ is $\mathcal{O}(\tau)$, which
comes from the outer component.
Now we prove our main estimate in this section.
###### Proposition 13.4 ($C^{1,\theta}$ estimates).
•
For any $\tau$,
$\left|\frac{\dot{\xi}(\tau)}{\lambda(\tau)}\right|+\left|\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}\right|+\left|\frac{\dot{a}(\tau)-\mu_{0}a(\tau)}{\lambda(\tau)}\right|\lesssim\mathcal{O}(\tau)+K^{2+\alpha-\bar{p}\alpha}\|\varphi_{K}(\tau)\|_{\alpha}^{\bar{p}}+\Big{|}\frac{a(\tau)}{\lambda(\tau)}\Big{|}^{\bar{p}}.$
(13.5)
•
For any $\sigma>0$, there exist two constants $C$ (universal) and
$T(\sigma,K)$ such that for any $\tau_{1}<\tau_{2}$,
$\displaystyle\|\varphi_{K}\|_{C^{(1+\theta)/2}(\tau_{1},\tau_{2};\mathcal{X}_{\alpha}^{1+\theta})}$
$\displaystyle\leq$
$\displaystyle\sigma\|\varphi_{K}(\tau_{1}-T(\sigma,K))\|_{\mathcal{X}_{\alpha}}$
$\displaystyle+C\left\|\left(\mathcal{O},\left|\frac{a}{\lambda}\right|^{\bar{p}}\right)\right\|_{L^{\infty}(\tau_{1}-T(\sigma,K),\tau_{2})}.$
(13.6)
###### Proof.
Take a decomposition $\varphi_{K}=\varphi_{K,1}+\varphi_{K,2}$, where
$\varphi_{K,1}$ is the solution of (A.3) with initial value $\varphi_{K}$ at
$\tau_{1}-T(\sigma,K)$ (with $T(\sigma,K)$ to be determined below), and
$\varphi_{K,2}$ is the solution of (A.8) with non-homogeneous term $E_{K}$.
To apply Lemma A.4, take an $\alpha^{\prime}>n/2$. Because $\varphi_{K}$ is
supported in $B_{K}$, we have
$\left\|\varphi_{K}(\tau_{1}-T(\sigma,K))\right\|_{\alpha^{\prime}}\leq\left(2K\right)^{\alpha^{\prime}-\alpha}\left\|\varphi_{K}(\tau_{1}-T(\sigma,K))\right\|_{\alpha}.$
(13.7)
By Lemma A.4, there exists a $T(\sigma,K)>0$ such that
$\|\varphi_{K,1}\|_{C^{(1+\theta)/2}(\tau_{1},\tau_{2};C^{1+\theta}(B_{2K}))}\leq\sigma\left(4K\right)^{-1-\alpha}\left(2K\right)^{\alpha-\alpha^{\prime}}.$
(13.8)
For $\varphi_{K,2}$, in view of the estimate in Lemma 13.3, a direct
application of Lemma A.5 gives
$\|\varphi_{K,2}\|_{C^{(1+\theta)/2}(\tau_{1},\tau_{2};\mathcal{X}_{\alpha}^{1+\theta})}\leq
C(\alpha)\left\|\left(\mathcal{O},\left|\frac{a}{\lambda}\right|^{\bar{p}}\right)\right\|_{L^{\infty}(\tau_{1}-T(\sigma,K),\tau_{2})}.$
(13.9)
Adding (13.8) and (13.9), and using again the fact that $\varphi_{K}(\tau)$ is
supported in $B_{K}$, we get the second estimate of Proposition 13.4. ∎
For applications in Section 18, we give another estimate on the parameters
$\dot{\lambda}$ etc. in terms of the $L^{\infty}$ norm of $\varphi$.
###### Lemma 13.5.
For each $\tau$,
$\left|\frac{\dot{a}(\tau)-\mu_{0}a(\tau)}{\lambda(\tau)}\right|+\left|\frac{\dot{\xi}(\tau)}{\lambda(\tau)}\right|+\left|\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}\right|\lesssim\sup_{B_{2K}\setminus
B_{K}}|\varphi(\tau)|+\sup_{B_{2K}}|\varphi(\tau)|^{\bar{p}}+\left|\frac{a(\tau)}{\lambda(\tau)}\right|^{\bar{p}}.$
(13.10)
###### Proof.
For each $j=0,\cdots,n+1$, multiplying (13.3) by $Z_{j}$ and utilizing the
orthogonal condition (13.2), we get
$\left\\{\begin{aligned}
&\frac{\dot{a}(\tau)-\mu_{0}a(\tau)}{\lambda(\tau)}=\int_{\mathbb{R}^{n}}Z_{0}(y)E_{K}(y,\tau)dy,\\\
&\frac{\dot{\xi}_{j}(\tau)}{\lambda(\tau)}=-\int_{\mathbb{R}^{n}}Z_{j}(y)E_{K}(y,\tau)dy,\quad
j=1,\cdots,n,\\\
&\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}=-\int_{\mathbb{R}^{n}}Z_{n+1}(y)E_{K}(y,\tau)dy.\end{aligned}\right.$
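To see how the first identity of this system arises (the other components are similar), multiply (13.3) by $Z_{0}$ and integrate over $\mathbb{R}^{n}$; here we use the normalization $\int_{\mathbb{R}^{n}}Z_{i}^{2}=1$, the mutual $L^{2}$ orthogonality of the $Z_{i}$ and the eigenvalue equation $\Delta Z_{0}+pW^{p-1}Z_{0}=\mu_{0}Z_{0}$ from Appendix A. By (13.2), $\frac{d}{d\tau}\int_{\mathbb{R}^{n}}\varphi_{K}Z_{0}=0$, and after integrating by parts,
$\int_{\mathbb{R}^{n}}\left(\Delta\varphi_{K}+pW^{p-1}\varphi_{K}\right)Z_{0}=\int_{\mathbb{R}^{n}}\varphi_{K}\left(\Delta Z_{0}+pW^{p-1}Z_{0}\right)=\mu_{0}\int_{\mathbb{R}^{n}}\varphi_{K}Z_{0}=0.$
Hence only the $Z_{0}$-component of the parameter term and the error term survive,
$0=\frac{-\dot{a}(\tau)+\mu_{0}a(\tau)}{\lambda(\tau)}+\int_{\mathbb{R}^{n}}Z_{0}(y)E_{K}(y,\tau)dy,$
which is the first identity.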
An integration by parts gives
$\displaystyle\left|\int_{\mathbb{R}^{n}}\left(2\nabla\varphi\cdot\nabla\eta_{K}+\varphi\Delta\eta_{K}\right)Z_{j}\right|$
$\displaystyle=$
$\displaystyle\left|\int_{\mathbb{R}^{n}}\varphi\left(2\nabla\eta_{K}\cdot\nabla
Z_{j}+\Delta\eta_{K}Z_{j}\right)\right|$ (13.11) $\displaystyle\lesssim$
$\displaystyle\sup_{B_{2K}\setminus B_{K}}|\varphi|.$
Because each $Z_{j}$ decays at least as fast as $|y|^{2-n}$, by Lemma 13.1 and
Proposition 13.2 we find that the contribution from the other terms in
$E_{K}$ is of the order
$O\left(\sup_{y\in
B_{2K}}|\varphi(y,\tau)|^{\bar{p}}+\left|\frac{a(\tau)}{\lambda(\tau)}\right|^{\bar{p}}\right)+o\left(\left|\frac{\dot{\xi}(\tau)}{\lambda(\tau)}\right|+\left|\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}\right|+\left|\frac{\dot{a}(\tau)-\mu_{0}a(\tau)}{\lambda(\tau)}\right|\right).$
Adding these estimates for
$\frac{\dot{a}(\tau)-\mu_{0}a(\tau)}{\lambda(\tau)}$,
$\frac{\dot{\xi}_{j}(\tau)}{\lambda(\tau)}$ and
$\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}$ together, we get (13.10). ∎
Finally, we will need the following backward estimate on $a/\lambda$.
###### Lemma 13.6.
For any $\tau_{1}<\tau_{2}$, we have
$\displaystyle\frac{|a(\tau_{1})|}{\lambda(\tau_{1})}\lesssim
e^{-\frac{\mu_{0}}{2}(\tau_{2}-\tau_{1})}\frac{|a(\tau_{2})|}{\lambda(\tau_{2})}+\int_{\tau_{1}}^{\tau_{2}}e^{-\frac{\mu_{0}}{2}(s-\tau_{1})}\left[\mathcal{O}(s)+\|\varphi_{K}(s)\|_{\alpha}^{\bar{p}}\right]ds.$
(13.12)
###### Proof.
By a direct differentiation and then applying (13.5), we obtain
$\displaystyle\frac{d}{d\tau}\frac{a(\tau)}{\lambda(\tau)}$ $\displaystyle=$
$\displaystyle\frac{\dot{a}(\tau)}{\lambda(\tau)}-\frac{\dot{\lambda}(\tau)}{\lambda(\tau)}\frac{a(\tau)}{\lambda(\tau)}$
$\displaystyle=$
$\displaystyle\mu_{0}\frac{a(\tau)}{\lambda(\tau)}+O\left(\mathcal{O}(\tau)+\|\varphi_{K}(\tau)\|_{\alpha}^{\bar{p}}+\left|\frac{a(\tau)}{\lambda(\tau)}\right|^{\bar{p}}\right)$
$\displaystyle=$
$\displaystyle\left[\mu_{0}+o(1)\right]\frac{a(\tau)}{\lambda(\tau)}+O\left(\mathcal{O}(\tau)+\|\varphi_{K}(\tau)\|_{\alpha}^{\bar{p}}\right).$
Integrating this differential inequality gives (13.12). ∎
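To spell out the last integration step, set $g:=\frac{a}{\lambda}$ and $h:=\mathcal{O}+\|\varphi_{K}\|_{\alpha}^{\bar{p}}$; the inequality above then has the form $\dot{g}=\widetilde{\mu}g+O(h)$ with $\widetilde{\mu}:=\mu_{0}+o(1)\geq\mu_{0}/2$ (treated as a constant in this sketch). Multiplying by the integrating factor $e^{-\widetilde{\mu}\tau}$ and integrating from $\tau_{1}$ to $\tau_{2}$,
$e^{-\widetilde{\mu}\tau_{2}}g(\tau_{2})-e^{-\widetilde{\mu}\tau_{1}}g(\tau_{1})=O\left(\int_{\tau_{1}}^{\tau_{2}}e^{-\widetilde{\mu}s}h(s)ds\right),$
so
$|g(\tau_{1})|\lesssim e^{-\widetilde{\mu}(\tau_{2}-\tau_{1})}|g(\tau_{2})|+\int_{\tau_{1}}^{\tau_{2}}e^{-\widetilde{\mu}(s-\tau_{1})}h(s)ds.$
The backward direction of this estimate reflects the instability of the $a$-mode, i.e. $\mu_{0}>0$.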
### 14\. Outer problem
In this section we establish an estimate on the outer component $\phi_{out}$.
Our main goal is to bound $\mathcal{O}$ in terms of $\mathcal{I}$ and the
parameters $\lambda^{\prime}$ etc.
#### 14.1. Decomposition of the outer equation
First we need to take a further decomposition of $\phi_{out}$. Recall that
$\phi_{out}$ satisfies
$\displaystyle\partial_{t}\phi_{out}-\Delta\phi_{out}$ $\displaystyle=$
$\displaystyle\left(u^{p}-W_{\ast}^{p}\right)\left(1-\eta_{out}\right)$
$\displaystyle-$
$\displaystyle\phi\left(\partial_{t}\eta_{out}-\Delta\eta_{out}\right)+2\nabla\phi\cdot\nabla\eta_{out}$
$\displaystyle+$
$\displaystyle\left(-a^{\prime}+\mu_{0}\frac{a}{\lambda^{2}},\xi^{\prime},\lambda^{\prime}\right)\cdot Z_{\ast}\left(1-\eta_{out}\right)$
$\displaystyle-$ $\displaystyle
a\partial_{t}Z_{0}^{\ast}\left(1-\eta_{out}\right).$ (14.1)
Introducing
$\left\\{\begin{aligned}
&V_{\ast}:=\begin{cases}\frac{u^{p}-W_{\ast}^{p}}{u-W_{\ast}}\chi_{B_{L\lambda(t)}(\xi(t))^{c}},&\mbox{if
}u\neq W_{\ast},\\\
pu^{p-1}\chi_{B_{L\lambda(t)}(\xi(t))^{c}},&\mbox{otherwise},\end{cases}\\\
&F_{2}:=-\phi\left(\partial_{t}\eta_{out}-\Delta\eta_{out}\right)+2\nabla\phi\cdot\nabla\eta_{out},\\\
&F_{3}:=\lambda^{\prime}Z_{n+1,\ast}\left(1-\eta_{out}\right),\\\
&F_{4}:=\sum_{j=1}^{n}\xi_{j}^{\prime}Z_{j,\ast}\left(1-\eta_{out}\right),\\\
&F_{5}:=\left[aV_{\ast}Z_{0}^{\ast}-\left(a^{\prime}-\mu_{0}\frac{a}{\lambda^{2}}\right)Z_{0}^{\ast}-a\partial_{t}Z_{0}^{\ast}\right]\left(1-\eta_{out}\right),\end{aligned}\right.$
then (14.1) is written as
$\partial_{t}\phi_{out}-\Delta\phi_{out}=V_{\ast}\phi_{out}+F_{2}+F_{3}+F_{4}+F_{5}.$
(14.2)
Corresponding to this decomposition of the right hand side, we have
$\phi_{out}=\phi_{1}+\phi_{2}+\phi_{3}+\phi_{4}+\phi_{5}$, with these five
functions solving the following five equations respectively: first,
$\left\\{\begin{aligned}
&\partial_{t}\phi_{1}-\Delta\phi_{1}=V_{\ast}\phi_{1}\quad&\mbox{in
}Q_{9/10},\\\ &\phi_{1}=\phi_{out},\quad&\mbox{on
}\partial^{p}Q_{9/10};\end{aligned}\right.$
next,
$\left\\{\begin{aligned}
&\partial_{t}\phi_{2}-\Delta\phi_{2}=V_{\ast}\phi_{2}+F_{2}\quad&\mbox{in
}Q_{9/10},\\\ &\phi_{2}=0,\quad&\mbox{on
}\partial^{p}Q_{9/10};\end{aligned}\right.$
and $\phi_{3},\phi_{4},\phi_{5}$ solve the same equation as $\phi_{2}$, but
with the right hand side replaced by $F_{3}$, $F_{4}$ and $F_{5}$
respectively.
The following estimates hold for $V_{\ast}$ and $F_{2}$-$F_{5}$.
(i)
Because $u>0$ and $W_{\ast}>0$, by convexity of the function $x\mapsto x^{p}$
and Lemma 11.5, for any $\delta>0$ there exist constants $C>0$ and
$R(\delta)>0$ such that, if $L\geq R(\delta)$, then
$|V_{\ast}|\leq
p\left(u^{p-1}+W_{\ast}^{p-1}\right)\leq\frac{\delta}{|x-\xi(t)|^{2}}+C.$
(14.3)
(ii)
By the definition of $\eta_{out}$ in (12.9), we get
$\displaystyle\big{|}\phi\left(\partial_{t}\eta_{out}-\Delta\eta_{out}\right)\big{|}$
$\displaystyle\lesssim$
$\displaystyle|\phi|\left(\frac{|\lambda^{\prime}(t)|}{\lambda(t)}+\frac{|\xi^{\prime}(t)|}{L\lambda(t)}+\frac{1}{L^{2}\lambda(t)^{2}}\right)\chi_{B_{2L\lambda(t)}(\xi(t))\setminus
B_{L\lambda(t)}(\xi(t))}$ $\displaystyle\lesssim$
$\displaystyle\frac{\mathcal{I}(t)}{L^{\alpha}\lambda(t)^{\alpha}}\left(\frac{|\lambda^{\prime}(t)|}{\lambda(t)}+\frac{|\xi^{\prime}(t)|}{L\lambda(t)}+\frac{1}{L^{2}\lambda(t)^{2}}\right)\chi_{B_{2L\lambda(t)}(\xi(t))\setminus
B_{L\lambda(t)}(\xi(t))}.$
In the same way, we get
$\displaystyle 2|\nabla\phi||\nabla\eta_{out}|$ $\displaystyle\lesssim$
$\displaystyle\frac{|\nabla\phi|}{L\lambda(t)}\chi_{B_{2L\lambda(t)}(\xi(t))\setminus
B_{L\lambda(t)}(\xi(t))}$ $\displaystyle\lesssim$
$\displaystyle\frac{\mathcal{I}(t)}{L^{2+\alpha}\lambda(t)^{2+\alpha}}\chi_{B_{2L\lambda(t)}(\xi(t))\setminus
B_{L\lambda(t)}(\xi(t))}.$
Combining the two estimates above, we obtain
$|F_{2}|\lesssim\frac{\mathcal{I}(t)}{L^{\alpha}\lambda(t)^{\alpha}}\left[\frac{|\lambda^{\prime}(t)|}{\lambda(t)}+\frac{|\xi^{\prime}(t)|}{L\lambda(t)}+\frac{1}{L^{2}\lambda(t)^{2}}\right]\chi_{B_{2L\lambda(t)}(\xi(t))\setminus
B_{L\lambda(t)}(\xi(t))}.$ (14.6)
(iii)
By the decay of $Z_{n+1}$ at infinity, we get
$|F_{3}|\lesssim\lambda(t)^{\frac{n-4}{2}}|\lambda^{\prime}(t)||x-\xi(t)|^{2-n}\chi_{B_{L\lambda(t)}(\xi(t))^{c}}.$
(14.7)
(iv)
By the decay of $Z_{1},\cdots,Z_{n}$ at infinity, we get
$|F_{4}|\lesssim\lambda(t)^{\frac{n-2}{2}}|\xi^{\prime}(t)||x-\xi(t)|^{1-n}\chi_{B_{L\lambda(t)}(\xi(t))^{c}}.$
(14.8)
(v)
By the decay of $Z_{0}$ at infinity, we get
$\Big{|}\left(a^{\prime}-\mu_{0}\frac{a}{\lambda^{2}}\right)Z_{0}^{\ast}\left(1-\eta_{out}\right)\Big{|}\lesssim\Big{|}a^{\prime}(t)-\mu_{0}\frac{a(t)}{\lambda(t)^{2}}\Big{|}\lambda(t)^{-\frac{n}{2}}e^{-c\frac{|x-\xi(t)|}{\lambda(t)}}\chi_{B_{L\lambda(t)}(\xi(t))^{c}}.$
(14.9)
Similarly, we have
$\Big{|}a\partial_{t}Z_{0}^{\ast}\left(1-\eta_{out}\right)\Big{|}\lesssim|a(t)|\lambda(t)^{-\frac{n+2}{2}}\left(|\lambda^{\prime}(t)|+|\xi^{\prime}(t)|\right)e^{-c\frac{|x-\xi(t)|}{\lambda(t)}}\chi_{B_{L\lambda(t)}(\xi(t))^{c}}.$
(14.10)
Finally, by (14.3), we obtain
$\displaystyle\left|aV_{\ast}Z_{0}^{\ast}\left(1-\eta_{out}\right)\right|$
$\displaystyle\lesssim$
$\displaystyle\frac{|a(t)|}{|x-\xi(t)|^{2}}\lambda(t)^{-\frac{n}{2}}e^{-c\frac{|x-\xi(t)|}{\lambda(t)}}\chi_{B_{L\lambda(t)}(\xi(t))^{c}}$
$\displaystyle\lesssim$
$\displaystyle\frac{|a(t)|}{\lambda(t)^{2}}\lambda(t)^{-\frac{n}{2}}e^{-c\frac{|x-\xi(t)|}{\lambda(t)}}\chi_{B_{L\lambda(t)}(\xi(t))^{c}}.$
Combining (14.9), (14.10) and the estimate above, we get
$\displaystyle|F_{5}|$ $\displaystyle\lesssim$
$\displaystyle\left[\frac{|a(t)|}{\lambda(t)^{2}}+\Big{|}a^{\prime}(t)-\mu_{0}\frac{a(t)}{\lambda(t)^{2}}\Big{|}+\frac{|a(t)|}{\lambda(t)}\left(|\lambda^{\prime}(t)|+|\xi^{\prime}(t)|\right)\right]$
$\displaystyle\quad\quad\times\lambda(t)^{-\frac{n}{2}}e^{-c\frac{|x-\xi(t)|}{\lambda(t)}}\chi_{B_{L\lambda(t)}(\xi(t))^{c}}.$
For each $i=1,\cdots,5$, let
$\overline{\phi}_{i}(x,t):=\left|\phi_{i}\left(x-\xi(t),t\right)\right|.$
(14.13)
By the Kato inequality, we obtain
$\partial_{t}\overline{\phi}_{1}-\Delta\overline{\phi}_{1}+\xi^{\prime}(t)\cdot\nabla\overline{\phi}_{1}\leq|V_{\ast}|\overline{\phi}_{1}$
(14.14)
and for $i=2,3,4,5$,
$\partial_{t}\overline{\phi}_{i}-\Delta\overline{\phi}_{i}+\xi^{\prime}(t)\cdot\nabla\overline{\phi}_{i}\leq|V_{\ast}|\overline{\phi}_{i}+|F_{i}|.$
(14.15)
#### 14.2. Estimates for the outer equation
In this subsection, we establish some pointwise estimates on
$\overline{\phi}_{1},\cdots,\overline{\phi}_{5}$.
Denote the Dirichlet heat kernel for the operator
$\mathcal{H}:=\partial_{t}-\Delta-\left(\delta|x|^{-2}+C\right)+\xi^{\prime}(t)\cdot\nabla$
in $Q_{1}$ by $G(x,t;y,s)$ ($t>s$).
Let
$\gamma:=\frac{n-2}{2}-\sqrt{\frac{(n-2)^{2}}{4}-\delta}.$
Then $w(x):=|x|^{-\gamma}$ is a weak solution of the elliptic equation
$-\Delta w(x)=\frac{\delta}{|x|^{2}}w(x).$
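This can be checked directly: away from the origin, for the radial function $|x|^{-\gamma}$ we have
$-\Delta|x|^{-\gamma}=\left[-\gamma(\gamma+1)+(n-1)\gamma\right]|x|^{-\gamma-2}=\gamma\left(n-2-\gamma\right)|x|^{-\gamma-2},$
so the claim amounts to the algebraic identity $\gamma(n-2-\gamma)=\delta$, of which the smaller root is $\frac{n-2}{2}-\sqrt{\frac{(n-2)^{2}}{4}-\delta}$; in particular $\gamma\ll 1$ when $\delta\ll 1$.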
###### Lemma 14.1 (Estimate on $\overline{\phi}_{1}$).
$\overline{\phi}_{1}(x,t)\lesssim|x|^{-\gamma}$ in $Q_{8/9}$.
###### Proof.
Set $\varphi_{1}:=\overline{\phi}_{1}/w$. It is a sub-solution to the
parabolic equation
$\partial_{t}\varphi=w^{-2}\mbox{div}\left(w^{2}\nabla\varphi\right)-\xi^{\prime}(t)\cdot\nabla\varphi+\left[C+\gamma\xi^{\prime}(t)\cdot\frac{x}{|x|^{2}}\right]\varphi.$
(14.16)
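As a sketch of this computation: since $w^{2}\nabla\varphi_{1}=w\nabla\overline{\phi}_{1}-\overline{\phi}_{1}\nabla w$ and $-\Delta w=\frac{\delta}{|x|^{2}}w$, we have
$w^{-2}\mbox{div}\left(w^{2}\nabla\varphi_{1}\right)=w^{-1}\left(\Delta\overline{\phi}_{1}+\frac{\delta}{|x|^{2}}\overline{\phi}_{1}\right).$
Combining this with (14.14), (14.3) and the identity
$w^{-1}\xi^{\prime}(t)\cdot\nabla\overline{\phi}_{1}=\xi^{\prime}(t)\cdot\nabla\varphi_{1}+\varphi_{1}\,\xi^{\prime}(t)\cdot\frac{\nabla w}{w},\quad\quad\frac{\nabla w}{w}=-\gamma\frac{x}{|x|^{2}},$
one checks that $\varphi_{1}$ is indeed a sub-solution of (14.16).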
We note the following three facts about the coefficients of this equation.
(1)
Because $\gamma\ll 1$, by [71, Theorem 3.1], the volume doubling property and
the scaling-invariant Poincaré inequality hold on $\mathbb{R}^{n}$ with the
weighted measure $w^{2}dx$. By [76, Theorem 5.2.3], for some $q>2$
(independent of $\gamma$), there is a local Sobolev embedding from
$W^{1,2}(B_{1},w^{2}dx)$ into $L^{q}(B_{1},w^{2}dx)$.
(2)
By the Lipschitz hypothesis (10.1), $\xi^{\prime}(t)\in L^{\infty}(-1,1)$.
(3)
Similarly, the zeroth order term
$C+\gamma\xi^{\prime}(t)\cdot\frac{x}{|x|^{2}}\in
L^{\infty}(-1,1;L^{n-3\gamma}(B_{1},w^{2}dx))$.
Then by the standard De Giorgi-Nash-Moser estimate (cf. [71, Section 3] and [76,
Chapter 5]), we deduce that $\varphi_{1}$ is bounded in $Q_{8/9}$. (The lower
order terms in this parabolic operator do not affect this argument, thanks to
their higher integrability.) ∎
In fact, Moser’s Harnack inequality holds for positive solutions of (14.16).
This is similar to [71, Theorem 3.5]. Then by [76, Section 5.4.7] (see also
[71, Theorem 4.3]), the heat kernel $G$ satisfies a Gaussian bound
$G(x,t;y,s)\leq
C\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}.$
(14.17)
Hereafter we fix two constants $\beta>1$ (but sufficiently close to $1$) and
$\mu\in(0,1)$ (to be determined below). We choose $\mu$ so that
$\left(\frac{n-2}{2}-2\gamma\right)\left(1-\mu\right)>2,$
and we denote this quantity by $2+2\kappa$, where $\kappa>0$. Such a choice is
possible because $n\geq 7$ and $\gamma\ll 1$.
###### Lemma 14.2 (Estimate on $\overline{\phi}_{2}$).
If $K\lambda(t)/2\leq|x|\leq 4K\lambda(t)$, then
$\displaystyle\overline{\phi}_{2}(x,t)$ $\displaystyle\lesssim$
$\displaystyle\left[K^{-\left(\frac{n-2}{2}-\gamma\right)(\beta-1)}+\frac{L^{\frac{n-2}{2}-\gamma}}{K^{\frac{n-2}{2}-(2\beta-1)\gamma}}\right]\left[K\lambda(t)\right]^{-\frac{n-2}{2}}\|\mathcal{I}\|_{L^{\infty}(t-\lambda(t)^{2\mu},t)}$
$\displaystyle+$ $\displaystyle\lambda(t)^{-\gamma(1-\mu)-\frac{n-2}{2}\mu}.$
###### Proof.
By the heat kernel representation formula, for any $(x,t)$ we have
$\overline{\phi}_{2}(x,t)\leq\int_{-81/100}^{t}\int_{B_{2L\lambda(s)}\setminus
B_{L\lambda(s)}}G(x,t;y,s)\frac{\mathcal{I}(s)}{L^{\alpha}\lambda(s)^{\alpha}}\left(\frac{|\lambda^{\prime}(s)|}{\lambda(s)}+\frac{|\xi^{\prime}(s)|}{L\lambda(s)}+\frac{1}{L^{2}\lambda(s)^{2}}\right)dyds.$
Divide this integral into three parts: the first part $\mathrm{I}$ over
$(-81/100,t-\lambda(t)^{2\mu})$, the second part $\mathrm{II}$ over
$(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})$, and the third part
$\mathrm{III}$ over $(t-K^{2\beta}\lambda(t)^{2},t)$.
Estimate of $\mathrm{I}$. By (13.5),
$|\lambda^{\prime}(s)|+|\xi^{\prime}(s)|\ll\lambda(s)^{-1}.$ (14.18)
Hence
$\left\|\frac{\mathcal{I}(s)}{L^{\alpha}\lambda(s)^{\alpha}}\left(\frac{|\lambda^{\prime}(s)|}{\lambda(s)}+\frac{|\xi^{\prime}(s)|}{L\lambda(s)}+\frac{1}{L^{2}\lambda(s)^{2}}\right)\chi_{B_{2L\lambda(s)}\setminus
B_{L\lambda(s)}}\right\|_{L^{\frac{2n}{n+2}}(B_{1})}\lesssim\mathcal{I}(s).$
By the Gaussian bound on $G(x,t;y,s)$ in (14.17),
$\left\|G(x,t;\cdot,s)\right\|_{L^{\frac{2n}{n-2}}(B_{1})}\lesssim(t-s)^{-\frac{n+2}{4}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}.$
(14.19)
Then by the Hölder inequality we get
$\displaystyle\mathrm{I}$ $\displaystyle\lesssim$
$\displaystyle|x|^{-\gamma}\int_{-81/100}^{t-\lambda(t)^{2\mu}}(t-s)^{-\frac{n+2}{4}+\frac{\gamma}{2}}\mathcal{I}(s)ds$
$\displaystyle\lesssim$ $\displaystyle
K^{-\gamma}\lambda(t)^{-(1-\mu)\gamma-\frac{n-2}{2}\mu},$
where in the last step we used only the estimate $|\mathcal{I}(s)|\leq C$.
Estimate of $\mathrm{II}$. This case differs from the previous one only in the
last step. Now we have
$\displaystyle\mathrm{II}$ $\displaystyle\lesssim$
$\displaystyle|x|^{-\gamma}\int_{t-\lambda(t)^{2\mu}}^{t-K^{2\beta}\lambda(t)^{2}}(t-s)^{-\frac{n+2}{4}+\frac{\gamma}{2}}\mathcal{I}(s)ds$
$\displaystyle\lesssim$ $\displaystyle
K^{-\frac{n-2}{2}\beta+(\beta-1)\gamma}\lambda(t)^{-\frac{n-2}{2}}\|\mathcal{I}\|_{L^{\infty}(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})}.$
Estimate of $\mathrm{III}$. Still by the heat kernel representation formula,
$\mathrm{III}$ is bounded by
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\int_{B_{2L\lambda(s)}\setminus
B_{L\lambda(s)}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\times\frac{\mathcal{I}(s)}{L^{\alpha}\lambda(s)^{\alpha}}\left(\frac{|\lambda^{\prime}(s)|}{\lambda(s)}+\frac{|\xi^{\prime}(s)|}{L\lambda(s)}+\frac{1}{L^{2}\lambda(s)^{2}}\right)dyds$
$\displaystyle=$
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\frac{\mathcal{I}(s)}{L^{\alpha}\lambda(s)^{\alpha}}\left(\frac{|\lambda^{\prime}(s)|}{\lambda(s)}+\frac{|\xi^{\prime}(s)|}{L\lambda(s)}+\frac{1}{L^{2}\lambda(s)^{2}}\right)\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}$
$\displaystyle\times\left[\int_{B_{2L\lambda(s)}\setminus
B_{L\lambda(s)}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}dy\right]ds.$
For $s\in[t-K^{2\beta}\lambda(t)^{2},t]$, by Proposition 11.1 we have
$\lambda(s)\sim\lambda(t).$ (14.21)
Thus by noting that $K\lambda(t)/2\leq|x|\leq 4K\lambda(t)$ and $K\gg L$, we
have
$|x|\gg|y|\quad\mbox{for any}~{}~{}y\in B_{2L\lambda(s)}.$
Therefore, since $|x-y|\geq|x|/2$, in the integral above we can replace $e^{-c\frac{|x-y|^{2}}{t-s}}$
by $e^{-c\frac{|x|^{2}}{t-s}}$ (at the cost of a smaller constant $c$).
Then in view of (14.18), $\mathrm{III}$ is controlled by
$\displaystyle
K^{(\beta-1)\gamma}\|\mathcal{I}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}L^{\frac{n-2}{2}}\lambda(t)^{\frac{n-2}{2}}$
$\displaystyle\quad\quad\quad\quad\times\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{L\lambda(t)}\right)^{\gamma}ds$
$\displaystyle\lesssim$ $\displaystyle
K^{(\beta-1)\gamma}\|\mathcal{I}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}L^{\frac{n-2}{2}}\lambda(t)^{\frac{n-2}{2}}$
$\displaystyle\quad\quad\quad\quad\left(1+\frac{K^{\beta}}{L}\right)^{\gamma}\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x|^{2}}{t-s}}ds$
$\displaystyle\lesssim$ $\displaystyle
K^{(2\beta-1)\gamma}L^{\frac{n-2}{2}-\gamma}\|\mathcal{I}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}\lambda(t)^{\frac{n-2}{2}}|x|^{2-n}$
$\displaystyle\lesssim$
$\displaystyle\frac{L^{\frac{n-2}{2}-\gamma}}{K^{\frac{n-2}{2}-(2\beta-1)\gamma}}\left[K\lambda(t)\right]^{-\frac{n-2}{2}}\|\mathcal{I}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}.$
Adding up the estimates for $\mathrm{I}$, $\mathrm{II}$ and $\mathrm{III}$, we
finish the proof. ∎
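In the estimate of $\mathrm{III}$ above, the bound $\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}(t-s)^{-\frac{n}{2}}e^{-c|x|^{2}/(t-s)}\,ds\lesssim|x|^{2-n}$ rests on the identity $\int_{0}^{\infty}\tau^{-\frac{n}{2}}e^{-c|x|^{2}/\tau}\,d\tau=\Gamma\!\left(\tfrac{n}{2}-1\right)(c|x|^{2})^{1-\frac{n}{2}}$, obtained from the substitution $u=c|x|^{2}/\tau$. The short script below only sanity-checks this identity numerically; the function name and parameter values are illustrative, not part of the paper.

```python
import math

def gaussian_tail_time_integral(n, c, x, steps=200_000, umax=60.0):
    """Evaluate \\int_0^infty tau^{-n/2} e^{-c x^2/tau} d tau numerically.

    The substitution u = c x^2 / tau turns the integral into
    (c x^2)^{1 - n/2} * \\int_0^infty u^{n/2 - 2} e^{-u} du,
    which we approximate by the midpoint rule on (0, umax]."""
    du = umax / steps
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * du  # midpoint of the k-th subinterval
        total += u ** (n / 2 - 2) * math.exp(-u)
    return (c * x * x) ** (1 - n / 2) * total * du

n, c, x = 7, 1.0, 2.0  # illustrative values; the paper assumes n > 6
numeric = gaussian_tail_time_integral(n, c, x)
exact = (c * x * x) ** (1 - n / 2) * math.gamma(n / 2 - 1)
```

The midpoint rule suffices here because the transformed integrand $u^{n/2-2}e^{-u}$ is smooth and rapidly decaying.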
###### Lemma 14.3 (Estimate on $\overline{\phi}_{3}$).
For $K\lambda(t)/2\leq|x|\leq 4K\lambda(t)$,
$\overline{\phi}_{3}(x,t)\lesssim\lambda(t)^{-\frac{n-2}{2}\mu+2(\mu-1)\gamma}+K^{-\frac{n-2}{2}(\beta-1)+2(\beta-1)\gamma}\left[K\lambda(t)\right]^{-\frac{n-2}{2}}\|\lambda\lambda^{\prime}\|_{L^{\infty}(t-\lambda(t)^{2\mu},t)}.$
###### Proof.
By the heat kernel representation,
$\displaystyle\overline{\phi}_{3}(x,t)$ $\displaystyle\lesssim$
$\displaystyle\int_{-81/100}^{t}\int_{B_{L\lambda(s)}^{c}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}$
$\displaystyle\quad\quad\quad\quad\times\lambda(s)^{\frac{n-4}{2}}|\lambda^{\prime}(s)||y|^{2-n}dyds$
$\displaystyle=$
$\displaystyle\int_{-81/100}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\lambda(s)^{\frac{n-4}{2}}|\lambda^{\prime}(s)|$
$\displaystyle\quad\times\left[\int_{B_{L\lambda(s)}^{c}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}|y|^{2-n}dy\right]ds.$
We still divide this integral into three parts, $\mathrm{I}$ being on the
interval $(-81/100,t-\lambda(t)^{2\mu})$, $\mathrm{II}$ being on the interval
$(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})$, and $\mathrm{III}$ on
$(t-K^{2\beta}\lambda(t)^{2},t)$.
Estimate of $\mathrm{I}$. Direct calculation gives
$\displaystyle\mathcal{P}$ $\displaystyle:=$
$\displaystyle\int_{B_{L\lambda(s)}^{c}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}|y|^{2-n}dy$
$\displaystyle\lesssim$
$\displaystyle\left(t-s\right)^{-\frac{n-2}{2}}\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}\int_{\frac{L\lambda(s)}{\sqrt{t-s}}}^{+\infty}e^{-c\left(\frac{|x|}{\sqrt{t-s}}-r\right)^{2}}rdr.$
Recall that $K\lambda(t)/2\leq|x|\leq 4K\lambda(t)$ and $t-s\geq
K^{2\beta}\lambda(t)^{2}$. There are two cases.
* •
If $L\lambda(s)\geq 6K\lambda(t)$,
$\mathcal{P}\lesssim\left(t-s\right)^{-\frac{n-2}{2}}\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}.$
* •
If $L\lambda(s)\leq 6K\lambda(t)$,
$\displaystyle\mathcal{P}$ $\displaystyle\lesssim$
$\displaystyle\left(t-s\right)^{-\frac{n-2}{2}}\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}$
$\displaystyle\lesssim$
$\displaystyle\left(t-s\right)^{-\frac{n-2-\gamma}{2}}L^{-\gamma}\lambda(s)^{-\gamma}.$
Hence
$\displaystyle\mathrm{I}$ $\displaystyle\lesssim$
$\displaystyle|x|^{-\gamma}\int_{(-81/100,t-\lambda(t)^{2\mu})\cap\\{L\lambda(s)\geq
6K\lambda(t)\\}}(t-s)^{-\frac{n-2-\gamma}{2}}\lambda(s)^{\frac{n-4}{2}}|\lambda^{\prime}(s)|$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\times\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}$
$\displaystyle+$ $\displaystyle
L^{-\gamma}|x|^{-\gamma}\int_{(-81/100,t-\lambda(t)^{2\mu})\cap\\{L\lambda(s)\leq
6K\lambda(t)\\}}(t-s)^{-\frac{n-2}{2}+\gamma}\lambda(s)^{\frac{n-4}{2}-\gamma}|\lambda^{\prime}(s)|.$
Because
$\displaystyle(t-s)^{-\frac{n-2-\gamma}{2}}\lambda(s)^{\frac{n-6}{2}}\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}$
$\displaystyle=$
$\displaystyle(t-s)^{-\frac{n+2-2\gamma}{4}}\left[\frac{\lambda(s)}{\sqrt{t-s}}\right]^{\frac{n-6}{2}}\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}$
$\displaystyle\lesssim$ $\displaystyle
L^{-\frac{n-6}{2}-\gamma}\lambda(s)^{-\gamma}(t-s)^{-\frac{n+2}{4}+\gamma},$
and since $L\lambda(s)\geq 6K\lambda(t)$ on this part of the domain, the first
integral is controlled by
$\displaystyle
L^{-\frac{n-6}{2}}\left[K\lambda(t)\right]^{-2\gamma}\int_{-81/100}^{t-\lambda(t)^{2\mu}}(t-s)^{-\frac{n+2}{4}+\gamma}|\lambda(s)\lambda^{\prime}(s)|$
$\displaystyle\lesssim$ $\displaystyle
L^{-\frac{n-6}{2}}K^{-2\gamma}\lambda(t)^{-\frac{n-2}{2}\mu+2(\mu-1)\gamma}.$
In the last step we used only the estimate $|\lambda\lambda^{\prime}|\leq C$.
For the second integral, by the estimate $|\lambda\lambda^{\prime}|\leq C$
again, we obtain
$\displaystyle
L^{-\gamma}|x|^{-\gamma}\int_{(-81/100,t-\lambda(t)^{2\mu})\cap\\{L\lambda(s)\leq
6K\lambda(t)\\}}(t-s)^{-\frac{n-2}{2}+\gamma}\lambda(s)^{\frac{n-4}{2}-\gamma}|\lambda^{\prime}(s)|$
$\displaystyle\lesssim$ $\displaystyle
L^{-\frac{n-6}{2}}K^{\frac{n-6}{2}-2\gamma}\lambda(t)^{\frac{n-6}{2}-2\gamma}\int_{-81/100}^{t-\lambda(t)^{2\mu}}(t-s)^{-\frac{n-2}{2}+\gamma}$
$\displaystyle\lesssim$ $\displaystyle
L^{-\frac{n-6}{2}}K^{\frac{n-6}{2}-2\gamma}\lambda(t)^{-(n-4)\mu+\frac{n}{2}-3+2(\mu-1)\gamma}.$
Because
$-(n-4)\mu+\frac{n}{2}-3+2(\mu-1)\gamma>-\frac{n-2}{2}\mu+2(\mu-1)\gamma,$
combining these two estimates, we get
$\mathrm{I}\lesssim
L^{-\frac{n-6}{2}}K^{-2\gamma}\lambda(t)^{-\frac{n-2}{2}\mu+2(\mu-1)\gamma}.$
Estimate of $\mathrm{II}$. As in the previous case, we have
$\displaystyle\mathrm{II}$ $\displaystyle\lesssim$
$\displaystyle|x|^{-\gamma}\int_{(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})\cap\\{L\lambda(s)\geq
6K\lambda(t)\\}}(t-s)^{-\frac{n-2-\gamma}{2}}\lambda(s)^{\frac{n-4}{2}}|\lambda^{\prime}(s)|$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\times\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}$
$\displaystyle+$ $\displaystyle
L^{-\gamma}|x|^{-\gamma}\int_{(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})\cap\\{L\lambda(s)\leq
6K\lambda(t)\\}}(t-s)^{-\frac{n-2}{2}+\gamma}\lambda(s)^{\frac{n-4}{2}-\gamma}|\lambda^{\prime}(s)|$
$\displaystyle\lesssim$ $\displaystyle
L^{-\frac{n-6}{2}}\left[K\lambda(t)\right]^{-2\gamma}\int_{t-\lambda(t)^{2\mu}}^{t-K^{2\beta}\lambda(t)^{2}}(t-s)^{-\frac{n+2}{4}+\gamma}|\lambda(s)\lambda^{\prime}(s)|$
$\displaystyle+$ $\displaystyle
L^{-\frac{n-6}{2}}K^{\frac{n-6}{2}-2\gamma}\lambda(t)^{\frac{n-6}{2}-2\gamma}\int_{t-\lambda(t)^{2\mu}}^{t-K^{2\beta}\lambda(t)^{2}}(t-s)^{-\frac{n-2}{2}+\gamma}|\lambda(s)\lambda^{\prime}(s)|$
$\displaystyle\lesssim$ $\displaystyle
L^{-\frac{n-6}{2}}\left[K^{-\frac{n-2}{2}\beta+2(\beta-1)\gamma}+K^{-\frac{n-2}{2}\beta-\left(\frac{n-6}{2}-2\gamma\right)\left(\beta-1\right)}\right]$
$\displaystyle\quad\times\lambda(t)^{-\frac{n-2}{2}}\|\lambda\lambda^{\prime}\|_{L^{\infty}(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})}$
$\displaystyle\lesssim$ $\displaystyle
K^{-\frac{n-2}{2}\beta+2(\beta-1)\gamma}\lambda(t)^{-\frac{n-2}{2}}\|\lambda\lambda^{\prime}\|_{L^{\infty}(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})}.$
Estimate of $\mathrm{III}$. By (14.21),
$\displaystyle\mathrm{III}$ $\displaystyle\lesssim$
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\int_{B_{1}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}$
$\displaystyle\quad\quad\quad\quad\quad\times\lambda(s)^{\frac{n-4}{2}}|\lambda^{\prime}(s)||y|^{2-n}dyds$
$\displaystyle=$
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\lambda(s)^{\frac{n-4}{2}}|\lambda^{\prime}(s)|$
$\displaystyle\quad\quad\quad\quad\times\left[\underbrace{\int_{B_{1}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}|y|^{2-n}dy}_{\mbox{Lemma
\ref{lem heat kernel integral 1}}}\right]ds$ $\displaystyle\lesssim$
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(|x|+\sqrt{t-s}\right)^{2-n}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{2\gamma}\lambda(s)^{\frac{n-4}{2}}|\lambda^{\prime}(s)|ds$
$\displaystyle\lesssim$
$\displaystyle\left[K\lambda(t)\right]^{2-n}K^{2(\beta-1)\gamma}\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\lambda(s)^{\frac{n-4}{2}}|\lambda^{\prime}(s)|ds$
$\displaystyle\lesssim$ $\displaystyle
K^{2-n+2\beta+2(\beta-1)\gamma}\lambda(t)^{-\frac{n-2}{2}}\|\lambda\lambda^{\prime}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}.$
Because $n>6$ and $\beta$ is sufficiently close to $1$, we have
$2-n+2\beta+2(\beta-1)\gamma<-\frac{n-2}{2}\beta+2(\beta-1)\gamma.$
Adding up estimates for $\mathrm{I}$, $\mathrm{II}$ and $\mathrm{III}$ we
finish the proof. ∎
###### Lemma 14.4 (Estimate on $\overline{\phi}_{4}$).
For $K\lambda(t)/2\leq|x|\leq 4K\lambda(t)$,
$\overline{\phi}_{4}(x,t)\lesssim\lambda(t)^{-\frac{n-2}{2}\mu+2(\mu-1)\gamma}+K^{-\frac{n-2}{2}(\beta-1)+2(\beta-1)\gamma}\left[K\lambda(t)\right]^{-\frac{n-2}{2}}\|\lambda\xi^{\prime}\|_{L^{\infty}(t-\lambda(t)^{2\mu},t)}.$
###### Proof.
We still have the heat kernel representation
$\displaystyle\overline{\phi}_{4}(x,t)$ $\displaystyle\lesssim$
$\displaystyle\int_{-81/100}^{t}\int_{B_{L\lambda(s)}^{c}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}$
$\displaystyle\quad\quad\quad\quad\times\lambda(s)^{\frac{n-2}{2}}|\xi^{\prime}(s)||y|^{1-n}dyds$
$\displaystyle=$
$\displaystyle\int_{-81/100}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\lambda(s)^{\frac{n-2}{2}}|\xi^{\prime}(s)|$
$\displaystyle\quad\times\left[\int_{B_{L\lambda(s)}^{c}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}|y|^{1-n}dy\right]ds.$
We still divide this integral into three parts, $\mathrm{I}$ being on the
interval $(-81/100,t-\lambda(t)^{2\mu})$, $\mathrm{II}$ being on the interval
$(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})$, and $\mathrm{III}$ on the
interval $(t-K^{2\beta}\lambda(t)^{2},t)$.
Estimate of $\mathrm{I}$. Direct calculation gives
$\displaystyle\mathcal{P}$ $\displaystyle:=$
$\displaystyle\int_{B_{L\lambda(s)}^{c}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}|y|^{1-n}dy$
$\displaystyle\lesssim$
$\displaystyle\left(t-s\right)^{-\frac{n-1}{2}}\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}\int_{\frac{L\lambda(s)}{\sqrt{t-s}}}^{+\infty}e^{-c\left(\frac{|x|}{\sqrt{t-s}}-r\right)^{2}}dr.$
Recall that $K\lambda(t)/2\leq|x|\leq 4K\lambda(t)$ and $t-s\geq
K^{2\beta}\lambda(t)^{2}$. There are two cases.
* •
If $L\lambda(s)\geq 6K\lambda(t)$,
$\mathcal{P}\lesssim\left(t-s\right)^{-\frac{n-1}{2}}\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}.$
* •
If $L\lambda(s)\leq 6K\lambda(t)$,
$\mathcal{P}\lesssim\left(t-s\right)^{-\frac{n-1-\gamma}{2}}L^{-\gamma}\lambda(s)^{-\gamma}.$
Plugging these estimates into the formula for $\mathrm{I}$, we obtain
$\displaystyle\mathrm{I}$ $\displaystyle\lesssim$
$\displaystyle|x|^{-\gamma}\int_{(-81/100,t-\lambda(t)^{2\mu})\cap\\{L\lambda(s)\geq
6K\lambda(t)\\}}(t-s)^{-\frac{n-1-\gamma}{2}}\lambda(s)^{\frac{n-2}{2}}|\xi^{\prime}(s)|$
$\displaystyle\quad\quad\quad\quad\quad\quad\times\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}$
$\displaystyle+$ $\displaystyle
L^{-\gamma}|x|^{-\gamma}\int_{(-81/100,t-\lambda(t)^{2\mu})\cap\\{L\lambda(s)\leq
6K\lambda(t)\\}}(t-s)^{-\frac{n-1}{2}+\gamma}\lambda(s)^{\frac{n-2}{2}-\gamma}|\xi^{\prime}(s)|.$
Because
$\displaystyle(t-s)^{-\frac{n-1-\gamma}{2}}\lambda(s)^{\frac{n-4}{2}}\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}$
$\displaystyle=$
$\displaystyle(t-s)^{-\frac{n+2}{4}+\frac{\gamma}{2}}\left[\frac{\lambda(s)}{\sqrt{t-s}}\right]^{\frac{n-4}{2}}\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}$
$\displaystyle\lesssim$ $\displaystyle
L^{-\gamma}\lambda(s)^{-\gamma}(t-s)^{-\frac{n+2}{4}+\gamma},$
the first integral is controlled by
$\displaystyle\left[K\lambda(t)\right]^{-2\gamma}\int_{-81/100}^{t-\lambda(t)^{2\mu}}(t-s)^{-\frac{n+2}{4}+\gamma}|\lambda(s)\xi^{\prime}(s)|$
$\displaystyle\lesssim$ $\displaystyle
K^{-2\gamma}\lambda(t)^{-\frac{n-2}{2}\mu+2(\mu-1)\gamma}.$
In the last step we used only the estimate $|\lambda\xi^{\prime}|\leq C$.
Similarly, for the second integral, we have
$\displaystyle
L^{-\gamma}|x|^{-\gamma}\int_{(-81/100,t-\lambda(t)^{2\mu})\cap\\{L\lambda(s)\leq
6K\lambda(t)\\}}(t-s)^{-\frac{n-1}{2}+\gamma}\lambda(s)^{\frac{n-2}{2}-\gamma}|\xi^{\prime}(s)|$
$\displaystyle\lesssim$ $\displaystyle
L^{-\frac{n-4}{2}}K^{\frac{n-4}{2}-2\gamma}\lambda(t)^{\frac{n-4}{2}-2\gamma}\int_{-81/100}^{t-\lambda(t)^{2\mu}}(t-s)^{-\frac{n-1}{2}+\gamma}$
$\displaystyle\lesssim$ $\displaystyle
L^{-\frac{n-4}{2}}K^{\frac{n-4}{2}-2\gamma}\lambda(t)^{\frac{n-4}{2}-(n-3)\mu+2(\mu-1)\gamma}.$
Because
$-\frac{n-2}{2}\mu+2(\mu-1)\gamma<\frac{n-4}{2}-(n-3)\mu+2(\mu-1)\gamma,$
we get
$\mathrm{I}\lesssim\lambda(t)^{-\frac{n-2}{2}\mu+2(\mu-1)\gamma}.$
Estimate of $\mathrm{II}$. We have
$\displaystyle\mathrm{II}$ $\displaystyle\lesssim$
$\displaystyle|x|^{-\gamma}\int_{(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})\cap\\{L\lambda(s)\geq
6K\lambda(t)\\}}(t-s)^{-\frac{n-1-\gamma}{2}}\lambda(s)^{\frac{n-2}{2}}|\xi^{\prime}(s)|$
$\displaystyle\quad\quad\quad\quad\quad\quad\times\left[1+\frac{\sqrt{t-s}}{L\lambda(s)}\right]^{\gamma}e^{-c\frac{L^{2}\lambda(s)^{2}}{t-s}}$
$\displaystyle+$ $\displaystyle
L^{-\gamma}|x|^{-\gamma}\int_{(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})\cap\\{L\lambda(s)\leq
6K\lambda(t)\\}}(t-s)^{-\frac{n-1}{2}+\gamma}\lambda(s)^{\frac{n-2}{2}-\gamma}|\xi^{\prime}(s)|$
$\displaystyle\lesssim$
$\displaystyle\left[K\lambda(t)\right]^{-2\gamma}\int_{t-\lambda(t)^{2\mu}}^{t-K^{2\beta}\lambda(t)^{2}}(t-s)^{-\frac{n+2}{4}+\gamma}|\lambda(s)\xi^{\prime}(s)|$
$\displaystyle+$ $\displaystyle
L^{-\frac{n-4}{2}}K^{\frac{n-4}{2}-2\gamma}\lambda(t)^{\frac{n-4}{2}-2\gamma}\int_{t-\lambda(t)^{2\mu}}^{t-K^{2\beta}\lambda(t)^{2}}(t-s)^{-\frac{n-1}{2}+\gamma}|\lambda(s)\xi^{\prime}(s)|$
$\displaystyle\lesssim$
$\displaystyle\left[K^{-\frac{n-2}{2}\beta+2(\beta-1)\gamma}+L^{-\frac{n-4}{2}}K^{-\beta(n-3-2\gamma)+\frac{n-4}{2}-2\gamma}\right]$
$\displaystyle\quad\quad\times\lambda(t)^{-\frac{n-2}{2}}\|\lambda\xi^{\prime}\|_{L^{\infty}(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})}$
$\displaystyle\lesssim$ $\displaystyle
K^{-\frac{n-2}{2}\beta+2(\beta-1)\gamma}\lambda(t)^{-\frac{n-2}{2}}\|\lambda\xi^{\prime}\|_{L^{\infty}(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})}.$
Estimate of $\mathrm{III}$. As in the $\overline{\phi}_{3}$ case, we have
$\displaystyle\mathrm{III}$ $\displaystyle=$
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\lambda(s)^{\frac{n-2}{2}}|\xi^{\prime}(s)|$
$\displaystyle\quad\quad\quad\quad\times\left[\int_{B_{1}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}|y|^{1-n}dy\right]ds$
$\displaystyle\lesssim$
$\displaystyle|x|^{1-n}\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{2\gamma}\lambda(s)^{\frac{n-2}{2}}|\xi^{\prime}(s)|ds$
$\displaystyle\lesssim$ $\displaystyle
K^{1-n+2\beta+2(\beta-1)\gamma}\lambda(t)^{-\frac{n-2}{2}}\|\lambda\xi^{\prime}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}.$
Because
$1-n+2\beta+2(\beta-1)\gamma<-\frac{n-2}{2}\beta+2(\beta-1)\gamma,$
adding up estimates for $\mathrm{I}$, $\mathrm{II}$ and $\mathrm{III}$ we
finish the proof. ∎
###### Lemma 14.5 (Estimate on $\overline{\phi}_{5}$).
For $K\lambda(t)/2\leq|x|\leq 4K\lambda(t)$,
$\displaystyle\overline{\phi}_{5}(x,t)$ $\displaystyle\lesssim$
$\displaystyle\left[K^{-\left(\frac{n-2}{2}-\gamma\right)(\beta-1)}+K^{\frac{n-2}{2}+2\beta+2(\beta-1)\gamma}e^{-cK}+\frac{K^{\frac{n-2}{2}+2\beta+(\beta-1)\gamma}}{e^{cL}}\right]$
$\displaystyle\quad\quad\quad\quad\times\left[K\lambda(t)\right]^{-\frac{n-2}{2}}\left\|\left(\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\lambda\lambda^{\prime},\lambda\xi^{\prime}\right)\right\|_{L^{\infty}(t-\lambda(t)^{2\mu},t)}$
$\displaystyle+$ $\displaystyle\lambda(t)^{-\frac{n-2}{2}\mu+(\mu-1)\gamma}.$
###### Proof.
By the heat kernel representation, we have
$\displaystyle\overline{\phi}_{5}(x,t)$ $\displaystyle\leq$
$\displaystyle\int_{-81/100}^{t}\int_{B_{L\lambda(s)}^{c}}G(x,t;y,s)\lambda(s)^{-\frac{n+2}{2}}e^{-c\frac{|y|}{\lambda(s)}}$
$\displaystyle\times\left[\frac{|a(s)|}{\lambda(s)}+\left|\lambda(s)a^{\prime}(s)-\mu_{0}\frac{a(s)}{\lambda(s)}\right|+\frac{|a(s)|}{\lambda(s)}\left(|\lambda(s)\lambda^{\prime}(s)|+|\lambda(s)\xi^{\prime}(s)|\right)\right]dyds.$
We still divide this integral into three parts, $\mathrm{I}$ being on the
interval $(-81/100,t-\lambda(t)^{2\mu})$, $\mathrm{II}$ being on the interval
$(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})$, and $\mathrm{III}$ on
$(t-K^{2\beta}\lambda(t)^{2},t)$.
Estimate of $\mathrm{I}$. For $\mathrm{I}$, just using the estimate
$\left\|\left(\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\frac{a}{\lambda}\left(|\lambda\lambda^{\prime}|+|\lambda\xi^{\prime}|\right)\right)\right\|_{L^{\infty}(-1,t)}\leq
C,$
by the operator bound on $G$ (see (14.19)) and Hölder's inequality, we get
$\displaystyle\mathrm{I}$ $\displaystyle\lesssim$
$\displaystyle\int_{-81/100}^{t-\lambda(t)^{2\mu}}\int_{B_{1}}G(x,t;y,s)\lambda(s)^{-\frac{n+2}{2}}e^{-c\frac{|y|}{\lambda(s)}}dyds$
$\displaystyle\lesssim$
$\displaystyle\int_{-81/100}^{t-\lambda(t)^{2\mu}}(t-s)^{-\frac{n+2}{4}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}ds$
$\displaystyle\lesssim$
$\displaystyle\lambda(t)^{-\frac{n-2}{2}\mu+(\mu-1)\gamma}.$
Estimate of $\mathrm{II}$. For $\mathrm{II}$, in the same way we obtain
$\displaystyle\mathrm{II}$ $\displaystyle\lesssim$
$\displaystyle\left\|\left(\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\frac{a}{\lambda}\left(|\lambda\lambda^{\prime}|+|\lambda\xi^{\prime}|\right)\right)\right\|_{L^{\infty}(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})}$
$\displaystyle\quad\times\int_{t-\lambda(t)^{2\mu}}^{t-K^{2\beta}\lambda(t)^{2}}\int_{B_{1}}G(x,t;y,s)\lambda(s)^{-\frac{n+2}{2}}e^{-c\frac{|y|}{\lambda(s)}}dyds$
$\displaystyle\lesssim$ $\displaystyle\left\|\left(\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\lambda\lambda^{\prime},\lambda\xi^{\prime}\right)\right\|_{L^{\infty}(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})}$
$\displaystyle\quad\times\int_{-1}^{t-K^{2\beta}\lambda(t)^{2}}(t-s)^{-\frac{n+2}{4}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}ds$
$\displaystyle\lesssim$ $\displaystyle
K^{-\frac{n-2}{2}\beta+(\beta-1)\gamma}\lambda(t)^{-\frac{n-2}{2}}\left\|\left(\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\lambda\lambda^{\prime},\lambda\xi^{\prime}\right)\right\|_{L^{\infty}(t-\lambda(t)^{2\mu},t-K^{2\beta}\lambda(t)^{2})}.$
Estimate of $\mathrm{III}$. In this case, first note that
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\int_{B_{L\lambda(s)}^{c}}G(x,t;y,s)\lambda(s)^{-\frac{n+2}{2}}e^{-c\frac{|y|}{\lambda(s)}}dyds$
$\displaystyle\lesssim$
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\int_{B_{L\lambda(s)}^{c}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\times\lambda(s)^{-\frac{n+2}{2}}e^{-c\frac{|y|}{\lambda(s)}}dyds$
$\displaystyle\lesssim$
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\lambda(s)^{-\frac{n+2}{2}}$
$\displaystyle\quad\times\left[\underbrace{\int_{B_{L\lambda(s)}^{c}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}e^{-c\frac{|y|}{\lambda(s)}}dy}_{\mbox{
Lemma \ref{lem heat kernel integral 2}}}\right]ds$ $\displaystyle\lesssim$
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\lambda(s)^{-\frac{n+2}{2}}$
$\displaystyle\quad\times\left[e^{-c\frac{|x|}{\lambda(s)}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}+\left(1+\frac{\lambda(s)}{\sqrt{t-s}}\right)^{n+\gamma}e^{-cL-c\frac{|x|^{2}}{t-s}}\right]ds$
$\displaystyle\lesssim$
$\displaystyle\left[K^{2\beta+2(\beta-1)\gamma}e^{-cK}+K^{2\beta+(\beta-1)\gamma}e^{-cL}\right]\lambda(t)^{-\frac{n-2}{2}},$
where we have used (14.21) to deduce the last inequality.
Then similar to the estimate of $\mathrm{II}$, we get
$\displaystyle\mathrm{III}$ $\displaystyle\lesssim$
$\displaystyle\left[K^{2\beta+2(\beta-1)\gamma}e^{-cK}+K^{2\beta+(\beta-1)\gamma}e^{-cL}\right]\lambda(t)^{-\frac{n-2}{2}}$
$\displaystyle\quad\quad\quad\quad\times\left\|\left(\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\lambda\lambda^{\prime},\lambda\xi^{\prime}\right)\right\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}.$
Adding up estimates for $\mathrm{I}$, $\mathrm{II}$ and $\mathrm{III}$ we
finish the proof. ∎
#### 14.3. Estimate of $\mathcal{O}(t)$
As an application of the pointwise estimates on
$\overline{\phi}_{1},\cdots,\overline{\phi}_{5}$ obtained in the previous
subsection, we give an estimate on $\mathcal{O}(t)$.
###### Proposition 14.6.
There exist two constants $C(K,L)$ and $\sigma(K,L)\ll 1$ (depending on $K$
and $L$) such that for any $t\in(-81/100,81/100)$,
$\mathcal{O}(t)\leq
C(K,L)\lambda(t)^{2+2\kappa}+\sigma(K,L)\left\|\left(\mathcal{I},\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\lambda\lambda^{\prime},\lambda\xi^{\prime}\right)\right\|_{L^{\infty}(t-\lambda(t)^{2\mu},t)}.$
(14.24)
###### Proof.
First by the five lemmas in the previous subsection, we obtain
$\displaystyle\left[K\lambda(t)\right]^{\frac{n-2}{2}}\sup_{B_{2K\lambda(t)}(\xi(t))\setminus
B_{K\lambda(t)}(\xi(t))}|\phi(x,t)|$ $\displaystyle\leq$
$\displaystyle\left[K\lambda(t)\right]^{\frac{n-2}{2}}\sup_{B_{2K\lambda(t)}\setminus
B_{K\lambda(t)}}\left[\overline{\phi}_{1}(x,t)+\overline{\phi}_{2}(x,t)+\overline{\phi}_{3}(x,t)+\overline{\phi}_{4}(x,t)+\overline{\phi}_{5}(x,t)\right]$
$\displaystyle\leq$ $\displaystyle
C(K,L)\lambda(t)^{2+2\kappa}+\sigma(K,L)\left\|\left(\mathcal{I},\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\lambda\lambda^{\prime},\lambda\xi^{\prime}\right)\right\|_{L^{\infty}(t-\lambda(t)^{2\mu},t)}.$
Next, by a rescaling and standard interior gradient estimates for the heat
equation, we get
$\displaystyle\left[K\lambda(t)\right]^{\frac{n}{2}}\sup_{B_{2K\lambda(t)}(\xi(t))\setminus
B_{K\lambda(t)}(\xi(t))}|\nabla\phi_{1}(x,t)|$ $\displaystyle\lesssim$
$\displaystyle\left[K\lambda(t)\right]^{\frac{n-2}{2}}\sup_{\left(B_{4K\lambda(t)}(\xi(t))\setminus
B_{K\lambda(t)/2}(\xi(t))\right)\times(t-K^{2}\lambda(t)^{2},t)}|\phi_{1}(y,s)|$
$\displaystyle\lesssim$ $\displaystyle
K^{\frac{n-2}{2}-\gamma}\lambda(t)^{\frac{n-2}{2}-\gamma}.$
The same estimate holds for $\phi_{2}$, if we note that $F_{2}\equiv 0$ in
$B_{4K\lambda(t)}(\xi(t))\setminus B_{K\lambda(t)/2}(\xi(t))$.
In the same way, for $\phi_{3}$ we have
$\displaystyle\left[K\lambda(t)\right]^{\frac{n}{2}}\sup_{B_{2K\lambda(t)}(\xi(t))\setminus
B_{K\lambda(t)}(\xi(t))}|\nabla\phi_{3}(x,t)|$ $\displaystyle\lesssim$
$\displaystyle\left[K\lambda(t)\right]^{\frac{n-2}{2}}\sup_{\left(B_{4K\lambda(t)}(\xi(t))\setminus
B_{K\lambda(t)/2}(\xi(t))\right)\times(t-K^{2}\lambda(t)^{2},t)}\left[|\phi_{3}(y,s)|+\left(K\lambda(t)\right)^{2}|F_{3}(y,s)|\right]$
$\displaystyle\lesssim$ $\displaystyle
K^{-\frac{n-2}{2}(\beta-1)+2(\beta-1)\gamma}\|\lambda\lambda^{\prime}\|_{L^{\infty}(t-\lambda(t)^{2\mu},t)}+K^{\frac{n-2}{2}}\lambda(t)^{2+2\kappa}$
$\displaystyle+$ $\displaystyle
K^{-\frac{n-6}{2}}\left\|\lambda\lambda^{\prime}\right\|_{L^{\infty}(t-K^{2}\lambda(t)^{2},t)}.$
The same estimates hold for $\phi_{4}$ and $\phi_{5}$.
Putting these gradient estimates together and arguing as in (14.3), we get the
estimate on the remaining terms in $\mathcal{O}(t)$. ∎
### 15\. A Harnack inequality for $\lambda$
In this section, we combine the estimates in the previous two sections to
establish a Harnack inequality for $\lambda$.
For any $t\in(-81/100,81/100)$, denote
$\mathcal{D}(t):=\|\phi_{in}(t)\|_{\alpha;1+\theta}+\left|\lambda(t)\xi^{\prime}(t)\right|+\left|\lambda(t)\lambda^{\prime}(t)\right|+\left|\lambda(t)a^{\prime}(t)-\mu_{0}\frac{a(t)}{\lambda(t)}\right|+\left|\frac{a(t)}{\lambda(t)}\right|.$
For $r\in(0,81/100)$, let
$g(r):=\sup_{-r<t<r}\mathcal{D}(t)$
and
$M(r):=\sup_{-r<t<r}\lambda(t)^{2},\quad m(r):=\inf_{-r<t<r}\lambda(t)^{2}.$
By Proposition 13.4 and Lemma 13.6, there exist three universal constants $C$,
$T\gg 1$ and $\sigma\ll 1$ such that
$\mathcal{D}(t)\leq\sigma\sup_{[t-T\lambda(t)^{2},t+T\lambda(t)^{2}]}\mathcal{D}(s)+C\sup_{[t-T\lambda(t)^{2},t+T\lambda(t)^{2}]}\mathcal{O}(s).$
(15.1)
By Proposition 14.6,
$\mathcal{O}(t)\leq
C(K,L)\lambda(t)^{2+2\kappa}+\sigma(K,L)\sup_{[t-\lambda(t)^{2\mu},t]}\mathcal{D}(s).$
(15.2)
Combining these two estimates we get
$\mathcal{D}(t)\leq\frac{1}{2}\sup_{[t-2\lambda(t)^{2\mu},t+2K\lambda(t)^{2}]}\mathcal{D}(s)+C\sup_{[t-2\lambda(t)^{2\mu},t+2K\lambda(t)^{2}]}\lambda(s)^{2+2\kappa}.$
(15.3)
For any $r\in(0,1)$, taking supremum over
$t\in\left(-r+2M(r)^{\mu},r-2M(r)^{\mu}\right)$ in (15.3), we obtain
$g\left(r-2M(r)^{\mu}\right)\leq\frac{1}{2}g(r)+CM(r)^{1+\kappa}.$ (15.4)
For any $r_{1}<r_{2}$, an iteration of this inequality from $r_{2}$ to $r_{1}$
in $\lfloor(r_{2}-r_{1})/M(r_{2})^{\mu}\rfloor$ steps leads to
$g(r_{1})\leq
g(r_{2})e^{-c(r_{2}-r_{1})M(r_{2})^{-\mu}}+CM(r_{2})^{1+\kappa}.$
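The iteration can be sketched as follows (write $h:=2M(r_{2})^{\mu}$ and note that $M(r)\leq M(r_{2})$ for $r\leq r_{2}$, so each application of (15.4) shrinks the radius by at most $h$):

```latex
\begin{aligned}
g(r_{2}-Nh)&\leq 2^{-N}g(r_{2})+CM(r_{2})^{1+\kappa}\sum_{j=0}^{N-1}2^{-j}
\leq 2^{-N}g(r_{2})+2CM(r_{2})^{1+\kappa},\\
2^{-N}&=e^{-N\log 2}\leq e^{-c(r_{2}-r_{1})M(r_{2})^{-\mu}}
\quad\text{for }N=\lfloor(r_{2}-r_{1})/h\rfloor.
\end{aligned}
```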
For any $r\in(0,1)$, $M(r)\ll 1$, so
$e^{-cM(r)^{-\mu/2}}\lesssim M(r)^{1+\kappa}.$
Hence by choosing $r_{2}=r$ and $r_{1}=r-M(r)^{\mu/2}$, we get
$g\left(r-M(r)^{\mu/2}\right)\lesssim M(r)^{1+\kappa}.$
By this estimate, integrating $\lambda\lambda^{\prime}$ on
$[-r+M(r)^{\mu/2},r-M(r)^{\mu/2}]$, we find a constant $C_{H}$ such that
$M\left(r-M(r)^{\mu/2}\right)\leq
m\left(r-M(r)^{\mu/2}\right)+C_{H}M(r)^{1+\kappa}.$ (15.5)
###### Lemma 15.1.
There exists an $r_{H}\in[1/2,3/4]$ such that
$M\left(r_{H}-M(r_{H})^{\mu/2}\right)\geq 2C_{H}M(r_{H})^{1+\kappa}.$
###### Proof.
Assume that for any $r\in[1/2,3/4]$,
$M\left(r-M(r)^{\mu/2}\right)\leq 2C_{H}M(r)^{1+\kappa}.$ (15.6)
Set
$r_{0}:=3/4,\quad a_{0}:=M(r_{0})$
and for $k\in\mathbb{N}$,
$r_{k+1}:=r_{k}-a_{k}^{\mu/2},\quad a_{k+1}:=M(r_{k+1}).$
Then (15.6) says
$a_{k+1}\leq 2C_{H}a_{k}^{1+\kappa}.$
By our assumption, $a_{0}\ll 1$. An induction gives
$a_{k}\leq a_{0}^{k+1},\quad r_{k}\geq
r_{0}-\sum_{j=0}^{+\infty}a_{0}^{(j+1)\frac{\mu}{2}}\geq 1/2.$
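One way to make the induction explicit (a sketch; the normalization $b_{k}$ is introduced only for this remark): set $b_{k}:=(2C_{H})^{1/\kappa}a_{k}$, so that the recursion becomes self-improving:

```latex
b_{k+1}=(2C_{H})^{\frac{1}{\kappa}}a_{k+1}\leq(2C_{H})^{1+\frac{1}{\kappa}}a_{k}^{1+\kappa}=b_{k}^{1+\kappa},
\qquad\text{hence}\qquad b_{k}\leq b_{0}^{(1+\kappa)^{k}}.
```

Since $a_{0}\ll 1$ we have $b_{0}<1$, and $(1+\kappa)^{k}\geq 1+k\kappa$ shows that $b_{k}$, hence $a_{k}$, decays at least geometrically, which is enough to conclude $r_{k}\geq 1/2$ and $a_{k}\to 0$.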
As a consequence, $M(1/2)\leq a_{k}$ for every $k$, and hence $M(1/2)=0$. This is impossible. ∎
For this $r_{H}$, (15.5) can be written as a Harnack inequality
$m(r_{H})\geq\left[1-C_{H}M(r_{H})^{\kappa}\right]M(r_{H})\geq\frac{1}{2}M(r_{H}).$
(15.7)
After a scaling $u(x,t)\mapsto r_{H}^{(n-2)/2}u(r_{H}x,r_{H}^{2}t)$, from here
to Section 18 we work in the following setting:
* •
we denote
$\varepsilon:=\lambda(0),$ (15.8)
and after a translation in the spatial direction, we assume $\xi(0)=0$;
* •
by the above Harnack inequality (15.7), for any $t\in[-1,1]$,
$\lambda(t)=\varepsilon+O\left(\varepsilon^{1+\kappa}\right)$ (15.9)
and
$\|\phi_{in}(t)\|+\left|\lambda(t)\lambda^{\prime}(t)\right|+\left|\lambda(t)\xi^{\prime}(t)\right|+\left|\lambda(t)a^{\prime}(t)-\mu_{0}\frac{a(t)}{\lambda(t)}\right|+\left|\frac{a(t)}{\lambda(t)}\right|\lesssim\varepsilon^{1+\kappa}.$
(15.10)
* •
integrating $\xi^{\prime}$ and using the above estimate, we obtain
$|\xi(t)|\lesssim\varepsilon^{\kappa},\quad\mbox{for any}~{}~{}t\in[-1,1].$
(15.11)
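The last bullet is a one-line computation (a sketch, using $\xi(0)=0$, (15.9) and (15.10)):

```latex
|\xi(t)|=\left|\int_{0}^{t}\xi^{\prime}(s)\,ds\right|
\leq 2\sup_{s\in[-1,1]}\frac{|\lambda(s)\xi^{\prime}(s)|}{\lambda(s)}
\lesssim\frac{\varepsilon^{1+\kappa}}{\varepsilon}=\varepsilon^{\kappa}.
```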
### 16\. Inner problem and outer problem again
In this section, under the assumptions (15.8)-(15.10), we prove
###### Proposition 16.1.
For any $\gamma>0$, there exists a constant $C(\gamma)$ such that
$g\left(\frac{8}{9}\right)\leq C(\gamma)\varepsilon^{\frac{n-2}{2}-\gamma}.$
(16.1)
This improves the estimates on the scaling parameters. A more
precise estimate on $\phi$ also follows:
###### Proposition 16.2.
For any $\gamma>0$, there exists a constant $C(\gamma)>0$ such that
$|\phi(x,t)|\leq
C(\gamma)\left(\varepsilon+|x-\xi(t)|\right)^{-\gamma}\quad\mbox{in}\quad
Q_{7/8}.$ (16.2)
To prove these propositions, estimates on $\overline{\phi}_{2}$,
$\overline{\phi}_{3}$, $\overline{\phi}_{4}$ and $\overline{\phi}_{5}$ in
Section 14 are not sufficient. We need to upgrade them.
###### Lemma 16.3 (Estimate on $\overline{\phi}_{2}$).
If $|x|\geq K\lambda(t)/2$, then
$\overline{\phi}_{2}(x,t)\lesssim\left[K^{-\left(\frac{n-2}{2}-\gamma\right)\beta}\frac{\varepsilon^{\gamma}}{|x|^{\gamma}}+L^{\frac{n-2}{2}-\gamma}K^{\beta\gamma}\frac{\varepsilon^{n-2}}{|x|^{n-2}}\right]\varepsilon^{-\frac{n-2}{2}}\|\mathcal{I}\|_{L^{\infty}(-1,t)}.$
(16.3)
###### Proof.
As in the proof of Lemma 14.2, we have the heat kernel representation for
$\overline{\phi}_{2}$. But now we just divide the integral into two parts,
$\mathrm{I}$ being on $(-1,t-K^{2\beta}\lambda(t)^{2})$ and $\mathrm{II}$ on
$(t-K^{2\beta}\lambda(t)^{2},t)$.
For $\mathrm{I}$, similar to (14.2), we have
$\displaystyle\mathrm{I}$ $\displaystyle\lesssim$
$\displaystyle|x|^{-\gamma}\int_{-1}^{t-K^{2\beta}\lambda(t)^{2}}(t-s)^{-\frac{n+2}{4}+\frac{\gamma}{2}}\mathcal{I}(s)ds$
$\displaystyle\lesssim$ $\displaystyle
K^{-\left(\frac{n-2}{2}-\gamma\right)\beta}\lambda(t)^{-\frac{n-2}{2}+\gamma}|x|^{-\gamma}\|\mathcal{I}\|_{L^{\infty}(-1,t-K^{2\beta}\lambda(t)^{2})}$
$\displaystyle\lesssim$ $\displaystyle
K^{-\left(\frac{n-2}{2}-\gamma\right)\beta}\varepsilon^{-\frac{n-2}{2}+\gamma}|x|^{-\gamma}\|\mathcal{I}\|_{L^{\infty}(-1,t-K^{2\beta}\lambda(t)^{2})}.$
The estimate for $\mathrm{II}$ is almost the same as the one for
$\mathrm{III}$ in the proof of Lemma 14.2, that is,
$\displaystyle\mathrm{II}$ $\displaystyle\lesssim$
$\displaystyle\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\frac{\mathcal{I}(s)}{L^{\alpha}\lambda(s)^{\alpha}}\left(\frac{|\lambda^{\prime}(s)|}{\lambda(s)}+\frac{|\xi^{\prime}(s)|}{L\lambda(s)}+\frac{1}{L^{2}\lambda(s)^{2}}\right)\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}$
$\displaystyle\times\left[\int_{B_{2L\lambda(s)}\setminus
B_{L\lambda(s)}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}dy\right]ds$
$\displaystyle\lesssim$
$\displaystyle\|\mathcal{I}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}L^{\frac{n-2}{2}}\lambda(t)^{\frac{n-2}{2}}\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{L\lambda(t)}\right)^{\gamma}ds$
$\displaystyle\lesssim$
$\displaystyle\|\mathcal{I}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}L^{\frac{n-2}{2}}\lambda(t)^{\frac{n-2}{2}}\left(1+\frac{K^{\beta}}{L}\right)^{\gamma}\int_{t-K^{2\beta}\lambda(t)^{2}}^{t}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x|^{2}}{t-s}}ds$
$\displaystyle\lesssim$
$\displaystyle\|\mathcal{I}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}L^{\frac{n-2}{2}}\lambda(t)^{\frac{n-2}{2}}\left(1+\frac{K^{\beta}}{L}\right)^{\gamma}|x|^{2-n}$
$\displaystyle\lesssim$ $\displaystyle
L^{\frac{n-2}{2}-\gamma}K^{\beta\gamma}\varepsilon^{\frac{n-2}{2}}|x|^{2-n}\|\mathcal{I}\|_{L^{\infty}(t-K^{2\beta}\lambda(t)^{2},t)}.$
Putting these two estimates together, we obtain (16.3). ∎
###### Corollary 16.4.
If $|x|\geq K\lambda(t)/2$, then
$\overline{\phi}_{2}(x,t)\lesssim\left[K^{-\frac{n-2}{2}(\beta-1)+\beta\gamma}+\frac{L^{\frac{n-2}{2}-\gamma}}{K^{\frac{n-2}{2}-\beta\gamma}}\right]\left(K\varepsilon\right)^{-\frac{n-2}{2}}\|\mathcal{I}\|_{L^{\infty}(-1,t)}.$
(16.4)
###### Lemma 16.5 (Estimate on $\overline{\phi}_{3}$).
If $|x|\geq K\lambda(t)/2$, then
$\overline{\phi}_{3}(x,t)\lesssim\varepsilon^{\frac{n-6}{2}}|x|^{4-n}\|\lambda\lambda^{\prime}\|_{L^{\infty}(-1,t)}.$
(16.5)
###### Proof.
As in the proof of Lemma 14.3, we still have the heat kernel representation
$\displaystyle\overline{\phi}_{3}(x,t)$ $\displaystyle\lesssim$
$\displaystyle\int_{-81/100}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\lambda(s)^{\frac{n-4}{2}}\left|\lambda^{\prime}(s)\right|$
$\displaystyle\times\left[\int_{B_{1}}(t-s)^{-\frac{n}{2}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}e^{-c\frac{|x-y|^{2}}{t-s}}|y|^{2-n}dy\right]ds$
$\displaystyle\lesssim$
$\displaystyle\varepsilon^{\frac{n-6}{2}}\|\lambda\lambda^{\prime}\|_{L^{\infty}(-1,t)}$
$\displaystyle\times\int_{-81/100}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{2\gamma}\left(\sqrt{t-s}+|x|\right)^{2-n}ds$
$\displaystyle\lesssim$
$\displaystyle\varepsilon^{\frac{n-6}{2}}\|\lambda\lambda^{\prime}\|_{L^{\infty}(-1,t)}|x|^{4-n}.$
Here we have used Lemma B.1 to deduce the second inequality. ∎
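For the reader's convenience, we record the elementary computation behind the last inequality (this is essentially Lemma B.1): writing $\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{2\gamma}=|x|^{-2\gamma}\left(\sqrt{t-s}+|x|\right)^{2\gamma}$ and substituting $t-s=|x|^{2}u$,
$\int_{-81/100}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{2\gamma}\left(\sqrt{t-s}+|x|\right)^{2-n}ds\leq|x|^{-2\gamma}\int_{0}^{+\infty}\left(\sqrt{\sigma}+|x|\right)^{2-n+2\gamma}d\sigma=|x|^{4-n}\int_{0}^{+\infty}\left(1+\sqrt{u}\right)^{2-n+2\gamma}du\lesssim|x|^{4-n},$
where the last integral converges because $\frac{n-2}{2}-\gamma>1$.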
###### Corollary 16.6.
If $|x|\geq K\lambda(t)/2$, then
$\overline{\phi}_{3}(x,t)\lesssim
K^{-\frac{n-6}{2}}\left(K\varepsilon\right)^{-\frac{n-2}{2}}\|\lambda\lambda^{\prime}\|_{L^{\infty}(-1,t)}.$
(16.6)
###### Lemma 16.7 (Estimate on $\overline{\phi}_{4}$).
If $|x|\geq K\lambda(t)/2$, then
$\overline{\phi}_{4}(x,t)\lesssim\varepsilon^{\frac{n-4}{2}}|x|^{3-n}\|\lambda\xi^{\prime}\|_{L^{\infty}(-1,t)}.$
(16.7)
###### Proof.
As in the proof of Lemma 14.4, we still have the heat kernel representation
$\displaystyle\overline{\phi}_{4}(x,t)$ $\displaystyle\lesssim$
$\displaystyle\int_{-81/100}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}\lambda(s)^{\frac{n-2}{2}}\left|\xi^{\prime}(s)\right|$
$\displaystyle\times\left[\int_{B_{1}}(t-s)^{-\frac{n}{2}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}e^{-c\frac{|x-y|^{2}}{t-s}}|y|^{1-n}dy\right]ds$
$\displaystyle\lesssim$
$\displaystyle\varepsilon^{\frac{n-4}{2}}\|\lambda\xi^{\prime}\|_{L^{\infty}(-1,t)}$
$\displaystyle\times\int_{-81/100}^{t}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{2\gamma}\left(\sqrt{t-s}+|x|\right)^{1-n}ds$
$\displaystyle\lesssim$
$\displaystyle\varepsilon^{\frac{n-4}{2}}\|\lambda\xi^{\prime}\|_{L^{\infty}(-1,t)}|x|^{3-n}.$
Here we have used Lemma B.1 to deduce the second inequality. ∎
###### Corollary 16.8.
If $|x|\geq K\lambda(t)/2$, then
$\overline{\phi}_{4}(x,t)\lesssim
K^{-\frac{n-4}{2}}\left(K\varepsilon\right)^{-\frac{n-2}{2}}\|\lambda\xi^{\prime}\|_{L^{\infty}(-1,t)}.$
(16.8)
###### Lemma 16.9 (Estimate on $\overline{\phi}_{5}$).
If $|x|\geq K\lambda(t)/2$, then
$\overline{\phi}_{5}(x,t)\lesssim\varepsilon^{\frac{n-4}{2}}|x|^{3-n}\left\|\left(\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\lambda\lambda^{\prime},\lambda\xi^{\prime}\right)\right\|_{L^{\infty}(-1,t)}.$
(16.9)
###### Proof.
For $|y|\geq L\lambda(s)$,
$\lambda(s)^{-\frac{n+2}{2}}e^{-c|y|/\lambda(s)}\lesssim\lambda(s)^{\frac{n-4}{2}}|y|^{1-n}.$
Hence
$\displaystyle\partial_{t}\overline{\phi}_{5}-\Delta\overline{\phi}_{5}-\left[C+\frac{\delta}{|x|^{2}}\right]\overline{\phi}_{5}+\xi^{\prime}(t)\cdot\nabla\overline{\phi}_{5}$
$\displaystyle\lesssim$
$\displaystyle\left[\frac{|a(t)|}{\lambda(t)}+\Big{|}\lambda(t)a^{\prime}(t)-\mu_{0}\frac{a(t)}{\lambda(t)}\Big{|}+|\lambda(t)\lambda^{\prime}(t)|+|\lambda(t)\xi^{\prime}(t)|\right]\lambda(t)^{\frac{n-4}{2}}|x|^{1-n}\chi_{B_{L\lambda(t)}^{c}}.$
Then we can proceed as in the previous lemma to conclude. ∎
###### Corollary 16.10.
If $|x|\geq K\lambda(t)/2$, then
$\overline{\phi}_{5}(x,t)\lesssim
K^{-\frac{n-4}{2}}\left(K\varepsilon\right)^{-\frac{n-2}{2}}\left\|\left(\frac{a}{\lambda},\lambda
a^{\prime}-\mu_{0}\frac{a}{\lambda},\lambda\lambda^{\prime},\lambda\xi^{\prime}\right)\right\|_{L^{\infty}(-1,t)}.$
(16.10)
Combining Corollary 16.4-Corollary 16.10 with Lemma 14.1, proceeding as in the
proof of Proposition 14.6 (to estimate $|\nabla\phi_{1}|$-$|\nabla\phi_{5}|$),
we find two constants $C(K,L)$ and $\sigma(K,L)\ll 1$ such that, for any
$t\in(-81/100,81/100)$,
$\mathcal{O}(t)\leq
C(K,L)\varepsilon^{\frac{n-2}{2}-\gamma}+\sigma(K,L)\sup_{-1<s<t}\mathcal{D}(s).$
(16.11)
Combining this inequality with Proposition 13.4, we obtain (compare this with
(15.4))
$\mathcal{D}(t)\leq\frac{1}{2}\sup_{s\in[t-C\varepsilon^{2},t+C\varepsilon^{2}]}\mathcal{D}(s)+C\varepsilon^{\frac{n-2}{2}-\gamma}.$
(16.12)
Here we have used the fact that, by (15.9) and the definition of inner time
variable $\tau$ in Section 13, we have
$\frac{d\tau}{dt}\sim\varepsilon^{-2}.$ (16.13)
As in Section 15, an iteration of (16.12) from $r=1$ to $r=8/9$ gives
(16.1). This finishes the proof of Proposition 16.1. Plugging (16.1) into
Lemma 16.3-Lemma 16.9, we get Proposition 16.2.
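Schematically (with constants suppressed), the iteration runs as follows: applying (16.12) $m$ times, shrinking the time interval by $C\varepsilon^{2}$ at each step, we get
$\sup_{|t|<r-mC\varepsilon^{2}}\mathcal{D}(t)\leq 2^{-m}\sup_{|t|<r}\mathcal{D}(t)+C\varepsilon^{\frac{n-2}{2}-\gamma}\sum_{k=0}^{m-1}2^{-k}\leq 2^{-m}\sup_{|t|<1}\mathcal{D}(t)+2C\varepsilon^{\frac{n-2}{2}-\gamma}.$
Passing from $r=1$ to $r=8/9$ allows $m\sim\varepsilon^{-2}$ steps, so the first term is of order $e^{-c\varepsilon^{-2}}$ and is absorbed into the second.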
### 17\. Improved estimates on $\phi$
From now on we need to write down the dependence on $i$ explicitly. For
$u_{i}$, the decomposition in Section 12 reads as
$\phi_{i}(x,t):=u_{i}(x,t)-W_{\xi_{i}(t),\lambda_{i}(t)}(x)-a_{i}(t)Z_{0,\xi_{i}(t),\lambda_{i}(t)}(x).$
(17.1)
The parameters satisfy the assumptions (15.8)-(15.10). In particular,
$\varepsilon_{i}:=\lambda_{i}(0),\quad\xi_{i}(0)=0.$
In this section we prove some uniform (in $\varepsilon_{i}$) estimates on
$\phi_{i}$.
For simplicity of notation, we will denote
$W_{i}(x,t):=W_{\xi_{i}(t),\lambda_{i}(t)}(x),\quad
Z_{\ast,i}(x,t):=\left(Z_{j,\xi_{i}(t),\lambda_{i}(t)}(x)\right)_{j=0,\cdots,n+1}.$
#### 17.1. Uniform $L^{\infty}$ bound
The first one is a uniform $L^{\infty}$ bound on $\phi_{i}$.
###### Proposition 17.1.
There exists a universal constant $C$ (independent of $i$) such that
$|\phi_{i}(x,t)|\leq C\quad\mbox{in}\quad Q_{6/7}.$ (17.2)
First by (16.2), we get
$u_{i}(x,t)\lesssim\left(\frac{\varepsilon_{i}}{\varepsilon_{i}^{2}+|x-\xi_{i}(t)|^{2}}\right)^{\frac{n-2}{2}}+|x-\xi_{i}(t)|^{-\gamma}.$
(17.3)
Hence $\overline{\phi}_{i,1}$ satisfies, for some constant $C_{0}>0$,
$\partial_{t}\overline{\phi}_{i,1}-\Delta\overline{\phi}_{i,1}+\xi_{i}^{\prime}(t)\cdot\nabla\overline{\phi}_{i,1}\leq
C_{0}\left[\varepsilon_{i}^{2}|x|^{-4}\chi_{B_{L\varepsilon_{i}}^{c}}+|x|^{-(p-1)\gamma}\right]\overline{\phi}_{i,1}.$
(17.4)
As in the proof of Lemma 14.1, for each $\varepsilon>0$, consider the problem
$\left\\{\begin{aligned} &-\Delta
w_{\varepsilon}=C_{0}\varepsilon^{2}|x|^{-4}\chi_{B_{L\varepsilon}^{c}}w_{\varepsilon}\quad&\mbox{in
}~{}~{}B_{1},\\\ &w_{\varepsilon}=1\quad&\mbox{on }~{}~{}\partial
B_{1}.\end{aligned}\right.$ (17.5)
###### Lemma 17.2.
There exists a radially symmetric solution $w_{\varepsilon}$ of (17.5).
Moreover, if $L$ is universally large, for any $\varepsilon>0$,
$1\leq w_{\varepsilon}\leq 2\quad\mbox{in }B_{1}.$
###### Proof.
Consider the initial value problem
$\left\\{\begin{aligned}
&w^{\prime\prime}(s)+(n-2)w^{\prime}(s)+C_{0}e^{-2s}w(s)=0\quad\mbox{in
}~{}~{}[\log L,+\infty),\\\ &w(\log L)=1,\quad w^{\prime}(\log
L)=0.\end{aligned}\right.$
Global existence and uniqueness of the solution follows from standard ordinary
differential equation theory.
If $w>0$ in $[\log L,s_{0}]$ for some $s_{0}>\log L$, then $w^{\prime}<0$, and
consequently, $0<w<1$ in $(\log L,s_{0})$. Integrating the equation of $w$ we
get
$\displaystyle w^{\prime}(s)$ $\displaystyle=$ $\displaystyle-
C_{0}e^{-(n-2)s}\int_{\log L}^{s}e^{(n-4)\tau}w(\tau)d\tau$
$\displaystyle\geq$ $\displaystyle-\frac{C_{0}}{n-4}e^{-2s}.$
Hence, if $L$ is large enough,
$w(s_{0})\geq 1-\frac{C_{0}}{n-4}L^{-2}\geq\frac{1}{2}.$
This holds for any $s_{0}$ provided that $w>0$ in $[\log L,s_{0}]$, so
$w(s)\geq 1/2$ for any $s\geq\log L$.
For each $\varepsilon>0$, the function
$w_{\varepsilon}(x):=\frac{w\left(\log(\varepsilon^{-1}|x|)\right)}{w\left(\log\varepsilon^{-1}\right)}$
satisfies all of the requirements. ∎
###### Corollary 17.3.
There exists a constant $C$ independent of $\varepsilon$ such that
$|\nabla w_{\varepsilon}|\leq C\quad\mbox{in }~{}~{}B_{1}.$
###### Proof.
By definition, we have
$|\nabla
w_{\varepsilon}(x)|=\frac{w^{\prime}\left(\log(\varepsilon^{-1}|x|)\right)}{w\left(\log\varepsilon^{-1}\right)}\frac{\varepsilon}{|x|}.$
If $|x|\leq L\varepsilon$, $w^{\prime}=0$ and there is nothing to prove. For
$|x|\geq L\varepsilon$, by the integral estimate on $w^{\prime}$ in the proof
of Lemma 17.2,
$\left|w^{\prime}\left(\log(\varepsilon^{-1}|x|)\right)\right|\leq\frac{C_{0}}{n-4}.$
Because $w\left(\log\varepsilon^{-1}\right)\geq 1/2$, we get
$|\nabla w_{\varepsilon}(x)|\leq\frac{2C_{0}}{n-4}.\qed$
The function $\varphi_{i}:=\overline{\phi}_{i,1}/w_{\varepsilon_{i}}$
satisfies
$\partial_{t}\varphi_{i}-w_{\varepsilon_{i}}^{-2}\mbox{div}\left(w_{\varepsilon_{i}}^{2}\nabla\varphi_{i}\right)+\xi_{i}^{\prime}\cdot\nabla\varphi_{i}\leq\left[C_{0}|x|^{-(p-1)\gamma}-\xi_{i}^{\prime}\cdot\nabla
w_{\varepsilon_{i}}\right]\varphi_{i}.$
By Lemma 17.2, this is a uniformly parabolic equation. Because $\gamma$ is
very small, $|x|^{-(p-1)\gamma}\in L^{2n}(B_{1})$. By (10.1) and Corollary
17.3, $\xi_{i}^{\prime}\cdot\nabla w_{\varepsilon_{i}}$ are uniformly bounded
in $L^{\infty}(Q_{6/7})$. Then standard Moser iteration gives
###### Lemma 17.4.
There exists a universal constant $C$ independent of $i$ such that
$\sup_{Q_{6/7}}\varphi_{i}\lesssim\int_{Q_{1}}\varphi_{i}.$ (17.7)
Combining this lemma with Lemma 17.2 and the definition of
$\overline{\phi}_{i,1}$ (see (14.13)), we get
$\|\phi_{i,1}\|_{L^{\infty}(Q_{6/7})}\leq C.$
Starting from this estimate, following the iteration argument in Section 16,
we deduce that
$g(5/6)\lesssim\varepsilon_{i}^{\frac{n-2}{2}}.$ (17.8)
Substituting this estimate into Lemma 16.3-Lemma 16.9, and noting that now the
heat kernel for $\partial_{t}-\Delta-\xi_{i}^{\prime}\cdot\nabla-V_{i}$ enjoys
the standard Gaussian bound
$G_{i}(x,t;y,s)\lesssim(t-s)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}},$
i.e. we can take $\gamma=0$, we obtain
###### Lemma 17.5.
For any $x\in B_{K\varepsilon_{i}}^{c}$ and $t\in(-5/6,5/6)$, we have
$|\phi_{i,2}(x,t)|+|\phi_{i,3}(x,t)|+|\phi_{i,4}(x,t)|+|\phi_{i,5}(x,t)|\lesssim_{K}\frac{\varepsilon_{i}^{n-4}}{|x|^{n-4}}.$
Combining (17.7), (17.8) with this lemma, we finish the proof of Proposition
17.1.
Once we know that $\phi_{i}$ are uniformly bounded in $Q_{1}$ (after a scaling
of the domain), the above estimates are improved to
###### Proposition 17.6.
$\displaystyle\|\phi_{i,1}\|_{L^{\infty}(Q_{5/6})}\lesssim\|\phi_{i}\|_{L^{\infty}(Q_{1})},$
$\displaystyle
g(5/6)\lesssim\varepsilon_{i}^{\frac{n-2}{2}}\left(\|\phi_{i}\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right),$
and for any $(x,t)\in Q_{5/6}$,
$|\phi_{i,2}(x,t)|+|\phi_{i,3}(x,t)|+|\phi_{i,4}(x,t)|+|\phi_{i,5}(x,t)|\lesssim_{K}\frac{\varepsilon_{i}^{n-4}}{|x|^{n-4}}\left(\|\phi_{i}\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
Here the term $e^{-c\varepsilon_{i}^{-2}}$ appears because when we apply Lemma
13.6 to estimate $a/\lambda$, there is a boundary term
$\frac{a_{i}(1)}{\lambda_{i}(1)}e^{-c\left[\tau(1)-\tau(t)\right]},$
where, because $d\tau/dt\sim\varepsilon_{i}^{-2}$,
$\tau(1)-\tau(t)\geq c\varepsilon_{i}^{-2},\quad\mbox{for any }~{}~{}t<25/36.$
Because $\xi_{i}(0)=0$, by the estimate of $g(5/6)$ in Proposition 17.6, now
(15.11) is upgraded to
$|\xi_{i}(t)|\lesssim\varepsilon_{i}^{\frac{n-4}{2}},\quad\mbox{for
any}~{}~{}t\in[-25/36,25/36].$ (17.9)
#### 17.2. Gradient estimates
In this subsection we establish a uniform $C^{1+\theta,(1+\theta)/2}$ estimate
on $\phi_{i}$.
With the estimates in Proposition 17.6, now we have
###### Lemma 17.7.
In $Q_{5/6}$,
$\left|\partial_{t}\phi_{i}-\Delta\phi_{i}\right|\lesssim\left(\frac{\varepsilon_{i}^{2}}{\varepsilon_{i}^{4}+|x|^{4}}+1\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
(17.10)
###### Proof.
By the estimate of $g(5/6)$ in Proposition 17.6, (17.9) can be upgraded to
$|\xi_{i}(t)|\lesssim\varepsilon_{i}^{\frac{n-4}{2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right),\quad\mbox{for any
}~{}~{}t\in[-25/36,25/36].$ (17.11)
Then by the definition of $\phi_{i}$, we have
$\displaystyle u_{i}(x,t)$ $\displaystyle\lesssim$ $\displaystyle
W_{i}(x,t)+|a_{i}(t)|Z_{0,i}(x,t)+|\phi_{i}(x,t)|$ $\displaystyle\lesssim$
$\displaystyle\left(\frac{\varepsilon_{i}}{\varepsilon_{i}^{2}+|x|^{2}}\right)^{\frac{n-2}{2}}+e^{-c|x|/\varepsilon_{i}}+1.$
Therefore
$\displaystyle\left|u_{i}^{p}-W_{i}^{p}\right|$ $\displaystyle\lesssim$
$\displaystyle\left(u_{i}^{p-1}+W_{i}^{p-1}\right)\left(|\phi_{i}|+|a_{i}(t)|Z_{0,i}\right)$
$\displaystyle\lesssim$
$\displaystyle\left[\left(\frac{\varepsilon_{i}}{\varepsilon_{i}^{2}+|x|^{2}}\right)^{\frac{n-2}{2}}+e^{-c|x|/\varepsilon_{i}}+1\right]^{\frac{4}{n-2}}\left(1+e^{-c|x|/\varepsilon_{i}}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right)$
$\displaystyle\lesssim$
$\displaystyle\left(\frac{\varepsilon_{i}^{2}}{\varepsilon_{i}^{4}+|x|^{4}}+1\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
By (17.8), we have
$\displaystyle\left|\lambda_{i}^{\prime}(t)Z_{n+1,i}(x,t)\right|$
$\displaystyle\lesssim$
$\displaystyle\varepsilon_{i}^{-2}\left(1+\frac{|x-\xi_{i}(t)|}{\varepsilon_{i}}\right)^{2-n}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right)$
$\displaystyle\lesssim$
$\displaystyle\frac{\varepsilon_{i}^{n-4}}{\left(\varepsilon_{i}+|x|\right)^{n-2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right)$
$\displaystyle\lesssim$
$\displaystyle\frac{\varepsilon_{i}^{2}}{\varepsilon_{i}^{4}+|x|^{4}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
Similar estimates hold for
$(-a_{i}^{\prime}+\mu_{0}a_{i}\lambda_{i}^{-2})Z_{0i}$,
$\xi_{ji}^{\prime}\cdot Z_{ji}$, $j=1,\cdots,n$ and $a_{i}\partial_{t}Z_{0i}$.
∎
###### Lemma 17.8.
For any $(x,t)\in Q_{4/5}$,
$|\nabla\phi_{i}(x,t)|\lesssim\left(\frac{\varepsilon_{i}^{2}}{\varepsilon_{i}^{3}+|x|^{3}}+1\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
(17.12)
###### Proof.
We consider three cases separately.
Case 1. If $1/4\leq|x|\leq 4/5$, this estimate (in fact, a bound on the
Lipschitz seminorm) follows from standard interior gradient estimates.
Case 2. If $|x|\leq\varepsilon_{i}$, by looking at the inner equation in
Section 13 and using Proposition 17.6, we obtain
$|\nabla\varphi_{i}(y,\tau)|\lesssim\varepsilon_{i}^{\frac{n-2}{2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
Scaling back to $\phi_{i}$, this is
$|\nabla\phi_{i}(x,t)|\lesssim\varepsilon_{i}^{-1}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
In fact, this gives a full Lipschitz estimate in space-time.
Case 3. Finally, we consider the remaining case where
$\varepsilon_{i}\leq|x|\leq 1/4$. Take the decomposition
$\phi_{i}=\phi_{h,i}+\phi_{n,i}$, where
$\left\\{\begin{aligned}
&\partial_{t}\phi_{h,i}-\Delta\phi_{h,i}=0,\quad&\mbox{in }~{}~{}Q_{3/4},\\\
&\phi_{h,i}=\phi_{i},\quad&\mbox{on
}~{}~{}\partial^{p}Q_{3/4}.\end{aligned}\right.$
By standard interior gradient estimates for the heat equation, we have
$\|\nabla\phi_{h,i}\|_{L^{\infty}(Q_{2/3})}\lesssim\|\phi_{i}\|_{L^{\infty}(\partial^{p}Q_{3/4})}.$
(17.13)
Next, for the non-homogeneous part $\phi_{n,i}$, by the heat kernel
representation, we have
$\displaystyle|\nabla\phi_{n,i}(x,t)|$ $\displaystyle=$
$\displaystyle\left|\int_{-\frac{9}{16}}^{t}\int_{B_{\frac{3}{4}}}\nabla_{x}G(x,t;y,s)\left(\partial_{t}-\Delta\right)\phi_{i}(y,s)dyds\right|$
$\displaystyle\lesssim$
$\displaystyle\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right)\int_{-\frac{9}{16}}^{t}\int_{B_{\frac{3}{4}}}\frac{|x-y|}{(t-s)^{\frac{n+2}{2}}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(\frac{\varepsilon_{i}^{2}}{\varepsilon_{i}^{4}+|y|^{4}}+1\right)dyds$
$\displaystyle\lesssim$
$\displaystyle\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right)\int_{-\frac{9}{16}}^{t}\int_{B_{\frac{3}{4}}}(t-s)^{-\frac{n+1}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(\frac{\varepsilon_{i}^{2}}{|y|^{4}}+1\right)dyds$
$\displaystyle\lesssim$
$\displaystyle\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right)\int_{-\frac{9}{16}}^{t}(t-s)^{-\frac{1}{2}}\left[\frac{\varepsilon_{i}^{2}}{\left(\sqrt{t-s}+|x|\right)^{4}}+1\right]ds$
$\displaystyle\lesssim$
$\displaystyle\left(\frac{\varepsilon_{i}^{2}}{|x|^{3}}+1\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
Combining this estimate with (17.13), we get (17.12) in this case. ∎
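For completeness, the last time integral in Case 3 can be computed as in Section 16: substituting $t-s=|x|^{2}u$,
$\int_{-\frac{9}{16}}^{t}(t-s)^{-\frac{1}{2}}\frac{\varepsilon_{i}^{2}}{\left(\sqrt{t-s}+|x|\right)^{4}}ds\leq\frac{\varepsilon_{i}^{2}}{|x|^{3}}\int_{0}^{+\infty}u^{-\frac{1}{2}}\left(1+\sqrt{u}\right)^{-4}du\lesssim\frac{\varepsilon_{i}^{2}}{|x|^{3}},$
while the remaining part $\int_{-\frac{9}{16}}^{t}(t-s)^{-\frac{1}{2}}ds$ is bounded by a universal constant.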
Next we upgrade this Lipschitz estimate in spatial variables to a full
Lipschitz estimate in space-time variables, using the following technical
lemma.
###### Lemma 17.9.
Given a constant $\sigma>0$, suppose $\psi\in C^{\infty}(Q_{1}^{-})$ satisfies
1. (1)
$|\nabla\psi|\leq\sigma$ in $Q_{1}^{-}$;
2. (2)
$|\partial_{t}\psi-\Delta\psi|\leq\sigma$ in $Q_{1}^{-}$.
Then the Lipschitz seminorm (with respect to the parabolic distance) of $\psi$
in $Q_{1/2}^{-}$ satisfies
$|\psi|_{Lip(Q_{1/2}^{-})}\lesssim\sigma.$
###### Proof.
We need only to prove that, for any $x\in B_{1/2}$ and $-1/4<t_{1}<t_{2}<0$,
$|\psi(x,t_{1})-\psi(x,t_{2})|\lesssim\sigma\sqrt{t_{2}-t_{1}}.$ (17.14)
Denote $h:=\sqrt{t_{2}-t_{1}}$ and define
$\widetilde{\psi}(y,s):=\frac{1}{h}\left[\psi(x+hy,t_{2}+h^{2}s)-\psi(x,t_{2})\right].$
It satisfies
1. (1)
$|\nabla\widetilde{\psi}|\leq\sigma$ in $Q_{2}^{-}$;
2. (2)
for any $(y,s)\in Q_{2}^{-}$,
$|\partial_{t}\widetilde{\psi}(y,s)-\Delta\widetilde{\psi}(y,s)|\leq h\sigma;$
(17.15)
3. (3)
because $\widetilde{\psi}(0,0)=0$, by integrating (1) we obtain
$\sup_{y\in B_{2}}|\widetilde{\psi}(y,0)|\leq 2\sigma.$ (17.16)
Fix a function $\eta\in C_{0}^{\infty}(B_{2})$. Multiplying (17.15) by $\eta$
and integrating by parts, we obtain
$\displaystyle\left|\frac{d}{ds}\int_{B_{2}}\widetilde{\psi}\eta\right|$
$\displaystyle\leq$ $\displaystyle
h\sigma\int_{B_{2}}|\eta|+\int_{B_{2}}|\nabla\widetilde{\psi}||\nabla\eta|$
$\displaystyle\lesssim$ $\displaystyle\sigma.$
Combining this inequality with (17.16), we obtain
$\left|\int_{B_{2}}\widetilde{\psi}(y,s)\eta(y)\right|\lesssim\sigma\quad\mbox{
for any}~{}~{}s\in(-4,0).$
With the Lipschitz estimate in (1), this implies that
$\sup_{y\in B_{2}}|\widetilde{\psi}(y,s)|\lesssim\sigma\quad\mbox{ for
any}~{}~{}s\in(-4,0).$ (17.17)
In particular,
$|\widetilde{\psi}(0,-1)|\lesssim\sigma.$
Because $t_{2}-h^{2}=t_{1}$, scaling back to $\psi$ gives
$|\psi(x,t_{1})-\psi(x,t_{2})|=h|\widetilde{\psi}(0,-1)|\lesssim\sigma\sqrt{t_{2}-t_{1}}$,
which is (17.14). ∎
###### Proposition 17.10.
For any $(x,t)\in Q_{3/4}$,
$|\phi_{i}|_{Lip(Q_{|x|/4}(x,t))}\lesssim\left(\frac{\varepsilon_{i}^{2}}{\varepsilon_{i}^{3}+|x|^{3}}+1\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
(17.18)
###### Proof.
When $|x|<\varepsilon_{i}$ or $1/4<|x|<3/4$, this was already covered in the
proof of Lemma 17.8, see Case 1 and Case 2 therein.
If $\varepsilon_{i}<|x|<1/4$, denote $r:=|x|/2$ and let
$\widetilde{\phi}_{i}(y,s):=\phi_{i}\left(x+ry,t+r^{2}s\right).$
By Lemma 17.7 and Lemma 17.8, it satisfies the assumptions in Lemma 17.9 with
the constant
$\sigma\lesssim\left(\frac{\varepsilon_{i}^{2}}{r^{2}}+r\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
Hence this lemma implies that
$|\widetilde{\phi}_{i}|_{Lip(Q_{1/2}^{-})}\lesssim\left(\frac{\varepsilon_{i}^{2}}{r^{2}}+r\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
Scaling back to $\phi_{i}$, this is (17.18) in this case. ∎
#### 17.3. Estimate of time derivative
In this subsection we establish the following estimate on
$\partial_{t}\phi_{i}$. We can also obtain some estimates on
$\nabla^{2}\phi_{i}$, but we do not need it.
###### Proposition 17.11.
For any $(x,t)\in Q_{2/3}$,
$|\partial_{t}\phi_{i}(x,t)|\lesssim\left(\frac{\varepsilon_{i}^{2}}{\varepsilon_{i}^{4}+|x|^{4}}+\frac{1}{\varepsilon_{i}^{2}+|x|}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
(17.19)
###### Proof.
Take an arbitrary sequence of points $(x_{i},t_{i})\in Q_{2/3}$. Denote
$r_{i}:=|x_{i}|/2$.
Case 1. First assume
$\limsup_{i\to+\infty}\frac{r_{i}}{\varepsilon_{i}}<+\infty.$
In this case we work in the inner coordinates introduced in Section 13. By
(13.2) and (13.3), we have the following representation formulas:
$\left\\{\begin{aligned}
&\frac{\dot{a}_{i}(\tau)-\mu_{0}a_{i}(\tau)}{\lambda_{i}(\tau)}=\int_{\mathbb{R}^{n}}Z_{0}(y)E_{K,i}(y,\tau)dy,\\\
&\frac{\dot{\xi}_{i}(\tau)}{\lambda_{i}(\tau)}=-\int_{\mathbb{R}^{n}}Z_{j}(y)E_{K,i}(y,\tau)dy,\quad
j=1,\cdots,n,\\\
&\frac{\dot{\lambda}_{i}(\tau)}{\lambda_{i}(\tau)}=-\int_{\mathbb{R}^{n}}Z_{n+1}(y)E_{K,i}(y,\tau)dy.\end{aligned}\right.$
By the form of $E_{K}$ in Section 13, and the smallness of
$\left|\frac{\dot{a}_{i}-\mu_{0}a_{i}}{\lambda_{i}}\right|+\left|\frac{\dot{\xi}_{i}}{\lambda_{i}}\right|+\left|\frac{\dot{\lambda}_{i}}{\lambda_{i}}\right|,$
the above three equations are solved as
$\left(\frac{\dot{a}_{i}(\tau)-\mu_{0}a_{i}(\tau)}{\lambda_{i}(\tau)},\frac{\dot{\xi}_{i}(\tau)}{\lambda_{i}(\tau)},\frac{\dot{\lambda}_{i}(\tau)}{\lambda_{i}(\tau)}\right)=\mathcal{J}\left(\nabla\varphi_{i}(\tau),\varphi_{i}(\tau),\frac{a_{i}(\tau)}{\lambda_{i}(\tau)}\right),$
(17.20)
where $\mathcal{J}=(\mathcal{J}_{0},\cdots,\mathcal{J}_{n+1})$ is a vector
valued, nonlinear (but smooth) integral operator. Plugging this back into
the inner equation in Section 13, we get
$\displaystyle\partial_{\tau}\varphi_{i}-\Delta_{y}\varphi_{i}$
$\displaystyle=$
$\displaystyle\left(W+\varphi_{i}+a_{i}Z_{0}\right)^{p}-W^{p}+\mathcal{J}\cdot Z$
$\displaystyle+$
$\displaystyle\left(\mathcal{J}_{1},\cdots,\mathcal{J}_{n}\right)\cdot\nabla\varphi_{i}+\mathcal{J}_{n+1}\left(y\cdot\nabla\varphi_{i}+\frac{n-2}{2}\varphi_{i}\right)$
$\displaystyle+$
$\displaystyle\frac{a_{i}}{\lambda_{i}}\left[\left(\mathcal{J}_{1},\cdots,\mathcal{J}_{n}\right)\cdot\nabla
Z_{0}+\mathcal{J}_{n+1}\left(y\cdot\nabla
Z_{0}+\frac{n}{2}Z_{0}\right)\right].$
Starting from the $L^{\infty}$ bound of $\varphi_{i}$ (by rescaling the
estimates in Proposition 17.6) and bootstrapping parabolic estimates, we get
$\|\varphi_{i}\|_{C^{2+\theta,1+\theta/2}(B_{2K}\times[\tau-1,\tau])}\lesssim\varepsilon_{i}^{\frac{n-2}{2}},\quad\mbox{for
any}~{}~{}\tau.$ (17.21)
A byproduct of this estimate is the validity of (17.19) in this case.
Case 2. Next assume $\liminf_{i\to+\infty}r_{i}>0$. Substituting (17.21) into (17.20) gives, for any $\tau_{1}<\tau_{2}$,
$\displaystyle\left\|\left(\frac{\dot{a}_{i}-\mu_{0}a_{i}}{\lambda_{i}},\frac{\dot{\xi}_{i}}{\lambda_{i}},\frac{\dot{\lambda}_{i}}{\lambda_{i}}\right)\right\|_{C^{\theta/2}([\tau_{1},\tau_{2}])}\lesssim\varepsilon_{i}^{\frac{n-2}{2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
(17.22)
After scaling back to the original coordinates, by noting that
$d\tau/dt\sim\varepsilon_{i}^{-2}$, this estimate is transformed into
$\displaystyle\left\|\left(a_{i}^{\prime}-\mu_{0}\frac{a_{i}}{\lambda_{i}^{2}},\xi_{i}^{\prime},\lambda_{i}^{\prime}\right)\right\|_{C^{\theta/2}([-3/4,3/4])}\lesssim\varepsilon_{i}^{\frac{n-4}{2}-\theta}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
(17.23)
By Proposition 17.6, (17.23) and Proposition 17.10, we deduce that the terms
on the right hand side of (12.8) are uniformly bounded in
$C^{\theta,\theta/2}\left((B_{1}\setminus B_{1/8})\times(-1,1)\right)$. Hence
standard Schauder estimates for the heat equation give
$\sup_{\left((B_{2/3}\setminus
B_{1/4})\times(-4/9,4/9)\right)}|\partial_{t}\phi_{i}|\lesssim\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}.$
Case 3. In this case we assume
$\lim_{i\to+\infty}\left(r_{i}+\frac{\varepsilon_{i}}{r_{i}}\right)=0.$
Define
$\widetilde{u}_{i}(x,t):=r_{i}^{\frac{n-2}{2}}u_{i}\left(r_{i}x,t_{i}+r_{i}^{2}t\right).$
The decomposition in Section 12 can be transferred to $\widetilde{u}_{i}$ as
$\widetilde{u}_{i}=W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}+\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}+\widetilde{\phi}_{i},$
(17.24)
where
$\left\\{\begin{aligned}
&\widetilde{\xi}_{i}(t):=r_{i}^{-1}\xi_{i}(t_{i}+r_{i}^{2}t),\\\
&\widetilde{\lambda}_{i}(t):=r_{i}^{-1}\lambda_{i}(t_{i}+r_{i}^{2}t),\\\
&\widetilde{a}_{i}(t):=r_{i}^{-1}a_{i}(t_{i}+r_{i}^{2}t),\\\
&\widetilde{\phi}_{i}(x,t):=r_{i}^{(n-2)/2}\phi_{i}(r_{i}x,t_{i}+r_{i}^{2}t).\end{aligned}\right.$
Because for any $t\in[-3/4,3/4]$,
$r_{i}>2K\varepsilon_{i}>K\lambda_{i}(t),$
the orthogonal relation in Proposition 12.1 still holds between
$\widetilde{\phi}_{i}$ and
$Z_{j,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}$, $j=0,\cdots,n+1$.
Furthermore, the error equation (12.8) now reads as
$\displaystyle\partial_{t}\widetilde{\phi}_{i}-\Delta\widetilde{\phi}_{i}$
$\displaystyle=$
$\displaystyle\left(W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}+\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}+\widetilde{\phi}_{i}\right)^{p}-W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}^{p}-pW_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}^{p-1}\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}$
(17.25) $\displaystyle+$
$\displaystyle\left(-\widetilde{a}_{i}^{\prime}+\mu_{0}\frac{\widetilde{a}_{i}}{\widetilde{\lambda}_{i}^{2}},\widetilde{\xi}_{i}^{\prime},\widetilde{\lambda}_{i}^{\prime}\right)\cdot
Z_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}-\widetilde{a}_{i}\partial_{t}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}.$
By (17.11) and (17.23), we get the following estimates for those terms in the
right hand side of (17.25):
$\left\\{\begin{aligned}
&\|W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}\lesssim\left(\frac{\varepsilon_{i}}{r_{i}}\right)^{\frac{n-2}{2}};\\\
&\|\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}\lesssim\varepsilon_{i}^{\frac{n-2}{2}}e^{-c\frac{r_{i}}{\varepsilon_{i}}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right);\\\
&\|\widetilde{\phi}_{i}\|_{L^{\infty}((B_{1}\setminus
B_{1/4})\times(-4,4))}\lesssim
r_{i}^{\frac{n-2}{2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right);\\\
&\left\|\left(\widetilde{a}_{i}^{\prime}-\mu_{0}\widetilde{a}_{i}\widetilde{\lambda}_{i}^{-2},\widetilde{\xi}_{i}^{\prime},\widetilde{\lambda}_{i}^{\prime}\right)\cdot
Z_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\right\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}\lesssim\frac{\varepsilon_{i}^{n-4}}{r_{i}^{\frac{n}{2}-3}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right);\\\
&\left\|\widetilde{a}_{i}\partial_{t}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\right\|_{C^{\theta,\theta/2}((B_{2}\setminus
B_{1/4})\times(-4,4))}\lesssim\varepsilon_{i}^{n-2}e^{-c\frac{r_{i}}{\varepsilon_{i}}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).\end{aligned}\right.$
(17.26)
These estimates imply that in $\left(B_{2}\setminus
B_{1/4}\right)\times(-4,4)$,
$\widetilde{u}_{i}\lesssim\left(\frac{\varepsilon_{i}}{r_{i}}\right)^{\frac{n-2}{2}}+r_{i}^{\frac{n-2}{2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right)$
(17.27)
and
$\displaystyle\left|\left(W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}+\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}+\widetilde{\phi}_{i}\right)^{p}-W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}^{p}-pW_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}^{p-1}\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\right|$
$\displaystyle\lesssim$
$\displaystyle\left[W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}^{p-1}+\widetilde{u}_{i}^{p-1}\right]\left(\left|\widetilde{\phi}_{i}\right|+\left|\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\right|\right)$
$\displaystyle\lesssim$
$\displaystyle\left(\frac{\varepsilon_{i}^{2}}{r_{i}^{2}}+r_{i}^{2}\right)r_{i}^{\frac{n-2}{2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
By Proposition 17.10 and a rescaling, we know that the Lipschitz seminorm of
$\widetilde{\phi}_{i}$ in $\left(B_{2}\setminus B_{1/4}\right)\times(-4,4)$
is bounded by
$r_{i}^{\frac{n}{2}}\left(\frac{\varepsilon_{i}^{2}}{r_{i}^{3}}+1\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
By these estimates and standard $W^{2,p}$ estimates for the heat equation, we
obtain
$\displaystyle\int_{-3}^{3}\int_{B_{9/5}\setminus
B_{1/3}}|\partial_{t}\widetilde{\phi}_{i}|$ $\displaystyle\lesssim$
$\displaystyle\left[r_{i}^{\frac{n}{2}}\left(\frac{\varepsilon_{i}^{2}}{r_{i}^{3}}+1\right)+\varepsilon_{i}^{\frac{n-2}{2}}\left(\frac{\varepsilon_{i}}{r_{i}}\right)^{\frac{n-3}{2}}+r_{i}^{\frac{n-2}{2}}\left(\frac{\varepsilon_{i}^{2}}{r_{i}^{2}}+r_{i}^{2}\right)\right]$
(17.28)
$\displaystyle\quad\quad\times\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).$
Because
$\left(\partial_{t}-\Delta\right)\partial_{t}\widetilde{u}_{i}=p\widetilde{u}_{i}^{p-1}\partial_{t}\widetilde{u}_{i},$
and $\widetilde{u}_{i}$ are uniformly bounded in $\left(B_{9/5}\setminus
B_{1/3}\right)\times(-3,3)$, we get
$\displaystyle\left\|\partial_{t}\widetilde{u}_{i}\right\|_{L^{\infty}((B_{3/2}\setminus
B_{1/2})\times(-1,1))}$ $\displaystyle\lesssim$
$\displaystyle\int_{-3}^{3}\int_{B_{9/5}\setminus
B_{1/3}}|\partial_{t}\widetilde{u}_{i}|$ $\displaystyle\lesssim$
$\displaystyle\left(\int_{-3}^{3}\int_{B_{9/5}\setminus
B_{1/3}}|\partial_{t}\widetilde{\phi}_{i}|\right)+\|\partial_{t}W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}$
$\displaystyle\quad\quad\quad\quad+\left\|\partial_{t}\left(\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\right)\right\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))},$
where we have used the expansion from (17.24),
$\partial_{t}\widetilde{u}_{i}=\partial_{t}W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}+\partial_{t}\left(\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\right)+\partial_{t}\widetilde{\phi}_{i}.$
Similar to (17.26), we also have
$\left\\{\begin{aligned}
&\|\partial_{t}W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}\lesssim\varepsilon_{i}^{n-4}r_{i}^{-\frac{n}{2}-3}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right);\\\
&\left\|\partial_{t}\left(\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\right)\right\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}\lesssim\varepsilon_{i}^{\frac{n-2}{2}}e^{-c\frac{r_{i}}{\varepsilon_{i}}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{-c\varepsilon_{i}^{-2}}\right).\end{aligned}\right.$
Substituting these estimates and (17.28) into (17.3), we obtain
$\displaystyle\left\|\partial_{t}\widetilde{\phi}_{i}\right\|_{L^{\infty}((B_{3/2}\setminus
B_{1/2})\times(-1,1))}$ $\displaystyle\lesssim$
$\displaystyle\left\|\partial_{t}\widetilde{u}_{i}\right\|_{L^{\infty}((B_{3/2}\setminus
B_{1/2})\times(-1,1))}$ $\displaystyle+$
$\displaystyle\|\partial_{t}W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}+\left\|\partial_{t}\left(\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\right)\right\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}$ $\displaystyle\lesssim$
$\displaystyle\int_{-3}^{3}\int_{B_{9/5}\setminus
B_{1/3}}|\partial_{t}\widetilde{\phi}_{i}|$ $\displaystyle+$
$\displaystyle\|\partial_{t}W_{\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}+\left\|\partial_{t}\left(\widetilde{a}_{i}Z_{0,\widetilde{\xi}_{i},\widetilde{\lambda}_{i}}\right)\right\|_{L^{\infty}((B_{2}\setminus
B_{1/4})\times(-4,4))}$ $\displaystyle\lesssim$
$\displaystyle\left[r_{i}^{\frac{n}{2}}\left(\frac{\varepsilon_{i}^{2}}{r_{i}^{3}}+1\right)+\varepsilon_{i}^{\frac{n-2}{2}}\left(\frac{\varepsilon_{i}}{r_{i}}\right)^{\frac{n-3}{2}}+r_{i}^{\frac{n-2}{2}}\left(\frac{\varepsilon_{i}^{2}}{r_{i}^{2}}+r_{i}^{2}\right)\right]\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)$
$\displaystyle+$
$\displaystyle\left[\frac{\varepsilon_{i}^{n-4}}{r_{i}^{\frac{n}{2}-3}}+\varepsilon_{i}^{\frac{n-2}{2}}e^{-c\frac{r_{i}}{\varepsilon_{i}}}\right]\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
Scaling back to $\phi_{i}$, we get (17.19). ∎
### 18\. Linearization of Pohozaev identity
Given a smooth function $v$ and a sphere $\partial B_{r}$, define the Pohozaev
invariant as
$\mathcal{P}_{v}(r):=\int_{\partial B_{r}}\left[\frac{|\nabla
v|^{2}}{2}-\left(\partial_{r}v\right)^{2}-\frac{n-2}{2r}v\partial_{r}v\right].$
Choose a sequence of $r_{i}$ satisfying
$\lim_{i\to+\infty}\frac{r_{i}}{\varepsilon_{i}}=+\infty.$
Multiplying (1.1) by $x\cdot\nabla u_{i}(x,0)$ and integrating in $B_{r_{i}}$,
after integrating by parts we obtain a Pohozaev identity
$\mathcal{P}_{u_{i}(0)}(r_{i})-\int_{\partial
B_{r_{i}}}\frac{u_{i}(0)^{p+1}}{p+1}=-\frac{1}{r_{i}}\int_{B_{r_{i}}}\partial_{t}u_{i}(0)\left[x\cdot\nabla
u_{i}(0)+\frac{n-2}{2}u_{i}(0)\right].$ (18.1)
Substituting the decomposition (17.1) into this identity, we take an expansion
and estimate each term. We will see that the zeroth order terms cancel each
other, because they form exactly the Pohozaev invariant of the bubble. The
next order term then gives us the information we need.
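For orientation, here is a sketch (a standard computation, under the convention $p=\frac{n+2}{n-2}$) of why smooth solutions of the stationary equation $-\Delta W=W^{p}$ satisfy the Pohozaev identity used below; the parabolic term in (18.1) arises from applying the same multiplier $x\cdot\nabla u_{i}+\frac{n-2}{2}u_{i}$ to $\partial_{t}u_{i}$. One verifies the pointwise identity

$\mathrm{div}\left[(x\cdot\nabla W)\nabla W-\frac{x}{2}|\nabla W|^{2}+\frac{x\,W^{p+1}}{p+1}+\frac{n-2}{2}W\nabla W\right]=\left(\frac{n}{p+1}-\frac{n-2}{2}\right)W^{p+1}=0,$

since $p+1=\frac{2n}{n-2}$. Integrating over $B_{r}$ and computing the flux through $\partial B_{r}$ (with outward normal $x/r$) gives $-r\left[\mathcal{P}_{W}(r)-\int_{\partial B_{r}}\frac{W^{p+1}}{p+1}\right]=0$, which is the identity stated for $W_{i}(0)$ in Subsection 18.1.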
#### 18.1. The left hand side
For the left hand side, we have the expansion
$\displaystyle\mathcal{P}_{u_{i}(0)}(r_{i})-\int_{\partial
B_{r_{i}}}\frac{u_{i}(0)^{p+1}}{p+1}$ $\displaystyle=$
$\displaystyle\mathcal{P}_{W_{i}(0)}(r_{i})-\int_{\partial
B_{r_{i}}}\frac{W_{i}(0)^{p+1}}{p+1}+\mathcal{P}_{\phi_{i}(0)}(r_{i})+\mathcal{P}_{a_{i}(0)Z_{0,i}(0)}(r_{i})$
$\displaystyle+$ $\displaystyle\underbrace{\int_{\partial B_{r_{i}}}\nabla
W_{i}(0)\cdot\nabla\phi_{i}(0)-2\partial_{r}W_{i}(0)\partial_{r}\phi_{i}(0)-\frac{n-2}{2r_{i}}\left[W_{i}(0)\partial_{r}\phi_{i}(0)+\phi_{i}(0)\partial_{r}W_{i}(0)\right]}_{\mbox{Main
order term}}$ $\displaystyle-$ $\displaystyle\frac{1}{p+1}\int_{\partial
B_{r_{i}}}\left[u_{i}(0)^{p+1}-W_{i}(0)^{p+1}\right]+\mbox{cross terms
involving }a_{i}(0)Z_{0,i}(0).$
Let us estimate each term in this expansion.
1. (1)
Because $W_{i}(0)$ is a smooth solution of (1.9), it satisfies the standard
Pohozaev identity
$\mathcal{P}_{W_{i}(0)}(r_{i})-\int_{\partial
B_{r_{i}}}\frac{W_{i}(0)^{p+1}}{p+1}=0.$
2. (2)
By Proposition 17.1 and Proposition 17.10, we get
$\mathcal{P}_{\phi_{i}(0)}(r_{i})=O\left(r_{i}^{n-3}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)^{2}.$
Here the factor
$\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)$
is kept for later use.
3. (3)
By estimates on $a_{i}$ and $\lambda_{i}$ in Proposition 17.6, on $\partial
B_{r_{i}}$ we have
$\left\\{\begin{aligned} &|a_{i}(t)Z_{0,i}(t)|\lesssim
e^{-cr_{i}/\varepsilon_{i}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right),\\\
&\left|\nabla\left(a_{i}(t)Z_{0,i}(t)\right)\right|\lesssim\varepsilon_{i}^{-1}e^{-cr_{i}/\varepsilon_{i}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).\end{aligned}\right.$
(18.2)
By these estimates we get
$\left|\mathcal{P}_{a_{i}(0)Z_{0,i}(0)}(r)\right|\lesssim\varepsilon_{i}^{-2}r_{i}^{n-1}e^{-cr_{i}/\varepsilon_{i}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)^{2}.$
4. (4)
First by integrating (17.12), for any $x\in\partial B_{r_{i}}$, we have
$\phi_{i}(x,0)=\frac{1}{|\partial B_{\rho}|}\int_{\partial
B_{\rho}}\phi_{i}(0)+O\left(\frac{\varepsilon_{i}^{2}}{r_{i}^{2}}+\rho\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
Next, because $\xi_{i}(0)=0$, on $\partial B_{r_{i}}$, by Proposition 17.6 and
(17.11), we obtain
$\left\\{\begin{aligned}
&W_{i}(0)\lesssim\varepsilon_{i}^{\frac{n-2}{2}}r_{i}^{2-n},\\\
&\partial_{r}W_{i}(0)=-c(n)\varepsilon_{i}^{\frac{n-2}{2}}r_{i}^{1-n}+O\left(\varepsilon_{i}^{\frac{n+2}{2}}r_{i}^{-1-n}\right),\\\
&\nabla W_{i}(0)=\partial_{r}W_{i}(0)\frac{x}{|x|},\\\
&|\nabla\phi_{i}(t)|\lesssim\left(\varepsilon_{i}^{2}r_{i}^{-3}+1\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).\end{aligned}\right.$
Therefore the main order term equals
$\displaystyle-c(n)\left[\frac{1}{|\partial B_{\rho}|}\int_{\partial
B_{\rho}}\phi_{i}+O\left(\frac{\varepsilon_{i}^{2}}{r_{i}^{2}}+\rho\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)\right]\frac{\varepsilon_{i}^{\frac{n-2}{2}}}{r_{i}}$
$\displaystyle+$ $\displaystyle
O\left(\varepsilon_{i}^{\frac{n-2}{2}}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
5. (5)
On $\partial B_{r_{i}}$, we have
$W_{i}(0)\lesssim\varepsilon_{i}^{\frac{n-2}{2}}r_{i}^{2-n},\quad|a_{i}Z_{0,i}(0)|\lesssim
e^{-cr_{i}/\varepsilon_{i}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
Therefore
$\left|\int_{\partial
B_{r_{i}}}\left[u_{i}(0)^{p+1}-W_{i}(0)^{p+1}\right]\right|\lesssim\left(\varepsilon_{i}^{\frac{n+2}{2}}r_{i}^{-3}+r_{i}^{n-1}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
6. (6)
Finally, by (18.2) and estimates of $W_{i}(0)$ and $\phi_{i}(0)$, the cross
terms involving $a_{i}Z_{0,i}(0)$ are of the order
$O\left(\varepsilon_{i}^{\frac{n-4}{2}}+r_{i}^{n-3}+\varepsilon_{i}^{-1}r_{i}^{n-2}\right)e^{-cr_{i}/\varepsilon_{i}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
Putting these estimates together, we see that the left hand side of (18.1)
equals
$\displaystyle-c(n)\left[\frac{1}{|\partial B_{\rho}|}\int_{\partial
B_{\rho}}\phi_{i}+O\left(\frac{\varepsilon_{i}^{2}}{r_{i}^{2}}+\rho\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)\right]\frac{\varepsilon_{i}^{\frac{n-2}{2}}}{r_{i}}$
$\displaystyle+$ $\displaystyle
O\left(r_{i}^{n-3}+\varepsilon_{i}^{-2}r_{i}^{n-1}e^{-cr_{i}/\varepsilon_{i}}+\varepsilon_{i}^{\frac{n-2}{2}}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)$
$\displaystyle+$
$\displaystyle\left(\varepsilon_{i}^{\frac{n+2}{2}}r_{i}^{-3}+r_{i}^{n-1}+\varepsilon_{i}^{\frac{n-4}{2}}e^{-cr_{i}/\varepsilon_{i}}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
#### 18.2. The right hand side
By (17.1), we get
$\partial_{t}u_{i}=\left(a_{i}^{\prime},\xi_{i}^{\prime},\lambda_{i}^{\prime}\right)\cdot
Z_{\ast,i}+\partial_{t}\phi_{i}+a_{i}\partial_{t}Z_{0,i}$ (18.4)
and
$\displaystyle x\cdot\nabla u_{i}+\frac{n-2}{2}u_{i}$ $\displaystyle=$
$\displaystyle\lambda_{i}Z_{n+1,i}+\left(x\cdot\nabla\phi_{i}+\frac{n-2}{2}\phi_{i}\right)$
$\displaystyle+$ $\displaystyle a_{i}\left(x\cdot
Z_{0,i}+\frac{n-2}{2}Z_{0,i}\right).$
Therefore for any $r$, we have
$\displaystyle\int_{B_{r}}\partial_{t}u_{i}(x,0)\left[x\cdot\nabla
u_{i}(x,0)+\frac{n-2}{2}u_{i}(x,0)\right]dx$ $\displaystyle=$
$\displaystyle\lambda_{i}(0)\left(a_{i}^{\prime}(0),\xi_{i}^{\prime}(0),\lambda_{i}^{\prime}(0)\right)\cdot\int_{B_{r}}Z_{\ast,i}(x,0)Z_{n+1,i}(x,0)dx$
$\displaystyle+$
$\displaystyle\left(a_{i}^{\prime}(0),\xi_{i}^{\prime}(0),\lambda_{i}^{\prime}(0)\right)\cdot\int_{B_{r}}Z_{\ast,i}(x,0)\left[x\cdot\nabla\phi_{i}(x,0)+\frac{n-2}{2}\phi_{i}(x,0)\right]dx$
$\displaystyle+$ $\displaystyle
a_{i}(0)\left(a_{i}^{\prime}(0),\xi_{i}^{\prime}(0),\lambda_{i}^{\prime}(0)\right)\cdot\int_{B_{r}}Z_{\ast,i}(x,0)\left[x\cdot
Z_{0,i}(x,0)+\frac{n-2}{2}Z_{0,i}(x,0)\right]dx$ $\displaystyle+$
$\displaystyle\lambda_{i}(0)\int_{B_{r}}Z_{n+1,i}(x,0)\partial_{t}\phi_{i}(x,0)dx$
$\displaystyle+$
$\displaystyle\int_{B_{r}}\partial_{t}\phi_{i}(x,0)\left[x\cdot\nabla\phi_{i}(x,0)+\frac{n-2}{2}\phi_{i}(x,0)\right]dx$
$\displaystyle+$ $\displaystyle
a_{i}(0)\int_{B_{r}}\partial_{t}\phi_{i}(x,0)\left[x\cdot
Z_{0,i}(x,0)+\frac{n-2}{2}Z_{0,i}(x,0)\right]dx$ $\displaystyle+$
$\displaystyle
a_{i}(0)\lambda_{i}(0)\int_{B_{r}}Z_{n+1,i}(x,0)\partial_{t}Z_{0,i}(x,0)dx$
$\displaystyle+$ $\displaystyle
a_{i}(0)\int_{B_{r}}\partial_{t}Z_{0,i}(x,0)\left[x\cdot\nabla\phi_{i}(x,0)+\frac{n-2}{2}\phi_{i}(x,0)\right]dx$
$\displaystyle+$ $\displaystyle
a_{i}(0)^{2}\int_{B_{r}}\partial_{t}Z_{0,i}(x,0)\left[x\cdot
Z_{0,i}(x,0)+\frac{n-2}{2}Z_{0,i}(x,0)\right]dx$ $\displaystyle=:$
$\displaystyle\mathrm{I}+\mathrm{II}+\mathrm{III}+\mathrm{IV}+\mathrm{V}+\mathrm{VI}+\mathrm{VII}+\mathrm{VIII}+\mathrm{IX}.$
Let us take $r=r_{i}$ and estimate each term one by one.
1. (1)
By (15.9), Proposition 17.6 and the orthogonal relation between $Z_{0}$ and
$Z_{n+1}$,
$\left|\lambda_{i}(0)a_{i}^{\prime}(0)\int_{B_{r_{i}}}Z_{0,i}(x,0)Z_{n+1,i}(x,0)dx\right|\lesssim\varepsilon_{i}^{\frac{n-2}{2}}e^{-cr_{i}/\varepsilon_{i}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
Similarly, for each $j=1,\cdots,n$,
$\left|\lambda_{i}(0)\xi_{j,i}^{\prime}(0)\int_{B_{r_{i}}}Z_{j,i}(x,0)Z_{n+1,i}(x,0)dx\right|\lesssim\varepsilon_{i}^{\frac{3n}{2}-4}r_{i}^{3-n}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
By Lemma 13.5, (15.9) and Proposition 17.6,
$\left|\lambda_{i}(0)\lambda_{i}^{\prime}(0)\int_{B_{r_{i}}}Z_{n+1,i}(x,0)^{2}dx\right|\lesssim
K^{-\frac{n-2}{2}}\varepsilon_{i}^{\frac{n-2}{2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
Putting these three estimates together we obtain
$|\mathrm{I}|\lesssim\left[e^{-cr_{i}/\varepsilon_{i}}+\left(\frac{\varepsilon_{i}}{r_{i}}\right)^{n-3}+K^{-\frac{n-2}{2}}\right]\varepsilon_{i}^{\frac{n-2}{2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
2. (2)
By Proposition 17.6, Proposition 17.1 and Lemma 17.8, we get
$|\mathrm{II}|\lesssim\varepsilon_{i}^{n-4}r_{i}^{2}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)^{2}.$
3. (3)
By (15.9) and (17.8),
$|\mathrm{III}|\lesssim\varepsilon_{i}^{n-2}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)^{2}.$
4. (4)
To estimate $\mathrm{IV}$, first note that because $r_{i}\gg\varepsilon_{i}$,
the orthogonal condition (12.2) for $\phi_{i}$ reads as
$\int_{B_{r_{i}}}\phi_{i}(x,t)\eta_{in}(x,t)Z_{n+1,i}(x,t)dx=0.$
Differentiating this equation in $t$ gives
$\displaystyle\left|\int_{B_{r_{i}}}Z_{n+1,i}(x,0)\partial_{t}\phi_{i}(x,0)\eta_{in}(x,0)dx\right|$
$\displaystyle\leq$
$\displaystyle\left|\int_{B_{r_{i}}}\phi_{i}(x,0)\partial_{t}\eta_{in}(x,0)Z_{n+1,i}(x,0)dx\right|+\left|\int_{B_{r_{i}}}\phi_{i}(x,0)\eta_{in}(x,0)\partial_{t}Z_{n+1,i}(x,0)dx\right|$
$\displaystyle\lesssim$ $\displaystyle
K^{2}\varepsilon_{i}^{n-3}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
On the other hand, a direct calculation using Proposition 17.11 gives
$\displaystyle\left|\int_{B_{r_{i}}}Z_{n+1,i}(x,0)\partial_{t}\phi_{i}(x,0)\left[1-\eta_{in}(x,0)\right]dx\right|$
$\displaystyle\lesssim$
$\displaystyle\left(K^{-2}+r_{i}\right)\varepsilon_{i}^{\frac{n-4}{2}}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
Putting these two estimates together we get
$|\mathrm{IV}|\lesssim\left[\left(K^{-2}+r_{i}\right)\varepsilon_{i}^{\frac{n-2}{2}}+K^{2}\varepsilon_{i}^{n-2}\right]\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
5. (5)
By Proposition 17.1, Lemma 17.8 and Proposition 17.11,
$|\mathrm{V}|\lesssim\left(\varepsilon_{i}^{2}r_{i}^{n-4}+r_{i}^{n-1}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)^{2}.$
6. (6)
By Proposition 17.6 and Proposition 17.11,
$|\mathrm{VI}|\lesssim\varepsilon_{i}^{n-2}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)^{2}.$
7. (7)
By Proposition 17.6,
$|\mathrm{VII}|\lesssim\varepsilon_{i}^{n-2}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)^{2}.$
8. (8)
By Proposition 17.6, Proposition 17.1 and Lemma 17.8,
$|\mathrm{VIII}|\lesssim\varepsilon_{i}^{\frac{3n}{2}-3}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)^{2}.$
9. (9)
By Proposition 17.6,
$|\mathrm{IX}|\lesssim\varepsilon_{i}^{2n-3}\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right)^{2}.$
Putting these estimates together, we get
$\mbox{RHS}\lesssim
r_{i}^{-1}\left[\left(K^{-2}+r_{i}\right)\varepsilon_{i}^{\frac{n-2}{2}}+\varepsilon_{i}^{2}r_{i}^{n-4}+r_{i}^{n-1}\right]\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
(18.6)
### 19\. A weak form of Schoen’s Harnack inequality
In this section, we establish a weak form of Schoen’s Harnack inequality,
which then completes the proof of Theorem 10.1. In the Yamabe problem, this
Harnack inequality was first introduced by Schoen [77]; see also Li [47] and
Li-Zhang [49] for proofs using the methods of moving planes and moving
spheres. The proof below is instead mainly a consequence of the Pohozaev
identity calculation in Section 18.
Combining (18.1) and (18.6) in the previous section, we get
$\displaystyle\frac{1}{|\partial B_{\rho}|}\int_{\partial
B_{\rho}}\phi_{i}(0)$ $\displaystyle\lesssim$
$\displaystyle\left(\rho+\frac{\varepsilon_{i}^{2}}{r_{i}^{2}}+\frac{r_{i}^{n-2}}{\varepsilon_{i}^{(n-2)/2}}+K^{-2}+r_{i}+\frac{r_{i}^{n-3}}{\varepsilon_{i}^{\frac{n}{2}-3}}+e^{-c\frac{r_{i}}{\varepsilon_{i}}}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
By choosing $r_{i}=\varepsilon_{i}^{(n-1)/n}$, we obtain a constant
$\delta(n)>0$ such that
$\frac{1}{|\partial B_{\rho}|}\int_{\partial
B_{\rho}}\phi_{i}(0)\lesssim\left(\rho+K^{-2}+\varepsilon_{i}^{\delta(n)}\right)\left(\left\|\phi_{i}\right\|_{L^{\infty}(Q_{1})}+e^{c\varepsilon_{i}^{-2}}\right).$
(19.1)
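To see why this choice of $r_{i}$ works, note (a quick check, not spelled out in the text) that with $r_{i}=\varepsilon_{i}^{(n-1)/n}$ each algebraic error term above becomes a positive power of $\varepsilon_{i}$:

$\frac{\varepsilon_{i}^{2}}{r_{i}^{2}}=\varepsilon_{i}^{\frac{2}{n}},\qquad\frac{r_{i}^{n-2}}{\varepsilon_{i}^{(n-2)/2}}=\varepsilon_{i}^{\frac{(n-2)^{2}}{2n}},\qquad\frac{r_{i}^{n-3}}{\varepsilon_{i}^{\frac{n}{2}-3}}=\varepsilon_{i}^{\frac{n^{2}-2n+6}{2n}},\qquad r_{i}=\varepsilon_{i}^{\frac{n-1}{n}},$

while $e^{-cr_{i}/\varepsilon_{i}}$ decays faster than any power of $\varepsilon_{i}$. One may therefore take $\delta(n)$ to be the minimum of these exponents.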
On $\partial B_{\rho}$, $u_{i}(0)\to u_{\infty}(0)$ uniformly. In view of the
estimates (15.8)-(15.10), we deduce that $\phi_{i}(0)\to u_{\infty}(0)$
uniformly on $\partial B_{\rho}$, too. Passing to the limit in (19.1), we
obtain
$\frac{1}{|\partial B_{\rho}|}\int_{\partial
B_{\rho}}u_{\infty}(x,0)\lesssim\left(\rho+K^{-2}\right)\|u_{\infty}\|_{L^{\infty}(Q_{1})}\lesssim\rho+K^{-2}.$
(19.2)
Since $u_{\infty}$ is a smooth, nonnegative solution of (1.1), if $u_{\infty}$
is not identically zero, then by the standard Harnack inequality,
$\inf_{Q_{1/4}}u_{\infty}>0.$ (19.3)
On the other hand, if we have chosen $K$ large enough at the beginning (in
Proposition 12.1) and taken $\rho$ arbitrarily small, (19.2) contradicts
(19.3). (We can also avoid the use of the Harnack inequality, and use only the
strong maximum principle. For this approach, we need to take a sequence of
$K\to+\infty$ in Proposition 12.1. This changes the error function $\phi_{i}$,
but it does not affect the argument, because both (19.2) and (19.3) involve
only $u_{\infty}$, which is the weak limit of $u_{i}$ and does not depend on
the construction of $\phi_{i}$.)
Hence in the setting of Section 10, we must have $u_{\infty}=0$. This finishes
the proof of Theorem 10.1.
For the study of bubble clustering (see Section 20 below), we need a
quantitative version of the above qualitative description. If the solution
exists for a sufficiently long time, iterating the following proposition
backward in time leads to a quantitative upper bound on the error function;
see Section 20 for details. In particular, if the solution exists globally in
time (e.g. a solution independent of time), we recover Schoen’s Harnack
inequality.
###### Proposition 19.1.
There exist two universal constants $\varepsilon_{0}$ and $M_{0}$ such that
the following holds. Suppose $u_{\varepsilon}$ is a positive solution of (1.1)
in $Q_{1}$, satisfying (II.a’)-(II.d’) and the decomposition (17.1) with
parameters
$(a_{\varepsilon}(t),\xi_{\varepsilon}(t),\lambda_{\varepsilon}(t))$, where
$\lambda_{\varepsilon}(0)=\varepsilon,\quad\xi_{\varepsilon}(0)=0.$
If $\varepsilon\leq\varepsilon_{0}$ and
$\|\phi_{\varepsilon}\|_{L^{\infty}(Q_{1})}\geq
M_{0}\varepsilon^{\frac{n-2}{2}},$ (19.4)
then
$\|\phi_{\varepsilon}\|_{L^{\infty}(B_{1}\times(-1/4,1/8))}\leq\frac{1}{2}\|\phi_{\varepsilon}\|_{L^{\infty}(Q_{1})}.$
###### Proof.
We prove this proposition by contradiction. Assume
* •
$u_{i}$ is a sequence of solutions satisfying (II.a)-(II.c) and (10.1);
* •
the decomposition given in (17.1) holds, where
$\varepsilon_{i}:=\lambda_{i}(0)\to 0,\quad\xi_{i}(0)=0;$
* •
$\phi_{i}$ satisfies
$\lim_{i\to+\infty}\frac{\|\phi_{i}\|_{L^{\infty}(Q_{1})}}{\varepsilon_{i}^{\frac{n-2}{2}}}=+\infty,$
(19.5)
but
$\|\phi_{i}\|_{L^{\infty}(B_{1}\times(-1/4,1/8))}>\frac{1}{2}\|\phi_{i}\|_{L^{\infty}(Q_{1})}.$
(19.6)
We show that this leads to a contradiction.
Step 1. Denote $\delta_{i}:=\|\phi_{i}\|_{L^{\infty}(Q_{1})}$. Let
$\widetilde{\phi}_{i}:=\phi_{i}/\delta_{i}$. In this step we prove
$\widetilde{\phi}_{i}\to 0\quad\mbox{uniformly in any compact set
of}~{}~{}Q_{1}.$ (19.7)
First by Proposition 17.10, for any $\rho>0$, $\widetilde{\phi}_{i}$ are
uniformly Lipschitz in $(B_{2/3}\setminus B_{\rho})\times(-4/9,4/9)$. Hence
after passing to a subsequence, we may assume $\widetilde{\phi}_{i}$ converges
to a limit $\widetilde{\phi}$, uniformly in any compact set of
$(B_{2/3}\setminus\\{0\\})\times(-4/9,4/9)$.
Because
$u_{i}=\phi_{i}+O\left(\varepsilon_{i}^{\frac{n-2}{2}}|x|^{2-n}\right),$
in view of (19.5), $u_{i}/\delta_{i}$ also converges uniformly to
$\widetilde{\phi}$ in any compact set of
$(B_{2/3}\setminus\\{0\\})\times(-4/9,4/9)$. As a consequence,
$\widetilde{\phi}\geq 0\quad\mbox{in
}\left(B_{2/3}\setminus\\{0\\}\right)\times(-4/9,4/9).$ (19.8)
Dividing (1.1) by $\delta_{i}$, and then letting $i\to+\infty$, by the above
uniform convergence of $u_{i}/\delta_{i}$, we get
$\partial_{t}\widetilde{\phi}-\Delta\widetilde{\phi}=0\quad\mbox{in
}\left(B_{2/3}\setminus\\{0\\}\right)\times(-4/9,4/9).$
Since $\left|\widetilde{\phi}\right|\leq 1$ in $Q_{1}$, the removable
singularity theorem for the heat equation implies that $\widetilde{\phi}$ is
smooth and satisfies the heat equation in $Q_{2/3}$.
By (19.8) and the strong maximum principle, either $\widetilde{\phi}\equiv 0$
or $\widetilde{\phi}>0$ everywhere in $Q_{2/3}$. We claim that the first case
must happen.
Indeed, dividing both sides of (19.1) by $\delta_{i}$ and letting $i\to+\infty$,
by (19.5) and the above uniform convergence of $\widetilde{\phi}_{i}$, we
obtain
$\frac{1}{|\partial B_{\rho}|}\int_{\partial
B_{\rho}}\widetilde{\phi}(x,0)\lesssim\rho+K^{-2}.$
Sending $\rho\to 0$ and $K\to+\infty$ as before, we get
$\widetilde{\phi}(0,0)=0$. By the strong maximum principle (or Harnack
inequality), $\widetilde{\phi}\equiv 0$ in $Q_{2/3}$.
Step 2. Choose a small radius $\rho$. By results obtained in Step 1, for all
$i$ large,
$|\phi_{i}|\ll\delta_{i}\quad\mbox{on}~{}~{}\partial^{p}Q_{2/3}\setminus\left(B_{\rho}\times\\{-2/3\\}\right).$
(19.9)
On $B_{\rho}\times\\{-2/3\\}$, we have the trivial bound
$|\phi_{i}|\leq\delta_{i}.$ (19.10)
As in Subsection 17.1, we obtain
Claim. If $\rho$ is sufficiently small, there exists a constant
$\sigma(\rho)\ll 1$ (independent of $\varepsilon_{i}$) such that
$|\phi_{i,1}|\leq\sigma(\rho)\delta_{i}\quad\mbox{in }Q_{7/12}.$ (19.11)
Here $\phi_{i,1}$ denotes the first term in the decomposition of the outer
component taken in Section 14.
By this claim, repeating the estimates of other terms in the decomposition of
the outer component, $\phi_{i,2}$-$\phi_{i,5}$, and the corresponding estimate
for the inner component $\phi_{in,i}$ in Section 16, we obtain
$|\phi_{i}|\lesssim\sigma(\rho)\delta_{i}\quad\mbox{in }Q_{1/2}.$ (19.12)
Step 3. By (19.12), we get
$u_{i}\lesssim\varepsilon_{i}^{\frac{n-2}{2}}+\sigma(\rho)\delta_{i}\quad\mbox{in
}\left(B_{1/2}\setminus B_{1/4}\right)\times(-1/4,1/4).$
Because $u_{i}$ satisfies the standard parabolic Harnack inequality in
$\left(B_{1}\setminus B_{1/8}\right)\times(-1,1)$, we get
$u_{i}\lesssim\varepsilon_{i}^{\frac{n-2}{2}}+\sigma(\rho)\delta_{i}\quad\mbox{in
}\left(B_{1}\setminus B_{1/2}\right)\times(-1/4,1/8).$
By the definition of $\phi_{i}$, we get
$-\varepsilon_{i}^{\frac{n-2}{2}}\lesssim\phi_{i}\lesssim\varepsilon_{i}^{\frac{n-2}{2}}+\sigma(\rho)\delta_{i}\quad\mbox{in
}\left(B_{1}\setminus B_{1/2}\right)\times(-1/4,1/8).$
Combining these two estimates with (19.12), for all $i$ sufficiently large, we
have
$\left|\widetilde{\phi}_{i}\right|\leq 1/4\quad\mbox{in
}B_{1}\times(-1/4,1/8).$
This is a contradiction with (19.6). ∎
### 20\. A conditional exclusion of bubble clustering
In this section, we work in the following setting, which arises from suitable
rescalings of the bubble clustering from Theorem 3.1 in Part III.
1. (1)
There exist two sequences $R_{i}$ and $T_{i}$, both diverging to $+\infty$ as
$i\to+\infty$;
2. (2)
$u_{i}$ is a sequence of positive, smooth solutions of (1.1) in
$\Omega_{i}:=B_{R_{i}}\times(-T_{i},T_{i})$;
3. (3)
as $i\to+\infty$,
$\int_{-T_{i}}^{T_{i}}\int_{B_{R_{i}}}\partial_{t}u_{i}(x,t)^{2}dxdt\to 0;$
(20.1)
4. (4)
there exists an $N\in\mathbb{N}$ such that for each $i$ and
$t\in(-T_{i},T_{i})$, in any compact set of $\mathbb{R}^{n}$, we have the
bubble decomposition
$u_{i}(x,t)=\sum_{j=1}^{N}W_{\xi_{ij}^{\ast}(t),\lambda_{ij}^{\ast}(t)}(x)+o_{i}(1),$
(20.2)
where the $o_{i}(1)$ term is measured in $H^{1}_{loc}(\mathbb{R}^{n})$;
5. (5)
as $i\to+\infty$,
$\max_{j=1,\cdots,N}\sup_{-T_{i}<t<T_{i}}\lambda_{ij}^{\ast}(t)\to 0;$ (20.3)
6. (6)
for any $t\in(-T_{i},T_{i})$,
$\min_{1\leq j\neq k\leq N}|\xi_{ij}^{\ast}(t)-\xi_{ik}^{\ast}(t)|\geq 2,$
(20.4)
and at $t=0$,
$\xi_{i1}^{\ast}(0)=0,\quad|\xi_{i2}^{\ast}(0)|=1;$ (20.5)
7. (7)
after relabelling indices, assume for some $N^{\prime}\leq N$ and any
$j=1,\cdots,N^{\prime}$,
$\xi_{ij}^{\ast}(0)\to P_{j},\quad\mbox{as}~{}~{}i\to+\infty,$
while for any $j=N^{\prime}+1,\cdots,N$,
$|\xi_{ij}^{\ast}(0)|\to+\infty,\quad\mbox{as}~{}~{}i\to+\infty.$
By (20.5), $P_{1}=0$ and $P_{2}\in\partial B_{1}$, so $N^{\prime}\geq 2$.
The main result of this section is
###### Proposition 20.1.
Under the above assumptions, we have
$T_{i}\leq 100\max_{1\leq j\leq N^{\prime}}|\log\lambda_{ij}^{\ast}(0)|.$
(20.6)
We prove this proposition by contradiction, so assume (20.6) does not hold,
that is,
$T_{i}>100\max_{1\leq j\leq N^{\prime}}|\log\lambda_{ij}^{\ast}(0)|.$ (20.7)
With this bound in hand, we can iterate Proposition 19.1 backward in time,
leading to an optimal upper bound on the error function, analogous to Schoen’s
Harnack inequality in the Yamabe problem. This bound allows us to define a
Green function from $u_{i}$; this will be done in Subsection 20.1. Then in
Subsection 20.2, again in parallel with the treatment of the Yamabe problem,
we employ the Pohozaev identity to derive a sign restriction on the next order
term in the expansion of this Green function at a pole. This then leads to a
contradiction with the assumption that there are bubbles located at both
$P_{1}$ and $P_{2}$.
#### 20.1. Construction of Green function
Under the above assumptions, for any $i,j$ and $t\in(-T_{i}+1,T_{i}-1)$,
Proposition 12.1 can be applied to $u_{i}$ in $Q_{1}(\xi_{ij}^{\ast}(t),t)$.
We denote the corresponding parameters by $\xi_{ij}(t)$, $\lambda_{ij}(t)$ and
$a_{ij}(t)$, and the error function
$\phi_{ij}(x,t):=u_{i}(x,t)-W_{\xi_{ij}(t),\lambda_{ij}(t)}(x)-a_{ij}(t)Z_{0,\xi_{ij}(t),\lambda_{ij}(t)}(x).$
(20.8)
Denote $\varepsilon_{ij}:=\lambda_{ij}(0)$, $j=1,\cdots,N^{\prime}$ and
$\varepsilon_{i}=\max_{1\leq j\leq N^{\prime}}\varepsilon_{ij}$. By
Proposition 12.1 and (20.7), we get
$T_{i}>50\max_{1\leq j\leq N^{\prime}}|\log\varepsilon_{ij}|.$ (20.9)
###### Lemma 20.2.
For each $j=1,\cdots,N^{\prime}$ and any
$t\in(-50|\log\varepsilon_{ij}|,50|\log\varepsilon_{ij}|)$,
$\frac{1}{2}\varepsilon_{ij}\leq\lambda_{ij}(t)\leq 2\varepsilon_{ij}.$
(20.10)
###### Proof.
Applying Proposition 17.6 to each $\phi_{ij}$ in $Q_{1}(\xi_{ij}(t),t)$, we
obtain
$\left|\lambda_{ij}^{\prime}(t)\right|\lesssim\lambda_{ij}(t)^{\frac{n-4}{2}}.$
(20.11)
Define
$\widetilde{T}_{ij}:=\sup\left\\{T:~{}~{}T\leq
50|\log\varepsilon_{ij}|,~{}~{}\lambda_{ij}(t)\leq
2\varepsilon_{ij}\quad\mbox{in }~{}[0,T]\right\\}.$
Integrating (20.11) on $[0,\widetilde{T}_{ij}]$ leads to
$\sup_{t\in[0,\widetilde{T}_{ij}]}\lambda_{ij}(t)\leq\varepsilon_{ij}e^{C\varepsilon_{ij}^{(n-6)/2}|\log\varepsilon_{ij}|}\leq\varepsilon_{ij}e^{C\varepsilon_{ij}^{(n-6)/4}}<\frac{3}{2}\varepsilon_{ij}.$
(20.12)
Hence we must have $\widetilde{T}_{ij}=50|\log\varepsilon_{ij}|$. This gives
the upper bound in (20.10) for positive times. The lower bound, and the
estimates for negative times, follow in the same way. ∎
For each $t\in(-50|\log\varepsilon_{ij}|,50|\log\varepsilon_{ij}|)$, applying
Proposition 19.1 to $Q_{1}(\xi_{ij}(t),t)$, we obtain
$\sup_{Q_{1}(\xi_{ij}(t),t)}|\phi_{ij}|\leq
M_{0}\varepsilon_{i}^{\frac{n-2}{2}}+\frac{1}{2}\sup_{Q_{1}(\xi_{ij}(t+1),t+1)}|\phi_{ij}|.$
(20.13)
For any $R>1$ fixed, an iteration of this estimate from
$t=50|\log\varepsilon_{ij}|$ to any $t\in[-R,R]$ leads to
$\sup_{B_{1}(\xi_{ij}(0))\times(-R,R)}|\phi_{ij}|\lesssim\varepsilon_{i}^{\frac{n-2}{2}}.$
(20.14)
This bound depends only on the constant $M_{0}$ in Proposition 19.1, and it is
independent of $R$.
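Concretely (a sketch of the iteration, using the trivial a priori bound $\sup|\phi_{ij}|\lesssim 1$), applying (20.13) $k$ times yields

$\sup_{Q_{1}(\xi_{ij}(t),t)}|\phi_{ij}|\leq M_{0}\varepsilon_{i}^{\frac{n-2}{2}}\sum_{l=0}^{k-1}2^{-l}+2^{-k}\sup_{Q_{1}(\xi_{ij}(t+k),t+k)}|\phi_{ij}|\leq 2M_{0}\varepsilon_{i}^{\frac{n-2}{2}}+2^{-k}\sup|\phi_{ij}|,$

and for $t\in[-R,R]$ one may take $k\geq 50|\log\varepsilon_{ij}|-R$, so that the last term is bounded by a large power of $\varepsilon_{ij}$ and is negligible compared with $\varepsilon_{i}^{\frac{n-2}{2}}$.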
Substituting (20.14) and estimates of parameters in Proposition 17.6 into
(20.8), we get
$\sup_{B_{1}(\xi_{ij}(0))\times(-R,R)}u_{i}\lesssim\varepsilon_{i}^{\frac{n-2}{2}}.$
By applying the standard Harnack inequality to $u_{i}$ in
$\left(B_{7/4}(\xi_{ij}(0))\setminus B_{3/4}(\xi_{ij}(0))\right)\times(-R,R)$,
the estimate (20.14) is extended to
$\sup_{B_{3/2}(\xi_{ij}(0))\times(-R,R)}|\phi_{ij}|\lesssim\varepsilon_{i}^{\frac{n-2}{2}}.$
(20.15)
This estimate gives us the expansion
$u_{i}(x,t)=\varepsilon_{ij}^{-\frac{n-2}{2}}W\left(\frac{x-\xi_{ij}(t)}{\varepsilon_{ij}}\right)+O\left(\varepsilon_{i}^{\frac{n-2}{2}}\right)\quad\mbox{in}~{}~{}B_{3/2}(\xi_{ij}(0))\times(-R,R).$
(20.16)
Here we note that, by Proposition 17.6,
$|\xi_{ij}^{\prime}(t)|\lesssim\lambda_{ij}(t)^{\frac{n-4}{2}}\lesssim\varepsilon_{i}^{\frac{n-4}{2}}.$
Hence
$|\xi_{ij}(t)-\xi_{ij}(0)|\lesssim_{R}\varepsilon_{i}^{\frac{n-4}{2}},\quad\mbox{in}~{}~{}[-R,R].$
(20.17)
Assume
$\lim_{i\to+\infty}\frac{\varepsilon_{ij}}{\varepsilon_{i}}=m_{j}\in[0,1]\quad\mbox{for
any }~{}~{}j=1,\cdots,N^{\prime}.$
By the definition of $\varepsilon_{i}$, there exists at least one $j$
satisfying $m_{j}=1$.
It is directly verified that for some dimensional constant $c(n)>0$,
$\varepsilon_{i}^{-\frac{n-2}{2}}u_{i}^{p}dxdt\rightharpoonup
c(n)m_{j}^{\frac{n-2}{2}}\delta_{P_{j}}\otimes dt$ (20.18)
weakly as Radon measures in $B_{3/2}(P_{j})\times(-R,R)$.
Set
$\widehat{u}_{i}:=\varepsilon_{i}^{-\frac{n-2}{2}}u_{i}.$
It satisfies
$\partial_{t}\widehat{u}_{i}-\Delta\widehat{u}_{i}=\varepsilon_{i}^{-\frac{n-2}{2}}u_{i}^{p}=\varepsilon_{i}^{2}\widehat{u}_{i}^{p}.$
(20.19)
By (20.15), in any compact set of
$\left(B_{3/2}(P_{j})\setminus\\{P_{j}\\}\right)\times\mathbb{R}$,
$\widehat{u}_{i}$ are uniformly bounded. Because $u_{i}$ satisfies the
standard parabolic Harnack inequality in any compact set of
$\left(\mathbb{R}^{n}\setminus\cup_{j=1}^{N^{\prime}}\\{P_{j}\\}\right)\times\mathbb{R}$,
we deduce that $\widehat{u}_{i}$ are also uniformly bounded in any compact set
of
$\left(\mathbb{R}^{n}\setminus\cup_{j=1}^{N^{\prime}}\\{P_{j}\\}\right)\times\mathbb{R}$.
Then by standard parabolic regularity theory, $\widehat{u}_{i}$ converges to
$\widehat{u}_{\infty}$ smoothly in any compact set of
$\left(\mathbb{R}^{n}\setminus\cup_{j=1}^{N^{\prime}}\\{P_{j}\\}\right)\times\mathbb{R}$.
By (20.18), $\widehat{u}_{\infty}$ satisfies
$\partial_{t}\widehat{u}_{\infty}-\Delta\widehat{u}_{\infty}=c(n)\sum_{j=1}^{N^{\prime}}m_{j}^{\frac{n-2}{2}}\delta_{P_{j}}\otimes
dt\quad\mbox{in}~{}~{}\mathbb{R}^{n}\times\mathbb{R}.$ (20.20)
###### Lemma 20.3.
For each $j=1,\cdots,N^{\prime}$,
$m_{j}>0.$
###### Proof.
Because at least one $m_{j}$ equals $1$, we have $\widehat{u}_{\infty}\neq 0$.
By the strong maximum principle,
$\widehat{u}_{\infty}>0\quad\mbox{in}~{}~{}\left(\mathbb{R}^{n}\setminus\cup_{j=1}^{N^{\prime}}\\{P_{j}\\}\right)\times\mathbb{R}.$
(20.21)
By (20.16), for each $j=1,\cdots,N^{\prime}$,
$\widehat{u}_{i}\lesssim\left(\frac{\varepsilon_{ij}}{\varepsilon_{i}}\right)^{\frac{n-2}{2}}\quad\mbox{in}~{}~{}\left(B_{3/2}(P_{j})\setminus
B_{1/2}(P_{j})\right)\times(-1,1).$
If $m_{j}=0$, we would have
$\widehat{u}_{\infty}=\lim_{i\to+\infty}\widehat{u}_{i}=0\quad\mbox{in}~{}~{}\left(B_{3/2}(P_{j})\setminus
B_{1/2}(P_{j})\right)\times(-1,1).$
This is a contradiction with (20.21). ∎
###### Lemma 20.4.
For any
$(x,t)\in\left(\mathbb{R}^{n}\setminus\cup_{j=1}^{N^{\prime}}\\{P_{j}\\}\right)\times\mathbb{R}$,
$\widehat{u}_{\infty}(x,t)\equiv
c(n)\sum_{j=1}^{N^{\prime}}m_{j}^{\frac{n-2}{2}}|x-P_{j}|^{2-n}.$ (20.22)
###### Proof.
Take two arbitrary $s<t$. Take a sequence of cut-off functions $\zeta_{R}\in
C_{0}^{\infty}(\mathbb{R}^{n})$ such that
$\left\\{\begin{aligned} &0\leq\zeta_{R}\leq 1,\\\ &\zeta_{R}\equiv
1\quad\mbox{in}~{}~{}B_{R}\setminus\cup_{j=1}^{N^{\prime}}B_{1/R}(P_{j}),\\\
&\zeta_{R}\equiv
0\quad\mbox{in}~{}~{}B_{2R}^{c}\cup\bigcup_{j=1}^{N^{\prime}}B_{1/(2R)}(P_{j}).\end{aligned}\right.$
By the comparison principle, for any
$x\in\mathbb{R}^{n}\setminus\\{P_{1},\cdots,P_{N^{\prime}}\\}$,
$\displaystyle\widehat{u}_{\infty}(x,t)$ $\displaystyle\geq$
$\displaystyle\int_{\mathbb{R}^{n}}\widehat{u}_{\infty}(y,s)\zeta_{R}(y)G(x-y,t-s)dy$
$\displaystyle+$ $\displaystyle
c(n)\sum_{j=1}^{N^{\prime}}m_{j}^{\frac{n-2}{2}}\int_{s}^{t}G(x-P_{j},t-\tau)d\tau.$
Here $G$ denotes the standard heat kernel on $\mathbb{R}^{n}$.
Letting $R\to+\infty$, by the monotone convergence theorem we obtain
$\widehat{u}_{\infty}(x,t)\geq\int_{\mathbb{R}^{n}}\widehat{u}_{\infty}(y,s)G(x-y,t-s)dy+c(n)\sum_{j=1}^{N^{\prime}}m_{j}^{\frac{n-2}{2}}\int_{s}^{t}G(x-P_{j},t-\tau)d\tau.$
Letting $s\to-\infty$, we get
$\widehat{u}_{\infty}(x,t)\geq
c(n)\sum_{j=1}^{N^{\prime}}m_{j}^{\frac{n-2}{2}}|x-P_{j}|^{2-n}.$
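The last step uses the standard identity expressing the Newtonian potential as a time integral of the heat kernel (valid for $n\geq 3$; the resulting dimensional factor is absorbed into $c(n)$):

$\int_{-\infty}^{t}G(x-P_{j},t-\tau)d\tau=\int_{0}^{+\infty}(4\pi\sigma)^{-\frac{n}{2}}e^{-\frac{|x-P_{j}|^{2}}{4\sigma}}d\sigma=\frac{\Gamma\left(\frac{n}{2}-1\right)}{4\pi^{n/2}}|x-P_{j}|^{2-n}.$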
Thus their difference
$\widehat{v}_{\infty}(x,t):=\widehat{u}_{\infty}(x,t)-c(n)\sum_{j=1}^{N^{\prime}}m_{j}^{\frac{n-2}{2}}|x-P_{j}|^{2-n}$
is a positive caloric function on $\mathbb{R}^{n}\times\mathbb{R}$. By [51] or
[90], there exists a nonnegative Radon measure $\nu$ on
$\\{(\xi,\lambda):\lambda=|\xi|^{2}\\}\subset\mathbb{R}^{n}\times\mathbb{R}$
such that
$\widehat{v}_{\infty}(x,t)=\int_{\\{\lambda=|\xi|^{2}\\}}e^{\lambda t+\xi\cdot
x}d\nu(\xi,\lambda).$
Because (20.16) holds for any $R$, $\widehat{u}_{\infty}$ is bounded in
$\left(B_{1}(P_{1})\setminus B_{1/2}(P_{1})\right)\times\mathbb{R}$, so
$\widehat{v}_{\infty}$ is also bounded in $\left(B_{1}(P_{1})\setminus
B_{1/2}(P_{1})\right)\times\mathbb{R}$. This is possible only if $\nu=0$, and
(20.22) follows. ∎
By this lemma, near $P_{1}=0$, there exists a smooth harmonic function $h$
such that
$\widehat{u}_{\infty}(x)=c(n)m_{1}^{\frac{n-2}{2}}|x|^{2-n}+h(x).$
Because $P_{2}\in\partial B_{1}$, we deduce that
$h(0)>0.$ (20.23)
#### 20.2. Local Pohozaev invariants
By the expansion of $\widehat{u}_{\infty}(x)$ near $0$, in particular,
(20.23), and a direct calculation, we find a dimensional constant $c(n)>0$
such that for all $r$ small,
$\mathcal{P}_{\widehat{u}_{\infty}}(r)=\frac{c(n)m_{1}h(0)}{r}+O(1).$ (20.24)
On the other hand, because $\widehat{u}_{\infty}$ comes from $u_{i}$, we claim
that
###### Lemma 20.5.
For all $r$ small,
$\mathcal{P}_{\widehat{u}_{\infty}}(r)\lesssim K^{-2}+r.$ (20.25)
###### Proof.
In (18.1), choose $r_{i}$ to be a fixed, small $r>0$. Multiply both sides of
(18.1) by $\varepsilon_{i}^{2-n}$. Let us analyse the convergence of both
sides.
Step 1. The left hand side of (18.1).
By the convergence of $\widehat{u}_{i}$ in
$C^{\infty}_{loc}\left(\left(\mathbb{R}^{n}\setminus\cup_{j=1}^{N^{\prime}}\\{P_{j}\\}\right)\times\mathbb{R}\right)$,
we get
$\varepsilon_{i}^{2-n}\mathcal{P}_{u_{i}(0)}(r)=\mathcal{P}_{\widehat{u}_{i}(0)}(r)\to\mathcal{P}_{\widehat{u}_{\infty}(0)}(r),\quad\mbox{as
}~{}~{}i\to+\infty$ (20.26)
and
$\varepsilon_{i}^{2-n}\int_{\partial
B_{r}}u_{i}^{p+1}\lesssim\varepsilon_{i}^{2}\to 0,\quad\mbox{as
}~{}~{}i\to+\infty.$ (20.27)
Hence the left hand side converges to $\mathcal{P}_{\widehat{u}_{\infty}}(r)$.
Step 2. The right hand side of (18.1).
For simplicity of notations, denote the parameters
$(a_{i1},\xi_{i1},\lambda_{i1})$ just by $(a_{i},\xi_{i},\lambda_{i})$, and
$\phi_{i1}$, the error function in $Q_{1}$, just by $\phi_{i}$. Plugging the
$L^{\infty}$ estimate (20.14) into Proposition 17.6, we get the following
estimates:
$\sup_{t\in(-1,1)}|a_{i}(t)|\lesssim\varepsilon_{i}^{n-1},$ (20.28)
$\sup_{t\in(-1,1)}\left(\left|a_{i}^{\prime}(t)-\mu_{0}\frac{a_{i}(t)}{\lambda_{i}(t)^{2}}\right|+\left|\xi_{i}^{\prime}(t)\right|+\left|\lambda_{i}^{\prime}(t)\right|\right)\lesssim
K^{-\frac{n-2}{2}}\varepsilon_{i}^{n-3},$ (20.29)
$|\nabla\phi_{i}(x,t)|\lesssim\frac{\varepsilon_{i}^{(n+2)/2}}{\varepsilon_{i}^{3}+|x|^{3}}+\varepsilon_{i}^{\frac{n-2}{2}}\quad\mbox{for
any}~{}~{}(x,t)\in Q_{1},$ (20.30)
and
$|\partial_{t}\phi_{i}(x,t)|\lesssim\frac{\varepsilon_{i}^{(n+2)/2}}{\varepsilon_{i}^{4}+|x|^{4}}+\frac{\varepsilon_{i}^{(n-2)/2}}{\varepsilon_{i}+|x|}\quad\mbox{for
any}~{}~{}(x,t)\in Q_{1}.$ (20.31)
The factor $K^{-(n-2)/2}$ in (20.29) comes from Lemma 13.5.
As in Subsection 18.2, we expand the right hand side of (18.1) into nine
terms, $\mathrm{I}$-$\mathrm{IX}$. Let us estimate them one by one.
1. (1)
By (20.29) and the orthogonal relation between $Z_{0}$ and $Z_{n+1}$,
$\left|\lambda_{i}(0)a_{i}^{\prime}(0)\int_{B_{r}}Z_{0,i}(x,0)Z_{n+1,i}(x,0)dx\right|\lesssim\varepsilon_{i}^{n-2}e^{-cr/\varepsilon_{i}}.$
Similarly, for each $j=1,\cdots,n$,
$\left|\lambda_{i}(0)\xi_{i}^{\prime}(0)\int_{B_{r}}Z_{j,i}(x,0)Z_{n+1,i}(x,0)dx\right|\lesssim\varepsilon_{i}^{2n-5}r^{3-n},$
and
$\left|\lambda_{i}(0)\lambda_{i}^{\prime}(0)\int_{B_{r}}Z_{n+1,i}(x,0)^{2}dx\right|\lesssim
K^{-\frac{n-2}{2}}\varepsilon_{i}^{n-2}.$
Combining these three estimates, we obtain
$|\mathrm{I}|\lesssim\left[K^{-\frac{n-2}{2}}+\left(\frac{\varepsilon_{i}}{r}\right)^{n-3}+e^{-cr/\varepsilon_{i}}\right]\varepsilon_{i}^{n-2}.$
2. (2)
By (20.14), (20.29) and (20.30),
$|\mathrm{II}|\lesssim\varepsilon_{i}^{2n-6}r^{2}.$
3. (3)
By (20.28) and (20.29),
$|\mathrm{III}|\lesssim\varepsilon_{i}^{2n-4}.$
4. (4)
As in the treatment of $\mathrm{IV}$ in Subsection 18.2, we get
$|\mathrm{IV}|\lesssim
K^{2}\varepsilon_{i}^{\frac{3n}{2}-3}+\left(K^{-2}+r\right)\varepsilon_{i}^{n-2}.$
5. (5)
By (20.14), (20.28) and (20.29),
$|\mathrm{V}|\lesssim\varepsilon_{i}^{n}r^{n-4}+\varepsilon_{i}^{n-2}r^{n-1}.$
6. (6)
By (20.28) and (20.31),
$|\mathrm{VI}|\lesssim\varepsilon_{i}^{2n-4}.$
7. (7)
By (20.28) and (20.29),
$|\mathrm{VII}|\lesssim\varepsilon_{i}^{2n-4}.$
8. (8)
By (20.28), (20.29) and (20.30),
$|\mathrm{VIII}|\lesssim\varepsilon_{i}^{3n-6}.$
9. (9)
By (20.28) and (20.29),
$|\mathrm{IX}|\lesssim\varepsilon_{i}^{3n-6}.$
Putting these estimates together and using (20.26), we obtain (20.25). ∎
Combining this lemma with (20.24), after letting $r\to 0$, we deduce that
$h(0)=0$. This is a contradiction with (20.23), so (20.7) cannot be true. The
proof of Proposition 20.1 is thus complete.
### Appendix A Linearization estimates around bubbles
In this appendix, we collect some estimates about the linearized equation
around the standard bubble. This is mainly used to study the inner problem in
Section 13.
Recall that the standard positive bubble is
$W(x):=\left(1+\frac{|x|^{2}}{n(n-2)}\right)^{-\frac{n-2}{2}}.$
Concerning eigenvalues and eigenfunctions for the linearized operator
$-\Delta-pW^{p-1}$, we have (see for example [11, Proposition 2.2])
###### Theorem A.1.
* (i)
There exists one and only one negative eigenvalue for $-\Delta-pW^{p-1}$,
denoted by $-\mu_{0}$, for which there exists a unique (up to a constant),
positive, radially symmetric and exponentially decaying eigenfunction $Z_{0}$.
* (ii)
There exist exactly $(n+1)$ eigenfunctions $Z_{i}$ in
$L^{\infty}(\mathbb{R}^{n})$ corresponding to the eigenvalue $0$, given by
$\left\\{\begin{aligned} &Z_{i}=\frac{\partial W}{\partial x_{i}},\quad
i=1,\cdots,n,\\\ &Z_{n+1}=\frac{n-2}{2}W+x\cdot\nabla W.\end{aligned}\right.$
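As a concrete check of Theorem A.1 (a numerical illustration of standard facts, not part of the text): in the radial variable, $W$ satisfies $\Delta W+W^{p}=0$, and the dilation mode $Z_{n+1}$ satisfies the kernel equation $\Delta Z_{n+1}+pW^{p-1}Z_{n+1}=0$. A finite-difference sketch for $n=3$:

```python
import math

n = 3                          # dimension used for the check
p = (n + 2) / (n - 2)          # critical exponent, p = 5 for n = 3

def W(r):
    # standard bubble W(x) = (1 + |x|^2/(n(n-2)))^{-(n-2)/2}, radial profile
    return (1.0 + r * r / (n * (n - 2))) ** (-(n - 2) / 2.0)

def dW(r):
    # analytic radial derivative of W, written out for n = 3
    return -(r / 3.0) * (1.0 + r * r / 3.0) ** (-1.5)

def Z(r):
    # dilation mode Z_{n+1} = (n-2)/2 * W + r * W'
    return (n - 2) / 2.0 * W(r) + r * dW(r)

def radial_laplacian(f, r, h=1e-4):
    # Laplacian of a radial function on R^n: f'' + (n-1)/r * f'
    f1 = (f(r + h) - f(r - h)) / (2.0 * h)
    f2 = (f(r + h) - 2.0 * f(r) + f(r - h)) / (h * h)
    return f2 + (n - 1) / r * f1

r0 = 1.3
bubble_residual = radial_laplacian(W, r0) + W(r0) ** p
kernel_residual = radial_laplacian(Z, r0) + p * W(r0) ** (p - 1) * Z(r0)
```

Both residuals vanish up to discretization error, confirming the bubble equation and the zero-mode property at this sample radius.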
###### Remark A.2.
The following decay as $|x|\to\infty$ holds for these eigenfunctions:
$\left\\{\begin{aligned} &Z_{0}(x)\lesssim e^{-c|x|},\\\
&|Z_{i}(x)|\lesssim|x|^{1-n},\quad i=1,\cdots,n,\\\
&|Z_{n+1}(x)|\lesssim|x|^{2-n}.\end{aligned}\right.$
Throughout the paper $Z_{0}$ is normalized so that
$\int_{\mathbb{R}^{n}}Z_{0}^{2}=1.$
For any $\xi\in\mathbb{R}^{n}$ and $\lambda\in\mathbb{R}^{+}$, in accordance
with the scalings for $W$, define
$Z_{i,\xi,\lambda}(x):=\lambda^{-\frac{n}{2}}Z_{i}\left(\frac{x-\xi}{\lambda}\right).$
This scaling preserves the $L^{2}(\mathbb{R}^{n})$ norm of $Z_{i}$.
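Indeed, the normalization is just the change of variables $y=(x-\xi)/\lambda$, $dx=\lambda^{n}dy$:

```latex
\int_{\mathbb{R}^{n}}Z_{i,\xi,\lambda}(x)^{2}dx
=\lambda^{-n}\int_{\mathbb{R}^{n}}Z_{i}\left(\frac{x-\xi}{\lambda}\right)^{2}dx
=\int_{\mathbb{R}^{n}}Z_{i}(y)^{2}dy.
```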
Next we study the linearized parabolic equation around $W$. The first one is a
nondegeneracy result.
###### Proposition A.3 (Nondegeneracy).
Suppose $\alpha>2$ and $\varphi\in L^{\infty}(-\infty,0;\mathcal{X}_{\alpha})$
is a solution of
$\partial_{t}\varphi-\Delta\varphi=pW^{p-1}\varphi,\quad\mbox{in
}~{}~{}\mathbb{R}^{n}\times\mathbb{R}^{-}.$ (A.1)
Then there exist $(n+2)$ constants $c_{0},\cdots,c_{n+1}$ such that
$\varphi(x,t)\equiv c_{0}e^{\mu_{0}t}Z_{0}(x)+\sum_{i=1}^{n+1}c_{i}Z_{i}(x).$
###### Proof.
By standard parabolic regularity theory and the $O(|x|^{-4})$ decay of
$W^{p-1}$ at infinity, we deduce that $\varphi\in
L^{\infty}(-\infty,0;\mathcal{X}_{\alpha}^{2+\theta})$ and
$\partial_{t}\varphi\in L^{\infty}(-\infty,0;\mathcal{X}_{2+\alpha})$. Because
$\alpha>2$, we may assume
$\int_{\mathbb{R}^{n}}\varphi(x,t)Z_{i}(x)dx\equiv 0,\quad\forall t<0,\quad
i=0,\cdots,n+1.$
Then $\partial_{t}\varphi$ satisfies these orthogonal conditions, too. Since it
is also a solution of (A.1), by the method in [11, Section 2] (in particular,
the coercivity estimate in [11, Lemma 2.3]), we deduce that
$\partial_{t}\varphi\equiv 0$. Therefore $\varphi$ is a stationary solution of
(A.1). Since $\varphi\in\mathcal{X}_{\alpha}$ and it is orthogonal to $Z_{i}$,
by Theorem A.1, $\varphi\equiv 0$. ∎
Now we state two a priori estimates for this linearized equation.
###### Lemma A.4.
Suppose $\alpha^{\prime}>\max\\{2,n/2\\}$,
$\varphi_{0}\in\mathcal{X}_{\alpha^{\prime}}$. If $\varphi_{0}$ satisfies the
orthogonal condition
$\int_{\mathbb{R}^{n}}\varphi_{0}(x)Z_{i}(x)dx=0,\quad\forall i=0,\cdots,n+1,$
(A.2)
then there exists a unique $\varphi\in
L^{\infty}_{loc}\left(\mathbb{R}^{+};\mathcal{X}_{\alpha^{\prime}}\right)$
solving the equation
$\left\\{\begin{aligned}
&\partial_{t}\varphi-\Delta\varphi=pW^{p-1}\varphi\quad\mbox{in}~{}~{}\mathbb{R}^{n}\times(0,2T_{D}),\\\
&\varphi(x,0)=\varphi_{0}(x)\quad\mbox{on}~{}~{}\mathbb{R}^{n}.\end{aligned}\right.$
(A.3)
Moreover, $\varphi$ satisfies the orthogonal condition
$\int_{\mathbb{R}^{n}}\varphi(x,t)Z_{i}(x)dx=0,\quad\forall t>0,\quad\forall
i=0,\cdots,n+1,$ (A.4)
and the decay property
$\varphi(\cdot,\cdot+t)\to
0\quad\mbox{in}~{}~{}C_{loc}^{(1+\theta)/2}(\mathbb{R};C_{loc}^{1,\theta}(\mathbb{R}^{n}))~{}~{}\mbox{as}~{}~{}t\to+\infty.$
(A.5)
###### Proof.
Existence of a global solution $\varphi$ to the problem (A.3) follows from
standard parabolic theory, if we note that the heat semigroup $e^{t\Delta}$ is
uniformly bounded as an operator from $\mathcal{X}_{\alpha^{\prime}}$ into
itself.
By standard parabolic regularity theory, $\partial_{t}\varphi$,
$\nabla\varphi$ and $\Delta\varphi$ belong to
$L^{\infty}_{loc}\left(\mathbb{R}^{+};\mathcal{X}_{\alpha^{\prime}}\right)$.
The orthogonal condition (A.4) then follows by testing (A.3) with $Z_{i}$.
Testing (A.3) with $\varphi$ and then applying Theorem A.1, we obtain
$\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^{n}}\varphi(x,t)^{2}dx=-\int_{\mathbb{R}^{n}}\left[|\nabla\varphi(x,t)|^{2}-pW(x)^{p-1}\varphi(x,t)^{2}\right]dx\leq
0.$ (A.6)
Testing (A.3) with $\partial_{t}\varphi$, we also obtain
$\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^{n}}\left[|\nabla\varphi(x,t)|^{2}-pW(x)^{p-1}\varphi(x,t)^{2}\right]dx=-\int_{\mathbb{R}^{n}}\partial_{t}\varphi(x,t)^{2}dx\leq
0.$ (A.7)
Combining these two identities with Proposition A.3, we deduce that
$\lim_{t\to+\infty}\|\varphi(t)\|_{H^{1}(\mathbb{R}^{n})}=0.$
Then (A.5) follows by an application of standard parabolic regularity theory.
∎
The second one is inspired by [73, Lemma 5.1].
###### Lemma A.5.
If $\alpha>2$, there exists a positive constant $C(\alpha)$ so that the
following holds. Given $T>1$, assume $\varphi\in
L^{\infty}\left(0,T;\mathcal{X}_{\alpha}\right)$, $a_{i}\in L^{\infty}([0,T])$
and $E\in L^{\infty}\left(0,T;\mathcal{X}_{2+\alpha}\right)$ solve the problem
$\left\\{\begin{aligned}
&\partial_{t}\varphi-\Delta\varphi=pW^{p-1}\varphi+\sum_{i=0}^{n+1}a_{i}Z_{i}+E\quad\mbox{in}~{}~{}\mathbb{R}^{n}\times(0,T),\\\
&\varphi(x,0)=0,\end{aligned}\right.$ (A.8)
and $\varphi$ satisfies the orthogonal condition
$\int_{\mathbb{R}^{n}}\varphi(x,t)Z_{i}(x)dx=0,\quad\forall
t\in[0,T],\quad\forall i=0,\cdots,n+1.$ (A.9)
Then
$\left\\{\begin{aligned} &\sum_{i=0}^{n+1}|a_{i}(t)|\leq
C\|E(t)\|_{2+\alpha},\quad\forall t\in[0,T],\\\
&\|\varphi\|_{C^{(1+\theta)/2}\left(0,T;\mathcal{X}^{1+\theta}_{\alpha}\right)}\leq
C(\alpha)\|E\|_{L^{\infty}\left(0,T;\mathcal{X}_{2+\alpha}\right)}.\end{aligned}\right.$
###### Proof.
First, for each $i$, multiplying (A.8) by $Z_{i}$ and integrating on
$\mathbb{R}^{n}$, we obtain
$a_{i}(t)=-\frac{\int_{\mathbb{R}^{n}}E(x,t)Z_{i}(x)}{\int_{\mathbb{R}^{n}}Z_{i}(x)^{2}}.$
(A.10)
Because $\alpha>2$, this gives the estimate on $a_{i}(t)$.
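To spell out this step, assume (my reading, consistent with how the norm is used in (A.14)) that $\|f\|_{\beta}=\sup_{x\in\mathbb{R}^{n}}(1+|x|)^{\beta}|f(x)|$ on $\mathcal{X}_{\beta}$. Then (A.10) and the decay rates in Remark A.2 give

```latex
|a_{i}(t)|\lesssim\|E(t)\|_{2+\alpha}\int_{\mathbb{R}^{n}}(1+|x|)^{-2-\alpha}|Z_{i}(x)|dx
\lesssim\|E(t)\|_{2+\alpha},
```

since $|Z_{i}(x)|\lesssim(1+|x|)^{2-n}$ in the worst case, so the integrand is $O\left((1+|x|)^{-n-\alpha}\right)$ and the integral converges for any $\alpha>0$.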
For the second estimate, by standard parabolic regularity theory, it suffices
to prove
$\|\varphi\|_{L^{\infty}\left(0,T;\mathcal{X}_{\alpha}\right)}\leq
C(\alpha)\|E\|_{L^{\infty}\left(0,T;\mathcal{X}_{2+\alpha}\right)}.$ (A.11)
We will argue by contradiction. Assume there exists a sequence of $T_{k}>1$, a
sequence of $\varphi_{k}\in
L^{\infty}\left(0,T_{k};\mathcal{X}_{\alpha}\right)$, $a_{k,i}\in
L^{\infty}\left([0,T_{k}]\right)$ and $E_{k}\in
L^{\infty}\left(0,T_{k};\mathcal{X}_{2+\alpha}\right)$, satisfying (A.8) and
(A.9), with
$\|\varphi_{k}\|_{L^{\infty}\left(0,T_{k};\mathcal{X}_{\alpha}\right)}=1$
(A.12)
but
$\|E_{k}\|_{L^{\infty}\left(0,T_{k};\mathcal{X}_{2+\alpha}\right)}\leq\frac{1}{k}.$
(A.13)
Take a $t_{k}\in(0,T_{k})$ with $\|\varphi_{k}(t_{k})\|_{\alpha}\geq 1/2$ and
a point $x_{k}\in\mathbb{R}^{n}$ such that
$\left(1+|x_{k}|\right)^{\alpha}|\varphi_{k}(x_{k},t_{k})|\geq 1/4.$ (A.14)
We claim that
###### Sub-Lemma.
$\limsup_{k\to\infty}|x_{k}|<+\infty$.
###### Proof.
By the heat kernel representation,
$\displaystyle\varphi_{k}(x,t_{k})$ $\displaystyle=$
$\displaystyle\int_{0}^{t_{k}}\int_{\mathbb{R}^{n}}\left[4\pi(t_{k}-t)\right]^{-\frac{n}{2}}e^{-\frac{|x-y|^{2}}{4(t_{k}-t)}}$
$\displaystyle\quad\quad\quad\quad\times\left[pW(y)^{p-1}\varphi_{k}(y,t)+\sum_{i=0}^{n+1}a_{k,i}(t)Z_{i}(y)+E_{k}(y,t)\right]dydt.$
Following the calculation in [87, Appendix B], we get
$\displaystyle\left|\int_{0}^{t_{k}}\int_{\mathbb{R}^{n}}\left[4\pi(t_{k}-t)\right]^{-\frac{n}{2}}e^{-\frac{|x-y|^{2}}{4(t_{k}-t)}}pW(y)^{p-1}\varphi_{k}(y,t)dydt\right|$
$\displaystyle\lesssim$
$\displaystyle\int_{\mathbb{R}^{n}}|x-y|^{2-n}\left(1+|y|\right)^{-4-\alpha}$
$\displaystyle\lesssim$ $\displaystyle\left(1+|x|\right)^{-2-\alpha}$
and
$\displaystyle\left|\int_{0}^{t_{k}}\int_{\mathbb{R}^{n}}\left[4\pi(t_{k}-t)\right]^{-\frac{n}{2}}e^{-\frac{|x-y|^{2}}{4(t_{k}-t)}}\left[\sum_{i=0}^{n+1}a_{k,i}(t)Z_{i}(y)+E_{k}(y,t)\right]dydt\right|$
$\displaystyle\lesssim$
$\displaystyle\frac{1}{k}\int_{\mathbb{R}^{n}}|x-y|^{2-n}\left(1+|y|\right)^{-2-\alpha}$
$\displaystyle\lesssim$
$\displaystyle\frac{1}{k}\left(1+|x|\right)^{-\alpha}.$
Putting these two estimates together we get
$|\varphi_{k}(x_{k},t_{k})|\leq
C\left(1+|x_{k}|\right)^{-2-\alpha}+\frac{1}{k}\left(1+|x_{k}|\right)^{-\alpha}.$
Combining this inequality with (A.14) we finish the proof of this sub-lemma. ∎
Next we divide the proof into two cases.
Case 1. $t_{k}\to+\infty$.
Let
$\widetilde{\varphi}_{k}(x,t):=\varphi_{k}(x,t_{k}+t).$
An application of standard parabolic regularity theory shows that
$\widetilde{\varphi}_{k}$ are uniformly bounded in
$C^{1+\theta,(1+\theta)/2}_{loc}(\mathbb{R}^{n}\times(-\infty,0])$. After
passing to a subsequence, $\widetilde{\varphi}_{k}$ converges to a limit
$\widetilde{\varphi}_{\infty}$, which satisfies the following conditions.
* •
Passing to the limit in (A.12) gives
$\|\widetilde{\varphi}_{\infty}\|_{L^{\infty}\left(-\infty,0;\mathcal{X}_{\alpha}\right)}\leq
1.$ (A.15)
* •
Since $\alpha>2$, we can pass to the limit in (A.9) for $\varphi_{k}$,
obtaining
$\int_{\mathbb{R}^{n}}\widetilde{\varphi}_{\infty}(x,t)Z_{i}(x)dx=0,\quad\quad\mbox{for
any}~{}i=0,\cdots,n+1,\quad t\leq 0.$ (A.16)
* •
Passing to the limit in (A.8) for $\widetilde{\varphi}_{k}$ and noting (A.13)
as well as the estimate on $a_{k,i}$, we see $\widetilde{\varphi}_{\infty}$ is
a solution of (A.1).
By Proposition A.3, these three conditions imply that
$\widetilde{\varphi}_{\infty}\equiv 0$.
On the other hand, by the Sub-Lemma we may assume $x_{k}\to x_{\infty}$ along a
subsequence; passing to the limit in (A.14) then leads to
$\left(1+|x_{\infty}|\right)^{\alpha}|\widetilde{\varphi}_{\infty}(x_{\infty},0)|\geq
1/4.$
This is a contradiction with the fact that $\widetilde{\varphi}_{\infty}\equiv
0$.
Case 2. $t_{k}\to t_{\infty}$.
As in the previous case, now $\varphi_{k}$ itself converges to a limit
$\varphi_{\infty}$, which solves the equation
$\left\\{\begin{aligned}
&\partial_{t}\varphi_{\infty}-\Delta\varphi_{\infty}=pW^{p-1}\varphi_{\infty}\quad\mbox{in}~{}~{}\mathbb{R}^{n}\times(0,t_{\infty}),\\\
&\varphi_{\infty}(x,0)=0.\end{aligned}\right.$
Passing to the limit in (A.12) still gives
$\|\varphi_{\infty}\|_{L^{\infty}\left(0,t_{\infty};\mathcal{X}_{\alpha}\right)}\leq
1.$ (A.17)
By standard parabolic theory, we also get $\varphi_{\infty}\equiv 0$. Then we
get the same contradiction as in Case 1. ∎
### Appendix B Estimates on some integrals
In this appendix we give some technical integral estimates involving the heat
kernel associated to the outer equation in Section 14.
###### Lemma B.1.
Assume $0<\nu<n$, $0\leq\gamma<n-\nu$, $t>s$. Then
$\int_{\mathbb{R}^{n}}\left(t-s\right)^{-\frac{n}{2}}e^{-c\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}|y|^{-\nu}dy\lesssim\left(|x|+\sqrt{t-s}\right)^{-\nu}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}.$
###### Proof.
After a change of variables $\hat{x}:=x/\sqrt{t-s}$, $\hat{y}:=y/\sqrt{t-s}$,
this integral is transformed into
$(t-s)^{-\nu/2}\int_{\mathbb{R}^{n}}e^{-c|\hat{x}-\hat{y}|^{2}}\left(1+\frac{1}{|\hat{y}|}\right)^{\gamma}|\hat{y}|^{-\nu}d\hat{y}.$
To estimate the integral in this formula, we consider two cases separately.
Case 1. If $|\hat{x}|\leq 1$, this integral is bounded by a universal
constant: near the origin the factor
$\left(1+|\hat{y}|^{-1}\right)^{\gamma}|\hat{y}|^{-\nu}\lesssim|\hat{y}|^{-\nu-\gamma}$
is integrable because $\nu+\gamma<n$, while the Gaussian factor gives
convergence at infinity.
Case 2. If $|\hat{x}|\geq 1$, we divide this integral into two parts,
$\mathrm{I}$ outside $B_{|\hat{x}|/3}(0)$, and $\mathrm{II}$ in
$B_{|\hat{x}|/3}(0)$.
Outside $B_{|\hat{x}|/3}(0)$,
$\left(1+\frac{1}{|\hat{y}|}\right)^{\gamma}|\hat{y}|^{-\nu}\lesssim\left(1+\frac{1}{|\hat{x}|}\right)^{\gamma}|\hat{x}|^{-\nu}.$
Hence
$\displaystyle\mathrm{I}$ $\displaystyle\lesssim$
$\displaystyle\left(1+\frac{1}{|\hat{x}|}\right)^{\gamma}|\hat{x}|^{-\nu}\int_{B_{|\hat{x}|/3}(0)^{c}}e^{-c|\hat{x}-\hat{y}|^{2}}d\hat{y}$
$\displaystyle\lesssim$
$\displaystyle(t-s)^{\nu/2}|x|^{-\nu}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}.$
For $\mathrm{II}$, by noting that for $\hat{y}\in B_{|\hat{x}|/3}(0)$,
$e^{-c|\hat{x}-\hat{y}|^{2}}\lesssim e^{-c|\hat{x}|^{2}/9},$
we obtain
$\displaystyle\mathrm{II}$ $\displaystyle\lesssim$ $\displaystyle
e^{-c|\hat{x}|^{2}/9}\int_{B_{|\hat{x}|/3}(0)}|\hat{y}|^{-\nu-\gamma}d\hat{y}$
$\displaystyle\lesssim$ $\displaystyle e^{-c\frac{|x|^{2}}{t-s}}$
$\displaystyle\lesssim$ $\displaystyle(t-s)^{\nu/2}|x|^{-\nu}.$
Adding $\mathrm{I}$ and $\mathrm{II}$ together, we get
$\int_{\mathbb{R}^{n}}e^{-c|\hat{x}-\hat{y}|^{2}}\left(1+\frac{1}{|\hat{y}|}\right)^{\gamma}|\hat{y}|^{-\nu}d\hat{y}\lesssim(t-s)^{\nu/2}|x|^{-\nu}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}.\qed$
###### Lemma B.2.
Assume $t>s$, $c_{1}$, $c_{2}$, $L$ and $\lambda$ are four positive constants.
Then
$\displaystyle\int_{B_{L\lambda}^{c}}\left(t-s\right)^{-\frac{n}{2}}e^{-c_{1}\frac{|x-y|^{2}}{t-s}}\left(1+\frac{\sqrt{t-s}}{|y|}\right)^{\gamma}e^{-c_{2}\frac{|y|}{\lambda}}dy$
$\displaystyle\lesssim$ $\displaystyle
e^{-\frac{c_{2}}{3}\frac{|x|}{\lambda}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}+e^{-\frac{c_{2}L}{2}-\frac{c_{1}}{9}\frac{|x|^{2}}{t-s}}\left(\frac{\lambda}{\sqrt{t-s}}\right)^{n-\gamma}\left(1+\frac{\lambda}{\sqrt{t-s}}\right)^{\gamma}.$
###### Proof.
After a change of variables $\hat{x}:=x/\sqrt{t-s}$, $\hat{y}:=y/\sqrt{t-s}$,
this integral is transformed into
$\int_{B_{L\lambda/\sqrt{t-s}}^{c}}e^{-c_{1}|\hat{x}-\hat{y}|^{2}}\left(1+\frac{1}{|\hat{y}|}\right)^{\gamma}e^{-c_{2}\frac{\sqrt{t-s}}{\lambda}|\hat{y}|}d\hat{y}.$
We divide this integral into two parts, $\mathrm{I}$ outside
$B_{|\hat{x}|/3}(0)$, and $\mathrm{II}$ in $B_{|\hat{x}|/3}(0)$.
Outside $B_{|\hat{x}|/3}(0)$,
$\left\\{\begin{aligned} &e^{-c_{2}\frac{\sqrt{t-s}}{\lambda}|\hat{y}|}\leq
e^{-\frac{c_{2}}{3}\frac{\sqrt{t-s}}{\lambda}|\hat{x}|},\\\
&\left(1+\frac{1}{|\hat{y}|}\right)^{\gamma}\lesssim\left(1+\frac{1}{|\hat{x}|}\right)^{\gamma}.\end{aligned}\right.$
Therefore
$\displaystyle\mathrm{I}$ $\displaystyle\lesssim$ $\displaystyle
e^{-\frac{c_{2}}{3}\frac{\sqrt{t-s}}{\lambda}|\hat{x}|}\left(1+\frac{1}{|\hat{x}|}\right)^{\gamma}\int_{B_{|\hat{x}|/3}(0)^{c}}e^{-c_{1}|\hat{x}-\hat{y}|^{2}}d\hat{y}$
$\displaystyle\lesssim$ $\displaystyle
e^{-\frac{c_{2}}{3}\frac{|x|}{\lambda}}\left(1+\frac{\sqrt{t-s}}{|x|}\right)^{\gamma}.$
In $B_{|\hat{x}|/3}(0)$,
$e^{-c_{1}|\hat{x}-\hat{y}|^{2}}\leq e^{-c_{1}|\hat{x}|^{2}/9}.$
Hence
$\displaystyle\mathrm{II}$ $\displaystyle\lesssim$ $\displaystyle
e^{-c_{1}|\hat{x}|^{2}/9}\int_{B_{|\hat{x}|/3}\setminus
B_{L\lambda/\sqrt{t-s}}}\left(1+\frac{1}{|\hat{y}|}\right)^{\gamma}e^{-c_{2}\frac{\sqrt{t-s}}{\lambda}|\hat{y}|}d\hat{y}$
$\displaystyle\lesssim$ $\displaystyle
e^{-\frac{c_{2}L}{2}-\frac{c_{1}}{9}\frac{|x|^{2}}{t-s}}\left(\frac{\lambda}{\sqrt{t-s}}\right)^{n}\left(1+\frac{\sqrt{t-s}}{\lambda}\right)^{\gamma}$
$\displaystyle\lesssim$ $\displaystyle
e^{-\frac{c_{2}L}{2}-\frac{c_{1}}{9}\frac{|x|^{2}}{t-s}}\left(\frac{\lambda}{\sqrt{t-s}}\right)^{n-\gamma}\left(1+\frac{\lambda}{\sqrt{t-s}}\right)^{\gamma}.\qed$
## Part III Energy concentration in the general case
### 3\. Setting
In this part, we still consider a sequence of _smooth, positive solutions_
$u_{i}$ to the nonlinear heat equation (1.1) (with $p=(n+2)/(n-2)$) in
$Q_{1}$, but now satisfying the following three assumptions.
(III.a) Weak limit:
$u_{i}$ converges weakly to $u_{\infty}$ in $L^{p+1}(Q_{1})$, and $\nabla
u_{i}$ converges weakly to $\nabla u_{\infty}$ in $L^{2}(Q_{1})$. Here
$u_{\infty}$ is a _smooth solution_ of (1.1) in $Q_{1}$.
(III.b) Energy concentration behavior:
there exists an $N\geq 2$ such that
$\left\\{\begin{aligned} &|\nabla u_{i}|^{2}dxdt\rightharpoonup|\nabla
u_{\infty}|^{2}dxdt+N\Lambda\delta_{0}\otimes dt,\\\
&u_{i}^{p+1}dxdt\rightharpoonup u_{\infty}^{p+1}dxdt+N\Lambda\delta_{0}\otimes
dt,\end{aligned}\right.$
weakly as Radon measures.
(III.c) Convergence of time derivatives:
as $i\to\infty$, $\partial_{t}u_{i}$ converges to $\partial_{t}u_{\infty}$
strongly in $L^{2}(Q_{1})$.
The only difference from the assumptions in Part II is (III.b): here we do not
assume that there is only one bubble; in other words, we are now in _the
higher multiplicity case_ instead of _the multiplicity one case_.
The main result in this part is
###### Theorem 3.1.
After passing to a subsequence, the following statements hold for $u_{i}$.
1. (1)
For all $i$ and any $t\in[-9/16,9/16]$, there exist exactly $N$ local maximal
points of $u_{i}(\cdot,t)$ in the interior of $B_{1}(0)$.
Denote these points by $\xi_{ij}^{\ast}(t)$ ($j=1,\cdots,N$) and let
$\lambda_{ij}^{\ast}(t):=u_{i}(\xi_{ij}^{\ast}(t),t)^{-\frac{2}{n-2}}$.
2. (2)
Both $\lambda_{ij}^{\ast}$ and $\xi_{ij}^{\ast}$ are continuous functions on
$[-9/16,9/16]$.
3. (3)
There exists a constant $C_{2}$ such that, for all $i$, any
$t\in[-9/16,9/16]$ and any $x\in B_{1}$,
$\displaystyle\min_{1\leq j\leq
N}|x-\xi_{ij}^{\ast}(t)|^{\frac{n-2}{2}}u_{i}(x,t)+\min_{1\leq j\leq
N}|x-\xi_{ij}^{\ast}(t)|^{\frac{n}{2}}|\nabla u_{i}(x,t)|$ (3.1)
$\displaystyle\quad\quad\quad\quad+\min_{1\leq j\leq
N}|x-\xi_{ij}^{\ast}(t)|^{\frac{n+2}{2}}\left(|\nabla^{2}u_{i}(x,t)|+|\partial_{t}u_{i}(x,t)|\right)\leq
C_{2}.$
4. (4)
For each $j=1,\cdots,N$, as $i\to\infty$,
$\lambda_{ij}^{\ast}(t)\to 0,\quad\xi_{ij}^{\ast}(t)\to 0,\quad\mbox{uniformly
on }[-9/16,9/16],$
and the function
$u_{ij}^{t}(y,s):=\lambda_{ij}^{\ast}(t)^{\frac{n-2}{2}}u_{i}\left(\xi_{ij}^{\ast}(t)+\lambda_{ij}^{\ast}(t)y,t+\lambda_{ij}^{\ast}(t)^{2}s\right),$
converges to $W(y)$ in $C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
5. (5)
For any $1\leq j\neq k\leq N$ and $t\in[-9/16,9/16]$,
$\lim_{i\to+\infty}\frac{|\xi_{ij}^{\ast}(t)-\xi_{ik}^{\ast}(t)|}{\max\left\\{\lambda_{ij}^{\ast}(t),\lambda_{ik}^{\ast}(t)\right\\}}=+\infty.$
(3.2)
6. (6)
$u_{\infty}\equiv 0$.
###### Remark 3.2.
Item (5) says there is no bubble towering. Item (6) will be used (in an
inductive way) to prove this property.
The proof of this theorem uses an induction argument on $N$. This part is
organized in the following way.
1. (1)
In Section 4, we establish several preliminary results.
2. (2)
In Section 5, we find all bubbles under the inductive assumption that Theorem
3.1 holds up to $N-1$.
3. (3)
In Section 6, we prove the Lipschitz hypothesis (10.1) in Part II, under the
assumptions (II.a)-(II.c). This also provides the first step of our inductive
argument.
4. (4)
In Section 20, we finish the proof of Theorem 3.1 by employing Proposition
19.1 in Part II.
### 4\. Preliminaries
Before starting the bubble tree construction, we recall several technical
results.
The first one is a uniform Morrey space estimate on $u_{i}$, which is just a
special case of Corollary 3.5 in Part I.
###### Lemma 4.1.
There exists a constant $M>0$ such that for each $i$ and $Q_{r}(x,t)\subset
Q_{3/4}$,
$\int_{Q_{r}(x,t)}\left(|\nabla u_{i}|^{2}+u_{i}^{p+1}\right)\leq Mr^{2}.$
The second one is the non-concentration estimate of $\partial_{t}u_{i}$ in
Lemma 11.3 from Part II, which still holds in the current setting (thanks to
(III.c)).
The following lemma is about the convergence of rescalings of $u_{i}$.
###### Lemma 4.2.
Given a sequence $(x_{i},t_{i})\in Q_{1/2}$, $r_{i}\to 0$, define
$u_{i}^{r_{i}}(x,t):=r_{i}^{\frac{n-2}{2}}u_{i}\left(x_{i}+r_{i}x,t_{i}+r_{i}^{2}t\right).$
(4.1)
Then we have
1. (1)
After passing to a subsequence, $u_{i}^{r_{i}}$ converges weakly to $0$ or
$W_{\xi,\lambda}$ for some
$(\xi,\lambda)\in\mathbb{R}^{n}\times\mathbb{R}^{+}$.
2. (2)
If the defect measure associated to $u_{i}^{r_{i}}$ is nontrivial, it must be
of the form
$\sum_{j}k_{j}\Lambda\delta_{P_{j}}\otimes dt,$
where $P_{j}$ are distinct points in $\mathbb{R}^{n}$ and
$\sum_{j}k_{j}\leq M/\Lambda,\quad k_{j}\in\mathbb{N}.$
3. (3)
If there is no defect measure, then $u_{i}^{r_{i}}$ converges in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
###### Proof.
As in Part I, $u_{i}^{r_{i}}$ converges weakly to a nonnegative solution of
(1.1) in $L^{p+1}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$. By Lemma 11.3 (non-
concentration of time derivatives), $\partial_{t}u_{i}^{r_{i}}$ converges to
$0$ in $L^{2}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$. Hence this weak limit is
independent of time. By Caffarelli-Gidas-Spruck [8], it must be $0$ or
$W_{\xi,\lambda}$ for some
$(\xi,\lambda)\in\mathbb{R}^{n}\times\mathbb{R}^{+}$. Furthermore, the defect
measure is also independent of time, that is, there exists a Radon measure
$\mu_{0}$ on $\mathbb{R}^{n}$ such that the defect measure equals
$\mu_{0}\otimes dt$. By Lemma 4.1, for any $R>0$,
$\int_{-R^{2}}^{R^{2}}\int_{B_{R}}d\mu_{0}dt\leq MR^{2}.$
Since the left hand side equals $2R^{2}\mu_{0}(B_{R})$, this gives
$\mu_{0}(B_{R})\leq M/2$ for every $R$; letting $R\to+\infty$,
$\mu_{0}(\mathbb{R}^{n})\leq M.$ (4.2)
By Lemma 9.2, there exist finitely many distinct points
$P_{j}\in\mathbb{R}^{n}$ and constants $k_{j}\in\mathbb{N}$ such that
$\mu_{0}=\sum_{j}k_{j}\Lambda\delta_{P_{j}}.$
Finally, if there is no defect measure, the smooth convergence of
$u_{i}^{r_{i}}$ follows from a direct application of the
$\varepsilon$-regularity theorem, Theorem 3.9, by noting the smoothness of its
weak limit. ∎
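A remark on the scaling (4.1): the exponent $\frac{n-2}{2}$ is precisely the one leaving (1.1) invariant. Reading (1.1) as $\partial_{t}u=\Delta u+u^{p}$ with $p=\frac{n+2}{n-2}$,

```latex
\partial_{t}u_{i}^{r_{i}}-\Delta u_{i}^{r_{i}}
=r_{i}^{\frac{n-2}{2}+2}\left(\partial_{t}u_{i}-\Delta u_{i}\right)
=r_{i}^{\frac{n+2}{2}}u_{i}^{p}
=\left(u_{i}^{r_{i}}\right)^{p},
```

because $p\cdot\frac{n-2}{2}=\frac{n+2}{2}$; so each $u_{i}^{r_{i}}$ is again a solution of (1.1).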
We also need a technical result about the blow up of each time slice of
$u_{i}$. This will be used to find the first bubble in the next section.
###### Lemma 4.3.
For each $t\in[-9/16,9/16]$,
$\lim_{i\to+\infty}\max_{B_{1}}u_{i}(x,t)=+\infty.$
###### Proof.
For each $u_{i}$ and $(x,t)\in Q_{3/4}$, define $\rho_{i}(x,t)$ to be the
unique solution $r$ (if it exists) of the equation
$\Theta_{r^{2}}(x,t;u_{i})=\varepsilon_{\ast}.$
Here $\varepsilon_{\ast}$ is the small constant in the
$\varepsilon$-regularity theorem, Theorem 3.7.
By (III.b), if $x\neq 0$, there is a uniform, positive lower bound for
$\rho_{i}(x,t)$. By (III.a) and (III.b), for any $s>0$ fixed,
$\lim_{i\to+\infty}\Theta_{s}(0,t;u_{i})=\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}N+o_{s}(1).$
Therefore
$\lim_{i\to+\infty}\rho_{i}(0,t)=0.$
As a consequence, $\rho_{i}(\cdot,t)$ has an interior minimal point in
$B_{1}$, say $x_{i}(t)$. Denote $r_{i}:=\rho_{i}(x_{i}(t),t)$. Define
$u_{i}^{r_{i}}$ as in (4.1), with base point at $(x_{i}(t),t)$.
Claim. For some $(\xi,\lambda)\in\mathbb{R}^{n}\times\mathbb{R}^{+}$,
$u_{i}^{r_{i}}$ converges to $W_{\xi,\lambda}$ in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
By this claim we get
$\lim_{i\to+\infty}\rho_{i}(x_{i}(t),t)^{\frac{n-2}{2}}u_{i}(x_{i}(t),t)=W_{\xi,\lambda}(0)>0,$
and the proof of this lemma is complete.
Proof of the Claim. By the definition of $r_{i}$ and the scaling invariance of
$\Theta_{s}(\cdot)$, for any $y\in B_{r_{i}^{-1}}$,
$\Theta_{1}(y,0;u_{i}^{r_{i}})\leq\varepsilon_{\ast}.$
Moreover, the equality is attained at $y=0$, that is,
$\displaystyle\varepsilon_{\ast}$ $\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{n}}\left[\frac{|\nabla
u_{i}^{r_{i}}(y,-1)|^{2}}{2}-\frac{u_{i}^{r_{i}}(y,-1)^{p+1}}{p+1}\right]\psi\left(\frac{y-x_{i}(t)}{r_{i}}\right)^{2}G(y,1)dy$
(4.3) $\displaystyle+$
$\displaystyle\frac{1}{2(p-1)}\int_{\mathbb{R}^{n}}u_{i}^{r_{i}}(y,-1)^{2}\psi\left(\frac{y-x_{i}(t)}{r_{i}}\right)^{2}G(y,1)dy+Ce^{-cr_{i}^{-2}}.$
By Theorem 3.9, $u_{i}^{r_{i}}$ are uniformly bounded in
$C^{2}(B_{r_{i}^{-1}-1}\times(-\delta_{\ast},\delta_{\ast}))$. This then
implies that there is no defect measure appearing when we apply Lemma 4.2 to
$u_{i}^{r_{i}}$. In fact, if the defect measure is nontrivial, by Lemma 4.2,
it has the form
$\sum_{j}k_{j}\Lambda\delta_{P_{j}}\otimes dt,\quad
P_{j}\in\mathbb{R}^{n},~{}~{}k_{j}\in\mathbb{N}.$
Then by Lemma 9.3, for a.e. $t\in(-\delta_{\ast},\delta_{\ast})$,
$u_{i}^{r_{i}}(\cdot,t)$ should develop nontrivial Dirac measures at $P_{j}$.
This is a contradiction with the uniform
$C^{2}(B_{r_{i}^{-1}-1}\times(-\delta_{\ast},\delta_{\ast}))$ regularity of
$u_{i}^{r_{i}}$.
Since there is no defect measure, Lemma 4.2 implies that $u_{i}^{r_{i}}$
converges to $u_{\infty}$ in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$. In view of the Liouville
theorem in Caffarelli-Gidas-Spruck [8], it suffices to show that
$u_{\infty}\neq 0$. (By the strong maximum principle, either $u_{\infty}\equiv
0$ or $u_{\infty}>0$ everywhere.)
First by the monotonicity formula (Proposition 3.2), we have
$\displaystyle\varepsilon_{\ast}$ $\displaystyle\leq$
$\displaystyle\int_{-2}^{-1}(-s)^{\frac{p+1}{p-1}}\int_{\mathbb{R}^{n}}\left[\frac{|\nabla
u_{i}^{r_{i}}(y,s)|^{2}}{2}-\frac{u_{i}^{r_{i}}(y,s)^{p+1}}{p+1}\right]\psi\left(\frac{y-x_{i}(t)}{r_{i}}\right)^{2}G(y,-s)$
$\displaystyle+$
$\displaystyle\frac{1}{2(p-1)}\int_{-2}^{-1}(-s)^{\frac{2}{p-1}}\int_{\mathbb{R}^{n}}u_{i}^{r_{i}}(y,s)^{2}\psi\left(\frac{y-x_{i}(t)}{r_{i}}\right)^{2}G(y,-s)+Ce^{-cr_{i}^{-2}/4}.$
By the $C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$ convergence of
$u_{i}^{r_{i}}$, the Morrey space bound in Lemma 4.1 for $u_{i}^{r_{i}}$ and
the exponential decay of $G(y,-s)$ as $|y|\to+\infty$, we can let
$i\to+\infty$ in the above inequality to get
$\displaystyle\varepsilon_{\ast}$ $\displaystyle\leq$
$\displaystyle\int_{-2}^{-1}(-s)^{\frac{p+1}{p-1}}\int_{\mathbb{R}^{n}}\left[\frac{|\nabla
u_{\infty}(y,s)|^{2}}{2}-\frac{u_{\infty}(y,s)^{p+1}}{p+1}\right]G(y,-s)$
$\displaystyle+$
$\displaystyle\frac{1}{2(p-1)}\int_{-2}^{-1}(-s)^{\frac{2}{p-1}}\int_{\mathbb{R}^{n}}u_{\infty}(y,s)^{2}G(y,-s).$
Hence it is impossible that $u_{\infty}\equiv 0$. ∎
### 5\. Bubble tree construction
In this section, we construct bubbles by finding local maximal points of
$u_{i}(\cdot,t)$. The construction is divided into six steps. During the
course of this construction, we will also prove Theorem 3.1 except the last
point (6), under the inductive assumption that this theorem holds when the
multiplicity is not larger than $N-1$ ($N\geq 2$).
Step 1. Construction of the first maximal point. By (III.a) and Lemma 4.3,
for each $t$, $\max_{B_{1}}u_{i}(x,t)$ is attained at an interior point, say
$\xi_{i1}^{\ast}(t)$. Denote
$\lambda_{i1}^{\ast}(t):=u_{i}\left(\xi_{i1}^{\ast}(t),t\right)^{-\frac{2}{n-2}}.$
By Lemma 4.3,
$\lim_{i\to+\infty}\lambda_{i1}^{\ast}(t)=0.$
Let
$u_{i1}(y,s):=\lambda_{i1}^{\ast}(t)^{\frac{n-2}{2}}u_{i}\left(\xi_{i1}^{\ast}(t)+\lambda_{i1}^{\ast}(t)y,t+\lambda_{i1}^{\ast}(t)^{2}s\right).$
It satisfies the following conditions.
* •
By Lemma 4.1, for any $R>0$,
$\int_{Q_{R}}\left[|\nabla u_{i1}|^{2}+u_{i1}^{p+1}\right]\leq MR^{2}.$ (5.1)
* •
By Lemma 11.3, for any $R>0$,
$\lim_{i\to+\infty}\int_{Q_{R}}|\partial_{t}u_{i1}|^{2}=0.$ (5.2)
* •
At $s=0$,
$\max_{|y|\leq\lambda_{i1}^{\ast}(0)^{-1}/2}u_{i1}(y,0)=u_{i1}(0,0)=1.$ (5.3)
Using these conditions we show
###### Lemma 5.1.
The functions $u_{i1}$ are uniformly bounded in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$, and as $i\to+\infty$ they
converge to $W$ in $C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
###### Proof.
We use Lemma 4.2 to study the convergence of $u_{i1}$. We only need to show
that no defect measure appears. Once this is established, $u_{i1}$
converges in $C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$, and then by
(5.3) the limit must be $W$.
Assume on the contrary that the defect measure is nonzero. By Lemma 4.2, it
has the form
$\sum_{j}k_{j}\Lambda\delta_{P_{j}}\otimes dt,\quad
P_{j}\in\mathbb{R}^{n},~{}~{}k_{j}\in\mathbb{N}.$
Then applying Lemma 4.3 to $u_{i1}$ at $t=0$, we obtain
$\lim_{i\to+\infty}\max_{B_{1}(P_{j})}u_{i1}(x,0)=+\infty.$
This contradicts (5.3). ∎
By this lemma, there exists a sequence $R_{i1}\to+\infty$ such that
$\lim_{i\to+\infty}\sup_{y\in
B_{R_{i1}}}|y|^{\frac{n-2}{2}}u_{i1}(y,0)<+\infty.$ (5.4)
Step 2. Iterative construction. Suppose
$\xi_{i1}^{\ast}(t),\cdots,\xi_{i,j-1}^{\ast}(t)$ have been constructed.
If
$\max_{x\in B_{1}}\min_{1\leq k\leq
j-1}|x-\xi_{ik}^{\ast}(t)|^{\frac{n-2}{2}}u_{i}(x,t)$
is uniformly bounded in $i$, we stop at this step. Otherwise,
$\lim_{i\to\infty}\max_{x\in B_{1}}\min_{1\leq k\leq
j-1}|x-\xi_{ik}^{\ast}(t)|^{\frac{n-2}{2}}u_{i}(x,t)=+\infty.$ (5.5)
By (III.a), this function has an interior maximal point, say
$\xi_{ij}^{\ast}(t)$. Define $\lambda_{ij}^{\ast}(t)$ and $u_{ij}$ as in Step 1.
The estimates (5.1) and (5.2) still hold for $u_{ij}$, while (5.3) now reads
as
$\left\\{\begin{aligned} &u_{ij}(0,0)=1,\\\ &u_{ij}(y,0)\leq\frac{\min_{1\leq
k\leq j-1}|\xi_{ij}^{\ast}(t)-\xi_{ik}^{\ast}(t)|^{\frac{n-2}{2}}}{\min_{1\leq
k\leq
j-1}|\xi_{ij}^{\ast}(t)-\xi_{ik}^{\ast}(t)+\lambda_{ij}^{\ast}(t)y|^{\frac{n-2}{2}}},\end{aligned}\right.$
(5.6)
the right hand side of which converges to $1$ uniformly on any compact set of
$\mathbb{R}^{n}$, by noting that
$\lim_{i\to+\infty}\min_{1\leq k\leq
j-1}|\xi_{ij}^{\ast}(t)-\xi_{ik}^{\ast}(t)|^{\frac{n-2}{2}}u_{i}\left(\xi_{ij}^{\ast}(t),t\right)=+\infty.$
(This follows by letting $x=\xi_{ij}^{\ast}(t)$ in (5.5).)
We can argue as in Step 1 to deduce that $u_{ij}$ converges to $W$ in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$. Moreover, there exists a
sequence $R_{ij}\to+\infty$ such that
$\lim_{i\to+\infty}\sup_{y\in
B_{R_{ij}}}|y|^{\frac{n-2}{2}}u_{ij}(y,0)<+\infty.$ (5.7)
For any $k<j$, combining (5.5) (evaluated at $x=\xi_{ij}^{\ast}(t)$) and (5.4)
(with the index $1$ replaced by $k$), we obtain (3.2).
Step 3. $\xi_{ij}^{\ast}(t)$ are local maximal points of $u_{i}(\cdot,t)$.
Because $0$ is the maximal point of $W$ and the Hessian $\nabla^{2}W(0)$ is
strictly negative definite, by the above smooth convergence of $u_{ij}$ in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$, we deduce that
$\xi_{ij}^{\ast}(t)$ are local maximal points of $u_{i}(\cdot,t)$.
The strict concavity of $u_{i}$ near $\xi_{ij}^{\ast}(t)$ (with the help of
the implicit function theorem) also implies that $\xi_{ij}^{\ast}(t)$ depends
continuously on $t$. The continuity of $\lambda_{ij}^{\ast}(t)$ then follows
from its definition.
Step 4. Fix a large positive constant $R$. For each $t\in(-9/16,9/16)$, let
$\Omega_{i}(t):=\cup_{j}B_{R\lambda_{ij}^{\ast}(t)}\left(\xi_{ij}^{\ast}(t)\right).$
By (3.2), for all $i$ large enough, these balls are disjoint from each other,
and all of them are contained in $B_{1/2}$. Hence the above iterative
construction of $\xi_{ij}^{\ast}(t)$ must stop after finitely many steps. (At
this stage, we do not claim any bound on the number of these points that is
uniform in $i$.)
By our construction, there exists a constant $C_{2}$ such that for any $i$ and
$x\in B_{1}\setminus\Omega_{i}(t)$,
$u_{i}(x,t)\leq C_{2}\max_{j}|x-\xi_{ij}^{\ast}(t)|^{-\frac{n-2}{2}}.$ (5.8)
Then (3.1) follows by applying standard parabolic regularity theory.
Step 5. In this step and the next one we show that there is no bubble other
than those constructed in the previous steps, and the number of bubbles is
exactly $N$.
In this step we consider the case that there are at least two bubbles. For
each $t$, let
$d_{i}(t):=\max_{j\neq k}|\xi_{ij}^{\ast}(t)-\xi_{ik}^{\ast}(t)|.$ (5.9)
Without loss of generality, assume this is attained between
$\xi_{i1}^{\ast}(t)$ and $\xi_{i2}^{\ast}(t)$. By (3.2),
$\lim_{i\to+\infty}\frac{\max_{j}\lambda_{ij}^{\ast}(t)}{d_{i}(t)}=0.$ (5.10)
###### Lemma 5.2.
For any $\sigma\in(0,1)$, there exists an $R(\sigma)$ such that for each $i,j$
and $t$,
$\Theta_{R(\sigma)^{2}d_{i}(t)^{2}}\left(\xi_{ij}^{\ast}(t),t;u_{i}\right)\geq\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\left(N-\sigma\right).$
###### Proof.
Assume to the contrary that there exist a $\sigma\in(0,1)$ and a sequence $r_{i}$
satisfying
$\lim_{i\to+\infty}\frac{d_{i}(t)}{r_{i}}=0,$ (5.11)
but
$\limsup_{i\to+\infty}\Theta_{r_{i}^{2}}\left(\xi_{ij}^{\ast}(t),t;u_{i}\right)\leq\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\left(N-\sigma\right).$
By the weak convergence of $u_{i}$ and (III.a) and (III.b), for any $s>0$
fixed,
$\lim_{i\to+\infty}\Theta_{s}\left(\xi_{ij}^{\ast}(t),t;u_{i}\right)=\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}N+\Theta_{s}(0,t;u_{\infty}).$
Because $u_{\infty}$ is smooth,
$\lim_{s\to 0}\Theta_{s}(0,t;u_{\infty})=0.$
Therefore by choosing $s$ sufficiently small, for all $i$ large
$\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}N\leq\Theta_{s}\left(\xi_{ij}^{\ast}(t),t;u_{i}\right)\leq\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}N+\sigma.$
(5.12)
By the monotonicity and continuity of $\Theta_{s}(\cdot;u_{i})$ in $s$, we can
enlarge $r_{i}$ so that
$\Theta_{r_{i}^{2}}\left(\xi_{ij}^{\ast}(t),t;u_{i}\right)=\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\left(N-\sigma\right).$
(5.13)
By this choice, (5.11) still holds, while in view of (5.12), we must have
$r_{i}\to 0$.
Define $u_{i}^{r_{i}}$ as in (4.1), with respect to the base point
$(\xi_{ij}^{\ast}(t),t)$ for some $j$. A scaling of (5.13) gives
$\displaystyle\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\left(N-\sigma\right)$
$\displaystyle=$ $\displaystyle\Theta_{1}\left(0,0;u_{i}^{r_{i}}\right)$
$\displaystyle=$ $\displaystyle\int_{\mathbb{R}^{n}}\left[\frac{|\nabla
u_{i}^{r_{i}}(y,-1)|^{2}}{2}-\frac{u_{i}^{r_{i}}(y,-1)^{p+1}}{p+1}\right]\psi\left(\frac{y-\xi_{ij}^{\ast}(t)}{r_{i}}\right)^{2}G(y,1)dy$
$\displaystyle+$
$\displaystyle\frac{1}{2(p-1)}\int_{\mathbb{R}^{n}}u_{i}^{r_{i}}(y,-1)^{2}\psi\left(\frac{y-\xi_{ij}^{\ast}(t)}{r_{i}}\right)^{2}G(y,1)dy+Ce^{-cr_{i}^{-2}}.$
(5.14)
By (5.8) and (5.11), for any $\rho>0$, if $i$ is large enough,
$u_{i}^{r_{i}}(y,0)\leq C_{2}|y|^{-\frac{n-2}{2}},\quad\mbox{for
any}~{}~{}y\in B_{r_{i}^{-1}/2}\setminus B_{\rho}.$
Arguing as in the proof of Lemma 5.1, we deduce that the defect measure of
$u_{i}^{r_{i}}$ is $N^{\prime}\Lambda\delta_{0}\otimes dt$ for some
$N^{\prime}\in\mathbb{N}$. Letting $i\to+\infty$ in (5.14), we see
$N^{\prime}\leq N-1$.
By our inductive assumption, $\nabla u_{i}^{r_{i}}$ converges to $0$ weakly in
$L^{2}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$, $u_{i}^{r_{i}}$ converges to
$0$ weakly in $L^{p+1}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$ and strongly in
$C_{loc}(\mathbb{R};L^{2}_{loc}(\mathbb{R}^{n}))$. Then by Lemma 9.2, both
$|\nabla u_{i}^{r_{i}}|^{2}dxdt$ and $\left(u_{i}^{r_{i}}\right)^{p+1}dxdt$
converge to $N^{\prime}\Lambda\delta_{0}\otimes dt$ weakly as Radon measures.
These convergences imply that the right hand side of (5.14) converges to
$\left(4\pi\right)^{-\frac{n}{2}}N^{\prime}\Lambda/n.$
(Here we need to use the monotonicity formula, integrate in time $s$ and then
argue as in the last step of the proof of Lemma 4.3.) This contradicts
(5.13). ∎
For $(y,s)\in Q_{d_{i}(t)^{-1}/2}$, define
$\widetilde{u}_{i}(y,s):=d_{i}(t)^{\frac{n-2}{2}}u_{i}\left(\xi_{i1}^{\ast}(t)+d_{i}(t)y,t+d_{i}(t)^{2}s\right)$
(5.15)
and
$\widetilde{\xi}_{ij}^{\ast}(t):=\frac{\xi_{ij}^{\ast}(t)-\xi_{i1}^{\ast}(t)}{d_{i}(t)}.$
(5.16)
Then we have
* •
$\widetilde{\xi}_{i1}^{\ast}(t)=0$;
* •
$|\widetilde{\xi}_{i2}^{\ast}(t)|=1$;
* •
for each $j$, $|\widetilde{\xi}_{ij}^{\ast}(t)|\leq 1$.
After passing to a subsequence, we may assume that each
$\widetilde{\xi}_{ij}^{\ast}(t)$ converges to a point
$\widetilde{\xi}_{j}^{\ast}(t)\in\overline{B_{1}}$; in particular,
$\widetilde{\xi}_{i2}^{\ast}(t)$ converges to a point
$\widetilde{\xi}_{2}^{\ast}(t)\in\partial B_{1}$. (These limiting points need
not be distinct.)
By (5.10), $\widetilde{u}_{i}$ develops bubbles at each $\widetilde{\xi}_{j}^{\ast}(t)$.
By a scaling of (5.8), $\widetilde{u}_{i}$ does not develop any bubble outside
$\cup_{j}\\{\widetilde{\xi}_{j}^{\ast}(t)\\}$. Therefore the defect measure of
$\widetilde{u}_{i}$ is
$\Lambda\sum_{j}\delta_{\widetilde{\xi}_{j}^{\ast}(t)}\otimes
dt=\Lambda\sum_{k}m_{k}\delta_{P_{k}}\otimes dt.$
In the above, we write these $\widetilde{\xi}_{j}^{\ast}(t)$ as distinct
points $P_{k}$, each one with multiplicity $m_{k}\in\mathbb{N}$. There are at
least two different points, $P_{1}=0$ and $P_{2}\in\partial B_{1}$. Therefore
for each $k$, $m_{k}\leq N-1$. By our inductive assumption, Theorem 3.1 holds
for $\widetilde{u}_{i}$. In particular, $\widetilde{u}_{i}$ converges to $0$
weakly in $L^{p+1}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$, and there is no
bubble towering for $\widetilde{u}_{i}$.
By (5.12) and Proposition 3.2, we have
$\limsup_{i\to+\infty}\Theta_{R(\sigma)^{2}d_{i}(t)^{2}}\left(\xi_{i1}^{\ast}(t),t;u_{i}\right)\leq\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\left(N+\sigma\right).$
Combining this inequality with Lemma 5.2, we get
$\displaystyle\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\left(N-\sigma\right)$
$\displaystyle\leq$ $\displaystyle
R(\sigma)^{\frac{n}{2}}\int_{\mathbb{R}^{n}}\left[\frac{|\nabla\widetilde{u}_{i}(x,-1)|^{2}}{2}-\frac{\widetilde{u}_{i}(x,-1)^{p+1}}{p+1}\right]\psi\left(\frac{x-\xi_{i1}^{\ast}(t)}{r_{i}}\right)^{2}G(x,R(\sigma)^{2})dx$
$\displaystyle+$
$\displaystyle\frac{R(\sigma)^{\frac{n-2}{2}}}{2(p-1)}\int_{\mathbb{R}^{n}}\widetilde{u}_{i}(x,-1)^{2}\psi\left(\frac{x-\xi_{i1}^{\ast}(t)}{r_{i}}\right)^{2}G(x,R(\sigma)^{2})dx+Ce^{-cd_{i}(t)^{-2}}$
$\displaystyle\leq$
$\displaystyle\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\left(N+\sigma\right).$
Letting $i\to+\infty$, we deduce that the defect measure from
$\widetilde{u}_{i}$ satisfies
$N-\sigma\leq\sum_{k}m_{k}e^{-\frac{|P_{k}|^{2}}{4R(\sigma)^{2}}}\leq
N+\sigma.$
Letting $\sigma\to 0$, which implies that $R(\sigma)\to+\infty$, we get
$\sum_{k}m_{k}=N.$
By Theorem 3.1 and our inductive assumption, for all $i$ large, there are
exactly $N$ bubbles located at $\widetilde{\xi}_{ij}^{\ast}(t)$ for
$\widetilde{u}_{i}(\cdot,0)$. Coming back to $u_{i}$, this says for each $t$,
there are exactly $N$ bubbles located at $\xi_{ij}^{\ast}(t)$, $j=1,\cdots,N$.
Step 6. In this step we consider the case where there is only one
$\xi_{ij}^{\ast}(t)$ at time $t$. We show that this is impossible, because we
have assumed the multiplicity $N\geq 2$.
In this case, (5.8) reads as
$u_{i}(x,t)\leq C_{2}|x-\xi_{i1}^{\ast}(t)|^{-\frac{n-2}{2}},\quad\mbox{for
any}~{}~{}x\in B_{1}.$ (5.17)
Under this assumption, we expect that there are $N$ bubbles towering at
$\xi_{i1}^{\ast}(t)$. We will determine the scale for the lowest bubble, and
then perform a rescaling at this scale. This gives a sequence of solutions
weakly converging to some nontrivial $W_{\xi,\lambda}$ and exhibiting at most
$N-1$ bubbles. By our inductive assumption, this is impossible.
More precisely, similar to Lemma 5.2, we have
###### Lemma 5.3.
For any $\sigma\in(0,1)$, there exists an $R(\sigma)$ such that
$\Theta_{R(\sigma)^{2}\lambda_{i1}^{\ast}(t)^{2}}\left(\xi_{i1}^{\ast}(t),t;u_{i}\right)\geq\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\left(N-\sigma\right).$
(5.18)
###### Proof.
Assume to the contrary that (5.18) does not hold. Arguing as in the proof of
Lemma 5.2, we find a sequence $r_{i}\to 0$ satisfying
$\lim_{i\to+\infty}\frac{\lambda_{i1}^{\ast}(t)}{r_{i}}=0,$ (5.19)
and
$\Theta_{r_{i}^{2}}\left(\xi_{i1}^{\ast}(t),t;u_{i}\right)=\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\left(N-\sigma\right).$
(5.20)
Define $u_{i}^{r_{i}}$ as before, with base point at $(\xi_{i1}^{\ast}(t),t)$.
By (5.17), the defect measure for $u_{i}^{r_{i}}$ is
$N^{\prime}\Lambda\delta_{0}\otimes dt$. By (5.20), $N^{\prime}\leq N-1$, while by
(5.19), $N^{\prime}\geq 1$. By our inductive assumption, $u_{i}^{r_{i}}$
converges weakly to $0$. Then as in the proof of Lemma 5.2, we get
$\Theta_{r_{i}^{2}}\left(\xi_{i1}^{\ast}(t),t;u_{i}\right)=\Theta_{1}\left(0,0;u_{i}^{r_{i}}\right)\to\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}N^{\prime},\quad\mbox{as
}i\to+\infty.$
This contradicts (5.20), because $N^{\prime}\leq N-1$ and
$\sigma\in(0,1)$. ∎
###### Remark 5.4.
The scale $r_{i}$ satisfying (5.20) is the scale of the lowest bubble.
Choose a $\sigma\in(0,1)$ and set $r_{i}:=R(\sigma)\lambda_{i1}^{\ast}(t)$
according to the previous lemma. Define $u_{i}^{r_{i}}$ as before, with
respect to the base point $(\xi_{i1}^{\ast}(t),t)$. By definition,
$u_{i}^{r_{i}}(0,0)=\max_{y\in
B_{r_{i}^{-1}/2}(0)}u_{i}^{r_{i}}(y,0)=R(\sigma)^{-\frac{n-2}{2}}.$
Similar to Lemma 5.1, this implies that $u_{i}^{r_{i}}$ converges to
$W_{0,R(\sigma)}$ in $C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
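This value is consistent with the usual normalization of the bubble family (an assumption here: we write $W_{\xi,\lambda}(x)=\lambda^{-\frac{n-2}{2}}W\big(\frac{x-\xi}{\lambda}\big)$ with $W(0)=1$, the normalization suggested by (5.3)):

```latex
W_{0,R(\sigma)}(0)
  =R(\sigma)^{-\frac{n-2}{2}}\,W(0)
  =R(\sigma)^{-\frac{n-2}{2}},
```

matching the maximum of $u_{i}^{r_{i}}(\cdot,0)$ computed above.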
Arguing as in the proof of Lemma 4.3, we deduce that
$\displaystyle\Theta_{R(\sigma)^{2}\lambda_{i1}^{\ast}(t)^{2}}\left(\xi_{i1}^{\ast}(t),t;u_{i}\right)$
$\displaystyle=$ $\displaystyle
R(\sigma)^{\frac{p+1}{p-1}}\int_{\mathbb{R}^{n}}\left[\frac{|\nabla u_{i}^{r_{i}}(x,-1)|^{2}}{2}-\frac{u_{i}^{r_{i}}(x,-1)^{p+1}}{p+1}\right]\psi\left(\frac{x-\xi_{i1}^{\ast}(t)}{r_{i}}\right)^{2}G(x,R(\sigma)^{2})$
$\displaystyle+$
$\displaystyle\frac{1}{2(p-1)}R(\sigma)^{\frac{2}{p-1}}\int_{\mathbb{R}^{n}}u_{i}^{r_{i}}(x,-1)^{2}\psi\left(\frac{x-\xi_{i1}^{\ast}(t)}{r_{i}}\right)^{2}G(x,R(\sigma)^{2})+Ce^{-cR(\sigma)^{-2}r_{i}^{-2}}$
$\displaystyle\to$ $\displaystyle
R(\sigma)^{\frac{p+1}{p-1}}\int_{\mathbb{R}^{n}}\left[\frac{|\nabla
W_{0,R(\sigma)}(x)|^{2}}{2}-\frac{W_{0,R(\sigma)}(x)^{p+1}}{p+1}\right]G(x,R(\sigma)^{2})dx$
$\displaystyle+$
$\displaystyle\frac{1}{2(p-1)}R(\sigma)^{\frac{2}{p-1}}\int_{\mathbb{R}^{n}}W_{0,R(\sigma)}(x)^{2}G(x,R(\sigma)^{2})dx$
$\displaystyle<$
$\displaystyle\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}.$
This contradicts Lemma 5.3.
### 6\. A remark on the Lipschitz hypothesis in Part II
In this section, under the hypotheses (II.a)-(II.c) in Part II, we prove the
Lipschitz hypothesis (10.1).
First we claim that Proposition 11.1 and Lemma 11.5 still hold even though we do
not assume (10.1) here. This follows from the general analysis in the previous
section. Here we give more details.
###### Proof of the claim.
The proof is divided into three steps.
Step 1. By Lemma 4.3, for any $t\in(-1,1)$, $u_{i}(\cdot,t)$ blows up as
$i\to+\infty$. This allows us to find the first blow up point
$\xi_{i}^{\ast}(t)$.
Step 2. With $u_{i}^{r_{i}}$ defined as in (4.1) (with $x_{i}\to 0$), we claim
that
* •
either there is no defect measure and $u_{i}^{r_{i}}$ converges in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$,
* •
or there exists a point $P\in\mathbb{R}^{n}$ such that
$|\nabla u_{i}^{r_{i}}(x,t)|^{2}dxdt\rightharpoonup\Lambda\delta_{P}\otimes
dt.$
First, similar to (5.12), for any $\sigma\in(0,1)$, there exists an $s$ such
that
$\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}\leq\lim_{i\to+\infty}\Theta_{s}\left(\xi_{i}^{\ast}(t),t;u_{i}\right)\leq\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}+\sigma.$
Then because $r_{i}\to 0$, by the monotonicity formula we get, for any $R>0$,
$\displaystyle R^{\frac{p+1}{p-1}}\int_{\mathbb{R}^{n}}\left[\frac{|\nabla
u_{i}^{r_{i}}(x,-R^{2})|^{2}}{2}-\frac{u_{i}^{r_{i}}(x,-R^{2})^{p+1}}{p+1}\right]\psi\left(\frac{x-x_{i}}{r_{i}}\right)^{2}G(x,R^{2})dx$
$\displaystyle\quad\quad+\frac{R^{\frac{p-1}{2}}}{2(p-1)}\int_{\mathbb{R}^{n}}u_{i}^{r_{i}}(x,-R^{2})^{2}\psi\left(\frac{x-x_{i}}{r_{i}}\right)^{2}G(x,R^{2})dx+Ce^{-cR^{2}r_{i}^{-2}}$
$\displaystyle=$
$\displaystyle\Theta_{R^{2}r_{i}^{2}}(\xi_{i}^{\ast}(t),t;u_{i})$
$\displaystyle\leq$
$\displaystyle\Theta_{s}\left(\xi_{i}^{\ast}(t),t;u_{i}\right)$
$\displaystyle\leq$
$\displaystyle\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}+\sigma.$
Denote the weak limit of $u_{i}^{r_{i}}$ by $u_{\infty}$ (which, by Lemma 4.2,
is $0$ or $W_{\xi,\lambda}$ for some
$(\xi,\lambda)\in\mathbb{R}^{n}\times\mathbb{R}^{+}$), and denote the defect
measure associated to this sequence by $\Lambda\sum_{j}k_{j}\delta_{P_{j}}\otimes dt$
(with notations as in Lemma 4.2). Passing to the limit in the above inequality
leads to
$R^{-2}\int_{-2R^{2}}^{-R^{2}}\Theta_{s}(0,0;u_{\infty})ds+\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}R^{-2}\int_{-2R^{2}}^{-R^{2}}\sum_{j}k_{j}e^{\frac{|P_{j}|^{2}}{4s}}ds\leq\left(4\pi\right)^{-\frac{n}{2}}\frac{\Lambda}{n}+\sigma.$
By the monotonicity formula and the smoothness of $u_{\infty}$, we deduce that
$\Theta_{s}(0,0;u_{\infty})\geq\lim_{s\to 0}\Theta_{s}(0,0;u_{\infty})=0.$
Hence we have
$R^{-2}\int_{-2R^{2}}^{-R^{2}}\sum_{j}k_{j}e^{\frac{|P_{j}|^{2}}{4s}}ds\leq
1+O(\sigma).$
Letting $R\to+\infty$, we deduce that there exists at most one blow up point,
whose multiplicity is exactly $1$.
If there is no defect measure, then Lemma 4.2 implies that $u_{i}^{r_{i}}$
converges to $u_{\infty}$ in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
Finally, we show that if the defect measure is nontrivial, then
$u_{\infty}\equiv 0$. Assume this is not the case. By Lemma 4.2,
$u_{\infty}=W_{\xi,\lambda}$ for some
$(\xi,\lambda)\in\mathbb{R}^{n}\times\mathbb{R}^{+}$. By the weak convergence
of $|\nabla u_{i}^{r_{i}}|^{2}dxdt$, there exists a sequence $R_{i}$
satisfying
$R_{i}\to+\infty\quad\mbox{and}\quad R_{i}r_{i}\to 0,$
such that the defect measure associated to $u_{i}^{R_{i}r_{i}}$ is
$2\Lambda\delta_{0}\otimes dt$. This contradicts the fact, established above,
that the multiplicity of the defect measure is $1$.
Step 3. Denote $\lambda_{i}^{\ast}(t):=u_{i}(\xi_{i}^{\ast}(t),t)^{-\frac{2}{n-2}}$. Similar to
Lemma 5.1, we deduce that $u_{i}^{\lambda_{i}^{\ast}(t)}$ (with base point at
$(\xi_{i}^{\ast}(t),t)$) converges to $W$ in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$. On the other hand, if the
sequence $r_{i}$ satisfies
$r_{i}\to 0\quad\mbox{and}\quad\frac{r_{i}}{\lambda_{i}^{\ast}(t)}\to+\infty,$
then by results in Step 2, we deduce that the sequence $u_{i}^{r_{i}}$
satisfies
$|\nabla u_{i}^{r_{i}}(x,t)|^{2}dxdt\rightharpoonup\Lambda\delta_{0}\otimes
dt.$
Combining this fact with Theorem 3.7, we deduce that
$\displaystyle|x-\xi_{i}^{\ast}(t)|^{\frac{n-2}{2}}u_{i}(x,t)+|x-\xi_{i}^{\ast}(t)|^{\frac{n}{2}}|\nabla
u_{i}(x,t)|$ (6.1)
$\displaystyle\quad\quad\quad\quad+|x-\xi_{i}^{\ast}(t)|^{\frac{n+2}{2}}\left(|\nabla^{2}u_{i}(x,t)|+|\partial_{t}u_{i}(x,t)|\right)\leq
C_{2}.\qed$
Now we come to the proof of the Lipschitz hypothesis (10.1). Assume there
exists a sequence of smooth solutions $u_{i}$ to (1.1), satisfying (II.a)-(II.c),
but
$\max_{|t|\leq
1/2}\int_{B_{1}}\left|\partial_{t}u_{i}(x,t)\right|^{2}dx\to+\infty.$ (6.2)
For each $i$, take a $t_{i}\in(-1,1)$ attaining
$\max_{|t|\leq
1}\left(1-|t|\right)\int_{B_{1}}\left|\partial_{t}u_{i}(x,t)\right|^{2}dx.$
Denote
$\lambda_{i}^{-2}:=\int_{B_{1}}\left|\partial_{t}u_{i}(x,t_{i})\right|^{2}dx.$
By (6.2), as $i\to+\infty$,
$\frac{1-|t_{i}|}{\lambda_{i}^{2}}\geq\frac{1}{2}\max_{|t|\leq
1/2}\int_{B_{1}}\left|\partial_{t}u_{i}(x,t)\right|^{2}dx\to+\infty.$ (6.3)
As a consequence,
$\lim_{i\to+\infty}\lambda_{i}=0.$
If $|t-t_{i}|<(1-|t_{i}|)/2$, by the definition of $t_{i}$, we get
$\int_{B_{1}}\left|\partial_{t}u_{i}(x,t)\right|^{2}dx\leq 2\lambda_{i}^{-2}.$
(6.4)
At time $t_{i}$, there exists a unique maximal point of $u_{i}(\cdot,t_{i})$
in $B_{1}$, denoted by $\xi_{i}(t_{i})$. Define
$\widetilde{u}_{i}(x,t):=\lambda_{i}^{\frac{n-2}{2}}u_{i}\left(\xi_{i}(t_{i})+\lambda_{i}x,t_{i}+\lambda_{i}^{2}t\right).$
By a scaling, we have
$\left\\{\begin{aligned}
&\int_{B_{\lambda_{i}^{-1}}}\left|\partial_{t}\widetilde{u}_{i}(x,0)\right|^{2}dx=1,\\\
&\int_{B_{\lambda_{i}^{-1}}}\left|\partial_{t}\widetilde{u}_{i}(x,t)\right|^{2}dx\leq
2,\quad\mbox{for
any}~{}|t|<\frac{1-|t_{i}|}{2\lambda_{i}^{2}}\end{aligned}\right.$ (6.5)
On the other hand, we claim that if $i$ is large,
$\int_{B_{\lambda_{i}^{-1}}}\left|\partial_{t}\widetilde{u}_{i}(x,0)\right|^{2}dx\leq\frac{1}{2}.$
(6.6)
This contradiction implies that (6.2) cannot be true. In other words, if
$u_{i}$ satisfies (II.a-II.c), then there must exist a constant $L$ such that
$\limsup_{i\to+\infty}\sup_{|t|\leq
1/2}\int_{B_{1}}\partial_{t}u_{i}(x,t)^{2}dx\leq L,$
that is, the Lipschitz hypothesis (10.1) holds.
To prove (6.6), first note that for $u_{i}$, by (6.1) we have
$\left|\partial_{t}u_{i}(x,t_{i})\right|\lesssim|x-\xi_{i}(t_{i})|^{-\frac{n+2}{2}}.$
For $\widetilde{u}_{i}$, this reads as
$\left|\partial_{t}\widetilde{u}_{i}(x,0)\right|\lesssim|x|^{-\frac{n+2}{2}}.$
Therefore there exists a large $R$ (independent of $i$) such that
$\int_{B_{\lambda_{i}^{-1}}\setminus
B_{R}}\left|\partial_{t}\widetilde{u}_{i}(x,0)\right|^{2}dx\leq
C\int_{B_{R}^{c}}|x|^{-n-2}\leq\frac{1}{4}.$ (6.7)
After establishing this inequality, (6.6) will follow from
###### Lemma 6.1.
For all $i$ large,
$\int_{B_{R}}\left|\partial_{t}\widetilde{u}_{i}(x,0)\right|^{2}dx\leq\frac{1}{4}.$
(6.8)
###### Proof.
Case 1. $\widetilde{u}_{i}$ converges in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
By Lemma 11.3,
$\lim_{i\to+\infty}\int_{-1}^{1}\int_{B_{R}}|\partial_{t}\widetilde{u}_{i}|^{2}=0.$
Hence by the smooth convergence of $\widetilde{u}_{i}$, we deduce that
$\partial_{t}\widetilde{u}_{i}$ converges to $0$ in
$C_{loc}(\mathbb{R}^{n}\times\mathbb{R})$ and the conclusion follows.
Case 2. $\widetilde{u}_{i}$ does not converge in
$C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
In this case, in view of the estimates in (6.5), Theorem 10.1 is applicable to
$\widetilde{u}_{i}$, which gives
$|\nabla\widetilde{u}_{i}|^{2}dxdt\rightharpoonup\Lambda\delta_{0}\otimes dt.$
(6.9)
By (18.4), the estimates on the scaling parameters in Proposition 17.6, and
the estimate on $\partial_{t}\phi_{i}$ in Proposition 17.11, there exists a
$\rho>0$ such that for all $i$ large,
$\displaystyle\int_{B_{\rho}}\left|\partial_{t}\widetilde{u}_{i}(x,0)\right|^{2}dx$
$\displaystyle\leq$ $\displaystyle C\int_{B_{\rho}}|x|^{-4}+o(1)$
$\displaystyle\leq$ $\displaystyle C\rho^{n-4}+o(1)\leq 1/8.$
Next, in view of (6.9), we deduce that as $i\to+\infty$,
$\partial_{t}\widetilde{u}_{i}(x,0)\to 0$ uniformly in $B_{R}\setminus
B_{\rho}$. Hence for all $i$ large, we also have
$\int_{B_{R}\setminus
B_{\rho}}\left|\partial_{t}\widetilde{u}_{i}(x,0)\right|^{2}dx\leq\frac{1}{8}.$
Putting these two inequalities together we get (6.8). ∎
### 7\. Exclusion of bubble towering
In this section, we prove Item (6) in Theorem 3.1. If $N=1$, in view of the
analysis in the previous section, this is exactly Theorem 10.1, so here we
consider the case $N\geq 2$.
We prove this by a contradiction argument. Recall that we have assumed that
$u_{\infty}$ is smooth, so it is a classical solution of (1.1). If
$u_{\infty}\neq 0$, then by the standard Harnack inequality,
$\inf_{Q_{7/8}}u_{\infty}>0.$ (7.1)
First we note that
###### Lemma 7.1.
There exists a constant $c_{\infty}>0$, depending only on
$\inf_{Q_{7/8}}u_{\infty}$, such that
$u_{i}\geq c_{\infty}\quad\mbox{in }\quad Q_{6/7}.$
###### Proof.
Because
$\partial_{t}u_{i}-\Delta u_{i}>0\quad\mbox{in}\quad Q_{1},$
the following weak Harnack inequality holds for $u_{i}$: for any $(x,t)\in
Q_{6/7}$,
$u_{i}(x,t)\geq c\int_{Q_{1/100}^{-}(x,t-1/50)}u_{i}.$
Because $u_{i}\to u_{\infty}$ in $L^{2}_{loc}(Q_{1})$, by (7.1) we get
$\lim_{i\to+\infty}\int_{Q_{1/100}^{-}(x,t-1/50)}u_{i}=\int_{Q_{1/100}^{-}(x,t-1/50)}u_{\infty}\geq
c\inf_{Q_{7/8}}u_{\infty}>0.$
The conclusion follows by combining these two inequalities. ∎
For each $t\in(-1,1)$, let
$\rho_{i}(t):=\min_{1\leq j\neq k\leq
N}|\xi_{ij}^{\ast}(t)-\xi_{ik}^{\ast}(t)|.$
Set $t_{0}=0$ and $\rho_{0}=\rho_{i}(t_{0})$, which is very small if $i$ is
large enough. Fix a large index $i$. For each $k\in\mathbb{N}$, set
$t_{k}:=\sup\left\\{t:\rho_{i}(s)\geq\frac{1}{2}\rho_{i}(t_{k-1})\quad\mbox{for
any}~{}~{}s\in[t_{k-1},t]\right\\},$
and
$\rho_{k}:=\rho_{i}(t_{k})=2^{-1}\rho_{i}(t_{k-1})=\cdots=2^{-k}\rho_{0}.$
(7.2)
For each $k$, assume $\rho_{i}(t_{k})$ is attained between
$\xi_{i1}^{\ast}(t_{k})$ and $\xi_{i2}^{\ast}(t_{k})$. Consider
$u_{i}^{k}(x,t):=\rho_{k}^{\frac{n-2}{2}}u_{i}\left(\xi_{i1}^{\ast}(t_{k})+\rho_{k}x,t_{k}+\rho_{k}^{2}t\right).$
By Lemma 7.1, we have
$u_{i}^{k}(x,0)\geq
c_{\infty}\rho_{k}^{\frac{n-2}{2}},\quad\mbox{for}~{}~{}x\in B_{1}\setminus
B_{1/2}.$
By our construction, for any $t\in[0,(t_{k+1}-t_{k})/\rho_{k}^{2}]$, the
distance between different bubble points of $u_{i}^{k}$ is not smaller than
$1/2$. This allows us to apply Proposition 19.1 iteratively (backward in
time), which gives
$u_{i}^{k}(x,0)\lesssim
e^{-c\frac{t_{k+1}-t_{k}}{\rho_{k}^{2}}},\quad\mbox{for}~{}~{}x\in
B_{1}\setminus B_{1/2}.$
Combining these two inequalities, we obtain
$t_{k+1}-t_{k}\lesssim\rho_{k}^{2}|\log\rho_{k}|\lesssim\rho_{k}.$
Combining this inequality with (7.2), we get
$t_{\infty}:=\lim_{k\to\infty}t_{k}\lesssim\sum_{k=0}^{\infty}\rho_{k}\lesssim\rho_{0}\ll
1.$
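The last two bounds are just a geometric series: since $\rho_{k}=2^{-k}\rho_{0}$ by (7.2),

```latex
t_{\infty}-t_{0}
  =\sum_{k=0}^{\infty}\left(t_{k+1}-t_{k}\right)
  \;\lesssim\;\sum_{k=0}^{\infty}\rho_{k}
  =\rho_{0}\sum_{k=0}^{\infty}2^{-k}
  =2\rho_{0}\ll 1.
```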
This then implies $\rho_{i}(t_{\infty})=0$ (by the continuity of
$\rho_{i}(t)$, which follows from the continuity of $\xi_{ij}^{\ast}(t)$).
This is a contradiction. In other words, we must have $u_{\infty}\equiv 0$.
## Part IV Analysis of first time singularity
### 8\. Setting
In this part we assume $u\in C^{\infty}(Q_{1}^{-})$ is a _smooth_ solution of
(1.1), satisfying
$\int_{-1}^{t}\int_{B_{1}}\left(|\nabla u|^{2}+|u|^{p+1}\right)\leq
K(T-t)^{-K}$ (8.1)
for some constant $K$. Note that $u$ may fail to extend smoothly to
$B_{1}\times\\{0\\}$.
Define
$\mathcal{R}(u):=\left\\{a\in B_{1}:\exists~{}r>0\mbox{ such that }u\in
L^{\infty}(Q_{r}^{-}(a,T))\right\\}$
to be the regular set, and $\mathcal{S}(u):=B_{1}\setminus\mathcal{R}(u)$ to
be the set of blow up points. By standard parabolic estimates, if
$a\in\mathcal{R}(u)$, $u$ can be extended smoothly up to $t=0$ in a small
backward parabolic cylinder centered at $(a,0)$.
The main result of this part is
###### Theorem 8.1.
If $n\geq 7$, $p=\frac{n+2}{n-2}$ and $u>0$, then there exists a constant $C$
such that
$\|u(t)\|_{L^{\infty}(B_{1/2})}\leq C(T-t)^{-\frac{1}{p-1}}.$ (8.2)
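The rate in (8.2) is that of the spatially homogeneous blow up: assuming, as elsewhere, that (1.1) is the semilinear heat equation $\partial_{t}u=\Delta u+u^{p}$ (for positive solutions), dropping the Laplacian leaves the ODE $u'=u^{p}$, whose solution blowing up at time $T$ is

```latex
u(t)=\big[(p-1)(T-t)\big]^{-\frac{1}{p-1}},
\qquad
u'(t)=\big[(p-1)(T-t)\big]^{-\frac{p}{p-1}}=u(t)^{p}.
```

Estimate (8.2) thus says that $u$ blows up no faster than this self-similar rate; this is the Type I bound, in the terminology of Section 11.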
We will also show how to get Theorem 1.1 and Theorem 1.2 from this theorem.
The proof of this theorem consists of the following four steps.
1. (1)
After some preliminary estimates about the monotonicity formula and Morrey
space bounds in Section 9, we perform the tangent flow analysis in Section 10
at a possible singular point $(a,T)$. Here we mainly use results from Part I.
This tangent flow analysis shows that Type II blow up points are isolated in
$\mathcal{S}(u)$, see Section 11, which allows us to apply results in Part II
and Part III.
2. (2)
In Section 12, we apply Theorem 3.1 in Part III and Proposition 20.1 in
Section 20 to exclude the bubble clustering formation.
3. (3)
In Section 13, we exclude the formation of only one bubble. Here we mainly use
the differential inequality on the scaling parameter $\lambda$ derived in Part
II.
4. (4)
Finally, we use a stability result on Type I blow ups to derive the Type I
estimate (14.2).
### 9\. Some integral estimates
In this section and the next one, we assume only $p>1$, and $u$ need not
be positive. This section is devoted to showing a Morrey space bound on $u$,
which will be used in the next section to perform the tangent flow analysis.
This is similar to Section 3 in Part I.
Denote the standard heat kernel on $\mathbb{R}^{n}$ by $G$. Take a function
$\psi\in C^{\infty}(B_{1/4})$, which satisfies $0\leq\psi\leq 1$, $\psi\equiv
1$ in $B_{1/8}$ and $|\nabla\psi|+|\nabla^{2}\psi|\leq C$. Take an arbitrary
point $a\in B_{1/2}$. For any $s\in(0,1)$, set
$\displaystyle\Theta_{s}(a)$ $\displaystyle:=$ $\displaystyle
s^{\frac{p+1}{p-1}}\int_{B_{1}}\left[\frac{|\nabla
u(y,-s)|^{2}}{2}-\frac{|u(y,-s)|^{p+1}}{p+1}\right]G(y-a,s)\psi(y)^{2}dy$
$\displaystyle+$
$\displaystyle\frac{1}{2(p-1)}s^{\frac{2}{p-1}}\int_{B_{1}}u(y,-s)^{2}G(y-a,s)\psi(y)^{2}dy+Ce^{-cs^{-1}}.$
The following is another localized version of the monotonicity formula, which
is a little different from Proposition 3.2.
###### Proposition 9.1 (Localized monotonicity formula II).
For any $0<s_{1}<s_{2}<1$,
$\displaystyle\Theta_{s_{2}}(x)-\Theta_{s_{1}}(x)$ $\displaystyle\geq$
$\displaystyle\int_{s_{1}}^{s_{2}}\tau^{\frac{2}{p-1}-1}\int_{B_{1}}\Big{|}(-\tau)\partial_{t}u(y,-\tau)+\frac{u(y,-\tau)}{p-1}+\frac{y\cdot\nabla
u(y,-\tau)}{2}\Big{|}^{2}$ $\displaystyle\quad\quad\quad\quad\quad\quad\times
G(y-x,\tau)\psi(y)^{2}dyd\tau.$
This almost monotonicity formula allows us to define
$\Theta(x):=\lim_{s\to 0}\Theta_{s}(x).$
As in Part I, we still have
###### Lemma 9.2.
$\Theta$ is nonnegative and it is upper semi-continuous in $x$.
###### Proposition 9.3.
For any $x\in B_{1/2}$, $r<1/4$ and $\theta\in(0,1/2)$, there exists a
constant $C(\theta,u)$ such that
$r^{-2}\int_{Q_{r}^{-}(x,-\theta r^{2})}\left(|\nabla
u|^{2}+|u|^{p+1}\right)+\int_{Q_{r}^{-}(x,-\theta
r^{2})}|\partial_{t}u|^{2}\leq C(\theta,u).$ (9.1)
By the $\varepsilon$-regularity theorem (see Theorem 3.9), we also get
###### Proposition 9.4.
$\mathcal{R}(u)=\\{\Theta=0\\}$ and
$\mathcal{S}(u)=\\{\Theta\geq\varepsilon_{\ast}\\}$.
### 10\. Tangent flow analysis, II
In this section, we perform the tangent flow analysis for $u$. This is similar
to the one in Section 5, the only difference being that we now work in backward
parabolic cylinders instead of full parabolic cylinders. Some special
consequences of the assumption $p=(n+2)/(n-2)$ will also be derived, which
will be used in the next section to classify Type I and Type II blow up
points.
Take an arbitrary point $a\in B_{1/2}$. For any $\lambda>0$ sufficiently
small, define
$u_{\lambda}(x,t):=\lambda^{\frac{2}{p-1}}u(a+\lambda x,T+\lambda^{2}t).$
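A bookkeeping remark (a sketch; in the energy-critical case $p=\frac{n+2}{n-2}$, where $\frac{4}{p-1}=n-2$ and $\frac{2(p+1)}{p-1}=n$): the quantities controlled by Proposition 9.3 are exactly invariant under this rescaling. Changing variables $x=a+\lambda y$, $t=T+\lambda^{2}s$,

```latex
R^{-2}\int_{Q_{R}^{-}(0,-\theta R^{2})}|\nabla u_{\lambda}|^{2}\,dy\,ds
  =(\lambda R)^{-2}\int_{Q_{\lambda R}^{-}(a,\,T-\theta\lambda^{2}R^{2})}|\nabla u|^{2}\,dx\,dt,
```

and the $|u_{\lambda}|^{p+1}$ term transforms the same way, while the $|\partial_{t}u_{\lambda}|^{2}$ term picks up no power of $\lambda$ at all. This is why, in the critical case, the bound below can be taken independent of $\lambda$.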
By Proposition 9.3, for any $R>0$ and $\theta\in(0,1/2)$, we have
$R^{-2}\int_{Q_{R}^{-}(0,-\theta R^{2})}\left[|\nabla
u_{\lambda}|^{2}+|u_{\lambda}|^{p+1}\right]+\int_{Q_{R}^{-}(0,-\theta
R^{2})}|\partial_{t}u_{\lambda}|^{2}\leq C(\theta,u),$ (10.1)
where $C(\theta,u)$ is the constant in Proposition 9.3. Therefore, as in
Section 4, for any $\lambda_{i}\to 0$, we can extract a subsequence (not
relabelled) such that
* •
$u_{\lambda_{i}}$ converges to $u_{\infty}$, weakly in
$L^{p+1}_{loc}(\mathbb{R}^{n}\times\mathbb{R}^{-})$ and strongly in
$L^{q}_{loc}(\mathbb{R}^{n}\times\mathbb{R}^{-})$ for any $q<p+1$;
* •
$\nabla u_{\lambda_{i}}$ converges to $\nabla u_{\infty}$ weakly in
$L^{2}_{loc}(\mathbb{R}^{n}\times\mathbb{R}^{-})$;
* •
$\partial_{t}u_{\lambda_{i}}$ converges to $\partial_{t}u_{\infty}$ weakly in
$L^{2}_{loc}(\mathbb{R}^{n}\times\mathbb{R}^{-})$;
* •
there exist two Radon measures $\mu$ and $\nu$ such that in any compact set of
$\mathbb{R}^{n}\times\mathbb{R}^{-}$,
$\left\\{\begin{aligned} &|\nabla u_{\lambda_{i}}|^{2}dxdt\rightharpoonup|\nabla
u_{\infty}|^{2}dxdt+\mu,\\\
&|u_{\lambda_{i}}|^{p+1}dxdt\rightharpoonup|u_{\infty}|^{p+1}dxdt+\mu,\\\
&|\partial_{t}u_{\lambda_{i}}|^{2}dxdt\rightharpoonup|\partial_{t}u_{\infty}|^{2}dxdt+\nu\end{aligned}\right.$
weakly as Radon measures.
Passing to the limit in the equation for $u_{\lambda_{i}}$, we deduce that
$u_{\infty}$ is a weak solution of (1.1) in
$\mathbb{R}^{n}\times\mathbb{R}^{-}$. Passing to the limit in (10.1) gives
$R^{-2}\int_{Q_{R}^{-}(0,-\theta R^{2})}\left[|\nabla
u_{\infty}|^{2}+|u_{\infty}|^{p+1}\right]+\int_{Q_{R}^{-}(0,-\theta
R^{2})}|\partial_{t}u_{\infty}|^{2}\leq C(\theta,u),$ (10.2)
where $C(\theta,u)$ is the constant in Proposition 9.3.
Similar to Lemma 5.2 in Part I, we deduce that both $u_{\infty}$ and $\mu$ are
backwardly self-similar.
Results obtained so far hold for any $p>1$. If $p=(n+2)/(n-2)$, we can say
more about $(u_{\infty},\mu)$.
###### Proposition 10.1.
If $p=(n+2)/(n-2)$, then the following hold.
1. (1)
Either $u_{\infty}\equiv 0$ or
$u_{\infty}\equiv\pm\left[-(p-1)t\right]^{-\frac{1}{p-1}}$.
2. (2)
There exists a constant $M>0$ such that
$\mu=M\delta_{0}\otimes dt.$
3. (3)
$\nu=0$.
###### Proof.
A rescaling of (10.2), written in terms of the self-similar profile
$w_{\infty}$ of $u_{\infty}$, gives
$\int_{B_{1}(x)}\left(|\nabla w_{\infty}|^{2}+|w_{\infty}|^{p+1}\right)\leq
C,\quad\forall x\in\mathbb{R}^{n}.$ (10.3)
Because $p=(n+2)/(n-2)$, by Theorem 7.2, we obtain (1).
Because $\mu$ is self-similar, by Theorem 9.1, we find a discrete set
$\\{\xi_{j}\\}\subset\mathbb{R}^{n}$ and constants $M_{j}>0$ such that
$\mu=\mu_{t}dt,\quad\mu_{t}=\sum_{j}M_{j}\delta_{\sqrt{-t}\xi_{j}}.$ (10.4)
Because $u_{\lambda_{i}}$ are all smooth in
$\mathbb{R}^{n}\times\mathbb{R}^{-}$, the energy inequalities (2.2) for them
are in fact identities. Letting $\lambda_{i}\to 0$, we see the limiting energy
inequality (4.9) for $(u_{\infty},\mu)$ is also an identity. Furthermore,
because $u_{\infty}$ is smooth in $\mathbb{R}^{n}\times\mathbb{R}^{-}$, it
also satisfies the energy identity. From these facts we deduce that
$\partial_{t}\mu=-n\nu\quad\mbox{in the distributional sense in
}~{}~{}\mathbb{R}^{n}\times\mathbb{R}^{-}.$
Combining this relation with (10.4), we get (2) and (3). ∎
Furthermore, if $u$ is positive, this proposition implies that the blow up
sequence $u_{\lambda}$ satisfies the assumptions (III.a-III.c) in Part III.
### 11\. No coexistence of Type I and Type II blow ups
From now on it is always assumed that $u$ is a positive solution,
$p=(n+2)/(n-2)$ and $n\geq 7$. In this section we classify Type I and Type II
blow up points, and give some qualitative description of the set of Type I and
Type II blow up points.
###### Lemma 11.1.
Either $u_{\infty}=0$ or $\mu=0$.
###### Proof.
If $\mu\neq 0$, by the positivity, there exists an $N\in\mathbb{N}$ such that
the constant $M$ in Proposition 10.1 satisfies
$M=N\Lambda.$
Since $u_{\infty}$ is smooth in $\mathbb{R}^{n}\times\mathbb{R}^{-}$,
(III.a-III.c) are satisfied by any blow up sequence $u_{\lambda_{i}}$ at
$(a,T)$. Then an application of Theorem 3.1 implies that $u_{\infty}=0$.
Conversely, if $u_{\infty}\neq 0$, then $\mu=0$. ∎
A direct consequence of this lemma is
###### Proposition 11.2.
Any point $a\in B_{1/2}$ belongs to one of the following three classes:
Regular point:
$\Theta(a)=0$;
Type I blow up point:
$\Theta(a)=n^{-1}\left(\frac{n-2}{4}\right)^{\frac{n}{2}}$;
Type II blow up point:
$\Theta(a)=n^{-1}\left(4\pi\right)^{-\frac{n}{2}}N\Lambda$ for some
$N\in\mathbb{N}$.
Since all of the possible tangent flows form a discrete set (classified
according to the value of $\Theta(a)$), we obtain
###### Corollary 11.3.
The tangent flow of $u$ at any point $a\in B_{1/2}$ is unique.
Next we study the sets of Type I and Type II blow up points.
###### Lemma 11.4.
The set of Type I blow up points is relatively open in $\mathcal{S}(u)$.
###### Proof.
It is directly verified that
$n^{-1}\left(\frac{n-2}{4}\right)^{\frac{n}{2}}<n^{-1}\left(4\pi\right)^{-\frac{n}{2}}\Lambda.$
(11.1)
The conclusion then follows from the upper semi-continuity of $\Theta$, see
Lemma 9.2. ∎
###### Lemma 11.5.
Type II blow up points are isolated in $\mathcal{S}(u)$.
###### Proof.
Assume $x_{0}$ is Type II, that is,
$\Theta(x_{0})=n^{-1}\left(4\pi\right)^{-\frac{n}{2}}N\Lambda$ for some
$N\in\mathbb{N}$. Assume on the contrary that there exists a sequence of
points $x_{i}\in\mathcal{S}(u)\setminus\\{x_{0}\\}$ converging to $x_{0}$.
Denote
$\lambda_{i}:=|x_{i}-x_{0}|,\quad\widehat{x}_{i}:=\frac{x_{i}-x_{0}}{\lambda_{i}}.$
Define the blow up sequence
$u_{i}(x,t):=\lambda_{i}^{\frac{2}{p-1}}u(x_{0}+\lambda_{i}x,T+\lambda_{i}^{2}t).$
Then the tangent flow analysis together with the fact that $x_{0}$ is Type II
implies that
$|\nabla u_{i}|^{2}dxdt\rightharpoonup N\Lambda\delta_{0}\otimes dt.$
As a consequence, there exists a fixed, small $s>0$ such that, for all $i$
large,
$\Theta_{s}\left(\widehat{x}_{i};u_{i}\right)\leq\varepsilon_{\ast}/2.$
However, by the monotonicity formula (Proposition 9.1), the fact that
$x_{i}\in\mathcal{S}$ and the scaling invariance of $\Theta_{s}$, we always
have
$\Theta_{s}\left(\widehat{x}_{i};u_{i}\right)=\Theta_{\lambda_{i}^{2}s}(x_{i};u)\geq\Theta(x_{i};u)\geq\varepsilon_{\ast}.$
This is a contradiction. ∎
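The scaling identity $\Theta_{s}(\widehat{x}_{i};u_{i})=\Theta_{\lambda_{i}^{2}s}(x_{i};u)$ used in the proof above can be checked by a direct change of variables; the following sketch assumes $\Theta_{s}$ is the Gaussian-weighted energy of Subsection 14.3, with $G(x,s)=(4\pi s)^{-n/2}e^{-|x|^{2}/4s}$ the heat kernel.

```latex
% Change of variables z = x_0 + \lambda_i y, under which \widehat{x}_i corresponds to x_i.
% The heat kernel satisfies
G(y-\widehat{x}_i,\, s) = \lambda_i^{\,n}\, G(z - x_i,\, \lambda_i^{2} s),
\qquad dy = \lambda_i^{-n}\, dz .
% Since u_i(y,-s) = \lambda_i^{2/(p-1)} u(z,\, T - \lambda_i^{2} s), term by term
s^{\frac{p+1}{p-1}}\, |\nabla u_i(y,-s)|^{2}
  = (\lambda_i^{2} s)^{\frac{p+1}{p-1}}\, |\nabla u(z,\, T-\lambda_i^{2}s)|^{2},
\qquad
s^{\frac{2}{p-1}}\, u_i(y,-s)^{2}
  = (\lambda_i^{2} s)^{\frac{2}{p-1}}\, u(z,\, T-\lambda_i^{2}s)^{2},
% and the |u_i|^{p+1} term scales with the same factor as the gradient term.
% Summing up: \Theta_s(\widehat{x}_i; u_i) = \Theta_{\lambda_i^2 s}(x_i; u).
```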
### 12\. Exclusion of bubble clustering
In this section we exclude Type II blow ups with bubble clustering. From now
on we assume there exists a Type II blow up point. By Lemma 11.5, this blow up
point is isolated in $\mathcal{S}(u)$. Hence after a translation and a
scaling, we may assume $u\in
C^{\infty}(\overline{Q_{1}^{-}}\setminus\\{(0,0)\\})$, and $(0,0)$ is a Type
II blow up point.
Under these assumptions, Theorem 3.1 is applicable to any rescaling of $u$ at
$(0,0)$. Combining this theorem with a continuity argument in time, we obtain
###### Proposition 12.1.
There exists an $N\in\mathbb{N}$ so that for all $t$ sufficiently close to
$0$, the following results hold.
1. (1)
There exist exactly $N$ local maximum points of $u(\cdot,t)$ in the interior
of $B_{1}(0)$.
Denote these points by $\xi_{j}^{\ast}(t)$ ($j=1,\cdots,N$) and let
$\lambda_{j}^{\ast}(t):=u(\xi_{j}^{\ast}(t),t)^{-\frac{2}{n-2}}$.
2. (2)
Both $\lambda_{j}^{\ast}$ and $\xi_{j}^{\ast}$ depend continuously on $t$.
3. (3)
There exists a constant $K$ such that, for all such $t$ and any $x\in B_{1}$,
$u(x,t)\leq K\max_{1\leq j\leq N}|x-\xi_{j}^{\ast}(t)|^{-\frac{n-2}{2}}.$
(12.1)
4. (4)
As $t\to 0$,
$\frac{\lambda_{j}^{\ast}(t)}{\sqrt{-t}}\to
0,\quad\frac{|\xi_{j}^{\ast}(t)|}{\sqrt{-t}}\to 0,$
and the function
$u_{j}^{t}(y,s):=\lambda_{j}^{\ast}(t)^{\frac{n-2}{2}}u\left(\xi_{j}^{\ast}(t)+\lambda_{j}^{\ast}(t)y,t+\lambda_{j}^{\ast}(t)^{2}s\right),$
converges to $W(y)$ in $C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
5. (5)
For any $1\leq j\neq k\leq N$ and $t\in[-9/16,0)$,
$\lim_{t\to
0}\frac{|\xi_{j}^{\ast}(t)-\xi_{k}^{\ast}(t)|}{\max\left\\{\lambda_{j}^{\ast}(t),\lambda_{k}^{\ast}(t)\right\\}}=+\infty.$
(12.2)
The main result of this section is
###### Proposition 12.2.
The multiplicity $N$ in Proposition 12.1 must be $1$.
The proof is by a contradiction argument, so assume $N\geq 2$. For each
$t\in(-1,0)$, denote
$\rho(t):=\min_{1\leq k\neq\ell\leq
N}|\xi_{k}^{\ast}(t)-\xi_{\ell}^{\ast}(t)|.$
By Point (4) in Proposition 12.1, for any $t<0$, $\rho(t)>0$, and
$\lim_{t\to 0}\rho(t)=0.$ (12.3)
Then by the continuity of $\rho(t)$ (thanks to the continuity of
$\xi_{j}^{\ast}(t)$), there exists a large positive integer $i_{0}$ so that
the following is well-defined. For each $i\geq i_{0}$, define $t_{i}$ to be
the first time satisfying $\rho(t)=2^{-i}$. In other words,
$\rho(t_{i})=2^{-i}\quad\mbox{and}\quad\rho(t)>2^{-i}~{}~{}\mbox{in}~{}~{}[-1,t_{i}).$
(12.4)
On the other hand, we claim that
###### Lemma 12.3.
For all $i$ large,
$\rho(t_{i+1})\geq\frac{2}{3}\rho(t_{i}).$ (12.5)
This clearly contradicts the definition of the times $t_{i}$, since by (12.4),
$\rho(t_{i+1})=2^{-i-1}=\frac{1}{2}\rho(t_{i})<\frac{2}{3}\rho(t_{i})$. The
proof of Proposition 12.2 is thus complete. (This contradiction in fact shows
that if $N\geq 2$, the points $\xi_{j}^{\ast}(t)$ cannot converge to the same
point as $t\to 0^{-}$, that is, (12.3) does not hold.)
The proof of Lemma 12.3 uses two facts: the first is the reduction equation
for the scaling parameters, which will be applied in a forward-in-time manner;
the second is the weak form of Schoen’s Harnack inequality, Proposition 19.1,
which will be applied in a backward-in-time manner.
###### Proof of Lemma 12.3.
Without loss of generality, assume
$\rho(t_{i+1})=|\xi_{1}^{\ast}(t_{i+1})-\xi_{2}^{\ast}(t_{i+1})|=2^{-i-1}.$
Choose $\widetilde{t}_{i}$ to be the first time in $[t_{i},t_{i+1}]$
satisfying
$|\xi_{1}^{\ast}(t)-\xi_{2}^{\ast}(t)|=2^{-i},$
that is,
$|\xi_{1}^{\ast}(\widetilde{t}_{i})-\xi_{2}^{\ast}(\widetilde{t}_{i})|=2^{-i},\quad\mbox{and}\quad|\xi_{1}^{\ast}(t)-\xi_{2}^{\ast}(t)|>2^{-i}~{}~{}\mbox{for
any}~{}~{}t\in[t_{i},\widetilde{t}_{i}).$
This is well defined, if we note the continuous dependence of
$|\xi_{1}^{\ast}(t)-\xi_{2}^{\ast}(t)|$ with respect to $t$, and the fact that
$|\xi_{1}^{\ast}(t_{i})-\xi_{2}^{\ast}(t_{i})|\geq\rho(t_{i})=2^{-i}.$
Let
$u_{i}(x,t):=2^{-\frac{n-2}{2}i}u\left(\xi_{1}^{\ast}(\widetilde{t}_{i})+2^{-i}x,\widetilde{t}_{i}+4^{-i}t\right).$
It is well defined for $x\in B_{2^{i-1}}$ and
$t\in(-4^{i-1},-4^{i}\widetilde{t}_{i})$.
For each $t\in(-4^{i-1},-4^{i}\widetilde{t}_{i})$, $u_{i}$ has $N$ bubbles,
located at
$\xi^{\ast}_{ij}(t):=2^{i}\left[\xi_{j}^{\ast}\left(\widetilde{t}_{i}+4^{-i}t\right)-\xi_{1}^{\ast}\left(\widetilde{t}_{i}\right)\right],$
with bubble scale
$\lambda^{\ast}_{ij}(t):=2^{i}\lambda_{j}^{\ast}\left(\widetilde{t}_{i}+4^{-i}t\right).$
Denote
$T_{i}:=4^{i}\left(t_{i+1}-\widetilde{t}_{i}\right).$
By the definition of $\widetilde{t}_{i}$ and $t_{i+1}$, for any $1\leq j\neq
k\leq N$, we have
$|\xi^{\ast}_{ij}(t)-\xi^{\ast}_{ik}(t)|\geq\frac{1}{2},\quad\forall
t\in\left[-4^{i-1},T_{i}\right].$ (12.6)
We also have the normalization condition
$|\xi^{\ast}_{i1}(0)-\xi^{\ast}_{i2}(0)|=1.$ (12.7)
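The normalization condition (12.7) follows directly from the definition of $\xi^{\ast}_{ij}$ and the choice of $\widetilde{t}_{i}$:

```latex
|\xi^{\ast}_{i1}(0)-\xi^{\ast}_{i2}(0)|
  = 2^{i}\left|\xi_{1}^{\ast}(\widetilde{t}_{i})-\xi_{2}^{\ast}(\widetilde{t}_{i})\right|
  = 2^{i}\cdot 2^{-i}
  = 1.
```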
Moreover, by Point (5) in Proposition 12.1, as $i\to+\infty$, if
$|\xi_{ij}^{\ast}(t)|$ does not escape to infinity, then
$\lambda_{ij}^{\ast}(t)\to 0\quad\mbox{uniformly in any compact set
of}~{}~{}\mathbb{R}.$
Then Proposition 20.1 is applicable if $i$ is large enough, which gives
$T_{i}\leq
100\min\left\\{\left|\log\lambda_{i1}^{\ast}(0)\right|,\left|\log\lambda_{i2}^{\ast}(0)\right|\right\\}.$
(12.8)
For any $t\in[0,T_{i}]$, Proposition 12.1 can be applied to $u_{i}$ in
$Q_{1/2}(\xi^{\ast}_{i1}(t),t)$ and $Q_{1/2}(\xi^{\ast}_{i2}(t),t)$. Denote
the scaling parameters by $\lambda_{i1}(t)$ and $\lambda_{i2}(t)$. By
Proposition 12.1,
$\lambda_{i1}(t)=\left[1+o(1)\right]\lambda^{\ast}_{i1}(t),\quad|\xi_{i1}^{\ast}(t)-\xi_{i1}(t)|=o(\lambda^{\ast}_{i1}(t)).$
(12.9)
The reduction equation for $\lambda_{i1}(t)$ (see Proposition 17.6) gives
$\left|\lambda_{i1}^{\prime}(t)\right|\lesssim\lambda_{i1}(t)^{\frac{n-4}{2}}.$
(12.10)
For any $t\in(0,T_{i}]$, integrating (12.10) on $[0,t]$ leads to
$\lambda_{i1}(t)^{-\frac{n-6}{2}}\geq\lambda_{i1}(0)^{-\frac{n-6}{2}}-CT_{i}.$
Plugging (12.8) and (12.9) into this inequality, after a simplification we get
$\lambda_{i1}(t)\leq\lambda_{i1}(0)\left[1+C\lambda_{i1}(0)^{\frac{n-6}{2}}\left|\log\lambda_{i1}(0)\right|\right]\leq 2\lambda_{i1}(0),$
where we have used the fact that $\lambda_{i1}(0)\ll 1$ in the last step.
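The step from (12.10) to the displayed bound can be spelled out as follows (a sketch; the constant $C$ changes from line to line).

```latex
% From (12.10),
\Big|\frac{d}{dt}\,\lambda_{i1}(t)^{-\frac{n-6}{2}}\Big|
  = \frac{n-6}{2}\,\lambda_{i1}(t)^{-\frac{n-4}{2}}\,|\lambda_{i1}'(t)| \le C,
% hence, for t \in (0, T_i],
\lambda_{i1}(t)^{-\frac{n-6}{2}} \ge \lambda_{i1}(0)^{-\frac{n-6}{2}} - C\,T_i .
% Inverting, and using T_i \le 100\,|\log\lambda_{i1}(0)| from (12.8):
\lambda_{i1}(t)
  \le \lambda_{i1}(0)\left[1 - C\,\lambda_{i1}(0)^{\frac{n-6}{2}}\,|\log\lambda_{i1}(0)|\right]^{-\frac{2}{n-6}}
  \le \lambda_{i1}(0)\left[1 + C\,\lambda_{i1}(0)^{\frac{n-6}{2}}\,|\log\lambda_{i1}(0)|\right],
% the last step because \lambda_{i1}(0) \ll 1.
```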
The reduction equation for $\xi_{i1}(t)$ is
$\left|\xi_{i1}^{\prime}(t)\right|\lesssim\lambda_{i1}(t)^{\frac{n-4}{2}}.$
Integrating this on $[0,T_{i}]$ and using the bound $\lambda_{i1}(t)\leq 2\lambda_{i1}(0)$ together with (12.8) again, we get
$|\xi_{i1}(T_{i})-\xi_{i1}(0)|\lesssim\lambda_{i1}(0)^{\frac{n-4}{2}}\left|\log\lambda_{i1}(0)\right|\ll
1.$
The same estimates hold for $\xi_{i2}$. Combining these two estimates with the
second equality in (12.9), we obtain
$|\xi_{i1}^{\ast}(T_{i})-\xi_{i2}^{\ast}(T_{i})|\geq|\xi_{i1}^{\ast}(0)-\xi_{i2}^{\ast}(0)|-\frac{1}{3}=\frac{2}{3}|\xi_{i1}^{\ast}(0)-\xi_{i2}^{\ast}(0)|.$
Scaling back to $u$, this is
$\rho(t_{i+1})=|\xi_{1}^{\ast}(t_{i+1})-\xi_{2}^{\ast}(t_{i+1})|\geq\frac{2}{3}|\xi_{1}^{\ast}(\widetilde{t}_{i})-\xi_{2}^{\ast}(\widetilde{t}_{i})|=\frac{2}{3}\rho(t_{i}).\qed$
### 13\. Exclusion of blow up with only one bubble
In this section, we prove
###### Proposition 13.1.
Under the setting of Theorem 8.1, any point $a\in\mathcal{S}(u)$ must be Type
I, in the sense that
$\Theta(a)=n^{-1}\left(\frac{n-2}{4}\right)^{\frac{n}{2}}.$
In view of Proposition 12.2, we need to consider the remaining case in
Proposition 12.1, that is, when there is only one bubble, $N=1$. Under this
assumption, Proposition 12.1 now reads as
###### Proposition 13.2.
For any $t$ sufficiently close to $0$, there exists a unique maximum point of
$u(\cdot,t)$ in $B_{1}(0)$. Denote this point by $\xi^{\ast}(t)$ and let
$\lambda^{\ast}(t):=u(\xi^{\ast}(t),t)^{-\frac{2}{n-2}}$.
As $t\to 0$,
$\lambda^{\ast}(t)\to 0,\quad|\xi^{\ast}(t)|\to 0,$
and the function
$u^{t}(y,s):=\lambda^{\ast}(t)^{\frac{n-2}{2}}u\left(\xi^{\ast}(t)+\lambda^{\ast}(t)y,t+\lambda^{\ast}(t)^{2}s\right),$
converges to $W(y)$ in $C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R})$.
Next, applying Proposition 12.1 (with the notation of $\eta_{K}$ etc. as used
in Part II) to a suitable rescaling of $u$ at $(\xi^{\ast}(t),t)$ gives
###### Proposition 13.3 (Orthogonal condition).
For any $t$ sufficiently close to $0$, there exists a unique
$(\xi(t),\lambda(t),a(t))\in\mathbb{R}^{n}\times\mathbb{R}_{+}\times\mathbb{R}$
with
$\frac{|\xi(t)-\xi^{\ast}(t)|}{\lambda(t)}+\Big{|}\frac{\lambda(t)}{\lambda^{\ast}(t)}-1\Big{|}+\Big{|}\frac{a(t)}{\lambda(t)}\Big{|}=o(1),$
(13.1)
such that for any $i=0,\cdots,n+1$,
$\int_{B_{1}}\left[u(x,t)-W_{\xi(t),\lambda(t)}(x)-a(t)Z_{0,\xi(t),\lambda(t)}(x)\right]\eta_{K}\left(\frac{x-\xi(t)}{\lambda(t)}\right)Z_{i,\xi(t),\lambda(t)}(x)dx=0.$
(13.2)
Starting with this decomposition, the analysis in Part II is applicable to
$\widetilde{u}^{t}(y,s):=(-t)^{\frac{n-2}{4}}u\left(\sqrt{-t}y,-ts\right),\quad\forall
t\in(-1/4,0).$
This gives an ordinary differential inequality for $\lambda(t)$:
$\left|\lambda^{\prime}(t)\right|\lesssim(-t)^{\frac{n-10}{4}}\lambda(t)^{\frac{n-4}{2}}.$
(13.3)
This inequality can be rewritten as
$\left|\frac{d}{dt}\lambda(t)^{-\frac{n-6}{2}}\right|\lesssim(-t)^{\frac{n-10}{4}}.$ (13.4)
Because $n\geq 7$,
$\frac{n-10}{4}>-1.$
Integrating (13.4) leads to
$\lim_{t\to 0}\lambda(t)^{-\frac{n-6}{2}}<+\infty.$
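In detail (a sketch): since $\frac{n-10}{4}>-1$, the right-hand side of (13.4) is integrable up to $t=0$, and for any $-1/4<t_{0}<t<0$,

```latex
\lambda(t)^{-\frac{n-6}{2}}
  \le \lambda(t_0)^{-\frac{n-6}{2}} + C \int_{t_0}^{t} (-s)^{\frac{n-10}{4}}\, ds
  \le \lambda(t_0)^{-\frac{n-6}{2}} + \frac{4C}{n-6}\,(-t_0)^{\frac{n-6}{4}}
  < +\infty,
% using \frac{d}{ds}\,(-s)^{\frac{n-6}{4}} = -\frac{n-6}{4}\,(-s)^{\frac{n-10}{4}}.
```

so $\lambda(t)^{-\frac{n-6}{2}}$ stays bounded as $t\to 0^{-}$.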
Thus as $t\to 0^{-}$, $\lambda(t)$ does not converge to $0$. By (13.1) and
Proposition 13.2,
$\limsup_{t\to 0^{-}}\sup_{x\in B_{1}}u(x,t)<+\infty.$
Hence $u$ does not blow up at time $t=0$. This is a contradiction.
### 14\. Proof of main results
In this section we first finish the proof of Theorem 8.1, and then show how
Theorem 1.1, Theorem 1.2 and Corollary 1.3 follow from this theorem.
#### 14.1. Proof of Theorem 8.1
Combining Proposition 13.1 with the tangent flow analysis in Section 10, we
deduce that for any $a\in\mathcal{S}(u)$,
$\limsup_{t\to
T}(T-t)^{\frac{1}{p-1}}u\left(a+\sqrt{T-t}y,t+(T-t)s\right)=(p-1)^{-\frac{1}{p-1}}(-s)^{\frac{1}{p-1}}$
(14.1)
uniformly in any compact set of $\mathbb{R}^{n}\times\mathbb{R}^{-}$.
However, this is only a qualitative description and we cannot obtain (8.2)
from it. Now we show how to obtain a quantitative estimate. We will mainly use
the stability of Type I blow ups, as characterized in (11.1). (For other
notions of stability of Type I blow ups, see Merle-Zaag [66], Fermanian
Kammerer-Merle-Zaag [29], Collot-Merle-Raphaël [12], Collot-Raphaël-Szeftel [14].)
###### Lemma 14.1.
For each $a\in B_{1/2}$, there exist two constants $C(a)>1$ and $\rho(a)<1/8$
such that for any $0<s<\rho(a)^{2}$,
$\sup_{x\in B_{\rho(a)}(a)}u(x,T-s)\leq C(a)s^{-\frac{1}{p-1}}.$ (14.2)
###### Proof.
If $a\in\mathcal{R}(u)$, by definition, there exists a $\rho(a)>0$ such that
$u\in L^{\infty}(Q_{\rho(a)}^{-}(a,T))$. Then (14.2) holds trivially by
choosing a suitable constant $C(a)$.
Next, assume $a\in\mathcal{S}(u)$. By (11.1), we can take a small
$\varepsilon>0$ so that
$n^{-1}\left(\frac{n-2}{4}\right)^{\frac{n}{2}}+4\varepsilon<n^{-1}\left(4\pi\right)^{-\frac{n}{2}}\Lambda.$
(14.3)
By (14.1), there exists an $r(a)>0$ such that
$\Theta_{r(a)^{2}}(a)<n^{-1}\left(\frac{n-2}{4}\right)^{\frac{n}{2}}+\varepsilon.$
(14.4)
Next, combining (14.1) with Proposition 9.3 and the monotonicity formula
(Proposition 9.1), we find another constant $\rho(a)<r(a)$ such that
$\Theta_{r(a)^{2}}(x)\leq\Theta_{r(a)^{2}}(a)+\varepsilon,\quad\forall x\in
B_{2\rho(a)}(a).$
Substituting this inequality into (14.4) and noting (14.3), we get
$\Theta_{r(a)^{2}}(x)<n^{-1}\left(4\pi\right)^{-\frac{n}{2}}\Lambda-2\varepsilon,\quad\forall
x\in B_{2\rho(a)}(a).$ (14.5)
In the following, we argue by contradiction. Assume that, with $\rho(a)$
defined as above, there is no constant $C(a)$ for which (14.2) holds; that is,
there exist sequences $x_{k}\in B_{\rho(a)}(a)$ and $s_{k}\to 0$
with
$u(x_{k},T-s_{k})\geq ks_{k}^{-\frac{1}{p-1}}.$ (14.6)
Define the blow up sequence
$u_{k}(x,t):=s_{k}^{\frac{1}{p-1}}u(x_{k}+\sqrt{s_{k}}x,T+s_{k}t).$
As in Section 10, by applying results in Part I, we may assume $u_{k}$
converges weakly to $u_{\infty}$, and
$|\nabla u_{k}|^{2}dxdt\rightharpoonup|\nabla u_{\infty}|^{2}dxdt+\mu$
weakly as Radon measures in $\mathbb{R}^{n}\times\mathbb{R}^{-}$. By (14.6),
$u_{k}(0,-1)\geq k.$
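Indeed, unwinding the definition of $u_{k}$ at $(x,t)=(0,-1)$ and using (14.6),

```latex
u_k(0,-1) = s_k^{\frac{1}{p-1}}\, u(x_k,\, T - s_k)
  \;\ge\; s_k^{\frac{1}{p-1}} \cdot k\, s_k^{-\frac{1}{p-1}} \;=\; k.
```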
Hence $u_{k}$ cannot converge smoothly to $u_{\infty}$. On the other hand, we
claim that $u_{k}$ does converge smoothly to $u_{\infty}$. This contradiction
then finishes the proof of this lemma.
Proof of the claim. We first prove $\mu=0$. By scaling (14.5) and passing to
the limit, we get
$\Theta_{s}(x,0;u_{\infty},\mu)\leq
n^{-1}\left(4\pi\right)^{-\frac{n}{2}}\Lambda-2\varepsilon,\quad\forall
x\in\mathbb{R}^{n},\quad s>0.$ (14.7)
By Corollary 9.7, for $\mu$-a.e. $(x,t)$,
$\Theta(x,t;u_{\infty},\mu)\geq
n^{-1}\left(4\pi\right)^{-\frac{n}{2}}\Lambda.$ (14.8)
On the other hand, if $s$ is large enough, by the Morrey space bound as in
(5.1), we get
$\Theta_{s}(x,t;u_{\infty},\mu)\leq\Theta_{s}(x,0;u_{\infty},\mu)+\varepsilon.$
(14.9)
By the monotonicity of $\Theta_{s}(\cdot)$, these three inequalities lead to a
contradiction unless $\mu=0$.
Next we show that $u_{\infty}\in
C^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{-})$. First, because $\mu=0$, as in
Theorem 6.1, we deduce that $u_{\infty}$ is a suitable weak solution of (1.1)
in $\mathbb{R}^{n}\times\mathbb{R}^{-}$. If there is a singular point of
$u_{\infty}$, say $(x,t)$, by the analysis in Subsection 9.2 and Theorem 9.1,
any tangent flow of $u_{\infty}$ at $(x,t)$ would be of the form
$N_{1}\Lambda\delta_{0}\otimes
dt\lfloor_{\mathbb{R}^{-}}+N_{2}\Lambda\delta_{0}\otimes
dt\lfloor_{\mathbb{R}^{+}},$
where $N_{1}\geq 1$, $N_{2}\geq 0$. Then by Corollary 9.7, we get
$\Theta(x,t;u_{\infty})\geq n^{-1}\left(4\pi\right)^{-\frac{n}{2}}\Lambda.$
Using this inequality to replace (14.8) and arguing as above, we get a
contradiction with (14.7). In conclusion, the singular set of $u_{\infty}$ is
empty.
Now $\mu=0$ implies that $u_{k}$ converges to $u_{\infty}$ strongly. The fact
that $u_{\infty}$ is smooth implies that, for any
$(x,t)\in\mathbb{R}^{n}\times\mathbb{R}^{-}$, we can apply the
$\varepsilon$-regularity theorem, Theorem 3.9, to $u_{k}$ in a sufficiently
small cylinder $Q_{r}^{-}(x,t)$. This implies that $u_{k}$ converges to
$u_{\infty}$ in $C^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{-})$. ∎
These balls $\\{B_{\rho(a)}(a)\\}$ form a covering of $B_{1/2}$. Take a finite
sub-cover $\\{B_{\rho_{i}}(a_{i})\\}$ of $B_{1/2}$. For
$t\geq T-\min_{i}\rho_{i}^{2},$
(8.2) is a direct consequence of (14.2), if we choose the constant in (8.2) to
be
$C:=\max_{i}C(a_{i}).$
Since (8.2) holds trivially for $t\leq T-\min_{i}\rho_{i}^{2}$, we finish the
proof of Theorem 8.1.
#### 14.2. Cauchy-Dirichlet problems
Define
$H(t):=\int_{\Omega}u(x,t)^{2}dx,\quad E(t):=\int_{\Omega}\left(\frac{|\nabla
u(x,t)|^{2}}{2}-\frac{|u(x,t)|^{p+1}}{p+1}\right)dx.$
The following estimate is Blatt-Struwe’s [4, Lemma 6.3].
###### Lemma 14.2.
There exists a constant $C$ such that for any $t\in(T/2,T)$,
$\left\\{\begin{aligned} &H(t)\leq C(T-t)^{-\frac{2}{p-1}},\\\
&\int_{T/2}^{t}\int_{\Omega}\left[|\nabla
u(y,s)|^{2}+|u(y,s)|^{p+1}\right]dyds\leq
C(T-t)^{-\frac{2}{p-1}}.\end{aligned}\right.$
With this estimate in hand, we know that $u$ locally satisfies the assumptions
in Section 8. Theorem 1.2 then follows from Theorem 8.1 and a covering
argument.
#### 14.3. Cauchy problem
Now we come to the proof of Theorem 1.1. As in [39], by the regularizing
effect of parabolic equations, we may assume $u_{0}\in C^{2}(\mathbb{R}^{n})$.
For each $a\in\mathbb{R}^{n}$ and $s\in[0,T)$, define
$\Theta_{s}(a):=s^{\frac{p+1}{p-1}}\int_{\mathbb{R}^{n}}\left[\frac{|\nabla u(y,T-s)|^{2}}{2}-\frac{|u(y,T-s)|^{p+1}}{p+1}\right]G(y-a,s)dy+\frac{1}{2(p-1)}s^{\frac{2}{p-1}}\int_{\mathbb{R}^{n}}u(y,T-s)^{2}G(y-a,s)dy.$
Then we have (see the summary in [39, Section 3.2]):
1. (1)
$\Theta_{s}(a)$ is non-decreasing in $s$;
2. (2)
for any $a\in\mathbb{R}^{n}$, $r\in(0,\sqrt{T}/2)$ and $\theta\in(0,1/2)$,
$\left\\{\begin{aligned} &\int_{Q_{r}(a,T-\theta r^{2})}|\nabla
u|^{2}+|u|^{p+1}\leq
C(\theta)\max\left\\{\Theta_{T}(a),\Theta_{T}(a)^{\frac{2}{p+1}}\right\\}r^{2},\\\
&\int_{Q_{r}(a,T-\theta r^{2})}|\partial_{t}u|^{2}\leq
C(\theta)\max\left\\{\Theta_{T}(a),\Theta_{T}(a)^{\frac{2}{p+1}}\right\\}.\end{aligned}\right.$
With these estimates in hand, we can repeat the proof of Theorem 8.1 to get
Theorem 1.1.
#### 14.4. Energy collapsing and the blow up of $L^{p+1}$ norm
In this subsection, we prove Corollary 1.3.
Recall that we have assumed that there exists a blow up point $a\in\Omega$. We
define the blow up sequence $u_{\lambda}$ at $(a,T)$ as in Section 10.
Combining Proposition 10.1 with Proposition 13.1, we see
$u_{\lambda}(x,t)\to
u_{\infty}(x,t)=(p-1)^{-\frac{1}{p-1}}(-t)^{-\frac{1}{p-1}}\quad\mbox{in}~{}~{}C^{\infty}_{loc}(\mathbb{R}^{n}\times\mathbb{R}^{-}).$
Hence there exists a universal constant $c>0$ such that
$\lim_{\lambda\to 0}\int_{Q^{-}_{\lambda}(a,T-\lambda^{2})}|\partial_{t}u|^{2}=\lim_{\lambda\to 0}\int_{Q^{-}_{1}(0,-1)}|\partial_{t}u_{\lambda}|^{2}\geq\int_{Q^{-}_{1}(0,-1)}|\partial_{t}u_{\infty}|^{2}\geq c.$
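The first equality above is the scale invariance of the time-derivative energy at the critical exponent; a direct check, with $y=a+\lambda x$, $s=T+\lambda^{2}t$:

```latex
% Since u_\lambda(x,t) = \lambda^{2/(p-1)} u(a+\lambda x,\, T+\lambda^2 t),
\partial_t u_\lambda(x,t) = \lambda^{\frac{2}{p-1}+2}\,\partial_t u(a+\lambda x,\, T+\lambda^2 t),
\qquad dy\, ds = \lambda^{n+2}\, dx\, dt,
% so, with p = (n+2)/(n-2), i.e. 2/(p-1) = (n-2)/2,
\int_{Q_1^-(0,-1)} |\partial_t u_\lambda|^2\, dx\, dt
  = \lambda^{\frac{4}{p-1}+4-(n+2)} \int_{Q_\lambda^-(a,T-\lambda^2)} |\partial_t u|^2\, dy\, ds
  = \int_{Q_\lambda^-(a,T-\lambda^2)} |\partial_t u|^2,
% because \frac{4}{p-1}+4-(n+2) = (n-2)+4-(n+2) = 0.
```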
Take a subsequence $\lambda_{i}$ such that the cylinders
$Q^{-}_{\lambda_{i}}(a,T-\lambda_{i}^{2})$ are pairwise disjoint. Then
$\lim_{t\to
T}\int_{0}^{t}\int_{\Omega}|\partial_{t}u|^{2}\geq\sum_{i=1}^{+\infty}\int_{Q^{-}_{\lambda_{i}}(a,T-\lambda_{i}^{2})}|\partial_{t}u|^{2}=+\infty.$
Then by the energy identity for $u$, we get (1.6).
In the same way, there exists a universal constant $c>0$ such that, for any
$R>1$,
$\lim_{\lambda\to 0}\int_{B_{\lambda R}(a)}u(x,T-\lambda^{2})^{p+1}dx=\lim_{\lambda\to 0}\int_{B_{R}}u_{\lambda}(x,-1)^{p+1}dx\geq cR^{n}.$
Therefore
$\lim_{t\to T}\int_{\Omega}u(x,t)^{p+1}dx\geq\lim_{t\to T}\int_{B_{R\sqrt{T-t}}(a)}u(x,t)^{p+1}dx\geq cR^{n}.$
Because $R$ can be arbitrarily large, (1.7) follows.
### References
* [1] Luigi Ambrosio and Halil Mete Soner. A measure-theoretic approach to higher codimension mean curvature flows. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 25(1-2):27–49 (1998), 1997. Dedicated to Ennio De Giorgi.
* [2] Adolfo Arroyo-Rabasa, Guido De Philippis, Jonas Hirsch, and Filip Rindler. Dimensional estimates and rectifiability for measures satisfying linear PDE constraints. Geom. Funct. Anal., 29(3):639–658, 2019.
* [3] P. Baras and L. Cohen. Complete blow-up after $T_{{\rm max}}$ for the solution of a semilinear heat equation. J. Funct. Anal., 71(1):142–174, 1987.
* [4] Simon Blatt and Michael Struwe. An analytic framework for the supercritical Lane-Emden equation and its gradient flow. Int. Math. Res. Not. IMRN, 2015(9):2342–2385, 2015.
* [5] Simon Blatt and Michael Struwe. Well-posedness of the supercritical Lane-Emden heat flow in Morrey spaces. ESAIM Control Optim. Calc. Var., 22(4):1370–1381, 2016.
* [6] Kenneth A. Brakke. The motion of a surface by its mean curvature, volume 20 of Mathematical Notes. Princeton University Press, Princeton, N.J., 1978.
* [7] Haïm Brezis and Thierry Cazenave. A nonlinear heat equation with singular initial data. J. Anal. Math., 68:277–304, 1996.
* [8] Luis A. Caffarelli, Basilis Gidas, and Joel Spruck. Asymptotic symmetry and local behavior of semilinear elliptic equations with critical Sobolev growth. Comm. Pure Appl. Math., 42(3):271–297, 1989.
* [9] Kai-Seng Chou, Shi-Zhong Du, and Gao-Feng Zheng. On partial regularity of the borderline solution of semilinear parabolic problems. Calc. Var. Partial Differential Equations, 30(2):251–275, 2007.
* [10] Charles Collot. Nonradial type II blow up for the energy-supercritical semilinear heat equation. Anal. PDE, 10(1):127–252, 2017.
* [11] Charles Collot, Frank Merle, and Pierre Raphaël. Dynamics near the ground state for the energy critical nonlinear heat equation in large dimensions. Comm. Math. Phys., 352(1):215–285, 2017.
* [12] Charles Collot, Frank Merle, and Pierre Raphaël. Stability of ODE blow-up for the energy critical semilinear heat equation. C. R. Math. Acad. Sci. Paris, 355(1):65–79, 2017.
* [13] Charles Collot, Frank Merle, and Pierre Raphaël. Strongly anisotropic type II blow up at an isolated point. J. Amer. Math. Soc., 33(2):527–607, 2020.
* [14] Charles Collot, Pierre Raphaël, and Jeremie Szeftel. On the stability of type I blow up for the energy super critical heat equation. Mem. Amer. Math. Soc., 260(1255):v+97, 2019.
* [15] Carmen Cortázar, Manuel del Pino, and Monica Musso. Green’s function and infinite-time bubbling in the critical nonlinear heat equation. J. Eur. Math. Soc. (JEMS), 22(1):283–344, 2020.
* [16] Juan Dávila, Manuel del Pino, and Juncheng Wei. Singularity formation for the two-dimensional harmonic map flow into $\mathbb{S}^{2}$. Invent. Math., 219(2):345–466, 2020.
* [17] Manuel del Pino. Bubbling blow-up in critical parabolic problems. In Nonlocal and nonlinear diffusions and interactions: new methods and directions, volume 2186 of Lecture Notes in Math., pages 73–116. Springer, Cham, 2017.
* [18] Manuel del Pino, Chen-Chih Lai, Monica Musso, Juncheng Wei, and Yifu Zhou. New type II finite time blow-up for the energy supercritical heat equation. arXiv preprint, 2020.
* [19] Manuel del Pino, Monica Musso, and Jun Cheng Wei. Type II blow-up in the $5$-dimensional energy critical heat equation. Acta Math. Sin. (Engl. Ser.), 35(6):1027–1042, 2019.
* [20] Manuel del Pino, Monica Musso, and Juncheng Wei. Infinite-time blow-up for the $3$-dimensional energy-critical heat equation. Anal. PDE, 13(1):215–274, 2020.
* [21] Manuel del Pino, Monica Musso, and Juncheng Wei. Geometry driven type II higher dimensional blow-up for the critical heat equation. J. Funct. Anal., 280(1):108788, 49, 2021.
* [22] Manuel del Pino, Monica Musso, Juncheng Wei, Qidi Zhang, and Yifu Zhang. Type II finite time blow-up for the three dimensional energy critical heat equation. arXiv preprint, 2020.
* [23] Manuel del Pino, Monica Musso, Juncheng Wei, and Yifu Zhou. Type II finite time blow-up for the energy critical heat equation in $\mathbb{R}^{4}$. Discrete Contin. Dyn. Syst., 40(6):3327–3355, 2020.
* [24] Olivier Druet, Emmanuel Hebey, and Frédéric Robert. Blow-up theory for elliptic PDEs in Riemannian geometry, volume 45 of Mathematical Notes. Princeton University Press, Princeton, NJ, 2004.
* [25] Shi-Zhong Du. On partial regularity of the borderline solution of semilinear parabolic equation with critical growth. Adv. Differential Equations, 18(1-2):147–177, 2013.
* [26] Shi-Zhong Du. The singular set of stationary solution for semilinear elliptic equations with supercritical growth. J. Differential Equations, 256(7):2392–2405, 2014.
* [27] Shi-Zhong Du. Energy non-collapsing and refined blowup for a semilinear heat equation. J. Differential Equations, 266(9):5942–5970, 2019.
* [28] Herbert Federer and William P. Ziemer. The Lebesgue set of a function whose distribution derivatives are $p$-th power summable. Indiana Univ. Math. J., 22:139–158, 1972/73.
* [29] C. Fermanian Kammerer, F. Merle, and H. Zaag. Stability of the blow-up profile of non-linear heat equations from the dynamical system point of view. Math. Ann., 317(2):347–387, 2000.
* [30] Stathis Filippas, Miguel A. Herrero, and Juan J. L. Velázquez. Fast blow-up mechanisms for sign-changing solutions of a semilinear parabolic equation with critical nonlinearity. R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci., 456(2004):2957–2982, 2000.
* [31] Stathis Filippas and Robert V. Kohn. Refined asymptotics for the blowup of $u_{t}-\Delta u=u^{p}$. Comm. Pure Appl. Math., 45(7):821–869, 1992.
* [32] Stathis Filippas and Wen Xiong Liu. On the blowup of multidimensional semilinear heat equations. Ann. Inst. H. Poincaré Anal. Non Linéaire, 10(3):313–344, 1993.
* [33] Hiroshi Fujita. On the blowing up of solutions of the Cauchy problem for $u_{t}=\Delta u+u^{1+\alpha}$. J. Fac. Sci. Univ. Tokyo Sect. I, 13:109–124, 1966.
* [34] Victor A. Galaktionov and Juan L. Vazquez. Continuation of blowup solutions of nonlinear heat equations in several space dimensions. Comm. Pure Appl. Math., 50(1):1–67, 1997.
* [35] Yoshikazu Giga. A bound for global solutions of semilinear heat equations. Comm. Math. Phys., 103(3):415–421, 1986.
* [36] Yoshikazu Giga and Robert V. Kohn. Asymptotically self-similar blow-up of semilinear heat equations. Comm. Pure Appl. Math., 38(3):297–319, 1985.
* [37] Yoshikazu Giga and Robert V. Kohn. Characterizing blowup using similarity variables. Indiana Univ. Math. J., 36(1):1–40, 1987.
* [38] Yoshikazu Giga and Robert V. Kohn. Nondegeneracy of blowup for semilinear heat equations. Comm. Pure Appl. Math., 42(6):845–884, 1989.
* [39] Yoshikazu Giga, Shin’ya Matsui, and Satoshi Sasayama. Blow up rate for semilinear heat equations with subcritical nonlinearity. Indiana Univ. Math. J., 53(2):483–514, 2004.
* [40] Yoshikazu Giga, Shin’ya Matsui, and Satoshi Sasayama. On blow-up rate for sign-changing solutions in a convex domain. Math. Methods Appl. Sci., 27(15):1771–1782, 2004.
* [41] Junichi Harada. A higher speed type II blowup for the five dimensional energy critical heat equation. Ann. Inst. H. Poincaré Anal. Non Linéaire, 37(2):309–341, 2020.
* [42] Junichi Harada. A type II blowup for the six dimensional energy critical heat equation. Ann. PDE, 6(2):Paper No. 13, 63, 2020.
* [43] Miguel A. Herrero and Juan J. L. Velázquez. Explosion de solutions d’équations paraboliques semilinéaires supercritiques. C. R. Acad. Sci. Paris Sér. I Math., 319(2):141–145, 1994.
* [44] John E. Hutchinson. Second fundamental form for varifolds and the existence of surfaces minimising curvature. Indiana Univ. Math. J., 35(1):45–71, 1986.
* [45] Kota Kasai and Yoshihiro Tonegawa. A general regularity theory for weak mean curvature flow. Calc. Var. Partial Differential Equations, 50(1-2):1–68, 2014.
* [46] M. A. Khuri, F. C. Marques, and R. M. Schoen. A compactness theorem for the Yamabe problem. J. Differential Geom., 81(1):143–196, 2009.
* [47] Yan Yan Li. Harnack type inequality: the method of moving planes. Comm. Math. Phys., 200(2):421–444, 1999.
* [48] Yan Yan Li and Lei Zhang. Compactness of solutions to the Yamabe problem. II. Calc. Var. Partial Differential Equations, 24(2):185–237, 2005.
* [49] YanYan Li and Lei Zhang. Liouville-type theorems and Harnack-type inequalities for semilinear elliptic equations. J. Anal. Math., 90:27–87, 2003.
* [50] Yanyan Li and Meijun Zhu. Yamabe type equations on three-dimensional Riemannian manifolds. Commun. Contemp. Math., 1(1):1–50, 1999.
* [51] F.-H. Lin and Q. S. Zhang. On ancient solutions of the heat equation. Comm. Pure Appl. Math., 72(9):2006–2028, 2019.
* [52] Fang-Hua Lin. Gradient estimates and blow-up analysis for stationary harmonic maps. Ann. of Math. (2), 149(3):785–829, 1999.
* [53] Fang-Hua Lin and Tristan Rivière. Energy quantization for harmonic maps. Duke Math. J., 111(1):177–193, 2002.
* [54] Fanghua Lin. Mapping problems, fundamental groups and defect measures. Acta Math. Sin. (Engl. Ser.), 15(1):25–52, 1999.
* [55] FangHua Lin and ChangYou Wang. Harmonic and quasi-harmonic spheres. Comm. Anal. Geom., 7(2):397–429, 1999.
* [56] FangHua Lin and ChangYou Wang. Harmonic and quasi-harmonic spheres. II. Comm. Anal. Geom., 10(2):341–375, 2002.
* [57] FangHua Lin and ChangYou Wang. Harmonic and quasi-harmonic spheres. III. Rectifiability of the parabolic defect measure and generalized varifold flows. Ann. Inst. H. Poincaré Anal. Non Linéaire, 19(2):209–259, 2002.
* [58] Fanghua Lin and Changyou Wang. The analysis of harmonic maps and their heat flows. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2008.
* [59] Fanghua Lin and Xiaoping Yang. Geometric measure theory—an introduction, volume 1 of Advanced Mathematics (Beijing/Boston). Science Press Beijing, Beijing; International Press, Boston, MA, 2002.
* [60] Wen Xiong Liu. Blow-up behavior for semilinear heat equations: multi-dimensional case. Rocky Mountain J. Math., 23(4):1287–1319, 1993.
* [61] John M. Marstrand. The $(\varphi,\,s)$ regular subsets of $n$-space. Trans. Amer. Math. Soc., 113:369–392, 1964.
* [62] Yvan Martel. Complete blow up and global behaviour of solutions of $u_{t}-\Delta u=g(u)$. Ann. Inst. H. Poincaré Anal. Non Linéaire, 15(6):687–723, 1998.
* [63] Hiroshi Matano and Frank Merle. On nonexistence of type II blowup for a supercritical nonlinear heat equation. Comm. Pure Appl. Math., 57(11):1494–1541, 2004.
* [64] Hiroshi Matano and Frank Merle. Classification of type I and type II behaviors for a supercritical nonlinear heat equation. J. Funct. Anal., 256(4):992–1064, 2009.
* [65] F. Merle and H. Zaag. Refined uniform estimates at blow-up and applications for nonlinear heat equations. Geom. Funct. Anal., 8(6):1043–1085, 1998.
* [66] Frank Merle and Hatem Zaag. Stability of the blow-up profile for equations of the type $u_{t}=\Delta u+|u|^{p-1}u$. Duke Math. J., 86(1):143–195, 1997.
* [67] Frank Merle and Hatem Zaag. Optimal estimates for blowup rate and behavior for nonlinear heat equations. Comm. Pure Appl. Math., 51(2):139–196, 1998.
* [68] Frank Merle and Hatem Zaag. A Liouville theorem for vector-valued nonlinear heat equations and applications. Math. Ann., 316(1):103–137, 2000.
* [69] Noriko Mizoguchi. Boundedness of global solutions for a supercritical semilinear heat equation and its application. Indiana Univ. Math. J., 54(4):1047–1059, 2005.
* [70] Noriko Mizoguchi and Philippe Souplet. Optimal condition for blow-up of the critical $L^{q}$ norm for the semilinear heat equation. Adv. Math., 355:106763, 24, 2019.
* [71] Luisa Moschini and Alberto Tesei. Harnack inequality and heat kernel estimates for the Schrödinger operator with Hardy potential. Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei (9) Mat. Appl., 16(3):171–180 (2006), 2005.
* [72] Roger Moser. Stationary measures and rectifiability. Calc. Var. Partial Differential Equations, 17(4):357–368, 2003\.
* [73] Monica Musso, Manuel Del Pino, Juncheng Wei, and Youquan Zheng. Sign-changing blowing-up solutions for the critical nonlinear heat equation. Annali Della Scuola Normale Superiore Di Pisa-classe Di Scienze, page 1, 2019.
* [74] Aaron Naber and Daniele Valtorta. Energy identity for stationary Yang Mills. Invent. Math., 216(3):847–925, 2019.
* [75] Pavol Quittner and Philippe Souplet. Superlinear parabolic problems. Birkhäuser Advanced Texts: Basler Lehrbücher. [Birkhäuser Advanced Texts: Basel Textbooks]. Birkhäuser/Springer, Cham, 2019. Blow-up, global existence and steady states, Second edition of [ MR2346798].
* [76] Laurent Saloff-Coste. Aspects of Sobolev-type inequalities, volume 289 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2002.
* [77] Richard M. Schoen. Courses at Stanford University. 1988\.
* [78] Richard M. Schoen. Variational theory for the total scalar curvature functional for Riemannian metrics and related topics. In Topics in calculus of variations (Montecatini Terme, 1987), volume 1365 of Lecture Notes in Math., pages 120–154. Springer, Berlin, 1989.
* [79] Rémi Schweyer. Type II blow-up for the four dimensional energy critical semi linear heat equation. J. Funct. Anal., 263(12):3922–3983, 2012.
* [80] Yukihiro Seki. Type II blow-up mechanisms in a semilinear heat equation with critical Joseph-Lundgren exponent. J. Funct. Anal., 275(12):3380–3456, 2018.
* [81] Philippe Souplet. Morrey spaces and classification of global solutions for a supercritical semilinear heat equation in $\mathbb{R}^{n}$. J. Funct. Anal., 272(5):2005–2037, 2017.
* [82] Michael Struwe. A global compactness result for elliptic boundary value problems involving limiting nonlinearities. Math. Z., 187(4):511–517, 1984.
* [83] J. J. L. Velázquez. Higher-dimensional blow up for semilinear parabolic equations. Comm. Partial Differential Equations, 17(9-10):1567–1596, 1992\.
* [84] J. J. L. Velázquez. Classification of singularities for blowing up solutions in higher dimensions. Trans. Amer. Math. Soc., 338(1):441–464, 1993.
* [85] J. J. L. Velázquez. Estimates on the $(n-1)$-dimensional Hausdorff measure of the blow-up set for a semilinear heat equation. Indiana Univ. Math. J., 42(2):445–476, 1993.
* [86] Kelei Wang and Juncheng Wei. Analysis of blow-up locus and existence of weak solutions for nonlinear supercritical problems. Int. Math. Res. Not. IMRN, (10):2634–2670, 2015.
* [87] Juncheng Wei and Shusen Yan. Infinitely many solutions for the prescribed scalar curvature problem on $\mathbb{S}^{N}$. J. Funct. Anal., 258(9):3048–3081, 2010.
* [88] Fred B. Weissler. Local existence and nonexistence for semilinear parabolic equations in $L^{p}$. Indiana Univ. Math. J., 29(1):79–102, 1980.
* [89] Brian White. Stratification of minimal surfaces, mean curvature flows, and harmonic maps. J. Reine Angew. Math., 488:1–35, 1997.
* [90] D. V. Widder. The role of the Appell transformation in the theory of heat conduction. Trans. Amer. Math. Soc., 109:121–134, 1963.
* [91] Hatem Zaag. A remark on the energy blow-up behavior for nonlinear heat equations. Duke Math. J., 103(3):545–556, 2000.
* [92] Hatem Zaag. On the regularity of the blow-up set for semilinear heat equations. Ann. Inst. H. Poincaré Anal. Non Linéaire, 19(5):505–542, 2002.
* [93] Hatem Zaag. One-dimensional behavior of singular $N$-dimensional solutions of semilinear heat equations. Comm. Math. Phys., 225(3):523–549, 2002.
* [94] Hatem Zaag. Determination of the curvature of the blow-up set and refined singular behavior for a semilinear heat equation. Duke Math. J., 133(3):499–525, 2006.
|
# Big mapping class groups and the co-Hopfian property
Javier Aramayona Instituto de Ciencias Matemáticas, ICMAT (CSIC-UAM-UC3M-UCM)
<EMAIL_ADDRESS>, Christopher J. Leininger Rice University
<EMAIL_ADDRESS>and Alan McLeay University of Luxembourg
<EMAIL_ADDRESS>
###### Abstract.
We study injective homomorphisms between big mapping class groups of infinite-
type surfaces. First, we construct (uncountably many) examples of surfaces
without boundary whose (pure) mapping class groups are not co-Hopfian; these
are the first such examples of injective endomorphisms of mapping class groups
that fail to be surjective.
We then prove that, subject to some topological conditions on the domain
surface, any continuous injective homomorphism between (arbitrary) big mapping
class groups that sends Dehn twists to Dehn twists is induced by a subsurface
embedding.
Finally, we explore the extent to which, in stark contrast to the finite-type
case, superinjective maps between curve graphs impose no topological
restrictions on the underlying surfaces.
J.A. was supported by grant PGC2018-101179-B-I00. He acknowledges financial
support from the Spanish Ministry of Science and Innovation, through the
“Severo Ochoa Programme for Centres of Excellence in R&D” (CEX2019-000904-S).
C.L. was partially supported by NSF grants DMS-1811518 and DMS-2106419. A.M
was supported by the Luxembourg National Research Fund OPEN grant
O19/13865598.
## 1\. Introduction and main results
Throughout this article, all surfaces will be assumed to be connected,
orientable and second-countable, unless otherwise specified. A surface $S$ has
finite type if its fundamental group is finitely generated, and has infinite
type otherwise.
The mapping class group $\operatorname{Map}(S)$ is the group of orientation-
preserving homeomorphisms of $S$, up to homotopy; if $\partial
S\neq\emptyset$, then we require that all homeomorphisms and homotopies fix
$\partial S$ pointwise. In the case when $S$ has finite type,
$\operatorname{Map}(S)$ is well-known to be finitely presented. If, on the
contrary, $S$ has infinite type, then $\operatorname{Map}(S)$ becomes an
uncountable, totally disconnected, non-locally compact topological group with
respect to the quotient topology stemming from the compact-open topology on
the homeomorphism group of $S$. We refer the reader to Section 2 for expanded
definitions, and to the recent survey [5] for a detailed treatment of mapping
class groups of infinite-type surfaces, now commonly known as big mapping
class groups.
### 1.1. Algebraic rigidity for mapping class groups
A seminal result of Ivanov [23] states that if $S$ is a (sufficiently
complicated) finite-type surface, then every automorphism of
$\operatorname{Map}(S)$ is conjugation by an element of the extended mapping
class group $\operatorname{Map}^{\pm}(S)$, namely the group of all
homeomorphisms of $S$ up to isotopy. In other words, every isomorphism
$\operatorname{Map}(S)\to\operatorname{Map}(S)$ is induced by a homeomorphism
$S\to S$. The analog for infinite-type surfaces was recently obtained by
Bavard-Dowdall-Rafi [8]; see Theorem ([8]) in Section 2 below.
The proofs in the finite and infinite type settings proceed along similar
lines, based on the work of Ivanov [23]. First, one proves an algebraic
characterization of Dehn twists and uses it to deduce that any isomorphism
$\phi:\operatorname{Map}(S)\to\operatorname{Map}(S)$ must send (powers of)
Dehn twists to (powers of) Dehn twists. In particular, $\phi$ induces a
simplicial automorphism $\phi_{*}:\mathcal{C}(S)\to\mathcal{C}(S)$ of the
curve complex. At this point, the argument boils down to showing that every
simplicial automorphism of $\mathcal{C}(S)$ is induced by a homeomorphism of
$S$ [23]. In fact, this approach applies, with a finite list of exceptional
surfaces, to any isomorphism between finite-index subgroups of
$\operatorname{Map}(S)$ or the pure mapping class group
$\operatorname{PMap}(S)$, the subgroup of $\operatorname{Map}(S)$ whose
elements fix every end of $S$ [23, 9, 10, 21, 22, 20, 31].
### 1.2. The co-Hopfian property
The combination of results of Ivanov-McCarthy [24] and Bell-Margalit [10]
implies that, with the possible exception of the twice-punctured torus, if $S$
has finite type then every injective homomorphism
$\operatorname{Map}(S)\to\operatorname{Map}(S)$ is induced by a homeomorphism
of $S$. In particular, the mapping class group of a (sufficiently complicated)
finite-type surface is co-Hopfian: every injective endomorphism is surjective.
For infinite-type surfaces, Question 4.5 of the AIM Problem List on Surfaces
of Infinite Type [1] (see also [5, Question 5.2]) asks:
###### Question 1.1.
Are big mapping class groups co-Hopfian?
###### Remark.
The answer to the above question is immediately “no” for surfaces with non-
empty boundary. For instance, if a surface $S$ has non-empty boundary and its
space of ends is homeomorphic to a Cantor set, then there exists a proper
$\pi_{1}$-injective continuous map $S\to S$ that is not surjective. In turn,
this map induces an injective homomorphism
$\operatorname{Map}(S)\to\operatorname{Map}(S)$ that is not surjective;
compare with Theorem 2 below.
Our first result states that the answer to Question 1.1 is also negative for
surfaces without boundary. Recall that the Loch Ness Monster is the unique (up
to homeomorphism) connected orientable surface of infinite genus and exactly
one end. We will prove:
###### Theorem 1.
Let $S$ be either the once-punctured Loch Ness Monster or a torus minus the
union of a Cantor set and an isolated point. Then there exists a homomorphism
$\phi:\operatorname{Map}(S)\to\operatorname{Map}(S)$ such that:
1. (1)
$\phi$ is continuous and injective, but not surjective,
2. (2)
There is a Dehn twist $t$ such that $\phi(t)$ is not supported on any finite-
type subsurface of $S$. In the particular case when $S$ is the once-punctured
Loch Ness Monster, no power of $\phi(t)$ is supported on a finite-type subsurface of
$S$.
3. (3)
There exists a partial pseudo-Anosov $f\in\operatorname{Map}(S)$ such that
$\phi(f)$ is a multitwist.
###### Remark.
Part (2) of the above theorem yields a negative answer to Problem 4.75 of the
AIM Problem List on Surfaces of Infinite Type [1].
The construction behind Theorem 1 is inspired by the construction of the non-
geometric injective homomorphism of the first two authors with Souto [2,
Theorem 2], building on a well-known construction of homomorphisms from covers
(see e.g. [12, 24, 34]). Namely, we construct a covering map $S^{\prime}\to S$
which induces an injective homomorphism
$\operatorname{Map}(S)\to\operatorname{Map}(S^{\prime})$, in such a way that
the surface $S^{\prime\prime}$ obtained by filling in all but one of the
punctures of $S^{\prime}$ is homeomorphic to $S$, and yet the homomorphism
$\operatorname{Map}(S)\to\operatorname{Map}(S^{\prime\prime})$ remains
injective. Once this has been done, the resulting homomorphism is easily seen
to not be surjective in light of Bavard-Dowdall-Rafi’s result [8] mentioned
above, since $\phi$ sends some finitely supported elements to elements which
are not finitely supported.
Motivated by this construction, we also observe the following.
###### Theorem 1.2.
Let $S$ denote the closed surface of genus $g\geq 1$ minus the union of a
Cantor set and an isolated point. Then there exists a continuous injective
homomorphism
$\operatorname{Map}(S)\to\operatorname{Map}(\mathbb{R}^{2}\smallsetminus C)$,
where $C$ denotes a Cantor set.
As far as we know, these are the first examples of injective homomorphisms
between mapping class groups where the genus decreases from domain to
codomain.
### 1.3. Not co-Hopfian pure mapping class groups
If we restrict Question 1.1 to pure mapping class groups, we will see that
there is a much richer palette of examples of injective, but not surjective,
homomorphisms. We say that a surface $S$ is _self-doubling_ if there exists a
multicurve $\mathcal{B}$ such that:
1. (1)
$S\setminus\mathcal{B}$ has two connected components $L$ and $R$,
2. (2)
$L$ and $R$ are both homeomorphic to $S$, and
3. (3)
there exists an orientation-reversing homeomorphism $\iota:S\to S$ of order two
such that $\iota(L)=R$ and $\iota(b)=b$ for each $b\in\mathcal{B}$.
###### Theorem 2.
For every self-doubling surface $S$ there exists a continuous injective
homomorphism $\operatorname{PMap}(S)\to\operatorname{PMap}(S)$ that is not
surjective.
Examples. We now list some concrete examples of surfaces to which Theorem 2
applies.
1. (1)
A first example of a self-doubling surface $S$ is the sphere minus the union
of a Cantor set and the north and south pole: any essential curve $b$ that
separates the poles splits $S$ into two subsurfaces $L$ and $R$ with interiors
homeomorphic to $S$.
2. (2)
Alternatively, suppose $S$ has infinite genus such that every planar end is
isolated, and for every non-planar end there is a sequence of planar ends
converging to it. (We remark that there are uncountably many surfaces with
this property.) Then $S$ is also self-doubling; see Theorem 4.2.
We give an equivalent definition of self-doubling surfaces, along with more
examples, in Section 4.
### 1.4. Twist-preserving homomorphisms
In light of the discussion above, we now focus on homomorphisms that send Dehn
twists to Dehn twists; we call these twist-preserving homomorphisms. In this
section we will allow surfaces to have a non-empty boundary, though we assume
it is compact (and is thus a finite union of circles). We will prove the
following result; recall that a map between topological spaces is proper if
the preimage of every compact set is compact:
###### Theorem 3.
Let $S$ and $S^{\prime}$ be surfaces of infinite type, where $S$ has positive
genus. Assume further that either the boundary of $S$ is empty, or else $S$
has at most one end accumulated by genus. If
$\phi:\operatorname{PMap}(S)\to\operatorname{PMap}(S^{\prime})$ is a
continuous injective twist-preserving homomorphism, then there is a proper
$\pi_{1}$-injective embedding $S\to S^{\prime}$ that induces $\phi$.
###### Remark.
Theorem 3 no longer holds if $\partial S\neq\emptyset$ and $S$ has more than
one end accumulated by genus; see the final remark of Section 5. Also, in the
same remark we will see that the result we will prove in fact applies to a
larger class of homomorphisms than injective ones.
Continuing with our discussion, observe that if $\partial S=\emptyset$, any
proper $\pi_{1}$-injective embedding $S\to S^{\prime}$ is homotopic to a
homeomorphism. Hence we obtain:
###### Corollary 1.3.
Let $S$ and $S^{\prime}$ be surfaces of infinite type, where $S$ has positive
genus and no boundary. If
$\phi:\operatorname{PMap}(S)\to\operatorname{PMap}(S^{\prime})$ is a
continuous injective twist-preserving homomorphism, then it is induced by a
homeomorphism $S\to S^{\prime}$; in particular, it is surjective.
To prove Theorem 3, one first observes that $\phi$ induces a simplicial map
$\mathcal{C}(S)\to\mathcal{C}(S^{\prime})$ that preserves intersection number
one. Once this has been shown, the result follows by considering exhaustions
a result of Souto and the first named author [4], plus a continuity argument.
We conjecture that, although continuity is needed in our proof, it is in fact
not necessary for Theorem 3 to hold. In this direction, a result of Mann [26]
states that the mapping class groups of certain surfaces have the automatic
continuity property. Specifically, this automatic continuity holds when the
surface is a closed surface punctured along the union of a Cantor set and a
finite set; for these surfaces we obtain:
###### Corollary 1.4.
Let $S$ and $S^{\prime}$ be surfaces of infinite type, where $S$ is obtained
from a closed surface of positive genus by removing the union of a Cantor set
and a finite set. If
$\phi:\operatorname{PMap}(S)\to\operatorname{PMap}(S^{\prime})$ is an
injective twist-preserving homomorphism, then there is a homeomorphism $S\to
S^{\prime}$ that induces $\phi$.
### 1.5. Injections between curve graphs
In the finite-type setting, an important step for establishing the co-Hopfian
property for finite-index subgroups of mapping class groups is that
superinjective self-maps of the curve graph are induced by homeomorphisms; see
[21], for instance. We recall that a simplicial map between curve graphs is
superinjective if it maps pairs of curves with non-zero intersection number to
pairs of curves with the same property.
While it is known that every automorphism of the curve graph of an infinite-
type surface is induced by a homeomorphism, [17, 8], it is easy to see that
this is no longer true if one drops the requirement that the map be
surjective; see [19, 5] for concrete examples. Our final result highlights the
extent of this failure:
###### Theorem 4.
Let $S$ and $S^{\prime}$ be infinite-type surfaces, where $S^{\prime}$ has
infinite genus. Then:
1. (1)
There exists a superinjective simplicial map $\mathcal{C}(S)\to\mathcal{C}(S)$
that is not surjective.
2. (2)
There exists a superinjective simplicial map
$\mathcal{C}(S)\to\mathcal{C}(S^{\prime})$.
In the case of infinite-genus surfaces, a proof of part (1) of Theorem 4 was
previously obtained by Hernández-Valdez [19] using different methods.
###### Remark.
If $S$ and $S^{\prime}$ are arbitrary infinite-type surfaces, then there is
always an injective simplicial map $\mathcal{C}(S)\to\mathcal{C}(S^{\prime})$.
To see this, simply observe that the curve graph of any infinite-type surface
contains the complete graph on countably many vertices, and that the curve
graph of any surface has a countable set of vertices.
Theorem 4 should be compared with Theorem 3, which implies that every
superinjective map between curve graphs that comes from an injective
homomorphism between the corresponding mapping class groups is induced by an
embedding of the underlying surfaces.
Plan of the paper. In Section 2 we will give all the necessary background
needed in the article. Sections 3.2 and 3.3 are devoted to the proof of
Theorem 1 in the cases, respectively, of the one-holed torus minus a Cantor
set and the once-punctured Loch Ness Monster. In Section 4 we deal with
Theorem 2. Section 5 is devoted to the proof of Theorem 3. Finally, in Section 6
we establish Theorem 4.
Acknowledgements. We are grateful to the organizers of the AIM workshop
“Surfaces of Infinite Type” for the discussions that are the origins of this
paper. Thanks also to AIM for its hospitality and financial support. We would
like to thank Nick Vlamis and Henry Wilton for enlightening conversations
about Lemma 6.2, and Vlamis for the reference [14]. We also thank Israel
Morales for bringing to our attention the reference [6], and Ty Ghaswala for
comments on an earlier version.
## 2\. Preliminaries
In this section we will briefly introduce all the background material needed
for our results. We refer the reader to the survey paper [5] for a detailed
account on these topics.
### 2.1. Surfaces
In what follows, by a surface we will mean an orientable second-countable
topological surface, with a finite number (possibly zero) of boundary
components, all of them assumed to be compact. If the fundamental group of a
surface $S$ is finitely generated, we will say that $S$ has finite type;
otherwise it has infinite type. The space of ends of $S$ is
$\operatorname{Ends}(S):=\lim_{\leftarrow}\pi_{0}(S\smallsetminus K),$
where the inverse limit is taken over the compact subsets $K$ of $S$, directed
by inclusion. The space of ends becomes a
topological space with respect to the final topology obtained from endowing
each set defining the inverse system with the discrete topology. It is well
known that $\operatorname{Ends}(S)$ is homeomorphic to a closed subset of the
Cantor set. We say
that an end is planar if it has a planar neighborhood; otherwise we will say
that the end is non-planar or that it is accumulated by genus. An isolated
planar end is usually called a puncture; as is customary, we will treat
punctures both as ends and as marked points on the surface, switching between
the two viewpoints without further mention. Finally, we will denote by
$\operatorname{Ends}_{\infty}(S)$ the subspace of non-planar ends, which is
closed in $\operatorname{Ends}(S)$.
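As a worked instance of these definitions (our illustration, not taken from the text, using the surface $\mathbb{R}^{2}\smallsetminus C$ of Theorem 1.2, where $C$ is a Cantor set): choosing any compact exhaustion of $\mathbb{R}^{2}\smallsetminus C$, the complementary components accumulate onto the points of $C$, together with a single unbounded component, so

```latex
\operatorname{Ends}(\mathbb{R}^{2}\smallsetminus C)\;\cong\;C\,\sqcup\,\{\ast\},
\qquad
\operatorname{Ends}_{\infty}(\mathbb{R}^{2}\smallsetminus C)\;=\;\emptyset,
% * is the isolated end "at infinity"; every end is planar
% because the surface has genus zero.
```

where $\ast$ denotes the isolated planar end at infinity.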
A classical result [29] asserts that any surface $S$ is uniquely determined by
the quadruple $(g,b,\operatorname{Ends}(S),\operatorname{Ends}_{\infty}(S))$,
where $g\in\mathbb{N}\cup\\{\infty\\}$ is the genus and $b\in\mathbb{N}$ is the
number of boundary components. More concretely, two surfaces are homeomorphic
if and only if their genera and number of boundary components are equal, and
there is a homeomorphism between the spaces of ends that restricts to a
homeomorphism between the spaces of non-planar ends.
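To illustrate the classification (our reading, consistent with the statement of Theorem 1): writing the invariants as $(g,b,\operatorname{Ends},\operatorname{Ends}_{\infty})$, the two surfaces of Theorem 1 have

```latex
% Once-punctured Loch Ness Monster: infinite genus, no boundary,
% one end e accumulated by genus and one puncture p:
\bigl(\infty,\;0,\;\{e\}\sqcup\{p\},\;\{e\}\bigr);
% Torus minus the union of a Cantor set C and an isolated point p:
% finite genus, hence every end is planar:
\bigl(1,\;0,\;C\sqcup\{p\},\;\emptyset\bigr).
```

In particular, the two surfaces are already distinguished by their genus.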
### 2.2. Curves and multicurves
A simple closed curve on $S$ is said to be inessential if it bounds a disk, a
once-punctured disk, or an annulus whose other boundary is a boundary
component of $S$; otherwise we say the curve is essential. By a curve we mean
the isotopy class of an essential simple closed curve on $S$. A multicurve is
a set of curves on the surface that have pairwise disjoint representatives.
Given a multicurve $Q$, we will write $S\smallsetminus Q$ to mean the
(possibly disconnected) surface obtained from $S$ by cutting along (pairwise
disjoint representatives of) each element of $Q$.
### 2.3. Mapping class group
Let $\operatorname{Homeo}^{+}(S)$ be the group of all orientation-preserving
self-homeomorphisms of $S$, equipped with the compact-open topology. We will
denote by $\operatorname{Homeo}_{0}(S)$ the connected component of the
identity in $\operatorname{Homeo}^{+}(S)$, noting it is a normal subgroup. The
mapping class group of $S$ is
$\operatorname{Map}(S):=\operatorname{Homeo}^{+}(S)/\operatorname{Homeo}_{0}(S).$
As is customary, if $\partial S\neq\emptyset$, in this definition we
implicitly require that all homeomorphisms fix $\partial S$ pointwise.
The mapping class group is naturally a topological group with respect to the
quotient topology coming from the compact-open topology on
$\operatorname{Homeo}^{+}(S)$. It is a classical fact that
$\operatorname{Map}(S)$ is a finitely presented group if $S$ has finite type,
while it is easy to see that it is uncountable whenever $S$ has infinite type.
Moreover, in this latter case, $\operatorname{Map}(S)$ is totally disconnected
and not locally compact; see [5] for more details.
One of the motivating results for us is the following theorem of Bavard-
Dowdall-Rafi [8] mentioned in the introduction.
###### Theorem ([8]).
For any infinite-type surface $S$, any isomorphism
$\operatorname{Map}(S)\to\operatorname{Map}(S)$ is induced by a homeomorphism
$S\to S$. In particular, any such isomorphism is continuous.
The pure mapping class group $\operatorname{PMap}(S)$ is the subgroup of
$\operatorname{Map}(S)$ whose elements fix every end of $S$. If $S$ has finite
type, it is well-known that $\operatorname{PMap}(S)$ is generated by Dehn
twists along a finite set of curves (see [16, Section 4.4]). When $S$ has
infinite type, this is no longer true in general; instead, we have the
following result; see [28] for the definition of a handle shift:
###### Theorem 2.1 ([28]).
$\operatorname{PMap}(S)$ is topologically generated by Dehn twists if and only
if $S$ has at most one end accumulated by genus. Otherwise, it is
topologically generated by Dehn twists and handle shifts.
Here, being topologically generated by Dehn twists means that the subgroup
generated by Dehn twists (i.e. the compactly-supported mapping class group,
see below) is dense in $\operatorname{PMap}(S)$.
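For instance (our illustration): the Loch Ness Monster $L$ has exactly one end, which is accumulated by genus, so Theorem 2.1 applies without handle shifts:

```latex
\operatorname{Map}(L)\;=\;\operatorname{PMap}(L)
\;=\;\overline{\bigl\langle\, t_{c}\;:\;c\ \text{an essential simple closed curve on}\ L \,\bigr\rangle},
% the first equality holds because L has a single end,
% which every mapping class must fix.
```

where $t_{c}$ denotes the Dehn twist along $c$.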
#### 2.3.1. Some important subgroups
Following [8], we will say that an element of $\operatorname{Map}(S)$ has
finite support if it is represented by a homeomorphism which is the identity
outside some finite-type subsurface of $S$. We will write
$\operatorname{Map}_{f}(S)$ for the subgroup consisting of elements with
finite support. Although not directly relevant to our arguments, we record the
following beautiful result of Bavard-Dowdall-Rafi [8], which gives an
algebraic characterization of elements with finite support. In particular, it
serves to shed light on the potential differences between isomorphisms and
injective homomorphisms between big mapping class groups.
###### Proposition 2.2 ([8]).
An element of $\operatorname{Map}(S)$ has finite support if and only if its
conjugacy class in $\operatorname{Map}(S)$ is countable.
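As a quick consistency check of the forward direction (our computation, using the standard change-of-coordinates formula for twists): a Dehn twist $t_{c}$ has finite support, and indeed its conjugacy class is countable, since

```latex
f\,t_{c}\,f^{-1}\;=\;t_{f(c)}
\qquad\text{for all }f\in\operatorname{Map}(S),
% so the conjugates of t_c are indexed by curves on S,
% and any surface carries only countably many curves.
```

compare the remark in Section 1.5, where the countability of the vertex set of the curve graph is noted.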
The compactly-supported mapping class group $\operatorname{Map}_{c}(S)$ is the
subgroup of $\operatorname{Map}(S)$ whose elements are represented by
homeomorphisms which are the identity outside some compact subsurface of $S$.
Observe that $\operatorname{Map}_{c}(S)$ is a subgroup of
$\operatorname{Map}_{f}(S)$, proper when $S$ has more than one puncture, and
of $\operatorname{PMap}(S)$. On the other hand, $\operatorname{Map}_{f}(S)$ is
a subgroup of $\operatorname{PMap}(S)$ if and only if $S$ has at most one
puncture. Before ending this section, we record the following immediate
observation for future use:
###### Lemma 2.3.
For every infinite-type surface $S$, we have
$\operatorname{Map}_{c}(S)=\lim_{\longrightarrow}\operatorname{Map}(X),$
where the direct limit is taken over all compact subsurfaces $X\subset S$,
directed with respect to inclusion.
### 2.4. Curve graph
The curve graph $\mathcal{C}(S)$ is the simplicial graph whose vertex set is
the set of curves on $S$, and where two vertices are deemed to be adjacent if
the corresponding curves may be realized disjointly on $S$. In what follows we
will not distinguish between vertices of the curve graph and the curves
representing them.
Observe that $\operatorname{Map}(S)$ acts on $\mathcal{C}(S)$ by simplicial
automorphisms. A classical fact, discovered initially by Ivanov [23] is that
every simplicial automorphism of $\mathcal{C}(S)$ is induced by a
homeomorphism of $S$, provided $S$ is not the twice-punctured torus. The
analog for infinite-type surfaces has been obtained independently by
Hernández-Morales-Valdez [18] and Bavard-Dowdall-Rafi [8]. A crucial step in
their proofs is the following so-called Alexander method for infinite-type
surfaces [18], which we record for future use:
###### Theorem 2.4.
Let $S$ be a connected orientable infinite-type surface with empty boundary.
The natural action of $\operatorname{Map}(S)$ on $\mathcal{C}(S)$ has trivial
kernel; in other words, if $f\in\operatorname{Map}(S)$ induces the identity
transformation on $\mathcal{C}(S)$, then it is the identity in
$\operatorname{Map}(S)$.
## 3\. Covers and forgetting
In this section we prove Theorem 1 and Theorem 1.2. These theorems are about
homomorphisms between mapping class groups of surfaces with a single puncture,
and the proofs exploit their relationship with automorphism groups and the
Birman exact sequence. We start with some generalities, then focus on the
specific cases of the theorems.
### 3.1. Covers and automorphism groups
In what follows, suppose $S$ is an orientable, connected surface without
punctures such that $\pi_{1}(S)$ is non-abelian. Given $z\in S$ any point, let
$S^{z}=S\smallsetminus\\{z\\}$ denote the surface obtained from $S$ by
removing $z$ (thus producing a puncture we call the $z$–puncture). Any
homeomorphism $f$ of $S^{z}$ induces a homeomorphism of $S$ that fixes the
point $z$, which by an abuse of notation we also denote by $f$. This defines a
homomorphism
$\operatorname{Homeo}^{+}(S^{z})\to\operatorname{Homeo}^{+}(S).$
This homomorphism sends $\operatorname{Homeo}_{0}(S^{z})$ into
$\operatorname{Homeo}_{0}(S)$ and so induces a homomorphism
$\operatorname{Map}(S^{z})\to\operatorname{Map}(S).$
Given a loop $\gamma$ representing an element of $\pi_{1}(S,z)$, one
associates a homeomorphism $f_{\gamma}\colon S^{z}\to S^{z}$ by “point pushing
along $\gamma$”. More precisely, $f_{\gamma}\colon S\to S$ is a homeomorphism
isotopic to the identity by an isotopy $f_{t}$ with $f_{0}=f_{\gamma}$,
$f_{1}=\text{id}$, and $f_{t}(z)=\gamma(t)$, for all $t\in[0,1]$. When $S$ has
finite type, Birman proved that $[\gamma]\mapsto[f_{\gamma}]$ defines an
injective homomorphism $\pi_{1}(S,z)\to\operatorname{Map}(S^{z})$ in such a
way that the sequence
(1)
$1\longrightarrow\pi_{1}(S,z)\longrightarrow\operatorname{Map}(S^{z})\longrightarrow\operatorname{Map}(S)\longrightarrow 1$
is exact; see [11]. The infinite type case is proved by Dickmann-Domat in the
appendix to Domat’s paper [14].
###### Proposition 3.1.
For any connected, orientable surface $S$ without punctures and non-abelian
fundamental group, there is an injection of
$\pi_{1}(S,z)\to\operatorname{Map}(S^{z})$ given by
$[\gamma]\mapsto[f_{\gamma}]$ making (1) exact. ∎
For the remainder of this section, we use this proposition to identify
$\pi_{1}(S,z)$ with its image in $\operatorname{Map}(S^{z})$.
Remark. The proposition also holds when $S$ has punctures, replacing
$\operatorname{Map}(S^{z})$ with the subgroup preserving the $z$–puncture.
Given $f\in\operatorname{Homeo}^{+}(S^{z})$, after extending over $z$ we have
an induced automorphism $f_{*}\in\operatorname{Aut}(\pi_{1}(S,z))$, which
descends to a homomorphism
$\iota\colon\operatorname{Map}(S^{z})\to\operatorname{Aut}(\pi_{1}(S,z))$
given by $\iota([f])=f_{*}$. This further descends to a homomorphism
$\operatorname{Map}(S)\to\operatorname{Out}(\pi_{1}(S))$ which is injective by
Theorem 2.4. The inclusion of $\pi_{1}(S,z)$ into $\operatorname{Map}(S^{z})$,
composed with $\iota$ is precisely the isomorphism onto the group of inner
automorphisms, and thus we get a homomorphism of short exact sequences:
$\begin{CD}
1 @>>> \pi_{1}(S,z) @>>> \operatorname{Map}(S^{z}) @>>> \operatorname{Map}(S) @>>> 1\\
@. @| @VV{\iota}V @VVV @.\\
1 @>>> \pi_{1}(S,z) @>>> \operatorname{Aut}(\pi_{1}(S,z)) @>>> \operatorname{Out}(\pi_{1}(S,z)) @>>> 1
\end{CD}$
The first vertical map is the identity and the last is injective, from which
it follows that the middle is also injective.
A regular covering space $p\colon\widetilde{S}\to S$ with the property that
every homeomorphism of $S$ lifts to a homeomorphism of $\widetilde{S}$ will be
called a geometrically characteristic cover. If
$p_{*}(\pi_{1}(\widetilde{S},\widetilde{z}))$ is characteristic, general map
lifting implies that the cover is geometrically characteristic, but this is
not a necessary condition in general; see the examples below.
Now suppose $p\colon\widetilde{S}\to S$ is geometrically characteristic, let
$z\in S$ be a point, and fix any $\widetilde{z}\in p^{-1}(z)$. Being regular
and geometrically characteristic implies that for any
$f\in\operatorname{Homeo}^{+}(S^{z})$, after extending over $z$, there is a
unique lift $\widetilde{f}\colon\widetilde{S}\to\widetilde{S}$ that fixes
$\widetilde{z}$, and thus determines a homeomorphism of the same name
$\widetilde{f}\in\operatorname{Homeo}^{+}(\widetilde{S}^{\widetilde{z}})$. The
general fact we need is the following (compare with [2, Theorem 2]).
###### Proposition 3.2.
If $p\colon\widetilde{S}\to S$ is a geometrically characteristic covering
space, $\pi_{1}(S)$ is non-abelian, and $\pi_{1}(\widetilde{S})$ is non-
trivial, then the assignment $f\mapsto\widetilde{f}$ described above descends
to a continuous, injective homomorphism
$\phi\colon\operatorname{Map}(S^{z})\to\operatorname{Map}(\widetilde{S}^{\widetilde{z}}).$
Moreover, via the inclusions from the Birman exact sequence we have $\phi\circ
p_{*}=\text{id}|_{\pi_{1}(\widetilde{S},\widetilde{z})}$.
###### Proof.
Continuity of
$\operatorname{Homeo}^{+}(S^{z})\to\operatorname{Homeo}^{+}(\widetilde{S}^{\widetilde{z}})$
is clear. Since $f\mapsto\widetilde{f}$ maps $\operatorname{Homeo}_{0}(S^{z})$
to $\operatorname{Homeo}_{0}(\widetilde{S}^{\widetilde{z}})$ (by lifting
isotopies), we get a well-defined, continuous homomorphism
$\phi\colon\operatorname{Map}(S^{z})\to\operatorname{Map}(\widetilde{S}^{\widetilde{z}})$.
Since the homomorphisms
$\iota\colon\operatorname{Map}(S^{z})\to\operatorname{Aut}(\pi_{1}(S,z))$ and
$\tilde{\iota}\colon\operatorname{Map}(\widetilde{S}^{\widetilde{z}})\to\operatorname{Aut}(\pi_{1}(\widetilde{S},\widetilde{z}))$
are injective, to prove the first part of the proposition it suffices to prove
injectivity of the homomorphism
$\phi_{*}\colon\iota(\operatorname{Map}(S^{z}))\to\widetilde{\iota}(\operatorname{Map}(\widetilde{S}^{\widetilde{z}}))$
defined by $\phi_{*}\circ\iota=\widetilde{\iota}\circ\phi$, or more explicitly
$\iota([f])=f_{*}\stackrel{{\scriptstyle\phi_{*}}}{{\longmapsto}}\tilde{f}_{*}=\tilde{\iota}([\tilde{f}])=\tilde{\iota}(\phi([f])).$
Consequently, we need only show that the kernel of this homomorphism is
trivial.
To this end, suppose that $f\in\operatorname{Homeo}^{+}(S^{z})$ is any element
such that $\tilde{f}_{*}$ is the identity. Let
$G=p_{*}(\pi_{1}(\widetilde{S},\widetilde{z}))\triangleleft\pi_{1}(S,z),$
so that $p_{*}$ is an isomorphism from $\pi_{1}(\widetilde{S},\widetilde{z})$
onto the normal subgroup $G$ (normality follows from regularity of $p$). Since
$\widetilde{f}$ is a lift of $f$ that preserves $\widetilde{z}$, we have that
$p_{*}\circ\tilde{f}_{*}=f_{*}\circ p_{*}.$
In particular, this means that $f_{*}$ preserves $G$ and $p_{*}$ conjugates
$\widetilde{f}_{*}$ to $f_{*}|_{G}$. Since $\tilde{f}_{*}$ is the identity,
this implies that $f_{*}|_{G}$ is the identity. We claim that $f_{*}$ is the
identity. To prove this, first observe that for all $x\in G$ and
$a\in\pi_{1}(S,z)$, we have $axa^{-1}\in G$, and thus
$axa^{-1}=f_{*}(axa^{-1})=f_{*}(a)f_{*}(x)f_{*}(a)^{-1}=f_{*}(a)xf_{*}(a)^{-1}.$
This implies that for all $a\in\pi_{1}(S,z)$ and all $x\in G$,
$a^{-1}f_{*}(a)$ commutes with $x$.
Since $\pi_{1}(S,z)$ is a non-abelian surface group (either free or a closed
surface group), we know that centralizers of elements are cyclic, and since
$G$ is a non-trivial normal subgroup of $\pi_{1}(S,z)$, we can find two
elements $x,y\in G$ whose centralizers intersect trivially. Now for any
$a\in\pi_{1}(S,z)$, $a^{-1}f_{*}(a)$ is in the centralizer of both $x$ and
$y$, and thus $a^{-1}f_{*}(a)=1$. Therefore, $f_{*}(a)=a$, and hence $f_{*}$
is the identity, proving that $\phi_{*}$, and hence $\phi$, is injective.
To prove the last statement, given any element $g$ of a group, let $c_{g}$
denote the inner automorphism determined by conjugating by $g$. From the
inclusions in the Birman exact sequence as noted above, given any
$[\gamma]\in\pi_{1}(\widetilde{S},\widetilde{z})$ we have
$\widetilde{\iota}([\gamma])=c_{[\gamma]}\quad\mbox{ and
}\quad\iota(p_{*}([\gamma]))=c_{p_{*}([\gamma])}.$
Since $\phi_{*}\circ\iota=\widetilde{\iota}\circ\phi$, we have
$\widetilde{\iota}(\phi(p_{*}([\gamma])))=\phi_{*}(\iota(p_{*}([\gamma])))=\phi_{*}(c_{p_{*}([\gamma])})=c_{[\gamma]}=\widetilde{\iota}([\gamma]),$
where the second-to-last equality comes from the fact that the isomorphism
described above, $p_{*}\colon\pi_{1}(\widetilde{S},\widetilde{z})\to
G<\pi_{1}(S,z)$, conjugates $c_{[\gamma]}$ to $c_{p_{*}([\gamma])}$, while the
argument above shows that it conjugates $\phi_{*}(c_{p_{*}([\gamma])})$ to
this as well, hence $c_{[\gamma]}=\phi_{*}(c_{p_{*}([\gamma])})$. Therefore,
$\widetilde{\iota}\circ\phi\circ p_{*}=\widetilde{\iota}$ and since
$\widetilde{\iota}$ is injective, it follows that $\phi\circ p_{*}$ is the
identity on $\pi_{1}(\widetilde{S},\widetilde{z})$, as required. ∎
### 3.2. The once-punctured torus minus a Cantor set
Here we prove Theorem 1 in one of the two cases.
###### Proof of Theorem 1 for the once-punctured torus minus a Cantor set.
Let $T$ be a torus and choose, once and for all, a Cantor set $C\subset T$,
set $S=T\smallsetminus C$, fix $z\in S$, and write $S^{z}=S\smallsetminus\\{z\\}$
as above. Then $C\cup\\{z\\}$ is canonically homeomorphic to
$\operatorname{Ends}(S^{z})$ and $T$ is the end-compactification of $S^{z}$.
Consider the surjective homomorphism
$\pi_{1}(S,z)\to\pi_{1}(T,z)\cong\mathbb{Z}\times\mathbb{Z}$
given by “filling in” every element of $C$. Fixing any integer $m\geq 2$ and
reducing modulo $m$, we get a surjective homomorphism
$\rho:\pi_{1}(T,z)\to\mathbb{Z}_{m}\times\mathbb{Z}_{m}.$
Let $p:\widetilde{T}\to T$ be the regular cover corresponding to $\ker(\rho)$,
and fix some $\tilde{z}\in p^{-1}(z)$. Write $\widetilde{C}=p^{-1}(C)$, and
observe that the surface
$\widetilde{S}=\widetilde{T}\smallsetminus\widetilde{C}$ is homeomorphic to
$S$, and
$\widetilde{S}^{\widetilde{z}}=\widetilde{S}\smallsetminus\\{\widetilde{z}\\}$
is homeomorphic to $S^{z}$.
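The identifications $\widetilde{S}\cong S$ and $\widetilde{S}^{\widetilde{z}}\cong S^{z}$ asserted here can be verified directly; the following sketch (our gloss, not spelled out in the text) records the key Euler characteristic computation:

```latex
% Sketch (our gloss): \widetilde{T} covers the torus T with degree m^{2}, so
\chi(\widetilde{T}) \;=\; m^{2}\cdot\chi(T) \;=\; m^{2}\cdot 0 \;=\; 0,
% hence \widetilde{T} is again a closed orientable surface of genus 1, i.e. a
% torus.  Since p is a covering map and C is a Cantor set, the preimage
% \widetilde{C}=p^{-1}(C) is again compact, perfect, and totally
% disconnected, hence a Cantor set.  Therefore
% \widetilde{S}=\widetilde{T}\smallsetminus\widetilde{C} is a torus minus a
% Cantor set, giving \widetilde{S}\cong S, and deleting the single point
% \widetilde{z} gives \widetilde{S}^{\widetilde{z}}\cong S^{z}.
```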
Every homeomorphism of $T$ lifts to $\widetilde{T}$ since the kernel of
$\pi_{1}(T,z)\to\mathbb{Z}_{m}\times\mathbb{Z}_{m}$
is characteristic. Moreover, every homeomorphism of $S$ extends uniquely to a
homeomorphism of $T$ and hence the restricted covering (with the same name)
$p\colon\widetilde{S}\to S$ is geometrically characteristic. Proposition 3.2
therefore provides an injective, continuous homomorphism
$\phi\colon\operatorname{Map}(S^{z})\to\operatorname{Map}(\widetilde{S}^{\widetilde{z}})$.
Since $\widetilde{S}^{\widetilde{z}}$ and $S^{z}$ are homeomorphic surfaces,
by choosing a homeomorphism between them we can view $\phi$ as a continuous
homomorphism
$\phi\colon\operatorname{Map}(S^{z})\to\operatorname{Map}(S^{z})$. We have
already shown that $\phi$ is injective. We now verify the remaining properties
claimed in the statement:
Compact to non-finite support. To prove part (2) of Theorem 1 we will show
that the $\phi$–image of the Dehn twist
$t_{a}\in\operatorname{Map}_{c}(S^{z})$ about a non-separating curve $a$ is
not supported on any finite-type subsurface of $S^{z}$.
In order to achieve this it suffices to find a point of $\tilde{C}$ that is
not fixed by our chosen lift $\widetilde{t}_{a}=\phi(t_{a})$. Let
$\beta\subset S^{z}$ be an arc which begins at $z$ and ends at a point $e\in
C$ so that $\beta$ essentially intersects $a$ exactly once. If $\tilde{\beta}$
is the lift of $\beta$ starting at $\widetilde{z}$ then
$\widetilde{t}_{a}(\tilde{\beta})$ also starts at $\widetilde{z}$, but these
two arcs terminate at distinct points in the preimage of $e$. Therefore
$\phi(t_{a})$ does not have finite support. (Observe, however, that there is a
nontrivial power of $\phi(t_{a})$ that has finite support; this will not
happen in the case of the Loch Ness Monster.)
Not co-Hopfian. Suppose $\phi$ were surjective, and thus an automorphism.
Since every automorphism of $\operatorname{Map}(S^{z})$ is induced by a
homeomorphism of $S^{z}$, $\phi$ preserves the property of having finite
support, which contradicts the previous paragraph. This proves part (1).
Non-geometric. To prove part (3), and thus complete the proof of Theorem 1,
we must show that there are partial pseudo-Anosovs in
$\operatorname{Map}(S^{z})$ that map to multitwists in
$\operatorname{Map}(\widetilde{S}^{\widetilde{z}})$. The proof is essentially
the same as that of [2, Theorem 2(1)]; we sketch it for completeness.
As in [2], one can find a loop $\gamma$ representing an element in
$\pi_{1}(\widetilde{S},\widetilde{z})$ which is simple, but for which
$p_{*}([\gamma])$ is not represented by any simple closed curve. The mapping
class associated to $[\gamma]$ via the Birman exact sequence is a multi-twist
about the boundary of a regular neighborhood of $\gamma$, while by a result of
Kra [25], $p_{*}([\gamma])$ is a pseudo-Anosov on $X\smallsetminus\\{z\\}$,
where $X$ is the subsurface filled by a loop representing $p_{*}([\gamma])$
with minimal self-intersection. Since $\phi\circ p_{*}([\gamma])=[\gamma]$ by
Proposition 3.2, it follows that $p_{*}([\gamma])$ is pseudo-Anosov on a
proper subsurface while $\phi(p_{*}([\gamma]))$ is a multi-twist. This
completes the proof of the theorem in this case. ∎
### 3.3. The Loch Ness Monster
In this section we prove Theorem 1 for the once-punctured Loch Ness Monster.
In what follows, $S$ will denote the Loch Ness Monster, that is, the connected
orientable surface of infinite genus and exactly one end. As in the case of a
torus minus a Cantor set, we fix once and for all, a point $z\in S$. We will
again apply Proposition 3.2 for an appropriate cover $p\colon\widetilde{S}\to
S$ to induce an injective homomorphism
$\phi\colon\operatorname{Map}(S^{z})\to\operatorname{Map}(\widetilde{S}^{\widetilde{z}})$.
This time the cover will have infinite degree, which, in turn, will be the key
to the existence of compactly supported elements whose images have no non-
trivial power with compact support. Consider the homomorphism
$\rho:\pi_{1}(S,z)\to H_{1}(S,\mathbb{Z}_{2})$
obtained by first abelianizing and then reducing modulo 2. Observe that
$H:=\ker(\rho)$ is a characteristic subgroup of $\pi_{1}(S,z)$. Let
$p:\tilde{S}\to S$ be the cover associated to $H$, which is usually called the
mod-2 homology cover of $S$. We claim:
###### Proposition 3.3.
$\widetilde{S}$ is homeomorphic to $S$.
###### Proof.
First, observe that the Loch Ness Monster $S$ is a characteristic cover of the
closed surface $\Sigma$ of genus $2$; more precisely, it is the covering
surface associated to the commutator subgroup of $\pi_{1}(\Sigma)$. Since
$\widetilde{S}$ is a characteristic cover of $S$, it follows that
$\widetilde{S}$ is also a characteristic cover of $\Sigma$.
Let $H$ be the characteristic subgroup of $\pi_{1}(\Sigma)$ corresponding to
$\pi_{1}(\widetilde{S})$ and $D=\pi_{1}(\Sigma)/H$ the group of deck
transformations of $\tilde{S}\to\Sigma$. Since this action is properly
discontinuous and cocompact, the Švarc–Milnor Lemma (see e.g. [13]) implies
that $D$ is quasi-isometric to $\widetilde{S}$. By Stallings’ Theorem [33]
$D$, or equivalently $\tilde{S}$, has one end, two ends, or infinitely many
ends. By the classification of infinite-type surfaces [29], it follows that
$\widetilde{S}$ is homeomorphic to one of the Loch Ness Monster $S$, Jacob's
Ladder $L$, the Cantor tree $T$, or the Blooming Cantor Tree $B$; see [7] for
details. Since $S$ is the only one of these that has one end, we suppose
$\widetilde{S}$ has more than one end and derive a contradiction.
To this end, we appeal to Stallings' Theorem again and note that $D$ admits a
non-trivial action on a tree $A$ with finite edge stabilizers and without edge
inversions. From this action construct an equivariant map
$f\colon\widetilde{S}\to A$, which we can assume is transverse to the union
$X\subset A$ of the midpoints of every edge, so that $f^{-1}(X)$ is a properly
embedded $1$–submanifold which is $D$–invariant (c.f. [32]). This descends to
a closed $1$–submanifold in $\Sigma$, and via the action of
$\pi_{1}(\Sigma)\to D$ on $A$, any loop in the complement of the
$1$–submanifold fixes a vertex. Observe that there is a non-separating simple
closed curve in a component of this complement (this is true for any
$1$–submanifold in the genus $2$ surface $\Sigma$) which, when viewed as an
element of $\pi_{1}(\Sigma)$, fixes a vertex. Since the cover is
characteristic, every non-separating simple closed curve fixes a vertex of
$A$.
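For the reader's convenience, here is a paraphrase (our wording; see [30] for the precise statement) of the version of Serre's criterion invoked below:

```latex
% Serre's criterion (paraphrased): if a group
% G=\langle a_{1},\ldots,a_{n}\rangle acts on a tree without edge inversions
% and each generator a_{i}, as well as each product a_{i}a_{j} with i\neq j,
% fixes a point of the tree (acts elliptically), then G fixes a point of the
% tree.
```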
Now choose a generating set $a_{1},\ldots,a_{4}$ of $\pi_{1}(\Sigma)$ so that
each $a_{i}$ is represented by a non-separating simple closed curve, as is
each product $a_{i}a_{j}$ for $i\neq j$. Then all $a_{i}$ and
$a_{i}a_{j}$ act elliptically, in which case Serre's criterion [30] implies
$\pi_{1}(\Sigma)$ fixes a point of $A$, which is a contradiction. ∎
###### Proof of Theorem 1 for the once-punctured Loch Ness Monster.
Fixing $\widetilde{z}\in p^{-1}(z)$, Proposition 3.2 implies that $p$ induces
an injective homomorphism
$\phi\colon\operatorname{Map}(S^{z})\to\operatorname{Map}(\widetilde{S}^{\widetilde{z}})$.
Since $p$ has infinite degree, there are Dehn twists whose image does not have
compact support, even up to taking powers. For the same reason, $\phi$ is not
surjective. The fact that there exist partial pseudo-Anosovs whose image is a
Dehn twist follows along the exact same lines as in the torus case. Finally,
since $\widetilde{S}$ and $S$ are homeomorphic, so are
$\widetilde{S}^{\widetilde{z}}$ and $S^{z}$, and thus we may view the
homomorphism $\phi$ as an injective, but not surjective, endomorphism of
$\operatorname{Map}(S^{z})$. This finishes the proof of the Theorem in the
case of the Loch Ness Monster. ∎
###### Remark.
After this article was finished, we learned that Proposition 3.3 had already
been established in [6, Proposition 4.1].
### 3.4. Decreasing genus
Our final application of Proposition 3.2 is the following.
###### Proof of Theorem 1.2.
For $g\geq 1$, let $\Sigma_{g}$ denote the closed surface of genus $g$, let
$C_{g}\subset\Sigma_{g}$ be a Cantor set, and set $S=\Sigma_{g}\smallsetminus
C_{g}$. Let $p\colon\widetilde{\Sigma}\to\Sigma_{g}$ be the universal cover.
Choosing a disk $D\subset\Sigma_{g}$ that contains $C_{g}$, we note that
$p^{-1}(C_{g})$ is a disjoint union of Cantor sets that only accumulate at
infinity of $\widetilde{\Sigma}$. Consequently, since the one-point
compactification of $\widetilde{\Sigma}$ is the sphere, it follows that
$\widetilde{S}=\widetilde{\Sigma}\smallsetminus p^{-1}(C_{g})$
is homeomorphic to the $2$–sphere minus a Cantor set.
Now observe that $p\colon\widetilde{S}\to S$ is a geometrically characteristic
cover, and so for any basepoint $z\in S$ and choice of $\widetilde{z}\in
p^{-1}(z)$, Proposition 3.2 implies that $p$ induces an injective homomorphism
$\phi\colon\operatorname{Map}(S^{z})\to\operatorname{Map}(\widetilde{S}^{\widetilde{z}})$.
Observe that $S^{z}$ is a surface of genus $g$ minus a Cantor set and an
isolated point, while $\widetilde{S}^{\widetilde{z}}$ is a $2$–sphere minus a
Cantor set and an isolated point, which is also homeomorphic to the plane
$\mathbb{R}^{2}$ minus a Cantor set. This completes the proof. ∎
## 4\. Self-Doubling
In this section we will prove Theorem 2 and provide a large class of self-
doubling surfaces. We will use an argument similar to the example in [24,
Section 2], although adapted to the case of surfaces without boundary.
### 4.1. The construction
Let $\bar{S}$ be an orientable connected surface with non-empty boundary. In
this section, we still require each boundary component of $\bar{S}$ to be
compact, although this time we allow the set of boundary components to be
countable. Let $\mathcal{B}=\\{b_{1},b_{2},\dots\\}$ be a (possibly finite)
subset of the set of boundary components, and $S$ be the surface that results
from $\bar{S}$ by gluing disks with marked points $z_{i}$ onto $b_{i}$. This
operation gives rise to a boundary deleting homomorphism
$\operatorname{PMap}(\bar{S})\to\operatorname{PMap}(S)$, which fits in a short
exact sequence
(2) $1\to\Pi_{i}\langle
T_{b_{i}}\rangle\to\operatorname{PMap}(\overline{S})\to\operatorname{PMap}(S)\to
1,$
see [16] for details. Now suppose $d_{\mathcal{B}}S$ is the surface obtained
by gluing two disjoint copies of $\bar{S}$ along the boundary components in
$\mathcal{B}$. This operation induces two natural inclusion maps
$\psi_{1}:\bar{S}\hookrightarrow d_{\mathcal{B}}S\quad\text{ and
}\quad\psi_{2}:\bar{S}\hookrightarrow d_{\mathcal{B}}S$
such that
$\operatorname{int}(\psi_{1}(\bar{S}))\cap\operatorname{int}(\psi_{2}(\bar{S}))=\emptyset$
and $\psi_{1}(b_{i})=\psi_{2}(b_{i})$ is an essential curve. We abuse notation
by writing $b_{i}$ and $\mathcal{B}$ for the images of $b_{i}$ and
$\mathcal{B}$, respectively. Furthermore, there is an induced orientation-
reversing homeomorphism
$\iota:d_{\mathcal{B}}S\to d_{\mathcal{B}}S$
that swaps the images of $\psi_{1}$ and $\psi_{2}$ and, in particular, fixes
the set $\mathcal{B}\subset d_{\mathcal{B}}S$. On the level of mapping class
groups, we have two injective homomorphisms
$\Psi_{1}:\operatorname{PMap}(\bar{S})\hookrightarrow\operatorname{PMap}(d_{\mathcal{B}}S)\quad\text{
and
}\quad\Psi_{2}:\operatorname{PMap}(\bar{S})\hookrightarrow\operatorname{PMap}(d_{\mathcal{B}}S),$
such that for all $f\in\operatorname{PMap}(\overline{S})$, we have
$\Psi_{1}(f)=f_{1}$, $\Psi_{2}(f)=f_{2}$, where $f_{1}=\iota f_{2}\iota^{-1}$;
here we consider $f_{1}$, $f_{2}$, and $\iota$ as elements of
$\operatorname{Map}^{\pm}(d_{\mathcal{B}}S)$. In particular, a left twist
about $b_{i}$ is sent to a left twist by $\Psi_{1}$, and to a right twist by
$\Psi_{2}$. Consider now the map
$d:\operatorname{PMap}(\bar{S})\to\operatorname{PMap}(d_{\mathcal{B}}S)$
such that $d(f)=f_{1}f_{2}$. Note that $d$ is indeed a homomorphism as the
images of $\Psi_{1}$ and $\Psi_{2}$ commute. Furthermore, the kernel of $d$ is
equal to $\Pi_{i}\langle T_{b_{i}}\rangle$, so from the short exact sequence
(2) we have an induced injective homomorphism
$\operatorname{PMap}(S)\hookrightarrow\operatorname{PMap}(d_{\mathcal{B}}S).$
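The identification of $\ker(d)$ above can be seen directly; the following sketch (our wording) records it:

```latex
% Why \ker(d)=\Pi_{i}\langle T_{b_{i}}\rangle (sketch).  A left twist about
% b_{i} is sent to a left twist by \Psi_{1} and to a right twist by \Psi_{2},
% so
d(T_{b_{i}}) \;=\; \Psi_{1}(T_{b_{i}})\,\Psi_{2}(T_{b_{i}})
            \;=\; T_{b_{i}}\,T_{b_{i}}^{-1} \;=\; 1.
% Conversely, if d(f)=f_{1}f_{2}=1, then f_{1}=f_{2}^{-1} is supported, up to
% isotopy, in both \psi_{1}(\bar{S}) and \psi_{2}(\bar{S}), whose interiors
% are disjoint; hence f_{1} is a multitwist about the curves b_{i}.
```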
### Self-doubling surfaces
With the doubling construction to hand, we now prove Theorem 2. To this end,
we are tasked with finding a surface $\bar{S}$ with boundary components
$\mathcal{B}$ such that the surfaces $S$ and $d_{\mathcal{B}}S$ constructed
above are homeomorphic. Note that this is equivalent to the definition of
self-doubling from the introduction.
###### Proof of Theorem 2.
Suppose the surface $S$ is self-doubling, and let $L$ and $R$ be the
subsurfaces whose interiors are homeomorphic to $S$. We then define
$\mathcal{B}=\partial L$. Following the doubling construction above, we have
the injective homomorphism
$\operatorname{PMap}(S)\cong\frac{\operatorname{PMap}(L)}{\Pi_{i}\langle
T_{b_{i}}\rangle}\hookrightarrow\operatorname{PMap}(S),$
where $b_{i}\in\mathcal{B}$. ∎
In the remainder of this section, we give conditions on the space of ends
giving rise to self-doubling surfaces.
### 4.2. Ends and orbits
A surface $S$ is _stable_ if for any end $e\in\operatorname{Ends}(S)$ there
exists a sequence of nested subsurfaces $U_{1}\supset U_{2}\supset\dots$
defining $e$ such that $U_{i}\cong U_{i+1}$ [15]. Here, we call each $U_{i}$ a
_stable neighborhood_ of $e$.
We say that an end $e\in\operatorname{Ends}(S)$ is of _higher rank_ than
$e^{\prime}\in\operatorname{Ends}(S)$ if each stable neighborhood of $e$
contains an element of the $\operatorname{Map}(S)$-orbit of $e^{\prime}$.
Finally, we define $\mathcal{F}\subset\operatorname{Ends}(S)$ to be the set of
ends whose $\operatorname{Map}(S)$-orbit is finite [15].
We can now prove the first case of Theorem 2.
###### Theorem 4.1.
Let $S$ be a stable surface with infinitely many punctures, and suppose the
genus of $S$ is either zero or infinite. If $\mathcal{F}=\emptyset$ then $S$
is self-doubling.
###### Proof.
Let $\overline{S}$ be the surface obtained by removing an open disk
surrounding one puncture $z$. Let $d_{b}S$ be the doubled surface of
$\overline{S}$ along the sole boundary component $b$. The induced map on the
space of ends gives us the homeomorphism
$\displaystyle(\operatorname{Ends}(d_{b}S),\operatorname{Ends}_{\infty}(d_{b}S))$
$\displaystyle=(\operatorname{Ends}(\overline{S}),\operatorname{Ends}_{\infty}(\overline{S}))\cup(\operatorname{Ends}(\overline{S}),\operatorname{Ends}_{\infty}(\overline{S}))$
$\displaystyle\cong(\operatorname{Ends}(S),\operatorname{Ends}_{\infty}(S))\cup(\operatorname{Ends}(S),\operatorname{Ends}_{\infty}(S)).$
Now, suppose $e\in\operatorname{Ends}(S)$ has highest rank and denote the
$\operatorname{Map}(S)$-orbit of $e$ by $O(e)$. Since $O(e)$ is infinite,
there must be an accumulation point of its elements in
$\operatorname{Ends}(S)$. However, as each element of $O(e)$ is of highest
rank this accumulation point must also belong to $O(e)$; it follows that
$O(e)$ is a Cantor set, see [27, Proposition 4.7] for more details.
Since $S$ is stable, there exists a maximal finite set of highest rank ends
$e_{1},e_{2},\dots,e_{n}$ such that $O(e_{i})\neq O(e_{j})$ for $i\neq j$.
Indeed, if there
exist infinitely many highest rank ends with different orbits then any
accumulation point of such ends would not have a stable neighborhood. Write
$E(e_{i})$ for the end space of a stable neighborhood of $e_{i}$ and
$E_{\infty}(e_{i})=E(e_{i})\cap\operatorname{Ends}_{\infty}(S)$. Since each
$E(e_{i})$ is a Cantor set we have that
$(\operatorname{Ends}(S),\operatorname{Ends}_{\infty}(S))\cong\bigcup_{i}(E(e_{i}),E_{\infty}(e_{i}))\cong\bigcup_{i}(E(e_{i}),E_{\infty}(e_{i}))\cup(E(e_{i}),E_{\infty}(e_{i})).$
We can now conclude that
$\displaystyle(\operatorname{Ends}(d_{b}S),\operatorname{Ends}_{\infty}(d_{b}S))$
$\displaystyle\cong(\operatorname{Ends}(S),\operatorname{Ends}_{\infty}(S))\cup(\operatorname{Ends}(S),\operatorname{Ends}_{\infty}(S))$
$\displaystyle\cong\bigcup_{i}(E(e_{i}),E_{\infty}(e_{i}))\cup(E(e_{i}),E_{\infty}(e_{i}))$
$\displaystyle\cong(\operatorname{Ends}(S),\operatorname{Ends}_{\infty}(S)),$
so the end spaces agree; since $d_{b}S$ also has the same genus as $S$ (zero
or infinite) and no boundary, $S$ and $d_{b}S$ are homeomorphic. ∎
### Doubling along infinitely many boundary components
We define an end to be _truly non-planar_ if it is non-planar and has a
neighborhood that does not contain a puncture.
Figure 1. The figure shows two copies of the surface $S$. The double obtained
using half the punctures is also homeomorphic to $S$. There is a natural
orientation reversing homeomorphism $\iota$ which fixes the boundary
components of $\overline{S}$.
###### Theorem 4.2.
Suppose $S$ is a stable surface of infinite genus with infinitely many
punctures. If $\mathcal{F}$ contains no planar nor truly non-planar ends then
$S$ is self-doubling.
###### Proof.
If $\mathcal{F}$ is empty then the result follows from Theorem 4.1. As in
the previous subsection, since $S$ is stable we have that $|\mathcal{F}|=n$ is
finite as otherwise there would be an accumulation point without a stable
neighborhood [15]. Therefore we have a partition of $\operatorname{Ends}(S)$
$\operatorname{Ends}(S)=P_{0}\cup P_{1}\cup\dots\cup P_{n}$
such that $P_{0}$ contains no elements of $\mathcal{F}$, and all other $P_{k}$
contain a single element of $\mathcal{F}$. Moreover, we may choose each
$P_{k}$ to be the end space of a stable neighborhood of the corresponding
element of $\mathcal{F}$.
Our strategy is to choose an infinite collection of punctures in each set
$P_{k}$, where $k\geq 1$, remove disk neighborhoods of these punctures
producing a surface $\overline{S}$ with boundary $\mathcal{B}$, then prove
that the resulting surface $d_{\mathcal{B}}S$ is homeomorphic to $S$. Note
that since no element of $\mathcal{F}$ is truly non-planar, each $P_{k}$ does
indeed contain infinitely many punctures, when $k\geq 1$.
Suppose $k\geq 1$ and denote by $e$ the unique end in $P_{k}\cap\mathcal{F}$.
Let $U_{0}\supset U_{1}\supset\dots$ be a nested sequence of homeomorphic
subsurfaces with connected boundary such that
$\operatorname{Ends}(U_{0})=P_{k}$. Define $X_{i}=U_{i}\setminus U_{i+1}$.
Without loss of generality we may assume that $X_{i}\cong X_{i+1}$ and that
$X_{i}$ contains at least two punctures (it may contain infinitely many
punctures). Now, remove an open neighborhood of one puncture in each $X_{i}$.
Repeating this process for each $P_{k}$ (where $k\geq 1$) results in a surface
$\overline{S}$ and a set of boundary components $\mathcal{B}$. We define
$d_{\mathcal{B}}S$ as usual, and write $d_{\mathcal{B}}(U_{i})\subset
d_{\mathcal{B}}S$ for the subsurface obtained by doubling $U_{i}$.
Note that $d_{\mathcal{B}}(U_{0})\supset d_{\mathcal{B}}(U_{1})\supset\dots$
is a nested sequence of homeomorphic subsurfaces, in particular, it defines a
unique end of $d_{\mathcal{B}}S$. We will now show that the end spaces of
$d_{\mathcal{B}}(U_{0})$ and $U_{0}$ are homeomorphic. Indeed, if each $X_{i}$
has infinitely many punctures, then by construction we have that
$\big{(}\operatorname{Ends}(d_{\mathcal{B}}(X_{i})),\operatorname{Ends}_{\infty}(d_{\mathcal{B}}(X_{i}))\big{)}\cong\big{(}\operatorname{Ends}(X_{i}\cup
X_{i+1}),\operatorname{Ends}_{\infty}(X_{i}\cup X_{i+1})\big{)}.$
However, since $U_{0}\supset U_{2}\supset U_{4}\supset\dots$ also defines the
end $e$, it follows that the end spaces of $d_{\mathcal{B}}(U_{0})$ and
$U_{0}$ are homeomorphic. This shows that $d_{\mathcal{B}}(U_{0})$ is
homeomorphic to $U_{0}$ with an open disk removed, that is,
$d_{\mathcal{B}}(U_{0})$ has two boundary components.
If each $X_{i}$ contains finitely many punctures then the argument above
follows similarly as both $d_{\mathcal{B}}(U_{0})$ and $U_{0}$ contain
infinitely many punctures with a single accumulation point. Finally, since
$P_{0}$ contains no elements of $\mathcal{F}$, the argument from Theorem 4.1
implies that $P_{0}\cong P_{0}\cup P_{0}$. It follows that
$\displaystyle(\operatorname{Ends}(d_{\mathcal{B}}S),\operatorname{Ends}_{\infty}(d_{\mathcal{B}}S))$
$\displaystyle\cong(P_{0}\cup P_{0})\cup P_{1}\cup\dots\cup P_{n}$
$\displaystyle\cong P_{0}\cup P_{1}\cup\dots\cup P_{n}$
$\displaystyle\cong(\operatorname{Ends}(S),\operatorname{Ends}_{\infty}(S)).$
Since $d_{\mathcal{B}}S$ has infinite genus and no boundary components, it is
homeomorphic to $S$. ∎
## 5\. Twist-pair preserving homomorphisms
The purpose of this section is to prove Theorem 3. The key ingredient of our
arguments is that a twist-pair preserving homomorphism preserves certain
combinatorial configurations of curves that we term spanning chains, and which
we now define; the reader should keep Figure 2 in mind:
Figure 2. A spanning chain for the surface $S_{3,5}$.
###### Definition.
Let $S_{g,p}$ be the connected orientable surface of genus $g$ and for which
the number of punctures plus boundary components is $p$. A spanning chain is a
set
$\\{\beta,\alpha_{1},\alpha_{2},\ldots,\alpha_{2g-1},\alpha_{2g},\gamma_{1},\ldots,\gamma_{p+1}\\}$
of non-separating curves on $S_{g,p}$ such that:
* •
$i(\beta,\alpha_{4})=1$ and $i(\beta,\alpha_{j})=i(\beta,\gamma_{i})=0$ for
all $i$ and all $j\neq 4$.
* •
$i(\alpha_{i},\alpha_{i+1})=1$, and $i(\alpha_{i},\alpha_{j})=0$ otherwise.
* •
$i(\gamma_{i},\alpha_{2g})=1$, for all $i$, and $i(\gamma_{i},\alpha_{j})=0$
otherwise.
* •
$i(\gamma_{i},\gamma_{j})=0$ for all $i,j$.
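As a quick consistency check (ours, not stated in the text), the number of curves in a spanning chain is determined by $g$ and $p$:

```latex
|C| \;=\; \underbrace{1}_{\beta}
     \;+\; \underbrace{2g}_{\alpha_{1},\ldots,\alpha_{2g}}
     \;+\; \underbrace{p+1}_{\gamma_{1},\ldots,\gamma_{p+1}}
     \;=\; 2g+p+2.
% For S_{3,5}, as in Figure 2, this gives 2\cdot 3+5+2=13 curves:
% \alpha_{1},\ldots,\alpha_{6}, \gamma_{1},\ldots,\gamma_{6}, and \beta.
```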
Observe that any surface filled by a collection of distinct curves satisfying
the conditions in the definition of a spanning chain above must have genus $g$
since a regular neighborhood of the union of the curves can be reconstructed
from the data, up to homeomorphism (not necessarily preserving the names of
the curves); see again Figure 2. We will need the following well-known fact;
see [16, Section 4.4.4.] for instance:
###### Proposition 5.1.
Suppose $g\geq 1$. If $C$ is a spanning chain on $S_{g,p}$, then the Dehn
twists along the elements of $C$ generate $\operatorname{PMap}(S_{g,p})$.
We will also make use of the braid relation between Dehn twists; see e.g.
[16, Section 3.5.1]:
###### Lemma 5.2.
Let $\alpha,\beta$ be curves on a surface. Then
$t_{\alpha}t_{\beta}t_{\alpha}=t_{\beta}t_{\alpha}t_{\beta}$
if and only if $i(\alpha,\beta)=1$.
We are now ready to prove Theorem 3:
###### Proof of Theorem 3.
Let $\phi:\operatorname{Map}(S)\to\operatorname{Map}(S^{\prime})$ be a
continuous injective twist-preserving homomorphism. In particular, $\phi$
induces a simplicial map
$\phi_{*}:\mathcal{C}(S)\to\mathcal{C}(S^{\prime})$
by the rule
$\phi_{*}(\alpha)=\beta\iff\phi(t_{\alpha})=t_{\beta}.$
Moreover, $\phi_{*}$ is superinjective, meaning that $i(\alpha,\beta)=0$ if
and only if $i(\phi_{*}(\alpha),\phi_{*}(\beta))=0$. Finally, Lemma 5.2
implies that $i(\phi_{*}(\alpha),\phi_{*}(\beta))=1$ if and only if
$i(\alpha,\beta)=1$.
Now, let $Z_{1}\subset Z_{2}\subset\ldots$ be an exhaustion of $S$ by
connected, properly embedded, $\pi_{1}$–injective, finite type subsurfaces for
which the inclusion of $Z_{i}$ into $S$ induces an injection
$\operatorname{PMap}(Z_{i})\to\operatorname{PMap}(S)$. Without loss of
generality, we may assume each component of $\partial Z_{i}$ is contained in
the interior of $Z_{i+1}$ or is a component of $\partial S$. Note that since
each $Z_{i}$ is properly embedded and $\operatorname{PMap}(Z_{i})$ injects
into $\operatorname{PMap}(S)$, any puncture of $S$ is a puncture of $Z_{i}$
for some $i$. Since the genus of $S$ is positive and $S$ has infinite type,
without loss of generality we may assume that the same is true for the genus
of $Z_{i}$ for all $i$, and that $Z_{i}$ is not a torus with one
puncture/boundary component for any $i$.
For each $i$, choose a spanning chain $C_{i}$ of $Z_{i}$. By the discussion
above, $\phi_{*}(C_{i})$ is a set of curves on $S^{\prime}$ of the same
cardinality as $C_{i}$, and whose elements have the same intersection pattern
as the curves in $C_{i}$. Let $Z_{i}^{\prime}$ be the surface obtained from
the regular neighborhood of $\phi_{*}(C_{i})$ by adding any complementary
once-punctured disks or peripheral annuli which a boundary component of the
neighborhood may bound. By construction $Z_{i}^{\prime}$ is filled by
$\phi_{*}(C_{i})$, and thus $Z_{i}$ and $Z_{i}^{\prime}$ have the same genus.
Moreover,
$\operatorname{PMap}(Z_{i}^{\prime})<\operatorname{PMap}(S^{\prime})$, and
Proposition 5.1 implies that the homomorphism $\phi$ restricts to an injective
homomorphism
$\phi_{i}:\operatorname{PMap}(Z_{i})\to\operatorname{PMap}(Z_{i}^{\prime}).$
Since the genera of $Z_{i}$ and $Z_{i}^{\prime}$ are equal and the
homomorphism $\phi_{i}$ is injective, [4, Theorem 1.1] implies that $\phi_{i}$
is induced by a (unique) proper $\pi_{1}$-injective embedding
$h_{i}:Z_{i}\to Z_{i}^{\prime},$
which is in fact a homeomorphism since $Z_{i}$ and $Z_{i}^{\prime}$ are each
filled by a spanning chain of the same cardinality and combinatorial type.
###### Remark.
Although the main result of [4] is stated for surfaces of genus at least four
(see the remark below the statement of Theorem 1.1 in [4]), it is also true
for twist-preserving injective homomorphisms between pure mapping class groups
of surfaces of the same positive genus. The reader can check that (after a
suitable reduction of the target surface), the standing assumption of [4,
Section 10] holds, and that every argument goes through without modification
from then on.
We also note that the main result of [4] deals with arbitrary non-trivial
homomorphisms between pure mapping class groups, and under suitable genus
bounds, any such homomorphism is induced by an embedding between the
underlying surfaces. In the case of an injective homomorphism, the reader will
quickly verify that the definition of embedding of [4] simply means “proper
topological embedding”.
Fix a complete hyperbolic metric with geodesic boundary on $S^{\prime}$. We
view $h_{i}$ as a proper embedding from $Z_{i}$ to $S^{\prime}$, and note that
$h_{i}$ and $\phi_{*}$ agree on every curve in $Z_{i}$. Since $Z_{i}$ is more
complicated than a torus with one boundary/puncture, $h_{i}$ is uniquely
determined, up to isotopy, by this agreement with $\phi_{*}$ on curves. As the
punctures of $Z_{i}^{\prime}$ were punctures of $S^{\prime}$ (by
construction), it follows that $h_{i}$ maps punctures of $Z_{i}$ to punctures
of $S^{\prime}$. Now $Z_{i}\subset Z_{i+1}$, and it follows from the
uniqueness statement that $h_{i}$ and $h_{i+1}$ agree on $Z_{i}$, up to
isotopy. Starting with $h_{1}$, and adjusting $h_{2}$ by isotopy if necessary,
we may assume that $h_{1}$ and $h_{2}$ agree on $Z_{1}$. Continuing
inductively and adjusting $h_{i+1}$ to agree with $h_{i}$ on $Z_{i}$, we get a
well-defined injective continuous function
$h\colon S\to S^{\prime},$
that agrees with $h_{i}$ on $Z_{i}$ for all $i$. Without loss of generality,
by arranging it to be the case at every step, we may assume that $h(\partial
Z_{i})=h_{i}(\partial Z_{i})$ is a union of closed geodesics in the fixed
hyperbolic metric. Since any curve in $S$ is contained in $Z_{i}$ for some
$i$, it follows that $h(\delta)=\phi_{*}(\delta)$ for every curve $\delta$ on
$S$.
We now claim that $h$ is proper. To see this, suppose for contradiction that
this were not the case. Then there exists a compact set $K$ of $S^{\prime}$
such that $h^{-1}(K)$ is not compact. We may enlarge $K$ if necessary to a
finite-type, connected, properly embedded subsurface $W\subset S^{\prime}$
with geodesic boundary (in our fixed hyperbolic metric). Then $h^{-1}(W)$ is a
non-empty, non-compact, closed subset of $S$. Now we observe that
$h^{-1}(W)\cap\partial Z_{i}\neq\emptyset$ for all $i$ sufficiently large:
otherwise for some $i$, $W$ would be entirely contained in
$h(Z_{i})=h_{i}(Z_{i})$, and hence so would $K$, implying that
$h^{-1}(K)=h_{i}^{-1}(K)$ is compact (by properness of $h_{i}$), a
contradiction. Therefore, we can find components $\alpha_{i}\subset\partial
Z_{i}$ so that for all $i$ sufficiently large, $h(\alpha_{i})$ transversely
and essentially intersects $W$. In particular, the sequence
$\\{t_{\alpha_{i}}\\}$ of mapping classes converges to the identity in
$\operatorname{Map}(S)$. In contrast, the image sequence $h(\alpha_{i})$ is a
sequence of pairwise distinct curves, all of which intersect $W$. As a
consequence, the sequence $\\{t_{h(\alpha_{i})}\\}$ cannot converge to the
identity in $\operatorname{Map}(S^{\prime})$. This contradicts the fact that
$\phi$ is continuous, as
$\phi(t_{\alpha_{i}})=t_{\phi_{*}(\alpha_{i})}=t_{h(\alpha_{i})}.$
Since $h$ is a proper embedding it induces
$h_{\sharp}\colon\operatorname{PMap}(S)\to\operatorname{PMap}(S^{\prime})$, an
injective homomorphism.
By hypothesis, $S$ either has empty boundary or else has at most one end
accumulated by genus. In the former case, it follows that $h$ is in fact a
homeomorphism, and thus $h_{\sharp}$ and $\phi$ are equal by Theorem 2.4. In
the latter case, we know that $h$ and $\phi_{*}$ agree on every isotopy class
of simple closed curve, and hence $h_{\sharp}$ and $\phi$ agree on every Dehn
twist. Since Dehn twists topologically generate $\operatorname{PMap}(S)$ [28]
and $\phi$ is continuous, it follows that $\phi$ and $h_{\sharp}$ are equal,
as required. ∎
###### Remark.
(1) As mentioned in the introduction, Theorem 3 no longer holds if $\partial
S\neq\emptyset$ and $S$ has more than one end accumulated by genus. Indeed,
suppose that $S$ has at least two ends accumulated by genus, in which case
there exists a surjective homomorphism
$\rho:\operatorname{PMap}(S)\to\mathbb{Z}$ by [3]. Since $\partial
S\neq\emptyset$, we may find a surface $S^{\prime}$ for which there exists a
$\pi_{1}$-injective embedding $\iota:S\to S^{\prime}$ and such that the
mapping class group of $S^{\prime}\setminus S$ is infinite.
Consider the injective homomorphism
$\phi:\operatorname{PMap}(S)\to\operatorname{PMap}(S^{\prime})$ induced by
this embedding, noting that its image is supported on
$\operatorname{PMap}(\iota(S))$. Now choose an infinite-order element
$g\in\operatorname{PMap}(S^{\prime})$ supported on
$S^{\prime}\setminus\iota(S)$. Using the homomorphism $\rho$ we construct a
homomorphism
$\rho_{g}:\operatorname{PMap}(S)\to\operatorname{PMap}(S^{\prime})$ whose
image is the cyclic group generated by $g$; in particular, the image of
$\rho_{g}$ is supported on $\operatorname{PMap}(S^{\prime}\setminus\iota(S))$.
As the images of $\phi$ and $\rho_{g}$ commute, we may consider the
homomorphism
$(\phi,\rho_{g}):\operatorname{PMap}(S)\to\operatorname{PMap}(S^{\prime})$,
whose image is contained in
$\operatorname{PMap}(\iota(S))\times\operatorname{PMap}(S^{\prime}\setminus\iota(S))<\operatorname{PMap}(S^{\prime}).$
Observe that this homomorphism is continuous, injective (since $\phi$ is) and
twist-preserving (since the $\rho$-image of every Dehn twist is trivial, by
[3]). However, it is not induced by any subsurface embedding $S\to
S^{\prime}$, as required.
(2) As may have become apparent in the proof of Theorem 3, the requirement
that the homomorphism $\phi$ be injective may be relaxed in various ways. For
instance, say that a twist-preserving homomorphism preserves twist pairs if
two Dehn twists commute if and only if their images commute. With this
definition, a minor modification of the proof above yields:
Let $S$ and $S^{\prime}$ be surfaces of infinite type, and assume $S$ has
positive genus. Assume further that either the boundary of $S$ is empty, or
else $S$ has at most one end accumulated by genus. If
$\phi:\operatorname{PMap}(S)\to\operatorname{PMap}(S^{\prime})$ is a continuous
twist-pair preserving homomorphism, then there is a proper $\pi_{1}$-injective
embedding $h:S\to S^{\prime}$ that induces $\phi$.
Indeed, the only subtlety when mimicking the proof above occurs in justifying
why the restriction homomorphism
$\phi_{i}:\operatorname{PMap}(Z_{i})\to\operatorname{PMap}(Z_{i}^{\prime})$ is
injective. To this end, if a non-central element
$f\in\operatorname{PMap}(Z_{i})$ is in the kernel of $\phi_{i}$, we may use
its Nielsen-Thurston normal form to find a curve $\alpha\subset Z_{i}$ such
that $i(f(\alpha),\alpha)>0$. However, since $f\in\ker(\phi_{i})$ we have that
$i(\phi_{*}(f(\alpha)),\phi_{*}(\alpha))=0$, which contradicts that $\phi_{*}$
is superinjective.
If on the other hand, there is $f\in\ker(\phi_{i})$ which is central, then we
have an induced injective homomorphism between the pure mapping class groups
modulo (the relevant part of) their centers, at which point the main result of
[4] applies.
## 6\. Superinjective maps of curve graphs
Finally, in this section we give a proof of Theorem 4. We separate the proof
into two parts.
###### Proof of part (1) of Theorem 4.
Equip $S$ with a fixed hyperbolic metric, and realize every vertex of
$\mathcal{C}(S)$ by its unique geodesic representative in its homotopy class.
Suppose first that $S$ has infinite genus. By choosing two points $p,q$ in the
complement of the union of all simple closed geodesics on $S$, we obtain a
(non-surjective) superinjective map
$\mathcal{C}(S)\to\mathcal{C}(S\smallsetminus\\{p,q\\})$. Now, observe that
$\mathcal{C}(S\smallsetminus\\{p,q\\})\cong\mathcal{C}(S^{\prime})$, where
$S^{\prime}$ is obtained by removing two open discs (with disjoint closures)
from $S$. Finally, we may glue a cylinder to $S^{\prime}$ along its boundary
components, obtaining a surface $S^{\prime\prime}\cong S$ and the inclusion
$S^{\prime}\to S^{\prime\prime}$ induces a superinjective map
$\mathcal{C}(S^{\prime})\to\mathcal{C}(S^{\prime\prime})\cong\mathcal{C}(S)$.
The composition of these superinjective maps/isomorphisms
$\mathcal{C}(S)\to\mathcal{C}(S\smallsetminus\\{p,q\\})\cong\mathcal{C}(S^{\prime})\to\mathcal{C}(S^{\prime\prime})\cong\mathcal{C}(S)$
is a superinjective map which is not surjective (since the first map is non-
surjective).
Suppose now that $S$ has finite genus. Then either $S$ has infinitely many
punctures or its space of ends is the union of a Cantor set and a finite set. In the
former case we may puncture $S$ at a point $p$ in the complement of the union
of simple closed geodesics, producing a non-surjective, superinjective map
$\mathcal{C}(S)\to\mathcal{C}(S\smallsetminus\\{p\\})\cong\mathcal{C}(S),$
where the isomorphism comes from the fact that $S$ and $S\smallsetminus\\{p\\}$
are homeomorphic. In the latter case, we may again remove a point $p$ missing
all simple closed geodesics to get a non-surjective, superinjective map
$\mathcal{C}(S)\to\mathcal{C}(S\smallsetminus\\{p\\})\cong\mathcal{C}(S^{\prime})$,
where $S^{\prime}$ is obtained from $S$ by removing an open disk; we then
glue a disk minus a Cantor set to $S^{\prime}$ to produce a surface
$S^{\prime\prime}\cong S$. This gives a superinjective map
$\mathcal{C}(S^{\prime})\to\mathcal{C}(S^{\prime\prime})\cong\mathcal{C}(S)$,
and composing with the maps above gives the required non-surjective,
superinjective map in this case. ∎
We are now going to prove part (2) of Theorem 4. The proof is divided into two
lemmas. In what follows, we denote by $M$ the Loch Ness Monster surface, which
we again recall is the unique, up to homeomorphism, infinite-genus surface
with exactly one end.
###### Lemma 6.1.
Let $S$ be an arbitrary infinite-type surface. Then there exists a
superinjective map $\mathcal{C}(S)\to\mathcal{C}(M)$.
###### Proof.
We construct the desired map via a series of intermediate steps. First, let
$S_{1}$ be the surface obtained from $S$ by first replacing each puncture by a
boundary component, and then gluing, to each of these new boundary components,
a torus with one boundary component. Observe that if $S_{1}$ has any planar
ends, then they must belong to a Cantor set. We have a superinjective map
(3) $\mathcal{C}(S)\to\mathcal{C}(S_{1}).$
Fix a principal exhaustion $P_{1}\subset P_{2}\subset\dots$ of $S_{1}$, namely
an exhaustion of $S_{1}$ by compact subsurfaces such that every component of
$\partial P_{i}$ is separating in $S_{1}$. Let
$X^{n}_{1},X^{n}_{2},\dots,X^{n}_{k_{n}}$ be the connected components of
$P_{n}\setminus P_{n-1}$. We puncture $S_{1}$ along $y^{n}_{i}\in X^{n}_{i}$,
and then replace each of these punctures by a boundary component $b^{n}_{i}$,
obtaining a new surface $S_{2}$ and a superinjective map
(4) $\mathcal{C}(S_{1})\to\mathcal{C}(S_{2}).$
The surface $S_{2}$ is naturally equipped with a principal exhaustion
$Q_{1}\subset Q_{2}\subset\dots$ coming from the obvious subsurface embedding
$S_{2}\to S_{1}$; abusing notation, we denote the connected components of
$Q_{n}\setminus Q_{n-1}$ in $S_{2}$ by
$Z^{n}_{1},Z^{n}_{2},\dots,Z^{n}_{k_{n}}$.
Now, we glue a sphere with $k_{n}$ boundary components to the union of the
$Z_{i}^{n}$, thus obtaining a connected surface $Y_{n}$ which contains
$Q_{n}$, see Figure 3. Moreover, $Y_{n}\subset Y_{n+1}$, so we may consider
$S_{3}=\bigcup Y_{n}$. Since $S_{2}=\bigcup Q_{n}$, we have that $S_{3}$
contains a subsurface homeomorphic to $S_{2}$, and in particular there is a
superinjective map
(5) $\mathcal{C}(S_{2})\to\mathcal{C}(S_{3}).$
It remains to show that $S_{3}$ is homeomorphic to $M$. By construction,
$S_{3}$ has infinite genus and no planar ends. Since the complement of any
finite-type subsurface of $S_{3}$ contains only one component of
infinite type, we have that $S_{3}$ has a single end which is not planar. Therefore,
$S_{3}$ is homeomorphic to $M$, as desired. At this point, the composition of
the maps in (3),(4) and (5) gives the superinjective map
$\mathcal{C}(S)\to\mathcal{C}(M).$
This finishes the proof of the lemma. ∎
Figure 3. One of the steps towards obtaining the surface $S_{3}$
Next, we prove that every surface of infinite genus contains a
subsurface homeomorphic to $M$.
###### Lemma 6.2.
If $S$ is a surface with infinite genus then it contains a subsurface
$\Sigma\cong M$. In particular, there is a superinjective map
$\mathcal{C}(M)\to\mathcal{C}(S)$.
###### Proof.
Let $N$ be a neighborhood of a non-planar end $e$. Let
$c_{1},c_{2},\dots\subset N$ be a set of pairwise disjoint separating curves,
each of which bounds a one-holed torus. Let $a_{i}$ be an arc
connecting $c_{i-1}$ and $c_{i}$ such that $a_{i}\cap a_{j}=\emptyset$ for $i\neq j$.
Now, the boundary of a neighborhood of $\bigcup a_{i}\cup c_{i}$ contains a
separating arc $\alpha$. We define $\Sigma$ to be the component of
$S\setminus\alpha$ containing every $c_{i}$ (here, we take the complement
$S\setminus\alpha$ to be without boundary). It follows that $\Sigma$ has
infinite genus. Moreover, the complement of any compact subsurface contains a
single non-compact component, so $\Sigma$ has a single end and is therefore
homeomorphic to $M$. ∎
Finally, we put together the above pieces in order to prove Theorem 4:
###### Proof of Theorem 4.
Let $S$ and $S^{\prime}$ be surfaces of infinite type, and assume $S^{\prime}$
has infinite genus. By Lemma 6.1, there is a superinjective map
$\mathcal{C}(S)\to\mathcal{C}(M).$
In turn, Lemma 6.2 tells us there exists a superinjective map
$\mathcal{C}(M)\to\mathcal{C}(S^{\prime}),$
from which the result follows. ∎
## References
* [1] AimPL: Surfaces of infinite type. Available at http://aimpl.org/genusinfinity.
* [2] J. Aramayona, C. J. Leininger, J. Souto, Injections of mapping class groups. Geom. Topol. 13 (2009), no. 5, 2523–2541.
* [3] J. Aramayona, P. Patel, N. G. Vlamis, The first integral cohomology of big mapping class groups. Int. Math. Res. Not. 2020, no. 22, 8973–8996.
* [4] J. Aramayona, J. Souto, Homomorphisms between mapping class groups. Geom. Topol. 16 (2012), no. 4, 2285–2341.
* [5] J. Aramayona, N. G. Vlamis. Big mapping class groups: an overview. Preprint available at arXiv:2003.07950. To appear in In the tradition of Thurston (ed. A. Papadopoulos).
* [6] Y. Atarihuana, J. García, R. A. Hidalgo, S. Quispe, C. Ramírez Maluendas, Dessins d’enfants and some holomorphic structures on the Loch Ness Monster. Preprint available at arXiv:2005.09804.
* [7] A. Basmajian, N. G. Vlamis, There are no exotic ladder surfaces. Preprint available at arXiv:2009.14266.
* [8] J. Bavard, S. Dowdall, K. Rafi, Isomorphisms between big mapping class groups, Int. Math. Res. Not., 2020, no. 10, 3084–3099.
* [9] J. Behrstock, D. Margalit, Curve complexes and finite index subgroups of mapping class groups. Geom. Dedicata 118 (2006), 71–85.
* [10] R. W. Bell, D. Margalit, Braid groups and the co-Hopfian property. J. Algebra 303 (2006), no. 1, 275–294.
* [11] J. S. Birman, Mapping class groups and their relationship to braid groups. Comm. Pure Appl. Math. 22 (1969), 213–238.
* [12] J. S. Birman and H. M. Hilden, On isotopies of homeomorphisms of Riemann surfaces. Ann. of Math. (2) 97 (1973), 424–439.
* [13] M. R. Bridson, A. Haefliger, Metric spaces of non-positive curvature. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 319. Springer-Verlag, Berlin, 1999. xxii+643 pp.
* [14] G. Domat, Big pure mapping class groups are never perfect, appendix with R. Dickmann. arXiv:2007.14929.
* [15] F. Fanoni, T. Ghaswala, A. McLeay, Homeomorphic subsurfaces and the omnipresent arcs. Preprint available at arXiv:2003.04750.
* [16] B. Farb, D. Margalit. A primer on mapping class groups. Princeton Mathematical Series, 49. _Princeton University Press, Princeton, NJ, 2012._
* [17] J. Hernández Hernández, I. Morales, F. Valdez, Isomorphisms between curve graphs of infinite-type surfaces are geometric. Rocky Mountain J. Math. 48 (2018), no. 6, 1887–1904.
* [18] J. Hernández Hernández, I. Morales, F. Valdez, The Alexander method for infinite-type surfaces. Michigan Math. J. 68 (2019), no. 4, 743–753.
* [19] J. Hernández Hernández, F. Valdez, Automorphism groups of simplicial complexes of infinite-type surfaces. Publ. Mat. 61 (2017), no. 1, 51–82.
* [20] E. Irmak, Complexes of nonseparating curves and mapping class groups. Michigan Math. J. 54 (2006), no. 1, 81–110.
* [21] E. Irmak, Superinjective simplicial maps of complexes of curves and injective homomorphisms of subgroups of mapping class groups. Topology 43 (2004), no. 3, 513–541.
* [22] E. Irmak, Superinjective simplicial maps of complexes of curves and injective homomorphisms of subgroups of mapping class groups. II. Topology Appl. 153 (2006), no. 8, 1309–1340.
* [23] N. V. Ivanov, Automorphism of complexes of curves and of Teichmüller spaces. Internat. Math. Res. Notices 1997, no. 14, 651–666.
* [24] N. V. Ivanov, J. D. McCarthy, On injective homomorphisms between Teichmüller modular groups. I. Invent. Math. 135 (1999), no. 2, 425–486.
* [25] I. Kra, On the Nielsen-Thurston-Bers type of some self-maps of Riemann surfaces. Acta Math. 146 (1981), no. 3-4, 231–270.
* [26] K. Mann, Automatic continuity for some homeomorphism groups and mapping class groups of non-compact manifolds. Preprint available at https://e.math.cornell.edu/people/mann/papers/autcont_noncompact.pdf.
* [27] K. Mann, K. Rafi, Large scale geometry of big mapping class groups. Preprint, arXiv:1912.10914.
* [28] P. Patel, N. Vlamis, Algebraic and topological properties of big mapping class groups. Algebr. Geom. Topol. 18 (2018), no. 7, 4109–4142.
* [29] I. Richards, On the classification of noncompact surfaces. Trans. Amer. Math. Soc. 106 (1963), 259–269.
* [30] J.-P. Serre, Arbres, amalgames, $\operatorname{SL}_{2}$. Astérisque, No. 46. Société Mathématique de France, Paris, 1977. 189 pp.
* [31] K. J. Shackleton, Combinatorial rigidity in curve complexes and mapping class groups. Pacific J. Math. 230 (2007), no. 1, 217–232.
* [32] P. B. Shalen, Representations of 3-manifold groups and applications in topology. Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Berkeley, Calif., 1986), 607–614, Amer. Math. Soc., Providence, RI, 1987.
* [33] J. R. Stallings, On torsion-free groups with infinitely many ends. Ann. of Math. (2) 88 (1968), 312–334.
* [34] R. Winarski, Symmetry, isotopy, and irregular covers. Geom. Dedicata 177 (2015), 213–227.
# Selectors of discrete coarse spaces
Igor Protasov I.Protasov: Taras Shevchenko National University of Kyiv,
Department of Computer Science and Cybernetics, Academic Glushkov pr. 4d,
03680 Kyiv, Ukraine<EMAIL_ADDRESS>
###### Abstract.
Given a coarse space $(X,\mathcal{E})$ with the bornology $\mathcal{B}$ of
bounded subsets, we extend the coarse structure $\mathcal{E}$ from $X\times X$
to the natural coarse structure on
$(\mathcal{B}\backslash\\{\emptyset\\})\times(\mathcal{B}\backslash\\{\emptyset\\})$
and say that a macro-uniform mapping
$f:(\mathcal{B}\backslash\\{\emptyset\\})\rightarrow X$ (resp.
$f:[X]^{2}\rightarrow X$) is a selector (resp. 2-selector) of
$(X,\mathcal{E})$ if $f(A)\in A$ for each
$A\in\mathcal{B}\setminus\\{\emptyset\\}$ (resp. $A\in[X]^{2}$). We prove that
a discrete coarse space $(X,\mathcal{E})$ admits a selector if and only if
$(X,\mathcal{E})$ admits a 2-selector if and only if there exists a linear
order $\leq$ on $X$ such that the family of intervals $\\{[a,b]:a,b\in X,\
a\leq b\\}$ is a base for the bornology $\mathcal{B}$.
1991 MSC: 54C65.
Keywords: bornology, coarse space, selector.
## 1\. Introduction
The notion of selectors comes from Topology. Let $X$ be a topological space,
$exp\ X$ be the set of all non-empty closed subsets of $X$ endowed with some
(initially, the Vietoris) topology, $\mathcal{F}$ be a non-empty subset of
$exp\ X$. A continuous mapping $f:\mathcal{F}\rightarrow X$ is called an
$\mathcal{F}$-selector of $X$ if $f(A)\in A$ for each $A\in\mathcal{F}$. The
question on selectors of topological spaces has been studied in many papers;
we mention only [1], [4], [6], [7].
Formally, coarse spaces, introduced independently and simultaneously in [8]
and [13], can be considered as asymptotic counterparts of uniform spaces. But
actually this notion is rooted in Geometry and Geometric Group Theory, see
[13, Chapter 1] and [5, Chapter 4]. At this point, we need some basic
definitions.
Given a set $X$, a family $\mathcal{E}$ of subsets of $X\times X$ is called a
coarse structure on $X$ if
* •
each $E\in\mathcal{E}$ contains the diagonal
$\bigtriangleup_{X}:=\\{(x,x):x\in X\\}$ of $X$;
* •
if $E$, $E^{\prime}\in\mathcal{E}$ then $E\circ E^{\prime}\in\mathcal{E}$ and
$E^{-1}\in\mathcal{E}$, where $E\circ E^{\prime}=\\{(x,y):\exists
z\;\;((x,z)\in E,\ (z,y)\in E^{\prime})\\}$, $E^{-1}=\\{(y,x):(x,y)\in E\\}$;
* •
if $E\in\mathcal{E}$ and $\bigtriangleup_{X}\subseteq E^{\prime}\subseteq E$
then $E^{\prime}\in\mathcal{E}$.
Elements $E\in\mathcal{E}$ of the coarse structure are called entourages on
$X$.
For $x\in X$ and $E\in\mathcal{E}$ the set $E[x]:=\\{y\in
X:(x,y)\in E\\}$ is called the ball of radius $E$ centered at $x$.
Since $E=\bigcup_{x\in X}(\\{x\\}\times E[x])$, the entourage $E$ is uniquely
determined by the family of balls $\\{E[x]:x\in X\\}$. A subfamily
${\mathcal{E}}^{\prime}\subseteq\mathcal{E}$ is called a base of the coarse
structure $\mathcal{E}$ if each set $E\in\mathcal{E}$ is contained in some
$E^{\prime}\in\mathcal{E}^{\prime}$.
The pair $(X,\mathcal{E})$ is called a coarse space [13] or a ballean [8],
[11].
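As a standard illustration (our example, not one from this paper), every metric space $(X,d)$ carries a coarse structure, its so-called metric coarse structure, with base entourages:

```latex
% Base entourages of the metric coarse structure on (X,d):
E_{r}=\{(x,y)\in X\times X : d(x,y)\le r\},\qquad r>0,
% whose balls are the usual closed metric balls:
E_{r}[x]=\{y\in X : d(x,y)\le r\}.
% The axioms hold since E_{r}^{-1}=E_{r} and, by the triangle inequality,
E_{r}\circ E_{s}\subseteq E_{r+s}.
```

With this structure a subset is bounded precisely when it has finite diameter; for instance, for $X=\mathbb{Z}$ with the usual metric the bounded subsets are exactly the finite sets.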
In this paper, all coarse spaces under consideration are supposed to be
connected: for any $x,y\in X$, there is $E\in\mathcal{E}$ such that $y\in E[x]$. A
subset $Y\subseteq X$ is called bounded if $Y=E[x]$ for some
$E\in\mathcal{E}$, and $x\in X$. The family $\mathcal{B}_{X}$ of all bounded
subsets of $X$ is a bornology on $X$. We recall that a family $\mathcal{B}$ of
subsets of a set $X$ is a bornology if $\mathcal{B}$ contains the family
$[X]^{<\omega}$ of all finite subsets of $X$ and $\mathcal{B}$ is closed under
finite unions and taking subsets. A bornology $\mathcal{B}$ on a set $X$ is
called unbounded if $X\notin\mathcal{B}$. A subfamily $\mathcal{B}^{\prime}$
of $\mathcal{B}$ is called a base for $\mathcal{B}$ if, for each
$B\in\mathcal{B}$, there exists $B^{\prime}\in\mathcal{B}^{\prime}$ such that
$B\subseteq B^{\prime}$.
Each subset $Y\subseteq X$ defines a subspace $(Y,\mathcal{E}|_{Y})$ of
$(X,\mathcal{E})$, where $\mathcal{E}|_{Y}=\\{E\cap(Y\times
Y):E\in\mathcal{E}\\}$. A subspace $(Y,\mathcal{E}|_{Y})$ is called large if
there exists $E\in\mathcal{E}$ such that $X=E[Y]$, where $E[Y]=\bigcup_{y\in
Y}E[y]$.
Let $(X,\mathcal{E})$, $(X^{\prime},\mathcal{E}^{\prime})$ be coarse spaces. A
mapping $f:X\to X^{\prime}$ is called macro-uniform if for every
$E\in\mathcal{E}$ there exists $E^{\prime}\in\mathcal{E}^{\prime}$ such that
$f(E[x])\subseteq E^{\prime}[f(x)]$ for each $x\in X$. If $f$ is a bijection
such that $f$ and $f^{-1}$ are macro-uniform, then $f$ is called an
asymorphism. If $(X,\mathcal{E})$ and $(X^{\prime},\mathcal{E}^{\prime})$
contain large asymorphic subspaces, then they are called coarsely equivalent.
For a coarse space $(X,\mathcal{E})$, we denote by $X^{\flat}$ the set of all
non-empty bounded subsets of $X$, so
$X^{\flat}=\mathcal{B}\backslash\\{\emptyset\\}$, and by
$\mathcal{E}^{\flat}$ the coarse structure on $X^{\flat}$ with the base
$\\{E^{\flat}:E\in\mathcal{E}\\}$, where
$(A,B)\in E^{\flat}\Leftrightarrow A\subseteq E[B],\ \ B\subseteq E[A],$
and say that $(X^{\flat},\mathcal{E}^{\flat})$ is the hyperballean of
$(X,\mathcal{E})$. For hyperballeans see [2], [3], [9], [10].
We say that a macro-uniform mapping $f:X^{\flat}\longrightarrow X$ (resp.
$f:[X]^{2}\longrightarrow X$) is a selector (resp. 2-selector) of
$(X,\mathcal{E})$ if $f(A)\in A$ for each $A\in X^{\flat}$ (resp.
$A\in[X]^{2}$). We note that a selector is a macro-uniform retraction of
$X^{\flat}$ to $[X]^{1}$ identified with $X$.
We recall that a coarse space $(X,\mathcal{E})$ is discrete if, for each
$E\in\mathcal{E}$, there exists a bounded subset $B$ of $(X,\mathcal{E})$ such
that $E[x]=\\{x\\}$ for each $x\in X\setminus B$. Every bornology
$\mathcal{B}$ on a set $X$ defines the discrete coarse space
$X_{\mathcal{B}}=(X,\mathcal{E}_{\mathcal{B}})$, where
$\mathcal{E}_{\mathcal{B}}$ is a coarse structure with the base
$\\{E_{B}:B\in\mathcal{B}\\}$, $E_{B}[x]=B$ if $x\in B$ and $E_{B}[x]=\\{x\\}$
if $x\in X\setminus B$. On the other hand, every discrete coarse space
$(X,\mathcal{E})$ coincides with $X_{\mathcal{B}}$, where $\mathcal{B}$ is the
bornology of bounded subsets of $(X,\mathcal{E})$.
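To make these notions concrete, here is a small example of ours (not taken from the paper): let $X=\mathbb{Z}$ and let $\mathcal{B}=[\mathbb{Z}]^{<\omega}$ be the bornology of all finite subsets.

```latex
% The natural order on \mathbb{Z} gives an interval base for \mathcal{B}:
A\subseteq[\min A,\ \max A]\in\mathcal{B}
\quad\text{for every finite }A\neq\emptyset.
% A selector of the discrete coarse space \mathbb{Z}_{\mathcal{B}}:
s(A):=\min A,\qquad A\in\mathcal{B}\setminus\{\emptyset\}.
% Macro-uniformity: if (A,A')\in E_{B}^{\flat} for a finite B, then A and A'
% agree outside B, so \min A and \min A' either coincide or both lie in
C=[\min B,\ \max B],\quad\text{whence }(s(A),s(A'))\in E_{C}.
```

Note that $\mathbb{Z}$ with its natural order is the ordinal sum of the antiordinal of $\omega$ and the ordinal $\omega$, so this example is consistent with the characterization proved below.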
Our goal is to characterize discrete coarse spaces which admit selectors.
After exposition of results, we conclude with some comments and open problems.
## 2\. Results
Let $\leq$ be a linear order on a set $X$. We say that $(X,\leq)$ is
* •
right (left) well-ordered if every non-empty subset $Y$ of $X$ has a minimal
(maximal) element;
* •
right (left) bounded if $X$ has the maximal (minimal) element;
* •
bounded if $X$ is left and right bounded.
Every linear order $\leq$ on $X$ defines the bornology $\mathcal{B}_{\leq}$ on
$X$ such that the family $\\{[a,b]:a,b\in X,a\leq b\\}$, where $[a,b]=\\{x\in
X:a\leq x\leq b\\}$, is a base for $\mathcal{B}_{\leq}$. Clearly,
$X\in\mathcal{B}_{\leq}$ if and only if $(X,\leq)$ is bounded.
We say that a bornology $\mathcal{B}$ on a set $X$ has an interval base if
there exists a linear order $\leq$ on $X$ such that
$\mathcal{B}=\mathcal{B}_{\leq}$.
Theorem 1. For a bornology $\mathcal{B}$ on a set $X$ and the discrete coarse
space $X_{\mathcal{B}}$, the following statements are equivalent:
$(i)$ $X_{\mathcal{B}}$ admits a selector;
$(ii)$ $X_{\mathcal{B}}$ admits a 2-selector;
$(iii)$ $\mathcal{B}$ has an interval base.
Proof. If $X\in\mathcal{B}$ then we have nothing to prove: every mapping
$f:\mathcal{B}\backslash\\{\emptyset\\}\longrightarrow X$ (resp.
$f:[X]^{2}\longrightarrow X$) such that $f(A)\in A$ is a selector (resp.
2-selector) and we take an arbitrary linear order $\leq$ on $X$ such that
$(X,\leq)$ is bounded. In what follows, $X\notin\mathcal{B}$ so
$X_{\mathcal{B}}$ is unbounded. The implication $(i)\Rightarrow(ii)$ is
evident.
$(ii)\Rightarrow(iii)$ We take a 2-selector $f$ of $X_{\mathcal{B}}$ and
define a binary relation $\prec$ on $X$ as follows: $a\prec b$ if and only if
either $a=b$ or $f(\\{a,b\\})=a$.
We use the following key observation:
$(\ast)$ for every $B\in\mathcal{B}$, there exists $C\in\mathcal{B}$ such that
if $z\in X\setminus C$ then either $b\prec z$ for each $b\in B$ or $z\prec b$
for each $b\in B$.
Indeed, we take $C\in\mathcal{B}$ such that $B\subseteq C$ and if
$A,A^{\prime}\in[X]^{2}$ and $(A,A^{\prime})\in E_{B}^{\flat}$ then
$(f(A),f(A^{\prime}))\in E_{C}$.
We take and fix distinct $l,r\in X$ such that $l\prec r$ and use Zorn’s
lemma to choose a subset $A$ of $X$, maximal with respect to inclusion, such that $A=L\cup
R$, $L\cap R=\emptyset$, $R$ is right well-ordered by $\prec$ with the minimal
element $r$, $L$ is left well-ordered by $\prec$ with the maximal element $l$
and $x\prec y$ for all $x\in L$, $y\in R$.
By the maximality of $A$ and $(\ast)$, $A$ is unbounded in $X_{\mathcal{B}}$.
For $a,b\in A$, $a\prec b$, we denote $[a,b]_{A}=\\{x\in A:a\prec x\prec
b\\}$. Applying $(\ast)$ with $B=[a,b]_{A}$, we see that $[a,b]_{A}$ is
bounded in $X_{\mathcal{B}}$.
We consider three cases.
Case 1: $L$ and $R$ are unbounded in $X_{\mathcal{B}}$. We define some
auxiliary mapping $h:X\longrightarrow A$. For $x\in A$, we put $h(x)=x$. For
$x\in X\backslash A$, we use $(\ast)$ with $B=\\{r,x\\}$ to find the minimal
element $c\in R$ such that $x\prec y$ for each $y\in A$, $c\prec y$. If $c\neq
r$ then we put $h(x)=c$. Otherwise, we use $(\ast)$ to choose the maximal
element $d\in L$ such that $y\prec x$ for each $y\in L\cup\\{r\\}$, $y\prec
d$. We put $h(x)=d$.
We take arbitrary $a,b\in A$ such that $a\prec l\prec r\prec b$. If
$h(x)\in[a,b]_{A}$ then, by the construction of $h$, we have $a\prec x\prec b$.
Applying $(\ast)$ with $B=[a,b]_{A}$, we conclude that $h^{-1}([a,b]_{A})$ is
bounded in $X_{\mathcal{B}}$. In particular, $h^{-1}(c)$ is bounded in
$X_{\mathcal{B}}$ for each $c\in A$.
Now we are ready to define the desired linear order $\leq$ on $X$. If
$h(x)\prec h(y)$ and $h(x)\neq h(y)$ then we put $x<y$. If $c\in R$ then we
endow $h^{-1}(c)$ with a right well-order $\leq$. If $c\in L$ then we endow
$h^{-1}(c)$ with a left well-order $\leq$.
It remains to verify that the family $\\{[a,b]:a,b\in X,\ a\leq b\\}$ is a
base for $\mathcal{B}$. Let $a,b\in A$ and $a\leq b$. We have shown that
$h^{-1}([a,b]_{A})\in\mathcal{B}$, hence $[a,b]\in\mathcal{B}$. If $a,b\in X$
and $a\leq b$ then we take $a^{\prime}\in A$, $b^{\prime}\in A$ such that
$a^{\prime}<a$, $b<b^{\prime}$. Since $[a^{\prime},b^{\prime}]\in\mathcal{B}$,
we have $[a,b]\in\mathcal{B}$. On the other hand, if $Y$ is a bounded subset
of $X_{\mathcal{B}}$ then we apply $(\ast)$ with $B=Y\cup\\{l,r\\}$ to find
$a\in L$, $b\in R$ such that $h(B)\subseteq[a,b]_{A}$, hence
$B\subseteq[a,b]$.
Case 2: $L$ is bounded and $R$ is unbounded in $X_{\mathcal{B}}$. Since
$L\in\mathcal{B}$, by $(\ast)$, the set $C=\\{x\in X:x\prec y$ for each $y\in R\\}$
is bounded in $X_{\mathcal{B}}$. We use arguments from Case 1 to define $\leq$
on $X\setminus C$. Then we extend $\leq$ to $X$ so that $(C,\leq)$ is bounded
and $x<y$ for all $x\in C$, $y\in X\setminus C$.
Case 3: $L$ is unbounded and $R$ is bounded in $X_{\mathcal{B}}$. Since
$R\in\mathcal{B}$, by $(\ast)$, the set $D=\\{x\in X:y\prec x$ for each $y\in
L\\}$ is bounded in $X_{\mathcal{B}}$. We use arguments from Case 1 to define
$\leq$ on $X\setminus D$. Then we extend $\leq$ to $X$ so that $(D,\leq)$ is
bounded and $x<y$ for all $x\in X\setminus D$, $y\in D$.
$(iii)\Rightarrow(i)$. We take a linear order $\prec$ on $X$ witnessing that
$\mathcal{B}$ has an interval base. We define a 2-selector
$f:[X]^{2}\longrightarrow X$ by $f(\\{x,y\\})=x$ if and only if $x\prec y$.
Then we take the linear order $\leq$ on $X$ defined in the proof
$(ii)\Rightarrow(iii)$. To define a selector $s$ of $X_{\mathcal{B}}$, we
denote $X_{l}=\\{x\in X:x\leq l\\}$, $X_{r}=\\{x\in X:r\leq x\\}$. By the
construction of $\leq$, $X_{l}$ is right well-ordered and $X_{r}$ is left
well-ordered. We take an arbitrary $Y\in\mathcal{B}\setminus\\{\emptyset\\}$.
If $Y\cap X_{l}\neq\emptyset$ then we take the maximal element $a\in Y\cap
X_{l}$ and put $s(Y)=a$. Otherwise, we choose the minimal element $b\in Y\cap
X_{r}$ and put $s(Y)=b$.
To see that $s$ is macro-uniform, we take an interval $[a,b]$ in $(X,\leq)$
and $Y,Z\in\mathcal{B}\setminus\\{\emptyset\\}$ such that
$Y\setminus[a,b]=Z\setminus[a,b]$, $Y\cap[a,b]\neq\emptyset,$
$Z\cap[a,b]\neq\emptyset$. If $s(Y)\notin[a,b]$ then $s(Y)=s(Z)$. If
$s(Y)\in[a,b]$ then $s(Z)\in[a,b]$. $\ \ \ \Box$
An ordinal $\alpha$ endowed with the reverse ordering is called the
antiordinal of $\alpha$.
Corollary 2. If $X_{\mathcal{B}}$ has a selector then $\mathcal{B}$ has an
interval base with respect to some linear order $\leq$ on $X$ such that
$(X,\leq)$ is the ordinal sum of an antiordinal and an ordinal.
Proof. We take the linear order from the proof of Theorem 1 and note that
$X_{l}$ is an antiordinal, $X_{r}$ is an ordinal and $(X,\leq)$ is the ordinal
sum of $X_{l}$ and $X_{r}$. $\ \ \ \Box$
Corollary 3. If a bornology $\mathcal{B}$ on a set $X$ has a base linearly
ordered by inclusion then the discrete coarse space $X_{\mathcal{B}}$ admits a
selector.
Proof. Since $\mathcal{B}$ has a linearly ordered base, we can choose a base
$\\{B_{\alpha}:\alpha<\kappa\\}$ well-ordered by inclusion. We show that
$\mathcal{B}$ has an interval base and apply Theorem 1.
For each $\alpha<\kappa$, let $\mathcal{D}_{\alpha}=B_{\alpha+1}\setminus
B_{\alpha}$. We endow each $\mathcal{D}_{\alpha}$ with an arbitrary right
well-order $\leq$. If $x\in\mathcal{D}_{\alpha}$, $y\in\mathcal{D}_{\beta}$
and $\alpha<\beta$, we put $x<y$. Then $\mathcal{B}=\mathcal{B}_{\leq}$. $\ \
\ \Box$
Remark 4. Let $(X,\leq)$ be the ordinal sum of the antiordinal of $\omega$
and the ordinal $\omega_{1}$. Then the interval bornology $\mathcal{B}_{\leq}$
does not have a linearly ordered base. Indeed, let $X=L\cup R$,
$L=\\{l_{n}:n<\omega\\}$, $l_{n}<l_{m}$ iff $m<n$,
$R=\\{r_{\alpha}:\alpha<\omega_{1}\\}$, $r_{\alpha}<r_{\beta}$ iff
$\alpha<\beta$, and $l_{n}<r_{\alpha}$ for all $n,\alpha$. Assuming that
$\mathcal{B}_{\leq}$ has a linearly ordered base, we choose a base
$\mathcal{B}^{\prime}$ of $\mathcal{B}_{\leq}$ well-ordered by inclusion and
denote $\mathcal{B}^{\prime}_{n}=\\{A\in\mathcal{B}^{\prime}:min\ A=l_{n}\\}$.
By the choice of $R$, there exists $m\in\omega$ such that
$\mathcal{B}^{\prime}_{m}$ is cofinal in $\mathcal{B}_{\leq}$, but
$l_{m+1}\notin A$ for each $A\in\mathcal{B}^{\prime}_{m}$ and we get a
contradiction.
Theorem 5. Let $(X,\mathcal{E})$ be a coarse space with the bornology
$\mathcal{B}$ of bounded subsets. If $f$ is a 2-selector of $(X,\mathcal{E})$
then $f$ is a 2-selector of $X_{\mathcal{B}}$.
Proof. Let $B\in\mathcal{B}$, $A,A^{\prime}\in[X]^{2}$ and $(A,A^{\prime})\in
E_{B}^{\flat}$. Since $f$ is a 2-selector of $(X,\mathcal{E})$, there exists
$F\in\mathcal{E}$, $F=F^{-1}$ such that $(f(A),f(A^{\prime}))\in F$.
If $A\cap B=\emptyset$ then $A=A^{\prime}$. If $A\subseteq B$ then
$A^{\prime}\in B$, so $(f(A),f(A^{\prime}))\in E_{B}$.
Let $A=\\{b,a\\}$, $A^{\prime}=\\{b^{\prime},a\\}$, $b\in B$, $b^{\prime}\in
B$ and $a\in X\setminus B$. If $a\in F[\\{b,b^{\prime}\\}]$ then
$f(A),f(A^{\prime})\in F[\\{b,b^{\prime}\\}]$. If $a\notin
F[\\{b,b^{\prime}\\}]$ then either $f(A)=f(A^{\prime})=a$ or
$f(A),f(A^{\prime})\in\\{b,b^{\prime}\\}$.
In all considered cases, we have $(f(A),f(A^{\prime}))\in E_{F[B]}$. Hence,
$f$ is a 2-selector of $X_{\mathcal{B}}$. $\ \ \ \Box$
Remark 6. Every metric space $(X,d)$ has the natural coarse structure
$\mathcal{E}_{d}$ with the base $\\{E_{r}:r>0\\}$, $\
E_{r}=\\{(x,y):d(x,y)\leq r\\}$. Let $\mathcal{B}$ denote the bornology of
bounded subsets of $(X,\mathcal{E}_{d})$. By Corollary 3, the discrete coarse
space $X_{\mathcal{B}}$ admits a 2-selector. We show that
$(X,\mathcal{E}_{d})$ need not admit a 2-selector, so the converse of
Theorem 5 does not hold.
Let $X=\mathbb{Z}^{2}$, $x=(x_{1},x_{2})$, $y=(y_{1},y_{2})$, $d(x,y)=max\
\\{|x_{1}-y_{1}|,|x_{2}-y_{2}|\\}$. We suppose that there exists a 2-selector
$f$ of $(X,\mathcal{E}_{d})$ and choose a natural number $n$ such that if
$A,A^{\prime}\in[X]^{2}$ and $(A,A^{\prime})\in E_{1}^{\flat}$ then
$(f(A),f(A^{\prime}))\in E_{n}$, so $d(f(A),f(A^{\prime}))\leq n$. We denote
$S_{n}=\\{x\in X:d(x,0)=n\\}$. For $x\in S_{n}$, let $A_{x}=\\{x,-x\\}$. Then
we can choose $x,y\in S_{n}$ such that $d(x,y)=1$, $f(A_{x})=x$,
$f(A_{y})=-y$, but $d(x,-y)>n$.
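The distance arithmetic behind this contradiction can be checked directly; a short sketch for $n=5$ (this only verifies the arithmetic — the existence of adjacent $x,y$ on $S_{n}$ with $f(A_{x})=x$ and $f(A_{y})=-y$ is argued in the text):

```python
# Chebyshev metric on Z^2, as in Remark 6.
def d(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

n = 5
x, y = (n, 0), (n, 1)                 # two adjacent points on S_n
assert d(x, (0, 0)) == n and d(y, (0, 0)) == n   # both lie on S_n
assert d(x, y) == 1                   # (A_x, A_y) are E_1-close
assert d(x, (-y[0], -y[1])) == 2 * n  # but d(x, -y) = 2n > n
```

So no uniform bound $n$ on $d(f(A),f(A^{\prime}))$ can hold for all antipodal pairs $A_{x}=\\{x,-x\\}$.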
## 3\. Comments
1\. Let $(X,\mathcal{U})$ be a uniform space and let $\mathcal{F}_{X}$ denote
the set of all non-empty closed subsets of $X$ endowed with the Hausdorff-
Bourbaki uniformity. Given a subset $\mathcal{F}$ of $\mathcal{F}_{X}$, a
uniformly continuous mapping $f:\mathcal{F}\longrightarrow X$ is called an
$\mathcal{F}$-selector if $f(A)\in A$ for each $A\in\mathcal{F}$. If
$\mathcal{F}=[X]^{2}$ then $f$ is called a 2-selector.
In contrast to the topological case, the problem of uniform selections is much
less studied. Almost all known results are concentrated around uniformizations
of Michael’s theorem; for references see [12].
Given a discrete uniform space $X$, how can one detect whether $X$ admits a
2-selector? This question seems very difficult even in the case of a
countable discrete metric space $X$. To demonstrate the obstacles to a simple
characterization, we consider the following example.
We take a family $\\{C_{n}:n<\omega\\}$ of pairwise non-intersecting circles
of radius 1 on the Euclidean plane $\mathbb{R}^{2}$. Then we inscribe a
regular $n$-gon $M_{n}$ in $C_{n}$ and denote by $X$ the set of all vertices
of $\\{M_{n}:n<\omega\\}$. It is easy to verify that $X$ does not admit a
2-selector.
2\. Given a group $G$ with the identity $e$, we denote by $\mathcal{E}_{G}$ a
coarse structure on $G$ with the base
$\\{\\{(x,y)\in G\times G:y\in Fx\\}:F\in[G]^{<\omega},\ \ e\in F\\}$
and say that $(G,\mathcal{E}_{G})$ is the finitary coarse space of $G$. It
should be noticed that finitary coarse spaces of groups (in the form of Cayley
graphs) are used in Geometric Group Theory, see [5]. We note that the
bornology of bounded subsets of $(G,\mathcal{E}_{G})$ is the set
$[G]^{<\omega}$. Applying Theorem 1 and Theorem 5, we conclude that if
$(G,\mathcal{E}_{G})$ admits a 2-selector then $G$ must be countable.
Problem 1. Characterize countable groups $G$ such that the finitary coarse
space $(G,\mathcal{E}_{G})$ admits a 2-selector.
3\. Every connected graph $\Gamma[\mathcal{V}]$ with the set of vertices
$\mathcal{V}$ can be considered as the metric space $(\mathcal{V},d)$, where
$d$ is the path metric on $\mathcal{V}$.
Problem 2. Characterize graphs $\Gamma[\mathcal{V}]$ such that the coarse
space of $(\mathcal{V},d)$, where $d$ is the path metric on the set of
vertices $\mathcal{V}$, admits a 2-selector.
Acknowledgments. I thank the referee for critical remarks to the initial
version of the paper.
## References
* [1] G. Artico, U. Marconi, J. Pelant, L. Rotter, M. Tkachenko, Selections and suborderability, Fundam. Math. 175 (2002), 1-33.
* [2] D. Dikranjan, I. Protasov, K. Protasova, N. Zava, Balleans, hyperballeans and ideals, Appl. Gen. Topology 2 (2019), 431-447.
* [3] D. Dikranjan, I. Protasov, N. Zava, Hyperballeans of groups, Topology Appl. 263 (2019), 172-198.
* [4] R. Engelking, R. W. Heath, E. Michael, Topological well ordering and continuous selections, Invent. Math. 6 (1968), 150-158.
* [5] P. de la Harpe, Topics in Geometric Group Theory, University of Chicago Press, 2000.
* [6] J. van Mill, E. Wattel, Selections and orderability, Proc. Amer. Math. Soc. 83 (1981), 601-605.
* [7] J. van Mill, J. Pelant, R. Pol, Selections that characterize topological completeness, Fundam. Math. 149 (1996), 127-141.
* [8] I. Protasov, T. Banakh, Ball Structures and Colorings of Groups and Graphs, Math. Stud. Monogr. Ser., vol. 11, VNTL, Lviv, 2003.
* [9] I. Protasov, K. Protasova, On hyperballeans of bounded geometry, Europ. J. Math. 4 (2018), 1515-1520.
* [10] I. Protasov, K. Protasova, The normality of macrocubes and hyperballeans, Europ. J. Math. https://doi.org/10.1007/s40879-020-00400-x.
* [11] I. Protasov, M. Zarichnyi, General Asymptology, Math. Stud. Monogr. Ser., vol. 12, VNTL, Lviv, 2007.
* [12] K. Przeslawski, D. T. Yost, Continuity properties of selectors and Michael’s theorem, Michigan Math. J. 36 (1989), 113-134.
* [13] J. Roe, Lectures on Coarse Geometry, Univ. Lecture Ser., vol. 31, American Mathematical Society, Providence RI , 2003.
^1 Technical University of Munich, Munich, Germany
^2 Department of Computer Science and Engineering, I.I.T. Delhi, New Delhi, India
# dtControl 2.0: Explainable Strategy Representation via Decision Tree
Learning Steered by Experts

(This work has been partially supported by the German Research Foundation
(DFG) project No. 383882557 _SUV_ (KR 4890/2-1), No. 427755713 _GOPro_ (KR
4890/3-1) and the TUM International Graduate School of Science and Engineering
(IGSSE) grant 10.06 _PARSEC_. We thank Tim Quatmann for implementing JSON
export of strategies in STORM and Pushpak Jagtap for his support with the
SCOTS models.)
Pranav Ashok^1, Mathias Jackermeier^1, Jan Křetínský^1, Christoph Weinhuber^1 (🖂), Maximilian Weininger^1, Mayank Yadav^2
###### Abstract
Recent advances have shown how decision trees are apt data structures for
concisely representing strategies (or controllers) satisfying various
objectives. Moreover, they also make the strategy more explainable. The recent
tool dtControl provides pipelines with tools supporting strategy synthesis
for hybrid systems, such as SCOTS and Uppaal Stratego. We present dtControl
2.0, a new version with several fundamentally novel features. Most
importantly, the user can now provide domain knowledge to be exploited in the
decision tree learning process and can also interactively steer the process
based on the dynamically provided information. To this end, we also provide a
graphical user interface. It allows for inspection and re-computation of parts
of the result, suggesting as well as receiving advice on predicates, and
visual simulation of the decision-making process. Besides, we interface model
checkers of probabilistic systems, namely STORM and PRISM, and provide
dedicated support for categorical (enumeration-type) state variables.
Consequently, the controllers are more explainable and smaller.
###### Keywords:
Strategy representation · Controller representation · Decision tree · Explainable learning · Hybrid systems · Probabilistic model checking · Markov decision process
## 1 Introduction
A _controller_ (also known as strategy, policy or scheduler) of a system
assigns to each state of the system a set of actions that should be taken in
order to achieve a certain goal. For example, one may want to satisfy a given
specification of a robot’s behaviour or exhibit a concurrency bug appearing
only in some interleaving. It is desirable that the controllers possess
several additional properties, besides achieving the goal, in order to be
usable in practice. Firstly, controllers should be _explainable_. Only then
can they be understood, trusted and implemented by the engineers, certified by
the authorities, or used in the debugging process [10]. Secondly, they should
be _small_ in size and efficient to run. Only then they can be deployed on
embedded devices with limited memory of a few kilobytes, while the
automatically synthesized ones are orders of magnitude larger [49]. Thirdly,
whenever the primary goal, e.g. functional correctness, is accompanied by a
secondary criterion, e.g. energy efficiency, they should be _performant_ with
respect to this criterion.
Automatic controller synthesis is able to provide controllers for a given goal
in various domains, such as probabilistic systems [32, 16], hybrid systems
[45, 15, 30, 18] or reactive systems [35]. In some cases, even the performance
can be reflected [15]. However, despite recent interest in explainability in
connection to AI-based controllers [2] and despite typically small memories of
embedded devices, automatic techniques for controller synthesis mostly fall
short of producing small explainable results. A typical outcome is a
controller in the form of a look-up table, listing the actions for each
possible state, or a binary decision diagram (BDD) [13] representation
thereof. While the latter reduces the size to some extent, none of the two
representations is explainable: the former due to its size, the latter due to
the bit-level representation with all high-level structure lost. Instead,
learning representations in the form of decision trees (DT) [38] has been
recently explored to this end [6, 3]. DTs turn out to be usually smaller than
BDD but do not drown to the bit level and are generally well known for their
interpretability and explainability due to their simple structure. However,
despite showing significant potential, the state-of-the-art tool dtControl [4]
uses predicates without natural interpretation, and moreover, the best size
reductions are achieved using _determinization_ , i.e. making the controller
less permissive, which negatively affects performance [6].
###### Example 1 (Motivating example)
Consider the cruise control model of [34], where we want to control the speed
of our car so that it never crashes into the car in front while, as a
secondary performance objective, keeping the distance between the two cars
small.
A safe controller for this model, as returned by Uppaal Stratego, is a
lookup table of size 418 MB with 300,000 lines. The respective BDD has 1,448
nodes with all information bit-blasted. Using adaptations of standard DT-
construction algorithms, as implemented in dtControl, we can get a DT with 987
nodes, which is still too large to be explained. Using determinization
techniques, the controller can be compressed to 3 nodes! However, then the DT
allows only to decelerate until the minimum velocity. This is safe, as we
cannot crash into the car in front, but it does not even attempt at getting
close to the front car, and thus has a very bad performance.
One can find a strategy with optimal performance, retaining the maximal
permissiveness, not determinizing at all, which can be represented by a DT
with 11 nodes. A picture of this DT as well as reasoning how to derive the
predicates from the kinematic equations is in Appendix 0.A.
However, exactly because the predicates are based on the _domain knowledge_ ,
namely the kinematic equations, they take the form of _algebraic predicates_
and not simply linear predicates, which are the only ones in dtControl and
commonly in the machine-learning literature on DTs. $\triangle$
This motivating example shows that using domain knowledge and algebraic
predicates, available now in dtControl 2.0, one can get a smaller representation
than when using existing heuristics. Further, it improves the performance of
the DT, and it is easily explainable, as it is based on domain knowledge. In
fact, the discussed controller is so explainable that it allowed us to find a
bug in the original model. In general, using dtControl 2.0, a domain expert
can try to compress the controller, thus gaining more insight and validating
that it is correct. Another example of this has been reported from the use of dtControl
in the manufacturing domain [31].
While automatic synthesis of good predicates from the domain knowledge may
seem as distant as automatic synthesis of program invariants or automatic
theorem proving, we adopt the philosophy of those domains and offer _semi-
automatic techniques_.
Additionally, if not performance but only safety of a controller is relevant,
we can still benefit from determinization without drawbacks. To this end, we
also provide a new determinization procedure that generalizes the extremely
successful MaxFreq technique of [4] and is as good or better on all our
examples.
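For intuition, the MaxFreq idea of [4] can be sketched as follows (our reading of the technique; the generalized procedure of dtControl 2.0 is more involved and is described in Section 6): for every state, keep the single allowed action that occurs most frequently over the whole controller, so that large regions of the state space end up with the same action and compress well.

```python
from collections import Counter

def maxfreq_determinize(controller):
    """Sketch of a MaxFreq-style determinization. controller maps each
    state to a non-empty set of allowed actions; we keep, per state,
    the allowed action with the highest global frequency."""
    freq = Counter(a for actions in controller.values() for a in actions)
    return {s: max(actions, key=lambda a: freq[a])
            for s, actions in controller.items()}
```

For instance, if some action is allowed in every state, all states are mapped to it and the resulting DT is a single leaf.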
To incorporate the changes just discussed, namely algebraic predicates, semi-
automatic approach, and better determinization, we have also reworked the tool
and its interfaces. To begin with, the software architecture of dtControl 2.0
is now very modular and allows for easy further modifications, as well as
adding support for new synthesis tools. In fact, we have already added parsers
for the tools STORM [16] and PRISM [32], and thus we support probabilistic
models as well. Since these models also contain categorical (or enumeration-
type) variables, e.g. protocol states, we have also added support for
categorical predicates. Furthermore, we added a graphical user interface that
not only is easier to use than the command-line interface, but also allows to
inspect the DT, modify and retrain parts of it, and simulate runs of the model
under its control, further increasing the possibilities to explain the DT and
validate the controller.
Summing up, the main improvements of dtControl 2.0 over the previous version
[4] are the following:
* •
Support of algebraic predicates and categorical predicates
* •
Semi-automatic interface and GUI with several interactive modes
* •
New determinization procedure
* •
Interfaces for the model checkers PRISM and STORM, and experimental evidence
of improvements on probabilistic models compared to BDDs
The paper is structured as follows. After recalling necessary background in
Section 2, we give an overview of the improvements over the previous version
of the tool from the global perspective in Section 3. We detail on the
algorithmic contribution in Sections 4 (predicate domains), 5 (predicate
selection) and 6 (determinization). Section 7 provides experimental evaluation
and Section 8 concludes.
#### 1.0.1 Related work.
DTs have been suggested for representing controllers of and counterexamples in
probabilistic systems in [10], however, the authors only discuss approximate
representations. The ideas have been extended to other settings, such as
reactive synthesis [11] and hybrid systems [6]. More general linear predicates
have been considered in leaves of the trees in [3]. dtControl 2.0 contains the
DT induction algorithms from [6, 3]. The differences to the previous version
of the tool dtControl [4] are summarized above and schematically depicted in
Figure 2.
Besides, DTs have been used to represent and learn strategies for safety
objectives in [40] and to learn program invariants in [20]. Further, DTs were
used for representing the strategies during the model checking process, namely
in strategy iteration [9] or in simulation-based algorithms [42]. Representing
controllers exactly using a structure similar to a DT (mistakenly claimed to be
an algebraic decision diagram) was first suggested by [21], however, no
automatic construction algorithm was provided.
The idea of non-linear predicates has been explored in [28]. In that work,
however, it is not based on domain knowledge, but rather on projecting the
state-space to higher dimensions.
BDDs [13] have been commonly used to represent strategies in planning [14],
symbolic model checking [32] as well as to represent hybrid system controllers
[45, 30]. While BDDs [13] operate only on Boolean variables, they have the
advantage of being diagrams and not trees. Moreover, they correspond to
Boolean functions that can be implemented on hardware easily. [17] proposes an
automatic compression technique for numerical controllers using BDDs. Similar
to our work, [49] considers the problem of obtaining concise BDD
representation of controllers and presents a technique to obtain smaller BDDs
via determinization. However, BDDs are difficult to explain due to variables
being bit-blasted and their size is very sensitive to the chosen variable
ordering. An extension of BDDs, algebraic or multi-terminal decision diagrams
(ADD/MTBDD) [7, 19], have been used in reinforcement learning for strategy
synthesis [26, 47]. ADDs extend BDDs with the possibility to have multiple
values in the terminal nodes, but the predicates still work only on Boolean
variables, retaining the disadvantages of BDDs.
## 2 Decision tree learning for controller representation
In this section, we briefly describe how controllers can be represented as
decision trees as in [4]. We give an exemplified overview of the method,
pinpointing the role of our algorithmic contributions.
A (non-deterministic, also called permissive) _controller_ is a map
$C:S\to 2^{A}$ from states to non-empty sets of actions. This notion of a
controller is fairly general; the only requirement is that it has to be
memoryless and non-randomized. Controllers of this kind are optimal for many
tasks such as expected (discounted) reward, reachability or parity objectives.
Moreover, even finite-memory controllers can be written in this form by
considering the product of the state space with the finite memory as the
domain, for example, like in LTL model checking.
_Decision trees_ (DT), e.g. [38], are trees where every leaf node is labelled
with a non-empty set of actions and every inner node is labelled with a
_predicate_ $\rho:S\to\\{\mathit{true},\mathit{false}\\}$.
(a) Lookup table:

$v_{o}$ | $v_{f}$ | $d$ | actions
0 | 0 | 5 | $\\{\mathit{neu}\\}$
2 | 6 | 10 | $\\{\mathit{dec},\mathit{neu},\mathit{acc}\\}$
2 | 6 | 15 | $\\{\mathit{dec},\mathit{neu},\mathit{acc}\\}$
4 | 4 | 15 | $\\{\mathit{dec},\mathit{neu}\\}$

(b) Decision tree: the root tests $v_{o}>0$; its _false_ branch is the leaf
$\\{\mathit{neu}\\}$, its _true_ branch tests $v_{f}>4$, whose _false_ branch
is the leaf $\\{\mathit{dec},\mathit{neu}\\}$ and whose _true_ branch is the
leaf $\\{\mathit{dec},\mathit{neu},\mathit{acc}\\}$.

Figure 1: An example controller based on the cruise-control model in the form
of a lookup table (left), and the corresponding decision tree (right).
###### Example 2 (Decision tree representation)
As an example, consider the controller given in Figure 1a. It is a subset of
the real cruise-control case study from the motivating Example 1. A state is a
3-tuple of the variables $v_{o}$, $v_{f}$ and $d$, which denote the velocity
of our car, the front car and the distance between the cars respectively. In
each state, our car may be allowed to perform a subset of the following set of
actions: decelerate ($\mathit{dec}$), stay in neutral ($\mathit{neu}$) or
accelerate ($\mathit{acc}$). A DT representing this lookup table is depicted
in Figure 1b.
Given a state, for example $v_{o}=v_{f}=4,d=10$, the DT is evaluated as
follows: We start at the root and, since it is an inner node, we evaluate its
predicate $v_{o}>0$. As this is true, we follow the true branch and reach the
inner node labelled with the predicate $v_{f}>4$. This is false, so we follow
the false branch and reach the leaf node labelled
$\\{\mathit{dec},\mathit{neu}\\}$. Hence, we know that the two possibilities
of decelerating and staying in neutral are allowed by the
controller. $\triangle$
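The evaluation procedure of Example 2 can be sketched in a few lines of Python (the state and predicate encodings are our own choice for illustration, not dtControl's internal representation):

```python
class Leaf:
    def __init__(self, actions):
        self.actions = actions

class Node:
    def __init__(self, predicate, true_child, false_child):
        self.predicate, self.true_child, self.false_child = \
            predicate, true_child, false_child

def evaluate(tree, state):
    """Follow predicates from the root until a leaf is reached."""
    while isinstance(tree, Node):
        tree = tree.true_child if tree.predicate(state) else tree.false_child
    return tree.actions

# The DT of Figure 1b; a state is a dict of the three state-variables.
dt = Node(lambda s: s["v_o"] > 0,
          Node(lambda s: s["v_f"] > 4,
               Leaf({"dec", "neu", "acc"}),   # true branch of v_f > 4
               Leaf({"dec", "neu"})),         # false branch of v_f > 4
          Leaf({"neu"}))                      # false branch of v_o > 0

print(sorted(evaluate(dt, {"v_o": 4, "v_f": 4, "d": 10})))  # ['dec', 'neu']
```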
To construct a DT representation of a given controller, the following
recursive algorithm may be used. Note that it is heuristic since constructing
an optimal binary decision tree is an NP-complete problem [27].
* Base case:
If all states in the controller agree on their set of actions $B$ (i.e.
for all states $s$ we have $C(s)=B$), return a leaf node with label $B$.
* Recursive case:
Otherwise, we split the controller. For this, we select a predicate $\rho$ and
construct an inner node with label $\rho$. Then we partition the controller by
evaluating the predicate on the state space, and recursively construct one DT
for the sub-controller on states $\\{s\in S\mid\rho(s)\\}$ where the predicate
is true, and one for the sub-controller where it is false. These controllers
are the children of the inner node with label $\rho$ and we proceed
recursively.
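The two cases above translate into a short recursive sketch. Here `select_predicate` is an assumed helper implementing the predicate search discussed next; it must return a predicate that separates at least one state, otherwise the recursion would not terminate.

```python
def build_dt(controller, select_predicate):
    """controller: dict mapping states to frozensets of actions.
    Returns a nested-tuple representation of the DT."""
    action_sets = set(controller.values())
    if len(action_sets) == 1:                 # base case: all states agree
        return ("leaf", action_sets.pop())
    rho = select_predicate(controller)        # recursive case: split
    true_part = {s: a for s, a in controller.items() if rho(s)}
    false_part = {s: a for s, a in controller.items() if not rho(s)}
    return ("node", rho,
            build_dt(true_part, select_predicate),
            build_dt(false_part, select_predicate))
```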
For selecting the predicate, we consider two hyper-parameters: The _domain_ of
the predicates (see Section 4) and the way to _select_ predicates (see Section
5). The selection is typically performed by selecting the predicate with the
lowest _impurity_; this is a measure of how homogeneous (or “pure”) the
controller is after the split, in other words the degree to which all the
states agree on their actions.
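A common instance of such a measure is Shannon entropy, weighted over the two sub-controllers; a minimal sketch (treating each set of actions as one label):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of (hashable) action-set labels."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def split_impurity(controller, rho):
    """Weighted entropy of the split induced by predicate rho on a
    controller given as a dict state -> action-set label."""
    parts = [[a for s, a in controller.items() if rho(s) == b]
             for b in (True, False)]
    n = len(controller)
    return sum(len(p) / n * entropy(p) for p in parts if p)
```

A predicate that separates the states into homogeneous halves yields impurity 0; among the candidates, the predicate with the lowest value is selected.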
We also consider a third hyper-parameter of the algorithm, namely
_determinization_ by _safe early stopping_ (see Section 6). This modifies the
base case as follows: if all states in the controller agree on at least one
action $a$ (i.e. for all states $s$ we have $a\in C(s)$), then we return a
leaf node with label $\\{a\\}$. This variant of early stopping ensures that,
even though the controller is not represented exactly, still for every state a
safe action is allowed.
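The modified base case amounts to intersecting the action sets of all states in the sub-controller; a sketch:

```python
def common_action(controller):
    """Safe early stopping: return an action allowed in every state of
    the (sub-)controller, or None if no shared action exists."""
    shared = None
    for actions in controller.values():
        shared = set(actions) if shared is None else shared & set(actions)
        if not shared:
            return None
    return min(shared)   # deterministic pick among the shared actions
```

If the intersection is non-empty, the recursion stops with a singleton leaf; otherwise the splitting continues as before.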
Hence, if the original controller satisfies some property, e.g. that a safe
set of states is never left, the DT construction algorithm ensures that this
property is retained. This is because our algorithm represents the strategy
exactly (or a safe subset, in case of determinization) and does not generalize
as DTs typically do in machine learning. DTs are suitable for both tasks, as
both rely on the strength of DTs exploiting underlying structure.
###### Remark 1
Note that for some types of objectives such as reachability, determinization
of permissive strategies might lead to a violation of the original guarantees.
For example, consider a strategy that allows both a self-looping and a non-
self-looping action at a particular state. If the determinizer decides to
restrict to the self-looping action, the reachability property may be violated
in the determinized strategy. However, this problem can be addressed when
synthesizing the strategy by ensuring that every action makes progress towards
the target.
## 3 Tool
Figure 2: An overview of the components of dtControl 2.0, thereby showing
software architecture and workflow. Contributions of this paper are
highlighted in red.
dtControl 2.0 is an easy-to-use open-source tool for representing memoryless
symbolic controllers as more compact and more interpretable DTs, while
retaining safety guarantees of the original controllers. Our website
dtcontrol.model.in.tum.de offers hyperlinks to the easy-to-install pip
package (pip is a standard package-management system used to install and
manage software packages written in Python), the documentation and the source
code. Additionally, the artifact that has passed the TACAS 21 artifact
evaluation is available here [5].
The schema in Figure 2 illustrates the workflow of using dtControl,
highlighting new features in red. Considering dtControl as a black box, it
shows that given a controller, it returns a DT representing the controller and
also offers the possibility to simulate a run of the system under the control
of the DT, visualizing the decisions made. The controller can be input in
various formats, including the newly supported strategy representations of the
well-known probabilistic model checkers PRISM [32] and STORM [16]. The DT is
output in several machine readable formats, and as C-code that can be directly
used for executing the controller on embedded devices. Note that this C-code
consists only of nested if-else-statements. The new graphical user interface
also offers the possibility to inspect the graph in an interactive web user
interface, which even allows to edit the DT. This means that parts of the DT
can be retrained with a different set of hyper-parameters and directly
replaced. This way, one can for example first train a determinized DT and then
retrain important parts of it to be more permissive and hence more performant
for a secondary criterion. Figure 3 shows a screenshot of the newly integrated
graphical user interface.
Figure 3: Screenshot of the new web-based graphical user interface. It offers
a sidebar for easy selection of the controller file and hyper-parameters, an
experiments table where benchmarks can be queued, and a results table in which
some statistics of the run are provided. Moreover, users can click on the
‘eye’ icon in the results table to inspect the built decision tree.
Looking at the inner workings of dtControl, we see the three important hyper-
parameters that were already introduced in Section 2: predicate domain,
predicate selector, and determinizer. For each of these, dtControl offers
various choices, some of which were newly added for version 2.0. Most
prominently, the user now has the possibility to directly influence both the
predicate domain and the predicate selector, by providing domain knowledge and
thus also additional predicates, or by directly using the interactive
predicate selection. More details on the predicate domain and how domain
knowledge is specified can be found in Section 4. The different ways to select
predicates, especially the new interactive mode, are the topic of Section 5.
Our new insights into determinization are described in Section 6. To support
the user in finding a good set of hyper-parameters, dtControl also offers
extensive benchmarking functionality, allowing to specify multiple variants
and reporting several statistics.
#### Technical notes.
dtControl 2.0 is written in Python 3 following an architecture closely
resembling the schema in Figure 2. The modularity, along with our technical
documentation, allows users to easily extend the tool. For example, supporting
another input format is only a matter of adding a parser.
dtControl 2.0 works with Python version 3.7.9 or higher. The core of the tool
which runs the learning algorithms requires numpy [23], pandas [36] and
scikit-learn [41] and optionally the library for the heuristic OC1 [39]. The
algebraic predicates rely on SymPy [37] and SciPy [48]. The web user interface
is powered by Flask [1] and D3.js [8].
## 4 Predicate domain
The domain of the predicates that we allow in the inner nodes of the DT is of
key importance. As we saw in the motivating Example 1, allowing for more
expressive predicates can dramatically reduce the size of the DT.
We assume that our state space is structured, i.e. it is a Cartesian product
of the domain of the variables ($S=S_{1}\times\dots\times S_{n}$). We use
$s_{i}$ to refer to the $i$-th state-variable of a state $s\in S$. In Example
2, the three state-variables are the velocity of our car, the velocity of the
front car, and the distance.
We first give an overview of the predicate domains dtControl 2.0 supports,
before discussing the details of the new ones.
_Axis-aligned predicates_ [38] have the form $s_{i}\leq c$, where $c$ is a
rational constant. This is the easiest form of predicates, and they have the
advantage that there are only finitely many, as the domain of every state-
variable is bounded. However, they are also least expressive.
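Since each state-variable takes only finitely many values in a given controller, the candidate thresholds can be enumerated exhaustively, e.g. as midpoints between consecutive distinct values; a sketch:

```python
def axis_aligned_candidates(states):
    """Enumerate axis-aligned predicates s_i <= c for a finite set of
    states (tuples); c ranges over midpoints of consecutive distinct
    values of each state-variable."""
    candidates = []
    dim = len(next(iter(states)))
    for i in range(dim):
        values = sorted({s[i] for s in states})
        for lo, hi in zip(values, values[1:]):
            candidates.append((i, (lo + hi) / 2))  # predicate: s[i] <= c
    return candidates
```

Each candidate can then be scored with the impurity measure and the best one selected.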
_Linear predicates_ (also known as oblique [39]) have the form
$\sum_{i}s_{i}\cdot a_{i}\leq c$, where $a_{i}$ are rational coefficients and
$c$ is a rational constant. They have the advantage that they are able to
combine several state-variables which can lead to saving linearly many splits,
cf. [29, Fig. 5.2]. The disadvantage of these predicates is that there are
infinitely many choices of coefficients, which is why heuristics were
introduced to determine a good set of predicates to try out [39, 4]. However,
heuristically determined coefficients and combinations of variables can impede
explainability.
_Algebraic predicates_ have the form $f(s)\leq c$, where $f$ is any
mathematical function over the state-variables and $c$ is a rational constant.
It can use elementary functions such as exponentiation, $\log$, or even
trigonometric functions. Example 1 illustrated how this can reduce the size
and improve explainability. More discussion of these predicates follows in
Section 4.2.
_Categorical predicates_ are special predicates for categorical (enumeration-
type) state-variables such as colour or protocol state, and they are discussed
in Section 4.1.
### 4.1 Categorical predicates
Categorical state-variables do not have a numeric domain, but instead are
unordered and qualitative. They commonly occur in the models coming from the
tools PRISM and STORM.
###### Example 3
Let one state-variable be ‘colour’ with the domain
$\\{\text{red},\text{blue},\text{green}\\}$. A simple approach is to assign
numbers to every value, e.g. $\text{red}=0,\text{blue}=1,\text{green}=2$, and
treat this variable as numeric. However, a resulting predicate such as
$\text{colour}\leq 1$ is hardly explainable and additionally depends on the
assignment of numbers. For example, it would not be possible to single out
$\text{colour}\in\\{\text{red},\text{green}\\}$ using a single predicate,
given the aforementioned numeric assignment. Using linear predicates, for
example adding half of the colour to some other state-variable, is even more
confusing and dependent on the numeric assignment. $\triangle$
Figure 4: Two examples of a categorical split. On the left, all possible
values of the state-variable colour lead to a different child in a non-binary
split. On the right, red and green lead to the same child, which is a result
of grouping similar values together.
Instead of treating the categorical variables using their numeric encodings,
dtControl 2.0 supports specialized algorithms from literature, see e.g. [43,
44]. They work by labelling an inner node with a categorical variable and
performing a (possibly non-binary) split according to the value of the
categorical variable. The node can have at most one child for every possible
value of the categorical variable, but it can also group together similarly
behaving values, see Figure 4 for an example. For the grouping, dtControl 2.0
uses the greedy algorithm from [44, Chapter 7] called attribute-value
grouping. It starts with a separate branch for every possible value of the
categorical variable and then merges branches as long as doing so improves the
predicate; see Appendix 0.C for the full pseudocode of the
algorithm.
In our experiments we found that the grouping algorithm sometimes did not
merge branches in cases where it would actually have made the DT smaller or
more explainable. This is because the resulting impurity (the measure of a
predicate's quality) could be marginally worse due to floating-point inaccuracies. Thus,
we introduce _tolerance_ , a bias parameter in favour of larger value groups.
When checking whether to merge branches, we do not require the impurity to
improve, but we allow it to become worse up to our tolerance. Setting
tolerance to 0 corresponds exactly to the algorithm from [44], while setting
tolerance to $\infty$ results in merging branches until only two remain, thus
producing binary predicates.
To allow dtControl 2.0 to use categorical predicates, the user has to provide
a metadata file, which tells the tool which variables are categorical and
which are numeric; see Appendix 0.B.1 for an example.
### 4.2 Algebraic predicates
It is impossible to try out every mathematical expression over the state-
variables, and it would also not necessarily result in an explainable DT.
Instead, we allow the user to enter _domain knowledge_ to suggest templates of
predicates that dtControl 2.0 should try. See Appendix 0.B.2 for a discussion
of the format in which domain knowledge can be entered.
Providing the basic equations that govern the model behaviour can already help
in finding a good predicate, and is easy to do for a domain expert.
Additionally, dtControl 2.0 offers several possibilities to further exploit
the provided domain knowledge:
Firstly, the given predicates need not be exact, but may contain coefficients.
These coefficients can either be completely arbitrary or come from a finite
set suggested by the user. For coefficients with a finite domain, dtControl 2.0
tries all possibilities; for arbitrary coefficients, it uses curve fitting to
find a good value. For example, the user can specify a predicate such as
$d+(v_{o}-v_{f})\cdot c_{0}>c_{1}$ with $c_{0}$ being an arbitrary rational
number and $c_{1}\in\\{0,5,10\\}$.
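The two coefficient mechanisms can be sketched as follows. This is a hypothetical illustration, not dtControl's code: the toy states and the regression target `y` stand in for whatever fitting target the tool derives from the data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Predicate template from the example: d + (v_o - v_f) * c0 > c1,
# with c0 arbitrary and c1 in {0, 5, 10}.
def template(X, c0):
    d, v_o, v_f = X
    return d + (v_o - v_f) * c0

# Toy data: three states (d, v_o, v_f), transposed to one row per variable,
# and a toy target for the left-hand side (an assumption for illustration).
X = np.array([[10.0, 2.0, 1.0], [8.0, 3.0, 1.0], [6.0, 1.0, 2.0]]).T
y = np.array([12.0, 12.0, 4.0])

# Arbitrary coefficient: fit c0 by least squares.
(c0_fit,), _ = curve_fit(template, X, y)

# Finite-domain coefficient: simply try every allowed value of c1.
lhs = template(X, c0_fit)
splits = {c1: lhs > c1 for c1 in (0, 5, 10)}  # one candidate split per value
```

Each candidate split would then be scored by its impurity and the best one kept.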
Secondly, the interactive predicate selection (see Section 5) allows the user
to try out various predicates at once and observe their respective impurity in
the current node. The user can then choose among them as well as iteratively
suggest further predicates, inspired by those where the most promising results
were observed.
Thirdly, the decisions given by a DT can be visualized in the simulator,
possibly leading to a better understanding of the controller. Upon gaining any
further insight, the user can directly edit any subtree of the result,
possibly utilizing the interactive predicate selection again.
## 5 Predicate selection
The tool offers a range of options to affect the selection of the most
appropriate predicate from a given domain.
##### Impurity measures:
As mentioned in Section 2, the predicate selection is typically based on the
lowest _impurity_ induced. The most commonly used impurity measure (and the
only one the first version of dtControl supported) is Shannon’s entropy [46].
In dtControl 2.0, a number of other impurity measures from the literature [43,
12, 25, 39, 3] are available. However, our results indicate that entropy
typically performs the best, and therefore it is used as the default option
unless the user specifies otherwise. Due to lack of space, we defer the
details and experimental comparison between the impurity measures to Appendix
0.D.
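For concreteness, the default impurity can be sketched as follows; this is a minimal stand-alone version, assuming each state already carries a single action label.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of action labels."""
    total = len(labels)
    return sum(-(n / total) * log2(n / total) for n in Counter(labels).values())

def split_impurity(left, right):
    """Impurity of a candidate predicate: size-weighted entropy of its children."""
    n = len(left) + len(right)
    return len(left) / n * entropy(left) + len(right) / n * entropy(right)

# A pure child contributes 0 bits; a perfectly mixed one contributes 1 bit.
assert split_impurity(["a", "a"], ["b", "b"]) == 0.0
assert split_impurity(["a", "b"], ["a", "b"]) == 1.0
```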
##### Priorities:
dtControl 2.0 also has the new functionality to assign _priorities_ to the
predicate generating algorithms. Priorities are rational numbers between 0 and
1. The impurity of every predicate is divided by the priority of the algorithm
that generated it. For example, a user can use axis-aligned splits with
priority 1 and a linear heuristic with priority $\nicefrac{{1}}{{2}}$. Then
the more complicated linear predicate is only chosen if it is at least twice
as good (in terms of impurity) as the easier-to-understand axis-aligned split.
A predicate with priority 0 is only considered after all predicates with non-
zero priority have failed to split the data. This allows the user to give just
a few predicates from domain knowledge, which are then strictly preferred to
the automatically generated ones, but which need not suffice to construct a
complete DT for the controller.
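The effect of priorities can be sketched in a few lines; the candidate predicates and their impurities below are made up for illustration.

```python
# (description, raw impurity, priority of the generating algorithm)
candidates = [
    ("axis-aligned: x <= 3", 0.9, 1.0),
    ("linear: 2*x + y <= 7", 0.5, 0.5),
]

def score(candidate):
    _, impurity, priority = candidate
    return impurity / priority  # lower is better; priority 0 is handled separately

best = min(candidates, key=score)
# The linear split scores 0.5 / 0.5 = 1.0, the axis-aligned one 0.9 / 1.0 = 0.9,
# so the simpler predicate wins although its raw impurity is worse.
```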
##### Interactive predicate selection:
dtControl 2.0 offers the user the possibility to manually select the predicate
in every split. This way, the user can prefer predicates that are explainable
over those that optimize the impurity.
The screenshot of the interactive interface in Appendix 0.F shows the
information that dtControl 2.0 provides. The user is given some statistics and
metadata, e.g. minimum, maximum and step size of the state-variables in the
current node, a few automatically generated predicates for reference and all
predicates generated from domain knowledge. The user can specify new
predicates and is immediately informed about their impurity. Upon selecting a
predicate, the split is performed and the user continues in the next node.
The user can also first construct a DT using some automatic algorithm and then
restart the construction from an arbitrary node using the interactive
predicate selection to handcraft an optimized representation, or at any point
decide that the rest of the DT should be constructed automatically.
## 6 New insights about determinization
In our context, _determinization_ denotes a procedure that, for some or all
states, picks a subset of the allowed actions. Formally, a determinization
function $\delta$ transforms a controller $C$ into a “more determinized”
$C^{\prime}$, such that for all states $s\in C$ we have $\emptyset\subsetneq
C^{\prime}(s)\subseteq C(s)$. This reduces the permissiveness, but often also
reduces the size. Note that, for safety controllers, this always preserves the
original guarantees of the controller. For other (non-safety) controllers, see
Remark 1.
dtControl 2.0 supports three different general approaches to determinizing a
controller: pre-processing, post-processing and safe early stopping. Pre-
processing commits to a single determinization before constructing the DT.
Post-processing prunes the DT after its construction, e.g. safe pruning in
[6]. The basic idea of safe early stopping is already described in Section 2:
if all states agree on at least one action, then instead of continuing to
split the controller, stop early and return a leaf node with that common
action. Alternatively, to preserve more permissiveness, one can return not
only a single common action, but all common actions; formally, return the
maximum set $B$ such that for all states $s$ in the node $B\subseteq C(s)$.
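The maximal common action set $B$ is a plain intersection; a minimal sketch with a hypothetical dict-based controller representation:

```python
def common_actions(controller, states):
    """Maximal set B with B ⊆ C(s) for all states s of the node."""
    return set.intersection(*(set(controller[s]) for s in states))

def try_stop_early(controller, states):
    """Return a permissive leaf if all states share an action, else None."""
    B = common_actions(controller, states)
    return ("leaf", B) if B else None

C = {"s1": {"a", "b", "c"}, "s2": {"a", "b", "d"}, "s3": {"x", "y"}}
assert try_stop_early(C, ["s1", "s2"]) == ("leaf", {"a", "b"})
assert try_stop_early(C, ["s1", "s3"]) is None
```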
The results of [4] show that both pre-processing and post-processing are
outperformed by an on-the-fly approach based on safe early stopping. This is
because pre-processing discards a lot of information that could have been
useful in the DT construction and post-processing can only affect the bottom-
most nodes of the resulting DT, but usually not those close to the root.
We now give a new view on safe early stopping approaches for determinizing a
controller that allows us to generalize the techniques of [4], reducing the
size of the resulting DTs even more.
###### Example 4
Consider the following controller: $C(s_{1})=\\{a,b,c\\}$,
$C(s_{2})=\\{a,b,d\\}$, $C(s_{3})=\\{x,y\\}$. All three states map to
different sets of actions, and thus an impurity measure like entropy penalizes
grouping $s_{1}$ and $s_{2}$ the same as grouping $s_{1}$ and $s_{3}$.
However, if determinization is allowed, grouping $s_{1}$ and $s_{2}$ need not
be penalized at all, as these states agree on some actions, namely $a$ and
$b$. Grouping $s_{1}$ and $s_{2}$ into the same child node thus allows the
algorithm to stop early at that point and return a leaf node with $\\{a,b\\}$,
in contrast to grouping $s_{1}$ and $s_{3}$. $\triangle$
Knowing that we want to determinize by safe early stopping affects the
predicate selection process. Intuitively, sets of states are more homogeneous
the more actions they share. We want to take this into account when
calculating the impurity of predicates. One way to do this would be to
calculate the impurity of all possible determinization functions and pick the
best one. This, however, is infeasible, hence we propose the heuristic of
_multi-label impurity measures_. These impurity measures do not only consider
the full set of allowed actions in their calculation, but instead they depend
on the individual actions occurring in the set. This allows the DT
construction to pick better predicates, namely those whose resulting children
are more likely to be determinizable. In Appendix 0.E we formally derive the
multi-label variants of entropy and Gini-index.
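Appendix 0.E gives the formal derivation; purely as an illustration (this exact formula is an assumption, not necessarily the variant implemented), one can treat every action as a binary label and sum per-action binary entropies:

```python
from math import log2

def binary_entropy(p):
    return sum(-q * log2(q) for q in (p, 1 - p) if q > 0)

def multilabel_entropy(action_sets):
    """Per-action binary entropies summed over a node's states; an action
    shared by all (or by no) states contributes 0, so agreeing states
    look purer."""
    n = len(action_sets)
    actions = set().union(*action_sets)
    return sum(binary_entropy(sum(a in s for s in action_sets) / n)
               for a in actions)

# Reusing Example 4: s1 and s2 share actions a and b, s1 and s3 share none.
assert multilabel_entropy([{"a", "b", "c"}, {"a", "b", "d"}]) == 2.0
assert multilabel_entropy([{"a", "b", "c"}, {"x", "y"}]) == 5.0
```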
To conclude this section, we point out the key difference between the new
approach of multi-label impurity measures and the previous idea that was
introduced in [4]. The approach from [4] does not evaluate the impurity of all
possible determinization functions, but rather picks a smart one – that of
maximum frequency (MaxFreq) – and evaluates according to that. MaxFreq
determinizes in the following way: for every state, it selects from the
allowed actions that action occurring most frequently throughout the whole
controller. This way, many states share common actions. This is already better
than pre-processing, as it does not determinize the controller a priori, but
rather considers a different determinization function at every node. However,
in every node we calculate the impurity for several different predicates, and
the optimal choice of determinization function depends on the predicate. Thus,
choosing a single determinization function for a whole node is still too
coarse, as it is fixed independent of the considered predicate. We illustrate
the arising problem in the following Example 5.
###### Example 5
Figure 5: A simple example of a dataset that is split suboptimally by the
MaxFreq approach from [4], but optimally by the new multi-label entropy
approach.
Figure 5 shows a simple controller with a two-dimensional state space. Every
point is labeled with its set of allowed actions.
As $c$ is the most frequent action, MaxFreq determinizes the states $(1,2)$,
$(1,3)$, $(2,2)$ and $(2,3)$ to action $c$. Hence the red split (predicate
$y<1.5$) is considered optimal, as it groups together all four states that map
to $c$. The blue split (predicate $x<1.5$) is considered suboptimal, as then
the data still looks very heterogeneous. So, using MaxFreq, we need two splits
for this controller: one to split off all the $c$’s and one to split the two
remaining states.
However, it is better to first choose a predicate and then determine a fitting
determinization function. When calculating the impurity of the blue split, we
can choose to determinize all states with $x=1$ to $\\{a\\}$ and all states
with $x=2$ to $\\{b\\}$. Thus, in both resulting sub-controllers the impurity
is 0 as all states agree on at least one action. This way, one split suffices
to get a complete DT. Multi-label impurity measures notice when labels are
shared between many (or all) states in a sub-controller, and thus they allow
to prefer the optimal blue split. $\triangle$
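The MaxFreq determinization from [4] is easy to state in code; in this sketch the grid of Example 5 is encoded as a dict, and the coordinates of the two singleton states are assumptions.

```python
from collections import Counter

def max_freq_determinize(controller):
    """For every state, keep its allowed action that is most frequent overall."""
    freq = Counter(a for actions in controller.values() for a in actions)
    return {s: max(sorted(actions), key=freq.__getitem__)
            for s, actions in controller.items()}

C = {(1, 1): {"a"}, (1, 2): {"a", "c"}, (1, 3): {"a", "c"},
     (2, 1): {"b"}, (2, 2): {"b", "c"}, (2, 3): {"b", "c"}}
d = max_freq_determinize(C)
# All four two-action states collapse to the globally most frequent action c,
# which is why the split y < 1.5 looks optimal under MaxFreq.
assert all(d[s] == "c" for s in [(1, 2), (1, 3), (2, 2), (2, 3)])
```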
## 7 Experiments
##### Experimental setup.
We compare three approaches: BDDs, the first version of dtControl from [4] and
dtControl 2.0. For BDDs (our implementation is based on the dd Python
library, https://github.com/tulip-control/dd), the variable ordering is
important, so we report the smallest of 20 BDDs that we constructed by
starting with a random initial variable ordering and reordering until
convergence. To determinize BDDs, we used the pre-processing approach, 10
times with the minimum norm and 10 times with MaxFreq. For the previous
version of dtControl, we picked the smaller of either a DT with only axis-
aligned predicates or a DT with linear predicates using the logistic
regression heuristic that was typically best in [4]. Determinization uses safe
early stopping with the MaxFreq approach. For dtControl 2.0, we use the multi-
label entropy based determinization and utilize the categorical predicates for
the case studies from probabilistic model checking. We ran all experiments on
a server with operating system Ubuntu 19.10, a 2.2GHz Intel(R) Xeon(R) CPU
E5-2630 v4 and 250 GB RAM.
##### Comparing determinization techniques on cyber-physical systems.
Table 1 shows the sizes of determinized BDDs and DTs on the permissive
controllers of the tools SCOTS and Uppaal Stratego that were already used in
[4]. We see that the new determinization approach is never worse than the
previous one; only two DTs are of equal size, as the result of the
previous method was already optimal there. With the exception of the case studies
helicopter and truck_trailer where BDDs are comparable or slightly better,
both approaches using DTs are orders of magnitude smaller than BDDs or an
explicit representation of the state-action mapping.
Table 1: Controller sizes of different determinized representations of the controllers from SCOTS and Uppaal Stratego. “States” is the number of states in the controller, “BDD” the number of nodes of the smallest BDD from 20 tries, dtControl 1.0 [4] the smallest DT the previous version of dtControl could generate and dtControl 2.0 the smallest DT the new version can construct. “TO” denotes a failure to produce a result in 3 hours. The smallest numbers in each row are highlighted.

Case study | States | BDD | dtControl 1.0 | dtControl 2.0
---|---|---|---|---
cartpole | 271 | 127 | 11 | 7
10rooms | 26,244 | 128 | 7 | 7
helicopter | 280,539 | 870 | 221 | 123
cruise-latest | 295,615 | 1,448 | 3 | 3
dcdc | 593,089 | 381 | 9 | 5
truck_trailer | 1,386,211 | 18,186 | 42,561 | 31,499
traffic_30m | 16,639,662 | TO | 127 | 97
##### Case studies from probabilistic model checking.
Table 2: Controller sizes of different representations of controllers from the quantitative verification benchmark set [24], i.e. from the tools STORM and PRISM. “States” is the number of states in the controller, “BDD” the number of nodes of the smallest BDD of 20 tries and dtControl 2.0 the smallest DT we could construct. The smallest numbers in each row are highlighted.

Case study | States | BDD | dtControl 2.0
---|---|---|---
triangle-tireworld.9 | 48 | 51 | 23
pacman.5 | 232 | 330 | 33
rectangle-tireworld.11 | 241 | 498 | 373
philosophers-mdp.3 | 344 | 295 | 181
firewire_abst.3.rounds | 610 | 61 | 25
rabin.3 | 704 | 303 | 27
ij.10 | 1,013 | 436 | 753
zeroconf.1000.4.true.correct_max | 1,068 | 386 | 63
blocksworld.5 | 1,124 | 3,985 | 855
cdrive.10 | 1,921 | 5,134 | 2,401
consensus.2.disagree | 2,064 | 138 | 67
beb.3-4.LineSeized | 4,173 | 913 | 59
csma.2-4.some_before | 7,472 | 1,059 | 103
eajs.2.100.5.ExpUtil | 12,627 | 1,315 | 153
elevators.a-11-9 | 14,742 | 6,750 | 9,883
exploding-blocksworld.5 | 76,741 | 34,447 | 1,777
echoring.MaxOffline1 | 104,892 | 43,165 | 1,543
wlan_dl.0.80.deadline | 189,641 | 5,738 | 2,563
pnueli-zuck.5 | 303,427 | 50,128 | 150,341
For Table 2, we used case studies from the quantitative verification benchmark
set [24], which includes models from the PRISM benchmark suite [33]. Note that
these case studies contain unordered enumeration-type state-variables for
which we utilize the new categorical predicates. To get the controllers, we
solved the case study with STORM and exported the resulting controller. This
export already eliminates unreachable states. The previous version of
dtControl was not able to handle these case studies, so we only compare
dtControl 2.0 to BDDs.
Table 2 shows that also for case studies from probabilistic model checking,
DTs are a good way of representing controllers. The DT is the smallest
representation on 13 out of 19 case studies, often reducing the size by an
order of magnitude compared to BDDs or the explicit representation. On 3 case
studies, BDDs are smallest, and on 2 case studies, both the DT and the BDD
fail to reduce the size compared to the explicit representation. This happens
if there are many different actions and thus states cannot be grouped
together. A worst-case example of this is a model where every state has a
different action; then, a DT would have as many leaf nodes as there are
states, and hence almost twice as many nodes in total (a binary DT with $n$
leaves has $2n-1$ nodes).
###### Remark 2
Note that the controllers exported by STORM are deterministic, so no
determinization approach can be utilized in the DT construction. We conjecture
that if a permissive strategy was exported, dtControl 2.0 would benefit from
the additional information and be able to reduce the controller size further
as for the cyber-physical systems.
## 8 Conclusion
We have presented a radically new version of the tool dtControl for
representing controllers by decision trees. The tool now features a graphical
user interface, allowing both experts and non-experts to conveniently interact
with the decision tree learning process as well as the resulting tree. There
is now a range of possibilities on how the user can provide additional
information. The algebraic predicates provide the means to capture the (often
non-linear) relationships from the domain knowledge. The categorical
predicates together with the interface to probabilistic model checkers allow
for efficient representation of strategies for Markov decision processes, too.
Finally, the more efficient determinization yields very small (possibly non-
performant) controllers, which are particularly useful for debugging the
model.
We see at least two major promising future directions. Firstly, synthesis of
predicates could be made more automatic using mathematical reasoning on the
domain knowledge, such as substituting expressions with a certain unit of
measurement into other domain equations in the places with the same unit of
measurement, e.g. to plug difference of two velocities into an equation for
velocity. Secondly, one could transform the controllers into possibly entirely
different controllers (not just less permissive) so that they still preserve
optimality (or yield $\varepsilon$-optimality) but are smaller or simpler.
Here, a closer interaction loop with the model checkers might lead to
efficient heuristics.
## References
* [1] Flask web development: developing web applications with python. https://pypi.org/project/Flask/, accessed: 14.10.2020
* [2] Adadi, A., Berrada, M.: Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018)
* [3] Ashok, P., Brázdil, T., Chatterjee, K., Křetínský, J., Lampert, C.H., Toman, V.: Strategy representation by decision trees with linear classifiers. In: QEST. Lecture Notes in Computer Science, vol. 11785, pp. 109–128. Springer (2019)
* [4] Ashok, P., Jackermeier, M., Jagtap, P., Křetínský, J., Weininger, M., Zamani, M.: dtcontrol: decision tree learning algorithms for controller representation. In: HSCC. pp. 17:1–17:7. ACM (2020)
* [5] Ashok, P., Jackermeier, M., Křetínský, J., Weinhuber, C., Weininger, M., Yadav, M.: dtControl 2.0: Explainable strategy representation via decision tree learning steered by experts (TACAS 21 artifact) (Jan 2021). https://doi.org/10.5281/zenodo.4437169
* [6] Ashok, P., Křetínský, J., Larsen, K.G., Coënt, A.L., Taankvist, J.H., Weininger, M.: SOS: safe, optimal and small strategies for hybrid Markov decision processes. In: QEST. Lecture Notes in Computer Science, vol. 11785, pp. 147–164. Springer (2019)
* [7] Bahar, R.I., Frohm, E.A., Gaona, C.M., Hachtel, G.D., Macii, E., Pardo, A., Somenzi, F.: Algebraic decision diagrams and their applications. Formal Methods Syst. Des. 10(2/3), 171–206 (1997)
* [8] Bostock, M., Ogievetsky, V., Heer, J.: D3 data-driven documents. IEEE transactions on visualization and computer graphics 17(12), 2301–2309 (2011)
* [9] Boutilier, C., Dearden, R., Goldszmidt, M.: Exploiting structure in policy construction. In: IJCAI. pp. 1104–1113. Morgan Kaufmann (1995)
* [10] Brázdil, T., Chatterjee, K., Chmelik, M., Fellner, A., Křetínský, J.: Counterexample explanation by learning small strategies in Markov decision processes. In: CAV (1). Lecture Notes in Computer Science, vol. 9206, pp. 158–177. Springer (2015)
* [11] Brázdil, T., Chatterjee, K., Křetínský, J., Toman, V.: Strategy representation by decision trees in reactive synthesis. In: TACAS (1). Lecture Notes in Computer Science, vol. 10805, pp. 385–407. Springer (2018)
* [12] Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Wadsworth (1984)
* [13] Bryant, R.E.: Graph-based algorithms for boolean function manipulation. IEEE Trans. Computers 35(8), 677–691 (1986)
* [14] Cimatti, A., Roveri, M., Traverso, P.: Automatic obdd-based generation of universal plans in non-deterministic domains. In: AAAI/IAAI. pp. 875–881. AAAI Press / The MIT Press (1998)
* [15] David, A., Jensen, P.G., Larsen, K.G., Mikucionis, M., Taankvist, J.H.: Uppaal stratego. In: TACAS. Lecture Notes in Computer Science, vol. 9035, pp. 206–211. Springer (2015)
* [16] Dehnert, C., Junges, S., Katoen, J., Volk, M.: A storm is coming: A modern probabilistic model checker. In: CAV (2). Lecture Notes in Computer Science, vol. 10427, pp. 592–600. Springer (2017)
* [17] Della Penna, G., Intrigila, B., Lauri, N., Magazzeni, D.: Fast and compact encoding of numerical controllers using obdds. In: Cetto, J.A., Ferrier, J.L., Filipe, J. (eds.) Informatics in Control, Automation and Robotics: Selcted Papers from the International Conference on Informatics in Control, Automation and Robotics 2008, pp. 75–87. Springer Berlin Heidelberg, Berlin, Heidelberg (2009)
* [18] Frehse, G., Guernic, C.L., Donzé, A., Cotton, S., Ray, R., Lebeltel, O., Ripado, R., Girard, A., Dang, T., Maler, O.: Spaceex: Scalable verification of hybrid systems. In: CAV. Lecture Notes in Computer Science, vol. 6806, pp. 379–395. Springer (2011)
* [19] Fujita, M., McGeer, P.C., Yang, J.C.: Multi-terminal binary decision diagrams: An efficient data structure for matrix representation. Formal Methods Syst. Des. 10(2/3), 149–169 (1997)
* [20] Garg, P., Neider, D., Madhusudan, P., Roth, D.: Learning invariants using decision trees and implication counterexamples. In: POPL. pp. 499–512. ACM (2016)
* [21] Girard, A.: Low-complexity quantized switching controllers using approximate bisimulation. CoRR abs/1209.4576 (2012)
* [22] Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016), http://www.deeplearningbook.org
* [23] Harris, C.R., Millman, K.J., van der Walt, S., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N.J., Kern, R., Picus, M., Hoyer, S., van Kerkwijk, M.H., Brett, M., Haldane, A., del Río, J.F., Wiebe, M., Peterson, P., Gérard-Marchant, P., Sheppard, K., Reddy, T., Weckesser, W., Abbasi, H., Gohlke, C., Oliphant, T.E.: Array programming with numpy. CoRR abs/2006.10256 (2020)
* [24] Hartmanns, A., Klauck, M., Parker, D., Quatmann, T., Ruijters, E.: The quantitative verification benchmark set. In: TACAS (1). Lecture Notes in Computer Science, vol. 11427, pp. 344–350. Springer (2019)
* [25] Heath, D.G., Kasif, S., Salzberg, S.: Induction of oblique decision trees. In: Proceedings of the 13th International Joint Conference on Artificial Intelligence. Chambéry, France, August 28 - September 3, 1993. pp. 1002–1007 (1993)
* [26] Hoey, J., St-Aubin, R., Hu, A.J., Boutilier, C.: SPUDD: stochastic planning using decision diagrams. In: UAI. pp. 279–288. Morgan Kaufmann (1999)
* [27] Hyafil, L., Rivest, R.L.: Constructing optimal binary decision trees is NP-complete. Inf. Process. Lett. 5(1), 15–17 (1976)
* [28] Ittner, A., Schlosser, M.: Non-linear decision trees - NDT. In: ICML. pp. 252–257. Morgan Kaufmann (1996)
* [29] Jackermeier, M.: dtControl: Decision Tree Learning for Explainable Controller Representation. Bachelor’s thesis, Technische Universität München (2020)
* [30] Jr., M.M., Davitian, A., Tabuada, P.: PESSOA: A tool for embedded controller synthesis. In: CAV. Lecture Notes in Computer Science, vol. 6174, pp. 566–569. Springer (2010)
* [31] Kiesbye, J.: Private Communication (2020)
* [32] Kwiatkowska, M.Z., Norman, G., Parker, D.: PRISM 4.0: Verification of probabilistic real-time systems. In: CAV. Lecture Notes in Computer Science, vol. 6806, pp. 585–591. Springer (2011)
* [33] Kwiatkowska, M.Z., Norman, G., Parker, D.: The PRISM benchmark suite. In: QEST. pp. 203–204. IEEE Computer Society (2012)
* [34] Larsen, K.G., Mikucionis, M., Taankvist, J.H.: Safe and optimal adaptive cruise control. In: Correct System Design. Lecture Notes in Computer Science, vol. 9360, pp. 260–277. Springer (2015)
* [35] Luttenberger, M., Meyer, P.J., Sickert, S.: Practical synthesis of reactive systems from LTL specifications via parity games. Acta Informatica 57(1-2), 3–36 (2020)
* [36] Wes McKinney: Data Structures for Statistical Computing in Python. In: Stéfan van der Walt, Jarrod Millman (eds.) Proceedings of the 9th Python in Science Conference. pp. 56 – 61 (2010). https://doi.org/10.25080/Majora-92bf1922-00a
* [37] Meurer, A., Smith, C.P., Paprocki, M., Certík, O., Kirpichev, S.B., Rocklin, M., Kumar, A., Ivanov, S., Moore, J.K., Singh, S., Rathnayake, T., Vig, S., Granger, B.E., Muller, R.P., Bonazzi, F., Gupta, H., Vats, S., Johansson, F., Pedregosa, F., Curry, M.J., Terrel, A.R., Roucka, S., Saboo, A., Fernando, I., Kulal, S., Cimrman, R., Scopatz, A.M.: Sympy: symbolic computing in python. PeerJ Comput. Sci. 3, e103 (2017)
* [38] Mitchell, T.M.: Machine learning. McGraw Hill series in computer science, McGraw-Hill (1997)
* [39] Murthy, S.K., Kasif, S., Salzberg, S., Beigel, R.: OC1: A randomized induction of oblique decision trees. In: AAAI. pp. 322–327. AAAI Press / The MIT Press (1993)
* [40] Neider, D., Markgraf, O.: Learning-based synthesis of safety controllers. In: FMCAD. pp. 120–128. IEEE (2019)
* [41] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E.: Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825–2830 (2011)
* [42] Pyeatt, L.D., Howe, A.E., et al.: Decision tree function approximation in reinforcement learning. In: Proceedings of the third international symposium on adaptive systems: evolutionary computation and probabilistic graphical models. vol. 2, pp. 70–77. Cuba (2001)
* [43] Quinlan, J.R.: Induction of decision trees. Mach. Learn. 1(1), 81–106 (1986)
* [44] Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann (1993)
* [45] Rungger, M., Zamani, M.: SCOTS: A tool for the synthesis of symbolic controllers. In: HSCC. pp. 99–104. ACM (2016)
* [46] Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27(4), 623–656 (1948)
* [47] St-Aubin, R., Hoey, J., Boutilier, C.: APRICODD: approximate policy construction using decision diagrams. In: NIPS. pp. 1089–1095. MIT Press (2000)
* [48] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S., Brett, M., Wilson, J., Millman, K.J., Mayorov, N., Nelson, A.R.J., Jones, E., Kern, R., Larson, E., Carey, C.J., Polat, I., Feng, Y., Moore, E.W., VanderPlas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E.A., Harris, C.R., Archibald, A.M., Ribeiro, A.H., Pedregosa, F., van Mulbregt, P., SciPy: Scipy 1.0-fundamental algorithms for scientific computing in python. CoRR abs/1907.10121 (2019)
* [49] Zapreev, I.S., Verdier, C., Jr., M.M.: Optimal symbolic controllers determinization for BDD storage. In: ADHS 2018. IFAC-PapersOnLine, vol. 51, pp. 1–6. Elsevier (2018). https://doi.org/10.1016/j.ifacol.2018.08.001
## Appendix 0.A Deriving algebraic predicates from domain knowledge for the
cruise-control model
The cruise-control model is governed by the kinematic equations, i.e. the new
distance between the cars is computed as follows:
$d_{\textit{new}}=d+(v_{f}-v_{o})\cdot t+0.5\cdot(a_{f}-a_{o})\cdot t^{2},$
where $t$ is a time span in seconds, $d$, $v_{f}$ and $v_{o}$ are the distance
between the cars, the velocity of the front car and the velocity of our car as before,
and $a_{f}$ and $a_{o}$ are the acceleration that the cars use during the
whole time span. The model restricts these accelerations to be from the set
$\\{-2,0,2\\}$, which corresponds to the actions deceleration, neutral or
acceleration.
We want the distance between the cars to always remain greater than some
threshold. The worst case behaviour of the front car is to always decelerate,
corresponding to emergency braking. We can thus use $a_{f}=-2$ and the only
unknown in the equations is our acceleration $a_{o}$. Now we can calculate the
worst-case distance between the cars, assuming we accelerate for one time step
(i.e. $a_{o}=2$) and then brake ($a_{o}=-2$) until both cars are at minimum
velocity. If that distance is greater than our threshold, we know that it is
safe to accelerate in the next time step. Since the actions are ordered, in
any state in which it is safe to accelerate, we can also stay neutral or
decelerate (excluding the corner case of minimum velocity). Similarly, we can
split off the states that allow staying neutral or decelerating, but not
accelerating. All remaining states only allow deceleration.
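The check described above can be sketched as a short simulation. This is a minimal illustration rather than the model itself: the one-second step, the zero threshold default and the simple clamping of velocities at the bounds -6 and 14 are assumptions.

```python
V_MIN, V_MAX, T = -6, 14, 1.0  # velocity bounds and step length (assumed)

def step(d, v_f, v_o, a_f, a_o):
    """One application of the kinematic update; velocities are clamped."""
    d_new = d + (v_f - v_o) * T + 0.5 * (a_f - a_o) * T * T
    clamp = lambda v: max(V_MIN, min(V_MAX, v))
    return d_new, clamp(v_f + a_f * T), clamp(v_o + a_o * T)

def safe_to_accelerate(d, v_f, v_o, threshold=0.0):
    """Front car brakes forever (a_f = -2); we accelerate once, then brake."""
    d, v_f, v_o = step(d, v_f, v_o, a_f=-2, a_o=2)
    while v_f > V_MIN or v_o > V_MIN:
        if d <= threshold:
            return False
        d, v_f, v_o = step(d, v_f, v_o, a_f=-2, a_o=-2)
    return d > threshold

assert safe_to_accelerate(100, 0, 0)      # plenty of distance
assert not safe_to_accelerate(1, -6, 14)  # tailgating an emergency-braking car
```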
See Figure 6 for the DT using algebraic predicates to represent the whole
permissive controller with 11 nodes. $t_{f}$ is the time until the front car
reaches minimum velocity. The root node checks whether acceleration is safe in
the next time step, the left child of the root checks whether staying neutral
is safe. The values -6 and 14 are the minimum and maximum velocities,
respectively, and thus have to be treated separately after these two most
important splits.
Figure 6: DT using algebraic predicates to represent the whole permissive
controller for the cruise-control model with 11 nodes.
## Appendix 0.B Domain knowledge and metadata formats
### 0.B.1 Metadata file
Below is an example of a metadata file that is used to inform dtControl 2.0
which variables are numeric and which are categorical. Giving the column names as
well improves the graphical output, as the variables are not indexed (e.g.
x_0), but rather named (e.g. Host_1_ev).
{
"x_column_types":
{
"numeric": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ],
"categorical": [ 10, 11, 12, 13, 14, 15, 16 ]
},
"x_column_names": [
"Host_1_ev",
"Host_1_na",
"Host_1_wt",
"Host_2_ev",
"Host_2_na",
"Host_2_wt",
"Host_ev",
"Host_na",
"Host_wt",
"N",
"_loc_Clock",
"_loc_Host",
"_loc_Host_1",
"_loc_Host_2",
"cr",
"gave_up",
"line_seized"
]
}
### 0.B.2 Domain knowledge format
All domain knowledge must take the form of predicates. Their structure can be
summarized as follows:
$\textit{term}\sim\textit{term};\textit{def}$
Here, _term_ is an arbitrary arithmetic term using any elementary function that can
be parsed by SymPy, any state-variable and coefficients that are defined in
_def_ ; the exact format of the coefficient definition _def_ is provided in our
documentation, but most importantly, it allows specifying finite sets of
values or a completely arbitrary coefficient. $\sim$ is a standard comparator
from the set {<=,>=,<,>,=}. An example of a predicate is
c_1 * x_1 - c_2 + 2 * x_2 <= c_3; c_1 in {1,2,3}; c_2 in {4,8}
Here, $c_{1}$ and $c_{2}$ are from a finite set and $c_{3}$ is completely
arbitrary.
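To make the format concrete, here is an illustrative parser for a predicate line of the kind shown above. It is a sketch only: the actual dtControl parser is built on SymPy and handles the full documented syntax, while this version recognizes just the comparator and `in {...}` coefficient sets.

```python
import re

# Illustrative sketch, not dtControl's real parser.
COMPARATORS = ["<=", ">=", "<", ">", "="]  # two-character operators first

def parse_predicate(line):
    parts = [p.strip() for p in line.split(";")]
    for op in COMPARATORS:
        if op in parts[0]:
            lhs, rhs = (s.strip() for s in parts[0].split(op, 1))
            break
    else:
        raise ValueError("no comparator found")
    coeffs = {}  # finite value sets; coefficients without a set stay arbitrary
    for d in parts[1:]:
        m = re.fullmatch(r"(\w+)\s+in\s+\{([^}]*)\}", d)
        if m:
            coeffs[m.group(1)] = {int(v) for v in m.group(2).split(",")}
    return lhs, op, rhs, coeffs

lhs, op, rhs, coeffs = parse_predicate(
    "c_1 * x_1 - c_2 + 2 * x_2 <= c_3; c_1 in {1,2,3}; c_2 in {4,8}")
```

Here `c_3` receives no coefficient definition and therefore stays completely arbitrary, matching the example in the text.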
## Appendix 0.C Algorithm for categorical predicates
Let $v$ be a categorical state-variable with possible values
$v_{1},\dots,v_{m}$. It is not feasible to simply try all different possible
attribute value groupings for every categorical state-variable $v$, since the
number of such groupings is exponential in the number of possible values of
$v$ [44, Ch. 7]. We instead use a greedy algorithm based on iterative merging
of value groups suggested by [44, Ch. 7], which proceeds as follows:
1. Initially, create a single group for every possible value, i.e. set the
initial grouping to $G=(\\{v_{1}\\},\dots,\\{v_{m}\\})$. Let $G_{i}$ denote
the $i$-th set in the grouping.
2. If only two value groups remain, return those as the optimal grouping.
3. For every pair of value groups $(G_{i},G_{j})$, compute the impurity of the
new value grouping in which $G_{i}$ and $G_{j}$ are merged.
4. If the impurity has not decreased in any of the new groupings, return the
original grouping. Otherwise, proceed to Step 2 with the best new grouping.
Our modification of tolerance that is discussed in Section 4.1 only modifies
the stopping condition in Step 4: it replaces “has not decreased in any of the
new groupings” with “has increased by more than the tolerance in all new
groupings”.
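The greedy merging procedure can be sketched as follows. This is an illustrative implementation under simplifying assumptions: the data is a list of (value, label) pairs and the impurity of a grouping is the weighted average label entropy over its branches; dtControl's internal representation differs.

```python
from collections import Counter
from itertools import combinations
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def grouping_impurity(groups, data):
    # weighted average label entropy over the branches induced by the grouping
    total = len(data)
    imp = 0.0
    for g in groups:
        branch = [y for v, y in data if v in g]
        if branch:
            imp += len(branch) / total * entropy(branch)
    return imp

def greedy_grouping(values, data, tolerance=0.0):
    groups = [frozenset({v}) for v in values]                  # Step 1
    while len(groups) > 2:                                     # Step 2
        current = grouping_impurity(groups, data)
        best_groups, best_imp = None, float("inf")
        for gi, gj in combinations(groups, 2):                 # Step 3
            merged = [g for g in groups if g not in (gi, gj)] + [gi | gj]
            imp = grouping_impurity(merged, data)
            if imp < best_imp:
                best_groups, best_imp = merged, imp
        # Step 4, with the tolerance modification of Section 4.1:
        if best_imp > current + tolerance:
            return groups
        groups = best_groups
    return groups

data = [("a", 0), ("b", 0), ("c", 1), ("d", 1)]
result = greedy_grouping(["a", "b", "c", "d"], data)
```

On this toy data the algorithm merges down to the two pure groups {a, b} and {c, d}, each carrying a single label.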
## Appendix 0.D Other impurities
In the following, we give a description of all impurity measures that are
supported by dtControl 2.0, and then evaluate them on some models from
probabilistic model checking as well as some cyber-physical systems. The text
is almost verbatim from the Bachelor’s thesis [29].
### 0.D.1 Description
#### 0.D.1.1 Entropy
A particularly well-known impurity measure is based on the concept of entropy
from information theory, as introduced in the seminal work of Shannon [46].
Information theory is concerned with quantifying the amount of information the
occurrence of a random event yields. If an event $x$ occurs with probability
$p_{x}$, its information content is defined to be
$I(x)=-\log_{2}(p_{x})\;\text{bits}.$
This definition has several desirable properties [22, Ch. 3]: first, we see
that the information content of an event is inversely proportional to its
probability. For instance, an event with probability $1$ always occurs, and
thus does not convey any information. Second, if two events are independent,
the information content of both events occurring is the sum of the individual
information contents, since
$I(x,y)=-\log_{2}(p_{x,y})=-\log_{2}(p_{x}p_{y})=-\log_{2}(p_{x})-\log_{2}(p_{y}).$
In the context of impurity measures, the events that we are interested in are
that a randomly picked data point from a sub-controller $C\subseteq S\times
2^{A}$ allows the action set $B\in 2^{A}$. Let $n_{B}$ be the frequency of the
set $B$ in $C$, i.e. $n_{B}=|\\{s\in S:C(s)=B\\}|$. Then, such an event occurs
with probability
$\frac{n_{B}}{|C|}$
and has an information content of
$-\log_{2}\left(\frac{n_{B}}{|C|}\right)\;\text{bits}.$
The crucial insight that allows for the development of an impurity measure is
the following: if the expected amount of information content of these events
is low, we already have a lot of knowledge about the labels in the sub-
controller. Thus, classifying this dataset probably requires less effort than
classifying a dataset where the expected amount of information content from
these events is high. The expected amount of information content is also known
as the entropy of the controller $C$ and is defined as
$H(C)=-\sum_{B\in
2^{A}}\frac{n_{B}}{|C|}\log_{2}\left(\frac{n_{B}}{|C|}\right).$
To illustrate two extreme cases, consider a dataset where every data point has
a different label. This is extremely hard to classify since every data point
has to be separated from all other points, and, correspondingly, the entropy
of such a dataset is maximal. In contrast, the entropy of a pure dataset is
always 0.
We now have a way to measure the difficulty of classifying a sub-controller.
The entropy impurity measure then simply averages the difficulty of
classifying the partitions $C_{1},\ldots,C_{m}$ created by a predicate $\rho$.
It is thus defined as follows:
$\mathrm{ent}(\rho,C)=\sum_{i\in[m]}\frac{|C_{i}|}{|C|}H(C_{i}).$
Instead of entropy, a similar measure called information gain is sometimes
used. The only real difference between the two is that information gain is a
measure of goodness, i.e. we want to maximize the information gain in DT
learning. This is however equivalent to minimizing entropy [43]. Entropy and
information gain are some of the most common ways to determine the quality of
predicates and have received a great deal of attention in the DT literature
[12, 43, 44].
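The two formulas above can be sketched directly, assuming a sub-controller is represented as a dict mapping each state to its (frozen) set of allowed actions; the weighting over partitions follows the definition of $\mathrm{ent}(\rho,C)$.

```python
from collections import Counter
from math import log2

# Sketch: H is the entropy of the label distribution of a sub-controller,
# ent the weighted average over the partitions created by a predicate.
def H(controller):
    n = len(controller)
    return -sum(c / n * log2(c / n)
                for c in Counter(controller.values()).values())

def ent(partitions):
    total = sum(len(C) for C in partitions)
    return sum(len(C) / total * H(C) for C in partitions)

pure = {s: frozenset({"safe"}) for s in range(4)}    # one label: entropy 0
mixed = {0: frozenset({"a"}), 1: frozenset({"b"})}   # two labels: 1 bit
```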
#### 0.D.1.2 Entropy ratio
An issue with entropy that is sometimes encountered with categorical features
is that it favors multi-comparison predicates with a large number of branches
[43]. Quinlan [43] thus introduces a normalization of the information gain
criterion called the gain ratio. Since this is again a goodness measure, we
modified it into an impurity measure named the entropy ratio.
We first introduce the quantity of the intrinsic information content of a
split
$\mathrm{split\;info}(\rho,C)=-\sum_{i\in[m]}\frac{|C_{i}|}{|C|}\log_{2}\left(\frac{|C_{i}|}{|C|}\right).$
This measures the expected information content of the event that a randomly
selected data point in $C$ will be assigned to the $i^{th}$ branch, which
corresponds to the information generated by the partitioning itself.
Naturally, the more branches the predicate $\rho$ creates, the higher this
information content will be. In contrast, the entropy measures the amount of
information relevant to classification from the same partitioning [44, Ch. 2].
The entropy ratio then simply normalizes the entropy with the intrinsic
information content of a split, i.e.
$\mathrm{ent\;ratio}(\rho,C)=\frac{\mathrm{ent}(\rho,C)}{\mathrm{split\;info}(\rho,C)}.$
It has to be noted that one of the primary reasons for using the entropy ratio
instead of just the entropy is to prevent overfitting [44, Ch. 2]. In our
setting, overfitting is desirable and there is hence less justification for
the entropy ratio.
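A minimal sketch of the split info and the resulting ratio, assuming the branch sizes $|C_i|$ and the branch entropies $H(C_i)$ have already been computed:

```python
from math import log2

def split_info(sizes):
    # intrinsic information content of the split: entropy of branch sizes
    total = sum(sizes)
    return -sum(s / total * log2(s / total) for s in sizes if s)

def ent_ratio(branch_entropies, sizes):
    total = sum(sizes)
    ent = sum(s / total * h for s, h in zip(sizes, branch_entropies))
    return ent / split_info(sizes)
```

For a balanced binary split the split info is exactly 1 bit, so the ratio coincides with the entropy; predicates with many branches get a larger denominator and are penalized.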
#### 0.D.1.3 Gini index
Another common impurity measure used in e.g. the CART system is the Gini index
[12, Ch. 4]. It measures the probability of a data point being misclassified
if we were to assign labels randomly based on the label distribution in the
sub-controller. This probability is given by
$G(C)=\sum_{\begin{subarray}{c}y,z\in 2^{A}\\\ y\neq
z\end{subarray}}\frac{n_{y}}{|C|}\frac{n_{z}}{|C|}$
and can equally be written as
$G(C)=\left(\sum_{y\in 2^{A}}\frac{n_{y}}{|C|}\right)^{2}-\sum_{y\in
2^{A}}\left(\frac{n_{y}}{|C|}\right)^{2}=1-\sum_{y\in
2^{A}}\left(\frac{n_{y}}{|C|}\right)^{2}.$
Similarly to entropy, the Gini index is then a weighted average of these
values:
$\mathrm{Gini}(\rho,C)=\sum_{i\in[m]}\frac{|C_{i}|}{|C|}G(C_{i}).$
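Both formulas translate directly into code; this sketch works on label frequency counts $n_y$ per sub-controller rather than the controllers themselves.

```python
# Sketch of the Gini index on label counts n_y of one sub-controller.
def G(counts):
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

def gini(partition_counts):
    # weighted average of G over the partitions created by a predicate
    total = sum(sum(c) for c in partition_counts)
    return sum(sum(c) / total * G(c) for c in partition_counts)
```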
#### 0.D.1.4 Twoing rule
The CART system also defines another measure known as the twoing rule, which
is a goodness measure only defined for binary predicates. It is based on
transforming the problem of computing the impurity of a multi-class dataset
into the task of computing the impurity of a two-class dataset. This
transformed problem is then solved with the Gini index defined above.
Let $l$ ($r$) be the number of examples on the left (right) side of the split
and $n_{l,y}$ ($n_{r,y}$) be the number of examples with label $y$ on the left
(right) side of the split. Then, the twoing rule is defined as follows [12,
Ch. 4]:
$\mathrm{twoing}(\rho,C)=\frac{l}{|C|}\frac{r}{|C|}\left(\sum_{y\in
2^{A}}\left\lvert\frac{n_{l,y}}{l}-\frac{n_{r,y}}{r}\right\rvert\right)^{2}.$
Following [39], the impurity measure we minimize is then simply the reciprocal
of the twoing rule.
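A sketch of the twoing rule on per-label counts for the two sides of a binary split; following the text, the minimized impurity is its reciprocal (undefined when the two sides have identical label distributions, where the twoing value is 0).

```python
# Sketch: left/right per-label example counts n_{l,y} and n_{r,y} as dicts.
def twoing(left_counts, right_counts):
    l, r = sum(left_counts.values()), sum(right_counts.values())
    n = l + r
    labels = set(left_counts) | set(right_counts)
    s = sum(abs(left_counts.get(y, 0) / l - right_counts.get(y, 0) / r)
            for y in labels)
    return (l / n) * (r / n) * s ** 2

def twoing_impurity(left_counts, right_counts):
    return 1.0 / twoing(left_counts, right_counts)  # reciprocal, per [39]
```

A split that separates the labels perfectly and evenly attains the maximal twoing value of 1 and hence the minimal impurity.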
#### 0.D.1.5 Sum minority
Probably the simplest way to calculate impurity is to count the number of
misclassified instances if we were to assign the most frequent label in a
partition to all of its data points. This impurity measure is called sum
minority and is due to Heath et al. [25].
Formally, for every partition $C_{i}$, let $|C_{i}|$ be the number of examples
in that partition and $\gamma(C_{i})$ be the label occurring most frequently
in $C_{i}$. Then, define the minority $\mu_{i}$ as
$\mu_{i}=|\\{(x,y)\in C_{i}:y\neq\gamma(C_{i})\\}|.$
The sum minority impurity measure is then simply the sum of these minorities:
$\textrm{sum minority}(\rho,C)=\sum_{i\in[m]}\mu_{i}.$
#### 0.D.1.6 Max minority
A very similar impurity measure is max minority [25], defined as
$\textrm{max minority}(\rho,C)=\max_{i\in[m]}\mu_{i}.$
It counts the number of misclassified instances in the “worst” partition with
the maximum number of misclassifications.
Max minority has the theoretical advantage that it produces trees of depth at
most $\log_{2}(|C|)$ [39]. However, note that it is the depth that is
logarithmic in this expression – the number of nodes is linear in $|C|$. Thus,
this theoretical insight is not very useful in practice.
#### 0.D.1.7 Area under the receiver-operator curve
The final impurity measure we discuss has been developed specifically for DTs
with linear classifiers in the context of controller representation [3]. The
underlying idea is simple: we want to exploit the knowledge that the
controller will be split with a hyperplane, obtained from a linear classifier
or a different heuristic. The impurity measure tries to estimate how well
separable the controller is by a hyperplane after having been split with the
predicate $\rho$.
For now, let us consider the case of only two actions in a controller. In
order to estimate how well separable a sub-controller is by a hyperplane, we
can again train a linear classifier on this sub-controller and report a metric
that measures some quality of this classifier. The simplest such quality would
probably be the accuracy, i.e. the fraction of data points classified
correctly. However, this has the disadvantage that even trivial classifiers
that assign the same label to every data point can achieve high accuracy in
the case of an imbalanced label distribution. Ashok et al. [3] instead suggest
the usage of the area under the receiver-operator curve (AUC), a well-known
metric in statistics and machine learning that does not suffer from this
drawback.
We thus proceed as follows to estimate the quality of a predicate $\rho$:
1. 1.
Train a linear classifier for every sub-dataset $C_{i}$.
2. 2.
Return the sum of the obtained AUC scores of the classifiers.
Note that this is again a goodness measure, which we can transform into an
impurity measure by considering the reciprocal.
Ashok et al. [3] use a different data representation. Instead of mapping
states to sets of actions, they map state-action pairs to $\\{0,1\\}$,
indicating whether an action is safe in a state or not. Hence they always deal
with a binary classification problem. In order to utilize their idea with our
different data representation, we again make use of the technique based on
one-versus-the-rest classification, cf. [4, Sec. 4.1.3]. We train one linear
classifier for every possible label, which tries to separate this label from
the rest, and report a weighted average of the obtained AUC scores.
Note that this impurity measure has the practical disadvantage that we need to
train several linear classifiers for every considered predicate, which can be
very inefficient.
### 0.D.2 Evaluation
#### 0.D.2.1 Models from probabilistic model checking
Table 3: Number of nodes obtained with different impurity measures and attribute value grouping on models from probabilistic model checking.
Case study | Entropy | Entropy ratio | Gini index | Sum minority | Max minority
---|---|---|---|---|---
csma2_4 | 48 | 89 | 43 | 579 | 1,412
firewire_abst | 9 | 18 | 12 | 110 | 36
firewire_impl | 77 | 124 | 77 | 442 | 126
leader4 | 154 | 207 | 150 | 547 | 1,767
mer30 | 158 | 213 | 165 | 7,557 | 6,797
wlan2 | 222 | 416 | 220 | 495 | 3,723
zeroconf | 367 | 456 | 374 | 7,243 | 14,624
Table 3 gives the results of attribute value grouping in combination with
different impurity measures on some case studies from probabilistic model
checking. It clearly shows that probabilistic impurity measures such as
entropy and the Gini index perform far better than non-probabilistic impurity
measures like sum- and max-minority. Furthermore, we see that the entropy
ratio is strictly worse than the standard entropy. As discussed in Section
0.D.1.2, this is expected, since the main reason for choosing the entropy
ratio over just the entropy is normally to prevent overfitting. On the other
hand, the Gini index and entropy perform similarly well and are both viable
choices. Note that the twoing rule is not applicable in this scenario, as it
is limited to binary predicates.
We also experimented with our modified version of the AUC impurity measure,
introduced in Section 0.D.1.7. As previously noted, one of its drawbacks is
that it is very expensive to compute. Indeed, it took more than 13 minutes to
build a tree with AUC for the small firewire_abst controller – in comparison
to roughly $0.2$ seconds with the standard impurity measures. On the one hand,
this is due to the fact that AUC requires the training of several linear
classifiers for every considered predicate, which simply is computationally
demanding. On the other hand, we notice that it also produces unnecessarily
large trees, which in turn again increases the computational cost: we obtain
716 nodes in the tree for firewire_abst.
Why does AUC not work in our scenario? There are two factors that come into
play: first, the impurity measure tries to estimate the linear separability of
the sub-datasets resulting from a predicate. However, since many features in
the examples are categorical and we only use oblique splits with numeric
features because of explainability reasons, the measure itself is not that
meaningful in our context. Second, we conjecture that our approach based on
one-versus-the-rest classification, which we adopted due to our data
representation, just is not well-suited as an impurity measure.
#### 0.D.2.2 Cyber-physical systems
For cyber-physical systems (CPS), our experiments suggest that entropy is one
of the strongest impurity measures overall for controllers obtained from CPS
synthesis. To illustrate, we list the number of nodes when learning
DTs with axis-aligned predicates, no determinization, and varying impurity
measures in Table 4.
The table clearly shows that sum- and max-minority perform far worse than the
probabilistic impurity measures on many datasets and are overall not
competitive. Entropy, gini index, and twoing rule usually perform similarly
well, although entropy is slightly better in a number of cases. As expected,
the entropy ratio is overall somewhat worse. We again encountered the same
performance issues with AUC as before, and the numbers we could compute were
not promising, which is why we did not include this impurity measure in the
table.
Table 4: Effects of different impurity measures with axis-aligned predicates on decision tree sizes for synthesis of CPS. “$\infty$” indicates failure to produce a result within three hours.
Case study | Entropy | Entropy ratio | Gini index | Sum minority | Max minority | Twoing rule
---|---|---|---|---|---|---
Single-output | | | | | |
cartpole | 253 | 257 | 255 | 259 | 277 | 253
tworooms | 27 | 37 | 27 | 39 | 2,627 | 27
helicopter | 6,347 | 7,363 | 7,177 | 31,835 | 125,727 | 6,429
cruise | 987 | 1,161 | 1,065 | 11,131 | 89,503 | 1,043
dcdc | 271 | 391 | 275 | $\infty$ | 2,429 | 277
Multi-output | | | | | |
tenrooms | 17,297 | 15,951 | 17,297 | 18,565 | 26,751 | 17,415
truck_trailer | 338,389 | 348,959 | 312,741 | 442,013 | 561,083 | 316,457
traffic | 12,573 | $\infty$ | 16,627 | 276,067 | $\infty$ | 15,319
vehicle | 13,237 | 15,677 | 13,135 | 32,271 | 39,129 | 13,109
aircraft | 913,857 | 932,625 | 923,709 | $\infty$ | 2,242,773 | 922,727
## Appendix 0.E More information on the better determinization
Here we give more information on the new insights about determinization. First
we give the proof that it suffices to consider complete determinization
functions. Then we give the derivation of multi-label entropy and afterwards
multi-label Gini index. Lastly, we give another derivation of multi-label
entropy and Gini-index, to provide more intuition and insight to their
workings. Parts of this appendix are almost verbatim from the Bachelor’s
thesis [29].
### 0.E.1 Considering complete determinization functions suffices
We show that we can reduce the search space by only considering complete
determinization functions, i.e. determinization functions such that for all
states $s$ we have $\lvert\delta(C)(s)\rvert=1$. In particular, we prove that
there always exists a complete determinization with minimal impurity in
Proposition 1 and Theorem 0.E.1. We limit our discussion to the entropy and
the Gini index, since these impurity measures are the most widely used and
performed the best in our experiments. We shorten the terminology and in the
following refer to determinization functions as determinizations.
###### Proposition 1
Given an impurity measure $\phi\in\\{\mathrm{ent},\mathrm{Gini}\\}$, a
predicate $\rho$, and a controller $D$, for every incomplete determinization
$\delta$ of $D$ there exists a complete determinization $\bar{\delta}$ of $D$
with $\phi(\rho,\bar{\delta},D)\leq\phi(\rho,\delta,D)$.
###### Proof
We outline a procedure that turns an incomplete determinization into a
complete determinization with at most the same impurity. Let $\delta$ be an
incomplete determinization of $D$ with co-domain $c\mathbb{D}(\delta(D))$.
Furthermore, let $n_{i,y}$ denote the number of data points that are assigned
the (possibly non-deterministic) label $y\in c\mathbb{D}(\delta(D))$ under
$\delta$ in the $i^{th}$ sub-dataset $D_{i}$ created by $\rho$.
Let us start with $\phi=\mathrm{ent}$. We have that
$\mathrm{ent}(\rho,\delta,D)=\sum_{i\in[m]}\frac{|D_{i}|}{|D|}H(\delta,D_{i}),$
where
$\displaystyle H(\delta,D_{i})=-\sum_{y\in
c\mathbb{D}(\delta(D))}\frac{n_{i,y}}{|D_{i}|}\log_{2}\left(\frac{n_{i,y}}{|D_{i}|}\right).$
Since $\delta$ is incomplete, there is a label $q\in c\mathbb{D}(\delta(D))$
with $|q|>1$. Consider the determinization $\bar{\delta}$ that is equivalent
to $\delta$, except that it assigns the single-label $r\in q$ to all data
points $x\in X$ where $\delta(x)=q$. We define $\bar{n}_{i,y}$ for
$\bar{\delta}$ equivalently to $n_{i,y}$ for $\delta$. We fix a specific sub-
dataset $D_{i}$ and drop the corresponding index to simplify notation. Then,
$\displaystyle H\left(\bar{\delta}\right)$ $\displaystyle=-\sum_{y\in
c\mathbb{D}(\bar{\delta}(D))}\frac{\bar{n}_{y}}{|D|}\log_{2}\left(\frac{\bar{n}_{y}}{|D|}\right)$
$\displaystyle=-\sum_{y\in
c\mathbb{D}(\delta(D))\setminus\\{q,\,r\\}}\frac{n_{y}}{|D|}\log_{2}\left(\frac{n_{y}}{|D|}\right)-\frac{\bar{n}_{r}}{|D|}\log_{2}\left(\frac{\bar{n}_{r}}{|D|}\right).$
It follows that
$\displaystyle H(\delta)-H\left(\bar{\delta}\right)$
$\displaystyle=-\frac{n_{q}}{|D|}\log_{2}\left(\frac{n_{q}}{|D|}\right)-\frac{n_{r}}{|D|}\log_{2}\left(\frac{n_{r}}{|D|}\right)+\frac{\bar{n}_{r}}{|D|}\log_{2}\left(\frac{\bar{n}_{r}}{|D|}\right).$
With
$\bar{n}_{r}=n_{r}+n_{q},$
we obtain:
$\displaystyle H(\delta)-H\left(\bar{\delta}\right)$
$\displaystyle=-\frac{n_{q}}{|D|}\log_{2}\left(\frac{n_{q}}{|D|}\right)-\frac{n_{r}}{|D|}\log_{2}\left(\frac{n_{r}}{|D|}\right)+\frac{n_{r}+n_{q}}{|D|}\log_{2}\left(\frac{n_{r}+n_{q}}{|D|}\right).$
First, consider the special case of $n_{r}=0$. As is usual in information
theory, we evaluate $0\log_{2}(0)$ as $\lim_{x\rightarrow 0}x\log_{2}(x)=0$,
and thus arrive at
$\displaystyle H(\delta)-H\left(\bar{\delta}\right)=0.$
If $n_{r}>0$, we have
$\displaystyle H(\delta)-H\left(\bar{\delta}\right)$
$\displaystyle=\frac{n_{q}}{|D|}\left(\log_{2}\left(\frac{n_{r}+n_{q}}{|D|}\right)-\log_{2}\left(\frac{n_{q}}{|D|}\right)\right)+\frac{n_{r}}{|D|}\left(\log_{2}\left(\frac{n_{r}+n_{q}}{|D|}\right)-\log_{2}\left(\frac{n_{r}}{|D|}\right)\right)$
$\displaystyle>0,$
where the last step follows from the fact that $\log_{2}$ is strictly
increasing.
Thus, for every sub-dataset $D_{i}$, we have
$H\left(\bar{\delta},D_{i}\right)\leq H(\delta,D_{i})$, and consequently
$\mathrm{ent}(\rho,\bar{\delta},D)\leq\mathrm{ent}(\rho,\delta,D).$
Note the following key point: $\bar{\delta}$ is “more deterministic” than
$\delta$ as there are fewer states to which it assigns a label $y$ with
$|y|>1$. If we thus continue this process of producing “more deterministic”
determinizations (now starting with $\bar{\delta}$), we will eventually reach
a complete determinization with an entropy less than or equal to the entropy
of $\delta$.
A similar analysis can be conducted for the case of $\phi=\mathrm{Gini}$. With
the same definitions as above, we obtain
$G\left(\bar{\delta}\right)=1-\sum_{y\in
c\mathbb{D}(\delta(D))\setminus\\{q,\,r\\}}\left(\frac{n_{y}}{|D|}\right)^{2}-\left(\frac{\bar{n}_{r}}{|D|}\right)^{2}.$
Then,
$\displaystyle G(\delta)-G\left(\bar{\delta}\right)$
$\displaystyle=-\left(\frac{n_{q}}{|D|}\right)^{2}-\left(\frac{n_{r}}{|D|}\right)^{2}+\left(\frac{n_{r}+n_{q}}{|D|}\right)^{2}$
$\displaystyle=-\frac{n_{q}^{2}}{{|D|}^{2}}-\frac{n_{r}^{2}}{{|D|}^{2}}+\frac{n_{r}^{2}+2n_{r}n_{q}+n_{q}^{2}}{{|D|}^{2}}$
$\displaystyle=\frac{2n_{r}n_{q}}{{|D|}^{2}}$ $\displaystyle\geq 0.$
Therefore, similar as above, we have
$\mathrm{Gini}(\rho,\bar{\delta},D)\leq\mathrm{Gini}(\rho,\delta,D)$
and can continue this process to eventually reach a complete determinization
with Gini index less than or equal to the Gini index of $\delta$.
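The merging step in the proof above can be checked numerically: folding the frequency of a non-deterministic label $q$ into a single-label $r$ never increases the entropy or the Gini index of the label distribution. The counts below are an arbitrary example.

```python
from math import log2

def entropy(counts):
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values() if c)

def gini(counts):
    n = sum(counts.values())
    return 1 - sum((c / n) ** 2 for c in counts.values())

# Label frequencies before the merge: q = {a, b} is non-deterministic,
# r = {a} is the single-label it gets folded into.
before = {("a", "b"): 3, ("a",): 2, ("b",): 5}
after = {("a",): 2 + 3, ("b",): 5}  # \bar{n}_r = n_r + n_q
```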
###### Theorem 0.E.1
Let $\Delta^{*}$ be the set of determinizations that achieve the minimal
impurity with respect to an impurity measure
$\phi\in\\{\mathrm{ent},\mathrm{Gini}\\}$, a predicate $\rho$, and a dataset
$D$. Then, there exists a $\delta^{*}\in\Delta^{*}$ that is complete.
###### Proof
We prove the claim by contradiction. Assume every determinization
$\delta^{*}\in\Delta^{*}$ is incomplete. Pick any
$\delta^{*}\in\Delta^{*}$; by Proposition 1, there exists a complete
determinization $\bar{\delta}^{*}$ with
$\phi(\rho,\bar{\delta}^{*},D)\leq\phi(\rho,\delta^{*},D)$. Hence
$\bar{\delta}^{*}$ also achieves the minimal impurity and must be an element
of $\Delta^{*}$. Since $\bar{\delta}^{*}$ is complete, this contradicts the
assumption that every element of $\Delta^{*}$ is incomplete.
Theorem 0.E.1 shows that it suffices to consider all complete determinizations
to determine the best predicate. However, the number of complete
determinizations is still far too large to simply enumerate them all.
### 0.E.2 Multi-label entropy derivation
We start our derivation with the following general formulation for multi-label
entropy of a controller $C$ and a determinization $\delta$. For a controller
$C$, $c\mathbb{D}(C)$ denotes the codomain and $\lvert C\rvert$ denotes the
size of the domain, and $p_{y}$ is the empirical frequency of $y$ in
$\delta(C)$, formally
$p_{y}=\lvert\\{s\in S\mid\delta(C)(s)=y\\}\rvert$:
$H(C,\delta)=-\sum_{y\in c\mathbb{D}(\delta(C))}\frac{p_{y}}{\lvert
C\rvert}\log_{2}(\frac{p_{y}}{\lvert C\rvert})$
If $\delta$ is the identity-function, the considered co-domain is $2^{A}$, and
we have the classic entropy. For any other determinization function, we get
different $p_{y}$ and thus different impurities. As mentioned earlier, trying
every possible determinization function $\delta$ and calculating the precise
$p_{y}$ is optimal, but infeasible. We showed in Theorem 0.E.1 that for
calculating impurities it suffices to consider determinization functions that
map to singleton sets, i.e. for all states $s$ we have
$\lvert\delta(C)(s)\rvert=1$. Then $y$ is a singleton set $\\{a\\}$ for some
action $a\in A$, and $p_{y}$ is the frequency of $\\{a\\}$ in the determinized
controller. Thus, we can over-approximate $p_{y}$ by counting the occurrences
of every action $a$ in the controller, which is feasible and was already done
for MaxFreq. The difference is that MaxFreq used this to explicitly calculate
one fixed determinization function $\delta$ that was used for all predicates,
while the new approach uses it to over-approximate the multi-label impurity,
implicitly using a different determinization function for every considered
predicate. (Note that we formulate the multi-label impurity for a single
sub-controller. As for all previous impurity measures, the impurity of a
predicate then is the weighted average of the impurities of the
sub-controllers. Thus, since the determinization comes into play after the
split in every sub-controller, a different determinization is considered for
every predicate.)
To complete the formulation of our heuristic, we add that in the corner case
that all states agree on an action, the entropy should be 0, and thus get the
following impurity measure, where for an action $a\in A$ we use $p_{a}$ to
denote the frequency of that action in the un-determinized controller, i.e.
$p_{a}=\lvert\\{s\in S\mid a\in C(s)\\}\rvert$:
$\textsc{MLE}(C)=\begin{cases}0&\mbox{if }\exists a\in A,\forall s\in S:a\in
C(s)\\\ -\sum_{a\in A}\frac{p_{a}}{\lvert C\rvert}\log_{2}(\frac{p_{a}}{\lvert
C\rvert})&\mbox{otherwise}\end{cases}$
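The MLE formula translates into a short sketch, again assuming a sub-controller is a dict mapping each state to its set of allowed actions (un-determinized):

```python
from math import log2

# Sketch of the multi-label entropy MLE.
def mle(controller):
    actions = set().union(*controller.values())
    # corner case: some action is allowed in every state
    if any(all(a in B for B in controller.values()) for a in actions):
        return 0.0
    n = len(controller)
    freq = {a: sum(1 for B in controller.values() if a in B) for a in actions}
    return -sum(p / n * log2(p / n) for p in freq.values())
```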
### 0.E.3 Multi-label Gini index derivation
We proceed with the multi-label formulation of the Gini index, which can be
derived similar to multi-label entropy. Applying Theorem 0.E.1, we obtain that
$G(\delta_{\rho}^{*},C)=1-\sum_{l\in L}\left(\frac{n_{i,l}}{|C|}\right)^{2}.$
(1)
We can again estimate the $n_{i,l}$ with the approximation of maximum
frequency as in the entropy derivation. However, we need to be careful since
we over-approximate and the value of the sum in Eq. 1 can therefore be greater
than 1. In order to keep the impurity non-negative, we thus need to subtract
the sum not from 1, but from its maximal value $|L|$. Finally, again treating
the corner case of all labels agreeing, we get
$\widehat{G}(C)=\begin{dcases}0,&\text{if }\exists l\in L:f_{i}(l)=|C|\\\
|L|-\sum_{l\in
L}\left(\frac{f_{i}(l)}{|C|}\right)^{2},&\text{otherwise}.\end{dcases}$
The complete multi-label Gini index is then again the weighted average of the
estimated values $\widehat{G}(C)$.
### 0.E.4 An alternative point of view
Having derived multi-label impurity measures formally, we now want to point
out an alternative, more intuitive point of view that may shed some light on
their internal workings. Let us fix a specific sub-controller $D_{i}$ with
frequency function $f_{i}$. Plotting $f_{i}/|D_{i}|$, i.e. the fraction of
data points that can be assigned a specific single-label, yields a bar chart
that may look like the one depicted in Fig. 7 (left).
Figure 7: Bar chart of the frequency function. The bar chart (left) for a
controller with actions $1,\ldots,5$ and the error bars indicating the
impurity (right).
If we now had to construct an impurity measure solely from this bar chart, we
might make the following observations:
1. The impurity should be 0 if all bars have a value of 1, since then every
label can be assigned to every data point.
2. The impurity should be high if there are many bars with low values.
3. Generally, if there are fewer bars the impurity should be lower, because if
there are fewer labels we will probably need fewer splits of the dataset.
This already rules out two simple ideas that come to mind: we cannot use the
reciprocal of the sum of the bars, because this would violate point 2 if there
is a great number of different labels. We also cannot use the reciprocal of
the mean of the bars, since this would not take observation 3 into account.
Instead, we could come up with the following impurity measure that satisfies
all three desired properties: we measure how much is missing from each bar to
get a value of 1 and return the sum of these values. This idea is depicted in
Fig. 7 (right).
Formalizing this concept yields the following function to measure the impurity
of a sub-dataset:
$\displaystyle J(D_{i})$ $\displaystyle=\sum_{l\in
2^{A}}1-\frac{f_{i}(l)}{|D_{i}|}$ (2) $\displaystyle=|L|-\sum_{l\in
2^{A}}\frac{f_{i}(l)}{|D_{i}|}.$ (3)
We could then compute the impurity of a predicate as the weighted average of
the values $J(D_{i})$ for every sub-dataset $D_{i}$.
Eq. 3 already looks surprisingly similar to the function $\widehat{G}(D_{i})$
of the multi-label Gini index. Indeed, we see that $\widehat{G}(D_{i})$ is
merely a scaled version of $J(D_{i})$: before computing the error bars, the
bar chart is scaled with the function $s(x)=x^{2}$, which penalizes smaller
bars more strongly.
With the same idea we can also work towards the multi-label entropy. Consider
the scaling function $s(x)=1+x\log_{2}(x)$. We have
$\displaystyle J(D_{i},s)$ $\displaystyle=\sum_{l\in
L}1-s\left(\frac{f_{i}(l)}{|D_{i}|}\right)$ $\displaystyle=\sum_{l\in
L}1-1-\frac{f_{i}(l)}{|D_{i}|}\log_{2}\left(\frac{f_{i}(l)}{|D_{i}|}\right)$
$\displaystyle=-\sum_{l\in
L}\frac{f_{i}(l)}{|D_{i}|}\log_{2}\left(\frac{f_{i}(l)}{|D_{i}|}\right),$
which matches the function $\widehat{H}(D_{i})$ of the multi-label entropy.
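The scaled error-bar view can be verified numerically: with $s(x)=1+x\log_{2}(x)$ the function $J$ reproduces the multi-label entropy, and with $s(x)=x^{2}$ it reproduces the multi-label Gini index (the bar values below are arbitrary).

```python
from math import log2

def J(bars, s):
    # sum over labels of how much each scaled bar misses the value 1
    return sum(1 - s(b) for b in bars)

s_ent = lambda x: 1 + x * log2(x)   # entropy scaling
s_gini = lambda x: x ** 2           # Gini scaling

bars = [0.2, 0.5, 0.8]  # f_i(l) / |D_i| for three labels
ml_entropy = -sum(b * log2(b) for b in bars)
ml_gini = len(bars) - sum(b ** 2 for b in bars)
```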
The scaling functions of the multi-label entropy and Gini index are plotted in
Fig. 8. As previously mentioned, the Gini index scaling function especially
increases the impurity assigned to small bars. In contrast, the entropy
scaling function penalizes bars with a value of approximately 0.37 the most
heavily and assigns small impurity to bars with very low value. While not
quite as intuitive at first glance, this also seems like a valid approach:
small bars mean that only few data points can be assigned a particular label
and we thus only have to separate those few data points. On the other hand, a
label that can be assigned to around 40 percent of the examples means that we
might have to split off a large fraction of the dataset.
Figure 8: Plots of the scaling functions arising from multi-label entropy,
$s(x)=1+x\log_{2}(x)$ (left), and the Gini index, $s(x)=x^{2}$ (right).
## Appendix 0.F Interactive Interface Screenshot
Figure 9: The interactive interface for predicate selection.
Open Access This chapter is licensed under the terms of the Creative Commons
Attribution 4.0 International License
(https://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide
a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the
chapter’s Creative Commons license, unless indicated otherwise in a credit
line to the material. If material is not included in the chapter’s Creative
Commons license and your intended use is not permitted by statutory regulation
or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
# Universality of the minimum modulus for
random trigonometric polynomials
Nicholas A. Cook and Hoi H. Nguyen. The second author is supported by NSF
CAREER grant DMS-1752345.
###### Abstract
It has been shown in [YZ] that the minimum modulus of random trigonometric
polynomials with Gaussian coefficients has a limiting exponential
distribution. We show this is a universal phenomenon. Our approach relates the
joint distribution of small values of the polynomial at a fixed number $m$ of
points on the circle to the distribution of a certain random walk in a
$4m$-dimensional phase space. Under Diophantine approximation conditions on
the angles, we obtain strong small ball estimates and a local central limit
theorem for the distribution of the walk.
Discrete Analysis 2021:20. Received 5 February 2021; published 6 October 2021. doi: 10.19086/da.28985.
Keywords: random trigonometric polynomials, universality, Edgeworth expansion.
## 1 Introduction
Consider the Kac polynomial
$F_{n}(z)=\sum_{j=0}^{n}\xi_{j}z^{j}$ (1.1)
for a sequence of iid random variables $\xi_{j}$ (real or complex). The study
of the distribution of zeros of $F_{n}$, and in particular on the number of
real zeros, has a long history: the case that $\xi_{j}\in\\{-1,0,1\\}$ was
considered by Bloch and Polya [BP31] and Littlewood and Offord [LO38, LO43] in
the 1930s, and the Gaussian case by Kac in the 1940s [Kac43, Kac49]. We refer
to [TV15] for an overview of the vast literature inspired by those early
works.
To the best of our knowledge, the question of the size of the minimum modulus
over the unit circle for Kac polynomials was first raised by Littlewood
[Lit66], who considered the case of Rademacher signs $\xi_{j}=\pm 1$. (We also refer the reader to [BBM+20] for a recent striking result answering another question of Littlewood.) In particular, Littlewood asked whether $\min_{|z|=1}|F_{n}(z)|=o(1)$. (Here and throughout the article asymptotic notation is with respect to the limit $n\to\infty$; see Section 1.3 for our notational conventions.) This question was answered in the affirmative by
Kashin [Kas87]; a significant improvement was later obtained by Konyagin
[Kon94], who showed
${\mathbf{P}}\Big{(}\,\min_{|z|=1}|F_{n}(z)|\geq
n^{-1/2+\varepsilon}\,\Big{)}\to 0$ (1.2)
as $n\to\infty$, for any $\varepsilon>0$. Subsequently, Konyagin and Schlag [KS99] showed that for any $\varepsilon>0$,
$\limsup_{n\to\infty}{\mathbf{P}}\Big{(}\,\min_{|z|=1}|F_{n}(z)|\leq\varepsilon n^{-1/2}\,\Big{)}\leq C\varepsilon$ (1.3)
for a universal constant $C<\infty$. From the above two estimates, it is thus natural to ask whether $n^{1/2}\min_{|z|=1}|F_{n}(z)|$ converges in law, and to identify the limiting distribution.
This question was recently addressed for the case of Gaussian coefficients by
a beautiful result of Yakir and Zeitouni [YZ], which we now recall. As we
consider the restriction of $F_{n}$ over the unit circle we parametrize
$z=e(x)$, where here and throughout we abbreviate $e(t):=\exp({\sqrt{-1}}t)$.
The work [YZ] considers the normalized trigonometric series
$P_{n}(x)=\frac{1}{\sqrt{2n+1}}\sum_{j=-n}^{n}\xi_{j}e(jx),\qquad
x\in{\mathbb{R}},$ (1.4)
where $\xi_{j}$ are iid copies of a real or complex, centered random variable $\xi$ of unit variance. Note that $P_{n}$ has been scaled to have unit variance at each fixed $x$. Up to a factor of unit modulus, which does not affect our results, $P_{n}$ is the restriction of the Kac polynomial $(2n+1)^{-1/2}F_{2n}(z)$ to the unit circle (all of our arguments extend to the case of odd degree). We denote
${m}_{n}:=\min_{x\in[-\pi,\pi]}|P_{n}(x)|.$ (1.5)
With our normalization and from (1.2) and (1.3) we expect that ${m}_{n}$ is
typically of order $n^{-1}$. For the case of Gaussian coefficients, in [YZ]
the limiting distribution of $n\cdot m_{n}$ was shown to be exponential:
###### Theorem 1.1 ([YZ]).
Assume that $\xi$ is a standard real or complex Gaussian. Then for any
$\tau>0$,
$\lim_{n\to\infty}{\mathbf{P}}\Big{(}\,{m}_{n}>\frac{\tau}{n}\,\Big{)}=e^{-{\lambda}\tau}$
(1.6)
where ${\lambda}=2\sqrt{\pi/3}$.
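Theorem 1.1, and the universality statement below, can be probed numerically. The following sketch samples the normalized polynomial (1.4) on a fine mesh via an FFT and compares the rescaled minimum $n\cdot m_{n}$ for Gaussian and Rademacher coefficients; the parameters (degree, mesh size, trial count) are illustrative choices of ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, trials = 30, 1 << 16, 300   # degree, mesh size (N >> n^2), sample count

def rescaled_min(xi):
    """n * min_x |P_n(x)| for coefficients xi = (xi_{-n}, ..., xi_n), evaluated by FFT."""
    a = np.zeros(N, dtype=complex)
    a[: n + 1] = xi[n:]            # frequencies 0, 1, ..., n
    a[-n:] = xi[:n]                # frequencies -n, ..., -1
    return n * np.min(np.abs(np.fft.fft(a))) / np.sqrt(2 * n + 1)

gauss = np.array([rescaled_min(rng.standard_normal(2 * n + 1)) for _ in range(trials)])
rad = np.array([rescaled_min(rng.choice([-1.0, 1.0], 2 * n + 1)) for _ in range(trials)])

# The exponential law predicts a median of log(2) / (2*sqrt(pi/3)) ~ 0.34 for n * m_n.
print(np.median(gauss), np.median(rad))
```

In our runs the two empirical medians are close to each other, consistent with the universality asserted in Theorem 1.2.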
As shown in [YZ, Section 5], their argument in fact extends to allow some
distributions with a small Gaussian component – specifically, $\xi$ of the
form
$\xi^{\prime}+\delta X$ (1.7)
with $\delta$ at least of order $n^{-1}\log n$, where $\xi^{\prime}$ and $X$
are independent, $X\sim{\mathcal{N}}_{\mathbb{R}}(0,1)$, and $\xi^{\prime}$ is
an arbitrary random variable satisfying Cramér’s condition. While Cramér’s
condition is weaker than assuming a bounded density, it does not allow
$\xi^{\prime}$ to be discrete.
In the present work we show that the limiting exponential law for ${m}_{n}$ is
universal. Here and in the sequel,
${\mathbf{P}}_{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}$ denotes a probability
measure under which the real variables $\xi$ or
$\xi^{\prime},\xi^{\prime\prime}$ are standard Gaussian.
###### Theorem 1.2 (Main result).
Assume $\xi$ is a centered sub-Gaussian variable of unit variance, which is
either real-valued, or takes the form
$\frac{1}{\sqrt{2}}(\xi^{\prime}+{\sqrt{-1}}\xi^{\prime\prime})$ for iid real
variables $\xi^{\prime},\xi^{\prime\prime}$. Then for any $\tau>0$,
${\mathbf{P}}\Big{(}\,m_{n}>\frac{\tau}{n}\,\Big{)}-{\mathbf{P}}_{{\mathcal{N}}_{\mathbb{R}}(0,1)}\Big{(}\,m_{n}>\frac{\tau}{n}\,\Big{)}\longrightarrow
0$ (1.8)
as $n\to\infty$.
###### Remark 1.3.
In the proof we treat the case (1.4) with real-valued $\xi$ – the complex case
is slightly simpler. The necessary modifications as well as an extension to
another model of random trigonometric series, are given in Section 10.
###### Remark 1.4.
The sub-Gaussianity assumption is mainly for convenience, and one can check
that for our arguments it suffices to assume $\xi$ has a finite moment of
sufficiently large order.
As an immediate consequence we extend Theorem 1.1 to general sub-Gaussian
coefficients:
###### Corollary 1.5.
The limit (1.6) holds when $\xi$ is any sub-Gaussian random variable of mean
zero and unit variance.
In particular, (1.6) holds for Rademacher polynomials, which were the focus of
the aforementioned works of Littlewood and others. In fact, the Rademacher
case in some sense captures the main challenges for our proof. We comment on
some of these challenges below. See Figure 1 for a numerical illustration of
the universality phenomenon.
Figure 1: Histogram of the minimum modulus over $10^{4}$ equally spaced points on the unit circle, for $10^{4}$ samples of a random degree 20 polynomial $P_{n}(x)$ of (1.4) with Rademacher (left) and Gaussian (right) coefficients.
We mention that the distribution of the _maximum_ value over a curve for
various random analytic functions has been studied extensively; see for
instances the books [AT07, AW09] and the references therein. Sharp asymptotics
for the maximum of random trigonometric polynomials with Rademacher
coefficients were obtained by Salem and Zygmund [SZ54] and Halász [Hal73], and
extended to more general coefficient distributions by Kahane [Kah85]. In
recent years there has been particular focus on characteristic polynomials of
random unitary matrices, with $\gamma$ the unit circle [ABB17, PZ17, CMN18,
CZ20], and the Riemann zeta function on a randomly shifted unit interval on
the critical axis [ABB+19, Naj18, Har, ABR]. Such questions are closely tied
to a fine understanding of large deviations and concentration of measure for
values of the function at given points.
The minimum modulus has received comparatively less attention. As we explain
below, its behavior is governed by central limit theorems and anti-
concentration for the distribution at given points. (Another well-known
instance of the dichotomy of concentration/anti-concentration for large/small
values of random fields is in the study of singular values of random
matrices.)
We further note that proving universality for roots of classical random
ensembles has become an active direction of research in recent years, see for
instance [BD04, DNV15, DNV18, IKM16, KZ14, NNV16, NV17, TV15] and the
references therein. Our main result stands out from the above works in two
ways: that our focus is not on the statistics of roots, and our method is
totally different. Corollary 1.5 can be seen as a polynomial analogue of the result [TV10a] of Tao and Vu, where they showed that the least singular value statistics of random iid matrices are universal, although there is no real connection between the random matrix model and our random polynomials. We remark that the study of both the minimum modulus of Kac polynomials and the least singular values of random matrices has important implications for the study of the condition number of matrices; see for instance [BG05] and [TV10b].
Finally, we note that since the completion of this work, there has been
progress on the related problem of the distance of the _nearest root_ of Kac
polynomials to the unit circle. A beautiful result of Michelen and
Sahasrabudhe [MiSa] establishes the limiting distribution for the Gaussian
case, resolving a conjecture of Shepp and Vanderbei [ShVa]. In recent work
with Yakir and Zeitouni [CNYZ] we apply some tools developed in the present
paper to show their result is universal.
### 1.1 Some comments on the proof
We briefly sketch some highlights of the proof of Theorem 1.2. Consider the
parametrized random curve $\\{P_{n}(x):x\in[-\pi,\pi]\\}$ as the trajectory of
a particle in the complex plane. Following [KS99] we approximate the time the
particle is closest to the origin by a point in a discrete mesh
${\mathcal{X}}=\\{x_{\alpha}\\}_{\alpha=1}^{N}\subset[-\pi,\pi]$. Since the
velocity $P_{n}^{\prime}(x)$ is typically of order $n$, in order to capture
this moment we must take $N$ much larger than $n$. However, this means that
each approach within distance $O(1/n)$ of the origin will carry several points
$P_{n}(x)$, $x\in{\mathcal{X}}$ near the origin, so that a union bound over
events that $P_{n}(x_{\alpha})=O(1/n)$ is too wasteful to isolate the
distribution of $m_{n}$. Following [YZ], we isolate a single time
$x_{\alpha}\in{\mathcal{X}}$ for each approach, so that $|P_{n}(x_{\alpha})|$
is approximately a local minimum, by considering both $P_{n}(x_{\alpha})$ and
$P_{n}^{\prime}(x_{\alpha})$ – the precise criterion is given in Section 2.1.
The result is a collection of events ${\mathcal{A}}_{\alpha}$, $\alpha\in[N]$,
that $x_{\alpha}$ is an approximate local minimizer, with each event
determined by the positions and velocities of the particle on the discrete set
${\mathcal{X}}$. In this way we obtain a point process ${\mathcal{M}}_{n}$ on
${\mathbb{R}}_{+}$ of approximate local minima $n|P_{n}(x_{\alpha})|$,
rescaled so that the global minimum is of order one.
For the Gaussian case, it was shown in [YZ] that ${\mathcal{M}}_{n}$ is
approximately a Poisson point process of intensity $2\sqrt{\pi/3}$, from which
the result clearly follows. In Section 2.2 we provide a sketch of their key
argument using an invariance principle of Liggett. For universality, our
approach is to establish universality for the joint distribution of
$S_{n}=S_{n}(\alpha_{1},\dots,\alpha_{m}):=(P_{n}(x_{\alpha_{i}}),P_{n}^{\prime}(x_{\alpha_{i}}))_{i\in[m]}\in{\mathbb{C}}^{2{m}}$
giving the positions and velocities of the particle at any fixed collection of
times $x_{\alpha_{1}},\dots,x_{\alpha_{m}}$; this allows us to deduce
universality for the global minimum by comparison of moments.
The event that the real and imaginary parts of the positions and velocities
lie in given ranges, and moreover that ${\mathcal{A}}_{\alpha_{i}}$ holds for
each $i\in[m]$, is the event that the vector $S_{n}$ lies in a certain compact
domain ${\mathcal{U}}_{n}$ in $4m$-(real-)dimensional phase space. While
${\mathcal{U}}_{n}$ has piecewise smooth boundary, its regularity depends
strongly on $n$, so that estimating its measure under the law of $S_{n}$
requires precise estimates of the measure of boxes at polynomially-small
scales.
Recalling that $P_{n}$ is a trigonometric polynomial, we see that $S_{n}$ is a
random walk of the form $\sum_{j=-n}^{n}\xi_{j}{\boldsymbol{w}}_{j}$, with
${\boldsymbol{w}}_{j}\in{\mathbb{R}}^{4m}$ giving the real and imaginary parts
of $e(jx)$ and its derivative $je(jx)$ at the times
$x_{\alpha_{1}},\dots,x_{\alpha_{m}}$. In particular, when the coefficients
$\xi_{j}$ are Gaussian, $S_{n}$ is a Gaussian vector, and so the main problem
is to obtain a quantitative central limit theorem for $S_{n}$ when the
coefficients are general sub-Gaussian variables. This, as well as a small ball
estimate, hinge on a strong decay estimate on the characteristic function of
$S_{n}$ (Theorem 3.1), which is the main technical component of the proof. (In
fact our argument yields more than a CLT, giving a quantitative _Edgeworth
expansion_ for the distribution of $S_{n}$, though for our purposes we only
need that each term of the expansion is smooth.)
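As a concrete illustration of this phase-space picture, one can assemble the step vectors ${\boldsymbol{w}}_{j}$ for a pair of angles and inspect the covariance of $S_{n}$. The angles below are arbitrary generic choices of ours, and we normalize the derivative by $1/n$ (in the spirit of the rescaling (2.5)) so that all entries are order one; for generic angles the covariance is close to a block-diagonal matrix and in particular non-degenerate:

```python
import numpy as np

n, angles = 200, [0.7, 1.9]   # hypothetical degree and m = 2 generic angles
j = np.arange(-n, n + 1)

# Step vectors w_j in R^{4m}: real and imaginary parts of e(jx) and of the
# rescaled derivative (j/n) e(jx), at each of the m angles.
blocks = [np.stack([np.cos(j * x), np.sin(j * x),
                    (j / n) * np.cos(j * x), (j / n) * np.sin(j * x)], axis=1)
          for x in angles]
W = np.concatenate(blocks, axis=1)       # shape (2n+1, 4m)

# Covariance of S_n = (2n+1)^{-1/2} * sum_j xi_j w_j for unit-variance xi.
# For generic angles it is close to diag(1/2, 1/2, 1/6, 1/6) per angle.
Sigma = W.T @ W / (2 * n + 1)
print(np.round(np.diag(Sigma), 3), round(float(np.linalg.eigvalsh(Sigma).min()), 3))
```

For angles that are close together, nearly antipodal, or near low-order rational multiples of $\pi$, the same computation produces a nearly singular $\Sigma$, which is the degeneracy discussed below.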
In our general setting and in particular when the coefficients have discrete
distribution, the distribution of the polynomial and its derivative at given
points $x_{\alpha_{1}},\dots,x_{\alpha_{m}}$ depends strongly on arithmetic
properties of the $x_{\alpha_{i}}$ (compared to the complex Gaussian case of
Theorem 1.1 where the distribution is stationary under rotations.) In
particular, the desired control on the characteristic function does not hold
for all choices of the $x_{\alpha_{i}}$ – basically when two of the points are
too close together or nearly antipodal, or when $e(x_{\alpha_{i}})$ is close
to a root of unity of order $n^{o(1)}$ for some $i\in[m]$. We handle such
“bad” $m$-tuples with relatively crude arguments (following [KS99]), and
establish the decay estimate on the characteristic function for “nice” tuples.
The latter is the most technically challenging part of the proof. A similar
estimate for the case ${m}=2$ was obtained in [DNN], but the generalization to
higher dimensions, together with the complexity of the case when $\xi$ is
real-valued, pose significant challenges. For this, roughly speaking, we must
show that it is not possible to simultaneously dilate the steps
${\boldsymbol{w}}_{j}$ of the walk by a factor $K$, for any $K=n^{O(1)}$, so
that their projections $\psi_{j}$ in some common direction all approximately
lie in the integer lattice. We argue by contradiction, showing that if there
is such a projection and dilation, then the sequence $\psi_{j}$ can be locally
approximated by polynomial progressions of controlled degree. Here we
crucially use the trigonometric properties of the steps
${\boldsymbol{w}}_{j}$. Combining this information with some judicious
differencing manipulations, we can isolate an angle $x_{i}$ that is well-
approximated by a rational of small denominator, contradicting the smoothness
assumption.
To summarize, some highlights of our note include:
1. 1.
A nearly sharp characterization, in terms of arithmetic properties, of the
collection of arcs of the circle over which the Kac polynomial is strongly
approximated by a Gaussian Kac polynomial (in the sense of joint distributions
at any fixed number of points);
2. 2.
Sharp small ball estimates under microscopic scaling for random walks in
${\mathbb{R}}^{m}$ of the form
$\sum_{j}\xi_{j}(g(\frac{jt_{1}}{n}),\dots,g(\frac{jt_{m}}{n}))$ for various
smooth functions $g:S^{1}\to{\mathbb{C}}$, such as $e(x)$, or $x\sin x$;
3. 3.
Local limit theorems for such high-dimensional random walks;
4. 4.
A sub-polynomial decay estimate on the associated characteristic function,
which greatly improves on estimates from [KS99].
All of these results seem to be new and of independent interest.
### 1.2 Organization
In Section 2 we will discuss the proof of [YZ] and reduce our task to
establishing Proposition 2.7, which establishes universality for the joint
distribution of low-lying near-local minima over a discrete subset of the
torus. Along the way we recall some lemmas from [YZ], and identify two
important arithmetic properties for collections of points in the torus that
will be crucial for subsequent analysis. Section 3 reformulates Proposition
2.7 in terms of a vector-valued random walk, and proves it using a small-ball
estimate (Theorem 3.4) and local central limit theorem (Theorem 3.2), which
are consequences of a strong decay estimate for the characteristic function
(Theorem 3.1). The deduction of the main result from Proposition 2.7 is given
in Sections 5 and 6. Theorem 3.4 and Theorem 3.2 are deduced from Theorem 3.1
in Sections 7 and 8, respectively, and Theorem 3.1 is proved in Section 9.
Finally, in Section 10 we describe how our result can be extended to other
models of random trigonometric polynomials.
### 1.3 Notation
We write $C,C^{\prime},C_{0},c$ etc. to denote positive absolute constants,
which may change from line to line, while $C(\tau)$ etc. denotes a constant
that depends only on the parameter (or set of parameters) $\tau$. We use the
standard asymptotic notation $f=O(g)$, $f\ll g$ and $g\gg f$ to mean $|f|\leq
Cg$ for some absolute constant $C>0$, and $f=O_{\tau}(g)$, $f\ll_{\tau}g$ and
$g\gg_{\tau}f$ to mean $|f|\leq C(\tau)g$. For positive sequences
$\\{f_{n}\\},\\{g_{n}\\}$ we say that $g_{n}=o(f_{n})$ and $f_{n}=\omega(g_{n})$ if $f_{n}/g_{n}\to\infty$ as $n\to\infty$. We allow
implied constants to depend on the sub-Gaussian constant of $\xi$ without
explicitly indicating this.
For a real number $x$, $\|x\|_{{\mathbb{R}}/{\mathbb{Z}}}$ denotes the
distance from $x$ to the nearest integer, and
$m=m_{\operatorname{{Leb}}}(\cdot)$ denotes the Lebesgue measure on
${\mathbb{R}}^{d}$ for any $d$. For a compact interval $J\subset{\mathbb{R}}$
we write $|J|:=m_{\operatorname{{Leb}}}(J)$ for its length. $\\{t\\}=t-\lfloor
t\rfloor$ denotes the fractional part of $t\in{\mathbb{R}}$. We write
$e_{n}(\theta)$ for $e(\theta/n)$. The singular values of a matrix $M$ are
ordered $\sigma_{1}(M)\geq\sigma_{2}(M)\geq\cdots$.
Sequences $(\xi_{j})_{j}$ are understood to be sequences of iid copies of the
variable $\xi$ from Theorem 1.2. We write
${\mathbf{P}}_{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}$ for a probability measure
under which the coefficients $\xi_{j}$ in (1.4) are standard real Gaussians,
and write ${\mathbf{E}}_{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}$ for the
associated expectation. (This notation is only used for comparisons of random
variables in law – we do not consider couplings.)
### 1.4 Acknowledgements
We thank Pavel Bleher, Yen Do, Oanh Nguyen, Oren Yakir and Ofer Zeitouni for
helpful discussions and comments, and Yakir and Zeitouni for showing us an
early draft of their work [YZ] on the Gaussian case. This project was
initiated at the American Institute of Mathematics meeting “Zeros of random
polynomials” in August 2019, where Bleher and Zeitouni were also participants.
In particular, the idea used here and in [YZ] to study local linearizations
emerged from those discussions. We thank the workshop organizers and the
Institute for providing a stimulating research environment.
## 2 Preliminary reductions
Our main objective in this section is to reduce our task to proving
Proposition 2.7 below, which gives a comparison principle for the joint
distribution of low-lying values for a discretized process over the circle.
Along the way we recall elements of the proof from [YZ] that we will need. For
completeness we also include a brief description of their argument for the
Gaussian case.
### 2.1 Passage to local linearizations
We begin by recalling the approach from [YZ] for selecting near-local-
minimizers of $|P_{n}(x)|$ on a discrete set; we refer to Section 1.1 for the
high-level motivation of this approach. The criterion for $x_{\alpha}$ to be
such a representative point is in terms of the local linearization
$F_{\alpha}$ of $P_{n}$ at $x_{\alpha}$ – the intuition is that for the mesh
point $x_{\alpha}$ that is closest to a local minimizer of $|P_{n}(x)|$, it
will also be close to the minimizer of $|F_{\alpha}(x)|$. A key take-away from
this approximation is that all information on near-minimizers of $|P_{n}(x)|$
is encoded in the values of $P_{n}$ _and its derivative_ at the mesh points.
We collect some notation and lemmas from [YZ], with some minor modifications.
Let ${K_{0}}>4$ be a sufficiently large constant and set
$N:=\bigg{\lfloor}\frac{n^{2}}{\log^{{K_{0}}}n}\bigg{\rfloor}.$ (2.1)
We divide $[-\pi,\pi]$ into $N$ intervals: letting
$x_{\alpha}=\frac{2\pi\alpha}{N}\,,\qquad\alpha=1,\dots,N,$
we decompose
$[-\pi,\pi]=\bigcup_{\alpha=1}^{N}I_{\alpha},\quad\text{ where
}I_{\alpha}=\Big{[}x_{\alpha}-\frac{\pi}{N},x_{\alpha}+\frac{\pi}{N}\Big{]}.$
Note that for the case of real coefficients it suffices to consider
$x_{\alpha}\in[0,\pi]$.
Define
$\displaystyle Y_{\alpha}$
$\displaystyle:=-\frac{{\operatorname{Re}}({P}_{n}(x_{\alpha})\overline{{P}_{n}^{\prime}(x_{\alpha})})}{|{P}_{n}^{\prime}(x_{\alpha})|^{2}}\,,\qquad
Z_{\alpha}:=n\frac{{\operatorname{Im}}({P}_{n}(x_{\alpha})\overline{{P}_{n}^{\prime}(x_{\alpha})})}{|{P}_{n}^{\prime}(x_{\alpha})|}\,.$
(2.2)
We denote the local linearizations of ${P}_{n}$ given by
$F_{\alpha}(x):={P}_{n}(x_{\alpha})+(x-x_{\alpha}){P}_{n}^{\prime}(x_{\alpha}).$
(2.3)
As shown in [YZ, Section 1.3], $|F_{\alpha}(x)|$ is minimized at
$x=x_{\alpha}+Y_{\alpha}$, where it takes the value $|Z_{\alpha}|/n$; thus
$|F_{\alpha}(x_{\alpha}+Y_{\alpha})|=|Z_{\alpha}|/n=\min_{x\in{\mathbb{R}}}|F_{\alpha}(x)|.$
(2.4)
(The sign is kept on $Z_{\alpha}$ only for convenience – we mention that the
sign encodes whether the origin is to the left or right of the curve
$\\{P_{n}(x):x\in[-\pi,\pi]\\}$ as $x$ increases through $x_{\alpha}$, but
this fact will not be used.)
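The identity (2.4) is elementary and can be checked directly from the definitions (2.2)–(2.3). In the sketch below, the numeric values of $P_{n}$ and $P_{n}^{\prime}$ at $x_{\alpha}$ are arbitrary stand-ins, not from the paper:

```python
n, x_a = 100, 1.2                        # arbitrary sample values
P, dP = 0.3 - 0.7j, 5.0 + 2.0j           # stand-ins for P_n(x_alpha) and P_n'(x_alpha)

Y = -(P * dP.conjugate()).real / abs(dP) ** 2   # Y_alpha from (2.2)
Z = n * (P * dP.conjugate()).imag / abs(dP)     # Z_alpha from (2.2)

F = lambda x: P + (x - x_a) * dP                # local linearization (2.3)

# |F| is minimized at x_alpha + Y_alpha, where it equals |Z_alpha|/n, as in (2.4):
assert abs(abs(F(x_a + Y)) - abs(Z) / n) < 1e-12
assert all(abs(F(x_a + Y + e)) >= abs(F(x_a + Y)) for e in (-0.1, -0.01, 0.01, 0.1))
```

The underlying algebra is just $|P+YP^{\prime}|^{2}=|P|^{2}-{\operatorname{Re}}(P\overline{P^{\prime}})^{2}/|P^{\prime}|^{2}={\operatorname{Im}}(P\overline{P^{\prime}})^{2}/|P^{\prime}|^{2}$.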
We denote the $2\pi n$-periodic trigonometric polynomial
${\widetilde{P}}_{n}(s)={P}_{n}(s/n),\qquad s\in{\mathbb{R}}.$ (2.5)
This scaling will often be convenient since all of its derivatives are
typically of order 1.
We consider the collection $\\{Z_{\alpha}\\}_{\alpha\in[N]}$ as a point
process on ${\mathbb{R}}$. The scaling by $n$ means we focus on (signed) low-
lying values of $|P_{n}|$. Now we give the criterion by which “representative”
near-minimizers are selected. Let
${\mathcal{A}}_{\alpha}:={\mathcal{A}}_{\alpha}^{\prime}\cap{\mathcal{A}}_{\alpha}^{\prime\prime}$
where
${\mathcal{A}}_{\alpha}^{\prime}:=\\{|Y_{\alpha}|\leq\pi/N,|Z_{\alpha}|\leq\log
n\\}$
and
$\displaystyle{\mathcal{A}}_{\alpha}^{\prime\prime}:$
$\displaystyle=\\{|{P}_{n}(x_{\alpha})|\leq
n^{-1/2},|{P}_{n}^{\prime}(x_{\alpha})|\in[n\log^{-{K_{0}}/2}n,C_{0}n\sqrt{\log
n}]\\}\,,$
and define the point process
${{\mathcal{M}}}_{n}=\sum_{\alpha=1}^{N}\delta_{X_{\alpha}},\qquad
X_{\alpha}:=Z_{\alpha}\mathbbm{1}_{{\mathcal{A}}_{\alpha}}+\infty\mathbbm{1}_{{\mathcal{A}}_{\alpha}^{c}}\,.$
(2.6)
The event ${\mathcal{A}}_{\alpha}^{\prime}$ is the condition on the local
linearization that was described above, while
${\mathcal{A}}_{\alpha}^{\prime\prime}$ enforces some regularity of $P_{n}$ on
$I_{\alpha}$.
The following control on the second derivative will be used to show that the
local linearizations $F_{\alpha}$ are good approximations to ${P}_{n}$ at the
scale of the intervals $I_{\alpha}$.
###### Lemma 2.1 (Derivative bounds).
For ${K}>1$ and integer $k\geq 0$ let ${\mathcal{G}}_{k}({K})$ be the event
that
$\sup_{s\in{\mathbb{R}}}|{\widetilde{P}}_{n}^{(k)}(s)|=\frac{1}{n^{k}}\sup_{x\in[-\pi,\pi]}|P_{n}^{(k)}(x)|\leq\log^{K}n.$
There exists $c=c(k)>0$ depending only on $k$ and the sub-Gaussian moment of
$\xi$ such that
${\mathbf{P}}({\mathcal{G}}_{k}({K})^{c})\leq\exp(-c\log^{2{K}}n).$
###### Proof.
Fix $K$ and $k$. It suffices to show the claimed bound for
$R:={\operatorname{Re}}{\widetilde{P}}_{n}^{(k)}$. By Bernstein’s inequality,
$\sup_{t\in[-n\pi,n\pi]}|R^{\prime}(t)|\ll\sup_{t\in[-n\pi,n\pi]}|R(t)|\,,$
so if we assume that $\sup_{t}|R(t)|$ is attained at $t_{0}$, then for all
$|t-t_{0}|\leq c_{0}$ for a sufficiently small constant $c_{0}>0$, we have
$|R(t)|\geq|R(t_{0})|-|t-t_{0}|\sup_{t\in[-n\pi,n\pi]}|R^{\prime}(t)|>|R(t_{0})|/2.$
It follows that if we divide $[-n\pi,n\pi]$ into $O(n)$ intervals $J_{i}$ of
sufficiently small length and with midpoints $t_{i}$, then we have
$\sup_{i}|R(t_{i})|>\frac{1}{2}\sup_{t\in[-n\pi,n\pi]}|R(t)|$. Hence
$\displaystyle{\mathbf{P}}(\sup_{t\in[-n\pi,n\pi]}|R(t)|\geq(\log
n)^{{K}})\leq\sum_{i}{\mathbf{P}}(|R(t_{i})|$ $\displaystyle\geq(\log
n)^{{K}}/2)$ $\displaystyle\ll n\exp(-c^{\prime}(\log
n)^{2{K}})\leq\exp(-c(\log n)^{2{K}})\,,$
where we used a sub-Gaussian tail estimate for the upper bound for each
$t_{i}$. ∎
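The Bernstein step above can be sanity-checked numerically: after the rescaling (2.5), $\widetilde{P}_{n}$ has frequencies $j/n\in[-1,1]$, so Bernstein's inequality gives $\sup|R^{\prime}|\leq\sup|R|$ over a period. A quick sketch on a grid, with illustrative parameters of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
j = np.arange(-n, n + 1)
xi = rng.standard_normal(2 * n + 1)

# R(t) = Re(P~_n(t)) = (2n+1)^{-1/2} sum_j xi_j cos(j t / n): frequencies j/n in [-1, 1],
# so Bernstein's inequality gives sup |R'| <= sup |R| over the period [-n*pi, n*pi].
t = np.linspace(-n * np.pi, n * np.pi, 50001)
R = xi @ np.cos(np.outer(j / n, t)) / np.sqrt(2 * n + 1)
dR = -(xi * (j / n)) @ np.sin(np.outer(j / n, t)) / np.sqrt(2 * n + 1)

assert np.max(np.abs(dR)) <= np.max(np.abs(R))
```

In typical runs $\sup|R^{\prime}|$ is in fact noticeably smaller than $\sup|R|$, since the derivative has variance about a third of that of $R$.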
The next proposition shows that near-minimizers are typically well separated.
The proof is a straightforward modification of the proof of [YZ, Lemma 2.11]
and is deferred to Appendix A. There is the minor issue that a local minimizer
for ${P}_{n}$ may cause a low value for two neighboring linearizations
simultaneously, as accounted for in part (i). This will (unfortunately)
present some issues of a purely technical nature in the proof of Proposition
2.5 below.
###### Lemma 2.2.
On the event ${\mathcal{G}}_{2}({K_{0}}/2)$ we have
1. (i)
If ${\mathcal{A}}_{\alpha}$ and ${\mathcal{A}}_{\alpha+1}$ hold, then
$Y_{\alpha}\in[\frac{\pi}{N}-\frac{\pi}{N\log^{{K_{0}}/4}n},\frac{\pi}{N}].$
2. (ii)
Furthermore, ${\mathcal{A}}_{\alpha}$ and ${\mathcal{A}}_{\alpha^{\prime}}$
cannot hold simultaneously as long as
$2\leq|\alpha^{\prime}-\alpha|\leq\frac{n}{\log^{3{K_{0}}}n}.$
### 2.2 The Yakir–Zeitouni invariance argument
Now we discuss briefly the key remaining ideas of [YZ] for the Gaussian case
(or the case with small Gaussian component as in (1.7)), which employs a
strategy used by Biskup and Louidor in their work on extreme values of the
planar discrete Gaussian free field [BL16]. The approach combines the
following ingredients:
1. 1.
A Gaussian computation showing that for any interval
$[a,b]\subset{\mathbb{R}}$ we have
$\lim_{n\to\infty}{\mathbf{E}}({\mathcal{M}}_{n}([a,b]))=\sqrt{\frac{\pi}{3}}(b-a)$.
2. 2.
A consequence of a general result of Liggett [Lig78]: that if the law of a point process is invariant under adding an independent Gaussian perturbation to each point, then it is a Poisson point process of constant intensity. (For the interested reader, we note that a new proof of Liggett’s general result in a special case sufficient for this application was recently obtained in [CGS].)
3. 3.
A consequence of the Gaussianity of the field
$\\{P_{n}(x)\\}_{x\in[-\pi,\pi]}$: that if $Q_{n}$ is an independent copy of
$P_{n}$, then
$\widehat{P_{n}}(x)=\sqrt{1-\frac{1}{n^{2}}}P_{n}(x)+\frac{1}{n}Q_{n}(x)$ is
identically distributed to $P_{n}(x)$.
4. 4.
The fact that near-minimizers of $|P_{n}|$ are well separated (from a
strengthening of Lemma 2.2).
Roughly speaking, from (3) one can view $\widehat{P_{n}}$ as a perturbation of
$P_{n}$ by an independent Gaussian field $\frac{1}{n}Q_{n}$ of typical size
$1/n$, which is the scale of the minimum modulus. Thus, the point process
$\widehat{{\mathcal{M}}}_{n}$ is obtained from ${\mathcal{M}}_{n}$ by (a
slight rescaling and) a perturbation of each point by a standard Gaussian. Now
from (4), the low values of $|P_{n}(x)|$ occur at points $x$ that are
sufficiently separated that (as one can show) the values of $Q_{n}(x)$ at
these near-minimizers are nearly uncorrelated. Hence, the point process
$\widehat{{\mathcal{M}}}_{n}$ is approximately a point process obtained from
${\mathcal{M}}_{n}$ by perturbing each $X_{\alpha}$ by an independent
Gaussian. From (2) we get that $\widehat{{\mathcal{M}}}_{n}$, and hence,
${\mathcal{M}}_{n}$, is a Poisson point process of constant intensity, and
from (1) it follows that the intensity is $\sqrt{\pi/3}$. (To apply (2) one
cannot actually argue at finite $n$ as just described, but instead one needs
to pass to subsequential limiting point processes, obtained from the tightness
implied by (1); in the end one finds a limiting Poisson point process of the
same intensity regardless of the subsequence.)
Morally speaking, the exponential law is then a straightforward consequence of
the minimum being approximately the smallest (absolute) value of a Poisson
point process on ${\mathbb{R}}$. The formal argument requires some
considerable work to justify all of the approximations, and the above sketch
glides over many important points; we invite the reader to see [YZ] for
further details.
### 2.3 Towards universality: matching moments over smooth points
It should be evident that the beautiful argument of [YZ] just described relies
heavily and in several different ways on properties of the Gaussian
distribution. Towards establishing Theorem 1.2, our approach is to establish
universality for the joint distribution of $X_{\alpha}$ at any fixed number of
indices $\alpha\in[N]$ (in particular this yields universality for the joint
intensity functions of the point process ${\mathcal{M}}_{n}$). From this one
can deduce universality of moments
${\mathbf{E}}({\mathcal{M}}_{n}([-\tau,\tau])^{m})$ of all order, leading to
universality for the distribution function ${\mathbf{P}}(m_{n}\leq\tau/n)$.
For general $\xi$, the main difficulty for studying the joint distribution of
$P_{n}(x_{i})$ and its derivative at $m$ different points $x_{i}$, or even at
a single point $x$, is that the distribution is highly dependent on arithmetic
properties of the points. Consider the case of Rademacher coefficients. At
$x=0$ we have $P_{n}(0)=\frac{1}{\sqrt{2n+1}}\sum_{j=-n}^{n}\xi_{j}$ – while
from the Central Limit Theorem this approaches the
${\mathcal{N}}_{\mathbb{R}}(0,1)$ distribution, it does so at the slowest
possible rate, and the distribution is only smooth (i.e. comparable to
Lebesgue measure on balls of radius $\delta$) at scales $\delta$ much larger
than $1/\sqrt{n}$. At $x=\pi/2$ we have that $P_{n}(\pi/2)$ splits into
independent real and imaginary sums, each tending to the
${\mathcal{N}}_{\mathbb{R}}(0,1/2)$ distribution at the slowest possible rate.
The situation is slightly improved at $x=\pi/4$, for which one can obtain a
meaningful small ball estimate at scale $\delta\sim 1/n$ with some effort. As
we shrink the scale $\delta$ at which we desire $P_{n}(x)$ to have an
effectively smooth distribution, the collection of “structured” angles that we
must avoid increases.
Thus we see that Diophantine approximation will play a crucial role in our
arguments. Indeed, such considerations played a strong role in the argument of
Konyagin and Schlag for the upper bound (1.3). That work only dealt with the
field at single points, however; to compare the joint distribution of $P_{n}$
and its derivative at an arbitrary fixed number of points we need finer
control.
We quantify the level of approximability of points $t$ by rationals as
follows:
###### Definition 2.3 (Smooth points).
For ${K}>0$, we say a point $t\in{\mathbb{R}}$ is _${K}$ -smooth_ if
$\Big{\|}\frac{p_{0}t}{\pi
n}\Big{\|}_{{\mathbb{R}}/{\mathbb{Z}}}>\frac{{K}}{n}\qquad\forall\;p_{0}\in{\mathbb{Z}}\cap[-{K}-1,{K}+1],p_{0}\neq
0.$
We say a tuple $(t_{1},\dots,t_{m})$ is ${K}$-smooth if $t_{r}$ is
${K}$-smooth for each $1\leq r\leq{m}$.
Thus, in the special case $K<1$, a point $t\in{\mathbb{R}}$ is ${K}$-smooth if
and only if $\|\frac{t}{\pi n}\|_{{\mathbb{R}}/{\mathbb{Z}}}>\frac{{K}}{n}.$ Observe
also that if $n^{-1+\kappa}\leq\|\frac{t}{\pi
n}\|_{{\mathbb{R}}/{\mathbb{Z}}}\leq n^{-2\kappa}$ then $t$ is
$n^{\kappa}$-smooth.
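As a concrete illustration, the condition in Definition 2.3 is straightforward to test numerically. The following sketch (the parameter values $n=1000$, $K=5$ and the sample point are ours, chosen purely for illustration) checks $K$-smoothness of a point:

```python
import math

def torus_norm(x: float) -> float:
    """Distance from x to the nearest integer (the R/Z norm)."""
    return abs(x - round(x))

def is_smooth(t: float, K: float, n: int) -> bool:
    """Check K-smoothness of t per Definition 2.3:
    ||p0*t/(pi*n)|| > K/n for every nonzero integer p0 in [-K-1, K+1]."""
    bound = int(math.floor(K + 1))
    return all(
        torus_norm(p0 * t / (math.pi * n)) > K / n
        for p0 in range(-bound, bound + 1) if p0 != 0
    )

n, K = 1000, 5.0
# t = 0 is never K-smooth: p0 = 1 gives torus norm 0.
print(is_smooth(0.0, K, n))        # False
# A "generic" point at this scale is typically smooth.
print(is_smooth(137.03599, K, n))  # True
```

As expected, $t=0$ fails for every $K$, since $p_{0}=1$ already gives torus norm $0$.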
The following lets us focus on potential minimizers that are smooth.
###### Lemma 2.4 (Ruling out bad arcs).
For ${\kappa}>0$ let ${E_{\operatorname{{bad}}}}({\kappa})$ be the set of
points $x\in{\mathbb{R}}$ such that $nx$ is not $n^{\kappa}$-smooth. There
exist absolute constants ${\kappa}_{0},c_{0}>0$ such that
${\mathbf{P}}\big{(}\,\exists
x\in{E_{\operatorname{{bad}}}}({\kappa}_{0}):|{P}_{n}(x)|\leq
n^{-1+c_{0}}\,\big{)}=o(1).$
###### Proof.
This follows from the argument for [KS99, Lemma 3.3]; one only needs two
modifications:
1.
Whereas they considered $A$-smooth points for $A$ fixed, their bounds in fact
allow $A$ to grow as fast as $n^{{\kappa}_{0}}$ for $\kappa_{0}$ sufficiently
small. (One also notes that their parameter $\varepsilon$ may grow as fast as
$O(n^{3/4})$.)
2.
Whereas their model takes the sum in (1.4) to run over $[0,n]$ rather than
$[-n,n]$, they only need that the covariance matrix for
$({\operatorname{Re}}P_{n}(x),{\operatorname{Im}}P_{n}(x))$ has eigenvalues
bounded below by $\gg n^{2}\min(1,|x|,|\pi-x|)^{2}$ for $\min(|x|,|\pi-x|)\gg
n^{-1-c}$ for a small absolute constant $c>0$, which for the present model
follows from display (2.21) in [YZ]. (One may alternatively apply the proof of
[KS99, Lemma 3.3] but condition on the variables $(\xi_{j})_{-n\leq j<0}$
before applying the Berry–Esseen theorem.)
∎
With ${\kappa}_{0}$ as in Lemma 2.4 we now consider the thinned point process
${{\mathcal{M}}}_{n}^{\sharp}:=\sum_{\alpha:x_{\alpha}\notin{E_{\operatorname{{bad}}}}({\kappa}_{0})}\delta_{X_{\alpha}}.$
(2.7)
Theorem 1.2 will be deduced from the following comparison of moments. The
proof is deferred to Section 5.
###### Proposition 2.5 (Moment matching).
For any fixed $\tau>0$ and integer ${m}\geq 1$ we have
$\lim_{n\to\infty}{\mathbf{E}}\Big{(}{{\mathcal{M}}}_{n}^{\sharp}\big{(}[-\tau,\tau]\big{)}^{m}\Big{)}=\lim_{n\to\infty}{\mathbf{E}}_{{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}}\Big{(}{{\mathcal{M}}}_{n}^{\sharp}\big{(}[-\tau,\tau]\big{)}^{m}\Big{)},$
(2.8)
where we recall that ${\mathbf{E}}_{{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}}$
stands for expectation under the Gaussian model from Theorem 1.1.
### 2.4 Joint distribution over spread points
Expanding the moments in (2.8) leads to consideration of joint events that
$X_{\alpha_{i}}$ is small at $m$ different points $x_{\alpha_{i}}$, $1\leq
i\leq m$. In addition to the smoothness already imposed in the definition of
${{\mathcal{M}}}_{n}^{\sharp}$, we will require all of the points to be
separated from one another, in the following sense:
###### Definition 2.6 (Spread tuples).
For $m\geq 2$ and ${\lambda}>0$, we say
$\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ is _${\lambda}$
-spread_ if
$\Big{\|}\frac{t_{r}\pm t_{r^{\prime}}}{2\pi
n}\Big{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\geq\frac{{\lambda}}{n}\qquad\forall\,1\leq
r<r^{\prime}\leq{m}\mbox{ (and all choices of the signs $\pm$). }$
For $m=1$, we say that $\boldsymbol{t}=t\in{\mathbb{R}}$ is _${\lambda}$
-spread_ if
$\Big{\|}\frac{t}{2\pi
n}\Big{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\geq\frac{{\lambda}}{n}.$
We remark that the definition above prevents $t_{r}$ from being close to
either $t_{r^{\prime}}$ or $-t_{r^{\prime}}$; this condition is necessary in
order to hope for asymptotic independence between $P_{n}(t_{r})$ and
$P_{n}(t_{r^{\prime}})$, especially in the case that $\xi$ is real-valued.
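The spread condition is equally easy to check numerically; in the sketch below (illustrative values only) the first tuple fails precisely because $t_{1}+t_{2}=0$:

```python
import math

def torus_norm(x: float) -> float:
    """Distance from x to the nearest integer (the R/Z norm)."""
    return abs(x - round(x))

def is_spread(t: list, lam: float, n: int) -> bool:
    """Check that the tuple t is lambda-spread (Definition 2.6):
    ||(t_r +/- t_r')/(2*pi*n)|| >= lam/n for all pairs r < r'."""
    m = len(t)
    if m == 1:
        return torus_norm(t[0] / (2 * math.pi * n)) >= lam / n
    return all(
        torus_norm((t[r] + s * t[rp]) / (2 * math.pi * n)) >= lam / n
        for r in range(m) for rp in range(r + 1, m) for s in (+1, -1)
    )

n = 1000
# A tuple with t_1 = -t_2 fails: t_1 + t_2 = 0 has zero torus norm.
print(is_spread([37.0, -37.0], 1.0, n))   # False
print(is_spread([10.0, 25.0, 47.0], 1.0, n))  # True
```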
In what follows we denote
$s_{\alpha}:=nx_{\alpha},\qquad\alpha\in[N].$ (2.9)
Recalling the scaled polynomial $\widetilde{P}$ from (2.5), we have
$Y_{\alpha}=-\frac{1}{n}\frac{{\operatorname{Re}}(\widetilde{P}_{n}(s_{\alpha})\overline{\widetilde{P}_{n}^{\prime}(s_{\alpha})})}{|\widetilde{P}_{n}^{\prime}(s_{\alpha})|^{2}}\qquad
Z_{\alpha}=n\frac{{\operatorname{Im}}(\widetilde{P}_{n}(s_{\alpha})\overline{\widetilde{P}_{n}^{\prime}(s_{\alpha})})}{|\widetilde{P}_{n}^{\prime}(s_{\alpha})|}.$
(2.10)
The main step towards the proof of Proposition 2.5 is the following:
###### Proposition 2.7.
Fix an $m$-tuple of indices $(\alpha_{1},\dots,\alpha_{m})\in[N]^{m}$. Assume
for some ${\kappa}>0$ that $s_{\alpha_{1}},\dots,s_{\alpha_{m}}$ are
$n^{\kappa}$-smooth and that
$\boldsymbol{s}=(s_{{\alpha_{1}}},\dots,s_{{\alpha_{m}}})$ is $1$-spread. Then
for any $\tau>0$,
$\bigg{|}\,{\mathbf{P}}\bigg{(}\bigwedge_{i\in[{m}]}|X_{\alpha_{i}}|\leq
\tau\bigg{)}-{\mathbf{P}}_{{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}}\bigg{(}\bigwedge_{i\in[m]}|X_{\alpha_{i}}|\leq
\tau\bigg{)}\,\bigg{|}=o(N^{-{m}}),$
where the rate of convergence depends on ${m},\tau,{\kappa},$ and ${K_{0}}$.
We prove Proposition 2.7 in Section 3 below, where we convert the task to a
problem involving a random walk in ${\mathbb{R}}^{4m}$. Before proceeding we
collect the following useful property of spread $m$-tuples, which basically
says that we can simultaneously dilate the points $t_{r}$ to be well separated
on the torus. This result will be useful for the proof of Lemma 3.6 below for
showing that the distribution of an associated random walk is genuinely full-
dimensional, and also for Section 9 when we bound
$\prod_{r=1}^{{m}-1}\|\frac{L(t_{m}\pm t_{r})}{2\pi
n}\|_{{\mathbb{R}}/{\mathbb{Z}}}$ from below for some $L$.
###### Lemma 2.8.
Assume $(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ is ${\lambda}$-spread for some
${\lambda}>0$, and let ${\lambda}\leq K=o(n)$. There exists an integer
${L}\asymp n/K$ such that
$\Big{\|}\frac{{L}\cdot(t_{r}\pm t_{r^{\prime}})}{2\pi
n}\Big{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\gg_{m}{\lambda}/K\qquad\forall 1\leq
r<r^{\prime}\leq m$ (2.11)
(and all choices of the signs). In particular, if $(t_{1},\dots,t_{m})$ is
$\omega(1)$-spread then there exists ${L}\leq n$ such that
$\Big{\|}\frac{{L}\cdot(t_{r}\pm t_{r^{\prime}})}{2\pi
n}\Big{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\gg_{m}1\qquad\forall 1\leq
r<r^{\prime}\leq m.$ (2.12)
In case $m=1$ then there exists an integer ${L}\asymp n/K$ such that
$\|\frac{{L}\cdot t}{2\pi n}\|_{{\mathbb{R}}/{\mathbb{Z}}}\gg_{m}{\lambda}/K.$
###### Proof.
The case $m=1$ is clear, so we just need to focus on $m\geq 2$. Let
$\varepsilon=\varepsilon({m})>0$ be a small constant to be chosen below, and
assume towards a contradiction that for every $j\in[n/2K,n/K]$ there exists a
pair of distinct indices $r,r^{\prime}\in[m]$ such that
$\min\bigg{\\{}\bigg{\|}\frac{j(t_{r}-t_{r^{\prime}})}{2\pi
n}\bigg{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\,,\,\bigg{\|}\frac{j(t_{r}+t_{r^{\prime}})}{2\pi
n}\bigg{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\bigg{\\}}\leq\varepsilon{\lambda}/K.$
(2.13)
By pigeonholing, there is a pair of distinct indices $r,r^{\prime}\in[{m}]$
and a subset $J\subset[n/2K,n/K]$ of size $\gg n/(K{m}^{2})$ such that either the
first quantity in the minimum in (2.13) is bounded by $\varepsilon{\lambda}/K$
for all $j\in J$, or the second is bounded by $\varepsilon{\lambda}/K$ for all
$j\in J$. We focus on the former case; the latter is handled by a similar
argument.
As $|J|$ is of the same order as its diameter, there exists $C=O_{m}(1)$ so
that $CJ-CJ$ contains a homogeneous arithmetic progression of length $\gg n/K$
(see for instance [Tao10, Lemma B.3]).
###### Claim 2.9.
Let $z=e^{i\theta}$ with $|\theta|\leq\pi/8$, and suppose that
$|1-z^{\ell}|\leq 1/32$ for all $1\leq\ell\leq M$, where $M$ is sufficiently
large. Then $|\theta|=O(1/M)$.
###### Proof.
Since $|1-e^{i\varphi}|=2|\sin(\varphi/2)|\geq\frac{2}{\pi}|\varphi|$ for
$|\varphi|\leq\pi$, we argue by induction that $|2^{k}\theta|\leq\pi/8$ for all
$0\leq k\leq\log_{2}M$: if $|2^{k}\theta|\leq\pi/8$ then
$|2^{k+1}\theta|\leq\pi/4$, and the hypothesis $|1-z^{2^{k+1}}|\leq 1/32$
forces $|2^{k+1}\theta|\leq\pi/64$. Hence
$|2^{\lfloor\log_{2}M\rfloor}\theta|\leq\pi/64$, and so $|\theta|=O(1/M)$. ∎
By the triangle inequality and Claim 2.9, for $\varepsilon$ sufficiently small
depending on $C$, this would imply that there exists
$C_{r,r^{\prime}}=O_{m}(1)$ such that
$\bigg{\|}\frac{C_{r,r^{\prime}}(t_{r}-t_{r^{\prime}})}{2\pi
n}\bigg{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\ll_{m}\frac{\varepsilon{\lambda}/K}{n/K}\ll_{m}\varepsilon{\lambda}/n.$
(2.14)
Let ${\mathcal{N}}_{1}$ be the collection of all pairs $(r,r^{\prime})$ such
that (2.14) holds, taking $C_{r,r^{\prime}}$ to be the smallest such positive
integer. We have shown that ${\mathcal{N}}_{1}$ is nonempty. By the assumption
that $\boldsymbol{t}$ is ${\lambda}$-spread we have that $C_{r,r^{\prime}}>1$
for all $(r,r^{\prime})\in{\mathcal{N}}_{1}$.
###### Claim 2.10.
Assume that for some $x\in{\mathbb{R}},\delta>0$ and positive integer $M$ we
have $\|x\|_{{\mathbb{R}}/{\mathbb{Z}}}>\delta$ and
$\|Mx\|_{{\mathbb{R}}/{\mathbb{Z}}}\leq\delta$. Then
$\|x\|_{{\mathbb{R}}/{\mathbb{Z}}}>1/(2M).$
###### Proof.
If instead $\|x\|_{{\mathbb{R}}/{\mathbb{Z}}}\leq 1/(2M)$, then
$M\|x\|_{{\mathbb{R}}/{\mathbb{Z}}}\leq 1/2$, so
$\|Mx\|_{{\mathbb{R}}/{\mathbb{Z}}}=M\|x\|_{{\mathbb{R}}/{\mathbb{Z}}}>M\delta\geq\delta$,
a contradiction. ∎
From the above claim, (2.14), and the assumption $\boldsymbol{t}$ is
${\lambda}$-spread, it follows that if $\varepsilon$ is sufficiently small,
then
$\bigg{\|}\frac{t_{r}-t_{r^{\prime}}}{2\pi
n}\bigg{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\geq 1/(2C_{r,r^{\prime}})$
for each $(r,r^{\prime})\in{\mathcal{N}}_{1}$. Set
$D_{1}=\prod_{(r,r^{\prime})\in{\mathcal{N}}_{1}}C_{r,r^{\prime}}=O_{m}(1)$,
and let $I_{1}$ be the intersection of the progression $\\{1+\ell
D_{1}\\}_{\ell\in{\mathbb{Z}}}$ with $[n/2K,n/K]$. Applying the triangle
inequality, if $L=1+lD_{1}\in I_{1}$ then for all
$(r,r^{\prime})\in{\mathcal{N}}_{1}$,
$\displaystyle\bigg{\|}\frac{L(t_{r}-t_{r^{\prime}})}{2\pi
n}\bigg{\|}_{{\mathbb{R}}/{\mathbb{Z}}}=\bigg{\|}\frac{(1+lD_{1})(t_{r}-t_{r^{\prime}})}{2\pi
n}\bigg{\|}_{{\mathbb{R}}/{\mathbb{Z}}}$
$\displaystyle\geq\bigg{\|}\frac{t_{r}-t_{r^{\prime}}}{2\pi
n}\bigg{\|}_{{\mathbb{R}}/{\mathbb{Z}}}-\bigg{\|}\frac{l\frac{D_{1}}{C_{r,r^{\prime}}}C_{r,r^{\prime}}(t_{r}-t_{r^{\prime}})}{2\pi
n}\bigg{\|}_{{\mathbb{R}}/{\mathbb{Z}}}$ $\displaystyle\geq
1/(2C_{r,r^{\prime}})-(n/K)O_{m}(\varepsilon{\lambda}/n)\geq\varepsilon{\lambda}/K$
provided that $\varepsilon$ is sufficiently small. Now if no $L\in I_{1}$
satisfies the conclusion of our lemma, then for each $L\in I_{1}$ there is a
pair $(r,r^{\prime})\notin{\mathcal{N}}_{1}$ that violates the condition, and
then we repeat the above process, with ${\mathcal{N}}_{2}$ being the
collection of such pairs. Set
$D_{2}=\prod_{(r,r^{\prime})\in{\mathcal{N}}_{2}}C_{r,r^{\prime}}$ (and so
$D_{2}=O_{m}(1)$) and let $I_{2}$ be the intersection of the progression
$\\{1+\ell D_{1}D_{2}\\}_{\ell\in{\mathbb{Z}}}$ with $[n/2K,n/K]$, we then
continue the process as above. As each time we get rid of at least one pair
$(t_{r},t_{r^{\prime}})$, the process for differences terminates after
$\binom{{m}}{2}$ steps with $\Theta(n/K)$ indices left to choose. Finally, we
can start the process for $t_{r}+t_{r^{\prime}}$ with $j$ (appearing in
(2.13)) chosen from these indices; the remaining iterations are identical to
those above.∎
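Although the proof of Lemma 2.8 is non-constructive, for small parameters one can simply search for the dilation $L$ by brute force. The sketch below (all numerical values are illustrative, not from the text) maximizes the smallest torus norm of $L(t_{r}\pm t_{r^{\prime}})/(2\pi n)$ over $L\in[n/2K,n/K]$:

```python
import math

def torus_norm(x: float) -> float:
    """Distance from x to the nearest integer (the R/Z norm)."""
    return abs(x - round(x))

def best_dilation(t, n, K):
    """Brute-force search over integers L in [n/2K, n/K] for the dilation of
    Lemma 2.8, maximizing the smallest torus norm of L*(t_r +/- t_r')/(2*pi*n)."""
    m = len(t)
    pairs = [t[r] + s * t[rp] for r in range(m)
             for rp in range(r + 1, m) for s in (+1, -1)]
    best_L, best_val = None, -1.0
    for L in range(max(1, n // (2 * K)), n // K + 1):
        val = min(torus_norm(L * p / (2 * math.pi * n)) for p in pairs)
        if val > best_val:
            best_L, best_val = L, val
    return best_L, best_val

t = [3.1, 7.4, 19.9]      # an illustrative spread triple
L, val = best_dilation(t, n=2000, K=20)
print(L, val)
```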
## 3 Random walk in phase space
The key ingredients for the proof of Proposition 2.7 are local small ball
estimates and a comparison principle for an associated random walk in
${\mathbb{R}}^{4{m}}$, which we now define.
For a fixed tuple $\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ and
$j\in{\mathbb{Z}}$ we denote the vectors
$\displaystyle{\boldsymbol{a}}_{j}={\boldsymbol{a}}_{j}(\boldsymbol{t})$
$\displaystyle:=\big{(}\sin(jt_{1}/n),\dots,\sin(jt_{m}/n)\big{)}\;\in{\mathbb{R}}^{m}$
$\displaystyle{\boldsymbol{b}}_{j}={\boldsymbol{b}}_{j}(\boldsymbol{t})$
$\displaystyle:=\big{(}\cos(jt_{1}/n),\dots,\cos(jt_{m}/n)\big{)}\;\in{\mathbb{R}}^{m}$
and
$\qquad{\boldsymbol{w}}_{j}={\boldsymbol{w}}_{j}(\boldsymbol{t})=\big{(}{\boldsymbol{a}}_{j}\,,\,(j/n){\boldsymbol{b}}_{j}\,,\,{\boldsymbol{b}}_{j}\,,\,-(j/n){\boldsymbol{a}}_{j}\big{)}\;\in{\mathbb{R}}^{4m}\,.$
(3.1)
For a finite set $J\subset{\mathbb{Z}}$ we let $W_{J}=W_{J}(\boldsymbol{t})$
be the matrix with rows ${\boldsymbol{w}}_{j}$, $j\in J$. Note that
$\boldsymbol{w}_{j}$ gives the values of the functions
$\sin(\frac{j}{n}\,\cdot),\cos(\frac{j}{n}\,\cdot)$ and their derivatives at
the points $t_{1},\dots,t_{m}$. We consider the random walk
$S_{n}(\boldsymbol{t}):=\sum_{j=-n}^{n}\xi_{j}\boldsymbol{w}_{j}(\boldsymbol{t})=W_{[-n,n]}^{\mathsf{T}}{\boldsymbol{\xi}}\;\in\,{\mathbb{R}}^{4m}$
(3.2)
with ${\boldsymbol{\xi}}=(\xi_{j})_{j\in[-n,n]}$ a vector of iid copies of a
real-valued $\xi$.
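A small numerical sketch of the construction (3.1)–(3.2) may be helpful. It builds the matrix $W_{[-n,n]}(\boldsymbol{t})$ for an illustrative pair of spread points and confirms that the covariance matrix of $\frac{1}{\sqrt{2n+1}}S_{n}(\boldsymbol{t})$ is strictly positive definite, as Lemma 3.6 below guarantees quantitatively:

```python
import numpy as np

def walk_matrix(t, n):
    """Rows w_j, j = -n..n, of the matrix W_{[-n,n]}(t) from (3.1):
    w_j = (a_j, (j/n) b_j, b_j, -(j/n) a_j) in R^{4m}."""
    t = np.asarray(t, dtype=float)
    rows = []
    for j in range(-n, n + 1):
        a = np.sin(j * t / n)
        b = np.cos(j * t / n)
        rows.append(np.concatenate([a, (j / n) * b, b, -(j / n) * a]))
    return np.array(rows)

t = [2.3, 5.9]                 # m = 2 illustrative (spread) points
n = 400
W = walk_matrix(t, n)          # shape (2n+1, 4m)
cov = W.T @ W / (2 * n + 1)    # covariance of S_n / sqrt(2n+1)
lam_min = np.linalg.eigvalsh(cov).min()
print(W.shape, lam_min > 0)
```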
### 3.1 Control on the characteristic function
The following is the key technical ingredient for controlling the distribution
of the random walks $S_{n}(\boldsymbol{t})$.
###### Theorem 3.1.
Let $\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ be
$n^{\kappa}$-smooth and ${\lambda}$-spread for some ${\kappa}\in(0,1)$ and
$\omega(n^{-1/8{m}})\leq{\lambda}\leq 1$. Then for any fixed $K_{*}<\infty$
and any $\boldsymbol{x}\in{\mathbb{R}}^{4m}$ with
$n^{-1/8}\leq\|\boldsymbol{x}\|_{2}\leq n^{K_{*}}$,
$|{\mathbf{E}}e(\langle
S_{n}(\boldsymbol{t}),\boldsymbol{x}\rangle)|\leq\exp(-\log^{2}n)$
for all $n$ sufficiently large depending on $K_{*},m,\kappa,$ and the sub-
Gaussian constant for $\xi$.
We note that here the sub-Gaussianity hypothesis enters only to have a uniform
anti-concentration bound for $\xi$ and could be replaced by a bound on the
Lévy concentration function.
We defer the proof of this theorem to Section 9. Now we state the two main
consequences of Theorem 3.1 towards the proof of Theorem 1.2. By combining
Theorem 3.1 with an Edgeworth expansion, we will obtain the following
quantitative comparison with the Gaussian model. In the following we write
$\Gamma=\Gamma_{n}(\boldsymbol{t})\in{\mathbb{R}}^{4m}$ for a Gaussian vector
with covariance matrix $\frac{1}{2n+1}W_{[-n,n]}^{\mathsf{T}}W_{[-n,n]}$. Note
that this is the distribution of $\frac{1}{\sqrt{2n+1}}S_{n}(\boldsymbol{t})$
with iid standard real Gaussians in place of $\xi_{j}$.
###### Remark 3.3.
The proof shows that in place of the sub-Gaussianity assumption we only need
that $\xi$ has $O({m})$ finite moments.
We defer the proof of Theorem 3.2 to Section 8.
By standard arguments, the control on the characteristic function of
$S_{n}(\boldsymbol{t})$ provided by Theorem 3.1 yields an optimal small ball
estimate at arbitrary polynomial scales:
###### Theorem 3.4 (Small ball estimate).
With $\boldsymbol{t}=(t_{1},\dots,t_{m})$ as in Theorem 3.1, for any
$K<\infty$ and any $\delta\geq n^{-K}$,
$\sup_{w\in{\mathbb{R}}^{4m}}{\mathbf{P}}\bigg{(}\frac{1}{\sqrt{2n+1}}S_{n}(\boldsymbol{t})\in
B(w,\delta)\bigg{)}=O_{{m},{\kappa},K}({\lambda}^{-3{m}}\delta^{4{m}}).$
The proof of Theorem 3.4 is deferred to Section 7. We note the following
consequence, giving anti-concentration for the polynomial $P_{n}$ .
###### Corollary 3.5 (Small ball estimate for polynomials).
Assume that $t$ is $n^{\kappa}$-smooth. Then for any $K>0$ and
$\delta\in[n^{-K},1]$,
${\mathbf{P}}(|P_{n}(t/n)|\leq\delta)=O_{\kappa,K}(\delta^{2})\qquad\text{ and
}\qquad{\mathbf{P}}(|P^{\prime}_{n}(t/n)|\leq\delta)=O_{\kappa,K}(\delta^{2}).$
### 3.2 Non-degeneracy of the covariance matrix
As a first step towards controlling the distribution of
$S_{n}(\boldsymbol{t})$ we need to show that the random walk is genuinely
$4m$-dimensional, which amounts to showing the covariance matrix
$W_{[-n,n]}^{\mathsf{T}}W_{[-n,n]}$ has smallest singular value of order $n$.
This is accomplished by the following lemma, under the (necessary) assumption
that the points $t_{1},\dots,t_{m}$ are spread.
###### Lemma 3.6.
Let $J\subset[n]$ be an interval with $|J|\gg n$. If
$\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ is ${\lambda}$-spread
for some ${\lambda}>0$, then
$\|W_{J}(\boldsymbol{t})u\|_{2}^{2}\gg_{m}\min({\lambda},1)^{6m-3}n$
uniformly over unit vectors $u\in S^{4m-1}$.
###### Remark 3.7.
We note that for the case $\xi_{j}\sim{\mathcal{N}}_{\mathbb{R}}(0,1)$, the
above control on the covariance matrix is enough to deduce an optimal small
ball estimate at all scales. For general distributions we need Theorem 3.1,
the proof of which amounts to showing that for $v$ of size $n^{O(1)}$, the
vector $W_{J}(\boldsymbol{t})v$ avoids the _lattice_ ${\mathbb{Z}}^{n}$, rather
than just the origin as above.
###### Proof of Lemma 3.6.
Without loss of generality we may assume ${\lambda}\in(0,1)$. Fix a vector
$u=(u^{1},u^{2},u^{3},u^{4})\in S^{4{m}-1}$. The $j$th entry of
$W_{J}(\boldsymbol{t})u$ is
$\langle\boldsymbol{w}_{j},u\rangle=\sum_{r=1}^{m}u^{1}_{r}\sin(jt_{r}/n)+u^{2}_{r}(j/n)\cos(jt_{r}/n)+u^{3}_{r}\cos(jt_{r}/n)-u^{4}_{r}(j/n)\sin(jt_{r}/n).$
Substituting $\cos(jt_{r}/n)=\frac{1}{2}(e_{n}(jt_{r})+e_{n}(-jt_{r}))$ and
$\sin(jt_{r}/n)=-\frac{{\sqrt{-1}}}{2}(e_{n}(jt_{r})-e_{n}(-jt_{r}))$, the
above becomes
$\displaystyle\frac{1}{2}\sum_{r=1}^{m}(u_{r}^{3}-{\sqrt{-1}}u_{r}^{1})e_{n}(jt_{r})+(u_{r}^{3}+{\sqrt{-1}}u_{r}^{1})e_{n}(-jt_{r})$
$\displaystyle\qquad\qquad\qquad\qquad+(u_{r}^{2}+{\sqrt{-1}}u_{r}^{4})(j/n)e_{n}(jt_{r})+(u_{r}^{2}-{\sqrt{-1}}u_{r}^{4})(j/n)e_{n}(-jt_{r})$
$\displaystyle\qquad=\Big{\langle}\big{(}\boldsymbol{e}_{j},\bar{\boldsymbol{e}}_{j},(j/n)\boldsymbol{e}_{j},(j/n)\bar{\boldsymbol{e}}_{j}\big{)}\,,\,Au\Big{\rangle}$
where
$\boldsymbol{e}_{j}:=(e_{n}(jt_{1}),\dots,e_{n}(jt_{m}))$
and
$A=\frac{1}{2}\begin{pmatrix}-{\sqrt{-1}}I_{m}&0&I_{m}&0\\\
{\sqrt{-1}}I_{m}&0&I_{m}&0\\\ 0&I_{m}&0&{\sqrt{-1}}I_{m}\\\
0&I_{m}&0&-{\sqrt{-1}}I_{m}\end{pmatrix}$
where $I_{m}$ is the ${m}\times{m}$ identity matrix and 0 is the square matrix
of 0s. Since $\|A^{-1}\|=O(1)$, it suffices to show
$\|Mv\|_{2}^{2}\gg_{m}{\lambda}^{6m-3}n$
uniformly for $v$ in the complex sphere $S_{\mathbb{C}}^{4{m}-1}$, where
$M\in{\mathbb{C}}^{n\times 4m}$ is the matrix with rows
$(\boldsymbol{e}_{j},\bar{\boldsymbol{e}}_{j},{\sqrt{-1}}(j/n)\boldsymbol{e}_{j},{\sqrt{-1}}(j/n)\bar{\boldsymbol{e}}_{j}).$
From Lemma 2.8 there exists an integer ${L}$ with $n\ll_{m}{L}<n/100m$ such
that
$\Big{\|}\frac{{L}\cdot(t_{r}\pm t_{r^{\prime}})}{2\pi
n}\Big{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\gg_{m}{\lambda}\qquad\forall 1\leq
r<r^{\prime}\leq m.$
For notational convenience we will consider $M$ with rows of the general form
$(e_{n}(jt_{1}),\dots,e_{n}(jt_{d}),{\sqrt{-1}}(j/n)e_{n}(jt_{1}),\dots,{\sqrt{-1}}(j/n)e_{n}(jt_{d}))$
satisfying
$\Big{\|}\frac{{L}\cdot(t_{r}-t_{r^{\prime}})}{2\pi
n}\Big{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\geq{\lambda}_{0}\qquad\forall 1\leq
r<r^{\prime}\leq d$ (3.3)
for some ${\lambda}_{0}\in(0,1)$ and $n\ll_{d}L<n/50d$, and aim to show
$\inf_{v\in S_{\mathbb{C}}^{2d-1}}\|Mv\|_{2}^{2}\gg_{d}{\lambda}_{0}^{3d-3}n.$
(3.4)
One passes back to the previous case by taking $d=2{m}$ and
$(t_{1},\dots,t_{2{m}})=(t_{1},\dots,t_{m},-t_{1},\dots,-t_{m})$, and
substituting any $c(m){\lambda}$ for ${\lambda}_{0}$.
Let $P$ denote the intersection of the interval $J$ with the progression
$\\{iL:i\in{\mathbb{Z}}\\}$, and let $M_{P}$ denote the submatrix of $M$ with
rows indexed by $P$. Note that $|P|\asymp_{d}1$. We will first show
$\inf_{v\in
S_{\mathbb{C}}^{2d-1}}\|M_{P}v\|_{2}^{2}\gg_{d}{\lambda}_{0}^{2d-2}.$ (3.5)
To do this we consider the twisted second-order differencing operators of the
form
$(D_{t_{0}}f)(j):=\sum_{a=0}^{2}{2\choose a}(-1)^{a}e_{n}(-aLt_{0})f(j+aL)$ (3.6)
acting on sequences $f:P\to{\mathbb{C}}$, for various choices of the parameter
$t_{0}\in{\mathbb{R}}$. Let us denote
$f_{t}(j)=e_{n}(jt),\qquad
g_{t}(j)={\sqrt{-1}}(j/n)e_{n}(jt)=\partial_{t}f_{t}(j).$
For $t,t_{0}\in{\mathbb{R}}$ and any $j\in P$ with $j+2L\in P$, we have
$(D_{t_{0}}f_{t})(j)=e_{n}(jt)\sum_{a=0}^{2}{2\choose
a}(-1)^{a}e_{n}(aL(t-t_{0}))=\big{[}1-e_{n}(L(t-t_{0}))\big{]}^{2}f_{t}(j)$ (3.7)
and
$\displaystyle(D_{t_{0}}g_{t})(j)$ $\displaystyle=\sum_{a=0}^{2}{2\choose
a}(-1)^{a}e_{n}(-aLt_{0}){\sqrt{-1}}\frac{j+aL}{n}e_{n}((j+aL)t)$
$\displaystyle={\sqrt{-1}}(j/n)(D_{t_{0}}f_{t})(j)+e_{n}(jt)\bigg{[}-2{\sqrt{-1}}\frac{L}{n}e_{n}(L(t-t_{0}))+2{\sqrt{-1}}\frac{L}{n}e_{n}(2L(t-t_{0}))\bigg{]}$
$\displaystyle=\big{[}1-e_{n}(L(t-t_{0}))\big{]}^{2}g_{t}(j)-2{\sqrt{-1}}\frac{L}{n}e_{n}(L(t-t_{0}))\big{[}1-e_{n}(L(t-t_{0}))\big{]}f_{t}(j)$
$\displaystyle=\big{[}1-e_{n}(L(t-t_{0}))\big{]}^{2}\big{[}g_{t}(j)+\beta_{L}(t-t_{0})f_{t}(j)\big{]}$
(3.8)
where we write
$\beta_{L}(s):=-2{\sqrt{-1}}\frac{L}{n}e_{n}(Ls)/\big{[}1-e_{n}(Ls)\big{]}$.
In particular, we have
$(D_{t_{0}}f_{t_{0}})(j)=(D_{t_{0}}g_{t_{0}})(j)=0\qquad\forall j.$ (3.9)
The key point about the factors $1-e_{n}(L\,\cdot)$ and $\beta_{L}(\cdot)$ is
that they are independent of $j$ and hence pass through the difference
operators $D_{t_{0}}$.
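The identities (3.7) and (3.9) can be confirmed numerically. In the sketch below (parameter values are illustrative), the operator $D_{t_{0}}$ annihilates $f_{t_{0}}$ and $g_{t_{0}}$, and acts on $f_{t}$ by the scalar factor $[1-e_{n}(L(t-t_{0}))]^{2}$:

```python
import math
import cmath

n, L = 500, 40
e_n = lambda x: cmath.exp(1j * x / n)   # e_n(x) = exp(i x / n)

def f(t, j):
    """f_t(j) = e_n(j t)."""
    return e_n(j * t)

def g(t, j):
    """g_t(j) = i (j/n) e_n(j t), the t-derivative of f_t(j)."""
    return 1j * (j / n) * e_n(j * t)

def D(t0, h, j):
    """Twisted second-order difference (3.6):
    sum_a C(2,a) (-1)^a e_n(-a L t0) h(j + a L)."""
    return sum(math.comb(2, a) * (-1) ** a * e_n(-a * L * t0) * h(j + a * L)
               for a in range(3))

t, t0, j = 7.3, 2.1, 11
# (3.9): D_{t0} annihilates both f_{t0} and g_{t0}.
print(abs(D(t0, lambda jj: f(t0, jj), j)) < 1e-12,
      abs(D(t0, lambda jj: g(t0, jj), j)) < 1e-12)
# (3.7): D_{t0} f_t = [1 - e_n(L(t - t0))]^2 f_t.
lhs = D(t0, lambda jj: f(t, jj), j)
rhs = (1 - e_n(L * (t - t0))) ** 2 * f(t, j)
print(abs(lhs - rhs) < 1e-12)
```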
For the lower bound (3.5) we partition the sphere into $d$ pieces
$S_{r}=\\{v\in S_{\mathbb{C}}^{2d-1}:|v_{r}|^{2}+|v_{r+d}|^{2}\geq
1/d\\}\,\qquad 1\leq r\leq d$
and prove the bound separately on each piece. By symmetry it suffices to treat
$S_{d}$. We abbreviate
$G:=\prod_{r=1}^{d-1}\big{[}1-e_{n}(L(t_{d}-t_{r}))\big{]}^{2}\,,\qquad
H:=\sum_{r=1}^{d-1}\beta_{L}(t_{d}-t_{r}).$
Iterating the identities (3.7)–(3.9), we obtain that for any $j\in P$ such
that $j+2dL\in P$,
$\big{(}D_{t_{1}}\circ\cdots\circ D_{t_{d-1}}f_{t_{r}}\big{)}(j)=0\qquad 1\leq
r\leq d-1$
and otherwise
$\big{(}D_{t_{1}}\circ\cdots\circ D_{t_{d-1}}f_{t_{d}}\big{)}(j)=G\cdot
f_{t_{d}}(j).$
Similarly,
$\big{(}D_{t_{1}}\circ\cdots\circ D_{t_{d-1}}g_{t_{r}}\big{)}(j)=0\qquad 1\leq
r\leq d-1$
and otherwise
$\big{(}D_{t_{1}}\circ\cdots\circ
D_{t_{d-1}}g_{t_{d}}\big{)}(j)=G\cdot\big{(}g_{t_{d}}(j)+H\cdot
f_{t_{d}}(j)\big{)}.$
Fix an arbitrary $v\in S_{d}$. Recognizing the sequences $(f_{t_{r}}(j))_{j\in
P},(g_{t_{r}}(j))_{j\in P}$ as the $2d$ columns of $M_{P}$, we have
$(M_{P}v)_{j}=\sum_{r=1}^{d}v_{r}f_{t_{r}}(j)+v_{r+d}g_{t_{r}}(j).$
Letting $\boldsymbol{D}$ be the matrix associated to the linear operator
$D_{t_{1}}\circ\cdots\circ D_{t_{d-1}}$ on ${\mathbb{C}}^{P}$, we have
$\displaystyle(\boldsymbol{D}M_{P}v)_{j}$
$\displaystyle=v_{d}Gf_{t_{d}}(j)+v_{2d}G(g_{t_{d}}(j)+Hf_{t_{d}}(j))$
$\displaystyle=G\cdot
e_{n}(jt_{d})\big{[}v_{d}+\big{(}{\sqrt{-1}}(j/n)+H\big{)}v_{2d}\big{]}$
for each $j\in P$ such that $j+2dL\in P$. Taking the modulus of each side and
square-summing we obtain
$\displaystyle\sum_{j\in P:j+2dL\in P}|(\boldsymbol{D}M_{P}v)_{j}|^{2}$
$\displaystyle=|G|^{2}\sum_{j\in P:j+2dL\in
P}\big{|}v_{d}+\big{(}{\sqrt{-1}}(j/n)+H\big{)}v_{2d}\big{|}^{2}.$
From (3.3) we have
$|G|\geq(c{\lambda}_{0})^{2d-2},\qquad H=O(d/{\lambda}_{0}).$
In particular, since $v_{d},v_{2d}$ and $H$ are independent of $j$, and
$|v_{d}|^{2}+|v_{2d}|^{2}\geq 1/d$, the sum on the right hand side of the
previous display is at least $\gg|P|/d^{2}\gg_{d}1$, so
$\sum_{j\in P:j+2dL\in
P}|(\boldsymbol{D}M_{P}v)_{j}|^{2}\gg_{d}{\lambda}_{0}^{2d-2}.$
On the other hand, since the matrix $\boldsymbol{D}$ has
$\ell_{2}(P)\to\ell_{2}(P)$ operator norm $O(d)$, the left hand side is
bounded above by $\ll_{d}\|M_{P}v\|_{2}^{2}$, and we obtain (3.5) as desired.
It only remains to prove (3.4). Consider the submatrices
$M_{P},M_{1+P},\dots,M_{n_{0}+P}$ composed of rows indexed by the shifted
progressions $P,1+P,\dots,n_{0}+P$, respectively. If $n_{0}<L$ then these
submatrices are all disjoint. Moreover, letting $F$ denote the
$2d$-dimensional diagonal matrix with diagonal entries
$e_{n}(t_{1}),\dots,e_{n}(t_{d}),e_{n}(t_{1}),\dots,e_{n}(t_{d})$, we note
that $M_{k+P}$ and $M_{P}F^{k}$ differ by a matrix of norm $O_{d}(k/n)$ (as
they differ only in the dilation factors ${\sqrt{-1}}j/n$ in the last $d$
columns). Since $F$ is unitary we have
$\sigma_{2d}(M_{P}F^{k})=\sigma_{2d}(M_{P})\gg_{d}{\lambda}_{0}^{d-1}$, and
taking $n_{0}=c(d){\lambda}_{0}^{d-1}n$ for $c(d)>0$ sufficiently small
depending on $d$, from the triangle inequality we obtain that
$\sigma_{2d}(M_{k+P})\gg_{d}{\lambda}_{0}^{d-1}$ for all $1\leq k\leq n_{0}$.
Since $1+P,\dots,n_{0}+P$ are disjoint, we conclude that for any fixed $v\in
S_{\mathbb{C}}^{2d-1}$,
$\|Mv\|_{2}^{2}\geq\sum_{k=1}^{n_{0}}\|M_{k+P}v\|_{2}^{2}\gg_{d}n_{0}{\lambda}_{0}^{2d-2}\gg_{d}{\lambda}_{0}^{3d-3}n$
giving (3.4) as desired. ∎
## 4 Proof of Proposition 2.7
In this section we combine Theorems 3.2 and 3.4 to prove Proposition 2.7. In
fact we will need the following more general result, which in particular
establishes universality for the joint distribution of the recentered near-
local minimizers $Y_{\alpha_{i}}$ and corresponding near-local minima
$X_{\alpha_{i}}$.
###### Proposition 4.1.
Fix an $m$-tuple of indices $(\alpha_{1},\dots,\alpha_{m})\in[N]^{m}$, and
assume $\boldsymbol{s}=(s_{{\alpha_{1}}},\dots,s_{{\alpha_{m}}})$ is
$n^{\kappa}$-smooth and $1$-spread for some ${\kappa}>0$. Let
$J_{1},\dots,J_{m}\subset{\mathbb{R}}$,
$J^{\prime}_{1},\dots,J^{\prime}_{m}\subseteq[-\pi,\pi]$ be arbitrary compact
intervals with lengths in the range $[n^{-L_{0}},n^{L_{0}}]$ for some
$L_{0}>0$, and denote the event
${\mathcal{E}}=\bigwedge_{i\in[{m}]}\big{\\{}X_{\alpha_{i}}\in
J_{i},\,NY_{\alpha_{i}}\in J_{i}^{\prime}\big{\\}}.$ (4.1)
We have
$\displaystyle\big{|}{\mathbf{P}}({\mathcal{E}})-{\mathbf{P}}_{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}({\mathcal{E}})\big{|}\ll_{{m},{\kappa},L_{0}}\frac{\log^{O(m)}n}{n^{1/2}N^{m}}\prod_{i=1}^{m}|J_{i}||J_{i}^{\prime}|.$
(4.2)
Moreover, if $\boldsymbol{s}$ is $n^{\kappa}$-smooth and ${\lambda}$-spread
for some $\omega(n^{-1/8{m}})\leq{\lambda}\leq 1$, then we have the upper
bounds
${\mathbf{P}}({\mathcal{E}})\ll_{{m},{\kappa},L_{0}}\frac{\log^{O(m)}n}{{\lambda}^{3{m}}N^{{m}}}\prod_{i=1}^{m}|J_{i}||J_{i}^{\prime}|\,,$
(4.3)
and
${\mathbf{P}}_{{\mathcal{N}}_{\mathbb{R}}(0,1)}({\mathcal{E}})\ll_{{m},{\kappa},L_{0}}\frac{1}{{\lambda}^{O({m}^{2})}N^{{m}}}\prod_{i=1}^{m}|J_{i}||J_{i}^{\prime}|\,.$
(4.4)
For the above bounds, the point is that the trivial bound on
${\mathbf{P}}_{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}({\mathcal{E}})$, obtained by
controlling the Gaussian measure by Lebesgue measure, is of order
$N^{-m}\prod_{i=1}^{m}|J_{i}||J_{i}^{\prime}|$ (this will be shown in the
proof, but can also be understood on the heuristic level). For the error in
(4.2) we save a factor $\ll n^{-1/2+\varepsilon}$ over this bound, while in
(4.3) we obtain an upper bound of the same order for
${\mathbf{P}}({\mathcal{E}})$, up to a tolerable loss of a factor
${\lambda}^{-3m}\log^{O(m)}n$.
We commence with the proof of Proposition 4.1. Let $K_{*}>0$ be a constant to
be chosen sufficiently large, and set $\delta=n^{-K_{*}}$. We first describe the event
${\mathcal{E}}$ as a domain in ${\mathbb{R}}^{4{m}}$. Let ${\mathcal{D}}$
denote the annulus
${\mathcal{D}}:=B(0,C_{0}\sqrt{\log n})\setminus
B(0,\log^{-{K_{0}}/2}n)\subset{\mathbb{R}}^{2}.$
For ${\mathbf{b}}=(b,b^{\prime})\in{\mathbb{R}}^{2}$ we write
${\mathbf{b}}^{\perp}:=(b^{\prime},-b)$, and define the rectangles
$T_{i}({\mathbf{b}})=\bigg{\\{}{\mathbf{a}}\in{\mathbb{R}}^{2}:\frac{{\mathbf{a}}\cdot{\mathbf{b}}^{\perp}}{\|{\mathbf{b}}\|_{2}}\in\frac{1}{n}\cdot
J_{i}\;,\;\;-\frac{{\mathbf{a}}\cdot{\mathbf{b}}}{\|{\mathbf{b}}\|_{2}^{2}}\in\frac{n}{N}\cdot
J_{i}^{\prime}\bigg{\\}}\,,\,\qquad 1\leq i\leq{m},$ (4.5)
which have sides of length $n\|{\mathbf{b}}\|_{2}|J_{i}^{\prime}|/N$ and
$|J_{i}|/n$ in the direction of ${\mathbf{b}}$ and ${\mathbf{b}}^{\perp}$,
respectively. (Here we write $C\cdot J_{i}$ for the dilation of $J_{i}$ by a
factor $C$.) Let
$U_{i}=\Big{\\{}(a,a^{\prime},b,b^{\prime})=({\mathbf{a}},{\mathbf{b}})\in{\mathbb{R}}^{4}:{\mathbf{b}}\in{\mathcal{D}},\;{\mathbf{a}}\in
T_{i}({\mathbf{b}})\Big{\\}}\,,\qquad{\mathcal{U}}=\prod_{i=1}^{m}U_{i}\,.$
(4.6)
Abbreviating henceforth
$\widetilde{S}:=\frac{1}{\sqrt{2n+1}}S_{n}(\boldsymbol{t}),$ (4.7)
one sees that the left hand sides of (4.2) and (4.3) can be expressed as
$|{\mathbf{P}}(\widetilde{S}\in{\mathcal{U}})-{\mathbf{P}}(\Gamma\in{\mathcal{U}})|$
and ${\mathbf{P}}(\widetilde{S}\in{\mathcal{U}})$, respectively.
From the dimensions of the rectangles $T_{i}({\mathbf{b}})$ we have from
Fubini’s theorem that
$m_{\operatorname{{Leb}}}(U_{i})=\int_{\mathcal{D}}m_{\operatorname{{Leb}}}(T_{i}({\mathbf{b}}))d{\mathbf{b}}=\frac{\Delta}{N}|J_{i}||J^{\prime}_{i}|$
(4.8)
where we denote
$\Delta:=\int_{\mathcal{D}}\|{\mathbf{b}}\|d{\mathbf{b}}\sim\frac{2\pi
C_{0}^{3}}{3}\log^{3/2}n.$ (4.9)
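Indeed, passing to polar coordinates and discarding the negligible contribution of the inner disc of radius $\log^{-K_{0}/2}n$,

```latex
\Delta = \int_{\mathcal{D}} \|\mathbf{b}\|\,d\mathbf{b}
       = \int_{\log^{-K_{0}/2} n}^{C_{0}\sqrt{\log n}} r\cdot 2\pi r\,dr
       = \frac{2\pi}{3}\Big(C_{0}^{3}\log^{3/2} n - \log^{-3K_{0}/2} n\Big)
       \sim \frac{2\pi C_{0}^{3}}{3}\,\log^{3/2} n.
```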
Thus,
$m_{\operatorname{{Leb}}}({\mathcal{U}})=(\Delta/N)^{m}\prod_{i=1}^{m}|J_{i}||J_{i}^{\prime}|=\frac{\log^{O(m)}n}{N^{m}}\prod_{i=1}^{m}|J_{i}||J_{i}^{\prime}|.$
(4.10)
For the measure of ${\mathcal{U}}$ under the law of $\Gamma$, recall from
Lemma 3.6 that the inverse of the covariance matrix of $\Gamma$ has
operator norm of size $O({\lambda}^{-3m})$, and hence determinant of size
$O({\lambda}^{-O(m^{2})})$. By controlling the conditional density of $\Gamma$
in directions $({\mathbf{a}}_{1},\dots,{\mathbf{a}}_{m})$ for fixed
$({\mathbf{b}}_{1},\dots,{\mathbf{b}}_{m})$ by the Lebesgue measure, and then
integrating over ${\mathcal{D}}^{m}$ under the marginal Gaussian measure, we
get
${\mathbf{P}}(\Gamma\in{\mathcal{U}})\ll_{{m},{\kappa},L_{0}}\frac{1}{{\lambda}^{O({m}^{2})}N^{{m}}}\prod_{i=1}^{m}|J_{i}||J_{i}^{\prime}|,$
(4.11)
giving (4.4) as desired.
We next note that the corners of the rectangles $T_{i}({\mathbf{b}})$ are
$n^{O(L_{0}+1)}$-Lipschitz functions of ${\mathbf{b}}\in{\mathcal{D}}$. From
this it follows that if $K_{*}$ is sufficiently large depending on $L_{0}$ and
$m$, we can find sets
${\mathcal{U}}_{-}\subset{\mathcal{U}}\subset{\mathcal{U}}_{+}$ such that
${\mathcal{U}}_{-}$ and ${\mathcal{U}}_{+}\setminus{\mathcal{U}}_{-}$ are
unions of cubes in ${\mathbb{R}}^{4m}$ of side length $\delta$ with disjoint
interiors, and such that
$m_{\operatorname{{Leb}}}({\mathcal{U}}_{+}\setminus{\mathcal{U}}_{-})\leq
n^{-100}m_{\operatorname{{Leb}}}({\mathcal{U}})$ (say).
The bound (4.3) now follows by covering each cube in ${\mathcal{U}}_{+}$ with
balls of bounded overlap and applying the union bound, Theorem 3.4, and
(4.10).
For (4.2), we bound
$|{\mathbf{P}}(\widetilde{S}\in{\mathcal{U}})-{\mathbf{P}}(\Gamma\in{\mathcal{U}})|\leq{\mathbf{P}}(\widetilde{S}\in{\mathcal{U}}_{+}\setminus{\mathcal{U}}_{-})+{\mathbf{P}}(\Gamma\in{\mathcal{U}}_{+}\setminus{\mathcal{U}}_{-})+\sum_{Q}|{\mathbf{P}}(\widetilde{S}\in
Q)-{\mathbf{P}}(\Gamma\in Q)|$
where the sum runs over the cubes comprising ${\mathcal{U}}_{-}$. Using the
union bound and Theorem 3.4 as we did for ${\mathcal{U}}_{+}$, the first two
terms above are of size
$\ll m_{\operatorname{{Leb}}}({\mathcal{U}}_{+}\setminus{\mathcal{U}}_{-})\ll
n^{-100}m_{\operatorname{{Leb}}}({\mathcal{U}}).$
For the sum over $Q$, use Theorem 3.2 to bound each term by
$\ll_{m,\kappa,K_{*}}n^{-1/2}m_{\operatorname{{Leb}}}(Q)$. Altogether we have
$|{\mathbf{P}}(\widetilde{S}\in{\mathcal{U}})-{\mathbf{P}}(\Gamma\in{\mathcal{U}})|\ll_{m,\kappa,K_{*}}n^{-1/2}m_{\operatorname{{Leb}}}({\mathcal{U}})$
and the claim now follows from (4.10). This concludes the proof of Proposition
4.1. ∎
## 5 Proof of Proposition 2.5 for the real-valued case (moment comparison)
We condition on ${\mathcal{G}}_{2}({K_{0}}/2)$ throughout the proof. As
remarked before, in the real-valued case it suffices to work with
$x_{\alpha}\in[0,\pi]$ because $P_{n}(-x)=\overline{P_{n}(x)}$. We allow
implied constants to depend on ${m}$ and $\tau$ without indication. Recall also
that ${\kappa}_{0}$ in the definition (2.7) of ${{\mathcal{M}}}^{\sharp}_{n}$
is an absolute constant. For
$\boldsymbol{\alpha}=(\alpha_{1},\dots,\alpha_{m})\in[N]^{m}$ we denote the events
${\mathcal{E}}(\boldsymbol{\alpha}):=\bigg{\\{}\bigwedge_{i\in[{m}]}|X_{\alpha_{i}}|\leq\tau\bigg{\\}}\,.$
We have
$\displaystyle{\mathbf{E}}\Big{(}{{\mathcal{M}}}^{\sharp}_{n}\big{(}[-\tau,\tau]\big{)}^{m}\Big{)}$
$\displaystyle=\sum_{\boldsymbol{\alpha}\in
E}{\mathbf{P}}({\mathcal{E}}(\boldsymbol{\alpha}))=\sum_{\boldsymbol{\alpha}\in
E^{\prime}}{\mathbf{P}}({\mathcal{E}}(\boldsymbol{\alpha}))+\sum_{\boldsymbol{\alpha}\in
E\setminus E^{\prime}}{\mathbf{P}}({\mathcal{E}}(\boldsymbol{\alpha}))$ (5.1)
where
$\displaystyle E$
$\displaystyle:=\big{\\{}\boldsymbol{\alpha}=(\alpha_{1},\dots,\alpha_{m})\in[N/2]^{m}:x_{\alpha_{1}},\dots,x_{\alpha_{m}}\notin{E_{\operatorname{{bad}}}}({\kappa}_{0})\big{\\}}\,,$
$\displaystyle E^{\prime}$ $\displaystyle:=\big{\\{}\boldsymbol{\alpha}\in
E:|x_{\alpha_{i}}-x_{\alpha_{j}}|>4\pi/n\;\forall 1\leq
i<j\leq{m}\big{\\}}\,.$
Note that if $x,x^{\prime}\in[0,\pi]$ are such that
$|\frac{x-x^{\prime}}{2\pi}|\geq\frac{\lambda}{n}$, then we also have
$\frac{\lambda}{n}\leq\frac{x+x^{\prime}}{2\pi}\leq 1-\frac{\lambda}{n}$.
Hence within $E^{\prime}$ the angles are $1$-spread, and by Proposition 2.7
$\bigg{|}\sum_{\boldsymbol{\alpha}\in
E^{\prime}}{\mathbf{P}}\big{(}{\mathcal{E}}(\boldsymbol{\alpha})\big{)}-\sum_{\boldsymbol{\alpha}\in
E^{\prime}}{\mathbf{P}}_{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}\big{(}{\mathcal{E}}(\boldsymbol{\alpha})\big{)}\bigg{|}\leq
N^{{m}}o(N^{-m})=o(1).$
It only remains to bound the sum over $\boldsymbol{\alpha}\in E\setminus
E^{\prime}$.
By Lemma 2.2, under ${\mathcal{G}}_{2}({K_{0}}/2)$, it suffices to consider
$m$-tuples of the form
$(\alpha_{1},\dots,\alpha_{{m}-k},\alpha_{1}+1,\alpha_{2}+1,\dots,\alpha_{k}+1)$
(5.2)
consisting of $k$ pairs of points $(\alpha_{l},\alpha_{l}+1)$ that are
immediate neighbors, for some $0\leq k\leq{m}/2$, while the ${m}-k$ points
$x_{\alpha_{1}},\dots,x_{\alpha_{{m}-k}}$ are separated by at least
$4\pi/(n\log^{3{K_{0}}}n)$ in $[0,\pi]$. Note that by the remark above we
also have $\frac{x_{\alpha_{i}}+x_{\alpha_{j}}}{2\pi}\geq
1/(n\log^{3{K_{0}}}n)$.
We divide this class of such $\boldsymbol{\alpha}$ into two sets
$E_{1},E_{2}$, where $E_{1}$ is the set of $\boldsymbol{\alpha}\in E\setminus
E^{\prime}$ of the form (5.2) (possibly with $k=0$) such that
$|x_{\alpha_{i}}-x_{\alpha_{j}}|\leq 4\pi/n$ for some $1\leq i<j\leq{m}-k$,
and $E_{2}$ is the set of $\boldsymbol{\alpha}\in E\setminus E^{\prime}$ of
the form (5.2) with $k\geq 1$ and $|x_{\alpha_{i}}-x_{\alpha_{j}}|>4\pi/n$ for
all $1\leq i<j\leq{m}-k$.
For the sum over $E_{1}$, we have $|E_{1}|=O(N^{{m}-k}/n)$ since there are
$O(N/n)$ options for the close point with all others fixed. As the points
$x_{\alpha_{1}},\dots,x_{\alpha_{{m}-k}}$ are separated by at least
$4\pi/(n\log^{3{K_{0}}}n)$, from the upper bound (4.3) in Proposition 4.1 with
$J_{i}\equiv[-\tau,\tau]$ and $J_{i}^{\prime}\equiv[-\pi,\pi]$, we have
$\displaystyle\sum_{\boldsymbol{\alpha}\in
E_{1}}{\mathbf{P}}\big{(}{\mathcal{E}}(\boldsymbol{\alpha})\big{)}\ll(N^{{m}-k}/n)\times
N^{-{m}}\tau^{m}\log^{O(K_{0}m)}n\ll\frac{1}{n}\log^{O(m)}n=o(1).$
For the sum over $E_{2}$, by Lemma 2.2, under ${\mathcal{G}}_{2}({K_{0}}/2)$
we have the containment of events
$\big{\\{}|X_{\alpha_{i}}|\leq\tau\;,\;|X_{\alpha_{i}+1}|\leq\tau\big{\\}}\subset\bigg{\\{}|X_{\alpha_{i}}|\leq\tau\;,\;Y_{\alpha_{i}}\in\Big{[}\frac{\pi}{N}-\frac{\pi}{N\log^{{K_{0}}/4}n},\frac{\pi}{N}\Big{]}\bigg{\\}}\,,$
so for each such $\boldsymbol{\alpha}$ we can bound
${\mathbf{P}}\big{(}{\mathcal{E}}(\boldsymbol{\alpha})\big{)}\leq{\mathbf{P}}\bigg{(}\,Y_{\alpha_{1}}\in\Big{[}\frac{\pi}{N}-\frac{\pi}{N\log^{{K_{0}}/4}n},\frac{\pi}{N}\Big{]}\,,\,\bigwedge_{i\in[{m}-k]}|X_{\alpha_{i}}|\leq\tau\,\bigg{)}.$
Applying (4.2) with ${m}-k$ in place of ${m}$, ${\lambda}=1/2$ (say),
$J_{i}\equiv[-\tau,\tau]$, $J_{1}^{\prime}=[\pi(1-\log^{-{K_{0}}/4}n),\pi]$, and
$J_{i}^{\prime}=[-\pi,\pi]$ for $2\leq i\leq{m}-k$, the right hand side above
is bounded by
${\mathbf{P}}_{{\mathcal{N}}_{\mathbb{R}}(0,1)}\bigg{(}\,Y_{\alpha_{1}}\in\Big{[}\frac{\pi}{N}-\frac{\pi}{N\log^{{K_{0}}/4}n},\frac{\pi}{N}\Big{]}\,,\,\bigwedge_{i\in[{m}-k]}|X_{\alpha_{i}}|\leq\tau\,\bigg{)}+o(N^{-(m-k)}).$
Finally, we apply (4.4) to bound the first term above by $o(N^{-(m-k)})$.
Combining the preceding displays and summing over $\boldsymbol{\alpha}\in
E_{2}$ gives
$\displaystyle\sum_{\boldsymbol{\alpha}\in
E_{2}}{\mathbf{P}}\big{(}{\mathcal{E}}(\boldsymbol{\alpha})\big{)}=o(1).$
We have thus shown that the sum over $\boldsymbol{\alpha}\in E\setminus
E^{\prime}$ in (5.1) is $o(1)$, which completes the proof of Proposition 2.5.∎
## 6 Proof of Theorem 1.2 (main result)
We fix $\kappa=\kappa_{0}$ as in Lemma 2.4, and let $\tau>0$ be arbitrary. As
in the previous section we allow implied constants to depend on ${m}$ and
$\tau$. It follows from Proposition 2.5 that
$\lim_{n\to\infty}\Big{|}{\mathbf{P}}\big{(}{{{\mathcal{M}}}^{\sharp}_{{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}}}([-\tau,\tau])=0\big{)}-{\mathbf{P}}\big{(}{{\mathcal{M}}}^{\sharp}([-\tau,\tau])=0\big{)}\Big{|}=0.$
On the other hand, by Theorem 1.1 and Lemma 2.4,
$\lim_{n\to\infty}\bigg{|}{\mathbf{P}}_{{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}}\Big{(}m_{n}>\frac{\tau}{n}\Big{)}-{\mathbf{P}}_{{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}}\Big{(}{{\mathcal{M}}}^{\sharp}([-\tau,\tau])=0\Big{)}\bigg{|}=0$
and hence it suffices to show
$\lim_{n\to\infty}\bigg{|}{\mathbf{P}}\Big{(}m_{n}>\frac{\tau}{n}\Big{)}-{\mathbf{P}}\Big{(}{{\mathcal{M}}}^{\sharp}([-\tau,\tau])=0\Big{)}\bigg{|}=0.$
To this end, recall that on the event ${\mathcal{G}}_{2}({K_{0}})$,
$|P(x)-F_{\alpha}(x)|\leq
N^{-2}\sup_{x\in[-\pi,\pi]}|P^{\prime\prime}(x)|\leq\frac{\log^{3{K_{0}}}n}{n^{2}}$
(6.1)
for all $x\in I_{\alpha}$. By Lemma 2.4 we have
$\displaystyle\bigg{|}{\mathbf{P}}\Big{(}m_{n}>\frac{\tau}{n}\Big{)}-{\mathbf{P}}\Big{(}{{\mathcal{M}}}^{\sharp}([-\tau,\tau])=0\Big{)}\bigg{|}$
$\displaystyle\qquad\qquad\leq{\mathbf{P}}\Big{(}m_{n}>\frac{\tau}{n}\;,\;{{\mathcal{M}}}^{\sharp}([-\tau,\tau])\geq
1\Big{)}+{\mathbf{P}}\Big{(}m_{n}\leq\frac{\tau}{n}\;,\;{{\mathcal{M}}}^{\sharp}([-\tau,\tau])=0\Big{)}$
$\displaystyle\qquad\qquad\leq\sum_{\alpha\in[N]:x_{\alpha}\notin{E_{\operatorname{{bad}}}}({\kappa})}{\mathbf{P}}\Big{(}{\mathcal{G}}_{2}({K_{0}})\,\wedge\,|X_{\alpha}|<\tau\,\wedge\,\min_{x\in
I_{\alpha}}|P(x)|\geq\tau/n\Big{)}$
$\displaystyle\qquad\qquad\qquad\qquad+\sum_{\alpha\in[N]:x_{\alpha}\notin{E_{\operatorname{{bad}}}}({\kappa})}{\mathbf{P}}\Big{(}{\mathcal{G}}_{2}({K_{0}})\,\wedge\,|X_{\alpha}|\geq
\tau\,\wedge\,\min_{x\in I_{\alpha}}|P(x)|<\tau/n\Big{)}+o(1)$
$\displaystyle\qquad\qquad\leq\sum_{\alpha\in[N]:x_{\alpha}\notin{E_{\operatorname{{bad}}}}({\kappa})}{\mathbf{P}}\bigg{(}|X_{\alpha}|\in\bigg{[}\tau-\frac{\log^{3{K_{0}}}n}{n},\tau+\frac{\log^{3{K_{0}}}n}{n}\bigg{]}\bigg{)}+o(1),$
where we used the definition of $X_{\alpha}$ and (6.1) in the last estimate.
Applying the bound (4.3) of Proposition 4.1 with ${m}=1$,
$J_{1}=[\tau-n^{-1}\log^{3{K_{0}}}n,\tau+n^{-1}\log^{3{K_{0}}}n]$,
$J_{1}^{\prime}=[-\pi,\pi]$, and ${\lambda}=1$, say (with a single point
$x_{\alpha}$ being trivially ${\lambda}$-spread), we have
${\mathbf{P}}\bigg{(}X_{\alpha}\in\bigg{[}\tau-\frac{\log^{3{K_{0}}}n}{n},\tau+\frac{\log^{3{K_{0}}}n}{n}\bigg{]}\bigg{)}\ll\frac{\log^{3{K_{0}}}n}{nN}$
for each $\alpha$ with $x_{\alpha}\notin{E_{\operatorname{{bad}}}}({\kappa})$,
as well as the same bound for the event with $X_{\alpha}$ replaced by
$-X_{\alpha}$. From the union bound and summing over $\alpha$ we conclude
$\bigg{|}{\mathbf{P}}\Big{(}m_{n}>\frac{\tau}{n}\Big{)}-{\mathbf{P}}\Big{(}{{\mathcal{M}}}^{\sharp}([-\tau,\tau])=0\Big{)}\bigg{|}\ll\frac{\log^{3{K_{0}}}n}{n}+o(1)=o(1)$
as desired.
## 7 Proof of Theorem 3.4
Fix $\boldsymbol{t}$ as in the theorem statement. Recall the notation
$S_{n}=S_{n}(\boldsymbol{t})$ (we henceforth suppress $\boldsymbol{t}$) and
$\boldsymbol{w}_{j}$ from (3.2) and (3.1). Let $t_{0}=\delta^{-1}$ and let
$\phi_{j}$ denote the characteristic function of $\xi_{j}\boldsymbol{w}_{j}$.
By a standard Gaussian smoothing argument we can bound the small ball probability by
${\mathbf{P}}(\frac{1}{\sqrt{2n+1}}S_{n}\in B(w,\delta))\leq
C_{m}(\frac{n}{t_{0}^{2}})^{4{m}/2}\int_{{\mathbb{R}}^{4{m}}}\prod_{j=-n}^{n}\phi_{j}(u)e^{-\frac{n\|u\|_{2}^{2}}{2t_{0}^{2}}}du=:J_{1}+J_{2}+J_{3},$
where in $J_{1},J_{2},J_{3}$ the integral is restricted to the ranges
$\|u\|_{2}\leq r_{0}=O(1)$, $r_{0}\leq\|u\|_{2}\leq R=n^{K_{*}}$, and
$\|u\|_{2}>R$, respectively for $K_{*}>0$ to be chosen sufficiently large.
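The smoothing step behind this bound can be illustrated in one dimension. The following sketch is our own toy example (a sum $S$ of $n$ Rademacher signs, not the setting of the theorem): it bounds the small ball probability of $S$ by a Gaussian-damped integral of its characteristic function $\phi(u)=\cos^{n}u$, using the pointwise bound $1_{|x|\leq\delta}\leq e^{1/2}e^{-x^{2}/(2\delta^{2})}$ and the Fourier identity $e^{-x^{2}/(2t^{2})}=\frac{t}{\sqrt{2\pi}}\int e^{iux-t^{2}u^{2}/2}\,du$.

```python
import math

n = 9
delta = 1.5
# S = sum of n Rademacher signs; exact P(|S| <= delta) from the binomial law:
# S = 2k - n where k = number of +1's.
p_exact = sum(math.comb(n, k) for k in range(n + 1)
              if abs(2 * k - n) <= delta) / 2 ** n

# Characteristic function of S is phi(u) = cos(u)^n.  The pointwise bound
# 1_{|x| <= delta} <= e^{1/2} exp(-x^2/(2 delta^2)) and the Fourier identity
# E exp(-S^2/(2 t^2)) = (t/sqrt(2 pi)) * integral of phi(u) e^{-t^2 u^2/2} du
# give the small-ball bound below (with t = delta).
def bound(t, umax=10.0, steps=100000):
    du = 2 * umax / steps
    total = 0.0
    for i in range(steps):   # midpoint rule on [-umax, umax]
        u = -umax + (i + 0.5) * du
        total += abs(math.cos(u)) ** n * math.exp(-t * t * u * u / 2) * du
    return math.e ** 0.5 * (t / math.sqrt(2 * math.pi)) * total

b = bound(delta)
print(p_exact, b)   # the exact probability is dominated by the bound
```

In the proof proper the same integral is split into the ranges $J_{1},J_{2},J_{3}$ according to $\|u\|_{2}$, with the characteristic function controlled differently on each range.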
For $J_{1}$, from (9.1) and (9.2) below we can bound
$\bigg{|}\prod_{j=-n}^{n}\phi_{j}(u)\bigg{|}\leq\exp\bigg{(}-c\inf_{a_{1}\leq|a|\leq
a_{2}}\sum_{j}\|a\langle{\boldsymbol{w}}_{j},u/2\pi\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}\bigg{)}.$
Thus, if $r_{0}$ is sufficiently small, then we have
$\|a\langle{\boldsymbol{w}}_{j},u/2\pi\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}=|a|\|\langle{\boldsymbol{w}}_{j},u/2\pi\rangle\|_{2}$,
and so from Lemma 3.6 we have
$\sum_{j}\|a\langle{\boldsymbol{w}}_{j},u/2\pi\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}\geq
c^{\prime}n\|u\|_{2}^{2}\min({\lambda},1)^{6{m}-3}.$
Hence
$\displaystyle J_{1}$
$\displaystyle=C_{m}(\frac{n}{t_{0}^{2}})^{2{m}}\int_{\|u\|_{2}\leq
r_{0}}\prod_{j}\phi_{j}(u)e^{-\frac{n\|u\|_{2}^{2}}{2t_{0}^{2}}}du$
$\displaystyle\leq C_{m}(\frac{n}{t_{0}^{2}})^{2{m}}\int_{\|u\|_{2}\leq
r_{0}}e^{-\frac{n\|u\|_{2}^{2}}{2t_{0}^{2}}-c^{\prime}n\|u\|_{2}^{2}{\lambda}^{6{m}-3}}du$
$\displaystyle=O_{m}(\frac{1}{{\lambda}^{3{m}}(t_{0}^{2}+1)^{2{m}}})=O_{m}({\lambda}^{-3{m}}\delta^{4{m}}).$
For $J_{2}$, recall by Theorem 3.1 that for $r_{0}\leq\|u\|_{2}\leq
R=n^{K_{*}}$ we have
$|\prod_{j=-n}^{n}\phi_{j}(u)|=O(e^{-\log^{2}n}).$
Thus
$\displaystyle J_{2}$
$\displaystyle=C_{m}(\frac{n}{t_{0}^{2}})^{2{m}}\int_{r_{0}\leq\|u\|_{2}\leq
R}\prod_{j=-n}^{n}\phi_{j}(u)e^{-\frac{n\|u\|_{2}^{2}}{2t_{0}^{2}}}du$
$\displaystyle\leq
C_{m}(\frac{n}{t_{0}^{2}})^{2{m}}\int_{r_{0}\leq\|u\|_{2}\leq
R}e^{-\log^{2}n}e^{-\frac{n\|u\|_{2}^{2}}{2t_{0}^{2}}}du$
$\displaystyle\ll_{m}n^{O_{{m},K_{*}}(1)}e^{-\log^{2}n}\ll_{{m},K_{*}}e^{-\frac{1}{2}\log^{2}n}.$
For $J_{3}$, we have
$\displaystyle J_{3}$
$\displaystyle=C_{m}(\frac{n}{t_{0}^{2}})^{4{m}/2}\int_{\|u\|_{2}\geq
n^{K_{*}}}\prod_{j=-n}^{n}\phi_{j}(u)e^{-\frac{n\|u\|_{2}^{2}}{2t_{0}^{2}}}du=O_{m}(e^{-n})$
for $K_{*}$ sufficiently large.
## 8 Proof of Theorem 3.2
For the proof we make use of a quantitative Edgeworth expansion for the
distribution of $S_{n}=S_{n}(\boldsymbol{t})$ (we will suppress the dependence
of $S_{n}$ on $\boldsymbol{t}$ in much of what follows). Our treatment is
similar to [DNN]. Let
$V_{n}:=\frac{1}{2n+1}\sum_{j=-n}^{n}\boldsymbol{w}_{j}\boldsymbol{w}_{j}^{\mathsf{T}}$
(8.1)
be the covariance matrix of $S_{n}/\sqrt{2n+1}$. Let $\widetilde{Q}_{n}$
denote the distribution of $S_{n}/\sqrt{2n+1}$, and let $\widetilde{Q}_{n}(x)$
denote the cumulative distribution function for this distribution. The theorem
below shows that $\widetilde{Q}_{n}$ is asymptotically
$\widetilde{Q}_{n,\infty}$, where
$\widetilde{Q}_{n,\ell}:=\sum_{r=0}^{\ell-2}n^{-r/2}T_{r}(-\Phi_{0,V_{n}},\\{\overline{\chi}_{\nu}\\}),\qquad\ell\geq
2,$ (8.2)
for (signed) measures $T_{r}(-\Phi_{0,V_{n}},\\{\overline{\chi}_{\nu}\\})$ to
be defined below. For convenience, the density of $\widetilde{Q}_{n,\ell}$ is
denoted by $Q_{n,\ell}$ while the density of $\widetilde{Q}_{n}$ is denoted by
$Q_{n}$.
Let $W$ be the standard Gaussian vector in ${\mathbb{R}}^{4{m}}$. For any
covariance matrix $V$, $V^{1/2}W$ is the Gaussian random vector in
${\mathbb{R}}^{4{m}}$ with mean zero and covariance $V$. Let $\phi_{0,V}$
denote the density of its distribution and let $\Phi_{0,V}$ denote the
cumulative distribution function. If $V$ is the identity matrix then we simply
write $\phi$ and $\Phi$, respectively. Recall that the cumulants of a random
vector $X$ in ${\mathbb{R}}^{4{m}}$ are the coefficients in the following
formal power series expansion
$\log{\mathbf{E}}[e^{z\cdot
X}]=\sum_{\nu\in{\mathbf{N}}^{d}}\frac{\chi_{\nu}z^{\nu}}{|\nu|!}\,,\qquad
z\in{\mathbb{C}}^{4{m}}.$ (8.3)
From the independence of the random coefficients $\xi_{j}$, it follows that
the cumulants of $S_{n}$ are the sum of the corresponding cumulants of
$\xi_{j}\boldsymbol{w}_{j}$, which in turn are polynomials in the moments of
$\xi$ and the entries of $\boldsymbol{w}_{j}$. Let
$\overline{\chi}_{\nu}:=\chi_{\nu}(S_{n})/(2n+1)$, which is the average of
cumulants of $\xi_{j}\boldsymbol{w}_{j},-n\leq j\leq n$.
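The additivity of cumulants over independent summands used here is easy to check exactly. A sketch with the third cumulant $\kappa_{3}=m_{3}-3m_{2}m_{1}+2m_{1}^{3}$ (raw moments $m_{k}$) and two hypothetical discrete distributions, in exact rational arithmetic:

```python
from fractions import Fraction as F
from itertools import product

def raw_moments(dist, kmax=3):
    # dist: list of (value, probability) pairs
    return [sum(p * F(v) ** k for v, p in dist) for k in range(kmax + 1)]

def third_cumulant(dist):
    m = raw_moments(dist)
    # kappa_3 = m3 - 3 m2 m1 + 2 m1^3 (equals the third central moment)
    return m[3] - 3 * m[2] * m[1] + 2 * m[1] ** 3

X = [(0, F(1, 2)), (1, F(1, 4)), (3, F(1, 4))]
Y = [(-1, F(1, 3)), (2, F(2, 3))]
# Distribution of X + Y for independent X, Y, by direct enumeration.
XplusY = [(x + y, px * py) for (x, px), (y, py) in product(X, Y)]

assert third_cumulant(XplusY) == third_cumulant(X) + third_cumulant(Y)
```

The same additivity holds for every order, which is exactly why the cumulants of $S_{n}$ decompose as sums over the independent terms $\xi_{j}\boldsymbol{w}_{j}$.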
Note that cumulants of $V_{n}^{1/2}W$ match the cumulants of
$S_{n}/\sqrt{2n+1}$ for any $|\nu|\leq 2$, while the higher order cumulants of
the Gaussian vector $V_{n}^{1/2}W$ vanish. Therefore,
$\displaystyle\log{\mathbf{E}}[e^{z\cdot(S_{n}/\sqrt{2n+1})}]$
$\displaystyle=$
$\displaystyle\log{\mathbf{E}}[e^{z\cdot(V_{n}^{1/2}W)}]+\sum_{\nu\in{\mathbf{N}}^{d}:|\nu|\geq
3}(n\overline{\chi}_{\nu})\frac{z^{\nu}}{|\nu|!}n^{-|\nu|/2}$ $\displaystyle=$
$\displaystyle\log{\mathbf{E}}[e^{z\cdot V_{n}^{1/2}W}]+\sum_{\ell\geq
1}(\sum_{\nu\in{\mathbf{N}}^{d}:|\nu|=\ell+2}\overline{\chi}_{\nu}\frac{z^{\nu}}{|\nu|!})n^{-\ell/2}.$
Letting
$\overline{\chi}_{\ell}(z)=\ell!\sum_{\nu\in{\mathbf{N}}^{4{m}}:|\nu|=\ell}\overline{\chi}_{\nu}z^{\nu}/|\nu|!$
for all $z\in{\mathbb{C}}^{4{m}}$, we obtain
$\displaystyle{\mathbf{E}}[e^{z\cdot(S_{n}/\sqrt{2n+1})}]/{\mathbf{E}}[e^{z\cdot
V_{n}^{1/2}W}]$ $\displaystyle=$ $\displaystyle\exp[\sum_{\ell\geq
1}\frac{\overline{\chi}_{\ell+2}(z)}{(\ell+2)!}n^{-\ell/2}]$ $\displaystyle=$
$\displaystyle\sum_{m\geq 0}\frac{1}{m!}\Big{(}\sum_{\ell\geq
1}\frac{\overline{\chi}_{\ell+2}(z)}{(\ell+2)!}n^{-\ell/2}\Big{)}^{m}$
$\displaystyle=$ $\displaystyle\sum_{\ell\geq
0}\widetilde{T}_{\ell}n^{-\ell/2},$
where $\widetilde{T}_{\ell}$ is obtained by grouping terms of the same order
$n^{-\ell/2}$. It is clear that $\widetilde{T}_{\ell}$ depends only on $z$ and
the average cumulants $\overline{\chi}_{\nu},|\nu|\leq\ell+2$. We will write
$\widetilde{T}_{\ell}(z,\\{\overline{\chi}_{\nu}\\})$ to stress this
dependence. Replacing $z$ by $iz$, we obtain the following expansion for the
characteristic function of $S_{n}/\sqrt{2n+1}$:
$\displaystyle{\mathbf{E}}[e^{iz\cdot(S_{n}/\sqrt{2n+1})}]$ $\displaystyle=$
$\displaystyle{\mathbf{E}}[e^{iz\cdot V_{n}^{1/2}W}]\sum_{\ell\geq
0}\widetilde{T}_{\ell}(iz,\\{\overline{\chi}_{\nu}\\})n^{-\ell/2}.$
Next, let $D=(D_{1},\dots,D_{4{m}})$ be the partial derivative operator and
let $\widetilde{T}_{\ell}(-D,\\{\overline{\chi}_{\nu}\\})$ be the differential
operator obtained by formally replacing all occurrences of $iz$ by $-D$ inside
$\widetilde{T}_{\ell}(iz,\\{\overline{\chi}_{\nu}\\})$. We define the signed
measures $T_{\ell}(-\Phi_{0,V_{n}},\\{\overline{\chi}_{\nu}\\})$ in (8.2) to
have the following density with respect to the Lebesgue measure:
$T_{\ell}(-\phi_{0,V_{n}},\\{\overline{\chi}_{\nu}\\})(x):=\Big{(}\widetilde{T}_{\ell}(-D,\\{\overline{\chi}_{\nu}\\})\phi_{0,V_{n}}\Big{)}(x).$
The following result gives a quantitative comparison between
$\widetilde{Q}_{n}$ and $\widetilde{Q}_{n,\ell}$; cf. also [DNN, Theorem 4.1].
For convenience of notation, for each $\ell>0$, let
$\rho_{\ell}:=\frac{1}{n}\sum_{-n\leq j\leq
n}\|\boldsymbol{w}_{j}\|_{2}^{\ell}\cdot{\mathbf{E}}|\xi|^{\ell}.$
Thus $\rho_{\ell}=O_{\ell,{m}}({\mathbf{E}}|\xi|^{\ell})=O_{\ell,{m}}(1)$ if
$\xi$ is sub-Gaussian. To stay slightly more general, here we only assume that
$\xi$ has bounded moments up to some sufficiently large order. For a given
measurable function $f:{\mathbb{R}}^{4{m}}\to{\mathbb{R}}$, define
$M_{\ell}(f):=\sup_{x\in{\mathbb{R}}^{4{m}}}\frac{|f(x)|}{1+\|x\|_{2}^{\ell}}.$
###### Theorem 8.1 (Edgeworth expansion).
Assume ${\mathbf{E}}|\xi|^{\ell+4{m}+1}<\infty$ for some $\ell\geq 4$. Let
$f:{\mathbb{R}}^{4{m}}\to{\mathbb{R}}$ be a measurable function such that
$M_{\ell}(f)<\infty$. Suppose that $\boldsymbol{t}=(t_{1},\dots,t_{m})$ is
$n^{\kappa}$-smooth and $1$-spread for some ${\kappa}>0$. Then for any fixed
$K_{*}>0$ and any $n^{-K_{*}}\leq\varepsilon\leq 1$,
$\displaystyle|\int f(x)d\widetilde{Q}_{n}(x)-\int
f(x)d\widetilde{Q}_{n,\ell}(x)|$ $\displaystyle\leq$ $\displaystyle
CM_{\ell}(f)(n^{-(\ell-1)/2}+e^{-\log^{2}n})+\overline{\omega}_{f}\Big{(}2\varepsilon:\sum_{r=0}^{\ell+4{m}-2}n^{-r/2}T_{r}(-\phi_{0,V_{n}}:\\{\overline{\chi}_{\nu}\\})\Big{)}$
where for a density $\phi$,
$\overline{\omega}_{f}(\varepsilon:\phi)=\int(\sup_{y\in
B(x,\varepsilon)}f(y)-\inf_{y\in B(x,\varepsilon)}f(y))d\phi(x),$
for some $C=C(\\{\rho_{k},k\leq\ell\\},{\kappa},K_{*})>0$.
###### Proof of Theorem 8.1.
This follows from [DNN, Section 4] (which in turn follows the approach of
[BR10] with some important modifications, see also [BCP19]). For completeness
we sketch the proof below. Let $d=4m$. For convenience, we assume that
$\varepsilon=n^{-K_{*}}$ and denote
$\widetilde{H}_{n}=\widetilde{Q}_{n}-\widetilde{Q}_{n,\ell},$
and let $H_{n}$ be its density. As usual the characteristic function of
$H_{n}$ is
$\widehat{H_{n}}(\eta)=\int_{{\mathbb{R}}^{d}}e^{it\cdot\eta}\widetilde{H}_{n}(dt)$.
Let $\widetilde{K}$ be a probability measure supported inside the unit ball
$B(0,1)=\\{x\in{\mathbb{R}}^{d}:\|x\|\leq 1\\}$ (whose density is denoted by
$K$) such that its characteristic function $\widehat{K}(\eta)$ satisfies
$|D^{\alpha}\widehat{K}(\eta)|=O(e^{-\|\eta\|_{2}^{1/2}}),\quad|\alpha|\leq\ell+d+1.$
(8.4)
Such a measure can be constructed using elementary arguments, see for
instance [BR10, Section 10]. We then let $\widetilde{K}_{\epsilon}$ be the
$\epsilon$-dilation of $\widetilde{K}$, namely
$\widetilde{K}_{\epsilon}(A)=\widetilde{K}(\epsilon^{-1}A)$, where
$\epsilon^{-1}A:=\\{x/\epsilon:x\in A\\}$ for all measurable $A$. A simple
computation yields
$\displaystyle|\int f(y)d\widetilde{H}_{n}(y)|$ $\displaystyle\leq$
$\displaystyle
C_{\ell}M_{\ell}(f)\int(1+\|t\|_{2})^{\ell}|H_{n}*K_{\epsilon}|(t)dt+\bar{\omega}_{f}(2\varepsilon:|\widetilde{Q}_{n,\ell}|),$
where the first term on the right is
$\displaystyle
O\Big{(}\max\\{\int|D^{\alpha}(\widehat{H_{n}})(\eta)D^{\beta}(\widehat{K_{\epsilon}})(\eta)|d\eta:\
\ |\alpha|+|\beta|\leq\ell+d+1\\}\Big{)}.$
Following [BR10] (see [DNN, Corollary 4.3] for a different proof) we can show
that for some $c_{1}>0$ sufficiently small we have
$\displaystyle\int_{\|\eta\|_{2}\leq
c_{1}\sqrt{n}}|D^{\alpha}\widehat{H_{n}}(\eta)D^{\beta}\widehat{K_{\epsilon}}(\eta)|d\eta$
$\displaystyle=$ $\displaystyle O\Big{(}\int_{\|\eta\|_{2}\leq
c_{1}\sqrt{n}}|D^{\alpha}\widehat{H_{n}}(\eta)|d\eta\Big{)}$ $\displaystyle=$
$\displaystyle O(n^{-(\ell+d-1)/2}).$
It thus remains to consider the range $\|\eta\|_{2}\geq c_{1}\sqrt{n}$. We use
the triangle inequality to estimate (where $Q_{n}$ is the density of
$\widetilde{Q}_{n}$)
$\displaystyle\int_{\|\eta\|_{2}\geq
c_{1}\sqrt{n}}|D^{\alpha}\widehat{H}_{n}(t)D^{\beta}\widehat{K}_{\varepsilon}|d\eta$
$\displaystyle\leq\int_{\|\eta\|_{2}\geq
c_{1}\sqrt{n}}|D^{\alpha}\widehat{Q}_{n}(t)D^{\beta}\widehat{K}_{\varepsilon}|d\eta$
$\displaystyle+\int_{\|\eta\|_{2}\geq
c_{1}\sqrt{n}}|D^{\alpha}(\sum_{r=0}^{\ell-2+d}n^{-r/2}P_{r}(i\eta:\\{\chi_{\nu,n}\\}))\exp(-1/2\langle\eta,B_{n}\eta\rangle)|d\eta,$
where $B_{n}^{2}=V_{n}^{-1}$ (defined in (8.1)).
The second term can be controlled by $O(e^{-cn})$ thanks to the Gaussian decay
of $\exp(-1/2\langle\eta,B_{n}\eta\rangle)$.
Let $\phi_{i}(\eta)={\mathbf{E}}e^{i\eta\cdot{\mathbf{w}}_{i}}$. Then for
$|\alpha|\leq\ell+d+1$ we have
$D^{\alpha}_{\eta}(\phi_{i}(\eta/\sqrt{n}))=n^{-|\alpha|/2}O({\mathbf{E}}\|X_{n,i}\|_{2}^{|\alpha|})=O(1)$.
Thus,
$|D^{\alpha}\widehat{Q}_{n}(\eta)|=|D^{\alpha}(\prod_{i=1}^{n}\phi_{i}(\frac{\eta}{\sqrt{n}}))|=O(\sum_{\gamma_{1}+\dots+\gamma_{n}=\alpha}|\prod_{i=1,\gamma_{i}=0}^{n}\phi_{i}(\frac{\eta}{\sqrt{n}})|),$
while we also have
$|D^{\beta}\widehat{K}_{\varepsilon}(\eta)|=O(\varepsilon^{|\beta|}e^{-(\varepsilon\|\eta\|_{2})^{1/2}})=O(e^{-(\varepsilon\|\eta\|_{2})^{1/2}})$.
Thus, it remains to control, for each $(\gamma_{1},\dots,\gamma_{n})$ with
$|\gamma_{1}|+\dots+|\gamma_{n}|\leq\ell+d+1$ and each $r>0$ independent of
$n$:
$\displaystyle J_{\gamma}(n,\varepsilon)$
$\displaystyle=\int_{\|\eta\|_{2}\geq
r\sqrt{n}}|\prod_{i=1,\gamma_{i}=0}^{n}\phi_{i}(\frac{\eta}{\sqrt{n}})|e^{-(\varepsilon\|\eta\|_{2})^{1/2}}d\eta$
$\displaystyle=n^{d/2}\int_{\|\eta\|_{2}\geq
r}|\prod_{i=1,\gamma_{i}=0}^{n}\phi_{i}(\eta)|e^{-(n^{-K_{\ast}+1/2}\|\eta\|_{2})^{1/2}}d\eta.$
Clearly it suffices to consider $r\leq\|\eta\|_{2}\leq n^{K_{\ast}-1/2+\tau}$
because the integral for $\|\eta\|_{2}\geq n^{K_{\ast}-1/2+\tau}$ is extremely
small. Again, because $\alpha$ is fixed, by throwing away from the set
$\\{{\mathbf{w}}_{i}\\}$ a fixed number of elements, we may assume that
$\alpha=0$ for simplicity (in the general case $\alpha\neq 0$ we apply
Theorem 10.2 instead of Theorem 3.1). Then, by Theorem 3.1, for
sufficiently large $n$ we have
$|\prod_{i}\phi_{i}(\eta)|\leq e^{-\log^{2}n}.$
Thus we have shown that, with $\varepsilon=n^{-K_{\ast}}$, we have
$J_{\gamma}(n,\varepsilon)=O(e^{-\log^{2}n})$, completing the proof. ∎
We turn now to the proof of Theorem 3.2. We follow [DNN, Section 5] with some
slight modifications. Let $\eta,\varepsilon>0$ be parameters to be chosen
later. Towards an application of Theorem 8.1, we let
$g:=\frac{1}{16\delta_{1}\delta_{2}\delta_{3}\delta_{4}}1_{w+B_{m}(\boldsymbol{\delta})}$
be the $L^{1}$-normalized indicator for the box
$w+B_{m}(\boldsymbol{\delta})\subset{\mathbb{R}}^{4{m}}$. For $1\leq i\leq 4$
let $\varphi_{i,\eta}:{\mathbb{R}}\to[0,1]$ be a $C^{\infty}({\mathbb{R}})$
function with support inside $[-\delta_{i},\delta_{i}]$ such that
(i) $\varphi_{i,\eta}(x)=\delta_{i}^{-1}$ for $|x|\leq\delta_{i}(1-\eta)$, and
(ii) $|\varphi_{i,\eta}^{(k)}(x)|=O_{k}(\delta_{i}^{-(k+1)}\eta^{-k})$ for any
$k\geq 0$,
and set
$f(\boldsymbol{x})=\prod_{r=1}^{m}\prod_{i=1}^{4}\varphi_{i,\eta}(w^{i}_{r}+x^{i}_{r})$
where we write
$\boldsymbol{w}=(w^{1},\dots,w^{4}),\boldsymbol{x}=(x^{1},\dots,x^{4})\in{\mathbb{R}}^{4{m}}$.
We have
$\|\nabla f(\boldsymbol{x})\|_{2}\ll_{m}\frac{2}{\delta^{4{m}+1}\eta}$
uniformly in $\boldsymbol{x}$. Recall that
$\bar{\omega}_{f}(\varepsilon:\phi)=\int(\sup_{y\in
B(x,\varepsilon)}f(y)-\inf_{y\in B(x,\varepsilon)}f(y))\phi(x)dx$, and $\phi$
is the density of a Gaussian vector. Consequently, for any polynomial $p(x)$
with bounded degree and bounded coefficients we have
$\bar{\omega}_{f}(\varepsilon:p(x)\phi_{0,V_{n}}(x))=O(\eta^{-1}\delta^{-4{m}-1}\varepsilon),$
where the implied constant depends on the eigenvalues of $V_{n}$, and on the
degree and coefficients of $p$. In particular, the final error term in Theorem
8.1 can be expressed as
$\sum_{r=0}^{\ell+4{m}-2}n^{-r/2}T_{r}(-\phi_{0,V_{n}}:\\{\overline{\chi}_{\nu}\\})=p(x)\phi_{0,V_{n}}(x)$
for some polynomial $p$ with degree at most $4{m}+\ell$ and coefficients
bounded by the first $4{m}+\ell$ moments of $\xi$. Therefore
$\bar{\omega}_{f}(2\varepsilon:\sum_{r=0}^{\ell+4{m}-2}n^{-r/2}T_{r}(-\phi_{0,V_{n}}:\\{\overline{\chi}_{\nu}\\}))=O(\eta^{-1}\delta^{-4{m}-1}\varepsilon),$
(8.5)
where the implied constant depends on the eigenvalues of $V_{n}$ and the
moments up to order $O({m})$ of $\xi$.
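The modulus bound used here comes from the gradient estimate: for $L$-Lipschitz $f$ one has $\sup_{B(x,\varepsilon)}f-\inf_{B(x,\varepsilon)}f\leq 2L\varepsilon$, hence $\bar{\omega}_{f}(\varepsilon:\phi)\leq 2L\varepsilon$ whenever $\phi$ integrates to $1$. A one-dimensional numerical sketch (our own toy, with $f=\sin$, so $L=1$, and the standard Gaussian density):

```python
import math

def phi(x):
    # standard Gaussian density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f(x):
    # smooth test function with Lipschitz constant L = 1
    return math.sin(x)

def omega_bar(eps, xmax=8.0, steps=4000, sub=20):
    """Grid approximation of \bar\omega_f(eps : phi) =
    \int (sup_{B(x,eps)} f - inf_{B(x,eps)} f) phi(x) dx."""
    dx = 2 * xmax / steps
    total = 0.0
    for i in range(steps):
        x = -xmax + (i + 0.5) * dx
        # sample f on a sub-grid of the ball B(x, eps)
        vals = [f(x + eps * (2 * k / sub - 1)) for k in range(sub + 1)]
        total += (max(vals) - min(vals)) * phi(x) * dx
    return total

eps = 0.01
w = omega_bar(eps)
# For 1-Lipschitz f, sup - inf over a radius-eps ball is at most 2*eps,
# so omega_bar is at most 2*eps (the grid approximation only underestimates).
print(w)
```

In the proof the role of $L$ is played by $\|\nabla f\|_{2}\ll\delta^{-4{m}-1}\eta^{-1}$, which is where the factor $\eta^{-1}\delta^{-4{m}-1}\varepsilon$ comes from.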
Recall the shorthand notation
$\widetilde{S}:=S_{n}(\boldsymbol{t})/\sqrt{2n+1}$ from (4.7), and that
$\Gamma$ has the distribution of $\widetilde{S}$ with standard real Gaussians
in place of the variables $\xi_{j}$. From Theorem 3.4 and Corollary 3.5,
$\displaystyle\big{|}{\mathbf{E}}f(\widetilde{S})-{\mathbf{E}}g(\widetilde{S})\big{|}$
$\displaystyle\leq$
$\displaystyle(\frac{C}{\delta})^{4{m}}\sum_{r=1}^{m}\Big{(}{\mathbf{P}}(||{\operatorname{Re}}P_{n}(s_{r})|-\delta_{1}|\leq\eta\delta_{1})+{\mathbf{P}}(||{\operatorname{Re}}P_{n}^{\prime}(s_{r})|-\delta_{2}|\leq\eta\delta_{2})$
$\displaystyle+$
$\displaystyle{\mathbf{P}}(||{\operatorname{Im}}P_{n}(s_{r})|-\delta_{3}|\leq\eta\delta_{3})+{\mathbf{P}}(||{\operatorname{Im}}P_{n}^{\prime}(s_{r})|-\delta_{4}|\leq\eta\delta_{4})\Big{)}$
$\displaystyle\ll_{m}$ $\displaystyle(\frac{C}{\delta})^{4{m}}\eta.$
By Theorem 8.1 and (8.5) (with $\ell=16{m}K+3$), after keeping the first term
of the expansion, and by the triangle inequality we have
$\displaystyle\Big{|}{\mathbf{E}}f(\widetilde{S})-{\mathbf{E}}f(\Gamma)\Big{|}$
$\displaystyle\leq\Big{|}\int
f(x)\sum_{r=1}^{\ell-2}n^{-r/2}T_{r}(-\phi_{0,V_{n}}(x),\\{\overline{\chi}_{\nu}\\})\Big{|}$
$\displaystyle+M_{\ell}(f)O\Big{(}n^{-8{m}K-1}+e^{-\log^{2}n}\Big{)}+\bar{\omega}_{f}(2\varepsilon:\sum_{r=0}^{4{m}+1}n^{-r/2}T_{r}(-\phi_{0,V_{n}}:\\{\overline{\chi}_{\nu}\\}))$
$\displaystyle=O(n^{-1/2})+(C/\delta)^{4{m}}O(n^{-8{m}K-1}+e^{-\log^{2}n})+O((C/\delta)^{4{m}+1}\eta^{-1}\varepsilon),$
where we used the fact that $|\int
f(x)T_{r}(-\phi_{0,V_{n}}(x),\\{\overline{\chi}_{\nu}\\})|=O(1)$ with the
implied constant depending on the moments of $\xi$ up to order $r$ and on the
implicit constant from property (ii) of $\varphi_{i,\eta}$. In particular, the above is also
true for the Gaussian case. Consequently, again by the triangle inequality
$\displaystyle|{\mathbf{E}}g(\widetilde{S})-{\mathbf{E}}g(\Gamma)|$
$\displaystyle\leq$
$\displaystyle|{\mathbf{E}}g(\widetilde{S})-{\mathbf{E}}f(\widetilde{S})|+|{\mathbf{E}}f(\Gamma)-{\mathbf{E}}g(\Gamma)|+|{\mathbf{E}}f(\widetilde{S})-{\mathbf{E}}f(\Gamma)|$
$\displaystyle\ll_{m}$ $\displaystyle
n^{-1/2}+(\frac{1}{\delta})^{4{m}}(n^{-8{m}K-1}+e^{-\log^{2}n}+\delta^{-1}\eta^{-1}\varepsilon+\eta)=O(n^{-1/2}),$
where we took $\eta=\varepsilon^{1/2}$ and $\varepsilon=n^{-K_{*}}$ with
$K_{*}$ sufficiently large compared to $K$.
## 9 Proof of Theorem 3.1
We assume throughout this section that $n$ is sufficiently large depending on
${m},{\kappa},K_{*}$ and the sub-Gaussian constant for $\xi$. We first recall
a definition and fact from [TV08]. For a real number $w$ and a random variable
$\xi$, define the $\xi$-norm of $w$ as
$\|w\|_{\xi}:=({\mathbf{E}}\|w(\xi-\xi^{\prime})\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2})^{1/2},$
where $\xi^{\prime}$ is an iid copy of $\xi$. For instance, if $\xi$ has the
Rademacher distribution ${\mathbf{P}}(\xi=\pm 1)=1/2$, then
$\|w\|_{\xi}^{2}=\|2w\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}/2$. For any real
number $w$ we have
$|{\mathbf{E}}e(w\xi)|\leq\exp(-c\|w/2\pi\|_{\xi}^{2})$
for an absolute constant $c>0$.
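A quick numerical sanity check of the Rademacher identity above (with the distance-to-nearest-integer function standing in for $\|\cdot\|_{{\mathbb{R}}/{\mathbb{Z}}}$): since $\xi-\xi^{\prime}$ takes the values $-2,0,2$ with probabilities $1/4,1/2,1/4$, the $\xi$-norm reduces to $\|2w\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}/2$.

```python
import math

def dist_to_Z(x):
    # ||x||_{R/Z}: distance from x to the nearest integer
    return abs(x - round(x))

def xi_norm_sq_rademacher(w):
    # E || w (xi - xi') ||_{R/Z}^2 for xi, xi' independent Rademacher signs:
    # xi - xi' equals -2, 0, 2 with probabilities 1/4, 1/2, 1/4.
    return (dist_to_Z(-2 * w) ** 2 + dist_to_Z(2 * w) ** 2) / 4

for w in [0.0, 0.1, 0.25, 0.3, 1.37, -0.8]:
    # by symmetry of dist_to_Z this is exactly ||2w||^2 / 2
    assert math.isclose(xi_norm_sq_rademacher(w), dist_to_Z(2 * w) ** 2 / 2)
```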
Now with ${\phi}_{j}:{\mathbb{R}}^{4{m}}\to{\mathbb{C}}$ the characteristic
function of $\xi_{j}\boldsymbol{w}_{j}$, we have
$\displaystyle\Big{|}{\mathbf{E}}e\big{(}\langle
S_{n}(\boldsymbol{t}),\boldsymbol{x}\rangle\big{)}\Big{|}$
$\displaystyle=|\prod_{j}{\phi}_{j}(\boldsymbol{x})|=\prod_{j}|{\mathbf{E}}e(\xi_{j}\langle\boldsymbol{w}_{j},\boldsymbol{x}\rangle)|\leq\exp(-c\sum_{j}\|\langle\boldsymbol{w}_{j},\boldsymbol{x}/2\pi\rangle\|_{\xi}^{2}).$
(9.1)
Furthermore, as $\xi$ is sub-Gaussian and of unit variance, there exist
positive constants $a_{1},a_{2},c>0$ depending only on the sub-Gaussian moment
of $\xi$ such that ${\mathbf{P}}(a_{1}<|\xi-\xi^{\prime}|<a_{2})\geq c$, and
so
$\displaystyle\sum_{j}\|\langle\boldsymbol{w}_{j},\boldsymbol{x}/2\pi\rangle\|_{\xi}^{2}={\mathbf{E}}\sum_{j}\|\langle\boldsymbol{w}_{j},\boldsymbol{x}/2\pi\rangle(\xi-\xi^{\prime})\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}\geq
c\inf_{a_{1}\leq|a|\leq
a_{2}}\sum_{j}\|a\langle\boldsymbol{w}_{j},\boldsymbol{x}/2\pi\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}.$
(9.2)
It hence suffices to show that
$\sum_{j}\|a\langle\boldsymbol{w}_{j},\boldsymbol{x}/2\pi\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}\gg\log^{3}n$
uniformly for $|a|\in[a_{1},a_{2}]$. Fixing an arbitrary such $a$, since
$a_{1},a_{2}\asymp 1$ we will abuse notation and absorb $a$ into the
definition of $\boldsymbol{x}$. Recalling (3.1), since
$\boldsymbol{w}_{j}+\boldsymbol{w}_{-j}=2(0,0,\boldsymbol{b}_{j},-(j/n)\boldsymbol{a}_{j})$
and
$\boldsymbol{w}_{j}-\boldsymbol{w}_{-j}=2(\boldsymbol{a}_{j},(j/n)\boldsymbol{b}_{j},0,0)$,
for
$\boldsymbol{x}=(\boldsymbol{x}^{1},\boldsymbol{x}^{2},\boldsymbol{x}^{3},\boldsymbol{x}^{4})\in{\mathbb{R}}^{4{m}}$
and each $0\leq j\leq n$, we have from the triangle inequality that
$\displaystyle\|\langle\boldsymbol{w}_{j},\boldsymbol{x}\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}+\|\langle\boldsymbol{w}_{-j},\boldsymbol{x}\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}$
$\displaystyle\geq\frac{1}{2}\max\big{\\{}\|\langle\boldsymbol{w}_{j}+\boldsymbol{w}_{-j},\boldsymbol{x}\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}\,,\,\|\langle\boldsymbol{w}_{j}-\boldsymbol{w}_{-j},\boldsymbol{x}\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}\big{\\}}$
$\displaystyle=2\max\Big{\\{}\|\langle\boldsymbol{b}_{j},\boldsymbol{x}^{3}\rangle-(j/n)\langle\boldsymbol{a}_{j},\boldsymbol{x}^{4}\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}\,,\,\|\langle\boldsymbol{a}_{j},\boldsymbol{x}^{1}\rangle+(j/n)\langle\boldsymbol{b}_{j},\boldsymbol{x}^{2}\rangle\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}\Big{\\}}.$
Recalling our assumption $\|\boldsymbol{x}\|_{2}\geq n^{-1/8}$, we will assume
$\|\boldsymbol{x}^{3}\|_{2}^{2}+\|\boldsymbol{x}^{4}\|_{2}^{2}\geq\frac{1}{2}n^{-1/4}$;
the complementary case that
$\|\boldsymbol{x}^{1}\|_{2}^{2}+\|\boldsymbol{x}^{2}\|_{2}^{2}\geq\frac{1}{2}n^{-1/4}$
can be handled by the same argument. Fix now a vector
$(\boldsymbol{y},\boldsymbol{y}^{\prime})\in{\mathbb{R}}^{2{m}}$ satisfying
$n^{-1/8}\leq\|(\boldsymbol{y},\boldsymbol{y}^{\prime})\|_{2}\leq n^{K_{*}}$
and denote
$\psi(j)=\psi(j;\boldsymbol{t}):=\langle\boldsymbol{b}_{j},\boldsymbol{y}\rangle-(j/n)\langle\boldsymbol{a}_{j},\boldsymbol{y}^{\prime}\rangle=\sum_{r=1}^{m}y_{r}\cos(jt_{r}/n)-y^{\prime}_{r}(j/n)\sin(jt_{r}/n)\,.$
(9.3)
With $(\boldsymbol{y},\boldsymbol{y}^{\prime})$ playing the role of
$(\boldsymbol{x}^{3},\boldsymbol{x}^{4})$, to establish Theorem 3.1 our task
thus reduces to establishing the following:
###### Proposition 9.1.
Let $\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ be
$n^{{\kappa}}$-smooth and ${\lambda}$-spread for some ${\kappa}\in(0,1)$ and
$\omega(n^{-1/8{m}})\leq{\lambda}<1$. Then
$\sum_{j=0}^{n}\|\psi(j)\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}>\log^{4}n.$
Turning to prove the proposition, we henceforth denote
$T:=\log^{4}n.$
In the remainder of this section we suppose towards a contradiction that
$\sum_{j=0}^{n}\|\psi(j)\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}\leq T\,.$ (9.4)
From (9.4) and Markov’s inequality we have
$|\\{j\in[0,n]\cap{\mathbb{Z}}:\|\psi(j)\|_{{\mathbb{R}}/{\mathbb{Z}}}>1/T\\}|\leq
2T^{3}$
and it follows that there is an interval $J\subset[n]$ of length at least
$n/T^{6}$ such that
$\|\psi(j)\|_{{\mathbb{R}}/{\mathbb{Z}}}\leq 1/T\qquad\forall\;j\in J.$ (9.5)
We henceforth fix such an interval $J=[n_{1},n_{2}]$.
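The counting behind this step is elementary pigeonholing: if at most $B$ of the $n+1$ indices are bad, the good indices split into at most $B+1$ runs, so some run has length at least $(n+1-B)/(B+1)$. A sketch with a hypothetical random bad set of size $T^{3}$:

```python
import random

def longest_good_run(bad, n):
    # length of the longest block of consecutive indices in {0,...,n}
    # avoiding the bad set
    endpoints = [-1] + sorted(set(bad)) + [n + 1]
    return max(b - a - 1 for a, b in zip(endpoints, endpoints[1:]))

n, T = 10 ** 5, 10
# any configuration of at most T^3 bad indices in [0, n] ...
bad = random.sample(range(n + 1), T ** 3)
run = longest_good_run(bad, n)
# ... leaves a bad-free interval of length at least (n + 1 - T^3)/(T^3 + 1),
# which is far larger than the n/T^6 required in the proof.
print(run)
```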
Next we claim we can find $q_{0}\in{\mathbb{Z}}\cap[1,n^{{\kappa}}]$ and
$s_{1},\dots,s_{m}\in{\mathbb{R}}$ such that
$q_{0}t_{r}/2\pi n-s_{r}\in{\mathbb{Z}}$ (9.6)
and
$\sum_{r=1}^{m}s_{r}^{2}\leq{m}n^{-2{\kappa}/{m}}.$ (9.7)
Indeed, considering the sequence of points $(\\{qt_{1}/2\pi
n\\},\dots,\\{qt_{m}/2\pi n\\})\in[0,1]^{m}$ for $1\leq q\leq n^{{\kappa}}$,
it follows from Dirichlet’s principle that
$\sum_{r=1}^{m}|\\{q_{1}(t_{r}/2\pi n)\\}-\\{q_{2}(t_{r}/2\pi
n)\\}|^{2}\leq{m}n^{-2{\kappa}/{m}}$
for some $1\leq q_{1},q_{2}\leq n^{\kappa}$. Then we have
$|(q_{1}-q_{2})t_{r}/2\pi n-p_{r}|^{2}\leq{m}n^{-2{\kappa}/{m}}$
for some $p_{1},\dots,p_{m}\in{\mathbb{Z}}$. Now (9.6) and (9.7) follow by
taking $q_{0}=q_{1}-q_{2}$ and $s_{r}=(q_{1}-q_{2})t_{r}/2\pi n-p_{r}$.
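The pigeonhole argument can be made concrete: splitting $[0,1]^{m}$ into $M^{m}$ boxes of side $1/M$ with $M=\lfloor Q^{1/m}\rfloor$ guarantees that among $q=0,\dots,Q$ two of the points of fractional parts share a box, so some $1\leq q_{0}\leq Q$ has $\|q_{0}\theta_{r}\|_{{\mathbb{R}}/{\mathbb{Z}}}\leq 1/M$ for all $r$. A brute-force sketch with hypothetical inputs:

```python
import math

def dist_to_Z(x):
    # ||x||_{R/Z}: distance from x to the nearest integer
    return abs(x - round(x))

def dirichlet(thetas, Q):
    """Brute-force witness for simultaneous Dirichlet approximation:
    some 1 <= q <= Q has max_r ||q * theta_r|| <= 1/M, M = floor(Q^{1/m})."""
    m = len(thetas)
    M = int(Q ** (1.0 / m))
    best_q = min(range(1, Q + 1),
                 key=lambda q: max(dist_to_Z(q * th) for th in thetas))
    err = max(dist_to_Z(best_q * th) for th in thetas)
    assert err <= 1.0 / M  # guaranteed by the pigeonhole argument
    return best_q, err

q, err = dirichlet([math.sqrt(2), math.pi / 7], 400)
print(q, err)
```

In the proof $Q=n^{\kappa}$ and $\theta_{r}=t_{r}/2\pi n$, giving the approximation quality $n^{-\kappa/m}$ per coordinate, i.e. (9.7).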
Fixing such $q_{0},s_{1},\dots,s_{m}$, we have
$|e_{n}(q_{0}t_{r})-1|=|e(2\pi s_{r})-1|\leq
2\pi{m}^{1/2}n^{-{\kappa}/{m}}\qquad\forall\;1\leq r\leq{m}.$ (9.8)
We next combine (9.5) and (9.8) to deduce some smoothness of the sequence
$\psi(j)$ over $j\in J$, via Lemma 9.2 below. For $g:[n]\to{\mathbb{C}}$ and
positive integers $k,q$ we define the discrete differential of order $k$ and
step $q$ as
$\Delta^{k}_{q}g:[n]\to{\mathbb{C}}\,,\qquad(\Delta^{k}_{q}g)(j):=\sum_{i=0}^{k}{k\choose
i}(-1)^{i}g(j+iq).$
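The property of $\Delta^{k}_{q}$ that drives the proof of Lemma 9.2 below is that it annihilates any sequence agreeing with a polynomial of degree at most $k-1$ along the progression $j,j+q,\dots,j+kq$, while on a degree-$k$ polynomial it returns a nonzero constant. A quick numerical sketch (values illustrative only):

```python
from math import comb

def delta(g, k, q, j):
    """(Delta^k_q g)(j) = sum_i C(k,i) (-1)^i g(j + i*q)."""
    return sum(comb(k, i) * (-1) ** i * g(j + i * q) for i in range(k + 1))

# Delta^k_q annihilates any polynomial of degree <= k - 1:
p = lambda j: 3 * j**2 - 7 * j + 2          # degree 2
assert delta(p, k=3, q=5, j=10) == 0

# while on j^k it returns the nonzero constant (-1)^k * k! * q^k:
cube = lambda j: j**3
assert delta(cube, k=3, q=5, j=10) == -6 * 5**3
```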
For any integer $q$ and $t\in{\mathbb{R}}$,
$\displaystyle\sum_{i=0}^{k}{k\choose
i}(-1)^{i}e_{n}((j+iq)t)=(1-e_{n}(qt))^{k}e_{n}(jt).$
Taking real parts on both sides, we obtain
$\sum_{i=0}^{k}{k\choose
i}(-1)^{i}\cos((j+iq)t/n)={\operatorname{Re}}\big{[}(1-e_{n}(qt))^{k}e_{n}(jt)\big{]},$
and differentiating in $t$ yields
$\sum_{i=0}^{k}{k\choose
i}(-1)^{i}\frac{j+iq}{n}\sin((j+iq)t/n)={\operatorname{Re}}\Big{[}\partial_{t}\big{[}(1-e_{n}(qt))^{k}e_{n}(jt)\big{]}\Big{]}\,.$
Combining the previous two identities over $t=t_{r},r\in[m]$ we obtain the
identity
$(\Delta_{q}^{k}\psi)(j)={\operatorname{Re}}\bigg{[}\sum_{r=1}^{m}y_{r}(1-e_{n}(qt_{r}))^{k}e_{n}(jt_{r})-y^{\prime}_{r}\partial_{t}\big{[}(1-e_{n}(qt_{r}))^{k}e_{n}(jt_{r})\big{]}\bigg{]}.$
(9.9)
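The exponential identity used above is just the binomial theorem applied to $w=e_{n}(qt)$, since $e_{n}((j+iq)t)=e_{n}(jt)\,e_{n}(qt)^{i}$. A numerical check, assuming the normalization $e_{n}(x)=e^{\sqrt{-1}\,x/n}$ (consistent with taking real parts to obtain $\cos(jt/n)$):

```python
import cmath
from math import comb

# Assumed normalization: e_n(x) = exp(i*x/n), so Re[e_n(jt)] = cos(jt/n).
n = 100
e_n = lambda x: cmath.exp(1j * x / n)

# Check: sum_i C(k,i)(-1)^i e_n((j+iq)t) = (1 - e_n(qt))^k e_n(jt)
k, q, j, t = 4, 7, 13, 2.345
lhs = sum(comb(k, i) * (-1) ** i * e_n((j + i * q) * t) for i in range(k + 1))
rhs = (1 - e_n(q * t)) ** k * e_n(j * t)
assert abs(lhs - rhs) < 1e-12
```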
Denoting henceforth
$f_{t,\ell}(j):=(1-e_{n}(\ell q_{0}t))^{k}e_{n}(jt),$ (9.10)
substituting $q=\ell q_{0}$ in the above identity yields
$(\Delta^{k}_{\ell
q_{0}}\psi)(j)={\operatorname{Re}}\bigg{[}\sum_{r=1}^{m}y_{r}f_{t_{r},\ell}(j)+y_{r}^{\prime}\partial_{t_{r}}f_{t_{r},\ell}(j)\bigg{]}\,.$
(9.11)
###### Lemma 9.2.
There exists $k=O_{K_{*},{\kappa},{m}}(1)$ such that for any $\ell\geq 1$ and
any $j\in J$ such that $[j,j+k\ell q_{0}]\subset J$,
$|(\Delta_{\ell
q_{0}}^{k}\psi)(j)|\ll_{K_{*},{\kappa},{m}}\sum_{i=0}^{k}\|\psi(j+i\ell
q_{0})\|_{{\mathbb{R}}/{\mathbb{Z}}}.$
###### Proof.
Fix $k\geq 1$ to be chosen sufficiently large depending on
$K_{*},{\kappa},{m}$. From (9.8), for $\ell=1$ we have
$|f_{t_{r},1}(j)|\leq(2\pi{m}^{1/2}n^{-{\kappa}/{m}})^{k}<n^{-k{\kappa}/2{m}}$
and
$|f_{t_{r},1}^{\prime}(j)|\leq
kq_{0}(2\pi{m}^{1/2}n^{-{\kappa}/{m}})^{k-1}+(2\pi{m}^{1/2}n^{-{\kappa}/{m}})^{k}<n^{-k{\kappa}/2{m}}$
and hence
$|(\Delta_{q_{0}}^{k}\psi)(j)|\leq
n^{-{\kappa}k/2{m}}\sum_{r=1}^{m}|y_{r}|+|y^{\prime}_{r}|<mn^{K_{*}-{\kappa}k/2{m}}.$
Let $p(j)$ denote the closest integer to $\psi(j)$. From the triangle
inequality and (9.5) it follows that
$|(\Delta^{k}_{q_{0}}p)(j)|<mn^{K_{*}-{\kappa}k/2{m}}+\frac{2^{k}}{T}$
as long as $\\{j,j+q_{0},\dots,j+kq_{0}\\}\subset J$. Taking $k=\lfloor
4{m}K_{*}/{\kappa}\rfloor+1$, the right hand side is smaller than 1 for all
sufficiently large $n$. Since the
numbers $(\Delta^{k}_{q_{0}}p)(j)$ are integers, it follows that
$(\Delta^{k}_{q_{0}}p)(j)=0$
for all $j$ such that $\\{j,j+q_{0},\dots,j+kq_{0}\\}\subset J$. By repeated
application of the above for $j$ running over progressions
$j_{0},j_{0}+q_{0},j_{0}+2q_{0},\dots$ with $j_{0}\in J$, we deduce that for
any $j$ such that $[j,j+kq_{0}]\subset J=[n_{1},n_{2}]$ there exists a
polynomial $Q_{j}$ of degree at most $k-1$ such that
$p(j+iq_{0})=Q_{j}(i)\qquad\forall\;0\leq i\leq(n_{2}-j)/q_{0}.$
Thus we have $(\Delta^{k}_{\ell q_{0}}p)(j)=0$ for all $\ell\geq 1$ and $j$
such that $[j,j+k\ell q_{0}]\subset J$. Hence, for such $j$ we conclude by the
triangle inequality that
$|(\Delta^{k}_{\ell q_{0}}\psi)(j)|=|(\Delta^{k}_{\ell
q_{0}}\psi)(j)-(\Delta^{k}_{\ell q_{0}}p)(j)|\leq
2^{k}\sum_{i=0}^{k}\|\psi(j+i\ell q_{0})\|_{{\mathbb{R}}/{\mathbb{Z}}}$
as desired. ∎
Note that $\|\boldsymbol{y}\|_{2}+\|\boldsymbol{y}^{\prime}\|_{2}\gg
n^{-1/8}$. Thus either (1) there exists $i$ such that $|y_{i}^{\prime}|\gg
n^{-1/16}$ (with room to spare) or (2) $|y_{i}^{\prime}|\leq n^{-1/16}$ for
all $i$ and there exists $i$ such that $|y_{i}|\gg_{m}n^{-1/8}$. In what
follows we will mainly be working with the first case (which is significantly
harder, as one needs to deal with differentials of order two). We will comment
in Remark 9.4 below on how to handle the second case. For the rest of the
section, without loss of generality we will assume
$|y_{1}^{\prime}|\gg_{m}n^{-1/16}$ (9.12)
On the other hand, by applying Lemma 9.2 to linear combinations of shifts of
$\Delta^{k}_{\ell q_{0}}\psi$ we can show the following:
###### Lemma 9.3.
For any positive integers $j,L,L^{\prime}$ and $\ell$ such that $[j,j+k\ell
q_{0}+4({m}-1)L+3L^{\prime}]\subset J$, we have
$\displaystyle\frac{L^{\prime}}{n}\left|y_{1}^{\prime}\big{(}1-e_{n}(2L^{\prime}t_{1})\big{)}^{2}\big{(}1-e_{n}(\ell
q_{0}t_{1})\big{)}^{k}\prod_{r=2}^{m}\big{(}1-e_{n}(L(t_{1}-t_{r}))\big{)}^{2}\big{(}1-e_{n}(L(t_{1}+t_{r}))\big{)}^{2}\right|$
$\displaystyle\qquad\qquad\qquad\qquad\ll_{K_{*},{\kappa},{m}}\sum_{i=1}^{k}\sum_{a=0}^{4({m}-1)}\sum_{b=0}^{3}\|\psi(j+i\ell
q_{0}+aL+bL^{\prime})\|_{{\mathbb{R}}/{\mathbb{Z}}}.$ (9.13)
We defer the proof of Lemma 9.3 for now and conclude the proof of Proposition 9.1.
Recall from (9.5) that $J=[n_{1},n_{2}]\subset[n]$ has length $|J|\geq
n/T^{6}$. Consider any $\ell\geq 1$ such that $k\ell q_{0}\leq|J|/2$. From
Lemma 2.8 we can choose ${L}\asymp n/T^{7}=o(|J|)$ such that
$\bigg{\|}\frac{{L}\cdot(t_{r}\pm t_{r^{\prime}})}{2\pi
n}\bigg{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\gg_{m}\frac{{\lambda}}{T^{7}}$
for all distinct $r,r^{\prime}\in[m]$ and all choices of the signs.
Furthermore, because $t_{1}$ is smooth, we can choose $L^{\prime}$ such that
$n/T^{8}\leq L^{\prime}=o(|J|)$ and
$|1-e_{n}(2L^{\prime}t_{1})|>{\lambda}^{2}=\omega(n^{-1/4{m}}).$
From these choices of $\ell,{L}$ and $L^{\prime}$, together with (9.12), we
have that the left hand side in (9.13) is
$\gg_{m}n^{-1/16}{\lambda}^{4}T^{-16}({\lambda}/T^{7})^{4({m}-1)}|1-e_{n}(\ell
q_{0}t_{1})|^{k}.$
On the other hand, from (9.4) and the Cauchy–Schwarz inequality we have
$\sum_{j=0}^{n}\|\psi(j)\|_{{\mathbb{R}}/{\mathbb{Z}}}\leq\sqrt{nT}\,,$ (9.14)
and it follows that we can choose $j$ so that the right hand side in
(9.13) is $O_{K_{*},{\kappa},{m}}(T^{1/2}n^{-1/2})$. Thus,
$|1-e_{n}(\ell q_{0}t_{1})|\leq n^{-1/3k}$ (9.15)
and this holds for any integer $\ell\geq 1$ such that $\ell kq_{0}\leq|J|/2$.
Applying Lemma 2.9, we conclude
$\|q_{0}t_{1}/2\pi n\|_{{\mathbb{R}}/{\mathbb{Z}}}=n^{-1}\log^{O(1)}n.$
But since we chose $q_{0}\leq n^{\kappa}$, this contradicts the assumption that
$t_{1}$ is $n^{{\kappa}}$-smooth. This concludes the proof of Proposition 9.1
and hence of Theorem 3.1.∎
###### Proof of Lemma 9.3.
We begin by recording some identities. Recall the definition of
$f_{t,\ell}(j)$ from (9.10). To lighten notation we will suppress the
subscript $\ell$ as it is fixed throughout the proof. First note that
$g_{t}(j):=\partial_{t}f_{t}(j)=\sqrt{-1}\bigg{[}\frac{j}{n}-\frac{k\ell
q_{0}}{n}\big{(}1-e_{n}(\ell q_{0}t)\big{)}^{-1}\bigg{]}f_{t}(j).$ (9.16)
In particular, we have
$\overline{f_{t}(j)}=f_{-t}(j)\,,\qquad\overline{g_{t}(j)}=-g_{-t}(j)$
and from (9.11) we can express
$\frac{1}{2}(\Delta_{\ell
q_{0}}^{k}\psi)(j)=\sum_{r=1}^{m}y_{r}f_{t_{r}}(j)+y_{r}f_{-t_{r}}(j)+y^{\prime}_{r}g_{t_{r}}(j)-y^{\prime}_{r}g_{-t_{r}}(j).$
(9.17)
As in the proof of Lemma 3.6 we will eliminate terms in the above sum by
repeated application of the twisted second-order differencing operators
defined in (3.6). For a positive integer $L$ and $t_{0}\in{\mathbb{R}}$ we
have
$\displaystyle D_{t_{0}}f_{t}(j)$ $\displaystyle=\sum_{a=0}^{2}{2\choose
a}(-1)^{a}e_{n}(-aLt_{0})f_{t}(j+aL)$
$\displaystyle=f_{t}(j)\sum_{a=0}^{2}{2\choose a}(-1)^{a}e_{n}(aL(t-t_{0}))$
$\displaystyle=\big{[}1-e_{n}(L(t-t_{0}))\big{]}^{2}f_{t}(j).$
Note that the sequences $f_{t}(j)$ from that proof differ from the present
definition by a factor $(1-e_{n}(\ell q_{0}t))^{k}$. This is a key point:
whereas there our aim was to lower bound $\sum_{j}|\psi(j)|^{2}$, here we have
the more difficult task of lower bounding
$\sum_{j}\|\psi(j)\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}$ (which we are doing by
contradiction, starting from the assumption (9.4)). We are now in a similar
position as in the proof of Lemma 3.6 thanks to Lemma 9.2 and the application
of the differencing operators $\Delta_{\ell q_{0}}^{k}$, which is responsible
for the extra factor $(1-e_{n}(\ell q_{0}t))^{k}$.
Differentiating the above expression for $D_{t_{0}}f_{t}(j)$ yields
$\displaystyle D_{t_{0}}g_{t}(j)$
$\displaystyle=\big{[}1-e_{n}(L(t-t_{0}))\big{]}^{2}\partial_{t}f_{t}(j)+{\sqrt{-1}}\frac{L}{n}f_{t}(j)\sum_{a=0}^{2}{2\choose
a}(-1)^{a}a\cdot e_{n}(aL(t-t_{0}))$
$\displaystyle=\big{[}1-e_{n}(L(t-t_{0}))\big{]}^{2}\partial_{t}f_{t}(j)-2{\sqrt{-1}}\frac{L}{n}\big{[}1-e_{n}(L(t-t_{0}))\big{]}e_{n}(L(t-t_{0}))f_{t}(j)$
$\displaystyle=\big{[}1-e_{n}(L(t-t_{0}))\big{]}^{2}\big{[}g_{t}(j)+\beta_{L}(t-t_{0})f_{t}(j)\big{]}$
(9.18)
with $\beta_{L}(s):=-2{\sqrt{-1}}\frac{L}{n}e_{n}(Ls)/[1-e_{n}(Ls)]$, as in
(3.8). In particular,
$D_{t_{0}}f_{t_{0}}(j)=D_{t_{0}}g_{t_{0}}(j)=0.$ (9.19)
Now for general $t\in{\mathbb{R}}$, two applications with $t_{0}$ and $-t_{0}$
yield
$D_{t_{0}}\circ
D_{-t_{0}}f_{t}(j)=\big{[}1-e_{n}(L(t-t_{0}))\big{]}^{2}\big{[}1-e_{n}(L(t+t_{0}))\big{]}^{2}f_{t}(j)$
(9.20)
and
$D_{t_{0}}\circ
D_{-t_{0}}g_{t}(j)=\partial_{t}\Big{[}\big{[}1-e_{n}(L(t-t_{0}))\big{]}^{2}\big{[}1-e_{n}(L(t+t_{0}))\big{]}^{2}f_{t}(j)\Big{]}.$
(9.21)
For compactness, we write
$\delta_{L}(s):=1-e_{n}(Ls)$
for the remainder of the proof. Applying the above identities with
$t_{0}=t_{m}$ and $t$ running over $t_{r}$, $r\in[{m}-1]$, we obtain
$\displaystyle\frac{1}{2}\Big{(}D_{t_{m}}\circ D_{-t_{m}}\circ\Delta_{\ell
q_{0}}^{k}\,\psi\Big{)}(j)$
$\displaystyle\qquad=\sum_{r=1}^{{m}-1}\left(y_{r}+y_{r}^{\prime}\partial_{t_{r}}\right)\bigg{[}\delta_{L}(t_{r}-t_{m})^{2}\delta_{L}(t_{r}+t_{m})^{2}f_{t_{r}}(j)+\delta_{L}(-t_{r}-t_{m})^{2}\delta_{L}(-t_{r}+t_{m})^{2}f_{-t_{r}}(j)\bigg{]}\,.$
Iteratively applying $D_{t_{r}}\circ D_{-t_{r}}$ for $r={m}-1,{m}-2,\dots,2$,
we get
$\displaystyle\frac{1}{2}\big{(}D_{t_{2}}\circ D_{-t_{2}}\circ\cdots\circ
D_{t_{m}}\circ D_{-t_{m}}\circ\Delta_{\ell q_{0}}^{k}\,\psi\Big{)}(j)$
$\displaystyle\qquad=y_{1}f_{t_{1}}(j)\prod_{r=2}^{m}\delta_{L}(t_{1}-t_{r})^{2}\delta_{L}(t_{1}+t_{r})^{2}+y_{1}f_{-t_{1}}(j)\prod_{r=2}^{m}\delta_{L}(-t_{1}-t_{r})^{2}\delta_{L}(-t_{1}+t_{r})^{2}$
$\displaystyle\qquad+y_{1}^{\prime}\partial_{t}\bigg{[}f_{t}(j)\prod_{r=2}^{m}\delta_{L}(t-t_{r})^{2}\delta_{L}(t+t_{r})^{2}\bigg{]}_{t=t_{1}}+y_{1}^{\prime}\partial_{t}\bigg{[}f_{-t}(j)\prod_{r=2}^{m}\delta_{L}(-t-t_{r})^{2}\delta_{L}(-t+t_{r})^{2}\bigg{]}_{t=t_{1}}\,,$
and we have passed from a sum of $4{m}$ terms (see (9.17)) to a sum of 4. Now
we will reduce from four terms to one. Let $L^{\prime}$ be a positive integer
and define $D^{\prime}_{t_{0}}$ as in (3.6) with $L^{\prime}$ in place of $L$.
For any univariate function $G$ we have
$\displaystyle D^{\prime}_{t_{0}}f_{t_{0}}(j)G(t_{0})$
$\displaystyle=G(t_{0})D^{\prime}_{t_{0}}f_{t_{0}}(j)=0\,,$ $\displaystyle
D^{\prime}_{t_{0}}\partial_{t}\big{[}f_{t}(j)G(t)\big{]}_{t=t_{0}}$
$\displaystyle=G(t_{0})D^{\prime}_{t_{0}}g_{t_{0}}(j)+G^{\prime}(t_{0})D^{\prime}_{t_{0}}f_{t_{0}}(j)=0$
(using (9.19)). Set
$G(t):=\prod_{r=2}^{m}\delta_{L}(t-t_{r})^{2}\delta_{L}(t+t_{r})^{2}$
for which we have $\overline{G(t)}=G(-t)$. Application of
$D^{\prime}_{-t_{1}}$ to the previous expression for
$\frac{1}{2}(D_{t_{2}}\circ D_{-t_{2}}\circ\cdots\circ D_{t_{m}}\circ
D_{-t_{m}}\circ\Delta_{\ell q_{0}}^{k}\,\psi)(j)$ eliminates the second and
fourth terms on the right hand side, leaving
$\displaystyle\frac{1}{2}\Big{(}D^{\prime}_{-t_{1}}\circ D_{t_{2}}\circ
D_{-t_{2}}\circ\cdots\circ D_{t_{m}}\circ D_{-t_{m}}\circ\Delta_{\ell
q_{0}}^{k}\,\psi\Big{)}(j)$
$\displaystyle\qquad=y_{1}f_{t_{1}}(j)\delta_{L^{\prime}}(2t_{1})^{2}G(t_{1})+y_{1}^{\prime}D^{\prime}_{-t_{1}}\partial_{t}\Big{[}f_{t}(j)G(t)\Big{]}_{t=t_{1}}$
$\displaystyle\qquad=y_{1}f_{t_{1}}(j)\delta_{L^{\prime}}(2t_{1})^{2}G(t_{1})+y_{1}^{\prime}g_{t_{1}}(j)\delta_{L^{\prime}}(2t_{1})^{2}G(t_{1})+y_{1}^{\prime}f_{t_{1}}(j)\delta_{L^{\prime}}(2t_{1})^{2}G^{\prime}(t_{1})$
$\displaystyle\qquad=f_{t_{1}}(j)\bigg{[}y_{1}\delta_{L^{\prime}}(2t_{1})^{2}G(t_{1})+y_{1}^{\prime}\sqrt{-1}\frac{j}{n}\delta_{L^{\prime}}(2t_{1})^{2}G(t_{1})$
$\displaystyle\qquad\qquad\qquad\qquad-y_{1}^{\prime}\sqrt{-1}\frac{k\ell
q_{0}}{n}\big{(}1-e_{n}(\ell
q_{0}t_{1})\big{)}^{-1}\delta_{L^{\prime}}(2t_{1})^{2}G(t_{1})+y_{1}^{\prime}\delta_{L^{\prime}}(2t_{1})^{2}G^{\prime}(t_{1})\bigg{]}\,,$
where in the final line we substituted (9.16). Now since
$f_{t_{1}}(j+L^{\prime})=e_{n}(L^{\prime}t_{1})f_{t_{1}}(j)$, we can eliminate
all but the second term inside the brackets by multiplying both sides by
$e_{n}(L^{\prime}t_{1})$ and subtracting the result from the equation with $j$
replaced with $j+L^{\prime}$. We thus obtain
$\displaystyle\frac{1}{2}\Big{(}D^{\prime}_{-t_{1}}\circ D_{t_{2}}\circ
D_{-t_{2}}\circ\cdots\circ D_{t_{m}}\circ D_{-t_{m}}\circ\Delta_{\ell
q_{0}}^{k}\,\psi\Big{)}(j+L^{\prime})$ $\displaystyle\qquad\qquad-
e_{n}(L^{\prime}t_{1})\times\frac{1}{2}\Big{(}D^{\prime}_{-t_{1}}\circ
D_{t_{2}}\circ D_{-t_{2}}\circ\cdots\circ D_{t_{m}}\circ
D_{-t_{m}}\circ\Delta_{\ell q_{0}}^{k}\,\psi\Big{)}(j)$
$\displaystyle\qquad=y_{1}^{\prime}\sqrt{-1}\frac{L^{\prime}}{n}\delta_{L^{\prime}}(2t_{1})^{2}G(t_{1})f_{t_{1}}(j).$
Recalling our definitions of $\delta_{L^{\prime}}(2t_{1}),G(t_{1}),$ and
$f_{t_{1}}(j)$, the claimed bound now follows by taking the modulus of both
sides, applying the triangle inequality to the left hand side, and applying
Lemma 9.2 at various shifts of $\psi$. ∎
###### Remark 9.4.
For the case that $|y_{i}^{\prime}|\leq n^{-1/16}$ and
$|y_{1}|\gg_{m}n^{-1/8}$ in place of (9.12), we can show the following simpler
analogue of Lemma 9.3 (see also [DNN, Lemma 10.5] for a bivariate variant).
###### Lemma 9.5.
For any positive integers $j,L,L^{\prime}$ and $\ell$ such that $[j,j+k\ell
q_{0}+4({m}-1)L+3L^{\prime}]\subset J$, we have
$\displaystyle\frac{L^{\prime}}{n}\left|y_{1}\big{(}1-e_{n}(\ell
q_{0}t_{1})\big{)}^{k}\prod_{r=2}^{m}\big{(}1-e_{n}(L(t_{1}-t_{r}))\big{)}^{2}\big{(}1-e_{n}(L(t_{1}+t_{r}))\big{)}^{2}\right|$
$\displaystyle\qquad\qquad\qquad\qquad\ll_{K_{*},{\kappa},{m}}\sum_{i=1}^{k}\sum_{a=0}^{4({m}-1)}\sum_{b=0}^{3}\|\psi(j+i\ell
q_{0}+aL+bL^{\prime})\|_{{\mathbb{R}}/{\mathbb{Z}}}+O(2^{k}n^{-1/16}).$ (9.22)
Here the additional term $2^{k}n^{-1/16}$ on the right hand side comes from
applying the triangle inequality based on (9.9), where we use
$|y_{i}^{\prime}|\ll n^{-1/16}$ for all $i$ to bound all the terms involving
$\partial_{t}$ by $O(n^{-1/16})$ and move them to the right hand side during
the differencing process. The proof of Lemma 9.5 can be carried out exactly
the same way we proved Lemma 9.3, and in fact it is simpler because we do not
have to take care of any of the terms involving $\partial_{t}$, as we start
with the variant of (9.9) without the $\partial_{t}$ term. From Lemma 9.5,
using the assumption that $|y_{1}|\gg_{m}n^{-1/8}$ we can deduce (9.15), and
hence conclude Proposition 9.1 in the same way.
Before concluding this section, we note that since our approach to proving
Proposition 9.1 starts with (9.5), by passing to subintervals of $J$ when
needed (noting that at least one such subinterval still has length
$\Omega(n/T^{6})$), we obtain the following analogue of Theorem 3.1 (where we
recall $\phi_{j}({\mathbf{x}})$ from (9.1)).
###### Theorem 9.6 (Decay of the truncated characteristic function).
Let $\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ be
$n^{\kappa}$-smooth and ${\lambda}$-spread for some ${\kappa}\in(0,1)$ and
$\omega(n^{-1/8{m}})\leq{\lambda}<1$. Then for any index set $I\subset[n]$
with $|I|=O(1)$, and for any fixed $K_{*}<\infty$ and any
${\mathbf{x}}\in{\mathbb{R}}^{4m}$ with
$n^{-1/8}\leq\|{\mathbf{x}}\|_{2}\leq n^{K_{*}}$, the following holds for
sufficiently large $n$:
$\prod_{j\notin I}|\phi_{j}({\mathbf{x}})|\leq\exp(-\log^{2}n).$
## 10 Complex coefficients and extensions
### 10.1 Theorem 1.2 when $\xi$ is complex-valued
In the case that the random coefficients are complex-valued, our polynomial
can be written as
$\displaystyle P_{n}(x)$
$\displaystyle=\sum_{k=-n}^{n}(\xi_{k}+\sqrt{-1}\xi_{k}^{\prime})(\cos(kx)+\sqrt{-1}\sin(kx))$
$\displaystyle=\xi_{0}+\sqrt{-1}\xi_{0}^{\prime}+\sum_{k=1}^{n}(\xi_{k}+\xi_{-k})\cos(kx)-(\xi_{k}^{\prime}-\xi_{-k}^{\prime})\sin(kx)$
$\displaystyle+\sqrt{-1}\sum_{k=1}^{n}(\xi_{k}^{\prime}+\xi_{-k}^{\prime})\cos(kx)+(\xi_{k}-\xi_{-k})\sin(kx)$
where $\xi_{k},\xi_{k}^{\prime}$ are iid copies of $\xi$. Restricting to the
imaginary part, the corresponding random walk of interest is
$T_{n}(\boldsymbol{t}):=\sum_{j=1}^{n}\xi_{j}^{(1)}\boldsymbol{u}_{j}+\xi_{j}^{(2)}\boldsymbol{v}_{j}$
where $\xi_{j}^{(1)},\xi_{j}^{(2)}$ are independent sub-Gaussian of mean zero
and variance one with the property that
$\xi_{j}^{(1)}-\xi_{j}^{{}^{\prime}(1)},\xi_{j}^{(2)}-\xi_{j}^{{}^{\prime}(2)}$
have the same distribution (here $\xi_{j}^{{}^{\prime}(1)}$ and
$\xi_{j}^{{}^{\prime}(2)}$ are independent copies of $\xi_{j}^{(1)}$ and
$\xi_{j}^{(2)}$ respectively), and where for a fixed tuple
$\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ and $j\in{\mathbb{Z}}$
we denote the vectors (see also (3.1))
$\qquad{\boldsymbol{u}}_{j}={\boldsymbol{u}}_{j}(\boldsymbol{t}):=\big{(}{\boldsymbol{a}}_{j}\,,\,(j/n){\boldsymbol{b}}_{j}\big{)},\
\qquad{\boldsymbol{v}}_{j}={\boldsymbol{v}}_{j}(\boldsymbol{t}):=\big{(}{\boldsymbol{b}}_{j}\,,\,-(j/n){\boldsymbol{a}}_{j}\big{)}.$
(10.1)
Because this random walk is only on ${\mathbb{R}}^{2m}$ with the steps
$\boldsymbol{u}_{j},\boldsymbol{v}_{j}$ compensating each other, we can
establish all of our previous results under the following weakly spreading
condition.
###### Definition 10.1.
For $m\geq 2$ and ${\lambda}>0$, we say
$\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ is weakly
${\lambda}$-spread if
$\Big{\|}\frac{t_{r}-t_{r^{\prime}}}{2\pi
n}\Big{\|}_{{\mathbb{R}}/{\mathbb{Z}}}\geq\frac{{\lambda}}{n}\qquad\forall\,1\leq
r<r^{\prime}\leq{m}.$
Under this condition we have the following analog of Theorem 3.1.
###### Theorem 10.2 (Decay of the characteristic function).
Let $\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ be
$n^{\kappa}$-smooth and weakly ${\lambda}$-spread for some ${\kappa}\in(0,1)$
and $\omega(n^{-1/8{m}})\leq{\lambda}<1$. Then for any fixed $K_{*}<\infty$
and any $\boldsymbol{x}\in{\mathbb{R}}^{2m}$ with
$n^{-1/8}\leq\|\boldsymbol{x}\|_{2}\leq n^{K_{*}}$,
$|{\mathbf{E}}e(\langle
T_{n}(\boldsymbol{t}),\boldsymbol{x}\rangle)|\leq\exp(-\log^{2}n)$
for all $n$ sufficiently large depending on $K_{*},m,\kappa,$ and the sub-
Gaussian constants.
We next sketch the main idea of the proof of this result. Fix a vector
$(\boldsymbol{y},\boldsymbol{y}^{\prime})$ with
$n^{-1/8}\leq\|(\boldsymbol{y},\boldsymbol{y}^{\prime})\|_{2}\leq n^{K_{*}}$;
recalling (9.3), we further denote
$\psi^{\prime}(j):=\psi^{\prime}(j;\boldsymbol{t})=\langle\boldsymbol{b}_{j},\boldsymbol{y}\rangle-(j/n)\langle\boldsymbol{a}_{j},\boldsymbol{y}^{\prime}\rangle=\sum_{r=1}^{m}y_{r}\sin(jt_{r}/n)+y^{\prime}_{r}(j/n)\cos(jt_{r}/n)\,.$
(10.2)
The main proposition is the following analog of Proposition 9.1.
###### Proposition 10.3.
Let $\boldsymbol{t}=(t_{1},\dots,t_{m})\in{\mathbb{R}}^{m}$ be
$n^{{\kappa}}$-smooth and assume that $\boldsymbol{t}$ is weakly
${\lambda}$-spread for some ${\kappa}\in(0,1)$ and
$\omega(n^{-1/8{m}})\leq{\lambda}<1$. Then
$\sum_{j=0}^{n}\|\psi(j)\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}+\sum_{j=0}^{n}\|\psi^{\prime}(j)\|_{{\mathbb{R}}/{\mathbb{Z}}}^{2}>\log^{4}n.$
We next sketch the proof, omitting most details. We follow the proof of
Proposition 9.1 with some simplifications: instead of focusing on
$(\Delta^{k}_{\ell q_{0}}\psi)(j)$ as the real part of
$\sum_{r=1}^{m}y_{r}f_{t_{r},\ell}(j)+y_{r}^{\prime}\partial_{t_{r}}f_{t_{r},\ell}(j)$
as in (9.11), we can study the sum directly. This allows us to shorten the
differencing process significantly; namely, in the proof of Lemma 9.3 we will
only need to consider $D_{t_{1}}^{\prime}\circ D_{t_{2}}\circ\cdots\circ
D_{t_{m}}$ (without negative perturbations), leading to the simpler
multiplicative factor
$\prod_{r=2}^{m}\big{(}1-e_{n}(L(t_{1}-t_{r}))\big{)}^{2}$ (without
$(1-e_{n}(L(t_{1}+t_{r})))^{2}$), which justifies the weakly spread
condition.
Finally, one can similarly prove Lemma 3.6, Theorem 3.2, and Theorem 3.4 for
the random walk $T_{n}(\boldsymbol{t})$ above under the weakly spread
condition on $\boldsymbol{t}$. Using these results, we can now conclude the
proof of Proposition 2.5 for the complex-valued case as in Section 5 where we
can now allow the $x_{\alpha_{i}}$ to vary entirely over $[-\pi,\pi]$.
### 10.2 Other extensions
As noted in Remark 1.3, with minor modifications our arguments extend Theorem
1.2 to $P_{n}$ of the general form $P_{n}(x)=|J_{n}|^{-1/2}\sum_{j\in
J_{n}}\xi_{j}e(jx)$ for any sequence of finite intervals
$J_{n}\subset{\mathbb{Z}}$ with $|J_{n}|\to\infty$. Writing
$J_{n}=[n_{0},n_{1}]$ and multiplying by the phase $e(-n_{0}x)$, which does
not change the minimum modulus, one sees it suffices to consider the form
$P_{n}(x)=\frac{1}{\sqrt{n+1}}\sum_{j=0}^{n}\xi_{j}e(jx).$ (10.3)
Our arguments also extend to another well-studied class of trigonometric
polynomials, of the form
$P_{n}(x)=\frac{1}{\sqrt{n+a}}\bigg{[}\sqrt{a}\xi_{0}+\sum_{j=1}^{n}\xi_{j}\cos(jx)+\eta_{j}\sin(jx)\bigg{]}\,,$
(10.4)
where the variables $\xi_{j},\eta_{j}$ are iid copies of a random variable
$\xi$, and $a>0$ is a fixed parameter. We note that for this model it is
natural to focus only on the complex $\xi$ case as otherwise $P_{n}$ is likely
to have roots.
###### Theorem 10.4.
Theorem 1.2 extends to hold for $P_{n}$ of the forms (10.3) and (10.4).
For the model (10.4), by combining with Theorem 1.1 we obtain the following:
###### Corollary 10.5.
The limit (1.6) holds also for the model (10.4) with $\xi$ a complex variable
as in Theorem 1.2, and $a=1/2$.
###### Proof.
From Theorem 10.4 it suffices to verify that (1.6) holds under
${\mathbf{P}}_{{\mathcal{N}}_{{\mathbb{R}}}(0,1)}$. Note that under this
measure, $\xi_{j},j\geq 0$ and $\eta_{j},j\geq 1$ are iid standard complex
Gaussians. Set $\zeta_{0}=\xi_{0}$ and for $1\leq j\leq n$ set
$\zeta_{j}:=\frac{1}{\sqrt{2}}(\xi_{j}+\eta_{j})$,
$\zeta_{-j}:=\frac{1}{\sqrt{2}}(\xi_{j}-\eta_{j})$. From the rotational
invariance of the complex Gaussian law it follows that $\zeta_{j},-n\leq j\leq
n$ are iid standard complex Gaussians. Then one verifies that with the change
of variables, (10.4) becomes
$P_{n}(x)=\frac{1}{\sqrt{2n+2a}}\sum_{j=-n}^{n}\zeta_{j}e(jx).$
The claim now follows from the complex Gaussian case of Theorem 1.1 and the
choice $a=1/2$. ∎
We comment on the minor modifications of the proof of Theorem 1.2 that are
needed to obtain Theorem 10.4. The probabilistic Lemmas 2.1 and 2.4 follow
from straightforward modifications. Lemma 2.2 is deterministic and does not
depend on the specific form of $P_{n}$ after conditioning on the good event.
The remainder of the argument only depends on the specific model through
the matrix $W$ in the definition (3.2) of the random walks
$S_{n}(\boldsymbol{t})$, and the only proofs that need modification are those
of Lemma 3.6 and Theorem 3.1. For the model (10.4), we may condition on
$\xi_{0}$ and $\eta_{j},j\geq 1$. As the trigonometric series is now real, we
only need to consider a $2m$-dimensional walk of the form
$\sum_{j=1}^{n}\xi_{j}\boldsymbol{v}_{j}$
with notation as in (10.1). The $n\times 2m$ matrix $V$ with rows
$\boldsymbol{v}_{j}$ is a submatrix of $W_{[-n,n]}$ as defined in (3.1), and
one checks that the argument for Lemma 3.6 yields the same bound on the
smallest singular value of $V$. Moreover, the proof of Theorem 3.1 began by
reducing the problem to the submatrix $V$ (see (9.3)), so the result also
holds in this case.
## Appendix A Separation of near-minimizers
In this appendix we prove Lemma 2.2, restated below, along similar lines to
the proof of [YZ, Lemma 2.11].
###### Lemma A.1.
On the event ${\mathcal{G}}_{2}({K_{0}}/2)$ we have
1. (i)
If ${\mathcal{A}}_{\alpha}$ and ${\mathcal{A}}_{\alpha+1}$ hold, then
$Y_{\alpha}\in[\frac{\pi}{N}-\frac{\pi}{N\log^{{K_{0}}/4}n},\frac{\pi}{N}].$
2. (ii)
Furthermore, ${\mathcal{A}}_{\alpha}$ and ${\mathcal{A}}_{\alpha^{\prime}}$
cannot hold simultaneously as long as
$2\leq|\alpha^{\prime}-\alpha|\leq\frac{n}{\log^{3{K_{0}}}n}.$
###### Proof.
We first show (i). Assume that ${\mathcal{A}}_{\alpha}$ holds and
$Y_{\alpha}\in[0,\frac{\pi}{N}-\frac{\pi}{N\log^{{K_{0}}/4}n})$. Then
$\displaystyle|F_{\alpha}(x_{\alpha}+\pi/N)|=|Z_{\alpha}/n+(\pi/N-Y_{\alpha})P^{\prime}(x_{\alpha})|$
$\displaystyle\geq|(\pi/N-Y_{\alpha})P^{\prime}(x_{\alpha})|-|Z_{\alpha}|/n$
$\displaystyle\gg\frac{1}{N\log^{{K_{0}}/4}n}\times\frac{n}{\log^{{K_{0}}/2}n}-\frac{\log
n}{n}$ $\displaystyle\gg\frac{\log^{{K_{0}}/4}n}{n}-\frac{\log
n}{n}\gg\frac{\log^{{K_{0}}/4}n}{n}.$
Now for $x\in I_{\alpha+1}$ and under ${\mathcal{G}}_{2}({K_{0}}/2)$
$\displaystyle|F_{\alpha+1}(x)-F_{\alpha}(x)|$
$\displaystyle\leq|F_{\alpha+1}(x)-P(x)|+|F_{\alpha}(x)-P(x)|$
$\displaystyle\ll N^{-2}\sup_{x\in[-\pi,\pi]}|P^{\prime\prime}(x)|$
$\displaystyle\ll\frac{\log^{3{K_{0}}}n}{n^{2}}.$
So if $x\in I_{\alpha+1}$ then
$\displaystyle|F_{\alpha+1}(x)|$
$\displaystyle\geq|F_{\alpha}(x)|-|F_{\alpha+1}(x)-F_{\alpha}(x)|$
$\displaystyle\geq|F_{\alpha}(x_{\alpha}+\pi/N)|-|F_{\alpha+1}(x)-F_{\alpha}(x)|$
$\displaystyle\gg\frac{\log^{{K_{0}}/4}n}{n},$
where $|F_{\alpha}(x)|\geq|F_{\alpha}(x_{\alpha}+\pi/N)|$ because
$x_{\alpha}+\pi/N$ is closer than $x$ to the minimizer
$x_{\alpha}+Y_{\alpha}$. The above implies that
$|Z_{\alpha+1}|=n|F_{\alpha+1}(Y_{\alpha+1}+x_{\alpha+1})|>\log n$ and hence
that ${\mathcal{A}}_{\alpha+1}$ does not hold.
We turn to prove (ii). For $x\in I_{\alpha^{\prime}}$ we have
$\displaystyle|F_{\alpha}(x)-F_{\alpha^{\prime}}(x)|$
$\displaystyle\leq|F_{\alpha}(x)-P(x)|+|F_{\alpha^{\prime}}(x)-P(x)|$
$\displaystyle\ll(x_{\alpha}-x_{\alpha^{\prime}})^{2}\sup_{x\in[-\pi,\pi]}|P^{\prime\prime}(x)|$
$\displaystyle\leq(x_{\alpha}-x_{\alpha^{\prime}})^{2}n^{2}\log^{{K_{0}}/2}n.$
On the other hand, on ${\mathcal{A}}_{\alpha}$, for all $x\in
I_{\alpha^{\prime}}$
$\displaystyle|F_{\alpha}(x)|\geq|F_{\alpha}(x_{\alpha^{\prime}}-\pi/N)|$
$\displaystyle\geq|F_{\alpha}(x_{\alpha^{\prime}}-\pi/N)-F_{\alpha}(x_{\alpha}+Y_{\alpha})|-|F_{\alpha}(x_{\alpha}+Y_{\alpha})|$
$\displaystyle\geq|(x_{\alpha^{\prime}}-\pi/N-Y_{\alpha})P^{\prime}(x_{\alpha})|-|Z_{\alpha}|/n$
$\displaystyle\gg
n|x_{\alpha^{\prime}-1}-x_{\alpha}|\log^{-{K_{0}}/2}n-\frac{\log n}{n}$
$\displaystyle\gg n|x_{\alpha^{\prime}-1}-x_{\alpha}|\log^{-{K_{0}}/2}n.$
Thus for all $x\in I_{\alpha^{\prime}}$,
$\displaystyle|F_{\alpha^{\prime}}(x)|$
$\displaystyle\geq|F_{\alpha}(x)|-(x_{\alpha}-x_{\alpha^{\prime}})^{2}n^{2}\log^{{K_{0}}/2}n$
$\displaystyle\gg|x_{\alpha^{\prime}-1}-x_{\alpha}|n\log^{-{K_{0}}/2}n-(x_{\alpha}-x_{\alpha^{\prime}})^{2}n^{2}\log^{{K_{0}}/2}n$
$\displaystyle\gg
n|x_{\alpha^{\prime}-1}-x_{\alpha}|(\log^{-{K_{0}}/2}n-4|x_{\alpha^{\prime}-1}-x_{\alpha}|n\log^{{K_{0}}/2}n)$
$\displaystyle\gg
n|x_{\alpha^{\prime}-1}-x_{\alpha}|(\log^{-{K_{0}}/2}n-4n^{-1}\log^{3{K_{0}}/2}n)$
$\displaystyle\gg|x_{\alpha^{\prime}-1}-x_{\alpha}|n\log^{-{K_{0}}/2}n\gg\frac{\log^{{K_{0}}/2}n}{n},$
implying $|Z_{\alpha^{\prime}}|>\log n$ and hence that
${\mathcal{A}}_{\alpha^{\prime}}$ does not hold. ∎
## Acknowledgments
We thank Pavel Bleher, Yen Do, Oanh Nguyen, Oren Yakir and Ofer Zeitouni for
helpful discussions and comments, and Yakir and Zeitouni for showing us an
early draft of their work [YZ] on the Gaussian case. This project was
initiated at the American Institute of Mathematics meeting “Zeros of random
polynomials” in August 2019, where Bleher and Zeitouni were also participants.
In particular, the idea used here and in [YZ] to study local linearizations
emerged from those discussions. We thank the workshop organizers and the
Institute for providing a stimulating research environment.
## References
* [ABB17] L.-P. Arguin, D. Belius, and P. Bourgade. Maximum of the characteristic polynomial of random unitary matrices. Comm. Math. Phys., 349(2):703–751, 2017.
* [ABB+19] L.-P. Arguin, D. Belius, P. Bourgade, M. Radziwiłł, and K. Soundararajan. Maximum of the Riemann zeta function on a short interval of the critical line. Comm. Pure Appl. Math., 72(3):500–535, 2019.
* [ABR] L.-P. Arguin, P. Bourgade, and M. Radziwiłł. The Fyodorov–Hiary–Keating Conjecture. i. Preprint, arXiv:2007.00988.
* [AT07] R. J. Adler and J. E. Taylor. Random fields and geometry. Springer Monographs in Mathematics. Springer, New York, 2007.
* [AW09] J.-M. Azaïs and M. Wschebor. Level sets and extrema of random processes and fields. John Wiley & Sons, Inc., Hoboken, NJ, 2009.
* [BBM+20] P. Balister, B. Bollobás, R. Morris, J. Sahasrabudhe, and M. Tiba. Flat Littlewood polynomials exist. Ann. of Math. (2), 192(3):977–1004, 2020.
* [BCP19] V. Bally, L. Caramellino, and G. Poly. Non universality for the variance of the number of real roots of random trigonometric polynomials. Probab. Theory Related Fields, 174(3-4):887–927, 2019.
* [BD04] P. Bleher and X. Di. Correlations between zeros of non-Gaussian random polynomials. Int. Math. Res. Not., (46):2443–2484, 2004.
* [BG05] A. Böttcher and S. M. Grudsky. Structured condition numbers of large Toeplitz matrices are rarely better than usual condition numbers. Numer. Linear Algebra Appl., 12(2-3):95–102, 2005.
* [BL16] M. Biskup and O. Louidor. Extreme local extrema of two-dimensional discrete Gaussian free field. Comm. Math. Phys., 345(1):271–304, 2016.
* [BP31] A. Bloch and G. Pólya. On the Roots of Certain Algebraic Equations. Proc. London Math. Soc. (2), 33(2):102–114, 1931.
* [BR10] R. N. Bhattacharya and R. R. Rao. Normal approximation and asymptotic expansions, volume 64 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2010. Updated reprint of the 1986 edition [ MR0855460], corrected edition of the 1976 original [ MR0436272].
* [CGS] X. Chen, C. Garban, and A. Shekhar. A new proof of liggett’s theorem for non-interacting brownian motions. Preprint, arXiv:2012.03914.
* [CMN18] R. Chhaibi, T. Madaule, and J. Najnudel. On the maximum of the ${\rm C}\beta{\rm E}$ field. Duke Math. J., 167(12):2243–2345, 2018.
Affiliations:
1. National Solar Observatory, 3665 Discovery Drive, 3rd Floor, Boulder, CO 80303, USA (email: <EMAIL_ADDRESS>)
2. Stanford University, Stanford, CA, USA
3. ReSoLVE Centre of Excellence, Space Climate research unit, University of Oulu, POB 3000, FIN-90014, Oulu, Finland
4. NorthWest Research Associates, 3380 Mitchell Lane, Boulder, CO 80301, USA
5. Institute for Space-Earth Environmental Research, Nagoya University, Furo-cho Chikusa-ku Nagoya, Aichi 464-8601, Japan
# On a limitation of Zeeman polarimetry and imperfect instrumentation in
representing solar magnetic fields with weaker polarization signal
Alexei A. Pevtsov (1), Yang Liu (2), Ilpo Virtanen (3), Luca Bertello (1), Kalevi Mursula (3), K. D. Leka (4,5), Anna L. H. Hughes (1)
###### Abstract
Full disk vector magnetic fields are used widely for developing better
understanding of large-scale structure, morphology, and patterns of the solar
magnetic field. The data are also important for modeling various solar
phenomena. However, observations of vector magnetic fields have one important
limitation that may affect the determination of the true magnetic field
orientation. This limitation stems from our ability to interpret the differing
character of the Zeeman polarization signals which arise from the photospheric
line-of-sight vs. the transverse components of the solar vector magnetic
field, and is likely exacerbated by unresolved structure (non-unity fill
fraction) as well as the disambiguation of the 180∘ degeneracy in the
transverse-field azimuth. Here we provide a description of this phenomenon,
and discuss issues that require additional investigation.
###### keywords:
Sun: magnetic fields – Techniques: polarimetric
## 1 Introduction
Observations of magnetic fields provide key information for developing our
understanding of the Sun’s short-term (space weather) and long-term (space
climate) activity and in predicting these effects on Earth. Synoptic full disk
longitudinal magnetograms have existed since the late 1960s, and these data
continue to serve as the primary input for space weather and space climate
research and operational forecasts. By their nature, longitudinal magnetograms
do not contain sufficient information to derive the true orientations of the
magnetic-field vectors, and thus, require additional assumptions for physical
interpretation. For example, “radial field” synoptic maps, which are widely used in space weather forecasting, are created under the assumption that the true field is radial. Observations of vector Stokes polarimetry in principle
have the information necessary to fully reconstruct photospheric vector-
magnetic-field maps, and efforts are underway to employ such data in
operational space weather forecasts.
The earliest observations of vector magnetic fields in solar active regions
were conducted at the Crimean Astrophysical Observatory in the early 1960s
(Stepanov and Severny, 1962; Severny, 1965). By the early 1980s, a number of
vector magnetographs were developed around the world, with the most prolific
instruments operating in Czechoslovakia, East Germany (Pflug and Grigoryev,
1986), Japan (NAOJ, Ichimoto et al., 1993), the Soviet Union (Crimean, Pulkovo
and Sayan observatories), and the USA (NASA’s Marshall Space Flight
Center/MSFC, Mees Solar Observatory of University of Hawaii, High Altitude
Observatory/HAO) (for review, see individual articles in Hagyard, 1985;
Cacciani et al., 1990). All of these instruments had a limited field of view,
typically about the size of an average active region. Full disk vector
magnetograms have been routinely observed since late 2003 by the Vector Stokes
Magnetograph (VSM) on the Synoptic Optical Long-term Investigation of the Sun
(SOLIS) platform (Keller et al., 2003). Beginning in 2010, full disk vector magnetograms have been available from the Helioseismic and Magnetic Imager (HMI,
Scherrer et al., 2012) on board the Solar Dynamics Observatory (SDO).
On 23–26 January 2017, a working meeting on the “Use of Vector Synoptic Maps
for Modeling” was held in Oulu, Finland with two follow up meetings at the
National Solar Observatory, Boulder, Colorado, USA (7–10 November 2017), and
at the Max Planck Institute for Solar System Research, Göttingen, Germany
(18-21 September, 2018). At the first meeting, a direct comparison of vector
magnetic field observations from SDO/HMI and SOLIS/VSM revealed disagreements
in the orientation of meridional (Bθ, North-South) and/or zonal (Bφ, East-
West) components of the vector field in plage areas. One example of this
disagreement is shown in Figure 1, which shows the three components of the
magnetic field in VSM and HMI synoptic maps between heliographic latitudes
-40∘ and 0∘ and Carrington longitudes 60∘ and 130∘.
Figure 1: Example of disagreement in the orientation of the Bθ and Bφ magnetic field vector components of an active region in the southern hemisphere, which passed the central meridian on 19 November 2015. Upper panels show the radial (Br), meridional (Bθ), and zonal (Bφ) field from SDO/HMI and lower panels from SOLIS/VSM. The meridional (middle) and zonal (right) components show opposite orientations. Red/blue represent positive/negative polarities scaled between $\pm$ 50 G (see color bars on the right side of the figure).
This decaying active region (plage) was at the central meridian on 19 November
2015. A synoptic map is used as an example because the averaging used in
producing a synoptic map results in lower background noise, but very similar
patterns are also seen in less-averaged full-disk data. Recent comparisons
also indicate that the average large-scale zonal field Bφ observed by
SOLIS/VSM and derived from line-of-sight observations tends to disagree
outside active regions (Virtanen et al., 2019). The investigation of this
disagreement uncovered what we think may be an important limitation to Zeeman
vector polarimetry. Here we present the results of our investigation into the
discrepancies uncovered by that collaborative effort. In Section 2, we
introduce observations from two vector magnetographs and compare the vector
field derived from these instruments and their standard data reduction
algorithms. Sections 3-4 introduce our explanation of the observed
disagreement, and in Section 5 we discuss the results of our findings.
## 2 Vector magnetograms from different instruments and a comparative
analysis
Here, we employ full disk vector magnetograms from two instruments: the Vector
Stokes Magnetograph (VSM) on the Synoptic Optical Long-term Investigations of
the Sun (SOLIS) platform (Keller et al., 2003; Balasubramaniam and Pevtsov,
2011), and the Helioseismic and Magnetic Imager (HMI, Scherrer et al., 2012;
Hoeksema et al., 2014) on board the Solar Dynamics Observatory (SDO, Pesnell
et al., 2012).
The HMI instrument is a filtergraph covering the full solar disk with
$4096\times 4096$ pixels. The spatial resolution is about 1” with a 0.5” pixel
size. The width of the filter profiles is 7.6 pm. The spectral line used is Fe
i 617.3 nm, which forms in the photosphere (Norton et al., 2006). The Stokes
parameters $(I,Q,U,V)$ are computed from those measurements (Couvidat et al.,
2016), and are further inverted to retrieve the vector magnetic field using a
Milne-Eddington (ME) based inversion algorithm, the Very Fast Inversion of the
Stokes Vector (VFISV; Borrero et al., 2011; Centeno et al., 2014). To suppress
p-mode oscillations and to increase the signal-to-noise ratio, registered
filtergrams are averaged over a certain time before computing the Stokes
vector. Currently a weighted average is computed every 720 seconds using data
obtained over 1350 seconds by default; other averaging windows are available.
Inversion of the vector field has an unavoidable 180∘ ambiguity in the
azimuthal field direction. Assumptions about the field must be made to resolve
the ambiguity. For all pixels in active regions, as well as for strong-field
pixels (where the S/N $>$3 in the transverse signal plus a 50 G buffer) in
quiet Sun regions, the azimuth is determined using a minimum energy algorithm
(Metcalf, 1994; Metcalf et al., 2006; Leka et al., 2009; Hoeksema et al.,
2014). The minimum-energy-method computation is time consuming for pixels
where the signal is dominated by noise, so for weaker polarization regions,
the $180^{\circ}$ ambiguity is solved using three quicker methods: a
randomizing method (the option to add 180∘ is determined randomly), an acute-
angle comparison to a potential field, and a method that provides the most
radially-directed solution. More details can be found in Hoeksema et al.
(2014). In this study, we used the random disambiguation for pixels with
weaker linear polarization.
The VSM is a spectrograph-based instrument, which observes full line profiles
of the Fe i 630.15 and 630.25 nm spectral lines, with a spectral sampling of
2.4 pm and pixel size of 1.0 $\times$ 1.0 (1.14 $\times$ 1.14 before January
2010) arcseconds over a 2048 $\times$ 2048 pixel field of view. To construct
a full-disk magnetogram, the image of the Sun is scanned in the direction
perpendicular to the spectrograph slit. At each scanning step, the spectra for
each pixel along the slit are recorded simultaneously. Each scanning step
takes 0.6 s, and a full disk magnetogram can be completed in about 20 minutes.
The spectrograph slit is curved, which introduces geometric distortions to the
image of the Sun. These distortions are corrected by shifting the position of
each pixel in the final image to the closest integer position of the true
pixel location in a round-Sun image. The maximum uncertainty in the position
of a pixel does not exceed half-a-pixel, which is significantly smaller than
typical atmospheric seeing for this ground-based instrument. The above
correction procedure avoids ill-posed interpolation of full disk magnetograms,
and it preserves the mapping of spectral information for each image pixel.
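The nearest-integer correction described above can be sketched in a few lines. This is a simplified illustration under stated assumptions, not the actual VSM pipeline code: the corrected-coordinate arrays and the test image are invented, and the geometry of the slit curvature is not modeled.

```python
import numpy as np

def remap_nearest(image, true_x, true_y):
    """Nearest-integer remapping: place each recorded pixel at the integer
    grid point closest to its true (distortion-corrected) location. No
    interpolation is performed, so the per-pixel spectral mapping survives.
    The text guarantees shifts below half a pixel, so collisions between
    output positions do not occur in practice."""
    out = np.zeros_like(image)
    ix = np.clip(np.rint(true_x).astype(int), 0, image.shape[1] - 1)
    iy = np.clip(np.rint(true_y).astype(int), 0, image.shape[0] - 1)
    out[iy, ix] = image
    return out

# Sub-half-pixel distortions round back to the original grid points.
img = np.arange(16.0).reshape(4, 4)
yy, xx = np.mgrid[0:4, 0:4]
corrected = remap_nearest(img, xx + 0.3, yy - 0.2)
print(np.array_equal(corrected, img))
```

Because the shift is bounded by half a pixel, the nearest-integer placement is a bijection on the grid and the spectra recorded for each pixel stay attached to exactly one output pixel, which is the property the text emphasizes.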
Similar to HMI, the observed profiles of Stokes Q, U, V, and I parameters are
inverted using the VFISV code under the assumption of a standard Milne-
Eddington stellar atmosphere. However, unlike HMI, VSM inversion includes the
magnetic-field filling factor ($\alpha$) as an additional fit parameter, which
represents the fraction of each instrument pixel filled by magnetized plasma.
For additional details about SOLIS/VSM inversion methods and pipeline, see
Harker (2017). The 180∘ azimuthal ambiguity in the transverse field is
resolved using the Very Fast Disambiguation Method (VFDM, Rudenko and
Anfinogentov, 2014). The VFDM has an accuracy almost as good as that of the
minimum energy method (used for HMI disambiguation), but is much faster. For synoptic instruments such as VSM and HMI, the disambiguation is done
automatically as part of the pipeline data reduction. The pipeline reductions
are optimized for “a good answer most of the time, in time for the next
dataset”, and in some cases may not return the best possible solution.
VSM data are from data processing level PROVER0 = 15.0511, which uses only one
(Fe i 630.15 nm) spectral line for the inversion. In the following discussion
we adopt the right-handed coordinate system (Br,Bθ,Bφ), which has been used in
previous publications (e.g., Virtanen et al., 2019). Here the radial component
(Br) is positive when pointing away from the Sun, the meridional component
(Bθ) is positive southward, and the zonal component (Bφ) is positive westward.
## 3 Effects of noise in the transverse fields on the derived vector-field
orientation
Let us now consider how properties of noise could affect the derived
interpretation of the vector magnetic field. In all modern instruments, the
derivation of vector magnetic fields is based on observations of full Stokes
profiles in a selected spectral line sensitive to the effects of the magnetic
field at the location of the formation of this line. Stokes I represents the
total intensity of light. Stokes V is circular polarization (counter-/clock-
wise), and Stokes Q and U represent two linear polarizations. The observed
Stokes profiles are fitted by model (synthetic) line profiles in a process called “inversion”, and the properties of the magnetic field (and other parameters, such as Doppler velocity, temperature, magnetic filling factor, etc.) are determined based on the properties of the fitted line profiles.
The observed profiles of all Stokes parameters are affected by noise and other
instrumental limitations. The photometric noise in Stokes Q and U is similar
to that of Stokes V. However, the longitudinal (B||) field is related linearly
to circular polarization, while the relation between transverse field and
linear polarization is quadratic (e.g., see Equation 18 in Stenflo 2013 and
Equations 12c and 12d in Jefferies and Mickey 1991), which results in a lower
signal/noise in the latter for the same underlying field strength. For
example, in the weak-field approximation (Ronan et al., 1987; Jefferies et
al., 1989; Jefferies and Mickey, 1991),
$B_{||}\approx\frac{C_{||}}{\alpha}~{}{\rm V}$ (1)
$B_{\perp}\approx\frac{C_{\perp}}{\sqrt{\alpha}}\sqrt[4]{{\rm Q}^{2}+{\rm
U}^{2}}.$ (2)
Based on radiative transfer modeling of the Fe i 630.15 nm and assuming a
filling factor of unity, $\alpha=1$ (the entire pixel is filled with a field),
Stenflo (2013) estimated the coefficients of proportionality as $C_{||}\approx 29.4$ and $C_{\perp}\approx 184$. (The coefficients are model dependent, but $C_{\perp}\gg C_{||}$ holds independently of the model.)
As a result, the noise measured in horizontal fields is typically larger than
the amplitude of noise in the longitudinal field by a factor of 10-25.
Moreover, unlike noise in B||, which is distributed around zero,
(B${}_{\perp}\pm$ noise) is always positive (e.g. see Appendix A in Leka and
Skumanich, 1999). For some specific magnetic configurations, this dichotomy
may systematically distort the true inclination of the vector magnetic field.
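The two properties just described (symmetric, zero-mean noise in B|| versus a positive-definite, non-zero-mean "noise" in B⊥) follow directly from the linear and quadratic dependencies in Equations 1–2 and can be checked numerically. In this sketch the photometric noise level sigma and the unit conventions are illustrative assumptions; only the qualitative contrast matters.

```python
import numpy as np

# Noise-only realizations of Stokes V, Q, U passed through Eqs. (1)-(2).
# sigma is an assumed, illustrative photometric noise amplitude.
rng = np.random.default_rng(0)
C_par, C_perp = 29.4, 184.0       # Stenflo (2013), Fe I 630.15 nm
alpha = 1.0                       # unity filling factor
sigma = 1e-3

V = rng.normal(0.0, sigma, 100_000)
Q = rng.normal(0.0, sigma, 100_000)
U = rng.normal(0.0, sigma, 100_000)

B_par = C_par * V / alpha                               # Eq. (1): linear in V
B_perp = C_perp * (Q**2 + U**2) ** 0.25 / alpha**0.5    # Eq. (2): quartic root

# Longitudinal noise is symmetric around zero; the transverse amplitude
# is positive-definite and therefore has a non-zero mean.
print(f"mean(B_par)  = {B_par.mean():+.4f}")
print(f"mean(B_perp) = {B_perp.mean():+.4f}  (always >= 0)")
```

The non-zero mean of the transverse amplitude is exactly the systematic bias the next paragraphs trace through to the inferred field inclination.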
Figure 2: Effect of noise in the transverse field on the derived orientation
of the vector field. (a) Black arrows represent the true vector field on the
Sun (radial in this example). Red arrows represent the line-of-sight component
and thick blue arrows are transverse components of the true vector field.
Adding positive noise to the transverse field (thin blue arrows) makes the
observed field (dashed arrow) systematically inclined in the direction away
from the central meridian. Because of the 180∘ azimuthal ambiguity in the
transverse field, the same transverse field may satisfy an alternative
orientation (dashed-dotted arrow). The resulting orientation of the field
vector is shown by a dashed arrow for both locations. Letters E, W, N mark
approximate positions of solar East limb, West limb, and North as seen from
Earth. (b) Difference between the true radial direction and vector orientation
in the presence of noise, when azimuth disambiguation selects the most radial
of two possible solutions (open circles). Solid line is a 6th degree
polynomial fit.
Figure 2a shows a theoretical example, where the true magnetic field is
expected to be radial (black arrow). Observed at disc center, a radial
magnetic field is then determined by the observed line-of-sight component plus
symmetric random noise in the azimuth direction, which is negligible on
average. However, outside the central meridian as depicted in Figure 2a, a
purely radial field contributes to both the line-of-sight component (red
arrow) and the transverse component (blue arrow).
In the presence of noise, the orientation of the magnetic vector will be
determined by the observed (B${}_{||}\pm\delta B_{||}$) and
(B${}_{\perp}\pm\delta B_{\perp}$) components, where $\delta B$ represents the
amplitude of noise. Note that due to a quadratic contribution of Stokes Q and
U to B⟂ (see, Equation 2), (B${}_{\perp}\pm\delta B_{\perp}$) is always
positive (by definition, transverse field is always positive, with or without
noise).
For pixels situated East of the central meridian (upper part of Figure 2a),
the projection of the same (radial) vector would create a systematic non-zero
transverse component, which in the presence of noise will only increase (due
to B${}_{\perp}\pm\delta B_{\perp}>$ 0). Due to the 180-degree ambiguity in
the azimuth of the transverse field, the vector field would have two possible
orientations, and if the more radial (black solid arrow) option is chosen,
then the selected orientation of the magnetic field will have a systematic
tilt in the direction away from the solar disk center. Figure 2b shows a
difference between the true radial field and one with $\delta B_{\perp}$ about
20 G added to the true transverse field. For this modeling exercise, we
assumed that the true field strength of the vector field was 200 G, and
$\delta B_{||}$ = 0 (no error in the longitudinal field), and $\delta
B_{\perp}$ was randomly generated within the range of about 0–100 G, with a
mean about 50 G and a standard deviation of 20 G. For pixels located closer to
the disc center, we see an increase in scatter in the vector inclination
relative to the true radial direction. Most importantly, there is a systematic
inclination of the vector field in the direction away from the central
meridian. For a field strength of 200 G and 20 G noise in transverse field,
the inclination error could be up to 20 degrees (Figure 2b).
Figure 3: (Left) Changes in the pattern of the zonal (East-West) vector-field
component of a small bipolar region over its disk passage. Four panels show
the so-called Near Real Time (NRT) synoptic maps. White/black halftones
correspond to magnetic field directed towards the West/East. Each NRT map
covers 360 degrees in longitude (horizontal direction) and approximately $\pm$
30 degrees in latitude (vertical direction). The most recent data are added
onto the left side of synoptic maps. The dates of the most recent observations
added to synoptic map are shown in the upper-right corner of each panel. For
visual clarity, the synoptic charts are shifted to have the active regions
aligned in the vertical direction. Small red arrows plotted in the left part
of each panel correspond to the approximate location of the central meridian
for the day of observations. Panels on the right show a schematic inclination
pattern of the magnetic field vectors in a vertical East-West oriented plane
(as if we were looking at an active region from the side).
Figure 3 demonstrates this effect in the disk-passage of a small bipolar
region with negative leading and positive following polarity fluxes. It is
clear that when this small active region is located to the East of central
meridian, the horizontal magnetic field connecting the two polarities is
directed westward in the leading (negative) polarity and eastward in the
following (positive) polarity flux element. Only when the region is near the
central meridian does it become clear that the magnetic field in both polarity
fluxes is close to radial, with a very slight inclination in the direction
away from each polarity. When the region is located West of central meridian,
the pattern of the zonal field reverses as compared with its location East of
central meridian. This behaviour is in perfect qualitative agreement with the
explanation given in Figure 2.
Let us now consider whether lowering the noise level will mitigate this
systematic effect. For this test, we selected HMI observations taken on 10
February 2015. That day had a good representation of various solar features
(plage, sunspots) situated at different distances from solar disk center. For
the test, we employed filtergrams that were processed using a much longer
integration than normal (pipeline) magnetograms (5760 s vs. 720 s, courtesy
Dr. X. Sun). The integrated (averaged) filtergrams were then inverted using
standard VFISV code, and disambiguated using the minimum energy disambiguation
method with default pipeline settings. The averaged magnetogram is centered at
19:12:00 UT, and is shown in Figure 4 (left). Despite significant time
averaging, we do not see any obvious smearing of solar features. The S/N is
improved when compared to the standard 720 s magnetograms, especially in areas
with a weak polarization signal.
To test the effect of lower noise on the inferred direction of the magnetic
field vector in large-scale weak-signal areas, we select two decaying plage
regions that extend across the central meridian, and sample the B$\varphi$ on
either side (light/dark blue and green boxes in Figure 4). We assume that the
field vectors in these structures should not vary across the central meridian
per se. Figure 4 (right) shows the distribution of the zonal (East-West)
magnetic field in the area of negative (blue colors) and positive (green
colors) polarity flux. The distributions show a clear offset: for pixels
situated East of central meridian, the mean is positive, while for pixels in
the Western hemisphere, it is negative. Such an offset supports the notion
that a magnetic field situated in a region of decaying magnetic flux in the
Eastern hemisphere, on average, shows inclination patterns away from the solar
central meridian. Using observations with a lower noise level (5760 s) makes
the distributions narrower, but the offset is still present. This behavior is
in agreement with the explanation presented in Figure 2.
## 4 Effects of azimuth disambiguation and filling factor
For a simplified case of Bθ = 0, shown in Figure 2a, the transformation from
longitudinal and transverse components, measured in the image plane at a
heliographic longitude (central meridian distance) $\phi$, to heliographic
components Br and Bφ can be written as
$B_{r}=B_{||}\cos\phi\pm B_{\perp}\sin\phi=\frac{C_{||}}{\alpha}~{}{\rm
V}\cos\phi\pm\frac{C_{\perp}}{\sqrt{\alpha}}\sqrt[4]{{\rm Q}^{2}+{\rm
U}^{2}}\sin\phi$ (3)
$B_{\varphi}=-B_{||}\sin\phi\pm
B_{\perp}\cos\phi=-\frac{C_{||}}{\alpha}~{}{\rm
V}\sin\phi\pm\frac{C_{\perp}}{\sqrt{\alpha}}\sqrt[4]{{\rm Q}^{2}+{\rm
U}^{2}}\cos\phi.$ (4)
For this simple configuration, the $\pm$ ambiguity in Equations 3 and 4 is resolved by requiring the two terms to have the same sign in Equation 3 and, consequently, opposite signs in Equation 4. The first term in Equation 4 is positive East of the central meridian ($\phi<0$) and negative West of it. The second term is always positive. For the case when the field is mostly radial,
$|\frac{-C_{||}~{}V}{\alpha}|>>\frac{C_{\perp}\sqrt[4]{Q^{2}+U^{2}}}{\sqrt{\alpha}}.$
(5)
Under this condition, the sum of the two components, which represents the sign
of Bφ, will be positive East of the central meridian and negative to the West.
However, in the case of a mostly horizontal field
$|\frac{-C_{||}~{}V}{\alpha}|<<\frac{C_{\perp}\sqrt[4]{Q^{2}+U^{2}}}{\sqrt{\alpha}}.$
(6)
The sign of Bφ will always be negative, independent of the pixel location relative to the central meridian. This is in qualitative agreement with the
observations mentioned in Section 1 that the disagreement in the sign of the zonal (Bφ) and/or meridional (Bθ) components of HMI and VSM observations was only observed in pixels with weaker linear polarization
signals. In pixels with stronger linear polarization, the two instruments were
in agreement with each other with respect to the sign of Bφ and Bθ components.
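The disambiguation logic of Equations 3–4 can be sketched numerically for the mostly radial regime of Equation 5, where the text predicts a sign flip of Bφ across the central meridian. The Stokes amplitudes below are invented, illustrative numbers; the same-sign rule in Equation 3 is implemented literally.

```python
import numpy as np

C_par, C_perp = 29.4, 184.0   # proportionality constants (Stenflo 2013)

def b_r_phi(V, QU, phi_deg, alpha=1.0):
    """Resolve the +/- choice in Eqs. (3)-(4) for the B_theta = 0 geometry
    by requiring the two terms of Eq. (3) to share a sign. QU stands for
    (Q^2 + U^2)^(1/4)."""
    phi = np.deg2rad(phi_deg)
    b_los = C_par * V / alpha
    b_tr = C_perp * QU / np.sqrt(alpha)             # always >= 0
    c = np.sign(b_los * np.cos(phi) * np.sin(phi))  # same-sign choice, Eq. (3)
    b_r = b_los * np.cos(phi) + c * b_tr * np.sin(phi)
    b_phi = -b_los * np.sin(phi) + c * b_tr * np.cos(phi)
    return b_r, b_phi

# Mostly radial case (Eq. 5): strong circular, weak linear polarization.
# The amplitudes are illustrative assumptions, not measured values.
_, bphi_east = b_r_phi(V=5.0, QU=0.2, phi_deg=-30.0)
_, bphi_west = b_r_phi(V=5.0, QU=0.2, phi_deg=+30.0)
print(bphi_east, bphi_west)  # opposite signs across the central meridian
```

With the longitudinal term dominant, Bφ comes out positive East and negative West of the central meridian, matching the mostly radial case discussed after Equation 5.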
Equations 5-6 suggest that the effect of sign reversal in Bφ could also depend
on the magnetic filling factor $\alpha$. For some amplitudes of $|{-C_{||}~{}V}|$ and $C_{\perp}\sqrt[4]{Q^{2}+U^{2}}$, the inequality in Equation 5 could reverse if $\alpha$ is changed from less than one to unity. The
appendix provides an example of a test done with SOLIS/VSM data, which
demonstrates that for $\alpha<1$, Bφ in three flux areas reverses its sign when each area crosses the central meridian. When $\alpha$ is set to unity, the
same areas do not exhibit a sign reversal.
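The $\alpha$ dependence follows from the different scalings of the two terms: the longitudinal term goes as $1/\alpha$ while the transverse term goes only as $1/\sqrt{\alpha}$, so lowering $\alpha$ pushes a pixel from the regime of Equation 6 toward that of Equation 5. A minimal sketch, with invented Stokes amplitudes chosen so that the balance flips:

```python
import numpy as np

# How the filling factor alpha shifts the balance in Eqs. (5)-(6):
# longitudinal term ~ 1/alpha, transverse term ~ 1/sqrt(alpha).
C_par, C_perp = 29.4, 184.0
V, QU = 1.0, 0.32     # assumed amplitudes; QU = (Q^2 + U^2)^(1/4)

def term_ratio(alpha):
    """|longitudinal term| / |transverse term| from Eqs. (5)-(6)."""
    return (C_par * abs(V) / alpha) / (C_perp * QU / np.sqrt(alpha))

print(term_ratio(1.0))    # < 1: transverse-dominated (Eq. 6 regime)
print(term_ratio(0.04))   # > 1: longitudinal-dominated (Eq. 5 regime)
```

For these illustrative amplitudes, a unity filling factor leaves the pixel transverse-dominated (no sign reversal across the meridian), while a small filling factor, typical of unresolved plage, makes it longitudinal-dominated, consistent with the VSM test described in the appendix.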
## 5 Discussion
We provide simple arguments and some observational evidence that a dichotomy
in properties of Zeeman polarization arising from longitudinal and transverse
field components, and our ability to interpret these signals especially in
unresolved structures (non-unity filling factor), combined with the azimuthal
disambiguation may lead to erroneous conclusions about the orientation of the
vector magnetic field, particularly in areas where the polarization signals
are weak. The systematic patterns may depend on the amplitude of noise and
whether the magnetic filling factor is resolved or assumed to be unity, and
thus, observations from different instruments could result in slightly
different orientation patterns (e.g., inclination of vector fields towards the
solar poles or towards the solar equator). Pixels with stronger polarization
signals are less affected (e.g., in sunspots, where $\alpha$ is relatively
large, and where Stokes Q, U, and V are typically strong), and thus, we expect
that only pixels with weaker polarization signals will show the effect
described above. Indeed, comparison of observations from SOLIS/VSM and SDO/HMI
show a good agreement between the two instruments in areas of strong fields
(e.g., sunspots), while in some areas of weak fields (e.g., plage), we see the
zonal (East-West) and/or meridional (North/South) components having opposite
sign. The amplitude of the effect will depend on the specific orientation of
the magnetic field vector relative to line-of-sight, and thus the impact of
this effect could vary across the solar disk in a somewhat complex way, making
it difficult (or perhaps impossible) to correct, since the signal inherently
originates from an unknown magnetic configuration. For an underlying radial-
field orientation, the effect could be mitigated by equalizing the amplitude
of the noise between the inferred longitudinal and transverse field
components. However, given the significant differences in noise levels between
inferred B|| and B⟂ such noise equalization could be done only at the expense
of increasing noise in the longitudinal field measurements (e.g., by using a
modulation schema allowing much longer integration time for states
corresponding to Stokes Q and U measurements). A test conducted using existing
HMI data shows that simply improving the S/N for both the transverse and longitudinal fields does not completely eliminate the problem; while the amplitude of the systematic inclination decreases slightly, the overall effect still remains.
We note that a systematic bias in the horizontal component of the magnetic field may, in principle, lead to effects similar in appearance to the ones that we discuss here. While our simplified example discussed in Section 4 uses the
filling factor, it might be extremely difficult or even counterproductive to
model the exact magnetic and non-magnetic contribution in pixels outside
sunspots. The methods used to estimate the filling factor may inadvertently
introduce additional errors related to the unknown difference in line profiles
between magnetized and non-magnetized atmospheres in each pixel. The effects
of the contribution function in the spatial direction (e.g., whether the non-magnetized component is predominantly located on one side of a pixel or is uniformly distributed across it, as in Harker, 2017) are also unknown. Perhaps
some of these issues may be addressed via spatially coupled inversion of
spectro-polarimetric image data as in van Noort (2012). That technique,
however, requires knowledge of the point-spread function (PSF), which
could be achieved either by measuring the atmospheric seeing during the
observations (van Noort, 2017) or using adaptive optics. Thus, the method is
not applicable for the existing SOLIS/VSM observations.
Figure 4: (Upper-left): half-disk image of radial Br field (white/black
correspond to positive/negative polarity), scaled to $\pm$200 G. Outlines show
pixels with the minimum-energy disambiguation applied (confid$\\_$disambig
keyword $\geq$ 60), which are included in the distributions shown in right
columns. (Lower-left): half-disk image of zonal Bφ, scaled to $\pm$100 G.
Region 1 (blue boxes) corresponds to area of negative polarity flux in
decaying flux region. Region 2 (green boxes) corresponds to a similar area but
of positive polarity flux. Boxes outlined by lighter/darker color are located
West/East of the central meridian. (Middle column): distribution of the zonal
(Bφ) component of the magnetic field in Region 1 for the two (720 s and 5760 s) averaging windows. (Right column): similar to the middle column but for Region 2. Note:
salmon (light pinkish-orange) contours that outline pixels with the minimum-
energy disambiguation mask underlying black/white magnetic fields. To see both
the magnetic field and the contours, the PDF figure needs to be magnified by
300% or more.
###### Acknowledgements.
The authors thank Robert Cameron for his thoughtful comments, which helped improve this article. We acknowledge the help of Alexander Pevtsov in
preparation of the manuscript. This work is the result of discussions held at
three working meetings on the “Use of Vector Synoptic Maps for Modeling”.
Funding for these meetings was provided by the hosting groups at the
University of Oulu, Finland; the National Solar Observatory, USA; the Max
Planck Institute for Solar System Research, Germany; and by NASA’s Solar
Dynamics Observatory (Dr. Dean Pesnell). We also acknowledge the financial
support by the Academy of Finland to the ReSoLVE Centre of Excellence (project
no. 307411). AAP and LB acknowledge partial support by NASA grants
80NSSC17K0686 and NNX15AN43G. KDL acknowledges partial support from NASA/GSFC
grant 80NSSC19K0317. This work utilizes SOLIS data obtained by the NSO
Integrated Synoptic Program (NISP), managed by the National Solar Observatory.
HMI data used here are courtesy of NASA/SDO and the HMI science teams.
## Appendix A The effect of magnetic fill factor on the horizontal components
of magnetic field
Figure 5 provides an example of a test done with SOLIS/VSM data, which
demonstrates that for $\alpha<1$, Bφ in three flux areas reverses its sign when the areas cross the central meridian. When $\alpha$ is set to unity, the same areas do not exhibit the sign reversal.
Figure 5: Full-disk images of Br (upper row), Bθ (middle row), and Bφ (lower row).
First and third columns show the vector field components in image coordinate
system, while second and fourth columns are in the heliographic coordinate
system. Three examples are marked by arrows of different colors. Setting $\alpha$ to unity emphasizes pixels that are otherwise below the noise threshold.
## References
* Balasubramaniam and Pevtsov (2011) Balasubramaniam, K. S., and A. Pevtsov, 2011. Ground-based synoptic instrumentation for solar observations. In Proc. SPIE, vol. 8148 of _Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series_ , 814809. 10.1117/12.892824.
* Borrero et al. (2011) Borrero, J. M., S. Tomczyk, M. Kubo, H. Socas-Navarro, J. Schou, S. Couvidat, and R. Bogart, 2011. VFISV: Very Fast Inversion of the Stokes Vector for the Helioseismic and Magnetic Imager. _Sol. Phys._ , 273(1), 267–293. 10.1007/s11207-010-9515-6, 0901.2702.
* Cacciani et al. (1990) Cacciani, A., J. Varsik, and H. Zirin, 1990. Observations of vector magnetic fields with a magneto-optic filter. _Sol. Phys._ , 125, 173–178. 10.1007/BF00154786.
* Centeno et al. (2014) Centeno, R., J. Schou, K. Hayashi, A. Norton, J. T. Hoeksema, Y. Liu, K. D. Leka, and G. Barnes, 2014. The Helioseismic and Magnetic Imager (HMI) Vector Magnetic Field Pipeline: Optimization of the Spectral Line Inversion Code. _Sol. Phys._ , 289(9), 3531–3547. 10.1007/s11207-014-0497-7, 1403.3677.
* Couvidat et al. (2016) Couvidat, S., J. Schou, J. T. Hoeksema, R. S. Bogart, R. I. Bush, T. L. Duvall, Y. Liu, A. A. Norton, and P. H. Scherrer, 2016. Observables Processing for the Helioseismic and Magnetic Imager Instrument on the Solar Dynamics Observatory. _Sol. Phys._ , 291(7), 1887–1938. 10.1007/s11207-016-0957-3, 1606.02368.
* Hagyard (1985) Hagyard, M. J., ed., 1985. Measurements of Solar Vector Magnetic Fields, NASA Conference Proceedings 2375. Washington, DC: NASA.
* Harker (2017) Harker, B. J., 2017. VFISV Inversion Code Documentation for SOLIS/VSM Pipeline Implementation. _Tech. rep._ , National Solar Observatory: Tucson, AZ. 1707.00002.
* Hoeksema et al. (2014) Hoeksema, J. T., Y. Liu, K. Hayashi, X. Sun, J. Schou, et al., 2014. The Helioseismic and Magnetic Imager (HMI) Vector Magnetic Field Pipeline: Overview and Performance. _Sol. Phys._ , 289(9), 3483–3530. 10.1007/s11207-014-0516-8, 1404.1881.
* Ichimoto et al. (1993) Ichimoto, K., T. Sakurai, E. Hiei, Y. Nishino, K. Shinoda, et al., 1993. Solar Flare Telescope project. _Report of the National Astronomical Observatory of Japan_ , 1(4), 375–390.
* Jefferies et al. (1989) Jefferies, J., B. W. Lites, and A. Skumanich, 1989. Transfer of Line Radiation in a Magnetic Field. _ApJ_ , 343, 920–935.
* Jefferies and Mickey (1991) Jefferies, J., and D. L. Mickey, 1991. On the Inference of Magnetic Field Vectors from Stokes Profiles. _ApJ_ , 372, 694–702.
* Keller et al. (2003) Keller, C. U., J. W. Harvey, and M. S. Giampapa, 2003. SOLIS: an innovative suite of synoptic instruments. In S. L. Keil and S. V. Avakyan, eds., Innovative Telescopes and Instrumentation for Solar Astrophysics, vol. 4853 of _Proc. SPIE_ , 194–204. 10.1117/12.460373.
* Leka et al. (2009) Leka, K. D., G. Barnes, A. D. Crouch, T. R. Metcalf, G. A. Gary, J. Jing, and Y. Liu, 2009. Resolving the 180∘ Ambiguity in Solar Vector Magnetic Field Data: Evaluating the Effects of Noise, Spatial Resolution, and Method Assumptions. _Sol. Phys._ , 260(1), 83–108. 10.1007/s11207-009-9440-8.
* Leka and Skumanich (1999) Leka, K. D., and A. Skumanich, 1999. On the value of ‘$\alpha$AR’ from vector magnetograph data - I. Methods and Caveats. _Sol. Phys._ , 188(1), 3–19. 10.1023/A:1005108632671.
* Metcalf (1994) Metcalf, T. R., 1994. Resolving the 180-degree ambiguity in vector magnetic field measurements: The ‘minimum’ energy solution. _Sol. Phys._ , 155(2), 235–242. 10.1007/BF00680593.
* Metcalf et al. (2006) Metcalf, T. R., K. D. Leka, G. Barnes, B. W. Lites, M. K. Georgoulis, et al., 2006. An Overview of Existing Algorithms for Resolving the 180∘ Ambiguity in Vector Magnetic Fields: Quantitative Tests with Synthetic Data. _Sol. Phys._ , 237(2), 267–296. 10.1007/s11207-006-0170-x.
* Norton et al. (2006) Norton, A. A., J. P. Graham, R. K. Ulrich, J. Schou, S. Tomczyk, et al., 2006. Spectral Line Selection for HMI: A Comparison of Fe I 6173 Å and Ni I 6768 Å. _Sol. Phys._ , 239(1-2), 69–91. 10.1007/s11207-006-0279-y, astro-ph/0608124.
* Pesnell et al. (2012) Pesnell, W. D., B. J. Thompson, and P. C. Chamberlin, 2012. The Solar Dynamics Observatory (SDO). _Sol. Phys._ , 275(1-2), 3–15. 10.1007/s11207-011-9841-3.
* Pflug and Grigoryev (1986) Pflug, K., and V. M. Grigoryev, 1986. The chain of the solar magnetographs and its results. _Contributions of the Astronomical Observatory Skalnate Pleso_ , 15, 453–468.
* Ronan et al. (1987) Ronan, R. S., D. L. Mickey, and F. Q. Orrall, 1987. The derivation of vector magnetic fields from stokes profiles: Integral versus least squares fitting techniques. _Sol. Phys._ , 113(1-2), 353–359. 10.1007/BF00147722.
* Rudenko and Anfinogentov (2014) Rudenko, G. V., and S. A. Anfinogentov, 2014. Very Fast and Accurate Azimuth Disambiguation of Vector Magnetograms. _Sol. Phys._ , 289(5), 1499–1516. 10.1007/s11207-013-0437-y.
* Scherrer et al. (2012) Scherrer, P. H., J. Schou, R. I. Bush, A. G. Kosovichev, R. S. Bogart, et al., 2012. The Helioseismic and Magnetic Imager (HMI) Investigation for the Solar Dynamics Observatory (SDO). _Sol. Phys._ , 275, 207–227. 10.1007/s11207-011-9834-2.
* Severny (1965) Severny, A. B., 1965. Introductory report. In R. Lust, ed., Stellar and Solar Magnetic Fields, vol. 22 of _IAU Symposium_ , 238–266. North-Holland Pub. Co., Amsterdam.
* Stenflo (2013) Stenflo, J. O., 2013. Solar magnetic fields as revealed by Stokes polarimetry. _A &A Rev._, 21, 66. 10.1007/s00159-013-0066-3, 1309.5454.
* Stepanov and Severny (1962) Stepanov, V. E., and A. B. Severny, 1962. A photoelectric method for measurements of the magnitude and direction of the solar magnetic field. _Bull Crim Astrophys Obs_ , 28, 166–193.
* van Noort (2012) van Noort, M., 2012. Spatially coupled inversion of spectro-polarimetric image data. I. Method and first results. _A &A_, 548, A5. 10.1051/0004-6361/201220220, 1210.4636.
* van Noort (2017) van Noort, M., 2017. Image restoration of solar spectra. _A &A_, 608, A76. 10.1051/0004-6361/201731339, 1711.09629.
* Virtanen et al. (2019) Virtanen, I. I., A. A. Pevtsov, and K. Mursula, 2019. Structure and evolution of the photospheric magnetic field in 2010-2017: comparison of SOLIS/VSM vector field and BLOS potential field. _Astronomy and Astrophysics_ , 624, A73. 10.1051/0004-6361/201834895, 1904.10740.
# DeepGreen: Deep Learning of Green’s Functions
for Nonlinear Boundary Value Problems
Craig R. Gin1∗, Daniel E. Shea2∗, Steven L. Brunton3, and
J. Nathan Kutz4
1Department of Population Health and Pathobiology, North Carolina State
University,
Raleigh, NC 27695, United States
2Department of Materials Science and Engineering, University of Washington,
Seattle, WA 98195, United States
3Department of Mechanical Engineering, University of Washington,
Seattle, WA 98195, United States
4Department of Applied Mathematics, University of Washington,
Seattle, WA 98195, United States
∗Co-first authors and corresponding authors. Email: <EMAIL_ADDRESS> and <EMAIL_ADDRESS>
###### Abstract
Boundary value problems (BVPs) play a central role in the mathematical
analysis of constrained physical systems subjected to external forces.
Consequently, BVPs frequently emerge in nearly every engineering discipline
and span problem domains including fluid mechanics, electromagnetics, quantum
mechanics, and elasticity. The fundamental solution, or Green’s function, is a
leading method for solving linear BVPs that enables facile computation of new
solutions to systems under any external forcing. However, fundamental Green’s
function solutions for nonlinear BVPs are not feasible since linear
superposition no longer holds. In this work, we propose a flexible deep
learning approach to solve nonlinear BVPs using a dual-autoencoder
architecture. The autoencoders discover an invertible coordinate transform
that linearizes the nonlinear BVP and identifies both a linear operator $L$
and Green’s function $G$ which can be used to solve new nonlinear BVPs. We
find that the method succeeds on a variety of nonlinear systems including
nonlinear Helmholtz and Sturm–Liouville problems, nonlinear elasticity, and a
2D nonlinear Poisson equation. The method merges the strengths of the
universal approximation capabilities of deep learning with the physics
knowledge of Green’s functions to yield a flexible tool for identifying
fundamental solutions to a variety of nonlinear systems.
Keywords: Deep learning, Green’s function, Nonlinearity, Koopman operator
theory
## 1 Introduction
Boundary value problems (BVPs) are ubiquitous in the sciences (?). From
elasticity to quantum electronics, BVPs have been fundamental in the
development and engineering design of numerous transformative technologies of
the 20th century. Historically, the formulation of many canonical problems in
physics and engineering results in linear BVPs: from Fourier formulating the
heat equation in 1822 (?) to more modern applications such as designing chip
architectures in the semiconductor industry (?, ?). Much of our theoretical
understanding of BVPs comes from the construction of the fundamental solution
of the BVP, commonly known as the Green’s function (?). The Green’s function
solution relies on a common property of many BVPs: linearity. Specifically,
general solutions rely on linear superposition to hold, thus limiting their
usefulness in many modern applications where BVPs are often heterogeneous and
nonlinear. By leveraging modern deep learning, we are able to learn
linearizing transformations of BVPs that render nonlinear BVPs linear so that
we can construct the Green’s function solution. Our deep learning of Green’s
functions, DeepGreen, provides a transformative architecture for modern
solutions of nonlinear BVPs.
DeepGreen is inspired by recent works which use deep neural networks (DNNs) to
discover advantageous coordinate transformations for dynamical systems (?, ?,
?, ?, ?, ?, ?, ?, ?, ?). The universal approximation properties of DNNs (?, ?)
are ideal for learning coordinate transformations that linearize nonlinear
BVPs, ODEs and PDEs. Specifically, such linearizing transforms fall broadly
under the umbrella of Koopman operator theory (?), which has a modern
interpretation in terms of dynamical systems theory (?, ?, ?, ?). There are only limited cases in which Koopman operators can be constructed explicitly (?). However, Dynamic Mode Decomposition (DMD) (?) provides a
numerical algorithm for approximating the Koopman operator (?), with many
recent extensions that improve on the DMD approximation (?). More recently,
neural networks have been used to construct Koopman embeddings (?, ?, ?, ?, ?,
?, ?, ?). This is an alternative to enriching the observables of DMD (?, ?, ?,
?, ?, ?, ?). Thus, neural networks have emerged as a highly effective
mathematical tool for approximating complex data (?, ?) with a linear model.
DNNs have been used in this context to discover time-stepping algorithms for
complex systems (?, ?, ?, ?, ?). Moreover, DNNs have been used to approximate
constitutive models of BVPs (?).
DeepGreen leverages the success of DNNs for dynamical systems to discover
coordinate transformations that linearize nonlinear BVPs so that the Green’s
function solution can be recovered. This allows for the discovery of the
fundamental solutions for nonlinear BVPs, opening many opportunities for the
engineering and physical sciences. DeepGreen exploits physics-informed
learning by using autoencoders (AEs) to take data from the original high-dimensional input space to the new coordinates at the intrinsic rank of the
underlying physics (?, ?, ?). The architecture also leverages the success of
Deep Residual Networks (DRN) (?) which enables our approach to efficiently
handle near-identity coordinate transformations (?).
The Green’s function constructs the solution to a BVP for any given forcing by
superposition. Specifically, consider the classical linear BVP (?)
$\begin{array}[]{ll}L[v({\mathbf{x}})]=f({\mathbf{x}})\end{array}$ (1)
where $L$ is a linear differential operator, $f$ is a forcing,
${\mathbf{x}}\in{\Omega}$ is the spatial coordinate, and ${\Omega}$ is an open
set. The boundary conditions $Bv({\mathbf{x}})=0$ are imposed on
$\partial{\Omega}$ with a linear operator $B$. The fundamental solution is
constructed by considering the adjoint equation
$\begin{array}[]{ll}L^{\dagger}[G({\mathbf{x},\boldsymbol{\xi}})]=\delta({\mathbf{x}}-{\boldsymbol{\xi}})\end{array}$
(2)
where $L^{\dagger}$ is the adjoint operator (along with its associated boundary conditions) and $\delta({\mathbf{x}}-\boldsymbol{\xi})$ is the Dirac delta
function. Taking the inner product of (1) with respect to the Green’s function
gives the fundamental solution
$v(\mathbf{x})=(f(\boldsymbol{\xi}),G(\boldsymbol{\xi},\mathbf{x}))=\int\limits_{\Omega}G(\boldsymbol{\xi},\mathbf{x})f(\boldsymbol{\xi})\,d\boldsymbol{\xi},$ (3)
which is valid for any forcing $f({\mathbf{x}})$. Thus once the Green’s
function is computed, the solution for arbitrary forcing functions can be
easily extracted from integration. This integration represents a superposition
of a continuum of delta function forcings that are used to represent
$f({\mathbf{x}})$.
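In discretized form, this construction amounts to inverting the operator matrix: each column of the inverse is the response to a delta forcing, and superposition solves any forcing with one multiplication. A minimal numpy illustration (the specific operator, $L=d^{2}/dx^{2}$ on $(0,1)$ with homogeneous Dirichlet conditions, is an assumed example, not one from this paper):

```python
import numpy as np

# Assumed example operator: L = d^2/dx^2 on (0, 1), Dirichlet BCs.
n = 100
x = np.linspace(0.0, 1.0, n + 2)[1:-1]      # interior grid points
h = x[1] - x[0]
L = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2  # second-difference matrix

G = np.linalg.inv(L)   # column j of G: response to a delta forcing at x_j

# Superposition: the solution for an arbitrary forcing is the weighted sum
# of delta responses, i.e., the discrete analog of the integral in (3).
f = np.sin(np.pi * x)
v = G @ f

exact = -np.sin(np.pi * x) / np.pi**2       # since (exact)'' = sin(pi x)
assert np.max(np.abs(v - exact)) < 1e-3
```

Once `G` is formed, every new forcing costs only a matrix-vector product, which is the property DeepGreen aims to recover for nonlinear problems.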
In many modern applications, nonlinearity plays a fundamental role so that the
BVP is of the form
$\begin{array}[]{ll}N[u({\mathbf{x}})]=F({\mathbf{x}})\end{array}$ (4)
where $N[\cdot]$ is a nonlinear differential operator. For this case, the
principle of linear superposition no longer holds and the notion of a
fundamental solution is lost. However, modern deep learning algorithms allow
us the flexibility of learning a coordinate transformation (and their
inverses) of the form
$\displaystyle v$ $\displaystyle=\boldsymbol{\psi}(u),$ (5a) $\displaystyle f$
$\displaystyle=\boldsymbol{\phi}(F),$ (5b)
such that $v$ and $f$ satisfy the linear BVP (1) for which we generated the
fundamental solution (3). This gives a nonlinear fundamental solution through
use of this deep learning transformation.
DeepGreen is a supervised learning algorithm that ultimately solves a high-dimensional interpolation problem (?) to learn the coordinate transformations $\boldsymbol{\psi}(u)$ and $\boldsymbol{\phi}(F)$. DeepGreen
is enabled by a physics-informed deep autoencoder coordinate transformation
which establishes superposition for nonlinear BVPs, thus enabling a Koopman
BVP framework. The learned Green’s function enables accurate construction of
solutions with new forcing functions in the same way as a linear BVP. We
demonstrate the DeepGreen method on a variety of nonlinear boundary value
problems, including a nonlinear 2D Poisson problem, showing that such an
architecture can be used in many modern and diverse applications in aerospace,
electromagnetics, elasticity, materials, and chemical reactors.
Figure 1: DeepGreen solves nonlinear BVPs by identifying the Green’s function of the nonlinear problem using a deep learning approach with a dual-autoencoder architecture. A nonhomogeneous linear BVP can be solved using the
Green’s function approach, but a nonlinear BVP cannot. DeepGreen transforms a
nonlinear BVP to a linear BVP, solves the linearized BVP, and then inverse
transforms the linear solution to solve the nonlinear BVP.
## 2 Deep Autoencoders for Linearizing BVPs
Deep AEs have been used to linearize dynamical systems, which are initial
value problems. We extend this idea to BVPs. To be precise, we consider BVPs
of the form
$\displaystyle N[u(\mathbf{x})]$ $\displaystyle=F(\mathbf{x}),$
$\displaystyle\mathbf{x}$ $\displaystyle\in\Omega,$ (6a) $\displaystyle
B[u(\mathbf{x})]$ $\displaystyle=0,$ $\displaystyle\mathbf{x}$
$\displaystyle\in\partial\Omega,$ (6b)
where $\Omega$ is a simply connected open set in $\mathbb{R}^{n}$ with
boundary $\partial\Omega$, $N$ is a nonlinear differential operator,
$F(\mathbf{x})$ is the nonhomogeneous forcing function, $B$ is a boundary
condition, and $u(\mathbf{x})$ is the solution to the BVP. We wish to find a
pair of coordinate transformations of the form (5) such that $v$ and $f$
satisfy a linear BVP
$\displaystyle L[v(\widehat{\mathbf{x}})]$
$\displaystyle=f(\widehat{\mathbf{x}}),$ $\displaystyle\widehat{\mathbf{x}}$
$\displaystyle\in\widehat{\Omega},$ (7a)
$\displaystyle\widehat{B}[v(\widehat{\mathbf{x}})]$ $\displaystyle=0,$
$\displaystyle\widehat{\mathbf{x}}$
$\displaystyle\in\partial\widehat{\Omega},$ (7b)
where $L$ is a linear differential operator, $\widehat{\mathbf{x}}$ is the
spatial coordinate in the transformed domain $\widehat{\Omega}$ with boundary
$\partial\widehat{\Omega}$. Because $L$ is linear, there is a Green’s function
$G(\widehat{\mathbf{x}},\boldsymbol{\xi})$ such that the solution $v$ to the
BVP (7) can be obtained through convolution of the Green’s function and
transformed forcing function
$v(\widehat{\mathbf{x}})=\int\limits_{\widehat{\Omega}}G(\boldsymbol{\xi},\widehat{\mathbf{x}})f(\boldsymbol{\xi})\,d\boldsymbol{\xi}.$
(8)
The coordinate transformation along with the Green’s function of the
linearized BVP provide the analog of a Green’s function for the nonlinear BVP
(6). In particular, for a forcing function $F(\mathbf{x})$, the transformed
forcing function is $f=\boldsymbol{\phi}(F)$. The solution to the linearized
BVP can be obtained using the Green’s function $v=\int
G(\boldsymbol{\xi},\widehat{\mathbf{x}})f(\boldsymbol{\xi})d\boldsymbol{\xi}$.
Then the solution to the nonlinear BVP (6) is obtained by inverting the
coordinate transformation $u=\boldsymbol{\psi}^{-1}(v)$ to obtain the solution
to the nonlinear BVP, $u(\mathbf{x})$.
The question that remains is how to discover the appropriate coordinate
transformations $\boldsymbol{\psi}$ and $\boldsymbol{\phi}$. We leverage the
universal approximation properties of neural networks in order to learn these
transformations. In order to use neural networks, we first need to discretize
the BVP. Let $\mathbf{u}$ be a spatial discretization of $u(\mathbf{x})$ and
$\mathbf{F}$ be a discretization of $F(\mathbf{x})$. Then the discretized
version of the BVP (6) is
$\displaystyle\mathbf{N}[\mathbf{u}]$ $\displaystyle=\mathbf{F},$ (9a)
$\displaystyle\mathbf{B}[\mathbf{u}]$ $\displaystyle=\mathbf{0}.$ (9b)
Neural networks $\boldsymbol{\psi}_{u}$ and $\boldsymbol{\phi}_{F}$ are used
to transform $\mathbf{u}$ and $\mathbf{F}$ to the latent space vectors
$\mathbf{v}$ and $\mathbf{f}$
$\displaystyle\mathbf{v}$ $\displaystyle=\boldsymbol{\psi}_{u}(\mathbf{u}),$
(10a) $\displaystyle\mathbf{f}$
$\displaystyle=\boldsymbol{\phi}_{F}(\mathbf{F}),$ (10b)
where $\mathbf{v}$ and $\mathbf{f}$ satisfy the linear equation
$\mathbf{L}\mathbf{v}=\mathbf{f},$ (11)
for some matrix $\mathbf{L}$, which is also learned. In order to learn
invertible transforms $\boldsymbol{\psi}_{u}$ and $\boldsymbol{\phi}_{F}$, we
construct the problem as a pair of autoencoder networks.
In this construction, the transforms $\boldsymbol{\psi}_{u}$ and
$\boldsymbol{\phi}_{F}$ are the encoders and the transform inverses are the
decoders. The network architecture and loss functions are shown in Figure 2.
Figure 2: DeepGreen architecture. Two autoencoders learn invertible coordinate
transformations that linearize a nonlinear boundary value problem. The latent
space is constrained to exhibit properties of a linear system, including
linear superposition, which enables discovery of a Green’s function for
nonlinear boundary value problems.
The neural network is trained using numerous and diverse solutions to the
nonlinear BVP (9), which can be obtained with many different forcings
$\mathbf{F}_{k}$. Consider a dataset comprised of pairs of discretized
solutions and forcing functions
$\\{\mathbf{u}_{k},\mathbf{F}_{k}\\}_{k=1}^{N}$. The loss function for
training the network is the sum of six losses, each of which enforces a
desired condition. The loss functions can be split into three categories:
1. 1.
Autoencoder losses: We wish to learn invertible coordinate transformations
given by equations (10a) and (10b). In order to do so, we use two
autoencoders. The autoencoder for $\mathbf{u}$ consists of an encoder
$\boldsymbol{\psi}_{u}$ which performs the transformation (10a) and a decoder
$\boldsymbol{\psi}_{u}^{-1}$ which inverts the transformation. In order to
enforce that the encoder and decoder are inverses, we use the autoencoder loss
$\mathcal{L}_{1}=\frac{1}{N}\sum_{k=1}^{N}\frac{\left\lVert\mathbf{u}_{k}-\boldsymbol{\psi}_{u}^{-1}\circ\boldsymbol{\psi}_{u}(\mathbf{u}_{k})\right\rVert_{2}^{2}}{\left\lVert\mathbf{u}_{k}\right\rVert_{2}^{2}}.$
(12)
Similarly, there is an autoencoder for $\mathbf{F}$ where the encoder
$\boldsymbol{\phi}_{F}$ performs the transformation (10b). This transformation
also has an inverse enforced by the associated autoencoder loss function
$\mathcal{L}_{2}=\frac{1}{N}\sum_{k=1}^{N}\frac{\left\lVert\mathbf{F}_{k}-\boldsymbol{\phi}_{F}^{-1}\circ\boldsymbol{\phi}_{F}(\mathbf{F}_{k})\right\rVert_{2}^{2}}{\left\lVert\mathbf{F}_{k}\right\rVert_{2}^{2}}.$
(13)
2. 2.
Linearity losses: In the transformed coordinate system, we wish for the BVP to
be linear so that the operator can be represented by a matrix $\mathbf{L}$.
The matrix $\mathbf{L}$ and the encoded vectors $\mathbf{v}$ and $\mathbf{f}$
should satisfy equation (11). This is enforced with the linear operator loss
$\mathcal{L}_{3}=\frac{1}{N}\sum_{k=1}^{N}\frac{\left\lVert\mathbf{f}_{k}-\mathbf{L}\mathbf{v}_{k}\right\rVert_{2}^{2}}{\left\lVert\mathbf{f}_{k}\right\rVert_{2}^{2}}.$
(14)
The major advantage of working with a linear operator is that linear
superposition holds. We use a linear superposition loss in order to further
enforce the linearity of the operator in the latent space
$\mathcal{L}_{4}=\frac{1}{N^{2}}\sum_{j=1}^{N}\sum_{i=1}^{N}\frac{\left\lVert(\mathbf{f}_{i}+\mathbf{f}_{j})-\mathbf{L}(\mathbf{v}_{i}+\mathbf{v}_{j})\right\rVert_{2}^{2}}{\left\lVert\mathbf{f}_{i}+\mathbf{f}_{j}\right\rVert_{2}^{2}}.$
(15)
3. 3.
Cross-mapping losses: The losses described above are theoretically sufficient
to find coordinate transformations for $\mathbf{u}$ and $\mathbf{F}$ as well
as a linear operator $\mathbf{L}$. However, in practice the two autoencoders
were not capable of generating the Green’s function solution. To rectify this,
we add two “cross-mapping” loss functions that incorporate parts of both
autoencoders. The first cross-mapping loss enforces the following mapping from
$\mathbf{u}$ to $\mathbf{F}$. First, one of the solutions from the dataset
$\mathbf{u}_{k}$ is encoded with $\boldsymbol{\psi}_{u}$. This is an
approximation for $\mathbf{v}_{k}$. This is then multiplied by the matrix
$\mathbf{L}$, giving an approximation of $\mathbf{f}_{k}$. Then the result is
decoded with $\boldsymbol{\phi}_{F}^{-1}$. This gives an approximation of
$\mathbf{F}_{k}$. The $\mathbf{u}$ to $\mathbf{F}$ cross-mapping loss is given
by the formula
$\mathcal{L}_{5}=\frac{1}{N}\sum_{k=1}^{N}\frac{\left\lVert\mathbf{F}_{k}-\boldsymbol{\phi}_{F}^{-1}\circ\mathbf{L}\circ\boldsymbol{\psi}_{u}(\mathbf{u}_{k})\right\rVert_{2}^{2}}{\left\lVert\mathbf{F}_{k}\right\rVert_{2}^{2}}.$
(16)
We can similarly define a cross-mapping from $\mathbf{F}$ to $\mathbf{u}$. For
a forcing function $\mathbf{F}_{k}$ from the dataset, it is encoded with
$\boldsymbol{\phi}_{F}$, multiplied by the Green’s function
($\mathbf{G}=\mathbf{L}^{-1}$), and then decoded with
$\boldsymbol{\psi}_{u}^{-1}$ to give an approximation of $\mathbf{u}_{k}$. The
$\mathbf{F}$ to $\mathbf{u}$ cross-mapping loss is
$\mathcal{L}_{6}=\frac{1}{N}\sum_{k=1}^{N}\frac{\left\lVert\mathbf{u}_{k}-\boldsymbol{\psi}_{u}^{-1}\circ\mathbf{L}^{-1}\circ\boldsymbol{\phi}_{F}(\mathbf{F}_{k})\right\rVert_{2}^{2}}{\left\lVert\mathbf{u}_{k}\right\rVert_{2}^{2}}.$
(17)
Note that this final loss function gives the best indication of the
performance of the network to solve the nonlinear BVP (9) using the Green’s
function. The strategy for solving (9) for a given discrete forcing function
$\mathbf{F}$ is to encode the forcing function to obtain
$\mathbf{f}=\boldsymbol{\phi}_{F}(\mathbf{F})$, apply the Green’s function as
in equation (8) to obtain $\mathbf{v}$, and then decode this function to get
the solution $\mathbf{u}=\boldsymbol{\psi}_{u}^{-1}(\mathbf{v})$. The discrete
version of the convolution with the Green’s function given in equation (8) is
multiplication by the matrix $\mathbf{L}^{-1}$.
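The six losses above can be sketched compactly in numpy. Here linear toy encoders stand in for the paper's ResNet autoencoders (a hypothetical simplification, chosen so the decoders are exact inverses and the autoencoder losses can be checked directly):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 16
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # toy encoder psi_u
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # toy encoder phi_F
Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)     # exact decoders
Lmat = np.eye(n) + 0.05 * rng.standard_normal((n, n))
Lmat = 0.5 * (Lmat + Lmat.T)                        # symmetric, as in the paper

U = rng.standard_normal((N, n))                     # solutions u_k (rows)
F = rng.standard_normal((N, n))                     # forcings F_k (rows)

def rel(err, ref):  # mean normalized squared error over the batch
    return np.mean(np.sum(err**2, axis=-1) / np.sum(ref**2, axis=-1))

V, f = U @ A.T, F @ B.T                             # latent v_k, f_k
L1 = rel(U - V @ Ainv.T, U)                         # autoencoder loss, u
L2 = rel(F - f @ Binv.T, F)                         # autoencoder loss, F
L3 = rel(f - V @ Lmat.T, f)                         # linear operator loss
pair_f = f[:, None, :] + f[None, :, :]              # all sums f_i + f_j
pair_v = V[:, None, :] + V[None, :, :]
L4 = rel(pair_f - pair_v @ Lmat.T, pair_f)          # superposition loss
L5 = rel(F - (V @ Lmat.T) @ Binv.T, F)              # u -> F cross-mapping
L6 = rel(U - (f @ np.linalg.inv(Lmat).T) @ Ainv.T, U)  # F -> u cross-mapping
total = L1 + L2 + L3 + L4 + L5 + L6

# Exact linear inverses drive the autoencoder losses to numerical zero.
assert L1 < 1e-12 and L2 < 1e-12
```

In training, the same six terms would be computed on network outputs and minimized jointly; the random data here merely exercises the bookkeeping.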
For the encoders $\boldsymbol{\phi}$ and $\boldsymbol{\psi}$ and decoders
$\boldsymbol{\phi}^{-1}$ and $\boldsymbol{\psi}^{-1}$, we use a residual
neural network (ResNet) architecture (?). The ResNet architecture has been
successful in learning coordinate transformations for physical systems (?) and
is motivated by near-identity transformations in physics. The linear operator
$\mathbf{L}$ is constrained to be a real symmetric matrix and therefore is
self-adjoint. Additionally, $\mathbf{L}$ is initialized as the identity
matrix. Therefore, $\mathbf{L}$ is strictly diagonally dominant for at least
the early parts of training which guarantees $\mathbf{L}$ is invertible and
well-conditioned. For more information on the network architecture and
training procedure, see Appendix B.
## 3 Results
The DeepGreen architecture, which is highlighted in Fig. 2 and whose detailed
loss functions are discussed in the last section, is demonstrated on a number
of canonical nonlinear BVPs. The first three BVPs are one-dimensional systems
and the final one is a two-dimensional system. The nonlinearities in these
problems do not allow for a fundamental solution, thus recourse is typically
made to numerical computations to achieve a solution. DeepGreen, however, can
produce a fundamental solution which can then be used for any new forcing of
the BVP.
### 3.1 Cubic Helmholtz
The architecture and methodology are best illustrated using a basic example
problem. The example problem uses a nonhomogeneous second-order nonlinear
Sturm–Liouville model with constant coefficients and a cubic nonlinearity,
thus making it a cubic Helmholtz equation. The differential equation is given
by
$\displaystyle u^{\prime\prime}+\alpha u+\epsilon u^{3}=F(x),$ (18a)
$\displaystyle u(0)=u(2\pi)=0,$ (18b)
where $u=u(x)$ is the solution when the system is forced with $F(x)$ with
$x\in(0,2\pi)$, $\alpha=-1$ and $\epsilon=-0.3$. The notation
$u^{\prime\prime}$ denotes $\frac{d^{2}}{dx^{2}}u(x)$. The dataset contains
discretized solutions and forcings,
$\\{\mathbf{u}_{k},\mathbf{F}_{k}\\}_{k=1}^{N}$. The forcing functions used
for training are cosine and Gaussian functions; details of data generation and
the forcing functions are provided in Appendix A.
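One way to produce such solution/forcing pairs is Newton iteration on a finite-difference discretization of (18). This is a sketch under that assumption; the paper's actual data-generation procedure lives in its Appendix A and may differ:

```python
import numpy as np

# Newton iteration for u'' + alpha*u + eps*u^3 = F on (0, 2*pi),
# with u(0) = u(2*pi) = 0, discretized on interior grid points.
alpha, eps = -1.0, -0.3
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n + 2)[1:-1]       # interior points
h = x[1] - x[0]
D2 = (np.diag(-2.0 * np.ones(n))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2          # second-difference matrix

F = np.cos(x)                                        # one cosine forcing
u = np.zeros(n)                                      # initial guess
for _ in range(50):
    r = D2 @ u + alpha * u + eps * u**3 - F          # residual
    if np.max(np.abs(r)) < 1e-10:
        break
    J = D2 + np.diag(alpha + 3.0 * eps * u**2)       # residual Jacobian
    u -= np.linalg.solve(J, r)

assert np.max(np.abs(D2 @ u + alpha * u + eps * u**3 - F)) < 1e-9
```

Sweeping the amplitude, frequency, or center of the forcing function generates the diverse $\\{\mathbf{u}_{k},\mathbf{F}_{k}\\}$ pairs the training procedure requires.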
Figure 3: Learning curve. This is a typical learning curve for the DeepGreen
architecture. The vertical dashed line indicates where the training procedure
transitions from autoencoders-only (only $\mathcal{L}_{1}$ and
$\mathcal{L}_{2}$) to a full-network training procedure (all losses). Figure
4: Latent space representations $\mathbf{v}_{k}$ and $\mathbf{f}_{k}$. The
autoencoder transformation $\boldsymbol{\psi}_{u}$ encodes $\mathbf{u}_{k}$ to
the latent space, producing the vector $\mathbf{v}_{k}$ (orange). The forcing
vector $\mathbf{F}_{k}$ is transformed by $\boldsymbol{\phi}_{F}$ to the encoded vector $\mathbf{f}_{k}$ (blue). Figure 5: Visualized operator and
Green’s function. Discovered Green’s function $\mathbf{G}=\mathbf{L}^{-1}$ and
corresponding linear operator $\mathbf{L}$.
The data is divided into three groups: training, validation, and test. The
training and validation sets are used for training the model. The test set is
used to evaluate the results. The training set contains $N_{train}=8906$
vector pairs $\mathbf{u}_{k}$ and $\mathbf{F}_{k}$. The validation set contains $N_{validation}=2227$ pairs and the test set contains $N_{test}=1238$.
#### 3.1.1 Training the Model
The autoencoders used in this example are constructed with fully connected
layers. In both autoencoders, a ResNet-like identity skip connection connects
the input layer to the layer before dimension reduction in the encoder, and
the first full-dimension layer in the decoder with the final output layer (see
Figure 14).
The model is trained in a two-step procedure. First, the autoencoders are
trained, without connection in the latent space, to condition the networks as
autoencoders. In this first phase, only the autoencoder loss functions listed
in Figure 2 are active ($\mathcal{L}_{1}$ and $\mathcal{L}_{2}$). After a set
number of epochs, the latent spaces are connected by an invertible matrix
operator, $\mathbf{L}$, and the remaining four loss functions in Figure 2 become active ($\mathcal{L}_{3}$–$\mathcal{L}_{6}$). In the final phase of training,
the autoencoder learns to encode a latent space representation of the system
where properties associated with linear systems hold true, such as linear
superposition.
Figure 3 shows a typical training loss curve. The vertical dashed line
indicates the transition between the two training phases. The models in this
work are trained for 75 epochs in the first autoencoder-only phase and 2750
epochs in the final phase. The first-phase epoch count was tuned empirically
based on final model performance. The final phase epoch count was selected for
practical reasons; the training curve tended to flatten around 2750 epochs in
all of our tested systems. The autoencoder latent spaces are critically important. The latent space is the transformed vector space where linear properties (e.g., superposition) are enforced, which enables the solution of nonlinear problems. In the one-dimensional problems, the latent space vectors $\mathbf{v}$ and $\mathbf{f}$ are in $\mathbb{R}^{20}$.
The latent spaces did not have any obvious physical interpretation, and
qualitatively appeared similar to the representations shown in Figure 4. We
trained 100 models to check the consistency in the learned model and latent
space representations, but discovered the latent spaces varied considerably
(see Appendix C). This implies the existence of an infinite family of solutions to the coordinate-transform problem, which indicates that further constraints could be placed on the model.
Figure 6: Model predictions on test data. The top row shows the true solution
$\mathbf{u}_{k}(x)$ and the solution predicted by the network given the
forcing $\mathbf{F}_{k}(x)$ using the Green’s function $\mathbf{G}$. The
bottom row shows the true forcing function $\mathbf{F}_{k}(x)$ compared to the
forcing computed by applying the operator $\mathbf{L}$ to the solution
$\mathbf{u}_{k}$. Three columns show the best, mean, and worst case samples as
evaluated by the sum of normalized $\ell_{2}$ reconstruction errors.
Despite lacking obvious physical interpretations, the latent space enables
discovery of an invertible operator $\mathbf{L}$ that describes the linear
system $\mathbf{L}[\mathbf{v}_{k}]=\mathbf{f}_{k}$. The operator matrix
$\mathbf{L}$ can be inverted to yield the Green’s function matrix
$\mathbf{G}$, which allows computation of solutions to the linearized system
$\mathbf{v}_{k}=\mathbf{G}[\mathbf{f}_{k}]$. An example of the operator
$\mathbf{L}$ and its inverse $\mathbf{G}$ is shown in Figure 5. The operator
and Green’s function in Figure 5 display a prominent feature seen in all of
the results: a diagonally-dominant structure. We initialize the
operator as an identity matrix, but the initialization had little impact on
the diagonally-dominant form of the learned operator and Green’s function
matrices (see Appendix C). The diagonally-dominant operators indicate that the
deep learning network tends to discover a coordinate transform yielding a
nearly-orthonormal basis, which mirrors the common approach of diagonalization
in spectral theory for Hermitian operators. Furthermore, diagonally-dominant
matrices guarantee favorable properties for this application such as being
well-conditioned and non-singular.
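As a small numerical sketch (dimensions illustrative; the 1D latent spaces here are $\mathbb{R}^{20}$), strict diagonal dominance can be checked directly, and inverting such an $\mathbf{L}$ yields a $\mathbf{G}$ consistent with $\mathbf{L}[\mathbf{v}_{k}]=\mathbf{f}_{k}$:

```python
import numpy as np

def strictly_diagonally_dominant(A):
    """Strict row diagonal dominance; implies nonsingularity (Levy-Desplanques)."""
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d
    return bool(np.all(d > off))

rng = np.random.default_rng(0)
n = 20
# Near-identity matrix standing in for a learned diagonally-dominant operator.
L = np.eye(n) + 0.01 * rng.standard_normal((n, n))
assert strictly_diagonally_dominant(L)

G = np.linalg.inv(L)            # Green's function matrix G = L^{-1}
f = rng.standard_normal(n)      # latent forcing vector f_k
v = G @ f                       # latent solution v_k = G[f_k]
assert np.allclose(L @ v, f)    # consistency with L[v_k] = f_k
```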
We emphasize that training parameters and model construction choices used in
this work were not extensively optimized. We expect the model performance can
be improved in a myriad of ways including extending training times, optimizing
model architecture, modifying the size of the latent spaces, restricting the
form of the operator, and applying additional constraints to the model.
However, these topics are beyond the main scope of the present work; our
focus is to illustrate the use of autoencoders as a coordinate transform for
finding solutions to nonlinear BVPs.
#### 3.1.2 Evaluating the Model
The goal for this model is to find a Green’s function $\mathbf{G}$ for
computing solutions $\mathbf{u}_{k}$ to a nonlinear BVP governed by (6) for a
given forcing function $\mathbf{F}_{k}$. Similarly, we can estimate the
forcing term, $\mathbf{F}_{k}$, given the solution $\mathbf{u}_{k}$. The model
is consequently evaluated by its ability to use the learned Green’s function
and operator for predicting solutions and forcings, respectively, for new
problems from a withheld test data set.
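The two prediction directions can be sketched as a short pipeline. The `encode_*`/`decode_*` callables stand in for the trained networks and are identity maps here purely for illustration; they are assumptions, not the authors' code.

```python
import numpy as np

def predict_solution(F, G, encode_F=lambda z: z, decode_u=lambda z: z):
    """Given a forcing F, predict the solution via the Green's function G."""
    return decode_u(G @ encode_F(F))

def predict_forcing(u, L, encode_u=lambda z: z, decode_F=lambda z: z):
    """Given a solution u, predict the forcing via the operator L."""
    return decode_F(L @ encode_u(u))

# With identity encoders and G = L^{-1}, the two maps invert each other.
rng = np.random.default_rng(1)
L = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
G = np.linalg.inv(L)
F = rng.standard_normal(4)
assert np.allclose(predict_forcing(predict_solution(F, G), L), F)
```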
Recall the original model is trained on data where the forcing function is a
cosine or Gaussian function. As shown in Figure 6, the model performs well on
withheld test data where the forcing functions are cosine or Gaussian
functions, producing a cumulative loss around $10^{-4}$. The solutions
$\mathbf{u}_{k}$ and forcing $\mathbf{F}_{k}$ are depicted for the best, mean,
and worst samples scored by cumulative loss.
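The ranking metric, as we understand it, is a sum of normalized $\ell_{2}$ reconstruction errors over the solution and forcing; a minimal sketch:

```python
import numpy as np

def normalized_l2(true, pred):
    """Relative l2 error, normalized by the magnitude of the true vector."""
    return np.linalg.norm(true - pred) / np.linalg.norm(true)

def sample_score(u, u_pred, F, F_pred):
    """Sum of normalized l2 errors used to grade best/mean/worst samples."""
    return normalized_l2(u, u_pred) + normalized_l2(F, F_pred)
```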
It is important to note that the test data used in Figure 6 are similar to
the training and validation data. Because ML models typically perform
extremely well on interpolation problems, it is reasonable to expect the
model to perform well on this test data set.
As an interesting test to demonstrate the ability of the model to extrapolate,
we prepared a separate set of test data
$\\{\mathbf{u}_{k},\mathbf{F}_{k}\\}_{k=1}^{N}$ containing solutions where
$\mathbf{F}_{k}$ are cubic polynomial forcing functions. This type of data was
not present in training, and provides some insight into the generality of the
learned linear operator and Green’s function matrices. Figure 7 shows examples
of how the model performs on these cubic polynomial-type forcing functions.
Similar to Figure 6, the best, mean, and worst samples are shown as graded by
overall loss.
Figure 7: Model predictions on cubic Helmholtz forced system. The top row
shows the true solution $\mathbf{u}_{k}(x)$ and the solution predicted by the
network given the forcing $\mathbf{F}_{k}(x)$ using the Green’s function
$\mathbf{G}$. The bottom row shows the true forcing function
$\mathbf{F}_{k}(x)$ compared to the forcing computed by applying the operator
$\mathbf{L}$ to the solution $\mathbf{u}_{k}$. Three columns show the best,
mean, and worst case samples as evaluated by the sum of normalized
$\ell_{2}$ reconstruction errors.
Figure 8: Model performance summary. Distributions of loss values are shown
for every sample in the test data set. Model loss functions are minimized
during training, making them a natural metric for summarizing performance.
Figures 6 and 7 provide some qualitative insight into the model’s performance
on specific instances selected from the pool of evaluated data. A quantitative
perspective of the model’s performance is presented in Figure 8. This box plot
shows statistics (median value, $Q_{1}$, $Q_{3}$, and range) for four of the
loss functions evaluated on the similar (cosine and Gaussian) test data. Note
the superposition loss function is _not_ scored in this plot because the
superposition loss function can only be evaluated within a single batch, and
the loss depends on batch size and composition.
In conclusion, the DeepGreen architecture enables discovery of invertible,
linearizing transformations that facilitate identification of a linear
operator and Green’s function to solve nonlinear BVPs. It is tested on data
similar and dissimilar to the training data, and evaluated on the loss
functions that guide the training procedure. The discovered operator and
Green’s function take on a surprisingly diagonally-dominant structure, which
hints at the model’s preference to learn an optimal basis. The model appears
to extrapolate beyond the training data, suggesting that the learned operator
captures something general about the system.
### 3.2 Nonlinear Sturm–Liouville and Biharmonic Operators
In addition to the example system described above, the approach was applied to
two other one-dimensional systems. We used the same training procedure and
forcing functions that were described in Section 3.1. The first is a system
governed by the nonlinear Sturm–Liouville equation
$\displaystyle[-p(x)u^{\prime}]^{\prime}+q(x)(u+\epsilon u^{3})=F(x),$
$\displaystyle u(0)=u(2\pi)=0,$
where $\epsilon=0.4$ controls the extent of nonlinearity, and $p(x)$ and
$q(x)$ are spatially-varying coefficients
$\displaystyle p(x)=0.5\sin(x)-3,$ $\displaystyle q(x)=0.6\sin(x)-2,$
with $x\in[0,2\pi]$. The final one-dimensional system is a biharmonic operator
with an added cubic nonlinearity
$\displaystyle[-pu^{\prime\prime}]^{\prime\prime}+q(u+\epsilon u^{3})=F(x),$
$\displaystyle u(0)=u(2\pi)=u^{\prime}(0)=u^{\prime}(2\pi)=0,$
where $p=-4$ and $q=2$ are the coefficients and $\epsilon=0.4$ controls the
nonlinearity. As in the prior example, the forcing functions in the training
data are cosine and Gaussian functions, which are described further in
Appendix A.
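For concreteness, the spatially-varying coefficients can be tabulated on the same 128-point grid used for the data (Appendix A); a quick check confirms both stay strictly negative, so the sign of the leading term is fixed across the domain.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 128)
p = 0.5 * np.sin(x) - 3          # Sturm-Liouville coefficient p(x)
q = 0.6 * np.sin(x) - 2          # Sturm-Liouville coefficient q(x)
eps = 0.4                        # nonlinearity strength

# p(x) in [-3.5, -2.5] and q(x) in [-2.6, -1.4]: both are bounded away
# from zero on the whole interval [0, 2*pi].
assert p.min() >= -3.5 and p.max() <= -2.5
assert q.min() >= -2.6 and q.max() <= -1.4
```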
Results for all the one-dimensional models, including the cubic Helmholtz
example from Section 3.1, are presented in Table 1. Model performance is
quantitatively summarized by box plots and the Green’s function matrix is
shown for each model.
Nonlinear cubic Helmholtz (constant coefficients): $u^{\prime\prime}+\alpha u+\epsilon u^{3}=F$, $u(0)=u(L)=0$ | Nonlinear Sturm–Liouville (varying $p(x)$, $q(x)$): $[-pu^{\prime}]^{\prime}+qu+\epsilon qu^{3}=F$, $u(0)=u(L)=0$ | Nonlinear biharmonic operator (constant coefficients): $[-pu^{\prime\prime}]^{\prime\prime}+qu+\epsilon qu^{3}=F$, $u(0)=u(L)=u^{\prime}(0)=u^{\prime}(L)=0$
---|---|---
Table 1: Summary of results for three one-dimensional models. The models are
provided with the Green’s function learned by DeepGreen. A summary box plot
shows the relative losses $\mathcal{L}_{1}$, $\mathcal{L}_{2}$,
$\mathcal{L}_{3}$, $\mathcal{L}_{5}$, and $\mathcal{L}_{6}$ for all three
model systems.
Importantly, the learned Green’s function matrices consistently exhibit
diagonally-dominant structure. The losses for the nonlinear cubic Helmholtz
equation and the nonlinear Sturm–Liouville equation are similar, which
indicates that spatially-varying coefficients do not make the problem
significantly more difficult for the DeepGreen architecture. In contrast, the
losses for the nonlinear biharmonic equation are about an order of magnitude
higher than for the other two systems. This result implies the fourth-order
problem is more difficult than the second-order problems. Also of note is that
the linear operator loss $\mathcal{L}_{3}$ is consistently the highest loss
across all models. Therefore, it is easier for DeepGreen to find invertible
transformations for the solutions and forcing functions than it is to find a
linear operator that connects the two latent spaces.
### 3.3 Nonlinear Poisson Equation
Figure 9: Model predictions for the (a) best and (b) worst examples from test
data with Gaussian and cosine forcings. In both (a) and (b), the top row shows
the true solution $\mathbf{u}(\mathbf{x})$, the predicted solution using the
Green’s function, and the difference between the true and predicted solution.
The bottom row shows the true forcing function $\mathbf{F}(\mathbf{x})$, the
predicted forcing function, and the difference between the true and predicted
forces. In order to account for the difference in scale between
$\mathbf{u}(\mathbf{x})$ and $\mathbf{F}(\mathbf{x})$, the differences are
scaled by the infinity norm of the true solution or forcing function
($\text{Difference}=(\text{True}-\text{Predicted})/||\text{True}||_{\infty}$).
We also tested our method on a two-dimensional system. The two-dimensional
model is a nonlinear version of the Poisson equation with Dirichlet boundary
conditions
$\displaystyle-\nabla\cdot\left[(1+u^{2})\nabla u\right]=F(\mathbf{x}),\qquad\mathbf{x}\in\Omega,$ (19a)
$\displaystyle u=0,\qquad\mathbf{x}\in\partial\Omega,$ (19b)
where $\Omega:=(0,2\pi)\times(0,2\pi)$. Similar to the one-dimensional models,
the forcing functions used to train the model are cosine and Gaussian
functions, the details of which are provided in Appendix A. The sizes of the
data sets are also similar to the one-dimensional data sets. The training data
contains $N_{train}=9806$ vector pairs $\mathbf{u}_{k}$ and $\mathbf{F}_{k}$,
the validation data contains $N_{validation}=2452$, and the test data contains
$N_{test}=1363$.
The network architecture of the encoders and decoders for the two-dimensional
example differs from the one-dimensional examples. Instead of fully connected
layers, convolutional layers were used in the encoders and decoders. However,
we still use a ResNet architecture. Additionally, the latent space vectors are
in $\mathbb{R}^{200}$. Full details on the network architecture can be found
in Appendix B. Note that the method proposed for discovering Green’s functions
allows for any network architecture to be used for the encoders and decoders.
For the one-dimensional examples, similar results were obtained using fully
connected and convolutional layers. However, the convolutional architecture
performed better in the two-dimensional case and also allowed for a more
manageable number of parameters for the wider network that resulted from
discretizing the two-dimensional space.
Figure 10: Model predictions for the (a) best and (b) worst examples from test
data with cubic polynomial forcings. In both (a) and (b), the top row shows
the true solution $\mathbf{u}(\mathbf{x})$, the predicted solution using the
Green’s function, and the difference between the true and predicted solution.
The bottom row shows the true forcing function $\mathbf{F}(\mathbf{x})$, the
predicted forcing function, and the difference between the true and predicted
forces. In order to account for the difference in scale between
$\mathbf{u}(\mathbf{x})$ and $\mathbf{F}(\mathbf{x})$, the differences are
scaled by the infinity norm of the true solution or forcing function
($\text{Difference}=(\text{True}-\text{Predicted})/||\text{True}||_{\infty}$).
The operator and Green’s function for the two-dimensional model are similar
to those shown in Figure 5. The diagonal dominance is even more
prevalent in this case than the one-dimensional example. The model was
evaluated on test data containing cosine and Gaussian forcing functions.
Figure 9a shows the true solution $\mathbf{u}(\mathbf{x})$ and forcing
function $\mathbf{F}(\mathbf{x})$ as well as the network predictions for the
example from the test data for which the model performed the best (i.e., the
smallest value of the loss). The difference between the true and predicted
functions is shown in the right column of Figure 9a and is scaled by the
infinity norm of the true solution or forcing function. Figure 9b shows
similar results but for the worst example from the test data. In both cases,
the model gives a qualitatively correct solution for both
$\mathbf{u}(\mathbf{x})$ and $\mathbf{F}(\mathbf{x})$.
Unsurprisingly, the network struggles most on highly localized forcing
functions and has the highest error in the region where the forcing occurs.
The model was also evaluated on test data that has cubic polynomial forcing
functions, a type of forcing function not found in the training data. The best
and worst examples are shown in Figure 10. Although the model does not perform
as well for test data which is not similar to the training data, the
qualitative features of the predicted solutions are still consistent with the
true solutions. Figure 11 shows a box plot of the model’s performance on the
similar (cosine and Gaussian forcing) test data. The results are comparable
to the one-dimensional results and, in fact, better than those of the
biharmonic operator model.
Figure 11: Two-dimensional Poisson model performance summary. Distributions
of loss values are shown for every sample in the test data set. Model loss
functions are minimized during training, making them a natural metric for
summarizing performance.
## 4 Conclusion
We have leveraged the expressive capabilities of deep learning to discover
linearizing coordinates for nonlinear BVPs, thus allowing for the construction
of the fundamental solution or nonlinear Green’s function. Much like the
Koopman operator for time-dependent problems, the linearizing transformation
provides a framework whereby the fundamental solution of the linear operator
can be constructed and used for any arbitrary forcing. This provides a broadly
applicable mathematical architecture for constructing solutions for nonlinear
BVPs, which typically rely on numerical methods to achieve solutions. Our
DeepGreen architecture can achieve solutions for arbitrary forcings by simply
computing the convolution of the forcing with the Green’s function in the
linearized coordinates.
Given the critical role that BVPs play in the mathematical analysis of
constrained physical systems subjected to external forces, the DeepGreen
architecture can be broadly applied in nearly every engineering discipline
since BVPs are prevalent in diverse problem domains including fluid mechanics,
electromagnetics, quantum mechanics, and elasticity. Importantly, DeepGreen
provides a bridge between a classic, widely used solution technique and
nonlinear BVPs, which generically lack principled solution techniques aside
from brute-force computation. DeepGreen establishes this bridge by providing
a transformation under which linear superposition holds. DeepGreen is a
flexible, data-driven, deep learning approach: a dual-autoencoder
architecture discovers an invertible coordinate transform that linearizes the
nonlinear BVP and identifies both a linear operator $L$ and a Green’s
function $G$, which can be used to solve new nonlinear BVPs. We
demonstrated that the method succeeds on a variety of nonlinear systems
including nonlinear Helmholtz and Sturm–Liouville problems, nonlinear
elasticity, and a 2D nonlinear Poisson equation. The method merges the
strengths of the universal approximation capabilities of deep learning with
the physics knowledge of Green’s functions to yield a flexible tool for
identifying fundamental solutions to a variety of nonlinear systems.
## Acknowledgments
SLB is grateful for funding support from the Army Research Office (ARO
W911NF-17-1-0306). JNK acknowledges support from the Air Force Office of
Scientific Research (FA9550-19-1-0011).
## References
* 1. I. Stakgold, Boundary Value Problems of Mathematical Physics: 2-Volume Set, vol. 29 (SIAM, 2000).
* 2. J.-B. J. Fourier, Théorie analytique de la chaleur (Chez Firmin Didot, 1822).
* 3. J. D. Jackson, Classical electrodynamics (John Wiley & Sons, 2007).
* 4. A. Yariv, Quantum Electronics (John Wiley & Sons, 1989).
* 5. I. Stakgold, M. J. Holst, Green’s functions and boundary value problems, vol. 99 (John Wiley & Sons, 2011).
* 6. B. Lusch, J. N. Kutz, S. L. Brunton, Deep learning for universal linear embeddings of nonlinear dynamics. Nature communications 9, 4950 (2018).
* 7. K. Champion, B. Lusch, J. N. Kutz, S. L. Brunton, Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116, 22445–22451 (2019).
* 8. C. Wehmeyer, F. Noé, Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics. The Journal of Chemical Physics 148, 241703 (2017).
* 9. A. Mardt, L. Pasquali, H. Wu, F. Noé, VAMPnets: Deep learning of molecular kinetics. Nature Communications 9 (2018).
* 10. N. Takeishi, Y. Kawahara, T. Yairi, Advances in Neural Information Processing Systems (2017), pp. 1130–1140.
* 11. E. Yeung, S. Kundu, N. Hodas, 2019 American Control Conference (ACC) (2019), pp. 4832–4839.
* 12. S. E. Otto, C. W. Rowley, Linearly-recurrent autoencoder networks for learning dynamics. SIAM Journal on Applied Dynamical Systems 18, 558–593 (2019).
* 13. Q. Li, F. Dietrich, E. M. Bollt, I. G. Kevrekidis, Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator. Chaos: An Interdisciplinary Journal of Nonlinear Science 27, 103111 (2017).
* 14. C. J. Dsilva, R. Talmon, R. R. Coifman, I. G. Kevrekidis, Parsimonious representation of nonlinear dynamical systems through manifold learning: A chemotaxis case study. Applied and Computational Harmonic Analysis 44, 759–773 (2018).
* 15. C. Gin, B. Lusch, J. N. Kutz, S. L. Brunton, Deep learning models for global coordinate transformations that linearize PDEs. To appear in the European Journal of Applied Mathematics (2020). Preprint Available: arXiv:1911.02710.
* 16. G. Cybenko, Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems (MCSS) 2, 303–314 (1989).
* 17. K. Hornik, M. Stinchcombe, H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural networks 3, 551–560 (1990).
* 18. B. O. Koopman, Hamiltonian systems and transformation in Hilbert space. Proceedings of the National Academy of Sciences 17, 315-318 (1931).
* 19. I. Mezić, A. Banaszuk, Comparison of systems with complex behavior. Physica D: Nonlinear Phenomena 197, 101 - 133 (2004).
* 20. I. Mezić, Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics 41, 309–325 (2005).
* 21. M. Budišić, I. Mezić, Geometry of the ergodic quotient reveals coherent structures in flows. Physica D: Nonlinear Phenomena 241, 1255–1269 (2012).
* 22. I. Mezić, Analysis of fluid flows via spectral properties of the Koopman operator. Annual Review of Fluid Mechanics 45, 357–378 (2013).
* 23. S. L. Brunton, B. W. Brunton, J. L. Proctor, J. N. Kutz, Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PLOS ONE 11, 1–19 (2016).
* 24. P. J. Schmid, Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010).
* 25. C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, D. Henningson, Spectral analysis of nonlinear flows. J. Fluid Mech. 645, 115–127 (2009).
* 26. J. N. Kutz, S. L. Brunton, B. W. Brunton, J. L. Proctor, Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems (SIAM, 2016).
* 27. F. Noé, F. Nüske, A variational approach to modeling slow processes in stochastic dynamical systems. Multiscale Modeling & Simulation 11, 635–655 (2013).
* 28. F. Nüske, B. G. Keller, G. Pérez-Hernández, A. S. Mey, F. Noé, Variational approach to molecular kinetics. Journal of chemical theory and computation 10, 1739–1752 (2014).
* 29. M. O. Williams, I. G. Kevrekidis, C. W. Rowley, A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science 25, 1307–1346 (2015).
* 30. M. O. Williams, C. W. Rowley, I. G. Kevrekidis, A kernel-based method for data-driven Koopman spectral analysis. Journal of Computational Dynamics 2, 247–265 (2015).
* 31. S. Klus, F. Nüske, P. Koltai, H. Wu, I. Kevrekidis, C. Schütte, F. Noé, Data-driven model reduction and transfer operator approximation. Journal of Nonlinear Science 28, 985–1010 (2018).
* 32. J. N. Kutz, J. L. Proctor, S. L. Brunton, Applied Koopman theory for partial differential equations and data-driven modeling of spatio-temporal systems. Complexity 2018, 1–16 (2018).
* 33. J. Page, R. R. Kerswell, Koopman analysis of Burgers equation. Physical Review Fluids 3, 071901 (2018).
* 34. C. Bishop, Pattern Recognition and Machine Learning (Springer, 2006).
* 35. I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, 2016).
* 36. R. Rico-Martinez, I. Kevrekidis, K. Krischer, Nonlinear system identification using neural networks: dynamics and instabilities. Neural networks for chemical engineers pp. 409–442 (1995).
* 37. R. Gonzalez-Garcia, R. Rico-Martinez, I. Kevrekidis, Identification of distributed parameter systems: A neural net based approach. Computers & chemical engineering 22, S965–S968 (1998).
* 38. S. H. Rudy, J. N. Kutz, S. L. Brunton, Deep learning of dynamics and signal-noise decomposition with time-stepping constraints. Journal of Computational Physics 396, 483–506 (2019).
* 39. H. Lange, S. L. Brunton, N. Kutz, From Fourier to Koopman: Spectral methods for long-term time series prediction. arXiv preprint arXiv:2004.00574 (2020).
* 40. Y. Liu, J. N. Kutz, S. L. Brunton, Hierarchical deep learning of multiscale differential equation time-steppers. arXiv preprint arXiv:2008.09768 (2020).
* 41. D. Z. Huang, K. Xu, C. Farhat, E. Darve, Learning constitutive relations from indirect observations using deep neural networks. Journal of Computational Physics 416, 109491 (2020).
* 42. S. Pan, K. Duraisamy, Physics-informed probabilistic learning of linear embeddings of nonlinear dynamics with guaranteed stability. SIAM Journal on Applied Dynamical Systems 19, 480–509 (2020).
* 43. K. He, X. Zhang, S. Ren, J. Sun, Proceedings of the IEEE conference on computer vision and pattern recognition (2016), pp. 770–778.
* 44. S. Mallat, Understanding deep convolutional networks. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374, 20150203 (2016).
* 45. A. Logg, G. N. Wells, DOLFIN: Automated finite element computing. ACM Transactions on Mathematical Software 37 (2010).
* 46. M. S. Alnæs, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. E. Rognes, G. N. Wells, The FEniCS project version 1.5. Archive of Numerical Software 3 (2015).
* 47. A. Logg, K.-A. Mardal, G. N. Wells, et al., Automated Solution of Differential Equations by the Finite Element Method (Springer, 2012).
## Appendix A Data Generation
### A.1 1D Problems
The data for all of the one-dimensional systems are created using the same
method and forcing functions. Each solution is computed on an evenly-spaced
128-point grid using MATLAB’s bvp5c solver with a relative error tolerance of
$10^{-8}$ and an absolute error tolerance of $10^{-10}$. The forcing functions
$\mathbf{F}_{k}(x)$ are designed to yield a variety of solutions
$\mathbf{u}_{k}$ such that $||\mathbf{u}_{k}||_{2}\simeq 1$.
The training data consists of two types of systems: Gaussian-forced and
cosine-forced systems. The Gaussian-forced systems have forcing functions of
the form
$F_{k}(x)=a\exp\left(\frac{-(x-b)^{2}}{2c^{2}}\right),$
where $a\in\\{-25,-20,-15,-10,-5,5,10,15,20,25\\}$,
$b\in\\{0,2\pi/19,4\pi/19,\dots,2\pi\\}$, and
$c\in\\{0.1,0.3,0.5,\dots,4.9\\}$. The cosine forcing functions are of the
form
$F_{k}(x)=\alpha\cos(\beta x),$
where $\alpha\in\\{1,1.1,1.2,\dots,10\\}$ and
$\beta\in\\{1,1.05,1.10,\dots,5\\}$. This gives a total of 5000 Gaussian-
forced solutions and 7371 cosine-forced solutions. For the cubic Helmholtz
equation and the nonlinear Sturm–Liouville equation with spatially-varying
coefficients, all of the 12371 solutions were within the error tolerance.
However, there were 97 solutions of the nonlinear biharmonic equation that did
not meet the error tolerance and were therefore discarded. Of the remaining
data, 10% are randomly chosen and withheld as test data, 80% are used as
training data, and 20% are used as validation data.
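Enumerating the parameter grids above reproduces the stated dataset sizes; a sketch of the forcing-function generation (grid and families as described; the code itself is illustrative, not the authors'):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 128)          # 128-point solution grid

def gaussian_forcing(a, b, c):
    """Gaussian forcing F_k(x) = a * exp(-(x - b)^2 / (2 c^2))."""
    return a * np.exp(-(x - b) ** 2 / (2 * c ** 2))

def cosine_forcing(alpha, beta):
    """Cosine forcing F_k(x) = alpha * cos(beta * x)."""
    return alpha * np.cos(beta * x)

a_vals = [-25, -20, -15, -10, -5, 5, 10, 15, 20, 25]
b_vals = [2 * np.pi * k / 19 for k in range(20)]      # {0, 2pi/19, ..., 2pi}
c_vals = [0.1 + 0.2 * k for k in range(25)]           # {0.1, 0.3, ..., 4.9}
alpha_vals = [1 + 0.1 * k for k in range(91)]         # {1, 1.1, ..., 10}
beta_vals = [1 + 0.05 * k for k in range(81)]         # {1, 1.05, ..., 5}

# Counts match the text: 5000 Gaussian- and 7371 cosine-forced solutions.
assert len(a_vals) * len(b_vals) * len(c_vals) == 5000
assert len(alpha_vals) * len(beta_vals) == 7371
```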
In order to test the ability of the network to generalize, we also have
another test data set that consists of solutions with cubic forcing functions
of the form
$F_{i}(x)=\gamma(x-\pi)^{3},$
where $\gamma\in\\{0.01,0.03,0.05,\dots,0.29\\}$, and cubic forcing functions
of the form
$F_{i}(x)=\gamma(x-\pi)^{3}+\zeta(x-\pi)^{2}+\boldsymbol{\psi},$
where $\gamma\in\\{0.01,0.03,0.05,\dots,0.29\\}$,
$\zeta\in\\{0.01,0.03,0.05,\dots,0.49\\}$, and
$\boldsymbol{\psi}\in\\{-5,-4,-3,\dots,5\\}$. There are a total of 4140
solutions with cubic forcing functions.
### A.2 2D Problem
The two-dimensional data satisfies the nonlinear Poisson equation (19). The
solutions are computed with a finite element method using the DOLFIN library
(45) of the FEniCS Project (46, 47). The forcing functions are similar to the
one-dimensional data in that there are Gaussian and cosine forcing functions
along with a separate data set of cubic polynomial forcing functions used to
test the ability of the network to generalize. The Gaussian forcing functions
are of the form
$F_{k}(x,y)=a\exp\left(\frac{-(x-b_{x})^{2}-(y-b_{y})^{2}}{2c^{2}}\right),$
where $a\in\\{-25,-20,-15,-10,-5,5,10,15,20,25\\}$,
$b_{x},b_{y}\in\\{\pi/3,2\pi/3,\pi,4\pi/3,5\pi/3\\}$, and
$c\in\\{0.1,0.3,0.5,\dots,4.9\\}$. The cosine forcing functions are of the
form
$F_{k}(x,y)=\alpha\cos(\beta_{x}x)\cos(\beta_{y}y),$
where $\alpha\in\\{1,1.1,1.2,\dots,10\\}$ and
$\beta_{x},\beta_{y}\in\\{1,1.5,2,\dots,5\\}$. The cubic forcing functions are
of the form
$F_{i}(x,y)=\gamma_{x}(x-\pi)^{3}+\gamma_{y}(y-\pi)^{3},$
where $\gamma_{x},\gamma_{y}\in\\{0.01+0.28k/3|k=0,1,2,3\\}$, and cubic
forcing functions of the form
$\begin{split}F_{i}(x,y)=&\gamma_{x}(x-\pi)^{3}+\gamma_{y}(y-\pi)^{3}\\\
&+\zeta_{x}(x-\pi)^{2}+\zeta_{y}(y-\pi)^{2}+\boldsymbol{\psi},\end{split}$
where $\gamma_{x},\gamma_{y}\in\\{0.01+0.28k/3|k=0,1,2,3\\}$,
$\zeta_{x},\zeta_{y}\in\\{0.01,0.07,0.13,0.19,0.25\\}$, and
$\boldsymbol{\psi}\in\\{-5,-4,-3,\dots,5\\}$. There are 6250 solutions with
Gaussian forcing functions, 7371 solutions with cosine forcing functions, and
4416 solutions with cubic forcing functions.
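These parameter grids account exactly for the stated counts:

```python
# Gaussian: 10 amplitudes x (5 x 5) centers x 25 widths
n_gaussian = 10 * 5 * 5 * 25
# Cosine: 91 amplitudes x (9 x 9) frequency pairs
n_cosine = 91 * 9 * 9
# Cubic: 4 x 4 pure-cubic pairs, plus 4 x 4 x 5 x 5 x 11 with quadratic/offset
n_cubic = 4 * 4 + 4 * 4 * 5 * 5 * 11
assert (n_gaussian, n_cosine, n_cubic) == (6250, 7371, 4416)
```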
Figure 12: Encoder network architecture for the two-dimensional data. All
convolutional layers use $4\times 4$ kernels with stride size 1, zero-padding,
and ReLU activation functions. All pooling layers are average pooling layers
with pool size 2 and stride size 2. Figure 13: Decoder network architecture
for the two-dimensional data. All transposed convolutional layers use $4\times
4$ kernels with stride size 2, zero-padding, and ReLU activation functions
except for the last layer which has stride size 1. Figure 14: Layer-by-layer
autoencoder architecture for 1D problems.
## Appendix B Neural Network Implementation Details
The model training procedure is kept constant for all of the examples in this
work. The networks are optimized with an Adam optimizer ($\beta_{1}=0.9$,
$\beta_{2}=0.999$). Every numerical experiment starts by training a set of 20
models for a ‘small’ number of epochs. Each of the 20 models has a randomly
selected learning rate for the Adam optimizer, uniformly selected between
$10^{-2}$ and $10^{-5}$. The initial training period consists of two phases:
autoencoder-only (75 epochs) and full model (250 epochs). The autoencoder-only
phase only enforces the autoencoder losses $\mathcal{L}_{1}$ and
$\mathcal{L}_{2}$ during backpropagation. A checkpoint algorithm is used to
keep track of the model with the lowest overall loss throughout the training
procedure. At the end of the initial period, the best model is selected and
the others are eliminated. The best model is trained for an additional 2500
epochs.
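The candidate-selection stage can be sketched as follows. We assume a log-uniform draw between $10^{-5}$ and $10^{-2}$ (the text says "uniformly selected between $10^{-2}$ and $10^{-5}$" without specifying the scale); the function name is illustrative.

```python
import math
import random

random.seed(0)

def sample_learning_rate(low=1e-5, high=1e-2):
    """Assumed log-uniform draw of an Adam learning rate in [low, high]."""
    return 10 ** random.uniform(math.log10(low), math.log10(high))

# 20 candidate models, each with its own randomly drawn learning rate.
candidate_lrs = [sample_learning_rate() for _ in range(20)]
assert len(candidate_lrs) == 20
assert all(1e-5 <= lr <= 1e-2 for lr in candidate_lrs)
```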
There are two network architectures in this work. The architectures depicted
in Figures 12 and 13 are applied to the two-dimensional nonlinear Poisson BVP.
The architecture depicted in Figure 14, is applied to one-dimensional
problems.
The two architectures have a few training variables in common. Both models use
variance scaling initialization, $\ell_{2}$ regularization
($\lambda=10^{-6}$), and ReLU activation functions for fully connected (1D
architecture) and convolutional (2D architecture) layers. Notably, the two
layers immediately before and after the latent space do not have activation
functions. A normalized mean squared error loss function is used for all of
the loss functions, as described in Section 2. The models are trained in
batches of 64 samples.
The 2D architecture utilizes convolutional layers and pooling layers, as shown
in Figures 12 and 13. All convolutional layers use a kernel size of $4\times
4$. The convolutional layers in the encoder differ from those in the decoder:
the encoder convolutional layers use a stride size of $1\times 1$ and an
increasing number of filters ($8,16,32,64$), whereas the deconvolutional
layers use a stride size of $2\times 2$ with a decreasing number of filters
($32,16,8$). Pooling layers are similar for both the encoder and decoder,
with a stride size of $2\times 2$ and a pool size of $2\times 2$.
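The resulting spatial bookkeeping can be sketched as follows. Note that the $128\times 128$ input grid and the count of four conv+pool blocks are our assumptions (inferred from the filter progression and the 1D grid resolution), not values stated in the text.

```python
def encoder_spatial_size(size, n_blocks=4):
    """Track one spatial dimension through the assumed encoder blocks."""
    for _ in range(n_blocks):
        # zero-padded 4x4 conv with stride 1: spatial size unchanged
        size = size // 2       # 2x2 average pool with stride 2: halves it
    return size

# Assumed 128x128 input: 128 -> 64 -> 32 -> 16 -> 8 per spatial dimension.
assert encoder_spatial_size(128) == 8
```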
## Appendix C Additional Results
The repeatability of the results and models learned by the DeepGreen
architecture is interesting to study from the perspective of operator
convergence and latent space representations. In both cases, we aim to
investigate the convergence of the model parameters to determine whether the
learned latent spaces and operators are unique.
### C.1 Operator Initialization
We repeat the training procedure for DeepGreen with three different
initialization approaches for the operator $L$. Again, we train with data from
the example nonlinear cubic Helmholtz model. This experiment focuses on
comparing the initial values of the operator $L$ with the final values of the
operator at the end of training to determine if the DeepGreen approach tends
to converge to a specific operator construction. The results in Figure 15 show
the initial and final operator for identity-initialized, randomly initialized,
and Toeplitz-initialized operator matrices. Impressively, the result shows
that the network tends to learn operators with diagonal dominance for all of
the tested initialization strategies. This approach, which DeepGreen appears
to prefer, draws strong parallels to the coordinate diagonalization approach
commonly used in physics.
Figure 15: Initial vs learned operators for an operator matrix $L$ for
different initial conditions. The top row shows identity matrix
initialization, the middle row shows random initialization (He normal), and
the bottom row shows a Toeplitz gradient initialization.
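The three initialization strategies can be illustrated as follows; the matrix size and the exact Toeplitz profile are assumptions for illustration, not the authors' code.

```python
import numpy as np

n = 20
rng = np.random.default_rng(0)

L_identity = np.eye(n)                                      # identity init
L_random = rng.standard_normal((n, n)) * np.sqrt(2.0 / n)   # He-normal-style

# Toeplitz "gradient": constant along each diagonal, decaying off the main one
# (assumed decay profile).
profile = np.maximum(0.0, 1.0 - np.arange(n) / 4.0)
L_toeplitz = np.array([[profile[abs(i - j)] for j in range(n)]
                       for i in range(n)])

# Toeplitz property: each entry depends only on i - j.
assert all(L_toeplitz[i, j] == L_toeplitz[i + 1, j + 1]
           for i in range(n - 1) for j in range(n - 1))
```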
### C.2 Latent Space Analysis
We repeat the training procedure for the example system, the nonlinear cubic
Helmholtz model, a total of one hundred times. A single sample was selected
from the training data, and the latent space representations, $\mathbf{v}_{i}$
and $\mathbf{f}_{i}$, of the input vectors $\mathbf{u}_{i}$ and
$\mathbf{F}_{i}$ were computed. Statistics for the latent space representations
are presented in Figure 16. It is evident that the latent space vectors are
not identical between runs, and that the values in the vector do not follow
any particular statistical distribution. This information implies that the
learned weights in the model, and the learned latent space representations,
vary for each training instance and do not appear to converge to a single
representation.
Figure 16: Statistics of latent space values of a single sample over 100
experimental runs.
### C.3 Residual network architecture
All of the autoencoders used in this work use a residual network (ResNet)
architecture. In order to demonstrate the advantage of the ResNet
architecture, we trained six models using the DeepGreen architecture for each
of the four systems. Three of the models use the ResNet skip connections,
while three do not use the ResNet architecture.
For the two simplest systems, the nonlinear cubic Helmholtz equation and the
nonlinear Sturm–Liouville equation, the difference between the models with and
without the ResNet skip connections was negligible. For the nonlinear cubic
Helmholtz equation, the mean validation loss for the non-ResNet models was
$2.7\times 10^{-3}$ and the median validation loss was $2.4\times 10^{-3}$.
Using the ResNet architecture resulted in a mean validation loss of $3.5\times
10^{-3}$ and a median validation loss of $8.8\times 10^{-4}$. The ResNet
architecture resulted in a lower median validation loss but a higher mean due
to one of the three models performing much more poorly than the other two. The
results for the nonlinear Sturm–Liouville system are analogous. With a non-
ResNet architecture, the mean validation loss was $4.5\times 10^{-3}$ and the
median validation loss was $4.0\times 10^{-3}$. With a ResNet architecture,
the mean validation loss was $5.7\times 10^{-3}$ and the median validation
loss was $3.1\times 10^{-3}$. Therefore, the use of the ResNet architecture
produced similar results to a non-ResNet architecture for these two simple
systems.
For the two systems that had larger losses — the nonlinear biharmonic equation
in 1D and the 2D nonlinear Poisson equation — the ResNet architecture was
clearly superior to a non-ResNet architecture. For the nonlinear biharmonic
equation, the ResNet architecture yields a mean validation loss of $2.5\times
10^{-2}$ and median validation loss of $2.8\times 10^{-2}$ for the three
models, compared with $3.8\times 10^{-2}$ and $4.0\times 10^{-2}$,
respectively, for the non-ResNet architecture. Therefore, the ResNet
architecture performed better in terms of both the mean and median loss. The
ResNet architecture is absolutely vital for the nonlinear Poisson system.
Without the ResNet architecture, the model essentially did not converge. Both
the mean and median validation losses were $1.9\times 10^{0}$. In contrast,
the ResNet architecture had a mean validation loss of $1.8\times 10^{-2}$ and
a median of $1.9\times 10^{-3}$.
# Ultra-high energy cosmic rays deflection by
the Intergalactic Magnetic Field
Andrés Arámburo-García<EMAIL_ADDRESS>Institute Lorentz, Leiden
University, Niels Bohrweg 2, Leiden, NL-2333 CA, the Netherlands Kyrylo
Bondarenko<EMAIL_ADDRESS>Theoretical Physics Department, CERN,
Geneva 23, CH-1211, Switzerland L’Ecole polytechnique fédérale de Lausanne,
1015 Lausanne, Switzerland Alexey Boyarsky<EMAIL_ADDRESS>Institute Lorentz, Leiden University, Niels Bohrweg 2, Leiden, NL-2333 CA, the
Netherlands Dylan Nelson<EMAIL_ADDRESS>Universität Heidelberg,
Zentrum für Astronomie, Institut für theoretische Astrophysik, Albert-Ueberle-
Str. 2, 69120 Heidelberg, Germany Annalisa Pillepich pillepich@mpia-
hd.mpg.de Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg,
Germany Anastasia Sokolenko<EMAIL_ADDRESS>Institute of High
Energy Physics, Austrian Academy of Sciences, Nikolsdorfergasse 18, 1050
Vienna, Austria
###### Abstract
The origin and composition of ultra-high energy cosmic rays (UHECRs) remain a
mystery. The common lore is that UHECRs are deflected from their primary
directions by the Galactic and extragalactic magnetic fields. Here we describe
an extragalactic contribution to the deflection of UHECRs that does not depend
on the strength and orientation of the initial seed field. Using the
IllustrisTNG simulations, we show that outflow-driven magnetic bubbles created
by feedback processes during galaxy formation deflect approximately half of
all $10^{20}$ eV protons by $1^{\circ}$ or more, and up to $20$–$30^{\circ}$.
This implies that the deflection in the intergalactic medium must be taken
into account in order to identify the sources of UHECRs.
The identification of the sources of ultra-high energy cosmic rays (UHECRs) is
one of the central problems of astroparticle physics. No strong signatures of
sources have been seen in the data so far – the observed UHECRs show a
surprisingly high level of isotropy, with no significant small scale
clustering, see e.g. [1] for a recent discussion or [2] for a review
(indeed, at “intermediate” energies $E\geq 8\cdot 10^{18}$ eV, a dipole
anisotropy at the level of 6% has been detected [26]; at $E>5.7\cdot 10^{19}$
eV, there is evidence, with a $3.4\sigma$ significance, for a hot spot with
radius $\sim 25^{\circ}$ [27]; for the map of all 266 observed events with
$E>5.7\cdot 10^{19}$ eV, see e.g. [28]).
This absence of small scale clustering is believed to arise from the
deflection of UHECRs in magnetic fields during their propagation between the
sources and Earth. This effect depends on the mass composition of UHECRs,
which is not known – the same magnetic field deflects heavy nuclei much more
strongly than protons. In general, the total deflection of UHECRs can be
separated into the deflection by the Galactic magnetic field (GMF) and by the
intergalactic magnetic field (IGMF). For the GMF, both estimates [2] and
numerical studies [4] predict average deflection angles at the level of
$1^{\circ}$ for protons with energy $5\times 10^{19}$ eV outside of the
galactic plane. However, these predictions still suffer from many
uncertainties in GMF models (see [5, 6, 1], for a discussion). There is a hope
that a reliable model of the GMF would make it possible to identify the
sources of the UHECRs by re-tracing protons (see e.g.[6, 7]), which would
constrain the mass composition of the highest-energy UHECRs [1].
In the approach outlined above, it is often assumed that the deflection of UHE
protons in the IGMF is not very large (below 1 degree), and that it is
sufficient to re-trace only the large-scale component of the GMF, thereby
neglecting the IGMF contribution. If instead the deflection of protons by the
IGMF is at the level of a few degrees or more, this would make the
identification of the sources much more difficult – a reliable model of the
magnetic field strength and topology in the local Universe would be required.
Figure 1: Maps of the magnetic fields for two random slices of the TNG100
simulation (20 kpc deep). The red circles show dark matter halos with total
mass above $10^{11.5}{\rm M}_{\odot}$ which reside within each slice, with
radii corresponding to $1.5$ times their virial radius. The magnetic field
values along two random lines of sight (green lines in maps) are shown in the
upper panels. We also zoom on a region with an extended, magnetized bubble and
show in detail magnetic field magnitude (upper right panel), gas metallicity
(middle right panel), and free electron number density (bottom right panel).
Existing numerical cosmological simulations do not give consistent predictions
about the strength of IGMFs [8, 9, 10, 11]. For example, [8] find that the
IGMF deflection is more important than the GMF contribution, while the
simulation performed in [9] that used a seed magnetic field $B\sim 2\times
10^{-12}$ G produces significantly smaller deflection angles, which are
negligible in comparison to the GMF contribution. More recent simulations [10,
11] have studied UHECR propagation using the cosmological code Enzo, finding
that the magnetic fields in voids have a minor influence on the propagation of
UHECRs. [12] adopt the maximum possible seed magnetic field strength based on
experiment and observational inferences, $B\sim 10^{-9}$ G, and derive
deflections of the order of ten degrees. Overall, the dependence of these
results on the unknown initial conditions for cosmic magnetic fields prevents
us from deriving robust conclusions.
In this paper, we concentrate on the part of the IGMFs that may be, to a large
extent, independent of the initial magnetic field seeds. We have recently
pointed out in [13] that, according to the IllustrisTNG cosmological
simulations [14, 15, 16, 17, 18], baryonic feedback processes, such as AGN and
supernovae-driven outflows, create extended over-magnetized bubbles in the
intergalactic medium (IGM). These have magnetic field strengths that can be
many orders of magnitude larger than those in similarly dense regions of the
IGM, and are determined by processes that take place inside galaxies rather
than by the initial properties of the magnetic fields. The contribution to the
deflection of UHECRs from these bubbles can be considered as a minimal effect
of the IGMFs, present even if the seed fields are feeble. Importantly, the
IllustrisTNG simulations have been shown to return overall properties of the
galaxy populations in good agreement with data across a wide range of
observables and regimes. In comparison to previous numerical experiments, this
provides support for the emerging properties of AGN and stellar feedback, that
in turn are responsible for the over-magnetized bubbles that are inflated well
into the IGM.
In this letter, we show that, according to the IllustrisTNG model, the
deflections of the highest-energy protons by the IGMF can be split into two
comparable groups – below one degree and 1–30 degrees. Such a picture suggests
that while the fraction of UHECRs with relatively small deflections in IGMF is
significant enough to leave the possibility of source identification, the
effect of IGMFs is strong and should be taken into account.
Simulations.—The IllustrisTNG project is a set of cosmological gravo-
magnetohydrodynamic simulations (hereafter TNG, [14, 15, 16, 17, 18]) based on
the moving-mesh Arepo code [19] that adopt Planck 2015 cosmological parameters
[20]. This code solves the coupled equations of self-gravity and ideal MHD
[21, 22]. Specifically, in this work, we use the publicly available TNG100-1
simulation (hereafter TNG100 [23]), that is the highest resolution run of the
TNG100 series, with a homogeneous initial seed magnetic field with strength
$10^{-14}$ cG (comoving Gauss). TNG100 has a box size
$\sim(110~{}\text{cMpc})^{3}$ and contains $1820^{3}$ dark matter particles
and an equal number of initial gas cells with mass resolution of
$m_{\text{DM}}=7.5\times 10^{6}~{}{\rm M}_{\odot}$ and
$m_{\text{bar}}=1.4\times 10^{6}~{}{\rm M}_{\odot}$, respectively.
Figure 2: Loss of the “memory” of the initial magnetic seed field in TNG100.
The blue curve shows the average orientation of the $z=0$ magnetic field along
the $z$ direction of the simulation domain (i.e. the orientation of the
initial uniform seed) as a function of the $z=0$ field strength, for regions
with $n_{e}<200\langle n_{e}\rangle$. Similar results hold for other electron
number densities. The red vertical line marks the seed field strength. Figure
3: Amplification of the magnetic fields in TNG100 with time. We show the
mass-weighted distribution of comoving magnetic field strength at redshifts $z=0$,
$1$, and $2$.
The TNG simulations adopt a detailed galaxy formation model that is important
for the resulting structure of the magnetic fields both inside and outside
galaxies and clusters. For a more detailed description of this model, see
Section 2 in [13] and references therein. Of particular importance for the
magnetic field evolution is the inclusion of AGN feedback and galactic winds
launched by supernovae, among other astrophysical processes [24, 25]. Apart
from adiabatic contraction, magnetic fields in TNG experience strong
amplification by small-scale dynamos, which are then distributed beyond
galaxies and even halos by energetic outflows.
Outflow-driven bubbles.—As shown in [13], the TNG simulations predict that, for
$z\lesssim 2$, as galaxies form and AGN and stellar feedback become important,
these processes form large regions of strong magnetic fields far outside of
the virial radii of collapsed structures, thus creating over-magnetized
bubbles. These large-scale bubbles contain magnetic fields that are several
orders of magnitude stronger than in the unaffected regions of the IGM with
the same electron density. In particular, if we identify bubbles as regions
with $B>10^{-12}$ cG, enhanced metallicity, and with clear outflowing
kinematic signatures, they can be as large as tens of Mpc – see Fig. 1.
The properties of these magnetic bubbles are determined by baryonic feedback
physics. It was demonstrated in [13] that, similarly to the magnetic fields
inside galaxies, magnetic fields in the bubbles are to a certain extent
independent of the initial conditions for magnetic fields assumed in the
simulation. In Fig. 2 we show how the simulated magnetic field “forgets” about
the initial orientation of the seed magnetic field, which is along the $z$
axis, for increasing field strengths, i.e. due to amplification. It is clearly
seen that for outflow-driven bubbles (e.g. for $B>10^{-12}$ cG), there is no
longer a preferred direction of the field.
The TNG100 simulation suggests that such over-magnetized bubbles formed quite
recently in cosmic history, at redshifts $z\lesssim 2$, see Fig. 3. While at
$z=2$ most of the gas mass still has initial-like magnetic field strengths
($10^{-14}$ cG for TNG100), at lower redshifts the situation changes. At $z=0$
the bubbles are not rare, occupying more than $10\%$ of cosmological volume
for magnetic fields stronger than $10^{-12}$ G and more than $3\%$ for
$B>10^{-9}$ G [13].
Figure 4: Distribution of the deflection angles $\delta$ for 1000 protons with
energy $10^{20}$ eV with random initial position in our fiducial volume in the
TNG100 simulation at $z=0$ for trajectory lengths of 110, 65, and 20 Mpc, from
left to right. In each panel, colors show cuts by $r_{\min}$, the distance to
the closest halo with mass above $10^{12.5}M_{\odot}$ along the trajectory.
Figure 5: Evolution of the magnetic field strength $|B|$ (top panel, green
curve), the minimal distance $\mathrm{r_{min}}$ to a massive halo
$>10^{12.5}\mathrm{M_{\odot}}$ (top panel, red dashed curve), and of the
corresponding deflection angle $\delta$ (bottom panel), along a random
trajectory for a proton with energy $10^{20}$ eV.
Deflection of UHECRs.—To study the effect of the over-magnetized bubbles on
the propagation of UHECRs we trace trajectories of high-energy protons in the
TNG100 simulation. The change of the velocity of a particle traversing a
magnetic field is given by
$\Delta\bm{v}=\frac{q}{E_{p}}\int[\bm{v}\times\bm{B}]dl,$ (1)
where $q$ and $E_{p}$ are the charge and energy of the proton and the integral
is taken along the trajectory of a particle.
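Equation (1) can be integrated numerically along a tabulated field. The sketch
below is a dimensionless toy version (the function, the step count, and the
uniform test field are our own illustrative choices, not the authors' code):
$\kappa=q B/E_{p}$ plays the role of an inverse Larmor radius, and for a
uniform field perpendicular to the initial velocity the deflection is
analytically $\delta=\kappa L$.

```python
import numpy as np

def deflection_angle(B_func, L, kappa=1.0, n_steps=10000):
    """Integrate Eq. (1) in dimensionless form, dv/dl = kappa * (v x B),
    where v is the unit velocity, B_func(l) gives the local field direction
    (scaled by its strength relative to a reference field), and
    kappa = q * B_ref / E_p is the inverse Larmor radius."""
    dl = L / n_steps
    v = np.array([1.0, 0.0, 0.0])
    for i in range(n_steps):
        v = v + kappa * np.cross(v, B_func(i * dl)) * dl
        v /= np.linalg.norm(v)  # the magnetic force does no work: |v| is fixed
    return float(np.degrees(np.arccos(np.clip(v[0], -1.0, 1.0))))

# Uniform field along z, initial velocity along x: expect delta = kappa * L.
delta = deflection_angle(lambda l: np.array([0.0, 0.0, 1.0]), L=0.01)
```

For the simulation itself, `B_func` would instead return the voxelized TNG
field interpolated along the trajectory.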
We take the magnetic field configuration from a random $(30\text{
Mpc})^{2}\times 110$ Mpc sub-volume of TNG100, with the long side parallel to
the direction of the initial seed field, averaging within $(40\text{
kpc})^{3}$ voxels. We then track the propagation of protons along the long
side of this volume, following their changing velocity and direction according
to Eq. (1). We consider 1000 protons with energy $10^{20}$ eV, with initial
positions randomly chosen within a $20\text{ Mpc}\times 20\text{ Mpc}$ area at
the volume boundary. Initial velocities are oriented along the long side of
the volume, and we propagate each proton until the end of the volume.
The resulting distribution of the angle between initial and final propagation
direction is shown in Fig. 4, for integration along paths of 110, 65, and 20
Mpc (left to right). For path lengths of $\sim$ 110 Mpc, about $2.8\%$ of
these protons are deflected by $\delta<0.1^{\circ}$. However, another $35.5\%$
of protons have $0.1^{\circ}<\delta<1^{\circ}$: these deflections are much
larger than the $\delta\sim 0.001^{\circ}$ that would be expected from a seed
field of $B\sim 10^{-14}$ cG along the whole 110 Mpc trajectory. Finally,
$61.7\%$ of protons encounter even larger magnetic fields both in bubbles and
sometimes inside halos and attain $\delta>1^{\circ}$, where deflections can
reach up to tens of degrees. In fact, the deflection angles are larger for
protons which pass close to massive halos, particularly for shorter path
lengths, as shown in the different histogram colors.
An example of the deflection angle evolution of a test proton is shown in
Fig. 5. We see that most of the deflection is acquired during the $\sim 2.5$
Mpc-size region that corresponds to the crossing of a magnetic bubble.
Conclusions and discussion.—We find that, according to the IllustrisTNG model
of galaxy formation, the influence of intergalactic magnetic fields on the
propagation of UHECRs is essential and must be taken into account when
searching for the sources of these particles. The average value of the
magnetic field strength in voids strongly depends on the (unknown) initial
seed magnetic fields. However, large, outflow-inflated, over-magnetized
bubbles form around collapsed halos, as recently emphasized in [13]. Magnetic
fields in these regions, which can extend out to tens of Mpc, are created by
feedback processes in the innermost regions of galaxies and can be orders of
magnitude larger than in unaffected regions of the IGM with the same baryon
density. The strength and geometry of these fields, being to a great extent
defined by processes occurring within galaxies, “forget” the initial
conditions of the seed field. As a result, they provide a non-negligible
contribution of the IGMFs to UHECR deflection, even for extremely small values
of the initial seed.
Our results can potentially explain why the sources of UHECRs have not been
identified yet and, at the same time, suggest a possible promising way forward
to resolve this problem. Our study finds that about half of all protons with
$10^{20}$ eV energy are deflected in the IGM by $\delta<1^{\circ}$ along a
$\sim 100$ Mpc path. At the same time, there is a significant fraction of
protons that go through over-magnetized bubbles and are strongly deflected.
This means, for example, that among strongly deflected particles, identified
by the procedure of [1] as heavy nuclei, there may also be a fraction of
protons that passed through, or nearby, such regions.
In order to explicitly account for these bubbles and so help identify the
sources of UHECRs, we plan to perform a constrained simulation of the local
volume, extending to at least $100$-$200$ Mpc. This simulation will reproduce
the observed large-scale structure around the Milky Way, predict the
corresponding outflow-driven bubbles, and create a large-scale map of the
local magnetic fields.
Such a volume would contain most of the possible sources of the highest energy
CRs that could reach us despite the GZK cut-off. It would be, of course,
difficult to use a simulated map for re-tracing the UHECRs, as e.g. the
details of the geometry of magnetic fields inside the bubbles depend on the
kinematics of outflows. Even without explicit re-tracing, it would assist
statistical inferences on the fraction of protons which remember their
original directions when entering the Galaxy. Such a constrained simulation
will also enable detailed mock predictions for the detection of the local
bubbles, through for instance Faraday Rotation Measure (RM). As we expect the
RM in the bubbles to be much larger than in voids, but still lower than in the
collapsed structures or even in filaments (see [13] for a discussion), such a
detection would require a non-trivial statistical analysis.
We thank Dmitry Semikoz, Volker Springel, J.-P. Kneib and T. Theuns for very
fruitful discussions. AS is supported by the FWF Research Group grant FG1.
AA, KB and AB are supported by the European Research Council (ERC) Advanced
Grant “NuBSM” (694896).
## References
* Kuznetsov and Tinyakov [2020] M. Kuznetsov and P. Tinyakov, UHECR mass composition at highest energies from anisotropy of their arrival directions, (2020), arXiv:2011.11590 [astro-ph.HE] .
* Kachelriess and Semikoz [2019] M. Kachelriess and D. Semikoz, Cosmic Ray Models, Prog. Part. Nucl. Phys. 109, 103710 (2019), arXiv:1904.08160 [astro-ph.HE] .
* Note [1] Indeed, at “intermediate” energies $E\geq 8\cdot 10^{18}$ eV, a dipole anisotropy at the level of 6% have been detected [26]; at $E>5.7\cdot 10^{19}$ eV, there is evidence (with a $3.4\sigma$ significance) for a hot spot with radius $\sim 25^{\circ}$ [27]. For the map of all 266 observed events with $E>5.7\cdot 10^{19}$ eV, see e.g. [28].
* Giacinti _et al._ [2011] G. Giacinti, M. Kachelriess, D. Semikoz, and G. Sigl, Ultrahigh Energy Nuclei in the Turbulent Galactic Magnetic Field, Astropart. Phys. 35, 192 (2011), arXiv:1104.1141 [astro-ph.HE] .
* Jansson and Farrar [2012] R. Jansson and G. R. Farrar, A New Model of the Galactic Magnetic Field, ApJ 757, 14 (2012), arXiv:1204.3662 [astro-ph.GA] .
* Haverkorn _et al._ [2019] M. Haverkorn, F. Boulanger, T. Enßlin, J. Hörandel, T. Jaffe, J. Jasche, J. Rachen, and A. Shukurov, IMAGINE: Modeling the Galactic Magnetic Field, Galaxies 7, 17 (2019), arXiv:1903.04401 [astro-ph.GA] .
* Boulanger _et al._ [2018] F. Boulanger _et al._ , IMAGINE: A comprehensive view of the interstellar medium, Galactic magnetic fields and cosmic rays, JCAP 08, 049, arXiv:1805.02496 [astro-ph.GA] .
* Sigl _et al._ [2004] G. Sigl, F. Miniati, and T. A. Ensslin, Ultrahigh energy cosmic ray probes of large scale structure and magnetic fields, Phys. Rev. D 70, 043007 (2004), arXiv:astro-ph/0401084 .
* Dolag _et al._ [2005] K. Dolag, D. Grasso, V. Springel, and I. Tkachev, Constrained simulations of the magnetic field in the local Universe and the propagation of UHECRs, JCAP 01, 009, arXiv:astro-ph/0410419 .
* Hackstein _et al._ [2016] S. Hackstein, F. Vazza, M. Brüggen, G. Sigl, and A. Dundovic, Propagation of ultrahigh energy cosmic rays in extragalactic magnetic fields: a view from cosmological simulations, Mon. Not. Roy. Astron. Soc. 462, 3660 (2016), arXiv:1607.08872 [astro-ph.CO] .
* Hackstein _et al._ [2018] S. Hackstein, F. Vazza, M. Brüggen, J. G. Sorce, and S. Gottlöber, Simulations of ultra-high Energy Cosmic Rays in the local Universe and the origin of Cosmic Magnetic Fields, Mon. Not. Roy. Astron. Soc. 475, 2519 (2018), arXiv:1710.01353 [astro-ph.CO] .
* Alves Batista _et al._ [2017] R. Alves Batista, M.-S. Shin, J. Devriendt, D. Semikoz, and G. Sigl, Implications of strong intergalactic magnetic fields for ultrahigh-energy cosmic-ray astronomy, Phys. Rev. D 96, 023010 (2017), arXiv:1704.05869 [astro-ph.HE] .
* Garcia _et al._ [2020] A. A. Garcia, K. Bondarenko, A. Boyarsky, D. Nelson, A. Pillepich, and A. Sokolenko, Magnetization of the intergalactic medium in the IllustrisTNG simulations: the importance of extended, outflow-driven bubbles, (2020), arXiv:2011.11581 [astro-ph.CO] .
* Nelson _et al._ [2018] D. Nelson, A. Pillepich, V. Springel, R. Weinberger, L. Hernquist, R. Pakmor, S. Genel, P. Torrey, M. Vogelsberger, G. Kauffmann, F. Marinacci, and J. Naiman, First results from the IllustrisTNG simulations: the galaxy colour bimodality, MNRAS 475, 624 (2018), arXiv:1707.03395 [astro-ph.GA] .
* Springel _et al._ [2018] V. Springel, R. Pakmor, A. Pillepich, R. Weinberger, D. Nelson, L. Hernquist, M. Vogelsberger, S. Genel, P. Torrey, F. Marinacci, and J. Naiman, First results from the IllustrisTNG simulations: matter and galaxy clustering, MNRAS 475, 676 (2018), arXiv:1707.03397 [astro-ph.GA] .
* Pillepich _et al._ [2018a] A. Pillepich, D. Nelson, L. Hernquist, V. Springel, R. Pakmor, P. Torrey, R. Weinberger, S. Genel, J. P. Naiman, F. Marinacci, and M. Vogelsberger, First results from the IllustrisTNG simulations: the stellar mass content of groups and clusters of galaxies, MNRAS 475, 648 (2018a), arXiv:1707.03406 [astro-ph.GA] .
* Naiman _et al._ [2018] J. P. Naiman, A. Pillepich, V. Springel, E. Ramirez-Ruiz, P. Torrey, M. Vogelsberger, R. Pakmor, D. Nelson, F. Marinacci, L. Hernquist, R. Weinberger, and S. Genel, First results from the IllustrisTNG simulations: a tale of two elements - chemical evolution of magnesium and europium, MNRAS 477, 1206 (2018), arXiv:1707.03401 [astro-ph.GA] .
* Marinacci _et al._ [2018] F. Marinacci, M. Vogelsberger, R. Pakmor, P. Torrey, V. Springel, L. Hernquist, D. Nelson, R. Weinberger, A. Pillepich, J. Naiman, and S. Genel, First results from the IllustrisTNG simulations: radio haloes and magnetic fields, MNRAS 480, 5113 (2018), arXiv:1707.03396 [astro-ph.CO] .
* Springel [2010] V. Springel, E pur si muove: Galilean-invariant cosmological hydrodynamical simulations on a moving mesh, MNRAS 401, 791 (2010), arXiv:0901.4107 [astro-ph.CO] .
* Planck Collaboration _et al._ [2016] Planck Collaboration, P. A. R. Ade, _et al._ , Planck 2015 results. XIII. Cosmological parameters, A&A 594, A13 (2016), arXiv:1502.01589 [astro-ph.CO] .
* Pakmor _et al._ [2011] R. Pakmor, A. Bauer, and V. Springel, Magnetohydrodynamics on an unstructured moving grid, MNRAS 418, 1392 (2011), arXiv:1108.1792 [astro-ph.IM] .
* Pakmor and Springel [2013] R. Pakmor and V. Springel, Simulations of magnetic fields in isolated disc galaxies, MNRAS 432, 176 (2013), arXiv:1212.1452 [astro-ph.CO] .
* Nelson _et al._ [2019] D. Nelson, V. Springel, A. Pillepich, V. Rodriguez-Gomez, P. Torrey, S. Genel, M. Vogelsberger, R. Pakmor, F. Marinacci, R. Weinberger, L. Kelley, M. Lovell, B. Diemer, and L. Hernquist, The IllustrisTNG simulations: public data release, Computational Astrophysics and Cosmology 6, 2 (2019), arXiv:1812.05609 [astro-ph.GA] .
* Weinberger _et al._ [2017] R. Weinberger, V. Springel, L. Hernquist, A. Pillepich, F. Marinacci, R. Pakmor, D. Nelson, S. Genel, M. Vogelsberger, J. Naiman, and P. Torrey, Simulating galaxy formation with black hole driven thermal and kinetic feedback, MNRAS 465, 3291 (2017), arXiv:1607.03486 [astro-ph.GA] .
* Pillepich _et al._ [2018b] A. Pillepich, V. Springel, D. Nelson, S. Genel, J. Naiman, R. Pakmor, L. Hernquist, P. Torrey, M. Vogelsberger, R. Weinberger, and F. Marinacci, Simulating galaxy formation with the IllustrisTNG model, MNRAS 473, 4077 (2018b), arXiv:1703.02970 [astro-ph.GA] .
* Aab _et al._ [2017] A. Aab _et al._ (Pierre Auger), Observation of a Large-scale Anisotropy in the Arrival Directions of Cosmic Rays above $8\times 10^{18}$ eV, Science 357, 1266 (2017), arXiv:1709.07321 [astro-ph.HE] .
* Abbasi _et al._ [2014] R. Abbasi _et al._ (Telescope Array), Indications of Intermediate-Scale Anisotropy of Cosmic Rays with Energy Greater Than 57 EeV in the Northern Sky Measured with the Surface Detector of the Telescope Array Experiment, Astrophys. J. Lett. 790, L21 (2014), arXiv:1404.5890 [astro-ph.HE] .
* Matthews [2018] J. Matthews (Telescope Array), Highlights from the Telescope Array Experiment, PoS ICRC2017, 1096 (2018).
# A Critical Study of Cottenden _et al._ ’s _An Analytical Model of the Motion
of a Conformable Sheet Over a General Convex Surface in the Presence of
Frictional Coupling_
Kavinda Jayawardana<EMAIL_ADDRESS>
###### Abstract
In our analysis, we show that what Cottenden _et al._ [1, 2] and Cottenden [3]
accomplish is the derivation of the ordinary capstan equation, and a solution
to a dynamic membrane with both a zero-Poisson’s ratio and a zero-mass density
on a rigid right-circular cone. The authors state that the capstan equation
holds true for an elastic obstacle, and thus, it can be used to calculate the
coefficient of friction between human skin and fabrics. However, using data
that we gathered from human trials, we show that this claim cannot be
substantiated as it is unwise to use the capstan equation (i.e. belt-friction
models in general) to calculate the friction between in-vivo skin and fabrics.
This is due to the fact that such models assume a rigid foundation, while
human soft-tissue is deformable, and thus, a portion of the applied force to
the fabric is expended on deforming the soft-tissue, which in turn leads to
the illusion of a higher coefficient of friction when using belt-friction
models.
###### keywords:
Capstan Equation , Contact Mechanics , Coefficient of Friction , Mathematical
Elasticity
## 1 Introduction
Consider a situation where two elastic bodies are in contact with each other
and the contact area exhibits friction. A common area where the modelling of
such problems can be found is the field of tire manufacturing [4, 5]. Now,
consider the scenario where one of the elastic bodies is very thin and almost
planar in a curvilinear sense, in comparison to the other body. Then the thin
body can be approximated by a shell or a membrane, and such models can be used
to model skin abrasion caused by fabrics as a result of friction. The need for
valid modelling techniques is immensely important in fields such as
incontinence-associated dermatitis [2], sports-related skin trauma [6] and
cosmetics [7]. It is documented that abrasion damage to human skin in cases
such as the jogger’s nipple [8] and dermatitis from clothing and attire [9]
are caused by repetitive movement of fabrics on skin, and in cases such as
pressure ulcers [10] and juvenile plantar dermatitis [11], friction may worsen
the problem. In an attempt to model such problems mathematically, Cottenden _et
al._ [2] put forward a model in their publication _An analytical model of the
motion of a conformable sheet over a general convex surface in the presence of
frictional coupling_. We show that, regardless of the authors’ claims, what
they derive is the ordinary capstan equation and a solution for a membrane
with a zero-Poisson’s ratio and a zero-mass density on a rigid right-circular
cone. Cottenden _et al._ [1, 2] and Cottenden [3] also claim that the capstan
equation can be used to calculate the friction between in-vivo skin and
fabrics. With data gathered from human trials, we show that it is unwise to
use _belt-friction models_ (e.g. capstan equation) to calculate the
coefficient of friction between in-vivo skin and fabrics, as such models
assume a rigid foundation, while human soft-tissue is elastic; thus, a portion
of the applied force to the fabric is expended on deforming the soft-tissue,
which in turn leads to the illusion of a higher coefficient of friction when
using belt-friction models.
## 2 Capstan Equation
The capstan equation, otherwise known as Euler’s equation of tension
transmission, is the relationship governing the maximum applied-tension
$T_{\text{max}}$ with respect to the minimum applied-tension $T_{0}$ of an
elastic string wound around a rough cylinder. Thus, the governing equation can
be expressed as follows,
$\displaystyle T_{\text{max}}=T_{0}\exp(\mu_{F}\theta)~{},$ (1)
where $\theta$ is the contact angle and $\mu_{F}$ is the coefficient of
friction. By _string_ we mean a one-dimensional elastic body, and by _rough_ we
mean that the contact area exhibits friction, where the coefficient of
friction is the physical ratio of the magnitude of the shear force and the
normal force between two contacting bodies. The capstan equation is the most
perfect example of a _belt-friction model_ , which describes behaviour of a
belt-like object moving over a rigid-obstacle subjected to friction [12]. In
engineering, the capstan equation describes a body under a load equilibrium
involving friction between a rope and a wheel like circular object, and thus,
it is widely used to analyse the tension transmission behaviour of cable-like
bodies in contact with circular profiled surfaces, such as in rope rescue
systems, marine cable applications, computer storage devices (electro-optical
tracking systems), clutch or brake systems in vehicles, belt-pulley machine
systems and fibre-reinforced composites [13].
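As a quick numerical illustration (our own, not from any of the cited publications), the capstan equation (1) can be evaluated directly; the values of $T_{0}$, $\mu_{F}$ and $\theta$ below are arbitrary:

```python
import math

def capstan_tension(T0, mu_F, theta):
    """Maximum applied-tension from the capstan equation (1):
    T_max = T0 * exp(mu_F * theta)."""
    return T0 * math.exp(mu_F * theta)

# Example: a holding tension of 10 N, a half-turn of contact
# (theta = pi) and mu_F = 0.3 (illustrative values only).
T_max = capstan_tension(10.0, 0.3, math.pi)  # roughly 25.7 N
```

Note that $T_{\text{max}}$ depends only on the contact angle and the coefficient of friction, not on the radius of the cylinder.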
For a rigorous study of the capstan equation (a membrane or a string over a
rigid-obstacle) generalised to non-circular geometries, we refer the reader to
chapter 2 of Jayawardana [14]. There, as an example, the author presents a
solution to the capstan equation generalised for a rigid elliptical-prism,
i.e. given a prism with a horizontal diameter of $2a$ and the vertical
diameter of $2b$, parametrised by the map
$\boldsymbol{\sigma}(x^{1},\theta)=\boldsymbol{(}x^{1},~{}a\sin(\theta),~{}b\cos(\theta)\boldsymbol{)}$,
where $\theta$ is the acute angle that the vector
$\boldsymbol{(}0,0,1\boldsymbol{)}_{\text{C}}$ makes with the vector
$\boldsymbol{(}0,\varphi(\theta),0\boldsymbol{)}$, and where
$\varphi(\theta)=(b^{2}\sin^{2}(\theta)+a^{2}\cos^{2}(\theta))^{\frac{1}{2}}$,
the capstan equation takes the following form,
$\displaystyle
T_{\text{elliptical}}(\theta)=T_{0}\exp\left(\mu_{F}\arctan\left(\frac{b}{a}\tan(\theta)\right)\right)~{}.$
(2)
Note that equation (2) assumes that the minimum applied-tension, $T_{0}$, is
applied at $\theta=0$, and the vector brackets
$\boldsymbol{(}\cdot\boldsymbol{)}$ implies that the vectors are in the
Euclidean space and $\boldsymbol{(}\cdot\boldsymbol{)}_{\text{C}}$ implies
that the vectors are in the curvilinear space.
Figure 1: Tension ratio against $\delta b$.
Equation (2) implies that the maximum applied-tension, $T_{\text{max}}$, is
dependent on the mean curvature of the rigid prism. To investigate this matter
further, assume that $T_{0}$ is applied at $\theta=-\frac{1}{4}\pi$ and
$T_{\text{max}}$ is applied at $\theta=\frac{1}{4}\pi$, and thus, equation (2)
implies that
$\displaystyle\delta\tau=\exp\left(2\mu_{F}\arctan\left(\frac{b}{a}\right)\right)~{},$
(3)
where $\delta\tau=T_{\text{max}}/T_{0}$, which is defined as the _tension
ratio_. As the reader can see, for a fixed contact interval and a fixed
coefficient of friction, equation (3) implies a non-constant tension ratio for
varying $\delta b$, where $\delta b=b/a$. As the mean curvature of the prism
is $H(\theta)=\frac{1}{2}ab(\varphi(\theta))^{-3}$, one can see that the
tension ratio is related to the mean curvature by the following relation,
$\displaystyle\delta\tau=\exp\left(2\mu_{F}\arctan\left[\max_{\theta\in[-\frac{1}{4}\pi,\frac{1}{4}\pi]}(2aH(\theta),1)+\min_{\theta\in[-\frac{1}{4}\pi,\frac{1}{4}\pi]}(2aH(\theta),1)-1\right]\right)~{}.$
Figure 1 presents a visualisation of the tension ratio against $\delta b$
(which is analogous to the mean curvature), which is calculated with
$\mu_{F}=\frac{1}{2}$ and $\delta b\in[\frac{1}{2},\frac{3}{2}]$, and it shows
that, for a fixed contact interval, as $\delta b$ increases (i.e. as the mean
curvature of the contact region increases), the tension ratio also increases.
This is an intuitive result: as the curvature of the contact region increases,
the normal reaction force on the membrane (or the string) also increases,
which in turn leads to a higher frictional force, and thus, a higher tension
ratio. Now, this is a fascinating result, as this effect cannot be observed
with the ordinary capstan equation (1).
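The trend in figure 1 can be reproduced in a few lines; the following sketch (our own) evaluates equation (3) with the same $\mu_{F}=\frac{1}{2}$ for a few values of $\delta b$:

```python
import math

def tension_ratio(mu_F, delta_b):
    """Tension ratio delta_tau = exp(2*mu_F*arctan(b/a)) from equation (3)."""
    return math.exp(2.0 * mu_F * math.atan(delta_b))

mu_F = 0.5
ratios = [tension_ratio(mu_F, db) for db in (0.5, 1.0, 1.5)]
# The ratio increases monotonically with delta_b; the delta_b = 1 case
# (a circular cross section) recovers exp(mu_F * pi / 2).
```

This makes the qualitative claim of figure 1 explicit: the ratio is strictly increasing in $\delta b$.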
For a rigorous study of belt friction models, we refer the reader to section
6.2 of Jayawardana [14], and for a rigorous study of the capstan equation
generalised to elastic obstacles, we refer the reader to Konyukhov’s [15], and
Konyukhov’s and Izi’s [16].
## 3 Cottenden _et al._ ’s Work
Cottenden _et al._ [2] (the principal author is D. J. Cottenden) attempt to
derive a mathematical model to analyse a frictionally coupled membrane
(defined as a nonwoven sheet) on an elastic foundation (defined as a
substrate) based on the research findings of D. J. Cottenden’s PhD thesis
[3]111https://discovery.ucl.ac.uk/id/eprint/1301772/. It is assumed that
friction (abrasion in the context of the publication) is the cause of some
pressure ulcers in largely immobile patients, and abrasion due to friction
contributes to the deterioration of skin health in incontinence pad users,
especially in the presence of liquid. The current literature shows very little
research in the area of frictional damage on skin due to fabrics, and thus,
the authors’ goal is to present a mathematical model to investigate this
phenomenon in a purely geometrical setting. Thus, the authors propose a model
for a general class of frictional interfaces, which includes those that obey
Amontons’ three laws of friction.
Cottenden _et al._ ’s method [2] for calculating the kinetic frictional force
induced on human skin due to nonwoven fabrics is as follows. The human body
part in question is modelled as a homogeneous isotropic ‘convex surface
[_sic_]’ [2] (i.e. the substrate) and the nonwoven fabric is modelled as an
isotropic membrane (i.e. the nonwoven sheet). The goal is to find the stresses
acting on the nonwoven sheet, including determining the friction acting on the
substrate. The contact region between the fabric and the skin is defined as
‘An _Instantaneous Isotropic Interface_ , [which] is an interface composed of
a pair of surfaces which have no intrinsically preferred directions and no
directional memory effects, so that the frictional force acts in the opposite
direction to the _current_ relative velocity vector … or to the sum of
_current_ applied forces acting to initiate motion …’ (see section 2.2 of
Cottenden _et al._ [2]): this simply implies that the contact region is
isotropic and nothing more. Also, consider the contact body in question: it is
modelled as a surface, i.e. a two-dimensional manifold. However, in reality,
it must be modelled as a three-dimensional object as a two-dimensional object
cannot describe the elastic properties of a fully three-dimensional object
such as a human body part, as a two-dimensional surface has measure zero in
three-dimensional space (for an introduction on measure theory, please consult
chapter 1 of Kolmogorov _et al._ [17]), unless suitable assumptions are made,
as one finds in shell theory (for a comprehensive mathematical study of shell
theory, please consult sections B of Ciarlet [18]); however, this is not
what the authors are considering. Now, consider the authors’ statement
regarding the modelling assumptions carefully, particularly the term ‘convex
surface’. The authors’ definition of convexity is
$\boldsymbol{\eta}\cdot\boldsymbol{\nabla}_{\\!\text{E}}\hat{\textbf{N}}\cdot\boldsymbol{\eta}\geq
0$ (see section 3.1 of Cottenden _et al._ [2]), where $\hat{\textbf{N}}$ and
$\boldsymbol{\eta}$ are the unit normal and tangential vectors of the surface,
respectively. However, the authors’ definition is erroneous. Convexity has a
very precise mathematical definition, i.e. we say the functional
$f:X\to\mathbb{R}$ is convex, if $f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)$,
$\forall~{}t\in[0,1]$ and $\forall~{}x,y\in X$ (for more on the definition of
convexity, please see chapter 1 of Badiale and Serra [19]). Also, the very
idea of a convex surface is nonsensical, as the definition of convexity is only
applicable to functionals. A simple example of a convex functional is
$\exp(\cdot):\mathbb{R}\to\mathbb{R}_{>0}$. One is left to assume that what
the authors mean by convexity is surfaces (i.e. manifolds) with a positive
mean-curvature. For more on elementary differential geometry, please consult
do Carmo [20] or Lipschutz [21].
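The definition above is straightforward to probe numerically; the following sketch (our own illustration) tests the inequality $f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)$ on a grid of sample points, confirming it for $\exp(\cdot)$ and finding a violation for a non-convex function such as $\sin(\cdot)$:

```python
import math

def is_convex_on_samples(f, xs, ts, tol=1e-9):
    """Check f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) on sample points."""
    return all(f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + tol
               for x in xs for y in xs for t in ts)

xs = [0.5 * i for i in range(-10, 11)]   # grid on [-5, 5]
ts = [0.1 * i for i in range(11)]        # t in [0, 1]

assert is_convex_on_samples(math.exp, xs, ts)       # exp is convex
assert not is_convex_on_samples(math.sin, xs, ts)   # sin is not
```

A sampled check of this kind can only refute convexity, never prove it, but it illustrates the definition concretely.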
In their analysis, the authors define a membrane with the following property,
‘always drapes, following the substrate surface without tearing or puckering’
(see section 2.1 of [2]). The authors’ definition is erroneous, as one cannot
guarantee that the given property will hold for a flat-membrane (in a
Euclidean sense) over an arbitrary curved surface. To illustrate the flaw,
consider a flat elastic-membrane (i.e. a film) over a rigid sphere. The only
way one can keep the membrane perfectly in contact with the sphere in a two-
dimensional region with nonzero measure is by deforming the membrane by
applying appropriate boundary stresses and/or external loadings. Otherwise,
the membrane only makes contact with the sphere at a single point or a line.
Also, the authors do not specify whether the membrane is elastic or not. One
is left to assume that the membrane is elastic as the proposed framework does
not acknowledge plastic deformations. Note that the authors never referred to
their nonwoven sheet as a membrane, but a membrane (or a film) is the closest
mathematical definition for modelling such objects. Please consult chapters 4
and 5 of Ciarlet [18] or chapter 7 of Libai and Simmonds [22] for a
comprehensive analysis on the theory of membranes.
To find the stresses acting on the membrane, consider Cauchy’s momentum
equation in the Euclidean space, which the authors define as follows,
$\displaystyle\boldsymbol{\nabla}_{\\!\text{E}}\cdot\textbf{T}+\textbf{f}=\rho\ddot{\boldsymbol{\chi}}~{},$
(4)
where T is Cauchy’s stress tensor,
$\textbf{f}=\textbf{f}(\textbf{T},\boldsymbol{\nabla}_{\\!\text{E}}\textbf{T})$
is the force density field and $\rho$ is the material mass density of the
membrane, $\boldsymbol{\nabla}_{\\!\text{E}}$ is the Euclidean differential
operator and $\boldsymbol{\chi}$ is given as a ‘time-dependent deformation
function mapping the positions of points in their undeformed _reference_
configuration to their deformed positions and the superposed double dot
denotes a double material description time derivative’ (see section 2.1 of
Cottenden _et al._ [2]). It is unclear what $\boldsymbol{\chi}$ represents from
the authors’ definition, whether it is the displacement field of the membrane
or some time-dependent mapping from one manifold to another. If the latter is
true, then equation (4) has a very different meaning, i.e. it means that the
space is dependent on time, and such problems are encountered in the field of
cosmology. If the reader consults section 5.4 of Cottenden’s thesis [3], then
it becomes evident that $\boldsymbol{\chi}$ is, indeed, a time-dependent map.
However, if one consults Cottenden _et al._ [1, 2] and Cottenden [3], then one
concludes that the authors do not put forward the framework to handle the 3+1
decomposition in general relativity, with any mathematical rigour. If the
reader is interested in the 3+1 formalism in general relativity, then please
consult the publications [23, 24, 25] or Dr J. A. V. Kroon’s (QMUL) LTCC
lecture notes on _Problems of General Relativity_ 222
http://www.maths.qmul.ac.uk/$\sim$jav/LTCC.htm, where the reader can find an
extraordinary solution for two merging black-holes (i.e. the Brill-Lindquist
solution).
Assuming that the foundation is static and rigid, and the mass density of the
membrane is negligible, i.e. $\rho\approx 0$, the authors state that Cauchy’s
momentum equation (4) can be expressed as
$\displaystyle\textbf{P}_{s}\cdot(\boldsymbol{\nabla}_{\\!\text{E}}\cdot\textbf{T})+\textbf{P}_{s}\cdot\textbf{f}$
$\displaystyle=\boldsymbol{0}~{},$ (5)
$\displaystyle-(\boldsymbol{\nabla}_{\\!\text{E}}\hat{\textbf{N}}):\textbf{T}+\hat{\textbf{N}}\cdot\textbf{f}$
$\displaystyle=0~{},$ (6)
where $\textbf{P}_{s}$ is the projection matrix onto the substrate (the explicit form
is not defined by the authors), $\hat{\textbf{N}}$ is the unit normal to the
surface, and $\cdot$ and $:$ are defined as a contraction and a double
contraction in the Euclidean space respectively. Although it is not explicitly
defined, one must assume that the authors use the fact that membranes cannot
support any normal stresses, i.e.
$\hat{\textbf{N}}\cdot\textbf{T}=\boldsymbol{0}$, to obtain equation (6). The
authors give equations (5) and (6) as the statement of the ‘general case’ of the
problem. However, their assertion cannot hold as the system is
underdetermined. Consider the vector f, which consists of three unknowns.
Also, consider the tensor $\textbf{T},$ which is a symmetric tensor with six
unknowns. Thus, using the condition
$\hat{\textbf{N}}\cdot\textbf{T}=\boldsymbol{0}$, the number of unknowns can be
reduced by three, leaving six remaining unknowns (three in T and three in
f). Now, direct one’s attention
to equations (5) and (6), which provide three additional equations. Thus, one
comes to the conclusion that one has an underdetermined system, with three
equations and six unknowns. Furthermore, there is no description of any
boundary conditions for the ‘general case’, which are essential in obtaining a
unique solution.
The derivation of Cottenden _et al._ ’s governing equations [3] can be found
in sections 2.2 to 2.4 of the publication. Upon examination, the reader may
find that the methods put forward by the authors are inconsistent with
mathematical elasticity, contact mechanics and differential geometry. Thus, we
refer the reader to Kikuchi and Oden [26] to see how to model friction with
mathematical competence and to show how incredibly difficult modelling such
problems are. We further refer the reader to Ciarlet [18] to see how to model
mathematical elasticity in a differential geometry setting with mathematical
rigour.
To find explicit solutions, the authors direct their attention to only
‘surfaces that are isomorphic to the plane; that is, those which have the same first
fundamental form as the plane; the identity matrix in the case of plane
Cartesian coordinates. [_sic_]’ [2]. Found in section 4.1 of Cottenden _et
al._ [2], this is the basis for their entire publication (also the basis for
Cottenden’s [3] thesis). However, the authors’ statement is nonsensical. An
_isomorphism_ (preserves form) is at least a homeomorphism, i.e. there exists
at least a continuous bijective mapping, whose inverse is also continuous,
between the two manifolds in question [20, 21]. Thus, a surface that is
isomorphic to the plane simply implies that there exists a continuous
bijective map, with a continuous inverse, between the surface in question and
the Euclidean plane, and it does not automatically guarantee that the surface
has the same metric as the Euclidean plane under the given map. The latter
part of the authors’ statement is clearly describing surfaces that are
_isometric_ (preserves distance) to the Euclidean plane, i.e. surfaces with a
zero-Gaussian curvature. However, the statement is still erroneous as being a
surface that is isometric to the Euclidean plane does not guarantee that the
surface has the same metric as the Euclidean plane. Being isometric to the
Euclidean plane simply implies that, if $f:U\subset\mathbb{R}^{2}\to
W\subset\textbf{E}^{3}$ parametrises a 2D manifold that is isometric to the Euclidean
plane, then there exists a map $g:V\subset\mathbb{R}^{2}\to
U\subset\mathbb{R}^{2}$ such that the first fundamental form of the isometry
$f\circ g:V\subset\mathbb{R}^{2}\to W\subset\textbf{E}^{3}$ is the $2\times 2$
identity matrix [20, 21]. One is left to assume that the surfaces in
question belong to a subclass of surfaces with zero-Gaussian
curvature that have the same metric as the Euclidean plane with respect to the
authors’ coordinate system, i.e. cylinders and right-circular cones: as one
will later see, these are the only possible manifolds that can generate any
valid solutions. Note that Cottenden [3] accredits Pressley [27] for his
differential geometry results. However, Pressley’s publication [27] is a
widely accepted and verified mathematical publication in the field of
differential geometry, and it does not contain such provably false statements
as given by Cottenden [3].
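The distinction can be made concrete with a symbolic computation; the sketch below (our own, using SymPy, with our own choice of parametrisations) shows that an arc-length parametrised cylinder carries the identity first fundamental form, while the unit sphere, which is not isometric to the plane, does not:

```python
import sympy as sp

x, y, c = sp.symbols('x y c', real=True, positive=True)

def first_fundamental_form(sigma, u, v):
    """First fundamental form of the parametrised surface sigma(u, v)."""
    su = sp.Matrix([sp.diff(s, u) for s in sigma])
    sv = sp.Matrix([sp.diff(s, v) for s in sigma])
    return sp.simplify(sp.Matrix([[su.dot(su), su.dot(sv)],
                                  [su.dot(sv), sv.dot(sv)]]))

# Arc-length parametrised cylinder of radius c: the metric is the 2x2
# identity, so it shares the Euclidean plane's metric in these coordinates.
cylinder = (c * sp.cos(y / c), c * sp.sin(y / c), x)
assert first_fundamental_form(cylinder, x, y) == sp.eye(2)

# Unit sphere: the metric is not the identity (and cannot be made so by
# any reparametrisation, since the Gaussian curvature is nonzero).
sphere = (sp.cos(x) * sp.cos(y), sp.cos(x) * sp.sin(y), sp.sin(x))
assert first_fundamental_form(sphere, x, y) != sp.eye(2)
```

The sphere's failure here is intrinsic, whereas any developable surface could in principle be reparametrised to pass the cylinder's test.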
Now, consider the equation
$\displaystyle\textbf{P}_{s}\cdot\textbf{f}+\mu_{d}(\hat{\textbf{N}}\cdot\textbf{f})\dot{\boldsymbol{\chi}}=0~{},$
(7)
which the authors define as Amontons’ law of friction, where $\mu_{d}$ is the
coefficient of dynamic friction and $\dot{\boldsymbol{\chi}}$ is the relative
velocity vector between the membrane and the foundation. The inclusion of the
two equations implied by condition (7) still does not guarantee that the
system is fully determined, as the system still requires one more equation.
Now, assume that one is dealing with a rectangular membrane whose orthogonal
axes are defined by the coordinates $\boldsymbol{(}x,y\boldsymbol{)}$, where $y$
defines the longer dimension, that is placed over a surface defined by the
regular map $\boldsymbol{\sigma}$. Also, assume that the Poisson’s ratio of the
membrane is zero, to prevent any lateral contractions due to positive tensile
strain. To reduce the complexity, the authors modify the problem by letting
$\dot{\boldsymbol{\chi}}$ be parallel to $\boldsymbol{\sigma}_{\\!,y}$. Also,
a boundary stress of $T_{0}$ is applied at some point $\phi_{1}$, whilst an
even greater stress is applied at $\phi_{2}$, so that $T_{yy}(y)$ is an
increasing function in $y$, where $\phi_{1}$ and $\phi_{2}$ are angles of
contact such that $\phi_{1}<\phi_{2}$. Due to the zero-Poisson’s ratio and the
boundary conditions, one finds $T_{xx}=0$, $T_{xy}=0$, where $T_{ij}$ are
stress tensor components. Thus, the governing equations finally reduce to a
fully determined system, i.e. only now does the number of unknowns equal the
number of governing equations. Therefore, one must understand that having
zero-Gaussian curvature and zero-Poisson’s ratio is a necessity for this
model, and not merely a useful tool for deriving explicit equations, as
stated by the authors. Upon integrating equation (7), under the specified
boundary conditions, one finds solutions of the following form (see equation
4.4 of Cottenden _et al._ [2]),
$\displaystyle
T_{yy}(y)=T_{0}\exp\left(\mu_{d}\int^{y}|C_{yy}(\eta)|~{}d\eta\right)~{},$ (8)
where
$C_{\alpha\beta}=F_{I\alpha\gamma}^{-1}F_{II\gamma\delta}F_{I\beta\delta}^{-1}$
is defined as the curvature tensor, ${\boldsymbol{F}}_{\\!I}$ is the first
fundamental form tensor and $F_{IIyy}$ is the only nonzero component of the
second fundamental form tensor. However, equation (8) is erroneous. This is
because, whatever is inside the $\exp(\cdot)$ term must be non-dimensional,
but this is not the case with equation (8). To illustrate this flaw, let $L$
be an intrinsic Euclidean length scale and $\ell$ be an intrinsic length scale
of the curvilinear coordinate $y$. Now, with the definition of $C_{yy}$ (see
equation 3.3 of Cottenden _et al._ [2]), one finds the length scale
inside the $\exp(\cdot)$ term in equation (8) to be ${(\ell/L)}^{3}$. Given
that $y=\theta$ (i.e. the contact angle, which is dimensionless), one finds
the length scale inside the $\exp(\cdot)$ term to be $L^{-3}$, which is
not mathematically viable. This flaw of the authors’ work is a result of not
correctly following tensor contraction rules, and not discerning between
covariant and contravariant tensors (see sections 3.2 and 4.1 of Cottenden _et
al._ [2]). For elementary tensor calculus, please consult Kay [28].
If one assumes a 2D manifold that is _isometric_ to the Euclidean plane, has a
positive mean-curvature (with respect to the unit-outward normal) and whose
first fundamental form tensor is diagonal (after a change of coordinates or
otherwise), and should one follow the derivation with mathematical rigour,
then one finds a solution of the following form,
$\displaystyle
T_{y}^{y}(y)=T_{0}\exp\left(-\mu_{d}\int^{y}\sqrt{F_{Iyy}(\eta)}F_{IIy}^{~{}~{}y}(\eta)~{}d\eta\right)~{},$
(9)
where a rigorous derivation can be found in chapter 2 of Jayawardana [14]. As
the reader can see, unlike equation (8), no dimensional violations are
possible with equation (9).
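As a sanity check (our own, using SymPy, with our own choice of parametrisation), one can verify that on a cylinder of radius $c$, parametrised so that $y$ is the contact angle, the integrand of equation (9) has unit magnitude, so that integrating over a contact angle $\theta$ recovers the ordinary capstan equation $T=T_{0}\exp(\mu_{d}\theta)$. Sign conventions for the second fundamental form depend on the orientation of the normal, so only magnitudes are compared:

```python
import sympy as sp

x, y, c = sp.symbols('x y c', positive=True)

# Cylinder of radius c, with y the contact angle.
sigma = sp.Matrix([x, c * sp.sin(y), c * sp.cos(y)])
s_x, s_y = sigma.diff(x), sigma.diff(y)

F_Iyy = sp.simplify(s_y.dot(s_y))                  # metric component: c**2
N = s_x.cross(s_y)
N_hat = N / sp.sqrt(sp.simplify(N.dot(N)))         # unit normal
F_IIyy = sp.simplify(sigma.diff(y, 2).dot(N_hat))  # second fundamental form

# Integrand of equation (9); its sign depends on the orientation of the
# chosen normal, so compare magnitudes only.
integrand = sp.sqrt(F_Iyy) * F_IIyy / F_Iyy
assert sp.Abs(sp.simplify(integrand)) == 1
```

The factor of $c$ from $\sqrt{F_{Iyy}}$ cancels against the $1/c$ in the mixed second-fundamental-form component, which is why the capstan result is radius-independent.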
To find the explicit solution for the general _prism_ case, the authors
present the following map (see section 4.2 of Cottenden _et al._ [2])
$\displaystyle\boldsymbol{\sigma}(x,y)$
$\displaystyle=\boldsymbol{(}R(\phi)\cos(\phi),~{}R(\phi)\sin(\phi),~{}x\cos(\zeta)+y\sin(\zeta)\boldsymbol{)}~{},$
(10)
where
$\displaystyle d\phi$
$\displaystyle=\frac{\cos(\zeta)dy-\sin(\zeta)dx}{\sqrt{R(\phi)^{2}+(R^{\prime}(\phi))^{2}}}~{}.$
From the authors’ definition, $\zeta$ appears to be the acute angle that the
vector $\hat{\boldsymbol{\sigma}}_{\\!,y}$ makes with the vector
$\hat{\boldsymbol{\sigma}}_{\\!,\phi}$, and $R$ and $\phi$ appear to be the
radius of curvature and the angle of the centre of rotation respectively. One
can clearly see that map (10) is only valid for cylinders (i.e. prisms with a
constant radius) as it must have the same metric as the Euclidean plane. To be
precise,
$\displaystyle\boldsymbol{F}_{I}=$
$\displaystyle\begin{pmatrix}{\boldsymbol{\sigma}}_{\\!,x}\cdot{\boldsymbol{\sigma}}_{\\!,x}&{\boldsymbol{\sigma}}_{\\!,x}\cdot{\boldsymbol{\sigma}}_{\\!,y}\\\
{\boldsymbol{\sigma}}_{\\!,x}\cdot{\boldsymbol{\sigma}}_{\\!,y}&{\boldsymbol{\sigma}}_{\\!,y}\cdot{\boldsymbol{\sigma}}_{\\!,y}\end{pmatrix}$
$\displaystyle=$
$\displaystyle[R(\phi)^{2}+(R^{\prime}(\phi))^{2}]\begin{pmatrix}(\phi_{,x})^{2}+\cos^{2}(\zeta)&\phi_{,x}\phi_{,y}+\sin(\zeta)\cos(\zeta)\\\
\phi_{,x}\phi_{,y}+\sin(\zeta)\cos(\zeta)&(\phi_{,y})^{2}+\sin^{2}(\zeta)\end{pmatrix}$
$\displaystyle=$ $\displaystyle\begin{pmatrix}1&0\\\ 0&1\end{pmatrix}~{},$
which, in turn, implies the following,
$\displaystyle\phi_{,x}$
$\displaystyle=-\frac{\sin(\zeta)}{\sqrt{R(\phi)^{2}+(R^{\prime}(\phi))^{2}}}~{},$
$\displaystyle\phi_{,y}$
$\displaystyle=\frac{\cos(\zeta)}{\sqrt{R(\phi)^{2}+(R^{\prime}(\phi))^{2}}}~{},$
and thus, the following,
$\displaystyle(\phi_{,x})^{2}+(\phi_{,y})^{2}$
$\displaystyle=\frac{1}{R(\phi)^{2}+(R^{\prime}(\phi))^{2}}~{}.$ (11)
As $\phi$ is a real function (i.e. not complex) and noting that equation (11)
must hold for all $x$ and $y$, one finds that $R^{\prime}=0$, and thus,
equation (10) reduces to the following,
$\displaystyle\boldsymbol{\sigma}(x,y)$
$\displaystyle=\boldsymbol{(}c\cos(\phi),~{}c\sin(\phi),~{}x\cos(\zeta)+y\sin(\zeta)\boldsymbol{)}~{},$
$\displaystyle\phi(x,y)$
$\displaystyle=\frac{1}{c}[\cos(\zeta)y-\sin(\zeta)x]~{},$
which represents a parametrisation of a cylinder, where $c$ is a positive
constant (and $R=c$). Now, given that a solution exists in the interval
$[\phi_{1},\phi_{2}]$, the authors state that the solution can be expressed as
follows (see equation 4.7 of Cottenden _et al._ [2]),
$\displaystyle
T_{yy}(\phi_{2})=T_{0}\exp\left(\mu_{d}\cos(\zeta)\left[\phi-\arctan\left(\frac{R(\phi)_{,\phi}}{R(\phi)}\right)\right]\big{|}^{\phi_{2}}_{\phi_{1}}\right)~{}.$
(12)
Figure 2: Cross sections of elliptical-prisms (and elliptical-cones for the
$z=1$ case).
Regardless of the authors’ claims, solution (12) is only valid for cylinders,
i.e. the capstan equation (1). To see why equations (10) and (12) are not
valid for general prisms, one only needs to consider an example with
noncircular cross section. If the reader wishes to, then consider a
positively-oriented elliptical-prism (for the $\zeta=0$ case) that is defined
by the map
$\displaystyle\boldsymbol{\sigma}(\phi,z)=\boldsymbol{(}a\cos(\phi),~{}b\sin(\phi),~{}z\boldsymbol{)}~{},$
(13)
where $z\in\mathbb{R}$ and $a,b>0$, and let
$\phi\in[\frac{1}{4}\pi,\frac{3}{4}\pi]$ be the contact interval (see figure
2, and see equation (2) for the capstan equation for an elliptical-prism).
Now, the reader can see that both map (10) and solution (12) are no longer
valid for the elliptical-prism mapping (13).
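This can be checked symbolically; the sketch below (our own, using SymPy) shows that the metric induced by map (13) has $F_{I\phi\phi}=a^{2}\sin^{2}(\phi)+b^{2}\cos^{2}(\phi)$, which is the constant $1$ only in the circular case, so map (10), which presumes the identity metric in the given coordinates, cannot describe map (13) as written:

```python
import sympy as sp

phi, z, a, b = sp.symbols('phi z a b', positive=True)

# Elliptical-prism map (13).
sigma = sp.Matrix([a * sp.cos(phi), b * sp.sin(phi), z])
s_phi = sigma.diff(phi)

G = sp.simplify(s_phi.dot(s_phi))   # a**2*sin(phi)**2 + b**2*cos(phi)**2

# Circular case (a = b = 1): the phi-phi metric component is 1, as map
# (10) requires; elliptical case (a = 1, b = 2): it is not.
assert sp.simplify(G.subs({a: 1, b: 1})) == 1
assert sp.simplify(G.subs({a: 1, b: 2}) - 1) != 0
```

(The elliptical prism is still developable, so it admits some isometric parametrisation; the point is that the specific form of map (10) does not provide one.)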
To find the explicit solution for the _cone_ case, the authors present the
following map (see equation 4.8 of Cottenden _et al._ [2]),
$\displaystyle\boldsymbol{\sigma}(x,y)$
$\displaystyle=\frac{r}{\sqrt{1+R(\phi(\theta))^{2}}}\boldsymbol{(}R(\phi(\theta))\cos(\phi(\theta)),~{}R(\phi(\theta))\sin(\phi(\theta)),~{}1\boldsymbol{)}~{},$
(14)
where
$\displaystyle d\theta$
$\displaystyle=\frac{\sqrt{R^{2}+(R^{\prime})^{2}+R^{4}}}{1+R^{2}}d\phi=\frac{R\sqrt{1+R^{2}}}{\sqrt{(1+R^{2})^{2}-(R_{,\theta})^{2}}}d\phi~{},$
$\displaystyle r$ $\displaystyle=\sqrt{x^{2}+y^{2}}~{}~{}\text{and}$
$\displaystyle\theta$ $\displaystyle=\arctan(\frac{y}{x})-\zeta~{}.$
From the authors’ definition, $\zeta$ appears to be the acute angle that the
vector $\hat{\boldsymbol{\sigma}}_{\\!,y}$ makes with the vector
$\hat{\boldsymbol{\sigma}}_{\\!,\phi}$, $R$ is given as a ‘cylindrical polar
function’ and $\phi$ appears to be the angle of the centre of rotation. One
can clearly see that map (14) is only valid for right-circular cones (i.e.
cones with a circular cross section) as it must have the same metric as the
Euclidean plane. To be precise
$\displaystyle\boldsymbol{F}_{I}=$
$\displaystyle\begin{pmatrix}{\boldsymbol{\sigma}}_{\\!,x}\cdot{\boldsymbol{\sigma}}_{\\!,x}&{\boldsymbol{\sigma}}_{\\!,x}\cdot{\boldsymbol{\sigma}}_{\\!,y}\\\
{\boldsymbol{\sigma}}_{\\!,x}\cdot{\boldsymbol{\sigma}}_{\\!,y}&{\boldsymbol{\sigma}}_{\\!,y}\cdot{\boldsymbol{\sigma}}_{\\!,y}\end{pmatrix}$
$\displaystyle=$
$\displaystyle\begin{pmatrix}1&\frac{(1-R)}{1+R^{2}}(\phi^{\prime}R^{\prime})^{2}\\\
\frac{(1-R)}{1+R^{2}}(\phi^{\prime}R^{\prime})^{2}&\frac{(\phi^{\prime}R^{\prime})^{2}(1-2R)}{(1+R^{2})^{2}}+\frac{(\phi^{\prime})^{2}[R^{2}+(R^{\prime})^{2}]}{1+R^{2}}\end{pmatrix}$
$\displaystyle=$ $\displaystyle\begin{pmatrix}1&0\\\ 0&1\end{pmatrix}~{},$
which, in turn, implies that $R^{\prime}=0$ (as $\phi^{\prime}\neq 0$). Thus,
equation (14) reduces to the following form,
$\displaystyle\boldsymbol{\sigma}(x,y)$
$\displaystyle=r\boldsymbol{(}\sin(\alpha)\cos(\phi(\theta)),~{}\sin(\alpha)\sin(\phi(\theta)),~{}\cos(\alpha)\boldsymbol{)}~{},$
$\displaystyle\phi(\theta)$
$\displaystyle=\operatorname{cosec}(\alpha)\theta~{},$
which represents a parametrisation of a right-circular cone, where $2\alpha$
is the (constant-) angle of aperture. Now, given that a solution exists in the
interval $[\theta_{1},\theta_{2}]$, the authors state that the solution can
be expressed as follows (see equation 4.20 of Cottenden _et al._ [2]),
$\displaystyle
T_{yy}(\phi_{2})=T_{0}\exp\left(\frac{\mu_{d}}{R(\phi(\theta))}\sin\left(\theta+\zeta\right)\big{|}^{\theta_{2}}_{\theta_{1}}\right)~{}.$
(15)
Regardless of the authors’ claims, solution (15) is only valid for right-
circular cones, i.e. valid for $R=\tan(\alpha)$ where $2\alpha$ is the
(constant-) angle of aperture. To see why equations (14) and (15) are invalid
for a general cone, one only needs to consider an example with noncircular
cross section. If the reader wishes to, then consider a positively-oriented
elliptical-cone (for the $\zeta=0$ case) that is defined by the following map,
$\displaystyle\boldsymbol{\sigma}(\phi,z)=\boldsymbol{(}az\cos(\phi),~{}bz\sin(\phi),~{}z\boldsymbol{)}~{},$
(16)
where $z\in\mathbb{R}_{>0}$ and $a,b>0$, and let
$\phi\in[\frac{1}{4}\pi,\frac{3}{4}\pi]$ be the contact interval (see figure 2
and consider the $z=1$ case). Now, the reader can see that both map (14) and
solution (15) are no longer valid for the elliptical-cone mapping (16).
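Analogously to the prism case, a quick symbolic check (our own, using SymPy) shows that the coordinates of the elliptical-cone map (16) are not even orthogonal unless $a=b$, so a map of the form (14), which presumes the planar metric, cannot describe it:

```python
import sympy as sp

phi, z, a, b = sp.symbols('phi z a b', positive=True)

# Elliptical-cone map (16).
sigma = sp.Matrix([a * z * sp.cos(phi), b * z * sp.sin(phi), z])
s_phi, s_z = sigma.diff(phi), sigma.diff(z)

# Off-diagonal metric component: (b**2 - a**2)*z*sin(phi)*cos(phi).
F = sp.simplify(s_phi.dot(s_z))
assert sp.simplify(F.subs(b, a)) == 0            # circular: orthogonal
assert sp.simplify(F.subs({a: 1, b: 2})) != 0    # elliptical: not orthogonal
```

A non-vanishing off-diagonal metric component already rules out the identity first fundamental form in these coordinates.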
The authors conclude by stating that their experimental results agreed almost
perfectly with equation (12) for the cylinder case. One expects the
solution to agree with the experimental data for the cylinder case, as the
solution is only valid for that case. The authors further state that
‘Experimental data gathered on [right-circular] cones constructed from plaster
of Paris and Neoprene … with half-angles ranging up to $12^{\circ}$ and
contact angles in the range $[70^{\circ},120^{\circ}]$ show good agreement
with the simple cylindrical model at their error level (around $\pm 10\%$ for
most samples)’ [2]. Again, one expects this to be the case, as solution (15) is
only valid for right-circular cones. Also, it is given by the authors that in
the limit $R\rightarrow 0$, the cone case is asymptotically equal to the
cylinder case. This is just a trivial mathematical result that follows
directly from the Maclaurin series, i.e. $\sin(\theta)\approx\theta$, when
$\theta\approx 0$.
In Cottenden _et al._ ’s publication [2], the authors fail to demonstrate
sufficient knowledge of differential geometry, mathematical
elasticity and contact mechanics to tackle this problem with any mathematical
rigour, and this is evident in D. J. Cottenden’s thesis [3] as the publication
Cottenden _et al._ [2] is a summary of all the mathematical results from
Cottenden’s thesis [3]. For example, in section 2.15 of the thesis, the
compatibility conditions for the left Cauchy-Green deformation tensor are given
as a general condition (see page 8 of Cottenden [3]). However, there exists no
general compatibility condition for the left Cauchy-Green deformation tensor,
and the given compatibility conditions exist for the two-dimensional case only
[29]. Another example is that the entire section 5.4 of the thesis (see pages
132 to 137 of Cottenden [3]) is based on the assumption that one can invert a
$3\times 2$ matrix (see equation 5.15 of Cottenden [3]), i.e. given a
sufficiently differentiable 2D manifold
$\boldsymbol{\lambda}:\mathbb{R}^{2}\to\textbf{E}^{3}$ (e.g. a map of a
cylinder), the author asserts that the Jacobian matrix,
$\boldsymbol{(}\partial_{\beta}\lambda^{j}\boldsymbol{)}_{3\times 2}$ (where
$\beta\in\\{1,2\\}$ and $j\in\\{1,2,3\\}$), is invertible: note that the
author is considering regular inverse and not one-sided inverse. Should the
reader consult section 5.4.1 of Cottenden [3], it is evident that the author
failed to understand the difference between the inverse of a bijective mapping
and a preimage (which need not be bijective), as both definitions are expressed
with the same mathematical notation, $\boldsymbol{\lambda}^{-1}$, in
Pressley’s publication [27]: recall that Cottenden [3] accredits Pressley [27]
for his differential geometry results. This misunderstanding of Pressley’s
work [27] leads to a substantial part of Cottenden’s work [3] being incorrect,
as Section 5.4 of Cottenden’s thesis [3] is based on Cottenden’s assumption
[3] that a $3\times 2$ matrix is invertible (where the author is considering
regular inverse).
In a different publication, a precursor to the one we discussed, Cottenden _et
al._ [1] present an experimental framework to calculate the coefficient of
friction, based on the master’s thesis of Skevos-Evangelos Sp. Karavokyros333
_Validating a mathematical model with a simple laboratory model_. MSc. UCL.
2017.. The authors state that ‘The model generalizes the common assumption of
a cylindrical arm to any convex prism’ [1]. Coefficients of friction are
determined from experiments conducted on Neoprene-coated Plaster of Paris
prisms of circular and elliptical cross-sections (defined as arm phantoms) and
a nonwoven fabric. The authors state that experimental results agreed within $\pm
8\%$, i.e. $16\%$. They also state that the coefficients of friction varied
very little with the applied weight, geometry and contact angle. Thus, the
authors conclude by asserting that accurate values of the coefficient of
friction can be obtained by applying the cylindrical model (i.e. the capstan
equation) to the experimental data on human arms. They further assert that the
coefficient of friction is independent of the substrate geometry, applied
weights and contact angles, and claim that both their mathematical model and
experimental results are in complete agreement.
Unfortunately, none of Cottenden _et al._ ’s mathematical results [1] can be
mathematically justified, mostly for the reasons that we described before. For
example, the reader may try to derive an arc-length of an ellipse with the use
of the definition of an arc length from section 2.4 of the publication:
although, the formula $\text{d}[\text{arc
length}]=\sqrt{R(\theta)^{2}+\left(\frac{\text{d}R(\theta)}{\text{d}\theta}\right)^{2}}\,\text{d}\theta$
holds true when calculating an arc-length of a curve (which can be derived
with simple differential geometry techniques), the term $\text{d}[\text{arc
length}]^{2}=(R\text{d}\theta)^{2}+\text{d}R^{2}$ (see directly above equation
12 of Cottenden _et al._ [1]) does not imply the former equation nor does it
have any mathematical context. Another example is the equation
$1/\tan(0.5\pi)=0$, which is from the latter part of section 4.1 of the
publication (see directly below equation 17 of Cottenden _et al._ [1]).
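To make the point concrete, the polar arc-length formula $\text{d}s=\sqrt{R(\theta)^{2}+(\text{d}R/\text{d}\theta)^{2}}\,\text{d}\theta$ can be verified numerically for an ellipse against the standard parametric computation. This is a minimal sketch; the semi-axes $a=2$, $b=1$ are arbitrary illustrative choices, not values from the publication:

```python
import math

def ellipse_perimeter_polar(a, b, n=50000):
    """Perimeter from the polar arc-length formula
    ds = sqrt(R(theta)^2 + (dR/dtheta)^2) dtheta, midpoint rule."""
    def R(t):
        return a * b / math.sqrt((b * math.cos(t)) ** 2 + (a * math.sin(t)) ** 2)
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        dR = (R(t + 1e-6) - R(t - 1e-6)) / 2e-6  # central difference
        total += math.sqrt(R(t) ** 2 + dR ** 2) * h
    return total

def ellipse_perimeter_parametric(a, b, n=50000):
    """Reference value from x = a cos t, y = b sin t."""
    h = 2.0 * math.pi / n
    return sum(math.sqrt((a * math.sin((i + 0.5) * h)) ** 2
                         + (b * math.cos((i + 0.5) * h)) ** 2) * h
               for i in range(n))

print(ellipse_perimeter_polar(2.0, 1.0))       # ≈ 9.68845
print(ellipse_perimeter_parametric(2.0, 1.0))  # ≈ 9.68845
```

Both computations agree to several decimal places, which they would not if the derivative term were left unsquared.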
Figure 3: Coefficient of friction against applied mass in grams: (a)
Cylindrical body; (b) Elliptical prism with horizontal major axis; (c)
Elliptical prism with vertical major axis; (d) Elliptical prism with major
axis making $+135^{\circ}$ to the horizontal [1].
As for the experimental results, consider figure 3, which shows the
coefficients of friction in relation to different geometries, applied weights,
and contact angles (see figure 11 of Cottenden _et al._ [1]). One can see that
there are clear discrepancies between the calculated coefficients of friction,
as they vary between different geometries, weights, and contact angles. The
authors only acknowledge the dependence of the coefficient of friction on the
applied weight (see section 4 of Cottenden _et
al._ [2]), but hastily dismiss this effect by asserting that ‘… the increase
[coefficient of friction relative to the applied weight] is small compared to
the scatter in the data’ [1].
Mass (g) | 1st | 2nd | 3rd | 4th | 5th | Mean
---|---|---|---|---|---|---
$10$ | $16.0g$ | $15.0g$ | $15.0g$ | $15.0g$ | $16.0g$ | $15.6g$
$30$ | $51.0g$ | $54.0g$ | $51.0g$ | $50.0g$ | $51.0g$ | $51.4g$
$50$ | $88.0g$ | $87.0g$ | $89.0g$ | $87.0g$ | $90.0g$ | $88.2g$
$70$ | $125g$ | $124g$ | $128g$ | $122g$ | $124g$ | $125g$
Table 1: Tensometer readings (tension in units of $10^{-3}$ N, quoted as
multiples of $g$): Cylinder with $\frac{127}{360}\pi$ contact angle, where $g$
is the acceleration due to gravity.
If one consults Karavokyros’ master’s thesis for the experimental data, then
one can find the raw data for the cylinder, $\frac{127}{360}\pi$ contact angle
case (see table 2a of Karavokyros’ master’s thesis), which is displayed in
table 1. Now, using this data, if one plots the tension ratio against the
applied mass, then one gets figure 4. Note that the capstan equation implies
that the tension ratio is constant for all applied masses, i.e.
$\delta\tau=\exp(\mu_{d}\theta_{0})$ is constant given that $\mu_{d}$ and
$\theta_{0}$ are constants. However, this is not what the experimental results
are implying: as the reader can clearly see from figure 4, as the applied
mass increases, the tension ratio also increases, and this is a documented
phenomenon in the literature [13], which cannot be simply dismissed. Thus,
this implies that, for the given experiments, it is unsuitable to use the
standard capstan equation to find the coefficient of friction with a
significant degree of accuracy. Now, this is direct evidence that shows the
flaws in Cottenden _et al._ ’s [2] data analysis methods and their
interpretation of the experimental results.
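The increasing trend can be reproduced directly from the mean readings in table 1. Since the readings are quoted as multiples of $g$ in units of $10^{-3}$ N, and the applied weight is $(\text{mass}/1000)\,g$ N, the factors of $g$ and $10^{-3}$ cancel and $\delta\tau$ reduces to reading/mass. A minimal Python sketch:

```python
# Mean tensometer readings from table 1, quoted as multiples of g in
# units of 10^-3 N, keyed by the applied mass in grams.
mean_tension = {10: 15.6, 30: 51.4, 50: 88.2, 70: 125.0}

for mass_g, reading in mean_tension.items():
    # T_max = reading * g * 1e-3 N and T_0 = (mass_g / 1000) * g N, so the
    # factors of g and 10^-3 cancel: delta_tau = reading / mass_g.
    delta_tau = reading / mass_g
    print(f"{mass_g:3d} g: delta_tau = {delta_tau:.3f}")
```

The printed ratios rise from $1.560$ at $10$ g to about $1.786$ at $70$ g, whereas the capstan equation predicts a constant $\delta\tau=\exp(\mu_{d}\theta_{0})$.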
Figure 4: Tension ratio against applied mass.
Unfortunately no raw data is available for the other experiments in the theses
of Karavokyros or Cottenden [3] to conduct further rigorous analysis, as we
did with the $\frac{127}{360}\pi$-cylinder case.
As a result of the flawed mathematics and data analysis techniques, Cottenden
_et al._ [1, 2] and Cottenden [3] assert that the tension observed on the
membrane is independent of the geometry and the elastic properties of the
foundation, and thus, the stress profile at the contact region and the
coefficient of friction can be calculated with the use of the ordinary capstan
equation. They further assert that the experimental methodology for
calculating the coefficient of friction between fabrics and in-vivo (i.e.
within the living) human skin is identical to the capstan model. However, we
found no experimental evidence in the body of the authors’ publications to
verify their assertion, i.e. no evidence is given for the assertion that the
foundation’s geometric and elastic properties are irrelevant when calculating
the coefficient of friction between an elastic foundation and an overlying
membrane. Thus, our next subject of investigation is the authors’ experimental
methodology.
## 4 Experimental Data: June 2015 Human Trial
We now analyse the data obtained from human trials based on Cottenden _et al._
[1, 2] and Cottenden’s [3] experimental methodology. We recruit $10$ subjects,
$5$ males and $5$ females, all between the ages of $18$ and $60$, and the
approval was granted by the UCL Research Ethics Committee: UCL Ethics Project
ID number 5876/001. Collected data of the subjects can be found in table 2, if
the reader wishes to reproduce any results, where BMI is the body mass index
(calculated with the NHS BMI calculator: https://www.nhs.uk/live-well/healthy-
weight/bmi-calculator/), Radius is the radius of the upper arm and $\delta l$
is a measure of how _flaccid_ the subject’s tissue is (see equation (17)). For
a more comprehensive set of raw data, please consult Dr N. Ovenden (UCL) at
<EMAIL_ADDRESS>
Subject | Gender | Age (Years) | BMI | Radius (cm) | $\delta l$
---|---|---|---|---|---
F19 | Female | $19$ | $21.0$ | $3.98$ | $0.994$
F34 | Female | $34$ | $22.0$ | $4.22$ | $0.991$
F40 | Female | $40$ | $23.4$ | $3.82$ | $1.01$
F53 | Female | $53$ | $27.3$ | $4.54$ | $1.02$
F60 | Female | $60$ | $22.5$ | $4.46$ | $1.02$
M18 | Male | $18$ | $17.5$ | $3.50$ | $0.98$
M23 | Male | $23$ | $24.7$ | $4.77$ | $1.04$
M25 | Male | $25$ | $22.6$ | $4.22$ | $1.01$
M26 | Male | $26$ | $22.8$ | $4.50$ | $0.988$
M54 | Male | $54$ | $26.2$ | $5.09$ | $1.00$
Table 2: Experimentee data 2015.
For our experimental configuration, we place the subject’s upper arm
horizontally, bicep facing upwards, on a custom-designed armrest. Then, we
place a fabric (a nonwoven fabric: $95\%$ polypropylene and $5\%$ cotton) over
their bicep and attach the upper end to the tensometer (Dia-Stron MTT170
provided by Dr S. Falloon, UCL), while the free-hanging end is reserved for
hanging a set of weights, such that the contact angle between the bicep and
the fabric (if measured from the humerus) is approximately $\frac{1}{2}\pi$. The dimensions
of the fabric are $4$cm$\times 50$cm, and from our measurements, the fabric
has an approximate thickness of $0.5$mm and an approximate mass of $0.6$g.
Also, we mark the skin with a $3$cm$\times 5$cm grid with $1$cm$\times 1$cm
grid spacing, $0.5$cm away from either side of the fabric and starting from
the highest part (of the horizontal axis) of the arm, then placing semi-
hemispherical markers with a radius of $2$mm. See figure 5 for a visual
representation.
Figure 5: Experimental configuration on subject F53 (.stl file).
For each run, we pull the fabric with a constant speed of
$\frac{1}{6}\text{cms}^{-1}$ with the use of the tensometer and we use a
static-3D scanner (3dMD Photogrammetric System555http://3dmd.com provided by
Mr C. Ruff, UCLH) to record the before and after deformation of the upper arm.
Note that we use the metric
$\displaystyle\delta
l=\frac{\sum_{10\text{g},\dots,140\text{g}}||\text{deformed}(\textbf{A4}-\textbf{E6})||}{\sum_{10\text{g},\dots,140\text{g}}||\text{undeformed}(\textbf{A4}-\textbf{E6})||}~{},$
(17)
to measure the _flaccidity_ (analogous to $1/(\text{Young's modulus})$) of the
subject’s soft tissue. Also, Radius (from table 2) is calculated by measuring
the girth around the bicep and then dividing the measurement by $2\pi$.
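The two derived quantities described above (the girth-to-radius conversion and the flaccidity metric of equation (17)) can be sketched in a few lines of Python. The marker coordinates below are invented purely for illustration; only the formulas come from the text:

```python
import math

def radius_from_girth(girth_cm):
    """Upper-arm radius as described in the text: girth / (2*pi)."""
    return girth_cm / (2 * math.pi)

def delta_l(deformed_pairs, undeformed_pairs):
    """Flaccidity metric of equation (17): ratio of summed marker
    separations |A4 - E6| over all load steps, deformed over undeformed.
    Each entry is a pair of 3D marker coordinates (A4, E6)."""
    num = sum(math.dist(a, e) for a, e in deformed_pairs)
    den = sum(math.dist(a, e) for a, e in undeformed_pairs)
    return num / den

# Illustrative (invented) marker positions for two load steps:
undeformed = [((0, 0, 0), (3.0, 5.0, 0.2)), ((0, 0, 0), (3.0, 5.0, 0.2))]
deformed = [((0, 0, 0), (3.0, 4.95, 0.2)), ((0, 0, 0), (2.95, 4.9, 0.2))]
print(radius_from_girth(25.0))        # ≈ 3.98 cm, cf. the radii in table 2
print(delta_l(deformed, undeformed))  # slightly below 1 for these inputs
```

Here `radius_from_girth(25.0)` gives about $3.98$ cm, the order of the radii in table 2; the $\delta l$ value is below $1$ only because of the invented coordinates.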
Subject | $\delta\tau$ ($40$g) | $\delta\tau$ ($60$g) | $\delta\tau$ ($80$g) | $\delta\tau$ ($100$g) | $\delta\tau$ ($120$g) | $\delta\tau$ ($140$g)
---|---|---|---|---|---|---
F19 | $2.19$ | $1.70$ | $1.52$ | $1.92$ | $1.88$ | $1.86$
F34 | $2.08$ | $\cdots$ | $1.93$ | $1.89$ | $1.83$ | $1.83$
F40 | $1.99$ | $1.96$ | $1.92$ | $1.89$ | $1.88$ | $1.79$
F53 | $2.13$ | $2.31$ | $2.23$ | $2.15$ | $1.84$ | $1.70$
F60 | $2.29$ | $2.20$ | $2.06$ | $1.99$ | $1.96$ | $1.95$
M18 | $1.99$ | $1.90$ | $1.88$ | $1.84$ | $1.68$ | $1.76$
M23 | $2.46$ | $2.28$ | $2.24$ | $2.19$ | $2.33$ | $1.99$
M25 | $2.14$ | $1.98$ | $1.81$ | $1.80$ | $1.98$ | $\cdots$
M26 | $2.41$ | $2.31$ | $2.26$ | $2.18$ | $2.31$ | $1.99$
M54 | $\cdots$ | $2.12$ | $2.03$ | $1.77$ | $1.91$ | $1.96$
Table 3: Experimental data from the 2015 human trial.
Table 3 shows the tension ratio, $\delta\tau=T_{\text{max}}/T_{0}$, for each
subject with respect to each applied mass, where $T_{\text{max}}$ are the
tensometer readings, $T_{0}=\text{Mass}\times g$ are the weight of the applied
mass and $g=9.81$ is the acceleration due to gravity. Note that the tensometer
data of F34 for $60$g, M25 for $140$g and M54 for $40$g are corrupted, and
thus, omitted.
Cottenden _et al._ [1, 2] and Cottenden [3] assert that the
$T_{\text{max}}=T_{0}\exp(\mu_{F}\theta_{0})$ equation holds true, regardless
of the geometric and elastic properties of the substrate (i.e. human soft-
tissue). If this is indeed true then $\delta\tau=\exp(\mu_{F}\theta_{0})$ is
only a function of the coefficient of friction, regardless of the geometric
and elastic properties of human soft-tissue, for a fixed contact angle
$\theta_{0}$. Also, one of the major assumptions of the authors is that the
coefficient of friction between skin and fabrics is positively correlated with
the age of an individual, as they observe a higher occurrence of skin damage
in elderly subjects. Thus, we now plot the tension ratio against various
properties to test the authors’ claims.
(a) Tension ratio against the age (in years).
(b) Tension ratio against the BMI.
(c) Tension ratio against the $\delta l$.
(d) Tension ratio against the radius of the upper arm (in cm).
Figure 6: Tension ratio for varying age, BMI, $\delta l$ and radius of the
upper arm.
Figure 6(a) shows the tension ratio against subjects’ age, where the linear
regression line is $\delta\tau=0.00117\times\text{Age}+1.96$ and the age is in
years ($\text{R}^{2}=0.0512$, where $R^{2}$ is the coefficient of
determination). As the reader can see, there is no obvious relationship between
the age of the subject and the tension ratio. In fact, the highest tension
ratios are observed for M23 and M26 (two of the youngest subjects).
Figure 6(b) shows the tension ratio against the body mass index (BMI), where
the linear regression line is $\delta\tau=0.0169\times\text{BMI}+1.61$
($\text{R}^{2}=0.0961$). As the reader can see, there is a vague positive
correlation between the BMI and the tension ratio. The reason we observe this
correlation is that those who have a higher BMI tend to have a greater fat
content, i.e. a higher volume of flaccid tissue. Thus, during the experiments,
a higher tension needs to be applied to the fabric, as a portion of the
applied tension is expended on deforming the flaccid tissue of the subject.
Figure 6(c) shows the tension ratio against $\delta l$, where the linear
regression line is $\delta\tau=4.39\times\delta l-2.41$
($\text{R}^{2}=0.282$). As the reader can see, there is a positive
correlation between $\delta l$ and the tension ratio. This implies that for
more flaccid soft-tissue, a larger portion of the applied tension is expended
on deforming it.
Figure 6(d) shows the tension ratio against the radius of the upper arm, where
the linear regression line is $\delta\tau=0.205\times\text{Radius}+1.12$ and
the radius is in centimetres ($\text{R}^{2}=0.412$). As the reader can see,
there is a strong positive correlation between the radius and the tension
ratio. The reason why we observe this correlation is exactly the same as in
the cases before: the larger the radius, the larger the volume of soft-tissue
that needs to be deformed, and thus, the larger the applied tension.
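The quoted regression against the radius can be approximately reproduced from tables 2 and 3. The sketch below averages each subject's $\delta\tau$ over the applied masses (skipping the corrupted entries) and fits an ordinary least-squares line against the radius; the original fit may have used the individual data points rather than subject means, so the coefficients are expected to be close to, but not identical with, the quoted values:

```python
# delta_tau readings per subject (table 3, corrupted entries skipped) and
# upper-arm radius in cm (table 2).
dt = {
    "F19": [2.19, 1.70, 1.52, 1.92, 1.88, 1.86],
    "F34": [2.08, 1.93, 1.89, 1.83, 1.83],
    "F40": [1.99, 1.96, 1.92, 1.89, 1.88, 1.79],
    "F53": [2.13, 2.31, 2.23, 2.15, 1.84, 1.70],
    "F60": [2.29, 2.20, 2.06, 1.99, 1.96, 1.95],
    "M18": [1.99, 1.90, 1.88, 1.84, 1.68, 1.76],
    "M23": [2.46, 2.28, 2.24, 2.19, 2.33, 1.99],
    "M25": [2.14, 1.98, 1.81, 1.80, 1.98],
    "M26": [2.41, 2.31, 2.26, 2.18, 2.31, 1.99],
    "M54": [2.12, 2.03, 1.77, 1.91, 1.96],
}
radius = {"F19": 3.98, "F34": 4.22, "F40": 3.82, "F53": 4.54, "F60": 4.46,
          "M18": 3.50, "M23": 4.77, "M25": 4.22, "M26": 4.50, "M54": 5.09}

x = [radius[s] for s in dt]
y = [sum(v) / len(v) for v in dt.values()]  # mean delta_tau per subject
n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
slope = sxy / sxx
intercept = my - slope * mx
r2 = sxy ** 2 / (sxx * syy)
print(f"delta_tau ≈ {slope:.3f} * Radius + {intercept:.2f},  R^2 = {r2:.3f}")
```

With subject means this yields a slope of roughly $0.20$, an intercept of roughly $1.14$ and $R^{2}\approx 0.39$, close to the quoted $0.205$, $1.12$ and $0.412$.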
From figures 6(b), 6(d) and 6(c), we observe higher tension ratios for
subjects with higher BMI, more flaccid soft-tissue, and larger biceps. Should
one use the capstan equation (or any belt-friction model in general) to
calculate the coefficient of friction, then one would observe a higher
coefficient of friction for subjects with higher BMI, more flaccid soft-
tissue, and larger biceps. However, this does neither imply nor does not imply
that those with a higher BMI, more flaccid soft-tissue, and larger biceps have
a greater risk of skin abrasion, i.e _correlation does not imply causation_
[30]; It merely implies that belt-friction models are not suitable for
modelling such problems, which directly contradicts Cottenden _et al._ [1, 2]
and Cottenden’s [3] assertion regarding the efficacy of the capstan model.
For a mathematically rigorous way of modelling this problem, we refer the
reader to sections 6.6 and 6.7 of Jayawardana [14]. There, both numerical-
modelling (shell-membrane frictionally coupled to an elastic-foundation) and
human trial data imply that given a constant coefficient of friction, a higher
volume of soft-tissue (high radius) and more compliant soft-tissue (lower
Young’s modulus) would result in higher deformation of the skin, and a higher
volume of soft-tissue would result in more shear-stress generated on the skin.
## 5 Conclusions
In conclusion, we showed that no mathematical claim of Cottenden _et al._ [1,
2] and Cottenden [3] can be mathematically justified, and some fundamental and
trivial results in mathematical elasticity, differential geometry and contact
mechanics are misrepresented. Only the ordinary capstan equation, and a
solution to a dynamic membrane with both a zero Poisson’s ratio and a zero
mass density on a right-circular cone, are given. Finally, limited experimental
data is given to show the trivial asymptotic nature of $\sin(\theta)$ near
$\theta=0$.
Also, Cottenden _et al._ [1, 2] and Cottenden [3] claim that the capstan
equation (1) holds true, regardless of the geometric and elastic properties of
the obstacle, and thus, it can be used to calculate the coefficient of
friction between human skin and fabrics. However, the data gathered from human
trials implies that it is unwise to use the capstan equation (or belt-
friction models in general) to calculate the friction between in-vivo skin and
fabrics. This is because such models assume a rigid obstacle while human
soft-tissue is elastic, and thus, a portion of the applied force to the fabric
is expended on deforming the soft-tissue, which in turn leads to the illusion
of a higher coefficient of friction when such models are used to calculate the
coefficient of friction.
## References
* Cottenden et al. [2008] A. M. Cottenden, D. J. Cottenden, S. Karavokiros, W. K. R. Wong, Development and experimental validation of a mathematical model for friction between fabrics and a volar forearm phantom, Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 222 (2008) 1097–1106.
* Cottenden and Cottenden [2009] D. J. Cottenden, A. M. Cottenden, An analytical model of the motion of a conformable sheet over a general convex surface in the presence of frictional coupling, The Quarterly Journal of Mechanics and Applied Mathematics (2009) hbp012.
* Cottenden [2011] D. J. Cottenden, A multiscale analysis of frictional interaction between human skin and nonwoven fabrics, Ph.D. thesis, UCL (University College London), 2011.
* Bowden and Tabor [1964] F. P. Bowden, D. Tabor, The friction and lubrication of solids, volume 2, Wiley Online Library, 1964.
* Clark [1981] S. K. Clark, Mechanics of pneumatic tires, US Department of Transportation, National Highway Traffic Safety Administration, 1981.
* Bergfeld and Taylor [1985] W. F. Bergfeld, J. S. Taylor, Trauma, sports, and the skin, American journal of industrial medicine 8 (1985) 403–413.
* Asserin et al. [2000] J. Asserin, H. Zahouani, P. Humbert, V. Couturaud, D. Mougin, Measurement of the friction coefficient of the human skin in vivo: quantification of the cutaneous smoothness, Colloids and surfaces B: Biointerfaces 19 (2000) 1–12.
* Levit [1977] F. Levit, Jogger’s nipples., The New England journal of medicine 297 (1977) 1127.
* Wilkinson [1985] D. S. Wilkinson, Dermatitis from repeated trauma to the skin, American journal of industrial medicine 8 (1985) 307–317.
* Maklebust and Sieggreen [2001] J. Maklebust, M. Sieggreen, Pressure ulcers: Guidelines for prevention and management, Lippincott Williams & Wilkins, 2001.
* Shank [1979] A. B. Shank, The aetiology of juvenile plantar dermatosis, British Journal of Dermatology 100 (1979) 641–648.
* Rao et al. [2003] C. L. Rao, J. Lakshinarashiman, R. Sethuraman, S. M. Sivakumar, Engineering Mechanics: Statics and Dynamics, PHI Learning Pvt. Ltd., 2003.
* Jung et al. [2008] J. H. Jung, N. Pan, T. J. Kang, Capstan equation including bending rigidity and non-linear frictional behavior, Mechanism and Machine Theory 43 (2008) 661–675.
* Jayawardana [2016] K. Jayawardana, Mathematical Theory of Shells on Elastic Foundations: An Analysis of Boundary Forms, Constraints, and Applications to Friction and Skin Abrasion, Ph.D. thesis, UCL (University College London), 2016.
* Konyukhov [2015] A. Konyukhov, Contact of ropes and orthotropic rough surfaces, ZAMM-Journal of Applied Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und Mechanik 95 (2015) 406–423.
* Konyukhov and Izi [2015] A. Konyukhov, R. Izi, Introduction to computational contact mechanics: a geometrical approach, John Wiley & Sons, 2015.
* Kolmogorov et al. [1961] A. N. Kolmogorov, S. V. Fomin, N. A. Brunswick, A. Jeffrey, Measure, Lebesgue integrals and Hilbert space, volume 2, Academic Press, New York, 1961.
* Ciarlet [2000] P. G. Ciarlet, Theory of Shells, Mathematical Elasticity, Elsevier Science, 2000.
* Badiale and Serra [2010] M. Badiale, E. Serra, Semilinear Elliptic Equations for Beginners: Existence Results via the Variational Approach, Springer Science & Business Media, 2010.
* do Carmo [2009] M. P. do Carmo, Differential Geometry of Curves and Surfaces, Pearson Education Taiwan Limited, 2009.
* Lipschutz [1969] M. M. Lipschutz, Schaum’s Outline of Theory and Problems of Differential Geometry, Schaum’s outline series : theory and problem, McGraw-Hill, 1969.
* Libai and Simmonds [2005] A. Libai, J. G. Simmonds, The nonlinear theory of elastic shells, Cambridge university press, 2005.
* Griffiths and Podolskỳ [2009] J. B. Griffiths, J. Podolskỳ, Exact Space-Times in Einstein’s General Relativity, Cambridge Monographs on Mathematical Physics, Cambridge University Press, 2009.
* Stephani et al. [2003] H. Stephani, D. Kramer, M. MacCallum, C. Hoenselaers, E. Herlt, Exact Solutions of Einstein’s Field Equations, Cambridge Monographs on Mathematical Physics, Cambridge University Press, 2003.
* Wald [2010] R. M. Wald, General Relativity, University of Chicago Press, 2010.
* Kikuchi and Oden [1988] N. Kikuchi, J. Oden, Contact Problems in Elasticity: A Study of Variational Inequalities and Finite Element Methods, Studies in Applied Mathematics, Society for Industrial and Applied Mathematics, 1988.
* Pressley [2010] A. N. Pressley, Elementary differential geometry, Springer Science & Business Media, 2010.
* Kay [1988] D. Kay, Schaum’s Outline of Tensor Calculus, McGraw Hill Professional, 1988.
* Acharya [1999] A. Acharya, On compatibility conditions for the left Cauchy–Green deformation field in three dimensions, Journal of Elasticity 56 (1999) 95–105.
* Aldrich [1995] J. Aldrich, Correlations genuine and spurious in Pearson and Yule, Statistical Science (1995) 364–376.
# The dynamic energy balance in earthquakes expressed by fault surface
morphology
Xin Wang<EMAIL_ADDRESS>, Juan Liu, Feng Gao, Zhizhen Zhang, State Key
Laboratory for Geomechanics and Deep Underground Engineering, China University
of Mining and Technology, Xuzhou 221116, China
###### Abstract
The dynamic energy balance is essential for earthquake studies. The energy
balance approach is one of the most famous developments in fracture mechanics.
To interpret seismological data, crack models and sliding on a frictional
surface (fault) models are widely used. The macroscopically observable energy
budget and the microscopic processes can be related through the fracture
energy $G_{c}$. The fault surface morphology is the direct result of the
microscopic processes near the crack tip or on the frictional interface. Here
we show that the dynamic energy balance in earthquakes can be expressed by
fault surface morphology, and that they are quantitatively linked. The direct
shear experiments proves the predictions of the theoretical discussions, and
show that the strain rate has crucial influence on the dynamic energy balance.
###### keywords:
Earthquakes; Dynamic energy balance; Surface morphology; Strain rate
## 1 Introduction
An earthquake may be considered to be a dynamically running shear crack
[Scholz, 2019], in which the dynamic energy balance is essential for
earthquake studies. In the field of fracture mechanics, the energy balance
approach employed by Griffith has become one of the most famous developments
in materials science [Collins, 1993]. To interpret seismological data, crack
models are often used in part because the theories on cracks have been
developed well. On the other hand, seismic faulting may be more intuitively
viewed as sliding on a frictional surface (fault) where the physics of
friction, especially stick slip, plays a key role.
The fracture energy, $G_{c}$, in crack theory is the energy needed to create
new crack surfaces near the crack tip. Thus, the system must expend the
threshold fracture energy $G_{c}$ before the crack can extend. In contrast, in
the frictional sliding model, $D_{c}$ is introduced as a critical slip before
rapid sliding begins at a constant friction. If the initial stress
$\sigma_{0}$ of the system drops more or less linearly to the final stress
$\sigma_{1}$, i.e., the final value of the frictional stress $\sigma_{f}$, the
energy spent in the system before this happens can be approximately written as
$\frac{1}{2}(\sigma_{0}-\sigma_{1})D_{c}$. Thus, if we are to link a crack
model to a friction model, we can equate this energy to $G_{c}$, i.e.,
$G_{c}=\frac{1}{2}(\sigma_{0}-\sigma_{1})D_{c}$. Then we can relate the
macroscopically observable energy budget to the microscopic processes in a
surprisingly general way. Any constraint on fracture energy obtained from the
energy budget will provide a strong bound on all microscopic rupture processes
[Kanamori and Brodsky, 2004]. Svetlizky and Fineberg [2014] have shown that
interface rupture propagation is described by, essentially, the same framework
of the crack model. This suggests that an analogous ‘Griffith’ condition may
exist for the onset of rupture propagation for a frictional interface.
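The crack-friction link $G_{c}=\frac{1}{2}(\sigma_{0}-\sigma_{1})D_{c}$ is straightforward to evaluate numerically. In the sketch below, the breakdown stress drop of $3$ MPa and the critical slip of $0.5$ m are illustrative assumptions of a typical order of magnitude for earthquakes, not values from this study:

```python
def fracture_energy(sigma_0, sigma_1, d_c):
    """G_c = (1/2) * (sigma_0 - sigma_1) * D_c: the energy spent during a
    roughly linear stress drop over the critical slip D_c, equated to the
    crack model's fracture energy."""
    return 0.5 * (sigma_0 - sigma_1) * d_c

# Illustrative (assumed) values: a 3 MPa stress drop over 0.5 m of slip.
g_c = fracture_energy(3.0e6, 0.0, 0.5)  # Pa * m = J/m^2
print(g_c)  # 750000.0 J/m^2
```

Any independent constraint on $G_{c}$ from the seismological energy budget then bounds the admissible $(\sigma_{0}-\sigma_{1},D_{c})$ pairs.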
The fault surface morphology is the direct result of the microscopic processes
near the crack tip or on the frictional interface. Here we show that the
dynamic energy balance in earthquakes can be expressed by fault surface
morphology, and that they are quantitatively linked.
## 2 The description of fault surface morphology and its links to the dynamic
energy balance in earthquakes
The description of fault surface morphology has been investigated in the
literature [e.g., Ladanyi and Archambault, 1969, Barton and Choubey, 1977,
Plesha, 1987, Saeb and Amadei, 1992, Amadei et al., 1998] while trying to
study the contribution of surface morphology to the shear strength of rock
fractures. As shearing strictly depends on three-dimensional contact area
location and distribution [Gentier et al., 2000], Grasselli et al. [2002]
proposed a method for the quantitative three-dimensional description of a
rough fracture surface, and based on this description, Grasselli and Egger
[2003] proposed a constitutive criterion to model the shear strength of
fractures. Wang et al. [2019] defined the terms “quasi steps” and “quasi
striations” to refer to morphological structures that are created during the
creation of new crack surfaces and the friction on the frictional interface.
The same morphological structures are also causing anisotropy of the fracture
surface morphology, hence the anisotropy of the surface morphology’s
contribution on its shear strength. The terms “quasi steps” and “quasi
striations” are broader definitions of the fault steps and fault striations
whether they can be obviously seen or not. The parameter $\theta^{*}_{max}/C$
was proposed by Grasselli et al. [2002] to describe the contribution of
fracture surface morphology on the shear strength. Wang et al. [2019] proposed
a theoretical model that describes the contribution of quasi steps and quasi
striations on the shear strength. And by fitting this theoretical model to
$\theta^{*}_{max}/C$ from each slip direction on the fracture surface, the
amount of quasi steps $R_{G}$ and quasi striations $R_{H}$ can be estimated.
Combined with the method proposed by Wang et al. [2017] for outcrop fracture
surface extraction from point cloud data, the estimated quasi steps and quasi
striations data had good applications in tectonics [e.g., Wang and Gao, 2020].
Let’s consider the formation of quasi steps during the crack growth. The voids
(Fig. 1 as suggested by Bouchaud and Paun [1999]) are nucleated under the
influence of the stress field adjacent to the tip, but not at the tip, due to
the existence of the plastic zone that cuts off the purely linear-elastic
(unphysical) crack-tip singularities. The crack grows by coalescing the voids
with the tip, creating a new stress field which induces the nucleation of new
voids [Bouchbinder et al., 2004]. The morphological structures of quasi steps
are then formed during the crack growth under shear load as illustrated in
Fig. 1. A crucial aspect of this picture is the existence of a typical scale,
$\xi_{c}$, which is roughly the distance between the crack tip and the first
void, at the time of the nucleation of the latter. In this picture there
exists an “energy dissipation zone” in front of the crack tip in which plastic
yield is accompanied by the evolution of damage cavities. From this picture,
it can also be seen that the typical scale $\xi_{c}$ is positively related
to the estimated amount of quasi steps $R_{G}$, since the quasi steps are
more developed on the fracture surface with larger value of $\xi_{c}$.
A simple model for $\xi_{c}$ was developed [Bouchbinder et al., 2004, Afek et
al., 2005] by assuming the energy dissipation zone to be properly described by
the Huber–von Mises plasticity theory. The material yields as the distortional
energy exceeds a material-dependent threshold $\sigma_{Y}^{2}$ and the typical
distance $\xi_{c}$ scales as [Bouchbinder et al., 2004]
$\xi_{c}\sim\frac{K_{II}^{2}}{\sigma_{Y}^{2}},$ (1)
where $K_{II}$ is the stress intensity factor for mode II (in-plane shear)
cracks. On the other hand, the linear-elastic solution is still valid outside
the energy dissipation zone, and the energy release rate $G^{*}$ can be
expressed as
$G^{*}=\frac{K_{II}^{2}}{E^{\prime}},$ (2)
where $E^{\prime}$ is related to Young’s modulus $E$ and Poisson’s ratio $\nu$
(for plane strain):
$E^{\prime}=\frac{E}{1-\nu^{2}}.$
The definition of the energy release rate $G^{*}$ is
$G^{*}=\frac{\partial\Omega}{\partial s},$ (3)
where $\Omega$ is the strain energy and $s$ is the crack growth area. Then Eq.
1 can be rewritten as
$\xi_{c}\sim
G^{*}/\frac{\sigma^{2}_{Y}}{E^{\prime}}=\frac{\partial\Omega}{\partial
s}/\frac{\sigma^{2}_{Y}}{E^{\prime}}.$ (4)
The amount of quasi steps $R_{G}$ is a description of the degree of quasi
steps development over the fracture surface, so it is an average quantity over
the fracture surface. $R_{G}$ should scale with $\overline{\xi_{c}}$, the
average of $\xi_{c}$ over the fracture surface $S$:
$R_{G}\sim\overline{\xi_{c}}=\frac{1}{S}\int_{s}\xi_{c}\sim\frac{\Omega}{\gamma_{G}},$
(5)
where
$\gamma_{G}=\int\limits_{S}\frac{\sigma^{2}_{Y}}{E^{\prime}}.$ (6)
Figure 1: The model of the tip growth of mode II (in-plane shear) cracks. The
formation of voids are documented in Bouchaud and Paun [1999].
Let’s consider the formation of quasi striations during the friction on the
frictional interface. As suggested by Svetlizky and Fineberg [2014], the
interface rupture friction is described by, essentially, the same framework of
the crack model. The main difference is that the crack model considers the
crack growth in the rock mass, while the friction model considers the crack
growth through the local protruding obstacles on the frictional interface,
which are then removed or deformed during the friction. As the slip goes on,
more protruding obstacles that face the slip direction at high angles are
removed or deformed, while protruding obstacles at high angles perpendicular
to the slip direction are kept. This anisotropy results in the
macroscopic quasi striations structures. The amount of quasi striations
$R_{H}$ is a description of the degree of quasi striations development over
the fracture surface, so it is an average quantity over the fracture surface.
$R_{H}$ should scale with $E_{f}$, the frictional energy like this:
$R_{H}\sim\frac{E_{f}}{\overline{\omega}},$ (7)
where $\overline{\omega}$ is the average strain energy needed to remove or
deform a protruding obstacle. Suppose $n$ protruding obstacles are removed or
deformed, according to Eq. 5 and Eq. 6, the strain energy scale as
$\omega_{1}\sim\int\limits_{S^{*}_{1}}\frac{\sigma^{2}_{Y}}{E^{\prime}},\omega_{2}\sim\int\limits_{S^{*}_{2}}\frac{\sigma^{2}_{Y}}{E^{\prime}},...,\omega_{n}\sim\int\limits_{S^{*}_{n}}\frac{\sigma^{2}_{Y}}{E^{\prime}},$
where $S^{*}_{i}$ is the crack area in the $i$-th protruding obstacle. The
average strain energy $\overline{\omega}$ can be written as
$\overline{\omega}=\frac{1}{n}\sum^{n}\omega_{i}\sim\int\limits_{\overline{S^{*}}}\overline{\biggl{(}\frac{\sigma^{2}_{Y}}{E^{\prime}}\biggr{)}},$
(8)
where
$\overline{S^{*}}=\frac{1}{n}\sum^{n}S^{*}_{i},$
and
$\overline{\biggl{(}\frac{\sigma^{2}_{Y}}{E^{\prime}}\biggr{)}}=\sum^{n}\int\limits_{S^{*}_{i}}\frac{\sigma^{2}_{Y}}{E^{\prime}}\bigg{/}\sum^{n}S^{*}_{i}.$
Finally, we have the link between the fault surface morphology and the dynamic
energy balance in earthquakes:
$\frac{R_{G}}{R_{H}}\sim\frac{\overline{\omega}}{\gamma_{G}}\frac{\Omega}{E_{f}}.$
(9)
## 3 The experimental results and discussions
An experiment was designed to test the above theoretical discussion. The
dimensions and experiment settings are illustrated in Fig. 2. 14 red sandstone
rock samples are tested in this simple direct shear experiment with the strain
rate ranging from 0.01%/min to 0.083%/min. The final fracture surfaces after the
crack growth and the interface friction processes are scanned and analyzed,
and the amounts of quasi steps $R_{G}$ and quasi striations $R_{H}$ are estimated.
On the other hand, the stress-strain curves are recorded during the tests. As
shown in Fig. 3, the strain energy is mainly accumulated before the formation
of the macroscopic crack through the rock sample, while the frictional energy
is mainly spent after the formation of the macroscopic crack. Thus the strain
energy $\Omega$ and the frictional energy $E_{f}$ can be roughly estimated.
Figure 2: Experiment settings of direct shear test.
Figure 3: Examples of the stress-strain curve. Roughly the green regions
represent the strain energy to be released during the crack growth, and the
red regions represent the frictional energy spent during the friction on the
frictional interface.
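The rough estimation of $\Omega$ and $E_{f}$ described above (splitting the recorded stress-strain curve at the peak, taken as the formation of the macroscopic crack, and integrating each part) can be sketched as follows. The curve below is synthetic, not experimental data, and the integrals are energy densities per unit volume; scaling by specimen volume is omitted:

```python
def split_energies(strain, stress):
    """Trapezoidal integration of a stress-strain record, split at the
    peak stress: the area before the peak approximates the strain energy
    Omega released during crack growth, and the area after approximates
    the frictional energy E_f (both per unit volume)."""
    k = max(range(len(stress)), key=stress.__getitem__)  # index of peak
    def area(i0, i1):
        return sum(0.5 * (stress[i] + stress[i + 1]) * (strain[i + 1] - strain[i])
                   for i in range(i0, i1))
    return area(0, k), area(k, len(strain) - 1)

# Synthetic curve: linear loading to a 10 MPa peak at 0.5% strain, then
# decay to a 4 MPa residual frictional stress at 1% strain.
strain = [i * 0.001 for i in range(11)]
stress = [1e7 * s / 0.005 for s in strain[:6]] + [8e6, 6e6, 5e6, 4.5e6, 4e6]
omega, e_f = split_energies(strain, stress)
print(omega, e_f)  # J/m^3 before and after the peak
```

For this synthetic curve the pre-peak area is about $2.5\times 10^{4}$ J/m$^{3}$ and the post-peak area about $3.05\times 10^{4}$ J/m$^{3}$; with real records, $\log(\Omega)/\log(E_{f})$ follows directly.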
The rock samples are numbered and their strain rate during the test are shown
in Table 1. The ratio between the amount of quasi steps and the amount of
quasi striations ($R_{G}/R_{H}$) is plotted against
$\log{(\Omega)}/\log{(E_{f})}$. The logarithm of the strain energy and the
frictional energy are taken here in order to compare their values with $R_{G}$
and $R_{H}$, although the logarithm relationship is not explicitly shown in
above discussions (e.g., Eq. 5 and Eq. 7). Results of rock samples are scatter
ploted in Fig. 4 with their sample numbers labeled on their corresponding
scatter points for reference.
Table 1: Rock sample numbers and their strain rate during the test.
Sample No. | 0 | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---|---
Strain rate (%/min) | 0.08 | 0.077 | 0.07 | 0.063 | 0.037 | 0.03 | 0.01
Sample No. | 7 | 8 | 9 | 10 | 11 | 12 | 13
Strain rate (%/min) | 0.017 | 0.023 | 0.033 | 0.043 | 0.057 | 0.067 | 0.083
Figure 4: The experimental results. The ratio between the number of quasi
steps and the number of quasi striations ($R_{G}/R_{H}$) is plotted against
$\log{(\Omega)}/\log{(E_{f})}$. Each labeled point is the result of the
corresponding rock sample, whose strain rate can be found in Table 1.
The results show that there does indeed seem to be a quantitative link between
the fault surface morphology and the dynamic energy balance in earthquakes, as
theoretically predicted above. Some of the scattered results show a constant
value of $\overline{\omega}/{\gamma_{G}}$ (e.g., sample numbers 12, 1, 10, 5,
2, 6, 11 and 9), because $\overline{\omega}$ and $\gamma_{G}$ are both
integrals of $\sigma^{2}_{Y}/E^{\prime}$ over some area, whose averages tend
to be the same for the same rock sample, although they vary locally. Thus the
link predicted by Eq. 9 is material-independent, at least for the dry brittle
materials discussed here.
Note that sample numbers 3, 13, 4 and 0 seem to have slightly larger values of
$\overline{\omega}/{\gamma_{G}}$, and they were tested at larger strain rates.
On the contrary, sample numbers 7 and 8 have slightly smaller values of
$\overline{\omega}/{\gamma_{G}}$, and they were tested at smaller strain
rates. If we take
the rupture speed $V$ into account, Eq. 9 becomes
$\frac{R_{G}}{R_{H}}\sim\frac{\overline{\omega}(1-\frac{V^{2}}{(B\beta)^{2}})}{\gamma_{G}}\frac{\Omega}{E_{f}},$
(10)
where $B$ is a constant of the order of 1 and $\beta$ is the shear-wave speed
[Kanamori and Brodsky, 2004], because
$G=G^{*}g(V)=\frac{\partial\Omega}{\partial s}(1-\frac{V^{2}}{(B\beta)^{2}}).$
But this does not explain the data: the rupture speeds $V$ are not
significantly different in those tests, so the rupture speed does not have a
significant effect here. One explanation is that for real materials,
$\sigma^{2}_{Y}$ depends on the state of deformation and its history. The
experiments discussed here show that a high strain rate results in a larger
value of $\overline{\omega}$, i.e., on average, more energy is needed to
remove or deform a protruding obstacle on the fracture surface during the
friction. During the formation of the macroscopic crack and of the protruding
obstacles on the crack surface, a high strain rate may cause less plastic
deformation inside the protruding obstacles, and hence more energy is needed
to remove or deform them in the subsequent deformation.
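The size of the rupture-speed correction factor $g(V)=1-V^{2}/(B\beta)^{2}$ in Eq. 10 can be gauged with a short sketch; the numbers below ($B=1$, a shear-wave speed $\beta$ of 3 km/s, and the sample rupture speeds) are illustrative assumptions, not measured values from these tests:

```python
def g(V, beta=3000.0, B=1.0):
    """Rupture-speed correction factor g(V) = 1 - V^2 / (B * beta)^2."""
    return 1.0 - (V / (B * beta)) ** 2

# For rupture speeds well below the shear-wave speed the correction is tiny,
# consistent with V having no significant effect in these quasi-static tests.
for V in (0.0, 300.0, 1500.0):
    print(V, round(g(V), 3))
```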
## 4 Conclusions
The dynamic energy balance is essential for earthquake studies. To interpret
seismological data, crack models and frictional-sliding (fault) models are
widely used. From these two types of models, the macroscopically observable
energy budget and the microscopic processes can be related through the
fracture energy $G_{c}$.
The fault surface morphology is the direct result of the microscopic processes
near the crack tip or on the frictional interface. Here we show that the
dynamic energy balance in earthquakes can be expressed by fault surface
morphology, and that they are quantitatively linked.
The direct shear experiments confirm the predictions of the theoretical
discussion, and show that the strain rate has a crucial influence on the
dynamic energy balance. The link predicted by the theoretical discussion is
material-independent, at least for the dry brittle materials discussed here.
## Acknowledgments
This research was funded by the Fundamental Research Funds for the Central
Universities (grant no. 2020QN29), the China Postdoctoral Science Foundation
(Grant no. 2020M681759), and the State Key Program of National Natural Science
Foundation of China (Grant no. 51934007).
## References
* Afek et al. [2005] Afek, I., Bouchbinder, E., Katzav, E., Mathiesen, J., Procaccia, I., 2005. Void formation and roughening in slow fracture. Physical Review E 71, 066127. doi:10.1103/PhysRevE.71.066127.
* Amadei et al. [1998] Amadei, B., Wibowo, J., Sture, S., Price, R.H., 1998. Applicability of existing models to predict the behavior of replicas of natural fractures of welded tuff under different boundary conditions. Geotechnical & Geological Engineering 16, 79–128. doi:10.1023/A:1008886106337.
* Barton and Choubey [1977] Barton, N., Choubey, V., 1977. The shear strength of rock joints in theory and practice. Rock Mechanics 10, 1–54. doi:10.1007/BF01261801.
* Bouchaud and Paun [1999] Bouchaud, E., Paun, F., 1999. Fracture and damage at a microstructural scale. Computing in Science & Engineering 1, 32–38. doi:10.1109/5992.790585.
* Bouchbinder et al. [2004] Bouchbinder, E., Mathiesen, J., Procaccia, I., 2004. Roughening of fracture surfaces: The role of plastic deformation. Physical Review Letters 92, 245505. doi:10.1103/PhysRevLett.92.245505.
* Collins [1993] Collins, J.A., 1993. Failure of materials in mechanical design: analysis, prediction, prevention. John Wiley & Sons.
* Gentier et al. [2000] Gentier, S., Riss, J., Archambault, G., Flamand, R., Hopkins, D., 2000. Influence of fracture geometry on shear behavior. International Journal of Rock Mechanics and Mining Sciences 37, 161–174. doi:10.1016/S1365-1609(99)00096-9.
* Grasselli and Egger [2003] Grasselli, G., Egger, P., 2003. Constitutive law for the shear strength of rock joints based on three-dimensional surface parameters. International Journal of Rock Mechanics and Mining Sciences 40, 25–40. doi:10.1016/S1365-1609(02)00101-6.
* Grasselli et al. [2002] Grasselli, G., Wirth, J., Egger, P., 2002. Quantitative three-dimensional description of a rough surface and parameter evolution with shearing. International Journal of Rock Mechanics and Mining Sciences 39, 789–800. doi:10.1016/S1365-1609(02)00070-9.
* Kanamori and Brodsky [2004] Kanamori, H., Brodsky, E.E., 2004. The physics of earthquakes. Reports on Progress in Physics 67, 1429–1496. doi:10.1088/0034-4885/67/8/r03.
* Ladanyi and Archambault [1969] Ladanyi, B., Archambault, G., 1969. Simulation of shear behavior of a jointed rock mass, in: The 11th US Symposium on Rock Mechanics (USRMS), American Rock Mechanics Association.
* Plesha [1987] Plesha, M.E., 1987. Constitutive models for rock discontinuities with dilatancy and surface degradation. International Journal for Numerical and Analytical Methods in Geomechanics 11, 345–362. doi:10.1002/nag.1610110404.
* Saeb and Amadei [1992] Saeb, S., Amadei, B., 1992. Modelling rock joints under shear and normal loading. International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts 29, 267–278. doi:10.1016/0148-9062(92)93660-C.
* Scholz [2019] Scholz, C.H., 2019. The Mechanics of Earthquakes and Faulting. 3 ed., Cambridge University Press. doi:10.1017/9781316681473.
* Svetlizky and Fineberg [2014] Svetlizky, I., Fineberg, J., 2014. Classical shear cracks drive the onset of dry frictional motion. Nature 509, 205–208. doi:10.1038/nature13202.
* Wang and Gao [2020] Wang, X., Gao, F., 2020. Quantitatively deciphering paleostrain from digital outcrops model and its application in the Eastern Tian Shan, China. Tectonics 39, e2019TC005999. doi:10.1029/2019TC005999.
* Wang et al. [2019] Wang, X., Qin, Y., Yin, Z., Zou, L., Shen, X., 2019. Historical shear deformation of rock fractures derived from digital outcrop models and its implications on the development of fracture systems. International Journal of Rock Mechanics and Mining Sciences 114, 122–130. doi:10.1016/j.ijrmms.2018.12.018.
* Wang et al. [2017] Wang, X., Zou, L.J., Shen, X.H., Ren, Y.P., Qin, Y., 2017. A region-growing approach for automatic outcrop fracture extraction from a three-dimensional point cloud. Computers & Geosciences 99, 100–106. doi:10.1016/j.cageo.2016.11.002.
# Energetic Stability of the Solutions
of the Einstein Field Equations for
Spherically Symmetric Liquid Shells
Jorge L. deLyra (Email:<EMAIL_ADDRESS>)
Universidade de São Paulo
Instituto de Física
Rua do Matão, 1371,
05508-090 São Paulo, SP, Brazil
(March 18, 2021)
###### Abstract
We interpret the exact solutions previously obtained for spherically symmetric
shells of liquid fluid in General Relativity in terms of the energies
involved. In order to do this we make a change of variables in the field
equations, which introduces some integral expressions that are related to
various parts of the energy. We then use these integrals to show that a
certain parameter with dimensions of length, which was necessarily introduced
into the solutions by the interface boundary conditions, is related to the
binding energies of the gravitational systems.
In sequence, we use this representation of the gravitational binding energy in
order to discuss the energetic stability of the new solutions found. We
include in the stability discussion the well-known interior Schwarzschild
solution for a liquid sphere, which can be obtained as a specific limit of the
solutions that were previously obtained for the liquid shells. We show that
this particular family of solutions turns out to have zero binding energy and
therefore to be a maximally unstable one, from the energetic point of view
discussed here.
We also perform a numerical exploration of the energetic stability criterion
of the liquid shell solutions, all of which have strictly positive binding
energies, and show that indeed there is a particular subset of the solutions
which are energetically stable. All these solutions have the form of shells
with non-vanishing internal radii. This reduces the original three-parameter
family of liquid shell solutions to a two-parameter family of energetically
stable solutions.
## 1 Introduction
The issue of the energy in General Relativity is a difficult one, and its
discussion in specific examples quite often becomes involved and obscure. The
difficulties start at the very foundations of the theory, with the
impossibility of defining an energy-momentum tensor density for the
gravitational field itself, a problem which apparently is related to the
impossibility of localizing the energy of the gravitational field in the
general case [1].
However, a recently discovered new class of static and time-independent exact
solutions [2] provides us with an opportunity to discuss the subject in a
clear, precise and complete manner. It leads to a simple and clear
characterization of all the energies involved in this class of solutions, as
well as a characterization of the relations among them, which establishes an
important connection with the fundamental concept of the conservation of
energy.
It is noteworthy that results similar to the ones we presented in [2] were
obtained for the case of neutron stars, with the Chandrasekhar equation of
state [3], by Ni [4] and Neslušan [5]. Just as in [2], the analysis of that
case also led to an inner vacuum region containing a singularity at the origin
and a gravitational field which is repulsive with respect to that origin. This
tends to indicate that these results are general at least to some extent. It
is to be expected that the ideas regarding the energy that we present here
will be useful in that case as well.
This paper is organized as follows: in the remainder of this introduction we
quickly review the new class of static and time-independent exact solutions
for liquid shells, as well as the interior Schwarzschild solution, which can
be obtained from the new shell solutions in a certain limit; in Section 2 we
establish certain general integral formulas for all the energies involved; in
Section 3 we establish the general physical interpretation of the energies
involved, including for both the shell solutions and the interior
Schwarzschild solution; in Section 4 we perform a small numerical exploration
of the energetic stability of the shell solutions, and in Section 5 we state
our conclusions.
### 1.1 The Liquid Shell Solutions
In a previous paper [2] we established the solution of the Einstein field
equations for the case of a spherically symmetric shell of liquid fluid
located between the radial positions $r_{1}$ and $r_{2}$ of the Schwarzschild
system of coordinates. This is a three-parameter family of solutions, which
can be taken as any three of the four parameters $r_{1}$, $r_{2}$, $M$ and
$\rho_{0}$. The matter distribution is characterized by the radii $r_{1}$ and
$r_{2}$, by its total asymptotic gravitational mass $M$, associated to the
Schwarzschild radius $r_{M}$, and by a matter energy density $\rho_{0}$ which
is constant with the radial Schwarzschild coordinate $r$ within
$(r_{1},r_{2})$, and zero outside that interval. In this work we will use the
time-like signature $(+,-,-,-)$, following [1]. In terms of the coefficients
of the metric, for an invariant interval given in terms of the Schwarzschild
coordinates $(t,r,\theta,\phi)$ by
$ds^{2}=\,{\rm e}^{2\nu(r)}c^{2}dt^{2}-\,{\rm
e}^{2\lambda(r)}dr^{2}-r^{2}\left[d\theta^{2}+\sin^{2}(\theta)d\phi^{2}\right],$
(1)
where $\exp[\nu(r)]$ and $\exp[\lambda(r)]$ are two positive functions of only
$r$, as was explained in [2] the Einstein field equations reduce to the set of
three first-order differential equations
$\displaystyle\left\\{1-2\left[r\lambda^{\prime}(r)\right]\right\\}\,{\rm
e}^{-2\lambda(r)}$ $\displaystyle=$ $\displaystyle 1-\kappa r^{2}\rho(r),$ (2)
$\displaystyle\left\\{1+2\left[r\nu^{\prime}(r)\right]\right\\}\,{\rm
e}^{-2\lambda(r)}$ $\displaystyle=$ $\displaystyle 1+\kappa r^{2}P(r),$ (3)
$\displaystyle\left[\rho(r)+P(r)\right]\nu^{\prime}(r)$ $\displaystyle=$
$\displaystyle-P^{\prime}(r),$ (4)
where $\rho(r)$ is the energy density of the matter, $P(r)$ is the isotropic
pressure, $\kappa=8\pi G/c^{4}$, $G$ is the universal gravitational constant
and $c$ is the speed of light. In these equations the primes indicate
differentiation with respect to $r$. Given these equations, as presented in
[2] the complete solution for $\lambda(r)$ is given by
$\displaystyle\lambda(r)$ $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{lcl}-\,{\displaystyle\frac{\displaystyle
1}{\displaystyle 2}}\,\ln\\!\left({\displaystyle\frac{\displaystyle
r+r_{\mu}}{\displaystyle r}}\right)&\mbox{for}&0\;\leq r\leq
r_{1},\\\\[12.91663pt] -\,{\displaystyle\frac{\displaystyle 1}{\displaystyle
2}}\,\ln\\!\left[{\displaystyle\frac{\displaystyle\kappa\rho_{0}\left(r_{2}^{3}-r^{3}\right)+3\left(r-r_{M}\right)}{\displaystyle
3r}}\right]&\mbox{for}&r_{1}\leq r\leq r_{2},\\\\[12.91663pt]
-\,{\displaystyle\frac{\displaystyle 1}{\displaystyle
2}}\,\ln\\!\left({\displaystyle\frac{\displaystyle r-r_{M}}{\displaystyle
r}}\right)&\mbox{for}&r_{2}\leq r<\infty,\end{array}\right.$ (8)
where $r_{M}=2GM/c^{2}$, while for $\nu(r)$ we have
$\displaystyle\nu(r)$ $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{lcl}{\displaystyle\frac{\displaystyle
1}{\displaystyle 2}}\,\ln\\!\left({\displaystyle\frac{\displaystyle
1-r_{M}/r_{2}}{\displaystyle
1+r_{\mu}/r_{1}}}\right)+{\displaystyle\frac{\displaystyle 1}{\displaystyle
2}}\,\ln\\!\left({\displaystyle\frac{\displaystyle r+r_{\mu}}{\displaystyle
r}}\right)&\mbox{for}&0\;\leq r\leq r_{1},\\\\[12.91663pt]
{\displaystyle\frac{\displaystyle 1}{\displaystyle
2}}\,\ln\\!\left({\displaystyle\frac{\displaystyle r_{2}-r_{M}}{\displaystyle
r_{2}}}\right)+\ln\\!\left[z(r)\right]&\mbox{for}&r_{1}\leq r\leq
r_{2},\\\\[12.91663pt] {\displaystyle\frac{\displaystyle 1}{\displaystyle
2}}\,\ln\\!\left({\displaystyle\frac{\displaystyle r-r_{M}}{\displaystyle
r}}\right)&\mbox{for}&r_{2}\leq r<\infty,\end{array}\right.$ (12)
and finally the pressure within the shell, that is, for $r_{1}\leq r\leq
r_{2}$, is given by
$P(r)=\rho_{0}\,\frac{1-z(r)}{z(r)}.$ (13)
This solution is valid under the condition that $r_{2}>r_{M}$. In all these
expressions we have that $r_{\mu}$ is given in terms of the parameters
characterizing the system by
$r_{\mu}=\frac{\kappa\rho_{0}}{3}\left(r_{2}^{3}-r_{1}^{3}\right)-r_{M},$ (14)
we have that $\rho_{0}$ is determined algebraically in terms of $r_{1}$,
$r_{2}$ and $r_{M}$ as the solution of the transcendental algebraic equation
$\displaystyle\sqrt{\frac{r_{2}}{3\left(r_{2}-r_{M}\right)}}$ $\displaystyle=$
$\displaystyle\sqrt{\frac{r_{1}}{\kappa\rho_{0}\left(r_{2}^{3}-r_{1}^{3}\right)+3\left(r_{1}-r_{M}\right)}}+$
(15)
$\displaystyle+\frac{3}{2}\int_{r_{1}}^{r_{2}}dr\,\frac{\kappa\rho_{0}\;r^{5/2}}{\left[\kappa\rho_{0}\left(r_{2}^{3}-r^{3}\right)+3\left(r-r_{M}\right)\right]^{3/2}},$
and we have that the real function $z(r)$ is determined in terms of a non-
trivial elliptic real integral by the relation
$\displaystyle z(r)$ $\displaystyle=$
$\displaystyle\sqrt{\frac{\kappa\rho_{0}\left(r_{2}^{3}-r^{3}\right)+3\left(r-r_{M}\right)}{r}}\times$
(16)
$\displaystyle\times\left\\{\sqrt{\frac{r_{2}}{3\left(r_{2}-r_{M}\right)}}+\frac{3}{2}\int_{r_{2}}^{r}ds\,\frac{\kappa\rho_{0}\;s^{5/2}}{\left[\kappa\rho_{0}\left(r_{2}^{3}-s^{3}\right)+3\left(s-r_{M}\right)\right]^{3/2}}\right\\}.$
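Note that the identity $z(r_{2})=1$, and hence $P(r_{2})=0$ by Equation (13), holds for any choice of the parameters, since the integral in Equation (16) vanishes at $r=r_{2}$; comparing Equations (15) and (16), one also sees that Equation (15) is equivalent to the condition $z(r_{1})=1$, that is, to $P(r_{1})=0$. This can be checked numerically with a simple quadrature sketch; the parameter values below are illustrative only, in units with $\kappa=1$:

```python
import math

kappa, rho0, r1, r2, rM = 1.0, 0.1, 1.0, 2.0, 0.5  # illustrative, kappa = 1

def D(r):
    # Common radicand kappa*rho0*(r2^3 - r^3) + 3*(r - rM) from Eqs. (15)-(16).
    return kappa * rho0 * (r2**3 - r**3) + 3.0 * (r - rM)

def z(r, n=20000):
    # Equation (16): sqrt(D(r)/r) * { sqrt(r2/(3(r2-rM))) + (3/2) * integral },
    # with the integral from r2 to r evaluated by the midpoint rule.
    h = (r - r2) / n
    integral = sum(
        kappa * rho0 * s**2.5 / D(s)**1.5
        for s in (r2 + (i + 0.5) * h for i in range(n))
    ) * h
    return math.sqrt(D(r) / r) * (
        math.sqrt(r2 / (3.0 * (r2 - rM))) + 1.5 * integral
    )

assert math.isclose(z(r2, n=1), 1.0)  # z(r2) = 1, so P(r2) = 0, for any parameters
print(z(r1))  # z(r1) = 1 would mean these parameters satisfy Equation (15)
```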
The relation shown in Equation (14) is a direct consequence of the field
equations and of the interface boundary conditions associated to them. In [2]
we proved that, so long as the pressure of the liquid is positive, we must
have $r_{\mu}>0$. In fact, the hypotheses of that proof can be weakened to
require only that the pressure be strictly positive at a single point. This
strictly positive value of $r_{\mu}$ implies that the solution has a
singularity at the origin. However, that singularity is not associated to an
infinite concentration of matter, but rather, as explained in [2], to zero
energy density at that point. Also, the solution introduces into the system
the new physical parameter $r_{\mu}$ with dimensions of length, which can be
associated to a mass parameter $\mu$ in the same way that $M$ is associated to
$r_{M}$.
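As a small numerical sketch (arbitrary illustrative parameters, in units with $\kappa=1$), one can check that with $r_{\mu}$ defined by Equation (14) the three branches of $\lambda(r)$ in Equation (8) match at the interfaces $r_{1}$ and $r_{2}$, and that $r_{\mu}>0$ for these parameters:

```python
import math

# Illustrative parameters (kappa = 1); not from any fitted solution.
kappa, rho0, r1, r2, rM = 1.0, 0.3, 1.0, 2.0, 0.5

# Equation (14).
r_mu = kappa * rho0 / 3.0 * (r2**3 - r1**3) - rM

def exp_m2lam_inner(r):   # e^{-2 lambda}, inner vacuum branch of Eq. (8)
    return (r + r_mu) / r

def exp_m2lam_matter(r):  # matter branch of Eq. (8)
    return (kappa * rho0 * (r2**3 - r**3) + 3.0 * (r - rM)) / (3.0 * r)

def exp_m2lam_outer(r):   # outer vacuum branch of Eq. (8)
    return (r - rM) / r

# Continuity of the radial metric coefficient at the two interfaces.
assert math.isclose(exp_m2lam_inner(r1), exp_m2lam_matter(r1))
assert math.isclose(exp_m2lam_matter(r2), exp_m2lam_outer(r2))
assert r_mu > 0.0
print(r_mu)  # 0.2 for these parameters
```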
### 1.2 The Interior Schwarzschild Solution
It is an interesting and somewhat remarkable fact that the well-known interior
Schwarzschild solution [6, 7] can be obtained from our solution for a shell,
even though the interior Schwarzschild solution has no singularity at the
origin, while our solution always has that singularity. Curiously enough, we
must start by assuming that $r_{\mu}=0$, even though we proved in [2] that one
must have $r_{\mu}>0$ in the shell solutions. The subtle point here is that
the proof given in [2] relies on the existence of a shell with $r_{1}>0$,
while in the case of the interior Schwarzschild solution we will have to use
$r_{1}=0$, so that the shell becomes a filled sphere. If we start by first
putting $r_{\mu}=0$ and then making $r_{1}\to 0$ in Equation (14), we are led
to the relation
$\kappa\rho_{0}=\frac{3r_{M}}{r_{2}^{3}},$ (17)
so that we may substitute $\kappa\rho_{0}$ in terms of $r_{M}$ and the radius
$r_{2}$ of the resulting sphere. Following the usual notation for the interior
Schwarzschild solution, we now define a parameter $R$, with dimensions of
length, such that $R^{2}=r_{2}^{3}/r_{M}$, in terms of which we have
$\kappa\rho_{0}=\frac{3}{R^{2}}.$ (18)
Note that the required condition that $r_{2}>r_{M}$ is translated here as the
condition that $R>r_{2}$. Making this substitution we have for $\lambda(r)$
inside the resulting sphere, directly from the line in Equation (8) for the
case of the matter region, in the case in which $r_{\mu}=0$ and $r_{1}\to 0$,
$\lambda_{i}(r)=-\,\frac{1}{2}\,\ln\\!\left[1-\left(\frac{r}{R}\right)^{2}\right],$
(19)
which implies that for the radial metric coefficient we have
$\,{\rm e}^{-\lambda_{i}(r)}=\sqrt{1-\left(\frac{r}{R}\right)^{2}}.$ (20)
In order to obtain $\nu(r)$ inside the sphere we must first work out the
function $z(r)$. Making the substitution of $\kappa\rho_{0}$ in terms of $R$
in the result for $z(r)$ given in Equation (16) we get
$z(r)=\sqrt{1-\left(\frac{r}{R}\right)^{2}}\left[\sqrt{\frac{r_{2}}{r_{2}-r_{M}}}+\frac{3}{2}\int_{r_{2}}^{r}ds\,\frac{s/R^{2}}{\left(1-s^{2}/R^{2}\right)^{3/2}}\right].$
(21)
It is now easy to see that in this case the remaining integral can be done,
and we get
$z(r)=\frac{3}{2}-\frac{1}{2}\,\sqrt{\frac{r_{2}}{r_{2}-r_{M}}}\sqrt{1-\left(\frac{r}{R}\right)^{2}}.$
(22)
Using again the definition of $R$, which implies that we have
$r_{M}/r_{2}=\left(r_{2}/R\right)^{2}$, we may write this as
$z(r)=\frac{3}{2}-\frac{1}{2}\,\sqrt{\frac{1-\left(r/R\right)^{2}}{1-\left(r_{2}/R\right)^{2}}}.$
(23)
Note that we have $z(r_{2})=1$, which corresponds to $P(r_{2})=0$, so that the
boundary conditions for $z(r)$ and $P(r)$ at $r_{2}$ are still satisfied. From
this we may now obtain all the remaining results for the interior
Schwarzschild solution. From the line in Equation (12) for the case of the
matter region, in the case in which $r_{\mu}=0$ and $r_{1}\to 0$, we get for
$\nu(r)$ in the interior of the sphere
$\nu_{i}(r)=\frac{1}{2}\,\ln\\!\left[1-\left(\frac{r_{2}}{R}\right)^{2}\right]+\ln\\!\left[\frac{3}{2}-\frac{1}{2}\,\sqrt{\frac{1-\left(r/R\right)^{2}}{1-\left(r_{2}/R\right)^{2}}}\right],$
(24)
which implies that for the temporal metric coefficient we have
$\,{\rm
e}^{\nu_{i}(r)}=\frac{3}{2}\sqrt{1-\left(\frac{r_{2}}{R}\right)^{2}}-\frac{1}{2}\,\sqrt{1-\left(\frac{r}{R}\right)^{2}}.$
(25)
Finally, from Equation (13), in the case in which $r_{\mu}=0$ and $r_{1}\to
0$, we get for the pressure $P(r)$ within the sphere
$P(r)=\rho_{0}\,\frac{\sqrt{1-\left(r/R\right)^{2}}-\sqrt{1-\left(r_{2}/R\right)^{2}}}{3\sqrt{1-\left(r_{2}/R\right)^{2}}-\sqrt{1-\left(r/R\right)^{2}}}.$
(26)
These are indeed the correct results for the case of the interior
Schwarzschild solution. Note that all the arguments of the logarithms and of
the square roots are positive due to the conditions that $R>r_{2}>r$. Note
also that in the $r_{1}\to 0$ limit the lines in Equations (8) and (12) for
the case of the inner vacuum region become irrelevant, since this region
reduces to a single point. On the other hand, the lines for the case of the
outer vacuum region do not change at all.
It is therefore apparent that the $r_{1}\to 0$ limit of our shell solutions
does reproduce the interior Schwarzschild solution, so long as we adopt the
value zero for $r_{\mu}$. Our interpretation of these facts is that the
$r_{1}\to 0$ limit to the interior Schwarzschild solution is a non-uniform
one, in which we have to leave out one point, the origin. In the $r_{1}\to 0$
limit the singularity of the shell solutions becomes a strictly point-like
one, and therefore a removable one, by a simple continuity criterion. This is
certainly the case for the energy density $\rho(r)$, which in the limit is
non-zero everywhere around the origin but at a single point, the origin
itself. The same is true for the pressure $P(r)$, which in the limit is also
non-zero around the origin but at the origin itself. Similar situations hold
for $\lambda(r)$ and $\nu(r)$, as is not difficult to see numerically. It
seems that all these functions converge in the $r_{1}\to 0$ limit to functions
with a point-like removable discontinuity at the origin.
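A quick numerical check of the interior solution, with illustrative values $r_{2}=2$ and $r_{M}=1/2$ (so that $R=\sqrt{r_{2}^{3}/r_{M}}=4$): the pressure of Equation (26) vanishes at the surface and is positive at the center, and the temporal metric coefficient of Equation (25) matches the exterior Schwarzschild value $\sqrt{1-r_{M}/r_{2}}$ at $r=r_{2}$:

```python
import math

r2, rM = 2.0, 0.5          # illustrative values with r2 > rM
R = math.sqrt(r2**3 / rM)  # R^2 = r2^3 / rM, so R = 4 here

def s(r):                  # shorthand for sqrt(1 - (r/R)^2)
    return math.sqrt(1.0 - (r / R) ** 2)

def e_nu(r):               # temporal metric coefficient, Equation (25)
    return 1.5 * s(r2) - 0.5 * s(r)

def P_over_rho0(r):        # pressure in units of rho0, Equation (26)
    return (s(r) - s(r2)) / (3.0 * s(r2) - s(r))

assert math.isclose(P_over_rho0(r2), 0.0, abs_tol=1e-12)  # P(r2) = 0
assert P_over_rho0(0.0) > 0.0                             # positive central pressure
assert math.isclose(e_nu(r2), math.sqrt(1.0 - rM / r2))   # matches exterior metric
print(P_over_rho0(0.0))
```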
## 2 Integral Expressions for the Energies
It is possible to express the masses $M$ and $\mu$, as well as the
corresponding energies $Mc^{2}$ and $\mu c^{2}$, which are associated to the
parameters with dimensions of length $r_{M}=2MG/c^{2}$ and $r_{\mu}=2\mu
G/c^{2}$ that appear in the exact solutions described in Section 1, as
integrals of the matter energy density $\rho(r)$ over coordinate volumes, in a
way similar to what is usually done for $M$ in the literature [8, 3], but
leading to very different results in the case of the shell solutions. In order
to do this in a simple and organized way, we first change variables in the
field equations from $\lambda(r)$ to $\beta(r)$, which is defined to be such
that
$\,{\rm e}^{2\lambda(r)}=\frac{r}{r-r_{M}\beta(r)},$ (27)
which then implies that we have for the corresponding derivatives
$2r\lambda^{\prime}(r)=-r_{M}\,\frac{\beta(r)-r\beta^{\prime}(r)}{r-r_{M}\beta(r)}.$
(28)
Note that $\beta(r)=0$ corresponds to $\lambda(r)=0$ and therefore to
$\exp[2\lambda(r)]=1$ for the radial coefficient of the metric. In such cases
the variations of the radial coordinate are equal to the variations of the
corresponding proper lengths. Substituting these expressions into the
component field equation shown in Equation (2) yields a very simple relation
giving the derivative of $\beta(r)$ in terms of $\rho(r)$,
$\beta^{\prime}(r)=\frac{\kappa r^{2}\rho(r)}{r_{M}}.$ (29)
Therefore, wherever $\rho(r)=0$, we have that $\beta(r)$ is a constant. Note
that these facts are completely general for the spherically symmetric static
case, in the sense that they are not limited to the case in which $\rho(r)$ is
constant within the matter region. It then follows from Equation (8) that we
have that $\beta(r)=1>0$ in the outer vacuum region, and in particular at
$r_{2}$, and that we have that $\beta(r)=-r_{\mu}/r_{M}<0$ in the inner vacuum
region, and in particular at $r_{1}$. Since $\beta(r)$ is a continuous
function that goes from negative values at $r_{1}$ to positive values at
$r_{2}$, it follows that there is a radial position $r_{z}$ within the matter
region where $\beta(r_{z})=0$, regardless of whether or not $\rho(r)$ is
constant within the shell. At this particular radial position we also have
that $\lambda(r_{z})=0$.
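The relation in Equation (29), and the sign change of $\beta(r)$ across the shell, can be checked by finite differences, building $\beta(r)$ from the matter-region $\lambda(r)$ of Equation (8) via Equation (27) (illustrative parameters, in units with $\kappa=1$):

```python
import math

kappa, rho0, r1, r2, rM = 1.0, 0.3, 1.0, 2.0, 0.5  # illustrative, kappa = 1

def exp_m2lam(r):
    # e^{-2 lambda(r)} in the matter region, from Equation (8).
    return (kappa * rho0 * (r2**3 - r**3) + 3.0 * (r - rM)) / (3.0 * r)

def beta(r):
    # Equation (27) inverted: beta(r) = r * (1 - e^{-2 lambda(r)}) / rM.
    return r * (1.0 - exp_m2lam(r)) / rM

# Equation (29): beta'(r) = kappa r^2 rho(r) / rM, checked by central differences.
r, h = 1.5, 1e-6
beta_prime = (beta(r + h) - beta(r - h)) / (2.0 * h)
assert math.isclose(beta_prime, kappa * r**2 * rho0 / rM, rel_tol=1e-6)

# beta changes sign across the shell, so a zero r_z exists in (r1, r2).
assert beta(r1) < 0.0 < beta(r2)
```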
Let us now consider the integral of the energy density over a coordinate
volume within the matter region, where $\rho(r)\neq 0$, say from an arbitrary
point $r_{a}$ to another point $r_{b}>r_{a}$,
$\int_{r_{a}}^{r_{b}}dr\int_{0}^{\pi}d\theta\int_{0}^{2\pi}d\phi\,r^{2}\sin(\theta)\rho(r)=4\pi\int_{r_{a}}^{r_{b}}dr\,r^{2}\rho(r),$
(30)
where we integrated over the angles. Note that this is not an integral over
the proper volume, but just an integral over the coordinate volume, since we
are missing here the remaining factor $\exp[\lambda(r)+\nu(r)]$ of the
Jacobian $\sqrt{-g}$. Since we have the three special points $r_{1}$, $r_{z}$
and $r_{2}$ where the values of $\beta(r)$ are known, let us consider now the
integral of the energy density over the coordinate volume from $r_{z}$ to
$r_{2}$. Using Equation (29) we get
$4\pi\int_{r_{z}}^{r_{2}}dr\,r^{2}\rho(r)=4\pi\frac{r_{M}}{\kappa}\int_{r_{z}}^{r_{2}}dr\,\beta^{\prime}(r).$
(31)
One can now see that the integral is trivial, and since we have that
$\beta(r_{z})=0$ and that $\beta(r_{2})=1$, we get
$Mc^{2}=4\pi\int_{r_{z}}^{r_{2}}dr\,r^{2}\rho(r),$ (32)
where we have replaced $\kappa$ and $r_{M}$ by their values in terms of $M$
and $c$. We have therefore an expression for the energy $Mc^{2}$ in terms of a
coordinate volume integral of the energy density. Note however that the
integral does not run over the whole matter region, since it starts at $r_{z}$
rather than at $r_{1}$. In a similar way, if we consider the integral from
$r_{1}$ to $r_{z}$, we get
$4\pi\int_{r_{1}}^{r_{z}}dr\,r^{2}\rho(r)=4\pi\frac{r_{M}}{\kappa}\int_{r_{1}}^{r_{z}}dr\,\beta^{\prime}(r).$
(33)
Once again one can see that the integral is trivial, and since we have that
$\beta(r_{z})=0$ and that $\beta(r_{1})=-r_{\mu}/r_{M}$, we now get
$\mu c^{2}=4\pi\int_{r_{1}}^{r_{z}}dr\,r^{2}\rho(r),$ (34)
where we have replaced $\kappa$ and $r_{\mu}$ by their values in terms of
$\mu$ and $c$. We have therefore an expression for the energy $\mu c^{2}$ in
terms of a coordinate volume integral of the energy density.
If we now consider the integral over the whole matter region, due to the
additive property of the integrals over the union of disjoint domains, using
Equations (32) and (34) we obtain the result that
$4\pi\int_{r_{1}}^{r_{2}}dr\,r^{2}\rho(r)=\mu c^{2}+Mc^{2}.$ (35)
This is a sum of energies, and is therefore also an energy, to which we will
associate a mass parameter $M_{u}$, such that this energy is given by
$M_{u}c^{2}$, so that we have the relation
$M_{u}c^{2}=\mu c^{2}+Mc^{2}.$ (36)
We see therefore that the point $r_{z}$ where $\beta(r_{z})=0$ and therefore
$\lambda(r_{z})=0$ plays a particular role when it comes to the determination
of the energies involved.
Note that all this is true for any function $\rho(r)$ within the matter
region. For our specific case here, with a constant $\rho_{0}$, we find from
Equation (8) that we have within the matter region
$\beta(r)=1-\frac{\kappa\rho_{0}}{3r_{M}}\,\left(r_{2}^{3}-r^{3}\right),$ (37)
so that in this case we have for the zero $r_{z}$ of $\beta(r)$
$r_{z}=\left(r_{2}^{3}-\frac{3r_{M}}{\kappa\rho_{0}}\right)^{1/3}.$ (38)
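The energy bookkeeping of Equations (32), (34) and (36) can be verified directly for the constant-density case (illustrative parameters, in units with $\kappa=1$): with $\beta(r)$ from Equation (37) one has $\beta(r_{1})=-r_{\mu}/r_{M}$, $\beta(r_{z})=0$ and $\beta(r_{2})=1$, and the coordinate-volume integrals of $\rho_{0}$ split into $\mu c^{2}$ and $Mc^{2}$ exactly at $r_{z}$:

```python
import math

kappa, rho0, r1, r2, rM = 1.0, 0.3, 1.0, 2.0, 0.5  # illustrative, kappa = 1
r_mu = kappa * rho0 / 3.0 * (r2**3 - r1**3) - rM   # Equation (14); 0.2 here

def beta(r):  # Equation (37)
    return 1.0 - kappa * rho0 / (3.0 * rM) * (r2**3 - r**3)

rz = (r2**3 - 3.0 * rM / (kappa * rho0)) ** (1.0 / 3.0)  # Equation (38)

assert math.isclose(beta(r1), -r_mu / rM)
assert math.isclose(beta(rz), 0.0, abs_tol=1e-12)
assert math.isclose(beta(r2), 1.0)

def coord_energy(ra, rb):
    # 4*pi * integral of r^2 rho0 dr from ra to rb, done analytically.
    return 4.0 * math.pi * rho0 * (rb**3 - ra**3) / 3.0

# Equations (32) and (34): in these units M c^2 = 4*pi*rM/kappa and
# mu c^2 = 4*pi*r_mu/kappa, so the split at r_z reproduces both energies.
assert math.isclose(coord_energy(rz, r2), 4.0 * math.pi * rM / kappa)
assert math.isclose(coord_energy(r1, rz), 4.0 * math.pi * r_mu / kappa)
print(rz)  # 3**(1/3), about 1.442, for these parameters
```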
Note that, although all these integrals are written in terms of the energy
density $\rho(r)$ of the matter, none of them represents just the energy of
only the matter itself. In fact we must now interpret the meaning of each one
of these expressions, which is what we will do in the next section.
## 3 Physical Interpretation of the Energies
Of the three energies at play here, namely $M_{u}c^{2}$, $\mu c^{2}$ and
$Mc^{2}$, only the last one has a well established meaning at this point.
Since $M$ is the asymptotic gravitational mass of the system, that is, the
gravitational mass seen as the source of the gravitational field at large
radial distances, the standard interpretation in General Relativity is that
the energy $Mc^{2}$ is the total energy of this gravitational system, bound
into the shell by the gravitational interactions, and which from now on we
will simply call the bound system. It includes both the energy of the matter
in the bound state and the energy stored in the gravitational field itself,
also in this bound state. The energy density $\rho(r)$ is the amount of energy
of the matter, per unit volume, as seen by a stationary local observer at the
radial position $r$.
Our first task here is to establish the physical interpretation of the energy
$M_{u}c^{2}$. In order to do this, the first thing to be done is to define an
unbound system related to our bound system as defined above. This unbound
system is what we get when we scatter all the elements of the shell to very
large distances from each other, in order to eliminate all the gravitational
interactions, but without making any changes in the energy content of the
matter. We will show here that the energy $M_{u}c^{2}$ is the total energy of
this unbound system. We will do this by performing a mathematical
transformation on the integral in Equation (35), which with the use of
Equation (36) leads to the following expression in terms of a volume integral
$M_{u}c^{2}=\int_{r_{1}}^{r_{2}}dr\int_{0}^{\pi}d\theta\int_{0}^{2\pi}d\phi\,r^{2}\sin(\theta)\rho(r).$
(39)
The transformation, applied to the right-hand side of this equation, will
allow us to interpret the meaning of the left-hand side. This will be done in
a general way, for any function $\rho(r)$ within the matter region. This
transformation will consist in fact of the construction of a second integral,
based on the concept of the Riemann sums of the volume integral shown in
Equation (39).
Let us consider therefore an arbitrary Riemann partition of the integral in
Equation (39), consisting of a finite number of cells $\delta V_{n}$ with
coordinate volume and linear coordinate dimensions below certain maximum
values, where $n\in\\{1,\ldots,N\\}$. By definition of a partition the sum of
all these volume elements is equal to the coordinate volume $V$ of the shell,
$V=\sum_{n=1}^{N}\delta V_{n},$ (40)
where we will assume that each volume element is at the spatial position
$\vec{r}_{n}$, as illustrated in Figure 1. The energy $M_{u}c^{2}$ can
therefore be written as the integration limit of the Riemann sum over this
partition,
$M_{u}c^{2}=\lim_{N\to\infty}\sum_{n=1}^{N}\rho(r_{n})\delta V_{n},$ (41)
where $r_{n}=|\vec{r}_{n}|$.
Figure 1: Illustration of the geometrical transformation of the integral over
the shell.
We now consider the mathematical transformation
in which we map each volume element $\delta V_{n}$ at $\vec{r}_{n}$ onto an
identical volume element $\delta V^{\prime}_{n}$ at the coordinate position
$\vec{r}^{\;\prime}_{n}=\alpha\vec{r}_{n}$, for some large positive real
number $\alpha$, without changing the coordinate volume of the volume
elements. The result is a new set of volume elements, all at large distances
from each other, whose sum is still equal to the coordinate volume of the
shell,
$V=\sum_{n=1}^{N}\delta V^{\prime}_{n}.$ (42)
The geometrical transformation leading to the construction of the new integral
is illustrated in Figure 1. Note that no physical transport of the matter or
of the energy within the volume elements $\delta V_{n}$ of the shell is meant
here, so that there are no actual physical transformations involved.
After defining the volume elements $\delta V^{\prime}_{n}$ at large distances
in this fashion, we now put within each one of these new volume elements
exactly the same amount of mass and energy that we have in the corresponding
coordinate volume elements $\delta V_{n}$ of the shell. This means putting
into each volume element $\delta V^{\prime}_{n}$ at infinity the same numbers
of the same types of particles, as well as the same amount of thermal energy
and pressure, as seen by a stationary local observer at the position
$\vec{r}^{\;\prime}_{n}$, that a stationary local observer at $\vec{r}_{n}$
sees within $\delta V_{n}$. In other words, we associate to each volume
element at infinity the same value of the energy density
$\rho(r^{\prime}_{n})=\rho(r_{n})$ that we had for the corresponding volume
element of the shell, where $r^{\prime}_{n}=|\vec{r}^{\;\prime}_{n}|$ and
$r_{n}=|\vec{r}_{n}|$.
For large values of $\alpha$ these elements of mass and energy within $\delta
V^{\prime}_{n}$ are all at large distances from each other, so as to render
the gravitational interactions among them negligible. In the $\alpha\to\infty$
limit all the gravitational interactions among the volume elements $\delta
V^{\prime}_{n}$ go to zero. Besides, in the integration limit each element of
mass and energy so constructed tends to zero, so that the gravitational self-
interactions within each volume element also become negligible. However,
independently of either limit, by construction the total coordinate volume of
the elements of volume at infinity remains equal to the coordinate volume of
the shell. Therefore, by construction the corresponding sum of all the energy
elements of energy at infinity is the same as the Riemann sum that appears in
Equation (41),
$\sum_{n=1}^{N}\rho(r^{\prime}_{n})\delta V^{\prime}_{n}=\sum_{n=1}^{N}\rho(r_{n})\delta V_{n}.$ (43)
Now, at radial infinity spacetime is flat, so that the coordinate volume of
each volume element $\delta V^{\prime}_{n}$ coincides with its proper volume,
and hence the energy element $\rho(r^{\prime}_{n})\delta V^{\prime}_{n}$ is
the total energy of that element of matter, so that the sum of all these
energy elements is the total energy of the matter at infinity. In other words,
once we take the integration limit the integral given in Equation (39) gives
us the total energy of the system at infinity, which is free from all
gravitational bindings. Hence we will name the quantity $M_{u}c^{2}$ the total
energy of the unbound system. This is the total energy of the system when all
gravitational interactions have been eliminated by increasing without limit
the distances among its elements. This is in both analogy and contrast with
the quantity $Mc^{2}$, which is the total energy of the bound system, after
all its parts have been brought together to form the shell.
Note that this whole argument is general, in the sense that it is not limited
to the case in which $\rho(r)=\rho_{0}$ is a constant. In our case here, since
$\rho(r)=\rho_{0}$ is a constant, the total energy of the unbound system is
just the product of $\rho_{0}$ by the coordinate volume $V$ of the shell,
$M_{u}c^{2}=\rho_{0}V.$ (44)
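The passage from the Riemann sum in Equation (41) to the product in Equation (44) can be checked numerically. The following sketch is purely illustrative and not part of the paper; the function name and test values are ours. It approximates the volume integral of Equation (39), with the angular integrals done analytically, and compares the result with $\rho_{0}V$:

```python
import math

def unbound_energy_riemann(rho, r1, r2, n=100_000):
    """Riemann-sum approximation of Eq. (39), with the angular integrals
    done analytically: M_u c^2 = 4*pi * Int_{r1}^{r2} dr r^2 rho(r)."""
    dr = (r2 - r1) / n
    total = 0.0
    for i in range(n):
        r = r1 + (i + 0.5) * dr  # midpoint of the i-th radial cell
        total += rho(r) * r * r * dr
    return 4.0 * math.pi * total

# Constant-density case: the limit of the sum must reproduce Eq. (44),
# M_u c^2 = rho_0 * V, with V the coordinate volume of the shell.
rho0, r1, r2 = 2.0, 1.0, 3.0
V = (4.0 * math.pi / 3.0) * (r2**3 - r1**3)
approx = unbound_energy_riemann(lambda r: rho0, r1, r2)
print(abs(approx - rho0 * V) / (rho0 * V))  # tiny relative error
```

Since the argument is general, the same function also handles a non-constant $\rho(r)$, in line with the remark above.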
Our next task here is to establish the physical interpretation of the energy
$\mu c^{2}$. From Equation (36) we have that the energy parameter $\mu c^{2}$
is the difference between the total energy of the unbound system and the total
energy of the bound system,
$\mu c^{2}=M_{u}c^{2}-Mc^{2},$ (45)
and therefore we conclude that it is the binding energy of the system. It is
the amount of energy that must be given to the system in order to disperse its
elements to infinity, thus eliminating all the gravitational bindings between
those elements. It is also the amount of energy that must be dissipated by the
system during the process of its assembly into the bound system, starting from
the unbound system at infinity. The theorem we proved in [2], in the
$\rho(r)=\rho_{0}$ case that we have here, namely that we must have
$r_{\mu}>0$, is equivalent to the statement that the bound system must have a
finite, positive and non-zero binding energy. This is, of course, closely
related to the attractive nature of the gravitational interaction between
particles.
Note that, although all these integrals are written in terms of the energy
density $\rho(r)$ of the matter, the energy $Mc^{2}$ is not the energy
$M_{m}c^{2}$ of just the matter within the bound system. That would be given
by the integral with the full Jacobian factor $\sqrt{-g}$, where $g$ is the
determinant of $g_{\mu\nu}$, which in our case here results in
$M_{m}c^{2}=4\pi\int_{r_{1}}^{r_{2}}dr\,r^{2}\,{\rm e}^{\lambda(r)+\nu(r)}\rho(r).$ (46)
As a partial consistency check, it is not difficult to verify that this energy
is always smaller than $M_{u}c^{2}$, due to the fact that the exponent
$\lambda(r)+\nu(r)$ is always negative within the matter region. In order to
show this we just take the difference between the component field equations
shown in Equations (3) and (2), thus obtaining
$\left[\lambda(r)+\nu(r)\right]^{\prime}=\frac{\kappa}{2}\,{\rm e}^{2\lambda(r)}r\left[\rho(r)+P(r)\right].$ (47)
Since all quantities appearing on the right-hand side are positive or zero, we
may conclude that the derivative of the exponent is non-negative. However, we
have that $\lambda(r_{2})+\nu(r_{2})=0$, since this exponent is identically
zero within the outer vacuum region. It follows that
$\lambda(r)+\nu(r)<0,$ (48)
and therefore that
$\,{\rm e}^{\lambda(r)+\nu(r)}<1,$ (49)
throughout the whole matter region, with the exception of the single point
$r_{2}$ where the exponential is equal to one. Therefore, it follows for the
two integrals that
$4\pi\int_{r_{1}}^{r_{2}}dr\,r^{2}\,{\rm e}^{\lambda(r)+\nu(r)}\rho(r)<4\pi\int_{r_{1}}^{r_{2}}dr\,r^{2}\rho(r),$ (50)
and therefore that $M_{m}c^{2}<M_{u}c^{2}$. The difference $Mc^{2}-M_{m}c^{2}$
is the part of the energy of the bound system which is not the energy of the
matter itself, but rather the energy stored in the gravitational field. In
general, in order to determine this difference, $M_{m}c^{2}$ has to be
calculated numerically.
### 3.1 Energetic Stability
This interpretation of the parameters involved leads right away to the idea
that we may define a notion of energetic stability of the solutions obtained,
in the general spirit of the principle of virtual work. Given certain
constraints regarding some of the parameters of the solutions, we may obtain
the parameter $r_{\mu}$ as a function of the remaining parameters of the
system. Within this class of solutions, if there are two with different values
of $r_{\mu}$, which is proportional to the binding energy $\mu c^{2}$, then in
principle the constrained system will tend to go from the one with the smaller
value of $r_{\mu}$ to the one with the larger value, given the existence of a
permissible path between the two solutions. This type of analysis allows us to
acquire some information about the dynamical behavior of the system, without
having to find explicitly the corresponding time-dependent solutions.
Let us exemplify this with our current system, in a way that is physically
illustrative. Our system contains four parameters, namely $r_{1}$, $r_{2}$,
$r_{M}$ and $\rho_{0}$, of which only three are independent. As was explained
in [2], these four parameters are related by the condition in Equation (15).
Given any three of the parameters, that equation can be used to determine the
fourth in terms of those three. Let us assume that we are given fixed values
of both $M$ and $\rho_{0}$, thus determining the local properties of the
matter and the total amount of energy of the bound system. This is equivalent
to fixing $r_{M}$ and $\rho_{0}$, and therefore the result of solving Equation
(15) is to establish $r_{1}$ as a function of $r_{2}$. We therefore are left
with a collection of solutions parametrized by a single real parameter, the
external radius $r_{2}$. We may then determine $r_{\mu}(r_{2})$ and verify
whether this function has a single local maximum at a certain value of
$r_{2}$. This then identifies that particular solution which is stable, or
that has the largest binding energy, among all others, given the constraints
described.
Another approach, slightly more indirect, but perhaps simpler and more
physically compelling, would be to keep constant the local parameter
$\rho_{0}$ and the energy $M_{u}c^{2}$ of the unbound system. This fixes the
local properties of the matter and the total energy of the unbound system that
we start with, and we may then ask which is the solution that corresponds to
the most tightly bound system that can be assembled from that unbound system.
Since the energy of the unbound system is the product of $\rho_{0}$ by the
coordinate volume $V$ of the shell, as can be seen in Equation (44), keeping
fixed both $\rho_{0}$ and $M_{u}$ corresponds to keeping that coordinate
volume fixed at a value $V_{0}$, which is given by
$V_{0}=\frac{4\pi}{3}\left(r_{2}^{3}-r_{1}^{3}\right).$ (51)
This immediately determines $r_{2}$ as a simple function $r_{2}(r_{1})$ of
$r_{1}$. Then solving Equation (15) results in $r_{M}$ being given as a
function $r_{M}(r_{1})$ of $r_{1}$ for the fixed value of $\rho_{0}$ and the
fixed coordinate volume $V_{0}$. This corresponds to the energy of the bound
system with internal radius $r_{1}$, for the given fixed values of $\rho_{0}$
and $V_{0}$. The minimum of this function gives us the value of $r_{1}$ that
corresponds to the most tightly bound system that can be assembled from a
given unbound system. Other solutions in the same family, with other values of
$r_{1}$, will tend to decay into this one, given a permissible decay path
between the two solutions involved. We will execute this program numerically
in Section 4.
We saw that in the case of the interior Schwarzschild solution we have the
value zero for $r_{\mu}$. This implies that the resulting solution has zero
gravitational binding energy, and that its energy is the same as the energy of
the corresponding unbound system, which is a very strange and even bizarre
situation indeed. This means that the resulting solution is not only
energetically unstable, but that it is in fact maximally energetically
unstable, since the bound system cannot possibly have more energy than the
unbound system. Given a permissible path, in principle one would be able to
disassemble the matter distribution of the interior Schwarzschild solution,
taking every element of matter to infinity, without giving any energy at all
to the system. This is quite unrealistic, and may be the reason why this
solution has never proved to be a very useful one.
## 4 Numerical Exploration of the Binding Energy
Here we will explore numerically the issues of the binding energy and of the
energetic stability of the shell solutions. In this exploration we will keep
fixed the local energy density parameter $\rho_{0}$, as well as the total
energy $M_{u}c^{2}$ of the unbound system. Our objective will be then to
determine the existence and the parameters of the maximally bound shell
solution. We will do this by calculating the energy $Mc^{2}$ of the bound
system and showing that it has a point of minimum as a function of $r_{1}$.
Since we keep fixed the parameter $\rho_{0}$, and since the energy of the
unbound system is given by $M_{u}c^{2}=\rho_{0}V_{0}$, this implies that we
also keep fixed the coordinate volume $V_{0}$ of the shell, given in Equation
(51), which immediately establishes $r_{2}$ as a given function of $r_{1}$,
$r_{2}(r_{1})=\left(r_{1}^{3}+\frac{3V_{0}}{4\pi}\right)^{1/3}.$ (52)
Therefore, of the three free parameters of our solutions, which can be taken
to be $r_{1}$, $r_{2}$ and $\rho_{0}$, one is being kept fixed and another is
a given function, so that we are left with only one free parameter, which we
will take to be $r_{1}$. Under these circumstances we have that $r_{M}$, and
therefore both the mass $M$ and the energy $Mc^{2}$ of the bound system, are
functions of $r_{1}$, with values that are left to be determined numerically.
Figure 2: Graph of the energy of the bound system as a function of $\xi_{1}$,
for a fixed energy of the unbound system, given by $\vartheta_{0}=2$, and with
$\xi_{1}$ in $[1,5]$.
Figure 3: Graph of the energy of the bound system as a function of $\xi_{1}$,
for a fixed energy of the unbound system, given by $\vartheta_{0}=5$, and with
$\xi_{1}$ in $[1,5]$.
Figure 4: Graph of the energy of the bound system as a function of $\xi_{1}$,
for a fixed energy of the unbound system, given by $\vartheta_{0}=10$, and
with $\xi_{1}$ in $[1,5]$.
Figure 5: Graph of the energy of the bound system as a function of $\xi_{1}$,
for a fixed energy of the unbound system, given by $\vartheta_{0}=20$, and
with $\xi_{1}$ in $[1,5]$.
In order to perform the numerical work it is convenient to first rescale the
variables, creating a set of equivalent dimensionless variables. Since under
these conditions $\kappa\rho_{0}$ is a constant which has dimensions of
inverse square length, we will define a constant $r_{0}$ with dimensions of
length by
$r_{0}=\frac{1}{\sqrt{\kappa\rho_{0}}}.$ (53)
Having now the known constant $r_{0}$, we use it in order to define the set of
dimensionless parameters given by
$\xi_{1}=\frac{r_{1}}{r_{0}},\qquad\xi_{2}=\frac{r_{2}}{r_{0}},\qquad\xi_{M}=\frac{r_{M}}{r_{0}},\qquad\vartheta_{0}=\frac{3V_{0}}{4\pi r_{0}^{3}},$ (54)
where $\vartheta_{0}$ is the ratio between the coordinate volume $V_{0}$ of
the shell and the volume of a Euclidean sphere of radius $r_{0}$. The
expression in Equation (52) giving $r_{2}$ as a function of $r_{1}$ is now
translated as
$\xi_{2}(\xi_{1})=\left(\vartheta_{0}+\xi_{1}^{3}\right)^{1/3}.$ (55)
Note, for subsequent use, that this can also be written as
$\xi_{2}^{3}-\xi_{1}^{3}=\vartheta_{0}$. The relation which we must now use in
order to determine $\xi_{M}$ is that given in Equation (15), which upon
rescalings by $r_{0}$ can be written as
$\sqrt{\frac{\xi_{2}}{3\left(\xi_{2}-\xi_{M}\right)}}=\sqrt{\frac{\xi_{1}}{\xi_{2}^{3}-\xi_{1}^{3}+3\left(\xi_{1}-\xi_{M}\right)}}+\frac{3}{2}\int_{\xi_{1}}^{\xi_{2}}d\xi\,\frac{\xi^{5/2}}{\left[\xi_{2}^{3}-\xi^{3}+3\left(\xi-\xi_{M}\right)\right]^{3/2}},$ (56)
where we changed variables in the integral from $r$ to $\xi=r/r_{0}$.
Substituting for $\vartheta_{0}$ where possible we have the following non-
trivial algebraic equation that determines $\xi_{M}$ and therefore $r_{M}$,
$\sqrt{\frac{\xi_{1}}{\vartheta_{0}+3\left(\xi_{1}-\xi_{M}\right)}}-\sqrt{\frac{\xi_{2}}{3\left(\xi_{2}-\xi_{M}\right)}}+\frac{3}{2}\int_{\xi_{1}}^{\xi_{2}}d\xi\,\frac{\xi^{5/2}}{\left[\xi_{2}^{3}-\xi^{3}+3\left(\xi-\xi_{M}\right)\right]^{3/2}}=0.$ (57)
Our objective here is to solve this equation in order to get
$\xi_{M}(\xi_{1})$, given a fixed value of $\vartheta_{0}$ and with $\xi_{2}$
given by Equation (55). Note that, due to the homogeneous scalings leading
from the dimensionful quantities to the dimensionless ones, shown in Equation
(54), each solution of this equation is valid for any value of $\rho_{0}$,
which no longer appears explicitly. The same is true of the graphs to be
generated using this equation. Given a value of $\vartheta_{0}$, the
corresponding graph represents the results for all the possible strictly
positive values of the energy density $\rho_{0}$.
There are two main numerical tasks here, the calculation of the integral and
the resolution of this algebraic equation for $\xi_{M}$. The integral can be
readily and efficiently calculated by a cubic interpolation method, using the
values of the integrand and of its derivative at the two ends of each
integration interval. So long as we can return the value of the integral
without too much trouble, Equation (57) can be readily and efficiently solved
by an exponential sandwich (or bisection) method [9]. There are two readily
available and robust initial upper and lower bounds for the value of
$\xi_{M}$, the minimum possible lower bound being zero, and the maximum
possible upper bound being the energy of the unbound system, since we must
have that $Mc^{2}<M_{u}c^{2}$, which in terms of the dimensionless parameters
translates as $\xi_{M}<\vartheta_{0}/3$. We may therefore start the process
with a lower bound $\xi_{M\ominus}=0$ and an upper bound
$\xi_{M\oplus}=\vartheta_{0}/3$ for $\xi_{M}$. In practice, the efficiency of
this algorithm may be highly dependent on the use of a tighter pair of bounds.
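A minimal sketch of this scheme follows; it is our own illustration, not the author's program, the function names are ours, and a simple midpoint rule stands in for the cubic-interpolation quadrature described above. It evaluates the left-hand side of Equation (57) and provides a plain bisection routine for the bracket $[\xi_{M\ominus},\xi_{M\oplus}]=[0,\vartheta_{0}/3]$:

```python
import math

def xi2(xi1, theta0):
    """Equation (55): xi_2 as a function of xi_1 for fixed theta_0."""
    return (theta0 + xi1**3) ** (1.0 / 3.0)

def residual(xi_m, xi1, theta0, n=2000):
    """Left-hand side of Equation (57); a midpoint rule replaces the
    paper's cubic-interpolation quadrature for the integral."""
    x2 = xi2(xi1, theta0)
    term1 = math.sqrt(xi1 / (theta0 + 3.0 * (xi1 - xi_m)))
    term2 = math.sqrt(x2 / (3.0 * (x2 - xi_m)))
    dxi = (x2 - xi1) / n
    integral = 0.0
    for i in range(n):
        xi = xi1 + (i + 0.5) * dxi
        integral += xi**2.5 / (x2**3 - xi**3 + 3.0 * (xi - xi_m)) ** 1.5 * dxi
    return term1 - term2 + 1.5 * integral

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    sign_lo = 1.0 if f(lo) > 0.0 else -1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * sign_lo > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity checks: Eq. (55) preserves the volume relation xi2^3 - xi1^3 = theta0,
# and the bisection routine recovers a known root of a simple function.
theta0, x1 = 2.0, 2.79
print(xi2(x1, theta0) ** 3 - x1**3)             # ~ 2.0
print(bisect(lambda x: x * x - 2.0, 1.0, 2.0))  # ~ 1.41421356...
print(residual(0.0, x1, theta0))  # residual at the lower bound (no value claimed)
```

To obtain $\xi_{M}(\xi_{1})$, one would bisect `residual` in `xi_m` with `lo=0.0` and `hi=theta0/3.0`, provided the residual changes sign over that bracket, as the paper's choice of bounds presupposes.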
A few examples of the functions obtained in this way can be seen in Figures 2
through 5, which show $\xi_{M}$ as a function of $\xi_{1}$, for fixed values
of the energy of the unbound system, that is, for fixed values of
$\vartheta_{0}$. Each graph consists of $81$ data points. In order to ensure
good numerical precision we used $10^{6}$ integration intervals in the domain
$[\xi_{1},\xi_{2}]$. The exponential sandwich was iterated until a relative
precision of the order of $10^{-12}$ was reached. The four graphs shown were
generated on a high-end PC in approximately $25$ hours, $15$ hours, $62$ hours
and $154$ hours, respectively, without too much preoccupation with efficiency.
As one can see, the graphs clearly display minima of $\xi_{M}$, which are
located at certain values of $\xi_{1}$. At these minima the pairs of values
$\left(\xi_{1},\xi_{2}\right)$ are given approximately, in each case, by
$(2.79,2.87)$, $(2.72,2.93)$, $(2.60,3.02)$ and $(2.35,3.21)$, respectively.
There is freely available an open-source program [10] that can be used to
perform these calculations for any set of input parameters.
The minima of these functions give us the value of $\xi_{1}$ that corresponds
to the most tightly bound system that can be assembled from the given unbound
system in each case. With the given values of $\rho_{0}$ and $M_{u}c^{2}$, in
each case this establishes the value of $r_{1}$ for the most tightly bound and
therefore energetically stable solution, and hence determines the values of
$r_{2}$, $r_{M}$ and of all the functions describing both the spacetime
geometry and the matter for that stable solution. The limiting value of
$\xi_{M}$ when $\xi_{1}\to 0$, not shown in these graphs, corresponds to the
interior Schwarzschild solution and thus to the energy of the unbound system
in each case, which in terms of the variables shown in the graphs is given by
$\vartheta_{0}/3$. The $\xi_{1}\to\infty$ limit to the other side rises fairly
slowly and does not seem to approach this same value asymptotically, a
situation that is probably due to the fact that an infinitesimally thin shell
at infinity still has some binding energy, as compared to the corresponding
set of isolated infinitesimal point masses.
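The locations of such minima can be refined from sampled curves like these by three-point parabolic interpolation around the minimal data point. The sketch below is our own (it is not the published program [10]) and assumes a uniform grid in $\xi_{1}$; the sampled curve is an invented quadratic used only to exercise the routine:

```python
def parabolic_min(xs, ys):
    """Refine the discrete minimum of uniformly sampled data by fitting a
    parabola through the minimal sample and its two neighbours."""
    i = min(range(1, len(ys) - 1), key=lambda k: ys[k])
    h = xs[1] - xs[0]  # uniform grid spacing assumed
    y0, y1, y2 = ys[i - 1], ys[i], ys[i + 1]
    # Vertex of the interpolating parabola (exact for quadratic data).
    return xs[i] + 0.5 * h * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# Demo on samples of a known parabola with minimum at x = 2.37,
# using 81 uniformly spaced points, as in the graphs above.
xs = [1.0 + 0.05 * k for k in range(81)]
ys = [(x - 2.37) ** 2 + 0.5 for x in xs]
print(parabolic_min(xs, ys))  # recovers 2.37 (exact for a quadratic)
```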
## 5 Conclusions
In this paper we have established the energetic interpretation of the exact
solutions obtained in a previous paper for spherically symmetric shells of
liquid fluid [2]. All the energies involved were precisely characterized,
including the total energies of the unbound systems, the total energies of the
bound systems, the gravitational binding energies, and the energies stored in
the gravitational field. This led to a characterization of the stability of
the bound systems in terms of their binding energies. We have identified a
two-parameter family of energetically stable solutions, within the original
three-parameter family of solutions. In a few cases the stable solutions were
identified numerically. It is to be expected that the interpretations of the
energies that were introduced here will be useful in other cases, such as
those involving polytropes, white dwarfs and neutron stars.
In order to accomplish this, integral expressions for all the energies
involved were presented, as integrals of the matter energy density over
various coordinate volumes. All these expressions hold more generally than
just in the case of constant energy density $\rho(r)=\rho_{0}$ that we are
directly dealing with here. A particular radial position $r_{z}$ within the
matter region, at which we have $\lambda(r_{z})=0$ and therefore
$\exp[\lambda(r_{z})]=1$ for the radial coefficient of the metric, was
identified as playing a special role in relation to the integral expressions
for the various energies. This is the single finite radial position where the
three-dimensional space is neither stretched nor contracted, as compared to
the behavior of the radial coordinate $r$.
The energetic interpretation was extended to the case of the two-parameter
family of interior Schwarzschild solutions for filled spheres [6, 7], which
can be obtained as a particular limit of the shell solutions, and which turn
out to be maximally unstable ones. This means that there is a strong tendency
of the solution for a filled sphere to spontaneously generate an internal
vacuum region and thus become a shell solution. This is clearly connected to
the repulsive character of the gravitational field around the origin, in the
case of the shell solutions, pushing matter and energy away from that origin,
as was discussed and characterized in the previous paper [2]. Any small
perturbation of the interior Schwarzschild solution will put this mechanism in
action, thus leading to an energetic decay from that filled sphere solution to
a shell solution.
The crucial development leading to all this was the introduction of the
parameter $r_{\mu}$ in the previous paper, which was shown there to be
necessarily strictly positive in that case, for the correct resolution of the
differential equations and the corresponding interface boundary conditions, as
implied by the Einstein field equations. The apparently traditional routine of
choosing $r_{\mu}=0$ in order to eliminate the singularity at the origin not
only is often incompatible with the correct resolution of the differential
system but, when it is not thus incompatible, it is tantamount to selecting a
solution which has no binding energy at all and is therefore maximally
unstable from the energetic point of view. Both from the purely mathematical
point of view and from the physical point of view, this is more often than not
the incorrect choice, which we are simply not at liberty to make.
## Acknowledgments
The author would like to thank his friends Prof. C. E. I. Carneiro and Mr.
Rodrigo de A. Orselli for their helpful criticism and careful reading of the
manuscript.
## References
* [1] P. A. M. Dirac, General Theory of Relativity. John Wiley & Sons, Inc., 1975. ISBN 0-471-21575-9.
* [2] J. L. deLyra, R. de A. Orselli, and C. E. I. Carneiro, “Exact solution of the Einstein field equations for a spherical shell of fluid matter,” arXiv, vol. gr-qc/2101.02012, 2021. Submitted to Physical Review D.
* [3] S. Weinberg, Gravitation and Cosmology. New York: John Wiley and Sons, 1972.
* [4] J. Ni, “Solutions without a maximum mass limit of the general relativistic field equations for neutron stars,” Science China, vol. 54, no. 7, pp. 1304–1308, 2011.
* [5] L. Neslušan, “Solutions without a maximum mass limit of the general relativistic field equations for neutron stars,” Journal of Modern Physics, vol. 6, pp. 2164–2183, 2015.
* [6] K. Schwarzschild, “Über das Gravitationsfeld einer Kugel aus inkompressibler Flüssigkeit nach der Einsteinschen Theorie (On the gravitational field of a ball of incompressible fluid following Einstein’s theory),” Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften, vol. 7, pp. 424–434, 1916.
* [7] R. Wald, General Relativity. University of Chicago Press, 2010.
* [8] C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation. San Francisco: W.H. Freeman and Co., 1973.
* [9] W. Press, B. Flannery, S. Teukolsky, and W. Vetterling, Numerical Recipes in FORTRAN 77: The Art of Scientific Computing (Volume 1 of Fortran Numerical Recipes). Cambridge University Press, 1992.
* [10] “Energetic stability program for liquid shells.” Direct web access for download. http://sft.if.usp.br/scientific/.
# Is it a great Autonomous Trading Strategy or you are just fooling yourself
Murilo Sibrão Bernardini and Paulo André Lima de Castro
Autonomous Computational Systems Lab - LABSCA
Aeronautics Institute of Technology (ITA)
São José dos Campos, São Paulo, Brazil
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
In this paper, we propose a method for evaluating autonomous trading
strategies that provides realistic expectations regarding the strategy’s
long-term performance. This method addresses many
pitfalls that currently fool even experienced software developers and
researchers, not to mention the customers that purchase these products. We
present the results of applying our method to several famous autonomous
trading strategies, which are used to manage a diverse selection of financial
assets. The results show that many of these published strategies are far from
being reliable vehicles for financial investment. Our method exposes the
difficulties involved in building a reliable, long-term strategy and provides
a means to compare potential strategies and select the most promising one by
establishing minimal periods and requirements for the test executions. There
are many developers that create software to buy and sell financial assets
autonomously and some of them present great performance when simulating with
historical price series (commonly called backtests). Nevertheless, when these
strategies are used in real markets (or data not used in their training or
evaluation), quite often they perform very poorly. The proposed method can be
used to evaluate potential strategies. In this way, the method helps to tell
if you really have a great trading strategy or you are just fooling yourself.
## I Introduction
Building autonomous agents for trading, sometimes called Algorithmic trading
or trading robots, has been the focus of much research and development in
recent years. There are many published papers reporting remarkable performance
of the strategies in backtests, which are simulations that use historical
data. We discuss some of these results briefly in Section III. Nevertheless,
when these strategies are used in actual markets, using current data that was
not part of their training or evaluation, quite often they perform very
poorly. The average results vary considerably but are usually disappointing.
Some analysts have even argued that a positive backtesting performance may be
taken to indicate a negative performance in real financial scenarios.
The goal of an investment manager, automated or human, is to find and acquire
the most desirable set of assets that fits the investor’s preference profile. The
manager acquires these assets through the submission of buy and sell orders to
the stock market and such decisions are based on analyzes of a set of possible
interesting assets. Obviously, with so many assets to choose from, each with a
wealth of data to review, a machine learning approach seems desirable, if not required. It is
important to note that there are also ethical and legal implications that
should be taken into consideration in the process of building automated
trading strategies. We recommend Wellman and Rajan’s paper [1] about ethical
issues in Autonomous Trading Agents.
The broad spectrum of AI techniques used in finance ranges from reinforcement
learning [2, 3], multiagent systems [4, 5], complex networks [6], decision
trees [7], genetic algorithms [8], and random forests [9] to more recent
approaches, such as convolutional neural networks [10] and deep reinforcement
learning [11]. Recent studies have combined several techniques, such as LSTM neural networks and
reinforcement learning [12], while others have explored an ensemble of deep
learning models in a multi-layer architecture [13]. There are many cases where
the developers are successful in creating strategies with great performance
results when backtesting with historical price series. In fact, some
electronic platforms offer thousands of autonomous agents, or trading robots,
which are also called bots or expert advisors. Their developers claim this
software will be profitable in real financial markets [14]. These agents’
strategies may be created with a simple idea such as moving averages or use
complex machine learning schemes. To date, however, these strategies have
usually performed poorly with data not used in their training, despite all the
hype.
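As a toy illustration of what such a backtest involves, consider a simple moving-average crossover strategy replayed over a short historical price series. This sketch is entirely our own construction, not taken from any of the platforms or papers discussed, and the price series and parameters are invented:

```python
def sma(prices, window, t):
    """Simple moving average of the `window` prices ending at index t."""
    return sum(prices[t - window + 1 : t + 1]) / window

def backtest_crossover(prices, fast=3, slow=5):
    """Hold the asset whenever the fast SMA exceeds the slow SMA on the
    previous bar; return the gross growth factor of the strategy."""
    growth = 1.0
    for t in range(slow, len(prices)):
        if sma(prices, fast, t - 1) > sma(prices, slow, t - 1):
            growth *= prices[t] / prices[t - 1]
    return growth

prices = [10, 11, 12, 11, 13, 14, 13, 12, 11, 12]
print(backtest_crossover(prices))  # ~0.846: the strategy exits on the final bar
```

Backtested growth figures like this one say nothing by themselves about out-of-sample performance, which is precisely the gap discussed in the text.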
The problem of creating a trading strategy with great performance in some
previous known data set, but with bad performance in new data is well known in
Machine Learning field and it is usually called overfitting. Nevertheless,
this problem seems to be unknown or at least underestimated by many developers
in autonomous trading. In truth, overfitting is more likely to occur in
finance than in the traditional machine learning problems on which it is
based, such as face recognition. Moreover, the consequences of a false
positive can be far more disastrous in financial trading. False expectations
are created that can mislead investors, putting their financial livelihood at
stake. There is very little out there to protect investors from naive,
incompetent, or even malicious software developers. Section II-A, Overfitting
in Finance, deals with this important issue.
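The basic failure mode can be made concrete with a deliberately over-flexible model. In the toy sketch below (our own construction, with invented data), the "strategy" simply memorizes a training segment of a pure random walk: its in-sample error is exactly zero, yet its out-of-sample error is not, since there was never any pattern to learn:

```python
import random

# Generate a pure random walk: any apparent pattern is noise by construction.
random.seed(0)
prices = [100.0]
for _ in range(60):
    prices.append(prices[-1] + random.gauss(0.0, 1.0))

train, test = prices[:40], prices[40:]

# Over-flexible "model": memorize every training price by its time index.
model = {t: p for t, p in enumerate(train)}

def predict(t):
    # Outside the memorized range, fall back to the last memorized price.
    return model.get(t, train[-1])

in_err = sum((predict(t) - p) ** 2 for t, p in enumerate(train)) / len(train)
out_err = sum((predict(40 + k) - p) ** 2 for k, p in enumerate(test)) / len(test)
print(in_err, out_err)  # zero in-sample error, positive out-of-sample error
```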
In this paper, we propose a method for evaluating trading strategies that
implements procedures that may be applied regardless of the way the strategies
were built or the technology they use. This method is called the
(statistically) Significant Trading-Strategy Evaluation, or STSE. It
highlights many pitfalls that have been overlooked by software developers,
including experienced machine learning experts, thus minimizing financial
risks when using these strategies. We present the results of applying this
method to several famous autonomous strategies used in foreign exchange assets
and stock markets. The results show that these strategies are far from being
reliable vehicles of investment. Some periods of returns can be observed, but
they are not predictable and probably happen by pure chance, rather than by
the software having identified a real pattern in the input data. This fleeting
return can be used maliciously by those selling the strategies, who advertise
great but non-repeatable results while obscuring negative
results. The method we propose can be used to select among potential
strategies, by establishing minimal periods and requirements as part of the
test to avoid this seesaw in return results. Some autonomous approaches have
been tried for dealing with derivative instruments [15]. We did not apply our
method to any derivative instruments, which involve the future value of an
underlying asset [16], because of their price volatility [16]. Instead, we
focused on foreign exchanges and stock assets.
The remainder of this paper is organized as follows: In Section III, we
discuss some comparable proposals. In Section II, we present some distinct
characteristics of financial markets that make them an unique challenge for
machine learning and artificial intelligence approaches. In Section IV, we
propose our Significant Trading-Strategy Evaluation (STSE) method. We used it
to evaluate several well known trading strategies currently on the market. The
results are discussed in Section V. Finally, we discuss some possible
extensions and future studies in Section VI. We also provide all the details
necessary to reproduce the experiments (Appendix A), whose results are shown in
Section V.
We believe that reproducible results are important for research and that there
is a significant reproducibility crisis in science in general. In fact, a
recent survey [17] published in the journal Nature showed that many
researchers share that opinion. As pointed out by Stark [18], researchers
should provide enough information for others to repeat their experiments. We
do that in Appendix A. We also provide the source code that implements STSE
freely on GitHub [19].
## II AI in Finance
A financial agent is responsible for selecting, buying and selling financial
assets in order to fulfill its investor's requirements for risk and return,
respecting the investor’s horizon of investment. A financial autonomous agent
is a software program that implements a trading strategy able to perform such
activities more efficiently.
The Financial Environment where these trading strategies are applied may be
classified as: partially observable, sequential, stochastic, dynamic,
continuous and multiagent, according to the Russell and Norvig taxonomy [20].
The environment is also strategic, in the sense that two active investors
compete for a more accurate valuation of assets and their acts may change
other economic agents’ behavior. Russell and Norvig consider this the most
complex environment class [20].
This taxonomy, however, does not fully represent the complexity of the
problem, because financial markets are not independent and identically
distributed (i.i.d.) environments, as assumed in most machine learning
problems. Furthermore, financial environments have a low signal-to-noise
ratio,
which makes overfitting even more likely in machine learning approaches.
Another aspect that should be addressed by any trading strategy, but is often
forgotten, is that people do not have the same level of acceptable risk. Some
investors may present a much stronger risk aversion than others. A trading
agent must be aware of its investor’s preferences, in order to trade
appropriately. Investment errors caused by overfitting can also be very
costly.
### II-A Overfitting in Finance
Financial environments are not only stochastic but usually non-stationary
processes, whose probability distributions change over time. Due to this
non-stationary feature, a specific strategy may perform
extremely well for a certain period of time and then perform very poorly
thereafter. Moreover, different assets may require different information and
models. For instance, oil companies may be very sensitive to changes in oil
prices, while the same is probably not true for banks. In other words, a
specific asset price can be very correlated to some time series, but that time
series may have no relevant correlation with another asset price.
Markets conditions may change abruptly leading to structural breaks in
financial time series, which are unexpected changes over time in the
parameters of regression models. This can be verified empirically by testing
financial time series for structural breaks. As stated by Andreou and Ghysels
[21], there is abundant empirical evidence of structural breaks in financial
markets, which may be caused by various economic
events. When a structural break occurs the true model changes; therefore,
previous data no longer describes the same process. This reduces the amount of
useful data available for fitting machine learning models. However, if this
effect is ignored and outdated data is used despite structural breaks, the
result may be models that do not generalize well to current market
situations; in other words, overfitted models. Structural breaks do not
usually occur in ML environments, which are traditionally considered to be
independent and identically distributed (i.i.d). This hypothesis is so widely
used in ML, that sometimes researchers forget that it is not valid for
financial environments.
Moreover, models with high representational capacity, such as deep learning
approaches [22], have higher risks of overfitting, if they are not properly
used. The problem is that such models have many parameters and, therefore,
they can fit many different solutions to the available data points. There is
little chance that the learning algorithm will learn a solution that
generalizes well, i.e. it is not overfitted, when so many wildly different
solutions exist [22].
In financial environments, it is hard to separate signal (real information)
from the noise, because they are almost at the same level and, if the signal
is clear, it will not be for long. Arbitrage, exploiting price differences of
similar financial instruments on different markets, further decreases the
signal-to-noise ratio, making it easier to confuse signal and noise. This
confusion may also lead to overfitted models.
Overfitting in finance leads to unrealistic performance expectations and,
therefore, can be used to mislead investors into betting on overfitted
autonomous trading systems, which may be built by naive or even malicious
developers [23].
In fact, it is not rare to see promises of high return by trading strategy
vendors in digital media. As pointed out by Prado [23], without control for
backtest overfitting, good backtest performance may be an indicator for
negative future results.
Unfortunately, it is not easy to prove that a given model is not overfitted,
despite the many techniques in the ML field that help avoid a lack of
generalization power [24, 22]. It is beyond the scope of this
paper to address such general ML techniques to avoid overfitting and we are
going to focus on specific features of financial environments. In fact, we
believe it is fundamental to have a good understanding of the market faced by
trading strategies in order to avoid problems. By that we mean the reasons
that make overfitting more likely to happen in the financial environment and
some common mistakes in building and evaluating trading strategies. We tackle
these issues with the proposed method in Section IV.
## III Related Work
There is a long history of studies related to evaluating portfolio management
performance in the finance field [25]. However, there are some caveats when
applying these methods to autonomous trading strategies. It is relatively
simple to create new trading strategies using some machine learning technique
and historical data or some model based approach with optimized parameters for
a given market scenario. A developer needs to avoid some common pitfalls (1),
select a meaningful measure (2) and be able to correctly pick among possible
candidate strategies (3). We describe some related work that discuss these
issues next.
### III-A Common pitfalls in evaluating trading strategies
The complexity of the financial environment for autonomous agents, as
discussed in Section II, hides common pitfalls for the evaluation of trading
strategies. We have noted that these problems have been overlooked in many
studies as pointed out by other researchers [26]. The main pitfalls for
evaluating trading strategies are the following:
* •
Overfitting allows bad trading strategies to present good performance for
certain periods of time, because they exploit spurious patterns that are
unlikely to repeat in real markets; this can easily happen in trading
strategies.
* •
Data leakage can occur by using information that was not public at the moment
the simulated decision was made. The time stamp of each data point must be
observed, taking into account release dates, distribution delays, and back-
fill corrections [26]. One should also avoid training the model with testing
data, or allowing testing data to leak into the training process. Some kind of
leakage (or data-snooping) may also happen when a given set of data is used
more than once for purposes of inference or model selection [27]. Data leakage
may also happen using data points with very close time stamps in training and
testing sets. See discussion in [28] about overlapping and embargo (pgs.
105-109).
* •
Survivor bias may occur by selecting to simulate only securities and companies
that are still listed; therefore, ignoring all bankrupt and delisted
securities in the process [26].
* •
Evaluating strategies without caring about risk: In finance theory and
financial institutions, a great deal of effort is devoted to measuring and
controlling risk. Since the seminal work of Markowitz [29], which established
modern portfolio theory, building diversified portfolios in order to achieve
high returns and mitigate risk has been the norm in the finance field. It is
surprising to observe that creators of financial autonomous strategies often
underestimate the relevance of risk measurement and control, and care only
about return [4] (pg. 213).
* •
Disregarding transaction costs produces strategies that look great in
backtests but may bring systematically negative returns in real operation
[26].
* •
The lack of control for the impact of one's own trading rests on the
assumption that the autonomous trading strategy will not have a significant
effect on the price or on the behaviour of other traders. This is reasonable
as long as the capital controlled by the strategy is significantly smaller
than the total volume traded. However, it is relevant to note in this context
that autonomous trading strategies are usually easy to copy. In fact, there
are platforms that sell or rent autonomous strategies to third parties, which
magnifies their impact. Furthermore, copying trading strategies, or using
other traders' moves as primary information for one's own strategy, is not new
[30]. The idea of investors building and sharing investment strategies with
other investors has been called social trading by some [31] (pg. 127) and
pointed out as a key trend in asset management [31].
* •
Oversimplifying shorting may falsely enhance performance results. In a short
position, the investor borrows stock and sells it with plans to buy it later.
Taking a short position on spot markets requires finding a lender. The cost of
lending and the amount available is generally unknown, and depends on several
factors like relations, inventory and others [26], but assuming it is zero is
certainly a mistake.
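The embargo idea mentioned in the data-leakage pitfall above can be sketched
as follows. This is a hypothetical helper (the function name and index-based
splitting are our own illustrative choices, assuming one observation per time
slice), not part of the STSE code:

```python
# Sketch: time-ordered train/test split with an embargo gap, so that
# observations adjacent to the test window never leak into training.
def split_with_embargo(series, test_start, test_end, embargo):
    """series: time-ordered observations, indexed by time slice.
    Training excludes the test window plus `embargo` points on each side."""
    train = series[:max(0, test_start - embargo)] + series[test_end + embargo:]
    test = series[test_start:test_end]
    return train, test

# Toy usage: test window [60, 80) with a 5-slice embargo on each side.
train, test = split_with_embargo(list(range(100)), 60, 80, embargo=5)
assert 55 not in train and 84 not in train  # embargoed slices excluded
```

The same gap logic applies whichever cross-validation scheme is used; see the
overlapping-and-embargo discussion in [28] (pgs. 105-109).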
### III-B Sharpe ratio as Investment Skill measure
Most of the work related to defining a meaningful performance evaluation for
autonomous trading strategies relies on the Sharpe ratio, including the work
cited in this Section. In fact, the Sharpe ratio is well known and often used
in evaluating fund performance. However, there is significant criticism of it.
One criticism is that it does not distinguish between upside and downside
volatility. According to Rollinger and Hoffman [32], high outlier returns
could increase the value of the Sharpe ratio’s denominator (standard
deviation) more than the value of the numerator (return above the risk-free
return), thereby lowering the value of the ratio. Another criticism is that,
once the assumption of normality in return distributions is relaxed, the
Sharpe ratio becomes a questionable tool for selecting optimal portfolios
[33]. In fact, some indexes have been proposed as alternatives to the Sharpe
ratio, including the Sortino ratio [34] and the Modigliani index [35]. Despite
the quality of these proposals, the Sharpe ratio “…definitely is the most
widely used…” measure of risk-adjusted performance [32] (pg. 42). Furthermore,
as stated by Bailey and others [36] (pg. 15), the Sharpe ratio, despite its
known deficiencies, can provide evidence of investment skill, provided a
proper track record length is observed.
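Since the discussion above, and the rest of the paper, relies on the Sharpe
ratio, a minimal sketch of computing an annualized Sharpe ratio from periodic
returns follows. The function name and the annualization factor of 252
trading days per year are our own illustrative choices:

```python
from math import sqrt
from statistics import mean, stdev

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio from a series of periodic returns:
    mean excess return over its standard deviation, scaled to one year."""
    excess = [r - risk_free for r in returns]
    return mean(excess) / stdev(excess) * sqrt(periods_per_year)
```

With daily returns, `periods_per_year=252`; with monthly returns, use 12, as
in the HFR index analysis discussed in Section IV-A.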
### III-C Picking among trading strategies
Another notable related work defines a minimum track record length (or minimum
backtest length) needed to avoid selecting the trading strategy with highest
in-sample Sharpe Ratio among N independent strategies, but that has an
expected out-of-sample Sharpe Ratio equal to zero [23]. The focus is avoiding
selecting an overfitted strategy that performs well in the training (or in-
sample) data, but performs poorly in real operations. Bailey and others [23]
demonstrated Theorem 1, which stresses that the backtest period must grow with
the number of independent strategies (N) in order to avoid the risk of
selecting an overfitted strategy.
Theorem 1. The Minimum Backtest Period (MinBTL, in years) needed to avoid
selecting a strategy with an in-sample Sharpe ratio of $E[max_{N}]$ among N
independent strategies and with an expected out-of-sample Sharpe ratio of zero
is given by Equation 1 (For demonstration, see [23]):
$MinBTL<\frac{2*ln[N]}{E[max_{N}]^{2}}.$ (1)
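For illustration, Equation 1 can be computed directly. The function below is a
sketch (its name and signature are our own), taking the number of independent
trials and the expected maximum in-sample Sharpe ratio:

```python
from math import log

def min_backtest_years(n_strategies, expected_max_sharpe):
    """Equation 1: minimum backtest length (in years) needed so that a
    strategy with annualized in-sample Sharpe ratio E[max_N], selected
    among n_strategies independent trials, is unlikely to have an
    expected out-of-sample Sharpe ratio of zero (pure overfitting)."""
    return 2 * log(n_strategies) / expected_max_sharpe ** 2
```

For example, selecting among 45 independent trials a strategy with an
in-sample annualized Sharpe ratio of 1 requires roughly 7.6 years of backtest
data.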
In Section IV, we propose a method that incorporates such concerns while
allowing the evaluation of autonomous trading strategies, regardless of the
way they were built.
## IV The Significant Trading-Strategy Evaluation (STSE) Method - How to
evaluate trading strategy performance
We propose a method called Significant Trading-Strategy Evaluation (STSE),
that evaluates trading strategies in a statistically significant way
regardless of the technologies used for building them. To be meaningful, an
evaluation needs to cover a long enough time to avoid false positive results,
which are unreproducible in the long term, as well as prevent data leakage
from evaluation to training data and other pitfalls (Section III-A). There are
four steps to our method.
1. 1.
Define Parameters:
1. (a)
Financial parameters: set of assets, horizon of investment, and amount of
capital
2. (b)
Strategy parameters: according to the chosen approach, using available
training data
2. 2.
Verify conditions:
1. (a)
Verify if the traded volume may lead to an underestimate of the strategy’s
impact on the market
2. (b)
Verify whether the strategy uses shorting and check that transaction costs
have been tallied sufficiently
3. 3.
Refine parameters using historical prices (backtest) until a reasonable
performance has been achieved
4. 4.
Execute strategy using real time simulation until achieving meaningful results
(detailed in Section IV-A)
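The real-time evaluation of step 4 can be sketched as a loop. This is a
hypothetical sketch, not the released STSE code: `observe_return` stands in
for the trading platform's data feed and `required_track_length` for the
minimum track record computed as in Section IV-A:

```python
# Sketch of STSE step 4: keep simulating in real time until the track
# record is long enough to decide, with the chosen confidence, whether the
# strategy's Sharpe ratio exceeds the threshold.
def evaluate_until_meaningful(observe_return, required_track_length,
                              min_observations=30, max_observations=1000):
    returns = []
    while len(returns) < max_observations:
        returns.append(observe_return())      # one out-of-sample period
        if len(returns) < min_observations:   # never fewer than 30 slices
            continue
        if len(returns) >= required_track_length(returns):
            return returns                    # track record long enough
    return returns                            # gave up: inconclusive

# Toy usage with stub helpers:
rets = iter([0.01] * 50)
track = evaluate_until_meaningful(lambda: next(rets), lambda r: 40)
assert len(track) == 40
```

Note that each data point is used for evaluation as soon as it becomes
available, which is what rules out data leakage in this step.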
The first two phases, which define parameters and verify conditions, are quite
direct, especially the first one. There are a few details to keep in mind.
Underestimating the impact of one's own trading is perhaps the pitfall least
likely to happen; it is not even listed by [26]. If your maximum traded volume
is much smaller than the average traded volume of the asset (less than 10
basis points) and the strategy is not used by others, the impact of one's own
trading is probably small enough to be disregarded. To account for survivor
bias, you need to state your criteria for selecting the target assets clearly
and verify whether, using those criteria and only information available at the
beginning of the simulated period, you would have selected an asset that later
became problematic (delisting, bankruptcy, etc.). To ensure that short
operation costs have been covered sufficiently, you may use the maximum costs
observed in the market for short operations similar to those executed by your
strategy.
The third phase, refine parameters, uses the so-called backtest to fit
parameters in order to improve performance. It should be carried out without
any data leakage from test to training set by including an embargo period as
discussed in Section III-A. However, it is still possible to have some data
leakage due to back-fill corrections or from avoiding assets that presented
bad performance or negotiation interruptions (survivor bias). In fact, the
best way to avoid data leakage of any kind and verify that the model
represents patterns that are still valid in the market is by using real time
simulation. That is the fourth step of STSE method.
Executing the strategy using real time simulation prevents data leakage, since
each data point is out-of-sample and used for evaluation as soon as available.
One could argue that a ’reasonable amount of time’ is subjective. However,
STSE uses a probabilistic approach that establishes the minimum time for
passing a performance threshold with a given level of certainty. That amount
of time is highly dependent on the strategy’s observed performance. Great
strategies may show skill briefly, while others require much more time and
some others will never pass a given performance threshold even after many
months or even years. In that case, the strategy is not a good option for the
current market situation. Nevertheless, since the financial environment is
non-stationary, it is not possible to assert that such a strategy will never
present good performance in the future.
### IV-A STSE Fundamentals
STSE is based on the idea that you need to observe a trading strategy for a
long enough time, while avoiding data leakage and mitigating overfitting risk.
amount of time is highly dependent on the trading strategy performance itself
and the target performance threshold. This threshold is defined in terms of a
minimum desired Sharpe Ratio for the trading strategy. Naturally, the observed
Sharpe Ratio for any given trading strategy is highly dependent on market
conditions, but it is possible to define confidence levels where the observed
SR is equal or above the threshold even for non-Normal returns. In fact,
Bailey and Prado [36] showed that the probability that the observed Sharpe
Ratio ($\widehat{SR}$) will be greater than a given Sharpe Ratio threshold
($SR^{*}$), can be given by Equation 2.
In Equation 2, Z is the CDF (cumulative distribution function) of the Standard
Normal distribution and $\widehat{\gamma_{3}}$ and $\widehat{\gamma_{4}}$ are
the skewness and kurtosis, respectively.
$Prob[\widehat{SR}>SR^{*}]=Z\left[\frac{(\widehat{SR}-SR^{*})\sqrt{n-1}}{\sqrt{1-\widehat{\gamma_{3}}\widehat{SR}+\frac{\widehat{\gamma_{4}}-1}{4}\widehat{SR}^{2}}}\right]$
(2)
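The paper's reference implementation of Equation 2 is in MQL5 (Appendix A).
Purely for illustration, a Python sketch might look like this (the function
name `psr` and its argument order are our own choices):

```python
from math import sqrt
from statistics import NormalDist

def psr(sr_hat, sr_star, n, skew, kurt):
    """Equation 2: probability that the true Sharpe ratio exceeds the
    threshold sr_star, given the observed Sharpe ratio sr_hat over n
    observations with estimated skewness and kurtosis."""
    z = ((sr_hat - sr_star) * sqrt(n - 1)
         / sqrt(1 - skew * sr_hat + (kurt - 1) / 4 * sr_hat ** 2))
    return NormalDist().cdf(z)
```

When the observed Sharpe ratio equals the threshold, the probability is 0.5,
as expected; Normal returns correspond to `skew=0` and `kurt=3`.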
Naturally, the estimation of the Sharpe Ratio is subject to significant
errors, so the question becomes: ”how long should a track record be in order
to have statistical confidence that its Sharpe Ratio is above a given
threshold?”. For a level of certainty $\alpha$, it is shown that the number of
observations $n^{*}$ in the track record should be no less than:
$n^{*}=1+\left[1-\widehat{\gamma_{3}}\widehat{SR}+\frac{\widehat{\gamma_{4}}-1}{4}\widehat{SR}^{2}\right]\left(\frac{Z_{\alpha}}{\widehat{SR}-SR^{*}}\right)^{2}.$
(3)
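As with Equation 2, the released implementation of Equation 3 is in MQL5; a
Python sketch (function name and signature are ours) might look like:

```python
from statistics import NormalDist

def min_track_record_length(sr_hat, sr_star, skew, kurt, confidence=0.95):
    """Equation 3: minimum number of observations n* needed to state, with
    the given confidence, that the true Sharpe ratio exceeds sr_star."""
    z_alpha = NormalDist().inv_cdf(confidence)
    return 1 + ((1 - skew * sr_hat + (kurt - 1) / 4 * sr_hat ** 2)
                * (z_alpha / (sr_hat - sr_star)) ** 2)
```

For instance, with Normal returns (`skew=0`, `kurt=3`), an observed Sharpe
ratio of 0.1 per period and a threshold of zero at 95% confidence require on
the order of a few hundred observations.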
Equation 3 shows that the required number of observations increases as the
Sharpe Ratio threshold ($SR^{*}$) gets closer to the observed Sharpe Ratio
($\widehat{SR}$). The more negatively skewed the returns are and the fatter
the tails, the longer the track records must be. An increase in
our required level of confidence also results in the need for longer track
records. The authors also point out that, given the assumptions, the number of
required observations ($n^{*}$) should never be less than 30, which means 2.5
years of monthly data, or 0.5769 years of weekly data (52 observations/year),
or 0.119 years of daily data (252 observations/year) [36]. In Appendix A, we
provide an implementation of Equation 2 that calculates the probability of the
Sharpe Ratio (PSR) being greater than a given threshold and another
implementation of Equation 3 given the observed time series of returns of the
trading algorithm and the desired SR threshold ($SR^{*}$). Both
implementations are in MQL5, the programming language used in the Metatrader
platform, which
is very similar to C++. In order to illustrate the use of Equations 2 and 3,
we adapted Figure 1 from the original paper [36] and present the results
achieved by the authors. Observe that these equations were applied to 33 HFR
indices, which are made up of funds managed by human experts, not by
autonomous trading strategies. In Figure 1, note that only 9 indices
presented investment skill over an annualized Sharpe Ratio of 0.5 with a 95%
confidence level (cells in black, PSR(0.5) column), but almost all (29 out of
33) substantiated investment skill over an annualized Sharpe Ratio of zero
(PSR(0)). It is also possible to observe the minimum track record (mTRL)
required for each case, considering a Sharpe Ratio of 0.5 (mTRL(0.5)) or zero
(mTRL(0)). These mTRLs are reported in years, considering monthly
observations.
Figure 1: Performance analysis on 33 HFR Indices (Monthly indices). Adapted
from [36]
It is important to note that the duration of the track record depends on the
strategy's performance itself, so a fixed duration cannot be stated for every
trading strategy. However, it should never be shorter than 30 time slices; in
our implementation that means at least 30 business days.
We use the Sharpe Ratio as a measure for strategy performance adjusted to
risk, despite the criticism, as we discussed in Section III-B. It is possible
to estimate the probability that the Sharpe Ratio of an autonomous trading
strategy is above a given threshold using Equation 2. It is important to note
that the higher the threshold the longer the required track record may be. The
required track record can be estimated using Equation 3. Furthermore, a higher
level of certainty also may require a longer track record. Both equations are
implemented in the STSE module, which is open source code and publicly
available [19]. After executing the STSE steps, three possible outcomes should
be apparent:
* •
Probably a bad strategy: the track record is long enough, but it is not
possible to state that the trading strategy’s Sharpe Ratio is above zero, or
some of the questions in Step 2 are not answered affirmatively. It is unlikely
that this is a good trading strategy, at least for the given financial
parameters. You should think about changing the parameters or the strategy
itself.
* •
Longer track record required: the track record is not long enough given the
observed performance to state with the required level of certainty that
strategy performance is above the threshold. In this situation, the developer
should keep evaluating the strategy for more time.
* •
Perhaps a skillful trading strategy: the track record was long enough; it
presented good performance, and questions in Step 2 were answered
affirmatively. You should consider testing it in real markets to confirm that
performance does not change significantly from simulation. Check Section IV-B
for more information.
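The three outcomes above can be sketched as a simple decision function. This
is a hypothetical helper (not part of the released STSE module), assuming the
PSR probability and required track-record length were computed as in
Section IV-A:

```python
# Map the observed track record to one of the three STSE outcomes.
def stse_outcome(n_observations, n_star, psr_value, conditions_ok,
                 confidence=0.95):
    """n_star: required track record length (Eq. 3); psr_value: probability
    that the Sharpe ratio exceeds the threshold (Eq. 2); conditions_ok:
    whether the Step 2 verification questions were answered affirmatively."""
    if n_observations < n_star:
        return "longer track record required"
    if psr_value >= confidence and conditions_ok:
        return "perhaps a skillful trading strategy"
    return "probably a bad strategy"

assert stse_outcome(40, 100, 0.99, True) == "longer track record required"
assert stse_outcome(200, 100, 0.99, True) == "perhaps a skillful trading strategy"
```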
### IV-B Operating in real markets
We believe that only after a thorough evaluation, like that of the STSE
method, should someone consider letting real money be managed by an autonomous
software agent. Naturally, it is wise to start with a small amount of capital
allocated and expand it as the performance persists or improves. The capital
investment should be reduced if there is a performance degradation. Even with
those procedures, it is still true that past returns do not guarantee future
ones.
As stated by Graham in his famous book [37], investment and speculative
operations are different because the first are based on thorough analysis that
promises safety of capital and an adequate return, while the second lacks
these features. Investors need to be sure which kind of operation they are
implementing through their trading robots. The problem of estimating the
optimal amount of capital that should be allocated to an autonomous trading
strategy, sometimes called bet sizing [28], is a complex question in itself.
We intend to address it in the future, but it is out of the scope of this
paper.
The STSE method may be used on strategies developed by yourself or by third
parties. However, when dealing with a strategy created by a third party, there
may be
additional problems that we address in Section IV-C.
### IV-C Third-party Strategy Evaluation
The evaluation of a strategy developed by a third party presents the same
challenges as a strategy developed personally, but with some additional
pitfalls. As the trading strategy and its parameters were defined by others,
the investor does not know whether it suffers from some kind of data leakage
(intentional or not) from evaluation data to training data, nor does she know
how many configurations were tried in order to define the parameters. As
pointed out by Bailey and others [23], if one does not know how many trials or
parameter configurations were tested, one cannot assess the risk of
overfitting and that risk grows with the number of trials. Additionally, any
developer is tempted to try the maximum possible number of configurations in
order to find the one with the best performance. In fact, even if the
developer reveals the number of trials and declares that all required
precautions against data leakage were observed, there is still an incentive
for him to lie, because the reported performance would be more credible.
However, even if the number of tested configurations is unknown and the
robot’s strategy is a ’black box’, i.e., there is no access to the strategy’s
source code, it is still possible to evaluate the strategy’s performance using
the STSE method. To do so, there must be access to the strategy’s trading
decisions using reliable live data. This can be done using a trading platform
that allows simulated operation with real-time data. Several platforms are
available, such as Metatrader [14], Interactive Brokers [38], and Alpaca [39].
It is beyond the scope of this paper to discuss such platforms, but one should
be able to use STSE with any platform from which the equity time series marked
to market can be extracted. The STSE method can also be used no
matter the number of assets managed by the strategy, as long as it is possible
to obtain the equity time series reliably.
In Section V, we apply the STSE method to several autonomous trading
strategies with different financial parameters. We have used the Metatrader
platform to perform many experiments. The evaluated strategies include five
for which we had access to the source code and five for which we did not have
access to source code. They were downloaded from the Metatrader platform and
tested. The tested scenarios and results are discussed in the next Section.
## V STSE Results and Discussion for Several Strategies
Most of the autonomous trading strategies that we read about, and almost all
trading strategies available in the Metatrader platform, deal with only one
asset at a time. This is true despite the fact that there are very good
reasons for creating strategies that deal with several assets and try to
exploit their complementarity in terms of risk and return, as is widely done
in non-autonomous trading strategies. We evaluate autonomous trading
strategies using one asset per scenario in this Section to reflect the
literature.
that is not a limitation of STSE method, which can be used for evaluating
portfolio strategies.
We present the results achieved by applying the proposed method, Significant
Trading-Strategy Evaluation (STSE), to many different trading strategies and
assets in this Section. A scenario is defined by an autonomous strategy, a set
of assets and a time period. We tested more than 300 scenarios, which were
split into two groups. The first group (FX) focused on Foreign Exchange
assets, while the second group (SX) dealt with Stock Exchange assets. We used
two different thresholds for the minimum Sharpe Ratio: zero and 0.1, both with
a confidence level of 95%. These values seem very reasonable, but one may set
the values in the STSE setup that best represent individual preferences. In
the FX scenarios, we used five different strategies, four based on technical
signals and one based on Machine Learning; we had access to their source code.
In the SX scenarios, however, we evaluated nine strategies, five of which were
developed by third parties and whose source code we could not access. We were
still able to evaluate them using the STSE method, as discussed in Section
IV-C. The two groups of scenarios (FX and SX) and their respective results are
described next.
### V-A Foreign Exchange (FX) Scenarios
In the context of Foreign Exchange assets (FX), we executed twenty scenarios
composed of five strategies and five different assets, over the period of
three years of trading data (Jan-1-2017 to Dec-31-2019). The assets were CFD
(Contract for Difference) instruments [40]. CFD is a contract that enables two
parties to trade based on the difference between the entry and closing prices.
In the case of foreign exchange pairs, a trader may bet that one currency will
rise (or fall) in comparison with another. If she is right, she will profit,
and suffer a loss if she is wrong [41]. The foreign exchange pairs are listed
in Table I and one of them (BTCUSD) is based on a cryptocurrency (Bitcoin)
against the US dollar.
We evaluated four well-known strategies: moving average convergence/divergence
(MACD) [42] (pg. 134,150), Moving Average (MAMA) [42] (pg. 109-132), Moving
Average Parabolic Stop and Reverse (MAPS) [43] and another version of MAPS
with size optimization, which we will call MAPS2 from now on. We also used one
trading strategy based on Random Forest [44], which we refer to as RFOR from
now on. All of them use technical indicators and have several different
implementations, each with some parameters. We have used the implementation
available in the Metatrader5 platform [14] with the default parameters defined
in Appendix A. We present each one of the scenarios results next and discuss
them in the following subsection.
# | Symbol | Description
---|---|---
1 | EURUSD | Euro vs US dollar
2 | USDJPY | US dollar vs Japan Yen
3 | EURJPY | Euro vs Japan Yen
4 | GBPUSD | Great Britain Pound vs US dollar
5 | BTCUSD | Bitcoin vs US dollar
TABLE I: Selected assets from the most traded assets in Foreign Exchange
(Forex) and Cryptocurrency (Bitcoin) Market.
### V-B Foreign Exchange (FX) Results
In the next tables, we present the achieved results using the five trading
algorithms for each of the five assets. For each simulation defined by a
scenario, asset, and trading algorithm, we observed the following values:
return, maximum and minimum quarterly returns, Sharpe Ratio, PSR (the
probability that the real Sharpe Ratio is over the threshold), and minimum
track record length (mTRL) in years, for two different Sharpe Ratio
thresholds.
The first threshold was zero while the second was 0.1 (10%). The first
threshold was significantly easier than the second. We have highlighted the
situations where the autonomous traders were able to surpass each respective
threshold with black cells and present the achieved results in Table II for
the EURUSD pair, Table III for EURJPY, Table IV for USDJPY, and Table V for
GBPUSD. Table VI shows the results for BTCUSD, which is based on the main
cryptocurrency, Bitcoin. In this case, there was a high return over the three-
year period, but also a very negative return in the shorter period. There was
very high volatility for BTCUSD, as expected for a cryptocurrency.
| MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return (%) | -0.000 | 0.000 | 0.000 | 0.493 | -77.498
Sharpe Ratio | -0.006 | 0.026 | 0.026 | 0.006 | 0.007
PSR(0) | 0.180 | 0.999 | 0.999 | 0.818 | 0.923
mTRL(0) | 60.882 | 0.798 | 0.816 | 67.591 | 0.169
PSR(0.1) | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
mTRL(0.1) | 0.249 | 0.107 | 0.109 | 0.324 | 0.169
TABLE II: Results for the five strategies for asset EURUSD in three years. A black cell indicates a situation where the strategy surpasses the proposed Sharpe Ratio threshold: 0 or 0.1. Negative results are possible, because we allowed negative equity (trading with leverage).
 | MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return (%) | -0.001 | 0.000 | 0.000 | -0.886 | -47.451
Sharpe Ratio | -0.011 | 0.033 | 0.023 | -0.007 | -0.007
PSR(0) | 0.070 | 1.0 | 0.999 | 0.125 | 0.018
mTRL(0) | 23.356 | 0.411 | 1.033 | 42.594 | 11.533
PSR(0.1) | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
mTRL(0.1) | 0.233 | 0.109 | 0.097 | 0.228 | 0.060
TABLE III: Results for the five strategies for asset EURJPY in three years. A black cell indicates a situation where the strategy surpasses the proposed Sharpe Ratio threshold: 0 or 0.1.
 | MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return (%) | -0.000 | 0.001 | 0.000 | -0.856 | -94.526
Sharpe Ratio | -0.006 | 0.045 | 0.010 | -0.008 | -0.001
PSR(0) | 0.194 | 1.0 | 0.999 | 0.110 | 0.417
mTRL(0) | 68.037 | 0.232 | 5.006 | 37.127 | 1173.406
PSR(0.1) | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
mTRL(0.1) | 0.250 | 0.163 | 0.075 | 0.223 | 0.242
TABLE IV: Results for the five strategies for asset USDJPY in three years. A black cell indicates a situation where the strategy surpasses the proposed Sharpe Ratio threshold: 0 or 0.1
| MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return (%) | -0.001 | 0.001 | 0.003 | -0.974 | -88.064
Sharpe Ratio | -0.010 | 0.033 | 0.032 | -0.013 | -0.008
PSR(0) | 0.075 | 1.0 | 1.0 | 0.023 | 0.043
mTRL(0) | 24.667 | 0.397 | 0.439 | 14.381 | 17.402
PSR(0.1) | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
mTRL(0.1) | 0.234 | 0.102 | 0.106 | 0.206 | 0.117
TABLE V: Results for the five strategies for asset GBPUSD in three years. A black cell indicates a situation where the strategy surpasses the proposed Sharpe Ratio threshold: 0 or 0.1
| MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return (%) | 0.060 | -0.014 | -0.014 | -99.44 | 6478.3
Sharpe Ratio | 0.030 | -0.030 | -0.030 | 0.040 | 0.01
PSR(0) | 0.943 | 0.146 | 0.146 | 0.932 | 0.753
mTRL(0) | 0.490 | 2.100 | 2.100 | 1.100 | 117.0
PSR(0.1) | 0.000 | 0.000 | 0.000 | 0.003 | NaN
mTRL(0.1) | 0.200 | 0.100 | 0.100 | 0.300 | NaN
TABLE VI: Results for the five strategies for asset BTCUSD in three years. A black cell indicates a situation where the strategy surpasses the proposed Sharpe Ratio threshold: 0 or 0.1
| MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return (%) | -0.000 | 7.920 | 6.689 | -0.449 | -119.169
Sharpe Ratio | -0.016 | 0.030 | 0.017 | -0.026 | -0.022
PSR(0) | 0.185 | 0.999 | 0.977 | 0.033 | 0.001
mTRL(0) | 10.571 | 0.591 | 2.122 | 2.735 | 0.987
PSR(0.1) | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
mTRL(0.1) | 0.219 | 0.112 | 0.105 | 0.120 | 0.036
TABLE VII: Results for the five strategies for asset EURUSD in six months. A
black cell indicates a situation where the strategy surpasses the proposed
Sharpe Ratio threshold: 0 or 0.1
The results show that none of the tested autonomous traders was able to
surpass the 0.5 threshold, but some of them were able to surpass the zero
threshold. However, as discussed in Section III and presented in Figure 1,
some human asset management professionals are able to overcome the 0.5
threshold. In fact, nine out of 33 HFR indices surpassed the 0.5 Sharpe Ratio
threshold, as presented in [36]. We also tested an easier threshold (0.1) for
the selected autonomous traders, but once again none of them was able to
surpass it. Therefore, we can state that none of the tested strategies showed
skill comparable to good human experts in any of the assets. By significant
skill, we mean that the strategy's performance was above the threshold (0 or
0.1, depending on the case) with a 95% level of confidence and an mTRL equal
to or shorter than the simulated period (3 years, in this case). This
indicates that creating successful trading strategies may be a really hard
task. However, two strategies (MAMA and MAPS) were able to surpass the zero
threshold. Even though it is the easiest barrier, it is not completely
meaningless. As shown in Figure 1, four out of the 33 HFR indices were not
able to surpass the zero threshold. RFOR performance was particularly bad.
That fact should not be interpreted as a signal that AI-based strategies are
not promising, because we used the standard implementation presented in [44],
which was focused on stock markets rather than FX markets. Furthermore, if we
had changed some hyperparameters or included more information, such as simple
moving averages, its performance would likely have been enhanced. In fact, we
included RFOR to show that the STSE method can also be used with AI-based
strategies.
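The PSR and mTRL values reported in the tables can be read through the Probabilistic Sharpe Ratio framework of [36]: PSR(SR*) is the confidence that the true Sharpe Ratio exceeds a threshold SR*, and the minimum track record length is the number of observations needed for that confidence to reach 95%. A minimal sketch of both statistics (a simplified version with a fixed 95% confidence level; the function names are ours, not taken from the STSE implementation):

```python
from math import erf, sqrt

Z95 = 1.6448536269514722  # one-sided 95% standard normal quantile

def psr(sr_hat, sr_star, n, skew=0.0, kurt=3.0):
    """Probabilistic Sharpe Ratio: confidence that the true Sharpe Ratio
    behind an n-observation track record exceeds the threshold sr_star."""
    sigma = sqrt(1.0 - skew * sr_hat + (kurt - 1.0) / 4.0 * sr_hat ** 2)
    z = (sr_hat - sr_star) * sqrt(n - 1.0) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

def min_trl(sr_hat, sr_star, skew=0.0, kurt=3.0):
    """Minimum track record length: number of observations needed for
    psr(sr_star) to reach the 95% confidence level, given sr_hat."""
    var = 1.0 - skew * sr_hat + (kurt - 1.0) / 4.0 * sr_hat ** 2
    return 1.0 + var * (Z95 / (sr_hat - sr_star)) ** 2
```

For instance, an observed per-period Sharpe Ratio of 0.1 over 252 daily observations clears the zero threshold with confidence above 0.9, while the corresponding mTRL is a few hundred observations.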
| MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return | -0.000 | 3.770 | 0.0 | 0.055 | -115.144
Sharpe Ratio | -0.014 | 0.023 | N/A | 0.005 | -0.018
PSR(0) | 0.212 | 0.999 | N/A | 0.631 | 0.104
mTRL(0) | 13.375 | 0.716 | N/A | 82.722 | 5.366
PSR(0.1) | 0.000 | 0.000 | N/A | 0.000 | 0.000
mTRL(0.1) | 0.223 | 0.071 | N/A | 0.325 | 0.134
TABLE VIII: Results for the five strategies for asset EURJPY in six months. A
black cell indicates a situation where the strategy surpasses the proposed
Sharpe Ratio threshold: 0 or 0.1
We also executed the tests for a much shorter period of time (only six months,
from Jul 1, 2019 to Dec 31, 2019). The results are shown in Tables VII, VIII,
IX, X, and XI. It is possible to observe that the MAPS and MAMA strategies
were successful only with the USDJPY asset. This indicates that it may be
harder for an autonomous strategy to show skill in shorter periods.
| MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return | -0.000 | 0.000 | 0.000 | 0.553 | -98.774
Sharpe Ratio | -0.014 | 0.051 | 0.051 | 0.025 | 0.003
PSR(0) | 0.214 | 0.999 | 0.999 | 0.920 | 0.579
mTRL(0) | 13.581 | 0.189 | 0.192 | 4.697 | 209.251
PSR(0.1) | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
mTRL(0.1) | 0.225 | 0.207 | 0.214 | 0.542 | 0.287
TABLE IX: Results for the five strategies for asset USDJPY in six months. A black cell indicates a situation where the strategy surpasses the proposed Sharpe Ratio threshold: 0 or 0.1
| MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return | -0.000 | 0.000 | 4.139 | -0.637 | 130.384
Sharpe Ratio | -0.004 | 0.027 | 0.017 | -0.022 | 0.025
PSR(0) | 0.407 | 0.999 | 0.977 | 0.072 | 0.934
mTRL(0) | 155.402 | 0.791 | 2.124 | 4.490 | 3.728
PSR(0.1) | 0.000 | 0.000 | 0.000 | 0.000 | 0.00
mTRL(0.1) | 0.258 | 0.121 | 0.105 | 0.156 | 0.420
TABLE X: Results for the five strategies for asset GBPUSD in six months. A black cell indicates a situation where the strategy surpasses the proposed Sharpe Ratio threshold: 0 or 0.1
| MACD | MAMA | MAPS | MAPS2 | RFOR
---|---|---|---|---|---
Return | 0.070 | 0.010 | -0.010 | 0.000 | -192.8
Sharpe Ratio | 0.090 | 0.000 | 0.000 | -1.00 | -0.34
PSR(0) | 0.947 | 0.514 | 0.491 | NaN | 0.0008
mTRL(0) | 0.100 | 268.800 | 733.800 | NaN | 1.5
PSR(0.1) | 0.394 | 0.137 | 0.125 | NaN | 0.000
mTRL(0.1) | 4.900 | 0.300 | 0.300 | NaN | 0.1
TABLE XI: Results for the five strategies for asset BTCUSD in six months. A
black cell indicates a situation where the strategy surpasses the proposed
Sharpe Ratio threshold: 0 or 0.1.
### V-C Stock Exchange (SX) Scenarios
In addition to crypto and traditional currency scenarios, we also used STSE
to evaluate autonomous trading strategies operating in stock exchange
markets. We selected thirty (30) stocks from three different stock exchanges,
which are listed in Table XIII. Nine autonomous trading strategies were used;
four of them were also used in the FX scenarios (MACD, MAMA, MAPS, and
MAPS2), and the other five were developed by third parties and are available
for free download on the Metatrader platform as binary distributions, without
access to their source code. The complete list is available in Table XII. In
the stock exchange scenarios, we executed 270 scenarios using these nine
strategies and thirty assets. They were executed using real data from Jan 01,
2018 to Dec 31, 2019.
# | Strategy
---|---
1 | MACD
2 | MAMA
3 | MAPS
4 | MAPS2
5 | Dark Venus MT5 (*)
6 | Neural Networks 2 Moving Averages (*)
7 | Dark Moon MT5 (*)
8 | Hyper Trade Global (*)
9 | CAP Random Trader EA MT5 (*)
TABLE XII: Selected trading strategies for Stock Exchange scenarios. The strategies marked (*) were downloaded as binary code from the Metatrader platform, without access to source code.
# | Stock Ex. | Asset ID | Company
---|---|---|---
1 | Nasdaq | AAPL | Apple Inc.
2 | Nasdaq | AMD | Advanced Micro Devices, Inc.
3 | Nasdaq | MSFT | Microsoft
4 | Nasdaq | FB | Facebook
5 | Nasdaq | AAL | American Airlines Group, Inc.
6 | Nasdaq | TSLA | Tesla, Inc.
7 | Nasdaq | TLRY | Tilray, Inc.
8 | Nasdaq | V | Visa Inc.
9 | Nasdaq | JNJ | Johnson & Johnson
10 | Nasdaq | MU | Micron Technology, Inc.
11 | NYSE | BABA | Alibaba Group
12 | NYSE | SQ | Square Inc. Cl A
13 | NYSE | BA | Boeing Co.
14 | NYSE | NIO | NIO Inc.
15 | NYSE | AMC | AMC Entertainment Inc.
16 | NYSE | JPM | JPMorgan Chase
17 | NYSE | ABEV | Ambev
18 | NYSE | SHOP | Shopify Inc. Cl A
19 | NYSE | AA | Alcoa corp.
20 | NYSE | DI | Didi Inc.
21 | B3 | VALE3 | Vale
22 | B3 | ITUB4 | Itaú
23 | B3 | PETR4 | Petrobras
24 | B3 | BBDC4 | Bradesco
25 | B3 | B3SA3 | B3
26 | B3 | PETR3 | Petrobras
27 | B3 | ABEV3 | Ambev
28 | B3 | OIBR3 | Oi
29 | B3 | ITSA4 | Itausa
30 | B3 | BBAS3 | Banco do Brasil
TABLE XIII: List of Assets used in SX scenarios
### V-D Stock Exchange (SX) Results
The SX results are presented in Tables XIV and XV. The black cells in these
tables show the situations where a strategy passed the zero threshold for a
given asset. We checked whether those strategies with performance above zero
were able to pass the 0.1 threshold, and we also checked whether some of them
could pass an easier limit (0.05). The results are presented in Table XVI.
Just as was observed in the FX scenarios, not a single strategy was able to
pass the 0.1 threshold or even the easier 0.05 limit.
| MACD | MAMA | MAPS | MAPS2
---|---|---|---|---
Asset ID | PSR(0) | mTRL(0) | PSR(0) | mTRL(0) | PSR(0) | mTRL(0) | PSR(0) | mTRL(0)
AAPL | 0.362 | 8.8 | 0 | 0.1 | 0 | 0.1 | 0.004 | 0.2
AMD | 0.218 | 1.9 | 0 | 0.1 | 0.218 | 1.9 | 0.033 | 0.3
MSFT | 0.199 | 1.6 | 0 | 0.1 | 0 | 0.1 | 0 | 0.1
FB | 0.375 | 11.2 | 0.012 | 0.2 | 0.012 | 0.2 | NaN | NaN
AAL | 0.455 | 88.8 | 0.434 | 40.8 | 0.434 | 40.8 | 0.044 | 0.4
TSLA | 0.136 | 0.5 | 0.998 | 0.100 | 0.918 | 0.3 | 0.001 | 0.1
TLRY | 0.129 | 0.5 | 0.918 | 0.3 | 0.918 | 0.3 | 0.953 | 0.200
V | 0.308 | 4.2 | 0 | 0.1 | 0 | 0.1 | 0.004 | 0.2
JNJ | 0.668 | 6.1 | 0.013 | 0.2 | 0.013 | 0.2 | 0 | 0.1
MU | 0.422 | 29.3 | 0.106 | 0.7 | 0.106 | 0.7 | 0.001 | 0.1
BABA | 0.348 | 3.9 | 0.009 | 0.1 | 0.009 | 0.1 | 0 | 0
SQ | 0.725 | 1.7 | 0.144 | 0.5 | 0.144 | 0.5 | 0.001 | 0.1
BA | 0.625 | 11.2 | 0.075 | 0.6 | 0.075 | 0.6 | 0.002 | 0.1
NIO | 0.024 | 0.2 | 0.473 | 125.5 | 0.473 | 125.5 | 0.575 | 16.7
AMC | 0.491 | 1154.9 | 0.505 | 3426.3 | 0.505 | 3426.3 | 0.002 | 0.1
JPM | 0.695 | 4.4 | 0.003 | 0.2 | 0.003 | 0.2 | 0 | 0.1
ABEV | 0.297 | 2.1 | 0.015 | 0.1 | 0.015 | 0.1 | 0.713 | 1.9
SHOP | 0.701 | 2.1 | 0 | 0.1 | 0 | 0.1 | 0.982 | 0.100
AA | 0.8 | 0.8 | 0.163 | 0.6 | 0.163 | 0.6 | 0.417 | 13.5
DIS | 0.608 | 15.3 | 0.002 | 0.1 | 0.002 | 0.1 | 0.009 | 0.2
VALE3 | 0.601 | 20.4 | 0.304 | 5.1 | 0.02 | 0.3 | NaN | NaN
ITUB4 | 0.567 | 46.8 | 0.438 | 54.9 | 0.4 | 21 | NaN | NaN
PETR4 | 0.572 | 40.7 | 0.253 | 3 | 0.299 | 4.8 | NaN | NaN
BBDC4 | 0.609 | 17.6 | 0.281 | 4 | 0.259 | 3.2 | NaN | NaN
B3SA3 | 0.5 | 1381322.3 | 0.262 | 3.3 | 0.285 | 4.2 | NaN | NaN
PETR3 | 0.554 | 72.2 | 0.337 | 7.6 | 0.259 | 3.2 | NaN | NaN
ABEV3 | 0.569 | 43.9 | 0.546 | 98.4 | 0.848 | 1.3 | 0.002 | 0.2
OIBR3 | 0.585 | 28.8 | 0.669 | 7 | 0.643 | 10 | NaN | NaN
ITSA4 | 0.48 | 546.4 | 0.37 | 12.2 | 0.007 | 0.2 | NaN | NaN
BBAS3 | 0.601 | 20.6 | 0.423 | 35.2 | 0.404 | 22.9 | NaN | NaN
TABLE XIV: Four strategies results for thirty SX assets. A black cell
indicates a situation where the strategy surpassed the zero Sharpe ratio
threshold
| Dark Venus MT5 | Neural Networks 2 Moving Averages | Dark Moon MT5 | Hyper Trade Global | CAP Random Trader EA MT5
---|---|---|---|---|---
Asset ID | PSR(0) | mTRL(0) | PSR(0) | mTRL(0) | PSR(0) | mTRL(0) | PSR(0) | mTRL(0) | PSR(0) | mTRL(0)
AAPL | 0.019 | 0.3 | NaN | NaN | 0.585 | 23.6 | 0.536 | 131.2 | 0.362 | 8.8
AMD | 0.163 | 1.2 | NaN | NaN | 0.433 | 40.6 | 0.486 | 867.2 | 0 | 0.1
MSFT | 0.471 | 215.8 | 0.015 | 0.2 | 0.427 | 33.8 | 0.482 | 569.9 | 0 | 0.1
FB | 0.557 | 55.6 | NaN | NaN | 0.45 | 73.1 | 0.498 | 79229.1 | 0.012 | 0.2
AAL | 0.536 | 142.5 | 0.962 | 0.400 | 0.478 | 361.1 | 0.496 | 10333.7 | 0.434 | 40.8
TSLA | 0.503 | 9710.3 | 0.471 | 114.3 | 0.237 | 1.2 | 0.474 | 143.8 | 0.998 | 0.100
TLRY | 0.562 | 24.8 | 0.129 | 0.5 | 0.51 | 974.4 | 0.498 | 16588.2 | 0.918 | 0.3
V | 0.522 | 360.3 | NaN | NaN | 0.445 | 57 | 0.488 | 1242.2 | 1.000 | 0.100
JNJ | 0.204 | 1.7 | NaN | NaN | 0.559 | 52.1 | 0.488 | 1258.6 | 0.013 | 0.2
MU | 0.409 | 21.8 | 0.309 | 4.6 | 0.466 | 161.1 | 0.498 | 36629.8 | 0.893 | 0.7
BABA | 0.368 | 5.2 | 0.001 | 0.1 | 0.421 | 15 | 0.483 | 323.3 | 0.991 | 0.100
SQ | 0.532 | 90.4 | 0.004 | 0.1 | 0.555 | 30.9 | 0.486 | 481.5 | 0.856 | 0.5
BA | 0.857 | 1 | 0.002 | 0.1 | 0.534 | 154.9 | 0.515 | 794.3 | 0.075 | 0.6
NIO | 0.597 | 9.8 | 0.834 | 0.6 | 0.401 | 9.5 | 0.525 | 147.7 | 0.473 | 125.5
AMC | 0.549 | 39.6 | 0.990 | 0.100 | 0.532 | 94.1 | 0.467 | 83.9 | 0.491 | 1288.7
JPM | 0.529 | 221.1 | 0.042 | 0.4 | 0.568 | 39.1 | 0.487 | 1036.2 | 0.997 | 0.200
ABEV | 0.435 | 22.3 | 0.999 | 0.100 | 0.554 | 31.4 | 0.467 | 85 | 0.015 | 0.1
SHOP | 0.646 | 4.3 | NaN | NaN | 0.605 | 8.4 | 0.49 | 1016.9 | 0 | 0.1
AA | 0.528 | 121.7 | 0.978 | 0.200 | 0.532 | 93.6 | 0.47 | 106.1 | 0.832 | 0.6
DIS | 0.109 | 0.8 | 0.001 | 0.1 | 0.556 | 56.7 | 0.484 | 670.6 | 0.998 | 0.100
VALE3 | 0.009 | 0.2 | NaN | NaN | 0.574 | 38.5 | 0.736 | 3.4 | 0.588 | 27
ITUB4 | 0.975 | 0.400 | NaN | NaN | 0.526 | 318.9 | 0.679 | 6.2 | 0.919 | 0.7
PETR4 | 0.646 | 4.3 | NaN | NaN | 0.606 | 18.5 | 0.607 | 18.3 | 0.659 | 8
BBDC4 | 0.525 | 327.8 | NaN | NaN | 0.5 | 19249669.2 | 0.615 | 15.7 | 0.583 | 30.3
B3SA3 | NaN | NaN | NaN | NaN | 0.501 | 173919.9 | 0.536 | 165.1 | 0.833 | 1.4
PETR3 | 0.927 | 0.600 | NaN | NaN | 0.522 | 445.3 | 0.627 | 12.7 | 0.5 | 3048446.5
ABEV3 | 0.99 | 0.3 | NaN | NaN | 0.488 | 1540.2 | 0.578 | 34.2 | 0.565 | 50.5
OIBR3 | 0.422 | 34.9 | NaN | NaN | 0.579 | 34.1 | 0.513 | 1190.7 | 0.246 | 2.8
ITSA4 | 0.491 | 2937.9 | NaN | NaN | 0.466 | 189.2 | 0.567 | 47.4 | 0.967 | 0.4
BBAS3 | 0.461 | 141.4 | NaN | NaN | 0.552 | 77.2 | 0.738 | 3.3 | 0.598 | 21.5
TABLE XV: Five strategies results for thirty SX assets. A black cell indicates
a situation where the strategy surpassed the zero Sharpe ratio threshold
Asset ID | STRATEGY | PSR(0.05) | mTRL(0.05) | PSR(0.1) | mTRL(0.1)
---|---|---|---|---|---
TSLA | CAP Random Trader EA MT5 | 0.886 | 0.4 | 0.331 | 3.1
V | CAP Random Trader EA MT5 | 0.822 | 1.3 | 0.053 | 0.4
BABA | CAP Random Trader EA MT5 | 0.800 | 0.8 | 0.245 | 1.2
JPM | CAP Random Trader EA MT5 | 0.662 | 6.5 | 0.030 | 0.3
DIS | CAP Random Trader EA MT5 | 0.720 | 3.4 | 0.040 | 0.4
ITSA4 | CAP Random Trader EA MT5 | 0.736 | 3.4 | 0.281 | 4.0
TSLA | MAMA | 0.886 | 0.4 | 0.331 | 3.1
TLRY | MAPS2 | 0.753 | 1.3 | 0.377 | 6.1
AAL | Neural Networks 2 Moving Averages | 0.439 | 48.6 | 0.019 | 0.3
AMC | Neural Networks 2 Moving Averages | 0.772 | 1.1 | 0.207 | 0.9
ABEV | Neural Networks 2 Moving Averages | 0.898 | 0.4 | 0.306 | 2.3
AA | Neural Networks 2 Moving Averages | 0.702 | 2.1 | 0.172 | 0.7
ITUB4 | Dark Venus MT5 | 0.519 | 586.4 | 0.013 | 0.3
ABEV3 | Dark Venus MT5 | 0.418 | 31.6 | 0.009 | 0.2
SHOP | MAPS2 | 0.722 | 1.7 | 0.176 | 0.7
TABLE XVI: The strategies that passed the zero threshold results for higher
thresholds. A black cell indicates a situation where the strategy surpassed
the 0.05 or 0.1 Sharpe ratio threshold
### V-E Discussion of Results
In the fifty FX scenarios, we observed that only eleven strategies managed to
produce a Sharpe Ratio above the zero threshold, and only two strategies
(MAMA and MAPS) were able to do so in the six-month period. As expected, it
is harder for a strategy to prove skill over shorter time periods. None of
the strategies was able to pass the 0.1 threshold. The results were similar
in the 270 SX scenarios. Only thirteen out of 270 scenarios showed strategies
passing the zero Sharpe Ratio threshold. We tested these thirteen strategies
against the 0.1 threshold and also against an easier target of 0.05. None of
them was able to pass these thresholds, not even the easier 0.05 limit (see
Table XVI).
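For reference, the per-period Sharpe Ratio underlying all of the thresholds above is simply the mean excess return divided by the sample standard deviation of the returns (a minimal sketch; the function name and the zero risk-free default are our choices):

```python
from math import sqrt

def sharpe(returns, rf=0.0):
    """Per-period Sharpe Ratio: mean excess return over the sample
    standard deviation of the excess returns."""
    excess = [r - rf for r in returns]
    n = len(excess)
    mean = sum(excess) / n
    var = sum((x - mean) ** 2 for x in excess) / (n - 1)  # sample variance
    return mean / sqrt(var)
```

Note that thresholds such as 0.1 or 0.5 refer to this per-period (e.g., daily) statistic, not to an annualized figure.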
One may argue that such limits are too high. However, it has been observed
that professional human portfolio managers are able to pass a much harder
threshold, 0.5 [36]. In fact, this study, discussed in Section III, showed
that twenty-nine (29) out of thirty-three (33) funds were able to pass the
zero threshold. Furthermore, nine of them passed the much harder 0.5
threshold. These results seem to indicate that autonomous trading strategies
still have a long road ahead to match a human expert's performance. However,
it is clear to us that using just one asset per strategy contributed to the
poor performance, because it does not allow risk mitigation through
diversification. As discussed in Section III-A, however, it is common to see
autonomous trading strategies focus only on return and disregard risk
mitigation through diversification. The simple use of optimization
algorithms, such as quadratic programming [25], together with trading in many
assets, would likely help to improve the results. However, it is not clear to
us that this would be enough to close the large gap observed between the
tested autonomous strategies and the experts' performance.
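As a sketch of the diversification step suggested above, even the simplest mean-variance treatment [29] yields a closed-form global minimum-variance portfolio, w ∝ Σ⁻¹1, without requiring a full quadratic programming solver. The return data below is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, size=(500, 3))  # synthetic daily returns, 3 assets
rets[:, 1] += 0.5 * rets[:, 0]               # induce some cross-asset correlation

cov = np.cov(rets, rowvar=False)             # estimated covariance matrix
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)               # w proportional to inv(cov) @ 1
w /= w.sum()                                 # budget constraint: weights sum to 1

port_var = float(w @ cov @ w)                # variance of the diversified portfolio
```

By construction, `port_var` is no larger than the variance of holding any single asset alone, which is precisely the sense in which diversification mitigates risk.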
Naturally, it is possible that autonomous trading strategies dealing with many
assets would present much better performance. Nevertheless, the STSE method
would still be able to evaluate such multi-asset strategies without any
change, as discussed in Section IV-C.
The STSE method was tested on several strategies and assets (foreign
currencies, a cryptocurrency, and stocks). The strategies were not selected
based on high expectations about their performance; they were selected based
on the use of several different techniques and on availability. We did not
engage in a long process of refining the strategies' parameters, which could
have helped to improve their performance. Some strategies did surpass the
zero threshold, which means that they have an expected Sharpe Ratio higher
than zero with a 95% level of confidence and that the observed track record
is equal to or above the minimum required track record length (mTRL). The
STSE method's open source implementation is freely available [19]. In
Appendix A, we detail all simulation setups and strategy parameters used in
the tested scenarios. As discussed in Section V, some of the tested
strategies are not open source, but all allow binary download free of charge.
Using the STSE open source implementation with the same strategies and setup
specified in Appendix A, it is possible to reproduce the results achieved and
presented in this paper. In the next and final section, we provide a brief
summary of this study's contributions.
## VI Conclusions and Future Work
In this paper, we presented a method to evaluate autonomous trading strategies
regardless of the techniques used for their construction. This method, called
STSE, was able to separate skillful trading strategies from those with no
skill (expected Sharpe Ratio below zero). We also provided an open source
implementation which we believe can be useful for autonomous trading strategy
developers. The STSE implementation is freely available at GitHub [19].
We have used STSE to evaluate nine different strategies and thirty-five
assets, including currencies, a cryptocurrency, and stocks. The autonomous
strategies include some based on technical analysis (moving averages, MACD,
and others), machine learning techniques (Random Forest, neural networks),
and some proprietary strategies developed by third parties that we did not
have much information about. We were still able to apply our STSE method to
each one of them. These results indicate that the analyzed strategies are far
from presenting performance levels comparable to human experts. We also
believe that many claims about great performance achieved by some proprietary
autonomous strategies may be due to overfitting, since it is hard to avoid
data leakage and other common problems. Furthermore, it is important to note
that the saying "past returns do not guarantee future returns" is also valid
for autonomous trading. Finally, we observe that new approaches, such as deep
reinforcement learning and the use of alternative data [28], may be really
useful in building better autonomous strategies. However, rigorous evaluation
processes will be needed to tell whether you have built a really successful
strategy or are just fooling yourself.
## Acknowledgements
Paulo A.L. Castro is partially funded by CNPq (Brazil) Grant. No.
311838/2017-0.
## References
* [1] Michael P. Wellman and Uday Rajan. Ethical issues for autonomous trading agents. Minds and Machines, 27(4):609–624, Dec 2017.
* [2] Renato Oliveira and Adriano Pereira. Agente de negociação de ações utilizando aprendizado por reforço. In Proceeding of the Workshop of Artificial Intelligence Applied to Finance (WAIAF 2019), Sao Jose dos Campos, Brazil, 2019. WAIAF.
* [3] Alexander Sherstov and Peter Stone. Three automated stock-trading agents: A comparative study. In Proceedings of the Agent Mediated Electronic Commerce (AMEC) Workshop - AAMAS 2004, New York, 2004.
* [4] Paulo Andre Lima Castro and Jaime Simao Sichman. Automated asset management based on partially cooperative agents for a world of risks. Applied Intelligence, 38:210–225, 2013.
* [5] J. Paulin, A. Calinescu, and M. Wooldridge. Agent-based modeling for complex financial systems. IEEE Intelligent Systems, 33(2):74–82, Mar 2018.
* [6] Leandro Anghinoni and Liang Zhao. Time series trend detection and forecasting using complex network topology analysis. In Proceeding of the Workshop of Artificial Intelligence Applied to Finance (WAIAF 2018), Sao Jose dos Campos, Brazil, 2018. WAIAF.
* [7] Rafael Silva Wagner and André Alves Portela Dos Santos. Forecasting the direction of high-frequency returns: An ensemble-trees application. In Proceeding of the Workshop of Artificial Intelligence Applied to Finance (WAIAF 2018), Sao Jose dos Campos, Brazil, 2018. WAIAF.
* [8] Harish Subramanian, Subramanian Ramamoorthy, Peter Stone, and Benjamin J. Kuipers. Designing safe, profitable automated stock trading agents using evolutionary algorithms. In GECCO ’06: Proceedings of the 8th annual conference on Genetic and evolutionary computation, pages 1777–1784, New York, NY, USA, 2006. ACM Press.
* [9] Flavio Abdenur, Elias Cavalcante-Filho, and Rodrigo De-Losso. Machine learning applied to accounting variables yields the risk-return metrics of private company portfolios. In Proceeding of the Workshop of Artificial Intelligence Applied to Finance (WAIAF 2019), Sao Jose dos Campos, Brazil, 2019. WAIAF.
* [10] Felipe Dias Paiva and Carolina Magda da Silva Roma. Métodos de deep learning aplicados a candlestick como estratégia de investimento. In Proceeding of the Workshop of Artificial Intelligence Applied to Finance (WAIAF 2019), Sao Jose dos Campos, Brazil, 2019. WAIAF.
* [11] Y. Hu and S. Lin. Deep reinforcement learning for optimizing finance portfolio management. In 2019 Amity International Conference on Artificial Intelligence (AICAI), pages 14–20, Feb 2019.
* [12] Jimin Lee, Hayeong Koh, and Hi Jun Choe. Learning to trade in financial time series using high-frequency through wavelet transformation and deep reinforcement learning. Applied Intelligence, Feb 2021.
* [13] Salvatore Carta, Andrea Corriga, Anselmo Ferreira, Alessandro Sebastian Podda, and Diego Reforgiato Recupero. A multi-layer and multi-ensemble stock trader using deep learning and deep reinforcement learning. Applied Intelligence, 51(2):889–905, Feb 2021.
* [14] Andrew R. Young. Expert Advisor programming for Metatrader 5. Edgehill Publishing, Nashville, TN, USA, 2018.
* [15] Abraham Othman and Tuomas Sandholm. Inventory-based versus prior-based options trading agents. Algorithmic Finance, 1:95–121, 2011.
* [16] John C. Hull. Options, futures, and other derivatives. Pearson Prentice Hall, Upper Saddle River, NJ [u.a.], 6. ed., pearson internat. ed edition, 2006.
* [17] Monya Baker. Is there a reproducibility crisis? Nature, 533:452–454, 2016.
* [18] Philip B. Stark. Before reproducibility must come preproducibility. Nature, 557:613, 2018.
* [19] Murilo Sibrao Bernardini and Paulo Andre Lima de Castro. Stse: Significant trading -strategy evaluation. https://github.com/paulo-al-castro/stse, 2021.
* [20] Stuart Russell and Peter Norvig. Artificial Intelligence A Modern Approach Third Edition. Prentice Hall, Englewood Cliffs - NJ, 2013.
* [21] Elena Andreou and Eric Ghysels. Structural Breaks in Financial Time Series, pages 839–870. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009.
* [22] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. The MIT Press, Boston, 2016.
* [23] David H. Bailey, Jonathan Borwein, Marcos Lopez de Prado, and Qiji Jim Zhu. Pseudo-mathematics and financial charlatanism: The effects of backtest overfitting on out-of-sample performance. Notices of the American Mathematical Society, 61:458–471, May 2014.
* [24] Ian H. Witten, Eibe Frank, and Mark A. Hall. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, New York, 2016.
* [25] Frank K. Reilly and Keith C. Brown. Investment Analysis & Portfolio Management. South-Western, Cengage Learning, New York, 2012.
* [26] Y. Luo, M. Alvarez, J. Jussa, S. Wang, A. Wang, and G. Rohal. Seven sins of quantitative investing, September 8, 2014.
* [27] Ryan Sullivan, Allan Timmermann, and Halbert White. Data-snooping, technical trading rule performance, and the bootstrap. The Journal of Finance, 54(5):1647–1691, Oct 1999.
* [28] Marcos Lopez de Prado. Advances in Financial Machine Learning. Wiley, New York, 2018.
* [29] Harry M. Markowitz. Portfolio selection. Journal of Finance, 7(1):77–91, 1952.
* [30] Michael Greenberg. eToro launches its own forex social community – OpenBook, December 2010.
* [31] R. Jesse McWaters. The future of financial services: How disruptive innovations are reshaping the way financial services are structured, provisioned and consumed, June 2015.
* [32] Tom Rollinger and Scott Hoffman. Sortino ratio: A better measure of risk. Futures magazine, 1(1):40–42, Feb 2013.
* [33] Simone Farinelli, Manuel Ferreira, Damiano Rossello, Markus Thoeny, and Luisa Tibiletti. Beyond sharpe ratio: Optimal asset allocation using different performance ratios. Journal of Banking and Finance, 32(10):2057–2063, 10 2008.
* [34] F. A. Sortino and H. J. Forsey. On the use and misuse of downside risk. Journal of Portfolio Management, 22(1):35–42, 1996.
* [35] Aswath Damodaran. Applied Corporate Finance. Wiley, New York, NY, 2010.
* [36] David H. Bailey and Marcos Lopez de Prado. The Sharpe ratio efficient frontier. Journal of Risk, 15:3–44, Feb 2012.
* [37] Benjamin Graham. The Intelligent Investor. Harper Business, Revised Edition, New York, 2006.
* [38] Interactive brokers. https://www.interactivebrokers.com/, 2021.
* [39] Alpaca securities. https://alpaca.markets/, 2021.
* [40] Cory Mitchell. An introduction to contracts for difference (CFDs), 2021.
* [41] Corporate Finance Institute. Contract for difference (CFD), 2021.
* [42] Jefferey Katz and Donna McCormick. The Encyclopedia of trading strategies. McGraw-Hill, New York, 2000.
* [43] J. Welles Wilder Jr. New Concepts in Technical Trading Systems. Trend Research, Greensboro, NC, 1978.
* [44] Paulo Andre Lima de Castro. mt5b3: A framework for building autonomous traders, January 2021.
## Appendix A Simulations setup
There are some parameters that need to be defined in order to execute
simulations in the Metatrader platforms. In Table XVII, you will find a list
of them, with explanations and the values used in the simulations described in
this paper. In Table XVIII, we provide a list of the specific parameters for
each Trading agent (or Expert Advisor). The Random Forest trader (RFOR) was
used with the basic setup described in [44].
Name | Description | Value
---|---|---
Expert | Identification of the Trading Strategy implementation (called Expert Advisor \- EA - in Metatrader) | Several EAs were used, see Section V-A for complete list
EA parameters | Specific parameters for each EA | see table XVIII for complete list
Symbol | Identification of the target asset | Several assets were used, see Table I for complete list
Period | The timeframe for testing/optimization | D1
Deposit | Initial Balance of EA | 100000
Currency | Currency used | BRL (Brazilian currency, Real)
Leverage | Leverage for testing | 1:100
Model | Tick mode | 2 (Open price only )
ExecutionMode | Execution of trade orders with a random delay | 0 (no delay )
Optimization | Genetic optimization of EA parameters | We used both possibilities with and without optimization
OptimizationCriterion | Criteria used for optimization | 0 (it means maximize balance value)
FromDate | Start date of the simulation | Several values used see in Section V
ToDate | End date of the simulation | Several values used see in Section V
ForwardMode | Custom mode of forward testing | 1 (Half of the testing period was used for parameter fitting)
TABLE XVII: Metatrader Trading Agent (Expert Advisor) Parameters
Expert Advisor | Parameter name | Value
---|---|---
MACD | Inp_Expert_Title | ExpertMACD
MACD | Inp_Signal_MACD_PeriodFast | 12
MACD | Inp_Signal_MACD_PeriodSlow | 24
MACD | Inp_Signal_MACD_PeriodSignal | 9
MACD | Inp_Signal_MACD_TakeProfit | 50
MACD | Inp_Signal_MACD_StopLoss | 20
MAMA | Inp_Expert_Title | ExpertMAMA
MAMA | Inp_Signal_MA_Period | 12; 12; 1; 120; N
MAMA | Inp_Signal_MA_Shift | 6; 6; 1; 60; N
MAMA | Inp_Signal_MA_Method | 0; 0; 0; 3; N
MAMA | Inp_Signal_MA_Applied | 1; 1; 0; 7; N
MAMA | Inp_Trailing_MA_Period | 12; 12; 1;120;N
MAMA | Inp_Trailing_MA_Shift | 0;0;1;10;N
MAMA | Inp_Trailing_MA_Method | 0;0;0;3;N
MAMA | Inp_Trailing_MA_Applied | 1;1;0;7;N
MAPS | Inp_Expert_Title | ExpertMAPSAR
MAPS | Inp_Signal_MA_Period | 12;12;1;120;Y
MAPS | Inp_Signal_MA_Shift | 6;6;1;60;Y
MAPS | Inp_Signal_MA_Method | 0;0;0;3;Y
MAPS | Inp_Signal_MA_Applied | 0;0;0;6;Y
MAPS | Inp_Trailing_ParabolicSAR_Step | 0.02;0.02;0.002;0.2;Y
MAPS | Inp_Trailing_ParabolicSAR_Maximum | 0.2;0.2;0.02;2.0;Y
MAPS2 | Inp_Expert_Title | ExpertMAPSARSizeOptimized
MAPS2 | Inp_Signal_MA_Period | 15;12;1;120;N
MAPS2 | Inp_Signal_MA_Shift | 4;6;1;60;N
MAPS2 | Inp_Signal_MA_Method | 0;0;0;3;N
MAPS2 | Inp_Signal_MA_Applied | 1;1;0;7;N
MAPS2 | Inp_Trailing_ParabolicSAR_Step | 0.02;0.02;0.002000;0.200000;N
MAPS2 | Inp_Trailing_ParabolicSAR_Maximum | 0.2;0.2;0.020000;2.000000;N
MAPS2 | Inp_Money_SizeOptimized_DecreaseFactor | 3;3;0.300000;30.000000;N
MAPS2 | Inp_Money_SizeOptimized_Percent | 10;10;1.000000;100.000000;N
TABLE XVIII: List of Expert Advisors and their specific parameters
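As an illustration of how the MACD parameters in Table XVIII (fast 12, slow 24, signal 9) are typically combined, here is a minimal sketch of the standard MACD computation from exponential moving averages (the helper names are ours; the actual Expert Advisor implementation may differ):

```python
def ema(xs, period):
    """Exponential moving average with smoothing factor 2 / (period + 1)."""
    k = 2.0 / (period + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(out[-1] + k * (x - out[-1]))
    return out

def macd(prices, fast=12, slow=24, signal=9):
    """MACD line (fast EMA minus slow EMA) and its signal line."""
    macd_line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    return macd_line, ema(macd_line, signal)
```

A trading signal is commonly generated when the MACD line crosses its signal line, with take-profit and stop-loss levels such as those in Table XVIII attached to the resulting order.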
¹ Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, NJ 07102
² Department of Physics and Department of Neurobiology, Duke University, Durham, NC 27708
# Correlating the force network evolution and dynamics in slider experiments
Chao Cheng¹ <EMAIL_ADDRESS>, Aghil Abed Zadeh² <EMAIL_ADDRESS>, Lou Kondic¹ <EMAIL_ADDRESS>
###### Abstract
Experiments involving a slider moving on top of granular media consisting
of photoelastic particles in two dimensions have uncovered elaborate dynamics
that may vary from continuous motion to crackling, periodic motion, and
stick-slip behavior. We establish that there is a clear correlation between
the slider dynamics and the response of the force networks that spontaneously
develop in the granular system. This correlation is established by the
application of persistent homology, which allows for the formulation of
objective measures for the quantification of time-dependent force networks.
We find that the correlation between the slider dynamics and the force
network properties is particularly strong in the dynamical regime
characterized by well-defined stick-slip dynamics.
## 1 Introduction
A wide range of systems exhibit intermittent dynamics as they are slowly
loaded, with different dynamical regimes governing many industrial and natural
phenomena. In these systems, the energy is loaded gradually with a stable
configuration and then is dissipated in fast dynamics with microscopic and
macroscopic rearrangements Sethna2001_nat . Examples are fracture
Bares2014_prl ; Bares2018_natcom , magnetization Urbach1995_prl , and seismic
activities Davidsen2013_prl ; Bares2018_natcom such as earthquakes, in which
the slowly loaded energy relaxes via fast reconfiguration. This intermittent
behavior has been observed in a number of granular experiments and simulations
Denisov2017_sr ; Liu2016_prl ; zadeh_pre19 ; murphy2019transforming . In
analyzing such behavior, significant progress has been made by studying
the dynamics of a slider coupled with the boundary of a granular system. A
slider can exhibit a wide variety of dynamics, including continuous flows and
periodic or intermittent stick-slip behavior zadeh_pre19 ; zadeh2019crackling
; ciamarra_prl10 ; PicaCiamarra:2009hb .
While a significant amount of research on exploring intermittent dynamics of
granular systems has been carried out, not much is known about the connection
between particle-scale response and the global dynamics, in particular for
experimental systems. In slider experiments zadeh_pre19 ; zadeh2019crackling ,
see also Fig. 1, it is possible to measure particle scale response by using
photoelastic techniques. These techniques allow for extracting dynamic
information about evolving particle interactions which typically involve meso-
scale force networks (so-called ‘force chains’). Analysis of such time-
dependent weighted networks is not a simple task, and it has evolved through
last decades in a variety of different directions, including force network
ensemble tighe_sm10 ; sarkar_prl13 , statistics-based methods peters05 ;
makse_softmatter_2014 , and network type of analysis daniels_pre12 ;
walker_pre12 . In the present work, we will consider application of persistent
homology (PH), which allows for formulating precise and objective measures of
static and dynamic properties of the force networks. This approach has been
used extensively in analysis of the data obtained via discrete element
simulations in the context of dry granular matter ardanza_pre14 ; physicaD14
and suspensions gameiro_prf_2020 , but its application to experimental data
has so far been rather sparse dijksman_2018 ; pre18_impact . We show that this
method allows to develop clear correlations between the static and dynamic
properties of the force networks on micro- and meso-scale and the macro-scale
system dynamics.
Figure 1: (a) Sketch of the experiment. A slider sits on top of a 2D granular
system with photoelastic disks, and is connected to a stage by a spring of
constant $k$, pulled at a constant speed $c$. A force gauge measures the force
$f$; the granular medium is imaged with fast cameras. (b) Photoelastic
response during loading zadeh2019crackling . Reprinted with permission from
zadeh2019crackling .
## 2 Techniques
### 2.1 Experimental techniques
In our experiments, as shown in Fig. 1(a), a stage pulls a 2D frictional
slider with a toothed bottom, of fixed length 25 cm and mass $M=85$ g. The
stage, which moves at constant speed $c$, is connected to the slider by
a linear spring of stiffness $K$. The slider rests on a vertical bed of fixed
depth $L=9.5$ cm and length 1.5 m, consisting of bi-disperse photo-elastic
disks with radii of 0.4 cm and 0.5 cm. A camera, recording the photo-elastic
response of the medium at 120 fps, is connected to the stage. We also record
the force $f$ experienced by the spring.
We consider three experiments characterized by different configurations of $c$
and $K$: Exp. 1: $K=14$ N/m, $c=0.5$ mm/s; Exp. 2: $K=70$ N/m, $c=0.5$ mm/s,
and Exp. 3: $K=70$ N/m, $c=1.0$ mm/s. The total number of analyzed frames
(images) is 30,000 for each experiment, corresponding to 250 seconds of
physical time.
### 2.2 Image processing
The goal of the image processing in this study is to reveal clear force signal
and reduce noise effects as much as possible. As the fast imaging in our
experiments constrains the resolution of images, we use the brightness method
to capture force information, which works better than the $G^{2}$ method for the
type of data collected, see zadeh2019enlightening ; a similar approach was used in
pre18_impact . We first remove background noise from the original images by
applying a filter that removes pixels of brightness below a chosen threshold
value, so as to remove low-light areas and particle textures. Multiple threshold
values were investigated, giving no quantitative difference in the results
of the topological analysis that follows; we typically use a threshold value of
90 (the maximum brightness is 255), which is appropriate for capturing the
relevant information. After thresholding, the image brightness is linearly
mapped to the 0-255 range. The MATLAB built-in functions imdilate and imerode
were applied to slightly dilate the bright regions, so as to fill the gaps
between neighboring particles wherever force chains are connected, and then to
erode away the unwanted excess dilation, restoring the force networks more
accurately. Fig. 2 shows an example of image processing; in our computations
discussed in what follows, we use grey-scale versions of figures such as
Fig. 2 for the purpose of computing the considered topological measures.
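The pipeline just described (threshold, linear rescale, dilate, then erode) can be sketched in a few lines of Python. This is only an illustrative reconstruction: the function name, the default footprint size, and the use of scipy's grey-scale morphology in place of MATLAB's imdilate/imerode are our assumptions, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def preprocess(image, threshold=90, footprint_size=3):
    """Sketch of the image-processing pipeline: remove pixels below a
    brightness threshold, linearly rescale to 0-255, then dilate and
    erode (a morphological closing) to bridge small gaps between
    bright regions of neighboring particles."""
    img = np.asarray(image, dtype=float).copy()
    img[img < threshold] = 0.0            # drop low-light areas and texture
    if img.max() > 0:
        img *= 255.0 / img.max()          # linear map onto the 0-255 range
    fp = np.ones((footprint_size, footprint_size))
    img = ndimage.grey_dilation(img, footprint=fp)  # fill gaps in chains
    img = ndimage.grey_erosion(img, footprint=fp)   # undo excess dilation
    return img
```

With `footprint_size=1` the morphology steps are the identity, which isolates the effect of thresholding and rescaling alone.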
Figure 2: An example of image processing. (a) An original experimental image.
(b) A processed image; color scheme shows pixel intensity.
### 2.3 Topological measures
Persistent homology (PH) allows for formulating objective measures describing
force networks in both simulations and experiments. Analysis of experimental
data, such as the ones considered here, presents some challenges which are
discussed in some detail in physicaD14 .
Each experimental image can be considered as an array of pixel brightness
$\theta\in[0,255]$. Since the pixels of high brightness correspond to the
particles experiencing large forces, we can apply PH to the pixels to track
and quantify their connectivity to extract (approximate) information about the
actual force networks. PH techniques essentially encode the appearance (birth)
and disappearance (death) of force networks by generating persistence diagrams
(PDs) that encode the birth and death brightness levels for components
(loosely speaking, force chains), and loops (cycles). The PDs therefore reduce
the complexity of underlying force networks into point clouds where the
coordinates are $(\theta_{\rm birth},\theta_{\rm death})$ and each point
represents an object that could be either connected component (chain) or a
loop (cycle). The lifespan of an object is defined as
$\theta_{\rm birth}-\theta_{\rm death}$, measuring how long the object lasts as the
threshold is varied. The total persistence (TP) of a PD is defined as the sum of
the lifespans of all points, TP(PD)$=\sum_{(\theta_{\rm birth},\theta_{\rm
death})\in PD}(\theta_{\rm birth}-\theta_{\rm death})$, which further reduces
the complexity of force networks to a single number kondic_2016 . Note that TP
is influenced by both how many components there are, and by their lifespans.
Another quantity related to PDs is the distance (or difference) between them.
The distance measures essentially the cost of mapping points in one PD to
those in another PD; in the case of different numbers of points, the extra ones
are mapped to the diagonal. In particular, the degree-q Wasserstein distance
between two persistence diagrams PD and PD’ is defined as
$d_{Wq}(PD,PD^{\prime})=\inf_{\gamma:{\rm PD}\rightarrow{\rm
PD^{\prime}}}\left(\sum_{p\in{\rm
PD}}\|p-\gamma(p)\|_{\infty}^{q}\right)^{1/q},$
where $\gamma$ is a bijection between points from PD to PD’, $\gamma:{\rm
PD}\rightarrow{\rm PD^{\prime}}$. In the present work we use $q=2$ and carry
out the calculations using the method discussed in TDA_exp ; rTDA .
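As a concrete illustration of these definitions, total persistence is a one-line sum, and for tiny, equal-size diagrams the degree-2 Wasserstein distance can be computed by brute force over all bijections. This sketch omits the diagonal projections that the dedicated tools cited above (TDA_exp ; rTDA) handle, so it only matches the definition when both diagrams have the same number of points.

```python
import itertools
import numpy as np

def total_persistence(pd):
    """TP(PD): sum of lifespans theta_birth - theta_death over all points
    of a persistence diagram, given as an (n, 2) array of (birth, death)."""
    pd = np.asarray(pd, dtype=float)
    return float(np.sum(pd[:, 0] - pd[:, 1]))

def wasserstein2(pd_a, pd_b):
    """Brute-force degree-2 Wasserstein distance between two diagrams with
    the same number of points, minimizing over all bijections gamma; the
    general definition also matches unpaired points to the diagonal,
    which is omitted here for simplicity."""
    pd_a = np.asarray(pd_a, dtype=float)
    pd_b = np.asarray(pd_b, dtype=float)
    best = np.inf
    for perm in itertools.permutations(range(len(pd_b))):
        # inner norm is the sup-norm, as in the definition in the text
        cost = sum(np.max(np.abs(pd_a[i] - pd_b[j])) ** 2
                   for i, j in enumerate(perm))
        best = min(best, cost)
    return best ** 0.5
```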
## 3 Results
### 3.1 Structural response
Figure 3: Slider’s velocity and spring force for the considered experiments:
(a) Exp. 1, (b) Exp. 2, (c) Exp. 3.
Figure 3 shows the calculated velocity of the slider and the measured force,
$f$, on the spring. This figure clearly illustrates the slider’s dynamics. We
note that Exp. 1 exhibits crackling stick-slip behavior, as the driving rate is
small. During a stick, the spring builds up the stress, while the slider is
almost fixed, until the spring eventually yields, leading to a sharp velocity
jump and drop of the force. The system behaves more similarly to a continuous
flow for Exps. 2 and 3, as also discussed in zadeh2019crackling .
Figure 4: $W2$ distance and total persistence, corresponding to the
experiments shown in Fig. 3.
Next, we proceed with considering persistence measures. Figure 4 shows the
Wasserstein $q=2$ distance (W2) for the considered experiments, as well as the
total persistence (TP), for the same time interval as shown in Fig. 3. Direct
comparison with Fig. 3 shows good agreement between the force network
measures, W2 and TP, on the one side, and the slider’s velocity and the
spring force, on the other. Figure 5 illustrates in more detail the degree of
agreement between the force and W2 for Exp. 1. We note that this experiment
shows particularly good agreement, suggesting a stronger correlation between the
force on the slider and force network properties for well defined stick-slip
dynamics. We note good agreement between W2 and TP, suggesting the existence
of a correlation between the ‘strength’ of the network measured by TP, and its
temporal evolution, measured by W2.
Having established correspondence between the slider dynamics and the force
network, we proceed to discuss whether such a correlation could be exploited for
predictive purposes. To explore this question, we consider the cross-
correlation between considered quantities. More precisely, consider two time
series $x_{t}$, $y_{t}$, with the data $(x_{1},x_{2},\cdots,x_{m})$, and
$(y_{1},y_{2},\cdots,y_{m})$. The cross-covariance is defined by
$c_{xy}(k)=\frac{1}{m}\sum_{t=1}^{m}(x_{t}-\bar{x})(y_{t-k}-\bar{y})$ (1)
where $\bar{x}=\sum_{i=1}^{m}x_{i}/m$, $\bar{y}=\sum_{i=1}^{m}y_{i}/m$, and
$k=0,\pm 1,\pm 2,\cdots$ is the chosen lag. When $t-k$ falls outside the index
range of $y$, we set $y_{t-k}=0$. Note that for a positive lag $k$, $x_{t}$ is correlated
with $y_{t-k}$ (at earlier time), which means that we may be able to use such
correlation to predict the future $x$ from earlier $y$. Finally, we define the
sample standard deviations of the series as
* •
$s_{x}=\sqrt{c_{xx}(0)}$, where $c_{xx}(0)=Var(x)$.
* •
$s_{y}=\sqrt{c_{yy}(0)}$, where $c_{yy}(0)=Var(y)$.
The cross-correlation coefficient is given by
$r_{xy}(k)=\frac{c_{xy}(k)}{s_{x}s_{y}},k=0,\pm 1,\pm 2,\cdots$ (2)
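Equations (1)-(2) translate directly into code. The sketch below zero-pads $y$ outside its index range, following the definition above; the function name is ours.

```python
import numpy as np

def cross_correlation(x, y, k):
    """Cross-correlation coefficient r_xy(k) of Eqs. (1)-(2): the
    cross-covariance c_xy(k), with y zero-padded outside its index
    range, normalized by s_x = sqrt(c_xx(0)) and s_y = sqrt(c_yy(0))."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m = len(x)
    xb, yb = x.mean(), y.mean()
    c = 0.0
    for t in range(m):
        yv = y[t - k] if 0 <= t - k < m else 0.0  # zero-pad out-of-range y
        c += (x[t] - xb) * (yv - yb)
    c /= m
    s_x = np.sqrt(np.mean((x - xb) ** 2))  # s_x = sqrt(c_xx(0))
    s_y = np.sqrt(np.mean((y - yb) ** 2))  # s_y = sqrt(c_yy(0))
    return c / (s_x * s_y)
```

At zero lag with $y=x$ this reduces to $c_{xx}(0)/s_x^2 = 1$, as expected.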
Figure 5: Force on the slider and W2 distance (data from Figs. 3(a) and 4(a)).
Figure 6: Cross correlation coefficient as a function of lag $k$ (in seconds).
$r_{f*}$ and $r_{v*}$ for Exp. 1, where $*$ stands either for W2 distance
(solid line), or for TP (dashed line).
Figure 6 shows the cross-correlation coefficients for Exp. 1, where $r_{f*}$,
$r_{v*}$ correspond to the cross-correlation coefficient between the $f$ or
$v$ and the measure of interest *, which can be $w$ for W2 distance or $t$ for
TP. Focusing first on the results without lag, $k=0$, we note that the
$r_{f*}$ correlations are higher than the $r_{v*}$ ones; we expect that this
is due to the fact that the velocity data were obtained by taking a discrete
derivative of the slider position data, introducing further noise which may
blur the actual signal. The $r_{f*}$ results show that the correlation is
higher for the TP data, which is not surprising since TP is expected to
reflect the force on the slider, while W2 distance measures the temporal
difference in PDs.
Considering next the results for non-zero lags, we note a different behavior
of $r_{f*}$ and $r_{v*}$ curves, with $r_{v*}$ curves rising from negative to
positive as lag is increased. This difference results from the fact that the
structures of the force and velocity profiles are rather different. The
velocity profile shows sharp transitions, while the force profile slowly
builds up during the stick periods and drops dramatically at the events. More
importantly, we note that within a reasonable range of positive lags, the
$r_{fw}$ and $r_{ft}$ are still significant in size, suggesting a potential
for predictability. It should be noted, however, that since the main part of
the data is in a "stick" region, the correlation will naturally weigh these
data points more heavily. A more insightful procedure would involve correlating
the measures just before a slip event. Such an analysis would, however, require
more data points, and possibly also more detailed experimental input, and
therefore we leave it for future work.
Before closing, we note that for Exps. 2 and 3 we obtain consistent results;
however, the correlations are weaker, e.g., the maximum of $r_{ft}$ decreases from
0.65 (Exp. 1) to 0.45 (Exp. 2) and to 0.3 (Exp. 3). Despite the fact that
all three experiments fall into the same category of stick-slip in the dynamic
phase diagram zadeh2019crackling , clearly (see Fig. 3) the slip events are
much stronger and better defined for Exp. 1, suggesting that in particular for
such situation the persistence measures provide insightful information.
## 4 Conclusion
We find that the tools of persistent homology (PH) allow for correlating the
dynamics of a slider and the photoelastic response of granular particles. In
particular, the stick and slip regimes of the slider dynamics are well captured by
the PH measures. These results suggest that there is a potential for
developing predictive capabilities by analyzing the response of the force
network to an external perturbation. One open question is how precise the
information about the forces between the granular particles needs to be in order
to allow for further development of this potential. We hope that our results
set the stage for this future work.
## Acknowledgements
The authors acknowledge many insightful conversations with R. Basak, M.
Carlevaro, K. Daniels, M. Kramar, K. Mischaikow, J.Morris, L. Pugnaloni, A.
Singh, J. Socolar, H. Zheng and J. Barés. CC and LK acknowledge support by the
ARO grant No. W911NF1810184.
## References
* (1) J.P. Sethna, K.A. Dahmen, C.R. Myers, Nature 410, 242 (2001)
* (2) J. Barés, M.L. Hattali, D. Dalmas, D. Bonamy, Phys. Rev. Lett. 113, 264301 (2014)
* (3) J. Barés, A. Dubois, L. Hattali, D. Dalmas, D. Bonamy, Nature Comm. 9, 1253 (2018)
* (4) J.S. Urbach, R.C. Madison, J.T. Markert, Phys. Rev. Lett. 75, 276 (1995)
* (5) J. Davidsen, G. Kwiatek, Phys. Rev. Lett. 110, 068501 (2013)
* (6) D.V. Denisov, K.A. Lőrincz, W.J. Wright, T.C. Hufnagel, A. Nawano, X. Gu, J.T. Uhl, K.A. Dahmen, P. Schall, Sci. Rep. 7, 43376 (2017)
* (7) C. Liu, E.E. Ferrero, F. Puosi, J.L. Barrat, K. Martens, Phys. Rev. Lett. 116, 065501 (2016)
* (8) A.A. Zadeh, J. Barés, J.E.S. Socolar, R.P. Behringer, Phys. Rev. E 99, 052902 (2019)
* (9) K.A. Murphy, K.A. Dahmen, H.M. Jaeger, Phys. Rev. X 9, 011014 (2019)
* (10) A.A. Zadeh, J. Barés, R.P. Behringer, Phys. Rev. E 99, 040901 (2019)
* (11) M. Pica Ciamarra, E. Lippiello, C. Godano, L. de Arcangelis, Phys. Rev. Lett. 104, 238001 (2010)
* (12) M. Pica Ciamarra, A. Coniglio, Phys. Rev. Lett. 103, 235701 (2009)
* (13) B.P. Tighe, J.H. Snoeijer, T.J.H. Vlugt, M. van Hecke, Soft Matter 6, 2908 (2010)
* (14) S. Sarkar, D. Bi, J. Zhang, R.P. Behringer, B. Chakraborty, Phys. Rev. Lett. 111, 068301 (2013)
* (15) J. Peters, M. Muthuswamy, J. Wibowo, A. Tordesillas, Phys. Rev. E 72, 041307 (2005)
* (16) L. Bo, R. Mari, C. Song, H.A. Makse, Soft Matter 10, 7379 (2014)
* (17) D.S. Bassett, E.T. Owens, K.E. Daniels, M.A. Porter, Phys. Rev. E 86, 041306 (2012)
* (18) D. Walker, A. Tordesillas, Phys. Rev. E 85, 011304 (2012)
* (19) S. Ardanza-Trevijano, I. Zuriguel, R. Arévalo, D. Maza, Phys. Rev. E 89, 052212 (2014)
* (20) M. Kramár, A. Goullet, L. Kondic, K. Mischaikow, Physica D. 283, 37 (2014)
* (21) M. Gameiro, A. Singh, L. Kondic, K. Mischaikow, J.F. Morris, Phys. Rev. Fluids 5, 034307 (2020)
* (22) J.A. Dijksman, L. Kovalcinova, J. Ren, R.P. Behringer, M. Kramár, K. Mischaikow, L. Kondic, Phys. Rev. E 97, 042903 (2018)
* (23) T. Takahashi, A.H. Clark, T. Majmudar, L. Kondic, Phys. Rev. E 97, 012906 (2018)
* (24) A.A. Zadeh, J. Barés, T.A. Brzinski, K.E. Daniels, J. Dijksman, N. Docquier, H.O. Everitt, J.E. Kollmer, O. Lantsoght, D. Wang et al., Gran. Matt. 21 (2019)
* (25) L. Kondic, M. Kramár, L.A. Pugnaloni, C.M. Carlevaro, K. Mischaikow, Phys. Rev. E 93, 062903 (2016)
* (26) _A data exploration tool for TDA_ , https://github.com/rachellevanger/tda-persistence-explorer
* (27) _Statistical tools for TDA_ , https://cran.r-project.org/web/packages/TDA/index.html
# Reservoir Computers Modal Decomposition and Optimization
Chad Nathe Mechanical Engineering Department, University of New Mexico,
Albuquerque, NM 87131 Enrico Del Frate Mechanical Engineering Department,
University of New Mexico, Albuquerque, NM 87131 Thomas Carroll U.S. Naval
Research Laboratory, Washington, DC 20375, USA Louis Pecora U.S. Naval
Research Laboratory, Washington, DC 20375, USA Afroza Shirin Mechanical
Engineering Department, University of New Mexico, Albuquerque, NM 87131
Francesco Sorrentino<EMAIL_ADDRESS>Mechanical Engineering Department,
University of New Mexico, Albuquerque, NM 87131
###### Abstract
The topology of a network associated with a reservoir computer is often taken
so that the connectivity and the weights are chosen randomly. Optimization is
rarely considered, as the parameter space is typically too large. Here we
investigate this problem for a class of reservoir computers for which we
obtain a decomposition of the reservoir dynamics into modes, which can be
computed independently of one another. Each mode depends on an eigenvalue of
the network adjacency matrix. We then take a parametric approach in which the
eigenvalues are parameters that can be appropriately designed and optimized.
In addition, we introduce the application of a time shift to each individual
mode. We show that manipulations of the individual modes, either in terms of
the eigenvalues or the time shifts, can lead to dramatic reductions in the
training error.
A reservoir computer (RC) is a complex nonlinear dynamical system that is used
for processing and analyzing empirical data, see e.g. jaeger2001echo ;
schrauwen2007overview ; natschlager2002liquid ; maass2002real ;
martinenghi2012photonic ; brunner2013parallel ; nakajima2015information ;
hermans2015photonic ; vinckier2015high ; duport2016fully ; larger2017high ,
modeling of complex dynamical systems suykens2012artificial , speech
recognition crutchfield2010introduction , learning of context free and context
sensitive languages rodriguez2001simple ; gers2001lstm , the reconstruction
and prediction of chaotic attractors lu2018attractor ; zimmermann2018observing
; antonik2018using ; jaeger2004harnessing ; pathak2017using ; pathak2018model
, image recognition jalalvand2018application , and control of robotic systems
graves2004biologically ; robinson1994application ; lukovsevivcius2012reservoir
. A typical RC consists of a set of nodes coupled together to form a network.
Each node of the RC evolves in time in response to an input signal that is fed
into the reservoir. An output signal is then generated from the time
evolutions of the RC nodes. In a RC, the output connections (those that
connect the RC nodes to the output) are trained to produce a best fit between
the output signal and a training signal related to the original input signal.
On the other hand, the connections between the nodes of the reservoir are
constant parameters of the system. As a result, RCs are easier to analyze than
other machine learning tools for which all the connections are typically
trained.
Reference carroll2019network studied the effects of the network topology on
the performance of a reservoir computer and focused on the sparsity of the
connections and the presence of network symmetries. Recent work has analyzed
linear reservoir computers boyd1985fading ; bollt2020explaining and pointed
out a connection with the theory of dynamic mode decomposition
schmid2010dynamic . A common assumption is that nonlinear reservoirs can
outperform linear reservoirs bollt2020explaining . Optimizing the
hyperparameters of a reservoir computer is often done, but optimizing the
connections between the RC nodes is more difficult due to the high-dimensional
parameter space. The standard recipe is to use random matrices. Our analysis
that follows shows that under certain conditions, the reservoir equations can
be rewritten in an equivalent form which corresponds to individual uncoupled
nodes, which are easier to optimize.
We consider a reservoir computer modeled by the following dynamical equations
in continuous time,
$\dot{r}_{i}(t)=F\Bigl{(}r_{i}(t),\sum_{j=1}^{N}A_{ij}r_{j}(t),s_{1}(t),s_{2}(t),...,s_{l}(t)\Bigr{)},\quad
i=1,...,N,$ (1)
where $r_{i}$ is the scalar state of node $i$ of the reservoir, $N$ is the
number of nodes, the adjacency matrix $A=\\{A_{ij}\\}$ describes the
connectivity between the network nodes, and $s_{1},s_{2},...s_{l}$ are input
signals to the reservoir. These can represent different data being fed into
the reservoir, such as in a weather prediction application, a rainfall time
series, a wind time series, a humidity time series, and so on. These input
signals are in general a function of an underlying process to which the
reservoir is applied. The training signal $g(t)$ is another signal from the
same underlying process which is related to the input signals through a
complex relation (e.g., in the weather prediction application, a temperature
time series.) The function $F:R^{l+2}\rightarrow R$ determines the particular
dynamics of the reservoir nodes. Next we focus on a specific class of
reservoirs, which possess the property of universality grigoryeva2018universal
, described by the following set of equations,
$\dot{r}_{i}(t)=\alpha\Bigl{(}1+\epsilon
s_{2}(t)\Bigr{)}r_{i}(t)+\sum_{j=1}^{N}A_{ij}r_{j}(t)+w_{i}s_{1}(t),\quad
i=1,...,N,$ (2)
where $l=2$ and $s_{1}$ and $s_{2}$ are the two input signals. In what
follows, we will often refer to Eq. (2) as that of a linear reservoir
computer. The particular process to generate the adjacency matrix $A$ is
described in the Supplementary Information. In the rest of this paper we set
$N=100$ and we also assume for simplicity that $A$ is symmetric, $A=A^{T}$.
The coefficients $w_{i}$ represent the weight by which the input signal is
multiplied in the dynamics of node $i$. These are also typically randomly
chosen carroll2019network .
The underlying process we want to model may evolve in time based on a set of
deterministic (chaotic) equations, such as the equations of the Lorenz chaotic
attractor, in the variables $x_{L}(t),y_{L}(t),z_{L}(t)$ (see Supplementary
Information.) One task we can give the reservoir is to reconstruct the
$z_{L}(t)$ time evolution (training signal) from knowledge of either
$x_{L}(t)$ or $y_{L}(t)$ or both (input signals). We will also consider other
tasks for which the time series are generated by other chaotic or periodic
systems, such as the Hindmarsh-Rose system or the Duffing system (see
Supplementary Information.)
In order to examine the accuracy of the reservoir computer relative to the
dynamical system it is modeling, we must have a way to quantify how well the
reservoir is able to reproduce the training signal $g(t)$ from knowledge of
the input signals $s_{1}(t)$ and $s_{2}(t)$. After integrating the reservoir
equations for a long enough time, its dynamics can be described by the
$T\times(N+1)$ matrix,
$\Omega=\left[\begin{array}[]{ccccc}r_{1}(1)&r_{2}(1)&...&r_{N}(1)&1\\\
r_{1}(2)&r_{2}(2)&...&r_{N}(2)&1\\\ \vdots&\vdots&\vdots&\vdots&\vdots\\\
r_{1}(T)&r_{2}(T)&...&r_{N}(T)&1\\\ \end{array}\right]$ (3)
Here, $N$ is the number of nodes in the reservoir computer and $T$ is the
number of time steps taken. We add a column whose entries are all ones to
account for any constant offset. The fit $\mathbf{h}=[h(1),h(2),...,h(T)]$ to
the training signal $\mathbf{g}=[g(1),g(2),...,g(T)]$ is computed as
$h(t)=\sum_{i=1}^{N}\kappa_{i}r_{i}(t)+\kappa_{N+1}$ (or, equivalently, in
vectorial form $\mathbf{h}=\Omega\boldsymbol{\kappa}$), where the vector
$\boldsymbol{\kappa}=[\kappa_{1},\kappa_{2},...,\kappa_{N+1}]$ contains
a set of unknown coefficients to be determined. We set
$\boldsymbol{\kappa}=\Omega^{\dagger}{\mathbf{g}},$ (4)
where with the symbol $\Omega^{\dagger}$ we indicate the pseudo-inverse of the
matrix $\Omega$.
From this, we can compute the training error doi:10.1063/1.5123733
$\Delta=\frac{\langle\Omega\boldsymbol{\kappa}-\mathbf{g}\rangle}{\langle\mathbf{g}\rangle}$,
where the notation $\langle\rangle$ is defined
$\langle\mathbf{X}\rangle=\sqrt{\frac{1}{T}\sum_{i=1}^{T}(X(i)-\mu)^{2}}$ for
$\mathbf{X}$ any $T$-dimensional vector and
$\mu=\frac{1}{T}\sum_{i=1}^{T}X(i)$.
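The training procedure just described fits in a few lines of numpy; the helper name is ours, and this is only a sketch of the fit defined in Eqs. (3)-(4) and the error $\Delta$.

```python
import numpy as np

def train(R, g):
    """Build Omega = [r_1 ... r_N | 1], compute kappa = pinv(Omega) g,
    and return kappa and the normalized training error
    Delta = <Omega kappa - g> / <g>, with <.> the standard deviation."""
    T = R.shape[0]
    Omega = np.hstack([R, np.ones((T, 1))])     # column of ones for the offset
    kappa = np.linalg.pinv(Omega) @ g           # Eq. (4)
    std = lambda v: np.sqrt(np.mean((v - v.mean()) ** 2))
    Delta = std(Omega @ kappa - g) / std(g)
    return kappa, Delta
```

When $g(t)$ is exactly a linear combination of the node states plus a constant offset, the fit recovers the coefficients and $\Delta$ vanishes.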
Feeding the reservoir with more than one input signal or even driving the
reservoir in different ways with the same input signal can lead to improved
performance. To see this, we perform numerical simulations in order to compare
the single input with the $l=2$ input case of Eqs. (2). In Fig. 1 we plot the
training error $\Delta$ vs the coefficient $\epsilon$ seen in Eq. (2) (the
case $\epsilon=0$ corresponds to no effect of $s_{2}(t)$ on the reservoir
dynamics.) In this figure we deal with three different tasks, i.e.,
reconstructing $g(t)=z(t)$ from $s_{1}(t)=s_{2}(t)=x(t)$ for the Lorenz
chaotic system (A) and the Hindmarsh-Rose chaotic system (B), and
reconstructing $g(t)=y(t)$ from $s_{1}(t)=s_{2}(t)=x(t)$ for the Duffing
periodic system (C).
We see from these plots that as we increase $\epsilon$ the training error is
first reduced and then it increases, indicating the advantage of picking
specific values of $\epsilon$. We have observed this type of relationship
between the training error and $\epsilon$ in a large variety of situations,
including discrete time reservoirs (see Supplementary Information). Our
results show that two-input reservoir computers (2) are typically advantageous
compared to the single input case ($\epsilon=0$).
Figure 1: Plots of the training error vs. $\epsilon$. Here we plot the
training error $\Delta$ for the following tasks in continuous time: Lorenz
attractor (A), Hindmarsh-Rose attractor (B), and the Duffing attractor (C). In
A and B $s_{1}(t)=s_{2}(t)=x(t)$ and $g(t)=z(t)$. In C
$s_{1}(t)=s_{2}(t)=x(t)$ and $g(t)=y(t)$.
We also considered the case that the reservoir is only driven by $s_{2}(t)$
and not by $s_{1}(t)$, i.e., for which the coefficients $w_{i}=0$. However, we
found that for this case the training error was always close to $1$, which
seems to indicate that the advantage of the reservoir (2) is limited to the case
that both $s_{1}(t)$ and $s_{2}(t)$ are used.
Next we obtain a modal decomposition for the reservoir dynamics (2). Our
derivations that follow are obtained for continuous time, but analogous
derivations can be obtained for discrete time (see Supplementary Information.)
We first rewrite Eq. (2) in vector form,
$\mathbf{\dot{r}}(t)=p_{1}(t)I\mathbf{r}(t)+A\mathbf{r}(t)+\mathbf{w}s_{1}(t),$
(5)
where $I$ is the identity matrix, $p_{1}(t)=\alpha(1+\epsilon s_{2}(t))$ and,
$\mathbf{r}(t)=\left[\begin{array}[]{c}r_{1}(t)\\\ r_{2}(t)\\\ \vdots\\\
r_{N}(t)\end{array}\right]\;\;\;\mathbf{w}=\left[\begin{array}[]{c}w_{1}\\\
w_{2}\\\ \vdots\\\ w_{N}\end{array}\right]$ (6)
As we have set the adjacency matrix $A$ to be symmetric, it is also
diagonalizable, $A=V\Lambda V^{T}$ where $V$ is the matrix whose columns are
the eigenvectors of $A$ and $\Lambda$ is a diagonal matrix with the real
eigenvalues of the matrix $A$ on the main diagonal. We pre-multiply Eq. (5) by
$V^{T}$ and after setting $\mathbf{q}(t)=V^{T}\mathbf{r}(t)$ and
$\mathbf{c}=V^{T}\mathbf{w}$, we obtain,
$\mathbf{\dot{q}}(t)=p_{1}(t)\mathbf{q}(t)+\Lambda\mathbf{q}(t)+\mathbf{c}s_{1}(t),$
(7)
which breaks up into a set of $N$ independent modes,
${\dot{q}}_{i}(t)=p_{1}(t)q_{i}(t)+\lambda_{i}q_{i}(t)+c_{i}s_{1}(t),\quad
i=1,...,N,$ (8)
with solution,
$q_{i}(t)=\exp\Bigl{(}\int_{0}^{t}(\lambda_{i}+p_{1}(\tau))d\tau\Bigr{)}q_{i}(0)+c_{i}\int_{0}^{t}\exp\Bigl{(}\int_{\tau}^{t}(\lambda_{i}+p_{1}(\rho))d\rho\Bigr{)}s_{1}(\tau)d\tau,$
(9)
where the first term on the right hand side of (9) is the free evolution, which
by the assumption of stability goes to zero for large $t$. For large $t$, each
mode $q_{i}$ differs from the others through the coefficient $\lambda_{i}$,
while the particular modal amplitude is given by $c_{i}$. We set
$h(t)=\sum_{j}\kappa^{\prime}_{j}q_{j}(t)+\kappa_{N+1},$ (10)
where
$\kappa^{\prime}_{j}=\sum_{i=1}^{N}V_{ij}\kappa_{i},$ (11)
i.e., $h(t)$ can be written as a linear combination of the modes $q_{j}(t)$.
It is important to emphasize that for large enough $t$ the particular value of
$c_{i}$ becomes irrelevant in order to determine the best fit to the training
signal, as in Eq. (10) each mode is ‘rescaled’ by a particular coefficient
$\kappa^{\prime}_{i}$. Another key observation is that the magnitude of
$\int_{0}^{t}\exp\Bigl{(}\int_{\tau}^{t}(\lambda_{i}+p_{1}(\rho))d\rho\Bigr{)}s_{1}(\tau)d\tau$
will depend on the value of $\lambda_{i}$. In practice, it may be convenient
to properly rescale each mode to be
$\tilde{q}_{i}(t)=(\lambda_{i}+\bar{p}_{1})^{-1}\int_{0}^{t}\exp\Bigl{(}\int_{\tau}^{t}(\lambda_{i}+p_{1}(\rho))d\rho\Bigr{)}s_{1}(\tau)d\tau,$
(12)
where $\bar{p}_{1}$ is a time-average value for $p_{1}$.
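Since the modes of Eq. (8) are uncoupled, they can be integrated independently. A minimal forward-Euler sketch is shown below; the function name, step size, and the choice of integrator are ours (a production code would use a higher-order scheme).

```python
import numpy as np

def simulate_modes(lambdas, c, s1, p1, dt):
    """Integrate the uncoupled modes of Eq. (8),
    dq_i/dt = (p_1(t) + lambda_i) q_i + c_i s_1(t),
    with forward Euler. s1 and p1 are sampled series of equal length.
    Returns a (T, N) array of mode trajectories starting from q(0) = 0."""
    lambdas = np.asarray(lambdas, dtype=float)
    c = np.asarray(c, dtype=float)
    T = len(s1)
    q = np.zeros((T, len(lambdas)))
    for t in range(1, T):
        dq = (p1[t - 1] + lambdas) * q[t - 1] + c * s1[t - 1]
        q[t] = q[t - 1] + dt * dq
    return q
```

For a stable mode ($\lambda_i + p_1 < 0$) driven by a constant input, the trajectory relaxes to the fixed point $-c_i s_1/(\lambda_i + p_1)$, consistent with the free evolution in Eq. (9) decaying to zero.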
We can now formulate the problem of finding the best fit $h(t)$ to the
training signal in terms of the modes $q_{1}(t),q_{2}(t),...,q_{N}(t)$. To
this end, we introduce the $T\times(N+1)$ matrix,
$\Omega^{\prime}=\left[\begin{array}[]{ccccc}q_{1}(1)&q_{2}(1)&...&q_{N}(1)&1\\\
q_{1}(2)&q_{2}(2)&...&q_{N}(2)&1\\\ \vdots&\vdots&\vdots&\vdots&\vdots\\\
q_{1}(T)&q_{2}(T)&...&q_{N}(T)&1\\\ \end{array}\right]$ (13)
and we define the best fit to the training signal,
$\mathbf{h}^{\prime}=\Omega^{\prime}\boldsymbol{\kappa}^{\prime}$, where the
vector
$\boldsymbol{\kappa}^{\prime}=[\kappa^{\prime}_{1},\kappa^{\prime}_{2},...,\kappa^{\prime}_{N+1}]$
contains a set of unknown coefficients to be determined. The best fit is
obtained by setting
$\boldsymbol{\kappa}^{\prime}=\Omega^{\prime\dagger}{\mathbf{g}}$.
The two best fits are equal to one another. To see this, assume that we first
compute the coefficients $\bm{\kappa}^{\prime}$. To this set of
coefficients corresponds a set of coefficients $\bm{\kappa}$, which can be
obtained by solving Eq. (11).
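This equivalence can be checked numerically: since $V$ is orthogonal, the columns of $\Omega^{\prime}$ span the same space as those of $\Omega$, so the two least-squares fits coincide and the coefficients are related by Eq. (11). A sketch with synthetic data (the sizes and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 10
R = rng.standard_normal((T, N))         # node states r_i(t), rows indexed by t
g = rng.standard_normal(T)              # training signal
A = rng.standard_normal((N, N))
A = (A + A.T) / 2                       # symmetric adjacency matrix
_, V = np.linalg.eigh(A)                # orthogonal eigenvector matrix

Q = R @ V                               # modes: q(t) = V^T r(t), stored row-wise
Omega = np.hstack([R, np.ones((T, 1))])
Omega_p = np.hstack([Q, np.ones((T, 1))])

kappa = np.linalg.pinv(Omega) @ g
kappa_p = np.linalg.pinv(Omega_p) @ g

# the two best fits coincide, and kappa' = V^T kappa, as in Eq. (11)
assert np.allclose(Omega @ kappa, Omega_p @ kappa_p)
assert np.allclose(kappa_p[:N], V.T @ kappa[:N])
```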
An illustration of the reservoir computer and of its modal decomposition is
shown in Fig. 2. Figure 3 is a plot of the individual modes for the case of
the Lorenz system as the parameter $\lambda$ is varied.
There are several advantages of the modal decomposition. One is that in case
of instability of one or a few modes, these can be ‘removed’ without the
instability affecting the remaining modes. Another advantage is the
possibility to individually manipulate each $q_{i}(t)$ before it is used to
generate the best fit to the training signal. One simple such manipulation
that we will study in what follows is application of a time shift
$q_{i}(t)\rightarrow q_{i}(t+\tau_{i})$. As we will see, this simple
modification will lead to dramatic reductions in the training error.
Figure 2: On the left, we have a reservoir computer with a network of
connected nodes and on the right a modal decomposition of the reservoir
dynamics, where each _mode_ is described by a _node_. Figure 3: Mode amplitude
plot from a continuous time reservoir (8). The input signal is the $x_{L}$
state from the Lorenz system. We set $\alpha=-15$ and $\epsilon=0.5$.
Our analysis shows that the eigenvalues $\lambda_{i}$ differentiate individual
modes. We now consider a parametric approach in which the eigenvalues
$\lambda_{i}$ are treated as parameters of Eq. (8). We first choose an
interval $[a,b]$ and then select the $\lambda_{i}\in[a,b]$. A trivial choice
is to pick the eigenvalues to be uniformly randomly distributed in the
interval.
We are now interested in how the coefficients $\kappa^{\prime}$ (i.e., the
mode weights) depend on the particular choice of the eigenvalues $\lambda$. We
observe that typically the curve $\kappa^{\prime}(\lambda)$ is robust to (i)
the particular choice of $N$, (ii) the particular sampling of the eigenvalues
from the interval $[a,b]$, and (iii) the particular choice of the three time
series $s_{1}(t),s_{2}(t),g(t)$ from a given (the same) attractor. Property
(iii) indicates robustness of the vector $\boldsymbol{\kappa}^{\prime}$ with
respect to variations in the initial condition, which predicts a low testing
error (see Supplementary Information.) As an example of this, in Fig. 4 we
plot the resulting $\kappa^{\prime}$ vs the eigenvalues $\lambda$, for the
case of a continuous-time reservoir applied to the Lorenz chaotic system. In
plot (A) we use linearly spaced eigenvalues and in plot (B) we use randomly
spaced eigenvalues from a uniform distribution. In both plots we set $N=100$
nodes and use the interval $[-10^{2},-10^{-2}]$. Different curves in the same
plot are for several choices of the initial conditions on the Lorenz
attractor. In order to ensure that the time traces $s_{1}(t),s_{2}(t),g(t)$
are from the attractor, we take the last point of the Lorenz trajectory from
the previous iteration as the initial point for the new iteration. In all
cases, we see that certain eigenvalues are associated with larger
$\kappa^{\prime}$ values in modulus (both positive and negative.) Other
eigenvalues instead have associated $\kappa^{\prime}$ close to zero,
indicating that these do not play a significant role in the mapping between
the input signals and the output (training) signal. We also see that the plots
in Fig. 4 are consistent over different iterations of the same task,
indicating that the preference for certain eigenvalues is robust with respect
to the particular choice of the input and training time series from the same
attractor. The figure also shows that the functional relationship between
$\lambda$ and $\kappa^{\prime}$ is robust with respect to variations in the
number of nodes $N$. We thus envision an advantage of picking the eigenvalues
$\lambda$’s about the maxima and minima of the $\kappa^{\prime}$ vs. $\lambda$
plot, which provides the motivation for the optimization study presented next.
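The construction behind Fig. 4 can be sketched as follows (a simplified, hypothetical reproduction: forward-Euler integration of the uncoupled linear modes, a synthetic input standing in for $x_{L}(t)$, and $c_{i}=1$):

```python
import numpy as np

def linear_mode_responses(lams, s1, dt):
    """Forward-Euler integration of N uncoupled modes dq_i/dt = lam_i q_i + s1(t)."""
    Q = np.zeros((len(s1), len(lams)))
    for k in range(len(s1) - 1):
        Q[k + 1] = Q[k] + dt * (lams * Q[k] + s1[k])
    return Q

dt = 1e-3
t = np.arange(0.0, 20.0, dt)
s1 = np.sin(t) + 0.5 * np.sin(2.1 * t)   # synthetic input (stand-in for x_L)
g = np.cos(t)                            # synthetic training signal

lams = np.linspace(-100.0, -0.01, 50)    # linearly spaced eigenvalues in [a, b]
Q = linear_mode_responses(lams, s1, dt)
kappa, *_ = np.linalg.lstsq(Q, g, rcond=None)
# Plotting kappa against lams gives a curve analogous to Fig. 4: a few
# eigenvalues carry large |kappa| while most weights remain near zero.
```

Euler integration is stable here because $|\lambda_i|\,dt \le 0.1$ for all modes.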
Figure 4: Lorenz system. Continuous time RC, $N=100$ nodes. We use $x_{L}(t)$
as the input signal and $z_{L}(t)$ as the training signal. Linearly spaced
eigenvalues are used in plot A and uniformly randomly distributed eigenvalues
are used in plot B. For both plots, eigenvalues are from the interval
$[-10^{2},-10^{-2}]$. Figure 5: We plot the training error $\Delta$ vs the
length of the time series $T$. (A) The HR system. We set $\alpha=-15$,
$\epsilon=20$. For the linear RC, $p_{2}=p_{3}=0$, $c_{i}=1$; for the
nonlinear RC, $p_{2}=-5$, $p_{3}=-4$, $c_{i}=400$. (B) The Lorenz system. We set
$\alpha=-15$, $\epsilon=0.5$. For the linear RC, $p_{2}=p_{3}=0$, $c_{i}=1$;
for the nonlinear RC, $p_{2}=-5$, $p_{3}=-4$, $c_{i}=30$. In each plot we
compare the following cases: nonlinear, linear, linear with application of
random time shifts to the individual modes, linear with optimized time shifts
of the individual modes, and linear with optimized time shifts and eigenvalues
of the individual modes. The time shifts ($\tau_{i}$) are taken from the
interval $[-0.5,0.5]$ and the eigenvalues ($\lambda_{i}$) are taken from the
interval $[-100,0]$.
We now consider a comparison of linear and nonlinear reservoirs, described by
the following equations [26],
$\dot{q}_{i}(t)=p_{1}q_{i}(t)+p_{2}q_{i}^{2}(t)+p_{3}q_{i}^{3}(t)+\lambda_{i}q_{i}(t)+c_{i}s_{1}(t),\quad
i=1,\ldots,N,$ (14)
where $p_{2}=-5$ and $p_{3}=-4$ were chosen so as to yield a low training
error [31]. Note that Eq. (14) has the same parametric form
in $\lambda_{i}$ as (8) for direct comparison. When the parameters $c_{i}$ are
small, Eq. (14) is well approximated by the linear reservoir (8)
($p_{2}=p_{3}=0$.) In Fig. 5 we plot the training error $\Delta$ vs the length
of the time series $T$. Plot A is for the HR system and plot B is for the
Lorenz system. In each plot, we compare the following cases: nonlinear (Eq.
(14)), linear (Eq. (8)), linear with application of random time shifts to the
individual modes, linear with optimized time shifts of the individual modes,
and linear with optimized time shifts and eigenvalues of the individual modes.
Optimization of the time-shifts and of the eigenvalues was obtained via
simulated annealing (see Supplementary Information.) From Fig. 5, we see that
the nonlinear reservoir performs better than the linear one, which is expected
[27, 28]. However, this relation is inverted when
manipulations of the individual modes of the linear reservoir are introduced,
either in terms of the eigenvalues or the time shifts, for which the training
error is much lower (also in the case of $\epsilon=0$, see Supplementary
Information.) A considerable reduction in the training error is observed even
when the time shifts applied to the individual modes are randomly chosen. In
the Supplementary Information we show that a linear reservoir with random
time-shifts has both lower training error and testing error than a nonlinear
reservoir.
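The simulated-annealing search over the time shifts is detailed in the Supplementary Information and is not reproduced here; a generic, self-contained sketch of the idea (hypothetical cooling schedule and step size; shifts applied by interpolation, readout refit at each step) is:

```python
import numpy as np

def training_error(t, Q, g, taus):
    """Normalized training error after shifting each mode by tau_i and refitting."""
    Qs = np.column_stack(
        [np.interp(t + tau, t, Q[:, i]) for i, tau in enumerate(taus)]
    )
    kappa, *_ = np.linalg.lstsq(Qs, g, rcond=None)
    return np.linalg.norm(Qs @ kappa - g) / np.linalg.norm(g)

def anneal_shifts(t, Q, g, n_steps=500, T0=1.0, bounds=(-0.5, 0.5), seed=0):
    """Simulated annealing over the vector of per-mode time shifts tau_i."""
    rng = np.random.default_rng(seed)
    taus = rng.uniform(bounds[0], bounds[1], Q.shape[1])
    err = best = training_error(t, Q, g, taus)
    best_taus = taus.copy()
    for k in range(n_steps):
        temp = T0 * (1.0 - k / n_steps) + 1e-9        # linear cooling schedule
        cand = np.clip(taus + rng.normal(0.0, 0.05, taus.size), *bounds)
        cand_err = training_error(t, Q, g, cand)
        # Metropolis acceptance: always take improvements, sometimes take worse.
        if cand_err < err or rng.random() < np.exp((err - cand_err) / temp):
            taus, err = cand, cand_err
        if err < best:
            best, best_taus = err, taus.copy()
    return best_taus, best
```

The shifts are confined to the same interval $[-0.5, 0.5]$ used in Fig. 5.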
In this paper we have studied a special class of reservoir computers for which
a modal decomposition is possible. This is equivalent to replacing the
reservoir network with a set of uncoupled nodes, each one corresponding to a
‘mode’ of the reservoir. We then have shown that the training error for the
two reservoir computers (coupled network and uncoupled nodes) is the same. We
build on this result and show that the modes can be manipulated to
significantly decrease the overall training error. For example, the simple
application of time shifts to the individual modes is found to be highly
beneficial; namely, a linear reservoir formed of uncoupled nodes with
application of random time shifts to the nodes’ outputs, is highly competitive
against a nonlinear reservoir. As shown in Fig. 5, the improvement is
sometimes by orders of magnitude. A considerable reduction in the training error was
observed even when the time shifts applied to the individual modes were
randomly chosen. It is worth noting that the ability to temporally delay or
advance the individual modes is limited to the uncoupled nodes
configuration (right panel of Fig. 2), as in the coupled network configuration
(left panel of Fig. 2), the reservoir states are linear combinations of the
modes.
## References
* [1] Herbert Jaeger. The “echo state” approach to analysing and training recurrent neural networks-with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148(34):13, 2001.
* [2] Benjamin Schrauwen, David Verstraeten, and Jan Van Campenhout. An overview of reservoir computing: theory, applications and implementations. In Proceedings of the 15th European Symposium on Artificial Neural Networks, pages 471–482, 2007.
* [3] Thomas Natschläger, Wolfgang Maass, and Henry Markram. The “liquid computer”: A novel strategy for real-time computing on time series. Special issue on Foundations of Information Processing of TELEMATIK, 8:39–43, 2002.
* [4] Wolfgang Maass, Thomas Natschläger, and Henry Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural computation, 14(11):2531–2560, 2002.
* [5] Romain Martinenghi, Sergei Rybalko, Maxime Jacquot, Yanne K Chembo, and Laurent Larger. Photonic nonlinear transient computing with multiple-delay wavelength dynamics. Physical review letters, 108(24):244101, 2012.
* [6] Daniel Brunner, Miguel C Soriano, Claudio R Mirasso, and Ingo Fischer. Parallel photonic information processing at gigabyte per second data rates using transient states. Nature communications, 4:1364, 2013.
* [7] Kohei Nakajima, Helmut Hauser, Tao Li, and Rolf Pfeifer. Information processing via physical soft body. Scientific reports, 5:10487, 2015.
* [8] Michiel Hermans, Miguel C Soriano, Joni Dambre, Peter Bienstman, and Ingo Fischer. Photonic delay systems as machine learning implementations. Journal of Machine Learning Research, 2015.
* [9] Quentin Vinckier, François Duport, Anteo Smerieri, Kristof Vandoorne, Peter Bienstman, Marc Haelterman, and Serge Massar. High-performance photonic reservoir computer based on a coherently driven passive cavity. Optica, 2(5):438–446, 2015.
* [10] François Duport, Anteo Smerieri, Akram Akrout, Marc Haelterman, and Serge Massar. Fully analogue photonic reservoir computer. Scientific reports, 6:22381, 2016.
* [11] Laurent Larger, Antonio Baylón-Fuentes, Romain Martinenghi, Vladimir S Udaltsov, Yanne K Chembo, and Maxime Jacquot. High-speed photonic reservoir computing using a time-delay-based architecture: Million words per second classification. Physical Review X, 7(1):011015, 2017.
* [12] Johan AK Suykens, Joos PL Vandewalle, and Bart L de Moor. Artificial neural networks for modelling and control of non-linear systems. Springer Science & Business Media, 2012.
* [13] James P Crutchfield, William L Ditto, and Sudeshna Sinha. Introduction to focus issue: intrinsic and designed computation: information processing in dynamical systems—beyond the digital hegemony, 2010.
* [14] Paul Rodriguez. Simple recurrent networks learn context-free and context-sensitive languages by counting. Neural computation, 13(9):2093–2118, 2001.
* [15] Felix A Gers and E Schmidhuber. LSTM recurrent networks learn simple context-free and context-sensitive languages. IEEE Transactions on Neural Networks, 12(6):1333–1340, 2001.
* [16] Zhixin Lu, Brian R Hunt, and Edward Ott. Attractor reconstruction by machine learning. Chaos: An Interdisciplinary Journal of Nonlinear Science, 28(6):061104, 2018.
* [17] Roland S Zimmermann and Ulrich Parlitz. Observing spatio-temporal dynamics of excitable media using reservoir computing. Chaos: An Interdisciplinary Journal of Nonlinear Science, 28(4):043118, 2018.
* [18] Piotr Antonik, Marvyn Gulina, Jaël Pauwels, and Serge Massar. Using a reservoir computer to learn chaotic attractors, with applications to chaos synchronization and cryptography. Physical Review E, 98(1):012215, 2018.
* [19] Herbert Jaeger and Harald Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. science, 304(5667):78–80, 2004.
* [20] Jaideep Pathak, Zhixin Lu, Brian R Hunt, Michelle Girvan, and Edward Ott. Using machine learning to replicate chaotic attractors and calculate lyapunov exponents from data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(12):121102, 2017.
* [21] Jaideep Pathak, Brian Hunt, Michelle Girvan, Zhixin Lu, and Edward Ott. Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach. Physical review letters, 120(2):024102, 2018.
* [22] Azarakhsh Jalalvand, Kris Demuynck, Wesley De Neve, and Jean-Pierre Martens. On the application of reservoir computing networks for noisy image recognition. Neurocomputing, 277:237–248, 2018.
* [23] Alex Graves, Douglas Eck, Nicole Beringer, and Juergen Schmidhuber. Biologically plausible speech recognition with lstm neural nets. In International Workshop on Biologically Inspired Approaches to Advanced Information Technology, pages 127–136. Springer, 2004.
* [24] Tony Robinson. An application of recurrent nets to phone probability estimation. IEEE transactions on Neural Networks, 5(2), 1994.
* [25] Mantas Lukoševičius, Herbert Jaeger, and Benjamin Schrauwen. Reservoir computing trends. KI-Künstliche Intelligenz, 26(4):365–371, 2012.
* [26] Thomas L Carroll and Louis M Pecora. Network structure effects in reservoir computers. Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(8):083130, 2019.
* [27] Stephen Boyd and Leon Chua. Fading memory and the problem of approximating nonlinear operators with volterra series. IEEE Transactions on Circuits and Systems, 32(11):1150–1161, 1985.
* [28] Erik Bollt. On explaining the surprising success of reservoir computing forecaster of chaos? the universal machine learning dynamical system with contrasts to var and dmd. arXiv preprint arXiv:2008.06530, 2020.
* [29] Peter J Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics, 656:5–28, 2010.
* [30] Lyudmila Grigoryeva and Juan-Pablo Ortega. Universal discrete-time reservoir computers with stochastic inputs and linear readouts using non-homogeneous state-affine systems. The Journal of Machine Learning Research, 19(1):892–931, 2018.
* [31] Afroza Shirin, Isaac S. Klickstein, and Francesco Sorrentino. Stability analysis of reservoir computers dynamics via lyapunov functions. Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(10):103147, 2019.
# A Tensor-Based Formulation of
Hetero-functional Graph Theory
Amro M. Farid, Dakota Thompson, Wester Schoonenberg

Amro M. Farid is an Associate Professor of Engineering with the Thayer School
of Engineering at Dartmouth and a Visiting Associate Professor with the
Mechanical Engineering Department at MIT, Cambridge, MA, USA.
<EMAIL_ADDRESS>

Dakota Thompson is with the Thayer School of Engineering at Dartmouth,
Hanover, NH, USA. <EMAIL_ADDRESS>

W. C. Schoonenberg is with the Thayer School of Engineering at Dartmouth,
Hanover, NH, USA. <EMAIL_ADDRESS>
###### Abstract
Recently, hetero-functional graph theory (HFGT) has developed as a means to
mathematically model the structure of large-scale complex flexible engineering
systems. It does so by fusing concepts from network science and model-based
systems engineering (MBSE). For the former, it utilizes multiple graph-based
data structures to support a matrix-based quantitative analysis. For the
latter, HFGT inherits the heterogeneity of conceptual and ontological
constructs found in model-based systems engineering including system form,
system function, and system concept. These diverse conceptual constructs
indicate multi-dimensional rather than two-dimensional relationships. This
paper provides the first tensor-based treatment of hetero-functional graph
theory. In particular, it addresses the “system concept” and the hetero-
functional adjacency matrix from the perspective of tensors and introduces the
hetero-functional incidence tensor as a new data structure. The tensor-based
formulation described in this work makes a stronger tie between HFGT and its
ontological foundations in MBSE. Finally, the tensor-based formulation
facilitates several analytical results that provide an understanding of the
relationships between HFGT and multi-layer networks.
## I Introduction
One defining characteristic of twenty-first century engineering challenges is
the breadth of their scope. The National Academy of Engineering (NAE) has
identified 14 “game-changing goals”[1].
1. Advance personalized learning
2. Make solar energy economical
3. Enhance virtual reality
4. Reverse-engineer the brain
5. Engineer better medicines
6. Advance health informatics
7. Restore and improve urban infrastructure
8. Secure cyber-space
9. Provide access to clean water
10. Provide energy from fusion
11. Prevent nuclear terror
12. Manage the nitrogen cycle
13. Develop carbon sequestration methods
14. Engineer the tools of scientific discovery
At first glance, each of these aspirational engineering goals is so large and
complex in its own right that it might seem entirely intractable. However, and
quite fortunately, the developing consensus across a number of STEM (science,
technology, engineering, and mathematics) fields is that each of these goals
is characterized by an “engineering system” that is analyzed and re-
synthesized using a meta-problem-solving skill set[2].
###### Definition 1:
Engineering system[3]: A class of systems characterized by a high degree of
technical complexity, social intricacy, and elaborate processes, aimed at
fulfilling important functions in society.
TABLE I: A Classification of Engineering Systems by Function and Operand[3]

Function/Operand | Living Organisms | Matter | Energy | Information | Money
---|---|---|---|---|---
Transform | Hospital | Blast Furnace | Engine, electric motor | Analytic engine, calculator | Bureau of Printing & Engraving
Transport | Car, Airplane, Train | Truck, train, car, airplane | Electricity grid | Cables, radio, telephone, and internet | Banking Fedwire and Swift transfer systems
Store | Farm, Apartment Complex | Warehouse | Battery, flywheel, capacitor | Magnetic tape & disk, book | U.S. Bullion Repository (Fort Knox)
Exchange | Cattle auction, (illegal) human trafficking | eBay trading system | Energy market | World Wide Web, Wikipedia | London Stock Exchange
Control | U.S. Constitution & laws | National Highway Traffic Safety Administration | Nuclear Regulatory Commission | Internet engineering task force | United States Federal Reserve
The challenge of _convergence_ towards _abstract_ and _consistent_
methodological foundations for engineering systems is formidable. Consider the
engineering systems taxonomy presented in Table I[3]. It classifies
engineering systems by five generic functions that fulfill human needs: 1.)
transform, 2.) transport, 3.) store, 4.) exchange, and 5.) control. On another
axis, it classifies them by their operands: 1.) living organisms (including
people), 2.) matter, 3.) energy, 4.) information, and 5.) money. This
classification presents a broad array of application domains that must be
consistently treated. Furthermore, these engineering systems are at various
stages of development and will continue to be so for decades, if not
centuries. And so the study of engineering systems must equally support design
synthesis, analysis, and re-synthesis while supporting innovation, be it
incremental or disruptive.
### I-A Background Literature
Two fields in particular have attempted to traverse this convergence
challenge: systems engineering and network science. Systems engineering, and
more recently model-based systems engineering (MBSE), has developed as a
practical and interdisciplinary engineering discipline that enables the
successful realization of complex systems from concept, through design, to
full implementation[4]. It equips the engineer with methods and tools to
handle systems of ever-greater complexity arising from greater interactions
within these systems or from the expanding heterogeneity they demonstrate in
their structure and function. Despite its many accomplishments, model-based
systems engineering still relies on graphical modeling languages that provide
limited quantitative insight (on their own)[5, 6, 7].
In contrast, network science has developed to quantitatively analyze networks
that appear in a wide variety of engineering systems. And yet, despite its
methodological developments in multi-layer networks, network science has often
been unable to address the explicit heterogeneity often encountered in
engineering systems[7, 8]. In a recent comprehensive review, Kivela et al. [8]
write: “The study of multi-layer networks $\ldots$ has become extremely
popular. Most real and engineered systems include multiple subsystems and
layers of connectivity and developing a deep understanding of multi-layer
systems necessitates generalizing ‘traditional’ graph theory. Ignoring such
information can yield misleading results, so new tools need to be developed.
One can have a lot of fun studying ‘bigger and better’ versions of the
diagnostics, models and dynamical processes that we know and presumably love –
and it is very important to do so but the new ‘degrees of freedom’ in multi-
layer systems also yield new phenomena that cannot occur in single-layer
systems. Moreover, the increasing availability of empirical data for
fundamentally multi-layer systems amidst the current data deluge also makes it
possible to develop and validate increasingly general frameworks for the study
of networks.
$\ldots$ Numerous similar ideas have been developed in parallel, and the
literature on multi-layer networks has rapidly become extremely messy. Despite
a wealth of antecedent ideas in subjects like sociology and engineering, many
aspects of the theory of multi-layer networks remain immature, and the rapid
onslaught of papers on various types of multilayer networks necessitates an
attempt to unify the various disparate threads and to discern their
similarities and differences in as precise a manner as possible.
$\ldots$ [The multi-layer network community] has produced an equally immense
explosion of disparate terminology, and the lack of consensus (or even
generally accepted) set of terminology and mathematical framework for studying
is extremely problematic.”
In many ways, the parallel developments of the model-based systems engineering
and network science communities intellectually converge in _hetero-functional
graph theory_ (HFGT)[7]. For the former, it utilizes multiple graph-based data
structures to support a matrix-based quantitative analysis. For the latter,
HFGT inherits the heterogeneity of conceptual and ontological constructs found
in model-based systems engineering including system form, system function, and
system concept. More specifically, the explicit treatment of function and
operand facilitates a structural understanding of the diversity of engineering
systems found in Table I. Although not named as such originally, the first
works on HFGT appeared as early as 2006–2008[9, 10, 11, 12]. Since then, HFGT
has become firmly established and has demonstrated cross-domain
applicability[13, 7], culminating in the recent consolidating text[7].
The primary benefit of HFGT, relative to multi-layer networks, is the broad
extent of its ontological elements and associated mathematical models[7]. In
their recent review, Kivela et al. showed that _all_ of the reviewed works
have exhibited at least one of the following modeling constraints[8]:
1. Alignment of nodes between layers is required [8, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62]
2. Disjointment between layers is required [8, 63, 64, 56, 65, 66, 67, 68, 51, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80]
3. Equal number of nodes for all layers is required [8, 25, 38, 44, 19, 34, 35, 36, 18, 31, 32, 33, 65, 26, 43, 40, 22, 14, 15, 67, 47, 81, 27, 51, 20, 45, 39, 41, 82, 29, 71, 48, 72, 59, 61, 42, 30, 17, 24, 50, 23, 21, 46, 28, 37, 49, 16]
4. Exclusively vertical coupling between all layers is required [8, 25, 38, 44, 19, 34, 35, 36, 18, 31, 32, 33, 26, 43, 66, 40, 83, 22, 84, 14, 15, 47, 27, 51, 20, 45, 39, 41, 82, 29, 48, 59, 61, 42, 30, 17, 24, 50, 23, 21, 46, 28, 37, 49, 85, 16, 86]
5. Equal couplings between all layers are required [8, 25, 38, 19, 34, 35, 36, 18, 31, 32, 33, 26, 66, 40, 83, 22, 84, 47, 27, 51, 20, 45, 39, 41, 82, 29, 48, 59, 61, 30, 17, 24, 50, 23, 21, 46, 28, 37, 49, 85, 16, 86]
6. Node counterparts are coupled between all layers [25, 38, 19, 34, 35, 36, 18, 31, 32, 33, 26, 66, 40, 83, 84, 47, 27, 51, 20, 45, 39, 41, 82, 29, 48, 59, 61, 30, 17, 24, 50, 46, 28, 37, 49, 85, 16, 86]
7. Limited number of modelled layers [63, 64, 56, 65, 66, 87, 88, 83, 84, 89, 67, 68, 47, 81, 51, 69, 82, 70, 71, 48, 72, 59, 61, 73, 74, 75, 76, 90, 91, 92, 93, 77, 78, 49, 85, 86, 79, 80]
8. Limited number of aspects in a layer [63, 25, 38, 64, 44, 19, 34, 35, 36, 18, 31, 32, 33, 56, 65, 26, 43, 66, 40, 22, 14, 15, 67, 68, 47, 27, 51, 20, 45, 39, 41, 69, 82, 70, 29, 71, 48, 72, 59, 61, 42, 30, 17, 73, 24, 50, 74, 75, 23, 21, 46, 28, 76, 37, 77, 78, 49, 16, 79, 80]
To demonstrate the consequences of these modeling limitations, the HFGT
text[7] developed a very small, but highly heterogeneous, hypothetical test
case system that exhibited _all eight_ of the modeling limitations identified
by Kivela et al. Consequently, none of the multi-layer network models
identified by Kivela et al. would be able to model such a hypothetical test
case. In contrast, a complete HFGT analysis of this hypothetical test case was
demonstrated in the aforementioned text[7].
Figure 1: A Topological Visualization of the Trimetrica Smart City
Infrastructure Test Case[7].
The same text provides the even more complex hypothetical smart city
infrastructure example shown in Fig. 1. It not only includes an electric power
system, water distribution system, and electrified transportation system but
it also makes very fine distinctions in the functionality of its component
elements.
Given the quickly developing _“disparate terminology and the lack of
consensus”_, Kivela et al.'s[8] stated goal _“to unify the various disparate
threads and to discern their similarities and differences in as precise a
manner as possible”_ appears imperative. While many may think that the
development of mathematical models is subjective, in reality, ontological
science presents a robust methodological foundation. As briefly explained in
Appendix -A, and as detailed elsewhere[7, 94, 95], the process of developing a
mathematical model of a given (engineering) system is never direct. Rather, a
specific engineering system (which is an instance of a class of systems) has
abstract elements in the mind that constitute an _abstraction_ $\cal A$ (which
is an instance of a domain conceptualization ${\cal C}$).111It is likely that
modeling abstract elements in the mind is unfamiliar to this journal’s
readership. This is purely an issue of nomenclature. Most physicists and
engineers would agree on the indispensable role that _intuition_ – itself a
mental model – has in the development of mathematical models of systems. For
example, the shift from Newtonian mechanics to Einstein’s relativity
constituted first an expansion in the abstract elements of the mental model and
their relationships well before that mental model could be translated into its
associated mathematics. Similarly, the “disparate terminology and lack of
consensus” identified by Kivela et al. [8] suggests that a reconciliation of
this abstract mental model is required (See Section -A). ${\cal C}$ is mapped
to a set of primitive mathematical elements called a language $\cal L$, which
is in turn instantiated to produce a mathematical model ${\cal M}$. The
fidelity of the mathematical model with respect to an abstraction is
determined by the four complementary linguistic properties shown in Figure
2[95]: soundness, completeness, lucidity, and laconicity [96] (See Defns. 24,
25, 26, 27). When all four properties are met, the abstraction and the
mathematical model have an isomorphic (one-to-one) mapping and faithfully
represent each other. For example, the network science and graph theory
literature assume an abstract conceptualization of nodes and edges prior to
defining their 1-to-1 mathematical counterparts[7]. Consequently, as hetero-
functional graph and multi-layer network models of engineering systems are
developed, there is a need to reconcile both the abstraction and the
mathematical model on the basis of the four criteria identified above (See
Appendix -A.).
Fig. 2: Graphical Representation of Four Ontological Properties As Mapping
Between Abstraction and Model: a Soundness, b Completeness, c Lucidity, and d
Laconicity [95].
The ontological strength of hetero-functional graph theory comes from the
“systems thinking” foundations in the model-based systems engineering
literature[97, 7]. In effect, and very briefly, all systems have a “subject +
verb + operand” form where the system form is the subject, the system function
is the verb + operand (i.e. predicate) and the system concept is the mapping
of the two to each other. The key distinguishing feature of HFGT (relative to
multi-layer networks) is its introduction of system function. In that regard,
it is more complete than multi-layer networks if system function is accepted
as part of an engineering system abstraction. Another key distinguishing
feature of HFGT is the differentiation between elements related to
transformation and transportation. In that regard, it takes great care to not
_overload_ mathematical modeling elements and preserve lucidity.
### I-B Original Contribution
This paper provides a tensor-based formulation of several of the most
important parts of hetero-functional graph theory. More specifically, it
discusses the system concept, the hetero-functional adjacency matrix, and the
hetero-functional incidence tensor. Whereas the hetero-functional graph theory
text[7] is a comprehensive discussion of the subject, the treatment is based
entirely on two-dimensional matrices. The tensor-based formulation described
in this work makes a stronger tie between HFGT and its ontological foundations
in MBSE. Furthermore, the tensor-based treatment developed here reveals
patterns of underlying structure in engineering systems that are less apparent
in a matrix-based treatment. Finally, the tensor-based formulation facilitates
an understanding of the relationships between HFGT and multi-layer networks
(“despite its disparate terminology and lack of consensus”). In so doing, this
tensor-based treatment is likely to advance Kivela et al.'s goal to discern
the similarities and differences between these mathematical models in as
precise a manner as possible.
### I-C Paper Outline
The rest of the paper is organized as follows. Section II discusses the system
concept as an allocation of system function to system form. Section III
discusses the hetero-functional adjacency matrix emphasizing the relationships
between system capabilities (i.e. structural degrees of freedom as defined
therein). Section IV, then, discusses the hetero-functional incidence tensor
which describes the relationships between system capabilities, operands, and
physical locations in space (i.e. system buffers as defined later). Section V
goes on to discuss this tensor-based formulation from the perspective of
layers and network descriptors. Section VI brings the work to a close and
offers directions for future work. Given the multi-disciplinary nature of this
work, several appendices are provided to support the work with background
material and avoid breaking the logical flow of the main article. Appendix -A
provides the fundamental definitions of ontological science that were used to
motivate this work’s original contribution. Appendix -B describes the notation
conventions used throughout this work. The paper assumes that the reader is
well grounded in graph theory and network science as it is found in any one of
a number of excellent texts[98, 99]. The paper does not assume prior exposure
to hetero-functional graph theory. Its most critical definitions are tersely
introduced in the body of the work upon first mention. More detailed
classifications of these concepts are compiled in Appendix -C for convenience.
Given the theoretical treatment provided here, the interested reader is
referred to the hetero-functional graph theory text[7] for further explanation
of these well-established concepts and concrete examples. Furthermore, several
recent works have made illustrative comparisons between (formal) graphs and
hetero-functional graphs[100, 101]. Finally, this work makes extensive use of
set, Boolean, matrix, and tensor operations; all of which are defined
unambiguously in Appendices -D, -E, -F, and -G respectively.
## II The System Concept
At a high-level, the system concept $A_{S}$ describes the allocation of system
function to system form as the central question of engineering design. First,
Subsection II-A provides introductory definitions of system resources,
processes, and knowledge base that serve as prerequisite knowledge for the
remaining subsections. Next, Subsection II-B introduces the transportation
knowledge base as a third-order tensor. Then, Subsection II-C introduces the
refined transportation knowledge base as a fourth-order tensor. The tensor-
based formulations in these two subsections directly support the original
contribution mentioned in Section I-B. Finally, in order to support the
following section, Subsection II-D concludes the section with a discussion of
existence and availability of system capabilities as part of the system
concept.
### II-A System Resources, Processes, and Knowledge Base
This dichotomy of form and function is repeatedly emphasized in the fields of
engineering design and systems engineering[97, 102, 103, 104]. More
specifically, the allocation of system processes to system resources is
captured in the “design equation”[10, 94]:
$P=J_{S}\odot R$ (1)
where $R$ is the set of system resources, $P$ is the set of system processes,
$J_{S}$ is the system knowledge base, and $\odot$ is matrix Boolean
multiplication (Defn. 48).
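For concreteness, one simplified reading of the design equation can be sketched numerically, with the resource set encoded as an indicator vector. The following NumPy fragment uses a hypothetical 3-process, 2-resource system; all sizes and entries are illustrative and not drawn from any real system.

```python
import numpy as np

# Toy system knowledge base J_S: sigma(P) x sigma(R) = 3 x 2.
# J_S[w, v] = 1 when process p_w can be executed by resource r_v.
J_S = np.array([[1, 0],
                [0, 1],
                [1, 1]], dtype=bool)

# Boolean matrix multiplication (Defn. 48): OR of AND terms.
# An indicator vector over resources selects which are present;
# the Boolean product indicates which processes are realizable.
r = np.array([True, False])          # only resource r_1 is present
p = np.any(J_S & r, axis=1)          # Boolean "dot product" per process

print(p)                             # processes p_1 and p_3 realizable
```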
###### Definition 2 (System Resource):
[4] An asset or object $r_{v}\in R$ that is utilized during the execution of a
process.
###### Definition 3 (System Process[105, 4]):
An activity $p\in P$ that transforms a predefined set of input operands into a
predefined set of outputs.
###### Definition 4 (System Operand):
[4] An asset or object $l_{i}\in L$ that is operated on or consumed during the
execution of a process.
###### Definition 5 (System Knowledge Base[9, 10, 11, 12, 13, 94]):
A binary matrix $J_{S}$ of size $\sigma(P)\times\sigma(R)$ whose element
$J_{S}(w,v)\in\\{0,1\\}$ is equal to one when action $e_{wv}\in{\cal E}_{S}$
(in the SysML sense) exists as a system process $p_{w}\in P$ being executed by
a resource $r_{v}\in R$. The $\sigma()$ notation gives the size of a set.
In other words, the system knowledge base forms a bipartite graph between the
set of system processes and the set of system resources[13].
Hetero-functional graph theory further recognizes that there are inherent
differences within the set of resources as well as within the set of
processes. Therefore, classifications of these sets of resources and sets of
processes are introduced and defined in Appendix -C. $R=M\cup B\cup H$ where
$M$ is the set of transformation resources (Defn. 28), $B$ is the set of
independent buffers (Defn. 29), and $H$ is the set of transportation resources
(Defn. 30). Furthermore, the set of buffers $B_{S}=M\cup B$ (Defn. 31) is
introduced for later discussion. Similarly, $P=P_{\mu}\cup P_{\bar{\eta}}$
where $P_{\mu}$ is the set of transformation processes (Defn. 32) and
$P_{\bar{\eta}}$ is the set of refined transportation processes (Defn. 33).
The latter, in turn, is determined from the Cartesian product (Defn. 44)
of the set of transportation processes $P_{\eta}$ (Defn. 34) and the set of
holding processes $P_{\gamma}$ (Defn. 35).
$P_{\bar{\eta}}=P_{\gamma}\times P_{\eta}$ (2)
Fig. 3: The Hetero-functional Graph Theory Meta-Architecture drawn using the
Systems Modeling Language (SysML). It consists of three types of resources
$R=M\cup B\cup H$ that are capable of two types of process
$P_{\bar{\eta}}=P_{\gamma}\times P_{\eta}$[7].
This taxonomy of resources, processes, and their allocation is organized in
the HFGT meta-architecture shown in Figure 3. The taxonomy of resources $R$
and processes $P$ originates from the field of production systems where
transformation processes are viewed as “value-adding”, holding processes
support the design of fixtures, and transportation processes are cost-
minimized. Furthermore, their existence is necessitated by their distinct
roles in the structural relationships found in hetero-functional graphs.
Consequently, subsets of the design equation (Eq. 1) can be written to emphasize
the relationships between the constituent classes of processes and resources[9,
10, 11, 12, 13].
$\displaystyle P_{\mu}$ $\displaystyle=J_{M}\odot M$ (3) $\displaystyle
P_{\gamma}$ $\displaystyle=J_{\gamma}\odot R$ (4) $\displaystyle P_{\eta}$
$\displaystyle=J_{H}\odot R$ (5) $\displaystyle P_{\bar{\eta}}$
$\displaystyle=J_{\bar{H}}\odot R$ (6)
where $J_{M}$ is the transformation knowledge base, $J_{\gamma}$ is the
holding knowledge base, $J_{H}$ is the transportation knowledge base, and
$J_{\bar{H}}$ is the refined transportation knowledge base [10, 106, 13, 107,
108, 109]. The original system knowledge base $J_{S}$ is straightforwardly
reconstructed from these smaller knowledge bases[9, 10, 11, 12, 13]:
$J_{S}=\left[\begin{array}{c|c}J_{M}&\mathbf{0}\\ \hline\multicolumn{2}{c}{J_{\bar{H}}}\end{array}\right]$ (7)
### II-B The Transportation Knowledge Base Tensor
The transportation knowledge base $J_{H}$ is best understood as a _matricized_
3rd-order tensor ${\cal J}_{H}$ where the element ${\cal
J}_{H}(y_{1},y_{2},v)=1$ when the transportation process $p_{u}\in P_{\eta}$
defined by the origin $b_{s_{y_{1}}}\in B_{S}$ and the destination
$b_{s_{y_{2}}}\in B_{S}$ is executed by the resource $r_{v}\in R$.
$\displaystyle J_{H}$ $\displaystyle={\cal F}_{M}\left({\cal
J}_{H},[2,1],[3]\right)$ (8) $\displaystyle{\cal J}_{H}$ $\displaystyle={\cal
F}_{M}^{-1}\left(J_{H},[\sigma(B_{S}),\sigma(B_{S}),\sigma(R)],[2,1],[3]\right)$
(9)
where ${\cal F}_{M}$ and ${\cal F}_{M}^{-1}$ are the matricization and
tensorization functions (Defns. 56 and 57) respectively. Here, ${\cal
F}_{M}()$ serves to vectorize the dimensions of the origin and destination
buffers into the single dimension of transportation processes. Appendix -G,
more generally, introduces the reader to tensor-based operations.
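The matricization and tensorization of Eqs. 8 and 9 amount to index folding. The following minimal sketch assumes that NumPy's row-major reshape mirrors the index convention $u=\sigma(B_{S})(y_{1}-1)+y_{2}$; the sizes and random entries are purely illustrative.

```python
import numpy as np

nB, nR = 2, 3                         # sigma(B_S), sigma(R); toy sizes
rng = np.random.default_rng(0)
calJ_H = rng.integers(0, 2, size=(nB, nB, nR)).astype(bool)  # tensor J_H

# Matricization (Eq. 8): fold (y1, y2) into u = sigma(B_S)*(y1-1) + y2,
# i.e. a row-major reshape with y2 varying fastest.
J_H = calJ_H.reshape(nB * nB, nR)

# Tensorization (Eq. 9) inverts the fold.
calJ_H_back = J_H.reshape(nB, nB, nR)
assert np.array_equal(calJ_H, calJ_H_back)
```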
The ${\cal J}_{H}$ tensor reveals that the transportation knowledge base is
closely tied to the classical understanding of a graph $A_{B_{S}}$, in which
point elements of form, called nodes and herein taken to be the set of buffers
$B_{S}$, are connected by line elements of form, called edges. Such a graph in hetero-
functional graph theory (and model-based systems engineering) is called a
formal graph[97] because all of its elements describe the system form and any
statement of function is entirely _implicit_.
$\displaystyle A_{B_{S}}(y_{1},y_{2})$
$\displaystyle=\bigvee_{v}^{\sigma(R)}{\cal J}_{H}(y_{1},y_{2},v)$
$\displaystyle=\bigvee_{v}^{\sigma(R)}J_{H}(u,v)\qquad\forall
y_{1},y_{2}\in\\{1,\ldots,\sigma(B_{S})\\},u=\sigma(B_{S})(y_{1}-1)+y_{2},v\in\\{1,\ldots,\sigma(R)\\}$
(10) $\displaystyle A_{B_{S}}$ $\displaystyle={\cal
J}_{H}\odot_{3}\mathds{1}^{\sigma(R)}$
$\displaystyle=vec^{-1}\left(J_{H}\odot\mathds{1}^{\sigma(R)},[\sigma(B_{S}),\sigma(B_{S})]\right)^{T}$
(11) $\displaystyle A_{B_{S}}^{TV}$ $\displaystyle=\left({\cal
J}_{H}\odot_{3}\mathds{1}^{\sigma(R)}\right)^{TV}$
$\displaystyle=J_{H}\odot\mathds{1}^{\sigma(R)}$ (12)
where the $\bigvee$ notation is the Boolean analogue of the $\sum$ notation
(Defn. 45), $\odot_{n}$ is the n-mode Boolean matrix product (Defn. 62),
$vec^{-1}()$ is inverse vectorization (Defn. 59) and $()^{V}$ is shorthand for
vectorization (Defn. 58). Appendix -E, more generally, introduces the reader
to Boolean operations. Furthermore, the notation $\mathds{1}^{n}$ is used to
indicate a ones-vector of length n. The transportation system knowledge base
$J_{H}$ replaces the edges of the formal graph $A_{B_{S}}$ with an _explicit_
description of function in the transportation processes $P_{\eta}$. The multi-
column nature of the transportation knowledge base $J_{H}$ contains more
information than the formal graph $A_{B_{S}}$ and allows potentially many
resources to execute any given transportation process. Consequently, the OR
operation across the rows of $J_{H}$ (or the third dimension of ${\cal
J}_{H}$) is sufficient to reconstruct the formal graph $A_{B_{S}}$. In short,
a single column transportation knowledge base is mathematically equivalent to
a vectorized formal graph $A_{B_{S}}$.
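The OR-reduction of Eq. 11 can be illustrated concretely. The toy tensor below uses hypothetical buffers and resources to show two resources offering the same transportation capability collapsing into a single formal-graph edge.

```python
import numpy as np

nB, nR = 2, 3
calJ_H = np.zeros((nB, nB, nR), dtype=bool)
calJ_H[0, 1, 0] = True   # resource r_1 transports from b_1 to b_2
calJ_H[0, 1, 2] = True   # resource r_3 offers the same capability
calJ_H[1, 0, 1] = True   # resource r_2 transports from b_2 to b_1

# Eq. 11: OR across the resource dimension recovers the formal graph.
A_BS = np.any(calJ_H, axis=2)
print(A_BS.astype(int))
# [[0 1]
#  [1 0]]
```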
### II-C The Refined Transportation Knowledge Base Tensor
Similarly, the refined transportation knowledge base is best understood as a
matricized 4th order tensor ${\cal J}_{\bar{H}}$ where the element ${\cal
J}_{\bar{H}}(g,y_{1},y_{2},v)=1$ when the refined transportation process
$p_{\varphi}\in P_{\bar{\eta}}$ defined by the holding process $p_{\gamma
g}\in P_{\gamma}$, the origin $b_{s_{y_{1}}}\in B_{S}$ and the destination
$b_{s_{y_{2}}}\in B_{S}$ is executed by the resource $r_{v}\in R$.
$\displaystyle J_{\bar{H}}$ $\displaystyle={\cal F}_{M}\left({\cal
J}_{\bar{H}},[3,2,1],[4]\right)$ (13) $\displaystyle{\cal J}_{\bar{H}}$
$\displaystyle={\cal
F}_{M}^{-1}\left(J_{\bar{H}},[\sigma(P_{\gamma}),\sigma(B_{S}),\sigma(B_{S}),\sigma(R)],[3,2,1],[4]\right)$
(14)
The ${\cal J}_{\bar{H}}$ tensor reveals that the refined transportation knowledge
base is closely tied to the classical understanding of a multi-commodity flow
network ${\cal A}_{LB_{S}}$[110, 111, 112]. Mathematically, it is a 3rd-order
tensor whose element ${\cal A}_{LB_{S}}(i,y_{1},y_{2})=1$ when operand
$l_{i}\in L$ is transported from buffer $b_{s_{y_{1}}}$ to $b_{s_{y_{2}}}$.
Again, the multi-commodity flow network ${\cal A}_{LB_{S}}$ is purely a
description of system form and any statement of function is entirely
_implicit_. By Defn. 35, holding processes are distinguished by three
criteria: 1.) different operands, 2.) how they hold those operands, and 3.)
whether they change the state of the operand. In the special case of a system
restricted to only the first of these criteria, where the set of operands $L$
maps 1-to-1 onto the set of holding processes $P_{\gamma}$ (i.e. $i=g$):
$\displaystyle{\cal A}_{LB_{S}}(i,y_{1},y_{2})$
$\displaystyle=\bigvee_{v}^{\sigma(R)}{\cal J}_{\bar{H}}(g,y_{1},y_{2},v)$
$\displaystyle\quad\forall
g\in\\{1,\ldots,\sigma(P_{\gamma})\\},y_{1},y_{2}\in\\{1,\ldots,\sigma(B_{S})\\},v\in\\{1,\ldots,\sigma(R)\\}$
(15) $\displaystyle=\bigvee_{v}^{\sigma(R)}J_{\bar{H}}(\varphi,v)$
$\displaystyle\quad\forall\varphi=\sigma^{2}(B_{S})(g-1)+\sigma(B_{S})(y_{1}-1)+y_{2},v\in\\{1,\ldots,\sigma(R)\\}$
(16) $\displaystyle{\cal A}_{LB_{S}}$ $\displaystyle={\cal
J}_{\bar{H}}\odot_{4}\mathds{1}^{\sigma(R)}$
$\displaystyle=vec^{-1}\left(J_{\bar{H}}\odot\mathds{1}^{\sigma(R)},\left[\sigma(B_{S}),\sigma(B_{S}),\sigma(P_{\gamma})\right]\right)^{T}$
(17) $\displaystyle{\cal A}_{LB_{S}}^{TV}$ $\displaystyle=\left({\cal
J}_{\bar{H}}\odot_{4}\mathds{1}^{\sigma(R)}\right)^{TV}$
$\displaystyle=J_{\bar{H}}\odot\mathds{1}^{\sigma(R)}$ (18)
The refined transportation system knowledge base $J_{\bar{H}}$ replaces the
operands and edges of the multi-commodity flow network ${\cal A}_{LB_{S}}$ with
an _explicit_ description of function in the holding processes $P_{\gamma}$ and
transportation processes $P_{\eta}$. The multi-column nature of the refined
transportation knowledge base $J_{\bar{H}}$ contains more information than the
multi-commodity flow network ${\cal A}_{LB_{S}}$ and allows potentially many
resources to execute any given refined transportation process. Consequently,
the OR operation across the rows of $J_{\bar{H}}$ (or the fourth dimension of
${\cal J}_{\bar{H}}$) is sufficient to reconstruct the multi-commodity flow
network ${\cal A}_{LB_{S}}$. In short, a single column of the refined
transportation knowledge base is mathematically equivalent to a vectorized
multi-commodity flow network.
The transformation, holding, transportation and refined transportation
knowledge bases ($J_{M}$, $J_{\gamma}$, $J_{H}$ and $J_{\bar{H}}$) readily
serve to reconstruct the system knowledge base $J_{S}$. First, the refined
transportation knowledge base is the Khatri-Rao product of the holding and
transportation knowledge bases.
$\displaystyle J_{\bar{H}}(\sigma(P_{\eta})(g-1)+u,v)$
$\displaystyle=J_{\gamma}(g,v)\cdot J_{H}(u,v)\qquad\forall
g\in\\{1,\ldots,\sigma(P_{\gamma})\\},u=\sigma(B_{S})(y_{1}-1)+y_{2},v\in\\{1,\ldots,\sigma(R)\\}$
(19) $\displaystyle J_{\bar{H}}$ $\displaystyle=J_{\gamma}\circledast J_{H}$
(20)
$\displaystyle=\left[J_{\gamma}\otimes\mathds{1}^{\sigma(P_{\eta})}\right]\cdot\left[\mathds{1}^{\sigma(P_{\gamma})}\otimes
J_{H}\right]$ (21)
where $\cdot$ is the Hadamard (or scalar) product (Defn. 51), $\circledast$ is
the Khatri-Rao product (Defn. 54) and $\otimes$ is the Kronecker product
(Defn. 53).
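The Khatri-Rao identity of Eqs. 20 and 21 is easy to verify numerically. The sketch below builds the column-wise Kronecker product directly and checks it against the ones-vector expansion; the dimensions and random entries are toy values for illustration only.

```python
import numpy as np

nPg, nPe, nR = 2, 3, 4          # sigma(P_gamma), sigma(P_eta), sigma(R)
rng = np.random.default_rng(1)
J_gamma = rng.integers(0, 2, size=(nPg, nR))
J_H = rng.integers(0, 2, size=(nPe, nR))

# Column-wise Khatri-Rao product (Eq. 20): a Kronecker product per column.
KR = np.stack([np.kron(J_gamma[:, v], J_H[:, v]) for v in range(nR)], axis=1)

# Identity of Eq. 21: Kronecker products with ones-vectors, then a
# Hadamard (element-wise) product.
ones_e = np.ones((nPe, 1), dtype=int)
ones_g = np.ones((nPg, 1), dtype=int)
alt = np.kron(J_gamma, ones_e) * np.kron(ones_g, J_H)

assert np.array_equal(KR, alt)
```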
### II-D Existence, Availability, and Concept of System Capabilities
Hetero-functional graph theory also differentiates between the _existence_ and
the _availability_ of physical capabilities in the system[10, 107]. While the
former is described by the system knowledge base, the latter is captured by the
system constraints matrix (which is assumed to evolve in time).
###### Definition 6 (System Constraints Matrix[9, 10, 11, 12, 13, 94]):
A binary matrix $K_{S}$ of size $\sigma(P)\times\sigma(R)$ whose element
$K_{S}(w,v)\in\\{0,1\\}$ is equal to one when a constraint eliminates event
$e_{wv}$ from the event set.
The system constraints matrix is constructed analogously to the system
knowledge base[9, 10, 11, 12, 13].
$K_{S}=\left[\begin{array}{c|c}K_{M}&\mathbf{0}\\ \hline\multicolumn{2}{c}{K_{\bar{H}}}\end{array}\right]$ (22)
In this regard, the system constraints matrix has a similar meaning to graph
percolation [113, 114] and temporal networks [115].
Once the system knowledge base $J_{S}$ and the system constraints matrix
$K_{S}$ have been constructed, the system concept $A_{S}$ follows
straightforwardly.
###### Definition 7 (System Concept[9, 10, 11, 12, 13, 94]):
A binary matrix $A_{S}$ of size $\sigma(P)\times\sigma(R)$ whose element
$A_{S}(w,v)\in\\{0,1\\}$ is equal to one when action $e_{wv}\in{\cal E}_{S}$
(in the SysML sense) is available as a system process $p_{w}\in P$ being
executed by a resource $r_{v}\in R$.
$A_{S}=J_{S}\ominus K_{S}=J_{S}\cdot\bar{K}_{S}$ (23)
where $\ominus$ is Boolean subtraction (Defn. 49) and
$\overline{K}_{S}=NOT(K_{S})$.
Every filled element of the system concept indicates a _system capability_
(Defn. 36) of the form: “Resource $r_{v}$ does process $p_{w}$”. The system
constraints matrix limits the availability of capabilities in the system
knowledge base to create the system concept $A_{S}$. The system capabilities
are quantified by the structural degrees of freedom.
###### Definition 8 (Structural Degrees of Freedom[9, 10, 11, 12, 13, 94]):
The set of independent actions ${\cal E}_{S}$ that completely defines the
instantiated processes in a large flexible engineering system. Their number is
given by:
$\displaystyle DOF_{S}=\sigma({\cal E}_{S})$
$\displaystyle=\sum_{w}^{\sigma(P)}\sum_{v}^{\sigma(R)}\left[J_{S}\ominus
K_{S}\right](w,v)$ (24)
$\displaystyle=\sum_{w}^{\sigma(P)}\sum_{v}^{\sigma(R)}A_{S}(w,v)$ (25)
$\displaystyle=\langle J_{S},\bar{K}_{S}\rangle_{F}$ (26)
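Eqs. 23 through 26 can be sketched numerically. The fragment below uses a hypothetical $2\times 3$ knowledge base and constraints matrix to compute the system concept and count the structural degrees of freedom two equivalent ways.

```python
import numpy as np

J_S = np.array([[1, 0, 1],
                [0, 1, 1]], dtype=bool)     # toy knowledge base
K_S = np.array([[0, 0, 1],
                [0, 0, 0]], dtype=bool)     # one capability constrained away

# Eq. 23: Boolean subtraction J_S - K_S equals J_S AND NOT(K_S).
A_S = J_S & ~K_S

# Eqs. 24-26: structural degrees of freedom count the available
# capabilities, equivalently the Frobenius inner product <J_S, NOT(K_S)>.
DOF_S = int(A_S.sum())
assert DOF_S == int((J_S.astype(int) * (~K_S).astype(int)).sum())
print(DOF_S)   # 3
```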
As has been discussed extensively in prior publications, the term structural
degrees of freedom is best viewed as a generalization of kinematic degrees of
freedom (or generalized coordinates)[9, 10, 11, 12, 13, 116]. Note that the
transformation degrees of freedom $DOF_{M}$ and the refined transportation
degrees of freedom $DOF_{H}$ are calculated similarly[9, 10, 11, 12]:
$\displaystyle DOF_{M}$
$\displaystyle=\sum_{j}^{\sigma(P_{\mu})}\sum_{k}^{\sigma(M)}\left[J_{M}\ominus
K_{M}\right](j,k)$ (27) $\displaystyle DOF_{H}$
$\displaystyle=\sum_{\varphi}^{\sigma(P_{\bar{\eta}})}\sum_{v}^{\sigma(R)}\left[J_{\bar{H}}\ominus
K_{\bar{H}}\right](\varphi,v)$ (28)
## III Hetero-functional Adjacency Matrix
This section provides a tensor-based formulation of the hetero-
functional adjacency matrix. First, Subsection III-A introduces this matrix as
pairwise sequences of system capabilities. Next, Subsection III-B provides a
tensor-based formulation of the system sequence knowledge base. Next,
Subsection III-C provides a tensor-based formulation of the system sequence
constraints. Both of these subsections directly support the paper’s original
contribution. Finally, Subsection III-D concludes the section with a
discussion of sequence-dependent degrees of freedom.
### III-A Pairwise Sequences of System Capabilities
Once the system’s physical capabilities (or structural degrees of freedom)
have been defined, the hetero-functional adjacency matrix $A_{\rho}$ is introduced
to represent their pairwise sequences [13, 117, 118, 108, 119].
###### Definition 9 (Hetero-functional Adjacency Matrix[13, 117, 118, 108,
119]):
A square binary matrix $A_{\rho}$ of size
$\sigma(R)\sigma(P)\times\sigma(R)\sigma(P)$ whose element
$A_{\rho}(\chi_{1},\chi_{2})\in\\{0,1\\}$ is equal to one when string
$z_{\chi_{1},\chi_{2}}=e_{w_{1}v_{1}}e_{w_{2}v_{2}}\in{\cal Z}$ is available
and exists, where index $\chi_{i}\in\left[1,\dots,\sigma(R)\sigma(P)\right]$.
In other words, the hetero-functional adjacency matrix corresponds to a
hetero-functional graph $G=\\{{\cal E}_{S},{\cal Z}\\}$ with structural
degrees of freedom (i.e. capabilities) ${\cal E}_{S}$ as nodes and feasible
sequences ${\cal Z}$ as edges.
Much like the system concept $A_{S}$, the hetero-functional adjacency matrix
$A_{\rho}$ arises from a Boolean difference[13, 117, 118, 108, 119].
$A_{\rho}=J_{\rho}\ominus K_{\rho}$ (29)
where $J_{\rho}$ is the system sequence knowledge base and $K_{\rho}$ is the
system sequence constraints matrix.
###### Definition 10 (System Sequence Knowledge Base [13, 117, 118, 108,
119]):
A square binary matrix $J_{\rho}$ of size
$\sigma(R)\sigma(P)\times\sigma(R)\sigma(P)$ whose element
$J_{\rho}(\chi_{1},\chi_{2})\in\\{0,1\\}$ is equal to one when string
$z_{\chi_{1},\chi_{2}}=e_{w_{1}v_{1}}e_{w_{2}v_{2}}\in{\cal Z}$ exists, where
index $\chi_{i}\in\left[1,\dots,\sigma(R)\sigma(P)\right]$.
###### Definition 11 (System Sequence Constraints Matrix[13, 117, 118, 108,
119]):
A square binary constraints matrix $K_{\rho}$ of size
$\sigma(R)\sigma(P)\times\sigma(R)\sigma(P)$ whose elements
$K_{\rho}(\chi_{1},\chi_{2})\in\\{0,1\\}$ are equal to one when string
$z_{\chi_{1}\chi_{2}}=e_{w_{1}v_{1}}e_{w_{2}v_{2}}\in{\cal Z}$ is eliminated.
The definitions of the system sequence knowledge base $J_{\rho}$ and the
system sequence constraints matrix $K_{\rho}$ feature a translation of indices
from $e_{w_{1}v_{1}}e_{w_{2}v_{2}}$ to $z_{\chi_{1}\chi_{2}}$. This fact
suggests that these matrices have their associated 4th order tensors ${\cal
J}_{\rho}$, ${\cal K}_{\rho}$ and ${\cal A}_{\rho}$.
$\displaystyle J_{\rho}$ $\displaystyle={\cal F}_{M}\left({\cal
J}_{\rho},[1,2],[3,4]\right)$ (30) $\displaystyle K_{\rho}$
$\displaystyle={\cal F}_{M}\left({\cal K}_{\rho},[1,2],[3,4]\right)$ (31)
$\displaystyle A_{\rho}$ $\displaystyle={\cal F}_{M}\left({\cal
A}_{\rho},[1,2],[3,4]\right)$ (32)
### III-B The System Sequence Knowledge Base Tensor
The system sequence knowledge base $J_{\rho}$ and its tensor-equivalent ${\cal
J}_{\rho}$ create all the _potential_ sequences of the capabilities in
$A_{S}$.
$\displaystyle{\cal J}_{\rho}(w_{1},v_{1},w_{2},v_{2})$
$\displaystyle=A_{S}(w_{1},v_{1})\cdot A_{S}(w_{2},v_{2})\quad\forall
w_{1},w_{2}\in\\{1\ldots\sigma(P)\\},v_{1},v_{2}\in\\{1\ldots\sigma(R)\\}$
(33) $\displaystyle{J}_{\rho}(\chi_{1},\chi_{2})$
$\displaystyle=A_{S}^{V}(\chi_{1})\cdot
A^{V}_{S}(\chi_{2})\qquad\quad\;\;\forall\chi_{1},\chi_{2}\in\\{1\ldots\sigma(R)\sigma(P)\\}$
(34) $\displaystyle J_{\rho}$ $\displaystyle=A_{S}^{V}A_{S}^{VT}$ (35)
$\displaystyle J_{\rho}$
$\displaystyle=\left[J_{S}\cdot\bar{K}_{S}\right]^{V}\left[J_{S}\cdot\bar{K}_{S}\right]^{VT}$
(36) $\displaystyle{\cal J}_{\rho}$ $\displaystyle={\cal
F}_{M}^{-1}\left(J_{\rho},[\sigma(P),\sigma(R),\sigma(P),\sigma(R)],[1,2],[3,4]\right)$
(37)
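Eq. 35 states that the system sequence knowledge base is the outer product of the vectorized system concept with itself. A minimal sketch follows, using a toy $2\times 2$ system concept; column-major vectorization is assumed to match the index convention $\chi=\sigma(P)(v-1)+w$ implied by the matricizations of Section III-C.

```python
import numpy as np

A_S = np.array([[1, 0],
                [1, 1]], dtype=int)          # toy system concept, 2x2

# Vectorize column-by-column (chi = sigma(P)*(v-1) + w), then take the
# outer product (Eq. 35): every available capability can potentially
# follow every other one.
A_S_V = A_S.flatten(order="F")               # [1, 1, 0, 1]
J_rho = np.outer(A_S_V, A_S_V)

print(J_rho.sum())   # 9 potential sequences = DOF_S**2
```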
### III-C The System Sequence Constraints Tensor
Of these potential sequences of capabilities, the system sequence constraints
matrix $K_{\rho}$ serves to eliminate the infeasible pairs. The feasibility
arises from five types of constraints:
1. I:
$P_{\mu}P_{\mu}$. Two transformation processes that follow each other must
occur at the same transformation resource. $m_{1}=m_{2}$.
2. II:
$P_{\mu}P_{\bar{\eta}}$. A refined transportation process that follows a
transformation process must have an origin equivalent to the transformation
resource at which the transformation process was executed.
$m_{1}-1=(u_{2}-1)/\sigma(B_{S})$ where $/$ indicates integer division.
3. III:
$P_{\bar{\eta}}P_{\mu}$. A refined transportation process that precedes a
transformation process must have a destination equivalent to the
transformation resource at which the transformation process was executed.
$m_{2}-1=(u_{1}-1)\%\sigma(B_{S})$ where $\%$ indicates the modulus.
4. IV:
$P_{\bar{\eta}}P_{\bar{\eta}}$. A refined transportation process that follows
another must have an origin equivalent to the destination of the other.
$(u_{1}-1)\%\sigma(B_{S})=(u_{2}-1)/\sigma(B_{S})$
5. V:
$PP$. The type of operand of one process must be equivalent to the type of
output of another process. In other words, the ordered pair of processes
$P_{w_{1}}P_{w_{2}}$ is feasible if and only if $A_{P}(w_{1},w_{2})=1$ where
$A_{P}$ is the adjacency matrix that corresponds to a _functional graph_ in
which pairs of system processes are connected.
In previous hetero-functional graph theory works, the system sequence
constraints matrix $K_{\rho}$ was calculated straightforwardly using FOR
loops over the indices $\chi_{1}$ and $\chi_{2}$ to check each of the five
feasibility constraints identified above.
Here, an alternate approach based upon tensors is provided for insight into
the underlying mathematical structure. For convenience,
$\overline{K}_{\rho}=NOT(K_{\rho})$ captures the set of all _feasibility_
conditions that pertain to valid sequences of system capabilities. This set
requires that _any_ of the first four constraints above _and_ the last
constraint be satisfied.
$\overline{K}_{\rho}=\left(\overline{K}_{\rho I}\oplus\overline{K}_{\rho
II}\oplus\overline{K}_{\rho III}\oplus\overline{K}_{\rho
IV}\right)\cdot\overline{K}_{\rho V}$ (38)
where $\oplus$ is Boolean addition (Defn. 46) and $\overline{K}_{\rho
I},\overline{K}_{\rho II},\overline{K}_{\rho III},\overline{K}_{\rho
IV},\overline{K}_{\rho V}$ are the matrix implementations of the five types of
feasibility constraints identified above. Their calculation is most readily
achieved through their associated 4th-order tensors.
Type I Constraints: For the Type I constraint, $\overline{\cal K}_{\rho I}$ is
constructed from a sum of 4th-order outer products (Defn. 55) of elementary
basis vectors.
$\displaystyle\overline{\cal K}_{\rho I}$
$\displaystyle=\sum_{w_{1}=1}^{\sigma(P_{\mu})}\sum_{v_{1}=1}^{\sigma(M)}\sum_{w_{2}=1}^{\sigma(P_{\mu})}\sum_{v_{2}=v_{1}}^{v_{1}}e_{w_{1}}^{\sigma(P)}\circ
e_{v_{1}}^{\sigma(R)}\circ e_{w_{2}}^{\sigma(P)}\circ e_{v_{2}}^{\sigma(R)}$
(39)
where the $e_{i}^{n}$ notation places the value 1 on the $i^{th}$ element of a
vector of length $n$. $\overline{K}_{\rho I}$ is calculated straightforwardly
by matricizing both sides and evaluating the sums.
$\displaystyle\overline{K}_{\rho I}$
$\displaystyle=\sum_{w_{1}=1}^{\sigma(P_{\mu})}\sum_{v_{1}=1}^{\sigma(M)}\sum_{w_{2}=1}^{\sigma(P_{\mu})}\sum_{v_{2}=v_{1}}^{v_{1}}\left(e_{v_{1}}^{\sigma(R)}\otimes
e_{w_{1}}^{\sigma(P)}\right)\otimes\left(e_{v_{2}}^{\sigma(R)}\otimes
e_{w_{2}}^{\sigma(P)}\right)^{T}$ (40) $\displaystyle\overline{K}_{\rho I}$
$\displaystyle=\sum_{v_{1}=1}^{\sigma(M)}\left(e_{v_{1}}^{\sigma(R)}\otimes\begin{bmatrix}\mathds{1}^{\sigma(P_{\mu})}\\\
\mathbf{0}^{\sigma(P_{\bar{\eta}})}\end{bmatrix}\right)\left(e_{v_{1}}^{\sigma(R)}\otimes\begin{bmatrix}\mathds{1}^{\sigma(P_{\mu})}\\\
\mathbf{0}^{\sigma(P_{\bar{\eta}})}\end{bmatrix}\right)^{T}$ (41)
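Eq. 41 can be evaluated mechanically. In the sketch below, the sizes are toy values and the names `sel` and `K_I` are illustrative shorthand for the transformation-process selector vector and $\overline{K}_{\rho I}$; the loop accumulates the rank-one outer products over the transformation resources.

```python
import numpy as np

nPmu, nPeta = 1, 2        # sigma(P_mu), sigma(P_eta_bar); toy sizes
nM, nR = 2, 3             # sigma(M) transformation resources, sigma(R) total
nP = nPmu + nPeta

# Selector: 1 on transformation processes, 0 on refined transport ones.
sel = np.concatenate([np.ones(nPmu, int), np.zeros(nPeta, int)])

# Eq. 41: sum over transformation resources of rank-one outer products.
K_I = np.zeros((nR * nP, nR * nP), dtype=int)
for v1 in range(nM):
    e_v1 = np.zeros(nR, int)
    e_v1[v1] = 1
    col = np.kron(e_v1, sel)          # e_{v1} kron [1; 0] selector
    K_I += np.outer(col, col)

# The result pairs transformation capabilities only at the same resource.
print(K_I.sum())   # nM * nPmu**2 = 2
```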
Type II Constraints: Similarly, for the Type II constraint:
$\displaystyle\overline{\cal K}_{\rho II}$
$\displaystyle=\sum_{w_{1}=1}^{\sigma(P_{\mu})}\sum_{v_{1}=1}^{\sigma(M)}\sum_{y_{1}=v_{1}}^{v_{1}}\sum_{v_{2}=1}^{\sigma(R)}e_{w_{1}}^{\sigma(P)}\circ
e_{v_{1}}^{\sigma(R)}\circ\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
X_{y_{1}}^{\sigma(P_{\bar{\eta}})}\end{bmatrix}\circ e_{v_{2}}^{\sigma(R)}$
(42)
Here, the $X_{y_{1}}^{\sigma(P_{\bar{H}})}$ vector has a value of 1 wherever a
refined transportation process $p_{w_{2}}$ originates at the transformation
resource $m_{v_{1}}$. Drawing on the discussion of the 4th-order tensor ${\cal
J}_{\bar{H}}$ in Section II, $X_{y_{1}}^{\sigma(P_{\bar{H}})}$, itself, is
expressed as a vectorized sum of 3rd-order outer products.
$\displaystyle
X_{y_{1}}^{\sigma(P_{\bar{H}})}=\sum_{g=1}^{\sigma(P_{\gamma})}\sum_{y_{2}=1}^{\sigma(B_{S})}\left(e_{g}^{\sigma(P_{\gamma})}\circ
e_{y_{1}}^{\sigma(B_{S})}\circ
e_{y_{2}}^{\sigma(B_{S})}\right)^{TV}=\left(\mathds{1}^{\sigma(P_{\gamma})}\otimes
e_{y_{1}}^{\sigma(B_{S})}\otimes\mathds{1}^{\sigma(B_{S})}\right)$ (43)
$\overline{K}_{\rho II}$ is then calculated straightforwardly by matricizing
both sides of Eq. 42 and evaluating the sums.
$\displaystyle\overline{K}_{\rho II}$
$\displaystyle=\sum_{w_{1}=1}^{\sigma(P_{\mu})}\sum_{v_{1}=1}^{\sigma(M)}\sum_{y_{1}=v_{1}}^{v_{1}}\sum_{v_{2}=1}^{\sigma(R)}\left(e_{v_{1}}^{\sigma(R)}\otimes
e_{w_{1}}^{\sigma(P)}\right)\otimes\left(e_{v_{2}}^{\sigma(R)}\otimes\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
X_{y_{1}}^{\sigma(P_{\bar{H}})}\end{bmatrix}\right)^{T}$ (44)
$\displaystyle\overline{K}_{\rho II}$
$\displaystyle=\sum_{v_{1}=1}^{\sigma(M)}\left(e_{v_{1}}^{\sigma(R)}\otimes\begin{bmatrix}\mathds{1}^{\sigma(P_{\mu})}\\\
\mathbf{0}^{P_{\bar{\eta}}}\end{bmatrix}\right)\left(\mathds{1}^{\sigma(R)}\otimes\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
\mathds{1}^{\sigma(P_{\gamma})}\otimes
e_{v_{1}}^{\sigma(B_{S})}\otimes\mathds{1}^{\sigma(B_{S})}\end{bmatrix}\right)^{T}$
(45)
Type III Constraints: Similarly, for the Type III constraint:
$\displaystyle\overline{\cal K}_{\rho III}$
$\displaystyle=\sum_{y_{2}=v_{2}}^{v_{2}}\sum_{v_{1}=1}^{\sigma(R)}\sum_{w_{2}=1}^{\sigma(P_{\mu})}\sum_{v_{2}=1}^{\sigma(M)}\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
X_{y_{2}}^{\sigma(P_{\bar{\eta}})}\end{bmatrix}\circ
e_{v_{1}}^{\sigma(R)}\circ e_{w_{2}}^{\sigma(P)}\circ e_{v_{2}}^{\sigma(R)}$
(46)
Here, the $X_{y_{2}}^{\sigma(P_{\bar{H}})}$ vector has a value of 1 wherever a
refined transportation process $p_{w_{1}}$ terminates at the transformation
resource $m_{v_{2}}$. $X_{y_{2}}^{\sigma(P_{\bar{H}})}$, itself, is expressed
as a vectorized sum of 3rd-order outer products.
$\displaystyle
X_{y_{2}}^{\sigma(P_{\bar{H}})}=\sum_{g=1}^{\sigma(P_{\gamma})}\sum_{y_{1}=1}^{\sigma(B_{S})}\left(e_{g}^{\sigma(P_{\gamma})}\circ
e_{y_{1}}^{\sigma(B_{S})}\circ
e_{y_{2}}^{\sigma(B_{S})}\right)^{TV}=\left(\mathds{1}^{\sigma(P_{\gamma})}\otimes\mathds{1}^{\sigma(B_{S})}\otimes
e_{y_{2}}^{\sigma(B_{S})}\right)$ (47)
$\overline{K}_{\rho III}$ is then calculated straightforwardly by matricizing
both sides of Eq. 46 and evaluating the sums.
$\displaystyle\bar{K}_{\rho III}$
$\displaystyle=\sum_{y_{2}=v_{2}}^{v_{2}}\sum_{v_{1}=1}^{\sigma(R)}\sum_{w_{2}=1}^{\sigma(P_{\mu})}\sum_{v_{2}=1}^{\sigma(M)}\left(e_{v_{1}}^{\sigma(R)}\otimes\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
X_{y_{2}}^{\sigma(P_{\bar{H}})}\end{bmatrix}\right)\otimes\left(e_{v_{2}}^{\sigma(R)}\otimes
e_{w_{2}}^{\sigma(P)}\right)^{T}$ (48) $\displaystyle\overline{K}_{\rho III}$
$\displaystyle=\sum_{v_{2}=1}^{\sigma(M)}\left(\mathds{1}^{\sigma(R)}\otimes\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
\mathds{1}^{\sigma(P_{\gamma})}\otimes\mathds{1}^{\sigma(B_{S})}\otimes
e_{v_{2}}^{\sigma(B_{S})}\end{bmatrix}\right)\left(e_{v_{2}}^{\sigma(R)}\otimes\begin{bmatrix}\mathds{1}^{\sigma(P_{\mu})}\\\
\mathbf{0}^{P_{\bar{\eta}}}\end{bmatrix}\right)^{T}$ (49)
Type IV Constraints: Similarly, for the Type IV constraint:
$\displaystyle\overline{\cal K}_{\rho IV}$
$\displaystyle=\sum_{y_{2}=1}^{\sigma{(B_{S})}}\sum_{v_{1}=1}^{\sigma(R)}\sum_{y_{1}=y_{2}}^{y_{2}}\sum_{v_{2}=1}^{\sigma(R)}\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
X_{y_{2}}^{\sigma(P_{\bar{H}})}\end{bmatrix}\circ
e_{v_{1}}^{\sigma(R)}\circ\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
X_{y_{1}}^{\sigma(P_{\bar{H}})}\end{bmatrix}\circ e_{v_{2}}^{\sigma(R)}$ (50)
$\overline{K}_{\rho IV}$ is then calculated straightforwardly by matricizing
both sides of Eq. 50 and evaluating the sums.
$\displaystyle\overline{K}_{\rho IV}$
$\displaystyle=\sum_{y_{2}=1}^{\sigma{(B_{S})}}\sum_{v_{1}=1}^{\sigma(R)}\sum_{y_{1}=y_{2}}^{y_{2}}\sum_{v_{2}=1}^{\sigma(R)}\left(e_{v_{1}}^{\sigma(R)}\otimes\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
X_{y_{2}}^{\sigma(P_{\bar{H}})}\end{bmatrix}\right)\otimes\left(e_{v_{2}}^{\sigma(R)}\otimes\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
X_{y_{1}}^{\sigma(P_{\bar{H}})}\end{bmatrix}\right)^{T}$ (51)
$\displaystyle\overline{K}_{\rho IV}$
$\displaystyle=\sum_{y_{2}=1}^{\sigma{(B_{S})}}\left(\mathds{1}^{\sigma(R)}\otimes\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
\mathds{1}^{\sigma(P_{\gamma})}\otimes\mathds{1}^{\sigma(B_{S})}\otimes
e_{y_{2}}^{\sigma(B_{S})}\end{bmatrix}\right)\left(\mathds{1}^{\sigma(R)}\otimes\begin{bmatrix}\mathbf{0}^{\sigma(P_{\mu})}\\\
\mathds{1}^{\sigma(P_{\gamma})}\otimes
e_{y_{2}}^{\sigma(B_{S})}\otimes\mathds{1}^{\sigma(B_{S})}\end{bmatrix}\right)^{T}$
(52)
Type V Constraints: The Type V constraint must make use of the functional
graph adjacency matrix $A_{P}$. Consequently, the fourth-order tensor
${\cal\overline{K}}_{\rho V}$ is calculated first on a scalar basis using the
Kronecker delta function $\delta_{i}$ (Defn. 50) and then is matricized to
$\overline{K}_{\rho V}$.
$\displaystyle{\cal\overline{K}}_{\rho V}(w_{1},v_{1},w_{2},v_{2})$
$\displaystyle=\delta_{v_{1}v_{1}}\cdot\delta_{v_{2}v_{2}}\cdot
A_{P}(w_{1},w_{2})\qquad\forall
w_{1},w_{2}\in\\{1,\ldots,\sigma(P)\\},\;v_{1},v_{2}\in\\{1,\ldots,\sigma(R)\\}$
(53) $\displaystyle\overline{K}_{\rho V}$
$\displaystyle=\sum_{v_{1}=1}^{\sigma(R)}\sum_{v_{2}=1}^{\sigma(R)}\left(e_{v_{1}}^{\sigma(R)}\otimes
e_{v_{2}}^{\sigma(R)T}\right)\otimes A_{P}$ (54)
$\displaystyle\overline{K}_{\rho V}$
$\displaystyle=\left(\mathds{1}^{\sigma(R)}\otimes\mathds{1}^{\sigma(R)T}\right)\otimes
A_{P}$ (55)
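Eq. 55 reduces to a single Kronecker product. A minimal sketch follows, with a hypothetical two-process functional graph $A_{P}$ in which the output of $p_{1}$ feeds $p_{2}$.

```python
import numpy as np

nR = 2
A_P = np.array([[0, 1],
                [0, 0]], dtype=int)   # process p_1's output feeds p_2

# Eq. 55: resources are unconstrained, so the feasibility pattern A_P
# is replicated across every (v1, v2) resource pair by a Kronecker
# product with an all-ones matrix.
ones_RR = np.ones((nR, nR), dtype=int)
K_V = np.kron(ones_RR, A_P)

print(K_V.shape)   # (4, 4)
print(K_V.sum())   # nR * nR * A_P.sum() = 4
```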
### III-D Sequence-Dependent Degrees of Freedom
Once the system sequence knowledge base and constraints matrix have been
calculated, the number of sequence-dependent degrees of freedom follows
straightforwardly.
###### Definition 12 (Sequence-Dependent Degrees of Freedom [13, 117, 118,
108, 119]):
The set of independent pairs of actions
$z_{\chi_{1}\chi_{2}}=e_{w_{1}v_{1}}e_{w_{2}v_{2}}\in{\cal Z}$ of length 2
that completely describe the system language. Their number is given by:
$\displaystyle DOF_{\rho}=\sigma({\cal Z})$
$\displaystyle=\sum_{\chi_{1}}^{\sigma(R)\sigma(P)}\sum_{\chi_{2}}^{\sigma(R)\sigma(P)}[J_{\rho}\ominus
K_{\rho}](\chi_{1},\chi_{2})$ (56)
$\displaystyle=\sum_{\chi_{1}}^{\sigma(R)\sigma(P)}\sum_{\chi_{2}}^{\sigma(R)\sigma(P)}[A_{\rho}](\chi_{1},\chi_{2})$
(57)
For systems of substantial size, the hetero-functional adjacency matrix may be
challenging to process computationally. However, the matrix is generally very
sparse. Therefore, projection operators are used to eliminate the sparsity by
projecting the matrix onto a ones-vector[108, 119]. This is demonstrated below
for $J_{S}^{V}$ and $A_{\rho}$:
$\displaystyle\mathds{P}_{S}J_{S}^{V}$ $\displaystyle=\mathds{1}^{\sigma({\cal
E}_{S})}$ (58) $\displaystyle\mathds{P}_{S}A_{\rho}\mathds{P}_{S}^{T}$
$\displaystyle=\widetilde{A}_{\rho}$ (59)
where $\mathds{P}_{S}$ is a (non-unique) projection matrix for the vectorized
system knowledge base and the hetero-functional adjacency matrix [108, 119].
Note that the number of sequence-dependent degrees of freedom for the
projected hetero-functional adjacency matrix can be calculated as:
$DOF_{\rho}=\sigma({\cal Z})=\sum_{\psi_{1}}^{\sigma({\cal
E}_{S})}\sum_{\psi_{2}}^{\sigma({\cal
E}_{S})}[\widetilde{A}_{\rho}](\psi_{1},\psi_{2})$ (60)
where $\psi\in\left[1,\dots,\sigma({\cal E}_{S})\right]$.
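Eqs. 58 and 59 can be sketched by building one valid (non-unique) projection matrix from the nonzero entries of $J_{S}^{V}$. In the fragment below, the knowledge base is a toy example, and the unconstrained $A_{\rho}=J_{S}^{V}J_{S}^{VT}$ used in place of the true adjacency matrix is purely illustrative.

```python
import numpy as np

J_S = np.array([[1, 0],
                [0, 1],
                [1, 0]], dtype=int)

# Vectorize column-by-column, consistent with the chi index convention.
J_S_V = J_S.flatten(order="F")                 # [1, 0, 1, 0, 1, 0]

# A (non-unique) projection matrix P_S keeps only the rows where a
# structural degree of freedom exists (Eq. 58).
idx = np.flatnonzero(J_S_V)
P_S = np.eye(J_S_V.size, dtype=int)[idx]

assert np.array_equal(P_S @ J_S_V, np.ones(len(idx), dtype=int))

# Eq. 59: the same projection compresses the adjacency matrix.
A_rho = np.outer(J_S_V, J_S_V)                 # toy unconstrained A_rho
A_rho_tilde = P_S @ A_rho @ P_S.T
print(A_rho_tilde.shape)   # (3, 3)
```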
## IV Hetero-functional Incidence Tensor
This section serves to introduce the hetero-functional incidence tensor as
part of the paper’s original contribution. Subsection IV-A describes the
tensor in third-order form. Subsection IV-B then elaborates why it is sometimes
useful to present this tensor in fourth-order form. Finally, Subsection IV-C
shows how matricizing the hetero-functional incidence tensor (into second-order
form) can serve to reconstruct the hetero-functional adjacency matrix.
### IV-A Third Order Form
To complement the concept of a hetero-functional adjacency matrix $A_{\rho}$
and its associated tensor ${\cal A}_{\rho}$, the hetero-functional incidence
tensor $\widetilde{\cal M}_{\rho}$ describes the structural relationships
between the physical capabilities (i.e. structural degrees of freedom) ${\cal
E}_{S}$, the system operands $L$, and the system buffers $B_{S}$.
$\widetilde{\cal M}_{\rho}=\widetilde{\cal M}_{\rho}^{+}-\widetilde{\cal
M}_{\rho}^{-}$ (61)
###### Definition 13 (The Negative 3rd Order Hetero-functional Incidence
Tensor $\widetilde{\cal M}_{\rho}^{-}$):
The negative hetero-functional incidence tensor $\widetilde{\cal
M}_{\rho}^{-}\in\\{0,1\\}^{\sigma(L)\times\sigma(B_{S})\times\sigma({\cal
E}_{S})}$ is a third-order tensor whose element $\widetilde{\cal
M}_{\rho}^{-}(i,y,\psi)=1$ when the system capability
${\epsilon}_{\psi}\in{\cal E}_{S}$ pulls operand $l_{i}\in L$ from buffer
$b_{s_{y}}\in B_{S}$.
###### Definition 14 (The Positive 3rd Order Hetero-functional Incidence
Tensor $\widetilde{\cal M}_{\rho}^{+}$):
The positive hetero-functional incidence tensor $\widetilde{\cal
M}_{\rho}^{+}\in\\{0,1\\}^{\sigma(L)\times\sigma(B_{S})\times\sigma({\cal
E}_{S})}$ is a third-order tensor whose element $\widetilde{\cal
M}_{\rho}^{+}(i,y,\psi)=1$ when the system capability
${\epsilon}_{\psi}\in{\cal E}_{S}$ injects operand $l_{i}\in L$ into buffer
$b_{s_{y}}\in B_{S}$.
The calculation of these two tensors depends on the definition of two more
matrices which further depend on the hetero-functional graph theory
definitions in Appendix -C.
###### Definition 15 (The Negative Process-Operand Incidence Matrix
$M_{LP}^{-}$):
A binary incidence matrix $M_{LP}^{-}\in\\{0,1\\}^{\sigma(L)\times\sigma(P)}$
whose element $M_{LP}^{-}(i,w)=1$ when the system process $p_{w}\in P$ pulls
operand $l_{i}\in L$ as an input. It is further decomposed into the negative
transformation process-operand incidence matrix $M_{LP_{\mu}}^{-}$ (Defn. 37)
and the negative refined transformation process-operand incidence matrix
$M_{LP_{\bar{\eta}}}^{-}$ (Defn. 38) which by definition is in turn calculated
from the negative holding process-operand incidence matrix
$M_{LP_{\gamma}}^{-}$ (Defn. 39).
$M_{LP}^{-}=\begin{bmatrix}M_{LP_{\mu}}^{-}&M_{LP_{\bar{\eta}}}^{-}\end{bmatrix}=\begin{bmatrix}M_{LP_{\mu}}^{-}&M_{LP_{\gamma}}^{-}\otimes\mathds{1}^{\sigma(P_{\eta})T}\end{bmatrix}$
(62)
###### Definition 16 (The Positive Process-Operand Incidence Matrix
$M_{LP}^{+}$):
A binary incidence matrix $M_{LP}^{+}\in\\{0,1\\}^{\sigma(L)\times\sigma(P)}$
whose element $M_{LP}^{+}(i,w)=1$ when the system process $p_{w}\in P$ injects
operand $l_{i}\in L$ as an output. It is further decomposed into the positive
transformation process-operand incidence matrix $M_{LP_{\mu}}^{+}$ (Defn. 40)
and the positive refined transformation process-operand incidence matrix
$M_{LP_{\bar{\eta}}}^{+}$ (Defn. 41) which by definition is in turn
calculated from the positive holding process-operand incidence matrix
$M_{LP_{\gamma}}^{+}$ (Defn. 42).
$M_{LP}^{+}=\begin{bmatrix}M_{LP_{\mu}}^{+}&M_{LP_{\bar{\eta}}}^{+}\end{bmatrix}=\begin{bmatrix}M_{LP_{\mu}}^{+}&M_{LP_{\gamma}}^{+}\otimes\mathds{1}^{\sigma(P_{\eta})T}\end{bmatrix}$
(63)
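The Kronecker-product decomposition in Eqs. 62-63 can be sketched in numpy. All sizes and incidence entries are hypothetical toy values: two operands, one transformation process, two holding processes, and three refined transportation processes.

```python
import numpy as np

# Hypothetical toy incidence matrices for Eq. 62.
M_LP_mu_neg = np.array([[1],
                        [0]])            # negative transformation process-operand incidence
M_LP_gamma_neg = np.array([[0, 1],
                           [1, 0]])      # negative holding process-operand incidence
ones_P_eta = np.ones((1, 3), dtype=int)  # 1^{sigma(P_eta)T}

# Each holding process is replicated across all sigma(P_eta) transportation
# processes by the Kronecker product, then concatenated with M_LP_mu^-.
M_LP_neg = np.hstack([M_LP_mu_neg, np.kron(M_LP_gamma_neg, ones_P_eta)])
# M_LP_neg has size sigma(L) x (sigma(P_mu) + sigma(P_gamma) * sigma(P_eta))
```

The Kronecker product encodes that a refined transportation process inherits the operand requirements of its associated holding process.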
With the definitions of these incidence matrices in place, the calculation of
the negative and positive hetero-functional incidence tensors $\widetilde{\cal
M}_{\rho}^{-}$ and $\widetilde{\cal M}_{\rho}^{+}$ follows straightforwardly
as a third-order outer product. For $\widetilde{\cal M}_{\rho}^{-}$:
$\displaystyle\widetilde{\cal
M}_{\rho}^{-}=\sum_{i=1}^{\sigma(L)}\sum_{y_{1}=1}^{\sigma(B_{S})}e_{i}^{\sigma(L)}\circ
e_{y_{1}}^{\sigma(B_{S})}\circ\mathds{P}_{S}\left(\left(X^{-}_{iy_{1}}\right)^{V}\right)$
(64)
where
$X^{-}_{iy_{1}}=\left[\begin{array}[]{c}M_{LP_{\mu}}^{-T}e_{i}^{\sigma(L)}e_{y_{1}}^{\sigma(M)T}\quad|\quad\mathbf{0}\\\
\hline\cr
M_{LP_{\gamma}}^{-T}e_{i}^{\sigma(L)}\otimes\left(e_{y_{1}}^{\sigma(B_{S})}\otimes\mathds{1}^{\sigma(B_{S})}\right)\otimes\mathds{1}^{\sigma(R)T}\\\
\end{array}\right]$ (65)
The $X^{-}_{iy_{1}}$ matrix is equivalent in size to the system concept
$A_{S}$. It has a value of one in all elements where the associated process
both withdraws input operand $l_{i}$ and originates at the buffer
$b_{s_{y_{1}}}$. Consequently, when $X^{-}_{iy_{1}}$ is vectorized and then
projected with $\mathds{P}_{S}$, the result is a vector with a value of one
only where the associated system capabilities meet these criteria.
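The outer-product sum in Eq. 64 can be sketched in numpy, assuming the $X^{-}_{iy_{1}}$ matrices of Eq. 65 and the projection matrix are already available. All sizes and values below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical toy sizes: sigma(L), sigma(B_S), sigma(P), sigma(R), sigma(E_S).
nL, nB, nP, nR, nE = 2, 2, 2, 2, 3
rng = np.random.default_rng(0)
X_neg = rng.integers(0, 2, size=(nL, nB, nP, nR))   # stand-in for X^-_{iy1}
P_S = np.zeros((nE, nP * nR), dtype=int)
P_S[[0, 1, 2], [0, 1, 3]] = 1                       # stand-in projection matrix

M_tilde_neg = np.zeros((nL, nB, nE), dtype=int)
for i in range(nL):
    for y in range(nB):
        e_i = np.eye(nL, dtype=int)[:, i]           # elementary basis vector e_i
        e_y = np.eye(nB, dtype=int)[:, y]
        proj = P_S @ X_neg[i, y].flatten(order='F') # P_S (X^-_{iy1})^V
        M_tilde_neg += np.einsum('a,b,c->abc', e_i, e_y, proj)  # e_i o e_y o proj

# Elementwise: M_tilde_neg[i, y, psi] = (P_S @ X_neg[i, y].flatten('F'))[psi].
```

The two basis vectors simply place the projected capability vector at the correct (operand, buffer) fiber of the third-order tensor.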
For $\widetilde{\cal M}_{\rho}^{+}$:
$\displaystyle\widetilde{\cal
M}_{\rho}^{+}=\sum_{i=1}^{\sigma(L)}\sum_{y_{2}=1}^{\sigma(B_{S})}e_{i}^{\sigma(L)}\circ
e_{y_{2}}^{\sigma(B_{S})}\circ\mathds{P}_{S}\left(\left(X^{+}_{iy_{2}}\right)^{V}\right)$
(66)
where
$X^{+}_{iy_{2}}=\left[\begin{array}[]{c}M_{LP_{\mu}}^{+T}e_{i}^{\sigma(L)}e_{y_{2}}^{\sigma(M)T}\quad|\quad\mathbf{0}\\\
\hline\cr
M_{LP_{\gamma}}^{+T}e_{i}^{\sigma(L)}\otimes\left(\mathds{1}^{\sigma(B_{S})}\otimes
e_{y_{2}}^{\sigma(B_{S})}\right)\otimes\mathds{1}^{\sigma(R)T}\\\
\end{array}\right]$ (67)
The $X^{+}_{iy_{2}}$ matrix is equivalent in size to the system concept
$A_{S}$. It also has a value of one in all elements where the associated
process both injects output operand $l_{i}$ and terminates at the buffer
$b_{s_{y_{2}}}$. Consequently, when $X^{+}_{iy_{2}}$ is vectorized and then
projected with $\mathds{P}_{S}$, the result is a vector with a value of one
only where the associated system capabilities meet these criteria.
It is important to note that the definitions of the 3rd order hetero-
functional incidence tensors $\widetilde{\cal M}_{\rho}^{-}$ and
$\widetilde{\cal M}_{\rho}^{+}$ are provided in projected form as indicated by
the presence of the projection operator $\mathds{P}_{S}$ in Equations 64 and
66 respectively. It is often useful to use the un-projected form of these
tensors.
$\displaystyle{\cal
M}_{\rho}^{-}=\sum_{i=1}^{\sigma(L)}\sum_{y_{1}=1}^{\sigma(B_{S})}e_{i}^{\sigma(L)}\circ
e_{y_{1}}^{\sigma(B_{S})}\circ\left(X^{-}_{iy_{1}}\right)^{V}$ (68)
$\displaystyle{\cal
M}_{\rho}^{+}=\sum_{i=1}^{\sigma(L)}\sum_{y_{2}=1}^{\sigma(B_{S})}e_{i}^{\sigma(L)}\circ
e_{y_{2}}^{\sigma(B_{S})}\circ\left(X^{+}_{iy_{2}}\right)^{V}$ (69)
### IV-B Fourth Order Form
The third dimension of these unprojected 3rd order hetero-functional incidence
tensors can then be split into two dimensions to create 4th order hetero-
functional incidence tensors.
$\displaystyle{\cal M}_{PR}^{+}$ $\displaystyle=vec^{-1}\left({\cal
M}_{\rho}^{+},[\sigma(P),\sigma(R)],3\right)$ (70) $\displaystyle{\cal
M}_{PR}^{-}$ $\displaystyle=vec^{-1}\left({\cal
M}_{\rho}^{-},[\sigma(P),\sigma(R)],3\right)$ (71)
These fourth order tensors describe the structural relationships between the
system processes $P$, the physical resources $R$ that realize them, the system
operands $L$ that are consumed and injected by those processes, and the system
buffers $B_{S}$ from which these operands are sent and the system buffers
$B_{S}$ at which they are received. They are used in the following
section as part of the discussion on layers.
${\cal M}_{PR}={\cal M}_{PR}^{+}-{\cal M}_{PR}^{-}$ (72)
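The $vec^{-1}$ reshaping of Eqs. 70-71 can be sketched in numpy with hypothetical toy sizes, assuming a column-major vectorization convention so that the third axis of length $\sigma(P)\sigma(R)$ unstacks into $(\sigma(P),\sigma(R))$:

```python
import numpy as np

# Hypothetical toy sizes: sigma(L), sigma(B_S), sigma(P), sigma(R).
nL, nB, nP, nR = 2, 2, 2, 3
M_rho_neg = np.arange(nL * nB * nP * nR).reshape(nL, nB, nP * nR)  # stand-in tensor

# Split axis 3: under column-major stacking, flat index psi = w + v * sigma(P),
# so reshape the last axis to (nR, nP) row-major and swap the two new axes.
M_PR_neg = M_rho_neg.reshape(nL, nB, nR, nP).transpose(0, 1, 3, 2)
assert M_PR_neg[0, 0, 1, 2] == M_rho_neg[0, 0, 1 + 2 * nP]
```

The reshape is purely a reindexing; no tensor entries change, only how the capability axis is addressed.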
###### Definition 17 (The Negative 4th Order Hetero-functional Incidence
Tensor ${\cal M}_{PR}^{-}$):
The negative 4th Order hetero-functional incidence tensor ${\cal
M}_{PR}^{-}\in\\{0,1\\}^{\sigma(L)\times\sigma(B_{S})\times\sigma(P)\times\sigma(R)}$
has element ${\cal M}_{PR}^{-}(i,y,w,v)=1$ when the system process $p_{w}\in
P$ realized by resource $r_{v}\in R$ pulls operand $l_{i}\in L$ from buffer
$b_{s_{y}}\in B_{S}$.
###### Definition 18 (The Positive 4th Order Hetero-functional Incidence
Tensor ${\cal M}_{PR}^{+}$):
The positive 4th Order hetero-functional incidence tensor ${\cal
M}_{PR}^{+}\in\\{0,1\\}^{\sigma(L)\times\sigma(B_{S})\times\sigma(P)\times\sigma(R)}$
has element ${\cal M}_{PR}^{+}(i,y,w,v)=1$ when the system process $p_{w}\in
P$ realized by resource $r_{v}\in R$ injects operand $l_{i}\in L$ into buffer
$b_{s_{y}}\in B_{S}$.
Furthermore, the negative and positive 4th order hetero-functional incidence tensors
can be used to demonstrate a direct relationship to the system concept
$A_{S}$.
$\displaystyle{\cal M}_{PR}^{-}(i,y,w,v)=$ $\displaystyle
X^{-}_{iy_{1}}(w,v)\cdot A_{S}(w,v)$ (73) $\displaystyle{\cal M}_{PR}^{-}=$
$\displaystyle\sum_{i}^{\sigma(L)}\sum_{y_{1}}^{\sigma(B_{S})}X^{-}_{iy_{1}}\cdot
A_{S}$ (74) $\displaystyle{\cal M}_{PR}^{+}(i,y,w,v)=$ $\displaystyle
X^{+}_{iy_{2}}(w,v)\cdot A_{S}(w,v)$ (75) $\displaystyle{\cal M}_{PR}^{+}=$
$\displaystyle\sum_{i}^{\sigma(L)}\sum_{y_{2}}^{\sigma(B_{S})}X^{+}_{iy_{2}}\cdot
A_{S}$ (76)
Equations 74 and 76 show that the 4th order hetero-functional incidence
tensors contain three types of information:
1. 1.
the mapping of system processes to system resources in the system concept
$A_{S}$,
2. 2.
the mapping of processes to their operands in $M_{LP}^{-}$ and $M_{LP}^{+}$,
3. 3.
the implicit knowledge that by definition transformation processes occur at a
stationary buffer, and that transportation processes are defined by their
origin and destination buffers.
In other words, the hetero-functional incidence tensor is a complete description
of a system’s allocated architecture.
### IV-C Second Order Form
Returning to the third-order hetero-functional incidence tensor
$\widetilde{\cal M}_{\rho}$, it and its positive and negative components
$\widetilde{\cal M}_{\rho}^{+},\widetilde{\cal M}_{\rho}^{-}$, can also be easily
matricized.
$\displaystyle M_{\rho}$ $\displaystyle={\cal F}_{M}\left(\widetilde{\cal
M}_{\rho},[1,2],[3]\right)$ (77) $\displaystyle M_{\rho}^{-}$
$\displaystyle={\cal F}_{M}\left(\widetilde{\cal M}_{\rho}^{-},[1,2],[3]\right)$ (78)
$\displaystyle M_{\rho}^{+}$ $\displaystyle={\cal F}_{M}\left(\widetilde{\cal
M}_{\rho}^{+},[1,2],[3]\right)$ (79)
The resulting matrices have a size of
$\sigma(L)\sigma(B_{S})\times\sigma({\cal E}_{S})$ which has a corresponding
physical intuition. Each buffer $b_{s_{y}}$ has $\sigma(L)$ copies to reflect
a place (i.e. bin) for each operand at that buffer. Each of these places then
forms a bipartite graph with the system’s physical capabilities. Consequently,
and as expected, the hetero-functional adjacency matrix $A_{\rho}$ can be
calculated as a matrix product of the positive and negative hetero-functional
incidence matrices $M_{\rho}^{+}$ and $M_{\rho}^{-}$.
$A_{\rho}=M_{\rho}^{+T}\odot M_{\rho}^{-}=M_{\rho}^{+T}M_{\rho}^{-}$ (80)
Such a product systematically enforces all five of the feasibility constraints
identified in Section III. Furthermore, the Boolean and real matrix products
are interchangeable because each process is associated with exactly one
origin-destination pair.
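The matricization and product of Eqs. 77-80 can be sketched in numpy. The incidence tensors below are hypothetical toy values with one origin/destination place per capability:

```python
import numpy as np

# Hypothetical toy incidence tensors of size sigma(L) x sigma(B_S) x sigma(E_S).
nL, nB, nE = 2, 2, 3
M_t_neg = np.zeros((nL, nB, nE), dtype=int)
M_t_pos = np.zeros((nL, nB, nE), dtype=int)
M_t_neg[0, 0, 0] = 1   # capability 0 pulls operand 0 from buffer 0 ...
M_t_pos[0, 1, 0] = 1   # ... and injects operand 0 into buffer 1
M_t_neg[0, 1, 1] = 1   # capability 1 pulls operand 0 from buffer 1

# F_M(., [1,2], [3]): the (operand, buffer) "places" become rows.
M_neg = M_t_neg.reshape(nL * nB, nE)
M_pos = M_t_pos.reshape(nL * nB, nE)

# Eq. 80: capability psi1 precedes psi2 iff psi1 deposits an operand in the
# same place from which psi2 withdraws it.
A_rho = M_pos.T @ M_neg
assert A_rho[0, 1] == 1   # capability 0 can be followed by capability 1
```

With one place per capability, the real and Boolean products coincide, mirroring the interchangeability noted above.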
## V Discussion
Given the discussion on multi-layer networks in the introduction, it is
worthwhile reconciling the gap in terminology between multi-layer networks and
hetero-functional graph theory. First, the concept of layers in hetero-
functional graphs is discussed. Second, an ontological comparison of layers in
hetero-functional graphs and multi-layer networks is provided. Third, a
discussion of network descriptors in the context of layers is provided. Given
the “disparate terminology and the lack of consensus” in the multi-layer
network literature, the discussion uses the multi-layer description provided
by De Dominico et al. [14].
### V-A Layers in Hetero-functional Graphs
###### Definition 19 (Layer):
[120] A layer $G_{\lambda}=\\{{\cal E}_{S\lambda},Z_{S\lambda}\\}$ of a
hetero-functional graph $G=\\{{\cal E}_{S},Z_{S}\\}$ is a subset of a hetero-
functional graph, $G_{\lambda}\subseteq G$, for which a predefined layer
selection (or classification) criterion applies. A set of layers in a hetero-
functional graph adhere to a classification scheme composed of a number of
selection criteria.
Note that this definition of a layer is particularly flexible because it
depends on the nature of the classification scheme and its associated
selection criteria. Nevertheless, and as discussed later, it is important to
choose a classification scheme that leads to a set of mutually exclusive
layers that are also collectively exhaustive of the hetero-functional graph as
a whole.
To select out specific subsets of capabilities (or structural degrees of
freedom), HFGT has used the concept of “selector matrices” of various types[7,
121]. Here a layer selector matrix is defined.
###### Definition 20:
[120] Layer Selector Matrix: A binary matrix $\Lambda_{\lambda}$ of size
$\sigma(P)\times\sigma(R)$ whose element $\Lambda_{\lambda}(w,v)=1$ when the
capability $e_{wv}\in{\cal E}_{S\lambda}$.
From this definition, the calculation of a hetero-functional graph layer
follows straightforwardly. First, a layer projection operator
$\mathds{P}_{\lambda}$ is calculated[120]:
$\displaystyle\mathds{P}_{\lambda}\Lambda_{\lambda}^{V}$
$\displaystyle=\mathds{1}^{\sigma({\cal E}_{S})}$ (81)
Next, the negative and positive hetero-functional incidence tensors
$\widetilde{\cal M}_{\rho\lambda}^{-}$ and $\widetilde{\cal
M}_{\rho\lambda}^{+}$ for a given layer $\lambda$ are calculated
straightforwardly[120].
$\displaystyle\widetilde{\cal
M}_{\rho\lambda}^{-}=\sum_{i=1}^{\sigma(L)}\sum_{y_{1}=1}^{\sigma(B_{S})}e_{i}^{\sigma(L)}\circ
e_{y_{1}}^{\sigma(B_{S})}\circ\mathds{P}_{\lambda}\left(\left(X^{-}_{iy_{1}}\right)^{V}\right)=\widetilde{\cal
M}_{\rho}^{-}\odot_{3}\mathds{P}_{\lambda}$ (82) $\displaystyle\widetilde{\cal
M}_{\rho\lambda}^{+}=\sum_{i=1}^{\sigma(L)}\sum_{y_{2}=1}^{\sigma(B_{S})}e_{i}^{\sigma(L)}\circ
e_{y_{2}}^{\sigma(B_{S})}\circ\mathds{P}_{\lambda}\left(\left(X^{+}_{iy_{2}}\right)^{V}\right)=\widetilde{\cal
M}_{\rho}^{+}\odot_{3}\mathds{P}_{\lambda}$ (83)
From there, the positive and negative hetero-functional incidence tensors for
a given layer can be matricized and the adjacency matrix of the associated
layer $\widetilde{A}_{\rho\lambda}$ follows straightforwardly[120].
$\displaystyle\widetilde{M}_{\rho\lambda}^{+}$ $\displaystyle={\cal
F}_{M}(\widetilde{\cal M}_{\rho\lambda}^{+},[1,2],[3])$ (84)
$\displaystyle\widetilde{M}_{\rho\lambda}^{-}$ $\displaystyle={\cal
F}_{M}(\widetilde{\cal M}_{\rho\lambda}^{-},[1,2],[3])$ (85)
$\displaystyle\widetilde{A}_{\rho\lambda}$
$\displaystyle=\widetilde{M}_{\rho\lambda}^{+T}\odot\widetilde{M}_{\rho\lambda}^{-}$
(86)
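The layer extraction of Eqs. 81-86 can be sketched in numpy. The full incidence tensors and the layer membership below are hypothetical stand-ins for a 3-capability system:

```python
import numpy as np

# Hypothetical toy sizes and tensors.
nL, nB, nE = 2, 2, 3
rng = np.random.default_rng(1)
M_t_neg = rng.integers(0, 2, size=(nL, nB, nE))
M_t_pos = rng.integers(0, 2, size=(nL, nB, nE))

keep = np.array([0, 2])                        # capabilities in layer lambda
P_lam = np.zeros((keep.size, nE), dtype=int)
P_lam[np.arange(keep.size), keep] = 1          # layer projection operator (Eq. 81)

# Mode-3 product (odot_3): project the capability axis onto the layer.
M_neg_lam = np.einsum('iyp,qp->iyq', M_t_neg, P_lam)
M_pos_lam = np.einsum('iyp,qp->iyq', M_t_pos, P_lam)

# Matricize and form the layer adjacency matrix (Eqs. 84-86); the Boolean
# product is realized here by thresholding the real product.
A_lam = (M_pos_lam.reshape(nL * nB, -1).T @ M_neg_lam.reshape(nL * nB, -1) > 0).astype(int)
```

As expected, the layer adjacency matrix equals the submatrix of the full adjacency matrix restricted to the layer's capabilities.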
This approach of separating a hetero-functional graph into its constituent
layers is quite generic because the layer selector matrix $\Lambda_{\lambda}$
can admit a wide variety of classification schemes. Three classification
schemes are discussed here:
1. 1.
An Input Operand Set Layer
2. 2.
An Output Operand Set Layer
3. 3.
A Dynamic Device Model Layer
Fig. 4: The Trimetric Smart City Infrastructure Test Case Visualized as Five
Layers Defined by Input Operand Sets: The Potable Water Topology, The
Electrified Potable Water Topology, the Electric Power Topology, the Charging
Topology, and the Transportation Topology
###### Definition 21:
Input Operand Set Layer: A hetero-functional graph layer for which all of the
node-capabilities have a common set of input operands $L_{\lambda}\subseteq
L$.
This definition of an Operand Set Layer was used in the HFGT text[7] to
partition the Trimetrica test case (first mentioned in Figure 1) into the
multi-layer depiction in Figure 4. In this classification scheme, any system
contains up to 2σ(L) possible layers. For completeness, an index
$\lambda_{D}\in\\{1,\dots,2^{\sigma(L)}\\}$ is used to denote a given layer.
In reality, however, the vast majority of physical systems exhibit far fewer
than 2σ(L) layers. Consequently, it is often useful to simply assign an index
$\lambda$ to each layer and create a 1-1 mapping function (i.e. lookup table)
$f_{\lambda}$ back to the $\lambda_{D}$ index.
$f_{\lambda}:\lambda\rightarrow\lambda_{D}$ (87)
The utility of the $\lambda_{D}$ index (stated as a base 10 number) becomes
apparent when it is converted into a binary (base 2) number
$\lambda_{v}\in\\{0,1\\}^{\sigma(L)}$ which may be used equivalently as a
binary vector of the same length.
$\lambda_{v}=bin(\lambda_{D})$ (88)
The resulting binary vector $\lambda_{v}$ has the useful property that
$\lambda_{v}(l_{i})=1$, iff operand $l_{i}\in L_{\lambda}$. Consequently, a
given value of $\lambda_{v}$ serves to select from $L$ the operands that
pertain to layer $\lambda$. The associated layer selector matrix follows
straightforwardly:
$\Lambda_{\lambda}(w,v)=\left\\{\begin{array}[]{cc}1&\mbox{if}\quad\lambda_{v}=M_{LP}^{-}(:,w)\quad\forall
r_{v}\in R\\\ 0&\mbox{otherwise}\end{array}\right.$ (89)
It is also worth noting that the layer selector matrix $\Lambda$ above is
effectively a third order tensor whose value $\Lambda(\lambda,w,v)=1$ when the
capability $e_{wv}$ is part of layer $\lambda$.
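The construction of Eqs. 88-89 can be sketched in numpy for a hypothetical system with two operands, three processes, and two resources; the mapping of bit positions to operands is an assumed convention:

```python
import numpy as np

# Hypothetical toy input-operand incidence (columns index processes).
M_LP_neg = np.array([[1, 1, 0],
                     [0, 1, 1]])
nP, nR = M_LP_neg.shape[1], 2

lam_D = 2                          # decimal layer index
lam_v = np.array([int(b) for b in np.binary_repr(lam_D, width=2)])  # Eq. 88: [1, 0]

# Eq. 89: a capability (w, v) belongs to the layer iff the input-operand set of
# process w matches lam_v, independently of the realizing resource v.
Lambda_lam = np.zeros((nP, nR), dtype=int)
for w in range(nP):
    if np.array_equal(lam_v, M_LP_neg[:, w]):
        Lambda_lam[w, :] = 1
```

Here only the first process (input-operand set {operand 0}) falls into the layer, and it does so for every resource, reflecting the $\forall r_{v}\in R$ condition.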
One advantage of a classification scheme based on _sets_ of input operands is
that they lead to the generation of a mutually exclusive and collectively
exhaustive set of layers. Because no process (and consequently capability) has
two sets of input operands, it can only exist in a single layer (mutual
exclusivity). In the meantime, the presence of $2^{\sigma(L)}$ assures that
all capabilities fall into (exactly) one layer (exhaustivity). It is worth
noting that a classification scheme based on _individual_ operands would not
yield these properties. For example, a water pump consumes electricity and
water as input operands. Consequently, it would have a problematic existence
in both the “water layer” and the “electricity layer”. In contrast, a
classification scheme based on operand sets creates an “electricity-water”
layer.
Analogously to Defn. 21, an output-operand set layer can be defined and its
associated layer selector matrix calculated.
###### Definition 22:
Output Operand Set Layer: A hetero-functional graph layer for which all of the
node-capabilities have a common set of output operands $L_{\lambda}\subseteq
L$.
$\Lambda_{\lambda}(w,v)=\left\\{\begin{array}[]{cc}1&\mbox{if}\quad\lambda_{v}=M_{LP}^{+}(:,w)\quad\forall
r_{v}\in R\\\ 0&\mbox{otherwise}\end{array}\right.$ (90)
The third classification scheme is required when developing dynamic equations
of motion from the structural information of a hetero-functional graph[117,
122]. Every process is said to have a “dynamic device model” that is usually
described as a set of differential, algebraic, or differential-algebraic
equations[117, 122]. The simplest of these are the constitutive laws of basic
dynamic system elements (e.g. resistors, capacitors, and inductors). Some
processes, although distinct, may have device models with the same functional
form. For example, two resistors at different places in an electrical system
have the same constitutive (Ohm’s) law, but have different transportation
processes because their origin and destinations are different. Consequently,
layers that distinguish on the basis of dynamic device model (i.e.
constitutive law) are necessary.
###### Definition 23:
Dynamic Device Model Layer: A hetero-functional graph layer for which all of
the node-capabilities have a dynamic device model with the same functional
form.
In such a case, the layer selector matrix $\Lambda_{\lambda}$
straightforwardly maps capabilities to their layer and dynamic device model
interchangeably. A sufficient number of layers need to be created to account
for all of the different types of dynamic device models in the system. This
classification scheme may be viewed as a generalization of the well-known
literature on “linear-graphs”[123] and “bond graphs”[124].
### V-B Finding Commonality between Multilayer Networks and Hetero-functional
Graphs
The above discussion of layers in a hetero-functional graph inspires a
comparison with multi-layer networks. The multi-layer adjacency tensor (${\cal
A}_{MLN}$) defined by De Dominico et al. [14] is chosen to facilitate the
discussion. This fourth order tensor has elements ${\cal
A}_{MLN}(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})$ where the indices
$\alpha_{1},\alpha_{2}$ denote “vertices” and $\beta_{1},\beta_{2}$ denote
“layers”. De Dominico et al. write that this multilayer adjacency tensor is a
[14]: _“…very general object that can be used to represent a wealth of
complicated relationships among nodes.”_ The challenge in reconciling the
multi-layer adjacency tensor ${\cal A}_{MLN}$ and the hetero-functional
adjacency tensor ${\cal A}_{\rho}$ is an ontological one. Referring back to
the ontological discussion in the introduction and more specifically Figure 2
reveals that the underlying abstract conceptual elements (in the mind) to
which these two mathematical models refer may not be the same.
Consider the following interpretation of ${\cal
A}_{MLN}(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})={\cal
A}_{B_{S}l_{1}}(y_{1},y_{2},i_{1},i_{2})$ where the multi-layer network’s
vertices are equated to the buffers $B_{S}$ and the layers are equated to the
operands $L$. This interpretation would well describe the departure of an
operand $l_{i_{1}}$ from buffer $b_{sy_{1}}$ and its arrival as $l_{i_{2}}$ at
$b_{sy_{2}}$. The equivalence of vertices to buffers is effectively a
consensus view in the literature. In contrast, the concept of a “layer” in a
multi-layer network (as motivated in the introduction) remains relatively
unclear. The equivalence of layers to operands warrants further attention.
###### Theorem 1:
The mathematical model ${\cal A}_{B_{S}l_{1}}$ is neither lucid nor complete with
respect to the system processes $P$ (as an abstraction).
###### Proof 1:
By contradiction. Assume that ${\cal A}_{B_{S}l_{1}}$ is both a lucid and
complete network model with respect to system processes $P$. Consider an
operand $l_{1}$ that departs $b_{s1}$, undergoes process $p_{1}$, and arrives
as $l_{1}$ at $b_{s2}$. Now consider the same operand $l_{1}$ that departs
$b_{s1}$, undergoes process $p_{2}$, and arrives as $l_{1}$ at $b_{s2}$. Both
of these scenarios would be denoted by ${\cal A}_{B_{S}l_{1}}(1,2,1,1)=1$.
Consequently, this modeling element is overloaded and as such violates the
ontological property of lucidity. Furthermore, because ${\cal A}_{B_{S}l_{1}}$
makes no mention of the concept of system processes, then it violates the
completeness property as well.
The counter-example provided in the proof above is not simply a theoretical
abstraction but rather quite practical. For several decades, the field of
mechanical engineering has used “linear graphs”[123] to derive the equations
of motion of dynamic systems with multi-domain physics. Consider the RLC
circuit shown in Figure 5 and its associated linear graph. As parallel
elements, the inductor and capacitor both transfer electrical power (as an
operand) between the same pair of nodes. However, the constitutive law (as a
physical process) of a capacitor is distinct from that of the inductor.
Consequently, the interpretation ${\cal A}_{B_{S}l_{1}}$ of a multi-layer
network is inadequate even for this very simple counter-example. (Although
electric power systems and circuits have served as a rich application domain
for graph theory and network science, these approaches usually parameterize
the circuit components homogeneously as a fixed-value impedance/admittance at
constant frequency. When the constant-frequency assumption is relaxed, the
diversity of constitutive laws for resistors, capacitors, and inductors must
be explicitly considered.)
Fig. 5: A Simple RLC Circuit shown as a circuit diagram on left and as a
linear graph model on right. Each resistor, capacitor and inductor can be said
to be part of its own layer by virtue of their distinct constitutive laws.
Another possible interpretation of a multi-layer network is ${\cal
A}_{MLN}(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})$ = ${\cal
A}_{B_{S}l_{2}}(y_{1},y_{2},w_{1},w_{2})$ where the multi-layer network’s
vertices are equated to the buffers $B_{S}$ and the layers are equated to the
processes $P$. This interpretation would well describe the execution of a
process $p_{w_{1}}$ that is realized by buffer $b_{sy_{1}}$ followed by a
process $p_{w_{2}}$ that is realized by buffer $b_{sy_{2}}$. The equivalence
of layers to processes warrants further attention as well.
###### Theorem 2:
The mathematical model ${\cal A}_{B_{S}l_{2}}$ is neither lucid nor complete with
respect to the system’s transportation resources $H$ (as an abstraction).
###### Proof 2:
By contradiction. Assume that ${\cal A}_{B_{S}l_{2}}$ is both a lucid and
complete network model with respect to the system’s transportation resources $H$.
Consider transportation process $p$ between a buffer $b_{s1}$ and a distinct
buffer $b_{s2}$. If such a transportation process were realized by any buffer
$b_{s}\in B_{S}$, then by definition it would no longer be a buffer but rather
a transportation resource. Consequently, ${\cal A}_{B_{S}l_{2}}$ is not
complete with respect to the system’s transportation resources $H$. Now consider
a process $p_{1}$ that is realized by buffer $b_{s1}$ followed by a process
$p_{2}$ that is realized by a distinct buffer $b_{s2}$. This is denoted by
${\cal A}_{B_{S}l_{2}}(1,2,1,2)=1$. Given the distinctness of $b_{s1}$ and
$b_{s2}$, a transportation process must have happened in between $p_{1}$ and
$p_{2}$ although it is not explicitly stated by the mathematical statement
${\cal A}_{B_{S}l_{2}}(1,2,1,2)=1$. Such a transportation process, although
well-defined by its origin and destination could have been realized by any one
of a number of transportation resources. Consequently, the modeling element is
overloaded and as such violates the property of lucidity. The lack of an
explicit description of transportation processes or resources limits the
utility of this type of multi-layer network model.
It is worth noting that the first multi-layer network interpretation ${\cal
A}_{B_{S}l_{1}}$ can be derived directly from the positive and negative
hetero-functional incidence matrices[120].
$\displaystyle{\cal A}_{B_{S}l_{1}}(y_{1},y_{2},i_{1},i_{2})=$
$\displaystyle\bigvee_{\psi}{\cal M}_{\rho}^{-}(i_{1},y_{1},\psi)\cdot{\cal
M}_{\rho}^{+}(i_{2},y_{2},\psi)$ (91) $\displaystyle{\cal A}_{B_{S}l_{1}}=$
$\displaystyle{\cal F}_{M}^{-1}\left(M_{\rho}^{-T}\odot
M_{\rho}^{+},[\sigma(B_{S}),\sigma(B_{S}),\sigma(L),\sigma(L)],[1,3],[2,4]\right)$
(92)
When $M_{\rho}^{-T}$ and $M_{\rho}^{+}$ are multiplied so that the
capabilities ${\cal E}_{s}$ are the inner dimension, the result is an
adjacency matrix that when tensorized becomes ${\cal A}_{B_{S}l_{1}}$. In
effect, ${A}_{B_{S}l_{1}}$ (in matricized form) is the dual adjacency
matrix[125] of the hetero-functional adjacency matrix $A_{\rho}$. The presence
of this matrix multiplication obfuscates (i.e. creates a lack of lucidity) as
to whether one capability or another occurred when expressing the adjacency
tensor element ${\cal A}_{B_{S}l_{1}}(y_{1},y_{2},i_{1},i_{2})$. In contrast,
the matrix multiplication in Eq. 80 does not cause the same problem. When two
capabilities succeed one another, the information associated with their
physical feasibility in terms of intermediate buffers and their functional
feasibility in terms of intermediate operands remains intact. In other words,
given the sequence of capabilities $e_{w_{1}v_{1}}e_{w_{2}v_{2}}$, one can
immediately deduce the exchanged operands in $L_{\lambda}\subseteq L$ and the
intermediate buffer $b_{s}$. In the case of the exchanged operands, one simply
needs to intersect the output-operand set of the first process with the input
operand set of the second process. In the case of the intermediate buffer, one
checks if either or both of the resources are buffers. If not, then two
transportation processes followed one another and the intermediate buffer is
deduced by Eq. 95. In short, the hetero-functional adjacency matrix (or
tensor) unambiguously describes the sequence of two subject+verb+operand
sentences, whereas neither of the above interpretations of a multi-layer
network does.
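The dual-adjacency construction of Eq. 91 can be sketched in numpy. All tensor entries are hypothetical toy values:

```python
import numpy as np

# Hypothetical toy incidence tensors of size sigma(L) x sigma(B_S) x sigma(E_S).
nL, nB, nE = 2, 2, 2
M_neg = np.zeros((nL, nB, nE), dtype=int)
M_pos = np.zeros((nL, nB, nE), dtype=int)
M_neg[0, 0, 0] = 1   # capability 0 pulls operand 0 from buffer 0 ...
M_pos[1, 1, 0] = 1   # ... and injects operand 1 into buffer 1

# A_BSl1[y1, y2, i1, i2] = OR over psi of M_neg[i1, y1, psi] * M_pos[i2, y2, psi];
# the logical OR is realized here by thresholding the sum over psi.
A_BSl1 = (np.einsum('ayp,bzp->yzab', M_neg, M_pos) > 0).astype(int)
assert A_BSl1[0, 1, 0, 1] == 1   # operand 0 leaves buffer 0, arrives as operand 1 at buffer 1
```

The OR over capabilities is precisely what discards the identity of the responsible capability, illustrating the loss of lucidity discussed above.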
### V-C Network Descriptors
In light of the commonalities and differences between hetero-functional graphs
and (formal) multilayer networks, this section discusses the meaning of
network descriptors in the context of hetero-functional graphs. In this
regard, the hetero-functional adjacency matrix is an adjacency matrix like any
other. Consequently, network descriptors can be calculated straightforwardly.
Furthermore, network descriptors can be applied to subsets of the graph so as
to conduct a layer-by-layer analysis. Nevertheless, the fact that the nodes in a
hetero-functional graph represent whole-sentence capabilities means that
network descriptors have the potential to provide newfound meanings over
formal graphs based on exclusively formal elements.
#### V-C1 Degree Centrality
Degree centrality measures the number of edges attached to a vertex. Since a
hetero-functional graph is a directed graph, there is a need to distinguish
between the in-degree centrality, which measures the number of edges going
into a vertex, and the out-degree centrality, which measures the number of edges
going out of a vertex [100]. In the context of hetero-functional graph theory,
the in-degree centrality of a vertex counts the number of capabilities
that potentially precede the capability related to the vertex. The out-degree
centrality counts the number of capabilities that potentially succeed the
vertex’s capability. The higher the degree centrality of a capability, the
more connected that capability is to the other capabilities in the hetero-
functional graph. It is important to recognize that because transportation
capabilities are assigned nodes in a hetero-functional graph, they can become the
most central node. In contrast, the degree centrality of a formal graph could
not reach such a conclusion because the function of transportation is tied to
formal edges rather than formal nodes.
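In-degree and out-degree centrality follow directly from the hetero-functional adjacency matrix; the matrix below is a hypothetical 3-capability example:

```python
import numpy as np

# Hypothetical toy hetero-functional adjacency matrix; rows index predecessor
# capabilities, columns index successor capabilities.
A_rho = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]])

out_degree = A_rho.sum(axis=1)  # capabilities that can succeed each capability
in_degree = A_rho.sum(axis=0)   # capabilities that can precede each capability
```
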
#### V-C2 Closeness Centrality
Closeness centrality measures the average shortest path from one vertex to
every other reachable vertex in the graph. In a hetero-functional graph,
closeness centrality indicates how readily a disruption can propagate
through the graph across all different types of operands [100]. This
metric is especially valuable for the resilience studies of interdependent
systems, where the propagation of disruption across multiple disciplines is
often poorly understood.
#### V-C3 Eigenvector Centrality
Eigenvector centrality calculates the importance of a node relative to the
other nodes in the network [126]. It also includes the eigenvector centrality
of the node’s direct neighbors [14]. The eigenvector centrality is
specifically designed for the weighting of the in-degree of nodes in a
directed network. The _Katz centrality_ , on the other hand, provides an
approach to study the relative importance of nodes based on the out-degree
[14].
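A minimal power-iteration sketch of eigenvector centrality is shown below on a hypothetical hetero-functional adjacency matrix; iterating with $A^{T}$ accumulates a capability's centrality along its in-edges, matching the in-degree-weighted notion above:

```python
import numpy as np

# Hypothetical toy hetero-functional adjacency matrix.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

# Power iteration: repeatedly apply A^T and renormalize.
x = np.ones(A.shape[0])
for _ in range(200):
    x = A.T @ x
    x /= np.linalg.norm(x)
# x now approximates the leading eigenvector of A^T (in-degree-weighted
# eigenvector centrality); iterating with A instead yields the out-degree analogue.
```
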
#### V-C4 Clustering Coefficients
Clustering coefficients describe how strongly the nodes of a network cluster
together. This is performed by searching for “triangles” or “circles” of nodes
in a network. In a directed network, these circles can appear in multiple
distinct combinations of directed connections. Each of these combinations
needs to be measured and counted differently. Fagiolo discussed this taxonomy
and accompanying clustering coefficients [127]. These clustering coefficients
for directed networks can be directly applied to hetero-functional graphs and
show which capabilities are strongly clustered together. The definition of
layers in hetero-functional graphs allows for a consistent definition and
calculation of clustering coefficients within and across layers for different
types of systems. When investigating a system, the clustering coefficient may
show clusters of capabilities that were not yet recognized as heavily
interdependent. Such information can be used to revise control structures such
that clusters of capabilities are controlled by the same entity for
efficiency.
#### V-C5 Modularity
Modularity serves as a measure to study whether a network can be decomposed into
disjoint sets. In the hetero-functional graph theory literature, much has been
published about modularity, as it was a prime motivation towards the inception
of the theory [10, 128]. Hetero-functional graph theory introduces the concept
of the _Degree-of-Freedom-based Design Structure Matrix_ (or: the capability
DSM) that not only encompasses the hetero-functional adjacency matrix, but also
extends the concept to the other elements of hetero-functional graph theory:
the service model and the control model. The hetero-functional graph design
structure matrix has the ability to visualize the couplings between the
subsystems of an engineering system and to classify those interfaces. Note
that the capability DSM can also be applied to just the hetero-functional
adjacency matrix. Furthermore, the capability DSM applies to the concept of
layers in a hetero-functional graph. To study the interfaces between layers,
the capability DSM can adopt layers as subsystems and classify the interfaces
between the layers as mentioned previously. In conclusion, hetero-functional
graphs are described by flat adjacency matrices, regardless of the number of
layers in the analysis. Consequently, conventional graph theoretic network
descriptors can be applied. The main difference in definition between the
conventional graph theoretic application and the hetero-functional graph
theoretic application is the result of the difference in the _definition of
the fundamental modeling elements_ , the nodes and edges, in a hetero-
functional graph.
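For instance (a hypothetical six-node sketch, not the paper’s model), once layers are adopted as subsystems, the quality of that decomposition can be scored with conventional Newman modularity applied to the flat adjacency structure:

```python
# Hypothetical sketch: scoring a layer-based partition of a flat graph with
# Newman modularity via networkx.
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2),   # densely connected "layer A"
                  (3, 4), (4, 5), (3, 5),   # densely connected "layer B"
                  (2, 3)])                  # single interface edge

layers = [{0, 1, 2}, {3, 4, 5}]             # layers taken as subsystems
Q = modularity(G, layers)
print(round(Q, 3))
```

A value of Q well above zero indicates that the layer boundaries coincide with a genuine modular decomposition; a value near zero would suggest the layers are not meaningful modules of the flat graph.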
## VI Conclusions and Future Work
This paper has provided a tensor-based formulation of several of the most
important parts of hetero-functional graph theory. More specifically, it
discussed the system concept, showing it to be a generalization of formal
graphs and multi-commodity networks. It then discussed the hetero-functional
adjacency matrix and its tensor-based closed-form calculation, followed by
the hetero-functional incidence tensor and its relation back to the
hetero-functional adjacency matrix. The tensor-based formulation described
in this work ties HFGT more strongly to its ontological foundations in MBSE.
Finally, the tensor-based formulation facilitates an understanding of the
relationships between HFGT and multi-layer networks “despite its disparate
terminology and lack of consensus”. In so doing, this tensor-based treatment
is likely to advance Kivelä et al.’s goal of discerning the similarities and
differences between these mathematical models as precisely as possible.
## References
* [1] Anonymous-NAE, “Nae grand challenges for engineering,” National Academy of Engineering, Tech. Rep., 2019. [Online]. Available: http://www.engineeringchallenges.org/challenges.aspx
* [2] G.-J. Park and A. M. Farid, “Design of large engineering systems,” in _Design Engineering and Science_ , N. P. Suh, M. Cavique, and J. Foley, Eds. Berlin, Heidelberg: Springer, 2021, pp. 367–415. [Online]. Available: https://doi.org/10.1007/978-3-030-49232-8_14
* [3] O. L. De Weck, D. Roos, and C. L. Magee, _Engineering systems: meeting human needs in a complex technological world_. Cambridge, Mass.: MIT Press, 2011. [Online]. Available: http://www.knovel.com/knovel2/Toc.jsp?BookID=4611http://mitpress-ebooks.mit.edu/product/engineering-systems
* [4] SE Handbook Working Group, _Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities_. International Council on Systems Engineering (INCOSE), 2015.
* [5] T. Weilkiens, _Systems engineering with SysML/UML modeling, analysis, design_. Burlington, Mass.: Morgan Kaufmann, 2007.
* [6] S. Friedenthal, A. Moore, and R. Steiner, _A Practical Guide to SysML: The Systems Modeling Language_ , 2nd ed. Burlington, MA: Morgan Kaufmann, 2011.
* [7] W. C. Schoonenberg, I. S. Khayal, and A. M. Farid, _A Hetero-functional Graph Theory for Modeling Interdependent Smart City Infrastructure_. Berlin, Heidelberg: Springer, 2019. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-99301-0
* [8] M. Kivelä, A. Arenas, M. Barthelemy, J. P. Gleeson, Y. Moreno, and M. A. Porter, “Multilayer networks,” _Journal of complex networks_ , vol. 2, no. 3, pp. 203–271, 2014.
* [9] A. M. Farid and D. C. McFarlane, “A Development of Degrees of Freedom for Manufacturing Systems,” in _IMS’2006: 5th International Symposium on Intelligent Manufacturing Systems: Agents and Virtual Worlds_ , Sakarya, Turkey, 2006, pp. 1–6. [Online]. Available: http://engineering.dartmouth.edu/liines/resources/Conferences/IEM-C02.pdf
* [10] A. M. Farid, “Reconfigurability Measurement in Automated Manufacturing Systems,” Ph.D. Dissertation, University of Cambridge Engineering Department Institute for Manufacturing, 2007. [Online]. Available: http://engineering.dartmouth.edu/liines/resources/Theses/IEM-TP00.pdf
* [11] A. M. Farid and D. C. McFarlane, “Production degrees of freedom as manufacturing system reconfiguration potential measures,” _Proceedings of the Institution of Mechanical Engineers, Part B (Journal of Engineering Manufacture) – invited paper_ , vol. 222, no. B10, pp. 1301–1314, 2008. [Online]. Available: http://dx.doi.org/10.1243/09544054JEM1056
* [12] A. M. Farid, “Product Degrees of Freedom as Manufacturing System Reconfiguration Potential Measures,” _International Transactions on Systems Science and Applications – invited paper_ , vol. 4, no. 3, pp. 227–242, 2008. [Online]. Available: http://engineering.dartmouth.edu/liines/resources/Journals/IEM-J04.pdf
* [13] ——, “Static Resilience of Large Flexible Engineering Systems: Axiomatic Design Model and Measures,” _IEEE Systems Journal_ , vol. PP, no. 99, pp. 1–12, 2015. [Online]. Available: http://dx.doi.org/10.1109/JSYST.2015.2428284
* [14] M. De Domenico, A. Solé-Ribalta, E. Cozzo, M. Kivelä, Y. Moreno, M. A. Porter, S. Gómez, and A. Arenas, “Mathematical formulation of multilayer networks,” _Physical Review X_ , vol. 3, no. 4, p. 041022, 2013.
* [15] M. De Domenico, A. Solé-Ribalta, S. Gómez, and A. Arenas, “Navigability of interconnected networks under random failures,” _Proceedings of the National Academy of Sciences_ , vol. 111, no. 23, pp. 8351–8356, 2014.
* [16] O. Yağan and V. Gligor, “Analysis of complex contagions in random multiplex networks,” _Phys. Rev. E_ , vol. 86, p. 036103, Sep 2012. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevE.86.036103
* [17] V. Nicosia, G. Bianconi, V. Latora, and M. Barthelemy, “Growing multiplex networks,” _Physical review letters_ , vol. 111, no. 5, p. 058701, 2013.
* [18] G. Bianconi, “Statistical mechanics of multiplex networks: Entropy and overlap,” _Physical Review E_ , vol. 87, no. 6, p. 062806, 2013.
* [19] F. Battiston, V. Nicosia, and V. Latora, “Structural measures for multiplex networks,” _Physical Review E_ , vol. 89, no. 3, p. 032804, 2014.
* [20] E.-A. Horvát and K. A. Zweig, “One-mode projection of multiplex bipartite graphs,” in _Proceedings of the 2012 International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2012)_. IEEE Computer Society, 2012, pp. 599–606.
* [21] A. Sole-Ribalta, M. De Domenico, N. E. Kouvaris, A. Díaz-Guilera, S. Gómez, and A. Arenas, “Spectral properties of the laplacian of multiplex networks,” _Physical Review E_ , vol. 88, no. 3, p. 032807, 2013.
* [22] E. Cozzo, R. A. Banos, S. Meloni, and Y. Moreno, “Contact-based social contagion in multiplex networks,” _Physical Review E_ , vol. 88, no. 5, p. 050801, 2013.
* [23] L. Solá, M. Romance, R. Criado, J. Flores, A. García del Amo, and S. Boccaletti, “Eigenvector centrality of nodes in multiplex networks,” _Chaos: An Interdisciplinary Journal of Nonlinear Science_ , vol. 23, no. 3, p. 033131, 2013.
* [24] P. Pattison and S. Wasserman, “Logit models and logistic regressions for social networks: Ii. multivariate relations,” _British Journal of Mathematical and Statistical Psychology_ , vol. 52, no. 2, pp. 169–193, 1999.
* [25] M. Barigozzi, G. Fagiolo, and G. Mangioni, “Identifying the community structure of the international-trade multi-network,” _Physica A: statistical mechanics and its applications_ , vol. 390, no. 11, pp. 2051–2066, 2011.
* [26] D. Cai, Z. Shao, X. He, X. Yan, and J. Han, “Community mining from multi-relational networks,” in _European Conference on Principles of Data Mining and Knowledge Discovery_. Springer, 2005, pp. 445–452.
* [27] A. Harrer and A. Schmidt, “An approach for the blockmodeling in multi-relational networks,” in _Advances in Social Networks Analysis and Mining (ASONAM), 2012 IEEE/ACM International Conference on_. IEEE, 2012, pp. 591–598.
* [28] V. Stroele, J. Oliveira, G. Zimbrao, and J. M. Souza, “Mining and analyzing multirelational social networks,” in _Computational Science and Engineering, 2009. CSE’09. International Conference on_ , vol. 4. IEEE, 2009, pp. 711–716.
* [29] W. Li, A. Bashan, S. V. Buldyrev, H. E. Stanley, and S. Havlin, “Cascading failures in interdependent lattice networks: The critical role of the length of dependency links,” _Phys. Rev. Lett._ , vol. 108, p. 228702, May 2012. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevLett.108.228702
* [30] M. K.-P. Ng, X. Li, and Y. Ye, “Multirank: co-ranking for objects and relations in multi-relational data,” in _Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining_. ACM, 2011, pp. 1217–1225.
* [31] P. Bródka, K. Musial, and P. Kazienko, “A method for group extraction in complex social networks,” _Knowledge Management, Information Systems, E-Learning, and Sustainability Research_ , pp. 238–247, 2010.
* [32] P. Brodka, P. Stawiak, and P. Kazienko, “Shortest path discovery in the multi-layered social network,” in _Advances in Social Networks Analysis and Mining (ASONAM), 2011 International Conference on_. IEEE, 2011, pp. 497–501.
* [33] P. Bródka, P. Kazienko, K. Musiał, and K. Skibicki, “Analysis of neighbourhoods in multi-layered dynamic social networks,” _International Journal of Computational Intelligence Systems_ , vol. 5, no. 3, pp. 582–596, 2012.
* [34] M. Berlingerio, M. Coscia, F. Giannotti, A. Monreale, and D. Pedreschi, “The pursuit of hubbiness: analysis of hubs in large multidimensional networks,” _Journal of Computational Science_ , vol. 2, no. 3, pp. 223–237, 2011.
* [35] M. Berlingerio, F. Pinelli, and F. Calabrese, “Abacus: frequent pattern mining-based community discovery in multidimensional networks,” _Data Mining and Knowledge Discovery_ , vol. 27, no. 3, pp. 294–320, 2013.
* [36] M. Berlingerio, M. Coscia, F. Giannotti, A. Monreale, and D. Pedreschi, “Multidimensional networks: foundations of structural analysis,” _World Wide Web_ , vol. 16, no. 5-6, pp. 567–593, 2013.
* [37] L. Tang, X. Wang, and H. Liu, “Community detection via heterogeneous interaction analysis,” _Data mining and knowledge discovery_ , vol. 25, no. 1, pp. 1–33, 2012.
* [38] C. Barrett, K. Channakeshava, F. Huang, J. Kim, A. Marathe, M. V. Marathe, G. Pei, S. Saha, B. S. P. Subbiah, and A. K. S. Vullikanti, “Human initiated cascading failures in societal infrastructures,” _PLoS ONE_ , vol. 7, no. 10, pp. 1–20, 10 2012. [Online]. Available: http://dx.doi.org/10.1371%2Fjournal.pone.0045406
* [39] P. Kazienko, K. Musial, and T. Kajdanowicz, “Multidimensional social network in the social recommender system,” _IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans_ , vol. 41, no. 4, pp. 746–759, 2011.
* [40] M. Coscia, G. Rossetti, D. Pennacchioli, D. Ceccarelli, and F. Giannotti, ““you know because i know”: A multidimensional network approach to human resources problem,” in _Advances in Social Networks Analysis and Mining (ASONAM), 2013 IEEE/ACM International Conference on_. IEEE, 2013, pp. 434–441.
* [41] P. Kazienko, K. Musial, E. Kukla, T. Kajdanowicz, and P. Bródka, “Multidimensional social network: model and analysis,” _Computational Collective Intelligence. Technologies and Applications_ , pp. 378–387, 2011.
* [42] P. J. Mucha, T. Richardson, K. Macon, M. A. Porter, and J.-P. Onnela, “Community structure in time-dependent, multiscale, and multiplex networks,” _science_ , vol. 328, no. 5980, pp. 876–878, 2010.
* [43] V. Carchiolo, A. Longheu, M. Malgeri, and G. Mangioni, “Communities unfolding in multislice networks,” in _Complex Networks_. Springer, 2011, pp. 187–195.
* [44] D. S. Bassett, M. A. Porter, N. F. Wymbs, S. T. Grafton, J. M. Carlson, and P. J. Mucha, “Robust detection of dynamic community structure in networks,” _Chaos: An Interdisciplinary Journal of Nonlinear Science_ , vol. 23, no. 1, p. 013142, 2013.
* [45] D. Irving and F. Sorrentino, “Synchronization of dynamical hypernetworks: Dimensionality reduction through simultaneous block-diagonalization of matrices,” _Physical Review E_ , vol. 86, no. 5, p. 056102, 2012.
* [46] F. Sorrentino, “Synchronization of hypernetworks of coupled dynamical systems,” _New Journal of Physics_ , vol. 14, no. 3, p. 033035, 2012.
* [47] S. Funk and V. A. Jansen, “Interacting epidemics on overlay networks,” _Physical Review E_ , vol. 81, no. 3, p. 036118, 2010.
* [48] V. Marceau, P.-A. Noël, L. Hébert-Dufresne, A. Allard, and L. J. Dubé, “Modeling the dynamical interaction between epidemics on overlay networks,” _Physical Review E_ , vol. 84, no. 2, p. 026105, 2011.
* [49] X. Wei, N. Valler, B. A. Prakash, I. Neamtiu, M. Faloutsos, and C. Faloutsos, “Competing memes propagation on networks: a case study of composite networks,” _ACM SIGCOMM Computer Communication Review_ , vol. 42, no. 5, pp. 5–12, 2012.
* [50] M. Rocklin and A. Pinar, “On clustering on graphs with multiple edge types,” _Internet Mathematics_ , vol. 9, no. 1, pp. 82–112, 2013.
* [51] J. Hindes, S. Singh, C. R. Myers, and D. J. Schneider, “Epidemic fronts in complex networks with metapopulation structure,” _Physical Review E_ , vol. 88, no. 1, p. 012809, 2013.
* [52] G. Baxter, S. Dorogovtsev, A. Goltsev, and J. Mendes, “Avalanche collapse of interdependent networks,” _Physical review letters_ , vol. 109, no. 24, p. 248701, 2012.
* [53] J. Gómez-Gardeñes, I. Reinares, A. Arenas, and L. M. Floría, “Evolution of cooperation in multiplex networks,” _Scientific reports_ , vol. 2, 2012.
* [54] M. Barigozzi, G. Fagiolo, and D. Garlaschelli, “Multinetwork of international trade: A commodity-specific analysis,” _Phys. Rev. E_ , vol. 81, p. 046104, Apr 2010. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevE.81.046104
* [55] D. Cellai, E. López, J. Zhou, J. P. Gleeson, and G. Bianconi, “Percolation in multiplex networks with overlap,” _Phys. Rev. E_ , vol. 88, p. 052811, Nov 2013. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevE.88.052811
* [56] C. D. Brummitt, K.-M. Lee, and K.-I. Goh, “Multiplexity-facilitated cascades in networks,” _Phys. Rev. E_ , vol. 85, p. 045102, Apr 2012. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevE.85.045102
* [57] P. J. Mucha, T. Richardson, K. Macon, M. A. Porter, and J.-P. Onnela, “Community structure in time-dependent, multiscale, and multiplex networks,” _Science_ , vol. 328, no. 5980, pp. 876–878, 2010. [Online]. Available: http://science.sciencemag.org/content/328/5980/876
* [58] S. Wasserman and K. Faust, _Social network analysis: Methods and applications_. Cambridge university press, 1994, vol. 8.
* [59] B. Min and K. Goh, “Layer-crossing overhead and information spreading in multiplex social networks,” _seed_ , vol. 21, no. T22, p. T12, 2013.
* [60] K.-M. Lee, J. Y. Kim, W.-k. Cho, K.-I. Goh, and I. Kim, “Correlated multiplexity and connectivity of multiplex random networks,” _New Journal of Physics_ , vol. 14, no. 3, p. 033027, 2012.
* [61] B. Min, S. Do Yi, K.-M. Lee, and K.-I. Goh, “Network robustness of multiplex networks with interlayer degree correlations,” _Physical Review E_ , vol. 89, no. 4, p. 042811, 2014.
* [62] E. Cozzo, R. A. Banos, S. Meloni, and Y. Moreno, “Contact-based social contagion in multiplex networks,” _Physical Review E_ , vol. 88, no. 5, p. 050801, 2013.
* [63] A. Allard, P.-A. Noël, L. J. Dubé, and B. Pourbohloul, “Heterogeneous bond percolation on multitype networks with an application to epidemic dynamics,” _Phys. Rev. E_ , vol. 79, p. 036113, Mar 2009. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevE.79.036113
* [64] A. Bashan, Y. Berezin, S. V. Buldyrev, and S. Havlin, “The extreme vulnerability of interdependent spatially embedded networks,” _Nat Phys_ , vol. 9, no. 10, pp. 667–672, 10 2013. [Online]. Available: http://dx.doi.org/10.1038/nphys2727
* [65] S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley, and S. Havlin, “Catastrophic cascade of failures in interdependent networks,” _Nature_ , vol. 464, no. 7291, p. 1025, 2010.
* [66] A. Cardillo, M. Zanin, J. Gómez-Gardenes, M. Romance, A. J. G. del Amo, and S. Boccaletti, “Modeling the multi-layer nature of the european air transport network: Resilience and passengers re-scheduling under random failures,” _arXiv preprint arXiv:1211.6839_ , 2012.
* [67] M. Dickison, S. Havlin, and H. E. Stanley, “Epidemics on interconnected networks,” _Physical Review E_ , vol. 85, no. 6, p. 066109, 2012.
* [68] J. F. Donges, H. C. Schultz, N. Marwan, Y. Zou, and J. Kurths, “Investigating the topology of interacting networks,” _The European Physical Journal B_ , vol. 84, no. 4, pp. 635–651, 2011.
* [69] E. Lazega, M.-T. Jourda, L. Mounier, and R. Stofer, “Catching up with big fish in the big pond? multi-level network analysis through linked design,” _Social Networks_ , vol. 30, no. 2, pp. 159–176, 2008.
* [70] E. A. Leicht and R. M. D’Souza, “Percolation on interacting networks,” _ArXiv e-prints_ , Jul. 2009.
* [71] V. Louzada, N. Araújo, J. Andrade Jr, and H. Herrmann, “Breathing synchronization in interconnected networks,” _arXiv preprint arXiv:1304.5177_ , 2013.
* [72] J. Martin-Hernandez, H. Wang, P. Van Mieghem, and G. D’Agostino, “On synchronization of interdependent networks,” _arXiv preprint arXiv:1304.4731_ , 2013.
* [73] R. Parshani, S. V. Buldyrev, and S. Havlin, “Interdependent networks: Reducing the coupling strength leads to a change from a first to second order percolation transition,” _Phys. Rev. Lett._ , vol. 105, p. 048701, Jul 2010. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevLett.105.048701
* [74] F. D. Sahneh, C. Scoglio, and F. N. Chowdhury, “Effect of coupling on the epidemic threshold in interconnected complex networks: A spectral analysis,” in _American Control Conference (ACC), 2013_. IEEE, 2013, pp. 2307–2312.
* [75] A. Saumell-Mendiola, M. A. Serrano, and M. Boguñá, “Epidemic spreading on interconnected networks,” _Phys. Rev. E_ , vol. 86, p. 026106, Aug 2012. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevE.86.026106
* [76] Y. Sun, Y. Yu, and J. Han, “Ranking-based clustering of heterogeneous information networks with star network schema,” in _Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining_. ACM, 2009, pp. 797–806.
* [77] A. Vazquez, “Spreading dynamics on heterogeneous populations: multitype network approach,” _Physical Review E_ , vol. 74, no. 6, p. 066114, 2006.
* [78] C. Wang, Z. Lu, and Y. Qiao, “A consideration of the wind power benefits in day-ahead scheduling of wind-coal intensive power systems,” _IEEE Trans. Power Syst._ , vol. 28, no. 1, pp. 236–245, Feb 2013.
* [79] J. Zhou, L. Xiang, and Z. Liu, “Global synchronization in general complex delayed dynamical networks and its applications,” _Physica A: Statistical Mechanics and its Applications_ , vol. 385, no. 2, pp. 729–742, Nov. 2007. [Online]. Available: http://linkinghub.elsevier.com/retrieve/pii/S0378437107007637
* [80] D. Zhou, J. Gao, H. E. Stanley, and S. Havlin, “Percolation of partially interdependent scale-free networks,” _Physical Review E_ , vol. 87, no. 5, p. 052812, 2013.
* [81] L. Gao, J. Yang, H. Zhang, B. Zhang, and D. Qin, “Flowinfra: A fault-resilient scalable infrastructure for network-wide flow level measurement,” _2011 13th Asia-Pacific Network Operations and Management Symposium_ , Sep 2011.
* [82] K.-M. Lee, J. Y. Kim, W.-k. Cho, K.-I. Goh, and I. Kim, “Correlated multiplexity and connectivity of multiplex random networks,” _New Journal of Physics_ , vol. 14, no. 3, p. 033027, 2012.
* [83] E. Cozzo, A. Arenas, and Y. Moreno, “Stability of boolean multilevel networks,” _Phys. Rev. E_ , vol. 86, p. 036115, Sep 2012. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevE.86.036115
* [84] R. Criado, J. Flores, A. García del Amo, J. Gómez-Gardeñes, and M. Romance, “A mathematical model for networks with structures in the mesoscale,” _International Journal of Computer Mathematics_ , vol. 89, no. 3, pp. 291–309, 2012.
* [85] Y. Xu and W. Liu, “Novel Multiagent Based Load Restoration Algorithm for Microgrids,” _Smart Grid, IEEE Transactions on_ , vol. 2, no. 1, pp. 152–161, 2011.
* [86] O. Yagan, D. Qian, J. Zhang, and D. Cochran, “Conjoining speeds up information diffusion in overlaying social-physical networks,” _IEEE Journal on Selected Areas in Communications_ , vol. 31, no. 6, pp. 1038–1048, 2013.
* [87] K. M. Carley and V. Hill, “Structural change and learning within organizations,” _Dynamics of organizations: Computational modeling and organizational theories_ , pp. 63–92, 2001.
* [88] K. M. Carley, J. Diesner, J. Reminga, and M. Tsvetovat, “Toward an interoperable dynamic network analysis toolkit,” _Decision Support Systems_ , vol. 43, no. 4, pp. 1324–1347, 2007.
* [89] D. Davis, R. Lichtenwalter, and N. V. Chawla, “Multi-relational link prediction in heterogeneous information networks,” in _Advances in Social Networks Analysis and Mining (ASONAM), 2011 International Conference on_. IEEE, 2011, pp. 281–288.
* [90] Y. Sun, J. Han, X. Yan, P. S. Yu, and T. Wu, “Pathsim: Meta path-based top-k similarity search in heterogeneous information networks,” _Proceedings of the VLDB Endowment_ , vol. 4, no. 11, pp. 992–1003, 2011.
* [91] Y. Sun, “Mining heterogeneous information networks,” Ph.D. dissertation, University of Illinois at Urbana-Champaign, 2012.
* [92] W.-Q. Sun, C.-M. Wang, P. Song, and Y. Zhang, “Flexible load shedding strategy considering real-time dynamic thermal line rating,” _IET Generation, Transmission & Distribution_, vol. 7, no. 2, pp. 130–137, Feb 2013.
* [93] M. Tsvetovat, J. Reminga, and K. M. Carley, “Dynetml: Interchange format for rich social network data,” _SSRN_ , 2004. [Online]. Available: http://dx.doi.org/10.2139/ssrn.2729286
* [94] A. M. Farid, “An engineering systems introduction to axiomatic design,” in _Axiomatic Design in Large Systems: Complex Products, Buildings & Manufacturing Systems_, A. M. Farid and N. P. Suh, Eds. Berlin, Heidelberg: Springer, 2016, ch. 1, pp. 1–47. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-32388-6
* [95] G. Guizzardi, “On ontology, ontologies, conceptualizations, modeling languages, and (meta) models,” _Frontiers in artificial intelligence and applications_ , vol. 155, p. 18, 2007.
* [96] ——, _Ontological foundations for structural conceptual models_. CTIT, Centre for Telematics and Information Technology, 2005.
* [97] E. Crawley, B. Cameron, and D. Selva, _System Architecture: Strategy and Product Development for Complex Systems_. Upper Saddle River, N.J.: Prentice Hall Press, 2015.
* [98] A.-L. Barabási _et al._ , _Network science_. Cambridge university press, 2016.
* [99] M. Newman, _Networks: An Introduction_. Oxford, United Kingdom: Oxford University Press, 2009. [Online]. Available: http://books.google.ae/books?id=LrFaU4XCsUoC
* [100] D. Thompson, W. C. Schoonenberg, and A. M. Farid, “A Hetero-functional Graph Analysis of Electric Power System Structural Resilience,” in _IEEE Innovative Smart Grid Technologies Conference North America_ , Washington, DC, United states, 2020, pp. 1–5. [Online]. Available: http://dx.doi.org/10.1109/ISGT45199.2020.9087732
* [101] ——, “A Hetero-functional Graph Resilience Analysis of the Future American Electric Power System,” _IEEE Access_ , vol. 9, pp. 68837–68848, 2021. [Online]. Available: https://doi.org/10.1109/ACCESS.2021.3077856
* [102] D. M. Buede, _The engineering design of systems: models and methods_ , 2nd ed. Hoboken, N.J.: John Wiley & Sons, 2009.
* [103] A. Kossiakoff, W. N. Sweet, and Knovel (Firm), _Systems engineering principles and practice_. Hoboken, N.J.: Wiley-Interscience, 2003. [Online]. Available: http://www.knovel.com/knovel2/Toc.jsp?BookID=1430
* [104] A. M. Farid and N. P. Suh, _Axiomatic Design in Large Systems: Complex Products, Buildings and Manufacturing Systems_. Berlin, Heidelberg: Springer, 2016. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-32388-6
* [105] D. Hoyle, _ISO 9000 pocket guide_. Oxford ; Boston: Butterworth-Heinemann, 1998. [Online]. Available: http://www.loc.gov/catdir/toc/els033/99163006.html
* [106] A. M. Farid, “An Axiomatic Design Approach to Non-Assembled Production Path Enumeration in Reconfigurable Manufacturing Systems,” in _2013 IEEE International Conference on Systems Man and Cybernetics_ , Manchester, UK, 2013, pp. 1–8. [Online]. Available: http://dx.doi.org/10.1109/SMC.2013.659
* [107] A. M. Farid and L. Ribeiro, “An Axiomatic Design of a Multi-Agent Reconfigurable Mechatronic System Architecture,” _IEEE Transactions on Industrial Informatics_ , vol. 11, no. 5, pp. 1142–1155, 2015. [Online]. Available: http://dx.doi.org/10.1109/TII.2015.2470528
* [108] A. M. Farid, “A Hybrid Dynamic System Model for Multi-Modal Transportation Electrification,” _IEEE Transactions on Control System Technology_ , vol. PP, no. 99, pp. 1–12, 2016. [Online]. Available: http://dx.doi.org/10.1109/TCST.2016.2579602
* [109] ——, “Electrified transportation system performance: Conventional vs. online electric vehicles,” in _The On-line Electric Vehicle: Wireless Electric Ground Transportation Systems_ , N. P. Suh and D. H. Cho, Eds. Berlin, Heidelberg: Springer, 2017, ch. 20, pp. 279–313. [Online]. Available: http://engineering.dartmouth.edu/liines/resources/Books/TES-BC05.pdf
* [110] T. C. Hu, “Multi-commodity network flows,” _Operations research_ , vol. 11, no. 3, pp. 344–360, 1963.
* [111] H. Okamura, “Multicommodity flows in graphs,” _Discrete Applied Mathematics_ , vol. 6, no. 1, pp. 55–62, 1983.
* [112] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin, _Network flows: Theory, Algorithms, and Applications_. Cambridge, Mass.: Alfred P. Sloan School of Management, Massachusetts …, 1988.
* [113] D. S. Callaway, M. E. J. Newman, S. H. Strogatz, and D. J. Watts, “Network robustness and fragility: Percolation on random graphs,” _Phys. Rev. Lett._ , vol. 85, pp. 5468–5471, Dec 2000. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevLett.85.5468
* [114] M. E. Newman, “The structure and function of complex networks,” _SIAM review_ , vol. 45, no. 2, pp. 167–256, 2003.
* [115] P. Holme and J. Saramäki, “Temporal networks,” _Physics reports_ , vol. 519, no. 3, pp. 97–125, 2012.
* [116] A. M. Farid, “Static Resilience of Large Flexible Engineering Systems: Part I – Axiomatic Design Model,” in _4th International Engineering Systems Symposium_. Hoboken, N.J.: Stevens Institute of Technology, 2014, pp. 1–8. [Online]. Available: http://engineering.dartmouth.edu/liines/resources/Conferences/IES-C37.pdf
* [117] ——, “Multi-Agent System Design Principles for Resilient Coordination and Control of Future Power Systems,” _Intelligent Industrial Systems_ , vol. 1, no. 3, pp. 255–269, 2015. [Online]. Available: http://dx.doi.org/10.1007/s40903-015-0013-x
* [118] A. Viswanath, E. E. S. Baca, and A. M. Farid, “An Axiomatic Design Approach to Passenger Itinerary Enumeration in Reconfigurable Transportation Systems,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 15, no. 3, pp. 915 – 924, 2014. [Online]. Available: http://dx.doi.org/10.1109/TITS.2013.2293340
* [119] W. C. Schoonenberg and A. M. Farid, “A Dynamic Model for the Energy Management of Microgrid-Enabled Production Systems,” _Journal of Cleaner Production_ , vol. 1, no. 1, pp. 1–10, 2017. [Online]. Available: https://dx.doi.org/10.1016/j.jclepro.2017.06.119
* [120] D. Thompson and A. M. Farid, “Reconciling formal, multi-layer, and hetero-functional graphs with the hetero-functional incidence tensor,” in _IEEE Systems of Systems Engineering Conference_ , Rochester, NY, 2022, pp. 1–6.
* [121] A. M. Farid, “Measures of Reconfigurability and Its Key Characteristics in Intelligent Manufacturing Systems,” _Journal of Intelligent Manufacturing_ , vol. 28, no. 2, pp. 353–369, 2017. [Online]. Available: http://dx.doi.org/10.1007/s10845-014-0983-7
* [122] W. C. Schoonenberg and A. M. Farid, “Hetero-functional Network Minimum Cost Flow Optimization,” _Sustainable Energy Grids and Networks (in press)_ , vol. 31, no. 100749, pp. 1–18, 2022. [Online]. Available: https://arxiv.org/abs/2104.00504
* [123] D. Rowell and D. N. Wormley, _System dynamics: an introduction_. Upper Saddle River, NJ: Prentice Hall, 1997.
* [124] D. Karnopp, D. L. Margolis, and R. C. Rosenberg, _System dynamics: a unified approach_ , 2nd ed. New York: Wiley, 1990. [Online]. Available: http://www.loc.gov/catdir/enhancements/fy0650/90012110-t.html
* [125] Anonymous, “Dual graph,” Wikipedia, Tech. Rep., 2021. [Online]. Available: https://en.wikipedia.org/wiki/Dual_graph
* [126] P. Bonacich, “Some unique properties of eigenvector centrality,” _Social networks_ , vol. 29, no. 4, pp. 555–564, 2007.
* [127] G. Fagiolo, “Clustering in complex directed networks,” _Physical Review E_ , vol. 76, no. 2, p. 026107, 2007.
* [128] A. M. Farid, “Facilitating ease of system reconfiguration through measures of manufacturing modularity,” _Proceedings of the Institution of Mechanical Engineers, Part B (Journal of Engineering Manufacture) – invited paper_ , vol. 222, no. B10, pp. 1275–1288, 2008. [Online]. Available: http://dx.doi.org/10.1243/09544054JEM1055
* [129] Anonymous, “Cartesian product,” Wikipedia, Tech. Rep., 2021. [Online]. Available: https://en.wikipedia.org/wiki/Cartesian_product#:~:text=In%20mathematics%2C%20specifically%20set%20theory,and%20a%20set%20of%20columns.
* [130] M. J. Fischer and A. R. Meyer, “Boolean matrix multiplication and transitive closure,” in _Switching and Automata Theory, 1971., 12th Annual Symposium on_. IEEE, 1971, pp. 129–131.
* [131] Anonymous, “Kronecker delta,” Wikipedia, Tech. Rep., 2021. [Online]. Available: https://en.wikipedia.org/wiki/Kronecker_delta
* [132] G. Golub and C. Van Loan, _Matrix Computations_. Johns Hopkins University Press, 1996.
* [133] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” _SIAM review_ , vol. 51, no. 3, pp. 455–500, 2009.
* [134] T. G. Kolda, “Multilinear operators for higher-order decompositions.” Sandia National Laboratories, Tech. Rep., 2006.
* [135] R. Pan, “Tensor transpose and its properties,” _arXiv preprint arXiv:1411.1503_ , 2014.
### -A Ontological Science Definitions
The formal definitions of soundness, completeness, lucidity, and laconicity
rely on “Ullman’s Triangle” in Figure 6.
Fig. 6: Two Versions of Ullman’s Triangle. On the left is the relationship
between reality, the understanding of reality, and the description of reality.
On the right the instantiated version of the ontological definition[95].
###### Definition 24 (Soundness[96]):
A language ${\cal L}$ is sound w.r.t. a domain conceptualization ${\cal C}$
iff every modeling primitive in the language (${\cal M}$) has an
interpretation in the domain abstraction ${\cal A}$. (The absence of soundness
results in the excess of modeling primitives w.r.t. the domain abstractions as
shown in Figure 2.c on lucidity.)
###### Definition 25 (Completeness[96]):
A language ${\cal L}$ is complete w.r.t. a domain conceptualization ${\cal C}$
iff every concept in the domain abstraction ${\cal A}$ of that domain is
represented in a modeling primitive of that language. (The absence of
completeness results in one or more concepts in the domain abstraction not
being represented by a modeling primitive, as shown in Figure 2.d on
laconicity.)
###### Definition 26 (Lucidity[96]):
A language ${\cal L}$ is lucid w.r.t. a domain conceptualization ${\cal C}$
iff every modeling primitive in the language represents at most one domain
concept in abstraction ${\cal A}$. (The absence of lucidity results in the
overload of a modeling primitive w.r.t. two or more domain concepts as shown
in Figure 2.a on soundness.)
###### Definition 27 (Laconicity[96]):
A language ${\cal L}$ is laconic w.r.t. a domain conceptualization ${\cal C}$
iff every concept in the abstraction ${\cal A}$ of that domain is represented
at most once in the model of that language. (The absence of laconicity results
in the redundancy of modeling primitives w.r.t the domain abstractions as
shown in Figure 2.b on completeness.)
### -B Notation Conventions
Several notation conventions are used throughout this work:
* •
All sets are indicated by a capital letter, e.g. $P$ – the set of processes.
* •
All elements within a set are indicated by a lower case letter. e.g. $p\in P$.
* •
A subscript number indicates the position in an ordered set. e.g. $p_{i}\in
P$.
* •
The $i^{th}$ elementary basis vector of size n is denoted by $e_{i}^{n}$.
* •
A matrix of ones of size $m\times n$ is denoted by $\mathds{1}^{m\times n}$.
* •
A matrix of zeros of size $m\times n$ is denoted by $\mathbf{0}^{m\times n}$.
* •
With the exception of elementary basis vectors, all vectors and matrices are
indicated with a capital letter. e.g. $J_{H}$.
* •
All tensors are indicated with capital letters in calligraphic script. e.g.
$\cal{J}_{H}$.
* •
All elements in vectors, matrices, and tensors are indicated with indices
within parentheses. e.g. $J_{S}(w,v)$.
* •
A(:,i) denotes the $i^{th}$ column of A or equivalently the $i^{th}$ mode-1
fiber. The : indicates all elements of the vector.
* •
A(i,:) denotes the $i^{th}$ row of A or equivalently the $i^{th}$ mode-2
fiber.
* •
A(i,j,:) denotes the $i,j$ mode-3 fiber of $\cal A$.
* •
Given the presence of Booleans, real numbers and their operators, this work
refrains from the use of Einstein’s (shorthand) tensor notation where the
sigma-notation $\sum$ is eliminated.
### -C Hetero-functional Graph Theory Definitions
###### Definition 28 (Transformation Resource[7]):
A resource $r\in R$ is a transformation resource $m\in M$ iff it is capable of
one or more transformation processes on one or more operands and it exists at
a unique location in space.
###### Definition 29 (Independent Buffer[7]):
A resource $r\in R$ is an independent buffer $b\in B$ iff it is capable of
storing one or more operands and is not able to transform them or transport
them to another location and it exists at a unique location in space.
###### Definition 30 (Transportation Resource[7]):
A resource $r\in R$ is a transportation resource $h\in H$ iff it is capable of
transporting one or more operands between an origin and a distinct
destination, without transforming these operands.
###### Definition 31 (Buffer[7]):
A resource $r\in R$ is a buffer $b_{s}\in B_{S}$ iff it is capable of storing
one or more operands at a unique location in space. $B_{S}=M\cup B$.
###### Definition 32 (Transformation Process[7]):
A process is a transformation process $p_{\mu j}\in P_{\mu}$ iff it is capable
of transforming one or more properties of a set of operands into a distinct
set of output properties in place. Its syntax is:
$\\{\mbox{transitive verb, operands}\\}\rightarrow\\{\mbox{outputs}\\}$ (93)
###### Definition 33 (Refined Transportation Process[7]):
A process is a refined transportation process $p_{\bar{\eta}\varphi}\in
P_{\bar{\eta}}$ iff it is capable of transporting one or more operands between
an origin buffer $b_{sy_{1}}\in B_{S}$ to a destination buffer $b_{sy_{2}}\in
B_{S}$ while it is realizing holding process $p_{\gamma g}\in P_{\gamma}$.
Its syntax is:
$\displaystyle\\{\mbox{transport, operands, origin, destination, while
transitive verb}\\}\rightarrow$ $\displaystyle\\{\mbox{outputs,
destination}\\}$ (94)
###### Definition 34 (Transportation Process[7]):
A process is a transportation process $p_{\eta u}\in P_{\eta}$ iff it is
capable of transporting one or more operands between an origin buffer
$b_{sy_{1}}\in B_{S}$ to a destination buffer $b_{sy_{2}}\in B_{S}$ according
to the following convention of indices [9, 10, 11, 12, 13] (note that a
“storage process” is merely a transportation process with the same origin and
destination):
$u=\sigma(B_{S})(y_{1}-1)+y_{2}$ (95)
Its syntax is:
$\displaystyle\\{\mbox{transport, operands,}$
$\displaystyle\mbox{origin,destination}\\}\rightarrow\\{\mbox{outputs,destination}\\}$
(96)
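The index convention of Eq. (95) can be sketched as a pair of Python helpers. This is illustrative code, not part of the cited works; the function names are our own.

```python
def transport_index(y1, y2, n_buffers):
    """Map a 1-based (origin, destination) buffer pair to the
    transportation process index u = sigma(B_S)*(y1 - 1) + y2 (Eq. 95)."""
    return n_buffers * (y1 - 1) + y2

def transport_pair(u, n_buffers):
    """Invert Eq. 95: recover the 1-based (origin, destination) pair."""
    y1 = (u - 1) // n_buffers + 1
    y2 = (u - 1) % n_buffers + 1
    return y1, y2
```

A "storage process" then simply has `y1 == y2`.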
###### Definition 35 (Holding Process[7]):
A process is a holding process $p_{\gamma g}\in P_{\gamma}$ iff it holds one
or more operands during the transportation from one buffer to another. In
order to maintain the independence axiom and the mutual exclusivity of the
system processes (Theorem 3), holding processes are specified so as to
distinguish between transportation processes that:
* •
Have _different_ operands,
* •
Hold a given operand in a _given way_ , or
* •
Change the _state_ of the operand.
###### Theorem 3 (Mutual Exclusivity of System Processes[7]):
A lucid representation of system processes as a domain conceptualization
distinguishes between two system processes as modeling primitives with
different sets of inputs and outputs.
###### Definition 36 (Capability[9, 10, 11, 12, 13, 94]):
An action $e_{wv}\in{\cal E}_{S}$ (in the SysML sense) is defined by a system
process $p_{w}\in P$ being executed by a resource $r_{v}\in R$. It constitutes
a subject + verb + operand sentence of the form: “Resource $r_{v}$ does
process $p_{w}$”.
###### Definition 37 (The Negative Transformation Process-Operand Incidence
Matrix $M_{LP_{\mu}}^{-}$):
A binary incidence matrix
$M_{LP_{\mu}}^{-}\in\\{0,1\\}^{\sigma(L)\times\sigma(P_{\mu})}$ whose element
$M_{LP}^{-}(i,j)=1$ when the transformation system process $p_{\mu_{j}}\in P$
pulls operand $l_{i}\in L$ as an input.
###### Definition 38 (The Negative Refined Transportation Process-Operand
Incidence Matrix $M_{LP_{\bar{\eta}}}^{-}$):
A binary incidence matrix
$M_{LP_{\bar{\eta}}}^{-}\in\\{0,1\\}^{\sigma(L)\times\sigma(P_{\bar{\eta}})}$
whose element $M_{LP}^{-}(i,\varphi)=1$ when the refined transportation
process $p_{\varphi}\in P_{\bar{\eta}}$ pulls operand $l_{i}\in L$ as an
input. It is calculated directly from the negative holding process-operand
incidence matrix $M_{LP_{\gamma}}^{-}$.
$\displaystyle M_{LP_{\bar{\eta}}}^{-}(i,\varphi)$
$\displaystyle=\sum_{u=1}^{\sigma(P_{\eta})}M_{LP_{\gamma}}^{-}(i,g)\cdot\delta_{u}\qquad\forall
i\in\\{1,\ldots\sigma(L)\\},\;g\in\\{1,\ldots,\sigma(P_{\gamma})\\},u\in\\{1,\ldots,\sigma(P_{\eta})\\},\varphi=\sigma(P_{\eta})(g-1)+u$
(97) $\displaystyle M_{LP_{\bar{\eta}}}^{-}(i,\varphi)$
$\displaystyle=\sum_{y_{1}=1}^{\sigma(B_{S})}\sum_{y_{2}=1}^{\sigma(B_{S})}M_{LP_{\gamma}}^{-}(i,g)\cdot\delta_{y_{1}}\cdot\delta_{y_{2}}\quad\forall
y_{1},y_{2}\in\\{1,\ldots,\sigma(B_{S})\\},\varphi=\sigma^{2}(B_{S})(g-1)+\sigma(B_{S})(y_{1}-1)+y_{2}$
(98) $\displaystyle M_{LP_{\bar{\eta}}}^{-}$
$\displaystyle=\sum_{u=1}^{\sigma(P_{\eta})}M_{LP_{\gamma}}^{-}\otimes
e_{u}^{\sigma(P_{\eta})T}\quad\;\;=\sum_{y_{1}=1}^{\sigma(B_{S})}\sum_{y_{2}=1}^{\sigma(B_{S})}M_{LP_{\gamma}}^{-}\otimes\left(e_{y_{1}}^{\sigma(B_{S})}\otimes
e_{y_{2}}^{\sigma(B_{S})}\right)^{T}$ (99)
$\displaystyle=M_{LP_{\gamma}}^{-}\otimes\mathds{1}^{\sigma(P_{\eta})T}\qquad\quad=M_{LP_{\gamma}}^{-}\otimes\left(\mathds{1}^{\sigma(B_{S})}\otimes\mathds{1}^{\sigma(B_{S})}\right)^{T}$
(100)
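The closed-form expansion in Eq. (100) says that the refined incidence matrix simply replicates each holding-process column across all $\sigma(P_{\eta})$ transportation processes. A small NumPy sketch (toy values, not from the source):

```python
import numpy as np

# Toy holding process-operand incidence matrix: 2 operands x 2 holding processes
M_gamma = np.array([[1, 0],
                    [0, 1]])
n_eta = 3  # sigma(P_eta): number of transportation processes

# Eq. (100): M_gamma kron (row of ones) replicates each column n_eta times
M_eta_bar = np.kron(M_gamma, np.ones((1, n_eta), dtype=int))
```

The result has shape $\sigma(L)\times\sigma(P_{\gamma})\sigma(P_{\eta})$, consistent with the index convention $\varphi=\sigma(P_{\eta})(g-1)+u$ of Eq. (97).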
###### Definition 39 (The Negative Holding Process-Operand Incidence Matrix
$M_{LP_{\gamma}}^{-}$):
A binary incidence matrix
$M_{LP_{\gamma}}^{-}\in\\{0,1\\}^{\sigma(L)\times\sigma(P_{\gamma})}$ whose
element $M_{LP}^{-}(i,g)=1$ when the holding process $p_{g}\in P_{\gamma}$
pulls operand $l_{i}\in L$ as an input.
###### Definition 40 (The Positive Transformation Process-Operand Incidence
Matrix $M_{LP_{\mu}}^{+}$):
A binary incidence matrix
$M_{LP_{\mu}}^{+}\in\\{0,1\\}^{\sigma(L)\times\sigma(P_{\mu})}$ whose element
$M_{LP}^{+}(i,j)=1$ when the transformation system process $p_{\mu_{j}}\in P$
ejects operand $l_{i}\in L$ as an output.
###### Definition 41 (The Positive Refined Transportation Process-Operand
Incidence Matrix $M_{LP_{\bar{\eta}}}^{+}$):
A binary incidence matrix
$M_{LP_{\bar{\eta}}}^{+}\in\\{0,1\\}^{\sigma(L)\times\sigma(P_{\bar{\eta}})}$
whose element $M_{LP}^{+}(i,\varphi)=1$ when the refined transportation
process $p_{\varphi}\in P_{\bar{\eta}}$ ejects operand $l_{i}\in L$ as an
output. It is calculated directly from the positive holding process-operand
incidence matrix $M_{LP_{\gamma}}^{+}$.
$\displaystyle M_{LP_{\bar{\eta}}}^{+}(i,\varphi)$
$\displaystyle=\sum_{u=1}^{\sigma(P_{\eta})}M_{LP_{\gamma}}^{+}(i,g)\cdot\delta_{u}\qquad\forall
i\in\\{1,\ldots\sigma(L)\\},\;g\in\\{1,\ldots,\sigma(P_{\gamma})\\},u\in\\{1,\ldots,\sigma(P_{\eta})\\},\varphi=\sigma(P_{\eta})(g-1)+u$
(101) $\displaystyle M_{LP_{\bar{\eta}}}^{+}(i,\varphi)$
$\displaystyle=\sum_{y_{1}=1}^{\sigma(B_{S})}\sum_{y_{2}=1}^{\sigma(B_{S})}M_{LP_{\gamma}}^{+}(i,g)\cdot\delta_{y_{1}}\cdot\delta_{y_{2}}\quad\forall
y_{1},y_{2}\in\\{1,\ldots,\sigma(B_{S})\\},\varphi=\sigma^{2}(B_{S})(g-1)+\sigma(B_{S})(y_{1}-1)+y_{2}$
(102) $\displaystyle M_{LP_{\bar{\eta}}}^{+}$
$\displaystyle=\sum_{u=1}^{\sigma(P_{\eta})}M_{LP_{\gamma}}^{+}\otimes
e_{u}^{\sigma(P_{\eta})T}\quad\;\;=\sum_{y_{1}=1}^{\sigma(B_{S})}\sum_{y_{2}=1}^{\sigma(B_{S})}M_{LP_{\gamma}}^{+}\otimes\left(e_{y_{1}}^{\sigma(B_{S})}\otimes
e_{y_{2}}^{\sigma(B_{S})}\right)^{T}$ (103)
$\displaystyle=M_{LP_{\gamma}}^{+}\otimes\mathds{1}^{\sigma(P_{\eta})T}\qquad\quad=M_{LP_{\gamma}}^{+}\otimes\left(\mathds{1}^{\sigma(B_{S})}\otimes\mathds{1}^{\sigma(B_{S})}\right)^{T}$
(104)
###### Definition 42 (The Positive Holding Process-Operand Incidence Matrix
$M_{LP_{\gamma}}^{+}$):
A binary incidence matrix
$M_{LP_{\gamma}}^{+}\in\\{0,1\\}^{\sigma(L)\times\sigma(P_{\gamma})}$ whose
element $M_{LP}^{+}(i,g)=1$ when the holding process $p_{g}\in P_{\gamma}$
ejects operand $l_{i}\in L$ as an output.
### -D Definitions of Set Operations
###### Definition 43 ($\sigma$() Notation [7]):
The $\sigma()$ notation returns the size of a set: given a set $S$ with $n$ elements, $n=\sigma(S)$.
###### Definition 44 (Cartesian Product $\times$ [129]):
Given three sets, A, B, and C,
$A\times B=\\{(a,b)\in C\quad\forall a\in A\;\mbox{and}\;b\in
B\\}$ (105)
### -E Definitions of Boolean Operations
The conventional symbols of $\wedge$, $\vee$, and $\lnot$ are used to indicate
the AND, OR, and NOT operations respectively.
###### Definition 45 ($\bigvee$ Notation):
$\bigvee$ notation indicates a Boolean OR over multiple binary elements
$a_{i}$.
$\bigvee_{i}^{n}a_{i}=a_{1}\vee a_{2}\vee\ldots\vee a_{n}$ (106)
###### Definition 46 (Matrix Boolean Addition $\oplus$):
Given Boolean matrices $A,B,C\in\\{0,1\\}^{m\times n}$, $C=A\oplus B$ is
equivalent to
$C(i,j)=A(i,j)\vee B(i,j)\qquad\forall i\in\\{1\ldots m\\},j\in\\{1\ldots
n\\}$ (107)
###### Definition 47 (Matrix Boolean Scalar Multiplication $(\cdot)$):
Given Boolean matrices $A,B,C\in\\{0,1\\}^{m\times n}$, $C=A\cdot B$ is
equivalent to
$C(i,j)=A(i,j)\wedge B(i,j)=A(i,j)\cdot B(i,j)\qquad\forall i\in\\{1\ldots
m\\},j\in\\{1\ldots n\\}$ (108)
###### Definition 48 (Matrix Boolean Multiplication $\odot$[10, 130]):
Given matrices $A\in\\{0,1\\}^{m\times n}$, $B\in\\{0,1\\}^{n\times p}$, and
$C\in\\{0,1\\}^{m\times p}$, $C=A\odot B$ is equivalent to
$C(i,k)=\bigvee_{j=1}^{n}A(i,j)\wedge B(j,k)=\bigvee_{j=1}^{n}A(i,j)\cdot
B(j,k)\qquad\forall i\in\\{1\ldots m\\},k\in\\{1\ldots p\\}$ (109)
###### Definition 49 (Matrix Boolean Subtraction):
Given Boolean matrices $A,B,C\in\\{0,1\\}^{m\times n}$, $C=A\ominus B$ is
equivalent to
$C(i,j)=A(i,j)\wedge\lnot B(i,j)=A(i,j)\cdot\lnot B(i,j)\qquad\forall
i\in\\{1\ldots m\\},j\in\\{1\ldots n\\}$ (110)
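The four Boolean matrix operations of Definitions 46–49 map directly onto NumPy Boolean arrays. A minimal sketch (toy matrices of our choosing):

```python
import numpy as np

A = np.array([[1, 0], [1, 1]], dtype=bool)
B = np.array([[0, 1], [1, 0]], dtype=bool)

add_ = A | B                                  # Boolean addition (Def. 46)
scal = A & B                                  # Boolean scalar mult. (Def. 47)
mul_ = (A.astype(int) @ B.astype(int)) > 0    # Boolean matrix product (Def. 48)
sub_ = A & ~B                                 # Boolean subtraction (Def. 49)
```

The Boolean matrix product is computed here as an ordinary integer matrix product followed by a threshold, which is equivalent to the OR-of-ANDs form in Eq. (109).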
### -F Matrix Operations
###### Definition 50 (Kronecker Delta Function $\delta_{ij}$[131]):
$\delta_{ij}=\left\\{\begin{array}[]{cc}1&\mbox{if}\;i=j\\\ 0&\mbox{if}\;i\neq
j\end{array}\right.$ (111)
###### Definition 51 (Hadamard Product[132]):
Given matrices $A,B,C\in\mathds{R}^{m\times n}$, $C=A\cdot B$ is equivalent to
$C(i,j)=A(i,j)\cdot B(i,j)\qquad\forall i\in\\{1\ldots m\\},j\in\\{1\ldots
n\\}$ (112)
###### Definition 52 (Matrix Product [132]):
Given matrices $A\in\mathds{R}^{m\times n}$, $B\in\mathds{R}^{n\times p}$, and
$C\in\mathds{R}^{m\times p}$, $C=A\times B=AB$ is equivalent to
$C(i,k)=\sum_{j=1}^{n}A(i,j)\cdot B(j,k)\qquad\forall i\in\\{1\ldots
m\\},k\in\\{1\ldots p\\}$ (113)
###### Definition 53 (Kronecker Product [133, 134]):
Given matrix $A\ \in\mathds{R}^{m\times n}$ and $B\ \in\mathds{R}^{p\times
q}$, the Kronecker (kron) product denoted by $C=A\otimes B$ is given by:
$C=\begin{bmatrix}A(1,1)B&A(1,2)B&\ldots&A(1,n)B\\\
A(2,1)B&A(2,2)B&\ldots&A(2,n)B\\\ \vdots&\vdots&\ddots&\vdots\\\
A(m,1)B&A(m,2)B&\ldots&A(m,n)B\end{bmatrix}$ (114)
Alternatively, in scalar notation:
$C(p(i-1)+k,q(j-1)+l)=A(i,j)\cdot B(k,l)\qquad\forall i\in\\{1\ldots
m\\},j\in\\{1\ldots n\\},k\in\\{1\ldots p\\},l\in\\{1\ldots q\\}$ (115)
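The scalar-index convention of Eq. (115) can be checked numerically against `numpy.kron` (toy matrices, 1-based indices converted to 0-based for NumPy):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)        # m = 2, n = 3
B = np.arange(1, 7).reshape(3, 2)     # p = 3, q = 2
C = np.kron(A, B)

# Verify Eq. (115) for one entry, with 1-based i, j, k, l
i, j, k, l = 2, 3, 1, 2
p, q = B.shape
assert C[p * (i - 1) + k - 1, q * (j - 1) + l - 1] == A[i - 1, j - 1] * B[k - 1, l - 1]
```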
###### Definition 54 (Khatri-Rao Product [133, 134]):
The Khatri-Rao Product is the “column-wise Kronecker product”. Given matrix
$A\ \in\mathds{R}^{m\times n}$ and $B\ \in\mathds{R}^{p\times n}$, the Khatri-
Rao product denoted by $C=A\circledast B$ is given by:
$\displaystyle C$ $\displaystyle=\begin{bmatrix}A(:,1)\otimes
B(:,1)&A(:,2)\otimes B(:,2)&\ldots&A(:,n)\otimes B(:,n)\\\ \end{bmatrix}$
(116)
$\displaystyle=\left[A\otimes\mathds{1}^{p}\right]\cdot\left[\mathds{1}^{m}\otimes
B\right]$ (117)
Alternatively, in scalar notation:
$C(p(i-1)+k,j)=A(i,j)\cdot B(k,j)\qquad\forall i\in\\{1\ldots
m\\},j\in\\{1\ldots n\\},k\in\\{1\ldots p\\}$ (118)
If A and B are column vectors, the Kronecker and Khatri-Rao products are
identical.
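Both forms of the Khatri-Rao product, the column-wise Kronecker product of Eq. (116) and the Kronecker-with-ones identity of Eq. (117), can be sketched and cross-checked in NumPy (toy matrices of our choosing):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])           # m x n
B = np.array([[1, 0], [0, 1], [1, 1]])   # p x n (same column count n)

# Eq. (116): column-wise Kronecker product
C = np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])])

# Eq. (117): (A kron 1_p) Hadamard (1_m kron B)
p, m = B.shape[0], A.shape[0]
C2 = np.kron(A, np.ones((p, 1), dtype=int)) * np.kron(np.ones((m, 1), dtype=int), B)
```

Both constructions yield the same $mp\times n$ matrix.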
### -G Tensor Operations
###### Definition 55 (Outer Product of Vectors [133, 134]):
Given two vectors $A_{1}\in\mathds{R}^{m_{1}}$ and
$A_{2}\in\mathds{R}^{m_{2}}$, their outer product $B\in\mathds{R}^{m_{1}\times
m_{2}}$ is denoted by
$\displaystyle B=A_{1}\circ A_{2}$ $\displaystyle=A_{1}A_{2}^{T}$ (119)
$\displaystyle B(i_{1},i_{2})$ $\displaystyle=A_{1}(i_{1})\cdot
A_{2}(i_{2})\qquad\forall i_{1}\in\\{1\ldots m_{1}\\},i_{2}\in\\{1\ldots
m_{2}\\}$ (120)
Given $n$ vectors,
$A_{1}\in\mathds{R}^{m_{1}},A_{2}\in\mathds{R}^{m_{2}},\ldots,A_{n}\in\mathds{R}^{m_{n}}$,
their outer product ${\cal B}\in\mathds{R}^{m_{1}\times
m_{2}\times\ldots\times m_{n}}$ is denoted by ${\cal B}=A_{1}\circ
A_{2}\circ\ldots\circ A_{n}$ where
$\displaystyle{\cal B}(i_{1},i_{2},\ldots,i_{n})=A_{1}(i_{1})\cdot
A_{2}(i_{2})\cdot\ldots\cdot A_{n}(i_{n})\quad\forall i_{1}\in\\{1\ldots
m_{1}\\},i_{2}\in\\{1\ldots m_{2}\\},\ldots,i_{n}=\\{1\ldots m_{n}\\}$ (121)
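Eqs. (119)–(121) have direct NumPy counterparts: `np.outer` for two vectors and `np.einsum` for the $n$-vector generalization. A small sketch with toy vectors:

```python
import numpy as np

A1 = np.array([1.0, 2.0])
A2 = np.array([3.0, 4.0, 5.0])
A3 = np.array([1.0, -1.0])

# Two vectors: B = A1 A2^T (Eq. 119)
B = np.outer(A1, A2)

# n vectors: generalized outer product (Eq. 121) via einsum
B3 = np.einsum('i,j,k->ijk', A1, A2, A3)
```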
###### Definition 56 (Matricization ${\cal F}_{M}()$ [133, 134]):
Given an nth order tensor ${\cal A}\in\mathds{R}^{p_{1}\times
p_{2}\times\ldots\times p_{n}}$, and ordered sets $R=\\{r_{1},\ldots,r_{L}\\}$
and $C=\\{c_{1},\ldots,c_{M}\\}$ that are a partition of the n modes
$N=\\{1,\ldots,n\\}$ (i.e. $R\cup C=N,R\cap C=\emptyset$), the matricization
function ${\cal F}_{M}()$ outputs the matrix $A\in\mathds{R}^{J\times K}$
$\displaystyle A$ $\displaystyle={\cal F}_{M}({\cal A},R,C)$ (122)
$\displaystyle A(j,k)$ $\displaystyle={\cal
A}(i_{1},i_{2},\ldots,i_{n})\qquad\forall
i_{1}\in\\{1,\ldots,p_{1}\\},i_{2}\in\\{1,\ldots,p_{2}\\},\ldots
i_{n}\in\\{1,\ldots,p_{n}\\}$ (123)
where
$\displaystyle
j=1+\sum_{l=1}^{L}\left[(i_{r_{l}}-1)\prod_{l^{\prime}=1}^{l-1}p_{r_{l^{\prime}}}\right],\quad$
$\displaystyle
k=1+\sum_{m=1}^{M}\left[(i_{c_{m}}-1)\prod_{m^{\prime}=1}^{m-1}p_{c_{m^{\prime}}}\right],\quad$
$\displaystyle J=\prod_{q\in R}p_{q}\quad$ $\displaystyle K=\prod_{q\in
C}p_{q}$ (124)
For the sake of clarity, ${\cal F}_{M}()$ is implemented in MATLAB code:
% Example inputs: ATensor = rand(4,7,5,3); R = [4 1]; C = [2 3];
function AMatrix = matricize(ATensor, R, C)
P = size(ATensor); J = prod(P(R)); K = prod(P(C));
AMatrix = reshape(permute(ATensor, [R C]), J, K); % Matricize
end
###### Definition 57 (Tensorization [133, 134]):
Given a matrix $A\in\mathds{R}^{J\times K}$, the dimensions
$P=[p_{1},p_{2},\ldots,p_{n}]$ of a target nth order tensor ${\cal
A}\in\mathds{R}^{p_{1}\times p_{2}\times\ldots\times p_{n}}$, and ordered sets
$R=\\{r_{1},\ldots,r_{L}\\}$ and $C=\\{c_{1},\ldots,c_{M}\\}$ that are a
partition of the n modes $N=\\{1,\ldots,n\\}$ (i.e. $R\cup C=N,R\cap
C=\emptyset$), the tensorization function ${\cal F}_{M}^{-1}()$ outputs the
nth order tensor ${\cal A}$.
$\displaystyle{\cal A}$ $\displaystyle={\cal F}_{M}^{-1}(A,P,R,C)$ (125)
$\displaystyle{\cal A}(i_{1},i_{2},\ldots,i_{n})$
$\displaystyle=A(j,k)\qquad\forall
i_{1}\in\\{1,\ldots,p_{1}\\},i_{2}\in\\{1,\ldots,p_{2}\\},\ldots
i_{n}\in\\{1,\ldots,p_{n}\\}$ (126)
where
$\displaystyle
j=1+\sum_{l=1}^{L}\left[(i_{r_{l}}-1)\prod_{l^{\prime}=1}^{l-1}p_{r_{l^{\prime}}}\right],\quad$
$\displaystyle
k=1+\sum_{m=1}^{M}\left[(i_{c_{m}}-1)\prod_{m^{\prime}=1}^{m-1}p_{c_{m^{\prime}}}\right],\quad$
$\displaystyle J=\prod_{q\in R}p_{q}\quad$ $\displaystyle K=\prod_{q\in
C}p_{q}$ (127)
For the sake of clarity, ${\cal F}_{M}^{-1}()$ is implemented in MATLAB code:
% Example inputs: AMatrix = rand(12,35); P = [4,7,5,3]; R = [4 1]; C = [2 3];
function ATensor = tensorize(AMatrix, P, R, C)
ATensor = ipermute(reshape(AMatrix, [P(R) P(C)]), [R C]); % Tensorize
end
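A NumPy analogue of the matricization/tensorization pair (Eqs. 122–127) can serve as a round-trip check. This is a sketch: MATLAB's `reshape` is column-major, hence `order='F'`, and the mode lists are 0-based here.

```python
import numpy as np

def matricize(A, R, C):
    """Matricize tensor A along row modes R and column modes C (0-based),
    using column-major (MATLAB-style) linearization."""
    P = A.shape
    J = int(np.prod([P[r] for r in R]))
    K = int(np.prod([P[c] for c in C]))
    return np.transpose(A, R + C).reshape((J, K), order='F')

def tensorize(M, P, R, C):
    """Inverse operation: rebuild the tensor of shape P from matrix M."""
    A = M.reshape([P[q] for q in R + C], order='F')
    return np.transpose(A, np.argsort(R + C))  # undo the permutation

A = np.random.rand(4, 7, 5, 3)
R, C = [3, 0], [1, 2]   # MATLAB's R = [4 1], C = [2 3] in 0-based indexing
M = matricize(A, R, C)
```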
###### Definition 58 (Vectorization [132, 133, 134]):
Vectorization denoted by $vec()$ or $()^{V}$ as a shorthand is a special case
of matricization when the resulting matrix is simply a vector. Formally, given
an nth order tensor ${\cal A}\in\mathds{R}^{p_{1}\times
p_{2}\times\ldots\times p_{n}}$ and the dimensions
$P=[p_{1},p_{2},\ldots,p_{n}]$, the vectorization function $vec()=()^{V}$
outputs the vector $A\in\mathds{R}^{J}$
$\displaystyle A$ $\displaystyle=vec({\cal A})={\cal A}^{V}$ (128)
$\displaystyle A(j)$ $\displaystyle={\cal
A}(i_{1},i_{2},\ldots,i_{n})\qquad\forall
i_{1}\in\\{1,\ldots,p_{1}\\},i_{2}\in\\{1,\ldots,p_{2}\\},\ldots
i_{n}\in\\{1,\ldots,p_{n}\\}$ (129)
where
$\displaystyle
j=1+\sum_{l=1}^{n}\left[(i_{l}-1)\prod_{l^{\prime}=1}^{l-1}p_{l^{\prime}}\right],\quad$
$\displaystyle J=\prod_{q=1}^{n}p_{q}\quad$ (130)
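The linear index of Eq. (130) has the first index varying fastest, which is exactly column-major (Fortran-order) linearization. A one-line NumPy sketch:

```python
import numpy as np

A = np.arange(24).reshape(2, 3, 4)

# vec(A): column-major flattening, first index fastest (Eq. 130)
v = A.reshape(-1, order='F')
```

The inverse, `v.reshape(A.shape, order='F')`, recovers the original tensor.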
###### Definition 59 (Inverse Vectorization [132, 133, 134]):
Inverse vectorization denoted by $vec^{-1}()$ is a special case of
tensorization when the input matrix is simply a vector. Formally, given
$A\in\mathds{R}^{J}$ and the dimensions $P=[p_{1},p_{2},\ldots,p_{n}]$ of a
target nth order tensor ${\cal A}\in\mathds{R}^{p_{1}\times
p_{2}\times\ldots\times p_{n}}$, the inverse vectorization function
$vec^{-1}()$ outputs the nth order tensor ${\cal A}$.
$\displaystyle{\cal A}$ $\displaystyle=vec^{-1}(A,P)$ (131)
$\displaystyle{\cal A}(i_{1},i_{2},\ldots,i_{n})$
$\displaystyle=A(j)\qquad\forall
i_{1}\in\\{1,\ldots,p_{1}\\},i_{2}\in\\{1,\ldots,p_{2}\\},\ldots
i_{n}\in\\{1,\ldots,p_{n}\\}$ (132)
where
$\displaystyle
j=1+\sum_{l=1}^{n}\left[(i_{l}-1)\prod_{l^{\prime}=1}^{l-1}p_{l^{\prime}}\right],\quad$
$\displaystyle J=\prod_{q=1}^{n}p_{q}\quad$ (133)
Furthermore, the above definition of inverse vectorization can be applied to a
qth dimensional slice of a tensor. In such a case,
$\displaystyle{\cal B}$ $\displaystyle=vec^{-1}(A,P,r)$ (134)
$\displaystyle{\cal
B}(k_{1},\ldots,k_{r-1},i_{1},\ldots,i_{n},k_{r+1},\ldots,k_{m})$
$\displaystyle={\cal A}(k_{1},\ldots,k_{r-1},j,k_{r+1},\ldots,k_{m})$ (135)
where the index convention of Equation 133 applies.
###### Definition 60 (Matrix and Tensor Transpose):
Given a matrix $A\in\mathds{R}^{{m_{1}}\times{m_{2}}}$, its matrix transpose
$A^{T}\in\mathds{R}^{{m_{2}}\times{m_{1}}}$ is equivalent to:
$A^{T}(j,i)=A(i,j)\qquad\forall i\in\\{1\ldots m_{1}\\},j\in\\{1\ldots
m_{2}\\}$ (136)
In this work, the generalization to tensors is a special case of the
definition provided in [135]. Given a tensor ${\cal
A}\in\mathds{R}^{{m_{1}}\times\ldots\times{m_{n}}}$, its tensor transpose
${\cal A}^{T}\in\mathds{R}^{{m_{n}}\times\ldots\times{m_{1}}}$ is equivalent
to:
${\cal A}^{T}(i_{n},\ldots,i_{1})={\cal A}(i_{1},\ldots i_{n})\qquad\forall
i_{1}\in\\{1\ldots m_{1}\\},\ldots,i_{n}\in\\{1\ldots m_{n}\\}$ (137)
###### Definition 61 (N-Mode Matrix Product $\times_{p}$ [133, 134]):
The N-mode matrix product is a generalization of the matrix product. Given a
tensor ${\cal A}\in\mathds{R}^{m_{1}\times m_{2}\times\ldots\times
m_{p}\times\ldots\times m_{n}}$, matrix $B\in\mathds{R}^{m_{p}\times q}$, and
${\cal C}\in\mathds{R}^{m_{1}\times m_{2}\times\ldots\times
q\times\ldots\times m_{n}}$, the n-mode matrix product denoted by ${\cal
C}={\cal A}\times_{p}B$ is equivalent to:
$\displaystyle{\cal
C}(i_{1},i_{2},\ldots,i_{p-1},j,i_{p+1},\ldots,i_{n})=\sum_{i_{p}=1}^{m_{p}}{\cal
A}(i_{1},i_{2},\ldots,i_{n})\cdot B(i_{p},j)$ (138) $\displaystyle\forall
i_{1}\in\\{1,\ldots,m_{1}\\},\ldots,i_{p-1}\in\\{1,\ldots,m_{p-1}\\},i_{p+1}\in\\{1,\ldots,m_{p+1}\\},\ldots,i_{n}\in\\{1,\ldots,m_{n}\\},j\in\\{1,\ldots,q\\}$
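The n-mode matrix product of Eq. (138) contracts mode $p$ of the tensor with the first index of the matrix. A NumPy sketch using `tensordot` (toy shapes, 0-based mode index):

```python
import numpy as np

A = np.random.rand(3, 4, 5)   # tensor
B = np.random.rand(4, 2)      # matrix acting on mode p = 1 (0-based)

# Contract mode p of A with rows of B, then move the new axis back to position p
C = np.moveaxis(np.tensordot(A, B, axes=([1], [0])), -1, 1)
```

Entry-wise, `C[i, j, k] = sum_p A[i, p, k] * B[p, j]`, matching Eq. (138).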
###### Definition 62 (N-Mode Boolean Matrix Product):
The N-mode Boolean matrix product is a generalization of the Boolean matrix
product. Given a tensor ${\cal A}\in\\{0,1\\}^{m_{1}\times
m_{2}\times\ldots\times m_{p}\times\ldots\times m_{n}}$, matrix
$B\in\\{0,1\\}^{m_{p}\times q}$, and ${\cal C}\in\\{0,1\\}^{m_{1}\times
m_{2}\times\ldots\times q\times\ldots\times m_{n}}$, the n-mode matrix product
denoted by ${\cal C}={\cal A}\odot_{p}B$ is equivalent to:
$\displaystyle{\cal
C}(i_{1},i_{2},\ldots,i_{p-1},j,i_{p+1},\ldots,i_{n})=\bigvee_{i_{p}=1}^{m_{p}}{\cal
A}(i_{1},i_{2},\ldots,i_{n})\cdot B(i_{p},j)$ (139) $\displaystyle\forall
i_{1}\in\\{1,\ldots,m_{1}\\},\ldots,i_{p-1}\in\\{1,\ldots,m_{p-1}\\},i_{p+1}\in\\{1,\ldots,m_{p+1}\\},\ldots,i_{n}\in\\{1,\ldots,m_{n}\\},j\in\\{1,\ldots,q\\}$
# Reducing bias and increasing utility by federated generative modeling of
medical images using a centralized adversary
Jean-Francois Rajotte Data Science Institute, University of British Columbia
Sumit Mukherjee AI for Good Research Lab, Microsoft Caleb Robinson AI for
Good Research Lab, Microsoft Anthony Ortiz AI for Good Research Lab,
Microsoft Christopher West Data Science Institute, University of British
Columbia Juan M. Lavista Ferres AI for Good Research Lab, Microsoft Raymond
T Ng Data Science Institute, University of British Columbia
###### Abstract
A major roadblock in machine learning for healthcare is the inability of data
to be shared broadly, due to privacy concerns. Privacy preserving synthetic
data generation is increasingly being seen as a solution to this problem.
However, healthcare data often has significant site-specific biases, which
has motivated the use of federated learning when the goal is to utilize data
from multiple sites for machine learning model training. Here, we introduce
FELICIA (FEderated LearnIng with a CentralIzed Adversary), a generative
mechanism enabling collaborative learning. It is a generalized extension of
the (local) PrivGAN mechanism allowing to take into account the diversity
(non-IID) nature of the federated sites. In particular, we show how a site
with limited and biased data could benefit from other sites while keeping data
from all the sources private. FELICIA works for a large family of Generative
Adversarial Networks (GAN) architectures including vanilla and conditional
GANs as demonstrated in this work. We show that by using the FELICIA
mechanism, a site with a limited amount of images can generate high-quality
synthetic images with improved utility, while none of the sites need to
provide access to their real data. The sharing happens solely through a
central discriminator with access limited to synthetic data. We demonstrate
these benefits on several realistic healthcare scenarios using benchmark image
datasets (MNIST, CIFAR-10) as well as on medical images for the task of skin
lesion classification. We show that the utility of synthetic images generated
by FELICIA surpasses that of the data available locally and we demonstrate
that it can correct the reduced utility of a biased subgroup within a class.
## 1 Introduction
Learning from images to build diagnostic or prognostic models of a medical
condition has become a very active research topic because of its great
potential to provide better care for patients. Deep learning has been involved
in much of modern progress in medical computer vision techniques, such as
disease detection and classification as well as biomedical segmentation [8].
However, for such methods to capture the subtle patterns between a medical
condition and an image, it is important that a model is exposed to a rich
variety of cases. It is well known that images from a single source can be
significantly biased by the demographics, equipment, and acquisition protocol
[28]. Consequently, training a model on images from a single source would skew
the performance of its prediction power towards the population from that
source and potentially perform poorly for other populations. Ideally, such a
model should be trained on images from as many sources as possible. To reduce
the associated cost of collecting and labelling data, it is clear that all
sites such as hospitals and research centers would benefit from sharing their
images.
Gaining access to large medical datasets requires a very lengthy approval
process due to concerns about privacy breaches. Most current privacy
legislation prevents datasets from being accessed and analyzed outside of a
small number of dedicated servers (e.g., servers within a local hospital).
However, to unleash the full power of various machine learning techniques,
particularly deep learning methods, we need to find ways to share data among
research groups, while satisfying privacy requirements.
How sharing actually happens can depend on many factors such as use cases,
regulation, business value protection and infrastructure availability. In this
work, we focus on synthetic data creation which allows multiple downstream use
cases and exploration. Our objective is to show how different sites (e.g.
hospitals) can help each other by creating joint or disjoint synthetic
datasets that contain more utility than any of the single datasets alone.
Moreover, the synthetic dataset can be used as a benchmark for machine
learning in health care. To this end, we first test our method in two toy
setups using common benchmark datasets, where we create artificial sites with
datasets from different data distributions. This shows the potential of our
method in the domain of medical imaging.
Figure 1: FELICIA architecture with N=2 users. The real data subsets are
determined by the users to which we associate local components by subscripts.
In this work, the users will often be referred to as sites or hospitals in our
experiment scenarios.
Sharing private data or their characteristics has been extensively explored
recently. A common approach is to generate privacy preserving synthetic data
using various variants of Generative Adversarial Networks (GAN [11]). GANs are
generative models that are able to create realistic-looking synthetic images.
A GAN comprises of a generator G and a discriminator D playing a two-player
game. The generator aims to create fake samples such that the discriminator
will estimate their probability to be as high as possible. The discriminator
on the other hand, tries to estimate the probability that a sample is real
(rather than fake). PrivGAN ([21]) is an extension of GAN, originally designed
to generate synthetic data while improving the privacy of the data used for
training. Although PrivGAN was developed to be applied locally on a single
dataset, previous work ([24]) has demonstrated that PrivGAN can be useful in a
federated learning setting. In this paper, we develop a general mechanism
(FELICIA) to extend a large family of GANs to a federated learning setting
utilizing a centralized adversary. We explore the application of this
framework to show how different sites can collaborate with each other to
improve machine learning models in a privacy-preserving distributed data
sharing scenario. To demonstrate the relevance of FELICIA, we focus on
settings relevant to health care. However, other natural scenarios can be
found in disparate domains such as in banking.
Our main contributions are the following:
* •
Formalize a new federated learning mechanism (FELICIA) motivated by the
PrivGAN ([21]) architecture, which extends to a family of GAN architectures.
* •
Demonstrate empirically that the hyperparameter $\lambda$ can improve the
utility, contrary to the original PrivGAN.
* •
Generalize the hyperparameter $\lambda$ to be site-dependent.
* •
Improve the synthetic data by using generators from multiple epochs as was
done in ([4]).
* •
Demonstrate the applicability of FELICIA on real non-IID data both for
conditional and non-conditional synthetic data generation.
* •
Demonstrate the applicability of using FELICIA to enable medical images
sharing in a federated learning context.
* •
Demonstrate that FELICIA can create a synthetic dataset without the utility
bias from its local data.
## 2 Related work
Sharing data between non-local sites such as hospitals and research centers
can be achieved in many ways. A popular approach to share data with privacy is
to generate private synthetic data with Differential Privacy ([7]). Generative
models such as GANs based on either differentially private stochastic gradient
descent ([1, 29]) or the Private Aggregation of Teacher Ensembles, PATE ([22,
30]) are of particular interest. Both approaches suffer from low data
utility at a reasonable degree of privacy.
Another approach is to train a model in a federated learning setting such that
the data never has to be shared ([26, 25, 12, 9]). Since it has been
demonstrated that GANs are vulnerable to privacy attacks ([13]), various
approaches have been proposed to provide better privacy protection. Synthetic
data from GANs trained on distributed datasets with differential privacy ([10,
6]) suffer from the same low quality as synthetic data from centrally trained
GANs, unless they have access to a very large amount of training data as in
this language model application ([19]). FELICIA allows users to create high
quality local synthetic datasets while privacy protection naturally arises
from the architecture.
### 2.1 PrivGAN
PrivGAN ([21]) is an extension of a GAN originally designed to protect against
membership inference attacks, such as LOGAN and MACE ([13, 18]). The
architecture is comprised of $N$ GANs trained on disjoint, independent and
identically distributed (IID) subsets with an extra loss from a central
(private) discriminator DP. The authors show that their method “ _minimally
affects the quality of downstream samples as evidenced by the performance on
downstream learning tasks such as classification_ ”. The key feature here is
that the only connection between the subsets is the central discriminator DP
accessing only synthetic data.
## 3 Methods
### 3.1 Mathematical formulation
While the original formulation of privGAN can be seen as a modification to the
original GAN architecture ([11]), the mechanism of utilizing multiple
generator-discriminator pairs and a centralized adversary is quite general. To
that end, we first define a general family of GANs ([3]) that contain a single
generator $G$, a single discriminator $D$ and a loss governed by a measure
function $\phi:[0,1]\longrightarrow\mathbb{R}$ as follows:
$\displaystyle\begin{split}V_{\phi}(G,D)=&\mathbb{E}_{x\sim
p_{data}(x)}[\phi(D(x))]+\\\ &\mathbb{E}_{z\sim
p_{z}(z)}[\phi(1-D(G(z)))]\end{split}$ (1)
In the case of conditional GANs, $x$ is replaced by the conditioned tuple
$(x|y)$ where $y$ is the label associated with sample $x$. Our proposed
mechanism (FELICIA) extends any GAN belonging to this family to a federated
learning setting using a centralized adversary. Formally, given a measure
function $\phi$ and corresponding GAN loss $V_{\phi}$, the federated loss is:
$\displaystyle\text{FELICIA}(\phi,V_{\phi},\bm{\lambda},N)=\sum_{i=1}^{N}\underbrace{V_{\phi}(G_{i},D_{i})}_{local}+\lambda_{i}\underbrace{R^{i,\phi}_{p}(D_{p})}_{global}$
(2)
where $R^{i,\phi}_{p}(D_{p})=\mathbb{E}_{z\sim
p_{z}(z)}\phi[D_{p}^{i}(G_{i}(z))]$. A notable novelty here is that
$\bm{\lambda}$ is now an $N$-dimensional parameter
$\bm{\lambda}=(\lambda_{1},...,\lambda_{N})$, one for each of the $N$ users.
Contrary to PrivGAN, both terms in FELICIA’s loss have the potential to
contribute to utility: the local term favors utility on local data and the
global term favors utility on all users’ data.
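As a concrete illustration, here is a minimal numpy sketch of Eqs. (1) and (2), assuming batches of discriminator outputs in $(0,1]$ and $\phi=\log$ as in the original GAN; the function names are ours, not the authors' code:

```python
import numpy as np

def gan_value(phi, d_real, d_fake):
    # Eq. (1): V_phi(G, D) = E[phi(D(x))] + E[phi(1 - D(G(z)))],
    # estimated on batches of discriminator outputs.
    return np.mean(phi(d_real)) + np.mean(phi(1.0 - d_fake))

def felicia_loss(phi, d_real, d_fake, dp_fake, lams):
    # Eq. (2): sum over the N users of the local GAN value plus the
    # lambda_i-weighted central term R_p^{i,phi}(D_p) = E[phi(D_p^i(G_i(z)))].
    total = 0.0
    for i, lam in enumerate(lams):
        total += gan_value(phi, d_real[i], d_fake[i])
        total += lam * np.mean(phi(dp_fake[i]))
    return total
```

Setting all $\lambda_i=0$ recovers $N$ independent local GANs; increasing $\lambda_i$ couples user $i$'s generator more strongly to the central discriminator.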
In this paper, we apply our mechanism to three separate GANs belonging to this
family: i) the original GAN ([11]), ii) DCGAN ([23]), and iii) conditional GAN
([20]). We note, however, that these are simply representative examples and the
mechanism applies to a wide variety of GANs such as WGAN ([2]), DP-GAN ([29]),
etc.
### 3.2 Practical implementation
To implement the FELICIA mechanism we follow a process similar to the original
PrivGAN paper. Specifically, we duplicate the discriminator and generator
architectures of a ‘base GAN’ to each of the component generator-discriminator
pairs of FELICIA. The privacy discriminator (DP) is selected to be identical
in architecture to the other discriminators barring the activation of the
final layer. Most of the optimization effort is dedicated to training the base
GAN on the whole training data to generate realistic images. FELICIA's
implementation is then initialized with the base GAN's parameters, which are
already tuned to produce good-looking samples. This last step is usually much
faster as the base GAN's parameters represent a good starting point.
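The duplication step can be sketched as follows (a toy sketch; `build_felicia` and its arguments are illustrative names, not the authors' code):

```python
import copy

def build_felicia(base_gen, base_disc, n_users):
    # Clone the base generator/discriminator architecture for each of the
    # n_users local pairs; the privacy discriminator D_p reuses the same
    # body (in the full setup only its final activation differs).
    pairs = [(copy.deepcopy(base_gen), copy.deepcopy(base_disc))
             for _ in range(n_users)]
    d_p = copy.deepcopy(base_disc)
    return pairs, d_p
```

Deep copies ensure each user pair trains its own parameters independently, with coupling only through the loss of the central discriminator.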
## 4 Experiments
Our experiments are based on a simulation of two hospitals (Hospital 1 and
Hospital 2) with different patient populations. We consider a regulation
preventing sharing images as well as models that had access to images. We will
use FELICIA where Hospital 1 and Hospital 2 correspond respectively to User1
and User2 in Figure 1. For our last two experiments, we define the concept of
helpee and helper. The helpee is a hospital with low utility and biased
dataset and the helper is a hospital with a rich and high utility dataset
willing to help within the above regulation restrictions. We will show that,
through the FELICIA framework, the helpee, Hospital 1, can locally generate a
less biased synthetic dataset with more utility than its own (real) data.
First, we use the MNIST dataset ([15]) to show how FELICIA can help generate
synthetic data with better coverage of the input distribution, even when both
sites have a biased coverage of the possible input space. Second, we use a
more complex dataset, CIFAR-10, to show how the utility could be significantly
improved when a subgroup is underrepresented in the data. Finally, we test
FELICIA in a federated learning setting with medical imagery using a skin
lesion image dataset. In the first experiment, the utility is demonstrated
visually by showing the distribution of the generated samples. In the other
experiments, the utility is defined as the performance of a classifier trained
on synthetic data (sometimes combined with real data) and evaluated on a held
out real dataset.
In the first two experiments, we have kept the default parameters of the
original PrivGAN implementation, namely equal $\lambda$’s (i.e.
$\lambda_{1}=\lambda_{2}=1$). We have also used the generator at the end of
the training phase, which gave satisfactory results. In our last experiment
however, such implementation did not lead to synthetic data with satisfactory
utility. We hypothesized that user-dependent $\lambda_{i}$’s would better suit
the scenario of a helpee being more penalized when its synthetic data is
distinguishable from the helper’s synthetic data. Conversely, the helper,
which can generate good quality synthetic data on its own, does not need to be
penalized as much when its synthetic data is distinguishable from the biased
helpee’s. Also, we have noticed that the utility does not increase
monotonically with the number of training epochs. Inspired by previous work
([4]), we used synthetic images from 5 generators from the top epochs in
utility. The selection of the top epochs (as well as the best combination of
($\lambda_{1},\lambda_{2}$)) was determined with the held-out validation set
and the final utility was determined on the test set.
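The epoch-selection step described above can be sketched as follows (a sketch; the utility scores are assumed to come from the validation set):

```python
def top_epochs_by_utility(epoch_utility, k=5):
    # epoch_utility: {epoch: validation utility}; return the k epochs
    # whose saved generators are pooled to produce the synthetic images.
    return sorted(epoch_utility, key=epoch_utility.get, reverse=True)[:k]
```
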
### 4.1 Improving distribution coverage
One setting that may arise across multiple sites is when Hospital 1 owns a
dataset with samples from one part of the input distribution, while Hospital 2
has a dataset with samples from a different part. We simulate this setting using the
28x28 gray scale hand written digit dataset MNIST ([15]). Specifically, we
test whether FELICIA is able to generate representative samples from the
entire input distribution while the local data is biased.
Given all images of a selected digit, we perform PCA and cluster the images in
the resulting embedding space using K-means ($k=2$). The resulting clusters
will be used to distribute the images to the sites.
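A numpy-only sketch of this embedding-and-clustering step (assuming flattened images; the paper's exact PCA and K-means settings are not specified here):

```python
import numpy as np

def pca_embed(images, k=2):
    # images: (n, d) flattened digit images; project onto the top-k
    # principal components via SVD of the centered data.
    X = images - images.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T

def kmeans2(points, iters=20, seed=0):
    # Plain 2-means on the embedded points (sketch of the clustering step).
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), 2, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(2):
            if (labels == c).any():
                centers[c] = points[labels == c].mean(axis=0)
    return labels
```

The resulting labels split the images of one digit into two visually distinct style clusters, which are then assigned to the two sites.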
We then train FELICIA using a varying proportion of images from both clusters
and compare the resulting generated images to the original images. We also
compare with images generated by traditional GANs trained only on data from
cluster 1 and cluster 2. Specifically, we define a mixing parameter, $\alpha$,
used to select the number of samples from each cluster used to fit FELICIA and
two simple GANs. FELICIA will be trained on the two subsets defined as
follows:
Subset 1
A random selection of $\alpha\%$ of samples from cluster 1, and a fraction
$(100-\alpha)\%$ of samples from cluster 2.
Subset 2
Same as Subset 1 but inverting the fraction, i.e. replacing $\alpha\%$ by
$(100-\alpha)\%$.
Subset 1 and Subset 2 correspond respectively to X1Real and X2Real in the
right diagram of Figure 1.
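The subset construction can be sketched as follows (a sketch; the index arrays and sampling details are assumptions, not the authors' code):

```python
import numpy as np

def make_subsets(idx_c1, idx_c2, alpha, n, seed=0):
    # Subset 1: alpha% of its n samples from cluster 1 and the rest from
    # cluster 2; Subset 2 inverts the fractions.
    rng = np.random.default_rng(seed)
    k = int(round(n * alpha / 100))
    s1 = np.concatenate([rng.choice(idx_c1, k, replace=False),
                         rng.choice(idx_c2, n - k, replace=False)])
    s2 = np.concatenate([rng.choice(idx_c1, n - k, replace=False),
                         rng.choice(idx_c2, k, replace=False)])
    return s1, s2
```
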
For $\alpha=0$, Subset 1 will be completely biased towards cluster 2
(representing a specific section of the input distribution), and for
$\alpha=50$ both subsets will consist of equal numbers of samples spread over
the input distribution. FELICIA’s training results in a generator for Subset
1, G1, and generator for Subset 2, G2. We also train two simple GANs using the
data from Subset 1 and Subset 2 respectively.
Once all GANs are trained, we generate 2000 samples from each and compare them
to the original samples by plotting all samples using the first two principal
components from the original image embedding step. Figure 2 shows such plots
for three values of $\alpha$. When the bias is maximal (i.e. when $\alpha=0$),
FELICIA generates images only at the cluster border, while the simple GANs
will generate images only from the cluster on which they were trained. This is
not surprising when we consider that if the local discriminator D1 is never
trained on real images from a given cluster, it will not “allow” the
generator to cover that part of the input space; the only generated samples
that satisfy both discriminators are therefore those at the border. As $\alpha$
increases (shown in descending rows of Figure 2), it is clear that the samples
generated by FELICIA cover more of the input space than those of the local
GANs.
Figure 2: GAN and FELICIA generated samples on biased subsets for digit four.
Each point corresponds to the position of a given hand written digit in the
first two components of the PCA embedding. As a reference, each plot shows
cluster 1 (blue points) and cluster 2 (red points) in the background. The
green points correspond to the generated samples. Columns 1 & 2 show simple GAN
generated samples after training on subset 1 & 2, respectively. Columns 3 & 4
show samples generated by FELICIA G1 and G2, respectively.
### 4.2 Reducing bias
Another setting that may arise is when one site owns an imbalanced
dataset while the other owns a complete (unbiased) dataset. In this setting,
the owner of the imbalanced dataset, the helpee, should be able to benefit
from the owner with a balanced dataset, the helper. We use the CIFAR-10
dataset [14] to simulate such a setting. CIFAR-10 is a dataset of 32x32 RGB
images labeled with 10 different classes of animals and transport vehicles. To
represent a biased dataset, we define two classes: class 1, the house pets
class, consisting of “cats” and “dogs” and class 2, the large animal class,
consisting of “deer” and “horses”. Similarly to the previous experiment, we
will create two subsets:
Subset 1
Contains an equal number of cat and dog samples for class 1 and an unequal
number of deer and horse samples for class 2. The bias of this subset will be
quantified with $\beta$, the fraction of class 2 samples that are images of
deer. This represents the helpee’s dataset.
Subset 2
Contains an equal number of cat and dog samples for class 1 and an equal
number of deer and horse samples for class 2. This represents the helper’s
dataset.
Note that the two subsets have an equal number of images; the difference is in
the proportion of deer & horses among the samples that make up class 2.
We train a CNN to discriminate between class 1 and class 2 with three
different training sets: Subset 1 only, Subset 1 + GAN synthetic data (i.e.
augmented with GAN), and Subset 1 + FELICIA synthetic data (i.e. augmented
with FELICIA), then measure the classification accuracy on a held out test
set. FELICIA synthetic data is created from the helpee’s generator associated
to Subset 1.
Figure 3: Classifier accuracy on three data subsets after training on real
data, real data augmented with synthetic data from a simple GAN and real data
augmented with FELICIA. (Left) accuracy over samples from all classes. (Right)
accuracy over samples of the deer class.
Figure 3 shows the accuracy as a function of $\beta$ for each classification
model evaluated on the full held out test set and its deer images only. We
observe that the classifier accuracy after training on real data decreases
when the data is more biased towards the deer. This is expected as the test
data is balanced and the reduced subgroup in the training set leads to reduced
accuracy. This is confirmed in the right panels of Figure 3, showing a
decrease in accuracy of the classifier consistent with the bias of the
training data. The same figure shows that augmenting the classifier training
set with simple GANs synthetic data does not improve the accuracy. This is
also expected as a simple GAN’s goal is to reproduce the training data
distribution. Finally, the classifier trained on real data augmented with
FELICIA synthetic data is systematically better than other classifiers. The
improvement is particularly significant when the data is most biased.
### 4.3 Synthetic medical images
In our last experiment, we apply FELICIA in a federated learning setting with
a real-world medical image dataset, HAM10000 ([27]). Similar to the previous
experiment, we will use these images to simulate a biased subset for the
helpee Hospital 1. This dataset contains a large collection of multi-source
dermatoscopic images of common pigmented skin lesions (the original images
have a size of 600x450 pixels, which we have resized to 64x64 pixels in order
to train models more quickly). These are separated into 7 imbalanced sets of skin
lesion images, from which we use the four most populated, lesion 0:
Melanocytic nevi (6705 images), lesion 1: Melanoma (1113 images), lesion 2:
Benign keratosis (1099 images), and lesion 3: Basal cell carcinoma (514 images).
Lesions 0 and 2 are benign whereas lesions 1 and 3 are associated with
different types of skin cancer. We create two classes from these lesion sets:
Class 0
Images of benign lesions from lesion 0 & lesion 2
Class 1
Images for cancerous lesions from lesion 1 & lesion 3
We evaluate the performance of a binary classifier trained to predict whether
a lesion is benign (class 0) or cancerous (class 1). This type of skin lesion
classification has been shown to be successful with deep learning (see [8] and
references therein).
We first randomly remove 1000 images from each class to create two equal size
held out sets: a test and a validation set. From the remaining dataset, the
first subset is defined similarly to the previous experiment, with balanced
classes but an artificial bias in the lesion composition within one of the
classes. The training subsets are defined as follows:
Subset 1
Contains 300 images from each class.
* •
Class 0 (benign) biased with 10 images of lesion 0 and 290 images of lesion 2.
* •
Class 1 (cancerous) balanced with 150 images of lesion 1 and 150 images of
lesion 3.
Subset 2
Contains the remainder of the dataset.
Subset 1 and Subset 2 correspond respectively to X1Real and X2Real in
the FELICIA diagram in Figure 1.
We explore how the helpee (Subset 1) with limited and biased data, could be
helped by the helper’s (Subset 2) richer data through the FELICIA mechanism.
In this experiment, the utility is defined as the performance of a
benign/cancerous classifier trained on synthetic data and evaluated on the
held out set. Specifically, we use area under the receiver operator
characteristic curve (AUC-ROC). Since we are dealing with two hyperparameters
($\lambda_{1},\lambda_{2}$), they are selected with the validation set and the
final performance is reported on the test set. To address the relatively low
amount of images (compared to the previous experiments), we use a conditional
GAN ([20]) to leverage all training images for one model as opposed to a model
per class as in the previous experiments (for simplicity, we have kept the
central discriminator $D_{p}$ unconditional, i.e. without class input).
We train FELICIA on the two subsets over 100000 epochs for various
combinations of ($\lambda_{1},\lambda_{2}$) from equation (2). For each set of
($\lambda_{1},\lambda_{2}$), we re-run the experiment multiple times while
varying the random seeds for the network initialization and data shuffling for
the train, test and validation set.
For comparison, we train a conditional GAN using data only from Subset 1. This
represents the synthetic data that the helpee could generate without access to
the helper’s dataset.
We evaluate the utility of the generated images every 50 training epochs. For
each evaluated epoch, we used the generator to create 200 images for each
class. A simple CNN classifier trained on these generated images is then
evaluated on the 500 images of the balanced validation set. Then we select the
best combination of ($\lambda_{1},\lambda_{2}$) and make our final evaluation
on a held out test set.
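The model-selection step can be sketched as follows (a sketch; `val_utility` is assumed to train the small CNN on 200 generated images per class and return its score on the 500 validation images):

```python
def select_best_lambdas(candidates, val_utility):
    # candidates: {(lam1, lam2): list of saved generator checkpoints,
    # one per evaluated epoch}; keep the combination whose best
    # checkpoint maximizes validation utility.
    best_lams, best_u = None, float("-inf")
    for lams, checkpoints in candidates.items():
        u = max(val_utility(c) for c in checkpoints)
        if u > best_u:
            best_lams, best_u = lams, u
    return best_lams, best_u
```

The final performance of the selected combination is then reported on the held-out test set, never on the validation set used for selection.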
Figure 4: Real images and synthetic images generated by FELICIA and
conditional GAN. The images are created by the generator from the epoch of
highest utility. Figure 5: Helpee synthetic images utility as a function of
the $\lambda_{1}$ parameter, i.e. the strength of the central discriminator
loss term applied to the helpee generator’s total loss.
Figure 4 shows how the helpee can generate very realistic images with FELICIA
while images from the simple GAN are of very low quality and lack diversity.
The images are produced from the saved generator at the epoch leading to the
best utility. We show in Figure 5 the utility as a function of several
combinations of $(\lambda_{1},\lambda_{2})$. For simple (conditional) PrivGAN,
we limit the search to $\lambda_{1}=\lambda_{2}$ and a single generator as in
the original paper ([21]). We see that PrivGAN improves the overall utility
less, and only for a limited range of $\lambda$.
Table 1 summarizes the utility metrics for the synthetic data generated by the
helpee. FELICIA data surpasses in utility both the local real data (Subset 1)
and the synthetic data from a simple (conditional) GAN. Furthermore, Table 1
shows how FELICIA improves the accuracy of the utility classifier for the
penalized subgroup (Melanocytic nevi) without significantly affecting the
other subgroups.
Metric | REAL | GAN | PrivGAN (FELICIA, $\lambda_{1}=\lambda_{2}$) | FELICIA
---|---|---|---|---
AUC-ROC | 0.59 | 0.61 | 0.67 | 0.69
Accuracy (Melanocytic nevi) | 0.51 | 0.74 | 0.71 | 0.73
Accuracy (Benign keratosis) | 0.57 | 0.65 | 0.65 | 0.65
Accuracy (Melanoma) | 0.55 | 0.39 | 0.47 | 0.51
Accuracy (Basal c. carcinoma) | 0.74 | 0.52 | 0.62 | 0.65
Table 1: Utility of skin lesion synthetic images generated by the helpee.
Melanocytic nevi is the biased subgroup (equivalent to the deer in the
previous section). We note that FELICIA improves the overall score while
keeping the performance on subgroups more balanced.
### 4.4 Discussion
Our experiments suggest that FELICIA allows generators to learn distributions
beyond a local subset. This is supported by the cluster coverage in the first
experiment on hand written digits, as well as the improved utility of a
synthetic dataset compared to the local data. Moreover, these results could
not be reproduced with synthetic data produced by a GAN on the same local
data.
We note that in our last experiment the $\lambda_{i}$ hyperparameters are
selected based on the evaluation on a holdout dataset. In an
application, this data could be available as a public dataset. When this is
not possible, the utility could be determined by training a classifier on
synthetic data and be evaluated securely on the helper’s data with
differential privacy, e.g. ([17], [5]). Alternatively, one might be interested
in using the $\lambda$’s to weight the client’s contribution for diversity
rather than overall performance. In that case, they could be equal, unless
some sites hold very small datasets, in which case their $\lambda$’s would
have to be reduced, presumably in proportion to their dataset sizes.
## 5 Conclusions
We have developed a novel mechanism, FELICIA, that allows for the sharing of
data more securely in order to generate synthetic data in a federated learning
context. By setting up various scenarios with biased sites, we have
demonstrated the advantages of our mechanism with image datasets. We have
shown that a biased site can be securely helped by another site through the
FELICIA architecture and that it will benefit more the more biased it is. We
have also demonstrated on medical images that FELICIA can help generate
synthetic images with more utility than images available locally. The work
presented was implemented centrally, therefore the performance effect of the
sites being distributed is still to be investigated.
FELICIA can be implemented with a wide variety of GANs which will depend on
the type of data and use case. A particularly relevant use case is a pandemic
such as COVID-19 where hospitals and research centers at the beginning of an
outbreak would benefit from the data gathered by sites affected earlier. The
data sharing approval process can easily take months, whereas pandemic
microbiology evolution tells us that a virus can mutate to a different strain
orders of magnitude faster. Another application is the augmentation of an
image dataset to improve diagnostics, such as the classification of cancer
pathology images ([16]). The data from one research center is often biased
towards the population that dominates its available training data. FELICIA
could help mitigate such bias by allowing sites from all over the world to
create a synthetic dataset based on a more general population.
We are currently working on implementing FELICIA with a progressive GAN in order
to generate highly complex medical images such as CT scans, x-rays and
histopathology slides in a real federated learning setting with non-local
sites.
## Acknowledgement
We are grateful to the Cascadia Data Discovery Initiative for enabling this
collaboration and for granting Azure credits for part of this work.
## References
* [1] Martin Abadi et al. “Deep learning with differential privacy” In _Proceedings of the 2016 ACM SIGSAC conference on computer and communications security_ , 2016, pp. 308–318
* [2] Martin Arjovsky, Soumith Chintala and Léon Bottou “Wasserstein generative adversarial networks” In _International Conference on Machine Learning_ , 2017, pp. 214–223 PMLR
* [3] Sanjeev Arora et al. “Generalization and equilibrium in generative adversarial nets (GANs)” In _International Conference on Machine Learning_ , 2017, pp. 224–232 PMLR
* [4] Brett K Beaulieu-Jones et al. “Privacy-preserving generative deep neural networks support clinical data sharing” In _Circulation: Cardiovascular Quality and Outcomes_ 12.7 Am Heart Assoc, 2019
* [5] Kendrick Boyd, Eric Lantz and David Page “Differential privacy for classifier evaluation” In _Proceedings of the 8th ACM Workshop on Artificial Intelligence and Security_ , 2015, pp. 15–23 ACM
* [6] Dingfan Chen, Tribhuvanesh Orekondy and Mario Fritz “GS-WGAN: A gradient-sanitized approach for learning differentially private generators” In _arXiv preprint arXiv:2006.08265_ , 2020
* [7] Cynthia Dwork and Aaron Roth “The algorithmic foundations of differential privacy.” In _Foundations and Trends in Theoretical Computer Science_ 9.3-4, 2014, pp. 211–407
* [8] Andre Esteva et al. “Deep learning-enabled medical computer vision” In _NPJ digital medicine_ 4.1 Nature Publishing Group, 2021, pp. 1–9
* [9] Chenyou Fan and Ping Liu “Federated generative adversarial learning” In _Chinese Conference on Pattern Recognition and Computer Vision (PRCV)_ , 2020, pp. 3–15 Springer
* [10] Robin C Geyer, Tassilo Klein and Moin Nabi “Differentially private federated learning: A client level perspective” In _arXiv preprint arXiv:1712.07557_ , 2017
* [11] Ian J Goodfellow et al. “Generative adversarial networks” In _Advances in Neural Information Processing Systems_ Curran, 2014
* [12] Corentin Hardy, Erwan Le Merrer and Bruno Sericola “MD-GAN: Multi-discriminator generative adversarial networks for distributed datasets” In _2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS)_ , 2019, pp. 866–877 IEEE
* [13] Jamie Hayes, Luca Melis, George Danezis and Emiliano De Cristofaro “LOGAN: Membership inference attacks against generative models” In _arXiv preprint arXiv:1705.07663_ , 2017
* [14] Alex Krizhevsky and Geoffrey Hinton “Learning multiple layers of features from tiny images” Citeseer, 2009
* [15] Yann LeCun, Corinna Cortes and CJ Burges “MNIST handwritten digit database”, http://yann.lecun.com/exdb/mnist, 2010
* [16] Adrian B Levine et al. “Synthesis of diagnostic quality cancer pathology images by generative adversarial networks” In _The Journal of pathology_ 252.2 Wiley Online Library, 2020, pp. 178–188
* [17] Jingcheng Liu and Kunal Talwar “Private selection from private candidates” In _Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing_ , 2019, pp. 298–309 ACM
* [18] Xiyang Liu, Yixi Xu, Sumit Mukherjee and Juan Lavista Ferres “MACE: A Flexible Framework for Membership Privacy Estimation in Generative Models” In _arXiv preprint arXiv:2009.05683_ , 2020
* [19] H Brendan McMahan, Daniel Ramage, Kunal Talwar and Li Zhang “Learning differentially private recurrent language models” In _arXiv preprint arXiv:1710.06963_ , 2017
* [20] Mehdi Mirza and Simon Osindero “Conditional generative adversarial nets” In _arXiv preprint arXiv:1411.1784_ , 2014
* [21] Sumit Mukherjee et al. “privGAN: Protecting GANs from membership inference attacks at low cost to utility” In _Proceedings on Privacy Enhancing Technologies_ 3, 2021, pp. 142–163
* [22] Nicolas Papernot et al. “Semi-supervised knowledge transfer for deep learning from private training data” In _arXiv preprint arXiv:1610.05755_ , 2016
* [23] Alec Radford, Luke Metz and Soumith Chintala “Unsupervised representation learning with deep convolutional generative adversarial networks” In _arXiv preprint arXiv:1511.06434_ , 2015
* [24] Jean-Francois Rajotte and Raymond T Ng “Private data sharing between decentralized users through the privGAN architecture” In _2020 IEEE 24th International Enterprise Distributed Object Computing Workshop (EDOCW)_ , 2020, pp. 37–42 IEEE
* [25] Mohammad Rasouli, Tao Sun and Ram Rajagopal “FedGAN: Federated generative adversarial networks for distributed data” In _arXiv preprint arXiv:2006.07228_ , 2020
* [26] Nicola Rieke et al. “The future of digital health with federated learning” In _NPJ digital medicine_ 3.1 Nature Publishing Group, 2020, pp. 1–7
* [27] Philipp Tschandl, Cliff Rosendahl and Harald Kittler “The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions” In _Scientific data_ 5.1 Nature Publishing Group, 2018, pp. 1–9
* [28] Christian Wachinger, Anna Rieckmann, Sebastian Pölsterl and Alzheimer’s Disease Neuroimaging Initiative “Detect and correct bias in multi-site neuroimaging datasets” In _Medical Image Analysis_ 67 Elsevier, 2021, pp. 101879
* [29] Liyang Xie et al. “Differentially private generative adversarial network” In _arXiv preprint arXiv:1802.06739_ , 2018
* [30] Jinsung Yoon, James Jordon and Mihaela Schaar “PATE-GAN: Generating Synthetic Data with Differential Privacy Guarantees” In _International Conference on Learning Representations_ , 2019
## Appendix A Supplement material
We show here more details related to Table 1. Figures 6, 7, 9, 8, 10
correspond to rows 1-5 of Table 1, respectively. More precisely, the middle
lines in the box plots (the medians) correspond to the values of Table 1. We
note that not only does FELICIA have the best overall performance, but it also
has the smallest performance scatter on the subgroups.
Figure 6: Repeated AUC-ROC performance evaluation on the Real dataset (helpee
with different seed for the classifier and shuffling) and the best synthetic
lesion images from GAN, FELICIA (with PrivGAN parameters
$\lambda_{1}=\lambda_{2}$) and FELICIA. The values in bracket correspond to
the selected ($\lambda_{1}$, $\lambda_{2}$). Each Real point corresponds to
training a classifier initialized with a different random seed on a training
set selected with a different shuffling seed. Each synthetic point (GAN or
FELICIA) corresponds to a different random seed for the data generation and
the shuffling for creating the (real) subsets. The box extends from the lower
to upper quartile values of the data, with a line at the median. The whiskers
extend from the box to show the range of the data. Figure 7: Same as Figure 6
but for accuracy on the Melanocytic nevi lesions. Figure 8: Same as Figure 6
but for accuracy on the Melanoma lesions. Figure 9: Same as Figure 6 but for
accuracy on the Benign keratosis lesions. Figure 10: Same as Figure 6 but for
accuracy on the Basal cell carcinoma lesions.
# The duality covariant geometry and DSZ quantization of abelian gauge theory
C. Lazaroiu Department of Theoretical Physics, Horia Hulubei National
Institute for Physics and Nuclear Engineering, Bucharest-Magurele, Romania
<EMAIL_ADDRESS>and C. S. Shahbazi Department of Mathematics,
University of Hamburg, Germany<EMAIL_ADDRESS>
###### Abstract.
We develop the Dirac-Schwinger-Zwanziger (DSZ) quantization of classical
abelian gauge theories with general duality structure on oriented and
connected Lorentzian four-manifolds $(M,g)$ of arbitrary topology, obtaining
as a result the duality-covariant geometric formulation of such theories
through connections on principal bundles. We implement the DSZ condition by
restricting the field strengths of the theory to those which define classes
originating in the degree-two cohomology of a local system valued in the
groupoid of integral symplectic spaces. We prove that such field strengths are
curvatures of connections $\mathcal{A}$ defined on principal bundles $P$ whose
structure group $G$ is the disconnected non-abelian group of automorphisms of
an integral affine symplectic torus. The connected component of the identity
of $G$ is a torus group, while its group of connected components is a modified
Siegel modular group which coincides with the group of local duality
transformations of the theory. This formulation includes electromagnetic and
magnetoelectric gauge potentials on an equal footing and describes the
equations of motion through a first-order polarized self-duality condition for
the curvature of $\mathcal{A}$. The condition involves a combination of the
Hodge operator of $(M,g)$ with a taming of the duality structure determined by
$P$, whose choice encodes the self-couplings of the theory. This description
is reminiscent of the theory of four-dimensional euclidean instantons, even
though we consider a two-derivative theory in Lorentzian signature. We use
this formulation to characterize the hierarchy of duality groups of abelian
gauge theory, providing a gauge-theoretic description of the electromagnetic
duality group as the discrete remnant of the gauge group of $P$. We also
perform the time-like reduction of the polarized self-duality condition to a
Riemannian three-manifold, obtaining a new type of Bogomolny equation which is
modified by the given taming and duality structure induced by $P$. We give
explicit examples of such solutions, which we call _polarized dyons_.
###### Key words and phrases:
Abelian gauge theory, Electromagnetic duality, symplectic vector bundles,
character varieties, symplectic lattices
2010 MSC. Primary: 53C80. Secondary: 53C07.
###### Contents
1. 1 Classical abelian gauge theory
2. 2 The Dirac-Schwinger-Zwanziger condition
3. 3 Siegel bundles and connections
4. 4 Prequantum abelian gauge theory
5. 5 Time-like dimensional reduction and polarized Bogomolny equations
6. A Local abelian gauge theory
7. B Integral symplectic spaces and integral symplectic tori
## Introduction
Abelian gauge theory on Lorentzian four-manifolds is a natural extension of
Maxwell electrodynamics, which locally describes a finite number of abelian
gauge fields interacting through couplings which are allowed to vary over
space-time. Such theories occur frequently in high energy physics. For
instance, the low energy limit of a non-abelian gauge theory coupled to scalar
fields which can be maximally higgsed contains an abelian gauge theory sector;
this occurs in particular on the Coulomb branch of supersymmetric non-abelian
gauge theories. Moreover, the universal bosonic sector of four-dimensional
ungauged supergravity involves a fixed number of abelian gauge fields
interacting with each other through couplings which can vary over space-time
due to their dependence on the scalars of the theory. This sector of four-
dimensional supergravity can be described by pulling back an abelian gauge
theory defined on the target space of the sigma model of the scalar fields
[32].
The local behavior of abelian gauge theories (including their supersymmetric
extensions) was studied intensively in the physics literature, where the
subject has achieved the level of textbook material [16]. Despite intense
activity, a global geometric formulation of such theories on arbitrary
spacetimes is still missing. At the level of field strengths, such a
description was given in [32] (see [34] for a summary) in the wider context of
the geometric description of the universal sector of classical supergravity in
four dimensions. As explained there and recalled in Section 1, the global
formulation requires the specification of a “duality structure”, defined as a
flat symplectic vector bundle $\Delta=(\mathcal{S},\omega,\mathcal{D})$ on the
spacetime manifold $M$. The even rank $2n$ of $\mathcal{S}$ equals the number
of field strengths, where both electromagnetic and magnetoelectric fields are
included. When the spacetime is not simply connected, such a bundle need not
be trivial and it ‘twists’ the local formulation in such a way that the
combination of all electromagnetic and magnetoelectric field strengths can be
described globally by a $\mathrm{d}_{\mathcal{D}}$-closed two-form
$\mathcal{V}$ defined on $M$ and valued in $\mathcal{S}$, where
$\mathrm{d}_{\mathcal{D}}$ is the differential induced by the flat connection
$\mathcal{D}$ of the duality structure. A classical electromagnetic field
strength configuration is a $\mathrm{d}_{\mathcal{D}}$-closed two-form
$\mathcal{V}$. The classical equations of motion are encoded by the condition
that $\mathcal{V}$ be self-dual with respect to a ‘polarized Hodge operator’
obtained by tensoring the Hodge operator $\ast_{g}$ defined by the Lorentzian
spacetime metric $g$ with a (generally non-flat) taming $\mathcal{J}$ of the
symplectic bundle $(\mathcal{S},\omega)$, an object which encodes all
couplings and theta angles in a fully geometric manner. A classical field
strength solution is a field strength configuration which obeys the polarized
self-duality condition. This provides a global formulation of the theory on
oriented Lorentzian four manifolds of arbitrary topology, which is manifestly
covariant with respect to electromagnetic duality. In this formulation,
classical duality transformations are described by (based) flat symplectic
automorphisms of $\Delta$. Such theories admit global solutions which
correspond to ‘classical electromagnetic U-folds’ – a notion which had been
used previously in the physics literature without being given a mathematically
clear definition.
While the treatment found in the physics literature discusses local gauge
potentials and local gauge transformations (which are described using
differential forms defined locally on spacetime), a fully general and
manifestly duality-covariant geometric formulation of such theories in terms
of connections on an appropriate principal bundle has not yet been given. Such
a formulation is required by the Aharonov-Bohm effect [1] and by Dirac-
Schwinger-Zwanziger (DSZ) “quantization” [23, 42, 47], which force the field
strengths to obey an _integrality condition_ implied by the requirement of a
consistent coupling between classical gauge fields and quantum charge
carriers. Imposing this condition restricts the set of allowed field
strengths, defining a so-called prequantum abelian gauge theory. As pointed out
in [32], the general formulation of this condition involves the choice of a
Dirac system $\mathcal{L}$ for $\Delta$, defined as a $\mathcal{D}$-flat fiber
sub-bundle of $\mathcal{S}$ whose fibers are full symplectic lattices inside
the fibers of $\mathcal{S}$. Every Dirac system has a type
$\mathfrak{t}=(t_{1},\ldots,t_{n})$, where $t_{1},\ldots,t_{n}$ are positive
integers such that $t_{1}|t_{2}|\ldots|t_{n}$. A duality structure is called
semiclassical if it admits a Dirac system. The choice of a Dirac system
$\mathcal{L}$ refines a semiclassical duality structure $\Delta$ to an
integral duality structure
${\bm{\Delta}}=(\mathcal{S},\omega,\mathcal{D},\mathcal{L})$ and reduces the
group of duality transformations to a discrete group, which generalizes the
arithmetic duality group known from the local formulation found in the physics
literature. In the global setting, the Dirac system replaces the “Dirac
lattice” of the local approach and renders the DSZ condition rather subtle,
since it requires the use of cohomology with local coefficients (see [28, 43,
48, 44]). In particular, the bundle-valued two-form $\mathcal{V}$ (which
describes all electromagnetic and magnetoelectric field strengths
simultaneously) cannot in general be the curvature of a connection defined on
a principal torus bundle. Indeed, the vector bundle $\mathcal{S}$ is generally
non-trivial, while the adjoint bundle of any principal torus bundle is
trivial.
In the present paper, we ‘solve’ the DSZ integrality condition determined by a
Dirac system $\mathcal{L}$ by giving the geometric formulation of prequantum
abelian gauge theory in terms of connections defined on an adequate principal
bundle $P$. More precisely, we show that the combined field strength
$\mathcal{V}$ is the curvature of a connection defined on a principal bundle
with non-abelian and disconnected structure group $G$, whose connected
component of the identity is a torus group and whose group of connected
components is a modified Siegel modular group. This shows that the manifestly
duality-covariant formulation of such gauge theories is not truly abelian,
since it involves a non-abelian structure group. Rather, the structure group
$G$ is weakly abelian, in the sense that only its Lie algebra is abelian. As a
consequence, the gauge group of $P$ has a “discrete remnant” which encodes
equivariance of the theory under electromagnetic duality transformations.
Using this framework, we show that the electromagnetic duality transformations
of the theory are given by (non-infinitesimal) gauge transformations, a fact
that provides a geometric interpretation for the former. Principal connections
$\mathcal{A}$ on $P$ describe the combined electromagnetic and magnetoelectric
gauge potentials of the theory. The polarized self-duality condition becomes a
first-order differential equation for $\mathcal{A}$ which is reminiscent of
the instanton equations, though the signature of our spacetime is Lorentzian
and the theory is of second order. These results provide a global geometric
formulation of prequantum abelian gauge theory as a theory of principal
connections, in a manner that is manifestly covariant under electromagnetic
duality.
To extract the principal bundle description, we proceed as follows. We first
show that integral duality structures of rank $2n$ and type $\mathfrak{t}$ are
associated to Siegel systems of rank $2n$, which we define to be local systems
$Z$ of free abelian groups of rank $2n$ whose monodromy is contained in the
modified Siegel modular group
$\Gamma=\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ of type $\mathfrak{t}$. The
latter is defined as the group of automorphisms of a full symplectic lattice
of rank $2n$ and type $\mathfrak{t}$ and is a subgroup of
$\mathrm{Sp}(2n,\mathbb{R})$ which contains the usual Siegel modular group
$\mathrm{Sp}(2n,\mathbb{Z})$, to which it reduces when $\mathfrak{t}$
coincides with the principal type $\delta=(1,\ldots,1)$. The vector bundle of
the duality structure is given by
$\mathcal{S}=Z\otimes_{\mathbb{Z}}\mathbb{R}$ while the flat connection
$\mathcal{D}$ is induced by the monodromy connection of $Z$. A classical field
strength configuration $\mathcal{V}$ is called integral if it satisfies the
DSZ quantization condition, which states that the cohomology class
$[\mathcal{V}]_{\mathcal{D}}\in H^{2}_{\mathcal{D}}(M,\mathcal{S})$ of
$\mathcal{V}$ with respect to $\mathrm{d}_{\mathcal{D}}$ lies in the image of
the local coefficient cohomology group $H^{2}(M,Z)$ through the natural
morphism $H^{2}(M,Z)\rightarrow H^{2}_{\mathcal{D}}(M,\mathcal{S})$, where
$H^{2}_{\mathcal{D}}(M,\mathcal{S})$ is the second cohomology of the
complex $(\Omega^{\ast}(M,\mathcal{S}),\mathrm{d}_{\mathcal{D}})$. We then
show that $\mathcal{V}$ is integral if and only if it coincides with the
adjoint curvature $\mathcal{V}_{\mathcal{A}}$ of a connection $\mathcal{A}$
defined on a principal bundle $P$ (called a Siegel bundle) whose structure
group is the group $G=\mathrm{Aff}_{\mathfrak{t}}$ of automorphisms of an
integral affine symplectic torus and whose adjoint bundle identifies with
$\mathcal{S}$. The group $\mathrm{Aff}_{\mathfrak{t}}$ is a semidirect product
$A\rtimes\Gamma$ of the torus group $A=\mathbb{R}^{2n}/\mathbb{Z}^{2n}$ with
the modified Siegel modular group
$\Gamma=\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ and hence has a countable
group of components which is isomorphic with $\Gamma$. The
$\mathrm{d}_{\mathcal{D}}$-cohomology class of $\mathcal{V}_{\mathcal{A}}$
coincides with the image in $H^{2}_{\mathcal{D}}(M,\mathcal{S})$ of the
twisted Chern class $c(P)\in H^{2}(M,Z)$ of $P$. The Siegel system $Z$ (and
hence the duality structure $\Delta$) is uniquely determined by the Siegel
bundle $P$ and Siegel bundles are determined up to isomorphism by the pair
$(Z,c)$. The classifying space of such principal bundles is a twisted
Eilenberg-MacLane space [27] of type $2$, namely a
$K(\mathbb{Z}^{2n},2)$-fibration over
$K(\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z}),1)$ whose $\kappa$-invariant is
trivial and whose monodromy is induced by the fundamental action of
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ on $\mathbb{Z}^{2n}$.
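As a minimal illustration of the role of the type (this example is ours, not taken from [32]), consider the rank-two case:

```latex
% n = 1, type t = (t_1): the full symplectic lattice is L = Z e + Z f with
% omega(e,f) = t_1, so in the basis (e,f) the Gram matrix is t_1 J with
% J = ( 0  1 ; -1  0 ). An integer matrix A is an automorphism iff
A^{T}\,(t_{1}J)\,A=t_{1}J\;\Longleftrightarrow\;A^{T}JA=J\;\Longleftrightarrow\;\det A=1\,,
% so Sp_t(2,Z) is isomorphic to SL(2,Z) = Sp(2,Z) for every type t = (t_1);
% only its embedding into Sp(2,R) depends on t_1. Genuinely different
% arithmetic groups appear for n >= 2.
```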
There exists a large group of classical ‘pseudo-duality’ transformations which
identifies the spaces of classical field strength configurations and solutions
of _different_ abelian gauge theories. Locally, such transformations involve
matrices $T\in\mathrm{Sp}(2n,\mathbb{R})$ acting on the field strengths and
the DSZ integrality condition with respect to a Dirac lattice of type
$\mathfrak{t}$ restricts $T$ to lie in the arithmetic group
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$. As already pointed out in [32],
the global theory of such duality transformations is much richer. We develop
this theory from scratch, defining a hierarchy of duality groups and providing
short exact sequences to compute them. We exploit this geometric framework to
show that the electromagnetic duality transformations of abelian gauge theory
correspond to gauge transformations of Siegel bundles, elucidating the
geometric origin of electromagnetic duality. In particular, we emphasize the
role played by the _type_ $\mathfrak{t}$ of the underlying Dirac system and by
the monodromy representation of the Siegel system $Z$ in the correct
definition and computation of the discrete duality group of the prequantum
theory. The fact that a symplectic lattice need not be of principal type is
well-known in the theory of symplectic tori as well as in that of Abelian
varieties, where non-principal types correspond to non-principal
polarizations. The physical implications of non-principal types have been
systematically explored only recently in the context of supersymmetric field
theories, see for instance [6, 7, 4, 5, 8, 14, 15] and references therein.
The construction of the present paper produces a class of geometric gauge
models which is amenable to the methods of mathematical gauge theory [24]. In
particular, it allows for the study of moduli spaces of solutions (which, as
shown in [32], can be viewed as ‘electromagnetic U-folds’) using techniques
borrowed from the theory of instantons. In this spirit, we perform the time-
like reduction of the polarized self-duality equations, obtaining a novel
system of Bogomolny-like equations. Solutions of these equations define
polarized abelian dyons, of which we describe a few examples.
The paper is organized as follows. Section 1 recalls the description of
classical abelian gauge theories with arbitrary duality structure in terms of
combined electromagnetic and magnetoelectric field strengths, following [32].
In the same section, we describe the hierarchy of duality groups of such
theories and give a few short exact sequences to characterize them. Section 2
discusses the DSZ integrality condition for general duality structures,
relating the notion of Dirac system (which appears in its formulation) to
various equivalent objects. Section 3 discusses Siegel bundles and connections,
showing in particular that a Siegel bundle induces a canonical integral duality
structure. In Section 4, we give the formulation of “prequantum” abelian gauge
theory (defined as classical abelian gauge theory supplemented by the DSZ
integrality condition) as a theory of principal connections on a Siegel
bundle. Section 5 discusses the time-like dimensional reduction of the
polarized self-duality equations, which leads to the notion of polarized
abelian dyon. In the same section, we construct examples of polarized abelian
dyons on the punctured affine 3-space. Appendix A recalls the duality-
covariant formulation of abelian gauge theory on contractible Lorentzian four-
manifolds, starting from the local treatment found in the physics literature.
Appendix B discusses integral symplectic spaces and integral symplectic tori,
introducing certain notions used in the main text.
### 0.1. Notations and conventions
All manifolds and fiber bundles considered in the paper are smooth, Hausdorff
and paracompact. The manifold denoted by $M$ is assumed to be connected. In
our convention, a Lorentzian four-manifold $(M,g)$ has “mostly plus” signature
$(3,1)$. If $E$ is a fiber bundle defined on a manifold $M$, we denote by
$\mathcal{C}^{\infty}(M,E)$ the set of globally-defined smooth sections of $E$
and by $\mathcal{C}^{\infty}(E)$ the sheaf of smooth sections of $E$.
###### Acknowledgements.
We thank V. Cortés for useful comments and discussions. C.S.S. would like to
thank P. C. Argyres, A. Bourget, M. Martone and R. Szabo for useful comments
and suggestions. Part of the work of C. I. L. on this project was supported by
grant IBS-R003-S1. The work of C.S.S. is supported by Germany's Excellence
Strategy _Quantum Universe_ - 390833306.
## 1\. Classical abelian gauge theory
In this section we introduce the configuration space and equations of motion
defining classical abelian gauge theory on an oriented Lorentzian four-
manifold $(M,g)$ and discuss its global dualities and symmetries. Appendix A
gives the description of this theory for the special case when $M$ is
contractible, which recovers the local treatment found in the physics
literature. The global formulation presented in this section was proposed in a
wider context in reference [32], to which we refer the reader for certain
details. The definition of classical abelian gauge theory on $(M,g)$ is given
in terms of field strengths and relies on the choice of a _duality structure_
(defined as a flat symplectic vector bundle
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ on $M$) equipped with a taming
$\mathcal{J}$ of $(\mathcal{S},\omega)$, which encodes the gauge-kinetic
functions (coupling constants and theta angles) in a globally-correct and
frame-independent manner. The equations of motion of the theory are encoded by
the $\mathcal{J}$-polarized self-duality condition for the combined
electromagnetic and magnetoelectric field strengths, which are modeled
mathematically by a $\mathcal{D}$-flat two-form valued in the underlying
vector bundle $\mathcal{S}$ of the duality structure.
### 1.1. Duality structures
Let $M$ be a connected smooth manifold and $(\mathcal{S},\omega)$ be a
symplectic vector bundle defined on $M$ with symplectic structure $\omega$.
Since $\mathrm{Sp}(2n,\mathbb{R})$ and $\mathrm{GL}(n,\mathbb{C})$ are
homotopy equivalent to their common maximal compact subgroup $\mathrm{U}(n)$,
the classifications of symplectic, complex and Hermitian vector bundles defined
on $M$ are equivalent. In particular, any complex vector bundle admits a
Hermitian pairing and any symplectic vector bundle admits a complex structure
which is compatible with its symplectic pairing and makes it into a Hermitian
vector bundle. Thus a real vector bundle of even rank admits a symplectic
pairing if and only if it admits a complex structure. The classifying spaces
$\mathrm{B}\mathrm{Sp}(2n,\mathbb{R})$ and $\mathrm{B}\mathrm{U}(n)$ are
homotopy equivalent, hence the fundamental characteristic classes of a
symplectic vector bundle $(\mathcal{S},\omega)$ are Chern classes, which we
denote by $c_{k}(\mathcal{S},\omega)$.
###### Remark 1.1.
Suppose that $\dim M=4$ and let $\mathcal{S}$ be an oriented real vector
bundle of rank $2n$ defined on $M$, thus $w_{1}(\mathcal{S})=0$. If
$\mathcal{S}$ admits a complex structure $\mathcal{J}$ inducing its
orientation, then its third Stiefel-Whitney class $w_{3}(\mathcal{S})$ must
vanish and its even Stiefel-Whitney classes $w_{2}(\mathcal{S})$ and
$w_{4}(\mathcal{S})$ must coincide with the $\mathrm{mod}~{}2$ reductions of
the Chern classes $c_{1}(\mathcal{S},\mathcal{J})$ and
$c_{2}(\mathcal{S},\mathcal{J})$ of the complex rank $n$ vector bundle defined
by $\mathcal{S}$ and $\mathcal{J}$. In particular, the third integral Stiefel-
Whitney class $W_{3}(\mathcal{S})\in H^{3}(M,\mathbb{Z})$ must vanish, i.e.
$\mathcal{S}$ must admit a $\mathrm{Spin}^{c}$ structure (notice that
$W_{5}(\mathcal{S})$ vanishes for dimension reasons). These conditions are not
always sufficient. To state the necessary and sufficient conditions (see
[37]), we distinguish the cases:
* •
$n=1$, i.e. ${\rm rk}\mathcal{S}=2$. Then $\mathcal{S}$ always admits a
complex structure (equivalently, a symplectic pairing) which induces its
orientation (since $\mathrm{SO}(2)=\mathrm{U}(1)$).
* •
$n=2$, i.e. ${\rm rk}\mathcal{S}=4$. In this case, $\mathcal{S}$ admits a
complex structure (equivalently, a symplectic pairing) which induces its
orientation if and only if it satisfies $W_{3}(\mathcal{S})=0$ and
$\mathfrak{c}^{4}(\mathcal{S})=0$, where $\mathfrak{c}^{4}(\mathcal{S})\in
H^{4}(M,\mathbb{Z})$ is an integral obstruction class described in [37,
Theorem II].
* •
$n\geq 3$, i.e. ${\rm rk}\mathcal{S}\geq 6$. Then $\mathcal{S}$ admits a
complex structure (equivalently, a symplectic pairing $\omega$) which induces
its orientation if and only if $W_{3}(\mathcal{S})=0$.
Notice that an oriented real vector bundle $\mathcal{S}$ of rank four on a
four-manifold $M$ is determined up to isomorphism by its first Pontryagin
class $p_{1}(\mathcal{S})\in H^{4}(M,\mathbb{Z})$, its second Stiefel-Whitney
class $w_{2}(\mathcal{S})\in H^{2}(M,\mathbb{Z}_{2})$ and its Euler class
$e(\mathcal{S})\in H^{4}(M,\mathbb{Z})$.
###### Definition 1.2.
A duality structure $\Delta\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(\mathcal{S},\omega,\mathcal{D})$ on $M$ is a symplectic
vector bundle $(\mathcal{S},\omega)$ over $M$ equipped with a flat connection
$\mathcal{D}:\mathcal{C}^{\infty}(M,\mathcal{S})\rightarrow\Omega^{1}(M,\mathcal{S})$
which preserves $\omega$. The rank of $\Delta$ is the rank of the vector
bundle $\mathcal{S}$, which is necessarily even.
Notice that the Chern classes $c_{1}(\mathcal{S},\omega)$ and
$c_{2}(\mathcal{S},\omega)$ of the underlying symplectic vector bundle of a
duality structure must be torsion classes.
###### Definition 1.3.
A based isomorphism of duality structures from
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ to
$\Delta^{\prime}=(\mathcal{S}^{\prime},\omega^{\prime},\mathcal{D}^{\prime})$
is a based isomorphism of vector bundles
$f:\mathcal{S}\xrightarrow{\sim}\mathcal{S}^{\prime}$ which satisfies the
conditions $\omega^{\prime}\circ(f\otimes f)=\omega$ and
$\mathcal{D}^{\prime}\circ f=(\mathrm{id}_{T^{\ast}M}\otimes
f)\circ\mathcal{D}$.
Here and below, we let based morphisms of vector bundles act on sections in
the natural manner. We denote by $\mathrm{Dual}(M)$ the groupoid of duality
structures defined on $M$ and based isomorphisms of such.
The group $\mathrm{Aut}_{b}(\mathcal{S},\omega)$ of based automorphisms of a
symplectic vector bundle $(\mathcal{S},\omega)$ is called its group of
symplectic gauge transformations. Such transformations $\varphi$ act on the
set of linear connections $\mathcal{D}$ defined on $\mathcal{S}$ through:
$\mathcal{D}\rightarrow(\mathrm{id}_{T^{\ast}M}\otimes\varphi)\circ\mathcal{D}\circ\varphi^{-1}~{}~{}$
and preserve the set of flat symplectic connections. The group
$\mathrm{Aut}_{b}(\Delta)$ of based automorphisms of a duality structure
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ coincides with the stabilizer of
$\mathcal{D}$ in $\mathrm{Aut}_{b}(\mathcal{S},\omega)$. For any such duality
structure, we have:
$c_{1}(\mathcal{S},\omega)=\delta({\hat{c}}_{1}(\mathcal{D}))\,,\quad
c_{2}(\mathcal{S},\omega)=\delta({\hat{c}}_{2}(\mathcal{D}))~{}~{},$
where ${\hat{c}}_{1}(\mathcal{D})\in H^{1}(M,\mathrm{U}(1))$ and
${\hat{c}}_{2}(\mathcal{D})\in H^{3}(M,\mathrm{U}(1))$ are the Cheeger-Chern-
Simons invariants of the flat connection $\mathcal{D}$ and
$\delta:H^{i}(M,\mathbb{C}/\mathbb{Z})\rightarrow H^{i+1}(M,\mathbb{Z})$ are
the Bockstein morphisms in the long exact sequence:
$\ldots\rightarrow H^{i}(M,\mathbb{Z})\rightarrow
H^{i}(M,\mathbb{R})\stackrel{{\scriptstyle\exp_{\ast}}}{{\rightarrow}}H^{i}(M,\mathrm{U}(1))\stackrel{{\scriptstyle\delta}}{{\rightarrow}}H^{i+1}(M,\mathbb{Z})\rightarrow
H^{i+1}(M,\mathbb{R})\rightarrow\ldots$
induced by the exponential sequence:
$0\rightarrow\mathbb{Z}\rightarrow\mathbb{R}\rightarrow\mathrm{U}(1)\rightarrow
0~{}~{}.$
The Cheeger-Chern-Simons invariants depend only on the gauge equivalence class
of $\mathcal{D}$.
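The torsion property of $c_{1}(\mathcal{S},\omega)$ and $c_{2}(\mathcal{S},\omega)$ noted after Definition 1.2 follows directly from this sequence; for completeness, the one-line argument:

```latex
% Exactness of the long sequence gives
\mathrm{im}\,\delta=\ker\!\left(H^{i+1}(M,\mathbb{Z})\rightarrow H^{i+1}(M,\mathbb{R})\right)\,,
% and (for M of finite type) this kernel is precisely the torsion subgroup of
% H^{i+1}(M,Z). Since c_1 = delta(c^_1(D)) and c_2 = delta(c^_2(D)), both
% Chern classes of a duality structure are torsion classes.
```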
### 1.2. The twisted de Rham complex of a duality structure
Given a duality structure $\Delta=(\mathcal{S},\omega,\mathcal{D})$ on a
connected manifold $M$, let
$\mathrm{d}_{\mathcal{D}}:\Omega(M,\mathcal{S})\rightarrow\Omega(M,\mathcal{S})$
be the exterior covariant derivative twisted by $\mathcal{D}$ (notice that
$\mathrm{d}_{\mathcal{D}}|_{\Omega^{0}(M,\mathcal{S})}=\mathcal{D}$). This
defines a cochain complex:
$0\to\mathcal{C}^{\infty}(M,\mathcal{S})\xrightarrow{\mathcal{D}}\Omega^{1}(M,\mathcal{S})\xrightarrow{\mathrm{d}_{\mathcal{D}}}\Omega^{2}(M,\mathcal{S})\xrightarrow{\mathrm{d}_{\mathcal{D}}}\Omega^{3}(M,\mathcal{S})\xrightarrow{\mathrm{d}_{\mathcal{D}}}\Omega^{4}(M,\mathcal{S})\to
0\,,$ (1)
whose total cohomology group (viewed as a $\mathbb{Z}$-graded abelian group)
we denote by $H_{\mathcal{D}}^{\ast}(M,\mathcal{S})$.
###### Definition 1.4.
The vector spaces $H_{\mathcal{D}}^{k}(M,\mathcal{S})$ (where $k=0,\ldots,4$)
are called the twisted de Rham cohomology spaces of the duality structure
$\Delta$.
Let $\Omega^{0}_{\mathrm{flat}}(\mathcal{S})=\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{S})$
be the locally-constant sheaf of $\mathcal{D}$-flat sections of $\mathcal{S}$.
Then a straightforward modification of the Poincaré lemma shows that the
complex of sheaves:
$0\to\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{S})\hookrightarrow\Omega^{0}(\mathcal{S})\xrightarrow{\mathcal{D}}\Omega^{1}(\mathcal{S})\xrightarrow{\mathrm{d}_{\mathcal{D}}}\Omega^{2}(\mathcal{S})\xrightarrow{\mathrm{d}_{\mathcal{D}}}\Omega^{3}(\mathcal{S})\xrightarrow{\mathrm{d}_{\mathcal{D}}}\Omega^{4}(\mathcal{S})\to
0$
is exact and hence provides a resolution of the locally-constant sheaf
$\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{S})$. Since $M$ is paracompact,
each of the sheaves $\Omega^{k}(\mathcal{S})$ is acyclic. Thus the sheaf
cohomology of $\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{S})$ can be computed as the
cohomology of the complex (1). This gives a natural isomorphism of graded
vector spaces:
$H^{\ast}_{\mathcal{D}}(M,\mathcal{S})\simeq
H^{\ast}(M,\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{S}))~{}~{},$
where the right hand side denotes sheaf cohomology.
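For orientation, here is a standard computation of these twisted cohomology spaces in the simplest nontrivial case (our illustration, using the identification with local-system cohomology):

```latex
% Let M = S^1 and let Delta have rank 2n with holonomy T in Sp(2n,R) around
% the generator of pi_1(S^1). The locally-constant sheaf of flat sections has
% stalk R^{2n} with monodromy T, and
H^{0}_{\mathcal{D}}(S^{1},\mathcal{S})\simeq\ker(T-\mathrm{id})\,,\qquad
H^{1}_{\mathcal{D}}(S^{1},\mathcal{S})\simeq\mathrm{coker}(T-\mathrm{id})\,.
% For the trivial duality structure (T = id) this reduces to the untwisted
% de Rham cohomology H^k(S^1) tensored with R^{2n}.
```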
### 1.3. Flat systems of symplectic vector spaces
Let $\Pi_{1}(M)$ be the fundamental groupoid of a connected manifold $M$,
whose objects are the points of $M$ and whose set of morphisms
$\Pi_{1}(m,m^{\prime})$ from $m$ to $m^{\prime}$ is the set of homotopy
classes of piecewise-smooth curves starting at $m$ and ending at $m^{\prime}$.
Let $\mathrm{Symp}$ be the groupoid of finite-dimensional symplectic vector
spaces (see Appendix B). The functor category $[\Pi_{1}(M),\mathrm{Symp}]$ is
a groupoid since all its morphisms (which are natural transformations) are
invertible.
###### Definition 1.5.
A flat system of symplectic vector spaces (or $\mathrm{Symp}$-valued local
system) on $M$ is a functor $F:\Pi_{1}(M)\rightarrow\mathrm{Symp}$, i.e. an
object of the groupoid $[\Pi_{1}(M),\mathrm{Symp}]$. An isomorphism of such
systems is an isomorphism in this groupoid. A flat system $F$ of symplectic
vector spaces has rank $2n$ if $\dim F(m)=2n$ for all $m\in M$; the rank is
constant on $M$ since $M$ is connected.
Recall that a symplectic representation of a group $G$ is a representation
$\rho:G\rightarrow\mathrm{Aut}(V,\omega)$ of $G$ through automorphisms of a
finite-dimensional symplectic vector space $(V,\omega)$. An equivalence (or
isomorphism) of symplectic representations from
$\rho:G\rightarrow\mathrm{Aut}(V,\omega)$ to
$\rho^{\prime}:G\rightarrow\mathrm{Aut}(V^{\prime},\omega^{\prime})$ is an
isomorphism of symplectic vector spaces
$\varphi:(V,\omega)\xrightarrow{\sim}(V^{\prime},\omega^{\prime})$ such that
$\varphi\circ\rho(g)=\rho^{\prime}(g)\circ\varphi$ for all $g\in G$.
We denote by $\mathrm{SympRep}(G)$ the groupoid of symplectic representations
of $G$ equipped with this notion of isomorphism.
###### Definition 1.6.
Let $F$ be a flat system of symplectic vector spaces on $M$. The holonomy
representation of $F$ at a point $m\in M$ is the morphism of groups
$\mathrm{hol}_{m}(F):\pi_{1}(M,m)=\Pi_{1}(m,m)\rightarrow\mathrm{Aut}(F(m))=\mathrm{Symp}(F(m),F(m))$
defined through:
$\mathrm{hol}_{m}(F)(\mathbf{c})=F(\mathbf{c})\,,\quad\forall\,\,\mathbf{c}\in\pi_{1}(M,m)~{}~{}.$
The holonomy group of $F$ at $m$ is the subgroup of $\mathrm{Aut}(F(m))$
defined through:
$\mathrm{Hol}_{m}(F)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{im}(\mathrm{hol}_{m}(F))~{}~{}.$
Notice that $\mathrm{hol}_{m}(F)$ is a symplectic representation of
$\pi_{1}(M,m)$ on $F(m)$. Any homotopy class
$\bm{\gamma}\in\Pi_{1}(m,m^{\prime})$ of paths from $m$ to $m^{\prime}$
induces an isomorphism of symplectic vector spaces
$F(\bm{\gamma}):F(m)\stackrel{{\scriptstyle\sim}}{{\rightarrow}}F(m^{\prime})$
which intertwines the holonomy representations of $F$ at the points $m$ and
$m^{\prime}$:
$F(\bm{\gamma})\circ\mathrm{hol}_{m}(F)(\mathbf{c})=\mathrm{hol}_{m^{\prime}}(F)(\bm{\gamma}\,\mathbf{c}\,\bm{\gamma}^{-1})\circ
F(\bm{\gamma})\,,\quad\forall\,\,\mathbf{c}\in\pi_{1}(M,m)~{}~{}.$
Since $M$ is connected, it follows that the holonomy representation of $F$ at
a fixed basepoint $m_{0}\in M$ determines its holonomy representation at any
other point of $M$. Moreover, the isomorphism class of the holonomy group of
$F$ at $m_{0}$ does not depend on the choice of $m_{0}\in M$.
###### Proposition 1.7.
Let $m_{0}\in M$ be any point of $M$. Then the map $\mathrm{hol}_{m_{0}}$
which sends a $\mathrm{Symp}$-valued local system $F$ defined on $M$ to its
holonomy representation $\mathrm{hol}_{m_{0}}(F)$ at $m_{0}$ and sends an
isomorphism $f:F\xrightarrow{\sim}F^{\prime}$ of $\mathrm{Symp}$-valued local
systems to the equivalence of representations
$\mathrm{hol}_{m_{0}}(f)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}f(m_{0}):\mathrm{hol}_{m_{0}}(F)\xrightarrow{\sim}\mathrm{hol}_{m_{0}}(F^{\prime})$
defines an equivalence of groupoids from $[\Pi_{1}(M),\mathrm{Symp}]$ to
$\mathrm{SympRep}(\pi_{1}(M,m_{0}))$. In particular, isomorphism classes of
flat systems of symplectic vector spaces of rank $2n$ defined on $M$ are in
bijection with the points of the character variety:
$\mathfrak{R}(\pi_{1}(M,m_{0}),\mathrm{Sp}(2n,\mathbb{R}))\stackrel{{\scriptstyle{\rm
def.}}}{{=}}{\rm
Hom}(\pi_{1}(M,m_{0}),\mathrm{Sp}(2n,\mathbb{R}))/\mathrm{Sp}(2n,\mathbb{R})\,,$
(2)
where $\mathrm{Sp}(2n,\mathbb{R})$ acts on ${\rm
Hom}(\pi_{1}(M,m_{0}),\mathrm{Sp}(2n,\mathbb{R}))$ through its adjoint
representation.
###### Proof.
An isomorphism $f:F\xrightarrow{\sim}F^{\prime}$ of $\mathrm{Symp}$-valued
local systems is a collection of isomorphisms of symplectic vector spaces
$f(m):F(m)\rightarrow F^{\prime}(m)$ for all $m\in M$ which satisfies:
$f(m^{\prime})\circ F(\bm{\gamma})=F^{\prime}(\bm{\gamma})\circ
f(m)\,,\quad\forall\,\,\bm{\gamma}\in\Pi_{1}(m,m^{\prime})\,,\quad\forall\,\,m,m^{\prime}\in
M~{}~{}.$
Taking $m^{\prime}=m=m_{0}$ in this relation shows that $f(m_{0})$ is an
equivalence between the holonomy representations of $F$ and $F^{\prime}$ at
$m_{0}$:
$\mathrm{hol}_{m_{0}}(F^{\prime})(\mathbf{c})=f(m_{0})\circ\mathrm{hol}_{m_{0}}(F)(\mathbf{c})\circ
f(m_{0})^{-1}\,,\quad\forall\,\,\mathbf{c}\in\pi_{1}(M,m_{0})~{}~{}.$
Conversely, it is easy to see that any such equivalence of representations
extends to an isomorphism from $F$ to $F^{\prime}$. ∎
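A simple instance of Proposition 1.7 (our example): for the circle, the character variety is just a set of conjugacy classes.

```latex
% M = S^1 has pi_1(S^1, m_0) = Z, so a Symp-valued local system of rank 2n is
% determined by the holonomy T = hol_{m_0}(F)(1) of a generator, and
\mathfrak{R}\big(\mathbb{Z},\mathrm{Sp}(2n,\mathbb{R})\big)\simeq
\mathrm{Sp}(2n,\mathbb{R})/\mathrm{conjugation}\,,
% i.e. isomorphism classes of rank-2n flat systems of symplectic vector spaces
% on the circle are in bijection with conjugacy classes of symplectic matrices.
```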
### 1.4. The flat system of symplectic vector spaces defined by a duality
structure
###### Definition 1.8.
The parallel transport functor
$\mathcal{T}_{\Delta}\in[\Pi_{1}(M),\mathrm{Symp}]$ of a duality structure
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ is the functor which associates to
each point $m\in M$ the symplectic vector space
$\mathcal{T}_{\Delta}(m)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(\mathcal{S}_{m},\omega_{m})$ and to the homotopy class (with
fixed endpoints) $\mathbf{c}\in\Pi_{1}(M)$ of any piecewise-smooth curve
$c\colon[0,1]\to M$ the isomorphism of symplectic vector spaces:
$\mathcal{T}_{\Delta}(\mathbf{c}):(\mathcal{S}_{c(0)},\omega_{c(0)})\xrightarrow{\sim}(\mathcal{S}_{c(1)},\omega_{c(1)})$
given by the parallel transport of $\mathcal{D}$.
Notice that $\mathcal{T}_{\Delta}:\Pi_{1}(M)\rightarrow\mathrm{Symp}$ is a
flat system of symplectic vector spaces. The map which sends $\Delta$ to
$\mathcal{T}_{\Delta}$ extends in an obvious manner to an equivalence of
groupoids:
$\mathcal{T}:\mathrm{Dual}(M)\stackrel{{\scriptstyle\sim}}{{\rightarrow}}[\Pi_{1}(M),\mathrm{Symp}]~{}~{},$
which sends a based isomorphism
$f:\Delta=(\mathcal{S},\omega,\mathcal{D})\stackrel{{\scriptstyle\sim}}{{\rightarrow}}\Delta^{\prime}=(\mathcal{S}^{\prime},\omega^{\prime},\mathcal{D}^{\prime})$
of duality structures to the invertible natural transformation
$\mathcal{T}(f):\mathcal{T}_{\Delta}\xrightarrow{\sim}\mathcal{T}_{\Delta^{\prime}}$
given by the isomorphisms of symplectic vector spaces:
$\mathcal{T}(f)(m)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}f_{m}:(\mathcal{S}_{m},\omega_{m})\stackrel{{\scriptstyle\sim}}{{\rightarrow}}(\mathcal{S}^{\prime}_{m},\omega^{\prime}_{m})\,,\quad\forall\,\,m\in
M\,.$
These isomorphisms intertwine $\mathcal{T}_{\Delta}(\mathbf{c})$ and
$\mathcal{T}_{\Delta^{\prime}}(\mathbf{c})$ for any
$\mathbf{c}\in\Pi_{1}(m,m^{\prime})$ since $f$ satisfies
$\mathcal{D}^{\prime}\circ f=(\mathrm{id}_{T^{\ast}M}\otimes
f)\circ\mathcal{D}$. Hence one can identify duality structures and systems of
flat symplectic vector spaces defined on $M$. For any duality structure
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ on $M$, the holonomy representation
$\mathrm{hol}_{m}(\mathcal{D})$ of the flat connection $\mathcal{D}$ at $m\in
M$ coincides with the holonomy representation of the flat system of symplectic
vector spaces defined by $\Delta$:
$\mathrm{hol}_{m}(\mathcal{D})=\mathrm{hol}_{m}(\mathcal{T}_{\Delta})~{}~{}.$
In particular, the holonomy groups of $\mathcal{D}$ and $\mathcal{T}_{\Delta}$
at $m\in M$ coincide:
$\mathrm{Hol}_{m}(\mathcal{D})=\mathrm{Hol}_{m}(\mathcal{T}_{\Delta})~{}~{}.$
Since $M$ is connected, Proposition 1.7 implies that isomorphism classes of
duality structures defined on $M$ are in bijection with the character variety
(2).
### 1.5. Trivial duality structures
###### Definition 1.9.
Let $\Delta=(\mathcal{S},\omega,\mathcal{D})$ be a duality structure defined
on $M$. We say that $\Delta$ is:
1. (1)
topologically trivial, if the vector bundle $\mathcal{S}$ is trivializable,
i.e. if it admits a global frame.
2. (2)
_symplectically trivial_ if the symplectic vector bundle
$(\mathcal{S},\omega)$ is isomorphic to the trivial symplectic vector bundle,
i.e. if it admits a global symplectic frame (a global frame in which $\omega$
has the standard form).
3. (3)
trivial (or _holonomy trivial_), if $\Delta$ admits a global flat symplectic
frame, i.e. if the holonomy group of $\mathcal{D}$ is the trivial group.
A symplectically trivial duality structure is automatically topologically
trivial, while a holonomy trivial duality structure is symplectically trivial.
If $M$ is simply-connected then every duality structure is holonomy trivial. A
global flat symplectic frame of a holonomy-trivial duality structure
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ of rank $2n$ has the form:
$\mathcal{E}=(e_{1},\ldots,e_{n},f_{1},\ldots,f_{n})~{}~{},$
where $e_{i},f_{j}$ are $\mathcal{D}$-flat sections of $\mathcal{S}$ such
that:
$\omega(e_{i},e_{j})=\omega(f_{i},f_{j})=0~{}~{},~{}~{}\omega(e_{i},f_{j})=-\omega(f_{i},e_{j})=\delta_{ij}\,,\quad\forall\,\,i,j=1,\ldots,n~{}~{}.$
Any choice of such a frame induces a trivialization isomorphism
$\tau_{\mathcal{E}}:\Delta\xrightarrow{\sim}\Delta_{n}$ between $\Delta$ and
the canonical trivial duality structure:
$\Delta_{n}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(\underline{\mathbb{R}}^{2n},\underline{\omega}_{2n},\mathrm{d})\,,$
(3)
of rank $2n$, where $\underline{\omega}_{2n}$ is the constant symplectic
pairing induced on the trivial vector bundle
$\underline{\mathbb{R}}^{2n}=M\times\mathbb{R}^{2n}$ by the canonical
symplectic pairing $\omega_{2n}$ of $\mathbb{R}^{2n}$ and
$\mathrm{d}\colon\mathcal{C}^{\infty}(M,\underline{\mathbb{R}}^{2n})\rightarrow\Omega^{1}(M,\underline{\mathbb{R}}^{2n})$
is the ordinary differential. Any two flat symplectic frames $\mathcal{E}$ and
$\mathcal{E}^{\prime}$ of $\Delta$ are related by a based automorphism
$T\in\mathrm{Aut}_{b}(\Delta)$:
$\mathcal{E}^{\prime}=T\mathcal{E}\,,$
which corresponds to the constant automorphism
$\tau_{\mathcal{E}^{\prime}}\circ\tau_{\mathcal{E}}^{-1}\in\mathrm{Aut}_{b}(\Delta_{n})$
of $\Delta_{n}$ induced by an element
${\hat{T}}\in\mathrm{Sp}(2n,\mathbb{R})$.
### 1.6. Electromagnetic structures
As before, let $M$ be a connected manifold.
###### Definition 1.10.
An electromagnetic structure defined on $M$ is a pair
$\Xi=(\Delta,\mathcal{J})$, where $\Delta=(\mathcal{S},\omega,\mathcal{D})$ is
a duality structure on $M$ and $\mathcal{J}\in\mathrm{End}(\mathcal{S})$ is a
taming of the symplectic vector bundle $(\mathcal{S},\omega)$, i.e. a complex
structure on $\mathcal{S}$ which:
* •
is compatible with $\omega$, i.e. it satisfies:
$\omega(\mathcal{J}x,\mathcal{J}y)=\omega(x,y)~{}~{}\forall(x,y)\in\mathcal{S}\times_{M}\mathcal{S}$
* •
has the property that the symmetric bilinear pairing $Q_{\mathcal{J},\omega}$
defined on $\mathcal{S}$ through:
$Q_{\mathcal{J},\omega}(x,y)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\omega(\mathcal{J}x,y)~{}~{}\forall(x,y)\in\mathcal{S}\times_{M}\mathcal{S}$
(4)
is positive-definite.
The rank of $\Xi$ is the rank of $\Delta$.
Notice that the taming $\mathcal{J}$ is not required to be flat with respect
to $\mathcal{D}$. The bundle-valued one-form:
$\Psi_{\Xi}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathcal{D}^{\mathrm{ad}}(\mathcal{J})=\mathcal{D}\circ\mathcal{J}-(\mathrm{id}_{T^{\ast}M}\otimes\mathcal{J})\circ\mathcal{D}\in\Omega^{1}(M,\mathrm{End}(\mathcal{S}))$
is called the fundamental form of $\Xi$ and measures the failure of
$\mathcal{D}$ to preserve $\mathcal{J}$. The electromagnetic structure $\Xi$
is called unitary if $\Psi_{\Xi}=0$. We refer the reader to [32] for more
detail on the fundamental form.
###### Definition 1.11.
Let $\Xi=(\Delta,\mathcal{J})$ and
$\Xi^{\prime}=(\Delta^{\prime},\mathcal{J}^{\prime})$ be two electromagnetic
structures defined on $M$. A based isomorphism of electromagnetic structures
from $\Xi$ to $\Xi^{\prime}$ is a based isomorphism of duality structures
$f:\Delta\xrightarrow{\sim}\Delta^{\prime}$ such that
$\mathcal{J}^{\prime}=f\circ\mathcal{J}\circ f^{-1}$.
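As a pointwise illustration of this definition, a based symplectic isomorphism transports tamings to tamings: if $f$ preserves $\omega$ and $\mathcal{J}$ is a taming, then so is $f\circ\mathcal{J}\circ f^{-1}$. A numerical sketch on a single fiber (the sign conventions for $\omega_{2n}$ and the choice of standard taming are assumptions of this sketch):

```python
import numpy as np

n = 2
I, Z = np.eye(n), np.zeros((n, n))
Omega = np.block([[Z, I], [-I, Z]])   # standard symplectic form
J = Omega.copy()                      # a compatible taming: J^2 = -1

def is_taming(K):
    Q = K.T @ Omega                   # Q(x,y) = omega(Kx, y)
    return (np.allclose(K @ K, -np.eye(2 * n))
            and np.allclose(K.T @ Omega @ K, Omega)   # compatibility with omega
            and np.allclose(Q, Q.T)
            and np.all(np.linalg.eigvalsh((Q + Q.T) / 2) > 0))  # positivity

assert is_taming(J)

# a based symplectic isomorphism: for invertible A, diag(A, A^{-T}) lies in Sp(2n,R)
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 3 * np.eye(n)   # generically invertible
f = np.block([[A, Z], [Z, np.linalg.inv(A).T]])
assert np.allclose(f.T @ Omega @ f, Omega)

# the transported complex structure J' = f o J o f^{-1} is again a taming
Jp = f @ J @ np.linalg.inv(f)
assert is_taming(Jp)
```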
###### Remark 1.12.
A taming $\mathcal{J}$ of a duality structure $\Delta$ can also be described
using a _positive complex polarization_. Let
$\Delta_{\mathbb{C}}=(\mathcal{S}_{\mathbb{C}},\omega_{\mathbb{C}},\mathcal{D}_{\mathbb{C}})$
be the complexification of $\Delta$. Then $\mathcal{J}$ is equivalent to a
complex Lagrangian sub-bundle $L\subset\mathcal{S}_{\mathbb{C}}$ such that:
$\mathbf{i}\omega(x,\bar{x})>0\,,\quad\forall\,\,x\in\dot{\mathcal{S}}_{\mathbb{C}}\,,$
where $\dot{\mathcal{S}}_{\mathbb{C}}$ is the complement of the zero section
in $\mathcal{S}_{\mathbb{C}}$. By definition, such a Lagrangian sub-bundle is
a positive complex polarization of the symplectic vector bundle
$(\mathcal{S},\omega)$. A detailed description of this correspondence can be
found in [32, Appendix A]. Note that the physics literature sometimes uses a
local version of positive polarizations when discussing Abelian gauge theory.
In our global framework this requires the supplementary step of complexifying
the vector bundle $\mathcal{S}$.
###### Definition 1.13.
A taming map of size $2n$ defined on $M$ is a smooth map
$\mathcal{J}:M\rightarrow\mathrm{Mat}(2n,\mathbb{R})$ such that
$\mathcal{J}(m)$ is a taming of the standard symplectic form $\omega_{2n}$ of
$\mathbb{R}^{2n}$ for all $m\in M$ (in particular, we have
$\mathcal{J}(m)\in\mathrm{Sp}(2n,\mathbb{R})$ for all $m$).
We denote by $\mathfrak{J}_{n}(M)$ the set of all taming maps of size $2n$.
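A simple non-constant taming map on $M=\mathbb{R}$ can be built by conjugating the standard taming by a point-dependent symplectic matrix; the following sketch uses one hypothetical such family (only one construction among many):

```python
import numpy as np

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])  # omega_2 in the basis (e, f)
J0 = Omega.copy()                            # standard taming of omega_2

def S(m):
    # a smooth Sp(2,R)-valued function of the point m in M = R
    return np.array([[np.exp(m), 0.0], [0.0, np.exp(-m)]])

def J(m):
    # conjugating a taming by a symplectic matrix yields another taming
    return S(m) @ J0 @ np.linalg.inv(S(m))

for m in np.linspace(-1.0, 1.0, 5):
    Jm = J(m)
    assert np.allclose(Jm @ Jm, -np.eye(2))        # complex structure
    assert np.allclose(Jm.T @ Omega @ Jm, Omega)   # J(m) lies in Sp(2,R)
    Q = Jm.T @ Omega                               # Q(x,y) = omega(J(m)x, y)
    assert np.allclose(Q, Q.T)
    assert np.all(np.linalg.eigvalsh(Q) > 0)       # positive-definite
```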
Let $\Xi=(\Delta,\mathcal{J})$ be an electromagnetic structure of rank $2n$
defined on $M$ whose underlying duality structure
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ is holonomy trivial. Choosing a flat
global symplectic frame $\mathcal{E}$, we can identify $\Delta$ with the
canonical trivial duality structure (3) through an isomorphism
$\tau_{\mathcal{E}}:\Delta\xrightarrow{\sim}\Delta_{n}$. This identifies
$\mathcal{J}$ with the section
$\tau_{\mathcal{E}}\circ\mathcal{J}\circ\tau_{\mathcal{E}}^{-1}$ of the
endomorphism bundle $\mathrm{End}(\underline{\mathbb{R}}^{2n})$, which in turn can be
viewed as a smooth map from $M$ to $\mathrm{Mat}(2n,\mathbb{R})$. Since
$\tau_{\mathcal{E}}$ transports $\omega$ to the constant symplectic form
induced on $\underline{\mathbb{R}}^{2n}$ by $\omega_{2n}$, it is easy to see
that this map is a taming map of size $2n$ defined on $M$. Hence any choice of
global flat symplectic frame identifies the set of tamings of $\Delta$ with
$\mathfrak{J}_{n}(M)$.
### 1.7. The polarized Hodge operator
Let $(M,g)$ be an oriented Lorentzian four-manifold. Let
$\Xi=(\Delta,\mathcal{J})$ be an electromagnetic structure of rank $2n$
defined on $M$ and let $Q:=Q_{\mathcal{J},\omega}$ be the Euclidean scalar
product induced by $\mathcal{J}$ and $\omega$ on $\mathcal{S}$ as in equation
(4). The Hodge operator $\ast_{g}$ of $(M,g)$ extends trivially to an
automorphism of the vector bundle
$\wedge^{\ast}(M,\mathcal{S})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\wedge^{\ast}T^{\ast}M\otimes\mathcal{S}$, which we denote by the
same symbol.
###### Definition 1.14.
The $\mathcal{J}$-polarized Hodge operator of $(M,g,\Xi)$ is the automorphism
$\star_{g,\mathcal{J}}$ of the vector bundle $\wedge^{\ast}(M,\mathcal{S})$
defined through:
$\star_{g,\mathcal{J}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\ast_{g}\otimes\mathcal{J}=\mathcal{J}\otimes\ast_{g}\,.$
Let $(\cdot,\cdot)_{g}$ be the pseudo-Euclidean scalar product induced by $g$
on the total exterior bundle $\wedge^{\ast}(M)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\wedge^{\ast}T^{\ast}M$. Together with $Q$, this pairing induces a
pseudo-Euclidean scalar product $(\cdot,\cdot)_{g,Q}$ on the vector bundle
$\wedge^{\ast}(M,\mathcal{S})$, which is uniquely determined by the
condition:
$(\rho_{1}\otimes\xi_{1},\rho_{2}\otimes\xi_{2})_{g,Q}=\delta_{k_{1},k_{2}}(-1)^{k_{1}}Q(\xi_{1},\xi_{2})(\rho_{1},\rho_{2})_{g}\,$
for all $\rho_{1}\in\Omega^{k_{1}}(M)$, $\rho_{2}\in\Omega^{k_{2}}(M)$ and
$\xi_{1},\xi_{2}\in\mathcal{C}^{\infty}(M,\mathcal{S})$. The polarized Hodge
operator $\star_{g,\mathcal{J}}$ induces an involutive automorphism of the
space $\Omega^{2}(M,\mathcal{S})$. We have a $(\cdot,\cdot)_{g,Q}$-orthogonal
splitting:
$\wedge^{2}(M,\mathcal{S})=\wedge^{2}_{+}(M,\mathcal{S})\oplus\wedge^{2}_{-}(M,\mathcal{S})$
into the eigenbundles of the polarized Hodge operator $\star_{g,\mathcal{J}}$
corresponding to the eigenvalues $+1$ and $-1$. The sections of
$\wedge^{2}_{+}(M,\mathcal{S})$ and $\wedge^{2}_{-}(M,\mathcal{S})$ are called
respectively polarized self-dual and polarized anti-self-dual
$\mathcal{S}$-valued two-forms. Notice that the definition of these notions
does not require complexification of $\mathcal{S}$.
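The involutivity of $\star_{g,\mathcal{J}}$ on two-forms and the resulting splitting can be verified numerically at a point of Minkowski space with the trivial rank-two fiber. The following sketch assumes the conventions signature $(-,+,+,+)$, $\epsilon_{0123}=+1$ and the standard taming on the fiber $\mathbb{R}^{2}$:

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol with eps_{0123} = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    inv_count = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    eps[p] = (-1) ** inv_count

g = np.diag([-1.0, 1.0, 1.0, 1.0])       # Minkowski metric, signature (-,+,+,+)
ginv = np.linalg.inv(g)

def star(F):
    # Hodge star on 2-forms: (star F)_{mn} = (1/2) eps_{mnrs} F^{rs}
    Fup = ginv @ F @ ginv
    return 0.5 * np.einsum('mnrs,rs->mn', eps, Fup)

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # standard taming on the fiber R^2

def pol_star(V):
    # polarized Hodge operator on R^2-valued 2-forms, V of shape (4, 4, 2)
    W = np.stack([star(V[:, :, i]) for i in range(2)], axis=-1)
    return np.einsum('ji,mni->mnj', J, W)

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4, 2))
V = B - B.transpose(1, 0, 2)              # random S-valued 2-form at a point

# star_g^2 = -1 on 2-forms in Lorentzian signature, hence pol_star^2 = +1
assert np.allclose(star(star(V[:, :, 0])), -V[:, :, 0])
assert np.allclose(pol_star(pol_star(V)), V)

# projectors onto the (+1) and (-1) eigenbundles realize the splitting
Vplus, Vminus = 0.5 * (V + pol_star(V)), 0.5 * (V - pol_star(V))
assert np.allclose(pol_star(Vplus), Vplus)
assert np.allclose(pol_star(Vminus), -Vminus)
```

Since $\ast_{g}^{2}=-1$ and $\mathcal{J}^{2}=-\mathrm{id}$ on two-forms, their tensor product squares to $+1$, which is what makes the real (uncomplexified) splitting possible.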
### 1.8. Classical abelian gauge theory
Let $(M,g)$ be an oriented Lorentzian four-manifold.
###### Definition 1.15.
The classical configuration space defined by the duality structure
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ on $M$ is the vector space of two-
forms valued in $\mathcal{S}$ which are closed with respect to
$\mathrm{d}_{\mathcal{D}}$:
$\mathrm{Conf}(M,\Delta)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{\mathcal{V}\in\Omega^{2}(M,\mathcal{S})\,\,|\,\,\mathrm{d}_{\mathcal{D}}\mathcal{V}=0\right\\}\,.$
###### Definition 1.16.
The classical abelian gauge theory determined by an electromagnetic structure
$\Xi=(\Delta,\mathcal{J})$ on $(M,g)$ is defined by the polarized self-duality
condition:
$\star_{g,\mathcal{J}}\mathcal{V}=\mathcal{V}$
for $\mathcal{V}\in\mathrm{Conf}(M,\Delta)$. The solutions of this equation
are called classical electromagnetic field strengths and form the vector
space:
$\mathrm{Sol}(M,g,\Xi)=\mathrm{Sol}(M,g,\Delta,\mathcal{J})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{\mathcal{V}\in\mathrm{Conf}(M,\Delta)\,|\,\star_{g,\mathcal{J}}\mathcal{V}=\mathcal{V}\right\\}\,.$
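Pointwise, solutions of the polarized self-duality condition are easy to exhibit: since $\ast_{g}^{2}=-1$ on two-forms and $\mathcal{J}^{2}=-\mathrm{id}$, the combination $\mathcal{V}=F\otimes\xi+(\ast_{g}F)\otimes(\mathcal{J}\xi)$ satisfies $\star_{g,\mathcal{J}}\mathcal{V}=\mathcal{V}$ for any two-form $F$ and fiber vector $\xi$. A numerical sketch at a single point of Minkowski space (assumed conventions: signature $(-,+,+,+)$, $\epsilon_{0123}=+1$, standard taming on the fiber $\mathbb{R}^{2}$):

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol with eps_{0123} = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    inv_count = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    eps[p] = (-1) ** inv_count

ginv = np.linalg.inv(np.diag([-1.0, 1.0, 1.0, 1.0]))   # Minkowski, (-,+,+,+)

def star(F):                     # Hodge star on 2-forms at a point
    return 0.5 * np.einsum('mnrs,rs->mn', eps, ginv @ F @ ginv)

J = np.array([[0.0, 1.0], [-1.0, 0.0]])                # taming on the fiber R^2
xi = np.array([1.0, 2.0])                              # an arbitrary fiber vector

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
F = A - A.T                                            # an arbitrary 2-form

# V = F (x) xi + (*F) (x) (J xi) is polarized self-dual: star_{g,J} V = V
V = np.einsum('mn,c->mnc', F, xi) + np.einsum('mn,c->mnc', star(F), J @ xi)
starV = np.einsum('ji,mni->mnj', J,
                  np.stack([star(V[:, :, i]) for i in range(2)], axis=-1))
assert np.allclose(starV, V)
```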
Notice that classical abelian gauge theory is formulated in terms of field
strengths. In later sections, we will formulate the pre-quantum version of
this theory (which is obtained by imposing an appropriate DSZ quantization
condition) in terms of connections defined on certain principal bundles called
_Siegel bundles_ (see Definition 3.2). The classical theory simplifies when
$M$ is contractible, since in this case any duality structure defined on $M$
is holonomy-trivial. This case corresponds to the local abelian gauge theory
discussed in Appendix A, which makes contact with the formulation used in the
physics literature. Notice that the traditional physics formulation (which is
valid only locally or for holonomy-trivial duality structures) relies on
gauge-kinetic functions, leading to complicated formulas which obscure the
geometric structure displayed by Definition 1.16. It was shown in [32] that
elements of $\mathrm{Sol}(M,g,\Xi)$ correspond to classical electromagnetic
U-folds when the duality structure $\Delta$ is not holonomy trivial.
### 1.9. Classical duality groups
Let $(M,g)$ be an oriented Lorentzian four-manifold and $\mathcal{S}$ be a
vector bundle defined on $M$. Let $\mathrm{Aut}(\mathcal{S})$ be the group of
those unbased vector bundle automorphisms of $\mathcal{S}$ which cover
orientation-preserving diffeomorphisms of $M$ and let
$\mathrm{Aut}_{b}(\mathcal{S})\subset\mathrm{Aut}(\mathcal{S})$ be the subgroup
of based automorphisms, i.e. those automorphisms which cover the identity of
$M$. Let $\mathrm{Diff}(M)$ be the group of orientation-preserving
diffeomorphisms of $M$. Given $u\in\mathrm{Aut}(\mathcal{S})$, let
$f_{u}\in\mathrm{Diff}(M)$ be the orientation-preserving diffeomorphism of $M$
covered by $u$. This defines a morphism of groups:
$\mathrm{Aut}(\mathcal{S})\ni u\rightarrow f_{u}\in\mathrm{Diff}(M)~{}~{}.$
The group $\mathrm{Aut}(\mathcal{S})$ admits an $\mathbb{R}$-linear
representation:
$\mathrm{Aut}(\mathcal{S})\ni
u\rightarrow\mathbb{A}_{u}\in\mathrm{Aut}(\Omega^{\ast}(M,\mathcal{S}))$
given by push-forward; we will occasionally also use the notation:
$u\cdot(-)\stackrel{{\scriptstyle{\rm def.}}}{{=}}\mathbb{A}_{u}(-)$. When
$f_{u}$ is not the identity the push-forward action of
$u\in\mathrm{Aut}(\mathcal{S})$ must be handled with care; we refer the reader
to [32, Appendix D] for a detailed description of this operation and its
properties. On decomposable elements of
$\Omega^{\ast}(M,\mathcal{S})=\Omega^{\ast}(M)\otimes\mathcal{C}^{\infty}(M,\mathcal{S})$,
it is given by:
$\mathbb{A}_{u}(\alpha\otimes\xi)=u\cdot(\alpha\otimes\xi)=(f_{u\ast}\alpha)\otimes(u\cdot\xi)=(f_{u\ast}\alpha)\otimes(u\circ\xi\circ
f^{-1}_{u})\,,$
where $f_{u\ast}\alpha$ is the push-forward of $\alpha$ on $M$ as defined in
[32, Appendix D]. For instance, if $\alpha\in\Omega^{1}(M)$ we have:
$(f_{u\ast}\alpha)(v)=\alpha(f_{u\ast}^{-1}(v))\circ
f^{-1}_{u}=\alpha(\mathrm{d}f_{u}^{-1}(v)\circ f_{u})\circ f^{-1}_{u}\in
C^{\infty}(M)\,,\quad\forall\,\,v\in\mathfrak{X}(M)\,.$
In particular, restricting to a given point $m\in M$ we obtain the familiar
formula:
$(f_{u\ast}\alpha)(v)_{m}=\alpha(f_{u\ast}^{-1}(v))_{f^{-1}_{u}(m)}=\alpha_{f^{-1}_{u}(m)}(\mathrm{d}f_{u}^{-1}|_{m}(v_{m}))\,.$
Recall that $f_{u\ast}^{-1}(v)\in\mathfrak{X}(M)$ is again a vector field on
$M$, whereas $\mathrm{d}f_{u}^{-1}(v)\colon M\to TM$ is only a vector field
_along_ $f_{u}^{-1}$. For any duality structure $\Delta=(\mathcal{S},\omega,\mathcal{D})$
defined on $M$, let:
$\mathrm{Aut}(\Delta)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{u\in\mathrm{Aut}(\mathcal{S})\,\,|\,\,\omega_{u}=\omega\,,\,\,\mathcal{D}_{u}=\mathcal{D}\right\\}\,,$
be the group of unbased automorphisms of $\Delta$, defined as the stabilizer
of the pair $(\omega,\mathcal{D})$ in $\mathrm{Aut}(\mathcal{S})$. Here
$\mathcal{D}_{u}$ is the push-forward of the connection $\mathcal{D}$ by $u$,
which is defined through:
$(\mathcal{D}_{u})_{v}(s)=u\cdot\mathcal{D}_{f_{u\ast}^{-1}(v)}(u^{-1}\cdot
s)=u\circ(\mathcal{D}_{f_{u\ast}^{-1}(v)}(u^{-1}\circ s\circ f_{u}))\circ
f^{-1}_{u}\,,\quad\forall\,\,s\in\Gamma(\mathcal{S})\quad\forall\,\,v\in\mathfrak{X}(M)\,,$
where $f_{u\ast}^{-1}\cdot v=\mathrm{d}f^{-1}_{u}(v)\circ f_{u}$. Note that if
$s\in\Gamma(\mathcal{S})$ is a parallel section of $\mathcal{S}$ with respect to
$\mathcal{D}$, then $u\cdot s\in\Gamma(\mathcal{S})$ is parallel with respect to
$\mathcal{D}_{u}$ for all $u\in\mathrm{Aut}(\Delta)$. We have a short exact
sequence:
$1\to\mathrm{Aut}_{b}(\Delta)\to\mathrm{Aut}(\Delta)\to\mathrm{Diff}_{\Delta}(M)\to
1\,,$
where $\mathrm{Diff}_{\Delta}(M)\subset\mathrm{Diff}(M)$ is the subgroup
formed by those orientation-preserving diffeomorphisms of $M$ that can be
covered by elements of $\mathrm{Aut}(\Delta)$. Given a taming
$\mathcal{J}\in\mathrm{End}(\mathcal{S})$ of $(\mathcal{S},\omega)$, we
define:
$\mathcal{J}_{u}\stackrel{{\scriptstyle{\rm def.}}}{{=}}u\circ\mathcal{J}\circ
u^{-1}\in\mathrm{End}(\mathcal{S})~{}~{},$
which is a taming of the duality structure
$\Delta_{u}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(\mathcal{S},\omega_{u},\mathcal{D}_{u})$. On the other hand, if
$g$ is a Lorentzian metric on $M$, we set:
$g_{u}\stackrel{{\scriptstyle{\rm def.}}}{{=}}(f_{u})_{\ast}(g)~{}~{}.$
Finally, we define $\Xi_{u}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(\Delta_{u},\mathcal{J}_{u})$ and we denote by ${\rm Iso}(M,g)$
the group of orientation-preserving isometries of $(M,g)$.
###### Definition 1.17.
Let $\Xi=(\Delta,\mathcal{J})$ be an electromagnetic structure defined on $M$.
* •
The group $\mathrm{Aut}(\Delta)$ of unbased automorphisms of $\Delta$ is
called the unbased pseudo-duality group defined by $\Delta$. The unbased
pseudo-duality transformation defined by $u\in\mathrm{Aut}(\Delta)$ is the
linear isomorphism:
$\mathbb{A}_{u}\colon\mathrm{Conf}(M,\Delta)\xrightarrow{\sim}\mathrm{Conf}(M,\Delta)~{}~{},$
(5)
which restricts to an isomorphism:
$\mathbb{A}_{u}\colon\mathrm{Sol}(M,g,\Delta,\mathcal{J})\xrightarrow{\sim}\mathrm{Sol}(M,g_{u},\Delta,\mathcal{J}_{u})~{}~{}.$
* •
The unbased duality group $\mathrm{Aut}(g,\Delta)$ of the pair $(g,\Delta)$ is
the stabilizer of $g$ in $\mathrm{Aut}(\Delta)$:
$\mathrm{Aut}(g,\Delta)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Stab}_{\mathrm{Aut}(\Delta)}(g)=\left\\{u\in\mathrm{Aut}(\Delta)\,\,|\,\,f_{u}\in{\rm
Iso}(M,g)\right\\}\,.$
The unbased duality transformation defined by $u\in\mathrm{Aut}(g,\Delta)$ is
the linear isomorphism (5), which restricts to an isomorphism:
$\mathbb{A}_{u}\colon\mathrm{Sol}(M,g,\Delta,\mathcal{J})\xrightarrow{\sim}\mathrm{Sol}(M,g,\Delta,\mathcal{J}_{u})~{}~{}.$
* •
The group $\mathrm{Aut}_{b}(\Delta)$ is called the (electromagnetic) duality
group defined by $\Delta$. The duality transformation defined by
$u\in\mathrm{Aut}_{b}(\Delta)$ is the linear isomorphism (5), which restricts
to an isomorphism:
$\mathbb{A}_{u}\colon\mathrm{Sol}(M,g,\Delta,\mathcal{J})\to\mathrm{Sol}(M,g,\Delta,\mathcal{J}_{u})~{}~{}.$
###### Remark 1.18.
The fact that $\mathbb{A}_{u}$ restricts as stated above follows from a direct
yet subtle computation. Since the conditions involved are linear, it is enough
to verify it on a homogeneous element. If
$\mathcal{V}=\alpha\otimes\xi\in\mathrm{Conf}(M,\Delta)$,
$u\in\mathrm{Aut}(\Delta)$ and $v\in\mathfrak{X}(M)$ we compute:
$\displaystyle\mathrm{d}_{(\mathcal{D}_{u})_{v}}\mathbb{A}_{u}(\mathcal{V})=\mathrm{d}_{(\mathcal{D}_{u})_{v}}((f_{u\ast}\alpha)\otimes(u\cdot\xi))=(\iota_{v}f_{u\ast}\mathrm{d}\alpha)\otimes(u\cdot\xi)+(f_{u\ast}\alpha)\otimes((\mathcal{D}_{u})_{v}u\cdot\xi)$
$\displaystyle=(f_{u\ast}\iota_{f_{u\ast}^{-1}(v)}\mathrm{d}\alpha)\otimes(u\cdot\xi)+(f_{u\ast}\alpha)\otimes(u\cdot\mathcal{D}_{f_{u\ast}^{-1}(v)}\xi)=\mathbb{A}_{u}((\iota_{f_{u\ast}^{-1}(v)}\mathrm{d}\alpha)\otimes(\xi)+\alpha\otimes(\mathcal{D}_{f_{u\ast}^{-1}(v)}\xi))$
$\displaystyle=\mathbb{A}_{u}(\mathrm{d}_{\mathcal{D}_{f_{u\ast}^{-1}(v)}}\mathcal{V})=0\,.$
On the other hand, if $\mathcal{V}=\alpha\otimes\xi\in\mathrm{Sol}(M,g,\Xi)$
we compute:
$\star_{g_{u},\mathcal{J}_{u}}\mathbb{A}_{u}(\mathcal{V})=(\ast_{g_{u}}f_{u\ast}\alpha)\otimes(\mathcal{J}_{u}\circ
u\cdot\xi)=f_{u\ast}(\ast_{g}\alpha)\otimes u\circ\mathcal{J}(\xi)\circ
f_{u}^{-1}=\mathbb{A}_{u}(\star_{g,\mathcal{J}}\mathcal{V})=\mathbb{A}_{u}(\mathcal{V})\,,$
implying that the restrictions of $\mathbb{A}_{u}$ to $\mathrm{Conf}(M,\Delta)$
and $\mathrm{Sol}(M,g,\Xi)$ are well-defined as stated above.
We have obvious inclusions:
$\mathrm{Aut}_{b}(\Delta)\subset\mathrm{Aut}(g,\Delta)\subset\mathrm{Aut}(\Delta)$
(6)
and a short exact sequence:
$1\to\mathrm{Aut}_{b}(\Delta)\to\mathrm{Aut}(g,\Delta)\to{\rm
Iso}_{\Delta}(M,g)\to 1\,,$ (7)
where ${\rm Iso}_{\Delta}(M,g)\subset{\rm Iso}(M,g)$ denotes the group formed
by those orientation-preserving isometries of $(M,g)$ that can be covered by
elements of $\mathrm{Aut}(g,\Delta)$.
The classical duality group $\mathrm{Aut}_{b}(\Delta)$ consists of all based
automorphisms of $\mathcal{S}$ which preserve both $\omega$ and $\mathcal{D}$.
Therefore (see [24, Lemma 4.2.8]) fixing a point $m_{0}\in M$ it can be
realized as the centralizer $C_{m_{0}}(\Delta)$ of the holonomy group
$\mathrm{Hol}_{m_{0}}(\mathcal{D})$ of $\mathcal{D}$ at $m_{0}$ inside the
group
$\mathrm{Aut}(\mathcal{S}_{m_{0}},\omega_{m_{0}})\simeq\mathrm{Sp}(2n,\mathbb{R})$:
$\mathrm{Aut}_{b}(\Delta)\simeq C_{m_{0}}(\Delta)~{}~{}.$
In particular, $\mathrm{Aut}_{b}(\Delta)$ is a closed subgroup of
$\mathrm{Sp}(2n,\mathbb{R})$ and hence it is a finite-dimensional Lie group.
The same holds for ${\rm Iso}_{\Delta}(M,g)$, which is a closed subgroup of
the finite-dimensional Lie group ${\rm Iso}(M,g)$ (it is well-known that the
isometry group of any pseudo-Riemannian manifold is a finite-dimensional Lie
group; see for example [31, Theorem 5.1, p. 22]). The sequence (7) implies
that $\mathrm{Aut}(g,\Delta)$ is also a finite-dimensional Lie group. In
general, the groups defined above differ markedly from their local
counterparts described in Appendix A.2. We stress that the latter are not the
adequate groups to consider when dealing with electromagnetic U-folds (since
in that case the duality structure is not holonomy trivial).
###### Definition 1.19.
Let $\Xi=(\Delta,\mathcal{J})$ be an electromagnetic structure defined on $M$.
* •
The group:
$\mathrm{Aut}(\Xi)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{u\in\mathrm{Aut}(\Delta)~{}|~{}\mathcal{J}_{u}=\mathcal{J}\\}$
of unbased automorphisms of $\Xi$ is called the unbased unitary pseudo-duality
group defined by $\Xi$. The unbased unitary pseudo-duality transformation
defined by $u\in\mathrm{Aut}(\Xi)$ is the linear isomorphism:
$\mathbb{A}_{u}\colon\mathrm{Sol}(M,g,\Xi)\xrightarrow{\sim}\mathrm{Sol}(M,g_{u},\Xi)\,.$
* •
The unbased unitary duality group of the pair $(g,\Xi)$ is the stabilizer of
$g$ in $\mathrm{Aut}(\Xi)$:
$\mathrm{Aut}(g,\Xi)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Stab}_{\mathrm{Aut}(\Xi)}(g)=\mathrm{Stab}_{\mathrm{Aut}(g,\Delta)}(\mathcal{J})=\left\\{u\in\mathrm{Aut}(\Delta)\,\,|\,\,\mathcal{J}_{u}=\mathcal{J}~{}\&~{}f_{u}\in{\rm
Iso}(M,g)\right\\}~{}~{}.$ (8)
The unbased unitary duality transformation defined by
$u\in\mathrm{Aut}(g,\Xi)$ is the linear automorphism:
$\mathbb{A}_{u}\colon\mathrm{Sol}(M,g,\Xi)\xrightarrow{\sim}\mathrm{Sol}(M,g,\Xi)~{}~{}.$
* •
The group $\mathrm{Aut}_{b}(\Xi)$ of based automorphisms of $\Xi$ is called
the classical unitary duality group defined by $\Xi$:
$\mathrm{Aut}_{b}(\Xi)=\mathrm{Stab}_{\mathrm{Aut}_{b}(\Delta)}(\mathcal{J})=\left\\{u\in\mathrm{Aut}_{b}(\Delta)\,\,|\,\,\mathcal{J}_{u}=\mathcal{J}\right\\}\,.$
The classical unitary duality transformation defined by
$u\in\mathrm{Aut}_{b}(\Xi)$ is the linear automorphism:
$\mathbb{A}_{u}\colon\mathrm{Sol}(M,g,\Xi)\xrightarrow{\sim}\mathrm{Sol}(M,g,\Xi)~{}~{}.$
We have obvious inclusions:
$\mathrm{Aut}_{b}(\Xi)\subset\mathrm{Aut}(g,\Xi)\subset\mathrm{Aut}(\Xi)$
and a short exact sequence:
$1\to\mathrm{Aut}_{b}(\Xi)\to\mathrm{Aut}(g,\Xi)\to{\rm Iso}_{\Xi}(M,g)\to
1\,,$ (9)
where ${\rm Iso}_{\Xi}(M,g)$ is the group formed by those orientation-
preserving isometries of $(M,g)$ which are covered by elements of
$\mathrm{Aut}(g,\Xi)$. Arguments similar to those above show that
$\mathrm{Aut}_{b}(\Xi)$, $\mathrm{Aut}(g,\Xi)$ and ${\rm Iso}_{\Xi}(M,g)$ are
finite-dimensional Lie groups. The previous definitions give global
mathematically rigorous descriptions of several types of _duality groups_
associated to abelian gauge theory on Lorentzian four-manifolds. In general,
these can differ markedly from their “local” counterparts described in
Appendix A.2, which are considered traditionally in the physics literature.
### 1.10. The case of trivial duality structure
Let $(M,g)$ be an oriented Lorentzian four-manifold. For any $n\geq 0$, the
set $\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))$ of smooth
$\mathrm{Sp}(2n,\mathbb{R})$-valued functions defined on $M$ is a group under
pointwise multiplication, whose group of automorphisms we denote by
$\mathrm{Aut}(\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R})))$.
###### Lemma 1.20.
Let $(\mathcal{S},\omega)$ be a symplectic vector bundle of rank $2n$ defined
on $M$ which is symplectically trivializable. Then any symplectic
trivialization of $(\mathcal{S},\omega)$ induces an isomorphism of groups:
$\mathrm{Aut}(\mathcal{S},\omega)\simeq\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))\rtimes_{\alpha}\mathrm{Diff}(M)~{}~{},$
where
$\alpha:\mathrm{Diff}(M)\rightarrow\mathrm{Aut}(\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R})))$
is the morphism of groups defined through:
$\alpha(\varphi)(f)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}f\circ\varphi^{-1}\,,\quad\forall\,\,\varphi\in\mathrm{Diff}(M)\,,\,\,\forall\,\,f\in\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))~{}~{}.$
In particular, we have a short exact sequence of groups:
$1\rightarrow\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))\rightarrow\mathrm{Aut}(\mathcal{S},\omega)\rightarrow\mathrm{Diff}(M)\rightarrow
1$
which splits from the right.
###### Proof.
Let
$\tau:\mathcal{S}\stackrel{{\scriptstyle\sim}}{{\rightarrow}}M\times\mathbb{R}^{2n}$
be a symplectic trivialization of $(\mathcal{S},\omega)$. Then the map
$\mathrm{Ad}(\tau):\mathrm{Aut}(\mathcal{S},\omega)\rightarrow\mathrm{Aut}(M\times\mathbb{R}^{2n},\omega_{2n})$
defined through:
$\mathrm{Ad}(\tau)(f)\stackrel{{\scriptstyle{\rm def.}}}{{=}}\tau\circ
f\circ\tau^{-1}\,,\quad\forall\,\,f\in\mathrm{Aut}(\mathcal{S},\omega)\,,$
is an isomorphism of groups. Let $f\in\mathrm{Aut}(\mathcal{S},\omega)$ be an
unbased automorphism of $(\mathcal{S},\omega)$ which covers the diffeomorphism
$\varphi\in\mathrm{Diff}(M)$. Then $\mathrm{Ad}(\tau)(f)$ is an unbased
automorphism of $M\times\mathbb{R}^{2n}$ which covers $\varphi$ and hence we
have:
$\mathrm{Ad}(\tau)(f)(m,x)=(\varphi(m),{\tilde{f}}(m)(x))\,,\quad\forall\,\,(m,x)\in
M\times\mathbb{R}^{2n}~{}~{},$
where ${\tilde{f}}:M\rightarrow\mathrm{Sp}(2n,\mathbb{R})$ is a smooth map.
Setting $h\stackrel{{\scriptstyle{\rm
def.}}}{{=}}{\tilde{f}}\circ\varphi^{-1}\in\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))$,
we have:
$\mathrm{Ad}(\tau)(f)(m,x)=(\varphi(m),h(\varphi(m))(x))\,,\quad\forall\,\,(m,x)\in
M\times\mathbb{R}^{2n}$ (10)
and the correspondence $f\rightarrow(h,\varphi)$ gives a bijection between
$\mathrm{Aut}(\mathcal{S},\omega)$ and the set
$\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))\times\mathrm{Diff}(M)$. If
$f_{1},f_{2}\in\mathrm{Aut}(\mathcal{S},\omega)$ correspond through this map
to the pairs:
$(h_{1},\varphi_{1}),(h_{2},\varphi_{2})\in\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))\times\mathrm{Diff}(M)\,,$
then direct computation using (10) gives:
$\mathrm{Ad}(\tau)(f_{1}\circ
f_{2})(m,x)=((\varphi_{1}\circ\varphi_{2})(m),(h_{1}\cdot(h_{2}\circ\varphi_{1}^{-1}))((\varphi_{1}\circ\varphi_{2})(m))(x))~{}~{},$
showing that $f_{1}\circ f_{2}$ corresponds to the pair
$(h_{1}\cdot\alpha(\varphi_{1})(h_{2}),\varphi_{1}\circ\varphi_{2})$. ∎
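The semidirect product law of Lemma 1.20 can be tested numerically for $M=\mathbb{R}$, comparing direct composition of the bundle automorphisms with the product of the corresponding pairs. The diffeomorphisms and $\mathrm{Sp}(2,\mathbb{R})$-valued maps below are arbitrary illustrative choices:

```python
import numpy as np

# two diffeomorphisms of M = R (the first with its inverse)
phi1 = lambda m: m + 1.0
phi1_inv = lambda m: m - 1.0
phi2 = lambda m: 2.0 * m

# two smooth Sp(2,R)-valued functions on M
h1 = lambda m: np.array([[1.0, m], [0.0, 1.0]])
h2 = lambda m: np.array([[1.0, 0.0], [m, 1.0]])

def bundle_map(h, phi):
    # the automorphism (m, x) -> (phi(m), h(phi(m)) x) of M x R^2, cf. (10)
    return lambda m, x: (phi(m), h(phi(m)) @ x)

F1, F2 = bundle_map(h1, phi1), bundle_map(h2, phi2)

# semidirect product: (h1, phi1) * (h2, phi2) = (h1 . (h2 o phi1^{-1}), phi1 o phi2)
h12 = lambda m: h1(m) @ h2(phi1_inv(m))
phi12 = lambda m: phi1(phi2(m))
F12 = bundle_map(h12, phi12)

for m in [-1.0, 0.3, 2.0]:
    x = np.array([1.0, -2.0])
    m_a, x_a = F1(*F2(m, x))    # direct composition of the bundle maps
    m_b, x_b = F12(m, x)        # the semidirect product pair
    assert np.isclose(m_a, m_b) and np.allclose(x_a, x_b)
```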
###### Corollary 1.21.
Let $\Delta=(\mathcal{S},\omega,\mathcal{D})$ be a holonomy trivial duality
structure defined on $M$. Then any trivialization of $\Delta$ induces an
isomorphism of groups:
$\mathrm{Aut}(\Delta)\simeq\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)~{}~{}.$
###### Proof.
Follows from Lemma 1.20 by noticing that the action $\alpha$ of
$\mathrm{Diff}(M)$ on $\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))$
restricts to the trivial action on the subgroup:
$\\{f\in\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))~{}|~{}\mathrm{d}f=0\\}\simeq\mathrm{Sp}(2n,\mathbb{R})$
of constant $\mathrm{Sp}(2n,\mathbb{R})$-valued functions defined on $M$. ∎
Fix an electromagnetic structure $\Xi=(\Delta,\mathcal{J})$ of rank $2n$
defined on $M$ with holonomy-trivial underlying duality structure
$\Delta=(\mathcal{S},\omega,\mathcal{D})$. Choosing a global flat symplectic
frame $\mathcal{E}$ of $\mathcal{S}$, we identify $\Delta$ with the canonical
trivial duality structure (3) and $\mathcal{J}$ with a taming map of size
$2n$. Then $\mathcal{D}$ identifies with the trivial connection and
$\mathrm{d}_{\mathcal{D}}$ identifies with the exterior derivative
$\mathrm{d}\colon\Omega(M,\mathbb{R}^{2n})\to\Omega(M,\mathbb{R}^{2n})$
extended trivially to vector-valued forms. Using Lemma 1.20 and its obvious
adaptation, we obtain:
$\displaystyle\mathrm{Aut}_{b}(\mathcal{S})\equiv\mathcal{C}^{\infty}(M,\mathrm{GL}(2n,\mathbb{R}))\,,\quad\mathrm{Aut}(\mathcal{S})\equiv\mathcal{C}^{\infty}(M,\mathrm{GL}(2n,\mathbb{R}))\rtimes_{\alpha}\mathrm{Diff}(M)\,,$
$\displaystyle\mathrm{Aut}_{b}(\mathcal{S},\omega)\equiv\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))\,,\quad\mathrm{Aut}(\mathcal{S},\omega)\equiv\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))\rtimes_{\alpha}\mathrm{Diff}(M)\,.$
On the other hand, Corollary 1.21 gives:
$\displaystyle\mathrm{Aut}_{b}(\Delta)\equiv\mathrm{Sp}(2n,\mathbb{R})\,,\quad\mathrm{Aut}(\Delta)\equiv\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)\,,$
$\displaystyle\mathrm{Aut}(g,\Delta)\equiv\mathrm{Sp}(2n,\mathbb{R})\times{\rm
Iso}(M,g)\,.$
Moreover, we have:
$\displaystyle\mathrm{Aut}_{b}(\Xi)\equiv\mathrm{U}_{\mathcal{J}}(n)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{\gamma\in\mathrm{Sp}(2n,\mathbb{R})\,\,|\,\,\gamma\mathcal{J}\gamma^{-1}=\mathcal{J}\\}\,,$
$\displaystyle\mathrm{Aut}(\Xi)\equiv\\{(\gamma,f)\in\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)\,\,|\,\,\gamma\mathcal{J}\gamma^{-1}=\mathcal{J}\circ
f\\}\,,$
as well as:
$\mathrm{Aut}(g,\Xi)\equiv\\{(\gamma,f)\in\mathrm{Sp}(2n,\mathbb{R})\times{\rm
Iso}(M,g)\,\,|\,\,\gamma\mathcal{J}\gamma^{-1}=\mathcal{J}\circ f\\}~{}~{}.$
Hence we recover the local formulas obtained in Appendix A.2 for the duality
groups of local abelian gauge theory. Notice that $\mathrm{Aut}_{b}(\Xi)$ is
isomorphic with the unitary group $\mathrm{U}(n)$ when the electromagnetic
structure $\Xi$ is unitary, which amounts to the map $\mathcal{J}$ being
constant.
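For constant $\mathcal{J}$ equal to the standard taming, the identification $\mathrm{U}_{\mathcal{J}}(n)\simeq\mathrm{U}(n)$ is realized by the usual block embedding of unitary matrices into $\mathrm{Sp}(2n,\mathbb{R})$. A numerical sketch (the block convention below is tied to our assumed choice of standard taming):

```python
import numpy as np

n = 2
I, Z = np.eye(n), np.zeros((n, n))
Omega = np.block([[Z, I], [-I, Z]])
J0 = Omega.copy()                      # constant standard taming

def embed(U):
    # U(n) -> Sp(2n,R): A + iB  |->  [[A, B], [-B, A]]  (signs tied to J0)
    A, B = U.real, U.imag
    return np.block([[A, B], [-B, A]])

rng = np.random.default_rng(2)
def random_unitary():
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return Q

U1, U2 = random_unitary(), random_unitary()
g1, g2 = embed(U1), embed(U2)

for gam in (g1, g2):
    assert np.allclose(gam.T @ Omega @ gam, Omega)   # gam lies in Sp(2n,R)
    assert np.allclose(gam @ J0, J0 @ gam)           # gam stabilizes J0
assert np.allclose(embed(U1 @ U2), g1 @ g2)          # group homomorphism
```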
## 2\. The Dirac-Schwinger-Zwanziger condition
The previous section introduced classical abelian gauge theory as a theory of
_field strengths_ , i.e. a theory of $\mathrm{d}_{\mathcal{D}}$-closed two-
forms valued in the flat symplectic vector bundle $\mathcal{S}$ with equation
of motion given by the polarized self-duality condition. Well-known arguments
originally due to Dirac as well as the Aharonov-Bohm effect [1] imply that a
consistent coupling of the theory to quantum charge carriers imposes an
integrality condition on field strength configurations. This is traditionally
called the DSZ “quantization” condition, even though it constrains classical
field strength configurations: in such arguments, only the particles carrying
the corresponding charges are quantized, not the gauge fields themselves. To
avoid confusion, we prefer to call it the DSZ
integrality condition. For local abelian gauge theories of rank $2n$ (which
are discussed in Appendix A), this condition can be implemented using a full
symplectic lattice in the standard symplectic vector space
$(\mathbb{R}^{2n},\omega_{2n})$, as usually done in the physics literature
[42, 47]. For abelian gauge theories with non-trivial electromagnetic
structure defined on an arbitrary Lorentzian four-manifold, we shall implement
this condition using a Dirac system, as originally proposed in [32]. We begin
with some preliminaries.
### 2.1. Principal bundles with discrete structure group
Let $\Gamma$ be a discrete group and $Q$ be a principal bundle with structure
group $\Gamma$ and projection $p:Q\rightarrow M$. Then the total space of $Q$
is a (generally disconnected) covering space of $M$. Let
$U_{Q}:\Pi_{1}(M)\rightarrow\Phi_{0}(Q)$ be the monodromy transport of the
covering map $p:Q\rightarrow M$, where $\Phi_{0}(Q)$ is the bare fiber
groupoid of $Q$. By definition, the objects of $\Phi_{0}(Q)$ are the fibers of
$Q$ while its morphisms are arbitrary bijections between the latter. By
definition, the functor $U_{Q}$ associates to the homotopy class
$\mathbf{c}\in\Pi_{1}(M)(m,m^{\prime})$ of any curve $c:[0,1]\rightarrow M$
with $c(0)=m$ and $c(1)=m^{\prime}$ the bijection
$U_{Q}(\mathbf{c}):Q_{m}\xrightarrow{\sim}Q_{m^{\prime}}$ given by
$U_{Q}(\mathbf{c})(x)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}{\tilde{c}}_{x}(1)\in Q_{m^{\prime}}$, where ${\tilde{c}}_{x}$ is
the unique lift of $c$ to $Q$ through the point $x\in Q$ (thus
$\tilde{c}_{x}(0)=x$). Notice that $x$ and $U_{Q}(\mathbf{c})(x)$ lie on the same
connected component of $Q$ and hence the diffeomorphism
$U_{Q}(\mathbf{c}):Q\xrightarrow{\sim}Q$ induces the trivial permutation of
$\pi_{0}(Q)$. For any $\gamma\in\Gamma$, the curve ${\tilde{c}}^{\gamma}_{x}$
defined through ${\tilde{c}}_{x}^{\gamma}(t)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}{\tilde{c}}_{x}(t)\gamma$ for all $t\in[0,1]$ is a lift of $c$ through
the point $x\gamma$. The unique path-lifting property of $p$ implies that
${\tilde{c}}^{\gamma}_{x}={\tilde{c}}_{x\gamma}$, and hence
$U_{Q}(\mathbf{c})(x\gamma)=U_{Q}(\mathbf{c})(x)\gamma$. This shows that $U_{Q}$
acts through isomorphisms of $\Gamma$-spaces and hence it is in fact a
functor:
$U_{Q}:\Pi_{1}(M)\rightarrow\Phi(Q)~{}~{},$
where $\Phi(Q)$ is the principal fiber groupoid of $Q$ (whose objects coincide
with those of $\Phi_{0}(Q)$ but whose morphisms are isomorphisms of
$\Gamma$-spaces). This implies that $U_{Q}$ is the parallel transport of a
flat principal connection defined on $Q$, which we shall call the monodromy
connection of $Q$. The holonomy morphism:
$\alpha_{m}(Q):\pi_{1}(M,m)\rightarrow\mathrm{Aut}_{\Gamma}(Q_{m})$
of this connection at a point $m\in M$ will be called the monodromy morphism
of $Q$ at $m$, while its image:
$\mathrm{Hol}_{m}(Q)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{im}(\alpha_{m}(Q))\subset\mathrm{Aut}_{\Gamma}(Q_{m})$
will be called the monodromy group of $Q$ at $m$. The monodromy morphism at a
fixed point $m_{0}\in M$ induces a bijection between the set of isomorphism
classes of principal $\Gamma$-bundles and the character variety:
$\mathfrak{R}(\pi_{1}(M,m_{0}),\Gamma)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}{\rm Hom}(\pi_{1}(M,m_{0}),\Gamma)/\Gamma~{}~{}.$
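For example, for $M=S^{1}$ one has $\pi_{1}(M)\simeq\mathbb{Z}$, so a morphism is determined by the image of a generator and the character variety reduces to the set of conjugacy classes of $\Gamma$. For a small discrete group this can be enumerated directly (a sketch with $\Gamma=S_{3}$):

```python
from itertools import permutations

# Gamma = S_3, with elements encoded as tuples; composition and inversion
G = list(permutations(range(3)))

def mul(a, b):                 # (a * b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    out = [0] * 3
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

# Hom(Z, Gamma) = Gamma, so the character variety is Gamma / conjugation
classes = set()
for g in G:
    orbit = frozenset(mul(mul(h, g), inv(h)) for h in G)
    classes.add(orbit)

# S_3 has three conjugacy classes, so there are exactly three isomorphism
# classes of principal S_3-bundles over the circle
print(len(classes))   # prints 3
```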
###### Remark 2.1.
For the purposes of this work, the most important class of principal bundles
with discrete structure group are principal
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ bundles, which are naturally
associated to Siegel bundles (see Section 3).
### 2.2. Bundles of finitely-generated free abelian groups
Let $\mathcal{F}$ be a bundle of free abelian groups of rank $r$ defined on
$M$. Then $\mathcal{F}$ is isomorphic with the bundle of groups with fiber
$\mathbb{Z}^{r}$ associated to a principal $\mathrm{GL}(r,\mathbb{Z})$-bundle
$Q$ through the left action
$\ell:\mathrm{GL}(r,\mathbb{Z})\rightarrow\mathrm{Aut}_{\mathbb{Z}}(\mathbb{Z}^{r})$:
$\mathcal{F}\simeq\mathcal{F}_{r}(Q)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}Q\times_{\ell}\mathbb{Z}^{r}\simeq\hat{M}\times_{\ell\circ\alpha_{m}(Q)}\mathbb{Z}^{r}~{}~{},$
where $\alpha_{m}(Q):\pi_{1}(M,m)\rightarrow\mathrm{GL}(r,\mathbb{Z})$ is the
monodromy morphism of $Q$ at $m$ and $\hat{M}$ is the universal cover of $M$.
The monodromy connection of $Q$ induces a
flat Ehresmann connection which we shall call the monodromy connection of
$\mathcal{F}$ and whose parallel transport:
$U_{\mathcal{F}}:\Pi_{1}(M)\rightarrow\Phi(\mathcal{F})$
acts by isomorphisms of groups between the fibers of $\mathcal{F}$. The
monodromy morphism:
$\sigma_{m}(\mathcal{F}):\pi_{1}(M,m)\rightarrow\mathrm{Aut}_{\mathbb{Z}}(\mathcal{F}_{m})$
at $m\in M$ can be identified with the morphism
$\ell\circ\alpha_{m}(Q):\pi_{1}(M,m)\rightarrow\mathrm{Aut}_{\mathbb{Z}}(\mathbb{Z}^{r})$
upon choosing a basis of $\mathcal{F}_{m}$. The monodromy group:
$\mathrm{Hol}_{m}(\mathcal{F})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{im}(\sigma_{m}(\mathcal{F}))\subset\mathrm{Aut}_{\mathbb{Z}}(\mathcal{F}_{m})\simeq\mathrm{GL}(r,\mathbb{Z})$
of $\mathcal{F}$ at $m$ identifies with a subgroup of
$\mathrm{GL}(r,\mathbb{Z})$ upon choosing an appropriate basis of
$\mathcal{F}_{m}$.
Conversely, let $\mathrm{Fr}(\mathbb{Z}^{r})$ be the set of all bases of the
free $\mathbb{Z}$-module $\mathbb{Z}^{r}$. Then $\mathrm{GL}(r,\mathbb{Z})$
has a natural free and transitive left action $\mu$ on this set. Taking the
set of bases of each fiber gives the bundle of frames
$\mathrm{Fr}(\mathcal{F})$ of $\mathcal{F}$. This is a principal
$\mathrm{GL}(r,\mathbb{Z})$-bundle whose monodromy morphism coincides with
that of $\mathcal{F}$. This gives the following result.
###### Proposition 2.2.
The correspondences $Q\mapsto\mathcal{F}_{r}(Q)$ and
$\mathcal{F}\mapsto\mathrm{Fr}(\mathcal{F})$ extend to mutually quasi-inverse
equivalences between the groupoid of bundles of free abelian groups of rank
$r$ defined on $M$ and the groupoid of principal
$\mathrm{GL}(r,\mathbb{Z})$-bundles defined on $M$.
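To make the correspondence concrete, consider the simplest nontrivial base: since $\pi_{1}(S^{1})\simeq\mathbb{Z}$, a rank-$2$ bundle of free abelian groups over the circle is determined up to isomorphism by the conjugacy class of a single monodromy matrix in $\mathrm{GL}(2,\mathbb{Z})$, i.e. an integer matrix of determinant $\pm 1$. The following sketch (our own illustration; the helper names are not from the text) tests this integrality condition:

```python
def det2(A):
    # Determinant of a 2x2 integer matrix.
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def in_gl2z(A):
    # An integer matrix is an automorphism of Z^2 iff its determinant
    # is a unit of Z, i.e. +1 or -1.
    return det2(A) in (1, -1)

# Candidate monodromy matrices for a rank-2 local system over S^1:
print(in_gl2z([[2, 1], [1, 1]]))   # True:  det = 1, a valid monodromy
print(in_gl2z([[2, 0], [0, 1]]))   # False: det = 2, not invertible over Z
```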
### 2.3. Dirac systems and integral duality structures
###### Definition 2.3.
Let $\Delta=(\mathcal{S},\omega,\mathcal{D})$ be a duality structure defined
on $M$. A Dirac system for $\Delta$ is a smooth fiber sub-bundle
$\mathcal{L}\subset\mathcal{S}$ of full symplectic lattices in
$(\mathcal{S},\omega)$ which is preserved by the parallel transport
$\mathcal{T}_{\Delta}$ of $\mathcal{D}$. That is, for any piecewise smooth
path $\gamma:[0,1]\rightarrow M$ we have:
$\mathcal{T}_{\Delta}(\gamma)(\mathcal{L}_{\gamma(0)})=\mathcal{L}_{\gamma(1)}\,.$
A pair:
${\bm{\Delta}}\stackrel{{\scriptstyle{\rm def.}}}{{=}}(\Delta,\mathcal{L})\,$
consisting of a duality structure $\Delta$ and a choice of Dirac system
$\mathcal{L}$ for $\Delta$ is called an integral duality structure.
Let $\mathrm{Dual}_{\mathbb{Z}}(M)$ be the groupoid of integral duality
structures defined on $M$, with the obvious notion of isomorphism. For every
$m\in M$, the fiber $(\mathcal{S}_{m},\omega_{m},\mathcal{L}_{m})$ of an
integral duality structure ${\bm{\Delta}}=(\Delta,\mathcal{L})$ of rank $2n$
is an _integral symplectic space_ of dimension $2n$ (see Appendix B for
details). Each such space defines an integral vector (called its _type_)
belonging to a certain subset $\mathrm{Div}^{n}$ of $\mathbb{Z}_{>0}^{n}$
endowed with a partial order relation $\leq$ which makes it into a complete
meet semi-lattice. The type of an integral symplectic space depends only on
its isomorphism class (which it determines uniquely) and every element of
$\mathrm{Div}^{n}$ is realized as a type. Moreover, the group of automorphisms
of an integral symplectic space of type $\mathfrak{t}\in\mathrm{Div}^{n}$ is
isomorphic with the modified Siegel modular group
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ of type $\mathfrak{t}$ (see
Definition B.6). This is a discrete subgroup of $\mathrm{Sp}(2n,\mathbb{R})$
which contains the Siegel modular group $\mathrm{Sp}(2n,\mathbb{Z})$, to which
it reduces when $\mathfrak{t}$ equals the principal type
$\delta=(1,\ldots,1)$. If $\mathfrak{t}$ and $\mathfrak{t}^{\prime}$ are
elements of $\mathrm{Div}^{n}$ such that
$\mathfrak{t}\leq\mathfrak{t}^{\prime}$, then the lattice $\Lambda$ of any
integral symplectic space $(V,\omega,\Lambda)$ of type $\mathfrak{t}$ admits a
full rank sublattice $\Lambda^{\prime}$ such that
$(V,\omega,\Lambda^{\prime})$ is an integral symplectic space of type
$\mathfrak{t}^{\prime}$. In this case, we have
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\subset\mathrm{Sp}_{\mathfrak{t}^{\prime}}(2n,\mathbb{Z})$.
Since we assume that $M$ is connected and that $\mathcal{D}$ preserves
$\mathcal{L}$, the integral symplectic spaces
$(\mathcal{S}_{m},\omega_{m},\mathcal{L}_{m})$ are isomorphic to each other
through the parallel transport of $\mathcal{D}$, hence their type does not
depend on the base-point $m\in M$.
###### Definition 2.4.
The type $\mathfrak{t}_{{\bm{\Delta}}}\in\mathrm{Div}^{n}$ of an integral
duality structure ${\bm{\Delta}}=(\mathcal{S},\omega,\mathcal{D},\mathcal{L})$
(and of the corresponding Dirac system $\mathcal{L}$) is the common type of
the integral symplectic spaces $(\mathcal{S}_{m},\omega_{m},\mathcal{L}_{m})$,
where $m\in M$.
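The type can be computed concretely: in any basis of the lattice $\mathcal{L}_{m}$, the Gram matrix of $\omega_{m}$ is an antisymmetric integer matrix whose elementary divisors come in equal pairs $(t_{1},t_{1},\ldots,t_{n},t_{n})$, and $(t_{1},\ldots,t_{n})$ is the type. The following sketch (an illustration of ours, assuming this characterization from Appendix B) obtains the elementary divisors from gcds of $k\times k$ minors:

```python
from functools import reduce
from itertools import combinations
from math import gcd

def det(M):
    # Laplace expansion along the first row; fine for the small matrices here.
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def elementary_divisors(G):
    """e_k = d_k / d_{k-1}, where d_k is the gcd of all k x k minors of G."""
    n = len(G)
    d = [1]
    for k in range(1, n + 1):
        minors = [det([[G[i][j] for j in cols] for i in rows])
                  for rows in combinations(range(n), k)
                  for cols in combinations(range(n), k)]
        d.append(reduce(gcd, (abs(m) for m in minors)))
    return [d[k] // d[k - 1] for k in range(1, n + 1)]

def symplectic_type(G):
    """Type (t_1, ..., t_n) of an integral symplectic space with Gram matrix G."""
    e = sorted(elementary_divisors(G))
    assert e[0::2] == e[1::2], "divisors of an antisymmetric Gram matrix pair up"
    return tuple(e[0::2])

# Gram matrix of the standard symplectic form of type t = (1, 2):
G = [[0, 0, 1, 0],
     [0, 0, 0, 2],
     [-1, 0, 0, 0],
     [0, -2, 0, 0]]
print(symplectic_type(G))   # (1, 2)
```

Consistently with the partial order on $\mathrm{Div}^{n}$, passing to a full rank sublattice can only increase the type: in the standard rank-$2$ integral symplectic space of type $(1)$, the sublattice spanned by $e_{1}$ and $2e_{2}$ has Gram matrix $\begin{pmatrix}0&2\\-2&0\end{pmatrix}$ and hence type $(2)$.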
Let $\mathrm{Dual}_{\mathbb{Z}}^{\mathfrak{t}}(M)$ be the full subgroupoid of
$\mathrm{Dual}_{\mathbb{Z}}(M)$ consisting of integral duality structures of
type $\mathfrak{t}$.
###### Definition 2.5.
A duality structure $\Delta$ is called semiclassical if it admits a Dirac
system.
Not every duality structure is semiclassical, as the following proposition
shows.
###### Proposition 2.6.
A duality structure $\Delta=(\mathcal{S},\omega,\mathcal{D})$ admits a Dirac
system of type $\mathfrak{t}\in\mathrm{Div}^{n}$ if and only if the holonomy
representation of $\mathcal{D}$ at some point (equivalently, at any
point) $m\in M$:
$\mathcal{T}_{\Delta}|_{\pi_{1}(M,m)}\colon\pi_{1}(M,m)\to\mathrm{Sp}(\mathcal{S}_{m},\omega_{m})\simeq\mathrm{Sp}(2n,\mathbb{R})$
can be conjugated so that its image lies inside the modified Siegel modular
group:
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\subset\mathrm{Sp}(2n,\mathbb{R})$
of type $\mathfrak{t}$. In this case, $\Delta$ is semiclassical and the
greatest lower bound of those $\mathfrak{t}\in\mathrm{Div}^{n}$ with this
property is called the type of $\Delta$ and denoted by
$\mathfrak{t}_{\Delta}$.
###### Proof.
Assume $\Delta=(\mathcal{S},\omega,\mathcal{D})$ admits a Dirac system
$\mathcal{L}$ of type $\mathfrak{t}$. Then (as explained in Appendix B) the
automorphism group of every fiber
$(\mathcal{S}_{m},\omega_{m},\mathcal{L}_{m})$ is isomorphic to
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$, which is the automorphism group
of the standard integral symplectic space
$(\mathbb{R}^{2n},\omega_{2n},\Lambda_{\mathfrak{t}})$ of type $\mathfrak{t}$.
Since $\mathcal{L}$ is preserved by the parallel transport of $\mathcal{D}$,
it follows that we have
$\mathcal{T}_{\Delta}(\pi_{1}(M,m))\subset\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
after identifying $(\mathcal{S}_{m},\omega_{m},\mathcal{L}_{m})$ with
$(\mathbb{R}^{2n},\omega_{2n},\Lambda_{\mathfrak{t}})$ and hence
$\mathrm{Sp}(\mathcal{S}_{m},\omega_{m},\mathcal{L}_{m})$ with
$\mathrm{Sp}(2n,\mathbb{R})$. The converse follows immediately from the
associated bundle construction. ∎
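In coordinates, the condition in this proposition is straightforward to test: writing $\Omega_{\mathfrak{t}}$ for the Gram matrix of the standard symplectic form of type $\mathfrak{t}$, an integer matrix preserving $\Omega_{\mathfrak{t}}$ defines an automorphism of the standard integral symplectic space of type $\mathfrak{t}$. A minimal sketch of this check (our own helper names, assuming the standard lattice $\mathbb{Z}^{2n}$; for $n=1$ and principal type it recovers $\mathrm{Sp}(2,\mathbb{Z})=\mathrm{SL}(2,\mathbb{Z})$):

```python
def omega_t(t):
    """Gram matrix of the standard symplectic form of type t = (t_1, ..., t_n)."""
    n = len(t)
    O = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        O[i][n + i] = t[i]
        O[n + i][i] = -t[i]
    return O

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def in_sp_t(T, t):
    """True iff the integer matrix T preserves omega_t: T^t O T = O."""
    O = omega_t(t)
    return matmul(matmul(transpose(T), O), T) == O

S = [[0, -1], [1, 0]]
U = [[1, 1], [0, 1]]
print(in_sp_t(S, (1,)), in_sp_t(U, (1,)))   # True True
print(in_sp_t([[2, 0], [0, 1]], (1,)))      # False: rescales the form
```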
###### Remark 2.7.
A duality structure $\Delta=(\mathcal{S},\omega,\mathcal{D})$ of rank $2n$
admits a Dirac system $\mathcal{L}$ of type
$\mathfrak{t}=(t_{1},\ldots,t_{n})\in\mathrm{Div}^{n}$ if and only if $M$
admits an open cover $\mathcal{U}=(U_{\alpha})_{\alpha\in I}$ such that for
each $\alpha\in I$ there exists a $\mathcal{D}$-flat frame
$(e_{1}^{(\alpha)},\ldots,e_{2n}^{(\alpha)})$ of $\mathcal{S}|_{U_{\alpha}}$
with the property:
$\omega(e_{i}^{(\alpha)},e_{j}^{(\alpha)})=\omega(e_{n+i}^{(\alpha)},e_{n+j}^{(\alpha)})=0\,,\,\,\omega(e_{i}^{(\alpha)},e_{n+j}^{(\alpha)})=t_{i}\delta_{ij}\,,\,\,\omega(e_{n+i}^{(\alpha)},e_{j}^{(\alpha)})=-t_{i}\delta_{ij}\,,\quad\forall\,\,i,j=1,\ldots,n\,.$
(11)
For each $\alpha,\beta\in I$ with $U_{\alpha}\cap U_{\beta}\neq\emptyset$ we
have:
$e_{k}^{(\beta)}=\sum_{l=1}^{2n}T^{(\alpha\beta)}_{lk}e^{(\alpha)}_{l}\,,\quad\forall\,\,k=1,\ldots,2n~{}~{}\mathrm{on}~{}~{}U_{\alpha}\cap
U_{\beta}\,,$
where $T^{(\alpha\beta)}_{lk}\in\mathbb{Z}$ for all $k,l=1,\ldots,2n$.
Furthermore:
$\oplus_{k=1}^{2n}\mathbb{Z}e^{(\alpha)}_{k}(m)=\mathcal{L}_{m}\,,\quad\forall\,\,\alpha\in
I\,,\quad\forall\,\,m\in U_{\alpha}\,,$
and the matrices $T^{(\alpha\beta)}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(T^{(\alpha\beta)}_{kl})_{k,l=1,\ldots,2n}$ belong to
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$.
Every integral duality structure
${\bm{\Delta}}=(\mathcal{S},\omega,\mathcal{D},\mathcal{L})$ on $M$ defines a
parallel transport functor:
$\mathcal{T}_{{\bm{\Delta}}}\colon\Pi_{1}(M)\to\mathrm{Symp}_{\mathbb{Z}}\,,$
where $\mathrm{Symp}_{\mathbb{Z}}$ is the groupoid of integral symplectic
spaces defined in Appendix B. This functor associates the integral symplectic
vector space $(\mathcal{S}_{m},\omega_{m},\mathcal{L}_{m})$ to every point
$m\in M$ and the isomorphism of symplectic vector spaces
$\mathcal{T}_{{\bm{\Delta}}}(\mathbf{c})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathcal{T}_{\Delta}(\mathbf{c})$ to every homotopy class
$\mathbf{c}\in\Pi_{1}(M)(m,m^{\prime})$ of curves from $m\in M$ to
$m^{\prime}\in M$. The functor $\mathcal{T}_{{\bm{\Delta}}}$ defines a flat
system of integral symplectic vector spaces (that is, a
$\mathrm{Symp}_{\mathbb{Z}}$-valued local system) on $M$. As in Section 1, the
correspondence ${\bm{\Delta}}\mapsto\mathcal{T}_{{\bm{\Delta}}}$ extends to an
equivalence of groupoids:
$\mathcal{T}:\mathrm{Dual}_{\mathbb{Z}}(M)\xrightarrow{\sim}[\Pi_{1}(M),\mathrm{Symp}_{\mathbb{Z}}]$
between $\mathrm{Dual}_{\mathbb{Z}}(M)$ and the functor groupoid
$[\Pi_{1}(M),\mathrm{Symp}_{\mathbb{Z}}]$. Thus one can identify integral
duality structures with $\mathrm{Symp}_{\mathbb{Z}}$-valued local systems
defined on $M$. This implies the following result, whose proof is similar to
that of Proposition 1.7.
###### Proposition 2.8.
For any $m_{0}\in M$, the set of isomorphism classes of integral duality
structures of type $\mathfrak{t}$ defined on $M$ is in bijection with the
character variety:
$\mathfrak{R}(\pi_{1}(M,m_{0}),\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z}))\stackrel{{\scriptstyle{\rm
def.}}}{{=}}{\rm
Hom}(\pi_{1}(M,m_{0}),\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z}))/\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})~{}~{}.$
(12)
For later reference, we introduce the following:
###### Definition 2.9.
An _integral electromagnetic structure_ defined on $M$ is a pair:
$\bm{\Xi}\stackrel{{\scriptstyle{\rm def.}}}{{=}}(\Xi,\mathcal{L})\,,$
where $\Xi=(\mathcal{S},\omega,\mathcal{D},\mathcal{J})$ is an electromagnetic
structure on $M$ and $\mathcal{L}$ is a Dirac system for the duality structure
$\Delta=(\mathcal{S},\omega,\mathcal{D})$.
### 2.4. Siegel systems
Integral duality structures are associated to certain local systems of free
abelian groups of even rank defined on $M$.
###### Definition 2.10.
Let $n\in\mathbb{Z}_{>0}$. A Siegel system of rank $2n$ on $M$ is a bundle $Z$
of free abelian groups of rank $2n$ defined on $M$ equipped with a reduction
of its structure group from $\mathrm{GL}(2n,\mathbb{Z})$ to a subgroup of some
modified Siegel modular group $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$,
where $\mathfrak{t}\in\mathrm{Div}^{n}$. The greatest lower bound of the set
of those $\mathfrak{t}\in\mathrm{Div}^{n}$ with this property is called the
type of $Z$ and is denoted by $\mathfrak{t}_{Z}$.
Let $\mathrm{Sg}(M)$ be the groupoid of Siegel systems on $M$ and
$\mathrm{Sg}_{\mathfrak{t}}(M)$ be the full sub-groupoid of Siegel systems of
type $\mathfrak{t}$.
###### Remark 2.11.
Let $\underline{\mathbb{R}}$ be the trivial real line bundle on $M$ and
$U_{0}:\Pi_{1}(M)\rightarrow\Phi(\underline{\mathbb{R}})$ be the transport
functor induced by its trivial flat connection. The following statements are
equivalent for a bundle $Z$ of free abelian groups of rank $2n$ defined on
$M$:
(a)
$Z$ is a Siegel system of type $\mathfrak{t}$ defined on $M$.
(b)
The vector bundle $\mathcal{S}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}Z\otimes_{\mathbb{Z}}\underline{\mathbb{R}}$ carries a symplectic
pairing $\omega$ which is invariant under the parallel transport
$U_{Z}\otimes_{\mathbb{Z}}U_{0}$ of the flat connection induced from $Z$ and
which makes the triplet $(\mathcal{S}_{m},\omega_{m},Z_{m})$ into an integral
symplectic space of type $\mathfrak{t}$ for any $m\in M$.
(c)
For any $m\in M$, the $2n$-dimensional vector space
$\mathcal{S}_{m}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}Z_{m}\otimes_{\mathbb{Z}}\mathbb{R}$ carries a symplectic form
$\omega_{m}$ which makes the triplet $(\mathcal{S}_{m},\omega_{m},Z_{m})$ into
an integral symplectic space of type $\mathfrak{t}$ and we have
$\mathrm{Hol}_{m}(Z)=\mathrm{Aut}(\mathcal{S}_{m},\omega_{m},Z_{m})$.
By definition, any Siegel system $Z$ of type $\mathfrak{t}$ is isomorphic with
the bundle of groups with fiber $\mathbb{Z}^{2n}$ associated to a principal
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$-bundle $Q$ through the left action
$\ell:\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\rightarrow\mathrm{Aut}_{\mathbb{Z}}(\mathbb{Z}^{2n})$
of $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ on $\mathbb{Z}^{2n}$:
$Z\simeq Z(Q)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}Q\times_{\ell}\mathbb{Z}^{2n}\simeq\hat{M}\times_{\ell\circ\alpha_{m}(Q)}\mathbb{Z}^{2n}~{}~{},$
where
$\alpha_{m}(Q):\pi_{1}(M,m)\rightarrow\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
is the monodromy morphism of $Q$ at $m$ and ${\hat{M}}$ is the universal cover
of $M$. The monodromy morphism $\sigma_{m}(Z)$ of $Z$ at $m$ identifies with
$\ell\circ\alpha_{m}(Q)$ upon choosing an integral symplectic basis of the
integral symplectic space $(\mathcal{S}_{m},\omega_{m},Z_{m})$. This also
identifies the monodromy group
$\mathrm{Hol}_{m}(Z)\subset\mathrm{Aut}_{\mathbb{Z}}(Z_{m})$ with
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$. Conversely, let
$\mathrm{Fr}_{\mathfrak{t}}(\mathbb{Z}^{2n})$ be the set of those bases of the
free $\mathbb{Z}$-module $\mathbb{Z}^{2n}$ in which the standard symplectic
form $\omega_{2n}$ of
$\mathbb{R}^{2n}=\mathbb{Z}^{2n}\otimes_{\mathbb{Z}}\mathbb{R}$ takes the form
$\omega_{\mathfrak{t}}$ (see Appendix B). Then
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ has a natural free and transitive
left action $\mu_{\mathfrak{t}}$ on this set. Taking the set of bases of each
fiber gives the bundle of frames $\mathrm{Fr}(Z)$ of the Siegel system $Z$,
which is a principal $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$-bundle. The
previous discussion implies the following result.
###### Proposition 2.12.
The correspondences $Q\mapsto Z(Q)$ and $Z\mapsto\mathrm{Fr}(Z)$ extend to
mutually quasi-inverse equivalence of groupoids between
$\mathrm{Prin}_{\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})}(M)$ and
$\mathrm{Sg}_{\mathfrak{t}}(M)$.
In particular, the set of isomorphism classes of Siegel systems of type
$\mathfrak{t}$ defined on $M$ is in bijection with the character variety
$\mathfrak{R}(\pi_{1}(M,m_{0}),\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z}))$ of
equation (12). Let $\underline{1}$ be the unit section of the trivial real
line bundle $\underline{\mathbb{R}}=M\times\mathbb{R}$.
###### Proposition 2.13.
Let $Z$ be a Siegel system defined on $M$. Then there exists a unique integral
duality structure ${\bm{\Delta}}=(\mathcal{S},\omega,\mathcal{D},\mathcal{L})$
such that $\mathcal{S}=Z\otimes_{\mathbb{Z}}\underline{\mathbb{R}}$ and
$\mathcal{L}$ is the image of $Z$ in $\mathcal{S}$ under the fiberwise map
$z\mapsto z\otimes_{\mathbb{Z}}1$, and this duality structure has
the same type as $Z$. Moreover, the parallel transport
$U_{\mathcal{D}}:\Pi_{1}(M)\rightarrow\Phi(\mathcal{S})$ of $\mathcal{D}$ is
given by:
$U_{\mathcal{D}}(\mathbf{c})=U_{Z}(\mathbf{c})\otimes_{\mathbb{Z}}U_{0}(m,m^{\prime})\,,\quad\forall\,\,\mathbf{c}\in\Pi_{1}(M)(m,m^{\prime})\,,\quad\forall\,\,m,m^{\prime}\in
M~{}~{},$ (13)
where $U_{Z}:\Pi_{1}(M)\rightarrow\Phi(Z)$ is the monodromy transport of $Z$
and $U_{0}:\Pi_{1}(M)\rightarrow\Phi(\underline{\mathbb{R}})$ is the trivial
transport of $\underline{\mathbb{R}}$.
###### Remark 2.14.
The fiber of $\mathcal{L}$ at $m\in M$ is given by:
$\mathcal{L}_{m}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{z\otimes_{\mathbb{Z}}1~{}|~{}z\in Z_{m}\\}\equiv Z_{m}~{}~{},$
(14)
where $1$ is the unit element of the field $\mathbb{R}$. It is clear that the
transport $U_{\mathcal{D}}$ defined by (13) gives bijections from
$\mathcal{L}_{m}$ to $\mathcal{L}_{m^{\prime}}$ and hence preserves
$\mathcal{L}$. Any locally-constant frame $(s_{1},\ldots,s_{2n})$ of $Z$
defined above a non-empty open set $V\subset M$ determines a local flat
symplectic frame $(e_{1},\ldots,e_{2n})$ of $\Delta$ defined above $V$ given
by:
$e_{i}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}s_{i}\otimes_{\mathbb{Z}}1\,,\quad\forall\,\,i=1,\ldots,2n~{}~{}$
(15)
and the matrix of $\omega$ with respect to this frame is integer-valued.
###### Proof.
The restriction of $U_{\mathcal{D}}$ to $\mathcal{L}$ gives isomorphisms of
groups between the fibers (14) of $\mathcal{L}$ and hence it must agree with
the monodromy transport $U_{Z}$ of $Z$ in the sense that:
$U_{\mathcal{D}}(\mathbf{c})(z\otimes_{\mathbb{Z}}1)=U_{Z}(\mathbf{c})(z)\otimes_{\mathbb{Z}}1\,,\quad\forall\,\,\mathbf{c}\in\Pi_{1}(M)(m,m^{\prime})\,,\quad\forall\,\,m,m^{\prime}\in
M~{}~{}.$
This implies (13) since $U_{\mathcal{D}}$ is $\mathbb{R}$-linear and
$\mathcal{L}_{m}$ are full lattices in
$\mathcal{S}_{m}=Z_{m}\otimes_{\mathbb{Z}}\mathbb{R}$. The associated bundle
construction of Remark 2.15 gives a $\mathcal{D}$-flat symplectic pairing
$\omega$ on $\mathcal{S}$ such that the integral symplectic spaces
$(\mathcal{S}_{m},\omega_{m},\mathcal{L}_{m})$ have type $\mathfrak{t}_{Z}$ and
such that $\mathcal{L}$ is preserved by the parallel transport of
$\mathcal{D}$. ∎
###### Remark 2.15.
Notice that $\mathcal{S}$ identifies with the vector bundle associated to the
frame bundle $\mathrm{Fr}(Z)$ of $Z$ through the linear representation
$q=\varphi\circ\iota:\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\rightarrow\mathrm{Aut}_{\mathbb{R}}(\mathbb{R}^{2n})$
defined by the inclusion morphism:
$\iota:\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\hookrightarrow\mathrm{Sp}(2n,\mathbb{R})~{}~{},$
where $\mathrm{Sp}(2n,\mathbb{R})$ acts on $\mathbb{R}^{2n}$ through the
fundamental representation
$\varphi:\mathrm{Sp}(2n,\mathbb{R})\rightarrow\mathrm{Aut}_{\mathbb{R}}(\mathbb{R}^{2n})$.
The representation $q$ preserves the canonical symplectic form
$\omega_{\mathfrak{t}}$ of $\mathbb{R}^{2n}$ and the latter induces the
symplectic pairing $\omega$ of $\mathcal{S}$.
We denote by ${\bm{\Delta}}(Z)$ the integral duality structure defined by $Z$
as in Proposition 2.13. Conversely, any integral duality structure
${\bm{\Delta}}=(\mathcal{S},\omega,\mathcal{D},\mathcal{L})$ defines a Siegel
system $Z({\bm{\Delta}})$ upon setting:
$Z({\bm{\Delta}})\stackrel{{\scriptstyle{\rm def.}}}{{=}}\mathcal{L}$
and it is easy to see that ${\bm{\Delta}}$ is isomorphic with
${\bm{\Delta}}(\mathcal{L})$. Therefore, we obtain the following result:
###### Proposition 2.16.
The correspondences $Z\mapsto{\bm{\Delta}}(Z)$ and ${\bm{\Delta}}\mapsto
Z({\bm{\Delta}})$ extend to mutually quasi-inverse equivalences of groupoids
between $\mathrm{Sg}_{\mathfrak{t}}(M)$ and
$\mathrm{Dual}^{\mathfrak{t}}_{\mathbb{Z}}(M)$.
### 2.5. Bundles of integral symplectic torus groups
In the following we use the notion of integral symplectic torus group
discussed in Appendix B.
###### Definition 2.17.
A bundle of integral symplectic torus groups of rank $2n$ is a bundle
$\mathcal{A}$ of $2n$-dimensional torus groups defined on $M$ whose structure
group reduces from $\mathrm{GL}(2n,\mathbb{Z})$ to a subgroup of some modified
Siegel modular group $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$, where
$\mathfrak{t}\in\mathrm{Div}^{n}$. The greatest lower bound
$\mathfrak{t}_{\mathcal{A}}$ of the set of elements
$\mathfrak{t}\in\mathrm{Div}^{n}$ with this property is called the type of
$\mathcal{A}$.
Let $\mathcal{A}$ be a bundle of integral symplectic torus groups of type
$\mathfrak{t}$. Then the zero elements of the fibers determine a section
$s_{0}\in\mathcal{C}^{\infty}(M,\mathcal{A})$. The structure group
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ acts on $\mathcal{A}_{m}$
preserving the distinguished point $s_{0}(m)$ and the abelian group structure
of each fiber. Since such a bundle is associated to a principal
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$-bundle, it carries an induced flat
Ehresmann connection whose holonomy group is a subgroup of
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ and whose holonomy representation
at $m\in M$ we denote by:
$\rho_{m}(\mathcal{A}):\pi_{1}(M,m)\rightarrow\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\subset\mathrm{GL}(2n,\mathbb{Z})~{}~{}.$
(16)
The parallel transport of this connection preserves the image of the section
$s_{0}$ as well as a fiberwise symplectic structure which makes each fiber
into an integral symplectic torus group of type $\mathfrak{t}$ in the sense of
Appendix B. We have:
$\mathcal{A}\simeq{\hat{M}}\times_{\rho_{m}(\mathcal{A})}\left(\mathbb{R}^{2n}/\mathbb{Z}^{2n}\right)~{}~{}.$
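The fiberwise structure asserted here can be verified in a small example (an illustrative sketch with our own helper names, using exact rational coordinates on the fiber $\mathbb{R}^{2}/\mathbb{Z}^{2}$): an element of $\mathrm{Sp}(2,\mathbb{Z})$ descends to the quotient torus, fixes the distinguished point $s_{0}$ and acts as a group automorphism of the fiber:

```python
import math
from fractions import Fraction

def mod1(x):
    """Reduce a pair of Fractions modulo Z^2, i.e. into [0, 1)^2."""
    return tuple(v - math.floor(v) for v in x)

def act(A, x):
    """Fiberwise action of an integer matrix A on the torus R^2/Z^2."""
    return mod1(tuple(sum(A[i][j] * x[j] for j in range(2)) for i in range(2)))

A = [[1, 1], [0, 1]]                     # an element of Sp(2, Z) = SL(2, Z)
zero = (Fraction(0), Fraction(0))
p = (Fraction(1, 3), Fraction(1, 2))
q = (Fraction(1, 4), Fraction(0))

# The distinguished point is fixed and the action is additive on the fiber:
print(act(A, zero) == zero)   # True
print(act(A, mod1((p[0] + q[0], p[1] + q[1]))) ==
      mod1(tuple(act(A, p)[i] + act(A, q)[i] for i in range(2))))   # True
```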
###### Remark 2.18.
When $M$ is compact, the fiber bundle $\mathcal{A}$ is virtually trivial by
[17, Theorem 1.1.], i.e. the pull-back of $\mathcal{A}$ to some finite
covering space of $M$ is topologically trivial.
It follows from the above that bundles of integral symplectic torus groups of
type $\mathfrak{t}$ defined on $M$ are classified by group morphisms (16),
i.e. the set of isomorphism classes of such bundles is in bijection with the
character variety
$\mathfrak{R}(\pi_{1}(M,m_{0}),\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z}))$,
which also classifies integral duality structures. This also follows from the
results below.
###### Proposition 2.19.
Let ${\bm{\Delta}}=(\mathcal{S},\omega,\mathcal{D},\mathcal{L})$ be an
integral duality structure of type $\mathfrak{t}$ defined on $M$. Then the
fiberwise quotient:
$\mathcal{A}({\bm{\Delta}})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathcal{S}/\mathcal{L}\,,$
is a bundle of integral symplectic torus groups of type $\mathfrak{t}$ defined
on $M$.
###### Proof.
It is clear that $\mathcal{A}({\bm{\Delta}})$ is a fiber bundle of even-
dimensional torus groups, whose zero section $s_{0}$ is inherited from the
zero section of $\mathcal{S}$. The fiberwise symplectic pairing $\omega$ of
$\mathcal{S}$ descends to a translation-invariant collection of symplectic
forms on the fibers of $\mathcal{A}({\bm{\Delta}})$, making the latter into
integral symplectic torus groups of type $\mathfrak{t}$. Since the parallel
transport of $\mathcal{D}$ preserves both $\mathcal{L}$ and $\omega$, this
bundle of torus groups inherits a flat Ehresmann connection which preserves
both its symplectic structure and the image of the section $s_{0}$ and whose
holonomy group reduces to $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$. In
particular, the structure group of $\mathcal{A}({\bm{\Delta}})$ reduces to
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$. ∎
As explained in Appendix B, any integral symplectic torus group
$\mathbf{A}=(A,\Omega)$ of type $\mathfrak{t}$ determines an integral
symplectic space $(H_{1}(A,\mathbb{R}),\omega,H_{1}(A,\mathbb{Z}))$, where
$\omega$ is the cohomology class of $\Omega$, viewed as a symplectic pairing
on the vector space $H_{1}(A,\mathbb{R})$.
###### Proposition 2.20.
Given a bundle $\mathcal{A}$ of integral symplectic torus groups of type
$\mathfrak{t}$ defined on $M$, let $\mathcal{S}_{\mathcal{A}}$ be the vector
bundle with fiber at $m\in M$ given by $H_{1}(\mathcal{A}_{m},\mathbb{R})$ and
$\mathcal{L}_{\mathcal{A}}$ be the bundle of discrete Abelian groups with
fiber at $m\in M$ given by $H_{1}(\mathcal{A}_{m},\mathbb{Z})$. Moreover, let
$\omega_{\mathcal{A}}$ be the fiberwise symplectic pairing defined on
$\mathcal{S}_{\mathcal{A}}$ through:
$\omega_{\mathcal{A},m}=\omega_{m}\,,\quad\forall\,\,m\in M~{}~{},$
where $\omega_{m}$ is the cohomology class of the symplectic form $\Omega_{m}$
of the fiber $\mathcal{A}_{m}$. Then the flat Ehresmann connection of
$\mathcal{A}$ induces a flat linear connection $\mathcal{D}_{\mathcal{A}}$ on
$\mathcal{S}_{\mathcal{A}}$ which makes the quadruplet:
${\bm{\Delta}}(\mathcal{A})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(\mathcal{S}_{\mathcal{A}},\omega_{\mathcal{A}},\mathcal{D}_{\mathcal{A}},\mathcal{L}_{\mathcal{A}})\,,$
into an integral duality structure of type $\mathfrak{t}$ defined on $M$.
###### Proof.
$\mathcal{D}_{\mathcal{A}}$ is the connection induced by the flat Ehresmann
connection of $\mathcal{A}$ on the bundle of first homology groups of the
fibers, which preserves the bundle $\mathcal{L}_{\mathcal{A}}$ of integral
first homology groups of these fibers. The remaining statements are immediate.
∎
The two propositions above imply the following statement.
###### Proposition 2.21.
The correspondences ${\bm{\Delta}}\mapsto\mathcal{A}({\bm{\Delta}})$ and
$\mathcal{A}\mapsto{\bm{\Delta}}(\mathcal{A})$ extend to mutually quasi-
inverse equivalences between the groupoid
$\mathrm{Dual}^{\mathfrak{t}}_{\mathbb{Z}}(M)$ of integral duality structures
of type $\mathfrak{t}$ defined on $M$ and the groupoid
$\mathfrak{T}_{\mathfrak{t}}(M)$ of bundles of integral symplectic torus
groups of type $\mathfrak{t}$ defined on $M$.
Combining everything, we have a chain of equivalences of groupoids:
$\mathrm{Prin}_{\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})}(M)\simeq\mathrm{Sg}_{\mathfrak{t}}(M)\simeq\mathfrak{T}_{\mathfrak{t}}(M)\simeq\mathrm{Dual}^{\mathfrak{t}}_{\mathbb{Z}}(M)~{}~{}.$
### 2.6. The exponential sheaf sequence defined by a Siegel system
Let $Z$ be a Siegel system of type $\mathfrak{t}\in\mathrm{Div}^{n}$ on $M$.
Let $\mathcal{S}_{Z}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}Z\otimes_{\mathbb{Z}}\underline{\mathbb{R}}$ be the underlying
vector bundle of the integral duality structure ${\bm{\Delta}}(Z)$ defined by
$Z$ and $\mathcal{A}_{Z}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathcal{S}_{Z}/Z$ be the associated bundle of integral symplectic
torus groups. The exponential sequence of the torus group
$\mathbb{R}^{2n}/\Lambda_{\mathfrak{t}}\simeq\mathbb{R}^{2n}/\mathbb{Z}^{2n}$
(where the canonical symplectic lattice $\Lambda_{\mathfrak{t}}$ of type
$\mathfrak{t}$ is defined in Appendix B) induces a short exact sequence of
bundles of abelian groups (where $j$ is the inclusion):
$0\rightarrow Z\stackrel{{\scriptstyle
j}}{{\rightarrow}}\mathcal{S}_{Z}\stackrel{{\scriptstyle\exp}}{{\rightarrow}}\mathcal{A}_{Z}\rightarrow
0~{}~{}.$
In turn, this induces an exact sequence of sheaves of abelian groups:
$0\rightarrow\mathcal{C}(Z)\stackrel{{\scriptstyle
j}}{{\rightarrow}}\mathcal{C}^{\infty}(\mathcal{S}_{Z})\stackrel{{\scriptstyle\exp}}{{\rightarrow}}\mathcal{C}^{\infty}(\mathcal{A}_{Z})\rightarrow
0~{}~{},$
where $\mathcal{C}(Z)$ is the sheaf of continuous (and hence locally constant)
sections of $Z$. This induces a long exact sequence in sheaf cohomology, of
which we are interested in the following piece:
$H^{1}(M,\mathcal{C}(Z))\stackrel{{\scriptstyle
j_{\ast}}}{{\rightarrow}}H^{1}(M,\mathcal{C}^{\infty}(\mathcal{S}_{Z}))\stackrel{{\scriptstyle\exp_{\ast}}}{{\rightarrow}}H^{1}(M,\mathcal{C}^{\infty}(\mathcal{A}_{Z}))\stackrel{{\scriptstyle\delta}}{{\rightarrow}}H^{2}(M,\mathcal{C}(Z))\rightarrow
H^{2}(M,\mathcal{C}^{\infty}(\mathcal{S}_{Z}))\,,$ (17)
where $\delta$ is the Bockstein morphism. The sheaf
$\mathcal{C}^{\infty}(\mathcal{S}_{Z})$ is fine and hence acyclic since $M$ is
paracompact and $\mathcal{S}_{Z}$ is a vector bundle on $M$. Thus:
$H^{j}(M,\mathcal{C}^{\infty}(\mathcal{S}_{Z}))=0\,,\quad\forall j>0\,,$
which by the long sequence above implies that $\delta$ is an isomorphism of
groups. We also have $H^{\ast}(M,\mathcal{C}(Z))=H^{\ast}(M,Z)$, where in the
right hand side $Z$ is viewed as a local system with $\mathbb{Z}^{2n}$
coefficients. Hence we can view $\delta$ as an isomorphism of abelian groups:
$\delta:H^{1}(M,\mathcal{C}^{\infty}(\mathcal{A}_{Z}))\xrightarrow{\sim}H^{2}(M,Z)~{}~{}.$
(18)
### 2.7. The lattice of charges of an integral duality structure
For every integral duality structure ${\bm{\Delta}}=(\Delta,\mathcal{L})$ on
$M$, the sheaf $\mathcal{C}_{\mathrm{flat}}^{\infty}(\mathcal{A})$ of flat
smooth local sections of the bundle $\mathcal{A}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathcal{A}({\bm{\Delta}})=\mathcal{S}/\mathcal{L}$ of integral
symplectic torus groups defined by ${\bm{\Delta}}$ fits into the short exact
sequence of sheaves of abelian groups:
$0\to\mathcal{C}(\mathcal{L})\xrightarrow{j_{0}}\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{S})\xrightarrow{\exp}\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{A})\to
0\,,$
where $j_{0}$ is the inclusion. This induces a long exact sequence in sheaf
cohomology, of which we are interested in the following terms:
$\ldots\rightarrow
H^{1}(M,\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{A}))\xrightarrow{\delta_{0}}H^{2}(M,\mathcal{C}(\mathcal{L}))\xrightarrow{j_{0\ast}}H^{2}(M,\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{S}))\xrightarrow{\exp_{\ast}}H^{2}(M,\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{A}))\rightarrow\ldots~{}~{},$
where $\delta_{0}$ is the Bockstein morphism. Notice that
$H^{\ast}(M,\mathcal{C}(\mathcal{L}))\simeq H^{\ast}(M,Z)$, where
$Z=\mathcal{L}$ is the Siegel system defined by $\mathcal{L}$, which we view
as a local system with $\mathbb{Z}^{2n}$ coefficients. Moreover, we have
$H^{2}(M,\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{A}))=H^{2}(M,\mathcal{A}_{\mathrm{disc}})$,
the right hand side being the cohomology with coefficients in the local system
defined by $\mathcal{A}$ when the fibers of the latter are endowed with the
discrete topology. Since
$H^{\ast}(M,\mathcal{C}_{\mathrm{flat}}^{\infty}(\mathcal{S}))=H^{\ast}_{\mathcal{D}}(M,\mathcal{S})$,
the sequence above can be written as:
$\ldots\rightarrow
H^{1}(M,\mathcal{A}_{\mathrm{disc}})\xrightarrow{\delta_{0}}H^{2}(M,Z)\xrightarrow{j_{0\ast}}H^{2}_{\mathcal{D}}(M,\mathcal{S})\xrightarrow{\exp_{\ast}}H^{2}(M,\mathcal{A}_{\mathrm{disc}})\rightarrow\ldots$
(19)
Denote by $H^{2}(M,Z)^{\mathrm{tf}}\subset H^{2}(M,Z)$ the torsion free part
of $H^{2}(M,Z)$.
###### Definition 2.22.
The lattice:
$L_{\bm{\Delta}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}j_{0\ast}(H^{2}(M,Z))=j_{0\ast}(H^{2}(M,Z)^{\mathrm{tf}})\subset
H^{2}_{\mathcal{D}}(M,\mathcal{S})$
is called the lattice of charges defined by the integral duality structure
${\bm{\Delta}}$. Elements of this lattice are called _integral cohomology
classes_ or charges.
###### Proposition 2.23.
There exists a natural isomorphism:
$H^{k}_{\mathcal{D}}(M,\mathcal{S})\simeq
H^{k}(M,Z)\otimes_{\mathbb{Z}[\bm{\pi}]}\mathbb{R}\simeq
H^{k}(M,Z)^{\mathrm{tf}}\otimes_{\mathbb{Z}[\bm{\pi}]}\mathbb{R}~{}~{},$ (20)
for all $k$. In particular, the kernel of $j_{0\ast}$ coincides with the
torsion part of $H^{2}(M,Z)$ and $j_{0\ast}(H^{2}(M,Z))$ is a full lattice in
$H^{2}_{\mathcal{D}}(M,\mathcal{S})$.
###### Proof.
Let $\bm{\pi}\stackrel{{\scriptstyle{\rm def.}}}{{=}}\pi_{1}(M,m)$ and
$\mathbb{Z}[\bm{\pi}]$ be the group ring of $\bm{\pi}$. The universal
coefficient theorem for cohomology with local coefficients of [29] gives a
short exact sequence:
$0\to H^{k}(M,Z)\otimes_{\mathbb{Z}[\bm{\pi}]}\mathbb{R}\to
H^{k}_{\mathcal{D}}(M,\mathcal{S})\to\mathrm{Tor}_{\mathbb{Z}[\bm{\pi}]}(H^{k+1}(M,Z),\mathbb{R})\to
0\,,$
where $\mathbb{R}$ is the $\mathbb{Z}[\bm{\pi}]$-module corresponding to the
trivial representation of $\bm{\pi}$ in $\mathbb{R}$. Since the latter module
is free, we have
$\mathrm{Tor}_{\mathbb{Z}[\bm{\pi}]}(H^{k+1}(M,Z),\mathbb{R})=0$ and the
sequence above gives the natural isomorphism (20). ∎
###### Remark 2.24.
We have a commutative diagram with exact rows:
$$\begin{CD}
0 @>>> \mathcal{C}(Z) @>{j_{0}}>> \mathcal{C}_{\mathrm{flat}}^{\infty}(\mathcal{S}_{Z}) @>{\exp}>> \mathcal{C}_{\mathrm{flat}}^{\infty}(\mathcal{A}_{Z}) @>>> 0\\
@. @V{\mathrm{id}}VV @V{\tau}VV @V{\iota}VV @.\\
0 @>>> \mathcal{C}(Z) @>{j}>> \mathcal{C}^{\infty}(\mathcal{S}_{Z}) @>{\exp}>> \mathcal{C}^{\infty}(\mathcal{A}_{Z}) @>>> 0
\end{CD}$$
where $j_{0},j$ and $\tau,\iota$ are inclusions. In turn, this induces a
commutative diagram with exact rows:
$\begin{array}{ccccccccc}
H^{1}(M,Z) & \stackrel{j_{0\ast}}{\longrightarrow} & H^{1}_{\mathcal{D}_{P}}(M,\mathcal{S}_{Z}) & \stackrel{\exp_{\ast}}{\longrightarrow} & H^{1}(M,\mathcal{A}_{Z,\mathrm{disc}}) & \stackrel{\delta_{0}}{\longrightarrow} & H^{2}(M,Z) & \stackrel{j_{0\ast}}{\longrightarrow} & H^{2}_{\mathcal{D}_{P}}(M,\mathcal{S}_{Z})\\
\big\downarrow{\scriptstyle\mathrm{id}} & & \big\downarrow{\scriptstyle\tau_{\ast}} & & \big\downarrow{\scriptstyle\iota_{\ast}} & & \big\downarrow{\scriptstyle\mathrm{id}} & & \big\downarrow{\scriptstyle\tau_{\ast}}\\
H^{1}(M,Z) & \stackrel{j_{\ast}}{\longrightarrow} & 0 & \stackrel{\exp_{\ast}}{\longrightarrow} & H^{1}(M,\mathcal{C}^{\infty}(\mathcal{A}_{Z})) & \stackrel{\delta}{\longrightarrow} & H^{2}(M,Z) & \stackrel{j_{\ast}}{\longrightarrow} & 0
\end{array}$
In particular, we have $\delta_{0}=\delta\circ\iota_{\ast}$.
### 2.8. The DSZ integrality condition
Let $(M,g)$ be an oriented and connected Lorentzian four-manifold. Given an
integral duality structure ${\bm{\Delta}}=(\Delta,\mathcal{L})$, we implement
the DSZ condition by restricting the configuration space
$\mathrm{Conf}(M,\Delta)$ to a subset determined by the charge lattice
$L_{\bm{\Delta}}$. We will show in later sections that integral field
strengths are adjoint curvatures of connections defined on a certain principal
bundle.
###### Definition 2.25.
Let ${\bm{\Delta}}=(\Delta,\mathcal{L})$ be an integral duality structure on
$(M,g)$ with underlying duality structure
$\Delta=(\mathcal{S},\omega,\mathcal{D})$. The set of integral electromagnetic
field strength configurations defined by ${\bm{\Delta}}$ on $M$ is the
following subset of $\mathrm{Conf}(M,\Delta)$:
$\mathrm{Conf}(M,{\bm{\Delta}})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{\mathcal{V}\in\mathrm{Conf}(M,\Delta)\,\,|\,\,2\pi[\mathcal{V}]_{\mathcal{D}}\in
L_{\bm{\Delta}}\right\\}\,,$
where $[\mathcal{V}]_{\mathcal{D}}\in H^{2}_{\mathcal{D}}(M,\mathcal{S})$ is
the $\mathrm{d}_{\mathcal{D}}$-cohomology class of the $\mathcal{S}$-valued
two-form $\mathcal{V}\in\mathrm{Conf}(M,\Delta)$ and $L_{\bm{\Delta}}\subset
H^{2}_{\mathcal{D}}(M,\mathcal{S})$ is the lattice of charges defined by
${\bm{\Delta}}$.
###### Definition 2.26.
Let $\bm{\Xi}=({\bm{\Delta}},\mathcal{J})$ be an integral electromagnetic
structure defined on $(M,g)$ with electromagnetic structure
$\Xi=(\Delta,\mathcal{J})$ and integral duality structure
${\bm{\Delta}}=(\Delta,\mathcal{L})$. The set of integral field strength
solutions defined by $\bm{\Xi}$ on $(M,g)$ is the subset of
$\mathrm{Sol}(M,g,\Xi)$ defined through:
$\mathrm{Sol}(M,g,\bm{\Xi})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Sol}(M,g,\Xi)\cap\mathrm{Conf}(M,{\bm{\Delta}})$
and hence consists of those elements of $\mathrm{Conf}(M,{\bm{\Delta}})$ which
satisfy the equations of motion (i.e. the polarized self-duality condition) of
the classical abelian gauge theory defined by $\Xi$.
### 2.9. Integral duality groups
The DSZ integrality condition restricts the classical duality groups of
Subsection 1.9 to certain subgroups.
###### Definition 2.27.
Fix an integral duality structure ${\bm{\Delta}}$ on $(M,g)$.
* •
The integral unbased pseudo-duality group defined by ${\bm{\Delta}}$ is the
group $\mathrm{Aut}({\bm{\Delta}})\subset\mathrm{Aut}(\Delta)$ formed by those
elements $u\in\mathrm{Aut}(\Delta)$ which satisfy
$u(\mathcal{L})=\mathcal{L}$.
* •
The integral unbased duality group defined by ${\bm{\Delta}}$ is the subgroup
$\mathrm{Aut}(g,{\bm{\Delta}})$ of $\mathrm{Aut}({\bm{\Delta}})$ which covers
${\rm Iso}(M,g)$.
* •
The integral duality group defined by ${\bm{\Delta}}$ is the subgroup
$\mathrm{Aut}_{b}({\bm{\Delta}})$ of $\mathrm{Aut}({\bm{\Delta}})$ consisting
of those elements which cover the identity of $M$.
###### Definition 2.28.
Fix an integral electromagnetic structure
$\bm{\Xi}=({\bm{\Delta}},\mathcal{J})$ on $(M,g)$.
* •
The integral unbased unitary pseudo-duality group defined by $\bm{\Xi}$ is the
group:
$\mathrm{Aut}(\bm{\Xi})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{u\in\mathrm{Aut}({\bm{\Delta}})~{}|~{}\mathcal{J}_{u}=\mathcal{J}\\}$
* •
The integral unbased unitary duality group defined by $\bm{\Xi}$ is:
$\mathrm{Aut}(g,\bm{\Xi})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{u\in\mathrm{Aut}(\bm{\Xi})\,\,|\,\,g_{u}=g\right\\}~{}~{}.$
* •
The integral unitary duality group defined by $\bm{\Xi}$ is the subgroup
$\mathrm{Aut}_{b}(\bm{\Xi})$ of $\mathrm{Aut}(\bm{\Xi})$ consisting of those
elements which cover the identity of $M$.
It is easy to check that the maps $\mathbb{A}_{u}$, for $u$ belonging to the
groups defined above, restrict to transformations between the sets of integral
configurations and solutions analogous to those of Subsection 1.9. The
discrete duality groups introduced above are the global counterparts of the
discrete duality groups considered in the physics literature on local abelian
gauge theory. The latter is usually taken to be $\mathrm{Sp}(2n,\mathbb{Z})$ due to
the fact that the symplectic lattice of charges appearing in the local
treatment of abelian gauge theory is traditionally assumed to have principal
type $\mathfrak{t}=\delta=(1,\ldots,1)$. As explained in [32] and recalled in
Appendix A, $\mathrm{Sp}(2n,\mathbb{Z})$ is not always the correct duality
group even in the local case, since the local lattice of charges need not be
principal. In Section 4.1, we consider a natural gauge-theoretic extension of
the discrete duality groups defined above, which clarifies the geometric
origin of electromagnetic duality.
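As a toy illustration of the preceding remark, one can model $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ concretely as the group of integer matrices preserving a symplectic form of type $\mathfrak{t}$. The minimal sketch below is ours, not notation from the paper: the block matrix model of $\Omega_{\mathfrak{t}}$ and the helper names are assumptions.

```python
# Hedged sketch (ours): model Sp_t(2n, Z) as the integer matrices gamma with
#     gamma^T . Omega_t . gamma = Omega_t ,
# where Omega_t = [[0, D], [-D, 0]] and D = diag(t) encodes the principal
# type t = (t_1, ..., t_n).

def omega_t(t):
    """Matrix of the integral symplectic form of type t in a Frobenius basis."""
    n = len(t)
    O = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        O[i][n + i] = t[i]
        O[n + i][i] = -t[i]
    return O

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def in_sp_t(gamma, t):
    """Test whether the integer matrix gamma preserves Omega_t."""
    O = omega_t(t)
    return mat_mul(mat_mul(transpose(gamma), O), gamma) == O
```

For $n=1$ one has $\gamma^{T}\Omega_{\mathfrak{t}}\gamma=\det(\gamma)\,\Omega_{\mathfrak{t}}$ for any $2\times 2$ matrix, so the condition reduces to $\det\gamma=1$ and $\mathrm{Sp}_{\mathfrak{t}}(2,\mathbb{Z})=\mathrm{SL}(2,\mathbb{Z})$ for every type; the deviation from the principal case appears only for $n\geq 2$.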
### 2.10. Trivial integral duality structures
Let $Z$ be a trivializable Siegel system of type
$\mathfrak{t}\in\mathrm{Div}^{n}$ and
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ be the associated duality structure,
where $\mathcal{S}=Z\otimes_{\mathbb{Z}}\underline{\mathbb{R}}$. Pick a flat
trivialization $\tau:\mathcal{S}\xrightarrow{\sim}M\times S$ of $\mathcal{S}$,
where $S\simeq\mathbb{R}^{2n}$. This takes $\omega$ into a symplectic pairing
$\omega_{S}$ on the vector space $S$ and restricts to an isomorphism
$\tau_{0}:Z\xrightarrow{\sim}M\times\Lambda$ between $Z$ and $M\times\Lambda$,
where $\Lambda$ is a full symplectic lattice in $(S,\omega_{S})$. Let
$A\stackrel{{\scriptstyle{\rm def.}}}{{=}}S/\Lambda$ be the torus group
defined by $(S,\Lambda)$ and $\mathcal{A}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathcal{S}/Z$ be the bundle of torus groups defined by
$(\mathcal{S},Z)$. Then $\tau$ induces a trivialization
$\bar{\tau}:\mathcal{A}\xrightarrow{\sim}M\times A$ of $\mathcal{A}$, which
fits into a commutative diagram of fiber bundles:
$\textstyle{Z~{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces~{}}$$\scriptstyle{\tau_{0}}$$\scriptstyle{j}$$\textstyle{~{}\mathcal{S}~{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\tau}$$\textstyle{\mathcal{A}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\bar{\tau}}$$\textstyle{M\times\Lambda~{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{i}$$\textstyle{~{}M\times
S~{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{M\times
A}$
where $i$ and $j$ are inclusions. Since $\tau$ identifies $\mathcal{D}$ with
the trivial connection on $M\times S$, it induces an isomorphism of graded
vector spaces:
$\tau_{\ast}:H^{\ast}_{\mathcal{D}}(M,\mathcal{S})\xrightarrow{\sim}H^{\ast}(M,S)$
whose restriction coincides with the isomorphism of graded abelian groups:
$\tau_{0\ast}:H^{\ast}(M,Z)\xrightarrow{\sim}H^{\ast}(M,\Lambda)$
induced by $\tau_{0}$. Moreover, $\bar{\tau}$ induces an isomorphism of graded
abelian groups
$\bar{\tau}_{\ast}:H^{\ast}(M,\mathcal{A})\xrightarrow{\sim}H^{\ast}(M,A)$.
Hence the diagram above induces an isomorphism of long exact sequences of
abelian groups:
$\begin{array}{ccccccccccc}
\ldots & \longrightarrow & H^{1}(M,\mathcal{A}) & \longrightarrow & H^{2}(M,Z) & \stackrel{j_{\ast}}{\longrightarrow} & H^{2}(M,\mathcal{S}) & \longrightarrow & H^{2}(M,\mathcal{A}) & \longrightarrow & \ldots\\
& & \big\downarrow{\scriptstyle\bar{\tau}_{\ast}} & & \big\downarrow{\scriptstyle\tau_{0\ast}} & & \big\downarrow{\scriptstyle\tau_{\ast}} & & \big\downarrow{\scriptstyle\bar{\tau}_{\ast}} & & \\
\ldots & \longrightarrow & H^{1}(M,A) & \longrightarrow & H^{2}(M,\Lambda) & \stackrel{i_{\ast}}{\longrightarrow} & H^{2}(M,S) & \longrightarrow & H^{2}(M,A) & \longrightarrow & \ldots
\end{array}~{}~{}.$
Since $\Lambda$ is free while $S$ is a vector space, we have
isomorphisms of abelian groups:
$H^{\ast}(M,S)\simeq
H^{\ast}(M,\mathbb{R})\otimes_{\mathbb{R}}S~{}~{},~{}~{}H^{\ast}(M,\Lambda)\simeq_{\mathbb{Z}}H^{\ast}(M,\mathbb{Z})\otimes_{\mathbb{Z}}\Lambda~{}~{}.$
and:
$H^{\ast}(M,S)\simeq H^{\ast}(M,\Lambda)\otimes_{\mathbb{Z}}\mathbb{R}\simeq
H^{\ast}(M,\Lambda)^{\mathrm{tf}}\otimes_{\mathbb{Z}}\mathbb{R}~{}~{}.$
The latter agrees with the isomorphism (20) through the maps $\tau_{0\ast}$
and $\tau_{\ast}$. The map $i_{\ast}:H^{k}(M,\Lambda)\rightarrow H^{k}(M,S)$
is obtained by tensoring the map $H^{k}(M,\mathbb{Z})\rightarrow
H^{k}(M,\mathbb{R})$ with the inclusion $\Lambda\subset S$, while its
restriction $i_{\ast}^{\mathrm{tf}}:H^{k}(M,\Lambda)^{\mathrm{tf}}\rightarrow
H^{k}(M,S)$ is obtained by tensoring the inclusion $\Lambda\subset S$ with the
map $H^{k}(M,\mathbb{Z})^{\mathrm{tf}}\rightarrow H^{k}(M,\mathbb{R})$. Since
the latter is injective, it follows that $i_{\ast}^{\mathrm{tf}}$ is injective
and hence $H^{k}(M,\Lambda)^{\mathrm{tf}}$ identifies with a full lattice in
$H^{k}(M,S)$. Since $A$ and $(S,+)$ are divisible groups while
$H_{0}(M,\mathbb{Z})=\mathbb{Z}$ and $\Lambda$ are free, the universal
coefficient sequence for cohomology gives isomorphisms:
$\displaystyle H^{k}(M,S)\simeq{\rm
Hom}_{\mathbb{Z}}(H_{k}(M,\mathbb{Z}),S)={\rm
Hom}_{\mathbb{Z}}(H_{k}(M,\mathbb{Z})^{\mathrm{tf}},S)\,,$ $\displaystyle
H^{k}(M,\Lambda)\simeq{\rm Hom}_{\mathbb{Z}}(H_{k}(M,\mathbb{Z}),\Lambda)={\rm
Hom}_{\mathbb{Z}}(H_{k}(M,\mathbb{Z})^{\mathrm{tf}},\Lambda)$ (21)
and:
$H^{k}(M,A)\simeq{\rm Hom}_{\mathbb{Z}}(H_{k}(M,\mathbb{Z}),A)\simeq{\rm
Hom}_{\mathbb{Z}}(H_{k}(M,\mathbb{Z})^{\mathrm{tf}},A)$
for all $k$. The first of these is the period isomorphism:
$\mathrm{per}(\omega)(c):=\mathrm{per}_{c}(\omega)=c\cap\omega=\int_{c}\omega\,,\quad\forall\,\,\omega\in
H^{k}(M,S)\,,\quad\forall\,\,c\in H_{k}(M,\mathbb{Z})\,.$
The map $i_{\ast}:H^{k}(M,\Lambda)\rightarrow H^{k}(M,S)$ agrees
with the injective map induced by the inclusion $\Lambda\hookrightarrow S$
through the isomorphisms (2.10). Hence:
$H^{k}(M,\Lambda)\simeq\mathrm{per}^{-1}({\rm
Hom}_{\mathbb{Z}}(H_{k}(M,\mathbb{Z}),\Lambda))=i_{\ast}(H^{k}(M,\Lambda))=\\{\omega\in
H^{k}(M,S)~{}|~{}\mathrm{per}_{c}(\omega)\in\Lambda\,\,\forall\,c\in
H_{k}(M,\mathbb{Z})\\}~{}~{}.$
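After choosing a basis of $H_{k}(M,\mathbb{Z})^{\mathrm{tf}}$ and of $S$, the criterion above becomes a finite computation: a class lies in $i_{\ast}(H^{k}(M,\Lambda))$ exactly when its period vector lies in the lattice spanned by a basis matrix of $\Lambda$. A minimal sketch (ours, not from the paper); exact rational arithmetic avoids floating-point issues:

```python
from fractions import Fraction

# Hedged sketch: Lambda is given by an invertible basis matrix B (columns are
# lattice generators, rational entries); a period vector p lies in Lambda iff
# the unique solution x of B x = p over Q has integer entries.

def solve_rational(B, p):
    """Solve B x = p over Q by Gauss-Jordan elimination (B square, invertible)."""
    n = len(B)
    M = [[Fraction(B[i][j]) for j in range(n)] + [Fraction(p[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = 1 / M[col][col]
        M[col] = [x * inv for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def periods_in_lattice(B, p):
    """True iff the period vector p lies in the lattice spanned by cols of B."""
    return all(c.denominator == 1 for c in solve_rational(B, p))
```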
## 3\. Siegel bundles and connections
In this section we use the notion of integral affine symplectic torus, for
which we refer the reader to Appendix B.
### 3.1. Automorphisms of integral affine symplectic tori
We denote by $\mathrm{Aff}_{\mathfrak{t}}$ the group of affine
symplectomorphisms of the integral affine symplectic torus
$\mathbbold{A}_{\mathfrak{t}}=(\mathbb{A},\Omega_{\mathfrak{t}})$ of type
$\mathfrak{t}\in\mathrm{Div}^{n}$. Here $\mathbb{A}$ is the underlying
$2n$-dimensional affine torus (which is a principal homogeneous space for the
torus group $\mathrm{U}(1)^{2n}\simeq\mathbb{R}^{2n}/\mathbb{Z}^{2n}$), while
$\Omega_{\mathfrak{t}}$ is the integral symplectic form of type $\mathfrak{t}$
on $\mathbb{A}$, which is translationally-invariant. As explained in Appendix
B, $\mathrm{Aff}_{\mathfrak{t}}$ is a non-compact disconnected Lie group whose
connected component of the identity is the $2n$-dimensional torus group
$\mathrm{U}(1)^{2n}$. We have
$\pi_{0}(\mathrm{Aff}_{\mathfrak{t}})\simeq\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
and:
$\mathrm{Aff}_{\mathfrak{t}}\simeq\mathrm{U}(1)^{2n}\rtimes\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\,,$
(22)
where $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ acts on $\mathrm{U}(1)^{2n}$
through the restriction of the group morphism defined in equation (82) of
Appendix B, an action which we denote by juxtaposition. Thus
$\mathrm{Aff}_{\mathfrak{t}}$ identifies with the set
$\mathrm{U}(1)^{2n}\times\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$, endowed
with the composition rule:
$(a_{1},\gamma_{1})\,(a_{2},\gamma_{2})=(a_{1}+\gamma_{1}a_{2},\gamma_{1}\gamma_{2})\,,\quad\forall\,\,a_{1},a_{2}\in\mathrm{U}(1)^{2n}\,,\quad\forall\,\,\gamma_{1},\gamma_{2}\in\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\,.$
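The composition rule above is just the semidirect product law of (22). A minimal sketch (ours): torus points are tuples of rationals mod $1$ and $\gamma$ acts through its integer matrix; the helper names are assumptions, not notation from the paper.

```python
from fractions import Fraction

# Hedged model of Aff_t as the set U(1)^{2n} x Sp_t(2n, Z) with the
# semidirect composition rule (a1, g1)(a2, g2) = (a1 + g1 a2, g1 g2).

def act(gamma, a):
    # gamma acting on a torus point, componentwise mod 1
    return tuple(sum(Fraction(gamma[i][j]) * a[j] for j in range(len(a))) % 1
                 for i in range(len(gamma)))

def mat_mul(g, h):
    n = len(g)
    return tuple(tuple(sum(g[i][k] * h[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def compose(x, y):
    (a1, g1), (a2, g2) = x, y
    return (tuple((u + v) % 1 for u, v in zip(a1, act(g1, a2))),
            mat_mul(g1, g2))
```

From this rule one checks directly that the inverse is $(a,\gamma)^{-1}=(-\gamma^{-1}a,\gamma^{-1})$.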
Let
$\ell:\mathrm{Aff}_{\mathfrak{t}}\rightarrow\mathrm{Diff}(\mathrm{Aff}_{\mathfrak{t}})$ be
the left action of $\mathrm{Aff}_{\mathfrak{t}}$ on itself:
$\ell(g)(g^{\prime})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}gg^{\prime}\,,\quad\forall\,\,g,g^{\prime}\in\mathrm{Aff}_{\mathfrak{t}}\,,$
and let
$\mathrm{pr}_{1}:\mathrm{Aff}_{\mathfrak{t}}\rightarrow\mathrm{U}(1)^{2n}$ and
$\mathrm{pr}_{2}:\mathrm{Aff}_{\mathfrak{t}}\rightarrow\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
be the projections of the set-theoretic decomposition
$\mathrm{Aff}_{\mathfrak{t}}=\mathrm{U}(1)^{2n}\times\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$.
Notice that $\mathrm{pr}_{2}$ is a morphism of groups. Define left actions
$\ell_{1}$ and $\ell_{2}$ of $\mathrm{Aff}_{\mathfrak{t}}$ on
$\mathrm{U}(1)^{2n}$ and $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ through:
$\ell_{1}(g)(a)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{pr}_{1}(\ell(g)(a,1))\,,\quad\ell_{2}(g)(\gamma)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{pr}_{2}(\ell(g)(0,\gamma))\,~{},~{}\forall
g\in\mathrm{Aff}_{\mathfrak{t}}~{},~{}\forall
a\in\mathrm{U}(1)^{2n}~{},~{}\forall\gamma\in\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})~{}~{}.$
Then $\ell_{1}$ is given by:
$\ell_{1}(a,\gamma)(a^{\prime})=a+\gamma
a^{\prime}\,,\quad\forall\,\,\gamma\in\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\,,\quad\forall\,\,a,a^{\prime}\in\mathrm{U}(1)^{2n}~{}~{}.$
(23)
This action is transitive with stabilizer isomorphic to
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$. On the other hand, $\ell_{2}$ is
given by:
$\ell_{2}(a,\gamma)(\gamma^{\prime})=\gamma\gamma^{\prime}=\mathrm{pr}_{2}(a,\gamma)\gamma^{\prime}\,,\quad\forall\,\,\gamma,\gamma^{\prime}\in\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\,,\quad\forall\,\,a\in\mathrm{U}(1)^{2n}\,$
(24)
and is transitive with stabilizer isomorphic to $\mathrm{U}(1)^{2n}$. This
gives the right-split short exact sequence:
$1\rightarrow\mathrm{U}(1)^{2n}\rightarrow\mathrm{Aff}_{\mathfrak{t}}\rightarrow\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\rightarrow
1~{}~{}.$
Notice that $\ell=\ell_{1}\times\ell_{2}$. The Lie algebra
$\mathfrak{aff}_{\mathfrak{t}}$ of $\mathrm{Aff}_{\mathfrak{t}}$ is abelian
and coincides with the Lie algebra of $\mathrm{U}(1)^{2n}$:
$\mathfrak{aff}_{\mathfrak{t}}=\mathbb{R}^{2n}\simeq
H_{1}(\mathbb{A}_{\mathfrak{t}},\mathbb{R})~{}~{}.$
The exponential map
$\exp:\mathfrak{aff}_{\mathfrak{t}}\rightarrow\mathrm{Aff}_{\mathfrak{t}}$ has
kernel $\Lambda_{\mathfrak{t}}\simeq
H_{1}(\mathbb{A}_{\mathfrak{t}},\mathbb{Z})$ and image $\mathrm{U}(1)^{2n}$,
giving the exponential sequence:
$0\rightarrow\Lambda_{\mathfrak{t}}\rightarrow\mathfrak{aff}_{\mathfrak{t}}\stackrel{{\scriptstyle\exp}}{{\rightarrow}}\mathrm{U}(1)^{2n}\rightarrow
0~{}~{}.$ (25)
###### Lemma 3.1.
The adjoint representation
$\mathrm{Ad}\colon\mathrm{Aff}_{\mathfrak{t}}\to\mathrm{GL}(2n,\mathbb{R})$ of
$\mathrm{Aff}_{\mathfrak{t}}$ coincides with its fundamental linear
representation, that is:
$\mathrm{Ad}(a,\gamma)(v)=\gamma(v)\,,\quad\forall\,\,(a,\gamma)\in\mathrm{Aff}_{\mathfrak{t}}\,,\quad\forall\,\,v\in\mathbb{R}^{2n}\,.$
(26)
In particular, we have $\mathrm{Ad}=j\circ\mathrm{pr}_{2}$, where
$j:\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\rightarrow\mathrm{GL}(2n,\mathbb{R})$
is the fundamental representation of
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$.
###### Proof.
Let $\alpha=(\gamma,1)\colon
I\to\mathrm{Aff}_{\mathfrak{t}}=\mathrm{U}(1)^{2n}\times\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$,
where $\gamma\colon I\to\mathrm{U}(1)^{2n}$, be a smooth path in
$\mathrm{Aff}_{\mathfrak{t}}$ such that $\alpha(0)=\mathrm{Id}$. Set:
$\frac{\mathrm{d}}{\mathrm{d}t}\alpha(t)|_{t=0}=v\in\mathbb{R}^{2n}\,.$
For every
$x=(x_{1},x_{2})\in\mathrm{Aff}_{\mathfrak{t}}=\mathrm{U}(1)^{2n}\times\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
we have:
$x\,\alpha(t)\,x^{-1}=(x_{1},x_{2})\,(\gamma(t),1)\,(-x^{-1}_{2}x_{1},x_{2}^{-1})=(x_{2}\gamma(t),1)\,.$
Hence:
$\frac{\mathrm{d}}{\mathrm{d}t}(x\,\alpha(t)\,x^{-1})|_{t=0}=x_{2}(v)\,,$
which immediately implies:
$\mathrm{Ad}(x)=j\circ\mathrm{pr}_{2}(x)\,,$
for every $x\in\mathrm{Aff}_{\mathfrak{t}}$, which completes the proof. ∎
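The computation in the proof can be checked mechanically in the pair model of $\mathrm{Aff}_{\mathfrak{t}}$: conjugating a torus element $(a,1)$ by $x=(x_{1},x_{2})$ returns $(x_{2}a,1)$, so the induced action on the Lie algebra sees only $\mathrm{pr}_{2}(x)$. A hedged sketch (ours, with $n=1$ in the examples; the helper names are assumptions):

```python
from fractions import Fraction

# Hedged check of Lemma 3.1: in the model Aff_t = U(1)^{2n} x Sp_t(2n, Z),
# x (a, 1) x^{-1} = (x2 a, 1), so Ad factors through pr_2.

def act(g, a):
    return tuple(sum(Fraction(g[i][j]) * a[j] for j in range(len(a))) % 1
                 for i in range(len(g)))

def mat_mul(g, h):
    n = len(g)
    return tuple(tuple(sum(g[i][k] * h[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def inv2(g):
    # inverse of a 2x2 integer matrix with determinant +-1
    (a, b), (c, d) = g
    det = a * d - b * c
    return ((d // det, -b // det), (-c // det, a // det))

def conjugate(x, y):
    (x1, x2) = x
    # (x1, x2)^{-1} = (-x2^{-1} x1, x2^{-1})
    x_inv = (act(inv2(x2), tuple(-t for t in x1)), inv2(x2))
    def compose(p, q):
        (a1, g1), (a2, g2) = p, q
        return (tuple((u + v) % 1 for u, v in zip(a1, act(g1, a2))),
                mat_mul(g1, g2))
    return compose(compose(x, y), x_inv)
```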
### 3.2. Siegel bundles
Let $M$ be a connected manifold.
###### Definition 3.2.
A Siegel bundle $P$ of rank $n$ and type $\mathfrak{t}\in\mathrm{Div}^{n}$ is
a principal bundle on $M$ with structure group $\mathrm{Aff}_{\mathfrak{t}}$.
An isomorphism of Siegel bundles is a based isomorphism of principal bundles.
Let $\mathrm{Sieg}(M)$ be the groupoid of Siegel bundles defined on $M$ and
$\mathrm{Sieg}_{\mathfrak{t}}(M)$ be the full subgroupoid of Siegel bundles of
type $\mathfrak{t}$. Fix a Siegel bundle $P$ of type
$\mathfrak{t}\in\mathrm{Div}^{n}$, whose projection we denote by $\pi$. We
introduce several fiber bundles associated to $P$.
#### 3.2.1. The bundle of integral affine symplectic tori defined by $P$
###### Definition 3.3.
A fiber bundle $\mathfrak{A}$ defined on $M$ is called a bundle of integral
affine symplectic tori of rank $n$ if its fibers are $2n$-dimensional tori and
the structure group of $\mathfrak{A}$ reduces to a subgroup of
$\mathrm{Aff}_{\mathfrak{t}}$ for some $\mathfrak{t}\in\mathrm{Div}^{n}$. The
smallest element $\mathfrak{t}_{\mathfrak{A}}\in\mathrm{Div}^{n}$ with this
property is called the type of $\mathfrak{A}$.
Notice that bundles of integral symplectic torus groups coincide with those
bundles of integral affine symplectic tori which admit a smooth global
section. Indeed, such a section gives a further reduction of structure group
from $\mathrm{Aff}_{\mathfrak{t}}$ to
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$. Given a Siegel bundle $P$ of type
$\mathfrak{t}\in\mathrm{Div}^{n}$ defined on $M$, the fiber bundle:
$\mathfrak{A}(P)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}P\times_{\ell_{1}}\mathrm{U}(1)^{2n}$
associated to $P$ through the action (23) is a bundle of integral affine
symplectic tori of type $\mathfrak{t}$. The fibers of the latter admit
integral symplectic forms of type $\mathfrak{t}$ which vary smoothly over $M$.
The group $\mathrm{Sp}(V,\omega)$ acts freely and transitively on the set
$\mathrm{Fr}(V,\omega,\Lambda)$ of integral symplectic bases of any integral
symplectic space $(V,\omega,\Lambda)$. Any bundle $\mathfrak{A}$ of integral
affine symplectic tori of type $\mathfrak{t}$ is associated through the action
$\ell_{1}$ to its Siegel bundle $P(\mathfrak{A})$ of unpointed torus
symplectic frames, which has type $\mathfrak{t}$ and whose fiber at $m\in M$
is defined through:
$P(\mathfrak{A})_{m}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Fr}(H_{1}(\mathfrak{A}_{m},\mathbb{R}),\omega_{m},H_{1}(\mathfrak{A}_{m},\mathbb{Z}))\times\mathfrak{A}_{m}~{}~{}.$
Here $\omega_{m}\stackrel{{\scriptstyle{\rm def.}}}{{=}}[\Omega_{m}]\in
H^{2}(\mathfrak{A}_{m},\mathbb{R})\simeq\wedge^{2}H_{1}(\mathfrak{A}_{m},\mathbb{R})^{\vee}$
is the cohomology class of the symplectic form $\Omega_{m}$ of
$\mathfrak{A}_{m}$, viewed as a symplectic pairing defined on
$H_{1}(\mathfrak{A}_{m},\mathbb{R})$. More precisely, we have:
###### Proposition 3.4.
The correspondences $P\rightarrow\mathfrak{A}(P)$ and $\mathfrak{A}\rightarrow
P(\mathfrak{A})$ extend to mutually quasi-inverse equivalences of groupoids
between $\mathrm{Sieg}(M)$ and the groupoid of bundles of integral affine
symplectic tori and these equivalences preserve type.
This statement parallels a similar correspondence which holds for affine torus
bundles (see [11] as well as Theorem 2.2 and Remark 2.3 in [12]).
#### 3.2.2. The Siegel system defined by $P$
Given a Siegel bundle of type $\mathfrak{t}\in\mathrm{Div}^{n}$, consider the
bundle of discrete abelian groups defined through:
$Z(P)_{m}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}H_{1}(\mathfrak{A}(P)_{m},\mathbb{Z})\,,\quad\forall\,\,m\in
M~{}~{}.$
Since torus translations act trivially on
$H_{1}(\mathfrak{A}(P)_{m},\mathbb{Z})$, the structure group of $Z(P)$ reduces
to $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$. Thus $Z(P)$ is a Siegel system
on $M$. Moreover, $Z(P)$ is isomorphic with the bundle of discrete abelian
groups associated to $P$ through the projection morphism
$\mathrm{pr}_{2}:\mathrm{Aff}_{\mathfrak{t}}\rightarrow\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$,
when the latter is viewed as a left action of $\mathrm{Aff}_{\mathfrak{t}}$
through automorphisms of the group $(\mathbb{Z}^{2n},+)$.
###### Definition 3.5.
$Z(P)$ is called the Siegel system defined by $P$.
Notice that the monodromy of $P$ at a point $m\in M$ acts through
automorphisms of the integral symplectic space
$(H_{1}(\mathfrak{A}(P)_{m},\mathbb{R}),H_{1}(\mathfrak{A}(P)_{m},\mathbb{Z}),[\Omega_{m}])$.
#### 3.2.3. The adjoint bundle and integral duality structure of $P$
The adjoint bundle $\mathrm{ad}(P)$ of $P$ can be identified with the tensor
product $Z(P)\otimes_{\mathbb{Z}}\underline{\mathbb{R}}$, whose fiber at $m\in
M$ is given by:
$\mathrm{ad}(P)_{m}=Z(P)_{m}\otimes_{\mathbb{Z}}\mathbb{R}\simeq
H_{1}(\mathfrak{A}(P)_{m},\mathbb{R})~{}~{}.$
Notice that $\mathrm{ad}(P)$ carries the fiberwise symplectic pairing
$\omega_{P}$ given by $(\omega_{P})_{m}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}[\Omega_{m}]$ for all $m\in M$ (see Lemma 3.1). Since the Lie
algebra of $\mathrm{Aff}_{\mathfrak{t}}$ is abelian, the structure group of
$\mathrm{ad}(P)$ reduces to $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$. The
Siegel system $Z(P)$ is naturally a sub-bundle of $\mathrm{ad}(P)$ whose
fibers $Z(P)_{m}=H_{1}(\mathfrak{A}(P)_{m},\mathbb{Z})$ are full symplectic
lattices with respect to $[\Omega_{m}]$. The monodromy of $Z(P)$ induces a
unique flat connection $\mathcal{D}_{P}$ on $\mathrm{ad}(P)$ whose parallel
transport preserves $Z(P)$. Setting
$\mathcal{S}_{P}\stackrel{{\scriptstyle{\rm def.}}}{{=}}\mathrm{ad}(P)$, it
follows that the system:
${\bm{\Delta}}(P)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(\Delta(P),Z(P))~{}~{},~{}~{}\mathrm{where}~{}~{}\Delta(P)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(\mathcal{S}_{P},\omega_{P},\mathcal{D}_{P})\,,$
is an integral duality structure of type $\mathfrak{t}$, whose underlying
duality structure is $\Delta(P)$.
###### Proposition 3.6.
The correspondence defined above extends to an essentially surjective
functor:
${\bm{\Delta}}\colon\mathrm{Sieg}(M)\to\mathrm{Dual}_{\mathbb{Z}}(M)\,,$
which associates to every Siegel bundle $P$ of type
$\mathfrak{t}\in\mathrm{Div}^{n}$ defined on $M$ the integral duality
structure ${\bm{\Delta}}(P)$, which has type $\mathfrak{t}$.
###### Proof.
It is clear that the correspondence extends to a functor. Given
${\bm{\Delta}}=(\Delta,\mathcal{L})\in\mathrm{Dual}_{\mathbb{Z}}(M)$, denote
by $Q$ the frame bundle of the Siegel system defined by $\mathcal{L}$, which
is a principal $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$-bundle (see
Proposition 2.12). Let $P$ be the Siegel bundle associated to $Q$ through the
natural left action $l$ of $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ on
$\mathrm{Aff}_{\mathfrak{t}}$:
$P=Q\times_{l}\mathrm{Aff}_{\mathfrak{t}}\,.$
Then ${\bm{\Delta}}(P)={\bm{\Delta}}$, showing that the functor is essentially
surjective. ∎
#### 3.2.4. The bundle of integral symplectic torus groups defined by $P$
Consider a Siegel bundle $P$ defined on $M$.
###### Definition 3.7.
The bundle of integral symplectic torus groups defined by $P$ is the bundle:
$\mathcal{A}(P)=\mathcal{A}({\bm{\Delta}}(P))=\mathrm{ad}(P)/Z(P)$
of integral symplectic torus groups defined by the integral duality structure
${\bm{\Delta}}(P)$.
#### 3.2.5. Siegel bundles with trivial monodromy
###### Proposition 3.8.
Let $P$ be a Siegel bundle of rank $n$ and type
$\mathfrak{t}\in\mathrm{Div}^{n}$ defined on $M$. Then the following
statements are equivalent:
1. (a)
The Siegel system $Z(P)$ has trivial monodromy.
2. (b)
$Z(P)$ is trivial as a bundle of discrete abelian groups.
3. (c)
The structure group of $P$ reduces to the torus group $\mathrm{U}(1)^{2n}$.
4. (d)
The structure group of the bundle of integral symplectic affine tori
$\mathfrak{A}(P)$ reduces to $\mathrm{U}(1)^{2n}$.
5. (e)
The structure group of the bundle of integral symplectic torus groups
$\mathcal{A}(P)$ is trivial.
6. (f)
The duality structure $\Delta(P)$ is holonomy-trivial.
In this case, $\mathfrak{A}(P)$ identifies with a principal torus bundle and
$\mathcal{A}(P)$ is a trivial bundle of integral symplectic torus groups.
Moreover, $P$ is isomorphic with the fiber product
$\mathfrak{A}(P)\times_{M}\underline{\Gamma}$, where $\underline{\Gamma}$ is
the trivial $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$-bundle defined on $M$.
Thus $P$ identifies with a countable collection of copies of the principal
torus bundle $\mathfrak{A}(P)$, indexed by elements of
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$.
###### Proof.
The fact that $(a)$ implies $(b)$ follows from the standard characterization
of flat bundles in terms of holonomy representations of the fundamental group
of the underlying manifold. If $Z(P)$ is trivial as a bundle of discrete
groups, then the holonomy representation preserves a global frame of $Z(P)$,
which in turn implies, using the explicit form of the adjoint representation
of $\mathrm{Aff}_{\mathfrak{t}}$, that the holonomy representation takes
values in $\mathrm{U}(1)^{2n}\subset\mathrm{Aff}_{\mathfrak{t}}$. The
associated holonomy bundle defines a reduction of $P$ to a principal torus
bundle with structure group $\mathrm{U}(1)^{2n}$, which is statement $(c)$ and
immediately implies $(d)$, $(e)$ and $(f)$. Since the flat connection on
$\Delta(P)$ is by definition the real linear extension of the flat connection
of $Z(P)$, the latter has trivial monodromy if and only if $\Delta(P)$ has
trivial monodromy, that is, if and only if $\Delta(P)$ is holonomy-trivial.
This proves $(f)\Rightarrow(a)$. ∎
### 3.3. Classification of Siegel bundles
Let $P$ be a Siegel bundle of type $\mathfrak{t}\in\mathrm{Div}^{n}$ defined
on $M$ and
${\bm{\Delta}}:={\bm{\Delta}}(P)=(\mathcal{S}_{P}=\mathrm{ad}(P),\omega_{P},\mathcal{D}_{P},Z(P))$
be the integral duality structure defined by $P$. The Bockstein isomorphism
(18) reads:
$\delta:H^{1}(M,\mathcal{C}^{\infty}(\mathcal{A}(P)))\xrightarrow{\sim}H^{2}(M,Z(P))~{}~{}.$
It was shown in [11] that $P$ determines a primary characteristic class
$c^{\prime}(P)\in H^{1}(M,\mathcal{C}^{\infty}(\mathcal{A}(P)))$.
###### Definition 3.9.
The twisted Chern class of $P$ is:
$c(P)\stackrel{{\scriptstyle{\rm def.}}}{{=}}\delta(c^{\prime}(P))\in
H^{2}(M,Z(P))~{}~{}.$
Recall from Proposition 3.4 that isomorphism classes of Siegel bundles defined
on $M$ are in bijection with isomorphism classes of bundles of integral affine
symplectic tori. This allows one to classify Siegel bundles by adapting the
classification of affine torus bundles given in [11, Section 2] (see also [12,
Theorem 2.2]). Since the modifications of the argument of loc. cit. are
straightforward, we simply describe the result. Adapting the argument of [11]
we obtain:
###### Theorem 3.10.
Consider the set:
$\Sigma(M)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{(Z,c)\,|\,Z\in\mathrm{Ob}[\mathrm{Sg}(M)]\,\,\&\,\,c\in
H^{2}(M,Z)\right\\}/_{\sim},$
where $(Z,c)\sim(Z^{\prime},c^{\prime})$ if and only if there exists an
isomorphism of Siegel systems $\varphi:Z\rightarrow Z^{\prime}$ such that
$\varphi_{\ast}(c)=c^{\prime}$. Then the map:
$P\mapsto(Z(P),c(P))$
induces a bijection between the set of isomorphism classes of Siegel bundles
defined on $M$ and the set $\Sigma(M)$.
A more conceptual explanation of this result is given in [35]. Let
$\rho:\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\rightarrow\mathrm{Aut}(\mathrm{U}(1)^{2n})$
denote the action of $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ on
$\mathrm{U}(1)^{2n}$ and
$\rho_{0}:\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\rightarrow\mathrm{Aut}_{\mathbb{Z}}(\mathbb{Z}^{2n})$
be the corresponding action on $\mathbb{Z}^{2n}$. Then the classifying space
of the group $\mathrm{Aff}_{\mathfrak{t}}$ is a twisted Eilenberg-MacLane
space $L:=L_{\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})}(\mathbb{Z}^{2n},2)$ in
the sense of [27], which is a sectioned fibration over $\mathrm{B}\Gamma\simeq
K(\Gamma,1)$ (where $\Gamma=\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$) whose
fibers are homotopy equivalent with
$\mathrm{B}\mathrm{U}(1)^{2n}\simeq
K(\mathbb{Z}^{2n},2)\simeq(\mathbb{C}\mathbb{P}^{\infty})^{\times 2n}$. This
space is a homotopy two-type with:
$\pi_{1}(L)=\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})~{}~{},~{}~{}\pi_{2}(L)=\mathbb{Z}^{2n}~{}~{}.$
The results of [27] are used in [35] to show that isomorphism classes of
principal $\mathrm{Aff}_{\mathfrak{t}}$-bundles $P$ defined on a pointed space
$X$ are in bijection with isomorphism classes of pairs $(\alpha,c)$, where
$\alpha:\pi_{1}(X)\rightarrow\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ is a
morphism of groups and $c\in H^{2}(X,Z_{\alpha})$, where $Z_{\alpha}$ is the
local system with fiber $\mathbb{Z}^{2n}$ and monodromy action at the
basepoint of $X$ given by
$\rho_{0}\circ\alpha:\pi_{1}(X)\rightarrow\mathrm{Aut}_{\mathbb{Z}}(\mathbb{Z}^{2n})$.
When $X=M$ is a manifold, this local system coincides with $Z(P)$, while the
cohomology class $c$ coincides with $c(P)$.
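For orientation, we record the simply connected special case, an illustrative consequence of the classification above:

```latex
% Illustrative special case: simply connected base X, i.e. \pi_1(X)=1.
% Then the morphism \alpha is necessarily trivial and Z_\alpha is the
% constant local system with fiber \mathbb{Z}^{2n}, so the classifying
% data reduces to the class c alone:
\pi_1(X)=1
\;\Longrightarrow\;
\big\{\text{iso.\ classes of principal }\mathrm{Aff}_{\mathfrak{t}}
\text{-bundles on }X\big\}
\;\simeq\;
H^{2}\!\big(X,\mathbb{Z}^{2n}\big)
\;\simeq\;
H^{2}(X,\mathbb{Z})^{\oplus 2n}.
% This matches Proposition 3.8: with trivial monodromy, a Siegel bundle
% reduces to a principal U(1)^{2n}-bundle, classified by 2n Chern classes.
```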
### 3.4. Principal connections on Siegel bundles
Let $P$ be a Siegel bundle of type $\mathfrak{t}\in\mathrm{Div}^{n}$ defined
on a connected manifold $M$, whose projection we denote by $\pi:P\rightarrow
M$. For ease of notation, we set $G\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Aff}_{\mathfrak{t}}$, $\Gamma\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ and
$A\stackrel{{\scriptstyle{\rm def.}}}{{=}}\mathrm{U}(1)^{n}$. We denote the
abelian Lie algebra of $A$ by $\mathfrak{a}\simeq\mathbb{R}^{2n}$. Let
${\bm{\Delta}}=(\Delta,Z)$ be the integral duality structure
${\bm{\Delta}}(P)$ determined by $P$, where
$\Delta=\Delta(P)=(\mathcal{S},\omega,\mathcal{D})$ with
$\mathcal{S}=\mathcal{S}_{P}=\mathrm{ad}(P)$, $\omega=\omega_{P}$ and
$\mathcal{D}=\mathcal{D}_{P}$ and $Z=Z(P)$ is the Siegel system determined by
$P$. Let:
$\mathrm{Conn}(P)=\left\\{\mathcal{A}\in\Omega^{1}(P,\mathfrak{a})\,\,|\,\,r_{a,\gamma}^{\ast}(\mathcal{A})=\gamma^{-1}\mathcal{A}~{}~{}\,\&\,~{}~{}\mathcal{A}_{y}(X^{a}_{y})=a~{}~{}\,\forall
y\in P~{}\,\,\forall\,\,(a,\gamma)\in G\right\\}\,,$
be the set of principal connections on $P$, where $r_{g}$ denotes the right
action of $g\in G$ on $P$ and we used the fact that the adjoint representation
$\mathrm{Ad}:G\rightarrow\mathrm{Aut}(\mathfrak{a})$ is given by (26). Let:
$\Omega_{\mathrm{Ad}}(P,\mathfrak{a})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{\eta\in\Omega(P,\mathfrak{a})\,\,|\,\,r_{a,\gamma}^{\ast}(\eta)=\gamma^{-1}\eta\,\,\&\,\,\iota_{X}\eta=0\,\,\forall(a,\gamma)\in
G~{}\forall X\in V(P)\right\\}\,,$
be the set of equivariant horizontal forms on $P$, where $V(P)$ is the space
of vertical vector fields defined on $P$. Then
$\Omega_{\mathrm{Ad}}(P,\mathfrak{a})$ is naturally isomorphic with
$\Omega(M,\mathcal{S})$. In particular, the curvature
$\Omega_{\mathcal{A}}=\mathrm{d}_{\mathcal{A}}\mathcal{A}\in\Omega^{2}_{\mathrm{Ad}}(P,\mathfrak{a})$
of any principal connection $\mathcal{A}\in\mathrm{Conn}(P)$ identifies with
an $\mathcal{S}$-valued two-form
$\mathcal{V}_{\mathcal{A}}\in\Omega^{2}(M,\mathcal{S})$, which is the adjoint
curvature of $\mathcal{A}$. Since $\mathfrak{a}$ is an abelian Lie algebra, we
are in the situation considered in [35]. Hence the covariant exterior
derivative defined by $\mathcal{A}$ restricts to the ordinary exterior
derivative on the space $\Omega_{\mathrm{Ad}}(P,\mathfrak{a})$:
$\mathrm{d}_{\mathcal{A}}|_{\Omega_{\mathrm{Ad}}(P,\mathfrak{a})}=\mathrm{d}:\Omega_{\mathrm{Ad}}(P,\mathfrak{a})\rightarrow\Omega_{\mathrm{Ad}}(P,\mathfrak{a})~{}~{}.$
(27)
Moreover, the principal curvature of $\mathcal{A}$ is given by:
$\Omega_{\mathcal{A}}=\mathrm{d}\mathcal{A}$
and the Bianchi identity $\mathrm{d}_{\mathcal{A}}\Omega_{\mathcal{A}}=0$
reduces to:
$\mathrm{d}\Omega_{\mathcal{A}}=0~{}~{}.$ (28)
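Relations (27) and (28) can be checked directly; in the notation above, the following one-line computation is the whole content:

```latex
% For any \eta \in \Omega_{\mathrm{Ad}}(P,\mathfrak{a}):
\mathrm{d}_{\mathcal{A}}\eta
\,=\, \mathrm{d}\eta + [\mathcal{A}\wedge\eta]
\,=\, \mathrm{d}\eta\,,
% since the wedge-bracket vanishes identically for the abelian Lie
% algebra \mathfrak{a}. In particular, using
% \Omega_{\mathcal{A}} = \mathrm{d}\mathcal{A}, this gives (28):
\mathrm{d}_{\mathcal{A}}\Omega_{\mathcal{A}}
\,=\, \mathrm{d}\Omega_{\mathcal{A}}
\,=\, \mathrm{d}\,\mathrm{d}\mathcal{A} \,=\, 0\,.
```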
As explained in [35], relation (27) implies that all principal connections on
$P$ induce the same linear connection on the adjoint bundle
$\mathrm{ad}(P)=\mathcal{S}$, which coincides with the flat connection
$\mathcal{D}$ of the duality structure $\Delta$ defined by $P$. Moreover, the
adjoint curvature $\mathcal{V}_{\mathcal{A}}\in\Omega^{2}(M,\mathcal{S})$
satisfies:
$\mathrm{d}_{\mathcal{D}}\mathcal{V}_{\mathcal{A}}=0~{}~{}.$
Let $\mathrm{Sieg}^{c}(M)$ be the groupoid of Siegel bundles with connection,
whose objects are pairs $(P,\mathcal{A})$ where $P$ is a Siegel bundle and
$\mathcal{A}\in\mathrm{Conn}(P)$ and whose morphisms are connection-preserving
based isomorphisms of principal bundles. Let
$\mathrm{Dual}_{\mathbb{Z}}^{c}(M)$ be the groupoid of pairs
$({\bm{\Delta}},\mathcal{V})$, where ${\bm{\Delta}}$ is an integral duality
structure on $M$ and $\mathcal{V}\in\mathrm{Conf}(M,{\bm{\Delta}})$ is a
$\mathrm{d}_{\mathcal{D}}$-closed $\mathcal{S}$-valued 2-form whose
$\mathrm{d}_{\mathcal{D}}$-cohomology class belongs to the charge lattice of
${\bm{\Delta}}$. There exists a natural functor:
${\bm{\Delta}}^{c}\colon\mathrm{Sieg}^{c}(M)\to\mathrm{Dual}_{\mathbb{Z}}^{c}(M)$
which sends $(P,\mathcal{A})$ to the pair
$({\bm{\Delta}}(P),\mathcal{V}_{\mathcal{A}})$. Let
$\mathrm{Sieg}^{c}(M)_{0}\subset\mathrm{Sieg}^{c}(M)$ be the full subgroupoid
consisting of Siegel bundles with flat connection.
###### Theorem 3.11.
There exists a short exact sequence of groupoids:
$1\to\mathrm{Sieg}^{c}(M)_{0}\xrightarrow{\kappa}\mathrm{Sieg}^{c}(M)\xrightarrow{\mathrm{curv}}\mathrm{Dual}_{\mathbb{Z}}^{c}(M)\to
1~{}~{},$
where $\kappa$ is the inclusion and $\mathrm{curv}$ is the curvature map. In
particular, for every integral duality structure ${\bm{\Delta}}$ on $M$ and
every $\mathcal{V}\in\mathrm{Conf}(M,{\bm{\Delta}})$, there exists a Siegel
bundle with connection $(P,\mathcal{A})$ such that:
$\mathcal{V}_{\mathcal{A}}=\mathcal{V}\,,$
and the set of pairs $(\mathcal{A},\mathcal{V}_{\mathcal{A}})$ with this
property is a torsor for $\mathrm{Sieg}^{c}(M)_{0}$.
###### Proof.
It is clear that an object in $\mathrm{Sieg}^{c}(M)$ defines an integral
duality structure and a cohomology class in
$H^{2}_{\mathcal{D}}(M,\mathcal{S})$, whence it defines an object in
$\mathrm{Dual}_{\mathbb{Z}}^{c}(M)$. Functoriality of this assignment follows
from
invariance of the aforementioned cohomology class under gauge transformations.
This is proved in Lemma 4.5. The key ingredient of the proof is now to show
that this cohomology class is in fact _integral_ , that is, belongs to
$j_{\ast}(H^{2}(M,Z))$, where $Z$ is the Siegel system defined by $P$. This is
a technical point which is proved in detail in [35], to which we refer the
reader for more details. Once this is proven, it follows from Theorem 3.10
that the curvature map is surjective onto $\mathrm{Dual}_{\mathbb{Z}}^{c}(M)$
and that its kernel consists precisely of the Siegel bundles with flat
connection. ∎
The previous theorem shows that _integral_ electromagnetic field strengths can
always be realized as curvatures of principal connections defined on Siegel
bundles, which therefore provide the geometric realization of integral
configurations of abelian gauge theory.
###### Remark 3.12.
Theorem 3.11 can be elaborated to obtain the Dirac quantization of abelian
gauge theory in terms of a certain _twisted_ differential cohomology theory,
though we do not pursue this here. Recall that the DSZ quantization of various
gauge theories using the framework of algebraic quantum field theory and
differential cohomology has been considered before in the literature; see
[45, 13] and references therein.
### 3.5. Polarized Siegel bundles and polarized self-dual connections
Let $M$ be a connected manifold.
###### Definition 3.13.
A polarized Siegel bundle is a pair $\mathbf{P}=(P,\mathcal{J})$, where $P$ is
a Siegel bundle and $\mathcal{J}$ is a taming of the duality structure
$\Delta:=\Delta(P)$ defined by $P$.
A polarized Siegel bundle $\mathbf{P}=(P,\mathcal{J})$ determines an integral
electromagnetic structure $\bm{\Xi}_{\mathbf{P}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}({\bm{\Delta}}(P),\mathcal{J})$, where
${\bm{\Delta}}(P)=(\Delta(P),Z(P))$ is the integral duality structure defined
by $P$.
###### Definition 3.14.
Let $\mathbf{P}=(P,\mathcal{J})$ be a polarized Siegel bundle. A principal
connection $\mathcal{A}\in\mathrm{Conn}(P)$ is called _polarized selfdual_ ,
respectively _polarized anti-selfdual_ if its adjoint curvature satisfies:
$\star_{g,\mathcal{J}}\mathcal{V}_{\mathcal{A}}=\mathcal{V}_{\mathcal{A}}\,~{}~{},~{}~{}\mathrm{respectively}~{}~{}\star_{g,\mathcal{J}}\mathcal{V}_{\mathcal{A}}=-\mathcal{V}_{\mathcal{A}}\,.$
Recall the definitions:
$\Omega^{2}_{\pm,\mathcal{J}}(M,\mathcal{S})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{\mathcal{V}\in\Omega^{2}(M,\mathcal{S})\,|\,\star_{g,\mathcal{J}}\mathcal{V}=\pm\mathcal{V}\right\\}~{}~{},$
where $\Delta(P)=(\mathcal{S},\omega,\mathcal{D})$.
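In Lorentzian signature the two eigenspaces above give a genuine direct sum decomposition over the reals; assuming that $\star_{g,\mathcal{J}}$ acts on $\Omega^{2}(M,\mathcal{S})$ as $\ast_{g}\otimes\mathcal{J}$ (the convention of the earlier sections), this follows from a two-line computation:

```latex
% On two-forms of a Lorentzian four-manifold one has
% \ast_g^2 = -\mathrm{id}, while \mathcal{J}^2 = -\mathrm{id} for a
% taming, so:
\star_{g,\mathcal{J}}^{2}
\,=\, \ast_{g}^{2}\otimes\mathcal{J}^{2}
\,=\, (-\mathrm{id})\otimes(-\mathrm{id})
\,=\, \mathrm{id}\,,
% whence:
\Omega^{2}(M,\mathcal{S})
\,=\, \Omega^{2}_{+,\mathcal{J}}(M,\mathcal{S})
\oplus \Omega^{2}_{-,\mathcal{J}}(M,\mathcal{S})
% as real vector spaces, and the polarized (anti-)selfduality conditions
% of Definition 3.14 are nontrivial real equations.
```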
###### Remark 3.15.
The polarized selfduality condition is a first-order partial differential
equation for a connection on a Siegel bundle which, to the best of our
knowledge, has not been studied in the literature on mathematical gauge
theory.
## 4\. Prequantum abelian gauge theory
Let $(M,g)$ be an oriented and connected Lorentzian four-manifold. As
explained in the previous section, imposing the DSZ integrality condition on
an abelian gauge theory allows us to identify its prequantum gauge degrees of
freedom with principal connections on a Siegel bundle. More precisely, let
$\mathbf{P}=(P,\mathcal{J})$ be a polarized Siegel bundle on $(M,g)$ and
${\bm{\Delta}}:={\bm{\Delta}}(P)=(\Delta,Z)$ be the integral duality structure
defined by $P$, where $\Delta:=\Delta(P)=(\mathcal{S},\omega,\mathcal{D})$
(with $\mathcal{S}=\mathrm{ad}(P)$) and $Z:=Z(P)$ are the duality structure
and Siegel system defined by $P$. Let
$\bm{\Xi}:=\bm{\Xi}(P)=({\bm{\Delta}},\mathcal{J})$ be the integral
electromagnetic structure defined by $\mathbf{P}$ and
$\Xi:=\Xi(P)=(\Delta,\mathcal{J})$ be its underlying electromagnetic
structure. By Theorem 3.11, the set of integral field strength configurations
determined by the integral duality structure ${\bm{\Delta}}={\bm{\Delta}}(P)$
coincides with the set of adjoint curvatures of principal connections defined
on $P$:
$\mathrm{Conf}(M,{\bm{\Delta}})=\\{\mathcal{V}_{\mathcal{A}}\,|\,\mathcal{A}\in\mathrm{Conn}(P)\\}~{}~{},$
while the set of integral field strength solutions determined by
$\bm{\Xi}:=\bm{\Xi}(P)=({\bm{\Delta}},\mathcal{J})$ is:
$\mathrm{Sol}(M,g,\bm{\Xi})=\\{\mathcal{V}_{\mathcal{A}}\,|\,\mathcal{A}\in\mathrm{Conn}(P)~{}\&~{}\star_{g,\mathcal{J}}\mathcal{V}_{\mathcal{A}}=\mathcal{V}_{\mathcal{A}}\\}~{}~{}.$
This motivates the following.
###### Definition 4.1.
The set of prequantum gauge configurations determined by $P$ is the affine
set:
$\mathfrak{Conf}(M,P)\stackrel{{\scriptstyle{\rm def.}}}{{=}}\mathrm{Conn}(P)$
of principal connections defined on $P$. The adjoint curvature
$\mathcal{V}_{\mathcal{A}}\in\mathrm{Conf}(M,{\bm{\Delta}})$ of a principal
connection $\mathcal{A}\in\mathrm{Conn}(P)$ is called the integral field
strength configuration defined by $\mathcal{A}$. The prequantum abelian gauge
theory defined by $\mathbf{P}$ on $(M,g)$ is described by the condition:
$\star_{g,\mathcal{J}}\mathcal{V}_{\mathcal{A}}=\mathcal{V}_{\mathcal{A}}~{}~{}\mathrm{for}~{}~{}\mathcal{A}\in\mathrm{Conn}(P)\,.$
(29)
The solutions $\mathcal{A}$ of this equation are called gauge potentials or
polarized self-dual connections and form the set:
$\mathfrak{Sol}(M,g,\mathbf{P})=\mathfrak{Sol}(M,g,P,\mathcal{J})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{\mathcal{A}\in\mathrm{Conn}(P)\,|\,\star_{g,\mathcal{J}}\mathcal{V}_{\mathcal{A}}=\mathcal{V}_{\mathcal{A}}\\}\subset\mathfrak{Conf}(M,P)~{}~{}.$
We have:
$\mathrm{Sol}(M,g,\bm{\Xi})=\\{\mathcal{V}_{\mathcal{A}}\,|\,\mathcal{A}\in\mathfrak{Sol}(M,g,\mathbf{P})\\}$
and the adjoint curvature map of $P$ gives surjections:
$\mathfrak{Conf}(M,P)\to\mathrm{Conf}(M,{\bm{\Delta}})~{}~{}\mathrm{and}~{}~{}\mathfrak{Sol}(M,g,\mathbf{P})\to\mathrm{Sol}(M,g,\bm{\Xi})~{}~{}.$
###### Remark 4.2.
Condition (29) reduces locally to a system of first-order differential
equations for $2n$ real-valued one-forms, which describe the local
electromagnetic and magnetoelectric potentials of the theory (see Appendix A).
Also notice that any $\mathcal{A}\in\mathfrak{Sol}(M,g,\mathbf{P})$ satisfies
the following second order equation of Abelian Yang-Mills type:
$\mathrm{d}_{\mathcal{D}}\star_{g,\mathcal{J}}\mathcal{V}_{\mathcal{A}}=0\,.$
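The second-order equation in the remark above follows in one line from polarized self-duality together with the identity $\mathrm{d}_{\mathcal{D}}\mathcal{V}_{\mathcal{A}}=0$ satisfied by the adjoint curvature (Subsection 3.4):

```latex
% For \mathcal{A} \in \mathfrak{Sol}(M,g,\mathbf{P}):
\mathrm{d}_{\mathcal{D}}\star_{g,\mathcal{J}}\mathcal{V}_{\mathcal{A}}
\,=\, \mathrm{d}_{\mathcal{D}}\mathcal{V}_{\mathcal{A}}
\,=\, 0\,,
% where the first equality uses
% \star_{g,\mathcal{J}}\mathcal{V}_{\mathcal{A}} = \mathcal{V}_{\mathcal{A}}
% and the second holds for the adjoint curvature of any principal
% connection on P.
```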
### 4.1. The duality hierarchy of prequantum abelian gauge theory
In this subsection we discuss the duality groups of prequantum abelian gauge
theory. Let $P$ be a Siegel bundle of type $\mathfrak{t}\in\mathrm{Div}^{n}$
defined on $M$. For simplicity, we use the notations:
$A\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{U}(1)^{2n}~{}~{},~{}~{}\Gamma\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})~{}~{},~{}~{}G\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Aff}_{\mathfrak{t}}=A\rtimes\Gamma$
and denote the Abelian Lie algebra of $G$ by
$\mathfrak{g}=\mathfrak{aff}_{\mathfrak{t}}\simeq\mathbb{R}^{2n}$. Let
$q:G\rightarrow\Gamma$ be the epimorphism entering the short exact sequence of
groups:
$1\rightarrow A\rightarrow G\xrightarrow{q}\Gamma\rightarrow 1~{}~{},$
which splits from the right.
###### Definition 4.3.
The discrete remnant bundle of $P$ is the principal $\Gamma$-bundle
$\Gamma(P)\stackrel{{\scriptstyle{\rm def.}}}{{=}}P\times_{q}\Gamma$.
We denote the adjoint representation of $G$ by
$\mathrm{Ad}:G\rightarrow\mathrm{Aut}_{\mathbb{R}}(\mathfrak{g})$ and the
adjoint action of $G$ (i.e. the action of $G$ on itself by conjugation) by
$\mathrm{Ad}_{G}:G\rightarrow\mathrm{Aut}(G)$. The restriction of the latter
to the normal subgroup $A\subset G$ is denoted by
$\mathrm{Ad}_{G}^{A}:G\rightarrow\mathrm{Aut}(A)$. Since $A$ is Abelian, this
factors through $q$ to the characteristic morphism
$\rho:\Gamma\rightarrow\mathrm{Aut}(A)$:
$\mathrm{Ad}_{G}^{A}=\rho\circ q~{}~{},$
while the adjoint representation factors through $q$ to the reduced adjoint
representation
$\bar{\rho}:\Gamma\rightarrow\mathrm{Aut}_{\mathbb{R}}(\mathfrak{g})$:
$\mathrm{Ad}=\bar{\rho}\circ q~{}~{}.$ (30)
This representation of $\Gamma$ on $\mathfrak{g}$ preserves the canonical
symplectic form on $\mathbb{R}^{2n}\simeq\mathfrak{g}$. The exponential map of
$G$ gives a surjective morphism of Abelian groups
$\exp_{G}:\mathfrak{g}\rightarrow A$ whose kernel is a full symplectic lattice
$\Lambda$ of $\mathfrak{g}$ which identifies with $\Lambda_{\mathfrak{t}}$.
This lattice is preserved by the reduced adjoint representation, which
therefore induces a morphism of groups:
$\rho_{0}:\Gamma\rightarrow\mathrm{Aut}_{\mathbb{Z}}(\Lambda)~{}~{}.$
Accordingly, $\mathrm{Ad}=\bar{\rho}\circ q$ also preserves $\Lambda$ and
hence induces a morphism of groups:
$\mathrm{Ad}_{0}=\rho_{0}\circ
q:G\rightarrow\mathrm{Aut}_{\mathbb{Z}}(\Lambda)~{}~{}.$
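As an elementary sanity check of the statement that $\rho_{0}$ preserves the lattice $\Lambda$ together with the symplectic form, consider the toy case $n=1$ with principal type, where $\mathrm{Sp}_{\mathfrak{t}}(2,\mathbb{Z})$ is the usual integral symplectic group $\mathrm{Sp}(2,\mathbb{Z})=\mathrm{SL}(2,\mathbb{Z})$ acting on $\Lambda\simeq\mathbb{Z}^{2}$. The following snippet (our own illustration, not part of the construction) verifies both properties for a sample group element:

```python
import numpy as np

# Standard symplectic form on R^2 ~ g (toy case n = 1, principal type).
J = np.array([[0, 1],
              [-1, 0]])

# A sample element S of SL(2, Z) = Sp(2, Z): integer matrix, det = 1.
S = np.array([[2, 1],
              [1, 1]])

# S preserves the canonical symplectic pairing: S^T J S = J ...
assert np.array_equal(S.T @ J @ S, J)

# ... and, being an integer matrix with integer inverse, it preserves
# the full lattice Lambda = Z^2, so rho_0(S) is a lattice automorphism.
S_inv = np.linalg.inv(S)
assert np.allclose(S_inv, np.round(S_inv))  # inverse is integral

print("S lies in Sp(2, Z) and preserves the lattice Z^2")
```

Here $S^{T}JS=J$ encodes preservation of the symplectic form, while integrality of $S^{-1}$ encodes that $\rho_{0}(S)$ restricts to an automorphism of the lattice.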
Let ${\bm{\Delta}}=(\Delta,Z)$ be the duality structure defined by $P$, where
$\Delta=(\mathcal{S},\omega,\mathcal{D})$ (with $\mathcal{S}=\mathrm{ad}(P)$)
and $Z=Z(P)$ are the duality structure and Siegel system defined by $P$. We
have:
$\mathrm{ad}(P)=P\times_{\mathrm{Ad}}\mathfrak{g}=\Gamma(P)\times_{\bar{\rho}}\mathfrak{g}~{}~{},~{}~{}Z(P)=P\times_{\mathrm{Ad}_{0}}\Lambda=\Gamma(P)\times_{\rho_{0}}\Lambda~{}~{},~{}~{}\mathcal{A}(P)=P\times_{\mathrm{Ad}_{G}^{A}}A~{}~{},$
where
$\mathcal{A}(P)=\mathcal{A}({\bm{\Delta}}(P))=\mathrm{ad}(P)/Z(P)=\mathcal{S}/Z$
is the bundle of integral symplectic torus groups defined by $P$. As shown in
[35], the connection $\mathcal{D}$ coincides with the flat connection induced
on $\mathcal{S}$ by the monodromy connection of $\Gamma(P)$ and the symplectic
pairing $\omega$ of
$\mathcal{S}=\mathrm{ad}(P)=\Gamma(P)\times_{\bar{\rho}}\mathfrak{g}$ coincides with
that induced by the canonical symplectic pairing of
$\mathbb{R}^{2n}\simeq\mathfrak{g}$.
Let $\mathrm{Aut}(P)$ be the group of those unbased automorphisms of $P$ which
cover orientation-preserving diffeomorphisms of $M$. We have a short exact
sequence:
$1\to\mathrm{Aut}_{b}(P)\to\mathrm{Aut}(P)\to\mathrm{Diff}_{P}(M)\to 1\,,$
where $\mathrm{Diff}_{P}(M)\subset\mathrm{Diff}(M)$ is the group formed by
those orientation-preserving diffeomorphisms of $M$ that can be covered by
elements of $\mathrm{Aut}(P)$. Here $\mathrm{Aut}_{b}(P)$ is the group of
based automorphisms of $P$. For any $u\in\mathrm{Aut}(P)$, denote by
$f_{u}\in\mathrm{Diff}(M)$ the orientation-preserving diffeomorphism of $M$
covered by $u$. Every $u\in\mathrm{Aut}(P)$ induces an unbased automorphism
$\mathrm{ad}_{u}\in\mathrm{Aut}(\mathcal{S})$ of the adjoint bundle
$\mathcal{S}=\mathrm{ad}(P)$ defined through:
$\mathrm{ad}_{u}([y,v])\stackrel{{\scriptstyle{\rm
def.}}}{{=}}[u(y),v]\,,\quad\forall\,\,[y,v]\in\mathcal{S}=\mathrm{ad}(P)=P\times_{\mathrm{Ad}}\mathfrak{g}\,.$
Notice that $\mathrm{ad}_{u}$ covers $f_{u}$.
###### Proposition 4.4.
For every $u\in\mathrm{Aut}(P)$, the map
$\mathrm{ad}_{u}:\mathcal{S}\rightarrow\mathcal{S}$ is an unbased automorphism
of the integral duality structure ${\bm{\Delta}}$ defined by $P$. Moreover,
the map
$\mathrm{ad}_{P}:\mathrm{Aut}(P)\rightarrow\mathrm{Aut}({\bm{\Delta}})$
defined through:
$\mathrm{ad}_{P}(u)=\mathrm{ad}_{u}~{}~{}\forall u\in\mathrm{Aut}(P)$
is a morphism of groups.
###### Proof.
It is clear from its definition that $\mathrm{ad}_{u}$ preserves $\omega$ and
$Z$. It also preserves $\mathcal{D}$, since the latter is induced by the
monodromy connection of $\Gamma(P)$, which is unique. The fact that
$\mathrm{ad}_{P}$ is a morphism of groups is immediate. ∎
Notice that $\mathrm{ad}_{P}$ restricts to a morphism
$\mathrm{ad}_{P}\colon\mathrm{Aut}_{b}(P)\to\mathrm{Aut}_{b}({\bm{\Delta}})$.
We set:
$g_{u}\stackrel{{\scriptstyle{\rm def.}}}{{=}}(f_{u})_{\ast}(g)~{}~{}\forall
u\in\mathrm{Aut}(P)~{}~{}.$
Let
$\mathbb{A}\colon\mathrm{Aut}(P)\times\mathrm{Conn}(P)\rightarrow\mathrm{Conn}(P)$
be the affine left action of $\mathrm{Aut}(P)$ on $\mathrm{Conn}(P)$ defined
through:
$\mathbb{A}_{u}(\mathcal{A})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}u_{\ast}(\mathcal{A})~{}~{}\forall\mathcal{A}\in\mathrm{Conn}(P)~{}~{}\forall
u\in\mathrm{Aut}(P)~{}~{},$
where
$u_{\ast}\colon\mathcal{C}^{\infty}(P,T^{\ast}P\otimes\mathfrak{g})\to\mathcal{C}^{\infty}(P,T^{\ast}P\otimes\mathfrak{g})$
denotes the push-forward of $u$ extended trivially to $\mathfrak{g}$-valued
forms defined on $P$.
###### Lemma 4.5.
For every $u\in\mathrm{Aut}(P)$, we have a commutative diagram of affine
spaces and affine maps:
$\begin{array}{ccc}\mathrm{Conn}(P) & \xrightarrow{~\mathbb{A}_{u}~} & \mathrm{Conn}(P)\\ {\scriptstyle\mathcal{V}}\big\downarrow & & \big\downarrow{\scriptstyle\mathcal{V}}\\ 2\pi j_{0\ast}(c(P)) & \xrightarrow{~\mathrm{ad}_{u}~} & 2\pi j_{0\ast}(c(P))\end{array}$
where
$\mathcal{V}:\mathrm{Conn}(P)\rightarrow\Omega^{2}_{{\mathrm{d}_{\mathcal{D}}\\!\mbox{-}{\mathrm{cl}}}}(M,\mathcal{S})$
is the adjoint curvature map of $P$, $c(P)\in H^{2}(M,Z)$ is the twisted Chern
class of $P$ and the map $j_{0\ast}:H^{2}(M,Z)\rightarrow
H^{2}(M,\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{S}))=H^{2}_{\mathcal{D}}(M,\mathcal{S})$
is induced by the sheaf inclusion
$\mathcal{C}(Z)\hookrightarrow\mathcal{C}^{\infty}_{\mathrm{flat}}(\mathcal{S})$
(see the exact sequence (19)). Here $2\pi j_{0\ast}(c(P))$ is viewed as an
affine subspace of
$\Omega^{2}_{{\mathrm{d}_{\mathcal{D}}\\!\mbox{-}{\mathrm{cl}}}}(M,\mathcal{S})$
consisting of $\mathrm{d}_{\mathcal{D}}$-closed $\mathcal{S}$-valued forms
which differ by $\mathrm{d}_{\mathcal{D}}$-exact $\mathcal{S}$-valued forms
and hence as an affine space modeled on the vector space
$\Omega^{2}_{{\mathrm{d}_{\mathcal{D}}\\!\mbox{-}{\mathrm{ex}}}}(M,\mathcal{S})$.
###### Proof.
It is shown in [35] that the curvature map $\mathcal{V}$, which is clearly
affine, takes values in $2\pi j_{0\ast}(c(P))$. Therefore, it only remains to
prove that if:
$[\mathcal{V}_{\mathcal{A}}]=2\pi\,j_{0\ast}(c(P))\,,$
then:
$[\mathcal{V}_{\mathbb{A}_{u}(\mathcal{A})}]=2\pi\,j_{0\ast}(c(P))\,,$
or, equivalently, that
$[\mathcal{V}_{\mathbb{A}_{u}(\mathcal{A})}]=[\mathcal{V}_{\mathcal{A}}]$ for
every $u\in\mathrm{Aut}_{b}(P)$. Since $\mathcal{A}$ is a connection,
$\mathbb{A}_{u}(\mathcal{A})$ is also a connection on $P$, whence there exists
an equivariant and horizontal one-form
$\hat{\tau}\in\Omega^{1}(P,\mathfrak{a})$ such that:
$\mathbb{A}_{u}(\mathcal{A})=\mathcal{A}+\hat{\tau}\,.$
Hence,
$\mathrm{d}\mathbb{A}_{u}(\mathcal{A})=\mathrm{d}\mathcal{A}+\mathrm{d}\hat{\tau}$,
which descends to $M$ as follows:
$\mathcal{V}_{\mathbb{A}_{u}(\mathcal{A})}=\mathcal{V}_{\mathcal{A}}+\mathrm{d}_{\mathcal{D}}\tau\,,$
where $\tau\in\Omega^{1}(M,\mathcal{S}_{P})$ denotes the one-form on $M$ with
values in $\mathcal{S}_{P}$ defined by the equivariant horizontal form
$\hat{\tau}$. On the other hand, considering
$\mathcal{V}_{\mathbb{A}_{u}(\mathcal{A})}\in\Omega^{2}(P,\mathfrak{aff}_{\mathfrak{t}})$
as a two-form on $P$ taking values in $\mathfrak{aff}_{\mathfrak{t}}$ a direct
computation shows that:
$\mathcal{V}_{\mathbb{A}_{u}(\mathcal{A})}=u_{\ast}(\mathcal{V}_{\mathcal{A}})\in\Omega^{2}(P,\mathfrak{aff}_{\mathfrak{t}})\,,$
which, by the equivariance properties of the latter, immediately implies:
$\mathcal{V}_{\mathbb{A}_{u}(\mathcal{A})}=\mathrm{ad}_{u}\cdot\mathcal{V}_{\mathcal{A}}\in\Omega^{2}(M,\Delta)\,,$
where the _dot_ action of an automorphism of $\Delta$ on two-forms taking
values in $\Delta$ was defined in subsection 1.9. ∎
###### Proposition 4.6.
For any $u\in\mathrm{Aut}(P)$, the map
$\mathbb{A}_{u}:\mathrm{Conn}(P)\rightarrow\mathrm{Conn}(P)$ restricts to a
bijection:
$\mathbb{A}_{u}\colon\mathfrak{Sol}(M,g,P,\mathcal{J})\to\mathfrak{Sol}(M,g_{u},P,\mathcal{J}_{u})~{}~{}.$
###### Proof.
Since
$\mathcal{V}_{\mathbb{A}_{u}(\mathcal{A})}=\mathrm{ad}_{u}\cdot\mathcal{V}_{\mathcal{A}}$,
it suffices to prove the relation:
$\star_{g_{u},\mathcal{J}_{u}}\circ\mathrm{ad}_{u}=\mathrm{ad}_{u}\circ\star_{g,\mathcal{J}}~{}~{}.$
(31)
For $\alpha\in\Omega^{k}(M)$ and $\xi\in\mathcal{C}^{\infty}(M,\mathcal{S})$,
we compute:
$\star_{g_{u},\mathcal{J}_{u}}\mathrm{ad}_{u}\cdot(\alpha\otimes\xi)=(\ast_{g_{u}}f_{u\ast}\alpha)\otimes(\mathcal{J}_{u}\circ\mathrm{ad}_{u}(\xi)\circ
f_{u}^{-1})=f_{u\ast}(\ast_{g}\alpha)\otimes\mathrm{ad}_{u}\circ\mathcal{J}(\xi)\circ
f_{u}^{-1}=\mathrm{ad}_{u}\cdot(\star_{g,\mathcal{J}}(\alpha\otimes\xi))~{}~{},$
which implies (31). ∎
###### Definition 4.7.
Let $\mathbf{P}=(P,\mathcal{J})$ be a polarized Siegel bundle defined on $M$.
* •
The group $\mathrm{Aut}(P)$ is the unbased gauge group of $P$. For any
$u\in\mathrm{Aut}(P)$, the map:
$\mathbb{A}_{u}\colon\mathfrak{Sol}(M,g,P,\mathcal{J})\to\mathfrak{Sol}(M,g_{u},P,\mathcal{J}_{u})$
is the unbased gauge transformation induced by $u$.
* •
The group:
$\mathrm{Aut}(g,P)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{u\in\mathrm{Aut}(P)\,|\,f_{u}\in{\rm Iso}(M,g)\right\\}$
is the unbased gauge duality group defined by $P$ and $g$. For any
$u\in\mathrm{Aut}(g,P)$, the map:
$\mathbb{A}_{u}:\mathfrak{Sol}(M,g,P,\mathcal{J})\to\mathfrak{Sol}(M,g,P,\mathcal{J}_{u})$
is the unbased gauge duality transformation induced by $u$.
* •
The gauge group $\mathrm{Aut}_{b}(P)$ of $P$ is the gauge (electromagnetic)
duality group of the abelian gauge theories with underlying Siegel bundle $P$.
For any $u\in\mathrm{Aut}_{b}(P)$, the map:
$\mathbb{A}_{u}:\mathfrak{Sol}(M,g,P,\mathcal{J})\to\mathfrak{Sol}(M,g,P,\mathcal{J}_{u})$
is called the gauge duality transformation induced by $u$.
Lemma 4.5 implies that for any $u\in\mathrm{Aut}(P)$ we have a commutative
diagram:
$\begin{array}{ccc}\mathfrak{Sol}(M,g,P,\mathcal{J}) & \xrightarrow{~\mathbb{A}_{u}~} & \mathfrak{Sol}(M,g_{u},P,\mathcal{J}_{u})\\ {\scriptstyle\mathcal{V}}\big\downarrow & & \big\downarrow{\scriptstyle\mathcal{V}}\\ \mathrm{Sol}(M,g,{\bm{\Delta}},\mathcal{J}) & \xrightarrow{~\mathrm{ad}_{u}~} & \mathrm{Sol}(M,g_{u},{\bm{\Delta}},\mathcal{J}_{u})\end{array}$
and similar diagrams for the other groups in the previous definition. Hence
gauge transformations of $P$ induce integral pseudo-duality and duality
transformations of the abelian gauge theory defined by $(P,\mathcal{J})$.
###### Definition 4.8.
Let $\mathbf{P}=(P,\mathcal{J})$ be a polarized Siegel bundle on $M$.
* •
The group:
$\mathrm{Aut}(\mathbf{P})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{u\in\mathrm{Aut}(P)\,\,|\,\,\mathcal{J}_{u}=\mathcal{J}\right\\}$ (32)
is the unbased unitary gauge group defined by $\mathbf{P}$ on $(M,g)$. For any
$u\in\mathrm{Aut}(\mathbf{P})$, the map:
$\mathbb{A}_{u}:\mathfrak{Sol}(M,g,\mathbf{P})\to\mathfrak{Sol}(M,g_{u},\mathbf{P})$
is the unbased unitary gauge transformation induced by $u$.
* •
The group:
$\mathrm{Aut}(g,\mathbf{P})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{u\in\mathrm{Aut}(P)\,\,|\,\,g_{u}=g~{}~{}\mathrm{and}~{}~{}\mathcal{J}_{u}=\mathcal{J}\right\\}$
(33)
is the unbased unitary gauge duality group defined by $\mathbf{P}$ on $(M,g)$.
For any $u\in\mathrm{Aut}(g,\mathbf{P})$, the map:
$\mathbb{A}_{u}:\mathfrak{Sol}(M,g,\mathbf{P})\to\mathfrak{Sol}(M,g,\mathbf{P})$
is the unbased unitary gauge duality transformation induced by $u$.
* •
The group:
$\mathrm{Aut}_{b}(\mathbf{P})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{u\in\mathrm{Aut}_{b}(P)\,\,|\,\,\mathcal{J}_{u}=\mathcal{J}\right\\}$
is the unitary gauge group defined by $\mathbf{P}$ on $M$. For any
$u\in\mathrm{Aut}_{b}(\mathbf{P})$, the map:
$\mathbb{A}_{u}:\mathfrak{Sol}(M,g,\mathbf{P})\to\mathfrak{Sol}(M,g,\mathbf{P})$
is the unitary gauge transformation induced by $u$.
Let $\bm{\Xi}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}({\bm{\Delta}},\mathcal{J})$. For any
$u\in\mathrm{Aut}(\mathbf{P})$, we have a commutative diagram:
$\begin{array}{ccc}\mathfrak{Sol}(M,g,\mathbf{P}) & \xrightarrow{~\mathbb{A}_{u}~} & \mathfrak{Sol}(M,g_{u},\mathbf{P})\\ {\scriptstyle\mathcal{V}}\big\downarrow & & \big\downarrow{\scriptstyle\mathcal{V}}\\ \mathrm{Sol}(M,g,\bm{\Xi}) & \xrightarrow{~\mathrm{ad}_{u}~} & \mathrm{Sol}(M,g_{u},\bm{\Xi})\end{array}$
and similar diagrams for the other groups in the previous definition. We have
a short exact sequence:
$1\to\mathrm{Aut}_{b}(P)\rightarrow\mathrm{Aut}(g,P)\rightarrow{\rm
Iso}_{P}(M,g)\to 1\,,$ (34)
where ${\rm Iso}_{P}(M,g)\subset{\rm Iso}(M,g)$ is the group formed by those
orientation-preserving isometries that can be covered by elements of
$\mathrm{Aut}(g,P)$. Similarly, we have an exact sequence:
$1\to\mathrm{Aut}_{b}(\mathbf{P})\rightarrow\mathrm{Aut}(g,\mathbf{P})\rightarrow{\rm
Iso}_{\mathbf{P}}(M,g)\to 1\,,$
where ${\rm Iso}_{\mathbf{P}}(M,g)$ is the group formed by those
orientation-preserving isometries of $M$ which are covered by elements of
$\mathrm{Aut}(g,\mathbf{P})$.
###### Definition 4.9.
The _standard subgroup_ of the unbased gauge group of $P$ is defined through:
$\mathrm{C}(P)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\ker(\mathrm{ad}_{P})\subset\mathrm{Aut}(P)\,.$
When $\dim M>0$ and $\dim A>0$, the group $\mathrm{C}(P)$ is
infinite-dimensional. The classical duality group of a duality structure was
shown to be a finite-dimensional Lie group in Section 1. This is no longer
true of the gauge groups introduced above; instead, they are
infinite-dimensional extensions of the integral duality groups introduced in
Section 2.
###### Proposition 4.10.
The gauge group of $P$ fits into the short exact sequence of groups:
$1\to\mathrm{C}(P)\hookrightarrow\mathrm{Aut}_{b}(P)\xrightarrow{\mathrm{ad}_{P}}\mathrm{Aut}_{b}({\bm{\Delta}})\to
1~{}~{}.$ (35)
###### Remark 4.11.
There exist similar short exact sequences for the remaining groups introduced
in Definition 4.7.
###### Proof.
It suffices to prove that
$\mathrm{ad}_{P}(\mathrm{Aut}_{b}(P))=\mathrm{Aut}_{b}({\bm{\Delta}})$. Recall
that
$\mathcal{L}=P\times_{\mathrm{Ad}_{0}}\Lambda=\Gamma(P)\times_{\rho_{0}}\Lambda$
and
$\mathcal{S}=P\times_{\mathrm{Ad}}\mathfrak{g}=\Gamma(P)\times_{\bar{\rho}}\mathfrak{g}$,
where $\Lambda\equiv\Lambda_{\mathfrak{t}}$ and
$\mathfrak{g}\equiv\mathbb{R}^{2n}$. Also recall that the gauge group
$\mathrm{Aut}_{b}(P)$ is naturally isomorphic with the group
$\mathcal{C}^{\infty}(P,G)^{G}$ of $G$-equivariant maps from $P$ to $G$, where
$G=A\rtimes\Gamma=\mathrm{U}(1)^{2n}\rtimes\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
acts on itself through conjugation. This isomorphism takes
$u\in\mathrm{Aut}_{b}(P)$ to the equivariant map
$f\in\mathcal{C}^{\infty}(P,G)^{G}$ which satisfies:
$u(p)=pf(p)~{}~{}\forall p\in P~{}~{}.$ (36)
Since the action of $G$ on $P$ is free, since
$\Gamma\equiv\mathrm{Aut}(\mathbb{R}^{2n},\omega_{2n},\Lambda_{\mathfrak{t}})\simeq\mathrm{Aut}(\mathcal{S}_{m},\omega_{m},\mathcal{L}_{m})$
for all $m\in M$ and since the reduced adjoint representation $\bar{\rho}$ is
faithful, every automorphism $\varphi\in\mathrm{Aut}_{b}({\bm{\Delta}})$
determines a map $\bar{\varphi}:P\to\Gamma$ which satisfies:
$\varphi([p,v])=[p,\bar{\rho}(\bar{\varphi}(p))(v)]~{}~{}\forall p\in
P~{}~{}\forall v\in\mathfrak{g}$ (37)
as well as:
$\bar{\varphi}(pg)=q(g)^{-1}\bar{\varphi}(p)q(g)~{}~{}\forall p\in
P~{}~{}\forall g\in G~{}~{}.$
The last relation follows from (37) and from the condition
$\varphi([pg,v])=\varphi([p,\bar{\rho}(q(g))(v)])$ of invariance under change
of representative of the equivalence class, where we used (30). Let
$u\in\mathrm{Aut}_{b}(P)$ be the based automorphism of $P$ which corresponds
to the $G$-equivariant map $f:P\rightarrow G$ defined through:
$f(p)\stackrel{{\scriptstyle{\rm def.}}}{{=}}(0_{A},\bar{\varphi}(p))\in
G=A\rtimes\Gamma~{}~{}\forall p\in P~{}~{}.$
For any $p\in P$ and $v\in\mathfrak{g}$, we have:
$\mathrm{ad}_{P}(u)([p,v])=[u(p),v]=[pf(p),v]=[p,\mathrm{Ad}(f(p))(v)]=[p,\bar{\rho}(\bar{\varphi}(p))(v)]=\varphi([p,v])~{}~{},$
where we used (36) and (37). This shows that $\mathrm{ad}_{P}(u)=\varphi$.
Since $\varphi\in\mathrm{Aut}_{b}({\bm{\Delta}})$ is arbitrary, we conclude
that $\mathrm{ad}_{P}(\mathrm{Aut}_{b}(P))=\mathrm{Aut}_{b}({\bm{\Delta}})$. ∎
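The correspondence (36) between based automorphisms and $G$-equivariant maps used in this proof can be illustrated in a finite toy model, replacing the principal bundle by a free right $G$-set. The following sketch (all names are hypothetical stand-ins, with $G=S_{3}$ and a two-orbit set playing the role of $P$) checks that $u(p)=p\cdot f(p)$ commutes with the right $G$-action whenever $f$ satisfies the equivariance condition $f(p\cdot h)=h^{-1}f(p)h$:

```python
from itertools import permutations

# Toy model: G = S_3 as permutation tuples, P = G x {0,1} a free right G-set.
# This only illustrates the correspondence u(p) = p.f(p) between equivariant
# maps f and based automorphisms u; it is not the construction of the proof.

G = list(permutations(range(3)))

def mul(a, b):            # composition of permutations: (a*b)(i) = a(b(i))
    return tuple(a[i] for i in b)

def inv(a):
    r = [0] * len(a)
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

P = [(g, i) for g in G for i in (0, 1)]   # free right action: (g,i).h = (g h, i)

def act(p, h):
    return (mul(p[0], h), p[1])

# An equivariant map f: P -> G with f(p.h) = h^{-1} f(p) h is fixed by its
# value on one point of each orbit:
c = {0: (1, 0, 2), 1: (0, 2, 1)}
def f(p):
    g, i = p
    return mul(mul(inv(g), c[i]), g)      # f(g,i) = g^{-1} c_i g

def u(p):                 # the based automorphism determined by f via (36)
    return act(p, f(p))

# u commutes with the right G-action, i.e. u is a bundle automorphism:
assert all(u(act(p, h)) == act(u(p), h) for p in P for h in G)
print("u is G-equivariant on all", len(P) * len(G), "pairs")
```

Conversely, any automorphism commuting with the free action arises this way, which is the finite shadow of the isomorphism $\mathrm{Aut}_{b}(P)\simeq\mathcal{C}^{\infty}(P,G)^{G}$.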
The previous proposition clarifies the geometric origin of electromagnetic
duality as a ‘discrete remnant’ of gauge symmetry, a notion which is discussed
in more detail in [35]. In particular, $\mathrm{Aut}_{b}(P)$ is an extension
of $\mathrm{Aut}_{b}({\bm{\Delta}})$ by the continuous group $\mathrm{C}(P)$.
Intuitively, elements of the latter correspond to the gauge transformations of
a principal torus bundle.
### 4.2. Duality groups for Siegel bundles with trivial monodromy
Let $M$ be a connected and oriented four-manifold.
###### Lemma 4.12.
Let $P$ be a trivial principal $G$-bundle over $M$. Then any trivialization of
$P$ induces an isomorphism of groups:
$\mathrm{Aut}(P)\simeq\mathcal{C}^{\infty}(M,G)\rtimes_{\alpha}\mathrm{Diff}(M)~{}~{},$
where
$\alpha:\mathrm{Diff}(M)\rightarrow\mathrm{Aut}(\mathcal{C}^{\infty}(M,G))$ is
the morphism of groups defined through:
$\alpha(\varphi)(f)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}f\circ\varphi^{-1}\,,\quad\forall\,\,\varphi\in\mathrm{Diff}(M)\,,\quad\forall\,\,f\in\mathcal{C}^{\infty}(M,G)~{}~{}.$
In particular, we have a short exact sequence of groups:
$1\rightarrow\mathcal{C}^{\infty}(M,G)\rightarrow\mathrm{Aut}(P)\rightarrow\mathrm{Diff}(M)\rightarrow
1$
which is split from the right.
###### Proof.
Let $\tau:P\stackrel{{\scriptstyle\sim}}{{\rightarrow}}M\times G$ be a
trivialization of $P$. Then the map
$\mathrm{Ad}(\tau):\mathrm{Aut}(P)\rightarrow\mathrm{Aut}(M\times G)$ defined
through:
$\mathrm{Ad}(\tau)(f)\stackrel{{\scriptstyle{\rm def.}}}{{=}}\tau\circ
f\circ\tau^{-1}\,,\quad\forall\,\,f\in\mathrm{Aut}(P)\,,$
is an isomorphism of groups. Let $f\in\mathrm{Aut}(P)$ be an unbased
automorphism of $P$ which covers the diffeomorphism
$\varphi\in\mathrm{Diff}(M)$. Then $\mathrm{Ad}(\tau)(f)$ is an unbased
automorphism of $M\times G$ which covers $\varphi$ and hence we have:
$\mathrm{Ad}(\tau)(f)(m,g)=(\varphi(m),{\hat{f}}(m,g))\,,\quad\forall\,\,(m,g)\in
M\times G~{}~{},$
where ${\hat{f}}:M\times G\rightarrow G$ is a smooth map which satisfies:
${\hat{f}}(m,g_{1}g_{2})={\hat{f}}(m,g_{1})g_{2}\,,\quad\forall\,\,m\in
M\,,\quad\forall\,\,g_{1},g_{2}\in G~{}~{}.$
The last relation is equivalent with the condition that ${\hat{f}}$ has the
form:
${\hat{f}}(m,g)={\tilde{f}}(m)g\,,\quad\forall\,\,(m,g)\in M\times G~{}~{},$
where ${\tilde{f}}:M\rightarrow G$ is a smooth function which can be recovered
from ${\hat{f}}$ through the relation:
${\tilde{f}}(m)={\hat{f}}(m,1)\,,\quad\forall\,\,m\in M~{}~{}.$
Setting $h\stackrel{{\scriptstyle{\rm
def.}}}{{=}}{\tilde{f}}\circ\varphi^{-1}\in\mathcal{C}^{\infty}(M,G)$, we
have:
$\mathrm{Ad}(\tau)(f)(m,g)=(\varphi(m),h(\varphi(m))g)\,,\quad\forall\,\,(m,g)\in
M\times G\,,$ (38)
and the correspondence $f\rightarrow(h,\varphi)$ gives a bijection between
$\mathrm{Aut}(P)\simeq\mathrm{Aut}(M\times G)$ and the set
$\mathcal{C}^{\infty}(M,G)\times\mathrm{Diff}(M)$. If
$f_{1},f_{2}\in\mathrm{Aut}(P)$ correspond through this map to the pairs
$(h_{1},\varphi_{1}),(h_{2},\varphi_{2})\in\mathcal{C}^{\infty}(M,G)\times\mathrm{Diff}(M)$,
then direct computation using (38) gives:
$\mathrm{Ad}(\tau)(f_{1}\circ
f_{2})(m,g)=\big((\varphi_{1}\circ\varphi_{2})(m),h_{1}\big((\varphi_{1}\circ\varphi_{2})(m)\big)\,h_{2}\big(\varphi_{2}(m)\big)\,g\big)~{}~{},$
showing that $f_{1}\circ f_{2}$ corresponds to the pair
$(h_{1}\cdot\alpha(\varphi_{1})(h_{2}),\varphi_{1}\circ\varphi_{2})$. ∎
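The composition law obtained in this proof, $(h_{1},\varphi_{1})\cdot(h_{2},\varphi_{2})=(h_{1}\cdot\alpha(\varphi_{1})(h_{2}),\varphi_{1}\circ\varphi_{2})$, is exactly the multiplication of the semidirect product $\mathcal{C}^{\infty}(M,G)\rtimes_{\alpha}\mathrm{Diff}(M)$. A finite sanity check (with $M$ a four-point set, $\mathrm{Diff}(M)$ replaced by permutations and $G$ by $\mathbb{Z}/5$ — toy stand-ins, not the objects of the lemma) confirms that this law is associative and unital:

```python
from itertools import permutations
import random

# Toy check of the semidirect product law from the lemma, with M = {0,1,2,3},
# Diff(M) modelled by permutations and C^inf(M,G) by maps M -> Z/5.
M = range(4)
Perms = list(permutations(M))

def comp(p, q):                 # (p o q)(m) = p(q(m))
    return tuple(p[q[m]] for m in M)

def inv(p):
    r = [0] * 4
    for m, pm in enumerate(p):
        r[pm] = m
    return tuple(r)

def mul(a, b):                  # (h1,phi1).(h2,phi2) = (h1.(h2 o phi1^{-1}), phi1 o phi2)
    h1, p1 = a
    h2, p2 = b
    h = tuple((h1[m] + h2[inv(p1)[m]]) % 5 for m in M)
    return (h, comp(p1, p2))

random.seed(0)
def rand_elt():
    return (tuple(random.randrange(5) for _ in M), random.choice(Perms))

for _ in range(100):
    a, b, c = rand_elt(), rand_elt(), rand_elt()
    assert mul(mul(a, b), c) == mul(a, mul(b, c))   # associativity
identity = ((0, 0, 0, 0), tuple(M))
assert mul(identity, a) == a and mul(a, identity) == a
print("semidirect product law is associative with unit")
```

Associativity here rests on $\alpha$ being a group homomorphism: $\alpha(\varphi_{1}\circ\varphi_{2})(h)=h\circ(\varphi_{1}\circ\varphi_{2})^{-1}=\alpha(\varphi_{1})(\alpha(\varphi_{2})(h))$.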
Let $P$ be a Siegel bundle of rank $n$ and type
$\mathfrak{t}\in\mathrm{Div}^{n}$ on $(M,g)$ and let
${\bm{\Delta}}=(\Delta,Z)$ be the integral duality structure defined by $P$.
Suppose that $P$ is topologically trivial and that $Z=Z(P)$ has trivial
monodromy, so that $\Delta$ is holonomy trivial. Choosing a trivialization of
$P$ gives:
$P\equiv
M\times\mathrm{Aff}_{\mathfrak{t}}\,,\qquad{\bm{\Delta}}\equiv(M\times\mathbb{R}^{2n},\omega_{2n},\mathrm{d},M\times\Lambda_{\mathfrak{t}})$
and:
$\mathrm{Aut}_{b}(P)\equiv\mathcal{C}^{\infty}(M,\mathrm{Aff}_{\mathfrak{t}})\,,\qquad\mathrm{Aut}(P)\equiv\mathcal{C}^{\infty}(M,\mathrm{Aff}_{\mathfrak{t}})\rtimes_{\alpha}\mathrm{Diff}(M)~{}~{},$
where the last identification follows from Lemma 4.12. Since
$\mathrm{Aff}_{\mathfrak{t}}=\mathrm{U}(1)^{2n}\rtimes\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
and $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ is discrete, we have:
$\mathcal{C}^{\infty}(M,\mathrm{Aff}_{\mathfrak{t}})=\mathcal{C}^{\infty}(M,\mathrm{U}(1)^{2n})\rtimes\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})~{}~{}.$
In particular, maps $h\in\mathcal{C}^{\infty}(M,\mathrm{Aff}_{\mathfrak{t}})$
can be identified with pairs $(f,\gamma)$, where
$f\in\mathcal{C}^{\infty}(M,\mathrm{U}(1)^{2n})$ and
$\gamma\in\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$. The unbased gauge
duality group is given by:
$\mathrm{Aut}(g,P)\equiv\mathcal{C}^{\infty}(M,\mathrm{Aff}_{\mathfrak{t}})\rtimes_{\alpha}{\rm
Iso}(M,g)~{}~{}.$
The integral pseudo-duality, relative duality and duality groups of
${\bm{\Delta}}$ are in turn given by:
$\displaystyle\mathrm{Aut}({\bm{\Delta}})\equiv\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\times\mathrm{Diff}(M)\,,$
$\displaystyle\mathrm{Aut}(g,{\bm{\Delta}})\equiv\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\times{\rm
Iso}(M,g)\,,$
$\displaystyle\mathrm{Aut}_{b}({\bm{\Delta}})\equiv\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\,,$
and we have short exact sequences:
$\displaystyle
1\to\mathcal{C}^{\infty}(M,\mathrm{U}(1)^{2n})\to\mathrm{Aut}(P)\to\mathrm{Aut}({\bm{\Delta}})\to
1\,,$ $\displaystyle
1\to\mathcal{C}^{\infty}(M,\mathrm{U}(1)^{2n})\to\mathrm{Aut}(g,P)\to\mathrm{Aut}(g,{\bm{\Delta}})\to
1\,,$ $\displaystyle
1\to\mathcal{C}^{\infty}(M,\mathrm{U}(1)^{2n})\to\mathrm{Aut}_{b}(P)\to\mathrm{Aut}_{b}({\bm{\Delta}})\to
1\,.$
Let us fix a taming of $\Delta$, which we view as a map
$\mathcal{J}\in\mathcal{C}^{\infty}(M,\mathrm{Sp}(2n,\mathbb{R}))$. Then the
unbased unitary gauge group of the tamed Siegel bundle
$\mathbf{P}=(P,\mathcal{J})$ is:
$\mathrm{Aut}(g,\mathbf{P})\equiv\left\\{(f,\gamma,\varphi)\in\left[\mathcal{C}^{\infty}(M,\mathrm{U}(1)^{2n})\times\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\right]\rtimes_{\alpha}{\rm
Iso}(M,g)\,|\,\,\gamma\mathcal{J}\gamma^{-1}=\mathcal{J}\circ\varphi\right\\}~{}~{},$
while its unitary gauge group is:
$\mathrm{Aut}_{b}(\mathbf{P})\equiv\left\\{(f,\gamma)\in\mathcal{C}^{\infty}(M,\mathrm{U}(1)^{2n})\times\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\,|\,\gamma\mathcal{J}\gamma^{-1}=\mathcal{J}\right\\}~{}~{}.$
The integral unbased unitary duality group of the integral electromagnetic
structure $\bm{\Xi}=({\bm{\Delta}},\mathcal{J})$ is:
$\mathrm{Aut}(g,\bm{\Xi})\equiv\left\\{(\gamma,\varphi)\in\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\times{\rm
Iso}(M,g)\,\,|\,\,\gamma\mathcal{J}\gamma^{-1}=\mathcal{J}\circ\varphi\right\\}\,,$
while the integral unitary duality group of $\bm{\Xi}$ is:
$\mathrm{Aut}_{b}(\bm{\Xi})\equiv\left\\{\gamma\in\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\,\,|\,\,\gamma\mathcal{J}\gamma^{-1}=\mathcal{J}\right\\}\subset\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\,.$
We have short exact sequences:
$\displaystyle
1\to\mathcal{C}^{\infty}(M,\mathrm{U}(1)^{2n})\to\mathrm{Aut}(g,\mathbf{P})\to\mathrm{Aut}(g,\bm{\Xi})\to
1\,$ $\displaystyle
1\to\mathcal{C}^{\infty}(M,\mathrm{U}(1)^{2n})\to\mathrm{Aut}_{b}(\mathbf{P})\to\mathrm{Aut}_{b}(\bm{\Xi})\to
1~{}~{}.$
## 5\. Time-like dimensional reduction and polarized Bogomolny equations
This section investigates the time-like dimensional reduction of the equations
of motion of abelian gauge theory on an oriented static space-time $(M,g)$ of
the form:
$(M,g)=(\mathbb{R}\times\Sigma,-\mathrm{d}t^{2}\oplus h)\,,$
where $t$ is the global coordinate on $\mathbb{R}$ and $(\Sigma,h)$ is an
oriented Riemannian three-manifold. We show that the reduction produces an
equation of Bogomolny type, similar to the dimensional reduction of the
ordinary self-duality equation on a Riemannian four-manifold. Unlike that
well-known case, here we reduce the _polarized_ self-duality condition on a
Lorentzian four-manifold.
### 5.1. Preparations
Consider the time-like exact one-form $\theta=\mathrm{d}t\in\Omega^{1}(M)$.
Let $\nu_{h}$ be the volume form of $(\Sigma,h)$ and orient $(M,g)$ such that
its volume form is given by:
$\nu_{g}=\theta\wedge\nu_{h}~{}~{}.$
Let $\ast_{g}$ and $\ast_{h}$ be the Hodge operators of $(M,g)$ and
$(\Sigma,h)$ and $\langle~{},~{}\rangle_{g}$, $\langle~{},~{}\rangle_{h}$ be
the non-degenerate bilinear pairings induced by $g$ and $h$ on
$\Omega^{\ast}(M)$ and $\Omega^{\ast}(\Sigma)$. Let $p:M\rightarrow\Sigma$ be
the projection of $M=\mathbb{R}\times\Sigma$ on the second factor and consider
the distribution $\mathcal{D}=p^{\ast}(T\Sigma)\subset TM$, endowed with the
fiberwise Euclidean pairing given by the bundle pullback $h^{p}$ of $h$. This
distribution is integrable with leaves given by the spacelike hypersurfaces:
$M_{t}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{t\\}\times\Sigma\,,\quad\forall\,\,t\in\mathbb{R}~{}~{},$
on which $h^{p}$ restricts to the metric induced by $g$. Using $h^{p}$, we
extend $\langle~{},~{}\rangle_{h}$ and $\ast_{h}$ in the obvious manner to the
space
$\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast})\subset\Omega^{\ast}(M)$.
We have:
$\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast})=\\{\omega\in\Omega^{\ast}(M)~{}|~{}\iota_{\partial_{t}}\omega=0\\}~{}~{}.$
Since $g$ has signature $(3,1)$ while $h$ has signature $(3,0)$, we have:
$\ast_{g}\circ\ast_{g}=-\pi\,,\quad\ast_{h}\circ\ast_{h}=\mathrm{id}_{\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast})}~{}~{},$
where $\pi\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\oplus_{k=0}^{4}{(-1)^{k}\mathrm{id}_{\Omega^{k}(M)}}$ is the
signature automorphism of the exterior algebra $(\Omega^{\ast}(M),\wedge)$
(see [36]). Moreover, we have $\langle\theta,\theta\rangle_{g}=-1$ and:
$\langle\nu_{g},\nu_{g}\rangle_{g}=-1\,,\quad\langle\nu_{h},\nu_{h}\rangle_{h}=+1~{}~{}.$
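The sign in $\ast_{g}\circ\ast_{g}=-\pi$ above can be checked degree by degree from the standard formula for the square of the Hodge operator on a pseudo-Riemannian manifold (a routine verification, recorded here for convenience):

```latex
\ast_g \circ \ast_g\big|_{\Omega^k(M)}
  \;=\; (-1)^{k(4-k)}\,\mathrm{sign}(\det g)\,\mathrm{id}_{\Omega^k(M)}
  \;=\; -(-1)^{k}\,\mathrm{id}_{\Omega^k(M)}\,,
```

since $\mathrm{sign}(\det g)=-1$ in signature $(3,1)$ and $k(4-k)=4k-k^{2}\equiv k \pmod 2$. Summing over $k=0,\ldots,4$ gives $\ast_{g}\circ\ast_{g}=-\oplus_{k=0}^{4}{(-1)^{k}\mathrm{id}_{\Omega^{k}(M)}}=-\pi$. On $(\Sigma,h)$ the same formula with $\mathrm{sign}(\det h)=+1$ and $k(3-k)$ even for all $k=0,\ldots,3$ gives $\ast_{h}\circ\ast_{h}=\mathrm{id}$ in every degree.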
Any polyform $\omega\in\Omega^{\ast}(M)$ has a unique decomposition:
$\omega=\omega_{\parallel}+\omega_{\perp}~{}~{},$ (39)
such that $\omega_{\parallel},\omega_{\perp}\in\Omega^{\ast}(M)$ satisfy [36]:
$\theta\wedge\omega_{\parallel}=0\,,\quad\iota_{\partial_{t}}\omega_{\perp}=0~{}~{}.$
(40)
The second of these conditions amounts to the requirement that
$\omega_{\perp}\in\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast})$,
while the first is solved by:
$\omega_{\parallel}=\theta\wedge\omega_{\top}~{}~{}\mathrm{where}~{}~{}\omega_{\top}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}-\iota_{\partial_{t}}\omega\in\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast})~{}~{}.$
As shown in loc. cit., the map
$\omega\rightarrow(\omega_{\top},\omega_{\perp})$ gives a linear isomorphism
between $\Omega^{\ast}(M)$ and
$\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast})^{\oplus 2}$. For any
$k=0,\ldots,4$ and any $\omega,\eta\in\Omega^{k}(M)$, we have:
$\langle\omega,\eta\rangle_{g}=-\langle\omega_{\top},\eta_{\top}\rangle_{h}+\langle\omega_{\perp},\eta_{\perp}\rangle_{h}\,,\quad\langle\omega_{\parallel},\eta_{\parallel}\rangle_{g}=-\langle\omega_{\top},\eta_{\top}\rangle_{h}~{}~{}.$
Hence the isomorphism above identifies the quadratic space
$(\Omega^{\ast}(M),\langle~{},~{}\rangle_{g})$ with the direct sum of
quadratic spaces
$(\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast}),-\langle~{},~{}\rangle_{h})\oplus(\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast}),\langle~{},~{}\rangle_{h})$.
Notice that $\nu_{h}=\iota_{\partial_{t}}\nu_{g}=-(\nu_{g})_{\top}$. An easy
computation gives:
###### Lemma 5.1.
For any polyform $\omega\in\Omega^{\ast}(M)$, we have:
$(\ast_{g}\omega)_{\top}=\ast_{h}\pi(\omega_{\perp})\,,\quad(\ast_{g}\omega)_{\perp}=-\ast_{h}\omega_{\top}$
(41)
and hence:
$\ast_{g}\omega=-\ast_{h}\omega_{\top}+\theta\wedge\ast_{h}\pi(\omega_{\perp})~{}~{}.$
(42)
Given any vector bundle $V$ defined on $M$, the Hodge operators of $g$ and $h$
extend trivially to operators
$\ast_{g}:\Omega^{\ast}(M,V)\rightarrow\Omega^{\ast}(M,V)$ and
$\ast_{h}:\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast}\otimes
V)\rightarrow\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast}\otimes
V)$. The decomposition (39) holds for any $\omega\in\Omega^{\ast}(M,V)$, with
components $\omega_{\parallel},\omega_{\perp}\in\Omega^{\ast}(M,V)$ satisfying
(40). We have
$\omega_{\perp}\in\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast}\otimes
V)$ and $\omega_{\parallel}=\theta\wedge\omega_{\top}$ with
$\omega_{\top}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}-\iota_{\partial_{t}}\omega\in\mathcal{C}^{\infty}(M,\wedge^{\ast}\mathcal{D}^{\ast}\otimes
V)$. Finally, Lemma 5.1 holds for any $\omega\in\Omega^{\ast}(M,V)$.
### 5.2. Timelike dimensional reduction of abelian gauge theory
Let $P$ be a Siegel bundle of type $\mathfrak{t}\in\mathrm{Div}^{n}$ defined
on $\Sigma$, whose projection we denote by $\pi:P\rightarrow\Sigma$. Let
${\hat{P}}\stackrel{{\scriptstyle{\rm def.}}}{{=}}p^{\ast}(P)$ be the
$p$-pullback of $P$ to $M$, whose projection we denote by $\hat{\pi}$. The map
$\varphi:{\hat{P}}\rightarrow\mathbb{R}\times P$ defined through:
$\varphi(t,\sigma,y)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(t,y)\,,\quad\forall\,\,(t,\sigma)\in\mathbb{R}\times\Sigma\,,\quad\forall\,\,y\in
P_{\sigma}=\pi^{-1}(\sigma)$
allows us to identify ${\hat{P}}$ with the principal
$\mathrm{Aff}_{\mathfrak{t}}$-bundle with total space given by
$\mathbb{R}\times P$, base $M=\mathbb{R}\times\Sigma$ and projection given by
$\mathbb{R}\times P\ni(t,y)\rightarrow(t,\pi(y))\in M$. We make this
identification in what follows. Accordingly, we have:
$\hat{\pi}(t,y)=(t,\pi(y))\,,\quad\forall\,\,(t,y)\in{\hat{P}}\equiv\mathbb{R}\times
P\,.$
Let $\tau:{\hat{P}}\rightarrow P$ be the unbased morphism of principal
$\mathrm{Aff}_{\mathfrak{t}}$-bundles given by projection on the second
factor:
$\tau(t,y)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}y\,,\quad\forall\,\,(t,y)\in{\hat{P}}\,,$
which covers the map $p:M\rightarrow\Sigma$:
$\pi\circ\tau=p\circ\hat{\pi}~{}~{}.$
Consider the action $\rho:\mathbb{R}\rightarrow\mathrm{Aut}({\hat{P}})$ of
$(\mathbb{R},+)$ through unbased automorphisms of ${\hat{P}}$ given by:
$(t,y)\rightarrow\rho(a)(t,y)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(t+a,y)\,,\quad\forall\,\,(t,y)\in{\hat{P}}\equiv\mathbb{R}\times
P\,.$
This covers the action of $\mathbb{R}$ on $M=\mathbb{R}\times\Sigma$ given by
time translations:
$\hat{\pi}(\rho(a)(t,y))=(t+a,\pi(y))\,,\quad\forall\,\,(t,y)\in{\hat{P}}\equiv\mathbb{R}\times
P~{}~{}.$
###### Definition 5.2.
A principal connection ${\hat{\mathcal{A}}}\in\mathrm{Conn}({\hat{P}})$ is
called time-invariant if it satisfies:
$\rho(a)^{\ast}({\hat{\mathcal{A}}})={\hat{\mathcal{A}}}\,,\quad\forall\,\,a\in\mathbb{R}~{}~{}.$
Notice that time-invariant principal connections defined on ${\hat{P}}$ form
an affine subspace $\mathrm{Conn}^{s}({\hat{P}})$ of
$\mathrm{Conn}({\hat{P}})$. The timelike one-form $\theta\in\Omega^{1}(M)$
pulls back through $\hat{\pi}$ to an exact one-form defined on ${\hat{P}}$
which we denote by $\hat{\theta}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\hat{\pi}^{\ast}(\theta)\in\Omega^{1}({\hat{P}})$.
Let $\Delta=(\mathcal{S},\omega,\mathcal{D})$ be the duality structure defined
by $P$ on $\Sigma$. Then it is easy to see that the duality structure
$\hat{\Delta}=(\hat{\mathcal{S}},\hat{\omega},{\hat{\mathcal{D}}})$ defined by
${\hat{P}}$ on $M$ is given by:
$\hat{\mathcal{S}}=p^{\ast}(\mathcal{S})\,,\quad\hat{\omega}=p^{\ast}(\omega)\,,\quad{\hat{\mathcal{D}}}=p^{\ast}(\mathcal{D})\,.$
###### Lemma 5.3.
A connection ${\hat{\mathcal{A}}}\in\mathrm{Conn}({\hat{P}})$ is time-
invariant if and only if it can be written as:
${\hat{\mathcal{A}}}=-(\Psi^{\pi}\circ\tau)\hat{\theta}+\tau^{\ast}(\mathcal{A})$
(43)
for some $\Psi\in\mathcal{C}^{\infty}(\Sigma,\mathcal{S})$ and some
$\mathcal{A}\in\mathrm{Conn}(P)$. In this case, $\Psi$ and $\mathcal{A}$ are
determined uniquely by ${\hat{\mathcal{A}}}$ and any pair
$(\Psi,\mathcal{A})\in\mathcal{C}^{\infty}(\Sigma,\mathcal{S})\times\mathrm{Conn}(P)$
determines a time-invariant connection on ${\hat{P}}$ through this relation.
Moreover, the curvature of ${\hat{\mathcal{A}}}$ is given by:
$\mathcal{V}_{\hat{\mathcal{A}}}=\theta\wedge
p^{\ast}(\mathrm{d}_{\mathcal{D}}\Psi)+p^{\ast}(\mathcal{V}_{\mathcal{A}})\in\Omega^{2}(M,\hat{\mathcal{S}})$
(44)
and we have:
$\mathrm{d}_{\mathcal{D}}\mathcal{V}_{\mathcal{A}}=0~{}~{}.$ (45)
###### Remark 5.4.
Relation (44) gives the decomposition (39) of
$\mathcal{V}_{\hat{\mathcal{A}}}$ since
$\iota_{\partial_{t}}p^{\ast}(\mathcal{V}_{\mathcal{A}})=\iota_{\partial_{t}}p^{\ast}(\mathrm{d}_{\mathcal{D}}\Psi)=0$.
Thus:
$(\mathcal{V}_{\hat{\mathcal{A}}})_{\top}=p^{\ast}(\mathrm{d}_{\mathcal{D}}\Psi)\,,\quad(\mathcal{V}_{\hat{\mathcal{A}}})_{\perp}=p^{\ast}(\mathcal{V}_{\mathcal{A}})~{}~{}.$
(46)
###### Proof.
Any principal connection
${\hat{\mathcal{A}}}\in\mathrm{Conn}({\hat{P}})\subset\Omega^{1}({\hat{P}},\mathfrak{a})$
decomposes uniquely as:
${\hat{\mathcal{A}}}=-\Phi\hat{\theta}+{\hat{\mathcal{A}}}_{\perp}~{}~{},$
where $\Phi\in\Omega^{0}_{\mathrm{Ad}}({\hat{P}},\mathfrak{a})$ and
${\hat{\mathcal{A}}}_{\perp}\in\Omega^{1}({\hat{P}},\mathfrak{a})$ satisfies
${\hat{\mathcal{A}}}_{\perp}(\partial_{t})=0$. It is clear that ${\hat{\mathcal{A}}}$
is time-invariant if and only if $\Phi=\Psi^{\prime}\circ\tau$ for some
$\Psi^{\prime}\in\Omega^{0}_{\mathrm{Ad}}(P,\mathfrak{a})$ and
${\hat{\mathcal{A}}}_{\perp}=\tau^{\ast}(\mathcal{A})$ for some
$\mathcal{A}\in\mathrm{Conn}(P)$. Since
$\Omega^{0}_{\mathrm{Ad}}(P,\mathfrak{a})\simeq\mathcal{C}^{\infty}(\Sigma,\mathcal{S})$,
we have $\Psi^{\prime}=\Psi^{\pi}$ for some
$\Psi\in\mathcal{C}^{\infty}(\Sigma,\mathcal{S})$. Since
$\mathrm{d}\hat{\theta}=0$, the principal curvature of ${\hat{\mathcal{A}}}$
reads:
$\Omega_{\hat{\mathcal{A}}}=\mathrm{d}{\hat{\mathcal{A}}}=\hat{\theta}\wedge\mathrm{d}\Phi+\tau^{\ast}(\Omega_{\mathcal{A}})~{}~{},$
which is equivalent with (44). Relation (45) follows from the results of [35].
The remaining statements are immediate. ∎
###### Definition 5.5.
The Bogomolny pair of a time-invariant connection
${\hat{\mathcal{A}}}\in\mathrm{Conn}^{s}({\hat{P}})$ is the pair
$(\Psi,\mathcal{A})\in\mathcal{C}^{\infty}(\Sigma,\mathcal{S})\times\mathrm{Conn}(P)$
defined in Lemma 5.3. The section
$\Psi\in\mathcal{C}^{\infty}(\Sigma,\mathcal{S})$ is called the Higgs field of
the pair.
Let $\mathcal{J}$ be a taming of $\Delta$. Then the $p$-pullback
$\hat{\mathcal{J}}$ of $\mathcal{J}$ defines a time-invariant taming of
$\hat{\mathcal{S}}$, thus $({\hat{P}},\hat{\mathcal{J}})$ is a polarized
Siegel bundle. Let $\star_{h,\mathcal{J}}=\ast_{h}\otimes\mathcal{J}$ be the
polarized Hodge operator defined by $h$ and $\mathcal{J}$. Since
$\mathcal{J}^{2}=-\mathrm{id}_{\mathcal{S}}$ while $\ast_{h}$ squares to the
identity on $\Omega^{\ast}(\Sigma)$, we have:
$\star_{h,\mathcal{J}}\circ\star_{h,\mathcal{J}}=-\mathrm{id}_{\Omega^{\ast}(\Sigma,\mathcal{S})}~{}~{}.$
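This sign can be confirmed numerically in a local frame. Modelling $\ast_{h}$ by any matrix squaring to the identity and $\mathcal{J}$ by a pointwise complex structure — for instance the block matrix built from a period matrix $\mathcal{N}={\mathcal{R}}+\mathbf{i}\mathcal{I}$ with $\mathcal{R}$ symmetric and $\mathcal{I}$ positive definite, in the parametrization of Proposition A.7 — the tensor product squares to minus the identity. A minimal sketch with randomly chosen data (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Pointwise model of the taming J built from period-matrix data N = R + iI,
# with R symmetric and I symmetric positive-definite (cf. Proposition A.7):
R = rng.standard_normal((n, n)); R = (R + R.T) / 2
I = rng.standard_normal((n, n)); I = I @ I.T + n * np.eye(n)
Iinv = np.linalg.inv(I)
J = np.block([[Iinv @ R, Iinv],
              [-I - R @ Iinv @ R, -R @ Iinv]])
assert np.allclose(J @ J, -np.eye(2 * n))      # J is a complex structure

# Model *_h by any involution A (A @ A = identity); the polarized Hodge
# operator *_h (x) J then squares to minus the identity, as stated above:
A = np.array([[0.0, 1.0], [1.0, 0.0]])
star = np.kron(A, J)
assert np.allclose(star @ star, -np.eye(2 * n * 2))
print("J^2 = -id and (A (x) J)^2 = -id")
```

The second assertion is just $(A\otimes J)^{2}=A^{2}\otimes J^{2}=\mathrm{id}\otimes(-\mathrm{id})$, the coordinate form of $\star_{h,\mathcal{J}}\circ\star_{h,\mathcal{J}}=-\mathrm{id}$.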
###### Proposition 5.6.
A time-invariant connection
${\hat{\mathcal{A}}}\in\mathrm{Conn}^{s}({\hat{P}})$ is polarized self-dual
with respect to $\hat{\mathcal{J}}$ if and only if its Bogomolny pair
$(\Psi,\mathcal{A})$ satisfies the polarized Bogomolny equation with respect
to $\mathcal{J}$:
$\star_{h,\mathcal{J}}\mathcal{V}_{\mathcal{A}}=\mathrm{d}_{\mathcal{D}}\Psi\Longleftrightarrow\star_{h,\mathcal{J}}\mathrm{d}_{\mathcal{D}}\Psi=-\mathcal{V}_{\mathcal{A}}~{}~{}.$
(47)
A polarized Bogomolny pair $(\Psi,\mathcal{A})$ which satisfies this equation
is called a polarized abelian dyon relative to $\mathcal{J}$.
###### Proof.
Relations (46) and (42) give:
$\ast_{g}\mathcal{V}_{\hat{\mathcal{A}}}=-p^{\ast}(\ast_{h}\mathrm{d}_{\mathcal{D}}\Psi)+\theta\wedge
p^{\ast}(\ast_{h}\mathcal{V}_{\mathcal{A}})~{}~{},$
which implies:
$\star_{g,\hat{\mathcal{J}}}\mathcal{V}_{\hat{\mathcal{A}}}=-p^{\ast}(\star_{h,\mathcal{J}}\mathrm{d}_{\mathcal{D}}\Psi)+\theta\wedge
p^{\ast}(\star_{h,\mathcal{J}}\mathcal{V}_{\mathcal{A}})~{}~{}.$
Comparing this with (44) shows that the polarized self-duality condition for
$\mathcal{V}_{\hat{\mathcal{A}}}$ amounts to (47), where we used uniqueness of
the decomposition (39). ∎
Let:
$\mathrm{Dyons}(\Sigma,h,\mathbf{P})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{(\Psi,\mathcal{A})\in\mathcal{C}^{\infty}(\Sigma,\mathcal{S})\times\mathrm{Conn}(P)~{}|~{}\star_{h,\mathcal{J}}\mathcal{V}_{\mathcal{A}}=\mathrm{d}_{\mathcal{D}}\Psi\\}$
be the set of all polarized abelian dyons relative to $\mathcal{J}$.
###### Remark 5.7.
Equation (47) is reminiscent of the usual Bogomolny equations obtained by
dimensional reduction of the self-duality equations for a connection on a
principal bundle over a four-dimensional Riemannian manifold. However, it
differs from the latter in two crucial respects:
* •
The usual Bogomolny equations arise by dimensional reduction of the self-
duality equations (which are first order equations for a connection) in four
_Euclidean_ dimensions. By contrast, equation (47) is the reduction along a
timelike direction of the complete second order equations of motion defining
abelian gauge theory in four _Lorentzian_ dimensions, once these equations
have been re-written as first-order equations by doubling the number of
variables through the inclusion of both electromagnetic and magnetoelectric
gauge potentials. In particular, our reduction yields a system of first-order
differential equations, despite originating in a theory that was initially
defined by local second-order PDEs (see Appendix A).
* •
Equation (47) is modified by the action of the taming $\mathcal{J}$, which is
absent in the usual Bogomolny equations.
### 5.3. Gauge transformations of polarized abelian dyons
As explained in Subsection 4.1 (see [35] for more detail), the gauge group
$\mathrm{Aut}_{b}(P)$ of the Siegel bundle $P$ over $\Sigma$ has an action:
$\mathrm{ad}_{P}:\mathrm{Aut}_{b}(P)\rightarrow\mathrm{Aut}_{b}(\mathcal{S})$
through based automorphisms of the vector bundle $\mathcal{S}=\mathrm{ad}(P)$.
This action agrees through the adjoint curvature map with the pushforward
action:
$\mathbb{A}\colon\mathrm{Aut}_{b}(P)\rightarrow\mathrm{Aff}(\mathrm{Conn}(P))$
of the gauge group on the space of principal connections defined on $P$. For
any principal connection $\mathcal{A}\in\mathrm{Conn}(P)$ and any
$u\in\mathrm{Aut}_{b}(P)$, we have:
$\mathcal{V}_{\mathbb{A}_{u}(\mathcal{A})}=\mathrm{ad}_{P}(u)(\mathcal{V}_{\mathcal{A}})~{}~{}.$
Similar statements hold for the gauge group of the Siegel bundle ${\hat{P}}$
defined on $M=\mathbb{R}\times\Sigma$.
###### Definition 5.8.
A gauge transformation $\hat{u}\in\mathrm{Aut}_{b}({\hat{P}})$ of ${\hat{P}}$
is called time-invariant if:
$\hat{u}\circ\rho(a)=\rho(a)\circ\hat{u}\,,\quad\forall\,\,a\in\mathbb{R}~{}~{}.$
Notice that time-invariant gauge transformations of ${\hat{P}}$ form a
subgroup of $\mathrm{Aut}_{b}({\hat{P}})$, which we denote by
$\mathrm{Aut}_{b}^{s}({\hat{P}})$. Such transformations stabilize the affine
subspace $\mathrm{Conn}^{s}({\hat{P}})$ of time-invariant principal connections
defined on ${\hat{P}}$. Since ${\hat{P}}=p^{\ast}(P)$, we have
$\mathrm{Ad}_{G}({\hat{P}})=p^{\ast}(\mathrm{Ad}_{G}(P))$. It is easy to see that
$\hat{u}\in\mathrm{Aut}_{b}({\hat{P}})$ is time-invariant if and only if the
corresponding section
$\sigma_{\hat{u}}\in\mathcal{C}^{\infty}(M,\mathrm{Ad}_{G}({\hat{P}}))$ is the
bundle pull-back by $p$ of a section
$\sigma\in\mathcal{C}^{\infty}(\Sigma,\mathrm{Ad}_{G}(P))$. The latter
corresponds to a gauge transformation of $P$ which we denote by
$u\in\mathrm{Aut}_{b}(P)$. We have:
$\sigma_{\hat{u}}=(\sigma_{u})^{p}\,,$ (48)
(where the superscript $p$ denotes bundle pullback by $p$) as well as:
$\tau\circ\hat{u}=u\circ\tau~{}~{}.$ (49)
Conversely, any gauge transformation $u$ of $P$ determines a gauge
transformation $\hat{u}$ of ${\hat{P}}$ by relation (48) and $\hat{u}$
satisfies (49). The correspondence $u\rightarrow\hat{u}$ gives an isomorphism
of groups between $\mathrm{Aut}_{b}(P)$ and $\mathrm{Aut}_{b}^{s}({\hat{P}})$.
The following proposition shows that the map which takes a time-invariant
principal connection defined on ${\hat{P}}$ to its Bogomolny pair intertwines
the action of $\mathrm{Aut}_{b}^{s}({\hat{P}})\simeq\mathrm{Aut}_{b}(P)$ on
$\mathrm{Conn}^{s}({\hat{P}})$ with the action of $\mathrm{Aut}_{b}(P)$ on the set
$\mathcal{C}^{\infty}(\Sigma,\mathcal{S})\times\mathrm{Conn}(P)$ given by:
$\mu\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{ad}_{P}\times\mathbb{A}:\mathrm{Aut}_{b}(P)\rightarrow\mathrm{Aut}_{\mathbb{R}}(\mathcal{C}^{\infty}(\Sigma,\mathcal{S}))\times\mathrm{Aff}(\mathrm{Conn}(P))~{}~{}.$
###### Proposition 5.9.
Let ${\hat{\mathcal{A}}}$ be a time-invariant principal connection on
${\hat{P}}$ and
$(\Psi,\mathcal{A})\in\mathcal{C}^{\infty}(\Sigma,\mathcal{S})\times\mathrm{Conn}(P)$
be the Bogomolny pair defined by ${\hat{\mathcal{A}}}$. For any
$u\in\mathrm{Aut}_{b}(P)$, the Bogomolny pair $(\Psi_{u},\mathcal{A}_{u})$ of
the time-invariant connection $\mathbb{A}_{\hat{u}}({\hat{\mathcal{A}}})$
obtained from ${\hat{\mathcal{A}}}$ by applying the time-invariant gauge
transformation $\hat{u}$ is given by:
$\Psi_{u}=\mathrm{ad}_{P}(u)(\Psi)\,,\quad\mathcal{A}_{u}=\mathbb{A}_{u}(\mathcal{A})\,.$
In particular, we have:
$\mathcal{V}_{\mathcal{A}_{u}}=\mathrm{ad}_{P}(u)(\mathcal{V}_{\mathcal{A}})~{}~{}.$
###### Proof.
We have
$\mathbb{A}_{\hat{u}}({\hat{\mathcal{A}}})=(\hat{u}^{-1})^{\ast}({\hat{\mathcal{A}}})$.
Using Lemma 5.3, this gives:
$\mathbb{A}_{\hat{u}}({\hat{\mathcal{A}}})=-(\Psi^{\pi}\circ
u^{-1}\circ\tau)\hat{\theta}+\tau^{\ast}((u^{-1})^{\ast}(\mathcal{A}))=-(\Psi_{u}^{\pi}\circ\tau)\hat{\theta}+\tau^{\ast}(\mathbb{A}_{u}(\mathcal{A}))~{}~{},$
where we used relation (49) and noticed that
$(\hat{u}^{-1})^{\ast}(\hat{\theta})=\hat{\theta}$ since
$\hat{\theta}=\hat{\pi}^{\ast}(\theta)$ and
$\hat{\pi}\circ\hat{u}^{-1}=\hat{\pi}$ because $\hat{u}^{-1}$ is a based
automorphism of ${\hat{P}}$. ∎
Notice that the discrete remnant (see [35]) of any gauge transformation of
${\hat{P}}$ is time-invariant since $\Sigma$ (and hence $M$) is connected and
therefore any discrete gauge transformation of ${\hat{P}}$ is constant on $M$.
In particular, the groups of discrete gauge transformations of ${\hat{P}}$ and
$P$ can be identified. As explained in [35], we have:
$\mathrm{ad}_{P}(u)=\mathrm{ad}_{\Gamma(P)}(\bar{u})\,,\quad\forall\,\,u\in\mathrm{Aut}_{b}(P)~{}~{},$
where $\bar{u}$ is the discrete remnant of $u$. This implies that
$\mathrm{Aut}_{b}(P)$ acts on $\mathcal{S}$ through based automorphisms of the
integral duality structure ${\bm{\Delta}}$ determined by $P$:
$\mathrm{ad}_{P}(u)\in\mathrm{Aut}_{b}({\bm{\Delta}})\,,\quad\forall\,\,u\in\mathrm{Aut}_{b}(P)$
and hence induces an integral duality transformation of $\Psi$.
###### Proposition 5.10.
Let $\mathcal{J}$ be a taming of $\mathcal{S}$. Then
$(\Psi,\mathcal{A})\in\mathcal{C}^{\infty}(\Sigma,\mathcal{S})\times\mathrm{Conn}(P)$
is a polarized abelian dyon with respect to $\mathcal{J}$ if and only if
$(\Psi_{u},\mathcal{A}_{u})$ is a polarized abelian dyon with respect to the
taming $\mathcal{J}_{u}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{ad}_{P}(u)\circ\mathcal{J}\circ\mathrm{ad}_{P}(u)^{-1}$.
In particular, the action $\mu$ of $\mathrm{Aut}_{b}(P)$ restricts to
bijections:
$\mu(u):\mathrm{Dyons}(\Sigma,h,P,\mathcal{J})\xrightarrow{\sim}\mathrm{Dyons}(\Sigma,h,P,\mathcal{J}_{u})\,,\quad\forall\,\,u\in\mathrm{Aut}_{b}(P)\,.$
###### Proof.
Applying $\mathrm{ad}_{P}(u)$ shows that the polarized Bogomolny equation (47)
relative to $\mathcal{J}$ is equivalent with the polarized Bogomolny equation
relative to $\mathcal{J}_{u}$:
$\star_{h,\mathcal{J}_{u}}\mathcal{V}_{\mathcal{A}_{u}}=\mathrm{d}_{\mathcal{D}}\Psi_{u}~{}~{},$
where we used Proposition 5.9 and the fact that $\mathrm{ad}_{P}(u)$ commutes
with $\mathcal{D}$, which is proved in [35]. ∎
### 5.4. The case when $Z(P)$ has trivial monodromy on $\Sigma$
Suppose that $Z(P)$ has trivial monodromy on $\Sigma$, so the duality
structure $\Delta:=\Delta(P)$ is holonomy-trivial and its Siegel system is
trivial. Then there exists a flat trivialization of $\Delta$ which identifies
the integral duality structure ${\bm{\Delta}}=(\Delta,\mathcal{L})$ of $P$
with
$(\Sigma\times\mathbb{R}^{2n},\omega_{2n},\mathrm{d},\Sigma\times\Lambda_{\mathfrak{t}})$.
Notice that a Higgs field identifies with a smooth map from $\Sigma$ to
$\mathbb{R}^{2n}$, which we decompose into maps
$\Phi,\Upsilon:\Sigma\rightarrow\mathbb{R}^{n}$ according to:
$\Psi=\begin{pmatrix}-\Phi\\\ \Upsilon\end{pmatrix}$ (50)
###### Proposition 5.11.
A pair
$(\Psi,\mathcal{A})\in\mathcal{C}^{\infty}(\Sigma,\mathbb{R}^{2n})\times\mathrm{Conn}(P)$
satisfies the polarized Bogomolny equations if and only if:
$\mathrm{d}\Psi=\begin{pmatrix}E\\\
-{\mathcal{R}}E-\mathcal{I}\ast_{h}B\end{pmatrix}~{}~{},$ (51)
where $E\in\Omega^{1}(\Sigma,\mathbb{R}^{n})$ and
$B\in\Omega^{2}(\Sigma,\mathbb{R}^{n})$ are determined by
$\mathcal{V}_{\mathcal{A}}$ through the relation:
$\mathcal{V}_{\mathcal{A}}=\begin{pmatrix}B\\\
-{\mathcal{R}}B+\mathcal{I}\ast_{h}E\end{pmatrix}\,.$ (52)
###### Proof.
In the chosen trivialization of $\Delta$, tamings identify with taming maps
$\mathcal{J}:\Sigma\rightarrow\mathrm{GL}(2n,\mathbb{R})$. By Proposition A.7,
the latter have the form:
$\mathcal{J}=\begin{pmatrix}\mathcal{I}^{-1}{\mathcal{R}}&\mathcal{I}^{-1}\\\
-\mathcal{I}-{\mathcal{R}}\mathcal{I}^{-1}{\mathcal{R}}&-{\mathcal{R}}\mathcal{I}^{-1}\end{pmatrix}~{}~{},$
where
$\mathcal{N}={\mathcal{R}}+\mathbf{i}\mathcal{I}:\Sigma\rightarrow\mathbb{SH}_{n}$
is the corresponding period matrix map. A similar equation relates the taming
$\hat{\mathcal{J}}=\mathcal{J}\circ p$ of $\hat{\Delta}$ to the period matrix
map ${\hat{\mathcal{N}}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathcal{N}\circ
p={\hat{\mathcal{R}}}+\mathbf{i}{\hat{\mathcal{I}}}:M\rightarrow\mathbb{SH}_{n}$
(where ${\hat{\mathcal{R}}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}{\mathcal{R}}\circ p$ and
${\hat{\mathcal{I}}}\stackrel{{\scriptstyle{\rm def.}}}{{=}}\mathcal{I}\circ
p$) defined on $M$. By Lemma A.4, the adjoint curvature of $\hat{\mathcal{A}}$ is
twisted self-dual with respect to $\hat{\mathcal{J}}$ if and only if it has
the form:
$\mathcal{V}_{\hat{\mathcal{A}}}=\begin{pmatrix}{\hat{F}}\\\
G_{g}({\hat{\mathcal{N}}},{\hat{F}})\end{pmatrix}$ (53)
for some ${\hat{F}}\in\Omega^{2}(M,\mathbb{R}^{n})$, where:
$G_{g}({\hat{\mathcal{N}}},{\hat{F}})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}-{\hat{\mathcal{R}}}\,{\hat{F}}-{\hat{\mathcal{I}}}\ast_{g}{\hat{F}}\,.$
Lemma 5.1 gives:
$G_{g}({\hat{\mathcal{N}}},{\hat{F}})_{\top}=-{\hat{\mathcal{R}}}{\hat{F}}_{\top}-{\hat{\mathcal{I}}}\ast_{h}{\hat{F}}_{\perp}\,,\quad
G_{g}({\hat{\mathcal{N}}},{\hat{F}})_{\perp}=-{\hat{\mathcal{R}}}{\hat{F}}_{\perp}+{\hat{\mathcal{I}}}\ast_{h}{\hat{F}}_{\top}\,.$
Thus (53) amounts to:
$(\mathcal{V}_{\hat{\mathcal{A}}})_{\top}=\begin{pmatrix}{\hat{F}}_{\top}\\\
-{\hat{\mathcal{R}}}{\hat{F}}_{\top}-{\hat{\mathcal{I}}}\ast_{h}{\hat{F}}_{\perp}\end{pmatrix}\,,\quad(\mathcal{V}_{\hat{\mathcal{A}}})_{\perp}=\begin{pmatrix}{\hat{F}}_{\perp}\\\
-{\hat{\mathcal{R}}}{\hat{F}}_{\perp}+{\hat{\mathcal{I}}}\ast_{h}{\hat{F}}_{\top}\end{pmatrix}~{}~{}.$
(54)
These relations imply:
${\hat{F}}_{\top}=p^{\ast}(E)\,,\quad{\hat{F}}_{\perp}=p^{\ast}(B)$
for some $E\in\Omega^{1}(\Sigma,\mathbb{R}^{n})$ and
$B\in\Omega^{2}(\Sigma,\mathbb{R}^{n})$. Using (54), we conclude that (46) is
equivalent with:
$\mathrm{d}\Psi=\begin{pmatrix}E\\\
-{\mathcal{R}}E-\mathcal{I}\ast_{h}B\end{pmatrix}\,,\quad\mathcal{V}_{\mathcal{A}}=\begin{pmatrix}B\\\
-{\mathcal{R}}B+\mathcal{I}\ast_{h}E\end{pmatrix}~{}~{}.$
Notice that the pair $(E,B)$ is uniquely determined by
$\mathcal{V}_{\mathcal{A}}$ through the second of these relations. ∎
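The pointwise linear algebra behind this uniqueness claim can be illustrated numerically. The sketch below is not part of the text: the randomly sampled symmetric $\mathcal{R}$ and positive-definite $\mathcal{I}$ are illustrative assumptions, and $\ast_{h}$ is modeled as the identity on coefficient vectors at a fixed point (it is an involution pairing one-forms with two-forms in three dimensions). It recovers $(E,B)$ from $\mathcal{V}_{\mathcal{A}}$ via relation (52):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2
A = rng.standard_normal((n, n)); R = (A + A.T) / 2               # symmetric R
Bm = rng.standard_normal((n, n)); I = Bm @ Bm.T + n * np.eye(n)  # positive-definite I

# pointwise model at a fixed point of Sigma: *_h treated as the identity
# on coefficient vectors
E = rng.standard_normal(n)   # coefficients of E
B = rng.standard_normal(n)   # coefficients of B

V1, V2 = B, -R @ B + I @ E   # the two components of relation (52)

# invert the relation: B is the first component, E is solved from the second
B_rec = V1
E_rec = np.linalg.solve(I, V2 + R @ V1)
assert np.allclose(B_rec, B) and np.allclose(E_rec, E)
```

Invertibility of $\mathcal{I}$ is what makes the second component of (52) solvable for $E$, which is the content of the uniqueness remark above.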
We will refer to the forms $E\in\Omega^{1}(\Sigma,\mathbb{R}^{n})$ and
$B\in\Omega^{2}(\Sigma,\mathbb{R}^{n})$ as the electrostatic and magnetostatic
field strengths of $\mathcal{A}$. Equation (51) is equivalent to the system:
$\mathrm{d}\Phi=-E\,,\quad\mathrm{d}\Upsilon=-{\mathcal{R}}E-\mathcal{I}\ast_{h}B\,,$
which in turn amounts to:
$\vec{E}=-\mathrm{grad}_{h}\Phi\,,\quad\mathcal{I}\vec{B}+{\mathcal{R}}\vec{E}=-\mathrm{grad}_{h}\Upsilon\,,$
(55)
where we defined
$\vec{E},\vec{B}\in\mathcal{C}^{\infty}(\Sigma,T\Sigma)\otimes\mathbb{R}^{n}$
through:
$\vec{E}=E^{\sharp}\,,\quad\vec{B}=(\ast_{h}B)^{\sharp}\,.$
Here $\sharp$ denotes the musical isomorphism given by raising of indices with
the metric $h$. Equation $\mathrm{d}\mathcal{V}_{\mathcal{A}}=0$ amounts to
the system:
$\mathrm{d}B=0\,,\quad\mathrm{d}(-{\mathcal{R}}B+\mathcal{I}\ast_{h}E)=0\Longleftrightarrow\mathrm{div}_{h}\vec{B}=0\,,\quad\mathrm{div}_{h}(-{\mathcal{R}}\,\vec{B}+\mathcal{I}\,\vec{E})=0\,.$
(56)
### 5.5. Polarized dyons in prequantum electrodynamics
Prequantum electrodynamics defined on $M$ corresponds to setting $n=1$ and
${\mathcal{R}}=\frac{\theta}{2\pi},\mathcal{I}=\frac{4\pi}{g^{2}}$ with
constant $\theta\in\mathbb{R}$ in the previous subsection (see Appendix A).
Then:
$\mathcal{J}=\mathcal{J}_{\theta}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\begin{pmatrix}\frac{g^{2}\theta}{8\pi^{2}}&\frac{g^{2}}{4\pi}\\\
-\frac{4\pi}{g^{2}}-\frac{g^{2}\theta^{2}}{16\pi^{3}}&-\frac{g^{2}\theta}{8\pi^{2}}\end{pmatrix}~{}~{},$
and relation (52) becomes:
$\mathcal{V}_{\mathcal{A}}=\begin{pmatrix}B\\\
-\frac{\theta}{2\pi}B+\frac{4\pi}{g^{2}}\ast_{h}E\end{pmatrix}~{}~{}.$ (57)
In this case, relations (55) reduce to:
$\vec{E}=-\mathrm{grad}_{h}\Phi\,,\quad\frac{4\pi}{g^{2}}\vec{B}+\frac{\theta}{2\pi}\vec{E}=-\mathrm{grad}_{h}\Upsilon\,,$
(58)
which imply
$\vec{B}=-\frac{g^{2}}{4\pi}\,\mathrm{grad}_{h}(\Upsilon-\frac{\theta}{2\pi}\Phi)$
and:
$\mathrm{curl}_{h}\vec{E}=\mathrm{curl}_{h}\vec{B}=0~{}~{}.$ (59)
On the other hand, relations (56) become:
$\mathrm{d}B=\mathrm{d}(\ast_{h}E)=0\Longleftrightarrow\mathrm{div}_{h}\vec{B}=\mathrm{div}_{h}\vec{E}=0~{}~{}.$
(60)
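The pointwise content of relations (58) can be checked with a short numeric sketch (the values of $\theta$ and $g$ and the random gradient vectors at a point are illustrative assumptions): solving the second relation of (58) for $\vec{B}$ reproduces the expression $\vec{B}=-\frac{g^{2}}{4\pi}\,\mathrm{grad}_{h}(\Upsilon-\frac{\theta}{2\pi}\Phi)$ derived above.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, g = 0.7, 1.3                                          # sample coupling values
gPhi, gUps = rng.standard_normal(3), rng.standard_normal(3)  # grad Phi, grad Upsilon at a point

E = -gPhi                                                    # first relation of (58)
B = -(g**2 / (4 * np.pi)) * (gUps + (theta / (2 * np.pi)) * E)  # second relation solved for B

# B = -(g^2 / 4pi) * grad(Upsilon - (theta / 2pi) Phi)
assert np.allclose(B, -(g**2 / (4 * np.pi)) * (gUps - (theta / (2 * np.pi)) * gPhi))
```

Since both fields are gradients in this case, the curl-free conditions (59) follow immediately.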
Equations (59) and (60) describe source-free Maxwell electromagnetostatics on
$\Sigma$, where the vector fields $\vec{E},\vec{B}\in\mathcal{A}(\Sigma)$ are
the classical static electric and magnetic fields. Relation (58) shows that
$\Phi$ and $\frac{g^{2}}{4\pi}\Upsilon-\frac{g^{2}\theta}{8\pi^{2}}\Phi$ are
globally-defined classical electrostatic and magnetostatic scalar potentials.
Since $H^{1}(\Sigma,\mathbb{R})$ need not vanish, relation (59) need not imply
(58). Hence polarized dyons describe special potential electromagnetostatic
configurations, i.e. solutions of the static Maxwell equations on $\Sigma$
which admit both a globally-defined electric scalar potential and a globally-
defined magnetic scalar potential. Even though $\vec{E}$ and $\vec{B}$ satisfy
(60), such solutions need not admit a globally-defined vector electric or
vector magnetic potential, since $H^{2}(\Sigma,\mathbb{R})$ may be non-zero.
The condition that the configuration $(\vec{E},\vec{B})$ admits globally-
defined electric and magnetic scalar potentials is a consequence of the fact
that a polarized dyon originates from a principal connection defined on
${\hat{P}}$, which itself is a consequence of the DSZ integrality condition.
We formalize this as follows.
###### Definition 5.12.
An electromagnetostatic configuration defined on $(\Sigma,h)$ is a pair of
vector fields
$(\vec{E},\vec{B})\in\mathcal{A}(\Sigma)\times\mathcal{A}(\Sigma)$ which
satisfies the static source-free Maxwell equations (59) and (60). Such a
configuration is said to be potential if there exist real-valued smooth
functions $\Phi,\Upsilon$ defined on $\Sigma$ such that relations (58) hold.
Let $\mathrm{EMC}(\Sigma,h)$ denote the vector space of all
electromagnetostatic configurations defined on $(\Sigma,h)$ and
$\mathrm{EMC}_{\mathrm{pot}}(\Sigma,h)$ denote the subspace of those
configurations which are potential. Consider the map:
$H_{\theta}:\mathrm{Dyons}(\Sigma,h,P,\mathcal{J}_{\theta})\rightarrow\mathrm{EMC}_{\mathrm{pot}}(\Sigma,h)$
which associates the electromagnetostatic configuration $(\vec{E},\vec{B})$
defined through relation (57) to the polarized dyon $(\Psi,\mathcal{A})$. As
in Subsection 2.10, the chosen trivialization of $\Delta$ induces
isomorphisms:
$H^{2}_{\mathcal{D}}(\Sigma,\mathcal{S})\simeq
H^{2}(\Sigma,\mathbb{R}^{2n})\,,\quad H^{2}(\Sigma,\mathcal{L})\simeq
H^{2}(\Sigma,\Lambda_{\mathfrak{t}})\,,$
which allow us to identify the morphism
$j:H^{2}(\Sigma,\mathcal{L})\rightarrow
H^{2}_{\mathcal{D}}(\Sigma,\mathcal{S})$ with the morphism $i$ appearing
in the long exact sequence:
$\ldots\rightarrow H^{1}(\Sigma,A)\rightarrow
H^{2}(\Sigma,\Lambda_{\mathfrak{t}})\xrightarrow{i}H^{2}(\Sigma,\mathbb{R}^{2n})\rightarrow
H^{2}(\Sigma,A)\rightarrow\ldots$
induced by the exponential sequence:
$0\rightarrow\Lambda_{\mathfrak{t}}\rightarrow\mathbb{R}^{2n}\rightarrow
A\rightarrow 0~{}~{}.$
The relation $[\mathcal{V}_{\mathcal{A}}]_{\mathcal{D}}=2\pi j_{\ast}(c(P))$
becomes:
$[\mathcal{V}_{\mathcal{A}}]=2\pi i_{\ast}(c(P))\,,$
where $[\omega]$ denotes the de Rham cohomology class of a form
$\omega\in\Omega^{k}(\Sigma,\mathbb{R}^{2n})$. Conversely, any closed two-form
$\mathcal{V}\in\Omega^{2}_{\mathrm{cl}}(\Sigma,\mathbb{R}^{2n})$ which
satisfies $[\mathcal{V}]=2\pi i_{\ast}(c(P))$ identifies with the curvature of
some principal connection $\mathcal{A}$ on $P$. For any
$(\vec{E},\vec{B})\in\mathrm{EMC}(\Sigma,h)$, let:
$\mathcal{V}^{\theta}_{\vec{E},\vec{B}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\begin{pmatrix}B\\\
-\frac{\theta}{2\pi}B+\frac{4\pi}{g^{2}}\ast_{h}E\end{pmatrix}~{}~{},$
where $E\stackrel{{\scriptstyle{\rm def.}}}{{=}}\vec{E}^{\flat}$ and
$B\stackrel{{\scriptstyle{\rm def.}}}{{=}}\ast_{h}(\vec{B}^{\flat})$.
###### Theorem 5.13.
The image of the map $H_{\theta}$ coincides with the set:
$\mathrm{EMC}_{\mathrm{pot}}(\Sigma,h;\theta,c(P))\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{(\vec{E},\vec{B})\in\mathrm{EMC}_{\mathrm{pot}}(\Sigma,h)~{}|~{}[\mathcal{V}^{\theta}_{\vec{E},\vec{B}}]=2\pi i_{\ast}(c(P))\\}~{}~{}.$
Moreover, the $H_{\theta}$-preimage of a configuration
$(\vec{E},\vec{B})\in\mathrm{EMC}_{\mathrm{pot}}(\Sigma,h;\theta,c(P))$ is
given by:
$H_{\theta}^{-1}(\\{(\vec{E},\vec{B})\\})=\Big{\\{}(\Psi,\mathcal{A})\in\mathrm{Dyons}(\Sigma,h,P,\mathcal{J}_{\theta})~{}|~{}\mathcal{V}_{\mathcal{A}}=\mathcal{V}^{\theta}_{\vec{E},\vec{B}}~{}\&~{}\mathrm{grad}_{h}\Psi=\begin{pmatrix}\vec{E}\\\
-\frac{\theta}{2\pi}\vec{E}-\frac{4\pi}{g^{2}}\vec{B}\end{pmatrix}\Big{\\}}~{}~{}.$
###### Proof.
Let $(\Psi,\mathcal{A})$ be a polarized abelian dyon. Then
$H_{\theta}(\Psi,\mathcal{A})\in\mathrm{EMC}_{\mathrm{pot}}(\Sigma,h)$ and by
equation (57), there exists an electromagnetostatic configuration
$(\vec{E},\vec{B})$ such that:
$\mathcal{V}_{\mathcal{A}}=\begin{pmatrix}B\\\
-\frac{\theta}{2\pi}B+\frac{4\pi}{g^{2}}\ast_{h}E\end{pmatrix}\,.$
Since $\mathcal{V}_{\mathcal{A}}$ is the curvature of a connection
$\mathcal{A}$ on $P$, it satisfies
$[\mathcal{V}_{\mathcal{A}}]=2\pi i_{\ast}(c(P))$ and
hence $(\vec{E},\vec{B})\in\mathrm{EMC}_{\mathrm{pot}}(\Sigma,h;\theta,c(P))$.
On the other hand, if
$(\vec{E},\vec{B})\in\mathrm{EMC}_{\mathrm{pot}}(\Sigma,h;\theta,c(P))$,
equation $[\mathcal{V}^{\theta}_{\vec{E},\vec{B}}]=2\pi i_{\ast}(c(P))$ implies
that there exists a connection $\mathcal{A}$ on $P$ whose curvature is
$\mathcal{V}^{\theta}_{\vec{E},\vec{B}}$ and since $(\vec{E},\vec{B})$ is a
potential electromagnetostatic solution, choosing potentials $\Phi$ and
$\Upsilon$ and defining:
$\Psi=\begin{pmatrix}-\Phi\\\ \Upsilon\end{pmatrix}$ (61)
we conclude that $(\Psi,\mathcal{A})$ is a polarized abelian dyon whence
$(\vec{E},\vec{B})\in\mathrm{Im}(H_{\theta})$. The only freedom in choosing a
preimage of $(\vec{E},\vec{B})$ under $H_{\theta}$ lies in the choice of the
potentials $\Phi$ and $\Upsilon$, as prescribed in the statement of the
theorem. ∎
### 5.6. Polarized Bogomolny equations on the punctured Euclidean 3-space
In this subsection, we construct families of solutions (which generalize the
dyons of ordinary electromagnetism) on the punctured Euclidean space
$\Sigma=\mathbb{R}^{3}_{0}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathbb{R}^{3}\backslash\left\\{0\right\\}$. Spherical coordinates
give a diffeomorphism:
$\mathbb{R}^{3}_{0}\simeq\mathbb{R}_{>0}\times\mathrm{S}^{2}\,,$
and we denote by $r\in\mathbb{R}_{>0}$ the radial coordinate. The metric $h$
reads:
$h=\mathrm{d}r^{2}+r^{2}h_{\mathrm{S}^{2}}\,,$
where $h_{\mathrm{S}^{2}}$ is the unit round metric of $\mathrm{S}^{2}$. We
have $\nu_{h}=r^{2}\mathrm{d}r\wedge\nu_{\mathrm{S}^{2}}$, where
$\nu_{\mathrm{S}^{2}}$ is the volume form of $\mathrm{S}^{2}$. Let
$q:\mathbb{R}^{3}_{0}\rightarrow\mathrm{S}^{2}$ be the map defined through:
$q(x)\stackrel{{\scriptstyle{\rm def.}}}{{=}}\frac{x}{||x||}~{}~{},$
where $||x||$ is the Euclidean norm of $x$. Up to isomorphism, any Siegel
bundle $P$ of rank $n$ and type $\mathfrak{t}\in\mathrm{Div}^{n}$ defined
on $\mathbb{R}^{3}_{0}$ is the pull-back through $q$ of a Siegel bundle of the
same rank and type defined on $\mathrm{S}^{2}$, which reduces to a principal
torus bundle since $\mathrm{S}^{2}$ is simply connected. Accordingly, the
duality structure $\Delta$ of $P$ is holonomy trivial and we can fix a global
flat trivialization in which $\Delta$ identifies with:
$\Delta\equiv(\Sigma\times\mathbb{R}^{2n},\omega_{2n},\mathrm{d})\,,$ (62)
and the Dirac system $\mathcal{L}$ determined by $P$ identifies with
$\Sigma\times\Lambda_{\mathfrak{t}}$. Hence a Bogomolny pair consists of a map
$\Psi\in\mathcal{C}^{\infty}(\Sigma,\mathbb{R}^{2n})$ and a connection
$\mathcal{A}\in\mathrm{Conn}(P)$ whose curvature can be viewed as a vector-
valued two-form
$\mathcal{V}_{\mathcal{A}}\in\Omega^{2}(\Sigma,\mathbb{R}^{2n})$. Moreover, a
taming of $\Delta$ can be viewed as a map
$\mathcal{J}\in\mathcal{C}^{\infty}(\Sigma,\mathrm{GL}(2n,\mathbb{R}))$ which
squares to minus the identity everywhere and satisfies
$\mathcal{J}^{t}(\sigma){\hat{\omega}}_{2n}\mathcal{J}(\sigma)=\hat{\omega}_{2n}$
as well as the positivity condition
$\omega_{2n}(\mathcal{J}(\sigma)\xi,\xi)>0$ for all $\sigma\in\Sigma$ and all
$\xi\in\mathbb{R}^{2n}\backslash\left\\{0\right\\}$.
Let us assume that $\mathcal{J}$ and $\Psi$ depend only on $r$. Then
$\mathrm{d}\Psi=\partial_{r}\Psi\mathrm{d}r$ and
$\ast_{h}(\mathrm{d}r)=\iota_{\partial_{r}}\nu_{h}=r^{2}q^{\ast}(\nu_{\mathrm{S}^{2}})$,
hence the polarized Bogomolny equation reads:
$\mathcal{V}_{\mathcal{A}}=-r^{2}\mathcal{J}(r)\frac{\mathrm{d}\Psi}{\mathrm{d}r}\,q^{\ast}(\nu_{\mathrm{S}^{2}})\,.$
(63)
Since $\mathcal{V}_{\mathcal{A}}$ and $\nu_{\mathrm{S}^{2}}$ are closed, a
necessary condition for this equation to admit solutions
$\mathcal{A}\in\mathrm{Conn}(P)$ is:
$\frac{\mathrm{d}}{\mathrm{d}r}(r^{2}\mathcal{J}\frac{\mathrm{d}}{\mathrm{d}r}\Psi\,)=0\,.$
The general solution of this integrability condition is:
$\Psi=-\int\frac{\mathcal{J}v}{2r^{2}}\mathrm{d}r+v^{\prime}\,,$
for constant vectors $v,v^{\prime}\in\mathbb{R}^{2n}$. Using this in (63)
gives:
$\mathcal{V}_{\mathcal{A}}=-\frac{1}{2}v\,q^{\ast}(\nu_{\mathrm{S}^{2}})\,.$
The last relation determines $\mathcal{A}$ up to a transformation of the form:
$\mathcal{A}\rightarrow\mathcal{A}+\omega~{}~{}$
where $\omega\in\Omega^{1}_{\mathrm{Ad}}(P,\mathbb{R}^{2n})$ is closed and
hence corresponds to a closed 1-form
$\omega^{\prime}\in\Omega^{1}(\Sigma,\mathbb{R}^{2n})$ (recall that
$\mathrm{d}_{\mathcal{D}}=\mathrm{d}$ in our trivialization of $\mathcal{S}$).
Notice that $\omega^{\prime}$ need not be exact since
$\Sigma=\mathbb{R}^{3}_{0}$ is not contractible.
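For a constant taming map (as in the case of prequantum electrodynamics), the radial solution above can be verified numerically. In the sketch below, the couplings and the constant vectors $v,v^{\prime}$ are arbitrary sample values, not data from the text:

```python
import numpy as np

# constant taming J for n = 1, built from sample couplings R and I > 0
R, I = 0.3, 2.0
J = np.array([[R / I, 1 / I],
              [-I - R**2 / I, -R / I]])
assert np.allclose(J @ J, -np.eye(2))            # J^2 = -1

rng = np.random.default_rng(2)
v, vp = rng.standard_normal(2), rng.standard_normal(2)

Psi  = lambda r:  J @ v / (2 * r) + vp           # = -int J v / (2 r^2) dr + v'
dPsi = lambda r: -J @ v / (2 * r**2)

# dPsi really is the derivative of Psi (finite-difference check)
r0, eps = 1.7, 1e-6
assert np.allclose((Psi(r0 + eps) - Psi(r0 - eps)) / (2 * eps), dPsi(r0), atol=1e-6)

for r in (0.5, 1.0, 3.7):
    # integrability: r^2 J dPsi/dr is the constant vector v/2, so the
    # curvature coefficient -r^2 J dPsi/dr equals -v/2, as in the text
    assert np.allclose(r**2 * (J @ dPsi(r)), v / 2)
```

The identity $\mathcal{J}^{-1}=-\mathcal{J}$ is what turns the integration constant of the ODE into the vector $v/2$ appearing in the curvature.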
#### 5.6.1. The integrality condition for $v$
Since $\mathrm{S}^{2}$ is a deformation retract of $\Sigma$, the map
$q^{\ast}:H^{\ast}(\mathrm{S}^{2},A)\rightarrow
H^{\ast}(\Sigma,A)$ induced by $q$ on
cohomology is an isomorphism of groups for any abelian group of coefficients
$A$. Since $H_{1}(\mathrm{S}^{2},\mathbb{Z})=0$, the universal coefficient
theorem for cohomology gives:
$H^{2}(\mathrm{S}^{2},\Lambda_{\mathfrak{t}})\simeq_{\mathbb{Z}}{\rm
Hom}_{\mathbb{Z}}(H_{2}(\mathrm{S}^{2},\mathbb{Z}),\Lambda_{\mathfrak{t}})\simeq_{\mathbb{Z}}\Lambda_{\mathfrak{t}}~{}~{},$
(64)
where the last isomorphism is given by evaluation on the fundamental class
$[\mathrm{S}^{2}]\in H_{2}(\mathrm{S}^{2},\mathbb{Z})$. This allows us to view
the characteristic class $c(P)=q^{\ast}(c(P_{0}))$ as an element of
$\Lambda_{\mathfrak{t}}$. Moreover, we have isomorphisms of vector spaces:
$H^{2}(\mathrm{S}^{2},\mathbb{R}^{2n})\simeq_{\mathbb{R}}H^{2}(\mathrm{S}^{2},\mathbb{Z})\otimes_{\mathbb{Z}}\mathbb{R}^{2n}\simeq_{\mathbb{R}}\mathbb{R}^{2n}~{}~{},$
(65)
the last of which takes $u\otimes_{\mathbb{Z}}x$ to $x$ for all
$x\in\mathbb{R}^{2n}$. Through the isomorphisms (64) and (65), the map
$i_{\ast}:H^{2}(\mathrm{S}^{2},\Lambda_{\mathfrak{t}})\rightarrow
H^{2}(\mathrm{S}^{2},\mathbb{R}^{2n})$ corresponds to the inclusion
$\Lambda_{\mathfrak{t}}\subset\mathbb{R}^{2n}$ and $2\pi c(P)$ identifies with
the de Rham cohomology class of $\mathcal{V}_{\mathcal{A}}$. The free abelian
group $H^{2}(\mathrm{S}^{2},\mathbb{Z})$ is generated by half of the Euler
class $e(\mathrm{S}^{2})$ of $\mathrm{S}^{2}$, which satisfies:
$\int_{\mathrm{S}^{2}}e(\mathrm{S}^{2})=\chi(\mathrm{S}^{2})=1+(-1)^{2}=2~{}~{}.$
On the other hand, the de Rham cohomology class $u\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left[\frac{\nu_{\mathrm{S}^{2}}}{4\pi}\right]\in
H^{2}(\mathrm{S}^{2},\mathbb{R}^{2n})$ satisfies:
$u=i_{\ast}\left(\frac{1}{2}e(\mathrm{S}^{2})\right)~{}~{}.$
We have:
$[\mathcal{V}_{\mathcal{A}}]=2\pi i_{\ast}(c(P))~{}~{},$
which implies $v\in\Lambda_{\mathfrak{t}}$ as well as:
$c(P_{0})=-\frac{1}{2}v\,e(\mathrm{S}^{2})~{}~{}\mathrm{and}~{}~{}i_{\ast}(c(P_{0}))=-vu~{}~{}.$
Hence we obtain an integrality condition for the integration constant $v$,
which guarantees the existence of a solution and determines through the
previous equation the topological type of the bundle $P$ carrying the
solution.
## Appendix A Local abelian gauge theory
This appendix discusses the duality-covariant formulation and global
symmetries of source-free $\mathrm{U}(1)^{n}$ abelian gauge theory on a
contractible oriented four-manifold $M$ endowed with an arbitrary metric of
signature $(3,1)$, with the goal of motivating the geometric model introduced
in Section 1 and of making contact with the physics literature. The local
study of electromagnetic duality for Maxwell electrodynamics in four
Lorentzian dimensions has a long history, see [21, 22] as well as [41] and its
references and citations. Let $\ast_{g}$ be the Hodge operator of $(M,g)$.
The prototypical example of local abelian gauge theory is given by classical
electrodynamics, which is defined by the Lagrangian density functional:
$\mathfrak{L}[A]=-\frac{4\pi}{g^{2}}(F_{A})_{ab}(F_{A})^{ab}+\frac{\theta}{2\pi}(F_{A})_{ab}\,(\ast_{g}F_{A})^{ab}\,,\qquad
F_{A}=\mathrm{d}A~{}~{}\mathrm{with}~{}~{}A\in\Omega^{1}(M)~{}~{},$
where $\theta\in\mathbb{R}$ is the theta angle. Classical Maxwell theory is
obtained for $\theta=0$. Below, we discuss the more general construction of
local abelian gauge theories, which is motivated by the local structure of
four-dimensional supergravity and supersymmetric field theory and which
involves a finite number of abelian gauge fields.
### A.1. Lagrangian formulation
Given $n\in\mathbb{Z}_{>0}$ and a Lorentzian metric $g$ on $M$, consider the
following generalization of the previous Lagrangian density, which allows all
couplings to be smooth real-valued functions:
$\mathfrak{L}[A^{1},\ldots,A^{n}]=-\mathcal{I}_{\Lambda\Sigma}\,\langle
F^{\Lambda}_{A},F^{\Sigma}_{A}\rangle_{g}+{\mathcal{R}}_{\Lambda\Sigma}\,\langle
F^{\Lambda}_{A},\ast_{g}F^{\Sigma}_{A}\rangle_{g}~{}~{}.$
Here $\Lambda,\Sigma=1,\ldots,n$ and:
$F^{\Lambda}_{A}=\mathrm{d}A^{\Lambda}\,,\qquad\Lambda=1,\ldots,n\,,$
where $A^{\Lambda}\in\Omega^{1}(M)$ and
${\mathcal{R}}_{\Lambda\Sigma},\mathcal{I}_{\Lambda\Sigma}\in\mathcal{C}^{\infty}(M)$
are the components of an $\mathbb{R}^{n}$-valued one-form
$A\in\Omega^{1}(M,\mathbb{R}^{n})$ and of functions
${\mathcal{R}},\mathcal{I}\in\mathcal{C}^{\infty}(M,\mathrm{Mat}_{s}(n,\mathbb{R}))$
valued in the space $\mathrm{Mat}_{s}(n,\mathbb{R})$ of symmetric square
matrices of size $n$ with real entries. To guarantee a positive-definite
kinetic term we must assume that $\mathcal{I}$ is positive-definite
everywhere. Notice that classical electrodynamics corresponds to $n=1$ with
${\mathcal{R}}=\frac{\theta}{2\pi}$ and $\mathcal{I}=\frac{4\pi}{g^{2}}$.
Since $\ast_{g}^{2}F^{\Lambda}=-F^{\Lambda}$, the action is equivalent to:
$\mathcal{S}[A^{1},\ldots,A^{n}]=\int\mathfrak{L}[A^{1},\ldots,A^{n}]\mathrm{vol}_{g}=-\int\left(\mathcal{I}_{\Lambda\Sigma}\,F^{\Lambda}_{A}\wedge\ast_{g}F^{\Sigma}_{A}+{\mathcal{R}}_{\Lambda\Sigma}\,F^{\Lambda}_{A}\wedge
F^{\Sigma}_{A}\right)~{}~{}.$
The partial differential equations obtained as critical points of the
variational problem with respect to compactly supported variations are:
$\mathrm{d}({\mathcal{R}}_{\Lambda\Sigma}F^{\Sigma}_{A}+\mathcal{I}_{\Lambda\Sigma}\ast_{g}F^{\Sigma}_{A})=0\,,$
or, in matrix notation:
$\mathrm{d}({\mathcal{R}}F_{A}+\mathcal{I}\ast_{g}F_{A})=0\,,$
where
$F_{A}=\mathrm{d}A\in\Omega^{2}_{\mathrm{cl}}(M,\mathbb{R}^{n})=\Omega^{2}_{\mathrm{ex}}(M,\mathbb{R}^{n})$
is an $\mathbb{R}^{n}$-valued closed (and hence exact) two-form. These
equations define classical local abelian gauge theory. Since both
${\mathcal{R}}$ and $\mathcal{I}$ are symmetric and $\mathcal{I}$ is positive
definite, the pair $({\mathcal{R}},\mathcal{I})$ is equivalent to a map:
$\mathcal{N}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}{\mathcal{R}}+\mathbf{i}\mathcal{I}\colon M\to\mathbb{SH}_{n}\,,$
where $\mathbb{SH}_{n}$ denotes the Siegel upper half space of symmetric
$n\times n$ complex matrices with positive definite imaginary part.
###### Definition A.1.
A period matrix map of size $n$ on $M$ is a smooth function
$\mathcal{N}\in\mathcal{C}^{\infty}(M,\mathbb{SH}_{n})$ valued in
$\mathbb{SH}_{n}$. We denote the set of such maps by $\mathrm{Per}_{n}(M)$.
When the metric $g$ is fixed, classical local abelian gauge theory is uniquely
determined by a choice of period matrix map.
###### Definition A.2.
Let $\mathcal{N}={\mathcal{R}}+\mathbf{i}\mathcal{I}$ be a period matrix map
of size $n$. The local abelian gauge theory associated to $\mathcal{N}$ is
defined through the following system of equations:
$\mathrm{d}({\mathcal{R}}\,F_{A}+\mathcal{I}\ast_{g}F_{A})=0\,,$ (66)
with unknowns given by the vector valued one-form
$A\in\Omega^{1}(M,\mathbb{R}^{n})$, where $F_{A}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{d}A\in\Omega^{2}_{\mathrm{cl}}(M,\mathbb{R}^{n})$.
###### Remark A.3.
The equations of motion of local abelian gauge theory have a _gauge symmetry_
consisting of transformations of the type:
$A\mapsto
A+\mathrm{d}\alpha\,,\qquad\alpha\in\mathcal{C}^{\infty}(M,\mathbb{R}^{n})\,.$
Variables $A\in\Omega^{1}(M,\mathbb{R}^{n})$ related by such gauge
transformations should be viewed as physically equivalent.
Let us write the equations of motion (66) in a form amenable to geometric
interpretation. Given a period matrix map
$\mathcal{N}={\mathcal{R}}+\mathbf{i}\mathcal{I}$ and a vector of two-forms
$F\in\Omega^{2}(M,\mathbb{R}^{n})$, define:
$G_{g}(\mathcal{N},F)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}-{\mathcal{R}}\,F-\mathcal{I}\ast_{g}F\,.$
Then the condition $\mathrm{d}F=0$ together with the equations of motion (66)
are equivalent with the single equation:
$\mathrm{d}\mathcal{V}=0\,,$ (67)
where the $\mathbb{R}^{2n}$-valued two-form $\mathcal{V}$ is related to $F$
by:
$\mathcal{V}=\begin{pmatrix}F\\\
G_{g}(\mathcal{N},F)\end{pmatrix}\in\Omega^{2}(M,\mathbb{R}^{2n})=\Omega^{2}(M,\mathbb{R}^{n}\oplus\mathbb{R}^{n})\,.$
(68)
As the following lemma shows, not every vector-valued two-form
$\mathcal{V}\in\Omega^{2}(M,\mathbb{R}^{2n})$ can be written as prescribed by
equation (68).
###### Lemma A.4.
Let $\mathcal{N}={\mathcal{R}}+\mathbf{i}\mathcal{I}\in\mathrm{Per}_{n}(M)$ be
a period matrix map of size $n$ on $M$. A vector valued two-form
$\mathcal{V}\in\Omega^{2}(M,\mathbb{R}^{2n})$ can be written as:
$\mathcal{V}=\begin{pmatrix}F\\\ G_{g}(\mathcal{N},F)\end{pmatrix}$
for some $F\in\Omega^{2}(M,\mathbb{R}^{n})$ if and only if:
$\ast_{g}\mathcal{V}=-\mathcal{J}\mathcal{V}\,,$ (69)
where $\mathcal{J}\in\mathcal{C}^{\infty}(M,\mathrm{GL}(2n,\mathbb{R}))$ is
the matrix-valued map defined through:
$\mathcal{J}=\begin{pmatrix}\mathcal{I}^{-1}{\mathcal{R}}&\mathcal{I}^{-1}\\\
-\mathcal{I}-{\mathcal{R}}\mathcal{I}^{-1}{\mathcal{R}}&-{\mathcal{R}}\mathcal{I}^{-1}\end{pmatrix}~{}~{}.$
We have $\mathcal{J}^{2}=-1$. Moreover, $F$ with the property above is
uniquely determined by $\mathcal{V}$.
###### Remark A.5.
Notice that $\mathcal{J}$ can be viewed as a complex structure defined on the
trivial real vector bundle of rank $2n$ over $M$. For classical
electrodynamics, the taming map is constant and given by:
$\mathcal{J}=\begin{pmatrix}\frac{g^{2}\theta}{8\pi^{2}}&\frac{g^{2}}{4\pi}\\\
-\frac{4\pi}{g^{2}}-\frac{g^{2}\theta^{2}}{16\pi^{3}}&-\frac{g^{2}\theta}{8\pi^{2}}\end{pmatrix}~{}~{}.$
In this case, the period matrix map is constant and given by:
$\mathcal{N}=\frac{\theta}{2\pi}+\mathbf{i}\frac{4\pi}{g^{2}}~{}~{},$
being traditionally denoted by $\tau$.
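As a numeric sanity check (a minimal sketch with arbitrary sample values of $\theta$ and $g$, which are assumptions for illustration), the constant matrix above is recovered by substituting ${\mathcal{R}}=\frac{\theta}{2\pi}$ and $\mathcal{I}=\frac{4\pi}{g^{2}}$ into the block formula of Lemma A.4, and it squares to $-1$:

```python
import numpy as np

theta, g = 0.7, 1.3           # arbitrary sample values
R = theta / (2 * np.pi)       # scalar R for n = 1
I = 4 * np.pi / g**2          # scalar I for n = 1

# block formula of Lemma A.4 specialized to n = 1
J = np.array([[R / I,           1 / I],
              [-I - R**2 / I,  -R / I]])

# the constant matrix displayed in the remark
J_theta = np.array(
    [[g**2 * theta / (8 * np.pi**2),  g**2 / (4 * np.pi)],
     [-4 * np.pi / g**2 - g**2 * theta**2 / (16 * np.pi**3),
      -g**2 * theta / (8 * np.pi**2)]])

assert np.allclose(J, J_theta)
assert np.allclose(J @ J, -np.eye(2))   # J^2 = -1
```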
###### Proof.
If:
$\mathcal{V}=\begin{pmatrix}F\\\ G_{g}(\mathcal{N},F)\end{pmatrix}\,,$
then direct computation using the fact that $\ast_{g}^{2}=-1$ on two-forms
shows that $\mathcal{V}$ satisfies
$\ast_{g}\mathcal{V}=-\mathcal{J}\mathcal{V}$. On the other hand, writing
$\mathcal{V}=\begin{pmatrix}F\\\ G\end{pmatrix}$ with
$F,G\in\Omega^{2}(M,\mathbb{R}^{n})$ shows that the equation
$\ast_{g}\mathcal{V}=-\mathcal{J}\mathcal{V}$ is equivalent to:
$\begin{pmatrix}\ast_{g}F\\\
\ast_{g}G\end{pmatrix}=\begin{pmatrix}-\mathcal{I}^{-1}{\mathcal{R}}F-\mathcal{I}^{-1}G\\\
(\mathcal{I}+{\mathcal{R}}\mathcal{I}^{-1}{\mathcal{R}})F+{\mathcal{R}}\mathcal{I}^{-1}G\end{pmatrix}\,,$
which in turn amounts to $G=G_{g}(\mathcal{N},F)$. ∎
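The algebraic properties of the matrix $\mathcal{J}$ used above ($\mathcal{J}^{2}=-1$, compatibility with $\hat{\omega}_{2n}$ and tamed positivity, cf. Proposition A.7 below) can be confirmed numerically; the randomly sampled symmetric $\mathcal{R}$ and positive-definite $\mathcal{I}$ below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)); R = (A + A.T) / 2               # symmetric R
Bm = rng.standard_normal((n, n)); I = Bm @ Bm.T + n * np.eye(n)  # symmetric positive-definite I
Iinv = np.linalg.inv(I)

# the matrix of Lemma A.4 / equation (72)
J = np.block([[Iinv @ R,           Iinv],
              [-I - R @ Iinv @ R, -R @ Iinv]])

w = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n),       np.zeros((n, n))]])             # matrix of omega_2n

assert np.allclose(J @ J, -np.eye(2 * n))   # J^2 = -1
assert np.allclose(J.T @ w @ J, w)          # compatibility with omega_2n
xi = rng.standard_normal(2 * n)
assert xi @ J.T @ w @ xi > 0                # tamed positivity: omega(J xi, xi) > 0
```

The positivity assertion reflects the fact that $\mathcal{J}^{t}{\hat{\omega}}_{2n}$ is symmetric positive-definite whenever $\mathcal{I}$ is.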
Let $\omega_{2n}$ be the standard symplectic form on $\mathbb{R}^{2n}$, which
in our conventions has the following matrix in the canonical basis
$\mathcal{E}=(e_{1},\ldots,e_{n},f_{1},\ldots,f_{n})$ of the latter:
${\hat{\omega}}_{2n}=\begin{pmatrix}0&I_{n}\\\ -I_{n}&0\end{pmatrix}~{}~{}.$
(70)
Here $I_{n}$ is the identity matrix of size $n$. We have:
$\omega_{2n}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\sum_{a=1}^{n}e_{a}^{\ast}\wedge f_{a}^{\ast}\,,$ (71)
where
$\mathcal{E}^{\ast}=(e_{1}^{\ast},\ldots,e^{\ast}_{n},f^{\ast}_{1},\ldots,f^{\ast}_{n})$
is the basis dual to $\mathcal{E}=(e_{1},\ldots,e_{n},f_{1},\ldots,f_{n})$.
The following result gives a geometric interpretation of the equations of
motion (69). Recall that an almost complex structure $J$ on $\mathbb{R}^{2n}$
is called a taming of the standard symplectic form $\omega_{2n}$ if:
$\omega_{2n}(J\xi_{1},J\xi_{2})=\omega_{2n}(\xi_{1},\xi_{2})\,,\quad\forall\,\,\xi_{1},\xi_{2}\in\mathbb{R}^{2n}\,,$
and:
$\omega_{2n}(J\xi,\xi)>0\,,\quad\forall\,\,\xi\in\mathbb{R}^{2n}\backslash\left\\{0\right\\}\,.$
###### Definition A.6.
A taming map of size $2n$ defined on $M$ is a smooth map
$\mathcal{J}\in\mathcal{C}^{\infty}(M,\mathrm{GL}(2n,\mathbb{R}))$ such that
$\mathcal{J}(m)$ is a taming of $\omega_{2n}$ for every $m\in M$. We denote
the set of all such maps by $\mathfrak{J}_{n}(M)$.
###### Proposition A.7.
A matrix-valued map
$\mathcal{J}\in\mathcal{C}^{\infty}(M,\mathrm{GL}(2n,\mathbb{R}))$ can be
written as:
$\mathcal{J}=\begin{pmatrix}\mathcal{I}^{-1}{\mathcal{R}}&\mathcal{I}^{-1}\\\
-\mathcal{I}-{\mathcal{R}}\mathcal{I}^{-1}{\mathcal{R}}&-{\mathcal{R}}\mathcal{I}^{-1}\end{pmatrix}\,,$
(72)
in terms of a period matrix map
$\mathcal{N}={\mathcal{R}}+\mathbf{i}\mathcal{I}\in\mathrm{Per}_{n}(M)$ iff
$\mathcal{J}\in\mathfrak{J}_{n}(M)$.
###### Proof.
If $\mathcal{J}$ is taken as in equation (72) for a period matrix map
$\mathcal{N}$ then direct computation shows that $\mathcal{J}(m)$ is a taming
of $\omega_{2n}$ for all $m\in M$. For the converse, assume that
$\mathcal{J}(m)\in\mathrm{GL}(2n,\mathbb{R})$ is a taming of $\omega_{2n}$ for
all $m\in M$ (we omit to indicate the evaluation at $m$ for ease of notation).
Let $\mathcal{E}=(e_{1},\ldots,e_{n},f_{1},\ldots,f_{n})$ be the canonical basis
of $\mathbb{R}^{2n}$. The vectors $\mathcal{E}_{f}=(f_{1},\ldots,f_{n})$ form
a basis over $\mathbb{C}$ of the complex vector space
$(\mathbb{R}^{2n},\mathcal{J}(m))\simeq\mathbb{C}^{n}$, hence there exists a
unique map $\tau\in\mathcal{C}^{\infty}(M,\mathrm{Mat}(n,\mathbb{C}))$ which
satisfies:
$e_{a}=\tau(m)_{ab}\,f_{b}\,,\quad\forall\,\,a=1,\ldots,n\,,$ (73)
where we use Einstein summation over repeated indices. Thus:
$\delta_{ab}=\omega_{2n}(e_{a},f_{b})=\omega_{2n}(\tau(m)_{ac}\,f_{c},f_{b})=\mathrm{Im}(\tau(m))_{ac}\,\omega_{2n}(\mathcal{J}(m)f_{c},f_{b})~{}~{},$
which implies that $\mathrm{Im}(\tau(m))$ is symmetric and positive-definite.
Using the previous equation and compatibility of $\mathcal{J}(m)$ with
$\omega_{2n}$, we compute:
$0=\omega_{2n}(e_{a},e_{b})=\mathrm{Re}(\tau(m))_{ba}-\mathrm{Im}(\tau(m))_{bc}\,\omega_{2n}(\mathcal{J}(m)(e_{a}),f_{c})=\mathrm{Re}(\tau(m))_{ba}-\mathrm{Re}(\tau(m))_{ab}\,,$
which shows that $\mathrm{Re}(\tau(m))$ is symmetric. Hence the smooth map
$\mathcal{N}\in\mathcal{C}^{\infty}(M,\mathbb{SH}_{n})$ defined through
$\mathcal{N}={\mathcal{R}}+\mathbf{i}\mathcal{I}$, where:
${\mathcal{R}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Re}(\tau)~{}~{},\qquad\mathcal{I}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Im}(\tau)\,,$
is a period matrix map. Equation (73) gives:
$\displaystyle\mathcal{J}(m)(e_{a})={\mathcal{R}}(m)_{ab}\mathcal{I}^{-1}(m)_{bc}\,e_{c}-{\mathcal{R}}(m)_{ab}\mathcal{I}(m)^{-1}_{bc}{\mathcal{R}}(m)_{cd}\,f_{d}-\mathcal{I}(m)_{ad}\,f_{d}$
$\displaystyle\mathcal{J}(m)(f_{a})=\mathcal{I}(m)^{-1}_{ab}\,e_{b}-\mathcal{I}(m)^{-1}_{ab}{\mathcal{R}}(m)_{bc}\,f_{c}\,,$
which is equivalent to (72). ∎
###### Proposition A.8.
The map $\Theta\colon\mathrm{Per}_{n}(M)\to\mathfrak{J}_{n}(M)$ defined
through:
$\mathrm{Per}_{n}(M)\ni\mathcal{N}={\mathcal{R}}+\mathbf{i}\mathcal{I}\mapsto\Theta(\mathcal{N})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\begin{pmatrix}\mathcal{I}^{-1}{\mathcal{R}}&\mathcal{I}^{-1}\\\
-\mathcal{I}-{\mathcal{R}}\mathcal{I}^{-1}{\mathcal{R}}&-{\mathcal{R}}\mathcal{I}^{-1}\end{pmatrix}\in\mathfrak{J}_{n}(M)$
is a bijection between $\mathrm{Per}_{n}(M)$ and $\mathfrak{J}_{n}(M)$.
###### Proof.
Follows directly from the proof of Proposition A.7. The inverse of $\Theta$
takes a taming map $\mathcal{J}\in\mathfrak{J}_{n}(M)$ to the period matrix
$\Theta^{-1}(\mathcal{J})=\mathrm{Re}(\tau)+\mathbf{i}\,\mathrm{Im}(\tau)$,
where, for all $m\in M$, $\tau(m)$ is the complex symmetric matrix of size $n$
uniquely determined by the expansion $e_{a}=\tau_{ab}f_{b}$ of $e_{a}$ over
$\mathbb{C}$ when $\mathbb{R}^{2n}$ is endowed with the complex structure
$\mathcal{J}(m)$. ∎
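The bijection of Proposition A.8 can be illustrated by a numeric round trip: reading $\mathcal{I}^{-1}$ off the upper-right block of $\Theta(\mathcal{N})$ and then ${\mathcal{R}}$ from the upper-left block recovers the period matrix. The helper names below are hypothetical and the sampled ${\mathcal{R}},\mathcal{I}$ are illustrative assumptions:

```python
import numpy as np

def Theta(R, I):
    """Taming matrix of equation (72) built from N = R + iI."""
    Iinv = np.linalg.inv(I)
    return np.block([[Iinv @ R,           Iinv],
                     [-I - R @ Iinv @ R, -R @ Iinv]])

def Theta_inv(J):
    """Recover (R, I) from the blocks of (72)."""
    n = J.shape[0] // 2
    I = np.linalg.inv(J[:n, n:])   # upper-right block is I^{-1}
    R = I @ J[:n, :n]              # upper-left block is I^{-1} R
    return R, I

rng = np.random.default_rng(1)
n = 2
A = rng.standard_normal((n, n)); R = (A + A.T) / 2               # symmetric R
Bm = rng.standard_normal((n, n)); I = Bm @ Bm.T + n * np.eye(n)  # positive-definite I

R2, I2 = Theta_inv(Theta(R, I))
assert np.allclose(R2, R) and np.allclose(I2, I)
```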
Since $M$ is contractible, we have
$\Omega^{2}_{\mathrm{cl}}(M,\mathbb{R}^{2n})=\Omega^{2}_{\mathrm{ex}}(M,\mathbb{R}^{2n})$.
By the discussion above, this implies that local abelian gauge theory can be
formulated equivalently as a theory of closed $\mathbb{R}^{2n}$-valued two-
forms $\mathcal{V}\in\Omega^{2}_{\mathrm{cl}}(M,\mathbb{R}^{2n})$ satisfying
the condition:
$\ast_{g}\mathcal{V}=-\mathcal{J}\mathcal{V}$
with respect to a fixed taming map $\mathcal{J}\in\mathfrak{J}_{n}(M)$.
Consequently, the theory is uniquely determined by the choice of taming map.
The condition $\mathrm{d}\mathcal{V}=0$ is equivalent with
$\mathcal{V}=\mathrm{d}\mathcal{A}$, where
$\mathcal{A}\in\Omega^{1}(M,\mathbb{R}^{2n})$ is considered modulo gauge
transformations $\mathcal{A}\mapsto\mathcal{A}+\mathrm{d}\alpha$ with
$\alpha\in\mathcal{C}^{\infty}(M,\mathbb{R}^{2n})$. The map
$[\mathcal{A}]\mapsto\mathrm{d}\mathcal{A}$ gives a well-defined bijection
$\Omega^{1}(M,\mathbb{R}^{2n})/\Omega^{1}_{\mathrm{cl}}(M,\mathbb{R}^{2n})\xrightarrow{\sim}\Omega^{2}_{\mathrm{cl}}(M,\mathbb{R}^{2n})$.
Thus we can formulate classical local abelian gauge theory either in terms of
_electromagnetic gauge potentials_
$\mathcal{A}\in\Omega^{1}(M,\mathbb{R}^{2n})$ taken modulo gauge-equivalence
or in terms of electromagnetic field strengths
$\mathcal{V}\in\Omega^{2}_{\mathrm{cl}}(M,\mathbb{R}^{2n})$.
###### Definition A.9.
Let $\mathcal{J}\in\mathfrak{J}_{n}(M)$ be a taming map. The space of
electromagnetic gauge configurations of the $\mathrm{U}(1)^{n}$ local abelian
gauge theory is $\Omega^{1}(M,\mathbb{R}^{2n})$.
Two gauge configurations are called gauge equivalent if they differ by an
exact one-form. The theory is defined by the polarized self-duality condition
for $\mathcal{A}\in\Omega^{1}(M,\mathbb{R}^{2n})$:
$\ast_{g}\mathcal{V}_{\mathcal{A}}=-\mathcal{J}\mathcal{V}_{\mathcal{A}}~{}~{},~{}~{}\mathrm{where}\quad\mathcal{V}_{\mathcal{A}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{d}\mathcal{A}\,.$ (74)
The space of electromagnetic gauge fields (or electromagnetic gauge
potentials) of the theory is the linear subspace of
$\Omega^{1}(M,\mathbb{R}^{2n})$ consisting of those elements which satisfy
(74):
$\mathfrak{Sol}_{n}(M,g,\mathcal{J})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{\mathcal{A}\in\Omega^{1}(M,\mathbb{R}^{2n})\,\,|\,\,\ast_{g}\mathcal{V}_{\mathcal{A}}=-\mathcal{J}\mathcal{V}_{\mathcal{A}}\right\\}\,.$
Elements $\mathcal{A}\in\Omega^{1}(M,\mathbb{R}^{2n})$ are called
(electromagnetic) _gauge potentials_ or _gauge fields_. The space of field
strength configurations is the vector space:
$\mathrm{Conf}_{n}(M)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\Omega^{2}_{\mathrm{cl}}(M,\mathbb{R}^{2n})~{}~{},$
while the space of field strengths is defined through:
$\mathrm{Sol}_{n}(M,g,\mathcal{J})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{\mathcal{V}\in\mathrm{Conf}_{n}(M)~{}|~{}\ast_{g}\mathcal{V}=-\mathcal{J}\mathcal{V}\\}~{}~{}.$
The map $[\mathcal{A}]\mapsto\mathrm{d}\mathcal{A}$ gives a bijection
$\Omega^{1}(M,\mathbb{R}^{2n})/\Omega^{1}_{\mathrm{cl}}(M,\mathbb{R}^{2n})\xrightarrow{\sim}\mathrm{Conf}_{n}(M)$,
which restricts to a bijection
$\mathfrak{Sol}_{n}(M,g,\mathcal{J})/\Omega^{1}_{\mathrm{cl}}(M,\mathbb{R}^{2n})\xrightarrow{\sim}\mathrm{Sol}_{n}(M,g,\mathcal{J})$.
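To see what the polarized self-duality condition amounts to in the simplest case, consider $n=1$ together with the constant taming map given by the standard complex structure of $\mathbb{R}^{2}$ (an illustrative choice made only for this example). Writing $\mathcal{V}=(F^{1},F^{2})$, the condition $\ast_{g}\mathcal{V}=-\mathcal{J}\mathcal{V}$ becomes:

```latex
% Illustrative case n = 1 with J the standard complex structure of R^2,
% i.e. J e_1 = e_2, J e_2 = -e_1, so that J(F^1, F^2) = (-F^2, F^1):
\ast_{g} F^{1} = F^{2}\,, \qquad \ast_{g} F^{2} = -F^{1}\,.
% The two equations are mutually consistent because the Hodge operator of a
% Lorentzian four-manifold squares to minus the identity on two-forms.
% Together with d V = 0 they give dF^1 = 0 and d(\ast_g F^1) = 0, i.e. the
% vacuum Maxwell equations for the single field strength F^1, with
% F^2 = \ast_g F^1 playing the role of the dual field strength.
```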
### A.2. Duality groups
Let $\mathrm{Diff}(M)$ be the group of orientation-preserving diffeomorphisms
of $M$ and $\mathcal{J}\in\mathfrak{J}_{n}(M)$ be a taming map of rank $2n$
defined on $M$. For
$(\gamma,f)\in\mathrm{GL}(2n,\mathbb{R})\times\mathrm{Diff}(M)$, consider the
linear isomorphism:
$\mathbb{A}_{\gamma,f}\colon\Omega^{k}(M,\mathbb{R}^{2n})\xrightarrow{\sim}\Omega^{k}(M,\mathbb{R}^{2n})\,,\quad\omega\mapsto\gamma(f_{\ast}\omega)\,,$
(75)
where
$f_{\ast}\colon\Omega^{k}(M,\mathbb{R}^{2n})\to\Omega^{k}(M,\mathbb{R}^{2n})$
is the push-forward through the diffeomorphism $f$. This gives a linear action
of $\mathrm{GL}(2n,\mathbb{R})\times\mathrm{Diff}(M)$ on
$\Omega^{k}(M,\mathbb{R}^{2n})$. Since this action commutes with the exterior
derivative, it preserves the space $\mathrm{Conf}_{n}(M)$ of field strength
configurations.
For any $\gamma\in\mathrm{Sp}(2n,\mathbb{R})$, the map:
$\mathcal{J}_{\gamma,f}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\gamma(\mathcal{J}\circ f^{-1})\gamma^{-1}\,,$
is a taming map. This gives an action $\mu$ of
$\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)$ on $\mathfrak{J}_{n}(M)$
defined through:
$\mu(\gamma,f)(\mathcal{J})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathcal{J}_{\gamma,f}\,,\quad\forall\,\,(\gamma,f)\in\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)\,.$
###### Proposition A.10.
For every $(\gamma,f)\in\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)$, the
map $\mathbb{A}_{\gamma,f}$ induces by restriction a linear isomorphism:
$\mathbb{A}_{\gamma,f}\colon\mathrm{Sol}_{n}(M,g,\mathcal{J})\xrightarrow{\sim}\mathrm{Sol}_{n}(M,f_{\ast}(g),\mathcal{J}_{\gamma,f})~{}~{},$
where $f_{\ast}(g)$ denotes the push-forward of $g$ by $f\in\mathrm{Diff}(M)$.
###### Remark A.11.
If we consider a pair
$(\gamma,f)\in\mathrm{GL}(2n,\mathbb{R})\times\mathrm{Diff}(M)$ with
$\gamma\not\in\mathrm{Sp}(2n,\mathbb{R})$, then $\mathcal{J}_{\gamma,f}$ is
not a taming map, so it does not define a local abelian gauge theory. From a
different point of view, such a transformation would not preserve the energy
momentum tensor of the theory and its Lagrangian formulation. See [41] and
references therein for more details about this point.
###### Proof.
For any $\mathcal{V}\in\Omega^{2}_{\mathrm{cl}}(M,\mathbb{R}^{2n})$, we have:
$\ast_{g}\mathcal{V}=-\mathcal{J}\mathcal{V}\Longleftrightarrow\mathbb{A}_{\gamma,f}(\ast_{g}\mathcal{V})=-\mathbb{A}_{\gamma,f}(\mathcal{J}\mathcal{V})\Longleftrightarrow\ast_{f_{\ast}(g)}(\mathbb{A}_{\gamma,f}\mathcal{V})=-\mathcal{J}_{\gamma,f}\mathbb{A}_{\gamma,f}(\mathcal{V})~{}~{}.$
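For readability, the second equivalence can be unpacked as follows, using that the push-forward $f_{\ast}$ intertwines the Hodge operators of $g$ and $f_{\ast}(g)$, that the constant matrix $\gamma$ commutes with both $\ast$ and $f_{\ast}$, and that $f_{\ast}$ acts on the matrix-valued function $\mathcal{J}$ through composition with $f^{-1}$:

```latex
\mathbb{A}_{\gamma,f}(\ast_{g}\mathcal{V})
  =\gamma\, f_{\ast}(\ast_{g}\mathcal{V})
  =\gamma \ast_{f_{\ast}(g)} (f_{\ast}\mathcal{V})
  =\ast_{f_{\ast}(g)}\big(\mathbb{A}_{\gamma,f}\mathcal{V}\big)
\quad\text{and}\quad
\mathbb{A}_{\gamma,f}(\mathcal{J}\mathcal{V})
  =\gamma\,(\mathcal{J}\circ f^{-1})\,(f_{\ast}\mathcal{V})
  =\gamma\,(\mathcal{J}\circ f^{-1})\,\gamma^{-1}\,\mathbb{A}_{\gamma,f}(\mathcal{V})
  =\mathcal{J}_{\gamma,f}\,\mathbb{A}_{\gamma,f}(\mathcal{V})\,.
```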
∎
Consider the infinite rank vector bundle with total space:
$\mathrm{Sol}_{n}(M)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\prod_{(g,\mathcal{J})\in\mathrm{Met}_{3,1}(M)\times\mathfrak{J}_{n}(M)}\mathrm{Sol}_{n}(M,g,\mathcal{J})~{}~{},$
and infinite-dimensional base $B_{n}(M)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Met}_{3,1}(M)\times\mathfrak{J}_{n}(M)$, with the natural
projection. Let $\sigma$ be the action of
$\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)$ on $B_{n}(M)$ defined
through $\sigma=f_{\ast}\times\mu$, i.e.:
$\sigma(\gamma,f)(g,\mathcal{J})=(f_{\ast}(g),\mathcal{J}_{\gamma,f})~{}~{}.$
Then Proposition A.10 shows that the restriction of $\mathbb{A}$ gives a
linearization of $\sigma$ on the vector bundle $\mathrm{Sol}_{n}(M)$. In
particular, each fiber of $\mathrm{Sol}_{n}(M)$ carries a linear
representation of the isotropy group of the corresponding point in the base.
Let ${\rm Iso}(M,g)$ be the group of orientation-preserving isometries of
$(M,g)$. Then:
$\mathrm{Stab}_{\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)}(g,\mathcal{J})=\\{(\gamma,f)\in\mathrm{Sp}(2n,\mathbb{R})\times{\rm
Iso}(M,g)~{}|~{}\mathcal{J}_{\gamma,f}=\mathcal{J}\\}$
and we have:
###### Corollary A.12.
Let $(\gamma,f)\in\mathrm{Sp}(2n,\mathbb{R})\times{\rm Iso}(M,g)$ such that
$\mathcal{J}_{\gamma,f}=\mathcal{J}$, i.e. $\mathcal{J}\circ
f=\gamma\mathcal{J}\gamma^{-1}$. Then $\mathbb{A}_{\gamma,f}$ is a linear
automorphism of $\mathrm{Sol}_{n}(M,g,\mathcal{J})$.
###### Definition A.13.
Let $\mathcal{J}\in\mathfrak{J}_{n}(M)$ be a taming map and $g$ be a
Lorentzian metric on $M$.
* •
The group $\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)$ is called the
unbased pseudo-duality group. The linear isomorphism:
$\mathbb{A}_{\gamma,f}\colon\mathrm{Sol}_{n}(M,g,\mathcal{J})\xrightarrow{\sim}\mathrm{Sol}_{n}(M,f_{\ast}(g),\mathcal{J}_{\gamma,f})\,,$
induced by an element of this group is called an unbased pseudo-duality
transformation.
* •
The group $\mathrm{Sp}(2n,\mathbb{R})\times{\rm Iso}(M,g)$ is called the
unbased duality group. The linear isomorphism:
$\mathbb{A}_{\gamma,f}\colon\mathrm{Sol}_{n}(M,g,\mathcal{J})\xrightarrow{\sim}\mathrm{Sol}_{n}(M,g,\mathcal{J}_{\gamma,f})\,,$
induced by an element $(\gamma,f)$ of this group is called an unbased duality
transformation.
* •
The group $\mathrm{Sp}(2n,\mathbb{R})$ is called the duality group. The linear
isomorphism:
$\mathbb{A}_{\gamma,\mathrm{id}_{M}}\colon\mathrm{Sol}_{n}(M,g,\mathcal{J})\xrightarrow{\sim}\mathrm{Sol}_{n}(M,g,\mathcal{J}_{\gamma})\,,$
where $\mathcal{J}_{\gamma}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathcal{J}_{\gamma,\mathrm{id}_{M}}=\gamma\mathcal{J}\gamma^{-1}$,
is called a classical duality transformation.
###### Definition A.14.
Let $\mathcal{J}\in\mathfrak{J}_{n}(M)$ be a taming map and $g$ be a
Lorentzian metric on $M$.
* •
The stabilizer:
$\mathcal{U}(M,\mathcal{J})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{(\gamma,f)\in\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)\,\,|~{}\mathcal{J}\circ
f=\gamma\mathcal{J}\gamma^{-1}\right\\}\,,$ (76)
of $\mathcal{J}$ in $\mathrm{Sp}(2n,\mathbb{R})\times\mathrm{Diff}(M)$ with
respect to the representation $\mu$ is called the unbased unitary pseudo-
duality group. The linear isomorphism:
$\mathbb{A}_{\gamma,f}\colon\mathrm{Sol}_{n}(M,g,\mathcal{J})\xrightarrow{\sim}\mathrm{Sol}_{n}(M,f_{\ast}(g),\mathcal{J})\,,$
induced by an element of this group is called an unbased unitary pseudo-
duality transformation.
* •
The stabilizer:
$\mathcal{U}(M,g,\mathcal{J})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{(\gamma,f)\in\mathrm{Sp}(2n,\mathbb{R})\times{\rm
Iso}(M,g)\,\,|~{}\mathcal{J}\circ f=\gamma\mathcal{J}\gamma^{-1}\right\\}\,,$
(77)
of $\mathcal{J}$ in $\mathrm{Sp}(2n,\mathbb{R})\times{\rm Iso}(M,g)$ with
respect to the representation $\mu$ is called the unbased unitary duality
group. The linear isomorphism:
$\mathbb{A}_{\gamma,f}\colon\mathrm{Sol}_{n}(M,g,\mathcal{J})\xrightarrow{\sim}\mathrm{Sol}_{n}(M,g,\mathcal{J})\,,$
induced by an element of this group is called an unbased unitary duality
transformation.
* •
The stabilizer:
$\mathrm{U}_{\mathcal{J}}(n)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{\gamma\in\mathrm{Sp}(2n,\mathbb{R})\,\,|\,\,\gamma\mathcal{J}\gamma^{-1}=\mathcal{J}\right\\}\,,$
of $\mathcal{J}$ in $\mathrm{Sp}(2n,\mathbb{R})$ with respect to the action
$\mathcal{J}\mapsto\gamma\mathcal{J}\gamma^{-1}$ is called the unitary
duality group. The linear automorphism:
$\mathbb{A}_{\gamma,\mathrm{id}_{M}}\colon\mathrm{Sol}_{n}(M,g,\mathcal{J})\xrightarrow{\sim}\mathrm{Sol}_{n}(M,g,\mathcal{J})\,$
of $\mathrm{Sol}_{n}(M,g,\mathcal{J})$ induced by an element of this group is
called a unitary duality transformation.
We have inclusions:
$\mathrm{U}_{\mathcal{J}}(n)\subset\mathcal{U}(M,g,\mathcal{J})\subset\mathcal{U}(M,\mathcal{J})$
and short exact sequences:
$\displaystyle
1\to\mathrm{U}_{\mathcal{J}}(n)\to\mathcal{U}(M,g,\mathcal{J})\to{\rm
Iso}_{\mathcal{J}}(M,g)\to 1\,,$ (78) $\displaystyle 1\to{\rm
Iso}(M,g)_{\mathcal{J}}\to\mathcal{U}(M,g,\mathcal{J})\to\mathrm{Sp}_{\mathcal{J}}(2n,\mathbb{R})\to
1\,,$ (79)
where ${\rm Iso}_{\mathcal{J}}(M,g)$ is the subgroup of those $f\in{\rm
Iso}(M,g)$ for which there exists $\gamma\in\mathrm{Sp}(2n,\mathbb{R})$ such
that $\mathcal{J}\circ f=\gamma\mathcal{J}\gamma^{-1}$, while
$\mathrm{Sp}_{\mathcal{J}}(2n,\mathbb{R})$ is the subgroup of those
$\gamma\in\mathrm{Sp}(2n,\mathbb{R})$ for which there exists $f\in{\rm
Iso}(M,g)$ such that $\mathcal{J}\circ f=\gamma\mathcal{J}\gamma^{-1}$.
Finally, the group:
${\rm Iso}(M,g)_{\mathcal{J}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{f\in{\rm Iso}(M,g)~{}|~{}\mathcal{J}\circ f=\mathcal{J}\\}$
is the stabilizer of $\mathcal{J}$ in ${\rm Iso}(M,g)$. In particular, we
have:
###### Corollary A.15.
If $\mathrm{U}_{\mathcal{J}}(n)=1$ then $\mathcal{U}(M,g,\mathcal{J})={\rm
Iso}_{\mathcal{J}}(M,g)$. If ${\rm Iso}(M,g)_{\mathcal{J}}=1$ then
$\mathcal{U}(M,g,\mathcal{J})=\mathrm{Sp}_{\mathcal{J}}(2n,\mathbb{R})$.
### A.3. Gluing local abelian gauge theories
Let now $M$ be an arbitrary oriented manifold admitting Lorentzian metrics.
Let $\mathcal{U}=(U_{\alpha})_{\alpha\in I}$ be a good open cover of $M$,
where $I$ is an index set. Denote by $g_{\alpha}$ the restriction of $g$ to
$U_{\alpha}$. Roughly speaking, the definition of abelian gauge theory on
$(M,g)$ given in Section 1 is the result of gluing the local
$\mathrm{U}(1)^{n}$ abelian gauge theories defined on the contractible
Lorentzian four-manifolds $(U_{\alpha},g_{\alpha})$ using electromagnetic
dualities. In order to implement this idea, we choose a locally constant
$\mathrm{Sp}(2n,\mathbb{R})$-valued Čech cocycle for $\mathcal{U}$:
$u_{\alpha\beta}\colon U_{\alpha}\cap U_{\beta}\to\mathrm{Sp}(2n,\mathbb{R})$
and a family of taming maps
$\mathcal{J}_{\alpha}\in\mathfrak{J}_{n}(U_{\alpha})$ for $\omega_{2n}$ such that:
$\mathcal{J}_{\beta}=u_{\alpha\beta}\,\mathcal{J}_{\alpha}u^{-1}_{\alpha\beta}$
on double overlaps. The collection:
$\left\\{(U_{\alpha})_{\alpha\in I},(g_{\alpha})_{\alpha\in
I},(\mathcal{J}_{\alpha})_{\alpha\in I},(u_{\alpha\beta})_{\alpha,\beta\in
I}\right\\}\,,$
is equivalent to a flat symplectic vector bundle
$(\mathcal{S},\omega,\mathcal{D})$ with symplectic form $\omega$ and
symplectic flat connection $\mathcal{D}$, equipped with an almost complex
structure $\mathcal{J}$ which is a taming of $\omega$. A family
$(\mathcal{V}_{\alpha})_{\alpha\in I}$ of solutions of the local abelian gauge
theories defined by $(\mathcal{J}_{\alpha})_{\alpha\in I}$ on
$(U_{\alpha},g_{\alpha})$ which satisfies:
$\mathcal{V}_{\beta}=u_{\alpha\beta}\mathcal{V}_{\alpha}$
corresponds to an $\mathcal{S}$-valued two-form
$\mathcal{V}\in\Omega^{2}(M,\mathcal{S})$ which obeys:
$\mathrm{d}_{\mathcal{D}}\mathcal{V}=0\,,$
where
$\mathrm{d}_{\mathcal{D}}\colon\Omega^{\ast}(M,\mathcal{S})\to\Omega^{\ast}(M,\mathcal{S})$
is the exterior differential twisted by $\mathcal{D}$. This construction
motivates the global geometric model introduced in [32] and further elaborated
in Section 1.
## Appendix B Integral symplectic spaces and integral symplectic tori
This appendix recalls some notions from the theory of symplectic lattices and
symplectic tori which are used throughout the paper. We also introduce the
notion of integral symplectic torus.
###### Definition B.1.
An integral symplectic space is a triple $(V,\omega,\Lambda)$ such that:
* •
$(V,\omega)$ is a finite-dimensional symplectic vector space over
$\mathbb{R}$.
* •
$\Lambda\subset V$ is a full lattice in $V$, i.e. a lattice in $V$ such that
$V=\Lambda\otimes_{\mathbb{Z}}\mathbb{R}$.
* •
$\omega$ is integral with respect to $\Lambda$, i.e. we have
$\omega(\Lambda,\Lambda)\subset\mathbb{Z}$.
An isomorphism of integral symplectic spaces
$f\colon(V_{1},\omega_{1},\Lambda_{1})\to(V_{2},\omega_{2},\Lambda_{2})$ is a
bijective symplectomorphism from $(V_{1},\omega_{1})$ to $(V_{2},\omega_{2})$
which satisfies:
$f(\Lambda_{1})=\Lambda_{2}\,.$
Denote by $\mathrm{Symp}_{\mathbb{Z}}$ the groupoid of integral symplectic
spaces and isomorphisms of such. Let $\mathrm{Aut}(V)$ be the group of linear
automorphisms of the vector space $V$ and
$\mathrm{Sp}(V,\omega)\subset\mathrm{Aut}(V)$ be the subgroup of symplectic
transformations. Then the automorphism group of the integral symplectic space
$(V,\omega,\Lambda)$ is denoted by:
$\mathrm{Sp}(V,\omega,\Lambda)=\left\\{T\in\mathrm{Sp}(V,\omega)\,\,|\,\,T(\Lambda)=\Lambda\right\\}\,.$
###### Definition B.2.
An _integral symplectic basis_ of a $2n$-dimensional integral symplectic space
$(V,\omega,\Lambda)$ is a basis
$\mathcal{E}=(\xi_{1},\ldots,\xi_{n},\zeta_{1},\ldots,\zeta_{n})$ of $\Lambda$
(as a free $\mathbb{Z}$-module) such that:
$\omega(\xi_{i},\xi_{j})=\omega(\zeta_{i},\zeta_{j})=0\,,\qquad\omega(\xi_{i},\zeta_{j})=t_{i}\delta_{ij}~{}~{},~{}~{}\omega(\zeta_{i},\xi_{j})=-t_{i}\delta_{ij}\,,\qquad\forall\,\,i,j=1,\ldots,n\,,$
where $t_{1},\ldots,t_{n}\in\mathbb{Z}$ are strictly positive integers
satisfying the divisibility conditions:
$t_{1}|t_{2}|\ldots|t_{n}\,.$
By the _elementary divisor theorem_ , see [19, Chapter VI], every integral
symplectic space admits an integral symplectic basis and the positive integers
$t_{1},\ldots,t_{n}$ (which are called the elementary divisors of
$(V,\omega,\Lambda)$) do not depend on the choice of such a basis. Define:
$\mathrm{Div}^{n}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{(t_{1},\ldots,t_{n})\in\mathbb{Z}_{>0}^{n}\,\,|\,\,t_{1}|t_{2}|\ldots|t_{n}\right\\}\,,$
and:
$\delta(n)\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(1,\ldots,1)\in\mathrm{Div}^{n}\,.$
Let $\leq$ be the partial order relation on $\mathrm{Div}^{n}$ defined
through:
$(t_{1},\ldots,t_{n})\leq(t^{\prime}_{1},\ldots,t^{\prime}_{n})~{}~{}\mathrm{iff}~{}~{}t_{i}|t^{\prime}_{i}~{}~{}\forall
i=1,\ldots,n~{}~{}.$
Then $\delta(n)$ is the least element of the ordered set
$(\mathrm{Div}^{n},\leq)$. Notice that this ordered set is directed, since any
two elements $t,t^{\prime}\in\mathrm{Div}^{n}$ have an upper bound given by
$(t_{1}t^{\prime}_{1},\ldots,t_{n}t^{\prime}_{n})$. In fact,
$(\mathrm{Div}^{n},\leq)$ is a lattice with join and meet given by:
$t\vee
t^{\prime}=(\mathrm{lcm}(t_{1},t^{\prime}_{1}),\ldots,\mathrm{lcm}(t_{n},t^{\prime}_{n}))~{}~{},~{}~{}t\wedge
t^{\prime}=(\mathrm{gcd}(t_{1},t^{\prime}_{1}),\ldots,\mathrm{gcd}(t_{n},t^{\prime}_{n}))~{}~{}.$
This lattice is semi-bounded from below with bottom element given by
$\delta(n)$ and it is complete for meets (i.e., it is a complete meet semi-
lattice).
###### Definition B.3.
The _type_ of an integral symplectic space $(V,\omega,\Lambda)$ is the ordered
system of elementary divisors of $(V,\omega,\Lambda)$, which we denote by:
$\mathfrak{t}(V,\omega,\Lambda)=(t_{1},\ldots,t_{n})\in\mathrm{Div}^{n}\,.$
The integral symplectic space $(V,\omega,\Lambda)$ is called _principal_ if:
$\mathfrak{t}(V,\omega,\Lambda)=\delta(n)\in\mathrm{Div}^{n}\,.$
Let $\omega_{2n}$ denote the standard symplectic pairing on
$\mathbb{R}^{2n}$.
###### Proposition B.4.
Two integral symplectic spaces have the same type if and only if they are
isomorphic. Moreover, every element of $\mathrm{Div}^{n}$ is the type of an
integral symplectic space. Hence the type induces a bijection between the set
of isomorphism classes of integral symplectic spaces and the set
$\mathrm{Div}^{n}$.
###### Proof.
The first statement is obvious. For the second statement, fix
$\mathfrak{t}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(t_{1},\ldots,t_{n})\in\mathrm{Div}^{n}$. Consider the full
lattice $\Lambda_{\mathfrak{t}}\subseteq\mathbb{R}^{2n}$ defined as follows:
$\Lambda_{\mathfrak{t}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{(l_{1},\ldots,l_{n},t_{1}l_{n+1},\ldots,t_{n}l_{2n})\,\,|\,\,l_{1},\ldots,l_{2n}\in\mathbb{Z}\right\\}\,.$
(80)
Then $(\mathbb{R}^{2n},\omega_{2n},\Lambda_{\mathfrak{t}})$ is an integral
symplectic space of type $\mathfrak{t}$. ∎
###### Definition B.5.
The lattice $\Lambda_{\mathfrak{t}}$ defined in (80) is called the _standard
symplectic lattice_ of type $\mathfrak{t}$ and
$(\mathbb{R}^{2n},\omega_{2n},\Lambda_{\mathfrak{t}})$ is called the standard
integral symplectic space of type $\mathfrak{t}$.
We have $\Lambda_{\delta(n)}=\mathbb{Z}^{2n}$. Moreover,
$\Lambda_{\mathfrak{t}}$ is a sub-lattice of $\mathbb{Z}^{2n}$ and we have
$\mathbb{Z}^{2n}/\Lambda_{\mathfrak{t}}\simeq\mathbb{Z}_{t_{1}}\times\ldots\times\mathbb{Z}_{t_{n}}$
for all $\mathfrak{t}\in\mathrm{Div}^{n}$. For
$\mathfrak{t},\mathfrak{t}^{\prime}\in\mathrm{Div}^{n}$, we have
$\Lambda_{\mathfrak{t}^{\prime}}\subset\Lambda_{\mathfrak{t}}$ if and only if
$\mathfrak{t}\leq\mathfrak{t}^{\prime}$. The lattice $\Lambda_{\mathfrak{t}}$
admits the basis:
$\displaystyle\xi_{1}=e_{1}=(1,0,\ldots,0),\ldots,\xi_{n}=e_{n}=(0,\ldots,0,1,0,\ldots,0)$
$\displaystyle\zeta_{1}=t_{1}f_{1}=(0,\ldots,0,t_{1},0,\ldots,0),\ldots,\zeta_{n}=t_{n}f_{n}=(0,\ldots,0,t_{n})~{}~{},$
in which the standard symplectic form of $\mathbb{R}^{2n}$ has coefficients:
$\displaystyle\omega_{2n}(\xi_{i},\xi_{j})=\omega_{2n}(\zeta_{i},\zeta_{j})=0~{}~{}$
$\displaystyle\omega_{2n}(\xi_{i},\zeta_{j})=t_{i}\delta_{ij}~{}~{},~{}~{}\omega_{2n}(\zeta_{i},\xi_{j})=-t_{i}\delta_{ij}~{}~{}.$
The isomorphism which takes $\xi_{i}$ to $e_{i}$ and $\zeta_{j}$ to $f_{j}$
identifies $(\mathbb{R}^{2n},\omega_{2n},\Lambda_{\mathfrak{t}})$ with the
integral symplectic space
$(\mathbb{R}^{2n},\omega_{\mathfrak{t}},\mathbb{Z}^{2n})$, where
$\omega_{\mathfrak{t}}$ is the symplectic pairing defined on $\mathbb{R}^{2n}$
by:
$\omega_{\mathfrak{t}}(e_{i},e_{j})=\omega_{\mathfrak{t}}(f_{i},f_{j})=0\,,\quad\omega_{\mathfrak{t}}(e_{i},f_{j})=t_{i}\delta_{ij}\,,\quad\omega_{\mathfrak{t}}(f_{i},e_{j})=-t_{i}\delta_{ij}\,,\quad\forall\,\,i,j=1,\ldots,n~{}~{}.$
(81)
Given $\mathfrak{t}=(t_{1},\ldots,t_{n})\in\mathrm{Div}^{n}$, consider the
diagonal $n\times n$ matrix:
$\mathrm{D}_{\mathfrak{t}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{diag}(t_{1},\ldots,t_{n})\in\mathrm{Mat}(n,\mathbb{Z})\,,$
as well as:
$\Gamma_{\mathfrak{t}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\begin{pmatrix}\mathrm{I}_{n}&0\\\
0&\mathrm{D}_{\mathfrak{t}}\end{pmatrix}\in\mathrm{Mat}(2n,\mathbb{Z})\,.$
###### Definition B.6.
The modified Siegel modular group of type $\mathfrak{t}\in\mathrm{Div}^{n}$ is
the subgroup of
$\mathrm{Aut}(\mathbb{R}^{2n},\omega_{2n})\simeq\mathrm{Sp}(2n,\mathbb{R})$
defined through:
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{T\in\mathrm{Aut}(\mathbb{R}^{2n},\omega_{2n})\,\,|\,\,T(\Lambda_{\mathfrak{t}})=\Lambda_{\mathfrak{t}}\right\\}\simeq\left\\{T\in\mathrm{Sp}(2n,\mathbb{R})\,\,|\,\,\Gamma_{\mathfrak{t}}^{-1}T\Gamma_{\mathfrak{t}}\in\mathrm{GL}(2n,\mathbb{Z})\right\\}\,.$
Since
$(\mathbb{R}^{2n},\omega_{2n},\Lambda_{\mathfrak{t}})\simeq(\mathbb{R}^{2n},\omega_{\mathfrak{t}},\mathbb{Z}^{2n})$,
we have
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\simeq\mathrm{Aut}(\mathbb{R}^{2n},\omega_{\mathfrak{t}},\mathbb{Z}^{2n})$.
Hence $\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ is a subgroup of
$\mathrm{GL}(2n,\mathbb{Z})$. The remarks above give:
###### Proposition B.7.
[32, Proposition F.12] Let $(V,\omega,\Lambda)$ be an integral symplectic
space of dimension $2n$. Any integral symplectic basis of this space induces
an isomorphism of integral symplectic spaces between $(V,\omega,\Lambda)$ and
$(\mathbb{R}^{2n},\omega_{2n},\Lambda_{\mathfrak{t}})$ as well as an
isomorphism of groups between $\mathrm{Sp}(V,\omega,\Lambda)$ and
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$.
We have $\mathrm{Sp}_{\delta(n)}(2n,\mathbb{Z})=\mathrm{Sp}(2n,\mathbb{Z})$
and
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\subseteq\mathrm{Sp}_{\mathfrak{t}^{\prime}}(2n,\mathbb{Z})$
when $\mathfrak{t}\leq\mathfrak{t}^{\prime}$. Hence
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$ forms a direct system of groups
and we have
$\mathrm{Sp}(2n,\mathbb{Z})\subset\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
for all $\mathfrak{t}\in\mathrm{Div}^{n}$. The direct limit
$\varinjlim_{\mathfrak{t}\in\mathrm{Div}^{n}}\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
identifies with the following subgroup of $\mathrm{Sp}(2n,\mathbb{R})$:
$\mathrm{Sp}_{\infty}(2n,\mathbb{Z})\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\\{T\in\mathrm{GL}(2n,\mathbb{R})\,\,|\,\,\exists\,\,\mathfrak{t}_{T}\in\mathrm{Div}^{n}:T\in\mathrm{Sp}_{\mathfrak{t}_{T}}(2n,\mathbb{Z})\\}\,,$
through the isomorphism of groups
$\varphi:\mathrm{Sp}_{\infty}(2n,\mathbb{Z})\rightarrow\varinjlim_{\mathfrak{t}\in\mathrm{Div}^{n}}\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
which sends $T\in\mathrm{Sp}_{\infty}(2n,\mathbb{Z})$ to the equivalence class
$[\alpha(T)]\in\varinjlim_{\mathfrak{t}\in\mathrm{Div}^{n}}\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
of the family
$\alpha(T)\in\sqcup_{\mathfrak{t}\in\mathrm{Div}^{n}}\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})$
defined through:
$\alpha(T)_{\mathfrak{t}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left\\{\begin{array}[]{ll}T&\mbox{ if
}\mathfrak{t}_{T}\leq\mathfrak{t}\\\ 1&\mbox{ if
}\mathfrak{t}_{T}\nleq\mathfrak{t}\end{array}\right.\,.$
Notice that
$\mathrm{Sp}(2n,\mathbb{Z})=\mathrm{Sp}_{\delta(n)}(2n,\mathbb{Z})$ is a
subgroup of $\mathrm{Sp}_{\infty}(2n,\mathbb{Z})$.
###### Definition B.8.
The type of an element $T\in\mathrm{Sp}_{\infty}(2n,\mathbb{Z})$ is defined as
the greatest lower bound $\mathfrak{t}(T)\in\mathrm{Div}^{n}$ of the set
$\\{\mathfrak{t}\in\mathrm{Div}^{n}\,|\,T\in\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\\}$.
Notice that the type of $T$ is well-defined and that we have
$T\in\mathrm{Sp}_{\mathfrak{t}(T)}(2n,\mathbb{Z})$.
### B.1. Integral symplectic tori
The following definition distinguishes between a few closely related notions.
###### Definition B.9.
A $d$-dimensional torus is a smooth manifold $\mathrm{T}$ diffeomorphic with
the standard $d$-torus $\mathrm{T}^{d}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}(\mathrm{S}^{1})^{d}$. A $d$-dimensional torus group is a compact
abelian Lie group $A$ which is isomorphic with the standard $d$-dimensional
torus group $\mathrm{U}(1)^{d}$ as a Lie group. A $d$-dimensional affine torus
is a principal homogeneous space $\mathbb{A}$ for a $d$-dimensional torus
group. The standard affine $d$-torus is the $d$-dimensional affine torus
$\mathbb{A}_{d}$ defined by the right action of $\mathrm{U}(1)^{d}$ on itself.
The underlying manifold of the standard $d$-dimensional torus group is the
standard $d$-torus while the underlying manifold of a $d$-dimensional torus
group is a $d$-torus. Moreover, any $d$-dimensional affine torus is
isomorphic with the standard affine $d$-torus. The transformations of an affine
torus given by the right action of its underlying group will be called
translations. Choosing a distinguished point in any affine torus makes it into
a torus group having that point as zero element. Conversely, any torus group
defines an affine torus obtained by ‘forgetting’ its zero element. The
singular homology and cohomology groups of a $d$-torus $\mathrm{T}$ are the
free abelian groups given by:
$\displaystyle
H_{k}(\mathrm{T},\mathbb{Z})\simeq\wedge^{k}H_{1}(\mathrm{T},\mathbb{Z})$
$\displaystyle
H^{k}(\mathrm{T},\mathbb{Z})\simeq\wedge^{k}H^{1}(\mathrm{T},\mathbb{Z})\simeq\wedge^{k}H_{1}(\mathrm{T},\mathbb{Z})^{\vee}$
for all $k=0,\ldots,d$, where $H_{1}(\mathrm{T},\mathbb{Z})\simeq
H^{1}(\mathrm{T},\mathbb{Z})\simeq\mathbb{Z}^{d}$. The underlying torus group
of any affine torus $\mathbb{A}$ is isomorphic with
$A\stackrel{{\scriptstyle{\rm
def.}}}{{=}}H_{1}(\mathbb{A},\mathbb{R})/H_{1}(\mathbb{A},\mathbb{Z})$. The
group of automorphisms of a $d$-dimensional torus group $A$ is given by:
$\mathrm{Aut}(A)=\mathrm{Aut}(H_{1}(A,\mathbb{R}),H_{1}(A,\mathbb{Z}))\simeq\mathrm{GL}(d,\mathbb{Z})\,.$
Note that the group of automorphisms of the $d$-dimensional affine torus is
isomorphic to
$\mathrm{U}(1)^{d}\rtimes\mathrm{Aut}(\mathrm{U}(1)^{d})\simeq\mathrm{U}(1)^{d}\rtimes\mathrm{GL}(d,\mathbb{Z})$,
where $\mathrm{GL}(d,\mathbb{Z})$ acts on $\mathrm{U}(1)^{d}$ through the
morphism of groups
$\rho\colon\mathrm{GL}(d,\mathbb{Z})\rightarrow\mathrm{Aut}(\mathrm{U}(1)^{d})$ given by:
$\rho(T)(\exp(2\pi\mathbf{i}x))=\exp(2\pi\mathbf{i}T(x))\,,\quad\forall\,\,T\in\mathrm{GL}(d,\mathbb{Z})\,,\quad\forall\,\,x\in\mathbb{R}^{d}~{}~{}.$
(82)
Here $\exp:(\mathbf{i}\mathbb{R})^{d}\rightarrow\mathrm{U}(1)^{d}$ is the exponential map
of $\mathrm{U}(1)^{d}$, which is given by:
$\exp(v)=(e^{v_{1}},\ldots,e^{v_{d}})\,,\quad\forall\,\,v=(v_{1},\ldots,v_{d})\in(\mathbf{i}\mathbb{R})^{d}\,.$
###### Definition B.10.
Let $\mathrm{T}$ be a torus of dimension at least two. A subtorus
$\mathrm{T}^{\prime}\subset\mathrm{T}$ is called primitive if
$H_{1}(\mathrm{T}^{\prime},\mathbb{Z})$ is a primitive sub-lattice of
$H_{1}(\mathrm{T},\mathbb{Z})$, i.e. if the abelian group
$H_{1}(\mathrm{T},\mathbb{Z})/H_{1}(\mathrm{T}^{\prime},\mathbb{Z})$ is
torsion-free.
###### Definition B.11.
A symplectic torus is a pair $\mathbf{T}=(\mathrm{T},\Omega)$, where
$\mathrm{T}$ is an even-dimensional torus and $\Omega$ is a symplectic form
defined on $\mathrm{T}$. A symplectic torus group is a pair
$\mathbf{A}=(A,\Omega)$, where $A$ is an even-dimensional torus group and
$\Omega$ is a symplectic form defined on the underlying torus which is
invariant under translations by all elements of $A$. An affine symplectic
torus is a pair $\mathbbold{A}=(\mathbb{A},\Omega)$, where $\mathbb{A}$ is an
even-dimensional affine torus and $\Omega$ is a symplectic form on
$\mathbb{A}$ which is invariant under translations.
###### Definition B.12.
A symplectic torus $\mathbf{T}=(\mathrm{T},\Omega)$ is called integral if the
symplectic area $\int_{T^{\prime}}\Omega$ of any of its primitive two-
dimensional subtori $T^{\prime}$ is an integer.
Let $(\mathrm{T},\Omega)$ be a symplectic torus. The cohomology class of
$\Omega$ is a non-degenerate element $\omega\in
H^{2}(\mathrm{T},\mathbb{R})\simeq\wedge^{2}H_{1}(\mathrm{T},\mathbb{R})^{\vee}$,
i.e. a symplectic pairing on the vector space $H_{1}(\mathrm{T},\mathbb{R})$.
The symplectic torus $(\mathrm{T},\Omega)$ is integral if and only if the triple
$(H_{1}(\mathrm{T},\mathbb{R}),\omega,H_{1}(\mathrm{T},\mathbb{Z}))$ is an
integral symplectic space. In this case, $\omega$ descends to a symplectic
form ${\hat{\Omega}}$ which makes
$H_{1}(\mathrm{T},\mathbb{R})/H_{1}(\mathrm{T},\mathbb{Z})$ into an integral
symplectic torus group. If $\mathbbold{A}=(\mathbb{A},\Omega)$ is an integral
affine symplectic torus, then $\Omega$ is determined by its cohomology class
$\omega$, hence $\mathbbold{A}$ can also be viewed as a pair
$(\mathbb{A},\omega)$ where $\mathbb{A}$ is an affine torus and $\omega$ is a
symplectic form on $H_{1}(\mathbb{A},\mathbb{R})$ which is integral for the
lattice $H_{1}(\mathbb{A},\mathbb{Z})$. In this case, any choice of a point in
$\mathbb{A}$ allows us to identify $\mathbbold{A}$ with the integral
symplectic torus group
$(H_{1}(\mathbb{A},\mathbb{R})/H_{1}(\mathbb{A},\mathbb{Z}),{\hat{\Omega}})$.
Let $(\mathbb{R}^{2n},\omega_{2n},\Lambda_{\mathfrak{t}})$ be the standard
integral symplectic space of type $\mathfrak{t}\in\mathrm{Div}^{n}$ and
$\Omega_{\mathfrak{t}}$ be the translationally invariant symplectic form
induced by $\omega_{2n}$ on the torus group
$\mathbb{R}^{2n}/\Lambda_{\mathfrak{t}}$. Then the symplectic torus group
$\left(\mathbb{R}^{2n}/\Lambda_{\mathfrak{t}},\Omega_{\mathfrak{t}}\right)$ is
integral.
###### Definition B.13.
The $2n$-dimensional standard integral symplectic torus group of type
$\mathfrak{t}\in\mathrm{Div}^{n}$ is:
$\mathbf{A}_{\mathfrak{t}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\left(\mathbb{R}^{2n}/\Lambda_{\mathfrak{t}},\Omega_{\mathfrak{t}}\right)~{}~{}.$
The integral affine symplectic torus $\mathbbold{A}_{\mathfrak{t}}$ obtained
from $\mathbf{A}_{\mathfrak{t}}$ by forgetting the origin is the standard
integral affine symplectic torus of type $\mathfrak{t}$.
Note that every integral affine symplectic torus $\mathbbold{A}$ is affinely
symplectomorphic to a standard affine symplectic torus
$\mathbbold{A}_{\mathfrak{t}}$, whose type $\mathfrak{t}$ is uniquely
determined and called the type of $\mathbbold{A}$. Similarly, every integral
symplectic torus group $\mathbf{A}$ is isomorphic with a standard integral
symplectic torus group $\mathbf{A}_{\mathfrak{t}}$ whose type $\mathfrak{t}$
is uniquely determined by $\mathbf{A}$ and called the type of $\mathbf{A}$.
The group of automorphisms of $\mathbf{A}_{\mathfrak{t}}$ for
$\mathfrak{t}\in\mathrm{Div}^{n}$ is given by:
$\mathrm{Aut}(\mathbf{A}_{\mathfrak{t}})=\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})~{}~{},$
while the group of automorphisms of an integral affine symplectic torus
$\mathbbold{A}_{\mathfrak{t}}=(\mathbb{A}_{\mathfrak{t}},\Omega)$ of type
$\mathfrak{t}\in\mathrm{Div}^{n}$ is:
$\mathrm{Aut}(\mathbbold{A}_{\mathfrak{t}})=A\rtimes\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})~{}~{},$
where $A=H_{1}(\mathbb{A},\mathbb{R})/H_{1}(\mathbb{A},\mathbb{Z})$ is the
underlying torus group of $\mathbb{A}$ and the action of
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\subset\mathrm{GL}(2n,\mathbb{Z})\simeq\mathrm{Aut}(A)$
on $A$ coincides with that induced from the action on
$H_{1}(\mathbb{A},\mathbb{R})\simeq\mathbb{R}^{2n}$. We denote by
$\mathrm{Aff}_{\mathfrak{t}}\stackrel{{\scriptstyle{\rm
def.}}}{{=}}\mathrm{Aut}(\mathbbold{A}_{\mathfrak{t}})$ the automorphism group
of the integral affine symplectic torus of type $\mathfrak{t}$. We have:
$\mathrm{Aff}_{\mathfrak{t}}=\mathrm{U}(1)^{2n}\rtimes\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})~{}~{},$
where
$\mathrm{Sp}_{\mathfrak{t}}(2n,\mathbb{Z})\subset\mathrm{GL}(2n,\mathbb{Z})$
acts on $\mathrm{U}(1)^{2n}$ through the restriction of (82).
###### Definition B.14.
Let $\mathbf{A}=(A,\Omega)$ and
$\mathbf{A}^{\prime}=(A^{\prime},\Omega^{\prime})$ be two integral symplectic
torus groups. A symplectic isogeny from $\mathbf{A}$ to $\mathbf{A}^{\prime}$
is a surjective morphism of groups
$f:\mathbf{A}\rightarrow\mathbf{A}^{\prime}$ with finite kernel such that
$f^{\ast}(\Omega^{\prime})=\Omega$.
The following statement is immediate.
###### Proposition B.15.
Let $\mathfrak{t},\mathfrak{t}^{\prime}\in\mathrm{Div}^{n}$ be such that
$\mathfrak{t}\leq\mathfrak{t}^{\prime}$, namely $t^{\prime}_{i}=q_{i}t_{i}$
(where $q_{i}\in\mathbb{Z}_{>0}$) for all $i=1,\ldots,n$. Then the map
$f:\mathbf{A}_{\mathfrak{t}^{\prime}}\rightarrow\mathbf{A}_{\mathfrak{t}}$
defined through:
$f(x+\Lambda_{\mathfrak{t}^{\prime}})=x+\Lambda_{\mathfrak{t}}~{}~{}\forall
x\in\mathbb{R}^{2n}$
is a symplectic isogeny whose kernel is given by:
$\ker(f)\simeq\mathbb{Z}_{q_{1}}\times\ldots\times\mathbb{Z}_{q_{n}}~{}~{}.$
In particular, $\mathbf{A}_{\mathfrak{t}}$ is isogenous with
$\mathbf{A}_{\delta(n)}$ for all $\mathfrak{t}\in\mathrm{Div}^{n}$.
### B.2. Tamings
###### Definition B.16.
A _tamed integral symplectic space_ is a quadruple $(V,\omega,\Lambda,J)$,
where $(V,\omega,\Lambda)$ is an integral symplectic space and $J$ is a taming
of the symplectic space $(V,\omega)$. The _type_ of a tamed integral
symplectic space is the type of its underlying integral symplectic space.
Given a tamed integral symplectic space $(V,\omega,\Lambda,J)$ of type
$\mathfrak{t}\in\mathrm{Div}^{n}$, the taming $J$ makes $V$ into a
$n$-dimensional complex vector space, which we denote by $V_{J}$. The
symplectic pairing $\omega$ induces a Kähler form $\Omega$ which makes the
complex torus $V_{J}/\Lambda$ into a (generally non-principal) polarized
abelian variety whose underlying symplectic torus coincides with
$\mathbf{A}_{\mathfrak{t}}$. We refer the reader to [32, Appendix F] for
details on the relation between tamed integral symplectic spaces and
(generally non-principal) abelian varieties.
## References
* [1] Y. Aharonov and D. Bohm, Significance of electromagnetic potentials in the quantum theory, Phys. Rev. 115 (1959) 485–491.
* [2] L. Andrianopoli, M. Bertolini, A. Ceresole, R. D’Auria, S. Ferrara, P. Fre and T. Magri, N=2 supergravity and N=2 superYang-Mills theory on general scalar manifolds: Symplectic covariance, gaugings and the momentum map, J. Geom. Phys. 23 (1997) 111–189.
* [3] L. Andrianopoli, R. D’Auria and S. Ferrara, U duality and central charges in various dimensions revisited, Int. J. Mod. Phys. A 13 (1998) 431–490.
* [4] P. C. Argyres and M. Martone, _4d $\mathcal{N}$ =2 theories with disconnected gauge groups_, JHEP 03 (2017), 145.
* [5] P. C. Argyres, A. Bourget and M. Martone, _On the moduli spaces of 4d $\mathcal{N}=3$ SCFTs I: triple special Kähler structure_, preprint arXiv:1912.04926.
* [6] P. Argyres, M. Lotito, Y. Lü and M. Martone, _Geometric constraints on the space of $\mathcal{N}$ = 2 SCFTs. Part I: physical constraints on relevant deformations_, JHEP 02 (2018), 001.
* [7] P. C. Argyres, M. Lotito, Y. Lü and M. Martone, _Geometric constraints on the space of $\mathcal{N}$ = 2 SCFTs. Part II: construction of special Kähler geometries and RG flows_, JHEP 02 (2018), 002.
* [8] P. C. Argyres and M. Martone, _Towards a classification of rank $r$ $\mathcal{N}=2$ SCFTs Part II: special Kahler stratification of the Coulomb branch_, preprint arXiv:2007.00012.
* [9] P. Aschieri, S. Ferrara and B. Zumino, Duality Rotations in Nonlinear Electrodynamics and in Extended Supergravity, Riv. Nuovo Cim. 31 (2008) 625–708.
* [10] M. F. Atiyah and N. Hitchin, _The Geometry and Dynamics of Magnetic Monopoles_ , Princeton Legacy Library, Porter Lectures, 1988.
* [11] D. Baraglia, Topological T-duality for general circle bundles, Pure Appl. Math. Q. 10 (2014) 3, 367–438.
* [12] D. Baraglia, Topological T-duality for torus bundles with monodromy, Rev. Math. Phys. 27 (2015) 3, 1550008.
* [13] C. Becker, M. Benini, A. Schenkel and R. J. Szabo, _Abelian duality on globally hyperbolic spacetimes_ , Commun. Math. Phys. 349 (2017) no.1, 361–392.
* [14] M. Caorsi and S. Cecotti, _Geometric classification of 4d $\mathcal{N}=2$ SCFTs_, JHEP 07 (2018), 138\.
* [15] M. Caorsi and S. Cecotti, _Homological classification of 4d $\mathcal{N}$ = 2 QFT. Rank-1 revisited_, JHEP 10 (2019), 013.
* [16] S. Cecotti, Supersymmetric Field Theories: Geometric Structures and Dualities, Cambridge, 2015.
* [17] I. Chatterji, J. Mislin, C. Pittet, Flat bundles with complex analytic holonomy, Quart. J. Math. 00 (2016), 1–13.
* [18] V. Cortés, C. I. Lazaroiu and C. S. Shahbazi, $\mathcal{N}=1$ Geometric Supergravity and chiral triples on Riemann surfaces, Commun. Math. Phys. 375 (2020) 429–478.
* [19] O. Debarre, Tores et variétés abéliennes complexes, EDP Sciences, 2000.
* [20] P. Deligne, Extensions centrales non residuéllement finies de groupes arithmétiques, C. R. Acad. Sci. Paris Sér. A-B 287 (1978), A203–A208.
* [21] S. Deser and C. Teitelboim, Duality Transformations of abelian and Nonabelian Gauge Fields, Phys. Rev. D 13 (1976) 1592–1597.
* [22] S. Deser, Off-Shell Electromagnetic Duality Invariance, J. Phys. A 15 (1982) 1053–1054.
* [23] P. A. M. Dirac, _Quantised singularities in the electromagnetic field_ , Proc. Roy. Soc. Lond. A 133 (1931) 821, 60–72.
* [24] S. K. Donaldson, P. B. Kronheimer, The Geometry of Four-Manifolds, Oxford Mathematical Monographs, 1997.
* [25] M. K. Gaillard and B. Zumino, Duality Rotations for Interacting Fields, Nucl. Phys. B 193 (1981) 221–244.
* [26] S. Eilenberg, Homology of Spaces with Operators I, Trans. AMS 61 (1947) 378–417.
* [27] S. Gitler, Cohomology operations with local coefficients, Amer. J. Math, 85 (1963) 2, 156–188.
* [28] A. Hatcher, Algebraic topology, Cambridge University Press, 2002.
* [29] A. Hatcher, Homology with local coefficients and characteristic classes, Homology, Homotopy and Applications, 8(2) (2006) 91–103.
* [30] D. Husemöller, Fiber bundles, Graduate Texts in Mathematics, 3rd ed., Springer, 1994.
* [31] S. Kobayashi, Transformation Groups in Differential Geometry, Springer, 1995.
* [32] C. I. Lazaroiu and C. S. Shahbazi, Generalized Einstein-Scalar-Maxwell theories and locally geometric U-folds, Rev. Math. Phys. Vol. 30, No. 05, 1850012 (2018).
* [33] C. I. Lazaroiu and C. S. Shahbazi, Section sigma models coupled to symplectic duality bundles on Lorentzian four-manifolds, J. Geom. Phys. 128 (2018) 58–86.
* [34] C. I. Lazaroiu and C. S. Shahbazi, The global formulation of generalized Einstein-Scalar-Maxwell theories, Springer Proceedings in Mathematics & Statistics, Quantum Theory and Symmetries with Lie Theory and Its Applications in Physics, vol. 2 (2018) 217–231.
* [35] C. I. Lazaroiu, C. S. Shahbazi, The classification of weakly abelian principal bundles, in preparation.
* [36] C. I. Lazaroiu, E. M. Babalic, I.-A. Coman, Geometric algebra techniques in flux compactifications, Adv. High Energy Phys. 2016, 7292534.
* [37] W. S. Massey, Obstructions to the existence of almost complex structures, Bull. Amer. Math. Soc. vol. 67 6 (1961) 559–564.
* [38] P. W. Michor, Topics in differential geometry, A.M.S., 2008.
* [39] J. Milnor, On the existence of a connection with curvature zero, Comment. Math. Helv. 32 (1958) 215–223.
* [40] J. Millson, Real vector bundles with discrete structure group, Elsevier, NY, Topology 18 (1979) 83–89.
* [41] D. I. Olive, Exact electromagnetic duality, Nucl. Phys. Proc. Suppl. 45A (1996) 88–102.
* [42] J. S. Schwinger, _Magnetic charge and quantum field theory_ , Phys. Rev. 144 (1966) 1087–1093.
* [43] E. H. Spanier, Algebraic Topology, Springer, 1966.
* [44] N. E. Steenrod, Homology with Local Coefficients, Annals of Math. 44 (1943) 610–627.
* [45] R. J. Szabo, Quantization of Higher abelian Gauge Theory in Generalized Differential Cohomology, PoS ICMP2012 (2012) 9–67.
* [46] T. Szamuely, Galois groups and fundamental groups, Cambridge U.P., 2009.
* [47] D. Zwanziger, Quantum field theory of particles with both electric and magnetic charges, Phys. Rev. 176 (1968), 1489–1495.
* [48] G. W. Whitehead, Elements of Homotopy Theory, Graduate Texts in Mathematics 61, Springer, 1978.
# Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense
of Product-of-Experts
Svetlana Kutuzova Oswin Krause Douglas McCloskey Mads Nielsen Christian
Igel
###### Abstract
Multimodal generative models should be able to learn a meaningful latent
representation that enables a coherent joint generation of all modalities
(e.g., images and text). Many applications also require the ability to
accurately sample modalities conditioned on observations of a subset of the
modalities. Often not all modalities may be observed for all training data
points, so semi-supervised learning should be possible. In this study, we
propose a novel product-of-experts (PoE) based variational autoencoder that
has these desired properties. We benchmark it against a mixture-of-experts
(MoE) approach and a PoE approach of combining the modalities with an
additional encoder network. An empirical evaluation shows that the PoE based
models can outperform the contrasted models. Our experiments support the
intuition that PoE models are more suited for a conjunctive combination of
modalities.
###### keywords:
variational autoencoder, multimodal learning, semi-supervised learning,
product-of-experts
††journal: Artificial Intelligence
[inst1]organization=Novo Nordisk Foundation Center for Biosustainability,
Technical University of Denmark, addressline=Kemitorvet 220, city=Copenhagen,
postcode=2800, country=Denmark
[inst2]organization=Department of Computer Science, University of Copenhagen,
addressline=Universitetsparken 1, city=Copenhagen, postcode=2100,
country=Denmark
## 1 Introduction
Multimodal generative modelling is important because information about real-
world objects typically comes in different representations, or modalities. The
information provided by each modality may be erroneous and/or incomplete, and
a complete reconstruction of the full information can often only be achieved
by combining several modalities. For example, in image- and video-guided
translation [1], additional visual context can potentially resolve ambiguities
(e.g., noun genders) when translating written text.
In many applications, modalities may be missing for a subset of the observed
samples during training and deployment. Often the description of an object in
one modality is easy to obtain, while annotating it with another modality is
slow and expensive. Given two modalities, we call samples _paired_ when both
modalities are present, and _unpaired_ if one is missing. The simplest way to
deal with paired and unpaired training examples is to discard the unpaired
observations for learning. The smaller the share of paired samples, the more
important becomes the ability to additionally learn from the unpaired data,
referred to as _semi-supervised learning_ in this context (following the
terminology from [2]; typically one would associate semi-supervised learning
with learning from labelled and unlabelled data to solve classification or
regression tasks). Our goal is to provide a model that can leverage the
information contained in unpaired samples and to investigate the capabilities
of the model in situations of low levels of supervision, that is, when only a
few paired samples are available. While a modality can be as low dimensional
as a label, which can be handled by a variety of discriminative models [3], we
are interested in high dimensional modalities, for example an image and a text
caption.
Learning a representation of multimodal data that allows generating high-
quality samples requires the following: 1) deriving a meaningful
representation in a joint latent space for each high dimensional modality and
2) bridging the representations of different modalities in a way that the
relations between them are preserved. The latter means that we do not want the
modalities to be represented orthogonally in the latent space – ideally the
latent space should encode the object’s properties independent of the input
modality. Variational autoencoders (4) using a product-of-experts (PoE, 5, 6)
approach for combining input modalities are a promising approach for
multimodal generative modelling having the desired properties, in particular
the VAEVAE(a) model developed by [2] and a novel model termed SVAE, which we
present in this study. Both models can handle multiple high dimensional
modalities, which may not all be observed at training time.
It has been argued that a PoE approach is not well suited for multimodal
generative modelling using variational autoencoders (VAEs) in comparison to
additive mixture-of-experts (MoE). It has empirically been shown that the PoE-
based MVAE [7] fails to properly model two high-dimensional modalities in
contrast to an (additive) MoE approach referred to as MMVAE, leading to the
conclusion that “PoE factorisation does not appear to be practically suited
for multi-modal learning” [8]. This study sets out to test this conjecture for
state-of-the-art multimodal VAEs.
The next section summarizes related work. Section 3 introduces SVAE as an
alternative PoE based VAE approach derived from axiomatic principles. Then we
present our experimental evaluation of multimodal VAEs before we conclude.
## 2 Background and Related Work
We consider multimodal generative modelling. We mainly restrict our
considerations to two modalities $x_{1}\in X_{1},x_{2}\in X_{2}$, where one
modality may be missing at a time. Extensions to more modalities are discussed
in Section 3.5. To address the problem of generative cross-modal modeling, one
modality $x_{1}$ can be generated from another modality $x_{2}$ by simply
using independently trained generative models ($x_{1}\rightarrow x_{2}$ and
$x_{2}\rightarrow x_{1}$) or a composed but non-interchangeable representation
[9, 10]. However, the ultimate goal of multimodal representation learning is
to find a meaningful joint latent code distribution bridging the two
individual embeddings learned from $x_{1}$ and $x_{2}$ alone. This can be done
by a two-step procedure that models the individual representations first and
then applies an additional learning step to link them [11, 12, 13]. In
contrast, we focus on approaches that learn individual and joint
representations simultaneously. Furthermore, our model should be able to learn
in a semi-supervised setting. [14] introduced two models suitable for the case
when one modality is high dimensional (e.g., an image) and another is low
dimensional (e.g., a label) while our main interest are modalities of high
complexity.
Figure 1: Schematic overview of bi-modal VAEs using a PoE and additional network
structures that are capable of semi-supervised learning without requiring a
two step learning procedure. VAEVAE (a) and (b) are by [2], JMVAE is by [15],
MVAE is by [7], and SVAE is our newly proposed model. Each triangle stands for
an individual neural network, the colors indicate the two different
modalities.
We consider models based on variational autoencoders (VAEs, 4, 16). Standard
VAEs learn a latent representation $z\in Z$ for a set of observed variables
$x\in X$ by modelling a joint distribution $p(x,z)=p(z)p(x|z)$. In the
original VAE, the intractable posterior $q(z|x)$ and conditional distribution
$p(x|z)$ are approximated by neural networks trained by maximising the ELBO
loss taking the form
$\mathcal{L}=E_{q(z|x)}[\log{p(x|z)}]-D_{\text{KL}}(q(z|x)\;\|\;\mathcal{N}(0,I))$
(1)
with respect to the parameters of the networks modelling $q(z|x)$ and
$p(x|z)$. Here $D_{\text{KL}}(\cdot\;\|\;\cdot)$ denotes the Kullback-Leibler
divergence. Bi-modal VAEs that can handle a missing modality extend this
approach by modelling $q(z|x_{1},x_{2})$ as well as $q_{1}(z|x_{1})$ and
$q_{2}(z|x_{2})$, which replace the single $q(z|x)$. Multimodal VAEs may
differ in 1) the way they approximate $q(z|x_{1},x_{2})$, $q_{1}(z|x_{1})$ and
$q_{2}(z|x_{2})$ by neural networks and/or 2) the structure of the loss
function, see Figure 1. Typically, there are no conceptual differences in the
decoding, and we model the decoding distributions in the same way for all
methods considered in this study.
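For the common case of a diagonal Gaussian encoder $q(z|x)=\mathcal{N}(\mu,\mathrm{diag}(\sigma^{2}))$, the KL term in equation 1 has a closed form. A minimal sketch (the function name is ours):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    summed over the latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
```

For `mu = 0` and `log_var = 0` the divergence vanishes, as expected.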
[15] introduced a model termed JMVAE (Joint Multimodal VAE), which belongs to
the class of approaches that can only learn from the paired training samples
(what we refer to as the _(fully) supervised setting_). It approximates
$q(z|x_{1},x_{2})$, $q_{1}(z|x_{1})$ and $q_{2}(z|x_{2})$ with three
corresponding neural networks and optimizes an ELBO-type loss of the form
$\mathcal{L}=E_{q(z|x_{1},x_{2})}[\log{p_{1}(x_{1}|z)}+\log{p_{2}(x_{2}|z)}]-D_{\text{KL}}(q(z|x_{1},x_{2})\;\|\;\mathcal{N}(0,I))\\\
-D_{\text{KL}}(q(z|x_{1},x_{2})\;\|\;q_{1}(z|x_{1}))-D_{\text{KL}}(q(z|x_{1},x_{2})\;\|\;q_{2}(z|x_{2}))\enspace.$
(2)
The last two terms imply that the joint network output must be computed during
learning, which requires paired samples.
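The last two KL terms of equation 2 compare two diagonal Gaussians, for which a closed form exists. A minimal sketch (the function name is ours):

```python
import numpy as np

def kl_gaussians(mu_q, lv_q, mu_p, lv_p):
    """Closed-form KL( N(mu_q, diag(exp(lv_q))) || N(mu_p, diag(exp(lv_p))) ),
    summed over the latent dimensions."""
    return 0.5 * np.sum(
        lv_p - lv_q + (np.exp(lv_q) + (mu_q - mu_p) ** 2) / np.exp(lv_p) - 1.0
    )
```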
The MVAE (Multimodal VAE) model [7] is the first multimodal VAE-based model
allowing for missing modalities that does not require any additional network
structures for learning the joint latent code distribution. The joint
posterior is modeled using a product-of-experts (PoE, 5, 6) as
$q(z|x_{1:M})\propto\prod_{m}q_{m}(z|x_{m})$. For the missing modality
$q_{k}(z|x_{k})=1$ is assumed. The model by [7] allows for semi-supervised
learning while keeping the number of model parameters low. The multiplicative
combination of the experts can be interpreted as a conjunction. In the context
of probabilistic metric spaces, the product is a triangular norm (t-norm) and
generalizes the “AND” operation to multi-valued logic.
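For diagonal Gaussian experts, the product in the PoE rule is again Gaussian: precisions add and means are precision-weighted. A minimal sketch of this MVAE-style combination, including the $\mathcal{N}(0,I)$ prior expert (function and argument names are ours):

```python
import numpy as np

def poe_gaussian(mus, log_vars):
    """Product of diagonal Gaussian experts with a N(0, I) prior expert.
    A missing modality is handled by simply omitting its expert
    (i.e., q_k(z|x_k) = 1). Returns (mu, log_var) of the product."""
    precision = np.ones_like(mus[0])    # precision of the N(0, I) prior expert
    weighted_mu = np.zeros_like(mus[0])
    for mu, lv in zip(mus, log_vars):
        t = np.exp(-lv)                 # expert precision 1 / sigma^2
        precision += t
        weighted_mu += t * mu
    var = 1.0 / precision
    return weighted_mu * var, np.log(var)
```

With two unit-variance experts at means $+1$ and $-1$, the product (with the prior) has mean 0 and variance $1/3$, illustrating the conjunctive sharpening of the posterior.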
The bridged model [17] highlights the need for an additional network structure
for approximating the joint latent code distribution. It attempts to keep the
advantages of such additional encoding networks while reducing the number of
model parameters by introducing the _bridge encoder_, which consists of one fully
connected layer which takes $z_{1}$ and $z_{2}$ latent code vectors generated
from $x_{1}$ and $x_{2}$ and outputs the mean and the variance of the joint
latent code distribution.
The arguably most advanced multimodal VAE model is VAEVAE by [2], which we
discuss in detail in the next section (see also Algorithm 1).
[8] proposed an MoE model termed MMVAE (Mixture-of-experts Multimodal VAE). In
the MMVAE model, the joint variational posterior for $M$ modalities is approximated
as $q(z|x_{1:M})=\sum_{m}{\alpha_{m}}q_{m}(z|x_{m})$ where
$\alpha_{m}=\frac{1}{M}$. The model utilizes a loss function from the
importance weighted autoencoder (IWAE, 18) that computes a tighter lower bound
compared to the VAE ELBO loss. The MoE formulation in principle allows training
with a missing modality $i$ by setting $\alpha_{i}=0$; however, [8] do
not highlight or evaluate this feature. They empirically compare MVAE [7] and
MMVAE, concluding that MVAE often fails to learn the joint latent code
distribution. Because of these results and those presented by [2], we did not
include MVAE as a benchmark model in our experiments.
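Sampling from the uniform mixture posterior amounts to picking an expert at random and sampling from it. A minimal sketch (names are ours):

```python
import numpy as np

def moe_sample(mus, log_vars, rng):
    """Draw one sample from q(z|x_{1:M}) = (1/M) * sum_m q_m(z|x_m):
    choose an expert uniformly, then sample from its diagonal Gaussian."""
    m = rng.integers(len(mus))
    std = np.exp(0.5 * log_vars[m])
    return mus[m] + std * rng.standard_normal(mus[m].shape)
```

In contrast to the multiplicative PoE combination, the mixture keeps each expert's mode rather than sharpening towards a consensus.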
## 3 SVAE
We developed a new approach as an alternative to VAEVAE. Both models 1) are
VAE based; 2) allow for interchangeable cross-modal generation as well as
learning a joint embedding; 3) allow for missing modalities at training time;
and 4) can be applied to two similarly complex high dimensional modalities.
Next, we will present our new model SVAE (it was originally developed to
analyse mass spectrometry data, see Section 4.4, and thus referred to as
SpectraVAE). Then we highlight the differences to VAEVAE. Finally, we state a
newly derived objective function for training the models. We first consider
two modalities and generalize to more modalities in Section 3.5.
### 3.1 SVAE
Since both modalities might not be available for all the samples, it should be
possible to marginalize each of them out of $q(z|x_{1},x_{2})$. While the
individual encoding distributions $q(z|x_{1})$ and $q(z|x_{2})$ can be
approximated by neural networks as in the standard VAE, we need to define a
meaningful approximation of the joint encoding distribution
$q(z|x)=q(z|x_{1},x_{2})$. In the newly proposed SVAE model, these
distributions are defined as the following:
$\displaystyle q(z|x_{1},x_{2})$
$\displaystyle=\frac{1}{Z(x_{1},x_{2})}q_{1}(z|x_{1})q_{2}(z|x_{2})$ (3)
$\displaystyle q(z|x_{1})$ $\displaystyle=q_{1}(z|x_{1})q^{*}_{2}(z|x_{1})$
(4) $\displaystyle q(z|x_{2})$
$\displaystyle=q_{2}(z|x_{2})q^{*}_{1}(z|x_{2})$ (5) $\displaystyle q(z)$
$\displaystyle=\mathcal{N}(0,I)$ (6)
where $Z(x_{1},x_{2})$ is a normalization constant. The distributions
$q_{1}(z|x_{1})$, $q_{2}(z|x_{2})$ and the unnormalized distributions
$q^{*}_{2}(z|x_{1})$ and $q^{*}_{1}(z|x_{2})$ are approximated by different
neural networks. The networks approximating $q_{i}(z|x_{i})$ and
$q^{*}_{j}(z|x_{i})$, $i,j\in\\{1,2\\}$ have the same architecture. In case
both observations are available, $q(z|x_{1},x_{2})$ is approximated by
applying the product-of-experts rule with $q_{1}(z|x_{1})$ and
$q_{2}(z|x_{2})$ being the experts for each modality. In case of a missing
modality, equation 4 or 5 is used. If, for example, $x_{2}$ is missing, the
$q^{*}_{2}(z|x_{1})$ distribution takes over as a “replacement” expert,
modelling marginalization over $x_{2}$.
The model is derived in Section 3.2. The desired properties of the model were
that 1) when no modalities are observed the generating distribution for the
latent code is Gaussian, 2) the modalities are independent given the latent
code, 3) both experts cover the whole latent space with equal probabilities,
and 4) the joint encoding distribution $q(z|x_{1},x_{2})$ is modelled by a
PoE.
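Assuming the usual diagonal Gaussian parameterization of each expert, the case analysis of equations 3–5 can be sketched as follows (function and argument names are ours; each expert is a `(mu, log_var)` pair):

```python
import numpy as np

def product_gaussians(experts):
    """Normalized product of diagonal Gaussians, itself a Gaussian:
    precisions add, means are precision-weighted."""
    precision = sum(np.exp(-lv) for _, lv in experts)
    mu = sum(np.exp(-lv) * m for m, lv in experts) / precision
    return mu, -np.log(precision)

def svae_posterior(q1=None, q2=None, q1_star=None, q2_star=None):
    """SVAE encoding distribution (equations 3-5): the PoE of the two
    modality experts if both modalities are observed; otherwise the trained
    replacement expert q*_j stands in for the missing modality j."""
    if q1 is not None and q2 is not None:
        return product_gaussians([q1, q2])        # eq. 3
    if q1 is not None:
        return product_gaussians([q1, q2_star])   # eq. 4
    return product_gaussians([q2, q1_star])       # eq. 5
```

Every branch remains a two-expert product, which is what lets the weights of both factors be updated regardless of which modalities are observed.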
### 3.2 Derivation of the SVAE model architecture
We define our model in an axiomatic way, requiring the following properties:
1. 1.
When no modalities are observed, the generating distribution for the latent
code is Gaussian:
$q(z)=p(z)=\mathcal{N}(0,I)$ (7)
This property is well known from VAEs and allows easy sampling.
2. 2.
The two modalities are independent given the latent code, so the decoder
distribution is:
$p(x_{1},x_{2}|z)=p_{1}(x_{1}|z)p_{2}(x_{2}|z)$ (8)
The second property formalizes our goal that the latent representation
contains all relevant information from all modalities.
The joint distribution $p(z|x)=p(z|x_{1},x_{2})$ is given by
$p(z|x_{1},x_{2})=\frac{p(z)p(x_{1},x_{2}|z)}{p(x_{1},x_{2})}=\frac{p(z)p(x_{1},x_{2}|z)}{\int
p(z^{\prime})p(x_{1},x_{2}|z^{\prime})\text{d}z^{\prime}}\\\
\overset{(\ref{eqa})}{=}\frac{p(z)p_{1}(x_{1}|z)p_{2}(x_{2}|z)}{\int
p(z^{\prime})p(x_{1}|z^{\prime})p(x_{2}|z^{\prime})\text{d}z^{\prime}}\enspace.$
(9)
3. 3.
Both experts cover the whole latent space with equal probabilities:
$q(z)=q_{1}(z)=\int q_{1}(z|x_{1})p(x_{1})\text{d}x_{1}=\int
q_{2}(z|x_{2})p(x_{2})\text{d}x_{2}=q_{2}(z)$ (10)
4. 4.
The joint encoding distribution $q(z|x)=q(z|x_{1},x_{2})$ is assumed to be
given by the product-of-experts rule [5, 6]:
$q(z|x_{1},x_{2})=\frac{1}{Z(x_{1},x_{2})}q_{1}(z|x_{1})q_{2}(z|x_{2})$ (11)
with $Z(x_{1},x_{2})=\int
q_{1}(z^{\prime}|x_{1})q_{2}(z^{\prime}|x_{2})\text{d}z^{\prime}$. The
modelling by a product-of-experts in equation 11 is a simplification of
equation 9 to make the model tractable.
Given equation 11 and equation 10 we obtain
$q(z)=\int q(z|x)p(x)\text{d}x=\int
q(z|x_{1},x_{2})p(x_{1},x_{2})\text{d}x_{1}\text{d}x_{2}\\\
\overset{(\ref{eq2})}{=}\int\frac{1}{Z(x_{1},x_{2})}q_{1}(z|x_{1})q_{2}(z|x_{2})p(x_{1})p(x_{2}|x_{1})\text{d}x_{1}\text{d}x_{2}\enspace.$
(12)
Let us define
$q^{*}_{j}(z|x_{i})=\int\frac{1}{Z(x_{i},x_{j})}q_{j}(z|x_{j})p(x_{j}|x_{i})\text{d}x_{j}$
(13)
and write
$q(z)\overset{(\ref{eq5})}{=}\int
q_{1}(z|x_{1})p(x_{1})\int\frac{1}{Z(x_{1},x_{2})}q_{2}(z|x_{2})p(x_{2}|x_{1})\text{d}x_{2}\text{d}x_{1}\\\
=\int p(x_{1})q_{1}(z|x_{1})q^{*}_{2}(z|x_{1})\text{d}x_{1}\enspace.$ (14)
So the proposal distributions are:
$\displaystyle q(z|x_{1},x_{2})$
$\displaystyle=\frac{1}{Z(x_{1},x_{2})}q_{1}(z|x_{1})q_{2}(z|x_{2})$ (15)
$\displaystyle q(z|x_{1})$ $\displaystyle=q_{1}(z|x_{1})q^{*}_{2}(z|x_{1})$
(16) $\displaystyle q(z|x_{2})$
$\displaystyle=q_{2}(z|x_{2})q^{*}_{1}(z|x_{2})$ (17) $\displaystyle q(z)$
$\displaystyle=\mathcal{N}(0,I)$ (18)
### 3.3 SVAE vs. VAEVAE
The VAEVAE model [2] is the most similar to ours. Wu et al. define two
variants which can be derived from the SVAE model in the following way.
Variant (a) can be derived by setting $q^{*}_{2}(z|x_{1})=q^{*}_{1}(z|x_{2})=1$.
Variant (b) is obtained from (a) by additionally using a separate network to
model $q(z|x_{1},x_{2})$. Having a joint network $q(z|x_{1},x_{2})$ implements
the most straightforward way of capturing the inter-dependencies of the two
modalities. However, the joint network cannot be trained on unpaired data –
which can be relevant when the share of supervised data gets smaller. Option
(a) uses the product-of-experts rule to model the joint distribution of the
two modalities as well, but does not ensure that both experts cover the whole
latent space (in contrast to SVAE, see equation 10), which can lead to
individual latent code distributions diverging. Based on this consideration
and the experimental results from [2], we focused on benchmarking VAEVAE (b)
and refer to it as simply VAEVAE in Section 4.
SVAE resembles VAEVAE in the need for additional networks besides one encoder
per modality and in the structure of the ELBO loss. It does, however, solve the
problem of learning the joint embeddings in a way that allows learning the
parameters of the approximated $q(z|x_{1},x_{2})$ using all available samples,
i.e., both paired and unpaired. If $q(z|x_{1},x_{2})$ is approximated with the
joint network that accepts concatenated inputs, as in JMVAE and VAEVAE (b),
the weights of $q(z|x_{1},x_{2})$ can only be updated for the paired share of
samples. If $q(z|x_{1},x_{2})$ is approximated with a PoE of decoupled
networks as in SVAE, the weights are updated for each sample whether paired or
unpaired – which is the key differentiating feature of SVAE compared to
existing architectures.
### 3.4 A New Objective Function
When developing SVAE, we devised a novel ELBO-type loss:
$\displaystyle\mathcal{L}=$ $\displaystyle
E_{p_{\text{paired}}(x_{1},x_{2})}\big{[}E_{q(z|x_{1},x_{2})}[\log{p_{1}(x_{1}|z)}+\log{p_{2}(x_{2}|z)}]$
$\displaystyle-
D_{\text{KL}}(q(z|x_{1},x_{2})\;\|\;p(z|x_{1}))-D_{\text{KL}}(q(z|x_{1},x_{2})\;\|\;p(z|x_{2}))\big{]}$
$\displaystyle+E_{p_{\text{paired}}(x_{1})}\left[E_{q(z|x_{1})}[\log{p_{1}(x_{1}|z)}]-D_{\text{KL}}(q(z|x_{1})\;\|\;p(z))\right]$
$\displaystyle+E_{p_{\text{paired}}(x_{2})}\left[E_{q(z|x_{2})}[\log{p_{2}(x_{2}|z)}]-D_{\text{KL}}(q(z|x_{2})\;\|\;p(z))\right]$
(19) $\displaystyle\mathcal{L}_{1}=$ $\displaystyle
E_{p_{\text{unpaired}}(x_{1})}\left[E_{q(z|x_{1})}[\log{p_{1}(x_{1}|z)}]-D_{\text{KL}}(q(z|x_{1})\;\|\;p(z))\right]$
(20) $\displaystyle\mathcal{L}_{2}=$ $\displaystyle
E_{p_{\text{unpaired}}(x_{2})}\left[E_{q(z|x_{2})}[\log{p_{2}(x_{2}|z)}]-D_{\text{KL}}(q(z|x_{2})\;\|\;p(z))\right]$
(21) $\displaystyle\mathcal{L}_{\text{comb}}=$
$\displaystyle\mathcal{L}+\mathcal{L}_{1}+\mathcal{L}_{2}$ (22)
Here $p_{\text{paired}}$ and $p_{\text{unpaired}}$ denote the distributions of
the paired and unpaired training data, respectively. The differences between
this loss and the loss function used to train VAEVAE by [2] are highlighted in
Algorithm 1.
The loss function can be derived as follows. Let us consider the optimization
problem
$E_{p_{\text{Data}}(x_{1},x_{2})}[\log p(x_{1},x_{2})]=\\\
=\frac{1}{2}E_{p_{\text{Data}}(x_{1},x_{2})}[\log p(x_{1}|x_{2})+\log
p(x_{2})+\log p(x_{2}|x_{1})+\log p(x_{1})]\\\
=\frac{1}{2}E_{p_{\text{Data}}(x_{1},x_{2})}[\log
p(x_{1}|x_{2})]+\frac{1}{2}E_{p_{\text{Data}}(x_{1},x_{2})}[\log
p(x_{2}|x_{1})]\\\ +\frac{1}{2}E_{p_{\text{Data}}(x_{2})}[\log
p(x_{2})]+\frac{1}{2}E_{p_{\text{Data}}(x_{1})}[\log p(x_{1})]\enspace.$ (23)
We can now proceed by finding lower bounds for each term. For the last two
terms $\log p(x_{i})$ we can use the standard ELBO as given in equation 1.
This gives the terms
$\mathcal{L}_{i}=E_{p_{\text{Data}}(x_{i})}\left[E_{q(z|x_{i})}[\log{p_{i}(x_{i}|z)}]-D_{\text{KL}}(q(z|x_{i})\;\|\;p(z))\right]$
(24)
Next, we will derive $\log p(x_{1}|x_{2})$. This we can do in terms of a
conditional VAE [10], where we condition all terms on $x_{2}$ (or $x_{1}$ if
we model $\log p(x_{2}|x_{1})$). So we derive the log-likelihood for
$p(x_{1}|x_{2})=\int p(x_{1}|z)p(z|x_{2})dz$, where $p(z|x_{2})$ is now our
prior. By model assumption we further have
$p(x_{1},x_{2},z)=p(x_{1}|z)p(x_{2}|z)p(z)$ and therefore
$p(x_{1}|x_{2},z)=p(x_{1}|z)$. Thus we arrive at the ELBO losses
$\mathcal{L}_{12}=E_{p_{\text{Data}}(x_{1},x_{2})}\left[E_{q(z|x_{1},x_{2})}[\log{p_{1}(x_{1}|z)}]-D_{\text{KL}}(q(z|x_{1},x_{2})\;\|\;p(z|x_{2}))\right]$
(25)
and
$\mathcal{L}_{21}=E_{p_{\text{Data}}(x_{1},x_{2})}\left[E_{q(z|x_{1},x_{2})}[\log{p_{2}(x_{2}|z)}]-D_{\text{KL}}(q(z|x_{1},x_{2})\;\|\;p(z|x_{1}))\right]\enspace.$
(26)
We now insert the terms in equation 23 and arrive at:
$2E_{p_{\text{Data}}(x_{1},x_{2})}[\log
p(x_{1},x_{2})]\geq\mathcal{L}_{12}+\mathcal{L}_{21}+\mathcal{L}_{1}+\mathcal{L}_{2}\\\
=E_{p_{\text{Data}}(x_{1},x_{2})}\left[E_{q(z|x_{1},x_{2})}[\log{p_{1}(x_{1}|z)}]-D_{\text{KL}}(q(z|x_{1},x_{2})\;\|\;p(z|x_{2}))\right]\\\
+E_{p_{\text{Data}}(x_{1},x_{2})}\left[E_{q(z|x_{1},x_{2})}[\log{p_{2}(x_{2}|z)}]-D_{\text{KL}}(q(z|x_{1},x_{2})\;\|\;p(z|x_{1}))\right]\\\
+E_{p_{\text{Data}}(x_{1})}\left[E_{q(z|x_{1})}[\log{p_{1}(x_{1}|z)}]-D_{\text{KL}}(q(z|x_{1})\;\|\;p(z))\right]\\\
+E_{p_{\text{Data}}(x_{2})}\left[E_{q(z|x_{2})}[\log{p_{2}(x_{2}|z)}]-D_{\text{KL}}(q(z|x_{2})\;\|\;p(z))\right]\enspace.$
(27)
The first two terms together give
$E_{p_{\text{Data}}(x_{1},x_{2})}\left[E_{q(z|x_{1},x_{2})}[\log{p_{1}(x_{1}|z)}+\log{p_{2}(x_{2}|z)}]\right]\enspace.$
(28)
We do not know the conditional prior $p(z|x_{i})$. By the definition of the
VAE we are allowed to optimize the prior, so we can parameterize and learn it.
However, in an optimal model $p(z|x_{i})\approx q(z|x_{i})$, and it might be
possible to prove that if $p(z|x_{i})$ is learnt in the same model class as
$q(z|x_{i})$, the optimum is indeed $p(z|x_{i})=q(z|x_{i})$. Inserting this
choice into equation 27 gives the end result.
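The resulting combined objective (equations 19–22) is purely additive: the paired objective plus the two single-modality ELBOs, each averaged over its own subset of the training data. A trivial bookkeeping sketch (names are ours):

```python
def combined_objective(paired, unpaired_1, unpaired_2):
    """Equation 22: L_comb = L + L_1 + L_2, where each list holds the
    per-sample objective values of the corresponding data subset and an
    empty subset contributes zero."""
    mean = lambda terms: sum(terms) / len(terms) if terms else 0.0
    return mean(paired) + mean(unpaired_1) + mean(unpaired_2)
```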
Input : Supervised example $(x_{1},x_{2})$, unsupervised example
$x_{1}^{\prime}$, unsupervised example $x_{2}^{\prime}$
$z^{\prime}\sim q(z|x_{1},x_{2})$
$z_{x_{1}}\sim q_{1}(z|x_{1})$
$z_{x_{2}}\sim q_{2}(z|x_{2})$
$d_{1}=D_{\text{KL}}(q(z^{\prime}|x_{1},x_{2})\|q_{1}(z_{x_{1}}|x_{1}))\bm{+D_{\text{KL}}(q_{1}(z_{x_{1}}|x_{1})\|p(z))}$
$d_{2}=D_{\text{KL}}(q(z^{\prime}|x_{1},x_{2})\|q_{2}(z_{x_{2}}|x_{2}))\bm{+D_{\text{KL}}(q_{2}(z_{x_{2}}|x_{2})\|p(z))}$
$\mathcal{L}=\log{p_{1}(x_{1}|z)}+\log{p_{2}(x_{2}|z)}+\bm{\log{p_{1}(x_{1}|z_{x_{1}})}}+\bm{\log{p_{2}(x_{2}|z_{x_{2}})}}+d_{1}+d_{2}$
$\mathcal{L}_{x_{1}}=\log{p_{1}(x_{1}^{\prime}|z_{x_{1}})}+D_{\text{KL}}(q_{1}(z_{x_{1}}|x_{1}^{\prime})\|p(z))$
$\mathcal{L}_{x_{2}}=\log{p_{2}(x_{2}^{\prime}|z_{x_{2}})}+D_{\text{KL}}(q_{2}(z_{x_{2}}|x_{2}^{\prime})\|p(z))$
$\mathcal{L}_{\text{comb}}=\mathcal{L}+\mathcal{L}_{x_{1}}+\mathcal{L}_{x_{2}}$
Algorithm 1 Loss computation (forward pass) for SVAE and VAEVAE*. In bold are
terms that differ from [2].
### 3.5 SVAE and VAEVAE for more than two modalities
In the following, we formalize the VAEVAE model for three modalities and
present a naïve extension of the SVAE model to more than two modalities.
Figure 2: The SVAE and VAEVAE network architectures for three modalities. The
number of parameters is $kn^{2}$ for SVAE and $kn2^{n-1}$ for VAEVAE, where
$n$ is the number of modalities and $k$ is the number of parameters in one
encoding network.
In the canonical extension of VAEVAE to three modalities, the three- and two-
modal relations are captured by the corresponding networks
$q(z|x_{1},x_{2},x_{3})$, $q(z|x_{i},x_{j})$ and $q(z|x_{i})$ for
$i,j\in\\{1,2,3\\}$, see Figure 2. In the general $n$-modal case, the model
has $2^{n}$ networks. For $n=3$, the loss function reads:
$\displaystyle\mathcal{L}_{1,2,3}=$ $\displaystyle
E_{p_{\text{paired}}(x_{1},x_{2},x_{3})}\left[E_{q(z|x_{1},x_{2},x_{3})}[\log{p_{1}(x_{1}|z)}+\log{p_{2}(x_{2}|z)}+\log{p_{3}(x_{3}|z)}]\right]$
$\displaystyle-D_{\text{KL}}(q(z|x_{1},x_{2},x_{3})\;\|\;q(z|x_{1},x_{2}))$
$\displaystyle-D_{\text{KL}}(q(z|x_{1},x_{2},x_{3})\;\|\;q(z|x_{2},x_{3}))$
$\displaystyle-D_{\text{KL}}(q(z|x_{1},x_{2},x_{3})\;\|\;q(z|x_{1},x_{3}))$
(29) $\displaystyle\mathcal{L}_{ij}=$ $\displaystyle
E_{p_{\text{paired}}(x_{1},x_{2},x_{3})}\left[E_{q(z|x_{i},x_{j})}[\log{p_{i}(x_{i}|z)}+\log{p_{j}(x_{j}|z)}]\right]$
$\displaystyle-
D_{\text{KL}}(q(z|x_{i},x_{j})\;\|\;q(z|x_{1}))-D_{\text{KL}}(q(z|x_{i},x_{j})\;\|\;q(z|x_{2}))$
$\displaystyle-
D_{\text{KL}}(q(z|x_{i},x_{j})\;\|\;q(z|x_{3}))-D_{\text{KL}}(q(z|x_{i},x_{j})\;\|\;q(z))$
(30) $\displaystyle\mathcal{L}_{i}=$ $\displaystyle
E_{p_{\text{unpaired}}(x_{i})}\left[E_{q(z|x_{i})}[\log{p_{i}(x_{i}|z)}]-D_{\text{KL}}(q(z|x_{i})\;\|\;q(z))\right]$
(31) $\displaystyle\mathcal{L}_{\text{comb}}=$
$\displaystyle\mathcal{L}_{1,2,3}+\sum\limits_{i,j\in\\{1,2,3\\},i\neq
j}\mathcal{L}_{i,j}+\sum\limits_{i=1}^{3}\mathcal{L}_{i}$ (32)
In this study, we considered a simplifying extension of SVAE to $n$ modalities
using $n^{2}$ networks $q_{i}^{j}(z|x_{j})$ for $i,j\in\\{1,\dots,n\\}$. For
the 3-modal case depicted in Figure 2, the PoE relations between the
modalities are defined in the following way:
$\displaystyle q(z|x_{1},x_{2},x_{3})$
$\displaystyle=\frac{1}{Z(x_{1},x_{2},x_{3})}q_{1}^{1}(z|x_{1})q_{2}^{2}(z|x_{2})q_{3}^{3}(z|x_{3})$
(33) $\displaystyle i,j,k\in\\{1,2,3\\},i\neq j\neq k:$ (34) $\displaystyle
q^{i}(z|x_{i},x_{j})$
$\displaystyle=\frac{1}{Z(x_{i},x_{j})}q_{i}^{i}(z|x_{i})q_{j}^{j}(z|x_{j})q_{k}^{i}(z|x_{i})$
(35) $\displaystyle q^{j}(z|x_{i},x_{j})$
$\displaystyle=\frac{1}{Z(x_{i},x_{j})}q_{i}^{i}(z|x_{i})q_{j}^{j}(z|x_{j})q_{k}^{j}(z|x_{j})$
(36) $\displaystyle q(z|x_{i})$
$\displaystyle=q_{i}^{i}(z|x_{i})q_{j}^{i}(z|x_{i})q_{k}^{i}(z|x_{i})$ (37)
$\displaystyle q(z)$ $\displaystyle=\mathcal{N}(0,I)$ (38)
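When the individual experts $q_{i}^{j}$ are Gaussian, as is standard for VAE encoders, the multiplicative combinations in equations 33–37 have a closed form: the renormalized product of Gaussians is again Gaussian, with summed precisions and a precision-weighted mean (the normalizer $Z$ drops out). A minimal sketch, with our own helper name:

```python
import numpy as np

def product_of_gaussians(mus, sigmas):
    """Renormalized product of Gaussian experts N(mu_i, sigma_i^2).

    The product is again Gaussian with precision sum_i 1/sigma_i^2
    and precision-weighted mean."""
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    var = 1.0 / precisions.sum(axis=0)
    mu = var * (precisions * np.asarray(mus, dtype=float)).sum(axis=0)
    return mu, np.sqrt(var)

# Two experts pull the product towards a compromise mean with
# reduced variance.
mu, sigma = product_of_gaussians([0.0, 2.0], [1.0, 1.0])  # mu = 1.0
```

This is why the PoE behaves like an "AND": an expert that is uncertain (large $\sigma$) contributes little precision and barely moves the combined posterior.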
The corresponding SVAE loss function has additional terms due to the fact that
the relations between pairs of modalities need to be captured with two PoE
rules $q^{i}(z|x_{i},x_{j})$ and $q^{j}(z|x_{i},x_{j})$ in SVAE, while there
is only a single network $q(z|x_{i},x_{j})$ in VAEVAE. The loss functions
equation 29–equation 32 above are modified in a way that
$f(q(z|x_{i},x_{j}))=f(q^{i}(z|x_{i},x_{j}))+f(q^{j}(z|x_{i},x_{j}))$ for any
function $f$.
This extension of the bi-modal case assumes that
$p(x_{i},x_{j}|x_{k})=p(x_{i}|x_{k})p(x_{j}|x_{k})$ for
$i,j,k\in\\{1,2,3\\},i\neq j\neq k$, that is, $x_{i}$ and $x_{j}$ are
conditionally independent given $x_{k}$.
## 4 Experiments
We conducted experiments to compare state-of-the-art PoE based VAEs with the
MoE approach MMVAE [8]. We considered VAEVAE (b) as proposed by [2] and SVAE
as described above. The two approaches differ both in the underlying model as
well as the objective function. For a better understanding of these
differences, we also considered an algorithm referred to as VAEVAE*, which has
the same model architecture as VAEVAE and the same loss function as SVAE. (We
also evaluated SVAE*, our model with the VAEVAE loss function, but it never
outperformed the other models.) The difference in the training procedure for
VAEVAE and VAEVAE* is shown in Algorithm 1. Since the VAEVAE implementation
was not publicly available at the time of writing, we used our own
implementation of VAEVAE based on the PiXYZ library
(https://github.com/masa-su/pixyz). For
details about the experiments we refer to Appendix A. The source code to
reproduce the experiments can be found at https://github.com/sgalkina/poe-
vaes. More qualitative examples are shown in Figure 3 for SVAE and VAEVAE.
(a)
(b)
Figure 3: MNIST-Split image reconstructions of a top half and a bottom half
given the top half, the bottom half of the original image or both halves.
Side-by-side MNIST-SVHN reconstructions from the randomly sampled latent
space, with oracle predictions of the digit class. Joint coherence is the
fraction of pairs whose predicted classes agree. The examples are generated by
SVAE and VAEVAE for the supervision levels 100% and 0.1%.
For an unbiased evaluation, we considered the same test problems and
performance metrics as [8]. In addition, we designed an experiment referred to
as MNIST-Split that is intended to be well-suited for PoE. In all experiments
we kept the network architectures as similar as possible (see Appendix A). For
the new benchmark problem, we constructed a multi-modal dataset where the
modalities are similar in dimensionality as well as complexity and provide
complementary information to each other rather than duplicating it. The latter
should favor PoE modelling, which suits an “AND” combination of the
modalities.
We measured performance for different supervision levels for each dataset
(e.g., 10% supervision level means that 10% of the training set samples were
paired and the remaining 90% were unpaired).
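The split into paired and unpaired training samples can be sketched as follows; this is a hypothetical helper illustrating the protocol, not the actual preprocessing code used in the experiments:

```python
import random

def split_supervision(dataset, supervision_level, seed=0):
    """Split a list of (x1, x2) pairs into a paired subset and an
    unpaired remainder according to supervision_level in [0, 1]."""
    rng = random.Random(seed)
    indices = list(range(len(dataset)))
    rng.shuffle(indices)
    n_paired = int(round(supervision_level * len(dataset)))
    paired = [dataset[i] for i in indices[:n_paired]]
    # Unpaired samples expose only one modality each.
    unpaired_x1 = [dataset[i][0] for i in indices[n_paired:]]
    unpaired_x2 = [dataset[i][1] for i in indices[n_paired:]]
    return paired, unpaired_x1, unpaired_x2
```

At a 10% supervision level, 10% of the samples feed the paired loss terms and the remaining 90% only the single-modality terms.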
Finally, we compared VAEVAE and SVAE on the real-world bioinformatics task
that motivated our study, namely learning a joint representation of mass
spectra and molecule structures.
### 4.1 Image and image: MNIST-Split
Figure 4: MNIST-Split image reconstructions of a top half and a bottom half
given (a) the top half; (b) the bottom half of the original image.
We created an image reconstruction dataset based on MNIST digits [19]. The
images were split horizontally into equal parts, either two or three depending
on the experimental setting. These regions are considered as different input
modalities.
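The construction of the modalities can be sketched as a horizontal split of each image into equal bands; the helper below is our own illustration of the idea, not the dataset code itself:

```python
import numpy as np

def split_image(img, n_parts=2):
    """Split an (H, W) image into n_parts horizontal bands, each of
    which is treated as a separate input modality."""
    h = img.shape[0]
    assert h % n_parts == 0, "image height must be divisible by n_parts"
    band = h // n_parts
    return [img[i * band:(i + 1) * band] for i in range(n_parts)]

img = np.arange(28 * 28).reshape(28, 28)
top, bottom = split_image(img, n_parts=2)  # the two MNIST-Split modalities
```

With `n_parts=3`, the same function yields the three modalities of the MNIST-Split-3 variant described below.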
Intuitively, for this task a joint representation should encode the latent
class labels, the digits. The correct digit can sometimes be inferred from
only one part of the image (i.e., one modality), but sometimes both modalities
are needed. In the latter cases, an “AND” combination of the inputs is
helpful. This is in contrast to the MNIST-SVHN task described below, where the
joint label could in principle be inferred from each input modality
independently.
Figure 5: MNIST-SVHN reconstruction for fully supervised VAEVAE.
#### Two modalities: MNIST-Split.
In the bi-modal version referred to as MNIST-Split, the MNIST images were
split in top and bottom halves of equal size, and the halves were then used as
two modalities. We tested the quality of the image reconstruction given one or
both modalities by predicting the reconstructed image label with an
independent oracle network, a ResNet-18 [20] trained on the original MNIST
dataset. The evaluation metrics were _joint coherence_ , _synergy_ , and
_cross-coherence_. For measuring joint coherence, 1000 latent space vectors
were generated from the prior and both halves of an image were then
reconstructed with the corresponding decoding networks. The concatenated
halves yield the fully reconstructed image. Since the ground truth class
labels do not exist for the randomly sampled latent vectors, we could only
perform a qualitative evaluation, see Figure 4. Synergy was defined as the
accuracy of the image reconstruction given both halves. Cross-coherence
considered the reconstruction of the full image from one half and was defined
as the fraction of class labels correctly predicted by the oracle network.
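The two quantitative metrics can be expressed generically against any oracle classifier. This is a sketch under our own naming, not the evaluation code used in the study:

```python
def cross_coherence(oracle, reconstructions, labels):
    """Fraction of reconstructions whose oracle-predicted class
    matches the ground-truth label."""
    correct = sum(1 for x, y in zip(reconstructions, labels)
                  if oracle(x) == y)
    return correct / len(labels)

def joint_coherence(oracle_a, oracle_b, samples_a, samples_b):
    """Fraction of jointly generated pairs for which both oracles
    predict the same class (no ground truth exists for samples
    drawn from the prior)."""
    agree = sum(1 for a, b in zip(samples_a, samples_b)
                if oracle_a(a) == oracle_b(b))
    return agree / len(samples_a)
```

For MNIST-Split the two "oracles" in joint coherence are the same network applied to the two reconstructed halves after concatenation; for MNIST-SVHN below they are separate classifiers per modality.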
Table 1: Evaluation of the models trained on the fully supervised datasets. | Accuracy (both) | Accuracy (top half) | Accuracy (bottom half)
---|---|---|---
MMVAE | 0.539 | 0.221 | 0.283
SVAE | 0.948 | 0.872 | 0.816
VAEVAE | 0.956 | 0.887 | 0.830
VAEVAE* | 0.958 | 0.863 | 0.778
Figure 6: MNIST-Split dataset. Accuracy of an oracle network applied to
images reconstructed given the full image (both halves), the top half and the
bottom half.
The quantitative results are shown in Table 1 and Figure 6. All PoE
architectures clearly outperformed MMVAE even when trained on the low
supervision levels. In this experiment, it is important that both experts
agree on a class label. Thus, as expected, the multiplicative PoE fits the
task much better than the additive mixture. Utilizing the novel loss function
(22) gave the best results for very low supervision (SVAE and VAEVAE*).
Figure 7: MNIST-Split-3 dataset, reproducing the logic of MNIST-Split for
input images split into three parts. The plots show the accuracy of an oracle
network given the full image or one of the three single modalities (Top,
Middle, Bottom) as input.
#### Three modalities: MNIST-Split-3.
We compared a simple generalization of the SVAE model to more than two
modalities with the canonical extension of the VAEVAE model described in
Section 3.5 on the MNIST-Split-3 data, the 3-modal version of MNIST-Split
task. Figure 7 shows that SVAE performed better when looking at the
reconstructions from individual modalities, but worse when all three
modalities are given. While the number of parameters in the bi-modal case is
the same for SVAE and VAEVAE, it grows exponentially with the number of
modalities $n$ for VAEVAE but only in the order of $n^{2}$ for SVAE; see
Figure 2 and Section 3.5 for details.
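The parameter counts stated in the Figure 2 caption can be checked with two one-liners, where $k$ is the parameter count of a single encoding network:

```python
def n_params_svae(n, k):
    """SVAE: n^2 single-modality encoders q_i^j, each with k parameters."""
    return k * n ** 2

def n_params_vaevae(n, k):
    """VAEVAE: one encoder per non-empty modality subset; summed over
    all subsets, the total number of modality inputs is n * 2^(n-1)."""
    return k * n * 2 ** (n - 1)
```

For $n=2$ both give $4k$, consistent with the bi-modal case; for $n=3$ the counts are $9k$ for SVAE versus $12k$ for VAEVAE, and the gap widens exponentially with $n$.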
### 4.2 Image and image: MNIST-SVHN
Figure 8: Performance on MNIST-SVHN for different supervision levels. (left)
Joint coherence, a share of generated images with the same digit class;
(middle) Cross-coherence, accuracy of SVHN reconstructions given MNIST;
(right) Cross-coherence, accuracy of MNIST reconstructions given SVHN.
The first dataset considered by [8] is constructed by pairing MNIST and SVHN
[21] images showing the same digit. This dataset shares some properties with
MNIST-Split, but the relation between the two modalities is different: the
digit class is derived from a concatenation of two modalities in MNIST-Split,
while in MNIST-SVHN it could be derived from any modality alone. As before,
oracle networks are trained to predict the digit classes of MNIST and SVHN
images. Joint coherence was again computed based on 1000 latent space vectors
generated from the prior. Both images were then reconstructed with the
corresponding decoding networks. A reconstruction was considered correct if
the predicted digit classes of MNIST and SVHN were the same. Cross-coherence
was measured as above.
Figure 5 shows examples of paired image reconstructions from the randomly
sampled latent space of the fully supervised VAEVAE model. The digit next to
each reconstruction shows the digit class prediction for this image. The
quantitative results in Figure 8 show that all three PoE based models reached
a joint coherence similar to MMVAE; VAEVAE even scored higher. The cross-
coherence results were best for MMVAE, but the three PoE based models
performed considerably better than the MVAE baseline reported by [8].
### 4.3 Image and text: CUB-Captions
The second benchmark considered by [8] is the CUB Images-Captions dataset [22]
containing photos of birds and their textual descriptions. Here the modalities
are of different nature but similar in dimensionality and information content.
We used the source code by Shi et al. (https://github.com/iffsid/mmvae) to
compute the same evaluation metrics as in the MMVAE study. Canonical
correlation analysis (CCA) was used for estimating joint and cross-coherences
of images and text [23]. The projection matrices $W_{x}$ for images and
$W_{y}$ for captions were pre-computed using the training set of CUB Images-
Captions and are available as part of the source code. Given a new image-
caption pair $\tilde{x},\tilde{y}$, we computed the correlation between the
two by
$\operatorname{corr}(\tilde{x},\tilde{y})=\frac{\phi(\tilde{x})^{T}\phi(\tilde{y})}{\left\lVert\phi(\tilde{x})\right\rVert\left\lVert\phi(\tilde{y})\right\rVert}$,
where $\phi(\tilde{k})=W_{k}^{T}\tilde{k}-\operatorname{avg}(W_{k}^{T}k)$.
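The correlation above can be computed directly from the pre-computed projection matrices. A sketch with assumed array shapes (training examples as rows of `X_train` and `Y_train`; the function name is ours):

```python
import numpy as np

def cca_correlation(x_new, y_new, W_x, W_y, X_train, Y_train):
    """Correlation of a new image-caption pair in the CCA projection
    space, centred by the average projection of the training set."""
    phi_x = W_x.T @ x_new - (X_train @ W_x).mean(axis=0)
    phi_y = W_y.T @ y_new - (Y_train @ W_y).mean(axis=0)
    return float(phi_x @ phi_y /
                 (np.linalg.norm(phi_x) * np.linalg.norm(phi_y)))
```

A value near 1 indicates that the image and caption encode matching content under the CCA projections learned on CUB Images-Captions.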
We employed the same image generation procedure as in the MMVAE study. Instead
of creating the images directly, we generated 2048-d feature vectors using a
pre-trained ResNet-101. In order to find the resulting image, a nearest
neighbours lookup with Euclidean distance was performed. A CNN encoder and
decoder were used (see Table A.5 and Table A.6). Prior to computing the
correlations, the captions were converted to 300-d vectors using FastText
[24]. As in the experiment before, we used the same network architectures and
hyperparameters as [8]. We sampled 1000 latent space vectors from the prior
distribution. Images and captions were then reconstructed with the decoding
networks. The joint coherence was then computed as the CCA for the resulting
image and caption averaged over the 1000 samples. Cross-coherence was computed
from caption to image and vice versa using the CCA averaged over the whole
test set.
Figure 9: CUB Images-Captions dataset. Performance metrics for different
supervision levels. (left) Joint coherence, the correlation between images and
labels reconstructed from the randomly sampled latent vectors; (middle) Cross-
coherence, the correlation of the reconstructed caption given the image;
(right) Cross-coherence, the correlation of the reconstructed image given the
caption. Figure 10: Examples of image and caption reconstructions given one
modality input for SVAE and VAEVAE. Given that the caption can be broad (e.g.,
”this bird is black and white and has a long pointy beak” in the example), it
can fit many different images. In this case, the image from the caption
reconstruction tends to better fit the description than the original image.
The same goes for images: one of the reconstructed images has a bird with a
red belly which got reflected in the generated caption even though it was not
a part of the original caption.
As can be seen in Figure 9, VAEVAE showed the best performance among all
models. With full supervision the VAEVAE model outperformed MMVAE in all three
metrics. The cross-coherence of the three PoE models was higher or equal to
MMVAE except for very low supervision levels. All three PoE based models were
consistently better than MVAE.
### 4.4 Chemical structures and mass spectra
We evaluated SVAE and VAEVAE on a real-world bioinformatics application. The
models were used for annotating mass spectra with molecule structures.
Discovering the chemical composition of a biological sample is one of the key
problems in analytical chemistry. Mass spectrometry is a common high-
throughput analytical method: thousands of spectra can be generated in a short
time, but identification rates of the corresponding molecules are still low
for most studies.
We approach the spectra annotation problem with bi-modal VAEs in the following
way: the SVAE and VAEVAE models are trained on a subset of the MoNA (Mass Bank
of North America) dataset [25], where the mass spectra suitable for molecule
identification are assembled. We focused on tandem mass spectra from only one
type of mass spectrometer collected in the positive ion mode. The mass spectra
were the first modality. To represent a molecule structure, we used a molecule
structural fingerprint, a bit string in which each bit indicates whether a
substructure from a predefined set of possible substructures is present in the
molecule. Fingerprints of length $2149$ were used as the second modality.
During testing, only the mass spectrum was provided and the fingerprint was
predicted. A molecule structure still has to be identified based on the
predicted structural fingerprint. We did this by ranking the candidate
molecules from a molecule database based on the cross entropy loss between the
molecule fingerprint and the predicted fingerprint.
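The ranking step can be sketched as follows, where the predicted fingerprint holds per-bit probabilities and each candidate has a binary fingerprint; the helper names are ours:

```python
import math

def bce(predicted, candidate):
    """Binary cross-entropy between a predicted fingerprint
    (per-bit probabilities) and a candidate's binary fingerprint."""
    eps = 1e-12  # guard against log(0)
    return -sum(b * math.log(p + eps) + (1 - b) * math.log(1 - p + eps)
                for p, b in zip(predicted, candidate))

def rank_candidates(predicted, candidates):
    """Return candidate indices sorted from best (lowest loss) to worst."""
    scores = [(bce(predicted, c), i) for i, c in enumerate(candidates)]
    return [i for _, i in sorted(scores)]
```

The rank of the correct candidate in this ordering is the quantity reported in the CASMI evaluation below.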
For evaluation, we used competition data from the CASMI2017 (Critical
Assessment of Small Molecule Identification) challenge [26]. Since the
molecule structure identification is based on ranking the candidate list, we
focused on the part of the challenge where the candidate list is already
provided for each spectrum. In the preliminary experiments, SVAE outperformed
VAEVAE in predicting molecule structural fingerprints from mass spectra, see
Figure 11.
Figure 11: A) The first modality: a tandem mass spectrum; B) The second
modality: a molecule structure fingerprint, a bit string where each bit
represents if a given substructure is present in the molecule; C) The
performance evaluation: the fingerprint is predicted with only the spectrum as
input. The candidate molecules are ranked by cross-entropy loss between their
fingerprints and the predicted fingerprint. The rank of the correct candidate
is used for comparing the performance of different methods. D) The evaluation
results for CASMI2017 challenge, with 112 test spectra and 200-10000 candidate
molecules for each spectrum: The plot shows for how many spectra the correct
candidate appeared in top $k$ candidates. SVAE performs better than VAEVAE in
this preliminary evaluation.
## 5 Discussion and Conclusions
We studied bi-modal variational autoencoders (VAEs) based on a product-of-
experts (PoE) architecture, in particular VAEVAE as proposed by [2] and a new
model, SVAE, which we derived in an axiomatic way and which represents a
generalization of the VAEVAE architecture. The models learn representations
that allow coherent sampling of the modalities and accurate sampling of one
modality given the other. They work well in the semi-supervised setting, that
is, not all modalities need to be always observed during training. It has been
argued that the mixture-of-experts (MoE) approach MMVAE is preferable to a PoE
for multimodal VAEs [8], in particular in the fully supervised setting (i.e.,
when all data are paired). This conjecture was based on a comparison with the
MVAE model [7], but is refuted by our experiments showing that VAEVAE and our
newly proposed SVAE can outperform MMVAE on experiments conducted by [8].
Intuitively, PoEs are more tailored towards an “AND” (multiplicative)
combination of the input modalities. This is supported by our experiments on
halved digit images, where a conjunctive combination is helpful and the PoE
models perform much better than MMVAE. In a real-world bioinformatics task,
SVAE outperformed VAEVAE in predicting molecule structural fingerprints from
mass spectra. We also extended SVAE and VAEVAE to the 3-modal case and showed
that SVAE achieves better performance on reconstructions from individual
modalities while having fewer parameters than VAEVAE.
## References
* [1] O. Caglayan, P. Madhyastha, L. Specia, L. Barrault, Probing the need for visual context in multimodal machine translation, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT), Vol. 1, 2019, pp. 4159–4170.
* [2] M. Wu, N. Goodman, Multimodal generative models for compositional representation learning, arXiv:1912.05075 (2019).
* [3] J. E. van Engelen, H. H. Hoos, A survey on semi-supervised learning, Machine Learning 109 (2) (2020) 373–440.
* [4] D. P. Kingma, M. Welling, Auto-encoding variational Bayes, in: International Conference on Learning Representations (ICLR), 2014.
* [5] G. E. Hinton, Training products of experts by minimizing contrastive divergence, Neural Computation 14 (8) (2002) 1771–1800.
* [6] M. Welling, Product of experts, Scholarpedia 2 (10) (2007) 3879.
* [7] M. Wu, N. Goodman, Multimodal generative models for scalable weakly-supervised learning, in: Advances in Neural Information Processing Systems 31 (NIPS), 2018, pp. 5575–5585.
* [8] Y. Shi, S. N, B. Paige, P. Torr, Variational mixture-of-experts autoencoders for multi-modal deep generative models, in: Advances in Neural Information Processing Systems (NeurIPS), 2019, pp. 15718–15729.
* [9] W. Wang, X. Yan, H. Lee, K. Livescu, Deep variational canonical correlation analysis, arXiv:1610.03454 (2016).
* [10] K. Sohn, H. Lee, X. Yan, Learning structured output representation using deep conditional generative models, in: Advances in Neural Information Processing Systems (NeurIPS), 2015, pp. 3483–3491.
* [11] Y. Tian, J. Engel, Latent translation: Crossing modalities by bridging generative models, arXiv:1902.08261 (2019).
* [12] C. Silberer, M. Lapata, Learning grounded meaning representations with autoencoders, in: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), Vol. 1, Association for Computational Linguistics, 2014, pp. 721–732.
* [13] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, A. Y. Ng, Multimodal deep learning, in: Proceedings of the 28th International Conference on International Conference on Machine Learning (ICML), 2011, p. 689–696.
* [14] D. P. Kingma, S. Mohamed, D. J. Rezende, M. Welling, Semi-supervised learning with deep generative models, in: Advances in Neural Information Processing Systems (NeurIPS), 2014, pp. 3581–3589.
* [15] M. Suzuki, K. Nakayama, Y. Matsuo, Joint Multimodal Learning with Deep Generative Models, in: International Conference on Learning Representations Workshop (ICLR) Workshop Track, 2017.
* [16] D. J. Rezende, S. Mohamed, D. Wierstra, Stochastic backpropagation and approximate inference in deep generative models, in: Proceedings of the 31st International Conference on Machine Learning (ICML), Vol. 32(2) of Proceedings of Machine Learning Research, PMLR, 2014, pp. 1278–1286.
* [17] R. N. Yadav, A. Sardana, V. P. Namboodiri, R. M. Hegde, Bridged variational autoencoders for joint modeling of images and attributes, 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) (2020) 1468–1476.
* [18] Y. Burda, R. Grosse, R. Salakhutdinov, Importance weighted autoencoders, in: International Conference on Learning Representations (ICLR), 2016. arXiv:1509.00519.
* [19] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE 86 (11) (1998) 2278–2324.
* [20] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
* [21] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, A. Y. Ng, Reading digits in natural images with unsupervised feature learning, in: NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
* [22] C. Wah, S. Branson, P. Welinder, P. Perona, S. Belongie, The Caltech-UCSD Birds-200-2011 Dataset, Tech. Rep. CNS-TR-2011-001, California Institute of Technology (2011).
* [23] D. Massiceti, P. K. Dokania, N. Siddharth, P. H. S. Torr, Visual dialogue without vision or dialogue, in: NeurIPS Workshop on Critiquing and Correcting Trends in Machine Learning, 2018. arXiv:1812.06417.
* [24] P. Bojanowski, E. Grave, A. Joulin, T. Mikolov, Enriching word vectors with subword information, Transactions of the Association for Computational Linguistics 5 (2017) 135–146.
* [25] M. Vinaixa, E. L. Schymanski, S. Neumann, M. Navarro, R. M. Salek, O. Yanes, Mass spectral databases for LC/MS- and GC/MS-based metabolomics: State of the field and future prospects, TrAC Trends in Analytical Chemistry 78 (2016) 23–35. doi:10.1016/J.TRAC.2015.09.005.
URL https://www.sciencedirect.com/science/article/pii/S0165993615300832
* [26] E. L. Schymanski, C. Ruttkies, M. Krauss, C. Brouard, T. Kind, K. Dührkop, F. Allen, A. Vaniya, D. Verdegem, S. Böcker, J. Rousu, H. Shen, H. Tsugawa, T. Sajed, O. Fiehn, B. Ghesquière, S. Neumann, Critical assessment of small molecule identification 2016: automated methods, Journal of Cheminformatics 9 (1) (Mar. 2017). doi:10.1186/s13321-017-0207-1.
URL https://doi.org/10.1186/s13321-017-0207-1
* [27] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, in: International Conference on Learning Representations (ICLR), 2015.
## Appendix A Details of experiments
The encoder and decoder architectures for each experiment and modality are
listed below. To implement the joint encoding network (VAEVAE architecture), a
fully connected layer followed by ReLU is added to the encoding architecture
for each modality. Another fully connected layer accepts the concatenated
features from the two modalities as input and outputs the latent space
parameters. The Adam optimiser is used for learning in all models [27]. We
used a padding of 1 pixel if the stride was 2 pixels and no padding otherwise.
#### MNIST-Split.
The models are trained for 200 epochs with the learning rate $2\cdot 10^{-4}$.
The best epoch is chosen by the highest accuracy of the reconstruction from
the top half evaluated on the validation set. We used a latent space
dimensionality of $L=64$. The network architectures are described in Table
A.2.
Encoder
---
Input $\in\mathbb{R}^{3\times 32\times 32}$
$4\times 4$ conv. 64 stride 2 ReLU
$4\times 4$ conv. 128 stride 2 ReLU
$4\times 4$ conv. 256 stride 2 ReLU
FC. 786 ReLU
FC. L, FC. L
Decoder
---
Input $\in\mathbb{R}^{L}$
FC. L ReLU
FC. 512 ReLU
FC. 112 ReLU
$4\times 4$ upconv. 56 stride 1 ReLU
$4\times 4$ upconv. 28 stride 2 ReLU
Table A.2: Network architectures for MNIST-Split for each image half.
#### MNIST-SVHN.
The models were trained for 50 epochs with learning rates $10^{-3}$ and
$10^{-4}$. Only the results for the best learning rate are reported ($10^{-3}$
for VAEVAE and VAEVAE* and $10^{-4}$ for SVAE). The best epoch was chosen
based on the highest joint coherence evaluated on the validation set. We used
a latent space dimensionality of $L=20$. The network architectures are
summarized in Table A.4 and Table A.3 for the MNIST and SVHN modality,
respectively.
Encoder
---
Input $\in\mathbb{R}^{1\times 28\times 28}$
FC. 400 ReLU
FC. $L$, FC. $L$
Decoder
---
Input $\in\mathbb{R}^{L}$
FC. 400 ReLU
FC. 1 x 28 x 28 Sigmoid
Table A.3: Network architectures for MNIST-SVHN: MNIST.
Encoder
---
Input $\in\mathbb{R}^{3\times 32\times 32}$
$4\times 4$ conv. 32 stride 2 ReLU
$4\times 4$ conv. 64 stride 2 ReLU
$4\times 4$ conv. 128 stride 2 ReLU
$4\times 4$ conv. $L$ stride 1 , $4\times 4$ conv. $L$ stride 1
Decoder
---
Input $\in\mathbb{R}^{L}$
$4\times 4$ upconv. 128 stride 1 ReLU
$4\times 4$ upconv. 64 stride 2 ReLU
$4\times 4$ upconv. 32 stride 2 ReLU
$4\times 4$ upconv. 3 stride 2 Sigmoid
Table A.4: Network architectures for MNIST-SVHN: SVHN.
#### CUB-Captions.
The models were trained for 200 epochs with the learning rate $10^{-4}$. The
best epoch was chosen based on the highest joint coherence evaluated on the
validation set. We used a latent space dimensionality of $L=64$. The network
architectures are described in Table A.5 and Table A.6 for the text and image
modality, respectively.
Encoder
---
Input $\in\mathbb{R}^{1590}$
Word Emb. 256
$4\times 4$ conv. 32 stride 2 BatchNorm2d ReLU
$4\times 4$ conv. 64 stride 2 BatchNorm2d ReLU
$4\times 4$ conv. 128 stride 2 BatchNorm2d ReLU
$1\times 4$ conv. 256 stride $1\times 2$ pad $0\times 1$ & BatchNorm2d ReLU
$1\times 4$ conv. 512 stride $1\times 2$ pad $0\times 1$ & BatchNorm2d ReLU
$4\times 4$ conv. $L$ stride 1 , $4\times 4$ conv. $L$ stride 1
Decoder
---
Input $\in\mathbb{R}^{L}$
$4\times 4$ upconv. 512 stride 1 ReLU
$1\times 4$ upconv. 256 stride $1\times 2$ pad $0\times 1$ & BatchNorm2d ReLU
$1\times 4$ upconv. 128 stride $1\times 2$ pad $0\times 1$ & BatchNorm2d ReLU
$4\times 4$ upconv. 64 stride 2 BatchNorm2d ReLU
$4\times 4$ upconv. 32 stride 2 BatchNorm2d ReLU
$4\times 4$ upconv. 1 stride 2 ReLU
Word Emb.${}^{\text{T}}$ 1590
Table A.5: Network architectures for CUB-Captions language processing.
Encoder
---
Input $\in\mathbb{R}^{2048}$
FC. 1024 ELU
FC. 512 ELU
FC. 256 ELU
FC. L, FC. L
Decoder
---
Input $\in\mathbb{R}^{L}$
FC. 256 ELU
FC. 512 ELU
FC. 1024 ELU
FC. 2048
Table A.6: Network architectures for CUB-Captions image processing.
#### MNIST-Split-Three.
The models were trained for 50 epochs with the learning rate $2\cdot 10^{-4}$.
The best epoch was chosen by the highest accuracy of the reconstruction from
the top half evaluated on the validation set. We used a latent space
dimensionality of $L=64$. The network architectures are described in Table
A.7.
Encoder
---
Input $\in\mathbb{R}^{3\times 32\times 32}$
$4\times 4$ conv. 64 stride 2 ReLU
$4\times 4$ conv. 128 stride 2 ReLU
$4\times 4$ conv. 256 stride 2 ReLU
FC. 786 ReLU
FC. L, FC. L
Decoder
---
Input $\in\mathbb{R}^{L}$
FC. L ReLU
FC. 512 ReLU
FC. 112 ReLU
$4\times 4$ upconv. 56 stride 1 ReLU
$2\times 4$ upconv. 28 stride 2 ReLU
Table A.7: Network architectures for MNIST-Split-Three for each image part.
#### Spectra and molecule fingerprints.
The models were trained for 140 epochs with the learning rate $10^{-4}$. The
best epoch was chosen by the highest accuracy of the test set. We used a
latent space dimensionality of $L=300$. The network architectures are
described in Table A.8 and Table A.9.
Encoder
---
Input $\in\mathbb{R}^{11000}$
FC. 10000 & BatchNorm1d ReLU
FC. 5000 & BatchNorm1d ReLU
FC. 2048 & BatchNorm1d ReLU
FC. L, FC. L
Decoder
---
Input $\in\mathbb{R}^{L}$
FC. 3000 & BatchNorm1d ReLU
FC. 5000 & BatchNorm1d ReLU
FC. 10000 & BatchNorm1d ReLU
FC. 11000
Table A.8: Network architectures for the spectra-fingerprints experiment,
spectra modality.
Encoder
---
Input $\in\mathbb{R}^{2149}$
FC. 1024 ReLU
FC. 1000 ReLU
FC. L, FC. L
Decoder
---
Input $\in\mathbb{R}^{L}$
FC. 500 ReLU
FC. 1024 ReLU
FC. 2149 & Sigmoid
Table A.9: Network architectures for the spectra-fingerprints experiment,
fingerprints modality.
# Learning by Watching: Physical Imitation of
Manipulation Skills from Human Videos
Haoyu Xiong∗†, Quanzhou Li∗, Yun-Chun Chen∗, Homanga Bharadhwaj∗, Samarth
Sinha∗, Animesh Garg∗‡ ∗University of Toronto & Vector Institute, †Tianjin
University, ‡Nvidia.
###### Abstract
Learning from visual data opens the potential to acquire a wide range of
manipulation behaviors by leveraging human demonstrations without specifying
each of them mathematically, but rather through natural task specification. In
this paper, we present Learning by Watching (LbW), an algorithmic framework
for policy learning through imitation from a single video specifying the task.
The key insights of our method are two-fold. First, since the human arms may
not have the same morphology as robot arms, our framework learns unsupervised
human to robot translation to overcome the morphology mismatch issue. Second,
to capture the details in salient regions that are crucial for learning state
representations, our model performs unsupervised keypoint detection on the
translated robot videos. The detected keypoints form a structured
representation that contains semantically meaningful information and can be
used directly for computing reward and policy learning. We evaluate the
effectiveness of our LbW framework on five robot manipulation tasks, including
reaching, pushing, sliding, coffee making, and drawer closing. Extensive
experimental evaluations demonstrate that our method performs favorably
against the state-of-the-art approaches. More results and analysis are
available at pair.toronto.edu/lbw-kp/.
## I Introduction
Robotic _Imitation Learning_ , also known as _Learning from Demonstration_
(LfD), allows robots to acquire manipulation skills from expert
demonstrations through learning algorithms [1, 2]. While progress has been
made by existing methods, collecting expert demonstrations remains expensive
and challenging as it assumes access to both observations and actions via
kinesthetic teaching [1, 2], teleoperation [3, 4], or crowdsourcing platform
[5, 6, 7, 8]. In contrast, humans have the ability to imitate manipulation
skills by watching third-person performances. Motivated by this, recent
methods resort to endowing robots with the ability to learn manipulation
skills via physical imitation from human videos [9, 10, 11].
Unlike conventional LfD methods [1, 2, 3, 4], which assume access to both
expert observations and actions, approaches based on imitation from human
videos relax the dependencies, requiring _only_ human videos as supervision
[9, 10, 11]. One of the main challenges of these imitation learning methods is
how to minimize the domain gap between humans and robots. For instance, human
arms may have different morphologies than those of robot arms. To overcome the
morphology mismatch issue, existing imitation learning methods [9, 10, 11]
typically leverage image-to-image translation models (e.g., CycleGAN [12]) to
translate videos from the human domain to the robot domain. However, simply
adopting vanilla image-to-image translation models still does not solve the
imitation from human videos task, since the image-to-image translation models
often capture only the macro features at the expense of neglecting the details
in salient regions that are crucial for downstream tasks [13].
Figure 1: LbW. Given a single human video, our LbW framework learns human to
robot translation followed by unsupervised keypoint detection. The resulting
keypoint-based representations are semantically meaningful and can be used to
guide the robot to learn manipulation skills through physical imitation.
In this paper, we present Learning by Watching (LbW), a framework for physical
imitation from human videos for learning robot manipulation skills. As shown
in Figure 1, our framework is composed of a perception module and a policy
learning module for physical imitation. The perception module aims at
minimizing the domain gap between the human domain and the robot domain as
well as capturing the details of salient regions that are crucial for
downstream tasks. To achieve this, our perception module learns to translate
the input human video to the robot domain with an unsupervised image-to-image
translation model, followed by performing _unsupervised_ keypoint detection on
the translated robot video. The detected keypoints then serve as a structured
representation that contains semantically meaningful information and can be
used as input to the downstream policy learning module.
To learn manipulation skills, we cast this as a _reinforcement learning_ (RL)
problem, where we aim to enable the robot to perform physically viable
learning with the objective to imitate similar behavior as demonstrated in the
translated robot video under context-specific constraints. We evaluate the
effectiveness of our LbW framework on five robot manipulation tasks, including
reaching, pushing, sliding, coffee making, and drawer closing in two
simulation environments (i.e., the Fetch-Robot manipulation in OpenAI gym [14]
and meta-world [15]). Extensive experimental results show that our algorithm
compares favorably against the state-of-the-art approaches.
The main contributions are summarized as follows:
1. We present a framework for physical imitation from human videos for learning robot manipulation skills.
2. Our method learns structured representations based on unsupervised keypoint detection that can be used directly for computing task reward and policy learning.
3. Extensive experimental results show that our LbW framework achieves the state of the art on five robot manipulation tasks.
## II Related Work
Imitation from human videos. Existing imitation learning approaches collect
demonstrations by kinesthetic teaching [1, 2], teleoperation [3, 4], or
through crowdsourcing platform [5, 6, 7, 8], and assume access to both expert
observations and expert actions at every time step. Recent progress in deep
representation learning has accelerated the development of imitation from
videos [11, 10, 9, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. While applying
image-to-image translation models to achieve imitation from human videos has
been explored [9, 11, 24], the dependency on paired human-robot training data
makes these methods hard to scale.
Among them, AVID [10] is most closely related to our work: it translates human
demonstrations to the robot domain via CycleGAN [12] in an unpaired data setting.
However, directly encoding the translated images using a feature extractor for
deriving state representations may suffer from visual artifacts generated by
image-to-image translation models, leading to suboptimal performance on
downstream tasks.
Different from methods based on image-to-image translation models, Maximilian
et al. [22] leverage 3D detection to minimize the visual gap between the human
domain and the robot domain. SFV [21] enables humanoid characters to learn
skills from videos based on deep pose estimation. Our method shares a similar
reward computing scheme as these approaches [22, 21, 25, 23]. The difference
is that these methods require additional label data, whereas our framework is
learned in an unsupervised fashion.
Cycle consistency. The idea of exploiting cycle consistency constraints has
been widely applied in the context of image-to-image translation. CycleGAN
[12] learns to translate images in an unpaired data setting by exploiting the
idea of cycle consistency. UNIT [26] achieves image-to-image translation by
assuming a shared latent space between the two domains. Other methods explore
translating images across multiple domains [27] or learning to generate
diverse outputs [28, 29, 30]. Recently, the idea of cycle consistency has also
been applied to address various problems such as domain adaptation [31, 32, 33, 34]
and policy learning [10, 35]. In our work, the LbW framework employs a MUNIT
[30] model to perform human to robot translation for achieving physical
imitation from human videos. We note that other unpaired image-to-image
translation models are also applicable in our task. We leave the discussion on
the effect of different image-to-image translation models as future work.
Unsupervised keypoint detection. Detecting keypoints from images without
supervision has been studied in the literature [36, 37]. In the context of
computer vision, existing methods typically infer keypoints by assuming access
to the temporal transformation between video frames [36] or employing a
differentiable keypoint bottleneck network without access to frame transition
information [37]. Other approaches estimate keypoints based on the access to
known image transformations and dense correspondences between features [38,
39, 40].
Apart from the aforementioned approaches, some recent methods focus on
learning keypoint detection for image-based control tasks [41, 42, 43, 44]. In
our method, we adopt Transporter [41] to detect keypoints from the translated
robot video in an unsupervised manner. We note that while other unsupervised
keypoint detection methods can also be used in our framework, the focus of our
paper lies in learning structured representations that are semantically
meaningful and can be used directly for downstream policy learning. We leave
the development of unsupervised keypoint detection methods as future work.
## III Preliminaries
To achieve physical imitation from human videos, we decompose the problem into
a series of tasks: 1) human to robot translation, 2) unsupervised keypoint-
based representation learning, and 3) physical imitation with RL. Here, we
review the first two tasks, in which our method builds upon existing
algorithms.
Figure 2: Overview of the proposed LbW. Our LbW framework is composed of
three main components: an image-to-image translation network $T$, a keypoint
detector $\Psi$, and a policy network $\pi$. The image-to-image translation
network translates the input human demonstration video frame by frame to
generate a robot demonstration video. Next, the keypoint detector takes the
generated robot demonstration video as input and extracts the keypoint-based
representation for each frame to form a keypoints trajectory. At each time
step, the keypoint detector also extracts the keypoint-based representation
for the current observation. The reward for physical imitation is defined by a
distance metric $d$ that measures the distance between the keypoint-based
representation of the current observation and those in the keypoints
trajectory. Finally, the keypoint-based representation of the current
observation is passed to the policy network to predict an action that is used
to interact with the environment.
### III-A Unsupervised Image-to-Image Translation
Similar to existing methods [10, 9], we cast human to robot translation as an
unsupervised image-to-image translation problem. Specifically, we aim to learn
a model that translates images from a source domain $X$ (e.g., human domain)
to a target domain $Y$ (e.g., robot domain) _without_ paired training data
[12, 30, 29, 26]. In our method, we adopt MUNIT [30] as the image-to-image
translation network to achieve human to robot translation. MUNIT learns to
translate images between the two domains by assuming that an image
representation can be disentangled into a domain-invariant content code
(encoded by a content encoder $E^{c}$) and a domain-specific style code
(encoded by a style encoder $E^{s}$). The content encoders $E_{X}^{c}$ and
$E_{Y}^{c}$ are shared in the two domains, whereas the style encoders
$E_{X}^{s}$ and $E_{Y}^{s}$ of the two domains do _not_ share weights. To
translate an image from one domain to the other, we combine its content code
with a style code sampled from the other domain. The translations are learned
to generate images that are indistinguishable from images in the translated
domain. Given an image $x$ from the source domain $X$ and an image $y$ from
the target domain $Y$, we define the adversarial loss
$\mathcal{L}_{\mathrm{GAN}}^{x}$ in the source domain as
$\mathcal{L}_{\mathrm{GAN}}^{x}=\mathbb{E}\bigg{[}\log
D_{X}(x)+\log\Big{(}1-D_{X}\big{(}G_{X}(c_{y},s_{x})\big{)}\Big{)}\bigg{]},$
(1)
where $c_{y}=E_{Y}^{c}(y)$ is the content code of image $y$,
$s_{x}=E_{X}^{s}(x)$ is the style code of image $x$, $G_{X}$ is a generator
that takes as input a content code $c_{y}$ and a style code $s_{x}$ and
generates images whose distribution is similar to that of the source
domain, and $D_{X}$ is a discriminator that aims to distinguish between the
translated images generated by $G_{X}$ and the images in the source domain.
The adversarial loss $\mathcal{L}_{\mathrm{GAN}}^{y}$ in the target domain can
be similarly defined.
In addition to the adversarial losses, MUNIT applies reconstruction losses on
images and content and style codes to regularize the model learning. For the
source domain, the image reconstruction loss $\mathcal{L}_{\mathrm{rec}}^{x}$
is defined as
$\mathcal{L}_{\mathrm{rec}}^{x}=\mathbb{E}\Big{[}\big{\|}G_{X}(c_{x},s_{x})-x\big{\|}\Big{]},$
(2)
the content reconstruction loss $\mathcal{L}_{\mathrm{rec}}^{c_{x}}$ is
defined as
$\mathcal{L}_{\mathrm{rec}}^{c_{x}}=\mathbb{E}\Big{[}\big{\|}E_{Y}^{c}\big{(}G_{Y}(c_{x},s_{y})\big{)}-c_{x}\big{\|}\Big{]},$
(3)
and the style reconstruction loss $\mathcal{L}_{\mathrm{rec}}^{s_{x}}$ is
defined as
$\mathcal{L}_{\mathrm{rec}}^{s_{x}}=\mathbb{E}\Big{[}\big{\|}E_{X}^{s}\big{(}G_{X}(c_{y},s_{x})\big{)}-s_{x}\big{\|}\Big{]}.$
(4)
The image reconstruction loss $\mathcal{L}_{\mathrm{rec}}^{y}$, the content
reconstruction loss $\mathcal{L}_{\mathrm{rec}}^{c_{y}}$, and the style
reconstruction loss $\mathcal{L}_{\mathrm{rec}}^{s_{y}}$ in the target domain
can be derived similarly.
The total loss $\mathcal{L}_{\mathrm{MUNIT}}$ for training MUNIT is given by
$\begin{split}\mathcal{L}_{\mathrm{MUNIT}}&=\mathcal{L}_{\mathrm{GAN}}^{x}+\mathcal{L}_{\mathrm{GAN}}^{y}+\lambda_{\mathrm{image}}(\mathcal{L}_{\mathrm{rec}}^{x}+\mathcal{L}_{\mathrm{rec}}^{y})\\\
+&\lambda_{\mathrm{content}}(\mathcal{L}_{\mathrm{rec}}^{c_{x}}+\mathcal{L}_{\mathrm{rec}}^{c_{y}})+\lambda_{\mathrm{style}}(\mathcal{L}_{\mathrm{rec}}^{s_{x}}+\mathcal{L}_{\mathrm{rec}}^{s_{y}}),\end{split}$
(5)
where $\lambda_{\mathrm{image}}$, $\lambda_{\mathrm{content}}$, and
$\lambda_{\mathrm{style}}$ are hyperparameters used to control the relative
importance of the respective loss functions.
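As an illustration of how the weights in (5) combine the individual terms, here is a minimal Python sketch; the scalar loss values and the function names are hypothetical stand-ins for quantities that, in the actual model, are computed from network outputs.

```python
import numpy as np

def l1(a, b):
    # L1 distance used by the image/content/style reconstruction terms (2)-(4)
    return float(np.mean(np.abs(a - b)))

def munit_total_loss(gan_x, gan_y, rec, lambdas):
    """Weighted sum of Eq. (5). `rec` maps 'x','y','c_x','c_y','s_x','s_y'
    to the corresponding reconstruction losses; `lambdas` holds
    (lambda_image, lambda_content, lambda_style)."""
    lam_img, lam_c, lam_s = lambdas
    return (gan_x + gan_y
            + lam_img * (rec['x'] + rec['y'])
            + lam_c * (rec['c_x'] + rec['c_y'])
            + lam_s * (rec['s_x'] + rec['s_y']))
```

In practice the image reconstruction terms usually receive a much larger weight than the content and style terms, which is why they are controlled by separate hyperparameters.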
### III-B Unsupervised Keypoint Detection
To perform control tasks, existing approaches typically resort to learning
state representations based on image observations [45, 46, 47, 48, 10].
However, the image observations generated by image-to-image translation models
often capture only macro features while neglecting the details in salient
regions that are crucial for downstream tasks. Deriving state representations
by encoding the translated image observations using a feature encoder would
lead to suboptimal performance. On the other hand, existing methods may also
suffer from visual artifacts generated by the image-to-image translation
models. In contrast to these approaches, we leverage Transporter [41] to
detect the keypoints in each translated video frame in an unsupervised
fashion. The detected keypoints form a structured representation that captures
the robot arm pose and the location of the interacting object, providing
semantically meaningful information for downstream control tasks while
avoiding the negative impact of visual artifacts caused by the imperfect
image-to-image translation.
To realize the learning of unsupervised keypoint detection, Transporter
leverages object motion between a pair of video frames to transform a video
frame into the other by transporting features at the detected keypoint
locations. Given two video frames $x$ and $y$, Transporter first extracts
feature maps $\Phi(x)$ and $\Phi(y)$ for both video frames using a feature
encoder $\Phi$ and detects $K$ $2$-dimensional keypoint locations $\Psi(x)$
and $\Psi(y)$ for both video frames using a keypoint detector $\Psi$.
Transporter then synthesizes the feature map $\hat{\Phi}(x,y)$ by suppressing
the feature map of $x$ around each keypoint location in $\Psi(x)$ and
$\Psi(y)$ and incorporating the feature map of $y$ around each keypoint
location in $\Psi(y)$:
$\hat{\Phi}(x,y)=(1-\mathcal{H}_{\Psi(x)})\cdot(1-\mathcal{H}_{\Psi(y)})\cdot\Phi(x)+\mathcal{H}_{\Psi(y)}\cdot\Phi(y),$
(6)
where $\mathcal{H}_{\Psi(\cdot)}$ is a Gaussian heat map with peaks centered
at each keypoint location in $\Psi(\cdot)$.
Next, the transported feature map $\hat{\Phi}(x,y)$ is passed to a refinement
network $R$ to reconstruct the video frame $y$. We define the loss
$\mathcal{L}_{\mathrm{transporter}}$ for training Transporter as
$\mathcal{L}_{\mathrm{transporter}}=\mathbb{E}\Big{[}\big{\|}R\big{(}\hat{\Phi}(x,y)\big{)}-y\big{\|}\Big{]}.$
(7)
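As a concrete illustration of (6) and the Gaussian heat maps $\mathcal{H}_{\Psi(\cdot)}$, the NumPy sketch below transports single-channel features between two frames; the function names and the toy keypoints are ours, and the real model operates on multi-channel CNN feature maps.

```python
import numpy as np

def gaussian_heatmap(keypoints, h, w, sigma=1.0):
    """H_{Psi(.)}: a [0,1] map with a Gaussian peak at each (row, col) keypoint."""
    ys, xs = np.mgrid[0:h, 0:w]
    peaks = [np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
             for r, c in keypoints]
    return np.clip(np.sum(peaks, axis=0), 0.0, 1.0)

def transport(phi_x, phi_y, kp_x, kp_y, sigma=1.0):
    """Eq. (6): suppress phi_x around both keypoint sets, then paste phi_y
    around the target keypoints kp_y."""
    h, w = phi_x.shape
    h_x = gaussian_heatmap(kp_x, h, w, sigma)
    h_y = gaussian_heatmap(kp_y, h, w, sigma)
    return (1 - h_x) * (1 - h_y) * phi_x + h_y * phi_y
```

At a target keypoint the output equals the target features, while far from all keypoints the source features pass through unchanged, which is exactly what makes the reconstruction loss (7) force the detector to place keypoints on the moving parts.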
In the next section, we leverage the Transporter model to detect keypoints for
each translated video frame. The detected keypoints are then used as a
structured representation for defining the reward function and as the input of
the policy network to predict an action that is used to interact with the
environment.
## IV Proposed Method
In this section, we first provide an overview of our approach. We then
describe the module for unsupervised domain transfer with keypoint-based
representations. Finally, we describe the details of physical imitation with RL.
### IV-A Algorithmic Overview
We consider the task of physical imitation from human videos for learning
robot manipulation skills. In this setting, we assume we have access to a
_single_ human demonstration video $V_{X}=\\{x_{i}^{E}\\}_{i=1}^{N}$ of length
$N$ depicting a human performing a specific task (e.g., pushing a block) that
we want the robot to learn from, where $x_{i}^{E}\in\mathbb{R}^{H\times
W\times 3}$ and $H\times W$ is the spatial size of $x_{i}^{E}$. We note that
the human actions are _not_ given in our setting. Our goal is to develop a
learning algorithm that allows the robot to _imitate_ the behavior
demonstrated by the human in the human demonstration video $V_{X}$. To achieve
this, we present LbW, a framework that comprises three components: 1) the
image-to-image translation network $T$ (from MUNIT [30]), 2) the keypoint
detector $\Psi$ (from the keypoint detector of Transporter [41]), and 3) the
policy network $\pi$.
As shown in Figure 2, given a human demonstration video $V_{X}$ and the
current observation $O_{t}\in\mathbb{R}^{H\times W\times 3}$ at time $t$, we
first apply the image-to-image translation network $T$ to each frame
$x_{i}^{E}$ in the human demonstration video $V_{X}$ and translate $x_{i}^{E}$
to a robot demonstration video frame $v_{i}^{E}\in\mathbb{R}^{H\times W\times
3}$. Next, the keypoint detector $\Psi$ takes each translated robot video
frame $v_{i}^{E}$ as input and extracts the keypoint-based representation
$z_{i}^{E}=\Psi(v_{i}^{E})\in\mathbb{R}^{K\times 2}$, where $K$ denotes the
number of keypoints. Similarly, we also apply the keypoint detector $\Psi$ to
the current observation $O_{t}$ to extract the keypoint-based representation
$z_{t}=\Psi(O_{t})\in\mathbb{R}^{K\times 2}$. To compute the reward for
physical imitation, we define a distance metric $d$ that computes the
distances between the keypoint-based representation $z_{t}$ of the current
observation $O_{t}$ and each of the keypoint-based representations $z_{i}^{E}$
of the translated robot video frames $v_{i}^{E}$. Finally, the policy network
$\pi$ takes as input the keypoint-based representation $z_{t}$ of the current
observation $O_{t}$ to predict an action $a_{t}=\pi(z_{t})$ that is used to
guide the robot to interact with the environment. The details of each
component are described in the following subsections.
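The per-episode pipeline above can be sketched as follows; `translate`, `detect`, `policy`, and `env` are hypothetical callables standing in for the MUNIT translator $T$, the Transporter keypoint detector $\Psi$, the policy $\pi$, and the simulator, and the reward computation is omitted here.

```python
import numpy as np

def lbw_rollout(human_frames, translate, detect, policy, env, horizon=50):
    """One LbW episode following Fig. 2 (reward computation omitted)."""
    # 1) Translate the human demonstration frame by frame.
    robot_demo = [translate(x) for x in human_frames]
    # 2) Keypoints trajectory {z_i^E} of the translated robot demo.
    z_demo = [detect(v) for v in robot_demo]          # each z_i^E: (K, 2)
    # 3) At each step, detect keypoints of O_t and query the policy.
    obs = env.reset()
    states = []
    for _ in range(horizon):
        z_t = detect(obs)                             # z_t = Psi(O_t)
        states.append(z_t)
        obs, done = env.step(policy(z_t))             # a_t = pi(z_t)
        if done:
            break
    return z_demo, states
```

Note that the demo keypoints trajectory is computed once up front, whereas the observation keypoints are extracted anew at every time step.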
Figure 3: Overview of the perception module. Our perception module is
composed of a MUNIT network (left) and a Transporter model (right). Given a
human video frame $x$ and a robot video frame $y$, the MUNIT model first
extracts the content code of the human video frame and the style code of the
robot video frame. The MUNIT model then generates the translated robot video
frame $v$ by combining the extracted content code and style code. Next, the
Transporter model extracts the features and detects the keypoints for both the
translated robot video frame $v$ and the input robot video frame $y$ and
reconstructs the translated robot video frame $\hat{v}$ by transporting
features at the detected keypoint locations. Note that the input robot video
frame $y$ is from a robot video generated by using a random policy.
### IV-B Unsupervised Domain Transfer with Keypoints
To achieve physical imitation from human videos, we develop a perception
module that consists of a MUNIT model for human to robot translation and a
Transporter network for keypoint detection as shown in Figure 3. To train the
MUNIT model, we first collect the training data for the source domain (i.e.,
human domain) and the target domain (i.e., robot domain). The source domain
contains the human demonstration video $V_{X}$ that we want the robot to learn
from. To increase the diversity of the training data in the source domain for
facilitating the MUNIT model training, we follow AVID [10] and collect a few
_random_ data by asking the human to randomly move the hands above the table
_without_ performing the task. As for the target domain training data, we
collect a number of robot videos generated by having the robot perform a
number of actions that are randomly sampled from the action space. As such,
the collection of the robot videos does _not_ require human expertise and
effort.
Using the training data from both source and target domains, we are able to
train the MUNIT model to achieve human to robot translation using the total
loss $\mathcal{L}_{\mathrm{MUNIT}}$ in (5) and following the protocol
described in Section III-A. After training the MUNIT model, we are able to
translate the human demonstration video $V_{X}=\\{x_{i}^{E}\\}_{i=1}^{N}$
frame by frame to the robot demonstration video $\\{v_{i}^{E}\\}_{i=1}^{N}$ by
combining the content code of each human demonstration video frame and a style
code randomly sampled from the robot domain.
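The frame-by-frame translation described above can be sketched as follows, with hypothetical callables standing in for the content encoder $E_{X}^{c}$ and the generator $G_{Y}$; as stated above, a single style code is sampled from the robot domain (a unit Gaussian prior in MUNIT) and reused for the whole video.

```python
import numpy as np

def translate_human_video(frames, content_enc, generator, style_dim, rng=None):
    """v_i = G_Y(E_X^c(x_i), s_y) for every frame x_i, with one sampled
    style code s_y shared across the video."""
    rng = rng if rng is not None else np.random.default_rng(0)
    s_y = rng.standard_normal(style_dim)   # style sampled from the robot prior
    return [generator(content_enc(x), s_y) for x in frames]
```

Sharing one style code across frames keeps the translated demonstration temporally consistent in appearance, which matters for the downstream keypoint detector.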
Figure 4: Task overview. We present the sample task scenes and one sample
human video frame for the pushing, sliding, drawer closing, and coffee making
tasks, respectively. Our human videos can be collected in an environment with
a plain background (i.e., the left three columns) or with a noisy background
(i.e., the rightmost column).
As mentioned in Section III-B, we aim to learn keypoint-based representations
from the translated robot video in an unsupervised fashion. To achieve this,
we leverage Transporter to detect the keypoints in each translated robot video
frame in an unsupervised fashion, as there are no ground-truth keypoint
annotations available.
As illustrated in Figure 3, following the protocol stated in Section III-B,
the Transporter model takes as input a translated robot demonstration video
frame $v$ and a robot video frame $y$ from a robot video collected by applying
a random policy, and extracts feature maps and keypoint locations for both
frames. The Transporter model then reconstructs the translated robot
demonstration video frame. To train the Transporter model, we optimize the
total loss $\mathcal{L}_{\mathrm{transporter}}$ in (7). Once the training of
the Transporter model converges, we are able to use the keypoint detector
$\Psi$ of the Transporter model to extract a keypoint-based representation
$z_{i}^{E}=\Psi(v_{i}^{E})$ for each frame $v_{i}^{E}$ in the translated robot
demonstration video to form a keypoints trajectory $\\{z_{i}^{E}\\}_{i=1}^{N}$
and a keypoint-based representation $z_{t}=\Psi(O_{t})$ for the current
observation $O_{t}$. The keypoints trajectory $\\{z_{i}^{E}\\}_{i=1}^{N}$ of
the translated robot demonstration video $\\{v_{i}^{E}\\}_{i=1}^{N}$ and the
keypoint-based representation $z_{t}$ of the current observation $O_{t}$
provide _semantically_ meaningful information for robot manipulation tasks. We
then use both of them to compute the reward $r_{t}$ and use the keypoint-based
representation $z_{t}$ of the current observation $O_{t}$ to predict an action
$a_{t}$. The details of reward computing and policy learning are elaborated in
the next subsection.
### IV-C Physical Imitation with RL
To control the robot, we use RL to learn, from image-based observations, a
policy that maximizes the cumulative value of a learned reward function.
In our method, we _decouple_ the policy learning phase from the keypoint-based
representation learning phase. Given the keypoints trajectory
$\\{z_{i}^{E}\\}_{i=1}^{N}$ of the translated robot demonstration video
$\\{v_{i}^{E}\\}_{i=1}^{N}$ and the keypoint-based representation $z_{t}$ of
the current observation $O_{t}$, our policy network $\pi$ outputs an action
$a_{t}=\pi(z_{t})$ which is executed in the environment to obtain the next
observation $O_{t+1}$. To achieve physical imitation, we aim to minimize the
distance between the keypoints trajectory of the agent and that of the
translated robot demonstration video. Specifically, we define the reward
$r_{t}$ as
$r_{t}=d\big{(}z_{t},z_{t+1},\\{z_{i}^{E}\\}_{i=1}^{N}\big{)}=\lambda_{r_{1}}\cdot
r_{1}(t)+\lambda_{r_{2}}\cdot r_{2}(t),$ (8)
where $\lambda_{r_{1}}$ and $\lambda_{r_{2}}$ are hyperparameters that balance
the importance between the two terms, and the aforementioned goal is imposed
on $r_{1}(t)$ and $r_{2}(t)$, which are defined by the following equations:
$r_{1}(t)=-\min_{1\leq p\leq N-1}\|z_{t}-z_{p}^{E}\|,\textmd{ and}$ (9)
$r_{2}(t)=-\min_{1\leq q\leq N-1}\big{\|}(z_{t+1}-z_{t})-(z_{q+1}^{E}-z_{q}^{E})\big{\|},$
(10)
where $r_{1}(t)$ aims to minimize the
distance between the keypoint-based representation $z_{t}$ of the current
observation $O_{t}$ and the most similar (closest) keypoint-based
representation $z_{p}^{E}$ in the keypoints trajectory
$\\{z_{i}^{E}\\}_{i=1}^{N}$ of the translated robot demonstration video
$\\{v_{i}^{E}\\}_{i=1}^{N}$, and $r_{2}(t)$ is the first-order difference
equation of $r_{1}(t)$.
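A minimal NumPy sketch of the reward in (8)-(10), under the assumption that each keypoint-based representation is stored as a $(K,2)$ array and the demo trajectory as an $(N,K,2)$ array; the argument names are ours.

```python
import numpy as np

def lbw_reward(z_t, z_t1, z_demo, lam1=1.0, lam2=1.0):
    """Eqs. (8)-(10). z_t, z_t1: (K, 2) keypoints of O_t and O_{t+1};
    z_demo: (N, K, 2) keypoints trajectory of the translated demo."""
    z_demo = np.asarray(z_demo)
    # r1: distance to the closest demo keypoint frame, p in 1..N-1
    d_pos = z_demo[:-1] - z_t
    r1 = -np.min(np.linalg.norm(d_pos.reshape(len(d_pos), -1), axis=1))
    # r2: first-order (velocity) analogue, q in 1..N-1
    d_vel = (z_t1 - z_t) - (z_demo[1:] - z_demo[:-1])
    r2 = -np.min(np.linalg.norm(d_vel.reshape(len(d_vel), -1), axis=1))
    return lam1 * r1 + lam2 * r2
```

Both terms are non-positive and vanish exactly when the agent's keypoints coincide with some demo frame and its step matches the corresponding demo transition, so the best achievable reward is zero.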
We add the tuple $(z_{t},a_{t},z_{t+1},r_{t})$ to a replay buffer. In
principle, the policy network $\pi$ can then be trained with any RL algorithm.
We use Soft Actor-Critic (SAC) [49] as the RL algorithm for policy
learning in our experiments.
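The replay-buffer bookkeeping above can be sketched as a minimal FIFO buffer; the class and method names are ours, and the SAC update that consumes the sampled batches is omitted.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal FIFO buffer for (z_t, a_t, z_{t+1}, r_t) tuples; in the
    experiments such batches feed the SAC [49] update, omitted here."""
    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)   # oldest tuples evicted first

    def add(self, z, a, z_next, r):
        self.buf.append((z, a, z_next, r))

    def sample(self, batch_size):
        # uniform sampling without replacement within a batch
        return random.sample(list(self.buf), min(batch_size, len(self.buf)))
```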
Figure 5: Visual results and comparisons on the pushing task. Given a human
video as input in the first row, we present the translated images of CycleGAN
[12] in the second row. In the third row, we visualize our translated images
and the detected keypoints produced by the perception module. Our perception
module accurately detects the robot arm pose and the location of the
interacting object.
## V Experiments
In this section, we describe the experimental settings and report results with
comparisons to state-of-the-art methods on five robot manipulation tasks.
Through experiments, we aim to investigate the following questions:
1. How accurate is our perception module in handling the human-robot domain gap and in detecting keypoints?
2. How does LbW compare with state-of-the-art baselines in terms of performance on robot manipulation tasks?
### V-A Experimental Setting
We perform experimental evaluations in two simulation environments, i.e., the
Fetch-Robot manipulation in OpenAI gym [14] and meta-world [15]. We evaluate
on five tasks: reaching, pushing, sliding, coffee making, and drawer closing.
Figure 4 presents the overview of each task, including the task scenes and one
sample human video frame for each task. The goal of each task is described as
follows.
1. For the reaching task, the robot has to move its end-effector to reach the target.
2. For the pushing task, a puck is placed on the table in front of the robot, and the goal is to move the puck to the target location.
3. For the sliding task, a puck is placed on a long slippery table and the target location is beyond the reach of the robot. The goal is to apply an appropriate force to the puck so that the puck slides and stops at the target location due to friction.
4. For the coffee making task, a cup is placed on the table in front of the robot, and the goal is to move the cup to the location right below the coffee machine outlet. The moving distance in the coffee making task is longer than that in the pushing task.
5. For the drawer closing task, the robot has to move its end-effector to close the drawer.
In the policy learning phase, the robot receives only an RGB image of size
$84\times 84\times 3$ as the observation. The robot arm is controlled by an
Operational Space Controller in end-effector positions. As each of the tasks
is described by a single human video, we set the initial locations of the
object and the target to a fixed configuration.
TABLE I: Dataset statistics for perception module training for each task. We
summarize the number of video frames of both the source (human) domain and the
target (robot) domain for training the perception module for each task.
Domain | Reaching | Pushing | Sliding | Drawer closing | Coffee making
---|---|---|---|---|---
Source (human) | 1,056 | 398 | 650 | 986 | 658
Target (robot) | 3,150 | 1,220 | 2,120 | 2,940 | 4,007
TABLE II: Success rates. Comparison of success rates for test evaluations of
our LbW framework and the baselines.
Method | Number of expert demonstrations | Reaching | Pushing | Sliding | Drawer closing | Coffee making
---|---|---|---|---|---|---
Classifier reward | 35 robot videos | 100% | 100% | 30% | 70% | 50%
AVID-m | 15 human videos | 100% | 60% | 0% | 50% | 40%
LbW (Ours) | 1 human video | 100% | 100% | 80% | 80% | 70%
### V-B Comparison to Baseline Methods
To evaluate the effectiveness of our perception module, we implement two
baseline methods using the same control model as LbW, which is adopted from
SAC$+$AE [50], but with different reward learning methods.
Classifier-reward. We implement a classifier-based reward learning method in a
similar way as VICE [51]. For each task, given robot demonstration videos
instead of human videos, the CNN classifier is pre-trained on _ground-truth_
goal images with _positive labels_ and on the remaining images with
_negative labels_. To learn a policy in the environment, we adopt the
implementation from SAC$+$AE [50], where we use the classifier-based reward to
train the agent.
AVID-m. Since AVID [10] is the state-of-the-art method that outperforms prior
approaches, including BCO [52] and TCN [17], we focus on comparing our method
with AVID. For a fair comparison, we reproduce the reward learning method of
AVID and replace the control module with SAC$+$AE [50]. We denote this method
as AVID-m. For each task, given human demonstration videos, we first translate
the human demonstration videos to the robot domain using the CycleGAN [12]
model. Then the CNN classifier is pre-trained on the _translated_ goal images
with _positive labels_ and the remaining translated images with _negative
labels_. For RL training, we adopt the implementation from SAC$+$AE [50].
### V-C Dataset Collection and Statistics
We decouple the training phase of the perception module from that of the
policy learning module.
Dataset for perception module training. To train our perception module and the
CycleGAN model, for the human domain we collect human expert videos and videos
of a human performing random actions without performing the tasks.
For the robot domain, we first constrain the action space of the robot such
that unexpected robot poses will not occur (i.e., robot arms are constrained
to move above the table), and then run a random policy to collect robot
videos. Note that we do _not_ use robot expert videos for training the
perception module. Table I presents the dataset statistics for each task for
training the perception module.
Dataset for policy learning. For policy learning, we use only one single human
expert video to train our policy network. The AVID-m method uses $15$ human
expert videos, while the classifier-reward approach uses $35$ robot expert
videos.
### V-D Performance Evaluations
Following AVID [10], we use _success rate_ as the evaluation metric. At test
time, the task is considered to be a success if the robot is able to complete
the task within a specified number of time steps (i.e., $50$ time steps for
reaching and pushing, and $300$ time steps for sliding, coffee making, and
drawer closing). The results are evaluated by $10$ test episodes for each
task. Table II reports the success rates of our method and the two baseline
approaches on all five tasks. We find that for the reaching task, all three
methods achieve a success rate of $100\%$. For the sliding, drawer closing,
and coffee making tasks, our LbW performs favorably against the two competing
approaches.
The difference between the AVID-m method and the classifier-reward approach is
that AVID-m leverages CycleGAN for human to robot translation, while the
classifier-reward method uses ground-truth robot images directly. As shown in
Figure 5, the translated images of AVID-m have clear visual artifacts. For
instance, the red cube disappears and the robot poses in the translated images
do not match those in the human video frames. The comparisons between AVID-m
and the classifier-reward method and the visual results of AVID-m in Figure 5
show that using image-to-image translation models alone to minimize the
human-robot domain gap has a negative impact on the performance of the
downstream tasks. Our perception module learns unsupervised human to robot
translation as well as unsupervised keypoint detection on the translated robot
videos. The learned keypoint-based representation provides semantically
meaningful information for the robot, allowing our LbW framework to compare
favorably against the two competing approaches. More results, videos,
performance comparisons, and implementation details are available at
pair.toronto.edu/lbw-kp/.
### V-E Discussion of Limitations
While results on five tasks demonstrate the effectiveness of our LbW
framework, there are two limitations. First, existing imitation learning
methods [10, 9, 11, 24] that are based on image-to-image translation require
the pose of the human arms and that of the robot arms to be similar. As a
result, these methods may not perform well on human demonstration videos that
have larger pose variations or with more natural poses. Our LbW framework also
leverages an image-to-image translation model, thus suffering from the same
limitation as these methods. Second, in our method, learning from only a
_single_ human video limits the model generalization to new scenes.
## VI Conclusions
We introduced LbW, a framework for physical imitation from human videos. Our
core technical novelty lies in the design of the perception module that
minimizes the domain gap between the human domain and the robot domain
followed by keypoint detection on the translated robot video frames in an
unsupervised manner. The resulting keypoint-based representations capture
semantically meaningful information that guides the robot to learn manipulation
skills through physical imitation. We defined a reward function with a
distance metric that encourages the trajectory of the agent to be as close to
that of the translated robot demonstration video as possible. Extensive
experimental results on five robot manipulation tasks demonstrate the
effectiveness of our approach and the advantage of learning keypoint-based
representations over conventional state representation learning approaches.
Acknowledgement. Animesh Garg is supported by CIFAR AI chair, and we would
like to acknowledge Vector institute for computation support.
## References
* [1] P. Pastor, L. Righetti, M. Kalakrishnan, and S. Schaal, “Online movement adaptation based on previous sensor experiences,” in IROS, 2011.
* [2] B. Akgun, M. Cakmak, J. W. Yoo, and A. L. Thomaz, “Trajectories and keyframes for kinesthetic teaching: A human-robot interaction perspective,” in HRI, 2012.
* [3] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel, “Deep imitation learning for complex manipulation tasks from virtual reality teleoperation,” in ICRA, 2018.
* [4] S. Calinon, P. Evrard, E. Gribovskaya, A. Billard, and A. Kheddar, “Learning collaborative manipulation tasks by demonstration using a haptic interface,” in ICAR, 2009.
* [5] A. Mandlekar, Y. Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta, E. Orbay, S. Savarese, and L. Fei-Fei, “Roboturk: A crowdsourcing platform for robotic skill learning through imitation,” in CoRL, 2018.
* [6] A. Mandlekar, F. Ramos, B. Boots, L. Fei-Fei, A. Garg, and D. Fox, “Iris: Implicit reinforcement without interaction at scale for learning control from offline robot manipulation data,” in ICRA, 2020.
* [7] A. Mandlekar, D. Xu, R. Martín-Martín, S. Savarese, and L. Fei-Fei, “Learning to generalize across long-horizon tasks from human demonstrations,” arXiv, 2020.
* [8] A. Mandlekar, D. Xu, R. Martín-Martín, Y. Zhu, L. Fei-Fei, and S. Savarese, “Human-in-the-loop imitation learning using remote teleoperation,” arXiv, 2020.
* [9] Y. Liu, A. Gupta, P. Abbeel, and S. Levine, “Imitation from observation: Learning to imitate behaviors from raw video via context translation,” in ICRA, 2018.
* [10] L. Smith, N. Dhawan, M. Zhang, P. Abbeel, and S. Levine, “Avid: Learning multi-stage tasks via pixel-level translation of human videos,” in RSS, 2020.
* [11] P. Sharma, D. Pathak, and A. Gupta, “Third-person visual imitation learning via decoupled hierarchical controller,” in NeurIPS, 2019.
* [12] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in ICCV, 2017.
* [13] D. Bau, J.-Y. Zhu, J. Wulff, W. Peebles, H. Strobelt, B. Zhou, and A. Torralba, “Seeing what a gan cannot generate,” in ICCV, 2019.
* [14] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. P. Abbeel, and W. Zaremba, “Hindsight experience replay,” in NeurIPS, 2017.
* [15] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine, “Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning,” in CoRL, 2019.
* [16] L. Shao, T. Migimatsu, Q. Zhang, K. Yang, and J. Bohg, “Concept2robot: Learning manipulation concepts from instructions and human demonstrations,” in RSS, 2020.
* [17] P. Sermanet, C. Lynch, Y. Chebotar, J. Hsu, E. Jang, S. Schaal, and S. Levine, “Time-contrastive networks: Self-supervised learning from video,” in ICRA, 2018.
* [18] D. Pathak, P. Mahmoudieh, G. Luo, P. Agrawal, D. Chen, Y. Shentu, E. Shelhamer, J. Malik, A. A. Efros, and T. Darrell, “Zero-shot visual imitation,” in ICLR, 2018.
* [19] P. Sermanet, K. Xu, and S. Levine, “Unsupervised perceptual rewards for imitation learning,” arXiv, 2016.
* [20] T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine, “One-shot imitation from observing humans via domain-adaptive meta-learning,” arXiv, 2018.
* [21] X. B. Peng, A. Kanazawa, J. Malik, P. Abbeel, and S. Levine, “Sfv: Reinforcement learning of physical skills from videos,” TOG, 2018.
* [22] M. Sieb, Z. Xian, A. Huang, O. Kroemer, and K. Fragkiadaki, “Graph-structured visual imitation,” in CoRL, 2020.
* [23] M. Sieb and K. Fragkiadaki, “Data dreaming for object detection: Learning object-centric state representations for visual imitation,” in Humanoids, 2018.
* [24] K. Schmeckpeper, O. Rybkin, K. Daniilidis, S. Levine, and C. Finn, “Reinforcement learning with videos: Combining offline observations with interaction,” in CoRL, 2020.
* [25] V. Petrik, M. Tapaswi, I. Laptev, and J. Sivic, “Learning Object Manipulation Skills via Approximate State Estimation from Real Videos,” in CoRL, 2020.
* [26] M.-Y. Liu, T. Breuel, and J. Kautz, “Unsupervised image-to-image translation networks,” in NeurIPS, 2017.
* [27] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, “Stargan: Unified generative adversarial networks for multi-domain image-to-image translation,” in CVPR, 2018.
* [28] J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman, “Toward multimodal image-to-image translation,” in NeurIPS, 2017.
* [29] H.-Y. Lee, H.-Y. Tseng, J.-B. Huang, M. Singh, and M.-H. Yang, “Diverse image-to-image translation via disentangled representations,” in ECCV, 2018.
* [30] X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz, “Multimodal unsupervised image-to-image translation,” in ECCV, 2018.
* [31] K. Rao, C. Harris, A. Irpan, S. Levine, J. Ibarz, and M. Khansari, “Rl-cyclegan: Reinforcement learning aware simulation-to-real,” in CVPR, 2020.
* [32] S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell, and K. Bousmalis, “Sim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks,” in CVPR, 2019.
* [33] Y.-C. Chen, Y.-Y. Lin, M.-H. Yang, and J.-B. Huang, “Crdoco: Pixel-level domain transfer with cross-domain consistency,” in CVPR, 2019.
* [34] Q. Zhang, T. Xiao, A. A. Efros, L. Pinto, and X. Wang, “Learning cross-domain correspondence for control with dynamics cycle-consistency,” in ICLR, 2021.
* [35] S. Gamrian and Y. Goldberg, “Transfer learning for related reinforcement learning tasks via image-to-image translation,” in ICML, 2019.
* [36] Y. Zhang, Y. Guo, Y. Jin, Y. Luo, Z. He, and H. Lee, “Unsupervised discovery of object landmarks as structural representations,” in CVPR, 2018.
* [37] T. Jakab, A. Gupta, H. Bilen, and A. Vedaldi, “Unsupervised learning of object landmarks through conditional image generation,” in NeurIPS, 2018.
* [38] J. Thewlis, H. Bilen, and A. Vedaldi, “Unsupervised learning of object landmarks by factorized spatial embeddings,” in ICCV, 2017.
* [39] Z. Shu, M. Sahasrabudhe, R. Alp Guler, D. Samaras, N. Paragios, and I. Kokkinos, “Deforming autoencoders: Unsupervised disentangling of shape and appearance,” in ECCV, 2018.
* [40] O. Wiles, A. Koepke, and A. Zisserman, “Self-supervised learning of a facial attribute embedding from video,” arXiv, 2018.
* [41] T. D. Kulkarni, A. Gupta, C. Ionescu, S. Borgeaud, M. Reynolds, A. Zisserman, and V. Mnih, “Unsupervised learning of object keypoints for perception and control,” in NeurIPS, 2019.
* [42] L. Manuelli, Y. Li, P. Florence, and R. Tedrake, “Keypoints into the Future: Self-Supervised Correspondence in Model-Based Reinforcement Learning,” in CoRL, 2020.
* [43] M. Minderer, C. Sun, R. Villegas, F. Cole, K. Murphy, and H. Lee, “Unsupervised learning of object structure and dynamics from videos,” arXiv, 2019.
* [44] C. Finn, X. Y. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel, “Deep spatial autoencoders for visuomotor learning,” in ICRA, 2016.
* [45] T. Lesort, N. Díaz-Rodríguez, J.-F. Goudou, and D. Filliat, “State representation learning for control: An overview,” Neural Networks, 2018.
* [46] A. X. Lee, A. Nagabandi, P. Abbeel, and S. Levine, “Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model,” arXiv, 2019.
* [47] M. Zhang, S. Vikram, L. Smith, P. Abbeel, M. Johnson, and S. Levine, “Solar: Deep structured representations for model-based reinforcement learning,” in ICML, 2019.
* [48] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson, “Learning latent dynamics for planning from pixels,” in ICML, 2019.
* [49] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor,” arXiv, 2018.
* [50] D. Yarats, A. Zhang, I. Kostrikov, B. Amos, J. Pineau, and R. Fergus, “Improving sample efficiency in model-free reinforcement learning from images,” arXiv, 2019.
* [51] J. Fu, A. Singh, D. Ghosh, L. Yang, and S. Levine, “Variational inverse control with events: A general framework for data-driven reward definition,” in NeurIPS, 2018.
* [52] F. Torabi, G. Warnell, and P. Stone, “Behavioral cloning from observation,” arXiv, 2018.
# The Evolution of Flow and Mass Transport in 3D Confined Cavities
Reem Khojah Bioengineering Department, University of California Los Angeles,
Los Angeles, California 90095, USA Darren Lo Bioengineering Department,
University of California Los Angeles, Los Angeles, California 90095, USA
Fiona Tang Bioengineering Department, University of California Los Angeles,
Los Angeles, California 90095, USA Dino Di Carlo<EMAIL_ADDRESS>Bioengineering Department, University of California Los Angeles, Los Angeles,
California 90095, USA
###### Abstract
Flow in channels and ducts with adjoining cavities is common in natural and
engineered systems. Here we report numerical and experimental results of 3D
confined cavity flow, identifying critical conditions in the recirculating
flow formation and mass transport over a range of channel flow properties
($0.1\leq Re\leq 300$) and cavity aspect ratio ($0.1\leq H/X_{s}\leq 1$). In
contrast to 2D systems, a mass flux boundary is not formed in 3D confined
cavity-channel flow. Streamlines directly enter a recirculating vortex in the
cavity and exit to the main channel leading to an exponential increase in the
cavity mass flux when the recirculating vortex fills the cavity volume. These
findings extend our understanding of flow entry and exit in cavities and
suggest conditions where convective mass transport into and out of cavities
would be amplified and vortex particle capture reduced.
Rare cells Microfiltration Tangential-flow Inertial separation
When fluid with finite inertia flows in a channel that suddenly expands in
cross-sectional dimension, an adverse pressure gradient can occur, resulting in
the formation of recirculating flow in the expansion region Batchelor (2000);
Macagno and Hung (1967); Baloch _et al._ (1995). If the cross-sectional
dimension is returned to the initial value further downstream, this creates a
cavity, which can support a recirculating flow up to the cavity size Sinha
_et al._ (1982). The formation of recirculating flows in 3D-confined cavities
can result in selective particle capture from the mainstream channel flow, an
area attracting significant interest in biotechnology and biomedicine where
microcavities have been utilized as well-controlled reaction chambers for
trapped bio-particles in miniaturized diagnostic devices Marcus _et al._
(2006); Gierahn _et al._ (2017); Hur _et al._ (2011); Khojah _et al._
(2017); Dhar _et al._ (2018). Moreover, cavity flow capture is a diverse
phenomenon that can also be found in nature, such as in the assembly of
biological self-replicating molecules inside microvortices that arise in
porous ocean rocks Sun _et al._ (2018), as well as in platelet aggregate
accumulation in aneurysms Rayz _et al._ (2008) (Figs. 1(a-c)). Thus, a better
understanding of recirculating cavity flow can elucidate trapping phenomena in
natural and physiological flows and enable the engineering of mass transport
in cavities and microwells.
Figure 1: Fluid mass transport in three-dimensional cavity flow geometries
commonly found in (a) porous ocean rocks, (b) physiological blood flow in
aneurysms, and (c) selective cell trapping in microchambers. Confocal 3D
imaging of fluorophore containing streams shows flow enters and exits a cavity
without circulating at low Reynolds numbers ($Re$) in (d) and fully-developed
circulating cavity flow at higher $Re$ in (e) reveals a set of streamlines
swirling out of the vortex core and returning to the main channel flow (Fig.
S7, video S2). Figure 2: Analysis of recirculating vortex flow development
in three-dimensional confined cavities. (a) Three-dimensional flow diagram in a
square cavity where the leading wall ($x_{1}$), side wall ($x_{2}$), and
lagging wall ($x_{3}$) are equal in length. The formation and growth of
recirculating flow in the vortex or wake bubble (gray area) is characterized
by the transition of the reattachment point ($x_{r}$), normalized by the
length of all stationary cavity walls ($x^{*}=\frac{x_{r}}{\sum x_{i}}$). (b)
Experimental ($\square$) and numerical simulation ($\diamond$) of the
reattachment point ($x^{*}$) logistic growth and transition on the cavity
walls as a function of the channel’s Reynolds number ($Re$). (c) Cavity flow
development at a range of cavity side lengths collapse in a logistic function
presented in a solid red line (-) in (d) when $Re$ is rescaled by the cavity
geometry ratio ($\frac{H}{X_{s}}$). (e) Phase diagram of cavity flow
development states showing regions where phase I: no circulating cavity flow,
phase II: partial cavity flow, and phase III: full cavity flow, as a function
of the cavity aspect ratio ($\frac{H}{X_{s}}$) versus the channel Re. The
dashed red line is a power-law fit of $Re_{m}\sim(\frac{H}{X_{s}})^{-2}$.
There are similarities between the initial stages of vortical flow formation
in a confined 3D cavity and the flow through a sudden expansion or backward-
facing step at low channel Reynolds numbers Biswas _et al._ (2004). Flow
separates at the sudden expansion region forming an internal recirculating
vortical flow at the leading wall. However, without the existence of a
trailing wall to confine the recirculating flow, secondary and tertiary
separation bubbles occur downstream in the later stages of such flows Durst
_et al._ (1993); Armaly _et al._ (1983). In the presence of the trailing
wall, flow separation forms a recirculating vortex that grows and occupies the
entire cavity.
Although 2D cavity flow has been extensively studied, many aspects of
recirculating flow formation and mass transport remain unknown in 3D confined
cavities. For example, a defining feature in incompressible and unconfined 2D
cavity flow is the formation of a material flux boundary, or separatrix,
between the main channel flow and open cavity flow Maull and East (1963);
O’Brien (1972); Ghia _et al._ (1982); Horner _et al._ (2002). In contrast,
for confined 3D conditions, recirculating inertial flow is highly complex and
unfolds in a qualitatively different manner in three-dimensional space Koseff
_et al._ (1990), causing unexpected complex flow patterns like the ones
generated in wall-confined microchannels Amini _et al._ (2013) and
T-junctions Vigolo _et al._ (2014). Therefore, further investigation is needed
to address the non-trivial dynamics of mass flux in 3D cavities.
In this letter, we further study the unexpected material flux into and out of
the cavity flow inspired by initial 3D experimental measurements of flow
streams exiting from the core of a cavity to the mainstream channel flow
without a mass flux boundary (Fig. 1(d-e)). We found no evidence of a
separatrix forming in 3D cavity flow, matching previous numerical studies
indicating that the recirculating wake could not be thought of as containing a
separatrix Torczynski and OHern (1993); Haddadi and Di Carlo (2017). We
further show that the streamline linkage, i.e. locations of entry and exit, to
the main flow is strongly dependent on the development of the cavity flow.
First, we introduced a dimensionless parameter to generalize flow development
stages for different cavity geometries. Then we investigated the mass flux
through the recirculating cavity flow based on the cavity flow development
parameter. Our investigation uncovered an unexpected evolution in the nature
of the channel mass flux to the cavity flow as it develops which has
implications for the ability to transport particles and cells into cavities.
### .1 Recirculating cavity flow formation
Experimental observations in a channel-cavity system with fixed channel flow
conditions ($Re=uD\textsubscript{h}/\nu$), where $u$ is the average inlet flow
velocity, $D\textsubscript{h}$, hydraulic diameter of the channel, and $\nu$
is the kinematic viscosity of the fluid, demonstrate the development of
recirculating flow in a square cavity as a function of the cavity height ($H$)
and the cavity side length ($X_{s}$) (Fig. S1 and video S1). Given the
complexity in geometry, a Reynolds number alone is not sufficient to define or
generalize cavity flow conditions, nor to predict recirculating flow formation and
development in microcavities. One can use a Reynolds number along with a
number of non-dimensional geometric parameters instead. Alternatively, one
common defining characteristic shared by all cavity geometries is a transition
in the location of the reattachment line along the cavity wall at which
streamlines diverge between circulating in the cavity and flowing into the
cavity but returning to the main channel without circulating (Fig. 2(a))
Cherry _et al._ (1984). The reattachment point $x_{r}$ as observed from a
top-down perspective in the x-y plane, is used to measure the development of
recirculating flow in cavities (Fig. S8). To generalize the transition of the
reattachment point $x_{r}$ in different cavity geometries, (${\textstyle
x^{*}}{\textstyle=}\frac{x_{r}}{\sum x_{i}}$) is used as a measure of the
reattachment point transition distance along the cavity walls normalized to
the total wall lengths of a square cavity with side length $X_{s}$ (scaling
method in supplementary material).
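The quantities defined above can be sketched in a few lines; the numerical values below are illustrative placeholders, not measurements from the experiments.

```python
def reynolds_number(u, d_h, nu):
    """Channel Reynolds number Re = u * D_h / nu, with u the average inlet
    velocity, D_h the hydraulic diameter, and nu the kinematic viscosity."""
    return u * d_h / nu

def hydraulic_diameter(width, height):
    """Hydraulic diameter of a rectangular channel: D_h = 2wh / (w + h)."""
    return 2.0 * width * height / (width + height)

def normalized_reattachment(x_r, wall_lengths):
    """x* = x_r / sum(x_i): reattachment distance normalized by the total
    length of the stationary cavity walls (leading, side, lagging)."""
    return x_r / sum(wall_lengths)

# Illustrative values only: a square cavity with 250 um walls and a
# reattachment point halfway along the side wall gives x* = 0.5.
walls = (250e-6, 250e-6, 250e-6)   # x1, x2, x3 in metres
x_star = normalized_reattachment(375e-6, walls)
```

With this normalization, $x^*$ runs from 0 (no recirculation) to 1 (reattachment at the end of the trailing wall), independent of the cavity side length.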
Figure 3: Cavity-channel mass flux dynamics as a function of cavity vortex
flow development. (a) An exponential increase of cavity flow mass flux
$Q^{*}=\frac{\text{Cavity flux}\,(Q_{ca})}{\text{Channel flux}\,(Q)}$ is observed when the vortex
occupies cavity volume at ($x^{*}>0.9$) and does not significantly expand
further in size. The red line is the power-law fit of $Q^{*}\sim x^{*30}$. (b)
Entry (red) and exit (black) streamline regions into and out of the
recirculating flow are measured at a fixed location upstream and downstream of
the cavity, respectively. For a full cavity flow, entry and exit regions
switch locations from the side to the middle of the channel wall adjacent to
the cavity, and vice versa. The size of the entry region expands towards
higher velocity regions of the main channel flow. (c) The evolution of a
subset of channel streamlines that enter (and exit) the cavity, color mapped
by the initial z-height in the channel (Fig. S10).
The evolution of the reattachment point in a microcavity ($X_{s}=250\mu m$,
$H=70\mu m$) varies with the entry channel’s Reynolds number ($Re=0.1-100$)
(Fig. 2(b)). At a low Reynolds number, fluid flow passes with fore-aft
symmetry through the cavity, as expected for Stokes flow, with no
recirculation ($x^{*}$ = 0) (fig. 1(d)). Flow separates at higher Reynolds
number creating a separation bubble with a circulating wake. The vortical flow
first evolves over an order of magnitude in Reynolds number, with the
reattachment point $x^{*}$ remaining on the cavity-leading wall ($x_{1}$) as
the Reynolds number increases. Then the reattachment point rapidly migrates
along the cavity sidewall ($x_{2}$), over a small range of Reynolds numbers.
Lastly, recirculating flow separation in the cavity reaches an asymptotic
phase where the reattachment point stagnates at the end of the cavity trailing
wall ($x_{3}$) ($x^{*}$ $\approx$ 1). The evolution of the reattachment point
as a function of the Reynolds number can be described by a logistic function:
$x^{\ast}=\frac{1}{1+e^{-k(Re-a)}}$ (1)
in which $x^{*}$ is the estimated reattachment point location, $k$ is the
curve steepness, and $a$ is the value of the sigmoid’s midpoint. Both
simulation and experimental results reflect the logistic function behavior
(Fig. 2(b)).
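Eq. (1) can be evaluated directly. The sketch below uses as defaults the fit parameters $k=1.31$ and $a=5.5$ reported later in the text for the rescaled data collapse, purely as plausible illustrative values.

```python
import math

def reattachment_logistic(re, k=1.31, a=5.5):
    """Eq. (1): x* = 1 / (1 + exp(-k (Re - a))).

    k sets the curve steepness and a is the sigmoid midpoint, i.e. the
    Reynolds number at which x* = 0.5. The defaults are the fit values
    quoted for the rescaled collapse, used here only for illustration.
    """
    return 1.0 / (1.0 + math.exp(-k * (re - a)))
```

At $Re=a$ the model returns exactly 0.5, and it saturates toward 0 (no recirculation) and 1 (full cavity flow) at the two extremes, matching the three regimes described above.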
The evolution of the reattachment point follows similar forms for different
cavity side lengths ($X_{s}=400-1000\mu m$) (Fig. 2(c)). The vortex growth
curves for different cavity side lengths collapse into (Eq. 1) logistic
relation (red line in Fig. 2(d)) when it is rescaled by $Re(\frac{H}{X_{s}})$,
where $k=1.31$ and $a=5.5$ (more data in supplementary material Fig. S2-4).
Using this scaling and the logistic function, one can define a condition at
$Re=a$ when the vortical flow occupies approximately half of the total cavity
volume and the reattachment point reaches the middle of the cavity, i.e.
$Re_{m}=Re_{(x^{*}=0.5)}$. $Re_{m}$ is substituted in terms of the logistic
function (Eq. 1) to derive a universal expression that estimates full cavity
flow at $2Re_{m}$ and no cavity circulating flow at $Re_{m}/2$ for any cavity
side length, height, and channel Reynolds number.
$Re_{m\;}=\;a(\frac{X_{s}}{H})$ (2)
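A hedged sketch of Eq. (2) and the quoted thresholds (full cavity flow above $2Re_{m}$, no circulating flow below $Re_{m}/2$):

```python
def re_midpoint(xs, h, a=5.5):
    """Eq. (2): Re_m = a * (X_s / H), the channel Reynolds number at which
    the reattachment point reaches the middle of the cavity (x* = 0.5).
    The default a = 5.5 is the sigmoid-midpoint fit value from the text."""
    return a * xs / h

def cavity_flow_state(re, re_m):
    """Thresholds stated in the text: no circulating cavity flow below
    Re_m / 2, full cavity flow above 2 * Re_m, partial in between."""
    if re < re_m / 2.0:
        return "none"
    if re > 2.0 * re_m:
        return "full"
    return "partial"

# For the microcavity studied above (X_s = 250 um, H = 70 um):
re_m = re_midpoint(250.0, 70.0)   # roughly 19.6
```

For this geometry the estimate places full recirculating cavity flow above a channel Reynolds number of about 39 and no recirculation below about 10.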
To generalize cavity flow transition behavior at any channel-cavity geometry,
we studied cavity flow development as a function of the channel $Re$ along
with the cavity non-dimensional geometric parameter $H/X_{s}$. Experimental
observations are summarized in a phase diagram (Fig. 2(e)) showing the main
cavity flow developmental phases on a range of cavity aspect ratios $0.1\leq
H/X_{s}\leq 1$. Phase I: No cavity flow ($x^{*}=0$) with no recirculation in
the cavity. Phase II: partial cavity flow ($0.1\leq x^{*}<0.9$) after the
formation of a recirculating wake. Phase III: full cavity flow ($x^{*}\geq
0.9$) when the recirculating vortex fills the cavity volume. The red dotted
line represents the power-law fit of $Re_{m}$ at $x^{*}=0.5$. The phase
diagram suggests an inverse relation between recirculating flow development
and cavity aspect ratio. Cavities with low aspect ratio reach full
recirculating cavity flow (phase III) at low channel inertial conditions and
vice versa. Thus, channel-cavity geometry and flow rate can be designed in a
way to either prevent or accommodate recirculating flow.
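Combining the rescaled logistic fit (Fig. 2(d)) with the phase thresholds just described gives a simple predictor. This is only a sketch: the phase boundaries are taken from the text, and values of $x^*$ between 0 and 0.1, which the text leaves implicit, are lumped into phase I.

```python
import math

def predicted_x_star(re, h, xs, k=1.31, a=5.5):
    """x* predicted from the collapsed logistic fit by rescaling the
    channel Reynolds number as Re * (H / X_s) (Fig. 2(d))."""
    return 1.0 / (1.0 + math.exp(-k * (re * h / xs - a)))

def cavity_phase(x_star):
    """Phase I: no cavity flow; phase II: partial cavity flow
    (0.1 <= x* < 0.9); phase III: full cavity flow (x* >= 0.9)."""
    if x_star < 0.1:
        return "I"
    return "II" if x_star < 0.9 else "III"
```

For the $X_{s}=250\,\mu m$, $H=70\,\mu m$ cavity, this predictor moves from phase I through II to III as the channel Reynolds number increases, consistent with the inverse relation between flow development and aspect ratio noted above.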
### .2 Mass transport in cavity flow
Given the stages in the development of a recirculating vortex, we asked
whether the mass transfer between the main channel and the cavity is dependent
on the vortex reattachment point. Three-dimensional finite element method
simulations of the incompressible Navier-Stokes equations were used to
identify streamlines from the main channel that contribute to the
recirculating cavity flow. We plot in Fig. 3(a) the recirculating volumetric
flux ($Q^{*}$) in the channel versus the reattachment point ($x^{*}=0-0.98$)
in the cavity. The recirculating flux ($Q_{ca}=uA$), where $u$ is the
average velocity through the cross-sectional area $A$, is normalized by the
whole channel flux ($Q$) (Fig. S9). Therefore, this represents the percentage of the
main channel flow that enters (and, conserving mass, exits) the cavity.
Interestingly, the fraction of the main channel flow entering (and exiting)
gradually increases with vortex size in the cavity. In particular, an
unexpected exponential growth of the channel mass flux contribution to cavity
flow is observed when the vortex asymptotically approaches its maximum size
after reaching the cavity volume ($x^{*}\approx 1$).
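The reported power-law fit $Q^{*}\sim x^{*30}$ makes the steepness of this growth concrete; the prefactor below is a hypothetical normalization, since only the exponent is given in the figure caption.

```python
def q_star(x_star, prefactor=1.0, exponent=30.0):
    """Power-law fit from Fig. 3(a): Q* ~ x*^30. Only the exponent is
    reported; the prefactor is a hypothetical normalization constant."""
    return prefactor * x_star ** exponent

# Moving the reattachment point from x* = 0.8 to x* = 0.9 multiplies the
# predicted cavity mass flux by (0.9/0.8)^30, roughly a factor of 34.
ratio = q_star(0.9) / q_star(0.8)
```

Such a high exponent means essentially all of the flux increase is concentrated in the final stage of vortex growth, as the recirculating flow approaches the full cavity volume.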
Coincident with the increase in mass transport, we observe an unexpected
switch in the location of entering and exiting streamlines into the
recirculating cavity flow. Leveraging the flow symmetry in the channel cross-
section, we tracked the evolution of the entrance (red) and exit (black)
locations across a quarter of the channel (Fig. 3(b)). Before the cavity
completely fills ($0.1\leq x^{*}<0.9$), fluid enters the vortex from the
regions near the upper and lower walls and exits along the middle of the
channel side wall downstream of the cavity. Surprisingly, a new entry region
grows at the middle of the cavity as the re-attachment point increases
eventually leading to the entry and exit regions switching locations as the
recirculating flow fills the cavity space further ($x^{*}=0.8-0.9$) (Fig. 3(c)
and Fig. S10). At full cavity flow ($x^{*}>0.9$) the topology of the entering
streamlines switches completely such that streamlines enter at the middle of
the channel sidewall. Coincident with the mass flux increasing, the entry
region area grows towards the center of the channel as the reattachment point
and Reynolds number increase further. This increase in the entry area
spanning high-velocity regions of the channel cross-section explains the
exponential increase of the recirculating flux in the cavity.
Confocal imaging experiments of selectively dyed streams also reflect the mass
flux into and out of the recirculating flow in the cavity. In our experimental
set-up, we control the location of fluorescently labeled streams in the
channel cross-section by adjusting flow rates: Q1 (rhodamine: red
dye), Q2 center (fluorescein: green dye), and Q3 (water: colorless), and with
Q1=Q2=Q3 set at the same flow rate (Fig. S5). A significant decrease in the
contribution of Q2 streamlines in the recirculating flow is observed in the
transition between ($x^{*}=0.8-0.9$) over which flow entry streamlines are
modeled to change location from the center to the sidewall. For a full cavity
flow ($x^{*}>0.9$), we visualize in 3D the flow exit from the cavity at the
top and bottom of the channel leaving from the core of the vortex. Here
streamlines are selectively stained as follows: Q2 (fluorescein: green dye),
Q1, and Q3 (water: colorless) (Fig. 1(e), Fig. S6, and video S2). In a new
cavity design with a notch or step partially blocking the cavity outlet
region, fluorescently labeled streamlines are observed leaving from the core
while diverting from the notch and returning to the mainstream channel flow.
Thus, the flow exit behavior remained consistent with geometric changes at the
cavity outlet region (Fig. S7).
Our numerical and experimental results in sum indicate that there is direct
mass transport from the channel flow into the recirculating flow in the cavity
with no mass flux boundary or separatrix. Besides showing that streamlines
from the main flow enter the recirculating flow and later leave, we found no
evidence of closed streamlines in the recirculating flow that are not
connected to the main flow. Previous theoretical investigation suggests the
same manner of continuous mass exchange between the channel and the cavity
when a full cavity flow is reached Cherry _et al._ (1984). Our results shed
new light on the increase in the relative mass flux (as a function of main
channel flow rate) as the cavity flow reaches a fully-filled configuration, as
well as the shifting locations of influx and efflux from the cavity. The
enhanced transport between the cavity and the channel as well as shifting
streamline locations that leave the cavity can contribute to the depletion of
particles captured in microvortices at later stages of cavity flow as observed
in previous studies Khojah _et al._ (2017). This change in fluid flux
direction and magnitude can modulate the limit cycle trajectory of trapped
particles, also experiencing inertial lift forces, to exit the vortex as the
recirculating flow envelops the entire cavity Haddadi and Di Carlo (2017).
These findings expand our understanding of mass transfer in 3D cavity flow
with finite inertia, a widely observed geometry across nature and engineered
systems. Our findings can inform the design of cavities to avoid dead zones of
low mass transport, or enhance transport from specific entry locations in the
main flow to, e.g., isolate particles. The understanding of how mass transport
is not limited across a separatrix also has implications for the evolution of
biochemical processes in microvortices inside ocean rocks, physiological flow
and micro-clot accumulation in aneurysms, and engineering microwells for cell
trapping in miniaturized diagnostic devices.
## Acknowledgements
We thank Dr. Dan Stoecklein and Dr. Kaytlien Hood for helpful discussions and
input. We gratefully acknowledge the support of the Advanced Light
Microscopy/Spectroscopy Center (ALMS) at the California NanoSystems Institute
(CNSI) at the University of California, Los Angeles.
## References
* Batchelor (2000) G. K. Batchelor, _An introduction to fluid dynamics_ (Cambridge university press, 2000).
* Macagno and Hung (1967) E. O. Macagno and T.-K. Hung, Journal of fluid Mechanics 28, 43 (1967).
* Baloch _et al._ (1995) A. Baloch, P. Townsend, and M. Webster, Computers & Fluids 24, 863 (1995).
* Sinha _et al._ (1982) S. Sinha, A. Gupta, and M. Oberai, AIAA journal 20, 370 (1982).
* Marcus _et al._ (2006) J. S. Marcus, W. F. Anderson, and S. R. Quake, Analytical chemistry 78, 956 (2006).
* Gierahn _et al._ (2017) T. M. Gierahn, M. H. Wadsworth II, T. K. Hughes, B. D. Bryson, A. Butler, R. Satija, S. Fortune, J. C. Love, and A. K. Shalek, Nature methods 14, 395 (2017).
* Hur _et al._ (2011) S. C. Hur, A. J. Mach, and D. Di Carlo, Biomicrofluidics 5, 022206 (2011).
* Khojah _et al._ (2017) R. Khojah, R. Stoutamore, and D. Di Carlo, Lab on a Chip 17, 2542 (2017).
* Dhar _et al._ (2018) M. Dhar, J. N. Lam, T. Walser, S. M. Dubinett, M. B. Rettig, and D. Di Carlo, Proceedings of the National Academy of Sciences 115, 9986 (2018).
* Sun _et al._ (2018) J. Sun, Y. Li, F. Yan, C. Liu, Y. Sang, F. Tian, Q. Feng, P. Duan, L. Zhang, X. Shi, _et al._ , Nature communications 9, 2599 (2018).
* Rayz _et al._ (2008) V. Rayz, L. Boussel, M. Lawton, G. Acevedo-Bolton, L. Ge, W. Young, R. Higashida, and D. Saloner, Annals of biomedical engineering 36, 1793 (2008).
* Biswas _et al._ (2004) G. Biswas, M. Breuer, and F. Durst, Journal of Fluids Engineering 126, 362 (2004).
* Durst _et al._ (1993) F. Durst, J. Pereira, and C. Tropea, Journal of Fluid Mechanics 248, 567 (1993).
* Armaly _et al._ (1983) B. F. Armaly, F. Durst, J. Pereira, and B. Schönung, Journal of fluid Mechanics 127, 473 (1983).
* Maull and East (1963) D. Maull and L. East, journal of Fluid Mechanics 16, 620 (1963).
* O’Brien (1972) V. O’Brien, The Physics of Fluids 15, 2089 (1972).
* Ghia _et al._ (1982) U. Ghia, K. N. Ghia, and C. Shin, Journal of computational physics 48, 387 (1982).
* Horner _et al._ (2002) M. Horner, G. Metcalfe, S. Wiggins, and J. Ottino, Journal of Fluid Mechanics 452, 199 (2002).
* Koseff _et al._ (1990) J. R. Koseff, A. K. Prasad, C. Perng, and R. L. Street, Physics of Fluids A: Fluid Dynamics 2, 619 (1990).
* Amini _et al._ (2013) H. Amini, E. Sollier, M. Masaeli, Y. Xie, B. Ganapathysubramanian, H. A. Stone, and D. Di Carlo, Nature communications 4, 1826 (2013).
* Vigolo _et al._ (2014) D. Vigolo, S. Radl, and H. A. Stone, Proceedings of the National Academy of Sciences 111, 4770 (2014).
* Torczynski and OHern (1993) J. Torczynski and T. OHern, _Numerical simulations of flow in a three-dimensional cavity-channel geometry_ , Tech. Rep. (Sandia National Labs., Albuquerque, NM (United States), 1993).
* Haddadi and Di Carlo (2017) H. Haddadi and D. Di Carlo, Journal of Fluid Mechanics 811, 436 (2017).
* Cherry _et al._ (1984) N. Cherry, R. Hillier, and M. Latour, Journal of Fluid Mechanics 144, 13 (1984).
* Haddadi _et al._ (2018) H. Haddadi, H. Naghsh-Nilchi, and D. Di Carlo, Biomicrofluidics 12, 014112 (2018).
# A short exact sequence
Ivan Panin (The author acknowledges support of the RFBR grant No.
19-01-00513.)
###### Abstract
Let $R$ be a regular semi-local integral domain containing a field and $K$ be
its fraction field. Let $\mu:\mathbf{G}\to\mathbf{T}$ be an $R$-group schemes
morphism between reductive $R$-group schemes, which is smooth as a scheme
morphism. Suppose that $T$ is an $R$-torus. Then the map
$\mathbf{T}(R)/\mu(\mathbf{G}(R))\to\mathbf{T}(K)/\mu(\mathbf{G}(K))$ is
injective and certain purity theorem is true. These and other results are
derived from an extended form of Grothendieck–Serre conjecture proven in the
present paper for rings $R$ as above.
## 1 Main results
Let $R$ be a commutative unital ring. Recall that an $R$-group scheme
$\mathbf{G}$ is called reductive (respectively, semi-simple or simple) if it
is affine and smooth as an $R$-scheme and if, moreover, for each algebraically
closed field $\Omega$ and for each ring homomorphism $R\to\Omega$ the scalar
extension $\mathbf{G}_{\Omega}$ is a connected reductive (respectively, semi-
simple or simple) algebraic group over $\Omega$. The class of reductive group
schemes contains the class of semi-simple group schemes, which in turn contains
the class of simple group schemes. This notion of a reductive $R$-group scheme
coincides with [SGA3, Exp. XIX, Definition 2.7], and this notion of a simple
$R$-group scheme coincides with the notion of a simple semi-simple $R$-group
scheme from Demazure and Grothendieck [SGA3, Exp. XIX, Definition 2.7 and Exp.
XXIV, 5.3]. Here is our first main result, based on results of [Pan2] and
[Pan3] and significantly extending the corresponding results of those papers.
###### Theorem 1.1.
Let $R$ be a regular semi-local integral domain containing a field. Let $K$ be
the fraction field of $R$. Let $\mu:\mathbf{G}\to\mathbf{T}$ be an $R$-group
scheme morphism between reductive $R$-group schemes, which is smooth as a
scheme morphism. Suppose $\mathbf{T}$ is an $R$-torus. Then the map
$\mathbf{T}(R)/\mu(\mathbf{G}(R))\to\mathbf{T}(K)/\mu(\mathbf{G}(K))$ is
injective and the sequence
$\\{1\\}\to\mathbf{T}(R)/\mu(\mathbf{G}(R))\to\mathbf{T}(K)/\mu(\mathbf{G}(K))\xrightarrow{\sum
r_{\mathfrak{p}}}\bigoplus_{\mathfrak{p}}\mathbf{T}(K)/[\mathbf{T}(R_{\mathfrak{p}})\cdot\mu(\mathbf{G}(K))]\to\\{1\\}$
(1)
is exact, where $\mathfrak{p}$ runs over all height one prime ideals of $R$
and each $r_{\mathfrak{p}}$ is the natural map (the projection to the factor
group).
Let us comment on the first assertion of the theorem. Let $\mathbf{H}$ be the
kernel of $\mu$. It turns out that $\mathbf{H}$ is a quasi-reductive $R$-group
scheme (see Definition 1.3). There is a sequence of group sheaves
$1\to\mathbf{H}\to\mathbf{G}\to\mathbf{T}\to 1$, which is exact in the étale
topology on $\operatorname{Spec}R$. Theorem 1.4 now yields the injectivity of the map
$\mathbf{T}(R)/\mu(\mathbf{G}(R))\to\mathbf{T}(K)/\mu(\mathbf{G}(K))$.
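The simplest non-trivial instance of Theorem 1.1 may be worth keeping in mind (this illustration is ours, not part of the cited results). Assume $2$ is invertible in $R$ and take $\mathbf{G}=\mathbf{T}=\mathbf{G}_{m,R}$ with $\mu$ the squaring map $t\mapsto t^{2}$, which is étale and hence smooth, with kernel $\mu_{2}$. For a height one prime $\mathfrak{p}$ the valuation $v_{\mathfrak{p}}$ identifies $K^{*}/[R_{\mathfrak{p}}^{*}\cdot(K^{*})^{2}]$ with $\mathbb{Z}/2$, and the sequence (1) becomes

```latex
\{1\}\to R^{*}/(R^{*})^{2}\to K^{*}/(K^{*})^{2}
  \xrightarrow{\ \sum v_{\mathfrak{p}}\bmod 2\ }\bigoplus_{\mathfrak{p}}\mathbb{Z}/2\to\{1\}.
```

Since a regular semi-local integral domain is factorial, this recovers the classical fact that a square class in $K^{*}$ has even valuation at every height one prime if and only if it lies in $R^{*}\cdot(K^{*})^{2}$.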
###### Theorem 1.2.
Let $R$ be a regular semi-local integral domain containing a field. Let $K$ be
the fraction field of $R$. Let $\mathbf{G}_{1}$ and $\mathbf{G}_{2}$ be two
semi-simple $R$-group schemes. Suppose the generic fibres $\mathbf{G}_{1,K}$
and $\mathbf{G}_{2,K}$ are isomorphic as algebraic $K$-groups. Then the
$R$-group schemes $\mathbf{G}_{1}$ and $\mathbf{G}_{2}$ are isomorphic.
To prove Theorem 1.2 we need to work with the automorphism group scheme of a semi-
simple $R$-group scheme. The latter group scheme is not geometrically
connected in general, so Theorem 1.2 cannot be derived from [FP] and [Pan3].
We state right below a theorem asserting that an extended version of the
Grothendieck–Serre conjecture holds for rings $R$ as above. This latter
theorem is proved in the present paper, and Theorem 1.2 and the first assertion of
Theorem 1.1 are derived from it. To state the mentioned theorem it is convenient
to give the following definition.
###### Definition 1.3 (quasi-reductive).
Assume that $S$ is a Noetherian commutative ring. An $S$-group scheme
$\mathbf{H}$ is called quasi-reductive if there is a finite étale $S$-group
scheme $\mathbf{C}$ and a smooth $S$-group scheme morphism
$\lambda:\mathbf{H}\to\mathbf{C}$ such that its kernel is a reductive
$S$-group scheme and $\lambda$ is surjective locally in the étale topology on
$S$.
Clearly, reductive $S$-group schemes are quasi-reductive. Quasi-reductive
$S$-group schemes are affine and smooth as $S$-schemes. There are two types of
quasi-reductive $S$-group schemes on which we focus in the present
paper. The first is the automorphism group scheme of a semi-simple
$S$-group scheme. The second is obtained as follows: take a reductive
$S$-group scheme $\mathbf{G}$, an $S$-torus $\mathbf{T}$ and a smooth
$S$-group morphism $\mu:\mathbf{G}\to\mathbf{T}$. Then one can check that the
kernel $\mathbf{H}$ of $\mu$ is quasi-reductive. It is an extension of a
finite étale $S$-group scheme $\mathbf{C}$ of multiplicative type by a
reductive $S$-group scheme $\mathbf{G}_{0}$.
Assume that $U$ is a regular scheme and $\mathbf{H}$ is a quasi-reductive
$U$-group scheme. Recall that a $U$-scheme $\mathcal{H}$ with an action of
$\mathbf{H}$ is called _a principal $\mathbf{H}$-bundle over $U$_ if
$\mathcal{H}$ is faithfully flat and quasi-compact over $U$ and the action is
simple transitive, that is, the natural morphism
$\mathbf{H}\times_{U}\mathcal{H}\to\mathcal{H}\times_{U}\mathcal{H}$ is an
isomorphism, see [Gr4, Section 6]. Since $\mathbf{H}$ is $U$-smooth, such a
bundle is trivial locally in the étale topology, but in general not in the Zariski
topology. Grothendieck and Serre conjectured that for a reductive $U$-group
scheme $\mathbf{H}$ a principal $\mathbf{H}$-bundle $\mathcal{H}$ over $U$ is
trivial locally in the Zariski topology if it is trivial generically. A survey
paper on the topic is [P2].
The conjecture is true if $\Gamma(U,\mathcal{O}_{U})$ contains a field (see
[FP] and [Pan3]). It is proved in [Ni2] that the conjecture is true in general
for discrete valuation rings. This result is extended in [PSt] to the case of
semi-local Dedekind integral domains under the assumption that the group scheme is simple,
simply connected and isotropic in a certain precise sense. In [NG] the results of
[Ni2] and [PSt] are extended further: it is proved there that the conjecture
is true in general for semi-local Dedekind integral domains. The
following result is a further extension of the main theorem of [Pan3].
###### Theorem 1.4.
Let $R$ be a regular semi-local integral domain containing a field. Let $K$ be
the fraction field of $R$. Let $\mathbf{H}$ be a quasi-reductive group scheme
over $R$. Then the map
$\textrm{H}^{1}_{\text{\'{e}t}}(R,\mathbf{H})\to\textrm{H}^{1}_{\text{\'{e}t}}(K,\mathbf{H}),$
induced by the inclusion of $R$ into $K$, has a trivial kernel. In other words,
under the above assumptions on $R$ and $\mathbf{H}$, each principal
$\mathbf{H}$-bundle over $R$ having a $K$-rational point is trivial.
###### Corollary 1.5.
Under the hypothesis of Theorem 1.4, the map
$\textrm{H}^{1}_{\text{\'{e}t}}(R,\mathbf{H})\to\textrm{H}^{1}_{\text{\'{e}t}}(K,\mathbf{H}),$
induced by the inclusion of $R$ into $K$, is injective. Equivalently, if
$\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ are two principal $\mathbf{H}$-bundles
isomorphic over $\text{\rm Spec}K$, then they are isomorphic.
###### Proof.
Let $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ be two principal
$\mathbf{H}$-bundles isomorphic over $\operatorname{Spec}K$. Let
$\operatorname{Iso}(\mathcal{H}_{1},\mathcal{H}_{2})$ be the scheme of
isomorphisms of principal $\mathbf{H}$-bundles. This scheme is a principal
$\text{\rm Aut}\mathcal{H}_{1}$-bundle. By Theorem 1.4 it is trivial, and we
see that $\mathcal{H}_{1}\cong\mathcal{H}_{2}$. ∎
Theorems 1.4 and 1.2 are proved in Section 2. Theorem 1.1 is proved in Section
8.
## 2 Proof of Theorems 1.4 and 1.2
We begin with the following general
###### Lemma 2.1.
Let $X$ be the spectrum of a regular semi-local integral domain. Let $\pi:X^{\prime}\to X$ be
a finite morphism. Let $\eta\in X$ be the generic point of $X$. Then sections
of $\pi$ over $X$ are in bijection with sections of $\pi$ over $\eta$.
###### Proof.
It suffices to check that each section $s:\eta\to X^{\prime}$ of $\pi$ can be
extended to a section of $\pi$ over $X$.
Decompose $\pi$ as a composition
$X^{\prime}\xrightarrow{i}\mathbf{A}^{n}_{X}\xrightarrow{p}X$, where $p$ is
the projection and $i$ is a closed embedding. Let $s:\eta\to X^{\prime}$ be a
section of $\pi$. Since $X$ is regular semi-local and $\pi$ is projective,
there is a closed subset $Z$ of codimension at least two in $X$ and a section
$\varphi:X-Z\to X^{\prime}$ of $\pi$ that extends $s$. Since
$\Gamma(X,\mathcal{O}_{X})=\Gamma(X-Z,\mathcal{O}_{X})$, the section $\varphi$
extends to a section $\tilde{\varphi}$ of $\pi$ over $X$. The lemma is proved. ∎
###### Corollary 2.2.
Let $X$, $\eta\in X$ be as in the previous lemma and let $\mathbf{E}$ be a finite
étale group $X$-scheme. Then the $\eta$-points of $\mathbf{E}$ coincide with
the $X$-points of $\mathbf{E}$.
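As a toy illustration (ours, not from the text), take for $\mathbf{E}$ the constant group scheme $(\mathbb{Z}/2)_{X}$. Since $X$ is irreducible, it is connected, so

```latex
\mathbf{E}(X)=\{\text{locally constant maps } X\to\mathbb{Z}/2\}
  \;\cong\;\mathbb{Z}/2\;\cong\;\mathbf{E}(\eta),
```

and restriction to the generic point is the identity, in agreement with the corollary.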
###### Corollary 2.3.
Under the hypothesis of Corollary 2.2 the kernel of the pointed set map
$H^{1}_{\text{\'{e}t}}(X,\mathbf{E})\to
H^{1}_{\text{\'{e}t}}(\eta,\mathbf{E})$ is trivial.
###### Proof.
Let $\cal E$ be a principal $\mathbf{E}$-bundle over $X$. A standard descent
argument shows that the $X$-scheme $\cal E$ is finite and étale. Thus, ${\cal
E}(X)={\cal E}(\eta)$. This proves the corollary. ∎
###### Proof of Theorem 1.4.
Since $\mathbf{H}$ is a quasi-reductive $R$-group scheme, there is a finite
étale $R$-group scheme $\mathbf{C}$ and a smooth $R$-group scheme morphism
$\lambda:\mathbf{H}\to\mathbf{C}$ such that its kernel $\mathbf{G}$ is a
reductive $R$-group scheme and $\lambda$ is surjective locally in the étale
topology on $R$. The sequence of étale sheaves
$1\to\mathbf{G}\to\mathbf{H}\to\mathbf{C}\to 1$ is exact. Thus, it induces a
commutative diagram of pointed set maps with exact rows
$\begin{CD}\mathbf{C}(R)@>{\partial}>{}>{H_{\text{\'{e}t}}^{1}(R,\mathbf{G})}@>{}>{}>{H_{\text{\'{e}t}}^{1}(R,\mathbf{H})}@>{}>{}>{H_{\text{\'{e}t}}^{1}(R,\mathbf{C})}\\\
@V{\alpha}V{}V@V{\beta}V{}V@V{\gamma}V{}V@V{\delta}V{}V\\\
\mathbf{C}(K)@>{\partial}>{}>{H_{\text{\'{e}t}}^{1}(K,\mathbf{G})}@>{}>{}>{H_{\text{\'{e}t}}^{1}(K,\mathbf{H})}@>{}>{}>{H_{\text{\'{e}t}}^{1}(K,\mathbf{C})}\end{CD}$
The map $\alpha$ is bijective by Corollary 2.2, the map $\delta$ has
trivial kernel by Corollary 2.3, and the map $\beta$ is injective by [Pan3,
Corollary 1.2]. Now a simple diagram chase shows that $\ker(\gamma)=\ast$. This
proves the theorem. ∎
###### Remark 2.4.
The statement of [Ch-G-P/O, Lemma] and its proof are both inaccurate, since
the authors do not assume the injectivity of the map
$H_{\text{{\'{e}t}}}^{1}(R,\mathbf{G}^{0})\to
H_{\text{{\'{e}t}}}^{1}(K,\mathbf{G}^{0})$.
###### Proof of Theorem 1.2.
The $R$-group scheme $\underline{\text{Aut}}:=\underline{\text{Aut}}_{R-gr-
sch}(\mathbf{G}_{1})$ is quasi-reductive by [D-G]. The $R$-scheme
$\underline{\text{Iso}}:=\underline{\text{Iso}}_{R-gr-sch}({\bf G_{1}},{\bf
G_{2}})$ is a principal $\underline{\text{Aut}}$-bundle. An isomorphism
$\varphi:\mathbf{G}_{1,K}\to\mathbf{G}_{2,K}$ of algebraic $K$-groups gives a
section of $\underline{\text{Iso}}$ over $K$. So, $\underline{\text{Iso}}_{K}$
is a trivial principal $\underline{\text{Aut}}_{K}$-bundle. Hence
$\underline{\text{Iso}}$ is a trivial principal
$\underline{\text{Aut}}$-bundle by Theorem 1.4. Thus, it has a section over
$R$. So, there is an $R$-group scheme isomorphism
$\mathbf{G}_{1}\cong\mathbf{G}_{2}$. ∎
## 3 Proof of the first assertion of Theorem 1.1
###### Lemma 3.1.
Let $X$ be a regular irreducible affine scheme. Let $\mathbf{G}$ be a
reductive $X$-group scheme and $\mathbf{T}$ be an $X$-torus. Let
$\mu:\mathbf{G}\to\mathbf{T}$ be an $X$-group scheme morphism which is
smooth as a scheme morphism. Then the kernel of $\mu$ is a quasi-reductive
$X$-group scheme.
###### Proof.
Consider the coradical $Corad(\mathbf{G})$ of $\mathbf{G}$ together with the
canonical $X$-group morphism $\alpha:\mathbf{G}\to Corad(\mathbf{G})$. By the
universal property of $\alpha$ there is a unique $X$-group
morphism $\bar{\mu}:Corad(\mathbf{G})\to\mathbf{T}$ such that
$\mu=\bar{\mu}\circ\alpha$. Since $\mu$ is surjective locally in the étale
topology, so is $\bar{\mu}$. Let $\text{ker}(\bar{\mu})$ be the kernel
of $\bar{\mu}$ and let $\mathbf{H}:=\alpha^{-1}(\text{ker}(\bar{\mu}))$ be the
scheme-theoretic pre-image of $\text{ker}(\bar{\mu})$. Clearly, $\mathbf{H}$
is a closed $X$-subgroup scheme of $\mathbf{G}$, which is the kernel of $\mu$.
We must check that $\mathbf{H}$ is quasi-reductive.
The $X$-group scheme $\text{ker}(\bar{\mu})$ is of multiplicative type. Hence
there is a finite $X$-group scheme $\mathbf{M}$ of multiplicative type and a
faithfully flat $X$-group scheme morphism
$can:\text{ker}(\bar{\mu})\to\mathbf{M}$, which has the following property:
for any finite $X$-group scheme $\mathbf{M}^{\prime}$ of multiplicative type
and an $X$-group morphism
$\varphi:\text{ker}(\bar{\mu})\to\mathbf{M}^{\prime}$ there is a unique
$X$-group morphism $\psi:\mathbf{M}\to\mathbf{M}^{\prime}$ with $\psi\circ
can=\varphi$. It is known that the kernel of $can$ is an $X$-torus. Call it
$\mathbf{T}^{0}$. Since $\mu$ is smooth, so is $\bar{\mu}$. Thus, the
$X$-group scheme $\text{ker}(\bar{\mu})$ is an $X$-smooth scheme. This yields
that $\mathbf{M}$ is étale over $X$.
Let $\beta=\alpha|_{\mathbf{H}}:\mathbf{H}\to\text{ker}(\bar{\mu})$ and let
$\mathbf{G}^{0}:=\beta^{-1}(\mathbf{T}^{0})$ be the scheme theoretic pre-image
of $\mathbf{T}^{0}$. Clearly, $\mathbf{G}^{0}$ is a closed $X$-subgroup scheme
of $\mathbf{H}$, which is the kernel of the morphism
$can\circ\beta:\mathbf{H}\to\mathbf{M}$. Let
$\gamma=\beta|_{\mathbf{G}^{0}}:\mathbf{G}^{0}\to\mathbf{T}^{0}$.
The $X$-group scheme $\mathbf{M}$ is finite and étale. The morphism $can$ is
smooth. The morphism $\beta$ is smooth as a base change of the smooth morphism
$\alpha$. Thus, $\lambda:=can\circ\beta$ is smooth. It is also surjective
locally in the étale topology on $X$, because $can$ and $\beta$ have this
property. By construction $\mathbf{G}^{0}=\text{ker}(\lambda)$. So, to
prove that $\mathbf{H}$ is quasi-reductive it remains to check the reductivity
of $\mathbf{G}^{0}$.
The $X$-group scheme $\mathbf{G}^{0}$ is affine as a closed $X$-subgroup
scheme of the reductive $X$-group scheme $\mathbf{G}$. We now prove that
$\mathbf{G}^{0}$ is smooth over $X$. Indeed, the morphism $\gamma$ is smooth
as a base change of the smooth morphism $\alpha$. The $X$-scheme
$\mathbf{T}^{0}$ is smooth, since it is an $X$-torus. Thus, the $X$-scheme
$\mathbf{G}^{0}$ is smooth.
Write $X$ as $SpecS$ for a regular integral domain $S$. It remains to verify
that for each algebraically closed field $\Omega$ and for each ring
homomorphism $S\to\Omega$ the scalar extension $\mathbf{G}^{0}_{\Omega}$ is a
connected reductive algebraic group over $\Omega$. Firstly, recall that
$\text{ker}(\alpha)$ is a semi-simple $S$-group scheme. It is the $S$-group
scheme $\mathbf{G}^{ss}$ under the notation of [D-G]. Clearly,
$\text{ker}(\gamma)=\text{ker}(\alpha)$. Thus,
$\text{ker}(\gamma)=\mathbf{G}^{ss}$ is a semi-simple $S$-group scheme. Since
the morphism $\gamma$ is smooth, for each algebraically closed field $\Omega$
and for each ring homomorphism $S\to\Omega$ we have an exact sequence of
smooth algebraic groups over $\Omega$
$1\to\mathbf{G}^{ss}_{\Omega}\to\mathbf{G}^{0}_{\Omega}\to\mathbf{T}^{0}_{\Omega}\to
1.$
The groups $\mathbf{T}^{0}_{\Omega}$, $\mathbf{G}^{ss}_{\Omega}$ are
connected. Hence the group $\mathbf{G}^{0}_{\Omega}$ is connected too. We know
already that it is affine.
Finally, we check that the unipotent radical $\mathbf{U}$ of
$\mathbf{G}^{0}_{\Omega}$ is trivial. Since there are no non-trivial
$\Omega$-group morphisms $\mathbf{U}\to\mathbf{T}^{0}_{\Omega}$, we conclude
that $\mathbf{U}\subset\mathbf{G}^{ss}_{\Omega}$. Since
$\mathbf{G}^{ss}_{\Omega}$ is semi-simple, one has $\mathbf{U}=\\{1\\}$. This
completes the proof of the reductivity of the $X$-group scheme
$\mathbf{G}^{0}$. Thus, the $X$-group scheme $\mathbf{H}$ is quasi-reductive.
This proves the lemma. ∎
###### Proof of the first assertion of Theorem 1.1.
Let $\mathbf{H}$ be the kernel of $\mu$. Since $\mu$ is smooth, the group
scheme sequence
$1\to\mathbf{H}\to\mathbf{G}\to\mathbf{T}\to 1$
gives rise to a short exact sequence of group sheaves in the étale topology.
In turn that sequence of sheaves induces a long exact sequence of pointed
sets. So, the boundary map
$\partial:\mathbf{T}(R)\to\textrm{H}^{1}_{\text{\'{e}t}}(R,\mathbf{H})$ fits
in a commutative diagram
$\begin{CD}\mathbf{T}(R)/\mu(\mathbf{G}(R))@>{}>{}>\textrm{H}^{1}_{\text{\'{e}t}}(R,\mathbf{H})\\\
@V{}V{}V@V{}V{}V\\\
\mathbf{T}(K)/\mu(\mathbf{G}(K))@>{}>{}>\textrm{H}^{1}_{\text{\'{e}t}}(K,\mathbf{H}).\\\
\end{CD}$
Clearly, the horizontal arrows have trivial kernels. The right vertical arrow
has trivial kernel by Lemma 3.1 and Theorem 1.4. Thus the left vertical arrow
has trivial kernel too. Since it is a group homomorphism, it is injective. ∎
## 4 Norms
We recall here a construction from [P1]. Let $k\subset K\subset L$ be field
extensions and assume that $L$ is finite separable over $K$. Let $K^{sep}$ be
a separable closure of $K$ and let $\sigma_{i}:L\to K^{sep},\enspace 1\leq i\leq
n$, be the different $K$-embeddings of $L$ into $K^{sep}$. Let $C$ be a smooth
commutative algebraic group scheme defined over $k$. One can define a norm map
${\mathcal{N}}_{L/K}:C(L)\to C(K)$
by ${\mathcal{N}}_{L/K}(\alpha)=\prod_{i}C(\sigma_{i})(\alpha)\in
C(K^{sep})^{{\mathcal{G}}(K)}=C(K)\ .$ In [P1], following Suslin and Voevodsky
[SV, Sect. 6], this construction is generalized to finite flat ring extensions.
Let $p:X\to Y$ be a finite flat morphism of affine schemes. Suppose that its
rank is constant, equal to $d$. Denote by $S^{d}(X/Y)$ the $d$-th symmetric
power of $X$ over $Y$.
Let $k$ be a field. Let $\mathcal{O}$ be the semi-local ring of finitely many
closed points on a smooth affine irreducible $k$-variety $X$. Let $C$ be an
affine smooth commutative $\mathcal{O}$-group scheme, let $p:X\to Y$ be a
finite flat morphism of affine $\mathcal{O}$-schemes and let $f:X\to C$ be any
$\mathcal{O}$-morphism. In [P1] the norm $N_{X/Y}(f)$ of $f$ is defined as the
composite map
composite map
$Y\xrightarrow{N_{X/Y}}S^{d}(X/Y)\to
S^{d}_{\mathcal{O}}(X)\xrightarrow{S^{d}_{\mathcal{O}}(f)}S^{d}_{\mathcal{O}}(C)\xrightarrow{\times}C$
(2)
Here we write $\times$ for the group law on $C$. The norm maps $N_{X/Y}$
satisfy the following conditions:
* (i’)
Base change: for any map $f:Y^{\prime}\to Y$ of affine schemes, putting
$X^{\prime}=X\times_{Y}Y^{\prime}$ we have a commutative diagram
$\begin{CD}C(X)@>{(id\times f)^{*}}>{}>C(X^{\prime})\\\
@V{N_{X/Y}}V{}V@V{}V{N_{X^{\prime}/Y^{\prime}}}V\\\
C(Y)@>{f^{*}}>{}>C(Y^{\prime})\end{CD}$
* (ii’)
Multiplicativity: if $X=X_{1}\amalg X_{2}$ then the diagram commutes
$\begin{CD}C(X)@>{}>{}>C(X_{1})\times C(X_{2})\\\
@V{N_{X/Y}}V{}V@V{}V{N_{X_{1}/Y}\cdot N_{X_{2}/Y}}V\\\ C(Y)@>{id}>{}>C(Y)\end{CD}$
* (iii’)
Normalization: if $X=Y$ and the map $X\to Y$ is the identity, then
$N_{X/Y}=id_{C(X)}$.
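For orientation (this example is ours): when $C=\mathbf{G}_{m}$, the map ${\mathcal{N}}_{L/K}$ of the field-theoretic construction above is the classical norm of the extension $L/K$. For a quadratic extension $L=K(\sqrt{d})$ with $d\in K^{*}\setminus(K^{*})^{2}$ one gets

```latex
\mathcal{N}_{L/K}(a+b\sqrt{d})=(a+b\sqrt{d})(a-b\sqrt{d})=a^{2}-db^{2},
  \qquad a,b\in K.
```

Property (ii’) then expresses the multiplicativity of the norm over a disjoint union, and (iii’) its triviality for the identity map.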
## 5 Unramified elements
Let $k$ be a field, $\mathcal{O}$ be the semi-local ring of finitely many
closed points on a $k$-smooth irreducible affine $k$-variety $X$. Let $K$ be
the fraction field of $\mathcal{O}$, that is $K=k(X)$. Let
$\mu:\mathbf{G}\to\mathbf{T}$
be a smooth $\mathcal{O}$-morphism of reductive $\mathcal{O}$-group schemes,
with a torus $\mathbf{T}$. We work in this section with the category of
commutative Noetherian $\mathcal{O}$-algebras. For a commutative
$\mathcal{O}$-algebra $S$ set
${\cal F}(S)=\mathbf{T}(S)/\mu(\mathbf{G}(S)).$ (3)
For an element $\alpha\in\mathbf{T}(S)$ we will write $\bar{\alpha}$ for its
image in ${\cal F}(S)$. In this section we will write ${\cal F}$ for the
functor (3). The following result is a particular case of the first part of
Theorem 1.1 (that part is proved in Section 3).
###### Theorem 5.1.
Let $S$ be an $\mathcal{O}$-algebra which is a discrete valuation ring with
fraction field $L$. Then the map ${\cal F}(S)\to{\cal F}(L)$ is injective.
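In the squaring example $\mu(t)=t^{2}$ on $\mathbf{G}_{m}$ (ours; assume $2$ invertible), where ${\cal F}(S)=S^{*}/(S^{*})^{2}$, the injectivity of Theorem 5.1 is an elementary valuation computation: if a unit $u\in S^{*}$ becomes a square in $L$, then

```latex
u=a^{2},\ a\in L^{*}
  \quad\Longrightarrow\quad 2\,v(a)=v(u)=0
  \quad\Longrightarrow\quad a\in S^{*},\ u\in (S^{*})^{2},
```

where $v$ is the valuation of the discrete valuation ring $S$; thus the class of $u$ is already trivial in ${\cal F}(S)$.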
###### Lemma 5.2.
Let $\mu:\mathbf{G}\to\mathbf{T}$ be the above morphism of our reductive group
schemes. Then for an $\mathcal{O}$-algebra $L$, where $L$ is a field, the
boundary map
$\partial:\mathbf{T}(L)/{\mu(\mathbf{G}(L))}\to\textrm{H}^{1}_{\text{\'{e}t}}(L,\mathbf{H})$
is injective.
###### Proof.
Repeat literally the proof of [Pan2, Lemma 6.2]. ∎
Let $k$, $\mathcal{O}$ and $K$ be as above in this Section. Let $\mathcal{K}$
be a field containing $K$ and $x:\mathcal{K}^{*}\to\mathbb{Z}$ be a discrete
valuation vanishing on $K$. Let $A_{x}$ be the valuation ring of $x$. Clearly,
$\mathcal{O}\subset A_{x}$. Let $\hat{A}_{x}$ and $\hat{\mathcal{K}}_{x}$ be
the completions of $A_{x}$ and $\mathcal{K}$ with respect to $x$. Let
$i:\mathcal{K}\hookrightarrow\hat{\mathcal{K}}_{x}$ be the inclusion. By
Theorem 5.1 the map ${\cal F}(\hat{A}_{x})\to{\cal F}(\hat{\mathcal{K}}_{x})$
is injective. We will identify ${\cal F}(\hat{A}_{x})$ with its image under
this map. Set
${\cal F}_{x}(\mathcal{K})=i_{*}^{-1}({\cal F}(\hat{A}_{x})).$
The inclusion $A_{x}\hookrightarrow\mathcal{K}$ induces a map ${\cal
F}(A_{x})\to{\cal F}(\mathcal{K})$ which is injective by Theorem 5.1. So both
groups ${\cal F}(A_{x})$ and ${\cal F}_{x}(\mathcal{K})$ are subgroups of
${\cal F}(\mathcal{K})$. The following lemma shows that ${\cal
F}_{x}(\mathcal{K})$ coincides with the subgroup of ${\cal F}(\mathcal{K})$
consisting of all elements unramified at $x$.
###### Lemma 5.3.
${\cal F}(A_{x})={\cal F}_{x}(\mathcal{K})$.
###### Proof.
Repeat literally the proof of [Pan2, Lemma 6.3]. ∎
Let $S$ be an $\mathcal{O}$-algebra which is an integral domain and suppose
$S$ is a regular ring. Let $L$ be its fraction field. For each height $1$
prime ideal $\mathfrak{p}$ in $S$ the group map ${\cal
F}(S_{\mathfrak{p}})\to{\cal F}(L)$ is injective by the first part of Theorem
1.1. Define the subgroup of $S$-unramified elements of $\mathcal{F}(L)$ as
$\mathcal{F}_{nr,S}(L)=\bigcap_{\mathfrak{p}\in
Spec(S)^{(1)}}\mathcal{F}(S_{\mathfrak{p}})\subseteq{\cal F}(L),$ (4)
where $Spec(S)^{(1)}$ is the set of height $1$ prime ideals in $S$. Obviously
the image of $\mathcal{F}(S)$ in $\mathcal{F}(L)$ is contained in
$\mathcal{F}_{nr,S}(L)$. For each height one prime $\mathfrak{p}$ in $S$ we
construct specialization maps $s_{\mathfrak{p}}:{\cal F}_{nr,S}(L)\to{\cal
F}(l(\mathfrak{p}))$, where $L$ is the field of fractions of $S$ and
$l(\mathfrak{p})$ is the residue field of $S$ at the prime $\mathfrak{p}$.
###### Definition 5.4.
Let
$Ev_{\mathfrak{p}}:\mathbf{T}(S_{\mathfrak{p}})\to\mathbf{T}(l(\mathfrak{p}))$
and $ev_{\mathfrak{p}}:{\cal F}(S_{\mathfrak{p}})\to{\cal F}(l(\mathfrak{p}))$
be the maps induced by the canonical $S$-algebra homomorphism
$S_{\mathfrak{p}}\to l(\mathfrak{p})$. Define a homomorphism
$s_{\mathfrak{p}}:{\cal F}_{nr,S}(L)\to{\cal F}(l(\mathfrak{p}))$ by
$s_{\mathfrak{p}}(\alpha)=ev_{\mathfrak{p}}(\tilde{\alpha})$, where
$\tilde{\alpha}$ is a lift of $\alpha$ to ${\cal F}(S_{\mathfrak{p}})$.
Theorem 5.1 shows that the map $s_{\mathfrak{p}}$ is well-defined. It is
called the specialization map. The map $ev_{\mathfrak{p}}$ is called the
evaluation map at the prime $\mathfrak{p}$.
Obviously for $\alpha\in\mathbf{T}(S_{\mathfrak{p}})$ one has
$s_{\mathfrak{p}}(\bar{\alpha})=\overline{Ev_{\mathfrak{p}}(\alpha)}\in{\cal
F}(l(\mathfrak{p}))$.
Let $k$, $\mathcal{O}$ and $K$ be as above in this Section. The following two
results are proved using literally the same arguments as in the proof of
[Pan2, Thm. 6.5] and [Pan2, Cor. 6.6] respectively.
###### Theorem 5.5 (Homotopy invariance).
Let $K(t)$ be the rational function field in one variable over the field $K$.
Define ${\cal F}_{nr,K[t]}(K(t))$ by the formulae (4). Then one has an
equality
${\cal F}(K)={\cal F}_{nr,K[t]}(K(t)).$
###### Corollary 5.6.
Let
$s_{0},s_{1}:{\cal F}_{nr,K[t]}(K(t))\to{\cal F}(K)$
be the specialization maps at zero and at one (at the primes $(t)$ and $(t-1)$).
Then $s_{0}=s_{1}$.
###### Lemma 5.7.
Let $B\subset A$ be a finite extension of $K$-smooth algebras, which are
integral domains and each has dimension one. Let $0\neq f\in A$ and let $h\in
B\cap fA$ be such that the induced map $B/hB\to A/fA$ is an isomorphism.
Suppose $hA=fA\cap J^{\prime\prime}$ for an ideal $J^{\prime\prime}\subseteq
A$ co-prime to the principal ideal $fA$.
Let $E$ and $F$ be the field of fractions of $B$ and $A$ respectively. Let
$\alpha\in\mathbf{T}(A_{f})$ be such that $\bar{\alpha}\in{\cal F}(F)$ is
$A$-unramified. Then, for $\beta=N_{F/E}(\alpha)$, the class
$\bar{\beta}\in{\cal F}(E)$ is $B$-unramified.
###### Proof.
Repeat literally the proof of [Pan2, Lemma 6.7]. ∎
## 6 A few recollections
Let $X$ be an affine $k$-smooth irreducible $k$-variety, and let
$x_{1},x_{2},\dots,x_{n}$ be closed points in $X$. Let $\mathcal{O}$ be the
semi-local ring $\mathcal{O}_{X,\\{x_{1},x_{2},\dots,x_{n}\\}}$. Let
$U=Spec(\mathcal{O})$ and $can:U\hookrightarrow X$ be the canonical embedding.
Let $\mathbf{G}$ be a reductive $X$-group scheme and let
$\mathbf{G}_{U}=can^{*}(\mathbf{G})$ be the pull-back of $\mathbf{G}$ to $U$.
Let $\mathbf{T}$ be an $X$-torus and let $\mathbf{T}_{U}=can^{*}(\mathbf{T})$
be the pull-back of $\mathbf{T}$ to $U$. Let $\mu:\mathbf{G}\to\mathbf{T}$ be
an $X$-group scheme morphism which is smooth as an $X$-scheme morphism. Let
$\mu_{U}=can^{*}(\mu)$. The following result is [Pan2, Theorem 4.1].
###### Theorem 6.1.
Given a non-zero function $\emph{f}\in k[X]$ vanishing at each point $x_{i}$,
there is a diagram of the form
$\Delta^{\prime}:U\to\mathcal{X}^{\prime},\qquad\sigma:\mathcal{X}^{\prime}\to\mathbf{A}^{1}\times U,\qquad q_{U}=\text{\rm pr}_{U}\circ\sigma:\mathcal{X}^{\prime}\to U,\qquad q_{X}:\mathcal{X}^{\prime}\to X,\qquad can:U\hookrightarrow X$
(5)
with an irreducible affine scheme $\mathcal{X}^{\prime}$, a smooth morphism
$q_{U}$, a finite surjective $U$-morphism $\sigma$, an essentially smooth
morphism $q_{X}$, and a function $f^{\prime}\in q^{*}_{X}(\emph{f}\
)k[\mathcal{X}^{\prime}]$, which enjoys the following properties:
* (a)
if $\mathcal{Z}^{\prime}$ is the closed subscheme of $\mathcal{X}^{\prime}$
defined by the ideal $(f^{\prime})$, then the morphism
$\sigma|_{\mathcal{Z}^{\prime}}:\mathcal{Z}^{\prime}\to\mathbf{A}^{1}\times U$
is a closed embedding and the morphism
$q_{U}|_{\mathcal{Z}^{\prime}}:\mathcal{Z}^{\prime}\to U$ is finite;
* (a’)
$q_{U}\circ\Delta^{\prime}=id_{U}$ and $q_{X}\circ\Delta^{\prime}=can$ and
$\sigma\circ\Delta^{\prime}=i_{0}$,
where $i_{0}$ is the zero section of the projection $\text{\rm pr}_{U}$;
* (b)
$\sigma$ is étale in a neighborhood of
$\mathcal{Z}^{\prime}\cup\Delta^{\prime}(U)$;
* (c)
$\sigma^{-1}(\sigma(\mathcal{Z}^{\prime}))=\mathcal{Z}^{\prime}\coprod\mathcal{Z}^{\prime\prime}$
scheme theoretically for some closed subscheme $\mathcal{Z}^{\prime\prime}$
and $\mathcal{Z}^{\prime\prime}\cap\Delta^{\prime}(U)=\emptyset$;
* (d)
$\mathcal{D}_{0}:=\sigma^{-1}(\\{0\\}\times
U)=\Delta^{\prime}(U)\coprod\mathcal{D}^{\prime}_{0}$ scheme theoretically for
some closed subscheme $\mathcal{D}^{\prime}_{0}$ and
$\mathcal{D}^{\prime}_{0}\cap\mathcal{Z}^{\prime}=\emptyset$;
* (e)
for $\mathcal{D}_{1}:=\sigma^{-1}(\\{1\\}\times U)$ one has
$\mathcal{D}_{1}\cap\mathcal{Z}^{\prime}=\emptyset$.
* (f)
there is a monic polynomial $h\in\mathcal{O}[t]$ with
$(h)=Ker[\mathcal{O}[t]\xrightarrow{\sigma^{*}}k[\mathcal{X}^{\prime}]\xrightarrow{-}k[\mathcal{X}^{\prime}]/(f^{\prime})]$,
where the map bar takes any $g\in k[\mathcal{X}^{\prime}]$ to ${\bar{g}}\in
k[\mathcal{X}^{\prime}]/(f^{\prime})$;
* (g)
there are $\mathcal{X}^{\prime}$-group scheme isomorphisms
$\Phi:q^{*}_{U}(\mathbf{G}_{U})\to q^{*}_{X}(\mathbf{G})$,
$\Psi:q^{*}_{U}(\mathbf{T}_{U})\to q^{*}_{X}(\mathbf{T})$ with
$(\Delta^{\prime})^{*}(\Phi)=id_{\mathbf{G}_{U}}$,
$(\Delta^{\prime})^{*}(\Psi)=id_{\mathbf{T}_{U}}$ and
$q^{*}_{X}(\mu)\circ\Phi=\Psi\circ q^{*}_{U}(\mu_{U})$.
###### Remark 6.2.
The triple $(q_{U}:\mathcal{X}^{\prime}\to U,f^{\prime},\Delta^{\prime})$ is a
nice triple over $U$, since $\sigma$ is a finite surjective $U$-morphism. See
[PSV, Defn.3.1] for the definition of a nice triple.
The morphism $q_{X}$ is not equal to $can\circ q_{U}$, since $f^{\prime}\in
q^{*}_{X}(\textrm{f})k[\mathcal{X}^{\prime}]$ and the morphism
$q_{U}|_{\mathcal{Z}^{\prime}}:\mathcal{Z}^{\prime}=\\{f^{\prime}=0\\}\to U$
is finite.
To formulate a consequence of Theorem 6.1 (see Corollary 6.3), note that
using items (b) and (c) of Theorem 6.1 one can find an element $g\in
I(\mathcal{Z}^{\prime\prime})$ such that
(1)
$(f^{\prime})+(g)=\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})$,
(2)
$Ker((\Delta^{\prime})^{*})+(g)=\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})$,
(3)
$\sigma_{g}=\sigma|_{\mathcal{X}^{\prime}_{g}}:\mathcal{X}^{\prime}_{g}\to\mathbf{A}^{1}_{U}$
is étale.
Here is the corollary. It is proved in [Pan0, Cor. 7.2].
###### Corollary 6.3.
The function $f^{\prime}$ from Theorem 6.1, the polynomial $h$ from item
(f) of that theorem, the morphism
$\sigma:\mathcal{X}^{\prime}\to\mathbf{A}^{1}_{U}$ and the function
$g\in\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})$ defined just above enjoy
the following properties:
* (i)
the morphism
$\sigma_{g}=\sigma|_{\mathcal{X}^{\prime}_{g}}:\mathcal{X}^{\prime}_{g}\to\mathbf{A}^{1}\times
U$ is étale,
* (ii)
data
$(\mathcal{O}[t],\sigma^{*}_{g}:\mathcal{O}[t]\to\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})_{g},h)$
satisfies the hypotheses of [C-T/O, Prop.2.6], i.e.
$\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})_{g}$ is a
finitely generated $\mathcal{O}[t]$-algebra, the element $(\sigma_{g})^{*}(h)$
is not a zero-divisor in
$\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})_{g}$ and
$\mathcal{O}[t]/(h)=\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})_{g}/h\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})_{g}$
,
* (iii)
$(\Delta(U)\cup\mathcal{Z}^{\prime})\subset\mathcal{X}^{\prime}_{g}$ and
$\sigma_{g}\circ\Delta=i_{0}:U\to\mathbf{A}^{1}\times U$,
* (iv)
$\mathcal{X}^{\prime}_{gh}\subseteq\mathcal{X}^{\prime}_{gf^{\prime}}\subseteq\mathcal{X}^{\prime}_{f^{\prime}}\subseteq\mathcal{X}^{\prime}_{q^{*}_{X}(\emph{f})}$
,
* (v)
$\mathcal{O}[t]/(h)=\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})/(f^{\prime})$,
$h\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})=(f^{\prime})\cap
I(\mathcal{Z}^{\prime\prime})$ and
$(f^{\prime})+I(\mathcal{Z}^{\prime\prime})=\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})$.
## 7 Purity
Let $S$ be a regular ring, $\mathbf{G}$ a reductive $S$-group scheme,
$\mathbf{T}$ an $S$-torus, and $\mu:\mathbf{G}\to\mathbf{T}$ an $S$-group scheme
morphism which is smooth as a scheme morphism. Suppose $S$ is an integral
domain and let $L$ be its field of fractions. For each $S$-algebra $S^{\prime}$
write ${\cal F}(S^{\prime})$ for the group
$\mathbf{T}(S^{\prime})/\mu(\mathbf{G}(S^{\prime}))$. For any
$a\in\mathbf{T}(S^{\prime})$ write $\bar{a}$ for the class of $a$ in ${\cal
F}(S^{\prime})$. If $\mathfrak{p}$ is a height one prime ideal in $S$, then
by Theorem 1.1 the group ${\cal F}(S_{\mathfrak{p}})$ is a subgroup of ${\cal
F}(L)$.
Recall some terminology. For an element $a\in\mathbf{T}(L)$ and a height one prime
$\mathfrak{p}\subset S$ we say that $\bar{a}\in{\cal F}(L)$ is unramified at
$\mathfrak{p}$ if $\bar{a}$ is in ${\cal F}(S_{\mathfrak{p}})$. We say that
the element $\bar{a}\in{\cal F}(L)$ is $S$-unramified if for any height one
prime ideal $\mathfrak{p}$ in $S$ the element $\bar{a}$ is in ${\cal
F}(S_{\mathfrak{p}})$. Clearly, the image of ${\cal F}(S)$ in ${\cal F}(L)$ is
contained in $\cap\ {\cal F}(S_{\mathfrak{p}})$, where the intersection is taken over
all height one primes of $S$. We say that purity holds for the ring $S$ if
$Im[{\cal F}(S)\to{\cal F}(L)]=\cap\ {\cal F}(S_{\mathfrak{p}}).$
Equivalently, purity holds for $S$ if each $S$-unramified element of ${\cal
F}(L)$ comes from ${\cal F}(S)$. Clearly, the sequence $\\{1\\}\to{\cal
F}(S_{\mathfrak{p}})\to{\cal
F}(L)\xrightarrow{r_{\mathfrak{p}}}\mathbf{T}(L)/[\mathbf{T}(S_{\mathfrak{p}})\cdot\mu(\mathbf{G}(L))]\to\\{1\\}$
is exact, where $r_{\mathfrak{p}}$ is the factorization map. Thus, for an element
$a\in\mathbf{T}(L)$, its class $\bar{a}$ in ${\cal F}(L)$ is unramified at
$\mathfrak{p}$ if and only if $r_{\mathfrak{p}}(\bar{a})=0$. Hence purity
holds for $S$ if and only if the sequence ${\cal F}(S)\to{\cal
F}(L)\xrightarrow{\sum
r_{\mathfrak{p}}}\oplus_{\mathfrak{p}}\mathbf{T}(L)/[\mathbf{T}(S_{\mathfrak{p}})\cdot\mu(\mathbf{G}(L))]$
is exact. Our aim is to prove the following assertion:
$(\ast)$ Purity holds for the ring $R$, group schemes
$\mathbf{G}$,$\mathbf{T}$ and the morphism $\mu$ as in Theorem 1.1.
The proof is subdivided into a few steps.
Claim 1. Let $X$ be a $k$-smooth irreducible affine $k$-variety. Let
$\mathbf{G}$ be a reductive $X$-group scheme, $\mathbf{T}$ be an $X$-torus and
$\mu:\mathbf{G}\to\mathbf{T}$ be an $X$-group scheme morphism which is smooth
as an $X$-scheme morphism. Suppose the $k$-algebra $R$ is the semi-local ring
of finitely many closed points on $X$. Then purity holds for $R$.
To prove this Claim we repeat literally the proof of [Pan2, Theorem 1.1],
replacing references to [Pan2, Corollary 4.3, (ii),(v)] with ones to items
(ii) and (v) of Corollary 6.3, references to [Pan2, Lemma 6.7] with ones to
Lemma 5.7, references to [Pan2, Theorem 6.5] with ones to Theorem 5.5,
references to [Pan2, Corollary 6.6] with ones to Corollary 5.6, references to
[Pan2, Theorem 4.1] with ones to Theorem 6.1, and references to [Pan2,
Definition 6.4] with ones to the remark at the end of Definition 5.4. Claim 1
is proved.
Claim 2. Let $X$ be a $k$-smooth irreducible affine $k$-variety and
$\xi_{1},...,\xi_{n}$ be points of the scheme $Spec(k[X])$ such that for each
pair $r,s$ the point $\xi_{r}$ is not in the closure
$\overline{\\{\xi_{s}\\}}$ of $\xi_{s}$. Let $R$ be the semi-local ring
$\mathcal{O}_{X,\xi_{1},...,\xi_{n}}$ of scheme points $\xi_{1},...,\xi_{n}$
of $Spec(k[X])$. Let $\mathbf{G}$ be a reductive $X$-group scheme,
$\mathbf{T}$ be an $X$-torus and $\mu:\mathbf{G}\to\mathbf{T}$ be an $X$-group
scheme morphism which is smooth as an $X$-scheme morphism. Then purity holds
for $R$.
To prove this Claim take an element $a\in\mathbf{T}(k(X))$ such that $\bar{a}$
is unramified at each irreducible divisor $D$ containing at least one of the
points $\xi_{r}$. We have to prove that the element $\bar{a}\in{\cal F}(k(X))$
is in the image of ${\cal F}(R)$. Clearly, there is a non-zero $f\in k[X]$ such
that $a\in\mathbf{T}(k[X_{f}])$. Write down the divisor $div(f)\in Div(X)$ in
the form $div(f)=\Sigma m_{i}D_{i}+\Sigma n_{j}D^{\prime}_{j}$ such that for
each index $i$ there is an index $r$ with $\xi_{r}\in D_{i}$ and for any index
$j$ and any index $r$ the point $\xi_{r}$ does not belong to $D^{\prime}_{j}$.
There is an element $g\in k[X]$ such that for any index $j$ the divisor
$D^{\prime}_{j}$ is contained in the closed subset $\\{g=0\\}$ and $g$ does
not belong to any of the $\xi_{r}$'s. Replacing $X$ with $X_{g}$ we see that
$a\in\mathbf{T}(k[X_{f}])$, $div(f)=\Sigma m_{i}D_{i}$ and $\bar{a}$ is
unramified at each irreducible divisor $D_{i}$. Hence $\bar{a}$ is unramified
at each height one prime ideal of $k[X]$. Our assumption on the points
$\xi_{r}$ yields the following: one can choose closed points
$x_{r}\in\overline{\\{\xi_{r}\\}}$ such that for each $r\neq s$ the point
$x_{r}$ is not in $\overline{\\{\xi_{s}\\}}$. Particularly, for each $r\neq s$
one has $x_{r}\neq x_{s}$. The element $\bar{a}$ is unramified at each height
one prime ideal of $k[X]$. Thus, by Claim 1 the element $\bar{a}$ is in the
image of ${\cal F}(\mathcal{O}_{X,x_{1},...,x_{n}})$. So, the element
$\bar{a}$ is in the image of ${\cal
F}(\mathcal{O}_{X,\xi_{1},...,\xi_{n}})={\cal F}(R)$. The Claim 2 is proved.
Claim 3. The assertion $(\ast)$ is true.
In the rest of the section we prove Claim 3. Clearly, we may assume that $k$
is a prime field and hence $k$ is perfect. It follows from Popescu’s theorem
[Pop, Swa, Spi] that $R$ is a filtered inductive limit of smooth $k$-algebras
$R_{\alpha}$. Modifying the inductive system $R_{\alpha}$ if necessary, we can
assume that each $R_{\alpha}$ is integral. Write $\varphi_{\alpha}:R_{\alpha}\to R$
for the canonical homomorphism and, for each maximal ideal $\mathfrak{m}_{i}$
in $R$ ($i=1,...,n$), set
$\mathfrak{p}_{i}=\varphi^{-1}_{\alpha}(\mathfrak{m}_{i})$. The homomorphism
$\varphi_{\alpha}$ induces a homomorphism of semi-local rings
$\varphi^{\prime}_{\alpha}:(R_{\alpha})_{\mathfrak{p}_{1},...,\mathfrak{p}_{n}}\to
R$. From this moment on we will write $A_{\alpha}$ for
$(R_{\alpha})_{\mathfrak{p}_{1},...,\mathfrak{p}_{n}}$ and $A$ for $R$ (to
keep consistency of notation). Thus, $A$ is a filtered inductive limit of
regular semi-local $k$-algebras $A_{\alpha}$.
There exist an index $\alpha$, a reductive group scheme $\mathbf{G}_{\alpha}$,
a torus $\mathbf{T}_{\alpha}$ over $A_{\alpha}$ and an $A_{\alpha}$-group
scheme morphism $\mu_{\alpha}:\mathbf{G}_{\alpha}\to\mathbf{T}_{\alpha}$ which
is smooth as an $A_{\alpha}$-scheme morphism such that
$\mathbf{G}=\mathbf{G}_{\alpha}\times_{Spec(A_{\alpha})}Spec(A)$,
$\mathbf{T}=\mathbf{T}_{\alpha}\times_{Spec(A_{\alpha})}Spec(A)$,
$\mu=\mu_{\alpha}\times_{Spec(A_{\alpha})}Spec(A)$. Replacing the index system
with a co-final one consisting of indexes $\beta\geq\alpha$, we may and will
suppose that the reductive group scheme $\mathbf{G}$, the torus $\mathbf{T}$
and the group scheme morphism $\mu:\mathbf{G}\to\mathbf{T}$ come from
$A_{\alpha}$, and that $\mu$ is smooth as an $A_{\alpha}$-scheme morphism. These
observations and Claim 2 yield the following intermediate result:
$(\ast\ast)$ for these $\mathbf{G}$, $\mathbf{T}$ and
$\mu:\mathbf{G}\to\mathbf{T}$ over $A_{\alpha}$ purity holds for each ring
$A_{\beta}$ with $\beta\geq\alpha$.
Let now $K$ be the field of fractions of $A$ and, for each $\beta\geq\alpha$,
let $K_{\beta}$ be the field of fractions of $A_{\beta}$. For each index
$\beta\geq\alpha$ let $\mathfrak{a}_{\beta}$ be the kernel of the map
$\varphi^{\prime}_{\beta}:A_{\beta}\to A$ and
$B_{\beta}=(A_{\beta})_{\mathfrak{a}_{\beta}}$. Clearly, for each
$\beta\geq\alpha$, $K_{\beta}$ is the field of fractions of $B_{\beta}$. The
composition map $A_{\beta}\to A\to K$ factors through $B_{\beta}$. Since $A$
is a filtering direct limit of the $A_{\beta}$’s we see that $K$ is a
filtering direct limit of the $B_{\beta}$’s. We will write $\psi_{\beta}$ for
the canonical morphism $B_{\beta}\to K$.
###### Lemma 7.1.
For each index $\beta\geq\alpha$ the group map ${\cal F}(B_{\beta})\to{\cal
F}(K_{\beta})$ is injective.
###### Proof.
Just apply the first part of Theorem 1.1 to the $k$-algebra $B_{\beta}$. ∎
###### Lemma 7.2.
Let $a\in{\cal F}(K)$ be an $A$-unramified element. Then there exists an index
$\beta\geq\alpha$ and an element $b_{\beta}\in{\cal F}(B_{\beta})$ such that
$\psi_{\beta}(b_{\beta})=a$ and the class of $b_{\beta}$ in ${\cal F}(K_{\beta})$ is
$A_{\beta}$-unramified.
###### Proof.
Repeat literally the proof of [P1, Lemma 9.0.9]. It works for the semi-local
case as well. ∎
We complete the proof of Claim 3 as follows. Let $a\in{\cal F}(K)$ be an
$A$-unramified element. We have to check that it comes from ${\cal F}(A)$. By
Lemma 7.2 there exists an index $\beta\geq\alpha$ and an element $b_{\beta}\in
{\cal F}(B_{\beta})$ such that $\psi_{\beta}(b_{\beta})=a$ and the class of
$b_{\beta}$ in ${\cal F}(K_{\beta})$ is $A_{\beta}$-unramified. For this index
$\beta$ consider a commutative diagram of $k$-algebras
$\begin{array}{ccc}
A_{\beta} & \xrightarrow{\ \varphi_{\beta}\ } & A\\
\downarrow & & \downarrow\\
B_{\beta} & \xrightarrow{\ \psi_{\beta}\ } & K\\
\downarrow & & \\
K_{\beta}. & &
\end{array}$
The class $\bar{b}_{\beta}\in{\cal F}(K_{\beta})$ is $A_{\beta}$-unramified.
Hence by the statement $(\ast\ast)$ there exists an element
$a_{\beta}\in\mathbf{T}(A_{\beta})$ such that
$\bar{b}_{\beta}=\bar{a}_{\beta}$ in ${\cal F}(K_{\beta})$. By Lemma 7.1 one
has an equality $\bar{b}_{\beta}=\bar{a}_{\beta}$ in ${\cal F}(B_{\beta})$.
Hence $\bar{a}\in{\cal F}(K)$ coincides with the image of the element
$\varphi_{\beta}(\bar{a}_{\beta})$ in ${\cal F}(K)$. Claim 3 is proved.
Thus, the sequence (1) is exact at its middle term.
## 8 Proof of Theorem 1.1
###### Proof of Theorem 1.1.
The proof of the first assertion of Theorem 1.1 is given in Section 3. The
exactness of the sequence (1) at its middle term is proved in Section 7.
We now prove the surjectivity of the map $\sum r_{\mathfrak{p}}$. Clearly, it is
sufficient to prove the surjectivity of the map
$\mathbf{T}(K)\xrightarrow{\sum
r^{\prime}_{\mathfrak{p}}}\bigoplus_{\mathfrak{p}}\mathbf{T}(K)/\mathbf{T}(R_{\mathfrak{p}})$,
where $\mathfrak{p}$ runs over the height $1$ primes of $R$ and
$r^{\prime}_{\mathfrak{p}}$ is the factorization map. We follow arguments from
[Pan3, Section 9].
We prefer to switch to the scheme terminology. Set $X:=Spec(R)$. Consider a
finite étale Galois morphism $\pi:\tilde{X}\to X$ with irreducible $\tilde{X}$
such that the torus $\mathbf{T}$ splits over $\tilde{X}$. Let
$Gal:=Aut(\tilde{X}/X)$ be its Galois group. Since the torus $\mathbf{T}$
splits over $\tilde{X}$ we have a short exact sequence of $Gal$-modules
$0\to\mathbf{T}(\tilde{\mathcal{O}})\to\mathbf{T}(\tilde{K})\to\oplus_{y}\mathbf{T}(\tilde{K})/\mathbf{T}(\tilde{\mathcal{O}}_{X,y})\to
0,$
where $\tilde{\mathcal{O}}=\Gamma(\tilde{X},\mathcal{O}_{\tilde{X}})$,
$\tilde{K}$ is the fraction field of $\tilde{\mathcal{O}}$, $y$ runs over the
set $X^{(1)}$ of all codimension $1$ points of $X$ and for any $y\in X^{(1)}$
the ring $\tilde{\mathcal{O}}_{X,y}$ is the semi-local ring
$\mathcal{O}_{\tilde{X},\tilde{y}}$ of the finite set $\tilde{y}=\pi^{-1}(y)$
on the scheme $\tilde{X}$. Write $\mathcal{O}$ for $R$ to be consistent with
the above notation.
The above short exact sequence of $Gal$-modules gives rise to a long exact
sequence of $Gal$-cohomology groups of the form
$0\to\mathbf{T}(\mathcal{O})\xrightarrow{in}\mathbf{T}(K)\to\oplus_{y}[\mathbf{T}(\tilde{K})/\mathbf{T}(\tilde{\mathcal{O}}_{X,y})]^{Gal}\to
H^{1}(Gal,\mathbf{T}(\tilde{\mathcal{O}}))\xrightarrow{H^{1}(in)}H^{1}(Gal,\mathbf{T}(\tilde{K})).$
We claim that the map $H^{1}(in)$ is a monomorphism. Indeed, the group
$H^{1}(Gal,\mathbf{T}(\tilde{\mathcal{O}}))$ is a subgroup of the group
$H^{1}_{et}(X,\mathbf{T})$ and the group $H^{1}(Gal,\mathbf{T}(\tilde{K}))$ is
a subgroup of the group $H^{1}_{et}(Spec~{}K,\mathbf{T}_{K})$. By Theorem 1.4
the group map $H^{1}_{et}(X,\mathbf{T})\to
H^{1}_{et}(Spec~{}K,\mathbf{T}_{K})$ is a monomorphism. Thus, $H^{1}(in)$ is a
monomorphism also. So, we have a short exact sequence of the form
$0\to\mathbf{T}(\mathcal{O})\xrightarrow{in}\mathbf{T}(K)\to\oplus_{y}[\mathbf{T}(\tilde{K})/\mathbf{T}(\tilde{\mathcal{O}}_{X,y})]^{Gal}\to
0$.
There is also the complex
$0\to\mathbf{T}(\mathcal{O})\xrightarrow{in}\mathbf{T}(K)\to\oplus_{y}\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})$.
Set $\alpha=id_{\mathbf{T}(\mathcal{O})}$, $\beta=id_{\mathbf{T}(K)}$ and let
$\gamma=\oplus_{y}\gamma_{y}$, where
$\gamma_{y}:\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})\to[\mathbf{T}(\tilde{K})/\mathbf{T}(\tilde{\mathcal{O}}_{X,y})]^{Gal}$
is induced by the inclusion $K\subset\tilde{K}$. The maps $\alpha$, $\beta$
and $\gamma$ form a morphism between this complex and the above short exact
sequence. We claim that this morphism is an isomorphism. This claim completes
the proof of the theorem.
To prove this claim it is sufficient to prove that $\gamma$ is an isomorphism.
Since the map
$\mathbf{T}(K)\to\oplus_{y}[\mathbf{T}(\tilde{K})/\mathbf{T}(\tilde{\mathcal{O}}_{X,y})]^{Gal}$
is an epimorphism, hence so is the map $\gamma$. It remains to prove that
$\gamma$ is a monomorphism. To do this it is sufficient to check that for any
point $y\in X^{(1)}$ the map
$\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})\to\mathbf{T}(\tilde{K})/\mathbf{T}(\tilde{\mathcal{O}}_{X,y})$
is a monomorphism. We will write $\epsilon_{y}$ for the latter map. We prove
below that $ker(\epsilon_{y})$ is a torsion group and the group
$\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})$ has no torsion. These two claims
show that the map $\epsilon_{y}$ is injective indeed.
To prove that $ker(\epsilon_{y})$ is a torsion group recall that there are
norm maps
$N_{\tilde{\mathcal{O}}_{X,y}/\mathcal{O}_{X,y}}:\mathbf{T}(\tilde{\mathcal{O}}_{X,y})\to\mathbf{T}(\mathcal{O}_{X,y})$
and $N_{\tilde{K}/K}:\mathbf{T}(\tilde{K})\to\mathbf{T}(K)$ (see Section 4).
These maps induce a homomorphism
$N_{y}:\mathbf{T}(\tilde{K})/\mathbf{T}(\tilde{\mathcal{O}}_{X,y})\to\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})$
such that $N_{y}\circ\epsilon_{y}$ is the multiplication by the degree $d$ of
$\tilde{K}$ over $K$. Thus, $ker(\epsilon_{y})$ is killed by the
multiplication by $d$.
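Spelled out (our own one-line expansion of the argument just given), if the class $\bar{x}\in\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})$ lies in $ker(\epsilon_{y})$, then

```latex
d\cdot\bar{x} \;=\; (N_{y}\circ\epsilon_{y})(\bar{x}) \;=\; N_{y}(0) \;=\; 0,
```

so every element of $ker(\epsilon_{y})$ is killed by $d=[\tilde{K}:K]$.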
We now show that the group $\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})$ has no
torsion. Take an element $a_{K}\in\mathbf{T}(K)$ and suppose that its class in
$\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})$ is a torsion element. Let
$\tilde{a}_{K}$ be the image of $a_{K}$ in $\mathbf{T}(\tilde{K})$. Since
$\mathbf{T}$ splits over $\tilde{K}$ we see that
$\mathbf{T}(\tilde{K})/\mathbf{T}(\tilde{\mathcal{O}}_{X,y})$ is torsion free.
Thus, the class of $\tilde{a}_{K}$ in
$\mathbf{T}(\tilde{K})/\mathbf{T}(\tilde{\mathcal{O}}_{X,y})$ vanishes. So,
there is a unique element $\tilde{a}$ in
$\mathbf{T}(\tilde{\mathcal{O}}_{X,y})$ whose image in $\mathbf{T}(\tilde{K})$
is $\tilde{a}_{K}$. Moreover, $\tilde{a}$ is a $Gal$-invariant element in
$\mathbf{T}(\tilde{\mathcal{O}}_{X,y})$, because $\tilde{a}_{K}$ comes from
$\mathbf{T}(K)$. Since
$\mathbf{T}(\tilde{\mathcal{O}}_{X,y})^{Gal}=\mathbf{T}(\mathcal{O}_{X,y})$,
there is a unique element $a\in\mathbf{T}(\mathcal{O}_{X,y})$ whose image in
$\mathbf{T}(\tilde{\mathcal{O}}_{X,y})$ is $\tilde{a}$. Clearly, the image of
$a$ in $\mathbf{T}(K)$ is the element $a_{K}$. Thus, the class of $a_{K}$ in
$\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})$ vanishes. So, the group
$\mathbf{T}(K)/\mathbf{T}(\mathcal{O}_{X,y})$ is torsion free. The injectivity
of $\epsilon_{y}$ is proved. The surjectivity of the map $\sum
r_{\mathfrak{p}}$ is proved. Theorem 1.1 is proved. ∎
## References
* [Ch-G-P/O] _Chernousov, V.; Gille, P.; Pianzola, A._ A classification of torsors over Laurent polynomial rings, Comment. Math. Helv. 92 (2017), 37–55.
* [C-T/O] _Colliot-Thélène, J.-L.; Ojanguren, M._ Espaces Principaux Homogènes Localement Triviaux, Publ. Math. IHÉS 75 (1992), no. 2, 97–122.
* [C-T/S] _Colliot-Thélène, J.-L.; Sansuc, J.-J._ Principal homogeneous spaces under flasque tori: Applications, Journal of Algebra 106 (1987), 148–205.
* [SGA3] _Demazure, M.; Grothendieck, A._ Schémas en groupes, Lect. Notes Math., vol. 151–153, Springer-Verlag, Berlin-Heidelberg-New York, 1970.
* [D-G] _Demazure, Grothendieck._ Structure des schémas en groupes réductifs. Lect. Notes Math., vol 153.
* [E] _Eisenbud, D._ Commutative algebra with a view toward algebraic geometry. Graduate Texts in Mathematics 150, Springer-Verlag, New York, 1995.
* [FP] _Fedorov, R.; Panin, I._ A proof of Grothendieck–Serre conjecture on principal bundles over a semilocal regular ring containing an infinite field, Publ. Math. Inst. Hautes Etudes Sci., Vol. 122, 2015, pp. 169-193.
http://www.arxiv.org/abs/1211.2678v2.
* [Gr1] _Grothendieck, A._ Torsion homologique et sections rationnelles, in Anneaux de Chow et applications, Séminaire Chevalley, 2-e année, Secrétariat mathématique, Paris, 1958.
* [Gr4] _Grothendieck, A._ Technique de descente et théorèmes d'existence en géométrie algébrique. I. Généralités. Descente par morphismes fidèlement plats. In Séminaire Bourbaki, Vol. 5, Exp. No. 190, pages 299–327. Soc. Math. France, Paris, 1995.
* [EGAIII] _Grothendieck, A._ Éléments de géométrie algébrique (rédigés avec la collaboration de Jean Dieudonné) : III. Étude cohomologique des faisceaux cohérents, Première partie, Publ. Math. IHÉS 11 (1961), 5–167.
* [EGAIV] _Grothendieck, A._ Éléments de géométrie algébrique (rédigés avec la collaboration de Jean Dieudonné) : IV. Étude locale des schémas et des morphismes de schémas, Seconde partie, Publ. Math. IHÉS 24 (1965), 5–231.
* [Ni2] _Nisnevich, Y._ Rationally Trivial Principal Homogeneous Spaces and Arithmetic of Reductive Group Schemes Over Dedekind Rings, C. R. Acad. Sci. Paris, Série I, 299 (1984), no. 1, 5–8.
* [NG] _Guo, N._ The Grothendieck–Serre conjecture over semi-local Dedekind rings, Transform. Groups, to appear; arXiv:1902.02315v2.
* [P1] _Panin, I._ On Grothendieck–Serre’s conjecture concerning principal G -bundles over reductive group schemes: II, Izv. RAN: Ser. Mat, 2016, 80:4, 131–162.
* [P2] _Panin, I._ On Grothendieck–Serre conjecture concerning principal bundles, Proc. Intern. Congress of Math. - 2018, Rio de Janeiro, Vol.1, 201–222.
* [Pan0] _Panin, I._ Nice triples and a moving lemma for motivic spaces, Izvestiya: Mathematics, 2019, 83:4, 796–829; arXiv:1707.01755
* [Pan1] _Panin, I._ Nice triples and Grothendieck—Serre’s conjecture concerning principal $G$-bundles over reductive group schemes, Duke Mathematical Journal, Vol. 168, No. 2, 2019; DOI 10.1215/00127094-2018-0042, arXiv:1707.01756
* [Pan2] _Panin, I._ Two purity theorems and the Grothendieck–Serre conjecture concerning principal G-bundles, Mat. Sb., 2020, 211, No. 12, 123–142 (in Russian); English translation: Sbornik: Mathematics, 2020, 211, in press, DOI:10.1070/SM9393; arXiv:1707.01763.
* [Pan3] _Panin, I._ Proof of Grothendieck–Serre conjecture on principal bundles over regular local rings containing a field, Izv. Math., 84:4 (2020), 780–795.
* [PSV] _Panin, I.; Stavrova, A.; Vavilov, N._ On Grothendieck—Serre’s conjecture concerning principal $G$-bundles over reductive group schemes: I, Compositio Math. 151 (2015), 535–567.
* [PSt] _I. A. Panin and A. K. Stavrova_ On the Grothendieck–Serre conjecture concerning principal G-bundles over semi-local Dedekind domains. Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) Vol.443, 2016, 133–146. arXiv: 1512.00354.
* [Se] _Serre, J.-P._ Espaces fibrés algébriques, in Anneaux de Chow et applications, Séminaire Chevalley, 2-e année, Secrétariat mathématique, Paris, 1958.
* [SV] _Suslin, A., Voevodsky V._ Singular homology of abstract algebraic varieties, Invent. Math. 123 (1996), no. 1, 61–94
# Anisotropy, biased pairings, and the Lefschetz property for pseudomanifolds
and cycles
Karim Alexander Adiprasito Karim Adiprasito, Einstein Institute of
Mathematics, Hebrew University of Jerusalem, 91904 Jerusalem, Israel _and_
Department of Mathematical Sciences, University of Copenhagen, 2100
Copenhagen, Denmark<EMAIL_ADDRESS>, Stavros Argyrios Papadakis
Stavros Argyrios Papadakis, Department of Mathematics, University of Ioannina,
Ioannina, 45110, Greece<EMAIL_ADDRESS>and Vasiliki Petrotou Vasiliki
Petrotou, Department of Mathematics, University of Ioannina, Ioannina, 45110,
Greece<EMAIL_ADDRESS>
###### Abstract.
We prove the hard Lefschetz property for pseudomanifolds and cycles in any
characteristic with respect to an appropriate Artinian reduction. The proof is
a combination of Adiprasito’s biased pairing theory and a generalization of a
formula of Papadakis-Petrotou to arbitrary characteristic. In particular, we
prove the Lefschetz theorem for doubly Cohen Macaulay complexes, solving a
generalization of the g-conjecture due to Stanley. We also provide a
simplified presentation of the characteristic 2 case, and generalize it to
pseudomanifolds and cycles.
###### Key words and phrases:
hard Lefschetz theorem, pseudomanifolds, simplicial cycles, face rings
###### 2010 Mathematics Subject Classification:
Primary 05E45, 13F55; Secondary 32S50, 14M25, 05E40, 52B70, 57Q15
## 1\. Introduction
In [Adi18], Adiprasito introduced the Hall-Laman relations and biased pairing
property to prove the hard Lefschetz theorem for homology spheres and homology
manifolds with respect to generic Artinian reductions and generic degree one
elements acting as Lefschetz operators. They replace, intuitively speaking,
the Hodge-Riemann relations in the Kähler setting, which we know cannot apply
to general spheres outside the realm of polytope boundaries.
The proof relies on the biased pairing property: the property that the
Poincaré pairing does not degenerate at _certain ideals_ , specifically
monomial ideals. This property allows one to replace the problem of proving
the Lefschetz property with an equivalent one that can be chosen to be much
easier, together with the idea of using the linear system as a variable.
However, this replacement process is geometrically rather tedious, is
somewhat tricky to generalize to singular manifolds, and gets more tedious
still for objects beyond mild singularities. So the proof is geometrically
somewhat challenging.
Recently, Papadakis and Petrotou [PP20] have built on the same two ideas, and
studied anisotropy of Stanley-Reisner rings, a stronger (in the sense of more
restrictive) notion than the biased pairing property, demanding that the
Poincaré pairing does not degenerate at _any_ ideal. Clearly, for this, the
field has to be rather special, and cannot be algebraically closed, for
instance. Nevertheless, it implies a partial new proof of the Lefschetz
property for spheres, though only in characteristic 2, by observing that in
certain transcendental extensions, anisotropy is relatively easy to prove by
linear algebra arguments using a formula they had found.
Still, it is quite remarkable that both proofs of the generic Lefschetz
property have their key ideas in common:
* $\circ$
Both proofs make use of the self-pairing in face rings.
* $\circ$
Both make use of infinitesimal deformations of the linear system.
So it is natural to try to bring both together to compensate for each other’s
weaknesses. Our first main result is as follows.
###### Theorem I.
Consider $\mathbbm{k}$ any infinite field, $\mu$ a simplicial cycle of
dimension $d-1$ over $\mathbbm{k}$, and the associated graded commutative face
ring $\mathbbm{k}[|\mu|]$ over its support $|\mu|$.
Then there exists an Artinian reduction $\mathcal{A}(\mu)$, and an element
$\ell$ in $\mathcal{A}^{1}(\mu)$, such that for every
$k\leq\nicefrac{{d}}{{2}}$, we have the _hard Lefschetz property:_ We have an
isomorphism
$\mathcal{B}^{k}(\mu)\ \xrightarrow{\ \cdot\ell^{d-2k}\ }\
\mathcal{B}^{d-k}(\mu).$
Here, $\mathcal{B}$ denotes the Gorensteinification of $\mathcal{A}$, that is,
the quotient of $\mathcal{A}$ by the annihilator of the fundamental class.
This generalizes the generic Lefschetz theorem for spheres (in which case
$\mathcal{B}^{\ast}(\varSigma)=\mathcal{A}^{\ast}(\varSigma)$) and manifolds
in [Adi18] and the characteristic two case for spheres in [PP20]. An important
special case concerns pseudomanifolds. An orientable pseudomanifold is a
pseudomanifold with a nontrivial fundamental class, with respect to a fixed
characteristic. It is connected if that fundamental class is furthermore
unique.
###### Corollary II.
Consider $\mathbbm{k}$ any infinite field, $\varSigma$ any $(d-1)$-dimensional
orientable and connected pseudomanifold over $\mathbbm{k}$, and the associated
graded commutative face ring $\mathbbm{k}[\varSigma]$.
Then there exists an Artinian reduction $\mathcal{A}(\varSigma)$, and an
element $\ell$ in $\mathcal{A}^{1}(\varSigma)$, such that for every
$k\leq\nicefrac{{d}}{{2}}$, we have the _hard Lefschetz property:_ We have an
isomorphism
$\mathcal{B}^{k}(\varSigma)\ \xrightarrow{\ \cdot\ell^{d-2k}\ }\
\mathcal{B}^{d-k}(\varSigma).$
The Lefschetz property has many applications, implying the Grünbaum-Kalai-
Sarkaria conjecture, g-conjecture and many more, but we shall not discuss
these here, and direct the interested reader to [Adi18] for a derivation of
these implications. We will provide some new applications, to doubly Cohen-
Macaulay complexes. Note that orientability is automatic over characteristic
two. Alas, more miracles happen in that characteristic. We have the following
stronger result:
###### Theorem III.
Consider $\mathbbm{k}$ any field of characteristic two, $\mu$ any
$(d-1)$-dimensional cycle over $\mathbbm{k}$, and the associated graded
commutative face ring $\mathbbm{k}[|\mu|]$. Then, for some field extension
$\mathbbm{k}^{\prime}$ of $\mathbbm{k}$, we have an Artinian reduction
$\mathcal{A}(\mu)$ that is anisotropic, i.e. for every nonzero element
$u\in\mathcal{B}^{k}(\mu)$, $k\leq\frac{d}{2}$, we have
$u^{2}\ \neq\ 0.$
That this is stronger than the Lefschetz theorem is a consequence of the
characterization theorem of biased pairing theory, or alternatively the
Kronecker/perturbation lemma, both of which we shall recall. In general
characteristic, we are unable to prove such a statement. Instead, we prove
that every $u$ pairs with another that is sufficiently similar to $u$,
essentially related by a change in coefficients, so that they lie in the same
monomial ideal.
## 2\. Basic Notions
Face rings are the main object of the paper. Our treatment is standard except
for the relative case, in which we follow [AY20]. Fix a field $\mathbbm{k}$.
###### Definition 2.1.
Let $\varDelta$ be a simplicial complex of dimension $d-1$. Define the
polynomial ring $\mathbbm{k}[x_{v}\mid v\in\varDelta^{(0)}]$, with variables
indexed by vertices of $\varDelta$. The non-face ideal $I_{\varDelta}$ of
$\varDelta$ is the ideal generated by all elements of the form $x_{v_{1}}\cdot
x_{v_{2}}\cdot\ldots\cdot x_{v_{j}}$ where $\\{v_{1},\ldots,v_{j}\\}$ is not a
face of $\varDelta$. The face ring of $\varDelta$ is
$\mathbbm{k}[\varDelta]\coloneqq\mathbbm{k}[x_{v}\mid
v\in\varDelta^{(0)}]/I_{\varDelta}.$
If $\Psi=(\varDelta,\Gamma)$ is a relative complex, the relative face module
of $\Psi$ is defined by $I_{\Gamma}/I_{\varDelta}$. This is an ideal of
$\mathbbm{k}[\varDelta]$.
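As a small illustration of the definition (our own toy example, not taken from the literature), let $\varDelta$ be the path on vertices $1,2,3$ with edges $\\{1,2\\}$ and $\\{2,3\\}$. Its only minimal non-face is $\\{1,3\\}$, so

```latex
\mathbbm{k}[\varDelta]\;=\;\mathbbm{k}[x_{1},x_{2},x_{3}]/(x_{1}x_{3}).
```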
To extend our notation, let us recall the definition of a star in a
(relative) simplicial complex $\Psi=(\varDelta,\varGamma)$: The star of a
simplex $\tau$ within $\Psi$ is
$\mathrm{st}_{\tau}\Psi=(\\{\sigma\in\varDelta:\sigma\cup\tau\in\varDelta\\},\\{\sigma\in\varGamma:\sigma\cup\tau\in\varGamma\\}).$
Similarly, the link is
$\mathrm{lk}_{\tau}\Psi=(\\{\sigma\in\varDelta:\sigma\sqcup\tau\in\varDelta\\},\\{\sigma\in\varGamma:\sigma\sqcup\tau\in\varGamma\\}).$
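For instance (again a toy example of ours), take $\varDelta=\partial\Delta^{2}$, the boundary of the triangle on vertices $1,2,3$, regarded as the relative complex $(\varDelta,\varnothing)$. Then

```latex
\mathrm{st}_{\\{1\\}}\varDelta=\bigl\\{\varnothing,\\{1\\},\\{2\\},\\{3\\},\\{1,2\\},\\{1,3\\}\bigr\\},
\qquad
\mathrm{lk}_{\\{1\\}}\varDelta=\bigl\\{\varnothing,\\{2\\},\\{3\\}\bigr\\},
```

since $\\{2,3\\}\sqcup\\{1\\}=\\{1,2,3\\}$ is not a face of $\partial\Delta^{2}$.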
Now, assume that $\mathbbm{k}$ is infinite. Consider an Artinian reduction
$\mathcal{A}^{\ast}(\varDelta)$ of a face ring $\mathbbm{k}[\varDelta]$ with
respect to a linear system of parameters $\Theta$. It is instructive to think
of $\mathcal{A}^{\ast}(\varDelta)$ as a geometric realization of $\varDelta$
in $\mathbbm{k}^{d}$, with the coefficients of $x_{i}$ in $\Theta$ giving the
coordinates of the vertex $i$, recorded in a matrix ${\bf V}$.
By definition, a simplicial cycle $\mu$ of $\varDelta$ over $\mathbbm{k}$ is a
nonzero element of $H_{d-1}(\varDelta;\mathbbm{k})$. Evaluating at $\mu$, we
get a surjective $\mathbbm{k}$-linear map
$H^{d-1}(\varDelta;\mathbbm{k})\twoheadrightarrow\mathbbm{k}.$
Via the canonical isomorphism
$H^{d-1}(\varDelta;\mathbbm{k})\ \cong\ \mathcal{A}^{d}(\varDelta),$
see [Adi18, Section 3.9], [TW00], there is an induced surjective
$\mathbbm{k}$-linear map
$\mu^{\vee}:\mathcal{A}^{d}(\varDelta)\twoheadrightarrow\mathbbm{k}.$
Composing the multiplication map $\;\mathcal{A}^{k}(\varDelta)\ \times\
\mathcal{A}^{d-k}(\varDelta)\to\mathcal{A}^{d}(\varDelta)\;$ with
$\mu^{\vee}$, we get a bilinear pairing
$\mathcal{A}^{k}(\varDelta)\ \times\ \mathcal{A}^{d-k}(\varDelta)\
\to\mathbbm{k}.$
We denote by $L^{k}$ the set of elements of $\mathcal{A}^{k}(\varDelta)$ that
have zero pairing with all elements of $\;\mathcal{A}^{d-k}(\varDelta)$. The
direct sum $\;L^{\ast}=\oplus_{k}L^{k}\;$ is a homogeneous ideal of
$\;\mathcal{A}^{\ast}(\varDelta)$, and we set
$\mathcal{B}^{\ast}_{\mu}(\varDelta)=\mathcal{A}^{\ast}(\varDelta)/L^{\ast}.$
The algebra $\mathcal{A}^{\ast}(\varDelta)$ has been Gorensteinified, for lack
of a better word. This does not depend on the simplicial complex $\varDelta$.
The following is immediate.
###### Proposition 2.2.
Consider a simplicial complex $\varDelta$ as above, and a cycle $\mu$ in it.
Then the restriction
$\mathcal{A}^{\ast}(\varDelta)\ \twoheadrightarrow\ \mathcal{A}^{\ast}(|\mu|)$
where $|\mu|$ denotes the support of $\mu$, that is, the minimal simplicial
complex containing $\mu$, induces an isomorphism of Gorensteinifications. In
particular, we have
$\mathcal{B}^{\ast}_{\mu}(\varDelta)\ \cong\ \mathcal{B}^{\ast}_{\mu}(|\mu|).$
For ease of notation, we shall abbreviate
$\mathcal{B}^{\ast}(\mu):=\mathcal{B}^{\ast}_{\mu}(|\mu|)$.
For simplicial complexes such that
$\;\dim_{\mathbbm{k}}H_{d-1}(\varDelta;\mathbbm{k})=1$, we set $\mu$ to be a
fixed nonzero element of $H_{d-1}(\varDelta;\mathbbm{k})$. This is, in
particular, the case for connected orientable pseudomanifolds.
Notice that $\mathcal{B}^{\ast}(\mu)$ is a Poincaré duality algebra, with last
nonzero graded component in degree $d$.
We will now use the simplicial cycle $\mu$ to define a natural
$\mathbbm{k}$-linear isomorphism
$\mathrm{deg}:\mathcal{B}^{d}(\mu)\ \longrightarrow\ \mathbbm{k}.$
Convention. Note that the degree map is readily described by the coefficients
of the simplicial _cycle_ : It is enough to define it on cardinality $d$ faces
$F$, as face rings are generated by squarefree monomials [Lee96, Section
4.3]. (Although Lee proves this only in characteristic zero, the argument
goes through in general. We shall use his ideas several times in this paper
for general characteristic, provided they apply there. We note that all the
ideas we use readily extend, and without any modification of the arguments.)
And for
a cardinality $d$ face $F$, we have
$\mathrm{deg}(\mathbf{x}_{F})\ =\ \frac{\mu_{F}}{|{\bf V}_{|F}|},$
where $\mu_{F}$ is the oriented coefficient of $\mu$ in $F$, and we fix an
order on the vertices of $\mu$ and compute the sign with respect to the
fundamental class, and the determinant $|{\bf V}_{|F}|$ of the minor ${\bf
V}_{|F}$ of ${\bf V}$ corresponding to $F$.
For instance, if $\mu$ is the fundamental class of a pseudomanifold, we have
canonically
$\mathrm{deg}(\mathbf{x}_{F})\ =\ \frac{\mathrm{sgn}(F)}{|{\bf V}_{|F}|}.$
Two perspectives. There are, of course, two perspectives that we shall make
use of: We can consider $\mathcal{A}^{\ast}(\varDelta)$ over $\mathbbm{k}$ as
functions in ${\bf V}$, including the degree map in particular, or we consider
$\mathcal{A}^{\ast}(\varDelta)$ over $\mathbbm{k}({\bf V})$, the field of
rational functions associated to the transcendental extension of $\mathbbm{k}$
by the entries of ${\bf V}$. It is useful to keep this dichotomy in mind.
## 3\. Application: Doubly Cohen-Macaulay complexes and the cycle filtration
Let us sketch the main application of Theorem I: we show the Lefschetz theorem
for doubly Cohen-Macaulay complexes.
Consider an $m$-dimensional subspace $M$ of $H_{d-1}(\varDelta)$, or dually, a
map
$M^{\vee}:H^{d-1}(\varDelta)\ \twoheadrightarrow\ \mathbbm{k}^{m}.$
Then we have the quotient $\mathcal{B}^{\ast}(M)$ of
$\mathcal{A}^{\ast}(\varDelta)$ induced as before as the quotient by the
annihilator under the pairing
$\mathcal{A}^{k}(\varDelta)\ \times\ \mathcal{A}^{d-k}(\varDelta)\
\longrightarrow\ \mathbbm{k}^{m}.$
###### Corollary 3.1.
$\mathcal{B}^{\ast}(M)$ has the top-heavy Lefschetz property over every
sufficiently large field extension of $\mathbbm{k}$ with respect to a
sufficiently general position of parameters, that is, there is an
$\ell\in\mathcal{B}^{1}(M)$ so that
$\mathcal{B}^{k}(M)\ \xrightarrow{\ \cdot\ell^{d-2k}\ }\ \mathcal{B}^{d-k}(M)$
is injective.
###### Proof.
Consider a generating system $(\mu_{i})_{i\in I}$ for $M$. Then we have an
injection
$\mathcal{B}^{\ast}(M)\ \longrightarrow\ \bigoplus_{i\in
I}\mathcal{B}^{\ast}(\mu_{i}),$
and hence
$\begin{array}{ccc}
\mathcal{B}^{k}(M) & \xrightarrow{\ \cdot\ell^{d-2k}\ } & \mathcal{B}^{d-k}(M)\\[2pt]
\big\downarrow & & \big\downarrow\\[2pt]
\bigoplus_{i\in I}\mathcal{B}^{k}(\mu_{i}) & \xrightarrow{\ \cdot\ell^{d-2k}\ } & \bigoplus_{i\in I}\mathcal{B}^{d-k}(\mu_{i})
\end{array}$
which implies injectivity on the top if it is present on the bottom. ∎
### 3.1. Doubly Cohen-Macaulay complexes
A simplicial complex is called $s$-Cohen-Macaulay if it is Cohen-Macaulay and,
after the removal of any $s-1$ vertices, the complex is still Cohen-Macaulay
of the same dimension.
For instance, a triangulated homology sphere is $2$-Cohen-Macaulay, also
called doubly Cohen-Macaulay [Sta96, Chapter III.3]. Stanley showed that
doubly Cohen-Macaulay complexes are level, that is, for such a complex
$\varSigma$ of dimension $d-1$, the socle is concentrated in degree $d$. In
other words, if $M=\mathcal{A}^{d}(\varSigma)$, we have
$\mathcal{B}^{\ast}(M)\ =\ \mathcal{A}^{\ast}(\varSigma).$
From the last result, we conclude
###### Corollary 3.2.
Consider a doubly Cohen-Macaulay complex $\varSigma$ of dimension $d-1$. Then
$\mathcal{A}^{\ast}(\varSigma)$ has the top-heavy Lefschetz property over any
infinite field extension of $\mathbbm{k}$ with respect to a sufficiently
general position of parameters, that is, there is an
$\ell\in\mathcal{A}^{1}(\varSigma)$ so that
$\mathcal{A}^{k}(\varSigma)\ \xrightarrow{\ \cdot\ell^{d-2k}\ }\
\mathcal{A}^{d-k}(\varSigma)$
is injective.
In particular, the $g$-vector of a doubly Cohen-Macaulay complex is an
$M$-vector [Mac27], since
$\mathcal{A}^{k}(\varSigma)\ \xrightarrow{\ \cdot\ }\
\mathcal{A}^{k+1}(\varSigma)$
is injective for $k\leq\frac{d}{2}$.
## 4\. Biased pairings and Hall Laman relations
Let us recall the basics of biased pairing theory. More depth and breadth is
found in [Adi18, Section 5], but we repeat proofs where they are needed for
our purposes.
### 4.1. Biased Poincaré pairings
Recall: Let $\mu$ be a $(d-1)$-dimensional simplicial cycle, and
$\mathcal{B}^{\ast}(\mu)$ the associated Gorenstein ring. Then we have a
pairing
$\mathcal{B}^{k}(\mu)\ \times\ \mathcal{B}^{d-k}(\mu)\ \longrightarrow\
\mathcal{B}^{d}(\mu)\ \cong\ \mathbb{R}.$
We say that $\mu$ has the biased pairing property in degree $k\leq\frac{d}{2}$
with respect to some proper subcomplex $\varGamma$ of the support, if the
pairing
$\mathcal{K}^{k}(\mu,\varGamma)\ \times\ \mathcal{K}^{d-k}(\mu,\varGamma)\
\longrightarrow\ \mathcal{K}^{d}(\mu,\varGamma)$ (1)
is nondegenerate on the left. Here $\mathcal{K}^{k}(\mu,\varGamma)$ is the
nonface ideal of $\varGamma$ in $\mathcal{B}^{\ast}(\mu)$. Notice:
###### Proposition 4.1.
For an ideal $\mathcal{I}$ in $\mathcal{B}^{\ast}(\mu)$ the following are
equivalent:
1. (1)
The map
$\mathcal{I}\ \longrightarrow\
{\raisebox{3.00003pt}{$\mathcal{B}^{\ast}(\mu)$}\Big{/}\raisebox{-3.00003pt}{$\mathrm{ann}_{\mathcal{B}^{\ast}(\mu)}\mathcal{I}$}}$
is an injection in degree $k$.
2. (2)
The map
$\mathcal{I}\ \longrightarrow\
{\raisebox{3.00003pt}{$\mathcal{B}^{\ast}(\mu)$}\Big{/}\raisebox{-3.00003pt}{$\mathrm{ann}_{\mathcal{B}^{\ast}(\mu)}\mathcal{I}$}}$
is a surjection in degree $d-k$.
3. (3)
For every $x\in\mathcal{I}^{k}$, there exists a $y$ in $\mathcal{I}^{d-k}$
such that $x\cdot y\neq 0$.
4. (4)
$\mathcal{I}$ has the biased pairing property in degree $k$.
We obtain immediately an instrumental way to prove biased Poincaré duality for
monomial ideals.
###### Corollary 4.2.
$\mathcal{K}^{\ast}(\mu,\varGamma)$ has the biased pairing property in degree
$k$ if and only if
$\mathcal{K}^{k}(\mu,\varGamma)\ \longrightarrow\
\mathcal{B}^{k}(\mu)_{\mid\overline{\varGamma}}$
is injective, where $\mathcal{B}^{\ast}(\mu)_{\mid\overline{\varGamma}}$ is
the Poincaré dual of $\mathcal{K}^{k}(\mu,\varGamma)$, that is, the quotient
of $\mathcal{B}^{k}(\mu)$ by elements that pair only trivially with elements
of $\mathcal{K}^{k}(\mu,\varGamma)$.
Let us note separately a different condition.
###### Corollary 4.3.
This is the case if and only if
$\mathcal{K}^{d-k}(\mu,\varGamma)\ \longrightarrow\
\mathcal{B}^{d-k}(\mu)_{\mid\overline{\varGamma}}$
is surjective.
Biased pairings as "rewriting an element in terms of an ideal": We have
several criteria for the biased pairing property: other than the pairing
question itself, Corollary 4.2 asks whether an element of
$\mathcal{K}^{k}(\mu,\varGamma)$ survives rewriting it in its Poincaré dual.
Corollary 4.3 instead asks us to rewrite elements
$\mathcal{B}^{d-k}(\mu)_{\mid\overline{\varGamma}}$ by elements in the ideal
$\mathcal{K}^{d-k}(\mu,\varGamma)$.
### 4.2. Invariance under subdivisions
An important tool is the invariance of biased pairing under subdivisions (and
their inverses). We only need this for odd-dimensional cycles, but it holds
regardless of that restriction, and with respect to the middle pairing, i.e.,
a manifold of dimension $2k-1$, and regarding the pairing in degree $k$.
Recall that a map of simplicial complexes is simplicial if the image of every
simplex lies in a simplex of the image.
A simplicial map $\varphi:\mu^{\prime}\rightarrow\mu$ of simplicial cycles is
a combinatorial subdivision (speak: $\mu^{\prime}$ is a subdivision of $\mu$)
if it is a facewise injective simplicial map of underlying complexes and maps
the fundamental class to the fundamental class. We have the following result
of Barnette:
###### Proposition 4.4 (Barnette, [Bar73]).
Any simplicial $d$-cycle $\mu$ is a subdivision of the boundary $\Delta_{d}$
of the $(d+1)$-dimensional simplex.
Moreover, if $F$ is any facet of $\mu$, and $v$ one of its vertices, then we
can assume the combinatorial subdivision maps $F$ to a facet of $\Delta_{d}$,
and the star of $v$ to the star of some vertex.
For geometric simplicial complexes (that is, with respect to an Artinian
reduction), we require a geometric subdivision to map the linear span of a
simplex to the linear span of the simplex containing it, i.e. if
$\sigma^{\prime}$ is an element of $\mu^{\prime}$, and $\sigma$ is the minimal
face of $\mu$ containing $\varphi(\sigma^{\prime})$ combinatorially, then
in the geometric realization, we require that the linear spans are mapped to
corresponding linear spans, i.e.
$\mathrm{span}\ \varphi(\sigma^{\prime})\ \subseteq\ \mathrm{span}\ \sigma.$
In particular, we do not require the image of a face to lie within the
combinatorial target geometrically, only that they span the same space.
###### Lemma 4.5.
A geometric subdivision $\upvarphi:\mu^{\prime}\rightarrow\mu$ of simplicial
cycles of dimension at least $(2k-1)$ that restricts to the identity on a
common subcomplex $\varGamma$ preserves biased pairings, that is,
$\mathcal{K}^{\ast}(\mu,\varGamma)$ has the biased pairing property (in degree
$k$) if and only if $\mathcal{K}^{\ast}(\mu^{\prime},\varGamma)$ does (in
degree $k$).
###### Proof.
The map induces a pullback
$\varphi^{\ast}:\mathcal{A}^{\ast}(|\mu|)\ \longhookrightarrow\
\mathcal{A}^{\ast}(|\mu^{\prime}|),$
that is compatible with the Poincaré pairing, so it induces a map
$\varphi^{\ast}:\mathcal{B}^{\ast}(\mu)\ \longhookrightarrow\
\mathcal{B}^{\ast}(\mu^{\prime}).$
Let us denote by $G$ the orthogonal complement to the image in
$\mathcal{B}^{k}(\mu^{\prime})$, the image of the Gysin, so that the
decomposition
$\mathcal{B}^{k}(\mu)\ \oplus\ G\ =\ \mathcal{B}^{k}(\mu^{\prime})$
is orthogonal. Notice then that $\varphi^{\ast}$ is the identity on
$\mathcal{A}^{\ast}(|\mu|)$, and its image in $\mathcal{B}^{\ast}(\mu)$, so
$\mathcal{K}^{k}(\mu,\varGamma)\ \oplus\ G\ =\
\mathcal{K}^{k}(\mu^{\prime},\varGamma).$
Hence, it induces an isomorphism on the orthogonal complements of
$\mathcal{K}^{\ast}(\mu,\varGamma)$ resp.
$\mathcal{K}^{\ast}(\mu^{\prime},\varGamma)$. ∎
Hence, if we do not subdivide $\varGamma$, then the biased pairing property is
preserved. If one subdivides $\varGamma$, then only one direction holds.
###### Proposition 4.6.
Consider a geometric subdivision $\upvarphi:\mu^{\prime}\rightarrow\mu$ of
simplicial cycles of dimension at least $(2k-1)$, and $\varGamma$ a subcomplex
that is subdivided to $\varGamma^{\prime}$.
Then $\mathcal{K}^{\ast}(\mu,\varGamma)$ satisfies biased Poincaré duality (in
degree $k$) if $\mathcal{K}^{\ast}(\mu^{\prime},\varGamma^{\prime})$ does (in
degree $k$).
The other direction is no longer true, as is easy to see.
###### Proof.
It is still true that
$\mathcal{B}^{k}(\mu)\ \oplus\ G\ =\ \mathcal{B}^{k}(\mu^{\prime})$
in an orthogonal splitting under the Poincaré pairing. However, this is no
longer so nice when restricting to the ideal, as we can only conclude that
$\mathcal{K}^{k}(\mu,\varGamma)\ \oplus\ G^{\prime}\ =\
\mathcal{K}^{k}(\mu^{\prime},\varGamma^{\prime})$
where $G^{\prime}$ is some subspace of $G$. However, as the pullback map is
compatible with the pairing, the one direction we claimed still holds. ∎
### 4.3. Hall-Laman relations and the suspension trick
Finally, biased Poincaré duality allows us to formulate a Lefschetz property
at ideals. We say that $\mathcal{B}^{\ast}(\mu)$, with socle degree $d$,
satisfies the Hall-Laman relations in degree $k\leq\frac{d}{2}$ and with
respect to an ideal $\mathcal{I}^{\ast}\subset\mathcal{B}^{\ast}(\mu)$ if
there exists an $\ell$ in $\mathcal{B}^{1}(\mu)$, such that the pairing
$\begin{array}{ccccc}\mathcal{I}^{k}&\times&\mathcal{I}^{k}&\longrightarrow&\
\mathcal{I}^{d}\cong\mathbb{R}\\ a&&b&\longmapsto&\
\mathrm{deg}(ab\ell^{d-2k})\end{array}$ (2)
is nondegenerate. Note that the Hall-Laman relations coincide with the biased
pairing property if $k=\frac{d}{2}$.
If we want to prove the Hall-Laman relations for a pair $(\mu,\varGamma)$,
where $\varGamma$ is a subcomplex of $|\mu|$, specifically the Hall-Laman
relations for $\mathcal{K}^{\ast}(\mu,\varGamma)$ or its annihilator, we
proceed using the following trick. Consider the suspension
$\operatorname{susp}\varDelta$ of a simplicial complex $\varDelta$. Label the
two vertices of the suspension $\mathbf{n}$ and $\mathbf{s}$ (for north and
south). Let $\uppi$ denote the projection along $\mathbf{n}$, and let
$\vartheta$ denote the height over that projection, and let $A\ast B$ denote
the free join of two simplicial complexes $A$ and $B$.
###### Lemma 4.7 ([Adi18, Lemma 7.5]).
Considering $\operatorname{susp}\mu$ realized in $\mathbb{R}^{d+1}$, and
$k<\frac{d}{2}$, the following two are equivalent:
1. (1)
The Hall-Laman relations for
$\mathcal{K}^{k+1}(\operatorname{susp}\mu,(\operatorname{susp}\varGamma)\cup\mathbf{s}\ast|\mu|)$
with respect to $x_{\mathbf{n}}$.
2. (2)
The Hall-Laman relations for
$\mathcal{K}^{k}(\uppi\hskip 0.85358pt\mu,\uppi\hskip 0.85358pt\varGamma)$
with respect to $\vartheta$.
###### Proof.
Set $\vartheta=x_{\mathbf{n}}-x_{\mathbf{s}}$ in
$\mathcal{B}^{\ast}(\operatorname{susp}|\mu|)$. Consider then the diagram
$\begin{array}{ccc}
\mathcal{B}^{k}(\uppi\mu) & \xrightarrow{\ \cdot\vartheta^{d-2k}\ } & \mathcal{B}^{d-k}(\uppi\mu)\\[2pt]
\wr & & \wr\\[2pt]
\mathcal{B}^{k+1}(\operatorname{susp}\mu,\mathbf{s}\ast|\mu|) & \xrightarrow{\ \cdot x_{\mathbf{n}}^{d-2k-1}\ } & \mathcal{B}^{d-k}(\operatorname{susp}\mu)_{\mid\overline{\mathbf{s}\ast|\mu|}}
\end{array}$
An isomorphism on the top is then equivalent to an isomorphism of the bottom
map, and the same holds when restricting to ideals and their Poincaré duals. ∎
### 4.4. Lefschetz elements via the perturbation lemma
Let us note that we can use another way to construct Lefschetz elements. For
this, let us recall a Kronecker lemma of [Adi18], see also [Rin13], and of
course Kronecker for the original formulation [Kro90]:
###### Lemma 4.8.
Consider two linear maps
$\upalpha,\upbeta:\mathcal{X}\ \longrightarrow\ \mathcal{Y}$
of two vector spaces $\mathcal{X}$ and $\mathcal{Y}$ over $\mathbb{R}$. Assume
that $\upbeta$ has image transversal to the image of $\upalpha$, that is,
$\upbeta(\mathrm{ker}\upalpha)\ \cap\ \mathrm{im}\upalpha\ =\ \{0\}\ \subset\
\mathcal{Y}.$
Then a generic linear combination $\upalpha\ ``{+}"\ \upbeta$ of $\upalpha$
and $\upbeta$ has kernel
$\mathrm{ker}(\upalpha\ ``{+}"\ \upbeta)\ =\ \mathrm{ker}\upalpha\ \cap\
\mathrm{ker}\upbeta.$
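A toy linear-algebra instance (our own illustration, not from [Adi18]) may
help fix the transversality hypothesis:

```latex
% A toy instance of the Kronecker lemma (illustrative example).
% Take X = Y = R^2 with
\[
\upalpha(x,y) \;=\; (x,0), \qquad \upbeta(x,y) \;=\; (0,y).
\]
% Then ker(alpha) = {(0,y)} is mapped by beta to the y-axis, which meets
% im(alpha) = x-axis only in 0, so the hypothesis holds. For generic t,
\[
(\upalpha + t\upbeta)(x,y) \;=\; (x,\,ty),
\qquad
\mathrm{ker}(\upalpha + t\upbeta) \;=\; \{0\}
\;=\; \mathrm{ker}\,\upalpha \cap \mathrm{ker}\,\upbeta,
\]
% as the lemma predicts.
```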
As observed in [Adi18, Section 6.6], this can be used to iteratively prove the
existence of Lefschetz elements provided that $\mathcal{B}(\varSigma)$
satisfies the biased pairing property in the pullback to any vertex link. Let
us consider for simplicity the case of $\mu$ a cycle of dimension $2k-1$.
The connection is to the classical Hall matching theorem, which constructs
perfect matchings in a discrete setting [Hal35]. This lemma is designed to do
the same in the setting of linear maps. The idea is now to prove the following
transversal prime property: for $W$ a set of vertices in $|\mu|$, we have
$\mathrm{ker}\ ``{\sum_{v\in W}}"\ x_{v}\ =\ \bigcap_{v\in W}\mathrm{ker}\
x_{v}.$
Note: proving the transversal prime property for all vertices together is
equivalent to the Lefschetz isomorphism
$\mathcal{X}\ =\ \mathcal{B}^{k}(\mu)\ \xrightarrow{\ \cdot\ell\ }\
\mathcal{Y}\ =\ \mathcal{B}^{k+1}(\mu)$
for $\ell$ the generic linear combination over all variables. This is because
$\bigcap_{v\text{ vertex of }\mu}\mathrm{ker}\ x_{v}\ =\ 0$
because of Poincaré duality.
Note further, to see how the biased pairing property implies the transversal
property by induction on the size of the set $W$, that when we try to apply
the criterion by multiplying with a new variable $x_{v}$, adding a vertex $v$
to the set $W$, then we are really pulling back to a principal ideal
$\left<x_{v}\right>$ in $\mathcal{B}(\mu)$, and asked to prove that
$x_{v}\mathrm{ker}\ ``{\sum_{v\in W}}"$ and $\mathrm{im}\ ``{\sum_{v\in
W}}"\cap\left<x_{v}\right>$ intersect only in $0$.
Note finally that both spaces are orthogonal complements. This is the case if
and only if the Poincaré pairing is perfect when restricted to either (or
equivalently both) of them.
## 5\. Some useful identities on residues and degrees
We now prove and recall some useful identities on the degree.
### 5.1. The square of a monomial
Now that we know that we can restrict to minimal cycles of odd dimension
$2k-1=d-1$, and biased pairings in them, we can go a little further. The trick
is to consider the degree of an element in $\mathcal{B}^{k}(\varDelta)$ of a
pseudomanifold of dimension $d-1$ as a function of ${\bf V}$, which we think
of as independent variables. Let us start with a formula due to Lee that
describes the coefficients of the fundamental class:
###### Lemma 5.1 ([Lee96, Theorem 11]).
We have
$\mathrm{deg}(\mathbf{x}_{\tau}^{2})({\bf V})\ =\ \sum_{F\text{ facet
containing
}\tau}\mathrm{deg}(\mathbf{x}_{F})\left(\prod_{i\in\tau}[F-i]\right)\cdot\left(\prod_{i\in
F\setminus\tau}[F-i]^{-1}\right)$
where $[F-i]$ is the volume element of $F-i$.
To compute the volume element $[F-i]$, we can fix a general position vector
$v$ that is added as an element to the matrix ${\bf V}$ in the $i$-th column,
and compute the determinant.
Now consider $\mathrm{deg}(\mathbf{x}_{\tau})^{2}({\bf V})$, for $\tau$ a face
of cardinality $k$. We now want to compute the partial differential with
respect to a $(k-1)$-dimensional face $\sigma$ of $\varDelta$. For this, we
pick a basis of $\mathbbm{k}({\bf V})^{d}$ by simply considering the vertices
$\bar{\sigma}_{1},\ \dots,\ \bar{\sigma}_{k}$ of $\sigma$ and the vertices
$\tau_{1},\ \dots,\ \tau_{k}$ of $\tau$. Denote the basis by
$\mathcal{B}_{\sigma}^{\tau}$, simply a labelling of ${\bf
V}_{|\tau\cup\sigma}$. Let us denote, for functions in ${\bf V}$, the partial
differential
$\partial_{\sigma}^{\tau}\ :=\
\mathrm{deg}(\mathbf{x}_{\sigma}\mathbf{x}_{\tau})\partial_{\mathcal{B}_{\sigma}^{\tau}},$
where $\partial_{\mathcal{B}_{\sigma}^{\tau}}$ is the partial differential of
${\bf V}_{\sigma_{1}}$ in the directions of ${\bf V}_{\tau_{1}}$ and ${\bf
V}_{\sigma_{1}}$, the partial differential of ${\bf V}_{\sigma_{2}}$ in
directions ${\bf V}_{\tau_{2}}$ and ${\bf V}_{\sigma_{2}}$ and so on. In other
words, we are interested in the behaviour of a function $f({\bf V})$ when
varying ${\bf V}_{\sigma_{1}}$ in the directions of ${\bf V}_{\tau_{1}}$ and
${\bf V}_{\sigma_{1}}$, varying ${\bf V}_{\sigma_{2}}$ in directions ${\bf
V}_{\tau_{2}}$ and ${\bf V}_{\sigma_{2}}$ etc. We have:
###### Lemma 5.2.
Assume $\sigma$ and $\tau$ are disjoint, but lie in a common face of
$\varDelta$. Then
$\partial_{\sigma}^{\tau}\mathrm{deg}(\mathbf{x}_{\tau}^{2})({\bf V})\ =\
(\mathrm{deg}(\mathbf{x}_{\tau}\mathbf{x}_{\sigma}))^{2}.$
The normalization factor is added to achieve linearity in characteristic $2$,
but is not needed for the proof of anisotropy and Lefschetz property. Let us
briefly note that this formula follows also quite simply from the global
residue formula [Gri76, CCD97], appropriately generalized to the setting of
pseudomanifolds and face rings. This is not hard, but as Lee already provided
a formula in the general setting, we shall work with his formula instead. The
following easy derivation is due to Geva Yashfe:
###### Proof.
Recall the generalized Leibniz formula
$\frac{\partial}{\partial x}\prod f_{i}^{e_{i}}\ =\ \left(\prod
f_{i}^{e_{i}}\right)\left(\sum e_{i}\frac{\frac{\partial}{\partial
x}f_{i}}{f_{i}}\right).$
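As a quick sanity check of this formula, here is a toy computation of our own:

```latex
% Toy check of the generalized Leibniz formula: f_1 = x, f_2 = x+1,
% with exponents e_1 = 2, e_2 = 1.
\[
\frac{\partial}{\partial x}\, x^{2}(x+1)
\;=\; x^{2}(x+1)\left(\frac{2}{x} + \frac{1}{x+1}\right)
\;=\; 2x(x+1) + x^{2}
\;=\; 3x^{2} + 2x,
\]
% which agrees with differentiating x^3 + x^2 directly.
```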
Applying this to Lee’s formula, and differentiating $\sigma_{j}$ in the
direction of $\tau_{j}$, every summand on the right vanishes except for a
summand
$(-1)^{k-1}\frac{[F-\sigma_{j}]}{[F-\tau_{j}]}.$
Differentiating in direction $\sigma_{j}$ just multiplies everything with
$-1$. ∎
We need another, different version, that is similarly simple to prove:
###### Lemma 5.3.
Assume $\sigma$ and $\tau$ are any two faces, and consider a vertex $v$ not in
$\mathrm{st}_{\tau\cup\sigma}\varDelta$. Then
$\mathrm{deg}(\mathbf{x}_{\tau}\mathbf{x}_{\sigma})({\bf V})$ is independent
of $v$.
###### Proof.
This follows directly from Lee’s formula: only facets containing
$\tau\cup\sigma$ contribute, and none of them contains $v$. ∎
### 5.2. The formula for a homogeneous element
Set now $\widetilde{\mathbbm{k}}\ :=\ \mathbbm{k}({\bf V},{\bf V}^{\prime})$,
where $\mathbbm{k}$ is any field, the field of rational functions with
variables ${\bf V}$, as well as a copy ${\bf V}^{\prime}$ of the vertex
coordinates. Consider now an element $u$ of $\mathbbm{k}({\bf
V})^{k}[\varDelta]$ that is the linear combination of squarefree monomials. We
say a face $\sigma$ is compatible with $u$ if
* $\circ$
$\mathrm{st}_{\sigma}\varDelta$ intersects the support $|u|$ of $u$ in a
unique face, denoted by $\tau(u,\sigma)$. The coefficient of $u$ at
$\tau(u,\sigma)$ is $1$.
* $\circ$
Consider a face $\tau^{\prime}$ of $|u|$ that is not $\tau(u,\sigma)$. Then
the star of $\tau^{\prime}$ intersects $\sigma$ trivially, or in a face
$\sigma_{\tau^{\prime}}$ of $\sigma$. Moreover, the coefficient
$u_{\tau^{\prime}}$ of $u$ in $\tau^{\prime}$ vanishes under differentiation
in the direction of vertices of $\sigma\setminus\sigma_{\tau^{\prime}}$.
Consider an element $u^{\prime}$ of $\mathbbm{k}({\bf
V}^{\prime})^{k}[\varDelta]$. We now differentiate $\mathrm{deg}(u\cdot
u^{\prime})$, using the formula of the previous section.
###### Lemma 5.4.
For $u$ compatible with respect to $\sigma$, we have
$\partial_{\sigma}^{\tau(u,\sigma)}\mathrm{deg}(u\cdot u^{\prime})\ =\
\mathrm{deg}((\partial_{\sigma}^{\tau(u,\sigma)}u)\cdot u^{\prime})\ +\
(\mathrm{deg}(x_{\tau(u,\sigma)}\cdot\mathbf{x}_{\sigma}))^{2}\ $ (3)
The idea will be to cleverly associate $u^{\prime}$ to $u$. Consider for
instance the case when $u^{\prime}$ is obtained from $u$ by replacing every
variable in ${\bf V}$ with the corresponding variable in ${\bf V}^{\prime}$.
If we now substitute ${\bf V}^{\prime}\rightarrow{\bf V}$ on both sides of
this equation, then we obtain
###### Corollary 5.5.
$\partial_{\sigma}^{\tau(u,\sigma)}\mathrm{deg}(u\cdot
u^{\prime})_{u=u^{\prime}}\ -\
\mathrm{deg}((\partial_{\sigma}^{\tau(u,\sigma)}u)\cdot u)\ =\
(\mathrm{deg}(x_{\tau(u,\sigma)}\cdot\mathbf{x}_{\sigma}))^{2}\ $ (4)
Key Observation: Note that if $u$ lies in some monomial ideal, then so does
$u^{\prime}$ and $\partial_{\sigma}^{\tau(u,\sigma)}u$, as we only changed the
coefficients of the monomials, and introduced no new monomial. This is rather
marvellous, and informs us how we want to prove the biased pairing property.
## 6\. Proving anisotropy and the Lefschetz property
To finish the proof of the Lefschetz theorem, over $\widetilde{\mathbbm{k}}$,
we need to prove the biased pairing property in degree $k$ for a pair
$(\mu,\varDelta)$, where $\mu$ is a cycle of odd dimension $d-1=2k-1$, and
$\varDelta$ is a codimension 0 subcomplex.
### 6.1. Characteristic $2$, with a bonus of anisotropy
In characteristic $2$, one can prove a stronger statement than just biased
pairing: We can prove that elements in $u\in\mathcal{B}^{k}(\mu)$ over
$\mathbbm{k}({\bf V})$, for $\mu$ a pseudomanifold of dimension $2k-1$, have
$\mathrm{deg}(u^{2})\neq 0$. It illustrates an important principle:
normalization.
Consider $u\in\mathcal{A}^{k}(\mu)$. Consider the pairing with
$\mathbf{x}_{\sigma}$ for some cardinality $k$ face $\sigma$.
Normalization: We may now assume that $u$ is represented as
$\sum\lambda_{\tau}\mathbf{x}_{\tau}$ so that only one $\tau$ of the sum lies
in $\mathrm{st}_{\sigma}\mu$, and may further assume that this $\tau$ lies in
$\operatorname{lk}_{\sigma}\mu$. This is because
$\mathcal{B}^{k}(\mathrm{st}_{\sigma}\mu)\ \cong\
\mathcal{B}^{k}(\operatorname{lk}_{\sigma}\mu)\ \cong\
(\mathbf{x}_{\sigma})\mathcal{B}^{k}(\mu)$
is of dimension one.
Finally observe that
$\displaystyle\partial_{\sigma}^{\tau}\mathrm{deg}(u^{2})\ $ $\displaystyle=\
\partial_{\sigma}^{\tau}\mathrm{deg}(\sum(\lambda_{\tau}\mathbf{x}_{\tau})^{2})$
$\displaystyle=\
\sum\lambda_{\tau}^{2}\partial_{\sigma}^{\tau}\mathrm{deg}(\mathbf{x}_{\tau}^{2})$
$\displaystyle=\
\sum\lambda_{\tau}^{2}\mathrm{deg}(\mathbf{x}_{\tau}\mathbf{x}_{\sigma})^{2}$
$\displaystyle=\
\mathrm{deg}(\sum\lambda_{\tau}\mathbf{x}_{\tau}\mathbf{x}_{\sigma})^{2}\
=\ \mathrm{deg}(\mathbf{x}_{\sigma}u)^{2}$
in characteristic $2$. We conclude:
###### Proposition 6.1.
We have
$\partial_{\sigma}^{\tau}\mathrm{deg}(u^{2})\ =\
\mathrm{deg}(\mathbf{x}_{\sigma}u)^{2}$
Note that this holds regardless of the normalization, and in particular also
of $\tau$, by linearity of the differential in characteristic $2$. On a low-
brow level, this is a consequence of the vanishing of the diagonal terms in
the Hessian in characteristic $2$. This finishes the proof of the biased
pairing property, as every $u$ must pair with some $\mathbf{x}_{\sigma}$ by
Poincaré duality, so every element must pair with itself.
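The computation above silently uses the characteristic-$2$ identity that
squaring is additive; we record it for clarity:

```latex
% In characteristic 2, squaring is additive (the "Freshman's dream"):
\[
\Big(\sum_{\tau}\lambda_{\tau}\mathbf{x}_{\tau}\Big)^{2}
\;=\; \sum_{\tau}\lambda_{\tau}^{2}\mathbf{x}_{\tau}^{2},
\]
% since every cross term carries a factor of 2 and hence vanishes.
% This justifies passing between deg(u^2) and the sum of the individual
% squares, and recombining the squares in the final step.
```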
### 6.2. General characteristic, middle isomorphism
Let us consider the cycle $\mu^{\prime}$ of dimension $2k-2$ in
$\widetilde{\mathbbm{k}}^{[2k-1]}$, where $[j]=\\{1,\cdots,j\\}$, and assume
we want to prove the middle isomorphism of the Lefschetz property. Following
Lemma 4.7, we can equivalently prove the biased pairing property in
$\mu=\operatorname{susp}\mu^{\prime}$ in $\widetilde{\mathbbm{k}}^{[2k]}$ with
respect to the subcomplex $\varGamma\cup\mathbf{s}\ast|\mu^{\prime}|$ after we
lifted $\mu^{\prime}$ according to some height function into
$\widetilde{\mathbbm{k}}^{[2k]}$.
This is less beautiful: we do not have a differential formula that is
independent of the presentation of an element in the reduced face ring.
Instead, we shall need to construct a special presentation, and argue that
there is an element in the ideal that pairs with it. What is more, that
element will depend on $\sigma$, where $\mathbf{x}_{\sigma}$ pairs with the
element in question (which again exists by Poincaré duality).
Idea: There are several ways to achieve this (a previous version of this paper
provided a different argument, for instance), but we will make use of a
particular idea here: McMullen’s idea of tracing the Lefschetz
property along subdivisions and inverses, an influential idea that was
previously thought to be difficult to make work in this setting. However, the
idea will come with a twist, in that, essentially, many incompatible
subdivisions have to be taken care of simultaneously, and are nonlocal.
Also, instead of showing that subdivisions can be used to preserve the
Lefschetz property directly, we use the detour over biased pairings.
Setup: Following Lemma 4.7, we are tasked to prove the biased pairing property
for a pair $\mathcal{K}^{\ast}(\mu,\varGamma)$. Consider an element $u$
generated by squarefree monomials supported not in $\varGamma$, i.e., it is an
element of $\widetilde{\mathbbm{k}}[\mu,\varGamma]$. Such an element may not
be compatible. Hence, the idea is to find an equivalent representation of the
class $[u]$ of $u$ in $\mathcal{K}^{k}(\mu,\varGamma)$, and use the
subdivision lemma and normalizations to replace it by an equivalent, but
compatible element.
Let us now consider a class $[u]$ in $\mathcal{K}^{k}(\mu,\varGamma)$,
represented by an element $u$ in the ideal
$\left<\mathbf{x}_{\mathbf{n}}\right>\subset\widetilde{\mathbbm{k}}[|\mu|]$ as
a sum
$u\ =\ \sum\lambda_{\tau}\mathbf{x}_{\tau},$
where the sum is over faces $\tau$ with common vertex $\mathbf{n}$.
We want to prove that $u$ pairs nontrivially with some other element of
$\mathcal{K}^{k}(\mu,\varGamma)$. If it does so with some element
$\mathbf{x}_{\sigma}$, associated to a cardinality $k$ face $\sigma$
containing $\mathbf{n}$, then we are done. Notice further that we may assume
that the coefficients $\lambda_{\tau}$ are from $\mathbbm{k}({\bf V})$, as the
variables ${\bf V}^{\prime}$ are transcendental over that field.
On the other hand, it has to pair with _some_ face of $\mu$ by Poincaré
duality. It has to be one of the faces $\sigma$ of $\varSigma^{\prime}$, as
all other faces annihilate $\mathcal{K}^{\ast}(\mu,\varGamma)$.
Three steps: We now proceed to find an element to pair $u$ with. This
procedure depends on $\sigma$.
Step 1. Normalization: First, we use the following observation:
###### Observation 6.2.
The quotient
$\mathcal{K}^{k}(\mathrm{st}_{\sigma}\mu,\mathrm{st}_{\sigma}\varGamma)\ =\
\mathcal{K}^{k}(\mu,\varGamma)/\mathrm{ann}_{\mathbf{x}_{\sigma}}$ of
$\mathcal{K}^{k}(\mu,\varGamma)$ is one-dimensional.
In particular, we can assume that the restriction of $u$ to
$\mathrm{st}_{\sigma}\mu$ is supported in a single face $\tau$ of cardinality
$k$.
Note that we can also normalize $u$, so that the coefficient of $u$ in that
face is $1$.
Step 2. Specialization and unsubdivision: Now, we shall do something
counterintuitive to our philosophy: We shall specialize the coordinates to a
very special position. This is counter to our credo, that we have to be as
generic as can be, but allows us to do a few interesting things.
Why can we do this? A rational function that does not vanish under some
specialization of its variables does not vanish.
We now specialize and require that all the vertices of $|\mu^{\prime}|$ lie in
$\widetilde{\mathbbm{k}}^{[2k-1]}$. Moreover, we can use Barnette’s
Proposition 4.4 to assume that $|\mu^{\prime}|$ is geometrically the
subdivision of the boundary of a $(2k-1)$-simplex, and that
$\sigma\cup\tau\setminus\mathbf{n}$ is a facet of that simplex.
###### Remark 6.3 (Desingularize).
There is one point we have to take care of here: the specialization might
singularize $u$. In that case, we can follow the discussion that follows, and
take care that everything works at poles by careful calculation. The other
option is this: We append another variable, say $\delta$, to
$\widetilde{\mathbbm{k}}$, passing to the field extension
$\widetilde{\mathbbm{k}}(\delta)$. This additional variable is then used to
desingularize $u$, by perturbing it into a generic direction, so that we now
have a function $u(\delta)$ with $u(0)=u$.
So now we are stuck with $u$ in a subdivision of
$(\mathbf{n}\ast\Delta_{2k-2},\Delta_{2k-2})$. Consider its projection along
the Gysin to an image under the pullback map, and call the preimage
$\overline{u}$. We observed already that if $\overline{u}$ pairs with an
element in the ideal, then so is $u$; in fact we can simply pair $u$ with the
pullback of whatever form $\upsilon$ the form $\overline{u}$ pairs with, as
$\mathrm{deg}(\overline{u},\upsilon)\ =\ \mathrm{deg}(u,\upsilon)$
Step 3. Normalization again: Consider now $\overline{u}$, and assume that it
pairs trivially with all cardinality $k$ faces of $\overline{\mu}$ that
intersect $\sigma$ and contain $\mathbf{n}$. We claim that this can essentially only
happen if $u$ is compatible, though with a caveat. Consider first a facet
$\sigma^{\prime}$ of $\sigma$. Note that since we are in the simplex, we can
assume that there is a unique face $A$ outside of $\tau$ that is in the star
of $\sigma^{\prime}$ and where the form is nontrivial.
Let us consider the coefficient in that face. Without saying anything further,
it is not restricted, but we may assume that $u$ _does not_ pair nontrivially
with $\mathbf{x}_{\sigma^{\prime}\ast\mathbf{n}}$ (otherwise we are done, as
$\mathbf{x}_{\sigma^{\prime}\ast\mathbf{n}}$ lies in
$\mathcal{A}(\mathbf{n}\ast\Delta_{2k-2},\Delta_{2k-2})$). The pairing then
only involves the newly constructed face $A$ and the face $\tau(u,\sigma)$. It
follows by Lemma 5.1 that the coefficient in $A$ vanishes under
differentiation in the direction of vertices of $\sigma$. Now, we repeat the
same process for all nontrivial faces ${\sigma^{\prime\prime}}$ of $\sigma$,
using pairing with the degree $k$ element
$\mathbf{x}_{\sigma^{\prime\prime}}\cdot
x_{\mathbf{n}}^{k-\operatorname{card}\sigma^{\prime\prime}}$. Hence, $\overline{u}$
can be assumed to be compatible and by Corollary 5.5 we have that one of the
two terms,
$\mathrm{deg}(\overline{u}\cdot\overline{u}^{\prime})\ \quad\text{or}\
\quad\mathrm{deg}((\partial_{\sigma}^{\tau(\overline{u},\sigma)}u)\cdot\overline{u})$
is nontrivial. Note that this extends to the desingularization, as the
combination of both terms is a rational function that, as we set $\delta$ to
$0$, is a nontrivial function (after all, equalling to the degree of
$x_{\tau}x_{\sigma}$, squared). Which proves biased pairing for
$\overline{u}$, and hence for the original $u$. ∎
###### Remark 6.4.
Let us sketch another way to construct the biased pair for $u$.
First, we can assume that $u$ is supported in a unique face of
$\mathrm{st}_{\tau}-\tau$ (there may be more faces in the support outside of
the star). We can then renormalize so that $u$ does not become singular in the
specialization, by multiplying with a scalar that depends only on
indeterminates we do not differentiate by $\partial_{\sigma}^{\tau}$. Issue
that arises: the coefficient may go to $0$ on $\tau$.
However, we can choose $u^{\prime\prime}$ to be $u^{\prime}+tx_{\tau}$, $t$
sufficiently large so that $(u^{\prime}+tx_{\tau})u$ has coefficient $1$ on
$\mathbf{x}_{\tau}^{2}$. It again becomes clear that
$\mathrm{deg}(\overline{u}\cdot\overline{u}^{\prime\prime})$ or
$\mathrm{deg}((\partial_{\sigma}^{\tau(\overline{u},\sigma)}\overline{u})\cdot\overline{u})$
are nontrivial.
### 6.3. General characteristic, general isomorphism
In the case of general isomorphisms, say, the isomorphism
$\mathcal{B}(\mu)^{k}\ \xrightarrow{\ \cdot\ell\ }\ \mathcal{B}(\mu)^{d-k}$
we are tasked, instead of considering the first suspension and ultimately the
cone over a simplex, with considering the $(d-2k-1)$-fold suspension of a
simplex, and the cone over it. The rest of the argument
remains the same.
## 7\. Open problems
Concerning problems surrounding the $g$-theorem, there is a stronger
conjecture available: it has been conjectured that, more strongly, for
$s$-Cohen-Macaulay complexes, we have this injection for
$k\leq\frac{d+s}{2}-1$, but our approach offers no clue how to prove it. In
particular, the combinatorial conjecture is:
###### Conjecture 7.1.
Consider an $s$-Cohen-Macaulay complex of dimension $d-1$. Then the vector
$(h_{0},\ h_{1}-h_{0},\ \dots,\
h_{\lceil\frac{d+s}{2}\rceil}-h_{\lceil\frac{d+s}{2}\rceil-1})$
is an $M$-vector.
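Macaulay's numerical criterion [Mac27] makes the $M$-vector condition in Conjecture 7.1 effectively checkable: a sequence $(v_{0},v_{1},\dots)$ of nonnegative integers is an $M$-vector iff $v_{0}=1$ and $v_{i+1}\leq v_{i}^{\langle i\rangle}$ for $i\geq 1$, where $v_{i}^{\langle i\rangle}$ is computed from the greedy $i$-binomial representation. The following stdlib-only Python sketch (all function names are ours) implements this criterion.

```python
from math import comb

def macaulay_rep(n, i):
    """Greedy i-binomial representation of n > 0:
    n = C(a_i, i) + C(a_{i-1}, i-1) + ... with a_i > a_{i-1} > ... """
    rep = []
    while n > 0 and i > 0:
        a = i
        while comb(a + 1, i) <= n:
            a += 1
        rep.append((a, i))
        n -= comb(a, i)
        i -= 1
    return rep

def macaulay_bound(n, i):
    """n^<i>: maximal admissible value in degree i+1 given n in degree i."""
    if n == 0:
        return 0
    return sum(comb(a + 1, j + 1) for (a, j) in macaulay_rep(n, i))

def is_m_vector(v):
    """Macaulay's criterion: v is the Hilbert function of a standard graded
    Artinian algebra iff v[0] == 1 and v[i+1] <= v[i]^<i> for i >= 1
    (v[1] may be any nonnegative integer)."""
    if not v or v[0] != 1 or any(x < 0 for x in v):
        return False
    return all(v[i + 1] <= macaulay_bound(v[i], i) for i in range(1, len(v) - 1))

# For example, (1, 4, 10, 20) is an M-vector, while (1, 2, 4) is not,
# since 2^<1> = C(3, 2) = 3 < 4.
assert is_m_vector([1, 4, 10, 20])
assert not is_m_vector([1, 2, 4])
```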
It is equally an open problem to extend the anisotropy to general
characteristic. We conjecture this is true, but have no good idea for an
approach.
###### Conjecture 7.2.
Consider $\mathbbm{k}$ any field, $\varSigma$ any $(d-1)$-dimensional
pseudomanifold over $\mathbbm{k}$, and the associated graded commutative face
ring $\mathbbm{k}[\varSigma]$. Then, for some field extension
$\mathbbm{k}^{\prime}$ of $\mathbbm{k}$, we have an Artinian reduction
$\mathcal{A}(\varSigma)$ that is totally anisotropic, i.e. for every nonzero
element $u\in\mathcal{A}^{k}(\varSigma)$, $k\leq\frac{d}{2}$, we have
$u^{2}\ \neq\ 0.$
Of course, special cases over characteristic $0$, such as the anisotropy of
integral homology spheres over $\mathbbm{k}=\mathbb{Q}$, can be established
using standard mixed characteristic tricks and reduction to characteristic
$2$, but more seems difficult. A generalization of the anisotropy to
characteristic $p$ that should be more immediate is the following:
###### Conjecture 7.3.
Consider $\mathbbm{k}$ any field of characteristic $p$, $\varSigma$ any
$(d-1)$-dimensional pseudomanifold over $\mathbbm{k}$, and the associated
graded commutative face ring $\mathbbm{k}[\varSigma]$. Then, for some field
extension $\mathbbm{k}^{\prime}$ of $\mathbbm{k}$, we have an Artinian
reduction $\mathcal{A}(\varSigma)$ that is totally $p$-anisotropic, i.e. for
every nonzero element $u\in\mathcal{A}^{k}(\varSigma)$, $k\leq\frac{d}{p}$, we
have
$u^{p}\ \neq\ 0.$
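As a toy remark on why the $p$-th power is the natural replacement for the square in characteristic $p$: the Frobenius map $u\mapsto u^{p}$ is additive there, because every middle binomial coefficient $\binom{p}{k}$, $0<k<p$, is divisible by $p$. A quick stdlib-only Python check (the function name is ours):

```python
from math import comb

def middle_binomials_divisible(p):
    """(u + v)^p = u^p + v^p in characteristic p, since p | C(p, k)
    for all 0 < k < p when p is prime."""
    return all(comb(p, k) % p == 0 for k in range(1, p))

assert all(middle_binomials_divisible(p) for p in (2, 3, 5, 7, 11, 13))
```

Note the divisibility fails for composite moduli, e.g. $\binom{4}{2}=6$ is not divisible by $4$, so the additivity is a genuinely prime-characteristic phenomenon.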
Finally, both [Adi18] and the present work only prove the existence of
Lefschetz elements on a _generic_ linear system of parameters, and it would be
interesting to have specific ones. One candidate is, in our opinion, the
moment curve $(t,t^{2},\dots,t^{d})$.
###### Open Problem 7.4.
Do distinct points on the moment curve provide a linear system with the
Lefschetz property (on any (PL)-sphere, pseudomanifold or cycle)?
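One reason the moment curve is a natural candidate is general position: any $d$ points with distinct nonzero parameters $t_{1},\dots,t_{d}$ are linearly independent, since the matrix of points factors as $\prod_{i}t_{i}$ times a Vandermonde determinant $\prod_{i<j}(t_{j}-t_{i})$. The stdlib-only Python check below (function names are ours) verifies this factorization exactly for one choice of parameters; it says nothing about the Lefschetz property itself.

```python
from fractions import Fraction
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(m):
    """Exact determinant by the Leibniz formula (fine for small matrices)."""
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        term = Fraction(perm_sign(p))
        for i in range(n):
            term *= m[i][p[i]]
        total += term
    return total

def moment_rows(ts, d):
    """Row i is the point (t_i, t_i^2, ..., t_i^d) on the moment curve."""
    return [[Fraction(t) ** k for k in range(1, d + 1)] for t in ts]

ts, d = [1, 2, 3, 5], 4
M = moment_rows(ts, d)
# det(M) = (prod of t_i) * Vandermonde(t_1, ..., t_d), hence nonzero
# whenever the parameters are distinct and nonzero.
assert det(M) != 0
```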
Acknowledgements. We thank Christos Athanasiadis, David Eisenbud, Gil Kalai,
Eran Nevo, Richard Stanley, Geva Yashfe and Hailun Zheng for some enlightening
conversations, as well as helpful comments on the first version. K. A. was
supported by the European Research Council under the European Union’s Seventh
Framework Programme ERC Grant agreement ERC StG 716424 - CASe and the Israel
Science Foundation under ISF Grant 1050/16. We benefited from experiments with
the computer algebra program Macaulay2 [GS]. This work is part of the Univ. of
Ioannina Ph.D. thesis of V. P., financially supported by the Special Account
for Research Funding (E.L.K.E.) of the University of Ioannina under the
program with code number 82561 and title _Program of financial support for
Ph.D. students and postdoctoral researchers_.
## References
* [Adi18] Karim Adiprasito, _Combinatorial Lefschetz theorems beyond positivity_ , 2018, arXiv:1812.10454.
* [AY20] Karim Adiprasito and Geva Yashfe, _The Partition Complex: an invitation to combinatorial commutative algebra_ , 2020, Surveys in Combinatorics, British Combinatorial Committee, arXiv:2008.01044.
* [Bar73] David Barnette, _Graph theorems for manifolds_ , Isr. J. Math. 16 (1973), 62–72 (English).
* [CCD97] Eduardo Cattani, David Cox, and Alicia Dickenstein, _Residues in toric varieties_ , Compos. Math. 108 (1997), no. 1, 35–76 (English).
* [GS] Daniel R. Grayson and Michael E. Stillman, _Macaulay2, a software system for research in algebraic geometry_ , Available at http://www.math.uiuc.edu/Macaulay2/.
* [Gri76] Phillip A. Griffiths, _Variations on a theorem of Abel_ , Invent. Math. 35 (1976), 321–390 (English).
* [Hal35] Philip Hall, _On representatives of subsets._ , J. Lond. Math. Soc. 10 (1935), 26–30.
* [Kro90] Leopold Kronecker, _Algebraische Reduction der Scharen bilinearer Formen_ , Berl. Ber. 1890 (1890), 1225–1237 (German).
* [Lee96] Carl William Lee, _P.L.-spheres, convex polytopes, and stress._ , Discrete Comput. Geom. 15 (1996), no. 4, 389–421.
* [Mac27] F. S. Macaulay, _Some properties of enumeration in the theory of modular systems._ , Proc. Lond. Math. Soc. (2) 26 (1927), 531–555.
* [PP20] Stavros Argyrios Papadakis and Vasiliki Petrotou, _The characteristic 2 anisotropicity of simplicial spheres_ , 2020, arXiv:2012.09815.
* [Rin13] Claus Michael Ringel, _Indecomposable representations of the Kronecker quivers._ , Proc. Am. Math. Soc. 141 (2013), no. 1, 115–121.
* [Sta96] Richard Stanley, _Combinatorics and commutative algebra_ , second ed., Progress in Mathematics, vol. 41, Birkhäuser Boston Inc., Boston, MA, 1996.
* [TW00] Tiong-Seng Tay and Walter Whiteley, _A homological interpretation of skeletal ridigity_ , Adv. in Appl. Math. 25 (2000), no. 1, 102–151.